These recommendations update information on the vaccine and antiviral agents available for controlling influenza during the 1996-97 influenza season (superseding MMWR 1995;44(No. RR-3):1-22). The principal changes include information about a) the influenza virus strains included in the trivalent vaccine for 1996-97 and b) extension of the optimal time for influenza vaccination campaigns for persons in high-risk groups.

# INTRODUCTION

Influenza A viruses are classified into subtypes on the basis of two surface antigens: hemagglutinin (H) and neuraminidase (N). Three subtypes of hemagglutinin (H1, H2, and H3) and two subtypes of neuraminidase (N1 and N2) are recognized among influenza A viruses that have caused widespread human disease. Immunity to these antigens, especially to the hemagglutinin, reduces the likelihood of infection and lessens the severity of disease if infection occurs. Infection with a virus of one subtype confers little or no protection against viruses of other subtypes. Furthermore, over time, antigenic variation (antigenic drift) within a subtype may be so marked that infection or vaccination with one strain may not induce immunity to distantly related strains of the same subtype. Although influenza B viruses have shown more antigenic stability than influenza A viruses, antigenic variation does occur. For these reasons, major epidemics of respiratory disease caused by new variants of influenza continue to occur. The antigenic characteristics of circulating strains provide the basis for selecting the virus strains included in each year's vaccine.

Typical influenza illness is characterized by abrupt onset of fever, myalgia, sore throat, and nonproductive cough. Unlike other common respiratory illnesses, influenza can cause severe malaise lasting several days. More severe illness can result if either primary influenza pneumonia or secondary bacterial pneumonia occurs. During influenza epidemics, high attack rates of acute illness result both in increased numbers of visits to physicians' offices, walk-in clinics, and emergency rooms and in increased hospitalizations for management of lower respiratory tract complications.

Elderly persons and persons with underlying health problems are at increased risk for complications of influenza. If they become ill with influenza, such members of high-risk groups (see Groups at Increased Risk for Influenza-Related Complications under Target Groups for Special Vaccination Programs) are more likely than the general population to require hospitalization. During major epidemics, hospitalization rates for persons at high risk may increase twofold to fivefold, depending on the age group. Previously healthy children and younger adults may also require hospitalization for influenza-related complications, but the relative increase in their hospitalization rates is less than for persons who belong to high-risk groups.

An increase in mortality further indicates the impact of influenza epidemics. Increased mortality results not only from influenza and pneumonia but also from cardiopulmonary and other chronic diseases that can be exacerbated by influenza. It is estimated that >20,000 influenza-associated deaths occurred during each of 10 different U.S. epidemics from 1972-73 to 1990-91, and >40,000 influenza-associated deaths occurred during each of three of these epidemics. More than 90% of the deaths attributed to pneumonia and influenza occurred among persons ≥65 years of age. Because the proportion of elderly persons in the U.S. population is increasing and because age and its associated chronic diseases are risk factors for severe influenza illness, the number of deaths from influenza can be expected to increase unless control measures are implemented more vigorously. The number of persons <65 years of age at increased risk for influenza-related complications is also increasing. Better survival rates for organ-transplant recipients, the success of neonatal intensive-care units, and better management of diseases such as cystic fibrosis and acquired immunodeficiency syndrome (AIDS) result in a higher survival rate for younger persons at high risk.

# OPTIONS FOR THE CONTROL OF INFLUENZA

In the United States, two measures are available that can reduce the impact of influenza: immunoprophylaxis with inactivated (killed-virus) vaccine and chemoprophylaxis or therapy with an influenza-specific antiviral drug (amantadine or rimantadine). Vaccination of persons at high risk each year before the influenza season is currently the most effective measure for reducing the impact of influenza. Vaccination can be highly cost effective when it is a) directed at persons who are most likely to experience complications or who are at increased risk for exposure and b) administered to persons at high risk during hospitalizations or routine health-care visits before the influenza season, thus making special visits to physicians' offices or clinics unnecessary. When vaccine and epidemic strains of virus are well matched, achieving high vaccination rates among persons living in closed settings (e.g., nursing homes and other chronic-care facilities) can reduce the risk for outbreaks by inducing herd immunity.

# INACTIVATED VACCINE FOR INFLUENZA A AND B

Each year's influenza vaccine contains three virus strains (usually two type A and one type B) representing the influenza viruses that are likely to circulate in the United States in the upcoming winter. The vaccine is made from highly purified, egg-grown viruses that have been made noninfectious (inactivated). Influenza vaccine rarely causes systemic or febrile reactions. Whole-virus, subvirion, and purified-surface-antigen preparations are available. To minimize febrile reactions, only subvirion or purified-surface-antigen preparations should be used for children; any of the preparations may be used for adults.

Most vaccinated children and young adults develop high postvaccination hemagglutination-inhibition antibody titers. These antibody titers are protective against illness caused by strains similar to those in the vaccine or the related variants that may emerge during outbreak periods. Elderly persons and persons with certain chronic diseases may develop lower postvaccination antibody titers than healthy young adults and thus may remain susceptible to influenza-related upper respiratory tract infection. However, even if such persons develop influenza illness despite vaccination, the vaccine can be effective in preventing lower respiratory tract involvement or other secondary complications, thereby reducing the risk for hospitalization and death. The effectiveness of influenza vaccine in preventing or attenuating illness varies, depending primarily on the age and immunocompetence of the vaccine recipient and the degree of similarity between the virus strains included in the vaccine and those that circulate during the influenza season.
When there is a good match between vaccine and circulating viruses, influenza vaccine has been shown to prevent illness in approximately 70% of healthy persons <65 years of age. In these circumstances, studies have also indicated that the effectiveness of influenza vaccine in preventing hospitalization for pneumonia and influenza among elderly persons living in settings other than nursing homes or similar chronic-care facilities ranges from 30% to 70%. Among elderly persons residing in nursing homes, influenza vaccine is most effective in preventing severe illness, secondary complications, and death. Studies of this population have indicated that the vaccine can be 50%-60% effective in preventing hospitalization and pneumonia and 80% effective in preventing death, even though efficacy in preventing influenza illness may often be in the range of 30%-40% among the frail elderly. Achieving a high rate of vaccination among nursing home residents can reduce the spread of infection in a facility, thus preventing disease through herd immunity.

# RECOMMENDATIONS FOR THE USE OF INFLUENZA VACCINE

Influenza vaccine is strongly recommended for any person ≥6 months of age who, because of age or underlying medical condition, is at increased risk for complications of influenza. Health-care workers and others (including household members) in close contact with persons in high-risk groups should also be vaccinated. In addition, influenza vaccine may be administered to any person who wishes to reduce the chance of becoming infected with influenza. The trivalent influenza vaccine prepared for the 1996-97 season will include A/Texas/36/91-like (H1N1), A/Wuhan/359/95-like (H3N2), and B/Beijing/184/93-like hemagglutinin antigens. For both the A/Wuhan/359/95-like and B/Beijing/184/93-like antigens, U.S. manufacturers will use the antigenically equivalent strains A/Nanchang/933/95 (H3N2) and B/Harbin/07/94 because of their growth properties. Guidelines for the use of vaccine among certain patient populations follow; dosage recommendations are also summarized (Table 1).

Table 1 footnotes:

† Because of the lower potential for causing febrile reactions, only split-virus vaccines should be used for children. They may be labeled as "split," "subvirion," or "purified-surface-antigen" vaccine. Immunogenicity and side effects of split- and whole-virus vaccines are similar among adults when vaccines are administered at the recommended dosage.

§ The recommended site of vaccination is the deltoid muscle for adults and older children. The preferred site for infants and young children is the anterolateral aspect of the thigh.

¶ Two doses administered at least 1 month apart are recommended for children <9 years of age who are receiving influenza vaccine for the first time.

Although the current influenza vaccine can contain one or more of the antigens administered in previous years, annual vaccination with the current vaccine is necessary because immunity declines in the year following vaccination. Because the 1996-97 vaccine differs from the 1995-96 vaccine, supplies of 1995-96 vaccine should not be administered to provide protection for the 1996-97 influenza season. Two doses administered at least 1 month apart may be required for satisfactory antibody responses among previously unvaccinated children <9 years of age; however, studies of vaccines similar to those being used currently have indicated little or no improvement in antibody response when a second dose is administered to adults during the same season.

During recent decades, data on influenza vaccine immunogenicity and side effects have been obtained for intramuscularly administered vaccine. Because recent influenza vaccines have not been adequately evaluated when administered by other routes, the intramuscular route is recommended. Adults and older children should be vaccinated in the deltoid muscle and infants and young children in the anterolateral aspect of the thigh.

# TARGET GROUPS FOR SPECIAL VACCINATION PROGRAMS

To maximize protection of high-risk persons, they and their close contacts should be targeted for organized vaccination programs.

# Groups at Increased Risk for Influenza-Related Complications

- persons ≥65 years of age;
- residents of nursing homes and other chronic-care facilities that house persons of any age who have chronic medical conditions;
- adults and children who have chronic disorders of the pulmonary or cardiovascular systems, including children who have asthma;
- adults and children who have required regular medical follow-up or hospitalization during the preceding year because of chronic metabolic diseases (including diabetes mellitus), renal dysfunction, hemoglobinopathies, or immunosuppression; and
- children and teenagers (6 months-18 years of age) who are receiving long-term aspirin therapy and therefore might be at risk for developing Reye syndrome after influenza.

# Groups that Can Transmit Influenza to Persons at High Risk

Persons who are clinically or subclinically infected and who care for or live with members of high-risk groups can transmit influenza virus to them. Some persons at high risk (e.g., the elderly, transplant recipients, and persons with AIDS) can have a low antibody response to influenza vaccine. Efforts to protect these members of high-risk groups against influenza might be improved by reducing the likelihood of influenza exposure from their caregivers. Therefore, the following groups should be vaccinated:

- physicians, nurses, and other personnel in both hospital and outpatient-care settings;
- employees of nursing homes and chronic-care facilities who have contact with patients or residents;
- providers of home care to persons at high risk (e.g., visiting nurses and volunteer workers); and
- household members (including children) of persons in high-risk groups.

# VACCINATION OF OTHER GROUPS

# General Population

Physicians should administer influenza vaccine to any person who wishes to reduce the likelihood of becoming ill with influenza. Persons who provide essential community services should be considered for vaccination to minimize disruption of essential activities during influenza outbreaks. Students or other persons in institutional settings (e.g., those who reside in dormitories) should be encouraged to receive vaccine to minimize the disruption of routine activities during epidemics.

# Pregnant Women

Influenza-associated excess mortality among pregnant women has not been documented except during the pandemics of 1918-19 and 1957-58. However, because death-certificate data often do not indicate whether a woman was pregnant at the time of death, studies are needed to assess the risks of influenza infection that are specifically associated with pregnancy. Case reports and limited studies suggest that women in the third trimester of pregnancy and early puerperium, including those without underlying risk factors, might be at increased risk for serious complications following influenza infection. Certain pregnancy-related physiologic changes may increase the risk for such complications; as pregnancy progresses, cardiac output, heart rate, oxygen consumption, and stroke volume increase while lung capacity decreases. Immunologic changes during pregnancy also may increase the risk for severe influenza illness. Health-care workers who provide care for pregnant women should consider administering influenza vaccine to all women who would be in the third trimester of pregnancy or early puerperium during the influenza season.
Pregnant women who have medical conditions that increase their risk for complications from influenza should be vaccinated before the influenza season, regardless of the stage of pregnancy. Although definitive studies have not been conducted, influenza vaccination is considered safe at any stage of pregnancy.

# Persons Infected with Human Immunodeficiency Virus (HIV)

Limited information exists regarding the frequency and severity of influenza illness among HIV-infected persons, but reports suggest that symptoms might be prolonged and the risk for complications increased for some HIV-infected persons. Influenza vaccine has produced protective antibody titers against influenza in vaccinated HIV-infected persons who have minimal AIDS-related symptoms and high CD4+ T-lymphocyte cell counts. In patients who have advanced HIV disease and low CD4+ T-lymphocyte cell counts, however, influenza vaccine may not induce protective antibody titers; a second dose of vaccine does not improve the immune response for these persons. Recent studies have examined the effect of influenza vaccination on replication of HIV type 1 (HIV-1). Although some studies have demonstrated a transient (i.e., 2- to 4-week) increase in replication of HIV-1 in the plasma or peripheral blood mononuclear cells of HIV-infected persons after vaccine administration, other studies using similar laboratory techniques have not indicated any substantial increase in replication. Deterioration of CD4+ T-lymphocyte cell counts and progression of clinical HIV disease have not been demonstrated among HIV-infected persons who receive vaccine. Because influenza can result in serious illness and complications and because influenza vaccination may result in protective antibody titers, vaccination will benefit many HIV-infected patients.

# Foreign Travelers

The risk for exposure to influenza during foreign travel varies, depending on season and destination. In the tropics, influenza can occur throughout the year; in the Southern Hemisphere, most activity occurs from April through September. Because of the short incubation period for influenza, exposure to the virus during travel can result in clinical illness that begins while traveling, an inconvenience or potential danger, especially for persons at increased risk for complications. Persons preparing to travel to the tropics at any time of year or to the Southern Hemisphere from April through September should review their influenza vaccination histories. If they were not vaccinated the previous fall or winter, they should consider influenza vaccination before travel. Persons in high-risk categories should be especially encouraged to receive the most current vaccine. Persons at high risk who received the previous season's vaccine before travel should be revaccinated in the fall or winter with the current vaccine.

# PERSONS WHO SHOULD NOT BE VACCINATED

Inactivated influenza vaccine should not be administered to persons known to have anaphylactic hypersensitivity to eggs or to other components of the influenza vaccine without first consulting a physician (see Side Effects and Adverse Reactions). Use of an antiviral agent (i.e., amantadine or rimantadine) is an option for prevention of influenza A in such persons. However, persons who have a history of anaphylactic hypersensitivity to vaccine components but who are also at high risk for complications of influenza can benefit from vaccine after appropriate allergy evaluation and desensitization. Specific information about vaccine components can be found in the package insert for each manufacturer.

Adults with acute febrile illness usually should not be vaccinated until their symptoms have abated. However, minor illnesses with or without fever should not contraindicate the use of influenza vaccine, particularly among children with mild upper respiratory tract infection or allergic rhinitis.

# SIDE EFFECTS AND ADVERSE REACTIONS

Because influenza vaccine contains only noninfectious viruses, it cannot cause influenza. Respiratory disease after vaccination represents coincidental illness unrelated to influenza vaccination. The most frequent side effect of vaccination, reported by fewer than one third of vaccinees, is soreness at the vaccination site that lasts for up to 2 days. In addition, two types of systemic reactions have occurred:

- Fever, malaise, myalgia, and other systemic symptoms occur infrequently and most often affect persons who have had no exposure to the influenza virus antigens in the vaccine (e.g., young children). These reactions begin 6-12 hours after vaccination and can persist for 1 or 2 days.
- Immediate, presumably allergic, reactions (e.g., hives, angioedema, allergic asthma, and systemic anaphylaxis) occur rarely after influenza vaccination. These reactions probably result from hypersensitivity to some vaccine component; the majority are most likely related to residual egg protein. Although current influenza vaccines contain only a small quantity of egg protein, this protein can induce immediate hypersensitivity reactions among persons who have severe egg allergy. Persons who have developed hives, have had swelling of the lips or tongue, or have experienced acute respiratory distress or collapse after eating eggs should consult a physician for appropriate evaluation to help determine whether vaccine should be administered. Persons with documented immunoglobulin E (IgE)-mediated hypersensitivity to eggs, including those who have had occupational asthma or other allergic responses due to exposure to egg protein, might also be at increased risk for reactions from influenza vaccine, and similar consultation should be considered. The protocol for influenza vaccination developed by Murphy and Strunk may be considered for patients who have egg allergies and medical conditions that place them at increased risk for influenza-associated complications (Murphy and Strunk, 1985).

Hypersensitivity reactions to any vaccine component can occur. Although exposure to vaccines containing thimerosal can lead to induction of hypersensitivity, most patients do not develop reactions to thimerosal when it is administered as a component of vaccines, even when patch or intradermal tests for thimerosal indicate hypersensitivity. When reported, hypersensitivity to thimerosal has usually consisted of local, delayed-type hypersensitivity reactions.

Unlike the 1976 swine influenza vaccine, subsequent vaccines prepared from other virus strains have not been clearly associated with an increased frequency of Guillain-Barré syndrome (GBS). However, a precise estimate of risk is difficult to determine for a rare condition such as GBS, which has an annual background incidence of only one to two cases per 100,000 adults. Among persons who received the swine influenza vaccine, the rate of GBS that exceeded the background rate was slightly less than one case per 100,000 vaccinations.
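For scale, the following back-of-the-envelope calculation (illustrative only, using figures cited elsewhere in this report) shows why even this worst-case excess rate is small relative to the influenza mortality that vaccination can prevent:

```python
# Hypothetical worst case: assume the 1976 swine influenza vaccine's
# excess GBS rate (slightly less than 1 case per 100,000 vaccinations)
# applied to the approximately 32 million persons >=65 years of age
# targeted for vaccination (see Strategies for Implementing Influenza
# Vaccine Recommendations).
vaccinees = 32_000_000
excess_gbs_per_100_000 = 1.0      # upper-bound excess rate
excess_cases = vaccinees * excess_gbs_per_100_000 / 100_000
print(round(excess_cases))        # ~320 hypothetical excess GBS cases
# By comparison, >20,000 influenza-associated deaths occurred during
# each of 10 U.S. epidemics from 1972-73 to 1990-91, with >90% of
# pneumonia and influenza deaths among persons >=65 years of age.
```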
An investigation of GBS cases in 1990-91 indicated no overall increase in frequency of GBS among persons who were administered influenza vaccine; a slight increase in GBS cases among vaccinated persons might have occurred in the age group 18-64 years, but not among persons ≥65 years of age. The epidemiologic features of the possible association of the 1990-91 vaccine with GBS were less convincing than those observed for the swine influenza vaccine. The rate of GBS cases after vaccination passively reported to the Vaccine Adverse Event Reporting System (VAERS) during 1993-94 was estimated to be approximately twice the average rate reported during other recent seasons (i.e., 1990-91, 1991-92, 1992-93, and 1994-95). The data currently available are not sufficient to determine whether this represents an actual risk. However, even if GBS were a true side effect, the very low estimated risk for GBS is less than the risk for severe influenza that could be prevented by vaccination.

Whereas the incidence of GBS in the general population is very low, persons with a history of GBS have a substantially greater likelihood of subsequently developing GBS than persons without such a history. Thus, the likelihood of coincidentally developing GBS after influenza vaccination is expected to be greater among persons with a history of GBS than among persons with no history of this syndrome. Whether influenza vaccination might be causally associated with this risk for recurrence is not known. Although it would seem prudent to avoid a subsequent influenza vaccination in a person known to have developed GBS within 6 weeks of a previous influenza vaccination, for most persons with a history of GBS who are at high risk for severe complications from influenza, the established benefits of influenza vaccination justify yearly immunization.

# SIMULTANEOUS ADMINISTRATION OF OTHER VACCINES, INCLUDING CHILDHOOD VACCINES

The target groups for influenza and pneumococcal vaccination overlap considerably. For persons at high risk who have not previously been vaccinated with pneumococcal vaccine, health-care providers should strongly consider administering both pneumococcal and influenza vaccines concurrently. Both vaccines can be administered at the same time at different sites without increasing side effects. However, influenza vaccine is administered each year, whereas pneumococcal vaccine is not. Children at high risk for influenza-related complications can receive influenza vaccine at the same time they receive other routine vaccinations, including pertussis vaccine (DTP or DTaP). Because influenza vaccine can cause fever when administered to young children, DTaP might be preferable in those children ≥15 months of age who are receiving the fourth or fifth dose of pertussis vaccine. DTaP is not licensed for the initial three-dose series of pertussis vaccine.

# TIMING OF INFLUENZA VACCINATION ACTIVITIES

Beginning each September (when vaccine for the upcoming influenza season becomes available), persons at high risk who are seen by health-care providers for routine care or as a result of hospitalization should be offered influenza vaccine. Opportunities to vaccinate persons at high risk for complications of influenza should not be missed. In previously published recommendations, the optimal time for organized vaccination campaigns for persons in high-risk groups was defined as the period from mid-October through mid-November. This period has been extended to include the first 2 weeks in October. In the United States, influenza activity generally peaks between late December and early March. High levels of influenza activity infrequently occur in the contiguous 48 states before December. Administering vaccine too far in advance of the influenza season should be avoided in facilities such as nursing homes, because antibody levels might begin to decline within a few months of vaccination. Vaccination programs can be undertaken as soon as current vaccine is available if regional influenza activity is expected to begin earlier than December.

Children <9 years of age who have not been vaccinated previously should receive two doses of vaccine at least 1 month apart to maximize the likelihood of a satisfactory antibody response to all three vaccine antigens. The second dose should be administered before December, if possible. Vaccine should be offered to both children and adults up to and even after influenza virus activity is documented in a community.

# STRATEGIES FOR IMPLEMENTING INFLUENZA VACCINE RECOMMENDATIONS

Influenza vaccine campaigns are targeted to approximately 32 million persons ≥65 years of age and 27 million to 31 million persons <65 years of age who are at high risk for influenza-associated complications. National health objectives for the year 2000 include vaccination of at least 60% of persons at risk for severe influenza-related illness. Influenza vaccination levels among persons ≥65 years of age improved substantially from 1989 (33%) to 1993 (52%); however, vaccination levels among high-risk persons <65 years of age are estimated to be <30%. Successful vaccination programs have combined education for health-care workers, publicity and education targeted toward potential recipients, a plan for identifying persons at high risk (usually by medical-record review), and efforts to remove administrative and financial barriers that prevent persons from receiving the vaccine. Persons for whom influenza vaccine is recommended can be identified and vaccinated in the settings described in the following paragraphs.

# Outpatient Clinics and Physicians' Offices

Staff in physicians' offices, clinics, health-maintenance organizations, and employee health clinics should be instructed to identify and label the medical records of patients who should receive vaccine. Vaccine should be offered during visits beginning in September and throughout the influenza season. The offer of vaccine and its receipt or refusal should be documented in the medical record. Patients in high-risk groups who do not have regularly scheduled visits during the fall should be reminded by mail or telephone of the need for vaccine. If possible, arrangements should be made to provide vaccine with minimal waiting time and at the lowest possible cost.

# Facilities Providing Episodic or Acute Care

Health-care providers in these settings (e.g., emergency rooms and walk-in clinics) should be familiar with influenza vaccine recommendations. They should offer vaccine to persons in high-risk groups or should provide written information on why, where, and how to obtain the vaccine. Written information should be available in language(s) appropriate for the population served by the facility.

# Nursing Homes and Other Residential Long-Term-Care Facilities

Vaccination should be routinely provided to all residents of chronic-care facilities with the concurrence of attending physicians rather than by obtaining individual vaccination orders for each patient.
Consent for vaccination should be obtained from the resident or a family member at the time of admission to the facility, and all residents should be vaccinated at one time, immediately preceding the influenza season. Residents admitted during the winter months after completion of the vaccination program should be vaccinated when they are admitted.

# Acute-Care Hospitals

All persons ≥65 years of age and younger persons (including children) with high-risk conditions who are hospitalized at any time from September through March should be offered and strongly encouraged to receive influenza vaccine before they are discharged. Household members and others with whom they will have contact should receive written information about why and where to obtain influenza vaccine.

# Outpatient Facilities Providing Continuing Care to Patients at High Risk

All patients should be offered vaccine before the beginning of the influenza season. Patients admitted to such programs (e.g., hemodialysis centers, hospital specialty-care clinics, and outpatient rehabilitation programs) during the winter months after the earlier vaccination program has been conducted should be vaccinated at the time of admission. Household members should receive written information regarding the need for vaccination and the places to obtain influenza vaccine.

# Visiting Nurses and Others Providing Home Care to Persons at High Risk

Nursing-care plans should identify patients in high-risk groups, and vaccine should be provided in the home if necessary. Caregivers and other persons in the household (including children) should be referred for vaccination.

# Facilities Providing Services to Persons ≥65 Years of Age

In these facilities (e.g., retirement communities and recreation centers), all unvaccinated residents/attendees should be offered vaccine on site before the influenza season. Education/publicity programs should also be provided; these programs should emphasize the need for influenza vaccine and provide specific information on how, where, and when to obtain it.

# Clinics and Others Providing Health Care for Travelers

Indications for influenza vaccination should be reviewed before travel, and vaccine should be offered if appropriate (see Foreign Travelers).

# Health-Care Workers

Administrators of all health-care facilities should arrange for influenza vaccine to be offered to all personnel before the influenza season. Personnel should be provided with appropriate educational materials and strongly encouraged to receive vaccine. Particular emphasis should be placed on vaccination of persons who care for members of high-risk groups (e.g., staff of intensive-care units, staff of medical/surgical units, and employees of nursing homes and chronic-care facilities). Using a mobile cart to take vaccine to hospital wards or other work sites and making vaccine available during night and weekend work shifts can enhance compliance, as can a follow-up campaign early in the course of a community outbreak.

# ANTIVIRAL AGENTS FOR INFLUENZA A

The two antiviral agents with specific activity against influenza A viruses are amantadine hydrochloride and rimantadine hydrochloride. These chemically related drugs interfere with the replication cycle of type A (but not type B) influenza viruses. When administered prophylactically to healthy adults or children before and throughout the epidemic period, both drugs are approximately 70%-90% effective in preventing illness caused by naturally occurring strains of type A influenza viruses. Because antiviral agents taken prophylactically can prevent illness but not subclinical infection, some persons who take these drugs can still develop immune responses that will protect them when they are exposed to antigenically related viruses in later years.

In otherwise healthy adults, amantadine and rimantadine can reduce the severity and duration of signs and symptoms of influenza A illness when administered within 48 hours of illness onset. Studies evaluating the efficacy of treatment for children with either amantadine or rimantadine are limited. Amantadine was approved in 1976 for treatment and prophylaxis of all influenza type A virus infections. Although few placebo-controlled studies were conducted to determine the efficacy of amantadine treatment among children prior to approval, amantadine is indicated for treatment and prophylaxis of adults and children ≥1 year of age. Rimantadine was approved in 1993 for treatment and prophylaxis in adults but only for prophylaxis in children. Further studies might provide the data needed to support future approval of rimantadine treatment in children.

As with all drugs, amantadine and rimantadine can cause adverse reactions in some persons. Such adverse reactions are rarely severe; however, for some categories of patients, severe adverse reactions are more likely to occur. Amantadine has been associated with a higher incidence of adverse central nervous system (CNS) reactions than rimantadine (see Considerations for Selecting Amantadine or Rimantadine for Chemoprophylaxis or Treatment).

# RECOMMENDATIONS FOR THE USE OF AMANTADINE AND RIMANTADINE

# Use as Prophylaxis

Chemoprophylaxis is not a substitute for vaccination. Recommendations for chemoprophylaxis are provided primarily to help health-care providers make decisions regarding persons who are at greatest risk for severe illness and complications if infected with influenza A virus. When amantadine or rimantadine is administered as prophylaxis, factors such as cost, compliance, and potential side effects should be considered when determining the period of prophylaxis. To be maximally effective as prophylaxis, the drug must be taken each day for the duration of influenza activity in the community; however, to be most cost effective, amantadine or rimantadine prophylaxis should be taken only during the period of peak influenza activity in a community.

# Persons at High Risk Vaccinated After Influenza A Activity Has Begun

Persons at high risk can still be vaccinated after an outbreak of influenza A has begun in a community. However, the development of antibodies in adults after vaccination can take as long as 2 weeks, during which time chemoprophylaxis should be considered. Children who receive influenza vaccine for the first time can require as long as 6 weeks of prophylaxis (i.e., prophylaxis for 2 weeks after the second dose of vaccine has been received). Amantadine and rimantadine do not interfere with the antibody response to the vaccine.

# Persons Providing Care to Those at High Risk

To reduce the spread of virus to persons at high risk, chemoprophylaxis may be considered during community outbreaks for a) unvaccinated persons who have frequent contact with persons at high risk (e.g., household members, visiting nurses, and volunteer workers) and b) unvaccinated employees of hospitals, clinics, and chronic-care facilities. For those persons who cannot be vaccinated, chemoprophylaxis during the period of peak influenza activity may be considered.
For those persons who receive vaccine at a time when influenza A is present in the community, chemoprophylaxis can be administered for 2 weeks after vaccination. Prophylaxis should be considered for all employees, regardless of their vaccination status, if the outbreak is caused by a variant strain of influenza A that might not be controlled by the vaccine.

# Persons Who Have Immune Deficiency

Chemoprophylaxis might be indicated for persons at high risk who are expected to have an inadequate antibody response to influenza vaccine. This category includes persons who have HIV infection, especially those who have advanced HIV disease. No data are available concerning possible interactions with other drugs used in the management of patients who have HIV infection. Such patients should be monitored closely if amantadine or rimantadine chemoprophylaxis is administered.

# Persons for Whom Influenza Vaccine Is Contraindicated

Chemoprophylaxis throughout the influenza season or during peak influenza activity might be appropriate for persons at high risk who should not be vaccinated. Influenza vaccine may be contraindicated in persons who have severe anaphylactic hypersensitivity to egg protein or other vaccine components.

# Other Persons

Amantadine or rimantadine also can be administered prophylactically to anyone who wishes to avoid influenza A illness. The health-care provider and patient should make this decision on an individual basis.

# Use of Antivirals as Therapy

Amantadine and rimantadine can reduce the severity and shorten the duration of influenza A illness among healthy adults when administered within 48 hours of illness onset. Whether antiviral therapy will prevent complications of influenza type A among persons at high risk is unknown. Insufficient data exist to determine the efficacy of rimantadine treatment in children; thus, rimantadine is currently approved for prophylaxis, but not for treatment, in this age group.

Amantadine- and rimantadine-resistant influenza A viruses can emerge when either of these drugs is administered for treatment; amantadine-resistant strains are cross-resistant to rimantadine and vice versa. Both the frequency with which resistant viruses emerge and the extent of their transmission are unknown, but data indicate that amantadine- and rimantadine-resistant viruses are no more virulent or transmissible than amantadine- and rimantadine-sensitive viruses. The screening of naturally occurring epidemic strains of influenza type A has rarely detected amantadine- and rimantadine-resistant viruses. Resistant viruses have most frequently been isolated from persons taking one of these drugs as therapy for influenza A infection. Resistant viruses have been isolated from persons who live at home or in an institution where other residents are taking or have recently taken amantadine or rimantadine as therapy. Persons who have influenza-like illness should avoid contact with uninfected persons as much as possible, regardless of whether they are being treated with amantadine or rimantadine. Persons who have influenza type A infection and who are treated with either drug can shed amantadine- or rimantadine-sensitive viruses early in the course of treatment but can later shed drug-resistant viruses, especially after 5-7 days of therapy. Such persons can benefit from therapy even when resistant viruses emerge; however, they also can transmit infection to other persons with whom they come in contact. Because of possible induction of amantadine or rimantadine resistance, treatment of persons who have influenza-like illness should be discontinued as soon as clinically warranted, generally after 3-5 days of treatment or within 24-48 hours after the disappearance of signs and symptoms. Laboratory isolation of influenza viruses obtained from persons who are receiving amantadine or rimantadine should be reported to CDC through state health departments, and the isolates should be sent to CDC for antiviral sensitivity testing.

# Outbreak Control in Institutions

When confirmed or suspected outbreaks of influenza A occur in institutions that house persons at high risk, chemoprophylaxis should be started as early as possible to reduce the spread of the virus. Contingency planning is needed to ensure rapid administration of amantadine or rimantadine to residents. This planning should include preapproved medication orders or plans to obtain physicians' orders on short notice. When amantadine or rimantadine is used for outbreak control, the drug should be administered to all residents of the institution, regardless of whether they received influenza vaccine the previous fall. The drug should be continued for at least 2 weeks or until approximately 1 week after the end of the outbreak. The dose for each resident should be determined after consulting the dosage recommendations and precautions (see Considerations for Selecting Amantadine or Rimantadine for Chemoprophylaxis or Treatment) and the manufacturer's package insert. To reduce the spread of virus and to minimize disruption of patient care, chemoprophylaxis also can be offered to unvaccinated staff who provide care to persons at high risk. Prophylaxis should be considered for all employees, regardless of their vaccination status, if the outbreak is caused by a variant strain of influenza A that is not controlled by the vaccine.

Chemoprophylaxis also may be considered for controlling influenza A outbreaks in other closed or semi-closed settings (e.g., dormitories or other settings where persons live in close proximity). To reduce the spread of infection and the chances of prophylaxis failure due to transmission of drug-resistant virus, measures should be taken to reduce contact as much as possible between persons taking the drug for chemoprophylaxis and those taking it for treatment.

# CONSIDERATIONS FOR SELECTING AMANTADINE OR RIMANTADINE FOR CHEMOPROPHYLAXIS OR TREATMENT

# Side Effects/Toxicity

Despite the similarities between the two drugs, amantadine and rimantadine differ in their pharmacokinetic properties. More than 90% of amantadine is excreted unchanged, whereas approximately 75% of rimantadine is metabolized by the liver. However, both drugs and their metabolites are excreted by the kidney. The pharmacokinetic differences between amantadine and rimantadine might explain differences in side effects. Although both drugs can cause CNS and gastrointestinal side effects when administered to young, healthy adults at equivalent dosages of 200 mg/day, the incidence of CNS side effects (e.g., nervousness, anxiety, difficulty concentrating, and lightheadedness) is higher among persons taking amantadine than among those taking rimantadine. In a 6-week study of prophylaxis in healthy adults, approximately 6% of participants taking rimantadine at a dose of 200 mg/day experienced at least one CNS symptom, compared with approximately 14% of those taking the same dose of amantadine and 4% of those taking placebo.
The incidence of gastrointestinal side effects (e.g., nausea and anorexia) is approximately 3% among persons taking either drug, compared with 1%-2% among persons receiving placebo. Side effects associated with both drugs are usually mild and cease soon after the drug is discontinued. Side effects can diminish or disappear after the first week despite continued drug ingestion. However, serious side effects have been observed (e.g., marked behavioral changes, delirium, hallucinations, agitation, and seizures). These more severe side effects have been associated with high plasma drug concentrations and have been observed most often among persons who have renal insufficiency, seizure disorders, or certain psychiatric disorders and among elderly persons who have been taking amantadine as prophylaxis at a dose of 200 mg/day. Clinical observations and studies have indicated that lowering the dosage of amantadine among these persons reduces the incidence and severity of such side effects, and recommendations for reduced dosages for these groups of patients have been made. Because rimantadine has only recently been approved for marketing, its safety in certain patient populations (e.g., chronically ill and elderly persons) has been evaluated less frequently; clinical trials of rimantadine have more commonly involved young, healthy persons.

Providers should review the package insert before using amantadine or rimantadine for any patient. The patient's age, weight, and renal function; the presence of other medical conditions; the indications for use of amantadine or rimantadine (i.e., prophylaxis or therapy); and the potential for interaction with other medications must be considered, and the dosage and duration of treatment must be adjusted appropriately. Modifications in dosage might be required for persons who have impaired renal or hepatic function, the elderly, children, and persons with a history of seizures. The following are guidelines for the use of amantadine and rimantadine in certain patient populations. Dosage recommendations are also summarized (Table 2).

# Persons Who Have Impaired Renal Function

# Amantadine

Amantadine is excreted unchanged in the urine by glomerular filtration and tubular secretion. Thus, renal clearance of amantadine is reduced substantially in persons with renal insufficiency. A reduction in dosage is recommended for patients with creatinine clearance ≤50 mL/min. Guidelines for amantadine dosage based on creatinine clearance are found in the package insert. However, because recommended dosages based on creatinine clearance might provide only an approximation of the optimal dose for a given patient, such persons should be observed carefully so that adverse reactions can be recognized promptly and either the dose can be further reduced or the drug can be discontinued, if necessary.

# Rimantadine

The safety and pharmacokinetics of rimantadine among patients with renal insufficiency have been evaluated only after single-dose administration. Further studies are needed to determine the multiple-dose pharmacokinetics and the most appropriate dosages for these patients. In a single-dose study of patients with anuric renal failure, the apparent clearance of rimantadine was approximately 40% lower, and the elimination half-life was approximately 1.6-fold greater, than in healthy controls of the same age. Hemodialysis did not contribute to drug clearance. In studies among persons with less severe renal disease, drug clearance was also reduced, and plasma concentrations were higher, compared with control patients of the same weight, age, and sex who did not have renal disease. A reduction in dosage to 100 mg/day is recommended for persons with creatinine clearance ≤10 mL/min. Because of the potential for accumulation of rimantadine and its metabolites, patients with any degree of renal insufficiency, including elderly persons, should be monitored for adverse effects, and either the dosage should be reduced or the drug should be discontinued, if necessary.

# Persons ≥65 Years of Age

# Amantadine

Because renal function declines with increasing age, the daily dose for persons ≥65 years of age should not exceed 100 mg for prophylaxis or treatment. For some elderly persons, the dose should be further reduced. Studies suggest that because of their smaller average body size, elderly women are more likely than elderly men to experience side effects at a daily dose of 100 mg.

# Rimantadine

The incidence and severity of CNS side effects among elderly persons appear to be substantially lower among those taking rimantadine at a dose of 200 mg/day than among elderly persons taking the same dose of amantadine. However, when rimantadine has been administered at a dosage of 200 mg/day to chronically ill elderly persons, they have had a higher incidence of CNS and gastrointestinal symptoms than healthy, younger persons taking rimantadine at the same dosage. After long-term administration of rimantadine at a dosage of 200 mg/day, serum rimantadine concentrations among elderly nursing-home residents have been two to four times greater than those reported in younger adults. The dosage of rimantadine should be reduced to 100 mg/day for treatment or prophylaxis of elderly nursing-home residents. Although further studies are needed to determine the optimal dose for other elderly persons, a reduction in dosage to 100 mg/day should be considered for all persons ≥65 years of age if they experience signs and symptoms that might represent side effects when taking a dosage of 200 mg/day.

# Persons Who Have Liver Disease

# Amantadine

No increase in adverse reactions to amantadine has been observed among persons with liver disease.

# Rimantadine

The safety and pharmacokinetics of rimantadine among persons with liver disease have been evaluated only after single-dose administration. In a study of persons with chronic liver disease (most with stabilized cirrhosis), no alterations in pharmacokinetics were observed after a single dose. However, in persons with severe liver dysfunction, the apparent clearance of rimantadine was 50% lower than that reported for persons without liver disease. A dose reduction to 100 mg/day is recommended for persons with severe hepatic dysfunction.

# Persons Who Have Seizure Disorders

# Amantadine

An increased incidence of seizures has been reported in patients with a history of seizure disorders who have received amantadine. Patients with seizure disorders should be observed closely for possible increased seizure activity when taking amantadine.

# Rimantadine

In clinical trials, seizures (or seizure-like activity) have been observed in a few persons with a history of seizures who were not receiving anticonvulsant medication while taking rimantadine. The extent to which rimantadine might increase the incidence of seizures among persons with seizure disorders has not been adequately evaluated, because such persons have usually been excluded from clinical trials of rimantadine.
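To make the preceding adjustments concrete, the sketch below consolidates the rimantadine dosage rules described in this section, together with the pediatric dosages given in the Children section that follows. It is a minimal illustration only, not a clinical tool: the function and parameter names are invented for this sketch, and actual dosing decisions should follow the package insert.

```python
def rimantadine_daily_dose_mg(age_years, weight_kg,
                              creatinine_clearance_ml_min=None,
                              severe_hepatic_dysfunction=False,
                              nursing_home_resident=False):
    """Suggested rimantadine dosage (mg/day) per the guidelines above."""
    if age_years < 1:
        # Use in children <1 year of age has not been adequately evaluated.
        raise ValueError("not evaluated for children <1 year of age")
    if age_years <= 9 or weight_kg < 40:
        # Children 1-9 years of age, and children weighing <40 kg
        # regardless of age: 5 mg/kg/day, not to exceed 150 mg/day.
        return min(5 * weight_kg, 150)
    if severe_hepatic_dysfunction:
        return 100  # severe hepatic dysfunction: reduce to 100 mg/day
    if (creatinine_clearance_ml_min is not None
            and creatinine_clearance_ml_min <= 10):
        return 100  # creatinine clearance <=10 mL/min: reduce to 100 mg/day
    if age_years >= 65 and nursing_home_resident:
        return 100  # elderly nursing-home residents: reduce to 100 mg/day
    # Standard dosage (100 mg twice a day); for other persons >=65 years
    # of age, consider 100 mg/day if side effects occur at 200 mg/day.
    return 200
```

For example, `rimantadine_daily_dose_mg(6, 25)` returns 125 (5 mg/kg/day for a 25-kg child, below the 150 mg/day ceiling), and `rimantadine_daily_dose_mg(70, 60, nursing_home_resident=True)` returns 100.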
# Children

# Amantadine

The use of amantadine in children <1 year of age has not been adequately evaluated. The FDA-approved dosage for children 1-9 years of age is 4.4-8.8 mg/kg/day, not to exceed 150 mg/day. Although further studies to determine the optimal dosage for children are needed, physicians should consider prescribing only 5 mg/kg/day (not to exceed 150 mg/day) to reduce the risk for toxicity. The approved dosage for children ≥10 years of age is 200 mg/day; however, for children weighing <40 kg, prescribing 5 mg/kg/day, regardless of age, is advisable.

# Rimantadine

The use of rimantadine in children <1 year of age has not been adequately evaluated. In children 1-9 years of age, rimantadine should be administered in one or two divided doses at a dosage of 5 mg/kg/day, not to exceed 150 mg/day. The approved dosage for children ≥10 years of age is 200 mg/day (100 mg twice a day); however, for children weighing <40 kg, prescribing 5 mg/kg/day, regardless of age, also is recommended.

# Drug Interactions

# Amantadine

Careful observation is advised when amantadine is administered concurrently with drugs that affect the CNS, especially CNS stimulants.

# Rimantadine

No clinically significant interactions between rimantadine and other drugs have been identified. For more detailed information concerning potential drug interactions for either drug, the package insert should be consulted.

# SOURCES OF INFORMATION ON INFLUENZA-CONTROL PROGRAMS

Information regarding influenza surveillance is available through the CDC Voice Information System (influenza update), telephone (404) 332-4551, or through the CDC Information Service on the Public Health Network electronic bulletin board. From October through May, the information is updated at least every other week. In addition, periodic updates about influenza are published in the weekly MMWR. State and local health departments should be consulted regarding availability of influenza vaccine, access to vaccination programs, and information about state or local influenza activity.

# Selected Bibliography
These recommendations update information on the vaccine and antiviral agents available for controlling influenza during the 1996-97 influenza season (superseding MMWR 1995;44(No. RR-3):1-22). The principal changes include information about a) the influenza virus strains included in the trivalent vaccine for 1996-97 and b) extension of the optimal time for influenza vaccination campaigns for persons in high-risk groups.# INTRODUCTION Influenza A viruses are classified into subtypes on the basis of two surface antigens: hemagglutinin (H) and neuraminidase (N). Three subtypes of hemagglutinin (H1, H2, and H3) and two subtypes of neuraminidase (N1 and N2) are recognized among influenza A viruses that have caused widespread human disease. Immunity to these antigens-especially to the hemagglutinin-reduces the likelihood of infection and lessens the severity of disease if infection occurs. Infection with a virus of one subtype confers little or no protection against viruses of other subtypes. Furthermore, over time, antigenic variation (antigenic drift) within a subtype may be so marked that infection or vaccination with one strain may not induce immunity to distantly related strains of the same subtype. Although influenza B viruses have shown more antigenic stability than influenza A viruses, antigenic variation does occur. For these reasons, major epidemics of respiratory disease caused by new variants of influenza continue to occur. The antigenic characteristics of circulating strains provide the basis for selecting the virus strains included in each year's vaccine. Typical influenza illness is characterized by abrupt onset of fever, myalgia, sore throat, and nonproductive cough. Unlike other common respiratory illnesses, influenza can cause severe malaise lasting several days. More severe illness can result if either primary influenza pneumonia or secondary bacterial pneumonia occurs. During influenza epidemics, high attack rates of acute illness result in both increased numbers of visits to physicians' offices, walk-in clinics, and emergency rooms and increased hospitalizations for management of lower respiratory tract complications. Elderly persons and persons with underlying health problems are at increased risk for complications of influenza. If they become ill with influenza, such members of high-risk groups (see Groups at Increased Risk for Influenza-Related Complications under Target Groups for Special Vaccination Programs) are more likely than the general population to require hospitalization. During major epidemics, hospitalization rates for persons at high risk may increase twofold to fivefold, depending on the age group. Previously healthy children and younger adults may also require hospitalization for influenza-related complications, but the relative increase in their hospitalization rates is less than for persons who belong to high-risk groups. An increase in mortality further indicates the impact of influenza epidemics. Increased mortality results not only from influenza and pneumonia but also from cardiopulmonary and other chronic diseases that can be exacerbated by influenza. It is estimated that >20,000 influenza-associated deaths occurred during each of 10 different U.S. epidemics from 1972-73 to 1990-91, and >40,000 influenza-associated deaths occurred during each of three of these epidemics. More than 90% of the deaths attributed to pneumonia and influenza occurred among persons ≥65 years of age. Because the proportion of elderly persons in the U.S. 
population is increasing and because age and its associated chronic diseases are risk factors for severe influenza illness, the number of deaths from influenza can be expected to increase unless control measures are implemented more vigorously. The number of persons <65 years of age at increased risk for influenza-related complications is also increasing. Better survival rates for organ-transplant recipients, the success of neonatal intensive-care units, and better management of diseases such as cystic fibrosis and acquired immunodeficiency syndrome (AIDS) result in a higher survival rate for younger persons at high risk. # OPTIONS FOR THE CONTROL OF INFLUENZA In the United States, two measures are available that can reduce the impact of influenza: immunoprophylaxis with inactivated (killed-virus) vaccine and chemoprophylaxis or therapy with an influenza-specific antiviral drug (amantadine or rimantadine). Vaccination of persons at high risk each year before the influenza season is currently the most effective measure for reducing the impact of influenza. Vaccination can be highly cost effective when it is a) directed at persons who are most likely to experience complications or who are at increased risk for exposure and b) administered to persons at high risk during hospitalizations or routine health-care visits before the influenza season, thus making special visits to physicians' offices or clinics unnecessary. When vaccine and epidemic strains of virus are well matched, achieving high vaccination rates among persons living in closed settings (e.g., nursing homes and other chronic-care facilities) can reduce the risk for outbreaks by inducing herd immunity. # INACTIVATED VACCINE FOR INFLUENZA A AND B Each year's influenza vaccine contains three virus strains (usually two type A and one type B) representing the influenza viruses that are likely to circulate in the United States in the upcoming winter. The vaccine is made from highly purified, egg-grown viruses that have been made noninfectious (inactivated). Influenza vaccine rarely causes systemic or febrile reactions. Whole-virus, subvirion, and purified-surfaceantigen preparations are available. To minimize febrile reactions, only subvirion or purified-surface-antigen preparations should be used for children; any of the preparations may be used for adults. Most vaccinated children and young adults develop high postvaccination hemagglutination-inhibition antibody titers. These antibody titers are protective against illness caused by strains similar to those in the vaccine or the related variants that may emerge during outbreak periods. Elderly persons and persons with certain chronic diseases may develop lower postvaccination antibody titers than healthy young adults and thus may remain susceptible to influenza-related upper respiratory tract infection. However, even if such persons develop influenza illness despite vaccination, the vaccine can be effective in preventing lower respiratory tract involvement or other secondary complications, thereby reducing the risk for hospitalization and death. The effectiveness of influenza vaccine in preventing or attenuating illness varies, depending primarily on the age and immunocompetence of the vaccine recipient and the degree of similarity between the virus strains included in the vaccine and those that circulate during the influenza season. 
When there is a good match between vaccine and circulating viruses, influenza vaccine has been shown to prevent illness in approximately 70% of healthy persons <65 years of age. In these circumstances, studies have also indicated that the effectiveness of influenza vaccine in preventing hospitalization for pneumonia and influenza among elderly persons living in settings other than nursing homes or similar chronic-care facilities ranges from 30% to 70%.

Among elderly persons residing in nursing homes, influenza vaccine is most effective in preventing severe illness, secondary complications, and death. Studies of this population have indicated that the vaccine can be 50%-60% effective in preventing hospitalization and pneumonia and 80% effective in preventing death, even though efficacy in preventing influenza illness may often be in the range of 30%-40% among the frail elderly. Achieving a high rate of vaccination among nursing home residents can reduce the spread of infection in a facility, thus preventing disease through herd immunity.
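These percentages translate directly into expected case counts. The sketch below is illustrative only: the 100,000-person cohort and the 10% seasonal attack rate are assumed figures, and the 70% efficacy is the approximate value cited above for healthy persons <65 years of age.

```python
# Illustrative arithmetic only. The cohort size and attack rate are
# hypothetical assumptions; the 70% efficacy is the figure quoted above.

def expected_illnesses(population, attack_rate, vaccine_efficacy):
    """Expected influenza illnesses given a seasonal attack rate and
    vaccine efficacy against illness (both expressed as fractions)."""
    return population * attack_rate * (1.0 - vaccine_efficacy)

population = 100_000
attack_rate = 0.10  # hypothetical seasonal attack rate

unvaccinated = expected_illnesses(population, attack_rate, 0.0)   # 10000
vaccinated = expected_illnesses(population, attack_rate, 0.70)    # 3000

print(f"Expected illnesses, unvaccinated: {unvaccinated:.0f}")
print(f"Expected illnesses, vaccinated:   {vaccinated:.0f}")
print(f"Illnesses averted per 100,000:    {unvaccinated - vaccinated:.0f}")  # 7000
```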
# RECOMMENDATIONS FOR THE USE OF INFLUENZA VACCINE

Influenza vaccine is strongly recommended for any person ≥6 months of age who, because of age or underlying medical condition, is at increased risk for complications of influenza. Health-care workers and others (including household members) in close contact with persons in high-risk groups should also be vaccinated. In addition, influenza vaccine may be administered to any person who wishes to reduce the chance of becoming infected with influenza. The trivalent influenza vaccine prepared for the 1996-97 season will include A/Texas/36/91-like (H1N1), A/Wuhan/359/95-like (H3N2), and B/Beijing/184/93-like hemagglutinin antigens. For both the A/Wuhan/359/95-like and B/Beijing/184/93-like antigens, U.S. manufacturers will use the antigenically equivalent strains A/Nanchang/933/95 (H3N2) and B/Harbin/07/94 because of their growth properties.

Guidelines for the use of vaccine among certain patient populations follow; dosage recommendations are also summarized (Table 1). Notes to Table 1:

† Because of the lower potential for causing febrile reactions, only split-virus vaccines should be used for children. They may be labeled as "split," "subvirion," or "purified-surface-antigen" vaccine. Immunogenicity and side effects of split- and whole-virus vaccines are similar among adults when vaccines are administered at the recommended dosage.

§ The recommended site of vaccination is the deltoid muscle for adults and older children. The preferred site for infants and young children is the anterolateral aspect of the thigh.

¶ Two doses administered at least 1 month apart are recommended for children <9 years of age who are receiving influenza vaccine for the first time.

Although the current influenza vaccine can contain one or more of the antigens administered in previous years, annual vaccination with the current vaccine is necessary because immunity declines in the year following vaccination. Because the 1996-97 vaccine differs from the 1995-96 vaccine, supplies of 1995-96 vaccine should not be administered to provide protection for the 1996-97 influenza season.

Two doses administered at least 1 month apart may be required for satisfactory antibody responses among previously unvaccinated children <9 years of age; however, studies of vaccines similar to those being used currently have indicated little or no improvement in antibody response when a second dose is administered to adults during the same season.

During recent decades, data on influenza vaccine immunogenicity and side effects have been obtained for intramuscularly administered vaccine. Because recent influenza vaccines have not been adequately evaluated when administered by other routes, the intramuscular route is recommended. Adults and older children should be vaccinated in the deltoid muscle and infants and young children in the anterolateral aspect of the thigh.

# TARGET GROUPS FOR SPECIAL VACCINATION PROGRAMS

To maximize protection of high-risk persons, they and their close contacts should be targeted for organized vaccination programs.

# Groups at Increased Risk for Influenza-Related Complications

• persons ≥65 years of age;
• residents of nursing homes and other chronic-care facilities that house persons of any age who have chronic medical conditions;
• adults and children who have chronic disorders of the pulmonary or cardiovascular systems, including children with asthma;
• adults and children who have required regular medical follow-up or hospitalization during the preceding year because of chronic metabolic diseases (including diabetes mellitus), renal dysfunction, hemoglobinopathies, or immunosuppression (including immunosuppression caused by medications); and
• children and teenagers (6 months-18 years of age) who are receiving long-term aspirin therapy and therefore might be at risk for developing Reye syndrome after influenza.

# Groups that Can Transmit Influenza to Persons at High Risk

Persons who are clinically or subclinically infected and who care for or live with members of high-risk groups can transmit influenza virus to them. Some persons at high risk (e.g., the elderly, transplant recipients, and persons with AIDS) can have a low antibody response to influenza vaccine. Efforts to protect these members of high-risk groups against influenza might be improved by reducing the likelihood of influenza exposure from their caregivers. Therefore, the following groups should be vaccinated:

• physicians, nurses, and other personnel in both hospital and outpatient-care settings;
• employees of nursing homes and chronic-care facilities who have contact with patients or residents;
• providers of home care to persons at high risk (e.g., visiting nurses and volunteer workers); and
• household members (including children) of persons in high-risk groups.

# VACCINATION OF OTHER GROUPS

# General Population

Physicians should administer influenza vaccine to any person who wishes to reduce the likelihood of becoming ill with influenza. Persons who provide essential community services should be considered for vaccination to minimize disruption of essential activities during influenza outbreaks. Students or other persons in institutional settings (e.g., those who reside in dormitories) should be encouraged to receive vaccine to minimize the disruption of routine activities during epidemics.

# Pregnant Women

Influenza-associated excess mortality among pregnant women has not been documented except during the pandemics of 1918-19 and 1957-58. However, because death-certificate data often do not indicate whether a woman was pregnant at the time of death, studies are needed to assess the risks of influenza infection that are specifically associated with pregnancy. Case reports and limited studies suggest that women in the third trimester of pregnancy and early puerperium, including those women without underlying risk factors, might be at increased risk for serious complications following influenza infection. Certain pregnancy-related physiologic changes may increase the risk for such complications; as pregnancy progresses, cardiac output, heart rate, oxygen consumption, and stroke volume increase while lung capacity decreases. Immunologic changes during pregnancy also may increase the risk for severe influenza illness. Health-care workers who provide care for pregnant women should consider administering influenza vaccine to all women who would be in the third trimester of pregnancy or early puerperium during the influenza season.
Pregnant women who have medical conditions that increase their risk for complications from influenza should be vaccinated before the influenza season, regardless of the stage of pregnancy. Although definitive studies have not been conducted, influenza vaccination is considered safe at any stage of pregnancy.

# Persons Infected with Human Immunodeficiency Virus (HIV)

Limited information exists regarding the frequency and severity of influenza illness among HIV-infected persons, but reports suggest that symptoms might be prolonged and the risk for complications increased for some HIV-infected persons. Influenza vaccine has produced protective antibody titers against influenza in vaccinated HIV-infected persons who have minimal AIDS-related symptoms and high CD4+ T-lymphocyte cell counts. In patients who have advanced HIV disease and low CD4+ T-lymphocyte cell counts, however, influenza vaccine may not induce protective antibody titers; a second dose of vaccine does not improve the immune response for these persons.

Recent studies have examined the effect of influenza vaccination on replication of HIV type 1 (HIV-1). Although some studies have demonstrated a transient (i.e., 2- to 4-week) increase in replication of HIV-1 in the plasma or peripheral blood mononuclear cells of HIV-infected persons after vaccine administration, other studies using similar laboratory techniques have not indicated any substantial increase in replication. Deterioration of CD4+ T-lymphocyte cell counts and progression of clinical HIV disease have not been demonstrated among HIV-infected persons who receive vaccine. Because influenza can result in serious illness and complications and because influenza vaccination may result in protective antibody titers, vaccination will benefit many HIV-infected patients.

# Foreign Travelers

The risk for exposure to influenza during foreign travel varies, depending on season and destination. In the tropics, influenza can occur throughout the year; in the Southern Hemisphere, most activity occurs from April through September. Because of the short incubation period for influenza, exposure to the virus during travel can result in clinical illness that begins while traveling, an inconvenience or potential danger, especially for persons at increased risk for complications. Persons preparing to travel to the tropics at any time of year or to the Southern Hemisphere from April through September should review their influenza vaccination histories. If they were not vaccinated the previous fall or winter, they should consider influenza vaccination before travel. Persons in high-risk categories should be especially encouraged to receive the most current vaccine. Persons at high risk who received the previous season's vaccine before travel should be revaccinated in the fall or winter with the current vaccine.

# PERSONS WHO SHOULD NOT BE VACCINATED

Inactivated influenza vaccine should not be administered to persons known to have anaphylactic hypersensitivity to eggs or to other components of the influenza vaccine without first consulting a physician (see Side Effects and Adverse Reactions). Use of an antiviral agent (i.e., amantadine or rimantadine) is an option for prevention of influenza A in such persons. However, persons who have a history of anaphylactic hypersensitivity to vaccine components but who are also at high risk for complications of influenza can benefit from vaccine after appropriate allergy evaluation and desensitization.
Specific information about vaccine components can be found in the package inserts for each manufacturer. Adults with acute febrile illness usually should not be vaccinated until their symptoms have abated. However, minor illnesses with or without fever should not contraindicate the use of influenza vaccine, particularly among children with mild upper respiratory tract infection or allergic rhinitis.

# SIDE EFFECTS AND ADVERSE REACTIONS

Because influenza vaccine contains only noninfectious viruses, it cannot cause influenza. Respiratory disease after vaccination represents coincidental illness unrelated to influenza vaccination. The most frequent side effect of vaccination, reported by fewer than one third of vaccinees, is soreness at the vaccination site that lasts for up to 2 days. In addition, two types of systemic reactions have occurred:

• Fever, malaise, myalgia, and other systemic symptoms occur infrequently and most often affect persons who have had no exposure to the influenza virus antigens in the vaccine (e.g., young children). These reactions begin 6-12 hours after vaccination and can persist for 1 or 2 days.

• Immediate, presumably allergic, reactions (e.g., hives, angioedema, allergic asthma, and systemic anaphylaxis) occur rarely after influenza vaccination. These reactions probably result from hypersensitivity to some vaccine component; the majority are most likely related to residual egg protein. Although current influenza vaccines contain only a small quantity of egg protein, this protein can induce immediate hypersensitivity reactions among persons who have severe egg allergy. Persons who have developed hives, have had swelling of the lips or tongue, or have experienced acute respiratory distress or collapse after eating eggs should consult a physician for appropriate evaluation to help determine whether vaccine should be administered. Persons with documented immunoglobulin E (IgE)-mediated hypersensitivity to eggs, including those who have had occupational asthma or other allergic responses due to exposure to egg protein, might also be at increased risk for reactions from influenza vaccine, and similar consultation should be considered. The protocol for influenza vaccination developed by Murphy and Strunk may be considered for patients who have egg allergies and medical conditions that place them at increased risk for influenza-associated complications (Murphy and Strunk, 1985).

Hypersensitivity reactions to any vaccine component can occur. Although exposure to vaccines containing thimerosal can lead to induction of hypersensitivity, most patients do not develop reactions to thimerosal when it is administered as a component of vaccines, even when patch or intradermal tests for thimerosal indicate hypersensitivity. When reported, hypersensitivity to thimerosal has usually consisted of local, delayed-type hypersensitivity reactions.

Unlike the 1976 swine influenza vaccine, subsequent vaccines prepared from other virus strains have not been clearly associated with an increased frequency of Guillain-Barré syndrome (GBS). However, a precise estimate of risk is difficult to determine for a rare condition such as GBS, which has an annual background incidence of only one to two cases per 100,000 adult population. Among persons who received the swine influenza vaccine, the rate of GBS that exceeded the background rate was slightly less than one case per 100,000 vaccinations.
An investigation of GBS cases in 1990-91 indicated no overall increase in frequency of GBS among persons who were administered influenza vaccine; a slight increase in GBS cases among vaccinated persons might have occurred in the age group 18-64 years, but not among persons ≥65 years of age. The epidemiologic features of the possible association of the 1990-91 vaccine with GBS were less convincing than those observed for the swine influenza vaccine. The rate of GBS cases after vaccination that was passively reported to the Vaccine Adverse Event Reporting System (VAERS) during 1993-94 was estimated to be approximately twice the average rate reported during other recent seasons (i.e., 1990-91, 1991-92, 1992-93, and 1994-95). The data currently available are not sufficient to determine whether this represents an actual risk. However, even if GBS were a true side effect, the very low estimated risk for GBS is less than the risk for severe influenza that could be prevented by vaccination.

Whereas the incidence of GBS in the general population is very low, persons with a history of GBS have a substantially greater likelihood of subsequently developing GBS than persons without such a history. Thus, the likelihood of coincidentally developing GBS after influenza vaccination is expected to be greater among persons with a history of GBS than among persons with no history of this syndrome. Whether influenza vaccination might be causally associated with this risk for recurrence is not known. Although it would seem prudent to avoid a subsequent influenza vaccination in a person known to have developed GBS within 6 weeks of a previous influenza vaccination, for most persons with a history of GBS who are at high risk for severe complications from influenza, the established benefits of influenza vaccination justify yearly immunization.
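The magnitude of these risks can be made concrete with simple arithmetic. The following sketch is illustrative only; the one-million-dose cohort size is an assumed figure, and the rates are those quoted above.

```python
# Illustrative arithmetic only, using the rates quoted above.
# The one-million-dose cohort size is a hypothetical assumption.

doses = 1_000_000

# Background GBS incidence: one to two cases per 100,000 adults per year.
background_low = doses * (1 / 100_000)    # 10 cases
background_high = doses * (2 / 100_000)   # 20 cases

# Excess rate observed after the 1976 swine influenza vaccine:
# slightly less than one case per 100,000 vaccinations.
excess_swine_flu = doses * (1 / 100_000)  # fewer than 10 cases

print(f"Expected background GBS cases per year: {background_low:.0f}-{background_high:.0f}")
print(f"Upper bound on excess cases after swine influenza vaccine: <{excess_swine_flu:.0f}")
```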
# SIMULTANEOUS ADMINISTRATION OF OTHER VACCINES, INCLUDING CHILDHOOD VACCINES

The target groups for influenza and pneumococcal vaccination overlap considerably. For persons at high risk who have not previously been vaccinated with pneumococcal vaccine, health-care providers should strongly consider administering both pneumococcal and influenza vaccines concurrently. Both vaccines can be administered at the same time at different sites without increasing side effects. However, influenza vaccine is administered each year, whereas pneumococcal vaccine is not.

Children at high risk for influenza-related complications can receive influenza vaccine at the same time they receive other routine vaccinations, including pertussis vaccine (DTP or DTaP). Because influenza vaccine can cause fever when administered to young children, DTaP might be preferable in those children ≥15 months of age who are receiving the fourth or fifth dose of pertussis vaccine. DTaP is not licensed for the initial three-dose series of pertussis vaccine.

# TIMING OF INFLUENZA VACCINATION ACTIVITIES

Beginning each September (when vaccine for the upcoming influenza season becomes available), persons at high risk who are seen by health-care providers for routine care or as a result of hospitalization should be offered influenza vaccine. Opportunities to vaccinate persons at high risk for complications of influenza should not be missed. In previously published recommendations, the optimal time for organized vaccination campaigns for persons in high-risk groups was defined as the period from mid-October through mid-November. This period has been extended to include the first 2 weeks in October.

In the United States, influenza activity generally peaks between late December and early March. High levels of influenza activity infrequently occur in the contiguous 48 states before December. Administering vaccine too far in advance of the influenza season should be avoided in facilities such as nursing homes, because antibody levels might begin to decline within a few months of vaccination. Vaccination programs can be undertaken as soon as current vaccine is available if regional influenza activity is expected to begin earlier than December. Children <9 years of age who have not been vaccinated previously should receive two doses of vaccine at least 1 month apart to maximize the likelihood of a satisfactory antibody response to all three vaccine antigens; the second dose should be administered before December, if possible. Vaccine should be offered to both children and adults up to and even after influenza virus activity is documented in a community.

# STRATEGIES FOR IMPLEMENTING INFLUENZA VACCINE RECOMMENDATIONS

Influenza vaccine campaigns are targeted to approximately 32 million persons ≥65 years of age and 27 million to 31 million persons <65 years of age who are at high risk for influenza-associated complications. National health objectives for the year 2000 include vaccination of at least 60% of persons at risk for severe influenza-related illness. Influenza vaccination levels among persons ≥65 years of age improved substantially from 1989 (33%) to 1993 (52%); however, vaccination levels among high-risk persons <65 years of age are estimated to be <30%. Successful vaccination programs have combined education for health-care workers, publicity and education targeted toward potential recipients, a plan for identifying persons at high risk (usually by medical-record review), and efforts to remove administrative and financial barriers that prevent persons from receiving the vaccine. Persons for whom influenza vaccine is recommended can be identified and vaccinated in the settings described in the following paragraphs.

# Outpatient Clinics and Physicians' Offices

Staff in physicians' offices, clinics, health-maintenance organizations, and employee health clinics should be instructed to identify and label the medical records of patients who should receive vaccine. Vaccine should be offered during visits beginning in September and throughout the influenza season. The offer of vaccine and its receipt or refusal should be documented in the medical record. Patients in high-risk groups who do not have regularly scheduled visits during the fall should be reminded by mail or telephone of the need for vaccine. If possible, arrangements should be made to provide vaccine with minimal waiting time and at the lowest possible cost.

# Facilities Providing Episodic or Acute Care

Health-care providers in these settings (e.g., emergency rooms and walk-in clinics) should be familiar with influenza vaccine recommendations. They should offer vaccine to persons in high-risk groups or should provide written information on why, where, and how to obtain the vaccine. Written information should be available in language(s) appropriate for the population served by the facility.

# Nursing Homes and Other Residential Long-Term-Care Facilities

Vaccination should be routinely provided to all residents of chronic-care facilities with the concurrence of attending physicians rather than by obtaining individual vaccination orders for each patient.
Consent for vaccination should be obtained from the resident or a family member at the time of admission to the facility, and all residents should be vaccinated at one time, immediately preceding the influenza season. Residents admitted during the winter months after completion of the vaccination program should be vaccinated when they are admitted.

# Acute-Care Hospitals

All persons ≥65 years of age and younger persons (including children) with high-risk conditions who are hospitalized at any time from September through March should be offered and strongly encouraged to receive influenza vaccine before they are discharged. Household members and others with whom they will have contact should receive written information about why and where to obtain influenza vaccine.

# Outpatient Facilities Providing Continuing Care to Patients at High Risk

All patients should be offered vaccine before the beginning of the influenza season. Patients admitted to such programs (e.g., hemodialysis centers, hospital specialty-care clinics, and outpatient rehabilitation programs) during the winter months after the earlier vaccination program has been conducted should be vaccinated at the time of admission. Household members should receive written information regarding the need for vaccination and the places to obtain influenza vaccine.

# Visiting Nurses and Others Providing Home Care to Persons at High Risk

Nursing-care plans should identify patients in high-risk groups, and vaccine should be provided in the home if necessary. Caregivers and other persons in the household (including children) should be referred for vaccination.

# Facilities Providing Services to Persons ≥65 Years of Age

In these facilities (e.g., retirement communities and recreation centers), all unvaccinated residents/attendees should be offered vaccine on site before the influenza season. Education/publicity programs should also be provided; these programs should emphasize the need for influenza vaccine and provide specific information on how, where, and when to obtain it.

# Clinics and Others Providing Health Care for Travelers

Indications for influenza vaccination should be reviewed before travel, and vaccine should be offered if appropriate (see Foreign Travelers).

# Health-Care Workers

Administrators of all health-care facilities should arrange for influenza vaccine to be offered to all personnel before the influenza season. Personnel should be provided with appropriate educational materials and strongly encouraged to receive vaccine. Particular emphasis should be placed on vaccination of persons who care for members of high-risk groups (e.g., staff of intensive-care units [including newborn intensive-care units], staff of medical/surgical units, and employees of nursing homes and chronic-care facilities). Using a mobile cart to take vaccine to hospital wards or other work sites and making vaccine available during night and weekend work shifts can enhance compliance, as can a follow-up campaign early in the course of a community outbreak.

# ANTIVIRAL AGENTS FOR INFLUENZA A

The two antiviral agents with specific activity against influenza A viruses are amantadine hydrochloride and rimantadine hydrochloride. These chemically related drugs interfere with the replication cycle of type A (but not type B) influenza viruses.
When administered prophylactically to healthy adults or children before and throughout the epidemic period, both drugs are approximately 70%-90% effective in preventing illness caused by naturally occurring strains of type A influenza viruses. Because antiviral agents taken prophylactically can prevent illness but not subclinical infection, some persons who take these drugs can still develop immune responses that will protect them when they are exposed to antigenically related viruses in later years.

In otherwise healthy adults, amantadine and rimantadine can reduce the severity and duration of signs and symptoms of influenza A illness when administered within 48 hours of illness onset. Studies evaluating the efficacy of treatment for children with either amantadine or rimantadine are limited. Amantadine was approved for treatment and prophylaxis of all influenza type A virus infections in 1976. Although few placebo-controlled studies were conducted to determine the efficacy of amantadine treatment among children before approval, amantadine is indicated for treatment and prophylaxis of adults and children ≥1 year of age. Rimantadine was approved in 1993 for treatment and prophylaxis in adults but only for prophylaxis in children. Further studies might provide the data needed to support future approval of rimantadine treatment in children.

As with all drugs, amantadine and rimantadine can cause adverse reactions in some persons. Such adverse reactions are rarely severe; however, for some categories of patients, severe adverse reactions are more likely to occur. Amantadine has been associated with a higher incidence of adverse central nervous system (CNS) reactions than rimantadine (see Considerations for Selecting Amantadine or Rimantadine for Chemoprophylaxis or Treatment).

# RECOMMENDATIONS FOR THE USE OF AMANTADINE AND RIMANTADINE

# Use as Prophylaxis

Chemoprophylaxis is not a substitute for vaccination. Recommendations for chemoprophylaxis are provided primarily to help health-care providers make decisions regarding persons who are at greatest risk for severe illness and complications if infected with influenza A virus.

When amantadine or rimantadine is administered as prophylaxis, factors such as cost, compliance, and potential side effects should be considered when determining the period of prophylaxis. To be maximally effective as prophylaxis, the drug must be taken each day for the duration of influenza activity in the community. However, to be most cost effective, amantadine or rimantadine prophylaxis should be taken only during the period of peak influenza activity in a community.

# Persons at High Risk Vaccinated After Influenza A Activity Has Begun

Persons at high risk can still be vaccinated after an outbreak of influenza A has begun in a community. However, the development of antibodies in adults after vaccination can take as long as 2 weeks, during which time chemoprophylaxis should be considered. Children who receive influenza vaccine for the first time can require as long as 6 weeks of prophylaxis (i.e., prophylaxis for 2 weeks after the second dose of vaccine has been received). Amantadine and rimantadine do not interfere with the antibody response to the vaccine.
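The prophylaxis windows described above reduce to simple date arithmetic. The following sketch is illustrative only; the helper function and the example dates are assumptions, not part of these recommendations.

```python
# A minimal sketch of the post-vaccination chemoprophylaxis windows described
# above. Dates and the helper itself are illustrative assumptions.

from datetime import date, timedelta

def prophylaxis_end(first_dose: date, first_time_child: bool) -> date:
    """Earliest date chemoprophylaxis could stop after vaccination.

    Adults: ~2 weeks after vaccination for antibody development.
    Previously unvaccinated children <9 years: two doses at least 1 month
    apart, then 2 weeks after the second dose (~6 weeks total).
    """
    if first_time_child:
        second_dose = first_dose + timedelta(days=28)  # doses >= 1 month apart
        return second_dose + timedelta(days=14)
    return first_dose + timedelta(days=14)

vaccinated = date(1996, 12, 2)  # hypothetical vaccination date during an outbreak
print(prophylaxis_end(vaccinated, first_time_child=False))  # 1996-12-16
print(prophylaxis_end(vaccinated, first_time_child=True))   # 1997-01-13
```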
# Persons Providing Care to Those at High Risk

To reduce the spread of virus to persons at high risk, chemoprophylaxis may be considered during community outbreaks for a) unvaccinated persons who have frequent contact with persons at high risk (e.g., household members, visiting nurses, and volunteer workers) and b) unvaccinated employees of hospitals, clinics, and chronic-care facilities. For those persons who cannot be vaccinated, chemoprophylaxis during the period of peak influenza activity may be considered. For those persons who receive vaccine at a time when influenza A is present in the community, chemoprophylaxis can be administered for 2 weeks after vaccination. Prophylaxis should be considered for all employees, regardless of their vaccination status, if the outbreak is caused by a variant strain of influenza A that might not be controlled by the vaccine.

# Persons Who Have Immune Deficiency

Chemoprophylaxis might be indicated for persons at high risk who are expected to have an inadequate antibody response to influenza vaccine. This category includes persons who have HIV infection, especially those who have advanced HIV disease. No data are available concerning possible interactions with other drugs used in the management of patients who have HIV infection. Such patients should be monitored closely if amantadine or rimantadine chemoprophylaxis is administered.

# Persons for Whom Influenza Vaccine Is Contraindicated

Chemoprophylaxis throughout the influenza season or during peak influenza activity might be appropriate for persons at high risk who should not be vaccinated. Influenza vaccine may be contraindicated in persons who have severe anaphylactic hypersensitivity to egg protein or other vaccine components.

# Other Persons

Amantadine or rimantadine also can be administered prophylactically to anyone who wishes to avoid influenza A illness. The health-care provider and patient should make this decision on an individual basis.

# Use of Antivirals as Therapy

Amantadine and rimantadine can reduce the severity and shorten the duration of influenza A illness among healthy adults when administered within 48 hours of illness onset. Whether antiviral therapy will prevent complications of influenza type A among persons at high risk is unknown. Insufficient data exist to determine the efficacy of rimantadine treatment in children; thus, rimantadine is currently approved only for prophylaxis, not treatment, in this age group.

Amantadine- and rimantadine-resistant influenza A viruses can emerge when either of these drugs is administered for treatment; amantadine-resistant strains are cross-resistant to rimantadine and vice versa. Both the frequency with which resistant viruses emerge and the extent of their transmission are unknown, but data indicate that amantadine- and rimantadine-resistant viruses are no more virulent or transmissible than amantadine- and rimantadine-sensitive viruses. The screening of naturally occurring epidemic strains of influenza type A has rarely detected amantadine- and rimantadine-resistant viruses. Resistant viruses have most frequently been isolated from persons taking one of these drugs as therapy for influenza A infection. Resistant viruses have been isolated from persons who live at home or in an institution where other residents are taking or have recently taken amantadine or rimantadine as therapy.
Persons who have influenza-like illness should avoid contact with uninfected persons as much as possible, regardless of whether they are being treated with amantadine or rimantadine. Persons who have influenza type A infection and who are treated with either drug can shed amantadine- or rimantadine-sensitive viruses early in the course of treatment but can later shed drug-resistant viruses, especially after 5-7 days of therapy. Such persons can benefit from therapy even when resistant viruses emerge; however, they also can transmit infection to other persons with whom they come in contact. Because of possible induction of amantadine or rimantadine resistance, treatment of persons who have influenza-like illness should be discontinued as soon as clinically warranted, generally after 3-5 days of treatment or within 24-48 hours after the disappearance of signs and symptoms. Laboratory isolation of influenza viruses obtained from persons who are receiving amantadine or rimantadine should be reported to CDC through state health departments, and the isolates should be sent to CDC for antiviral sensitivity testing.

# Outbreak Control in Institutions

When confirmed or suspected outbreaks of influenza A occur in institutions that house persons at high risk, chemoprophylaxis should be started as early as possible to reduce the spread of the virus. Contingency planning is needed to ensure rapid administration of amantadine or rimantadine to residents. This planning should include preapproved medication orders or plans to obtain physicians' orders on short notice. When amantadine or rimantadine is used for outbreak control, the drug should be administered to all residents of the institution, regardless of whether they received influenza vaccine the previous fall. The drug should be continued for at least 2 weeks or until approximately 1 week after the end of the outbreak. The dose for each resident should be determined after consulting the dosage recommendations and precautions (see Considerations for Selecting Amantadine or Rimantadine for Chemoprophylaxis or Treatment) and the manufacturer's package insert.

To reduce the spread of virus and to minimize disruption of patient care, chemoprophylaxis also can be offered to unvaccinated staff who provide care to persons at high risk. Prophylaxis should be considered for all employees, regardless of their vaccination status, if the outbreak is caused by a variant strain of influenza A that is not controlled by the vaccine.

Chemoprophylaxis also may be considered for controlling influenza A outbreaks in other closed or semi-closed settings (e.g., dormitories or other settings where persons live in close proximity). To reduce the spread of infection and the chances of prophylaxis failure due to transmission of drug-resistant virus, measures should be taken to reduce contact as much as possible between persons taking the drug for chemoprophylaxis and those taking it for treatment.

# CONSIDERATIONS FOR SELECTING AMANTADINE OR RIMANTADINE FOR CHEMOPROPHYLAXIS OR TREATMENT

# Side Effects/Toxicity

Despite the similarities between the two drugs, amantadine and rimantadine differ in their pharmacokinetic properties. More than 90% of amantadine is excreted unchanged, whereas approximately 75% of rimantadine is metabolized by the liver. However, both drugs and their metabolites are excreted by the kidney. The pharmacokinetic differences between amantadine and rimantadine might explain differences in side effects.
Although both drugs can cause CNS and gastrointestinal side effects when administered to young, healthy adults at equivalent dosages of 200 mg/day, the incidence of CNS side effects (e.g., nervousness, anxiety, difficulty concentrating, and lightheadedness) is higher among persons taking amantadine than among those taking rimantadine. In a 6-week study of prophylaxis in healthy adults, approximately 6% of participants taking rimantadine at a dose of 200 mg/day experienced at least one CNS symptom, compared with approximately 14% of those taking the same dose of amantadine and 4% of those taking placebo. The incidence of gastrointestinal side effects (e.g., nausea and anorexia) is approximately 3% among persons taking either drug, compared with 1%-2% among persons receiving placebo. Side effects associated with both drugs are usually mild and cease soon after the drug is discontinued. Side effects can diminish or disappear after the first week despite continued drug ingestion.

However, serious side effects have been observed (e.g., marked behavioral changes, delirium, hallucinations, agitation, and seizures). These more severe side effects have been associated with high plasma drug concentrations and have been observed most often among persons who have renal insufficiency, seizure disorders, or certain psychiatric disorders and among elderly persons who have been taking amantadine as prophylaxis at a dose of 200 mg/day. Clinical observations and studies have indicated that lowering the dosage of amantadine among these persons reduces the incidence and severity of such side effects, and recommendations for reduced dosages for these groups of patients have been made. Because rimantadine has only recently been approved for marketing, its safety in certain patient populations (e.g., chronically ill and elderly persons) has been evaluated less frequently; clinical trials of rimantadine have more commonly involved young, healthy persons.

Providers should review the package insert before using amantadine or rimantadine for any patient. The patient's age, weight, and renal function; the presence of other medical conditions; the indications for use of amantadine or rimantadine (i.e., prophylaxis or therapy); and the potential for interaction with other medications must be considered, and the dosage and duration of treatment must be adjusted appropriately. Modifications in dosage might be required for persons who have impaired renal or hepatic function, elderly persons, children, and persons with a history of seizures. The following are guidelines for the use of amantadine and rimantadine in certain patient populations; dosage recommendations are also summarized (Table 2).

# Persons Who Have Impaired Renal Function

# Amantadine

Amantadine is excreted unchanged in the urine by glomerular filtration and tubular secretion. Thus, renal clearance of amantadine is reduced substantially in persons with renal insufficiency. A reduction in dosage is recommended for patients with creatinine clearance ≤50 mL/min. Guidelines for amantadine dosage based on creatinine clearance are found in the package insert.
However, because recommended dosages based on creatinine clearance might provide only an approximation of the optimal dose for a given patient, such persons should be observed carefully so that adverse reactions can be recognized promptly and either the dose can be further reduced or the drug can be discontinued, if necessary.

# Rimantadine

The safety and pharmacokinetics of rimantadine among patients with renal insufficiency have been evaluated only after single-dose administration. Further studies are needed to determine the multiple-dose pharmacokinetics and the most appropriate dosages for these patients. In a single-dose study of patients with anuric renal failure, the apparent clearance of rimantadine was approximately 40% lower, and the elimination half-life was approximately 1.6-fold greater, than in healthy controls of the same age. Hemodialysis did not contribute to drug clearance. In studies among persons with less severe renal disease, drug clearance was also reduced, and plasma concentrations were higher compared with control patients without renal disease who were the same weight, age, and sex. A reduction in dosage to 100 mg/day is recommended for persons with creatinine clearance ≤10 mL/min. Because of the potential for accumulation of rimantadine and its metabolites, patients with any degree of renal insufficiency, including elderly persons, should be monitored for adverse effects, and either the dosage should be reduced or the drug should be discontinued, if necessary.

# Persons ≥65 Years of Age

# Amantadine

Because renal function declines with increasing age, the daily dose for persons ≥65 years of age should not exceed 100 mg for prophylaxis or treatment. For some elderly persons, the dose should be further reduced. Studies suggest that because of their smaller average body size, elderly women are more likely than elderly men to experience side effects at a daily dose of 100 mg.

# Rimantadine

The incidence and severity of CNS side effects among elderly persons appear to be substantially lower among those taking rimantadine at a dose of 200 mg/day than among elderly persons taking the same dose of amantadine. However, chronically ill elderly persons administered rimantadine at a dosage of 200 mg/day have had a higher incidence of CNS and gastrointestinal symptoms than healthy, younger persons taking the same dosage. After long-term administration of rimantadine at a dosage of 200 mg/day, serum rimantadine concentrations among elderly nursing-home residents have been two to four times greater than those reported in younger adults.

The dosage of rimantadine should be reduced to 100 mg/day for treatment or prophylaxis of elderly nursing-home residents. Although further studies are needed to determine the optimal dose for other elderly persons, a reduction in dosage to 100 mg/day should be considered for all persons ≥65 years of age if they experience signs and symptoms that might represent side effects when taking a dosage of 200 mg/day.

# Persons Who Have Liver Disease

# Amantadine

No increase in adverse reactions to amantadine has been observed among persons with liver disease.

# Rimantadine

The safety and pharmacokinetics of rimantadine among persons with liver disease have been evaluated only after single-dose administration. In a study of persons with chronic liver disease (most with stabilized cirrhosis), no alterations in pharmacokinetics were observed after a single dose. However, in persons with severe liver dysfunction, the apparent clearance of rimantadine was 50% lower than that reported for persons without liver disease.
A dose reduction to 100 mg/day is recommended for persons with severe hepatic dysfunction.

# Persons Who Have Seizure Disorders

# Amantadine

An increased incidence of seizures has been reported in patients with a history of seizure disorders who have received amantadine. Patients with seizure disorders should be observed closely for possible increased seizure activity when taking amantadine.

# Rimantadine

In clinical trials, seizures (or seizure-like activity) have been observed in a few persons with a history of seizures who were not receiving anticonvulsant medication while taking rimantadine. The extent to which rimantadine might increase the incidence of seizures among persons with seizure disorders has not been adequately evaluated, because such persons have usually been excluded from clinical trials of rimantadine.

# Children

# Amantadine

The use of amantadine in children <1 year of age has not been adequately evaluated. The FDA-approved dosage for children 1-9 years of age is 4.4-8.8 mg/kg/day, not to exceed 150 mg/day. Although further studies to determine the optimal dosage for children are needed, physicians should consider prescribing only 5 mg/kg/day (not to exceed 150 mg/day) to reduce the risk for toxicity. The approved dosage for children ≥10 years of age is 200 mg/day; however, for children weighing <40 kg, prescribing 5 mg/kg/day, regardless of age, is advisable.

# Rimantadine

The use of rimantadine in children <1 year of age has not been adequately evaluated. In children 1-9 years of age, rimantadine should be administered in one or two divided doses at a dosage of 5 mg/kg/day, not to exceed 150 mg/day. The approved dosage for children ≥10 years of age is 200 mg/day (100 mg twice a day); however, for children weighing <40 kg, prescribing 5 mg/kg/day, regardless of age, also is recommended.
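The weight-based pediatric rule above reduces to simple arithmetic. The following sketch is illustrative only; the helper function is an assumption for illustration, and actual prescribing decisions must follow the package insert and the other considerations described in this section.

```python
# A minimal sketch of the weight-based pediatric dosing rule quoted above:
# 5 mg/kg/day (not to exceed 150 mg/day) for children 1-9 years of age;
# 5 mg/kg/day for children >=10 years of age weighing <40 kg; otherwise
# 200 mg/day. Illustrative only; not a substitute for the package insert.

def daily_dose_mg(age_years: float, weight_kg: float) -> float:
    """Daily amantadine/rimantadine dose (mg) per the rule quoted above."""
    if age_years < 1:
        raise ValueError("Use in children <1 year of age has not been adequately evaluated")
    if age_years < 10:
        return min(5.0 * weight_kg, 150.0)  # 5 mg/kg/day, capped at 150 mg/day
    if weight_kg < 40:
        return 5.0 * weight_kg              # 5 mg/kg/day regardless of age
    return 200.0

print(daily_dose_mg(4, 18))   # 90.0  (5 mg/kg x 18 kg)
print(daily_dose_mg(8, 35))   # 150.0 (cap applies: 5 x 35 = 175 > 150)
print(daily_dose_mg(12, 38))  # 190.0 (<40 kg, so 5 mg/kg applies)
print(daily_dose_mg(12, 45))  # 200.0
```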
# Drug Interactions

# Amantadine

Careful observation is advised when amantadine is administered concurrently with drugs that affect the CNS, especially CNS stimulants.

# Rimantadine

No clinically significant interactions between rimantadine and other drugs have been identified. For more detailed information concerning potential drug interactions for either drug, the package insert should be consulted.

# SOURCES OF INFORMATION ON INFLUENZA-CONTROL PROGRAMS

Information regarding influenza surveillance is available through the CDC Voice Information System (influenza update), telephone (404) 332-4551, or through the CDC Information Service on the Public Health Network electronic bulletin board. From October through May, the information is updated at least every other week. In addition, periodic updates about influenza are published in the weekly MMWR. State and local health departments should be consulted regarding availability of influenza vaccine, access to vaccination programs, and information about state or local influenza activity.
Rapid, readily available, and accurate diagnosis of rabies is the keystone of prevention. All surveillance activity and description of the complex epizootiologic characteristics of rabies in the United States are based on laboratory diagnosis. Veterinarians are the first line of defense against rabies when responding to clinically ill animals. Rabies is an important consideration when an animal has compatible clinical signs because of the diverse potential sources provided by wildlife reservoirs, such as raccoons, skunks, foxes, and bats. Historically, the most imminent threat of rabies exposure to veterinarians and clients originated from domestic dogs. However, rabies in canids has been nearly eliminated throughout most of the United States via vaccination and control of stray dogs. At present, the greatest threat of rabies is among domestic cats, cows, horses, and captive wild animals. Infection with rabies among these animals poses unique risks for exposure of multiple persons, often because rabies is considered late in the clinical course or during postmortem examination (2-5).

The standard diagnostic test consists of direct fluorescent antibody (FA) testing of impressions made from fresh brain samples (ie, cerebellum, hippocampus, and brain stem). Fresh brain tissue may not be routinely collected, and this prevents diagnosis with the FA test. If necropsy has been performed, formalin-fixed tissues may be the only available samples. Experimental diagnostic techniques may need to be applied, such as the direct FA test on formalin-fixed material, immunohistochemistry, or polymerase chain reaction assay on paraffin-embedded tissue (6).

Accurate diagnostic capacity and appropriate management of biting animals are essential to the proper handling of potentially exposed animals. In conjunction with local or state health authorities, practicing veterinarians are often directly involved in the 10-day confinement and observation of biting animals or the euthanasia and submission of brain material for testing. They are also routinely consulted in the management of exposed animals; this consists of a 6-month quarantine or the euthanasia of naive animals and a booster vaccination for previously vaccinated animals. A reasonable index of suspicion of rabies among animals with neurologic signs of disease, the preservation of appropriate fresh brain tissue, and demonstrably proficient diagnostic laboratories are essential to the appropriate treatment of potentially exposed humans, as well as identification of at-risk animals for consideration of increased vaccination coverage.

A number of recent cases of rabies in human beings in the United States have been diagnosed either retrospectively or well into the clinical course of the disease, despite compatible clinical observations (7,8). This lack of early recognition has led to the administration of numerous postexposure prophylaxes (PEP) in hospital staff and family members exposed to the patient. Additionally, the recent identification of a rabies virus variant associated with silver-haired (Lasionycteris noctivagans) and eastern pipistrelle (Pipistrellus subflavus) bats in most of these cases is typically associated with a history in which the patient did not report or recognize a bite (9-11). Awareness needs to be heightened among physicians of the possibility of rabies in clinically compatible cases (eg, viral encephalitis of unknown cause) where a bite history may be lacking.
Physicians and public health officers need to be cognizant of clinical signs and exposure history leading to a suspicion of rabies in human beings; they must also be aware of techniques for appropriate sample collection for antemortem diagnosis and clinical implications of test results.

# Laboratory Capacity for Prompt Testing of Potentially Rabid Animals

Rapid and accurate laboratory testing allows a physician to initiate timely PEP in a human being who has had contact with a rabid animal. Equally important, the knowledge that an animal is not rabid rules out the necessity for expensive and extended treatment (eg, prophylaxis typically extends >1 month). Characterization of people receiving PEP would elucidate the critical role of rapid, reliable, available diagnosis in the prevention of unnecessary medical treatment. If laboratory diagnosis is not available within a reasonable period, PEP is often initiated and then discontinued after negative laboratory results are obtained. In addition, rapid testing allows the initiation of appropriate quarantine and booster vaccinations for exposed domestic animals. General recommendations for the testing of animals suspected of rabies, as provided by the annual Compendium of Animal Rabies Control (12), should be followed.

Recommendations: Timely ad hoc evaluation of specimens, even after business hours and on weekends, is encouraged. Adequate staffing by trained personnel for routine and emergency testing of suspect animals is a basic requirement for those public health laboratories performing diagnosis of rabies. Sufficient staffing enables laboratories to provide diagnostic testing on a routine basis and in emergency situations, such as when a human being is bitten, when the biting animal is suspected of having rabies, and when a physician is awaiting test results to provide PEP.

# Precision in Diagnostic Performance of the Direct Fluorescent Antibody Test

Adherence to established technique among laboratories performing the FA test is essential for accurate and reliable diagnosis of rabies. When performed properly, no other laboratory test for the diagnosis of rabies is as simple, sensitive, specific, and rapid as the direct FA test performed on fresh brain tissue (13). However, a recent survey of laboratories performing analysis of specimens with public health implications revealed an array of divergent modifications to virtually every step of the FA test (14).

Recommendations: Guidelines for validation and verification of procedural FA test modifications should be compiled. Statistically meaningful validations of the effect of any proposed modifications on the sensitivity and specificity of diagnosis of rabies should be required. National proficiency testing for diagnosis of rabies should be strongly supported, and all state and local laboratories should be encouraged to participate. Training courses should be conducted by the Centers for Disease Control and Prevention (CDC) in association with the National Laboratory Training Network and other experts, consisting of lectures and routine laboratory sessions (eg, annually or biannually). The CDC should coordinate efforts to identify and promote aspects of the FA test that would provide a minimum national standard of diagnosis of rabies. This assessment should focus on equipment, procedures, training of personnel, and the number of tests performed annually by state and local laboratories.
Professional associations, such as the Association of State and Territorial Public Health Laboratory Directors (currently the Association of Public Health Laboratories [APHL]) and the American Public Health Association, should promote the importance of consistent methods for diagnosis of rabies among their members. Confirmatory methods (eg, virus isolation) for evaluating the sensitivity and specificity of a direct FA test should be available to and used by every laboratory performing rabies testing, with partnering as necessary when a laboratory does not have the capacity for an in-house confirmatory test. Alternative tests for diagnosis of rabies should be evaluated by use of the FA test and virus isolation as standards, when such tests become available. This evaluation should include tests of fixed tissue from animals for which fresh tissue is unavailable.

# Status of Rabies Virus Diagnostic Reagents

The FA test is the method of choice used by public health laboratories (14) for routine diagnosis of rabies. At one time, the CDC and many state facilities produced diagnostic conjugates. Gradually, commercially available products met the widespread need for reliable reagents. In the past decade, however, great concern has evolved regarding ongoing problems with availability and variations in the quality of commercial diagnostic reagents and a decreasing number of commercial producers. Given the reemergence of rabies in the United States, high-quality and reliably available reagents are a fundamental requisite for diagnosis and prevention of rabies.

Recommendations: A reference reagent for quality control comparisons to commercial lots of diagnostic conjugates should be produced and maintained in adequate quantities at the CDC or at designated reference laboratories for the FA test. The CDC or designated reference laboratories should ensure that these reagents are readily available for diagnosis of rabies.

# Discussion of Current Laboratory Issues and Dissemination of Information

Given the complex epizootic characteristics of rabies in the United States (15), communication among individuals at diagnostic laboratories is critical. For example, early detection of unusual epizootiologic patterns may reveal problems with animal translocation or an emerging rabies virus variant, as revealed by the emergence of rabies in raccoons and coyotes. In addition, effective communication is essential for timely warnings of issues that require problem solving (eg, identification of a poor-quality diagnostic reagent) and for discussion and updates of current methods.

Recommendations: An annual or biannual workshop on rabies should be held in conjunction with the American Society for Microbiology. The workshop format should include lectures and discussion of laboratory issues in the diagnosis of rabies. In addition, at least one annual or biannual bench training workshop should be held at different sites in the United States. This training should be open to individuals from all state and local laboratories and should cover all aspects of diagnosis of rabies. Emphasis at this workshop should be on the exchange of ideas between participants. Within each region, a reference laboratory could be identified by agreement among states in the region with the cooperation of the CDC. Each state laboratory should be encouraged to provide training with the support of a reference laboratory and the CDC.
A forum (eg, a moderated computer bulletin board or listserver) should be established for informal exchange between individuals at diagnostic laboratories. Membership in this forum would be by subscription.

# National and Regional Capability for Typing Rabies Strains

Modern virologic techniques have been essential in clarifying the role of divergent rabies reservoirs in wildlife in the United States. Although sophisticated, these tools are increasingly becoming standard diagnostic procedure because of their capacity to elucidate characteristics of rabies. A clearer understanding of epizootiologic patterns will facilitate the formulation of appropriate public health information and policy for prevention and control of rabies. Support for the continued transfer of reagents and expertise from the CDC to state diagnostic laboratories is critically needed with regard to advanced diagnosis of rabies beyond the routine FA test, such as monoclonal antibody analysis and polymerase chain reaction-based typing methods.

Recommendations: A strong commitment should be made to expand resources and to continue a leadership role for the national rabies laboratory at the CDC. The CDC should be encouraged to maintain an experienced laboratory staff familiar with routine diagnostic procedures, to develop new technologies, and to facilitate practice of these technologies at appropriate federal, regional, and state agencies. Proposed reference laboratories should be identified and supported by the CDC and the APHL to provide confirmation of diagnostic tests, serologic testing, and virus typing. A format (eg, a limited-access Web site) should be established for the routine sharing of information among any reference laboratories and the national laboratory at the CDC.

At the present time, instructions to professionals with regard to samples for rabies testing in human beings have been made more widely available through postings in a professional information section on the CDC rabies Web page. A videotape on the proper removal of animal brains for diagnosis of rabies has been produced and distributed to state health laboratories. A comprehensive national rabies laboratory training session was held in January 1998 in San Antonio, Tex; another event is in the planning stages for 2000. Unfortunately, limitations in the travel funding of laboratorians preclude the participation of some individuals. An additional monoclonal antibody preparation has become available, but the only commercial polyclonal rabies diagnostic reagent continues to vary in availability and quality. Although training by region has advanced, the need for additional transfer of viral typing technology to state laboratories remains high.

# Preview of Article III

The third and final article in this series will be published in the Dec 1, 1999 issue of JAVMA and will review the incidence of rabies in wildlife; it will conclude with recommendations for control of wildlife rabies.
Rapid, readily available, and accurate diagnosis of rabies is the keystone of prevention. All surveillance activity and description of the complex epizootiologic characteristics of rabies in the United States are based on laboratory diagnosis. Veterinarians are the first line of defense against rabies when responding to clinically ill animals. Rabies is an important consideration when an animal has compatible clinical signs because of the diverse potential sources provided by wildlife reservoirs, such as raccoons, skunks, foxes, and bats. Historically, the most imminent threat of rabies exposure to veterinarians and clients originated from domestic dogs. However, rabies in canids has been nearly eliminated throughout most of the United States via vaccination and control of stray dogs. At present, the greatest threat of rabies is among domestic cats, cows, horses, and captive wild animals. Rabies infection in these animals poses unique risks for exposure of multiple persons, often because rabies is considered late in the clinical course or during postmortem examination.2-5 The standard diagnostic test consists of direct fluorescent antibody (FA) testing of impressions made from fresh brain samples (ie, cerebellum, hippocampus, and brain stem). Fresh brain tissue may not be routinely collected, and this prevents diagnosis with the FA test. If necropsy has been performed, formalin-fixed tissues may be the only available samples. Experimental diagnostic techniques may need to be applied, such as the direct FA test on formalin-fixed material, immunohistochemistry, or polymerase chain reaction assay on paraffin-embedded tissue.6 Accurate diagnostic capacity and appropriate management of biting animals are essential to the proper handling of potentially exposed animals. In conjunction with local or state health authorities, practicing veterinarians are often directly involved in the 10-day confinement and observation of biting animals or the euthanasia and submission of brain material for testing. They are also routinely consulted in the management of exposed animals; this consists of a 6-month quarantine or euthanasia for naive animals and a booster vaccination for previously vaccinated animals. A reasonable index of suspicion of rabies among animals with neurologic signs of disease, the preservation of appropriate fresh brain tissue, and demonstrably proficient diagnostic laboratories are essential to the appropriate treatment of potentially exposed humans, as well as to the identification of at-risk animals for consideration of increased vaccination coverage. A number of recent cases of rabies in human beings in the United States have been diagnosed either retrospectively or well into the clinical course of the disease, despite compatible clinical observations. This lack of early recognition has led to the administration of postexposure prophylaxis (PEP) to numerous hospital staff and family members exposed to the patient. Additionally, most of these cases have involved a recently identified rabies virus variant associated with silver-haired (Lasionycteris noctivagans) and eastern pipistrelle (Pipistrellus subflavus) bats, typically with a history in which the patient did not report or recognize a bite.9-11 Awareness of the possibility of rabies in clinically compatible cases (eg, viral encephalitis of unknown cause) in which a bite history may be lacking needs to be heightened among physicians.
Physicians and public health officers need to be cognizant of the clinical signs and exposure history that lead to a suspicion of rabies in human beings; they must also be aware of techniques for appropriate sample collection for antemortem diagnosis and of the clinical implications of test results. # Laboratory Capacity for Prompt Testing of Potentially Rabid Animals Rapid and accurate laboratory testing allows a physician to initiate timely PEP in a human being who has had contact with a rabid animal. Equally important, the knowledge that an animal is not rabid rules out the necessity for expensive and extended treatment (eg, prophylaxis typically extends >1 month). Characterization of people receiving PEP would elucidate the critical role of rapid, reliable, and available diagnosis in preventing unnecessary medical treatment. If laboratory diagnosis is not available within a reasonable period, PEP is often initiated and then discontinued after negative laboratory results are obtained. In addition, rapid testing allows the initiation of appropriate quarantine and booster vaccinations for exposed domestic animals. General recommendations for the testing of animals suspected of rabies, as provided by the annual Compendium of Animal Rabies Control,12 should be followed. Recommendations-Timely ad hoc evaluation of specimens, even after business hours and on weekends, is encouraged. Adequate staffing by trained personnel for routine and emergency testing of suspect animals is a basic requirement for public health laboratories performing diagnosis of rabies. Sufficient staffing enables laboratories to provide diagnostic testing on a routine basis and in emergency situations, such as when a human being is bitten, when the biting animal is suspected of having rabies, and when a physician is awaiting test results to provide PEP. # Precision in Diagnostic Performance of the Direct Fluorescent Antibody Test Adherence to established technique among laboratories performing the FA test is essential for accurate and reliable diagnosis of rabies. When performed properly, no other laboratory test for the diagnosis of rabies is as simple, sensitive, specific, and rapid as the direct FA test performed on fresh brain tissue.13 However, a recent survey of laboratories performing analysis of specimens with public health implications revealed an array of divergent modifications to virtually every step of the FA test.14 Recommendations-Guidelines for validation and verification of procedural FA test modifications should be compiled. Statistically meaningful validations of the effect of any proposed modifications on the sensitivity and specificity of diagnosis of rabies should be required. National proficiency testing for diagnosis of rabies should be strongly supported, and all state and local laboratories should be encouraged to participate. Training courses should be conducted by the Centers for Disease Control and Prevention (CDC) in association with the National Laboratory Training Network and other experts, consisting of lectures and routine laboratory sessions (eg, annually or biannually). The CDC should coordinate efforts to identify and promote aspects of the FA test that would provide a minimum national standard of diagnosis of rabies. This assessment should focus on equipment, procedures, training of personnel, and the number of tests performed annually by state and local laboratories.
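To make concrete what a statistically meaningful validation of an FA test modification might report, the sketch below tabulates sensitivity and specificity against virus isolation as the gold standard, with simple normal-approximation confidence intervals. This is an illustrative sketch only; the panel counts, function names, and interval method are our assumptions, not part of the recommendations.

```python
# Minimal sketch (not from the source): scoring a modified direct FA protocol
# against virus isolation results on the same specimens. Panel counts invented.
import math

def proportion_ci(successes: int, total: int, z: float = 1.96) -> tuple:
    """Normal-approximation 95% confidence interval for a proportion."""
    p = successes / total
    half_width = z * math.sqrt(p * (1 - p) / total)
    return max(0.0, p - half_width), min(1.0, p + half_width)

def validate_fa_modification(tp: int, fp: int, tn: int, fn: int) -> dict:
    """Sensitivity/specificity of the modified test versus the gold standard."""
    sensitivity = tp / (tp + fn)   # fraction of isolation-positives detected
    specificity = tn / (tn + fp)   # fraction of isolation-negatives called negative
    return {
        "sensitivity": sensitivity,
        "sensitivity_95ci": proportion_ci(tp, tp + fn),
        "specificity": specificity,
        "specificity_95ci": proportion_ci(tn, tn + fp),
    }

# Hypothetical panel: 120 isolation-positive and 200 isolation-negative brains.
print(validate_fa_modification(tp=118, fp=1, tn=199, fn=2))
```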
Professional associations, such as the Association of State and Territorial Public Health Laboratory Directors (currently the Association of Public Health Laboratories [APHL]) and the American Public Health Association, should promote the importance of consistent methods for diagnosis of rabies among their members. Confirmatory methods (eg, virus isolation) for evaluating the sensitivity and specificity of a direct FA test should be available to and used by every laboratory performing rabies testing, with partnering as necessary when a laboratory does not have the capacity for an in-house confirmatory test. As alternative tests for diagnosis of rabies become available, they should be evaluated by use of the FA test and virus isolation as standards. This evaluation should include tests of fixed tissue from animals for which fresh tissue is unavailable. # Status of Rabies Virus Diagnostic Reagents The FA test is the method of choice used by public health laboratories14 for routine diagnosis of rabies. At one time, the CDC and many state facilities produced diagnostic conjugates. Gradually, commercially available products met the widespread need for reliable reagents. In the past decade, however, concern has grown regarding ongoing problems with the availability and quality of commercial diagnostic reagents and the decreasing number of commercial producers. Given the reemergence of rabies in the United States, high-quality, reliably available reagents are a fundamental requisite for diagnosis and prevention of rabies. Recommendations-A reference reagent for quality-control comparisons to commercial lots of diagnostic conjugates should be produced and maintained in adequate quantities at the CDC or at designated reference laboratories for the FA test. The CDC or designated reference laboratories should ensure that these reagents are readily available for diagnosis of rabies. # Discussion of Current Laboratory Issues and Dissemination of Information Given the complex epizootic characteristics of rabies in the United States,15 communication among individuals at diagnostic laboratories is critical. For example, early detection of unusual epizootiologic patterns may reveal problems with animal translocation or an emerging rabies virus variant, as demonstrated by the emergence of rabies in raccoons and coyotes. In addition, effective communication is essential for timely warnings of issues that require problem solving (eg, identification of a poor-quality diagnostic reagent) and for discussion and updates of current methods. Recommendations-An annual or biannual workshop on rabies should be held in conjunction with the American Society for Microbiology. The workshop format should include lectures and discussion of laboratory issues in the diagnosis of rabies. In addition, a bench training workshop should be held annually or biannually at different sites in the United States. This training should be open to individuals from all state and local laboratories and should cover all aspects of diagnosis of rabies. Emphasis at this workshop should be on the exchange of ideas among participants. Within each region, a reference laboratory could be identified by agreement among states in the region with the cooperation of the CDC. Each state laboratory should be encouraged to provide training with the support of a reference laboratory and the CDC.
A forum (eg, a moderated computer bulletin board or listserver) should be established for informal exchange between individuals at diagnostic laboratories. Membership in this forum would be by subscription. # National and Regional Capability for Typing Rabies Strains Modern virologic techniques have been essential in clarifying the role of divergent rabies reservoirs in wildlife in the United States. Although sophisticated, these tools are increasingly becoming standard diagnostic procedure because of their capacity to elucidate characteristics of rabies. A clearer understanding of epizootiologic patterns will facilitate the formulation of appropriate public health information and policy for prevention and control of rabies. Continued transfer of reagents and expertise from the CDC to state diagnostic laboratories is critically needed for advanced diagnosis of rabies beyond the routine FA test, such as monoclonal antibody analysis and polymerase chain reaction-based typing methods. Recommendations-A strong commitment should be made to expand resources and to continue a leadership role for the national rabies laboratory at the CDC. The CDC should be encouraged to maintain an experienced laboratory staff familiar with routine diagnostic procedures, to develop new technologies, and to facilitate practice of these technologies at appropriate federal, regional, and state agencies. Proposed reference laboratories should be identified and supported by the CDC and the APHL to provide confirmation of diagnostic tests, serologic testing, and virus typing. A format (eg, a limited-access Web site) should be established for the routine sharing of information among the reference laboratories and the national laboratory at the CDC. At present, instructions to professionals with regard to samples for rabies testing in human beings have been made more widely available through postings in a professional information section on the CDC rabies Web page (http://www.cdc.gov/ncidod/dvrd/rabies). A videotape on the proper removal of animal brains for diagnosis of rabies has been produced and distributed to state health laboratories. A comprehensive national rabies laboratory training session was held in January 1998 in San Antonio, Tex; another event is in the planning stages for 2000. Unfortunately, limitations in travel funding for laboratorians preclude the participation of some individuals. An additional monoclonal antibody preparation has become available, but the only commercial polyclonal rabies diagnostic reagent continues to vary in availability and quality. Although training by region has advanced, the need for additional transfer of viral typing technology to state laboratories remains high. # Preview of Article III The third and final article in this series will be published in the Dec 1, 1999 issue of JAVMA and will review the incidence of rabies in wildlife; it will conclude with recommendations for control of wildlife rabies.
These recommendations update information on the vaccine and antiviral agents available for controlling influenza during the 1992-1993 influenza season (superseding MMWR 1991;40(No. RR-6):1-15). The primary changes include statements about vaccination of persons with known hypersensitivity to eggs or other components of the influenza vaccine, the optimal timing of influenza vaccination, and the influenza strains in the trivalent vaccine for 1992-1993. # INTRODUCTION Influenza A viruses are classified into subtypes on the basis of two surface antigens: hemagglutinin (H) and neuraminidase (N). Three subtypes of hemagglutinin (H1, H2, H3) and two subtypes of neuraminidase (N1, N2) are recognized among influenza A viruses that have caused widespread human disease. Immunity to these antigens -- especially to the hemagglutinin -- reduces the likelihood of infection and lessens the severity of disease if infection occurs. Infection with a virus of one subtype confers little or no protection against viruses of other subtypes. Furthermore, over time, antigenic variation (antigenic drift) within a subtype may be so marked that infection or vaccination with one strain may not induce immunity to distantly related strains of the same subtype. Although influenza B viruses have shown more antigenic stability than influenza A viruses, antigenic variation does occur. For these reasons, major epidemics of respiratory disease caused by new variants of influenza continue to occur. The antigenic characteristics of strains currently circulating provide the basis for selecting virus strains to include in each year's vaccine. Typical influenza illness is characterized by abrupt onset of fever, myalgia, sore throat, and nonproductive cough. Unlike other common respiratory infections, influenza can cause severe malaise lasting several days. More severe illness can result if primary influenza pneumonia or secondary bacterial pneumonia occurs. During influenza epidemics, high attack rates of acute illness result in increased numbers of visits to physicians' offices, walk-in clinics, and emergency rooms and increased hospitalizations for management of lower-respiratory-tract complications. Elderly persons and persons with underlying health problems are at increased risk for complications of influenza infection. If infected, such high-risk persons (listed as "groups at increased risk for influenza-related complications" under Target Groups for Special Vaccination Programs) are more likely than the general population to require hospitalization. During major epidemics, hospitalization rates for high-risk persons may increase 2- to 5-fold, depending on the age group. Previously healthy children and younger adults may also require hospitalization for influenza-related complications, but the relative increase in their hospitalization rates is less than for persons who belong to high-risk groups. An increase in mortality further indicates the impact of influenza epidemics. Increased mortality results not only from influenza and pneumonia but also from cardiopulmonary and other chronic diseases that can be exacerbated by influenza infection. It is estimated that more than 10,000 excess deaths occurred during each of seven different U.S. epidemics in the period 1977-1988, and more than 40,000 excess deaths occurred during each of two of these epidemics. Approximately 80%-90% of the deaths attributed to pneumonia and influenza occurred among persons greater than or equal to 65 years of age.
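For readers unfamiliar with the term, "excess deaths" are, conceptually, observed pneumonia-and-influenza deaths minus an expected non-epidemic baseline, summed over the epidemic period. A toy illustration with entirely invented weekly counts (not data from the estimates cited above):

```python
# Toy illustration of the "excess deaths" concept: observed weekly
# pneumonia-and-influenza deaths minus an expected baseline, summed over
# the epidemic weeks. All numbers below are invented for illustration.

baseline_weekly_deaths = 1500          # hypothetical expected P&I deaths/week
observed_epidemic_weeks = [1600, 1900, 2400, 2600, 2300, 1800, 1550]

excess = sum(max(0, observed - baseline_weekly_deaths)
             for observed in observed_epidemic_weeks)
print(f"Estimated excess deaths for this epidemic: {excess}")
```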
Because the proportion of elderly persons in the U.S. population is increasing and because age and its associated chronic diseases are risk factors for severe influenza illness, the toll from influenza can be expected to increase unless control measures are administered more vigorously. The number of younger persons at increased risk for influenza-related complications is also increasing for various reasons, such as the success of neonatal intensive care units, better management of diseases such as cystic fibrosis and acquired immunodeficiency syndrome (AIDS), and better survival rates for organ-transplant recipients. # OPTIONS FOR THE CONTROL OF INFLUENZA In the United States, two measures are available that can reduce the impact of influenza: immunoprophylaxis with inactivated (killed-virus) vaccine and chemoprophylaxis or therapy with an influenza-specific antiviral drug (e.g., amantadine). Vaccination of high-risk persons each year before the influenza season is currently the most effective measure for reducing the impact of influenza. Vaccination can be highly cost-effective when a) it is directed at persons who are most likely to experience complications or who are at increased risk for exposure, and b) it is administered to high-risk persons during hospitalization or a routine health-care visit before the influenza season, thus making special visits to physicians' offices or clinics unnecessary. Recent reports indicate that -- when vaccine and epidemic strains of virus are well matched -- achieving high vaccination rates among closed populations can reduce the risk of outbreaks by inducing herd immunity. Other indications for vaccination include the strong desire of any person to avoid influenza infection, reduce the severity of disease, or reduce the chance of transmitting influenza to high-risk persons with whom the individual has frequent contact. The antiviral agent available for use at this time (amantadine hydrochloride) is effective only against influenza A and, for maximum effectiveness as prophylaxis, must be administered throughout the period of risk. When administered as either prophylaxis or therapy, the potential effectiveness of amantadine must be balanced against potential side effects. Chemoprophylaxis is not a substitute for vaccination. Recommendations for chemoprophylaxis are provided primarily to help health-care providers make decisions regarding persons who are at greatest risk of severe illness and complications if infected with an influenza A virus. Use of amantadine may be considered a) as a control measure when influenza A outbreaks occur in institutions housing high-risk persons, both for treatment of ill individuals and as prophylaxis for others; b) as short-term prophylaxis after late vaccination of high-risk persons (i.e., when influenza A infections are already occurring in the community) during the period when immunity is developing in response to vaccination; c) as seasonal prophylaxis for persons for whom vaccination is contraindicated; d) as seasonal prophylaxis for immunocompromised persons who may not produce protective levels of antibody in response to vaccination; and e) as prophylaxis for unvaccinated health-care workers and household contacts who care for high-risk persons, either for the duration of influenza activity in the community or until immunity develops after vaccination. Amantadine is also approved for use by any person who wishes to reduce his or her chances of becoming ill with influenza A.
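The lettered indications for amantadine use, a) through e), amount to a small decision rule. A hypothetical sketch follows; the class, field names, and two-week threshold are our illustrative assumptions, not part of the recommendations.

```python
# Hypothetical sketch of the amantadine-use indications (a)-(e) listed above.
# Structure and field names are illustrative, not part of the recommendations.
from dataclasses import dataclass

@dataclass
class Person:
    high_risk: bool = False
    vaccinated: bool = False
    weeks_since_vaccination: int = 0
    vaccine_contraindicated: bool = False
    immunocompromised: bool = False
    cares_for_high_risk: bool = False
    in_institution_with_outbreak: bool = False

def amantadine_indications(p: Person, community_influenza_a: bool) -> list:
    """Return the lettered indications (a-e from the text) under which
    amantadine prophylaxis could be considered for this person."""
    reasons = []
    if p.in_institution_with_outbreak:
        reasons.append("a: outbreak control in an institution housing high-risk persons")
    if (p.high_risk and p.vaccinated and community_influenza_a
            and p.weeks_since_vaccination < 2):
        reasons.append("b: short-term cover while vaccine-induced immunity develops")
    if p.high_risk and p.vaccine_contraindicated:
        reasons.append("c: seasonal prophylaxis; vaccination contraindicated")
    if p.high_risk and p.immunocompromised:
        reasons.append("d: seasonal prophylaxis; protective antibody response unlikely")
    if p.cares_for_high_risk and not p.vaccinated:
        reasons.append("e: unvaccinated caregiver or household contact of high-risk persons")
    return reasons

# Example: an unvaccinated home-care nurse during community influenza A activity.
nurse = Person(cares_for_high_risk=True)
print(amantadine_indications(nurse, community_influenza_a=True))
```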
# INACTIVATED VACCINE FOR INFLUENZA A AND B Influenza vaccine is made from highly purified, egg-grown viruses that have been rendered noninfectious (inactivated). Therefore, the vaccine cannot cause influenza. Each year's influenza vaccine contains three virus strains (usually two type A and one type B) representing influenza viruses believed likely to circulate in the United States in the upcoming winter. The composition of the vaccine is such that it rarely causes systemic or febrile reactions. Whole-virus, subvirion, and purified-surface-antigen preparations are available. To minimize febrile reactions, only subvirion or purified-surface-antigen preparations should be used for children; any of the preparations may be used for adults. Most vaccinated children and young adults develop high postvaccination hemagglutination-inhibition antibody titers. These antibody titers are protective against infection by strains similar to those in the vaccine or the related variants that may emerge during outbreak periods. Elderly persons and persons with certain chronic diseases may develop lower postvaccination antibody titers than healthy young adults and thus may remain susceptible to influenza upper-respiratory-tract infection. Nevertheless, even if such persons develop influenza illness, the vaccine has been shown to be effective in preventing lower-respiratory-tract involvement or other complications, thereby reducing the risk of hospitalization and death. # RECOMMENDATIONS FOR USE OF INFLUENZA VACCINE Influenza vaccine is strongly recommended for any person greater than or equal to 6 months of age who -- because of age or underlying medical condition -- is at increased risk for complications of influenza. Health-care workers and others (including household members) in close contact with high-risk persons should also be vaccinated. In addition, influenza vaccine may be administered to any person who wishes to reduce the chance of becoming infected with influenza. The trivalent influenza vaccine prepared for the 1992-1993 season will include A/Texas/36/91-like (H1N1), A/Beijing/353/89-like (H3N2), and B/Panama/45/90-like hemagglutinin antigens. Recommended doses are listed in Table 1. Guidelines for the use of vaccine among different groups follow. Although the current influenza vaccine can contain one or more antigens administered in previous years, annual vaccination with the current vaccine is necessary because a person's immunity declines in the year following vaccination. Because the 1992-1993 vaccine differs from the 1991-1992 vaccine, supplies of 1991-1992 vaccine should not be administered to provide protection for the 1992-1993 influenza season. Two doses administered at least 1 month apart may be required for a satisfactory antibody response among previously unvaccinated children less than 9 years of age; however, studies with vaccines similar to those in current use have shown little or no improvement in antibody responses when a second dose is administered to adults during the same season. During the past decade, data on influenza vaccine immunogenicity and side effects have been obtained when vaccine has been administered intramuscularly. Because there has been no adequate evaluation of recent influenza vaccines administered by other routes, the intramuscular route is recommended. Adults and older children should be vaccinated in the deltoid muscle, and infants and young children in the anterolateral aspect of the thigh.
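Read as a rule, the dosing guidance above reduces to: one dose for most recipients; two doses at least 1 month apart for previously unvaccinated children less than 9 years of age; intramuscular route; injection site chosen by age. A minimal sketch, assuming a cutoff of 3 years between "infants and young children" and "older children" (the text does not specify an exact age):

```python
# Hypothetical sketch of the dosing rules described above; helper names and
# the age-3 site cutoff are our assumptions. Table 1 (not reproduced here)
# governs the actual dose volumes and preparations by age group.
def vaccine_schedule(age_years: float, previously_vaccinated: bool) -> dict:
    """Number of doses, minimum spacing, route, and injection site per the
    1992-93 guidance sketched in the text."""
    doses = 2 if (age_years < 9 and not previously_vaccinated) else 1
    site = ("anterolateral thigh" if age_years < 3   # infants/young children
            else "deltoid muscle")                   # adults and older children
    return {
        "doses": doses,
        "min_interval_between_doses": "1 month" if doses == 2 else None,
        "route": "intramuscular",
        "site": site,
    }

print(vaccine_schedule(age_years=4, previously_vaccinated=False))
```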
# TARGET GROUPS FOR SPECIAL VACCINATION PROGRAMS To maximize protection of high-risk persons, they and their close contacts should be targeted for organized vaccination programs. Groups at Increased Risk for Influenza-Related Complications: 1. Persons greater than or equal to 65 years of age. 2. Residents of nursing homes and other chronic-care facilities housing persons of any age with chronic medical conditions. 3. Adults and children with chronic disorders of the pulmonary or cardiovascular systems, including children with asthma. 4. Adults and children who have required regular medical follow-up or hospitalization during the preceding year because of chronic metabolic diseases (including diabetes mellitus), renal dysfunction, hemoglobinopathies, or immunosuppression (including immunosuppression caused by medications). 5. Children and teenagers (6 months-18 years of age) who are receiving long-term aspirin therapy, and therefore may be at risk of developing Reye syndrome after influenza. Groups That Can Transmit Influenza to High-Risk Persons: Persons who are clinically or subclinically infected and who attend or live with high-risk persons can transmit influenza virus to them. Some high-risk persons (e.g., the elderly, transplant recipients, or persons with AIDS) can have low antibody responses to influenza vaccine. Efforts to protect these high-risk persons against influenza may be improved by reducing the chances of exposure to influenza from their care providers. Therefore, the following groups should be vaccinated: 1. Physicians, nurses, and other personnel in both hospital and outpatient-care settings who have contact with high-risk persons among all age groups, including infants. 2. Employees of nursing homes and chronic-care facilities who have contact with patients or residents. 3. Providers of home care to high-risk persons (e.g., visiting nurses, volunteer workers). 4. Household members (including children) of high-risk persons. # VACCINATION OF OTHER GROUPS # General Population Physicians should administer influenza vaccine to any person who wishes to reduce the chance of acquiring influenza infection. Persons who provide essential community services may be considered for vaccination to minimize disruption of essential activities during influenza outbreaks. Similarly, students or other persons in institutional settings such as those who reside in dormitories may be considered for vaccination to minimize the disruption of routine activities during epidemics. # Pregnant Women Influenza-associated excess mortality among pregnant women has not been documented except in the pandemics of 1918-1919 and 1957-1958. However, pregnant women who have other medical conditions that increase their risks for complications from influenza should be vaccinated, as the vaccine is considered safe for pregnant women. Administering the vaccine after the first trimester is a reasonable precaution to minimize any concern over the theoretical risk of teratogenicity. However, it is undesirable to delay vaccination of pregnant women who have high-risk conditions and who will still be in the first trimester of pregnancy when the influenza season begins. # Persons Infected with HIV Little information exists regarding the frequency and severity of influenza illness among human immunodeficiency virus (HIV)-infected persons, but recent reports suggest that symptoms may be prolonged and the risk of complications increased for HIV-infected persons.
Because influenza may result in serious illness and complications, vaccination is a prudent precaution and will result in protective antibody levels in many recipients. However, the antibody response to vaccine may be low in persons with advanced HIV-related illnesses; a booster dose of vaccine has not improved the immune response for these individuals. # Foreign Travelers Increasingly, the elderly and persons with high-risk medical conditions are embarking on international travel. The risk of exposure to influenza during foreign travel varies, depending on season and destination. In the tropics, influenza can occur throughout the year; in the southern hemisphere, the season of greatest activity is April-September. Because of the short incubation period for influenza, exposure to the virus during travel can result in clinical illness that begins while traveling, an inconvenience or potential danger, especially for persons at increased risk for complications. Persons preparing to travel to the tropics at any time of year or to the southern hemisphere during April-September should review their influenza vaccination histories. If they were not vaccinated the previous fall or winter, they should consider influenza vaccination before travel. Persons among the high-risk categories should be especially encouraged to receive the most currently available vaccine. High-risk persons administered the previous season's vaccine before travel should be revaccinated in the fall or winter with the current vaccine. # PERSONS WHO SHOULD NOT BE VACCINATED Inactivated influenza vaccine should not be administered to persons known to have anaphylactic hypersensitivity to eggs or to other components of the influenza vaccine without first consulting a physician (see Side Effects and Adverse Reactions). Amantadine hydrochloride is an option for prevention of influenza A in such persons. However, persons who have a history of anaphylactic hypersensitivity to vaccine components but who are also at higher risk for complications of influenza infection may benefit from vaccine after appropriate allergy evaluation and desensitization. Specific information about vaccine components can be found in the warnings and contraindications in each manufacturer's package insert. It is usually preferable to delay vaccination of adults with acute febrile illnesses until their symptoms have abated. However, minor illnesses with or without fever should not contraindicate the use of influenza vaccine, particularly among children with a mild upper respiratory tract infection or allergic rhinitis (see American Academy of Pediatrics, The Red Book, 1991). # SIDE EFFECTS AND ADVERSE REACTIONS Because influenza vaccine contains only noninfectious viruses, it cannot cause influenza. Respiratory disease after vaccination represents coincidental illness unrelated to influenza vaccination. The most frequent side effect of vaccination is soreness at the vaccination site that lasts for up to 2 days; this is reported by less than one-third of vaccinees. In addition, two types of systemic reactions have occurred: 1. Fever, malaise, myalgia, and other systemic symptoms occur infrequently and most often affect persons who have had no prior exposure to the influenza virus antigens in the vaccine (e.g., young children). These reactions begin 6-12 hours after vaccination and can persist for 1 or 2 days.
2. Immediate -- presumably allergic -- reactions (such as hives, angioedema, allergic asthma, or systemic anaphylaxis) occur rarely after influenza vaccination. These reactions probably result from hypersensitivity to some vaccine component; the majority are most likely related to residual egg protein. Although current influenza vaccines contain only a small quantity of egg protein, this protein may induce immediate hypersensitivity reactions among persons with severe egg allergy. Persons who have developed hives, have had swelling of the lips or tongue, or have experienced acute respiratory distress or collapse after eating eggs should consult a physician for appropriate evaluation to help determine whether vaccination may proceed or should be deferred. Persons with documented immunoglobulin E (IgE)-mediated hypersensitivity to eggs -- including those who have had occupational asthma or other allergic responses from exposure to egg protein -- may also be at increased risk for reactions from influenza vaccine, and similar consultation should be considered. The protocol for influenza vaccination developed by Murphy and Strunk may be considered for patients who have egg allergies and medical conditions that place them at increased risk for influenza infection or its complications (see Murphy and Strunk, 1985). Unlike the 1976 swine influenza vaccine, subsequent vaccines prepared from other virus strains have not been clearly associated with an increased frequency of Guillain-Barré syndrome. Although influenza vaccination can inhibit the clearance of warfarin and theophylline, studies have failed to show any adverse clinical effects attributable to these drugs among patients receiving influenza vaccine. The potential exists for hypersensitivity reactions to any vaccine component. Although exposure to vaccines containing thimerosal can lead to induction of hypersensitivity, most patients do not develop reactions to thimerosal administered as a component of vaccines, even when patch or intradermal tests for thimerosal indicate hypersensitivity. When reported, hypersensitivity to thimerosal has usually consisted of local delayed-type hypersensitivity reactions. # SIMULTANEOUS ADMINISTRATION OF OTHER VACCINES, INCLUDING CHILDHOOD VACCINES The target groups for influenza and pneumococcal vaccination overlap considerably. Both vaccines can be administered at the same time at different sites without increasing side effects. However, influenza vaccine must be administered each year, whereas pneumococcal vaccine is generally administered only once to all but those at highest risk of fatal pneumococcal disease (see ACIP statement, MMWR 1989;38:64-8,73-6). Children at high risk for influenza-related complications may receive influenza vaccine at the same time as measles-mumps-rubella, Haemophilus b, pneumococcal, and oral polio vaccines. Vaccines should be administered at different sites on the body. Influenza vaccine should not be administered within 3 days of vaccination with pertussis vaccine. # TIMING OF INFLUENZA VACCINATION ACTIVITIES Beginning each September, when vaccine for the upcoming influenza season becomes available, high-risk persons who are seen by health-care providers for routine care or as a result of hospitalization should be offered influenza vaccine. Opportunities to vaccinate persons at high risk for complications of influenza should not be missed.
The optimal time for organized vaccination campaigns for high-risk persons usually is the period between mid-October and mid-November. In the United States, influenza activity generally peaks between late December and early March, and high levels of influenza activity infrequently occur in the contiguous 48 states before December. It is particularly important to avoid administering vaccine too far in advance of the influenza season in facilities such as nursing homes, because antibody levels may begin to decline within a few months of vaccination. Vaccination programs can be undertaken as soon as current vaccine is available if regional influenza activity is expected to begin earlier than December. Children less than 9 years of age who have not previously been vaccinated should receive two doses of vaccine at least a month apart to maximize the chance of a satisfactory antibody response to all three vaccine antigens. The second dose should be administered before December, if possible. Vaccine should be offered to both children and adults up to and even after influenza virus activity is documented in a community, as late as April in some years. # STRATEGIES FOR IMPLEMENTING INFLUENZA VACCINE RECOMMENDATIONS Despite the recognition that optimum medical care for both adults and children includes regular review of immunization records and administration of vaccines as appropriate, less than 30% of persons among high-risk groups receive influenza vaccine each year. More effective strategies are needed for delivering vaccine to high-risk persons, their health-care providers, and their household contacts. In general, successful vaccination programs have combined education for health-care workers, publicity and education targeted toward potential recipients, a plan for identifying (usually by medical-record review) persons at high risk, and efforts to remove administrative and financial barriers that prevent persons from receiving the vaccine. Persons for whom influenza vaccine is recommended can be identified and vaccinated in the settings described below. # Outpatient Clinics and Physicians' Offices Staff in physicians' offices, clinics, health-maintenance organizations, and employee health clinics should be instructed to identify and label the medical records of patients who should receive vaccine. Vaccine should be offered during visits beginning in September and throughout the influenza season. The offer of vaccine and its receipt or refusal should be documented in the medical record. Patients among high-risk groups who do not have regularly scheduled visits during the fall should be reminded by mail or telephone of the need for vaccine. If possible, arrangements should be made to provide vaccine with minimal waiting time and at the lowest possible cost. # Facilities Providing Episodic or Acute Care (e.g., emergency rooms, walk-in clinics) Health-care providers in these settings should be familiar with influenza vaccine recommendations. They should offer vaccine to persons among high-risk groups or should provide written information on why, where, and how to obtain the vaccine. Written information should be available in language(s) appropriate for the population served by the facility. # Nursing Homes and Other Residential Long-Term-Care Facilities Vaccination should be routinely provided to all residents of chronic-care facilities with the concurrence of attending physicians rather than by obtaining individual vaccination orders for each patient.
Consent for vaccination should be obtained from the resident or a family member at the time of admission to the facility, and all residents should be vaccinated at one time, immediately preceding the influenza season. Residents admitted during the winter months after completion of the vaccination program should be vaccinated when they are admitted. # Acute-Care Hospitals All persons greater than or equal to 65 years of age and younger persons (including children) with high-risk conditions who are hospitalized from September through March should be offered and strongly encouraged to receive influenza vaccine before they are discharged. Household members and others with whom they will have contact should receive written information about why and where to obtain influenza vaccine. # Outpatient Facilities Providing Continuing Care to High-Risk Patients (e.g., hemodialysis centers, hospital specialty-care clinics, outpatient rehabilitation programs) All patients should be offered vaccine in one period shortly before the beginning of the influenza season. Patients admitted to such programs during the winter months after the earlier vaccination program has been conducted should be vaccinated at the time of admission. Household members should receive written information regarding the need for vaccination and the places to obtain influenza vaccine. # Visiting Nurses and Others Providing Home Care to High-Risk Persons Nursing-care plans should identify high-risk patients, and vaccine should be provided in the home if necessary. Caregivers and others in the household (including children) should be referred for vaccination. # Facilities Providing Services to Persons greater than or equal to 65 Years of Age (e.g., retirement communities, recreation centers) All unvaccinated residents/attendees should be offered vaccine on site during one time period before the influenza season; alternatively, education/publicity programs should emphasize the need for influenza vaccine and should provide specific information on how, where, and when to obtain it. # Clinics and Others Providing Health Care for Travelers Indications for influenza vaccination should be reviewed before travel and vaccine offered if appropriate (see Foreign Travelers). # Health-Care Workers Administrators of all health-care facilities should arrange for influenza vaccine to be offered to all personnel before the influenza season. Personnel should be provided with appropriate educational materials and strongly encouraged to receive vaccine, with particular emphasis on vaccination of persons who care for high-risk persons (e.g., staff of intensive-care units, including newborn intensive-care units; staff of medical/surgical units; and employees of nursing homes and chronic-care facilities). Using a mobile cart to take vaccine to hospital wards or other work sites and making vaccine available during night and weekend work shifts may enhance compliance, as may a follow-up campaign if an outbreak occurs in the community. # ANTIVIRAL AGENTS FOR INFLUENZA A The two antiviral agents with specific activity against influenza A viruses are amantadine hydrochloride and rimantadine hydrochloride. Only amantadine is licensed for use in the United States. These chemically related drugs interfere with the replication cycle of type A (but not type B) influenza viruses, although the specific mechanisms of their antiviral activity are not completely understood.
When administered prophylactically to healthy young adults or children in advance of and throughout the epidemic period, amantadine is approximately 70%-90% effective in preventing illness caused by naturally occurring strains of type A influenza viruses. When administered to otherwise healthy young adults and children for symptomatic treatment within 48 hours after the onset of influenza illness, amantadine has been shown to reduce the duration of fever and other systemic symptoms and may permit a more rapid return to routine daily activities. Because antiviral agents taken prophylactically may prevent illness but not subclinical infection, some persons who take these drugs may still develop immune responses that will protect them when they are exposed to antigenically related viruses in later years. As with all drugs, amantadine may cause side effects in a small proportion of persons. Such side effects are rarely severe but may be important for some categories of patients. # RECOMMENDATIONS FOR THE USE OF AMANTADINE # Outbreak Control in Institutions When outbreaks of influenza A occur in institutions that house high-risk persons, chemoprophylaxis should begin as early as possible to reduce the spread of infection. Contingency planning is needed to ensure rapid administration of amantadine to residents and employees. This planning should include preapproved medication orders or plans to obtain physicians' orders on short notice. When amantadine is used for outbreak control, it should be administered to all residents of the affected institution, regardless of whether they received influenza vaccine the previous fall. The dose for each resident should be determined after consulting the dosage recommendations and precautions that follow in this document and those listed in the manufacturer's package insert. To reduce spread of virus and to minimize disruption of patient care, chemoprophylaxis should also be offered to unvaccinated staff who provide care to high-risk persons. To be fully effective as prophylaxis, the antiviral drug must be taken each day for the duration of influenza activity in the community. # Use as Prophylaxis # High-risk persons vaccinated after influenza A activity has begun High-risk persons can still be vaccinated after an outbreak of influenza A has begun in a community. However, the development of antibodies in adults after vaccination usually takes 2 weeks, during which time amantadine should be administered. Children who receive influenza vaccine for the first time may require up to 6 weeks of prophylaxis, or until 2 weeks after the second dose of vaccine has been received. Amantadine does not interfere with the antibody response to the vaccine. # Persons providing care to high-risk persons To reduce the spread of virus and to maintain care for high-risk persons in the home, hospital, or institutional setting, chemoprophylaxis should be considered for unvaccinated persons who have frequent contact with high-risk persons in the home setting (e.g., household members, visiting nurses, volunteer workers) and for unvaccinated employees of hospitals, clinics, and chronic-care facilities. For employees who cannot be vaccinated, chemoprophylaxis should be continued for the entire period influenza A virus is circulating in the community; for those who are vaccinated at a time when influenza A is present in the community, chemoprophylaxis should be administered for 2 weeks after vaccination.
Prophylaxis should be considered for all employees, regardless of their vaccination status, if the outbreak is caused by a variant strain of influenza A that is not covered by the vaccine. # Immunodeficient persons Chemoprophylaxis may be indicated for high-risk persons who are expected to have a poor antibody response to influenza vaccine. This includes many persons with HIV infection, especially those with advanced disease. No data are available on possible interactions with other drugs used in the management of patients with HIV infection. Such patients must be monitored closely if amantadine is administered. # Persons for whom influenza vaccine is contraindicated Chemoprophylaxis throughout the influenza season may be appropriate for high-risk persons for whom influenza vaccine is contraindicated because of anaphylactic hypersensitivity to egg protein or other vaccine components. # Other persons Amantadine can also be taken prophylactically by anyone who wishes to avoid influenza A illness. This decision should be made by the physician and patient on an individual basis. # Use as Therapy Amantadine can reduce the severity and shorten the duration of influenza A illness among healthy adults. However, there are no data on the efficacy of amantadine therapy in preventing complications of influenza A among high-risk persons. Therefore, no specific recommendations can be made regarding the therapeutic use of amantadine for these patients. This does not preclude physicians from using amantadine for high-risk patients who develop illness compatible with influenza during a period of known or suspected influenza A activity in the community. Whether amantadine is effective when treatment begins beyond the first 48 hours of illness is not known. # OTHER CONSIDERATIONS FOR THE SELECTION OF AMANTADINE FOR PROPHYLAXIS OR TREATMENT # Side Effects/Toxicity When amantadine is administered to healthy young adults at a dose of 200 mg/day, minor central-nervous-system (CNS) side effects (nervousness, anxiety, insomnia, difficulty concentrating, and lightheadedness) or gastrointestinal side effects (anorexia and nausea) occur among approximately 5%-10% of patients. Side effects diminish or cease soon after use of the drug is discontinued. With continued use, side effects may also diminish or disappear after the first week. More serious but less frequent CNS-related side effects (seizures, confusion) associated with use of amantadine have usually affected only elderly persons, those with renal disease, and those with seizure disorders or other altered mental or behavioral conditions. Reducing the dosage to less than or equal to 100 mg/day appears to reduce the frequency of these side effects among such persons without compromising the prophylactic effectiveness of amantadine. The package insert should be reviewed before amantadine is used for any patient. The patient's age, weight, renal function, presence of other medical conditions, and indications for use of amantadine (prophylaxis or therapy) must be considered, and the dosage and duration of treatment adjusted appropriately. Modifications in dosage may be required for persons with impaired renal function, the elderly, children, persons who have neuropsychiatric disorders or who take psychotropic drugs, and persons with a history of seizures. # Development of Drug-Resistant Viruses Amantadine-resistant influenza viruses can emerge when amantadine is administered for treatment.
The frequency with which resistant isolates emerge and the extent of their transmission are unknown, but there is no evidence that amantadine-resistant viruses are more virulent or more transmissible than amantadine-sensitive viruses. Thus, the use of amantadine remains an appropriate outbreak-control measure. In closed populations such as nursing homes, persons with influenza who are treated with amantadine should be separated, if possible, from asymptomatic persons who are administered amantadine as prophylaxis. Because of possible induction of amantadine resistance, it is advisable to discontinue amantadine treatment of persons who have influenza-like illness as soon as clinically warranted, generally within 3-5 days. Isolation of influenza viruses from persons who are receiving amantadine should be reported through state health departments to CDC, and the isolates should be saved for antiviral sensitivity testing. # SOURCES OF INFORMATION ON INFLUENZA-CONTROL PROGRAMS Educational materials about influenza and its control are available from several sources, including the CDC. Information can be obtained from Information Services, National Center for Prevention Services, Mailstop E06, CDC, Atlanta, GA 30333. Phone number: (404) 639-1819. State and local health departments should also be consulted regarding availability of vaccine and access to vaccination programs.
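As a closing illustration, the amantadine dosing and duration points scattered through the sections above can be consolidated into one rule of thumb. The sketch below is ours, not the ACIP's; the age threshold and field names are assumptions, and the manufacturer's package insert and renal function govern actual dosing.

```python
# Hypothetical consolidation of the amantadine guidance discussed above.
# Thresholds (e.g., age >= 65 for dose reduction) are illustrative only;
# the package insert governs real dosing decisions.
def amantadine_plan(age_years: int, renal_impairment: bool,
                    purpose: str, vaccinated_during_outbreak: bool = False) -> dict:
    """purpose is 'prophylaxis' or 'treatment'."""
    # Standard adult dose is 200 mg/day; reduce to <= 100 mg/day for elderly
    # persons or those with renal disease to limit CNS side effects.
    dose_mg_per_day = 100 if (age_years >= 65 or renal_impairment) else 200

    if purpose == "treatment":
        # Discontinue as soon as clinically warranted, generally within
        # 3-5 days, to limit emergence of drug-resistant virus.
        duration = "3-5 days (stop when clinically warranted)"
    elif vaccinated_during_outbreak:
        duration = "2 weeks after vaccination (while immunity develops)"
    else:
        duration = "for the duration of influenza A activity in the community"

    return {"dose_mg_per_day": dose_mg_per_day, "duration": duration}

print(amantadine_plan(age_years=72, renal_impairment=False, purpose="prophylaxis"))
```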
These recommendations update information on the vaccine and antiviral agents available for controlling influenza during the 1992-1993 influenza season (superseding the MMWR 1991;40(no. RR-6):1-15.) The primary changes include statements about vaccination of persons with known hypersensitivity to eggs or other components of the influenza vaccine, the optimal timing of influenza vaccination, and the influenza strains in the trivalent vaccine for 1992-1993.# INTRODUCTION Influenza A viruses are classified into subtypes on the basis of two surface antigens: hemagglutinin (H) and neuraminidase (N). Three subtypes of hemagglutinin (H1, H2, H3) and two subtypes of neuraminidase (N1, N2) are recognized among influenza A viruses that have caused widespread human disease. Immunity to these antigens --especially to the hemagglutinin --reduces the likelihood of infection and lessens the severity of disease if infection occurs. Infection with a virus of one subtype confers little or no protection against viruses of other subtypes. Furthermore, over time, antigenic variation (antigenic drift) within a subtype may be so marked that infection or vaccination with one strain may not induce immunity to distantly related strains of the same subtype. Although influenza B viruses have shown more antigenic stability than influenza A viruses, antigenic variation does occur. For these reasons, major epidemics of respiratory disease caused by new variants of influenza continue to occur. The antigenic characteristics of strains currently circulating provide the basis for selecting virus strains to include in each year's vaccine. Typical influenza illness is characterized by abrupt onset of fever, myalgia, sore throat, and nonproductive cough. Unlike other common respiratory infections, influenza can cause severe malaise lasting several days. More severe illness can result if primary influenza pneumonia or secondary bacterial pneumonia occur. During influenza epidemics, high attack rates of acute illness result in increased numbers of visits to physicians' offices, walk-in clinics, and emergency rooms and increased hospitalizations for management of lower-respiratory-tract complications. Elderly persons and persons with underlying health problems are at increased risk for complications of influenza infection. If infected, such high-risk persons or groups (listed as ``groups at increased risk for influenza-related complications'' under Target Groups for Special Vaccination Programs) are more likely than the general population to require hospitalization. During major epidemics, hospitalization rates for high-risk persons may increase 2-to 5-fold, depending on the age group. Previously healthy children and younger adults may also require hospitalization for influenza-related complications, but the relative increase in their hospitalization rates is less than for persons who belong to high-risk groups. An increase in mortality further indicates the impact of influenza epidemics. Increased mortality results not only from influenza and pneumonia but also from cardiopulmonary and other chronic diseases that can be exacerbated by influenza infection. It is estimated that more than 10,000 excess deaths occurred during each of seven different U.S. epidemics in the period 1977-1988, and more than 40,000 excess deaths occurred during each of two of these epidemics. Approximately 80%-90% of the deaths attributed to pneumonia and influenza occurred among persons greater than or equal to 65 years of age. 
Because the proportion of elderly persons in the U.S. population is increasing and because age and its associated chronic diseases are risk factors for severe influenza illness, the toll from influenza can be expected to increase unless control measures are administered more vigorously. The number of younger persons at increased risk for influenza-related complications is also increasing for various reasons, such as the success of neonatal intensive care units, better management of diseases such as cystic fibrosis and acquired immunodeficiency syndrome (AIDS), and better survival rates for organ-transplant recipients. # OPTIONS FOR THE CONTROL OF INFLUENZA In the United States two measures are available that can reduce the impact of influenza; immunoprophylaxis with inactivated (killed-virus) vaccine and chemoprophylaxis or therapy with an influenza-specific antiviral drug (e.g., amantadine). Vaccination of high-risk persons each year before the influenza season is currently the most effective measure for reducing the impact of influenza. Vaccination can be highly cost-effective when a) it is directed at persons who are most likely to experience complications or who are at increased risk for exposure, and b) it is administered to high-risk persons during hospitalization or a routine health-care visit before the influenza season, thus making special visits to physicians' offices or clinics unnecessary. Recent reports indicate that --when vaccine and epidemic strains of virus are well matched --achieving high vaccination rates among closed populations can reduce the risk of outbreaks by inducing herd immunity. Other indications for vaccination include the strong desire of any person to avoid influenza infection, reduce the severity of disease, or reduce the chance of transmitting influenza to high-risk persons with whom the individual has frequent contact. The antiviral agent available for use at this time (amantadine hydrochloride) is effective only against influenza A and, for maximum effectiveness as prophylaxis, must be administered throughout the period of risk. When administered as either prophylaxis or therapy, the potential effectiveness of amantadine must be balanced against potential side effects. Chemoprophylaxis is not a substitute for vaccination. Recommendations for chemoprophylaxis are provided primarily to help health-care providers make decisions regarding persons who are at greatest risk of severe illness and complications if infected with an influenza A virus. Use of amantadine may be considered a) as a control measure when influenza A outbreaks occur in institutions housing high-risk persons, both for treatment of ill individuals and as prophylaxis for others; b) as short-term prophylaxis after late vaccination of high-risk persons (i.e., when influenza A infections are already occurring in the community) during the period when immunity is developing in response to vaccination; c) as seasonal prophylaxis for persons for whom vaccination is contraindicated; d) as seasonal prophylaxis for immunocompromised persons who may not produce protective levels of antibody in response to vaccination; and e) as prophylaxis for unvaccinated health-care workers and household contacts who care for high-risk persons either for the duration of influenza activity in the community or until immunity develops after vaccination. Amantadine is also approved for use by any person who wishes to reduce his or her chances of becoming ill with influenza A. 
# INACTIVATED VACCINE FOR INFLUENZA A AND B Influenza vaccine is made from highly purified, egg-grown viruses that have been rendered noninfectious (inactivated). Therefore, the vaccine cannot cause influenza. Each year's influenza vaccine contains three virus strains (usually two type A and one type B) representing influenza viruses believed likely to circulate in the United States in the upcoming winter. The composition of the vaccine is such that it rarely causes systemic or febrile reactions. Whole-virus, subvirion, and purified-surface-antigen preparations are available. To minimize febrile reactions, only subvirion or purified-surface-antigen preparations should be used for children; any of the preparations may be used for adults. Most vaccinated children and young adults develop high postvaccination hemagglutination-inhibition antibody titers. These antibody titers are protective against infection by strains similar to those in the vaccine or the related variants that may emerge during outbreak periods. Elderly persons and persons with certain chronic diseases may develop lower postvaccination antibody titers than healthy young adults, and thus may remain susceptible to influenza upper-respiratory-tract infection. Nevertheless, even if such persons develop influenza illness, the vaccine has been shown to be effective in preventing lower-respiratory-tract involvement or other complications, thereby reducing the risk of hospitalization and death. # RECOMMENDATIONS FOR USE OF INFLUENZA VACCINE Influenza vaccine is strongly recommended for any person greater than or equal to 6 months of age who -because of age or underlying medical condition --is at increased risk for complications of influenza. Health-care workers and others (including household members) in close contact with high-risk persons should also be vaccinated. In addition, influenza vaccine may be administered to any person who wishes to reduce the chance of becoming infected with influenza. The trivalent influenza vaccine prepared for the 1992-1993 season will include A/Texas/36/91-like(H1N1), A/Beijing/353/89-like (H3N2), and B/Panama/45/90-like hemagglutinin antigens. Recommended doses are listed in Table 1. Guidelines for the use of vaccine among different groups follow. Although the current influenza vaccine can contain one or more antigens administered in previous years, annual vaccination using the current vaccine is necessary because immunity for a person declines in the year following vaccination. Because the 1992-1993 vaccine differs from the 1991-1992 vaccine, supplies of 1991-1992 vaccine should not be administered to provide protection for the 1992-1993 influenza season. Two doses administered at least 1 month apart may be required for a satisfactory antibody response among previously unvaccinated children less than 9 years of age; however, studies with vaccines similar to those in current use have shown little or no improvement in antibody responses when a second dose is administered to adults during the same season. During the past decade, data on influenza vaccine immunogenicity and side effects have been obtained when vaccine has been administered intramuscularly. Because there has been no adequate evaluation of recent influenza vaccines administered by other routes, the intramuscular route is recommended for use. Adults and older children should be vaccinated in the deltoid muscle, and infants and young children in the anterolateral aspect of the thigh. 
# TARGET GROUPS FOR SPECIAL VACCINATION PROGRAMS

To maximize protection of high-risk persons, they and their close contacts should be targeted for organized vaccination programs.

Groups at Increased Risk for Influenza-Related Complications:
1. Persons ≥65 years of age.
2. Residents of nursing homes and other chronic-care facilities housing persons of any age with chronic medical conditions.
3. Adults and children with chronic disorders of the pulmonary or cardiovascular systems, including children with asthma.
4. Adults and children who have required regular medical follow-up or hospitalization during the preceding year because of chronic metabolic diseases (including diabetes mellitus), renal dysfunction, hemoglobinopathies, or immunosuppression (including immunosuppression caused by medications).
5. Children and teenagers (6 months-18 years of age) who are receiving long-term aspirin therapy and therefore may be at risk of developing Reye syndrome after influenza.

Groups That Can Transmit Influenza to High-Risk Persons:
Persons who are clinically or subclinically infected and who attend or live with high-risk persons can transmit influenza virus to them. Some high-risk persons (e.g., the elderly, transplant recipients, or persons with AIDS) can have low antibody responses to influenza vaccine. Efforts to protect these high-risk persons against influenza may be improved by reducing the chances of exposure to influenza from their care providers. Therefore, the following groups should be vaccinated:
1. Physicians, nurses, and other personnel in both hospital and outpatient-care settings who have contact with high-risk persons among all age groups, including infants.
2. Employees of nursing homes and chronic-care facilities who have contact with patients or residents.
3. Providers of home care to high-risk persons (e.g., visiting nurses, volunteer workers).
4. Household members (including children) of high-risk persons.

# VACCINATION OF OTHER GROUPS

# General Population
Physicians should administer influenza vaccine to any person who wishes to reduce the chance of acquiring influenza infection. Persons who provide essential community services may be considered for vaccination to minimize disruption of essential activities during influenza outbreaks. Similarly, students or other persons in institutional settings, such as those who reside in dormitories, may be considered for vaccination to minimize the disruption of routine activities during epidemics.

# Pregnant Women
Influenza-associated excess mortality among pregnant women has not been documented except in the pandemics of 1918-1919 and 1957-1958. However, pregnant women who have other medical conditions that increase their risks for complications from influenza should be vaccinated, because the vaccine is considered safe for pregnant women. Administering the vaccine after the first trimester is a reasonable precaution to minimize any concern over the theoretical risk of teratogenicity. However, it is undesirable to delay vaccination of pregnant women who have high-risk conditions and who will still be in the first trimester of pregnancy when the influenza season begins.

# Persons Infected with HIV
Little information exists regarding the frequency and severity of influenza illness among human immunodeficiency virus (HIV)-infected persons, but recent reports suggest that symptoms may be prolonged and the risk of complications increased for HIV-infected persons.
Because influenza may result in serious illness and complications, vaccination is a prudent precaution and will result in protective antibody levels in many recipients. However, the antibody response to vaccine may be low in persons with advanced HIV-related illnesses; a booster dose of vaccine has not improved the immune response for these individuals.

# Foreign Travelers
Increasingly, the elderly and persons with high-risk medical conditions are embarking on international travel. The risk of exposure to influenza during foreign travel varies, depending on season and destination. In the tropics, influenza can occur throughout the year; in the southern hemisphere, the season of greatest activity is April-September. Because of the short incubation period for influenza, exposure to the virus during travel can result in clinical illness that begins while traveling, an inconvenience or potential danger, especially for persons at increased risk for complications. Persons preparing to travel to the tropics at any time of year, or to the southern hemisphere during April-September, should review their influenza vaccination histories. If they were not vaccinated the previous fall or winter, they should consider influenza vaccination before travel. Persons in the high-risk categories should be especially encouraged to receive the most currently available vaccine. High-risk persons administered the previous season's vaccine before travel should be revaccinated in the fall or winter with the current vaccine.

# PERSONS WHO SHOULD NOT BE VACCINATED
Inactivated influenza vaccine should not be administered to persons known to have anaphylactic hypersensitivity to eggs or to other components of the influenza vaccine without first consulting a physician (see Side Effects and Adverse Reactions). Amantadine hydrochloride is an option for prevention of influenza A in such persons. However, persons who have a history of anaphylactic hypersensitivity to vaccine components but who are also at higher risk for complications of influenza infection may benefit from vaccine after appropriate allergy evaluation and desensitization. Specific information about vaccine components can be found in the warnings and contraindications in each manufacturer's package insert.

It is usually preferable to delay vaccination of adults with acute febrile illnesses until their symptoms have abated. However, minor illnesses with or without fever should not contraindicate the use of influenza vaccine, particularly among children with a mild upper respiratory tract infection or allergic rhinitis (see American Academy of Pediatrics, The Red Book, 1991).

# SIDE EFFECTS AND ADVERSE REACTIONS
Because influenza vaccine contains only noninfectious viruses, it cannot cause influenza. Respiratory disease after vaccination represents coincidental illness unrelated to influenza vaccination. The most frequent side effect of vaccination is soreness at the vaccination site that lasts for up to 2 days; this is reported by fewer than one-third of vaccinees. In addition, two types of systemic reactions have occurred:
1. Fever, malaise, myalgia, and other systemic symptoms occur infrequently and most often affect persons who have had no exposure to the influenza virus antigens in the vaccine (e.g., young children). These reactions begin 6-12 hours after vaccination and can persist for 1 or 2 days.
2. Immediate, presumably allergic, reactions (such as hives, angioedema, allergic asthma, or systemic anaphylaxis) occur rarely after influenza vaccination. These reactions probably result from hypersensitivity to some vaccine component; the majority are most likely related to residual egg protein. Although current influenza vaccines contain only a small quantity of egg protein, this protein may induce immediate hypersensitivity reactions among persons with severe egg allergy. Persons who have developed hives, have had swelling of the lips or tongue, or have experienced acute respiratory distress or collapse after eating eggs should consult a physician for appropriate evaluation to help determine whether vaccination may proceed or should be deferred. Persons with documented immunoglobulin E (IgE)-mediated hypersensitivity to eggs, including those who have had occupational asthma or other allergic responses from exposure to egg protein, may also be at increased risk for reactions from influenza vaccine, and similar consultation should be considered. The protocol for influenza vaccination developed by Murphy and Strunk may be considered for patients who have egg allergies and medical conditions that place them at increased risk for influenza infection or its complications (see Murphy and Strunk, 1985).

Unlike the 1976 swine influenza vaccine, subsequent vaccines prepared from other virus strains have not been clearly associated with an increased frequency of Guillain-Barré syndrome. Although influenza vaccination can inhibit the clearance of warfarin and theophylline, studies have failed to show any adverse clinical effects attributable to these drugs among patients receiving influenza vaccine.

The potential exists for hypersensitivity reactions to any vaccine component. Although exposure to vaccines containing thimerosal can lead to induction of hypersensitivity, most patients do not develop reactions to thimerosal administered as a component of vaccines, even when patch or intradermal tests for thimerosal indicate hypersensitivity. When it has been reported, hypersensitivity to thimerosal has usually consisted of local delayed-type hypersensitivity reactions.

# SIMULTANEOUS ADMINISTRATION OF OTHER VACCINES, INCLUDING CHILDHOOD VACCINES
The target groups for influenza and pneumococcal vaccination overlap considerably. Both vaccines can be administered at the same time at different sites without increasing side effects. However, influenza vaccine must be administered each year, whereas pneumococcal vaccine is generally administered only once to all but those at highest risk of fatal pneumococcal disease (see ACIP statement, MMWR 1989;38:64-8,73-6). Children at high risk for influenza-related complications may receive influenza vaccine at the same time as measles-mumps-rubella, Haemophilus b, pneumococcal, and oral polio vaccines. Vaccines should be administered at different sites on the body. Influenza vaccine should not be administered within 3 days of vaccination with pertussis vaccine.

# TIMING OF INFLUENZA VACCINATION ACTIVITIES
Beginning each September, when vaccine for the upcoming influenza season becomes available, high-risk persons who are seen by health-care providers for routine care or as a result of hospitalization should be offered influenza vaccine. Opportunities to vaccinate persons at high risk for complications of influenza should not be missed.
The optimal time for organized vaccination campaigns for high-risk persons is usually the period between mid-October and mid-November. In the United States, influenza activity generally peaks between late December and early March, and high levels of influenza activity infrequently occur in the contiguous 48 states before December. It is particularly important to avoid administering vaccine too far in advance of the influenza season in facilities such as nursing homes, because antibody levels may begin to decline within a few months of vaccination. Vaccination programs can be undertaken as soon as current vaccine is available if regional influenza activity is expected to begin earlier than December.

Children <9 years of age who have not previously been vaccinated should receive two doses of vaccine at least 1 month apart to maximize the chance of a satisfactory antibody response to all three vaccine antigens. The second dose should be administered before December, if possible. Vaccine should be offered to both children and adults up to and even after influenza virus activity is documented in a community, as late as April in some years.

# STRATEGIES FOR IMPLEMENTING INFLUENZA VACCINE RECOMMENDATIONS
Despite the recognition that optimum medical care for both adults and children includes regular review of immunization records and administration of vaccines as appropriate, fewer than 30% of persons in high-risk groups receive influenza vaccine each year. More effective strategies are needed for delivering vaccine to high-risk persons, their health-care providers, and their household contacts.

In general, successful vaccination programs have combined education for health-care workers, publicity and education targeted toward potential recipients, a plan for identifying (usually by medical-record review) persons at high risk, and efforts to remove administrative and financial barriers that prevent persons from receiving the vaccine. Persons for whom influenza vaccine is recommended can be identified and vaccinated in the settings described below.

# Outpatient Clinics and Physicians' Offices
Staff in physicians' offices, clinics, health-maintenance organizations, and employee health clinics should be instructed to identify and label the medical records of patients who should receive vaccine. Vaccine should be offered during visits beginning in September and throughout the influenza season. The offer of vaccine and its receipt or refusal should be documented in the medical record. Patients in high-risk groups who do not have regularly scheduled visits during the fall should be reminded by mail or telephone of the need for vaccine. If possible, arrangements should be made to provide vaccine with minimal waiting time and at the lowest possible cost.

# Facilities Providing Episodic or Acute Care (e.g., emergency rooms, walk-in clinics)
Health-care providers in these settings should be familiar with influenza vaccine recommendations. They should offer vaccine to persons in high-risk groups or should provide written information on why, where, and how to obtain the vaccine. Written information should be available in language(s) appropriate for the population served by the facility.

# Nursing Homes and Other Residential Long-Term-Care Facilities
Vaccination should be routinely provided to all residents of chronic-care facilities with the concurrence of attending physicians, rather than by obtaining individual vaccination orders for each patient.
Consent for vaccination should be obtained from the resident or a family member at the time of admission to the facility, and all residents should be vaccinated at one time, immediately preceding the influenza season. Residents admitted during the winter months after completion of the vaccination program should be vaccinated when they are admitted.

# Acute-Care Hospitals
All persons ≥65 years of age and younger persons (including children) with high-risk conditions who are hospitalized from September through March should be offered and strongly encouraged to receive influenza vaccine before they are discharged. Household members and others with whom they will have contact should receive written information about why and where to obtain influenza vaccine.

# Outpatient Facilities Providing Continuing Care to High-Risk Patients (e.g., hemodialysis centers, hospital specialty-care clinics, outpatient rehabilitation programs)
All patients should be offered vaccine in one period shortly before the beginning of the influenza season. Patients admitted to such programs during the winter months after the earlier vaccination program has been conducted should be vaccinated at the time of admission. Household members should receive written information regarding the need for vaccination and the places to obtain influenza vaccine.

# Visiting Nurses and Others Providing Home Care to High-Risk Persons
Nursing-care plans should identify high-risk patients, and vaccine should be provided in the home if necessary. Caregivers and others in the household (including children) should be referred for vaccination.

# Facilities Providing Services to Persons ≥65 Years of Age (e.g., retirement communities, recreation centers)
All unvaccinated residents and attendees should be offered vaccine on site during one time period before the influenza season; alternatively, education and publicity programs should emphasize the need for influenza vaccine and should provide specific information on how, where, and when to obtain it.

# Clinics and Others Providing Health Care for Travelers
Indications for influenza vaccination should be reviewed before travel, and vaccine should be offered if appropriate (see Foreign Travelers).

# Health-Care Workers
Administrators of all health-care facilities should arrange for influenza vaccine to be offered to all personnel before the influenza season. Personnel should be provided with appropriate educational materials and strongly encouraged to receive vaccine, with particular emphasis on vaccination of persons who care for high-risk persons (e.g., staff of intensive-care units, including newborn intensive-care units; staff of medical/surgical units; and employees of nursing homes and chronic-care facilities). Using a mobile cart to take vaccine to hospital wards or other work sites and making vaccine available during night and weekend work shifts may enhance compliance, as may a follow-up campaign if an outbreak occurs in the community.

# ANTIVIRAL AGENTS FOR INFLUENZA A
The two antiviral agents with specific activity against influenza A viruses are amantadine hydrochloride and rimantadine hydrochloride. Only amantadine is licensed for use in the United States. These chemically related drugs interfere with the replication cycle of type A (but not type B) influenza viruses, although the specific mechanisms of their antiviral activity are not completely understood.
When administered prophylactically to healthy young adults or children in advance of, and throughout, the epidemic period, amantadine is approximately 70%-90% effective in preventing illness caused by naturally occurring strains of type A influenza viruses. When administered to otherwise healthy young adults and children for symptomatic treatment within 48 hours after the onset of influenza illness, amantadine has been shown to reduce the duration of fever and other systemic symptoms and may permit a more rapid return to routine daily activities. Because antiviral agents taken prophylactically may prevent illness but not subclinical infection, some persons who take these drugs may still develop immune responses that will protect them when they are exposed to antigenically related viruses in later years.

As with all drugs, amantadine may cause side effects among a small proportion of persons. Such side effects are rarely severe but may be important for some categories of patients.

# RECOMMENDATIONS FOR THE USE OF AMANTADINE

# Outbreak Control in Institutions
When outbreaks of influenza A occur in institutions that house high-risk persons, chemoprophylaxis should begin as early as possible to reduce the spread of the infection. Contingency planning is needed to ensure rapid administration of amantadine to residents and employees. This should include preapproved medication orders or plans to obtain physicians' orders on short notice. When amantadine is used for outbreak control, it should be administered to all residents of the affected institution, regardless of whether they received influenza vaccine the previous fall. The dose for each resident should be determined after consulting the dosage recommendations and precautions that follow in this document and those listed in the manufacturer's package insert. To reduce spread of virus and to minimize disruption of patient care, chemoprophylaxis should also be offered to unvaccinated staff who provide care to high-risk persons. To be fully effective as prophylaxis, the antiviral drug must be taken each day for the duration of influenza activity in the community.

# Use as Prophylaxis

# High-risk persons vaccinated after influenza A activity has begun
High-risk persons can still be vaccinated after an outbreak of influenza A has begun in a community. However, the development of antibodies in adults after vaccination usually takes 2 weeks, during which time amantadine should be administered. Children who receive influenza vaccine for the first time may require up to 6 weeks of prophylaxis, or until 2 weeks after the second dose of vaccine has been received. Amantadine does not interfere with the antibody response to the vaccine.

# Persons providing care to high-risk persons
To reduce the spread of virus and to maintain care for high-risk persons in the home, hospital, or institutional setting, chemoprophylaxis should be considered for unvaccinated persons who have frequent contact with high-risk persons in the home setting (e.g., household members, visiting nurses, volunteer workers) and for unvaccinated employees of hospitals, clinics, and chronic-care facilities. For employees who cannot be vaccinated, chemoprophylaxis should be continued for the entire period influenza A virus is circulating in the community; for those who are vaccinated at a time when influenza A is present in the community, chemoprophylaxis should be administered for 2 weeks after vaccination.
Prophylaxis should be considered for all employees, regardless of their vaccination status, if the outbreak is caused by a variant strain of influenza A that is not covered by the vaccine.

# Immunodeficient persons
Chemoprophylaxis may be indicated for high-risk persons who are expected to have a poor antibody response to influenza vaccine. This includes many persons with HIV infection, especially those with advanced disease. No data are available on possible interactions with other drugs used in the management of patients with HIV infection. Such patients must be monitored closely if amantadine is administered.

# Persons for whom influenza vaccine is contraindicated
Chemoprophylaxis throughout the influenza season may be appropriate for high-risk persons for whom influenza vaccine is contraindicated because of anaphylactic hypersensitivity to egg protein or other vaccine components.

# Other persons
Amantadine can also be administered prophylactically to any person who wishes to avoid influenza A illness. This decision should be made by the physician and patient on an individual basis.

# Use as Therapy
Amantadine can reduce the severity and shorten the duration of influenza A illness among healthy adults. However, there are no data on the efficacy of amantadine therapy in preventing complications of influenza A among high-risk persons. Therefore, no specific recommendations can be made regarding the therapeutic use of amantadine for these patients. This does not preclude physicians' using amantadine for high-risk patients who develop illness compatible with influenza during a period of known or suspected influenza A activity in the community. Whether amantadine is effective when treatment begins beyond the first 48 hours of illness is not known.

# OTHER CONSIDERATIONS FOR THE SELECTION OF AMANTADINE FOR PROPHYLAXIS OR TREATMENT

# Side Effects/Toxicity
When amantadine is administered to healthy young adults at a dose of 200 mg/day, minor central nervous system (CNS) side effects (nervousness, anxiety, insomnia, difficulty concentrating, and lightheadedness) or gastrointestinal side effects (anorexia and nausea) occur among approximately 5%-10% of patients. Side effects diminish or cease soon after use of the drug is discontinued; with prolonged use, side effects may also diminish or disappear after the first week of use. More serious but less frequent CNS-related side effects (seizures, confusion) associated with use of amantadine have usually affected only elderly persons, those with renal disease, and those with seizure disorders or other altered mental or behavioral conditions. Reducing the dosage to ≤100 mg/day appears to reduce the frequency of these side effects among such persons without compromising the prophylactic effectiveness of amantadine (an illustrative dosing sketch follows the Sources of Information section below).

The package insert should be reviewed before use of amantadine for any patient. The patient's age, weight, renal function, presence of other medical conditions, and indications for use of amantadine (prophylaxis or therapy) must be considered, and the dosage and duration of treatment adjusted appropriately. Modifications in dosage may be required for persons with impaired renal function, the elderly, children, persons who have neuropsychiatric disorders or who take psychotropic drugs, and persons with a history of seizures.

# Development of Drug-Resistant Viruses
Amantadine-resistant influenza viruses can emerge when amantadine is administered for treatment.
The frequency with which resistant isolates emerge and the extent of their transmission are unknown, but there is no evidence that amantadine-resistant viruses are more virulent or more transmissible than amantadine-sensitive viruses. Thus, the use of amantadine remains an appropriate outbreak-control measure. In closed populations such as nursing homes, persons with influenza who are treated with amantadine should be separated, if possible, from asymptomatic persons who are administered amantadine as prophylaxis. Because of the possible induction of amantadine resistance, it is advisable to discontinue amantadine treatment of persons who have influenza-like illness as soon as clinically warranted, generally within 3-5 days. Isolation of influenza viruses from persons who are receiving amantadine should be reported through state health departments to CDC, and the isolates should be saved for antiviral sensitivity testing.

# SOURCES OF INFORMATION ON INFLUENZA-CONTROL PROGRAMS
Educational materials about influenza and its control are available from several sources, including CDC. Information can be obtained from Information Services, National Center for Prevention Services, Mailstop E06, CDC, Atlanta, GA 30333; phone: (404) 639-1819. State and local health departments should also be consulted regarding the availability of vaccine and access to vaccination programs.
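As a rough illustration of the dosage considerations described under Side Effects/Toxicity (200 mg/day for healthy young adults; ≤100 mg/day for elderly persons and those with renal disease or seizure disorders), the following sketch encodes that decision. The function name and inputs are mine, the cutoffs come from the text above, and actual dosing must follow the package insert.

```python
def amantadine_daily_dose_mg(age_years: int,
                             renal_impairment: bool = False,
                             seizure_disorder: bool = False) -> int:
    """Illustrative only -- encodes the rule of thumb in the text:
    healthy young adults tolerate 200 mg/day, while elderly persons and
    persons with renal disease or seizure disorders should receive
    <=100 mg/day. Consult the package insert for actual dosing.
    """
    if age_years >= 65 or renal_impairment or seizure_disorder:
        return 100
    return 200

assert amantadine_daily_dose_mg(30) == 200
assert amantadine_daily_dose_mg(72) == 100
assert amantadine_daily_dose_mg(40, renal_impairment=True) == 100
```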
The National Institute for Occupational Safety and Health (NIOSH) recommends that worker exposure to chloroform in the workplace be controlled by adherence to the following sections. The standard is designed to protect the health and provide for the safety of workers for up to a 10-hour workday, 40-hour workweek, over a working lifetime. Compliance with all sections of the standard should prevent the adverse effects reported from exposure to chloroform in the workplace and materially reduce the risk of cancer from occupational exposure to chloroform. The standard is measurable by techniques that are valid, reproducible, and available. Sufficient technology exists to permit compliance with the recommended standard. The standard will be subject to further review and revision as necessary.

"Occupational exposure to chloroform" is defined as exposure to chloroform in any establishment where chloroform is used, manufactured, or stored. Exposure to chloroform under any of the above conditions will require adherence to all of the following sections.

# Section 1 - Environmental (Workplace Air)
Chloroform shall be controlled in the workplace so that the concentration of chloroform is not greater than 2 ppm (9.78 mg/m³) of breathing zone air in a 45-liter air sample taken over a period not to exceed 1 hour in duration. (A worked conversion between ppm and mg/m³ appears at the end of this standard.) Procedures for sampling and analysis of chloroform in air shall be as provided in Appendices I and II, or by any method shown to be equivalent in precision, accuracy, and sensitivity to the methods specified.

# Section 2 - Medical
(a) Comprehensive preplacement and annual medical examinations shall be made available to all workers exposed to chloroform unless a different frequency is indicated by professional medical judgment based on such factors as emergencies, variations in work periods, and preexisting health status of individual workers.
(b) These examinations shall include, but shall not be limited to:
(1) A comprehensive and interim medical and work history giving special attention to gastrointestinal symptoms, mental status, and use of alcohol and barbiturates.
(2) A comprehensive medical examination, giving particular attention to cardiac rhythm and liver and kidney function. Liver function tests and urinalysis shall be performed.
(3) An evaluation of the worker's physical ability to safely wear a respirator.
(c) Employees shall be counseled regarding the increased hazards of working with chloroform resulting from use of alcohol.
(d) Medical records shall be maintained for all persons employed in work involving occupational exposure to chloroform for at least 30 years after the individual's employment is terminated. The medical representatives of the Secretary of Health, Education, and Welfare, of the Secretary of Labor, of the employer, and of the employee or former employee shall have access to these records.
(e) Initial examinations for presently employed workers shall be offered within 6 months of the promulgation of a standard incorporating these recommendations.

# Section 3 - Labeling and Posting
(b) In areas where there is occupational exposure to chloroform, the following warning sign shall be posted in readily visible locations, particularly at the entrances to the area:

DANGER!
EXTREME HEALTH HAZARD
CANCER-SUSPECT AGENT USED IN THIS AREA
UNAUTHORIZED PERSONS KEEP OUT

The sign shall be printed both in English and in the predominant language of non-English-speaking workers, if any, unless employers use other equally effective means to ensure that these workers know the hazards associated with chloroform and the locations of areas in which there is occupational exposure to chloroform. Employers shall ensure that illiterate workers also know these hazards and the locations of these areas.

# Section 4 - Personal Protective Equipment and Protective Clothing
(a) Protective Clothing
(1) Coveralls or other full-body protective clothing shall be worn in areas where there is occupational exposure to chloroform. Protective clothing shall be changed at least daily at the end of the shift, and more frequently if it should become grossly contaminated.
(2) Impervious gloves, aprons, and footwear shall be worn at operations where chloroform may contact the skin.
(3) Eye protective devices shall be provided by the employer and used by the employees where contact of chloroform with eyes is likely. Selection, use, and maintenance of eye protective equipment shall be in accordance with the provisions of the American National Standard Practice for Occupational and Educational Eye and Face Protection, ANSI Z87.1-1968. Unless eye protection is afforded by a respirator hood or facepiece, protective goggles or a face shield shall be worn at operations where there is danger of contact of the eyes with wet materials containing chloroform because of spills, splashes, or excessive vapors in the air.
(4) The employer shall ensure that all personal protective devices are inspected regularly and maintained in clean and satisfactory working condition.
(5) Work clothing may not be taken home by employees. The employer shall provide for maintenance and laundering of protective clothing.
(6) The employer shall ensure that precautions necessary to protect laundry personnel are taken while soiled protective clothing is being laundered.
(7) The employer shall ensure that chloroform is not discharged into municipal waste treatment systems or the community air.

(b) Respiratory Protection from Chloroform
Engineering controls shall be used wherever feasible to maintain chloroform concentrations at or below that recommended in Section 1 above. Compliance with the environmental exposure limit by the use of respirators is allowed only when chloroform concentrations are in excess of the workplace environmental limit because required engineering controls are being installed or tested, when nonroutine maintenance or repair is being accomplished, or during emergencies. When a respirator is thus permitted, it shall be selected and used in accordance with the following requirements:
(1) For the purpose of determining if it is necessary for workers to wear respirators, the employer shall measure the concentration of chloroform in the workplace initially and thereafter whenever process, worksite, climate, or control changes occur which are likely to increase the concentration of chloroform.
(2) The employer shall ensure that no worker is exposed to chloroform above the workplace environmental limit because of improper respirator selection, fit, use, or maintenance.
(3) A respiratory protection program meeting the requirements of 29 CFR 1910.134 and 30 CFR 11, and incorporating the American National Standard Practices for Respiratory Protection, ANSI Z88.2-1969, shall be established and enforced by the employer.
(4) The employer shall provide respirators in accordance with Table 1-1 and shall ensure that the employee uses the respirator provided anytime the concentration of chloroform exceeds the level described in Section 1.
(5) Respirators described in Table 1-1 shall be those approved under the provisions of 29 CFR 1910.134 and 30 CFR 11.
(6) The employer shall ensure that respirators are adequately cleaned, and that employees are instructed on the use of respirators assigned to them, their location in the workplace, and on how to test for leakage.
(7) Where an emergency may develop which could result in employee injury from chloroform, the employer shall provide an escape device as listed in Table 1-1.

# Section 5 - Informing Employees of Hazards
At the beginning of employment or assignment for work in a chloroform area, employees with occupational exposure to chloroform shall be informed of the hazards, relevant signs and symptoms of overexposure, appropriate emergency procedures, and proper conditions and precautions for the safe use of chloroform. Instruction shall include, as a minimum, all information in Appendix III which is applicable to the specific chloroform product or material to which there is exposure, as well as information which indicates that chloroform causes cancer in experimental animals. This information shall be posted in the work area and kept on file, readily accessible to the worker at all places of employment where chloroform is involved in unit processes and operations.

A continuing educational program shall be instituted to ensure that all workers have current knowledge of job hazards, proper maintenance procedures, and cleanup methods, and that they know how to use respiratory protective equipment and protective clothing correctly. Information as specified in Appendix III shall be recorded on the Material Safety Data Sheet or a similar form approved by the Occupational Safety and Health Administration, US Department of Labor.

# Section 6 - Work Practices
(a) Control of Airborne Contamination
Emission of vapors of chloroform shall be controlled at the sources of dispersion by means of effective and properly maintained methods such as fully enclosed operations and local exhaust ventilation, under negative pressure if possible. No recirculation of ventilation or process air shall be permitted. Other methods may be used if they are shown to effectively control airborne concentrations of chloroform within the limit of the recommended standard.

(b) Control of Contact with Skin and Eyes
(1) Employees working in areas where contact of skin or eyes with chloroform is possible shall wear full-body protective clothing, including neck and head coverings, and gloves, in accord with Section 4(a).
(2) Clean protective clothing shall be put on before each work shift.
(3) If, during the shift, the clothing becomes wetted with chloroform, it shall be removed promptly and placed in a special container for garments for decontamination or disposal. The employee shall wash the contaminated skin area thoroughly with soap and a copious amount of water. A complete shower is preferred after anything but limited, minor contact.
Then, clean protective clothing shall be put on before resuming work.
(4) Small areas of skin (principally the hands) contaminated by contact with chloroform shall be washed immediately and thoroughly with an abundance of water. Water shall be easily accessible in the work areas as low-pressure, free-running hose lines or showers.
(5) If chloroform comes into contact with the eyes, the eyes should be flushed with a large volume of low-pressure flowing water for at least 15 minutes. Medical attention shall be obtained without delay, but not at the expense of immediate and thorough flushing of the eyes.

(c) Procedures for emergencies, including firefighting, shall be established to meet foreseeable events. Necessary emergency equipment, including appropriate respiratory protective devices, shall be kept in readily accessible locations. Only self-contained breathing apparatus with positive pressure in the facepiece shall be used in firefighting. Appropriate respirators shall be available for use during evacuation.

(d) Special supervision and care shall be exercised to ensure that the exposures of repair and maintenance personnel to chloroform are within the limit prescribed by this standard.

(e) Prompt cleaning of spills of chloroform: Spills shall be channeled for appropriate treatment or collection for disposal. They may not be channeled directly into the municipal sanitary sewer system.

(f) General requirements:
(1) Good housekeeping practices shall be observed to prevent or minimize contamination of areas and equipment and to prevent build-up of such contamination.
(2) Good personal hygiene practices shall be encouraged.
(3) Equipment shall be kept in good repair and free of leaks.
(4) Containers of chloroform shall be kept covered insofar as is practical.

(g) A regulated area shall be established and maintained where:
(1) Chloroform is manufactured, reacted, mixed with other substances, repackaged, stored, handled, or used.
(2) Concentrations of chloroform are in excess of the workplace environmental limit in Section 1.

(h) Access to the regulated areas designated by Section 6(g) shall be limited to authorized persons. A daily roster shall be made of persons authorized to enter; these rosters shall be maintained for 30 years.

(j) Employers shall ensure that before employees leave a regulated area they remove and leave protective clothing at the point of exit.

# Section 7 - Sanitation
(a) Washing Facilities
Emergency showers and eye-flushing fountains with cool water under adequate pressure shall be provided and be readily accessible in areas where contact of skin or eyes with chloroform may occur. This equipment shall be frequently inspected and maintained in good working condition.

Showers and washbasins shall be provided in the employees' locker areas. Employers shall ensure that employees exposed or potentially exposed to chloroform during their work shift wash before eating or smoking periods taken during the work shift.

(b) Food Facilities
Food storage, preparation, and eating shall be prohibited in areas where chloroform is handled, processed, or stored. Eating facilities provided for employees shall be located in areas ventilated in such a manner as to ensure that contamination from chloroform is prevented. Surfaces in these areas shall be kept free of chloroform. Employers shall ensure that before employees enter premises reserved for eating, food storage, or food preparation, they remove protective clothing.
Washing facilities should be accessible nearby.

(c) Employees may not smoke in areas where chloroform is handled, processed, or stored.

(d) Clothing and Locker Room Facilities
Locker room facilities for employees required to change clothing before and after work shall be provided in a separate area ventilated in such a manner as to ensure that contamination from chloroform is non-existent. The facilities shall provide for the storing of street clothing and clean work clothing separately from soiled work clothing. Showers and washbasins should be located in the locker area to encourage good personal hygiene. Covered containers should be provided for work clothing discarded at the end of the shift or after a contamination incident. The clothing shall be held in these containers until removed for decontamination or disposal.

# Section 8 - Monitoring and Recordkeeping Requirements
In any workplace where chloroform is handled or processed, surveys shall be repeated semi-annually. The requirements set forth below apply to areas in which there is occupational exposure to chloroform. Employers shall maintain records of workplace environmental exposures to chloroform based on the following sampling, analytical, and recording schedules:
(a) In all monitoring, samples representative of the exposure in the breathing zone of employees shall be collected by personal samplers.
(b) An adequate number of samples shall be taken in order to determine the occupational exposure of each employee.
(c) The first determination of the exposure of employees to chloroform shall be completed within 6 months after the promulgation of a standard incorporating these recommendations.
(d) A reevaluation of the exposure of employees to chloroform shall be made within 30 days after installation of a new process or process change.
(e) A reevaluation of employee exposure to chloroform shall be repeated at 1-week intervals when the airborne concentration has been found to exceed the recommended workplace environmental limit. In such cases, suitable controls shall be instituted and monitoring shall continue at 1-week intervals until 3 consecutive surveys indicate the adequacy of controls. (A sketch of this cadence appears at the end of this standard.)
(f) Records of all sampling and analysis of chloroform and of medical examinations shall be maintained for at least 30 years after the individual's employment is terminated. Records shall indicate the details of (1) the type of personal protective devices, if any, in use at the time of sampling, and (2) the methods of sampling and analysis used. Each employee shall be able to obtain information on his own exposure. If an employer who has or has had employees with occupational exposure to chloroform ceases business without a successor, he shall forward their records by registered mail to the Director, National Institute for Occupational Safety and Health.

DEPARTMENT OF HEALTH, EDUCATION, AND WELFARE
PUBLIC HEALTH SERVICE
CENTER FOR DISEASE CONTROL
NATIONAL INSTITUTE FOR OCCUPATIONAL SAFETY AND HEALTH
ROBERT A. TAFT LABORATORIES
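For readers converting between the two units used in Section 1, the standard relationship at 25 °C and 1 atmosphere is mg/m³ = ppm × (molecular weight) / 24.45, where 24.45 L is the molar volume of an ideal gas under those conditions. The short sketch below is a worked check, not part of the standard; it shows that 2 ppm of chloroform (CHCl3, molecular weight about 119.38 g/mol) corresponds to roughly 9.77-9.78 mg/m³, with the small spread reflecting rounding of the constants.

```python
# Worked check of the Section 1 exposure limit: convert ppm to mg/m3.
# Assumes 25 degrees C and 1 atm, where one mole of ideal gas occupies 24.45 L.
MOLAR_VOLUME_L = 24.45
CHLOROFORM_MW = 119.38  # g/mol for CHCl3

def ppm_to_mg_per_m3(ppm: float, mol_weight: float) -> float:
    return ppm * mol_weight / MOLAR_VOLUME_L

# Prints ~9.77, matching the 9.78 mg/m3 limit cited in Section 1 to rounding.
print(round(ppm_to_mg_per_m3(2.0, CHLOROFORM_MW), 2))
```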
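The Section 8(e) resampling rule is essentially a small state machine: weekly monitoring continues while the limit is exceeded and stops only after three consecutive adequate surveys. A minimal sketch, assuming survey results arrive as a sequence of measured concentrations (the function name and inputs are mine, not part of the standard):

```python
LIMIT_PPM = 2.0  # Section 1 workplace environmental limit

def weeks_until_controls_verified(weekly_ppm_readings) -> int:
    """Count weekly surveys until 3 consecutive results are at or below
    the limit, per Section 8(e). Illustrative sketch only. Raises
    ValueError if the readings end before controls are verified."""
    consecutive_ok = 0
    for week, ppm in enumerate(weekly_ppm_readings, start=1):
        consecutive_ok = consecutive_ok + 1 if ppm <= LIMIT_PPM else 0
        if consecutive_ok == 3:
            return week
    raise ValueError("controls not yet verified; continue weekly monitoring")

print(weeks_until_controls_verified([3.1, 2.4, 1.9, 1.8, 1.7]))  # -> 5
```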
# Injury claims

All vaccines included in the adult immunization schedule except pneumococcal 23-valent polysaccharide (PPSV23) and zoster (RZV, ZVL) vaccines are covered by the Vaccine Injury Compensation Program. Information on how to file a vaccine injury claim is available at www.hrsa.gov/vaccinecompensation.

# Questions or comments

Contact www.cdc.gov/cdc-info or 800-CDC-INFO, in English or Spanish, 8 a.m.-8 p.m. ET, Monday through Friday, excluding holidays. Review vaccine types, frequencies, and intervals and considerations for special situations (Notes).

# Helpful information

Recommended by the Advisory Committee on Immunization Practices (www.cdc.gov/vaccines/acip) and approved by the Centers for Disease Control and Prevention (www.cdc.gov), American College of Physicians (www.acponline.org), American Academy of Family Physicians (www.aafp.org), American College of Obstetricians and Gynecologists (www.acog.org), and American College of Nurse-Midwives (www.midwife.org).

# Vaccines in the Adult Immunization Schedule, United States, 2020*

# Human papillomavirus vaccination

# Routine vaccination

- HPV vaccination recommended for all adults through age 26 years: 2- or 3-dose series depending on age at initial vaccination or condition:
- Age 15 years or older at initial vaccination: 3-dose series at 0, 1-2, and 6 months (minimum intervals: 4 weeks between doses 1 and 2; 12 weeks between doses 2 and 3; 5 months between doses 1 and 3; repeat dose if administered too soon; see the sketch after these notes)
- Age 9 through 14 years at initial vaccination and received 1 dose, or 2 doses less than 5 months apart: 1 dose
- Age 9 through 14 years at initial vaccination and received 2 doses at least 5 months apart: HPV vaccination complete, no additional dose needed.

# Measles, mumps, and rubella vaccination

- Born in 1957 or later with no evidence of immunity to measles, mumps, or rubella: 2-dose series at least 4 weeks apart for measles or mumps, or at least 1 dose for rubella
- Born before 1957 with no evidence of immunity to measles, mumps, or rubella: Consider 2-dose series at least 4 weeks apart for measles or mumps, or 1 dose for rubella
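The minimum-interval rule in the HPV note above lends itself to a simple validity check. A minimal sketch in Python (the function name is my own, and 5 months is approximated here as 152 days):

```python
# Check a 3-dose HPV series against the minimum intervals quoted above:
# 4 weeks between doses 1 and 2, 12 weeks between doses 2 and 3, and
# 5 months between doses 1 and 3; a dose given too soon should be repeated.
from datetime import date, timedelta

MIN_D1_D2 = timedelta(weeks=4)
MIN_D2_D3 = timedelta(weeks=12)
MIN_D1_D3 = timedelta(days=152)  # ~5 months; an approximation for this sketch

def doses_to_repeat(d1: date, d2: date, d3: date) -> list:
    problems = []
    if d2 - d1 < MIN_D1_D2:
        problems.append("dose 2")
    if d3 - d2 < MIN_D2_D3 or d3 - d1 < MIN_D1_D3:
        problems.append("dose 3")
    return problems

# Dose 3 given ~10.5 weeks after dose 2 and <5 months after dose 1: repeat it.
print(doses_to_repeat(date(2020, 1, 1), date(2020, 2, 1), date(2020, 4, 15)))
# -> ['dose 3']
```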
On February 27, 2008, new information was presented to the Advisory Committee on Immunization Practices (ACIP) regarding the risk for febrile seizures among children aged 12--23 months after administration of the combination measles, mumps, rubella, and varicella (MMRV) vaccine (ProQuad®, Merck & Co., Inc., Whitehouse Station, New Jersey). This report summarizes current knowledge regarding the risk for febrile seizures after MMRV vaccination and presents updated ACIP recommendations that were issued after presentation of the new information. These updated recommendations remove ACIP's previous preference for administering combination MMRV vaccine over separate injections of equivalent component vaccines (i.e., measles, mumps, and rubella [MMR] vaccine and varicella vaccine).

The combination tetravalent MMRV vaccine was licensed by the Food and Drug Administration (FDA) on September 6, 2005, for use in children aged 12 months--12 years (1). MMRV vaccine can be used in place of trivalent MMR vaccine and monovalent varicella vaccine to implement the recommended 2-dose vaccine policies for prevention of measles, mumps, rubella, and varicella (1,2). The first vaccine dose is recommended at age 12--15 months and the second at age 4--6 years. In MMRV vaccine prelicensure studies, an increased rate of fever was observed 5--12 and 0--42 days after the first vaccine dose, compared with administration of MMR vaccine and varicella vaccine at the same visit (3,4). Because of the known association between fever and febrile seizures (5), CDC and Merck initiated postlicensure studies to better understand the risk for febrile seizures that might be associated with MMRV vaccination.

The Vaccine Safety Datalink (VSD),* which routinely monitors vaccine safety by near real-time surveillance using computerized patient data, detected a signal of increased risk for seizures of any etiology among children aged 12--23 months after administration of MMRV vaccine compared with administration of MMR vaccine (many children also received varicella vaccine). When children who received MMRV vaccine were compared with children who received MMR vaccine and varicella vaccine administered at the same visit, statistically significant clustering of seizures was observed 7--10 days after vaccination in both groups. Once the signal was detected, a VSD study was initiated that evaluated the risk for febrile seizures 7--10 days after vaccination among 43,353 children aged 12--23 months who received MMRV vaccine and 314,599 children aged 12--23 months who received MMR vaccine and varicella vaccine administered at the same visit. Medical records were reviewed to validate the diagnosis, and a multivariate logistic regression was used to adjust for age and influenza season. The preliminary results indicated a rate of febrile seizure of nine per 10,000 vaccinations among MMRV vaccine recipients compared with four per 10,000 vaccinations among MMR vaccine and varicella vaccine recipients (adjusted odds ratio = 2.3; 95% confidence interval [CI] = 1.6--3.2; p<0.0001). These results suggest that, in the 7--10 day postvaccination period, approximately one additional febrile seizure would occur among every 2,000 children vaccinated with MMRV vaccine, compared with children vaccinated with MMR vaccine and varicella vaccine administered at the same visit. Of the 166 children who experienced febrile seizures after vaccination and had hospitalization information available, 26 (16%) were hospitalized. No child who had a febrile seizure died.
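The excess-risk arithmetic quoted above can be made concrete. A short sketch in Python using the VSD rates as given (illustrative arithmetic only, not a reanalysis):

```python
# Risk difference and implied number needed to harm from the VSD rates above.
rate_mmrv = 9 / 10_000     # febrile seizures per vaccination, MMRV recipients
rate_mmr_var = 4 / 10_000  # same-visit MMR + varicella recipients

risk_difference = rate_mmrv - rate_mmr_var   # 5 per 10,000 vaccinations
number_needed_to_harm = 1 / risk_difference  # 2,000

print(f"excess risk: {risk_difference * 10_000:.0f} per 10,000 vaccinations")
print(f"~1 additional febrile seizure per {number_needed_to_harm:.0f} children")
```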
At the ACIP meeting, representatives from Merck presented interim results of an ongoing postlicensure study being conducted among children aged 12--60 months (99% of the children were aged 12--23 months). All potential cases of febrile seizure were reviewed using Brighton Collaboration guidelines (6). This interim analysis found a 2.3-fold (CI = 0.6--9.0) higher risk for confirmed febrile seizures 5--12 days after MMRV vaccination (14,263 children; rate = five per 10,000 vaccinations) when compared with a historic control group of children (matched on age, sex, and date of vaccination) vaccinated with MMR vaccine and varicella vaccine at the same visit (14,263 children; rate = two per 10,000 vaccinations). Although the relative risk was not statistically significant, it was similar to the adjusted odds ratio reported by the VSD study for the 7--10 days after vaccination. The Merck study also evaluated the risk for febrile seizures during the 0--30 days after vaccination. This risk was not significantly different (relative risk = 0.7; CI = 0.4--1.5) for children who received MMRV vaccine (10 per 10,000) compared with those who received MMR vaccine and varicella vaccine at the same visit (13 per 10,000). The Merck results are considered interim; approximately half of the final sample size needed to investigate the risk for febrile seizures was available for this analysis.

Neither the VSD study nor the Merck study assessed the risk for febrile seizures after MMRV vaccine administered as a second dose at age 4--6 years. However, previous studies have determined that the second dose of MMRV vaccine is less likely to cause fever than the first dose (3), and rates of febrile seizure are lower in the general population of children aged 4--6 years than in the population aged 12--15 months (5).

Febrile seizures are not uncommon in young children and generally have an excellent prognosis (7), although they often are distressing to parents and other family members. Approximately one in 25 (4%) young children will have at least one febrile seizure, usually at age 6--59 months; the peak age for febrile seizures is 14--18 months (5,7). Febrile seizures occur most commonly with the fevers caused by typical childhood illnesses, such as middle ear infections, viral upper respiratory tract infections, and roseola, but can be associated with any condition that results in fever. Febrile seizures can occur after certain vaccinations, although rarely. MMR vaccination has been associated previously with febrile seizures occurring 8--14 days later; approximately one additional febrile seizure occurs among every 3,000--4,000 children vaccinated with MMR vaccine, compared with children not vaccinated during the preceding 30 days (8).

Availability of MMRV vaccine currently is limited in the United States because of manufacturing constraints unrelated to vaccine safety or efficacy (9). MMRV vaccine is not expected to be widely available before 2009; however, some clinics might have MMRV vaccine in stock.

Consistent with ACIP General Recommendations on Immunization (10), the 2007 ACIP recommendations for prevention of varicella included a preference for use of combination MMRV vaccine over separate injections of equivalent component vaccines (i.e., MMR vaccine and varicella vaccine) (2). At its February 27, 2008, meeting, ACIP considered the preliminary results from the VSD and Merck studies, which suggested an increased risk for febrile seizures after the first dose of MMRV vaccine.
Given the availability of alternative options for vaccination against measles, mumps, rubella, and varicella and the limited supply of MMRV vaccine, ACIP voted to change the preference language for MMRV vaccine to read as follows: "Combination MMRV vaccine is approved for use among healthy children aged 12 months--12 years. MMRV vaccine is indicated for simultaneous vaccination against measles, mumps, rubella, and varicella. ACIP does not express a preference for use of MMRV vaccine over separate injections of equivalent component vaccines (i.e., MMR vaccine and varicella vaccine)."

ACIP also recommended establishing a work group to conduct an in-depth evaluation of the findings regarding the increased risk for febrile seizures after the first dose of MMRV vaccine and to present future policy options for consideration. CDC, FDA, and ACIP will communicate updates and implement further necessary actions based on these evaluations.

Clinically significant adverse events that follow vaccination should be reported to the Vaccine Adverse Event Reporting System (VAERS). Guidance about how to obtain and complete a VAERS form is available at http://www.vaers.hhs.gov or by telephone, 800-822-7967. Additional information on MMRV vaccine and febrile seizures is available at http://www.cdc.gov/od/science/iso/vsd/mmrv.htm and http://www.fda.gov/cber/label/proquadlbinfo.htm.
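To place the attributable risks quoted in this report side by side, a short computation (Python; taking the MMR figure as the midpoint of the quoted 1 per 3,000--4,000 range is my own simplification):

```python
# Febrile-seizure risks quoted in this report, expressed per 10,000.
background_ever = 1 / 25   # ~4% of children have at least one febrile seizure
excess_mmrv = 1 / 2_000    # additional seizures per first MMRV dose (VSD)
excess_mmr = 1 / 3_500     # midpoint of the 1 per 3,000--4,000 MMR figure

for label, risk in [("background (ever)", background_ever),
                    ("excess, MMRV first dose", excess_mmrv),
                    ("excess, MMR", excess_mmr)]:
    print(f"{label}: {risk * 10_000:.0f} per 10,000")
```

This prints roughly 400, 5, and 3 per 10,000, respectively, which is the comparison the text above draws in prose.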
CDC Malaria Hotline: (770) 488-7788 or (855) 856-4713 toll-free, Monday-Friday, 9 am to 5 pm EST; (770) 488-7100 after hours, weekends, and holidays.

Table columns: Clinical Diagnosis/Plasmodium Species; Region Infection Acquired; Recommended Drug and Adult Dose 1; Recommended Drug and Pediatric Dose 1 (pediatric dose should NEVER exceed adult dose).

Uncomplicated malaria/P. falciparum or species not identified (if "species not identified" is subsequently diagnosed as P. vivax or P. ovale, see P. vivax and P. ovale below regarding treatment with primaquine).

Chloroquine-resistant or unknown resistance 2 (all malarious regions except those specified as chloroquine-sensitive listed in the box below):

A. Atovaquone-proguanil (Malarone™) 3
Adult tab = 250 mg atovaquone/100 mg proguanil. Adult dose: 4 adult tabs po qd x 3 days.
Pediatric dose (peds tab = 62.5 mg atovaquone/25 mg proguanil; the weight bands are restated in the sketch after this table):
5-8 kg: 2 peds tabs po qd x 3 d
9-10 kg: 3 peds tabs po qd x 3 d
11-20 kg: 1 adult tab po qd x 3 d
21-30 kg: 2 adult tabs po qd x 3 d
31-40 kg: 3 adult tabs po qd x 3 d
>40 kg: 4 adult tabs po qd x 3 d

B. Artemether-lumefantrine (Coartem™) 3
1 tablet = 20 mg artemether and 120 mg lumefantrine. A 3-day treatment schedule with a total of 6 oral doses is recommended for both adult and pediatric patients based on weight. The patient should receive the initial dose, followed by the second dose 8 hours later, then 1 dose po bid for the following 2 days.
5-<15 kg: 1 tablet per dose
15-<25 kg: 2 tablets per dose
25-<35 kg: 3 tablets per dose
≥35 kg: 4 tablets per dose

C. Quinine sulfate 4 plus one of the following: doxycycline, tetracycline, or clindamycin.
Adult doses:
Quinine sulfate: 542 mg base (=650 mg salt) po tid x 3 or 7 days 5
Doxycycline: 100 mg po bid x 7 days
Tetracycline: 250 mg po qid x 7 days
Clindamycin: 20 mg base/kg/day po divided tid x 7 days
Pediatric doses:
Quinine sulfate: 8.3 mg base/kg (=10 mg salt/kg) po tid x 3 or 7 days 5
Doxycycline 6: 2.2 mg/kg po every 12 hours x 7 days
Tetracycline 6: 25 mg/kg/day po divided qid x 7 days
Clindamycin: 20 mg base/kg/day po divided tid x 7 days

D. Mefloquine (Lariam™ and generics) 7
Adult dose: 684 mg base (=750 mg salt) po as initial dose, followed by 456 mg base (=500 mg salt) po given 6-12 hours after initial dose. Total dose = 1,250 mg salt.
Pediatric dose: 13.7 mg base/kg (=15 mg salt/kg) po as initial dose, followed by 9.1 mg base/kg (=10 mg salt/kg) po given 6-12 hours after initial dose. Total dose = 25 mg salt/kg. 8,9

Chloroquine-sensitive:

Chloroquine phosphate (Aralen™ and generics) 8: 600 mg base (=1,000 mg salt) po immediately, followed by 300 mg base (=500 mg salt) po at 6, 24, and 48 hours. Total dose: 1,500 mg base (=2,500 mg salt).
OR
Hydroxychloroquine (Plaquenil™ and generics): 620 mg base (=800 mg salt) po immediately, followed by 310 mg base (=400 mg salt) po at 6, 24, and 48 hours. Total dose: 1,550 mg base (=2,000 mg salt). 10

NOTE: There are three options (A, B, or C) available for treatment of uncomplicated malaria caused by chloroquine-resistant P. vivax. High treatment failure rates due to chloroquine-resistant P. vivax have been well documented in Papua New Guinea and Indonesia. Rare case reports of chloroquine-resistant P. vivax have also been documented in Burma (Myanmar), India, and Central and South America. Persons acquiring P. vivax infections outside of Papua New Guinea or Indonesia should be started on chloroquine.
If the patient does not respond, the treatment should be changed to a chloroquine-resistant P. vivax regimen and CDC should be notified (Malaria Hotline number listed above). For treatment of chloroquine-resistant P. vivax infections, options A, B, and C are equally recommended.

11. For pregnant women diagnosed with uncomplicated malaria caused by chloroquine-resistant P. falciparum or chloroquine-resistant P. vivax infection, treatment with doxycycline or tetracycline is generally not indicated. However, doxycycline or tetracycline may be used in combination with quinine (as recommended for non-pregnant adults) if other treatment options are not available or are not being tolerated, and the benefit is judged to outweigh the risks.

12. Atovaquone-proguanil and artemether-lumefantrine are generally not recommended for use in pregnant women, particularly in the first trimester, due to lack of sufficient safety data. For pregnant women diagnosed with uncomplicated malaria caused by chloroquine-resistant P. falciparum infection, atovaquone-proguanil or artemether-lumefantrine may be used if other treatment options are not available or are not being tolerated, and if the potential benefit is judged to outweigh the potential risks.

13. For P. vivax and P. ovale infections, primaquine phosphate for radical treatment of hypnozoites should not be given during pregnancy. Pregnant patients with P. vivax and P. ovale infections should be maintained on chloroquine prophylaxis for the duration of their pregnancy. The chemoprophylactic dose of chloroquine phosphate is 300 mg base (=500 mg salt) orally once per week. After delivery, pregnant patients who do not have G6PD deficiency should be treated with primaquine.

# Severe malaria / All regions

Quinidine gluconate 14 plus one of the following: doxycycline, tetracycline, or clindamycin. Adult doses:

Quinidine gluconate: 6.25 mg base/kg (=10 mg salt/kg) loading dose IV over 1-2 hrs, then 0.0125 mg base/kg/min (=0.02 mg salt/kg/min) continuous infusion for at least 24 hours. An alternative regimen is 15 mg base/kg (=24 mg salt/kg) loading dose IV infused over 4 hours, followed by 7.5 mg base/kg (=12 mg salt/kg) infused over 4 hours every 8 hours, starting 8 hours after the loading dose (see package insert). Once parasite density is <1% and the patient can take oral medication, complete treatment with oral quinine, dose as above. Quinidine/quinine course = 7 days in Southeast Asia; = 3 days in Africa or South America.

Doxycycline: Treatment as above. If patient not able to take oral medication, give 100 mg IV every 12 hours and then switch to oral doxycycline (as above) as soon as patient can take oral medication. For IV use, avoid rapid administration. Treatment course = 7 days.

Tetracycline: Treatment as above.

Clindamycin: Treatment as above. If patient not able to take oral medication, give 10 mg base/kg loading dose IV followed by 5 mg base/kg IV every 8 hours. Switch to oral clindamycin (oral dose as above) as soon as patient can take oral medication. For IV use, avoid rapid administration. Treatment course = 7 days.

Investigational new drug (contact CDC for information): Artesunate, followed by one of the following: atovaquone-proguanil (Malarone™), doxycycline (clindamycin in pregnant women), or mefloquine.

Pediatric doses: Quinidine gluconate 14 plus one of the following: doxycycline 4, tetracycline 4, or clindamycin.

Quinidine gluconate: Same mg/kg dosing and recommendations as for adults.

Doxycycline: Treatment as above. If patient not able to take oral medication, may give IV.
For children <45 kg, give 2.2 mg/kg IV every 12 hours and then switch to oral doxycycline (dose as above) as soon as patient can take oral medication. For children >45 kg, use same dosing as for adults. For IV use, avoid rapid administration. Treatment course = 7 days.

Tetracycline: Treatment as above.

Clindamycin: Treatment as above. If patient not able to take oral medication, give 10 mg base/kg loading dose IV followed by 5 mg base/kg IV every 8 hours. Switch to oral clindamycin (oral dose as above) as soon as patient can take oral medication. For IV use, avoid rapid administration. Treatment course = 7 days.

Investigational new drug (contact CDC for information): Artesunate, followed by one of the following: atovaquone-proguanil (Malarone™), clindamycin, or mefloquine.
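Two recurring patterns in the tables above, weight-band tablet counts and paired base/salt amounts, reduce to simple lookups. A minimal sketch in Python (names are my own; the tablet counts are copied from option A, and the base fractions are derived from the paired doses quoted above):

```python
# 1) Atovaquone-proguanil (Malarone) once-daily tablet count, by weight band,
#    for the 3-day course in option A above. Weights below 5 kg are off-table.
MALARONE_BANDS = [
    (8, "2 peds tabs"), (10, "3 peds tabs"), (20, "1 adult tab"),
    (30, "2 adult tabs"), (40, "3 adult tabs"),
]

def malarone_daily_dose(weight_kg: float) -> str:
    if weight_kg < 5:
        raise ValueError("below the lowest weight band (5 kg) in the table")
    for upper, dose in MALARONE_BANDS:
        if weight_kg <= upper:
            return dose
    return "4 adult tabs"  # >40 kg: the adult dose, never exceeded for children

# 2) Salt -> base conversion using the fixed fractions implied by the paired
#    doses above (e.g., 10 mg salt/kg of quinidine gluconate = 6.25 mg base/kg).
BASE_FRACTION = {
    "quinidine gluconate": 6.25 / 10,
    "quinine sulfate": 8.3 / 10,
    "chloroquine phosphate": 600 / 1000,
}

def salt_to_base(drug: str, mg_salt: float) -> float:
    return mg_salt * BASE_FRACTION[drug]

print(malarone_daily_dose(12))                       # -> '1 adult tab'
print(salt_to_base("quinidine gluconate", 10 * 70))  # 70-kg load: 437.5 mg base
```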
All animal rabies vaccines should be restricted to use by, or under the direct supervision of, a veterinarian. In comprehensive rabies-control programs, only vaccines with a 3-year duration of immunity should be used; this constitutes the most effective method of increasing the proportion of immunized dogs and cats in any population (see Part II). All vaccines must be administered in accordance with the specifications of the product label or package insert. If administered intramuscularly, the vaccine must be given at one site in the thigh.

# Compendium of Animal Rabies Control, 1999

National Association of State Public Health Veterinarians, Inc.*

The purpose of this Compendium is to provide information on rabies control to veterinarians, public health officials, and others concerned with rabies control. These recommendations serve as the basis for animal rabies-control programs throughout the United States and facilitate standardization of procedures among jurisdictions, thereby contributing to an effective national rabies-control program. This document is reviewed annually and revised as necessary. Immunization procedure recommendations are contained in Part I; all animal rabies vaccines licensed by the United States Department of Agriculture (USDA) and marketed in the United States are listed in Part II; Part III details the principles of rabies control.

# D. Vaccination of Wildlife and Hybrid Animals

The efficacy of parenteral rabies vaccination of wildlife and hybrids (the offspring of wild animals crossbred to domestic dogs and cats) has not been established, and no such vaccine is licensed for these animals. Zoos or research institutions may establish vaccination programs that attempt to protect valuable animals, but these should not replace appropriate public health activities that protect humans.

# E. Accidental Human Exposure to Vaccine

Accidental inoculation may occur during administration of animal rabies vaccine. Such exposure to inactivated vaccines constitutes no rabies hazard.

# F. Identification of Vaccinated Animals

All agencies and veterinarians should adopt the standard tag system. This practice will aid the administration of local, state, national, and international control procedures. Animal license tags should be distinguishable in shape and color from rabies vaccine tags. Anodized aluminum rabies tags should be no less than 0.064 inches in thickness.

1. Rabies Tags.

4. Adjunct Procedures. Methods or procedures that enhance rabies control include the following:

a. Licensure. Registration or licensure of all dogs, cats, and ferrets may be used to aid in rabies control. A fee is frequently charged for such licensure, and revenues collected are used to maintain rabies- or animal-control programs. Vaccination is an essential prerequisite to licensure.

b. Canvassing of Area. House-to-house canvassing by animal-control personnel facilitates enforcement of vaccination and licensure requirements.

c. Citations. Citations are legal summonses issued to owners for violations, including the failure to vaccinate or license their animals. The authority for officers to issue citations should be an integral part of each animal-control program.

d. Animal Control. All communities should incorporate stray animal control, leash laws, and training of personnel in their programs.

5. Postexposure Management. Any animal potentially exposed to rabies virus (see Part III.A.1.
Rabies Exposure) by a wild, carnivorous mammal or a bat that is not available for testing should be regarded as having been exposed to rabies.

a. Dogs, Cats, and Ferrets. Unvaccinated dogs, cats, and ferrets exposed to a rabid animal should be euthanized immediately. If the owner is unwilling to have this done, the animal should be placed in strict isolation for 6 months and vaccinated 1 month before being released. Animals with expired vaccinations need to be evaluated on a case-by-case basis. Dogs, cats, and ferrets that are currently vaccinated should be revaccinated immediately, kept under the owner's control, and observed for 45 days.

b. Livestock. All species of livestock are susceptible to rabies; cattle and horses are among those most frequently infected. Livestock exposed to a rabid animal and currently vaccinated with a vaccine approved by the USDA for that species should be revaccinated immediately and observed for 45 days. Unvaccinated livestock should be slaughtered immediately. If the owner is unwilling to have this done, the animal should be kept under close observation for 6 months. The following are recommendations for owners of unvaccinated livestock exposed to rabid animals:

1) If the animal is slaughtered within 7 days of being bitten, its tissues may be eaten without risk of infection, provided liberal portions of the exposed area are discarded. Federal meat inspectors must reject for slaughter any animal known to have been exposed to rabies within 8 months.

2) Neither tissues nor milk from a rabid animal should be used for human or animal consumption. However, because pasteurization temperatures will inactivate rabies virus, drinking pasteurized milk or eating cooked meat does not constitute a rabies exposure.

3) Having more than one rabid animal in a herd or having herbivore-to-herbivore transmission is rare; therefore, restricting the rest of the herd if a single animal has been exposed to or infected by rabies may not be necessary.

c. Other Animals. Other animals bitten by a rabid animal should be euthanized immediately. Animals maintained in USDA-licensed research facilities or accredited zoological parks should be evaluated on a case-by-case basis.

6. Management of Animals That Bite Humans. A healthy dog, cat, or ferret that bites a person should be confined and observed for 10 days; administering rabies vaccine during the observation period is not recommended. Such animals should be evaluated by a veterinarian at the first sign of illness during confinement. Any illness in the animal should be reported immediately to the local health department. If signs suggestive of rabies develop, the animal should be euthanized, its head removed, and the head shipped under refrigeration (not frozen) for examination of the brain by a qualified laboratory designated by the local or state health department. Any stray or unwanted dog, cat, or ferret that bites a person may be euthanized immediately and the head submitted as described above for rabies examination. Other animals that might have exposed a person to rabies should be reported immediately to the local health department. Prior vaccination of an animal does not preclude the necessity for euthanasia and testing if the period of virus shedding is unknown for that species. Management of animals other than dogs, cats, and ferrets depends on the species, the circumstances of the bite, the epidemiology of rabies in the area, and the biting animal's history, current health status, and potential for exposure to rabies.
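The confinement and observation windows above (10 days for a healthy dog, cat, or ferret that bites a person; 45 days for a currently vaccinated exposed animal; 6 months' strict isolation for an unvaccinated one) are simple date arithmetic. A minimal sketch in Python (the labels are my own; 6 months is approximated as 182 days):

```python
# End date of the observation or isolation window named in the text above,
# counted from the date of the bite or exposure.
from datetime import date, timedelta

WINDOWS = {
    "healthy_biter_observation": timedelta(days=10),
    "vaccinated_exposed_observation": timedelta(days=45),
    "unvaccinated_exposed_isolation": timedelta(days=182),  # ~6 months
}

def window_end(start: date, situation: str) -> date:
    return start + WINDOWS[situation]

print(window_end(date(1999, 3, 1), "healthy_biter_observation"))  # 1999-03-11
```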
Postexposure management of persons should follow the recommendations of the ACIP.*

# C. Control Methods in Wildlife

The public should be warned not to handle wildlife. Wild mammals and hybrids that bite or otherwise expose persons, pets, or livestock should be considered for euthanasia and rabies examination. A person bitten by any wild mammal should immediately report the incident to a physician who can evaluate the need for antirabies treatment (see current rabies prophylaxis recommendations of the ACIP*).

1. Terrestrial Mammals. The use of licensed oral vaccines for the mass immunization of free-ranging wildlife should be considered in selected situations, with the approval of the state agency responsible for animal rabies control. Continuous and persistent government-funded programs for trapping or poisoning wildlife are not cost effective in reducing wildlife rabies reservoirs on a statewide basis. However, limited control in high-contact areas (e.g., picnic grounds, camps, or suburban areas) may be indicated for the removal of selected high-risk species of wildlife. The state wildlife agency and state health department should be consulted for coordination of any proposed vaccination or population-reduction programs.

2. Bats. Indigenous rabid bats have been reported from every state except Hawaii and have caused rabies in at least 32 humans in the United States. However, controlling rabies in bats by programs to reduce bat populations is neither feasible nor desirable. Bats should be excluded from houses and adjacent structures to prevent direct association with humans. Such structures should then be made bat-proof by sealing entrances used by bats. Persons with frequent bat contact should be immunized against rabies as recommended by the ACIP.*
The standard is designed to protect the health and safety of workers for an 8-hour day, 40-hour week over a working lifetime; compliance with the standard should therefore prevent adverse effects of toluene on the health and safety of workers. The standard is measurable by techniques that are valid, reproducible, and available to industry and government agencies. Sufficient technology exists to permit compliance with the recommended standard. "Exposure to toluene" means exposure to a concentration of toluene equal to or above one-half the recommended workroom environmental standard. Exposures at lower environmental concentrations will not require adherence to the following sections. If "exposure" to other chemicals also occurs, for example from contamination of toluene with benzene, provisions of any applicable standard for the other chemicals shall also be followed.

Occupational exposure to toluene shall be controlled so that workers shall not be exposed to toluene at a concentration greater than 100 parts per million parts of air (375 milligrams per cubic meter of air) determined as a time-weighted average (TWA) exposure for an 8-hour workday, with a ceiling of 200 parts per million parts of air (750 milligrams per cubic meter of air) as determined by a sampling time of 10 minutes (a worked example of this arithmetic appears below).

(b) Sampling, Collection, and Analysis. Procedures for collection and analysis of environmental samples shall be as provided in Appendix I or by any method shown to be equivalent in accuracy, precision, and sensitivity to the method specified.

Section 2 - Medical

Comprehensive preplacement and biennial medical examinations should be provided for all workers subject to "exposure to toluene." The examination should be directed towards, but not limited to, the incidence of headaches, nausea, and dizziness; particular attention should be focused on complaints and evidence of eye, mucous membrane, and skin irritation. Laboratory tests recommended at the time of the biennial examination include a complete blood count and urinalysis.

Section 3 - Labeling (Posting)

Administrative controls should also be used to reduce exposure. Respirators shall also be provided and used for nonroutine operations (occasional brief exposures above the TWA of 100 ppm and for emergencies); however, for these instances a variance is not required, but the requirements set forth below continue to apply. Appropriate respirators as described in Table I-1 shall only be used pursuant to the following requirements:

(1) For the purpose of determining the type of respirator to be used, the employer shall measure the atmospheric concentration of toluene in the workplace when the initial application for variance is made and thereafter whenever process, worksite, climate, or control changes occur which are likely to increase the toluene concentration. The employer shall ensure that no worker is being exposed to toluene in excess of the standard either because of improper respirator selection or fit.

(2) Employees experiencing breathing difficulty while wearing respirators shall be medically examined to determine their ability to wear the respirator.

[Table I-1 (fragmentary): respirator selection by multiple of the TWA. ...with half-mask or full facepiece; (2) air line respirator, demand type (negative pressure), with half-mask facepiece. Less than or equal to 50 times the TWA: (1) full face gas mask, chin style, with organic vapor canister; (2) supplied air respirator, demand type (negative pressure), with full facepiece.]

(2) Appropriate respirators shall be available for wear during evacuation.
(3) Appropriate extinguishants shall be available for use in toluene fires.

# (c) Exhaust Systems

Where a local exhaust ventilation system is used, it shall be designed and maintained to prevent the accumulation or recirculation of toluene vapor into the workroom.

# (d) General Housekeeping

Emphasis shall be placed upon cleanup of spills, inspection and repair of equipment and leaks, and proper storage of materials.

# (e) Disposal

(1) The disposal of waste toluene and of materials contaminated with it shall be in accordance with applicable regulations. (2) Toluene or toluene-containing materials should not be discharged into drains or sewers.

# Monitoring and Reporting Requirements

Workroom areas where it has been determined, on the basis of an industrial hygiene survey or the judgment of a compliance officer,

# Historical Reports

Early reports on the health effects resulting from exposure to toluene described its toxicity as being similar to that of benzene. Loss of coordination was pronounced and reaction time was definitely impaired. The symptoms from 50 to 200 ppm were considered by the author to be due chiefly to "psychogenic" factors rather than to toluene vapor. The author recommended that the vapor concentration of toluene should never exceed 200 ppm. In addition, changes in the blood and bone marrow were noted, and exposures to concentrations of toluene over 500 ppm were considered to pose a risk of depression of the bone marrow. The benzene content of the toluene was not reported. Neither the total composition of the glue nor the purity of the toluene was specified. Two hours after the incident, the toluene concentration was measured and found to range from 5,000 to 10,000 ppm. When one considers that the lower flammable limit of toluene is 12,000 ppm, the hazard encountered was not only a health hazard but also one of fire or explosion.

# Furnas and Hine

The information in Table IV-2 is obtained from a small sampling, but provides a typical example of the accuracy and precision of the method, excluding any sampling error.

# Sorbability of Toluene on Charcoal

(g) Reagents: (1) Spectroquality carbon disulfide. (2) Toluene, preferably chromatoquality grade. (3) Bureau of Mines Grade A helium. (4) Prepurified hydrogen. (5) Filtered compressed air. This may be indicated as a range of maximum amount, i.e., 10-20% by volume; 10% maximum by weight.
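Because the TWA and ceiling limits above are arithmetic definitions, a short sketch may help make them concrete. The following Python fragment is illustrative only (the constants and helper names are ours, not part of any NIOSH tool); the ppm-to-mg/m3 conversion assumes 25 degrees C and 1 atm, which reproduces the rounded 375 mg/m3 figure in the standard.

```python
# Sketch of the exposure-limit arithmetic behind the recommended toluene
# standard (TWA 100 ppm / ceiling 200 ppm). Illustrative helper names only.

MW_TOLUENE = 92.14      # g/mol
MOLAR_VOLUME = 24.45    # L/mol of air at 25 degrees C and 1 atm

def ppm_to_mg_m3(ppm: float, mol_weight: float = MW_TOLUENE) -> float:
    """Convert a vapor concentration in ppm to mg/m3 (25 C, 1 atm)."""
    return ppm * mol_weight / MOLAR_VOLUME

def eight_hour_twa(samples: list[tuple[float, float]]) -> float:
    """Time-weighted average over an 8-hour shift.

    `samples` is a list of (concentration_ppm, duration_hours) pairs;
    the durations should sum to 8 hours.
    """
    total_hours = sum(h for _, h in samples)
    assert abs(total_hours - 8.0) < 1e-6, "expected a full 8-hour shift"
    return sum(c * h for c, h in samples) / 8.0

if __name__ == "__main__":
    print(round(ppm_to_mg_m3(100), 1))   # ~376.9, rounded to 375 in the standard
    # A shift with one excursion: 6 h at 80 ppm, 1 h at 150 ppm, 1 h at 90 ppm.
    twa = eight_hour_twa([(80, 6), (150, 1), (90, 1)])
    print(round(twa, 1))                  # 90.0 ppm, below the 100 ppm TWA
```

Note that a shift can comply with the 100 ppm TWA while still violating the 200 ppm ceiling if any 10-minute sample exceeds 200 ppm; the two limits are checked independently.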
# Executive Summary

The Division of HIV/AIDS Prevention at the Centers for Disease Control and Prevention (CDC) was charged by the National HIV/AIDS Strategy to provide "technical assistance to localities to collect data to calculate community viral load". Guidance on Community Viral Load: A Family of Measures, Definitions, and Method for Calculation represents the work of over 50 Workgroup members from more than 25 jurisdictions. This Guidance document introduces the concept of community viral load and provides definitions of, and methods for calculating, community viral load and related measures.

During the past few years, there has been great interest in the scientific literature and at scientific meetings in a new public health HIV measure called "community viral load." It is a population-based measure of HIV-infected individuals' concentration of plasma HIV-1 RNA (viral load). When viral loads are summed across a community and then divided by the number of HIV-infected persons in the community, the average HIV community viral load represents the level of viremia for a geographic area during a defined time. Having a single number that is an indicator of HIV transmission potential and of the quality of HIV care and treatment for a geographic area is extremely attractive. For community viral load to be a robust measurement, it must reflect the viral loads of persons diagnosed with HIV, be a reliable measure that is also sensitive to viral load changes in the community, and reflect sexual behaviors or other HIV transmission risks in the community. This measure could be used by local HIV reporting jurisdictions each year to assess progress in treating HIV-infected persons with antiretroviral medications that would lower the community's viremia, which could, potentially, reduce transmission within the community. Community viral load may also have some utility in monitoring the progress of national HIV care and treatment objectives when assessed over time.

During the process of examining local and federal HIV surveillance activities and the data obtained from such activities, a number of issues were identified that may impact the ability of a jurisdiction to successfully and accurately estimate a population's viral load. These include:

- Population selection. A well-defined population should be chosen, possibly even a closed population, so that those who are at risk for transmitting HIV and those at risk for acquiring HIV can be identified and counted, all within a defined geographic area that may not correspond with jurisdictional boundaries.

- Varying definitions of "community viral load." Many analyses report "community viral load," and the methods and definitions used in each vary somewhat. The history of the community viral load concept and an inventory of community viral load analyses highlight these differences and the varying methods used to calculate community viral load (Appendix A and Table 2). Ideally, measures of community viral load and its related terms should use a common method that would, at a minimum, allow for periodic snapshots to assess changes over time by a jurisdiction, as well as allow for comparisons across localities.

- Complete and accurate surveillance/health data. Locations with universal health coverage or a common source of healthcare will fare better than those areas without such infrastructure because the latter will need to create composite data from disparate sources.
In addition, jurisdictions have reported significant amounts of missing viral load results for HIV-infected persons, which may bias community viral load estimates. This may be due to a large proportion of HIV-infected persons being out of care in a jurisdiction, or to data quality issues for a jurisdiction, or both.

This document proposes a family of viral load measurements, of which one is Community Viral Load (Figure 2); other measures include Population Viral Load, In-Care Viral Load, and Monitored Viral Load. Technical guidance for estimating Community Viral Load, In-Care Viral Load, and Monitored Viral Load is provided. Analytic methods and tools (spreadsheet and SAS code) for calculating mean viral load, percentage with suppressed viral load (≤200 copies/mL), percentage with undetectable viral load (≤50 copies/mL), and for assessing a statistical difference between two mean viral load measures will be made available by CDC to HIV surveillance coordinators.

While this Guidance document is intended primarily for HIV surveillance coordinators to be able to calculate Community Viral Load measures for their jurisdictions, it should be recognized that the ability to do so does not rest solely with the HIV surveillance system (see Appendix B for a description of surveillance and viral load issues). Using viral load measures to monitor the epidemic is a function of policy, care and treatment, and surveillance, as illustrated below:

[Figure: Policy, Care and Treatment, and Surveillance intersect]

The effective monitoring of Community Viral Load measures, and ultimately documenting progress towards the National HIV/AIDS Strategy goals of reduced HIV incidence and improved health outcomes for persons living with HIV infection in the United States (see Appendix E), requires translation of concepts developed in a few jurisdictions to relevant measures that can be used in all jurisdictions. This document acknowledges the complexity of doing so and offers practical options for measurements under various scenarios where policy, practice, and surveillance intersect.

# History of Community Viral Load

In early 2009, based on a mathematical model for South Africa, Granich et al. suggested that universal HIV testing and immediate treatment with antiretroviral medications among persons with heterosexually acquired HIV infection could lead to virtual elimination of HIV disease in 50 years. The underlying mechanism by which this could be possible is to reduce the amount of virus circulating in a person's bloodstream (HIV plasma viral load) by consistent and early use of antiretroviral medications. As the level of viremia and viral shedding decreases, infectiousness also decreases, greatly reducing or eliminating the transmission of HIV. A 2004 study in Taiwan demonstrated that free access to HIV antiretroviral medications for HIV-infected persons decreased HIV transmission.

The earliest use of "community viral load" occurred in 2008 at the Conference on Retroviruses and Opportunistic Infections by Ronald Stall. Based on the work of Millett et al., the high prevalence of untreated HIV infection was used to explain the disproportionately high rates of HIV infection among African American men who have sex with men (MSM). Due to lack of treatment and a resulting high viremia, the African American MSM community was described as having a much higher "community viral load" than other MSM communities, which accounted for more efficient transmission of HIV and higher infection rates.
One of the first cohort studies to examine HIV concentration levels in the blood of injecting drug users (IDUs), from Vancouver, Canada, was published in 2009. Wood et al. studied a well-described population of IDUs that was followed every six months for more than 11 years. HIV viral loads, HIV diagnostic testing, and antiretroviral therapy were monitored for this population, and community plasma HIV-1 RNA concentrations over time were ascertained. A statistically significant correlation between the median IDU community viral load and the incidence of new HIV infections was observed. Furthermore, in their analysis, Wood et al. found that when the community viral load among IDUs decreased to <20,000 copies/mL, the association with HIV incidence ceased.

In June 2010, Das et al. published a paper on community viral load in San Francisco. Das and colleagues found a significant association between a declining community viral load and a decline in new HIV diagnoses from 2004 to 2008. Additionally, the authors presented viral load data in novel ways that examined the geographic distribution of mean viral load by demographic, risk, and socioeconomic factors. Montaner et al. (2010) published data that helped provide the final link in the mechanism connecting viral loads and transmission. Using the community of British Columbia, Canada, they demonstrated that, over time, increased use of antiretroviral therapy was associated with a decrease in the population's viral load and, ultimately, a decrease in new HIV infections.

Collectively, these papers have galvanized the HIV prevention and surveillance communities and spurred analyses exploring the relationship between community viral load and newly diagnosed and reported cases and estimated HIV incidence (see Appendix A, Inventory of VL Analyses). In addition to ecological analyses suggesting benefits of reduction in community viral load for population-level HIV prevention, use of antiretroviral therapy by the HIV-seropositive partner has been shown to be associated with lower HIV incidence in the seronegative partner. Data from the HIV Prevention Trials Network (HPTN) 052/AIDS Clinical Trials Group (ACTG) 5245 have also found that antiretroviral treatment of an HIV-infected partner reduced transmission to the uninfected partner. These data lend support to the Test-and-Treat strategy, which proposes to decrease HIV transmission through the following two pathways:

1. HIV testing identifies HIV-infected persons who, after learning their status, adopt safer behaviors, which decreases HIV transmission.
2. HIV-infected individuals who initiate antiretroviral treatment, maintain high levels of adherence, and achieve viral suppression are less infectious, which decreases HIV transmission.

To optimize health outcomes, however, expanded testing efforts must be coupled with initiatives that ensure the newly diagnosed and those already known to have HIV infection are effectively linked to HIV care, receive antiretroviral treatment as indicated, and achieve optimal adherence and suppression of viral replication. (See Appendix D.)

# Establishing a Common Language

Researchers have interpreted community viral load differently (see Table 2). Viral load results for several of the groups shown in Figure 2 are typically not available in the local or national HIV surveillance databases. The shortcomings of available data and the nascent methodology for its modeling (including data imputations for persons in boxes C-E in Figure 2) have precluded the estimation of Population Viral Load to date.
An estimated 21% of HIV-infected Americans are living with undiagnosed HIV infection (box E, Figure 2); thus, their HIV viral loads are unknown but are likely detectable and elevated in the absence of antiretroviral therapy. These persons, who may practice unsafe sex while unaware of their HIV infection, are thought to contribute to the majority of new, sexually acquired HIV infections in the United States.

Viral load data are not available for all persons that have been diagnosed with HIV infection. A significant proportion of such persons are not in care at a given point in time (either never linked to HIV care or not retained in continuous care) and, therefore, may not have HIV viral load monitoring performed (box D, Figure 2). It can be assumed that most of these persons have detectable/not suppressed viral loads, as they are unlikely to be receiving continuous, effective antiretroviral therapy; however, the actual distribution of their viral loads is not known (in the absence of special studies). Finally, a fraction of persons that are in care may be missing HIV viral loads in HIV surveillance for a variety of reasons, including: incomplete or delayed reporting of viral loads from laboratories to the surveillance system, undetectable viral loads not being reportable, viral load results that have not been entered into the surveillance database, patient refusal of HIV disease monitoring, receiving medical care but being out of HIV care, and no viral load testing (box C).

Community Viral Load describes the viral load of all persons diagnosed with HIV infection in a given population. As with Population Viral Load, viral load measurements for persons that are 'diagnosed but not in care' (box D, Figure 2) are typically missing, as is information for those 'in care, no VL' (box C). In order to estimate Community Viral Load as accurately as possible, local public health jurisdictions should implement programmatic activities to expand and routinize HIV testing, so that the proportion 'undiagnosed' (box E) is as small as possible, and maximize linkage to and retention in care of HIV-diagnosed persons, so that the proportion 'diagnosed but not in care' (box D) is as small as possible. In jurisdictions where (i) persons that are in care but have missing viral loads and persons that are diagnosed but not in care (boxes C and D, respectively) combined account for less than 25% of persons, and in particular if (ii) additional data relating to care and health status are available (such as history of antiretroviral therapy, insurance status, opportunistic infections, and engagement in care), techniques such as multiple imputation may be considered in modeling Community Viral Load for the total diagnosed population (in and out of care). However, the limitations of multiple imputation and the lack of additional data make calculation of Community Viral Load not feasible for most jurisdictions at this time. Methodologies for such analyses are an active area of research.

Jurisdictions may consider calculating a variety of measures to describe Monitored Viral Load, including measures of central tendency and dispersion. One approach to examining the quality of HIV care, antiretroviral treatment uptake and adherence, and engagement in care is to determine the percent of persons virologically suppressed, or the percent that are undetectable (lowercase m, measurements).
Among persons that are receiving antiretroviral treatment, the proportion that achieves viral suppression is referred to as maximal virologic suppression and has been endorsed as a quality of HIV care measure by various national groups. Additional measures are discussed in the Technical Guidance section.

# In-Care

For jurisdictions with overlapping projects such as Test and Treat initiatives, the Medical Monitoring Project (MMP), or other clinically oriented projects that capture use of antiretroviral medications, percent suppression is a useful population-based measure of the penetration of the U.S. HIV Treatment Guidelines. HIV care measures calculated from HIV surveillance and clinically collected data will help to evaluate suppression rates among the different disproportionately affected populations highlighted in the National HIV/AIDS Strategy.

# Summary

It is challenging to estimate Population Viral Load, Community Viral Load, and In-Care Viral Load at this time. Jurisdictions able to address missing viral load data among diagnosed resident persons with HIV by using multiple imputation have been able to calculate Community Viral Load as defined in this document. Those jurisdictions able to impute missing viral load data would also be able to calculate an In-Care Viral Load. Jurisdictions that are unable to impute missing viral loads for diagnosed cases can calculate a Monitored Viral Load Measure.

# Technical Guidance

This section provides guidance on analytic decisions and methodologic considerations for conducting viral load (VL) analyses and calculating VL Measures using HIV surveillance data. Estimation of Community VL, In-Care VL, and Monitored VL, as described in Establishing a Common Language, is the focus of this section. Each subsection topic includes a description, a recommendation (or a discussion if no recommendation is provided), and an explanation that follows the recommendation. Selected standardized categorical VL measures are also defined so that comparisons across jurisdictions may be possible. These standardized measures and methodologic recommendations are enclosed within a grey box. An example of VL data to report is included in Figure 4, and a brief discussion of Population VL follows. Because of the variability in types of analyses, a checklist at the end of this section identifies key explanatory elements that should be described in the methods section of any Community VL or related analysis.

# Getting started

As described in Appendix B, Surveillance, a number of periodic, routine surveillance activities must occur to ensure HIV surveillance data are accurate and of sufficient quality. Those steps and others, which are described in this section, provide the framework for Figure 3, Decision tree for conducting viral load analyses. Note: Geographic analyses may be conducted as a subset of Community VL/In-Care VL or a subset of Monitored VL.

# Inclusion criteria for cases

Depending on the type of analysis an HIV jurisdiction plans to conduct, the case inclusion criteria may differ. For the purposes of standardization, however, the following criteria for case selection as part of a cross-sectional Community VL, In-Care VL, or Monitored VL analysis are recommended:

Explanation: HIV-infected adults and adolescents may transmit HIV through sex and injection drug use.
At the time of this writing, the latest full calendar year that could be examined is 2009, allowing for the minimum 12-month lag time for cases to be reported (i.e., using data reported through December 2010), data to be found and entered, etc. Using a calendar year and including persons alive at the end of that year is consistent with data requests from HRSA and most local HIV planners.

[Figure 3 residue: decision tree branches for geographic analyses (proceed with spatial VL/social determinants analysis), Community Viral Load, jurisdictional cases, and whether complete and current addresses are available (yes/no).]

For reporting cities within a state, those reporting jurisdictions will also need to conduct an intrastate deduplication process to determine which persons may have moved out of the jurisdiction but remain in the state. All states must participate in the interstate deduplication process. By the case inclusion criteria, HIV-infected persons who died during 2009 or earlier would be excluded. To keep track of those deaths, matching to state vital registry databases will help identify deaths among HIV-infected persons who died in state; matching to at least one national death registry (National Death Index or Social Security Death Index) is necessary to identify deaths that may have occurred out of state. Although each HIV surveillance jurisdiction has an incentive to keep track of its jurisdictional cases, residential cases are more important to understanding current HIV transmission in a locality.

# Sample size

The sample size needed to detect a 3-fold geometric mean (GM) difference between mean viral loads depends on the desired power and the standard deviation of the sample. Table 1a presents the sample sizes needed with 80% power to detect a difference at the 0.05 level for various GM differences. The dark row lists the sample size needed to detect a 3-fold difference in the GM; the sample size is found at the intersection of that row and the column for the standard deviation, S. The column S=1.2 reflects the standard deviation observed in national VL data.

Each jurisdiction will need to assess the standard deviation of their local VL data and then determine the appropriate sample size needed to assess VL.

Explanation: Sample size considerations must be taken into account when choosing the cases or study population to examine. For example, Monitored VL among African American men and women may be of great interest to a jurisdiction, but if the sample size is inadequate to meet the recommended case inclusion criterion, an alternate method may need to be used, such as combining multiple years of data. Jurisdictions may want to explore factors related to differences in means of viral loads, such as the difference in the proportion with undetectable or very low VL, which may be expressed as a categorical difference.
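The arithmetic behind a table like Table 1a can be reproduced with the standard two-sample normal-approximation formula for comparing means on the log10 scale. The sketch below is ours, not the Workgroup's Excel or SAS tool, and assumes equal group sizes and a two-sided test; with S = 1.2 and a 3-fold GM difference (a difference of log10 3, about 0.48, on the log scale), it yields roughly 100 persons per group.

```python
# Illustrative two-sample sample-size calculation on the log10 VL scale.
# Textbook normal-approximation formula; not the Workgroup's tool.
from math import ceil, log10
from scipy.stats import norm

def n_per_group(fold_difference: float, sd_log10: float,
                alpha: float = 0.05, power: float = 0.80) -> int:
    """Persons per group to detect a given fold-difference in geometric means."""
    delta = log10(fold_difference)      # difference of means on the log10 scale
    z_alpha = norm.ppf(1 - alpha / 2)   # 1.96 for alpha = 0.05
    z_beta = norm.ppf(power)            # 0.84 for 80% power
    return ceil(2 * ((z_alpha + z_beta) * sd_log10 / delta) ** 2)

print(n_per_group(3, 1.2))   # ~100 per group for a 3-fold GM difference, S = 1.2
```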
# Crude viral load data and differences in lower limits of detection

With the advent of newer assays for quantifying HIV viral load, the lower limit of detection (LLD) of these assays has steadily decreased, from <1000 copies/mL in the early 1990s to <400 copies/mL in the early 2000s to <50 copies/mL or fewer in recent years. Most persons that receive and adhere to antiretroviral therapy will have a viral load below the LLD, and a question arises as to how to handle their viral loads in the analyses.

We recommend that half of the LLD be used in the analyses of viral loads for persons who have VL below the limit of detection (whether a numeric result is provided or not) to approximate the likelihood of their actual viral load burden.

Explanation: The distribution of actual VL values below a test's LLD is unknown. Factors that may influence the VL distribution in a jurisdiction include the regimen of and adherence to antiretroviral therapy among the cohort, the sensitivity of the assay used in measuring these otherwise undetectable VL results, and possibly the combination of patient demographics and clinical characteristics of HIV-infected persons at the time. Some researchers have used zero or the value LLD minus one as the VL value for undetectable results in VL analyses. We chose half the value of the LLD as a compromise and for its ease of use. As more data become available, it will be possible to more accurately establish a value for VL results currently classified as undetectable. Nevertheless, for our purposes of providing a uniform method that can be compared across sites or within a site over time, use of half the LLD should be adequate. Consequently, when a viral load assay with LLD <50 copies/mL is used, undetectable viral load results should be replaced with 25 copies/mL for analysis.

When undertaking a calendar-time trend analysis, however, it may be necessary, because of the different viral load tests in use, to truncate the lower limit of detection and possibly the upper limit of detection.† After examining local VL data by year, it should become apparent where those cut points are needed. As an example, if a jurisdiction had complete and refined data from 2005-2009, it may be found that viral load tests with an LLD <400 copies/mL were in wide use in 2005 and 2006, but were then replaced by a more sensitive test with an LLD of 75 copies/mL by 2009. For such an example, test results of undetectable, or 400 copies/mL or less, would be collapsed into a single group and given a VL value of 200 (half the LLD) for each year, 2005 through 2009. VL analyses for each year would then be conducted using this common LLD of 400. If a single year, 2009, were planned as a point-in-time analysis, however, it would be appropriate to use 75 copies/mL as the LLD and use 37.5 as the value for analysis when results of ≤75 copies/mL or undetectable were reported. In either situation, the LLD or range of VL results used in analysis should be stated.

† Sites should assess their data and determine if an upper limit of detection is needed. Because of the recommended methods, providing a uniform upper limit of detection is not warranted in most circumstances. If, however, sites have a significant percentage of high VL results, use of an upper limit of detection may be prudent.
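A minimal sketch of the half-LLD substitution and the trend-analysis truncation described above, assuming a simple tabular layout (person id, reported result, and the reporting assay's LLD); the column names are illustrative, not an eHARS layout.

```python
# Illustrative half-LLD handling for VL analyses; column names are ours.
import pandas as pd

vl = pd.DataFrame({
    "person_id": [1, 2, 3, 4],
    "result":    [None, 35_000, None, 180],   # None = reported undetectable
    "assay_lld": [50, 50, 400, 75],           # copies/mL
})

# Point-in-time analysis: undetectable -> half the reporting assay's LLD.
vl["vl_analysis"] = vl["result"].fillna(vl["assay_lld"] / 2)

# Calendar-time trend analysis: collapse to a common LLD (here 400 copies/mL,
# the least sensitive assay in use) so that years remain comparable.
COMMON_LLD = 400
vl["vl_trend"] = vl["vl_analysis"].where(vl["vl_analysis"] > COMMON_LLD,
                                         COMMON_LLD / 2)
print(vl)
```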
# Multiple imputation of missing viral load

Although only 15 of 26 jurisdictions responded to an informal survey conducted among Workgroup members, most jurisdictions reported missing VL information for more than 30% of their cases. VL results may be missing for persons not in care, persons in care but without a VL, and persons who have died or moved away, as described in Appendix B, Table B1. This discussion of multiple imputation of missing viral load for calculating Community VL and In-Care VL applies to data that have been refined after completing the data management processes described in Appendix B, Surveillance.

We recommend that imputation of missing VL data only be attempted if <25% of cases are missing VL AND jurisdictions are collecting supplemental clinical data.

Explanation: Two groups of HIV-infected persons have, by surveillance definition, missing VL results: those "in care, no VL" and those "diagnosed but not in care" (boxes C and D, respectively, Figure 2). After data refinement, if the number of persons "in care, no VL" is small, jurisdictions may want to conduct follow-up to better characterize these persons. This information may help inform approaches for data imputation for this group. Jurisdictions that routinely collect supplemental information, such as use of antiretroviral medications, engagement in care, insurance, and co-morbidities, may wish to explore data imputation using published methods that would enable them to calculate a Community VL or In-Care VL. However, jurisdictions should carefully weigh the benefits and limitations of multiple imputation before embarking on such analyses. Methods like multiple imputation, which extrapolate from available surveillance VL data to the remainder of the population, may not be appropriate for surveillance data. A fundamental requirement of imputation, that data are missing at random and that the distribution of values in the unknown group mirrors the distribution of values in the known, may not be satisfied. The fundamental problem for a "diagnosed but not in care" case identified from surveillance data (i.e., lacking VL data in a surveillance system that routinely receives fairly complete VL data from labs) is that the case is not in care or has moved out of the jurisdiction. The viral load distribution of cases not in care cannot be modeled from the distribution of those that are in care. The Workgroup is not aware of any supplemental data available to the majority of surveillance jurisdictions at this time that could address this lack of information. Even if supplemental information is available to support randomness or similarity (it would likely derive from research samples that also reflect patients in care), there is no clear cut-off for how much information may be imputed (e.g., the recommendation of 75% as the minimum threshold for data completeness). The usefulness of supplemental data for "filling in the blanks" depends on the relevance, completeness, and accuracy of the data; it also depends on the homogeneity of the population that is being modeled. Jurisdictions are advised to consult with statisticians familiar with the methodology when planning analyses that involve imputation.

# Mean viral load

The early VL and prevention studies used the median viral load as the summary analytic measure for community viral load; others have used the mean of the VL results. Because successful clinical use of antiretroviral therapy results in viral suppression, jurisdictions that have successfully engaged HIV-infected persons in care may have over 75% of such persons with undetectable VL; the median would then be undetectable, so the mean has been used.

We recommend that calculation of Community VL and its related viral load Measures (see Figure 2) be performed after transformation of viral load results onto the logarithmic base 10 scale, followed by calculation of the mean.

Explanation: The rationale for this logarithmic transformation is that it helps to normalize the distribution of viral load values and reduces the influence of outlying measurements for persons having extreme viremia (due to acute infection, advanced HIV disease, concurrent sexually transmitted infections, or random variability).‡ Since the interpretation of viral load measurements is often more intuitive on a linear scale, we recommend calculation of the geometric mean (GM) for viral load.

‡ This is similar to using the median VL.
The GM is thus calculated through log transformation by averaging the log-transformed values and transforming the average back to the original (linear) scale. The base used for the log transformation has no effect on the final GM estimate. However, using log base 10 has an advantage in its relationship to the value on the original scale; for example, a value of 2 on the log10 scale is 100 on the original scale, 3 corresponds to 1,000, 4 corresponds to 10,000, and so forth. While our recommendation for calculating the mean viral load is limited to Population VL, Community VL, In-Care VL, and Monitored VL, jurisdictions may wish to calculate a mean VL for a subpopulation. In addition, a geometric mean could be useful in combination with categorical VL measurements (see below). For example, the population can be described in terms of the fraction with undetectable or suppressed VLs and the (geometric) mean of the remaining population.

# Selection of viral load results for analysis

Once the time frame for analysis is decided, the handling of VL values for individuals that may have more than one VL result needs to be considered. Depending on the type of analysis, jurisdictions may want to average the VL results for each individual and then take the mean of those average VLs to compute the Community VL, In-Care VL, or Monitored VL. For other analyses, it may be more appropriate to use the highest, lowest, or most recent VL result. (See Other viral load methods and analyses later in this section.) As part of the standard case inclusion criteria for a calendar year, the following VL should be selected for analysis:

Use the most recent VL result per person for analysis; this would be the specimen obtained in 2009 that is the closest to December 31. If there is no VL specimen for a case in 2009, that case would be excluded and would belong to either the "In care, no VL" or "Diagnosed but not in care" boxes of Figure 2 (boxes C and D, respectively).

Explanation: The examination of a single VL result (most recent) is the least restrictive definition of 'being in care.' Whereas other new measures or definitions of 'in care' require at least two lab results within a specified timeframe of each other in a 12-month period, no such requirement is necessary for this standardized, point-in-time examination.

The Community Viral Load Workgroup has developed and tested an Excel spreadsheet that allows for calculation of mean viral load, proportion suppressed, proportion undetectable, and a Z-test to identify a statistically significant difference between two mean viral loads. This spreadsheet accommodates small datasets. A generic SAS program that incorporates the methodologic guidelines and standards set forth in this document will also be available. The SAS code will be created so that large volumes of data from eHARS can be used for VL calculation by all HIV surveillance jurisdictions.
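As a concrete illustration of the recommended computation (most recent VL per person, log10 transformation, geometric mean, and a Z-test comparing two mean log10 VLs), here is a sketch in Python rather than the Workgroup's Excel/SAS tools; the data layout and column names are assumptions, and undetectables are taken to have been set to half the LLD already.

```python
# Illustrative GM-of-most-recent-VL computation; not the Workgroup's SAS code.
# Assumed layout: one row per VL result with person id, specimen date, value.
import numpy as np
import pandas as pd
from scipy.stats import norm

labs = pd.DataFrame({
    "person_id": [1, 1, 2, 3, 3],
    "specimen_date": pd.to_datetime(
        ["2009-03-01", "2009-11-15", "2009-06-20", "2009-02-10", "2009-09-05"]),
    "vl": [25, 25, 48_000, 900, 150],   # undetectables already set to half LLD
})

# Most recent result per person within the calendar year (closest to Dec 31).
recent = (labs.sort_values("specimen_date")
              .groupby("person_id", as_index=False).last())

log_vl = np.log10(recent["vl"])
gm = 10 ** log_vl.mean()          # geometric mean, back on the copies/mL scale
print(f"GM = {gm:.0f} copies/mL")

def z_test(log_a: pd.Series, log_b: pd.Series) -> float:
    """Two-sided p-value for a difference between two mean log10 VLs."""
    se = np.sqrt(log_a.var(ddof=1) / len(log_a) + log_b.var(ddof=1) / len(log_b))
    z = (log_a.mean() - log_b.mean()) / se
    return 2 * (1 - norm.cdf(abs(z)))
```

The back-transformation in the second-to-last step is what makes the result a geometric rather than an arithmetic mean; averaging the raw copies/mL values would let a handful of extreme viremia measurements dominate the estimate.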
# Categorical measures of viral load

Viral load results may be grouped into various clinically meaningful categories. Tracking VL categories for a population yearly (e.g., using histograms) may aid in the interpretation of changes in VL results in the population over time, concomitant with changes in the proportion of patients receiving highly active antiretroviral therapy or adherence support, or due to other factors. We recommend use of three categorical VL measures: suppressed/not suppressed, undetectable, and high VL. Although the VL cut points that define each of these categories may not meet all needs, given the nature of ever-evolving VL test technology, we have proposed definitions so that all HIV jurisdictions can report using standard definitions of these measures. The three standardized measures are as follows:

Report the percentage ≤200 copies/mL as suppressed as a categorical VL measure. Conversely, the percentage >200 copies/mL may be reported as not suppressed.

Explanation: A measurement that may be useful in describing the disease status of local PLWH is the percentage of persons that have viral load below the LLD or a suppressed viral load, defined as VL ≤200 copies/mL by the latest U.S. HIV Treatment Guidelines. The percentage of persons that have viral load suppression may serve as a rough proxy indicator of the combined access to and adherence to antiretroviral therapy in a given population.

Report the percentage ≤50 copies/mL as undetectable as a categorical VL measure.

Explanation: This measure can only be assessed in jurisdictions where recent sensitive assays (LLD of 50 copies/mL or lower) are consistently used by laboratories. While this measure is most informative for a clinical setting among persons on antiretroviral therapy, it could also be applied to populations receiving care from a specific facility or across all persons in care using surveillance data.

Report the percentage >100,000 copies/mL as high VL, in need of treatment, as a categorical VL measure.

Explanation: To better characterize the spectrum of viral loads in the population, some jurisdictions may opt to quantify the proportion of VL results that are considered very high (i.e., >100,000 copies/mL), which may indicate high potential transmission risk for a group or the healthcare challenges facing a group in need of immediate antiretroviral therapy.

Another potential categorical measure that jurisdictions may want to examine is the proportion of HIV-infected persons that meet a standardized definition of in care, e.g., at least two lab results obtained within a calendar year that are at least 60 days apart. Categorical measures might be more informative than means and may also be more comprehensible to planners, evaluators, policy makers, and others. These measures provide the VL distribution and have the advantage of not implying a normal distribution of values (as means may be perceived by the public). The actual distribution of VL results in surveillance populations is highly skewed. A large proportion of the population has undetectable or very low values (e.g., <200 copies/mL), and the remainder of values are spread across a wide range. Distributions such as these are not well represented by a single measure, especially one that implies a normal distribution. Jurisdictions should examine their data to determine the local viral load distribution and the best measures to describe it, including outlier cutoff values, etc. Other measures are listed in Table 2. Such distributions can also be stratified by CD4+ T-lymphocyte count to reflect stage of illness and the U.S. HIV Treatment Guidelines recommendations on initiation of antiretroviral therapy.
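A minimal sketch of the three standardized categorical measures, assuming one most-recent VL per person with the half-LLD substitution already applied; the function and variable names are illustrative.

```python
# Illustrative calculation of the three standardized categorical VL measures.
import pandas as pd

def categorical_measures(vl: pd.Series) -> dict:
    """Percent suppressed, undetectable, and high VL for a vector of VLs."""
    n = len(vl)
    return {
        "pct_suppressed":   round(100 * (vl <= 200).sum() / n, 1),
        "pct_undetectable": round(100 * (vl <= 50).sum() / n, 1),   # needs LLD <= 50
        "pct_high_vl":      round(100 * (vl > 100_000).sum() / n, 1),
    }

most_recent_vl = pd.Series([25, 48_000, 150, 210_000, 25])
print(categorical_measures(most_recent_vl))
# {'pct_suppressed': 60.0, 'pct_undetectable': 40.0, 'pct_high_vl': 20.0}
```

As the text notes, the undetectable percentage is only meaningful where assays with an LLD of 50 copies/mL or lower are in consistent use; otherwise only the suppressed and high-VL measures can be reported.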
# Address and residency

For analyses that use address (geospatial analyses or linkage to Census geographic data on social determinants), the current address, if available, should be used. A current address is also useful in establishing whether a jurisdictional case may have moved. While most jurisdictions use the patient address from lab reporting, this information is useful only if the address is accurate. Patients with post office boxes or incomplete or inaccurate street addresses cannot be geocoded and placed with certainty on a map. A timeframe within which the current address should be updated could not be agreed upon by the Workgroup. Any analyses including cases by residence or using geographic analyses should specify when addresses were updated and the percentage of cases that could not be geocoded with a high level of confidence.

# Population Viral Load

The Technical Guidance section provides methodologic recommendations for calculating Community VL, In-Care VL, and Monitored VL. Although Population VL is the best Measure for assessing HIV transmission, the information needed for its calculation is not available. However, a component VL measure that is of interest is an estimation of the number of undiagnosed HIV-infected persons. A back-calculation method estimated that almost 250,000 HIV-infected persons are undiagnosed (21% of the total number of HIV-infected persons) in the United States. Efforts are underway to also estimate the relative proportion of undiagnosed HIV-infected persons by state or metropolitan statistical area. These methods will be available to state health departments in 2011. VLs will vary greatly among undiagnosed persons, depending on how recently each person was infected. However, they may be assumed not to have suppressed VLs.

# Other viral load methods and analyses

Although this Guidance document suggests using one VL result (the most recent) of the possibly many VL results for a person and recommends a method for calculating VL Measures, viral loads may be analyzed differently. Table 2 includes different VL measures that have been calculated and reported. Each measure has particular strengths and uses. Appendix A lists the references associated with the VL analyses. The mean viral load provides an estimate of the average level of viral burden and is the most useful for comparing subpopulations and neighborhoods in a jurisdiction. Comparing means of most recent and total VLs at the geographic level may reflect disparities in access to and use of antiretroviral medications by neighborhood. Comparing means among different subpopulations within a jurisdiction can highlight disparities. The total VL is a useful measure for looking at the combination of HIV prevalence and viremia among prevalent cases. Depending on the type of analysis that is being conducted, VL results other than the most recent may be appropriate to use. If the purpose of the analysis is to evaluate a new measure of quality of care, the researcher may choose to report the proportion of cases that were virologically suppressed for a calendar year; thus, cases with even a single VL result >200 copies/mL would not be considered suppressed for the year. If the focus of the analysis is on acute antiretroviral need and possible lapses of optimal antiretroviral use, then the proportion of cases that had a single VL result >100,000 copies/mL may be used.
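These alternative per-person summaries are straightforward to derive from the same kind of lab-level table assumed in the earlier sketches; the aggregation names below are ours, and the rows are assumed to be in chronological order within each person.

```python
# Illustrative alternative per-person VL summaries from a lab-level table
# (one row per VL result); column names are ours, not an eHARS layout.
import pandas as pd

labs = pd.DataFrame({
    "person_id": [1, 1, 2, 2, 3],
    "vl":        [25, 180, 48_000, 120_000, 900],
})
per_person = labs.groupby("person_id")["vl"].agg(
    mean_of_year="mean",   # average of all results obtained during the year
    max_of_year="max",
)
# "Suppressed for the calendar year": no single result above 200 copies/mL.
per_person["suppressed_all_year"] = per_person["max_of_year"] <= 200
# "Acute need" flag: any single result above 100,000 copies/mL.
per_person["any_high_vl"] = per_person["max_of_year"] > 100_000
# "Total VL" style sum: last (most recent) result per person, summed over
# prevalent cases, combining prevalence with the level of viremia.
total_vl = labs.groupby("person_id")["vl"].last().sum()
```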
Although we recommend that the most recent (to December 31) VL within a calendar year be used to calculate a VL Measure that is a more "current" snapshot, researchers may also want to use the mean of serial VL results obtained throughout the year for a person and then calculate the overall mean VL among all persons in a group for the year. Again, this document is not meant to be a comprehensive list of all possible VL analyses and measures but to point out ones in use and to recommend a common method for calculating VL Measures and reporting of component and a limited number of categorical VL measures.

# Summary for data reporting (see the Checklist below)

For each type of population-based analysis (Population VL, Community VL, In-Care VL, and Monitored VL), the study population should be described and its size should be reported. The size and relative percentage of each of the component VL measures (boxes A-E), as shown in Figure 2, that define each VL Measure should also be reported.

# Examples for Data Reporting (bottom up)

Jurisdictions should report: 1) the component VL measures, e.g., the percentage of persons "Diagnosed but not in care" in addition to the percentages "In care and with undetectable VL," "In care with detectable VL," and "In care, no VL," when conducting In-Care VL Measurements; and 2) categorical VL measures that describe the population that is in care and being monitored, such as the percentage of persons with suppressed VL (i.e., ≤200 copies/mL), the percentage with high viral loads (i.e., >100,000 copies/mL), and the percentage undetectable (i.e., ≤50 copies/mL).

# Methods
- Describe any methods of adjustment for confounders (e.g., standardization, multivariable modeling)
- Describe any efforts to address potential sources of bias due to missing data, and any sensitivity analyses (e.g., imputation)

# Results
- Give characteristics of the study population and univariate summaries of outcome measures
- Report variability (e.g., standard deviation) and precision (e.g., 95% confidence intervals) around estimates
- Present results from adjusted analyses, controlling for potential confounders

# Discussion
- Discuss limitations of the analyses, taking into account potential sources of bias or imprecision; discuss both the direction and the likely magnitude of the bias (e.g., under- or overestimation of Community VL)
- Interpret findings in the context of prior analyses; highlight differences and similarities

# Public Health Surveillance

Surveillance is the foundation of public health. It involves the ongoing, systematic collection, analysis, interpretation, and dissemination of data used by public health to reduce morbidity and mortality and to improve health. Each surveillance system has a number of attributes that contribute to its overall effectiveness. These attributes include: simplicity, flexibility, data quality, acceptability, sensitivity, predictive value positive, representativeness, completeness, timeliness, and stability. To ensure ongoing improvement in data quality, efficiency, and usefulness, each surveillance system should be periodically evaluated. All CDC data originate locally.
These data represent a commitment and collaboration between states or jurisdictions and CDC to collect the data deemed most vital in fulfilling local and national public health missions. CDC works with local reporting jurisdictions and the Council of State and Territorial Epidemiologists (CSTE) to establish model language and uniformity in state reporting regulations and standardization of data collected for surveillance. CDC works with state and local public health counterparts to decide on national data quality standards, policies, and procedures that can be used to meet surveillance data quality standards. Jurisdictions match their case registries against state death records and also against at least one national death registry to ensure an accurate count of the number of people living with HIV disease in their jurisdiction.

# HIV Surveillance

While accurate case counts are important for understanding disease burden, and knowing where HIV-infected persons seek medical care is important for resource allocation, it is equally important to understand where and how new cases arise so that prevention efforts can be targeted. Jurisdictional boundaries and residency do not predict where healthcare is sought or risk behaviors occur. For example, persons may cross into an adjacent state to receive health services or engage in high-risk behaviors in a neighboring jurisdiction. Some states have developed lab reporting regulations based on both the patient's residence and the patient's place of care to accurately describe care patterns. As part of understanding HIV transmission, surveillance areas with residents that cross state lines to engage in high-risk HIV behaviors should take this into consideration when conducting any assessment of HIV transmission potential for their residents.

# Assessing those in care

In general, HIV surveillance relies on reported lab test (i.e., CD4+ T-lymphocyte count or viral load) results as a marker of persons receiving health care and thus being "in care." Where it is possible to identify HIV-infected persons who are receiving care but lack lab test results, such as persons identified by linking to Ryan White Care AIDS Drug Assistance Program databases, such cases can be followed up to address the reasons for missing lab results. To determine whether someone was linked to and retained in ongoing HIV healthcare, serial lab results over an extended period would need to be observed in surveillance data.

A limited number of HIV jurisdictions are able to conduct ongoing medical chart review of HIV-infected persons that reside in their jurisdiction. Those jurisdictions have collected supplemental data that are not routinely collected as part of HIV surveillance, such as engagement in care, insurance, and use and type of antiretroviral medications. Other jurisdictions have been funded to participate in special surveillance/prevention activities and, as such, may also be conducting ongoing medical chart reviews and collecting similar types of supplemental data. States annually estimate the proportion of HIV-infected persons living in their state that are in and out of care. Some HIV reporting jurisdictions may report for their jurisdictional cases, not residential cases, because they are more likely to have complete information for their jurisdictional cases.

# Viral load test results

To ensure that all (detectable and undetectable) viral load results are received by HIV surveillance, jurisdictions must expend great time and resources.
As a first step, state public health reporting laws or regulations provide the authority for State Health Departments to receive such information from laboratories. It may be necessary to include language that specifies that all (detectable and undetectable) HIV viral load results are reportable. This type of policy lays the foundation for receiving viral load results but cannot account for the personal interaction needed to establish relationships with laboratories for reporting, or for the ongoing work needed to ensure continued reporting. The magnitude of this effort can be appreciated from the fact that some jurisdictions have over 80 laboratories that report viral load results.

The volume of viral load reporting is large and is likely to increase: U.S. HIV Treatment Guidelines recommend periodic assessments among persons that are on antiretroviral therapy or whose HIV disease is being monitored, and the number of persons living with HIV is increasing. It is not uncommon for a laboratory to cease VL reporting for a short period as new laboratory staff are hired or veteran laboratory staff take vacations. To identify such lapses in reporting, HIV surveillance jurisdictions monitor laboratories for their volume of reporting over time. The complete absence of any lab test result (CD4+ T-lymphocyte count or viral load) may suggest the person is not in care, moved away, is receiving care out of jurisdiction, or died, or that there is a significant reporting issue (see Table B1). There are some instances, however, where an HIV-infected person resides and receives care within an HIV reporting jurisdiction but no medical care information is shared with HIV surveillance. A limited number of federal and private facilities do not share or allow HIV surveillance staff to have access to medical records of persons being cared for by the facility. Efforts to address these barriers are underway with some facilities.

As technology has advanced, laboratories are moving toward reporting lab results electronically, but many laboratories still do not have this capability. Some HIV reporting jurisdictions have a backlog of paper lab results, and some jurisdictions with early electronic lab reporting have maintained electronic HIV lab reports in a separate database apart from eHARS. Viral load results reported on paper that have not been entered into eHARS are not available for analysis. Separately stored electronic lab files that have not been entered into eHARS may also not be accessible for analytic purposes. The extent of missing lab results is best described by a recent survey of selected HIV reporting jurisdictions: among the 15 jurisdictions that responded, the percentage of persons living with HIV and reported to surveillance who were missing a viral load result ranged from 26% to 89%. This large range of missing viral load results may be attributable to the many factors described in Table B1.

The issue of jurisdictional versus residential case status has some implications for missing lab results. Persons diagnosed in one jurisdiction, by convention, remain in the database as a case in that jurisdiction even if the person moves across the country and receives care in their new home state. The recommended procedure for each HIV reporting area is to enter all lab results into their HIV database, regardless of jurisdictional or residential case status.
After all jurisdictions upload their data to CDC, it may be possible to link lab results to jurisdictional cases that have moved; at this time, there is no mechanism that allows for sharing this information with local jurisdictions by CDC or between local jurisdictions directly. Early in the epidemic, when HIV diagnosis was often shortly followed by death, describing jurisdictional cases was reasonable. Today, with effective antiretroviral treatment available and the impetus to look more closely at transmission and prevention of new HIV infections, residential cases may also be an important population to surveil. # HRSA The Health Resources and Services Administration , HRSA, is one of several agencies, including CDC and NIH, under the leadership of U.S. Department of Health and Human Services. HRSA is responsible for improving access to health care services for people who are uninsured, isolated, or medically vulnerable. Since the early 1990s, the Ryan White HIV/AIDS Program has provided an array of HIV-related and health care services to help vulnerable HIV-infected persons manage their disease. Based on the numbers of persons infected with HIV, service needs among these persons, local resources available, and an assessment of unmet need and service gaps, HRSA provides states with funding to bridge these gaps. Medical care and antiretroviral medications (through the AIDS Drug Assistance Program) are just two services that HRSA funds through and in partnership with states. Each year, jurisdictions provide an epidemiologic description of HIV-infected persons to HRSA's HIV/AIDS Bureau. Most of the data used to describe the epidemic in a jurisdiction originates from HIV surveillance, so it is important to try to harmonize terms and processes that meet both HRSA and CDC needs. # Definitions and terms - Persons living with HIV (with and without AIDS) in a jurisdiction.  HIV surveillance would refer to these individuals as resident cases, which is distinct from jurisdictional cases. HIV surveillance case "ownership" is limited to jurisdictional cases that are residents at diagnosis. - Persons with unmet need are diagnosed with HIV infection but not in care for a defined 12month period .  HIV surveillance uses lab test results (CD4+ T-lymphocyte count or viral load) as a marker for being in care. Those persons without a lab result for 12 months or greater would be considered not in care (or out of care). HRSA includes lab results and receipt of antiretroviral medications within a 12-month period as markers of being in care. - Persons "in care" are receiving primary healthcare for their HIV disease. This care should be consistent with the U.S. HIV Treatment Guidelines .  HIV surveillance uses lab test results as a marker for being in-care. While ongoing evidence of laboratory testing is strongly suggestive of ongoing encounters with the healthcare system, the quality and appropriateness of these medical encounters cannot be fully assessed with routine HIV surveillance data. For most surveillance jurisdictions, with geographically dispersed cases and varying health facility-specific rules and policies, costs are prohibitive to collect detailed medical care information. Collecting this type of medical encounter information is possible for circumscribed populations of HIV-infected persons and is part of the CDC's Medical Monitoring Project (MMP), which includes a representative sample of HIV-infected persons who are engaged in HIV care. 
- HIV-infected persons are retained in care as assessed by at least two clinical visits in a year that are at least 60 days apart .  HIV surveillance uses lab results as a marker for being in-care. Based on the temporal distribution of the collection date for each lab test, it is possible to assess whether two medical encounters occurred within a calendar year, spaced at least 60 days apart. It would be difficult, however, for surveillance data to determine with certainty if the lab tests were part of the same continuous care or might represent fragmented care as, for example, one test may be ordered by an HIV healthcare provider and another test ordered by an emergency department physician. - Persons in care that begin antiretroviral treatment achieve virologic suppression after 6 month  HIV surveillance can provide a rough proxy assessment of reporting the proportion of persons in care that have viral load results ≤200 copies/mL for a calendar year. Because antiretroviral medication use and initiation is not collected by HIV case surveillance, this type of clinical measure is best addressed by MMP.
# Executive Summary

The Division of HIV/AIDS Prevention at the Centers for Disease Control and Prevention (CDC) was charged by the National HIV/AIDS Strategy to provide "technical assistance to [HIV reporting] localities to collect data to calculate community viral load" [1]. Guidance on Community Viral Load: A Family of Measures, Definitions, and Method for Calculation represents the work of over 50 Workgroup members from more than 25 jurisdictions. This Guidance document introduces the concept of community viral load and provides definitions of, and methods for calculating, community viral load and related measures.

During the past few years, there has been great interest in the scientific literature and at scientific meetings in a new public health HIV measure called "community viral load." It is a population-based measure of HIV-infected individuals' concentration of plasma HIV-1 RNA (viral load). When viral loads are summed across a community and then divided by the number of HIV-infected persons in the community, the resulting average HIV community viral load represents the level of viremia for a geographic area during a defined time. Having a single number that serves as an indicator of HIV transmission potential and of the quality of HIV care and treatment for a geographic area is extremely attractive. For community viral load to be a robust measurement, it must reflect the viral loads of persons diagnosed with HIV, be a reliable measure that is also sensitive to viral load changes in the community, and reflect sexual behaviors or other HIV transmission risks in the community. This measure could be used by local HIV reporting jurisdictions each year to assess progress in treating HIV-infected persons with antiretroviral medications that would lower the community's viremia, which could potentially reduce transmission within the community. Community viral load may also have some utility in monitoring the progress of national HIV care and treatment objectives when assessed over time.

During the process of examining local and federal HIV surveillance activities and the data obtained from such activities, a number of issues were identified that may affect the ability of a jurisdiction to successfully and accurately estimate a population's viral load. These include:

• Population selection. A well-defined population should be chosen, possibly even a closed population, so that those who are at risk for transmitting HIV and those at risk for acquiring HIV can be identified and counted, all within a defined geographic area that may not correspond with jurisdictional boundaries.

• Varying definitions of "community viral load." Many analyses report "community viral load," and the methods and definitions used vary somewhat across them. The history of the community viral load concept and an inventory of community viral load analyses highlight these differences and the varying methods used to calculate community viral load (Appendix A and Table 2). Ideally, measures of community viral load and its related terms should use a common method that would, at a minimum, allow for periodic snapshots to assess changes over time by a jurisdiction, as well as allow for comparisons across localities.

• Complete and accurate surveillance/health data. Locations with universal health coverage or a common source of healthcare will fare better than those areas without such infrastructure because the latter will need to create composite data from disparate sources.
In addition, jurisdictions have reported significant amounts of missing viral load results for HIV-infected persons, which may bias community viral load estimates. This may be due to a large proportion of HIV-infected persons being out of care in a jurisdiction, to data quality issues in a jurisdiction, or to both.

This document proposes a family of viral load measurements, of which one is Community Viral Load (Figure 2); other measures include Population Viral Load, In-Care Viral Load, and Monitored Viral Load. Technical guidance for estimating Community Viral Load, In-Care Viral Load, and Monitored Viral Load is provided. Analytic methods and tools (spreadsheet and SAS code) for calculating mean viral load, percentage with suppressed viral load (≤200 copies/mL), percentage with undetectable viral load (≤50 copies/mL), and for assessing a statistical difference between two mean viral load measures will be made available by CDC to HIV surveillance coordinators.

While this Guidance document is intended primarily for HIV surveillance coordinators to be able to calculate Community Viral Load measures for their jurisdictions, it should be recognized that the ability to do so does not rest solely with the HIV surveillance system (see Appendix B for a description of surveillance and viral load issues). Using viral load measures to monitor the epidemic is a function of policy, care and treatment, and surveillance, as illustrated in the figure depicting the intersection of these three domains. The effective monitoring of Community Viral Load measures, and ultimately documenting progress toward the National HIV/AIDS Strategy goals of reduced HIV incidence and improved health outcomes for persons living with HIV infection in the United States (see Appendix E), requires translation of concepts developed in a few jurisdictions into relevant measures that can be used in all jurisdictions. This document acknowledges the complexity of doing so and offers practical options for measurements under various scenarios where policy, practice, and surveillance intersect.

# History of Community Viral Load

In early 2009, based on a mathematical model for South Africa, Granich et al. suggested that universal HIV testing and immediate treatment with antiretroviral medications among persons with heterosexually acquired HIV infection could lead to virtual elimination of HIV disease in 50 years [2]. The underlying mechanism is reduction of the amount of virus circulating in a person's bloodstream (HIV plasma viral load) through consistent and early use of antiretroviral medications. As the level of viremia and viral shedding decreases, infectiousness also decreases, greatly reducing or eliminating the transmission of HIV [3,4]. A 2004 study in Taiwan demonstrated that free access to HIV antiretroviral medications for HIV-infected persons decreased HIV transmission [5].

The term "community viral load" was first used in 2008 by Ronald Stall at the Conference on Retroviruses and Opportunistic Infections [6]. Based on the work of Millett et al. [7], the high prevalence of untreated HIV infection was used to explain the disproportionately high rates of HIV infection among African American men who have sex with men (MSM). Due to lack of treatment and a resulting high viremia, the African American MSM community was described as having a much higher "community viral load" than other MSM communities, which accounted for more efficient transmission of HIV and higher infection rates.
One of the first cohort studies to examine HIV concentration levels in the blood of injecting drug users (IDUs), from Vancouver, Canada, was published in 2009 [8]. Wood et al. studied a well-described population of IDUs that was followed every six months for more than 11 years. HIV viral loads, HIV diagnostic testing, and antiretroviral therapy were monitored for this population, and community plasma HIV-1 RNA concentrations over time were ascertained. A statistically significant correlation between the median IDU community viral load and the incidence of new HIV infections was observed. Furthermore, Wood et al. found that when the community viral load among IDUs decreased to <20,000 copies/mL, the association with HIV incidence ceased.

In June 2010, Das et al. published a paper on community viral load in San Francisco [9]. Das and colleagues found a significant association between a declining community viral load and a decline in new HIV diagnoses from 2004 to 2008. Additionally, the authors presented viral load data in novel ways that examined the geographic distribution of mean viral load by demographic, risk, and socioeconomic factors. Montaner et al. (2010) published data that helped provide the final link in the mechanism connecting viral loads and transmission [10]. Using the community of British Columbia, Canada, they demonstrated that, over time, increased use of antiretroviral therapy was associated with a decrease in the population's viral load and, ultimately, a decrease in new HIV infections.

Collectively, these papers have galvanized the HIV prevention and surveillance communities and spurred analyses exploring the relationship between community viral load and newly diagnosed and reported cases and estimated HIV incidence (see Appendix A, Inventory of VL Analyses). In addition to ecological analyses suggesting benefits of reduction in community viral load for population-level HIV prevention, use of antiretroviral therapy by the HIV-seropositive partner has been shown to be associated with lower HIV incidence in the seronegative partner [3,11]. Data from the HIV Prevention Trials Network (HPTN) 052/AIDS Clinical Trials Group (ACTG) 5245 study have also shown that antiretroviral treatment of an HIV-infected partner reduced transmission to the uninfected partner [4]. These data lend support to the Test-and-Treat strategy [12,13], which proposes to decrease HIV transmission through the following two pathways:

• HIV testing identifies HIV-infected persons who, after learning their status, adopt safer behaviors, which decreases HIV transmission [14].

• HIV-infected individuals who initiate antiretroviral treatment, maintain high levels of adherence, and achieve viral suppression are less infectious, which decreases HIV transmission.

To optimize health outcomes, however, expanded testing efforts must be coupled with initiatives that ensure the newly diagnosed and those already known to have HIV infection are effectively linked to HIV care [15], receive antiretroviral treatment as indicated, and achieve optimal adherence and suppression of viral replication. (See Appendix D.)

# Establishing a Common Language

Researchers have interpreted community viral load differently (see Table 2). Viral load data for several of the groups shown in Figure 2 are typically not available in the local or national HIV surveillance databases. The shortcomings of available data and the nascent methodology for modeling them (including data imputation for persons in boxes C-E in Figure 2) have precluded the estimation of Population Viral Load to date.
An estimated 21% of HIV-infected Americans are living with undiagnosed HIV infection [16] (box E, Figure 2); thus, their HIV viral loads are unknown but are likely detectable and elevated in the absence of antiretroviral therapy. These persons, who may practice unsafe sex while unaware of their HIV infection, are thought to account for the majority of new, sexually acquired HIV infections in the United States [17].

Viral load data are not available for all persons that have been diagnosed with HIV infection. A significant proportion of such persons are not in care at a given point in time (either never linked to HIV care or not retained in continuous care) and, therefore, may not have HIV viral load monitoring performed (box D, Figure 2). It can be assumed that most of these persons have detectable/not suppressed viral loads, as they are unlikely to be receiving continuous, effective antiretroviral therapy; however, the actual distribution of their viral loads is not known (in the absence of special studies). Finally, a fraction of persons that are in care may be missing HIV viral load results in HIV surveillance for a variety of reasons, including incomplete or delayed reporting of viral loads from laboratories to the surveillance system, undetectable viral loads not being reportable, viral load results that have not been entered into the surveillance database, patient refusal of HIV disease monitoring, receiving medical care but being out of HIV care, and no viral load testing (box C).

Community Viral Load describes the viral load of all persons diagnosed with HIV infection in a given population. As with Population Viral Load, viral load measurements for persons that are 'diagnosed but not in care' (box D, Figure 2) are typically missing, as is information for those 'in care, no VL' (box C). In order to estimate Community Viral Load as accurately as possible, local public health jurisdictions should implement programmatic activities to expand and routinize HIV testing [18] so that the proportion of 'undiagnosed' persons (box E) is as small as possible, and should maximize linkage to and retention in care of HIV-diagnosed persons so that the proportion of 'diagnosed but not in care' (box D) is as small as possible. In jurisdictions where (i) persons in care with missing viral loads and persons diagnosed but not in care (boxes C and D, respectively) together account for less than 25% of persons and, in particular, where (ii) additional data relating to care and health status are available (such as history of antiretroviral therapy, insurance status, opportunistic infections, and engagement in care), techniques such as multiple imputation may be considered in modeling Community Viral Load for the total diagnosed population (in and out of care). However, the limitations of multiple imputation and the lack of additional data make calculation of Community Viral Load infeasible for most jurisdictions at this time. Methodologies for such analyses are an active area of research.

Jurisdictions may consider calculating a variety of measures to describe Monitored Viral Load, including measures of central tendency and dispersion. One approach to examining the quality of HIV care, antiretroviral treatment uptake, adherence, and engagement in care is to determine the percent of persons virologically suppressed, or the percent that are undetectable (lowercase m, measurements).
Among persons that are receiving antiretroviral treatment, the proportion that achieves viral suppression is referred to as maximal virologic suppression [16] and has been endorsed as a quality-of-HIV-care measure by various national groups. Additional measures are discussed in the Technical Guidance section.

# In-Care

For jurisdictions with overlapping projects such as Test and Treat initiatives, the Medical Monitoring Project (MMP), or other clinically oriented projects that capture use of antiretroviral medications, percent suppression is a useful population-based measure of the penetration of the U.S. HIV Treatment Guidelines. HIV care measures calculated from HIV surveillance and clinically collected data will help to evaluate suppression rates among the disproportionately affected populations highlighted in the National HIV/AIDS Strategy.

# Summary

It is challenging to estimate Population Viral Load, Community Viral Load, and In-Care Viral Load at this time. Jurisdictions able to address missing viral load data among diagnosed resident persons with HIV by using multiple imputation [9] have been able to calculate Community Viral Load as defined in this document. Those jurisdictions able to impute missing viral load data would also be able to calculate an In-Care Viral Load. Jurisdictions that are unable to impute missing viral loads for diagnosed cases can calculate a Monitored Viral Load Measure.

# Technical Guidance

This section provides guidance on analytic decisions and methodologic considerations for conducting viral load (VL) analyses and calculating VL Measures using HIV surveillance data. Estimation of Community VL, In-Care VL, and Monitored VL, as described in Establishing a Common Language, is the focus of this section. Each subsection includes a description; a recommendation, or a discussion where no recommendation is made; and an explanation that follows the recommendation. Selected standardized categorical VL measures are also defined so that comparisons across jurisdictions may be possible. These standardized measures and methodologic recommendations are enclosed within a grey box. An example of VL data to report is included in Figure 4, and a brief discussion of Population VL follows. Because of the variability in types of analyses, a checklist at the end of this section identifies key explanatory elements that should be described in the methods section of any Community VL or related analysis.

# Getting started

As described in Appendix B, Surveillance, a number of periodic, routine surveillance activities must occur to ensure HIV surveillance data are accurate and of sufficient quality. Those steps and others, which are described in this section, provide the framework for Figure 3, Decision tree for conducting viral load analyses. Note: Geographic analyses may be conducted as a subset of Community VL/In-Care VL or a subset of Monitored VL.

# Inclusion criteria for cases

Depending on the type of analysis an HIV jurisdiction plans to conduct, the case inclusion criteria may differ. For the purposes of standardization, however, the following criteria for case selection as part of a cross-sectional Community VL, In-Care VL, or Monitored VL analysis are recommended (a hypothetical case-selection sketch appears below):

Explanation: HIV-infected adults and adolescents may transmit HIV through sex and injection drug use.
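The original criteria box is not reproduced in this extraction, so the following minimal Python sketch pieces together the inclusion elements described in the surrounding text (diagnosed by year end, adult/adolescent, alive at the end of the analysis year, jurisdiction resident). The field names, record layout, and the age-13 cutoff are assumptions, not the Workgroup's exact criteria.

```python
from datetime import date

YEAR_END = date(2009, 12, 31)  # latest analyzable calendar year, per the text

def include_case(case):
    """Assumed cross-sectional inclusion: diagnosed with HIV by year end,
    aged >= 13 at year end (adult/adolescent), alive at year end, and a
    resident of the jurisdiction."""
    age = (YEAR_END - case["birth_date"]).days // 365
    alive = case["death_date"] is None or case["death_date"] > YEAR_END
    return (case["diagnosis_date"] <= YEAR_END and age >= 13
            and alive and case["resident"])

# Hypothetical record; real eHARS variables differ.
case = {"birth_date": date(1970, 5, 1), "diagnosis_date": date(2001, 3, 2),
        "death_date": None, "resident": True}
assert include_case(case)
```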
At the time of this writing, the latest full calendar year that could be examined is 2009, allowing for the minimum 12-month lag time for cases to be reported (i.e., using data reported through December 2010), data to be found and entered, etc. Using a calendar year and including persons alive at the end of that year is consistent with data requests from HRSA and most local HIV planners. For reporting cities within a state, those reporting jurisdictions will also need to conduct an intrastate deduplication process to determine which persons may have moved out of the jurisdiction but remain in the state. All states must participate in the interstate deduplication process. By the case inclusion criteria, HIV-infected persons who died during 2009 or earlier would be excluded. To keep track of those deaths, matching to state vital registry databases will help identify deaths among HIV-infected persons who died in state; matching to at least one national death registry (National Death Index or Social Security Death Index) is necessary to identify deaths that may have occurred out of state. Although each HIV surveillance jurisdiction has an incentive to keep track of its jurisdictional cases, residential cases are more important to understanding current HIV transmission in a locality.

# Sample size

The sample size needed to detect a 3-fold geometric mean (GM) difference* between mean viral loads depends on the desired power and the standard deviation of the sample. Table 1a presents the sample sizes needed for 80% power to detect a difference at the 0.05 significance level for various GM ratios. The dark row lists the sample sizes needed to detect a 3-fold difference in the GM; the sample size is found at the intersection of that row and the column for the standard deviation, S. The column S=1.2 reflects the standard deviation observed in national VL data.

Each jurisdiction will need to assess the standard deviation of its local VL data and then determine the appropriate sample size needed to assess VL.

Explanation: Sample size considerations must be taken into account when choosing the cases or study population to examine. For example, a jurisdiction may be greatly interested in evaluating Monitored VL among African American men and women, but if the sample size is inadequate to meet the recommended case inclusion criterion, an alternate method may need to be used, such as combining multiple years of data. Jurisdictions may want to explore factors related to differences in mean viral loads, such as the difference in the proportion with undetectable or very low VL, which may be expressed as a categorical difference.

# Crude viral load data and differences in lower limits of detection

With the advent of newer assays for quantifying HIV viral load, the lower limit of detection (LLD) of these assays has steadily decreased, from <1000 copies/mL in the early 1990s to <400 copies/mL in the early 2000s to <50 copies/mL or lower in the most recent years. Most persons that receive and adhere to antiretroviral therapy will have a viral load below the LLD, and the question arises of how to handle their viral loads in analyses.

Explanation: The distribution of actual VL values below a test's LLD is unknown.
Factors that may influence the VL distribution in a jurisdiction include the regimen of, and adherence to, antiretroviral therapy among the cohort; the sensitivity of the assay used in measuring these otherwise undetectable VL results; and possibly the combination of patient demographics and clinical characteristics of HIV-infected persons at the time.

We recommend that half of the LLD be used in analyses of viral loads for persons who have a VL below the limit of detection (whether a numeric result is provided or not) to approximate their actual viral load burden.

Some researchers have used zero or the value LLD minus one as the VL value for undetectable results in VL analyses. We chose half the value of the LLD as a compromise and for its ease of use. As more data become available, it will be possible to more accurately establish a value for VL results currently classified as undetectable. Nevertheless, for our purposes of providing a uniform method that can be compared across sites or within a site over time, use of half the LLD should be adequate. Consequently, when a viral load assay with an LLD of <50 copies/mL is used, undetectable viral load results should be replaced with 25 copies/mL for analysis.

When undertaking a calendar-time trend analysis, however, it may be necessary, because different viral load tests were in use, to truncate the lower limit of detection and possibly the upper limit of detection†. After examining local VL data by year, it should become apparent where those cut points are needed. As an example, if a jurisdiction had complete and refined data from 2005-2009, it may be found that viral load tests with an LLD of <400 copies/mL were in wide use in 2005 and 2006 but were replaced by a more sensitive test with an LLD of 75 copies/mL by 2009. In such an example, test results that were undetectable or 400 copies/mL or less would be collapsed into a single group and given a VL value of 200 (half the LLD) for each year, 2005 through 2009. VL analyses for each year would then be conducted using this common LLD of 400. If a single year, 2009, were planned as a point-in-time analysis, however, it would be appropriate to use 75 copies/mL as the LLD and 37.5 as the value for analysis when results of ≤75 copies/mL or undetectable were reported. In either situation, the LLD or range of VL results used in analysis should be stated.

# Multiple imputation of missing viral load

Although only 15 of 26 jurisdictions responded to an informal survey conducted among Workgroup members, most responding jurisdictions reported missing VL information for more than 30% of their cases [20]. VL results may be missing for persons not in care, persons in care but without a VL, and persons who have died or moved away, as described in Appendix B, Table B1. This discussion of multiple imputation of missing viral load for calculating Community VL and In-Care VL applies to data that have been refined after completing the data management processes described in Appendix B, Surveillance.

Explanation: Two groups of HIV-infected persons have, by surveillance definition, missing VL results: those "in care, no VL" and those "diagnosed but not in care" (boxes C and D, respectively, Figure 2). After data refinement, if the number of persons "in care, no VL" is small, jurisdictions may want to conduct follow-up to better characterize these persons.
This information may help inform approaches to data imputation for this group, "in care, no VL." Jurisdictions that routinely collect supplemental information (such as use of antiretroviral medications, engagement in care, insurance, and co-morbidities) may wish to explore data imputation using published methods [9] that would enable them to calculate a Community VL or In-Care VL.

† Sites should assess their data and determine if an upper limit of detection is needed. Because of the recommended methods, providing a uniform upper limit of detection is not warranted in most circumstances. If, however, sites have a significant percentage of high VL results, use of an upper limit of detection may be prudent.

We recommend that imputation of missing VL data only be attempted if <25% of cases are missing VL AND the jurisdiction is collecting supplemental clinical data.

However, jurisdictions should carefully weigh the benefits and limitations of multiple imputation before embarking on such analyses. Methods like multiple imputation that extrapolate from available surveillance VL data to the remainder of the population may not be appropriate for surveillance data. A fundamental requirement of imputation (that data are missing at random and that the distribution of values in the unknown group mirrors the distribution of values in the known group) may not be satisfied. The fundamental problem for a "diagnosed case but not in care" identified from surveillance data (i.e., lacking VL data in a surveillance system that routinely receives fairly complete VL data from labs) is that the case is not in care or has moved out of the jurisdiction. The viral load distribution of cases not in care cannot be modeled from the distribution of those that are in care. The Workgroup is not aware of any supplemental data available to the majority of surveillance jurisdictions at this time that could address this lack of information. Even if supplemental information is available to support randomness or similarity (such data would likely derive from research samples also reflecting patients in care), there is no clear cut-off for how much information may be imputed (e.g., the recommendation of 75% as the minimum threshold for data completeness). The usefulness of supplemental data for "filling in the blanks" depends on the relevance, completeness, and accuracy of the data; it also depends on the homogeneity of the population that is being modeled. Jurisdictions are advised to consult with statisticians familiar with the methodology when planning analyses that involve imputation.

# Mean viral load

Early VL and prevention studies used the median viral load as the summary analytic measure for community viral load; others have used the mean of the VL results. Because successful clinical use of antiretroviral therapy results in viral suppression, jurisdictions that have successfully engaged HIV-infected persons in care may have over 75% of such persons with undetectable VL; the median would then be undetectable, so the mean has been used.

We recommend that calculation of Community VL and its related viral load Measures (see Figure 2) be performed after transformation of viral load results onto the logarithmic base 10 scale, followed by calculation of the mean.

Explanation: The rationale for this logarithmic transformation is that it helps to normalize the distribution of viral load values and reduces the influence of outlying measurements from persons having extreme viremia (due to acute infection, advanced HIV disease, concurrent sexually transmitted infections, or random variability).‡ Since the interpretation of viral load measurements is often more intuitive on a linear scale, we recommend calculation of the geometric mean (GM) for viral load.
The GM is thus calculated through log transformation: the log-transformed values are averaged, and the average is transformed back to the original (linear) scale. The base used for the log transformation has no effect on the final GM estimate. However, using log base 10 has the advantage of a simple relationship to values on the original scale; for example, a value of 2 on the log10 scale is 100 on the original scale, 3 corresponds to 1,000, 4 to 10,000, and so forth. While our recommendation for calculating the mean viral load is limited to Population VL, Community VL, In-Care VL, and Monitored VL, jurisdictions may wish to calculate a mean VL for a subpopulation. In addition, a geometric mean could be useful in combination with categorical VL measurements (see below). For example, the population can be described in terms of the fraction with undetectable or suppressed VLs and the (geometric) mean of the remaining population.

‡ This is similar to using the median VL.

# Selection of viral load results for analysis

Once the time frame for analysis is decided, the handling of VL values for individuals that may have more than one VL result needs to be considered. Depending on the type of analysis, jurisdictions may want to average the VL results for each individual and then take the mean of those average VLs to compute the Community VL, In-Care VL, or Monitored VL. For other analyses, it may be more appropriate to use the highest, lowest, or most recent VL result. (See Other viral load methods and analyses later in this section.) As part of the standard case inclusion criteria for a calendar year, the following VL should be selected for analysis:

Use the most recent VL result per person for analysis; for a 2009 analysis, this would be the specimen obtained in 2009 that is closest to December 31. If there is no VL specimen for a case in 2009, that case would be excluded and would belong to either the "In care, no VL" or the "Diagnosed but not in care" box of Figure 2 (boxes C and D, respectively).

Explanation: The examination of a single VL result (the most recent) is the least restrictive definition of 'being in care.' Whereas other new measures or definitions of 'in care' require at least two lab results within a specified timeframe of each other in a 12-month period, no such requirement is necessary for this standardized, point-in-time examination.

The Community Viral Load Workgroup has developed and tested an Excel spreadsheet that allows for calculation of mean viral load, proportion suppressed, proportion undetectable, and a Z-test to identify a statistically significant difference between two mean viral loads. This spreadsheet accommodates small datasets. A generic SAS program that incorporates the methodologic guidelines and standards set forth in this document will also be available. The SAS code will be created so that large volumes of data from eHARS can be used for VL calculation by all HIV surveillance jurisdictions.

# Categorical measures of viral load

Viral load results may be grouped into various clinically meaningful categories. Tracking VL categories for a population yearly (e.g., using histograms) may aid in the interpretation of changes in VL results in the population over time, concomitant with changes in the proportion of patients receiving highly active antiretroviral therapy or adherence support, or due to other factors. We recommend use of three categorical VL measures: suppressed/not suppressed, undetectable, and high VL.
Although the VL cut points that define each of these categories may not meet all needs, given the nature of ever-evolving VL test technology, we have proposed definitions so that all HIV jurisdictions can report using standard definitions of these measures. The three standardized measures that are proposed are as follows:

Report the percentage ≤200 copies/mL as suppressed as a categorical VL measure. Conversely, the percentage >200 copies/mL may be reported as not suppressed.

Explanation: A measurement that may be useful in describing the disease status of local PLWH is the percentage of persons that have a viral load below the LLD or a suppressed viral load, defined as VL ≤200 copies/mL by the latest U.S. HIV Treatment Guidelines [21]. The percentage of persons with viral load suppression may serve as a rough proxy indicator of the combined access and adherence to antiretroviral therapy in a given population.

Report the percentage ≤50 copies/mL as undetectable as a categorical VL measure.

Explanation: This measure can only be assessed in jurisdictions where recent sensitive assays (LLD of 50 copies/mL or lower) are consistently used by laboratories. While this measure is most informative in a clinical setting among persons on antiretroviral therapy, it could also be applied to populations receiving care from a specific facility or across all persons in care using surveillance data.

Report the percentage >100,000 copies/mL as high VL, in need of treatment, as a categorical VL measure.

Explanation: To better characterize the spectrum of viral loads in the population, some jurisdictions may opt to quantify the proportion of VL results that are considered very high (i.e., >100,000 copies/mL), which may indicate high potential transmission risk for a group or the healthcare challenges facing a group in need of immediate antiretroviral therapy [21].

Another potential categorical measure that jurisdictions may want to examine is the proportion of HIV-infected persons that meet a standardized definition of in care, e.g., at least two lab results obtained within a calendar year that are at least 60 days apart. Categorical measures might be more informative than means and may also be more comprehensible to planners, evaluators, policy makers, and others. These measures provide the VL distribution and have the advantage of not implying a normal distribution of values (as means may be perceived by the public). The actual distribution of VL results in surveillance populations is highly skewed: a large proportion of the population has undetectable or very low values (e.g., <200 copies/mL), and the remainder of values are spread across a wide range. Distributions such as these are not well represented by a single measure, especially one that implies a normal distribution. Jurisdictions should examine their data to determine the local viral load distribution and the best measures to describe it, including outlier cutoff values. Other measures are listed in Table 2. Such distributions can also be stratified by CD4+ T-lymphocyte count to reflect stage of illness and the U.S. HIV Treatment Guidelines recommendations on initiation of antiretroviral therapy.
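To make the preceding rules concrete, the following is a minimal Python sketch, not the Workgroup's spreadsheet or SAS program. The record layout, the assumed LLD of 50 copies/mL, and all values shown are illustrative. It keeps each person's most recent result for the analysis year, applies the half-LLD substitution, computes the log10 geometric mean, derives the three standardized categorical measures, and checks the two-results-at-least-60-days-apart in-care definition.

```python
from datetime import date
from math import log10
from statistics import mean

LLD = 50  # assumed assay lower limit of detection, copies/mL

# Hypothetical records: (person_id, specimen_date, result);
# None stands for an undetectable result reported without a number.
labs = [
    ("A", date(2009, 2, 10), None),
    ("A", date(2009, 11, 3), None),
    ("B", date(2009, 6, 1), 85_000),
    ("C", date(2009, 4, 20), 310),
    ("C", date(2009, 9, 14), 25_000),
]

def adjusted(vl):
    """Half-LLD rule: analyze results below (or reported under) the LLD
    as LLD/2, i.e., 25 copies/mL for an LLD of 50."""
    return LLD / 2 if vl is None or vl < LLD else vl

# Most recent result per person within the calendar year.
latest = {}
for pid, dt, vl in labs:
    if pid not in latest or dt > latest[pid][0]:
        latest[pid] = (dt, adjusted(vl))
values = [v for _, v in latest.values()]

# Geometric mean: average the log10 values, then back-transform.
gm = 10 ** mean(log10(v) for v in values)

# Standardized categorical measures.
n = len(values)
suppressed = 100 * sum(v <= 200 for v in values) / n
undetectable = 100 * sum(v <= 50 for v in values) / n
high_vl = 100 * sum(v > 100_000 for v in values) / n

# In-care check: two or more lab dates in the year, >= 60 days apart.
def in_care(dates):
    dates = sorted(dates)
    return any((b - a).days >= 60 for a, b in zip(dates, dates[1:]))

dates_by_person = {}
for pid, dt, _ in labs:
    dates_by_person.setdefault(pid, []).append(dt)
retained = {pid: in_care(d) for pid, d in dates_by_person.items()}

print(f"GM {gm:.0f} copies/mL; suppressed {suppressed:.0f}%; "
      f"undetectable {undetectable:.0f}%; high VL {high_vl:.0f}%; {retained}")
```

In practice the same steps would run over a jurisdiction's full refined eHARS extract; the toy records above serve only to show the arithmetic.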
# Address and residency

For analyses that use address (geospatial analyses or linkage to Census geographic information on social determinants data), the current address, if available, should be used. The current address is also useful in establishing whether a jurisdictional case may have moved. While most jurisdictions use the patient address from lab reporting, this information is useful only if the address is accurate. Patients with post office boxes or incomplete or inaccurate street addresses cannot be geocoded and placed with certainty on a map. The Workgroup could not agree upon a timeframe within which the current address should be updated. Any analyses including cases by residence or using geographic analyses should specify when addresses were updated and the percentage of cases that could not be geocoded with a high level of confidence.

# Population Viral Load

The Technical Guidance section provides methodologic recommendations for calculating Community VL, In-Care VL, and Monitored VL. Although Population VL is the best Measure for assessing HIV transmission, the information needed for its calculation is not available. However, a component VL measure that is of interest is an estimate of the number of undiagnosed HIV-infected persons. A back-calculation method estimated that almost 250,000 HIV-infected persons are undiagnosed (21% of the total number of HIV-infected persons) in the United States [16]. Efforts are also underway to estimate the relative proportion of undiagnosed HIV-infected persons by state or metropolitan statistical area; these methods will be available to state health departments in 2011. VLs will vary greatly among undiagnosed persons, depending on how recently each person was infected. However, such persons may be assumed not to have suppressed VL.

# Other viral load methods and analyses

Although this Guidance document suggests using one VL result (the most recent) of the possibly many VL results for a person and recommends a method for calculating VL Measures, viral loads may be analyzed differently. Table 2 includes different VL measures that have been calculated and reported; each measure has particular strengths and uses. Appendix A lists the references associated with the VL analyses. The mean viral load provides an estimate of the average level of viral burden and is the most useful for comparing subpopulations and neighborhoods in a jurisdiction. Comparing means of most recent and total VLs at the geographic level may reflect disparities in access to and use of antiretroviral medications by neighborhood. Comparing means among different subpopulations within a jurisdiction can highlight disparities. The total VL is a useful measure for examining the combination of HIV prevalence and viremia among prevalent cases.

Depending on the type of analysis that is being conducted, VL results other than the most recent may be appropriate to use. If the purpose of the analysis is to evaluate a new measure of quality of care, the researcher may choose to report the proportion of cases that were virologically suppressed for a calendar year; thus, cases with even a single VL result >200 copies/mL would not be considered suppressed for the year. If the focus of the analysis is on acute antiretroviral need and possible lapses of optimal antiretroviral use, then the proportion of cases that had a single VL result >100,000 copies/mL may be used. A minimal sketch of these alternative selections follows.
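The sketch below, under the same illustrative assumptions as the earlier one (hypothetical person-level serial results, already LLD-adjusted), contrasts per-person averaging of serial results with the year-long suppression and single-high-VL measures just described.

```python
from math import log10
from statistics import mean

# Hypothetical serial results for one calendar year, person -> VL values
# (already adjusted for the LLD as described earlier).
serial = {
    "A": [25, 25, 180],
    "B": [185_000, 42_000],
    "C": [310, 25],
}

# Average each person's log10 VLs first, then average across persons.
person_gm = {p: 10 ** mean(log10(v) for v in vals)
             for p, vals in serial.items()}
overall_gm = 10 ** mean(log10(g) for g in person_gm.values())

# Suppressed for the whole year only if no result exceeded 200 copies/mL.
suppressed_all_year = {p: max(vals) <= 200 for p, vals in serial.items()}

# Acute-need flag: any single result above 100,000 copies/mL in the year.
acute_need = {p: max(vals) > 100_000 for p, vals in serial.items()}

print(overall_gm, suppressed_all_year, acute_need)
```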
Although we recommend that the most recent (closest to December 31) VL within a calendar year be used to calculate a VL Measure as a more "current" snapshot, researchers may also want to use the mean of the serial VL results obtained throughout the year for each person and then calculate the overall mean VL among all persons in a group for the year. Again, this document is not meant to provide a comprehensive list of all possible VL analyses and measures but to point out those in use and to recommend a common method for calculating VL Measures and reporting component and a limited number of categorical VL measures.

# Summary for data reporting (see Checklist at the end of this section)

For each type of population-based analysis (Population VL, Community VL, In-Care VL, and Monitored VL), the study population should be described and its size should be reported. The size and relative percentage of each of the component VL measures (boxes A-E), as shown in Figure 2, that define each VL Measure should also be reported.

# Examples for Data Reporting (bottom up)

Jurisdictions should report: 1) the component VL measures, e.g., the percentage of persons "Diagnosed but not in care" in addition to the percentages "In care and with undetectable* VL," "In care with detectable VL," and "In care, no VL," when conducting In-Care VL Measurements; and 2) categorical VL measures that describe the population that is in care and being monitored, such as the percentage of persons with suppressed VL (i.e., ≤200 copies/mL), the percentage with high viral loads (i.e., >100,000 copies/mL), and the percentage undetectable (i.e., ≤50 copies/mL).

# Methods

• Describe any methods of adjustment for confounders (e.g., standardization, multivariable modeling)
• Describe any efforts to address potential sources of bias due to missing data, and any sensitivity analyses (e.g., imputation)

# Results

• Give characteristics of the study population and univariate summaries of outcome measures
• Report variability (e.g., standard deviation) and precision (e.g., 95% confidence intervals) around estimates
• Present results from adjusted analyses, controlling for potential confounders

# Discussion

• Discuss limitations of the analyses, taking into account potential sources of bias or imprecision; discuss both the direction and likely magnitude of any bias (e.g., under- or overestimation of Community VL)
• Interpret findings in the context of prior analyses; highlight differences and similarities

Source: "Article" refers to a scientific publication; "PP" is an oral presentation at a scientific meeting; "abstract" is a short description of a study and its findings submitted to a meeting; "poster" is a more detailed paper presentation at a scientific meeting.

* As of May 2011

# Public Health Surveillance

Surveillance is the foundation of public health. It involves the ongoing, systematic collection, analysis, interpretation, and dissemination of data used by public health to reduce morbidity and mortality and to improve health [1]. Each surveillance system has a number of attributes that contribute to its overall effectiveness. These attributes include simplicity, flexibility, data quality, acceptability, sensitivity, predictive value positive, representativeness, completeness, timeliness, and stability. To ensure ongoing improvement in data quality, efficiency, and usefulness, each surveillance system should be periodically evaluated. All CDC data originate locally.
These data represent a commitment and collaboration between states or jurisdictions and CDC to collect the data deemed most vital to fulfilling local and national public health missions. CDC works with local reporting jurisdictions and the Council of State and Territorial Epidemiologists (CSTE) to establish model language and uniformity in state reporting regulations and standardization of the data collected for surveillance. CDC works with state and local public health counterparts to decide on national data quality standards, policies, and procedures that can be used to meet surveillance data quality standards. Jurisdictions also match their HIV case registries against state vital records and against at least one national death registry [4] to ensure an accurate count of the number of people living with HIV disease in their jurisdiction.

# HIV Surveillance

While accurate case counts are important for understanding disease burden, and knowing where HIV-infected persons seek medical care is important for resource allocation, it is equally important to understand where and how new cases arise so that prevention efforts can be targeted. Jurisdictional boundaries and residency do not predict where healthcare is sought or where risk behaviors occur. For example, persons may cross into an adjacent state to receive health services or engage in high-risk behaviors in a neighboring jurisdiction. Some states have developed lab reporting regulations based on both the patient's residence and the patient's place of care to accurately describe care patterns. As part of understanding HIV transmission, surveillance areas whose residents cross state lines to engage in high-risk HIV behaviors should take this into consideration when conducting any assessment of the HIV transmission potential for their residents.

# Assessing those in care

In general, HIV surveillance relies on reported lab test results (i.e., CD4+ T-lymphocyte count or viral load) as a marker of persons receiving health care and thus being "in care." It may be possible to identify HIV-infected persons who are receiving care but lack lab test results, such as persons identified by linking to Ryan White Care AIDS Drug Assistance Program databases; such cases can be followed up to address the reasons for missing lab results. To determine whether someone was linked to and retained in ongoing HIV healthcare, serial lab results over an extended period would need to be observed in surveillance data.

A limited number of HIV jurisdictions are able to conduct ongoing medical chart review of HIV-infected persons that reside in their jurisdiction. Those jurisdictions have collected supplemental data that are not routinely collected as part of HIV surveillance, such as engagement in care, insurance, and use and type of antiretroviral medications. Other jurisdictions have been funded to participate in special surveillance/prevention activities and, as such, may also be conducting ongoing medical chart reviews and collecting similar types of supplemental data. States annually estimate the proportion of HIV-infected persons living in their state that are in and out of care. Some HIV reporting jurisdictions may report on their jurisdictional cases, rather than residential cases, because they are more likely to have complete information for their jurisdictional cases [5].

# Viral load test results

To ensure that all (detectable and undetectable) viral load results are received by HIV surveillance, jurisdictions must expend considerable time and resources.
As a first step, state public health reporting laws or regulations provide the authority for state health departments to receive such information from laboratories. It may be necessary to include language that specifies that all (detectable and undetectable) HIV viral load results are reportable. This type of policy lays the foundation for receiving viral load results, but it cannot substitute for the personal interaction needed to establish relationships with laboratories for reporting, and for the ongoing work needed to ensure continued reporting. The magnitude of this task can be appreciated from the fact that some jurisdictions have over 80 laboratories that report viral load results [6].

The volume of viral load reporting is large and is likely to increase: U.S. HIV Treatment Guidelines recommend periodic assessments among persons on antiretroviral therapy and among persons whose HIV disease is being monitored, and the number of persons living with HIV is increasing. It is not uncommon for a laboratory to cease VL reporting for a short period as new laboratory staff are hired or veteran laboratory staff take vacations. To identify such lapses in reporting, HIV surveillance jurisdictions monitor laboratories for their volume of reporting over time (a minimal sketch of such monitoring appears below). The complete absence of any lab test result (CD4+ T-lymphocyte count or viral load) may suggest that the person is not in care, moved away, is receiving care out of jurisdiction, or died, or that there is a significant reporting issue (see Table B1).

There are some instances, however, where an HIV-infected person resides and receives care within an HIV reporting jurisdiction but no medical care information is shared with HIV surveillance. A limited number of federal and private facilities do not share medical records, or do not allow HIV surveillance staff access to the medical records, of persons being cared for by the facility. Efforts to address these barriers are underway with some facilities.

As technology has advanced, laboratories are moving toward reporting lab results electronically, but many laboratories still do not have this capability. Some HIV reporting jurisdictions have a backlog of paper lab results, and some jurisdictions with early electronic lab reporting have maintained electronic HIV lab reports in a separate database apart from eHARS. Viral load results reported on paper that have not been entered into eHARS are not available for analysis. Separately stored electronic lab files that have not been entered into eHARS may also be inaccessible for analytic purposes. The extent of missing lab results is best described by a recent survey of selected HIV reporting jurisdictions: among the 15 jurisdictions that responded, the proportion of persons living with HIV and reported to surveillance who were missing a viral load result ranged from 26% to 89% [6]. This large range may be attributable to the many factors described in Table B1.

The issue of jurisdictional versus residential case status has some implications for missing lab results. Persons diagnosed in one jurisdiction, by convention, remain in the database as a case in that jurisdiction even if the person moves across the country and receives care in their new home state. The recommended procedure for each HIV reporting area is to enter all lab results into its HIV database, regardless of jurisdictional or residential case status.
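Returning to the reporting-lapse monitoring mentioned above, here is a minimal Python sketch of one way a jurisdiction might flag lapses. The data layout, the monthly granularity, and the half-of-average threshold are illustrative assumptions, not a prescribed method.

```python
from collections import defaultdict

# Hypothetical feed: one (lab_id, "YYYY-MM") entry per reported VL result.
reports = [
    ("LabA", "2010-01"), ("LabA", "2010-02"), ("LabA", "2010-04"),
    ("LabB", "2010-01"), ("LabB", "2010-02"),
    ("LabB", "2010-03"), ("LabB", "2010-04"),
]
months = ["2010-01", "2010-02", "2010-03", "2010-04"]

counts = defaultdict(lambda: defaultdict(int))
for lab, month in reports:
    counts[lab][month] += 1

# Flag a month with zero reports, or one below half the lab's average
# monthly volume, as a possible reporting lapse worth follow-up.
for lab, by_month in counts.items():
    avg = sum(by_month.values()) / len(months)
    for m in months:
        n = by_month.get(m, 0)
        if n == 0 or n < 0.5 * avg:
            print(f"{lab}: possible lapse in {m} ({n} results; average {avg:.1f})")
```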
After all jurisdictions upload their data to CDC, it may be possible to link lab results to jurisdictional cases that have moved; at this time, however, there is no mechanism that allows CDC to share this information with local jurisdictions, or local jurisdictions to share it with each other directly. Early in the epidemic, when HIV diagnosis was often shortly followed by death, describing jurisdictional cases was reasonable. Today, with effective antiretroviral treatment available and the impetus to look more closely at transmission and prevention of new HIV infections, residential cases may also be an important population to surveil.

# HRSA

The Health Resources and Services Administration (HRSA) [7] is one of several agencies, including CDC and NIH, under the leadership of the U.S. Department of Health and Human Services. HRSA is responsible for improving access to health care services for people who are uninsured, isolated, or medically vulnerable. Since the early 1990s, the Ryan White HIV/AIDS Program has provided an array of HIV-related and health care services to help vulnerable HIV-infected persons manage their disease. Based on the number of persons infected with HIV, the service needs among these persons, the local resources available, and an assessment of unmet need and service gaps, HRSA provides states with funding to bridge these gaps. Medical care and antiretroviral medications (through the AIDS Drug Assistance Program) are just two services that HRSA funds through and in partnership with states. Each year, jurisdictions provide an epidemiologic description of HIV-infected persons to HRSA's HIV/AIDS Bureau. Most of the data used to describe the epidemic in a jurisdiction originate from HIV surveillance, so it is important to try to harmonize terms and processes so that they meet both HRSA and CDC needs.

# Definitions and terms

• Persons living with HIV (with and without AIDS) in a jurisdiction.
  - HIV surveillance would refer to these individuals as resident cases, which are distinct from jurisdictional cases. HIV surveillance case "ownership" is limited to jurisdictional cases that are residents at diagnosis.

• Persons with unmet need are diagnosed with HIV infection but not in care for a defined 12-month period [8].
  - HIV surveillance uses lab test results (CD4+ T-lymphocyte count or viral load) as a marker for being in care. Persons without a lab result for 12 months or more would be considered not in care (or out of care). HRSA includes lab results and receipt of antiretroviral medications within a 12-month period as markers of being in care.

• Persons "in care" are receiving primary healthcare for their HIV disease. This care should be consistent with the U.S. HIV Treatment Guidelines [2].
  - HIV surveillance uses lab test results as a marker for being in care. While ongoing evidence of laboratory testing is strongly suggestive of ongoing encounters with the healthcare system, the quality and appropriateness of these medical encounters cannot be fully assessed with routine HIV surveillance data. For most surveillance jurisdictions, with geographically dispersed cases and varying facility-specific rules and policies, the costs of collecting detailed medical care information are prohibitive. Collecting this type of medical encounter information is possible for circumscribed populations of HIV-infected persons and is part of CDC's Medical Monitoring Project (MMP), which includes a representative sample of HIV-infected persons who are engaged in HIV care.
• HIV-infected persons are retained in care, as assessed by at least two clinical visits in a year that are at least 60 days apart [9].
  - HIV surveillance uses lab results as a marker for being in care. Based on the temporal distribution of the collection dates of lab tests, it is possible to assess whether two medical encounters occurred within a calendar year, spaced at least 60 days apart. It would be difficult, however, for surveillance data to determine with certainty whether the lab tests were part of the same continuous care or might represent fragmented care; for example, one test may be ordered by an HIV healthcare provider and another test by an emergency department physician.

• Persons in care that begin antiretroviral treatment achieve virologic suppression after 6 months [9].
  - HIV surveillance can provide a rough proxy by reporting the proportion of persons in care that have viral load results ≤200 copies/mL for a calendar year. Because antiretroviral medication use and initiation are not collected by HIV case surveillance, this type of clinical measure is best addressed by MMP.

# Acknowledgments

Eric Vittinghoff (University of California, San Francisco), who described the methods used in the Das paper; Aaron Roome (Connecticut), who worked with Rick Song to develop the mean VL calculation spreadsheet; Priscilla Chu (San Francisco), who shared the SAS code used for San Francisco's own community viral load calculation; Virginia Hu (Los Angeles), Zhijuan Sheng (Los Angeles), Susannah Cohen (California), Amanda Castel (District of Columbia), and Sarah Willis (District of Columbia), who will be creating a SAS program for all HIV surveillance jurisdictions to use. Special thanks to Moupali Das (San Francisco), Amanda Castel (District of Columbia), Karen Marks (California), Nanette Benbow (Chicago), and Biru Yang (Houston) for contributing pieces to the Guidance document, and to Kate Buchacz, whose depth of knowledge and quick work saved countless hours. (See Appendix H for the history of the Workgroup.)

# CDC Representatives

Laurie Kamimoto, Kate Buchacz, Ruiguang "Rick" Song

b For some states, this is the time when the statute took effect; for other states, because of the breadth and lack of specificity of their reporting regulation, this is the period when they started to receive all VL results.

# Appendix D. U.S. HIV Treatment Guidelines

The most recent update of the Guidelines for the Use of Antiretroviral Agents in HIV-1-Infected Adults and Adolescents was released in January 2011 [2] (hereafter referred to as the U.S. HIV Treatment Guidelines). The U.S. HIV Treatment Guidelines describe in detail the recommended clinical management of persons infected with HIV, and from this and other work, HIV quality-of-care measures have been developed [8]. The focus of this section is on the laboratory monitoring that is part of clinical management, because lab data are actively pursued as part of HIV surveillance. As part of monitoring HIV disease progression and response to antiretroviral medications, monitoring of CD4+ T-lymphocyte counts and plasma HIV RNA concentration (viral load) is recommended. Both tests should be ordered when an HIV-infected person enters care, at antiretroviral treatment initiation, every 3-6 months until the patient is clinically stable, and when clinically indicated. Because viral load is the most important indicator of antiretroviral treatment response, more frequent viral load monitoring may be performed when a treatment regimen is changed.
# Acknowledgments
Eric Vittinghoff (University of California, San Francisco), who described the methods used in the Das paper; Aaron Roome (Connecticut), who worked with Rick Song to develop the mean VL calculation spreadsheet; Priscilla Chu (San Francisco), who shared the SAS code San Francisco used for its own community viral load calculation; and Virginia Hu (Los Angeles), Zhijuan Sheng (Los Angeles), Susannah Cohen (California), Amanda Castel (District of Columbia), and Sarah Willis (District of Columbia), who will be creating a SAS program for all HIV surveillance jurisdictions to use. Special thanks to Moupali Das (San Francisco), Amanda Castel (District of Columbia), Karen Marks (California), Nanette Benbow (Chicago), and Biru Yang (Houston) for contributing pieces to the Guidance document, and to Kate Buchacz, whose depth of knowledge and quick work saved countless hours. (See Appendix H for the history of the Workgroup.)

# CDC Representatives
Laurie Kamimoto, Kate Buchacz, Ruiguang "Rick" Song

# Appendix D. U.S. HIV Treatment Guidelines
The most recent update of the Guidelines for the Use of Antiretroviral Agents in HIV-1-Infected Adults and Adolescents was released in January 2011 [2] (hereafter referred to as the U.S. HIV Treatment Guidelines). The U.S. HIV Treatment Guidelines describe in detail the recommended clinical management of persons infected with HIV; from this and other work, HIV quality-of-care measures have been developed [8]. The focus of this section is on the laboratory monitoring that is part of clinical management, because lab data are actively pursued as part of HIV surveillance. To monitor HIV disease progression and response to antiretroviral medications, measurement of the CD4+ T-lymphocyte count and the plasma HIV RNA concentration (viral load) is recommended. Both tests should be ordered when an HIV-infected person enters care, at antiretroviral treatment initiation, every 3-6 months until clinically stable, and when clinically indicated. Because viral load is the most important indicator of antiretroviral treatment response, more frequent viral load monitoring may be performed when a treatment regimen is changed. At a minimum, one would expect at least 1-2 viral load measurements over the course of a year in which a person is engaged in HIV care.

# Appendix E. National HIV/AIDS Strategy
In July 2010, the White House released the National HIV/AIDS Strategy [10]. The document included three primary goals: 1) reducing the number of people who become infected with HIV; 2) increasing access to care and improving health outcomes for people living with HIV; and 3) reducing HIV-related health disparities.

# Appendix F. List of Test and Treat Clinical Trials

# Appendix G. Mathematical Formula for Sample Size
Sample size required to detect a difference in GM between two subpopulation groups (April 21, 2011)

Suppose that we would like to have a power of W (say, an 80% chance) to detect a difference in which one group has a GM at least k-fold as high as the GM of the other group.

Null hypothesis: $GM_1 = GM_2$, or $GM_{\max}/GM_{\min} = 1$, where $GM_{\max} = \max(GM_1, GM_2)$ and $GM_{\min} = \min(GM_1, GM_2)$

Alternative hypothesis: $GM_1 \neq GM_2$, or $GM_{\max}/GM_{\min} > 1$

Test statistic:

$Z = \dfrac{\log_{10}(GM_{\max}) - \log_{10}(GM_{\min})}{SE}$,

where SE is the pooled standard error estimate given by

$SE = S\sqrt{1/n_1 + 1/n_2}$,

with $S$ the standard deviation of $\log_{10}(VL)$ and $n_1$ and $n_2$ the two group sizes. The p-value for this test is given by

$p = 2[1 - \Phi(|Z|)]$,

where $\Phi$ is the cumulative distribution function of the standard normal distribution ($Z \sim \mathrm{Normal}(0,1)$ under the null hypothesis) and $\Phi^{-1}$ is the inverse function of the standard normal distribution.

Given the type I error rate $\alpha$ (e.g., 0.05) and the difference desired to detect, $GM_{\max}/GM_{\min} = k$, the power, or the probability of detecting this difference, can be calculated as

$W = 1 - \Phi\!\left(\Phi^{-1}(1-\alpha/2) - \dfrac{\log_{10}(k)}{SE}\right)$,

where $S$ is the expected standard deviation of $\log_{10}(VL)$ in the population of interest. When $n_1 = n_2 = n$, we have

$W = 1 - \Phi\!\left(\Phi^{-1}(1-\alpha/2) - \dfrac{\log_{10}(k)}{S\sqrt{2/n}}\right)$.

Given the type I error rate ($\alpha$), the relative difference to detect ($k$), the expected standard deviation of $\log_{10}(VL)$ ($S$), and the desired power ($W$), the required sample size per group is

$n = \dfrac{2S^2\left[\Phi^{-1}(1-\alpha/2) + \Phi^{-1}(W)\right]^2}{\left[\log_{10}(k)\right]^2}$.

The required sample sizes are listed in Table 1a for W = 0.8 and Table 1b for W = 0.9, with α = 0.05 and various values of S and k.
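The power and sample-size formulas above (as reconstructed here) can be evaluated with nothing more than the standard normal distribution. Below is a minimal sketch in Python using only the standard library; the function names are illustrative, and the example values of k and S are hypothetical.

```python
from math import ceil, log10
from statistics import NormalDist

def power(n, k, S, alpha=0.05):
    """Power to detect a k-fold difference in GM with n persons per group,
    where S is the expected SD of log10(VL)."""
    z = NormalDist().inv_cdf(1 - alpha / 2)
    se = S * (2 / n) ** 0.5                  # SE when n1 = n2 = n
    return 1 - NormalDist().cdf(z - log10(k) / se)

def sample_size(k, S, W=0.80, alpha=0.05):
    """Required n per group: 2*S^2*(z_{1-a/2} + z_W)^2 / (log10 k)^2."""
    nd = NormalDist()
    z_alpha = nd.inv_cdf(1 - alpha / 2)
    z_power = nd.inv_cdf(W)
    return ceil(2 * S**2 * (z_alpha + z_power) ** 2 / log10(k) ** 2)

# Hypothetical example: detect a 1.5-fold difference in GM with 80% power,
# assuming S = 0.8 log10 copies/mL.
print(sample_size(k=1.5, S=0.8, W=0.80))     # ~324 per group
```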
# Appendix H. Community Viral Load Workgroup
A sub-working group was formed to discuss the details of the final recommendations. There were four sub-working group calls, and its active members included: California, Chicago, District of Columbia, Houston, Los Angeles, New York City, New York State, Oregon, and San Francisco.

# Glossary
ART: antiretroviral treatment or therapy.
ARV: antiretroviral; usually used in association with treatment or medication use.
Jurisdictional case: person diagnosed with HIV or AIDS while a resident of the jurisdiction.
Lag time: delay between the occurrence of an event and when the event is known or reported; e.g., deaths in 2009 may not be recorded until 2010 because there is an 18-month lag time for ≥90% of deaths in 2009 to be recognized and captured in a database.
Log10 (log base 10): the exponent to which 10 must be raised to produce a given number; e.g., the logarithm of 1,000 to base 10 is 3. The log10 transformation of a viral load of 1,000 copies/mL is 3.
MMP: Medical Monitoring Project; a nationally representative surveillance system administered by CDC among HIV-infected persons who are receiving medical care for their HIV disease.
Multiple imputation: analytic process in which missing data are populated with values based on existing data through multiple rounds of data simulations.
NDI: National Death Index. See Vital statistics for an explanation.
Resident case: HIV-infected person currently residing in a jurisdiction. Not all resident cases are jurisdictional cases (see definition above), as persons may be diagnosed in one jurisdiction but move to another.
SAS: software in wide use for statistical analyses of HIV surveillance data.
Soundex: 4-character alphanumeric code based on the sound or pronunciation of an individual's last name. Soundex is used instead of the name, along with demographic information, to identify a case.
SSDI: Social Security Death Index; based on the national Social Security Death Master File, which includes death information (name, Social Security number, date of death, and possibly residence at time of death) for most persons who have had a Social Security number. The SSDI is updated quarterly.
Viral load (VL): measure of the concentration of viral particles per volume of blood. The unit of measurement for HIV viral load is copies per milliliter (copies/mL).
Viremia: presence of virus in the blood.
Vital statistics: collection of birth, death, and marriage information; local jurisdictions collect or receive this information. For this document, death data are the focus. Death certificates, which include cause-of-death information, are collected by local vital statistics departments. States aggregate local death information and provide these data to CDC's National Center for Health Statistics, where a national death dataset is created and cause-of-death information is recoded to provide uniformity across states. The national dataset is called the National Death Index (NDI).
# Introduction
Genetic testing encompasses a broad range of laboratory tests performed to analyze DNA, RNA, chromosomes, proteins, and certain metabolites using biochemical, cytogenetic, or molecular methods or a combination of these methods. In 1992, the regulations for the Clinical Laboratory Improvement Amendments of 1988 (CLIA) were published and began to be implemented. Since that time, advances in scientific research and technology have led to a substantial increase both in the health conditions for which genetic defects or variations can be detected with molecular methods and in the spectrum of molecular testing methods (1). As the number of molecular genetic tests performed for patient testing has steadily increased, so has the number of laboratories that perform molecular genetic testing for heritable diseases and conditions (2,3). With increasing use in clinical and public health practice, molecular genetic testing affects persons and their families in every life stage by contributing to disease diagnosis, prediction of future disease risk, optimization of treatment, prevention of adverse drug response, and health assessment and management. For example, preconception testing for cystic fibrosis and other heritable diseases has become standard practice in the care of women who are either pregnant or considering pregnancy and are at risk for giving birth to an infant with one of these conditions (4). DNA-based diagnostic testing often is crucial for confirming presumptive results from newborn screening tests, which are performed for approximately 95% of the 4 million infants born in the United States each year (5,6). In addition, pharmacogenetic and pharmacogenomic tests, which identify individual variations in single-nucleotide polymorphisms, haplotype markers, or alterations in gene expression, are considered essential for personalized medicine, which involves customizing medical care on the basis of genetic information (7). The expanding field of molecular genetic testing has prompted measures, both in the United States and worldwide, to assess the factors that affect the quality of performance and delivery of testing services, the adequacy of oversight and quality assurance mechanisms, and the areas of laboratory practice in need of improvement. Reported problems that could affect patient testing outcomes include inadequate establishment or verification of test performance specifications, inadequate personnel training or qualifications, inappropriate test selection and specimen submission, inadequate quality assurance practices, problems in proficiency testing, misunderstanding or misinterpretation of test results, and other concerns associated with one or more phases of the testing process (8–11). Under CLIA, laboratory testing is categorized as waived or nonwaived (the latter includes tests of moderate and high complexity) based on the level of testing complexity. Laboratories that perform molecular genetic testing are subject to general CLIA requirements for nonwaived testing and to CLIA personnel requirements for high-complexity testing; no molecular genetic test has been categorized as waived or of moderate complexity.
Many laboratories also adhere to professional practice guidelines and voluntary or accreditation standards, such as those developed by the American College of Medical Genetics (ACMG), the Clinical and Laboratory Standards Institute (CLSI), and the College of American Pathologists (CAP), which provide specific guidance for molecular genetic testing (12–14). In addition, certain state programs, such as the New York State Clinical Laboratory Evaluation Program (CLEP), have specific requirements that apply to genetic testing laboratories in their purview (15). However, no specific requirements exist at the federal level for laboratory performance of molecular genetic testing for heritable diseases and conditions. Since 1997, CDC and the Centers for Medicare & Medicaid Services (CMS) have worked with other federal agencies, professional organizations, standard-setting organizations, the Clinical Laboratory Improvement Advisory Committee (CLIAC), and other advisory committees to promote the quality of genetic testing and improve the appropriate use of genetic tests in health care. To enhance the oversight of genetic testing under CLIA, CMS developed a multifaceted action plan aimed at providing guidelines, including the good laboratory practice recommendations in this report, rather than prescriptive regulations (16). Many of the activities in the action plan have been implemented or are in progress, including 1) providing CMS and state CLIA surveyors with guidelines and technical training on assessing genetic testing laboratories for compliance with applicable CLIA requirements, 2) developing educational materials on CLIA compliance for genetic testing laboratories, 3) collecting data on laboratory performance in genetic testing, 4) working with CLIAC and standard-setting organizations on oversight concerns, and 5) collaborating with CDC and the Food and Drug Administration (FDA) on ongoing oversight activities (16). This plan also was supported by the Secretary's Advisory Committee on Genetics, Health, and Society (SACGHS) in its 2008 report providing recommendations regarding future oversight of genetic testing (1). The purposes of this report are to 1) highlight areas of molecular genetic testing that have been recognized by CLIAC as needing specific guidelines for compliance with existing CLIA requirements or needing quality assurance measures in addition to CLIA requirements and 2) provide CLIAC recommendations for good laboratory practices to ensure the quality of molecular genetic testing for heritable diseases and conditions. These recommendations are intended primarily for genetic testing conducted to diagnose, prevent, or treat disease or for health assessment purposes. The recommendations are distinct from the good laboratory practice regulations for nonclinical laboratory studies under FDA oversight (21 CFR Part 58) (17). The recommended laboratory practices provide guidelines for ensuring the quality of the testing process (including the preanalytic, analytic, and postanalytic phases of molecular genetic testing), laboratory responsibilities regarding authorized persons, confidentiality of patient information, and personnel competency. The recommendations also address factors to consider before introducing molecular genetic testing or offering new molecular genetic tests and the quality management system approach in molecular genetic testing.
Implementation of the recommendations in laboratories that perform molecular genetic testing for heritable diseases and conditions, and an understanding of these recommendations by users of laboratory services, are expected to prevent or reduce errors and problems related to test selection and requests, specimen submission, test performance, and reporting and interpretation of results, leading to improved use of molecular genetic laboratory services, better health outcomes for patients, and, in many instances, better health outcomes for families of patients. In future reports, recommendations will be provided for good laboratory practices focusing on other areas of genetic testing, such as biochemical genetic testing, molecular cytogenetic testing, and somatic genetic testing.

# Background
With the completion of the Human Genome Project, discoveries linking genetic mutations or variations to specific diseases and biologic processes are frequently reported (18). The rapid progress in biomedical research, accompanied by advances in laboratory technology, has led to increased opportunities for development and implementation of new molecular genetic tests. For example, the number of heritable diseases and conditions for which clinical genetic tests are available more than tripled in 8 years, from 423 diseases in November 2000 to approximately 1,300 diseases and conditions in October 2008 (2,19). Molecular genetic testing is performed not only to detect or confirm rare genetic diseases or heritable conditions (20) but also to detect mutations or genetic variations associated with more common and complex conditions such as cancer (21,22), coagulation disorders (23), cardiovascular diseases (24), and diabetes (25). As the rapid pace of genetic research results in a better understanding of the role of genetic variations in diseases and health conditions, the development and clinical use of molecular genetic tests continue to expand (26–28). Despite considerable information gaps regarding the number of U.S. laboratories that perform molecular genetic tests for heritable diseases and conditions and the number of specific genetic tests being performed (1), molecular genetic testing is one of the most rapidly growing areas of laboratory testing. Molecular genetic tests are performed by a broad range of laboratories, including laboratories that have CLIA certificates for chemistry, pathology, clinical cytogenetics, or other specialties or subspecialties (11). Although nationwide data are not available, data from state programs indicate considerable increases in the numbers of laboratories that perform molecular genetic tests. For example, the number of approved laboratories in the state of New York that perform molecular genetic testing for heritable diseases and conditions increased 36% in 6 years, from 25 laboratories in February 2002 to 34 laboratories in October 2008 (29). Although comprehensive data on the annual number of molecular genetic tests performed nationwide are not available, industry reports indicate a steady increase in the number of common molecular genetic tests for heritable diseases and conditions, such as mutation testing for cystic fibrosis and factor V Leiden thrombophilia (3). The number of cystic fibrosis mutation tests has increased significantly since 2001, pursuant to the recommendations of the American College of Obstetricians and Gynecologists and ACMG for preconception and prenatal carrier screening (30,31).
The DNA-based cystic fibrosis mutation tests are now among the most commonly performed genetic tests in the United States and have become an essential component of several state newborn screening programs for confirming presumptive screening results of infants (32). The overall worldwide increase in molecular genetic testing from 2006 to 2007 has been reported in some market analyses to be 15%, outpacing other areas of molecular diagnostic testing (33).

# CLIA Oversight for Molecular Genetic Testing
In 1988, Congress enacted Public Law 100-578, a revision of Section 353 of the Public Health Service Act (42 U.S.C. 263a) that amended the Clinical Laboratory Improvement Act of 1967 and required the Department of Health and Human Services (HHS) to establish regulations to ensure the quality and reliability of laboratory testing on human specimens for disease diagnosis, prevention, or treatment or for health assessment purposes. In 1992, HHS published CLIA regulations that describe requirements for all laboratories that perform patient testing (34). Facilities that perform testing for forensic purposes only and research laboratories that test human specimens but do not report patient-specific results are exempt from CLIA regulations (34). CMS (formerly the Health Care Financing Administration) administers the CLIA laboratory certification program in conjunction with FDA and CDC. FDA is responsible for test categorization, and CDC is responsible for CLIA studies, convening CLIAC, and providing scientific and technical support to CMS. CLIAC was chartered by HHS to provide recommendations and advice regarding CLIA regulations, the impact of CLIA regulations on medical and laboratory practices, and modifications needed to CLIA standards to accommodate technological advances. In 2003, CMS and CDC published CLIA regulatory revisions to reorganize and revise CLIA requirements for quality systems for nonwaived testing and the laboratory director qualifications for high-complexity testing (35). The revised regulations included facility administration and quality system requirements for every phase of the testing process (35). Requirements for the clinical cytogenetics specialty also were reorganized and revised. Other genetic tests, such as molecular genetic tests, are not recognized as a specialty or subspecialty under CLIA. However, because these tests are considered high complexity, laboratories that perform molecular genetic testing for heritable diseases and conditions must meet the applicable general CLIA requirements for nonwaived testing and the personnel requirements for high-complexity testing (36). To enhance oversight of genetic testing under CLIA, CMS developed a plan to promote a comprehensive approach for effective application of current regulations and to provide training and guidelines to surveyors and laboratories that perform genetic testing (16). CDC and CMS also have been assessing the need to revise and update CLIA requirements for proficiency testing programs and laboratories, taking into consideration improved performance evaluation of laboratories that perform genetic testing (37).
# Concerns Related to Molecular Genetic Testing
Studies and reports since 1997 have revealed a broad range of concerns related to molecular genetic testing for heritable diseases and conditions, including the safe and effective translation of research findings into patient testing, the quality of test performance and results interpretation, the appropriate use of testing information and services in health management and patient care, the adequacy of quality assurance measures, and concerns involving the ethical, legal, economic, and social aspects of molecular genetic testing (1,9,22,38,39). Some of these concerns are indicative of the areas of laboratory practice that are in need of improvement, such as performance establishment and verification, proficiency testing, personnel qualifications and training, and results reporting (1,9,11,22,39).

# Errors Associated with and Needed Improvements in the Three Phases of Molecular Genetic Testing
Studies have indicated that although error rates associated with different areas of laboratory testing vary (40), the overall distribution of errors reported in the preanalytic, analytic, and postanalytic phases of the testing process is similar for many testing areas, including molecular genetic testing (9,11,39,40). The preanalytic phase encompasses test selection and ordering and specimen collection, processing, handling, and delivery to the testing site. The analytic phase includes selection of test methods, performance of test procedures, monitoring and verification of the accuracy and reliability of test results, and documentation of test findings. The postanalytic phase includes reporting test results and archiving records, reports, and tested specimens (41). Studies have indicated that errors are more likely to occur during the preanalytic and postanalytic phases of the testing process than during the analytic phase, with most errors reported for the preanalytic phase (40,42–44). In the preanalytic phase, inappropriate selection of laboratory tests has been a significant source of errors (42,43). Misuse of laboratory services, such as unnecessary or inappropriate test requests, might lead to increased risk for medical errors, adverse patient outcomes, and increased health-care costs (43). Although no study has determined the overall number of molecular genetic tests performed that could be considered unwarranted or unnecessary, a study of the use and interpretation of adenomatous polyposis coli gene (APC) testing for familial adenomatous polyposis and other heritable conditions associated with colonic polyposis indicated that 17% of the cases evaluated did not have valid indications for testing (22). Although data are limited, studies also indicate that improvements are needed in the analytic phase of molecular genetic testing. A study of the frequency and severity of errors associated with DNA-based genetic testing revealed that errors related to specimen handling in the laboratory and other analytic steps ranged from 0.06% to 0.12% of approximately 92,000 tests evaluated (39). A subsequent meta-analysis indicated that these self-reported error rates were comparable to those detected in nongenetic laboratory testing (40). An analysis of performance data from the CAP molecular genetic survey program during 1995-2000 estimated the overall error rate for cystic fibrosis mutation analysis to be 1.5%, of which approximately 50% of the errors occurred during the analytic or postanalytic phases of testing (45).
Unrecognized sequence variations or polymorphisms also could affect the ability of molecular genetic tests to detect or distinguish the genotypes being analyzed, leading to false-positive or false-negative test results. Such problems have been reported for some commonly performed genetic tests, such as cystic fibrosis mutation analysis and testing for HFE-associated hereditary hemochromatosis (46,47). The postanalytic phase of molecular genetic testing involves analysis of test results, preparation of test reports, and results reporting. The study on the use of APC gene testing and interpretation of test results indicated that lack of awareness among health-care providers of APC test limitations was a primary reason for misinterpretation of test results (22). In a study assessing the comprehensiveness and usefulness of reports for cystic fibrosis and factor V Leiden thrombophilia testing, physicians in many medical specialties considered reports that included information beyond that specified by the general CLIA test report requirements to be more informative and useful than test reports that only met CLIA requirements; the additional information included patient race/ethnicity, clinical history, reasons for test referral, test methodology, recommendations for follow-up testing, implications for family members, and suggestions for genetic counseling (48). Consistent with these findings, international guidelines for quality assurance in molecular genetic testing recommend that molecular genetic test reports be accurate, concise, and comprehensive and communicate all essential information to enable effective decision-making by patients and health-care professionals (49).

# Proficiency Testing
Proficiency testing is a well-established practice for monitoring and improving the quality of laboratory testing (50,51) and is a key component of the external quality assessment process. Studies have indicated that using proficiency testing samples that resemble actual patient specimens could improve monitoring of laboratory performance (50,52–54). Participation in proficiency testing has helped laboratories reduce analytic deficiencies, improve testing procedures, and take steps to prevent future errors (55–59). CLIA regulations do not yet include proficiency testing requirements for molecular genetic tests. Laboratories that perform molecular genetic testing must meet the general CLIA requirement to verify, at least twice annually, the accuracy of the genetic tests they perform (42 CFR §493.1236) (36). Laboratories may participate in available proficiency testing programs for the genetic tests they perform to meet this CLIA alternative performance assessment requirement. Proficiency testing participation correlates significantly with the quality assurance measures in place among laboratories that perform molecular genetic testing (9,10). Because proficiency testing is a rigorous external assessment of laboratory performance, in 2008 SACGHS recommended that proficiency testing participation be required for all molecular genetic tests for which proficiency testing programs are available (1). Formal molecular genetic proficiency testing programs are available for only a limited number of tests for heritable diseases and conditions; in addition, the samples provided often are purified DNA, which typically do not require performance of all steps of the testing process, such as nucleic acid extraction and preparation (60).
For many genetic conditions that are either rare or for which testing is performed by one or a few laboratories, substantial challenges in developing formal proficiency testing programs have been recognized (1). Development of effective alternative performance assessment approaches to proficiency testing is essential for ensuring the quality of molecular genetic testing (1). Professional guidelines have been developed for laboratories to evaluate and monitor test performance when proficiency testing programs are not available (61). However, reports of CAP molecular pathology on-site inspections indicate that deficiencies related to participation in interlaboratory comparison or alternative performance assessment are among the most frequently identified deficiencies, accounting for 3.9% of all deficiencies cited (62).

# Clinical Validity and Potential Risks Associated with Certain Molecular Genetic Tests
The ability of a test to diagnose or predict risk for a particular health condition is the test's clinical validity, which often is measured by the clinical (or diagnostic) sensitivity, clinical (or diagnostic) specificity, and predictive values of the test for a given health condition. Clinical validity can be influenced by factors such as the prevalence of the disease or health condition, penetrance (the proportion of persons with a mutation causing a particular disorder who exhibit clinical symptoms of the disorder), and modifiers (genetic or environmental factors that might affect the variability of signs or symptoms associated with the phenotype of a genetic alteration). For genetic tests, clinical validity refers to the ability of a test to detect or predict the presence or absence of a particular disease or phenotype and often corresponds to associations between genotypes and phenotypes (1,28,63–69). The usefulness of a test in clinical practice, referred to as clinical utility, involves identifying the outcomes associated with specific test results (28). Clinical validity and clinical utility should be assessed individually for each genetic test because the implications might vary depending on the health condition and population being tested (38). As advances in genomic research and technology result in rapid development of new genetic tests, concerns have been raised that certain tests, particularly predictive genetic tests, could become available without adequate assessment of their validity, benefits, and utility. Consequently, health professionals and consumers might not be able to make fully informed decisions about whether or how to use these tests. In 1997, a task force formed by a National Institutes of Health (NIH)-Department of Energy workgroup recommended that laboratories that perform patient testing establish clinical validity for the genetic tests they develop before offering them for patient testing and carefully review and document evidence of test validity if the test has been developed elsewhere (70). This recommendation was later included in a report of the Secretary's Advisory Committee on Genetic Testing (SACGT), which was established in 1998 to advise HHS on medical, scientific, ethical, legal, and social concerns raised by the development and use of genetic tests (38). Public concerns about inadequate knowledge or documentation of the clinical validity of certain genetic tests also were recognized by SACGHS, the advisory committee that was established by HHS in 2002 to supersede SACGT (1).
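As a concrete illustration of the clinical validity measures defined above, the following minimal sketch computes positive and negative predictive values from clinical sensitivity, specificity, and prevalence; the numbers are hypothetical, not drawn from the cited studies, and are chosen only to show how predictive values fall as prevalence drops.

```python
def predictive_values(sensitivity, specificity, prevalence):
    """Bayes' rule: PPV and NPV for a test applied at a given prevalence."""
    tp = sensitivity * prevalence                 # true positives
    fp = (1 - specificity) * (1 - prevalence)     # false positives
    fn = (1 - sensitivity) * prevalence           # false negatives
    tn = specificity * (1 - prevalence)           # true negatives
    return tp / (tp + fp), tn / (tn + fn)

# Hypothetical test: 99% clinical sensitivity and 99% clinical specificity.
for prev in (0.10, 0.001):
    ppv, npv = predictive_values(0.99, 0.99, prev)
    print(f"prevalence {prev}: PPV = {ppv:.3f}, NPV = {npv:.5f}")
# PPV is ~0.917 at 10% prevalence but only ~0.090 at 0.1% prevalence,
# which is why the same test can perform well in a high-risk referral
# population yet poorly when offered broadly.
```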
SACGHS recommended the development and support of sustainable public-private collaborations to fill the gaps in knowledge of the analytic validity, clinical validity, clinical utility, economic value, and population health impact of molecular genetic tests (1). Recognized collaborative efforts include the Evaluation of Genomic Applications in Practice and Prevention (EGAPP) program, a CDC initiative to establish and evaluate a systematic, evidence-based process for assessing genetic tests and other applications of genomic technology in transition from research to clinical practice and public health (71), and the Collaboration, Education, and Test Translation (CETT) Program, which is overseen by the NIH Office of Rare Diseases to promote the effective transition of potential genetic tests for rare diseases from research settings into clinical settings (72). The increase in direct-to-consumer (DTC) genetic testing (i.e., genetic tests offered directly to consumers with no health-care provider involvement) has raised concerns about the potential risks or misuses of certain genetic tests (73). As of October 2008, consumers could directly order laboratory tests in 27 states; in another 10 states, consumer-ordered tests were allowed under defined circumstances (74). As DTC genetic tests have become increasingly available, various genetic profile tests that claim to answer questions regarding cardiovascular risks, drug metabolism, diet, and lifestyle have been marketed directly to the public (73). In addition, DTC advertisements have caused a substantial increase in the demand for molecular genetic tests, such as those for hereditary breast and ovarian cancers (75,76). Although it allows easy access to testing services, DTC genetic testing has raised concerns about the potential for inadequate pretest decision-making, misunderstanding of test results, access to tests of questionable clinical value, lack of necessary follow-up, and unexpected additional responsibilities for primary care physicians (77–80). Both the government and professional organizations have developed educational materials that provide guidance to consumers, laboratories, genetics professionals, and professional organizations regarding DTC genetic tests (80–82).

# Personnel Qualifications and Training
Studies indicate that the qualifications of laboratory personnel, including training and experience, are critical for ensuring quality performance of genetic testing, because human error has the greatest potential influence on the quality of laboratory test results (9,83,84). A study of laboratories in the United States that perform molecular genetic testing suggested that adherence to voluntary quality standards and guidelines for genetic testing was significantly associated with direction or supervision of the laboratory by persons with board certification in medical genetics (9). Results of an international survey revealed a similar correlation between the quality assurance practices of a molecular genetic testing laboratory and the formal training of the laboratory director (10). Overall, the concerns recognized in publications and documented cases support the need for trained, qualified personnel at all levels to ensure the quality of all phases of the genetic testing process.
# Methods
Information Collection and Assessment
To monitor and assess the scope and growth of molecular genetic testing in the United States, data were collected and analyzed from scientific articles, government reports, the CMS CLIA database, information from state programs, studies by professional groups, publicly available directories and databases of laboratories and laboratory testing, industry reports, and CDC studies (1–3,5,6,9,29,38,83,85–88). To evaluate factors in molecular genetic testing that might affect testing quality and to identify areas that would benefit from quality assurance guidelines, various documents were considered, including professional practice guidelines, CAP laboratory accreditation checklists, CLSI guidelines, state requirements, and international guidelines and standards (12–15,49,61,89–95).

# Development of CLIAC Recommendations for Good Laboratory Practices in Molecular Genetic Testing
Since 1997, CLIAC has provided HHS with recommendations on approaches needed to ensure the quality of genetic testing (37). At the February 2007 CLIAC meeting, CLIAC asked CDC and CMS to clarify critical concerns in genetic testing oversight and to provide a status report at the subsequent CLIAC meeting. At the September 2007 CLIAC meeting, CDC presented an overview of the regulatory oversight and voluntary measures for quality assurance of genetic testing and described a plan to develop and publish educational material on good laboratory practices. CDC solicited CLIAC recommendations to address concerns that presented particular challenges related to genetic testing oversight, including establishment and verification of performance specifications, control procedures for molecular amplification assays, proficiency testing, genetic test reports, personnel competency assessment, and the definition of genetic tests. CLIAC recommended convening a workgroup of experts in genetic testing to consider these concerns and provide input for CLIAC deliberation, and the CLIAC Genetic Testing Good Laboratory Practices Workgroup was formed. The workgroup conducted a series of meetings on the scope of laboratory practice recommendations needed for genetic testing and suggested that recommendations first be developed for molecular genetic testing for heritable diseases and conditions. The workgroup evaluated good laboratory practices for all phases of the genetic testing process after reviewing professional guidelines, regulatory and voluntary standards, accreditation checklists, international standards and guidelines, and other documents that provided general or specific quality standards applicable to molecular genetic testing for heritable diseases and conditions (1,12–15,36,41,49,61,80,82,91–109). The workgroup also reviewed information on the HHS-approved and other certification boards for laboratory personnel and the number of persons certified in each of the specialties for which certification is available (110–118). Workgroup suggestions were reported to CLIAC at the September 2008 committee meeting. The CLIAC recommendations were developed on the basis of the workgroup report and additional committee deliberations. The committee recommended that CDC include the CLIAC-recommended good laboratory practices for molecular genetic testing in the planned publication.
Summaries of CLIAC meetings and CLIAC recommendations are available (37).

# Recommended Good Laboratory Practices
The following recommended good laboratory practices are for areas of molecular genetic testing for heritable diseases and conditions in need of guidelines for complying with existing CLIA requirements or in need of additional quality assurance measures. These recommendations are not intended to encompass the entire realm of laboratory practice; they are meant to provide guidelines for specific quality concerns in the performance and delivery of laboratory services for molecular genetic testing for heritable diseases and conditions. These recommendations address laboratory practices for the total testing process, including the preanalytic, analytic, and postanalytic phases of molecular genetic testing. The recommendations for the preanalytic phase include guidelines for laboratory responsibilities for providing information to users of laboratory services, informed consent, test requests, specimen submission and handling, test referrals, and preanalytic systems assessment. The recommendations for the analytic phase include guidelines for establishment and verification of performance specifications, quality control procedures, proficiency testing, and alternative performance assessment. The recommendations for the postanalytic phase include guidelines for test reports, retention of records and reports, and specimen retention. The recommendations also address the responsibilities of laboratories regarding authorized persons, confidentiality of patient information and test results, personnel competency, factors to consider before introducing molecular genetic testing or offering new molecular genetic tests, and the potential benefits of the quality management system approach in molecular genetic testing. Recommendations are provided in relation to applicable provisions in the CLIA regulations and, when necessary, are followed by a description of how the recommended practices can be used to improve quality assurance and quality assessment for molecular genetic testing. A list of terms and abbreviations used in this report also is provided (Appendix A).

# The Preanalytic Testing Phase
Test Information to Provide to Users of Laboratory Services
Laboratories are responsible for providing information regarding the molecular genetic tests they perform to users of their services; users include authorized persons under applicable state law, health-care professionals, patients, referring laboratories, and payers of laboratory services. Laboratories should review the genetic tests they perform and the procedures they use to provide and update the recommended test information that follows. At a minimum, laboratories should ensure that the test information is available from accessible sources such as websites, service directories, information pamphlets or brochures, newsletters, instructions for specimen submission, and test request forms. Laboratories that already provide the information from these sources should continue to do so. However, laboratories also might decide to provide the information more directly to their users (e.g., by telephone, by e-mail, or in person) and should determine the situations in which such direct communication is necessary. The complexity of language used should be appropriate for the particular laboratory user groups (e.g., for patients, plain language understandable by the general public).

Test selection, test performance, and specimen submission.
Laboratories should provide information regarding the molecular genetic tests they perform to users of their services to facilitate appropriate test selection and requests, specimen handling and submission, and patient care. Each laboratory that performs molecular genetic testing for heritable diseases and conditions should provide the following information to its users:
• Information necessary for selecting appropriate tests, including a list of the molecular genetic tests the laboratory performs. For each molecular genetic test, the following information should be provided:
– Intended use of the test, including the nucleic acid target of the test (e.g., genes, sequences, mutations, or polymorphisms), the purpose of testing (e.g., diagnostic, preconception, or predictive), and the recommended patient populations
– Indications for testing
– Test method to be used, presented in user-friendly language

Cost. When possible and practical, laboratories should provide users with information on the charges for the molecular genetic tests being performed. Estimating the expenses that a patient might incur from a particular genetic test might be difficult for certain laboratories and providers because fee schedules of individual laboratories can vary depending on the health-care payment policy selections of each patient. However, advising the patient and family members of the financial implications of the tests, whenever possible, facilitates informed decision-making.

Discussion. Under CLIA, laboratories are required to develop and follow written policies and procedures for specimen submission and handling, specimen referral, and test requests (42 CFR §§493.1241 and 493.1242). Laboratories must ensure positive identification and optimum integrity of specimens from the time of collection or receipt through the completion of testing and reporting of test results (42 CFR §493.1232). In addition, laboratories that perform nonwaived testing must ensure that a qualified clinical consultant is available to assist laboratory clients with ordering tests appropriate for meeting clinical expectations (42 CFR §493.1457). The recommended laboratory practices in this report describe laboratory responsibilities for ensuring appropriate test requests and specimen submission for the molecular genetic tests they perform, in addition to laboratory responsibilities for meeting CLIA requirements. The recommendations emphasize the role of laboratories in providing specific information needed by users before decisions are made regarding test selection and ordering, based on several considerations. First, molecular genetic tests for heritable diseases and conditions are being rapidly developed and increasingly used in health-care settings. Users of laboratory services need to be able to easily access information regarding the intended use, performance specifications, and limitations of the molecular genetic tests a laboratory offers to determine appropriate testing for specific patient conditions. Second, many molecular genetic tests are performed using laboratory-developed tests or test systems. The performance specifications and limitations of the testing might vary among laboratories, even for the same disease or condition, depending on the specific procedures used. Users of laboratory services who are not provided information related to the appropriateness of the tests being considered might select tests that are not indicated or cannot meet clinical expectations.
Third, for many heritable diseases and conditions, test performance and interpretation of test results require information regarding patient race/ethnicity, family history, and other pertinent clinical and laboratory information. Informing users, before tests are ordered, of the specific patient information needed by the laboratory should facilitate test requests and allow prompt initiation of appropriate testing procedures and accurate interpretation of test results. Finally, providing users with information on the performance specifications and limitations of tests before test selection and ordering prepares users of laboratory services to understand test results and their implications. CLIA test report requirements (42 CFR §493.1291) indicate that laboratories are required to provide users of their services, on request, with information on laboratory test methods and the performance specifications the laboratory has established or verified for the tests. However, for molecular genetic tests for heritable diseases and conditions, laboratories should provide test performance information to users before test selection and ordering rather than waiting for a request after the test has been performed. The information provided in the preanalytic phase must be consistent with the information included on test reports. Providing molecular genetic testing information to users before tests are selected and ordered should improve test requests and specimen submission and might reduce unnecessary or unwarranted testing. The recommended practices also might increase informed decision-making, improve interpretation of results, and improve patient outcomes.

# Informed Consent
A person who provides informed consent voluntarily confirms a willingness to undergo a particular test after having been informed of all aspects of the test that are relevant to the patient's decision (49). Informed consent for genetic testing or for specific types of genetic tests is required by law in certain states; as of June 2008, 12 states required that informed consent be obtained before a genetic test is requested or performed (119). In addition, certain states (e.g., Massachusetts, Michigan, Nebraska, New York, and South Dakota) have included required informed consent components in their statutes (Appendix B). These state statutes can be used as examples for laboratories in other states that are developing specific informed consent forms. Professional organizations recommend that informed consent be obtained for testing for many inherited genetic conditions (12,13). CLIA regulations have no requirements for laboratory documentation of informed consent for requested tests; however, medical decisions for patient diagnosis or treatment should be based on informed decision-making (124). Regardless of whether informed consent is required, laboratories that perform molecular genetic tests for heritable diseases and conditions should be responsible for providing users with the information necessary to make informed decisions. Informed consent is within the purview of the practice of medicine; the persons authorized to order the tests are responsible for obtaining the appropriate level of informed consent (67). Unless mandated by state or local requirements, obtaining informed consent before performing a test generally is not considered a laboratory responsibility. For molecular genetic testing for heritable diseases and conditions, not all tests require written patient consent before testing (125).
However, when informed consent for patient testing is recommended or required by law or other applicable requirements as a method for documenting the process and outcome of informed decision-making, laboratories should ensure that the following practices are followed:
• Be available to assist users of laboratory services in determining the appropriate level of informed consent by providing useful and necessary information.
• Include appropriate methods for documenting informed consent on test request forms, and determine whether the consent information is provided with the test request before initiating testing.
Laboratories may determine situations in which a patient specimen can be stabilized until informed consent is obtained, following the practices for specimen retention recommended in these guidelines. Laboratories should refer to professional guidelines for additional information regarding informed consent for molecular genetic tests and should consider available models when developing the content, format, and procedures for documentation of patient consent.

# Test Requests
CLIA requirements (42 CFR §493.1241) specify that laboratories that perform nonwaived testing must ensure that the test request solicits the following information: 1) the name and address or other suitable identifiers of the authorized person requesting the test and (if applicable) the person responsible for using the test results, or the name and address of the laboratory submitting the specimen, including (if applicable) a contact person to enable reporting of imminently life-threatening laboratory results or critical values; 2) patient name or a unique patient identifier; 3) sex and either age or date of birth of the patient; 4) the tests to be performed; 5) the source of the specimen (if applicable); 6) the date and (if applicable) time of specimen collection; and 7) any additional information relevant and necessary for a specific test to ensure accurate and timely testing and reporting of results, including interpretation (if applicable). For molecular genetic testing for heritable diseases and conditions, laboratories must comply with these CLIA requirements and should solicit the following additional information on test requests (a sketch of a request record capturing these elements follows this discussion):

Patient identifiers. Laboratories that perform molecular genetic testing for heritable diseases and conditions should ensure that at least two unique identifiers are solicited on these test requests, which should include patient names, when possible, and any other unique identifiers needed to ensure patient identification. In certain situations (e.g., compatibility testing, for which donor names are not always provided to the laboratory), an alternative unique identifier is appropriate.

Date of birth. CLIA requirements specify that test requests must solicit the sex and either the age or the date of birth of the patient (42 CFR §493.1241). For molecular genetic testing for heritable diseases and conditions, patient date of birth is more informative than age and should be obtained when possible.

Indications for testing, relevant clinical and laboratory information, patient race/ethnicity, family history, and pedigree. Obtaining information on indications for testing, relevant clinical or laboratory information, patient racial/ethnic background, family history, and pedigree is critical for selecting appropriate test methods, determining the mutations or variants to be tested, interpreting test results, and reporting test results in a timely manner.
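As a minimal sketch of how a laboratory information system might represent the CLIA-required and CLIAC-recommended request elements above, the following Python data structure groups them in one record; the class and field names are hypothetical, not a prescribed schema.

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class MolecularGeneticTestRequest:
    """Illustrative test request record: CLIA-required elements
    (42 CFR 493.1241) plus the additional recommended elements."""
    authorized_person: str                 # requester name/address or lab
    patient_identifiers: list[str]         # >=2 unique identifiers preferred
    sex: str
    date_of_birth: Optional[date]          # preferred over age when possible
    tests_requested: list[str]
    specimen_source: Optional[str]
    collection_date: date
    # Additional elements recommended for heritable-disease testing
    indications_for_testing: Optional[str] = None
    race_ethnicity: Optional[str] = None
    family_history: Optional[str] = None
    relevant_clinical_info: Optional[str] = None
    informed_consent_documented: bool = False

    def complete_for_intake(self) -> bool:
        """Minimal completeness check before testing is initiated."""
        return bool(self.tests_requested) and len(self.patient_identifiers) >= 2
```

Keeping such elements in a single structured record also supports the recommendation below that request information be retained, accurately and completely, throughout the testing process.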
Genetic conditions often differ in disease prevalence and in mutation frequencies and distributions among racial/ethnic groups. Unique, or private, mutations or genotypes might be present only in specific families or can be associated with founder effects (i.e., gene mutations observed in high frequency in a specific population because of the presence of the mutation in a single ancestor or a small number of ancestors in the founding population). Family history and other relevant clinical or laboratory information are often important for determining whether the test requested is likely to meet the clinical expectations, including the likelihood of identifying a disease-causing mutation. The specific race/ethnicity, family history, and other pertinent information to be solicited on a test request should be determined according to the specific disease or condition for which the patient is being tested. Laboratories should consider available guidelines for requesting and obtaining this additional information and should determine the circumstances in which more specific patient information is needed for particular genetic tests (126,127). Although this information is not specified in CLIA, the regulations provide laboratories the flexibility to determine and solicit information relevant and necessary for a specific test (42 CFR §493.1241). The recommended test request components also are consistent with many voluntary professional and accreditation guidelines (12–14).

Documentation of informed consent. Methods for indicating and documenting informed consent on a test request might include a statement, text box, or check-off box on the test request form to be signed or checked by the test requestor; a separate form to be signed as part of the test request; or another method that complies with applicable requirements and adheres to professional guidelines. In addition, when state or local laws or regulations specify that patient consent must be obtained regarding the use of tested specimens for quality assurance or other purposes, the test request must include a way for the test requestor to indicate the decision of the patient. Laboratories also might determine that other situations merit documentation of consent before testing.

# Specimen Submission, Handling, and Referral
CLIA requires laboratories to establish and follow written policies and procedures for patient preparation, specimen collection, specimen labeling (including patient name or unique patient identifier and, when appropriate, specimen source), specimen storage and preservation, conditions for specimen transportation, specimen processing, specimen acceptability and rejection, and referral of specimens to another laboratory (42 CFR §493.1242). If a laboratory accepts a referral specimen, appropriate written instructions providing information on specimen handling and submission must be available to the laboratory's clients. The following recommendations are intended to help laboratories that perform molecular genetic testing meet the general CLIA requirements and to provide additional guidelines on quality assurance measures for specimen submission, handling, and referral for molecular genetic testing. Before test selection and ordering, laboratories that perform molecular genetic testing should provide their users with instructions on specimen collection, handling, transport, and submission, including:
• Reasons the specimen or the analytes to be detected in a specimen might be compromised
• Specimen transport conditions (e.g., ambient temperature, refrigeration, and immediate delivery)
• Reasons for rejection of specimens

Criteria for specimen acceptance or rejection. Laboratories should have written criteria for acceptance or rejection of specimens for the molecular genetic tests they perform and should promptly notify the authorized person when a specimen meets the rejection criteria and is determined to be unsuitable for testing. The criteria should include information on determining the existence of, and addressing, the following situations:
• Improper handling or transport of specimens
• Lack of other information needed to determine whether the specimen or test requested is appropriate for answering the clinical question

Retention and exchange of information throughout the testing process. Information on test requests and test reports is a particularly important component of the complex communication between genetic testing laboratories and their users. Laboratories should have policies and procedures in place to ensure that information needed for selection of appropriate test methods, test performance, and results interpretation is retained throughout the entire molecular genetic testing process. This recommendation is based on CLIAC recognition of instances in which information on test requests or test reports was removed by electronic or other information systems during specimen submission, results reporting, or test referral. CLIA requires laboratories to ensure the accuracy of test request or authorization information when transcribing or entering the information into a record system or a laboratory information system (42 CFR §493.1241). For molecular genetic tests, information on test requests and test reports should be retained accurately and completely throughout the testing process.

Specimen referral. CLIA requires laboratories to refer specimens for any type of patient testing to CLIA-certified laboratories or laboratories that meet equivalent requirements as determined by CMS (e.g., laboratories in states with CLIA-exempt licensure programs) (42 CFR §493.1242).

# The Analytic Testing Phase
Establishment and Verification of Performance Specifications
CLIA requires laboratories to establish or verify the analytic performance of all nonwaived tests and test systems before introducing them for patient testing and to determine the calibration and control procedures of tests on the basis of the performance specifications verified or established. Before reporting patient test results, each laboratory that introduces an unmodified, FDA-cleared or FDA-approved test system must 1) demonstrate that the manufacturer-established performance specifications for accuracy, precision, and reportable range of test results can be reproduced and 2) verify that the manufacturer-provided reference intervals (or normal values) are appropriate for the laboratory's patient population (42 CFR §493.1253). Laboratories are subject to more stringent requirements when introducing 1) FDA-cleared or FDA-approved test systems that have been modified by the laboratory, 2) laboratory-developed tests or test systems that are not subject to FDA clearance or approval (e.g., standardized methods and textbook procedures), or 3) test systems with no manufacturer-provided performance specifications.
In these instances, before reporting patient test results, laboratories must conduct more extensive procedures to establish the applicable performance specifications: accuracy; precision; analytic sensitivity; analytic specificity; reportable range of test results; reference intervals (normal values); and other performance characteristics required for test performance. Although laboratories that perform molecular genetic testing for heritable diseases and conditions must comply with these general CLIA requirements, additional guidelines are needed to assist with establishment and verification of performance specifications for these tests. The recommended laboratory practices that follow are primarily intended to provide specific guidelines for establishing performance specifications for laboratory-developed molecular genetic tests to ensure valid and reliable test performance and interpretation of results. The recommendations also might be used by laboratories to verify performance specifications of unmodified FDA-cleared or FDA-approved molecular genetic test systems to be introduced for patient testing. Factors that should be considered when developing performance specifications for molecular genetic tests include the intended use of the test; the target genes, sequences, and mutations; the intended patient populations; the test methods; and the samples to be used (99).

Accuracy. Accuracy is commonly defined as "closeness of the agreement between the result of a measurement and a true value of the measurand" (128). For qualitative molecular genetic tests, laboratories are responsible for verifying or establishing the accuracy of the method used to identify the presence or absence of the analytes being evaluated (e.g., mutations, variants, or other targeted nucleic acids). Accuracy might be assessed by testing reference materials, comparing test results against the results of a reference method, comparing split-sample results with results obtained from a method shown to provide clinically valid results, or correlating research results with the clinical presentation when establishing a test system for a new analyte, such as a newly identified disease gene (96).

Precision. Precision is defined as "closeness of agreement between independent test results obtained under stipulated conditions" (129). Precision is commonly determined by assessing repeatability (i.e., closeness of agreement between independent test results for the same measurand under the same conditions) and reproducibility (i.e., closeness of agreement between independent test results for the same measurand under changed conditions). Precision can be verified or established by assessing day-to-day, run-to-run, and within-run variation (as well as operator variance) through repeat testing of known patient samples, quality control materials, or calibration materials over time (96).

Analytic sensitivity. Practice guidelines vary in their definitions of analytic sensitivity; certain guidelines consider analytic sensitivity to be the ability of an assay to detect a given analyte, or the lower limit of detection (LOD) (93), whereas guidelines for molecular genetic testing for heritable diseases consider analytic sensitivity to be "the proportion of biological samples that have a positive test result or known mutation and that are correctly classified as positive" (12). However, determining the LOD of a molecular genetic test or test system is often needed as part of performance establishment and verification (93).
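Under the heritable-disease definition quoted above, analytic sensitivity (and, analogously, analytic specificity) can be estimated by concordance against samples of known genotype. The sketch below is a minimal illustration of that tabulation, assuming a simple list of (known status, test call) pairs; it is an illustration of the calculation only, not a validation protocol.

```python
def analytic_performance(results):
    """results: iterable of (known_positive, test_positive) pairs for
    samples of known mutation status (e.g., characterized reference
    materials or previously confirmed patient samples)."""
    tp = sum(1 for known, called in results if known and called)
    fn = sum(1 for known, called in results if known and not called)
    tn = sum(1 for known, called in results if not known and not called)
    fp = sum(1 for known, called in results if not known and called)
    sensitivity = tp / (tp + fn) if tp + fn else None
    specificity = tn / (tn + fp) if tn + fp else None
    return sensitivity, specificity

# Hypothetical example: 40 known-mutation samples (one miscalled) and
# 60 known-normal samples (all correctly called).
calls = [(True, True)] * 39 + [(True, False)] + [(False, False)] * 60
print(analytic_performance(calls))   # (0.975, 1.0)
```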
To avoid potential confusion among users and the general public in understanding test performance and test results, laboratories should review and follow applicable professional guidelines before testing is introduced and ensure the guidelines are followed consistently throughout performance establishment and verification and during subsequent patient testing. Analytic sensitivity should be determined for each molecular genetic test before the test is used for patient testing.

Analytic specificity. Analytic specificity is generally defined as the ability of a test method to determine only the target analytes to be detected or measured and not the interfering substances that might affect laboratory testing. Interfering substances include factors associated with specimens (e.g., specimen hemolysis, anticoagulant, lipemia, and turbidity) and factors associated with patients (e.g., clinical conditions, disease states, and medications) (96). Laboratories must document information regarding interfering substances and should use product information, literature, or the laboratory's own testing (96). Accepted practice guidelines for molecular genetic testing, such as those developed by ACMG, CAP, and CLSI, define analytic specificity as the ability of a test to distinguish the target sequences, alleles, or mutations from other sequences or alleles in the specimen or genome being analyzed (12)(13)(14). The guidelines also address documentation and determination of common interfering substances specific for molecular detection (e.g., homologous sequences, contaminants, and other exogenous or endogenous substances) (12)(13)(14). Laboratories should adhere to these specific guidelines in establishing or verifying analytic specificity for each of their molecular genetic tests.

Reportable range of test results. As defined by CLIA, the reportable range of test results is "the span of test result values over which the laboratory can establish or verify the accuracy of the instrument or test system measurement response" (36). The reportable range of patient test results can be established or verified by assaying low and high calibration materials or control materials or by evaluating known samples of abnormally high and low values (96). For targeted mutation analyses, for example, laboratories should assay quality control or reference materials, known normal samples, and samples containing the mutations to be detected. For analysis of trinucleotide repeats, laboratories should include samples representing the full range of expected allele lengths (130).

Reference range, or reference interval (i.e., normal values). As defined by CLIA, a reference range, or reference interval, is "the range of test values expected for a designated population of persons (e.g., 95% of persons presumed to be healthy)" (36). The CMS Survey Procedures and Interpretive Guidelines for Laboratories and Laboratory Services provides general guidelines regarding the use of manufacturer-provided or published reference ranges appropriate for the patient population and evaluation of an appropriate number of samples to verify manufacturer claims or published reference ranges (96). For all laboratory-developed tests, the laboratory is responsible for establishing the reference range appropriate for the laboratory patient population (including demographic variables such as age and sex) and specimen types (96).
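For quantitative assays, a reference interval of the kind quoted above (a central 95% of a presumed-healthy population) is often estimated nonparametrically from ranked results of reference specimens; CLSI guidance commonly cites a minimum of roughly 120 reference specimens for this approach. The sketch below is a simplified, illustrative percentile estimate; the simulated donor values are invented, and the method shown is one common option rather than a prescribed procedure.

```python
# Illustrative sketch: a simple nonparametric central 95% reference interval
# from results for a presumed-healthy reference population. Sample values
# below are simulated, not real patient data.

import random

def reference_interval(values, central_fraction=0.95):
    """Return (lower, upper) bounds enclosing approximately the central
    fraction of the sorted reference values (percentile estimate)."""
    data = sorted(values)
    n = len(data)
    tail = (1.0 - central_fraction) / 2.0
    lo_idx = max(0, int(round(tail * (n - 1))))
    hi_idx = min(n - 1, int(round((1.0 - tail) * (n - 1))))
    return data[lo_idx], data[hi_idx]

# Hypothetical reference results from 120 presumed-healthy donors
random.seed(1)
donors = [random.gauss(mu=50.0, sigma=5.0) for _ in range(120)]

low, high = reference_interval(donors)
print(f"Estimated 95% reference interval: {low:.1f} to {high:.1f}")
```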
For molecular genetic tests for heritable diseases and conditions, normal values might refer to normal alleles in targeted mutation analyses or the reference sequences for sequencing assays. Laboratories should be aware that advances in knowledge and testing technology might affect the recognition and documentation of normal sequences and should keep an updated database for the molecular genetic tests they perform.

Quality control procedures. CLIA requires laboratories to determine the calibration and control procedures for nonwaived tests or test systems on the basis of the verification or establishment of performance specifications for the tests (42 CFR §493.1253). Laboratories that perform molecular genetic tests must meet these requirements and, for every molecular genetic test to be introduced for patient testing, should consider the recommended quality control practices.

Documentation of information on clinical validity. Laboratories should ensure that the molecular genetic tests they perform are clinically usable and can be interpreted for specific patient situations (36). Laboratory directors and clinical consultants must ensure laboratory consultations are available for laboratory clients regarding the appropriateness of the tests ordered and interpretation of test results (36). Documentation of available clinical validity information helps laboratories that perform molecular genetic testing to fulfill their responsibilities for consulting with health-care professionals and other users of laboratory services, especially regarding tests that evaluate germline mutations or variants that might be performed only once during a patient's lifetime. Establishing clinical validity is a continuous process and might require extended studies and involvement of many disciplines (38). The recommendations in this report emphasize the responsibility of laboratories that perform molecular genetic testing to document available information from medical and scientific research studies on the intended patient populations to be able to perform testing and provide results interpretation appropriate for specific clinical contexts. Laboratory directors are responsible for using professional judgment to evaluate the results of such studies as applied to newly discovered gene targets, especially those of a predictive or incompletely penetrant nature, in considering potential new tests. The recommendations in this report are consistent with the voluntary professional and accreditation guidelines of ACMG, CLSI, and CAP for molecular genetic testing (12)(13)(14)(93)(94).

# Control Procedures

General quality control practices. The analytic phase of molecular genetic testing often includes the following steps: specimen processing; nucleic acid extraction, preparation, and assessment; enzymatic reaction or amplification; analyte detection; and recording of test results. Laboratories that perform molecular genetic testing must meet the general CLIA requirements for nonwaived testing (42 CFR §493.1256) (36), including the following applicable quality control requirements:
- Laboratories must have control procedures in place to monitor the accuracy and precision of the entire analytic process for each test system.
- The number and type of control materials and the frequency of control procedures must be established using applicable performance specifications verified or established by the laboratory.
- Control procedures must be in place for laboratories to detect immediate errors caused by test system failure, adverse environmental conditions, and operator performance and to monitor the accuracy and precision of test performance over time.
- At least once each day that patient specimens are tested, the laboratory must include the following:
  - At least two control materials of different concentrations for each quantitative procedure
  - A negative control material and a positive control material for each qualitative procedure
  - A negative control material and a control material with graded or titered reactivity, respectively, for each test procedure producing graded or titered results
  - Two control materials, including one that is capable of detecting errors in the extraction process, for each test system that has an extraction phase
  - Two control materials for each molecular amplification procedure and, if reaction inhibition is a substantial source of false-negative results, a control material capable of detecting the inhibition
- If control materials are not available, the laboratory must have an alternative method for detecting immediate errors and monitoring test system performance over time; the performance of the alternative control procedures must be documented.

Specific quality control practices. Specific quality control practices are necessary for ensuring the quality of molecular genetic test performance. The following recommendations include specific guidelines for meeting the general CLIA quality control requirements and additional measures that are more stringent or explicit than the CLIA requirements for monitoring and ensuring the quality of the molecular genetic testing process:
- When possible, include quality control samples that are similar to patient specimens to monitor the quality of all analytic steps of the testing process.
- If a commercial test system provides some but not all of the controls needed for testing, the laboratory must perform and follow the manufacturer recommendations for control testing and should determine the additional control procedures (including the number and types of control materials and the frequency of testing them) necessary for monitoring and ensuring the quality of test performance (36,96).
- Laboratories must have an alternative mechanism capable of monitoring DNA extraction and the preceding analytic steps if 1) purified DNA samples are used as control materials for circumstances in which incorporation of an extraction control is impractical or 2) testing is performed for a rare disease or rare variants for which no control material is available for the extraction phase. For example, testing patient specimens for an internal control sequence (e.g., a housekeeping gene or a spiked-in control sequence) might allow for monitoring of the sample quality and integrity, the presence of inhibitors, and proper amplification (12,93). A positive control, or a control sample capable of monitoring the ability of a test system to detect the nucleic acid targets, should be tested periodically and carried through the extraction step to monitor and verify the performance of the test system.

The CMS Survey Procedures and Interpretive Guidelines for Laboratories and Laboratory Services provides general guidelines for alternative control procedures and encourages laboratories to use multiple mechanisms for ensuring testing quality (96).
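To make the daily control requirements above concrete, the sketch below checks a qualitative amplification run for the presence and expected behavior of a required control set before results are released. The control names, run layout, and pass/fail logic are hypothetical illustrations, not requirements drawn verbatim from CLIA.

```python
# Illustrative sketch: verify that a qualitative amplification run includes
# the expected control samples and that each behaved as intended before
# results are released. Control names and logic are hypothetical.

REQUIRED_CONTROLS = {
    "positive": True,      # must detect the target
    "negative": False,     # must not detect the target
    "extraction": True,    # carried through extraction; must detect the target
    "ntc": False,          # no-template control; any signal suggests contamination
}

def validate_run(control_results: dict) -> list:
    """Return a list of problems; an empty list means controls are acceptable."""
    problems = []
    for name, expected_detected in REQUIRED_CONTROLS.items():
        if name not in control_results:
            problems.append(f"missing required control: {name}")
        elif control_results[name] != expected_detected:
            problems.append(f"unexpected result for control '{name}'")
    return problems

# Hypothetical run in which the no-template control shows signal
run = {"positive": True, "negative": False, "extraction": True, "ntc": True}
issues = validate_run(run)
print(issues or "Run controls acceptable")  # -> ["unexpected result for control 'ntc'"]
```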
Following are examples of procedures that, when applicable, should be followed by laboratories that perform molecular genetic testing:
- Split specimens for testing by another method or in another laboratory.
- Include previously tested patient specimens (both positive and negative) as surrogate controls.
- Test each patient specimen in duplicate.
- Test multiple types of specimens from the same patient (e.g., saliva, urine, or serum).
- Perform serial dilutions of positive specimens to confirm positive reactions.
- Conduct an additional supervisory review of results before release.

Unidirectional workflow for molecular amplification procedures. CLIA requires laboratories to have procedures in place to monitor and minimize contamination during the testing process and to ensure a unidirectional workflow for amplification procedures that are not contained in closed systems (42 CFR §493.1101) (36). In this context, a closed system is a test system designed to be fully integrated and automated to purify, concentrate, amplify, detect, and identify targeted nucleic acid sequences. Such a modular system generates test results directly from unprocessed samples without manipulation or handling by the user; the system does not pose a risk for cross-contamination because amplicon-containing tubes and compartments remain completely closed during and after the testing process. For example, according to CLIA regulations, an FDA-cleared or FDA-approved test system that contains amplification and detection steps in sealed tubes that are never opened or reopened during or after the testing process and that is used as provided by the manufacturer (i.e., without any modifications) is considered a closed system. The requirement for a unidirectional workflow, which includes having separate areas for specimen preparation, amplification, product detection, and reagent preparation, applies to any testing that involves molecular amplification procedures. The following recommendations provide more specific guidelines for laboratories that perform molecular genetic testing for heritable diseases and conditions using amplification procedures that are not in a closed system:
- Include an adequate number of no-template control (NTC) samples, and consider the positions of the NTC and other control samples, to adequately monitor carryover contamination. For testing performed in multiple units, the number and positions of NTC samples also may be used for unambiguous identification of each unit.
- Ensure that specific procedures are in place to monitor the unidirectional workflow and to prevent cross-contamination for tests using successive amplification procedures (e.g., amplification of nucleic acid targets from a previous polymerase chain reaction or nested PCR) if reaction tubes are opened after amplification for subsequent manipulation with the amplicons. Additives that destroy amplicons from previous PCR reactions also may be used.

Laboratories should recognize that methods such as PCR amplification, whole genome amplification, or subcloning to prepare quality control materials might be a substantial source of laboratory contamination. These laboratories should have the following specific procedures to monitor, detect, and prevent cross-contamination:
- Separation of the workflow of generating and preparing synthetic or amplified products for use as control materials from the patient testing process. To prevent laboratory contamination, control materials should be processed and stored separately from the areas for preparation and storage of patient specimens and testing reagents.
- Regular testing of appropriate control samples at a frequency adequate to monitor cross-contamination.

These practices also should be considered by laboratories that purchase amplified materials for use as control materials, calibration materials, or competitors.

# Proficiency Testing and Alternative Performance Assessment

Proficiency testing is an important tool for assessing laboratory competence, evaluating the laboratory testing process, and providing education for the laboratory personnel. For certain analytes and testing specialties for which CLIA regulations specifically require proficiency testing, proficiency testing is provided by private-sector and state-operated programs that are approved by HHS because they meet CLIA standards (42 CFR Part 493). These approved programs also may provide proficiency testing for genetic tests and other tests that are not on the list of regulated analytes and specialties (131). Although the CLIA regulations do not have proficiency testing requirements specific for molecular genetic tests, laboratories that perform genetic tests must comply with the general requirements for alternative performance assessment for any test or analyte not specified as a regulated analyte to, at least twice annually, verify the accuracy of any genetic test or procedure they perform (42 CFR §493.1236). Laboratories can meet this requirement by participating in available proficiency testing programs for the genetic tests they perform (132). The following recommended practices provide more specific and stringent measures than the current CLIA requirements for performance assessment and should be considered by laboratories that perform molecular genetic testing to monitor and evaluate the ongoing quality of the testing they perform:
- Participate in available proficiency testing, at least twice per year, for each molecular genetic test the laboratory performs. Proficiency testing is available for a limited number of molecular genetic tests (e.g., fragile X syndrome, factor V Leiden thrombophilia, and cystic fibrosis) (Appendix C). Laboratories that perform molecular genetic testing should regularly review information on the development of additional proficiency testing programs and ensure participation as new programs become available.
- Test analyte-specific or disease-specific proficiency testing challenges with the laboratory's regular patient testing workload by personnel who routinely perform the tests in the laboratory (as required by CLIA for regulated analytes).
- Evaluate proficiency testing results reported by the proficiency testing program and take steps to investigate and correct disparate results. The corrective actions to be taken after disparate proficiency testing results should include re-evaluation of previous patient test results and, if necessary, of retained patient specimens that were previously tested.

Proficiency testing samples. When possible, proficiency testing samples should resemble patient specimens; at a minimum, samples resembling patient specimens should be used for proficiency testing for the most common genetic tests. When proficiency testing samples are provided in the form of purified DNA, participating laboratories do not perform all the analytic steps that occur during the patient testing process (e.g., nucleic acid extraction and preparation). Such practical limitations should be recognized when assessing proficiency testing performance.
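Because proficiency testing (or an alternative assessment, described below) is expected at least twice per year for each test, laboratories may find it useful to track assessment events against their test menu. The sketch below is a hypothetical illustration of such a check; the test names, dates, and the twice-per-year threshold reflect the recommendation above but are otherwise invented.

```python
# Illustrative sketch: flag molecular genetic tests on the laboratory's menu
# with fewer than two performance assessments (PT or alternative assessment)
# in a calendar year. Test names and event dates are hypothetical.

from collections import defaultdict
from datetime import date

TEST_MENU = ["CFTR mutation panel", "Factor V Leiden", "FMR1 repeat analysis"]

events = [
    ("CFTR mutation panel", date(2009, 3, 10)),
    ("CFTR mutation panel", date(2009, 9, 14)),
    ("Factor V Leiden", date(2009, 4, 2)),
]

def assessments_due(menu, event_log, year, minimum=2):
    """Return tests with fewer than `minimum` assessment events in `year`."""
    counts = defaultdict(int)
    for test_name, event_date in event_log:
        if event_date.year == year:
            counts[test_name] += 1
    return [t for t in menu if counts[t] < minimum]

print(assessments_due(TEST_MENU, events, 2009))
# -> ['Factor V Leiden', 'FMR1 repeat analysis']
```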
Laboratories are encouraged to enroll in proficiency testing programs that examine the entire testing process, including the preanalytic, analytic, and postanalytic phases.

Alternative performance assessment. For molecular genetic tests for which no proficiency testing program is available, alternative performance assessments must be performed at least twice per year to meet the applicable requirements of CLIA and requirements of certain states and accrediting organizations. The following recommendations should be considered when conducting alternative performance assessments:
- Although no data are available to determine whether alternative performance assessments are as effective as proficiency testing, professional guidelines (e.g., from CLSI and CAP) provide information on acceptable alternative performance assessment approaches (14,61). Laboratories that perform molecular genetic tests for which no proficiency testing program is available should adhere to these guidelines.
- Laboratories should ensure that alternative assessments reflect the test methods involved in performing the testing and that the number of samples in each assessment is adequate to verify the accuracy and reliability of test results.
- Ideally, alternative assessments should be performed through interlaboratory exchange (Appendix C) or using externally derived materials, because external quality assessments might detect errors or problems that would not be detected by an internal assessment.
- When interlaboratory exchange or obtaining external materials is not practical (e.g., testing for rare diseases, testing performed by only one laboratory, patented testing, or unstable analytes such as RNA or enzymes), laboratories may consider options such as repeat testing of blinded samples, blind testing of materials with known values, exchange with either a research facility or a laboratory in another country, splitting samples with another instrument or method, or interlaboratory data comparison (96).

Various resources for proficiency testing and external quality assessment (60,133,134) and for facilitating interlaboratory sample exchanges (135,136) are available to help laboratories consider approaches to meeting the proficiency testing and alternative performance assessment needs of their molecular genetic testing (Appendix C).

# The Postanalytic Testing Phase

Test reports. Laboratories are required by CLIA to include interpretation of test results on test reports (if applicable). However, results interpretation should be included in all test reports of molecular genetic testing for heritable diseases and conditions. Laboratories should provide information on interpretation of test results in a clinically relevant manner that is relative to the purpose for the testing and should explain how technical limitations might affect the clinical use of the test results. When appropriate and necessary, test results can be explained in reference to family members (e.g., mutations previously detected in a family member that were used for selection of the test method) to ensure appropriate interpretation of results and understanding of their implications by the persons receiving or using the test results. Test reports also should include the following elements, when applicable:
- References to literature (if applicable)
- Recommendation for genetics consultation (when appropriate)
A genetics consultation might encompass genetic services (including genetic counseling) provided by trained, qualified genetics professionals (e.g., genetic counselors, clinical geneticists, or other qualified professionals) for health-care providers, patients, or family members at risk for the condition.
- Implications of test results for relatives or family members who might benefit from the information (if applicable)
- Statement indicating that the test results and interpretation are based on current knowledge and technology

Updates and revisions. CLIA requires laboratories to provide pertinent updates on testing information to clients when changes occur that affect the test results or interpretation of test results (42 CFR §493.1291). Because the field of molecular genetic testing is evolving rapidly, laboratories should consider the following:
- Keep an up-to-date database for the molecular genetic tests performed in the laboratory, and provide updates to users when knowledge advancement affects performance specifications, interpretation of test results, or both.
- Provide a revised test report if the interpretation of the original analytic result changes because of advances in knowledge or testing technology. Indications for providing revised test reports include the following:
  - A better interpretation is available on a previously detected variant.
  - Interpretation of previous test results has changed (e.g., a previously determined mutation is later recognized as a benign variant or polymorphism, or vice versa).

Molecular genetic tests for germline mutations or variants or for other heritable conditions often are one-time tests, with results that can have lifetime implications for the patients and family members. Decisions regarding health-care management should be made with consideration of changes or improvements in the interpretation of genetic test results as testing technology and knowledge advance. However, practical limitations, such as the logistical difficulty of recontacting previous users of laboratory services, also should be considered. Laboratories that perform molecular genetic testing for heritable diseases and conditions should have procedures in place that adhere to accepted professional practice guidelines regarding the duty to recontact previous users and should make a good-faith effort to provide updates and revisions to previous test reports, when appropriate (137). When establishing these procedures, laboratories also might consider the retention time frame of their molecular genetic test reports.

Signatures. Review of molecular genetic test reports by trained, qualified personnel before reports are released is critical. The review should be appropriately documented with written or electronic signatures or by other methods. Laboratories should determine which persons should review and sign the test reports in accordance with personnel competency and responsibilities.

Format, style, media, and language. Laboratories should assess the needs of laboratory users when determining the format, style, media, and language of molecular genetic test reports. The language used, which includes terminology and nomenclature, should be understandable by nongeneticist health professionals and other specific users of the test results. This practice should be part of the laboratory quality management policies.
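As one way of making report completeness and structure systematic, a laboratory information system can refuse to render a report that lacks a required element. The sketch below is a hypothetical illustration of that idea; the field names and rendering format are invented and do not represent a mandated report layout.

```python
# Illustrative sketch: assemble a molecular genetic test report from required
# elements and fail fast if any element is missing. Field names are
# hypothetical; actual report content expectations are described in the text.

REQUIRED_FIELDS = [
    "patient_identifier", "test_performed", "result",
    "interpretation", "methodology_and_limitations",
    "knowledge_current_as_of",
]

def render_report(fields: dict) -> str:
    missing = [f for f in REQUIRED_FIELDS if not fields.get(f)]
    if missing:
        raise ValueError(f"Report is incomplete; missing: {missing}")
    lines = [f"{name.replace('_', ' ').title()}: {fields[name]}"
             for name in REQUIRED_FIELDS]
    if fields.get("genetics_consultation_recommended"):
        lines.append("Recommendation: genetics consultation is advised.")
    return "\n".join(lines)

report = render_report({
    "patient_identifier": "Specimen 2009-0421",
    "test_performed": "CFTR targeted mutation panel",
    "result": "No mutation detected in the targeted panel",
    "interpretation": "Residual risk remains; see methodology and limitations.",
    "methodology_and_limitations": "Panel does not detect rare variants.",
    "knowledge_current_as_of": "June 2009",
    "genetics_consultation_recommended": True,
})
print(report)
```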
Test reports should include all necessary information, be easy to understand, and be structured in a way that encourages users to read the entire report rather than just a positive or negative indication. Following the format recommended in accepted practice guidelines should help ensure that the reports are structured effectively (12)(13)(14)(49)(93)(94)(100).

# Retention of Reports, Records, and Tested Specimens

Reports. CLIA requires laboratories to retain or have the ability to retrieve a copy of an original test report (including final, preliminary, and corrected reports) for at least 2 years after the date of reporting and to retain pathology test reports for at least 10 years after the date of reporting (42 CFR §493.1105). A longer retention time frame than required by CLIA is warranted for reports of molecular genetic tests for heritable diseases and conditions; these test reports should be retained for at least 25 years after the date the results are reported. Retaining molecular genetic test reports for a longer time frame is recommended because the results can have long-term, often lifetime, implications for patients and their families, and future generations might need the information to make health-related decisions. In addition, advances in testing technology and increased knowledge of disease processes could change the interpretation of the original test results, enable improved interpretation of test results, or permit future retesting with greater sensitivity and accuracy. Laboratories need the ability to retrieve previous test reports, which are valuable resources for conducting quality assessment activities, helping patients and family members make health decisions, and managing the health care of the patient and family members. As laboratories that perform molecular genetic testing for heritable diseases and conditions review and update policies and procedures for report retention, they should consider the financial ramifications of the policies, as well as technology and space concerns. Laboratories may consider retaining test reports electronically, on microfilm, or by other methods but must ensure that all of the information on the original reports is retained and that copies (whether electronic or hard copies) of the original reports can be retrieved. The laboratory policies and procedures for test report retention must comply with applicable state laws and other requirements (e.g., of accrediting organizations if the laboratory is accredited) and should follow practice guidelines developed by recognized professional or standard-setting organizations. If state regulations require retention of genetic test reports for >25 years after the date of results reporting, laboratories must comply. Laboratories also might decide that retaining reports for >25 years is necessary to accommodate patient testing needs and ongoing quality assessment activities.

Records. CLIA requires laboratories to retain records of patient testing, including test requests and authorizations, test procedures, analytic systems records, records of test system performance specifications, proficiency testing records, and quality system assessment records, for a minimum of 2 years (42 CFR §493.1105); these requirements apply to molecular genetic testing. Retention policies and procedures must also comply with applicable state laws and other requirements (e.g., of accrediting organizations if the laboratory is accredited).
Laboratories should ensure that electronic records are accessible.

Tested specimens. CLIA requires laboratories to establish and follow written policies and procedures that ensure positive identification and optimum integrity of patient specimens from the time of collection or receipt in the laboratory through completion of testing and reporting of test results (42 CFR §493.1232). Depending on sample stability, technology, space, and cost, tested specimens for molecular genetic tests for heritable conditions should be retained as long as possible after the completion of testing and reporting of results. At a minimum, tested patient specimens that are stable should be retained until the next proficiency testing or the next alternative performance assessment to allow for identification of problems in patient testing and for corrective action to be taken. Tested specimens also might be needed for testing of current or future family members and for more definitive diagnosis as technology and knowledge evolve. A laboratory specimen retention policy should consider the following factors:
- Type of specimens retained (e.g., whole blood or DNA samples)
- Analytes tested (e.g., DNA, RNA, or both)
- Test results or the genotypes detected. (If only abnormal specimens are retained, identifying false-negative results at a later date will be difficult. This practice also might introduce bias if a preponderance of samples with abnormal test results is used to verify or establish performance specifications for future testing.)
- Test volume
- New technologies that might not produce residual specimens

The laboratory director is responsible for ensuring that the laboratory policies and procedures for specimen retention comply with applicable federal, state, and local requirements (including laboratory accreditation requirements, if applicable) and are consistent with the laboratory quality assurance and quality assessment activities. In circumstances in which required patient consent is not provided with the test request, the laboratory should 1) notify the test requestor and 2) determine the time frame after which the test request might be rejected and the specimen discarded because of specimen degradation or deterioration. Laboratory specimen retention procedures should be consistent with patient decisions.

# Laboratory Responsibilities Regarding Authorized Persons

CLIA regulations define an authorized person as a person authorized by state laws or regulations to order tests, receive test results, or both. Laboratories must have a written or an electronic test request from an authorized person (42 CFR §493.1241). Laboratories may release test results only to authorized persons, the person responsible for using the test results (if applicable), and the laboratory that initially requested the test (42 CFR §493.1291). Laboratories that perform molecular genetic testing must ensure compliance with these requirements in their policies and procedures for receiving test requests and reporting test results and should ensure that qualified laboratory personnel with appropriate experience and expertise are available to assist authorized persons with test requests and interpretation of test results. Laboratories must comply with applicable federal, state, and local requirements regarding whether genetic tests may be offered directly to consumers and should consult accepted professional guidelines for additional information.
The following recommendations will help laboratories meet CLIA requirements (42 CFR §§493.1241 and 493.1291), particularly those related to genetic testing offered directly to consumers:
- The laboratory that initially accepts a test request (regardless of whether the laboratory performs the testing on-site or refers the patient specimens to another laboratory) is responsible for verifying that the test requestor is authorized by state laws and regulations to do so. Laboratories that receive patient specimens from multiple states or have specimen collection sites in multiple states should keep an updated copy of the requirements of each state regarding authorized persons and review test requests accordingly.
- Although referral laboratories might be unable to verify that the person submitting the original test request qualifies as an authorized person, the test results may only be released to persons authorized by state laws and regulations to receive the results, the persons responsible for using the test results, and the referring laboratory.

# Ensuring Confidentiality of Patient Information

CLIA requires laboratories to ensure confidentiality of patient information throughout all phases of the testing process that are under laboratory control (42 CFR §493.1231). Laboratories should follow more specific requirements and comply with additional guidelines (e.g., the Health Insurance Portability and Accountability Act of 1996 [HIPAA] Privacy Rule, state requirements, accreditation standards, and professional guidelines) to establish procedures and protocols to protect the confidentiality of patient information, including information related to genetic testing. Laboratories that perform molecular genetic testing should establish and follow procedures and protocols that include defined responsibilities of all employees to ensure appropriate access, documentation, storage, release, and transfer of confidential information and prohibit unauthorized or unnecessary access or disclosure.

# Information Regarding Family Members

In certain circumstances, information about family members is needed for test performance or should be included in test reports to ensure appropriate interpretation of test results. Therefore, laboratories must have procedures and systems in place to ensure confidentiality of all patient information, including that of family members, in all testing procedures and reports, in compliance with CLIA requirements and other applicable federal, state, and local regulations.

# Requests for Test Results to Assist with Providing Health Care for a Family Member

When a health-care provider requests the genetic test information of a patient to assist with providing care for a family member of the patient, the following practices are recommended:
- Requests should be handled following established laboratory procedures regarding release and transfer of confidential patient information.
- Laboratories may release patient test information only to the authorized person ordering the test, the persons responsible for using the test results (e.g., health-care providers of the patient designated by the authorized person to receive test results), and the laboratory that initially requested the test. If a health-care provider who provides care for a family member of the patient is authorized to request patient test information, the laboratory should request the patient's authorization before releasing the patient's genetic test results.
- When patient consent is required for testing, the consent form should include the laboratory confidentiality policies and procedures and describe situations in which test results might be requested by health-care providers caring for family members of the patient.
- Laboratory directors should be responsible for determining and approving circumstances in which access to confidential patient information is appropriate, as well as when, how, and to whom information is to be released, in compliance with federal, state, and local requirements.

The HIPAA Privacy Rule and CLIA regulations are federal regulations intended to provide minimum standards for ensuring confidentiality of patient information; states or localities might have higher standards. Although the HIPAA Privacy Rule allows health-care providers that are covered entities (i.e., health-care providers that conduct certain transactions in electronic form, health-care clearinghouses, and health plans) to use or disclose protected health information for treatment purposes without patient authorization and to share protected health information to consult with other providers to treat a different patient or to refer a patient, the regulation indicates that states or institutions may implement stricter standards to protect the privacy of patients and the confidentiality of patient information (138). Laboratories that perform molecular genetic testing must comply with applicable requirements and follow professional practice guidelines in establishing policies and procedures to ensure confidentiality of patient information, including molecular genetic testing information and test results.

# Personnel Qualifications, Responsibilities, and Competency Assessments

Laboratory Director Qualifications and Responsibilities

Qualifications. CLIA requires directors of laboratories that perform high-complexity testing to meet at least one of the following sets of qualifications (42 CFR §493.1443):
- Be a doctor of medicine or a doctor of osteopathy and have board certification in anatomic or clinical pathology or both
- Be a doctor of medicine, doctor of osteopathy, or doctor of podiatric medicine and have at least 1 year of laboratory training during residency or at least 2 years of experience directing or supervising high-complexity testing
- Have an earned doctoral degree in a chemical, physical, biological, or clinical laboratory science from an accredited institution and current certification by a board approved by HHS

Directors of laboratories that perform molecular genetic testing for heritable diseases and conditions must meet these qualification requirements. Because CLIA requirements are minimum qualifications, laboratories that perform molecular genetic testing for heritable diseases and conditions should evaluate the tests they perform to determine whether additional knowledge, training, or expertise is necessary for fulfilling the responsibilities of laboratory director.

Responsibilities. CLIA requires directors of laboratories that perform high-complexity testing to be responsible for the overall operation and administration of the laboratory (42 CFR §493.1445).

Technical Supervisor Qualifications and Responsibilities

Qualifications. Technical supervisors of laboratories that perform molecular genetic testing for heritable diseases and conditions should have the training, experience, and expertise needed to provide technical supervision for laboratories that perform these tests.
Certain laboratories that perform molecular genetic testing for heritable diseases and conditions might have technical supervisors who meet the applicable CLIA qualification requirements for the high-complexity testing their laboratories perform but do not meet the recommended qualifications in this section. These recommended qualifications are not regulatory requirements and are not intended to restrict access to certain molecular genetic tests; rather, they should be considered part of recommended laboratory practices for ensuring the quality of molecular genetic testing for heritable diseases and conditions. However, because CLIA qualification requirements are intended to be minimum standards, laboratories should assess the tests they perform to determine whether additional qualifications are needed for their technical supervisors to ensure quality throughout the testing process. These recommended qualifications should apply to all high-complexity molecular genetic tests for heritable diseases and conditions.

Responsibilities. CLIA requires technical supervisors of laboratories that perform high-complexity testing to be responsible for the technical and scientific oversight of the laboratories (42 CFR §493.1451).

Testing Personnel Responsibilities. CLIA requires persons who perform high-complexity testing to follow laboratory procedures and protocols for test performance, quality control, results reporting, documentation, and problem identification and correction (42 CFR §493.1495). Personnel who perform molecular genetic testing for heritable diseases and conditions must meet these requirements.

# Personnel Competency Assessment

CLIA requires laboratories to establish and follow written policies and procedures to assess employee competency and, if applicable, consultant competency (42 CFR §493.1235). CLIA requirements for laboratory director responsibilities (42 CFR §493.1445) specify that laboratory directors must ensure that policies and procedures are established for monitoring and ensuring the competency of testing personnel and for identifying needs for remedial training or continuing education to improve skills. Technical supervisors are responsible for implementing the personnel competency assessment policies and procedures, including evaluating and ensuring competency of testing personnel (42 CFR §493.1451). Laboratories that perform molecular genetic testing for heritable diseases and conditions must meet these general personnel competency assessment requirements. Laboratories also should follow the applicable CMS guidelines to establish and implement policies and procedures specific for assessing and ensuring the competency of all types of laboratory personnel, including technical supervisors, clinical consultants, general supervisors, and testing personnel, in performing duties and responsibilities (96). For example, the performance of testing personnel must be evaluated and documented at least semiannually during the first year a person tests patient specimens. Thereafter, evaluations must be performed at least annually; however, if test methodology or instrumentation changes, performance must be re-evaluated to include the use of the new test methodology or instrumentation before testing personnel can report patient test results. Personnel competency assessments should identify training needs and ensure that persons responsible for performance of molecular genetic testing receive regular in-service training and education appropriate for the services performed.
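The evaluation schedule described above (semiannual during the first year a person tests patient specimens, annual thereafter) lends itself to a simple due-date calculation. The sketch below is a hypothetical illustration of that scheduling logic; the dates are invented, and the month arithmetic is deliberately simplified.

```python
# Illustrative sketch: compute the next competency-evaluation due date under a
# semiannual-first-year / annual-thereafter schedule. Dates are hypothetical,
# and month arithmetic is simplified for illustration.

from datetime import date

def add_months(d: date, months: int) -> date:
    """Shift a date forward by whole months, clamping the day to 28
    to avoid invalid dates such as February 30."""
    month_index = d.month - 1 + months
    year = d.year + month_index // 12
    month = month_index % 12 + 1
    return date(year, month, min(d.day, 28))

def next_evaluation_due(start_testing: date, last_evaluation: date) -> date:
    """Semiannual evaluations within the first year of patient testing;
    annual evaluations thereafter."""
    first_year = (last_evaluation - start_testing).days < 365
    return add_months(last_evaluation, 6 if first_year else 12)

started = date(2009, 1, 5)
print(next_evaluation_due(started, date(2009, 1, 5)))  # -> 2009-07-05 (semiannual)
print(next_evaluation_due(started, date(2010, 6, 1)))  # -> 2011-06-01 (annual)
```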
# Considerations Before Introducing Molecular Genetic Testing or Offering New Molecular Genetic Tests

Recommendations described in this report should be considered, in addition to appropriate professional guidelines and recommendations, when planning and preparing for the introduction of molecular genetic testing or offering new molecular genetic tests. The following scenarios should be considered during the planning stage:
- Introducing a new molecular genetic test that has not been offered in any laboratory
- Introducing a genetic test that previously has been referred to another laboratory but will be performed in-house
- Introducing an additional genetic test that can complement a molecular genetic test that is already performed for patient testing

These scenarios present different planning concerns, including needs and requirements for training and competency of laboratory personnel, laboratory facilities and equipment, selection of test methods, development of procedure manuals, establishment or verification of performance specifications, and personnel responsibilities. In addition, the following factors should be assessed:
- Needs and demands for the new test, which can be assessed by consulting with ordering physicians and other potential users of laboratory services and by conducting other market analyses
- Intellectual property or licensing concerns that might result in restricted use, increased costs, or both for certain genetic tests

# Quality Management System Approach for Molecular Genetic Testing

The quality management system (QMS) approach provides a framework for managing and monitoring activities to address quality standards and achieve organizational goals, with a focus on user needs (41,109). QMS has been the basis for many international quality standards, such as the International Organization for Standardization (ISO) standards ISO 15189, ISO 17025, and ISO 9001 (91,139,140). These international QMS standards overlap with certain CLIA requirements but are distinct from CLIA regulations. Because QMS is not yet a widely adopted approach in the United States, laboratories that perform molecular genetic testing might not be familiar with QMS implementation in current practice. The QMS approach has been described in several CLSI guidelines (41,109). New York state CLEP and CAP have included QMS concepts in their general laboratory standards (15,102), and CAP and the American Association for Laboratory Accreditation have begun to provide laboratory accreditation to ISO 15189 (141,142). Laboratories that perform molecular genetic testing should monitor QMS development, because implementing the QMS approach could help laboratories accept international test referrals and improve quality management of testing.

# Conclusion

The recommendations in this report are intended to serve as guidelines for considering and implementing good laboratory practices to 1) improve quality and health-care outcomes related to molecular genetic testing for heritable diseases and conditions and 2) enhance oversight and quality assurance practices for molecular genetic testing under the CLIA regulatory framework. The report can be adapted for use in different settings where molecular genetic testing is conducted or evaluated. Continual monitoring of the practice and test performance of molecular genetic tests is needed to evaluate the effectiveness of these recommendations and to develop additional guidelines for good laboratory practices for genetic testing, which will ultimately improve public health.
# Massachusetts †
- A statement of the purpose of the test
- A statement that before signing the consent form, the consenting person discussed with the medical practitioner ordering the test the reliability of positive or negative test results and the level of certainty that a positive test result for that disease or condition serves as a predictor of such disease
- A statement that the consenting person was informed about the availability and importance of genetic counseling and provided with written information identifying a genetic counselor or medical geneticist from whom the consenting person might obtain such counseling
- A general description of each specific disease or condition tested for
- The persons to whom the test results may be disclosed

# Michigan §
- The nature and purpose of the presymptomatic or predictive genetic test
- The effectiveness and limitations of the presymptomatic or predictive genetic test
- The implications of taking the presymptomatic or predictive genetic test, including, but not limited to, the medical risks and benefits
- The future uses of the sample taken from the test participant to conduct the presymptomatic or predictive genetic test and the information obtained from the presymptomatic or predictive genetic test
- The meaning of the presymptomatic or predictive genetic test results and the procedure for providing notice of the results to the test participant
- Who will have access to the sample taken from the test participant to conduct the presymptomatic or predictive genetic test and the information obtained from the presymptomatic or predictive genetic test, and the test participant's right to confidential treatment of the sample and the information

# Nebraska ¶
- The nature and purpose of the presymptomatic or predictive genetic test
- The effectiveness and limitations of the presymptomatic or predictive genetic test
- The implications of taking the presymptomatic or predictive genetic test, including the medical risks and benefits
- The future uses of the sample taken to conduct the presymptomatic or predictive genetic test and the genetic information obtained from the presymptomatic or predictive genetic test
- The meaning of the presymptomatic or predictive genetic test results and the procedure for providing notice of the results to the patient
- Who will have access to the sample taken to conduct the presymptomatic or predictive genetic test and the genetic information obtained from the presymptomatic or predictive genetic test and the patient's right to confidential treatment of the sample and the genetic information

# New York
- A statement that a positive test result is an indication that the person might be predisposed to or have the specific disease or condition being tested for and might consider additional independent testing, consult a personal physician, or pursue genetic counseling
- A general description of each specific disease or condition being tested for
- The level of certainty that a positive test result for the disease or condition serves as a predictor of such disease. (If no level of certainty has been established, this may be disregarded.)
- The name of the person or categories of persons or organizations to whom the test results may be disclosed
- A statement that no tests other than those authorized will be performed on the biological sample and that the sample will be destroyed at the end of the testing process or not more than 60 days after the sample was taken, unless a longer period of retention is expressly authorized in the consent

# South Dakota ††
- The nature and purpose of the test
- The effectiveness and limitations of the test
- The implications of taking the test, including the medical risks and benefits
- The future uses of the sample taken from the person tested to conduct the test and the information obtained from the test
- The meaning of the test results and the procedure for providing notice of the results to the person tested
- A list of who will have access to the sample taken from the person tested and the information obtained from the test and the person's right to confidential treatment of the sample and the information

# Appendix A: Terms and Abbreviations Used in This Report

ABMG: American Board of Medical Genetics
ABN: Advance beneficiary notice
Accuracy: Closeness of the agreement between the result of a measurement and a true value of the measurand
ACMG: American College of Medical Genetics
Allele: One version of a gene at a given location (locus) along a chromosome
AMP: Association for Molecular Pathology
Amplicon: Piece of nucleic acid formed as the product of molecular amplification
Amplification: In vitro enzymatic replication of a target nucleic acid (e.g., polymerase chain reaction)
ASR: Analyte-specific reagent
Bidirectional sequencing: A method used to determine the positions of a selected nucleotide base in a target region on both strands of a denatured duplex nucleic acid polymer
Family history: The genetic relationships and medical history of a family; also referred to as a pedigree when represented in diagram form using standardized symbols and terminology
FDA: Food and Drug Administration
Founder effect: The presence of a gene mutation in high frequency in a specific population that arises because the gene mutation was present in a single ancestor or small number of ancestors in the founding population
Genetics: The study of inheritance patterns of specific traits
Genome: The complete genetic content of an organism
Genotype: The genetic constitution of an organism or cell; also refers to the specific set of alleles inherited at a locus
Germline mutation: The presence of an altered gene within the egg or sperm (germ cell), such that the altered gene can be passed to subsequent generations
Informed consent process: For molecular genetic testing, the process by which a person voluntarily confirms the willingness to participate in a particular test, after having been informed of all aspects of the test that are relevant to the decision to participate
LOD: Lower limit of detection
Modifiers: Genetic or environmental factors that might affect the expressivity (the variability of signs or symptoms that occur with a phenotype) of a genetic alteration
Mutation: An alteration in a gene, which might cause a disease, be a benign alteration, or result in a normal variant
Newborn screening: Testing conducted within days of birth to identify infants at increased risk for specific genetic disorders, allowing education and counseling for parents and treatment for patients to be initiated as soon as possible
NTC: No-template control
Pedigree: A diagram using standard symbols and terminology to indicate the genetic relationships and medical history of a family
Penetrance: The proportion of persons with a mutation causing a particular disorder who exhibit clinical symptoms of the disorder
Personalized medicine: Approach to medicine involving use of genomic and molecular data to better target health care, facilitate discovery and clinical testing of new products, and determine patient risk for a particular disease or condition
Phenotype: The observable physical and biochemical traits resulting from the expression of a gene; the clinical presentation of a person with a particular genotype
Polymerase chain reaction (PCR): A DNA amplification procedure that produces millions of copies of a short segment of DNA through repeated cycles of 1) denaturation, 2) annealing, and 3) elongation; a very common procedure in molecular genetic testing used to generate a sufficient quantity of DNA to perform a test (e.g., sequence analysis or mutation scanning) or as a test itself (e.g., allele-specific amplification or trinucleotide repeat quantification)
Positive predictive value: The likelihood that a person with a positive test result actually has a particular gene, is affected by the gene, or will develop the disease
Precision: Closeness of agreement between independent test results obtained under stipulated conditions
Private mutation: A rare, disease-causing mutation occurring in a few families
Proficiency testing: An external quality assessment program in which samples are periodically sent to testing sites for analysis
Quality assessment: A group of activities to monitor and evaluate the entire testing process; used to help ensure that test results are reliable, improve the testing process, and promote good quality testing practices
Quality control: Measures taken to detect, reduce, and correct deficiencies in a laboratory's internal analytical process prior to the release of patient results and to improve the quality of the results reported by the laboratory
Reagent: A substance that produces a chemical or biological reaction with a patient specimen, allowing detection or measurement of the analyte for which the test is designed
Reference interval: Interval between and including the lower reference limit through the upper reference limit of the reference population (e.g., 95% of persons presumed to be healthy)
Reportable range: The range of test values over which the relationship between the instrument, kit, or measurement response of the test system is shown to be valid
RNA: Ribonucleic acid
SACGHS: Secretary's Advisory Committee on Genetics, Health, and Society
Sequencing: A procedure used to determine the order of nucleotides (base sequence) in a DNA or RNA molecule or the order of amino acids in a protein
Targeted mutation analysis: Testing for one or more specific mutations
Total testing process: Series of activities or workflow for performing testing; includes three major phases: preanalytic, analytic, and postanalytic
Unidirectional workflow: The manner in which testing personnel and patient specimens move through the molecular amplification testing process to prevent cross-contamination

# Interlaboratory Exchange

The CAP registry service is an Internet-based service that facilitates contact among genetic testing laboratories that perform less frequently performed genetic tests. Laboratories enroll online; when CAP identifies three laboratories that are testing for the same genetic disorder, CAP facilitates communication for making exchange arrangements.
The CAP/ACMG Biochemical and Molecular Genetics Committee reviews the results and procedures and makes comments in the Molecular Genetics Survey's Participant Summary Report regarding the overall performance.

# Association for Molecular Pathology (AMP) Interlaboratory Exchange

AMP facilitates sample exchanges between laboratories through the AMP listserv (CHAMP). Laboratories seeking others to evaluate performance on specific analytes contact one another via the listserv. Laboratories are responsible for establishing testing parameters and facilitating exchange of specimens and test results.

# Goal and Objectives

The goal of this report is to improve the quality and usefulness of laboratory services for genetic testing to achieve better health outcomes for the public. Upon completion of this educational activity, the reader should be able to 1) describe the recommended good laboratory practices for each of the three phases of the molecular genetic testing process; 2) describe qualifications, responsibilities, and competency of laboratory personnel; 3) describe planning for introducing molecular genetic testing; and 4) describe how to ensure the confidentiality of patient information in molecular genetic testing for heritable diseases and conditions. To receive continuing education credit, please answer all of the following questions.

1. Under the Clinical Laboratory Improvement Amendments (CLIA) regulations, laboratories performing molecular genetic testing that they have developed are subject to… (Indicate all that apply.)
A. the general quality systems requirements for nonwaived testing.
B. personnel requirements for high-complexity testing.
C. specialty requirements for molecular genetic testing.

For molecular amplification procedures, which of the following is considered an effective mechanism for monitoring and detecting cross-contamination of patient specimens?
A. Inclusion of a positive control that represents the genotype to be detected with each run of patient specimens.
B. Inclusion of a normal sample as the negative control with each run of patient specimens.
C. Inclusion of a no-template control sample that contains all components of the amplification reaction except nucleic acid templates with each run of patient specimens.
D. Inclusion of a spiked-in internal control in each amplification sample.

6. Test reports of molecular genetic testing for heritable diseases or conditions should be retained for the longest possible time frame for all the following reasons except…
A. Test results have long-term implications for the patients.
B. Test results have implications for patients' families and future generations.
C. Advances in knowledge and understanding of disease processes might lead to improved interpretation of test results.
D. Laboratories need to access previous test reports to conduct quality assessment activities.
E. Laboratories must protect the confidentiality of patient information.

A molecular genetic test report should…
A. be understood by geneticists only.
B. be understandable by nongeneticist health professionals and other authorized users of the test results.
C. always be written in English.
D. indicate "test result is negative" if no mutation is detected so that the test result can be easily understood.

The director of a laboratory performing molecular genetic testing should… (Indicate all that apply.)
A. ensure effective policies and procedures are in place for monitoring and maintaining the competency of the laboratory personnel.
be able to perform a molecular genetic test better than anyone else in the laboratory. C. ensure available information needed to interpret test results for a patient is documented for each molecular genetic test the laboratory performs. 12. When a laboratory uses a purified DNA sample extracted from a cell line containing a rare mutation as a positive control in patient testing, which of the following is considered appropriate for monitoring the DNA extraction step of the testing process? (Indicate all that apply.) A. Testing patient samples for a housekeeping gene to determine specimen quality and integrity each time patient testing is performed. B. Testing patient samples for a spiked-in control sequence to assess the presence of inhibitors each time patient testing is performed. C. Testing patient samples for a housekeeping gene to determine specimen quality and integrity once each day patient testing is performed. D. Testing patient samples for a spiked-in control sequence to assess the presence of inhibitors once each day patient testing is performed. # Which best describes your professional
# Introduction

Genetic testing encompasses a broad range of laboratory tests performed to analyze DNA, RNA, chromosomes, proteins, and certain metabolites using biochemical, cytogenetic, or molecular methods or a combination of these methods. In 1992, the regulations for the Clinical Laboratory Improvement Amendments of 1988 (CLIA) were published and began to be implemented. Since that time, advances in scientific research and technology have led to a substantial increase both in the health conditions for which genetic defects or variations can be detected with molecular methods and in the spectrum of the molecular testing methods (1). As the number of molecular genetic tests performed for patient testing has steadily increased, so has the number of laboratories that perform molecular genetic testing for heritable diseases and conditions (2,3). With increasing use in clinical and public health practices, molecular genetic testing affects persons and their families in every life stage by contributing to disease diagnosis, prediction of future disease risk, optimization of treatment, prevention of adverse drug response, and health assessment and management. For example, preconception testing for cystic fibrosis and other heritable diseases has become standard practice for the care of women who are either pregnant or considering pregnancy and are at risk for giving birth to an infant with one of these conditions (4). DNA-based diagnostic testing often is crucial for confirming presumptive results from newborn screening tests, which are performed for approximately 95% of the 4 million infants born in the United States each year (5,6). In addition, pharmacogenetic and pharmacogenomic tests, which identify individual variations in single-nucleotide polymorphisms, haplotype markers, or alterations in gene expression, are considered essential for personalized medicine, which involves customizing medical care on the basis of genetic information (7). The expanding field of molecular genetic testing has prompted measures both in the United States and worldwide to assess factors that affect the quality of performance and delivery of testing services, the adequacy of oversight and quality assurance mechanisms, and the areas of laboratory practice in need of improvement. Problems that have been reported and that could affect patient testing outcomes include inadequate establishment or verification of test performance specifications, inadequate personnel training or qualifications, inappropriate test selection and specimen submission, inadequate quality assurance practices, problems in proficiency testing, misunderstanding or misinterpretation of test results, and other concerns associated with one or more phases of the testing process (8)(9)(10)(11). Under CLIA, laboratory testing is categorized as waived testing or nonwaived testing (which includes tests of moderate and high complexity) based on the level of testing complexity. Laboratories that perform molecular genetic testing are subject to general CLIA requirements for nonwaived testing and CLIA personnel requirements for high-complexity testing; no molecular genetic test has been categorized as waived or moderate complexity.
Many laboratories also adhere to professional practice guidelines and voluntary or accreditation standards, such as those developed by the American College of Medical Genetics (ACMG), the Clinical and Laboratory Standards Institute (CLSI), and the College of American Pathologists (CAP), which provide specific guidance for molecular genetic testing (12)(13)(14). In addition, certain state programs, such as the New York State Clinical Laboratory Evaluation Program (CLEP), have specific requirements that apply to genetic testing laboratories in their purview (15). However, no specific requirements exist at the federal level for laboratory performance of molecular genetic testing for heritable diseases and conditions. Since 1997, CDC and the Centers for Medicare & Medicaid Services (CMS) have worked with other federal agencies, professional organizations, standard-setting organizations, the Clinical Laboratory Improvement Advisory Committee (CLIAC), and other advisory committees to promote the quality of genetic testing and improve the appropriate use of genetic tests in health care. To enhance the oversight of genetic testing under CLIA, CMS developed a multifaceted action plan aimed at providing guidelines, including the good laboratory practice recommendations in this report, rather than prescriptive regulations (16). Many of the activities in the action plan have been implemented or are in progress, including 1) providing CMS and state CLIA surveyors with guidelines and technical training on assessing genetic testing laboratories for compliance with applicable CLIA requirements, 2) developing educational materials on CLIA compliance for genetic testing laboratories, 3) collecting data on laboratory performance in genetic testing, 4) working with CLIAC and standard-setting organizations on oversight concerns, and 5) collaborating with CDC and the Food and Drug Administration (FDA) on ongoing oversight activities (16). This plan also was supported by the Secretary's Advisory Committee on Genetics, Health, and Society (SACGHS) in its 2008 report providing recommendations regarding future oversight of genetic testing (1). The purposes of this report are to 1) highlight areas of molecular genetic testing that have been recognized by CLIAC as needing specific guidelines for compliance with existing CLIA requirements or needing quality assurance measures in addition to CLIA requirements and 2) provide CLIAC recommendations for good laboratory practices to ensure the quality of molecular genetic testing for heritable diseases and conditions. These recommendations are intended primarily for genetic testing that is conducted to diagnose, prevent, or treat disease or for health assessment purposes. The recommendations are distinct from the good laboratory practice regulations for nonclinical laboratory studies under FDA oversight (21 CFR Part 58) (17). The recommended laboratory practices provide guidelines for ensuring the quality of the testing process (including the preanalytic, analytic, and postanalytic phases of molecular genetic testing), laboratory responsibilities regarding authorized persons, confidentiality of patient information, and personnel competency. The recommendations also address factors to consider before introducing molecular genetic testing or offering new molecular genetic tests and the quality management system approach in molecular genetic testing.
Implementation of the recommendations in laboratories that perform molecular genetic testing for heritable diseases and conditions and an understanding of these recommendations by users of laboratory services are expected to prevent or reduce errors and problems related to test selection and requests, specimen submission, test performance, and reporting and interpretation of results, leading to improved use of molecular genetic laboratory services, better health outcomes for patients, and, in many instances, better health outcomes for families of patients. In future reports, recommendations will be provided for good laboratory practices focusing on other areas of genetic testing, such as biochemical genetic testing, molecular cytogenetic testing, and somatic genetic testing.

# Background

With the completion of the human genome project, discoveries linking genetic mutations or variations to specific diseases and biologic processes are frequently reported (18). The rapid progress in biomedical research, accompanied by advances in laboratory technology, has led to increased opportunities for development and implementation of new molecular genetic tests. For example, the number of heritable diseases and conditions for which clinical genetic tests are available more than tripled in 8 years, from 423 diseases in November 2000 to approximately 1,300 diseases and conditions in October 2008 (2,19). Molecular genetic testing is performed not only to detect or confirm rare genetic diseases or heritable conditions (20) but also to detect mutations or genetic variations associated with more common and complex conditions such as cancer (21,22), coagulation disorders (23), cardiovascular diseases (24), and diabetes (25). As the rapid pace of genetic research results in a better understanding of the role of genetic variations in diseases and health conditions, the development and clinical use of molecular genetic tests continues to expand (26)(27)(28). Despite considerable information gaps regarding the number of U.S. laboratories that perform molecular genetic tests for heritable diseases and conditions and the number of specific genetic tests being performed (1), molecular genetic testing is one of the areas of laboratory testing that is increasing most rapidly. Molecular genetic tests are performed by a broad range of laboratories, including laboratories that have CLIA certificates for chemistry, pathology, clinical cytogenetics, or other specialties or subspecialties (11). Although nationwide data are not available, data from state programs indicate considerable increases in the numbers of laboratories that perform molecular genetic tests. For example, the number of approved laboratories in the state of New York that perform molecular genetic testing for heritable diseases and conditions increased 36% in 6 years, from 25 laboratories in February 2002 to 34 laboratories in October 2008 (29). Although comprehensive data on the annual number of molecular genetic tests performed nationwide are not available, industry reports indicate a steady increase in the number of common molecular genetic tests for heritable diseases and conditions, such as mutation testing for cystic fibrosis and factor V Leiden thrombophilia (3). The number of cystic fibrosis mutation tests has increased significantly since 2001, pursuant to the recommendations of the American College of Obstetricians and Gynecologists and ACMG for preconception and prenatal carrier screening (30,31).
The DNA-based cystic fibrosis mutation tests are now among the most commonly performed genetic tests in the United States and have become an essential component of several state newborn screening programs for confirming presumptive screening results of infants (32). The overall worldwide increase in molecular genetic testing from 2006 to 2007 has been reported in some market analyses to be 15%, outpacing other areas of molecular diagnostic testing (33).

# CLIA Oversight for Molecular Genetic Testing

In 1988, Congress enacted Public Law 100-578, a revision of Section 353 of the Public Health Service Act (42 U.S.C. 263a) that amended the Clinical Laboratory Improvement Act of 1967 and required the Department of Health and Human Services (HHS) to establish regulations to ensure the quality and reliability of laboratory testing on human specimens for disease diagnosis, prevention, or treatment or for health assessment purposes. In 1992, HHS published CLIA regulations that describe requirements for all laboratories that perform patient testing (34). Facilities that perform testing for forensic purposes only and research laboratories that test human specimens but do not report patient-specific results are exempt from CLIA regulations (34). CMS (formerly the Health Care Financing Administration) administers the CLIA laboratory certification program in conjunction with FDA and CDC. FDA is responsible for test categorization, and CDC is responsible for CLIA studies, convening CLIAC, and providing scientific and technical support to CMS. CLIAC was chartered by HHS to provide recommendations and advice regarding CLIA regulations, the impact of CLIA regulations on medical and laboratory practices, and modifications needed to CLIA standards to accommodate technological advances. In 2003, CMS and CDC published CLIA regulatory revisions to reorganize and revise CLIA requirements for quality systems for nonwaived testing and the laboratory director qualifications for high-complexity testing (35). The revised regulations included facility administration and quality system requirements for every phase of the testing process (35). Requirements for the clinical cytogenetics specialty also were reorganized and revised. Other genetic tests, such as molecular genetic tests, are not recognized as a specialty or subspecialty under CLIA. However, because these tests are considered high complexity, laboratories that perform molecular genetic testing for heritable diseases and conditions must meet applicable general CLIA requirements for nonwaived testing and the personnel requirements for high-complexity testing (36). To enhance oversight of genetic testing under CLIA, CMS developed a plan to promote a comprehensive approach for effective application of current regulations and to provide training and guidelines to surveyors and laboratories that perform genetic testing (16). CDC and CMS also have been assessing the need to revise and update CLIA requirements for proficiency testing programs and laboratories, taking into consideration the need for improved performance evaluation for laboratories that perform genetic testing (37).
# Concerns Related to Molecular Genetic Testing

Studies and reports since 1997 have revealed a broad range of concerns related to molecular genetic testing for heritable diseases and conditions, including safe and effective translation of research findings into patient testing, the quality of test performance and results interpretation, appropriate use of testing information and services in health management and patient care, the adequacy of quality assurance measures, and concerns involving the ethical, legal, economic, and social aspects of molecular genetic testing (1,9,22,38,39). Some of these concerns are indicative of the areas of laboratory practice that are in need of improvement, such as performance establishment and verification, proficiency testing, personnel qualifications and training, and results reporting (1,9,11,22,39).

# Errors Associated with and Needed Improvements in the Three Phases of Molecular Genetic Testing

Studies have indicated that although error rates associated with different areas of laboratory testing vary (40), the overall distribution of errors reported in the preanalytic, analytic, and postanalytic phases of the testing process is similar for many testing areas, including molecular genetic testing (9,11,39,40). The preanalytic phase encompasses test selection and ordering and specimen collection, processing, handling, and delivery to the testing site. The analytic phase includes selection of test methods, performance of test procedures, monitoring and verification of the accuracy and reliability of test results, and documentation of test findings. The postanalytic phase includes reporting test results and archiving records, reports, and tested specimens (41). Studies have indicated that errors are more likely to occur during the preanalytic and postanalytic phases of the testing process than during the analytic phase, with most errors reported for the preanalytic phase (40,(42)(43)(44). In the preanalytic phase, inappropriate selection of laboratory tests has been a significant source of errors (42,43). Misuse of laboratory services, such as unnecessary or inappropriate test requests, might lead to increased risk for medical errors, adverse patient outcomes, and increased health-care costs (43). Although no study has determined the overall number of molecular genetic tests performed that could be considered unwarranted or unnecessary, a study of the use and interpretation of adenomatous polyposis coli gene (APC) testing for familial adenomatous polyposis and other heritable conditions associated with colonic polyposis indicated that 17% of the cases evaluated did not have valid indications for testing (22). Although data are limited, studies also indicate that improvements are needed in the analytic phase of molecular genetic testing. A study of the frequency and severity of errors associated with DNA-based genetic testing revealed that errors related to specimen handling in the laboratory and other analytic steps ranged from 0.06% to 0.12% of approximately 92,000 tests evaluated (39). A subsequent meta-analysis indicated that these self-reported error rates were comparable to those detected in nongenetic laboratory testing (40). An analysis of performance data from the CAP molecular genetic survey program during 1995-2000 estimated the overall error rate for cystic fibrosis mutation analysis to be 1.5%, of which approximately 50% of the errors occurred during the analytic or postanalytic phases of testing (45).
Unrecognized sequence variations or polymorphisms also could affect the ability of molecular genetic tests to detect or distinguish the genotypes being analyzed, leading to false-positive or false-negative test results. Such problems have been reported for some commonly performed genetic tests such as cystic fibrosis mutation analysis and testing for HFE-associated hereditary hemochromatosis (46,47). The postanalytic phase of molecular genetic testing involves analysis of test results, preparation of test reports, and results reporting. The study on the use of APC gene testing and interpretation of test results indicated that lack of awareness among health-care providers of APC test limitations was a primary reason for misinterpretation of test results (22). In a study assessing the comprehensiveness and usefulness of reports for cystic fibrosis and factor V Leiden thrombophilia testing, physicians in many medical specialties considered reports that included information beyond that specified by the general CLIA test report requirements to be more informative and useful than test reports that only met CLIA requirements; additional information included patient race/ethnicity, clinical history, reasons for test referral, test methodology, recommendations for follow-up testing, implications for family members, and suggestions for genetic counseling (48). Consistent with these findings, international guidelines for quality assurance in molecular genetic testing recommend that molecular genetic test reports be accurate, concise, and comprehensive and communicate all essential information to enable effective decision-making by patients and health-care professionals (49).

# Proficiency Testing

Proficiency testing is a well-established practice for monitoring and improving the quality of laboratory testing (50,51) and is a key component of the external quality assessment process. Studies have indicated that using proficiency testing samples that resemble actual patient specimens could improve monitoring of laboratory performance (50,(52)(53)(54). Participation in proficiency testing has helped laboratories reduce analytic deficiencies, improve testing procedures, and take steps to prevent future errors (55)(56)(57)(58)(59). CLIA regulations do not yet include proficiency testing requirements for molecular genetic tests. Laboratories that perform molecular genetic testing must meet the general CLIA requirement to verify, at least twice annually, the accuracy of the genetic tests they perform (§493.1236[c]) (36). Laboratories may participate in available proficiency testing programs for the genetic tests they perform to meet this CLIA alternative performance assessment requirement. Proficiency testing participation correlates significantly with the quality assurance measures in place among laboratories that perform molecular genetic testing (9,10). Because proficiency testing is a rigorous external assessment for laboratory performance, in 2008, SACGHS recommended that proficiency testing participation be required for all molecular genetic tests for which proficiency testing programs are available (1). Formal molecular genetic proficiency testing programs are available only for a limited number of tests for heritable diseases and conditions; in addition, the samples provided often are purified DNA, which do not typically require performance of all steps of the testing process, such as nucleic acid extraction and preparation (60).
For many genetic conditions that are either rare or for which testing is performed by one or a few laboratories, substantial challenges in developing formal proficiency testing programs have been recognized (1). Development of effective alternative performance assessment approaches to proficiency testing is essential for ensuring the quality of molecular genetic testing (1). Professional guidelines have been developed for laboratories to evaluate and monitor test performance when proficiency testing programs are not available (61). However, reports of the CAP molecular pathology on-site inspections indicate that deficiencies related to participation in interlaboratory comparison or alternative performance assessment are among the most frequently identified deficiencies, accounting for 3.9% of all deficiencies cited (62).

# Clinical Validity and Potential Risks Associated with Certain Molecular Genetic Tests

The ability of a test to diagnose or predict risk for a particular health condition is the test's clinical validity, which often is measured by clinical (or diagnostic) sensitivity, clinical (or diagnostic) specificity, and predictive values of the test for a given health condition. Clinical validity can be influenced by factors such as the prevalence of the disease or health condition, penetrance (proportion of persons with a mutation causing a particular disorder who exhibit clinical symptoms of the disorder), and modifiers (genetic or environmental factors that might affect the variability of signs or symptoms that occur with a phenotype of a genetic alteration). For genetic tests, clinical validity refers to the ability of a test to detect or predict the presence or absence of a particular disease or phenotype and often corresponds to associations between genotypes and phenotypes (1,28,(63)(64)(65)(66)(67)(68)(69). The usefulness of a test in clinical practice, referred to as clinical utility, involves identifying the outcomes associated with specific test results (28). Clinical validity and clinical utility should be assessed individually for each genetic test because the implications might vary depending on the health condition and population being tested (38).
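The dependence of clinical validity on disease prevalence can be made concrete with a short calculation. The following Python sketch is illustrative only; the sensitivity, specificity, and prevalence values are hypothetical and are not drawn from any study cited in this report. It applies Bayes' theorem to show how the same test yields very different predictive values in populations with different prevalence.

```python
def predictive_values(sensitivity, specificity, prevalence):
    """Compute positive and negative predictive values from clinical
    sensitivity, clinical specificity, and disease prevalence.
    All inputs are proportions between 0 and 1."""
    tp = sensitivity * prevalence              # true positives
    fp = (1 - specificity) * (1 - prevalence)  # false positives
    tn = specificity * (1 - prevalence)        # true negatives
    fn = (1 - sensitivity) * prevalence        # false negatives
    return tp / (tp + fp), tn / (tn + fn)      # (PPV, NPV)

# A test with 99% clinical sensitivity and specificity (hypothetical)
# has a low PPV when the condition is rare in the tested population:
for prevalence in (0.001, 0.05, 0.25):
    ppv, npv = predictive_values(0.99, 0.99, prevalence)
    print(f"prevalence={prevalence:.3f}  PPV={ppv:.3f}  NPV={npv:.3f}")
```

At a prevalence of 0.001, the PPV is approximately 0.09: even with excellent clinical sensitivity and specificity, most positive results in a low-prevalence population are false positives, which is one reason clinical validity must be assessed for each intended population.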
As advances in genomic research and technology result in rapid development of new genetic tests, concerns have been raised that certain tests, particularly predictive genetic tests, could become available without adequate assessment of their validity, benefits, and utility. Consequently, health professionals and consumers might not be able to make a fully informed decision about whether or how to use these tests. In 1997, a task force formed by a National Institutes of Health (NIH)-Department of Energy workgroup recommended that laboratories that perform patient testing establish clinical validity for the genetic tests they develop before offering them for patient testing and carefully review and document evidence of test validity if the test has been developed elsewhere (70). This recommendation was later included in a report of the Secretary's Advisory Committee on Genetic Testing (SACGT), which was established in 1998 to advise HHS on medical, scientific, ethical, legal, and social concerns raised by the development and use of genetic tests (38). Public concerns about inadequate knowledge or documentation of the clinical validity of certain genetic tests were also recognized by SACGHS, the advisory committee that was established by HHS in 2002 to supersede SACGT (1). SACGHS recommended the development and support of sustainable public-private collaborations to fill the gaps in knowledge of the analytic validity, clinical validity, clinical utility, economic value, and population health impact of molecular genetic tests (1). Collaborative efforts that have been recognized include the Evaluation of Genomic Applications in Practice and Prevention (EGAPP) program, a CDC initiative to establish and evaluate a systematic, evidence-based process for assessing genetic tests and other applications of genomic technology in transition from research to clinical practice and public health (71), and the Collaboration, Education, and Test Translation (CETT) Program, which is overseen by the NIH Office of Rare Diseases to promote the effective transition of potential genetic tests for rare diseases from research settings into clinical settings (72). The increase in direct-to-consumer (DTC) genetic testing (i.e., genetic tests offered directly to consumers with no health-care provider involvement) has raised concerns about the potential risks or misuses of certain genetic tests (73). As of October 2008, consumers could directly order laboratory tests in 27 states, and in another 10 states, consumer-ordered tests were allowed under defined circumstances (74). As DTC genetic tests become increasingly available, various genetic profile tests that claim to answer questions regarding cardiovascular risk, drug metabolism, diet, and lifestyle have been marketed directly to the public (73). In addition, DTC advertisements have caused a substantial increase in the demand for molecular genetic tests, such as those for hereditary breast and ovarian cancers (75,76). Although DTC genetic testing allows easy access to testing services, it has raised concerns about the potential for inadequate pretest decision-making, misunderstanding of test results, access to tests of questionable clinical value, lack of necessary follow-up, and unexpected additional responsibilities for primary care physicians (77)(78)(79)(80). Both the government and professional organizations have developed educational materials that provide guidance to consumers, laboratories, genetics professionals, and professional organizations regarding DTC genetic tests (80)(81)(82).

# Personnel Qualifications and Training

Studies indicate that qualifications of laboratory personnel, including training and experience, are critical for ensuring quality performance of genetic testing, because human error has the greatest potential influence on the quality of laboratory test results (9,83,84). A study of laboratories in the United States that perform molecular genetic testing suggested that laboratory adherence to voluntary quality standards and guidelines for genetic testing was significantly associated with laboratories directed or supervised by persons with board certification in medical genetics (9). Results of an international survey revealed a similar correlation between the quality assurance practices of a molecular genetic testing laboratory and the formal training of the laboratory director (10). Overall, the concerns recognized in publications and documented cases support the need to have trained, qualified personnel at all levels to ensure the quality of all phases of the genetic testing process.
# Methods

Information Collection and Assessment

To monitor and assess the scope and growth of molecular genetic testing in the United States, data were collected and analyzed from scientific articles, government reports, the CMS CLIA database, information from state programs, studies by professional groups, publicly available directories and databases of laboratories and laboratory testing, industry reports, and CDC studies (1-3,5,6,9,29,38,83,85-88). To evaluate factors in molecular genetic testing that might affect testing quality and to identify areas that would benefit from quality assurance guidelines, various documents were considered, including professional practice guidelines, CAP laboratory accreditation checklists, CLSI guidelines, state requirements, and international guidelines and standards (12-15,49,61,89-95).

# Development of CLIAC Recommendations for Good Laboratory Practices in Molecular Genetic Testing

Since 1997, CLIAC has provided HHS with recommendations on approaches needed to ensure the quality of genetic testing (37). At the February 2007 CLIAC meeting, CLIAC asked CDC and CMS to clarify critical concerns in genetic testing oversight and to provide a status report at the subsequent CLIAC meeting. At the September 2007 CLIAC meeting, CDC presented an overview of the regulatory oversight and voluntary measures for quality assurance of genetic testing and described a plan to develop and publish educational material on good laboratory practices. CDC solicited CLIAC recommendations to address concerns that presented particular challenges related to genetic testing oversight, including establishment and verification of performance specifications, control procedures for molecular amplification assays, proficiency testing, genetic test reports, personnel competency assessment, and the definition of genetic tests. CLIAC recommended convening a workgroup of experts in genetic testing to consider these concerns and provide input for CLIAC deliberation. The CLIAC Genetic Testing Good Laboratory Practices Workgroup was formed. The workgroup conducted a series of meetings on the scope of laboratory practice recommendations needed for genetic testing and suggested that recommendations first be developed for molecular genetic testing for heritable diseases and conditions. The workgroup evaluated good laboratory practices for all phases of the genetic testing process after reviewing professional guidelines, regulatory and voluntary standards, accreditation checklists, international standards and guidelines, and other documents that provided general or specific quality standards applicable to molecular genetic testing for heritable diseases and conditions (1,12-15,36,41,49,61,80,82,91-109). The workgroup also reviewed information on the HHS-approved and other certification boards for laboratory personnel and the number of persons certified in each of the specialties for which certification is available (110-118). Workgroup suggestions were reported to CLIAC at the September 2008 committee meeting. The CLIAC recommendations were formed on the basis of the workgroup report and additional committee deliberations. The committee recommended that CDC include the CLIAC-recommended good laboratory practices for molecular genetic testing in the planned publication.
Summaries of CLIAC meetings and CLIAC recommendations are available (37).

# Recommended Good Laboratory Practices

The following recommended good laboratory practices are for areas of molecular genetic testing for heritable diseases and conditions in need of guidelines for complying with existing CLIA requirements or in need of additional quality assurance measures. These recommendations are not intended to encompass the entire realm of laboratory practice; they are meant to provide guidelines for specific quality concerns in the performance and delivery of laboratory services for molecular genetic testing for heritable diseases and conditions. These recommendations address laboratory practices for the total testing process, including the preanalytic, analytic, and postanalytic phases of molecular genetic testing. The recommendations for the preanalytic phase include guidelines for laboratory responsibilities for providing information to users of laboratory services, informed consent, test requests, specimen submission and handling, test referrals, and preanalytic systems assessment. The recommendations for the analytic phase include guidelines for establishment and verification of performance specifications, quality control procedures, proficiency testing, and alternative performance assessment. The recommendations for the postanalytic phase include guidelines for test reports, retention of records and reports, and specimen retention. The recommendations also address responsibilities of laboratories regarding authorized persons, confidentiality of patient information and test results, personnel competency, factors to consider before introducing molecular genetic testing or offering new molecular genetic tests, and the potential benefits of the quality management system approach in molecular genetic testing. Recommendations are provided in relation to applicable provisions in the CLIA regulations and, when necessary, are followed by a description of how the recommended practices can be used to improve quality assurance and quality assessment for molecular genetic testing. A list of terms and abbreviations used in this report also is provided (Appendix A).

# The Preanalytic Testing Phase

Test Information to Provide to Users of Laboratory Services

Laboratories are responsible for providing information regarding the molecular genetic tests they perform to users of their services; users include authorized persons under applicable state law, health-care professionals, patients, referring laboratories, and payers of laboratory services. Laboratories should review the genetic tests they perform and the procedures they use to provide and update the recommended test information that follows. At a minimum, laboratories should ensure that the test information is available from accessible sources such as websites, service directories, information pamphlets or brochures, newsletters, instructions for specimen submission, and test request forms. Laboratories that already provide the information from these sources should continue to do so. However, laboratories also might decide to provide the information more directly to their users (e.g., by telephone, e-mail, or in an in-person meeting) and should determine the situations in which such direct communication is necessary. The complexity of language used should be appropriate for the particular laboratory user groups (e.g., for patients, plain language understandable by the general public).

Test selection, test performance, and specimen submission.
Laboratories should provide information regarding the molecular genetic tests they perform to users of their services to facilitate appropriate test selection and requests, specimen handling and submission, and patient care. Each laboratory that performs molecular genetic testing for heritable diseases and conditions should provide the following information to its users:
• Information necessary for selecting appropriate tests, including a list of the molecular genetic tests the laboratory performs. For each molecular genetic test, the following information should be provided:
 - Intended use of the test, including the nucleic acid target of the test (e.g., genes, sequences, mutations, or polymorphisms), the purpose of testing (e.g., diagnostic, preconception, or predictive), and the recommended patient populations
 - Indications for testing
 - Test method to be used, presented in user-friendly language

Cost. When possible and practical, laboratories should provide users with information on the charges for molecular genetic tests being performed. Estimating the expenses that a patient might incur from a particular genetic test can be difficult for certain laboratories and providers because fee schedules of individual laboratories can vary depending on the health-care payment policy selections of each patient. However, advising the patient and family members of the financial implications of the tests, whenever possible, facilitates informed decision-making.

Discussion. Under CLIA, laboratories are required to develop and follow written policies and procedures for specimen submission and handling, specimen referral, and test requests (42 CFR §§493.1241 and 1242). Laboratories must ensure positive identification and optimum integrity of specimens from the time of collection or receipt through the completion of testing and reporting of test results (42 CFR §493.1232). In addition, laboratories that perform nonwaived testing must ensure that a qualified clinical consultant is available to assist laboratory clients with ordering tests appropriate for meeting clinical expectations (42 CFR §493.1457[b]). The recommended laboratory practices in this report describe laboratory responsibilities for ensuring appropriate test requests and specimen submission for the molecular genetic tests they perform, in addition to laboratory responsibilities for meeting CLIA requirements. The recommendations emphasize the role of laboratories in providing specific information needed by users before decisions are made regarding test selection and ordering, based on consideration of several factors. First, molecular genetic tests for heritable diseases and conditions are being rapidly developed and increasingly used in health-care settings. Users of laboratory services need the ability to easily access information regarding the intended use, performance specifications, and limitations of the molecular genetic tests a laboratory offers to determine appropriate testing for specific patient conditions. Second, many molecular genetic tests are performed using laboratory-developed tests or test systems. The performance specifications and limitations of the testing might vary among laboratories, even for the same disease or condition, depending on the specific procedures used. Users of laboratory services who are not provided information related to the appropriateness of the tests being considered might select tests that are not indicated or cannot meet clinical expectations.
Third, for many heritable diseases and conditions, test performance and interpretation of test results require information regarding patient race/ethnicity, family history, and other pertinent clinical and laboratory information. Informing users before tests are ordered of the specific patient information needed by the laboratory should facilitate test requests and allow prompt initiation of appropriate testing procedures and accurate interpretation of test results. Finally, providing information to users on performance specifications and limitations of tests before test selection and ordering prepares users of laboratory services for understanding test results and implications. CLIA test report requirements (42 CFR §493.1291[e]) indicate that laboratories are required to provide users of their services, on request, with information on laboratory test methods and the performance specifications the laboratory has established or verified for the tests. However, for molecular genetic tests for heritable diseases and conditions, laboratories should provide test performance information to users before test selection and ordering, rather than waiting for a request after the test has been performed. The information provided in the preanalytic phase must be consistent with information included on test reports. Providing molecular genetic testing information to users before tests are selected and ordered should improve test requests and specimen submission and might reduce unnecessary or unwarranted testing. The recommended practices also might increase informed decision-making, improve interpretation of results, and improve patient outcome.

# Informed Consent

A person who provides informed consent voluntarily confirms a willingness to undergo a particular test, after having been informed of all aspects of the test that are relevant to the patient's decision (49). Informed consent for genetic testing or specific types of genetic tests is required by law in certain states; as of June 2008, 12 states required that informed consent be obtained before a genetic test is requested or performed (119). In addition, certain states (e.g., Massachusetts, Michigan, Nebraska, New York, and South Dakota) have included required informed consent components in their statutes (97,120-123) (Appendix B). These state statutes can be used as examples for laboratories in other states that are developing specific informed consent forms. Professional organizations recommend that informed consent be obtained for testing for many inherited genetic conditions (12,13). CLIA regulations have no requirements for laboratory documentation of informed consent for requested tests; however, medical decisions for patient diagnosis or treatment should be based on informed decision-making (124). Regardless of whether informed consent is required, laboratories that perform molecular genetic tests for heritable diseases and conditions should be responsible for providing users with the information necessary to make informed decisions. Informed consent is in the purview of the practice of medicine; the persons authorized to order the tests are responsible for obtaining the appropriate level of informed consent (67). Unless mandated by state or local requirements, obtaining informed consent before performing a test generally is not considered a laboratory responsibility. For molecular genetic testing for heritable diseases and conditions, not all tests require written patient consent before testing (125).
However, when informed consent for patient testing is recommended or required by law or other applicable requirements as a method for documenting the process and outcome of informed decision-making, laboratories should ensure that certain practices are followed:
• Be available to assist users of laboratory services with determining the appropriate level of informed consent by providing useful and necessary information.
• Include appropriate methods for documenting informed consent on test request forms, and determine whether the consent information is provided with the test request before initiating testing.

Laboratories may determine situations in which a patient specimen can be stabilized until informed consent is obtained, following the practices for specimen retention recommended in these guidelines. Laboratories should refer to professional guidelines for additional information regarding informed consent for molecular genetic tests and should consider available models when developing the content, format, and procedures for documentation of patient consent.

# Test Requests

CLIA requirements (42 CFR §493.1241[c]) specify that laboratories that perform nonwaived testing must ensure that the test request solicits the following information: 1) the name and address or other suitable identifiers of the authorized person requesting the test and (if applicable) the person responsible for using the test results, or the name and address of the laboratory submitting the specimen, including (if applicable) a contact person to enable reporting of imminently life-threatening laboratory results or critical values; 2) patient name or a unique patient identifier; 3) sex and either age or date of birth of the patient; 4) the tests to be performed; 5) the source of the specimen (if applicable); 6) the date and (if applicable) time of specimen collection; and 7) any additional information relevant and necessary for a specific test to ensure accurate and timely testing and reporting of results, including interpretation (if applicable).

For molecular genetic testing for heritable diseases and conditions, laboratories must comply with these CLIA requirements and should solicit the following additional information on test requests:
• Patient name and any other unique identifiers needed to ensure patient identification
• Date of birth
• Indications for testing
• Relevant clinical and laboratory information
• Patient race/ethnicity, family history, and pedigree (when applicable)
• Documentation of informed consent (when applicable)

Patient name and unique identifiers. CLIA requirements specify that test requests must solicit the patient's name or a unique patient identifier (42 CFR §493.1241[c][2]). Laboratories that perform molecular genetic testing for heritable diseases and conditions should ensure that at least two unique identifiers are solicited on these test requests, which should include patient names, when possible, and any other unique identifiers needed to ensure patient identification. In certain situations (e.g., compatibility testing for which donor names are not always provided to the laboratory), an alternative unique identifier is appropriate.

Date of birth. CLIA requirements specify that test requests must solicit the sex and either age or date of birth of the patient (42 CFR §493.1241[c][3]). For molecular genetic testing for heritable diseases and conditions, patient date of birth is more informative than age and should be obtained when possible.

Indications for testing, relevant clinical and laboratory information, patient race/ethnicity, family history, and pedigree. Obtaining information on indications for testing, relevant clinical or laboratory information, patient racial/ethnic background, family history, and pedigree is critical for selecting appropriate test methods, determining the mutations or variants to be tested, interpreting test results, and timely reporting of test results.
Genetic conditions often have different disease prevalences with various mutation frequencies and distributions among racial/ethnic groups. Unique, or private, mutations or genotypes might be present only in specific families or can be associated with founder effects (i.e., gene mutations observed in high frequency in a specific population because of the presence of the mutation in a single ancestor or small number of ancestors in the founding population). Family history and other relevant clinical or laboratory information are often important for determining whether the test requested might meet the clinical expectations, including the likelihood of identifying a disease-causing mutation. Specific race/ethnicity, family history, and other pertinent information to be solicited on a test request should be determined according to the specific disease or condition for which the patient is being tested. Laboratories should consider available guidelines for requesting and obtaining this additional information and determine circumstances in which more specific patient information is needed for particular genetic tests (126,127). Although this information is not specified in CLIA, the regulations provide laboratories the flexibility to determine and solicit relevant and necessary information for a specific test (42 CFR §493.1241[c][8]). The recommended test request components also are consistent with many voluntary professional and accreditation guidelines (12)(13)(14).

Documentation of informed consent. Methods for indicating and documenting informed consent on a test request might include a statement, text box, or check-off box on the test request form to be signed or checked by the test requestor; a separate form to be signed as part of the test request; or another method that complies with applicable requirements and adheres to professional guidelines. In addition, when state or local laws or regulations specify that patient consent must be obtained regarding the use of tested specimens for quality assurance or other purposes, the test request must include a way for the test requestor to indicate the decision of the patient. Laboratories also might determine that other situations merit documentation of consent before testing.

# Specimen Submission, Handling, and Referral

CLIA requires laboratories to establish and follow written policies and procedures for patient preparation, specimen collection, specimen labeling (including patient name or unique patient identifier and, when appropriate, specimen source), specimen storage and preservation, conditions for specimen transportation, specimen processing, specimen acceptability and rejection, and referral of specimens to another laboratory (42 CFR §493.1242). If a laboratory accepts a referral specimen, appropriate written instructions providing information on specimen handling and submission must be available to the laboratory clients. The following recommendations are intended to help laboratories that perform molecular genetic testing meet general CLIA requirements and to provide additional guidelines on quality assurance measures for specimen submission, handling, and referral for molecular genetic testing. Before test selection and ordering, laboratories that perform molecular genetic testing should provide their users with instructions on specimen collection, handling, transport, and submission.
These instructions should include the following information:
• Conditions under which specimen integrity or the analytes to be detected in a specimen might be compromised
• Specimen transport conditions (e.g., ambient temperature, refrigeration, and immediate delivery)
• Reasons for rejection of specimens

Criteria for specimen acceptance or rejection. Laboratories should have written criteria for acceptance or rejection of specimens for the molecular genetic tests they perform and should promptly notify the authorized person when a specimen meets the rejection criteria and is determined to be unsuitable for testing. The criteria should include information on determining the existence of and addressing the following situations:
• Improper handling or transport of specimens
• Lack of other information needed to determine whether the specimen or test requested is appropriate for answering the clinical question

Retention and exchange of information throughout the testing process. Information on test requests and test reports is a particularly important component of the complex communication between genetic testing laboratories and their users. Laboratories should have policies and procedures in place to ensure that information needed for selection of appropriate test methods, test performance, and results interpretation is retained throughout the entire molecular genetic testing process. This recommendation is based on CLIAC recognition of instances in which information on test requests or test reports was removed by electronic or other information systems during specimen submission, results reporting, or test referral. CLIA requires laboratories to ensure the accuracy of test request or authorization information when transcribing or entering the information into a record system or a laboratory information system (42 CFR §493.1241[e]). For molecular genetic tests, information on test requests and test reports should be retained accurately and completely throughout the testing process.

Specimen referral. CLIA requires laboratories to refer specimens for any type of patient testing to CLIA-certified laboratories or laboratories that meet equivalent requirements as determined by CMS (42 CFR §493.1242[c]). Examples of laboratories that meet equivalent requirements include laboratories in states with CLIA-exempt licensure programs (i.e., New York and Washington).

# The Analytic Testing Phase

Establishment and Verification of Performance Specifications

CLIA requires laboratories to establish or verify the analytic performance of all nonwaived tests and test systems before introducing them for patient testing and to determine the calibration and control procedures of tests based on the performance specifications verified or established. Before reporting patient test results, each laboratory that introduces an unmodified, FDA-cleared or FDA-approved test system must 1) demonstrate that the manufacturer-established performance specifications for accuracy, precision, and reportable range of test results can be reproduced and 2) verify that the manufacturer-provided reference intervals (or normal values) are appropriate for the laboratory patient population (42 CFR §493.1253). Laboratories are subject to more stringent requirements when introducing 1) FDA-cleared or FDA-approved test systems that have been modified by the laboratory, 2) laboratory-developed tests or test systems that are not subject to FDA clearance or approval (e.g., standardized methods and textbook procedures), or 3) test systems with no manufacturer-provided performance specifications.
In these instances, before reporting patient test results, laboratories must conduct more extensive procedures to establish applicable performance specifications for accuracy, precision, analytic sensitivity, and analytic specificity; reportable range of test results; reference intervals (normal values); and other performance characteristics required for test performance. Although laboratories that perform molecular genetic testing for heritable diseases and conditions must comply with these general CLIA requirements, additional guidelines are needed to assist with establishment and verification of performance specifications for these tests. The recommended laboratory practices that follow are primarily intended to provide specific guidelines for establishing performance specifications for laboratory-developed molecular genetic tests to ensure valid and reliable test performance and interpretation of results. The recommendations also might be used by laboratories to verify performance specifications of unmodified FDA-cleared or FDA-approved molecular genetic test systems to be introduced for patient testing. Factors that should be considered when developing performance specifications for molecular genetic tests include the intended use of the test; target genes, sequences, and mutations; intended patient populations; test methods; and samples to be used (99).

Accuracy. Accuracy is commonly defined as "closeness of the agreement between the result of a measurement and a true value of the measurand" (128). For qualitative molecular genetic tests, laboratories are responsible for verifying or establishing the accuracy of the method used to identify the presence or absence of the analytes being evaluated (e.g., mutations, variants, or other targeted nucleic acids). Accuracy might be assessed by testing reference materials, comparing test results against results of a reference method, comparing split-sample results with results obtained from a method shown to provide clinically valid results, or correlating research results with the clinical presentation when establishing a test system for a new analyte, such as a newly identified disease gene (96).

Precision. Precision is defined as "closeness of agreement between independent test results obtained under stipulated conditions" (129). Precision is commonly determined by assessing repeatability (i.e., closeness of agreement between independent test results for the same measurand under the same conditions) and reproducibility (i.e., closeness of agreement between independent test results for the same measurand under changed conditions). Precision can be verified or established by assessing day-to-day, run-to-run, and within-run variation (as well as operator variance) by repeat testing of known patient samples, quality control materials, or calibration materials over time (96).
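To make the repeatability and reproducibility concepts above concrete, the following minimal Python sketch summarizes hypothetical replicate measurements of a single control material for an assay with a quantitative signal; the numbers are invented, and the simple pooled summary shown is only one of several acceptable ways to partition within-run and between-run variation.

```python
from statistics import mean, stdev

# Hypothetical replicate measurements of one control material from
# three runs performed on different days or by different operators.
runs = [
    [10.1, 9.9, 10.0, 10.2],   # run 1
    [10.4, 10.3, 10.5, 10.2],  # run 2, different day
    [9.8, 10.0, 9.9, 10.1],    # run 3, different operator
]

# Repeatability: agreement under the same conditions, summarized
# here as the average within-run standard deviation.
repeatability_sd = mean(stdev(r) for r in runs)

# Reproducibility: agreement under changed conditions, summarized
# here as the standard deviation of all results pooled across runs.
pooled = [x for r in runs for x in r]
reproducibility_sd = stdev(pooled)

cv = 100 * reproducibility_sd / mean(pooled)  # coefficient of variation, %
print(f"repeatability SD = {repeatability_sd:.3f}")
print(f"reproducibility SD = {reproducibility_sd:.3f} (CV = {cv:.1f}%)")
```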
Analytic sensitivity. Practice guidelines vary in their definitions of analytic sensitivity; certain guidelines consider analytic sensitivity to be the ability of an assay to detect a given analyte, or the lower limit of detection (LOD) (93), whereas guidelines for molecular genetic testing for heritable diseases consider analytic sensitivity to be "the proportion of biological samples that have a positive test result or known mutation and that are correctly classified as positive" (12). However, determining the LOD of a molecular genetic test or test system is often needed as part of performance establishment and verification (93). To avoid potential confusion among users and the general public in understanding test performance and test results, laboratories should review and follow applicable professional guidelines before testing is introduced and ensure the guidelines are followed consistently throughout performance establishment and verification and during subsequent patient testing. Analytic sensitivity should be determined for each molecular genetic test before the test is used for patient testing.

Analytic specificity. Analytic specificity is generally defined as the ability of a test method to determine only the target analytes to be detected or measured and not the interfering substances that might affect laboratory testing. Interfering substances include factors associated with specimens (e.g., specimen hemolysis, anticoagulant, lipemia, and turbidity) and factors associated with patients (e.g., clinical conditions, disease states, and medications) (96). Laboratories must document information regarding interfering substances and should use product information, literature, or the laboratory's own testing (96). Accepted practice guidelines for molecular genetic testing, such as those developed by ACMG, CAP, and CLSI, define analytic specificity as the ability of a test to distinguish the target sequences, alleles, or mutations from other sequences or alleles in the specimen or genome being analyzed (12)(13)(14). The guidelines also address documentation and determination of common interfering substances specific for molecular detection (e.g., homologous sequences, contaminants, and other exogenous or endogenous substances) (12)(13)(14). Laboratories should adhere to these specific guidelines in establishing or verifying analytic specificity for each of their molecular genetic tests.

Reportable range of test results. As defined by CLIA, the reportable range of test results is "the span of test result values over which the laboratory can establish or verify the accuracy of the instrument or test system measurement response" (36). The reportable range of patient test results can be established or verified by assaying low and high calibration materials or control materials or by evaluating known samples of abnormally high and low values (96). For targeted mutation analyses, for example, laboratories should assay quality control or reference materials, known normal samples, and samples containing the mutations to be detected. For analysis of trinucleotide repeats, laboratories should include samples representing the full range of expected allele lengths (130).
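The relationship between the reportable range and allele classification for a repeat-expansion assay can be sketched in a few lines of code. In the following Python example, the category names and numeric cutoffs are hypothetical placeholders, loosely patterned on published repeat-expansion categories; an actual laboratory would take its thresholds from validated, literature-based reference data for the specific gene and would verify the assay across the full span of allele lengths it intends to report.

```python
# Hypothetical decision thresholds for a trinucleotide repeat assay.
# These cutoffs are illustrative only and differ by gene and disorder.
NORMAL_MAX = 26
INTERMEDIATE_MAX = 35
REDUCED_PENETRANCE_MAX = 39

def classify_repeat_length(repeats: int) -> str:
    """Map a measured repeat count to an allele category."""
    if repeats <= NORMAL_MAX:
        return "normal"
    if repeats <= INTERMEDIATE_MAX:
        return "intermediate"
    if repeats <= REDUCED_PENETRANCE_MAX:
        return "reduced penetrance"
    return "full mutation"

# Verification samples should span the reportable range, including
# values below, at, and above each decision threshold.
for allele in (15, 26, 27, 35, 36, 39, 40, 60):
    print(allele, classify_repeat_length(allele))
```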
# Quality control procedures. CLIA requires laboratories to determine the calibration and control procedures for nonwaived tests or test systems on the basis of the verification or establishment of performance specifications for the tests (42 CFR §493.1253[b][3]). Laboratories that perform molecular genetic tests must meet these requirements and, for every molecular genetic test to be introduced for patient testing, should consider the recommended quality control practices.

# Documentation of information on clinical validity. Laboratories should ensure that the molecular genetic tests they perform are clinically usable and can be interpreted for specific patient situations (36). Laboratory directors and clinical consultants must ensure laboratory consultations are available for laboratory clients regarding the appropriateness of the tests ordered and interpretation of test results (36). Documentation of available clinical validity information helps laboratories that perform molecular genetic testing to fulfill their responsibilities for consulting with health-care professionals and other users of laboratory services, especially regarding tests that evaluate germline mutations or variants that might be performed only once during a patient's lifetime. Establishing clinical validity is a continuous process and might require extended studies and involvement of many disciplines (38). The recommendations in this report emphasize the responsibility of laboratories that perform molecular genetic testing to document available information from medical and scientific research studies on the intended patient populations to be able to perform testing and provide results interpretation appropriate for specific clinical contexts. Laboratory directors are responsible for using professional judgment to evaluate the results of such studies as applied to newly discovered gene targets, especially those of a predictive or incompletely penetrant nature, in considering potential new tests. The recommendations in this report are consistent with the voluntary professional and accreditation guidelines of ACMG, CLSI, and CAP for molecular genetic testing (12-14,93,94).

# Control Procedures

General quality control practices. The analytic phase of molecular genetic testing often includes the following steps: specimen processing; nucleic acid extraction, preparation, and assessment; enzymatic reaction or amplification; analyte detection; and recording of test results. Laboratories that perform molecular genetic testing must meet the general CLIA requirements for nonwaived testing (42 CFR §493.1256) (36), including the following applicable quality control requirements:
• Laboratories must have control procedures in place to monitor the accuracy and precision of the entire analytic process for each test system.
• The number and type of control materials and the frequency of control procedures must be established using applicable performance specifications verified or established by the laboratory.
• Control procedures must be in place for laboratories to detect immediate errors caused by test system failure, adverse environmental conditions, and operator performance and to monitor the accuracy and precision of test performance over time.
• At least once each day that patient specimens are tested, the laboratory must include the following:
-At least two control materials of different concentrations for each quantitative procedure
-A negative control material and a positive control material for each qualitative procedure
-A negative control material and a control material with graded or titered reactivity, respectively, for each test procedure producing graded or titered results
-Two control materials, including one that is capable of detecting errors in the extraction process, for each test system that has an extraction phase
-Two control materials for each molecular amplification procedure and, if reaction inhibition is a substantial source of false-negative results, a control material capable of detecting the inhibition
• If control materials are not available, the laboratory must have an alternative method for detecting immediate errors and monitoring test system performance over time; the performance of the alternative control procedures must be documented.

Specific quality control practices. Specific quality control practices are necessary for ensuring the quality of molecular genetic test performance. The following recommendations include specific guidelines for meeting the general CLIA quality control requirements and additional measures that are more stringent or explicit than the CLIA requirements for monitoring and ensuring the quality of the molecular genetic testing process:
• When possible, include quality control samples that are similar to patient specimens to monitor the quality of all analytic steps of the testing process.
• If a commercial test system provides some but not all of the controls needed for testing, the laboratory must perform and follow the manufacturer recommendations for control testing and should determine the additional control procedures (including the number and types of control materials and the frequency of testing them) necessary for monitoring and ensuring the quality of test performance (36,96).
• Laboratories must have an alternative mechanism capable of monitoring DNA extraction and the preceding analytic steps if 1) purified DNA samples are used as control materials in circumstances in which incorporation of an extraction control is impractical or 2) testing is performed for a rare disease or rare variants for which no control material is available for the extraction phase. For example, testing patient specimens for an internal control sequence (e.g., a housekeeping gene or a spiked-in control sequence) might allow for monitoring of the sample quality and integrity, the presence of inhibitors, and proper amplification (12,93). A positive control, or a control sample capable of monitoring the ability of a test system to detect the nucleic acid targets, should be tested periodically and carried through the extraction step to monitor and verify the performance of the test system.

The CMS Survey Procedures and Interpretive Guidelines for Laboratories and Laboratory Services provides general guidelines for alternative control procedures and encourages laboratories to use multiple mechanisms for ensuring testing quality (96).
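The control requirements and practices described above lend themselves to an automated run-acceptance check before patient results are reviewed for release. The following minimal sketch assumes a hypothetical qualitative amplification assay in which each control's outcome is recorded; the control names and expected outcomes are illustrative, not a prescribed CLIA rule set.

```python
def run_is_acceptable(controls):
    """Check a qualitative amplification run against its control results.

    `controls` maps control names to observed results. Returns a list of
    failures; an empty list means patient results may be reviewed for release.
    """
    failures = []
    rules = {
        # control name: (expected result, why it is checked)
        "negative_control": ("negative", "detects contamination/false positives"),
        "positive_control": ("positive", "verifies the system detects the target"),
        "extraction_control": ("passed", "monitors the extraction phase"),
        "inhibition_control": ("amplified", "detects reaction inhibition"),
    }
    for name, (expected, purpose) in rules.items():
        observed = controls.get(name, "missing")
        if observed != expected:
            failures.append(f"{name}: expected {expected}, got {observed} ({purpose})")
    return failures

# Hypothetical run with a failed negative control (possible carryover).
run = {
    "negative_control": "positive",
    "positive_control": "positive",
    "extraction_control": "passed",
    "inhibition_control": "amplified",
}
problems = run_is_acceptable(run)
if problems:
    print("HOLD run; investigate before reporting:")
    for p in problems:
        print(" -", p)
```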
Following are examples of procedures that, when applicable, should be followed by laboratories that perform molecular genetic testing:
• Split specimens for testing by another method or in another laboratory.
• Include previously tested patient specimens (both positive and negative) as surrogate controls.
• Test each patient specimen in duplicate.
• Test multiple types of specimens from the same patient (e.g., saliva, urine, or serum).
• Perform serial dilutions of positive specimens to confirm positive reactions.
• Conduct an additional supervisory review of results before release.

Unidirectional workflow for molecular amplification procedures. CLIA requires laboratories to have procedures in place to monitor and minimize contamination during the testing process and to ensure a unidirectional workflow for amplification procedures that are not contained in closed systems (42 CFR §493.1101) (36). In this context, a closed system is a test system designed to be fully integrated and automated to purify, concentrate, amplify, detect, and identify targeted nucleic acid sequences. Such a modular system generates test results directly from unprocessed samples without manipulation or handling by the user; the system does not pose a risk for cross-contamination because amplicon-containing tubes and compartments remain completely closed during and after the testing process. For example, according to CLIA regulations, an FDA-cleared or FDA-approved test system that contains amplification and detection steps in sealed tubes that are never opened or reopened during or after the testing process and that is used as provided by the manufacturer (i.e., without any modifications) is considered a closed system. The requirement for a unidirectional workflow, which includes having separate areas for specimen preparation, amplification, product detection, and reagent preparation, applies to any testing that involves molecular amplification procedures. The following recommendations provide more specific guidelines for laboratories that perform molecular genetic testing for heritable diseases and conditions using amplification procedures that are not in a closed system:
• Include no-template control (NTC) samples, which contain all components of the amplification reaction except nucleic acid templates, with each run of patient specimens, and determine the number and positions of the NTC and other control samples needed to adequately monitor carryover contamination. For testing performed in multiple units, the number and positions of NTC samples also may be used for unambiguous identification of each unit.
• Ensure that specific procedures are in place to monitor the unidirectional workflow and to prevent cross-contamination for tests using successive amplification procedures (e.g., amplification of nucleic acid targets from a previous polymerase chain reaction [PCR] or nested PCR) if reaction tubes are opened after amplification for subsequent manipulation with the amplicons. Additives that destroy amplicons from previous PCR reactions also may be used.

Laboratories should recognize that methods such as PCR amplification, whole genome amplification, or subcloning to prepare quality control materials might be a substantial source of laboratory contamination. These laboratories should have the following specific procedures to monitor, detect, and prevent cross-contamination:
• Separation of the workflow of generating and preparing synthetic or amplified products for use as control materials from the patient testing process.
To prevent laboratory contamination, control materials should be processed and stored separately from the areas for preparation and storage of patient specimens and testing reagents.
• Regular testing of appropriate control samples at a frequency adequate to monitor cross-contamination.

These practices also should be considered by laboratories that purchase amplified materials for use as control materials, calibration materials, or competitors.

# Proficiency Testing and Alternative Performance Assessment

Proficiency testing is an important tool for assessing laboratory competence, evaluating the laboratory testing process, and providing education for the laboratory personnel. For certain analytes and testing specialties for which CLIA regulations specifically require proficiency testing, proficiency testing is provided by private-sector and state-operated programs that are approved by HHS because they meet CLIA standards (42 CFR Part 493). These approved programs also may provide proficiency testing for genetic tests and other tests that are not on the list of regulated analytes and specialties (131). Although the CLIA regulations do not have proficiency testing requirements specific for molecular genetic tests, laboratories that perform genetic tests must comply with the general requirements for alternative performance assessment for any test or analyte not specified as a regulated analyte in order to verify, at least twice annually, the accuracy of any genetic test or procedure they perform (42 CFR §493.1236[c]). Laboratories can meet this requirement by participating in available proficiency testing programs for the genetic tests they perform (132).

The following recommended practices provide more specific and stringent measures than the current CLIA requirements for performance assessment of molecular genetic testing. The recommendations should be considered by laboratories that perform molecular genetic testing to monitor and evaluate the ongoing quality of the testing they perform:
• Participate in available proficiency testing, at least twice per year, for each molecular genetic test the laboratory performs. Proficiency testing is available for a limited number of molecular genetic tests (e.g., fragile X syndrome, factor V Leiden thrombophilia, and cystic fibrosis) (Appendix C). Laboratories that perform molecular genetic testing should regularly review information on the development of additional proficiency testing programs and ensure participation as new programs become available.
• Test analyte-specific or disease-specific proficiency testing challenges with the laboratory's regular patient testing workload by personnel who routinely perform the tests in the laboratory (as required by CLIA for regulated analytes).
• Evaluate proficiency testing results reported by the proficiency testing program and take steps to investigate and correct disparate results (a minimal sketch of such an evaluation appears below). The corrective actions taken after disparate proficiency testing results should include re-evaluation of previous patient test results and, if necessary, of retained patient specimens that were previously tested.

Proficiency testing samples. When possible, proficiency testing samples should resemble patient specimens; at a minimum, samples resembling patient specimens should be used for proficiency testing for the most common genetic tests. When proficiency testing samples are provided in the form of purified DNA, participating laboratories do not perform all the analytic steps that occur during the patient testing process (e.g., nucleic acid extraction and preparation). Such practical limitations should be recognized when assessing proficiency testing performance. Laboratories are encouraged to enroll in proficiency testing programs that examine the entire testing process, including the preanalytic, analytic, and postanalytic phases.
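The evaluation of returned proficiency testing results referenced above reduces to a concordance check that flags disparate challenges for documented investigation and patient-result lookback. The sketch below assumes hypothetical challenge identifiers and genotype calls; it illustrates the workflow rather than any proficiency testing program's actual grading scheme.

```python
def grade_pt_event(reported, graded_answers):
    """Compare the laboratory's reported PT results with graded answers.

    Returns (score_percent, discordant_challenges). Any discordance should
    trigger a documented investigation and, if necessary, re-evaluation of
    retained specimens and previously reported patient results.
    """
    discordant = [
        challenge for challenge, answer in graded_answers.items()
        if reported.get(challenge) != answer
    ]
    score = 100 * (len(graded_answers) - len(discordant)) / len(graded_answers)
    return score, discordant

# Hypothetical PT event with five genotyping challenges.
reported = {"c1": "heterozygous", "c2": "normal", "c3": "normal",
            "c4": "homozygous", "c5": "heterozygous"}
graded = {"c1": "heterozygous", "c2": "normal", "c3": "heterozygous",
          "c4": "homozygous", "c5": "heterozygous"}

score, flags = grade_pt_event(reported, graded)
print(f"score: {score:.0f}%")
for challenge in flags:
    print(f"investigate {challenge}: corrective action and patient lookback")
```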
Alternative performance assessment. For molecular genetic tests for which no proficiency testing program is available, alternative performance assessments must be performed at least twice per year to meet the applicable requirements of CLIA and the requirements of certain states and accrediting organizations. The following recommendations should be considered when conducting alternative performance assessments:
• Although no data are available to determine whether alternative performance assessments are as effective as proficiency testing, professional guidelines (e.g., from CLSI and CAP) provide information on acceptable alternative performance assessment approaches (14,61). Laboratories that perform molecular genetic tests for which no proficiency testing program is available should adhere to these guidelines.
• Laboratories should ensure that alternative assessments reflect the test methods involved in performing the testing and that the number of samples in each assessment is adequate to verify the accuracy and reliability of test results.
• Ideally, alternative assessments should be performed through interlaboratory exchange (Appendix C) or using externally derived materials, because external quality assessments might detect errors or problems that would not be detected by an internal assessment.
• When interlaboratory exchange or obtaining external materials is not practical (e.g., testing for rare diseases, testing performed by only one laboratory, patented testing, or unstable analytes such as RNA or enzymes), laboratories may consider options such as repeat testing of blinded samples, blind testing of materials with known values, exchange with either a research facility or a laboratory in another country, splitting samples with another instrument or method, or interlaboratory data comparison (96).

Various resources for proficiency testing and external quality assessment (60,133,134) and for facilitating interlaboratory sample exchanges (135,136) are available to help laboratories consider approaches to meeting the proficiency testing and alternative performance assessment needs of their molecular genetic testing (Appendix C).

Laboratories are required by CLIA to include interpretation of test results on test reports (if applicable). However, results interpretation should be included in all test reports of molecular genetic testing for heritable diseases and conditions. Laboratories should provide information on interpretation of test results in a clinically relevant manner that is relative to the purpose for the testing and should explain how technical limitations might affect the clinical use of the test results. When appropriate and necessary, test results can be explained in reference to family members (e.g., mutations previously detected in a family member that were used for selection of the test method) to ensure appropriate interpretation of results and understanding of their implications by the persons receiving or using the test results.
# The Postanalytic Testing Phase

When applicable, molecular genetic test reports should also include the following elements:
• References to literature (if applicable)
• Recommendation for genetics consultation (when appropriate). A genetics consultation might encompass genetic services (including genetic counseling) provided by trained, qualified genetics professionals (e.g., genetic counselors, clinical geneticists, or other qualified professionals) for health-care providers, patients, or family members at risk for the condition.
• Implications of test results for relatives or family members who might benefit from the information (if applicable)
• Statement indicating that the test results and interpretation are based on current knowledge and technology

Updates and revisions. CLIA requires laboratories to provide pertinent updates on testing information to clients when changes occur that affect the test results or interpretation of test results (42 CFR §493.1291[e]). Because the field of molecular genetic testing is evolving rapidly, laboratories should consider the following:
• Keep an up-to-date database for the molecular genetic tests performed in the laboratory, and provide updates to users when knowledge advancement affects performance specifications, interpretation of test results, or both.
• Provide a revised test report if the interpretation of the original analytic result changes because of advances in knowledge or testing technology. Indications for providing revised test reports include the following:
-A better interpretation is available for a previously detected variant.
-Interpretation of previous test results has changed (e.g., a previously determined mutation is later recognized as a benign variant or polymorphism or vice versa).

Molecular genetic tests for germline mutations or variants or for other heritable conditions often are one-time tests, with results that can have lifetime implications for the patients and family members. Decisions regarding health-care management should be made with consideration of changes or improvements in the interpretation of genetic test results as testing technology and knowledge advance. However, practical limitations, such as the logistical difficulty of recontacting previous users of laboratory services, also should be considered. Laboratories that perform molecular genetic testing for heritable diseases and conditions should have procedures in place that adhere to accepted professional practice guidelines regarding the duty to recontact previous users and should make a good-faith effort to provide updates and revisions to previous test reports, when appropriate (137). When establishing these procedures, laboratories also might consider the retention time frame of their molecular genetic test reports.

Signatures. Review of molecular genetic test reports by trained, qualified personnel, before reports are released, is critical. The review should be appropriately documented with written or electronic signatures or by other methods. Laboratories should determine which persons should review and sign the test reports in accordance with personnel competency and responsibilities.

Format, style, media, and language. Laboratories should assess the needs of laboratory users when determining the format, style, media, and language of molecular genetic test reports. The language used, which includes terminology and nomenclature, should be understandable by nongeneticist health professionals and other specific users of the test results. This practice should be part of the laboratory quality management policies.
Test reports should include all necessary information, be easy to understand, and be structured in a way that encourages users to read the entire report rather than just a positive or negative indication. Following the format recommended in accepted practice guidelines should help ensure that the reports are structured effectively (12-14,49,93,94,100).

# Retention of Reports, Records, and Tested Specimens

Reports. CLIA requires laboratories to retain or have the ability to retrieve a copy of an original test report (including final, preliminary, and corrected reports) for at least 2 years after the date of reporting and to retain pathology test reports for at least 10 years after the date of reporting (42 CFR §493.1105). A longer retention time frame than required by CLIA is warranted for reports of molecular genetic tests for heritable diseases and conditions. These test reports should be retained for at least 25 years after the date the results are reported. Retaining molecular genetic test reports for a longer time frame is recommended because the results can have long-term, often lifetime, implications for patients and their families, and future generations might need the information to make health-related decisions. In addition, advances in testing technology and increased knowledge of disease processes could change the interpretation of the original test results, enable improved interpretation of test results, or permit future retesting with greater sensitivity and accuracy. Laboratories need the ability to retrieve previous test reports, which are valuable resources for conducting quality assessment activities, helping patients and family members make health decisions, and managing the health care of the patient and family members.

As laboratories that perform molecular genetic testing for heritable diseases and conditions review and update policies and procedures for report retention, they should consider the financial ramifications of the policies, as well as technology and space concerns. Laboratories may consider retaining test reports electronically, on microfilm, or by other methods but must ensure that all of the information on the original reports is retained and that copies (whether electronic or hard copies) of the original reports can be retrieved. The laboratory policies and procedures for test report retention must comply with applicable state laws and other requirements (e.g., of accrediting organizations if the laboratory is accredited) and should follow practice guidelines developed by recognized professional or standard-setting organizations. If state regulations require retention of genetic test reports for >25 years after the date of results reporting, laboratories must comply. Laboratories also might decide that retaining reports for >25 years is necessary for molecular genetic test reports for heritable diseases and conditions to accommodate patient testing needs and ongoing quality assessment activities.

Records. CLIA requires laboratories to retain records of patient testing, including test requests and authorizations, test procedures, analytic systems records, records of test system performance specifications, proficiency testing records, and quality system assessment records, for a minimum of 2 years (42 CFR §493.1105); these requirements apply to molecular genetic testing. Retention policies and procedures must also comply with applicable state laws and other requirements (e.g., of accrediting organizations if the laboratory is accredited). Laboratories should ensure that electronic records are accessible.
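When reviewing retention policies, the minimum time frames discussed above can be encoded so that disposal eligibility is computed consistently, with any longer state or accreditation minimum taking precedence. The sketch below uses the periods stated in this report (2 years for general test reports, 10 years for pathology reports, and a recommended 25 years for molecular genetic test reports); the function and policy names are illustrative.

```python
from datetime import date

# Minimum retention periods in years, per the time frames discussed above.
RETENTION_YEARS = {
    "general_report": 2,            # CLIA minimum for test reports
    "pathology_report": 10,         # CLIA minimum for pathology reports
    "molecular_genetic_report": 25, # recommended minimum in this report
}

def earliest_disposal_date(report_type, reported_on, state_minimum_years=0):
    """Return the earliest date a report could be discarded.

    The longer of the recommended period and any state/accreditation
    minimum applies; many laboratories retain reports even longer.
    """
    years = max(RETENTION_YEARS[report_type], state_minimum_years)
    try:
        return reported_on.replace(year=reported_on.year + years)
    except ValueError:  # reported on Feb 29; fall back to Feb 28
        return reported_on.replace(year=reported_on.year + years, day=28)

print(earliest_disposal_date("molecular_genetic_report", date(2009, 6, 12)))
```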
Tested specimens. CLIA requires laboratories to establish and follow written policies and procedures that ensure positive identification and optimum integrity of patient specimens from the time of collection or receipt in the laboratory through completion of testing and reporting of test results (42 CFR §493.1232). Depending on sample stability, technology, space, and cost, tested specimens for molecular genetic tests for heritable conditions should be retained as long as possible after the completion of testing and reporting of results. At a minimum, tested patient specimens that are stable should be retained until the next proficiency testing or the next alternative performance assessment to allow for identification of problems in patient testing and for corrective action to be taken. Tested specimens also might be needed for testing of current or future family members and for more definitive diagnosis as technology and knowledge evolve. A laboratory specimen retention policy should consider the following factors:
• Type of specimens retained (e.g., whole blood or DNA samples)
• Analytes tested (e.g., DNA, RNA, or both)
• Test results or the genotypes detected. (If only abnormal specimens are retained, identifying false-negative results at a later date will be difficult. This practice also might introduce bias if a preponderance of samples with abnormal test results is used to verify or establish performance specifications for future testing.)
• Test volume
• New technologies that might not produce residual specimens

The laboratory director is responsible for ensuring that the laboratory policies and procedures for specimen retention comply with applicable federal, state, and local requirements (including laboratory accreditation requirements, if applicable) and are consistent with the laboratory quality assurance and quality assessment activities. In circumstances in which required patient consent is not provided with the test request, the laboratory should 1) notify the test requestor and 2) determine the time frame after which the test request might be rejected and the specimen discarded because of specimen degradation or deterioration. Laboratory specimen retention procedures should be consistent with patient decisions.

# Laboratory Responsibilities Regarding Authorized Persons

CLIA regulations define an authorized person as a person authorized by state laws or regulations to order tests, receive test results, or both. Laboratories must have a written or an electronic test request from an authorized person (42 CFR §493.1241[a]). Laboratories may only release test results to authorized persons, the person responsible for using the test results (if applicable), and the laboratory that initially requested the test (42 CFR §493.1291[f]). Laboratories that perform molecular genetic testing must ensure compliance with these requirements in their policies and procedures for receiving test requests and reporting test results and should ensure that qualified laboratory personnel with appropriate experience and expertise are available to assist authorized persons with test requests and interpretation of test results. Laboratories must comply with applicable federal, state, and local requirements regarding whether genetic tests may be offered directly to consumers and should use accepted professional guidelines for additional information.
The following recommendations will help laboratories meet CLIA requirements (42 CFR §§493.1241[a] and 1291[f]), particularly those related to genetic testing offered directly to consumers:
• The laboratory that initially accepts a test request (regardless of whether the laboratory performs the testing on-site or refers the patient specimens to another laboratory) is responsible for verifying that the test requestor is authorized by state laws and regulations to do so. Laboratories that receive patient specimens from multiple states or have specimen collection sites in multiple states should keep an updated copy of the requirements of each state regarding authorized persons and review test requests accordingly.
• Although referral laboratories might be unable to verify that the person submitting the original test request qualifies as an authorized person, the test results may only be released to persons authorized by state laws and regulations to receive the results, the persons responsible for using the test results, and the referring laboratory.

# Ensuring Confidentiality of Patient Information

CLIA requires laboratories to ensure confidentiality of patient information throughout all phases of the testing process that are under laboratory control (42 CFR §493.1231). Laboratories should follow more specific requirements and comply with additional guidelines (e.g., the Health Insurance Portability and Accountability Act of 1996 [HIPAA] Privacy Rule, state requirements, accreditation standards, and professional guidelines) to establish procedures and protocols to protect the confidentiality of patient information, including information related to genetic testing. Laboratories that perform molecular genetic testing should establish and follow procedures and protocols that include defined responsibilities of all employees to ensure appropriate access, documentation, storage, release, and transfer of confidential information and prohibit unauthorized or unnecessary access or disclosure.

# Information Regarding Family Members

In certain circumstances, information about family members is needed for test performance or should be included in test reports to ensure appropriate interpretation of test results. Therefore, laboratories must have procedures and systems in place to ensure confidentiality of all patient information, including that of family members, in all testing procedures and reports, in compliance with CLIA requirements and other applicable federal, state, and local regulations.

# Requests for Test Results to Assist with Providing Health Care for a Family Member

When a health-care provider requests the genetic test information of a patient to assist with providing care for a family member of the patient, the following practices are recommended:
• Requests should be handled following established laboratory procedures regarding release and transfer of confidential patient information.
• Laboratories may release patient test information only to the authorized person ordering the test, the persons responsible for using the test results (e.g., health-care providers of the patient designated by the authorized person to receive test results), and the laboratory that initially requested the test. If a health-care provider who provides care for a family member of the patient is authorized to request patient test information, the laboratory should request the patient's authorization before releasing the patient's genetic test results.
• When patient consent is required for testing, the consent form should include the laboratory confidentiality policies and procedures and describe situations in which test results might be requested by health-care providers caring for family members of the patient.
• Laboratory directors should be responsible for determining and approving circumstances in which access to confidential patient information is appropriate, as well as when, how, and to whom information is to be released, in compliance with federal, state, and local requirements.

The HIPAA Privacy Rule and CLIA regulations are federal regulations intended to provide minimum standards for ensuring confidentiality of patient information; states or localities might have higher standards. Although the HIPAA Privacy Rule allows health-care providers that are covered entities (i.e., health-care providers that conduct certain transactions in electronic form, health-care clearinghouses, and health plans) to use or disclose protected health information for treatment purposes without patient authorization and to share protected health information to consult with other providers to treat a different patient or to refer a patient, the regulation indicates that states or institutions may implement stricter standards to protect the privacy of patients and the confidentiality of patient information (138). Laboratories that perform molecular genetic testing must comply with applicable requirements and follow professional practice guidelines in establishing policies and procedures to ensure confidentiality of patient information, including molecular genetic testing information and test results.

# Personnel Qualifications, Responsibilities, and Competency Assessments

# Laboratory Director Qualifications and Responsibilities

Qualifications. CLIA requires directors of laboratories that perform high-complexity testing to meet at least one of the following sets of qualifications (42 CFR §493.1443):
• Be a doctor of medicine or a doctor of osteopathy and have board certification in anatomic or clinical pathology or both
• Be a doctor of medicine, doctor of osteopathy, or doctor of podiatric medicine and have at least 1 year of laboratory training during residency or at least 2 years of experience directing or supervising high-complexity testing
• Have an earned doctoral degree in a chemical, physical, biological, or clinical laboratory science from an accredited institution and current certification by a board approved by HHS

Directors of laboratories that perform molecular genetic testing for heritable diseases and conditions must meet these qualification requirements. Because CLIA requirements are minimum qualifications, laboratories that perform molecular genetic testing for heritable diseases and conditions should evaluate the tests they perform to determine whether additional knowledge, training, or expertise is necessary for fulfilling the responsibilities of laboratory director.

Responsibilities. CLIA requires directors of laboratories that perform high-complexity testing to be responsible for the overall operation and administration of the laboratory (42 CFR §493.1445).

# Technical Supervisor Qualifications and Responsibilities

Qualifications. Laboratories that perform molecular genetic testing for heritable diseases and conditions should consider the training, experience, and expertise needed to provide technical supervision for laboratories that perform these tests.
Certain laboratories that perform molecular genetic testing for heritable diseases and conditions might have technical supervisors who meet the applicable CLIA qualification requirements for the high-complexity testing their laboratories perform but do not meet the recommended qualifications in this section. These recommended qualifications are not regulatory requirements and are not intended to restrict access to certain molecular genetic tests; rather, they should be considered part of recommended laboratory practices for ensuring the quality of molecular genetic testing for heritable diseases and conditions. However, because CLIA qualification requirements are intended to be minimum standards, laboratories should assess the tests they perform to determine whether additional qualifications are needed for their technical supervisors to ensure quality throughout the testing process. These recommended qualifications should apply to all high-complexity molecular genetic tests for heritable diseases and conditions.

Responsibilities. CLIA requires technical supervisors of laboratories that perform high-complexity testing to be responsible for the technical and scientific oversight of the laboratories (42 CFR §493.1451).

# Testing Personnel

Responsibilities. CLIA requires persons who perform high-complexity testing to follow laboratory procedures and protocols for test performance, quality control, results reporting, documentation, and problem identification and correction (42 CFR §493.1495). Personnel who perform molecular genetic testing for heritable diseases and conditions must meet these requirements.

# Personnel Competency Assessment

CLIA requires laboratories to establish and follow written policies and procedures to assess employee competency and, if applicable, consultant competency (42 CFR §493.1235). CLIA requirements for laboratory director responsibilities (42 CFR §493.1445[e][13]) specify that laboratory directors must ensure that policies and procedures are established for monitoring and ensuring the competency of testing personnel and for identifying needs for remedial training or continuing education to improve skills. Technical supervisors are responsible for implementing the personnel competency assessment policies and procedures, including evaluating and ensuring competency of testing personnel (42 CFR §493.1451[b][8]). Laboratories that perform molecular genetic testing for heritable diseases and conditions must meet these general personnel competency assessment requirements. Laboratories also should follow the applicable CMS guidelines to establish and implement policies and procedures specific for assessing and ensuring the competency of all types of laboratory personnel, including technical supervisors, clinical consultants, general supervisors, and testing personnel, in performing duties and responsibilities (96). For example, the performance of testing personnel must be evaluated and documented at least semiannually during the first year a person tests patient specimens. Thereafter, evaluations must be performed at least annually; however, if test methodology or instrumentation changes, performance must be re-evaluated to include the use of the new test methodology or instrumentation before testing personnel can report patient test results. Personnel competency assessments should identify training needs and ensure that persons responsible for performance of molecular genetic testing receive regular in-service training and education appropriate for the services performed.
# Considerations Before Introducing Molecular Genetic Testing or Offering New Molecular Genetic Tests

Recommendations described in this report should be considered, in addition to appropriate professional guidelines and recommendations, when planning and preparing for the introduction of molecular genetic testing or offering new molecular genetic tests. The following scenarios should be considered during the planning stage:
• Introducing a new molecular genetic test that has not been offered in any laboratory
• Introducing a genetic test that previously has been referred to another laboratory but will be performed in-house
• Introducing an additional genetic test that can complement a molecular genetic test that has been performed for patient testing

These scenarios present different planning concerns, including needs and requirements for training and competency of laboratory personnel, laboratory facilities and equipment, selection of test methods, development of procedure manuals, establishment or verification of performance specifications, and personnel responsibilities. In addition, the following factors should be assessed:
• Needs and demands for the new test, which can be assessed by consulting with ordering physicians and other potential users of laboratory services and by conducting other market analyses
• Intellectual property or licensing concerns that might result in restricted use, increased costs, or both for certain genetic tests

# Quality Management System Approach for Molecular Genetic Testing

The quality management system (QMS) approach provides a framework for managing and monitoring activities to address quality standards and achieve organizational goals, with a focus on user needs (41,109). QMS has been the basis for many international quality standards, such as the International Organization for Standardization (ISO) standards ISO 15189, ISO 17025, and ISO 9001 (91,139,140). These international QMS standards overlap with certain CLIA requirements but are distinct from CLIA regulations. Because QMS is not yet a widely adopted approach in the United States, laboratories that perform molecular genetic testing might not be familiar with QMS implementation in current practice. The QMS approach has been described in several CLSI guidelines (41,109). New York State CLEP and CAP have included QMS concepts in their general laboratory standards (15,102), and CAP and the American Association for Laboratory Accreditation have begun to provide laboratory accreditation to ISO 15189 (141,142). Laboratories that perform molecular genetic testing should monitor QMS development, because implementing the QMS approach could help laboratories accept international test referrals and improve quality management of testing.

# Conclusion

The recommendations in this report are intended to serve as guidelines for considering and implementing good laboratory practices to 1) improve quality and health-care outcomes related to molecular genetic testing for heritable diseases and conditions and 2) enhance oversight and quality assurance practices for molecular genetic testing under the CLIA regulatory framework. The report can be adapted for use in different settings where molecular genetic testing is conducted or evaluated. Continual monitoring of the practice and test performance of molecular genetic tests is needed to evaluate the effectiveness of these recommendations and to develop additional guidelines for good laboratory practices for genetic testing, which will ultimately improve public health.
# Massachusetts †

• A statement of the purpose of the test
• A statement that before signing the consent form, the consenting person discussed with the medical practitioner ordering the test the reliability of positive or negative test results and the level of certainty that a positive test result for that disease or condition serves as a predictor of such disease
• A statement that the consenting person was informed about the availability and importance of genetic counseling and provided with written information identifying a genetic counselor or medical geneticist from whom the consenting person might obtain such counseling
• A general description of each specific disease or condition tested for
• The persons to whom the test results may be disclosed

# Michigan §

• The nature and purpose of the presymptomatic or predictive genetic test
• The effectiveness and limitations of the presymptomatic or predictive genetic test
• The implications of taking the presymptomatic or predictive genetic test, including, but not limited to, the medical risks and benefits
• The future uses of the sample taken from the test participant to conduct the presymptomatic or predictive genetic test and the information obtained from the presymptomatic or predictive genetic test
• The meaning of the presymptomatic or predictive genetic test results and the procedure for providing notice of the results to the test participant
• Who will have access to the sample taken from the test participant to conduct the presymptomatic or predictive genetic test and the information obtained from the presymptomatic or predictive genetic test, and the test participant's right to confidential treatment of the sample and the information

# Nebraska ¶

• The nature and purpose of the presymptomatic or predictive genetic test
• The effectiveness and limitations of the presymptomatic or predictive genetic test
• The implications of taking the presymptomatic or predictive genetic test, including the medical risks and benefits
• The future uses of the sample taken to conduct the presymptomatic or predictive genetic test and the genetic information obtained from the presymptomatic or predictive genetic test
• The meaning of the presymptomatic or predictive genetic test results and the procedure for providing notice of the results to the patient
• Who will have access to the sample taken to conduct the presymptomatic or predictive genetic test and the genetic information obtained from the presymptomatic or predictive genetic test and the patient's right to confidential treatment of the sample and the genetic information

# New York**

• A statement that a positive test result is an indication that the person might be predisposed to or have the specific disease or condition being tested for and might consider additional independent testing, consult a personal physician, or pursue genetic counseling
• A general description of each specific disease or condition being tested for
• The level of certainty that a positive test result for the disease or condition serves as a predictor of such disease. (If no level of certainty has been established, this may be disregarded.)
• The name of the person or categories of persons or organizations to whom the test results may be disclosed
• A statement that no tests other than those authorized will be performed on the biological sample and that the sample will be destroyed at the end of the testing process or not more than 60 days after the sample was taken, unless a longer period of retention is expressly authorized in the consent

# South Dakota ††

• The nature and purpose of the test
• The effectiveness and limitations of the test
• The implications of taking the test, including the medical risks and benefits
• The future uses of the sample taken from the person tested to conduct the test and the information obtained from the test
• The meaning of the test results and the procedure for providing notice of the results to the person tested
• A list of who will have access to the sample taken from the person tested and the information obtained from the test and the person's right to confidential treatment of the sample and the information

# Appendix A Terms and Abbreviations Used in This Report

ABMG: American Board of Medical Genetics
ABN: Advance beneficiary notice
Accuracy: Closeness of the agreement between the result of a measurement and a true value of the measurand
ACMG: American College of Medical Genetics
Allele: One version of a gene at a given location (locus) along a chromosome
AMP: Association for Molecular Pathology
Amplicon: Piece of nucleic acid formed as the product of molecular amplification
Amplification: In vitro enzymatic replication of a target nucleic acid (e.g., polymerase chain reaction [PCR])
ASR: Analyte-specific reagent
Bidirectional sequencing: A method used to determine the positions of a selected nucleotide base in a target region on both strands of a denatured duplex nucleic acid polymer
Family history: The genetic relationships and medical history of a family; also referred to as a pedigree when represented in diagram form using standardized symbols and terminology
FDA: Food and Drug Administration
Founder effect: The presence of a gene mutation in high frequency in a specific population that arises because the gene mutation was present in a single ancestor or small number of ancestors in the founding population
Genetics: The study of inheritance patterns of specific traits
Genome: The complete genetic content of an organism
Genotype: The genetic constitution of an organism or cell; also refers to the specific set of alleles inherited at a locus
Germline mutation: The presence of an altered gene within the egg or sperm (germ cell), such that the altered gene can be passed to subsequent generations
Informed consent process: For molecular genetic testing, the process by which a person voluntarily confirms the willingness to participate in a particular test, after having been informed of all aspects of the test that are relevant to the decision to participate
LOD: Lower limit of detection
Modifiers: Genetic or environmental factors that might affect the expressivity (the variability of signs or symptoms that occur with a phenotype) of a genetic alteration
Mutation: An alteration in a gene, which might cause a disease, be a benign alteration, or result in a normal variant
Newborn screening: Testing conducted within days of birth to identify infants at increased risk for specific genetic disorders, allowing education and counseling for parents and treatment for patients to be initiated as soon as possible
NTC: No-template control
Pedigree: A diagram using standard symbols and terminology to
indicate the genetic relationships and medical history of a family
Penetrance: The proportion of persons with a mutation causing a particular disorder who exhibit clinical symptoms of the disorder
Personalized medicine: Approach to medicine involving use of genomic and molecular data to better target health care, facilitate discovery and clinical testing of new products, and determine patient risk for a particular disease or condition
Phenotype: The observable physical and biochemical traits resulting from the expression of a gene; the clinical presentation of a person with a particular genotype
Polymerase chain reaction (PCR): A DNA amplification procedure that produces millions of copies of a short segment of DNA through repeated cycles of 1) denaturation, 2) annealing, and 3) elongation; a very common procedure in molecular genetic testing used to generate a sufficient quantity of DNA to perform a test (e.g., sequence analysis or mutation scanning) or as a test itself (e.g., allele-specific amplification or trinucleotide repeat quantification)
Positive predictive value: The likelihood that a person with a positive test result actually has a particular gene, is affected by the gene, or will develop the disease
Precision: Closeness of agreement between independent test results obtained under stipulated conditions
Private mutation: A rare, disease-causing mutation occurring in a few families
Proficiency testing: An external quality assessment program in which samples are periodically sent to testing sites for analysis
Quality assessment: A group of activities to monitor and evaluate the entire testing process; used to help ensure that test results are reliable, improve the testing process, and promote good quality testing practices
Quality control: Measures taken to detect, reduce, and correct deficiencies in a laboratory's internal analytical process prior to the release of patient results and to improve the quality of the results reported by the laboratory
Reagent: A substance that produces a chemical or biological reaction with a patient specimen, allowing detection or measurement of the analyte for which the test is designed
Reference interval: Interval between and including the lower reference limit through the upper reference limit of the reference population (e.g., 95% of persons presumed to be healthy [or normal])
Reportable range: The span of test values over which the measurement response of the instrument, kit, or test system is shown to be valid
RNA: Ribonucleic acid
SACGHS: Secretary's Advisory Committee on Genetics, Health, and Society
Sequencing: A procedure used to determine the order of nucleotides (base sequence) in a DNA or RNA molecule or the order of amino acids in a protein
Targeted mutation analysis: Testing for one or more specific mutations
Total testing process: Series of activities or workflow for performing testing; includes three major phases: preanalytic, analytic, and postanalytic
Unidirectional workflow: The manner in which testing personnel and patient specimens move through the molecular amplification testing process to prevent cross-contamination

# Interlaboratory Exchange

The CAP registry service is an Internet-based service that facilitates contact among genetic testing laboratories that perform less frequently performed genetic tests. Laboratories enroll online; when CAP identifies three laboratories that are testing for the same genetic disorder, CAP facilitates communication for making exchange arrangements.
The CAP/ACMG Biochemical and Molecular Genetics Committee reviews the results and procedures and makes comments in the Molecular Genetics Survey's Participant Summary Report regarding the overall performance.

# Association for Molecular Pathology (AMP)** Interlaboratory Exchange

AMP facilitates sample exchanges between laboratories through the AMP listserv (CHAMP). Laboratories seeking others to evaluate performance on specific analytes contact one another via the listserv. Laboratories are responsible for establishing testing parameters and facilitating exchange of specimens and test results.

# Goal and Objectives

The goal of this report is to improve the quality and usefulness of laboratory services for genetic testing to achieve better health outcomes for the public. Upon completion of this educational activity, the reader should be able to 1) describe the recommended good laboratory practices for each of the three phases of the molecular genetic testing process; 2) describe qualifications, responsibilities, and competency of laboratory personnel; 3) describe planning for introducing molecular genetic testing; and 4) describe how to ensure the confidentiality of patient information in molecular genetic testing for heritable diseases and conditions. To receive continuing education credit, please answer all of the following questions.

1. Under the Clinical Laboratory Improvement Amendments (CLIA) regulations, laboratories performing molecular genetic testing that they have developed are subject to… (Indicate all that apply.)
A. the general quality systems requirements for nonwaived testing.
B. personnel requirements for high-complexity testing.
C. specialty requirements for molecular genetic testing.

For molecular amplification procedures, which of the following is considered an effective mechanism for monitoring and detecting cross-contamination of patient specimens?
A. Inclusion of a positive control that represents the genotype to be detected with each run of patient specimens.
B. Inclusion of a normal sample as the negative control with each run of patient specimens.
C. Inclusion of a no-template control sample that contains all components of the amplification reaction except nucleic acid templates with each run of patient specimens.
D. Inclusion of a spiked-in internal control in each amplification sample.

6. Test reports of molecular genetic testing for heritable diseases or conditions should be retained for the longest possible time frame for all the following reasons except…
A. Test results have long-term implications for the patients.
B. Test results have implications for patients' families and future generations.
C. Advances in knowledge and understanding of disease processes might lead to improved interpretation of test results.
D. Laboratories need to access previous test reports to conduct quality assessment activities.
E. Laboratories must protect the confidentiality of patient information.

A molecular genetic test report should…
A. be understood by geneticists only.
B. be understandable by nongeneticist health professionals and other authorized users of the test results.
C. always be written in English.
D. indicate "test result is negative" if no mutation is detected so that the test result can be easily understood.

The director of a laboratory performing molecular genetic testing should… (Indicate all that apply.)
A. ensure effective policies and procedures are in place for monitoring and maintaining the competency of the laboratory personnel.
B.
be able to perform a molecular genetic test better than anyone else in the laboratory.
C. ensure available information needed to interpret test results for a patient is documented for each molecular genetic test the laboratory performs.

12. When a laboratory uses a purified DNA sample extracted from a cell line containing a rare mutation as a positive control in patient testing, which of the following is considered appropriate for monitoring the DNA extraction step of the testing process? (Indicate all that apply.)
A. Testing patient samples for a housekeeping gene to determine specimen quality and integrity each time patient testing is performed.
B. Testing patient samples for a spiked-in control sequence to assess the presence of inhibitors each time patient testing is performed.
C. Testing patient samples for a housekeeping gene to determine specimen quality and integrity once each day patient testing is performed.
D. Testing patient samples for a spiked-in control sequence to assess the presence of inhibitors once each day patient testing is performed.

Which best describes your professional…
In January 1991, epidemic cholera appeared in Peru and quickly spread to many other Latin American countries. Because reporting of cholera cases was often delayed in some areas, the scope of the epidemic was unclear. An assessment of the conduct of surveillance for cholera in several countries identified some recurrent problems involving surveillance case definitions, laboratory surveillance, surveillance methods, national coordination, and data management. A key conclusion is that a simple, well-communicated cholera surveillance system in place during an epidemic will facilitate prevention and treatment efforts. We recommend the following measures: a) simplify case definitions for cholera; b) focus on laboratory surveillance of patients with diarrhea primarily in the initial stage of the epidemic; c) use predominantly the "suspect" case definition when the number of "confirmed" cases rises; d) transmit weekly the numbers of cases, hospitalized patients, and deaths to regional and central levels; e) analyze data frequently and distribute a weekly or biweekly summary; and f) report the number of cholera cases promptly to the World Health Organization.

# INTRODUCTION

# Background

Cholera is a highly preventable and treatable disease. Chlorinating water supplies and implementing other emergency measures can prevent transmission, and providing ready access to oral and intravenous rehydration therapy can dramatically lower death rates. Prevention and treatment efforts can function optimally when there is cooperation between regional and central public health offices, as well as at national and international levels. The movement of the cholera epidemic, the need for supplies, and the effectiveness of control measures are better assessed with a clear and representative picture of the epidemic. A simple, widely accepted, well-described surveillance system is the best means of obtaining that epidemic picture.

After January 1991, when epidemic cholera first appeared in Peru, the disease quickly spread to many other Latin American countries (1,2). Because this epidemic was unexpected, some countries had little time to prepare for it. However, many countries had already drafted and implemented preparedness plans for the control of cholera. An ideal plan for cholera control has several essential components, including health education, environmental sanitation, clinical management, laboratory diagnosis, epidemiologic investigation, and surveillance (3). During a cholera epidemic, surveillance is essential to estimate the incidence and the fatality rate, to assess the movement of the epidemic, to plan the distribution of supplies for treatment and prevention, to plan timely epidemiologic investigations, and to determine the effectiveness of control measures. During investigations of the cholera epidemic in many countries, epidemiologists from CDC have identified several recurrent problems with cholera surveillance, including difficulties with collection, transmission, and analysis of data. These problems often have caused delays and obscured the scope of the epidemic regionally, nationally, and internationally. In addition, some national surveillance systems for cholera use elaborate, complex case definitions that hinder smooth, rapid reporting of cases.
This report outlines selected problems that characterize cholera surveillance systems in some Latin American countries and includes recommendations to facilitate cholera surveillance both nationally and internationally.

# SURVEILLANCE ISSUES AND RECOMMENDATIONS

# Case Definitions

One common problem with cholera surveillance is the case definition. A case definition is a set of objective criteria (symptoms, signs, and laboratory data) that lead to a reliable, reproducible report of the disease. In defining cases, many countries use two main categories of cases of cholera: "suspect" and "confirmed." Often, within each main category, multiple case definitions are used. One country, for example, uses three definitions for a "suspect" case of cholera: a) profuse diarrhea, with severe dehydration, affecting a person ≥5 years of age; b) acute diarrhea in an area with confirmed cholera; and c) acute diarrhea affecting a person who traveled through an infected area within 5 days before onset of illness. In another country, only one definition is used for a "suspect" cholera case: acute diarrhea affecting a person ≥5 years of age; however, this country also uses the additional category of "probable" cholera case, defined as dehydrating diarrhea, vomiting, cramps, and malaise affecting a person epidemiologically associated with other cholera cases. Multiple definitions such as these increase the difficulty of reporting and are likely to confuse the analysis of surveillance data.

Most countries use 5 years as the lower age limit for cholera surveillance. Although the World Health Organization (WHO) has previously recommended 10 years as the lower age limit for initial identification of cholera (3), changes to lower this age are in progress (Dr. J. Tulloch, Director, Division of Diarrhoeal and Acute Respiratory Disease Control, WHO, personal communication). Five years is a useful lower age limit since it corresponds with the transition from preschool to school age; school children may be more likely to be exposed to some communicable diseases, including cholera, than are younger children. For most countries, much of the morbidity from all causes of diarrhea occurs in the <5-year age group (4). Using 5 years as the lower age limit in a cholera case definition excludes more of the cases of diarrhea that are not cholera and, in effect, renders the definition more specific.

The other main category, the confirmed case, is usually defined as Vibrio cholerae O1 infection verified by laboratory methods. The most commonly used method is stool culture, with confirmation that the isolate is V. cholerae O1. Where infection is extremely rare, it can be helpful to demonstrate that the isolate produces cholera toxin, because some nontoxigenic O1 strains of V. cholerae have been documented (5). Serologic diagnosis, based on measurement of acute- and convalescent-phase titers of vibriocidal or antitoxin antibodies, is available, although rarely used.

Simply counting laboratory isolates as cases may obscure the true picture of the epidemic. In one country, for instance, routine culturing of specimens from family members and close contacts of patients with previously confirmed cholera identified some persons with asymptomatic infections. These asymptomatic persons were then reported as having confirmed cholera cases.
In fact, asymptomatic cholera infections are numerous in epidemics, but cannot be identified as cases by clinical signs and symptoms alone. Reporting persons without diarrhea as confirmed cholera case-patients can distort surveillance data.

The major difficulty with simple case definitions for cholera lies in the broad spectrum of illness associated with this infection. Over 70% of infected persons are asymptomatic, and an additional 15%-23% of infected persons have mild or moderate nonbloody diarrhea similar to diarrhea from other causes. Some persons who meet a "suspect" case definition may not have cholera, although they are likely to represent only a small proportion of the reported cases in the epidemic setting. A case definition based solely on an adult cholera patient's having dehydrating diarrhea (approximately 2%-5% of those infected) will be more specific than a case definition based on patients with any type of diarrhea, but will also miss many infected persons. In the context of public health action, an accurate report of the number of symptomatic infections is a more useful measure. No single case definition is perfect; a balance is needed between sensitivity and specificity to provide a representative picture of the epidemic in any given area.

# Recommendations

For surveillance in a cholera epidemic, a case definition should be brief and simple to facilitate uniform and rapid reporting of cases. To simplify case reporting, cholera case definitions should be limited to two categories, the "confirmed" case and the "suspect" case. A confirmed cholera case is laboratory-confirmed V. cholerae O1 infection of any person who has diarrhea. In the epidemic setting, we suggest that a suspect case of cholera be defined as acute, watery diarrhea affecting a person ≥5 years of age.

# Laboratory Confirmation and Environmental Surveillance

The laboratory is a central component of cholera surveillance. It is essential for confirming that V. cholerae O1 has arrived in an area and is infecting humans, for monitoring its continued presence or documenting its disappearance, for determining its antimicrobial susceptibilities, and for identifying its presence in the environment. Preliminary isolation and confirmation of V. cholerae O1 require trained personnel using thiosulfate-citrate-bile salts-sucrose (TCBS) agar and polyvalent antisera (6). In the current cholera epidemic in Latin America, most countries were able to staff some laboratories with trained personnel and minimal supplies shortly after the first few cholera cases had been confirmed. However, as epidemic cholera advanced, many laboratories were quickly overwhelmed with demands for confirmation of numerous suspect cholera patients. Reporting of laboratory results slowed because of this increased amount of work.

# Recommendations

In regional laboratories, trained personnel are needed to confirm V. cholerae O1 infection using TCBS agar and polyvalent antisera. At the central laboratory, trained personnel are also needed to confirm field isolates. Initially, for an area threatened with cholera, a sample of persons with suspect cases should have specimens taken for culture.
After a sufficient number of suspect cases have been confirmed to indicate that cholera is epidemic in that area (e.g., 10-20), the local or regional laboratory may then reduce the frequency of performing cholera stool cultures from that area (e.g., 10 specimens/month) to confirm the continuing presence of V. cholerae O1 and to monitor its antibiotic susceptibility. Every laboratory that identifies V. cholerae O1 should provide weekly reports of the total number of patient isolates to designated regional and central offices.

In locations where cholera has not been confirmed, especially those bordering areas with cholera, Moore swabs (7) can be placed in the sewage effluents of a limited number of sentinel towns and cities every 1-3 weeks. If V. cholerae O1 is isolated from sewage or from a person in a given area, the presence of the organism in the area has been established, and the surveillance with Moore swabs can be discontinued. Thereafter, laboratory-based surveillance in that area should focus on patients with diarrhea. When V. cholerae O1 is identified in a country for the first time, the isolate should be referred to an international reference laboratory for confirmation and further characterization (e.g., using molecular biological techniques), which may be helpful in determining its origin.

# Stages of Surveillance

When epidemic cholera appears in a region, two stages of surveillance are observed: an early stage, when cultures are obtained from many patients with diarrhea to diagnose cholera, and a later stage, when cholera is firmly established in the region and larger numbers of people are ill. In the early stage, the number of persons with confirmed cases may be small and may represent a minor proportion of the persons with suspect cases. Most countries report only culture-confirmed cases at this stage. However, as the cholera epidemic grows, more cases are confirmed, and the number of patients with suspect cases more accurately reflects the cholera situation in that area. Many countries, nonetheless, continue to report only culture-confirmed cases in this later stage of cholera surveillance, because of concerns about public response and adverse economic consequences if the larger number of suspect cases is reported.

# Recommendations

In a cholera-threatened area, available diarrhea surveillance data can be reviewed to detect trends suggesting early cholera outbreaks. Any report of acute dehydrating diarrhea affecting a person ≥5 years of age should immediately alert local public health workers to investigate for possible cholera. Early in a cholera epidemic when small numbers of cases are being confirmed, a region may report only culture-confirmed cases. However, when the number of confirmed cases becomes sufficiently high, surveillance should shift to using the "suspect" case definition because it allows simpler, more timely, and more accurate reporting, and because it avoids overburdening laboratory resources. For a region in this later stage of cholera surveillance, it may be appropriate to limit culturing to a sample of the suspect cases, which should continue to be reported. The decision as to when the number of confirmed cases becomes sufficiently high to change from reporting only confirmed cases to reporting cases in both "suspect" and "confirmed" categories should be made promptly on an individual basis after all implications have been assessed.
Since most cholera outbreaks are large and often well established when confirmed, early shifting to reporting suspect cases may be more appropriate.

When the cholera epidemic wanes and the number of infections decreases to the level of the early stage of the epidemic, some countries may revert to reporting only confirmed cases. However, in many areas, cholera appears seasonally, with numbers of cases increasing in warm months and decreasing in cold months. Therefore, it may be useful to report suspect cases for at least a year after the epidemic wanes, until it is clearly shown that cholera is controlled in an area. At that time, it is reasonable to return to reporting only confirmed cases for that area.

# Information from Patients with Cholera

Some countries have administered lengthy, detailed questionnaires to every patient with cholera. In one country, the information about the patient on the questionnaire included demographic data, clinical signs and symptoms, laboratory results, characteristics of the feces and vomitus, travel history, food history, contact history, and additional comments by the person filling out the questionnaire. Although detailed information may be helpful in describing a sample of cholera cases clinically, it is of epidemiologic interest only in the earliest phase of an epidemic. Thereafter, as the number of cholera cases increases, the forms may be incomplete or ignored, and much of the data will be unmanageable and unanalyzed unless data-handling resources are diverted from other worthwhile programs. Exposure information collected from patients alone will not determine the modes of transmission, because cholera may be transmitted by common foods or beverages or by multiple sources that vary from place to place. Well-designed and well-administered case-control investigations in affected areas are a more effective approach to identify vehicles of transmission. In summary, lengthy surveillance questionnaires waste scarce human resources and impede handling of surveillance data.

# Recommendations

For cholera, exposure histories are best reserved for investigations. Surveillance should be streamlined, using simple case definitions and concentrating on timely and accurate reporting of data. At the local level, including all treatment centers, information gathered about patients who meet the cholera surveillance case definition should be basic, including age, gender, date treated, and home address. The information transmitted to regional and central levels may include age, gender, and location, but it should primarily focus on the numbers of cases, hospitalized patients, and deaths.

# Communication, Analysis, and Timely Reporting of Surveillance Data

Surveillance and laboratory information is of little value unless it is communicated clearly and promptly. Timely reporting of laboratory results to a regional epidemiology office will allow early identification of cholera-affected areas and permit immediate investigations; timely reporting to the central laboratory will allow early confirmation of the isolates; and, to complete the information loop, timely feedback to the original submitter of the isolates will help validate diagnoses and improve patient care.
Similarly, timely reporting of the total number of cases and basic analysis results from a central epidemiology office back to the laboratories and regional epidemiology offices is essential to allow the epidemic to be characterized and to ensure continued cooperation at all levels. In some locations, information dissemination among these major components in the surveillance process sometimes has not been timely and complete. Early in the cholera epidemic in Latin America, for example, some countries instituted emergency daily reporting of the number of cases to a central office, yet did not communicate the total and cumulative number of cases by region to their constituents until months later. The suitable form of information flow among local and central levels must be worked out country by country.

# Recommendations

A proposed communication system includes both reporting to a central office and feedback to regional offices (Figure 1). Initially, treatment centers and regional offices may wish to report to the central office daily by a rapid method (e.g., radio, telephone, telegram) the number of suspect cases, the number of confirmed cases, the number of patients who were hospitalized, and the number who died. A rapid switch to weekly reporting will often reduce the burden of work created by daily reporting without compromising the main goals of surveillance. The administrative level to which each treatment center's report should be sent must be clearly identified.

# FIGURE 1. Proposed communication system for cholera surveillance data (reporting to a central office with feedback to regional offices)

Surveillance data should be analyzed promptly and frequently. Basic tabulation and comparison of data should be performed at the local and regional level if trained personnel are available. At the central level, epidemiologists should analyze reports of suspect and confirmed cholera cases by region and by week to track the spread of the epidemic, to determine whether unexpectedly large numbers of cases are occurring in any region, and to evaluate the impact of interventions. The results should be disseminated to all levels and used to estimate resources needed at local levels and to decide whether epidemiologic investigations are needed. When surveillance data suggest an increase in the number of cases in an area, an epidemiologic investigation should be conducted to determine the modes of transmission and identify further prevention measures. The results of these investigations should be reported to all categories of participants in the surveillance system.

# International Reporting

Cholera is one of three internationally notifiable diseases, and countries are requested to report cases to WHO promptly. Some governments may be concerned that reporting a large number of cases could have a detrimental impact on economic factors such as tourism and food exportation. Prompt and accurate reporting, however, will improve national and international efforts to allocate resources to control cholera morbidity and mortality.

# Recommendations

The number of cholera cases should be reported to WHO in a timely manner.

# Summary of Recommendations

# Case definitions for cholera
- In a cholera epidemic, use only two categories, "suspect" and "confirmed."
- Define a suspect case as acute, watery diarrhea affecting a person ≥5 years of age.
- Define a confirmed case as laboratory-confirmed Vibrio cholerae O1 infection of any person who has diarrhea.
# Laboratory confirmation and environmental surveillance
- Use trained personnel in regional and central laboratories to isolate and confirm V. cholerae O1.
- Confirm the diagnosis bacteriologically for several suspect cases in newly threatened areas.
- After cholera has become established in an area, use stool cultures only to confirm the continuing presence of V. cholerae O1 and to monitor its antibiotic susceptibility.
- Consider using Moore swabs to identify V. cholerae O1 in sewage in cholera-threatened areas where cholera has not been confirmed.

# Stages of surveillance
- In a cholera-threatened area, investigate cases of acute dehydrating diarrhea affecting persons ≥5 years of age.
- When the number of "confirmed" cases rises, shift to using primarily the "suspect" case definition.
- Continue to report suspect cases for at least 1 year after the epidemic wanes.

# Information from patients with cholera
- Refrain from using lengthy surveillance questionnaires.
- Collect basic information on patients at the local level.
- Transmit summary data (primarily the number of cases, hospitalizations, and deaths) to the central level.

# Communication, analysis, and timely reporting of surveillance data
- Report surveillance results to a central office weekly by a rapid method.
- Analyze surveillance data and disseminate surveillance reports to all components of the surveillance system quickly and frequently.
- Conduct epidemiologic investigations in areas with increasing numbers of cases.

# International reporting
- Accurately report the country's cholera situation to WHO in a timely manner.
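Taken together, the case-definition and weekly-reporting recommendations reduce to a small amount of classification and tallying logic. The following Python sketch is illustrative only and is not part of the original recommendations; the record fields and function names are hypothetical, while the age cutoff and category rules follow the definitions summarized above.

```python
from collections import Counter
from dataclasses import dataclass
from typing import Optional

# Illustrative sketch only; record fields and function names are hypothetical.

@dataclass
class CaseReport:
    region: str
    week: int                  # epidemiologic week of report
    age_years: int
    has_diarrhea: bool
    acute_watery_diarrhea: bool
    culture_positive_o1: bool  # V. cholerae O1 isolated from stool
    hospitalized: bool
    died: bool

def classify(case: CaseReport) -> Optional[str]:
    """Apply the two-category case definition recommended above."""
    if case.culture_positive_o1 and case.has_diarrhea:
        # Laboratory-confirmed V. cholerae O1 infection in a person with diarrhea.
        return "confirmed"
    if case.acute_watery_diarrhea and case.age_years >= 5:
        # Acute, watery diarrhea in a person >=5 years of age.
        return "suspect"
    return None  # not counted under this surveillance definition

def weekly_summary(cases: list) -> Counter:
    """Tally, by region and week, the counts recommended for transmission to
    regional and central levels: suspect cases, confirmed cases,
    hospitalizations, and deaths."""
    tally = Counter()
    for c in cases:
        category = classify(c)
        if category is None:
            continue
        tally[(c.region, c.week, category)] += 1
        if c.hospitalized:
            tally[(c.region, c.week, "hospitalized")] += 1
        if c.died:
            tally[(c.region, c.week, "died")] += 1
    return tally
```

A weekly run of such a tally over the reports received at each level would produce the counts that the recommendations ask treatment centers and regional offices to transmit and the central office to analyze.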
The need for a single childhood immunization schedule prompted the unification of previous vaccine recommendations made by the American Academy of Pediatrics (AAP) and the Advisory Committee on Immunization Practices (ACIP). In addition to presenting the newly recommended schedule for the administration of vaccines during childhood, this report addresses the previous differences between the AAP and ACIP childhood vaccination schedules and the rationale for changing previous recommendations.

# INTRODUCTION

Since 1988, the U.S. childhood immunization schedule has rapidly expanded to accommodate the introduction of new, universally recommended vaccines (i.e., Haemophilus influenzae type b [Hib] conjugate (1,2) and hepatitis B (2,3) vaccines) and recommendations for a second dose of measles-mumps-rubella vaccine (MMR) (4,5) and the use of acellular pertussis vaccines (2,6). For approximately 30 years, the Advisory Committee on Immunization Practices (ACIP) and the Committee on Infectious Diseases (COID) of the American Academy of Pediatrics (AAP), the two groups responsible for developing vaccine recommendations for the public and private sectors, worked to develop similar schedules for routine childhood vaccination. However, some differences in the two schedules persisted. The unification of these childhood immunization schedules is essential to issuing consistent recommendations for both private and public health practitioners and for parents.

In February 1994, a working group was convened comprising members of AAP, ACIP, the American Academy of Family Physicians (AAFP), the Food and Drug Administration (FDA), the National Institutes of Health, and CDC. Representatives from state immunization programs, the Maternal and Child Health Bureau of the Health Resources and Services Administration, and vaccine manufacturers also participated. The objective of this working group was to develop a single, scientifically valid childhood immunization schedule, presented in an easily comprehensible format, that would accommodate the current recommendations of both ACIP and AAP and ensure the timely vaccination of preschool-age children. The schedule would identify a specified age for administering each vaccine dose and provide an acceptable range of ages to ensure flexibility for health-care providers. The working group also addressed the number of antigens and injections that should be administered at each visit, the number of visits required for children by 2 years of age, the availability of combined diphtheria and tetanus toxoids and pertussis (DTP)-Hib vaccines, and the capacity of the schedule to accommodate newly licensed vaccines (e.g., varicella vaccine).

This report presents the recommended childhood immunization schedule (approved by ACIP, AAP, and AAFP) (Table 1) and the rationale for changing the previous recommendations. Practitioners should consult the Report of the Committee on Infectious Diseases (Red Book) (2), the vaccine-specific recommendations of ACIP, and the … (7) for detailed information and specific recommendations for administration of vaccines.

# TABLE 1. Recommended childhood immunization schedule (footnotes below)

* Recommended vaccines are listed under the routinely recommended ages. Shaded bars indicate range of acceptable ages for vaccination.
† Although no changes have been made to this schedule since publication in MMWR (weekly) in January 1995, this table has been revised to more accurately reflect the recommendations.
§ Vaccines recommended for administration at 12-15 months of age may be administered at either one or two visits.
¶ Infants born to hepatitis B surface antigen (HBsAg)-negative mothers should receive the second dose of hepatitis B vaccine between 1 and 4 months of age, provided at least 1 month has elapsed since receipt of the first dose. The third dose is recommended between 6 and 18 months of age. Infants born to HBsAg-positive mothers should receive immunoprophylaxis for hepatitis B with 0.5 mL Hepatitis B Immune Globulin (HBIG) within 12 hours of birth, and 5 µg of either Merck, Sharpe, & Dohme (West Point, Pennsylvania) vaccine (Recombivax HB®) or 10 µg of SmithKline Beecham (Philadelphia) vaccine (Engerix-B®) at a separate site. For these infants, the second dose of vaccine is recommended at 1 month of age and the third dose at 6 months of age. All pregnant women should be screened for HBsAg during an early prenatal visit.
** The fourth dose of DTP may be administered as early as 12 months of age, provided at least 6 months have elapsed since the third dose.

# RATIONALE FOR CHANGE AND CURRENT RECOMMENDATIONS

In 1994, the substantial differences between the recommended AAP and ACIP schedules included the schedule for infant hepatitis B vaccination and the timing of the third dose of oral poliovirus vaccine (OPV) and the second dose of MMR (Table 2). Resolution of the differences between the schedules is described in the following sections.

# OPV

Since 1963, OPV has been the recommended vaccine for inducing long-lasting immunity to poliomyelitis. The primary series has consisted of two doses administered during infancy at approximately 2-month intervals beginning at 6-8 weeks of age, a third dose recommended at 6 weeks to 14 months after the second dose (generally administered at 15-18 months of age), and a fourth dose administered at 4-6 years of age. In late 1993, ACIP recommended that the third dose of OPV be administered at 6 months of age (8), whereas AAP recommended that this dose be administered at 6-18 months of age (2).

A study comparing two infant immunization schedules (one recommending vaccination at approximately 2, 4, 6, and 12 months of age and one at 2, 4, and 12 months of age) indicated high seroconversion rates (i.e., 96%-100%) and similar geometric mean antibody titers (measured after three doses) when following either schedule (9). Several other studies have evaluated the seroresponse to OPV administered at 2, 4, and 6 months; 2, 4, and 12 months; and 2, 4, and 18 months of age (10-13). These data indicated excellent response to all serotypes of OPV when the third dose was administered at 6, 12, or 18 months of age (Table 3).

Recommendation: Because immune response is not affected by administering the third dose of OPV at as early as 6 months of age, and because earlier scheduling can ensure a higher rate of completion of the OPV primary series at a younger age, the third dose of OPV should be administered routinely at 6 months of age. Vaccination at as late as 18 months of age remains an acceptable alternative.

# TABLE 2. Differences between the American Academy of Pediatrics' (AAP) and the Advisory Committee on Immunization Practices' (ACIP) childhood immunization schedules, by selected vaccine - United States, 1994

# MMR

# First Dose

During 1989 and 1990, more than 55,000 cases of measles were reported in the United States. Nearly 25% of these cases occurred among children ≤15 months of age, including approximately 9% among children 12-15 months of age (CDC, unpublished data).
At that time, the recommended age for routine measles vaccination was 15 months of age. Recent studies have examined the impact of vaccine-induced immunity on maternally derived transplacental antibody levels; these studies have indicated that younger women (i.e., women who were born after 1956 and who are therefore more likely to have vaccine-induced immunity) transfer lower titers of measles antibodies to their newborn infants than older women (who are more likely to have had natural measles infection). The transplacental antibody acquired by these younger mothers' infants wanes earlier, causing their children to become susceptible to measles at a younger age (14,15). This finding suggests that children born to younger mothers might respond well to measles vaccine administered at 12 months of age. In one recent study in which children randomly received measles vaccine at either 12 or 15 months of age (16), the measles antibody response to MMR was 93% when the vaccine was administered at 12 months of age; at 15 months of age, the antibody response was 98%. Among children of mothers born after 1961, who probably had received measles vaccine and were less likely to have had measles infection than women born in previous years, the seroconversion rate was 96% among children vaccinated at 12 months of age and 98% among those vaccinated at 15 months of age.

Recommendation: The slightly lower response to the first dose of measles vaccine when administered at 12 months of age compared with administration at 15 months of age has limited clinical importance because a second dose of MMR is recommended routinely for all children, enhancing the likelihood of seroconversion among children who do not respond to the first dose. In addition, earlier scheduling of the first dose of measles vaccine can improve vaccination coverage. In 1994, both AAP and ACIP recommended administration of the first dose of MMR vaccine at 12-15 months of age (2,8); this schedule is still recommended.

# TABLE 3. Percentage of children with serum-neutralizing antibody to poliovirus types 1 (p1), 2 (p2), and 3 (p3) after two and three doses of oral poliovirus vaccine, by age at vaccination and study

# Second Dose

In 1989, both ACIP and AAP recommended that all children receive a second dose of measles-containing vaccine; however, ACIP recommended administering the second dose at 4-6 years of age (5), and AAP recommended this dose at 11-12 years of age (4). Most states have implemented school entry requirements based on one or both of these recommendations. Currently, 12 states require administration of the second dose of measles vaccine before children enter kindergarten (i.e., at 4-6 years of age), 12 require this dose before entry to middle school (i.e., at 11-12 years of age), and 13 states require that the second dose be administered before children enter either kindergarten or middle school.

Recommendation: Because response to the second dose is high when administered to children in either age group (CDC, unpublished data), and because state-specific laws govern the administration of the second dose of MMR, the second dose of MMR can be administered at either 4-6 years of age or 11-12 years of age.

# Hepatitis B

Universal hepatitis B vaccination of infants was recommended in 1991 (3,17).
Although a protective serologic response (i.e., ≥10 mIU/mL) has been demonstrated in >95% of hepatitis B vaccine recipients who received vaccine according to several schedules beginning at birth or 2 months of age (Table 4), higher antibody titers were achieved when the third dose was administered at 12 or 15 months of age (18,19). Available data indicate that higher titers of antibody ensure longer persistence of antibody (20-22); however, the effect of high antibody levels on long-term protection against disease is not known.

# TABLE 4. Percentage of children who seroconverted and geometric mean titers (GMTs) after vaccination with hepatitis B vaccine, by age at first dose and vaccination schedule

Recommendation: The routine hepatitis B vaccination series should begin at birth, with the second dose administered at 2 months of age, for infants whose mothers are hepatitis B surface antigen (HBsAg) negative. Acceptable ranges are from birth through 2 months of age for the first dose and from 1 through 4 months of age for the second dose, provided that at least 1 month elapses between these doses. The third dose should be administered at 6-18 months of age. Limited available data suggest an augmented response when the third dose is administered after 12 months of age (Merck Research Laboratories, unpublished data, 1994). Infants of HBsAg-positive mothers should receive the first dose of vaccine at birth (along with immunoprophylaxis with hepatitis B immune globulin); the second dose at 1 month of age; and the third dose at 6 months of age.

# Diphtheria and Tetanus Toxoids and Pertussis Vaccine (DTP)

Since the late 1940s, the approved schedule for DTP has consisted of a primary series of three doses administered at 4-8 week intervals and a fourth (i.e., reinforcing) dose administered 6-12 months after the third dose. Although the fourth dose has been administered routinely at 15-18 months of age, it may be administered as early as 12 months of age, provided that at least 6 months elapse between the third and fourth dose. No recent data are available comparing the immunogenicity of DTP or diphtheria and tetanus toxoids and acellular pertussis vaccine (DTaP) administered at 12-14 months of age with immunogenicity at 15-18 months of age, whether the vaccine is administered alone or simultaneously with MMR and Hib vaccines.

Recommendation: The current schedule for DTP vaccination is still recommended, including the option that the fourth dose may be administered at as early as 12 months of age if 6 months elapse after the third dose. Thus, the fourth dose of DTP can be scheduled with other vaccines that are administered at 12-18 months of age. DTaP currently is licensed for use only as the fourth and/or fifth dose of the DTP series for children ≥15 months of age (2,6).

# Tetanus and Diphtheria Toxoids, Adsorbed, for Adult Use (Td)

For most persons who received a dose of DTP vaccine at 4-6 years of age, the first dose of Td is administered at 14-16 years of age and every 10 years thereafter to maintain adequate protection against tetanus and diphtheria (6). A recent U.S. serologic survey of tetanus immunity (23) indicated that tetanus immunity in the majority of the population decreases with time after the administration of the recipient's most recent vaccination.
Among persons 6-16 years of age who had received their most recent tetanus vaccination 6-10 years previously, 28% had tetanus antibody titers lower than protective levels, which suggested that Td could be administered as early as 11-12 years of age.

Recommendation: The booster dose of Td should be administered at 11-12 years of age, although vaccination at 14-16 years of age is an acceptable alternative. The earlier scheduling of this dose at 11-12 years of age encourages a routine preadolescent preventive care visit. During this visit, the practitioner should also administer a second dose of measles-containing vaccine to those persons who have not already received this dose and should ensure that children who previously have not received hepatitis B vaccine begin the vaccination series. Adolescent hepatitis B vaccination currently is recommended by AAP (2); ACIP will issue a similar recommendation. A routine visit at 11-12 years of age also will facilitate administration of other needed vaccines to adolescents.

# SIMULTANEOUS ADMINISTRATION OF MULTIPLE VACCINES

Simultaneous administration of vaccines has been recommended through the administration of combined vaccines (e.g., DTP vaccine, trivalent OPV, and MMR vaccine) or administration of multiple vaccines at different sites or by different routes (e.g., simultaneous administration of DTP, OPV, and Hib). Several studies have examined the safety and immunogenicity of simultaneously administered MMR and Hib (24,25); DTP, OPV, and MMR (26,27); DTP, OPV, and Hib (25,28); hepatitis B, DTP, and OPV (29-31); and hepatitis B and MMR (Merck Research Laboratories, unpublished data, 1993). Hepatitis B vaccine, the vaccine most recently licensed for use among infants, has been shown to be safe and effective when administered from birth through 15 months of age with other routinely recommended childhood vaccines (D. Greenberg, personal communication, 1994) (32). The available safety and immunogenicity data for vaccines currently recommended by ACIP and AAP have been reviewed recently (33). Although data are limited concerning the simultaneous administration of the entire recommended vaccine series (i.e., DTP, OPV, MMR, and Hib vaccines, with or without hepatitis B vaccine), data from numerous studies have indicated no interference between routinely recommended childhood vaccines (either live, attenuated, or killed) (33). These findings support the simultaneous use of all vaccines as recommended.

# CONCLUSION

The development of a unified childhood immunization schedule approved by ACIP, AAP, and AAFP represents the beginning of a process that will ensure continued collaboration among the recommending groups, the pharmaceutical manufacturing industry, and FDA to maintain and work toward further simplification of a unified schedule. The recommended childhood immunization schedule will be updated and published annually. Since the development of these recommendations in January 1995, FDA has licensed varicella zoster virus vaccine for use among susceptible persons ≥12 months of age. The ACIP will publish recommendations for this new vaccine, and these recommendations will be incorporated into the 1996 Recommended Childhood Immunization Schedule.

The following CDC staff members prepared this report: Jacqueline S. Gindler, M.D. Stephen
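Several of the schedule rules in this report are stated as minimum ages and minimum intervals (e.g., at least 1 month between the first and second hepatitis B doses, a third hepatitis B dose at 6-18 months of age, and a fourth DTP dose given at 12 months of age or later with at least 6 months elapsed since the third dose). As a purely illustrative sketch, not part of the published schedule, such rules can be checked mechanically; all function and field names below are hypothetical.

```python
from datetime import date

# Illustrative sketch only; the rules encoded here paraphrase the intervals
# described in the text above and all names are hypothetical.

def months_between(earlier: date, later: date) -> int:
    """Whole calendar months from `earlier` to `later`."""
    months = (later.year - earlier.year) * 12 + (later.month - earlier.month)
    if later.day < earlier.day:
        months -= 1
    return months

def check_hep_b(birth: date, doses: list) -> list:
    """Check a three-dose hepatitis B series for an infant of an
    HBsAg-negative mother."""
    problems = []
    if len(doses) != 3:
        return ["expected a three-dose series"]
    d1, d2, d3 = doses
    if months_between(birth, d1) > 2:
        problems.append("first dose should fall between birth and 2 months of age")
    if months_between(d1, d2) < 1:
        problems.append("at least 1 month must elapse between doses 1 and 2")
    if not 6 <= months_between(birth, d3) <= 18:
        problems.append("third dose should be given at 6-18 months of age")
    return problems

def check_dtp_dose4(birth: date, dose3: date, dose4: date) -> list:
    """Check the fourth DTP dose: >=12 months of age and >=6 months after dose 3."""
    problems = []
    if months_between(birth, dose4) < 12:
        problems.append("fourth dose should not be given before 12 months of age")
    if months_between(dose3, dose4) < 6:
        problems.append("at least 6 months must elapse between doses 3 and 4")
    return problems

# Example: a hepatitis B series given at birth, 2 months, and 7 months
# satisfies the checks above.
birth = date(1995, 1, 1)
assert check_hep_b(birth, [date(1995, 1, 1), date(1995, 3, 1), date(1995, 8, 1)]) == []
```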
The Collaborative Strategy on Bed Bugs reflects a broad-based consensus of federal agencies. It is an outcome of the interagency Federal Bed Bug Workgroup. The Strategy was authored by key agencies (EPA, HHS, HUD, and USDA) and includes technical information and input from the Department of Defense and the National Institutes of Health. The cover photo was downloaded from the CDC Public Health Image Library.

# Introduction
The Collaborative Strategy on Bed Bugs (the strategy) was developed by the Federal Bed Bug Workgroup to clarify the federal role in bed bug control and highlight ways that all levels of government, community, academia and private industry can work together to reduce bed bugs across the United States. Controlling bed bugs can be very costly, and nearly all communities and states are currently facing resource limitations. Collaborating and sharing training programs, communication materials or treatment plans will conserve resources and may also lead to higher quality outputs by providing new opportunities for improvement.

The strategy outlines four priority areas for bed bug control (Prevention, Surveillance and Integrated Pest Management, Education and Communication, and Research). Each of these areas is critical to national efforts, but the workgroup recognizes that interest in an individual area will vary across localities and stakeholders. The strategy does not outline specific commitments for the federal government because any action will depend on an individual agency's mission as well as regulatory and budgetary circumstances. The workgroup recognizes that collaboration among partners is the best path to success and has designed the strategy to facilitate and encourage these interactions. Within each priority area, the goals of the strategy are to help stakeholders by
- Coordinating and guiding federal activities;
- Facilitating collaboration among stakeholders and various levels of government;
- Raising awareness of the issues surrounding the bed bug problem;
- Maximizing efficient and effective use of resources to address high priority needs; and
- Encouraging realistic appraisal and management of bed bug problems through education and integrated mitigation measures.

The strategy also focuses on the importance of evaluating the success of any intervention effort. With resource pressures at all levels of government, it is critical to evaluate which interventions are the most effective and the best candidates for sharing with other communities. The evaluations can be conducted by the lead local/state/federal agency or others, to provide an objective means of allocating future resources.

# Federal Involvement and Community Efforts
Resourceful and innovative communities can take information disseminated at the national or state level and apply it to their unique local context for a more robust program. Private and nonprofit organizations, or commercial businesses, can work with local governments, adding their expertise or resources to the community response. Examples of strong partners include private funders looking to provide resources for research or control, or affected commercial sectors such as the health, pest control, hospitality, housing, real estate, or chemical manufacturing industries. Stakeholders in the community who have an interest in bed bug control include schools, housing providers, social service providers, pest management firms, local businesses, law enforcement, and local public health departments.
Communities can lower costs and improve bed bug control by working with all of their stakeholders to develop and share the following kinds of information:
- Infestation rates in specific areas of the community, including potential reservoirs of bed bug infestations that may serve as sources for new infestations;
- Levels of resistance to specific pesticides in local bed bug populations;
- Needs and existing resources to support underserved groups of people; and
- Cultural considerations (for example, values, ethnicity, national origin, language, gender, age, education, mobility, beliefs, standards, behavioral norms, communication styles, literacy, etc.) that potentially affect management efforts and recommendations.

The federal government can help communities by
- Providing information and educational materials based on the best scientific information and expert knowledge available;
- Facilitating interactions among communities to leverage knowledge and experience;
- Encouraging objective evaluations of interventions; and
- Promoting research activities to improve prevention and control techniques.

When complete elimination is not feasible, people living or working in an infested area can take steps to prevent the spread of this pest to new areas within the community and strive to reduce the bed bug population(s) as much as possible given the characteristics of the infested site. In this way, local efforts in even the most difficult cases can contribute to the management of bed bugs throughout the community.

# IPM and Community Efforts
Communities cannot expect to effectively manage bed bugs without coordinated community involvement using the principles of Integrated Pest Management (IPM). IPM is an effective and safe approach to pest management that relies on a combination of common-sense practices that present the least possible hazard to people, property and the environment. An IPM approach is especially critical to control in multifamily housing, lodging and institutional facilities. To manage bed bugs at the local level, communities need the capacity to coordinate prevention, surveillance, IPM, education, and communication activities. Appendix A contains additional key elements to include when setting up a bed bug program. Bed bug action committees (such as task forces) made up of recognized and respected leaders can be an effective way to organize collaborative local efforts. In addition, the state's IPM Coordinator may be a source of information about groups doing bed bug work in the state (www.ipmcenters.org/contacts/IPMDirectory.cfm). Guidance and examples are available from programs that shared successful approaches at the Second National Bed Bug Summit.

# Nature of the Problem
Bed bug infestations can happen anywhere: at home, at work, at school or anyplace people bring their belongings. Because bed bugs are small, people may unknowingly transport them from place to place on clothes, luggage or other goods. Infestations may spread when bed bugs crawl to adjacent rooms or housing units. Bed bug infestations can be small and isolated or more extensive and complex. Small, isolated infestations are more easily controlled; however, extensive and complex infestations can persist for an extended period of time. The United States, like many countries, has recently seen increases in bed bug populations. In 2010, CDC and EPA issued the Joint Statement on Bed Bug Control in the United States, which discusses the public health implications of bed bugs and their control.
Bed bugs cause a variety of negative physical health, mental health and economic consequences, including
- Various reactions to bed bug bites, ranging from no observable reaction to mild or severe allergic reactions;
- Secondary infections of the skin;
- Mental health implications for people living in infested homes; and
- Time-consuming and expensive control measures.

The management of bed bugs continues to pose a major challenge to state and local governments, private industry and the American public. This strategy provides guidance for how various parts of the government can contribute to minimizing the negative effects of bed bug infestations on human health and the economy.

# Priority Area I: Prevention
Prevention is a very cost-effective approach to managing bed bugs. Effective prevention efforts, when taking into account cultural considerations, can work in a wide range of settings. Prevention strategies are particularly effective for owners/agents of shelters, some group homes and other housing accommodations for transient populations where the risk of bed bug introductions and subsequent infestations is high. When prevention fails, site-specific early detection and rapid response measures within the IPM plan can prevent further spread. To support prevention, the workgroup recognizes the need for
- Increasing general awareness through effective outreach and education programs;
- Increasing accessibility to bed bug information through a dedicated website: www.epa.gov/bedbugs;
- Facilitating communities' ability to educate stakeholders about prevention through the clearinghouse portion of the website;
- Examining the effectiveness of potential prevention techniques through research and other projects;
- Establishing programs to inform employees about the potential for exposure to bed bugs, how to recognize an infestation and what to do if exposure occurs;
- Continuing to inform HUD stakeholders about housing policy; and
- Providing technical information and help when infestations occur in settings under federal oversight, such as during disaster relief efforts or at military sites.

# Actions to Improve Prevention
To stop the spread of bed bugs, people living with bed bugs can take steps to reduce the opportunities for bed bugs to migrate to new locations. By developing a strategy in advance, people who travel or have jobs that may expose them to bed bugs can reduce the likelihood of bringing bed bugs home. Integrating prevention strategies into IPM plans can help property owners and agents keep their buildings free of bed bugs. Education and communication are the foundation for effectively preventing bed bug infestations. Priority Area III (Education and Communication) provides more information on effective programs. The workgroup also envisions that effective prevention programs will include the following modules:
- Accurate identification of bed bugs (they are easily confused with other pests);
- Early detection of new infestations and/or infestations that may persist after treatments;
- Sources for science-based technical information on bed bugs (particularly for high-risk settings); and
- Open communication about bed bugs and infestations, which will foster collaboration to solve the problem, rather than assigning blame or fostering stigma about infestations.
Creating a prevention program involves three main steps:
- Minimizing movement of bed bugs to new locations;
- Creating living spaces that are less receptive to bed bug infestations; and
- Identifying and eliminating significant infestations that can serve as reservoirs for spreading bed bugs.

The workgroup believes successful community programs will include
- Preventing the movement of bed bugs by
  o Treating existing infestations quickly and notifying all neighbors of the problem to ensure they are on the lookout for bed bugs;
- Making areas less receptive to new infestations by
  o Removing clutter;
  o Installing mattress and box spring encasements (if funds are severely limited, at least encase the box springs);
  o Installing, inspecting, and maintaining interception devices or traps; and
  o Providing clothes dryers hot enough to kill bed bugs.
- Carefully inspecting all used clothing and furniture for bed bugs to avoid bringing them into the home;
- Choosing appropriate control techniques for treating existing infestations to increase effectiveness of treatment and to reduce movement of bugs into adjacent units (although many registered products are available, every situation is different);
- Encouraging managers or owners of multifamily dwellings, when possible, to invest in
  o Washers and dryers on each floor to reduce the spread of bugs from residents carrying bedding and other items to laundry facilities via the elevator or stairwells; and
  o Portable heating units so that returning travelers can treat their luggage with heat before unpacking.
- Sealing cracks and crevices around baseboards, light sockets, etc., to discourage movement through wall voids;
- Encouraging managers of multifamily dwellings to install door sweeps on the bottom of doors to discourage movement into hallways; and
- Implementing IPM plans in high-risk settings (hostels, shelters, etc.) to encourage
  o Inspecting and/or treating incoming personal belongings with a portable heat chamber;
  o Routinely monitoring areas around where people sleep or rest; and
  o Identifying and destroying reservoir populations whenever possible.

# Measuring Progress
The following types of data can show whether programs and activities are successfully preventing bed bug infestations:
- Results of inspections, such as bed bug numbers from monitoring devices, visual inspections, or numbers of locations where scent-detecting canines alert; and
- Increased knowledge or behavior change of participants completing training or receiving educational materials, as measured by polls and questionnaires.

# Priority Area II: Surveillance and IPM for Bed Bugs
Even if residents diligently engage in prevention activities, bed bugs can still infest an area, and an infestation may become well established before anyone notices. If an IPM plan is in place that details specific roles in dealing with an infestation, bed bug experts can stop the infestation before it becomes overwhelming. In most cases, the longer an infestation remains unchecked, the harder and more expensive it will be to eliminate. Reliable and cost-effective early detection methods and educational efforts can help communities by lowering overall treatment costs and reducing new infestations. Effective site-specific IPM plans tailor educational efforts and procedures to the community.
The plans also address any factors that could affect program effectiveness, such as
- Cultural considerations,
- Availability of qualified bed bug control professionals,
- Pesticide and applicator availability,
- Potential for pesticide misuse, and
- Financial resources.

Identifying and planning for complicating factors when designing a site-specific IPM plan are important both for budgeting and for anticipating special considerations that could lengthen the time needed to eliminate the infestation. In addition, bed bug populations vary in their level of resistance to pesticide products. If pesticides are used to control bed bugs, it is critical to select those that will be effective against the infestation. Continuing to monitor for pests at the site being treated is a critical element of any IPM-based management program; monitoring provides an objective measure of progress toward the goal of eliminating the infestation. Bites alone are a very poor indicator of an infestation because people sometimes mistake bites from mosquitoes and other pests as bites from bed bugs.

Using community-wide surveillance to identify bed bug sources and identifying ways that bed bugs move between residences can help communities target resources to areas where they will be most effective. Rather than relying on anecdotal information, communities should use IPM best-management practices and reliable surveillance data to verify the presence of living bed bugs when there is a report of bed bugs. Bed bug sightings at schools can help identify potential infestations in the community; however, this information is very sensitive because it has the potential to stigmatize students and their families and should be handled discreetly. Community control efforts can be significantly improved when the community has access to quality resources and information. The workgroup recognizes the need for collaboration among stakeholders to
- Provide science-based information and educational materials about the importance of early detection;
- Share success stories about management efforts;
- Identify and promote research activities that may lead to reliable, portable and affordable detection methods;
- Ensure that scientifically sound information is collected and made readily available to stakeholders about bed bugs, including effective monitoring and control strategies, properly labeled chemical insecticides and related regulatory requirements;
- Facilitate the incorporation of new discoveries into training, education and communication materials; and
- Encourage funding for basic and applied research on bed bugs, bed bug biology, and effective control techniques and products, and conduct such research or facilitate funding where possible.

# Actions to Improve Surveillance and IPM
By having an IPM plan for bed bugs in place, occupants and staff will be able to act quickly and confidently if they find bed bugs. Those living or working at a property can conduct basic bed bug inspections as an added step to existing scheduled routines (for example, housekeeping inspections), which will help minimize costs. Residents should not serve as the sole source for reporting bed bug infestations. Inspecting every area at risk for bed bug infestation will help find bed bug infestations while they are still relatively easy to eliminate. The frequency of such inspections should be part of the IPM plan, based on the local situation and conditions.
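Because the strategy stresses reliable surveillance data over anecdote, even a simple structured tally of inspection results can help a program target resources. The following Python sketch is illustrative only; the record layout and building names are hypothetical and not taken from the strategy:

```python
from collections import defaultdict

# Hypothetical inspection records: (building, unit, live bed bugs confirmed?)
inspections = [
    ("Maple Court", "101", True),
    ("Maple Court", "102", False),
    ("Maple Court", "103", False),
    ("Oak Manor", "2B", True),
    ("Oak Manor", "2C", True),
]

def infestation_rates(records):
    """Share of inspected units per building with confirmed live bed bugs."""
    confirmed = defaultdict(int)
    total = defaultdict(int)
    for building, _unit, live_bugs in records:
        total[building] += 1
        confirmed[building] += live_bugs  # True counts as 1, False as 0
    return {b: confirmed[b] / total[b] for b in total}

print(infestation_rates(inspections))
# {'Maple Court': 0.3333..., 'Oak Manor': 1.0}
```

Tracking such rates over successive inspection rounds provides the kind of objective measure of progress that a site-specific IPM plan calls for.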
Hiring a pest management professional (an experienced specialist who uses certified canine bed bug detectors or other effective bed bug monitors) or training specific staff members to conduct intensive inspections can be an effective way to encourage surveillance and IPM efforts. Bed bugs can be detected before an infestation becomes severe by using devices and practices that facilitate early detection. These include the following:
- Using rip-resistant, bite-proof and escape-proof mattress and box spring encasements that make it easier to see the signs of bed bugs;
- Managing the environment so that when infestations occur they are more easily detected (such as routine cleaning, minimizing clutter, removing evidence of past infestations, using light-colored bedding, etc.);
- Using passive bed bug monitors that intercept and trap bed bugs; and
- Using active monitors that attract bed bugs by releasing heat, carbon dioxide and/or a scent lure.

Dogs can be successfully used as part of a program for early detection of bed bugs or to verify that bed bugs have been eliminated from a residence. However, this success depends on how well both the handler and the dog have been trained and how well they work together as a team. Whenever using dogs or hiring a dog handler, success is most easily achieved if
- The dog and/or team are certified by a nationally recognized association;
- The resident recognizes that dogs are not perfect in detecting bed bugs
  o a positive identification by a dog should always be confirmed by a visual inspection before beginning treatment;
  o a negative result should also be confirmed by another monitoring technique; and
- The team that is hired is a professional dog/handler team with a proven track record.

The federal government has no oversight or advisory role for bed bug surveillance by dog/handler teams. However, several states have begun programs to require training, licensing or certification for dog teams.

Research into basic bed bug biology and behavior shows certain characteristics are common to all bed bugs. Focusing detection efforts on the following characteristics can help reduce costs:
- The majority of bed bugs are found in or near sleeping and resting areas.
- As an infestation grows within an apartment or home, bed bugs will travel to other locations, making it more difficult, time consuming and expensive to pinpoint every infested area.
- Because bed bugs can move between housing units, the best IPM practice is to inspect adjacent areas or units, units above and below the infested unit, and units across the hall.
- If a unit is vacant, bed bugs may behave differently (for example, by becoming inactive or more active during the day) while waiting for a new host to arrive.
- Although not preferred hosts, pets and rodents serve as alternate hosts in some circumstances (such as when other hosts suddenly become unavailable or when an infestation is severe). Having these additional hosts can greatly increase the chances of the bugs' survival.
- Bed bugs can be difficult to differentiate from other common insects such as carpet beetles, cockroach nymphs or booklice.

A qualified expert should confirm the presence of a bed bug infestation before control efforts begin. Once a licensed pest control professional or other expert confirms the infestation, site occupants and staff follow the predetermined IPM plan to efficiently control the infestation. If no IPM plan is in place, follow research-based recommendations available at EPA's bed bug website (www.epa.gov/bedbugs).
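The confirmation rules above for canine detection amount to a simple decision protocol. As a hedged illustration only (the function name and return strings are hypothetical, not part of the strategy), the logic could be expressed as:

```python
def next_step(dog_alerted: bool, visual_confirms: bool, monitor_confirms: bool) -> str:
    """Corroborate canine results before acting: a positive alert is verified
    visually before treatment, and a negative result is cross-checked with
    another monitoring technique."""
    if dog_alerted:
        if visual_confirms:
            return "infestation confirmed: follow the IPM plan and begin treatment"
        return "alert unconfirmed: re-inspect visually or deploy monitors before treating"
    if monitor_confirms:
        return "canine result negative but monitors show activity: confirm visually"
    return "no evidence of activity: continue routine monitoring"
```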
Each bed bug infestation site is unique, so control options and IPM plans will vary. The workgroup has also identified the following actions that managers/owners/occupants and pest control professionals can take:
- Encourage discussions about appropriate ways to quarantine infested items with a licensed pest management company or other professional knowledgeable about the control of bed bugs;
- Advocate for clear guidelines to minimize the potential for bed bugs to develop resistance to pesticides; and
- Establish minimum acceptable standards for distinguishing an active infestation from a prior infestation.

# Measuring Progress
Sometimes residents or landlords will treat for bed bugs based on anecdotal reports, without proper evidence. Evaluation is essential to ensure that limited resources are used for the treatment methods that will be most effective. Professional organizations have recommended best management practices, but not all of these practices have been adequately evaluated for effectiveness. Measuring results from surveillance and IPM programs at the federal, state and local levels helps ensure effective and efficient use of resources. The following could demonstrate whether an approach has resulted in better bed bug management:
- Community or locality infestation rates, which indicate successful interventions and likely reduction in new infestations;
- Data showing availability and distribution of information and educational materials from the surveillance system, coupled with data showing increased knowledge of residents and housing employees;
- Documentation showing a reduction in bed bug control expenses over time;
- Inspections, both routine and non-routine, revealing less frequent and smaller populations of bed bugs;
- Verification that infestations are confined to one dwelling area (not multiple areas in a home or adjacent units);
- Documentation from monitors and resident surveys showing no activity for months after one or two treatments of an active infestation; and
- Surveys showing an increase in resident satisfaction.

# Priority Area III: Education and Communication
How much participants know about bed bugs often determines how willing they are to cooperate with prevention and control activities. Knowledge also often determines the outcome of IPM plan implementation. Information about bed bug basics, prevention and control is critical for all stakeholders so they can contribute to pest management efforts. Training and communications materials that are current, targeted, science-based and consistent are key to a successful program. In addition, target audience members are most likely to access the information if it is available in plain language (see www.plainlanguage.gov). Although pest control companies are on the front lines of the bed bug battle, they vary in their ability and willingness to train employees and educate clients. In addition, existing training curricula frequently lack the latest, science-based information about bed bugs.
To improve the quality of education and training materials, the workgroup recognizes the need for
- Defining core competencies for groups that need training or education;
- Setting audience-specific learning objectives (based on core competencies) for diverse stakeholders involved in bed bug management;
- Ensuring sufficient numbers of effective trainers to help convey consistent information and quality delivery;
- Promoting current, science-based messages (pulling from multiple websites or training from some sources can give an incomplete or incorrect understanding of bed bugs and their management);
- Tailoring messages to the target community using cultural considerations and current needs, recognizing that settings at risk for infestation (for example, offices, shelters, group homes, hostels, camps, etc.) may routinely have uninformed people rotating in and out;
- Reducing the costs of developing and disseminating high-quality education, training and communications materials for all stakeholders;
- Understanding that the greatest cost savings may result from prevention programs;
- Refining models of community coordination to serve as templates for minimizing costs; and
- Communicating with people who have different learning styles or those who rely on electronic communications such as social media as their primary source for learning.

# Actions to Improve Education and Communications
Training efforts will reach the most people in the most effective way when they use current information based on standardized learning objectives that demonstrate core competencies. In addition to bed bug-specific training programs, cooperating with broader training programs to discuss bed bugs (such as Healthy Homes, IPM, and training for management of other pests) can be cost effective. Training can be provided in classrooms, at the workplace, and through webinars. Regardless of format, regularly updating training programs is necessary to keep information current. Developing a federally endorsed set of core competencies and learning objectives for bed bug training may help define the level of training needed in a bed bug management program. A professional certification program indicating a level of knowledge, experience and educational ability would help consumers select a qualified training provider.

Interaction among communities could lead to more effective and efficient use of resources. Costs can be significantly reduced by using existing strategies for communications and instructional design whenever possible. Greater use of the clearinghouse portion of EPA's bed bug website could help management programs format and tailor communications to a target audience. The website
- Provides accurate information on bed bug management from unbiased sources;
- Discredits myths linking sanitation, poverty and immigration status to bed bugs;
- Offers consistent information; and
- Promotes more efficient communications on a local level.

For example, the clearinghouse could be a repository for educational materials such as in-depth training developed by professionals, including videos on how to inspect, caulk crevices, make your bed an "island," or prevent bed bug transport.

# Measuring Progress
Measuring the success of a training program is critical to determining whether those resources are being used productively. Core competencies should align with measurable learning objectives, and the objectives should be audience-specific for the various stakeholders and/or sectors involved in bed bug management.
Once developed, communities could use these standard objectives to measure the level of training expertise in a community. Nationally, achievement of objectives could be compared across communities to measure success in communication and training programs. Likewise, communicators can track efforts and evaluate the results to support efficient use of resources and improve effectiveness of messages. Ideally, target audiences have the opportunity to provide feedback. By using the feedback, communities can provide valuable information to those developing messages for use in public forums and social media. The following are examples of data that could assess the effectiveness of communication strategies and messages:
- Documented results from satisfaction surveys of customers who pay to have pest management professionals rid their homes/facilities of bed bugs;
- Results from focus groups during development of communication strategies and after dissemination to determine whether the target audiences receive the messages, understand them, and take the recommended steps to prevent or control bed bug infestations;
- Evaluations of training from interviews of participants to determine their perceptions about the effectiveness of communication strategies; and
- Before and after surveys analyzing training participants' behavior changes with regard to bed bugs (see the sketch after this list).
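As a minimal illustration of the last item, before-and-after survey scores can be summarized in a few lines; the scores and variable names below are hypothetical, not from the strategy:

```python
# Hypothetical pre- and post-training quiz scores (0-100) for the same participants.
pre_scores = [40, 55, 62, 35, 50]
post_scores = [70, 80, 85, 60, 75]

def mean(xs):
    return sum(xs) / len(xs)

gain = mean(post_scores) - mean(pre_scores)
pct_improved = sum(post > pre for pre, post in zip(pre_scores, post_scores)) / len(pre_scores)
print(f"Mean knowledge gain: {gain:.1f} points; {pct_improved:.0%} of participants improved")
```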
# Priority Area IV: Research
When the numbers and geographical distribution of bed bugs began to increase, pest-specific knowledge was based on previous, limited research. Since then, our understanding of bed bugs and their management has improved. Newer research about bed bugs answered many questions and allows program planners and the American public to more effectively manage infestations. Research activity can be divided into two areas: basic and applied. Basic research answers questions about the biology of the bed bug in its environment and its association with hosts. Not every basic research study results in a practical application, but unexpected answers to basic research questions may lead to innovative solutions. Peer-reviewed basic research on topics such as colony growth, movement behavior and the pest's apparent inability to transmit pathogens that cause human diseases would improve the current body of knowledge about this pest. Applied research provides new methods for IPM or new information that allows better pest control as part of an IPM program. Such research activity may lead to new tools or evaluation of existing tools. Applied research can also bring together many techniques into a clear, site-specific IPM plan to test the effectiveness of the plan.

Research pertaining to the priority areas will likely come from
- Academic institutions (for example, departments of entomology, schools of medicine, schools of public health, departments of social work, departments of communication, departments of consumer economics, and schools of architecture);
- Private industry (for example, insecticide manufacturers, product formulators, venture companies, and pest control firms);
- User group associations (for example, builders, hoteliers, private housing groups, insecticide manufacturers);
- International research entities; and
- Federal, state and local governments.

Because both basic and applied research will provide the foundation for new, more effective control options, the workgroup recognizes the need to
- Promote partnerships that leverage resources and expertise;
- Integrate the results of peer-reviewed research into management programs through education, training and communications;
- Further investigate potential health effects from bed bugs, including the potential to trigger an allergic reaction;
- Improve access to study sites and the potential use of human subjects;
- Quantify the costs for control and prevention and the value of tailoring intervention efforts to unique communities;
- Develop inexpensive, practical control and prevention protocols for residents, owners and managers;
- Improve access to research colonies, including field strains;
- Maintain defined strains of bed bugs with known resistance to insecticides;
- Create a source of reagents (for example, plasmids, antibodies, DNA and other materials), samples, or testing protocols; and
- Integrate results with international collaborators to make research and, ultimately, control more effective and less costly.

# Actions to Improve Research
Research is important to improve bed bug management techniques. Basic research is often the source of innovative new solutions, while applied research directly supports information used in establishing pest control best practices. Both are necessary to build a foundation of knowledge that can be used to develop solutions. Regardless of research questions, the workgroup suggests researchers use the following guiding principles in their work:
- Strive to serve all members of American society with appropriate control techniques;
- Minimize costs while maximizing the benefits from the products of research efforts; and
- Communicate results effectively and efficiently to stakeholders.

Meetings that bring researchers together could include forums for discussion of strategies for research, technology, etc. Researchers may coordinate scientific efforts with professionals from other specialties to develop new solutions and to use methods as effectively as possible. For example, working with scientists studying related organisms could help inform approaches relevant to bed bugs. The following systems may facilitate federal/academic/industrial partnerships:
- Small Business Innovation Research: a program that enables small businesses to explore their technological potential and provides the incentive to profit from commercialization.
- Cooperative Research and Development Agreement: an agreement between one or more federal laboratories and one or more nonfederal parties through which the federal laboratory provides personnel, services, facilities, equipment or other resources toward the conduct of specified research or development efforts.
- Specific Cooperative Agreement: an agreement between a federal agency and another party that describes in detail a jointly planned and executed research program or project of mutual interest between the parties to which both contribute resources.
# Measuring Progress
Like all of the other priority areas, measuring research efforts is critical to evaluating and directing research funding. Improvements in basic research can be measured by looking for increases in the number of
- Publications of research outcomes in peer-reviewed journals;
- New patents;
- Shared resources that can be used by the bed bug research community;
- Research projects that contribute to solutions; and
- Trained bed bug biologists.

In addition to those increases listed above, the following may indicate an improvement in applied research:
- Transfers of intellectual property;
- Translation of research into operational manuals and other sources of information for those who manage bed bugs;
- Improved, more effective products available to the public and businesses;
- Improved communication with the public and businesses;
- A current baseline of the problem;
- Comparison of the outcome of interventions to a baseline; and
- New strategies for detection, prevention and control.

# Conclusions
Integrating several methods for controlling bed bugs in response to IPM surveillance data is likely to be more successful against bed bugs than application of any single method. Reducing bed bug populations throughout a community provides an opportunity to reduce the probability of new infestations. With few exceptions, the federal government does not have a role in direct interventions against bed bugs, but it does have an important role in providing reliable information, coordinating stakeholders and providing resources for research to achieve long-term solutions. Many other stakeholders such as nongovernmental organizations, state and local governments, school associations and community coalitions have been active in efforts to manage the bed bug problem. Their input has been important in crafting this strategy. Since problems associated with bed bugs increased in the United States, research and development has produced better products and methods for control. As improvements continue, indications are that IPM is the best approach in most situations. Public awareness and dissemination of accurate information is helpful for earlier detection, greater chances of successful elimination of infestations, reduction in spread of infestations and more efficient use of resources. This strategy advocates a logical, integrated approach to bed bug management, with a focus on cooperation among all levels of government and communities. Effective implementation of this strategy should decrease costs to communities and achieve better bed bug control. Continuous measurement of progress will be key to efficient use of resources and essential to prevent the recurrence of widespread infestations in the United States.
# Appendix A: Key Elements of Successful Bed Bug Management
Although bed bug management programs vary regionally, successful programs frequently share common elements. The workgroup has identified five key elements generally found in well-run, successful programs:

Collaboration - Programs that draw on the strengths of the various participants and include diverse stakeholders are better able to improve outcomes and broaden the reach of their efforts. That is, they accomplish more as a group than as independent programs.

Plain Language, Targeted - Materials and educational efforts are most likely to be effective when project planners define the audience, customize materials for that audience and target delivery of materials. Developing resources and educational programs in plain language helps audiences find what they need, understand what they find and use what they find to meet their needs (see www.plainlanguage.gov).

IPM-Based - Bed bug control is most successful when it uses an integrated pest management (IPM) approach. IPM is an effective and safe approach to pest management that relies on a combination of common-sense practices that present the least possible hazard to people, property and the environment. IPM takes advantage of all appropriate pest management options, including the judicious use of pesticides. IPM programs use current, comprehensive information on the life cycles of pests and their interaction with the environment to create programs that are efficient, effective and safe. A site-specific IPM plan helps ensure use of standard operating procedures and coordination of the efforts of everyone who has a role in pest management. More information on IPM can be found in the USDA National Roadmap for Integrated Pest Management.

Science-Based - Recommendations are most likely to be consistent and successful when project planners base them on objective, accepted scientific evidence. Basing recommendations on assumptions results in wasted or duplicative effort and resources.
Evaluated for Measurable Success -Developing sustainable programs, especially over time or distance, is most easily accomplished by evaluating the success of individual intervention efforts. When communities objectively measure the success of their community-wide communication and control efforts, they are better able to use resources most effectively and at lower costs. Sharing successful results is also important for improving quality of efforts across communities and to support efficient use of resources. # Appendix B: The Federal Bed Bug Workgroup A recommendation from the first National Bed Bug Summit led five federal agencies to establish the Federal Bed Bug Workgroup in August 2009. The current workgroup comprises representatives from the Centers for Disease Control and Prevention (CDC), the Department of Defense (DoD), the Environmental Protection Agency (EPA), the Department of Housing and Urban Development (HUD), and the U.S. Department of Agriculture (USDA). The workgroup is a cooperative effort; no single federal agency has the lead in the fight against bed bugs. Because each agency's mission influences its bed bug work, the individual missions and agency-specific activities are listed below. # Centers for Disease Control and Prevention (CDC) CDC collaborates to create the expertise, information and tools that people and communities need to protect their health -through health promotion; prevention of disease, injury and disability; and preparedness for new health threats. CDC offers live and online courses about the importance of IPM, including the biology and control of pests. The course trains environmental health, public health, pest management, and other professionals in the principles of effective pest control, including bed bugs, and offers a special field-based exercise that trains participants to conduct an inspection of a potentially infested bedroom. Department of Agriculture (USDA) The USDA Research, Education, and Economics mission areas, the National Institute of Food and Agriculture (NIFA) and the Agricultural Research Service (ARS) provide federal leadership in creating and disseminating knowledge spanning the biological, physical, and social sciences related to agricultural research, extension, and higher education. NIFA's mission is to lead food and agricultural sciences by supporting research, education, and extension programs in the Land-Grant University System and other partner organizations. Funding from NIFA supports bed bug researchers and extension personnel at several Land-Grant universities. EPA's dedicated bed bug website includes science-based information developed by these university experts. This work leverages other efforts in ARS on chemical ecology of insects and development of new insecticides. This work has resulted in the discovery of new kinds of chemical attractants for bed bugs and new insecticides effective against resistant bed bugs. ARS' problem-solving orientation focuses on bringing together many scientific skills to find innovative solutions. Environmental Protection Agency (EPA) EPA's mission is to protect human health and the environment. For bed bugs, like other pests of public health significance, EPA has a statutory charge to ensure that the pesticides are (1) safe and (2) effective against the pests on their labels. EPA's Office of Pesticide Programs works to ensure that pest management professionals and the public have access to the latest information on effective bed bug control tools. 
EPA realizes that certain bed bug populations in communities across the nation are becoming more resistant to many of the existing pesticides. EPA is also working to educate the general public, pest professionals and public health officials about bed bug biology and IPM, both of which are critical to long-term bed bug control.
The Collaborative Strategy on Bed Bugs reflects a broad-based consensus of federal agencies. It is an outcome of the interagency Federal Bed Bug Workgroup. The Strategy was authored by key agencies (EPA, HHS [CDC], HUD, and USDA) and includes technical information and input from the Department of Defense and the National Institutes of Health. The cover photo was downloaded from the CDC Public Health Image Library (http://phil.cdc.gov/phil/home.asp).

# Introduction

The Collaborative Strategy on Bed Bugs (the strategy) was developed by the Federal Bed Bug Workgroup to clarify the federal role in bed bug control and highlight ways that all levels of government, community, academia and private industry can work together to reduce bed bugs across the United States. Controlling bed bugs can be very costly, and nearly all communities and states are currently facing resource limitations. Collaborating and sharing training programs, communication materials or treatment plans will conserve resources and may also lead to higher quality outputs by providing new opportunities for improvement. The strategy outlines four priority areas for bed bug control: Prevention; Surveillance and Integrated Pest Management (IPM); Education and Communication; and Research. Each of these areas is critical to national efforts, but the workgroup recognizes that interest in an individual area will vary across localities and stakeholders. The strategy does not outline specific commitments for the federal government because any action will depend on an individual agency's mission as well as regulatory and budgetary circumstances. The workgroup recognizes that collaboration among partners is the best path to success and has designed the strategy to facilitate and encourage these interactions. Within each priority area, the goals of the strategy are to help stakeholders by

- Coordinating and guiding federal activities;
- Facilitating collaboration among stakeholders and various levels of government;
- Raising awareness of the issues surrounding the bed bug problem;
- Maximizing efficient and effective use of resources to address high-priority needs; and
- Encouraging realistic appraisal and management of bed bug problems through education and integrated mitigation measures.

The strategy also focuses on the importance of evaluating the success of any intervention effort. With resource pressures at all levels of government, it is critical to evaluate which interventions are the most effective and the best candidates for sharing with other communities. The evaluations can be conducted by the lead local/state/federal agency or others, to provide an objective means of allocating future resources.

# Federal Involvement and Community Efforts

Resourceful and innovative communities can take information disseminated at the national or state level and apply it to their unique local context for a more robust program. Private and nonprofit organizations, or commercial businesses, can work with local governments, adding their expertise or resources to the community response. Examples of strong partners include private funders looking to provide resources for research or control, and affected commercial sectors such as the health, pest control, hospitality, housing, real estate, or chemical manufacturing industries. Stakeholders in the community who have an interest in bed bug control include schools, housing providers, social service providers, pest management firms, local businesses, law enforcement, and local public health departments.
Communities can lower costs and improve bed bug control by working with all of their stakeholders to develop and share the following kinds of information:

- Infestation rates in specific areas of the community, including potential reservoirs of bed bug infestations that may serve as sources for new infestations;
- Levels of resistance to specific pesticides in local bed bug populations;
- Needs and existing resources to support underserved groups of people; and
- Cultural considerations (for example, values, ethnicity, national origin, language, gender, age, education, mobility, beliefs, standards, behavioral norms, communication styles, and literacy) that potentially affect management efforts and recommendations.

The federal government can help communities by

- Providing information and educational materials based on the best scientific information and expert knowledge available;
- Facilitating interactions among communities to leverage knowledge and experience;
- Encouraging objective evaluations of interventions; and
- Promoting research activities to improve prevention and control techniques.

When complete elimination is not feasible, people living or working in an infested area can take steps to prevent the spread of this pest to new areas within the community and strive to reduce the bed bug population(s) as much as possible given the characteristics of the infested site. In this way, local efforts in even the most difficult cases can contribute to the management of bed bugs throughout the community.

# IPM and Community Efforts

Communities cannot expect to effectively manage bed bugs without coordinated community involvement using the principles of Integrated Pest Management (IPM). IPM is an effective and safe approach to pest management that relies on a combination of common-sense practices that present the least possible hazard to people, property and the environment. An IPM approach is especially critical to control in multifamily housing, lodging and institutional facilities. To manage bed bugs at the local level, communities need the capacity to coordinate prevention, surveillance, IPM, education, and communication activities. Appendix A contains additional key elements to include when setting up a bed bug program. Bed bug action committees (such as task forces) made up of recognized and respected leaders can be an effective way to organize collaborative local efforts. In addition, the state's IPM Coordinator may be a source of information about groups doing bed bug work in the state (www.ipmcenters.org/contacts/IPMDirectory.cfm). Guidance and examples from programs that shared successful approaches at the Second National Bed Bug Summit can be found at http://www.epa.gov/oppfead1/cb/ppdc/bedbug-summit/2011/2nd-bedbug-summit.html.

# Nature of the Problem

Bed bug infestations can happen anywhere: at home, at work, at school or anyplace people bring their belongings. Because bed bugs are small, people may unknowingly transport them from place to place on clothes, luggage or other goods. Infestations may spread when bed bugs crawl to adjacent rooms or housing units. Bed bug infestations can be small and isolated or more extensive and complex. Small, isolated infestations are more easily controlled; however, extensive and complex infestations can persist for an extended period of time. The United States, like many countries, has recently seen increases in bed bug populations.
In 2010, CDC and EPA issued the Joint Statement on Bed Bug Control in the United States, which discusses the public health implications of bed bugs and their control. Bed bugs cause a variety of negative physical health, mental health and economic consequences, including

- Various reactions to bed bug bites, ranging from no observable reaction to mild or severe allergic reactions;
- Secondary infections of the skin;
- Mental health implications for people living in infested homes; and
- Time-consuming and expensive control measures.

The management of bed bugs continues to pose a major challenge to state and local governments, private industry and the American public. This strategy provides guidance for how various parts of the government can contribute to minimizing the negative effects of bed bug infestations on human health and the economy.

# Priority Area I: Prevention

Prevention is a very cost-effective approach to managing bed bugs. Effective prevention efforts, when they take cultural considerations into account, can work in a wide range of settings. Prevention strategies are particularly effective for owners/agents of shelters, some group homes and other housing accommodations for transient populations, where the risk of bed bug introductions and subsequent infestations is high. When prevention fails, site-specific early detection and rapid response measures within the IPM plan can prevent further spread. To support prevention, the workgroup recognizes the need for

- Increasing general awareness through effective outreach and education programs;
- Increasing accessibility to bed bug information through a dedicated website: www.epa.gov/bedbugs;
- Facilitating communities' ability to educate stakeholders about prevention through the clearinghouse portion of the website;
- Examining the effectiveness of potential prevention techniques through research and other projects;
- Establishing programs to inform employees about the potential for exposure to bed bugs, how to recognize an infestation and what to do if exposure occurs;
- Continuing to inform HUD stakeholders about housing policy; and
- Providing technical information and help when infestations occur in settings under federal oversight, such as during disaster relief efforts or at military sites.

# Actions to Improve Prevention

To stop the spread of bed bugs, people living with bed bugs can take steps to reduce the opportunities for bed bugs to migrate to new locations. By developing a strategy in advance, people who travel or have jobs that may expose them to bed bugs can reduce the likelihood of bringing bed bugs home. Integrating prevention strategies into IPM plans can help property owners and agents keep their buildings free of bed bugs. Education and communication are the foundation for effectively preventing bed bug infestations. Priority Area III (Education and Communication) provides more information on effective programs. The workgroup also envisions that effective prevention programs will include the following modules:

- Accurate identification of bed bugs (they are easily confused with other pests);
- Early detection of new infestations and/or infestations that may persist after treatments;
- Sources for science-based technical information on bed bugs (particularly for high-risk settings); and
- Open communication about bed bugs and infestations, which will foster collaboration to solve the problem, rather than assigning blame or fostering stigma about infestations.
Creating a prevention program involves three main steps:

- Minimizing movement of bed bugs to new locations;
- Creating living spaces that are less receptive to bed bug infestations; and
- Identifying and eliminating significant infestations that can serve as reservoirs for spreading bed bugs.

The workgroup believes successful community programs will include

- Preventing the movement of bed bugs by
  - Treating existing infestations quickly and notifying all neighbors of the problem to ensure they are on the lookout for bed bugs;
- Making areas less receptive to new infestations by
  - Removing clutter;
  - Installing mattress and box spring encasements (if funds are severely limited, at least encase the box springs);
  - Installing, inspecting, and maintaining interception devices or traps; and
  - Providing clothes dryers hot enough to kill bed bugs;
- Carefully inspecting all used clothing and furniture for bed bugs to avoid bringing them into the home;
- Choosing appropriate control techniques for treating existing infestations to increase effectiveness of treatment and to reduce movement of bugs into adjacent units (although many registered products are available, every situation is different);
- Encouraging managers or owners of multifamily dwellings, when possible, to invest in
  - Washers and dryers on each floor to reduce the spread of bugs from residents carrying bedding and other items to laundry facilities via the elevator or stairwells; and
  - Portable heating units so that returning travelers can treat their luggage with heat before unpacking;
- Sealing cracks and crevices around baseboards, light sockets, etc., to discourage movement through wall voids;
- Encouraging managers of multifamily dwellings to install door sweeps on the bottom of doors to discourage movement into hallways; and
- Implementing IPM plans in high-risk settings (hostels, shelters, etc.) to encourage
  - Inspecting and/or treating incoming personal belongings with a portable heat chamber;
  - Routinely monitoring areas around where people sleep or rest; and
  - Identifying and destroying reservoir populations whenever possible.

# Measuring Progress

The following types of data can show whether programs and activities are successfully preventing bed bug infestations:

- Results of inspections, such as bed bug numbers from monitoring devices, visual inspections, or numbers of locations where scent-detecting canines alert; and
- Increased knowledge or behavior change of participants completing training or receiving educational materials, as measured by polls and questionnaires.

# Priority Area II: Surveillance and IPM for Bed Bugs

Even if residents diligently engage in prevention activities, bed bugs can still infest an area, and an infestation may become well established before anyone notices. If an IPM plan is in place that details specific roles in dealing with an infestation, bed bug experts can stop the infestation before it becomes overwhelming. In most cases, the longer an infestation remains unchecked, the harder and more expensive it will be to eliminate. Reliable and cost-effective early detection methods and educational efforts can help communities by lowering overall treatment costs and reducing new infestations. Effective site-specific IPM plans tailor educational efforts and procedures to the community.
The plans also address any factors that could affect program effectiveness, such as

- Cultural considerations,
- Availability of qualified bed bug control professionals,
- Pesticide and applicator availability,
- Potential for pesticide misuse, and
- Financial resources.

Identifying and planning for complicating factors when designing a site-specific IPM plan are important both for budgeting and for anticipating special considerations that could lengthen the time needed to eliminate the infestation. In addition, bed bug populations vary in their level of resistance to pesticide products. If pesticides are used to control bed bugs, it is critical to select those that will be effective against the infestation. Continuing to monitor for pests at the site being treated is a critical element of any IPM-based management program; monitoring provides an objective measure of progress toward the goal of eliminating the infestation. Bites alone are a very poor indicator of an infestation because people sometimes mistake bites from mosquitoes and other pests for bites from bed bugs. Using community-wide surveillance to identify bed bug sources and identifying ways that bed bugs move between residences can help communities target resources to areas where they will be most effective. Rather than relying on anecdotal information, communities should use IPM best-management practices and reliable surveillance data to verify the presence of living bed bugs when there is a report of bed bugs. Bed bug sightings at schools can help identify potential infestations in the community; however, this information is very sensitive because it has the potential to stigmatize students and their families, and it should be handled discreetly. Community control efforts can be significantly improved when the community has access to quality resources and information. The workgroup recognizes the need for collaboration among stakeholders to

- Provide science-based information and educational materials about the importance of early detection;
- Share success stories about management efforts;
- Identify and promote research activities that may lead to reliable, portable and affordable detection methods;
- Ensure that scientifically sound information is collected and made readily available to stakeholders about bed bugs, including effective monitoring and control strategies, properly labeled chemical insecticides and related regulatory requirements;
- Facilitate the incorporation of new discoveries into training, education and communication materials; and
- Encourage funding for basic and applied research on bed bugs, bed bug biology, and effective control techniques and products, and conduct such research or facilitate funding where possible.

# Actions to Improve Surveillance and IPM

By having an IPM plan for bed bugs in place, occupants and staff will be able to act quickly and confidently if they find bed bugs. Those living or working at a property can conduct basic bed bug inspections as an added step to existing scheduled routines (for example, housekeeping inspections), which will help minimize costs. Residents should not serve as the sole source for reporting bed bug infestations. Inspecting every area at risk for bed bug infestation will help find bed bug infestations while they are still relatively easy to eliminate. The frequency of such inspections should be part of the IPM plan, based on the local situation and conditions. A simple tally of routine inspection and monitor records, as sketched below, can turn these inspections into an objective measure of progress.
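As one illustration of how routine monitor records can support surveillance, the following minimal sketch tallies interception-trap counts by unit and week and flags units whose counts are high or rising. The data layout, field names and flagging threshold are assumptions invented for this example, not part of the strategy.

```python
from collections import defaultdict

# Hypothetical inspection records: (unit, week, bugs_found), e.g. from weekly
# checks of interception devices. All names and numbers are illustrative.
records = [
    ("101", 1, 12), ("101", 2, 7), ("101", 3, 2),   # declining: treatment working
    ("102", 1, 0),  ("102", 2, 3), ("102", 3, 9),   # rising: needs follow-up
]

def weekly_counts(records):
    """Group trap counts by unit, ordered by week."""
    by_unit = defaultdict(dict)
    for unit, week, count in records:
        by_unit[unit][week] = by_unit[unit].get(week, 0) + count
    return {u: [weeks[k] for k in sorted(weeks)] for u, weeks in by_unit.items()}

def flag_units(counts, threshold=5):
    """Flag units whose latest count exceeds a threshold or rose since last check."""
    flagged = []
    for unit, series in counts.items():
        rising = len(series) >= 2 and series[-1] > series[-2]
        if series[-1] > threshold or rising:
            flagged.append(unit)
    return flagged

counts = weekly_counts(records)
print(counts)              # {'101': [12, 7, 2], '102': [0, 3, 9]}
print(flag_units(counts))  # ['102']
```

Even a tally this simple gives a program the objective, per-unit trend data that the strategy asks for, in place of anecdotal bite reports.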
Hiring a pest management professional (an experienced specialist who uses certified canine bed bug detectors or other effective bed bug monitors) or training specific staff members to conduct intensive inspections can be an effective way to encourage surveillance and IPM efforts. Bed bugs can be detected before an infestation becomes severe by using devices and practices that facilitate early detection. These include the following:

- Using rip-resistant, bite-proof and escape-proof mattress and box spring encasements that make it easier to see the signs of bed bugs;
- Managing the environment so that when infestations occur they are more easily detected (such as routine cleaning, minimizing clutter, removing evidence of past infestations and using light-colored bedding);
- Using passive bed bug monitors that intercept and trap bed bugs; and
- Using active monitors that attract bed bugs by releasing heat, carbon dioxide and/or a scent lure.

Dogs can be successfully used as part of a program for early detection of bed bugs or to verify that bed bugs have been eliminated from a residence. However, this success depends on how well both the handler and the dog have been trained and how well they work together as a team. Whenever using dogs or hiring a dog handler, success is most easily achieved if

- The dog and/or team are certified by a nationally recognized association;
- The resident recognizes that dogs are not perfect in detecting bed bugs:
  - A positive identification by a dog should always be confirmed by a visual inspection before beginning treatment; and
  - A negative result should also be confirmed by another monitoring technique; and
- The team that is hired is a professional dog/handler team with a proven track record.

The federal government has no oversight or advisory role for bed bug surveillance by dog/handler teams. However, several states have begun programs to require training, licensing or certification for dog teams. Research into basic bed bug biology and behavior shows certain characteristics are common to all bed bugs. Focusing detection efforts on the following characteristics can help reduce costs:

- The majority of bed bugs are found in or near sleeping and resting areas. As an infestation grows within an apartment or home, bed bugs will travel to other locations, making it more difficult, time consuming and expensive to pinpoint every infested area.
- Because bed bugs can move between housing units, the best IPM practice is to inspect adjacent areas or units, units above and below the infested unit, and units across the hall. If a unit is vacant, bed bugs may behave differently (for example, by becoming inactive or more active during the day) while waiting for a new host to arrive.
- Although not preferred hosts, pets and rodents serve as alternate hosts in some circumstances (such as when other hosts suddenly become unavailable or when an infestation is severe). Having these additional hosts can greatly increase the chances of the bugs' survival.

Bed bugs can be difficult to differentiate from other common insects such as carpet beetles, cockroach nymphs or booklice. A qualified expert should confirm the presence of a bed bug infestation before control efforts begin. Once a licensed pest control professional or other expert confirms the infestation, site occupants and staff should follow the predetermined IPM plan to efficiently control the infestation. If no IPM plan is in place, follow research-based recommendations available at EPA's bed bug website (www.epa.gov/bedbugs).
Each bed bug infestation site is unique, so control options and IPM plans will vary. The workgroup has also identified the following actions that managers/owners/occupants and pest control professionals can take:

- Encourage discussions with a licensed pest management company or other professional knowledgeable about the control of bed bugs about appropriate ways to quarantine infested items;
- Advocate for clear guidelines to minimize the potential for bed bugs to develop resistance to pesticides; and
- Establish minimum acceptable standards for distinguishing an active infestation from a prior infestation.

# Measuring Progress

Sometimes residents or landlords will treat for bed bugs based on anecdotal reports, without proper evidence. Evaluation is essential to ensure that limited resources are used for the treatment methods that will be most effective. Professional organizations have recommended best management practices, but not all of these practices have been adequately evaluated for effectiveness. Measuring results from surveillance and IPM programs at the federal, state and local levels helps ensure effective and efficient use of resources. The following could demonstrate whether an approach has resulted in better bed bug management:

- Community or locality infestation rates, which indicate successful interventions and likely reduction in new infestations;
- Data showing availability and distribution of information and educational materials from the surveillance system, coupled with data showing increased knowledge of residents and housing employees;
- Documentation showing a reduction in bed bug control expenses over time;
- Inspections, both routine and non-routine, revealing less frequent and smaller populations of bed bugs;
- Verification that infestations are confined to one dwelling area (not multiple areas in a home or adjacent units);
- Documentation from monitors and resident surveys showing no activity for months after one or two treatments of an active infestation; and
- Satisfaction surveys that show an increase in satisfaction.

# Priority Area III: Education and Communication

How much participants know about bed bugs often determines how willing they are to cooperate with prevention and control activities. Knowledge also often determines the outcome of IPM plan implementation. Information about bed bug basics, prevention and control is critical for all stakeholders so they can contribute to pest management efforts. Training and communications materials that are current, targeted, science-based and consistent are key to a successful program. In addition, target audience members are most likely to access the information if it is available in plain language (see www.plainlanguage.gov). Although pest control companies are on the front lines of the bed bug battle, they vary in their ability and willingness to train employees and educate clients. In addition, existing training curricula frequently lack the latest, science-based information about bed bugs.
To improve the quality of education and training materials, the workgroup recognizes the need for

- Defining core competencies for groups that need training or education;
- Setting audience-specific learning objectives (based on core competencies) for diverse stakeholders involved in bed bug management;
- Ensuring sufficient numbers of effective trainers to help convey consistent information and quality delivery;
- Promoting current, science-based messages (pulling from multiple websites or training from some sources can give an incomplete or incorrect understanding of bed bugs and their management);
- Tailoring messages to the target community using cultural considerations and current needs, recognizing that settings at risk for infestation (for example, offices, shelters, group homes, hostels and camps) may routinely have uninformed people rotating in and out;
- Reducing the costs of developing and disseminating high-quality education, training and communications materials for all stakeholders;
- Understanding that the greatest cost savings may result from prevention programs;
- Refining models of community coordination to serve as templates for minimizing costs; and
- Communicating with people who have different learning styles or who rely on electronic communications such as social media as their primary source for learning.

# Actions to Improve Education and Communications

Training efforts will reach the most people in the most effective way when they use current information based on standardized learning objectives that demonstrate core competencies. In addition to bed bug-specific training programs, cooperating with broader training programs to discuss bed bugs (such as Healthy Homes, IPM, and training for management of other pests) can be cost effective. Training can be provided in classrooms, at the workplace, and through webinars. Regardless of format, regularly updating training programs is necessary to keep information current. Developing a federally endorsed set of core competencies and learning objectives for bed bug training may help define the level of training needed in a bed bug management program. A professional certification program indicating a level of knowledge, experience and educational ability would help consumers select a qualified training provider. Interaction among communities could lead to more effective and efficient use of resources. Costs can be significantly reduced by using existing strategies for communications and instructional design whenever possible. Greater use of the clearinghouse portion of EPA's bed bug website could help management programs format and tailor communications to a target audience. The website

- Provides accurate information on bed bug management from unbiased sources;
- Discredits myths linking sanitation, poverty and immigration status to bed bugs;
- Offers consistent information; and
- Promotes more efficient communications on a local level.

For example, the clearinghouse could be a repository for educational materials such as in-depth training developed by professionals, including videos on how to inspect, caulk crevices, make your bed an "island," or prevent bed bug transport.

# Measuring Progress

Measuring the success of a training program is critical to determining whether those resources are being used productively. Core competencies should align with measurable learning objectives, and the objectives should be audience-specific for the various stakeholders and/or sectors involved in bed bug management.
Once developed, communities could use these standard objectives to measure the level of training expertise in a community. Nationally, achievement of objectives could be compared across communities to measure success in communication and training programs. Likewise, communicators can track efforts and evaluate the results to support efficient use of resources and improve effectiveness of messages. Ideally, target audiences have the opportunity to provide feedback. By using the feedback, communities can provide valuable information to those developing messages for use in public forums and social media. The following are examples of data that could assess the effectiveness of communication strategies and messages:

- Documented results from satisfaction surveys of customers who pay to have pest management professionals rid their homes/facilities of bed bugs;
- Results from focus groups during development of communication strategies and after dissemination to determine whether the target audiences receive the messages, understand them, and take the recommended steps to prevent or control bed bug infestations;
- Evaluations of training from interviews of participants to determine their perceptions about the effectiveness of communication strategies; and
- Before-and-after surveys analyzing training participants' behavior changes with regard to bed bugs.

# Priority Area IV: Research

When the numbers and geographical distribution of bed bugs began to increase, pest-specific knowledge was based on previous, limited research. Since then, our understanding of bed bugs and their management has improved. Newer research about bed bugs has answered many questions and allows program planners and the American public to more effectively manage infestations. Research activity can be divided into two areas: basic and applied. Basic research answers questions about the biology of the bed bug in its environment and its association with hosts. Not every basic research study results in a practical application, but unexpected answers to basic research questions may lead to innovative solutions. Peer-reviewed basic research on topics such as colony growth, movement behavior and the pest's apparent inability to transmit pathogens that cause human diseases would improve the current body of knowledge about this pest. Applied research provides new methods for IPM or new information that allows better pest control as part of an IPM program. Such research activity may lead to new tools or evaluation of existing tools. Applied research can also bring together many techniques into a clear, site-specific IPM plan to test the effectiveness of the plan. Research pertaining to the priority areas will likely come from

- Academic institutions (for example, departments of entomology, schools of medicine, schools of public health, departments of social work, departments of communication, departments of consumer economics, and schools of architecture);
- Private industry (for example, insecticide manufacturers, product formulators, venture companies, and pest control firms);
- User group associations (for example, builders, hoteliers, private housing groups, and insecticide manufacturers);
- International research entities; and
- Federal, state and local governments.

Because both basic and applied research will provide the foundation for new, more effective control options, the workgroup recognizes the need to

- Promote partnerships that leverage resources and expertise;
- Integrate the results of peer-reviewed research into management programs through education, training and communications;
- Further investigate potential health effects from bed bugs, including the potential to trigger an allergic reaction;
- Improve access to study sites and the potential use of human subjects;
- Quantify the costs for control and prevention and the value of tailoring intervention efforts to unique communities;
- Develop inexpensive, practical control and prevention protocols for residents, owners and managers;
- Improve access to research colonies, including field strains;
- Maintain defined strains of bed bugs with known resistance to insecticides;
- Create a source of reagents (for example, plasmids, antibodies, DNA and other materials), samples, or testing protocols; and
- Integrate results with international collaborators to make research and, ultimately, control more effective and less costly.

# Actions to Improve Research

Research is important to improve bed bug management techniques. Basic research is often the source of innovative new solutions, while applied research directly supports information used in establishing pest control best practices. Both are necessary to build a foundation of knowledge that can be used to develop solutions. Regardless of research questions, the workgroup suggests researchers use the following guiding principles in their work:

- Strive to serve all members of American society with appropriate control techniques;
- Minimize costs while maximizing the benefits from the products of research efforts; and
- Communicate results effectively and efficiently to stakeholders.

Meetings that bring researchers together could include forums for discussion of strategies for research, technology, etc. Researchers may coordinate scientific efforts with professionals from other specialties to develop new solutions and to use methods as effectively as possible. For example, working with scientists studying related organisms could help inform approaches relevant to bed bugs. The following systems may facilitate federal/academic/industrial partnerships:

- Small Business Innovation Research: a program that enables small businesses to explore their technological potential and provides the incentive to profit from commercialization.
- Cooperative Research and Development Agreement: an agreement between one or more federal laboratories and one or more nonfederal parties through which the federal laboratory provides personnel, services, facilities, equipment or other resources toward the conduct of specified research or development efforts.
- Specific Cooperative Agreement: an agreement between a federal agency and another party that describes in detail a jointly planned and executed research program or project of mutual interest between the parties to which both contribute resources.

# Measuring Progress

Like all of the other priority areas, measuring research efforts is critical to evaluating and directing research funding. Improvements in basic research can be measured by looking for increases in the number of

- Publications of research outcomes in peer-reviewed journals;
- New patents;
- Shared resources that can be used by the bed bug research community;
- Research projects that contribute to solutions; and
- Trained bed bug biologists.

In addition to the increases listed above, the following may indicate an improvement in applied research:

- Transfers of intellectual property;
- Translation of research into operational manuals and other sources of information for those who manage bed bugs;
- Improved, more effective products available to the public and businesses;
- Improved communication with the public and businesses;
- A current baseline of the problem;
- Comparison of the outcome of interventions to a baseline; and
- New strategies for detection, prevention and control.

# Conclusions

Integrating several methods for controlling bed bugs in response to IPM surveillance data is likely to be more successful against bed bugs than application of any single method. Reducing bed bug populations throughout a community provides an opportunity to reduce the probability of new infestations. With few exceptions, the federal government does not have a role in direct interventions against bed bugs, but it does have an important role in providing reliable information, coordinating stakeholders and providing resources for research to achieve long-term solutions. Many other stakeholders, such as nongovernmental organizations, state and local governments, school associations and community coalitions, have been active in efforts to manage the bed bug problem. Their input has been important in crafting this strategy. Since problems associated with bed bugs increased in the United States, research and development has produced better products and methods for control. As improvements continue, indications are that IPM is the best approach in most situations. Public awareness and dissemination of accurate information are helpful for earlier detection, greater chances of successful elimination of infestations, reduction in spread of infestations and more efficient use of resources. This strategy advocates a logical, integrated approach to bed bug management, with a focus on cooperation among all levels of government and communities. Effective implementation of this strategy should decrease costs to communities and achieve better bed bug control. Continuous measurement of progress will be key to efficient use of resources and essential to prevent the recurrence of widespread infestations in the United States.

# Appendix A: Key Elements of Successful Bed Bug Management

Although bed bug management programs vary regionally, successful programs frequently share common elements. The workgroup has identified five key elements generally found in well-run, successful programs:

- Collaboration: Programs that draw on the strengths of the various participants and include diverse stakeholders are better able to improve outcomes and broaden the reach of their efforts. That is, they accomplish more as a group than as independent programs.
- Plain Language, Targeted: Materials and educational efforts are most likely to be effective when project planners define the audience, customize materials for that audience and target delivery of materials. Developing resources and educational programs in plain language helps audiences find what they need, understand what they find and use what they find to meet their needs (see http://www.plainlanguage.gov/).
- IPM-Based: Bed bug control is most successful when it uses an integrated pest management (IPM) approach. IPM is an effective and safe approach to pest management that relies on a combination of common-sense practices that present the least possible hazard to people, property and the environment. IPM takes advantage of all appropriate pest management options, including the judicious use of pesticides. IPM programs use current, comprehensive information on the life cycles of pests and their interaction with the environment to create programs that are efficient, effective and safe. A site-specific IPM plan helps ensure use of standard operating procedures and coordination of the efforts of everyone who has a role in pest management. More information on IPM can be found in the USDA National Roadmap for Integrated Pest Management (http://www.csrees.usda.gov/nea/pest/pdfs/nat_ipm_roadmap.pdf).
- Science-Based: Recommendations are most likely to be consistent and successful when project planners base them on objective, accepted scientific evidence. Recommendations based on assumptions result in wasted or duplicative effort and resources.
- Evaluated for Measurable Success: Developing sustainable programs, especially over time or distance, is most easily accomplished by evaluating the success of individual intervention efforts. When communities objectively measure the success of their community-wide communication and control efforts, they are better able to use resources most effectively and at lower costs. Sharing successful results is also important for improving the quality of efforts across communities and supporting efficient use of resources.

# Appendix B: The Federal Bed Bug Workgroup

A recommendation from the first National Bed Bug Summit led five federal agencies to establish the Federal Bed Bug Workgroup in August 2009. The current workgroup comprises representatives from the Centers for Disease Control and Prevention (CDC), the Department of Defense (DoD), the Environmental Protection Agency (EPA), the Department of Housing and Urban Development (HUD), and the U.S. Department of Agriculture (USDA). The workgroup is a cooperative effort; no single federal agency has the lead in the fight against bed bugs. Because each agency's mission influences its bed bug work, the individual missions and agency-specific activities are listed below.

# Centers for Disease Control and Prevention (CDC)

CDC collaborates to create the expertise, information and tools that people and communities need to protect their health through health promotion; prevention of disease, injury and disability; and preparedness for new health threats. CDC offers live and online courses about the importance of IPM, including the biology and control of pests. The course trains environmental health, public health, pest management, and other professionals in the principles of effective pest control, including bed bugs, and offers a special field-based exercise that trains participants to conduct an inspection of a potentially infested bedroom.

# Department of Defense (DoD)

DoD addresses bed bugs and their management from three different perspectives and through multiple agencies. Technical consulting, information sharing, educational efforts and control actions may involve uniformed and civilian entomologists at each of these levels:

- Policy: The Armed Forces Pest Management Board (AFPMB) provides overall information and develops and provides general policies, guidance, training and consultation on pest management to other DoD elements and whomever they support.
- Technical training: Preventive medicine staff members and pest management professionals (PMPs) receive routine periodic technical and certification training and guidance, as needed, from service-specific regional or national agencies.
- Bed bug management: Bed bug management programs and specific actions are carried out at the local level using IPM strategies and techniques, involving all local stakeholders. Those typically include members of the affected human populations, local public health or medical authorities, facilities managers and DoD military, civilian or appropriately contracted PMPs.

The AFPMB is an arm of DoD that recommends policy, provides guidance, and coordinates the exchange of information on all matters related to pest management throughout the DoD. The AFPMB's mission is to ensure that environmentally sound and effective programs are available to prevent pests and disease vectors from adversely affecting DoD operations.

# Department of Housing and Urban Development (HUD)

HUD's mission is to "create strong, sustainable, inclusive communities and quality affordable homes for all. HUD is working to strengthen the housing market to bolster the economy and protect consumers; meet the need for quality affordable rental homes; utilize housing as a platform for improving quality of life; build inclusive and sustainable communities free from discrimination; and transform the way HUD does business." HUD promotes a collaborative approach to bed bug management in public and Indian housing (HUD Notices PIH 2011-22 and PIH 2012-17, respectively, and H 2012). While bed bug infestations are not limited to the housing sector, quality housing and strong communities are undermined by the presence of bed bug infestations. Prevention and control of bed bug infestations are enhanced by a team approach to the problem, in which affected and vulnerable housing providers, residents, pest management professionals, public health professionals and others focus their different perspectives and resources on the common problem. HUD is particularly concerned about effects on vulnerable populations, including underserved groups, such as elderly and disabled individuals, and owners and residents who lack the resources to effectively deal with bed bug infestations. Education of all involved parties, especially with regard to prevention methods, is a priority for HUD.

# Department of Agriculture (USDA)

Within the USDA Research, Education, and Economics mission area, the National Institute of Food and Agriculture (NIFA) and the Agricultural Research Service (ARS) provide federal leadership in creating and disseminating knowledge spanning the biological, physical, and social sciences related to agricultural research, extension, and higher education. NIFA's mission is to lead food and agricultural sciences by supporting research, education, and extension programs in the Land-Grant University System and other partner organizations. Funding from NIFA supports bed bug researchers and extension personnel at several Land-Grant universities. EPA's dedicated bed bug website includes science-based information developed by these university experts. This work leverages other efforts in ARS on the chemical ecology of insects and the development of new insecticides, and it has resulted in the discovery of new kinds of chemical attractants for bed bugs and new insecticides effective against resistant bed bugs. ARS' problem-solving orientation focuses on bringing together many scientific skills to find innovative solutions.

# Environmental Protection Agency (EPA)

EPA's mission is to protect human health and the environment.
For bed bugs, like other pests of public health significance, EPA has a statutory charge to ensure that pesticides are (1) safe and (2) effective against the pests on their labels. EPA's Office of Pesticide Programs works to ensure that pest management professionals and the public have access to the latest information on effective bed bug control tools. EPA recognizes that certain bed bug populations in communities across the nation are becoming more resistant to many of the existing pesticides. EPA is also working to educate the general public, pest professionals and public health officials about bed bug biology and IPM, both of which are critical to long-term bed bug control.
Universal HIV testing of pregnant women in the United States is the key to prevention of mother-to-child transmission of HIV. Repeat testing in the third trimester and rapid HIV testing at labor and delivery are additional strategies to further reduce the rate of perinatal HIV transmission. Prevention of mother-to-child transmission of HIV is most effective when antiretroviral drugs are received by the mother during her pregnancy and continued through delivery and then administered to the infant after birth. Antiretroviral drugs are effective in reducing the risk of mother-to-child transmission of HIV even when prophylaxis is started for the infant soon after birth. New rapid testing methods allow identification of HIV-infected women or HIV-exposed infants in 20 to 60 minutes. The American Academy of Pediatrics recommends documented, routine HIV testing for all pregnant women in the United States after notifying the patient that testing will be performed, unless the patient declines HIV testing ("opt-out" consent or "right of refusal"). For women in labor with undocumented HIV-infection status during the current pregnancy, immediate maternal HIV testing with opt-out consent, using a rapid HIV antibody test, is recommended. Positive HIV antibody screening test results should be confirmed with immunofluorescent antibody or Western blot assay. For women with a positive rapid HIV antibody test result, antiretroviral prophylaxis should be administered promptly to the mother and newborn infant on the basis of the positive result of the rapid antibody test, without waiting for results of confirmatory HIV testing. If the confirmatory test result is negative, then prophylaxis should be discontinued. For a newborn infant whose mother's HIV serostatus is unknown, the health care professional should perform rapid HIV antibody testing on the mother or on the newborn infant, with results reported to the health care professional no later than 12 hours after the infant's birth. If the rapid HIV antibody test result is positive, antiretroviral prophylaxis should be instituted as soon as possible after birth but certainly by 12 hours after delivery, pending completion of confirmatory HIV testing. The mother should be counseled not to breastfeed the infant. Assistance with immediate initiation of hand and pump expression to stimulate milk production should be offered to the mother, given the possibility that the confirmatory test result may be negative. If the confirmatory test result is negative, then prophylaxis should be stopped and breastfeeding may be initiated. If the confirmatory test result is positive, infants should receive antiretroviral prophylaxis for 6 weeks after birth, and the mother should not breastfeed the infant.

# INTRODUCTION

Continuing technologic and medical advances in the diagnosis, prevention, and treatment of pediatric HIV infection require ongoing assessment and review of recommendations relating to pediatric HIV infection, including recommendations regarding prenatal and perinatal HIV counseling and testing. Current guidelines are consistent in their recognition of the importance of universal HIV testing of pregnant women in the United States as the key to prevention of mother-to-child transmission (MTCT) (also referred to as vertical or perinatal transmission) of HIV. The American Academy of Pediatrics (AAP) continues to support these guidelines.
This policy statement updates the data that support the guidelines and suggests ways to continue improving the implementation of the recommendations for universal testing during routine prenatal care.

# WHY IS A NEW STATEMENT NEEDED NOW?

There is continued MTCT of HIV in the United States; 397 infants were infected through MTCT in 1999-2001 in areas conducting enhanced perinatal surveillance. 1 These infant infections occurred despite the availability of efficacious interventions for preventing such transmission (antiretroviral [ARV] prophylaxis to mother and infant, elective cesarean delivery before the onset of labor and before rupture of membranes, and complete avoidance of breastfeeding). 2 In recent years, lack of identification of maternal HIV-infection status has been the primary reason for new infant HIV infections; effective interventions cannot be implemented unless maternal HIV status is known. Rapid HIV antibody testing methods now allow identification of HIV-infected women or HIV-exposed infants in 20 to 60 minutes (see also www.cdc.gov/hiv/topics/testing/rapid/index.htm). Studies have demonstrated the effectiveness of ARV prophylaxis for preventing MTCT of HIV, even when prophylaxis is initiated after the birth of the infant. New guidelines from the Centers for Disease Control and Prevention (CDC) strengthen the recommendations for routine HIV testing during pregnancy. 11 This report summarizes the recommendations of the AAP regarding HIV testing and prophylaxis to prevent MTCT of HIV, which are consistent with and supportive of the CDC recommendations.

# HIV TESTING

For adults and children 18 months or older, "conventional testing" of blood for the presence of antibodies against HIV is performed by using a screening enzyme immunoassay (EIA). If the initial EIA result is positive, the laboratory repeats the EIA on the same blood sample, and if the repeat result is positive, the remainder of the blood is used to perform a confirmatory test, either immunofluorescent antibody (IFA) or Western blot assay. Because children younger than 18 months may have a positive HIV antibody test result because of the presence of passively acquired maternal antibody, assays that directly detect HIV DNA or RNA (generically referred to as HIV nucleic acid amplification tests) are required for diagnosis of HIV infection in children younger than 18 months. 12,13 Results of conventional HIV antibody tests and HIV nucleic acid amplification tests usually require hours to days to be returned. "Rapid" HIV antibody tests have been available since 2002. 3,5,14 These are screening tests, which means that a positive test result requires confirmation with an IFA or Western blot assay. The rapid antibody test is more sensitive and more specific than the conventional EIA, so a conventional EIA is not used as the confirmatory test for a rapid HIV antibody test. 15 In 1 study of the feasibility and benefit of use of the rapid test in pregnant women already in labor when they presented to health care professionals, the positive predictive value of the rapid test was shown to be higher than that of the conventional EIA. 4 That study of 4849 women (HIV prevalence: 7 in 1000) demonstrated that for the conventional EIA, the sensitivity was 100%, specificity was 99.8%, and the positive predictive value was 76%, whereas for the rapid HIV test, the sensitivity was 100%, specificity was 99.9%, and the positive predictive value was 90%. 4
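The reported positive predictive values follow from Bayes' theorem at the study prevalence of 7 in 1000: PPV = (sensitivity × prevalence) / (sensitivity × prevalence + (1 − specificity) × (1 − prevalence)). A minimal sketch of that arithmetic, using the rounded sensitivity and specificity reported above, so the computed values only approximate the study's empirical 76% and 90%:

```python
def ppv(sensitivity, specificity, prevalence):
    """Positive predictive value via Bayes' theorem."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

prevalence = 7 / 1000  # HIV prevalence in the study population

# Conventional EIA: sensitivity 100%, specificity 99.8%
print(f"EIA PPV:   {ppv(1.00, 0.998, prevalence):.0%}")   # ~78%
# Rapid test: sensitivity 100%, specificity 99.9%
print(f"Rapid PPV: {ppv(1.00, 0.999, prevalence):.0%}")   # ~88%
```

The computed 78% and 88% bracket the reported empirical 76% and 90%, with the small differences attributable to rounding of the reported test characteristics; the study's point stands unchanged: at low prevalence, even a modest gain in specificity substantially raises the positive predictive value.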
Although it is usually recommended that all HIV antibody screening tests be confirmed before HIV-specific treatments are started, this can take several weeks, because there is often a delay between availability of the results of the screening test and results of the confirmatory IFA or Western blot assay. This is not problematic for a woman identified early in pregnancy, because initiation of ARV prophylaxis against MTCT generally is not started until the second trimester. However, it is a problem for a woman who is late in pregnancy or in labor or is being tested immediately postpartum. In such instances, time is of the essence in initiating ARV prophylaxis to prevent MTCT. Therefore, when women have positive results of rapid antibody screening late in pregnancy, during labor, or within the first few hours of delivery of the infant, ARV prophylaxis to prevent MTCT should be instituted promptly on the basis of the positive results of the screening test. A maternal blood sample for a confirmatory HIV antibody assay should be obtained and sent for testing. 4,16,17 Prophylaxis should be stopped if the confirmatory test result is negative.

# BENEFITS OF HIV TESTING

HIV testing during pregnancy allows identification of HIV infection in women who might not know they are infected. This is important for the health of the woman, because knowledge of her HIV-infection status will allow appropriate evaluation, including CD4+ T-lymphocyte count and HIV viral load quantification, initiation of comprehensive care, and appropriate ARV treatment. HIV antibody testing early in pregnancy has the added benefit of allowing the most effective interventions to prevent MTCT of HIV to be initiated, including ARV prophylaxis, planning an appropriate mode of delivery (elective cesarean delivery or vaginal delivery, depending on maternal viral load near delivery), and avoidance of breastfeeding. HIV antibody testing later in pregnancy, or even after delivery of the newborn infant, still allows initiation of effective ARV interventions that can reduce the risk of MTCT of HIV.

# Benefits of HIV Testing Early in Pregnancy

# Single-Drug ARV Prophylaxis and MTCT of HIV

The 3-part 1994 Pediatric AIDS Clinical Trials Group study 076 (PACTG 076) regimen is the starting point for understanding ARV prophylaxis and the prevention of MTCT of HIV. Among nonbreastfeeding pregnant women with HIV infection and a CD4+ T-lymphocyte count of greater than 200 cells per mm³, oral zidovudine (ZDV) prophylaxis initiated after the first trimester, followed by administration of intravenous ZDV beginning at the onset of labor and continued until the cord is clamped, combined with 6 weeks of oral ZDV administered to the infant (2 mg/kg per dose every 6 hours), reduces the rate of MTCT of HIV from 25% to 8%. 18 Among nonbreastfeeding pregnant women with HIV infection, initiation of ZDV prophylaxis before the 28th week of pregnancy is associated with a lower risk of in utero transmission of HIV than is prophylaxis initiated at 35 weeks of pregnancy. 19

# Combination ARV Regimens and MTCT of HIV

Regimens that include combinations of 3 ARV drugs are more effective for prevention of MTCT of HIV than is ZDV alone. 20,21 For nonbreastfeeding pregnant women with HIV infection, successful combination therapy with 3 ARV drugs and resultant reduction of maternal plasma virus load to below the limits of detection on sensitive assays (the goal of standard ARV therapy) is associated with rates of MTCT of HIV of less than 1%.
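To make the infant component of the PACTG 076 regimen cited above (oral ZDV at 2 mg/kg per dose every 6 hours for 6 weeks) concrete, the following sketch computes the per-dose amount and total number of doses; the example weight is hypothetical, and actual dosing decisions belong to the treating clinician.

```python
def zdv_infant_course(weight_kg, mg_per_kg=2.0, interval_h=6, weeks=6):
    """Per-dose amount and total dose count for the 6-week oral ZDV course
    (2 mg/kg per dose every 6 hours), per the PACTG 076 regimen cited above."""
    dose_mg = mg_per_kg * weight_kg
    doses_per_day = 24 // interval_h
    total_doses = doses_per_day * 7 * weeks
    return dose_mg, doses_per_day, total_doses

# Hypothetical 3.0-kg newborn: 6 mg per dose, 4 doses/day, 168 doses over 6 weeks
print(zdv_infant_course(3.0))  # (6.0, 4, 168)
```

The dose count alone (168 doses for a hypothetical 6-week course) underscores why the guidance discussed below emphasizes sending families home with the medication itself and clear administration instructions, rather than only a prescription.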
Current US Public Health Service (USPHS) guidelines for prevention of MTCT of HIV recommend use of combination ARV regimens including at least 3 ARV drugs during pregnancy and labor for all pregnant women with HIV infection. ARV drugs are discontinued after delivery unless the mother requires ARV therapy for her own health, in which case ARV therapy would be continued following guidelines for nonpregnant HIV-infected adults. 25,26 Intravenous ZDV should be administered to the pregnant woman during labor until the cord is clamped, with the other ARV drugs in the regimen continued orally during labor, and all infants should receive 6 weeks of ZDV prophylaxis. 25 The full 6-week course of infant ARV prophylaxis, and careful instructions for its administration, should be provided to the family before discharge from the hospital. A prescription and recommendations to purchase ZDV for use by the infant are not adequate to ensure appropriate prophylaxis. In some states, infants may not be registered for insurance for a few weeks after birth, so even if the family has insurance, coverage may not be immediately available to pay for health care costs for the infant. Some families have health insurance that covers inpatient costs but not prescription medications. Outpatient pharmacies may not stock the infant-dosage form of ZDV. At hospital discharge, the family should be supplied with the medication itself, along with careful instructions for its administration, not just a prescription. # Mode of Delivery and MTCT of HIV Elective cesarean delivery (performed before onset of labor and before rupture of membranes) can prevent MTCT of HIV 27 and is associated with at least a 50% decrease in the risk of MTCT among HIV-infected women either not receiving ARV drugs or receiving ZDV alone. 28 Although several studies have suggested that elective cesarean delivery performed before labor onset and before rupture of membranes may remain effective among HIV-infected women with low virus load (low either intrinsically off ARV therapy or low because of administration of combination ARV regimens during pregnancy), further research is required to definitively demonstrate whether elective cesarean delivery can further reduce the risk of MTCT of HIV for women being successfully treated with combinations of 3 or more ARV drugs (eg, virus load undetectable on sensitive assays while on a combination 3-drug ARV regimen). 29 Current American College of Obstetricians and Gynecologists (ACOG) 30 and USPHS guidelines for prevention of MTCT recommend elective cesarean delivery at 38 weeks' gestation for all HIV-infected pregnant women with HIV RNA levels greater than 1000 copies per mL near the time of delivery (or who have unknown viral load), regardless of the type of maternal ARV prophylaxis being received. 14,25 Breastfeeding Breastfeeding confers approximately 9% to 15% excess risk of MTCT of HIV. In the United States, because safe infant feeding alternatives exist, women with HIV infection should not breastfeed regardless of maternal ARV use. 31,32 Benefits of HIV Testing in the Peripartum and Newborn Periods When HIV antibody testing is performed before or during pregnancy, if a woman is found to be infected with HIV, it is possible to use all 3 known efficacious interventions for prevention of MTCT of HIV (prepartum and intrapartum maternal and postpartum infant ARV prophylaxis, elective cesarean delivery, and avoidance of breastfeeding). 
However, if HIV diagnostic testing is not performed until the peripartum or postpartum periods, then only 2 of the 3 interventions can be implemented (intrapartum maternal and postpartum infant ARV prophylaxis and avoidance of breastfeeding). Although ARV prophylaxis initiated during pregnancy is most effective at reducing MTCT of HIV, prophylaxis initiated for the pregnant woman at the time of labor and continued to the infant after birth, or even prophylaxis only administered to the infant after birth, can reduce the risk of MTCT of HIV compared with no prophylaxis. 6,7,10 Moreover, identification of HIV exposure allows the pediatrician to offer advice on appropriate alternatives to breastfeeding, follow-up testing, and prophylaxis against opportunistic infections for the infant as well as referral of the mother for care of her HIV infection. 12 For the woman with HIV infection identified at the time of labor, maternal prophylaxis with intravenous ZDV, together with infant prophylaxis with 6 weeks of ZDV, is associated with an approximately 60% lower risk of MTCT of HIV 7 compared with no prophylaxis. For infants whose mothers received no ARV therapy during pregnancy or labor, prompt (optimally as soon as possible after birth but certainly within 12 hours after birth) prophylaxis of the infant with ZDV alone for 6 weeks is associated with a 50% reduction in the risk of MTCT of HIV compared with no prophylaxis. 7 In certain situations, some experts combine the 6-week infant ZDV prophylaxis regimen with additional ARV drugs. Such situations might include infants born to mothers who received prenatal ARV drugs but had suboptimal viral suppression at delivery, particularly if the infant was delivered vaginally; infants born to mothers who have received only intrapartum ARV drugs; infants born to mothers who have received no prepartum or intrapartum ARV drugs; and infants born to mothers with known drug-resistant virus. Whether combining ZDV with other ARV drugs provides additional efficacy for prevention of MTCT of HIV has not been proven in clinical trials. In addition, appropriate ARV drug formulations and dosing regimens for neonates are incompletely defined for many drugs, and there are minimal data about the safety of combination ARV drugs in the neonate. Therefore, use of combination infant ARV prophylaxis involves complex balancing of potential benefits in terms of prevention of MTCT of HIV and risks in terms of toxicity to the infant. The USPHS guidelines for prevention of MTCT of HIV include extensive discussion of considerations for infant ARV prophylaxis regimens for different clinical scenarios and should be reviewed for specific recommendations. 25 If infant prophylaxis with ARV drugs in addition to ZDV is being considered, decisions and choice of ARV drugs should be determined in consultation with a practitioner who is experienced in care of infants with HIV infection. # Other Considerations Early identification of HIV-exposed infants allows (1) appropriate testing to identify HIV-infection status of the infant, (2) counseling of the mother regarding the risk of HIV transmission through breastfeeding and institution of appropriate infant feeding, and (3) prophylaxis with trimethoprim-sulfamethoxazole to prevent Pneumocystis jiroveci infection for infants whose HIV-infection status has not been determined or are identified as being HIV infected. 
12 # SUMMARY: PROPHYLAXIS AND TREATMENT OF PREGNANT WOMEN WITH HIV INFECTION AND THEIR INFANTS Guidelines for initiation of ARV therapy for pregnant women are the same as for nonpregnant HIV-infected adults and follow the USPHS "Guidelines for the Use of Antiretroviral Agents in HIV-1-Infected Adults and Adolescents" 26 except that the choice of ARV drugs includes special considerations related to pregnancy and fetal drug exposure as described in the "Recommendations for Use of Antiretroviral Drugs in Pregnant HIV-Infected Women for Maternal Health and Interventions to Reduce Perinatal HIV Transmission in the United States." 25 For women who need immediate initiation of ARV therapy for their own health, treatment should be initiated as soon as possible, including in the first trimester. For women who do not require treatment for their own health, the effectiveness of ARV prophylaxis depends on the timing of institution of such prophylaxis. For women identified as being HIV infected early in pregnancy, generally 3 ARV drugs are recommended for prophylaxis during their pregnancy and should be continued until the time of delivery. Delaying initiation of prophylaxis until after the first trimester can be considered if treatment of HIV infection is not needed for the woman's own health, intravenous ZDV is administered to the pregnant woman at the time of labor and continued until the cord is clamped, and the other ARV components of the regimen are continued orally during labor. Oral ZDV is administered to the infant for the first 6 weeks of life. 25 For women identified as being HIV infected during labor, intravenous ZDV is recommended together with infant ARV prophylaxis. For infants born to women who have not received prepartum or intrapartum ARV therapy, prophylaxis of the infant with 6 weeks of ZDV is recommended. In the latter 2 situations, some experts may administer additional ARV drugs to the mother and/or infant. The USPHS guidelines for prevention of MTCT of HIV should be consulted for detailed discussion of these more complex situations, and decisions for maternal and infant prophylaxis and therapy in such situations should be made in consultation with a practitioner who is experienced in care of infants with HIV infection. 25 # RISKS OF HIV TESTING IN THE PRENATAL AND NEWBORN PERIODS A positive HIV antibody test result for an infant identifies HIV infection in the mother and HIV exposure (with possible infection) of the infant. Therefore, even if testing is only performed on the infant after birth, if the infant is found to be HIV-seropositive, the infant's mother will be identified as having HIV infection, which can be associated with personal psychological trauma and societal stigma if the mother does not know that she is infected. Linkage of the newly identified HIV-infected mother to appropriate psychosocial supports and to HIV care programs is important. If the test result is found to be a false-positive, this psychological harm will have occurred needlessly. Thus, the fact that confirmatory testing is required to definitively diagnose HIV infection of the mother, and the need for rapid presumptive treatment of the infant to prevent MTCT should she be HIV infected, should be explained to the mother when performing rapid testing during labor or after delivery. ARV administration to the mother and/or infant may be associated with infant drug toxicity, and if the rapid test result is not confirmed to be positive, the benefit/risk ratio may not favor prophylaxis. 
However, the infant should have received only 1 to 2 days of ARV prophylaxis in such a situation, and short-term toxicity of ARV drugs is limited. Expedited confirmatory testing should be performed to ensure that results are reported quickly so that the duration of infant exposure to ARV drugs is minimized. # CONSENT FOR HIV TESTING Opt-out consent (documented patient notification, with testing to take place unless rejected by the patient) is associated with higher testing percentages than opt-in consent, and universal HIV screening of pregnant women with opt-out consent is recommended by the CDC, 11 the ACOG, 14 and the AAP. 39 As part of its recommendation, the CDC states that HIV screening should be included in the routine panel of prenatal screening tests for all pregnant women and that separate written consent for HIV testing should not be required. The CDC also states that general consent for medical care should be considered sufficient to encompass consent for HIV care. 11 In states where laws or regulations require written informed maternal consent for testing ("opt-in" consent), practitioners should obtain appropriate consent as required. A compendium of state HIV-testing laws can be found at www.ucsf.edu/hivcntr. In states where laws and regulations require written informed maternal consent for testing, practitioners should work to modify the laws or regulations to permit opt-out consent. Such an advocacy effort is best undertaken with a broad coalition of interested parties throughout the state, including state and local health departments, the state AAP chapter, representatives of the ACOG and the American Academy of Family Physi-cians, nursing groups, community groups interested in maternal health, and AIDS activist organizations. Mandatory HIV testing of the newborn infant whose mother has not been tested during pregnancy or in the immediate postpartum period has been associated with high rates of prenatal testing in New York state 7 and may act as a safety net for identifying infants who would not have been tested otherwise. There has not been a study to compare the percentage of women tested before delivery in programs with mandatory newborn testing compared with those with opt-out consent policies. Therefore, an evidence-based recommendation for or against mandatory newborn testing cannot be made. A few states have passed laws that require HIV testing of newborn infants without maternal consent when the HIV-infection status of the mother is unknown. In states where legislation aimed at mandating testing of newborn infants has been proposed, issues have been raised regarding the ethics and legality of this approach, because it diagnoses HIV infection in the mother without her consent for testing. In addition, concerns have been raised about the costs of such screening programs, given the already high numbers of mothers and newborn infants tested in voluntary (opt-out) programs. Regardless of the form of consent used in testing programs, it is important that the results of infant testing be returned as rapidly as possible so that ARV prophylaxis for the infant can be started promptly if needed. Infant ARV prophylaxis is likely to be less effective in the prevention of MTCT of HIV when started later than 12 hours after birth. Optimal prevention of MTCT of HIV requires identification of the mother's HIV status during pregnancy. 
# TIMING OF TESTING IN PREGNANCY As noted, testing of the pregnant woman early in pregnancy is recommended to allow informed and timely therapeutic decisions concerning health care for her and for prevention of MTCT of HIV. 11 A second HIV test during the third trimester has been shown to be costeffective under certain conditions, 40 and the CDC recommends that a second test be performed late in pregnancy but at Ͻ36 weeks' gestation for women who are in areas of high incidence, for women delivering in hospitals with HIV prevalence in pregnant women of at least 1 in 1000, and for women at high risk of acquiring HIV (women with a sexually transmitted infection diagnosed during pregnancy, injection drug users and their partners, women who exchange sex or money for drugs, women who are sex partners of HIV-infected persons, women who have had a new or more than 1 sex partner during pregnancy, or women with signs/symptoms of acute HIV infection). 11 However, because risk-based and prevalence-based testing programs may be difficult to implement, some practitioners and hospitals choose to test all pregnant women a second time late in pregnancy, even in the absence of specific prevalence or risk data. Because of the high levels of viral replication observed with acute HIV infection, women who become infected with HIV during pregnancy have a particularly high risk of transmitting HIV to their infants. For women whose HIV status is unknown at the time of presentation in labor, testing should be timed so that results are available to allow predelivery administration of prophylaxis if indicated. For a newborn infant whose mother's HIV status is unknown, testing should be performed quickly enough so that results can be available for infant ARV prophylaxis to begin within 12 hours of birth, as stated previously. 11 # CONCLUSIONS Universal HIV testing of pregnant women is standard care in the United States. Identification of HIV infection early in pregnancy allows the greatest ability to treat the pregnant woman for her HIV infection for her own health and to prevent MTCT of HIV. Rapid HIV antibody testing allows for timely identification of HIV infection in women even late in pregnancy, during labor, or in the immediate postpartum period as well as HIV exposure in their newborn infants. The results can be available quickly enough to implement successful ARV interventions that can reduce MTCT of HIV when administered to the mother started later in pregnancy or in labor or to the infant when administered within the first few hours of life. # RECOMMENDATIONS 1. Information about HIV infection, prevention of MTCT of HIV, and HIV antibody testing should be provided routinely as part of a comprehensive program of health care for pregnant women. 2. Documented, routine HIV antibody testing should be performed for all pregnant women in the United States after notifying the patient that testing will be performed, unless the patient declines HIV testing (opt-out consent or right of refusal). All HIV antibody testing should be performed in a manner consistent with state and local laws. 3. In states where laws and regulations require written informed maternal consent for testing, health care professionals should work to modify the laws or regulations to permit opt-out consent. 4. All programs for the detection of HIV infection in pregnant women and their infants should periodically evaluate the proportion of women who are not tested. 
Programs in which an unacceptably high proportion of women do not receive HIV antibody testing should examine the reasons and make appropriate program modifications as needed. 5. Repeat HIV antibody testing is recommended in the third trimester, preferably before 36 weeks' gestation, for women in states with high HIV prevalence in women 15 to 45 years of age, for women delivering in hospitals with HIV prevalence of 1 or more in 1000 pregnant women screened, or for women at increased risk of acquiring HIV (women with a sexually transmitted infection diagnosed during pregnancy, injection drug users and their partners, women who exchange sex or money for drugs, women who are sex partners of HIV-infected per-sons, women who have had a new or more than 1 sex partner during pregnancy, or women with signs/symptoms of acute HIV infection). Because prevalence-based testing may be difficult to implement and individual risk assessment is unreliable and the risk of MTCT of HIV is high in women who first acquire HIV infection during pregnancy, some experts recommend that repeat HIV screening be considered for all pregnant women in the third trimester. 6. For women in labor with undocumented HIV-infection status during the current pregnancy, maternal HIV antibody testing with opt-out consent, using a rapid HIV antibody test, is recommended. For women with a positive HIV rapid antibody test result, ARV prophylaxis should be administered to the mother and newborn infant on the basis of the positive rapid antibody test result without waiting for results of confirmatory HIV testing, and breastfeeding should not occur. Assistance with the immediate initiation of hand and pump expression to stimulate milk production should be offered to the mother, given the possibility that the confirmatory test results may be negative. If confirmatory test results are negative, prophylaxis should be stopped and breastfeeding may be initiated. 7. Rapid HIV antibody testing should be available on a 24-hour basis at all facilities with an obstetric unit and/or newborn nursery of any level. 8. The health care professional for the newborn infant needs to be informed promptly of maternal HIV serostatus so that appropriate care and testing of the newborn infant can be accomplished and so that ARV prophylaxis can be administered to HIV-exposed infants. The infant medical chart needs to contain documentation of the maternal HIV-infection status. Presence of maternal HIV-infection status on the maternal and infant record should be a standard measure of the adequacy of hospital care for the mother and infant. 9. For newborn infants whose mother's HIV serostatus is unknown, the newborn infant's health care professional should order rapid HIV antibody testing to be performed for the mother or the newborn, with appropriate consent as required by state or local law. Results should be reported to health care professionals quickly enough to allow effective ARV prophylaxis to be administered, if indicated, to the infant as soon as possible after birth but certainly by 12 hours after birth. ARV prophylaxis for the newborn infant should be administered promptly on the basis of a positive rapid antibody test result without waiting for results of confirmatory HIV testing. Breastfeeding should be avoided. Confirmatory testing should be performed, and assistance with hand and pump expression to stimulate milk production should be offered to the mother, given the possibility that the confirmatory test results may be negative. 
If confirmatory test results are negative (indicating that the infant was not truly exposed to HIV), then ARV prophylaxis should be stopped and breastfeeding may be initiated. If the confirmatory test result is positive, infants should receive ARV prophylaxis for 6 weeks after birth, and they should not breastfeed. Prophylaxis is most effective if administered within 12 hours of birth but may still be effective when administered as late as at 48 hours of life. 10. The full 6-week course of infant ARV prophylaxis, and careful instructions for its administration, should be provided to the family before discharge from the hospital. Payment for this should be covered by all third-party payers. 11. If the mother or infant has a positive test result for HIV antibody, the infant should not breastfeed. 12. In the absence of parental availability for consent to test the newborn infant for HIV antibody, the newborn infant should be tested, ideally within the first 12 hours of life. State and local jurisdictions need to develop procedures to facilitate the rapid evaluation and testing of the infant. 13. For infants of unknown HIV exposure status at the first health supervision visit, HIV antibody testing with appropriate consent should be performed to guide appropriate care and follow-up testing if needed. 14. Care of the mother, fetus, newborn, and child with perinatal exposure to HIV should be performed in consultation with specialists in obstetric and pediatric HIV infection. # COMMITTEE ON PEDIATRIC AIDS, 2007-2008
Universal HIV testing of pregnant women in the United States is the key to prevention of mother-to-child transmission of HIV. Repeat testing in the third trimester and rapid HIV testing at labor and delivery are additional strategies to further reduce the rate of perinatal HIV transmission. Prevention of mother-to-child transmission of HIV is most effective when antiretroviral drugs are received by the mother during her pregnancy and continued through delivery and then administered to the infant after birth. Antiretroviral drugs are effective in reducing the risk of mother-to-child transmission of HIV even when prophylaxis is started for the infant soon after birth. New rapid testing methods allow identification of HIV-infected women or HIV-exposed infants in 20 to 60 minutes. The American Academy of Pediatrics recommends documented, routine HIV testing for all pregnant women in the United States after notifying the patient that testing will be performed, unless the patient declines HIV testing ("opt-out" consent or "right of refusal"). For women in labor with undocumented HIV-infection status during the current pregnancy, immediate maternal HIV testing with opt-out consent, using a rapid HIV antibody test, is recommended. Positive HIV antibody screening test results should be confirmed with immunofluorescent antibody or Western blot assay. For women with a positive rapid HIV antibody test result, antiretroviral prophylaxis should be administered promptly to the mother and newborn infant on the basis of the positive result of the rapid antibody test without waiting for results of confirmatory HIV testing. If the confirmatory test result is negative, then prophylaxis should be discontinued. For a newborn infant whose mother's HIV serostatus is unknown, the health care professional should perform rapid HIV antibody testing on the mother or on the newborn infant, with results reported to the health care professional no later than 12 hours after the infant's birth. If the rapid HIV antibody test result is positive, antiretroviral prophylaxis should be instituted as soon as possible after birth but certainly by 12 hours after delivery, pending completion of confirmatory HIV testing. The mother should be counseled not to breastfeed the infant. Assistance with immediate initiation of hand and pump expression to stimulate milk production should be offered to the mother, given the possibility that the confirmatory test result may be negative. If the confirmatory test result is negative, then prophylaxis should be stopped and breastfeeding may be initiated. If the confirmatory test result is positive, infants should receive antiretroviral prophylaxis for 6 weeks after birth, and the mother should not breastfeed the infant.

# INTRODUCTION
Continuing technologic and medical advances in the diagnosis, prevention, and treatment of pediatric HIV infection require ongoing assessment and review of recommendations relating to pediatric HIV infection, including recommendations regarding prenatal and perinatal HIV counseling and testing. Current guidelines are consistent in their recognition of the importance of universal HIV testing of pregnant women in the United States as the key to prevention of mother-to-child transmission (MTCT) (also referred to as vertical or perinatal transmission) of HIV. The American Academy of Pediatrics (AAP) continues to support these guidelines.
This policy statement updates the data that support the guidelines and suggests ways to continue improving the implementation of the recommendations for universal testing during routine prenatal care.

# WHY IS A NEW STATEMENT NEEDED NOW?
There is continued MTCT of HIV in the United States; 397 infants were infected through MTCT in 1999-2001 in areas conducting enhanced perinatal surveillance. 1 These infant infections occurred despite the availability of efficacious interventions for preventing such transmission (antiretroviral [ARV] prophylaxis to mother and infant, elective cesarean delivery before the onset of labor and before rupture of membranes, and complete avoidance of breastfeeding). 2 In recent years, lack of identification of maternal HIV-infection status has been the primary reason for new infant HIV infections; effective interventions cannot be implemented unless maternal HIV status is known. Rapid HIV antibody testing methods now allow identification of HIV-infected women or HIV-exposed infants in 20 to 60 minutes 3-5 (see also www.cdc.gov/hiv/topics/testing/rapid/index.htm). Studies have demonstrated the effectiveness of ARV prophylaxis for preventing MTCT of HIV, even when prophylaxis is initiated after the birth of the infant. 6-10 New guidelines from the Centers for Disease Control and Prevention (CDC) strengthen the recommendations for routine HIV testing during pregnancy. 11 This report summarizes the recommendations of the AAP regarding HIV testing and prophylaxis to prevent MTCT of HIV, which are consistent with and supportive of the CDC recommendations.

# HIV TESTING
For adults and children 18 months or older, "conventional testing" of blood for the presence of antibodies against HIV is performed by using a screening enzyme immunoassay (EIA). If the initial EIA result is positive, the laboratory repeats the EIA on the same blood sample, and if the repeat result is positive, the remainder of the blood is used to perform a confirmatory test, either immunofluorescent antibody (IFA) or Western blot assay. Because children younger than 18 months may have a positive HIV antibody test result because of the presence of passively acquired maternal antibody, assays that directly detect HIV DNA or RNA (generically referred to as HIV nucleic acid amplification tests) are required for diagnosis of HIV infection in children younger than 18 months. 12,13 Results of conventional HIV antibody tests and HIV nucleic acid amplification tests usually require hours to days to be returned. "Rapid" HIV antibody tests have been available since 2002. 3,5,14 These are screening tests, which means that a positive test result requires confirmation with an IFA or Western blot assay. The rapid antibody test is more sensitive and more specific than the conventional EIA, so a conventional EIA is not used as the confirmatory test for a rapid HIV antibody test. 15 In 1 study of the feasibility and benefit of use of the rapid test in pregnant women already in labor when they presented to health care professionals, the positive predictive value of the rapid test was shown to be higher than that of the conventional EIA. 4 That study of 4849 women (HIV prevalence: 7 in 1000) demonstrated that for the conventional EIA, the sensitivity was 100%, specificity was 99.8%, and the positive predictive value was 76%, whereas for the rapid HIV test, the sensitivity was 100%, specificity was 99.9%, and the positive predictive value was 90%. 4
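The interplay of prevalence, sensitivity, and specificity behind these predictive values can be checked with a short calculation. The sketch below is illustrative only: it plugs the cohort size, prevalence, and test characteristics quoted above into the standard definition of positive predictive value, and the results land within a couple of percentage points of the published figures (the residual difference is attributable to rounding in the source).

```python
# Positive predictive value (PPV) from prevalence, sensitivity, and
# specificity, using the figures quoted for the cohort of 4849 women.

def ppv(n_women, prevalence, sensitivity, specificity):
    """Expected PPV = true positives / all positives."""
    infected = n_women * prevalence
    uninfected = n_women - infected
    true_pos = infected * sensitivity
    false_pos = uninfected * (1.0 - specificity)
    return true_pos / (true_pos + false_pos)

N, PREV = 4849, 7 / 1000  # cohort size and HIV prevalence from the study

# Conventional EIA: sensitivity 100%, specificity 99.8%
print(f"EIA PPV:   {ppv(N, PREV, 1.00, 0.998):.0%}")   # ~78% (reported: 76%)

# Rapid test: sensitivity 100%, specificity 99.9%
print(f"Rapid PPV: {ppv(N, PREV, 1.00, 0.999):.0%}")   # ~88% (reported: 90%)
```

The calculation makes the clinical point concrete: at a prevalence of 7 in 1000, even a 0.1% gain in specificity removes roughly half of the false-positive results, which is why the rapid test's positive result carries more weight in a laboring patient.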
Although it is usually recommended that all HIV antibody screening tests be confirmed before HIV-specific treatments are started, this can take several weeks, because there is often a delay between availability of the results of the screening test and results of the confirmatory IFA or Western blot assay. This is not problematic for a woman identified early in pregnancy, because initiation of ARV prophylaxis against MTCT generally is not started until the second trimester. However, it is a problem for a woman who is late in pregnancy or in labor or is being tested immediately postpartum. In such instances, time is of the essence in initiating ARV prophylaxis to prevent MTCT. Therefore, when women have positive results of rapid antibody screening late in pregnancy, during labor, or within the first few hours of delivery of the infant, ARV prophylaxis to prevent MTCT should be instituted promptly on the basis of the positive results of the screening test. A maternal blood sample for a confirmatory HIV antibody assay should be obtained and sent for testing. 4,16,17 Prophylaxis should be stopped if the confirmatory test result is negative.

# BENEFITS OF HIV TESTING
HIV testing during pregnancy allows identification of HIV infection in women who might not know they are infected. This is important for the health of the woman, because knowledge of her HIV-infection status will allow appropriate evaluation, including CD4+ T-lymphocyte count and HIV viral load quantification, initiation of comprehensive care, and appropriate ARV treatment. HIV antibody testing early in pregnancy has the added benefit of allowing the most effective interventions to prevent MTCT of HIV to be initiated, including ARV prophylaxis, planning an appropriate mode of delivery (elective cesarean delivery or vaginal delivery, depending on maternal viral load near delivery), and avoidance of breastfeeding. HIV antibody testing later in pregnancy, or even after delivery of the newborn infant, still allows initiation of effective ARV interventions that can reduce the risk of MTCT of HIV.

# Benefits of HIV Testing Early in Pregnancy
Single-Drug ARV Prophylaxis and MTCT of HIV
The 3-part 1994 Pediatric AIDS Clinical Trials Group study 076 (PACTG 076) regimen is the starting point for understanding ARV prophylaxis and the prevention of MTCT of HIV. Among nonbreastfeeding pregnant women with HIV infection and a CD4+ T-lymphocyte count of greater than 200 cells per mm³, oral zidovudine (ZDV) prophylaxis initiated after the first trimester, followed by administration of intravenous ZDV beginning at the onset of labor and continued until the cord is clamped, combined with 6 weeks of oral ZDV administered to the infant (2 mg/kg per dose every 6 hours), reduces the rate of MTCT of HIV from 25% to 8%. 18 Among nonbreastfeeding pregnant women with HIV infection, initiation of ZDV prophylaxis before the 28th week of pregnancy is associated with a lower risk of in utero transmission of HIV than is prophylaxis initiated at 35 weeks of pregnancy. 19

# Combination ARV Regimens and MTCT of HIV
Regimens that include combinations of 3 ARV drugs are more effective for prevention of MTCT of HIV than is ZDV alone. 20,21 For nonbreastfeeding pregnant women with HIV infection, successful combination therapy with 3 ARV drugs and resultant reduction of maternal plasma virus load to below the limits of detection on sensitive assays (the goal of standard ARV therapy) is associated with rates of MTCT of HIV of less than 1%. 22-24
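Returning to the infant component of the PACTG 076 regimen quoted above (oral ZDV at 2 mg/kg per dose every 6 hours for 6 weeks), the arithmetic is a simple weight-based calculation. The sketch below is a minimal illustration, not a dosing tool: the 3 kg example weight and the 10 mg/mL syrup concentration are assumed values for demonstration, and actual dosing should follow the USPHS guidelines cited in this statement.

```python
# Illustrative arithmetic for the oral infant ZDV regimen quoted above:
# 2 mg/kg per dose, every 6 hours, for 6 weeks. Example values only.

DOSE_MG_PER_KG = 2.0      # from the PACTG 076 regimen quoted in the text
DOSES_PER_DAY = 24 // 6   # every 6 hours -> 4 doses per day
COURSE_DAYS = 6 * 7       # 6-week course

weight_kg = 3.0           # hypothetical neonate weight (assumption)
syrup_mg_per_ml = 10.0    # assumed syrup concentration (assumption)

dose_mg = DOSE_MG_PER_KG * weight_kg
print(f"Per dose: {dose_mg:.1f} mg ({dose_mg / syrup_mg_per_ml:.2f} mL)")
print(f"Daily total: {dose_mg * DOSES_PER_DAY:.1f} mg in {DOSES_PER_DAY} doses")
print(f"Doses over full course: {DOSES_PER_DAY * COURSE_DAYS}")
```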
Current US Public Health Service (USPHS) guidelines for prevention of MTCT of HIV recommend use of combination ARV regimens including at least 3 ARV drugs during pregnancy and labor for all pregnant women with HIV infection. ARV drugs are discontinued after delivery unless the mother requires ARV therapy for her own health, in which case ARV therapy would be continued following guidelines for nonpregnant HIV-infected adults. 25,26 Intravenous ZDV should be administered to the pregnant woman during labor until the cord is clamped, with the other ARV drugs in the regimen continued orally during labor, and all infants should receive 6 weeks of ZDV prophylaxis. 25 The full 6-week course of infant ARV prophylaxis, and careful instructions for its administration, should be provided to the family before discharge from the hospital. A prescription and recommendations to purchase ZDV for use by the infant are not adequate to ensure appropriate prophylaxis. In some states, infants may not be registered for insurance for a few weeks after birth, so even if the family has insurance, coverage may not be immediately available to pay for health care costs for the infant. Some families have health insurance that covers inpatient costs but not prescription medications. Outpatient pharmacies may not stock the infant-dosage form of ZDV. At hospital discharge, the family should be supplied with the medication itself, along with careful instructions for its administration, not just a prescription.

# Mode of Delivery and MTCT of HIV
Elective cesarean delivery (performed before onset of labor and before rupture of membranes) can prevent MTCT of HIV 27 and is associated with at least a 50% decrease in the risk of MTCT among HIV-infected women either not receiving ARV drugs or receiving ZDV alone. 28 Although several studies have suggested that elective cesarean delivery performed before labor onset and before rupture of membranes may remain effective among HIV-infected women with low virus load (low either intrinsically off ARV therapy or low because of administration of combination ARV regimens during pregnancy), further research is required to definitively demonstrate whether elective cesarean delivery can further reduce the risk of MTCT of HIV for women being successfully treated with combinations of 3 or more ARV drugs (eg, virus load undetectable on sensitive assays while on a combination 3-drug ARV regimen). 29 Current American College of Obstetricians and Gynecologists (ACOG) 30 and USPHS guidelines for prevention of MTCT recommend elective cesarean delivery at 38 weeks' gestation for all HIV-infected pregnant women with HIV RNA levels greater than 1000 copies per mL near the time of delivery (or who have unknown viral load), regardless of the type of maternal ARV prophylaxis being received. 14,25

# Breastfeeding
Breastfeeding confers approximately 9% to 15% excess risk of MTCT of HIV. In the United States, because safe infant feeding alternatives exist, women with HIV infection should not breastfeed regardless of maternal ARV use. 31,32

# Benefits of HIV Testing in the Peripartum and Newborn Periods
When HIV antibody testing is performed before or during pregnancy, if a woman is found to be infected with HIV, it is possible to use all 3 known efficacious interventions for prevention of MTCT of HIV (prepartum and intrapartum maternal and postpartum infant ARV prophylaxis, elective cesarean delivery, and avoidance of breastfeeding).
However, if HIV diagnostic testing is not performed until the peripartum or postpartum periods, then only 2 of the 3 interventions can be implemented (intrapartum maternal and postpartum infant ARV prophylaxis and avoidance of breastfeeding). Although ARV prophylaxis initiated during pregnancy is most effective at reducing MTCT of HIV, prophylaxis initiated for the pregnant woman at the time of labor and continued to the infant after birth, or even prophylaxis only administered to the infant after birth, can reduce the risk of MTCT of HIV compared with no prophylaxis. 6,7,10 Moreover, identification of HIV exposure allows the pediatrician to offer advice on appropriate alternatives to breastfeeding, follow-up testing, and prophylaxis against opportunistic infections for the infant as well as referral of the mother for care of her HIV infection. 12 For the woman with HIV infection identified at the time of labor, maternal prophylaxis with intravenous ZDV, together with infant prophylaxis with 6 weeks of ZDV, is associated with an approximately 60% lower risk of MTCT of HIV 7 compared with no prophylaxis. For infants whose mothers received no ARV therapy during pregnancy or labor, prompt (optimally as soon as possible after birth but certainly within 12 hours after birth) prophylaxis of the infant with ZDV alone for 6 weeks is associated with a 50% reduction in the risk of MTCT of HIV compared with no prophylaxis. 7 In certain situations, some experts combine the 6-week infant ZDV prophylaxis regimen with additional ARV drugs. Such situations might include infants born to mothers who received prenatal ARV drugs but had suboptimal viral suppression at delivery, particularly if the infant was delivered vaginally; infants born to mothers who have received only intrapartum ARV drugs; infants born to mothers who have received no prepartum or intrapartum ARV drugs; and infants born to mothers with known drug-resistant virus. Whether combining ZDV with other ARV drugs provides additional efficacy for prevention of MTCT of HIV has not been proven in clinical trials. In addition, appropriate ARV drug formulations and dosing regimens for neonates are incompletely defined for many drugs, and there are minimal data about the safety of combination ARV drugs in the neonate. Therefore, use of combination infant ARV prophylaxis involves complex balancing of potential benefits in terms of prevention of MTCT of HIV and risks in terms of toxicity to the infant. The USPHS guidelines for prevention of MTCT of HIV include extensive discussion of considerations for infant ARV prophylaxis regimens for different clinical scenarios and should be reviewed for specific recommendations. 25 If infant prophylaxis with ARV drugs in addition to ZDV is being considered, decisions and choice of ARV drugs should be determined in consultation with a practitioner who is experienced in care of infants with HIV infection.

# Other Considerations
Early identification of HIV-exposed infants allows (1) appropriate testing to identify HIV-infection status of the infant, (2) counseling of the mother regarding the risk of HIV transmission through breastfeeding and institution of appropriate infant feeding, and (3) prophylaxis with trimethoprim-sulfamethoxazole to prevent Pneumocystis jiroveci infection for infants whose HIV-infection status has not been determined or who are identified as being HIV infected. 12
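A rough sense of scale for these reductions can be obtained with back-of-the-envelope arithmetic. The sketch below simply applies the risk figures quoted in this statement (roughly 25% transmission with no prophylaxis, an approximately 60% relative reduction for intrapartum plus 6-week infant ZDV, and approximately 50% for infant-only ZDV begun within 12 hours) to a hypothetical 1000 births; it is illustrative only, not a model of any particular population.

```python
# Expected infant infections per 1000 births to HIV-infected mothers,
# applying the relative risk reductions quoted in the text to the
# 25% no-prophylaxis baseline. Illustrative arithmetic only.

BASELINE_RISK = 0.25  # MTCT rate without prophylaxis (quoted above)

scenarios = {
    "No prophylaxis": 0.00,
    "Intrapartum + 6-week infant ZDV (~60% reduction)": 0.60,
    "Infant-only ZDV within 12 h (~50% reduction)": 0.50,
}

births = 1000
for name, relative_reduction in scenarios.items():
    risk = BASELINE_RISK * (1.0 - relative_reduction)
    print(f"{name}: {risk:.1%} -> ~{risk * births:.0f} infections per {births} births")
```

Even the least effective of these options, infant-only prophylaxis, halves the expected number of infected infants, which is why late identification of exposure still matters clinically.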
# SUMMARY: PROPHYLAXIS AND TREATMENT OF PREGNANT WOMEN WITH HIV INFECTION AND THEIR INFANTS
Guidelines for initiation of ARV therapy for pregnant women are the same as for nonpregnant HIV-infected adults and follow the USPHS "Guidelines for the Use of Antiretroviral Agents in HIV-1-Infected Adults and Adolescents" 26 except that the choice of ARV drugs includes special considerations related to pregnancy and fetal drug exposure as described in the "Recommendations for Use of Antiretroviral Drugs in Pregnant HIV-Infected Women for Maternal Health and Interventions to Reduce Perinatal HIV Transmission in the United States." 25 For women who need immediate initiation of ARV therapy for their own health, treatment should be initiated as soon as possible, including in the first trimester. For women who do not require treatment for their own health, the effectiveness of ARV prophylaxis depends on the timing of institution of such prophylaxis. For women identified as being HIV infected early in pregnancy, generally 3 ARV drugs are recommended for prophylaxis during their pregnancy and should be continued until the time of delivery; delaying initiation of prophylaxis until after the first trimester can be considered if treatment of HIV infection is not needed for the woman's own health. Intravenous ZDV is administered to the pregnant woman at the time of labor and continued until the cord is clamped, and the other ARV components of the regimen are continued orally during labor. Oral ZDV is administered to the infant for the first 6 weeks of life. 25 For women identified as being HIV infected during labor, intravenous ZDV is recommended together with infant ARV prophylaxis. For infants born to women who have not received prepartum or intrapartum ARV therapy, prophylaxis of the infant with 6 weeks of ZDV is recommended. In the latter 2 situations, some experts may administer additional ARV drugs to the mother and/or infant. The USPHS guidelines for prevention of MTCT of HIV should be consulted for detailed discussion of these more complex situations, and decisions for maternal and infant prophylaxis and therapy in such situations should be made in consultation with a practitioner who is experienced in care of infants with HIV infection. 25

# RISKS OF HIV TESTING IN THE PRENATAL AND NEWBORN PERIODS
A positive HIV antibody test result for an infant identifies HIV infection in the mother and HIV exposure (with possible infection) of the infant. Therefore, even if testing is only performed on the infant after birth, if the infant is found to be HIV-seropositive, the infant's mother will be identified as having HIV infection, which can be associated with personal psychological trauma and societal stigma if the mother does not know that she is infected. Linkage of the newly identified HIV-infected mother to appropriate psychosocial supports and to HIV care programs is important. If the test result is found to be a false-positive, this psychological harm will have occurred needlessly. Thus, the fact that confirmatory testing is required to definitively diagnose HIV infection of the mother, and the need for rapid presumptive treatment of the infant to prevent MTCT should she be HIV infected, should be explained to the mother when performing rapid testing during labor or after delivery. ARV administration to the mother and/or infant may be associated with infant drug toxicity, 33-35 and if the rapid test result is not confirmed to be positive, the benefit/risk ratio may not favor prophylaxis.
However, the infant should have received only 1 to 2 days of ARV prophylaxis in such a situation, and short-term toxicity of ARV drugs is limited. Expedited confirmatory testing should be performed to ensure that results are reported quickly so that the duration of infant exposure to ARV drugs is minimized.

# CONSENT FOR HIV TESTING
Opt-out consent (documented patient notification, with testing to take place unless rejected by the patient) is associated with higher testing percentages than opt-in consent, 36-38 and universal HIV screening of pregnant women with opt-out consent is recommended by the CDC, 11 the ACOG, 14 and the AAP. 39 As part of its recommendation, the CDC states that HIV screening should be included in the routine panel of prenatal screening tests for all pregnant women and that separate written consent for HIV testing should not be required. The CDC also states that general consent for medical care should be considered sufficient to encompass consent for HIV care. 11 In states where laws or regulations require written informed maternal consent for testing ("opt-in" consent), practitioners should obtain appropriate consent as required. A compendium of state HIV-testing laws can be found at www.ucsf.edu/hivcntr. In states where laws and regulations require written informed maternal consent for testing, practitioners should work to modify the laws or regulations to permit opt-out consent. Such an advocacy effort is best undertaken with a broad coalition of interested parties throughout the state, including state and local health departments, the state AAP chapter, representatives of the ACOG and the American Academy of Family Physicians, nursing groups, community groups interested in maternal health, and AIDS activist organizations. Mandatory HIV testing of the newborn infant whose mother has not been tested during pregnancy or in the immediate postpartum period has been associated with high rates of prenatal testing in New York state 7 and may act as a safety net for identifying infants who would not have been tested otherwise. There has not been a study to compare the percentage of women tested before delivery in programs with mandatory newborn testing compared with those with opt-out consent policies. Therefore, an evidence-based recommendation for or against mandatory newborn testing cannot be made. A few states have passed laws that require HIV testing of newborn infants without maternal consent when the HIV-infection status of the mother is unknown. In states where legislation aimed at mandating testing of newborn infants has been proposed, issues have been raised regarding the ethics and legality of this approach, because it diagnoses HIV infection in the mother without her consent for testing. In addition, concerns have been raised about the costs of such screening programs, given the already high numbers of mothers and newborn infants tested in voluntary (opt-out) programs. Regardless of the form of consent used in testing programs, it is important that the results of infant testing be returned as rapidly as possible so that ARV prophylaxis for the infant can be started promptly if needed. Infant ARV prophylaxis is likely to be less effective in the prevention of MTCT of HIV when started later than 12 hours after birth. Optimal prevention of MTCT of HIV requires identification of the mother's HIV status during pregnancy.
# TIMING OF TESTING IN PREGNANCY
As noted, testing of the pregnant woman early in pregnancy is recommended to allow informed and timely therapeutic decisions concerning health care for her and for prevention of MTCT of HIV. 11 A second HIV test during the third trimester has been shown to be cost-effective under certain conditions, 40 and the CDC recommends that a second test be performed late in pregnancy but at <36 weeks' gestation for women who are in areas of high incidence, for women delivering in hospitals with HIV prevalence in pregnant women of at least 1 in 1000, and for women at high risk of acquiring HIV (women with a sexually transmitted infection diagnosed during pregnancy, injection drug users and their partners, women who exchange sex or money for drugs, women who are sex partners of HIV-infected persons, women who have had a new or more than 1 sex partner during pregnancy, or women with signs/symptoms of acute HIV infection). 11 However, because risk-based and prevalence-based testing programs may be difficult to implement, some practitioners and hospitals choose to test all pregnant women a second time late in pregnancy, even in the absence of specific prevalence or risk data. Because of the high levels of viral replication observed with acute HIV infection, women who become infected with HIV during pregnancy have a particularly high risk of transmitting HIV to their infants. For women whose HIV status is unknown at the time of presentation in labor, testing should be timed so that results are available to allow predelivery administration of prophylaxis if indicated. For a newborn infant whose mother's HIV status is unknown, testing should be performed quickly enough so that results can be available for infant ARV prophylaxis to begin within 12 hours of birth, as stated previously. 11

# CONCLUSIONS
Universal HIV testing of pregnant women is standard care in the United States. Identification of HIV infection early in pregnancy allows the greatest ability to treat the pregnant woman for her HIV infection for her own health and to prevent MTCT of HIV. Rapid HIV antibody testing allows for timely identification of HIV infection in women even late in pregnancy, during labor, or in the immediate postpartum period, as well as HIV exposure in their newborn infants. The results can be available quickly enough to implement effective ARV interventions that reduce MTCT of HIV, whether prophylaxis is started for the mother late in pregnancy or in labor or administered to the infant within the first few hours of life.

# RECOMMENDATIONS
1. Information about HIV infection, prevention of MTCT of HIV, and HIV antibody testing should be provided routinely as part of a comprehensive program of health care for pregnant women.
2. Documented, routine HIV antibody testing should be performed for all pregnant women in the United States after notifying the patient that testing will be performed, unless the patient declines HIV testing (opt-out consent or right of refusal). All HIV antibody testing should be performed in a manner consistent with state and local laws.
3. In states where laws and regulations require written informed maternal consent for testing, health care professionals should work to modify the laws or regulations to permit opt-out consent.
4. All programs for the detection of HIV infection in pregnant women and their infants should periodically evaluate the proportion of women who are not tested.
Programs in which an unacceptably high proportion of women do not receive HIV antibody testing should examine the reasons and make appropriate program modifications as needed.
5. Repeat HIV antibody testing is recommended in the third trimester, preferably before 36 weeks' gestation, for women in states with high HIV prevalence in women 15 to 45 years of age, for women delivering in hospitals with HIV prevalence of 1 or more in 1000 pregnant women screened, or for women at increased risk of acquiring HIV (women with a sexually transmitted infection diagnosed during pregnancy, injection drug users and their partners, women who exchange sex or money for drugs, women who are sex partners of HIV-infected persons, women who have had a new or more than 1 sex partner during pregnancy, or women with signs/symptoms of acute HIV infection). Because prevalence-based testing may be difficult to implement, individual risk assessment is unreliable, and the risk of MTCT of HIV is high in women who first acquire HIV infection during pregnancy, some experts recommend that repeat HIV screening be considered for all pregnant women in the third trimester.
6. For women in labor with undocumented HIV-infection status during the current pregnancy, maternal HIV antibody testing with opt-out consent, using a rapid HIV antibody test, is recommended. For women with a positive rapid HIV antibody test result, ARV prophylaxis should be administered to the mother and newborn infant on the basis of the positive rapid antibody test result without waiting for results of confirmatory HIV testing, and breastfeeding should not occur. Assistance with the immediate initiation of hand and pump expression to stimulate milk production should be offered to the mother, given the possibility that the confirmatory test results may be negative. If confirmatory test results are negative, prophylaxis should be stopped and breastfeeding may be initiated.
7. Rapid HIV antibody testing should be available on a 24-hour basis at all facilities with an obstetric unit and/or newborn nursery of any level.
8. The health care professional for the newborn infant needs to be informed promptly of maternal HIV serostatus so that appropriate care and testing of the newborn infant can be accomplished and so that ARV prophylaxis can be administered to HIV-exposed infants. The infant medical chart needs to contain documentation of the maternal HIV-infection status. Presence of maternal HIV-infection status on the maternal and infant record should be a standard measure of the adequacy of hospital care for the mother and infant.
9. For newborn infants whose mother's HIV serostatus is unknown, the newborn infant's health care professional should order rapid HIV antibody testing to be performed for the mother or the newborn, with appropriate consent as required by state or local law. Results should be reported to health care professionals quickly enough to allow effective ARV prophylaxis to be administered, if indicated, to the infant as soon as possible after birth but certainly by 12 hours after birth. ARV prophylaxis for the newborn infant should be administered promptly on the basis of a positive rapid antibody test result without waiting for results of confirmatory HIV testing. Breastfeeding should be avoided. Confirmatory testing should be performed, and assistance with hand and pump expression to stimulate milk production should be offered to the mother, given the possibility that the confirmatory test results may be negative.
If confirmatory test results are negative (indicating that the infant was not truly exposed to HIV), then ARV prophylaxis should be stopped and breastfeeding may be initiated. If the confirmatory test result is positive, infants should receive ARV prophylaxis for 6 weeks after birth, and they should not breastfeed. Prophylaxis is most effective if administered within 12 hours of birth but may still be effective when administered as late as 48 hours after birth.
10. The full 6-week course of infant ARV prophylaxis, and careful instructions for its administration, should be provided to the family before discharge from the hospital. Payment for this should be covered by all third-party payers.
11. If the mother or infant has a positive test result for HIV antibody, the infant should not breastfeed.
12. In the absence of parental availability for consent to test the newborn infant for HIV antibody, the newborn infant should be tested, ideally within the first 12 hours of life. State and local jurisdictions need to develop procedures to facilitate the rapid evaluation and testing of the infant.
13. For infants of unknown HIV exposure status at the first health supervision visit, HIV antibody testing with appropriate consent should be performed to guide appropriate care and follow-up testing if needed.
14. Care of the mother, fetus, newborn, and child with perinatal exposure to HIV should be performed in consultation with specialists in obstetric and pediatric HIV infection.

# COMMITTEE ON PEDIATRIC AIDS, 2007-2008
The Occupational Safety and Health Act of 1970 emphasized the need for standards to protect the health and safety of workers exposed to an ever increasing number of potential hazards in the workplace. Section 22(d)(2) of the Act authorizes the Director of the National Institute for Occupational Safety and Health (NIOSH) to make recommendations to the Occupational Safety and Health Administration (OSHA) concerning improved occupational safety and health standards. The purpose of this document is to provide recommendations for preventing injuries and disease caused by hazardous energy. The document was developed through the use of "systems analysis" so as to provide a logical means of performing maintenance and servicing activities safely, with energy present, with energy removed, and during the process of reenergizing. We gratefully acknowledge the contributions to this document made by representatives of other Federal agencies or departments, labor unions, trade and professional associations, consultants, and the staff of the Institute. Whatever the contributions made by others, conclusions expressed in this document are those of the Institute, and we are solely responsible for them. However, all comments made by those outside NIOSH, whether or not incorporated, were considered carefully and were sent with the document to the Occupational Safety and Health Administration.

Donald Millar, M.D.
Assistant Surgeon General
Director, National Institute for Occupational Safety and Health

The Division of Safety Research, NIOSH, had primary responsibility for development of the recommended guidelines for controlling hazardous energy during maintenance and servicing. Ted A. Pettit served as project officer and document manager. Boeing Aerospace developed the basic information for consideration by NIOSH staff and consultants under contract No. 210-79-0024.

# Figure I. Diagram for Controlling Hazardous Energy During Maintenance
[The flowchart itself appears as an image in the original; only its caption, footnote, and legend are recoverable here. Footnote: implementation of safeguards to control hazards with energy present may be chosen instead of hazardous energy elimination by devices or techniques when it can be demonstrated that hazards are controlled with energy present by (1) identifying all hazardous energy sources and hazardous residual energy, and (2) documenting a procedure for, and demonstrating that the procedure will control, hazards resulting from each hazardous energy identified; the decision at this point is predetermined by the original option chosen earlier in the diagram.]

# Legend
Rectangle symbol - identifies an event that results from the combination or exclusion of activities or events.
AND gate - describes an operation whereby co-existence of all inputs is required to produce an output event.
OR gate - defines a situation whereby an output event will occur if one of the inputs exists.
D E C ISIO N G A T E -E X C L U S IV E O R G A T E W H IC H F U N C T IO N S A S A N O R G A T E B U T P R O V ID E S A F O O T N O T E D ( t > ) L IS T O F D E C IS IO N C R IT E R IA A R E N O T S E L F E V ID E N T . D IA M O N D -D E S C R IB E S A N E V E N T T H A T IS -C O N S ID E R E D B A S IC IN A G IV E N L O G IC S E Q U E N C E . E V E N T IS N O T D E V E L O P E D F U R T H E R B E C A U S E D E V E L O P M E N T IS O B V IO U S . u n til i t becomes operational. For sim p lification , "maintenance and servicing" are referred to as "maintenance" throughout much of the document. "Inspection" is defined as checking or testin g machinery, equipment, system, e t c ., against established standards. The d efin itio n suggests that the object being inspected has a function and should comply with a standard. From th is, i t is deduced that (1) a standard must e x ist that: establishes the good or bad ch aracteristics of the object being inspected and (2) the object must be com plete so that i t can perform it s function; i . e . , construction, assembly, or manufacture should be complete before the object can be maintained. I f the object has more than one function, i t may be necessary to change or add parts to the object so that it may perform the d ifferent functions. The a c tiv ity required to make the changes or additions is considered maintenance. In some sectors of industry the term "set-up" is used to denote the a c tiv itie s needed to change the functions of the equipment. "Service" is defined as repair or maintenance. The d efin itio n of service is synonymous with maintenance; however, service (as used in this report) refers to the a c tiv itie s needed to keep a machine, process, or system in a state of efficien cy ; e .g ., changing crank case o i l , greasing, cleaning, painting, adjusting, calibratin g, e tc . Energy. For this document, "energy" means mechanical motion; potential energy due to pressure, gravity, or springs; e le c tr ic a l energy; or thermal energy resulting from higfr or low temperature. As suggested by one of the approaches taken by industry, maintenance may be performed with power o ff or power on. Maintenance a c t iv it ie s , however, are always performed with some form of energy on and some form of energy o ff . Thus, the concept of "power off" or "power on" implies that power, the time rate at which work is done or energy emitted or transferred, can be turned o ff. This concept is too r e str ic tiv e because it does not consider gravity or temperature. The term "energy," as opposed to "power," is therefore suggested as an altern ative, because current consensus standards and other litera tu re on th is subject use th is term. The concept of energy (any quantity with dimensions mass times length squared divided by time squared), however, is too in clu sive. The concept of energy, for the purpose of th is document, is lim ited to: 1. Kinetic energy -energy possessed by a body by virtue of i t s motion. # Potential energy -energy possessed by a body by virtue of it s position in a gravity fie ld . 3. E lectrical energy -energy as a result of a generated e le c tr ic a l power source or a s ta tic source. Thermal energy -energy as a resu lt of mechanical work, radiation, chemi cal reaction, or e le c tr ic a l resistance. Radiation, the process by which electromagnetic energy is propagated, is not included because this type of energy a ffects internal organ tolerance, which is a health e ffe c t. Personnel Hazard. 
A condition which could lead to injury or death. This condition should be recognized by a person familiar with the particular circumstances and facts unique to a particular industry. The following concept of personnel hazard was used to objectively identify hazards to humans: a personnel hazard exists when the environment, conditions, natural phenomena, or equipment characteristics may release levels of energy that exceed human tolerance. This concept encompasses the human physiological tolerance to trauma as well as internal organ tolerance to environment.

Isolated or Blocked Energy. Energy is considered isolated or blocked when its flow would not be reactivated by a foreseeable unplanned event. The term "isolate" means to set apart from others. The term "block" (noun) means an obstacle or obstruction; or (verb) to make unsuitable for passage or progress by obstruction, to prevent normal functioning. These terms are similar in meaning, but they cannot be used synonymously in all instances. Although they may describe the same function (i.e., prevention of the normal flow of energy), the way in which the function is performed is different. For instance, to control gravitational energy, the energy should be blocked in the sense that an obstacle or obstruction is placed. It would be incorrect to use the term "isolate" when referring to the control of gravitational energy. Electrical energy should be controlled by isolating it in the sense that it is set apart, or disconnected. It would be incorrect to use the term "blocked" when referring to the control of electrical energy. The terms may be used synonymously in some cases, such as when referring to blocking the passage of fluid or isolating the pressure in a pipe. Sometimes the difference in meaning becomes evident only by the method used to implement the function.

Point(s) of Control. The point or points from which energy-blocking, -isolating, or -dissipating devices are controlled.

Securing the Point(s) of Control. The point(s) of control are secured to prevent unauthorized persons from reactivating the flow of energy. Securing is a separate and distinct action from isolating or blocking the energy sources. The use of locks, tags, or posting a qualified person, or a combination thereof, are methods of accomplishing these criteria.

Dissipate Energy. To cause energy to be spread out or reduced to levels tolerable by humans. When the word "dissipate" is applied to the word "energy," the term may be interpreted differently. The following concepts should be used to determine the dissipation activities:

1. Dissipate Mechanical Motion - Motion tends to continue because of inertia after removal of energy; therefore, mechanical motion should be dissipated. For example, a flywheel should be allowed to come to rest before starting work.

2. Dissipate Potential Energy - Potential energy can be manifested in the form of pressure (above or below atmospheric), springs, and gravity. Gravity can never be eliminated or dissipated; it can only be controlled. Springs under tension or compression can be released (to dissipate stored energy) or the stored energy can be controlled. Pressure may be blocked, isolated, or dissipated. The term "dissipate pressure" implies reducing pressure to a level that would not harm humans. Normally, this pressure value is atmospheric.
3. Dissipate Electrical Energy - Dissipation of electricity may be accomplished by "grounding" the deelectrified portion of the circuit after it has been isolated. Grounding live circuits may be catastrophic. Dissipation of electrical energy includes the actions necessary to prevent the buildup of electrical potential (static electricity).

4. Dissipate Chemicals - Chemical reactions are exothermic or endothermic. Exothermic reactions raise temperatures, which may cause a variety of effects such as fires, explosions, burns, etc. Endothermic reactions lower temperatures and cause the need for additional heat. Some elements manufactured by endothermic reaction are used as explosives or have explosive characteristics because of their instability and rapid release of energy. For the purpose of this document, emphasis is placed on the effort necessary for the prevention or control of chemical reactions. Thus, the term "dissipation of chemicals" implies those actions needed to prevent chemical reactions that would (1) raise or lower temperatures or (2) cause effects which humans cannot tolerate.

5. Dissipate Thermal Energy - Human tolerance to temperature is very limited. Human tissue is harmed when it is exposed to temperatures above 45°C (113°F) or below 4°C (39°F). Since temperature cannot be isolated or blocked, the only way to control its effects on humans is through dissipation or employee protection. Mechanical motion, electrical resistance, chemical reactions, and radiation will raise the temperature of materials which, in turn, can burn or damage human tissue. Therefore, when energy sources that affect temperature are identified in equipment, processes, or systems, controls of the energy source should be effected to allow the temperature to dissipate to a tolerable level.

# C. WORKING WITH OR WITHOUT ENERGY PRESENT

The basic decision that must be made before maintenance begins is: can the task be accomplished safely with or without energy present, or is it necessary to deenergize before initiating maintenance? Concepts which should be considered in this decision include: (1) energy is always present; (2) energy is not necessarily dangerous; and (3) danger is present only when energy is released in quantities which exceed human tolerances. Further, prior to the development of specific energy control measures, all energy sources should be (1) identified, (2) analyzed independently, and (3) analyzed in combination with any other energy sources present. The following paragraphs present criteria which should be considered during analysis of specific energy sources.

# D. ENERGY SOURCES

Mechanical motion can be linear translation or rotation, or it can produce work which, in turn, produces changes in temperature. This type of energy can be turned off or left on.

Potential energy can be due to pressure (above or below atmospheric), springs, or gravity. Maintenance is always conducted with gravity on. Potential energy manifested as pressure or in springs can be dissipated or controlled; it cannot be turned off or on.

Electrical energy refers to generated electrical power or static electricity. In the case of generated electricity, the electrical power can be turned on or turned off. Static electricity may not be turned off; it can only be dissipated.

Thermal energy is manifested by high or low temperature.
This type of energy is the result of mechanical work, radiation, chemical reaction, or electrical resistance. It cannot be turned off or eliminated; however, it can be dissipated or controlled.

Chemical reaction is manifested by exothermic or endothermic effects. In either case, the energy-on/energy-off approach does not apply. Any material which could chemically react should be eliminated, dissipated, or controlled. That is, some positive measures must be taken to (1) eliminate the chemical so that no chemical reaction can take place, or (2) control the reaction so that the energy released by the chemical reaction will not harm humans. Methods of controlling hazards due to chemicals or chemical reactions as energy sources have not been addressed in this document because of their complexity.

These criteria indicate that some energy sources can be turned off, some can be dissipated, other sources of energy can be eliminated, and some can only be controlled. The following examples illustrate how this concept is applied.

Example 1: The task is the removal and replacement of a spring in a car. Potential energy is present due to the force of gravity and the tension or compression of the spring. The force of gravity can be controlled only by blocking the car. The energy stored in the spring can be dissipated or controlled. It is usually controlled by blocking the release of the spring. Therefore, the decision logic indicates that maintenance must be done with energy present.

Example 2: A large mixing drum with internal rotating blades is in need of repair. One of the mixing blades is out of alignment. The established maintenance procedure requires the entry of two men to realign the blade. The electrical energy source can be switched off and locked out and/or the mechanical linkages can be disconnected. The lockout at the switch will provide a physical barrier, and a tag will identify the task being performed. In addition, if the mechanical linkages are disconnected, all energy sources, except gravity, will be removed from the system.
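The decision logic in these two examples can be made concrete with a short sketch. The following Python fragment is illustrative only (the report itself defines no software): it encodes, for each form of energy discussed in Section D, the control options the text allows, and flags when work must proceed with energy present. All names are assumptions, not part of the NIOSH guidelines.

```python
# Illustrative encoding of the Section D criteria: which control measures the
# text allows for each form of energy. Names and structure are hypothetical.

CONTROLS_BY_ENERGY = {
    "mechanical_motion":    {"turn_off", "dissipate"},
    "pressure_or_spring":   {"dissipate", "control_in_place"},  # cannot be turned off
    "gravity":              {"control_in_place"},               # block only; never off
    "generated_electrical": {"turn_off"},
    "static_electrical":    {"dissipate"},                      # cannot be turned off
    "thermal":              {"dissipate", "control_in_place"},  # cannot be turned off
}

def must_work_energized(sources):
    """True if any identified source offers no option but control in place,
    as with gravity in Example 1 (the car can only be blocked)."""
    return any(CONTROLS_BY_ENERGY[s] == {"control_in_place"} for s in sources)

# Example 1: spring replacement in a car -- gravity forces energized work.
print(must_work_energized(["gravity", "pressure_or_spring"]))   # True
```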
# II. INVESTIGATION OF THE PROBLEM

Much of the data examined during this study did not include enough specific information to identify the factors causing or contributing to accidents which occur during maintenance as a result of inadequate energy control measures. However, an extensive search of world literature related to maintenance hazards and maintenance hazard controls revealed a number of accident scenarios which could be evaluated in the attempt to identify generic causes of maintenance-related accidents. Accidents which could be attributed to the presence of uncontrolled energy during maintenance activities were selected for closer study. Although the limited nature of the information precluded exact cause determination, the scenarios did provide a general indication of causes or contributing causes as well as illustrations of the types of accidents that can occur when hazardous levels of energy are not controlled.

# A. THE ACCIDENT SCENARIOS

Out of 300 accident scenarios found in 14 different sources, 59 accidents were selected to illustrate that adequate energy control methods may have prevented the accidents. (A brief description of each accident is included in Appendix B.) Causes of these accidents have been categorized as follows:

1. Maintenance activities were initiated without attempting to deenergize the equipment or system, or control the hazards with energy present. Twenty-seven accidents fell into this category. Typical of these accidents was accident No. 10 (Appendix B), in which an experienced repairman saw a piece of wire stuck in a filter wheel. He attempted to remove the wire while the wheel was in motion. Losing his balance, he fell into the wheel. His leg was crushed and had to be amputated.

2. Energy blockage or isolation was attempted, but was inadequate. Six incidents were the result of ineffective energy isolation or blockage. An example was accident No. 30 (Appendix B), in which a worker attempted to prevent an elevator from moving by jamming the doors open with a plank while the elevator was on the second floor, and then turning off the outside panel switch on the main floor. He was killed while working on the main floor when the elevator returned to its home base rather than remaining on the second floor.

3. Residual (potential) energy was not dissipated. Accident No. 34 (Appendix B) resulted in injury due to failure to dissipate residual energy. The worker turned off the power to a packaging machine and attempted to remove a jam. Residual hydraulic pressure activated the holding device, causing the injury.

4. Accidental activation of energy. Twenty-five of the accidents were caused by accidental activation. These accidents were caused either by (1) persons unintentionally actuating controls, or (2) other persons activating the controls, not realizing that maintenance was in progress. The result, in either case, can be disastrous. An accident typical of this category was accident No. 36 (Appendix B), in which a mechanic was repairing an electrically operated caustic pump. A co-worker dragged a cable across the toggle switch that operated the pump, and the mechanic was sprayed with caustic. Another incident, accident No. 38 (Appendix B), occurred when an employee was cutting a pipe with a torch and diesel fuel was mistakenly discharged into the line. The ensuing ignition of the diesel fuel resulted in a fatality.

These types of accidents are preventable if effective energy control techniques or procedures are available, workers are trained to use them, and management provides the motivation to ensure their use. Table I summarizes the accident causes and indicates the applicable guidelines from Chapter III which, if followed, would have eliminated the hazard and prevented the accident. In Appendix B, an energy control method that would have been effective is identified for each entry. The methods shown are only examples. For many cases, several energy control techniques could have been used to prevent the accident. In addition, Appendix B identifies the hazard types, energy sources, and accident causes as well as consequences of each accident.
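Table I itself is not reproduced in this text, but the pairing it describes between accident causes and the Chapter III guidelines can be sketched as follows. The mapping shown is a plausible reading of the four categories above, not a reproduction of the table, and all names are illustrative.

```python
# Plausible sketch of the Table I pairing: each accident-cause category from
# the scenario study, mapped to the guideline element that addresses it.
# (Illustrative only; Table I is not reproduced in this text.)

GUIDELINE_FOR_CAUSE = {
    "no attempt to deenergize or control": "decide before work begins: "
        "deenergize, or control hazards with energy present",
    "inadequate isolation or blockage":    "criterion a: isolate/block so that "
        "no unplanned event reactivates the flow of energy",
    "residual energy not dissipated":      "criterion b: isolate, block, or "
        "dissipate stored/residual energy",
    "accidental activation":               "criterion c: secure the point(s) of "
        "control against unauthorized reenergization",
}

for cause, guideline in GUIDELINE_FOR_CAUSE.items():
    print(f"{cause} -> {guideline}")
```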
# B. STATISTICAL INVESTIGATION

Besides the literature search, a statistical investigation was undertaken in the attempt to obtain data that would help to identify factors causing or contributing to the type of accident under consideration. It was planned that statistical accident data would be obtained that related to inadequate energy controls during maintenance. The data would help to identify ineffective control methods and particular problems that should be avoided. Manual and automated searches were conducted to locate the information. Most of the data found, however, did not include enough specific information to identify the factors causing or contributing to the accidents. Existing data are very limited. Occupational injury data are often gathered primarily for the purpose of processing workers' compensation claims rather than for developing countermeasures. Most current information systems emphasize data about the injury, not the accident, and about the end results of the accident sequence rather than the precipitating events and conditions leading to the injuring event. Data of the latter type are often much more important for the development of countermeasures.

The ANSI Z16.2 American National Standard Method of Recording Basic Facts Relating to the Nature and Occurrence of Work Injuries, or an adaptation of this coding method, is used almost exclusively among both the Federal and State Government agencies and private sector information systems. The ANSI Z16.2 method is extremely limited and is designed primarily as a monitoring tool for grouping accidents by major types and for flagging a limited number of isolated factors about accidents so that the cases can be recalled when data analysis begins. The ANSI Z16.2 method is not designed as an analytical tool for identifying the patterns, the precipitating events and conditions in an accident sequence, or the relationships among contributing causal factors in an accident that are normally needed for countermeasure development.

Attempts to obtain more specific data from such sources as associations, insurance companies, labor unions, and industry proved unsuccessful because of one or more of the following reasons: [...]

a. Energy is considered adequately isolated, blocked, or dissipated when an unplanned event would not reactivate the flow of energy.

Adequate isolation can be achieved by many methods or combinations of methods so long as the controls are not likely to be accidentally turned on. Appendix A identifies several examples of methods of effectively isolating or blocking energy.

b. Stored or residual energy that constitutes a personnel hazard shall be isolated, blocked, or dissipated.

Another necessary step is to block or dissipate any hazardous residual energy once the decision is made to deenergize. Residual energy is not always as obvious a hazard as the incoming energy supplies. For this reason, special effort must be made to identify any stored energy that could result in personnel hazards. For example, if it is necessary for a worker to climb or move about mechanical linkages, the potential energy due to the worker's weight may be sufficient to cause dangerous movements of the linkages. If this hazard is present, a mechanical block or pin can usually be used to block out the energy and potentially dangerous movement. Forms of potential energy which may be stored in sufficient quantities to represent hazards include: [...]

c. The point(s) of control shall be secured so that unauthorized persons are prevented from reenergizing the machine, process, or system.
A means of security must be implemented to ensure that the equipment being maintained or serviced is not somehow reenergized. The guidelines allow a choice of three methods. The selection should be based on the particular circumstances and characteristics of each respective facility or application. A farm employee working with potentially hazardous powered equipment in a remote field would not be expected to use a padlock for security. A padlock (or equivalent), however, would be a logical selection for a switch or valve accessible to a large number of people. The use of tags, when only trained personnel have access to the point(s) of control, is an accepted industry practice. Any one of the following methods will prevent unwanted reenergization of equipment.

(1) Secure the point(s) of control such that reenergizing the system requires the use of special equipment routinely available only to the person who applied the control. A warning containing appropriate information shall be displayed at the point(s) of control.

The method of securing the points of control by physical means that prevent unauthorized persons from reenergizing the machine, process, or system is widely used. The most common device used for security is the padlock. Often each worker is provided with his own padlock and the only key.

(2) Display at the point(s) of control a warning identifying the energy isolated, blocked, or dissipated; the date; the person(s) responsible for the control measure; and the person(s) responsible for the work to be accomplished. In addition, access to the control point(s) must be limited to persons who are trained to understand and observe the posted warning.

Access may be limited by physical location (such as elevation) or by procedural means such as color-coded badges. When this method of security is used, each new employee should receive training to observe the guidelines before having access to the point(s) used for controlling energy. The training should include the purpose of the warning and the format and color of the warning, and should stress the responsibility of each person for his co-workers' safety. Retraining should be given as necessary so that the importance of observing the posted warning is not forgotten.

(3) Post qualified personnel, with the specific responsibility of protecting against unauthorized actuation, at the point(s) of control throughout the maintenance activity. This applies mainly to short-duration work in the immediate vicinity of the control point(s).

d. Before starting maintenance, verify that steps a. through c. have been effective in isolating, blocking, or dissipating hazardous energy, and securing the point(s) of control.
In applying any method or technique to isolate, block, or dissipate energy from specified areas, devices, machines, systems, or processes, it should be verified that deenergization has been effected prior to the start of maintenance. Verification should be accomplished each time energy is eliminated and reapplied, regardless of the time interval between removal and reapplication. Proven methods should be used to effectively demonstrate that all hazardous energy has been isolated, blocked, or dissipated in the areas where personnel will perform the required tasks. If there is the possibility of reaccumulation of energy to hazardous levels, verification should be continued until the maintenance activity is completed. The devices should be regularly and frequently inspected and/or calibrated to ensure that they are functional and accurate and to detect any potential device failures. Good intentions to eliminate energy have failed and injuries have resulted when the wrong sources of energy were controlled. To avoid this possibility, the energy should be verified to be below hazardous levels before proceeding. This action is so obvious that many references fail to identify it; yet it is vitally important that it be included in each procedure.

The procedures should clearly assign responsibility for each step of the criteria as well as indicate where isolation, blocking, or dissipation of energy is to be accomplished, how deenergization is to be verified, what method is to be used for security, and how responsibility is to be transferred during shift changes.

3. For the five steps listed above to comprise a valid technique for controlling hazardous energy sources, the following two preconditions must be met:

a. The procedures shall be documented.

The procedures need not be unique for a single machine or task; procedures may be used which apply to a group of similar machines or tasks. Depending upon the complexity of the equipment and its application, good procedures may vary from one to many pages. No matter how simple and straightforward the procedures may seem, they should be documented so that all levels of personnel understand company policy, as well as the required safety procedures.
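Purely as an illustration (the guidelines mandate documentation, not software), the record-keeping such a documented procedure requires can be sketched in a few lines of Python. Every name below is hypothetical.

```python
# Illustrative sketch of a documented energy-control procedure record. Field
# names are invented; the steps mirror the guideline criteria (identify
# sources, isolate/block, dissipate residual energy, secure the point(s) of
# control), with verification as the final check before work begins.

from dataclasses import dataclass, field

@dataclass
class ProcedureStep:
    action: str            # e.g., "isolate electrical supply at disconnect"
    responsible: str       # person accountable for this step
    done: bool = False

@dataclass
class EnergyControlProcedure:
    equipment: str
    steps: list = field(default_factory=list)

    def verified_safe(self) -> bool:
        """Criterion d: every prior step must be complete before work starts."""
        return all(step.done for step in self.steps)

proc = EnergyControlProcedure(
    equipment="mixing drum",
    steps=[
        ProcedureStep("identify all hazardous energy sources", "supervisor"),
        ProcedureStep("isolate electrical supply; lock out switch", "mechanic"),
        ProcedureStep("dissipate residual hydraulic pressure", "mechanic"),
        ProcedureStep("secure point of control with padlock and tag", "mechanic"),
    ],
)
assert not proc.verified_safe()  # maintenance may not begin yet
```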
b. The personnel who implement the above steps shall be qualified. Each worker must thoroughly understand all documented procedures. Training shall be accomplished as necessary to establish and maintain proficiency, and to encompass procedural or equipment changes which affect energy control during maintenance.

After the energy control need has been identified, the methods of implementation selected, and the procedures finalized, a training program should be instituted to ensure that the procedures are followed and the purpose and functions are understood. The training should involve elements of management and the workforce concerned with maintenance activities so that the criteria are expeditiously and uniformly applied. The training should ensure that the understanding and skills required for the safe application and removal of energy controls are available as required. Retraining should be scheduled as often as necessary to maintain proficiency and to introduce revised devices, practices, and methods. If energy isolation, blocking, or dissipation are secured according to criterion 2.c.(2), then every person having job-related access to the point(s) of control should be trained. This training is in addition to that required for personnel involved in applying energy controls. The training should include the purpose of the warning and the format and color of the warning, and should emphasize the extreme importance for all personnel to obey the warnings in order to protect their co-workers. The personnel who implement the procedures should be trained not only to know how to accomplish the steps required to control energy sources, but also to understand the hazards associated with energy sources. All levels of supervision concerned with maintenance and servicing, with special emphasis on first-line supervision, must be trained in order to ensure that procedures are followed.

# B. CONTROLLING HAZARDS WITH ENERGY PRESENT

Controlling maintenance hazards with energy present is the only alternative to deenergizing to perform maintenance tasks safely. For some types of machines or processes, the conditions or combination of conditions that must be evaluated and controlled in energized systems can be much more complex than in deenergized systems. The following paragraphs describe guidelines for maintaining "energized" systems.

1. Hazardous energy sources, including residual energy sources, shall be identified.

Identification of the hazards needs no justification, for a hazard cannot be controlled unless it is identified.

2. Documented procedures, which have been determined to control each hazardous energy source, shall be used. Procedures shall assign responsibility and accountability for controlling personnel hazards.

Before a worker is exposed to a hazard, the hazard controls must be known to be effective. Determination of hazard control effectiveness can be based on physical demonstration prior to implementation on a day-to-day basis. This can be by similarity to a demonstrably effective hazard control, or by analysis.

3. Personnel who perform the maintenance shall be qualified to use the procedures. Qualification to use the procedures may be obtained by education, experience, and/or training.

The personnel who perform the maintenance and servicing activities must be trained to understand the particular hazards, the controls for the hazards, and how to implement the controls effectively.
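The three conditions above can likewise be expressed as a small, hedged check. The Python sketch below illustrates the logic only, with invented names; it is not part of the NIOSH guidelines.

```python
# Hypothetical sketch of the three Section B conditions for working with
# energy present: (1) every hazardous energy source is identified, (2) a
# documented procedure demonstrated to control each source exists, and
# (3) the personnel performing the work are qualified.

def energized_work_permitted(identified_sources, demonstrated_procedures,
                             workers_qualified):
    """Return True only when all three guideline conditions are satisfied.

    identified_sources: set of hazardous energy sources found in the system
    demonstrated_procedures: dict mapping each source to a documented,
        demonstrated-effective control procedure
    workers_qualified: bool, personnel trained/educated/experienced
    """
    all_sources_controlled = identified_sources and all(
        src in demonstrated_procedures for src in identified_sources)
    return bool(all_sources_controlled and workers_qualified)

# A press whose thermal hazard lacks a demonstrated control: not permitted.
print(energized_work_permitted(
    {"mechanical_motion", "thermal"},
    {"mechanical_motion": "blocked ram per documented procedure"},
    workers_qualified=True))  # False
```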
# IV. CONCLUSIONS

This study confirms that the need to control hazardous energy during maintenance activities is recognized by industry, labor, and the Government. This need is manifested by the different approaches implemented by industry to reduce injuries, the evidence submitted to the Occupational Safety and Health Administration at public hearings on the subject of lockouts and tagouts, and the results of the literature search, in which a great number of articles were found which emphasize the need for some type of control.

The literature review did not produce statistical evidence of the effectiveness of any one specific type of energy control method over another, nor did it identify accident causative factors leading to injuries. No values could be developed which differentiate accidents and injuries occurring during maintenance from aggregate U.S. injury statistics, i.e., data to provide indices on the probability of injury occurrence, the magnitude of the injuries, and the exposure to the hazard (population of workers at risk) that correlate with the identified primary hazard causes.

Industrial accident and injury data available in the U.S. describe the injuries sustained by the workers in greater detail than the causes of the accidents that inflicted the injuries. The hazard causes found in the analyzed accident reports are categorized by fire, explosion, impact, fall, caught in or between, and others. The hazard causes identified by this study are different. Consequently, the published statistics (aggregates of accident/injury reports) cannot be broken down into accident causes identified as specific to maintenance and servicing activities. The study identified the following hazard causes:

1. Maintenance activities were initiated without attempting to deenergize the equipment or system or control the hazard with energy present.
2. Energy blockage or isolation was attempted, but was inadequate.
3. Residual energy was not dissipated.
4. Energy was accidentally activated.

The worker population at risk could not be defined because employers who implement procedural controls may choose to make the procedures applicable to (1) all personnel; (2) personnel engaged in any one or any combination of activities such as construction, operations, manufacturing, testing, etc.; or (3) maintenance personnel only. Thus, the effectiveness of any one method of control with respect to another cannot be determined.
However, the implementation of any method which increases worker awareness of potentially hazardous energy sources is better than no method at all. Most of these existing regulations use the concept of "power off" to prevent injuries, and do not provide guidance on how to discern when to apply locks, tags, or a combination of locks and tags. They also do not allow for the performance of maintenance with power on, even under normal, everyday conditions, or when such maintenance cannot be performed with power off. In fact, personnel can be harmed by toxic, caustic, or asphyxiant materials with no concurrent release of energy. It is therefore recommended that criteria be developed specifically to protect maintenance personnel from such hazardous materials. The research effort should focus on identifying potential hazards and determining corrective action which either reduces hazardous energy levels or changes the worker's proximity to hazardous energy.

Typical Appendix B entries include the following: an employee attempted to remove clogged compost material that had jammed an operating kiln discharge conveyor; his clothing became entangled in the unguarded revolving rollers, his arm was caught in the machine, and he was pulled against the machine and choked to death. An employee attempting to clear a jam on a garnett machine while it was in operation crawled inside the machine through an unguarded hole, became entangled in moving parts, was drawn into the roller, and was crushed. An employee was cleaning an edger saw in a sawmill with the protective panels removed; he asked another employee to start the saw, his clothes became entangled, and he was pulled into the saw (rotation hazard; electrical disconnect applicable; fatality). An employee was emptying dough from a mixer while it was still running; when he reached into the machine his arm was caught and he was pulled into the machine (rotation hazard; electrical disconnect applicable; fatality).

The existing British Code of Practice relies on the human element to follow a documented "permit to work" system for performing maintenance. It closely parallels the recommended guidelines and relies on the documented system to serve as a record of all the foreseeable hazards which have been considered in advance. It also relies heavily on supervision to see that the system operates properly.

The Consensus Standards are similar to the recommended guidelines. They provide the options of (1) securing the point(s) of control by lockout or tagout devices or (2) having no locks or tags, provided other requirements are met.
These standards, however, do not allow the option of having a person remain at the point(s) of control to protect against unauthorized actuation of the machine or process during maintenance and servicing. Also, the standards do not cover the broad spectrum of hazards as do the recommended guidelines.

# National Standards

The existing national standards as shown in Table C-4 are not uniform in their coverage. Inconsistencies in the requirements exist between industries and between equipment within the same industry. Some sections of the General Industry Standards imply locking out or tagging out energy rather than specifying that lockouts or tagouts be performed. Sections of the standard covering industries or equipment requiring provisions for only locking out or tagging out energy are: overhead and gantry cranes; woodworking machinery; mechanical power presses; certain forging machines; certain pulp, paper, and paperboard mills; textiles; certain bakery equipment; and certain sawmill equipment.

# State Standards

The existing state standards are summarized in Table C-5. (The specific paragraph designations cited there are not reproduced legibly here; they include provisions under 29 CFR 1910.145, 1910.179, 1910.181, 1910.213, 1910.217, 1910.218, 1910.261, 1910.262, 1910.263, and 1910.265, among others.) Criteria elements addressed include: securing the point(s) of control to prevent unauthorized persons from reenergizing the system; using warning tags to caution personnel that energy has been isolated and the reason for isolation; verifying that isolation of energy has been effective; verifying that personnel have cleared the area prior to reenergizing the machine or system; documenting procedures; and ensuring that personnel who implement the criteria have been adequately trained to thoroughly understand the procedures.

The Michigan standards closely parallel the recommended guidelines, except that Michigan does not include verification that blocking, isolating, and dissipating hazardous energy have been effected before starting maintenance.

# International Standards

International standards from Canada, Germany, and Britain (summarized in Table C-6) parallel portions of the recommended criteria. The Alberta, Canada, regulations for machinery and equipment state that no maintenance or repairs shall be carried out until the machinery or equipment has been shut down and secured against accidental movement, or the power control devices have been locked out in an inoperative mode by the installation of lockout devices and warning tags. Dissipation of stored energy, verification that isolation and dissipation have been effected prior to starting maintenance, and verification that, upon completion of maintenance, personnel are clear of the danger points before the machine or equipment is reenergized are also included as requirements.
The German specifications for setup, troubleshooting, and maintenance of power-driven equipment state that these tasks must only be performed when: (1) the hazardous movements are brought to a halt; (2) the initiation of hazardous movements as a consequence of stored power or energy is prevented; and (3) unauthorized, erroneous, or unexpected initiation of hazardous movements is avoided by some adequate means.

The British Code of Practice for Safeguarding of Machinery states that effective control of hazards during maintenance can be achieved by having a written "permit to work" system. Such a system must clearly identify the hazards and document the practices to be followed, precautions to be taken, and responsibilities of the workers and of management. The code of practice also calls for adequate training of workers and supervision in safe systems of work and lockout systems for maintenance operation.

# Consensus Standards

The ANSI and NFPA standards (Table C-7) establish requirements and procedures for lockout/tagout of energy sources for stationary machines and equipment and for electrical circuits and electrical equipment, respectively. These standards require (1) that energy sources be isolated or blocked for maintenance, (2) that stored energy be dissipated prior to beginning maintenance, (3) that physical means or devices be used to secure the energy sources, (4) verification that the energy sources have been isolated prior to starting work, (5) verification that all personnel are clear of hazards before reenergizing the machines or systems, (6) assurance that the procedures for lockout/tagout have been documented, and (7) that personnel have been adequately trained to understand and implement the recommended guidelines. Applicability of the ANSI standard is limited to unexpected energization, startup, or release of stored energy of equipment or processes. The NFPA standard does not have such a limitation but covers only electrical energy. The recommended guidelines are applicable for all phases of maintenance and servicing.

This valve shall be closed and locked in the off position while the hammer is being adjusted, repaired, or serviced, or when the dies are being changed.

(ii) Air-lift hammers shall have an air shutoff valve as required in paragraph (d)(2) of this section and should be conveniently located and distinctly marked for ease of identification.

(iii) Air-lift hammers shall be provided with two drain cocks: one on the main head cylinder and one on the clamp cylinder.

(1) Mechanical forging presses. When dies are being changed or maintenance is being performed on the press, the following shall be accomplished: (i) The power to the press shall be locked out. (ii) The flywheel shall be at rest.

# 1910.252(c)(2); 1910.218(g)

(i) The hydraulic pumps and power apparatus shall be locked out. (ii) The ram shall be blocked with a material the strength of which shall meet or exceed the specifications or dimensions shown in Table O-11.

(1) Hot trimming presses. The requirements of paragraph (f)(1) of this section shall also apply to hot trimming presses.

(2) Lockout. Upsetters shall be provided with a means for locking out the power at its entry point to the machine and rendering its cycling controls inoperable.

(5) Changing dies. When dies are being changed, maintenance performed, or any work done on the machine, the power to the upsetter shall be locked out, and the flywheel shall be at rest.

(1) Boltheading.
The provisions of paragraph (h) of this section shall apply to boltheading.

(2) Rivet making. The provisions of paragraph (h) of this section shall apply to rivet making.

(1) Billet shears. A positive-type lockout device for disconnecting the power to the shear shall be provided.

# WELDING AND BRAZING

(i) Installation. All equipment shall be installed by a qualified electrician in conformance with subpart S of this part. There shall be a safety-type disconnecting switch or a circuit breaker or circuit interrupter to open each power circuit to the machine, conveniently located at or near the machine, so that the power can be shut off when the machine or its controls are to be serviced.

(ii) Capacitor welding. Stored energy or capacitor discharge types of resistance welding equipment and control panels involving high voltage (over 550 volts) shall be suitably insulated and protected by complete enclosures, all doors of which shall be provided with suitable interlocks and contacts wired into the control circuit (similar to elevator interlocks). Such interlocks or contacts shall be so designed as to effectively interrupt power and short-circuit all capacitors when the door or panel is open. A manually operated switch or suitable positive device shall be installed, in addition to the mechanical interlocks or contacts, as an added safety measure assuring absolute discharge of all capacitors.

(a) The controller disconnecting means is capable of being locked in the open position.

# GENERAL SAFETY AND HEALTH PROVISIONS

# 1926.20(b)

(3) The use of any machinery, tool, material, or equipment which is not in compliance with any applicable requirement of this part is prohibited. Such machine, tool, material, or equipment shall either be identified as unsafe by tagging or locking the controls to render them inoperable or shall be physically removed from its place of operation.

# NONIONIZING RADIATION

(e) Beam shutters or caps shall be utilized, or the laser turned off, when laser transmission is not actually required. When the laser is left unattended for a substantial period of time, such as during lunch hour, overnight, or at change of shifts, the laser shall be turned off.

# FIRE PROTECTION AND PREVENTION

# 1926.150(d)(1)

(ii) During demolition or alterations, existing automatic sprinkler installations shall be retained in service as long as reasonable. The operation of sprinkler control valves shall be permitted only by properly authorized persons. Modification of sprinkler systems to permit alterations or additional demolition should be expedited so that the automatic protection may be returned to service as quickly as possible. Sprinkler control valves shall be checked daily at close of work to ascertain that the protection is in service.

# 1926.200(h)

(1) Accident prevention tags shall be used as a temporary means of warning employees of an existing hazard, such as defective tools, equipment, etc. They shall not be used in place of, or as a substitute for, accident prevention signs.

(2) Specifications for accident prevention tags similar to those in Table G-1 shall apply.

(a) All fixed power-driven woodworking tools shall be provided with a disconnect switch that can either be locked or tagged in the off position.
# WELDING AND CUTTING

(g) For the elimination of possible fire in enclosed spaces as a result of gas escaping through leaking or improperly closed torch valves, the gas supply to the torch shall be positively shut off at some point outside the enclosed space whenever the torch is not to be used or whenever the torch is left unattended for a substantial period of time, such as during the lunch period. Overnight and at the change of shifts, the torch and hose shall be removed from the confined space. Open-end fuel gas and oxygen hoses shall be immediately removed from enclosed spaces when they are disconnected from the torch or other gas-consuming device.

# ELECTRICAL-GENERAL REQUIREMENTS

# 1926.400(g)

(1) Equipment or circuits that are deenergized shall be rendered inoperative and have tags attached at all points where such equipment or circuits can be energized.
(2) Controls that are to be deactivated during the course of work on energized or deenergized equipment or circuits shall be tagged.
(3) Tags shall be placed to identify plainly the equipment or circuits being worked on.

(11) Operating levers controlling hoisting or dumping devices on haulage bodies shall be equipped with a latch or other device which will prevent accidental starting or tripping of the mechanism.

# 1926.603(a)

(5) A blocking device, capable of safely supporting the weight of the hammer, shall be provided for placement in the leads under the hammer at all times while employees are working under the hammer.

# INITIATION OF EXPLOSIVE CHARGES-ELECTRICAL BLASTING

# 1926.906

(j) In underground operations when firing from a power circuit, a safety switch shall be placed in the permanent firing line at intervals. This switch shall be made so it can be locked only in the "Off" position and shall be provided with a short-circuit arrangement of the firing lines to the cap circuit.

(l) When firing from a power circuit, the firing switch shall be locked in the open or "Off" position at all times, except when firing. It shall be so designed that the firing lines to the cap circuit are automatically short-circuited when the switch is in the "Off" position. Keys to this switch shall be entrusted only to the blaster.

# POWER TRANSMISSION AND DISTRIBUTION

(d) Deenergizing lines and equipment. (1) When deenergizing lines and equipment operated in excess of 600 volts, and the means of disconnecting from electric energy is not visibly open or visibly locked out, the provisions of subdivisions (i) through (vii) of this subparagraph shall be complied with: (i) The particular section of line or equipment to be deenergized shall be clearly identified, and it shall be isolated from all sources of voltage.

# Appendix C. EVALUATION OF EXISTING STANDARDS - INTERNATIONAL, NATIONAL, STATE, CONSENSUS

The following paragraphs compare and evaluate the current national, state, international, and consensus standards related to energy controls (lockouts/tagouts) for accomplishing maintenance and servicing activities safely. Specifically, the recommended guidelines are compared with: OSHA General Industry Standards (29 CFR 1910) and OSHA Construction Standards (29 CFR 1926); State Standards (with approved OSHA plans); criteria developed in Canada (Alberta), Germany, and Britain; and consensus standards ANSI Z244.1-1982 and NFPA 70E Part II.
# Summary

Appendix Tables C-1, C-2, and C-3 present an overall comparison of the recommended guidelines with the respective national, state, international, and consensus standards. Even though the types of energy applicable and the industries affected vary, a considerable consistency is found in the guidelines required for all except the OSHA national standards and the state OSHA standards (with the exceptions of California and Michigan). The primary criteria elements are eliminating energy, securing the means by which the energy is eliminated, and verifying that energy has dissipated. [...]

The California standards are also limited to electrical energy and mechanical motion of equipment, whereas the recommended criteria cover electrical, mechanical, and thermal energy.

The existing Alberta, Canada, regulations lack effectiveness since they do not include mechanical motion or thermal energy as applicable forms of energy that personnel must be safeguarded against. Also, the Alberta regulations do not specify the documenting of procedures and training of personnel as necessary requirements for an effective program for controlling hazards during maintenance.

The draft German Accident Prevention Specifications do not include electrical, chemical, or thermal energy as applicable forms of energy to be controlled during maintenance and servicing. Industries affected by the specifications are limited to "power-driven equipment," which excludes many industries that expose workers to hazardous levels of energy during maintenance. The German specifications do not require verification that hazards have been controlled, documenting of procedures, or training of personnel in safe work practices.

# General Industry Lockout-Related Standard Provisions

# SPECIFICATIONS FOR ACCIDENT PREVENTION SIGNS AND TAGS

# 1910.145(f)(1)

(i) The tags are a temporary means of warning all concerned of a hazardous condition, defective equipment, radiation hazards, etc. The tags are not to be considered as a complete warning method, but should be used until a positive means can be employed to eliminate the hazard; for example, a "Do Not Start" tag on power equipment shall be used for a few moments or a very short time until the switch in the system can be locked out; a "Defective Equipment" tag shall be placed on a damaged ladder and immediate arrangements made for the ladder to be taken out of service and sent to the repair shop.

(10) It is recommended that each power-driven woodworking machine be provided with a disconnect switch that can be locked in the off position.

(3) On applications where injury to the operator might result if motors were to restart after power failures, provision shall be made to prevent machines from automatically restarting upon restoration of power.

(5) On each machine operated by electric motors, positive means shall be provided for rendering such controls or devices inoperative while repairs or adjustments are being made to the machines they control.

# PULP, PAPER AND PAPERBOARD MILLS

# 1910.261(b)

(4) Lockouts.
Devices such as padlocks shall be provided for locking out the source of power at the main disconnect switch. Before any maintenance, inspection, cleaning, adjusting, or servicing of equipment (electrical, mechanical, or other) that requires entrance into or close contact with the machinery or equipment, the main power disconnect switch or valve, or both, controlling its source of power or flow of material, shall be locked out or blocked off with a padlock, blank flange, or similar device.

(5) Vessel entering. Lifelines and safety harness shall be worn by anyone entering closed vessels, tanks, chip bins, and similar equipment, and a person shall be stationed outside in a position to handle the line and to summon assistance in case of emergency. The air in the vessels shall be tested for oxygen deficiency and the presence of both toxic and explosive gases and vapors before entry into closed vessels, tanks, etc., is permitted. Self-contained air- or oxygen-supply masks shall be readily available in case of emergency. Work shall not be done on equipment under conditions where an injury would result if a valve were unexpectedly opened or closed unless the valve has been locked in a safe position.

# 1910.261(e)

(2) Slasher tables. Saws shall be stopped and power switches shall be locked out and tagged whenever it is necessary for any person to be on the slasher table.

(10) Stops. All control devices shall be locked out and tagged when knives are being changed.

# 1910.261(e)(12)

(iii) Whenever it becomes necessary for a workman to go within a drum, the driving mechanism shall be locked and tagged, at the main disconnect switch, in accordance with paragraph (b)(4) of this section.

# 1910.261(e)(13)

Intermittent barking drums. In addition to a motor switch, clutch, belt shifter, or other power-disconnecting device, intermittent barking drums shall be equipped with a device which may be locked to prevent the drum from moving while it is being emptied or filled.

# 1910.261(f)(6)

(i) When cleaning, inspection, or other work requires that persons enter rag cookers, all steam and water valves, or other control devices, shall be locked and tagged in the closed or "off" position. Blank flanging of pipelines is acceptable in place of closed and locked valves.

Every textile machine shall be provided with individual mechanical or electrical means for stopping such machines. On machines driven by belts and shafting, a locking-type shifter or an equivalent positive device shall be used. On operations where injury to the operator might result if motors were to restart after power failures, provision shall be made to prevent machines from automatically restarting upon restoration of power.

(2) Protection for loom fixer. Provisions shall be made so that every loom fixer can prevent the loom from being started while he is at work on the loom. This may be accomplished by means of a lock, the key to which is retained in the possession of the loom fixer, or by some other effective means to prevent starting the loom.

(1) J-box protection. Each valve controlling the flow of steam, injurious gases, or liquids into a J-box shall be equipped with a chain, lock, and key, so that any worker who enters the J-box can lock the valve and retain the key in his possession. Any other method which will prevent steam, injurious gases, or liquids from entering the J-box while the worker is in it will be acceptable.

(2) Kier valve protection.
Each valve controlling the flow of steam, injurious gases, or liquids into a kier shall be equipped with a chain, lock, and key, so that any worker who enters the kier can lock the valve and retain the key in his possession. Any other method which will prevent steam, injurious gases, or liquids from entering the kier while the worker is in it will be acceptable.

# SAWMILLS # 1910.265

(iii) A main disconnect switch or circuit breaker shall be provided. This switch or circuit breaker shall be so located that it can be reached quickly and safely.

# 1910.265(e)(1)

(iv) Carriage control. A positive means shall be provided to prevent unintended movement of the carriage. This may involve a control locking device, a carriage tie-down, or both.

# 1910.268(m)(7)

(2) Before the voltage is applied, cable conductors shall be isolated to the extent practicable. Employees shall be warned, by such techniques as briefing and tagging at all affected locations, to stay clear while the voltage is applied.

(iii) After all designated switches and disconnectors have been opened, rendered inoperable, and tagged, visual inspection or tests shall be conducted to insure that equipment or lines have been deenergized.

(iv) Protective grounds shall be applied on the disconnected lines or equipment to be worked on.

(v) Guards or barriers shall be erected as necessary to adjacent energized lines.

(vi) When more than one independent crew requires the same line or equipment to be deenergized, a prominent tag for each such independent crew shall be placed on the line or equipment by the designated employee in charge.

(vii) Upon completion of work on deenergized lines or equipment, each designated employee in charge shall determine that all employees in his crew are clear, that protective grounds installed by his crew have been removed, and he shall report to the designated authority that all tags protecting his crew may be removed.

(2) When a crew working on a line or equipment can clearly see that the means of disconnecting from electric energy are visibly open or visibly locked out, the provisions of subdivisions (i) and (ii) of this subparagraph shall apply: (i) Guards or barriers shall be erected as necessary to adjacent energized lines. (ii) Upon completion of work on deenergized lines or equipment, each designated employee in charge shall determine that all employees in his crew are clear, that protective grounds installed by his crew have been removed, and he shall report to the designated authority that all tags protecting his crew may be removed.

(b) Deenergized equipment or lines. When it is necessary to deenergize equipment or lines for protection of employees, the requirements of paragraph 1926.950(d) shall be complied with.

# APPENDIX E
REFERENCES SUPPORTING GUIDELINES FOR CONTROLLING HAZARDOUS ENERGY DURING MAINTENANCE

The "Report of the Committee on Electrical Safety Requirements for Employee Workplaces" makes specific provisions for performing maintenance functions without benefit of lockout or tagout procedures. Other supporting references are: In a letter to OSHA from the Printing Industries of America, Inc., the statement that "much of the equipment currently used in the printing process is designed to be cleaned, and to a lesser degree repaired, while energized" highlights the need to protect workers from energized equipment.
The only way to protect the worker under these conditions is to be sure the procedures used are safe. Additionally, the ANSI Safety Standard for the Lockout/Tagout of Energy Sources states that "where energized work is required, acceptable procedures and equipment shall be employed to provide effective protection to personnel." The terms "acceptable" and "effective protection" indicate a definite need to determine that the procedures used are safe.
The Occupational Safety and Health Act of 1970 emphasized the need for standards to protect the health and safety of workers exposed to an ever increasing number of potential hazards in the workplace. Section 22(d)(2) of the Act authorizes the Director of the National Institute for Occupational Safety and Health (NIOSH) to make recommendations to the Occupational Safety and Health Administration (OSHA) concerning improved occupational safety and health standards. The purpose of this document is to provide recommendations for preventing injuries and disease caused by hazardous energy. The document was developed through the use of "systems analysis" so as to provide a logical means of performing maintenance and servicing activities safely, with energy present, with energy removed, and during the process of reenergizing.

We gratefully acknowledge the contributions to this document made by representatives of other Federal agencies or departments, labor unions, trade and professional associations, consultants, and the staff of the Institute. Whatever the contributions made by others, conclusions expressed in this document are those of the Institute, and we are solely responsible for them. However, all comments made by those outside NIOSH, whether or not incorporated, were considered carefully and were sent with the document to the Occupational Safety and Health Administration.

Donald Millar, M.D.
Assistant Surgeon General
Director, National Institute for Occupational Safety and Health

The Division of Safety Research, NIOSH, had primary responsibility for development of the recommended guidelines for controlling hazardous energy during maintenance and servicing. Ted A. Pettit served as project officer and document manager. Boeing Aerospace developed the basic information for consideration by NIOSH staff and consultants under contract No. 210-79-0024.

# Figure I. DIAGRAM FOR CONTROLLING HAZARDOUS ENERGY DURING MAINTENANCE

Footnote to Figure I: Implementation of safeguards to control hazards with energy present may be chosen instead of hazardous energy elimination by devices or techniques when it can be demonstrated that hazards are controlled with energy present by: (1) identifying all hazardous energy sources and hazardous residual energy, and (2) documenting a procedure for and demonstrating that the procedure will control hazards resulting from each hazardous energy identified. The decision at this point is predetermined by the original option chosen.

Legend: Rectangle symbol identifies an event that results from the combination or exclusion of activities or events. AND gate describes an operation whereby coexistence of all inputs is required to produce an output event. OR gate defines a situation whereby an output event will occur if one of the inputs exists. Decision gate is an exclusive OR gate which functions as an OR gate but provides a footnoted list of decision criteria that are not self-evident. Diamond describes an event that is considered basic in a given logic sequence; the event is not developed further because development is obvious.
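Read as decision logic, the top branch of Figure I can be captured in a few lines. The following Python sketch is illustrative only; the class, function, and example names are assumptions, not part of the source document. It permits work with energy present only when every hazardous and residual energy source has been identified and each has a documented procedure demonstrated to control it; otherwise the energy must be eliminated before work begins.

```python
from dataclasses import dataclass

@dataclass
class EnergyHazard:
    """One hazardous energy source (or residual energy) identified for a task."""
    name: str
    procedure_documented: bool = False    # a written control procedure exists
    procedure_demonstrated: bool = False  # the procedure was shown to control the hazard

def may_work_energized(hazards, all_sources_identified: bool) -> bool:
    """Figure I branch: safeguards with energy present are permissible only if
    (1) every hazardous and residual energy source is identified, and
    (2) each one has a documented, demonstrated control procedure."""
    return all_sources_identified and all(
        h.procedure_documented and h.procedure_demonstrated for h in hazards
    )

# Example: one identified hazard with a demonstrated, documented procedure.
hazards = [EnergyHazard("rotating blade inertia", True, True)]
print(may_work_energized(hazards, all_sources_identified=True))   # True
print(may_work_energized(hazards, all_sources_identified=False))  # False
```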
until it becomes operational. For simplification, "maintenance and servicing" are referred to as "maintenance" throughout much of the document.

"Inspection" is defined as checking or testing machinery, equipment, system, etc., against established standards. The definition suggests that the object being inspected has a function and should comply with a standard. From this, it is deduced that (1) a standard must exist that establishes the good or bad characteristics of the object being inspected and (2) the object must be complete so that it can perform its function; i.e., construction, assembly, or manufacture should be complete before the object can be maintained. If the object has more than one function, it may be necessary to change or add parts to the object so that it may perform the different functions. The activity required to make the changes or additions is considered maintenance. In some sectors of industry the term "set-up" is used to denote the activities needed to change the functions of the equipment.

"Service" is defined as repair or maintenance. The definition of service is synonymous with maintenance; however, service (as used in this report) refers to the activities needed to keep a machine, process, or system in a state of efficiency; e.g., changing crankcase oil, greasing, cleaning, painting, adjusting, calibrating, etc.

Energy. For this document, "energy" means mechanical motion; potential energy due to pressure, gravity, or springs; electrical energy; or thermal energy resulting from high or low temperature.

As suggested by one of the approaches taken by industry, maintenance may be performed with power off or power on. Maintenance activities, however, are always performed with some form of energy on and some form of energy off. Thus, the concept of "power off" or "power on" implies that power, the time rate at which work is done or energy emitted or transferred, can be turned off. This concept is too restrictive because it does not consider gravity or temperature. The term "energy," as opposed to "power," is therefore suggested as an alternative, because current consensus standards and other literature on this subject use this term. The concept of energy (any quantity with dimensions mass times length squared divided by time squared), however, is too inclusive. The concept of energy, for the purpose of this document, is limited to:

1. Kinetic energy - energy possessed by a body by virtue of its motion.

2. Potential energy - energy possessed by a body by virtue of its position in a gravity field.

3. Electrical energy - energy as a result of a generated electrical power source or a static source.

4. Thermal energy - energy as a result of mechanical work, radiation, chemical reaction, or electrical resistance.

Radiation, the process by which electromagnetic energy is propagated, is not included because this type of energy affects internal organ tolerance, which is a health effect.
Personnel Hazard. A condition which could lead to injury or death. This condition should be recognized by a person familiar with the particular circumstances and facts unique to a particular industry. The following concept of personnel hazard was used to objectively identify hazards to humans: A personnel hazard exists when the environment, conditions, natural phenomena, or equipment characteristics may release levels of energy that exceed human tolerance. This concept encompasses the human physiological tolerance to trauma as well as internal organ tolerance to environment.

Isolated or Blocked Energy. Energy is considered isolated or blocked when its flow would not be reactivated by a foreseeable unplanned event. The term "isolate" means to set apart from others. The term "block" (noun) means an obstacle or obstruction; or, (verb) to make unsuitable for passage or progress by obstruction, to prevent normal functioning. These terms are similar in meaning, but they cannot be used synonymously in all instances. Although they may describe the same function (i.e., prevention of the normal flow of energy), the way in which the function is performed is different. For instance, to control gravitational energy, the energy should be blocked in the sense that an obstacle or obstruction is placed. It would be incorrect to use the term "isolate" when referring to the control of gravitational energy. Electrical energy should be controlled by isolating it in the sense that it is set apart, or disconnected. It would be incorrect to use the term "blocked" when referring to the control of electrical energy. The terms may be used synonymously in some cases, such as when referring to blocking the passage of fluid or isolating the pressure in a pipe. Sometimes the difference in meaning becomes evident only by the method used to implement the function.

Point(s) of Control. The point or points from which energy-blocking, -isolating, or -dissipating devices are controlled.

Securing the Point(s) of Control. The point(s) of control are secured to prevent unauthorized persons from reactivating the flow of energy. Securing is a separate and distinct action from isolating or blocking the energy sources. The use of locks, tags, or posting a qualified person, or a combination thereof, are methods of accomplishing these criteria.

Dissipate Energy. To cause energy to be spread out or reduced to levels tolerable by humans. When the word "dissipate" is applied to the word "energy," the term may be interpreted differently. The following concepts should be used to determine the dissipation activities:

1. Dissipate Mechanical Motion - Motion tends to continue because of inertia after removal of energy; therefore, mechanical motion should be dissipated. For example, a flywheel should be allowed to come to rest before starting work.

2. Dissipate Potential Energy - Potential energy can be manifested in the form of pressure (above or below atmospheric), springs, and gravity. Gravity can never be eliminated or dissipated; it can only be controlled. Springs under tension or compression can be released (to dissipate stored energy) or the stored energy can be controlled. Pressure may be blocked, isolated, or dissipated. The term "dissipate pressure" implies reducing pressure to a level that would not harm humans. Normally, this pressure value is atmospheric.
3. Dissipate Electrical Energy - Dissipation of electricity may be accomplished by "grounding" the deelectrified portion of the circuit after it has been isolated. Grounding live circuits may be catastrophic. Dissipation of electrical energy includes the actions necessary to prevent the buildup of electrical potential (static electricity).

4. Dissipate Chemicals - Chemical reactions are exothermic or endothermic. Exothermic reactions raise temperatures, which may cause a variety of effects such as fires, explosions, burns, etc. Endothermic reactions lower temperatures and cause the need for additional heat. Some elements manufactured by endothermic reaction are used as explosives or have explosive characteristics because of their instability and rapid release of energy. For the purpose of this document, emphasis is placed on the effort necessary for the prevention or control of chemical reactions. Thus, the term "dissipation of chemicals" implies those actions needed to prevent chemical reactions that would (1) raise or lower temperatures or (2) cause effects which humans cannot tolerate.

5. Dissipate Thermal Energy - Human tolerance to temperature is very limited. Human tissue is harmed when it is exposed to temperatures above 45°C (113°F) or below 4°C (39°F). Since temperature cannot be isolated or blocked, the only way to control its effects on humans is through dissipation or employee protection. Mechanical motion, electrical resistance, chemical reactions, and radiation will raise the temperature of materials which, in turn, can burn or damage human tissue. Therefore, when energy sources that affect temperature are identified in equipment, processes, or systems, controls of the energy source should be effected to allow the temperature to dissipate to a tolerable level.
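As a concrete illustration of item 5, the short Python sketch below gates hands-on work until a surface has dissipated to what the text calls a tolerable level. The 4°C and 45°C thresholds come from the paragraph above; the function and variable names are illustrative assumptions.

```python
# Tolerance band from item 5: tissue is harmed above 45 C (113 F)
# or below 4 C (39 F). Names here are illustrative only.
UPPER_LIMIT_C = 45.0
LOWER_LIMIT_C = 4.0

def thermal_energy_tolerable(surface_temp_c: float) -> bool:
    """True once a surface has dissipated into the tolerable band."""
    return LOWER_LIMIT_C <= surface_temp_c <= UPPER_LIMIT_C

# Example: wait for a press platen to cool before hands-on maintenance.
for reading_c in (180.0, 95.0, 44.0):
    state = "tolerable" if thermal_energy_tolerable(reading_c) else "keep dissipating"
    print(f"{reading_c:6.1f} C -> {state}")
```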
# C. WORKING WITH OR WITHOUT ENERGY PRESENT

The basic decision that must be made before maintenance begins is: Can the task be accomplished safely with or without energy present, or is it necessary to deenergize before initiating maintenance? Concepts which should be considered in this decision include: (1) energy is always present; (2) energy is not necessarily dangerous; and (3) danger is present only when energy is released in quantities which exceed human tolerances. Further, prior to the development of specific energy control measures, all energy sources should be: (1) identified; (2) analyzed independently; and (3) analyzed in combination with any other energy sources present. The following paragraphs present criteria which should be considered during analysis of specific energy sources.

# D. ENERGY SOURCES

Mechanical motion can be linear translation or rotation, or it can produce work which, in turn, produces changes in temperature. This type of energy can be turned off or left on.

Potential energy can be due to pressure (above or below atmospheric), springs, or gravity. Maintenance is always conducted with gravity on. Potential energy manifested as pressures or in springs can be dissipated or controlled; it cannot be turned off or on.

Electrical energy refers to generated electrical power or static electricity. In the case of generated electricity, the electrical power can be turned on or turned off. Static electricity may not be turned off; it can only be dissipated.

Thermal energy is manifested by high or low temperature. This type of energy is the result of mechanical work, radiation, chemical reaction, or electrical resistance. It cannot be turned off or eliminated; however, it can be dissipated or controlled.

Chemical reaction is manifested by exothermic or endothermic effects. In either case, the energy-on/energy-off approach does not apply. Any material which could chemically react should be eliminated, dissipated, or controlled. That is, some positive measures must be taken to (1) eliminate the chemical so that no chemical reaction can take place, or (2) control the reaction so that the energy released by the chemical reaction will not harm humans. Methods of controlling hazards due to chemicals or chemical reactions as energy sources have not been addressed in this document because of their complexity.

These criteria indicate that some energy sources can be turned off, some can be dissipated, other sources of energy can be eliminated, and some can only be controlled. The following examples illustrate how this concept is applied.

Example 1: The task is the removal and replacement of a spring in a car. Potential energy is present due to the force of gravity and of the tension or compression of the spring. The force of gravity can be controlled only by blocking the car. The energy stored in the spring can be dissipated or controlled. It is usually controlled by blocking the release of the spring. Therefore, the decision logic indicates that maintenance must be done with energy present.

Example 2: A large mixing drum with internal rotating blades is in need of repair. One of the mixing blades is out of alignment. The established maintenance procedure requires the entry of two men to realign the blade. The electrical energy source can be switched off and locked out and/or the mechanical linkages can be disconnected. The lockout at the switch will provide a physical barrier and a tag will identify the task being performed. In addition, if the mechanical linkages are disconnected, all energy sources, except gravity, will be removed from the system.
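The groupings in Section D can be restated as a small lookup table. The Python sketch below is an illustrative construct, not part of the source document; it encodes which control actions the preceding paragraphs say are meaningful for each energy source.

```python
# Section D restated as a lookup table: which control actions are meaningful
# for each energy source. The groupings follow the preceding paragraphs.
APPLICABLE_ACTIONS = {
    "mechanical motion":     {"turn off", "dissipate"},  # let inertia run down
    "pressure or springs":   {"dissipate", "control"},   # cannot be turned off or on
    "gravity":               {"control"},                # block; never eliminated
    "generated electricity": {"turn off", "dissipate"},  # ground after isolation
    "static electricity":    {"dissipate"},              # cannot be switched off
    "thermal":               {"dissipate", "control"},   # cannot be turned off
    "chemical":              {"eliminate", "control"},   # on/off does not apply
}

def action_is_meaningful(source: str, action: str) -> bool:
    """Check a proposed control action against the Section D groupings."""
    return action in APPLICABLE_ACTIONS.get(source, set())

print(action_is_meaningful("gravity", "turn off"))   # False: gravity stays on
print(action_is_meaningful("thermal", "dissipate"))  # True
```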
# II. INVESTIGATION OF THE PROBLEM

Much of the data examined during this study did not include enough specific information to identify the factors causing or contributing to accidents which occur during maintenance as a result of inadequate energy control measures. However, an extensive search of world literature related to maintenance hazards and maintenance hazard controls revealed a number of accident scenarios which could be evaluated in the attempt to identify generic causes of maintenance-related accidents. Accidents which could be attributed to the presence of uncontrolled energy during maintenance activities were selected for closer study. Although the limited nature of the information precluded exact cause determination, the scenarios did provide a general indication of causes or contributing causes as well as illustrations of the types of accidents that can occur when hazardous levels of energy are not controlled.

# A. THE ACCIDENT SCENARIOS

Out of 300 accident scenarios found in 14 different sources, 59 accidents were selected to illustrate that adequate energy control methods may have prevented the accidents. (A brief description of each accident is included in Appendix B.) Causes of these accidents have been categorized as follows:

1. Maintenance activities were initiated without attempting to deenergize the equipment or system, or control the hazards with energy present. Twenty-seven accidents fell into this category. Typical of these accidents was accident No. 10 (Appendix B), in which an experienced repairman saw a piece of wire stuck in a filter wheel. He attempted to remove the wire while the wheel was in motion. Losing his balance, he fell into the wheel. His leg was crushed and had to be amputated.

2. Energy blockage or isolation was attempted, but was inadequate. Six incidents were the result of ineffective energy isolation or blockage. An example was accident No. 30 (Appendix B), in which a worker attempted to prevent an elevator from moving by jamming the doors open with a plank while the elevator was on the second floor, and then turning off the outside panel switch on the main floor. He was killed while working on the main floor when the elevator returned to its home base rather than remaining on the second floor.

3. Residual (potential) energy was not dissipated. Accident No. 34 (Appendix B) resulted in injury due to failure to dissipate residual energy. The worker turned off the power to a packaging machine and attempted to remove a jam. Residual hydraulic pressure activated the holding device, causing the injury.

4. Accidental activation of energy. Twenty-five of the accidents were caused by accidental activation. These accidents were caused by either (1) persons unintentionally actuating controls, or (2) other persons activating the controls, not realizing that maintenance was in progress. The result, in either case, can be disastrous. An accident typical of this category was accident No. 36 (Appendix B), in which a mechanic was repairing an electrically operated caustic pump. A co-worker dragged a cable across the toggle switch that operated the pump, and the mechanic was sprayed with caustic. Another incident, accident No. 38 (Appendix B), occurred when an employee was cutting a pipe with a torch and diesel fuel was mistakenly discharged into the line. The ensuing ignition of the diesel fuel resulted in a fatality.

These types of accidents are preventable if effective energy control techniques or procedures are available, workers are trained to use them, and management provides the motivation to ensure their use. Table I summarizes the accident causes and indicates the applicable guidelines from Chapter III which, if followed, would have eliminated the hazard and prevented the accident. In Appendix B, an energy control method that would have been effective is identified for each entry. The methods shown are only examples. For many cases, several energy control techniques could have been used to prevent the accident. In addition, Appendix B identifies the hazard types, energy sources, and accident causes as well as consequences of each accident.
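Table I itself is not reproduced in this excerpt, so the Python sketch below pairs the four cause categories with the countermeasure the surrounding narrative suggests for each. The pairing is an assumption drawn from the text, not a copy of the table.

```python
# The four cause categories from Section II.A, paired with the control measure
# the narrative indicates would have prevented them. This mapping is an
# assumption; it is not the content of Table I.
CAUSE_TO_COUNTERMEASURE = {
    "no attempt to deenergize or control with energy present":
        "decide, before work starts, whether the task is done energized or deenergized",
    "inadequate isolation or blockage":
        "verify that isolation or blockage is effective before work begins",
    "residual energy not dissipated":
        "identify and dissipate stored energy such as hydraulic pressure",
    "accidental activation":
        "secure the point(s) of control with a lock, tag, or posted attendant",
}

for cause, countermeasure in CAUSE_TO_COUNTERMEASURE.items():
    print(f"{cause}\n  -> {countermeasure}")
```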
# B. STATISTICAL INVESTIGATION

Besides the literature search, a statistical investigation was undertaken in the attempt to obtain data that would help to identify factors causing or contributing to the type of accident under consideration. It was planned that statistical accident data would be obtained that related to inadequate energy controls during maintenance. The data would help to identify ineffective control methods and particular problems that should be avoided. Manual and automated searches were conducted to locate the information. Most of the data found, however, did not include enough specific information to identify the factors causing or contributing to the accidents. This limitation has been described as follows:

"Existing data are very limited. Occupational injury data are often gathered primarily for the purpose of processing worker's compensation claims rather than for developing countermeasures. Most current information systems emphasize data about the injury, not the accident, and about the end results of the accident sequence, rather than the precipitating events and conditions leading to the injuring event. Data of the latter type are often much more important for the development of countermeasures.

"The ANSI Z16.2 American National Standard Method of Recording Basic Facts Relating to the Nature and Occurrence of Work Injuries, or an adaptation of this coding method, is used almost exclusively among both the Federal and State Government agencies and private sector information systems. The ANSI Z16.2 method is extremely limited and is designed primarily as a monitoring tool for grouping accidents by major types and for flagging a limited number of isolated factors about accidents so that the cases can be recalled when data analysis begins. The ANSI Z16.2 method is not designed as an analytical tool for identifying the patterns, the precipitating events and conditions in an accident sequence, or the relationships among contributing causal factors in an accident that are normally needed for countermeasure development."

Attempts to obtain more specific data from such sources as associations, insurance companies, labor unions, and industry proved unsuccessful.

Energy is considered adequately isolated, blocked, or dissipated when an unplanned event would not reactivate the flow of energy. Adequate isolation can be achieved by many methods or combinations of methods so long as the controls are not likely to be accidentally turned on. Appendix A identifies several examples of methods of effectively isolating or blocking energy.

b. Stored or residual energy that constitutes a personnel hazard shall be isolated, blocked, or dissipated.

Another necessary step is to block or dissipate any hazardous residual energy once the decision is made to deenergize. Residual energy is not always as obvious a hazard as the incoming energy supplies. For this reason, special effort must be made to identify any stored energy that could result in personnel hazards. For example, if it is necessary for a worker to climb or move about mechanical linkages, the potential energy due to the worker's weight may be sufficient to cause dangerous movements of the linkages. If this hazard is present, a mechanical block or pin can usually be used to block out the energy and potentially dangerous movement. Forms of potential energy which may be stored in sufficient quantities to represent hazards include pressure (above or below atmospheric), springs under tension or compression, and gravity.
c. The point(s) of control shall be secured so that unauthorized persons are prevented from reenergizing the machine, process, or system.

A means of security must be implemented to ensure that the equipment being maintained or serviced is not somehow reenergized. The guidelines allow a choice of three methods. The selection should be based on the particular circumstances and characteristics of each respective facility or application. A farm employee working with potentially hazardous powered equipment in a remote field would not be expected to use a padlock for security. A padlock (or equivalent), however, would be a logical selection for a switch or valve accessible to a large number of people. The use of tags, when only trained personnel have access to the point(s) of control, is an accepted industry practice. Any one of the following methods will prevent unwanted reenergization of equipment (a sketch of this selection logic follows the list):

(1) Secure the point(s) of control by physical means (e.g., a lock or equivalent "lockout") such that reenergizing the system requires the use of special equipment routinely available only to the person who applied the control. A warning containing appropriate information shall be displayed at the point(s) of control.

The method of securing the points of control by physical means that prevent unauthorized persons from reenergizing the machine, process, or system is widely used. The most common device used for security is the padlock. Often each worker is provided with his own padlock and the only key.

(2) Display at the point(s) of control a warning identifying the energy that has been isolated, blocked, or dissipated, the date, the person(s) responsible for the control measure, and the person(s) responsible for the work to be accomplished. In addition, access to the control point(s) must be limited to persons who are trained to understand and observe the posted warning.

Access may be limited by physical location (such as elevation) or by procedural means such as color-coded badges. When this method of security is used, each new employee should receive training to observe the guidelines before having access to the point(s) used for controlling energy. The training should include the purpose of the warning, the format and color of the warning, and should stress the responsibility of each person for his co-workers' safety. Retraining should be given as necessary so that the importance of observing the posted warning is not forgotten.

(3) Post qualified personnel, with the specific responsibility of protecting against unauthorized actuation, at the point(s) of control throughout the maintenance activity. This applies mainly to short duration work in the immediate vicinity of the control point(s).
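The selection examples given above (a remote field, a widely accessible switch, short work beside the controls) can be sketched as a simple decision function. Everything in this Python sketch is illustrative; the guidelines prescribe only the three methods, not this particular logic.

```python
from enum import Enum, auto

class SecuringMethod(Enum):
    """The three securing options given in criterion c."""
    PHYSICAL_LOCK = auto()           # method (1): padlock or equivalent
    WARNING_LIMITED_ACCESS = auto()  # method (2): posted warning, trained access only
    POSTED_ATTENDANT = auto()        # method (3): qualified person at the controls

def select_method(short_work_near_controls: bool,
                  controls_widely_accessible: bool,
                  remote_site: bool) -> SecuringMethod:
    """Illustrative selection logic built from the examples in the text: short
    work beside the controls suits a posted attendant; a switch reachable by
    many people warrants a lock; a lone worker in a remote field may rely on
    a warning with access limited to trained personnel."""
    if short_work_near_controls:
        return SecuringMethod.POSTED_ATTENDANT
    if controls_widely_accessible:
        return SecuringMethod.PHYSICAL_LOCK
    if remote_site:
        return SecuringMethod.WARNING_LIMITED_ACCESS
    return SecuringMethod.PHYSICAL_LOCK  # default to the most common device

print(select_method(False, True, False))  # SecuringMethod.PHYSICAL_LOCK
```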
d. Before starting maintenance, verify that steps a. through c. have been effective in isolating, blocking, or dissipating hazardous energy, and securing the point(s) of control.

In applying any method or technique to isolate, block, or dissipate energy from specified areas, devices, machines, systems, or processes, it should be verified that deenergization has been effected prior to the start of maintenance. Verification should be accomplished each time energy is eliminated and reapplied, regardless of the time interval between removal and reapplication. Proven methods should be used to effectively demonstrate that all hazardous energy has been isolated, blocked, or dissipated in the areas where personnel will perform the required tasks. If there is the possibility of reaccumulation of energy to hazardous levels, verification should be continued until the maintenance activity is completed. The devices should be regularly and frequently inspected and/or calibrated to ensure that they are functional and accurate and to detect any potential device failures.

Good intentions to eliminate energy have failed and injuries have resulted when the wrong sources of energy were controlled. To avoid this possibility, the energy should be verified to be below hazardous levels before proceeding. This action is so obvious that many references fail to identify it; yet it is vitally important that it be included in each procedure. The procedures should clearly assign responsibility for each step of the criteria as well as indicate where isolation, blocking, or dissipation of energy is to be accomplished, how deenergization is to be verified, what method is to be used for security, and how responsibility is to be transferred during shift changes.

For the five steps listed above to comprise a valid technique for controlling hazardous energy sources, the following two preconditions must be met:

a. The procedures shall be documented.

The procedures need not be unique for a single machine or task; procedures may be used which apply to a group of similar machines or tasks. Depending upon the complexity of the equipment and its application, good procedures may vary from one to many pages. No matter how simple and straightforward the procedures may seem, they should be documented so that all levels of personnel understand company policy, as well as the required safety procedures.

b. The personnel who implement the above steps shall be qualified. Each worker must thoroughly understand all documented procedures. Training shall be accomplished as necessary to establish and maintain proficiency, and to encompass procedural or equipment changes which affect energy control during maintenance.

After the energy control need has been identified, the methods of implementation selected, and the procedures finalized, a training program should be instituted to ensure that the procedures are followed and the purpose and functions are understood. The training should involve elements of management and the workforce concerned with maintenance activities so that the criteria are expeditiously and uniformly applied. The training should ensure that the understanding and skills required for the safe application and removal of energy controls are available as required. Retraining should be scheduled as often as necessary to maintain proficiency and to introduce revised devices, practices, and methods.

If energy isolation, blocking, or dissipation is secured according to criterion 2.c.(2), then every person having job-related access to the point(s) of control should be trained. This training is in addition to that required for personnel involved in applying energy controls. The training should include the purpose of the warning, the format and color of the warning, and should emphasize the extreme importance for all personnel to obey the warnings in order to protect their co-workers.

The personnel who implement the procedures should be trained not only to know how to accomplish the steps required to control energy sources, but also to understand the hazards associated with energy sources. All levels of supervision concerned with maintenance and servicing, with special emphasis on first-line supervision, must be trained in order to ensure that procedures are followed.
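Criterion d and the documentation precondition together list what a written procedure must state: where energy is isolated, blocked, or dissipated; how deenergization is verified; the securing method; responsibility for each step; and how responsibility is transferred at shift changes. A hypothetical record type capturing those items might look like the following Python sketch; all field names and the rag-cooker example values are illustrative assumptions, not prescribed by the guidelines.

```python
from dataclasses import dataclass

@dataclass
class EnergyControlProcedure:
    """Hypothetical record of the items a documented procedure must state,
    per criterion d and precondition a. Field names are illustrative."""
    equipment_group: str        # may cover a group of similar machines
    isolation_points: list      # where energy is isolated, blocked, or dissipated
    verification_method: str    # how deenergization is verified before work
    securing_method: str        # lock, warning with limited access, or attendant
    step_responsibilities: dict # person responsible for each step
    shift_transfer_rule: str    # how responsibility passes at shift change

procedure = EnergyControlProcedure(
    equipment_group="rag cookers, building 2",
    isolation_points=["steam valve V-12", "water valve V-14"],
    verification_method="open drain and confirm zero gauge pressure before entry",
    securing_method="padlock; each worker holds the only key to his own lock",
    step_responsibilities={"isolate": "operator", "verify": "maintenance lead"},
    shift_transfer_rule="oncoming lead applies own lock before outgoing lock is removed",
)
print(procedure.equipment_group, "->", procedure.verification_method)
```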
# B. CONTROLLING HAZARDS WITH ENERGY PRESENT

Controlling maintenance hazards with energy present is the only alternative to deenergizing to perform maintenance tasks safely. For some types of machines or processes, the conditions or combination of conditions that must be evaluated and controlled in energized systems can be much more complex than in deenergized systems. The following paragraphs describe guidelines for maintaining "energized" systems.

1. Hazardous energy sources, including residual energy sources, shall be identified.

Identification of the hazards needs no justification, for a hazard cannot be controlled unless it is identified.

2. Documented procedures, which have been determined to control each hazardous energy source, shall be used. Procedures shall assign responsibility and accountability for controlling personnel hazards.

Before a worker is exposed to a hazard, the hazard controls must be known to be effective. Determination of hazard control effectiveness can be based on physical demonstration prior to implementation on a day-to-day basis. This can be by similarity to a demonstrably effective hazard control, or by analysis.

3. Personnel who perform the maintenance shall be qualified to use the procedures. Qualification to use the procedures may be obtained by education, experience, and/or training.

The personnel who perform the maintenance and servicing activities must be trained to understand the particular hazards, the controls for the hazards, and how to implement the controls effectively.
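The three criteria above for working with energy present reduce to a conjunction, sketched below in Python. The function name and the string encoding of the effectiveness basis are illustrative assumptions, not part of the guidelines.

```python
def energized_work_permitted(hazards_identified: bool,
                             procedures_documented: bool,
                             effectiveness_basis: str,
                             workers_qualified: bool) -> bool:
    """Sketch of Section B's criteria for maintenance with energy present.
    'effectiveness_basis' must name one of the bases the text allows: prior
    physical demonstration, similarity to a proven control, or analysis."""
    acceptable_bases = {"demonstration", "similarity", "analysis"}
    return (hazards_identified
            and procedures_documented
            and effectiveness_basis in acceptable_bases
            and workers_qualified)

print(energized_work_permitted(True, True, "analysis", True))  # True
print(energized_work_permitted(True, True, "assumed", True))   # False
```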
# IV. CONCLUSIONS

This study confirms that the need to control hazardous energy during maintenance activities is recognized by industry, labor, and the Government. This need is manifested by the different approaches implemented by industry to reduce injuries, the evidence submitted to the Occupational Safety and Health Administration at public hearings on the subject of lockouts and tagouts, and results of the literature search in which a great number of articles were found which emphasize the need for some type of control.

The literature review did not produce statistical evidence of the effectiveness of any one specific type of energy control method over another, nor did it identify accident causative factors leading to injuries. No values could be developed which differentiate accidents and injuries occurring during maintenance from aggregate U.S. injury statistics; i.e., data to provide indices on the probability of injury occurrence, the magnitude of the injuries, and the exposure to the hazard (population of workers at risk) that correlate with the identified primary hazard causes.

Industrial accident and injury data available in the U.S. describe the injuries sustained by the workers in greater detail than the causes of the accidents that inflicted the injuries. The hazard causes found in the analyzed accident reports are categorized by fire, explosion, impact, fall, caught in or between, and others. The hazard causes identified by this study are different. Consequently, the published statistics (aggregates of accident/injury reports) cannot be broken down into accident causes identified as specific to maintenance and servicing activities. The study identified the following hazard causes:

1. Maintenance activities were initiated without attempting to deenergize the equipment or system or control the hazard with energy present.

2. Energy blockage or isolation was attempted, but was inadequate.

3. Residual energy was not dissipated.

4. Energy was accidentally activated.

The worker population at risk could not be defined because employers who implement procedural controls may choose to make the procedures applicable to (1) all personnel; (2) personnel engaged in any one or any combination of activities such as construction, operations, manufacturing, testing, etc.; or (3) maintenance personnel only. Thus, the effectiveness of any one method of control with respect to another cannot be determined.
However, the implementation of any method which increases worker awareness of potentially hazardous energy sources is better than no method at all. Most of these existing regulations use the concept of "power off" to prevent injuries, and do not provide guidance on how to discern when to apply locks, tags, or a combination of locks and tags. They also do not allow for the performance of maintenance with power on, even under normal, everyday conditions, or when such maintenance cannot be performed with power off. In fact, personnel can be harmed by toxic, caustic, or asphyxiant materials with no concurrent release of energy. It is therefore recommended that criteria be developed specifically to protect maintenance personnel from such hazardous materials. The research effort should focus on identifying potential hazards and determining corrective action which either reduces hazardous energy levels or changes the worker's proximity to hazardous energy.

The following accident descriptions are excerpted from Appendix B:

21. An employee attempted to remove some clogged compost material that had jammed an operating kiln discharge conveyor. His clothing became entangled in the unguarded revolving rollers. His arm was caught in the machine. He was pulled against the machine and was choked to death.

22. An employee was attempting to clear a jam on a garnett machine while it was in operation. He crawled inside the machine through an unguarded hole where he became entangled in moving parts, was drawn into the roller, and was crushed.

23. An employee was cleaning an edger saw in a sawmill. Protective panels were removed. He requested another employee to start the saw. His clothes became entangled and he was pulled into the saw.

24. An employee was emptying dough from a mixer while it was still running. When he reached into the machine his arm was caught and he was pulled into the machine.

The existing British Code of Practice relies on the human element to follow a documented "permit to work" system for performing maintenance. It closely parallels the recommended guidelines, and relies on the documented system to serve as a record of all the foreseeable hazards which have been considered in advance. It also relies heavily on supervision to see that the system operates properly.

The Consensus Standards are similar to the recommended guidelines. They provide the options of (1) securing the point(s) of control by lockout or tagout devices or (2) having no locks or tags, provided other requirements are met.
These standards, however, do not allow the option of having a person remain at the point(s) of control to protect against unauthorized actuation of the machine or process during maintenance and servicing. Also, the standards do not cover the broad spectrum of hazards as do the recommended guidelines.

# National Standards

The existing national standards as shown in Table C-4 are not uniform in their coverage. Inconsistencies in the requirements exist between industries and between equipment within the same industry. Some sections of the General Industry Standards imply locking out or tagging out energy rather than specifying that lockouts or tagouts be performed. Sections of the standard covering industries or equipment requiring provisions for only locking out or tagging out energy are: overhead and gantry cranes; woodworking machinery; mechanical power presses; certain forging machines; certain pulp, paper, and paperboard mills; textiles; certain bakery equipment; and certain sawmill equipment.

# State Standards

The existing state standards (Table C-5) generally follow the national standards, with the exceptions of California and Michigan. The California standards include provisions for securing the point(s) of control to prevent unauthorized persons from reenergizing the system; using warning tags to caution personnel that energy has been isolated and the reason for isolation; verifying that isolation of energy has been effective; verifying that personnel have cleared the area prior to reenergizing the machine or system; documenting procedures; and ensuring that personnel who implement the criteria have been adequately trained to thoroughly understand the procedures.

The Michigan standards closely parallel the recommended guidelines except Michigan does not include verification that blocking, isolating, and dissipating hazardous energy have been effected before starting maintenance.

# International Standards

International standards from Canada, Germany, and Britain (summarized in Table C-6) parallel portions of the recommended criteria. The Alberta, Canada, regulations for machinery and equipment state that no maintenance or repairs shall be carried out until the machinery or equipment has been shut down and secured against accidental movement, or the power control devices have been locked out in an inoperative mode by the installation of lockout devices and warning tags. Dissipation of stored energy, verification that isolation and dissipation have been effected prior to starting maintenance, and verification that upon completion of maintenance, personnel are clear of the danger points before the machine or equipment is reenergized are also included as requirements.
The German specifications for setup, troubleshooting, and maintenance of power-driven equipment state that these tasks must only be performed when: (1) the hazardous movements are brought to a halt; (2) the initiation of hazardous movements as a consequence of stored power or energy is prevented; and (3) unauthorized, erroneous, or unexpected initiation of hazardous movements is avoided by some adequate means.

The British Code of Practice for Safeguarding of Machinery states that effective control of hazards during maintenance can be achieved by having a written "permit to work" system. Such a system must clearly identify the hazards and document the practices to be followed, precautions to be taken, and responsibilities of the workers and of management. The code of practice also calls for adequate training of workers and supervision in safe systems of work and lockout systems for maintenance operation.

# Consensus Standards

The ANSI and NFPA standards (Table C-7) establish requirements and procedures for lockout/tagout of energy sources for stationary machines and equipment and for electrical circuits and electrical equipment, respectively. These standards require (1) that energy sources be isolated or blocked for maintenance, (2) that stored energy be dissipated prior to beginning maintenance, (3) that physical means or devices be used to secure the energy sources, (4) verification that the energy sources have been isolated prior to starting work, (5) verification that all personnel are clear of hazards before reenergizing the machines or systems, (6) assurance that the procedures for lockout/tagout have been documented, and (7) that personnel have been adequately trained to understand and implement the recommended guidelines. Applicability of the ANSI standard is limited to unexpected energization, startup, or release of stored energy of equipment or process. The NFPA standard does not have such a limitation but covers only electrical energy. The recommended guidelines are applicable for all phases of maintenance and servicing.

# FORGING MACHINES # 1910.218

This valve shall be closed and locked in the off position while the hammer is being adjusted, repaired, or serviced, or when the dies are being changed.

(ii) Air-lift hammers shall have an air shutoff valve as required in paragraph (d)(2) of this section and should be conveniently located and distinctly marked for ease of identification.

(iii) Air-lift hammers shall be provided with two drain cocks: one on main head cylinder, and one on clamp cylinder.

(1) Mechanical forging presses. When dies are being changed or maintenance is being performed on the press, the following shall be accomplished: (i) The power to the press shall be locked out. (ii) The flywheel shall be at rest.

# 1910.218(g)

(i) The hydraulic pumps and power apparatus shall be locked out. (ii) The ram shall be blocked with a material the strength of which shall meet or exceed the specifications or dimensions shown in Table O-11.

(1) Hot trimming presses. The requirements of paragraph (f)(1) of this section shall also apply to hot trimming presses.

(2) Lockout. Upsetters shall be provided with a means for locking out the power at its entry point to the machine and rendering its cycling controls inoperable.

(5) Changing dies. When dies are being changed, maintenance performed, or any work done on the machine, the power to the upsetter shall be locked out, and the flywheel shall be at rest.

(1) Boltheading.
The provisions of paragraph (h) of this section shall apply to boltheading.

(2) Rivet making. The provisions of paragraph (h) of this section shall apply to rivet making.

(1) Billet shears. A positive-type lockout device for disconnecting the power to the shear shall be provided.

# WELDING AND BRAZING # 1910.252(c)(2)

(i) Installation. All equipment shall be installed by a qualified electrician in conformance with subpart S of this part. There shall be a safety-type disconnecting switch or a circuit breaker or circuit interrupter to open each power circuit to the machine, conveniently located at or near the machine, so that the power can be shut off when the machine or its controls are to be serviced.

(ii) Capacitor welding. Stored energy or capacitor discharge type of resistance welding equipment and control panels involving high voltage (over 550 volts) shall be suitably insulated and protected by complete enclosures, all doors of which shall be provided with suitable interlocks and contacts wired into the control circuit (similar to elevator interlocks). Such interlocks or contacts shall be so designed as to effectively interrupt power and short circuit all capacitors when the door or panel is open. A manually operated switch or suitable positive device shall be installed, in addition to the mechanical interlocks or contacts, as an added safety measure assuring absolute discharge of all capacitors.

(a) The controller disconnecting means is capable of being locked in the open position.

# GENERAL SAFETY AND HEALTH PROVISIONS # 1926.20(b)

(3) The use of any machinery, tool, material, or equipment which is not in compliance with any applicable requirement of this part is prohibited. Such machine, tool, material, or equipment shall either be identified as unsafe by tagging or locking the controls to render them inoperable or shall be physically removed from its place of operation.

# NONIONIZING RADIATION # 1926.54

(e) Beam shutters or caps shall be utilized, or the laser turned off, when laser transmission is not actually required. When the laser is left unattended for a substantial period of time, such as during lunch hour, overnight, or at change of shifts, the laser shall be turned off.

# FIRE PROTECTION AND PREVENTION # 1926.150(d)(1)

(ii) During demolition or alterations, existing automatic sprinkler installations shall be retained in service as long as reasonable. The operation of sprinkler control valves shall be permitted only by properly authorized persons. Modification of sprinkler systems to permit alterations or additional demolition should be expedited so that the automatic protection may be returned to service as quickly as possible. Sprinkler control valves shall be checked daily at close of work to ascertain that the protection is in service.

# 1926.200(h)

(1) Accident prevention tags shall be used as a temporary means of warning employees of an existing hazard, such as defective tools, equipment, etc. They shall not be used in place of, or as a substitute for, accident prevention signs.

(2) Specifications for accident prevention tags similar to those in Table G-1 shall apply.

# 1926.304

(a) All fixed power-driven woodworking tools shall be provided with a disconnect switch that can either be locked or tagged in the off position.
# WELDING AND CUTTING # 1926.352

(g) For the elimination of possible fire in enclosed spaces as a result of gas escaping through leaking or improperly closed torch valves, the gas supply to the torch shall be positively shut off at some point outside the enclosed space whenever the torch is not to be used or whenever the torch is left unattended for a substantial period of time, such as during the lunch period. Overnight and at the change of shifts, the torch and hose shall be removed from the confined space. Open end fuel gas and oxygen hoses shall be immediately removed from enclosed spaces when they are disconnected from the torch or other gas-consuming device.

# ELECTRICAL-GENERAL REQUIREMENTS # 1926.400(g)

(1) Equipment or circuits that are deenergized shall be rendered inoperative and have tags attached at all points where such equipment or circuits can be energized.

(2) Controls that are to be deactivated during the course of work on energized or deenergized equipment or circuits shall be tagged.

(3) Tags shall be placed to identify plainly the equipment or circuits being worked on.

(11) Operating levers controlling hoisting or dumping devices on haulage bodies shall be equipped with a latch or other device which will prevent accidental starting or tripping of the mechanism.

# 1926.603(a)

(5) A blocking device, capable of safely supporting the weight of the hammer, shall be provided for placement in the leads under the hammer at all times while employees are working under the hammer.

# INITIATION OF EXPLOSIVE CHARGES-ELECTRICAL BLASTING # 1926.906

(j) In underground operations when firing from a power circuit, a safety switch shall be placed in the permanent firing line at intervals. This switch shall be made so it can be locked only in the "Off" position and shall be provided with a short-circuit arrangement of the firing lines to the cap circuit.

(l) When firing from a power circuit, the firing switch shall be locked in the open or "Off" position at all times, except when firing. It shall be so designed that the firing lines to the cap circuit are automatically short-circuited when the switch is in the "Off" position. Keys to this switch shall be entrusted only to the blaster.

# POWER TRANSMISSION AND DISTRIBUTION # 1926.950

(d) Deenergizing lines and equipment. (1) When deenergizing lines and equipment operated in excess of 600 volts, and the means of disconnecting from electric energy is not visibly open or visibly locked out, the provisions of subdivisions (i) through (vii) of this subparagraph shall be complied with: (i) The particular section of line or equipment to be deenergized shall be clearly identified, and it shall be isolated from all sources of voltage.

# Appendix C
EVALUATION OF EXISTING STANDARDS: INTERNATIONAL, NATIONAL, STATE, CONSENSUS

The following paragraphs compare and evaluate the current national, state, international, and consensus standards related to energy controls (lockouts/tagouts) for accomplishing maintenance and servicing activities safely. Specifically, the guidelines recommended are compared with: OSHA General Industry Standards (29 CFR 1910) and OSHA Construction Standards (29 CFR 1926); State Standards (with approved OSHA plans); criteria developed in Canada (Alberta), Germany, and Britain; and consensus standards ANSI Z244.1-1982 and NFPA 70E Part II.
# Summary
Appendix Tables C-1, C-2, and C-3 present an overall comparison of the recommended guidelines with the respective national, state, international, and consensus standards. Even though the types of energy applicable and the industries affected vary, a considerable consistency is found in the guidelines required for all except the OSHA national standards and the state OSHA standards (with the exceptions of California and Michigan). The primary criteria elements are eliminating energy, securing the means by which the energy is eliminated, and verifying that energy has dissipated …
The California standards are also limited to electrical energy and mechanical motion of equipment; whereas, the recommended criteria cover electrical, mechanical, and thermal energy.
The existing Alberta, Canada, regulations lack effectiveness since they do not include mechanical motion or thermal energy as applicable forms of energy that personnel must be safeguarded against. Also, the Alberta regulations do not specify the documenting of procedures and training of personnel as necessary requirements for an effective program for controlling hazards during maintenance.
The draft German Accident Prevention Specifications do not include electrical, chemical, or thermal as applicable forms of energy to be controlled during maintenance and servicing. Industries affected by the specifications are limited to "power driven equipment," which excludes many industries that expose workers to hazardous levels of energy during maintenance. The German specifications do not require verification that hazards have been controlled, documenting of procedures, or training of personnel in safe work practices.
# General Industry Lockout-Related Standard Provisions
# SPECIFICATIONS FOR ACCIDENT PREVENTION SIGNS AND TAGS
# 1910.145(f)(1)
(i) The tags are a temporary means of warning all concerned of a hazardous condition, defective equipment, radiation hazards, etc. The tags are not to be considered as a complete warning method, but should be used until a positive means can be employed to eliminate the hazard; for example, a "Do not Start" tag on power equipment shall be used for a few moments or a very short time until the switch in the system can be locked out; a "Defective Equipment" tag shall be placed on a damaged ladder and immediate arrangements made for the ladder to be taken out of service and sent to the repair shop.
(10) It is recommended that each power-driven woodworking machine be provided with a disconnect switch that can be locked in the off position.
(3) On applications where injury to the operator might result if motors were to restart after power failures, provision shall be made to prevent machines from automatically restarting upon restoration of power.
(5) On each machine operated by electric motors, positive means shall be provided for rendering such controls or devices inoperative while repairs or adjustments are being made to the machines they control.
# PULP, PAPER AND PAPERBOARD MILLS
# 1910.261(b)
(4) Lockouts.
Devices such as padlocks shall be provided for locking out the source of power at the main disconnect switch. Before any maintenance, inspection, cleaning, adjusting, or servicing of equipment (electrical, mechanical, or other) that requires entrance into or close contact with the machinery or equipment, the main power disconnect switch or valve, or both, controlling its source of power or flow of material, shall be locked out or blocked off with padlock, blank flange, or similar device.
(5) Vessel entering. Lifelines and safety harness shall be worn by anyone entering closed vessels, tanks, chip bins, and similar equipment, and a person shall be stationed outside in a position to handle the line and to summon assistance in case of emergency. The air in the vessels shall be tested for oxygen deficiency and the presence of both toxic and explosive gases and vapors, before entry into closed vessels, tanks, etc., is permitted. Self-contained air- or oxygen-supply masks shall be readily available in case of emergency. Work shall not be done on equipment under conditions where an injury would result if a valve were unexpectedly opened or closed unless the valve has been locked in a safe position.
# 1910.261(e)
(2) Slasher tables. Saws shall be stopped and power switches shall be locked out and tagged whenever it is necessary for any person to be on the slasher table.
(10) Stops. All control devices shall be locked out and tagged when knives are being changed.
# 1910.261(e)(12)
(iii) Whenever it becomes necessary for a workman to go within a drum, the driving mechanism shall be locked and tagged, at the main disconnect switch, in accordance with paragraph (b)(4) of this section.
# 1910.261(e)(13)
Intermittent barking drums. In addition to motor switch, clutch, belt shifter, or other power disconnecting device, intermittent barking drums shall be equipped with a device which may be locked to prevent the drum from moving while it is being emptied or filled.
# 1910.261(f)(6)
(i) When cleaning, inspection, or other work requires that persons enter rag cookers, all steam and water valves, or other control devices, shall be locked and tagged in the closed or "off" position. Blank flanging of pipelines is acceptable in place of closed and locked valves.
# TEXTILES
# 1910.262
Every textile machine shall be provided with individual mechanical or electrical means for stopping such machines. On machines driven by belts and shafting, a locking-type shifter or an equivalent positive device shall be used. On operations where injury to the operator might result if motors were to restart after failures, provision shall be made to prevent machines from automatically restarting upon restoration of power.
(2) Protection for loom fixer. Provisions shall be made so that every loom fixer can prevent the loom from being started while he is at work on the loom. This may be accomplished by means of a lock, the key to which is retained in the possession of the loom fixer, or by some other effective means to prevent starting the loom.
(1) J-box protection. Each valve controlling the flow of steam, injurious gases, or liquids into a J-box shall be equipped with a chain, lock, and key, so that any worker who enters the J-box can lock the valve and retain the key in his possession. Any other method which will prevent steam, injurious gases, or liquids from entering the J-box while the worker is in it will be acceptable.
(2) Kier valve protection.
Each valve controlling the flow of steam, injurious gases, or liquids into a kier shall be equipped with a chain, lock, and key, so that any worker who enters the kier can lock the valve and retain the key in his possession. Any other method which will prevent steam, injurious gases, or liquids from entering the kier while the worker is in it will be acceptable.
(iii) A main disconnect switch or circuit breaker shall be provided. This switch or circuit breaker shall be so located that it can be reached quickly and safely.
# 1910.265(e)(1)
(iv) Carriage control. A positive means shall be provided to prevent unintended movement of the carriage. This may involve a control locking device, a carriage tie-down, or both.
# 1910.268(1)
(2) Before the voltage is applied, cable conductors shall be isolated to the extent practicable. Employees shall be warned, by such techniques as briefing and tagging at all affected locations, to stay clear while the voltage is applied.
(iii) After all designated switches and disconnectors have been opened, rendered inoperable, and tagged, visual inspection or tests shall be conducted to insure that equipment or lines have been deenergized.
# 1910.268(m)(7)(i)
(iv) Protective grounds shall be applied on the disconnected lines or equipment to be worked on.
(v) Guards or barriers shall be erected as necessary to adjacent energized lines.
(vi) When more than one independent crew requires the same line or equipment to be deenergized, a prominent tag for each such independent crew shall be placed on the line or equipment by the designated employee in charge.
(vii) Upon completion of work on deenergized lines or equipment, each designated employee in charge shall determine that all employees in his crew are clear, that protective grounds installed by his crew have been removed, and he shall report to the designated authority that all tags protecting his crew may be removed.
(2) When a crew working on a line or equipment can clearly see that the means of disconnecting from electric energy are visibly open or visibly locked-out, the provisions of subdivisions (i) and (ii) of this subparagraph shall apply:
(i) Guards or barriers shall be erected as necessary to adjacent energized lines.
(ii) Upon completion of work on deenergized lines or equipment, each designated employee in charge shall determine that all employees in his crew are clear, that protective grounds installed by his crew have been removed, and he shall report to the designated authority that all tags protecting his crew may be removed.
(b) Deenergized equipment or lines
When it is necessary to deenergize equipment or lines for protection of employees, the requirements of paragraph 1926.950(d) shall be complied with.
# APPENDIX E
REFERENCES SUPPORTING GUIDELINES FOR CONTROLLING HAZARDOUS ENERGY DURING MAINTENANCE
The "Report of the Committee on Electrical Safety Requirements for Employee Workplaces" [54] makes specific provisions for performing maintenance functions without benefit of lockout or tagout procedures. Other supporting references are:
In a letter to OSHA from the Printing Industries of America, Inc. [81], the statement that "much of the equipment currently used in the printing process is designed to be cleaned, and to a lesser degree repaired, while energized" highlights the need to protect workers from energized equipment.
The only way to protect the worker under these conditions is to be sure the procedures used are safe. Additionally, the ANSI Safety Standard for the Lockout/Tagout of Energy Sources [5] states that "where energized work is required, acceptable procedures and equipment shall be employed to provide effective protection to personnel." The terms "acceptable" and "effective protection" indicate a definite need to determine that the procedures used are safe.
Seal all openings where piping and other items penetrate through the deck.

# Acknowledgments
VSP would like to acknowledge the following organizations and companies for their cooperative efforts in the revisions of the VSP 2011 Construction Guidelines. The cover art was designed by Carrie Green.

# Background and Purpose
The Centers for Disease Control and Prevention (CDC) established the Vessel Sanitation Program (VSP) in 1975 as a cooperative endeavor with the cruise vessel industry. VSP's goal is to assist the industry in developing and implementing comprehensive sanitation programs to protect the health of passengers and crew aboard cruise vessels. Every cruise vessel that has a foreign itinerary, carries 13 or more passengers, and calls on a U.S. port is subject to biannual operational inspections and, when necessary, reinspection by VSP. The vessel owner pays a fee, based on gross registered tonnage (GRT) of the vessel, for all operational inspections. The Vessel Sanitation Program 2011 Operations Manual, which is available on the VSP Web site (www.cdc.gov/nceh/vsp), covers details of these inspections.

Additionally, cruise vessel owners or shipyards that build or renovate cruise vessels may voluntarily request plan reviews, onsite shipyard construction inspections, and/or final construction inspections of new or renovated vessels before their first or next operational inspection. The vessel owner or shipyard pays a fee, based on GRT of the vessel, for onsite and final construction inspections. VSP does not charge a fee for plan reviews or consultations. Section 3.0 covers details pertaining to plan reviews, consultations, or construction inspections.

When a plan review or construction inspection is requested, VSP reviews current construction billing invoices of the shipyard or owner requesting the inspection. If this review identifies construction invoices unpaid for more than 90 days, no inspection will be scheduled. An inspection can be scheduled after the outstanding invoices are paid in full.

The VSP 2011 Construction Guidelines provide a framework of consistent construction and design guidelines that protect passenger and crew health. CDC is committed to promoting high construction standards to protect the public's health. Compliance with these guidelines will help to ensure a healthy environment on cruise vessels. CDC reviewed references from many sources to develop these guidelines. These references are indicated in section 38.2.

The VSP 2011 Construction Guidelines cover components of the vessel's facilities related to public health, including FOOD STORAGE, PREPARATION, and SERVICE, and water bunkering, storage, DISINFECTION, and distribution. Vessel owners and operators may select the design and equipment that best meets their needs. However, the design and equipment must also meet the sanitary design criteria of the American National Standards Institute (ANSI) or equivalent organization as well as VSP's routine operational inspection requirements.

These guidelines are not meant to limit the introduction of new designs, materials, or technology for shipbuilding. A shipbuilder, owner, manufacturer, or other interested party may ask VSP to periodically review or revise these guidelines in relation to new information or technology. VSP reviews such requests in accordance with the criteria described in section 2.0.

New cruise vessels must comply with all international code requirements (e.g., International Maritime Organization Conventions).
Those include requirements of the following:
- Safety of Life-at-Sea Convention.
- International Convention for the Prevention of Pollution from Ships.
- Tonnage and Load Line Convention.
- International Electrical Code.
- International Plumbing Code.
- International Standards Organization.

This document does not cross-reference related and sometimes overlapping standards that new cruise vessels must meet.

The VSP 2011 Construction Guidelines went into effect on September 15, 2011. They apply to vessels that LAY KEEL or perform any major renovation or equipment replacement (e.g., any changes to the structural elements of the vessel covered by these guidelines) after this date. The guidelines do not apply to minor renovations such as the installation or removal of single pieces of equipment (refrigerator units, warewash machines, bain-marie units, etc.) or single pipe runs. These guidelines apply to all areas of the vessel affected by a renovation.

VSP will inspect the entire vessel in accordance with the VSP 2011 Operations Manual during routine vessel sanitation inspections and reinspections.

# Revisions and Changes
VSP periodically reviews and revises these recommendations in coordination with industry representatives and other interested parties to stay abreast of industry innovations. A shipbuilder, owner, manufacturer, or other interested party may ask VSP to review a construction guideline on the basis of new technologies, concepts, or methods.

Recommendations for changes or additions to these guidelines must be submitted in writing to the VSP Chief (see section 39.2.1 for contact information). The recommendation should
- Identify the section to be revised.
- Describe the proposed change or addition.
- State the reason for recommending the change or addition.
- Include research or test results and any other pertinent information that support the change or addition.

VSP will coordinate a professional evaluation and consult with industry to determine whether to include the recommendation in the next revision. VSP gives special consideration to shipyards and owners of vessels that have had plan reviews conducted before the effective date of a revision of these guidelines. This helps limit any burden placed on the shipyards and owners to make excessive changes to previously agreed-upon plans.

VSP asks industry representatives and other knowledgeable parties to meet with VSP representatives periodically to review the guidelines and determine whether changes are necessary to keep up with innovations in the industry.

# Procedures for Requesting Plan Reviews, Consultations, and Construction-related Inspections
To coordinate or schedule a plan review or construction-related inspection, submit an official written request to the VSP Chief as early as possible in the planning, construction, or renovation process. Requests that require foreign travel must be received in writing at least 45 days before the intended visit. The request will be honored depending on VSP staff availability (see section 39.2.1 for contact information).

After the initial contact, VSP assigns primary and secondary officers to coordinate with the vessel owner and shipyard. Normally two officers will be assigned. These officers are the points of contact for the vessel from the time the plan review and subsequent consultations take place through the final construction inspection. Vessel representatives should provide points of contact to represent the owners, shipyard, and key subcontractors.
All parties will use these points of contact during consultations between any of the parties and VSP to ensure awareness of all consultative activities after the plan review is conducted.

# Plan Reviews and Consultations
VSP normally conducts plan reviews for new construction a minimum of 18 months before the vessel is scheduled for delivery. The time required for major renovations varies. To allow time for any necessary changes, VSP coordinates plan reviews for such projects well before the work begins.

Plan reviews normally take 2 working days. They are conducted in Atlanta, Georgia; Fort Lauderdale, Florida; or other agreed-upon sites. Normally, two VSP officers will be assigned to the project. Representatives from the shipyard, vessel owner, and subcontractor(s) who will be doing most of the work should attend the review. They should bring all pertinent materials for areas covered in these guidelines, including (but not limited to) the following:
- Complete plans or drawings (this includes new vessels from a class built under a previous version of the VSP Construction Guidelines).
- Any available menus.
- Equipment specifications.
- General arrangement plans.
- Decorative materials for FOOD AREAS and bars.
- All FOOD-related STORAGE, PREPARATION, and SERVICE AREA plans.
- Level and type of FOOD SERVICE (e.g., concept menus, staffing plans, etc.).
- POTABLE and nonpotable water system plans with details on water inlets (e.g., sea chests, overboard discharge points, and BACKFLOW PREVENTION DEVICES).
- Ventilation system plans.
- Plans for all RECREATIONAL WATER FACILITIES.
- Size profiles for operational areas.
- Owner-supplied and PORTABLE equipment specifications, including cleaning procedures.
- Cabin attendant work zones.
- Operational schematics for misting systems and decorative fountains.

VSP will prepare a plan review report summarizing recommendations made during the plan review and will submit the report to the shipyard and owner representatives. Following the plan review, the shipyard will provide the following:
- Any redrawn plans.
- Copies of any major change orders made after the plan review in areas covered by these guidelines.

While the vessel is being built, shipyard representatives, the ship owner, or other vessel representatives may direct questions or requests for consultative services to the VSP project officers. Direct these questions or requests in writing to the officer(s) assigned to the project. Include fax number(s) and e-mail address(es) for appropriate contacts. The VSP officer(s) will coordinate the request with the owner and shipyard points of contact designated during the plan review.

# Onsite Construction Inspections
VSP conducts most onsite or shipyard construction inspections in shipyards outside the United States. A formal written request must be submitted to the VSP Chief at least 45 days before the inspection date so that VSP can process the required foreign travel orders for VSP officers (see section 3.0). A sample request is shown in section 39.1. A completed vessel profile sheet must also be submitted with the request for the onsite inspection (see section 40.0).

VSP encourages shipyards to contact the VSP Chief and coordinate onsite construction inspections well before the 45-day minimum to better plan the actual inspection dates. If a shipyard requests an onsite construction inspection, VSP will advise the vessel owner of the inspection dates so that the owner's representatives are present.
An onsite construction inspection normally requires the expertise of one to three officers, depending on the size of the vessel and whether it is the first of a hull design class or a subsequent hull in a series of the same class of vessels. The inspection, including travel, generally takes 5 working days. The onsite inspection should be conducted approximately 4 to 5 weeks before delivery of the vessel when 90% of the areas of the vessel to be inspected are completed. VSP will provide a written report to the party that requested the inspection. After the inspection and before the ship's arrival in the United States, the shipyard will submit to VSP a statement of corrective action outlining how it will address and correct each item identified in the inspection report. # Final Construction Inspections # Purpose and Scheduling At the request of a vessel owner or shipyard, VSP may conduct a final construction inspection. Final construction inspections are conducted only after construction is 100% complete and the ship is fully operational. These inspections are conducted to evaluate the findings of the previous yard inspection, assess all areas that were incomplete in the previous yard inspection, and evaluate performance tests on systems that could not be tested in the previous yard inspection. Such systems include the following: - Ventilation for cooking, holding, and warewashing areas. - Warewash machines. - Artificial light levels. - Temperatures in cold or hot holding equipment. - HALOGEN and other chemistry measures for POTABLE WATER or RECREATIONAL WATER systems. To schedule the inspection, the vessel owner or shipyard submits a formal written request to the VSP Chief as soon as possible after the vessel is completed, or a minimum of 10 days before its arrival in the United States. At the request of a vessel owner or shipyard and provided the vessel is not entering the U.S. market immediately, VSP may conduct final construction inspections outside the United States (see section 3.2 for foreign inspection procedures). As soon as possible after the final construction inspection, the vessel owner or shipyard will submit a statement of corrective action to VSP. The statement outlines how the shipyard will address each item cited in the inspection report and includes the projected date of completion. # Unannounced Operational Inspection VSP generally schedules vessels that undergo final construction inspection in the United States for an unannounced operational inspection within 4 weeks of the vessel's final construction inspection. VSP conducts operational inspections in accordance with the VSP 2011 Operations Manual. If a final construction inspection is not requested, VSP generally will conduct an unannounced operational inspection within 4 weeks after the vessel's arrival in the United States. VSP conducts operational inspections in accordance with the VSP 2011 Operations Manual. # Equipment Standards, Testing, and Certification Although these guidelines establish certain standards for equipment and materials installed on cruise vessels, VSP does not test, certify, or otherwise endorse or approve any equipment or materials used by the cruise industry. Instead, VSP recognizes certification from independent testing laboratories such as NSF International, Underwriter's Laboratories (UL), the American National Standards Institute (ANSI), and other recognized independent international testing institutions. 
In most cases, independent testing laboratories test equipment and materials to certain minimum standards that generally meet the recommended standards established by these guidelines. Equipment built to questionable standards will be reviewed by a committee consisting of VSP, cruise ship industry, and independent testing organization participants. The committee will determine whether the equipment meets the recommended standards established in these guidelines. Copies of test or certification standards are available from the independent testing laboratories. Equipment manufacturers and suppliers should not contact VSP to request approval of their products.

# General Definitions and Acronyms
# Scope
These VSP 2011 Construction Guidelines provide definitions to clarify commonly used terminology in this manual. The definition section is organized alphabetically. Terms defined in section 5.2 are identified in the text of these guidelines by SMALL CAPITAL LETTERS, or SMALL CAPS. For example, section 6.2.5 states "Provide READILY REMOVABLE DRIP TRAYS for condiment dispensing equipment." READILY REMOVABLE and DRIP TRAYS are in SMALL CAPS and are defined in section 5.2.

# Definitions
Accessible: Exposed for cleaning and inspection with the use of simple tools such as a screwdriver, pliers, or wrench.

Activity pools: Include but are not limited to the following: wave pools, catch pools, water slides, INTERACTIVE RECREATIONAL WATER FACILITIES, lazy rivers, action rivers, vortex pools, and continuous surface pools.

Adequate: Sufficient in number, features, or capacity to accomplish the purpose for which something is intended and to such a degree that there is no unreasonable risk to health or safety.

# Air-break: A piping arrangement in which a drain from a fixture, appliance, or device discharges indirectly into another fixture, receptacle, or interceptor at a point below the flood-level rim (Figure 1).

# Air gap (AG): The unobstructed vertical distance through the free atmosphere between the lowest opening from any pipe or faucet supplying water to a tank, PLUMBING FIXTURE, or other device and the flood-level rim of the receptacle or receiving fixture. The air gap must be at least twice the inside diameter of the supply pipe or faucet and not less than 25 millimeters (1 inch) (Figure 2). Manufactured air gaps must be certified by a recognized plumbing or engineering organization.

Backpressure: An elevation of pressure in the downstream piping system (by pump, elevation of piping, or steam and/or air pressure) above the supply pressure at the point of consideration that would cause a reversal of normal direction of flow.

# Backsiphonage: The reversal of flow of used, contaminated, or polluted water from a PLUMBING FIXTURE or vessel or other source into a water supply pipe as a result of negative pressure in the pipe.

Black water: Wastewater from toilets, urinals, medical sinks, and other similar facilities.

Blast chiller: A unit specifically designed for rapid cooling of food products.

# Blockable drain/suction fitting: A drain or suction fitting in a RECREATIONAL WATER FACILITY that can be completely covered or blocked by a 457 millimeters x 584 millimeters (18 inches x 23 inches) body-blocking element as set forth in ASME A112.19.8M.

# Child activity center: A facility for child-related activities where children under the age of 6 are placed to be cared for by vessel staff.
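The AIR GAP sizing rule defined above reduces to a simple maximum: at least twice the supply pipe's inside diameter, and never less than 25 millimeters. A minimal sketch in Python, assuming metric inputs; the function name is illustrative and not taken from these guidelines:

```python
def minimum_air_gap_mm(supply_pipe_inside_diameter_mm: float) -> float:
    """Minimum vertical air gap per the definition above:
    at least twice the supply pipe's inside diameter,
    and never less than 25 mm (1 inch)."""
    return max(2 * supply_pipe_inside_diameter_mm, 25.0)

# A 20 mm supply line needs a 40 mm gap;
# a 10 mm line still needs the 25 mm floor.
assert minimum_air_gap_mm(20) == 40.0
assert minimum_air_gap_mm(10) == 25.0
```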
# Children's pool: A pool that has a depth of 1 meter (3 feet) or less and is intended for use by children who are toilet trained.

Child-sized toilet: Toilets whose toilet seat height is no more than 280 millimeters (11 inches) and whose toilet seat opening is no greater than 203 millimeters (8 inches).

Cleaning locker: A room or cabinet specifically designed or modified for storage of cleaning equipment such as mops, brooms, floor-scrubbing machines, and cleaning chemicals.

Continuous pressure (CP) backflow prevention device: A device generally consisting of two check valves and an intermediate atmospheric vent that has been specifically designed to be used under conditions of continuous pressure (greater than 12 hours out of a 24-hour period).

# Coved (also coving): A curved or concave surface, molding, or other design that eliminates the usual joint angles of 90° or less. A single piece of stainless steel bent to an angle not less than 90° with a minimum 9.5-millimeter radius is acceptable (Figures 3-5). Unique circumstances for coving can be reviewed during plan review.

Cross-connection: An actual or potential connection or structural arrangement between a POTABLE WATER system and any other source or system through which it is possible to introduce into any part of the POTABLE WATER system any used water, industrial fluid, gas, or substance other than the intended POTABLE WATER with which the system is supplied.

# Deck drain: The physical connection between decks, SCUPPERS, or DECK SINKS and the GRAY or BLACK WATER systems.

# Deck sink: A sink recessed into the deck and sized to contain waste liquids from tilting kettles and pans.

Disinfection: A process (physical or chemical) that destroys many or all pathogenic microorganisms, except bacterial and mycotic spores, on inanimate objects.

Distillate water lines: Pipes carrying water that is condensed from the evaporators and that may be directed to the POTABLE WATER system. This is the VSP definition for pipe striping purposes.

Food-contact surface: Surfaces (food zone, splash zone) of equipment and utensils with which food normally comes in contact and surfaces from which food may drain, drip, or splash back into a food or surfaces normally in contact with food (Figure 6).

# Food display areas: Any area where food is displayed for consumption by passengers and/or crew. Applies to displays served by vessel staff or self-service.

Food-handling areas: Any area where food is stored, processed, prepared, or served.

# Food preparation areas: Any area where food is processed, cooked, or prepared for service.

Food service areas: Any area where food is presented to passengers or crew members (excluding individual cabin service).

Food storage areas: Any area where food or food products are stored.

Food waste system: A system used to collect, transport, and process food waste from FOOD AREAS to a waste disposal system (e.g., pulper, vacuum system).

Gap: An open juncture that is more than 3 millimeters (1/8 inch).

# Gravity drain: A drain fitting used to drain the body of water in a RECREATIONAL WATER FACILITY by gravity and with no pump downstream of the fitting.

Recreational water facility (RWF): Includes but is not limited to the following:
- Hydrotherapy pools.
- INTERACTIVE RECREATIONAL WATER FACILITIES.
- Slides.
- SPA POOLS.
- SWIMMING POOLS.
- Therapeutic pools.
- WADING POOLS.
- WHIRLPOOLS.
# Reduced pressure principle backflow prevention assembly (RP assembly): An assembly containing two independently acting internally loaded check valves together with a hydraulically operating, mechanically independent pressure differential relief valve located between the check valves and at the same time below the first check valve. The unit must include properly located resilient seated test cocks and tightly closing resilient seated shutoff valves at each end of the assembly.

Removable: Capable of being detached from the main unit with the use of simple tools such as a screwdriver, pliers, or an open-end wrench.

# Safety vacuum release system (SVRS): A system that is capable of releasing a vacuum at a suction outlet caused by a high vacuum due to a blockage in the outlet flow. These systems shall be designed and certified in accordance with ASTM F2387-04 or ANSI/ASME A112.19.17-2002.

Sanitary seawater lines: Water lines with seawater intended for use in the POTABLE WATER production systems or in RECREATIONAL WATER FACILITIES.

Scupper: A conduit or collection basin that channels liquid runoff to a DECK DRAIN.

Sealant: Material used to fill SEAMS.

Seam: An open juncture that is greater than 0.8 millimeters (1/32 inch) but less than 3 millimeters (1/8 inch).

# Smooth:
- A FOOD-CONTACT SURFACE having a surface free of pits and inclusions with a cleanability equal to or exceeding that of (100-grit) number 3 stainless steel.
- A NONFOOD-CONTACT SURFACE of equipment having a surface equal to that of commercial grade hot-rolled steel free of visible scale.
- A deck, bulkhead, or deckhead that has an even or level surface with no roughness or projections that make it difficult to clean.

# Spa pool: A POTABLE WATER or saltwater-supplied pool with temperatures and turbulence comparable to a WHIRLPOOL SPA. General characteristics are
- Water temperature of 30°C-40°C (86°F-104°F).
- Bubbling, jetted, or sprayed water effects that physically break at or above the water surface.
- Depth of more than 1 meter (3 feet).
- Tub volume of more than 6 tons of water.

# Spill-resistant vacuum breaker (SVB): A specific modification to a PVB to minimize water spillage.

# Spray pad: The play and water contact area that is designed to have no standing water.

# Suction fitting: A fitting in a RECREATIONAL WATER FACILITY under direct suction through which water is drawn by a pump.

Swimming pool: A RECREATIONAL WATER FACILITY greater than 1 meter in depth. This does not include SPA POOLS that meet this depth.

Technical water: Water that has not been chlorinated or pH controlled and that originates from a bunkering or condensate collection process, or seawater processed through the evaporators or reverse osmosis plant, and is intended for storage and use in the technical water system.

Temperature-measuring devices (TMDs): Thermometers, thermocouples, thermistors, or other devices that indicate the temperature of food, air, or water and are numerically scaled in Celsius and/or Fahrenheit. TMDs must be designed to be easily readable.

# Turnover: The circulation, through the recirculation system, of a quantity of water equal to the total RWF tub volume. For facilities with zero depth, the turnover will be based on the total volume of the system, including compensation or make-up tanks and piping, and up to the entire volume for the system as designed.
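To illustrate the TURNOVER definition above: one turnover is complete when the recirculation system has moved a volume of water equal to the facility's total volume (for zero-depth designs, including make-up tanks and piping). A hedged sketch with hypothetical variable names:

```python
def turnover_time_hours(total_system_volume_m3: float,
                        recirculation_flow_m3_per_hour: float) -> float:
    """Hours for one turnover: total volume (tub plus, for zero-depth
    facilities, compensation tanks and piping) divided by the
    recirculation flow rate."""
    return total_system_volume_m3 / recirculation_flow_m3_per_hour

# Example: a 30 m^3 system recirculated at 60 m^3/h turns over in 0.5 h.
print(turnover_time_hours(30, 60))  # 0.5
```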
Unblockable drain/suction fitting: A drain or suction fitting in a RECREATIONAL WATER FACILITY that cannot be completely covered or blocked by a 457 millimeters x 584 millimeters (18 inches x 23 inches) body-blocking element and that is rated by the test procedures or by the appropriate calculation in accordance with ASME A112.19.8M.

Utility sink: Any sink located in a FOOD SERVICE AREA not intended for handwashing and/or warewashing.

Wading pool: A RECREATIONAL WATER FACILITY with a maximum depth of less than 1 meter and that is not designed for use by children.

Provision storage areas must be sized to prevent the storage of bulk foods in provisions passageways unless the passageways are specifically designed to meet provision room standards (section 15.0). Refrigeration and hot-food holding facilities, including temporary storage facilities, must be available for all FOOD PREPARATION and SERVICE areas and for foods being transported to remote areas.

# Food Flow
Arrange the flow of food through a vessel in a logical sequence that eliminates or minimizes cross-traffic or backtracking. Provide a clear separation of clean and soiled operations. When a common corridor is used for movement of both clean and soiled operations, the minimum distance from bulkhead to bulkhead must be considered. Within a galley, the standard separation between clean and soiled operations must be a minimum of 2 meters (6½ feet). For smaller galleys (e.g., specialty, bell box), the minimum distance will be assessed during the plan review. Additionally, common corridors for size and flow of galley operations will be reviewed during the plan review.

Provide an orderly flow of food from the suppliers at dockside through the FOOD STORAGE, PREPARATION, and finishing areas to the SERVICE areas and, finally, to the waste management area. The goals are to reduce the risk for cross-contamination, prepare and serve food rapidly in accordance with strict time and temperature-control requirements, and minimize handling.

Provide a size profile for each FOOD AREA, including provisions, preparation rooms, galleys, pantries, warewash, garbage processing area, and storage. The size profile shows the square meters of space designated for that area. Where possible, VSP will visit the profile vessel(s) to verify the capacity during operational inspections. The size profile must be an established standard for each cruise line based on the line's review of the area size for the same FOOD AREA in its existing vessels. As the ship size and passenger and crew totals change, there must be a proportional change in each FOOD AREA size based on the profile to ensure the service needs are met for each area. Size evaluations of FOOD AREAS will incorporate seating capacity and staffing, service, and equipment needs. During the plan review process, VSP evaluates the size of a particular room or area and the flow of food through the vessel to those rooms or areas. VSP will also use the results of operational inspections to review the size profiles submitted by individual cruise lines.

# Equipment Requirements
# Galleys
The equipment in sections 6.2.1.1 through 6.2.1.12 is required in galleys, depending on the level and type of service, and may be recommended for other areas.

# Blast Chillers
Incorporate BLAST CHILLERS into the design of passenger and crew galleys. More than one unit may be necessary depending on the size of the vessel and the distances between the BLAST CHILLERS and the storage and service areas.
The size and type of BLAST CHILLERS installed for each FOOD PREPARATION AREA are based on the concept/menu, operational requirements to satisfy that menu, and volume of food requiring cooling.

# Utility Sinks
Include food preparation UTILITY SINKS in all meat, fish, and vegetable preparation rooms; in cold pantries or garde mangers; and in any other areas where personnel wash or soak food. An automatic vegetable washing machine may be used in addition to FOOD PREPARATION UTILITY SINKS in vegetable preparation rooms.

# Food Storage
Include storage cabinets, shelves, or racks for food products and equipment in FOOD STORAGE, PREPARATION, and SERVICE AREAS, including bars and pantries.

# Tables, Carts, or Pallets
Locate fixed or PORTABLE tables, carts, or pallets in areas where food or ice is dispensed from cooking equipment, such as from soup kettles, steamers, braising pans, tilting pans, or ice storage bins.

# Storage for Large Utensils
Include a storage cabinet or rack for large utensils such as ladles, paddles, whisks, and spatulas, and provide for vertical storage of cutting boards.

# Sizing
Size DECK DRAINS, SCUPPERS, and sinks to eliminate spillage and overflow to adjacent deck surfaces.

# Deck Drainage
Provide sufficient deck drainage and design deck and SCUPPER drain lines in all FOOD SERVICE and warewash areas to prevent liquids from pooling on the decks. Do not use DECK SINKS as substitutes for DECK DRAINS.

# Cross-drain Connections
Provide cross-drain connections to prevent pooling and spillage from the SCUPPER when the vessel is listing.

# Coaming
If a nonremovable coaming is provided around a DECK DRAIN, ensure that the juncture with the deck is COVED. Integral COVING is not required.

# Ramps
# 6.6.1 Installation
Install ramps over thresholds and ensure that they are easily REMOVABLE or sealed in place. Slope ramps for easy trolley roll-in and roll-out. Ensure ramps are strong enough to maintain their shape. If ramps over SCUPPER covers are built as an integral part of the SCUPPER system, construct them of SMOOTH, durable, and EASILY CLEANABLE materials.

# Gray and Black Water Drain Lines
# Installation
Limit the installation of drain lines that carry BLACK WATER or other liquid wastes directly overhead or horizontally through spaces used for FOOD PREPARATION or STORAGE. This limitation includes areas for washing or storing utensils and equipment (e.g., in bars, in deck pantries, and over buffet counters). If installation of waste lines is unavoidable in these areas, sleeve weld or butt weld steel piping, and heat fuse or chemically weld plastic piping. For SCUPPER lines, factory-assembled transition fittings for steel to plastic pipes are allowed when manufactured per ASTM F1973 or an equivalent standard. Do not use push-fit or press-fit piping over these areas.

# General Hygiene Facilities Requirements for Food Areas
# Handwashing Stations
# Potable Water
Provide hot and cold POTABLE WATER to all handwashing sinks. Equip handwashing sinks to provide water at a temperature between 38°C (100°F) and 49°C (120°F) through a mixing valve or combination faucet.

# Construction
Construct handwashing sinks of stainless steel in FOOD AREAS. Handwashing sinks in FOOD SERVICE AREAS and bars may be constructed of a similar, SMOOTH, durable material.
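The mixed-water temperature requirement above (38°C/100°F to 49°C/120°F at handwashing sinks) is a simple range check. A minimal sketch in Python; the helper name is an assumption for illustration:

```python
def handwash_temperature_ok(temp_celsius: float) -> bool:
    """True if mixed handwashing water falls in the 38-49 degree C
    (100-120 degree F) range required at handwashing sinks."""
    return 38.0 <= temp_celsius <= 49.0

assert handwash_temperature_ok(43.0)
assert not handwash_temperature_ok(55.0)  # too hot; outside the range
```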
# Supplies
Provide handwashing stations that include a soap dispenser, paper towel dispenser, corrosion-resistant waste receptacle, and, where necessary, splash panels to protect
- adjoining equipment,
- clean utensils,
- FOOD STORAGE, or
- FOOD PREPARATION surfaces.

If attached to the bulkhead, permanently seal soap dispensers, paper towel dispensers, and waste towel receptacles or make them REMOVABLE for cleaning. Air hand dryers are not permitted.

# Dispenser Locations
Install soap dispensers and paper towel dispensers so that they are not over adjoining equipment, clean utensil storage, FOOD STORAGE, FOOD PREPARATION surfaces, bar counters, or water fountains. For a multiple-station sink, ensure that there is a soap dispenser within 380 millimeters (15 inches) of each faucet and a paper towel dispenser within 760 millimeters (30 inches) of each faucet.

# Dispenser Installation
Install paper towel dispensers a minimum of 450 millimeters (18 inches) above the deck (as measured from the lower edge of the dispenser).

# Installation Specifications
Install handwash sinks a minimum of 750 millimeters (30 inches) above the deck, as measured at the top edge of the basin, and so that employees do not have to reach excessively to wash their hands. Install counter-mounted handwash sinks a minimum of 600 millimeters (24 inches) above the deck, as measured at the counter level. The minimum size of the handwash sink basin must be 300 millimeters (12 inches) in length and 300 millimeters (12 inches) in width. The diameter of round basins must be at least 300 millimeters (12 inches). Additionally, the minimum distance from the bottom of the water tap to the bottom of the basin must be 200 millimeters (8 inches).

# Locations
Locate handwashing stations throughout FOOD-HANDLING, PREPARATION, and warewash areas so that no employee must walk more than 8 meters (26 feet) to reach a station or pass through a normally closed door that requires touching a handle to open.

# Food-dispensing Waiter Stations
Provide a handwashing station at food-dispensing waiter stations (e.g., soups, ice, etc.) where the staff do not routinely return to an area with a handwashing station.

# Food-handling Areas
Provide a handwashing station in provision areas where bulk raw foods are handled by provisioning staff.

# Crew Buffets
Provide at least one handwashing station for every 100 seats (e.g., 1-100 seats = one handwashing station, 101-200 seats = two handwashing stations, etc.). Locate stations near the entrance of all officer/staff/crew mess areas where FOOD SERVICE lines are self-service.

# Soiled Dish Drop-off
Install handwashing stations at the soiled dish drop-off area(s) in the main galley, specialty galleys, and pantries for employees bringing soiled dishware from the dining rooms or other FOOD SERVICE AREAS to prevent long waiting lines at handwashing stations. Provide one sink or one faucet on a multiple-station sink for every 10 wait staff who handle clean items and are assigned to a FOOD SERVICE AREA during maximum capacity. During the plan review, VSP will evaluate work assignments for wait staff to determine the appropriate number of handwashing stations. (These ratios are illustrated in the sketch after this section.)

# Faucet Handles
Install easy-to-operate sanitary faucet handles (e.g., large elephant-ear handles, foot pedals, knee pedals, or electronic sensors) on handwashing sinks in FOOD AREAS. If a faucet is self-closing, slow-closing, or metering, provide a water flow of at least 15 seconds without the need to reactivate the faucet.
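The crew buffet and soiled dish drop-off ratios above are ceiling divisions: one handwashing station per 100 seats (or fraction thereof) and one sink or faucet per 10 wait staff handling clean items. A minimal sketch, assuming plain integer inputs; the function names are illustrative:

```python
import math

def crew_mess_handwash_stations(seats: int) -> int:
    """One station per 100 seats or fraction thereof
    (1-100 seats -> 1, 101-200 seats -> 2, ...)."""
    return math.ceil(seats / 100)

def soiled_drop_off_faucets(wait_staff: int) -> int:
    """One sink or faucet per 10 wait staff who handle clean items
    during maximum capacity."""
    return math.ceil(wait_staff / 10)

assert crew_mess_handwash_stations(101) == 2
assert soiled_drop_off_faucets(25) == 3
```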
# Signs
Install permanent signs in English and other appropriate languages stating "wash hands often," "wash hands frequently," or similar wording.

# Bucket Filling Station
# Location
Provide at least one bucket filling station in each area of the galleys (e.g., cold galley, hot galley, bakery, etc.), FOOD STORAGE, and FOOD PREPARATION AREAS.

# Mixing Valve
Supply hot and cold POTABLE WATER through a mixing valve to a faucet with the appropriate BACKFLOW protection at each bucket filling station.

# Deck Drainage
Provide appropriate deck drainage (e.g., SCUPPER or sloping deck to DECK DRAIN) under all bucket filling stations to eliminate any pooling of water on the decks below the bucket filling station.

# Crew Public Toilet Rooms for Food Service Employees
# Location and Number
Install at least one employee toilet room in close proximity to the work area of all FOOD PREPARATION AREAS (beverage-only service bars are excluded). Provide one toilet per 25 employees and provide separate facilities for males and females if more than 25 employees are assigned to a FOOD PREPARATION AREA, excluding wait staff. This refers to the shift with the maximum number of food employees, excluding wait staff. Urinals may be installed, but do not count toward the toilet/employee ratio.

# Main Galleys and Crew Galleys
For main galleys and crew galleys, locate toilet rooms inside the FOOD PREPARATION AREA or in a passageway immediately outside the area. If a main galley has multiple levels and there is stairwell access between the galleys, toilet rooms may be located near the stairwell within one deck above or below.

# Other Food Service Outlets
For other FOOD SERVICE outlets (lido galley, specialty galley, etc.), do not locate toilet rooms more than two decks above or below within the distance of a fire zone; if on the same deck, locate them no more than one fire zone away (within the same fire zone or an adjacent fire zone). If more than one FOOD SERVICE outlet is located on the same deck, the toilet room may be located on the same deck between the outlets and within two fire zones of each outlet.

# Provisions
For preparation rooms located in provisions areas, use the distance requirement described in 7.3.1.2 to locate toilet rooms.

# Ventilation and Handwashing
Install exhaust ventilation and handwashing facilities in each toilet room. Air hand dryers are not permitted in these toilet rooms. Install a permanent sign in English, and other languages where appropriate, stating the exact wording: "WASH HANDS AFTER USING THE TOILET." Locate this sign on the bulkhead adjacent to the main toilet room door or on the main door inside the toilet room.

# Hands-free Exit
Ensure hands-free exit for toilet rooms, as described in section 36.1.1. Ensure handwashing facilities have sanitary faucet handles as in section 7.1.8.

# Doors
Install tight-fitting, self-closing doors.

# Decks
Construct decks of hard, durable materials and provide COVING at the bulkhead-deck juncture.

# Deckheads and Bulkheads
Install EASILY CLEANABLE deckheads and bulkheads.

# Equipment Placement and Mounting
# Seal
Seal counter-mounted equipment that is not PORTABLE to the bulkhead, tabletop, countertop, or adjacent equipment. If the equipment is not sealed, provide sufficient, unobstructed space for cleaning around, behind, and between fixed equipment. The space provided depends on the distance from a position directly in front of or from either side of the equipment to the farthest point requiring cleaning, as described in sections 8.1.1 through 8.1.5.
# Cleaning Distance Less Than 600 Millimeters (24 Inches)
A distance to be cleaned of less than 600 millimeters (24 inches) requires an unobstructed space of 150 millimeters (6 inches).

# Cleaning Distance Between 600 Millimeters (24 Inches) and 1,200 Millimeters (48 Inches)
A distance to be cleaned between 600 millimeters (24 inches) and 1,200 millimeters (48 inches) requires an unobstructed space of 200 millimeters (8 inches).

# Cleaning Distance Between 1,200 Millimeters (48 Inches) and 1,800 Millimeters (72 Inches)
A distance to be cleaned between 1,200 millimeters (48 inches) and 1,800 millimeters (72 inches) requires an unobstructed space of 300 millimeters (12 inches).

# Cleaning Distance Greater Than 1,800 Millimeters (72 Inches)
A distance to be cleaned greater than 1,800 millimeters (72 inches) requires an unobstructed space of 460 millimeters (18 inches). (These tiers are illustrated in the sketch at the end of this section.)

# Cleaning Distance
If the unobstructed cleaning space includes a corner, treat the cleaning distance separately in two sections. Treat the farther space behind the equipment separately according to sections 8.1.1 through 8.1.4. For the closer space beside the equipment, calculate the closer and farther cleaning distances together and provide cleaning space according to sections 8.1.1 through 8.1.4. The closer space must always be a minimum of 300 millimeters (12 inches). See Figure 8d.

# Seal or Elevate
Seal equipment that is not PORTABLE to the deck or elevate it on legs that provide at least a 150-millimeter (6-inch) clearance between the deck and the equipment. If no part of the equipment is more than 150 millimeters (6 inches) from the point of cleaning access, the clearance space may be only 100 millimeters (4 inches). This includes vending and dispensing machines in FOOD AREAS, including mess rooms. Exceptions to the equipment requirements may be granted if there are no barriers to cleaning (e.g., equipment such as waste handling systems and warewashing machines with pipelines, motors, and cables) where a 150-millimeter (6-inch) clearance from the deck may not be practical.

# Deck Mounting
Continuous weld all equipment that is not PORTABLE to stainless steel pads or plates on the deck. Ensure the welds have SMOOTH edges, rounded corners, and no GAPS.

# Adhesives
Attach deck-mounted equipment as an integral part of the deck surface with glue, epoxy, or other durable, APPROVED adhesive product. Ensure that the attached surfaces are SMOOTH and EASILY CLEANABLE.

# Deckhead Clearance
Provide a minimum of at least 150 millimeters (6 inches) between equipment and deckheads. If this clearance cannot be achieved, extend the equipment to the deckhead panels and seal appropriately.

# Foundation or Coaming
Provide a sealed-type foundation or coaming for equipment not mounted on legs. Do not allow equipment to overhang the foundation or coaming by more than 100 millimeters (4 inches). Completely seal any overhanging equipment along the bottom (Figure 9). Mount equipment that is on a foundation or coaming at least 100 millimeters (4 inches) above the finished deck. Use cement, hard SEALANT, or continuous weld to seal equipment to the foundation or coaming.

# Counter-Mounted Equipment
Seal counter-mounted equipment, unless PORTABLE, to the countertop or mount on legs.

# Leg Length
The length of the legs is dependent on the horizontal distance of the …

Attach all FOOD-CONTACT SURFACES or connections from FOOD-CONTACT SURFACES to adjacent splash zones to ensure a seamless COVED corner.
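The clearance tiers in sections 8.1.1 through 8.1.4 above reduce to a lookup from the farthest cleaning distance to the required unobstructed space. A minimal Python sketch; the function name and the inclusive handling of the tier boundaries are assumptions, and the corner rule in section 8.1.5 would be layered on top:

```python
def required_unobstructed_space_mm(cleaning_distance_mm: float) -> float:
    """Unobstructed cleaning space required around fixed equipment,
    per the distance tiers in sections 8.1.1 through 8.1.4."""
    if cleaning_distance_mm < 600:
        return 150   # 6 inches
    if cleaning_distance_mm <= 1200:
        return 200   # 8 inches
    if cleaning_distance_mm <= 1800:
        return 300   # 12 inches
    return 460       # 18 inches

assert required_unobstructed_space_mm(500) == 150
assert required_unobstructed_space_mm(1500) == 300
```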
Reinforce all bulkheads, deckheads, or decks receiving such attachments. # Fasteners Use low profile, nonslotted, NONCORRODING, and easy-to-clean fasteners on FOOD-CONTACT SURFACES and in splash zones. The use of exposed slotted screws, Phillips head screws, or pop rivets in these areas is prohibited. # Nonfood-contact Surfaces # Seal Seal equipment SEAMS with an appropriate SEALANT (see SEAM definition). Avoid excessive use of SEALANT. Use stainless steel profile strips on surfaces exposed to extreme temperatures (e.g., freezers, cook tops, grills, and fryers) or for GAPS greater than 3 millimeters (1/8 inch). Do not use SEALANTS to close GAPS. # Construction Materials Use stainless steel or other durable, NONCORRODING, and EASILY CLEANABLE rigid or flexible material in the construction of drain lines. Do not use ribbed, braided, or woven materials in areas subject to splash or soiling unless coated with a SMOOTH, durable, and EASILY CLEANABLE material. # Size Size drain lines appropriately, with a minimum interior diameter of 25 millimeters (1 inch) for custom-built equipment. # Walk-in Refrigerators and Freezers Slope walk-in refrigerator and freezer evaporator drain lines and extend them through the bulkhead or deck. # Evaporator Drain Lines Direct walk-in refrigerator and freezer evaporator drain lines through an ACCESSIBLE AIR-BREAK to a deck SCUPPER or drain below the deck level or to a SCUPPER outside the unit. # Deck Drains and Scuppers Direct drain lines from DECK DRAINS and SCUPPERS in walk-in refrigerator and freezer units through an indirect connection to the wastewater system. # Horizontal Distance Install drain lines to minimize the horizontal distance from the source of the drainage to the discharge. # Vertical Distance Install horizontal drain lines at least 100 millimeters (4 inches) above the deck and slope to drain. # Food Equipment Drain Lines All drain lines (except condensate drain lines) from hood washing systems, cold-top tables, bains-marie, dipper wells, UTILITY SINKS, and warewashing sinks or machines must meet the following criteria: # Length Lines must be less than 1,000 millimeters (40 inches) in length and free of sharp angles or corners if designed to be cleaned in place by a brush. # Cleaning Lines must be READILY REMOVABLE for cleaning if they are longer than 1,000 millimeters (40 inches). # Extend Vertically Extend fixed equipment drain lines vertically to a SCUPPER or DECK DRAIN when possible. If not possible, keep the horizontal distance of the line to a minimum. # Air-break Handwashing sinks, mop sinks, and drinking fountains are not required to drain through an AIR-BREAK. 13.0 Electrical Connections, Pipelines, Service Lines, and Attached Equipment # Encase Encase electrical wiring from permanently installed equipment in durable and EASILY CLEANABLE material. Do not use ribbed or woven stainless steel electrical conduit where it is subject to splash or soiling unless it is encased in EASILY CLEANABLE plastic or similar EASILY CLEANABLE material. Do not use ribbed, braided, or woven conduit. # Install or Fasten For equipment that is not permanently mounted, install or fasten service lines in a manner that prevents the lines from contacting decks or countertops. # Mounted Equipment Tightly seal bulkhead or deckhead-mounted equipment (phones, speakers, electrical control panels, outlet boxes, etc.) with the bulkhead or deckhead panels. Do not locate such equipment in areas exposed to food splash. 
# Seal Penetrations Tightly seal any areas where electrical lines, steam or water pipelines, etc., penetrate the panels or tiles of the deck, bulkhead, or deckhead, including inside technical spaces located above or below equipment or work surfaces. Seal any openings or voids around the electrical lines or the steam or water pipelines and the surrounding conduit or pipelines. # Enclose Pipelines Enclose steam and water pipelines to kettles and boilers in stainless steel cabinets or position the pipelines behind bulkhead panels. Minimize the number of exposed pipelines. Cover any exposed insulated pipelines with stainless steel or other durable, EASILY CLEANABLE material. # Hood Systems # Warewashing Install canopy exhaust hood or direct duct exhaust systems over warewashing equipment (except undercounter warewashing machines) and over threecompartment sinks in pot wash areas where hot water is used for sanitizing. # Direct Duct Exhaust Directly connect warewashing machines that have a direct duct exhaust to the hood exhaust trunk. # Overhang Provide canopy exhaust hoods over warewashing equipment or threecompartment sinks to have a minimum 150-millimeter (6-inch) overhang from the edge of equipment to capture excess steam and heat and prevent condensate from collecting on surfaces. # Cleanout Ports Install cleanout ports in the direct exhaust ducts of the ventilation systems between the top of the warewashing machine and the hood system or deckhead. # Drip Trays Provide ACCESSIBLE and REMOVABLE condensate DRIP TRAYS in warewashing machine ventilation ducts. # Cooking and Hot Holding Equipment # Cooking Equipment Install hood or canopy systems above cooking equipment in accordance with safety of life-at-sea (SOLAS) requirements to ensure that they remove excess steam and grease-laden vapors and prevent condensate from collecting on surfaces. # Hot Holding Equipment Install a hood or canopy system or dedicated local exhaust ventilation directly above bains marie, steam tables, or other open hot holding equipment to control excess heat and steam and prevent condensate from collecting on surfaces. # Countertop and Portable Equipment Install a hood or canopy system or dedicated local extraction when SOLAS requirements do not specify an exhaust system for countertop cooking appliances or where PORTABLE appliances are used. The exhaust system must remove excess steam and grease-laden vapors and prevent collection of the cooking byproducts or condensate on surfaces. For buffet service areas where FOOD PREPARATION occurs, galley standards for construction must be followed (see section 16.0). FOOD PREPARATION AREAS include areas where utensils are used to mix and prepare foods (e.g., salad, sandwich, sushi, pizza, meat carving) and the food is prepared and cooked (e.g., grills, ovens, fryers, griddles, skillets, waffle makers). If such facilities are installed along a buffet counter, they will be evaluated in the plan review. For example, a station specific for salads, sushi, deli, or a pizzeria is a preparation area, as are locations where foods are prepared completely (e.g., waffle batter poured into griddle, cooked, plated, and served). However, a one-person carving station is not a preparation area. Omelet stations would have to be evaluated. # Decks # Buffet Lines Install hard, durable, nonabsorbent, nonskid decks at all buffet lines that are at least 1,000 millimeters (40 inches) in width measured from the edge of the service counter or, if present, from the outside edge of the tray rail. 
Carpet, vinyl, and linoleum deck materials are not acceptable. # Waiter Stations Install hard, durable, nonabsorbent decks (e.g., tile, sealed granite, or marble) that extend at least 600 millimeters (24 inches) from the edge of the working side(s) of the waiter stations. The sides of stations that have a splash shield of 150 millimeters (6 inches) or higher are not considered working sides. Carpet, vinyl, and linoleum deck materials are not acceptable. # Technical Spaces Construct decks in technical spaces of hard, durable, nonabsorbent materials (e.g., tiles, epoxy resin, or stainless steel) and provide COVING. Do not use painted steel or concrete decking. # Worker Side of Buffets and Bars Install durable COVING as an integral part of the deck/bulkhead and deck/cabinet foundation juncture on the worker-only side of the deck/buffet and deck/bar. # Consumer Side of Buffets and Waiter Stations Install durable COVING at the consumer side of buffet service counters, counters shared with worker activities (islands), and waiter stations. Install durable COVING at deck/bulkhead junctures located within one meter of the waiter stations. Consumer sides of bars are excluded. See Figures 10a and 10b. # Areas for Buffet Service and Food Preparation For buffet service areas where FOOD PREPARATION occurs, galley standards for construction must be followed (see section 16.0). FOOD PREPARATION AREAS include areas where utensils are used to mix and prepare foods (e.g., salad, sandwich, sushi, pizza, meat carving) and areas where the food is prepared and cooked (e.g., grills, ovens, fryers, griddles, skillets, waffle makers). If such facilities are installed along a buffet counter, they will be evaluated in the plan review. For example, a station specific for salads, sushi, deli, or a pizzeria is a preparation area, as are locations where foods are prepared completely (e.g., waffle batter poured into a griddle, cooked, plated, and served). However, a one-person carving station is not a preparation area. Omelet stations would have to be evaluated. # Food Display Protection # Effective Means Provide effective means to protect food (e.g., sneeze guards, display cases, raised shields) in all areas where food is on display. This includes locations where food is being displayed during preparation (e.g., carving stations, induction cooking stations, sushi, deli). This excludes teppanyaki-style cooking. # Solid Vertical Shield Without Tray Rail For a solid vertical shield without a tray rail, the minimum height from the deck to the top edge of the shield must be 140 centimeters. # Solid Vertical Shield With Tray Rail For a solid vertical shield with a tray rail, for every 3 centimeters that the tray rail extends from the buffet, the height of the shield may be lowered by 1 centimeter, but the minimum height from the deck to the top edge of the shield must be 120 centimeters. # Consumer Seating at Counter For designs where consumers are seated at the counter and workers are preparing food on the other side of the sneeze guard, consideration must be given to the height of the preparation counter, consumer counter, and consumer seat. VSP will evaluate these designs and establish the shield height during the plan review. # Sneeze Guard Criteria # Portable or Built-in Sneeze guards may be temporary (PORTABLE) or built-in and integral parts of display tables, bains marie, or cold-top tables. # Panel Material Sneeze guard panels must be durable plastic or glass that is SMOOTH and EASILY CLEANABLE. Design panels to be cleaned in place or, if REMOVABLE for cleaning, use sections that are manageable in weight and length. Sneeze guard panels must be transparent and designed to minimize obstruction of the customer's view of the food. To protect against chipping, provide edge guards for glass panels. Sneeze guards for preparation-only protection do not need to be transparent. # Spaces or Openings If there are spaces or openings greater than 25 millimeters (1 inch) along the length of the sneeze guard (such as between two pieces of the sneeze guard), ensure that there are no food wells, bains marie, etc., under the spaces or openings.
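The solid vertical shield heights above, and the protected-plane calculation in the Sneeze Guard Design section that follows, are simple arithmetic rules. The sketch below is illustrative only and is not part of the guidelines; the function and parameter names are hypothetical:

```python
# Illustrative sketch only -- not part of the guidelines.
# min_shield_height_cm applies the solid vertical shield rules above;
# check_protected_planes applies the Sneeze Guard Design rules that follow
# (X + Y >= 457 mm, vertical gap <= 356 mm, leading edge >= 178 mm).

def min_shield_height_cm(tray_rail_extension_cm: float = 0.0) -> float:
    """Minimum height, deck to top edge, for a solid vertical shield."""
    base = 140.0       # no tray rail: 140 cm minimum
    floor = 120.0      # absolute minimum with a tray rail: 120 cm
    reduction = tray_rail_extension_cm / 3.0  # 1 cm lower per 3 cm of rail
    return max(base - reduction, floor)

def check_protected_planes(vertical_gap_mm: float,
                           leading_edge_mm: float,
                           x_mm: float, y_mm: float) -> list[str]:
    findings = []
    if vertical_gap_mm > 356:   # counter top to bottom leading edge
        findings.append("Vertical gap exceeds 356 mm (14 inches).")
    if leading_edge_mm < 178:   # beyond front inside edge of food well
        findings.append("Leading edge extends less than 178 mm (7 inches).")
    if x_mm + y_mm < 457:       # X + Y >= 457 mm; either may be 0
        findings.append("Protected planes X + Y below 457 mm (18 inches).")
    return findings

print(min_shield_height_cm(30))                    # 130.0
print(check_protected_planes(350, 180, 250, 210))  # [] -> compliant
```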
# Position Position sneeze guards so that the panels intercept a line between the average consumer's mouth and the displayed foods. Take into account factors such as the height of the FOOD DISPLAY counter, the presence or absence of a tray rail, and the distance between the edge of the display counter and the actual placement of the food (Figure 10). # Sneeze Guard Design If the buffet is built to the calculations in Figure 11: The maximum vertical distance between a counter top and the bottom leading edge of a sneeze guard must be 356 millimeters (14 inches). The bottom leading edge of the sneeze guard must extend a minimum horizontal distance of 178 millimeters (7 inches) beyond the front inside edge of a food well. The sum of a sneeze guard's protected horizontal plane (X) and its protected vertical plane (Y) must equal a minimum of 457 millimeters (18 inches) (Figure 10). Either X or Y may equal 0. Install side protection for sneeze guards if the distance between exposed food and where people are expected to stand is less than 1 meter (40 inches). See Figures 12-15 for additional examples of sneeze guards. # Tray Rail Surfaces Use tray rail surfaces that are sealed, COVED, or have an open design. These surfaces must also be EASILY CLEANABLE in accordance with guidelines for food splash zones. # Food Pan Length Consideration should be given to the length of the food pans in relation to the distance a consumer must reach to obtain food. # Soup Wells If soups, oatmeal, and similar foods are to be self-served, the equipment must be capable of being placed under a sneeze guard. # Beverage Delivery System # Backflow Prevention Device Install a BACKFLOW PREVENTION DEVICE that is APPROVED for use on carbonation systems (e.g., multiflow beverage dispensing systems). Install the device before the carbonator and downstream of brass or copper fittings in the POTABLE WATER supply line. A second device may be required if noncarbonated water is supplied to a multiflow hose dispensing gun. # Encase Supply Lines Encase supply lines to the dispensing guns in a single tube. If the tube penetrates through any bulkhead or countertop, seal the penetration with a grommet. # Clean-in-place System For bulk beverage delivery systems, incorporate fittings and connections for a clean-in-place system that can flush and sanitize the entire interior of the dispensing lines in accordance with the manufacturers' instructions. # Passenger Self-Service Buffet Handwashing Stations # Number Provide one handwashing station per 100-passenger seating or fraction thereof. Stations should be equally distributed between the major passenger entry points to the buffet area and must be separate from a toilet room. # Passenger Entries Provide handwashing stations at each minor passenger entry to the main buffet areas proportional to the passenger flow, with at least one per entry. These handwash stations can be counted towards the requirement of one per 100 passengers. # Self-service Stations Outside the Main Buffet Provide at least one handwashing station at the passenger entrance of each self-service station outside of the main buffet. Beverage stations are excluded. # Equipment and Supplies The handwashing station must include a handwash sink, soap dispenser, and single-use paper towel dispenser. Electric hand dryers can be installed in addition to paper towel dispensers. Waste receptacles must be provided in close proximity to the handwash sink and sized to accommodate the quantity of paper towel waste generated.
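The station count rule above ("one handwashing station per 100-passenger seating or fraction thereof") is a round-up calculation. A minimal illustrative sketch, with hypothetical names, not part of the guidelines:

```python
# Illustrative sketch only. One handwashing station per 100-passenger
# seating or fraction thereof -- i.e., always round up.

import math

def required_handwash_stations(passenger_seats: int) -> int:
    return math.ceil(passenger_seats / 100)

print(required_handwash_stations(250))  # 3 -> 250 seats need 3 stations
print(required_handwash_stations(300))  # 3 -> an exact multiple adds none
```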
The handwashing station may be decorative but must be nonabsorbent, durable, and EASILY CLEANABLE. # Automatic Handwashing System An automatic handwashing system in lieu of a handwash sink is acceptable. # Sign Each handwashing station must have a sign advising passengers to wash hands before eating. A pictogram can be used in lieu of words on the sign. # Location Stations can be installed just outside of the entry. Position the handwashing stations along the passenger flow to the buffets. # Lighting Provide a minimum of 110 lux lighting at the handwash stations. # Bar Counter Tops # Access Construct bar counter tops to provide access for workers from pantries or service areas without requiring them to stoop or crawl to reach the bar area. # Warewashing # Prewash Hoses Provide rinse hoses for prewashing (not required but recommended in bar and deck pantries). If a sink is to be used for prerinsing, provide a REMOVABLE strainer. # Splash Panel Install a splash panel if a clean utensil/glass storage rack or preparation counter is within an unobstructed 2 meters (6½ feet) of a prewash spray hose. This does not include the area behind the worker. # Food Waste Disposal Provide space for trash cans, garbage grinders, or FOOD WASTE SYSTEMS. Grinders are optional in pantries and bars. # Trough Provide a food waste trough that extends the full length of soiled landing tables with FOOD WASTE SYSTEMS. # Seal Seal the back edge of the soiled landing table to the bulkhead or provide a minimum clearance between the table and the bulkhead according to section 8.0. # Design Design soiled landing tables to drain waste liquids and prevent contamination of adjacent clean surfaces. # Drain and Slope Provide across-the-counter gutters with drains and slope the clean landing tables to the gutters at the exit from the warewashing machines. If the first gutter does not effectively remove pooled water, install additional gutter(s) and drain line(s). Minimize the length of drain lines and, when possible, direct them in a straight line to the deck SCUPPER. # Space for Cleaning Provide sufficient space for cleaning around and behind equipment (e.g., FOOD WASTE SYSTEMS and warewashing machines). Refer to section 8.0 for spacing requirements. # Enclose Wiring Enclose FOOD WASTE SYSTEM wiring in a durable and easy-to-clean stainless steel or nonmetallic watertight conduit. Install all warewashing machine components at least 150 millimeters (6 inches) above the deck, except as noted in section 8.4. # Splash Panels Construct REMOVABLE splash panels of stainless steel to protect the FOOD WASTE SYSTEM and technical areas. # Materials Construct grinder cones, FOOD WASTE SYSTEM tables, and dish-landing tables from stainless steel with continuous welding. Construct platforms for supporting warewashing equipment from stainless steel. # Size Size warewashing machines for their intended use and install them according to the manufacturer's recommendations. # Alarm Equip warewashing machines with an audible or visual alarm that indicates if the sanitizing temperature or the chemical sanitizer concentration drops below the levels stated on the machine data plate. # Data Plate Affix the data plate so that the information is easy to read by the operator. The data plate must include the following information as provided by the manufacturer of the warewash machine: # Water Temperatures Temperatures required for washing, rinsing (if applicable), and sanitizing.
# Water Pressure Pressure required for the fresh water sanitizing rinse unless the machine is designed to use only a pumped sanitizing rinse. # Conveyor Speed or Cycle Time Conveyor speed in meters or feet per minute or minimum transit time for belt conveyor machines; minimum transit time for rack conveyor machines; or cycle time for stationary rack machines. # Chemical Concentration Chemical concentration (if chemical sanitizers are used). # Manuals and Schematics Warewash machine operating manuals and schematics of the internal BACKFLOW PREVENTION DEVICES must be provided. # Pot and Utensil Washing Provide pot and utensil washing facilities as listed in section 6.2.2. # Three-compartment Sinks Correctly size three-compartment warewashing and potwashing sinks for their intended use. Use sinks that are large enough to submerge the largest piece of equipment used in the area that is served. Use sinks that have COVED, continuously welded, integral internal corners. # Prevent Excessive Contamination Install one of the following arrangements to prevent excessive contamination of rinse water with wash water splash: - Gutter and drain: An across-the-counter gutter with a drain that divides the compartments. The gutter should extend the entire distance from the front edge of the counter to the backsplash. - Splash shield: A splash shield at least 25 millimeters (1 inch) above the flood level rim of the sink between the compartments. The splash shield should extend the entire distance from the front edge of the counter to the backsplash. - Overflow drain: An overflow drain in the wash compartment 100 millimeters (4 inches) below the flood level. # Hot Water Sanitizing Sinks Equip hot water sanitizing sinks with an easy-to-read TEMPERATURE-MEASURING DEVICE; a utensil/equipment retrieval system (e.g., long-handled stainless steel hook or other retrieval system); and a jacketed or coiled steam supply with a temperature control valve or an electric heating system. # Shelving Provide sufficient shelving for storage of soiled and clean ware. Use open round tubular shelving or racks. Design overhead shelves to drain away from clean surfaces. Sufficient space must be determined by the initial sizing of the warewash area, based on the profile or reference size from an existing vessel of the same cruise line per section 6.1. # Ventilation For ventilation requirements, see section 14.0. # Lighting # Work Surface Provide a minimum of 220 lux (20 foot-candles) of light at the work surface level in all FOOD PREPARATION, FOOD SERVICE, and warewashing areas when all equipment is installed. Provide 220 lux (20 foot-candles) of lighting for equipment storage, garbage and food lifts, garbage rooms, and toilet rooms, measured at 760 millimeters (30 inches) above the deck. # Behind and Around Equipment Provide a minimum light level of 110 lux (10 foot-candles) behind and around equipment (e.g., ice machines, combination ovens, beverage dispensers, etc.), as measured at the counter surface or at a distance of 760 millimeters (30 inches) above the deck. # Countertops Provide a minimum light level of 220 lux (20 foot-candles) at countertops (e.g., beverage lines, etc.). # Deckhead-mounted Fixtures For effective illumination, place the deckhead-mounted light fixtures above the work surfaces and position them in an "L" pattern rather than a straight-line pattern. # Installation Install light fixtures tightly against the bulkhead and deckhead panels.
Completely seal electrical penetrations to permit easy cleaning around the fixtures. # Light Shields Use shatter-resistant and REMOVABLE light shields for light fixtures. Completely enclose the entire light bulb or fluorescent light tube(s). # Provision Rooms Provide lighting levels of at least 220 lux (20 foot-candles) in provision rooms, measured at 760 millimeters (30 inches) above the deck while the rooms are empty. During normal operations when foods are stored in the rooms, provide lighting levels of at least 110 lux (10 foot-candles), measured at a distance of 760 millimeters (30 inches) above the deck. # Bars and Waiter Stations In bars and over dining room waiters' stations designed for lowered lighting during normal operations, provide lighting that can be raised to 220 lux (20 foot-candles) during cleaning operations, as measured at 760 millimeters (30 inches) above the deck. Provide a minimum light level of 110 lux (10 foot-candles) at handwash stations at a bar, and ensure that this level can be maintained at all times. # Light Bulbs Use shielded, coated, or otherwise shatter-resistant light bulbs in areas where there is exposed food; clean equipment, utensils, and linens; or unwrapped single-service and single-use articles. This includes lights above waiter stations. # Heat Lamps Use shields that surround and extend beyond bulbs on infrared or other heat lamps to protect against breakage. Allow only the face of the bulb to be exposed. # Track or Recessed Lights Decorative track or recessed deckhead-mounted lights above bar countertops, buffets, and other similar areas may be mounted on or recessed within the deckhead panels without being shielded. However, install specially coated, shatter-resistant bulbs in the light fixtures. # Cleaning Materials, Filters, and Drinking Fountains # Facilities and Lockers for Cleaning Materials # Racks Provide bulkhead-mounted racks for brooms and mops or provide sufficient space and hanging brackets within CLEANING LOCKERS. Locate bulkhead-mounted racks outside of FOOD STORAGE, PREPARATION, or SERVICE areas. These racks may be located on the soiled side of warewash areas. # Stainless Steel Provide stainless steel vented lockers, with COVED junctures, for storing buckets, detergents, sanitizers, cloths, and other wet items. # Size and Location Size and locate the lockers according to the needs of the vessel and make access convenient. # Multiple-level Galleys If CLEANING LOCKERS are not provided in each of the preparation areas, provide a single CLEANING ROOM for each deck of multiple-level galleys. Construct rooms used for cleaning materials in accordance with section 16.0. # Mop Cleaning Provide facilities, separate from food facilities, equipped with a mop sink and an ADEQUATE DECK DRAIN or a pressure washing system for cleaning mops and buckets. The mop sink may be located on the soiled side of warewash areas. # Black and Gray Water Systems # Drain Lines Limit the installation of drain lines that carry BLACK WATER or other liquid waste directly overhead or horizontally through spaces used for FOOD AREAS. This includes areas for washing or storage of utensils and equipment, such as in bars and deck pantries and over buffet counters. Sleeve-weld or butt-weld steel pipe and heat-fuse or chemically weld plastic pipe. # Piping Do not use push-fit or press-fit piping over these areas.
For SCUPPER lines, factory-assembled transition fittings for steel-to-plastic pipes are allowed when manufactured per ASTM F1973 or an equivalent standard. # Drainage Systems Ensure that BLACK and GRAY WATER drainage systems from cabins, FOOD AREAS, and public spaces are designed and installed to prevent waste back-up and odor or gas emission into these areas. # Venting Vent BLACK WATER holding tanks to the outside of the vessel and ensure that vented gases do not enter the vessel through any air intakes. # Independent Construct BLACK WATER holding tank vents so that they are independent of all other tanks. # General Hygiene Construct handwashing stations in the following areas according to section 7.1. # Wastewater Areas Install at least one handwashing station in each main wastewater treatment, processing, and storage area. # Laundry Areas Install at least one handwashing station at soiled linen handling areas and at the main exits of the main laundry. Vessel owners will provide locations during the plan review. # Housekeeping Areas Install handwashing stations in housekeeping areas as described in section 35.1. Provide each handwashing station with a soap dispenser, paper towel dispenser, waste receptacle, and sign that states "wash hands often," "wash hands frequently," or similar wording in English and in other languages, where appropriate. # Potable Water System # Striping # Potable Water Lines Stripe or paint POTABLE WATER lines either in accordance with ISO 14726 (blue/green/blue) or blue only. # Storage and Production Capacity for Potable Water # Minimum Storage Capacity Provide a minimum of 2 days' storage capacity, assuming 120 liters (30 gallons) of water per day per person for the maximum capacity of crew and passengers on the vessel. # Production Capacity Provide POTABLE WATER production capacity of 120 liters (30 gallons) per day per person for the maximum capacity of crew and passengers on the vessel (see the illustrative calculation below). # Potable Water Storage Tanks # General Requirements # Independent of Vessel Shell Ensure that POTABLE WATER storage tanks are independent of the shell of the vessel. # No Common Wall Ensure that POTABLE WATER storage tanks do not share a common wall with other tanks containing nonpotable water or other liquids. # Cofferdam Provide a 450-millimeter (18-inch) cofferdam above and between POTABLE WATER TANKS and tanks that are not for storage of POTABLE WATER and between POTABLE WATER TANKS and the shell. Skin or double-bottom tanks are not allowed for POTABLE WATER storage. # Deck Top If the deck is the top of POTABLE WATER TANKS, these tanks must be identified during the plan review. The shipyard will provide the owners with a written declaration of the tanks involved and the drawings of the areas that include these tanks. # Tanks with Nonpotable Liquid Do not install tanks containing nonpotable liquid directly over POTABLE WATER TANKS. # Coatings Use APPROVED POTABLE WATER TANK coatings. Follow all of the manufacturer's recommendations for applying, drying, and curing the tank coatings. Provide the following for the tank coatings: - Written documentation of the approval from the certification organization (independent of the coating manufacturer). - Manufacturer's recommendations for applying, drying, and curing. - Written documentation that the manufacturer's recommendations have been followed for applying, drying, and curing. # Items That Penetrate Tank Coat all items that penetrate the tank (e.g., bolts, pipes, pipe flanges) with the same product used for the tank's interior.
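As referenced above, the storage and production capacity rules combine into a single per-person calculation (120 liters per person per day, with 2 days of storage). The sketch below is illustrative only and is not part of the guidelines; the passenger and crew figures are hypothetical:

```python
# Illustrative sketch only. Applies the capacity rules above:
# 120 L (30 gallons) per person per day, with 2 days of storage.

LITERS_PER_PERSON_PER_DAY = 120

def potable_water_requirements(max_passengers: int, max_crew: int):
    persons = max_passengers + max_crew
    daily_production_l = persons * LITERS_PER_PERSON_PER_DAY
    min_storage_l = 2 * daily_production_l  # 2 days of storage capacity
    return daily_production_l, min_storage_l

production, storage = potable_water_requirements(max_passengers=3000, max_crew=1200)
print(f"Production capacity: {production:,} L/day")  # 504,000 L/day
print(f"Minimum storage:     {storage:,} L")         # 1,008,000 L
```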
# Super-chlorination Design tanks to be super-chlorinated one tank at a time. # Lines for Nonpotable Liquids Ensure that lines for nonpotable liquids do not pass through POTABLE WATER TANKS. # Nonpotable Lines Above Potable Water Tanks Minimize the use of nonpotable lines above POTABLE WATER TANKS. If nonpotable water lines are installed, do not use mechanical couplings or push-fit or press-fit piping on lines above tanks. For SCUPPER lines, factory-assembled transition fittings for steel-to-plastic pipes are allowed when manufactured per ASTM F1973 or an equivalent standard. # Coaming If coaming is present along the edges or top of the tank, provide slots along the coaming to allow leaking liquids to run off and be detected. # Welded Pipes Treat welded pipes over the POTABLE WATER storage tanks to make them corrosion resistant. # Lines Inside Potable Water Tanks Treat all POTABLE WATER lines inside POTABLE WATER TANKS to make them jointless and NONCORRODING. # Label Tanks Label each POTABLE WATER TANK on its side, where clearly visible, with a number and the exact wording "POTABLE WATER" in letters a minimum of 13 millimeters (1/2 inch) high. # Sample Cock Install at least one sample cock located at least 450 millimeters (18 inches) above the deck plating on each tank. The sample cock must be easily ACCESSIBLE. Point sample cocks down and identify them with the appropriate tank number. # Storage Tank Access Hatch # Installation Install an access hatch for entry on the sides of POTABLE WATER TANKS. # Storage Tank Water Level # Automatic Provide an automatic method for determining the water level of POTABLE WATER TANKS. Visual sight glasses are acceptable. # Storage Tank Vents # Location Ensure that air-relief vents end at least 1,000 millimeters (40 inches) above the maximum load level of the vessel. Make the cross-sectional area of the vent equal to or greater than that of the filling line to the tank (see the illustrative check below). Position the end of the vent so that its opening faces down or is otherwise protected, and install a 16-mesh corrosion-resistant screen. # Single Pipe A single pipe may be used as a combination vent and overflow. # Vent Connections Do not connect the vent of a POTABLE WATER TANK to the vent of a tank that is not a POTABLE WATER TANK. # Storage Tank Drains # Design Design the tanks to drain completely. # Drain Opening Provide a drain opening that is at least 100 millimeters (4 inches) in diameter and preferably matches the diameter of the inlet pipe. # Suction Pump If drained by a suction pump, provide a sump and install the pump suction port in the bottom of the sump. Install separate pumps and piping not connected to the POTABLE WATER distribution system for draining tanks (Figure 16). # Potable Water Supply Supply POTABLE WATER to the following: - RECREATIONAL WATER systems. - Drinking fountains. - Emergency showers. - Eye wash stations. - FOOD AREAS. - Handwash sinks. - HVAC fan rooms. - Medical facilities. Utility sinks for engine/mechanical spaces are excluded. # Paint or Stripe Paint or stripe POTABLE WATER piping and fittings either blue only or in accordance with ISO 14726 at 5-meter (15-foot) intervals and on each side of partitions, decks, and bulkheads except where the decor would be marred by such markings. This includes POTABLE WATER supply lines in technical lockers. # Steam Generation for Food Areas Use POTABLE WATER to generate steam applied directly to food and FOOD-CONTACT SURFACES. Generate the steam locally from FOOD SERVICE equipment designed for this purpose (e.g., vegetable steamers, combination ovens, etc.).
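The vent-sizing rule above (vent cross-sectional area equal to or greater than that of the filling line) can be verified from the pipe inside diameters. A minimal illustrative sketch with hypothetical names, not part of the guidelines:

```python
# Illustrative sketch only. Verifies that a tank vent's cross-sectional
# area is at least that of the filling line, per the rule above.

import math

def area_mm2(inside_diameter_mm: float) -> float:
    return math.pi * (inside_diameter_mm / 2) ** 2

def vent_area_ok(vent_diameter_mm: float, fill_line_diameter_mm: float) -> bool:
    return area_mm2(vent_diameter_mm) >= area_mm2(fill_line_diameter_mm)

print(vent_area_ok(vent_diameter_mm=100, fill_line_diameter_mm=100))  # True
print(vent_area_ok(vent_diameter_mm=80,  fill_line_diameter_mm=100))  # False
```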
# Nonpotable Water Steam generated by nonpotable water may be applied indirectly to food or food equipment if routed through coils, tubes, or separate chambers. # Disinfection of the Potable Water System # Before Placed in Service Clean, disinfect, and flush POTABLE WATER TANKS and all parts of the POTABLE WATER system before the system is placed in service. # Free Chlorine Solution Ensure that DISINFECTION is accomplished by using a 50-MG/L (50-ppm) free chlorine solution for a minimum of 4 hours. Ensure that only POTABLE WATER is used for these procedures. Prior VSP agreement is required if alternative APPROVED DISINFECTION practices are used. # Documentation Provide written documentation showing that a representative sampling was conducted at various PLUMBING FIXTURES on each deck throughout the vessel (forward, aft, port, and starboard) to ensure that the 50-MG/L (ppm) free chlorine residual has circulated throughout the distribution system. Provide all manufacturers' literature for installation, operation, and maintenance. # pH Adjustment Provide automatic PH adjustment equipment for water bunkering and production. Install analyzer, controller, and dosing pumps that are designed to accommodate changes in flow rates. # Distribution # Sample Point Provide an analyzer-controlled, automatic halogenation system. Install the analyzer probe sample point at least 3 meters (10 feet) downstream of the HALOGEN injection point. If a static mixer is used in lieu of the 3-meter (10-foot) distance, see section 22.14.2.7 for static mixer requirements. # Free Halogen Probes Use probes to measure free HALOGEN and link them to the analyzer/controller and chemical dosing pumps. # Backup Halogenation Pump Provide a back-up halogenation pump with an automatic switchover that begins pumping HALOGEN when the primary (in-use) pump fails or cannot meet the halogenation demand. # Probe/Sample Location Locate the HALOGEN analyzer probe and/or sample cock at a distant point in each distribution system loop where significant water flow exists. # Alarm Provide an audible alarm in a continually occupied watch station (e.g., the engine control room or bridge) to indicate low or high free HALOGEN readings at each distant point analyzer. # Backflow Prevention Provide POTABLE WATER taps with appropriate BACKFLOW prevention at HALOGEN supply tanks. # Sample Cock Location # Heat Exchangers Protect SANITARY SEAWATER or POTABLE WATER in heat exchangers by one of the following construction methods: # Double-wall Construction Double-wall construction between the SANITARY SEAWATER or POTABLE WATER and nonpotable liquids with both of the following safety features: - A void space to allow any leaking liquid to drain away. - An alarm system to indicate a leak in the double wall. # Single-wall Construction Single-wall construction with all of the following safety features: # Higher Pressure Higher pressure of at least 1 bar on the SANITARY SEAWATER or POTABLE WATER side of the heat exchanger. # Automatic Valve An automatic valve arrangement that closes SANITARY SEAWATER or POTABLE WATER circulation in the heat exchanger when the pressure difference is less than 1 bar. # Alarm An alarm system that sounds when the diverter valve directs SANITARY SEAWATER or POTABLE WATER away from the heat exchanger. # Recreational Water Facilities (RWFs) # Water Source # Filling System Provide a filling system that allows for the filling of each RWF with SANITARY SEAWATER or POTABLE WATER.
For a compensation or make-up tank supplied with POTABLE WATER, an overflow line located below the fill line and at least twice the diameter of the fill line is an acceptable method of BACKFLOW protection, provided that the overflow line discharges to the wastewater system through an indirect connection (see the illustrative check below). # Compensation or Make-up Tank Where make-up water is required to replace water loss due to splashing, carry-out, and other volume loss, install an appropriately designed compensation or make-up tank to ensure that ADEQUATE chemical balance can be maintained. # Combining RWFs No more than two similar RWFs may be combined. CHILDREN'S POOLS and BABY-ONLY WATER FACILITIES must not be combined with any other type of RWF. # Independent Manual Testing When combining RWFs, provisions must be made for independent manual water testing within the mechanical room for each RWF. # Independent Slide RWF and Adult Swimming Pool An independent slide RWF and an adult SWIMMING POOL may be combined provided that the water volume added to the slide and the slide pump capacity are sufficient to maintain the TURNOVER rate as shown in section 29.10. Any other combinations of RWFs will be decided on a case-by-case basis during the plan review. # RWF Showers and Toilet Facilities # Shower Temperature Equip showers to provide POTABLE WATER at a temperature not to exceed 43°C (110°F) during normal operations. Install the showers within 10 meters of the entrances to RWFs. The location and number of showers for facilities with multiple entrances will be determined during the plan review. # Showers for Children RWFs designed for use by children under 6 years of age must have appropriately sized shower facilities. Standard height is acceptable, but the mechanism to operate the flow of water must not be more than 1 meter above the deck. # Toilet Facilities Locate toilet facilities within one fire zone (approximately 48 meters) of each RWF and on the same deck. Install a minimum of two separate toilet rooms (either two unisex or one male and one female). Each toilet facility must include a toilet and a handwashing facility. The total number of toilets and toilet facilities required will be assessed during the plan review. Urinals may be installed in addition to the required toilet, but may not replace the toilet. # Diaper-changing Facilities Provide diaper-changing facilities within one fire zone (approximately 48 meters) and on the same deck of any BABY-ONLY WATER FACILITY. If these facilities are placed within toilet rooms, there must be one facility located within each toilet room (men's, women's, and unisex). Diaper-changing facilities must be equipped in accordance with section 34.2.1. # RWF Drainage # Independent System Provide RWFs with a drainage system that is independent of other drainage systems. If RWF drains are connected to another drainage system, provide an AIR GAP or a DUAL SWING CHECK VALVE between the two. This includes the drainage for compensation or make-up tanks. # Slope Slope the bottom of the RWF toward the drains to achieve complete drainage. # Seating Drains If seating is provided inside an RWF, ensure that drains are installed to allow for complete draining of the seating area (including seats inside WHIRLPOOL SPAS and SPA POOLS). # Drain Completely Decorative and working features of an RWF must be designed to drain completely and must be constructed of nonporous, EASILY CLEANABLE materials. These features must be designed to be shock halogenated.
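Two of the plumbing rules above, the make-up tank overflow sizing and the shower temperature limit, can be expressed as simple checks. The sketch below is illustrative only and is not part of the guidelines; all names are hypothetical:

```python
# Illustrative sketch only. Checks the make-up tank overflow rule above
# (at least twice the fill-line diameter, located below the fill line)
# and the 43 C (110 F) shower temperature limit.

def check_makeup_tank(overflow_diameter_mm: float,
                      fill_diameter_mm: float,
                      overflow_below_fill: bool) -> list[str]:
    findings = []
    if overflow_diameter_mm < 2 * fill_diameter_mm:
        findings.append("Overflow must be at least twice the fill-line diameter.")
    if not overflow_below_fill:
        findings.append("Overflow line must be located below the fill line.")
    return findings

def shower_temp_ok(temp_c: float) -> bool:
    return temp_c <= 43.0  # potable water showers: 43 C (110 F) maximum

print(check_makeup_tank(100, 50, True))  # [] -> compliant
print(shower_temp_ok(45.0))              # False
```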
# RWF Safety # Antientrapment Drain Covers and Suction Fittings Where referenced within these guidelines, drain covers must comply with the requirements in ASME A112.19.8-2007, including addenda. See the table below for primary and secondary ANTIENTRAPMENT requirements. VSP is aware that the requirements shown in Table 28.1.7 may not fully meet the letter of the Virginia Graeme Baker Act, but we also recognize the life-safety concerns for rapid dumping of RWFs in conditions of instability at sea. Therefore, it is the owner's decision to meet or exceed VSP requirements. # Installation Install dual drains that are at least 1 meter (3 feet) apart and at the lowest point in the RWF. Ensure that there are no intermediate drain isolation valves on the lines between the drains (Figure 17a). In a channel system (an UNBLOCKABLE DRAIN), a grate-type cover would be attached to the channel (Figure 17b). When fully assembled and installed, SUCTION FITTINGS must reduce the potential for body, digit, or limb entrapment in accordance with ASME A112.19.8M-2007. # Stamped and Certified Manufactured drain covers and SUCTION FITTINGS must be stamped and certified in accordance with the standards set forth in ASME A112.19.8-2007. # Design of Field Fabricated Covers and Fittings The design of custom/shipyard-constructed (field fabricated) drain covers and SUCTION FITTINGS must be fully specified by a registered design professional in accordance with ASME A112.19.8-2007. The specifications must fully address cover/grate loadings; durability; hair, finger, and limb entrapment issues; cover/grate secondary layer of protection; related sump design; and features specific to the RWF. # Alternate to Marking Field Fabricated Fittings As an alternate to marking custom/shipyard-constructed (field fabricated) drain cover fittings, the owner of the facility where these fittings will be installed must be advised in writing by the registered design professional of the information set forth in section 7.1.1 of ASME A112.19.8-2007. # Accompanying Letter A letter from the shipyard must accompany each custom/shipyard-constructed (field fabricated) drain cover fitting. At a minimum, the letter must specify the shipyard, name of the vessel, specifications and dimensions of the drain cover, as noted above, and the exact location of the RWF for which it was designed. The registered design professional's name, contact information, and signature must be on the letter. # Antientrapment/Antientanglement Requirements See Table 28.1.7. *Options 1 through 5 are for fittings that are not under direct suction. These include both fittings to drain the RWF and fittings used to recirculate the water. Options 6 through 8 are for fittings that are under direct suction. These include fittings to drain the RWF and fittings used to recirculate the water. Definitions: - Alarm = the audible alarm must sound in a continuously manned space AND at the RWF. This alarm is for all draining: accidental, routine, and emergency. - GDS (GRAVITY DRAINAGE SYSTEM) = a drainage system that uses a collector tank from which the pump draws water. Water moves from the RWF to the collector tank due to atmospheric pressure, gravity, and the displacement of water by bathers. There is no direct suction at the RWF. - SVRS (SAFETY VACUUM RELEASE SYSTEM) = a system that stops the operation of the pump, reverses the circulation flow, or otherwise provides a vacuum release at a suction outlet when a blockage is detected.
The system must be tested by an independent third party and conform to ASME/ANSI A112.19.17 or ASTM standard F2387. - APS (AUTOMATIC PUMP SHUT-OFF system) = a device that detects a blockage and shuts off the pump system. A manual shut-off near the RWF does not qualify as an APS. # Depth Markers # Installation Install depth markers for each RWF where the maximum water depth is 1 meter (3 feet) or greater. Install depth markers so that they can be seen from the deck and inside the RWF tub. Ensure that the markers are in both meters and feet. Additionally, depth markers must be installed for every 1-meter (3-foot) change in depth. # Safety Signs # Installation Install safety signs at each RWF except for BABY-ONLY WATER FACILITIES. At a minimum the signs must include the following words: - Do not use these facilities if you are experiencing diarrhea, vomiting, or fever. - No children in diapers or who are not toilet trained. - Shower before entering the facility. - Bather load number. (The maximum bather load must be based on the following factor: one person per 19 liters per minute of recirculation flow.) Pictograms may replace words, as appropriate or available. It is advisable to post additional cautions and concerns on signs. See section 31.3 for safety signs specific to BABY-ONLY WATER FACILITIES and section 32.3 for safety signs specific to WHIRLPOOL SPAS and SPA POOLS. # Children's RWF For the children's RWF signs, include the exact wording "TAKE CHILDREN ON FREQUENT BATHROOM BREAKS" or "TAKE CHILDREN ON FREQUENT TOILET BREAKS." This is in addition to the basic RWF safety sign. # Life-saving Equipment # Location A rescue or shepherd's hook and an APPROVED flotation device must be provided at a prominent location (visible from the full perimeter of the pool) at each RECREATIONAL WATER FACILITY that has a depth of 1 meter (3 feet) or greater. These devices must be mounted in a manner that allows for easy access during an emergency. - The pole of the shepherd's hook must be long enough to reach the center of the deepest portion of the pool from the side plus 0.6 meters (2 feet). It must be light, strong, and nontelescoping with rounded, nonsharp ends. - The flotation device must have an attached rope that is at least 2/3 of the maximum pool width. # Recirculation and Filtration Systems # Skim Gutters Where skim gutters are installed, ensure that the maximum fill level of the RWF is to the skim gutter level. # Overflows Ensure that overflows are directed by gravity to the compensation or make-up tank for filtration and DISINFECTION. Alternatively, overflows may be directed to the RWF drainage system. If the overflow is connected to another drainage system, provide an AIR GAP or a DUAL SWING CHECK VALVE between the two. # Return Water All water returning from an RWF must be directed to the compensation or make-up tank or the filtration and DISINFECTION system. # Compensation or Make-up Tanks Ensure that 100% of the water in the compensation or make-up tanks passes through the filtration and DISINFECTION systems before returning to the RWF. This includes any water directed to water features in RWFs. # Secondary Disinfection System Provide a secondary DISINFECTION system capable of inactivating Cryptosporidium. Ensure that these systems are installed in accordance with the manufacturer's specifications. Secondary UV DISINFECTION systems must be designed to operate in accordance with the parameters set forth in NSF International or an equivalent standard.
# Size Secondary DISINFECTION systems must be appropriately sized to disinfect 100% of the water at the appropriate TURNOVER rate (see the illustrative sizing sketch below). Secondary DISINFECTION systems are to be installed after filtration but before HALOGEN-based DISINFECTION. Unless otherwise accepted by the VSP, secondary DISINFECTION must be accomplished by a UV DISINFECTION system. # Low- and Medium-pressure UV Systems Low- and medium-pressure UV systems can be used and must be designed to treat 100% of the flow through the feature line(s). Multiple units are acceptable. UV systems must be designed to provide 40 mJ/cm² at the end of lamp life. UV systems must be rated at a minimum of 254 nm. # Recirculation and Filtration System # Compensation or Make-up Tank Install a compensation or make-up tank with an automatic level control system capable of holding an amount of water sufficient to ensure continuous operation of the filtration and DISINFECTION systems. This capacity must be equal to at least 3 times the total operating volume of the system. # Accessible Drain Install an ACCESSIBLE drain at the bottom of the tank to allow for complete draining of the tank. Install an access port for cleaning the tank and for the addition of batch halogenation and PH control chemicals. # Secondary Disinfection and pH Systems Design the system so that 100% of the water for the BABY-ONLY WATER FACILITY feature passes through filtration, halogenation, secondary DISINFECTION, and PH systems before returning to the BABY-ONLY WATER FACILITY. # Disinfection and pH Control # Independent Automatic Analyzer Install independent automatic analyzer-controlled HALOGEN-based DISINFECTION and PH dosing systems. The analyzer must be capable of measuring HALOGEN levels in MG/L (ppm) and PH levels. Analyzers must have digital readouts that indicate measurements from the installed analyzer probes. # Automatic Monitoring and Recording Provide an automatic monitoring and recording system for the free HALOGEN residuals in MG/L (ppm) and PH levels. The recording system must be capable of recording these levels 24 hours/day. # Secondary Disinfection System # Cryptosporidium Provide a secondary UV DISINFECTION system capable of inactivating Cryptosporidium. Ensure that these systems are installed in accordance with the manufacturer's specifications. Secondary UV DISINFECTION systems must be designed to operate in accordance with the parameters set forth in NSF International for use in BABY-ONLY WATER FACILITIES. # Size Secondary DISINFECTION systems must be appropriately sized to disinfect 100% of the water at the appropriate TURNOVER rate. Secondary DISINFECTION systems are to be installed after filtration but before HALOGEN-based DISINFECTION. Unless otherwise APPROVED by VSP, secondary DISINFECTION must be accomplished by a UV DISINFECTION system.
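The TURNOVER-based sizing above, combined with the bather-load factor from the safety sign section (one person per 19 liters per minute of recirculation flow), reduces to simple arithmetic. The sketch below is illustrative only and is not part of the guidelines; the tub volume and turnover time are hypothetical inputs, and the required turnover per RWF type is set in section 29.10:

```python
# Illustrative sketch only. Relates tub volume, turnover time,
# recirculation flow, and the bather-load factor (1 person per 19 L/min).

import math

def required_flow_l_per_min(tub_volume_l: float, turnover_minutes: float) -> float:
    # One TURNOVER circulates the full tub volume through the system.
    return tub_volume_l / turnover_minutes

def max_bather_load(recirculation_flow_l_per_min: float) -> int:
    return math.floor(recirculation_flow_l_per_min / 19.0)

flow = required_flow_l_per_min(tub_volume_l=38_000, turnover_minutes=360)
print(round(flow, 1), "L/min")           # ~105.6 L/min for a 6-hour turnover
print(max_bather_load(flow), "bathers")  # 5
```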
Seal all openings where piping and other items penetrate through the deck. # Acknowledgments VSP would like to acknowledge the following organizations and companies for their cooperative efforts in the revisions of the VSP 2011 Construction Guidelines. The cover art was designed by Carrie Green. # Background and Purpose The Centers for Disease Control and Prevention (CDC) established the Vessel Sanitation Program (VSP) in 1975 as a cooperative endeavor with the cruise vessel industry. VSP's goal is to assist the industry to develop and implement comprehensive sanitation programs to protect the health of passengers and crew aboard cruise vessels. Every cruise vessel that has a foreign itinerary, carries 13 or more passengers, and calls on a U.S. port is subject to biannual operational inspections and, when necessary, reinspection by VSP. The vessel owner pays a fee, based on gross registered tonnage (GRT) of the vessel, for all operational inspections. The Vessel Sanitation Program 2011 Operations Manual, which is available on the VSP Web site (www.cdc.gov/nceh/vsp), covers details of these inspections. Additionally, cruise vessel owners or shipyards that build or renovate cruise vessels may voluntarily request plan reviews, onsite shipyard construction inspections, and/or final construction inspections of new or renovated vessels before their first or next operational inspection. The vessel owner or shipyard pays a fee, based on GRT of the vessel, for onsite and final construction inspections. VSP does not charge a fee for plan reviews or consultations. Section 3.0 covers details pertaining to plan reviews, consultations, or construction inspections. When a plan review or construction inspection is requested, VSP reviews current construction billing invoices of the shipyard or owner requesting the inspection. If this review identifies construction invoices unpaid for more than 90 days, no inspection will be scheduled. An inspection can be scheduled after the outstanding invoices are paid in full. The VSP 2011 Construction Guidelines provide a framework of consistent construction and design guidelines that protect passenger and crew health. CDC is committed to promoting high construction standards to protect the public's health. Compliance with these guidelines will help to ensure a healthy environment on cruise vessels. CDC reviewed references from many sources to develop these guidelines. These references are indicated in section 38.2. The VSP 2011 Construction Guidelines cover components of the vessel's facilities related to public health, including FOOD STORAGE, PREPARATION, and SERVICE, and water bunkering, storage, DISINFECTION, and distribution. Vessel owners and operators may select the design and equipment that best meets their needs. However, the design and equipment must also meet the sanitary design criteria of the American National Standards Institute (ANSI) or equivalent organization as well as VSP's routine operational inspection requirements. These guidelines are not meant to limit the introduction of new designs, materials, or technology for shipbuilding. A shipbuilder, owner, manufacturer, or other interested party may ask VSP to periodically review or revise these guidelines in relation to new information or technology. VSP reviews such requests in accordance with the criteria described in section 2.0. New cruise vessels must comply with all international code requirements (e.g., International Maritime Organization Conventions).
Those include requirements of the following: • Safety of Life-at-Sea Convention. • International Convention for the Prevention of Pollution from Ships. • Tonnage and Load Line Convention. • International Electrical Code. • International Plumbing Code. • International Standards Organization. This document does not cross-reference related and sometimes overlapping standards that new cruise vessels must meet. The VSP 2011 Construction Guidelines went into effect on September 15, 2011. They apply to vessels that LAY KEEL or perform any major renovation or equipment replacement (e.g., any changes to the structural elements of the vessel covered by these guidelines) after this date. The guidelines do not apply to minor renovations such as the installation or removal of single pieces of equipment (refrigerator units, warewash machines, bain-marie units, etc.) or single pipe runs. These guidelines apply to all areas of the vessel affected by a renovation. VSP will inspect the entire vessel in accordance with the VSP 2011 Operations Manual during routine vessel sanitation inspections and reinspections. # Revisions and Changes VSP periodically reviews and revises these recommendations in coordination with industry representatives and other interested parties to stay abreast of industry innovations. A shipbuilder, owner, manufacturer, or other interested party may ask VSP to review a construction guideline on the basis of new technologies, concepts, or methods. Recommendations for changes or additions to these guidelines must be submitted in writing to the VSP Chief (see section 39.2.1 for contact information). The recommendation should • Identify the section to be revised. • Describe the proposed change or addition. • State the reason for recommending the change or addition. • Include research or test results and any other pertinent information that support the change or addition. VSP will coordinate a professional evaluation and consult with industry to determine whether to include the recommendation in the next revision. VSP gives special consideration to shipyards and owners of vessels that have had plan reviews conducted before an effective date of a revision of these guidelines. This helps limit any burden placed on the shipyards and owners to make excessive changes to previously agreed-upon plans. VSP asks industry representatives and other knowledgeable parties to meet with VSP representatives periodically to review the guidelines and determine whether changes are necessary to keep up with the innovations in the industry. # Procedures for Requesting Plan Reviews, Consultations, and Construction-related Inspections To coordinate or schedule a plan review or construction-related inspection, submit an official written request to the VSP Chief as early as possible in the planning, construction, or renovation process. Requests that require foreign travel must be received in writing at least 45 days before the intended visit. The request will be honored depending on VSP staff availability (see section 39.2.1 for contact information). After the initial contact, VSP assigns primary and secondary officers to coordinate with the vessel owner and shipyard. Normally two officers will be assigned. These officers are the points of contact for the vessel from the time the plan review and subsequent consultations take place through the final construction inspection. Vessel representatives should provide points of contact to represent the owners, shipyard, and key subcontractors.
All parties will use these points of contact during consultations between any of the parties and VSP to ensure awareness of all consultative activities after the plan review is conducted. # Plan Reviews and Consultations VSP normally conducts plan reviews for new construction a minimum of 18 months before the vessel is scheduled for delivery. The time required for major renovations varies. To allow time for any necessary changes, VSP coordinates plan reviews for such projects well before the work begins. Plan reviews normally take 2 working days. They are conducted in Atlanta, Georgia; Fort Lauderdale, Florida; or other agreed-upon sites. Normally, two VSP officers will be assigned to the project. Representatives from the shipyard, vessel owner, and subcontractor(s) who will be doing most of the work should attend the review. They should bring all pertinent materials for areas covered in these guidelines, including (but not limited to) the following: • Complete plans or drawings (this includes new vessels from a class built under a previous version of the VSP Construction Guidelines). • Any available menus. • Equipment specifications. • General arrangement plans. • Decorative materials for FOOD AREAS and bars. • All FOOD-related STORAGE, PREPARATION, and SERVICE AREA plans. • Level and type of FOOD SERVICE (e.g., concept menus, staffing plans, etc.). • POTABLE and nonpotable water system plans with details on water inlets (e.g., sea chests, overboard discharge points, and BACKFLOW PREVENTION DEVICES). • Ventilation system plans. • Plans for all RECREATIONAL WATER FACILITIES. • Size profiles for operational areas. • Owner-supplied and PORTABLE equipment specifications, including cleaning procedures. • Cabin attendant work zones. • Operational schematics for misting systems and decorative fountains. VSP will prepare a plan review report summarizing recommendations made during the plan review and will submit the report to the shipyard and owner representatives. Following the plan review, the shipyard will provide the following: • Any redrawn plans. • Copies of any major change orders made after the plan review in areas covered by these guidelines. While the vessel is being built, shipyard representatives, the ship owner, or other vessel representatives may direct questions or requests for consultative services to the VSP project officers. Direct these questions or requests in writing to the officer(s) assigned to the project. Include fax number(s) and e-mail address(es) for appropriate contacts. The VSP officer(s) will coordinate the request with the owner and shipyard points of contact designated during the plan review. # Onsite Construction Inspections VSP conducts most onsite or shipyard construction inspections in shipyards outside the United States. A formal written request must be submitted to the VSP Chief at least 45 days before the inspection date so that VSP can process the required foreign travel orders for VSP officers (see section 3.0). A sample request is shown in section 39.1. A completed vessel profile sheet must also be submitted with the request for the onsite inspection (see section 40.0). VSP encourages shipyards to contact the VSP Chief and coordinate onsite construction inspections well before the 45-day minimum to better plan the actual inspection dates. If a shipyard requests an onsite construction inspection, VSP will advise the vessel owner of the inspection dates so that the owner's representatives are present.
An onsite construction inspection normally requires the expertise of one to three officers, depending on the size of the vessel and whether it is the first of a hull design class or a subsequent hull in a series of the same class of vessels. The inspection, including travel, generally takes 5 working days. The onsite inspection should be conducted approximately 4 to 5 weeks before delivery of the vessel when 90% of the areas of the vessel to be inspected are completed. VSP will provide a written report to the party that requested the inspection. After the inspection and before the ship's arrival in the United States, the shipyard will submit to VSP a statement of corrective action outlining how it will address and correct each item identified in the inspection report. # Final Construction Inspections # Purpose and Scheduling At the request of a vessel owner or shipyard, VSP may conduct a final construction inspection. Final construction inspections are conducted only after construction is 100% complete and the ship is fully operational. These inspections are conducted to evaluate the findings of the previous yard inspection, assess all areas that were incomplete in the previous yard inspection, and evaluate performance tests on systems that could not be tested in the previous yard inspection. Such systems include the following: • Ventilation for cooking, holding, and warewashing areas. • Warewash machines. • Artificial light levels. • Temperatures in cold or hot holding equipment. • HALOGEN and other chemistry measures for POTABLE WATER or RECREATIONAL WATER systems. To schedule the inspection, the vessel owner or shipyard submits a formal written request to the VSP Chief as soon as possible after the vessel is completed, or a minimum of 10 days before its arrival in the United States. At the request of a vessel owner or shipyard and provided the vessel is not entering the U.S. market immediately, VSP may conduct final construction inspections outside the United States (see section 3.2 for foreign inspection procedures). As soon as possible after the final construction inspection, the vessel owner or shipyard will submit a statement of corrective action to VSP. The statement outlines how the shipyard will address each item cited in the inspection report and includes the projected date of completion. # Unannounced Operational Inspection VSP generally schedules vessels that undergo final construction inspection in the United States for an unannounced operational inspection within 4 weeks of the vessel's final construction inspection. VSP conducts operational inspections in accordance with the VSP 2011 Operations Manual. If a final construction inspection is not requested, VSP generally will conduct an unannounced operational inspection within 4 weeks after the vessel's arrival in the United States. VSP conducts operational inspections in accordance with the VSP 2011 Operations Manual. # Equipment Standards, Testing, and Certification Although these guidelines establish certain standards for equipment and materials installed on cruise vessels, VSP does not test, certify, or otherwise endorse or approve any equipment or materials used by the cruise industry. Instead, VSP recognizes certification from independent testing laboratories such as NSF International, Underwriters Laboratories (UL), the American National Standards Institute (ANSI), and other recognized independent international testing institutions.
In most cases, independent testing laboratories test equipment and materials to certain minimum standards that generally meet the recommended standards established by these guidelines. Equipment built to questionable standards will be reviewed by a committee consisting of VSP, cruise ship industry, and independent testing organization participants. The committee will determine whether the equipment meets the recommended standards established in these guidelines. Copies of test or certification standards are available from the independent testing laboratories. Equipment manufacturers and suppliers should not contact the VSP to request approval of their products. # General Definitions and Acronyms # Scope These VSP 2011 Construction Guidelines provide definitions to clarify commonly used terminology in this manual. The definition section is organized alphabetically. Terms defined in section 5.2 are identified in the text of these guidelines by SMALL CAPITAL LETTERS, or SMALL CAPS. For example: section 6.2.5 states "Provide READILY REMOVABLE DRIP TRAYS for condiment dispensing equipment." READILY REMOVABLE and DRIP TRAYS are in SMALL CAPS and are defined in section 5.2. # Definitions Accessible: Exposed for cleaning and inspection with the use of simple tools such as a screwdriver, pliers, or wrench. Activity pools: Include but are not limited to the following: wave pools, catch pools, water slides, INTERACTIVE RECREATIONAL WATER FACILITIES, lazy rivers, action rivers, vortex pools, and continuous surface pools. Adequate: Sufficient in number, features, or capacity to accomplish the purpose for which something is intended and to such a degree that there is no unreasonable risk to health or safety. # Air-break: A piping arrangement in which a drain from a fixture, appliance, or device discharges indirectly into another fixture, receptacle, or interceptor at a point below the flood-level rim (Figure 1). # Air gap: (AG) The unobstructed vertical distance through the free atmosphere between the lowest opening from any pipe or faucet supplying water to a tank, PLUMBING FIXTURE, or other device and the flood-level rim of the receptacle or receiving fixture. The air gap must be at least twice the inside diameter of the supply pipe or faucet and not less than 25 millimeters (1 inch) (Figure 2). Manufactured air gaps must be certified by a recognized plumbing or engineering organization. Backpressure: An elevation of pressure in the downstream piping system (by pump, elevation of piping, or steam and/or air pressure) above the supply pressure at the point of consideration that would cause a reversal of normal direction of flow. # Backsiphonage: The reversal of flow of used, contaminated, or polluted water from a PLUMBING FIXTURE or vessel or other source into a water supply pipe as a result of negative pressure in the pipe. Black water: Wastewater from toilets, urinals, medical sinks, and other similar facilities. Blast chiller: A unit specifically designed for rapid cooling of food products. # Blockable drain/suction fitting: A drain or suction fitting in a RECREATIONAL WATER FACILITY that can be completely covered or blocked by a 457 millimeters x 584 millimeters (18 inches x 23 inches) body-blocking element as set forth in ASME A112.19.8M. # Child activity center: A facility for child-related activities where children under the age of 6 are placed to be cared for by vessel staff.
# Children's pool: A pool that has a depth of 1 meter (3 feet) or less and is intended for use by children who are toilet trained.
Child-sized toilet: A toilet whose seat height is no more than 280 millimeters (11 inches) and whose seat opening is no greater than 203 millimeters (8 inches).
Cleaning locker: A room or cabinet specifically designed or modified for storage of cleaning equipment such as mops, brooms, floor-scrubbing machines, and cleaning chemicals.
Continuous pressure (CP) backflow prevention device: A device generally consisting of two check valves and an intermediate atmospheric vent that has been specifically designed to be used under conditions of continuous pressure (greater than 12 hours out of a 24-hour period).
# Coved (also coving): A curved or concave surface, molding, or other design that eliminates the usual joint angles of 90° or less. A single piece of stainless steel bent to an angle not less than 90° with a minimum 9.5-millimeter radius is acceptable (Figures 3-5). Unique circumstances for coving can be reviewed during plan review.
Cross-connection: An actual or potential connection or structural arrangement between a POTABLE WATER system and any other source or system through which it is possible to introduce into any part of the POTABLE WATER system any used water, industrial fluid, gas, or substance other than the intended POTABLE WATER with which the system is supplied.
# Deck drain: The physical connection between decks, SCUPPERS, or DECK SINKS and the GRAY or BLACK WATER systems.
# Deck sink: A sink recessed into the deck and sized to contain waste liquids from tilting kettles and pans.
Disinfection: A process (physical or chemical) that destroys many or all pathogenic microorganisms, except bacterial and mycotic spores, on inanimate objects.
Distillate water lines: Pipes carrying water that is condensed from the evaporators and that may be directed to the POTABLE WATER system. This is the VSP definition for pipe striping purposes.
Food-contact surface: Surfaces (food zone, splash zone) of equipment and utensils with which food normally comes in contact and surfaces from which food may drain, drip, or splash into a food or onto surfaces normally in contact with food (Figure 6).
# Food display areas: Any area where food is displayed for consumption by passengers and/or crew. Applies to displays served by vessel staff or self-service.
Food-handling areas: Any area where food is stored, processed, prepared, or served.
# Food preparation areas: Any area where food is processed, cooked, or prepared for service.
Food service areas: Any area where food is presented to passengers or crew members (excluding individual cabin service).
Food storage areas: Any area where food or food products are stored.
Food waste system: A system used to collect, transport, and process food waste from FOOD AREAS to a waste disposal system (e.g., pulper, vacuum system).
Gap: An open juncture that is more than 3 millimeters (1/8 inch).
# Gravity drain: A drain fitting used to drain the body of water in a RECREATIONAL WATER FACILITY by gravity and with no pump downstream of the fitting.
Recreational water facility (RWF): Includes but is not limited to the following:
• Hydrotherapy pools.
• INTERACTIVE RECREATIONAL WATER FACILITIES.
• Slides.
• SPA POOLS.
• SWIMMING POOLS.
• Therapeutic pools.
• WADING POOLS.
• WHIRLPOOLS.
# Reduced pressure principle backflow prevention assembly (RP assembly): An assembly containing two independently acting internally loaded check valves together with a hydraulically operating, mechanically independent pressure differential relief valve located between the check valves and at the same time below the first check valve. The unit must include properly located resilient seated test cocks and tightly closing resilient seated shutoff valves at each end of the assembly. Removable: Capable of being detached from the main unit with the use of simple tools such as a screwdriver, pliers, or an open-end wrench. # Safety vacuum release system (SVRS): A system that is capable of releasing a vacuum at a suction outlet caused by a high vacuum due to a blockage in the outlet flow. These systems shall be designed and certified in accordance with ASTM F2387-04 or ANSI/ASME A 112.19.17-2002. Sanitary seawater lines: Water lines with seawater intended for use in the POTABLE WATER production systems or in RECREATIONAL WATER FACILITIES. Scupper: A conduit or collection basin that channels liquid runoff to a DECK DRAIN. Sealant: Material used to fill SEAMS. Seam: An open juncture that is greater than 0.8 millimeters (1/32 inch) but less than 3 millimeters (1/8 inch). # Smooth: • A FOOD-CONTACT SURFACE having a surface free of pits and inclusions with a cleanability equal to or exceeding that of (100-grit) number 3 stainless steel. • A NONFOOD-CONTACT SURFACE of equipment having a surface equal to that of commercial grade hot-rolled steel free of visible scale. • Deck, bulkhead, or deckhead that has an even or level surface with no roughness or projections to make it difficult to clean. # Spa pool: A POTABLE WATER or saltwater-supplied pool with temperatures and turbulence comparable to a WHIRLPOOL SPA. # General characteristics are • Water temperature of 30°C-40°C or 86°F-104°F. • Bubbling, jetted, or sprayed water effects that physically break at or above the water surface. • Depth of more than 1 meter (3 feet). • Tub volume of more than 6 tons of water. # Spill-resistant vacuum breaker (SVB): A specific modification to a PVB to minimize water spillage. # Spray pad: The play and water contact area that is designed to have no standing water. # Suction fitting: A fitting in a RECREATIONAL WATER FACILITY under direct suction through which water is drawn by a pump. Swimming pool: A RECREATIONAL WATER FACILITY greater than 1 meter in depth. This does not include SPA POOLS that meet this depth. Technical water: Water that has not been chlorinated or PH controlled and that originates from a bunkering or condensate collection process, or seawater processed through the evaporators or reverse osmosis plant and is intended for storage and use in the technical water system. Temperature-measuring devices (TMDs): Thermometers, thermocouples, thermistors, or other devices that indicate the temperature of food, air, or water and are numerically scaled in Celsius and/or Fahrenheit. TMDs must be designed to be easily readable. # Turnover: The circulation, through the recirculation system, of a quantity of water equal to the total RWF tub volume. For facilities with zero depth, the turnover will be based on the total volume of the system, including compensation or make-up tanks and piping, and up to the entire volume for the system as designed. 
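The TURNOVER definition above implies a simple calculation of turnover time from system volume and recirculation flow. A minimal illustrative sketch in Python (the function name and example figures are ours, not part of the guidelines; for zero-depth facilities, the auxiliary volume term stands in for compensation/make-up tanks and piping):

```python
def turnover_time_hours(tub_volume_m3: float,
                        recirculation_flow_m3_per_hour: float,
                        auxiliary_volume_m3: float = 0.0) -> float:
    """Hours for one TURNOVER: total system volume (tub volume plus, for
    zero-depth facilities, compensation/make-up tanks and piping) divided
    by the recirculation flow rate."""
    total_volume = tub_volume_m3 + auxiliary_volume_m3
    return total_volume / recirculation_flow_m3_per_hour

# Example: a 20-m3 pool recirculated at 40 m3/h turns over in 0.5 hours.
print(turnover_time_hours(20.0, 40.0))  # 0.5
```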
Unblockable drain/suction fitting: A drain or suction fitting in a RECREATIONAL WATER FACILITY that cannot be completely covered or blocked by a 457-millimeter x 584-millimeter (18-inch x 23-inch) body-blocking element and that is rated by the test procedures or by the appropriate calculation in accordance with ASME A112.19.8M.
Utility sink: Any sink located in a FOOD SERVICE AREA not intended for handwashing and/or warewashing.
Wading pool: A RECREATIONAL WATER FACILITY with a maximum depth of less than 1 meter that is not designed for use by children.
# Food Areas
Provision rooms must be sized to prevent the storage of bulk foods in provisions passageways unless the passageways are specifically designed to meet provision room standards (section 15.0). Refrigeration and hot-food holding facilities, including temporary storage facilities, must be available for all FOOD PREPARATION and SERVICE areas and for foods being transported to remote areas.
# Food Flow
Arrange the flow of food through a vessel in a logical sequence that eliminates or minimizes cross-traffic or backtracking. Provide a clear separation of clean and soiled operations. When a common corridor is used for movement of both clean and soiled operations, the minimum distance from bulkhead to bulkhead must be considered. Within a galley, the standard separation between clean and soiled operations must be a minimum of 2 meters (6½ feet). For smaller galleys (e.g., specialty, bell box), the minimum distance will be assessed during the plan review. Additionally, common corridors will be reviewed during the plan review for the size and flow of galley operations.
Provide an orderly flow of food from the suppliers at dockside through the FOOD STORAGE, PREPARATION, and finishing areas to the SERVICE areas and, finally, to the waste management area. The goals are to reduce the risk for cross-contamination, prepare and serve food rapidly in accordance with strict time- and temperature-control requirements, and minimize handling.
Provide a size profile for each FOOD AREA, including provisions, preparation rooms, galleys, pantries, warewash, garbage processing area, and storage. The size profile shows the square meters of space designated for that area. Where possible, VSP will visit the profile vessel(s) to verify the capacity during operational inspections. The size profile must be an established standard for each cruise line based on the line's review of the area size for the same FOOD AREA in its existing vessels. As the ship size and passenger and crew totals change, there must be a proportional change in each FOOD AREA size based on the profile to ensure the service needs are met for each area. Size evaluations of FOOD AREAS will incorporate seating capacity and staffing, service, and equipment needs. During the plan review process, VSP evaluates the size of a particular room or area and the flow of food through the vessel to those rooms or areas. VSP will also use the results of operational inspections to review the size profiles submitted by individual cruise lines.
# Equipment Requirements
# Galleys
The equipment in sections 6.2.1.1 through 6.2.1.12 is required in galleys, depending on the level and type of service, and may be recommended for other areas.
# Blast Chillers
Incorporate BLAST CHILLERS into the design of passenger and crew galleys. More than one unit may be necessary, depending on the size of the vessel and the distances between the BLAST CHILLERS and the storage and service areas.
# 6.2.1.1.1: The size and type of BLAST CHILLERS installed for each FOOD PREPARATION AREA are based on the concept/menu, operational requirements to satisfy that menu, and volume of food requiring cooling.
# Utility Sinks
Include food preparation UTILITY SINKS in all meat, fish, and vegetable preparation rooms; in cold pantries or garde mangers; and in any other areas where personnel wash or soak food.
# 6.2.1.2.1: An automatic vegetable washing machine may be used in addition to FOOD PREPARATION UTILITY SINKS in vegetable preparation rooms.
# Food Storage
Include storage cabinets, shelves, or racks for food products and equipment in FOOD STORAGE, PREPARATION, and SERVICE AREAS, including bars and pantries.
# Tables, Carts, or Pallets
Locate fixed or PORTABLE tables, carts, or pallets in areas where food or ice is dispensed from cooking equipment, such as from soup kettles, steamers, braising pans, tilting pans, or ice storage bins.
# Storage for Large Utensils
Include a storage cabinet or rack for large utensils such as ladles, paddles, whisks, and spatulas, and provide for vertical storage of cutting boards.
# Sizing
Size DECK DRAINS, SCUPPERS, and sinks to eliminate spillage and overflow to adjacent deck surfaces.
# Deck Drainage
Provide sufficient deck drainage and design deck and SCUPPER drain lines in all FOOD SERVICE and warewash areas to prevent liquids from pooling on the decks. Do not use DECK SINKS as substitutes for DECK DRAINS.
# Cross-drain Connections
Provide cross-drain connections to prevent pooling and spillage from the SCUPPER when the vessel is listing.
# Coaming
If a nonremovable coaming is provided around a DECK DRAIN, ensure that the juncture with the deck is COVED. Integral COVING is not required.
# Ramps
# 6.6.1 Installation
Install ramps over thresholds and ensure that they are easily REMOVABLE or sealed in place. Slope ramps for easy trolley roll-in and roll-out. Ensure ramps are strong enough to maintain their shape. If ramps over SCUPPER covers are built as an integral part of the SCUPPER system, construct them of SMOOTH, durable, and EASILY CLEANABLE materials.
# 6.7 Gray and Black Water Drain Lines
# Installation
Limit the installation of drain lines that carry BLACK WATER or other liquid wastes directly overhead or horizontally through spaces used for FOOD PREPARATION or STORAGE. This limitation includes areas for washing or storing utensils and equipment (e.g., in bars, in deck pantries, and over buffet counters). If installation of waste lines is unavoidable in these areas, sleeve weld or butt weld steel piping, and heat fuse or chemically weld plastic piping. For SCUPPER lines, factory-assembled transition fittings for steel-to-plastic pipes are allowed when manufactured per ASTM F1973 or an equivalent standard. Do not use push-fit or press-fit piping over these areas.
# General Hygiene Facilities Requirements for Food Areas
# Handwashing Stations
# Potable Water
Provide hot and cold POTABLE WATER to all handwashing sinks. Equip handwashing sinks to provide water at a temperature between 38°C (100°F) and 49°C (120°F) through a mixing valve or combination faucet.
# Construction
Construct handwashing sinks of stainless steel in FOOD AREAS. Handwashing sinks in FOOD SERVICE AREAS and bars may be constructed of a similar, SMOOTH, durable material.
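The 38°C-49°C handwashing range above is achieved by blending hot and cold supplies through a mixing valve. As a minimal sketch of the underlying blending arithmetic (illustrative only and not part of the guidelines; the supply temperatures are assumed values, and equal specific heats and perfect mixing are assumed):

```python
def hot_water_fraction(t_target_c: float, t_hot_c: float, t_cold_c: float) -> float:
    """Fraction of hot water in the blend for a target outlet temperature:
    T_mix = f * T_hot + (1 - f) * T_cold  =>  f = (T_mix - T_cold) / (T_hot - T_cold)."""
    return (t_target_c - t_cold_c) / (t_hot_c - t_cold_c)

# Example: with a 60 C hot supply and a 10 C cold supply, a 43 C outlet
# requires roughly 66% hot water in the blend.
print(round(hot_water_fraction(43.0, 60.0, 10.0), 2))  # 0.66
```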
# Supplies
Provide handwashing stations that include a soap dispenser, paper towel dispenser, corrosion-resistant waste receptacle, and, where necessary, splash panels to protect
• adjoining equipment,
• clean utensils,
• FOOD STORAGE, or
• FOOD PREPARATION surfaces.
If attached to the bulkhead, permanently seal soap dispensers, paper towel dispensers, and waste towel receptacles or make them REMOVABLE for cleaning. Air hand dryers are not permitted.
# Dispenser Locations
Install soap dispensers and paper towel dispensers so that they are not over adjoining equipment, clean utensil storage, FOOD STORAGE, FOOD PREPARATION surfaces, bar counters, or water fountains. For a multiple-station sink, ensure that there is a soap dispenser within 380 millimeters (15 inches) of each faucet and a paper towel dispenser within 760 millimeters (30 inches) of each faucet.
# Dispenser Installation
Install paper towel dispensers a minimum of 450 millimeters (18 inches) above the deck (as measured from the lower edge of the dispenser).
# Installation Specifications
Install handwash sinks a minimum of 750 millimeters (30 inches) above the deck, as measured at the top edge of the basin, and so that employees do not have to reach excessively to wash their hands. Install counter-mounted handwash sinks a minimum of 600 millimeters (24 inches) above the deck, as measured at the counter level. The minimum size of the handwash sink basin must be 300 millimeters (12 inches) in length and 300 millimeters (12 inches) in width. The diameter of round basins must be at least 300 millimeters (12 inches). Additionally, the minimum distance from the bottom of the water tap to the bottom of the basin must be 200 millimeters (8 inches).
# Locations
Locate handwashing stations throughout FOOD-HANDLING, PREPARATION, and warewash areas so that no employee must walk more than 8 meters (26 feet) to reach a station or pass through a normally closed door that requires touching a handle to open.
# Food-dispensing Waiter Stations
Provide a handwashing station at food-dispensing waiter stations (e.g., soups, ice, etc.) where the staff do not routinely return to an area with a handwashing station.
# Food-handling Areas
Provide a handwashing station in provision areas where bulk raw foods are handled by provisioning staff.
# Crew Buffets
Provide at least one handwashing station for every 100 seats (e.g., 1-100 seats = one handwashing station, 101-200 seats = two handwashing stations, etc.). Locate stations near the entrance of all officer/staff/crew mess areas where FOOD SERVICE lines are self-service.
# Soiled Dish Drop-off
Install handwashing stations at the soiled dish drop-off area(s) in the main galley, specialty galleys, and pantries for employees bringing soiled dishware from the dining rooms or other FOOD SERVICE AREAS, to prevent long waiting lines at handwashing stations. Provide one sink or one faucet on a multiple-station sink for every 10 wait staff who handle clean items and are assigned to a FOOD SERVICE AREA during maximum capacity. During the plan review, VSP will evaluate work assignments for wait staff to determine the appropriate number of handwashing stations.
# Faucet Handles
Install easy-to-operate sanitary faucet handles (e.g., large elephant-ear handles, foot pedals, knee pedals, or electronic sensors) on handwashing sinks in FOOD AREAS. If a faucet is self-closing, slow-closing, or metering, provide a water flow of at least 15 seconds without the need to reactivate the faucet.
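The seat- and staff-based ratios above (one station per 100 crew buffet seats; one sink or faucet per 10 wait staff at the soiled dish drop-off) reduce to simple rounding-up arithmetic. A minimal sketch (the function names are ours; rounding any fraction up to a whole additional station matches the 1-100/101-200 example above):

```python
import math

def crew_buffet_handwash_stations(seats: int) -> int:
    """One handwashing station per 100 seats (1-100 seats -> 1, 101-200 -> 2, ...)."""
    return math.ceil(seats / 100)

def soiled_drop_off_sinks(wait_staff_at_max_capacity: int) -> int:
    """One sink or faucet on a multiple-station sink per 10 wait staff who
    handle clean items, rounded up (our reading of 'for every 10')."""
    return math.ceil(wait_staff_at_max_capacity / 10)

print(crew_buffet_handwash_stations(150))  # 2
print(soiled_drop_off_sinks(24))           # 3
```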
# Signs
Install permanent signs in English and other appropriate languages stating "wash hands often," "wash hands frequently," or similar wording.
# Bucket Filling Station
# Location
Provide at least one bucket filling station in each area of the galleys (e.g., cold galley, hot galley, bakery, etc.), FOOD STORAGE, and FOOD PREPARATION AREAS.
# Mixing Valve
Supply hot and cold POTABLE WATER through a mixing valve to a faucet with the appropriate BACKFLOW protection at each bucket filling station.
# Deck Drainage
Provide appropriate deck drainage (e.g., SCUPPER or sloping deck to DECK DRAIN) under all bucket filling stations to eliminate any pooling of water on the decks below the bucket filling station.
# Crew Public Toilet Rooms for Food Service Employees
# Location and Number
Install at least one employee toilet room in close proximity to the work area of all FOOD PREPARATION AREAS (beverage-only service bars are excluded). Provide one toilet per 25 employees, and provide separate facilities for males and females if more than 25 employees are assigned to a FOOD PREPARATION AREA, excluding wait staff. This refers to the shift with the maximum number of food employees, excluding wait staff. Urinals may be installed, but do not count toward the toilet/employee ratio.
# Main Galleys and Crew Galleys
For main galleys and crew galleys, locate toilet rooms inside the FOOD PREPARATION AREA or in a passageway immediately outside the area. If a main galley has multiple levels and there is stairwell access between the galleys, toilet rooms may be located near the stairwell within one deck above or below.
# Other Food Service Outlets
For other FOOD SERVICE outlets (lido galley, specialty galley, etc.), do not locate toilet rooms more than two decks above or below within the distance of a fire zone; if on the same deck, locate them no more than one fire zone away (i.e., within the same fire zone or an adjacent fire zone). If more than one FOOD SERVICE outlet is located on the same deck, the toilet room may be located on the same deck between the outlets and within two fire zones of each outlet.
# Provisions
For preparation rooms located in provisions areas, use the distance requirement described in 7.3.1.2 to locate toilet rooms.
# Ventilation and Handwashing
Install exhaust ventilation and handwashing facilities in each toilet room. Air hand dryers are not permitted in these toilet rooms. Install a permanent sign in English, and other languages where appropriate, stating the exact wording: "WASH HANDS AFTER USING THE TOILET." Locate this sign on the bulkhead adjacent to the main toilet room door or on the main door inside the toilet room.
# Hands-free Exit
Ensure hands-free exit for toilet rooms, as described in section 36.1.1. Ensure handwashing facilities have sanitary faucet handles as in section 7.1.8.
# Doors
Install tight-fitting, self-closing doors.
# Decks
Construct decks of hard, durable materials and provide COVING at the bulkhead-deck juncture.
# Deckheads and Bulkheads
Install EASILY CLEANABLE deckheads and bulkheads.
# Equipment Placement and Mounting
# Seal
Seal counter-mounted equipment that is not PORTABLE to the bulkhead, tabletop, countertop, or adjacent equipment. If the equipment is not sealed, provide sufficient, unobstructed space for cleaning around, behind, and between fixed equipment. The space provided depends on the distance from either a position directly in front of, or from either side of, the equipment to the farthest point requiring cleaning, as described in sections 8.1.1 through 8.1.4.
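Taken together, the four distance rules in sections 8.1.1 through 8.1.4 below form a step function from cleaning distance to required unobstructed space. A minimal sketch (illustrative only; the handling of the exact 600/1,200/1,800-mm boundary values is our reading, since the text says only "between"):

```python
def required_clearance_mm(cleaning_distance_mm: float) -> int:
    """Unobstructed cleaning space required for unsealed fixed equipment,
    per sections 8.1.1-8.1.4 (boundary cases assigned to the larger band)."""
    if cleaning_distance_mm < 600:
        return 150
    if cleaning_distance_mm < 1200:
        return 200
    if cleaning_distance_mm <= 1800:
        return 300
    return 460

print(required_clearance_mm(500))   # 150
print(required_clearance_mm(900))   # 200
print(required_clearance_mm(2000))  # 460
```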
# 8.1.1 Cleaning Distance Less Than 600 Millimeters (24 Inches)
A distance to be cleaned of less than 600 millimeters (24 inches) requires an unobstructed space of 150 millimeters (6 inches).
# 8.1.2 Cleaning Distance Between 600 Millimeters (24 Inches) and 1,200 Millimeters (48 Inches)
A distance to be cleaned between 600 millimeters (24 inches) and 1,200 millimeters (48 inches) requires an unobstructed space of 200 millimeters (8 inches).
# 8.1.3 Cleaning Distance Between 1,200 Millimeters (48 Inches) and 1,800 Millimeters (72 Inches)
A distance to be cleaned between 1,200 millimeters (48 inches) and 1,800 millimeters (72 inches) requires an unobstructed space of 300 millimeters (12 inches).
# 8.1.4 Cleaning Distance Greater Than 1,800 Millimeters (72 Inches)
A distance to be cleaned greater than 1,800 millimeters (72 inches) requires an unobstructed space of 460 millimeters (18 inches).
# Cleaning Distance Including a Corner
If the unobstructed cleaning space includes a corner, treat the cleaning distance as two separate sections. Determine the farther space behind the equipment separately according to sections 8.1.1 through 8.1.4. Determine the closer space beside the equipment by adding the closer and farther cleaning distances together and applying the corresponding space from sections 8.1.1 through 8.1.4. The closer space must always be a minimum of 300 millimeters (12 inches). See Figure 8d.
# Seal or Elevate
Seal equipment that is not PORTABLE to the deck or elevate it on legs that provide at least a 150-millimeter (6-inch) clearance between the deck and the equipment. If no part of the equipment is more than 150 millimeters (6 inches) from the point of cleaning access, the clearance space may be only 100 millimeters (4 inches). This includes vending and dispensing machines in FOOD AREAS, including mess rooms. Exceptions to the equipment requirements may be granted if there are no barriers to cleaning (e.g., equipment such as waste handling systems and warewashing machines with pipelines, motors, and cables) where a 150-millimeter (6-inch) clearance from the deck may not be practical.
# Deck Mounting
Continuously weld all equipment that is not PORTABLE to stainless steel pads or plates on the deck. Ensure the welds have SMOOTH edges, rounded corners, and no GAPS.
# Adhesives
Attach deck-mounted equipment as an integral part of the deck surface with glue, epoxy, or other durable, APPROVED adhesive product. Ensure that the attached surfaces are SMOOTH and EASILY CLEANABLE.
# Deckhead Clearance
Provide a minimum of 150 millimeters (6 inches) between equipment and deckheads. If this clearance cannot be achieved, extend the equipment to the deckhead panels and seal appropriately.
# Foundation or Coaming
Provide a sealed-type foundation or coaming for equipment not mounted on legs. Do not allow equipment to overhang the foundation or coaming by more than 100 millimeters (4 inches). Completely seal any overhanging equipment along the bottom (Figure 9). Mount equipment that is on a foundation or coaming at least 100 millimeters (4 inches) above the finished deck. Use cement, hard SEALANT, or continuous weld to seal equipment to the foundation or coaming.
# Counter-Mounted Equipment
Seal counter-mounted equipment, unless PORTABLE, to the countertop or mount it on legs.
# Leg Length
The length of the legs depends on the horizontal distance from the point of cleaning access.
# Food-Contact Surfaces
Attach all FOOD-CONTACT SURFACES or connections from FOOD-CONTACT SURFACES to adjacent splash zones to ensure a seamless COVED corner.
Reinforce all bulkheads, deckheads, or decks receiving such attachments.
# Fasteners
Use low-profile, nonslotted, NONCORRODING, and easy-to-clean fasteners on FOOD-CONTACT SURFACES and in splash zones. The use of exposed slotted screws, Phillips head screws, or pop rivets in these areas is prohibited.
# Nonfood-contact Surfaces
# Seal
Seal equipment SEAMS with an appropriate SEALANT (see SEAM definition). Avoid excessive use of SEALANT. Use stainless steel profile strips on surfaces exposed to extreme temperatures (e.g., freezers, cook tops, grills, and fryers) or for GAPS greater than 3 millimeters (1/8 inch). Do not use SEALANTS to close GAPS.
# Construction Materials
Use stainless steel or other durable, NONCORRODING, and EASILY CLEANABLE rigid or flexible material in the construction of drain lines. Do not use ribbed, braided, or woven materials in areas subject to splash or soiling unless coated with a SMOOTH, durable, and EASILY CLEANABLE material.
# Size
Size drain lines appropriately, with a minimum interior diameter of 25 millimeters (1 inch) for custom-built equipment.
# Walk-in Refrigerators and Freezers
Slope walk-in refrigerator and freezer evaporator drain lines and extend them through the bulkhead or deck.
# Evaporator Drain Lines
Direct walk-in refrigerator and freezer evaporator drain lines through an ACCESSIBLE AIR-BREAK to a deck SCUPPER or drain below the deck level or to a SCUPPER outside the unit.
# Deck Drains and Scuppers
Direct drain lines from DECK DRAINS and SCUPPERS in walk-in refrigerator and freezer units through an indirect connection to the wastewater system.
# Horizontal Distance
Install drain lines to minimize the horizontal distance from the source of the drainage to the discharge.
# Vertical Distance
Install horizontal drain lines at least 100 millimeters (4 inches) above the deck and slope them to drain.
# Food Equipment Drain Lines
All drain lines (except condensate drain lines) from hood washing systems, cold-top tables, bains-marie, dipper wells, UTILITY SINKS, and warewashing sinks or machines must meet the following criteria:
# Length
Lines must be less than 1,000 millimeters (40 inches) in length and free of sharp angles or corners if designed to be cleaned in place by a brush.
# Cleaning
Lines must be READILY REMOVABLE for cleaning if they are longer than 1,000 millimeters (40 inches).
# Extend Vertically
Extend fixed equipment drain lines vertically to a SCUPPER or DECK DRAIN when possible. If not possible, keep the horizontal distance of the line to a minimum.
# Air-break
Handwashing sinks, mop sinks, and drinking fountains are not required to drain through an AIR-BREAK.
# 13.0 Electrical Connections, Pipelines, Service Lines, and Attached Equipment
# Encase
Encase electrical wiring from permanently installed equipment in durable and EASILY CLEANABLE material. Do not use ribbed, braided, or woven stainless steel electrical conduit where it is subject to splash or soiling unless it is encased in EASILY CLEANABLE plastic or a similar EASILY CLEANABLE material.
# Install or Fasten
For equipment that is not permanently mounted, install or fasten service lines in a manner that prevents the lines from contacting decks or countertops.
# Mounted Equipment
Tightly seal bulkhead- or deckhead-mounted equipment (phones, speakers, electrical control panels, outlet boxes, etc.) with the bulkhead or deckhead panels. Do not locate such equipment in areas exposed to food splash.
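The drain line criteria above combine a diameter floor with a length-dependent cleanability requirement. A minimal compliance-check sketch (the function and parameter names are ours, not part of the guidelines):

```python
def drain_line_compliant(length_mm: float,
                         interior_diameter_mm: float,
                         readily_removable: bool,
                         free_of_sharp_angles: bool) -> bool:
    """Checks the criteria above: a 25-mm (1-inch) minimum interior diameter
    for custom-built equipment; lines under 1,000 mm may be cleaned in place
    by brush if free of sharp angles or corners; longer lines must be
    READILY REMOVABLE for cleaning."""
    if interior_diameter_mm < 25:
        return False
    if length_mm < 1000:
        return free_of_sharp_angles or readily_removable
    return readily_removable

print(drain_line_compliant(800, 32, False, True))   # True: short and brush-cleanable
print(drain_line_compliant(1500, 32, False, True))  # False: long and not removable
```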
# Seal Penetrations
Tightly seal any areas where electrical lines, steam or water pipelines, etc., penetrate the panels or tiles of the deck, bulkhead, or deckhead, including inside technical spaces located above or below equipment or work surfaces. Seal any openings or voids around the electrical lines or the steam or water pipelines and the surrounding conduit or pipelines.
# Enclose Pipelines
Enclose steam and water pipelines to kettles and boilers in stainless steel cabinets or position the pipelines behind bulkhead panels. Minimize the number of exposed pipelines. Cover any exposed insulated pipelines with stainless steel or other durable, EASILY CLEANABLE material.
# Hood Systems
# Warewashing
Install canopy exhaust hood or direct duct exhaust systems over warewashing equipment (except undercounter warewashing machines) and over three-compartment sinks in pot wash areas where hot water is used for sanitizing.
# Direct Duct Exhaust
Directly connect warewashing machines that have a direct duct exhaust to the hood exhaust trunk.
# Overhang
Provide canopy exhaust hoods over warewashing equipment or three-compartment sinks with a minimum 150-millimeter (6-inch) overhang from the edge of equipment to capture excess steam and heat and prevent condensate from collecting on surfaces.
# Cleanout Ports
Install cleanout ports in the direct exhaust ducts of the ventilation systems between the top of the warewashing machine and the hood system or deckhead.
# Drip Trays
Provide ACCESSIBLE and REMOVABLE condensate DRIP TRAYS in warewashing machine ventilation ducts.
# Cooking and Hot Holding Equipment
# Cooking Equipment
Install hood or canopy systems above cooking equipment in accordance with Safety of Life at Sea (SOLAS) requirements to ensure that they remove excess steam and grease-laden vapors and prevent condensate from collecting on surfaces.
# Hot Holding Equipment
Install a hood or canopy system or dedicated local exhaust ventilation directly above bains-marie, steam tables, or other open hot holding equipment to control excess heat and steam and prevent condensate from collecting on surfaces.
# Countertop and Portable Equipment
Install a hood or canopy system or dedicated local extraction when SOLAS requirements do not specify an exhaust system for countertop cooking appliances or where PORTABLE appliances are used. The exhaust system must remove excess steam and grease-laden vapors and prevent collection of the cooking byproducts or condensate on surfaces.
For buffet service areas where FOOD PREPARATION occurs, galley standards for construction must be followed (see section 16.0). FOOD PREPARATION AREAS include areas where utensils are used to mix and prepare foods (e.g., salad, sandwich, sushi, pizza, meat carving) and where food is prepared and cooked (e.g., grills, ovens, fryers, griddles, skillets, waffle makers). If such facilities are installed along a buffet counter, they will be evaluated in the plan review. For example, a station specific for salads, sushi, deli, or a pizzeria is a preparation area, as are locations where foods are prepared completely (e.g., waffle batter poured into a griddle, cooked, plated, and served). However, a one-person carving station is not a preparation area. Omelet stations will be evaluated on a case-by-case basis.
# Decks
# Buffet Lines
Install hard, durable, nonabsorbent, nonskid decking at all buffet lines. The decking must be at least 1,000 millimeters (40 inches) in width, measured from the edge of the service counter or, if present, from the outside edge of the tray rail.
Carpet, vinyl, and linoleum deck materials are not acceptable. # Waiter Stations Install hard, durable, nonabsorbent decks (e.g., tile, sealed granite, or marble) that extend at least 600 millimeters (24 inches) from the edge of the working side(s) of the waiter stations. The sides of stations that have a splash shield of 150 millimeters (6 inches) or higher are not considered working sides. Carpet, vinyl, and linoleum deck materials are not acceptable. # Technical Spaces Construct decks in technical spaces of hard, durable, nonabsorbent materials (e.g., tiles, epoxy resin, or stainless steel) and provide COVING. Do not use painted steel or concrete decking. # Worker Side of Buffets and Bars Install durable COVING as an integral part of the deck/bulkhead and deck/cabinet foundation juncture on the worker-only side of the deck/buffet and deck/bar. # Consumer Side of Buffets and Waiter Stations Install durable COVING at the consumer side of buffet service counters, counters shared with worker activities (islands), and waiter stations. Install durable COVING at deck/bulkhead junctures located within one meter of the waiter stations. Consumer sides of bars are excluded. See Figures 10a and 10b. # Areas for Buffet Service and Food Preparation For buffet service areas where FOOD PREPARATION occurs, galley standards for construction must be followed (see section 16.0). # Food Display Protection # Effective Means Provide effective means to protect food (e.g., sneeze guards, display cases, raised shield) in all areas where food is on display. This includes locations where food is being displayed during preparation (e.g., carving stations, induction cooking stations, sushi, deli). This excludes teppanyaki style cooking. # Solid Vertical Shield Without Tray Rail For a solid vertical shield without a tray rail, the minimum height from the deck to the top edge of the shield must be 140 centimeters. # Solid Vertical Shield With Tray Rail For a solid vertical shield with a tray rail, for every 3 centimeters that the tray rail extends from the buffet, the height of the shield may be lowered by 1 centimeter, but the minimum height from the deck to the top edge of the shield must be 120 centimeters. # Consumer Seating at Counter For designs where consumers are seated at the counter and workers are preparing food on the other side of the sneeze guard, consideration must be given to the height of the preparation counter, consumer counter, and consumer seat. VSP will evaluate these designs and establish the shield height during the plan review. # Sneeze Guard Criteria # Portable or Built-in Sneeze guards may be temporary (PORTABLE) or built-in and integral parts of display tables, bains marie, or cold-top tables. # Panel Material Sneeze guard panels must be durable plastic or glass that is SMOOTH and EASILY CLEANABLE. Design panels to be cleaned in place or, if REMOVABLE for cleaning, use sections that are manageable in weight and length. Sneeze guard panels must be transparent and designed to minimize obstruction of the customer's view of the food. To protect against chipping, provide edge guards for glass panels. Sneeze guards for preparation-only protection do not need to be transparent. # Spaces or Openings If there are spaces or openings greater than 25 millimeters (1 inch) along the length of the sneeze guard (such as between two pieces of the sneeze guard), ensure that there are no food wells, bains marie, etc., under the spaces or openings. 
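The shield-height rule above (140 cm with no tray rail, lowered 1 cm per 3 cm of rail extension, never below 120 cm) and the X + Y geometry described in the Sneeze Guard Design section below lend themselves to a short check. A minimal sketch (illustrative only; the function names are ours):

```python
def min_shield_height_cm(tray_rail_extension_cm: float = 0.0) -> float:
    """Minimum deck-to-top-edge height for a solid vertical shield: 140 cm
    without a tray rail, lowered 1 cm per 3 cm of rail extension, floor 120 cm."""
    return max(140.0 - tray_rail_extension_cm / 3.0, 120.0)

def planes_protected(x_horizontal_mm: float, y_vertical_mm: float) -> bool:
    """Figure 10 rule from the section below: the protected horizontal plane (X)
    plus the protected vertical plane (Y) must total at least 457 mm (18 inches);
    either X or Y may be 0."""
    return x_horizontal_mm + y_vertical_mm >= 457

print(min_shield_height_cm(30.0))      # 130.0
print(min_shield_height_cm(90.0))      # 120.0 (floor applies)
print(planes_protected(250.0, 250.0))  # True
```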
# Position Position sneeze guards so that the panels intercept a line between the average consumer's mouth and the displayed foods. Take into account factors such as the height of the FOOD DISPLAY counter, the presence or absence of a tray rail, and the distance between the edge of the display counter and the actual placement of the food (Figure 10). # Sneeze Guard Design If the buffet is built to the calculations in Figure 11: The maximum vertical distance between a counter top and the bottom leading edge of a sneeze guard must be 356 millimeters (14 inches). The bottom leading edge of the sneeze guard must extend a minimum horizontal distance of 178 mm (7 inches) beyond the front inside edge of a food well. The sum of a sneeze guard's protected horizontal plane (X) and its protected vertical plane (Y) must equal a minimum of 457 millimeters (18 inches) (Figure 10). Either X or Y may equal 0. Install side protection for sneeze guards if the distance between exposed food and where people are expected to stand is less than 1 meter (40 inches). See Figures 12-15 for additional examples of sneeze guards. # Tray Rail Surfaces Use tray rail surfaces that are sealed, COVED, or have an open design. These surfaces must also be EASILY CLEANABLE in accordance with guidelines for food splash zones. # Food Pan Length Consideration should be given to the length of the food pans in relation to the distance a consumer must reach to obtain food. # Soup Wells If soups, oatmeal, and similar foods are to be self-served, equipment must be able to be placed under a sneeze guard. # Beverage Delivery System # Backflow Prevention Device Install a BACKFLOW PREVENTION DEVICE that is APPROVED for use on carbonation systems (e.g., multiflow beverage dispensing systems). Install the device before the carbonator and downstream of brass or copper fittings in the POTABLE WATER supply line. A second device may be required if noncarbonated water is supplied to a multiflow hose dispensing gun. # Encase Supply Lines Encase supply lines to the dispensing guns in a single tube. If the tube penetrates through any bulkhead or countertop, seal the penetration with a grommet. # Clean-in-place System For bulk beverage delivery systems, incorporate fittings and connections for a clean-in-place system that can flush and sanitize the entire interior of the dispensing lines in accordance with the manufacturers' instructions. # 17.5 Passenger Self-Service Buffet Handwashing Stations # Number Provide one handwashing station per 100-passenger seating or fraction thereof. Stations should be equally distributed between the major passenger entry points to the buffet area and must be separate from a toilet room. # Passenger Entries Provide handwashing stations at each minor passenger entry to the main buffet areas proportional to the passenger flow, with at least one per entry. These handwash stations can be counted towards the requirement of one per 100 passengers. # Self-service Stations Outside the Main Buffet Provide at least one handwashing station at the passenger entrance of each self-service station outside of the main buffet. Beverage stations are excluded. # Equipment and Supplies The handwashing station must include a handwash sink, soap dispenser, and single-use paper towel dispenser. Electric hand dryers can be installed in addition to paper towel dispensers. Waste receptacles must be provided in close proximity to the handwash sink and sized to accommodate the quantity of paper towel waste generated. 
The handwashing station may be decorative but must be nonabsorbent, durable, and EASILY CLEANABLE.
# Automatic Handwashing System
An automatic handwashing system in lieu of a handwash sink is acceptable.
# Sign
Each handwashing station must have a sign advising passengers to wash hands before eating. A pictogram can be used in lieu of words on the sign.
# Location
Stations can be installed just outside of the entry. Position the handwashing stations along the passenger flow to the buffets.
# Lighting
Provide a minimum of 110 lux lighting at the handwash stations.
# Bar Counter Tops
# Access
Construct bar counter tops to provide access for workers without requiring them to stoop or crawl to reach the bar area from pantries or service areas.
# Warewashing
# Prewash Hoses
Provide rinse hoses for prewashing (not required but recommended in bar and deck pantries). If a sink is to be used for prerinsing, provide a REMOVABLE strainer.
# Splash Panel
Install a splash panel if a clean utensil/glass storage rack or preparation counter is within an unobstructed 2 meters (6½ feet) of a prewash spray hose. This does not include the area behind the worker.
# Food Waste Disposal
Provide space for trash cans, garbage grinders, or FOOD WASTE SYSTEMS. Grinders are optional in pantries and bars.
# Trough
Provide a food waste trough that extends the full length of soiled landing tables with FOOD WASTE SYSTEMS.
# Seal
Seal the back edge of the soiled landing table to the bulkhead or provide a minimum clearance between the table and the bulkhead according to section 8.0.
# Design
Design soiled landing tables to drain waste liquids and prevent contamination of adjacent clean surfaces.
# Drain and Slope
Provide across-the-counter gutters with drains and slope the clean landing tables to the gutters at the exit from the warewashing machines. If the first gutter does not effectively remove pooled water, install additional gutter(s) and drain line(s). Minimize the length of drain lines and, when possible, direct them in a straight line to the deck SCUPPER.
# Space for Cleaning
Provide sufficient space for cleaning around and behind equipment (e.g., FOOD WASTE SYSTEMS and warewashing machines). Refer to section 8.0 for spacing requirements.
# Enclose Wiring
Enclose FOOD WASTE SYSTEM wiring in a durable and easy-to-clean stainless steel or nonmetallic watertight conduit. Install all warewashing machine components at least 150 millimeters (6 inches) above the deck, except as noted in section 8.4.
# Splash Panels
Construct REMOVABLE splash panels of stainless steel to protect the FOOD WASTE SYSTEM and technical areas.
# Materials
Construct grinder cones, FOOD WASTE SYSTEM tables, and dish-landing tables from stainless steel with continuous welding. Construct platforms for supporting warewashing equipment from stainless steel.
# Size
Size warewashing machines for their intended use and install them according to the manufacturer's recommendations.
# Alarm
Equip warewashing machines with an audible or visual alarm that indicates if the sanitizing temperature or chemical sanitizer level drops below the levels stated on the machine data plate.
# Data Plate
Affix the data plate so that the information is easy to read by the operator. The data plate must include the following information as provided by the manufacturer of the warewash machine:
# Water Temperatures
Temperatures required for washing, rinsing (if applicable), and sanitizing.
# Water Pressure
Pressure required for the fresh water sanitizing rinse, unless the machine is designed to use only a pumped sanitizing rinse.
# Conveyor Speed or Cycle Time
Conveyor speed in meters or feet per minute or minimum transit time for belt conveyor machines; minimum transit time for rack conveyor machines; or cycle time for stationary rack machines.
# Chemical Concentration
Chemical concentration (if chemical sanitizers are used).
# Manuals and Schematics
Warewash machine operating manuals and schematics of the internal BACKFLOW PREVENTION DEVICES must be provided.
# Pot and Utensil Washing
Provide pot and utensil washing facilities as listed in section 6.2.2.
# Three-compartment Sinks
Correctly size three-compartment warewashing and potwashing sinks for their intended use. Use sinks that are large enough to submerge the largest piece of equipment used in the area that is served. Use sinks that have COVED, continuously welded, integral internal corners.
# Prevent Excessive Contamination
Install one of the following arrangements to prevent excessive contamination of rinse water with wash water splash:
• Gutter and drain: An across-the-counter gutter with a drain that divides the compartments. The gutter should extend the entire distance from the front edge of the counter to the backsplash.
• Splash shield: A splash shield at least 25 millimeters (1 inch) above the flood level rim of the sink between the compartments. The splash shield should extend the entire distance from the front edge of the counter to the backsplash.
• Overflow drain: An overflow drain in the wash compartment 100 millimeters (4 inches) below the flood level.
# Hot Water Sanitizing Sinks
Equip hot water sanitizing sinks with an easy-to-read TEMPERATURE-MEASURING DEVICE; a utensil/equipment retrieval system (e.g., long-handled stainless steel hook or other retrieval system); and a jacketed or coiled steam supply with a temperature control valve, or an electric heating system.
# Shelving
Provide sufficient shelving for storage of soiled and clean ware. Use open round tubular shelving or racks. Design overhead shelves to drain away from clean surfaces. Sufficient space must be determined by the initial sizing of the warewash area, as based on the profile or reference size from an existing vessel of the same cruise line per section 6.1.
# Ventilation
For ventilation requirements, see section 14.0.
# Lighting
# Work Surface
Provide a minimum of 220 lux (20 foot-candles) of light at the work surface level in all FOOD PREPARATION, FOOD SERVICE, and warewashing areas when all equipment is installed. Provide 220 lux (20 foot-candles) of lighting for equipment storage, garbage and food lifts, garbage rooms, and toilet rooms, measured at 760 millimeters (30 inches) above the deck.
# Behind and Around Equipment
Provide a minimum light level of 110 lux (10 foot-candles) behind and around equipment (e.g., ice machines, combination ovens, beverage dispensers, etc.), as measured at the counter surface or at a distance of 760 millimeters (30 inches) above the deck.
# Countertops
Provide a minimum light level of 220 lux (20 foot-candles) at countertops (e.g., beverage lines, etc.).
# Deckhead-mounted Fixtures
For effective illumination, place the deckhead-mounted light fixtures above the work surfaces and position them in an "L" pattern rather than a straight-line pattern.
# Installation
Install light fixtures tightly against the bulkhead and deckhead panels.
Completely seal electrical penetrations to permit easy cleaning around the fixtures.
# Light Shields
Use shatter-resistant and REMOVABLE light shields for light fixtures. Completely enclose the entire light bulb or fluorescent light tube(s).
# Provision Rooms
Provide lighting levels of at least 220 lux (20 foot-candles) in provision rooms, measured at 760 millimeters (30 inches) above the deck while the rooms are empty. During normal operations when foods are stored in the rooms, provide lighting levels of at least 110 lux (10 foot-candles), measured at a distance of 760 millimeters (30 inches) above the deck.
# Bars and Waiter Stations
In bars and over dining room waiter stations designed for lowered lighting during normal operations, provide lighting that can be raised to 220 lux (20 foot-candles) during cleaning operations, as measured at 760 millimeters (30 inches) above the deck. Provide a minimum light level of 110 lux (10 foot-candles) at handwash stations at a bar, and ensure this level can be maintained at all times.
# Light Bulbs
Use shielded, coated, or otherwise shatter-resistant light bulbs in areas where there is exposed food; clean equipment, utensils, and linens; or unwrapped single-service and single-use articles. This includes lights above waiter stations.
# Heat Lamps
Use shields that surround and extend beyond bulbs on infrared or other heat lamps to protect against breakage. Allow only the face of the bulb to be exposed.
# Track or Recessed Lights
Decorative track or recessed deckhead-mounted lights above bar countertops, buffets, and other similar areas may be mounted on or recessed within the deckhead panels without being shielded. However, install specially coated, shatter-resistant bulbs in these light fixtures.
# Cleaning Materials, Filters, and Drinking Fountains
# Facilities and Lockers for Cleaning Materials
# Racks
Provide bulkhead-mounted racks for brooms and mops or provide sufficient space and hanging brackets within CLEANING LOCKERS. Locate bulkhead-mounted racks outside of FOOD STORAGE, PREPARATION, or SERVICE areas. These racks may be located on the soiled side of warewash areas.
# Stainless Steel
Provide stainless steel vented lockers, with COVED junctures, for storing buckets, detergents, sanitizers, cloths, and other wet items.
# Size and Location
Size and locate the lockers according to the needs of the vessel and make access convenient.
# Multiple-level Galleys
If CLEANING LOCKERS are not provided in each of the preparation areas, provide a single CLEANING ROOM for each deck of multiple-level galleys. Construct rooms used for cleaning materials in accordance with section 16.0.
# Mop Cleaning
Provide facilities, separate from food facilities, equipped with a mop sink and ADEQUATE DECK DRAIN or a pressure washing system for cleaning mops and buckets. The mop sink may be located on the soiled side of warewash areas.
# Black and Gray Water Systems
# Drain Lines
Limit the installation of drain lines that carry BLACK WATER or other liquid waste directly overhead or horizontally through spaces used for FOOD AREAS. This includes areas for washing or storage of utensils and equipment, such as in bars and deck pantries and over buffet counters. Sleeve-weld or butt-weld steel pipe and heat fuse or chemically weld plastic pipe.
# Piping
Do not use push-fit or press-fit piping over these areas.
For SCUPPER lines, factory-assembled transition fittings for steel-to-plastic pipes are allowed when manufactured per ASTM F1973 or an equivalent standard.
# Drainage Systems
Ensure that BLACK and GRAY WATER drainage systems from cabins, FOOD AREAS, and public spaces are designed and installed to prevent waste back-up and odor or gas emission into these areas.
# Venting
Vent BLACK WATER holding tanks to the outside of the vessel and ensure that vented gases do not enter the vessel through any air intakes.
# Independent
Construct BLACK WATER holding tank vents so that they are independent of all other tanks.
# General Hygiene
Construct handwashing stations in the following areas according to section 7.1.
# Wastewater Areas
Install at least one handwashing station in each main wastewater treatment, processing, and storage area.
# Laundry Areas
Install at least one handwashing station at soiled linen handling areas and at the main exits of the main laundry. Vessel owners will provide locations during the plan review.
# Housekeeping Areas
Install handwashing stations in housekeeping areas as described in section 35.1. Provide each handwashing station with a soap dispenser, paper towel dispenser, waste receptacle, and sign that states "wash hands often," "wash hands frequently," or similar wording in English and in other languages, where appropriate.
# Potable Water System
# Striping
# Potable Water Lines
Stripe or paint POTABLE WATER lines either in accordance with ISO 14726 (blue/green/blue) or blue only.
# Storage and Production Capacity for Potable Water
# Minimum Storage Capacity
Provide a minimum of 2 days' storage capacity, based on 120 liters (30 gallons) of water per day per person for the maximum capacity of crew and passengers on the vessel.
# Production Capacity
Provide POTABLE WATER production capacity of 120 liters (30 gallons) per day per person for the maximum capacity of crew and passengers on the vessel.
# 22.7 Potable Water Storage Tanks
# 22.7.1 General Requirements
# Independent of Vessel Shell
Ensure that POTABLE WATER storage tanks are independent of the shell of the vessel.
# No Common Wall
Ensure that POTABLE WATER storage tanks do not share a common wall with other tanks containing nonpotable water or other liquids.
# Cofferdam
Provide a 450-millimeter (18-inch) cofferdam above and between POTABLE WATER TANKS and tanks that are not for storage of POTABLE WATER, and between POTABLE WATER TANKS and the shell. Skin or double-bottom tanks are not allowed for POTABLE WATER storage.
# Deck Top
If the deck is the top of POTABLE WATER TANKS, these tanks must be identified during the plan review. The shipyard will provide the owners a written declaration of the tanks involved and the drawings of the areas that include these tanks.
# Tanks with Nonpotable Liquid
Do not install tanks containing nonpotable liquid directly over POTABLE WATER TANKS.
# Coatings
Use APPROVED POTABLE WATER TANK coatings. Follow all of the manufacturer's recommendations for applying, drying, and curing the tank coatings. Provide the following for the tank coatings:
• Written documentation of the approval from the certification organization (independent of the coating manufacturer).
• Manufacturer's recommendations for applying, drying, and curing.
• Written documentation that the manufacturer's recommendations have been followed for applying, drying, and curing.
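The storage and production requirements in the capacity section above are straightforward multiplications. A minimal sketch (the function name and the example complement are ours, not part of the guidelines):

```python
def potable_water_requirements_liters(max_persons: int) -> dict:
    """Minimum daily production (120 L/person/day) and minimum storage
    (2 days of production) for the vessel's maximum crew-plus-passenger count."""
    production_per_day = max_persons * 120
    return {
        "production_per_day_l": production_per_day,
        "minimum_storage_l": 2 * production_per_day,
    }

# Example: 3,000 passengers plus 1,200 crew.
print(potable_water_requirements_liters(4200))
# {'production_per_day_l': 504000, 'minimum_storage_l': 1008000}
```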
# Items That Penetrate Tank
Coat all items that penetrate the tank (e.g., bolts, pipes, pipe flanges) with the same product used for the tank's interior.
# Super-chlorination
Design tanks to be super-chlorinated one tank at a time.
# Lines for Nonpotable Liquids
Ensure that lines for nonpotable liquids do not pass through POTABLE WATER TANKS.
# Nonpotable Lines Above Potable Water Tanks
Minimize the use of nonpotable lines above POTABLE WATER TANKS. If nonpotable water lines are installed, do not use mechanical couplings or push-fit or press-fit piping on lines above tanks. For SCUPPER lines, factory-assembled transition fittings for steel-to-plastic pipes are allowed when manufactured per ASTM F1973 or an equivalent standard.
# Coaming
If coaming is present along the edges or top of the tank, provide slots along the coaming to allow leaking liquids to run off and be detected.
# Welded Pipes
Treat welded pipes over the POTABLE WATER storage tanks to make them corrosion resistant.
# Lines Inside Potable Water Tanks
Treat all POTABLE WATER lines inside POTABLE WATER TANKS to make them jointless and NONCORRODING.
# Label Tanks
Label each POTABLE WATER TANK on its side, where clearly visible, with a number and the exact wording "POTABLE WATER" in letters a minimum of 13 millimeters (1/2 inch) high.
# Sample Cock
Install at least one sample cock located at least 450 millimeters (18 inches) above the deck plating on each tank. The sample cock must be easily ACCESSIBLE. Point sample cocks down and identify them with the appropriate tank number.
# Storage Tank Access Hatch
# Installation
Install an access hatch for entry on the sides of POTABLE WATER TANKS.
# Storage Tank Water Level
# Automatic
Provide an automatic method for determining the water level of POTABLE WATER TANKS. Visual sight glasses are acceptable.
# Storage Tank Vents
# Location
Ensure that air-relief vents end at least 1,000 millimeters (40 inches) above the maximum load level of the vessel. Make the cross-sectional area of the vent equal to or greater than that of the filling line to the tank. Position the end of the vent so that its opening faces down or is otherwise protected, and install a 16-mesh corrosion-resistant screen.
# Single Pipe
A single pipe may be used as a combination vent and overflow.
# Vent Connections
Do not connect the vent of a POTABLE WATER TANK to the vent of a tank that is not a POTABLE WATER TANK.
# Storage Tank Drains
# Design
Design the tanks to drain completely.
# Drain Opening
Provide a drain opening that is at least 100 millimeters (4 inches) in diameter and preferably matches the diameter of the inlet pipe.
# Suction Pump
If drained by a suction pump, provide a sump and install the pump suction port in the bottom of the sump. Install separate pumps and piping not connected to the POTABLE WATER distribution system for draining tanks (Figure 16).
# Backflow Prevention
Provide BACKFLOW prevention at POTABLE WATER connections to the following:
• RECREATIONAL WATER systems.
• Drinking fountains.
• Emergency showers.
• Eye wash stations.
• FOOD AREAS.
• Handwash sinks.
• HVAC fan rooms.
• Medical facilities.
Utility sinks for engine/mechanical spaces are excluded.
# Paint or Stripe
Paint or stripe POTABLE WATER piping and fittings either blue only or in accordance with ISO 14726 at 5-meter (15-foot) intervals and on each side of partitions, decks, and bulkheads, except where the decor would be marred by such markings. This includes POTABLE WATER supply lines in technical lockers.
# Steam Generation for Food Areas
Use POTABLE WATER to generate steam applied directly to food and FOOD-CONTACT SURFACES.
Generate the steam locally from FOOD SERVICE equipment designed for this purpose (e.g., vegetable steamers, combination ovens, etc.).
# Nonpotable Water
Steam generated by nonpotable water may be applied indirectly to food or food equipment if routed through coils, tubes, or separate chambers.
# Disinfection of the Potable Water System
# Before Placed in Service
Clean, disinfect, and flush POTABLE WATER TANKS and all parts of the POTABLE WATER system before the system is placed in service.
# Free Chlorine Solution
Ensure that DISINFECTION is accomplished by using a 50-MG/L (50-ppm) free chlorine solution for a minimum of 4 hours. Ensure that only POTABLE WATER is used for these procedures. Prior VSP agreement is required if alternative APPROVED DISINFECTION practices are used.
# Documentation
Provide written documentation showing that a representative sampling was conducted at various PLUMBING FIXTURES on each deck throughout the vessel (forward, aft, port, and starboard) to ensure that the 50-MG/L (50-ppm) free chlorine residual has circulated throughout the distribution system. Provide all manufacturers' literature for installation, operation, and maintenance.
# pH Adjustment
Provide automatic pH adjustment equipment for water bunkering and production. Install an analyzer, controller, and dosing pumps that are designed to accommodate changes in flow rates.
# Distribution
# Sample Point
Provide an analyzer-controlled, automatic halogenation system. Install the analyzer probe sample point at least 3 meters (10 feet) downstream of the HALOGEN injection point. If a static mixer is used in lieu of the 3-meter (10-foot) distance, see section 22.14.2.7 for static mixer requirements.
# Free Halogen Probes
Use probes to measure free HALOGEN and link them to the analyzer/controller and chemical dosing pumps.
# Backup Halogenation Pump
Provide a back-up halogenation pump with an automatic switchover that begins pumping HALOGEN when the primary (in-use) pump fails or cannot meet the halogenation demand.
# Probe/Sample Location
Locate the HALOGEN analyzer probe and/or sample cock at a distant point in each distribution system loop where significant water flow exists.
# Alarm
Provide an audible alarm in a continually occupied watch station (e.g., the engine control room or bridge) to indicate low or high free HALOGEN readings at each distant-point analyzer.
# Backflow Prevention
Provide POTABLE WATER taps with appropriate BACKFLOW prevention at HALOGEN supply tanks.
# Sample Cock Location
# Double-wall Construction
Double-wall construction between the SANITARY SEAWATER or POTABLE WATER and nonpotable liquids, with both of the following safety features:
• A void space to allow any leaking liquid to drain away.
• An alarm system to indicate a leak in the double wall.
# Single-wall Construction
Single-wall construction with all of the following safety features:
# Higher Pressure
Higher pressure of at least 1 bar on the SANITARY SEAWATER or POTABLE WATER side of the heat exchanger.
# Automatic Valve
An automatic valve arrangement that closes SANITARY SEAWATER or POTABLE WATER circulation in the heat exchanger when the pressure difference is less than 1 bar.
# Alarm
An alarm system that sounds when the diverter valve directs SANITARY SEAWATER or POTABLE WATER from the heat exchanger.
# Recreational Water Facilities (RWFs) Water Source
# Filling System
Provide a filling system that allows for the filling of each RWF with SANITARY SEAWATER or POTABLE WATER.
For a compensation or make-up tank supplied with POTABLE WATER, an overflow line located below the fill line and at least twice the diameter of the fill line is an acceptable method of BACKFLOW protection, provided that the overflow line discharges to the wastewater system through an indirect connection.

# Compensation or Make-up Tank
Where make-up water is required to replace water loss due to splashing, carry-out, and other volume loss, install an appropriately designed compensation or make-up tank to ensure that ADEQUATE chemical balance can be maintained.

# Combining RWFs
No more than two similar RWFs may be combined. CHILDREN'S POOLS and BABY-ONLY WATER FACILITIES must not be combined with any other type of RWF.

# Independent Manual Testing
When combining RWFs, provisions must be made for independent manual water testing within the mechanical room for each RWF.

# Independent Slide RWF and Adult Swimming Pool
An independent slide RWF and an adult SWIMMING POOL may be combined provided that the water volume added to the slide and the slide pump capacity are sufficient to maintain the TURNOVER rate as shown in section 29.10. Any other combinations of RWFs will be decided on a case-by-case basis during the plan review.

# RWF Showers and Toilet Facilities
# Shower Temperature
Equip showers to provide POTABLE WATER at a temperature not to exceed 43°C (110°F) during normal operations. Install the showers within 10 meters of the entrances to RWFs. The location and number of showers for facilities with multiple entrances will be determined during the plan review.

# Showers for Children
RWFs designed for use by children under 6 years of age must have appropriately sized shower facilities. Standard height is acceptable, but the mechanism to operate the flow of water must not be more than 1 meter above the deck.

# Toilet Facilities
Locate toilet facilities within one fire zone (approximately 48 meters [157 feet]) of each RWF and on the same deck. Install a minimum of two separate toilet rooms (either two unisex or one male and one female). Each toilet facility must include a toilet and a handwashing facility. The total number of toilets and toilet facilities required will be assessed during the plan review. Urinals may be installed in addition to the required toilet, but may not replace the toilet.

# Diaper-changing Facilities
Provide diaper-changing facilities within one fire zone (approximately 48 meters [157 feet]) of any BABY-ONLY WATER FACILITY and on the same deck. If these facilities are placed within toilet rooms, there must be one facility located within each toilet room (men's, women's, and unisex). Diaper-changing facilities must be equipped in accordance with section 34.2.1.

# RWF Drainage
# Independent System
Provide a drainage system for RWFs that is independent of other drainage systems. If RWF drains are connected to another drainage system, provide an AIR GAP or a DUAL SWING CHECK VALVE between the two. This includes the drainage for compensation or make-up tanks.

# Slope
Slope the bottom of the RWF toward the drains to achieve complete drainage.

# Seating Drains
If seating is provided inside an RWF, ensure that drains are installed to allow for complete draining of the seating area (including seats inside WHIRLPOOL SPAS and SPA POOLS).

# Drain Completely
Decorative and working features of an RWF must be designed to drain completely and must be constructed of nonporous, EASILY CLEANABLE materials. These features must be designed to be shock-halogenated.
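For plan review, the shower and toilet provisions above can be checked mechanically. Below is a minimal sketch of such a pre-review check; the class and field names are hypothetical, and the thresholds simply restate the figures in the text (43°C showers, 10 m to showers, one fire zone of roughly 48 m to toilets on the same deck, at least two toilet rooms).

```python
from dataclasses import dataclass

@dataclass
class RwfSanitaryPlan:
    shower_temp_c: float      # maximum delivered shower temperature
    shower_distance_m: float  # farthest shower from an RWF entrance
    toilet_distance_m: float  # farthest toilet room from the RWF
    toilet_same_deck: bool
    toilet_rooms: int

def review(plan: RwfSanitaryPlan) -> list[str]:
    """Return a list of findings; an empty list means no issues found."""
    findings = []
    if plan.shower_temp_c > 43:
        findings.append("Shower water must not exceed 43C (110F).")
    if plan.shower_distance_m > 10:
        findings.append("Showers must be within 10 m of RWF entrances.")
    if plan.toilet_distance_m > 48 or not plan.toilet_same_deck:
        findings.append("Toilets must be within one fire zone (~48 m) "
                        "and on the same deck as the RWF.")
    if plan.toilet_rooms < 2:
        findings.append("At least two separate toilet rooms are required.")
    return findings

print(review(RwfSanitaryPlan(45.0, 8.0, 60.0, True, 2)))
```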
# RWF Safety
# Antientrapment Drain Covers and Suction Fittings
Where referenced within these guidelines, drain covers must comply with the requirements in ASME A112.19.8-2007, including addenda. See Table 28.1.7 for primary and secondary ANTIENTRAPMENT requirements. VSP is aware that the requirements shown in Table 28.1.7 may not fully meet the letter of the Virginia Graeme Baker Act, but we also recognize the life-safety concerns for rapid dumping of RWFs in conditions of instability at sea. Therefore, it is the owner's decision to meet or exceed VSP requirements.

# Installation
Install dual drains that are at least 1 meter (3 feet) apart and at the lowest point in the RWF. Ensure that there are no intermediate drain isolation valves on the lines between the drains (Figure 17a). In a channel system (an UNBLOCKABLE DRAIN), a grate-type cover would be attached to the channel (Figure 17b). When fully assembled and installed, SUCTION FITTINGS must reduce the potential for body, digit, or limb entrapment in accordance with ASME A112.19.8M-2007.

# Stamped and Certified
Manufactured drain covers and SUCTION FITTINGS must be stamped and certified in accordance with the standards set forth in ASME A112.19.8-2007.

# Design of Field Fabricated Covers and Fittings
The design of custom/shipyard-constructed (field-fabricated) drain covers and SUCTION FITTINGS must be fully specified by a registered design professional in accordance with ASME A112.19.8-2007. The specifications must fully address cover/grate loadings; durability; hair, finger, and limb entrapment issues; cover/grate secondary layer of protection; related sump design; and features specific to the RWF.

# Alternate to Marking Field Fabricated Fittings
As an alternate to marking custom/shipyard-constructed (field-fabricated) drain cover fittings, the owner of the facility where these fittings will be installed must be advised in writing by the registered design professional of the information set forth in section 7.1.1 of ASME A112.19.8-2007.

# Accompanying Letter
A letter from the shipyard must accompany each custom/shipyard-constructed (field-fabricated) drain cover fitting. At a minimum, the letter must specify the shipyard, name of the vessel, specifications and dimensions of the drain cover, as noted above, and the exact location of the RWF for which it was designed. The registered design professional's name, contact information, and signature must be on the letter.

# Antientrapment/Antientanglement Requirements
See Table 28.1.7.
*Options 1 through 5 are for fittings that are not under direct suction. These include both fittings to drain the RWF and fittings used to recirculate the water. Options 6 through 8 are for fittings that are under direct suction. These include fittings to drain the RWF and fittings used to recirculate the water.
**Definitions:
• Alarm = the audible alarm must sound in a continuously manned space AND at the RWF. This alarm is for all draining: accidental, routine, and emergency.
• GDS (GRAVITY DRAINAGE SYSTEM) = a drainage system that uses a collector tank from which the pump draws water. Water moves from the RWF to the collector tank due to atmospheric pressure, gravity, and the displacement of water by bathers. There is no direct suction at the RWF.
• SVRS (SAFETY VACUUM RELEASE SYSTEM) = a system that stops the operation of the pump, reverses the circulation flow, or otherwise provides a vacuum release at a suction outlet when a blockage is detected.
The system must be tested by an independent third party and must conform to ASME/ANSI A112.19.17 or ASTM standard F2387.
• APS (AUTOMATIC PUMP SHUT-OFF system) = a device that detects a blockage and shuts off the pump system. A manual shut-off near the RWF does not qualify as an APS.

# Depth Markers
# Installation
Install depth markers for each RWF where the maximum water depth is 1 meter (3 feet) or greater. Install depth markers so that they can be seen from the deck and inside the RWF tub. Ensure that the markers are in both meters and feet. Additionally, depth markers must be installed for every 1-meter (3-foot) change in depth.

# Safety Signs
# Installation
Install safety signs at each RWF except for BABY-ONLY WATER FACILITIES. At a minimum, the signs must include the following words:
• Do not use these facilities if you are experiencing diarrhea, vomiting, or fever.
• No children in diapers or who are not toilet trained.
• Shower before entering the facility.
• Bather load number. (The maximum bather load must be based on the following factor: one person per 19 liters [5 gallons] per minute of recirculation flow.)
Pictograms may replace words, as appropriate or available. It is advisable to post additional cautions and concerns on signs. See section 31.3 for safety signs specific to BABY-ONLY WATER FACILITIES and section 32.3 for safety signs specific to WHIRLPOOL SPAS and SPA POOLS.

# Children's RWF
For the children's RWF signs, include the exact wording "TAKE CHILDREN ON FREQUENT BATHROOM BREAKS" or "TAKE CHILDREN ON FREQUENT TOILET BREAKS." This is in addition to the basic RWF safety sign.

# 28.4 Life-saving Equipment
# Location
A rescue or shepherd's hook and an APPROVED flotation device must be provided at a prominent location (visible from the full perimeter of the pool) at each RECREATIONAL WATER FACILITY that has a depth of 1 meter (3 feet) or greater. These devices must be mounted in a manner that allows for easy access during an emergency.
• The pole of the shepherd's hook must be long enough to reach the center of the deepest portion of the pool from the side plus 0.6 meters (2 feet). It must be light, strong, and nontelescoping, with rounded, nonsharp ends.
• The flotation device must have an attached rope that is at least 2/3 of the maximum pool width.

# Recirculation and Filtration Systems
# Skim Gutters
Where skim gutters are installed, ensure that the maximum fill level of the RWF is to the skim gutter level.

# Overflows
Ensure that overflows are directed by gravity to the compensation or make-up tank for filtration and DISINFECTION. Alternatively, overflows may be directed to the RWF drainage system. If the overflow is connected to another drainage system, provide an AIR GAP or a DUAL SWING CHECK VALVE between the two.

# Return Water
All water returning from an RWF must be directed to the compensation or make-up tank or the filtration and DISINFECTION system.

# Compensation or Make-up Tanks
Ensure that 100% of the water in the compensation or make-up tanks passes through the filtration and DISINFECTION systems before returning to the RWF. This includes any water directed to water features in RWFs.

# Secondary Disinfection System
# Cryptosporidium
Provide a secondary UV DISINFECTION system capable of inactivating Cryptosporidium. Ensure that these systems are installed in accordance with the manufacturer's specifications. Secondary UV DISINFECTION systems must be designed to operate in accordance with the parameters set forth in NSF International or an equivalent standard.
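Two of the figures above are simple calculations: the bather-load factor on the safety sign (one person per 19 liters per minute of recirculation flow) and the life-saving equipment dimensions (pole reach plus 0.6 meters; rope at least 2/3 of the maximum pool width). A minimal sketch follows; the function names are illustrative, not part of the guidelines.

```python
import math

def max_bather_load(recirc_flow_l_per_min: float) -> int:
    """One person per 19 L (5 gal) per minute of recirculation flow;
    partial persons round down."""
    return math.floor(recirc_flow_l_per_min / 19)

def min_hook_pole_length_m(deepest_point_to_side_m: float) -> float:
    """Pole must reach the center of the deepest portion of the pool
    from the side, plus 0.6 m (2 ft)."""
    return deepest_point_to_side_m + 0.6

def min_float_rope_m(max_pool_width_m: float) -> float:
    """Rope on the flotation device: at least 2/3 of max pool width."""
    return 2 * max_pool_width_m / 3

print(max_bather_load(950))             # 50 bathers
print(min_hook_pole_length_m(6.0))      # 6.6 m
print(round(min_float_rope_m(9.0), 2))  # 6.0 m
```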
# Size
Secondary DISINFECTION systems must be appropriately sized to disinfect 100% of the water at the appropriate TURNOVER rate. Secondary DISINFECTION systems are to be installed after filtration but before HALOGEN-based DISINFECTION. Unless otherwise accepted by the VSP, secondary DISINFECTION must be accomplished by a UV DISINFECTION system.

# Low- and Medium-pressure UV Systems
Low- and medium-pressure UV systems can be used and must be designed to treat 100% of the flow through the feature line(s). Multiple units are acceptable. UV systems must be designed to provide 40 mJ/cm² at the end of lamp life. UV systems must be rated at a minimum of 254 nm.

# Recirculation and Filtration System
# Compensation or Make-up Tank
Install a compensation or make-up tank with an automatic level control system capable of holding an amount of water sufficient to ensure continuous operation of the filtration and DISINFECTION systems. This capacity must be equal to at least 3 times the total operating volume of the system.

# Accessible Drain
Install an ACCESSIBLE drain at the bottom of the tank to allow for complete draining of the tank. Install an access port for cleaning the tank and for the addition of batch halogenation and PH control chemicals.

# Secondary Disinfection and pH Systems
Design the system so that 100% of the water for the BABY-ONLY WATER FACILITY feature passes through the filtration, halogenation, secondary DISINFECTION, and PH systems before returning to the BABY-ONLY WATER FACILITY.

# Disinfection and pH Control
# Independent Automatic Analyzer
Install independent automatic analyzer-controlled HALOGEN-based DISINFECTION and PH dosing systems. The analyzer must be capable of measuring HALOGEN levels in MG/L (ppm) and PH levels. Analyzers must have digital readouts that indicate measurements from the installed analyzer probes.

# Automatic Monitoring and Recording
Provide an automatic monitoring and recording system for the free HALOGEN residuals in MG/L (ppm) and PH levels. The recording system must be capable of recording these levels 24 hours/day.

# Secondary Disinfection System
# Cryptosporidium
Provide a secondary UV DISINFECTION system capable of inactivating Cryptosporidium. Ensure that these systems are installed in accordance with the manufacturer's specifications. Secondary UV DISINFECTION systems must be designed to operate in accordance with the parameters set forth in NSF International for use in BABY-ONLY WATER FACILITIES.

# Size
Secondary DISINFECTION systems must be appropriately sized to disinfect 100% of the water at the appropriate TURNOVER rate. Secondary DISINFECTION systems are to be installed after filtration but before HALOGEN-based DISINFECTION. Unless otherwise APPROVED by VSP, secondary DISINFECTION must be accomplished by a UV DISINFECTION system.

# Definitions
Gravity drainage system: A water collection system whereby a collection tank is located between the RECREATIONAL WATER FACILITY and the suction pumps.
Gray water: Wastewater from galley equipment, dishwashers, showers and baths, laundries, washbasins, DECK DRAINS, and recirculated RECREATIONAL WATER FACILITIES. Gray water does not include BLACK WATER or bilge water from the machinery spaces.
Gutterway: See SCUPPER.
Halogen: The group of elements including chlorine, bromine, and iodine used for the DISINFECTION of water.
Hose bib connection vacuum breaker (HVB): A BACKFLOW PREVENTION DEVICE that attaches directly to a hose bib by way of a threaded head. This device uses a single check valve and vacuum breaker vent.
It is not APPROVED for use under CONTINUOUS PRESSURE (e.g., when a shut-off valve is located downstream from the device). This device is a form of an AVB specifically designed for a hose connection. Interactive recreational water facilities: Structures that provide a variety of recreational water features such as flowing, misting, sprinkling, jetting, and waterfalls. These facilities may be zero depth. # Keel laying: The date at which construction identifiable with a specific ship begins and when assembly of that ship comprises at least 50 tons or 1% of the estimated mass of all structural material, whichever is less. mg/L: Milligrams per liter, the metric equivalent of parts per million (ppm). Noncorroding: Material that maintains its original surface characteristics through prolonged influence by the use environment, food contact, and normal use of cleaning compounds and sanitizing solutions. Nonfood-contact surfaces (nonfood zone): All exposed surfaces, other than FOOD-CONTACT SURFACES, of equipment located in FOOD AREAS (Figure 6). Permeate water lines: Pipes carrying permeate water from the reverse osmosis unit that may be directed to the POTABLE WATER system. This is the VSP definition for pipe striping purposes. # pH (Potens hydrogen): The symbol for the negative logarithm of the hydrogen ion concentration, which is a measure of the degree of acidity or alkalinity of a solution. Values between 0 and 7 indicate acidity and values between 7 and 14 indicate alkalinity. The value for pure distilled water is 7, which is neutral. Plumbing fixture: A receptacle or device that • Is permanently or temporarily connected to the water-distribution system of the vessel and demands a supply of water from the system; or • Discharges used water, waste materials, or SEWAGE directly or indirectly to the drainage system of the vessel. Portable: A description of equipment that is READILY REMOVABLE or mounted on casters, gliders, or rollers; provided with a mechanical means so that it can be tilted safely for cleaning; or EASILY MOVABLE by one person. Potable water: Water that is HALOGENATED and PH controlled and is intended for • drinking, washing, bathing, or showering; • use in fresh water SWIMMING POOLS and WHIRLPOOL SPAS; • use in the vessel's hospital; • handling, preparing, or cooking food; and • cleaning FOOD STORAGE and PREPARATION areas, utensils, and equipment. Potable water is free from impurities in amounts sufficient to cause disease or harmful physiological effects. The water quality must conform to requirements of the World Health Organization drinking water standards. Potable water tanks: All tanks in which potable water is stored for use in the POTABLE WATER system. # Pressure vacuum breaker assembly (PVB): A device consisting of an independently loaded internal check valve and a spring-loaded air inlet valve. This device is also equipped with two resilient seated gate valves and test cocks. Readily accessible: Exposed or capable of being exposed for cleaning or inspection without the use of tools. Readily removable: Capable of being detached from the main unit without the use of tools. Recreational seawater: Seawater taken onboard while making way at a position at least 12 miles at sea and routed directly to the RWFs for either sea-to-sea exchange or recirculation. # Recreational water facility (RWF): A water facility that has been modified, improved, constructed, or installed for the purpose of public swimming or recreational bathing. 
RWFs include, but are not limited to,
• ACTIVITY POOLS.
• BABY-ONLY WATER FACILITIES.
• CHILDREN'S POOLS.
• Diving pools.
• Hot tubs.

# Fasteners
Construct slotted or Phillips head screws, pop rivets, and other fasteners used in NONFOOD-CONTACT AREAS of NONCORRODING materials.

# 9.3 Use of Sealants
# Gaskets
# Materials
Use SMOOTH, nonabsorbent, nonporous materials for equipment gaskets in reach-in refrigerators, steamers, ice bins, ice cream freezers, and similar equipment.

# Exposed Surfaces
Close and seal exposed surfaces of gaskets at their ends and corners.

# Removable
Use refrigerator door gaskets that are designed to be REMOVABLE.

# Fasteners
Follow the requirements in section 9.0 when using fasteners to install gaskets.

# Equipment Drain Lines
# Connections
Connect drain lines to the appropriate waste system by means of an AIR GAP or AIR-BREAK from all fixtures, sinks, appliances, compartments, refrigeration units, or other equipment used, designed for, or intended to be used in the preparation, processing, storage, or handling of food, ice, or drinks. Ensure that the AIR GAP or AIR-BREAK is easily ACCESSIBLE for inspection and cleaning.

# Size
Properly size all exhaust and supply vents.

# Position and Balance
Position and balance all exhaust and supply vents to ensure proper air conditioning and capture/exhaust of heat and steam.

# Prevent Condensate
Limit condensate formation on either the exhaust canopy hood or air supply vents by either:
• locating or directing conditioned air away from exhaust hoods and heat-generating equipment or
• installing a shield blocking the air from the hood supply vents.

# Filters
Where used, provide READILY REMOVABLE and cleanable filters.

# Access
Provide access for cleaning vents and ductwork. Automatic clean-in-place systems are recommended for removal of grease generated from cooking equipment.

# Hood Cleaning Cabinets
Locate automatic clean-in-place hood wash control panels that have a chemical reservoir so they are not over FOOD PREPARATION equipment or counters, FOOD PREPARATION or warewashing sinks, or food and clean equipment storage.

# Construction
Construct hood systems of stainless steel with COVED corners of at least a 9.5-millimeter (3/8-inch) radius.

# Continuous Welds
Use continuous welds or profile strips on adjoining pieces of stainless steel.

# Drainage System
Install a drainage system for automatic clean-in-place hood-washing systems. A drainage system is not required for normal grease and condensate hoods or for locations where cleaning solutions are applied manually to hood assemblies.

# Manufacturer's Recommendations
Install all ventilation systems in accordance with the manufacturer's recommendations.

# Test System
Test each system using a method that determines if the system is properly balanced.

# Labeling
Label all CLEANING LOCKERS and cleaning rooms with the exact wording "CLEANING MATERIALS ONLY."

# Filters
# Potable Water Filters
If used, install only point-of-use POTABLE WATER filters on ice machines, combination ovens, beverage machines, etc. Ensure that filters are ACCESSIBLE for changing.

# Suction Lines
Place suction lines at least 150 millimeters (6 inches) from the tank bottom or sump bottom.

# 22.9 Potable Water Distribution System
# Location
Locate DISTILLATE, PERMEATE, and distribution lines at least 450 millimeters (18 inches) above the deck plating or the normal bilge water level.

# Pipe Materials
Do not use lead, cadmium, or other hazardous materials for pipes, fittings, or solder.
# Fixtures That Require Potable Water
Supply only POTABLE WATER to the following areas and plumbing connections, regardless of the locations of these fixtures on the vessel:
• All showers and sinks (not just in cabins).
• Chemical feed tanks for the POTABLE WATER system or RECREATIONAL WATER systems.
• Drinking fountains.
• Emergency showers.
• Eye wash stations.
• FOOD AREAS.
• Handwash sinks.
• HVAC fan rooms.
• Medical facilities.
Utility sinks for engine/mechanical spaces are excluded.

# Free Halogen Analyzer-chart Recorder
Provide continuous-recording free HALOGEN analyzer-chart recorder(s) that have ranges of 0.0 to 5.0 MG/L (ppm) and indicate the level of free HALOGEN for 24-hour time periods (e.g., circular 24-hour charts). Electronic data loggers with certified data security features may be installed in lieu of chart recorders. Acceptable data loggers produce records that conform to the principles of operation and data display required of the analog charts, including printing the records. Use electronic data loggers that log times in increments of <15 minutes.

# Multiple Distribution Loops
When supplying POTABLE WATER throughout the distribution network with more than one ring or loop (lower to upper decks or forward to aft), there must be
• pipe connections that link those loops and a single distant-point monitoring analyzer or
• individual analyzers on each ring or loop.
A single return line that connects to only one ring or loop of a multiple-loop system is not acceptable. One chart recorder may be used to record multiple loop readings. POTABLE WATER distribution loops/rings supplied by separate HALOGEN dosing equipment must include an analyzer-chart recorder at a distant point for each loop/ring.

# Cross-connection Control
# Backflow Prevention
Use appropriate BACKFLOW prevention at all CROSS-CONNECTIONS. This may include nonmechanical protection such as an AIR GAP or a mechanical BACKFLOW PREVENTION DEVICE.

# Air Gaps
AIR GAPS should be used where feasible and when water under pressure is not required.

# Atmospheric Vent
A mechanical BACKFLOW PREVENTION DEVICE must have an atmospheric vent.

# Protect Against Health Hazards
Ensure that connections where there is a potential of a health hazard are protected by AIR GAPS or BACKFLOW PREVENTION DEVICES designed to protect against health hazards.

# Test Kit
Provide an appropriate test kit for all testable devices. Test all testable devices after installation and provide pressure differential test results for each device.

# Atmospheric Vacuum Breaker
When used, install an ATMOSPHERIC VACUUM BREAKER (AVB) 150 millimeters (6 inches) above the fixture flood level rim with no valves downstream of the device.

# Atmospheric or Hose Bib Vacuum Breaker
Ensure an AVB or HOSE-BIB CONNECTED VACUUM BREAKER (HVB) is not installed at a connection where it can be subjected to CONTINUOUS PRESSURE for more than 12 continuous hours.

# Connections Between Potable and Black Water Systems
Ensure that any connection between the POTABLE WATER system and the BLACK WATER system is through an AIR GAP. Where feasible, water required for the BLACK WATER system should not be from the POTABLE WATER system.

# Protection Against Backflow
Protect the following connections to the POTABLE WATER system against BACKFLOW (BACKSIPHONAGE or BACKPRESSURE) with AIR GAPS or mechanical BACKFLOW PREVENTION DEVICES:
• Air conditioning expansion tanks.
• Automatic galley hood washing systems.
• Beauty and barber shop spray-rinse hoses.
• BLACK WATER or combined GRAY WATER/BLACK WATER systems. An AIR GAP is the only allowable protection for these connections.
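Where an electronic data logger replaces the analyzer-chart recorder described above, the operative constraints are a 0.0 to 5.0 MG/L range, log increments of less than 15 minutes, and durable 24-hour records. The sketch below illustrates that logging loop only; the probe interface, CSV format, and file name are assumptions, not part of the guidelines.

```python
import csv
import time
from datetime import datetime

LOG_INTERVAL_S = 10 * 60          # 10 minutes, satisfying "< 15 minutes"
RANGE_MG_L = (0.0, 5.0)           # required analyzer display range

def read_free_halogen_mg_l() -> float:
    """Placeholder for the distant-point analyzer probe (hypothetical)."""
    return 0.7

def run_logger(path: str, samples: int) -> None:
    """Append timestamped free-halogen readings, clamped to the range."""
    with open(path, "a", newline="") as f:
        writer = csv.writer(f)
        for i in range(samples):
            value = read_free_halogen_mg_l()
            clamped = min(max(value, RANGE_MG_L[0]), RANGE_MG_L[1])
            writer.writerow([datetime.now().isoformat(), f"{clamped:.2f}"])
            f.flush()             # keep each record durable on disk
            if i + 1 < samples:
                time.sleep(LOG_INTERVAL_S)

run_logger("halogen_distant_point.csv", samples=1)
```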
# List of Connections to Potable Water System
A listing must be developed of all connections to the POTABLE WATER system where there is a potential for contamination either with a pollutant or contaminant. At a minimum, this listing must include the following:
• Exact location of the connection.
• PLUMBING FIXTURE (plumbing part [pipe, valve, etc.]) or component connected (what the fixture is connected to [sprinkler, shower, tank, etc.]).
• Form of protection used:

# 29.5 Approved
Install recirculation, filtration, and DISINFECTION equipment that has been APPROVED for use in RWFs based on NSF International or an equivalent standard.

# Centrifugal Pumps
Ensure that pumps used to recirculate RWF water are centrifugal pumps that are self-priming or that prime automatically. Flooded-end suction pumps are permitted if suitable for the application.

# Skimmers or Gutters
Install surface skimmers or gutters that are capable of handling approximately 80% of the filter flow of the recirculation system. If skimmers are used instead of gutters, install at least one skimmer for every 37 square meters (400 square feet) of pool surface area.

# Hair and Lint Strainer
Provide a hair and lint strainer between the RWF outlet and the suction side of the pumps to remove foreign debris such as hair, lint, pins, etc. Ensure that the REMOVABLE portion of the hair and lint strainer is corrosion-resistant and has holes no greater than 6 millimeters (1/4 inch) in diameter.

# Filters
# Particle Size
Use filters that are designed to remove all particles greater than 20 microns from the entire volume of the RWF within the specified TURNOVER rate.

# Cartridge or Media Type
Use cartridge or media-type filters (e.g., rapid-pressure sand filters, high-rate sand filters, diatomaceous earth filters, gravity sand filters). Make filter sizing consistent with American National Standards Institute (ANSI) standards for public RWFs. Ensure that commercial filtration rates are used in the calculations for cartridge filters if multiple rates are provided by the manufacturer.

# Backwash
Ensure that media-type filters are capable of being backwashed. Provide a clear sight glass on the backwash side of all media filters.

# Accessories
Install filter accessories, such as pressure gauges, air-relief valves, and flow meters.

# Access
Design and install filters and filter housings in a manner that allows access for inspection, cleaning, and maintenance. The probe for the automated analyzer recorder must be installed before the compensation or make-up tank or on a line taken directly from the RWF. Install appropriate sample taps for analyzer calibration.

# Analyzer Probes
For WHIRLPOOL SPAS and SPA POOLS, analyzer probes for the dosing and recording system must be capable of measuring and recording levels up to 10 MG/L (10 ppm).

# Alarm
Provide an audible alarm in a continuously occupied watch station (e.g., the engine control room) to indicate low and high free HALOGEN and PH readings in each RWF.

# Deck Drains
Install DECK DRAINS in each RWF mechanical room that allow for draining of the entire pump, filter system, compensation or make-up tank, and associated piping. Provide sufficient drainage to prevent pooling on the deck.

# RWF System Drainage
# Installation
Install drains in the RWF system to allow for complete drainage of the entire volume of water from the pump, filter system, compensation or make-up tank, and all associated piping.
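The skimmer and filter provisions above imply simple sizing arithmetic: at least one skimmer per 37 square meters of surface area, skimmer/gutter capacity of about 80% of filter flow, and a filter flow that passes the full volume within the specified TURNOVER time. A minimal sketch, with illustrative function names:

```python
import math

def required_skimmers(pool_area_m2: float) -> int:
    """At least one skimmer per 37 m^2 (400 ft^2) of pool surface."""
    return max(1, math.ceil(pool_area_m2 / 37))

def filter_flow_m3_per_h(pool_volume_m3: float, turnover_h: float) -> float:
    """Flow needed to pass the entire volume within the turnover time.
    The turnover time itself comes from section 29.10 (not shown here)."""
    return pool_volume_m3 / turnover_h

def skimmer_capacity_ok(skimmer_flow: float, filter_flow: float) -> bool:
    """Skimmers/gutters must handle approximately 80% of the filter flow."""
    return skimmer_flow >= 0.8 * filter_flow

print(required_skimmers(120))         # 4 skimmers for a 120-m^2 pool
print(filter_flow_m3_per_h(90, 0.5))  # 180 m^3/h for a 30-minute turnover
```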
# Compensation Tank Drain
Provide a drain at the bottom of each compensation or make-up tank to allow for complete draining of the tank. Install an access port for cleaning the tank and for the addition of batch halogenation and PH control chemicals.

# Utility Sink
Install a utility sink and a hose-bib tap supplied with POTABLE WATER in each RWF pump room. A threaded hose attachment at the utility sink is acceptable for the tap.

# Additional Requirements for Children's Pools
# Prevent Access
Provide a method to prevent access to pools located in remote areas of the vessel.

# Design
Design the pool such that the maximum water level cannot exceed 1 meter (3 feet).

# Secondary Disinfection System
# Secondary UV Disinfection
In addition to the HALOGEN DISINFECTION system, provide a secondary UV DISINFECTION system capable of inactivating Cryptosporidium.

# Spray Features
Any spray features must be designed and constructed to prevent water run-off from the surrounding deck from entering the BABY-ONLY WATER FACILITY.

# Safety Sign
# Content
Install an easy-to-read permanent sign, with letters at least 25 millimeters (1 inch) high, at each entrance to the BABY-ONLY WATER FACILITY feature. At a minimum, the sign should state the following:
• This facility is intended for use by children in diapers or children who are not completely toilet trained.
• Use of this facility may put children at increased risk for illness.
• Children who have a medical condition that may put them at increased risk for illness should not use these facilities.
• Children who are experiencing symptoms such as vomiting, diarrhea, skin sores, or infections are prohibited from using these facilities.
• Children must wear a swim diaper.
• Children must be accompanied by an adult at all times.
• Ensure that children have a clean swim diaper before using these facilities. Frequent swim diaper changes are recommended.
• Do not change diapers in the area of the BABY-ONLY WATER FACILITY. A diaper-changing station has been provided (give exact location) for your convenience.

# Low- and Medium-pressure UV Systems
Low- and medium-pressure UV systems can be used and must be designed to treat 100% of the flow through the feature line(s). Multiple units are acceptable. UV systems must be rated at a minimum of 254 nm. UV systems must be designed to provide 40 mJ/cm² at the end of lamp life.

# Cleaning
Install UV systems that allow for cleaning of the lamp jacket without disassembling the unit.

# Spare Lamp
A spare ultraviolet lamp and any accessories required by the manufacturer to change the lamp must be provided. In addition, operational instructions for the UV DISINFECTION system must be provided.

# 31.6 Automatic Shut-off
# Installation
Install an automatic control that shuts off the water supply to the BABY-ONLY WATER FACILITY if the free HALOGEN residual or PH range has not been maintained per the requirements set forth in the current VSP 2011 Operations Manual. The shut-off control must operate similarly when the UV DISINFECTION system is not operating within acceptable parameters.

# Baby-only Water Facility Pump Room
# Discharge
All recirculated water discharged to waste must be through a visible indirect connection in the pump room.

# Flow Meter
A flow meter must be installed in the return line before HALOGEN injection. The flow meter must be accurate to within 10% of actual flow.
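The automatic shut-off described in section 31.6 is, in control terms, an interlock: the water supply stays open only while free HALOGEN, PH, and the UV system are all within acceptable parameters. The sketch below illustrates that logic only; the numeric limits shown are placeholders, since the operative ranges come from the VSP 2011 Operations Manual.

```python
# Assumed limits for illustration only; not the official operating ranges.
FREE_HALOGEN_RANGE = (1.0, 5.0)   # mg/L
PH_RANGE = (7.0, 7.8)

def supply_permitted(free_halogen: float, ph: float, uv_ok: bool) -> bool:
    """Return True only when every interlock condition is satisfied;
    the shut-off valve closes the moment any condition fails."""
    halogen_ok = FREE_HALOGEN_RANGE[0] <= free_halogen <= FREE_HALOGEN_RANGE[1]
    ph_ok = PH_RANGE[0] <= ph <= PH_RANGE[1]
    return halogen_ok and ph_ok and uv_ok

print(supply_permitted(2.0, 7.4, True))   # True: supply stays open
print(supply_permitted(2.0, 7.4, False))  # False: UV fault closes supply
```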
# Additional Requirements for Whirlpool Spas and Spa Pools
WHIRLPOOL SPAS that are similar in design and construction to public WHIRLPOOL SPAS but are located for the sole use of an individual cabin or group of cabins must comply with the public WHIRLPOOL SPA requirements if the WHIRLPOOL SPA has either of the following features:
• Tub capacity of more than four individuals.
• Location outside of the cabin or cabin balcony.

# Overflow System
For WHIRLPOOL SPAS, design the overflow system so the water level is maintained.

# Temperature Control
Provide a temperature control mechanism to prevent the temperature from exceeding 40°C (104°F).

# Safety Sign
In addition to the RWF safety sign in section 28.3, install a sign at each WHIRLPOOL SPA and SPA POOL entrance listing precautions and risks associated with the use of these facilities. At a minimum, include a caution against use by the following:
• Individuals who are immunocompromised.
• Individuals on medication or who have underlying medical conditions, such as cardiovascular disease, diabetes, or high or low blood pressure.
• Children, pregnant women, and elderly persons.
Additionally, caution against exceeding 15 minutes of use.

# Drainage System
For WHIRLPOOL SPAS, provide a line in the drainage system to allow these facilities to be drained to the GRAY WATER, TECHNICAL WATER, or other wastewater holding system through an indirect connection or a DUAL SWING CHECK VALVE. This does not include the BLACK WATER system.

# Ventilation Systems
# Air Supply Systems
# Accessible
Design and install air handling units to be ACCESSIBLE for periodic inspections and air intake filter changing.

# Drain Completely
Install air conditioning condensate collection pans that drain completely. Connect condensate collection pans to drain piping to prevent condensate from pooling on the decks.

# Air Intakes
Locate air intakes for fan rooms so that any ventilation or processed exhaust air is not drawn back into the vessel.

# Access Panels
Provide access panels to all major air exhaust trunks to allow periodic inspection and cleaning.

# Written Balancing Report
Provide a written ventilation system balancing report for the areas listed in sections 33.2.1.1 through 33.2.1.6.

# Child Activity Center
# Facilities
Include the following in each CHILD ACTIVITY CENTER (this does not apply to areas only for children 6 years of age and older).

# Handwashing
Handwashing facilities must be provided in each CHILD ACTIVITY CENTER. They must be ACCESSIBLE to each CHILD ACTIVITY CENTER without barriers such as doors. Locate the handwashing facility outside of the toilet room and install handwashing sinks with a maximum height of 560 millimeters (22 inches) above the deck.
• Provide hot and cold POTABLE WATER to all handwashing sinks.
• Equip handwashing sinks to provide water at a temperature not to exceed 43°C (110°F) during use.
• Provide handwashing facilities that include a soap dispenser, paper towel dispenser or air dryer, and a waste receptacle.

# Toilet Rooms
Toilet rooms must be provided in CHILD ACTIVITY CENTERS. Provide one toilet for every 25 children or fraction thereof, based on the maximum capacity of the center. The toilet rooms must include the items noted in sections 34.1.2.1 through 34.1.2.6.

# Child-sized Toilets
CHILD-SIZED TOILETS with a maximum height of 280 millimeters (11 inches) (including the toilet seat) and a toilet seat opening no greater than 203 millimeters (8 inches).
# Handwashing Facilities
• Provide hot and cold POTABLE WATER to all handwashing sinks.
• Equip handwashing sinks to provide water at a temperature not to exceed 43°C (110°F) during use.
• Install handwashing sinks with a maximum height of 560 millimeters (22 inches) above the deck.
• Provide handwashing facilities that include a soap dispenser, paper towel dispenser or air dryer, and a waste receptacle.

# Storage
Provide storage for gloves and wipes.

# Waste Receptacle
Provide an airtight, washable waste receptacle.

# Self-closing Doors
Provide self-closing toilet room exit doors.

# Sign
Provide a sign with the exact wording "WASH YOUR HANDS AND ASSIST THE CHILDREN WITH HANDWASHING AFTER HELPING THEM USE THE TOILET." The sign should be in English and can also be in other languages.

# Diaper-changing Station
Provide a diaper-changing station in CHILD ACTIVITY CENTERS where children in diapers or children who are not toilet trained will be accepted.

# Supplies
Include the following in each diaper-changing station:
• A diaper-changing table that is impervious, nonabsorbent, nontoxic, SMOOTH, durable, and cleanable, and designed for diaper changing.
• An airtight, soiled-diaper receptacle.
• An adjacent handwashing station equipped in accordance with section 34.1.2.2.
• A storage area for diapers, gloves, wipes, and disinfectant.
• A sign with the exact wording "WASH YOUR HANDS AFTER EACH DIAPER CHANGE." The sign should be in English and can also be in other languages.

# Child-care Providers
Provide toilet and handwashing facilities for child-care providers that are separate from the children's toilet rooms. A public toilet outside the center is acceptable.

# Furnishings
Surfaces of tables, chairs, and other furnishings must be constructed of an EASILY CLEANABLE, nonabsorbent material.

# Housekeeping
# Handwashing Stations
Provide handwashing stations for housekeeping staff. VSP will evaluate the number and location of these handwashing stations during the plan review process.

# Location
Ensure that at least one handwashing station is available for each cabin attendant work zone and on the same deck as the work zone. One handwashing station may be located between two cabin attendant work zones, and travel across crew passageways is permitted.

# Ice/Deck Pantries
Handwashing stations for housekeeping staff include those in ice/deck pantries, but do not include those located in bars, room service pantries, bell boxes, or other FOOD AREAS.

# Supplies
Handwashing stations not located in ice/deck pantries must have a paper towel dispenser, soap dispenser, and a waste receptacle. These stations must provide water at a temperature between 38°C (100°F) and 49°C (120°F) through a mixing valve and be installed to allow for easy access by cabin attendants. Handwash stations inside ice/deck pantries must be installed in accordance with section 7.1.

# Passenger and Crew Public Toilet Rooms
# No-touch Exits
Provide either of the following in the public toilet rooms:

# Hands-free Exit
Hands-free exits from toilet rooms (such as doorless entry, automatic door openers, or latchless doors that open out), or

# Paper Towel Dispensers and Waste Receptacle
Paper towel dispensers at or after handwashing stations and a waste receptacle near the last exit door(s) to allow for towel disposal.

# Self-closing Doors
All public toilet room exit doors must be self-closing.
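The toilet ratio for CHILD ACTIVITY CENTERS above (one toilet for every 25 children or fraction thereof, based on maximum capacity) rounds any partial group up to a whole toilet. A one-function sketch, with an illustrative name:

```python
import math

def required_child_toilets(max_center_capacity: int) -> int:
    """One toilet per 25 children 'or fraction thereof': any partial
    group of 25 still requires a whole additional toilet."""
    return math.ceil(max_center_capacity / 25)

for capacity in (20, 25, 26, 60):
    print(capacity, "->", required_child_toilets(capacity))
# 20 -> 1, 25 -> 1, 26 -> 2, 60 -> 3
```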
# Decorative Fountains and Misting Systems
# Potable Water
Provide POTABLE WATER to all decorative fountains, misting systems, and similar facilities.

# Design
Design and install decorative fountains, misting systems, and similar facilities to be maintained free of Mycobacterium, Legionella, algae, and mold growth.

# Automated Treatment
Install an automated treatment system (halogenation, UV, or other effective disinfectant) to prevent the growth of Mycobacterium and Legionella in any recirculated decorative fountain, misting system, or similar facility. Ensure that these systems can also be manually disinfected.

# Manual Disinfection
Provide a plumbing connection for manual disinfection for all non-recirculated decorative fountains, misting systems, or similar facilities.

# Water Temperature
If heat is used as a disinfectant, ensure that the water temperature, as measured at the misting nozzle, can be maintained at 65°C (149°F) for a minimum of 10 minutes.

# Removable Nozzles
Ensure that misting nozzles are REMOVABLE for cleaning and DISINFECTION.

# Schematics
Provide operational schematics for misting systems.

# Acknowledgments
# Individuals
This document is a result of the cooperative effort of many individuals from the government, private industry, and the public. VSP thanks all of those who submitted comments and participated throughout this lengthy process.

We request the presence of USPHS representatives to conduct a construction inspection on the cruise vessel (NAME). We tentatively expect to deliver the vessel on (DATE). We would like to schedule the inspection for (DATE) in (CITY, COUNTRY). We expect the inspection to take approximately (NUMBER OF DAYS). We will pay CDC in accordance with the inspection fees published in the Federal Register. For inspections occurring outside of the United States, we will make all necessary arrangements for lodging and transportation of the Vessel Sanitation Program staff conducting this inspection, which includes airfare and ground transportation in (CITY, COUNTRY).

For updates to these guidelines and information about the Vessel Sanitation Program, visit http://www.cdc.gov/nceh/vsp.

# VSP Construction Checklists
# Available
VSP developed checklists from these guidelines that may be helpful to shipyard and cruise industry personnel in achieving compliance with these guidelines. Contact VSP for a copy of these checklists.
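Returning to the heat-disinfection option for misting systems above (65°C at the nozzle for at least 10 minutes), verification amounts to finding a continuous run of nozzle-temperature readings at or above 65°C spanning at least 600 seconds. A minimal sketch, assuming timestamped sensor samples; the sampling interval and data shape are illustrative:

```python
def held_at_65c(samples: list[tuple[float, float]],
                min_temp_c: float = 65.0,
                min_hold_s: float = 600.0) -> bool:
    """True if any continuous run of (elapsed_seconds, temperature_C)
    samples at or above min_temp_c spans at least min_hold_s seconds."""
    run_start = None
    for t, temp in samples:
        if temp >= min_temp_c:
            if run_start is None:
                run_start = t
            if t - run_start >= min_hold_s:
                return True
        else:
            run_start = None
    return False

readings = [(i * 30.0, 66.0) for i in range(25)]  # 12 minutes at 66C
print(held_at_65c(readings))  # True
```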
Objective: The Working Group on Civilian Biodefense has developed consensus-based recommendations for measures to be taken by medical and public health professionals following the use of plague as a biological weapon against a civilian population.

Participants: The working group included 25 representatives from major academic medical centers and research, government, military, public health, and emergency management institutions and agencies.

Evidence: MEDLINE databases were searched from January 1966 to June 1998 for the Medical Subject Headings plague, Yersinia pestis, biological weapon, biological terrorism, biological warfare, and biowarfare. Review of the bibliographies of the references identified by this search led to subsequent identification of relevant references published prior to 1966. In addition, participants identified other unpublished references and sources. Additional MEDLINE searches were conducted through January 2000.

Consensus Process: The first draft of the consensus statement was a synthesis of information obtained in the formal evidence-gathering process. The working group was convened to review drafts of the document in October 1998 and May 1999. The final statement incorporates all relevant evidence obtained by the literature search in conjunction with final consensus recommendations supported by all working group members.

Conclusions: An aerosolized plague weapon could cause fever, cough, chest pain, and hemoptysis with signs consistent with severe pneumonia 1 to 6 days after exposure. Rapid evolution of disease would occur in the 2 to 4 days after symptom onset and would lead to septic shock with high mortality without early treatment. Early treatment and prophylaxis with streptomycin or gentamicin or the tetracycline or fluoroquinolone classes of antimicrobials would be advised.

This is the third article in a series entitled Medical and Public Health Management Following the Use of a Biological Weapon: Consensus Statements of the Working Group on Civilian Biodefense. 1,2 The working group has identified a limited number of agents that, if used as weapons, could cause disease and death in sufficient numbers to cripple a city or region. These agents also comprise the top of the list of "Critical Biological Agents" recently developed by the Centers for Disease Control and Prevention (CDC). 3 Yersinia pestis, the causative agent of plague, is one of the most serious of these. Given the availability of Y pestis around the world, capacity for its mass production and aerosol dissemination, difficulty in preventing such activities, high fatality rate of pneumonic plague, and potential for secondary spread of cases during an epidemic, the potential use of plague as a biological weapon is of great concern.

# CONSENSUS METHODS
The working group comprised 25 representatives from major academic medical centers and research, government, military, public health, and emergency management institutions and agencies. MEDLINE databases were searched from January 1966 to June 1998 using the Medical Subject Headings (MeSH) plague, Yersinia pestis, biological weapon, biological terrorism, biological warfare, and biowarfare. Review of the bibliographies of the references identified by this search led to subsequent identification of relevant references published prior to 1966. In addition, participants identified other unpublished references and sources in their fields of expertise. Additional MEDLINE searches were conducted through January 2000 during the review and revisions of the statement.
The first draft of the consensus statement was a synthesis of information obtained in the initial formal evidence-gathering process. Members of the working group were asked to make formal written comments on this first draft of the document in September 1998. The document was revised incorporating changes suggested by members of the working group, which was convened to review the second draft of the document on October 30, 1998. Following this meeting and a second meeting of the working group on May 24, 1999, a third draft of the document was completed, reviewed, and revised. Working group members had a final opportunity to review the document and suggest revisions. The final document incorporates all relevant evidence obtained by the literature search in conjunction with consensus recommendations supported by all working group members.

The assessment and recommendations provided herein represent the best professional judgment of the working group based on data and expertise currently available. The conclusions and recommendations need to be regularly reassessed as new information becomes available.

# HISTORY AND POTENTIAL AS A BIOTERRORIST AGENT
In AD 541, the first recorded plague pandemic began in Egypt and swept across Europe with attributable population losses of between 50% and 60% in North Africa, Europe, and central and southern Asia. 4 The second plague pandemic, also known as the black death or great pestilence, began in 1346 and eventually killed 20 to 30 million people in Europe, one third of the European population. 5 Plague spread slowly and inexorably from village to village by infected rats and humans or more quickly from country to country by ships. The pandemic lasted more than 130 years and had major political, cultural, and religious ramifications. The third pandemic began in China in 1855, spread to all inhabited continents, and ultimately killed more than 12 million people in India and China alone. 4 Small outbreaks of plague continue to occur throughout the world. 4,5

Advances in living conditions, public health, and antibiotic therapy make future pandemics improbable. However, plague outbreaks following use of a biological weapon are a plausible threat. In World War II, a secret branch of the Japanese army, Unit 731, is reported to have dropped plague-infected fleas over populated areas of China, thereby causing outbreaks of plague. 6 In the ensuing years, the biological weapons programs of the United States and the Soviet Union developed techniques to aerosolize plague directly, eliminating dependence on the unpredictable flea vector. In 1970, the World Health Organization (WHO) reported that, in a worst-case scenario, if 50 kg of Y pestis were released as an aerosol over a city of 5 million, pneumonic plague could occur in as many as 150,000 persons, 36,000 of whom would be expected to die. 7 The plague bacilli would remain viable as an aerosol for 1 hour for a distance of up to 10 km. Significant numbers of city inhabitants might attempt to flee, further spreading the disease. 7

While US scientists had not succeeded in making quantities of plague organisms sufficient to use as an effective weapon by the time the US offensive program was terminated in 1970, Soviet scientists were able to manufacture large quantities of the agent suitable for placing into weapons. 8 More than 10 institutes and thousands of scientists were reported to have worked with plague in the former Soviet Union. 8 In contrast, few scientists in the United States study this disease. 9
There is little published information indicating actions of autonomous groups or individuals seeking to develop plague as a weapon. However, in 1995 in Ohio, a microbiologist with suspect motives was arrested after fraudulently acquiring Y pestis by mail. 10 New antiterrorism legislation was introduced in reaction.

# EPIDEMIOLOGY
Naturally Occurring Plague
Human plague most commonly occurs when plague-infected fleas bite humans, who then develop bubonic plague. As a prelude to human epidemics, rats frequently die in large numbers, precipitating the movement of the flea population from its natural rat reservoir to humans. Although most persons infected by this route develop bubonic plague, a small minority will develop sepsis with no bubo, a form of plague termed primary septicemic plague. Neither bubonic nor septicemic plague spreads directly from person to person. A small percentage of patients with bubonic or septicemic plague develop secondary pneumonic plague and can then spread the disease by respiratory droplet. Persons contracting the disease by this route develop primary pneumonic plague. 11

Plague remains an enzootic infection of rats, ground squirrels, prairie dogs, and other rodents on every populated continent except Australia. 4 Worldwide, on average in the last 50 years, 1,700 cases have been reported annually. 4 In the United States, 390 cases of plague were reported from 1947 to 1996, 84% of which were bubonic, 13% septicemic, and 2% pneumonic. Concomitant case fatality rates were 14%, 22%, and 57%, respectively. 12 Most US cases were in New Mexico, Arizona, Colorado, and California. Of the 15 cases following exposure to domestic cats with plague, 4 were primary pneumonic plague. 13 In the United States, the last case of human-to-human transmission of plague occurred in Los Angeles in 1924. 14,15

Although pneumonic plague has rarely been the dominant manifestation of the disease, large outbreaks of pneumonic plague have occurred. 16 In an outbreak in Manchuria in 1910-1911, as many as 60,000 persons developed pneumonic plague; a second large Manchurian pneumonic plague outbreak occurred in 1920-1921. 16,17 As would be anticipated in the preantibiotic era, nearly 100% of these cases were reported to be fatal. 16,17 Reports from the Manchurian outbreaks suggested that indoor contacts of affected patients were at higher risk than outdoor contacts and that cold temperature, increased humidity, and crowding contributed to increased spread. 14,15 In northern India, there was an epidemic of pneumonic plague with 1,400 deaths reported at about the same time. 15 While epidemics of pneumonic plague of this scale have not occurred since, smaller epidemics of pneumonic plague have occurred recently. In 1997 in Madagascar, 1 patient with bubonic plague and secondary pneumonic infection transmitted pneumonic plague to 18 persons, 8 of whom died. 18

# Plague Following Use of a Biological Weapon
The epidemiology of plague following its use as a biological weapon would differ substantially from that of naturally occurring infection. Intentional dissemination of plague would most probably occur via an aerosol of Y pestis, a mechanism that has been shown to produce disease in nonhuman primates. 19 A pneumonic plague outbreak would result, with symptoms initially resembling those of other severe respiratory illnesses.
The size of the outbreak would depend on factors including the quantity of biological agent used, characteristics of the strain, environmental conditions, and methods of aerosolization. Symptoms would begin to occur 1 to 6 days following exposure, and people would die quickly following onset of symptoms. 16 Indications that plague had been artificially disseminated would be the occurrence of cases in locations not known to have enzootic infection, in persons without known risk factors, and in the absence of prior rodent deaths.

# MICROBIOLOGY AND VIRULENCE FACTORS
Y pestis is a nonmotile, gram-negative bacillus, sometimes coccobacillus, that shows bipolar (also termed safety pin) staining with Wright, Giemsa, or Wayson stain (FIGURE 1). 20 Y pestis is a lactose nonfermenter, urease and indole negative, and a member of the Enterobacteriaceae family. 21 It grows optimally at 28°C on blood agar or MacConkey agar, typically requiring 48 hours for observable growth, but colonies are initially much smaller than other Enterobacteriaceae and may be overlooked. Y pestis has a number of virulence factors that enable it to survive in humans by facilitating use of host nutrients, causing damage to host cells, and subverting phagocytosis and other host defense mechanisms. 4,11,21,22

# PATHOGENESIS AND CLINICAL MANIFESTATIONS
Naturally Occurring Plague
In most cases of naturally occurring plague, the bite by a plague-infected flea leads to the inoculation of up to thousands of organisms into a patient's skin. The bacteria migrate through cutaneous lymphatics to regional lymph nodes where they are phagocytosed but resist destruction. They rapidly multiply, causing destruction and necrosis of lymph node architecture, with subsequent bacteremia, septicemia, and endotoxemia that can lead quickly to shock, disseminated intravascular coagulation, and coma. 21

Patients typically develop symptoms of bubonic plague 2 to 8 days after being bitten by an infected flea. There is sudden onset of fever, chills, and weakness and the development of an acutely swollen, tender lymph node, or bubo, up to 1 day later. 23 The bubo most typically develops in the groin, axilla, or cervical region (FIGURE 2, A) and is often so painful that it prevents patients from moving the affected area of the body. Buboes are 1 to 10 cm in diameter, and the overlying skin is erythematous. 21 They are extremely tender, nonfluctuant, and warm and are often associated with considerable surrounding edema, but seldom lymphangitis. Rarely, buboes become fluctuant and suppurate. In addition, pustules or skin ulcerations may occur at the site of the flea bite in a minority of patients.

A small minority of patients infected by fleas develop Y pestis septicemia without a discernible bubo, the form of disease termed primary septicemic plague. 23 Septicemia can also arise secondary to bubonic plague. 21 Septicemic plague may lead to disseminated intravascular coagulation, necrosis of small vessels, and purpuric skin lesions (Figure 2, B). Gangrene of acral regions such as the digits and nose may also occur in advanced disease, a process believed responsible for the name black death in the second plague pandemic (Figure 2, C). 21 However, the finding of gangrene would not be expected to be helpful in diagnosing the disease in the early stages of illness, when early antibiotic treatment could be lifesaving.
Secondary pneumonic plague develops in a minority of patients with bubonic or primary septicemic plague, approximately 12% of total cases in the United States over the last 50 years. 4 This process, termed secondary pneumonic plague, develops via hematogenous spread of plague bacilli to the lungs. Patients commonly have symptoms of severe bronchopneumonia, chest pain, dyspnea, cough, and hemoptysis. 16,21

Primary pneumonic plague resulting from the inhalation of plague bacilli occurs rarely in the United States. 12 Reports of 2 recent cases of primary pneumonic plague, contracted after handling cats with pneumonic plague, reveal that both patients had pneumonic symptoms as well as prominent gastrointestinal symptoms including nausea, vomiting, abdominal pain, and diarrhea. Diagnosis and treatment were delayed more than 24 hours after symptom onset in both patients, both of whom died. 24,25

Less common plague syndromes include plague meningitis and plague pharyngitis. Plague meningitis follows the hematogenous seeding of bacilli into the meninges and is associated with fever and meningismus. Plague pharyngitis follows inhalation or ingestion of plague bacilli and is associated with cervical lymphadenopathy. 21

# Plague Following Use of a Biological Weapon
The pathogenesis and clinical manifestations of plague following a biological attack would be notably different from those of naturally occurring plague. Inhaled aerosolized Y pestis bacilli would cause primary pneumonic plague. The time from exposure to aerosolized plague bacilli until development of first symptoms in humans and nonhuman primates has been found to be 1 to 6 days and, most often, 2 to 4 days. 12,16,19,26 The first sign of illness would be expected to be fever with cough and dyspnea, sometimes with the production of bloody, watery, or, less commonly, purulent sputum. 16,19,27 Prominent gastrointestinal symptoms, including nausea, vomiting, abdominal pain, and diarrhea, might be present. 24,25

The ensuing clinical findings of primary pneumonic plague are similar to those of any severe, rapidly progressive pneumonia and are quite similar to those of secondary pneumonic plague. Clinicopathological features may help distinguish primary from secondary pneumonic plague. 11 In contrast to secondary pneumonic plague, features of primary pneumonic plague would include absence of buboes (except, rarely, cervical buboes) and, on pathologic examination, pulmonary disease with areas of profound lobular exudation and bacillary aggregation. 11 Chest radiographic findings are variable, but bilateral infiltrates or consolidation are common (FIGURE 3). 22 Laboratory studies may reveal leukocytosis with toxic granulations, coagulation abnormalities, aminotransferase elevations, azotemia, and other evidence of multiorgan failure. All are nonspecific findings associated with sepsis and systemic inflammatory response syndrome. 11,21 The time from respiratory exposure to death in humans is reported to have been between 2 and 6 days in epidemics during the preantibiotic era, with a mean of 2 to 4 days in most epidemics. 16

# DIAGNOSIS
Given the rarity of plague infection and the possibility that early cases are a harbinger of a larger epidemic, the first clinical or laboratory suspicion of plague must lead to immediate notification of the hospital epidemiologist or infection control practitioner, health department, and the local or state health laboratory.
Definitive tests can thereby be arranged rapidly through a state reference laboratory or, as necessary, the Diagnostic and Reference Laboratory of the CDC, and early interventions can be instituted. The early diagnosis of plague requires a high index of suspicion in naturally occurring cases and even more so following the use of a biological weapon. There are no effective environmental warning systems to detect an aerosol of plague bacilli. 28 The first indication of a clandestine terrorist attack with plague would most likely be a sudden outbreak of illness presenting as severe pneumonia and sepsis. If there are only small numbers of cases, the possibility that they are plague may at first be overlooked, given their clinical similarity to other bacterial or viral pneumonias and the fact that few Western physicians have ever seen a case of pneumonic plague. However, the sudden appearance of a large number of previously healthy patients with fever, cough, shortness of breath, chest pain, and a fulminant course leading to death should immediately suggest the possibility of pneumonic plague or inhalational anthrax. 1 The presence of hemoptysis in this setting would strongly suggest plague (TABLE 1). 22 There are no widely available rapid diagnostic tests for plague. 28 Tests that would be used to confirm a suspected diagnosis (antigen detection, IgM enzyme immunoassay, immunostaining, and polymerase chain reaction) are available only at some state health departments, the CDC, and military laboratories. 21 The routinely used passive hemagglutination antibody detection assay is typically of only retrospective value, since several days to weeks usually pass after disease onset before antibodies develop. Microbiologic studies are important in the diagnosis of pneumonic plague. A Gram stain of sputum or blood may reveal gram-negative bacilli or coccobacilli. 4,21,29 A Wright, Giemsa, or Wayson stain will often show bipolar staining (Figure 1), and direct fluorescent antibody testing, if available, may be positive. In the unlikely event that a cervical bubo is present in pneumonic plague, an aspirate (obtained with a 20-gauge needle and a 10-mL syringe containing 1-2 mL of sterile saline for infusing the node) may be cultured and similarly stained (Table 1). 22 Cultures of sputum, blood, or lymph node aspirate should demonstrate growth approximately 24 to 48 hours after inoculation. Most microbiology laboratories use either automated or semiautomated bacterial identification systems. Some of these systems may misidentify Y pestis. 12,30 In laboratories without automated bacterial identification, as many as 6 days may be required for identification, and there is some chance that the diagnosis may be missed entirely. Approaches for biochemical characterization of Y pestis are described in detail elsewhere. 20 If a laboratory using automated or nonautomated techniques is notified that plague is suspected, it should split the culture: one culture incubated at 28°C for rapid growth and the second incubated at 37°C for identification of the diagnostic capsular (F1) antigen. Using these methods, up to 72 hours may be required following specimen procurement to make the identification (May Chu, PhD, CDC, Fort Collins, Colo, written communication, April 9, 1999). Antibiotic susceptibility testing should be performed at a reference laboratory because of the lack of standardized susceptibility testing procedures for Y pestis.
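The split-culture workflow just described lends itself to a simple timeline: growth is typically observable 24 to 48 hours after inoculation, and identification of the capsular F1 antigen may take up to 72 hours from specimen procurement. The Python sketch below encodes only those intervals from the text; the function name and output labels are hypothetical, for illustration only.

```python
from datetime import datetime, timedelta

def plague_culture_checkpoints(specimen_received: datetime) -> dict:
    """Illustrative checkpoints for a split Y pestis culture, per the
    intervals cited above: one culture at 28 C for rapid growth, a
    second at 37 C for diagnostic capsular (F1) antigen identification."""
    return {
        "earliest observable growth (24 h)": specimen_received + timedelta(hours=24),
        "latest expected growth (48 h)": specimen_received + timedelta(hours=48),
        "F1 antigen identification (up to 72 h)": specimen_received + timedelta(hours=72),
    }

# Example: specimen received at 8:00 AM.
for label, when in plague_culture_checkpoints(datetime(1999, 4, 9, 8, 0)).items():
    print(f"{label}: {when:%b %d, %H:%M}")
```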
A process establishing criteria and training measures for laboratory diagnosis of this disease is being undertaken jointly by the Association of Public Health Laboratories and the CDC.

# VACCINATION

The US-licensed formaldehyde-killed whole bacilli vaccine was discontinued by its manufacturers in 1999 and is no longer available. Plans for future licensure and production are unclear. This killed vaccine demonstrated efficacy in preventing or ameliorating bubonic disease, but it does not prevent or ameliorate the development of primary pneumonic plague. 19,31 It was used in special circumstances for individuals deemed to be at high risk of developing plague, such as military personnel working in plague-endemic areas, microbiologists working with Y pestis in the laboratory, or researchers working with plague-infected rats or fleas. Research is ongoing in the pursuit of a vaccine that protects against primary pneumonic plague. 22,32

# THERAPY

Recommendations for the use of antibiotics following a plague biological weapon exposure are conditioned by the lack of published trials of plague treatment in humans, the limited number of studies in animals, and the possible requirement to treat large numbers of persons. A number of possible therapeutic regimens for treating plague have yet to be adequately studied or submitted for approval to the Food and Drug Administration (FDA). For these reasons, the working group offers consensus recommendations based on the best available evidence. The recommendations do not necessarily represent uses currently approved by the FDA or an official position on the part of any of the federal agencies whose scientists participated in these discussions. Recommendations will need to be revised as further relevant information becomes available. In the United States during the last 50 years, 4 of the 7 reported primary pneumonic plague patients died. 12 Fatality rates depend on various factors including time to initiation of antibiotics, access to advanced supportive care, and the dose of inhaled bacilli. The fatality rate of patients with pneumonic plague when treatment is delayed more than 24 hours after symptom onset is extremely high. 14,24,25,33 Historically, the preferred treatment for plague infection has been streptomycin, an FDA-approved treatment for plague. 21,34,35 Administered early during the disease, streptomycin has reduced overall plague mortality to the 5% to 14% range. 12,21,34 However, streptomycin is infrequently used in the United States and only modest supplies are available. 35 Gentamicin is not FDA approved for the treatment of plague but has been used successfully 36-39 and is recommended as an acceptable alternative by experts. 23,40 In 1 case series, 8 patients with plague were treated with gentamicin with morbidity or mortality equivalent to that of patients treated with streptomycin (Lucy Boulanger, MD, Indian Health Services, Crown Point, NM, written communication, July 20, 1999). In vitro studies and an in vivo study in mice show equal or improved activity of gentamicin against many strains of Y pestis when compared with streptomycin. 41,42 In addition, gentamicin is widely available, inexpensive, and can be given once daily. 35 Tetracycline and doxycycline also have been used in the treatment and prophylaxis of plague; both are FDA approved for these purposes. In vitro studies have shown that Y pestis susceptibility to tetracycline 43 and doxycycline 41,44 is equivalent to that of the aminoglycosides.
In another investigation, 13% of Y pestis strains in Madagascar were found to have some in vitro resistance to tetracycline. 45 Experimental murine models of Y pestis infection have yielded data that are difficult to extrapolate to humans. Some mouse studies have shown doxycycline to be a highly efficacious treatment of infection 44,46 or prophylaxis 47 against naturally occurring plague strains. Experimental murine infection with F1-deficient variants of Y pestis has shown decreased efficacy of doxycycline, 47,48 but only 1 human case of F1-deficient plague infection has been reported. 49 Russell and colleagues 50 reported poor efficacy of doxycycline against plague-infected mice, but the dosing schedules used in this experiment would have failed to maintain drug levels above the minimum inhibitory concentration because of the short half-life of doxycycline in mice. In another study, doxycycline failed to prevent death in mice intraperitoneally infected with 29 to 290,000 times the median lethal inocula of Y pestis. 51 There are no controlled clinical trials comparing either tetracycline or doxycycline to an aminoglycoside in the treatment of plague, but anecdotal case series and a number of medical authorities support use of this class of antimicrobials for prophylaxis and for therapy in the event that streptomycin or gentamicin cannot be administered. 23,27,38-40,52-54 Based on evidence from in vitro studies, animal studies, and uncontrolled human data, the working group recommends that the tetracycline class of antibiotics be used to treat pneumonic plague if aminoglycoside therapy cannot be administered. This might be the case in a mass casualty scenario in which parenteral therapy is either unavailable or impractical. Doxycycline would be considered pharmacologically superior to other antibiotics in the tetracycline class for this indication because it is well absorbed without food interactions, is well distributed with good tissue penetration, and has a long half-life. 35 The fluoroquinolone family of antimicrobials has demonstrated efficacy in animal studies. Ciprofloxacin has been demonstrated to be at least as efficacious as aminoglycosides and tetracyclines in studies of mice with experimentally induced pneumonic plague. 44,50,51 In vitro studies also suggest equivalent or greater activity of ciprofloxacin, levofloxacin, and ofloxacin against Y pestis when compared with aminoglycosides or tetracyclines. 41,55 However, there have been no trials of fluoroquinolones in human plague, and they are not FDA approved for this indication. Chloramphenicol has been used to treat plague infection and has been recommended for treatment of plague meningitis because of its ability to cross the blood-brain barrier. 21,34 However, human clinical trials demonstrating the superiority of chloramphenicol in the therapy of classic plague infection or plague meningitis have not been performed. It has been associated with dose-dependent hematologic abnormalities and with rare idiosyncratic fatal aplastic anemia. 35 A number of different sulfonamides have been used successfully in the treatment of human plague infection: sulfathiazole, 56 sulfadiazine, sulfamerazine, and trimethoprim-sulfamethoxazole. 57,58 The 1970 WHO analysis reported that sulfadiazine reduced mortality for bubonic plague but was ineffective against pneumonic plague and was less effective than tetracycline overall. 59
In a study comparing trimethoprim-sulfamethoxazole with streptomycin, patients treated with trimethoprim-sulfamethoxazole had a longer median duration of fever and a higher incidence of complications. 58 Authorities have generally considered trimethoprim-sulfamethoxazole a second-tier choice. 21,23,34 Some have recommended sulfonamides only in the setting of pediatric prophylaxis. 22 No sulfonamides have been FDA approved for the treatment of plague. Antimicrobials that have shown poor or only modest efficacy in animal studies include rifampin, aztreonam, ceftazidime, cefotetan, and cefazolin; these antibiotics should not be used. 42 Antibiotic resistance patterns must also be considered in making treatment recommendations. Naturally occurring antibiotic resistance to the tetracycline class of drugs has occurred rarely. 4 Recently, a plasmid-mediated multidrug-resistant strain was isolated in Madagascar. 60 A report published by Russian scientists cited quinolone-resistant Y pestis. 61 There have been assertions that Russian scientists have engineered multidrug-resistant strains of Y pestis, 8 although there is as yet no scientific publication confirming this.

# Recommendations for Antibiotic Therapy

The working group treatment recommendations are based on literature reports on treatment of human disease, reports of studies in animal models, reports on in vitro susceptibility testing, and antibiotic safety. Should antibiotic susceptibility testing reveal resistance, proper antibiotic substitution would need to be made. In a contained casualty setting, a situation in which a modest number of patients require treatment, the working group recommends parenteral antibiotic therapy (TABLE 2); the preferred agents are parenteral streptomycin or gentamicin. However, in a mass casualty setting, intravenous or intramuscular therapy may not be possible for reasons of patient care logistics and/or exhaustion of equipment and antibiotic supplies, and parenteral therapy will need to be supplanted by oral therapy. In a mass casualty setting, the working group recommends oral therapy, preferably with doxycycline (or tetracycline) or ciprofloxacin (Table 2). Patients with pneumonic plague will require substantial advanced medical supportive care in addition to antimicrobial therapy. Complications of gram-negative sepsis would be expected, including adult respiratory distress syndrome, disseminated intravascular coagulation, shock, and multiorgan failure. 23 Once it is known or strongly suspected that pneumonic plague cases are occurring, anyone with fever or cough in the presumed area of exposure should be treated immediately with antimicrobials for presumptive pneumonic plague. Delaying therapy until confirmatory testing is performed would greatly decrease survival. 59 Clinical deterioration of patients despite early initiation of empiric therapy could signal antimicrobial resistance and should be promptly evaluated.

Table 2 footnotes:
*These recommendations are not necessarily approved by the Food and Drug Administration. See "Therapy" section for explanations. One antimicrobial agent should be selected. Therapy should be continued for 10 days. Oral therapy should be substituted when the patient's condition improves. IM indicates intramuscularly; IV, intravenously.
†Aminoglycosides must be adjusted according to renal function. Evidence suggests that gentamicin, 5 mg/kg IM or IV once daily, would be efficacious in children, although this is not yet widely accepted in clinical practice.
Neonates up to 1 week of age and premature infants should receive gentamicin, 2.5 mg/kg IV twice daily.
‡Other fluoroquinolones can be substituted at doses appropriate for age. Ciprofloxacin dosage should not exceed 1 g/d in children.
§Concentration should be maintained between 5 and 20 µg/mL. Concentrations greater than 25 µg/mL can cause reversible bone marrow suppression. 35,62
‖Refer to "Management of Special Groups" for details. In children, the ciprofloxacin dose should not exceed 1 g/d, and chloramphenicol should not exceed 4 g/d. Children younger than 2 years should not receive chloramphenicol.
¶Refer to "Management of Special Groups" for details and for discussion of breastfeeding women. In neonates, a gentamicin loading dose of 4 mg/kg should be given initially. 63
#Duration of treatment of plague in the mass casualty setting is 10 days. Duration of postexposure prophylaxis to prevent plague infection is 7 days.
**Children younger than 2 years should not receive chloramphenicol. Oral formulation available only outside the United States.
††Tetracycline could be substituted for doxycycline.

# Management of Special Groups

Consensus recommendations for special groups as set forth below reflect the clinical and evidence-based judgments of the working group and do not necessarily correspond to FDA-approved use, indications, or labeling.

Children. The treatment of choice for plague in children has been streptomycin or gentamicin. 21,40 If aminoglycosides are not available or cannot be used, recommendations for alternative antimicrobial treatment with efficacy against plague are conditioned by balancing the risks associated with treatment against those posed by pneumonic plague. Children aged 8 years and older can be treated safely with tetracycline antibiotics. 35,40 However, in children younger than 8 years, tetracycline antibiotics may cause discolored teeth, and rare instances of retarded skeletal growth have been reported in infants. 35 Chloramphenicol is considered safe in children except for children younger than 2 years, who are at risk of "gray baby syndrome." 35,40 Some concern exists that fluoroquinolone use in children may cause arthropathy, 35 although fluoroquinolones have been used to treat serious infections in children. 64 No comparative studies assessing the efficacy or safety of alternative treatment strategies for plague in children have been or can be performed. Given these considerations, the working group recommends that children in the contained casualty setting receive streptomycin or gentamicin. In a mass casualty setting or for postexposure prophylaxis, we recommend that doxycycline be used. Alternatives are listed for both settings (Table 2). The working group assessment is that the potential benefits of these antimicrobials in treating pneumonic plague infection substantially outweigh the risks.
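As a worked example of the weight-based arithmetic in the Table 2 footnotes above (gentamicin 5 mg/kg IM or IV once daily in children; 2.5 mg/kg IV twice daily in neonates up to 1 week of age and premature infants), the following sketch performs the multiplication only. The function name and its encoding of the age cutoff are assumptions for illustration; this is not clinical guidance.

```python
def gentamicin_dose_mg(weight_kg: float, neonate_or_premature: bool = False):
    """Arithmetic only, per the Table 2 footnotes quoted above:
    children: 5 mg/kg IM or IV once daily (not yet widely accepted in
    clinical practice); neonates up to 1 week of age and premature
    infants: 2.5 mg/kg IV twice daily. Not clinical guidance."""
    if neonate_or_premature:
        return 2.5 * weight_kg, "IV twice daily"
    return 5.0 * weight_kg, "IM or IV once daily"

# Example: a 20-kg child works out to 100 mg once daily on this arithmetic.
dose, schedule = gentamicin_dose_mg(20.0)
print(f"{dose:.0f} mg, {schedule}")
```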
Pregnant Women. It has been recommended that aminoglycosides be avoided in pregnancy unless severe illness warrants their use, 35,65 but there is no more efficacious treatment for pneumonic plague. Therefore, the working group recommends that pregnant women in the contained casualty setting receive gentamicin (Table 2). Since streptomycin has been associated with rare reports of irreversible deafness in children following fetal exposure, this medication should be avoided if possible. 35 The tetracycline class of antibiotics has been associated with fetal toxicity including retarded skeletal growth, 35 although a large case-control study of doxycycline use in pregnancy showed no significant increase in teratogenic risk to the fetus. 66 Liver toxicity has been reported in pregnant women following large doses of intravenous tetracycline (no longer sold in the United States), but it has also been reported following oral administration of tetracycline to nonpregnant individuals. 35 Balancing the risks of pneumonic plague infection against those associated with doxycycline use in pregnancy, the working group recommends that doxycycline be used to treat pregnant women with pneumonic plague if gentamicin is not available. Of the oral antibiotics historically used to treat plague, only trimethoprim-sulfamethoxazole has a category C pregnancy classification; 65 however, many experts do not recommend trimethoprim-sulfamethoxazole for treatment of pneumonic plague. Therefore, the working group recommends that pregnant women receive oral doxycycline for mass casualty treatment or postexposure prophylaxis. If the patient is unable to take doxycycline or the medication is unavailable, ciprofloxacin or other fluoroquinolones would be recommended in the mass casualty setting (Table 2). For breastfeeding women, the working group recommends that mother and infant receive the same antibiotic, chosen on the basis of what is safest and most effective for the infant: gentamicin in the contained casualty setting and doxycycline in the mass casualty setting. Fluoroquinolones would be the recommended alternative (Table 2).

Immunosuppressed Persons. Antibiotic treatment and postexposure prophylaxis for pneumonic plague in immunosuppressed persons have not been studied in human or animal models of pneumonic plague infection. Therefore, the consensus recommendation is to administer antibiotics according to the guidelines developed for immunocompetent adults and children.

# POSTEXPOSURE PROPHYLAXIS RECOMMENDATIONS

The working group recommends that, in a community experiencing a pneumonic plague epidemic, all persons developing a temperature of 38.5°C or higher or a new cough should promptly begin parenteral antibiotic treatment. If the resources required to administer parenteral antibiotics are unavailable, oral antibiotics should be used according to the mass casualty recommendations (Table 2). For infants in this setting, tachypnea would be an additional indication for immediate treatment. 29 Special measures would need to be initiated for treatment or prophylaxis of those who are either unaware of the outbreak or require special assistance, such as homeless or mentally handicapped persons. Continuing surveillance of patients would be needed to identify individuals and communities at risk requiring postexposure prophylaxis. Asymptomatic persons having household, hospital, or other close contact with persons with untreated pneumonic plague should receive postexposure antibiotic prophylaxis for 7 days 29 and should watch for fever and cough. Close contact is defined as contact with a patient at less than 2 meters. 16,31 Tetracycline, doxycycline, sulfonamides, and chloramphenicol have each been used or recommended as postexposure prophylaxis in this setting. 16,22,29,31,59 Fluoroquinolones could also be used based on studies in mice. 51 The working group recommends doxycycline as the first-choice antibiotic for postexposure prophylaxis; other recommended antibiotics are noted (Table 2). Contacts who develop fever or cough while receiving prophylaxis should seek prompt medical attention and begin antibiotic treatment as described in Table 2.
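The epidemic triggers described in this section reduce to a few explicit rules: immediate treatment for anyone in the exposed community with a temperature of 38.5°C or higher or a new cough (with tachypnea as an additional trigger in infants), and 7 days of prophylaxis for asymptomatic close contacts, defined as contact within 2 meters of a patient with untreated pneumonic plague. A minimal sketch of that logic follows; the type and function names are hypothetical, and any real decision would rest on clinical judgment.

```python
from dataclasses import dataclass
from typing import Optional

FEVER_C = 38.5          # treatment trigger in an epidemic setting
CLOSE_CONTACT_M = 2.0   # definition of close contact
PROPHYLAXIS_DAYS = 7    # duration of postexposure prophylaxis

@dataclass
class Person:
    temperature_c: float
    new_cough: bool
    is_infant: bool = False
    tachypnea: bool = False
    distance_to_untreated_case_m: Optional[float] = None  # None: no known contact

def epidemic_triage(p: Person) -> str:
    """Hypothetical encoding of the working group's triggers above."""
    if p.temperature_c >= FEVER_C or p.new_cough or (p.is_infant and p.tachypnea):
        return "begin antibiotic treatment immediately"
    if (p.distance_to_untreated_case_m is not None
            and p.distance_to_untreated_case_m < CLOSE_CONTACT_M):
        return f"postexposure prophylaxis for {PROPHYLAXIS_DAYS} days; watch for fever and cough"
    return "continue surveillance"

print(epidemic_triage(Person(39.2, new_cough=False)))   # -> treat immediately
print(epidemic_triage(Person(36.8, new_cough=False,
                             distance_to_untreated_case_m=1.0)))  # -> prophylaxis
```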
# INFECTION CONTROL

Previous public health guidelines have advised strict isolation for all close contacts of patients with pneumonic plague who refuse prophylaxis. 29 In the modern setting, however, pneumonic plague has not spread widely or rapidly in a community, 4,14,24 and the working group therefore does not recommend isolation of close contacts who refuse antibiotic prophylaxis. Instead, persons refusing prophylaxis should be carefully watched for the development of fever or cough during the first 7 days after exposure and treated immediately should either occur. Modern experience with person-to-person spread of pneumonic plague is limited; few data are available to make specific recommendations regarding appropriate infection control measures. The available evidence indicates that person-to-person transmission of pneumonic plague occurs via respiratory droplets; transmission by droplet nuclei has not been demonstrated. In large pneumonic plague epidemics earlier this century, transmission to close contacts was prevented by the wearing of masks. 14,16,17 Commensurate with this, existing national infection control guidelines recommend the use of disposable surgical masks to prevent the transmission of pneumonic plague. 29,67 Given the available evidence, the working group recommends that, in addition to beginning antibiotic prophylaxis, persons living or working in close contact with patients with confirmed or suspected pneumonic plague who have had less than 48 hours of antimicrobial treatment should follow respiratory droplet precautions and wear a surgical mask. Further, the working group recommends avoiding unnecessary close contact with patients with pneumonic plague until they have received at least 48 hours of antibiotic therapy and shown clinical improvement. Other standard respiratory droplet precautions (gown, gloves, and eye protection) should be used as well. 29,31 The patient should remain isolated during the first 48 hours of antibiotic therapy and until clinical improvement occurs. 29,31,59 If large numbers of patients make individual isolation impossible, patients with pneumonic plague may be cohorted while undergoing antibiotic therapy. Patients being transported should also wear surgical masks. Hospital rooms of patients with pneumonic plague should receive terminal cleaning in a manner consistent with standard precautions, and clothing or linens contaminated with body fluids of patients infected with plague should be disinfected per hospital protocol. 29 Microbiology laboratory personnel should be alerted when Y pestis is suspected. Four laboratory-acquired cases of plague have been reported in the United States. 68 Simple clinical materials and cultures should be processed under biosafety level 2 conditions. 31,69 Only during activities involving a high potential for aerosol or droplet production (eg, centrifuging, grinding, vigorous shaking, and animal studies) are biosafety level 3 conditions necessary. 69 Bodies of patients who have died following infection with plague should be handled with routine strict precautions. 29
Contact with the remains should be limited to trained personnel, and the safety precautions for transporting corpses for burial should be the same as those for transporting ill patients. 70 Aerosol-generating procedures, such as bone-sawing associated with surgery or postmortem examinations, would be associated with special risks of transmission and are not recommended. If such aerosol-generating procedures are necessary, then high-efficiency particulate air filtered masks and negative-pressure rooms should be used, as would be customary in cases in which contagious biological aerosols, such as Mycobacterium tuberculosis, are deemed a possible risk. 71

# ENVIRONMENTAL DECONTAMINATION

There is no evidence to suggest that residual plague bacilli pose an environmental threat to the population following the dissolution of the primary aerosol. There is no spore form in the Y pestis life cycle, so it is far more susceptible to environmental conditions than sporulating bacteria such as Bacillus anthracis. Moreover, Y pestis is very sensitive to the action of sunlight and heating and does not survive long outside the host. 72 Although some reports suggest that the bacterium may survive in the soil for some time, 72 there is no evidence to suggest environmental risk to humans in this setting and thus no need for environmental decontamination of an area exposed to an aerosol of plague. In the WHO analysis, in a worst-case scenario, a plague aerosol was estimated to be effective and infectious for as long as 1 hour. 7 In the setting of a clandestine release of plague bacilli, the aerosol would have dissipated long before the first case of pneumonic plague occurred.

# ADDITIONAL RESEARCH

Improving the medical and public health response to an outbreak of plague following the use of a biological weapon will require additional knowledge of the organism, its genetics, and its pathogenesis. In addition, improved rapid diagnostic and standard laboratory microbiology techniques are necessary. An improved understanding of prophylactic and therapeutic antibiotic regimens would help define an optimal antibiotic strategy.
The size of the outbreak would depend on factors including the quantity of biological agent used, characteristics of the strain, environmental conditions, and methods of aerosolization. Symptoms would begin to occur 1 to 6 days following exposure, and people would die quickly following onset of symptoms. 16 Indications that plague had been artificially disseminated would be the occurrence of cases in locations not known to have enzootic infection, in persons without known risk factors, and in the absence of prior rodent deaths. # MICROBIOLOGY AND VIRULENCE FACTORS Y pestis is a nonmotile, gram-negative bacillus, sometimes coccobacillus, that shows bipolar (also termed safety pin) staining with Wright, Giemsa, or Way-son stain (FIGURE 1). 20 Y pestis is a lactose nonfermenter, urease and indole negative, and a member of the Enterobacteriaceae family. 21 It grows optimally at 28°C on blood agar or Mac-Conkey agar, typically requiring 48 hours for observable growth, but colonies are initially much smaller than other Enterobacteriaceae and may be overlooked. Y pestis has a number of virulence factors that enable it to survive in humans by facilitating use of host nutrients, causing damage to host cells, and subverting phagocytosis and other host defense mechanisms. 4,11,21,22 # PATHOGENESIS AND CLINICAL MANIFESTATIONS Naturally Occurring Plague In most cases of naturally occurring plague, the bite by a plague-infected flea leads to the inoculation of up to thousands of organisms into a patient's skin. The bacteria migrate through cutaneous lymphatics to regional lymph nodes where they are phagocytosed but resist destruction. They rapidly multiply, causing destruction and necrosis of lymph node architecture with subsequent bacteremia, septicemia, and endotoxemia that can lead quickly to shock, disseminated intravascular coagulation, and coma. 21 Patients typically develop symptoms of bubonic plague 2 to 8 days after being bitten by an infected flea. There is sudden onset of fever, chills, and weakness and the development of an acutely swollen tender lymph node, or bubo, up to 1 day later. 23 The bubo most typically develops in the groin, axilla, or cervical region (FIGURE 2, A) and is often so painful that it prevents patients from moving the affected area of the body. Buboes are 1 to 10 cm in diameter, and the overlying skin is erythematous. 21 They are extremely tender, nonfluctuant, and warm and are often associated with considerable surrounding edema, but seldom lymphangitis. Rarely, buboes become fluctuant and suppurate. In addition, pustules or skin ulcerations may occur at the site of the flea bite in a minority of patients. A small minority of patients infected by fleas develop Y pes-tis septicemia without a discernable bubo, the form of disease termed primary septicemic plague. 23 Septicemia can also arise secondary to bubonic plague. 21 Septicemic plague may lead to disseminated intravascular coagulation, necrosis of small vessels, and purpuric skin lesions (Figure 2, B). Gangrene of acral regions such as the digits and nose may also occur in advanced disease, a process believed responsible for the name black death in the second plague pandemic (Figure 2, C). 21 However, the finding of gangrene would not be expected to be helpful in diagnosing the disease in the early stages of illness when early antibiotic treatment could be lifesaving. 
Secondary pneumonic plague develops in a minority of patients with bubonic or primary septicemic plagueapproximately 12% of total cases in the United States over the last 50 years. 4 This process, termed secondary pneumonic plague, develops via hematogenous spread of plague bacilli to the lungs. Patients commonly have symptoms of severe bronchopneumonia, chest pain, dyspnea, cough, and hemoptysis. 16,21 Primary pneumonic plague resulting from the inhalation of plague bacilli occurs rarely in the United States. 12 Reports of 2 recent cases of primary pneumonic plague, contracted after handling cats with pneumonic plague, reveal that both patients had pneumonic symptoms as well as prominent gastro- intestinal symptoms including nausea, vomiting, abdominal pain, and diarrhea. Diagnosis and treatment were delayed more than 24 hours after symptom onset in both patients, both of whom died. 24,25 Less common plague syndromes include plague meningitis and plague pharyngitis. Plague meningitis follows the hematogenous seeding of bacilli into the meninges and is associated with fever and meningismus. Plague pharyngitis follows inhalation or ingestion of plague bacilli and is associated with cervical lymphadenopathy. 21 # Plague Following Use of a Biological Weapon The pathogenesis and clinical manifestations of plague following a biologi-cal attack would be notably different than naturally occurring plague. Inhaled aerosolized Y pestis bacilli would cause primary pneumonic plague. The time from exposure to aerosolized plague bacilli until development of first symptoms in humans and nonhuman primates has been found to be 1 to 6 days and most often, 2 to 4 days. 12,16,19,26 The first sign of illness would be expected to be fever with cough and dyspnea, sometimes with the production of bloody, watery, or less commonly, purulent sputum. 16,19,27 Prominent gastrointestinal symptoms, including nausea, vomiting, abdominal pain, and diarrhea, might be present. 24,25 The ensuing clinical findings of primary pneumonic plague are similar to those of any severe rapidly progressive pneumonia and are quite similar to those of secondary pneumonic plague. Clinicopathological features may help distinguish primary from secondary pneumonic plague. 11 In contrast to secondary pneumonic plague, features of primary pneumonic plague would include absence of buboes (except, rarely, cervical buboes) and, on pathologic examination, pulmonary disease with areas of profound lobular exudation and bacillary aggregation. 11 Chest radiographic findings are variable but bilateral infiltrates or consolidation are common (FIGURE 3). 22 Laboratory studies may reveal leukocytosis with toxic granulations, co-agulation abnormalities, aminotransferase elevations, azotemia, and other evidence of multiorgan failure. All are nonspecific findings associated with sepsis and systemic inflammatory response syndrome. 11,21 The time from respiratory exposure to death in humans is reported to have been between 2 to 6 days in epidemics during the preantibiotic era, with a mean of 2 to 4 days in most epidemics. 16 # DIAGNOSIS Given the rarity of plague infection and the possibility that early cases are a harbinger of a larger epidemic, the first clinical or laboratory suspicion of plague must lead to immediate notification of the hospital epidemiologist or infection control practitioner, health department, and the local or state health laboratory. 
Definitive tests can thereby be arranged rapidly through a state reference laboratory or, as necessary, the Diagnostic and Reference Laboratory of the CDC and early interventions instituted. The early diagnosis of plague requires a high index of suspicion in naturally occurring cases and even more so following the use of a biological weapon. There are no effective environmental warning systems to detect an aerosol of plague bacilli. 28 The first indication of a clandestine terrorist attack with plague would most likely be a sudden outbreak of illness presenting as severe pneumonia and sepsis. If there are only small numbers of cases, the possibility of them being plague may be at first overlooked given the clinical similarity to other bacterial or viral pneumonias and that few Western physicians have ever seen a case of pneumonic plague. However, the sudden appearance of a large number of previously healthy patients with fever, cough, shortness of breath, chest pain, and a fulminant course leading to death should immediately suggest the possibility of pneumonic plague or inhalational anthrax. 1 The presence of hemoptysis in this setting would strongly suggest plague (TABLE 1). 22 There are no widely available rapid diagnostic tests for plague. 28 Tests that would be used to confirm a suspected diagnosis-antigen detection, IgM enzyme immunoassay, immunostaining, and polymerase chain reactionare available only at some state health departments, the CDC, and military laboratories. 21 The routinely used passive hemagglutination antibody detection assay is typically only of retrospective value since several days to weeks usually pass after disease onset before antibodies develop. Microbiologic studies are important in the diagnosis of pneumonic plague. A Gram stain of sputum or blood may reveal gram-negative bacilli or coccobacilli. 4,21,29 A Wright, Giemsa, or Wayson stain will often show bipolar staining (Figure 1), and direct fluorescent antibody testing, if available, may be positive. In the unlikely event that a cervical bubo is present in pneumonic plague, an aspirate (obtained with a 20gauge needle and a 10-mL syringe containing 1-2 mL of sterile saline for infusing the node) may be cultured and similarly stained (Table 1). 22 Cultures of sputum, blood, or lymph node aspirate should demonstrate growth approximately 24 to 48 hours after inoculation. Most microbiology laboratories use either automated or semiautomated bacterial identification systems. Some of these systems may misidentify Y pestis. 12,30 In laboratories without automated bacterial identification, as many as 6 days may be required for identification, and there is some chance that the diagnosis may be missed entirely. Approaches for biochemical characterization of Y pestis are described in detail elsewhere. 20 If a laboratory using automated or nonautomated techniques is notified that plague is suspected, it should split the culture: 1 culture incubated at 28°C for rapid growth and the second culture incubated at 37°C for identification of the diagnostic capsular (F 1 ) antigen. Using these methods, up to 72 hours may be required following specimen procurement to make the identification (May Chu, PhD, CDC, Fort Collins, Colo, written communication, April 9, 1999). Antibiotic susceptibility testing should be performed at a reference laboratory because of the lack of standardized susceptibility testing procedures for Y pestis. 
A process establishing criteria and training measures for laboratory diagnosis of this disease is being undertaken jointly by the Association of Public Health Laboratories and the CDC. # VACCINATION The US-licensed formaldehyde-killed whole bacilli vaccine was discontinued by its manufacturers in 1999 and is no longer available. Plans for future licensure and production are unclear. This killed vaccine demonstrated efficacy in preventing or ameliorating bubonic disease, but it does not prevent or amelio-rate the development of primary pneumonic plague. 19,31 It was used in special circumstances for individuals deemed to be at high risk of developing plague, such as military personnel working in plague endemic areas, microbiologists working with Y pestis in the laboratory, or researchers working with plagueinfected rats or fleas. Research is ongoing in the pursuit of a vaccine that protects against primary pneumonic plague. 22,32 # THERAPY Recommendations for the use of antibiotics following a plague biological weapon exposure are conditioned by the lack of published trials in treating plague in humans, limited number of studies in animals, and possible requirement to treat large numbers of persons. A number of possible therapeutic regimens for treating plague have yet to be adequately studied or submitted for approval to the Food and Drug Administration (FDA). For these reasons, the working group offers consensus recommendations based on the best available evidence. The recommendations do not necessarily represent uses currently approved by the FDA or an official position on the part of any of the federal agencies whose scientists participated in these discussions. Recommendations will need to be revised as further relevant information becomes available. In the United States during the last 50 years, 4 of the 7 reported primary pneumonic plague patients died. 12 Fatality rates depend on various factors including time to initiation of antibiotics, access to advanced supportive care, and the dose of inhaled bacilli. The fatality rate of patients with pneumonic plague when treatment is delayed more than 24 hours after symptom onset is extremely high. 14,24,25,33 Historically, the preferred treatment for plague infection has been streptomycin, an FDA-approved treatment for plague. 21,34,35 Administered early during the disease, streptomycin has reduced overall plague mortality to the 5% to 14% range. 12,21,34 However, streptomycin is infrequently used in the United States and only modest supplies are available. 35 Gentamicin is not FDA approved for the treatment of plague but has been used successfully [36][37][38][39] and is recommended as an acceptable alternative by experts. 23,40 In 1 case series, 8 patients with plague were treated with gentamicin with morbidity or mortality equivalent to that of patients treated with streptomycin (Lucy Boulanger, MD, Indian Health Services, Crown Point, NM, written communication, July 20, 1999). In vitro studies and an in vivo study in mice show equal or improved activity of gentamicin against many strains of Y pestis when compared with streptomycin. 41,42 In addition, gentamicin is widely available, inexpensive, and can be given once daily. 35 Tetracycline and doxycycline also have been used in the treatment and prophylaxis of plague; both are FDA approved for these purposes. In vitro studies have shown that Y pestis susceptibility to tetracycline 43 and doxycycline 41, 44 is equivalent to that of the aminoglycosides. 
In another investigation, 13% of Y pestis strains in Madagascar were found to have some in vitro resistance to tetracycline. 45 Experimental murine models of Y pestis infection have yielded data that are difficult to extrapolate to humans. Some mouse studies have shown doxycycline to be a highly efficacious treatment of infection 44,46 or prophylaxis 47 against na-turally occurring plague strains. Experimental murine infection with F 1 -deficient variants of Y pestis have shown decreased efficacy of doxycycline, 47,48 but only 1 human case of F 1deficient plague infection has been reported. 49 Russell and colleagues 50 reported poor efficacy of doxycycline against plague-infected mice, but the dosing schedules used in this experiment would have failed to maintain drug levels above the minimum inhibitory concentration due to the short half-life of doxycycline in mice. In another study, doxycycline failed to prevent death in mice intraperitoneally infected with 29 to 290 000 times the median lethal inocula of Y pestis. 51 There are no controlled clinical trials comparing either tetracycline or doxycycline to aminoglycoside in the treatment of plague, but anecdotal case series and a number of medical authorities support use of this class of antimicrobials for prophylaxis and for therapy in the event that streptomycin or gentamicin cannot be administered. 23,27,[38][39][40][52][53][54] Based on evidence from in vitro studies, animal studies, and uncontrolled human data, the working group recommends that the tetracycline class of antibiotics be used to treat pneumonic plague if aminoglycoside therapy cannot be administered. This might be the case in a mass casualty scenario when parenteral therapy was either unavailable or impractical. Doxycycline would be considered pharmacologically superior to other antibiotics in the tetracycline class for this indication, because it is well absorbed without food interactions, is well distributed with good tissue penetration, and has a long half-life. 35 The fluoroquinolone family of antimicrobials has demonstrated efficacy in animal studies. Ciprofloxacin has been demonstrated to be at least as efficacious as aminoglycosides and tetracyclines in studies of mice with experimentally induced pneumonic plague. 44,50,51 In vitro studies also suggest equivalent or greater activity of ciprofloxacin, levofloxacin, and ofloxacin against Y pestis when compared with aminoglycosides or tetracyclines. 41,55 However, there have been no trials of fluoroquinolones in human plague, and they are not FDA approved for this indication. Chloramphenicol has been used to treat plague infection and has been recommended for treatment of plague meningitis because of its ability to cross the blood-brain barrier. 21,34 However, human clinical trials demonstrating the superiority of chloramphenicol in the therapy of classic plague infection or plague meningitis have not been performed. It has been associated with dose dependent hematologic abnormalities and with rare idiosyncratic fatal aplastic anemia. 35 A number of different sulfonamides have been used successfully in the treatment of human plague infection: sulfathiazole, 56 sulfadiazine, sulfamerazine, and trimethoprim-sulfamethoxazole. 57,58 The 1970 WHO analysis reported that sulfadiazine reduced mortality for bubonic plague but was ineffective against pneumonic plague and was less effective than tetracycline overall. 
59 In a study comparing trimethoprim-sulfamethoxazole with streptomycin, patients treated with trimethoprim-sulfamethoxazole had a longer median duration of fever and a higher incidence of complications. 58 Authorities have generally considered trimethoprim-sulfamethoxazole a second-tier choice. 21,23,34 Some have recommended sulfonamides only in the setting of pediatric prophylaxis. 22 No sulfonamides have been FDA approved for the treatment of plague. Antimicrobials that have been shown to have poor or only modest efficacy in animal studies have included rifampin, aztreonam, ceftazidime, cefotetan, and cefazolin; these antibiotics should not be used. 42 Antibiotic resistance patterns must also be considered in making treatment recommendations. Naturally occurring antibiotic resistance to the tetracycline class of drugs has occurred rarely. 4 Recently, a plasmid-mediated multidrug-resistant strain was isolated in Madagascar. 60 A report published by Russian scientists cited quinoloneresistant Y pestis. 61 There have been assertions that Russian scientists have en-gineered multidrug-resistant strains of Y pestis, 8 although there is as yet no scientific publication confirming this. # Recommendations for Antibiotic Therapy The working group treatment recommendations are based on literature reports on treatment of human disease, reports of studies in animal models, reports on in vitro susceptibility testing, and antibiotic safety. Should antibiotic susceptibility testing reveal resistance, proper antibiotic substitution would need to be made. In a contained casualty setting, a situation in which a modest number of patients require treatment, the working group recommends parenteral antibiotic therapy (TABLE 2). Preferred parenteral forms of the antimicrobials streptomycin or gentamicin are recommended. However, in a mass casualty setting, intravenous or intramuscular therapy may not be possible for reasons of patient care logistics and/or exhaustion of equipment and antibiotic supplies, and parenteral therapy will need to be supplanted by oral therapy. In a mass casualty setting, the working group recommends oral therapy, preferably with doxycycline (or tetracycline) or ciprofloxacin (Table 2). Patients with pneumonic plague will require substantial advanced medical supportive care in addition to antimicrobial therapy. Complications of gramnegative sepsis would be expected, including adult respiratory distress syndrome, disseminated intravascular coagulation, shock, and multiorgan failure. 23 Once it was known or strongly suspected that pneumonic plague cases were occurring, anyone with fever or cough in the presumed area of exposure should be immediately treated with antimicrobials for presumptive pneumonic plague. Delaying therapy until confirmatory testing is performed would greatly decrease survival. 59 Clinical deterioration of patients despite early initiation of empiric therapy could signal antimicrobial resistance and should be promptly evaluated. proved by the Food and Drug Administration. See "Therapy" section for explanations. One antimicrobial agent should be selected. Therapy should be continued for 10 days. Oral therapy should be substituted when patient's condition improves. IM indicates intramuscularly; IV, intravenously. †Aminoglycosides must be adjusted according to renal function. Evidence suggests that gentamicin, 5 mg/kg IM or IV once daily, would be efficacious in children, although this is not yet widely accepted in clinical practice. 
Neonates up to 1 week of age and premature infants should receive gentamicin, 2.5 mg/kg IV twice daily. ‡Other fluoroquinolones can be substituted at doses appropriate for age; ciprofloxacin dosage should not exceed 1 g/d in children. §Concentration should be maintained between 5 and 20 µg/mL; concentrations greater than 25 µg/mL can cause reversible bone marrow suppression. 35,62 ∥Refer to "Management of Special Groups" for details. In children, the ciprofloxacin dose should not exceed 1 g/d, and the chloramphenicol dose should not exceed 4 g/d. Children younger than 2 years should not receive chloramphenicol. ¶Refer to "Management of Special Groups" for details and for discussion of breastfeeding women. In neonates, a gentamicin loading dose of 4 mg/kg should be given initially. 63 #Duration of treatment of plague in a mass casualty setting is 10 days; duration of postexposure prophylaxis to prevent plague infection is 7 days. **Children younger than 2 years should not receive chloramphenicol. Oral formulation available only outside the United States. ††Tetracycline could be substituted for doxycycline.

# Management of Special Groups

Consensus recommendations for special groups as set forth below reflect the clinical and evidence-based judgments of the working group and do not necessarily correspond to FDA approved use, indications, or labeling.

Children. The treatment of choice for plague in children has been streptomycin or gentamicin. 21,40 If aminoglycosides are not available or cannot be used, recommendations for alternative antimicrobial treatment with efficacy against plague are conditioned by balancing the risks associated with treatment against those posed by pneumonic plague. Children aged 8 years and older can be treated safely with tetracycline antibiotics. 35,40 However, in children younger than 8 years, tetracycline antibiotics may cause discolored teeth, and rare instances of retarded skeletal growth have been reported in infants. 35 Chloramphenicol is considered safe in children except for children younger than 2 years, who are at risk of "gray baby syndrome." 35,40 Some concern exists that fluoroquinolone use in children may cause arthropathy, 35 although fluoroquinolones have been used to treat serious infections in children. 64 No comparative studies assessing the efficacy or safety of alternative treatment strategies for plague in children have been or can be performed.

Given these considerations, the working group recommends that children in the contained casualty setting receive streptomycin or gentamicin. In a mass casualty setting or for postexposure prophylaxis, we recommend that doxycycline be used. Alternatives are listed for both settings (Table 2). The working group assessment is that the potential benefits of these antimicrobials in the treatment of pneumonic plague substantially outweigh the risks.

Pregnant Women. It has been recommended that aminoglycosides be avoided in pregnancy unless severe illness warrants their use, 35,65 but there is no more efficacious treatment for pneumonic plague. Therefore, the working group recommends that pregnant women in the contained casualty setting receive gentamicin (Table 2). Because streptomycin has been associated with rare reports of irreversible deafness in children following fetal exposure, this medication should be avoided if possible. 35
The tetracycline class of antibiotics has been associated with fetal toxicity, including retarded skeletal growth, 35 although a large case-control study of doxycycline use in pregnancy showed no significant increase in teratogenic risk to the fetus. 66 Liver toxicity has been reported in pregnant women following large doses of intravenous tetracycline (no longer sold in the United States), but it has also been reported following oral administration of tetracycline to nonpregnant individuals. 35 Balancing the risks of pneumonic plague infection with those associated with doxycycline use in pregnancy, the working group recommends that doxycycline be used to treat pregnant women with pneumonic plague if gentamicin is not available.

Of the oral antibiotics historically used to treat plague, only trimethoprim-sulfamethoxazole has a category C pregnancy classification 65; however, many experts do not recommend trimethoprim-sulfamethoxazole for treatment of pneumonic plague. Therefore, the working group recommends that pregnant women receive oral doxycycline for mass casualty treatment or postexposure prophylaxis. If the patient is unable to take doxycycline or the medication is unavailable, ciprofloxacin or other fluoroquinolones would be recommended in the mass casualty setting (Table 2).

The working group recommendation for treatment of breastfeeding women is to provide the mother and infant with the same antibiotic, based on what is safest and most effective for the infant: gentamicin in the contained casualty setting and doxycycline in the mass casualty setting. Fluoroquinolones would be the recommended alternative (Table 2).

Immunosuppressed Persons. Antibiotic treatment and postexposure prophylaxis for pneumonic plague among those who are immunosuppressed have not been studied in human or animal models of pneumonic plague infection. Therefore, the consensus recommendation is to administer antibiotics according to the guidelines developed for immunocompetent adults and children.

# POSTEXPOSURE PROPHYLAXIS RECOMMENDATIONS

The working group recommends that, in a community experiencing a pneumonic plague epidemic, all persons developing a temperature of 38.5°C or higher or a new cough should promptly begin parenteral antibiotic treatment. If the resources required to administer parenteral antibiotics are unavailable, oral antibiotics should be used according to the mass casualty recommendations (Table 2). For infants in this setting, tachypnea would be an additional indication for immediate treatment. 29 Special measures would need to be initiated for treatment or prophylaxis of those who are either unaware of the outbreak or require special assistance, such as homeless persons or mentally handicapped persons. Continuing surveillance of patients would be needed to identify individuals and communities at risk requiring postexposure prophylaxis.

Asymptomatic persons having household, hospital, or other close contact with persons with untreated pneumonic plague should receive postexposure antibiotic prophylaxis for 7 days 29 and should watch for fever and cough. Close contact is defined as contact with a patient at less than 2 meters. 16,31 Tetracycline, doxycycline, sulfonamides, and chloramphenicol have each been used or recommended as postexposure prophylaxis in this setting. 16,22,29,31,59 Fluoroquinolones could also be used based on studies in mice. 51
The working group recommends the use of doxycycline as the first-choice antibiotic for postexposure prophylaxis; other recommended antibiotics are noted (Table 2). Contacts who develop fever or cough while receiving prophylaxis should seek prompt medical attention and begin antibiotic treatment as described in Table 2.

# INFECTION CONTROL

Previous public health guidelines have advised strict isolation for all close contacts of patients with pneumonic plague who refuse prophylaxis. 29 In the modern setting, however, pneumonic plague has not spread widely or rapidly in a community, 4,14,24 and therefore isolation of close contacts refusing antibiotic prophylaxis is not recommended by the working group. Instead, persons refusing prophylaxis should be carefully watched for the development of fever or cough during the first 7 days after exposure and treated immediately should either occur.

Modern experience with person-to-person spread of pneumonic plague is limited; few data are available to make specific recommendations regarding appropriate infection control measures. The available evidence indicates that person-to-person transmission of pneumonic plague occurs via respiratory droplets; transmission by droplet nuclei has not been demonstrated. 14-17 In large pneumonic plague epidemics earlier this century, pneumonic plague transmission was prevented in close contacts by the wearing of masks. 14,16,17 Commensurate with this, existing national infection control guidelines recommend the use of disposable surgical masks to prevent the transmission of pneumonic plague. 29,67

Given the available evidence, the working group recommends that, in addition to beginning antibiotic prophylaxis, persons living or working in close contact with patients with confirmed or suspected pneumonic plague who have had less than 48 hours of antimicrobial treatment should follow respiratory droplet precautions and wear a surgical mask. Further, the working group recommends avoidance of unnecessary close contact with patients with pneumonic plague until the patient has received at least 48 hours of antibiotic therapy and clinical improvement has taken place. Other standard respiratory droplet precautions (gown, gloves, and eye protection) should be used as well. 29,31

The patient should remain isolated during the first 48 hours of antibiotic therapy and until clinical improvement occurs. 29,31,59 If large numbers of patients make individual isolation impossible, patients with pneumonic plague may be cohorted while undergoing antibiotic therapy. Patients being transported should also wear surgical masks. Hospital rooms of patients with pneumonic plague should receive terminal cleaning in a manner consistent with standard precautions, and clothing or linens contaminated with body fluids of patients infected with plague should be disinfected per hospital protocol. 29

Microbiology laboratory personnel should be alerted when Y pestis is suspected. Four laboratory-acquired cases of plague have been reported in the United States. 68 Simple clinical materials and cultures should be processed in biosafety level 2 conditions. 31,69 Only during activities involving high potential for aerosol or droplet production (eg, centrifuging, grinding, vigorous shaking, and animal studies) are biosafety level 3 conditions necessary. 69

Bodies of patients who have died following infection with plague should be handled with routine strict precautions. 29
Contact with the remains should be limited to trained personnel, and the safety precautions for transporting corpses for burial should be the same as those for transporting ill patients. 70 Aerosol-generating procedures, such as bone-sawing associated with surgery or postmortem examinations, would be associated with special risks of transmission and are not recommended. If such aerosol-generating procedures are necessary, then high-efficiency particulate air filtered masks and negative-pressure rooms should be used, as would be customary in cases in which contagious biological aerosols, such as Mycobacterium tuberculosis, are deemed a possible risk. 71

# ENVIRONMENTAL DECONTAMINATION

There is no evidence to suggest that residual plague bacilli pose an environmental threat to the population following the dissolution of the primary aerosol. There is no spore form in the Y pestis life cycle, so it is far more susceptible to environmental conditions than sporulating bacteria such as Bacillus anthracis. Moreover, Y pestis is very sensitive to the action of sunlight and heating and does not survive long outside the host. 72 Although some reports suggest that the bacterium may survive in the soil for some time, 72 there is no evidence to suggest environmental risk to humans in this setting and thus no need for environmental decontamination of an area exposed to an aerosol of plague. In the WHO analysis of a worst-case scenario, a plague aerosol was estimated to be effective and infectious for as long as 1 hour. 7 In the setting of a clandestine release of plague bacilli, the aerosol would have dissipated long before the first case of pneumonic plague occurred.

# ADDITIONAL RESEARCH

Improving the medical and public health response to an outbreak of plague following the use of a biological weapon will require additional knowledge of the organism, its genetics, and pathogenesis. In addition, improved rapid diagnostic and standard laboratory microbiology techniques are necessary. An improved understanding of prophylactic and therapeutic antibiotic regimens would be of benefit in defining optimal antibiotic strategy.
In February 1999, the Advisory Committee on Immunization Practices (ACIP) expanded recommendations for varicella (chickenpox) vaccine to promote wider use of the vaccine for susceptible children and adults. The updated recommendations include establishing child care and school entry requirements, use of the vaccine following exposure and for outbreak control, use of the vaccine for some children infected with the human immunodeficiency virus (HIV), and vaccination of adults and adolescents at high risk for exposure. These recommendations also provide new information on varicella vaccine postlicensure safety data.

# INTRODUCTION

Varicella (i.e., chickenpox) is a highly contagious disease caused by the varicella zoster virus (VZV). Varicella is usually a self-limited disease that lasts 4-5 days and is characterized by fever, malaise, and a generalized vesicular rash typically consisting of 250-500 lesions. Infants, adolescents, adults, and immunocompromised persons are at higher risk for complications. Before the availability of varicella vaccine, varicella disease was responsible for an estimated 4 million cases, 11,000 hospitalizations, and 100 deaths each year in the United States (CDC, unpublished data, 1999). Approximately 90% of cases occurred in children. A vaccine was licensed in the United States in 1995, and the Advisory Committee on Immunization Practices (ACIP) issued recommendations for prevention of varicella in July 1996 (1 ).

# RECOMMENDATIONS

Day Care and School Entry Requirements

Because varicella incidence is highest among children aged 1-6 years, implementing vaccination requirements for child care and school entry will have the greatest impact on reducing disease incidence. ACIP recommends that all states require that children entering child care facilities and elementary schools either have received varicella vaccine or have other evidence of immunity to varicella. Other evidence of immunity should consist of a physician's diagnosis of varicella, a reliable history of the disease, or serologic evidence of immunity. To prevent susceptible older children from entering adulthood without immunity to varicella, states should also consider implementing a policy that requires evidence of varicella vaccination or other evidence of immunity for children entering middle school (or junior high school).

# Postexposure Vaccination and Outbreak Control

Data from the United States and Japan from household, hospital, and community settings indicate that varicella vaccine is effective in preventing illness or modifying varicella severity if used within 3 days, and possibly up to 5 days, of exposure (2-4 ). ACIP now recommends the vaccine for use in susceptible persons following exposure to varicella. If exposure to varicella does not cause infection, postexposure vaccination should induce protection against subsequent exposure. If the exposure results in infection, no evidence indicates that administration of varicella vaccine during the presymptomatic or prodromal stage of illness increases the risk for vaccine-associated adverse events. Although postexposure use of varicella vaccine has potential applications in hospital settings, vaccination is routinely recommended for all susceptible health-care workers and is the preferred method for preventing varicella in health-care settings (1,5 ). Varicella outbreaks in some settings (e.g., child care facilities, schools, institutions) can last 3-6 months.
Varicella vaccine has been used successfully by state and local health departments and by the military for outbreak prevention and control. Therefore, state and local health departments should consider using the vaccine for outbreak control either by advising exposed susceptible persons to contact their health-care providers for vaccination or by offering vaccination through the health department. Guidelines for varicella outbreak investigation and control are available from the National Immunization Program (NIP), CDC.

# Vaccination of Persons Aged ≥13 Years at High Risk for Exposure or Transmission

ACIP has strengthened its recommendations for susceptible persons aged ≥13 years at high risk for exposure or transmission, including designating adolescents and adults living in households with children as a new high-risk group. Varicella vaccine is recommended for susceptible persons in the following high-risk groups: a) persons who live or work in environments where transmission of VZV is likely (e.g., teachers of young children, day care employees, and residents and staff members in institutional settings), b) persons who live and work in environments where transmission can occur (e.g., college students, inmates and staff members of correctional institutions, and military personnel), c) nonpregnant women of childbearing age, d) adolescents and adults living in households with children, and e) international travelers.

# Vaccination of HIV-Infected Children and Other Persons With Altered Immunity

Varicella vaccine is not licensed for use in persons who have blood dyscrasias, leukemia, lymphomas of any type, or other malignant neoplasms affecting the bone marrow or lymphatic systems. The manufacturer makes free vaccine available to any physician through a research protocol for use in patients who have acute lymphoblastic leukemia (ALL) and who meet certain eligibility criteria (1 ). ACIP has previously recommended that varicella vaccine should not be administered to persons with primary or acquired immunodeficiency, including immunosuppression associated with acquired immunodeficiency syndrome (AIDS) or other clinical manifestations of human immunodeficiency virus (HIV) infections, cellular immunodeficiencies, hypogammaglobulinemia, and dysgammaglobulinemia. ACIP maintains its recommendation that varicella vaccine should not be administered to persons who have cellular immunodeficiencies, but persons with impaired humoral immunity may now be vaccinated.

In addition, some HIV-infected children may now be considered for vaccination. Limited data from a clinical trial in which two doses of varicella vaccine were administered to 41 asymptomatic or mildly symptomatic HIV-infected children (CDC class N1 or A1,* age-specific CD4+ T-lymphocyte percentage of ≥25%) (6 ) indicated that the vaccine was immunogenic and effective (Pediatric AIDS Clinical Trial Group, unpublished data, 1999). Because children infected with HIV are at increased risk for morbidity from varicella and herpes zoster (i.e., shingles) compared with healthy children, ACIP recommends that, after weighing potential risks and benefits, varicella vaccine should be considered for asymptomatic or mildly symptomatic HIV-infected children in CDC class N1 or A1 with age-specific CD4+ T-lymphocyte percentages of ≥25%. Eligible children should receive two doses of varicella vaccine with a 3-month interval between doses.
Because persons with impaired cellular immunity are potentially at greater risk for complications after vaccination with a live vaccine, these vaccinees should be encouraged to return for evaluation if they experience a postvaccination varicella-like rash. The use of varicella vaccine in other HIV-infected children is being investigated further. Recommendations regarding use of varicella vaccine in persons with other conditions associated with altered immunity (e.g., immunosuppressive therapy) or in persons receiving steroid therapy have not changed (1 ).

# ADVERSE REACTIONS

# Reporting of Postlicensure Adverse Events

Data on potential adverse events are available from the Vaccine Adverse Event Reporting System (VAERS). During March 1995-July 1998, a total of 9.7 million doses of varicella vaccine were distributed in the United States. During this time, VAERS received 6,580 reports of adverse events, 4% of them serious. Approximately two thirds of the reports were for children aged <10 years. The most frequently reported adverse event was rash (rate: 37/100,000 vaccine doses distributed). Polymerase chain reaction (PCR) analysis confirmed that most rash events occurring within 2 weeks of vaccination were caused by wild-type virus (Merck and Company, Inc., unpublished data, 1998).

Postlicensure VAERS and vaccine manufacturer reports of serious adverse events, without regard to causality, have included encephalitis, ataxia, erythema multiforme, Stevens-Johnson syndrome, pneumonia, thrombocytopenia, seizures, neuropathy, and herpes zoster (CDC, unpublished data, 1998). For serious adverse events for which background incidence data are known, VAERS reporting rates are lower than the rates expected after natural varicella or the background rates of disease in the community (CDC, unpublished data, 1998). However, VAERS data are limited by underreporting and unknown sensitivity of the reporting system, making it difficult to compare adverse event rates following vaccination reported to VAERS with those from complications following natural disease. Nevertheless, the magnitude of these differences makes it likely that serious adverse events following vaccination occur at a substantially lower rate than following natural disease.

In rare cases, a causal relationship between the varicella vaccine and a serious adverse event has been confirmed (e.g., pneumonia in an immunocompromised child or herpes zoster). In some cases, wild-type VZV or other causal organisms have been identified. However, in most cases, data are insufficient to determine a causal association. Of the 14 deaths reported to VAERS, eight had definite other explanations for death, three had other plausible explanations for death, and three had insufficient information to determine causality. One death from natural varicella occurred in a child aged 9 years who died from complications of wild-type VZV 20 months after vaccination.

*In CDC's pediatric HIV classification system, Class 1 is an immunologic category defined as "no evidence of suppression." For this ACIP recommendation, two clinical categories under Class 1 are used: N1, defined as "no signs or symptoms," and A1, defined as "mild signs or symptoms."

# Development of Herpes Zoster

The VAERS rate of herpes zoster after varicella vaccination was 2.6/100,000 vaccine doses distributed (CDC, unpublished data, 1998).
The incidence of herpes zoster after natural varicella infection among healthy children aged <20 years is 68/100,000 person-years (7 ) and, for all ages, 215/100,000 person-years (8 ). However, these rates should be compared cautiously because the latter rates are based on populations monitored for longer time periods than were the vaccinees. For PCR-confirmed herpes zoster cases, the range of onset was 25-722 days after vaccination (Merck and Company, Inc., unpublished data, 1998). Cases of herpes zoster have been confirmed by PCR to be caused by both vaccine virus and wild-type virus, suggesting that some herpes zoster cases in vaccinees might result from antecedent natural varicella infection (Merck and Company, Inc., unpublished data, 1998) (9 ).

# Transmission of Vaccine Virus

Transmission of the vaccine virus is rare and has been documented in immunocompetent persons by PCR analysis on only three occasions out of 15 million doses of varicella vaccine distributed. All three cases resulted in mild disease without complications. In one case, a child aged 12 months transmitted the vaccine virus to his pregnant mother (10 ). The mother elected to terminate the pregnancy, and fetal tissue tested by PCR was negative for varicella vaccine virus. The two other documented cases involved transmission from healthy children aged 1 year to a healthy sibling aged 4 1/2 months and a healthy father, respectively. Secondary transmission has not been documented in the absence of a vesicular rash postvaccination.

# CONCLUSION

This report updates previous ACIP recommendations for the prevention of varicella. Implementing state requirements that children entering day care facilities and schools either have received varicella vaccine or have evidence of immunity will increase vaccine coverage. Vaccination is now recommended for outbreak control and postexposure, and the vaccine is now available to children with humoral immunodeficiencies and selected children with HIV infection (i.e., in CDC Class N1 or A1, with age-specific CD4+ T-lymphocyte percentages of ≥25%). Recommendations for adult vaccination have been strengthened for persons at high risk for exposure and now include adolescents and adults who live in households with children.
for work for the performance of these task order contracts. The Board may revise or accept the IGCE, the task order, and/or some or all of the ABRWH independent dose reconstruction review contractor's bids. These contracts will serve to provide technical support consultation to assist the ABRWH in fulfilling its statutory duty to advise the Secretary, HHS, on the scientific validity and quality of dose estimation and reconstruction efforts under EEOICPA. These discussions will include reviews of the technical proposals to determine adequacy of the proposed approach and associated contract cost estimates. The information being discussed will include information of a confidential nature. The IGCEs will include contract cost estimates, the disclosure of which would adversely impact the Government's negotiating position and strategy in regard to these contracts by giving the ABRWH independent dose reconstruction review contractor undue advantage in determining the price associated with its bids. The meeting will be closed to the public in accordance with provisions set forth regarding subject matter considered confidential under the terms of 5 U.S.C. 552b(c)(9)(B), 48 CFR 5.401(b)(1) and (4), and 48 CFR 7.304(D), and the Determination of the Director of the Management Analysis and Services Office, Centers for Disease Control and Prevention, pursuant to Public Law 92-463. A summary of this meeting will be prepared and submitted within 14 days of the close of the meeting. The agenda is subject to change as priorities dictate.

# factors, determination of the cancer potency factor for the mustard AELs, and practical concerns of conducting air monitoring at the lower exposure limits. The key comments potentially impacting CDC's recommendations are summarized and discussed below:

1. One reviewer remarked that the 5-minute ceiling (ceiling-5M) may require too short an analytical cycle for use with dual-agent air monitoring instrumentation.

Discussion: The ceiling-5M was defined to provide a ceiling value for near-real-time (NRT) corrective action that would protect worker health in the short term and meet the long-term goal of keeping the carcinogenicity risk below one in one million. The 8-hour time-weighted average (TWA) exposure limit recommended by CDC in 1988 was implemented by the chemical demilitarization program as a ceiling value, monitored by NRT instruments having a sampling and analysis cycle time of under 5 minutes. CDC's proposal sought to reflect this conservative implementation of the 1988 criteria.

CDC closely examined the various implicit exposure doses, measured in terms of concentration multiplied by time of exposure (Ct), for various potential exposure scenarios. The ceiling-5M was based upon the analytic cycle times used in the stockpile demilitarization program. Longer sampling and analytic cycle times, such as those used in the monitoring programs for chemical agent storage facilities or the non-stockpile program, could be considered in a similar manner, that is, by evaluating the effect on Ct of changing the duration of potential exposure with varying instrument cycle times. CDC examined the implication of applying the ceiling-5M agent concentration with cycle times greater than 5 minutes. Comments received from the Army indicated that the dual-agent monitors use cycle times of up to 10 minutes. Accordingly, CDC reviewed the impact of using 10- to 15-minute cycle times at the same concentration used with the ceiling-5M.
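To make the Ct arithmetic underlying this review concrete, the following sketch (an illustration only, using the ceiling-5M concentration of 0.003 mg/m³ discussed in the next paragraph) shows how the potential exposure dose scales linearly with instrument cycle time:

# Illustrative sketch (not part of the original notice): potential exposure
# dose Ct = concentration x time at the ceiling-5M concentration.
CEILING_5M = 0.003  # mg/m^3

for cycle_minutes in (5, 10, 15):
    ct = CEILING_5M * cycle_minutes  # mg-min/m^3
    print(f"{cycle_minutes}-minute cycle: Ct = {ct:.3f} mg-min/m^3")

# Prints 0.015, 0.030, and 0.045 mg-min/m^3; the 15-minute figure matches
# the STEL Ct cited in the discussion that follows.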
Both the short-term and long-term health protection goals were met; that is, the effective dose (Ct) associated with this level and duration is still well under the Ct for the acute threshold-of-effects level (referenced in the July 22, 2003, support document for the proposed sulfur mustard AELs), and the carcinogenicity risk per episode would be well under one in one million. The above analysis would suggest that a longer analytic cycle time, even up to the 15 minutes associated with the Army's NRT monitoring definitions, would be acceptable at the concentration proposed with the ceiling-5M. However, real-world leaks, spills, or other unplanned agent releases do not follow a defined pattern of gradual airborne concentration increase. The first cycle of a monitoring alarm could be at much higher concentrations than the ceiling-5M. Consequently, to limit potential agent exposure durations at higher-level exposures, analytic cycle time should be kept as short as practicable.

The final factor considered in CDC's review of this issue is the overall risk management implication of modifying the implied cycle time associated with the ceiling AEL. Clearly, the degree of protectiveness increases as the cycle time decreases, assuming all other quality control criteria remain constant. However, if programmatic delays or extraordinary new personal protective measures are introduced as interim measures in the pursuit of more ideal monitoring capabilities, overall risk could increase to both workers and the public.

In summary, CDC believes that the proposed ceiling-5M was overly prescriptive and possibly counterproductive. Accordingly, CDC has redesignated this AEL as a 15-minute short-term exposure limit (STEL). The concentration value, 0.003 mg/m³, from the ceiling-5M is retained. This STEL is to be monitored with NRT technology using the shortest practicable instrument cycle time. For the maximum 15-minute duration of the STEL, the Ct is 0.045 mg-min/m³.

2. One reviewer remarked that using the proposed general population limit (GPL) for worker protection could result in excessive false-positive situations and attendant disruptions wherever significant interferences might be located.

Discussion: The GPL is a criterion that is set to protect the general public. Community exposure limits are set lower than worker limits to reflect wider variation in human susceptibility than that of the healthy worker population. CDC premised its proposal to use the new GPL as a worker protection criterion on two basic considerations. First, because the GPL is designed to protect the community, it would also be adequate for a worker population. Second, CDC believed that the GPL monitoring historically conducted at demilitarization facility perimeters could similarly be implemented at worker locations to accommodate longer 12-hour shifts.

As discussed in CDC's proposal, the GPL for sulfur mustard was driven largely by the goal of protecting the public at a cancer risk level of less than one cancer incidence in a million exposures at the GPL for 3 continuous years, a risk level that is considered to be negligible. Three years was chosen for the duration of the potential exposure at the GPL because it was believed to be the maximum duration of a campaign in which sulfur mustard munitions would be handled and processed for destruction on a continuing basis.
This assumed exposure scenario is conservative for both the public and workers for a number of reasons:

- No one worker works continuously for 3 years; actual time at work is probably well under one-third of all available hours per year when weekends, holidays, and vacations are considered.
- Demilitarization plant workers, storage site workers, non-stockpile site workers, or others who might reasonably be exposed to chemical agent do not remain stationary at one duty location for extended periods.
- Similarly, individuals within the general community would not normally be anticipated to stay at one location continuously for 3 years.
- Varying meteorological conditions would preclude constant exposure conditions.
- With the rigorous active demilitarization site monitoring and the ongoing routine storage site inspection program, unplanned releases of chemical agent are unlikely to be sustained for any significant duration.
- CDC assumed exposure at the full GPL in its carcinogenicity evaluation even though detection at this level would result in investigation and remedial action. Typically, risk assessment professionals use some fraction of a "practical quantification limit" or detection level.

The above mitigating factors suggest that the long-term exposure scenarios (up to 3 years) used in the sulfur mustard carcinogenicity review overstate the true risk. Accordingly, CDC recommends retaining the proposed GPL for perimeter monitoring stations at demilitarization facilities and for evaluation of the allowable stack concentrations. For worker protection against low-level exposure, CDC now recommends a separate 8-hour TWA as a worker protection limit (WPL) rather than applying the GPL as originally proposed.

In the earlier proposal for mustard AELs, CDC investigated the development of a WPL using the Environmental Protection Agency (EPA) Categorical Regression (CatReg) method. The value derived from this method is 0.0003 mg/m³. This value is in reasonably close agreement with the U.S. Army Center for Health Promotion and Preventive Medicine (CHPPM) reference concentration (RfC)-derived WPL of 0.0004 mg/m³ and the Agency for Toxic Substances and Disease Registry (ATSDR) acute inhalation minimum risk level (MRL) 2 of 0.0007 mg/m³ (1,2). CDC believes that the CHPPM-recommended value for an 8-hour TWA is protective for noncarcinogenic effects and should be implemented for worker protection.

2 ATSDR defines an MRL as "an estimate of daily human exposure to a substance that is likely to be without appreciable risk of adverse noncancer health effects over a specified route and duration of exposure." ATSDR also developed an intermediate MRL (continuous exposure for up to 1 year) for sulfur mustard at a value of 0.00002 mg/m³ that is numerically equivalent to the interim GPL recommended herein.

3. The Army noted that, although CDC specified that the proposed AELs were developed for and based upon agent stockpile demilitarization practice, other non-stockpile and storage situations exist to which the AELs would be applied within other Army programs. Illustrations of a number of such situations and some suggested resolutions were provided for CDC's consideration.

Discussion: In CDC's proposal, the use of Ct evaluations was emphasized as an indication of potential acute exposure dose. For potential applications beyond strict stockpile demilitarization, adjustments to implementation of AELs might be warranted on the basis of site-specific or activity-specific conditions. However, any such AEL implementation and adjustment for site-specific conditions must ensure that the new monitoring action level protects at the potential exposure dose (Ct) so that the recommended 8-hour WPL is not exceeded. Also, NRT monitors should not have action levels set above the recommended STEL.
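One way to read the constraint just stated is as a simple screening check: a candidate site-specific NRT action level passes only if it does not exceed the recommended STEL concentration and if an exposure at that level for one monitoring cycle, averaged over an 8-hour shift, stays within the WPL. The sketch below is illustrative only; the single-episode-per-shift averaging is a simplifying assumption of this example, not CDC methodology.

# Illustrative screening check (assumptions noted above; not CDC methodology).
WPL_8HR_TWA = 0.0004  # mg/m^3, CHPPM-derived 8-hour TWA worker protection limit
STEL = 0.003          # mg/m^3, 15-minute short-term exposure limit

def action_level_acceptable(level, cycle_minutes):
    """Check a candidate NRT action level (mg/m^3) at a given cycle time."""
    if level > STEL:  # NRT action levels should not exceed the STEL
        return False
    # Assume one exposure episode per 8-hour (480-minute) shift.
    eight_hour_twa = level * cycle_minutes / 480.0
    return eight_hour_twa <= WPL_8HR_TWA

print(action_level_acceptable(0.003, 15))  # True; TWA contribution ~0.00009 mg/m^3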
4. Two reviewers commented that CDC's selection of the National Academy of Sciences (NAS) cancer potency factor (CPF) was inappropriate because the benzo[a]pyrene (BaP) index value used was based upon oral, not inhalation, exposure. They also believed that CDC should use the 30-year exposure assumption described in EPA's risk assessment guidelines.

Discussion: To estimate cancer risk, exposure assumptions and a numeric estimate of the potency of carcinogenicity of a substance are necessary. The reviewers believed that CDC should have used a 30-year duration for such exposure at the lifetime adjusted daily dose. CDC appreciates the general desirability of being consistent with established guidelines in risk assessment, but EPA has acknowledged in its 1999 Carcinogen Risk Assessment Guidelines (RAG) that, "in the face of scientific uncertainty, common sense and reasonable application of assumptions and policies are essential to avoid unrealistic estimates of risk" (3). CDC believes that a 30-year, or even a 10-year, exposure assumption significantly overestimates potential exposures by one or more orders of magnitude. For example, members of the general public are highly unlikely to be continually exposed to sulfur mustard, night and day, for 10 or 30 years. Similarly, atmospheric stability, wind speed, and direction are not fixed for years on end. No agent reduction is assumed for environmental degradation or rainfall that would reduce concentrations. No agent reduction is assumed for low-temperature environmental conditions in which mustard agent would not significantly volatilize. No agent reduction is assumed for agent dilution beyond the perimeter of a facility. At agent storage sites, GPL readings are taken daily at the facility perimeter; levels of agent approaching the GPL should be detected within days, not years, of occurrence, and corrective action would be initiated. Historically, agent releases to the environment have been episodic; no indication exists that continuous, long-term, low-level agent releases routinely occur.

CDC's examination of the potential cancer risk associated with the proposed AELs considered only incremental potential risk. That is, historic risk to workers and the public in the vicinity of stockpile storage facilities was not examined. This was because each site would have to be considered individually regarding the amount, nature, and age of stored mustard items; local spatial and meteorologic conditions and their relation to area demographics; and the nature and capabilities of historic storage facility inspection programs. These site-specific factors, coupled with the weak quantification of the cancer potency of sulfur mustard (see discussion below), suggested limited utility in attempting to quantify such potential risk.

The other major criticism received by CDC regarding the carcinogenicity analysis pertained to the use of the NAS-recommended CPF (2000), based upon the potency of sulfur mustard relative to BaP. The NAS recommendation was predicated upon oral dosage, not inhalation.
CDC believed that the other published studies used to support attempts at developing numeric estimates of the CPF for sulfur mustard seriously lacked merit for this application. Although an averaging estimate (i.e., geometric mean) of all the CPFs developed might provide a reasonable estimate, CDC believes that a mathematical manipulation of questionable numbers in no way ensures that the new number is appropriate. Furthermore, without a reasonable basis to suggest that the estimates used in the averaging method bracket the true CPF as applied to humans, CDC should not arbitrarily rely on a number developed in this manner.

CDC agrees with the reviewers that extrapolation between exposure routes is undesirable when examining cancer risk. EPA's 1999 Carcinogen RAG addresses this issue briefly: "In the absence of contrary data, the qualitative default assumption is that, if the agent is absorbed by a route to give an internal dose, it may be carcinogenic by that route" (3). Furthermore, EPA states that, "For screening or hazard ranking, route-to-route extrapolation may be based on assumed quantitative comparability as a default, as long as it is reasonable to assume absorption by compared routes" (3). In light of CDC's reluctance to use CPF-averaged numbers as described above, and in the absence of other, better data, CDC recognized that a route-to-route extrapolation was needed if the carcinogenicity risk through inhalation was to be examined, and consequently CDC based its analysis upon the NAS-recommended potency value.

CDC believes that the reviewers raise a valid point regarding the use of the indexed value as done in the Federal Register proposal. The reasonableness of the assumption that both exposure routes result in comparable agent absorption is debatable. CDC does not believe strongly that such an assumption is valid; consequently, CDC is open to further examination of this issue. CDC does not believe, however, that the CPF geometric mean offers any demonstrable scientific improvement over the route-to-route extrapolation originally used in CDC's proposal.

The reviewers recommend that a range of inhalation cancer slope factors be described according to EPA's Carcinogen RAG. CHPPM presented such a range of factors in the "Evaluation of Airborne Exposure Limits for Sulfur Mustard: Occupational and General Population Exposure Criteria," November 2000, which the reader can consult for insight into the variability of postulated risk depending upon the range of exposure assumptions and CPFs (1). The CHPPM examination is consistent with EPA's guidance. CDC must caution the reader, however, that these numeric estimates are tenuous. Oak Ridge National Laboratory's 1993 discussion of this issue for sulfur mustard carcinogenicity illustrates CDC's concerns: "Unfortunately, quantitative human cancer risk estimates are impractical because the experimental data from animal studies have three large uncertainties:

- Only a few experiments were conducted;
- Many were in a mouse strain that exhibited a high genetic susceptibility to spontaneous pulmonary tumors;
- Routes of administration tested and duration of follow-up observations are not comparable to the human exposures of concern." (4)

In 1991, EPA examined cancer risk estimates that cover the range of cancer slope factors presented in the CHPPM document. EPA observed, "Depending on the unknown true shape of the dose-response curve at low doses, actual risks may be anywhere from this upper bound down to zero" (5).
Similarly, in the 2003 ATSDR Toxicological Profile for Sulfur Mustard, the inhalation cancer effects discussion states, ''- - - in no case was the exposure level or duration quantified, and therefore, these data are inadequate for deriving doseresponse relationships'' (2). CDC recommends that a better characterization of an appropriate cancer slope factor needs to be conducted to set exposure limits. CDC is aware of proposed forthcoming animal research by DoD to examine the chronic impact of long-term exposure to sulfur mustard. CDC encourages this research and the examination of results for possible insights and refinement of an estimate of a more accurate CPF. 5. All four reviewers provided opinions regarding the use of uncertainty factors to derive AELs. One reviewer believed that rationale was sufficient to reduce the total uncertainty used by the National Institute for Occupational Safety and Health (NIOSH) to derive the Immediately Dangerous to Life or Health (IDLH) criterion by a factor of three, which would result in an increase to a value of 2.0 mg/m 3 . Another reviewer wanted to lower the IDLH by a factor of two because of limitations of military studies used to derive the value. Another reviewer believed strongly that the proposed GPL should be reduced by at least an additional factor of 10 to reflect uncertainties not adequately represented by either the CHPPM examination using the RfC method or the CDC examination using the CatReg method. Finally, another reviewer believed that CDC's total uncertainty Federal Register / Vol. 69, No. 85 / Monday, May 3, 2004 / Notices factor of 300 used to derive the GPL was appropriate but recommended that the uncertainty factor for intrahuman variation be decreased from 10 to 3 and the data quality factor be increased from 3 to 10. Supporting rationale was provided for all these opinions. Discussion: Professional judgment is needed in the application of uncertainty factors. As discussed in CDC's original support document, considerable deliberation is ongoing regarding the use of uncertainty factors in risk assessment. No validated or calibrated means exist to precisely quantify total uncertainty used in deriving AELs. This was why CDC considered not only at the RfC, CatReg, and carcinogenicity considerations, but also the risk management aspects of safely managing sulfur mustard agent as associated with the demilitarization program. The reviewer who recommended the minimal 10-fold decrease in the GPL also believed that AELs should be developed independent of risk management considerations. CDC agrees that ideally developed AELs should be independent of existing risk management conditions. One could argue that CDC should ''safe-side'' the AELs by using highest uncertainty factors recommended by all reviewers and ignore any recommendations for reduction of uncertainty factors. Except for compounds exhibiting hormesis, this approach always would be theoretically safer than using a number derived using uncertainty factors that are not on the most conservative end of the spectrum of professional judgment. CDC's mission is to enhance public and worker health protection for people associated with or living near chemical agent demilitarization facilities. CDC believes that real-world risk management must be factored into its deliberations. Otherwise, CDC could increase or extend actual risk in the real world to minimize theoretical or undemonstrated risk. 
EPA's Carcinogenic RAG noted that, ''While it is appropriate to err on the side of protection of health and the environment in the face of scientific uncertainty, common sense and reasonable application of assumptions and policies are essential to avoid unrealistic estimates of risk'' (3,6). Furthermore, CDC/NIOSH policy for potential occupational carcinogens states that ''- - - policy will be the development, whenever possible, of quantitative RELs (recommended exposure limits) that are based on human and/or animal data, as well as on the consideration of technological feasibility for controlling workplace exposures to the REL'' (emphasis added). # Summary and Recommendations Although CDC received only 4 sets of comments on the proposed mustard AELs, these reviewers clearly tried diligently to represent their perspectives and concerns. Three sets of comments focused primarily upon the process used to develop the proposed AELs, and the fourth focused primarily on the practical implications of the proposed values. In addition to the solicited comments described above, CDC had the original proposal reviewed by other government and professional health risk assessment personnel. With the exception of one reviewer, the CDC approach to developing AELs in concert with ongoing risk management provisions of the chemical demilitarization program was not questioned. The examination of the carcinogenicity issue is problematic in that CDC believes that a numeric estimation of a cancer slope factor for mustard is not well supported. The CHPPM review of this issue, through the evaluation of the range of attempts at quantifying upper bound cancer risk from exposure to sulfur mustard, has been referenced herein to provide the reader with that perspective; however, CDC cannot say with confidence that the numeric range of slope factors is likely to provide a reasonable estimate of the true carcinogenic potency of this agent. Because of the uncertainties discussed above, especially the characterization of cancer potency of sulfur mustard, CDC has decided to issue its recommended AELs as interim values pending better understanding of the CPF for this agent. CDC believes that for noncancer effects, the recommended AELs protect worker and public health. Regarding the implied carcinogenicity risk, CDC believes that the strong risk management provisions, such as engineering and administrative controls within demilitarization facilities, extensive low-level air monitoring, and the previously discussed mitigating factors, minimize cancer risk at the interim AELs. In summary, CDC recommends the following: - Defer recommending a cancer potency factor until better data are available. - Redesignate the ceiling-5M value as a 15-minute STEL, limited to one occurrence per day; CDC encourages shortest practicable analytic cycle times. - Apply the U.S. Army CHPPMderived 8-hour WPL for workplace; retain GPL as proposed for use in protecting the general public. - Implement the recommended AELs as interim values, to go into effect on July 1, 2005; values to remain interim until better cancer potency characterization is available or research data indicate the need for revision. - Continue to recommend rigorous risk management analysis and practice as has been associated with the chemical agent demilitarization program practice. - Given the uncertainty in the risk assessment regarding cancer potency, reduced exposures to sulfur mustard to the lowest practicable level. 
- Although CDC does not specifically recommend additional reduction factors for statistical assurance of action at the exposure limit, exposures to sulfur mustard should be minimized given the uncertainties in risk assessment, particularly as related to characterizing carcinogenic potency. Federal Register / Vol. 69, No. 85 / Monday, May 3, 2004 / Notices † The toxicity data for agent T is inadequate for setting exposure limits. The very low vapor pressure for agent T precludes it as a vapor hazard under normal ambient conditions. For sulfur mustard and T mixtures, air monitoring for sulfur mustard alone should be sufficient under most cir cumstances to prevent exposure to T. ‡ To be evaluated with near-real-time instrument using shortest practicable analytic cycle time. No more than one exposure per work-shift. § The 30-minute period is not meant to imply that workers should stay in the work environment any longer than necessary; in fact, they should make every effort to exit immediately. IDLH conditions require highly reliable dermal and respiratory protection. § § Historic monitoring typically is used for time-weighted average (TWA) monitoring where the sample analyzed represents an extended time period, e.g., 8 or 12 hours. Results are not known until laboratory analysis is completed after the sampling event. AELs using historic monitoring are set at levels at which health effects are not expected to occur for most workers. Exposures above the WPL-8, but below the STEL, likewise are not expected to result in significant health effects unless such exposures occur continuously for long periods. With respect to the following collection of information, FDA invites comments on these topics: (1) Whether the proposed collection of information is necessary for the proper performance of FDA's functions, including whether the information will have practical utility; (2) the accuracy of FDA's estimate of the burden of the proposed collection of information, including the validity of the methodology and assumptions used; (3) ways to enhance the quality, utility, and clarity of the information to be collected; and (4) ways to minimize the burden of the collection of information on respondents, including through the use of automated collection techniques, when appropriate, and other forms of information technology. # Agency Information Collection # Animal Drug User Fee Cover Sheet; FDA Form 3547 (OMB Control Number 0910-0539)-Extension Under section 740 of the act, as amended by the Animal Drug User Fee Act (ADUFA) (21 U.S.C. 379j-12), FDA has the authority to assess and collect certain animal drug user fees. Because the submission of user fees concurrently with applications and supplements is required, review of an application cannot begin until the fee is submitted. Under the new statutory provisions (section 740(e) of the act, as amended by ADUFA), animal drug applications and supplemental animal drug applications for which the required fee has not been paid are considered incomplete and are not to be accepted for review by the agency. The types of fees that require a cover sheet are certain animal drug application fees and certain supplemental animal drug application fees. 
The cover sheet, FDA Form 3546, is designed to provide the minimum necessary information to determine whether a fee is required for the review of an application or supplement, to determine the amount of the fee required, and to assure that each animal drug user fee payment and each animal drug application for which payment is made, is appropriately linked to the payment that is made. The form, when completed electronically, will result in the generation of a unique payment identification number used in tracking the payment. FDA will use the information collected, to initiate administrative screening of new animal drug applications and supplements to determine if payment has been received.
for work for the performance of these task order contracts. The Board may revise or accept the IGCE, the task order, and/or some or all of the ABRWH independent dose reconstruction review contractor's bids. These contracts will serve to provide technical support consultation to assist the ABRWH in fulfilling its statutory duty to advise the Secretary, HHS, on the scientific validity and quality of dose estimation and reconstruction efforts under EEOICPA. These discussions will include reviews of the technical proposals to determine adequacy of the proposed approach and associated contract cost estimates. The information being discussed will include information of a confidential nature. The IGCEs will include contract cost estimates, the disclosure of which would adversely impact the Government's negotiating position and strategy with regard to these contracts by giving the ABRWH independent dose reconstruction review contractor undue advantage in determining the price associated with its bids. The meeting will be closed to the public in accordance with provisions set forth regarding subject matter considered confidential under the terms of 5 U.S.C. 552b(c)(9)(B), 48 CFR 5.401(b)(1) and (4), and 48 CFR 7.304(d), and the Determination of the Director of the Management Analysis and Services Office, Centers for Disease Control and Prevention, pursuant to Public Law 92-463. A summary of this meeting will be prepared and submitted within 14 days of the close of the meeting. The agenda is subject to change as priorities dictate.

The comments received addressed uncertainty factors, determination of the cancer potency factor for the mustard AELs, and practical concerns of conducting air monitoring at the lower exposure limits. The key comments potentially impacting CDC's recommendations are summarized and discussed below:

1. One reviewer remarked that the 5-minute ceiling (Ceiling-5M) may require too short an analytic cycle for use with dual-agent air monitoring instrumentation.

Discussion: The Ceiling-5M was defined to provide a ceiling value for near-real-time (NRT) corrective action that would protect worker health in the short term and meet the long-term goal of keeping the carcinogenicity risk below one in one million. The 8-hour time-weighted average (TWA) exposure limit recommended by CDC in 1988 was implemented by the chemical demilitarization program as a ceiling value, monitored by NRT instruments having a sampling and analysis cycle time of under 5 minutes. CDC's proposal sought to reflect this conservative implementation of the 1988 criteria. CDC closely examined the various implicit exposure doses, measured in terms of concentration multiplied by time of exposure (Ct), for various potential exposure scenarios. The ceiling-5M was based upon the analytic cycle times used in the stockpile demilitarization program. Longer sampling and analytic cycle times, such as those used in the monitoring programs for chemical agent storage facilities or the non-stockpile program, could be considered in a similar manner, that is, by evaluating the effect on Ct of changing the duration of potential exposure with varying instrument cycle times. CDC examined the implication of applying the ceiling-5M agent concentration with cycle times greater than 5 minutes. Comments received from the Army indicated that the dual-agent monitors use cycle times of up to 10 minutes. Accordingly, CDC reviewed the impact of using 10- to 15-minute cycle times at the same concentration used with the ceiling-5M.
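The screening arithmetic behind this cycle-time review is simple concentration-times-duration bookkeeping. The sketch below is a minimal illustration, not part of any monitoring program: it computes the exposure dose Ct at the ceiling-5M concentration for the 5-, 10-, and 15-minute cycle times discussed here. The concentration value is taken from this notice; everything else is illustrative.

```python
# Illustrative screening of exposure dose (Ct = concentration x time) for
# near-real-time monitor cycle times, as described in the discussion above.
# The ceiling-5M concentration (0.003 mg/m^3) is from this notice.

CEILING_5M_CONC = 0.003  # mg/m^3, concentration value of the ceiling-5M / STEL

def exposure_ct(concentration_mg_m3: float, duration_min: float) -> float:
    """Return the exposure dose Ct in mg-min/m^3."""
    return concentration_mg_m3 * duration_min

for cycle_time in (5, 10, 15):
    ct = exposure_ct(CEILING_5M_CONC, cycle_time)
    print(f"{cycle_time:>2}-minute cycle at ceiling-5M: Ct = {ct:.3f} mg-min/m^3")

# Prints 0.015, 0.030, and 0.045 mg-min/m^3; the 15-minute value matches the
# STEL Ct of 0.045 mg-min/m^3 cited later in this notice.
```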
Both the short-term and long-term health protection goals were met; that is, the effective dose (Ct) associated with this level and duration is still well under the Ct for the acute threshold-of-effects level (referenced in the July 22, 2003, support document for the proposed sulfur mustard AELs), and the carcinogenicity risk per episode would be well under one in one million. The above analysis suggests that a longer analytic cycle time, even up to the 15 minutes associated with the Army's NRT monitoring definitions, would be acceptable at the concentration proposed with the ceiling-5M. However, real-world leaks, spills, or other unplanned agent releases do not follow a defined pattern of gradual airborne concentration increase. The first cycle of a monitoring alarm could be at much higher concentrations than the ceiling-5M. Consequently, to limit potential agent exposure durations at higher-level exposures, analytic cycle time should be kept as short as practicable. The final factor considered in CDC's review of this issue is the overall risk management implication of modifying the implied cycle time associated with the ceiling AEL. Clearly, the degree of protectiveness increases as the cycle time decreases, assuming all other quality control criteria remain constant. However, if programmatic delays or extraordinary new personnel protective measures are introduced as interim measures in the pursuit of more ideal monitoring capabilities, overall risk could increase to both workers and the public. In summary, CDC believes that the proposed ceiling-5M was overly prescriptive and possibly counterproductive. Accordingly, CDC redesignated this AEL as a 15-minute short-term exposure limit (STEL). The concentration value, 0.003 mg/m³, from the ceiling-5M is retained. This STEL is to be monitored with NRT technology using the shortest practicable instrument cycle time. For the maximum 15-minute duration of the STEL, the Ct is 0.045 mg-min/m³.

2. One reviewer remarked that using the proposed general population limit (GPL) for worker protection could result in excessive false-positive situations and attendant disruptions wherever significant interferences might be located.

Discussion: The GPL is a criterion that is set to protect the general public. Community exposure limits are set lower than worker limits to reflect wider variation in human susceptibility than that of the healthy worker population. CDC premised its proposal to use the new GPL as a worker protection criterion on two basic considerations. First, because the GPL is designed to protect the community, it would also be adequate for a worker population. Second, CDC believed that historic monitoring for the GPL, as used for demilitarization perimeter monitoring, similarly could be implemented in worker locations to accommodate longer 12-hour shifts. As discussed in CDC's proposal, the GPL for sulfur mustard was driven largely by the goal of protecting the public at a cancer risk level of less than one cancer incidence in a million exposures at the GPL for 3 continuous years, a risk level that is considered to be negligible. Three years was chosen for the duration of the potential exposure at the GPL because it was believed to be the maximum duration of a campaign where sulfur mustard munitions would be handled and processed for destruction on a continuing basis.
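The structure of this negligible-risk screen can be sketched as a standard incremental-risk calculation. In the sketch below, the GPL concentration (0.00002 mg/m³, the value cited in the footnote on the ATSDR intermediate MRL), the 3-year campaign duration, and the one-in-a-million benchmark are from this notice; the inhalation rate, body weight, and 70-year averaging time are generic EPA-style defaults, and the cancer potency factor is a placeholder only, since this notice expressly declines to endorse a numeric CPF for sulfur mustard.

```python
# Minimal sketch of the incremental lifetime cancer risk screening described
# above: continuous exposure at the GPL for a 3-year campaign, prorated over
# a 70-year lifetime. The CPF below is a PLACEHOLDER, not an endorsed value.

GPL = 2e-5            # mg/m^3, general population limit for sulfur mustard
INHALATION_RATE = 20  # m^3/day, default adult inhalation rate (assumption)
BODY_WEIGHT = 70      # kg, default adult body weight (assumption)
EXPOSURE_YEARS = 3    # maximum campaign duration assumed in the notice
LIFETIME_YEARS = 70   # averaging time for lifetime risk (assumption)
CPF = 4.0             # (mg/kg-day)^-1 -- hypothetical placeholder only

# Lifetime average daily dose (mg/kg-day), averaged over the full lifetime.
ladd = GPL * INHALATION_RATE * (EXPOSURE_YEARS / LIFETIME_YEARS) / BODY_WEIGHT

risk = CPF * ladd
print(f"LADD = {ladd:.2e} mg/kg-day; incremental risk = {risk:.1e}")
print(f"Below 1-in-1,000,000 screening level: {risk < 1e-6}")
```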
This assumed exposure scenario is conservative for both the public and workers for a number of reasons:
• No one worker works continuously for 3 years; actual time at work is probably well under one-third of all available hours per year when weekends, holidays, and vacations are considered.
• Demilitarization plant workers, storage site workers, non-stockpile site workers, or others who might reasonably be exposed to chemical agent do not remain stationary at one duty location for extended periods.
• Similarly, individuals within the general community would not normally be anticipated to stay at one location continuously for 3 years.
• Varying meteorological conditions would preclude constant exposure conditions.
• With the rigorous active demilitarization site monitoring and the ongoing routine storage site inspection program, unplanned releases of chemical agent are unlikely to be sustained for any significant duration.
• CDC assumed exposure at the full GPL in its carcinogenicity evaluation even though detection at this level would result in investigation and remedial action. Typically, risk assessment professionals use some fraction of a ''practical quantification limit'' or detection level.

The above mitigating factors suggest that the long-term exposure scenario (up to 3 years) used in the sulfur mustard carcinogenicity review overstates the true risk. Accordingly, CDC recommends retaining the proposed GPL for perimeter monitoring stations at demilitarization facilities and for evaluation of allowable stack concentrations.

For worker protection against low-level exposure, CDC now recommends a separate 8-hour TWA as a worker protection limit (WPL) rather than applying the GPL as originally proposed. In the earlier proposal for mustard AELs, CDC investigated the development of a WPL using the Environmental Protection Agency (EPA) Categorical Regression (CatReg) method. The value derived from this method is 0.0003 mg/m³. This value is in reasonably close agreement with the U.S. Army Center for Health Promotion and Preventive Medicine (CHPPM) reference concentration (RfC)-derived WPL of 0.0004 mg/m³ and the Agency for Toxic Substances and Disease Registry (ATSDR) acute inhalation minimal risk level (MRL)² of 0.0007 mg/m³ (1,2). CDC believes that the CHPPM-recommended value for an 8-hour TWA is protective for noncarcinogenic effects and should be implemented for worker protection.

² ATSDR defines an MRL as ''an estimate of daily human exposure to a substance that is likely to be without appreciable risk of adverse noncancer health effects over a specified route and duration of exposure.'' ATSDR also developed an intermediate MRL (continuous exposure for up to 1 year) for sulfur mustard at a value of 0.00002 mg/m³ that is numerically equivalent to the interim GPL recommended herein.

3. The Army noted that, although CDC specified that the proposed AELs were developed for and based upon agent stockpile demilitarization practice, other non-stockpile and storage situations existed to which the AELs would be applied within other Army programs. Illustrations of a number of such situations and some suggested resolutions were provided for CDC's consideration.

Discussion: In CDC's proposal, the use of Ct evaluations was emphasized as an indication of potential acute exposure dose. For potential applications beyond strict stockpile demilitarization, adjustments to implementation of AELs might be warranted on the basis of site-specific or activity-specific conditions. However, any such AEL implementation and adjustment for site-specific conditions must ensure that the new monitoring action level protects at the potential exposure dose (Ct) so that the recommended 8-hour WPL is not exceeded. Also, any NRT monitors should not have action levels set above the recommended STEL.
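The two constraints just stated lend themselves to a simple screening check. The sketch below, an illustration only, tests whether a candidate site-specific action level and assumed exposure duration keep the implied 8-hour TWA at or below the WPL and the action level itself at or below the STEL; the WPL and STEL values are from this notice, while the candidate action level and duration are hypothetical.

```python
# Sketch of the Ct-based screening described in the comment 3 discussion.
# The WPL (0.0004 mg/m^3) and STEL (0.003 mg/m^3) are from this notice; the
# candidate action level and exposure duration below are hypothetical.

WPL_8HR = 0.0004    # mg/m^3, CHPPM-derived 8-hour worker protection limit
STEL = 0.003        # mg/m^3, 15-minute short-term exposure limit
SHIFT_MIN = 8 * 60  # averaging period for the 8-hour TWA, in minutes

def acceptable(action_level: float, exposure_min: float) -> bool:
    """True if the implied exposure passes both screening checks."""
    twa = action_level * exposure_min / SHIFT_MIN  # contribution to 8-hr TWA
    return twa <= WPL_8HR and action_level <= STEL

# Hypothetical example: a 0.002 mg/m^3 action level with a 30-minute assumed
# exposure contributes 0.000125 mg/m^3 to the 8-hour TWA, well under the WPL.
print(acceptable(action_level=0.002, exposure_min=30))  # True
```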
4. Two reviewers commented that CDC's selection of the National Academy of Sciences (NAS) cancer potency factor (CPF) was inappropriate because the benzo[a]pyrene (BaP) index value used was based upon oral, not inhalation, exposure. They also believed that CDC should use the 30-year exposure assumption described in EPA's risk assessment guidelines.

Discussion: To estimate cancer risk, exposure assumptions and a numeric estimate of the carcinogenic potency of a substance are necessary. The reviewers believed that CDC should have used a 30-year duration for such exposure at the lifetime adjusted daily dose. CDC appreciates the general desirability of being consistent with established guidelines in risk assessment, but EPA has acknowledged in its 1999 Carcinogen Risk Assessment Guidelines (RAG) that ''in the face of scientific uncertainty, common sense and reasonable application of assumptions and policies are essential to avoid unrealistic estimates of risk'' (3). CDC believes that a 30-year, or even a 10-year, exposure assumption significantly overestimates potential exposures by one or more orders of magnitude. For example, members of the general public are highly unlikely to be continually exposed to sulfur mustard, night and day, for 10 or 30 years. Similarly, atmospheric stability, wind speed, and direction are not fixed for years on end. No agent reduction is assumed for environmental degradation or rainfall that would reduce concentrations. No agent reduction is assumed for low-temperature environmental conditions in which mustard agent would not significantly volatilize. No agent reduction is assumed for agent dilution beyond the perimeter of a facility. At agent storage sites, GPL readings are taken daily at the facility perimeter. Levels of agent approaching the GPL should be detected within days, not years, of occurrence, and corrective action would be initiated. Historically, agent releases to the environment have been episodic; no indication exists that continuous, long-term, low-level agent releases routinely occur.

CDC's examination of the potential cancer risk associated with the proposed AELs considered only incremental potential risk. That is, historic risk to workers and the public in the vicinity of stockpile storage facilities was not examined. This was because each site would have to be considered individually regarding the amount, nature, and age of stored mustard items; local spatial and meteorologic conditions and their relation to area demographics; and the nature and capabilities of historic storage facility inspection programs. These site-specific factors, coupled with a weak quantification of the cancer potency of sulfur mustard (see discussion below), suggested limited utility in attempting to quantify such potential risk.

The other major criticism received by CDC regarding the carcinogenicity analysis pertained to the use of the NAS-recommended CPF (2000) based upon sulfur mustard relative potency compared with BaP. The NAS recommendation was predicated upon oral dosage, not inhalation.
CDC believed that the other published studies used to support attempts at developing numeric estimates of the CPF for sulfur mustard seriously lacked merit for this application. Although an averaging estimate (i.e., geometric mean) of all the CPFs developed might provide a reasonable estimate, CDC believes that a mathematic manipulation of questionable numbers in no way ensures that the new number is appropriate. Furthermore, CDC believes that, without a reasonable basis to suggest that the estimates used in the averaging method bracket the true CPF as applied to humans, CDC should not arbitrarily rely on a number developed in this manner. CDC agrees with the reviewers that extrapolation between exposure routes is undesirable when examining cancer risk. EPA's 1999 Carcinogen RAG addresses this issue briefly: ''In the absence of contrary data, the qualitative default assumption is that, if the agent is absorbed by a route to give an internal dose, it may be carcinogenic by that route'' (3). Furthermore, EPA states that, ''For screening or hazard ranking, route-to-route extrapolation may be based on assumed quantitative comparability as a default, as long as it is reasonable to assume absorption by compared routes'' (3). In light of CDC's reluctance to use CPF-averaged numbers as described above, and in the absence of other, better data, CDC recognized that a route-to-route extrapolation was needed if the carcinogenicity risk through inhalation was to be examined, and consequently based its analysis upon the NAS-recommended potency value. CDC believes that the reviewers raise a valid point regarding the use of the indexed value as done in the Federal Register proposal. The reasonableness of the assumption that both exposure routes result in comparable agent absorption is debatable. CDC does not believe strongly that such an assumption is valid; consequently, CDC is open to further examination of this issue. CDC does not believe that the CPF geometric mean offers any demonstrable scientific improvement over the route-to-route extrapolation originally used in CDC's proposal.

The reviewers recommend that a range of inhalation cancer slope factors be described according to EPA's Carcinogen RAG. CHPPM presented such a range of factors in the ''Evaluation of Airborne Exposure Limits for Sulfur Mustard: Occupational and General Population Exposure Criteria,'' November 2000; the reader can refer to that document for insight into the variability of postulated risk across a range of exposure assumptions and CPFs (1). The CHPPM examination is consistent with EPA's guidance. CDC must caution the reader, however, that these numeric estimates are tenuous. Oak Ridge National Laboratory's 1993 discussion of this issue for sulfur mustard carcinogenicity illustrates CDC's concerns: ''Unfortunately, quantitative human cancer risk estimates are impractical because the experimental data from animal studies have three large uncertainties:
• Only a few experiments were conducted;
• Many were in a mouse strain that exhibited a high genetic susceptibility to spontaneous pulmonary tumors;
• Routes of administration tested and duration of follow-up observations are not comparable to the human exposures of concern.'' (4)

In 1991, EPA examined cancer risk estimates that cover the range of cancer slope factors presented in the CHPPM document. EPA observed, ''Depending on the unknown true shape of the dose-response curve at low doses, actual risks may be anywhere from this upper bound down to zero'' (5).
Similarly, in the 2003 ATSDR Toxicological Profile for Sulfur Mustard, the inhalation cancer effects discussion states, ''* * * in no case was the exposure level or duration quantified, and therefore, these data are inadequate for deriving dose-response relationships'' (2). CDC recommends that an appropriate cancer slope factor be better characterized before it is used to set exposure limits. CDC is aware of proposed forthcoming animal research by DoD to examine the chronic impact of long-term exposure to sulfur mustard. CDC encourages this research and the examination of its results for possible insights and a more accurate estimate of the CPF.

5. All four reviewers provided opinions regarding the use of uncertainty factors to derive AELs. One reviewer believed that the rationale was sufficient to reduce the total uncertainty factor used by the National Institute for Occupational Safety and Health (NIOSH) to derive the Immediately Dangerous to Life or Health (IDLH) criterion by a factor of three, which would increase the IDLH to a value of 2.0 mg/m³. Another reviewer wanted to lower the IDLH by a factor of two because of limitations of the military studies used to derive the value. Another reviewer believed strongly that the proposed GPL should be reduced by at least an additional factor of 10 to reflect uncertainties not adequately represented by either the CHPPM examination using the RfC method or the CDC examination using the CatReg method. Finally, another reviewer believed that CDC's total uncertainty factor of 300 used to derive the GPL was appropriate but recommended that the uncertainty factor for intrahuman variation be decreased from 10 to 3 and the data quality factor be increased from 3 to 10. Supporting rationale was provided for all these opinions.

Discussion: Professional judgment is needed in the application of uncertainty factors. As discussed in CDC's original support document, considerable deliberation is ongoing regarding the use of uncertainty factors in risk assessment. No validated or calibrated means exist to precisely quantify the total uncertainty used in deriving AELs. This is why CDC considered not only the RfC, CatReg, and carcinogenicity analyses but also the risk management aspects of safely managing sulfur mustard agent in the demilitarization program. The reviewer who recommended the minimum 10-fold decrease in the GPL also believed that AELs should be developed independent of risk management considerations. CDC agrees that, ideally, AELs should be developed independent of existing risk management conditions. One could argue that CDC should ''safe-side'' the AELs by using the highest uncertainty factors recommended by all reviewers and ignoring any recommendations for reduction of uncertainty factors. Except for compounds exhibiting hormesis, this approach always would be theoretically safer than using a number derived with uncertainty factors that are not on the most conservative end of the spectrum of professional judgment. CDC's mission is to enhance public and worker health protection for people associated with or living near chemical agent demilitarization facilities. CDC believes that real-world risk management must be factored into its deliberations. Otherwise, CDC could increase or extend actual risk in the real world to minimize theoretical or undemonstrated risk.
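The arithmetic of the total uncertainty factor is worth making explicit. In the sketch below, the intrahuman factor of 10 and data quality factor of 3 are taken from the reviewer comments above; treating the remaining factor of 10 as interspecies extrapolation is an assumption, and the point-of-departure value is hypothetical, chosen only so that the derived limit equals the interim GPL of 0.00002 mg/m³.

```python
# Illustrative composition of the total uncertainty factor of 300 discussed
# above. The point of departure (POD) below is a HYPOTHETICAL value chosen
# for illustration; it is not taken from the notice.

from math import prod

POD = 0.006  # mg/m^3 -- hypothetical point of departure, illustration only

uncertainty_factors = {"interspecies": 10, "intrahuman": 10, "data quality": 3}
total_uf = prod(uncertainty_factors.values())  # 300, as in the notice
gpl_like = POD / total_uf

# The reviewer's suggested swap (intrahuman 10 -> 3, data quality 3 -> 10)
# leaves the product, and hence the derived limit, unchanged:
swapped = {"interspecies": 10, "intrahuman": 3, "data quality": 10}
assert prod(swapped.values()) == total_uf

print(f"total UF = {total_uf}; derived limit = {gpl_like:.1e} mg/m^3")
```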
EPA's Carcinogen RAG noted that, ''While it is appropriate to err on the side of protection of health and the environment in the face of scientific uncertainty, common sense and reasonable application of assumptions and policies are essential to avoid unrealistic estimates of risk'' (3,6). Furthermore, CDC/NIOSH policy for potential occupational carcinogens states that ''* * * policy will be the development, whenever possible, of quantitative RELs (recommended exposure limits) that are based on human and/or animal data, as well as on the consideration of technological feasibility for controlling workplace exposures to the REL'' (emphasis added).

# Summary and Recommendations

Although CDC received only four sets of comments on the proposed mustard AELs, these reviewers clearly tried diligently to represent their perspectives and concerns. Three sets of comments focused primarily upon the process used to develop the proposed AELs, and the fourth focused primarily on the practical implications of the proposed values. In addition to the solicited comments described above, CDC had the original proposal reviewed by other government and professional health risk assessment personnel. With the exception of one reviewer, the CDC approach of developing AELs in concert with ongoing risk management provisions of the chemical demilitarization program was not questioned. The examination of the carcinogenicity issue is problematic in that CDC believes that a numeric estimate of a cancer slope factor for mustard is not well supported. The CHPPM review of this issue, through its evaluation of the range of attempts at quantifying upper-bound cancer risk from exposure to sulfur mustard, has been referenced herein to provide the reader with that perspective; however, CDC cannot say with confidence that the numeric range of slope factors is likely to provide a reasonable estimate of the true carcinogenic potency of this agent. Because of the uncertainties discussed above, especially the characterization of the cancer potency of sulfur mustard, CDC has decided to issue its recommended AELs as interim values pending better understanding of the CPF for this agent. CDC believes that for noncancer effects, the recommended AELs protect worker and public health. Regarding the implied carcinogenicity risk, CDC believes that the strong risk management provisions, such as engineering and administrative controls within demilitarization facilities, extensive low-level air monitoring, and the previously discussed mitigating factors, minimize cancer risk at the interim AELs.

In summary, CDC recommends the following:
• Defer recommending a cancer potency factor until better data are available.
• Redesignate the ceiling-5M value as a 15-minute STEL, limited to one occurrence per day; CDC encourages the shortest practicable analytic cycle times.
• Apply the U.S. Army CHPPM-derived 8-hour WPL for the workplace; retain the GPL as proposed for use in protecting the general public.
• Implement the recommended AELs as interim values, to go into effect on July 1, 2005; values to remain interim until better cancer potency characterization is available or research data indicate the need for revision.
• Continue to recommend rigorous risk management analysis and practice as has been associated with the chemical agent demilitarization program.
• Given the uncertainty in the risk assessment regarding cancer potency, reduce exposures to sulfur mustard to the lowest practicable level.
• Although CDC does not specifically recommend additional reduction factors for statistical assurance of action at the exposure limit, exposures to sulfur mustard should be minimized given the uncertainties in the risk assessment, particularly as related to characterizing carcinogenic potency.

[Table of recommended interim airborne exposure limits for sulfur mustard not reproduced; its footnotes follow.]

† The toxicity data for agent T are inadequate for setting exposure limits. The very low vapor pressure of agent T precludes it as a vapor hazard under normal ambient conditions. For sulfur mustard and T mixtures, air monitoring for sulfur mustard alone should be sufficient under most circumstances to prevent exposure to T.
‡ To be evaluated with a near-real-time instrument using the shortest practicable analytic cycle time. No more than one exposure per work shift.
§ The 30-minute period is not meant to imply that workers should stay in the work environment any longer than necessary; in fact, they should make every effort to exit immediately. IDLH conditions require highly reliable dermal and respiratory protection.
§§ Historic monitoring typically is used for time-weighted average (TWA) monitoring where the sample analyzed represents an extended time period, e.g., 8 or 12 hours. Results are not known until laboratory analysis is completed after the sampling event. AELs using historic monitoring are set at levels at which health effects are not expected to occur for most workers. Exposures above the WPL-8, but below the STEL, likewise are not expected to result in significant health effects unless such exposures occur continuously for long periods.

With respect to the following collection of information, FDA invites comments on these topics: (1) Whether the proposed collection of information is necessary for the proper performance of FDA's functions, including whether the information will have practical utility; (2) the accuracy of FDA's estimate of the burden of the proposed collection of information, including the validity of the methodology and assumptions used; (3) ways to enhance the quality, utility, and clarity of the information to be collected; and (4) ways to minimize the burden of the collection of information on respondents, including through the use of automated collection techniques, when appropriate, and other forms of information technology.

# Agency Information Collection

# Animal Drug User Fee Cover Sheet; FDA Form 3547 (OMB Control Number 0910-0539)-Extension

Under section 740 of the act, as amended by the Animal Drug User Fee Act (ADUFA) (21 U.S.C. 379j-12), FDA has the authority to assess and collect certain animal drug user fees. Because the submission of user fees concurrently with applications and supplements is required, review of an application cannot begin until the fee is submitted. Under the new statutory provisions (section 740(e) of the act, as amended by ADUFA), animal drug applications and supplemental animal drug applications for which the required fee has not been paid are considered incomplete and are not to be accepted for review by the agency. The types of fees that require a cover sheet are certain animal drug application fees and certain supplemental animal drug application fees.
The cover sheet, FDA Form 3546, is designed to provide the minimum information necessary to determine whether a fee is required for the review of an application or supplement, to determine the amount of the fee required, and to assure that each animal drug user fee payment is appropriately linked to the animal drug application for which it is made. The form, when completed electronically, will result in the generation of a unique payment identification number used in tracking the payment. FDA will use the information collected to initiate administrative screening of new animal drug applications and supplements to determine whether payment has been received.
# Update to CDC's Sexually Transmitted Diseases Treatment Guidelines, 2010: Oral Cephalosporins No Longer a Recommended Treatment for Gonococcal Infections

Gonorrhea is a major cause of serious reproductive complications in women and can facilitate human immunodeficiency virus (HIV) transmission (1). Effective treatment is a cornerstone of U.S. gonorrhea control efforts, but treatment of gonorrhea has been complicated by the ability of Neisseria gonorrhoeae to develop antimicrobial resistance. This report, using data from CDC's Gonococcal Isolate Surveillance Project (GISP), describes laboratory evidence of declining cefixime susceptibility among urethral N. gonorrhoeae isolates collected in the United States during 2006-2011 and updates CDC's current recommendations for treatment of gonorrhea (2). Based on GISP data, CDC recommends combination therapy with ceftriaxone 250 mg intramuscularly and either azithromycin 1 g orally as a single dose or doxycycline 100 mg orally twice daily for 7 days as the most reliably effective treatment for uncomplicated gonorrhea. CDC no longer recommends cefixime at any dose as a first-line regimen for treatment of gonococcal infections. If cefixime is used as an alternative agent, then the patient should return in 1 week for a test-of-cure at the site of infection.

Infection with N. gonorrhoeae is a major cause of pelvic inflammatory disease, ectopic pregnancy, and infertility, and can facilitate HIV transmission (1). In the United States, gonorrhea is the second most commonly reported notifiable infection, with >300,000 cases reported during 2011. Gonorrhea treatment has been complicated by the ability of N. gonorrhoeae to develop resistance to antimicrobials used for treatment. During the 1990s and 2000s, fluoroquinolone resistance in N. gonorrhoeae emerged in the United States, becoming prevalent in Hawaii and California and among men who have sex with men (MSM) before spreading throughout the United States. In 2007, emergence of fluoroquinolone-resistant N. gonorrhoeae in the United States prompted CDC to no longer recommend fluoroquinolones for treatment of gonorrhea, leaving cephalosporins as the only remaining recommended antimicrobial class (3). To ensure treatment of co-occurring pathogens (e.g., Chlamydia trachomatis) and reflecting concern about emerging gonococcal resistance, CDC's 2010 sexually transmitted diseases (STDs) treatment guidelines recommended combination therapy for gonorrhea with a cephalosporin (ceftriaxone 250 mg intramuscularly or cefixime 400 mg orally) plus either azithromycin orally or doxycycline orally, even if nucleic acid amplification testing (NAAT) for C. trachomatis was negative at the time of treatment (2). From 2006 to 2010, the minimum concentrations of cefixime needed to inhibit the growth in vitro of N. gonorrhoeae strains circulating in the United States and many other countries increased, suggesting that the effectiveness of cefixime might be waning (4). Reports from Europe recently have described patients with uncomplicated gonorrhea infection not cured by treatment with cefixime 400 mg orally (5)(6)(7)(8). GISP is a CDC-supported sentinel surveillance system that has monitored N. gonorrhoeae antimicrobial susceptibilities since 1986, and is the only source in the United States of national and regional N. gonorrhoeae antimicrobial susceptibility data. During September-December 2011, CDC and five external GISP principal investigators, each with N.
gonorrhoeae-specific expertise in surveillance, antimicrobial resistance, treatment, and antimicrobial susceptibility testing, reviewed antimicrobial susceptibility trends in GISP through August 2011 to determine whether to update CDC's current recommendations (2) for treatment of uncomplicated gonorrhea. Each month, the first 25 gonococcal urethral isolates collected from men attending participating STD clinics (approximately 6,000 isolates each year) were submitted for antimicrobial susceptibility testing. The minimum inhibitory concentration (MIC), the lowest antimicrobial concentration that inhibits visible bacterial growth in the laboratory, is used to assess antimicrobial susceptibility. Cefixime susceptibilities were not determined during 2007-2008 because cefixime temporarily was unavailable in the United States at that time. Criteria for resistance to cefixime and ceftriaxone have not been defined by the Clinical and Laboratory Standards Institute (CLSI). However, CLSI does consider isolates with cefixime or ceftriaxone MICs ≥0.5 µg/mL to have "decreased susceptibility" to these drugs (9). During 2006-2011, 15 (0.1%) isolates had decreased susceptibility to cefixime (all had MICs = 0.5 µg/mL), including nine (0.2%) in 2010 and one (0.03%) during January-August 2011; 12 of 15 were from MSM, and 12 were from the West and three from the Midwest. No isolates exhibited decreased susceptibility to ceftriaxone.

# Evidence and Rationale

The percentage of isolates with elevated cefixime MICs (MICs ≥0.25 µg/mL) increased from 0.1% in 2006 to 1.5% during January-August 2011 (Figure). In the West, the percentage increased from 0.2% in 2006 to 3.2% in 2011 (Table). The largest increases were observed in Honolulu, Hawaii (0% in 2006 to 17.0% in 2011); Minneapolis, Minnesota (0% to 6.9%); Portland, Oregon (0% to 6.5%); and San Diego, California (0% to 6.4%). Nationally, among MSM, isolates with elevated cefixime MICs increased from 0.2% in 2006 to 3.8% in 2011. In 2011, a higher proportion of isolates from MSM had elevated cefixime MICs than isolates from men who have sex exclusively with women (MSW), regardless of region (Table). The percentage of isolates exhibiting elevated ceftriaxone MICs increased slightly, from 0% in 2006 to 0.4% in 2011 (Figure). The percentage increased from <0.1% in 2006 to 0.8% in 2011 in the West, and did not increase significantly in the Midwest (0% to 0.2%) or the Northeast and South (0.1% in 2006 and 2011). Among MSM, the percentage increased from 0.0% in 2006 to 1.0% in 2011. The 2010 CDC STD treatment guidelines (2) recommend that azithromycin or doxycycline be administered with a cephalosporin as treatment for gonorrhea. The percentage of isolates exhibiting tetracycline resistance (MIC ≥2.0 µg/mL) was high but remained stable from 2006 (20.6%) to 2011 (21.6%). The percentage exhibiting decreased susceptibility to azithromycin (MIC ≥2.0 µg/mL) remained low (0.2% in 2006 to 0.3% in 2011). Among 180 isolates collected during 2006-2011 that exhibited elevated cefixime MICs, 139 (77.2%) exhibited tetracycline resistance, but only one (0.6%) had decreased susceptibility to azithromycin. Ceftriaxone as a single intramuscular injection of 250 mg provides high and sustained bactericidal levels in the blood and is highly efficacious at all anatomic sites of infection for treatment of N. gonorrhoeae infections caused by strains currently circulating in the United States (10,11).
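The surveillance statistics above all rest on classifying isolates against fixed MIC thresholds. The sketch below illustrates that tally using the thresholds cited in this report (elevated cefixime MIC ≥0.25 µg/mL; "decreased susceptibility" ≥0.5 µg/mL); the isolate records themselves are fabricated examples, not GISP data.

```python
# Illustrative GISP-style tally of isolates against the MIC thresholds cited
# above. Isolate records are hypothetical, not actual surveillance data.

ELEVATED_CEFIXIME_MIC = 0.25    # ug/mL, surveillance definition used above
DECREASED_SUSCEPTIBILITY = 0.5  # ug/mL, CLSI criterion cited above

isolates = [  # (isolate id, cefixime MIC in ug/mL) -- hypothetical values
    ("A", 0.015), ("B", 0.25), ("C", 0.03), ("D", 0.5), ("E", 0.06),
]

elevated = [i for i, mic in isolates if mic >= ELEVATED_CEFIXIME_MIC]
decreased = [i for i, mic in isolates if mic >= DECREASED_SUSCEPTIBILITY]

print(f"elevated cefixime MICs: {len(elevated)}/{len(isolates)} "
      f"({100 * len(elevated) / len(isolates):.1f}%) -> {elevated}")
print(f"decreased susceptibility: {len(decreased)}/{len(isolates)} -> {decreased}")
```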
Clinical data to support use of ceftriaxone doses >250 mg are not available. A 400-mg oral dose of cefixime does not provide bactericidal levels as high or as sustained as an intramuscular 250-mg dose of ceftriaxone and demonstrates limited efficacy for treatment of pharyngeal gonorrhea (10,11). The significant increase in the prevalence of U.S. GISP isolates with elevated cefixime MICs, most notably in the West and among MSM, is of particular concern because the emergence of fluoroquinolone-resistant N. gonorrhoeae in the United States during the 1990s also occurred initially in the West and predominantly among MSM before spreading throughout the United States within several years. Thus, observed patterns might indicate early stages of the development of clinically significant gonococcal resistance to cephalosporins. CDC anticipates that rising cefixime MICs soon will result in declining effectiveness of cefixime for the treatment of urogenital gonorrhea. Furthermore, as cefixime becomes less effective, continued use of cefixime might hasten the development of resistance to ceftriaxone, a safe, well-tolerated, injectable cephalosporin and the last antimicrobial that is recommended and known to be highly effective in a single dose for treatment of gonorrhea at all anatomic sites of infection. Maintaining the effectiveness of ceftriaxone for as long as possible is critical. Thus, CDC no longer recommends the routine use of cefixime as a first-line regimen for treatment of gonorrhea in the United States. Based on experience with other microbes that have developed antimicrobial resistance rapidly, a theoretical basis exists for combination therapy using two antimicrobials with different mechanisms of action to improve treatment efficacy and potentially delay emergence and spread of resistance to cephalosporins. Therefore, the use of a second antimicrobial (azithromycin as a single 1-g oral dose or doxycycline 100 mg orally twice daily for 7 days) is recommended for administration with ceftriaxone. The use of azithromycin as the second antimicrobial is preferred to doxycycline because of the convenience and compliance advantages of single-dose therapy and the substantially higher prevalence of gonococcal resistance to tetracycline than to azithromycin among GISP isolates, particularly in strains with elevated cefixime MICs.

# Recommendations

For treatment of uncomplicated urogenital, anorectal, and pharyngeal gonorrhea, CDC recommends combination therapy with a single intramuscular dose of ceftriaxone 250 mg plus either a single dose of azithromycin 1 g orally or doxycycline 100 mg orally twice daily for 7 days (Box).

# BOX. Updated recommended treatment regimens for gonococcal infections

Uncomplicated gonococcal infections of the cervix, urethra, and rectum

Recommended regimen:
Ceftriaxone 250 mg in a single intramuscular dose
PLUS
Azithromycin 1 g orally in a single dose or doxycycline 100 mg orally twice daily for 7 days*

Alternative regimens:
If ceftriaxone is not available:
Cefixime 400 mg in a single oral dose
PLUS
Azithromycin 1 g orally in a single dose or doxycycline 100 mg orally twice daily for 7 days*
PLUS
Test-of-cure in 1 week

If the patient has severe cephalosporin allergy:
Azithromycin 2 g in a single oral dose
PLUS
Test-of-cure in 1 week

Uncomplicated gonococcal infections of the pharynx

Recommended regimen:
Ceftriaxone 250 mg in a single intramuscular dose
PLUS
Azithromycin 1 g orally in a single dose or doxycycline 100 mg orally twice daily for 7 days*

Clinicians who diagnose gonorrhea in a patient with persistent infection after treatment (treatment failure) with the recommended combination therapy regimen should culture relevant clinical specimens and perform antimicrobial susceptibility testing of N. gonorrhoeae isolates. Phenotypic antimicrobial susceptibility testing should be performed using disk diffusion, Etest (BioMérieux, Durham, NC), or agar dilution. Data currently are limited on the use of NAAT-based antimicrobial susceptibility testing for genetic mutations associated with resistance in N. gonorrhoeae. The laboratory should retain the isolate for possible further testing. The treating clinician should consult an infectious disease specialist, an STD/HIV Prevention Training Center, or CDC (telephone: 404-639-8659) for treatment advice, and report the case to CDC through the local or state health department within 24 hours of diagnosis. A test-of-cure should be conducted 1 week after re-treatment, and clinicians should ensure that the patient's sex partners from the preceding 60 days are evaluated promptly with culture and treated as indicated.

When ceftriaxone cannot be used for treatment of urogenital or rectal gonorrhea, two alternative options are available: cefixime 400 mg orally plus either azithromycin 1 g orally or doxycycline 100 mg orally twice daily for 7 days if ceftriaxone is not readily available, or azithromycin 2 g orally in a single dose if ceftriaxone cannot be given because of severe allergy. If a patient with gonorrhea is treated with an alternative regimen, the patient should return 1 week after treatment for a test-of-cure at the infected anatomic site. The test-of-cure ideally should be performed with culture, or with a NAAT for N. gonorrhoeae if culture is not readily available. If the NAAT is positive, every effort should be made to perform a confirmatory culture. All positive cultures for test-of-cure should undergo phenotypic antimicrobial susceptibility testing. Patients who experience treatment failure after treatment with alternative regimens should be treated with ceftriaxone 250 mg as a single intramuscular dose and azithromycin 2 g orally as a single dose and should receive infectious disease consultation. The case should be reported to CDC through the local or state health department.

For all patients with gonorrhea, every effort should be made to ensure that the patients' sex partners from the preceding 60 days are evaluated and treated for N. gonorrhoeae with a recommended regimen. If a heterosexual partner of a patient cannot be linked to evaluation and treatment in a timely fashion, then expedited partner therapy should be considered, using oral combination antimicrobial therapy for gonorrhea (cefixime 400 mg and azithromycin 1 g) delivered to the partner by the patient, a disease investigation specialist, or through a collaborating pharmacy.

The capacity of laboratories in the United States to isolate N. gonorrhoeae by culture is declining rapidly because of the widespread use of NAATs for gonorrhea diagnosis, yet it is essential that culture capacity for N.
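The regimen selection in the box above follows a simple branching logic, sketched below as an illustration of the published recommendations rather than as clinical software; the function and variable names are the editor's own.

```python
# Minimal sketch of the 2012-updated regimen selection for uncomplicated
# gonococcal infection, following the box above. Illustration only.

def gonorrhea_regimen(ceftriaxone_available: bool,
                      severe_cephalosporin_allergy: bool) -> dict:
    """Return the recommended or alternative regimen per the box above."""
    if severe_cephalosporin_allergy:
        return {"regimen": ["azithromycin 2 g orally, single dose"],
                "test_of_cure_in_1_week": True}
    if not ceftriaxone_available:
        return {"regimen": ["cefixime 400 mg orally, single dose",
                            "azithromycin 1 g orally, single dose "
                            "OR doxycycline 100 mg orally twice daily x 7 days"],
                "test_of_cure_in_1_week": True}
    return {"regimen": ["ceftriaxone 250 mg IM, single dose",
                        "azithromycin 1 g orally, single dose "
                        "OR doxycycline 100 mg orally twice daily x 7 days"],
            "test_of_cure_in_1_week": False}

print(gonorrhea_regimen(ceftriaxone_available=True,
                        severe_cephalosporin_allergy=False))
```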
gonorrhoeae be maintained to monitor antimicrobial resistance trends and to determine susceptibility to guide treatment following treatment failure. To help control gonorrhea in the United States, health-care providers must maintain the ability to collect specimens for culture and be knowledgeable about laboratories to which they can send specimens for culture. Health-care systems and health departments must support access to culture, and laboratories must maintain culture capacity or develop partnerships with laboratories that can perform culture. Treatment of patients with gonorrhea with the most effective therapy will limit the transmission of gonorrhea, prevent complications, and likely will slow the emergence of resistance. However, resistance to cephalosporins, including ceftriaxone, is expected to emerge. Reinvestment in gonorrhea prevention and control is warranted. New treatment options for gonorrhea are urgently needed.
# Gonorrhea is a major cause of serious reproductive complications in women and can facilitate human immunodeficiency virus (HIV) transmission (1). Effective treatment is a cornerstone of U.S. gonorrhea control efforts, but treatment of gonorrhea has been complicated by the ability of Neisseria gonorrhoeae to develop antimicrobial resistance. This report, using data from CDC's Gonococcal Isolate Surveillance Project (GISP), describes laboratory evidence of declining cefixime susceptibility among urethral N. gonorrhoeae isolates collected in the United States during 2006-2011 and updates CDC's current recommendations for treatment of gonorrhea (2). Based on GISP data, CDC recommends combination therapy with ceftriaxone 250 mg intramuscularly and either azithromycin 1 g orally as a single dose or doxycycline 100 mg orally twice daily for 7 days as the most reliably effective treatment for uncomplicated gonorrhea. CDC no longer recommends cefixime at any dose as a first-line regimen for treatment of gonococcal infections. If cefixime is used as an alternative agent, then the patient should return in 1 week for a test-of-cure at the site of infection. Infection with N. gonorrhoeae is a major cause of pelvic inflammatory disease, ectopic pregnancy, and infertility, and can facilitate HIV transmission (1). In the United States, gonorrhea is the second most commonly reported notifiable infection, with >300,000 cases reported during 2011. Gonorrhea treatment has been complicated by the ability of N. gonorrhoeae to develop resistance to antimicrobials used for treatment. During the 1990s and 2000s, fluoroquinolone resistance in N. gonorrhoeae emerged in the United States, becoming prevalent in Hawaii and California and among men who have sex with men (MSM) before spreading throughout the United States. In 2007, emergence of fluoroquinoloneresistant N. gonorrhoeae in the United States prompted CDC to no longer recommend fluoroquinolones for treatment of gonorrhea, leaving cephalosporins as the only remaining recommended antimicrobial class (3). To ensure treatment of co-occurring pathogens (e.g., Chlamydia trachomatis) and reflecting concern about emerging gonococcal resistance, CDC's 2010 sexually transmitted diseases (STDs) treatment guidelines recommended combination therapy for gonorrhea with a cephalosporin (ceftriaxone 250 mg intramuscularly or cefixime 400 mg orally) plus either azithromycin orally or doxycycline orally, even if nucleic acid amplification testing (NAAT) for C. trachomatis was negative at the time of treatment (2). From 2006 to 2010, the minimum concentrations of cefixime needed to inhibit the growth in vitro of N. gonorrhoeae strains circulating in the United States and many other countries increased, suggesting that the effectiveness of cefixime might be waning (4). Reports from Europe recently have described patients with uncomplicated gonorrhea infection not cured by treatment with cefixime 400 mg orally (5)(6)(7)(8). GISP is a CDC-supported sentinel surveillance system that has monitored N. gonorrhoeae antimicrobial susceptibilities since 1986, and is the only source in the United States of national and regional N. gonorrhoeae antimicrobial susceptibility data. During September-December 2011, CDC and five external GISP principal investigators, each with N. 
Each month, the first 25 gonococcal urethral isolates collected from men attending participating STD clinics (approximately 6,000 isolates each year) were submitted for antimicrobial susceptibility testing. The minimum inhibitory concentration (MIC), the lowest antimicrobial concentration that inhibits visible bacterial growth in the laboratory, is used to assess antimicrobial susceptibility. Cefixime susceptibilities were not determined during 2007-2008 because cefixime temporarily was unavailable in the United States at that time. Criteria for resistance to cefixime and ceftriaxone have not been defined by the Clinical and Laboratory Standards Institute (CLSI); however, CLSI does consider isolates with cefixime or ceftriaxone MICs ≥0.5 µg/mL to have "decreased susceptibility" to these drugs (9). During 2006-2011, 15 (0.1%) isolates had decreased susceptibility to cefixime (all had MICs = 0.5 µg/mL), including nine (0.2%) in 2010 and one (0.03%) during January-August 2011; 12 of 15 were from MSM, and 12 were from the West and three from the Midwest.* No isolates exhibited decreased susceptibility to ceftriaxone.

# Evidence and Rationale

The percentage of isolates with elevated cefixime MICs (MICs ≥0.25 µg/mL) increased from 0.1% in 2006 to 1.5% during January-August 2011 (Figure). In the West, the percentage increased from 0.2% in 2006 to 3.2% in 2011 (Table). The largest increases were observed in Honolulu, Hawaii (0% in 2006 to 17.0% in 2011); Minneapolis, Minnesota (0% to 6.9%); Portland, Oregon (0% to 6.5%); and San Diego, California (0% to 6.4%). Nationally, among MSM, isolates with elevated MICs to cefixime increased from 0.2% in 2006 to 3.8% in 2011. In 2011, a higher proportion of isolates from MSM had elevated cefixime MICs than isolates from men who have sex exclusively with women (MSW), regardless of region (Table).
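As a rough illustration of how percentages such as those above are derived, the following minimal Python sketch classifies isolates against the two MIC thresholds cited in this report (elevated cefixime MIC, ≥0.25 µg/mL; CLSI "decreased susceptibility," ≥0.5 µg/mL). The isolate records, field names, and helper function are hypothetical, for illustration only; they are not GISP data or GISP software.

```python
# Hypothetical isolate records; thresholds are the values cited in the text.
ELEVATED_MIC = 0.25    # ug/mL, "elevated" cefixime MIC
DECREASED_SUSC = 0.5   # ug/mL, CLSI "decreased susceptibility"

isolates = [
    {"year": 2006, "cefixime_mic": 0.015},
    {"year": 2010, "cefixime_mic": 0.25},
    {"year": 2011, "cefixime_mic": 0.5},
    {"year": 2011, "cefixime_mic": 0.03},
]

def pct_at_or_above(records, year, threshold):
    """Percentage of a year's isolates with MIC at or above a threshold."""
    yearly = [r for r in records if r["year"] == year]
    if not yearly:
        return 0.0
    hits = sum(1 for r in yearly if r["cefixime_mic"] >= threshold)
    return 100.0 * hits / len(yearly)

print(pct_at_or_above(isolates, 2011, ELEVATED_MIC))   # 50.0 with this toy data
print(pct_at_or_above(isolates, 2011, DECREASED_SUSC)) # 50.0 with this toy data
```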
The percentage of isolates exhibiting elevated ceftriaxone MICs increased slightly, from 0% in 2006 to 0.4% in 2011 (Figure). The percentage increased from <0.1% in 2006 to 0.8% in 2011 in the West and did not increase significantly in the Midwest (0% to 0.2%) or the Northeast and South (0.1% in both 2006 and 2011). Among MSM, the percentage increased from 0.0% in 2006 to 1.0% in 2011.

The 2010 CDC STD treatment guidelines (2) recommend that azithromycin or doxycycline be administered with a cephalosporin as treatment for gonorrhea. The percentage of isolates exhibiting tetracycline resistance (MIC ≥2.0 µg/mL) was high but remained stable from 2006 (20.6%) to 2011 (21.6%). The percentage exhibiting decreased susceptibility to azithromycin (MIC ≥2.0 µg/mL) remained low (0.2% in 2006 to 0.3% in 2011). Among 180 isolates collected during 2006-2011 that exhibited elevated cefixime MICs, 139 (77.2%) exhibited tetracycline resistance, but only one (0.6%) had decreased susceptibility to azithromycin.

Ceftriaxone as a single intramuscular injection of 250 mg provides high and sustained bactericidal levels in the blood and is highly efficacious at all anatomic sites of infection for treatment of N. gonorrhoeae infections caused by strains currently circulating in the United States (10,11). Clinical data to support use of doses of ceftriaxone >250 mg are not available. A 400-mg oral dose of cefixime does not provide bactericidal levels as high or as sustained as an intramuscular 250-mg dose of ceftriaxone and demonstrates limited efficacy for treatment of pharyngeal gonorrhea (10,11).

The significant increase in the prevalence of U.S. GISP isolates with elevated cefixime MICs, most notably in the West and among MSM, is of particular concern because the emergence of fluoroquinolone-resistant N. gonorrhoeae in the United States during the 1990s also occurred initially in the West and predominantly among MSM before spreading throughout the United States within several years. Thus, the observed patterns might indicate early stages of the development of clinically significant gonococcal resistance to cephalosporins. CDC anticipates that rising cefixime MICs soon will result in declining effectiveness of cefixime for the treatment of urogenital gonorrhea. Furthermore, as cefixime becomes less effective, continued use of cefixime might hasten the development of resistance to ceftriaxone, a safe, well-tolerated, injectable cephalosporin and the last antimicrobial that is recommended and known to be highly effective in a single dose for treatment of gonorrhea at all anatomic sites of infection. Maintaining the effectiveness of ceftriaxone for as long as possible is critical. Thus, CDC no longer recommends the routine use of cefixime as a first-line regimen for treatment of gonorrhea in the United States.

Based on experience with other microbes that have developed antimicrobial resistance rapidly, a theoretical basis exists for combination therapy using two antimicrobials with different mechanisms of action to improve treatment efficacy and potentially delay the emergence and spread of resistance to cephalosporins. Therefore, the use of a second antimicrobial (azithromycin as a single 1-g oral dose or doxycycline 100 mg orally twice daily for 7 days) is recommended for administration with ceftriaxone. The use of azithromycin as the second antimicrobial is preferred to doxycycline because of the convenience and compliance advantages of single-dose therapy and the substantially higher prevalence of gonococcal resistance to tetracycline than to azithromycin among GISP isolates, particularly in strains with elevated cefixime MICs.

# Recommendations

For treatment of uncomplicated urogenital, anorectal, and pharyngeal gonorrhea, CDC recommends combination therapy with a single intramuscular dose of ceftriaxone 250 mg plus either a single dose of azithromycin 1 g orally or doxycycline 100 mg orally twice daily for 7 days (Box).

Clinicians who diagnose gonorrhea in a patient with persistent infection after treatment (treatment failure) with the recommended combination therapy regimen should culture relevant clinical specimens and perform antimicrobial susceptibility testing of N. gonorrhoeae isolates. Phenotypic antimicrobial susceptibility testing should be performed using disk diffusion, Etest (BioMérieux, Durham, NC), or agar dilution. Data currently are limited on the use of NAAT-based antimicrobial susceptibility testing for genetic mutations associated with resistance in N. gonorrhoeae.
# BOX. Updated recommended treatment regimens for gonococcal infections

# Uncomplicated gonococcal infections of the cervix, urethra, and rectum

# Recommended regimen
Ceftriaxone 250 mg in a single intramuscular dose
PLUS
Azithromycin 1 g orally in a single dose or doxycycline 100 mg orally twice daily for 7 days*

# Alternative regimens
If ceftriaxone is not available:
Cefixime 400 mg in a single oral dose
PLUS
Azithromycin 1 g orally in a single dose or doxycycline 100 mg orally twice daily for 7 days*
PLUS
Test-of-cure in 1 week

If the patient has severe cephalosporin allergy:
Azithromycin 2 g in a single oral dose
PLUS
Test-of-cure in 1 week

# Uncomplicated gonococcal infections of the pharynx

# Recommended regimen
Ceftriaxone 250 mg in a single intramuscular dose
PLUS
Azithromycin 1 g orally in a single dose or doxycycline 100 mg orally twice daily for 7 days*

The laboratory should retain the isolate for possible further testing. The treating clinician should consult an infectious disease specialist, an STD/HIV Prevention Training Center (http://www.nnptc.org), or CDC (telephone: 404-639-8659) for treatment advice, and report the case to CDC through the local or state health department within 24 hours of diagnosis. A test-of-cure should be conducted 1 week after re-treatment, and clinicians should ensure that the patient's sex partners from the preceding 60 days are evaluated promptly with culture and treated as indicated.

When ceftriaxone cannot be used for treatment of urogenital or rectal gonorrhea, two alternative options are available: cefixime 400 mg orally plus either azithromycin 1 g orally or doxycycline 100 mg orally twice daily for 7 days if ceftriaxone is not readily available, or azithromycin 2 g orally in a single dose if ceftriaxone cannot be given because of severe allergy. If a patient with gonorrhea is treated with an alternative regimen, the patient should return 1 week after treatment for a test-of-cure at the infected anatomic site. The test-of-cure ideally should be performed with culture, or with a NAAT for N. gonorrhoeae if culture is not readily available. If the NAAT is positive, every effort should be made to perform a confirmatory culture. All positive cultures for test-of-cure should undergo phenotypic antimicrobial susceptibility testing.

Patients who experience treatment failure after treatment with alternative regimens should be treated with ceftriaxone 250 mg as a single intramuscular dose and azithromycin 2 g orally as a single dose and should receive infectious disease consultation. The case should be reported to CDC through the local or state health department.

For all patients with gonorrhea, every effort should be made to ensure that the patients' sex partners from the preceding 60 days are evaluated and treated for N. gonorrhoeae with a recommended regimen. If a heterosexual partner of a patient cannot be linked to evaluation and treatment in a timely fashion, then expedited partner therapy should be considered, using oral combination antimicrobial therapy for gonorrhea (cefixime 400 mg and azithromycin 1 g) delivered to the partner by the patient, a disease investigation specialist, or through a collaborating pharmacy.
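To summarize the decision logic of the updated regimens in the Box, the following minimal Python sketch maps the two branching conditions (ceftriaxone availability, severe cephalosporin allergy) to the corresponding regimen and test-of-cure requirement. The function name and return structure are illustrative assumptions; this is a simplification of the recommendations above, not clinical software, and it omits partner management and site-specific caveats.

```python
def gonorrhea_regimen(ceftriaxone_available: bool,
                      severe_cephalosporin_allergy: bool) -> dict:
    """Select a regimen per the updated recommendations (simplified)."""
    if severe_cephalosporin_allergy:
        # Alternative when ceftriaxone cannot be given because of allergy
        return {"regimen": "azithromycin 2 g orally in a single dose",
                "test_of_cure_in_1_week": True}
    if ceftriaxone_available:
        # First-line combination therapy
        return {"regimen": ("ceftriaxone 250 mg IM once PLUS azithromycin 1 g "
                            "orally once (or doxycycline 100 mg orally twice "
                            "daily for 7 days)"),
                "test_of_cure_in_1_week": False}
    # Alternative when ceftriaxone is not readily available
    return {"regimen": ("cefixime 400 mg orally once PLUS azithromycin 1 g "
                        "orally once (or doxycycline 100 mg orally twice "
                        "daily for 7 days)"),
            "test_of_cure_in_1_week": True}

print(gonorrhea_regimen(ceftriaxone_available=False,
                        severe_cephalosporin_allergy=False))
```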
The capacity of laboratories in the United States to isolate N. gonorrhoeae by culture is declining rapidly because of the widespread use of NAATs for gonorrhea diagnosis, yet it is essential that culture capacity for N. gonorrhoeae be maintained to monitor antimicrobial resistance trends and determine susceptibility to guide treatment following treatment failure. To help control gonorrhea in the United States, health-care providers must maintain the ability to collect specimens for culture and know the laboratories to which they can send specimens for culture. Health-care systems and health departments must support access to culture, and laboratories must maintain culture capacity or develop partnerships with laboratories that can perform culture.

Treatment of patients with gonorrhea with the most effective therapy will limit the transmission of gonorrhea, prevent complications, and likely will slow the emergence of resistance. However, resistance to cephalosporins, including ceftriaxone, is expected to emerge. Reinvestment in gonorrhea prevention and control is warranted, and new treatment options for gonorrhea are urgently needed.
Two live, attenuated varicella zoster virus-containing vaccines are available in the United States for prevention of varicella: 1) a single-antigen varicella vaccine (VARIVAX,® Merck & Co., Inc., Whitehouse Station, New Jersey) and 2) a combination measles, mumps, rubella, and varicella (MMRV) vaccine (ProQuad,® Merck & Co., Inc., Whitehouse Station, New Jersey).

# Introduction

Varicella is a highly infectious disease caused by the varicella-zoster virus (VZV). Secondary attack rates for this virus might reach 90% for susceptible household contacts. VZV causes a systemic infection that typically results in lifetime immunity. In otherwise healthy persons, clinical illness after reexposure is rare. In 1995, a vaccine to prevent varicella (VARIVAX,® Merck & Co., Inc., Whitehouse Station, New Jersey) was licensed in the United States for use among healthy children aged ≥12 months, adolescents, and adults; recommendations of the Advisory Committee on Immunization Practices (ACIP) for its use were published subsequently (1,2).

# Methods

In response to increasing reports of varicella outbreaks among highly vaccinated populations (3-6), ACIP's measles-mumps-rubella and varicella (MMRV) workgroup first met in February 2004 to review data related to varicella vaccine use in the United States since implementation of the vaccination program in 1995 and to consider recommendation options for improving control of varicella disease. The workgroup held monthly conference calls and met in person three times a year. The workgroup reviewed data on the impact of the 1-dose varicella vaccination program, including data on vaccination coverage, changes in varicella epidemiology, transmission from vaccinated persons with varicella, vaccine effectiveness, immune response to vaccination, evidence of immunity, and potential risk factors for vaccine failure. Published and unpublished data related to correlates of protection, safety, immunogenicity, and efficacy† of the new quadrivalent MMRV vaccine and the immunogenicity and efficacy of a second dose of varicella vaccine also were reviewed. Cost-benefit and cost-effectiveness analyses were considered, including a revised cost-benefit analysis of both the 1- and 2-dose programs for children compared with no vaccination program and the incremental benefit of a second dose.

† In this report, efficacy refers to the extent to which a specific intervention produces a beneficial result under ideal conditions.

# Epidemiology of Varicella

# General

VZV is transmitted from person to person by direct contact, inhalation of aerosols from vesicular fluid of skin lesions of acute varicella or zoster, or infected respiratory tract secretions that also might be aerosolized. The virus enters the host through the upper respiratory tract or the conjunctiva. The average incubation period for varicella is 14-16 days§ after exposure to rash; however, this period can vary (range: 10-21 days). The period of contagiousness of infected persons is estimated to begin 1-2 days before the onset of rash and to end when all lesions are crusted, typically 4-7 days after onset of rash (7). Persons who have progressive varicella (i.e., development of new lesions for >7 days) might be contagious longer, presumably because their immune response is depressed, which allows viral replication to persist. VZV remains dormant in sensory-nerve ganglia and might be reactivated at a later time, causing herpes zoster (HZ) (i.e., shingles), a painful vesicular rash typically appearing in a dermatomal distribution of one or two sensory-nerve roots.

§ The en dash in numeric ranges is used to represent inclusive years, hours, days, ages, dosages, or a sequence of numbered items.
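The incubation and contagiousness intervals described above translate directly into the exposure windows used in contact investigations. The following minimal Python sketch computes those windows from a case's rash-onset date, using only the intervals stated in this report (contagious from 2 days before rash onset through crusting, typically by 7 days after onset; incubation range 10-21 days). The helper functions and example dates are illustrative assumptions, not part of the source document.

```python
from datetime import date, timedelta

def contagious_window(rash_onset: date) -> tuple[date, date]:
    """Widest plausible contagious interval around rash onset (-2 to +7 days)."""
    return rash_onset - timedelta(days=2), rash_onset + timedelta(days=7)

def secondary_case_window(exposure: date) -> tuple[date, date]:
    """When rash onset would be expected in an exposed susceptible contact
    (incubation range: 10-21 days)."""
    return exposure + timedelta(days=10), exposure + timedelta(days=21)

start, end = contagious_window(date(2007, 3, 10))
print(start, end)                              # 2007-03-08 2007-03-17
print(secondary_case_window(date(2007, 3, 12)))  # 2007-03-22 to 2007-04-02
```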
Since implementation of a universal childhood varicella vaccination program in 1995, the epidemiology and clinical characteristics of varicella in the United States have changed, with substantial declines in morbidity and mortality attributable to varicella. No consistent changes in HZ epidemiology have been documented.

Vaccinated persons might develop modified varicella disease with atypical presentation. Varicella disease that develops >42 days after vaccination (i.e., breakthrough varicella) typically is mild, with <50 skin lesions, low or no fever, and shorter (4-6 days) duration of illness. The rash is more likely to be predominantly maculopapular rather than vesicular. Nevertheless, breakthrough varicella is contagious.

# Prevaccine Era

Before the introduction of varicella vaccine in 1995, varicella was a universal childhood disease in the United States, with peak incidence in the spring and an average annual incidence of 15-16 cases per 1,000 population. On the basis of data from the National Health Interview Survey (NHIS) for 1980-1990, an average of 4 million cases were estimated to have occurred annually (annual incidence rate: 15 cases per 1,000 population) (8). Varicella was not a nationally notifiable disease when vaccine was introduced in 1995, and surveillance data were limited. In 1994, only 28 states and the District of Columbia reported varicella cases to the National Notifiable Diseases Surveillance System (NNDSS); reporting was passive, with low and variable estimated completeness. However, studies using data from state and local surveys conducted during 1990-1992 and during 1994-1995 indicated that the highest incidence of varicella occurred among preschool-aged rather than school-aged children, indicating that the disease was being acquired at earlier ages (10,11). National seroprevalence data for 1988-1994 indicated that 95.5% of adults aged 20-29 years, 98.9% of adults aged 30-39 years, and >99.6% of adults aged >40 years were immune to VZV (12). However, for reasons that are not well understood, the epidemiology of varicella differs between countries with temperate and tropical climates (13-18). In the majority of countries with temperate climates, >90% of persons are infected by adolescence, whereas in countries with tropical climates, a higher proportion of infections are acquired at older ages, which results in higher susceptibility among adults (19).

Estimates of the burden of varicella hospitalization varied according to the year(s) studied, the source of data, and the definitions used for a varicella-related hospitalization (20-23). Estimates were higher if varicella was listed as either a principal or a secondary cause of hospitalization, in which case some incidental varicella hospitalizations might have been included. During 1988-1995, an estimated 10,632 hospitalizations were attributable annually to varicella in the United States (range: 8,198-16,586) (20). Another study demonstrated an annual average of 15,073 hospitalizations during 1993-1995, but this period might have included an epidemic year (22). Overall rates of hospitalization for varicella during 1988-1995 ranged from 2.3 to 6.0 cases per 100,000 population.
If any varicella-related hospital discharge diagnostic code was included, rates varied between 5.0 and 7.0 cases per 100,000 population (20-23).

During 1988-1995, persons without severe immunocompromising conditions or treatments comprised the largest proportion (89%) of annual varicella-related hospitalizations (20). Before vaccination, children accounted for the majority of varicella-related hospitalizations, and persons aged ≥20 years accounted for 32%-33% (20,22). The rate of complications from varicella was substantially higher for persons aged ≥20 years and for infants (i.e., children aged <1 year): persons aged ≥20 years were 13 times more likely to be hospitalized when they had varicella than children aged 5-9 years, and infants aged <1 year were six times more likely to be hospitalized than children aged 5-9 years (20). The most common complications of varicella that resulted in hospitalizations were skin and soft tissue infections (especially invasive group A streptococcal infections), pneumonia, dehydration, and encephalitis. In 1980, an association was identified between Reye syndrome and the use of aspirin during varicella or influenza-like illness; since then, Reye syndrome, which was once considered a common complication resulting from varicella infection, has become rare (24-26).

During 1970-1994, the average annual number of deaths for which varicella was recorded as the underlying cause was 105; the overall average annual varicella mortality rate was 0.4 deaths per 1 million population. The age distribution of varicella deaths shifted during this period; during 1970-1974, deaths occurred predominantly among persons aged <20 years (27). Although case-fatality rates (CFRs) declined substantially during this period, the risk for varicella-related death during 1990-1994 was still 25 times higher for adults than for children aged 12 months-4 years (CFR: 21.3 and 0.8 per 100,000 cases, respectively). During the same period, 89% of varicella deaths among children and 75% of varicella deaths among adults occurred in persons without severe underlying immunocompromising medical conditions. The most common complications among persons who died of varicella were pneumonia, central nervous system complications (including encephalitis), secondary infection, and hemorrhagic conditions. A recent reanalysis of varicella deaths also considered varicella when listed as a contributing cause of death in addition to the underlying cause studied in the previous report (28). During 1990-1994, a varicella diagnosis was listed on an average of 145 death certificates per year (105 as an underlying cause and 40 as a contributing cause), with an overall annual varicella mortality rate of 0.6 deaths per 1 million population.

Varicella during pregnancy can have adverse consequences for the fetus and infant, including congenital varicella syndrome (see Prenatal and Perinatal Exposure). Reliable data on the number of cases of congenital varicella syndrome are not available. However, on the basis of age-specific varicella incidence (from NHIS), the annual number of births, and the risk for congenital varicella syndrome (1.1% overall risk in the first 20 weeks of pregnancy), 44 cases of congenital varicella syndrome are estimated to have occurred each year in the United States during the prevaccine era (29).
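The structure of that burden estimate is a simple product: maternal infections during the first 20 weeks of pregnancy multiplied by the 1.1% risk. The Python sketch below reproduces the arithmetic; only the 1.1% risk figure comes from the text (29), and the other inputs are illustrative placeholders chosen to yield the cited total, not the NHIS values actually used.

```python
# Hypothetical inputs (placeholders), except the 1.1% risk from the text.
annual_births = 4_000_000              # assumed, for illustration
infections_per_1000_pregnancies = 1.0  # assumed maternal varicella, first 20 weeks
cvs_risk_first_20_weeks = 0.011        # 1.1%, from the text (29)

maternal_infections = annual_births * infections_per_1000_pregnancies / 1000
estimated_cvs_cases = maternal_infections * cvs_risk_first_20_weeks
print(round(estimated_cvs_cases))  # -> 44 with these placeholder inputs
```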
# Postvaccine Era

In 1995, a varicella vaccine (VARIVAX,® Merck & Co., Inc., Whitehouse Station, New Jersey) was licensed in the United States for use among healthy children aged ≥12 months, adolescents, and adults. At that time, ACIP recommended routine varicella vaccination of children aged 12-18 months, catch-up vaccination of susceptible children aged 19 months-12 years, and vaccination of susceptible persons who have close contact with persons at high risk for serious complications (e.g., health-care workers and family contacts of immunocompromised persons) (1; Table 1). In 1999, ACIP updated the recommendations to include child care and school entry requirements, use of the vaccine after exposure and for outbreak control, use of the vaccine for certain children infected with human immunodeficiency virus (HIV), and vaccination of adolescents and adults at high risk for exposure or transmission (2; Table 1).

During 1997-2005, national varicella vaccination coverage among children aged 19-35 months increased from 27% to 88%, with no statistically significant difference in coverage by race or ethnicity (30). In 2005, state-specific varicella vaccination coverage ranged from 69% to 96% (31). National surveillance data continue to be limited, but passive surveillance data in certain states have documented a decline in varicella incidence. In four states (Illinois, Michigan, Texas, and West Virginia) with adequate (>5% of expected cases during 1990-1994) reporting to NNDSS, varicella incidence for 2004 declined 53%-88% compared with the average incidence for 1990-1994, with vaccination coverage among children aged 19-35 months ranging from 82% to 88% (32; CDC, unpublished data, 2006). During 2003-2005, the number of cases increased in Illinois and Texas; the biggest increase (56%) occurred in Texas (Figure 1). The number of cases remained stable in Michigan (Figure 1) and declined minimally in West Virginia.

In 1995, along with implementation of the national vaccination program, CDC instituted active surveillance for varicella in three communities (Antelope Valley, California; Travis County, Texas; and West Philadelphia, Pennsylvania) in collaboration with state and local health departments to establish baseline data and to monitor trends in varicella disease after introduction of varicella vaccine. By 2000, vaccination coverage among children aged 19-35 months in these three communities had reached 74%-84%, and reported total varicella cases had declined 71%-84% (33). Although incidence declined to the greatest extent (83%-90%) among children aged 12 months-4 years, incidence declined in all age groups, including infants and adults, indicating the herd immunity effects of the vaccination program. Since 2001, only two sites have been funded to continue surveillance (Antelope Valley and West Philadelphia). By 2005, vaccination coverage in these two sites had increased to 90%, and the reduction in incidence had reached 90% and 91%, respectively (34). During 1996-2005, as vaccination coverage continued to increase, the proportion of persons with varicella who had been vaccinated increased from 2% to 56%. During 1995-2004, peak incidence for varicella cases in active surveillance sites shifted from age 3-6 years to age 9-11 years.

After introduction of vaccine in 1995, the number and rate of annual varicella-related hospitalizations declined. In one study of a nationally representative sample that was conducted during 1993-2001, varicella hospitalizations declined 75% (22). In another study, the annual varicella-related hospitalization rate declined 88% during 1994-2002 (23) (Figure 2).
Hospitalization rates declined 100% among infants, and substantial declines also were recorded in all other age groups up to age 50 years, among both children and adults (23). In the combined active surveillance area, varicella-related hospitalizations declined from 2.4-4.2 hospitalizations per 100,000 population during 1995-1998 to 1.5 per 100,000 population in 2000 (33) and to 0.8 per 100,000 population in 2005 (34).

During 1995-2001, the number of deaths for which varicella was listed as the underlying cause decreased from 115 to 26 (28) (Figure 3). Since then, the number of deaths has declined further; 16 deaths were reported in 2003. Age-adjusted mortality rates decreased 66%, from an average of 0.41 deaths per 1 million population during 1990-1994 to 0.14 during 1999-2001. The decline was observed in all age groups <50 years. Mortality among persons aged ≥50 years did not decline to the same extent; however, the validity of reported varicella deaths in this age group is low (35), and the majority of these deaths are not considered to be caused by varicella. During 1999-2001, the average rate of mortality attributed to varicella among all racial and ethnic populations was <0.15 deaths per 1 million persons. Persons without high-risk conditions (e.g., malignancies, HIV/acquired immunodeficiency syndrome (AIDS), and other immune deficiencies) accounted for 92% of deaths attributable to varicella. The average rates of deaths for which varicella was listed as a contributing cause of death also declined during 1999-2001, compared with 1990-1994.
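The 66% figure cited above is the relative decline between the two age-adjusted mortality rates given; a quick check of the arithmetic:

```latex
\[
\text{relative decline} \;=\; 1 - \frac{0.14}{0.41} \;\approx\; 1 - 0.341 \;=\; 0.659 \;\approx\; 66\%
\]
```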
Despite high 1-dose vaccination coverage and the success of the vaccination program in reducing varicella morbidity and mortality, reports to CDC from active surveillance sites and from states with well-implemented vaccination programs and surveillance indicate that in certain states and in one active surveillance site, the number of reported varicella cases has remained constant or declined minimally, and outbreaks have continued to occur. During 2001-2005, outbreaks were reported in schools with high varicella vaccination coverage (range: 96%-100%) (3,4). The outbreaks were similar in certain respects: 1) all occurred in elementary schools, 2) vaccine effectiveness was similar (range: 72%-85%), 3) the highest attack rates occurred among the younger students, 4) each outbreak lasted approximately 2 months, and 5) index cases occurred among vaccinated students (although their disease was mild). Overall attack rates among vaccinated children varied (range: 11%-17%), with attack rates in certain classrooms as high as 40%. These data indicate that even in settings in which vaccination coverage was nearly universal and vaccine performed as expected, the 1-dose vaccination program could not prevent varicella outbreaks completely.

# Prenatal and Perinatal Exposure

In the prevaccine era, prenatal infection was uncommon because the majority of women of childbearing age were immune to VZV (12,36). Varicella in pregnant women is associated with a risk for VZV transmission to the fetus or newborn. Intrauterine VZV infection might result in congenital varicella syndrome, neonatal varicella, or HZ during infancy or early childhood (37-46). Infants who are exposed prenatally to VZV, even if asymptomatic, might have measurable varicella-specific IgM antibody during the newborn period, have persistent varicella-specific IgG immunity after age 1 year without a history of postnatal varicella, or demonstrate positive lymphocyte transformation in response to VZV antigen (37).

Congenital varicella syndrome was first recognized in 1947 (40). Congenital varicella syndrome can occur among infants born to mothers infected during the first half of pregnancy and might be manifested by low birthweight, cutaneous scarring, limb hypoplasia, microcephaly, cortical atrophy, chorioretinitis, cataracts, and other anomalies. In one study, incidence of congenital varicella syndrome was calculated using aggregate data from nine cohort studies carried out during 1986-2002 (47). Rates were 0.6% (4 of 725) for 2-12 weeks' gestation, 1.4% (9 of 642) for 13-28 weeks, and 0 (0 of 385) after 28 weeks. In a prospective study of 1,373 mothers with varicella during pregnancy conducted in the United Kingdom and West Germany during 1980-1993, the highest risk (2%) for congenital varicella syndrome was observed when maternal infection occurred during 13-20 weeks' gestation (43). The risk was 0.4% after maternal infection during 0-12 weeks' gestation. No cases of congenital varicella syndrome occurred among the infants of 366 mothers with HZ during pregnancy. Nine isolated cases involving birth defects consistent with congenital varicella syndrome have been reported after maternal varicella beyond 20 weeks' gestation (with the latest occurring at 28 weeks) (47,48). In a prospective study, HZ occurred during infancy or early childhood in four (0.8%) of 477 infants who were exposed to VZV during 13-24 weeks' gestation and in six (1.7%) of 345 infants who were exposed during 25-36 weeks' gestation (43).

The onset of varicella in pregnant women from 5 days before to 2 days after delivery results in severe varicella infection in an estimated 17%-30% of their newborn infants, who are exposed to VZV without sufficient maternal antibody to lessen the severity of disease. The risk for neonatal death has been estimated to be 31% among infants whose mothers had onset of rash <4 days before giving birth (45). This estimate was made on the basis of a limited number of infant deaths and might be higher than the actual risk because the study was performed before neonatal intensive care was available. In addition, certain cases were not part of prospective studies but were reported retrospectively, making the results subject to selection bias. When these cases were reevaluated subsequently by another investigator, certain infants were demonstrated to have been at higher risk for death because of low birthweight; in at least one case, another cause of death was probable (46). Varicella-zoster immune globulin (VZIG) has been reported to reduce the incidence of severe neonatal varicella disease (49) and therefore is indicated in such situations. Nevertheless, the risk for death among neonates who do not receive postexposure prophylaxis with VZIG is likely to be substantially lower than was estimated previously.
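The high-risk perinatal window described above (maternal rash onset from 5 days before through 2 days after delivery) is a fixed date interval and can be checked mechanically. The Python sketch below is a minimal illustration of that rule only; the function name, inputs, and example dates are assumptions for illustration, and the sketch is not a substitute for the full clinical criteria.

```python
from datetime import date

def neonate_high_risk(maternal_rash_onset: date, delivery: date) -> bool:
    """True if maternal rash onset falls within -5..+2 days of delivery,
    the window associated with severe neonatal varicella (17%-30% of
    newborns) for which VZIG prophylaxis is indicated."""
    offset_days = (maternal_rash_onset - delivery).days
    return -5 <= offset_days <= 2

print(neonate_high_risk(date(2007, 5, 1), date(2007, 5, 3)))   # True  (-2 days)
print(neonate_high_risk(date(2007, 4, 20), date(2007, 5, 3)))  # False (-13 days)
```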
# Herpes Zoster Surveillance

After primary infection, VZV persists as a latent infection in sensory-nerve ganglia. The virus can reactivate, causing HZ. Mechanisms controlling VZV latency are not well understood. Risk factors for HZ include aging, immunosuppression, and initial infection with varicella in utero or during early childhood (i.e., age <18 months). An estimated 15%-30% of the general population experience HZ during their lifetimes (50,51); this proportion is likely to increase as life expectancy increases. The most common complication of HZ, particularly in older persons, is postherpetic neuralgia (PHN), the persistence of sometimes debilitating pain weeks to months after resolution of HZ. Life-threatening complications of HZ also can occur; these include herpes ophthalmicus, which can lead to blindness. Another severe manifestation is dissemination, which might involve generalized skin eruptions and central nervous system, pulmonary, hepatic, and pancreatic complications. Dissemination, pneumonia, and visceral involvement typically are restricted to immunocompromised persons. VZV can be transmitted from the lesions of patients who have HZ to susceptible contacts. Although few data are available to assess this risk, one household contact study reported that the risk for VZV transmission from HZ was approximately 20% of the risk for transmission from varicella (52).

Varicella vaccination might alter the risk for HZ at the level of both the individual and the population (i.e., herd immunity). Just as wild-type VZV can cause wild-type HZ, attenuated vaccine virus has the potential to become latent and later reactivate to cause vaccine virus strain (also called Oka-strain) HZ (53). Multiple studies have evaluated the risk for Oka-strain HZ among vaccine recipients.

Varicella vaccination also might change the risk for HZ at the population level. With the development of herd immunity and reduction in the likelihood of exposure, the varicella vaccination program prevents wild-type VZV infection among vaccine recipients and nonvaccine recipients, eliminating the risk for wild-type HZ in these persons. Reduction in the likelihood of wild-type varicella infection also increases the median age for acquiring varicella (although age-specific incidence rates themselves are lower). This reduces the risk for varicella infection during early childhood (i.e., age <18 months), thereby reducing a risk factor for childhood HZ.

Exposure of persons with latent wild-type VZV infection to persons with varicella is thought to boost specific immunity, which might contribute to controlling reactivation of VZV and the development of HZ (50). Concern has been expressed that, by providing fewer opportunities for such boosting exposures among persons with previous wild-type varicella infection, the vaccination program might increase the risk for HZ, possibly within as few as 5 years after introduction of varicella vaccination and attainment of vaccination coverage of >90% (59).

Herpes zoster is not a nationally notifiable disease in the United States, and HZ surveillance has been conducted using multiple methods, study sites, and data sources. For certain studies, baseline data were available before the start of the varicella vaccination program. One study that included baseline data was a retrospective analysis of electronic medical records from a health maintenance organization (HMO) during 1992-2002 (60). This HMO study indicated that age-adjusted incidence of HZ remained stable during 1992-2002 as incidence of varicella decreased (60). Age-adjusted and age-specific annual incidence rates of HZ fluctuated slightly over time; the age-adjusted rate was highest in 1992, at 4.1 cases per 1,000 person-years, and fluctuated near 3-4 cases per 1,000 person-years thereafter.
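The HZ figures above are expressed per 1,000 person-years, the standard unit for incidence in a followed cohort. The Python sketch below shows the calculation under hypothetical cohort numbers; the case and follow-up counts are assumptions chosen only to match the magnitude reported for 1992, not values from the HMO study.

```python
# Hypothetical cohort numbers, for illustration only.
cases = 410              # assumed HZ cases observed
person_years = 100_000   # assumed total follow-up time contributed

rate_per_1000_py = 1000 * cases / person_years
print(rate_per_1000_py)  # -> 4.1 per 1,000 person-years
```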
# Use of Acyclovir to Treat and Prevent Varicella

Acyclovir is a synthetic nucleoside analog that inhibits replication of human herpes viruses, including VZV. Since the early 1980s, intravenous acyclovir has been available to treat immunocompromised persons who have varicella. When administered within 24 hours of onset of rash, acyclovir has been demonstrated to be effective in reducing varicella-associated morbidity and mortality in this population (66-68).

In 1992, the Food and Drug Administration (FDA) approved the use of oral acyclovir for the treatment of varicella in otherwise healthy children. This approval was made on the basis of placebo-controlled, double-blind studies (69,70) that demonstrated the beneficial clinical effects (i.e., a decrease in the number of days in which new lesions appeared, the duration of fever, and the severity of cutaneous and systemic signs and symptoms) that occurred when acyclovir was administered within 24 hours of rash onset. No serious adverse events occurred during the period of drug administration. Administration of acyclovir did not decrease transmission of varicella or reduce the duration of absence from school. Because few complications occurred (1%-2%), these studies could not determine whether acyclovir had a statistically significant effect on disease severity among healthy children. In these studies, antibody titers after infection in children receiving acyclovir did not differ substantially from titers of children in the control group (69,70). Clinical trials among adolescents and adults have indicated that acyclovir is well-tolerated and effective in reducing the duration and severity of clinical illness if the drug is administered within 24 hours of rash onset (71-73).

In 1993, the American Academy of Pediatrics (AAP) Committee on Infectious Diseases published a statement regarding the use of acyclovir (74). AAP did not consider administration of acyclovir to healthy children to have clinical benefit sufficient to justify its routine administration; however, AAP stated that certain circumstances might justify its use. AAP recommended that oral acyclovir should be considered for otherwise healthy persons at increased risk for moderate to severe varicella (e.g., persons aged >12 years, persons with chronic cutaneous or pulmonary disorders, persons receiving long-term salicylate therapy, and persons receiving short, intermittent, or aerosolized courses of corticosteroids). Certain experts also recommend use of oral acyclovir for secondary case-patients who live in the same households as infected children (74).

Acyclovir is classified as a Category B drug in the FDA use-in-pregnancy rating. Although studies involving animals have not indicated teratogenic effects, adequate, well-controlled studies in pregnant women have not been conducted. However, a prospective registry of acyclovir use during pregnancy that collected data on outcomes of 596 infants whose mothers were exposed to systemic acyclovir during the first trimester of pregnancy indicated that the rate and types of birth defects approximated those in the general population (75). AAP has not recommended routine use of oral acyclovir for pregnant women because the risks and benefits to the fetus and mother are unknown.
However, in instances of serious, viral-mediated complications (e.g., pneumonia), AAP has recommended that intravenous acyclovir should be considered (74). Two nucleoside analogs, acyclovir and famciclovir, have been approved by FDA for treating HZ. If administered within 72 hours of rash onset, acyclovir accelerates the rate of cutaneous healing and reduces the severity of acute pain in adults who have HZ (76). Oral famciclovir, when administered during the same period, has similar efficacy (77).

Acyclovir is not indicated for prophylactic use among otherwise healthy children, adolescents, or adults without evidence of immunity after exposure to varicella; vaccination is the method of choice in these situations. No studies have been conducted regarding prophylactic use of acyclovir among immunocompromised persons; therefore, VZIG is recommended in these situations.

# Vaccines for Prevention of Varicella

Two live, attenuated varicella virus vaccines are licensed in the United States for prevention of varicella: single-antigen varicella vaccine (VARIVAX,® Merck & Co., Inc., Whitehouse Station, New Jersey) and combination MMRV vaccine (ProQuad,® Merck & Co., Inc., Whitehouse Station, New Jersey). Both vaccines are derived from the Oka strain of live, attenuated VZV. The Oka strain was isolated in Japan (78) in the early 1970s from vesicular fluid in a healthy child who had natural varicella and was attenuated through sequential propagation in cultures of human embryonic lung cells, embryonic guinea-pig cells, and human diploid cells (WI-38). The virus in the Oka/Merck vaccine has undergone further passage through human diploid-cell cultures (MRC-5) for a total of 31 passages.

In 1995, the single-antigen varicella vaccine was licensed in the United States for use among healthy persons aged ≥12 months. This vaccine is lyophilized; when reconstituted as directed in the package insert and stored at room temperature for a maximum of 30 minutes, it contains a minimum of 1,350 plaque-forming units (PFUs) of Oka/Merck VZV in each 0.5-mL dose (79). Each dose also contains 12.5 mg of hydrolyzed gelatin, among other components (79).

# Immune Response to Vaccination

In clinical trials of the single-antigen varicella vaccine conducted before licensure, seroconversion was assessed using lots of vaccine with different amounts of PFUs and laboratory assays with different levels of sensitivity and specificity. Using a specially developed, sensitive gp-enzyme-linked immunosorbent assay (gpELISA) test that is not available commercially, seroconversion (defined by the acquisition of any detectable varicella antibodies, ≥0.3 gpELISA units) was observed at approximately 4-6 weeks after vaccination with 1 dose of varicella vaccine in approximately 97% of 6,889 susceptible children aged 1-12 years (79). The seroconversion rate was 98% for children aged 12-15 months and 95% among those aged 5-12 years (81). Adolescents aged 13-17 years had a lower seroconversion rate (79%) after a single dose of vaccine. A study performed postlicensure used fluorescent antibody to membrane antigen (FAMA) titers 16 weeks after vaccination to assess serologic response and demonstrated that 61 (76%) of 80 healthy child vaccine recipients seroconverted (FAMA titers ≥1:4) after 1 dose of single-antigen varicella vaccine (82).
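Seroconversion and the protective correlate discussed in this section are simple threshold classifications of measured titers. The Python sketch below summarizes a set of titers against the two gpELISA cutoffs cited here (seroconversion, any detectable antibody at ≥0.3 gpELISA units; the protective correlate, ≥5 gpELISA units). The titer values are hypothetical, for illustration only.

```python
# Hypothetical gpELISA titers (units/mL) for six vaccine recipients.
titers = [0.1, 0.4, 2.0, 6.5, 12.0, 30.0]

SEROCONVERSION = 0.3   # gpELISA units, "any detectable antibody"
PROTECTIVE = 5.0       # gpELISA units, correlate of protection

seroconverted = sum(1 for t in titers if t >= SEROCONVERSION)
protected = sum(1 for t in titers if t >= PROTECTIVE)

print(f"seroconversion: {100 * seroconverted / len(titers):.0f}%")   # 83%
print(f">=5 gpELISA units: {100 * protected / len(titers):.0f}%")    # 50%
```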
Primary antibody response to the vaccine at 6 weeks postvaccination is correlated with protection against disease (83,84). In clinical trials, rates of breakthrough disease were lower among children with varicella antibody titers of ≥5 gpELISA units than among those with titers of <5 gpELISA units. Later studies of immunogenicity (85) have reported the proportion of vaccinated children who achieved this antibody level instead of seroconversion. After 1 dose of the single-antigen varicella vaccine, 86% of children had gpELISA levels of ≥5 units/mL (85). Studies performed using FAMA indicated that a titer ≥1:4 at 16 weeks postvaccination is correlated with protection against disease (82). Of healthy persons with a titer of ≥1:4 at 16 weeks postvaccination, <1% had varicella after a household exposure (n = 130). In contrast, the attack rate among those with a titer of <1:4 was 55% (n = 60).

Persistence of antibody in children after 1 dose of single-antigen varicella vaccine was demonstrated in both short- and long-term follow-up studies. In a clinical study, the rate of antibody persistence detected by gpELISA was nearly 100% after 9 years of follow-up for 277 children (85). Another study demonstrated that although antibody titers (detected by FAMA) might decline 12-24 months after vaccination, the median titer did not change after 1-4 years and even rose after 10 years (86). In Japan, VZV antibodies were present in 37 (97%) of 38 children who received varicella vaccine 7-10 years earlier (with titers comparable to those of 29 children who had had natural varicella infection within the previous 10 years) (87) and in 100% of 25 children when followed for as long as 20 years (i.e., antibody levels were higher than those observed 10 years earlier) (88). Interpretation of long-term studies is complicated by at least two factors. First, asymptomatic boosting of vaccine-induced immunity by exposure to wild-type VZV is likely; because varicella vaccine is not routinely recommended in Japan, coverage of children was estimated to be low (approximately 20%) during 1991-1993. Second, sample sizes were limited as a result of the decrease in the number of children followed up with increasing time since vaccination.

The second dose of varicella vaccine in children produced an improved immunologic response that is correlated with improved protection. A comparative study of healthy children who received 1 or 2 doses of single-antigen varicella vaccine administered 3 months apart indicated that a second dose provided higher antibody levels, as measured by the proportion of subjects with titers of ≥5 gpELISA units and by geometric mean titers (GMTs), and higher efficacy (85; Tables 2-4). The proportion of subjects with antibody titers of ≥5 gpELISA units among the 2-dose recipients was higher 6 weeks after the second dose than after the first dose (99.6% and 85.7%, respectively) and remained high at the end of the 9-year follow-up period, although the difference between the two regimens narrowed (97% and 95%, respectively). GMT 6 weeks after the second dose was substantially higher than that after a single dose (142 and 12, respectively). The difference in GMTs between the two regimens did not persist over 9 years of follow-up among subjects who seroconverted after vaccination, although GMTs in both regimens remained high by the end of the study period. However, receipt of a second dose decreased the rate of breakthrough varicella significantly (3.3-fold) and increased vaccine efficacy (p<0.001).
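A geometric mean titer, the summary measure used above (e.g., GMT 142 vs. 12 at 6 weeks), is the exponential of the mean of log titers. The Python sketch below shows the standard calculation; the titer values are hypothetical, chosen only to illustrate the computation, not trial data.

```python
from math import exp, log

def gmt(titers):
    """Geometric mean titer: exponential of the mean of log titers."""
    return exp(sum(log(t) for t in titers) / len(titers))

one_dose = [8, 10, 14, 16]      # hypothetical gpELISA titers after 1 dose
two_dose = [90, 130, 160, 200]  # hypothetical titers after 2 doses

print(round(gmt(one_dose), 1))               # ~11.6
print(round(gmt(two_dose), 1))               # ~139.1
print(round(gmt(two_dose) / gmt(one_dose), 1))  # fold-difference, ~12.0
```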
Another study that assessed the immunogenicity of a second dose received 4-6 years after the first dose demonstrated a substantial increase in antibody levels in the first 7-10 days in the majority of those tested, indicating an anamnestic response. On the day of the second dose, GMT was 25.7, compared with a GMT of 143.6 at 7-10 days after the second dose; 60% of recipients had at least a fourfold increase in antibody titers, and an additional 17% had at least a twofold increase (89). Three months after the second dose, GMT remained higher than on the day of the second dose (119.0 and 25.7, respectively). Among children, VZV antibody levels and GMTs after 2 doses administered 4-6 years apart were comparable to those obtained when the 2 doses were administered 3 months apart.

TABLE 2. Humoral and cellular immune response among children aged 12 months-12 years measured at 6 weeks postvaccination, by vaccine type and vaccination schedule, United States

Among persons aged ≥13 years, multiple studies have described seroconversion rates after receipt of the single-antigen varicella vaccine (range: 72%-94% after 1 dose and 94%-99% after a second dose administered 4-8 weeks later) (79,92,93). In clinical studies, detectable antibody levels have persisted for at least 5 years in 97% of adolescents and adults who were administered 2 doses of vaccine 4-8 weeks apart (79). However, other studies demonstrated that 25%-31% of adult vaccine recipients who seroconverted lost detectable antibodies (by FAMA) at multiple intervals (range: 1-11 years) after vaccination (93,94). For persons who had breakthrough disease after exposure to varicella, the severity of illness and the attack rates did not increase over time (95).

Innate (i.e., nonspecific) and adaptive (i.e., humoral and cellular) immunity are important in the control of primary varicella infection. The capacity to elicit cell-mediated immunity is important for viral clearance, providing long-term protection against disease and preventing symptomatic VZV reactivation. Studies among children and adults have indicated that breakthrough varicella typically is mild, even among vaccine recipients without seroconversion or vaccine recipients who lost detectable antibody, suggesting that VZV-specific cell-mediated immunity affords protection to vaccine recipients in the absence of a detectable antibody response (94,95). Studies of the cellular immune response to vaccination among children demonstrated that immunization with 1 dose of varicella vaccine induced VZV-specific T-cell proliferation that was maintained in 26 (90%) of 29 children 1 year postvaccination and in 52 (87%) of 60 children 5 years postvaccination (96). In this study, the mean stimulation index (SI), a marker of cell-mediated immunity, was 12.1 after 1 year and 22.1 after 5 years. Data obtained at 1 year postvaccination from a subset of children in a prelicensure study comparing the immune response among children who received 1 and 2 doses administered 3 months apart demonstrated that the varicella-specific lymphocyte proliferation responses were significantly higher for recipients of 2 doses than for recipients of 1 dose (mean SI: 34.7 and 23.1, respectively; p = 0.03) (97).
In the study of the 2 doses administered 4-6 years apart, results also indicated that the lymphocyte proliferation response was significantly higher at 6 weeks and 3 months after the second dose than at the same time points after the first dose (p<0.01) (89; Table 2). Among adults, vaccine-induced VZV-specific T-cell proliferation was maintained in 16 (94%) of 17 subjects 1 and 5 years postvaccination (96,98). The mean SI was 9.9 after 1 year and 22.4 after 5 years.

# Correlates of Protection

For children, the varicella antibody response measured by gpELISA 6 weeks postvaccination correlates with neutralizing antibody level, VZV-specific T-cell proliferative responses, vaccine efficacy, and long-term protection against varicella after exposure to VZV (83,84,99,100). A titer of ≥5 gpELISA units/mL is associated with protection against disease, although it should not be considered an absolute guarantee of protection; breakthrough cases have occurred among children with ≥5 gpELISA units/mL. A FAMA titer ≥1:4 at 16 weeks postvaccination also correlates with protection against disease (82). However, neither of these antibody tests is available commercially. The relationship between the antibody level measured at other intervals postvaccination (especially immediately prior to exposure) and breakthrough disease has not been studied. No correlates of protection have been evaluated for adults.

# Vaccine Efficacy and Vaccine Effectiveness

# One-Dose Regimen

# Prelicensure Efficacy

In prelicensure studies carried out among children aged 12 months-14 years, the protective efficacy of single-antigen varicella vaccine varied, depending on the amount of live virus administered per dose, the exposure setting (community or household), and the quality and length of the clinical follow-up. The majority of the prelicensure studies reported efficacy of 1 dose of varicella vaccine within the range of 70%-90% against any clinical disease and 95% against severe disease for 7-10 years after vaccination (81,101,102). A randomized placebo-controlled efficacy trial was conducted among children aged 12 months-14 years, but the formulation differed from that of the current vaccine (17,000 PFUs per dose) (103,104), with follow-up of children through 7 years postvaccination (105). Reported efficacy was 100% at 1 year and 98% at 2 years after vaccination, and 100% and 92%, respectively, after exposures to VZV that occurred in the household.

Although a randomized controlled study was not conducted for adults, the efficacy of single-antigen varicella vaccine was estimated by evaluating protection when adult vaccine recipients were exposed to varicella in the household. On the basis of the reported historical attack rate of 87% for natural varicella after household exposure among unvaccinated children, estimated efficacy among adults was approximately 80% (79). The attack rate of unvaccinated adults exposed in households was not studied.
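The approximately 80% adult efficacy figure follows from comparing the attack rate among vaccinated household contacts with the 87% historical attack rate among unvaccinated persons. Schematically (the vaccinated attack rate shown is back-calculated for illustration, not a value reported in the source):

```latex
\[
VE \;=\; 1 - \frac{AR_{\text{vaccinated}}}{AR_{\text{unvaccinated}}}
\qquad\text{e.g.,}\qquad
1 - \frac{0.17}{0.87} \;\approx\; 0.80
\]
```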
# Postlicensure Efficacy and Effectiveness

# Prevention of All Varicella Disease

Postlicensure studies have assessed the effectiveness¶ of the single-antigen varicella vaccine under field conditions in child care, school, household, and community settings using multiple methods. Effectiveness frequently has been estimated against all varicella and also against moderate and severe varicella (defined in different ways). Outbreak investigations have assessed effectiveness against clinically defined varicella. The majority of these investigations have demonstrated vaccine effectiveness for prevention of varicella in the same range described in prelicensure trials (70%-90%) (3-6,106-113), with some lower (44%, 56%) (114,115) and some higher (100% in one of two schools investigated) estimates (107). A retrospective cohort study in 11 child care centers demonstrated vaccine effectiveness of 83% for prevention of clinically diagnosed varicella (116). In a case-control study that measured vaccine effectiveness against laboratory-confirmed varicella in a pediatric office setting during 1997-2003, vaccine effectiveness was 85% (CI = 78%-90%) during the first four years and 87% (CI = 81%-91%) for the entire study period (117,118). Finally, in a study of household secondary attack rates, considered the most robust test of vaccine performance because of the intensity of exposure, varicella vaccine was 79% (CI = 70%-85%) effective in preventing clinically defined varicella in exposed household contacts aged 12 months-14 years without a history of varicella disease or vaccination (119). Postlicensure data on vaccine effectiveness against all disease have been summarized (Table 5).

¶ In this report, effectiveness refers to the extent to which a specific intervention, when deployed in the field, does what it is intended to do for a defined population.

In a randomized clinical trial conducted postlicensure that compared the efficacy of 1 dose of varicella vaccine with that of 2 doses, the estimated vaccine efficacy for 1 dose for a 10-year observation period was 94.4% (CI = 92.9%-95.7%) (85; Table 3). In the same study, the efficacy of 1 dose of vaccine in preventing varicella after household exposure for 10 years was 90.2% (CI = 83.7%-96.7%) (Table 4). This study did not use placebo controls and used historic data for attack rates in unvaccinated children to calculate vaccine efficacy.

# Prevention of Moderate and Severe Varicella

Postlicensure studies assessing vaccine performance in preventing moderate and severe varicella have consistently demonstrated high effectiveness. Definitions for disease severity have varied among studies. Certain studies have used a defined scale of illness that included the number of skin lesions, fever, complications, and investigator assessment of illness severity; others have used only the number of skin lesions, reported complications, or hospitalizations. Moderate varicella typically has been defined as either 50-500 or 250-500 lesions, and severe varicella has been defined as >500 lesions or any hospitalization or complication. In the randomized postlicensure clinical trial, severe varicella was defined as >300 lesions and fever of >102°F (38.9°C), oral equivalent. Regardless of different definitions, multiple studies have demonstrated that single-antigen varicella vaccine was >95% effective in preventing combined moderate and severe disease (3-6,85,106,107,109-113,115-119); one study demonstrated effectiveness of 86% (114). Effectiveness was 100% against severe disease when measured separately (6,85,109,111,117,119). Postlicensure data on vaccine effectiveness against moderate and severe varicella have been summarized (Table 5).
TABLE 5. Summary of postlicensure data on effectiveness of single-antigen varicella vaccine against disease among children aged 12 months-14 years, United States, 1996-2004

# Two-Dose Regimen

In a randomized clinical trial of single-antigen varicella vaccine that compared the efficacy of 1 dose with that of 2 doses administered 3 months apart, the estimated vaccine efficacy of 2 doses for a 10-year observation period was 98.3% (CI = 97.3%-99.0%), which was significantly higher than efficacy after 1 dose (p<0.001) (85; Table 5). The 2-dose regimen also was 100% efficacious against severe varicella. In the same study, the efficacy of 2 doses of single-antigen varicella vaccine in preventing disease after household exposure over 10 years was 96.4% (CI = 92.4%-100%), not significantly different from 1 dose (90.2%) (p = 0.112) (Table 4). However, the number of cases involving household exposure was limited.

Formal studies to evaluate the clinical efficacy of the combination MMRV vaccine have not been performed. Efficacy of the individual components was established previously in clinical studies with the single-antigen vaccines.

# Breakthrough Disease

Breakthrough disease is defined as a case of infection with wild-type VZV occurring >42 days after vaccination. In clinical trials, varicella disease was substantially less severe among vaccinated persons than among unvaccinated persons, who usually have fever and several hundred vesicular lesions (120). In cases of breakthrough disease, the median number of skin lesions commonly is <50 (99,121-123). In addition, compared with unvaccinated persons, vaccine recipients have had fewer vesicular lesions (lesions more commonly are atypical, with papules that do not progress to vesicles), shorter duration of illness, and lower incidence of fever.

Multiple postlicensure investigations also have demonstrated that the majority of breakthrough varicella cases are significantly milder than cases among unvaccinated children (p<0.05) (3,5,107-114,116-118,124). However, approximately 25%-30% of breakthrough cases are not mild, with clinical features more similar to those in unvaccinated children (124). Since 1999, when varicella deaths became nationally notifiable, two deaths from breakthrough varicella disease have been reported to CDC: one of a girl aged 9 years with a history of asthma who was receiving steroids when she had the breakthrough infection, and the other of a girl aged 7 years with a history of malignant ependymoma who also was under steroid therapy at the time of her death (CDC, unpublished data, 2006).

# One-Dose Regimen

In clinical trials, 1,114 children aged 1-12 years received 1 dose of single-antigen varicella vaccine containing 2,900-9,000 PFUs of attenuated virus per dose and were actively followed for up to 10 years postvaccination (79). Among a subset of 95 vaccine recipients with household exposure to varicella, eight (8%) reported a mild form of varicella (10-34 lesions).

In a randomized clinical trial that compared the efficacy of 1 dose of vaccine with that of 2 doses during a 10-year observation period, the cumulative rate of breakthrough varicella among children who received 1 dose was 7.3% (85). Breakthrough cases occurred annually in 0.2%-2.3% of recipients of 1 dose of vaccine.
Cases occurred throughout the observation period, but the majority were reported 2–5 years after vaccination (Figure 4). Of 57 children with breakthrough cases, 13 (23%) had >50 lesions. In cross-sectional studies, the attack rate for breakthrough disease has ranged between 11% and 17% (and as high as 40% in certain classrooms) in outbreak investigations (3) and 15% in household settings (119).

# Two-Dose Regimen Data Among Children

In a randomized clinical trial that compared the efficacy of 1 dose of vaccine with that of 2 doses, the cumulative rate of breakthrough varicella during a 10-year observation period was 3.3-fold lower among children who received 2 doses than that among children who received 1 dose (2.2% and 7.3%, respectively; p<0.001). The proportion of children with >50 lesions did not differ between the 1-dose and 2-dose regimens (p = 0.5).

# Breakthrough Infections Among Adolescents and Adults

In postlicensure studies of adolescents and adults who received 2 doses, 40 (9%) cases of breakthrough varicella occurred among 461 vaccine recipients who were followed for 8 weeks–11.8 years (mean: 3.3 years) after vaccination (95), and 12 (10%) cases occurred among 120 vaccine recipients who were followed for 1 month–20.6 years (mean: 4.6 years) (94). One prelicensure study of persons who had received 2 doses of vaccine reported that 12 (8%) breakthrough cases had occurred among 152 vaccine recipients who were followed for 5–66 months (mean: 30 months) postvaccination (93).

# Contagiousness

Prelicensure clinical trials reported the rate of disease transmission from vaccinated persons with varicella to their vaccinated siblings. In 10 trials that were conducted during 1981–1989, breakthrough infections occurred in 114 (5.3%) of 2,163 vaccinated children during the 1–8 year follow-up period of active surveillance, and secondary transmission occurred to 11 (12.2%) of their 90 vaccinated siblings (121). Illness was mild in both index and secondary case-patients. Household transmission from vaccinated children with breakthrough disease to susceptible adults (one of whom died) has been reported (CDC, unpublished data, 2006). One study examined secondary attack rates from vaccinated and unvaccinated persons with varicella to both vaccinated and unvaccinated household contacts aged 12 months–14 years (119). This study demonstrated that vaccinated persons with varicella with <50 lesions were only one third as contagious as unvaccinated persons, whereas vaccinated persons with varicella with ≥50 lesions were as contagious as unvaccinated persons with varicella (119). Vaccinated persons with varicella tend to have milder disease, and, although they are less contagious than unvaccinated persons with varicella, they might not receive a diagnosis and be isolated. As a result, they might have more opportunities to infect others in community settings, thereby further contributing to VZV transmission. Vaccinated persons with varicella also have been index case-patients in varicella outbreaks (3,4,115).

# Risk Factors for Vaccine Failure

Potential risk factors for vaccine failure have been identified in studies of vaccine effectiveness during outbreak investigations and other specially designed studies (5,108–110,113–115,118,125). In outbreak investigations, the low number of cases limits the ability of researchers to conduct multivariate analyses and examine the independent effect of each risk factor for vaccine failure.
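Relative risks like those cited below are point estimates from 2×2 cohort tables, and the small case counts typical of single-outbreak investigations mainly widen their confidence intervals. The sketch below is an illustrative calculation only: the counts are hypothetical and do not come from any study cited in this report.

```python
import math

def relative_risk(a, n1, b, n0):
    """Relative risk and 95% CI for a cohort 2x2 table.

    a/n1: breakthrough cases / persons in the exposed group
    b/n0: breakthrough cases / persons in the comparison group
    Uses the standard log-normal approximation for the CI.
    """
    rr = (a / n1) / (b / n0)
    se = math.sqrt(1 / a - 1 / n1 + 1 / b - 1 / n0)
    lower = math.exp(math.log(rr) - 1.96 * se)
    upper = math.exp(math.log(rr) + 1.96 * se)
    return rr, (lower, upper)

# Hypothetical outbreak data: 12 breakthrough cases among 80 children
# vaccinated >5 years earlier vs. 6 among 120 vaccinated more recently.
rr, ci = relative_risk(12, 80, 6, 120)
print(f"RR = {rr:.1f}, 95% CI = {ci[0]:.1f}-{ci[1]:.1f}")  # RR = 3.0, 95% CI = 1.2-7.7
```

With counts this small, the interval spans several-fold in either direction, which is one reason risk-factor estimates from single outbreaks are interpreted cautiously.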
An increased risk for breakthrough disease has been noted with decreasing age at vaccination, with a threefold increase in breakthrough disease risk reported for children vaccinated at the youngest ages. In other studies, a longer time since vaccination (>3, >5, or >5 years) was associated with an increased risk for breakthrough disease (relative risk = 2.6, 6.7, and 2.6, respectively) (5,114,115). However, age at vaccination and time since vaccination are highly correlated, and their independent association with the risk for breakthrough disease has been assessed in only one outbreak investigation (113). A retrospective cohort study that adjusted for other potential risk factors demonstrated an increased risk for breakthrough disease for children vaccinated at age <15 months; vaccine effectiveness in the first year after vaccination was lower among these children than among those vaccinated at age ≥15 months (99%) (p = 0.01) (118). However, the difference in the overall effectiveness between children vaccinated at these ages was not statistically significant for subsequent years (8 years of follow-up) (81% and 88%, respectively; p = 0.17).

Active surveillance data collected during 1995–2004 from a sentinel population of 350,000 persons were analyzed to determine whether the severity and annual incidence of breakthrough varicella cases increased with time since vaccination (126). Children vaccinated >5 years previously were 2.6 times more likely to have moderate and severe breakthrough varicella than children vaccinated more recently. Multiple other studies that examined possible reasons for lower vaccine effectiveness did not find age at vaccination (3–5,111,114) or time since vaccination (3,110,111) to be associated with vaccine failure. An ongoing study is examining these factors and risk for vaccine failure (127). After 8 years of active follow-up of 7,449 children vaccinated at age 12–23 months, results do not indicate an increased risk for breakthrough disease among children vaccinated at age 12–14 months compared with those vaccinated at age 15–23 months. Moreover, a test for trend revealed no change in the rate of reported breakthrough disease for each additional month of age at vaccination (127).

Two outbreak investigations noted an increased risk for breakthrough disease in children with asthma and eczema (109,113). In these investigations, the use of steroids to treat asthma or eczema was not studied. Steroids have been associated previously with severe varicella in unvaccinated persons (128–130). Only one retrospective cohort study controlled simultaneously for the effect of multiple risk factors, including the use of steroids, and this study demonstrated no association of risk for breakthrough disease with asthma or eczema (125). However, this study documented an increased risk for breakthrough disease if the child had received a prescription of oral steroids (considered a proxy for taking oral steroids when exposed to varicella) within 3 months of breakthrough disease (adjusted RR = 2.4; CI = 1.3–4.4) and when varicella vaccination was administered within 28 days of MMR vaccine (aRR = 3.1; CI = 1.5–6.4).

# Evidence of Immunity

ACIP has approved criteria for evidence of immunity to varicella (Box). Only doses of varicella vaccines for which written documentation of the date of administration is presented should be considered valid. Neither a self-reported dose nor a history of vaccination provided by a parent is, by itself, considered adequate evidence of immunity. Persons who lack documentation of adequate vaccination or other evidence of immunity should be vaccinated.
Historically, self-reporting of varicella disease by adults or by parents for their children has been considered valid evidence of immunity. The predictive value of a self-reported positive disease history was extremely high in adults in the prevaccine era, although data on positive predictive value are lacking in parental reports regarding their children (131–133).

# Simultaneous Administration of Vaccines

Single-antigen varicella vaccine is well-tolerated and effective in healthy children aged ≥12 months when administered simultaneously with MMR vaccine either at separate sites and with separate syringes or separately ≥4 weeks apart. The number and types of adverse events occurring in children who have received VARIVAX and MMR II concurrently have not differed from those in children who have been administered the vaccines at different visits (79,135). Data concerning the effect of simultaneous administration of VARIVAX with vaccines containing various combinations of MMR, diphtheria and tetanus toxoids and pertussis (DTP), and Haemophilus influenzae type b (Hib) have not been published (79). A randomized study of 694 subjects determined that the immune response to MMR, varicella, and Hib vaccines administered concurrently with a fourth dose of pneumococcal conjugate vaccine (PCV7) was not inferior to that of those vaccines when administered without PCV7; the percentage of subjects who seroconverted was ≥90% for all antigens for both groups (136).

Concomitant administration of the combination MMRV vaccine with other vaccines also has been assessed. In a clinical trial involving 1,913 healthy children aged 12–15 months, three groups were compared (137). One group received concomitantly administered (at separate sites) MMRV vaccine, Diphtheria and Tetanus Toxoids and Acellular Pertussis Vaccine Adsorbed (DTaP), Hib conjugate (meningococcal protein conjugate) vaccine, and hepatitis B (recombinant) (Hep B) vaccine. The second group received MMRV vaccine at the initial visit, followed by DTaP, Hib, and Hep B vaccines administered concomitantly 6 weeks later. The third group received MMR and varicella vaccines concomitantly, followed 6 weeks later by DTaP, Hib, and Hep B vaccines. Seroconversion rates and antibody titers were comparable for the measles, mumps, rubella, and varicella components for the first two groups. No immunologic data were reported for the third group. The Hib and Hep B seroconversion rates for the two groups that received those vaccines also were comparable.

Data are absent or limited for the concomitant use of MMRV vaccine with inactivated polio, pneumococcal conjugate, influenza, and hepatitis A vaccines. Simultaneous administration of the majority of widely used live and inactivated vaccines has produced seroconversion rates and rates of adverse reactions similar to those observed when the vaccines are administered separately. Therefore, single-antigen and combination MMRV vaccines may be administered simultaneously with other vaccines recommended for children aged 12–15 months and those aged 4–6 years. Simultaneous administration is particularly important when health-care providers anticipate that, because of certain factors (e.g., previously missed vaccination opportunities), a child might not return for subsequent vaccination.

# Economic Analysis of Vaccination

A cost-effectiveness analysis was performed before initiation of the varicella vaccination program in the United States (138).
The results of the study indicated a savings of $5.40 for each dollar spent on routine vaccination of preschool-aged children when direct and indirect costs were considered. When only direct medical costs were considered, the benefit-cost ratio was 0.9:1.0. Benefit-cost ratios were only slightly lower when lower estimates of the short- and long-term effectiveness of the vaccine were used.

A recent analysis was performed that used current estimates of morbidity and mortality (20,28,33) and current direct and indirect costs (ACIP, unpublished presentation, 2006). The model assumed that the second dose would reduce the varicella disease remaining after the first dose by 79%. From a societal perspective, both 1-dose and 2-dose vaccination programs are cost saving compared with no program. The vaccine program cost was estimated at $320 million for 1 dose and $538 million for 2 doses. The savings from varicella disease prevented were estimated at approximately $1.3 billion for the 1-dose program and approximately $1.4 billion for the 2-dose program. Compared with the 1-dose program, the incremental cost for the second dose was estimated to be $96,000 per quality-adjusted life year (QALY) saved. If benefits from preventing group A streptococcus infections and HZ among vaccinated persons are added, incremental costs per QALY saved are $91,000 and $17,000, respectively. Because of the uncertainty of the modeled predictions of an increase in HZ among persons with a history of varicella and the fact that no consistent trends demonstrate an increase in HZ attributable to the varicella vaccination program in the United States, HZ among persons with a history of varicella was not included in the model.

# Storage, Handling, and Transportation of Varicella Vaccines

Single-antigen varicella and combination MMRV vaccines have similar but not identical distribution, handling, and storage requirements (79,80). For potency to be maintained, the lyophilized varicella vaccines must be stored frozen at an average temperature of 5°F (-15°C) or colder. Household freezers manufactured since the mid-1980s are designed to maintain temperatures from -4°F (-20°C) to 5°F (-15°C). When tested, VARIVAX has remained stable in frost-free freezers. Freezers that reliably maintain an average temperature of <5°F (<-15°C) and that have a separate sealed freezer door are acceptable for storing VARIVAX and ProQuad. Health-care providers may use stand-alone freezers or the freezer compartment of refrigerator-freezer combinations, provided that the freezer compartment has its own separate, sealed, and insulated exterior door. Units with an internal freezer door are not acceptable. Temperatures should be documented at the beginning and end of each day. Providers should document the required temperature in a newly purchased unit for a minimum of 1 week before using it to store vaccine and routinely thereafter. When varicella vaccines are stored in the freezer compartment of a combined refrigerator-freezer, temperatures in both compartments should be monitored carefully. Setting the thermostat low enough for storage of varicella-containing vaccines might inadvertently expose refrigerated vaccines to freezing temperatures. Refrigerators with ice compartments that either are not tightly enclosed or are enclosed with unsealed, uninsulated doors (e.g., small, dormitory-style refrigerators) are not acceptable for the storage of varicella vaccines.
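Because the daily temperature documentation described above is a simple range check, it is straightforward to mechanize. The short sketch below is only an illustration of that check; the function names and the example log are invented, not drawn from this report.

```python
FREEZER_MAX_F = 5.0  # varicella vaccines must stay at or below 5 F (-15 C)

def f_to_c(temp_f):
    """Convert degrees Fahrenheit to degrees Celsius."""
    return (temp_f - 32.0) * 5.0 / 9.0

def out_of_range(readings):
    """Return readings that exceed the freezer storage limit.

    readings: list of (label, temperature_in_F) pairs, e.g., the
    beginning-of-day and end-of-day checks the text recommends.
    """
    return [(label, t, round(f_to_c(t), 1))
            for label, t in readings if t > FREEZER_MAX_F]

log = [("Mon AM", -2.0), ("Mon PM", 4.0), ("Tue AM", 7.5)]
for label, t_f, t_c in out_of_range(log):
    print(f"{label}: {t_f} F ({t_c} C) exceeds 5 F (-15 C)")  # flags Tue AM
```

Any flagged reading would prompt the same response the text suggests for other storage concerns: contact the manufacturer for guidance before using the vaccine.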
Diluent should be stored separately either at room temperature or in the refrigerator. Vaccines should be reconstituted according to the directions in the package insert and only with the diluent supplied with the vaccine, which does not contain preservative or other antiviral substances that could inactivate the vaccine virus. Once reconstituted, vaccine should be used immediately to minimize loss of potency. Vaccine should be discarded if not used within 30 minutes after reconstitution.

# Handling and Transportation of Varicella Vaccines Within Off-Site Clinics

When an immunization session is being held at a site distant from the freezer in which the vaccine is stored, the number of vaccine vials needed for the immunization session should be packed in either a vaccine shipping container (as received from the manufacturer) or in an insulated cooler, with an adequate quantity of dry ice (i.e., a minimum of 6 lbs per box) to preserve potency. When placed in a suitable container, dry ice will maintain a temperature of <5°F (<-15°C). Dry ice should remain in the container upon arrival at the clinic site. If no dry ice remains when the container is opened at the receiving site, the manufacturer (Merck and Company, Inc.) should be contacted for guidance (telephone: 1-800-982-7482). If dry ice is available at the receiving site, it may be used to store vaccine. Thermometers or temperature indicators cannot be used in a container with dry ice. Diluent should not be transported on dry ice.

If dry ice is not available, only single-antigen varicella vaccine may be transported, with frozen packs to keep the temperature between 36°F-46°F (2°C-8°C). Transport temperatures should be monitored, and a temperature indicator or thermometer should be placed in the container and checked on arrival. The container should be kept closed as much as possible during the immunization session; temperatures should be checked and recorded hourly. If the temperature remains between 36°F-46°F (2°C-8°C), the single-antigen varicella vaccine may be used for up to 72 hours after its removal from the freezer. The date and time should be marked on the vaccine vial. Single-antigen varicella vaccine stored at refrigerated temperatures for any period of time may not be refrozen for future use.

Transportation and storage of combination MMRV vaccine at temperatures between 36°F-46°F (2°C-8°C) is not permissible for any length of time. In contrast to single-antigen varicella vaccine, combination MMRV vaccine must be maintained at temperatures of <5°F (<-15°C) until the time of reconstitution and administration. This difference in vaccine storage temperatures must be considered when planning off-site clinics. For this reason, transportation of MMRV vaccine to off-site clinics is not advised. If any concerns arise regarding the storage of single-antigen varicella or combination MMRV vaccines, the manufacturer should be contacted for guidance.

# Minimizing Wastage of Vaccine

Vaccine wastage can be minimized by accurately determining the number of doses needed for a given patient population. To ensure maximal vaccine potency, smaller shipments of vaccine should be ordered more frequently (preferably at least once every 3 months). Single-antigen varicella vaccine should not be distributed to providers who do not have the capacity to store it properly in a freezer until it is used. Transportation of varicella vaccine should be kept to a minimum to prevent loss of potency.
Off-site clinic sites should receive only such amounts of vaccine as they can use within a short time (72 hours if storing single-antigen varicella vaccine at refrigerated temperatures).

# Adverse Events After Vaccination

Because adverse events after vaccination might continue to be caused by wild-type VZV even as varicella disease declines, health-care providers should obtain event-appropriate clinical specimens (e.g., cerebrospinal fluid for encephalitis, bronchial lavage or lung biopsy for pneumonia) for laboratory evaluation, including strain identification. Information regarding strain identification is available from Merck's VZV Identification Program (telephone: 1-800-652-6372), from CDC's National Varicella Reference Laboratory (telephone: 404-639-0066; e-mail: [email protected]), or at http://www.cdc.gov/nip/diseases/varicella/surv/default.htm. Commercial laboratories do not have the capability for strain identification.

The National Childhood Vaccine Injury Act of 1986 requires physicians and other health-care providers who administer vaccines to maintain permanent immunization records and to report occurrences of adverse events for selected vaccines, including varicella vaccines. Serious adverse events (i.e., all events requiring medical attention) suspected to have been caused by varicella vaccines should be reported to the Vaccine Adverse Event Reporting System (VAERS). Forms and instructions are available at https://secure.vaers.org/vaersDataEntryintro.htm, in the FDA Drug Bulletin, or from the 24-hour VAERS information recording at 1-800-822-7967.

# Prelicensure

# Single-Antigen Varicella Vaccine

Single-antigen varicella vaccine was well-tolerated when administered to >11,000 healthy children, adolescents, and adults during prelicensure clinical trials. In a double-blind, placebo-controlled study among 914 susceptible healthy children aged 12 months–14 years, the only statistically significant (p<0.05) adverse events reported that were more common among vaccine recipients than among placebo recipients were pain and redness at the injection site (103). This study also described the presence of nonspecific rash among 2% of placebo and 4% of vaccine recipients occurring within 43 days of vaccination. Of the 28 reported rashes, 10 (36%) were examined by a physician; among those that were examined, four of the seven noninjection-site rashes in vaccine recipients were judged to be varicella-like, compared with none of the rashes in the placebo recipients.

In a study comparing the safety of 1 dose of single-antigen varicella vaccine with that of 2 doses administered 3 months apart, no serious adverse events related to vaccination were reported among approximately 2,000 healthy subjects aged 12 months–12 years who were followed for 42 days after each injection. The 2-dose vaccine regimen was generally well-tolerated, and its safety profile was comparable to that of the 1-dose regimen. Incidence of injection-site complaints observed ≤3 days after vaccination was slightly higher after dose 2 (25.4%) than after dose 1 (21.7%). Incidence of systemic clinical complaints was lower after dose 2; fever incidence from days 7–21 was 7% after dose 1 and 4% after dose 2 (p = 0.009), and varicelliform rash incidence after dose 1 was 3%, compared with 1% after dose 2 (p = 0.008), with peak occurrence 8–21 days after vaccination (139).
In uncontrolled trials of persons aged ≥13 years, approximately 1,600 vaccine recipients who received 1 dose of single-antigen varicella vaccine and 955 who received 2 doses of vaccine were monitored for 42 days for adverse events (79). After the first and second doses, 24.4% and 32.5% of vaccine recipients, respectively, had complaints regarding the injection site. Varicella-like rash at the injection site occurred in 3% of vaccine recipients after the first injection and in 1% after the second. A nonlocalized rash occurred in 5.5% of vaccine recipients after the first injection and in 0.9% of vaccine recipients after the second, at a peak of 7–21 and 0–23 days postvaccination, respectively.

# Combination MMRV Vaccine

In clinical trials, combination MMRV vaccine was administered to 4,497 children aged 12–23 months without concomitant administration with other vaccines (80). The safety profile of the first dose was compared with the safety of MMR II vaccine and VARIVAX administered concomitantly at separate injection sites. The follow-up period was 42 days postvaccination. Systemic vaccine-related adverse events were reported at a statistically significantly greater rate in persons who received MMRV vaccine than in persons who received the two vaccines administered concomitantly at separate injection sites: fever (≥102°F oral equivalent) (21.5% and 14.9%, respectively) and measles-like rash (3.0% and 2.1%, respectively). Both fever and measles-like rash usually occurred within 5–12 days after the vaccination, were of short duration, and resolved with no long-term sequelae. Pain, tenderness, and soreness at the injection site were reported at a statistically significantly lower rate in persons who received MMRV vaccine than in persons who received the two vaccines concomitantly at separate injection sites.

To demonstrate that MMRV vaccine could be administered as a second dose, a study was conducted involving 799 children aged 4–6 years who had received primary doses of MMR II and VARIVAX vaccines, either concomitantly or not, at age ≥12 months and ≥1 month before study enrollment (91).

# Postlicensure

During March 1, 1995–December 31, 2005, a total of 47.7 million doses of varicella vaccine were distributed in the United States, and 25,306 adverse events that occurred after varicella vaccine administration were reported to VAERS, 1,276 (5%) of which were classified as serious. The overall adverse event reporting rate was 52.7 cases per 100,000 doses distributed. The rate of reporting of serious adverse events was 2.6 per 100,000 doses distributed. Half of all adverse events reported occurred among children aged 12–23 months (VAERS, unpublished data, 2006).

Not all adverse events that occur after vaccination are reported, and many reports describe events that might have been caused by confounding or unrelated factors (e.g., medications and other diseases). Because varicella disease continues to occur, wild-type virus might account for certain reported events. For serious adverse events for which background incidence data are known, VAERS reporting rates are lower than expected after natural varicella or than background rates of disease in the community. Inherent limitations of passive safety surveillance impede comparison of adverse event rates after vaccination reported to VAERS with rates of complications after natural disease. Nevertheless, the magnitude of these differences suggests that serious adverse events occur at a substantially lower rate after vaccination than after natural disease.
This assumption is corroborated by the substantial decline in the number of severe complications, hospitalizations, and deaths related to varicella that have been reported since implementation of the varicella vaccination program (22,23,28).

Similar to the prelicensure experience, postlicensure safety surveillance data after administration of single-antigen varicella vaccine indicated that rash, fever, and injection-site reactions were the most frequently reported adverse events (140,141). Using these reports from passive surveillance of adverse events during the first 4 years of the vaccination program, when wild-type VZV was still circulating widely, polymerase chain reaction (PCR) analysis confirmed that the majority of rash events occurring within 42 days of vaccination were caused by wild-type varicella-zoster virus. Rashes from the wild-type virus occurred a median of 8 days after vaccination (range: 1–24 days), whereas rashes from the vaccine strain occurred a median of 21 days after vaccination (range: 5–42 days) (140).

As part of postmarketing evaluation of the short-term safety of VARIVAX, 89,753 vaccinated adults and children were identified from automated clinical databases of hospitals, emergency room visits, and clinic visits during April 1995–December 1996 (56). Of all potential adverse events identified, no consistent time association or clustering of any events was noted during the exposure follow-up period. No cases of ataxia or encephalitis were identified after receipt of varicella vaccine in this group of vaccine recipients. In the prevaccine era, among children aged <15 years, acute cerebellar ataxia was estimated to occur at a rate of one in 4,000 varicella cases, and varicella encephalitis without ataxia was estimated to occur at one in 33,000 varicella cases (142).

Severe complications that are laboratory-confirmed to be caused by the vaccine virus strain are rare and include pneumonia (140), hepatitis (143), severe disseminated varicella infection (140,141,144,145), and secondary transmission from five vaccine recipients (140,146–148). Except for the secondary transmission cases, these cases all occurred in immunocompromised patients or in persons who had other serious medical conditions that were undiagnosed at the time of vaccination. Although other serious adverse events have been reported, vaccine strain involvement was not laboratory-confirmed. Thrombocytopenia (140,141,149) and acute cerebellar ataxia (140,141,150) have been described as potentially associated with single-antigen varicella vaccine. Two children had acute hemiparesis diagnosed after varicella vaccination (one at 5 days and the other at 3 weeks) (151). In both cases, unilateral infarction of the basal ganglia and internal capsule was noted; this distribution is consistent with varicella angiopathy. Urticaria after varicella vaccine has been associated with gelatin allergy (152). Recurrent papular urticaria has been reported to be potentially associated with varicella vaccination (153). However, available data regarding these potential adverse events after varicella vaccination are insufficient to determine a causal association. The quality of reported information varies widely, and simultaneous administration with other vaccines (especially MMR) might confound attribution.

Herpes Zoster. Similar to wild-type VZV, vaccine virus can establish latent infection and subsequently reactivate, causing HZ disease in vaccine recipients.
Before vaccine licensure, studies in children with leukemia had demonstrated a much lower rate of HZ in vaccinated children compared with those (age and protocol matched) with previous varicella (54). Cases of HZ in healthy vaccine recipients have been confirmed to be caused by both vaccine virus and wild-type virus, suggesting that certain HZ cases in vaccine recipients might result from antecedent natural varicella infection that might not have been detected by the patient or from infection after vaccination (140). A single case has been reported of a child who received a diagnosis of neuroblastoma and had severe chronic zoster attributed to the vaccine virus strain, which over time became drug resistant (145). A large postlicensure safety study, performed through surveys conducted every 6 months and validated by medical chart review in the first 9 years of a 15-year follow-up study of >7,000 enrolled children vaccinated with single-antigen varicella vaccine at age 12–24 months, estimated HZ disease incidence to be 22 per 100,000 person-years (CI = 13–37) as reported by parents (Steven Black, MD, Northern California Kaiser Permanente Medical Care Program, unpublished presentation, 2005). The incidence of HZ was 30 per 100,000 person-years among healthy children aged 5–9 years (154) and 46 per 100,000 person-years for those aged <14 years (64). However, these rates are drawn from different populations and based on different methodologies. In addition, a proportion of children in these age groups would not have experienced varicella disease; those rates are likely to underestimate rates in a cohort of children all infected with wild-type VZV, making direct comparison with a vaccinated cohort difficult.

# Transmission of Vaccine Virus

Results from prelicensure vaccine trials of the single-antigen varicella vaccine suggest that transmission of varicella vaccine virus from healthy persons to susceptible contacts is rare. This risk was assessed in siblings of healthy vaccinated children who themselves received placebo (103). Six (1%) of 439 placebo recipients seroconverted without rash; the vaccinated siblings of these six children also did not develop rash. Serologic data suggested that three of these six seroconverters received vaccine mistakenly in lieu of their siblings. In a smaller study, immunocompromised siblings of healthy children receiving varicella vaccine were evaluated clinically and by testing for humoral or cell-mediated immune responses (155). No evidence was demonstrated of vaccine virus transmission to any of 30 immunocompromised siblings from 37 healthy children receiving varicella vaccine.

Accumulated data from postlicensure surveillance activities suggest that the risk for transmission of varicella vaccine virus from healthy persons to susceptible contacts is low. With >55 million doses of VARIVAX distributed since licensure, transmission from immunocompetent persons after vaccination has been documented by PCR analysis from only five persons, resulting in six secondary infections, all of them mild (140,146–148). Three episodes involved transmission from healthy children aged 1 year to healthy household contacts, including a sibling aged 4 months, a father, and a pregnant mother. In the latter episode, the mother chose to terminate the pregnancy, but fetal tissue tested subsequently by PCR was negative for varicella vaccine virus (147). The children in these episodes had 2, 12, and 30 lesions, respectively.
A fourth episode involved transmission from an immunocompetent adolescent who was a resident in an institution for chronically disabled children. The adolescent had >500 lesions after vaccination, and vaccine virus was transmitted to another immunocompetent resident of the institution and to a health-care worker, both of whom had histories of varicella (146). The fifth episode represented a tertiary spread from a healthy sibling contact of a vaccinee with leukemia (148). Rashes for both healthy siblings were mild (i.e., 40 and 11 lesions, respectively), and vaccine virus was isolated from all three case-patients. The third sibling had rash 18 days after rash onset in the secondary case-patient and 33 days after rash onset in the vaccinated leukemic child. In addition to these five episodes, one child has been reported to have transmitted vaccine virus from HZ that occurred 5 months after varicella vaccination; 2 weeks later, a mild varicella-like rash from which vaccine varicella virus was isolated occurred in the child's vaccinated brother (156).

Although varicella vaccine is not recommended for children with cellular immune deficiencies, the experience from prelicensure vaccine trials involving children with leukemia is instructive. Data from a study of varicella vaccination in children with leukemia indicated that varicella vaccine virus transmission occurred in 15 (17%) of 88 healthy, susceptible siblings of leukemic vaccine recipients; the rash in these secondary cases was mild.

# Summary of Rationale for Varicella Vaccination

Varicella vaccine is an effective prevention tool for decreasing the burden attributable to varicella disease and its complications in the United States. In the prevaccine era, varicella was a childhood disease, with >90% of the 4 million cases, two thirds of the approximately 11,000 hospitalizations, and approximately half of the 100–150 annual deaths occurring among persons aged <20 years. Single-antigen varicella vaccine is licensed for use in healthy persons aged ≥12 months, and the combination MMRV vaccine is licensed for use in healthy children aged 12 months–12 years. Prelicensure and postlicensure studies have demonstrated that 1 dose of single-antigen varicella vaccine is approximately 85% effective in preventing varicella. Breakthrough varicella disease that occurs after vaccination frequently is mild and modified. Varicella vaccine is >95% effective in preventing severe varicella disease. Since implementation of the varicella vaccination program in 1995, varicella incidence, hospitalizations, and deaths have declined substantially. MMRV was licensed on the basis of immunological noninferiority to its vaccine antigenic components. Initial varicella vaccine policy recommendations were for 1 dose of varicella vaccine for children aged 12 months–12 years and 2 doses, 4–8 weeks apart, for persons aged ≥13 years. In June 2006, ACIP approved a routine 2-dose recommendation for children. The first dose should be administered at age 12–15 months and the second dose at age 4–6 years.

The rationale for the second dose of varicella vaccine for children is to further decrease varicella disease and its complications in the United States. Despite the successes of the 1-dose vaccination program in children, vaccine effectiveness of 85% has not been sufficient to prevent varicella outbreaks, which, although fewer than in the prevaccine era, have continued to occur in highly vaccinated school populations. Breakthrough varicella is contagious.
Studies of the immune response after 1 and 2 doses of varicella vaccine demonstrate a greater-than-tenfold boost in GMTs when measured 6 weeks after the second varicella vaccine dose. A higher proportion (≥99%) of children achieve an antibody response of ≥5 gpELISA units after the second dose, compared with 76%–85% of children after a single dose of varicella vaccine. The second dose of varicella vaccine is expected to provide improved protection to the 15%–20% of children who do not respond adequately to the first dose. Data from a randomized clinical trial conducted postlicensure indicated that vaccine efficacy after 2 doses of single-antigen varicella vaccine in children (98.3%; CI = 97.3%–99.0%) was significantly higher than that after a single dose (94.4%; CI = 92.9%–95.7%). The risk for breakthrough disease was 3.3-fold lower among children who received 2 doses than it was among children who received 1 dose. How this increase in vaccine efficacy (typically higher than observed under field conditions) will translate into vaccine effectiveness under conditions of community use will be an important area of study.

The recommended ages for the routine first (at age 12–15 months) and second (at age 4–6 years) doses of varicella vaccine are harmonized with the recommendations for MMR vaccine use and intended to limit the period when children have no varicella antibody. The recommended age for the second dose is supported by the current epidemiology of varicella, with low incidence and few outbreaks among preschool-aged children and higher incidence and more outbreaks among elementary-school-aged children. However, the second dose may be administered at an earlier age, provided that the interval between the first and second doses is ≥3 months. The recommendation for the minimum interval between doses is made on the basis of the design of the studies evaluating 2 doses among children aged 12 months–12 years. MMRV vaccine may be used to vaccinate children against measles, mumps, rubella, and varicella simultaneously. Because the risk for transmission can be high among students in schools, colleges, and other postsecondary educational institutions, students without evidence of immunity should receive 2 doses of varicella vaccine. All children and adolescents who previously received 1 dose of varicella vaccine should receive a second dose.

Varicella disease is more severe and its complications more frequent among adolescents and adults. The recommendation for vaccination of all adolescents and adults without evidence of immunity will provide protection in these age groups. Because varicella might be more severe in immunocompromised persons, who might not be eligible for vaccination, and because of the risk for VZV transmission in health-care settings, HCP must be vaccinated. Varicella disease during the first two trimesters of pregnancy might infect the fetus and result in congenital varicella syndrome. Therefore, routine antenatal screening for evidence of immunity and postpartum vaccination of those without evidence of immunity now is recommended.

# Recommendations for the Use of Varicella Vaccines

Two 0.5-mL doses of varicella vaccine administered subcutaneously are recommended for children aged ≥12 months, adolescents, and adults without evidence of immunity. For children aged 12 months–12 years, the recommended minimum interval between the two doses is 3 months.
However, if the second dose was administered ≥28 days after the first dose, the second dose is considered valid and need not be repeated.

# Routine Vaccination

# Persons Aged 12 Months–12 Years

# Preschool-Aged Children

All healthy children should receive their first dose of varicella-containing vaccine routinely at age 12–15 months.

# School-Aged Children

A second dose of varicella vaccine is recommended routinely for all children aged 4–6 years (i.e., before entering prekindergarten, kindergarten, or first grade). However, it may be administered at an earlier age provided that the interval between the first and second dose is ≥3 months.

Because of the risk for transmission of VZV in schools, all children entering school should have received 2 doses of varicella-containing vaccine or have other evidence of immunity to varicella (see Evidence of Immunity).

# Persons Aged ≥13 Years

Persons aged ≥13 years without evidence of varicella immunity should receive two 0.5-mL doses of single-antigen varicella vaccine administered subcutaneously, 4–8 weeks apart. If >8 weeks elapse after the first dose, the second dose may be administered without restarting the schedule. Only single-antigen varicella vaccine may be used for vaccination of persons in this age group. MMRV is not licensed for use among persons aged ≥13 years.

# School-Aged Children, College Students, and Students in Other Postsecondary Educational Institutions

All students should be assessed for varicella immunity, and those without evidence of immunity should routinely receive 2 doses of single-antigen varicella vaccine 4–8 weeks apart. The risk for transmission of varicella among school-aged children, college students, and students in other postsecondary educational institutions can be high because of high contact rates.

# Other Adults

All healthy adults should be assessed for varicella immunity, and those who do not have evidence of immunity should receive 2 doses of single-antigen varicella vaccine 4–8 weeks apart. Adults who might be at increased risk for exposure or transmission and who do not have evidence of immunity should receive special consideration for vaccination, including 1) HCP, 2) household contacts of immunocompromised persons, 3) persons who live or work in environments in which transmission of VZV is likely (e.g., teachers, day-care employees, residents and staff in institutional settings), 4) persons who live or work in environments in which transmission has been reported (e.g., college students, inmates and staff members of correctional institutions, and military personnel), 5) nonpregnant women of childbearing age, 6) adolescents and adults living in households with children, and 7) international travelers.

# Second Dose Catch-Up Vaccination

To improve individual protection against varicella and to have a more rapid impact on school outbreaks, second-dose catch-up varicella vaccination is recommended for children, adolescents, and adults who previously received 1 dose. The recommended minimum interval between the first dose and the catch-up second dose is 3 months for children aged <13 years and 4 weeks for persons aged ≥13 years. However, the catch-up second dose may be administered at any interval longer than the minimum recommended interval. Catch-up vaccination may be implemented during routine health-care provider visits and through school- and college-entry requirements.
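The age-dependent interval rules above reduce to a small decision procedure. The sketch below encodes them as stated in this section; it is an illustration only (the function names are invented, and 3 months is approximated as 90 days):

```python
def second_dose_valid(age_years, interval_days):
    """Does a second varicella dose count as valid, per the rules above?

    For children aged 12 months-12 years, a 3-month interval is recommended,
    but a second dose given >=28 days after the first is still considered
    valid and need not be repeated. For persons aged >=13 years, doses are
    given 4-8 weeks apart, and a longer interval does not require restarting
    the schedule, so >=28 days remains the operative minimum.
    """
    if age_years < 1:
        raise ValueError("varicella vaccine is recommended from age 12 months")
    return interval_days >= 28

def second_dose_on_schedule(age_years, interval_days):
    """Does the interval also meet the recommended minimum (not merely valid)?"""
    minimum_days = 90 if age_years < 13 else 28  # 3 months vs. 4 weeks
    return interval_days >= minimum_days

# Example: a child given doses at age 12 and 13.5 months (~45-day interval).
print(second_dose_valid(1.1, 45))        # True  -- valid; need not repeat
print(second_dose_on_schedule(1.1, 45))  # False -- shorter than 3 months
```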
As part of comprehensive health services for all adolescents, ACIP, AAP, and AAFP recommend a health maintenance visit at age 11–12 years. This visit also should serve as an immunization visit to evaluate vaccination status and administer necessary vaccinations (158). Physicians should use this and other routine visits to ensure that all children without evidence of varicella immunity have received 2 doses of varicella vaccine.

# Requirements for Entry to Child Care, School, College, and Other Postsecondary Educational Institutions

Child care and school entry requirements for varicella immunity have been recommended previously (2). In 2005, ACIP recommended expanding the requirements to cover students in all grade levels. Official health agencies should take necessary steps, including developing and enforcing school immunization requirements, to ensure that students at all grade levels (including college) and children in child care centers are protected against varicella and other vaccine-preventable diseases (157).

# Prenatal Assessment and Postpartum Vaccination

Prenatal assessment of women for evidence of varicella immunity is recommended. Birth before 1980 is not considered evidence of immunity for pregnant women because of the potential severe consequences of varicella infection during pregnancy, including infection of the fetus. Upon completion or termination of their pregnancies, women who do not have evidence of varicella immunity should receive the first dose of vaccine before discharge from the health-care facility. The second dose should be administered 4–8 weeks later; for women who gave birth, this timing coincides with the postpartum visit (6–8 weeks after delivery). Women should be counseled to avoid conception for 1 month after each dose of varicella vaccine. Health-care settings in which completion or termination of pregnancy occurs should use standing orders to ensure the administration of varicella vaccine to women without evidence of immunity.

# Special Considerations for Vaccination

# Vaccination of HIV-Infected Persons

HIV-infected children with CD4+ T-lymphocyte percentage ≥15% should be considered for vaccination with the single-antigen varicella vaccine. Varicella vaccine was recommended previously for HIV-infected children in CDC clinical and immunologic categories N1 and A1 with age-specific CD4+ T-lymphocyte percentage ≥25% (2). Limited data from a clinical trial in which 2 doses of single-antigen varicella vaccine were administered 3 months apart to 37 HIV-infected children (CDC clinical categories N, A, or B and immunologic category 2) aged 1–8 years indicated that the vaccine was well-tolerated and that >80% of subjects had a detectable VZV-specific immune response (either antibody or cell-mediated immune response or both) at 1 year after immunization (159). These children were no less likely to have an antibody response to the varicella vaccine than were subjects who were less affected immunologically by HIV infection. Because children infected with HIV are at increased risk for morbidity from varicella and HZ (i.e., shingles) compared with healthy children, ACIP recommends that, after weighing potential risks and benefits, single-antigen varicella vaccine should be considered for HIV-infected children with CD4+ T-lymphocyte percentages ≥15%. Eligible children should receive 2 doses of single-antigen varicella vaccine 3 months apart.
Because persons with impaired cellular immunity are potentially at greater risk for complications after vaccination with a live vaccine, these vaccine recipients should be encouraged to return for evaluation if they experience a postvaccination varicella-like rash. Because data are not available regarding the safety, immunogenicity, or efficacy of MMRV vaccine in HIV-infected children, MMRV vaccine should not be administered as a substitute for the single-antigen varicella vaccine when vaccinating these children. The titer of Oka/Merck VZV is higher in combination MMRV vaccine than in single-antigen varicella vaccine. Recommendations for vaccination of HIV-infected children with measles, mumps, or rubella vaccines have been published previously (160).

Data on use of varicella vaccine in HIV-infected adolescents and adults are lacking. However, on the basis of expert opinion, the safety of varicella vaccine in HIV-infected persons aged >8 years with comparable levels of immune function (CD4+ T-lymphocyte count ≥200 cells/µL) is likely to be similar to that of younger children. Therefore, single-antigen varicella vaccine may be considered for HIV-infected persons with CD4+ T-lymphocyte counts ≥200 cells/µL in these age groups. If vaccination of HIV-infected persons results in clinical disease, the use of acyclovir might modify the severity of disease.

# Situations in Which Some Degree of Immunodeficiency Might Be Present

Persons with impaired humoral immunity may be vaccinated. No data have been published concerning whether persons without evidence of immunity receiving only inhaled, nasal, or topical doses of steroids can be vaccinated safely. However, clinical experience suggests that vaccination is well-tolerated among these persons. Persons without evidence of immunity who are receiving systemic steroids for certain conditions (e.g., asthma) and who are not otherwise immunocompromised may be vaccinated if they are receiving <2 mg/kg of body weight or a total of <20 mg/day of prednisone or its equivalent. Persons receiving higher doses of systemic steroids (i.e., ≥2 mg/kg prednisone) for ≥2 weeks may be vaccinated once steroid therapy has been discontinued for ≥1 month, in accordance with the general recommendations for the use of live-virus vaccines (157).

Vaccination of leukemic children who are in remission and who do not have evidence of immunity to varicella should be undertaken only with expert guidance and with the availability of antiviral therapy should complications ensue. Patients with leukemia, lymphoma, or other malignancies whose disease is in remission and whose chemotherapy has been terminated for at least 3 months can receive live-virus vaccines (157). When immunizing persons in whom some degree of immunodeficiency might be present, only single-antigen varicella vaccine should be used.

# Vaccination of Household Contacts of Immunocompromised Persons

Immunocompromised persons are at high risk for serious varicella infections. Severe disease occurs in approximately 30% of such persons with primary infection. Because varicella vaccine now is recommended for all healthy children and adults without evidence of immunity, household contacts of immunocompromised persons should be vaccinated routinely. Although the risk for exposure to wild VZV for immunocompromised persons now is lower than it was previously, vaccine should be offered to child and adult household contacts of immunocompromised persons who lack evidence of immunity. Vaccination of household contacts provides protection for immunocompromised persons by decreasing the likelihood that wild-type VZV will be introduced into the household.
Vaccination of household contacts of immunocompromised persons theoretically might pose a minimal risk for transmission of vaccine virus to immunocompromised persons, although in one study, no evidence of transmission of vaccine virus was demonstrated after vaccination of 37 healthy siblings of 30 children with malignancy (155). No cases have been documented of transmission of vaccine virus to immunocompromised persons in the postlicensure period in the United States, with >55 million doses of vaccine distributed. Other data indicate that disease caused by vaccine virus in immunocompromised persons is milder than wild-type disease and can be treated with acyclovir (148,159). The benefits of vaccinating susceptible household contacts of immunocompromised persons outweigh the extremely low potential risk for transmission of vaccine virus to immunocompromised contacts. Vaccine recipients in whom a vaccine-related rash occurs, particularly HCP and household contacts of immunocompromised persons, should avoid contact with susceptible persons who are at high risk for severe complications. If a susceptible, immunocompromised person is inadvertently exposed to a person who has a vaccine-related rash, postexposure prophylaxis with VZIG is not needed because disease associated with this type of virus is expected to be mild.

# Nursing Mothers

Postpartum vaccination of women without evidence of immunity need not be delayed because of breastfeeding. Women who have received varicella vaccination postpartum may continue to breastfeed. The majority of live vaccines are not associated with virus secretion in breast milk (157). A study involving 12 women who received single-antigen varicella vaccine while breastfeeding indicated no evidence of VZV DNA either in 217 breast milk samples collected or in infants tested after both vaccine doses (162). No infants seroconverted. Another study did not detect varicella gene sequences in postvaccination breast milk samples (163). Therefore, single-antigen varicella vaccine should be administered to nursing mothers without evidence of immunity. Combination MMRV vaccine is not licensed for use among persons aged ≥13 years.

# Health-Care Personnel

Nosocomial transmission of VZV is well-recognized (131,164–173), and guidelines for the prevention of nosocomial VZV infection and for infection control in HCP have been published (174,175). Sources of nosocomial exposure have included patients, hospital staff, and visitors (e.g., the children of hospital employees) who are infected with varicella or HZ. In hospitals, airborne transmission of VZV has been demonstrated when varicella has occurred in susceptible persons who had no direct contact with the index case-patient (176–180).

To prevent disease and nosocomial spread of VZV, health-care institutions should ensure that all HCP have evidence of immunity to varicella. Birth before 1980 is not considered evidence of immunity for HCP because of the possibility of nosocomial transmission to high-risk patients. In health-care institutions, serologic screening before vaccination of personnel who have a negative or uncertain history of varicella and who are unvaccinated is likely to be cost effective. Institutions may elect to test all HCP regardless of disease history because a small proportion of persons with a positive history of disease might be susceptible.
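The cost-effectiveness claim for prevaccination screening reduces to an expected-cost comparison: testing pays for itself when the vaccination costs avoided among personnel found immune exceed the cost of testing everyone. The sketch below illustrates this with entirely hypothetical prices and seroprevalence; no dollar values appear in the source.

```python
def screening_saves(p_immune, cost_test, cost_series):
    """Compare expected per-person cost of screen-then-vaccinate vs. vaccinate-all.

    p_immune:    fraction of history-negative HCP who are actually seropositive
    cost_test:   price of the serologic test (hypothetical)
    cost_series: price of the 2-dose vaccine series (hypothetical)
    """
    screen_path = cost_test + (1.0 - p_immune) * cost_series
    return screen_path < cost_series, screen_path, cost_series

# Illustrative inputs only: a $20 test, a $120 two-dose series, and 70% of
# history-negative adults assumed to be seropositive.
saves, screen_cost, vaccinate_cost = screening_saves(0.70, 20.0, 120.0)
print(saves, screen_cost, vaccinate_cost)  # True 56.0 120.0
```

Whether testing all HCP regardless of history also pays off depends, in the same way, on the higher seroprevalence expected among those with a positive disease history.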
Routine testing for varicella immunity after 2 doses of vaccine is not recommended for the management of vaccinated HCP. Available commercial assays are not sensitive enough to detect antibody after vaccination in all instances. Sensitive tests have indicated that 99% of adults develop antibodies after the second dose. However, seroconversion does not always result in full protection against disease, and no data regarding correlates of protection are available for adults.

HCP who have received 2 doses of vaccine and who are exposed to VZV should be monitored daily during days 10–21 after exposure through the employee health program or by an infection-control nurse to determine clinical status (i.e., daily screening for fever, skin lesions, and systemic symptoms). Persons with varicella might be infectious up to 2 days before rash onset. In addition, HCP should be instructed to report fever, headache, or other constitutional symptoms and any atypical skin lesions immediately. HCP should be placed on sick leave immediately if symptoms occur. Health-care institutions should establish protocols and recommendations for screening and vaccinating HCP and for management of HCP after exposures in the workplace.

HCP who have received 1 dose of vaccine and who are exposed to VZV should receive the second dose of single-antigen varicella vaccine within 3–5 days after exposure to rash (provided ≥4 weeks have elapsed after the first dose). After vaccination, management is similar to that of 2-dose vaccine recipients.

Unvaccinated HCP who have no other evidence of immunity and who are exposed to VZV are potentially infective from days 10–21 after exposure and should be furloughed during this period. They should receive postexposure vaccination as soon as possible. Vaccination within 3–5 days of exposure to rash might modify the disease if infection occurred. Vaccination >5 days postexposure still is indicated because it induces protection against subsequent exposures (if the current exposure did not cause infection).

The risk for transmission of vaccine virus from vaccine recipients in whom varicella-like rash occurs after vaccination is low; transmission has been documented after exposures in households and long-term care facilities (140,146–148). No cases have been documented after vaccination of HCP. The benefits of vaccinating HCP without evidence of immunity outweigh this extremely low potential risk. As a safeguard, institutions should consider precautions for personnel in whom rash occurs after vaccination. HCP in whom a vaccine-related rash occurs should avoid contact with persons without evidence of immunity who are at risk for severe disease and complications until all lesions resolve (i.e., are crusted over or fade away) or no new lesions appear within a 24-hour period.

# Varicella IgG Antibody Testing

The tests most widely used to detect varicella IgG antibody after natural varicella infection among HCP are latex agglutination (LA) and ELISA. A commercially available LA test using latex particles coated with VZV glycoprotein antigens can be completed in 15 minutes and does not require special equipment (181). The sensitivity and specificity of the LA test are comparable to those of FAMA in detecting antibody response after natural varicella infection. The LA test generally is more sensitive than commercial ELISAs. The LA test has detected antibody for up to 11 years after varicella vaccination (182).
However, for the purpose of screening HCP for varicella susceptibility, a less sensitive and more specific commercial ELISA should be considered. A recent report indicated that the LA test can produce false-positive results, particularly when only a single concentration of serum is evaluated (183); this led to documented cases of false-positive results in HCP who consequently remained unvaccinated and subsequently contracted varicella.

# Vaccination for Outbreak Control

Varicella vaccination is recommended for outbreak control. Persons who do not have adequate evidence of immunity should receive their first or second dose as appropriate. Additionally, in outbreaks among preschool-aged children, 2-dose vaccination is recommended for optimal protection, and children vaccinated with 1 dose should receive their second dose, provided 3 months have elapsed since the first dose. State and local health departments may advise exposed persons who do not have evidence of immunity to contact their health-care providers for vaccination, or they may offer vaccination through the health department or school (or other institution) vaccination clinics. Although outbreak control efforts optimally should be implemented as soon as an outbreak is identified, vaccination should be offered even if the outbreak is identified late. Varicella outbreaks in certain settings (e.g., child care facilities, schools, or institutions) can last as long as 4–5 months. Thus, offering vaccine during an outbreak might provide protection to persons not yet exposed and shorten the duration of the outbreak (184). Persons receiving either their first or second dose as part of the outbreak control program may be readmitted to school immediately. Those vaccinated with the first dose as part of outbreak control measures should be scheduled for the second dose as age appropriate. Persons who are unvaccinated and without other evidence of immunity who do not receive vaccine should be excluded from institutions in which the outbreak is occurring until 21 days after the onset of rash in the last case of varicella. In addition, for school-aged persons covered by the 2-dose school vaccination requirements, exclusion during an outbreak is recommended for those vaccine recipients who had received the first dose before the outbreak but did not receive the second as part of the outbreak control program. Persons at increased risk for severe varicella who have contraindications to vaccination should receive VZIG within 96 hours of exposure.

# Contraindications

# General

Adequate treatment provisions for anaphylactic reactions, including epinephrine injection (1:1000), should be available for immediate use should an anaphylactic reaction occur. Before administering a vaccine, health-care providers should obtain the vaccine recipient's vaccination history and determine whether the individual had any previous reactions to any vaccine, including VARIVAX, ProQuad, or any measles-, mumps-, or rubella-containing vaccine.

# Allergy to Vaccine Components

The administration of live varicella-containing vaccines rarely results in hypersensitivity. The information in the package insert should be reviewed carefully before vaccine is administered. Vaccination is contraindicated for persons who have a history of anaphylactic reaction to any component of the vaccine, including gelatin. Single-antigen varicella vaccine does not contain preservatives or egg protein; these substances have caused hypersensitivity reactions to other vaccines.
For the combination MMRV vaccine, live measles and live mumps vaccines are produced in chick embryo culture. However, among persons who are allergic to eggs, the risk for serious allergic reactions after administration of measles- or mumps-containing vaccines is low. Because skin testing with vaccine is not predictive of allergic reaction to vaccination, skin testing is not required before administering combination MMRV vaccine to persons who are allergic to eggs (160). The majority of anaphylactic reactions to measles- and mumps-containing vaccines are associated not with hypersensitivity to egg antigens but with other vaccine components. Neither single-antigen varicella nor combination MMRV vaccines should be administered to persons who have a history of anaphylactic reaction to neomycin. However, neomycin allergy usually is manifested as a contact dermatitis, which is a delayed-type immune response rather than anaphylaxis. For persons who experience such a response, the adverse reaction, if any, would appear as an erythematous, pruritic nodule or papule present 48-96 hours after vaccination. A history of contact dermatitis to neomycin is not a contraindication to receiving varicella vaccines.

# Altered Immunity

Varicella vaccines should not be administered to persons receiving high-dose systemic immunosuppressive therapy, including persons on oral steroids at ≥2 mg/kg of body weight or a total of ≥20 mg/day of prednisone or its equivalent for persons who weigh ≥10 kg, when administered for ≥2 weeks. Such persons are more susceptible to infections than healthy persons. Administration of varicella vaccines can result in a more extensive vaccine-associated rash or disseminated disease in persons receiving immunosuppressive doses of corticosteroids (185). This contraindication does not apply to persons who are receiving inhaled, nasal, or topical corticosteroids or low-dose corticosteroids as are used commonly for asthma prophylaxis or for corticosteroid-replacement therapy (see Situations in Which Some Degree of Immunodeficiency Might Be Present).

# Pregnancy

Because the effects of the varicella virus vaccine on the fetus are unknown, pregnant women should not be vaccinated. Nonpregnant women who are vaccinated should avoid becoming pregnant for 1 month after each injection. For persons without evidence of immunity, having a pregnant household member is not a contraindication to vaccination. If a pregnant woman is vaccinated or becomes pregnant within 1 month of vaccination, she should be counseled about potential effects on the fetus. Wild-type varicella poses a low risk to the fetus (see Prenatal and Perinatal Exposure). Because the virulence of the attenuated virus used in the vaccine is less than that of the wild-type virus, the risk to the fetus, if any, should be even lower. In 1995, Merck and Co., Inc., in collaboration with CDC, established the VARIVAX Pregnancy Registry to monitor the maternal-fetal outcomes of pregnant women who were inadvertently administered varicella vaccine within 3 months before or at any time during pregnancy (to report, call 1-800-986-8999) (186). During the first 10 years of the pregnancy registry, no cases of congenital varicella syndrome or birth defects compatible with congenital varicella syndrome were documented (187,188).
Among 131 live-born infants of prospectively reported seronegative women (82 of whom were born to mothers vaccinated during the highest-risk period), no birth defects consistent with congenital varicella syndrome have been documented (prevalence rate = 0; CI = 0%-6.7%), and three major birth defects were reported (prevalence rate = 2.3%; CI = 0.5%-6.7%). The rate of occurrence of major birth defects from prospective reports in the registry was similar to the rate reported in the general U.S. population (3.2%), and the defects indicated no specific pattern or target organ. Although the study results do not exclude the possibility of risk for women who received inadvertent varicella vaccination before or during pregnancy, the potential risk, if any, is low.

# Precautions

# Illness

Vaccination of persons who have acute severe illness, including untreated, active tuberculosis, should be postponed until recovery. The decision to delay vaccination depends on the severity of symptoms and on the etiology of the disease. No data are available regarding whether either single-antigen varicella or combination MMRV vaccines exacerbate tuberculosis. Live attenuated measles, mumps, and rubella virus vaccines administered individually might result in a temporary depression of tuberculin skin sensitivity. Therefore, if a tuberculin test is to be performed, it should be administered either any time before, simultaneously with, or at least 4-6 weeks after combination MMRV vaccine. However, tuberculin skin testing is not a prerequisite for vaccination with single-antigen varicella or combination MMRV vaccines. Varicella vaccines may be administered to children without evidence of immunity who have mild illnesses, with or without low-grade fever (e.g., diarrhea or upper-respiratory infection) (189). Physicians should be alert to the vaccine-associated temperature elevations that might occur, predominantly in the second week after vaccination, especially with combination MMRV vaccine. Studies suggest that failure to vaccinate children with minor illnesses can impede vaccination efforts (190).

# Thrombocytopenia

# Recent Administration of Blood, Plasma, or Immune Globulin

Although passively acquired antibody is known to interfere with response to measles and rubella vaccines (191), the effect of the administration of immune globulin (IG) on the response to varicella virus vaccine is unknown. The duration of interference with the response to measles vaccination is dose-related and ranges from 3-11 months. Because of the potential inhibition of the response to varicella vaccination by passively transferred antibodies, varicella vaccines should not be administered for the same intervals as measles vaccine (3-11 months, depending on the dosage) after administration of blood (except washed red blood cells), plasma, or IG. Suggested intervals between administration of antibody-containing products for different indications and varicella vaccine have been published previously (157). In addition, persons who received a varicella vaccine should not be administered an antibody-containing product for 2 weeks after vaccination unless the benefits exceed those of vaccination. In such cases, the vaccine recipient should either be revaccinated or tested for immunity at the appropriate intervals, depending on the dose received, and then revaccinated if seronegative.
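The pregnancy-registry estimates cited earlier in this section pair each observed proportion with an exact binomial confidence interval. As an illustrative aid only (the published analyses' exact method and denominators are not restated here, so the output only approximates the quoted figures), the following Python sketch computes a Clopper-Pearson exact interval:

```python
from scipy.stats import beta

def clopper_pearson(k: int, n: int, alpha: float = 0.05):
    """Exact (Clopper-Pearson) two-sided CI for a binomial proportion k/n."""
    lower = 0.0 if k == 0 else beta.ppf(alpha / 2, k, n - k + 1)
    upper = 1.0 if k == n else beta.ppf(1 - alpha / 2, k + 1, n - k)
    return lower, upper

# Illustration: 3 major birth defects among 131 prospectively reported
# live births (the registry's exact denominators may differ).
lo, hi = clopper_pearson(3, 131)
print(f"prevalence = {3/131:.1%}, 95% CI = {lo:.1%} to {hi:.1%}")
# -> prevalence = 2.3%, with an interval of the same order as the
#    0.5%-6.7% quoted above.
```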
# Use of Salicylates

No adverse events associated with the use of salicylates after varicella vaccination have been reported; however, the vaccine manufacturer recommends that vaccine recipients avoid using salicylates for 6 weeks after receiving varicella vaccines because of the association between aspirin use and Reye syndrome after varicella. Vaccination with subsequent close monitoring should be considered for children who have rheumatoid arthritis or other conditions requiring therapeutic aspirin. The risk for serious complications associated with aspirin is likely to be greater in children in whom natural varicella develops than it is in children who receive the vaccine containing attenuated VZV. No association has been documented between Reye syndrome and analgesics or antipyretics that do not contain salicylic acid.

# Postexposure Prophylaxis

# Healthy Persons

Prelicensure data from the United States and Japan on varicella exposures in children from household, hospital, and community settings indicate that single-antigen varicella vaccine is effective in preventing illness or modifying varicella severity if administered to unvaccinated children within 3 days, and possibly up to 5 days, of exposure to rash (78,101,192). Vaccination within 3 days of exposure to rash was >90% effective in preventing varicella, whereas vaccination within 5 days of exposure to rash was approximately 70% effective in preventing varicella and 100% effective in modifying severe disease (101,192). Limited postlicensure studies also have demonstrated that varicella vaccine is highly effective in either preventing or modifying disease if administered within 3 days of exposure (193,194). Varicella vaccine is recommended for postexposure administration for unvaccinated persons without other evidence of immunity. If exposure to VZV does not cause infection, postexposure vaccination should induce protection against subsequent exposures. If the exposure results in infection, no evidence indicates that administration of single-antigen varicella vaccine during the presymptomatic or prodromal stage of illness increases the risk for vaccine-associated adverse events. No data are available regarding the potential benefit of administering a second dose to 1-dose vaccine recipients after exposure. However, administration of a second dose should be considered for persons who have previously received 1 dose to bring them up-to-date. Studies on postexposure use of varicella vaccine have been conducted exclusively in children, and a higher proportion of adults than children do not respond to the first dose of varicella vaccine. Nevertheless, postexposure vaccination should be offered to adults without evidence of immunity. Although postexposure use of varicella vaccine has potential applications in hospital settings, vaccination is recommended routinely for all HCP without evidence of immunity and is the preferred method for preventing varicella in health-care settings (195). Preferably, HCP should be vaccinated when they begin employment. No data are available on the use of combination MMRV vaccine for postexposure prophylaxis.

# Persons Without Evidence of Immunity

VZIG provides passive immunization and can prevent or modify clinical illness in exposed persons without evidence of immunity (197,198). In a study of immunocompromised children who were administered VZIG within 96 hours of exposure, approximately one in five exposures resulted in clinical varicella, and one in 20 resulted in subclinical disease (198).
The severity of clinical varicella (evaluated by percentage of patients with >100 lesions or complications) was lower than expected on the basis of historic controls. The VZIG product currently used in the United States, VariZIG™ (Cangene Corporation, Winnipeg, Canada), is available under an Investigational New Drug Application Expanded Access protocol (available at /cber/infosheets/mphvzig020806.htm). A request for licensure in the United States might be submitted to FDA in the future. VariZIG is a lyophilized preparation which, when properly reconstituted, is approximately a 5% solution of IgG that can be administered intramuscularly (199). VariZIG can be obtained 24 hours a day from the sole authorized U.S. distributor (FFF Enterprises, Temecula, California) at 1-800-843-7477 or online.

# Administration of VZIG

VZIG provides maximum benefit when administered as soon as possible after exposure, but it might be effective if administered as late as 96 hours after exposure. The effectiveness of VZIG when administered >96 hours after initial exposure has not been evaluated. The duration of protection provided after administration of VZIG is unknown, but protection should last at least one half-life of the IG (i.e., approximately 3 weeks). Susceptible persons at high risk for whom varicella vaccination is contraindicated and who are again exposed >3 weeks after receiving a dose of VZIG should receive another full dose of VZIG. Patients receiving monthly high-dose immune globulin intravenous (IGIV) (≥400 mg/kg) are likely to be protected and probably do not require VZIG if the last dose of IGIV was administered <3 weeks before exposure. This should be taken into account when VZIG administration is considered after exposures.

# Dosage

VariZIG is supplied in 125-U vials. The recommended dose is 125 U per 10 kg of body weight, up to a maximum of 625 U (five vials). The minimum dose is 125 U. The human IgG content is 60-200 mg per 125-U dose of VariZIG.

# Indications for the Use of VZIG for Postexposure Prophylaxis

The decision to administer VZIG depends on three factors: 1) whether the patient lacks evidence of immunity, 2) whether the exposure is likely to result in infection, and 3) whether the patient is at greater risk for complications than the general population. Both healthy and immunocompromised children and adults who have verified positive histories of varicella (except for bone-marrow transplant recipients) may be considered immune (see Evidence of Immunity). The association between positive histories of varicella in bone-marrow donors and susceptibility to varicella in recipients after transplants has not been studied adequately. Thus, persons who receive bone-marrow transplants should be considered nonimmune, regardless of previous history of varicella disease or varicella vaccination in themselves or in their donors. Bone-marrow recipients in whom varicella or HZ develops after transplantation should subsequently be considered immune. VZIG is not indicated for persons who received 2 doses of varicella vaccine and became immunocompromised as a result of disease or treatment later in life. These persons should be monitored closely; if disease occurs, treatment with acyclovir should be instituted at the earliest signs or symptoms. For patients without evidence of immunity who are on steroid therapy at doses ≥2 mg/kg of body weight or a total of ≥20 mg/day of prednisone or its equivalent, VariZIG is indicated.
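As a minimal illustration of the weight-based dosing rule described under Dosage (125 U per 10 kg of body weight, minimum 125 U, maximum 625 U, supplied in 125-U vials), the following Python sketch computes a dose. Rounding up to the next whole vial is an assumption; the text does not specify how partial 10-kg increments are handled.

```python
import math

VIAL_UNITS = 125   # VariZIG is supplied in 125-U vials
MIN_UNITS = 125    # minimum dose
MAX_UNITS = 625    # maximum dose (five vials)

def varizig_dose_units(weight_kg: float) -> int:
    """Weight-based dose: 125 U per 10 kg of body weight, bounded by the
    minimum and maximum doses given in the text. Rounding up to the next
    whole vial is an assumption, not stated in the source."""
    vials = math.ceil(weight_kg / 10)
    return max(MIN_UNITS, min(MAX_UNITS, vials * VIAL_UNITS))

for weight in (8, 23, 40, 70):
    units = varizig_dose_units(weight)
    print(f"{weight} kg -> {units} U ({units // VIAL_UNITS} vials)")
# 8 kg -> 125 U (1 vial); 23 kg -> 375 U (3 vials); 70 kg -> 625 U (5 vials)
```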
# Types of Exposure

Certain types of exposure can place persons without evidence of immunity at risk for varicella. Direct contact exposure is defined as face-to-face contact with an infectious person while indoors. The duration of face-to-face contact that warrants administration of VZIG is not certain. However, the contact should not be transient. Certain experts suggest a contact of >5 minutes as constituting significant exposure for this purpose, whereas others define close contact as >1 hour (200). Substantial exposure for hospital contacts consists of sharing the same hospital room with an infectious patient or direct, face-to-face contact with an infectious person (e.g., HCP). Brief contacts with an infectious person (e.g., contact with x-ray technicians or housekeeping personnel) are less likely than more prolonged contacts to result in VZV transmission. Persons with continuous exposure to household members who have varicella or disseminated HZ are at greatest risk for infection. Varicella occurs in approximately 85% (range: 65%-100%) of susceptible household contacts exposed to VZV. Localized HZ is much less infectious than varicella or disseminated HZ (52). Transmission from localized HZ is more likely after close contact, such as in household settings. Physicians may consider recommending postexposure prophylaxis with VZIG in such circumstances. After household exposure to varicella, attack rates among immunocompromised children who were administered VZIG were up to 60% (197). No comparative data are available for immunocompromised children without evidence of immunity who were not administered VZIG. However, the incidence of severe disease (defined as >100 skin lesions) was less than that predicted from the natural history of disease in normal children (27% and 87%, respectively), and the incidence of pneumonia was less than that described in children with neoplasms (6% and 25%, respectively) (201). The risk for varicella after close contact (e.g., contact with playmates) or hospital exposure is estimated to be approximately 20% of the risk from household exposure. The attack rate in healthy neonates who were exposed in utero within 7 days of delivery and who received VZIG after birth was 62%, which does not differ substantially from rates reported for neonates who were similarly exposed but not treated with VZIG (49). However, the occurrence of complications and fatal outcomes was substantially lower for neonates who were treated with VZIG than for those who were not. In a study of pregnant women without immunity to VZV who were exposed to varicella and administered VZIG, the infection rate was 30%, substantially lower than the expected rate of >70% in unimmunized women exposed to varicella (199,202).

# Recommendations for the Use of VZIG

The following patient groups are at risk for severe disease and complications from varicella and should receive VZIG:

Immunocompromised patients. VZIG is used primarily for passive immunization of immunocompromised persons without evidence of immunity after direct exposure to patients with varicella or disseminated HZ, including persons who 1) have primary and acquired immune-deficiency disorders, 2) have neoplastic diseases, and 3) are receiving immunosuppressive treatment. Patients receiving monthly high-dose IGIV (≥400 mg/kg) are likely to be protected and probably do not require VZIG if the last dose of IGIV was administered <3 weeks before exposure (200).
Neonates whose mothers have signs and symptoms of varicella around the time of delivery. VZIG is indicated for neonates whose mothers have signs and symptoms of varicella from 5 days before to 2 days after delivery. VZIG is not necessary for neonates whose mothers have signs and symptoms of varicella more than 5 days before delivery, because those infants should be protected from severe varicella by transplacentally acquired maternal antibody. No evidence suggests that infants born to mothers in whom varicella occurs >48 hours after delivery are at increased risk for serious complications (e.g., pneumonia or death).

Premature neonates exposed postnatally. Transmission of varicella in the hospital nursery is rare because the majority of neonates are protected by maternal antibody. Premature infants who have substantial postnatal exposure should be evaluated on an individual basis. The risk for complications of postnatally acquired varicella in premature infants is unknown. However, because the immune system of premature infants is not fully developed, administration of VZIG is indicated for premature infants born at ≥28 weeks of gestation who are exposed during the neonatal period and whose mothers do not have evidence of immunity. Premature infants born at ≥28 weeks of gestation to immune mothers have enough acquired maternal antibody to protect them from severe disease and complications. Although infants are at higher risk than older children for serious and fatal complications, the risk for healthy, full-term infants who have varicella after postnatal exposure is substantially less than that for infants whose mothers were infected 5 days before to 2 days after delivery. VZIG is not recommended for healthy, full-term infants who are exposed postnatally, even if their mothers have no history of varicella infection.

Pregnant women. Because pregnant women might be at higher risk for severe varicella and complications (37,42,203), VZIG should be strongly considered for pregnant women without evidence of immunity who have been exposed. Administration of VZIG to these women has not been found to prevent viremia, fetal infection, congenital varicella syndrome, or neonatal varicella. Thus, the primary indication for VZIG in pregnant women is to prevent complications of varicella in the mother rather than to protect the fetus. Neonates born to mothers who have signs and symptoms of varicella from 5 days before to 2 days after delivery should receive VZIG, regardless of whether the mother received VZIG.

# Interval Between Administration of VZIG and Varicella Vaccine

Any patient who receives VZIG to prevent varicella should subsequently receive varicella vaccine, provided the vaccine is not contraindicated. Varicella vaccination should be delayed until 5 months after VZIG administration. Varicella vaccine is not needed if the patient has varicella after administration of VZIG.

# Antiviral Therapy

Because VZIG might prolong the incubation period by >1 week, any patient who receives VZIG should be observed closely for signs or symptoms of varicella for 28 days after exposure. Antiviral therapy should be instituted immediately if signs or symptoms of varicella disease occur. The route and duration of antiviral therapy should be determined by specific host factors, extent of infection, and initial response to therapy.
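The preceding sections specify several time windows: maternal rash onset from 5 days before to 2 days after delivery as the trigger for neonatal VZIG, administration within 96 hours of exposure, and 28 days of observation afterward. The following Python sketch encodes these windows; it is illustrative only, with hypothetical function names, and it omits the clinical judgment factors discussed above (e.g., prematurity and maternal evidence of immunity).

```python
from datetime import date, timedelta

def neonate_vzig_indicated(rash_onset_day: int) -> bool:
    """VZIG indication for a term neonate, given the day of maternal
    varicella rash onset relative to delivery (negative = days before,
    positive = days after). Per the text, the window is 5 days before
    through 2 days after delivery; earlier onset allows transplacental
    antibody to protect the infant."""
    return -5 <= rash_onset_day <= 2

def vzig_dates(exposure: date) -> dict:
    """Key management dates after a recognized exposure, per the text:
    VZIG provides maximum benefit as soon as possible and might be
    effective as late as 96 hours (4 days) after exposure; recipients
    should be observed for 28 days because VZIG might prolong the
    incubation period."""
    return {
        "administer_by": exposure + timedelta(days=4),    # 96 hours
        "observe_through": exposure + timedelta(days=28),
    }

print(neonate_vzig_indicated(-3))   # True: onset 3 days before delivery
print(neonate_vzig_indicated(-7))   # False: maternal antibody protects
print(vzig_dates(date(2007, 6, 1)))
```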
Information regarding how to obtain VariZIG is available at http://www.cdc.gov/mmwr/preview/mmwrhtml/mm5508a5.htm (204).
Two live, attenuated varicella zoster virus-containing vaccines are available in the United States for prevention of varicella: 1) a single-antigen varicella vaccine (VARIVAX,® Merck & Co., Inc., Whitehouse Station, New Jersey) and 2) a combination measles, mumps, rubella, and varicella (MMRV) vaccine (ProQuad,® Merck & Co., Inc., Whitehouse Station, New Jersey).

# Introduction

Varicella is a highly infectious disease caused by the varicella-zoster virus (VZV). Secondary attack rates for this virus might reach 90% for susceptible household contacts. VZV causes a systemic infection that results typically in lifetime immunity. In otherwise healthy persons, clinical illness after reexposure is rare. In 1995, a vaccine to prevent varicella (VARIVAX,® Merck & Co., Inc., Whitehouse Station, New Jersey) was licensed in the United States for use among healthy children aged ≥12 months, adolescents, and adults; recommendations of the Advisory Committee on Immunization Practices (ACIP) for its use were issued at that time (1) and updated in 1999 (2).

# Methods

In response to increasing reports of varicella outbreaks among highly vaccinated populations (3-6), ACIP's measles-mumps-rubella and varicella (MMRV) workgroup first met in February 2004 to review data related to varicella vaccine use in the United States since implementation of the vaccination program in 1995 and to consider recommendation options for improving control of varicella disease. The workgroup held monthly conference calls and met in person three times a year. The workgroup reviewed data on the impact of the 1-dose varicella vaccination program, including data on vaccination coverage, changes in varicella epidemiology, transmission from vaccinated persons with varicella, vaccine effectiveness, immune response to vaccination, evidence of immunity, and potential risk factors for vaccine failure. Published and unpublished data related to correlates of protection, safety, immunogenicity, and efficacy† of the new quadrivalent MMRV vaccine and the immunogenicity and efficacy of a second dose of varicella vaccine also were reviewed. Cost-benefit and cost-effectiveness analyses were considered, including revised cost-benefit analysis of both the 1- and 2-dose programs for children compared with no vaccination program and the incremental benefit of a second dose.

# Epidemiology of Varicella

# General

VZV is transmitted from person to person by direct contact, inhalation of aerosols from vesicular fluid of skin lesions of acute varicella or zoster, or infected respiratory tract secretions that also might be aerosolized. The virus enters the host through the upper-respiratory tract or the conjunctiva. The average incubation period for varicella is 14-16 days§ after exposure to rash; however, this period can vary (range: 10-21 days). The period of contagiousness of infected persons is estimated to begin 1-2 days before the onset of rash and to end when all lesions are crusted, typically 4-7 days after onset of rash (7). Persons who have progressive varicella (i.e., development of new lesions for >7 days) might be contagious longer, presumably because their immune response is depressed, which allows viral replication to persist. VZV remains dormant in sensory-nerve ganglia and might be reactivated at a later time, causing herpes zoster (HZ) (i.e., shingles), a painful vesicular rash typically appearing in a dermatomal distribution of one or two sensory-nerve roots.
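The intervals in the General section above (contagiousness from 1-2 days before rash onset until lesions crust, typically 4-7 days after onset; incubation of 10-21 days) can be expressed as simple date arithmetic. The Python sketch below is illustrative only; the function names are hypothetical, and the use of the outer bounds of each range is an assumption.

```python
from datetime import date, timedelta

def contagious_window(rash_onset: date) -> tuple:
    """Approximate contagious period for a varicella case: from 2 days
    before rash onset until lesions crust, typically up to 7 days after
    onset (outer bounds of the 1-2 day and 4-7 day ranges in the text)."""
    return rash_onset - timedelta(days=2), rash_onset + timedelta(days=7)

def contact_rash_window(exposure: date) -> tuple:
    """Window in which an exposed, susceptible contact may develop rash,
    using the 10-21 day incubation range given in the text."""
    return exposure + timedelta(days=10), exposure + timedelta(days=21)

onset = date(2007, 3, 1)
print("case contagious:", contagious_window(onset))
print("contact rash window if exposed at onset:", contact_rash_window(onset))
```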
Since implementation of a universal childhood varicella vaccination program in 1995, the epidemiology and clinical characteristics of varicella in the United States have changed, with substantial declines in morbidity and mortality attributable to varicella. No consistent changes in HZ epidemiology have been documented. Vaccinated persons might develop modified varicella disease with atypical presentation. Varicella disease that develops >42 days after vaccination (i.e., breakthrough varicella) typically is mild, with <50 skin lesions, low or no fever, and shorter (4-6 days) duration of illness. The rash is more likely to be predominantly maculopapular rather than vesicular. Nevertheless, breakthrough varicella is contagious.

# Prevaccine Era

Before the introduction of varicella vaccine in 1995, varicella was a universal childhood disease in the United States, with peak incidence in the spring and an average annual incidence of 15-16 cases per 1,000 population. On the basis of data from the National Health Interview Survey (NHIS) for 1980-1990, an average of 4 million cases were estimated to have occurred annually (annual incidence rate: 15 cases per 1,000 population) (8). Varicella was not a nationally notifiable disease when vaccine was introduced in 1995, and surveillance data were limited. In 1994, only 28 states and the District of Columbia reported varicella cases to the National Notifiable Diseases Surveillance System (NNDSS); reporting was passive, with estimated completeness ranging from <0.1% to 20% (9).

† In this report, efficacy refers to the extent to which a specific intervention produces a beneficial result under ideal conditions.
§ The en dash in numeric ranges is used to represent inclusive years, hours, days, ages, dosages, or a sequence of numbered items.

In multiple studies, age-specific incidence data were derived from NHIS and from state and local surveys (8,10,11). During 1980-1990, an estimated 33% of cases occurred among preschool-aged children (i.e., children aged 12 months-4 years), and 44% occurred among school-aged children (i.e., children aged 5-9 years) (annual incidence rates: 82.8 and 91.1 cases per 1,000 children, respectively). Approximately 90%-92% of cases occurred among persons aged <15 years, and cases occurred rarely among persons aged ≥50 years. However, studies using data from state and local surveys conducted during 1990-1992 and during 1994-1995 indicated that the highest incidence of varicella occurred among preschool-aged rather than school-aged children, indicating that the disease was being acquired at earlier ages (10,11). National seroprevalence data for 1988-1994 indicated that 95.5% of adults aged 20-29 years, 98.9% of adults aged 30-39 years, and >99.6% of adults aged ≥40 years were immune to VZV (12). However, for reasons that are not well understood, the epidemiology of varicella differs between countries with temperate and tropical climates (13-18). In the majority of countries with temperate climates, >90% of persons are infected by adolescence, whereas in countries with tropical climates, a higher proportion of infections are acquired at older ages, which results in higher susceptibility among adults (19). Estimates of the burden of varicella hospitalization varied according to the year(s) studied, the source of data, and the definitions used for a varicella-related hospitalization (20-23).
Estimates were higher if varicella was listed as either a principal or a secondary cause of hospitalization, in which case some incidental varicella hospitalizations might have been included. During 1988-1995, an estimated 10,632 hospitalizations were attributable annually to varicella in the United States (range: 8,198-16,586) (20). Another study demonstrated an annual average of 15,073 hospitalizations during 1993-1995, but this period might have included an epidemic year (22). Overall rates of hospitalization for varicella during 1988-1995 ranged from 2.3 to 6.0 cases per 100,000 population. If any varicella-related hospital discharge diagnostic code was included, rates varied between 5.0 and 7.0 cases per 100,000 population (20-23). During 1988-1995, persons without severe immunocompromising conditions or treatments comprised the largest proportion (89%) of annual varicella-related hospitalizations (20). Before vaccination, children aged <4 years accounted for 43%-44% of hospitalizations, and persons aged ≥20 years accounted for 32%-33% (20,22). The rate of complications from varicella was substantially higher for persons aged ≥20 years and for infants (i.e., children aged <1 year). Adults aged ≥20 years were 13 times more likely to be hospitalized when they had varicella than children aged 5-9 years, and infants aged <1 year were six times more likely to be hospitalized than children aged 5-9 years (20). The most common complications of varicella that resulted in hospitalizations were skin and soft tissue infections (especially invasive group A streptococcal infections), pneumonia, dehydration, and encephalitis. In 1980, an association was identified between Reye syndrome and the use of aspirin during varicella or influenza-like illness; since then, Reye syndrome, which was once considered a common complication resulting from varicella infection, has become rare (24-26). During 1970-1994, the average annual number of deaths for which varicella was recorded as the underlying cause was 105; the overall average annual varicella mortality rate was 0.4 deaths per 1 million population. The age distribution of varicella deaths shifted during this period. During 1970-1974, persons aged <20 years accounted for 80% of varicella deaths, compared with 46% during 1990-1994. During 1970-1994, the average case-fatality rate (CFR) for varicella for all ages combined ranged from 2.0 to 3.6 per 100,000 cases, with higher rates among infants and adults aged ≥20 years (27). Although CFRs declined substantially during this period, the risk for varicella-related death during 1990-1994 was still 25 times higher for adults than for children aged 12 months-4 years (CFR: 21.3 and 0.8 per 100,000 cases, respectively). During the same period, 89% of varicella deaths among children and 75% of varicella deaths among adults occurred in persons without severe underlying immunocompromising medical conditions. The most common complications among persons who died of varicella were pneumonia, central nervous system complications (including encephalitis), secondary infection, and hemorrhagic conditions. A recent reanalysis of varicella deaths also considered varicella when listed as a contributing cause of death in addition to the underlying cause studied in the previous report (28).
During 1990-1994, a varicella diagnosis was listed on an average of 145 death certificates per year (105 as an underlying cause and 40 as a contributing cause), with an overall annual varicella mortality rate of 0.6 deaths per 1 million population. Varicella during pregnancy can have adverse consequences for the fetus and infant, including congenital varicella syndrome (see Prenatal and Perinatal Exposure). Reliable data on the number of cases of congenital varicella syndrome are not available. However, on the basis of age-specific varicella incidence (from NHIS), the annual number of births, and the risk for congenital varicella syndrome (1.1% overall risk in the first 20 weeks of pregnancy), 44 cases of congenital varicella syndrome are estimated to have occurred each year in the United States during the prevaccine era (29).

# Postvaccine Era

In 1995, a varicella vaccine (VARIVAX,® Merck & Co., Inc., Whitehouse Station, New Jersey) was licensed in the United States for use among healthy children aged ≥12 months, adolescents, and adults. At that time, ACIP recommended routine varicella vaccination of children aged 12-18 months, catch-up vaccination of susceptible children aged 19 months-12 years, and vaccination of susceptible persons who have close contact with persons at high risk for serious complications (e.g., health-care workers and family contacts of immunocompromised persons) (1; Table 1). In 1999, ACIP updated the recommendations to include child care and school entry requirements, use of the vaccine after exposure and for outbreak control, use of the vaccine for certain children infected with human immunodeficiency virus (HIV), and vaccination of adolescents and adults at high risk for exposure or transmission (2; Table 1). During 1997-2005, national varicella vaccination coverage among children aged 19-35 months increased from 27% to 88%, with no statistically significant difference in coverage by race or ethnicity (30). In 2005, state-specific varicella vaccination coverage ranged from 69% to 96% (31). National surveillance data continue to be limited, but passive surveillance data in certain states have documented a decline in varicella incidence. In four states (Illinois, Michigan, Texas, and West Virginia) with adequate (>5% of expected cases during 1990-1994) reporting to NNDSS, varicella incidence for 2004 declined 53%-88% compared with the average incidence for 1990-1994, with vaccination coverage among children aged 19-35 months ranging from 82% to 88% (32; CDC, unpublished data, 2006). During 2003-2005, the number of cases increased in Illinois and Texas; the biggest increase (56%) occurred in Texas (Figure 1). The number of cases remained stable in Michigan (Figure 1) and declined minimally in West Virginia. In 1995, along with implementation of the national vaccination program, CDC instituted active surveillance for varicella in three communities (Antelope Valley, California; Travis County, Texas; and West Philadelphia, Pennsylvania) in collaboration with state and local health departments to establish baseline data and to monitor trends in varicella disease after introduction of varicella vaccine. By 2000, vaccination coverage among children aged 19-35 months in these three communities had reached 74%-84%, and reported total varicella cases had declined 71%-84% (33).
Although incidence declined to the greatest extent (83%-90%) among children aged 12 months-4 years, incidence declined in all age groups, including infants and adults, indicating the herd immunity effects of the vaccination program. Since 2001, only two sites were funded to continue surveillance (Antelope Valley and West Philadelphia). By 2005, vaccination coverage in these two sites had increased to 90%, and the reduction in incidence had reached 90% and 91%, respectively (34). During 1996-2005, as vaccination coverage continued to increase, the proportion of persons with varicella who had been vaccinated increased from 2% to 56%. During 1995-2004, peak incidence for varicella cases in active surveillance sites shifted from age 3-6 years to age 9-11 years. After introduction of vaccine in 1995, the number and rate of annual varicella-related hospitalizations declined. In one study of a nationally representative sample that was conducted during 1993-2001, varicella hospitalizations declined 75% (22). In another study, the annual varicella-related hospitalization rate declined 88% during 1994-2002 (23) (Figure 2). Hospitalization rates declined 100% among infants, and substantial declines also were recorded in all other age groups (up to age 50 years); hospitalization rates declined 91% among children aged <10 years, 92% among children and adolescents aged 10-19 years, and 78% among adults aged 20-49 years. The greater decline in hospitalizations among children led to an increase in the proportion of varicella-related hospitalizations among adults (40% of hospitalizations in 2002 occurred among persons aged ≥20 years) (23). In the combined active surveillance area, varicella-related hospitalizations declined from 2.4-4.2 hospitalizations per 100,000 population during 1995-1998 to 1.5 per 100,000 population in 2000 (33) and to 0.8 per 100,000 population in 2005 (34). During 1995-2001, the number of deaths for which varicella was listed as the underlying cause decreased from 115 to 26 (28) (Figure 3). Since then, the number of deaths has declined further; 16 deaths were reported in 2003. Age-adjusted mortality rates decreased 66%, from an average of 0.41 deaths per 1 million population during 1990-1994 to 0.14 during 1999-2001. The decline was observed in all age groups <50 years, with the greatest reduction (92%) occurring among children aged 12 months-4 years (0.09 deaths per 1 million population), followed by an 88% reduction among children aged 5-9 years (0.10 deaths per 1 million population). Deaths among persons aged ≥50 years did not decline to the same extent; however, the validity of reported varicella deaths in this age group is low (35), and the majority of these deaths are not considered to be caused by varicella. During 1999-2001, the average rate of mortality attributed to varicella among all racial and ethnic populations was <0.15 deaths per 1 million persons. Persons without high-risk conditions (e.g., malignancies, HIV/acquired immunodeficiency syndrome [AIDS], and other immune deficiencies) accounted for 92% of deaths attributable to varicella. The average rates of death for which varicella was listed as a contributing cause also declined during 1999-2001, compared with 1990-1994.
Despite high 1-dose vaccination coverage and the success of the vaccination program in reducing varicella morbidity and mortality, reports to CDC from active surveillance sites and from states with well-implemented vaccination programs and surveillance indicate that in certain states and in one active surveillance site, the number of reported varicella cases has remained constant or declined minimally, and outbreaks have continued to occur. During 2001-2005, outbreaks were reported in schools with high varicella vaccination coverage (range: 96%-100%) (3,4). The outbreaks were similar in certain respects: 1) all occurred in elementary schools, 2) vaccine effectiveness was similar (range: 72%-85%), 3) the highest attack rates occurred among the younger students, 4) each outbreak lasted approximately 2 months, and 5) index cases occurred among vaccinated students (although their disease was mild). Overall attack rates among vaccinated children varied (range: 11%-17%), with attack rates in certain classrooms as high as 40%. These data indicate that even in settings in which vaccination coverage was nearly universal and vaccine performed as expected, the 1-dose vaccination program could not prevent varicella outbreaks completely.

# Prenatal and Perinatal Exposure

In the prevaccine era, prenatal infection was uncommon because the majority of women of childbearing age were immune to VZV (12,36). Varicella in pregnant women is associated with a risk for VZV transmission to the fetus or newborn. Intrauterine VZV infection might result in congenital varicella syndrome, neonatal varicella, or HZ during infancy or early childhood (37-46). Infants who are exposed prenatally to VZV, even if asymptomatic, might have measurable varicella-specific IgM antibody during the newborn period, have persistent varicella-specific IgG immunity after age 1 year without a history of postnatal varicella, or demonstrate positive lymphocyte transformation in response to VZV antigen (37). Congenital varicella syndrome was first recognized in 1947 (40). Congenital varicella syndrome can occur among infants born to mothers infected during the first half of pregnancy and might be manifested by low birthweight, cutaneous scarring, limb hypoplasia, microcephaly, cortical atrophy, chorioretinitis, cataracts, and other anomalies. In one study, incidence of congenital varicella syndrome was calculated using aggregate data from nine cohort studies carried out during 1986-2002 (47). Rates were 0.6% (4 of 725) for 2-12 weeks' gestation, 1.4% (9 of 642) for 13-28 weeks, and 0 (0 of 385) after 28 weeks. In a prospective study of 1,373 mothers with varicella during pregnancy conducted in the United Kingdom and West Germany during 1980-1993, the highest risk (2%) for congenital varicella syndrome was observed when maternal infection occurred during 13-20 weeks' gestation (43). The risk was 0.4% after maternal infection during 0-12 weeks' gestation. No cases of congenital varicella syndrome occurred among the infants of 366 mothers with HZ during pregnancy. Nine isolated cases involving birth defects consistent with congenital varicella syndrome have been reported after maternal varicella beyond 20 weeks' gestation (with the latest occurring at 28 weeks) (47,48).
In a prospective study, HZ occurred during infancy or early childhood in four (0.8%) of 477 infants who were exposed to VZV during 13-24 weeks' gestation and in six (1.7%) of 345 infants who were exposed during 25-36 weeks' gestation (43). The onset of varicella in pregnant women from 5 days before to 2 days after delivery results in severe varicella infection in an estimated 17%-30% of their newborn infants. These infants are exposed to VZV without sufficient maternal antibody to lessen the severity of disease. The risk for neonatal death has been estimated to be 31% among infants whose mothers had onset of rash <4 days before giving birth (45). This estimate was made on the basis of a limited number of infant deaths and might be higher than the actual risk because the study was performed before neonatal intensive care was available. In addition, certain cases were not part of prospective studies but were reported retrospectively, making the results subject to selection bias. When these cases were reevaluated subsequently by another investigator, certain infants were demonstrated to have been at higher risk for death because of low birthweight; in at least one case, another cause of death was probable (46). Varicella-zoster immune globulin (VZIG) has been reported to reduce the incidence of severe neonatal varicella disease (49) and therefore is indicated in such situations. Nevertheless, the risk for death among neonates who do not receive postexposure prophylaxis with VZIG is likely to be substantially lower than was estimated previously.

# Herpes Zoster Surveillance

After primary infection, VZV persists as a latent infection in sensory-nerve ganglia. The virus can reactivate, causing HZ. Mechanisms controlling VZV latency are not well understood. Risk factors for HZ include aging, immunosuppression, and initial infection with varicella in utero or during early childhood (i.e., age <18 months). An estimated 15%-30% of the general population experience HZ during their lifetimes (50,51); this proportion is likely to increase as life expectancy increases. The most common complication of HZ, particularly in older persons, is postherpetic neuralgia (PHN), the persistence of sometimes debilitating pain weeks to months after resolution of HZ. Life-threatening complications of HZ also can occur; these include herpes zoster ophthalmicus, which can lead to blindness. Another severe manifestation is dissemination, which might involve generalized skin eruptions and central nervous system, pulmonary, hepatic, and pancreatic complications. Dissemination, pneumonia, and visceral involvement typically are restricted to immunocompromised persons. VZV can be transmitted from the lesions of patients who have HZ to susceptible contacts. Although few data are available to assess this risk, one household contact study reported that the risk for VZV transmission from HZ was approximately 20% of the risk for transmission from varicella (52). Varicella vaccination might alter the risk for HZ at the level of both the individual and the population (i.e., herd immunity). Just as wild-type VZV can cause wild-type HZ, attenuated vaccine virus has the potential to become latent and later reactivate to cause vaccine virus strain (also called Oka-strain) HZ (53). Multiple studies have evaluated the risk for Oka-strain HZ. Varicella vaccination also might change the risk for HZ at the population level.
With the development of herd immunity and reduction in the likelihood of exposure, the varicella vaccination program prevents wild-type VZV infection among vaccine recipients and nonvaccine recipients, eliminating the risk for wild-type HZ in these persons. Reduction in the likelihood of wild-type varicella infection also increases the median age for acquiring varicella (although age-specific incidence rates themselves are lower). This reduces the risk for varicella infection during early childhood (i.e., age <18 months), thereby reducing a risk factor for childhood HZ. Exposure of persons with latent wild-type VZV infection to persons with varicella is thought to boost specific immunity, which might contribute to controlling reactivation of VZV and the development of HZ (50). Concern has been expressed that by providing fewer opportunities for varicella exposure among persons with previous wild-type varicella infection, reduction in the likelihood of exposure might increase the risk for HZ, possibly within as few as 5 years after introduction of varicella vaccination and attainment of vaccination coverage of >90% (59). Herpes zoster is not a nationally notifiable disease in the United States, and HZ surveillance has been conducted using multiple methods, study sites, or data sources. For certain studies, baseline data were available before the start of the varicella vaccination program. One study that included baseline data was a retrospective analysis of electronic medical records from a health maintenance organization (HMO) during 1992-2002 (60). This HMO study indicated that age-adjusted incidence of HZ remained stable during 1992-2002 as incidence of varicella decreased (60). Age-adjusted and age-specific annual incidence rates of HZ fluctuated slightly over time; the age-adjusted rate was highest in 1992, at 4.1 cases per 1,000 person-years.

# Use of Acyclovir to Treat and Prevent Varicella

Acyclovir is a synthetic nucleoside analog that inhibits replication of human herpes viruses, including VZV. Since the early 1980s, intravenous acyclovir has been available to treat immunocompromised persons who have varicella. When administered within 24 hours of onset of rash, acyclovir has been demonstrated to be effective in reducing varicella-associated morbidity and mortality in this population (66-68). In 1992, the Food and Drug Administration (FDA) approved the use of oral acyclovir for the treatment of varicella in otherwise healthy children. This approval was made on the basis of placebo-controlled, double-blind studies (69,70) that demonstrated the beneficial clinical effects (i.e., a decrease in the number of days in which new lesions appeared, the duration of fever, and the severity of cutaneous and systemic signs and symptoms) that occurred when acyclovir was administered within 24 hours of rash onset. No serious adverse events occurred during the period of drug administration. Administration of acyclovir did not decrease transmission of varicella or reduce the duration of absence from school. Because few complications occurred (1%-2%), these studies could not determine whether acyclovir had a statistically significant effect on disease severity among healthy children. In these studies, antibody titers after infection in children receiving acyclovir did not differ substantially from titers of children in the control group (69,70).
Clinical trials among adolescents and adults have indicated that acyclovir is well-tolerated and effective in reducing the duration and severity of clinical illness if the drug is administered within 24 hours of rash onset (71-73). In 1993, AAP's Committee on Infectious Diseases published a statement regarding the use of acyclovir (74). AAP did not consider administration of acyclovir to healthy children to have clinical benefit sufficient to justify its routine administration; however, AAP stated that certain circumstances might justify its use. AAP recommended that oral acyclovir should be considered for otherwise healthy persons at increased risk for moderate to severe varicella (e.g., persons aged >12 years, persons with chronic cutaneous or pulmonary disorders, persons receiving long-term salicylate therapy, and persons receiving short, intermittent, or aerosolized courses of corticosteroids). Certain experts also recommend use of oral acyclovir for secondary case-patients who live in the same households as infected children (74). Acyclovir is classified as a Category B drug in the FDA use-in-pregnancy rating. Although studies involving animals have not indicated teratogenic effects, adequate, well-controlled studies in pregnant women have not been conducted. However, a prospective registry of acyclovir use during pregnancy that collected data on outcomes of 596 infants whose mothers were exposed to systemic acyclovir during the first trimester of pregnancy indicated that the rate and types of birth defects approximated those in the general population (75). AAP has not recommended routine use of oral acyclovir for pregnant women because the risks and benefits to the fetus and mother were unknown. However, in instances of serious, viral-mediated complications (e.g., pneumonia), AAP has recommended that intravenous acyclovir should be considered (74). Two nucleoside analogs, acyclovir and famciclovir, have been approved by FDA for treating HZ. If administered within 72 hours of rash onset, acyclovir has accelerated the rate of cutaneous healing and reduced the severity of acute pain in adults who have HZ (76). Oral famciclovir, when administered during the same period, has similar efficacy (77). Acyclovir is not indicated for prophylactic use among otherwise healthy children, adolescents, or adults without evidence of immunity after exposure to varicella. Vaccination is the method of choice in these situations. No studies have been conducted regarding prophylactic use of acyclovir among immunocompromised persons; therefore, VZIG is recommended in these situations.

# Vaccines for Prevention of Varicella

Two live, attenuated varicella virus vaccines are licensed in the United States for prevention of varicella: single-antigen varicella vaccine (VARIVAX,® Merck & Co., Inc., Whitehouse Station, New Jersey) and combination MMRV vaccine (ProQuad,® Merck & Co., Inc., Whitehouse Station, New Jersey). Both vaccines are derived from the Oka strain of live, attenuated VZV. The Oka strain was isolated in Japan (78) in the early 1970s from vesicular fluid of a healthy child who had natural varicella and was attenuated through sequential propagation in cultures of human embryonic lung cells, embryonic guinea-pig cells, and human diploid cells (WI-38). The virus in the Oka/Merck vaccine has undergone further passage through human diploid-cell cultures (MRC-5) for a total of 31 passages.
In 1995, the single-antigen varicella vaccine was licensed in the United States for use among healthy persons aged ≥12 months. This vaccine is lyophilized; when reconstituted as directed in the package insert and stored at room temperature for a maximum of 30 minutes, it contains a minimum of 1,350 plaque-forming units (PFUs) of Oka/Merck VZV in each 0.5-mL dose (79). Each dose also contains other components, including hydrolyzed gelatin (see package insert).

# Immune Response to Vaccination

In clinical trials of the single-antigen varicella vaccine conducted before licensure, seroconversion was assessed using lots of vaccine with different amounts of PFUs and laboratory assays with different levels of sensitivity and specificity. Using a specially developed, sensitive glycoprotein enzyme-linked immunosorbent assay (gpELISA) that is not available commercially, seroconversion (defined by the acquisition of any detectable varicella antibodies [≥0.3 gpELISA units]) was observed at approximately 4-6 weeks after vaccination with 1 dose of varicella vaccine in approximately 97% of 6,889 susceptible children aged 1-12 years (79). The seroconversion rate was 98% for children aged 12-15 months and 95% among those aged 5-12 years (81). Adolescents aged 13-17 years had a lower seroconversion rate (79%) after a single dose of vaccine. A study performed postlicensure used fluorescent antibody to membrane antigen (FAMA) titers 16 weeks after vaccination to assess serologic response and demonstrated that 61 (76%) of 80 healthy child vaccine recipients seroconverted (FAMA titers ≥1:4) after 1 dose of single-antigen varicella vaccine (82). Primary antibody response to the vaccine at 6 weeks postvaccination is correlated with protection against disease (83,84). In clinical trials, rates of breakthrough disease were lower among children with varicella antibody titers of ≥5 gpELISA units than among those with titers of <5 units (84); children with a 6-week postvaccination antibody titer of <5 gpELISA units were 3.5 times more likely to have breakthrough varicella than those with a titer of ≥5 gpELISA units. Later studies of immunogenicity (85) have reported the proportion of vaccinated children who achieved this antibody level instead of seroconversion. After 1 dose of the single-antigen varicella vaccine, 86% of children had gpELISA levels of ≥5 units/mL (85). Studies performed using FAMA indicated that a titer ≥1:4 at 16 weeks postvaccination is correlated with protection against disease (82). Of healthy persons with a titer of ≥1:4 at 16 weeks postvaccination, <1% have had varicella after a household exposure (n = 130). In contrast, the attack rate among those with a titer of <1:4 was 55% (n = 60). Persistence of antibody in children after 1 dose of single-antigen varicella vaccine was demonstrated in both short- and long-term follow-up studies. In a clinical study, the rate of antibody persistence detected by gpELISA was nearly 100% after 9 years of follow-up for 277 children (85). Another study demonstrated that although antibody titers (detected by FAMA) might decline 12-24 months after vaccination, the median titer did not change after 1-4 years and even rose after 10 years (86).
In Japan, VZV antibodies were present in 37 (97%) of 38 children who received varicella vaccine 7-10 years earlier (with titers comparable to those of 29 children who had had natural varicella infection within the previous 10 years) (87) and in 100% of 25 children followed for as long as 20 years (i.e., antibody levels were higher than those observed 10 years earlier) (88). Interpretation of long-term studies is complicated by at least two factors. First, asymptomatic boosting of vaccine-induced immunity by exposure to wild-type VZV is likely. Because varicella vaccine is not routinely recommended in Japan, coverage of children was estimated to be low (approximately 20%) during 1991-1993. Second, sample sizes were limited as a result of the decrease in the number of children followed up with increasing time since vaccination. The second dose of varicella vaccine in children produced an improved immunologic response that is correlated with improved protection. A comparative study of healthy children who received 1 or 2 doses of single-antigen varicella vaccine administered 3 months apart indicated that a second dose provided higher antibody levels, as measured by the proportion of subjects with titers of ≥5 gpELISA units and by geometric mean titers (GMTs), and higher efficacy (85; Tables 2-4). The proportion of subjects with antibody titers of ≥5 gpELISA units among the 2-dose recipients was higher 6 weeks after the second dose than after the first dose (99.6% and 85.7%, respectively) and remained high at the end of the 9-year follow-up period, although the difference between the two regimens narrowed (97% and 95%, respectively). GMT 6 weeks after the second dose was substantially higher than that after a single dose (142 and 12, respectively). The difference in GMTs between the two regimens did not persist over 9 years of follow-up among subjects who seroconverted after vaccination, although GMTs in both regimens remained high by the end of the study period. However, receipt of a second dose decreased the rate of breakthrough varicella significantly (3.3-fold) and increased vaccine efficacy (p<0.001). Another study that assessed the immunogenicity of a second dose received 4-6 years after the first dose demonstrated a substantial increase in antibody levels in the first 7-10 days in the majority of those tested, indicating an anamnestic response. On the day of the second dose, GMT was 25.7, compared with 143.6 at 7-10 days after the second dose; 60% of recipients had at least a fourfold increase in antibody titers, and an additional 17% had at least a twofold increase (89). Three months after the second dose, GMT remained higher than on the day of the second dose (119.0 and 25.7, respectively). Among children, VZV antibody levels and GMTs after 2 doses administered 4-6 years apart were comparable to those obtained when the 2 doses were administered 3 months apart.

TABLE 2. Humoral and cellular immune response among children aged 12 months-12 years measured at 6 weeks postvaccination, by vaccine type and vaccination schedule - United States.

Among persons aged ≥13 years, multiple studies have described seroconversion rates after receipt of the single-antigen varicella vaccine (range: 72%-94% after 1 dose and 94%-99% after a second dose administered 4-8 weeks later) (79,92,93).
In clinical studies, detectable antibody levels have persisted for at least 5 years in 97% of adolescents and adults who were administered 2 doses of vaccine 4-8 weeks apart (79). However, other studies demonstrated that 25%-31% of adult vaccine recipients who seroconverted lost detectable antibodies (by FAMA) at multiple intervals (range: 1-11 years) after vaccination (93,94). For persons who had breakthrough disease after exposure to varicella, neither the severity of illness nor the attack rate increased over time (95).

Innate (i.e., nonspecific) and adaptive (i.e., humoral and cellular) immunity are important in the control of primary varicella infection. The capacity to elicit cell-mediated immunity is important for viral clearance, providing long-term protection against disease and preventing symptomatic VZV reactivation. Studies among children and adults have indicated that breakthrough varicella typically is mild, even among vaccine recipients without seroconversion or vaccine recipients who lost detectable antibody, suggesting that VZV-specific cell-mediated immunity affords protection to vaccine recipients in the absence of a detectable antibody response (94,95). Studies of the cellular immune response to vaccination among children demonstrated that immunization with 1 dose of varicella vaccine induced VZV-specific T-cell proliferation that was maintained in 26 (90%) of 29 children 1 year postvaccination and in 52 (87%) of 60 children 5 years postvaccination (96). In this study, the mean stimulation index (SI), a marker of cell-mediated immunity, was 12.1 after 1 year and 22.1 after 5 years. Data obtained at 1 year postvaccination from a subset of children in a prelicensure study comparing the immune response among children who received 1 and 2 doses administered 3 months apart demonstrated that varicella-specific lymphocyte proliferation responses were significantly higher for recipients of 2 doses than for recipients of 1 dose (mean SI: 34.7 and 23.1, respectively; p = 0.03) (97). In the study of 2 doses administered 4-6 years apart, results also indicated that the lymphocyte proliferation response was significantly higher at 6 weeks and 3 months after the second dose than at the same time points after the first dose (p<0.01) (89; Table 2). Among adults, vaccine-induced VZV-specific T-cell proliferation was maintained in 16 (94%) of 17 subjects at 1 and 5 years postvaccination (96,98). The mean SI was 9.9 after 1 year and 22.4 after 5 years.

# Correlates of Protection

For children, the varicella antibody response measured by gpELISA 6 weeks postvaccination correlates with neutralizing antibody level, VZV-specific T-cell proliferative responses, vaccine efficacy, and long-term protection against varicella after exposure to VZV (83,84,99,100). A titer of ≥5 gpELISA units/mL is associated with protection against disease, although it should not be considered an absolute guarantee of protection; breakthrough cases have occurred among children with ≥5 gpELISA units/mL. A FAMA titer ≥1:4 at 16 weeks postvaccination also correlates with protection against disease (82). However, neither of these antibody tests is available commercially. The relationship between breakthrough disease and the antibody level measured at other intervals postvaccination, especially immediately before exposure, has not been studied. No correlates of protection have been evaluated for adults.
# Vaccine Efficacy and Vaccine Effectiveness

# One-Dose Regimen

# Prelicensure Efficacy

In prelicensure studies carried out among children aged 12 months-14 years, the protective efficacy of single-antigen varicella vaccine varied, depending on the amount of live virus administered per dose, the exposure setting (community or household), and the quality and length of the clinical follow-up. The majority of the prelicensure studies reported efficacy of 1 dose of varicella vaccine within the range of 70%-90% against any clinical disease and 95% against severe disease for 7-10 years after vaccination (81,101,102). A randomized placebo-controlled efficacy trial was conducted among children aged 12 months-14 years, but the formulation differed from that of the current vaccine (17,000 PFUs per dose) (103,104), with follow-up of children through 7 years postvaccination (105). Reported efficacy was 100% at 1 year and 98% at 2 years after vaccination, and 100% and 92%, respectively, after exposures to VZV that occurred in the household.

Although a randomized controlled study was not conducted for adults, the efficacy of single-antigen varicella vaccine was determined by evaluation of protection when adult vaccine recipients were exposed to varicella in the household. On the basis of the reported historical attack rate of 87% for natural varicella after household exposure among unvaccinated children, estimated efficacy among adults was approximately 80% (79). The attack rate of unvaccinated adults exposed in households was not studied.

# Postlicensure Efficacy and Effectiveness

# Prevention of All Varicella Disease

Postlicensure studies have assessed the effectiveness* of the single-antigen varicella vaccine under field conditions in child care, school, household, and community settings using multiple methods. Effectiveness frequently has been estimated against all varicella and also against moderate and severe varicella (defined in different ways). Outbreak investigations have assessed effectiveness against clinically defined varicella. The majority of these investigations have demonstrated vaccine effectiveness for prevention of varicella in the same range described in prelicensure trials (70%-90%) (3-6,106-113), with some lower (44%, 56%) (114,115) and some higher (100% in one of two schools investigated) estimates (107).

* In this report, effectiveness refers to the extent to which a specific intervention, when deployed in the field, does what it is intended to do for a defined population.

A retrospective cohort study in 11 child care centers demonstrated vaccine effectiveness of 83% for prevention of clinically diagnosed varicella (116). In a case-control study that measured vaccine effectiveness against laboratory-confirmed varicella in a pediatric office setting during 1997-2003, vaccine effectiveness was 85% (CI = 78%-90%) during the first 4 years and 87% (CI = 81%-91%) for the entire study period (117,118). Finally, in a study of household secondary attack rates, considered the most robust test of vaccine performance because of the intensity of exposure, varicella vaccine was 79% (CI = 70%-85%) effective in preventing clinically defined varicella in exposed household contacts aged 12 months-14 years without a history of varicella disease or vaccination (119). Postlicensure data on vaccine effectiveness against all disease have been summarized (Table 5).
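The adult efficacy estimate above follows from the standard secondary-attack-rate method. The worked calculation below is illustrative only; it uses the rounded figures cited in this section, and the attack rate among vaccinated adults is back-calculated rather than taken from the source:

$$\mathrm{VE} = 1 - \frac{\mathrm{AR}_{\text{vaccinated}}}{\mathrm{AR}_{\text{unvaccinated}}}$$

With the historical unvaccinated household attack rate of 87% and an estimated efficacy of approximately 80%, the implied attack rate among vaccinated adults is (1 - 0.80) × 0.87 ≈ 0.17, that is, roughly 17 cases per 100 exposed vaccine recipients.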
In a randomized clinical trial conducted postlicensure that compared the efficacy of 1 dose of varicella vaccine with that of 2 doses, the estimated vaccine efficacy for 1 dose over a 10-year observation period was 94.4% (CI = 92.9%-95.7%) (85; Table 3). In the same study, the efficacy of 1 dose of vaccine in preventing varicella after household exposure over 10 years was 90.2% (CI = 83.7%-96.7%) (Table 4). This study did not use placebo controls and used historical data on attack rates in unvaccinated children to calculate vaccine efficacy.

# Prevention of Moderate and Severe Varicella

Postlicensure studies assessing vaccine performance in preventing moderate and severe varicella have consistently demonstrated high effectiveness. Definitions for disease severity have varied among studies. Certain studies have used a defined scale of illness that included the number of skin lesions, fever, complications, and investigator assessment of illness severity; others have used only the number of skin lesions, reported complications, or hospitalizations. Moderate varicella typically has been defined as either 50-500 or 250-500 lesions, and severe varicella has been defined as >500 lesions or any hospitalization or complication. In the randomized postlicensure clinical trial, severe varicella was defined as >300 lesions and fever of >102°F (38.9°C), oral equivalent. Regardless of the different definitions, multiple studies have demonstrated that single-antigen varicella vaccine was >95% effective in preventing combined moderate and severe disease (3-6,85,106,107,109-113,115-119); one study demonstrated effectiveness of 86% (114). Effectiveness was 100% against severe disease when measured separately (6,85,109,111,117,119). Postlicensure data on vaccine effectiveness against moderate and severe varicella have been summarized (Table 5).

# TABLE 5. Summary of postlicensure data on effectiveness of single-antigen varicella vaccine against disease among children aged 12 months-14 years - United States, 1996-2004

# Two-Dose Regimen

In a randomized clinical trial of single-antigen varicella vaccine that compared the efficacy of 1 dose with that of 2 doses administered 3 months apart, the estimated vaccine efficacy of 2 doses over a 10-year observation period was 98.3% (CI = 97.3%-99.0%), which was significantly higher than efficacy after 1 dose (p<0.001) (85; Table 5). The 2-dose regimen also was 100% efficacious against severe varicella. In the same study, the efficacy of 2 doses of single-antigen varicella vaccine in preventing disease after household exposure over 10 years was 96.4% (CI = 92.4%-100%), not significantly different from that of 1 dose (90.2%) (p = 0.112) (Table 4). However, the number of cases involving household exposure was limited.

Formal studies to evaluate the clinical efficacy of the combination MMRV vaccine have not been performed. Efficacy of the individual components was established previously in clinical studies with the single-antigen vaccines.

# Breakthrough Disease

Breakthrough disease is defined as a case of infection with wild-type VZV occurring >42 days after vaccination. In clinical trials, varicella disease was substantially less severe among vaccinated persons than among unvaccinated persons, who usually have fever and several hundred vesicular lesions (120).
In cases of breakthrough disease, the median number of skin lesions is commonly <50 (99,121-123). In addition, compared with unvaccinated persons, vaccine recipients have had fewer vesicular lesions (lesions more commonly are atypical, with papules that do not progress to vesicles), shorter duration of illness, and lower incidence of fever. Multiple postlicensure investigations also have demonstrated that the majority of breakthrough varicella cases are significantly milder than cases among unvaccinated children (p<0.05) (3,5,107-114,116-118,124). However, approximately 25%-30% of breakthrough cases are not mild, with clinical features more similar to those in unvaccinated children (124). Since 1999, when varicella deaths became nationally notifiable, two deaths from breakthrough varicella disease have been reported to CDC: one of a girl aged 9 years with a history of asthma who was receiving steroids when she had the breakthrough infection, and the other of a girl aged 7 years with a history of malignant ependymoma who also was receiving steroid therapy at the time of her death (CDC, unpublished data, 2006).

# One-Dose Regimen

In clinical trials, 1,114 children aged 1-12 years received 1 dose of single-antigen varicella vaccine containing 2,900-9,000 PFUs of attenuated virus per dose and were actively followed for up to 10 years postvaccination (79). Among a subset of 95 vaccine recipients with household exposure to varicella, eight (8%) reported a mild form of varicella (10-34 lesions). In a randomized clinical trial that compared the efficacy of 1 dose of vaccine with that of 2 doses during a 10-year observation period, the cumulative rate of breakthrough varicella among children who received 1 dose was 7.3% (85). Breakthrough cases occurred annually in 0.2%-2.3% of recipients of 1 dose of vaccine. Cases occurred throughout the observation period, but the majority were reported 2-5 years after vaccination (Figure 4). Of 57 children with breakthrough cases, 13 (23%) had >50 lesions. In cross-sectional studies, the attack rate for breakthrough disease has ranged between 11% and 17% (and as high as 40% in certain classrooms) in outbreak investigations (3) and was 15% in household settings (119).

# Two-Dose Regimen Data Among Children

In a randomized clinical trial that compared the efficacy of 1 dose of vaccine with that of 2 doses, the cumulative rate of breakthrough varicella during a 10-year observation period was 3.3-fold lower among children who received 2 doses than among children who received 1 dose (2.2% and 7.3%, respectively; p<0.001) (85). Breakthrough cases occurred annually in up to 0.8% of 2-dose vaccine recipients. The majority of cases of breakthrough disease occurred 2-5 years after vaccination; no cases were reported 7-10 years after vaccination (Figure 4). Of 16 children with breakthrough cases, three (19%) had >50 lesions. The proportion of children with >50 lesions did not differ between the 1-dose and 2-dose regimens (p = 0.5).

# Breakthrough Infections Among Adolescents and Adults

In postlicensure studies of adolescents and adults who received 2 doses, 40 (9%) cases of breakthrough varicella occurred among 461 vaccine recipients who were followed for 8 weeks-11.8 years (mean: 3.3 years) after vaccination (95), and 12 (10%) cases occurred among 120 vaccine recipients who were followed for 1 month-20.6 years (mean: 4.6 years) (94).
One prelicensure study of persons who had received 2 doses of vaccine reported that 12 (8%) breakthrough cases occurred among 152 vaccine recipients who were followed for 5-66 months (mean: 30 months) postvaccination (93).

# Contagiousness

Prelicensure clinical trials reported the rate of disease transmission from vaccinated persons with varicella to their vaccinated siblings. In 10 trials conducted during 1981-1989, breakthrough infections occurred in 114 (5.3%) of 2,163 vaccinated children during the 1-8-year follow-up period of active surveillance, and secondary transmission occurred to 11 (12.2%) of their 90 vaccinated siblings (121). Illness was mild in both index and secondary case-patients. Household transmission from vaccinated children with breakthrough disease to susceptible adults (one of whom died) has been reported (CDC, unpublished data, 2006). One study examined secondary attack rates from vaccinated and unvaccinated persons with varicella to both vaccinated and unvaccinated household contacts aged 12 months-14 years (119). This study demonstrated that vaccinated persons with varicella who had <50 lesions were only one third as contagious as unvaccinated persons with varicella. However, vaccinated persons with varicella who had ≥50 lesions were as contagious as unvaccinated persons with varicella (119). Vaccinated persons with varicella tend to have milder disease, and, although they are less contagious than unvaccinated persons with varicella, they might not receive a diagnosis and be isolated. As a result, they might have more opportunities to infect others in community settings, thereby further contributing to VZV transmission. Vaccinated persons with varicella also have been index case-patients in varicella outbreaks (3,4,115).

# Risk Factors for Vaccine Failure

Potential risk factors for vaccine failure have been identified in studies of vaccine effectiveness during outbreak investigations and in other specially designed studies (5,108-110,113-115,118,125). In outbreak investigations, the low number of cases limits the ability of researchers to conduct multivariate analyses and examine the independent effect of each risk factor for vaccine failure. An increased risk for breakthrough disease has been noted with decreasing age at vaccination: a threefold increase in breakthrough disease risk for children vaccinated at age <14 months (110), an increase of twofold in one study and nearly fourfold in another for children vaccinated at age <16 months (108,115), and a ninefold increase for children vaccinated at age <19 months (113). Other outbreak investigations have demonstrated that time since vaccination (variably defined as >3 or >5 years) was associated with an increased risk for breakthrough disease (relative risk [RR] = 2.6, 6.7, and 2.6, respectively) (5,114,115). However, age at vaccination and time since vaccination are highly correlated, and their independent association with the risk for breakthrough disease has been assessed in only one outbreak investigation (113). A retrospective cohort study that adjusted for other potential risk factors demonstrated an increased risk for breakthrough disease for children vaccinated at age <15 months (adjusted relative risk [aRR] = 1.4; CI = 1.1-1.9) (125).
A case-control study demonstrated that the effectiveness of vaccine in the first year after vaccination was significantly lower (73%) among children vaccinated at age <15 months than among children vaccinated at age ≥15 months (99%) (p = 0.01) (118). However, the difference in overall effectiveness between children vaccinated at these ages was not statistically significant over subsequent years (8 years of follow-up) (81% and 88%, respectively; p = 0.17).

Active surveillance data collected during 1995-2004 from a sentinel population of 350,000 persons were analyzed to determine whether the severity and annual incidence of breakthrough varicella cases increased with time since vaccination (126). Children vaccinated >5 years previously were 2.6 times more likely to have moderate and severe breakthrough varicella than those vaccinated more recently. Multiple other studies that examined possible reasons for lower vaccine effectiveness did not find age at vaccination (3-5,111,114) or time since vaccination (3,110,111) to be associated with vaccine failure. An ongoing study is examining these factors and the risk for vaccine failure (127). After 8 years of active follow-up of 7,449 children vaccinated at age 12-23 months, results do not indicate an increased risk for breakthrough disease among children vaccinated at age 12-14 months compared with those vaccinated at age 15-23 months. Moreover, a test for trend revealed no change in the rate of reported breakthrough disease for each additional month of age at vaccination (127).

Two outbreak investigations noted an increased risk for breakthrough disease in children with asthma and eczema (109,113). In these investigations, the use of steroids to treat asthma or eczema was not studied. Steroids have been associated previously with severe varicella in unvaccinated persons (128-130). Only one retrospective cohort study controlled simultaneously for the effect of multiple risk factors, including the use of steroids, and this study demonstrated no association of risk for breakthrough disease with asthma or eczema (125). However, this study documented an increased risk for breakthrough disease if the child had received a prescription for oral steroids (considered a proxy for taking oral steroids when exposed to varicella) within 3 months of breakthrough disease (aRR = 2.4; CI = 1.3-4.4) and when varicella vaccination was administered within 28 days of MMR vaccine (aRR = 3.1; CI = 1.5-6.4).

# Evidence of Immunity

ACIP has approved criteria for evidence of immunity to varicella (Box). Only doses of varicella vaccines for which written documentation of the date of administration is presented should be considered valid. Neither a self-reported dose nor a history of vaccination provided by a parent is, by itself, considered adequate evidence of immunity. Persons who lack documentation of adequate vaccination or other evidence of immunity should be vaccinated.

Historically, self-reporting of varicella disease by adults or by parents for their children has been considered valid evidence of immunity. The predictive value of a self-reported positive disease history was extremely high among adults in the prevaccine era, although data on positive predictive value are lacking for parental reports regarding their children (131-133).
# Simultaneous Administration of Vaccines

Single-antigen varicella vaccine is well tolerated and effective in healthy children aged ≥12 months when administered simultaneously with MMR vaccine, either at separate sites with separate syringes or ≥4 weeks apart. The number and types of adverse events occurring in children who have received VARIVAX and MMRII concurrently have not differed from those in children who were administered the vaccines at different visits (79,135). Data concerning the effect of simultaneous administration of VARIVAX with vaccines containing various combinations of MMR, diphtheria and tetanus toxoids and pertussis (DTP), and Haemophilus influenzae type b (Hib) have not been published (79). A randomized study of 694 subjects determined that the immune response to MMR, varicella, and Hib vaccines administered concurrently with a fourth dose of pneumococcal conjugate vaccine (PCV7) was not inferior to that of those vaccines administered without PCV7; the percentage of subjects who seroconverted was >90% for all antigens in both groups (136).

Concomitant administration of the combination MMRV vaccine with other vaccines also has been assessed. In a clinical trial involving 1,913 healthy children aged 12-15 months, three groups were compared (137). One group received concomitantly administered (at separate sites) MMRV vaccine, Diphtheria and Tetanus Toxoids and Acellular Pertussis Vaccine Adsorbed (DTaP), Hib conjugate (meningococcal protein conjugate) vaccine, and hepatitis B (recombinant) (Hep B) vaccine. The second group received MMRV vaccine at the initial visit, followed by DTaP, Hib, and Hep B vaccines administered concomitantly 6 weeks later. The third group received MMR and varicella vaccines concomitantly, followed 6 weeks later by DTaP, Hib, and Hep B vaccines. Seroconversion rates and antibody titers were comparable for the measles, mumps, rubella, and varicella components for the first two groups. No immunologic data were reported for the third group. The Hib and Hep B seroconversion rates for the two groups that received those vaccines also were comparable.

Data are absent or limited for the concomitant use of MMRV vaccine with inactivated polio, pneumococcal conjugate, influenza, and hepatitis A vaccines. Simultaneous administration of the majority of widely used live and inactivated vaccines has produced seroconversion rates and rates of adverse reactions similar to those observed when the vaccines are administered separately. Therefore, single-antigen and combination MMRV vaccines may be administered simultaneously with other vaccines recommended for children aged 12-15 months and those aged 4-6 years. Simultaneous administration is particularly important when health-care providers anticipate that, because of certain factors (e.g., previously missed vaccination opportunities), a child might not return for subsequent vaccination.

# Economic Analysis of Vaccination

A cost-effectiveness analysis was performed before initiation of the varicella vaccination program in the United States (138). The results of the study indicated a savings of $5.40 for each dollar spent on routine vaccination of preschool-aged children when direct and indirect costs were considered. When only direct medical costs were considered, the benefit-cost ratio was 0.9:1.0. Benefit-cost ratios were only slightly lower when lower estimates of the short- and long-term effectiveness of the vaccine were used.
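As a worked restatement of the figures above (illustrative only; this introduces no new analysis), the benefit-cost ratio (BCR) divides the costs averted by the costs of the program:

$$\mathrm{BCR} = \frac{\text{costs averted by vaccination}}{\text{cost of the vaccination program}}$$

A societal BCR of 5.4:1 is simply another way of expressing the $5.40 saved per program dollar; the direct-medical-cost BCR of 0.9:1 indicates that, on that narrower accounting, the program cost slightly more than it saved.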
A more recent analysis used current estimates of morbidity and mortality (20,28,33) and current direct and indirect costs (ACIP, unpublished presentation, 2006). The model assumed that the second dose would reduce the varicella disease burden remaining after the first dose by 79%. From a societal perspective, both 1-dose and 2-dose vaccination programs are cost saving compared with no program. The vaccine program cost was estimated at $320 million for 1 dose and $538 million for 2 doses. The savings from varicella disease prevented were estimated at approximately $1.3 billion for the 1-dose program and approximately $1.4 billion for the 2-dose program. Compared with the 1-dose program, the incremental cost for the second dose was estimated to be $96,000 per quality-adjusted life year (QALY) saved. If benefits from preventing group A streptococcus infections and HZ among vaccinated persons are added, the incremental costs per QALY saved are $91,000 and $17,000, respectively. Because of the uncertainty of the modeled predictions of an increase in HZ among persons with a history of varicella, and because no consistent trends demonstrate an increase in HZ attributable to the varicella vaccination program in the United States, HZ among persons with a history of varicella was not included in the model.

# Storage, Handling, and Transportation of Varicella Vaccines

Single-antigen varicella and combination MMRV vaccines have similar but not identical distribution, handling, and storage requirements (79,80). For potency to be maintained, the lyophilized varicella vaccines must be stored frozen at an average temperature of 5°F (-15°C) or colder. Household freezers manufactured since the mid-1980s are designed to maintain temperatures from -4°F (-20°C) to 5°F (-15°C). When tested, VARIVAX has remained stable in frost-free freezers. Freezers that reliably maintain an average temperature of <5°F (<-15°C) and that have a separate sealed freezer door are acceptable for storing VARIVAX and ProQuad. Health-care providers may use stand-alone freezers or the freezer compartment of refrigerator-freezer combinations, provided that the freezer compartment has its own separate, sealed, and insulated exterior door. Units with an internal freezer door are not acceptable. Temperatures should be documented at the beginning and end of each day. Providers should document the required temperature in a newly purchased unit for a minimum of 1 week before using it to store vaccine, and routinely thereafter. When varicella vaccines are stored in the freezer compartment of a combined refrigerator-freezer, temperatures in both compartments should be monitored carefully. Setting the thermostat low enough for storage of varicella-containing vaccines might inadvertently expose refrigerated vaccines to freezing temperatures. Refrigerators with ice compartments that either are not tightly enclosed or are enclosed with unsealed, uninsulated doors (e.g., small, dormitory-style refrigerators) are not acceptable for the storage of varicella vaccines.

Diluent should be stored separately, either at room temperature or in the refrigerator. Vaccines should be reconstituted according to the directions in the package insert and only with the diluent supplied with the vaccine, which does not contain preservative or other antiviral substances that could inactivate the vaccine virus. Once reconstituted, vaccine should be used immediately to minimize loss of potency.
Vaccine should be discarded if not used within 30 minutes after reconstitution.

# Handling and Transportation of Varicella Vaccines Within Off-Site Clinics

When an immunization session is being held at a site distant from the freezer in which the vaccine is stored, the number of vaccine vials needed for the immunization session should be packed either in a vaccine shipping container (as received from the manufacturer) or in an insulated cooler, with an adequate quantity of dry ice (i.e., a minimum of 6 lbs per box) to preserve potency. When placed in a suitable container, dry ice will maintain a temperature of <5°F (<-15°C). Dry ice should remain in the container upon arrival at the clinic site. If no dry ice remains when the container is opened at the receiving site, the manufacturer (Merck and Company, Inc.) should be contacted for guidance (telephone: 1-800-982-7482). If dry ice is available at the receiving site, it may be used to store vaccine. Thermometers or temperature indicators cannot be used in a container with dry ice. Diluent should not be transported on dry ice.

If dry ice is not available, only single-antigen varicella vaccine may be transported, with frozen packs to keep the temperature at 36°F-46°F (2°C-8°C). Transport temperatures should be monitored, and a temperature indicator or thermometer should be placed in the container and checked on arrival. The container should be kept closed as much as possible during the immunization session; temperatures should be checked and recorded hourly. If the temperature remains between 36°F and 46°F (2°C-8°C), the single-antigen varicella vaccine may be used for up to 72 hours after its removal from the freezer. The date and time of removal should be marked on the vaccine vial. Single-antigen varicella vaccine stored at refrigerated temperatures for any period of time may not be refrozen for future use.

Transportation and storage of combination MMRV vaccine at temperatures between 36°F and 46°F (2°C-8°C) is not permissible for any length of time. In contrast to single-antigen varicella vaccine, combination MMRV vaccine must be maintained at temperatures of <5°F (<-15°C) until the time of reconstitution and administration. This difference in vaccine storage temperatures must be considered when planning off-site clinics. For this reason, transportation of MMRV vaccine to off-site clinics is not advised. If any concerns arise regarding the storage of single-antigen varicella or combination MMRV vaccines, the manufacturer should be contacted for guidance.

# Minimizing Wastage of Vaccine

Vaccine wastage can be minimized by accurately determining the number of doses needed for a given patient population. To ensure maximal vaccine potency, smaller shipments of vaccine should be ordered more frequently (preferably at least once every 3 months). Single-antigen varicella vaccine should not be distributed to providers who do not have the capacity to store it properly in a freezer until it is used. Transportation of varicella vaccine should be kept to a minimum to prevent loss of potency. Off-site clinics should receive only the amount of vaccine they can use within a short time (72 hours if storing single-antigen varicella vaccine at refrigerated temperatures).
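The time-and-temperature rules above can be summarized programmatically. The following Python sketch is illustrative only and is not part of these recommendations; the function name, inputs, and messages are invented for the example, and the thresholds simply restate the rules given in this section (single-antigen vaccine held at 36°F-46°F is usable for up to 72 hours and may not be refrozen; MMRV must remain frozen until reconstitution; reconstituted vaccine must be used within 30 minutes).

```python
# Illustrative sketch only -- not part of the ACIP recommendations.
# Encodes the off-site storage rules stated in the text above.
from typing import Optional, Tuple


def vaccine_usable(vaccine: str, storage_temp_f: float,
                   hours_refrigerated: float,
                   minutes_since_reconstitution: Optional[float] = None) -> Tuple[bool, str]:
    """Return (usable, reason) for a vial under the stated storage rules.

    vaccine: "single-antigen" or "MMRV" (labels chosen for this sketch)
    storage_temp_f: current storage temperature, degrees Fahrenheit
    hours_refrigerated: hours spent at refrigerator temperature (36-46 F)
    minutes_since_reconstitution: None if the vial is not yet reconstituted
    """
    # Reconstituted vaccine must be used within 30 minutes.
    if minutes_since_reconstitution is not None:
        if minutes_since_reconstitution > 30:
            return False, "discard: >30 minutes since reconstitution"
        return True, "use immediately"

    # MMRV must stay at 5 F (-15 C) or colder until reconstitution.
    if vaccine == "MMRV":
        if storage_temp_f <= 5:
            return True, "frozen storage acceptable"
        return False, "discard: MMRV may not be held above 5 F for any length of time"

    # Single-antigen vaccine: frozen storage, or 36-46 F for up to 72 hours.
    if storage_temp_f <= 5:
        return True, "frozen storage acceptable"
    if 36 <= storage_temp_f <= 46:
        if hours_refrigerated <= 72:
            return True, "usable; mark date and time of removal; do not refreeze"
        return False, "discard: >72 hours at refrigerator temperature"
    return False, "discard: temperature outside acceptable ranges"


# Example: a single-antigen vial kept at 40 F for 48 hours is still usable.
print(vaccine_usable("single-antigen", 40.0, 48.0))
```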
# Adverse Events After Vaccination

Because adverse events after vaccination might continue to be caused by wild-type VZV even as varicella disease declines, health-care providers should obtain event-appropriate clinical specimens (e.g., cerebrospinal fluid for encephalitis, bronchial lavage or lung biopsy for pneumonia) for laboratory evaluation, including strain identification. Information regarding strain identification is available from Merck's VZV Identification Program (telephone: 1-800-652-6372), from CDC's National Varicella Reference Laboratory (telephone: 404-639-0066; e-mail: [email protected]), or at http://www.cdc.gov/nip/diseases/varicella/surv/default.htm. Commercial laboratories do not have the capability for strain identification.

The National Childhood Vaccine Injury Act of 1986 requires physicians and other health-care providers who administer vaccines to maintain permanent immunization records and to report occurrences of adverse events for selected vaccines, including varicella vaccines. Serious adverse events (i.e., all events requiring medical attention) suspected to have been caused by varicella vaccines should be reported to the Vaccine Adverse Event Reporting System (VAERS). Forms and instructions are available at https://secure.vaers.org/vaersDataEntryintro.htm, in the FDA Drug Bulletin at http://www.fda.gov/medwatch, or from the 24-hour VAERS information recording at 1-800-822-7967.

# Prelicensure

# Single-Antigen Varicella Vaccine

Single-antigen varicella vaccine was well tolerated when administered to >11,000 healthy children, adolescents, and adults during prelicensure clinical trials. In a double-blind, placebo-controlled study among 914 susceptible healthy children aged 12 months-14 years, the only statistically significant (p<0.05) adverse events reported more commonly among vaccine recipients than among placebo recipients were pain and redness at the injection site (103). This study also described the presence of nonspecific rash among 2% of placebo and 4% of vaccine recipients within 43 days of vaccination. Of the 28 reported rashes, 10 (36%) were examined by a physician; among those examined, four of the seven noninjection-site rashes in vaccine recipients were judged to be varicella-like, compared with none of the rashes in the placebo recipients.

In a study comparing the safety of 1 dose of single-antigen varicella vaccine with that of 2 doses administered 3 months apart, no serious adverse events related to vaccination were reported among approximately 2,000 healthy subjects aged 12 months-12 years who were followed for 42 days after each injection. The 2-dose vaccine regimen was generally well tolerated, and its safety profile was comparable to that of the 1-dose regimen. The incidence of injection-site complaints observed ≤3 days after vaccination was slightly higher after dose 2 (25.4%) than after dose 1 (21.7%). The incidence of systemic clinical complaints was lower after dose 2; the incidence of fever during days 7-21 was 7% after dose 1 and 4% after dose 2 (p = 0.009), and the incidence of varicelliform rash was 3% after dose 1 and 1% after dose 2 (p = 0.008), with peak occurrence 8-21 days after vaccination (139).

In uncontrolled trials of persons aged ≥13 years, approximately 1,600 vaccine recipients who received 1 dose of single-antigen varicella vaccine and 955 who received 2 doses of vaccine were monitored for 42 days for adverse events (79).
After the first and second doses, 24.4% and 32.5% of vaccine recipients, respectively, had complaints regarding the injection site. Varicella-like rash at the injection site occurred in 3% of vaccine recipients after the first injection and in 1% after the second. A nonlocalized rash occurred in 5.5% of vaccine recipients after the first injection and in 0.9% after the second, with peak occurrence 7-21 and 0-23 days postvaccination, respectively.

# Combination MMRV Vaccine

In clinical trials, combination MMRV vaccine was administered to 4,497 children aged 12-23 months without concomitant administration of other vaccines (80). The safety profile of the first dose was compared with that of MMRII vaccine and VARIVAX administered concomitantly at separate injection sites. The follow-up period was 42 days postvaccination. Systemic vaccine-related adverse events were reported at a statistically significantly greater rate in persons who received MMRV vaccine than in persons who received the two vaccines administered concomitantly at separate injection sites: fever (>102°F [>38.9°C], oral equivalent) (21.5% and 14.9%, respectively) and measles-like rash (3.0% and 2.1%, respectively). Both fever and measles-like rash usually occurred within 5-12 days after the vaccination, were of short duration, and resolved with no long-term sequelae. Pain, tenderness, and soreness at the injection site were reported at a statistically significantly lower rate in persons who received MMRV vaccine. To demonstrate that MMRV vaccine could be administered as a second dose, a study was conducted involving 799 children aged 4-6 years who had received primary doses of MMRII and VARIVAX vaccines, either concomitantly or not, at age ≥12 months and ≥1 month before study enrollment (91).

# Postlicensure

During March 1, 1995-December 31, 2005, a total of 47.7 million doses of varicella vaccine were distributed in the United States, and 25,306 adverse events occurring after varicella vaccine administration were reported to VAERS, 1,276 (5%) of which were classified as serious. The overall adverse event reporting rate was 52.7 cases per 100,000 doses distributed. The rate of reporting of serious adverse events was 2.6 per 100,000 doses distributed. Half of all adverse events reported occurred among children aged 12-23 months (VAERS, unpublished data, 2006).

Not all adverse events that occur after vaccination are reported, and many reports describe events that might have been caused by confounding or unrelated factors (e.g., medications and other diseases). Because varicella disease continues to occur, wild-type virus might account for certain reported events. For serious adverse events for which background incidence data are known, VAERS reporting rates are lower than expected rates after natural varicella or background rates of disease in the community. Inherent limitations of passive safety surveillance impede comparison of adverse event rates after vaccination reported to VAERS with rates of complications after natural disease. Nevertheless, the magnitude of these differences suggests that serious adverse events occur at a substantially lower rate after vaccination than after natural disease. This assumption is corroborated by the substantial decline in the number of severe complications, hospitalizations, and deaths related to varicella that has been reported since implementation of the varicella vaccination program (22,23,28).
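The reporting rates above are simple proportions per 100,000 doses distributed. The arithmetic below is illustrative; because the dose total in the text is rounded, the results differ slightly from the published rates:

$$\text{reporting rate per }100{,}000 = \frac{\text{reports}}{\text{doses distributed}} \times 100{,}000$$

For example, 25,306 reports / 47.7 million doses × 100,000 ≈ 53 per 100,000 overall, and 1,276 / 47.7 million × 100,000 ≈ 2.7 per 100,000 for serious reports, consistent with the published 52.7 and 2.6 once rounding of the dose total is taken into account.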
Similar to the prelicensure experience, postlicensure safety surveillance data after administration of single-antigen varicella vaccine indicated that rash, fever, and injection-site reactions were the most frequently reported adverse events (140,141). Among reports from passive surveillance of adverse events during the first 4 years of the vaccination program, when wild-type VZV was still circulating widely, polymerase chain reaction (PCR) analysis confirmed that the majority of rash events occurring within 42 days of vaccination were caused by wild-type varicella-zoster virus. Rashes from the wild-type virus occurred a median of 8 days after vaccination (range: 1-24 days), whereas rashes from the vaccine strain occurred a median of 21 days after vaccination (range: 5-42 days) (140).

As part of a postmarketing evaluation of the short-term safety of VARIVAX, 89,753 vaccinated adults and children were identified from automated clinical databases of hospitalizations, emergency room visits, and clinic visits during April 1995-December 1996 (56). Among all potential adverse events identified, no consistent time association or clustering of any event was noted during the exposure follow-up period. No cases of ataxia or encephalitis were identified after receipt of varicella vaccine in this group of vaccine recipients. In the prevaccine era, among children aged <15 years, acute cerebellar ataxia was estimated to occur at a rate of one in 4,000 varicella cases, and varicella encephalitis without ataxia was estimated to occur in one in 33,000 varicella cases (142).

Severe complications that have been laboratory confirmed to be caused by the vaccine virus strain are rare and include pneumonia (140), hepatitis (143), severe disseminated varicella infection (140,141,144,145), and secondary transmission from five vaccine recipients (140,146-148). Except for the secondary transmission cases, these cases all occurred in immunocompromised patients or in persons who had other serious medical conditions that were undiagnosed at the time of vaccination. Although other serious adverse events have been reported, vaccine strain involvement was not laboratory confirmed. Thrombocytopenia (140,141,149) and acute cerebellar ataxia (140,141,150) have been described as potentially associated with single-antigen varicella vaccine. Two children had acute hemiparesis diagnosed after varicella vaccination (one at 5 days and the other at 3 weeks) (151). In both cases, unilateral infarction of the basal ganglia and internal capsule was noted; this distribution is consistent with varicella angiopathy. Urticaria after varicella vaccine has been associated with gelatin allergy (152). Recurrent papular urticaria has been reported as potentially associated with varicella vaccination (153). However, available data regarding these potential adverse events after varicella vaccination are insufficient to determine a causal association. The quality of reported information varies widely, and simultaneous administration with other vaccines (especially MMR) might confound attribution.

Herpes Zoster. Similar to wild-type VZV, vaccine virus can establish latent infection and subsequently reactivate, causing HZ disease in vaccine recipients. Before vaccine licensure, studies in children with leukemia had demonstrated a much lower rate of HZ in vaccinated children than in those (age- and protocol-matched) with previous varicella (54).
Cases of HZ in healthy vaccine recipients have been confirmed to be caused by both vaccine virus and wild-type virus, suggesting that certain HZ cases in vaccine recipients might result from antecedent natural varicella infection that might not have been detected by the patient or from infection after vaccination (140). A single case has been reported of a child who received a diagnosis of neuroblastoma and had severe chronic zoster attributed to the vaccine virus strain, which over time became drug resistant (145). A large postlicensure safety study, performed through surveys conducted every 6 months and validated by medical chart review during the first 9 years of a 15-year follow-up study of >7,000 enrolled children vaccinated with single-antigen varicella vaccine at age 12-24 months, estimated HZ disease incidence to be 22 per 100,000 person-years (CI = 13-37) as reported by parents (Steven Black, MD, Northern California Kaiser Permanente Medical Care Program, unpublished presentation, 2005). By comparison, the incidence of HZ was 30 per 100,000 person-years among healthy children aged 5-9 years (154) and 46 per 100,000 person-years for those aged <14 years (64). However, these rates are drawn from different populations and based on different methodologies. In addition, a proportion of children in these age groups would not have experienced varicella disease; those rates are likely to underestimate rates in a cohort of children all infected with wild-type VZV, making direct comparison with a vaccinated cohort difficult.

# Transmission of Vaccine Virus

Results from prelicensure trials of the single-antigen varicella vaccine suggest that transmission of varicella vaccine virus from healthy persons to susceptible contacts is rare. This risk was assessed in siblings of healthy vaccinated children who themselves received placebo (103). Six (1%) of 439 placebo recipients seroconverted without rash; the vaccinated siblings of these six children also did not develop rash. Serologic data suggested that three of these six seroconverters mistakenly received vaccine in lieu of their siblings. In a smaller study, immunocompromised siblings of healthy children receiving varicella vaccine were evaluated clinically and by testing for humoral and cell-mediated immune responses (155). No evidence of vaccine virus transmission was demonstrated in any of 30 immunocompromised siblings of 37 healthy children receiving varicella vaccine.

Accumulated data from postlicensure surveillance activities suggest that the risk for transmission of varicella vaccine virus from healthy persons to susceptible contacts is low. With >55 million doses of VARIVAX distributed since licensure, transmission from immunocompetent persons after vaccination has been documented by PCR analysis from only five persons, resulting in six secondary infections, all of them mild (140,146-148). Three episodes involved transmission from healthy children aged 1 year to healthy household contacts, including a sibling aged 4 months, a father, and a pregnant mother. In the latter episode, the mother chose to terminate the pregnancy, but fetal tissue tested subsequently by PCR was negative for varicella vaccine virus (147). The children in these episodes had 2, 12, and 30 lesions, respectively. A fourth episode involved transmission from an immunocompetent adolescent who was a resident in an institution for chronically disabled children.
The adolescent had >500 lesions after vaccination, and vaccine virus was transmitted to another immunocompetent resident of the institution and to a health-care worker, both of whom had histories of varicella (146). The fifth episode represented tertiary spread from a healthy sibling contact of a vaccinee with leukemia (148). Rashes for both healthy siblings were mild (i.e., 40 and 11 lesions, respectively), and vaccine virus was isolated from all three case-patients. The third sibling had rash onset 18 days after rash onset in the secondary case-patient and 33 days after rash onset in the vaccinated leukemic child. In addition to these five episodes, one child has been reported to have transmitted vaccine virus from HZ that occurred 5 months after varicella vaccination; 2 weeks later, a mild varicella-like rash, from which vaccine varicella virus was isolated, occurred in the child's vaccinated brother (156).

Although varicella vaccine is not recommended for children with cellular immune deficiencies, the experience from prelicensure vaccine trials involving children with leukemia is instructive. Data from a study of varicella vaccination in children with leukemia indicated that varicella vaccine virus transmission occurred in 15 (17%) of 88 healthy, susceptible siblings of leukemic vaccine recipients; the rash was mild in these cases.

# Summary of Rationale for Varicella Vaccination

Varicella vaccine is an effective prevention tool for decreasing the burden attributable to varicella disease and its complications in the United States. In the prevaccine era, varicella was a childhood disease: >90% of the approximately 4 million annual cases, two thirds of the approximately 11,000 hospitalizations, and approximately half of the 100-150 annual deaths occurred among persons aged <20 years. Single-antigen varicella vaccine is licensed for use among healthy persons aged ≥12 months, and the combination MMRV vaccine is licensed for use in healthy children aged 12 months-12 years. Prelicensure and postlicensure studies have demonstrated that 1 dose of single-antigen varicella vaccine is approximately 85% effective in preventing varicella. Breakthrough varicella disease that occurs after vaccination frequently is mild and modified. Varicella vaccine is >95% effective in preventing severe varicella disease. Since implementation of the varicella vaccination program in 1995, varicella incidence, hospitalizations, and deaths have declined substantially. MMRV vaccine was licensed on the basis of immunologic noninferiority to its antigenic vaccine components. Initial varicella vaccine policy recommendations were for 1 dose of varicella vaccine for children aged 12 months-12 years and 2 doses, 4-8 weeks apart, for persons aged ≥13 years. In June 2006, ACIP approved a routine 2-dose recommendation for children: the first dose should be administered at age 12-15 months and the second dose at age 4-6 years.

The rationale for the second dose of varicella vaccine for children is to further decrease varicella disease and its complications in the United States. Despite the successes of the 1-dose vaccination program in children, vaccine effectiveness of 85% has not been sufficient to prevent varicella outbreaks, which, although fewer than in the prevaccine era, have continued to occur in highly vaccinated school populations. Breakthrough varicella is contagious.
Studies of the immune response after 1 and 2 doses of varicella vaccine demonstrate a greater-than-tenfold boost in GMTs when measured 6 weeks after the second varicella vaccine dose. A higher proportion (>99%) of children achieve an antibody response of ≥5 gpELISA units after the second dose, compared with 76%-85% of children after a single dose of varicella vaccine. The second dose of varicella vaccine is expected to provide improved protection to the 15%-20% of children who do not respond adequately to the first dose. Data from a randomized clinical trial conducted postlicensure indicated that vaccine efficacy after 2 doses of single-antigen varicella vaccine in children (98.3%; CI = 97.3%-99.0%) was significantly higher than that after a single dose (94.4%; CI = 92.9%-95.7%). The risk for breakthrough disease was 3.3-fold lower among children who received 2 doses than among children who received 1 dose. How this increase in vaccine efficacy (typically higher than that observed under field conditions) will translate into vaccine effectiveness under conditions of community use will be an important area of study.

The recommended ages for the routine first (age 12-15 months) and second (age 4-6 years) doses of varicella vaccine are harmonized with the recommendations for MMR vaccine use and are intended to limit the period when children have no varicella antibody. The recommended age for the second dose is supported by the current epidemiology of varicella, with low incidence and few outbreaks among preschool-aged children and higher incidence and more outbreaks among elementary-school-aged children. However, the second dose may be administered at an earlier age, provided that the interval between the first and second doses is ≥3 months. The recommendation for the minimum interval between doses is made on the basis of the design of the studies evaluating 2 doses among children aged 12 months-12 years. MMRV vaccine may be used to vaccinate children against measles, mumps, rubella, and varicella simultaneously. Because the risk for transmission can be high among students in schools, colleges, and other postsecondary educational institutions, students without evidence of immunity should receive 2 doses of varicella vaccine. All children and adolescents who previously received 1 dose of varicella vaccine should receive a second dose.

Varicella disease is more severe, and its complications more frequent, among adolescents and adults. The recommendation for vaccination of all adolescents and adults without evidence of immunity will provide protection in these age groups. Because varicella might be more severe in immunocompromised persons, who might not be eligible for vaccination, and because of the risk for VZV transmission in health-care settings, HCP must be vaccinated. Varicella disease during the first two trimesters of pregnancy might infect the fetus and result in congenital varicella syndrome. Therefore, routine antenatal screening for evidence of immunity, and postpartum vaccination of women without evidence of immunity, now is recommended.

# Recommendations for the Use of Varicella Vaccines

Two 0.5-mL doses of varicella vaccine administered subcutaneously are recommended for children aged ≥12 months, adolescents, and adults without evidence of immunity. For children aged 12 months-12 years, the recommended minimum interval between the two doses is 3 months.
However, if the second dose was administered ≥28 days after the first dose, the second dose is considered valid and need not be repeated.

# Routine Vaccination

# Persons Aged 12 Months-12 Years

# Preschool-Aged Children

All healthy children should routinely receive their first dose of varicella-containing vaccine at age 12-15 months.

# School-Aged Children

A second dose of varicella vaccine is recommended routinely for all children aged 4-6 years (i.e., before entering prekindergarten, kindergarten, or first grade). However, it may be administered at an earlier age, provided that the interval between the first and second doses is ≥3 months.

Because of the risk for transmission of VZV in schools, all children entering school should have received 2 doses of varicella-containing vaccine or have other evidence of immunity to varicella (see Evidence of Immunity).

# Persons Aged ≥13 Years

Persons aged ≥13 years without evidence of varicella immunity should receive two 0.5-mL doses of single-antigen varicella vaccine administered subcutaneously, 4-8 weeks apart. If >8 weeks elapse after the first dose, the second dose may be administered without restarting the schedule. Only single-antigen varicella vaccine may be used for vaccination of persons in this age group; MMRV is not licensed for use among persons aged ≥13 years.

# School-Aged Children, College Students, and Students in Other Postsecondary Educational Institutions

All students should be assessed for varicella immunity, and those without evidence of immunity should routinely receive 2 doses of single-antigen varicella vaccine 4-8 weeks apart. The risk for transmission of varicella among school-aged children, college students, and students in other postsecondary educational institutions can be high because of high contact rates.

# Other Adults

All healthy adults should be assessed for varicella immunity, and those who do not have evidence of immunity should receive 2 doses of single-antigen varicella vaccine 4-8 weeks apart. Adults who might be at increased risk for exposure or transmission and who do not have evidence of immunity should receive special consideration for vaccination, including 1) HCP; 2) household contacts of immunocompromised persons; 3) persons who live or work in environments in which transmission of VZV is likely (e.g., teachers, day-care employees, and residents and staff in institutional settings); 4) persons who live or work in environments in which transmission has been reported (e.g., college students, inmates and staff members of correctional institutions, and military personnel); 5) nonpregnant women of childbearing age; 6) adolescents and adults living in households with children; and 7) international travelers.

# Second Dose Catch-Up Vaccination

To improve individual protection against varicella and to have a more rapid impact on school outbreaks, second-dose catch-up varicella vaccination is recommended for children, adolescents, and adults who previously received 1 dose. The recommended minimum interval between the first dose and the catch-up second dose is 3 months for children aged ≤12 years and 4 weeks for persons aged ≥13 years. However, the catch-up second dose may be administered at any interval longer than the minimum recommended interval. Catch-up vaccination may be implemented during routine health-care provider visits and through school- and college-entry requirements.
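The interval rules in this and the preceding section can be checked mechanically. The following Python sketch is illustrative only; the function and its messages are invented for the example and encode only the rules stated above (a 3-month recommended minimum interval for children aged 12 months-12 years, with a second dose administered ≥28 days after the first still considered valid, and a 4-week minimum for persons aged ≥13 years).

```python
# Illustrative sketch only -- not part of the ACIP recommendations.
# Encodes the dose-interval rules stated in the text above.

def second_dose_assessment(age_years: float, interval_days: int) -> str:
    """Classify a second varicella dose by the interval since the first dose.

    age_years: age at the time of the second dose
    interval_days: days elapsed between the first and second doses
    """
    if age_years < 13:
        if interval_days >= 90:      # recommended minimum: 3 months
            return "valid: meets the recommended 3-month minimum interval"
        if interval_days >= 28:      # shorter, but >=28 days: still valid
            return "valid: >=28 days after dose 1; need not be repeated"
        return "invalid: repeat the second dose"
    # Persons aged >=13 years: minimum interval of 4 weeks (28 days).
    if interval_days >= 28:
        return "valid: meets the 4-week minimum interval"
    return "invalid: repeat the second dose"


# Example: a child whose 2 doses were 6 weeks apart still has a valid second dose.
print(second_dose_assessment(age_years=5, interval_days=42))
```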
As part of comprehensive health services for all adolescents, ACIP, AAP, and AAFP recommend a health maintenance visit at age 11-12 years. This visit also should serve as an immunization visit at which vaccination status is evaluated and necessary vaccinations are administered (158). Physicians should use this and other routine visits to ensure that all children without evidence of varicella immunity have received 2 doses of varicella vaccine.

# Requirements for Entry to Child Care, School, College, and Other Postsecondary Educational Institutions

Child care and school entry requirements for varicella immunity have been recommended previously (2). In 2005, ACIP recommended expanding the requirements to cover students in all grade levels. Official health agencies should take necessary steps, including developing and enforcing school immunization requirements, to ensure that students at all grade levels (including college) and children in child care centers are protected against varicella and other vaccine-preventable diseases (157).

# Prenatal Assessment and Postpartum Vaccination

Prenatal assessment of women for evidence of varicella immunity is recommended. Birth before 1980 is not considered evidence of immunity for pregnant women because of the potential severe consequences of varicella infection during pregnancy, including infection of the fetus. Upon completion or termination of their pregnancies, women who do not have evidence of varicella immunity should receive the first dose of vaccine before discharge from the health-care facility. The second dose should be administered 4-8 weeks later, which coincides with the postpartum visit (6-8 weeks after delivery); for women who gave birth, the second dose should be administered at that visit. Women should be counseled to avoid conception for 1 month after each dose of varicella vaccine. Health-care settings in which completion or termination of pregnancy occurs should use standing orders to ensure the administration of varicella vaccine to women without evidence of immunity.

# Special Considerations for Vaccination

# Vaccination of HIV-Infected Persons

HIV-infected children with a CD4+ T-lymphocyte percentage ≥15% should be considered for vaccination with the single-antigen varicella vaccine. Varicella vaccine was recommended previously for HIV-infected children in CDC clinical and immunologic categories N1 and A1 with an age-specific CD4+ T-lymphocyte percentage ≥25% (2). Limited data from a clinical trial in which 2 doses of single-antigen varicella vaccine were administered 3 months apart to 37 HIV-infected children (CDC clinical categories N, A, or B and immunologic category 2 [CD4+ T-lymphocyte percentage 15%-24%]) aged 1-8 years indicated that the vaccine was well tolerated and that >80% of subjects had a detectable VZV-specific immune response (antibody, cellular immune response, or both) at 1 year after immunization (159). These children were no less likely to have an antibody response to the varicella vaccine than were subjects who were less affected immunologically by HIV infection. Because children infected with HIV are at increased risk for morbidity from varicella and HZ (i.e., shingles) compared with healthy children, ACIP recommends that, after potential risks and benefits are weighed, single-antigen varicella vaccine should be considered for HIV-infected children with CD4+ T-lymphocyte percentages ≥15%.
Eligible children should receive 2 doses of single-antigen varicella vaccine 3 months apart. Because persons with impaired cellular immunity are potentially at greater risk for complications after vaccination with a live vaccine, these vaccine recipients should be encouraged to return for evaluation if they experience a postvaccination varicella-like rash. Data are not available regarding the safety, immunogenicity, or efficacy of MMRV vaccine in HIV-infected children; MMRV vaccine should not be administered as a substitute for the single-antigen varicella vaccine when vaccinating these children. The titer of Oka/Merck VZV is higher in combination MMRV vaccine than in single-antigen varicella vaccine. Recommendations for vaccination of HIV-infected children with measles, mumps, or rubella vaccines have been published previously (160).

Data on use of varicella vaccine in HIV-infected adolescents and adults are lacking. However, on the basis of expert opinion, the safety of varicella vaccine in HIV-infected persons aged ≥8 years with comparable levels of immune function (CD4+ T-lymphocyte count ≥200 cells/µL) is likely to be similar to that of children aged <8 years. Immunogenicity might be lower in older HIV-infected children, adolescents, and adults compared with children aged 1-8 years. However, weighing the risk for severe disease from wild VZV and the potential benefit of vaccination, vaccination may be considered (2 doses, administered 3 months apart) for HIV-infected persons with CD4+ T-lymphocyte counts ≥200 cells/µL in these age groups. If vaccination of HIV-infected persons results in clinical disease, the use of acyclovir might modify the severity of disease.

# Situations in Which Some Degree of Immunodeficiency Might Be Present

Persons with impaired humoral immunity may be vaccinated. No data have been published concerning whether persons without evidence of immunity receiving only inhaled, nasal, or topical doses of steroids can be vaccinated safely. However, clinical experience suggests that vaccination is well tolerated among these persons. Persons without evidence of immunity who are receiving systemic steroids for certain conditions (e.g., asthma) and who are not otherwise immunocompromised may be vaccinated if they are receiving <2 mg/kg of body weight or a total of <20 mg/day of prednisone or its equivalent. Certain experts suggest withholding steroids for 2-3 weeks after vaccination if it can be done safely (1). Data from a Japanese study using the Oka/Biken varicella vaccine (which is not available in the United States but whose immunogenicity and efficacy are similar to those of the varicella vaccine used in the United States) indicated that children taking steroids for nephrosis were vaccinated safely when the steroids were suspended for 1-2 weeks before vaccination, although no serious reactions occurred among children vaccinated when steroid therapy was not suspended (161). Persons who are receiving high doses of systemic steroids (i.e., ≥2 mg/kg of prednisone) for ≥2 weeks may be vaccinated once steroid therapy has been discontinued for ≥1 month, in accordance with the general recommendations for the use of live-virus vaccines (157).

Vaccination of leukemic children who are in remission and who do not have evidence of immunity to varicella should be undertaken only with expert guidance and with the availability of antiviral therapy should complications ensue.
Patients with leukemia, lymphoma, or other malignancies whose disease is in remission and whose chemotherapy has been terminated for at least 3 months can receive live-virus vaccines (157). When immunizing persons in whom some degree of immunodeficiency might be present, only single-antigen varicella vaccine should be used.

# Vaccination of Household Contacts of Immunocompromised Persons

Immunocompromised persons are at high risk for serious varicella infections. Severe disease occurs in approximately 30% of such persons with primary infection. Because varicella vaccine now is recommended for all healthy children and adults without evidence of immunity, household contacts of immunocompromised persons should be vaccinated routinely. Although the risk for exposure of immunocompromised persons to wild VZV now is lower than it was previously, vaccine should be offered to child and adult household contacts of immunocompromised persons who do not have evidence of immunity. Vaccination of household contacts provides protection for immunocompromised persons by decreasing the likelihood that wild-type VZV will be introduced into the household. Vaccination of household contacts of immunocompromised persons theoretically might pose a minimal risk for transmission of vaccine virus to immunocompromised persons, although in one study, no evidence of transmission of vaccine virus was demonstrated after vaccination of 37 healthy siblings of 30 children with malignancy (155). No cases of transmission of vaccine virus to immunocompromised persons have been documented in the postlicensure period in the United States, with >55 million doses of vaccine distributed. Other data indicate that disease caused by vaccine virus in immunocompromised persons is milder than wild-type disease and can be treated with acyclovir (148,159). The benefits of vaccinating susceptible household contacts of immunocompromised persons outweigh the extremely low potential risk for transmission of vaccine virus to immunocompromised contacts. Vaccine recipients in whom a vaccine-related rash occurs, particularly HCP and household contacts of immunocompromised persons, should avoid contact with susceptible persons who are at high risk for severe complications. If a susceptible, immunocompromised person is inadvertently exposed to a person who has a vaccine-related rash, postexposure prophylaxis with VZIG is not needed because disease associated with this type of virus is expected to be mild.

# Nursing Mothers

Postpartum vaccination of women without evidence of immunity need not be delayed because of breastfeeding. Women who have received varicella vaccination postpartum may continue to breastfeed. The majority of live vaccines are not associated with virus secretion in breast milk (157). A study involving 12 women who received single-antigen varicella vaccine while breastfeeding indicated no evidence of VZV DNA either in the 217 breast milk samples collected or in infants tested after both vaccine doses (162). No infants seroconverted. Another study did not detect varicella gene sequences in postvaccination breast milk samples (163). Therefore, single-antigen varicella vaccine should be administered to nursing mothers without evidence of immunity. Combination MMRV vaccine is not licensed for use among persons aged ≥13 years.
# Health-Care Personnel

Nosocomial transmission of VZV is well recognized (131,164-173), and guidelines for the prevention of nosocomial VZV infection and for infection control in HCP have been published (174,175). Sources of nosocomial exposure have included patients, hospital staff, and visitors (e.g., the children of hospital employees) who are infected with varicella or HZ. In hospitals, airborne transmission of VZV has been demonstrated when varicella has occurred in susceptible persons who had no direct contact with the index case-patient (176-180).

To prevent disease and nosocomial spread of VZV, health-care institutions should ensure that all HCP have evidence of immunity to varicella. Birth before 1980 is not considered evidence of immunity for HCP because of the possibility of nosocomial transmission to high-risk patients. In health-care institutions, serologic screening before vaccination of personnel who have a negative or uncertain history of varicella and who are unvaccinated is likely to be cost-effective. Institutions may elect to test all HCP regardless of disease history because a small proportion of persons with a positive history of disease might be susceptible.

Routine testing for varicella immunity after 2 doses of vaccine is not recommended for the management of vaccinated HCP. Available commercial assays are not sensitive enough to detect antibody after vaccination in all instances. Sensitive tests have indicated that 99% of adults develop antibodies after the second dose. However, seroconversion does not always result in full protection against disease, and no data regarding correlates of protection are available for adults.

HCP who have received 2 doses of vaccine and who are exposed to VZV should be monitored daily during days 10-21 after exposure through the employee health program or by an infection control nurse to determine clinical status (i.e., daily screening for fever, skin lesions, and systemic symptoms). Persons with varicella might be infectious up to 2 days before rash onset. In addition, HCP should be instructed to report fever, headache, or other constitutional symptoms and any atypical skin lesions immediately. HCP should be placed on sick leave immediately if symptoms occur. Health-care institutions should establish protocols and recommendations for screening and vaccinating HCP and for management of HCP after exposures in the workplace.

HCP who have received 1 dose of vaccine and who are exposed to VZV should receive the second dose of single-antigen varicella vaccine within 3-5 days after exposure to rash (provided 4 weeks have elapsed after the first dose). After vaccination, management is similar to that of 2-dose vaccine recipients.

Unvaccinated HCP who have no other evidence of immunity and who are exposed to VZV are potentially infective from days 10-21 after exposure and should be furloughed during this period. They should receive postexposure vaccination as soon as possible. Vaccination within 3-5 days of exposure to rash might modify the disease if infection occurred. Vaccination >5 days postexposure still is indicated because it induces protection against subsequent exposures (if the current exposure did not cause infection).
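The exposure-management steps above amount to a small decision table keyed to the number of doses an exposed worker has received. The sketch below is illustrative only; the function name, dictionary schema, and date handling are assumptions, and the clinical rules are the ones quoted in the text above.

```python
from datetime import date, timedelta

def hcp_exposure_plan(exposure: date, doses_received: int) -> dict:
    """Key dates after a VZV exposure for health-care personnel."""
    plan = {
        # Potential infectivity / daily-monitoring window: days 10-21.
        "window_start": exposure + timedelta(days=10),
        "window_end": exposure + timedelta(days=21),
        # Postexposure vaccination is ideally given within 3-5 days
        # of exposure to rash.
        "vaccinate_by": exposure + timedelta(days=5),
    }
    if doses_received >= 2:
        plan["action"] = ("monitor daily for fever, skin lesions, "
                          "and systemic symptoms")
    elif doses_received == 1:
        plan["action"] = ("second dose within 3-5 days if 4 weeks have "
                          "elapsed since dose 1; then manage as a "
                          "2-dose recipient")
    else:
        plan["action"] = ("furlough during days 10-21; vaccinate as "
                          "soon as possible")
    return plan

print(hcp_exposure_plan(date(2007, 6, 1), doses_received=0))
```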
The risk for transmission of vaccine virus from vaccine recipients in whom a varicella-like rash occurs after vaccination is low; such transmission has been documented after exposures in households and long-term care facilities (140,146-148). No cases have been documented after vaccination of HCP. The benefits of vaccinating HCP without evidence of immunity outweigh this extremely low potential risk. As a safeguard, institutions should consider precautions for personnel in whom a rash occurs after vaccination. HCP in whom a vaccine-related rash occurs should avoid contact with persons without evidence of immunity who are at risk for severe disease and complications until all lesions resolve (i.e., are crusted over or fade away) or no new lesions appear within a 24-hour period.

# Varicella IgG Antibody Testing

The tests most widely used to detect varicella IgG antibody after natural varicella infection among HCP are latex agglutination (LA) and ELISA. A commercially available LA test using latex particles coated with VZV glycoprotein antigens can be completed in 15 minutes and does not require special equipment (181). The sensitivity and specificity of the LA test are comparable to those of FAMA in detecting the antibody response after natural varicella infection. The LA test generally is more sensitive than commercial ELISAs. The LA test has detected antibody for up to 11 years after varicella vaccination (182). However, for the purpose of screening HCP for varicella susceptibility, a less sensitive and more specific commercial ELISA should be considered. A recent report indicated that the LA test can produce false-positive results, particularly when only a single concentration of serum is evaluated (183); this led to documented cases of false-positive results in HCP who consequently remained unvaccinated and subsequently contracted varicella.

# Vaccination for Outbreak Control

Varicella vaccination is recommended for outbreak control. Persons who do not have adequate evidence of immunity should receive their first or second dose as appropriate. Additionally, in outbreaks among preschool-aged children, 2-dose vaccination is recommended for optimal protection, and children vaccinated with 1 dose should receive their second dose provided 3 months have elapsed since the first dose. State and local health departments may advise exposed persons who do not have evidence of immunity to contact their health-care providers for vaccination, or they may offer vaccination through the health department or through school (or other institutional) vaccination clinics. Although outbreak control efforts optimally should be implemented as soon as an outbreak is identified, vaccination should be offered even if the outbreak is identified late. Varicella outbreaks in certain settings (e.g., child care facilities, schools, or institutions) can last as long as 4-5 months. Thus, offering vaccine during an outbreak might provide protection to persons not yet exposed and shorten the duration of the outbreak (184). Persons receiving either their first or second dose as part of the outbreak control program may be readmitted to school immediately. Those vaccinated with the first dose as part of outbreak control measures should be scheduled for the second dose as age appropriate.
Persons who are unvaccinated and without other evidence of immunity who do not receive vaccine should be excluded from institutions in which the outbreak is occurring until 21 days after the onset of rash in the last case of varicella. In addition, for school-aged persons covered by the 2-dose school vaccination requirements, exclusion during an outbreak is recommended for those vaccine recipients who had received the first dose before the outbreak but did not receive the second dose as part of the outbreak control program. Persons at increased risk for severe varicella who have contraindications to vaccination should receive VZIG within 96 hours of exposure.

# Contraindications

# General

Adequate treatment provisions for anaphylactic reactions, including epinephrine injection (1:1000), should be available for immediate use should an anaphylactic reaction occur. Before administering a vaccine, health-care providers should obtain the vaccine recipient's vaccination history and determine whether the individual had any previous reactions to any vaccine, including VARIVAX, ProQuad, or any measles, mumps, or rubella containing vaccines.

# Allergy to Vaccine Components

The administration of live varicella-containing vaccines rarely results in hypersensitivity. The information in the package insert should be reviewed carefully before vaccine is administered. Vaccination is contraindicated for persons who have a history of anaphylactic reaction to any component of the vaccine, including gelatin. Single-antigen varicella vaccine does not contain preservatives or egg protein; these substances have caused hypersensitivity reactions to other vaccines. For the combination MMRV vaccine, live measles and live mumps vaccines are produced in chick embryo culture. However, among persons who are allergic to eggs, the risk for serious allergic reactions after administration of measles- or mumps-containing vaccines is low. Because skin testing with vaccine is not predictive of allergic reaction to vaccination, skin testing is not required before administering combination MMRV vaccine to persons who are allergic to eggs (160). The majority of anaphylactic reactions to measles- and mumps-containing vaccines are associated not with hypersensitivity to egg antigens but with other vaccine components. Neither single-antigen varicella nor combination MMRV vaccines should be administered to persons who have a history of anaphylactic reaction to neomycin. However, neomycin allergy usually is manifested as a contact dermatitis, which is a delayed-type immune response rather than anaphylaxis. For persons who experience such a response, the adverse reaction, if any, would appear as an erythematous, pruritic nodule or papule present 48-96 hours after vaccination. A history of contact dermatitis to neomycin is not a contraindication to receiving varicella vaccines.

# Altered Immunity

Varicella vaccines should not be administered to persons receiving high-dose systemic immunosuppressive therapy, including persons on oral steroids at ≥2 mg/kg of body weight or a total of ≥20 mg/day of prednisone or equivalent for persons who weigh >10 kg, when administered for ≥2 weeks. Such persons are more susceptible to infections than healthy persons. Administration of varicella vaccines can result in a more extensive vaccine-associated rash or disseminated disease in persons receiving immunosuppressive doses of corticosteroids (185).
This contraindication does not apply to persons who are receiving inhaled, nasal, or topical corticosteroids or low-dose corticosteroids as are used commonly for asthma prophylaxis or for corticosteroid-replacement therapy (see Situations in Which Some Degree of Immunodeficiency Might Be Present).

# Pregnancy

Because the effects of the varicella virus vaccine on the fetus are unknown, pregnant women should not be vaccinated. Nonpregnant women who are vaccinated should avoid becoming pregnant for 1 month after each injection. For persons without evidence of immunity, having a pregnant household member is not a contraindication to vaccination. If a pregnant woman is vaccinated or becomes pregnant within 1 month of vaccination, she should be counseled about potential effects on the fetus. Wild-type varicella poses a low risk to the fetus (see Prenatal and Perinatal Exposure). Because the virulence of the attenuated virus used in the vaccine is less than that of the wild-type virus, the risk to the fetus, if any, should be even lower. In 1995, Merck and Co., Inc., in collaboration with CDC, established the VARIVAX Pregnancy Registry to monitor the maternal-fetal outcomes of pregnant women who were inadvertently administered varicella vaccine 3 months before or at any time during pregnancy (to report, call 1-800-986-8999) (186). During the first 10 years of the pregnancy registry, no cases of congenital varicella syndrome or birth defects compatible with congenital varicella syndrome were documented (187,188). Among 131 live-born infants of prospectively reported seronegative women (82 of whom were born to mothers vaccinated during the highest-risk period [i.e., the first or second trimester of pregnancy]), no birth defects consistent with congenital varicella syndrome were documented (prevalence rate = 0; CI = 0-6.7%), and three major birth defects were reported (prevalence rate = 2.3%; CI = 0.5%-6.7%). The rate of occurrence of major birth defects from prospective reports in the registry was similar to the rate reported in the general U.S. population (3.2%), and the defects indicated no specific pattern or target organ. Although the study results do not exclude the possibility of risk for women who received inadvertent varicella vaccination before or during pregnancy, the potential risk, if any, is low.

# Precautions

# Illness

Vaccination of persons who have acute severe illness, including untreated, active tuberculosis, should be postponed until recovery. The decision to delay vaccination depends on the severity of symptoms and on the etiology of the disease. No data are available regarding whether either single-antigen varicella or combination MMRV vaccines exacerbate tuberculosis. Live attenuated measles, mumps, and rubella virus vaccines administered individually might result in a temporary depression of tuberculin skin sensitivity. Therefore, if a tuberculin test is to be performed, it should be administered either any time before, simultaneously with, or at least 4-6 weeks after combination MMRV vaccine. However, tuberculin skin testing is not a prerequisite for vaccination with single-antigen varicella or combination MMRV vaccines. Varicella vaccines may be administered to children without evidence of immunity who have mild illnesses, with or without low-grade fever (e.g., diarrhea or upper-respiratory infection) (189).
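The tuberculin skin-test timing rule above lends itself to a simple date check. The following is a minimal sketch that assumes the conservative 6-week end of the stated 4-6-week window; the function name is illustrative.

```python
from datetime import date, timedelta

def tst_timing_ok(tst_date: date, mmrv_date: date) -> bool:
    """A tuberculin skin test may be placed any time before, on the same
    day as, or at least 4-6 weeks after MMRV vaccination; 6 weeks is
    used here as the conservative end of that range."""
    return tst_date <= mmrv_date or tst_date >= mmrv_date + timedelta(weeks=6)

# Example: MMRV given June 1; a TST on June 15 falls in the window of
# possible temporary tuberculin suppression, so it would be deferred.
print(tst_timing_ok(date(2007, 6, 15), date(2007, 6, 1)))  # False
print(tst_timing_ok(date(2007, 7, 20), date(2007, 6, 1)))  # True
```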
Physicians should be alert to the vaccine-associated temperature elevations that might occur, predominantly in the second week after vaccination, especially with combination MMRV vaccine. Studies suggest that failure to vaccinate children with minor illnesses can impede vaccination efforts (190).

# Thrombocytopenia

# Recent Administration of Blood, Plasma, or Immune Globulin

Although passively acquired antibody is known to interfere with the response to measles and rubella vaccines (191), the effect of the administration of immune globulin (IG) on the response to varicella virus vaccine is unknown. The duration of interference with the response to measles vaccination is dose-related and ranges from 3 to 11 months. Because of the potential inhibition of the response to varicella vaccination by passively transferred antibodies, varicella vaccines should not be administered for the same intervals as measles vaccine (3-11 months, depending on the dosage) after administration of blood (except washed red blood cells), plasma, or IG. Suggested intervals between administration of antibody-containing products for different indications and varicella vaccine have been published previously (157). In addition, persons who received a varicella vaccine should not be administered an antibody-containing product for 2 weeks after vaccination unless the benefits exceed those of vaccination. In such cases, the vaccine recipient should either be revaccinated or tested for immunity at the appropriate intervals, depending on the dose received, and then revaccinated if seronegative.

# Use of Salicylates

No adverse events associated with the use of salicylates after varicella vaccination have been reported; however, the vaccine manufacturer recommends that vaccine recipients avoid using salicylates for 6 weeks after receiving varicella vaccines because of the association between aspirin use and Reye syndrome after varicella. Vaccination with subsequent close monitoring should be considered for children who have rheumatoid arthritis or other conditions requiring therapeutic aspirin. The risk for serious complications associated with aspirin is likely to be greater in children in whom natural varicella develops than in children who receive the vaccine containing attenuated VZV. No association has been documented between Reye syndrome and analgesics or antipyretics that do not contain salicylic acid.

# Postexposure Prophylaxis

# Healthy Persons

Prelicensure data from the United States and Japan on varicella exposures in children from household, hospital, and community settings indicate that single-antigen varicella vaccine is effective in preventing illness or modifying varicella severity if administered to unvaccinated children within 3 days, and possibly up to 5 days, of exposure to rash (78,101,192). Vaccination within 3 days of exposure to rash was >90% effective in preventing varicella, whereas vaccination within 5 days of exposure to rash was approximately 70% effective in preventing varicella and 100% effective in modifying severe disease (101,192). Limited postlicensure studies also have demonstrated that varicella vaccine is highly effective in either preventing or modifying disease if administered within 3 days of exposure (193,194). Varicella vaccine is recommended for postexposure administration for unvaccinated persons without other evidence of immunity.
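The effectiveness windows quoted above can be expressed as a small classifier over days since exposure. This is an illustrative sketch only (the function name is hypothetical); note that the recommendation itself is to vaccinate regardless of which window applies.

```python
from datetime import date

def postexposure_vaccine_window(exposure: date, today: date) -> str:
    """Classify postexposure vaccination timing using the effectiveness
    figures quoted in the text above."""
    days = (today - exposure).days
    if days <= 3:
        return ">90% effective in preventing varicella"
    if days <= 5:
        return ("~70% effective in preventing varicella; "
                "~100% effective in modifying severe disease")
    return "still indicated: protects against subsequent exposures"

print(postexposure_vaccine_window(date(2007, 6, 1), date(2007, 6, 4)))
```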
If exposure to VZV does not cause infection, postexposure vaccination should induce protection against subsequent exposures. If the exposure results in infection, no evidence indicates that administration of single-antigen varicella vaccine during the presymptomatic or prodromal stage of illness increases the risk for vaccine-associated adverse events. No data are available regarding the potential benefit of administering a second dose to 1-dose vaccine recipients after exposure. However, administration of a second dose should be considered for persons who have previously received 1 dose to bring them up-to-date. Studies on postexposure use of varicella vaccine have been conducted exclusively in children. A higher proportion of adults do not respond to the first dose of varicella vaccine. Nevertheless, postexposure vaccination should be offered to adults without evidence of immunity. Although postexposure use of varicella vaccine has potential applications in hospital settings, vaccination is recommended routinely for all HCP without evidence of immunity and is the preferred method for preventing varicella in health-care settings (195). Preferably, HCP should be vaccinated when they begin employment. No data are available on the use of combination MMRV vaccine for postexposure prophylaxis.

# Persons Without Evidence of Immunity

Studies have demonstrated that VZIG administered soon after exposure can prevent varicella or modify its severity (197,198). In a study of immunocompromised children who were administered VZIG within 96 hours of exposure, approximately one in five exposures resulted in clinical varicella, and one in 20 resulted in subclinical disease (198). The severity of clinical varicella (evaluated by the percentage of patients with ≥100 lesions or with complications) was lower than expected on the basis of historic controls.

The VZIG product currently used in the United States, VariZIG™ (Cangene Corporation, Winnipeg, Canada), is available under an Investigational New Drug Application Expanded Access protocol (available at http://www.fda.gov/cber/infosheets/mphvzig020806.htm). A request for licensure in the United States might be submitted to FDA in the future. VariZIG is a lyophilized preparation which, when properly reconstituted, is approximately a 5% solution of IgG that can be administered intramuscularly (199). VariZIG can be obtained 24 hours a day from the sole authorized U.S. distributor (FFF Enterprises, Temecula, California) at 1-800-843-7477 or online at http://www.fffenterprises.com.

# Administration of VZIG

VZIG provides maximum benefit when administered as soon as possible after exposure, but it might be effective if administered as late as 96 hours after exposure. The effectiveness of VZIG when administered >96 hours after initial exposure has not been evaluated. The duration of protection after administration of VZIG is unknown, but protection should last at least one half-life of the IG (i.e., approximately 3 weeks). Susceptible persons at high risk for whom varicella vaccination is contraindicated and who are again exposed >3 weeks after receiving a dose of VZIG should receive another full dose of VZIG. Patients receiving monthly high-dose immune globulin intravenous (IGIV) (≥400 mg/kg) are likely to be protected and probably do not require VZIG if the last dose of IGIV was administered <3 weeks before exposure (200). VZIG has not been proven to be useful in treating clinical varicella or HZ or in preventing disseminated zoster and is not recommended for such use.
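The timing rules in the Administration of VZIG section reduce to two date comparisons. A minimal sketch follows, with assumed function names; 96 hours is treated as 4 days because only calendar dates are tracked here.

```python
from datetime import date, timedelta

PROTECTION = timedelta(weeks=3)  # ~one IG half-life, per the text above

def vzig_window_open(exposure: date, today: date) -> bool:
    """VZIG gives maximum benefit as soon as possible after exposure and
    might be effective up to 96 hours; later use has not been evaluated."""
    return today - exposure <= timedelta(days=4)

def needs_repeat_vzig(last_vzig: date, new_exposure: date) -> bool:
    """A high-risk susceptible person re-exposed >3 weeks after a VZIG
    dose should receive another full dose."""
    return new_exposure - last_vzig > PROTECTION

print(vzig_window_open(date(2007, 6, 1), date(2007, 6, 4)))   # True
print(needs_repeat_vzig(date(2007, 6, 1), date(2007, 7, 1)))  # True
```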
VZIG might extend the incubation period of the virus from 10-21 days to >28 days. This should be taken into account when VZIG is administered after exposures.

# Dosage

VariZIG is supplied in 125-U vials. The recommended dose is 125 units/10 kg of body weight, up to a maximum of 625 units (five vials). The minimum dose is 125 U. The human IgG content is 60-200 mg per 125-unit dose of VariZIG.

# Indications for the Use of VZIG for Postexposure Prophylaxis

The decision to administer VZIG depends on three factors: 1) whether the patient lacks evidence of immunity, 2) whether the exposure is likely to result in infection, and 3) whether the patient is at greater risk for complications than the general population.

Both healthy and immunocompromised children and adults who have verified positive histories of varicella (except for bone-marrow transplant recipients) may be considered immune (see Evidence of Immunity). The association between positive histories of varicella in bone-marrow donors and susceptibility to varicella in recipients after transplants has not been studied adequately. Thus, persons who receive bone-marrow transplants should be considered nonimmune, regardless of previous history of varicella disease or varicella vaccination in themselves or in their donors. Bone-marrow recipients in whom varicella or HZ develops after transplantation should subsequently be considered immune. VZIG is not indicated for persons who received 2 doses of varicella vaccine and became immunocompromised as a result of disease or treatment later in life. These persons should be monitored closely; if disease occurs, treatment with acyclovir should be instituted at the earliest signs or symptoms. For patients without evidence of immunity who are receiving steroid therapy at doses ≥2 mg/kg of body weight or a total of ≥20 mg/day of prednisone or equivalent, VariZIG is indicated.

# Types of Exposure

Certain types of exposure can place persons without evidence of immunity at risk for varicella. Direct contact exposure is defined as face-to-face contact with an infectious person while indoors. The duration of face-to-face contact that warrants administration of VZIG is not certain. However, the contact should not be transient. Certain experts suggest a contact of >5 minutes as constituting significant exposure for this purpose, whereas others define close contact as >1 hour (200). Substantial exposure for hospital contacts consists of sharing the same hospital room with an infectious patient or direct, face-to-face contact with an infectious person (e.g., HCP). Brief contacts with an infectious person (e.g., contact with x-ray technicians or housekeeping personnel) are less likely than more prolonged contacts to result in VZV transmission. Persons with continuous exposure to household members who have varicella or disseminated HZ are at greatest risk for infection. Varicella occurs in approximately 85% (range: 65%-100%) of susceptible household contacts exposed to VZV. Localized HZ is much less infectious than varicella or disseminated HZ (52). Transmission from localized HZ is more likely after close contact, such as in household settings. Physicians may consider recommending postexposure prophylaxis with VZIG in such circumstances. After household exposure to varicella, attack rates among immunocompromised children who were administered VZIG were up to 60% (197).
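The dose arithmetic in the Dosage section above (125 U per 10 kg of body weight, minimum 125 U, maximum 625 U, supplied in 125-U vials) can be checked with a few lines of code. Rounding partial 10-kg increments up to the next whole vial is an assumption made here for illustration; the source does not specify rounding.

```python
import math

def varizig_dose(weight_kg: float) -> dict:
    """VariZIG dose per the Dosage section: 125 U per 10 kg, clamped to
    the 125-U minimum and 625-U (five-vial) maximum."""
    units = 125 * math.ceil(weight_kg / 10)  # round partial increments up
    units = max(125, min(625, units))
    return {"units": units, "vials_125U": units // 125}

for w in (8, 34, 70):
    print(w, "kg ->", varizig_dose(w))
```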
No comparative data are available for immunocompromised children without evidence of immunity who were not administered VZIG. However, the incidence of severe disease (defined as ≥100 skin lesions) was less than that predicted from the natural history of disease in normal children (27% and 87%, respectively), and the incidence of pneumonia was less than that described in children with neoplasms (6% and 25%, respectively) (201). The risk for varicella after close contact (e.g., contact with playmates) or hospital exposure is estimated to be approximately 20% of the risk occurring from household exposure.

The attack rate in healthy neonates who were exposed in utero within 7 days of delivery and who received VZIG after birth was 62%, which does not differ substantially from rates reported for neonates who were similarly exposed but not treated with VZIG (49). However, the occurrence of complications and fatal outcomes was substantially lower for neonates who were treated with VZIG than for those who were not. In a study of pregnant women without immunity to VZV who were exposed to varicella and administered VZIG, the infection rate was 30%. This is substantially lower than the expected rate of >70% in unimmunized women exposed to varicella (199,202).

# Recommendations for the Use of VZIG

The following patient groups are at risk for severe disease and complications from varicella and should receive VZIG:

Immunocompromised patients. VZIG is used primarily for passive immunization of immunocompromised persons without evidence of immunity after direct exposure to varicella or disseminated HZ patients, including persons who 1) have primary and acquired immune-deficiency disorders, 2) have neoplastic diseases, and 3) are receiving immunosuppressive treatment. Patients receiving monthly high-dose IGIV (≥400 mg/kg) are likely to be protected and probably do not require VZIG if the last dose of IGIV was administered <3 weeks before exposure (200).

Neonates whose mothers have signs and symptoms of varicella around the time of delivery. VZIG is indicated for neonates whose mothers have signs and symptoms of varicella from 5 days before to 2 days after delivery. VZIG is not necessary for neonates whose mothers have signs and symptoms of varicella more than 5 days before delivery, because those infants should be protected from severe varicella by transplacentally acquired maternal antibody. No evidence suggests that infants born to mothers in whom varicella occurs >48 hours after delivery are at increased risk for serious complications (e.g., pneumonia or death).

Premature neonates exposed postnatally. Transmission of varicella in the hospital nursery is rare because the majority of neonates are protected by maternal antibody. Premature infants who have substantial postnatal exposure should be evaluated on an individual basis. The risk for complications of postnatally acquired varicella in premature infants is unknown. However, because the immune system of premature infants is not fully developed, administration of VZIG is indicated for premature infants born at ≥28 weeks of gestation who are exposed during the neonatal period and whose mothers do not have evidence of immunity. Premature infants born at <28 weeks of gestation or who weigh <1,000 g at birth and were exposed during the neonatal period should receive VZIG regardless of maternal immunity because such infants might not have acquired maternal antibody.
The majority of premature infants born at ≥28 weeks of gestation to immune mothers have enough acquired maternal antibody to protect them from severe disease and complications.

Although infants are at higher risk than older children for serious and fatal complications, the risk for healthy, full-term infants who have varicella after postnatal exposure is substantially less than that for infants whose mothers were infected 5 days before to 2 days after delivery. VZIG is not recommended for healthy, full-term infants who are exposed postnatally, even if their mothers have no history of varicella infection.

Pregnant women. Because pregnant women might be at higher risk for severe varicella and complications (37,42,203), VZIG should be strongly considered for pregnant women without evidence of immunity who have been exposed. Administration of VZIG to these women has not been found to prevent viremia, fetal infection, congenital varicella syndrome, or neonatal varicella. Thus, the primary indication for VZIG in pregnant women is to prevent complications of varicella in the mother rather than to protect the fetus. Neonates born to mothers who have signs and symptoms of varicella from 5 days before to 2 days after delivery should receive VZIG, regardless of whether the mother received VZIG.

# Interval Between Administration of VZIG and Varicella Vaccine

Any patient who receives VZIG to prevent varicella should subsequently receive varicella vaccine, provided the vaccine is not contraindicated. Varicella vaccination should be delayed until 5 months after VZIG administration. Varicella vaccine is not needed if the patient has varicella after administration of VZIG.

# Antiviral Therapy

Because VZIG might prolong the incubation period by >1 week, any patient who receives VZIG should be observed closely for signs or symptoms of varicella for 28 days after exposure. Antiviral therapy should be instituted immediately if signs or symptoms of varicella disease occur. The route and duration of antiviral therapy should be determined by specific host factors, extent of infection, and initial response to therapy. Information regarding how to obtain VariZIG is available at http://www.cdc.gov/mmwr/preview/mmwrhtml/mm5508a5.htm (204).

# Acknowledgments

# Appendix

# Summary of Recommendations for Varicella Vaccination

# Routine Childhood Schedule

• Routine childhood vaccination should be 2 doses.
• Preschool-aged children should receive the first dose of varicella vaccine at age 12-15 months.
• School-aged children should receive the second dose at age 4-6 years (it may be administered earlier provided ≥3 months have elapsed after the first dose).

# Persons Aged ≥13 Years

• Persons aged ≥13 years should receive 2 doses of vaccine (4-8 weeks apart).
• All adolescents and adults without evidence of immunity should be vaccinated.
• Because of their increased risk for transmission to persons at high risk for severe disease or their increased risk of exposure, vaccination is especially important for persons without evidence of immunity in the following groups:
- persons who have close contact with persons at high risk for serious complications (e.g., health-care personnel and household contacts of immunocompromised persons);
- persons who live or work in environments in which transmission of varicella zoster virus is likely (e.g., teachers, child-care workers, and residents and staff in institutional settings);
- persons who live or work in environments in which transmission has been reported (e.g., college students, inmates and staff members of correctional institutions, and military personnel);
- nonpregnant women of childbearing age;
- adolescents and adults living in households with children; and
- international travelers.

# Prenatal Assessment and Postpartum Vaccination

Prenatal assessment of women for evidence of varicella immunity is recommended. Upon completion or termination of pregnancy, women who do not have evidence of varicella immunity should be vaccinated.

# Vaccination of HIV-Infected Persons

Vaccination should be considered for HIV-infected children with age-specific CD4+ T-lymphocyte percentage ≥15% and may be considered for adolescents and adults with CD4+ T-lymphocyte count ≥200 cells/µL.

# Outbreak Control

• A 2-dose vaccination policy is recommended.

# Postexposure Prophylaxis

• Vaccination is recommended within 3-5 days of exposure.

# Requirements for Entry to Child Care, School, College, and Other Postsecondary Educational Institutions

All states should require that students at all grade levels (including college) and those in child care centers receive varicella vaccine unless they have other evidence of immunity to varicella.

# Evidence of Immunity to Varicella

Evidence of immunity to varicella includes any of the following:
• documentation of age-appropriate vaccination with a varicella vaccine:
- preschool-aged children (i.e., aged ≥12 months): 1 dose
- school-aged children, adolescents, and adults: 2 doses*
• laboratory evidence of immunity† or laboratory confirmation of disease;
• birth in the United States before 1980§;
• diagnosis or verification of a history of varicella disease by a health-care provider¶; or
• diagnosis or verification of a history of herpes zoster by a health-care provider.

* For children who received their first dose at age <13 years and for whom the interval between the 2 doses was ≥28 days, the second dose is considered valid.
† Commercial assays can be used to assess disease-induced immunity, but they lack sensitivity to always detect vaccine-induced immunity (i.e., they might yield false-negative results).
§ For health-care personnel, pregnant women, and immunocompromised persons, birth before 1980 should not be considered evidence of immunity.
¶ Verification of history or diagnosis of typical disease can be provided by any health-care provider (e.g., school or occupational clinic nurse, nurse practitioner, physician assistant, or physician). For persons reporting a history of, or reporting with, atypical or mild cases, assessment by a physician or their designee is recommended, and one of the following should be sought: 1) an epidemiologic link to a typical varicella case or to a laboratory-confirmed case or 2) evidence of laboratory confirmation, if it was performed at the time of acute disease.
When such documentation is lacking, persons should not be considered as having a valid history of disease because other diseases might mimic mild atypical varicella.

# Advisory Committee on Immunization Practices Varicella Working Group

# Goal and Objectives

This report revises, updates, and replaces the 1996 and 1999 ACIP statements of CDC's Advisory Committee on Immunization Practices for prevention of varicella in the United States. The goal of this report is to improve the health status of the U.S. population by providing recommendations on the use of varicella vaccines for prevention of varicella disease. Upon completion of this educational activity, the reader should be able to 1) describe the epidemiology of varicella in the United States, 2) identify recommendations for varicella vaccination in the United States, and 3) describe the characteristics of the currently licensed varicella vaccines.
This report provides updated recommendations for prevention and control of hantavirus infections associated with rodents in the United States. It supersedes the previous report (CDC. Hantavirus infection-southwestern United States: interim recommendations for risk reduction. MMWR 1993;42:1-13). These recommendations are based on principles of rodent and infection control and on accumulating evidence that most infections result from exposure, in closed spaces, to active infestations of infected rodents. The recommendations contain updated specific measures and precautions for limiting household, recreational, and occupational exposure to rodents, eliminating rodent infestations, rodent-proofing human dwellings, cleaning up rodent-contaminated areas and dead rodents, and working in homes of persons with confirmed hantavirus infection or buildings with heavy rodent infestations.

# Introduction

# Background

In 1993, a previously unknown disease, hantavirus pulmonary syndrome (HPS), was identified among residents of the southwestern United States (1-3). HPS was subsequently recognized throughout the contiguous United States and the Americas. As of June 6, 2002, a total of 318 cases of HPS have been identified in 31 states, with a case fatality of 37%. The association of hantaviruses with rodent reservoirs warrants recommendations to minimize exposure to wild rodents. These recommendations are based on current understanding of the epidemiologic features of hantavirus infections in the United States.

# Rodent Reservoirs of Viruses Causing HPS

All hantaviruses known to cause HPS are carried by the New World rats and mice, family Muridae, subfamily Sigmodontinae. The subfamily Sigmodontinae contains at least 430 species of mice and rats, which are widespread in North and South America. These wild rodents are not generally associated with urban environments as are house mice and the black and Norway rats (all of which are in the murid subfamily Murinae). However, some species (e.g., the deer mouse and the white-footed mouse) will enter human habitation in rural and suburban areas. A third group of rodents, the voles and lemmings (family Muridae, subfamily Arvicolinae), is associated with a group of hantaviruses distinct from those that cause HPS. None of the numerous arvicoline viruses has been associated with human disease in the United States (4).

Several hantaviruses that are pathogenic for humans have been identified in the United States. In general, each virus has a single primary rodent host. Other small mammals can be infected as well but are much less likely to transmit the virus to other animals or humans (5-7). The deer mouse (Peromyscus maniculatus) (Figure 1) is the host for Sin Nombre virus (SNV), the primary causative agent of HPS in the United States. The deer mouse is common and widespread in rural areas throughout much of the United States (Figure 2). Although prevalence varies temporally and geographically, on average approximately 10% of deer mice tested throughout the range of the species show evidence of infection with SNV (5). Other hantaviruses associated with sigmodontine rodents and known to cause HPS include New York virus (8), hosted by the white-footed mouse, Peromyscus leucopus (Figures 3,4); Black Creek Canal virus (9), hosted by the cotton rat, Sigmodon hispidus (Figures 5,6); and Bayou virus (10), hosted by the rice rat, Oryzomys palustris (Figures 7,8). Nearly all of the continental United States falls within the range of one or more of these host species.
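The virus-host pairings named above can be kept in a simple lookup table. The sketch below is purely illustrative; the virus and species names are taken from the text, but the data structure itself is ours.

```python
# HPS-causing hantaviruses and their primary rodent hosts, per the text.
HPS_VIRUS_HOSTS = {
    "Sin Nombre virus": ("deer mouse", "Peromyscus maniculatus"),
    "New York virus": ("white-footed mouse", "Peromyscus leucopus"),
    "Black Creek Canal virus": ("cotton rat", "Sigmodon hispidus"),
    "Bayou virus": ("rice rat", "Oryzomys palustris"),
}

for virus, (common_name, species) in HPS_VIRUS_HOSTS.items():
    print(f"{virus}: {common_name} ({species})")
```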
Several other sigmodontine rodent species in the United States are associated with additional hantaviruses that have yet to be implicated in human disease. These species include the brush mouse, Peromyscus boylii (11), and the Western harvest mouse, Reithrodontomys megalotis (12). Only the deer mouse and the white-footed mouse are commonly associated with peridomestic environments. Identifying characteristics and natural history of all these host species are available from other sources (13,14). Numerous species of sigmodontine rodents also are associated with HPS in South America (4). Several new sigmodontine hantavirus hosts have been discovered each year, and more probably await discovery. Until the extent of hantavirus infection throughout the subfamily Sigmodontinae becomes known, as does the pathogenicity of hantaviruses hosted by sigmodontine species, treating all sigmodontines as potential hosts of HPS-causing hantaviruses, and each sigmodontine rodent as though it were infected and infectious, is recommended. For the general public, this recommendation applies to all wild mice and rats encountered in rural areas throughout the United States.

# Other Diseases Associated with Hantavirus Infection

Because the sigmodontine rodents are restricted to the Americas, HPS is restricted to the Americas. Another group of hantaviruses, associated with murine and arvicoline rodents, causes a group of diseases of varying severity referred to as hemorrhagic fever with renal syndrome (HFRS) in Europe and Asia. Hantaan and Dobrava viruses, hosted by murine field mice (Apodemus agrarius and Apodemus flavicollis, respectively), cause thousands of cases of severe HFRS each year in Asia and Eastern Europe. Fatality associated with these infections can be as high as 10% (15). The cosmopolitan Norway rat (Rattus norvegicus) is host for Seoul virus, which causes a mild form of HFRS in Asia. Although evidence of infection with Seoul virus has been found in Norway rats throughout much of the world, including the United States, human disease caused by Seoul virus is largely restricted to Asia. Only three suspected cases have been reported in the United States (16). Overall mortality associated with Seoul virus infection is probably <1% (15). Puumala virus, carried by an arvicoline rodent, the bank vole (Clethrionomys glareolus), causes a mild form of HFRS referred to as nephropathia epidemica (NE). NE, which is very common in northern Europe, has a case fatality of <1%. Several other species of arvicoline rodents host hantaviruses in the northern hemisphere, including the United States; none of these have been associated with any human disease.

# FIGURE 6. Range of the cotton rat (Sigmodon hispidus) in the Americas
Sources: Hall ER, Kelson KR. The mammals of North America. Vol II. New York, NY: Ronald Press, 1959. Hershkovitz P. South American marsh rats, genus Holochilus, with a summary of sigmodont rodents. Fieldiana: Zoology 1955;37:639-73.

# FIGURE 7. Rice rat (Oryzomys palustris), reservoir of Bayou virus
Photo/R. K. LaVal, Mammal Image Library of the American Society of Mammalogists

# Infection in the Host

Hantaviruses do not cause overt illness in their reservoir hosts (17). Although infected rodents shed virus in saliva, urine, and feces for many weeks, months, or for life, the quantity of virus shed can be much greater approximately 3-8 weeks after infection (18).
The demonstrated presence of infectious virus in saliva of infected rodents and the marked sensitivity of these animals to hantaviruses following intramuscular inoculation suggest that biting might be an important mode of transmission from rodent to rodent (18,19). Field data suggest that transmission in host populations occurs horizontally, more frequently among male rodents, and might be associated with fighting, particularly, but not exclusively, among males (7,20). Occasional evidence of infection (antibody) is found in numerous other species of rodents and their predators (e.g., dogs, cats, and coyotes), indicating that many (perhaps any) mammal species coming into contact with an infected host might become infected (21). No evidence supports the transmission of infection to other animals or to humans from these "dead-end" hosts. However, domestic animals (e.g., cats and dogs) might bring infected rodents into contact with humans. Arthropod vectors are not known to have a role in the transmission of hantaviruses (17,22).

The reservoir hosts of the hantaviruses in the western United States also act as hosts for the bacterium Yersinia pestis, the etiologic agent of plague. Although no evidence exists that fleas and other ectoparasites play a role in hantavirus epidemiology, rodent fleas transmit plague. Species of Peromyscus are susceptible to Y. pestis infection and can act as hosts for infected fleas. Control of rodents without concurrent control of fleas might therefore increase the risk of human plague as the rodent fleas seek an alternative food source.

# Transmission to Humans

The Old World hantaviruses causing HFRS and the New World agents of HPS are believed to be transmitted by the same mechanisms. Human infection occurs most commonly through the inhalation of infectious, aerosolized saliva or excreta. Persons visiting laboratories where infected rodents were housed have been infected after only a few minutes of exposure to animal holding areas (22). Transmission can occur when dried materials contaminated by rodent excreta are disturbed and inhaled, directly introduced into broken skin or conjunctivae, or, possibly, ingested in contaminated food or water. Persons have also acquired HFRS and HPS after being bitten by rodents (23,24). High risk of exposure has been associated with entering or cleaning rodent-infested structures (25). Person-to-person transmission has not been associated with any of the Old World hantaviruses (26) or with HPS cases in the United States (27). However, person-to-person transmission, including nosocomial transmission, of Andes virus was well documented for a single outbreak in southern Argentina (28,29) and suspected to have occurred much less extensively in another outbreak in Chile associated with the same virus (30).

# Epidemiology

Hantavirus infections are associated with domestic, occupational, or recreational activities that bring humans into contact with infected rodents, usually in rural settings. Known hantavirus infections of humans occur primarily in adults. HPS cases in the United States occur throughout the year, but greater numbers are reported in spring and summer.
Hantavirus infection (resulting in HPS or HFRS) has been epidemiologically associated with the following situations (25,31-36):

- increasing numbers of host rodents in human dwellings;
- occupying or cleaning previously vacant cabins or other dwellings that are actively infested with rodents;
- cleaning barns and other outbuildings;
- disturbing excreta or rodent nests around the home or workplace;
- residing in or visiting areas where substantial increases have occurred in numbers of host rodents or numbers of hantavirus-infected host rodents;
- handling mice without gloves;
- keeping captive wild rodents as pets or research subjects;
- handling equipment or machinery that has been in storage;
- disturbing excreta in rodent-infested areas while hiking or camping;
- sleeping on the ground; and
- hand plowing or planting.

However, in North America, the absolute risk of hantavirus infection to the general public is low; only 20-50 cases of HPS have been confirmed annually in the United States since the disease was described in 1993 (Figure 2).

# Physical Properties of Hantaviruses

Hantaviruses have lipid envelopes that are susceptible to most disinfectants (e.g., dilute chlorine solutions, detergents, or most general-purpose household disinfectants) (37). Depending on environmental conditions, these viruses probably survive <1 week in indoor environments and much shorter periods (perhaps hours) when exposed to sunlight outdoors (38).

# Prevention

Eradicating the reservoir hosts of hantaviruses is neither feasible nor desirable because of the wide distribution of sigmodontine rodents in North America and their importance in the function of natural ecosystems. The best currently available approach for disease control and prevention is risk reduction through environmental modification and hygiene practices that deter rodents from colonizing the home and work environment, as well as safe cleanup of rodent waste and nesting materials. Controlled experiments have demonstrated that simple and inexpensive methods are effective in preventing rodents from entering rural dwellings (39).

These recommendations emphasize the prevention of HPS associated with sigmodontine rodents in the Americas. Although the risk of acquiring hantavirus disease from contact with native arvicoline rodents in North America or introduced murine rodents throughout the Americas is low, the true pathogenicity for humans of all hantaviruses carried by these groups of rodents has not been established. Therefore, we recommend that persons avoid contact with all wild and peridomestic rats and mice. The precautions described in this report are broadly applicable to all groups of rats and mice.

# Precautions To Limit Household Exposure to Rodents

Rodent control in and around the home remains the primary strategy in preventing hantavirus infection. Rodent infestation can be determined by direct observation of animals or inferred from observation of their nests or feces on floors or in protected areas (e.g., closets, kitchen cabinets, drawers, wall voids, furnace and hot water heating cabinets, and behind ventilation screens), or from evidence that rodents have been gnawing on food or other objects. The interior and exterior of the home should be carefully inspected at least twice per year for any openings where rodents could enter the home and for conditions that could support rodent activity. If any evidence of rodent infestation is detected inside the home or in outbuildings, precautions should be taken.
The guidelines in the section Special Precautions for Homes of Persons with Confirmed Hantavirus Infection or Buildings with Heavy Rodent Infestations should be followed if a structure is associated with a confirmed case of hantavirus disease or if evidence of heavy rodent infestation is present (e.g., piles of feces or numerous nests or dead rodents). Recommendations are listed below for 1) reducing rodent shelter and food sources inside and outside the home and 2) preventing rodents from entering the home by rodent-proofing (40-42).

# Precautions for Inside the Home

- Set baited, spring-loaded traps with the end of the trap containing the bait closest to the baseboard or wall. Place traps in areas where rodents might be entering the home. Spring-loaded traps can be painful or even dangerous if they close on fingers; they should be handled with caution, and careful consideration should be given to keeping children and pets away from areas where traps are placed. In the western United States (west of the 100th meridian, a line from mid-Texas through mid-North Dakota), a risk of plague transmission to humans from fleas exists. Use insect repellent (containing N,N-diethyl-m-toluamide [DEET]) on clothing, shoes, and hands to reduce the risk of fleabites when picking up dead rodents and traps. In cases of heavy rodent infestation in indoor spaces in the western United States, use an insecticide before trapping. Contact your local or state health department to find out if plague is a danger in the area and for additional advice on appropriate flea-control methods.
- Continue trapping for at least 1 additional week after the last rodent is caught. As a precaution against reinfestation, use several baited, spring-loaded traps inside the house at all times in locations where rodents are most likely to be found.
- Examine traps regularly. To dispose of traps or trapped animals, wear rubber, latex, vinyl, or nitrile gloves. Spray the dead rodent with a disinfectant or chlorine solution.¶ After soaking the rodent thoroughly, either take it out of the trap by lifting the spring-loaded metal bar and letting the animal fall into a plastic bag or place the entire trap containing the dead rodent in a plastic bag and seal the bag. Then place the rodent into a second plastic bag and seal it. Dispose of the double-bagged rodent by 1) burying it in a 2- to 3-foot-deep hole, 2) burning it, or 3) placing it in a covered trash can that is regularly emptied. Contact the state or local health department concerning other appropriate disposal methods.
- If the trap will be reused, decontaminate it by immersing and washing it in a disinfectant or chlorine solution and rinsing afterward.
- For substantially severe or persistent infestations, contact a pest-control professional for rodent eradication or a building contractor for rodent exclusion (rodent-proofing). When resident mice are removed from rural buildings without measures to prevent reentry, they are replaced almost immediately by other mice from the outside. Therefore, indoor rodent-trapping could be unsuccessful in reducing rodent infestations without simultaneous efforts to rodent-proof permeable dwellings.

# Precautions for Outside the Home

- Place woodpiles and stacks of lumber, bricks, stones, or other materials >100 feet from the house.
- Store grains and animal feed in rodent-proof containers.
- Remove, from the vicinity of buildings, any food sources that might attract rodents.
- Keep pet food covered and stored in rodent-proof containers.
Allow outside pets only enough food for each meal, then store or discard any remaining food from feeding dishes. - Avoid using bird feeders near the home. If they must be placed near the home, use "squirrel-proof" feeders and clean up spilled seeds each evening. - Dispose of garbage and trash in rodent-proof containers with tight-fitting lids. - Haul away trash, abandoned vehicles, discarded tires, and other items that might serve as rodent nesting sites. - Mow grass closely, and cut or remove brush and dense shrubbery to a distance of at least 100 feet from the home. Trim the limbs off any trees or shrubs that overhang or touch the building. - Use raised cement foundations in new construction of sheds, barns, and outbuildings. - Place spring-loaded traps in outbuildings (regardless of their distance from the home) and in areas that might likely serve as rodent shelter, within 100 feet around the home; use these traps continuously, replacing the bait periodically. For instructions concerning the safe use and cleaning of spring-loaded traps and the disposal of trapped rodents, see Precautions for Inside the Home. # Preventing Rodents from Entering the Home by Rodent-Proofing - Look for and seal up all gaps and holes inside and outside the home that are >¼ -inch (>6 mm) in diameter. Inside the home, look for and seal up all gaps and holes underneath, behind, and inside kitchen cabinets; inside closets; around floor air vents and dryer vents; around the fireplace; around windows and doors; behind appliances (e.g., dishwashers, clothes washers, and stoves); around pipes under the kitchen and bathroom sinks; around all electrical, water, gas, and sewer lines (chases); and beneath or behind hot water heaters, radiators, and furnaces and around their pipes that enter the home. Outside the home, look for and seal up all gaps and holes around windows and doors; between the foundation of the home and the ground; under doors without weatherstripping; around electrical, water, gas, and sewer lines (chases); and around the roof, eaves, gables, and soffits. In addition, look for unscreened attic vents and crawlspace vents. In trailers, look for and seal up holes and gaps in the skirting, between the trim and metal siding, around utility lines and pipes and ducts, around roof vents, and around the trailer tongue. - Seal all entry holes >¼-inch (>6 mm) in diameter that are inside and outside the home with any of the following: cement, lath screen or lath metal, † † wire screening, hardware cloth (<¼-inch grate size), or other patching materials (42). Steel wool or STUF-FIT § § also can be used, but caulk must be placed around the steel wool or STUF-FIT to prevent rodents from pushing it through the hole. Caulk and expanding foam can be used to reinforce any repairs where lath metal, hardware cloth, steel wool, or STUF-FIT are the primary materials; however, caulk or expanding foam alone are usually not sufficient to prevent rodent intrusion. - If rodent burrows are found under foundations or trailer skirtings, construct a barrier around the entire foundation using 14-inch wide (35 cm), <¼-inch (<6 mm) mesh, 16-19 gauge hardware cloth. Bend the hardware cloth lengthwise into a right angle with two sides of approximately 7 inches (18 cm). Secure one side of the hardware cloth tightly to the building siding. The other side should be buried at least 2 inches (5 cm) below ground level and extend out away from the wall. ¶ ¶ - Consult a pest-control professional for severe or persistent infestations. 
# Precautions To Limit Occupational and Recreational Exposure to Rodents # Precautions for Workers Frequently Exposed to Rodents Persons who frequently handle or are exposed to wild rodents are probably at higher risk for hantavirus infection than the general public because of the frequency of their exposures. Such persons include, but are not limited to, mammalogists, pest-control workers, some farm and domestic workers, and building and fire inspectors. Therefore, enhanced precautions are warranted to protect them against hantavirus infection, as described below. - Workers in potentially high-risk settings should be informed by their employers about hantavirus transmission and symptoms of infection and be given detailed guidance on prevention measures. (43). The comprehensive user program should be supervised by a knowledgeable person (44). Given the predictable nature of HPS risk in certain professions or environmental † † Lath screen or metal is a light-gauge metal mesh and is commonly installed over wooden walls before plaster is applied. A galvanized product is preferable. Lath screen is malleable and can be folded and pushed into larger holes. These materials can be found in the masonry or building materials section at hardware or building supply stores. § § STUF-FIT is a soft copper-mesh material that might be preferable to steel wool because it does not rust and is not easily pulled apart by rodents. It can be obtained from pest control retail stores or from Allen Special Products (telephone 800-848-6805). ¶ ¶ Illustrated, complete instructions for rodent-proofing are available 1) in the National Park Service's manual, Mechanical Rodent Proofing Techniques; 2) on CDC's website, All About Hantaviruses (/ diseases/hanta/hps/index.htm); and 3) from CDC's Ramah Home Seal-up protocol, Special Pathogens Branch (e-mail [email protected]). situations, provisions should be made in advance for respiratory protection. Because of the expense associated with purchasing a PAPR system, a negative-pressure tight-seal respirator equipped with N-100 or P-100 filters is recommended when respiratory protection is required for home use. Respirators might cause stress to persons with respiratory or cardiac conditions; these persons should be medically cleared before using such a respirator. Home or other users with potentially impaired respiratory function also should be aware of the risks associated with the use of negative-pressure respirators (43). - Workers should wear rubber, latex, vinyl, or nitrile gloves when handling rodents or handling traps containing rodents. Before removing the gloves, wash gloved hands in a disinfectant or chlorine solution and then wash bare hands in soap and water. - Mammalogists, wildlife biologists, or public health personnel who handle wild rodents for research or management purposes should refer to published safety guidelines (45,46). Precautions are also available on CDC's website, All About Hantaviruses (/ dvrd/spb/mnpages/rodentmanual.htm). # Precautions for Other Occupational Groups Having Potential Contact with Rodents Insufficient information is available to provide general recommendations regarding risks and precautions for persons who work in occupations with unpredictable or incidental contact with rodents or their nesting sites. Examples of such occupations include telephone installers, maintenance workers, plumbers, electricians, and certain construction workers. 
Workers in these jobs might have to enter buildings, crawl spaces, or other sites that are potentially rodent-infested, and HPS has been reported among these workers. Recommendations for such circumstances must be made on a case-by-case basis after the specific working environment has been assessed and state or local health and labor officials or trade unions and management, as appropriate, have been consulted. Determining the level of risk present and implementing appropriate protective measures is the employer's responsibility.* # Precautions for Campers and Hikers No evidence exists to suggest that travel should be restricted in areas where HPS cases have occurred. The majority of typical tourist activities are associated with limited or no risk that travelers will be exposed to rodents or their excreta. However, persons engaged in outdoor activities (e.g., camping or hiking) should take precautions to reduce the likelihood of exposure to potentially infectious materials by following these recommendations. - Avoid touching live or dead rodents or disturbing rodent burrows, dens, or nests. - Do not use cabins or other enclosed shelters that are potentially rodent-infested until they have been appropriately cleaned and disinfected. (See Precautions for Cleanup of Rodent-Contaminated Areas and Dead Rodents.) Rodent-proofing might be necessary to prevent reinfestation. (See Precautions to Limit Household Exposure to Rodents.) - When an unoccupied cabin or other structure to be used has been closed for several weeks, ventilate the structure by opening doors and windows for at least 30 minutes before occupying. Use cross ventilation if possible. Leave the area (preferably remaining upwind) during the airingout period. The airing helps to remove infectious primary aerosols that might be created when hantavirus-infected rodents urinate. - Do not pitch tents or place sleeping bags in proximity to rodent feces or burrows or near possible rodent shelters (e.g., garbage dumps or woodpiles). - Avoid sleeping on the bare ground. Use a cot with the sleeping surface at least 12 inches above the ground or use a tent with a floor. - Keep food in rodent-proof containers. - Dispose of all trash and garbage promptly in accordance with campsite regulations by -burning or burying, -discarding in rodent-proof trash containers, or -"packing out" in rodent-proof containers. # Precautions for Cleanup of Rodent-Contaminated Areas and Dead Rodents Areas with evidence of rodent activity (e.g., dead rodents and rodent excreta) should be thoroughly cleaned to reduce the likelihood of exposure to hantavirus-infected materials. Cleanup procedures must be performed in a manner that limits the potential for dirt or dust from contaminated surfaces to become airborne. Recommendations are listed in this report for cleaning up 1) rodent urine and droppings, and surfaces potentially contaminated by rodents and 2) dead rodents and rodent nests. # Cleanup of Rodent Urine and Droppings and Contaminated Surfaces - During cleaning, wear rubber, latex, vinyl, or nitrile gloves. - Spray rodent urine and droppings with a disinfectant or chlorine solution until thoroughly soaked. (See Cleanup of Dead Rodents and Rodent Nests.) - To avoid generating potentially infectious aerosols, do not vacuum or sweep rodent urine, droppings, or contaminated surfaces until they have been disinfected. - Use a paper towel to pick up the urine and droppings. Place the paper towel in the garbage. 
- After the rodent droppings and urine have been removed, disinfect items that might have been contaminated by rodents or their urine and droppings. -Mop floors with a disinfectant or chlorine solution. - prepared by mixing 1½ cups of household bleach in 1 gallon of water (or a 1:10 solution) can be used in place of a commercial disinfectant. When using chlorine solution, avoid spilling the mixture on clothing or other items that might be damaged by bleach. Wear rubber, latex, vinyl, or nitrile gloves when preparing and using chlorine solutions. Chlorine solutions should be prepared fresh daily. # Cleaning Sheds and Other Outbuildings Before cleaning closed sheds and other outbuildings, ventilate the building by opening doors and windows for at least 30 minutes. Use cross ventilation if possible. Leave the area during the airing-out period. This airing helps to remove infectious primary aerosols that might be created when hantavirus-infected rodents urinate. In substantially dirty or dusty environments, additional protective clothing or equipment may be worn. Such equipment includes coveralls (disposable when possible) and safety glasses or goggles, in addition to rubber, latex, vinyl, or nitrile gloves. For recommendations regarding precautions for cleanup of outbuildings with heavy rodent infestations, see Special Precautions for Homes of Persons with Confirmed Hantavirus Infection or Building with Heavy Rodent Infestations. # Special Precautions for Homes of Persons with Confirmed Hantavirus Infection or Buildings with Heavy Rodent Infestations Special precautions are indicated for cleaning homes or buildings with heavy rodent infestations. A rodent infestation is considered heavy if piles of feces or numerous nests or dead rodents are observed. Persons cleaning these homes or buildings should contact the local or state public health agency or CDC for guidance. These precautions also can apply to vacant dwellings that have attracted rodents while unoccupied and to dwellings and other structures that have been occupied by persons with confirmed hantavirus infection. Workers who are either hired specifically to perform the cleanup or asked to do so as part of their work activities should receive a thorough orientation from the responsible health agency or employer about hantavirus transmission and disease symptoms and should be trained to perform the required activities safely. # Recommendations for Cleaning Homes or Buildings with Heavy Rodent Infestations # Applicability and Updates The control and prevention recommendations in this report represent general measures to minimize the likelihood of human exposure to hantavirus-infected rodents in the Americas. Although different geographic areas might have varying housing types and rodent populations, the precautions should be the same. The effect and utility of the recommendations will be continually reviewed by CDC and the involved state and local health agencies as additional epidemiologic, field, and laboratory data become available. These recommendations might be supplemented or modified in the future. These recommendations and additional information concerning hantaviruses are periodically updated and made available on CDC's website, All About Hantaviruses (http:// www.cdc.gov/ncidod/disease/hanta/hps/index.htm). 
Additional information can be obtained by contacting CDC, National Center for Infectious Diseases (NCID), Special Pathogens Branch, Mailstop A-26, 1600 Clifton Road, N.E., Atlanta, GA 30333; e-mail [email protected]; fax 404-639-1509; or by telephone 404-639-1510.
This report provides updated recommendations for prevention and control of hantavirus infections associated with rodents in the United States. It supersedes the previous report (CDC. Hantavirus infection--southwestern United States: interim recommendations for risk reduction. MMWR 1993;42[No. RR-11]:1-13). These recommendations are based on principles of rodent and infection control and on accumulating evidence that most infections result from exposure, in closed spaces, to active infestations of infected rodents. The recommendations contain updated specific measures and precautions for limiting household, recreational, and occupational exposure to rodents, eliminating rodent infestations, rodent-proofing human dwellings, cleaning up rodent-contaminated areas and dead rodents, and working in homes of persons with confirmed hantavirus infection or buildings with heavy rodent infestations.

# Introduction

# Background

In 1993, a previously unknown disease, hantavirus pulmonary syndrome (HPS), was identified among residents of the southwestern United States (1-3). HPS was subsequently recognized throughout the contiguous United States and the Americas. As of June 6, 2002, a total of 318 cases of HPS have been identified in 31 states, with a case fatality of 37%.* The association of hantaviruses with rodent reservoirs warrants recommendations to minimize exposure to wild rodents. These recommendations are based on current understanding of the epidemiologic features of hantavirus infections in the United States.

# Rodent Reservoirs of Viruses Causing HPS

All hantaviruses known to cause HPS are carried by the New World rats and mice, family Muridae, subfamily Sigmodontinae. The subfamily Sigmodontinae contains at least 430 species of mice and rats, which are widespread in North and South America. These wild rodents are not generally associated with urban environments as are house mice and the black and Norway rats (all of which are in the murid subfamily Murinae). However, some species (e.g., deer mouse and white-footed mouse) will enter human habitation in rural and suburban areas. A third group of rodents, the voles and lemmings (family Muridae, subfamily Arvicolinae), is associated with a group of hantaviruses distinct from those that cause HPS. None of the numerous arvicoline viruses has been associated with human disease in the United States (4). Several hantaviruses that are pathogenic for humans have been identified in the United States. In general, each virus has a single primary rodent host. Other small mammals can be infected as well, but are much less likely to transmit the virus to other animals or humans (5-7). The deer mouse (Peromyscus maniculatus) (Figure 1) is the host for Sin Nombre virus (SNV), the primary causative agent of HPS in the United States. The deer mouse is common and widespread in rural areas throughout much of the United States (Figure 2). Although prevalence varies temporally and geographically, on average approximately 10% of deer mice tested throughout the range of the species show evidence of infection with SNV (5). Other hantaviruses associated with sigmodontine rodents and known to cause HPS include New York virus (8), hosted by the white-footed mouse, Peromyscus leucopus (Figures 3,4); Black Creek Canal virus (9), hosted by the cotton rat, Sigmodon hispidus (Figures 5,6); and Bayou virus (10), hosted by the rice rat, Oryzomys palustris (Figures 7,8). Nearly all of the continental United States falls within the range of one or more of these host species.
Several other sigmodontine rodent species in the United States are associated with additional hantaviruses that have yet to be implicated in human disease. These species include the brush mouse, Peromyscus boylii (11); and the Western harvest mouse, Reithrodontomys megalotis (12). Only the deer mouse and the white-footed mouse are commonly associated with peridomestic environments. Identifying characteristics and natural history of all these host species are available from other sources (13,14). Numerous species of sigmodontine rodents also are associated with HPS in South America (4). Several new sigmodontine hantavirus hosts have been discovered each year, and more probably await discovery. Until the extent of hantavirus infection throughout the subfamily Sigmodontinae, and the pathogenicity for humans of the hantaviruses hosted by sigmodontine species, become known, treating all sigmodontines as potential hosts of HPS-causing hantaviruses, and each sigmodontine rodent as though it were infected and infectious, is recommended. For the general public, this recommendation applies to all wild mice and rats encountered in rural areas throughout the United States.

# Other Diseases Associated with Hantavirus Infection

Because the sigmodontine rodents are restricted to the Americas, HPS is restricted to the Americas. Another group of hantaviruses associated with murine and arvicoline rodents causes a group of diseases of varying severity referred to as hemorrhagic fever with renal syndrome (HFRS) in Europe and Asia. Hantaan and Dobrava viruses, hosted by the murine field mice (Apodemus agrarius and Apodemus flavicollis, respectively), cause thousands of cases of severe HFRS each year in Asia and Eastern Europe. Fatality associated with these infections can be as high as 10% (15). The cosmopolitan Norway rat (Rattus norvegicus) is host for Seoul virus, which causes a mild form of HFRS in Asia. Although evidence of infection with Seoul virus has been found in Norway rats throughout much of the world, including the United States, human disease caused by Seoul virus is largely restricted to Asia. Only three suspected cases have been reported in the United States (16). Overall mortality associated with Seoul virus infection is probably <1% (15). Puumala virus, carried by an arvicoline rodent, the bank vole (Clethrionomys glareolus), causes a mild form of HFRS, referred to as nephropathia epidemica (NE). NE, which is very common in northern Europe, has a case fatality of <1%. Several other species of arvicoline rodents host hantaviruses in the northern hemisphere, including the United States; none of these have been associated with any human disease.

# FIGURE 6. Range of the cotton rat (Sigmodon hispidus) in the Americas. Sources: Hall ER, Kelson KR. The mammals of North America. Vol II. New York, NY: Ronald Press, 1959; Hershkovitz P. South American marsh rats, genus Holochilus, with a summary of sigmodont rodents. Fieldiana: Zoology 1955;37:639-73.

# FIGURE 7. Rice rat (Oryzomys palustris), reservoir of Bayou virus. Photo/R. K. LaVal, Mammal Image Library of the American Society of Mammalogists.

# Infection in the Host

Hantaviruses do not cause overt illness in their reservoir hosts (17). Although infected rodents shed virus in saliva, urine, and feces for many weeks, months, or for life, the quantity of virus shed can be much greater approximately 3-8 weeks after infection (18).
The demonstrated presence of infectious virus in saliva of infected rodents and the marked sensitivity of these animals to hantaviruses following intramuscular inoculation suggest that biting might be an important mode of transmission from rodent to rodent (18,19). Field data suggest that transmission in host populations occurs horizontally, more frequently among male rodents, and might be associated with fighting, particularly, but not exclusively, among males (7,20). Occasional evidence of infection (antibody) is found in numerous other species of rodents and their predators (e.g., dogs, cats, and coyotes), indicating that many (perhaps any) mammal species coming into contact with an infected host might become infected (21). No evidence supports the transmission of infection to other animals or to humans from these "dead-end" hosts. However, domestic animals (e.g., cats and dogs) might bring infected rodents into contact with humans. Arthropod vectors are not known to have a role in the transmission of hantaviruses (17,22). The reservoir hosts of the hantaviruses in the western United States also act as hosts for the bacterium Yersinia pestis, the etiologic agent of plague. Although no evidence exists that fleas and other ectoparasites play a role in hantavirus epidemiology, rodent fleas transmit plague. Species of Peromyscus are susceptible to Y. pestis infection and can act as hosts for infected fleas. Control of rodents without concurrent control of fleas might therefore increase the risk of human plague as the rodent fleas seek an alternative food source.

# Transmission to Humans

The Old World hantaviruses causing HFRS and the New World agents of HPS are believed to be transmitted by the same mechanisms. Human infection occurs most commonly through the inhalation of infectious, aerosolized saliva or excreta. Persons visiting laboratories where infected rodents were housed have been infected after only a few minutes of exposure to animal holding areas (22). Transmission can occur when dried materials contaminated by rodent excreta are disturbed and inhaled, directly introduced into broken skin or conjunctivae, or, possibly, when ingested in contaminated food or water. Persons have also acquired HFRS and HPS after being bitten by rodents (23,24). High risk of exposure has been associated with entering or cleaning rodent-infested structures (25). Person-to-person transmission has not been associated with any of the Old World hantaviruses (26) or with HPS cases in the United States (27). However, person-to-person transmission, including nosocomial transmission of Andes virus, was well documented for a single outbreak in southern Argentina (28,29) and suspected to have occurred much less extensively in another outbreak in Chile associated with the same virus (30).

# Epidemiology

Hantavirus infections are associated with domestic, occupational, or recreational activities that bring humans into contact with infected rodents, usually in rural settings. Known hantavirus infections of humans occur primarily in adults. HPS cases in the United States occur throughout the year, but greater numbers are reported in spring and summer.
Hantavirus infection (resulting in HPS or HFRS) has been epidemiologically associated with the following situations (25,31-36):
• increasing numbers of host rodents in human dwellings;
• occupying or cleaning previously vacant cabins or other dwellings that are actively infested with rodents;
• cleaning barns and other outbuildings;
• disturbing excreta or rodent nests around the home or workplace;
• residing in or visiting areas where substantial increases have occurred in numbers of host rodents or numbers of hantavirus-infected host rodents;
• handling mice without gloves;
• keeping captive wild rodents as pets or research subjects;
• handling equipment or machinery that has been in storage;
• disturbing excreta in rodent-infested areas while hiking or camping;
• sleeping on the ground; and
• hand plowing or planting.
However, in North America, the absolute risk of hantavirus infection to the general public is low; only 20-50 cases of HPS have been confirmed annually in the United States since the disease was described in 1993 (Figure 2).

# Physical Properties of Hantaviruses

Hantaviruses have lipid envelopes that are susceptible to most disinfectants (e.g., dilute chlorine solutions, detergents, or most general-purpose household disinfectants) (37). Depending on environmental conditions, these viruses probably survive <1 week in indoor environments and much shorter periods (perhaps hours) when exposed to sunlight outdoors (38).

# Prevention

Eradicating the reservoir hosts of hantaviruses is neither feasible nor desirable because of the wide distribution of sigmodontine rodents in North America and their importance in the function of natural ecosystems. The best currently available approach for disease control and prevention is risk reduction through environmental modification and hygiene practices that deter rodents from colonizing the home and work environment, as well as safe cleanup of rodent waste and nesting materials. Controlled experiments have demonstrated that simple and inexpensive methods are effective in preventing rodents from entering rural dwellings (39). These recommendations emphasize the prevention of HPS associated with sigmodontine rodents in the Americas. Although the risk of acquiring hantavirus disease from contact with native arvicoline rodents in North America or introduced murine rodents throughout the Americas is low, the true pathogenicity for humans of all hantaviruses carried by these groups of rodents has not been established. Therefore, we recommend that persons avoid contact with all wild and peridomestic rats and mice. The precautions described in this report are broadly applicable to all groups of rats and mice.

# Precautions To Limit Household Exposure to Rodents

Rodent control in and around the home remains the primary strategy in preventing hantavirus infection. Rodent infestation can be determined by direct observation of animals, or inferred by observation of their nests or feces on floors or in protected areas (e.g., closets, kitchen cabinets, drawers, wall voids, furnace and hot water heating cabinets, and behind ventilation screens), or from evidence that rodents have been gnawing on food or other objects. The interior and exterior of the home should be carefully inspected at least twice per year for any openings where rodents could enter the home and for conditions that could support rodent activity. If any evidence of rodent infestation is detected inside the home or in outbuildings, precautions should be taken.
The guidelines in the section Special Precautions for Homes of Persons with Confirmed Hantavirus Infection or Buildings with Heavy Rodent Infestations should be followed if a structure is associated with a confirmed case of hantavirus disease or if evidence of heavy rodent infestation is present (e.g., piles of feces or numerous nests or dead rodents). Recommendations are listed below for 1) reducing rodent shelter and food sources inside and outside the home and 2) preventing rodents from entering the home by rodent-proofing (40-42).

# Precautions for Inside the Home

• Set spring-loaded rodent traps perpendicular to the baseboard or wall, with the end of the trap containing the bait closest to the baseboard or wall. Place traps in areas where rodents might be entering the home. Spring-loaded traps can be painful or even dangerous if they close on fingers; they should be handled with caution, and careful consideration should be given to keep children and pets away from areas where traps are placed. In the western United States (west of the 100th meridian, a line from mid-Texas through mid-North Dakota), a risk of plague transmission to humans from fleas exists. Use insect repellent (containing N,N-diethyl-m-toluamide [DEET]) on clothing, shoes, and hands to reduce the risk of fleabites when picking up dead rodents and traps. In cases of heavy rodent infestation in indoor spaces in the western United States, use an insecticide before trapping. Contact your local or state health department to find out if plague is a danger in the area and for additional advice on appropriate flea-control methods.

# Reduction of Rodent Shelter and Food Sources

• Continue trapping for at least 1 additional week after the last rodent is caught. As a precaution against reinfestation, use several baited, spring-loaded traps inside the house at all times in locations where rodents are most likely to be found.
• Examine traps regularly. To dispose of traps or trapped animals, wear rubber, latex, vinyl, or nitrile gloves. Spray the dead rodent with a disinfectant or chlorine solution. ¶ After soaking the rodent thoroughly, either take it out of the trap by lifting the spring-loaded metal bar and letting the animal fall into a plastic bag or place the entire trap containing the dead rodent in a plastic bag and seal the bag. Then place the rodent into a second plastic bag and seal it. Dispose of the rodent in the double bag by 1) burying it in a 2- to 3-foot-deep hole, 2) burning it, or 3) placing it in a covered trash can that is regularly emptied. Contact the state or local health department concerning other appropriate disposal methods.**
• If the trap will be reused, decontaminate it by immersing and washing it in a disinfectant or chlorine solution and rinsing afterward.
• For severe or persistent infestations, contact a pest-control professional for rodent eradication or a building contractor for rodent exclusion (rodent-proofing). When resident mice are removed from rural buildings without measures to prevent reentry, they are replaced almost immediately by other mice from the outside. Therefore, indoor rodent-trapping could be unsuccessful in reducing rodent infestations without simultaneous efforts to rodent-proof permeable dwellings.

# Precautions for Outside the Home

• Place woodpiles and stacks of lumber, bricks, stones, or other materials >100 feet from the house.
• Store grains and animal feed in rodent-proof containers.
• Remove, from the vicinity of buildings, any food sources that might attract rodents.
• Keep pet food covered and stored in rodent-proof containers.
• Allow outside pets only enough food for each meal, then store or discard any remaining food from feeding dishes.
• Avoid using bird feeders near the home. If they must be placed near the home, use "squirrel-proof" feeders and clean up spilled seeds each evening.
• Dispose of garbage and trash in rodent-proof containers with tight-fitting lids.
• Haul away trash, abandoned vehicles, discarded tires, and other items that might serve as rodent nesting sites.
• Mow grass closely, and cut or remove brush and dense shrubbery to a distance of at least 100 feet from the home. Trim the limbs off any trees or shrubs that overhang or touch the building.
• Use raised cement foundations in new construction of sheds, barns, and outbuildings.
• Place spring-loaded traps in outbuildings (regardless of their distance from the home) and in areas that might serve as rodent shelter within 100 feet of the home; use these traps continuously, replacing the bait periodically. For instructions concerning the safe use and cleaning of spring-loaded traps and the disposal of trapped rodents, see Precautions for Inside the Home.**

# Preventing Rodents from Entering the Home by Rodent-Proofing

• Look for and seal up all gaps and holes inside and outside the home that are >¼-inch (>6 mm) in diameter. Inside the home, look for and seal up all gaps and holes underneath, behind, and inside kitchen cabinets; inside closets; around floor air vents and dryer vents; around the fireplace; around windows and doors; behind appliances (e.g., dishwashers, clothes washers, and stoves); around pipes under the kitchen and bathroom sinks; around all electrical, water, gas, and sewer lines (chases); and beneath or behind hot water heaters, radiators, and furnaces and around their pipes that enter the home. Outside the home, look for and seal up all gaps and holes around windows and doors; between the foundation of the home and the ground; under doors without weatherstripping; around electrical, water, gas, and sewer lines (chases); and around the roof, eaves, gables, and soffits. In addition, look for unscreened attic vents and crawlspace vents. In trailers, look for and seal up holes and gaps in the skirting, between the trim and metal siding, around utility lines and pipes and ducts, around roof vents, and around the trailer tongue.
• Seal all entry holes >¼-inch (>6 mm) in diameter that are inside and outside the home with any of the following: cement, lath screen or lath metal,†† wire screening, hardware cloth (<¼-inch grate size), or other patching materials (42). Steel wool or STUF-FIT§§ also can be used, but caulk must be placed around the steel wool or STUF-FIT to prevent rodents from pushing it through the hole. Caulk and expanding foam can be used to reinforce any repairs where lath metal, hardware cloth, steel wool, or STUF-FIT are the primary materials; however, caulk or expanding foam alone are usually not sufficient to prevent rodent intrusion.
• If rodent burrows are found under foundations or trailer skirtings, construct a barrier around the entire foundation using 14-inch-wide (35 cm), <¼-inch (<6 mm) mesh, 16-19 gauge hardware cloth. Bend the hardware cloth lengthwise into a right angle with two sides of approximately 7 inches (18 cm). Secure one side of the hardware cloth tightly to the building siding. The other side should be buried at least 2 inches (5 cm) below ground level and extend out away from the wall.¶¶ (A rough material estimate for this barrier is sketched after this list.)
• Consult a pest-control professional for severe or persistent infestations.
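To make the barrier instructions above easier to plan for, the following is a minimal sketch, not part of the CDC guidance, that estimates how much hardware cloth a given foundation requires. The function name, the default 25-foot roll length, and the half-foot joint overlap are assumptions introduced for illustration; check the packaging of the product you actually buy.

```python
import math

# Hypothetical helper (not part of the CDC guidance): estimate the
# hardware cloth needed for the foundation barrier described above.
# The 25-ft roll length and 0.5-ft joint overlap are assumptions.

def barrier_cloth_needed(perimeter_ft: float,
                         roll_length_ft: float = 25.0,
                         overlap_per_joint_ft: float = 0.5) -> dict:
    """Estimate rolls and linear feet of 14-inch-wide hardware cloth
    for a barrier that wraps the full foundation perimeter."""
    rolls = math.ceil(perimeter_ft / roll_length_ft)
    joints = max(rolls - 1, 0)          # one overlap where rolls meet
    total_ft = perimeter_ft + joints * overlap_per_joint_ft
    return {"rolls": rolls, "linear_feet": round(total_ft, 1)}

# Example: a 14 ft x 70 ft trailer has a 2 * (14 + 70) = 168 ft perimeter.
print(barrier_cloth_needed(perimeter_ft=168))
# {'rolls': 7, 'linear_feet': 171.0}
```

Buying slightly more material than the bare perimeter allows each roll to overlap the next, so the barrier has no gaps at the joints.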
# Precautions To Limit Occupational and Recreational Exposure to Rodents

# Precautions for Workers Frequently Exposed to Rodents

Persons who frequently handle or are exposed to wild rodents are probably at higher risk for hantavirus infection than the general public because of the frequency of their exposures. Such persons include, but are not limited to, mammalogists, pest-control workers, some farm and domestic workers, and building and fire inspectors. Therefore, enhanced precautions are warranted to protect them against hantavirus infection, as described below.
• Workers in potentially high-risk settings should be informed by their employers about hantavirus transmission and symptoms of infection and be given detailed guidance on prevention measures (43). A comprehensive respirator user program should be supervised by a knowledgeable person (44). Given the predictable nature of HPS risk in certain professions or environmental situations, provisions should be made in advance for respiratory protection. Because of the expense associated with purchasing a powered air-purifying respirator (PAPR) system, a negative-pressure tight-seal respirator equipped with N-100 or P-100 filters is recommended when respiratory protection is required for home use. Respirators might cause stress to persons with respiratory or cardiac conditions; these persons should be medically cleared before using such a respirator. Home or other users with potentially impaired respiratory function also should be aware of the risks associated with the use of negative-pressure respirators (43).
• Workers should wear rubber, latex, vinyl, or nitrile gloves when handling rodents or handling traps containing rodents. Before removing the gloves, wash gloved hands in a disinfectant or chlorine solution and then wash bare hands in soap and water.**
• Mammalogists, wildlife biologists, or public health personnel who handle wild rodents for research or management purposes should refer to published safety guidelines (45,46). Precautions are also available on CDC's website, All About Hantaviruses (http://www.cdc.gov/ncidod/dvrd/spb/mnpages/rodentmanual.htm).

†† Lath screen or lath metal is a light-gauge metal mesh and is commonly installed over wooden walls before plaster is applied. A galvanized product is preferable. Lath screen is malleable and can be folded and pushed into larger holes. These materials can be found in the masonry or building materials section at hardware or building supply stores.
§§ STUF-FIT is a soft copper-mesh material that might be preferable to steel wool because it does not rust and is not easily pulled apart by rodents. It can be obtained from pest control retail stores or from Allen Special Products (telephone 800-848-6805).
¶¶ Illustrated, complete instructions for rodent-proofing are available 1) in the National Park Service's manual, Mechanical Rodent Proofing Techniques; 2) on CDC's website, All About Hantaviruses (http://www.cdc.gov/ncidod/diseases/hanta/hps/index.htm); and 3) from CDC's Ramah Home Seal-up protocol, Special Pathogens Branch (e-mail [email protected]).

# Precautions for Other Occupational Groups Having Potential Contact with Rodents

Insufficient information is available to provide general recommendations regarding risks and precautions for persons who work in occupations with unpredictable or incidental contact with rodents or their nesting sites. Examples of such occupations include telephone installers, maintenance workers, plumbers, electricians, and certain construction workers.
Workers in these jobs might have to enter buildings, crawl spaces, or other sites that are potentially rodent-infested, and HPS has been reported among these workers. Recommendations for such circumstances must be made on a case-by-case basis after the specific working environment has been assessed and state or local health and labor officials or trade unions and management, as appropriate, have been consulted. Determining the level of risk present and implementing appropriate protective measures is the employer's responsibility.***

# Precautions for Campers and Hikers

No evidence exists to suggest that travel should be restricted in areas where HPS cases have occurred. The majority of typical tourist activities are associated with limited or no risk that travelers will be exposed to rodents or their excreta. However, persons engaged in outdoor activities (e.g., camping or hiking) should take precautions to reduce the likelihood of exposure to potentially infectious materials by following these recommendations.
• Avoid touching live or dead rodents or disturbing rodent burrows, dens, or nests.
• Do not use cabins or other enclosed shelters that are potentially rodent-infested until they have been appropriately cleaned and disinfected. (See Precautions for Cleanup of Rodent-Contaminated Areas and Dead Rodents.) Rodent-proofing might be necessary to prevent reinfestation. (See Precautions to Limit Household Exposure to Rodents.)
• When an unoccupied cabin or other structure to be used has been closed for several weeks, ventilate the structure by opening doors and windows for at least 30 minutes before occupying. Use cross ventilation if possible. Leave the area (preferably remaining upwind) during the airing-out period. The airing helps to remove infectious primary aerosols that might be created when hantavirus-infected rodents urinate.
• Do not pitch tents or place sleeping bags in proximity to rodent feces or burrows or near possible rodent shelters (e.g., garbage dumps or woodpiles).
• Avoid sleeping on the bare ground. Use a cot with the sleeping surface at least 12 inches above the ground or use a tent with a floor.
• Keep food in rodent-proof containers.
• Dispose of all trash and garbage promptly in accordance with campsite regulations by
- burning or burying,
- discarding in rodent-proof trash containers, or
- "packing out" in rodent-proof containers.

# Precautions for Cleanup of Rodent-Contaminated Areas and Dead Rodents

Areas with evidence of rodent activity (e.g., dead rodents and rodent excreta) should be thoroughly cleaned to reduce the likelihood of exposure to hantavirus-infected materials. Cleanup procedures must be performed in a manner that limits the potential for dirt or dust from contaminated surfaces to become airborne. Recommendations are listed in this report for cleaning up 1) rodent urine and droppings, and surfaces potentially contaminated by rodents and 2) dead rodents and rodent nests.

# Cleanup of Rodent Urine and Droppings and Contaminated Surfaces

• During cleaning, wear rubber, latex, vinyl, or nitrile gloves.
• Spray rodent urine and droppings with a disinfectant or chlorine solution until thoroughly soaked. (See Cleanup of Dead Rodents and Rodent Nests.)
• To avoid generating potentially infectious aerosols, do not vacuum or sweep rodent urine, droppings, or contaminated surfaces until they have been disinfected.
• Use a paper towel to pick up the urine and droppings. Place the paper towel in the garbage.
• After the rodent droppings and urine have been removed, disinfect items that might have been contaminated by rodents or their urine and droppings.
- Mop floors with a disinfectant or chlorine solution.

¶ A chlorine solution prepared by mixing 1½ cups of household bleach in 1 gallon of water (or a 1:10 solution) can be used in place of a commercial disinfectant (a worked scaling sketch follows the Applicability and Updates section below). When using chlorine solution, avoid spilling the mixture on clothing or other items that might be damaged by bleach. Wear rubber, latex, vinyl, or nitrile gloves when preparing and using chlorine solutions. Chlorine solutions should be prepared fresh daily.

# Cleaning Sheds and Other Outbuildings

Before cleaning closed sheds and other outbuildings, ventilate the building by opening doors and windows for at least 30 minutes. Use cross ventilation if possible. Leave the area during the airing-out period. This airing helps to remove infectious primary aerosols that might be created when hantavirus-infected rodents urinate. In substantially dirty or dusty environments, additional protective clothing or equipment may be worn. Such equipment includes coveralls (disposable when possible) and safety glasses or goggles, in addition to rubber, latex, vinyl, or nitrile gloves. For recommendations regarding precautions for cleanup of outbuildings with heavy rodent infestations, see Special Precautions for Homes of Persons with Confirmed Hantavirus Infection or Buildings with Heavy Rodent Infestations.

# Special Precautions for Homes of Persons with Confirmed Hantavirus Infection or Buildings with Heavy Rodent Infestations

Special precautions are indicated for cleaning homes or buildings with heavy rodent infestations. A rodent infestation is considered heavy if piles of feces or numerous nests or dead rodents are observed. Persons cleaning these homes or buildings should contact the local or state public health agency or CDC for guidance. These precautions also can apply to vacant dwellings that have attracted rodents while unoccupied and to dwellings and other structures that have been occupied by persons with confirmed hantavirus infection. Workers who are either hired specifically to perform the cleanup or asked to do so as part of their work activities should receive a thorough orientation from the responsible health agency or employer about hantavirus transmission and disease symptoms and should be trained to perform the required activities safely.

# Recommendations for Cleaning Homes or Buildings with Heavy Rodent Infestations

# Applicability and Updates

The control and prevention recommendations in this report represent general measures to minimize the likelihood of human exposure to hantavirus-infected rodents in the Americas. Although different geographic areas might have varying housing types and rodent populations, the precautions should be the same. The effect and utility of the recommendations will be continually reviewed by CDC and the involved state and local health agencies as additional epidemiologic, field, and laboratory data become available. These recommendations might be supplemented or modified in the future. These recommendations and additional information concerning hantaviruses are periodically updated and made available on CDC's website, All About Hantaviruses (http://www.cdc.gov/ncidod/diseases/hanta/hps/index.htm).
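As a convenience for the chlorine-solution recipe given in the cleanup instructions above (1½ cups of household bleach per gallon of water), here is a minimal sketch, not part of the CDC guidance, that checks the ratio and scales the recipe to other batch sizes. The function name and the 16-cups-per-gallon conversion (US customary units) are assumptions introduced for illustration.

```python
# Minimal sketch (not part of the CDC guidance): confirm that 1.5 cups of
# bleach per gallon of water is roughly a 1:10 mix, and scale the recipe.
# Assumption: US customary units, 16 cups per gallon.

CUPS_PER_GALLON = 16.0
BLEACH_CUPS_PER_GALLON = 1.5   # from the recipe above

def bleach_for_batch(water_cups: float) -> float:
    """Cups of household bleach for a given volume of water, keeping the
    published 1.5-cups-per-gallon proportion."""
    return round(water_cups * BLEACH_CUPS_PER_GALLON / CUPS_PER_GALLON, 2)

# Bleach-to-water ratio implied by the recipe: 16 / 1.5 ~= 10.7, i.e. ~1:10.
print(f"1:{CUPS_PER_GALLON / BLEACH_CUPS_PER_GALLON:.1f}")  # 1:10.7

# Scaling example: a 4-cup spray bottle of water takes ~0.38 cups of bleach.
print(bleach_for_batch(4.0))  # 0.38
```

The small discrepancy between 1:10.7 and the stated "1:10 solution" simply reflects rounding in the published recipe; either proportion is within the same general strength.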
Additional information can be obtained by contacting CDC, National Center for Infectious Diseases (NCID), Special Pathogens Branch, Mailstop A-26, 1600 Clifton Road, N.E., Atlanta, GA 30333; e-mail [email protected]; fax 404-639-1509; or by telephone 404-639-1510.
# Introduction

This document describes CDC's Office of the Associate Director for Science (OADS) requirements for disclosing the competing interests of authors and members of steering committees and workgroups who develop and report on CDC guidelines and recommendations. A competing interest exists when professional judgment or actions concerning a primary interest (such as patients' or the public's welfare or the validity of research) may be improperly influenced by a secondary interest such as financial gain, professional advancement, or personal rivalry. 1,2 The commonly used term "conflict of interest" often refers to a narrower understanding of secondary interests, such as financial gain. However, the term competing interest is used in this document to indicate a broader meaning of secondary interests, such as nonfinancial personal gain. CDC guidelines and recommendations have the potential to influence practice, policy, and the allocation of resources on a wide scale. Therefore, recommendations must be based on objective assessments of the evidence, and values or preferences that influence judgment must be transparent and not prejudiced by secondary interests. 3

# Requirements for Reporting Competing Interests in CDC Guidelines

CDC's OADS requires all authors, steering committee i and workgroup members ii, including federal employees, interns and fellows, and external members participating in the development and reporting of a CDC guideline, to disclose any competing interests. To assess competing interests, OADS recommends the use of the Competing Interest and Confidentiality Self-Certification Form for CDC Guidelines (see Step 1 of this document). Contract employees are also required to disclose competing interests. These employees and their employing agencies may also need to follow contract language or Procurement and Grants Office (PGO) requirements. Using the information from this form, OADS also requires that a Disclosure of Competing Interest Statement for CDC Guidelines be placed in the document containing the CDC guidelines (see Step 2 of this document for an example of such language).

NOTE: These OADS requirements do not apply to members of federal advisory committees and their workgroups formed under the Federal Advisory Committee Act (FACA). They file their own conflict of interest forms. FACA information is available at the following websites: http://www.cdc.gov/maso/FACM/facmhome.htm and http://intranet.cdc.gov/maso/cmppa/index.html. The Management Analysis and Services Office (MASO) manages FACA processes, which are separate from this process and therefore are not subject to completing an OADS Disclosure of Competing Interest Statement. To determine whether a group is subject to FACA requirements, contact the MASO Federal Advisory Committee Policy and Oversight Team (CDC) at facmp&[email protected] or (770) 488-4707.

i Steering Committee: The steering committee oversees guideline development and the writing and approval of recommendations. It may include a few members of the guideline workgroup.
ii Workgroup: Anyone who contributes to the planning, development, scientific evidence gathering and synthesis, drafting, and publication of CDC guidelines and recommendations.

# Guidance for Assessing and Disclosing Competing Interests

Guidance for assessing potential competing interests and developing competing interest statements is provided in steps 1 and 2 below.
In Step 2, the workgroup leader or designee compiles the competing interest information from these forms to develop a Disclosure of Competing Interest Statement for CDC Guidelines.

# Step 1. Assessing Competing Interests

OADS requires authors or members of steering committees or workgroups to complete the form Competing Interest and Confidentiality Self-Certification Form for CDC Guidelines before participating in guideline development. If workgroups use a different form, it needs to be approved by CDC, the Department of Health and Human Services (HHS), the Office of Personnel Management (OPM), or another federal authority. Disclosing competing interests does not necessarily prevent anyone from becoming a member of a guideline workgroup or otherwise participating in guideline development. In some cases, where the expertise is needed, competing interests may be unavoidable. If a competing interest exists, the individual may decline or be asked to decline participation during any period of guideline development where the competing interest may exist. However, it is ultimately up to the CDC guideline steering committee or a designated workgroup leader to decide whether and how to retain or excuse anyone with a competing interest. If there are competing interests, possible options may include: 1) allowing the individual to participate and reporting the competing interest in the guideline document; 2) prohibiting the individual from participating in portions of the process where the competing interest is greatest; or 3) prohibiting the individual from participating entirely. In a few sentences, authors need to summarize in the methods section of the guideline document: a) what form they used to assess competing interests and b) how they managed competing interests in the workgroup.

# COMPETING INTEREST AND CONFIDENTIALITY SELF-CERTIFICATION FORM FOR CDC GUIDELINES iii

iii Note: This is an example of a form that CDC staff can use and tailor to meet the needs of a particular guidelines and recommendation working group. This is not an official form approved by the Office of Government Ethics and does not need to be reviewed by the CDC Ethics Office, nor is it required to be filed and maintained in that office.

# Steering Committee/Working Group Statements

If one or more of the following statements is true, then the individual should disclose and discuss that statement with the workgroup.
- The purpose of the guideline for which this disclosure form is prepared is a critical review and evaluation of my work outside this workgroup or that of my employer.
- I have a professional obligation that requires me to publicly defend or defeat an established position on an issue relevant to the functions to be performed in this working group activity.
- To the best of my knowledge, my participation in this working group activity will enable me to obtain access to a competitor's or potential competitor's confidential proprietary information.
- As a current or former U.S. government employee (civilian or military), there are federal competing interest restrictions that may apply to my service in connection with this working group activity.
- I am interested in seeking an award under the program for which the working group is developing the request for proposals, work statement, or specifications, and I am employed in some capacity by, or have a financial interest in or other economic relationship with, any person or organization that to the best of my knowledge is interested in seeking an award under this program.
- I or a member of my immediate family holds a financial, equity, or proprietary interest in, or receives research support from, an organization whose product or product concept is involved in the deliberations of this working group.
- I or a member of my immediate family holds a financial, equity, or proprietary interest in, or receives research support from, an organization whose product or product concept is competing with a product or product concept being discussed by this working group.
- I or a member of my immediate family is seeking employment in an organization, or serves as an officer, director, trustee, partner, or employee of an organization, whose product or product concept competes with, is involved in the deliberations of, or would benefit from research in an area that is on this working group's agenda.
- I or a member of my immediate family holds a financial, equity, or proprietary interest in, or receives research support from, an organization whose product or product concept is being discussed by this working group or would substantially benefit from research emphasis in a defined area.

I fully understand the confidential nature of the discussions held during sessions of the working group and agree: (1) to destroy or return all materials related to the meetings; (2) not to disclose or discuss the materials associated with the meetings or my evaluations with anyone except with the CDC guideline steering committee to which the working group reports; and (3) to refer all inquiries concerning the meeting to the federal official managing the working group (federal official: a CDC full-time equivalent employee who is a member of the guideline workgroup). I hereby certify that the above information is true and complete to the best of my knowledge and that, as appropriate, I have disclosed and discussed any competing interest or applicable statement with the workgroup.

# Step 2. Disclosing Competing Interests

A disclosure of competing interest statement describes competing interests among authors, steering committee, and workgroup members that participated in the development and writing of guidelines and recommendations. This disclosure statement is inserted in the back of a CDC guideline and recommendation document after the list of steering committee and workgroup names and affiliations. The statement must be included in the guideline document before it is submitted for CDC clearance.

# Examples of Disclosure of Competing Interest Statements for CDC Guidelines

The following are two examples of suggested language for a Disclosure of Competing Interest Statement. The first has no competing interests and the second includes competing interests. Examples of disclosure statements can be tailored for each CDC workgroup.
- "CDC, our steering committee, and workgroup wish to disclose that they have no financial interests or other relationships with the manufacturers of commercial products, suppliers of commercial services, or commercial supporters that would unfairly influence these CDC guidelines and recommendations."
- "CDC, our steering committee, and workgroup wish to disclose that they have no financial interests or other relationships with the manufacturers of commercial products, suppliers of commercial services, or commercial supporters that would unfairly influence these CDC guidelines and recommendations with the following exceptions: (name) wishes to disclose that s/he is funded by (name of company); (name) has received research grants from (name of companies) while s/he served as a principal investigator of grants; (name) wishes to disclose that her/his institution receives funding from (name of agency) for her participation in a clinical trial and that her/his spouse is employed by (name of company); (name) wishes to disclose that this work was completed as part of the CDC Experience, a one-year fellowship in applied epidemiology at CDC made possible by a public/private partnership supported by a grant to the CDC Foundation from (name of company). Presentations will not include discussion of the unlabeled use of a product or a product under investigational use." Other Examples from Published Literature - "CDC, our planners, and our content experts wish to disclose that they have no financial interests or other relationships with the manufacturers of commercial products, suppliers of commercial services, or commercial supporters." 4 - "CDC, our planners, and our content experts wish to disclose that they have no financial interests or other relationships with the manufacturers of commercial products, suppliers of commercial services, or commercial supporters. Planners have reviewed content to ensure there is no bias. This document will not include any discussion of the unlabeled use of a product or a product under investigational use, with the exception that some of the recommendations in this document might be inconsistent with package labeling. CDC does not accept commercial support." 5 - "Our subject matter experts wish to disclose that they have no financial interests or other relationships with the manufacture of commercial products, providers of commercial services, or commercial supporters. This report does not include any discussion of the unlabeled use of commercial products or products for investigational use. - "CDC, our planners, and our content experts disclose that they have no financial interests or other relationships with the manufacturers of commercial products, suppliers of commercial services, or commercial supporters. Presentations will not include any discussion of the unlabeled use of a product or a product under investigational use. CDC does not accept commercial support." 8 - "CDC, our planners, and our presenters wish to disclose they have no financial interests or other relationships with the manufacturers of commercial products, suppliers of commercial services, or commercial supporters with the exception of Jeffrey P. Salomone, who wishes to disclose he received an honorarium as a consultant and on the Advisory Board for Schering-Plough Pharmaceuticals and Stewart C. Wang, who received research grants from General Motors and Toyota Motors while he served as a principal investigator of Grants. Presentations will not include any discussion of the unlabeled use of a product or a product under investigational use." 
9 - The developers of these guidelines wish to disclose that they have no financial interests or other competing interests with the manufacturers of commercial products or suppliers of commercial services related to vaccines including any related to hepatitis B vaccines, with the following exceptions: David Weber, MD, wishes to disclose that he served as a consultant and on a speakers' bureau, whether paid or unpaid (e.g., travel related reimbursement, honoraria), for the following vaccine manufacturers: Merck, Sanofi, and Pfizer pharmaceutical companies. Amy B Middleman, MD, also wishes to disclose that she received grant funding from the following pharmaceutical companies: MedImmune, Sanofi, and Merck." 10 (Note: This statement is listed at the end of the document)
# Introduction This document describes CDC's Office of the Associate Director for Science (OADS) requirements for disclosing the competing interests of authors and members of steering committees and workgroups who develop and report on CDC guidelines and recommendations. A competing interest exists when professional judgment or actions concerning a primary interest (such as patients' or the public's welfare or the validity of research) may be improperly influenced by a secondary interest such as financial gain, professional advancement, or personal rivalry. 1,2 The commonly used term "conflict of interest" often refers to a narrower set of secondary interests, such as financial gain. The term competing interest is used in this document to indicate a broader range of secondary interests, such as nonfinancial personal gain. CDC guidelines and recommendations have the potential to influence practice, policy, and the allocation of resources on a wide scale. Therefore, recommendations must be based on objective assessments of the evidence, and values or preferences that influence judgment must be transparent and not prejudiced by secondary interests. 3 # Requirements for Reporting Competing Interests in CDC Guidelines CDC's OADS requires all authors and all steering committee(i) and workgroup(ii) members, including federal employees, interns and fellows, and external members participating in the development and reporting of a CDC guideline, to disclose any competing interests. To assess competing interests, OADS recommends the use of the Competing Interest and Confidentiality Self-Certification Form for CDC Guidelines (see Step 1 of this document). Contract employees are also required to disclose competing interests. These employees and their employing agencies may also need to follow contract language or Procurement and Grants Office (PGO) requirements. Using the information from this form, OADS also requires that a Disclosure of Competing Interest Statement for CDC Guidelines be placed in the document containing the CDC guidelines (see Step 2 of this document for examples of such language). NOTE: These OADS requirements do not apply to members of federal advisory committees and their workgroups formed under the Federal Advisory Committee Act (FACA); they file their own conflict of interest forms. FACA information is available at the following websites: http://www.cdc.gov/maso/FACM/facmhome.htm and http://intranet.cdc.gov/maso/cmppa/index.html. The Management Analysis and Services Office (MASO) manages FACA processes, which are separate from this process and therefore do not require an OADS Disclosure of Competing Interest Statement. To determine whether a group is subject to FACA requirements, contact the MASO Federal Advisory Committee Policy and Oversight Team (CDC) at facmp&o@cdc.gov or (770) 488-4707. (i) Steering committee: the steering committee oversees guideline development and the writing and approval of recommendations. It may include a few members of the guideline workgroup. (ii) Workgroup: anyone who contributes to the planning, development, scientific evidence gathering and synthesis, drafting, and publication of CDC guidelines and recommendations. # Guidance for Assessing and Disclosing Competing Interests Guidance for assessing potential competing interests and developing competing interest statements is provided in Steps 1 and 2 below.
In Step 1, all members of steering committees or workgroups complete the Competing Interest and Confidentiality Self-Certification Form for CDC Guidelines. In Step 2, the workgroup leader or designee compiles the competing interest information from these forms to develop a Disclosure of Competing Interest Statement for CDC Guidelines. Step 1. Assessing Competing Interests OADS requires authors or members of steering committees or workgroups to complete the Competing Interest and Confidentiality Self-Certification Form for CDC Guidelines before participating in guideline development. If a workgroup uses a different form, it needs to be approved by CDC, the Department of Health and Human Services (HHS), the Office of Personnel Management (OPM), or another federal authority. Disclosing competing interests does not necessarily prevent anyone from becoming a member of a guideline workgroup or otherwise participating in guideline development. In some cases, where the expertise is needed, competing interests may be unavoidable. If a competing interest exists, the individual may decline or be asked to decline participation during any period of guideline development where the competing interest may exist. However, it is ultimately up to the CDC guideline steering committee or a designated workgroup leader to decide whether and how to retain or excuse anyone with a competing interest. If there are competing interests, possible options include: 1) allowing the individual to participate and reporting the competing interest in the guideline document; 2) prohibiting the individual from participating in portions of the process where the competing interest is greatest; or 3) prohibiting the individual from participating entirely. In a few sentences, authors need to summarize in the methods section of the guideline document: a) what form they used to assess competing interests and b) how they managed competing interests in the workgroup. # COMPETING INTEREST AND CONFIDENTIALITY SELF-CERTIFICATION FORM FOR CDC GUIDELINES Note: This is an example of a form that CDC staff can use and tailor to meet the needs of a particular guidelines and recommendations working group. This is not an official form approved by the Office of Government Ethics; it does not need to be reviewed by the CDC Ethics Office, nor is it required to be filed and maintained in that office. Statements: If one or more of the following statements is true, then the individual should disclose and discuss that statement with the workgroup. # Steering Committee/Working Group Member Statements • The purpose of the guideline for which this disclosure form is prepared is a critical review and evaluation of my work outside this workgroup or that of my employer. • I have a professional obligation that requires me to publicly defend or defeat an established position on an issue relevant to the functions to be performed in this working group activity. • To the best of my knowledge, my participation in this working group activity will enable me to obtain access to a competitor's or potential competitor's confidential proprietary information. • As a current or former U.S. government employee (civilian or military), there are federal competing interest restrictions that may apply to my service in connection with this working group activity.
• I am interested in seeking an award under the program for which the working group is developing the request for proposals, work statement, or specifications, and I am employed in some capacity by, or have a financial interest in or other economic relationship with, any person or organization that to the best of my knowledge is interested in seeking an award under this program. • I or a member of my immediate family holds a financial, equity, or proprietary interest in, or receives research support from, an organization whose product or product concept is involved in the deliberations of this working group. • I or a member of my immediate family holds a financial, equity, or proprietary interest in, or receives research support from, an organization whose product or product concept is competing with a product or product concept being discussed by this working group. • I or a member of my immediate family is seeking employment in an organization, or serves as an officer, director, trustee, partner, or employee of an organization, whose product or product concept competes with, is involved in the deliberations of, or would benefit from research in an area that is on this working group's agenda. • I or a member of my immediate family holds a financial, equity, or proprietary interest in, or receives research support from, an organization whose product or product concept is being discussed by this working group or would substantially benefit from research emphasis in a defined area. I fully understand the confidential nature of the discussions held during sessions of the working group and agree: (1) to destroy or return all materials related to the meetings; (2) not to disclose or discuss the materials associated with the meetings or my evaluations with anyone except the CDC guideline steering committee to which the working group reports; and (3) to refer all inquiries concerning the meeting to the federal official managing the working group (federal official: a CDC full-time equivalent employee who is a member of the guideline workgroup). I hereby certify that the above information is true and complete to the best of my knowledge and that, as appropriate, I have disclosed and discussed any competing interest or applicable statement with the workgroup. Step 2. Developing a Disclosure of Competing Interest Statement A disclosure of competing interest statement describes competing interests among the authors, steering committee members, and workgroup members who participated in the development and writing of guidelines and recommendations. This disclosure statement is inserted at the back of a CDC guideline and recommendation document, after the list of steering committee and workgroup names and affiliations. The statement must be included in the guideline document before it is submitted for CDC clearance. # Examples of Disclosure of Competing Interest Statements for CDC Guidelines The following are two examples of suggested language for a Disclosure of Competing Interest Statement: the first discloses no competing interests, and the second includes competing interests. Disclosure statements can be tailored for each CDC workgroup. • "CDC, our steering committee, and workgroup wish to disclose that they have no financial interests or other relationships with the manufacturers of commercial products, suppliers of commercial services, or commercial supporters that would unfairly influence these CDC guidelines and recommendations."
• "CDC, our steering committee, and workgroup wish to disclose that they have no financial interests or other relationships with the manufacturers of commercial products, suppliers of commercial services, or commercial supporters that would unfairly influence these CDC guidelines and recommendations with the following exceptions: (name) wishes to disclose that s/he is funded by (name of company); (name) has received research grants from (name of companies) while s/he served as a principal investigator of grants; (name) wishes to disclose that her/his institution receives funding from (name of agency) for her participation in a clinical trial and that her/his spouse is employed by (name of company); (name) wishes to disclose that this work was completed as part of the CDC Experience, a one-year fellowship in applied epidemiology at CDC made possible by a public/private partnership supported by a grant to the CDC Foundation from (name of company). Presentations will not include discussion of the unlabeled use of a product or a product under investigational use." Other Examples from Published Literature • "CDC, our planners, and our content experts wish to disclose that they have no financial interests or other relationships with the manufacturers of commercial products, suppliers of commercial services, or commercial supporters." 4 • "CDC, our planners, and our content experts wish to disclose that they have no financial interests or other relationships with the manufacturers of commercial products, suppliers of commercial services, or commercial supporters. Planners have reviewed content to ensure there is no bias. This document will not include any discussion of the unlabeled use of a product or a product under investigational use, with the exception that some of the recommendations in this document might be inconsistent with package labeling. CDC does not accept commercial support." 5 • "Our subject matter experts wish to disclose that they have no financial interests or other relationships with the manufacture of commercial products, providers of commercial services, or commercial supporters. This report does not include any discussion of the unlabeled use of commercial products or products for investigational use. • "CDC, our planners, and our content experts disclose that they have no financial interests or other relationships with the manufacturers of commercial products, suppliers of commercial services, or commercial supporters. Presentations will not include any discussion of the unlabeled use of a product or a product under investigational use. CDC does not accept commercial support." 8 • "CDC, our planners, and our presenters wish to disclose they have no financial interests or other relationships with the manufacturers of commercial products, suppliers of commercial services, or commercial supporters with the exception of Jeffrey P. Salomone, who wishes to disclose he received an honorarium as a consultant and on the Advisory Board for Schering-Plough Pharmaceuticals and Stewart C. Wang, who received research grants from General Motors and Toyota Motors while he served as a principal investigator of Grants. Presentations will not include any discussion of the unlabeled use of a product or a product under investigational use." 
9 • "The developers of these guidelines wish to disclose that they have no financial interests or other competing interests with the manufacturers of commercial products or suppliers of commercial services related to vaccines, including any related to hepatitis B vaccines, with the following exceptions: David Weber, MD, wishes to disclose that he served as a consultant and on a speakers' bureau, whether paid or unpaid (e.g., travel-related reimbursement, honoraria), for the following vaccine manufacturers: Merck, Sanofi, and Pfizer pharmaceutical companies. Amy B. Middleman, MD, also wishes to disclose that she received grant funding from the following pharmaceutical companies: MedImmune, Sanofi, and Merck." 10 (Note: This statement is listed at the end of the document.) # Acknowledgements We wish to acknowledge the following individuals, who made the development of this document possible through their time and expertise: Sudevi Ghosh, JD, senior attorney, CDC Office of the General Counsel; and Chris Cox, JD, senior attorney, HHS Office of the General Counsel.
Objective To review and update consensus-based recommendations for medical and public health professionals following a Bacillus anthracis attack against a civilian population. Participants The working group included 23 experts from academic medical centers, research organizations, and governmental, military, public health, and emergency management institutions and agencies. Of the biological agents that may be used as weapons, the Working Group on Civilian Biodefense identified a limited number of organisms that, in worst-case scenarios, could cause disease and deaths in sufficient numbers to gravely impact a city or region. Bacillus anthracis, the bacterium that causes anthrax, is one of the most serious of these. Several countries are believed to have offensive biological weapons programs, and some independent terrorist groups have suggested their intent to use biological weapons. Because the possibility of a terrorist attack using bioweapons is especially difficult to predict, detect, or prevent, it is among the most feared terrorism scenarios. 1 In September 2001, B anthracis spores were sent to several locations via the US Postal Service. Twenty-two confirmed or suspected cases of anthrax infection resulted. Eleven of these were inhalational cases, of whom 5 died; 11 were cutaneous cases (7 confirmed, 4 suspected). 2 In this article, these attacks are termed the anthrax attacks of 2001. The consequences of these attacks substantiated many findings and recommendations in the Working Group on Civilian Biodefense's previous consensus statement published in 1999 3 ; however, the new information from these attacks warrants updating the previous statement. Before the anthrax attacks in 2001, modern experience with inhalational anthrax was limited to an epidemic in Sverdlovsk, Russia, in 1979, following an unintentional release of B anthracis spores from a Soviet bioweapons factory, and to 18 occupational exposure cases in the United States during the 20th century. Information about the potential impact of a large, covert attack using B anthracis, or the possible efficacy of postattack vaccination or therapeutic measures, remains limited. Policies and strategies continue to rely partially on interpretation and extrapolation from an incomplete and evolving knowledge base. # CONSENSUS METHODS The working group comprised 23 representatives from academic medical centers; research organizations; and government, military, public health, and emergency management institutions and agencies. For the original consensus statement, 3 we searched MEDLINE databases from January 1966 to April 1998 using the Medical Subject Headings anthrax, Bacillus anthracis, biological weapon, biological terrorism, biological warfare, and biowarfare. Reference review identified work published before 1966. Working group members identified unpublished sources. The first consensus statement, published in 1999, 3 followed a synthesis of the information and revision of 3 drafts. We reviewed the anthrax literature again in January 2002, with special attention to articles following the anthrax attacks of 2001. Members commented on a revised document; proposed revisions were incorporated with the working group's support for the final consensus document. The assessment and recommendations provided herein represent our best professional judgment based on current data and expertise. The conclusions and recommendations need to be regularly reassessed as new information develops.
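For readers who wish to reproduce the literature search described above, the sketch below shows one way the stated MEDLINE strategy (January 1966 to April 1998, using the listed Medical Subject Headings) might be expressed against PubMed via Biopython's Entrez module. The query syntax, date fields, and contact address are illustrative assumptions, not part of the working group's original protocol.

```python
# A minimal sketch, assuming Biopython is installed, of the stated MEDLINE
# search window and subject headings. The query syntax and the contact
# e-mail are illustrative assumptions, not the original search protocol.
from Bio import Entrez

Entrez.email = "searcher@example.org"  # NCBI requests a contact address

headings = ["anthrax", "Bacillus anthracis", "biological weapon",
            "biological terrorism", "biological warfare", "biowarfare"]
query = " OR ".join(f'"{h}"[MeSH Terms]' for h in headings)

# Restrict to the January 1966 - April 1998 publication-date window.
handle = Entrez.esearch(db="pubmed", term=query, datetype="pdat",
                        mindate="1966/01", maxdate="1998/04", retmax=0)
record = Entrez.read(handle)
handle.close()

print(f"Citations matching the strategy: {record['Count']}")
```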
# HISTORY OF CURRENT THREAT For centuries, B anthracis has caused disease in animals and serious illness in humans. 4 Research on anthrax as a biological weapon began more than 80 years ago. 5 Most national offensive bioweapons programs were terminated following widespread ratification or signing of the Biological Weapons Convention (BWC) in the early 1970s 6 ; the US offensive bioweapons program was terminated after President Nixon's 1969 and 1970 executive orders. However, some nations continued offensive bioweapons development programs despite ratification of the BWC. In 1995, Iraq acknowledged to the United Nations Special Commission that it had produced and weaponized B anthracis. 7 The former Soviet Union is also known to have had a large B anthracis production program as part of its offensive bioweapons program. 8 A recent analysis reports that there is clear evidence of, or widespread assertions from nongovernmental sources alleging, the existence of offensive biological weapons programs in at least 13 countries. 6 The anthrax attacks of 2001 have heightened concern about the feasibility of large-scale aerosol bioweapons attacks by terrorist groups. It has been feared that independent, well-funded groups could obtain a manufactured weapons product or acquire the expertise and resources to produce the materials for an attack. However, some analysts have questioned whether "weapons grade" material such as that used in the 2001 attacks (ie, powders of B anthracis with characteristics such as high spore concentration, uniform particle size, low electrostatic charge, treated to reduce clumping) could be produced by those not supported by the resources of a nation-state. The US Department of Defense recently reported that 3 defense employees with some technical skills, but without expert knowledge of bioweapons, manufactured a simulant of B anthracis in less than a month for $1 million. 9 It is reported that Aum Shinrikyo, the cult responsible for the 1995 release of sarin nerve gas in a Tokyo subway station, 10 dispersed aerosols of anthrax and botulism throughout Tokyo at least 8 times. 11 Forensic analysis of the B anthracis strain used in these attacks revealed that this isolate most closely matched the Sterne 34F2 strain, which is used for animal vaccination programs and is not a significant risk to humans. 12 It is probable that the cult attacks produced no illnesses for this and other technical reasons. Al Qaeda also has sought to acquire bioweapons in its terrorist planning efforts, although the extent to which it has been successful is not reported. 13 In the anthrax attacks of 2001, B anthracis spores were sent in at least 5 letters to Florida, New York City, and Washington, DC. Twenty-two confirmed or suspected cases resulted. All of the identified letters were mailed from Trenton, NJ. The B anthracis spores in all the letters were identified as the Ames strain. The specific source (provenance) of the B anthracis cultures used to create the spore-containing powder remains unknown at the time of this publication. It is now recognized that the original Ames strain of B anthracis did not come from a laboratory in Ames, Iowa, but rather from a laboratory in College Station, Tex. Several distinct Ames strains have been recognized by investigating scientists, which are being compared with the Ames strain used in the attack. At least 1 of these comparison Ames strains was recovered from a goat that died in Texas in 1997.
14 Sen Daschle's letter reportedly contained 2 g of B anthracis-containing powder; the quantity in the other envelopes has not been disclosed. The powder has been reported to contain between 100 billion and 1 trillion spores per gram, 15 although no official analysis of the concentration of spores or the chemical composition of the powder has been published. The anthrax attacks of 2001 used 1 of many possible methods of attack. The use of aerosol-delivery technologies inside buildings or over large outdoor areas is another method of attack that has been studied. In 1970, the World Health Organization 16 and in 1993 the Office of Technology Assessment 17 analyzed the potential scope of larger attacks. The 1979 Sverdlovsk accident provides data on the only known aerosol release of B anthracis spores resulting in an epidemic. 18 An aerosol release of B anthracis would be odorless and invisible and would have the potential to travel many kilometers before dissipating. 16,19 Aerosol technologies for large-scale dissemination have been developed and tested by Iraq 7 and the former Soviet Union. 8 Few details of those tests are available. The US military also conducted such trials over the Pacific Ocean in the 1960s. A US study near Johnston Atoll in the South Pacific reported a plane "sprayed a 32-mile long line of agent that traveled for more than 60 miles before it lost its infectiousness." 20 In 1970, the World Health Organization estimated that 50 kg of B anthracis released over an urban population of 5 million would sicken 250,000 and kill 100,000 (ie, a 5% attack rate, with case fatality of 40% among those infected). 16 A US Congressional Office of Technology Assessment analysis from 1993 estimated that between 130,000 and 3 million deaths would follow the release of 100 kg of B anthracis, a lethality matching that of a hydrogen bomb. 17 # EPIDEMIOLOGY OF ANTHRAX Naturally occurring anthrax in humans is a disease acquired from contact with anthrax-infected animals or anthrax-contaminated animal products. The disease most commonly occurs in herbivores, which are infected after ingesting spores from the soil. Large anthrax epizootics in herbivores have been reported. 21 A published report states that anthrax killed 1 million sheep in Iran in 1945 22 ; this number is supported by an unpublished Iranian governmental document. 23 Animal vaccination programs have drastically reduced animal mortality from the disease. 24 However, B anthracis spores remain prevalent in soil samples throughout the world and cause anthrax cases among herbivores annually. 22,25,26 Anthrax infection occurs in humans by 3 major routes: inhalational, cutaneous, and gastrointestinal. Naturally occurring inhalational anthrax is now rare. Eighteen cases of inhalational anthrax were reported in the United States from 1900 to 1976; none were identified or reported thereafter. Most of these cases occurred in special-risk groups, including goat hair mill, wool, or tannery workers; 2 of them were laboratory associated. 27 Cutaneous anthrax is the most common naturally occurring form, with an estimated 2000 cases reported annually worldwide. 26 The disease typically follows exposure to anthrax-infected animals. In the United States, 224 cases of cutaneous anthrax were reported between 1944 and 1994. 28 One case was reported in 2000. 29 The largest reported epidemic occurred in Zimbabwe between 1979 and 1985, when more than 10,000 human cases of anthrax were reported, nearly all of them cutaneous.
30 Although gastrointestinal anthrax is uncommon, outbreaks are continually reported in Africa and Asia 26,31,32 following ingestion of insufficiently cooked contaminated meat. Two distinct syndromes are oral-pharyngeal and abdominal. 31,33,34 Little information is available about the risks of direct contamination of food or water with B anthracis spores. Experimental efforts to infect primates by direct gastrointestinal instillation of B anthracis spores have not been successful. 35 Gastrointestinal infection could occur only after consumption of large numbers of vegetative cells, such as might be found in raw or undercooked meat from an infected herbivore, but experimental data are lacking. Inhalational anthrax is expected to account for most serious morbidity and most mortality following the use of B anthracis as an aerosolized biological weapon. Given the absence of naturally occurring cases of inhalational anthrax in the United States since 1976, the occurrence of a single case is now cause for alarm. # MICROBIOLOGY B anthracis derives from the Greek word for coal, anthrakis, because of the black skin lesions it causes. B anthracis is an aerobic, gram-positive, spore-forming, nonmotile Bacillus species. The nonflagellated vegetative cell is large (1-8 µm long, 1-1.5 µm wide). Spore size is approximately 1 µm. Spores grow readily on all ordinary laboratory media at 37°C, with a "jointed bamboo-rod" cellular appearance (FIGURE 1) and a unique "curled-hair" colonial appearance. Experienced microbiologists should be able to identify this cellular and colonial morphology; however, few practicing microbiologists outside the veterinary community have seen B anthracis colonies beyond what they may have seen in published material. 37 B anthracis spores germinate when they enter an environment rich in amino acids, nucleosides, and glucose, such as that found in the blood or tissues of an animal or human host. The rapidly multiplying vegetative B anthracis bacilli, in contrast, form spores only after local nutrients are exhausted, such as when anthrax-infected body fluids are exposed to ambient air. 22 Vegetative bacteria have poor survival outside of an animal or human host; colony counts decline to undetectable levels within 24 hours following inoculation into water. 22 This contrasts with the environmentally hardy properties of the B anthracis spore, which can survive for decades in ambient conditions. 37 # Figure 1. Gram Stain of Blood in Culture Media. Gram-positive bacilli in long chains (original magnification ×20); enlargement shows the typical "jointed bamboo-rod" appearance of Bacillus anthracis (original magnification ×100). Reprinted from Borio et al. 36 # PATHOGENESIS AND CLINICAL MANIFESTATIONS Inhalational Anthrax Inhalational anthrax follows deposition of spore-bearing particles in the 1- to 5-µm range into alveolar spaces. 38,39 Macrophages then ingest the spores, some of which are lysed and destroyed. Surviving spores are transported via lymphatics to mediastinal lymph nodes, where germination occurs after a period of spore dormancy of variable and possibly extended duration. 35,40,41 The trigger(s) responsible for the transformation of B anthracis spores to vegetative cells is not fully understood. 42 In Sverdlovsk, cases occurred from 2 to 43 days after exposure. 18 In experimental infection of monkeys, fatal disease occurred up to 58 days 40 and 98 days 43 after exposure.
Viable spores were demonstrated in the mediastinal lymph nodes of 1 monkey 100 days after exposure. 44 Once germination occurs, clinical symptoms follow rapidly. Replicating B anthracis bacilli release toxins that lead to hemorrhage, edema, and necrosis. 32,45 In experimental animals, once toxin production has reached a critical threshold, death occurs even if sterility of the bloodstream is achieved with antibiotics. 27 Extrapolations from animal data suggest that the human LD 50 (ie, the dose sufficient to kill 50% of persons exposed to it) is 2500 to 55,000 inhaled B anthracis spores. 46 The LD 10 was as low as 100 spores in 1 series of monkeys. 43 Recently published extrapolations from primate data suggest that as few as 1 to 3 spores may be sufficient to cause infection. 47 The dose of spores that caused infection in any of the 11 patients with inhalational anthrax in 2001 could not be estimated, although the 2 cases of fatal inhalational anthrax in New York City and Connecticut provoked speculation that the fatal dose, at least in some individuals, may be quite low. A number of factors contribute to the pathogenesis of B anthracis, which makes 3 toxin components (protective antigen, lethal factor, and edema factor) that combine to form 2 toxins: lethal toxin and edema toxin (FIGURE 2). The protective antigen allows the binding of lethal and edema factors to the affected cell membrane and facilitates their subsequent transport across the cell membrane. Edema toxin impairs neutrophil function in vivo and affects water homeostasis, leading to edema; lethal toxin causes release of tumor necrosis factor α and interleukin 1β, factors that are believed to be linked to the sudden death in severe anthrax infection. 48 The molecular target of lethal and edema factors within the affected cell has not yet been elucidated. 49 In addition to these virulence factors, B anthracis has a capsule that prevents phagocytosis. Full virulence requires the presence of both an antiphagocytic capsule and the 3 toxin components. 37 An additional factor contributing to B anthracis pathogenesis is the high concentration of bacteria occurring in affected hosts. 49 # Figure 2 (caption; one panel is titled "Gastrointestinal"). The major known virulence factors of B anthracis include the exotoxins edema toxin (PA and EF) and lethal toxin (PA and LF) and the antiphagocytic capsule. Although many exact molecular mechanisms involved in the pathogenicity of the anthrax toxins are uncertain, they appear to inhibit immune function, interrupt intracellular signaling pathways, and lyse cell targets, causing massive release of proinflammatory mediators. ATP indicates adenosine triphosphate; cAMP, cyclic adenosine monophosphate; MAPKK, mitogen-activated protein kinase kinase; and MAPK, mitogen-activated protein kinase. Inhalational anthrax reflects the nature of acquisition of the disease. The term anthrax pneumonia is misleading because typical bronchopneumonia does not occur. Postmortem pathological studies of patients from Sverdlovsk showed that all patients had hemorrhagic thoracic lymphadenitis, hemorrhagic mediastinitis, and pleural effusions. About half had hemorrhagic meningitis. None of these autopsies showed evidence of a bronchoalveolar pneumonic process, although 11 of 42 patient autopsies had evidence of a focal, hemorrhagic, necrotizing pneumonic lesion analogous to the Ghon complex associated with tuberculosis. 50 These findings are consistent with other human case series and experimentally induced inhalational anthrax in animals. 40,51,52
A recent reanalysis of pathology specimens from 41 of the Sverdlovsk patients was notable primarily for the presence of necrotizing hemorrhagic mediastinitis; pleural effusions averaging 1700 mL in quantity; meningitis in 50%; arteritis and arterial rupture in many; and the lack of prominent pneumonitis. B anthracis was recovered in concentrations of up to 100 million colony-forming units per milliliter in blood and spinal fluid. 53 In animal models, physiological sequelae of severe anthrax infection have included hypocalcemia, profound hypoglycemia, hyperkalemia, depression and paralysis of the respiratory center, hypotension, anoxia, respiratory alkalosis, and terminal acidosis, 54,55 suggesting that, besides the rapid administration of antibiotics, survival might improve with vigilant correction of electrolyte disturbances and acid-base imbalance, glucose infusion, and early mechanical ventilation and vasopressor administration. Historical Data. Early diagnosis of inhalational anthrax is difficult and requires a high index of suspicion. Prior to the 2001 attacks, clinical information was limited to a series of 18 cases reported in the 20th century and the limited data from Sverdlovsk. The clinical presentation of inhalational anthrax had been described as a 2-stage illness. Patients reportedly first developed a spectrum of nonspecific symptoms, including fever, dyspnea, cough, headache, vomiting, chills, weakness, abdominal pain, and chest pain. 18,27 Signs of illness and laboratory studies were nonspecific. This stage of illness lasted from hours to a few days. In some patients, a brief period of apparent recovery followed. Other patients progressed directly to the second, fulminant stage of illness. 4,27,56 This second stage was reported to have developed abruptly, with sudden fever, dyspnea, diaphoresis, and shock. Massive lymphadenopathy and expansion of the mediastinum led to stridor in some cases. 57,58 A chest radiograph most often showed a widened mediastinum consistent with lymphadenopathy. 57 Up to half of patients developed hemorrhagic meningitis with concomitant meningismus, delirium, and obtundation. In this second stage, cyanosis and hypotension progressed rapidly; death sometimes occurred within hours. 4,27,56 In the 20th-century series of US cases, the mortality rate of occupationally acquired inhalational anthrax was 89%, but the majority of these cases occurred before the development of critical care units and, in most cases, before the advent of antibiotics. 27 At Sverdlovsk, it had been reported that 68 of the 79 patients with inhalational anthrax died. 18 However, a separate report from a hospital physician recorded 358 ill with 45 dead; another recorded 48 deaths among 110 patients. 59 A recent analysis of available Sverdlovsk data suggests there may have been as many as 250 cases with 100 deaths. 60 Sverdlovsk patients who had onset of disease 30 or more days after release of organisms had a higher reported survival rate than those with earlier disease onset. Antibiotics, antianthrax globulin, corticosteroids, mechanical ventilation, and vaccine were used to treat some residents in the affected area after the accident, but it is not known how many were given vaccine or antibiotics, which patients received these interventions, or when. It is also uncertain whether the B anthracis strain (or strains) to which patients were exposed was susceptible to the antibiotics used during the outbreak.
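To make the conflicting mortality figures above concrete, the case-fatality proportions implied by each report can be computed directly; the worked arithmetic below is illustrative only and uses the counts quoted in this section.

```latex
% Case-fatality proportions implied by the counts quoted above (illustrative).
\begin{align*}
\text{Sverdlovsk, initial report:} \quad & 68/79 \approx 86\% \\
\text{Sverdlovsk, hospital physician's record:} \quad & 45/358 \approx 13\% \\
\text{Sverdlovsk, second record:} \quad & 48/110 \approx 44\% \\
\text{Sverdlovsk, recent reanalysis:} \quad & 100/250 = 40\%
\end{align*}
```

The spread, from roughly 13% to 86%, is one reason the Sverdlovsk mortality data are treated as uncertain.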
However, a community-wide intervention about the 15th day after exposure did appear to diminish the projected attack rate. 60 In fatal cases, the interval between onset of symptoms and death averaged 3 days. This is similar to the disease course and case fatality rate in untreated experimental monkeys, which have developed rapidly fatal disease even after a latency as long as 58 days. 40 2001 Attacks Data. The anthrax attacks of 2001 resulted in 11 cases of inhalational anthrax; 5 of these patients died. Symptoms, signs, and important laboratory data from these patients are listed in TABLE 1. Several clinical findings from the first 10 patients with inhalational anthrax deserve emphasis. 36 Malaise and fever were presenting symptoms in all 10 cases. Cough, nausea, and vomiting were also prominent. Drenching sweats, dyspnea, chest pain, and headache were also seen in a majority of patients. Fever and tachycardia were seen in the majority of patients at presentation, as were hypoxemia and elevations in transaminases. Importantly, all 10 patients had abnormal chest x-ray film results: 7 had mediastinal widening; 7 had infiltrates; and 8 had pleural effusions. Chest computed tomographic (CT) scans showed abnormal results in all 8 patients who had this test: 7 had mediastinal widening; 6 had infiltrates; and 8 had pleural effusions. Data are insufficient to identify factors associated with survival, although early recognition and initiation of treatment and use of more than 1 antibiotic have been suggested as possible factors. 61 For the 6 patients for whom such information is known, the median period from presumed time of exposure to the onset of symptoms was 4 days (range, 4-6 days). Patients sought care a median of 3.5 days after symptom onset. All 4 patients exhibiting signs of fulminant illness prior to antibiotic administration died. 61 Of note, the incubation period of the 2 fatal cases from New York City and Connecticut is not known. # Cutaneous Anthrax Historically, cutaneous anthrax has been known to occur following the deposition of the organism into skin; previous cuts or abrasions made one especially susceptible to infection. 30,67 Areas of exposed skin, such as arms, hands, face, and neck, were the most frequently affected. In Sverdlovsk, cutaneous cases occurred only as late as 12 days after the original aerosol release; no reports of cutaneous cases appeared after prolonged latency. 18 After the spore germinates in skin tissues, toxin production results in local edema. An initially pruritic macule or papule enlarges into a round ulcer by the second day. Subsequently, 1- to 3-mm vesicles may appear that discharge clear or serosanguinous fluid containing numerous organisms on Gram stain. As shown in FIGURE 3, development of a painless, depressed, black eschar follows, often associated with extensive local edema. The anthrax eschar dries, loosens, and falls off in the next 1 to 2 weeks. Lymphangitis and painful lymphadenopathy can occur with associated systemic symptoms. Differential diagnosis of eschars includes tularemia, scrub typhus, rickettsial spotted fevers, rat bite fever, and ecthyma gangrenosum. 68 Noninfectious causes of eschars include arachnid bites 63 and vasculitides. Although antibiotic therapy does not appear to change the course of eschar formation and healing, it does decrease the likelihood of systemic disease.
Without antibiotic therapy, the mortality rate has been reported to be as high as 20%; with appropriate antibiotic treatment, death due to cutaneous anthrax has been reported to be rare. 4 Following the anthrax attacks of 2001, there have been 11 confirmed or probable cases of cutaneous anthrax. One case report of cutaneous anthrax resulting from these attacks has been published (Figure 3). 63 This child had no reported evidence of prior visible cuts, abrasions, or lesions at the site of the cutaneous lesion that developed. The mean incubation period for cutaneous anthrax cases diagnosed in 2001 was 5 days, with a range of 1 to 10 days, based on estimated dates of exposure to B anthracis-contaminated letters. Cutaneous lesions occurred on the forearm, neck, chest, and fingers. 69 The only published case report of cutaneous anthrax from the attacks of 2001 is notable for the difficulty in recognition of the disease in a previously healthy 7-month-old, the rapid progression to severe systemic illness despite hospitalization, and clinical manifestations that included microangiopathic hemolytic anemia with renal involvement, coagulopathy, and hyponatremia. 63 Fortunately, this child recovered, and none of the cutaneous cases of anthrax diagnosed after the 2001 attacks were fatal. # Figure 3 (caption). By hospital day 12, a 2-cm black eschar was present in the center of the cutaneous lesion. Reprinted from Freedman et al. 63 # Gastrointestinal Anthrax Some think gastrointestinal anthrax occurs after deposition and germination of spores in the upper or lower gastrointestinal tract. However, considering the rapid transit time in the gastrointestinal tract, it seems more likely that many such cases must result from the ingestion of large numbers of vegetative bacilli from poorly cooked infected meat rather than from spores. In any event, the oral-pharyngeal form of disease results in an oral or esophageal ulcer and leads to the development of regional lymphadenopathy, edema, and sepsis. 31,33 Disease in the lower gastrointestinal tract manifests as primary intestinal lesions occurring predominantly in the terminal ileum or cecum, 50 presenting initially with nausea, vomiting, and malaise and progressing rapidly to bloody diarrhea, acute abdomen, or sepsis. Massive ascites has occurred in some cases of gastrointestinal anthrax. 34 Advanced infection may appear similar to the sepsis syndrome occurring in either inhalational or cutaneous anthrax. 4 Some authors suggest that aggressive medical intervention, as would be recommended for inhalational anthrax, may reduce mortality. Given the difficulty of early diagnosis of gastrointestinal anthrax, however, mortality may be high. 4 Postmortem examinations in Sverdlovsk showed gastrointestinal submucosal lesions in 39 of 42 patients, 50 but all of these patients were also found to have definitive pathologic evidence of an inhalational source of infection. There were no gastrointestinal cases of anthrax diagnosed in either the Sverdlovsk series or following the anthrax attacks of 2001. # DIAGNOSIS The determination of individual patient exposure to B anthracis on the basis of environmental testing is complex, owing to the uncertain specificity and sensitivity of rapid field tests and the difficulty of assessing individual risks of exposure. A patient (or patients) seeking medical treatment for symptoms of inhalational anthrax will likely be the first evidence of a clandestine release of B anthracis as a biological weapon.
Even a single previously healthy patient who becomes acutely ill with a nonspecific febrile illness, whose symptoms and signs are consistent with those listed in Table 1, and whose condition rapidly deteriorates should receive prompt consideration for a diagnosis of anthrax infection. The recognition of cutaneous cases of anthrax may also be the first evidence of an anthrax attack. 70 The likely presence of abnormal findings on either chest x-ray film or chest CT scan is diagnostically important. Although anthrax does not cause a classic bronchopneumonia pathologically, it can cause widened mediastinum, massive pleural effusions, air bronchograms, necrotizing pneumonic lesions, and/or consolidation, as has been noted above. 36,55,56,61 The result can be hypoxemia and chest imaging abnormalities that may or may not be clinically distinguishable from pneumonia. In the anthrax attacks of 2001, each of the first 10 patients had abnormal chest x-ray film results, and each of the 8 patients for whom CT scans were obtained had abnormal results. These included widened mediastinum on chest radiograph and effusions on chest CT scan (FIGURE 4). Such findings in a previously healthy patient with evidence of overwhelming febrile illness or sepsis would be highly suggestive of advanced inhalational anthrax. The bacterial burden may be so great in advanced inhalational anthrax infection that bacilli are visible on Gram stain of peripheral blood, as was seen following the 2001 attacks. The most useful microbiologic test is the standard blood culture, which should show growth in 6 to 24 hours. Each of the 8 patients who had blood cultures obtained prior to initiation of antibiotics had positive blood cultures. 61 However, blood cultures appear to be sterilized after even 1 or 2 doses of antibiotics, underscoring the importance of obtaining cultures prior to initiation of antibiotic therapy (J. Gerberding, oral communication, March 7, 2002). If the laboratory has been alerted to the possibility of anthrax, biochemical testing and review of colonial morphology could provide a preliminary diagnosis 12 to 24 hours after inoculation of the cultures. Definitive diagnosis could be promptly confirmed by a Laboratory Response Network (LRN) laboratory. However, if the clinical laboratory has not been alerted to the possibility of anthrax, B anthracis may not be correctly identified. Routine procedures customarily identify a Bacillus species in a blood culture approximately 24 hours after growth, but some laboratories do not further identify Bacillus species unless specifically requested. This is because the isolation of Bacillus species most often represents growth of the common contaminant Bacillus cereus. 71 Given the possibility of future anthrax attacks, it is recommended that routine clinical laboratory procedures be modified so that B anthracis is specifically excluded after identification of a Bacillus species bacteremia, unless there are compelling reasons not to do so. If it cannot be excluded, then the isolate should be transferred to an LRN laboratory. Sputum culture and Gram stain are unlikely to be diagnostic of inhalational anthrax, given the frequent lack of a pneumonic process. 37 Gram stain of sputum was reported positive in only 1 case of inhalational anthrax in the 2001 series. If cutaneous anthrax is suspected, a Gram stain and culture of vesicular fluid should be obtained.
If the Gram stain is negative or the patient is already taking antibiotics, punch biopsy should be performed, and specimens sent to a laboratory with the ability to perform immunohistochemical staining or polymerase chain reaction assays. 69,70 Blood cultures should be obtained, and antibiotics should be initiated pending confirmation of the diagnosis of inhalational or cutaneous anthrax. Nasal swabs were obtained in some persons believed to be at risk of inhalational anthrax following the anthrax attacks of 2001. Although a study has demonstrated B anthracis spores in the nares of some monkeys for some time after experimental exposure, 72 the predictive value of the nasal swab test for diagnosing inhalational anthrax in humans is unknown and untested. It is not known how quickly antibiotics make spore recovery on nasal swab tests impossible. One patient who died from inhalational anthrax had a negative nasal swab. 36 Thus, the CDC advised in the fall of 2001 that the nasal swab should not be used as a clinical diagnostic test. If obtained for an epidemiological purpose, nasal swab results should not be used to rule out infection in a patient. Persons who have positive nasal swab results for B anthracis should receive a course of postexposure antibiotic prophylaxis, since a positive swab would indicate that the individual had been exposed to aerosolized B anthracis. Antibodies to the protective antigen (PA) of B anthracis, termed anti-PA IgG, have been shown to confer immunity in animal models following anthrax vaccination. 73,74 Anti-PA IgG serologies have been obtained from several of those involved in the 2001 anthrax attacks, but the results of these assays are not yet published. Given the lack of data in humans and the expected period required to develop an anti-PA IgG response, this test should not be used as a diagnostic test for anthrax infection in the acutely ill patient but may be useful for epidemiologic purposes. Postmortem findings are especially important following an unexplained death. Thoracic hemorrhagic necrotizing lymphadenitis and hemorrhagic necrotizing mediastinitis in a previously healthy adult are essentially pathognomonic of inhalational anthrax. 50,58 Hemorrhagic meningitis should also raise strong suspicion of anthrax infection. 32,50,58,75 However, given the rarity of anthrax, a pathologist might not identify these findings as caused by anthrax unless previously alerted to this possibility. If only a few patients present contemporaneously, the clinical similarity of early inhalational anthrax infection to other acute febrile respiratory infections may delay initial diagnosis, although probably not for long. The severity of the illness and its rapid progression, coupled with unusual radiological findings, possible identification of B anthracis in blood or cerebrospinal fluid, and the unique pathologic findings, should serve as an early alarm. The index case of inhalational anthrax in the 2001 attacks was identified because of an alert clinician who suspected the disease on the basis of large gram-positive bacilli in cerebrospinal fluid in a patient with a compatible clinical illness, and as a result of the subsequent analysis by laboratory staff who had recently undergone bioterrorism preparedness training. 65 # VACCINATION The US anthrax vaccine, named anthrax vaccine adsorbed (AVA), is an inactivated cell-free product, licensed in 1970, and produced by Bioport Corp, Lansing, Mich.
The vaccine is licensed to be given in a 6-dose series. In 1997, it was mandated that all US military active- and reserve-duty personnel receive it. 76 The vaccine is made from the cell-free filtrate of a nonencapsulated attenuated strain of B anthracis. 77 The principal antigen responsible for inducing immunity is the PA. 26,32 In the rabbit model, the quantity of antibody to PA has been correlated with the level of protection against experimental anthrax infection. 78 Preexposure vaccination with AVA has been shown to be efficacious against experimental challenge in a number of animal studies. A similar vaccine was shown in a placebo-controlled human trial to be efficacious against cutaneous anthrax. 81 The efficacy of postexposure vaccination with AVA has been studied in monkeys. 40 Among 60 monkeys exposed to 8 LD 50 of B anthracis spores at baseline, 9 of 10 control animals died, and 8 of 10 animals treated with vaccine alone died. None of 29 animals died while receiving doxycycline, ciprofloxacin, or penicillin for 30 days; 5 developed anthrax once treatment ceased. The remaining 24 all died when rechallenged. The 9 receiving doxycycline for 30 days plus vaccine at baseline and day 14 after exposure did not die from anthrax infection even after being rechallenged. 40 The safety of the anthrax vaccine has been the subject of much study. A recent report reviewed the results of surveillance for adverse events in the Department of Defense program of 1998-2000. 82 At the time of that report, 425,976 service members had received 1,620,793 doses of AVA. There were higher rates of local reactions to the vaccine in women than men, but "no patterns of unexpected local or systemic adverse events" were identified. 82 A recent review of the safety of AVA vaccination in employees of the United States Army Medical Research Institute of Infectious Diseases (USAMRIID) over the past 25 years reported that 1583 persons had received 10,722 doses of AVA. 83 One percent of these inoculations (101/10,722) were associated with 1 or more systemic events (defined as headache, malaise, myalgia, fever, nausea, vomiting, dizziness, chills, diarrhea, hives, anorexia, arthralgias, diaphoresis, blurred vision, generalized itching, or sore throat). The most frequently reported systemic adverse event was headache (0.4% of doses). Local or injection site reactions were reported in 3.6%. No long-term sequelae were reported in this series. The Institute of Medicine (IOM) recently published a report on the safety and efficacy of AVA, 84 which concluded that AVA is effective against inhalational anthrax and that, if given with appropriate antibiotic therapy, it may help prevent the development of disease after exposure. The IOM committee also concluded that AVA was acceptably safe. Committee recommendations for new research include studies to describe the relationship between immunity and quantitative antibody levels; further studies to test the efficacy of AVA in combination with antibiotics in preventing inhalational anthrax infection; studies of alternative routes and schedules of administration of AVA; and continued monitoring of reported adverse events following vaccination. The committee did not evaluate the production process used by the manufacturer.
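As a quick check on the USAMRIID figures just cited, the reported 1% systemic-event rate and the average number of doses per vaccinee follow directly from the quoted counts; the arithmetic below is illustrative only.

```latex
% Rates implied by the USAMRIID review (illustrative arithmetic).
\begin{align*}
\text{Systemic events per dose:} \quad & 101/10{,}722 \approx 0.0094 \approx 1\% \\
\text{Mean doses per vaccinee:} \quad & 10{,}722/1583 \approx 6.8
\end{align*}
```

The average of nearly 7 doses per person is roughly in line with the licensed 6-dose series described at the start of this section.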
A recently published report 85 analyzed a cohort of 4092 women at 2 military bases from January 1999 to March 2000. The study compared pregnancy rates and adverse birth outcomes between women who had been vaccinated and women who had not, and it found that anthrax vaccination with AVA had no effect on pregnancy or adverse birth outcomes. A human live attenuated vaccine has been produced and used in countries of the former Soviet Union. 86 In the Western world, live attenuated vaccines have been considered unsuitable for use in humans because of safety concerns. 86 Current vaccine supplies are limited, and the US production capacity remains modest. Bioport is the single US manufacturing facility for the licensed anthrax vaccine. Production has only recently resumed after a halt during which the company was required to alter production methods to conform to the US Food and Drug Administration (FDA) Good Manufacturing Practice standard. Bioport has a contract to produce 4.6 million doses of vaccine for the US Department of Defense that cannot be met until at least 2003 (D. A. Henderson, oral communication, February 2002). The use of AVA was not initiated immediately in persons believed to have been exposed to B anthracis during the 2001 anthrax attacks for a variety of reasons, including the unavailability of vaccine supplies. Subsequently, near the end of the 60-day period of antibiotic prophylaxis, persons deemed by investigating public health authorities to have been at high risk for exposure were offered a postexposure AVA series (3 inoculations at 2-week intervals, given on days 1, 14, and 28) as an adjunct to prolonged postexposure antibiotic prophylaxis. This group of affected persons was also offered the alternatives of continuing a prolonged course of antibiotics or of receiving close medical follow-up without vaccination or additional antibiotics. 87 This vaccine is licensed for use in the preexposure setting, but because it had not been licensed for use in the postexposure context, it was given under investigational new drug procedures. The working group continues to conclude that vaccination of exposed persons following a biological attack, in conjunction with antibiotic administration for 60 days following exposure, provides optimal protection to those exposed. However, until ample reserve stockpiles of vaccine are available, reliance must be placed on antibiotic administration. To date, there have been no reported cases of anthrax infection among those exposed in the 2001 anthrax attacks who took prophylactic antibiotics, even among those persons who did not complete the 60-day course of therapy. Preexposure vaccination of some persons deemed to be in high-risk groups should be considered when substantial supplies of vaccine become available. A fast-track program to develop a recombinant anthrax vaccine is now under way. This may lead to more plentiful vaccine stocks as well as a product that requires fewer inoculations. 88 Studies to evaluate intramuscular vs subcutaneous routes of administration and less frequent dosing of AVA are also under way (J. Hughes, oral communication, February 2002). # THERAPY Recommendations for antibiotic and vaccine use in the setting of an aerosolized B anthracis attack are conditioned by a very small series of cases in humans, a limited number of studies in experimental animals, and the possible necessity of treating large numbers of casualties. A number of possible therapeutic strategies have yet to be explored experimentally or to be submitted for approval to the FDA.
For these reasons, the working group offers consensus recommendations based on the best available evidence. The recommendations do not necessarily represent uses currently approved by the FDA or an official position on the part of any of the federal agencies whose scientists participated in these discussions, and they will need to be revised as further relevant information becomes available. Given the rapid course of symptomatic inhalational anthrax, early antibiotic administration is essential. A delay of antibiotic treatment for patients with anthrax infection may substantially lessen chances for survival. 89,90 Given the difficulty in achieving rapid microbiologic diagnosis of anthrax, all persons in high-risk groups who develop fever or evidence of systemic disease should start receiving therapy for possible anthrax infection as soon as possible while awaiting the results of laboratory studies. There are no controlled clinical studies for the treatment of inhalational anthrax in humans. Thus, antibiotic regimens commonly recommended for empirical treatment of sepsis have not been studied. In fact, natural strains of B anthracis are resistant to many of the antibiotics used in empirical regimens for sepsis treatment, such as those regimens based on the extended-spectrum cephalosporins. 91,92 Most naturally occurring B anthracis strains are sensitive to penicillin, which historically has been the preferred anthrax therapy. Doxycycline is the preferred option among the tetracycline class because of its proven efficacy in monkey studies 56 and its ease of administration. Other members of this class of antibiotics are suitable alternatives. Although treatment of anthrax infection with ciprofloxacin has not been studied in humans, animal models suggest excellent efficacy. 40,56,93 In vitro data suggest that other fluoroquinolone antibiotics would have equivalent efficacy, although no animal data using a primate model of inhalational anthrax are available. 92 Penicillin, doxycycline, and ciprofloxacin are approved by the FDA for the treatment of inhalational anthrax infection, 56,89,90,94 and other antibiotics are under study. Other drugs that are usually active in vitro include clindamycin, rifampin, imipenem, aminoglycosides, chloramphenicol, vancomycin, cefazolin, tetracycline, linezolid, and the macrolides. Reports have been published of a B anthracis strain that was engineered to resist the tetracycline and penicillin classes of antibiotics. 95 Balancing considerations of treatment efficacy with concerns regarding resistance, the working group in 1999 recommended that ciprofloxacin or other fluoroquinolone therapy be initiated in adults with presumed inhalational anthrax infection. 3 It was advised that antibiotic resistance to penicillin- and tetracycline-class antibiotics should be assumed following a terrorist attack until laboratory testing demonstrated otherwise. Once the antibiotic susceptibility of the B anthracis strain of the index case had been determined, the most widely available, efficacious, and least toxic antibiotic was recommended for patients requiring treatment and persons requiring postexposure prophylaxis. Since the publication of the 1999 consensus statement, a study 96 demonstrated the development of in vitro resistance of an isolate of the Sterne strain of B anthracis to ofloxacin (a fluoroquinolone closely related to ciprofloxacin) following subculturing and multiple cell passage.
Following the anthrax attacks of 2001, the CDC 97 offered guidelines advocating use of 2 or 3 antibiotics in combination in persons with inhalational anthrax, based on susceptibility testing with epidemic strains. Limited early information following the attacks suggested that persons with inhalational anthrax treated intravenously with 2 or more antibiotics active against B anthracis had a greater chance of survival. 61 Given the limited number of persons who developed inhalational anthrax, the paucity of comparative data, and other uncertainties, it remains unclear whether the use of 2 or more antibiotics confers a survival advantage, but combination therapy is a reasonable therapeutic approach in the face of life-threatening illness. Another factor supporting the initiation of combination antibiotic therapy for treatment of inhalational anthrax is the possibility that an engineered strain of B anthracis resistant to 1 or more antibiotics might be used in a future attack. Some infectious disease experts have also advocated the use of clindamycin, citing the theoretical benefit of diminishing bacterial toxin production, a strategy used in some toxin-mediated streptococcal infections. 98 There are no data as yet that bear specifically on this question. Central nervous system penetration is another consideration; doxycycline or the fluoroquinolones may not reach therapeutic levels in the cerebrospinal fluid. Thus, in the aftermath of the anthrax attacks, some infectious disease authorities recommended preferential use of ciprofloxacin over doxycycline, plus augmentation with chloramphenicol, rifampin, or penicillin when meningitis is established or suspected. The B anthracis isolate recovered from patients with inhalational anthrax was susceptible to all of the antibiotics expected in a naturally occurring strain. 97 This isolate showed an inducible β-lactamase in addition to a constitutive cephalosporinase. The importance of the inducible β-lactamase is unknown; these strains are highly susceptible to penicillin in vitro, with minimum inhibitory concentrations less than 0.06 µg/mL. A theoretical concern is that this sensitivity could be overcome with a large bacterial burden. For this reason, the CDC advised that patients with inhalational anthrax should not be treated with penicillin or amoxicillin as monotherapy and that ciprofloxacin or doxycycline be considered the standards based on in vitro activity, efficacy in the monkey model, and FDA approval. In the contained casualty setting (a situation in which a modest number of patients require therapy), the working group supports these new CDC antibiotic recommendations 97 (TABLE 3) and advises the use of intravenous antibiotic administration. These recommendations will need to be revised as new data become available. If the number of persons requiring therapy following a bioterrorist attack with anthrax is sufficiently high (ie, a mass casualty setting), the working group recognizes that combination drug therapy and intravenous therapy may no longer be possible for reasons of logistics and/or exhaustion of equipment and antibiotic supplies. In such circumstances, oral therapy may be the only feasible option (TABLE 4). The threshold number of cases at which combination and parenteral therapy become impossible depends on a variety of factors, including local and regional health care resources. In experimental animals, antibiotic therapy during anthrax infection has prevented development of an immune response. 40,95
This suggests that even if the antibiotic-treated patient survives anthrax infection, the risk of recurring disease may persist for a prolonged period because of the possibility of delayed germination of spores. Therefore, we recommend that antibiotic therapy be continued for at least 60 days postexposure, with oral therapy replacing intravenous therapy when the patient is clinically stable enough to take oral medication. Cutaneous anthrax historically has been treated with oral penicillin. For reasons articulated above, the working group recommends that oral fluoroquinolone or doxycycline in the adult dosage schedules described in TABLE 5 be used to treat cutaneous anthrax until antibiotic susceptibility is proven. Amoxicillin is a suitable alternative if there are contraindications to fluoroquinolones or doxycycline, such as pregnancy, lactating mother, age younger than 18 years, or antibiotic intolerance. For cutaneous lesions associated with extensive edema or for cutaneous lesions of the head and neck, clinical management should be conservative, as per the inhalational anthrax treatment guidelines in Table 3. Although previous guidelines have suggested treating cutaneous anthrax for 7 to 10 days, 32,71 the working group recommends treatment for 60 days postexposure in the setting of bioterrorism, given the presumed concomitant inhalational exposure to the primary aerosol. Treatment of cutaneous anthrax generally prevents progression to systemic disease although it does not prevent the formation and evolution of the eschar. Topical therapy is not useful. 4 In addition to penicillin, the fluoroquinolones, and the tetracycline class of antibiotics, other antibiotics effective in vitro include chloramphenicol, clindamycin, extended-spectrum penicillins, macrolides, aminoglycosides, vancomycin, cefazolin, and other first-generation cephalosporins. 91,99 The efficacy of these antibiotics has not yet been tested in humans or animal studies.

Table 3 footnotes:
Steroids may be considered as an adjunct therapy for patients with severe edema and for meningitis based on experience with bacterial meningitis of other etiologies.
d Other agents with in vitro activity include rifampin, vancomycin, penicillin, ampicillin, chloramphenicol, imipenem, clindamycin, and clarithromycin. Because of concerns of constitutive and inducible β-lactamases in Bacillus anthracis, penicillin and ampicillin should not be used alone. Consultation with an infectious disease specialist is advised.
e Initial therapy may be altered based on clinical course of the patient; 1 or 2 antimicrobial agents may be adequate as the patient improves.
f If meningitis is suspected, doxycycline may be less optimal because of poor central nervous system penetration.
g If intravenous (IV) ciprofloxacin is not available, oral ciprofloxacin may be acceptable because it is rapidly and well absorbed from the gastrointestinal tract with no substantial loss by first-pass metabolism. Maximum serum concentrations are attained 1 to 2 hours after oral dosing but may not be achieved if vomiting or ileus is present.
h In children, ciprofloxacin dosage should not exceed 1 g/d.
i The American Academy of Pediatrics recommends treatment of young children with tetracyclines for serious infections (ie, Rocky Mountain spotted fever).
j Because of the potential persistence of spores after an aerosol exposure, antimicrobial therapy should be continued for 60 days.
k Although tetracyclines are not recommended during pregnancy, their use may be indicated for life-threatening illness. Adverse effects on developing teeth and bones of the fetus are dose related; therefore, doxycycline might be used for a short time (7-14 days) before 6 months of gestation. The high death rate from the infection outweighs the risk posed by the antimicrobial agent.
The working group recommends the use of these antibiotics only to augment fluoroquinolones or tetracyclines or if the preferred drugs are contraindicated, not available, or inactive in vitro in susceptibility testing. B anthracis strains exhibit natural resistance to sulfamethoxazole, trimethoprim, cefuroxime, cefotaxime sodium, aztreonam, and ceftazidime. 91,92,99 Therefore, these antibiotics should not be used. Pleural effusions were present in all of the first 10 patients with inhalational anthrax in 2001; 7 needed drainage of their pleural effusions, and 3 required chest tubes. 69 Future patients with inhalational anthrax should be expected to have pleural effusions that will likely require drainage.

# Postexposure Prophylaxis

Guidelines for which populations would require postexposure prophylaxis to prevent inhalational anthrax following the release of a B anthracis aerosol as a biological weapon will need to be developed by public health officials depending on epidemiological circumstances. These decisions would require estimates of the timing, location, and conditions of the exposure. 100 Ongoing case monitoring would be needed to define the high-risk groups, to direct follow-up, and to guide the addition or deletion of groups requiring postexposure prophylaxis. There are no FDA-approved postexposure antibiotic regimens following exposure to a B anthracis aerosol. Therefore, for postexposure prophylaxis, we recommend the same antibiotic regimen as that recommended for treatment of mass casualties; prophylaxis should be continued for at least 60 days postexposure (Table 4). Preliminary analysis of US postal workers who were advised to take 60 days of antibiotic prophylaxis for exposure to B anthracis spores following the anthrax attacks of 2001 showed that 2% sought medical attention because of concern of possible severe allergic reactions related to the medications, but no persons required hospitalization because of an adverse drug reaction. 101 Many persons did not begin or complete their recommended antibiotic course for a variety of reasons, including gastrointestinal tract intolerance, underscoring the need for careful medical follow-up during the period of prophylaxis. 101 In addition, given the uncertainties regarding how many weeks or months spores may remain latent in the period following discontinuation of postexposure prophylaxis, persons should be instructed to report flulike symptoms or febrile illness immediately to their physicians, who should then evaluate the need to initiate treatment for possible inhalational anthrax. As noted above, postexposure vaccination is recommended as an adjunct to postexposure antibiotic prophylaxis if vaccine is available.

Table 5 footnotes:
* Cutaneous anthrax with signs of systemic involvement, extensive edema, or lesions on the head or neck requires intravenous therapy, and a multidrug approach is recommended (Table 3).
† Ciprofloxacin or doxycycline should be considered first-line therapy. Amoxicillin can be substituted if a patient cannot take a fluoroquinolone or tetracycline class drug. Adults are recommended to take 500 mg of amoxicillin orally 3 times a day. For children, 80 mg/kg of amoxicillin divided into 3 doses at 8-hour intervals is an option for completion of therapy after clinical improvement. The oral amoxicillin dose is based on the need to achieve appropriate minimum inhibitory concentration levels.
‡ Previous guidelines have suggested treating cutaneous anthrax for 7 to 10 days, but 60 days is recommended for bioterrorism attacks, given the likelihood of exposure to aerosolized Bacillus anthracis.
§ The American Academy of Pediatrics recommends treatment of young children with tetracyclines for serious infections (eg, Rocky Mountain spotted fever). Although tetracyclines or ciprofloxacin is not recommended during pregnancy, their use may be indicated for life-threatening illness. Adverse effects on developing teeth and bones of a fetus are dose related; therefore, doxycycline might be used for a short time (7-14 days) before 6 months of gestation.

Table 4 footnotes:
* Some of these recommendations are based on animal studies or in vitro studies and are not approved by the US Food and Drug Administration.
† In vitro studies suggest ofloxacin (400 mg orally every 12 hours) or levofloxacin (500 mg orally every 24 hours) could be substituted for ciprofloxacin.
‡ In vitro studies suggest that 500 mg of tetracycline orally every 6 hours could be substituted for doxycycline. In addition, 400 mg of gatifloxacin or moxifloxacin, both fluoroquinolones with mechanisms of action consistent with ciprofloxacin, taken orally daily could be substituted.
§ According to the Centers for Disease Control and Prevention recommendations, amoxicillin is suitable for postexposure prophylaxis only after 10 to 14 days of fluoroquinolone or doxycycline treatment and then only if there are contraindications to these 2 classes of medications (eg, pregnancy, lactating mother, age <18 years, or intolerance of other antibiotics). Doxycycline could also be used if antibiotic susceptibility testing, exhaustion of drug supplies, or adverse reactions preclude use of ciprofloxacin. For children heavier than 45 kg, the adult dosage should be used. For children lighter than 45 kg, 2.5 mg/kg of doxycycline orally every 12 hours should be used.
¶ See "Management of Pregnant Population" for details.

# Management of Special Groups

Consensus recommendations for special groups as set forth herein reflect the clinical and evidence-based judgments of the working group and at this time do not necessarily correspond with FDA-approved use, indications, or labeling.

Children. It has been recommended that ciprofloxacin and other fluoroquinolones should not be used in children younger than 16 to 18 years because of a link to permanent arthropathy in adolescent animals and transient arthropathy in a small number of children. 94 However, balancing these risks against the risks of anthrax infection caused by an engineered antibiotic-resistant strain, the working group recommends that ciprofloxacin be used as a component of combination therapy for children with inhalational anthrax. For postexposure prophylaxis or following a mass casualty attack, monotherapy with fluoroquinolones is recommended by the working group 97 (Table 4). The American Academy of Pediatrics has recommended that doxycycline not be used in children younger than 9 years because the drug has resulted in retarded skeletal growth in infants and discolored teeth in infants and children. 94
However, the serious risk of infection following an anthrax attack supports the consensus recommendation that doxycycline be used in children if antibiotic susceptibility testing, exhaustion of drug supplies, or adverse reactions preclude use of ciprofloxacin. According to CDC recommendations, amoxicillin was suitable for treatment or postexposure prophylaxis of possible anthrax infection following the anthrax attacks of 2001 only after 14 to 21 days of fluoroquinolone or doxycycline administration, because of the concern about the presence of a β-lactamase. 102 In a contained casualty setting, the working group recommends that children with inhalational anthrax receive intravenous antibiotics (Table 3). In a mass casualty setting and as postexposure prophylaxis, the working group recommends that children receive oral antibiotics (Table 4). The US anthrax vaccine is licensed for use only in persons aged 18 to 65 years because studies to date have been conducted exclusively in this group. 77 No data exist for children, but based on experience with other inactivated vaccines, it is likely that the vaccine would be safe and effective.

Pregnant Women. Fluoroquinolones are not generally recommended during pregnancy because of their known association with arthropathy in adolescent animals and in small numbers of children. Animal studies have found no evidence of teratogenicity related to ciprofloxacin, but no controlled studies of ciprofloxacin in pregnant women have been conducted. Balancing these possible risks against the concerns of anthrax due to engineered antibiotic-resistant strains, the working group recommends that pregnant women receive ciprofloxacin as part of combination therapy for treatment of inhalational anthrax (Table 3). We also recommend that pregnant women receive fluoroquinolones in the usual adult dosages for postexposure prophylaxis or monotherapy treatment in the mass casualty setting (Table 4). The tetracycline class of antibiotics has been associated with both toxic effects in the liver in pregnant women and fetal toxic effects, including retarded skeletal growth. 94 Balancing the risks of anthrax infection with those associated with doxycycline use in pregnancy, the working group recommends that doxycycline can be used as an alternative to ciprofloxacin as part of combination therapy in pregnant women for treatment of inhalational anthrax. For postexposure prophylaxis or in mass casualty settings, doxycycline can also be used as an alternative to ciprofloxacin in pregnant women. If doxycycline is used in pregnant women, periodic liver function testing should be performed. No adequate controlled trials of penicillin or amoxicillin administration during pregnancy exist. However, the CDC recommends penicillin for the treatment of syphilis during pregnancy and amoxicillin as a treatment alternative for chlamydial infections during pregnancy. 94 According to CDC recommendations, amoxicillin is suitable for postexposure prophylaxis or treatment of inhalational anthrax in pregnancy only after 14 to 21 days of fluoroquinolone or doxycycline administration. 102 Ciprofloxacin (and other fluoroquinolones), penicillin, and doxycycline (and other tetracyclines) are each excreted in breast milk. Therefore, a breastfeeding woman should be treated or given prophylaxis with the same antibiotic as her infant, based on what is most safe and effective for the infant.

Immunosuppressed Persons.
The antibiotic treatment or postexposure prophylaxis for anthrax among persons who are immunosuppressed has not been studied in human or animal models of anthrax infection. Therefore, the working group recommends administering antibiotics in the same regimens recommended for immunocompetent adults and children.

# INFECTION CONTROL

There are no data to suggest that patient-to-patient transmission of anthrax occurs, and no person-to-person transmission occurred following the anthrax attacks of 2001. 18,67 Standard barrier isolation precautions are recommended for hospitalized patients with all forms of anthrax infection, but the use of high-efficiency particulate air filter masks or other measures for airborne protection is not indicated. 103 There is no need to immunize or provide prophylaxis to patient contacts (eg, household contacts, friends, coworkers) unless a determination is made that they, like the patient, were exposed to the aerosol or surface contamination at the time of the attack. In addition to immediate notification of the hospital epidemiologist and state health department, the local hospital microbiology laboratories should be notified at the first indication of anthrax so that safe specimen processing under biosafety level 2 conditions can be undertaken, as is customary in most hospital laboratories. 56 A number of disinfectants used for standard hospital infection control, such as hypochlorite, are effective in cleaning environmental surfaces contaminated with infected bodily fluids. 22,103 Proper burial or cremation of humans and animals that have died of anthrax infection is important in preventing further transmission of the disease. Serious consideration should be given to cremation. Embalming of bodies could be associated with special risks. 103 If autopsies are performed, all related instruments and materials should be autoclaved or incinerated. 103 The CDC can provide advice on postmortem procedures in anthrax cases.

# DECONTAMINATION

Recommendations for decontamination in the event of an intentional aerosolization of B anthracis spores are based on evidence concerning aerosolization techniques, predicted spore survival, environmental exposures at Sverdlovsk and among goat hair mill workers, and environmental data collected following the anthrax attacks of 2001. The greatest risk to humans exposed to an aerosol of B anthracis spores occurs when spores first are made airborne, the period called primary aerosolization. The aerobiological factors that affect how long spores remain airborne include the size of the dispersed particles and their hydrostatic properties. 100 Technologically sophisticated dispersal methods, such as aerosol release from military aircraft of large quantities of B anthracis spores manipulated for use in a weapon, are potentially capable of exposing high numbers of victims over large areas. Recent research by Canadian investigators has demonstrated that even "low-tech" delivery systems, such as the opening of envelopes containing powdered spores in indoor environments, can rapidly deliver high concentrations of spores to persons in the vicinity. 104 In some circumstances, indoor airflows, activity patterns, and heating, ventilation, and air conditioning systems may transport spores to other parts of the building. Following the period of primary aerosolization, B anthracis spores may settle on surfaces, possibly in high concentrations.
The risk that B anthracis spores might pose by a process of secondary aerosolization (resuspension of spores into the air) is uncertain and is likely dependent on many variables, including the quantity of spores on a surface; the physical characteristics of the powder used in the attack; the type of surface; the nature of the human or mechanical activity that occurs in the affected area; and host factors. A variety of rapid assay kits are available to detect B anthracis spores on environmental surfaces. None of these kits has been independently evaluated or endorsed by the CDC, FDA, or Environmental Protection Agency, and their functional characteristics are not known. 105 Many false-positive results occurred following the anthrax attacks of 2001. Thus, any result using currently available rapid assay kits does not necessarily signify the presence of B anthracis; it is simply an indication that further testing is required by a certified microbiology laboratory. Similarly, the sensitivity and false-negative rate of these kits are unknown. At Sverdlovsk, no new cases of inhalational anthrax developed beyond 43 days after the presumed date of release. None were documented during the months and years afterward, despite only limited decontamination and vaccination of 47,000 of the city's 1 million inhabitants. 59 Some have questioned whether any of the cases with onset of disease beyond 7 days after release might have represented illness following secondary aerosolization from the ground or other surfaces. It is impossible to state with certainty that secondary aerosolizations did not occur in Sverdlovsk, but it appears unlikely. The epidemic curve reported is typical for a common-source epidemic, 3,60 and it is possible to account for virtually all confirmed cases having occurred within the area of the plume on the day of the accident. Moreover, if secondary aerosolization had been important, new cases would likely have continued well beyond the observed 43 days. Although persons working with animal hair or hides are known to be at increased risk of developing inhalational or cutaneous anthrax, surprisingly few occupational exposures in the United States have resulted in disease. During the first half of the 20th century, a significant number of goat hair mill workers were heavily exposed to aerosolized spores. Mandatory vaccination became a requirement for working in goat hair mills only in the 1960s. Prior to that, many unvaccinated person-years of high-risk exposure had occurred, but only 13 cases of inhalational anthrax were reported. 27,54 One study of environmental exposure, conducted at a Pennsylvania goat hair mill, showed that workers inhaled up to 510 B anthracis particles of at least 5 µm in diameter per person per 8-hour shift. 54 These concentrations of spores were constantly present in the environment during the time of this study, but no cases of inhalational anthrax occurred. Field studies using B anthracis-like surrogates have been carried out by US Army scientists seeking to determine the risk of secondary aerosolization. One study concluded that there was no significant threat to personnel in areas contaminated by 1 million spores per square meter either from traffic on asphalt-paved roads or from a runway used by helicopters or jet aircraft. 106 A separate study showed that in areas of ground contaminated with 20 million Bacillus subtilis spores per square meter, a soldier exercising actively for a 3-hour period would inhale between 1,000 and 15,000 spores. 107
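The arithmetic behind inhaled-dose estimates of this kind is simply air concentration multiplied by the volume of air breathed over the exposure period. The sketch below illustrates that calculation in Python; the spore concentration and breathing rate are hypothetical round numbers chosen for illustration, not measurements from the studies cited above, and the LD50 range used for comparison is the primate-based extrapolation cited elsewhere in this statement.

```python
# Illustrative inhaled-dose arithmetic for aerosolized spores.
# All inputs are hypothetical round numbers, not data from the cited studies.

def inhaled_spores(air_conc_per_m3: float, minute_ventilation_l: float,
                   exposure_min: float) -> float:
    """Inhaled dose = air concentration x volume of air breathed."""
    air_breathed_m3 = minute_ventilation_l * exposure_min / 1000.0  # L -> m^3
    return air_conc_per_m3 * air_breathed_m3

# Hypothetical scenario: 100 spores/m^3 of air, moderate exertion
# (~25 L/min minute ventilation), 3-hour exposure.
dose = inhaled_spores(air_conc_per_m3=100, minute_ventilation_l=25,
                      exposure_min=180)
print(f"Inhaled dose: {dose:.0f} spores")  # 450 spores

# Compare with the extrapolated human LD50 range cited in this statement
# (2,500 to 55,000 inhaled spores).
for ld50 in (2500, 55000):
    print(f"Fraction of an LD50 of {ld50}: {dose / ld50:.3f}")
```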
Much has been written about the technical difficulty of decontaminating an environment contaminated with B anthracis spores. A classic case is the experience at Gruinard Island, Scotland. During World War II, the British military undertook explosives testing with B anthracis spores. Spores persisted and remained viable for 36 years following the conclusion of testing. Decontamination of the island occurred in stages, beginning in 1979 and ending in 1987, when the island was finally declared fully decontaminated. The total cost is unpublished, but materials required included 280 tons of formaldehyde and 2000 tons of seawater. 108 Following the anthrax attacks of 2001, substantial efforts were undertaken to decontaminate environmental surfaces exposed to B anthracis spores. Sections of the Hart Senate office building in Washington, DC, contaminated from the opening of a letter laden with B anthracis, were reopened only after months of decontamination procedures at an estimated cost of $23 million. 109 Decontamination efforts at many other buildings affected by the anthrax attacks of 2001 have not yet been completed. Prior to the anthrax attacks of 2001, there had been no recognition or scientific study showing that B anthracis spores of "weapons grade" quality would be capable of leaking out of the edges of envelopes or through the pores of envelopes, with resulting risk to the health of those handling or processing those letters. When it became clear that the Florida case of anthrax was likely caused by a letter contaminated with B anthracis, assessment of postal workers who might have handled or processed that letter showed no illness. 69 When the anthrax cases were discovered, each was linked to a letter that had been opened. At first, there was no evidence of illness among persons handling or processing unopened mail. This fact influenced the judgment that persons handling or processing unopened B anthracis letters were not at risk. These judgments changed when illness was discovered in persons who had handled or processed unopened letters in Washington, DC. Much remains unknown about the risks to persons handling or processing unopened letters containing B anthracis spores. It is not well understood how the mechanical systems of mail processing in a specific building would affect the risk of disease acquisition in a worker handling a contaminated letter in that facility. It is still uncertain what the minimum dose of spores would be to cause infection in humans, although it may theoretically be as few as 1 to 3 spores. 47 The mechanisms of disease acquisition in the 2 fatal inhalational anthrax cases in New York City and in Connecticut remain unknown, although it is speculated that disease in these 2 cases followed the inhalation of small numbers of spores present in some manner in "cross-contaminated" mail. The discovery of B anthracis spores in a contaminated letter in the office of Sen Daschle in the Hart office building led the Environmental Protection Agency to conduct tests in this office to assess the risk of secondary aerosolization of spores. Prior to the initiation of decontamination efforts in the Hart building, 17 blood agar gel plates were placed around the office and normal activity in the office was simulated. Sixteen of the 17 plates yielded B anthracis.
Although this experiment did not allow conclusions about the specific risk of persons developing anthrax infection in this context, it did demonstrate that routine activity in an environment contaminated with B anthracis spores could cause significant spore resuspension. 110 Given the above considerations, if an environmental surface is proved to be contaminated with B anthracis spores in the immediate area of a spill or in close proximity to the point of release of a B anthracis biological weapon, the working group believes that decontamination of that area would likely decrease the risk of acquiring anthrax by secondary aerosolization. However, as has been demonstrated in environmental decontamination efforts following the anthrax attacks of 2001, decontamination of buildings or parts of buildings following an anthrax attack is technically difficult. For these reasons, the working group would advise that decisions about methods for decontamination following an anthrax attack follow full expert analysis of the contaminated environment and the anthrax weapon used in the attack and be made in consultation with experts on environmental remediation. If vaccines were available, postexposure vaccination might be a useful intervention for those working in highly contaminated areas, because it could further lower the risk of anthrax infection. In the setting of an announced alleged B anthracis release, such as the series of anthrax hoaxes occurring in many areas of the United States in 1998 111 and following the anthrax attacks of 2001, any person coming in direct physical contact with a substance alleged to contain B anthracis should thoroughly wash the exposed skin and articles of clothing with soap and water. 112 In addition, any person in direct physical contact with the alleged substance should receive postexposure antibiotic prophylaxis until the substance is proved not to be B anthracis. The anthrax attacks of 2001 and new research 104 have shown that opening letters containing substantial quantities of B anthracis spores can, in certain conditions, confer risk of disease to persons at some distance from the location where the letter was opened. For this reason, when a letter is suspected of containing (or proved to contain) B anthracis, immediate consultation with local and state public health authorities and the CDC for advice on medical management is warranted.

# Additional Research

Development of a recombinant anthrax vaccine that would be more easily manufactured and would require fewer doses should remain a top priority. Rapid diagnostic assays that could reliably identify early anthrax infection and quickly distinguish it from other flulike or febrile illnesses would become critical in the event of a large-scale attack. Simple animal models for use in comparing antibiotic prophylactic and treatment strategies are also needed. Operational research to better characterize risks posed by environmental contamination with spores, particularly inside buildings, and research on approaches to minimize risk in indoor environments by means of air filters or methods for environmental cleaning following a release are also needed. A better understanding of the genetics and pathogenesis of anthrax, as well as mechanisms of virulence and immunity, will be of importance in the prospective evaluation of new therapeutic and diagnostic strategies.
Novel therapeutic approaches with promise, such as the administration of competitors against the protective antigen complex, 113 should also be tested in animals and developed where evidence supports this. Recent developments, such as the publication of the B anthracis genome and the discovery of the crystal structures of lethal factor and edema factor, could hold great clinical promise for both the prevention and treatment of anthrax infection. 114

Funding/Support: Funding for this study primarily was provided by each participant's institution or agency.

Disclaimers: In many cases, the indication and dosages and other information are not consistent with current approved labeling by the US Food and Drug Administration (FDA). The recommendations on the use of drugs and vaccine for uses not approved by the FDA do not represent the official views of the FDA or of any of the federal agencies whose scientists participated in these discussions. Unlabeled uses of the products recommended are noted in the sections of this article in which these products are discussed. Where unlabeled uses are indicated, information used as the basis for the recommendation is discussed. The views, opinions, assertions, and findings contained herein are those of the authors and should not be construed as official US Department of Health and Human Services, US Department of Defense, or US Department of Army positions, policies, or decisions unless so designated by other documentation.

Ciprofloxacin is only approved for "inhalational anthrax (postexposure)" and is not approved by the FDA for the treatment of inhalational anthrax. At this time, then, clinicians have no options that have been approved by the FDA for the treatment of inhalational anthrax. In the absence of FDA approval for any specific treatment for inhalational anthrax, clinicians must rely on other sources of guidance regarding treatment recommendations for this disease process.

Dr Tice recommends consideration of a loading dose of doxycycline and ciprofloxacin in the treatment of inhalational anthrax. We do not believe there is sufficient evidence to support changing our recommendations to include these recommendations. Tetracyclines exhibit persistent time-dependent bactericidal effects; the time above the minimum inhibitory concentration (MIC) predicts therapeutic outcome. 1 Fluoroquinolone antibiotics, on the other hand, exhibit concentration-dependent killing with prolonged persistent effects; the ratio of the area under the curve to the MIC predicts therapeutic outcome. 2 These factors are more important clinically than steady state levels of these drugs. In addition, we are aware of no information that suggests improvement in clinical outcome using loading doses of these classes of antibiotics, and the therapeutic efficacy of the standard recommended dosing regimens for these antibiotics (the same regimens that appear in our consensus paper) has been demonstrated in numerous clinical settings. Until more data regarding improvement in clinical outcomes following the use of loading doses for these antimicrobials exist, we are reluctant to propose any changes in the guidelines.
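The pharmacodynamic distinction invoked in this reply (time above MIC for tetracyclines vs the AUC/MIC ratio for fluoroquinolones) can be made concrete with a toy calculation. The following sketch computes both indices for a single intravenous bolus in a one-compartment model; the dose, volume of distribution, half-life, and MIC are hypothetical illustrative values, not parameters of doxycycline or ciprofloxacin.

```python
import math

# One-compartment IV bolus model: C(t) = (dose / V) * exp(-ke * t).
# All parameters are hypothetical, chosen only to illustrate the two
# pharmacodynamic indices discussed above.
dose_mg = 400.0        # single IV dose
v_liters = 100.0       # volume of distribution
half_life_h = 4.0      # elimination half-life
mic = 1.0              # minimum inhibitory concentration, mg/L
interval_h = 12.0      # dosing interval

ke = math.log(2) / half_life_h

def conc(t: float) -> float:
    return (dose_mg / v_liters) * math.exp(-ke * t)

# Time above MIC: solve C(t) = MIC for t (the index emphasized for
# antibiotics with time-dependent killing, such as the tetracyclines).
c0 = dose_mg / v_liters
t_above_mic = min(interval_h, math.log(c0 / mic) / ke) if c0 > mic else 0.0
print(f"Time above MIC: {t_above_mic:.1f} of {interval_h:.0f} h "
      f"({100 * t_above_mic / interval_h:.0f}% of the interval)")

# AUC over the interval by the trapezoidal rule; AUC/MIC is the index
# emphasized for concentration-dependent killers such as fluoroquinolones.
steps = 1000
dt = interval_h / steps
auc = sum((conc(i * dt) + conc((i + 1) * dt)) / 2 * dt for i in range(steps))
print(f"AUC(0-{interval_h:.0f} h)/MIC: {auc / mic:.0f}")
```

With these inputs the concentration starts at 4 mg/L and stays above the MIC for about 8 of the 12 hours; a loading dose would raise the early concentrations (and the AUC) but not the steady-state level, which is the point the reply makes.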
# CORRECTION

Incorrect Wording: Subsequent to the publication of the Consensus Statement entitled "Anthrax as a Biological Weapon, 2002: Updated Recommendations for Management," published in the May 1, 2002, issue of THE JOURNAL (2002;287:2236-2252), the authors wish to make available the following updates based on information from the US Food and Drug Administration and the Centers for Disease Control and Prevention (CDC). In Table 3 on page 2246, the pediatric dosage of ciprofloxacin for "Initial IV Therapy" for inhalational anthrax in the contained casualty setting should read, "10 mg/kg every 12 h (maximum of 400 mg per dose)" and subsequent oral therapy under "Duration" should be "15 mg/kg per dose taken orally every 12 h (maximum of 500 mg per dose)." The doxycycline dosages for children should be based on weight (ie, > or ≤ 45 kg) and not on age. In Table 4 on page 2247, the pediatric dosage of ciprofloxacin for "Initial Oral Therapy" of inhalational anthrax infection in the mass casualty setting or for postexposure prophylaxis should read, "15 mg/kg per dose taken orally every 12 h (maximum of 500 mg per dose)." The correct dosage of amoxicillin for children who weigh less than 20 kg in a mass casualty setting or for postexposure prophylaxis is "80 mg/kg to be taken orally in 3 divided doses every 8 h." The footnote marked by a section mark (§) in Table 4 should read as follows: "According to the CDC recommendations for the bioterrorist attacks in 2001, in which B anthracis was susceptible to penicillin, amoxicillin was a suitable alternative for postexposure prophylaxis in infants, children, and women who were pregnant or who were breastfeeding. Amoxicillin was also a suitable alternative for completion of 60 days of antibiotic therapy for patients in these groups with cutaneous or inhalational anthrax whose clinical illness had resolved after treatment with a ciprofloxacin- or doxycycline-based regimen (14-21 days for inhalational or complicated cutaneous anthrax; 7-10 days for uncomplicated cutaneous anthrax). Such patients required prolonged therapy because they were presumably exposed to aerosolized B anthracis." In Table 5 on page 2247, the pediatric dosage of ciprofloxacin for treatment of cutaneous anthrax infection should be "15 mg/kg per dose taken orally every 12 h (maximum of 500 mg per dose)." Pediatric doxycycline dosage should be based on weight (ie, > or ≤ 45 kg), not age. The most current versions of Tables 3, 4, and 5 are available online at: .ama-assn.org/cgi/content/full/287/17/2236. The textual changes are as follows: On page 2245, the sentence "Penicillin, doxycycline, and ciprofloxacin are approved by the FDA for the treatment of inhalational anthrax infection, 56,89,90,94 and other antibiotics are under study" should read, "Penicillin and doxycycline are approved by the FDA for the treatment of anthrax. 56,89,90,94 Although neither penicillin, doxycycline, nor ciprofloxacin are specifically approved by the FDA for the treatment of inhalational anthrax, these drugs may be useful when given in combination with other antimicrobial drugs." On page 2247, the sentence in the "Postexposure Prophylaxis" section of the text that says, "There are no FDA-approved postexposure antibiotic regimens following exposure to a B anthracis aerosol" should read, "Ciprofloxacin, doxycycline, and penicillin G procaine are approved by the FDA for postexposure prophylaxis of inhalational anthrax."
On page 2248 in the "Children" subsection, the sentence that begins "According to CDC recommendations . . . " should read "According to the CDC recommendations for the bioterrorist attacks in 2001, in which B anthracis was susceptible to penicillin, amoxicillin was a suitable alternative for postexposure prophylaxis in infants and children (Table 4)." In the "Pregnant Women" subsection, the sentence that begins, "According to the CDC recommendations . . . " should read, "According to the CDC recommendations for the bioterrorist attacks in 2001, in which B anthracis was susceptible to penicillin, amoxicillin was a suitable alternative for postexposure prophylaxis in women who were pregnant or who were breastfeeding (Table 4)."
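Taken together, the corrected pediatric dosages reduce to simple weight-based rules with per-dose caps. A minimal sketch encoding the corrected oral ciprofloxacin rule and the weight-based doxycycline rule, for illustration only and not for clinical use; the 100-mg adult doxycycline dose assumed for children heavier than 45 kg is the standard adult regimen rather than a figure restated in the correction itself.

```python
# Corrected weight-based pediatric oral dosing rules, encoded for
# illustration only -- not clinical software.

def cipro_oral_pediatric_mg(weight_kg: float) -> float:
    """Corrected rule: 15 mg/kg per dose every 12 h, maximum 500 mg per dose."""
    return min(15.0 * weight_kg, 500.0)

def doxy_oral_pediatric_mg(weight_kg: float) -> float:
    """Weight-based rule: 2.5 mg/kg per dose every 12 h if 45 kg or lighter;
    otherwise the standard adult dose (assumed here to be 100 mg)."""
    return 2.5 * weight_kg if weight_kg <= 45.0 else 100.0

for kg in (10, 30, 50):
    print(f"{kg} kg: ciprofloxacin {cipro_oral_pediatric_mg(kg):.0f} mg, "
          f"doxycycline {doxy_oral_pediatric_mg(kg):.0f} mg, each every 12 h")
```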
Objective To review and update consensus-based recommendations for medical and public health professionals following a Bacillus anthracis attack against a civilian population. Participants The working group included 23 experts from academic medical centers, research organizations, and governmental, military, public health, and emergency management institutions and agencies.

Of the biological agents that may be used as weapons, the Working Group on Civilian Biodefense identified a limited number of organisms that, in worst case scenarios, could cause disease and deaths in sufficient numbers to gravely impact a city or region. Bacillus anthracis, the bacterium that causes anthrax, is one of the most serious of these. Several countries are believed to have offensive biological weapons programs, and some independent terrorist groups have suggested their intent to use biological weapons. Because the possibility of a terrorist attack using bioweapons is especially difficult to predict, detect, or prevent, it is among the most feared terrorism scenarios. 1 In September 2001, B anthracis spores were sent to several locations via the US Postal Service. Twenty-two confirmed or suspect cases of anthrax infection resulted. Eleven of these were inhalational cases, of whom 5 died; 11 were cutaneous cases (7 confirmed, 4 suspected). 2 In this article, these attacks are termed the anthrax attacks of 2001. The consequences of these attacks substantiated many findings and recommendations in the Working Group on Civilian Biodefense's previous consensus statement published in 1999 3 ; however, the new information from these attacks warrants updating the previous statement. Before the anthrax attacks in 2001, modern experience with inhalational anthrax was limited to an epidemic in Sverdlovsk, Russia, in 1979, following an unintentional release of B anthracis spores from a Soviet bioweapons factory, and to 18 occupational exposure cases in the United States during the 20th century. Information about the potential impact of a large, covert attack using B anthracis or the possible efficacy of postattack vaccination or therapeutic measures remains limited. Policies and strategies continue to rely partially on interpretation and extrapolation from an incomplete and evolving knowledge base.

# CONSENSUS METHODS

The working group comprised 23 representatives from academic medical centers; research organizations; and government, military, public health, and emergency management institutions and agencies. For the original consensus statement, 3 we searched MEDLINE databases from January 1966 to April 1998 using Medical Subject Headings of anthrax, Bacillus anthracis, biological weapon, biological terrorism, biological warfare, and biowarfare. Reference review identified work published before 1966. Working group members identified unpublished sources. The first consensus statement, published in 1999, 3 followed a synthesis of the information and revision of 3 drafts. We reviewed anthrax literature again in January 2002, with special attention to articles following the anthrax attacks of 2001. Members commented on a revised document; proposed revisions were incorporated with the working group's support for the final consensus document. The assessment and recommendations provided herein represent our best professional judgment based on current data and expertise. The conclusions and recommendations need to be regularly reassessed as new information develops.
# HISTORY OF CURRENT THREAT

For centuries, B anthracis has caused disease in animals and serious illness in humans. 4 Research on anthrax as a biological weapon began more than 80 years ago. 5 Most national offensive bioweapons programs were terminated following widespread ratification or signing of the Biological Weapons Convention (BWC) in the early 1970s 6 ; the US offensive bioweapons program was terminated after President Nixon's 1969 and 1970 executive orders. However, some nations continued offensive bioweapons development programs despite ratification of the BWC. In 1995, Iraq acknowledged to the United Nations Special Commission that it had produced and weaponized B anthracis. 7 The former Soviet Union is also known to have had a large B anthracis production program as part of its offensive bioweapons program. 8 A recent analysis reports that there is clear evidence of, or widespread assertions from nongovernmental sources alleging, the existence of offensive biological weapons programs in at least 13 countries. 6 The anthrax attacks of 2001 have heightened concern about the feasibility of large-scale aerosol bioweapons attacks by terrorist groups. It has been feared that independent, well-funded groups could obtain a manufactured weapons product or acquire the expertise and resources to produce the materials for an attack. However, some analysts have questioned whether "weapons grade" material such as that used in the 2001 attacks (ie, powders of B anthracis with characteristics such as high spore concentration, uniform particle size, low electrostatic charge, treated to reduce clumping) could be produced by those not supported by the resources of a nation-state. The US Department of Defense recently reported that 3 defense employees with some technical skills but without expert knowledge of bioweapons manufactured a simulant of B anthracis in less than a month for $1 million. 9 It is reported that Aum Shinrikyo, the cult responsible for the 1995 release of sarin nerve gas in a Tokyo subway station, 10 dispersed aerosols of anthrax and botulism throughout Tokyo at least 8 times. 11 Forensic analysis of the B anthracis strain used in these attacks revealed that this isolate most closely matched the Sterne 34F2 strain, which is used for animal vaccination programs and is not a significant risk to humans. 12 It is probable that the cult attacks produced no illnesses for this and other technical reasons. Al Qaeda also has sought to acquire bioweapons in its terrorist planning efforts, although the extent to which it has been successful is not reported. 13 In the anthrax attacks of 2001, B anthracis spores were sent in at least 5 letters to Florida, New York City, and Washington, DC. Twenty-two confirmed or suspected cases resulted. All of the identified letters were mailed from Trenton, NJ. The B anthracis spores in all the letters were identified as the Ames strain. The specific source (provenance) of the B anthracis cultures used to create the spore-containing powder remains unknown at the time of this publication. It is now recognized that the original Ames strain of B anthracis did not come from a laboratory in Ames, Iowa, but rather from a laboratory in College Station, Tex. Investigating scientists have recognized several distinct Ames strains, which are being compared with the Ames strain used in the attack. At least 1 of these comparison Ames strains was recovered from a goat that died in Texas in 1997. 14
Sen Daschle's letter reportedly contained 2 g of B anthracis-containing powder; the quantity in the other envelopes has not been disclosed. The powder has been reported to contain between 100 billion and 1 trillion spores per gram, 15 although no official analysis of the concentration of spores or the chemical composition of the powder has been published. The anthrax attacks of 2001 used 1 of many possible methods of attack. The use of aerosol-delivery technologies inside buildings or over large outdoor areas is another method of attack that has been studied. In 1970, the World Health Organization 16 and in 1993 the Office of Technology Assessment 17 analyzed the potential scope of larger attacks. The 1979 Sverdlovsk accident provides data on the only known aerosol release of B anthracis spores resulting in an epidemic. 18 An aerosol release of B anthracis would be odorless and invisible and would have the potential to travel many kilometers before dissipating. 16,19 Aerosol technologies for large-scale dissemination have been developed and tested by Iraq 7 and the former Soviet Union. 8 Few details of those tests are available. The US military also conducted such trials over the Pacific Ocean in the 1960s. A US study near Johnston Atoll in the South Pacific reported a plane "sprayed a 32-mile long line of agent that traveled for more than 60 miles before it lost its infectiousness." 20 In 1970, the World Health Organization estimated that 50 kg of B anthracis released over an urban population of 5 million would sicken 250,000 and kill 100,000. 16 A US Congressional Office of Technology Assessment analysis from 1993 estimated that between 130,000 and 3 million deaths would follow the release of 100 kg of B anthracis, a lethality matching that of a hydrogen bomb. 17

# EPIDEMIOLOGY OF ANTHRAX

Naturally occurring anthrax in humans is a disease acquired from contact with anthrax-infected animals or anthrax-contaminated animal products. The disease most commonly occurs in herbivores, which are infected after ingesting spores from the soil. Large anthrax epizootics in herbivores have been reported. 21 A published report states that anthrax killed 1 million sheep in Iran in 1945 22 ; this number is supported by an unpublished Iranian governmental document. 23 Animal vaccination programs have drastically reduced animal mortality from the disease. 24 However, B anthracis spores remain prevalent in soil samples throughout the world and cause anthrax cases among herbivores annually. 22,25,26 Anthrax infection occurs in humans by 3 major routes: inhalational, cutaneous, and gastrointestinal. Naturally occurring inhalational anthrax is now rare. Eighteen cases of inhalational anthrax were reported in the United States from 1900 to 1976; none were identified or reported thereafter. Most of these cases occurred in special-risk groups, including goat hair mill or wool or tannery workers; 2 of them were laboratory associated. 27 Cutaneous anthrax is the most common naturally occurring form, with an estimated 2000 cases reported annually worldwide. 26 The disease typically follows exposure to anthrax-infected animals. In the United States, 224 cases of cutaneous anthrax were reported between 1944 and 1994. 28 One case was reported in 2000. 29 The largest reported epidemic occurred in Zimbabwe between 1979 and 1985, when more than 10,000 human cases of anthrax were reported, nearly all of them cutaneous. 30
Although gastrointestinal anthrax is uncommon, outbreaks continue to be reported in Africa and Asia 26,31,32 following ingestion of insufficiently cooked contaminated meat. The 2 distinct syndromes are oral-pharyngeal and abdominal. 31,33,34 Little information is available about the risks of direct contamination of food or water with B anthracis spores. Experimental efforts to infect primates by direct gastrointestinal instillation of B anthracis spores have not been successful. 35 Gastrointestinal infection could occur only after consumption of large numbers of vegetative cells, such as what might be found in raw or undercooked meat from an infected herbivore, but experimental data are lacking. Inhalational anthrax is expected to account for most serious morbidity and most mortality following the use of B anthracis as an aerosolized biological weapon. Given the absence of naturally occurring cases of inhalational anthrax in the United States since 1976, the occurrence of a single case is now cause for alarm.

# MICROBIOLOGY

The name B anthracis derives from the Greek word for coal, anthrakis, because of the black skin lesions the organism causes. B anthracis is an aerobic, gram-positive, spore-forming, nonmotile Bacillus species. The nonflagellated vegetative cell is large (1-8 µm long, 1-1.5 µm wide). Spore size is approximately 1 µm. Spores grow readily on all ordinary laboratory media at 37°C, with a "jointed bamboo-rod" cellular appearance (FIGURE 1) and a unique "curled-hair" colonial appearance. Experienced microbiologists should be able to identify this cellular and colonial morphology; however, few practicing microbiologists outside the veterinary community have seen B anthracis colonies beyond what they may have seen in published material. 37 B anthracis spores germinate when they enter an environment rich in amino acids, nucleosides, and glucose, such as that found in the blood or tissues of an animal or human host. The rapidly multiplying vegetative B anthracis bacilli, in contrast, will only form spores after local nutrients are exhausted, such as when anthrax-infected body fluids are exposed to ambient air. 22 Vegetative bacteria have poor survival outside of an animal or human host; colony counts decline to being undetectable within 24 hours following inoculation into water. 22 This contrasts with the environmentally hardy properties of the B anthracis spore, which can survive for decades in ambient conditions. 37

Figure 1. Gram Stain of Blood in Culture Media. Gram-positive bacilli in long chains (original magnification ×20). Enlargement shows the typical "jointed bamboo-rod" appearance of Bacillus anthracis (original magnification ×100). Reprinted from Borio et al. 36

# PATHOGENESIS AND CLINICAL MANIFESTATIONS

Inhalational Anthrax. Inhalational anthrax follows deposition into alveolar spaces of spore-bearing particles in the 1- to 5-µm range. 38,39 Macrophages then ingest the spores, some of which are lysed and destroyed. Surviving spores are transported via lymphatics to mediastinal lymph nodes, where germination occurs after a period of spore dormancy of variable and possibly extended duration. 35,40,41 The trigger(s) responsible for the transformation of B anthracis spores to vegetative cells is not fully understood. 42 In Sverdlovsk, cases occurred from 2 to 43 days after exposure. 18 In experimental infection of monkeys, fatal disease occurred up to 58 days 40 and 98 days 43 after exposure.
Viable spores were demonstrated in the mediastinal lymph nodes of 1 monkey 100 days after exposure. 44 Once germination occurs, clinical symptoms follow rapidly. Replicating B anthracis bacilli release toxins that lead to hemorrhage, edema, and necrosis. 32,45 In experimental animals, once toxin production has reached a critical threshold, death occurs even if sterility of the bloodstream is achieved with antibiotics. 27 Extrapolations from animal data suggest that the human LD50 (ie, the dose sufficient to kill 50% of persons exposed to it) is 2500 to 55,000 inhaled B anthracis spores. 46 The LD10 was as low as 100 spores in 1 series of monkeys. 43 Recently published extrapolations from primate data suggest that as few as 1 to 3 spores may be sufficient to cause infection. 47 The dose of spores that caused infection in any of the 11 patients with inhalational anthrax in 2001 could not be estimated, although the 2 cases of fatal inhalational anthrax in New York City and Connecticut provoked speculation that the fatal dose, at least in some individuals, may be quite low. A number of factors contribute to the pathogenesis of B anthracis, which makes 3 toxin components (protective antigen, lethal factor, and edema factor) that combine to form 2 toxins: lethal toxin and edema toxin (FIGURE 2). The protective antigen allows the binding of lethal and edema factors to the affected cell membrane and facilitates their subsequent transport across the cell membrane. Edema toxin impairs neutrophil function in vivo and affects water homeostasis, leading to edema, and lethal toxin causes release of tumor necrosis factor α and interleukin 1β, factors that are believed to be linked to the sudden death in severe anthrax infection. 48 The molecular target of lethal and edema factors within the affected cell has not yet been elucidated. 49 In addition to these virulence factors, B anthracis has a capsule that prevents phagocytosis. Full virulence requires the presence of both an antiphagocytic capsule and the 3 toxin components. 37 An additional factor contributing to B anthracis pathogenesis is the high concentration of bacteria occurring in affected hosts. 49

Figure 2. The major known virulence factors of B anthracis include the exotoxins edema toxin (PA and EF) and lethal toxin (PA and LF) and the antiphagocytic capsule. Although many exact molecular mechanisms involved in the pathogenicity of the anthrax toxins are uncertain, they appear to inhibit immune function, interrupt intracellular signaling pathways, and lyse cell targets, causing massive release of proinflammatory mediators. ATP indicates adenosine triphosphate; cAMP, cyclic adenosine monophosphate; MAPKK, mitogen-activated protein kinase kinase; and MAPK, mitogen-activated protein kinase.

The term inhalational anthrax reflects the nature of acquisition of the disease. The term anthrax pneumonia is misleading because typical bronchopneumonia does not occur. Postmortem pathological studies of patients from Sverdlovsk showed that all patients had hemorrhagic thoracic lymphadenitis, hemorrhagic mediastinitis, and pleural effusions. About half had hemorrhagic meningitis. None of these autopsies showed evidence of a bronchoalveolar pneumonic process, although 11 of 42 patient autopsies had evidence of a focal, hemorrhagic, necrotizing pneumonic lesion analogous to the Ghon complex associated with tuberculosis. 50 These findings are consistent with other human case series and experimentally induced inhalational anthrax in animals. 40,51,52
40,51,52 A recent reanalysis of pathology specimens from 41 of the Sverdlovsk patients was notable primarily for the presence of necrotizing hemorrhagic mediastinitis; pleural effusions averaging 1700 mL in volume; meningitis in 50%; arteritis and arterial rupture in many; and the lack of prominent pneumonitis. B anthracis was recovered in concentrations of up to 100 million colony-forming units per milliliter in blood and spinal fluid. 53 In animal models, physiological sequelae of severe anthrax infection have included hypocalcemia, profound hypoglycemia, hyperkalemia, depression and paralysis of the respiratory center, hypotension, anoxia, respiratory alkalosis, and terminal acidosis, 54,55 suggesting that, in addition to the rapid administration of antibiotics, survival might be improved by vigilant correction of electrolyte disturbances and acid-base imbalance, glucose infusion, and early mechanical ventilation and vasopressor administration.

Historical Data. Early diagnosis of inhalational anthrax is difficult and requires a high index of suspicion. Prior to the 2001 attacks, clinical information was limited to a series of 18 cases reported in the 20th century and the sparse data from Sverdlovsk. The clinical presentation of inhalational anthrax had been described as a 2-stage illness. Patients reportedly first developed a spectrum of nonspecific symptoms, including fever, dyspnea, cough, headache, vomiting, chills, weakness, abdominal pain, and chest pain. 18,27 Signs of illness and laboratory studies were nonspecific. This stage of illness lasted from hours to a few days. In some patients, a brief period of apparent recovery followed. Other patients progressed directly to the second, fulminant stage of illness. 4,27,56

This second stage was reported to have developed abruptly, with sudden fever, dyspnea, diaphoresis, and shock. Massive lymphadenopathy and expansion of the mediastinum led to stridor in some cases. 57,58 A chest radiograph most often showed a widened mediastinum consistent with lymphadenopathy. 57 Up to half of patients developed hemorrhagic meningitis with concomitant meningismus, delirium, and obtundation. In this second stage, cyanosis and hypotension progressed rapidly; death sometimes occurred within hours. 4,27,56

In the 20th-century series of US cases, the mortality rate of occupationally acquired inhalational anthrax was 89%, but the majority of these cases occurred before the development of critical care units and, in most cases, before the advent of antibiotics. 27 At Sverdlovsk, it had been reported that 68 of the 79 patients with inhalational anthrax died. 18 However, a separate report from a hospital physician recorded 358 ill with 45 dead; another recorded 48 deaths among 110 patients. 59 A recent analysis of available Sverdlovsk data suggests there may have been as many as 250 cases with 100 deaths. 60 Sverdlovsk patients who had onset of disease 30 or more days after release of organisms had a higher reported survival rate than those with earlier disease onset. Antibiotics, antianthrax globulin, corticosteroids, mechanical ventilation, and vaccine were used to treat some residents in the affected area after the accident, but it is unknown how many received vaccine or antibiotics, which patients received these interventions, or when. It is also uncertain whether the B anthracis strain (or strains) to which patients were exposed was susceptible to the antibiotics used during the outbreak.
However, a community-wide intervention beginning about the 15th day after exposure did appear to diminish the projected attack rate. 60 In fatal cases, the interval between onset of symptoms and death averaged 3 days. This is similar to the disease course and case fatality rate in untreated experimental monkeys, which have developed rapidly fatal disease even after a latency as long as 58 days. 40

2001 Attacks Data. The anthrax attacks of 2001 resulted in 11 cases of inhalational anthrax, 5 of whom died. Symptoms, signs, and important laboratory data from these patients are listed in TABLE 1. Several clinical findings from the first 10 patients with inhalational anthrax deserve emphasis. 36,61-66 Malaise and fever were presenting symptoms in all 10 cases. Cough, nausea, and vomiting were also prominent. Drenching sweats, dyspnea, chest pain, and headache were also seen in a majority of patients. Fever and tachycardia were seen in the majority of patients at presentation, as were hypoxemia and elevations in transaminases. Importantly, all 10 patients had abnormal chest x-ray film results: 7 had mediastinal widening; 7 had infiltrates; and 8 had pleural effusions. Chest computed tomographic (CT) scans showed abnormal results in all 8 patients who had this test: 7 had mediastinal widening; 6, infiltrates; 8, pleural effusions. Data are insufficient to identify factors associated with survival, although early recognition and initiation of treatment and use of more than 1 antibiotic have been suggested as possible factors. 61 For the 6 patients for whom such information is known, the median period from presumed time of exposure to the onset of symptoms was 4 days (range, 4-6 days). Patients sought care a median of 3.5 days after symptom onset. All 4 patients exhibiting signs of fulminant illness prior to antibiotic administration died. 61 Of note, the incubation period of the 2 fatal cases from New York City and Connecticut is not known.

# Cutaneous Anthrax
Historically, cutaneous anthrax has been known to occur following the deposition of the organism into skin; previous cuts or abrasions made one especially susceptible to infection. 30,67 Areas of exposed skin, such as arms, hands, face, and neck, were the most frequently affected. In Sverdlovsk, cutaneous cases occurred up to 12 days after the original aerosol release; no cutaneous cases were reported after prolonged latency. 18

After the spore germinates in skin tissues, toxin production results in local edema. An initially pruritic macule or papule enlarges into a round ulcer by the second day. Subsequently, 1- to 3-mm vesicles may appear that discharge clear or serosanguinous fluid containing numerous organisms on Gram stain. As shown in FIGURE 3, development of a painless, depressed, black eschar follows, often associated with extensive local edema. The anthrax eschar dries, loosens, and falls off in the next 1 to 2 weeks. Lymphangitis and painful lymphadenopathy can occur with associated systemic symptoms. Differential diagnosis of eschars includes tularemia, scrub typhus, rickettsial spotted fevers, rat bite fever, and ecthyma gangrenosum. 68 Noninfectious causes of eschars include arachnid bites 63 and vasculitides. Although antibiotic therapy does not appear to change the course of eschar formation and healing, it does decrease the likelihood of systemic disease.
Without antibiotic therapy, the mortality rate has been reported to be as high as 20%; with appropriate antibiotic treatment, death due to cutaneous anthrax has been reported to be rare. 4 Following the anthrax attacks of 2001, there have been 11 confirmed or probable cases of cutaneous anthrax. One case report of cutaneous anthrax resulting from these attacks has been published (Figure 3). 63 This child had no reported evidence of prior visible cuts, abrasions, or lesions at the site of the cutaneous lesion that developed. The mean incubation period for cutaneous anthrax cases diagnosed in 2001 was 5 days, with a range of 1 to 10 days, based on estimated dates of exposure to B anthracis-contaminated letters. Cutaneous lesions occurred on the forearm, neck, chest, and fingers. 69

The only published case report of cutaneous anthrax from the attacks of 2001 is notable for the difficulty in recognition of the disease in a previously healthy 7-month-old, the rapid progression to severe systemic illness despite hospitalization, and clinical manifestations that included microangiopathic hemolytic anemia with renal involvement, coagulopathy, and hyponatremia. 63 Fortunately, this child recovered, and none of the cutaneous cases of anthrax diagnosed after the 2001 attacks were fatal.

# Figure 3
By hospital day 12, a 2-cm black eschar was present in the center of the cutaneous lesion. Reprinted from Freedman et al. 63

# Gastrointestinal Anthrax
Some authorities believe gastrointestinal anthrax occurs after deposition and germination of spores in the upper or lower gastrointestinal tract. However, considering the rapid transit time in the gastrointestinal tract, it seems more likely that many such cases result from the ingestion of large numbers of vegetative bacilli from poorly cooked infected meat rather than from spores. In any event, the oral-pharyngeal form of disease results in an oral or esophageal ulcer and leads to the development of regional lymphadenopathy, edema, and sepsis. 31,33 Disease in the lower gastrointestinal tract manifests as primary intestinal lesions occurring predominantly in the terminal ileum or cecum, 50 presenting initially with nausea, vomiting, and malaise and progressing rapidly to bloody diarrhea, acute abdomen, or sepsis. Massive ascites has occurred in some cases of gastrointestinal anthrax. 34 Advanced infection may appear similar to the sepsis syndrome occurring in either inhalational or cutaneous anthrax. 4 Some authors suggest that aggressive medical intervention, as would be recommended for inhalational anthrax, may reduce mortality. Given the difficulty of early diagnosis of gastrointestinal anthrax, however, mortality may be high. 4 Postmortem examinations in Sverdlovsk showed gastrointestinal submucosal lesions in 39 of 42 patients, 50 but all of these patients were also found to have definitive pathologic evidence of an inhalational source of infection. No gastrointestinal cases of anthrax were diagnosed in either the Sverdlovsk series or following the anthrax attacks of 2001.

The determination of individual patient exposure to B anthracis on the basis of environmental testing is complex due to the uncertain specificity and sensitivity of rapid field tests and the difficulty of assessing individual risks of exposure. A patient (or patients) seeking medical treatment for symptoms of inhalational anthrax will likely be the first evidence of a clandestine release of B anthracis as a biological weapon.
Even a single previously healthy patient who becomes acutely ill with nonspecific febrile illness, with symptoms and signs consistent with those listed in Table 1, and whose condition rapidly deteriorates should prompt consideration of a diagnosis of anthrax infection. The recognition of cutaneous cases of anthrax may also be the first evidence of an anthrax attack. 70

The likely presence of abnormal findings on either chest x-ray film or chest CT scan is diagnostically important. Although anthrax does not cause a classic bronchopneumonia pathologically, it can cause widened mediastinum, massive pleural effusions, air bronchograms, necrotizing pneumonic lesions, and/or consolidation, as has been noted above. 36,55,56,61,64-66 The result can be hypoxemia and chest imaging abnormalities that may or may not be clinically distinguishable from pneumonia. In the anthrax attacks of 2001, each of the first 10 patients had abnormal chest x-ray film results, and each of the 8 patients for whom CT scans were obtained had abnormal results. These included widened mediastinum on chest radiograph and effusions on chest CT scan (FIGURE 4). Such findings in a previously healthy patient with evidence of overwhelming febrile illness or sepsis would be highly suggestive of advanced inhalational anthrax.

# DIAGNOSIS
The bacterial burden may be so great in advanced inhalational anthrax infection that bacilli are visible on Gram stain of peripheral blood, as was seen following the 2001 attacks. The most useful microbiologic test is the standard blood culture, which should show growth in 6 to 24 hours. Each of the 8 patients who had blood cultures obtained prior to initiation of antibiotics had positive blood cultures. 61 However, blood cultures appear to be sterilized after even 1 or 2 doses of antibiotics, underscoring the importance of obtaining cultures prior to initiation of antibiotic therapy (J. Gerberding, oral communication, March 7, 2002).

If the laboratory has been alerted to the possibility of anthrax, biochemical testing and review of colonial morphology could provide a preliminary diagnosis 12 to 24 hours after inoculation of the cultures. Definitive diagnosis could be promptly confirmed by an LRN laboratory. However, if the clinical laboratory has not been alerted to the possibility of anthrax, B anthracis may not be correctly identified. Routine procedures customarily identify a Bacillus species in a blood culture approximately 24 hours after growth, but some laboratories do not further identify Bacillus species unless specifically requested. This is because the isolation of Bacillus species most often represents growth of the common contaminant Bacillus cereus. 71 Given the possibility of future anthrax attacks, it is recommended that routine clinical laboratory procedures be modified so that B anthracis is specifically excluded after identification of a Bacillus species bacteremia, unless there are compelling reasons not to do so. If it cannot be excluded, the isolate should be transferred to an LRN laboratory.

Sputum culture and Gram stain are unlikely to be diagnostic of inhalational anthrax, given the frequent lack of a pneumonic process. 37 Gram stain of sputum was reported positive in only 1 case of inhalational anthrax in the 2001 series. If cutaneous anthrax is suspected, a Gram stain and culture of vesicular fluid should be obtained.
If the Gram stain is negative or the patient is already taking antibiotics, punch biopsy should be performed, and specimens sent to a laboratory with the ability to perform immunohistochemical staining or polymerase chain reaction assays. 69,70 Blood cultures should be obtained and antibiotics should be initiated pending confirmation of the diagnosis of inhalational or cutaneous anthrax.

Nasal swabs were obtained from some persons believed to be at risk of inhalational anthrax following the anthrax attacks of 2001. Although a study has demonstrated B anthracis spores in the nares of some monkeys for some time after experimental exposure, 72 the predictive value of the nasal swab test for diagnosing inhalational anthrax in humans is unknown and untested. It is not known how quickly antibiotics make spore recovery on nasal swab tests impossible. One patient who died from inhalational anthrax had a negative nasal swab. 36 Thus, the CDC advised in the fall of 2001 that the nasal swab should not be used as a clinical diagnostic test. If obtained for an epidemiological purpose, nasal swab results should not be used to rule out infection in a patient. Persons who have positive nasal swab results for B anthracis should receive a course of postexposure antibiotic prophylaxis, since a positive swab would indicate that the individual had been exposed to aerosolized B anthracis.

Antibodies to the protective antigen (PA) of B anthracis, termed anti-PA IgG, have been shown to confer immunity in animal models following anthrax vaccination. 73,74 Anti-PA IgG serologies have been obtained from several of those involved in the 2001 anthrax attacks, but the results of these assays are not yet published. Given the lack of data in humans and the expected period required to develop an anti-PA IgG response, this test should not be used as a diagnostic test for anthrax infection in the acutely ill patient but may be useful for epidemiologic purposes.

Postmortem findings are especially important following an unexplained death. Thoracic hemorrhagic necrotizing lymphadenitis and hemorrhagic necrotizing mediastinitis in a previously healthy adult are essentially pathognomonic of inhalational anthrax. 50,58 Hemorrhagic meningitis should also raise strong suspicion of anthrax infection. 32,50,58,75 However, given the rarity of anthrax, a pathologist might not identify these findings as caused by anthrax unless previously alerted to this possibility.

If only a few patients present contemporaneously, the clinical similarity of early inhalational anthrax infection to other acute febrile respiratory infections may delay initial diagnosis, although probably not for long. The severity of the illness and its rapid progression, coupled with unusual radiological findings, possible identification of B anthracis in blood or cerebrospinal fluid, and the unique pathologic findings, should serve as an early alarm. The index case of inhalational anthrax in the 2001 attacks was identified because of an alert clinician who suspected the disease on the basis of large gram-positive bacilli in cerebrospinal fluid in a patient with a compatible clinical illness, and as a result of the subsequent analysis by laboratory staff who had recently undergone bioterrorism preparedness training. 65

# VACCINATION
The US anthrax vaccine, named anthrax vaccine adsorbed (AVA), is an inactivated cell-free product, licensed in 1970, and produced by Bioport Corp, Lansing, Mich.
The vaccine is licensed to be given in a 6-dose series. In 1997, it was mandated that all US military active- and reserve-duty personnel receive it. 76 The vaccine is made from the cell-free filtrate of a nonencapsulated attenuated strain of B anthracis. 77 The principal antigen responsible for inducing immunity is the PA. 26,32 In the rabbit model, the quantity of antibody to PA has been correlated with the level of protection against experimental anthrax infection. 78

Preexposure vaccination with AVA has been shown to be efficacious against experimental challenge in a number of animal studies. 78-80 A similar vaccine was shown in a placebo-controlled human trial to be efficacious against cutaneous anthrax. 81 The efficacy of postexposure vaccination with AVA has been studied in monkeys. 40 Among 60 monkeys exposed to 8 LD50 of B anthracis spores at baseline, 9 of 10 control animals died, and 8 of 10 animals treated with vaccine alone died. None of 29 animals died while receiving doxycycline, ciprofloxacin, or penicillin for 30 days; 5 developed anthrax once treatment ceased. The remaining 24 all died when rechallenged. The 9 receiving doxycycline for 30 days plus vaccine at baseline and day 14 after exposure did not die from anthrax infection even after being rechallenged. 40

The safety of the anthrax vaccine has been the subject of much study. A recent report reviewed the results of surveillance for adverse events in the Department of Defense program of 1998-2000. 82 At the time of that report, 425 976 service members had received 1 620 793 doses of AVA. There were higher rates of local reactions to the vaccine in women than in men, but "no patterns of unexpected local or systemic adverse events" were identified. 82 A recent review of the safety of AVA anthrax vaccination in employees of the United States Army Medical Research Institute of Infectious Diseases (USAMRIID) over the past 25 years reported that 1583 persons had received 10 722 doses of AVA. 83 One percent of these inoculations (101/10 722) were associated with 1 or more systemic events (defined as headache, malaise, myalgia, fever, nausea, vomiting, dizziness, chills, diarrhea, hives, anorexia, arthralgias, diaphoresis, blurred vision, generalized itching, or sore throat). The most frequently reported systemic adverse event was headache (0.4% of doses). Local or injection site reactions were reported in 3.6%. No long-term sequelae were reported in this series.

The Institute of Medicine (IOM) recently published a report on the safety and efficacy of AVA, 84 which concluded that AVA is effective against inhalational anthrax and that, if given with appropriate antibiotic therapy, it may help prevent the development of disease after exposure. The IOM committee also concluded that AVA was acceptably safe. Committee recommendations for new research include studies to describe the relationship between immunity and quantitative antibody levels; further studies to test the efficacy of AVA in combination with antibiotics in preventing inhalational anthrax infection; studies of alternative routes and schedules of administration of AVA; and continued monitoring of reported adverse events following vaccination. The committee did not evaluate the production process used by the manufacturer. A recently published report 85 analyzed a cohort of 4092 women at 2 military bases from January 1999 to March 2000.
The study compared pregnancy rates and adverse birth outcomes between women who had been vaccinated and women who had not, and it found that anthrax vaccination with AVA had no effect on pregnancy rates or adverse birth outcomes. A human live attenuated vaccine has been produced and used in countries of the former Soviet Union. 86 In the Western world, live attenuated vaccines have been considered unsuitable for use in humans because of safety concerns. 86

Current vaccine supplies are limited, and the US production capacity remains modest. Bioport is the single US manufacturing facility for the licensed anthrax vaccine. Production has only recently resumed after a halt during which the company was required to alter its production methods to conform to the US Food and Drug Administration (FDA) Good Manufacturing Practice standard. Bioport has a contract to produce 4.6 million doses of vaccine for the US Department of Defense that cannot be met until at least 2003 (D. A. Henderson, oral communication, February 2002).

The use of AVA was not initiated immediately in persons believed to have been exposed to B anthracis during the 2001 anthrax attacks for a variety of reasons, including the unavailability of vaccine supplies. Subsequently, near the end of the 60-day period of antibiotic prophylaxis, persons deemed by investigating public health authorities to have been at high risk for exposure were offered a postexposure AVA series (3 inoculations at 2-week intervals, given on days 1, 14, and 28) as an adjunct to prolonged postexposure antibiotic prophylaxis. This group of affected persons was also offered the alternatives of continuing a prolonged course of antibiotics or of receiving close medical follow-up without vaccination or additional antibiotics. 87 This vaccine is licensed for use in the preexposure setting, but because it had not been licensed for use in the postexposure context, it was given under investigational new drug procedures.

The working group continues to conclude that vaccination of exposed persons following a biological attack, in conjunction with antibiotic administration for 60 days following exposure, provides optimal protection to those exposed. However, until ample reserve stockpiles of vaccine are available, reliance must be placed on antibiotic administration. To date, there have been no reported cases of anthrax infection among those exposed in the 2001 anthrax attacks who took prophylactic antibiotics, even among persons who did not complete the 60-day course of therapy. Preexposure vaccination of some persons deemed to be in high-risk groups should be considered when substantial supplies of vaccine become available.

A fast-track program to develop a recombinant anthrax vaccine is now under way. This may lead to more plentiful vaccine stocks as well as a product that requires fewer inoculations. 88 Studies to evaluate intramuscular vs subcutaneous routes of administration and less frequent dosing of AVA are also under way (J. Hughes, oral communication, February 2002).
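The timing rules in the postexposure regimen just described (antibiotic prophylaxis for at least 60 days from exposure, plus a 3-dose AVA series given on days 1, 14, and 28 once the series is started) reduce to simple date arithmetic. A minimal sketch in Python; the function names and example dates are illustrative assumptions, not part of any official protocol:

```python
from datetime import date, timedelta

def antibiotic_end(exposure: date, min_days: int = 60) -> date:
    """Earliest date on which the >= 60-day prophylaxis course ends."""
    return exposure + timedelta(days=min_days)

def ava_series(series_start: date) -> list:
    """AVA dose dates on days 1, 14, and 28, counted from series start."""
    return [series_start + timedelta(days=d - 1) for d in (1, 14, 28)]

# Hypothetical dates for illustration only.
exposure = date(2001, 10, 15)
print(antibiotic_end(exposure))       # 2001-12-14
print(ava_series(date(2001, 12, 1)))  # doses on Dec 1, Dec 14, Dec 28
```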
# THERAPY
Recommendations for antibiotic and vaccine use in the setting of an aerosolized B anthracis attack are conditioned by a very small series of cases in humans, a limited number of studies in experimental animals, and the possible necessity of treating large numbers of casualties. A number of possible therapeutic strategies have yet to be explored experimentally or to be submitted for approval to the FDA. For these reasons, the working group offers consensus recommendations based on the best available evidence. The recommendations do not necessarily represent uses currently approved by the FDA or an official position on the part of any of the federal agencies whose scientists participated in these discussions, and they will need to be revised as further relevant information becomes available.

Given the rapid course of symptomatic inhalational anthrax, early antibiotic administration is essential. A delay of antibiotic treatment for patients with anthrax infection may substantially lessen chances for survival. 89,90 Given the difficulty in achieving rapid microbiologic diagnosis of anthrax, all persons in high-risk groups who develop fever or evidence of systemic disease should start receiving therapy for possible anthrax infection as soon as possible while awaiting the results of laboratory studies.

There are no controlled clinical studies for the treatment of inhalational anthrax in humans. Thus, antibiotic regimens commonly recommended for empirical treatment of sepsis have not been studied. In fact, natural strains of B anthracis are resistant to many of the antibiotics used in empirical regimens for sepsis treatment, such as those regimens based on the extended-spectrum cephalosporins. 91,92 Most naturally occurring B anthracis strains are sensitive to penicillin, which historically has been the preferred anthrax therapy. Doxycycline is the preferred option among the tetracycline class because of its proven efficacy in monkey studies 56 and its ease of administration. Other members of this class of antibiotics are suitable alternatives. Although treatment of anthrax infection with ciprofloxacin has not been studied in humans, animal models suggest excellent efficacy. 40,56,93 In vitro data suggest that other fluoroquinolone antibiotics would have equivalent efficacy, although no animal data using a primate model of inhalational anthrax are available. 92 Penicillin, doxycycline, and ciprofloxacin are approved by the FDA for the treatment of inhalational anthrax infection, 56,89,90,94 and other antibiotics are under study. Other drugs that are usually active in vitro include clindamycin, rifampin, imipenem, aminoglycosides, chloramphenicol, vancomycin, cefazolin, tetracycline, linezolid, and the macrolides. Reports have been published of a B anthracis strain that was engineered to resist the tetracycline and penicillin classes of antibiotics. 95

Balancing considerations of treatment efficacy with concerns regarding resistance, the working group in 1999 recommended that ciprofloxacin or other fluoroquinolone therapy be initiated in adults with presumed inhalational anthrax infection. 3 It was advised that antibiotic resistance to penicillin- and tetracycline-class antibiotics should be assumed following a terrorist attack until laboratory testing demonstrated otherwise. Once the antibiotic susceptibility of the B anthracis strain of the index case had been determined, the most widely available, efficacious, and least toxic antibiotic was recommended for patients requiring treatment and persons requiring postexposure prophylaxis. Since the 1999 consensus statement publication, a study 96 demonstrated the development of in vitro resistance of an isolate of the Sterne strain of B anthracis to ofloxacin (a fluoroquinolone closely related to ciprofloxacin) following subculturing and multiple cell passage.
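The 1999 selection principle described above (begin empirically with a fluoroquinolone, assume penicillin- and tetracycline-class resistance until laboratory testing demonstrates otherwise, then step down to the most widely available, efficacious, and least toxic agent) can be summarized as simple decision logic. The following Python sketch is illustrative only; the drug names and data structure are assumptions of the example, and it is not clinical guidance:

```python
# Illustrative decision logic only -- not clinical guidance.
def empiric_agent(susceptibility=None):
    """Sketch of the 1999 working group principle: fluoroquinolone first;
    older, widely available classes only once the index strain is shown
    to be susceptible."""
    if susceptibility is None:
        # No isolate tested yet: assume engineered resistance to the
        # penicillin and tetracycline classes.
        return "ciprofloxacin (or another fluoroquinolone)"
    # Prefer the widely available, least toxic agents once proven active.
    for agent in ("penicillin", "doxycycline"):
        if susceptibility.get(agent):
            return agent
    return "ciprofloxacin (or another fluoroquinolone)"

print(empiric_agent())                      # before susceptibility testing
print(empiric_agent({"penicillin": True}))  # susceptible index strain
```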
Following the anthrax attacks of 2001, the CDC 97 offered guidelines advocating use of 2 or 3 antibiotics in combination in persons with inhalational anthrax, based on susceptibility testing with epidemic strains. Limited early information following the attacks suggested that persons with inhalational anthrax treated intravenously with 2 or more antibiotics active against B anthracis had a greater chance of survival. 61 Given the limited number of persons who developed inhalational anthrax, the paucity of comparative data, and other uncertainties, it remains unclear whether the use of 2 or more antibiotics confers a survival advantage, but combination therapy is a reasonable therapeutic approach in the face of life-threatening illness. Another factor supporting the initiation of combination antibiotic therapy for treatment of inhalational anthrax is the possibility that an engineered strain of B anthracis resistant to 1 or more antibiotics might be used in a future attack. Some infectious disease experts have also advocated the use of clindamycin, citing the theoretical benefit of diminishing bacterial toxin production, a strategy used in some toxin-mediated streptococcal infections. 98 There are no data as yet that bear specifically on this question. Central nervous system penetration is another consideration; doxycycline and the fluoroquinolones may not reach therapeutic levels in the cerebrospinal fluid. Thus, in the aftermath of the anthrax attacks, some infectious disease authorities recommended preferential use of ciprofloxacin over doxycycline, plus augmentation with chloramphenicol, rifampin, or penicillin when meningitis is established or suspected.

The B anthracis isolate recovered from patients with inhalational anthrax was susceptible to all of the antibiotics expected in a naturally occurring strain. 97 This isolate showed an inducible β-lactamase in addition to a constitutive cephalosporinase. The importance of the inducible β-lactamase is unknown; these strains are highly susceptible to penicillin in vitro, with minimum inhibitory concentrations less than 0.06 µg/mL. A theoretical concern is that this sensitivity could be overcome with a large bacterial burden. For this reason, the CDC advised that patients with inhalational anthrax should not be treated with penicillin or amoxicillin as monotherapy and that ciprofloxacin or doxycycline be considered the standards based on in vitro activity, efficacy in the monkey model, and FDA approval.

In the contained casualty setting (a situation in which a modest number of patients require therapy), the working group supports these new CDC antibiotic recommendations 97 (TABLE 3) and advises the use of intravenous antibiotic administration. These recommendations will need to be revised as new data become available. If the number of persons requiring therapy following a bioterrorist attack with anthrax is sufficiently high (ie, a mass casualty setting), the working group recognizes that combination drug therapy and intravenous therapy may no longer be possible for reasons of logistics and/or exhaustion of equipment and antibiotic supplies. In such circumstances, oral therapy may be the only feasible option (TABLE 4). The threshold number of cases at which combination and parenteral therapy become impossible depends on a variety of factors, including local and regional health care resources.

In experimental animals, antibiotic therapy during anthrax infection has prevented development of an immune response.
40,95 This suggests that even if the antibiotic-treated patient survives anthrax infection, the risk of recurring disease may persist for a prolonged period because of the possibility of delayed germination of spores. Therefore, we recommend that antibiotic therapy be continued for at least 60 days postexposure, with oral therapy replacing intravenous therapy when the patient is clinically stable enough to take oral medication.

Cutaneous anthrax historically has been treated with oral penicillin. For reasons articulated above, the working group recommends that an oral fluoroquinolone or doxycycline in the adult dosage schedules described in TABLE 5 be used to treat cutaneous anthrax until antibiotic susceptibility is proven. Amoxicillin is a suitable alternative if there are contraindications to fluoroquinolones or doxycycline, such as pregnancy, lactation, age younger than 18 years, or antibiotic intolerance. For cutaneous lesions associated with extensive edema or for cutaneous lesions of the head and neck, clinical management should be conservative, as per the inhalational anthrax treatment guidelines in Table 3. Although previous guidelines have suggested treating cutaneous anthrax for 7 to 10 days, 32,71 the working group recommends treatment for 60 days postexposure in the setting of bioterrorism, given the presumed concomitant inhalational exposure to the primary aerosol. Treatment of cutaneous anthrax generally prevents progression to systemic disease, although it does not prevent the formation and evolution of the eschar. Topical therapy is not useful. 4

In addition to penicillin, the fluoroquinolones, and the tetracycline class of antibiotics, other antibiotics effective in vitro include chloramphenicol, clindamycin, extended-spectrum penicillins, macrolides, aminoglycosides, vancomycin, cefazolin, and other first-generation cephalosporins. 91,99 The efficacy of these antibiotics has not yet been tested in humans or animal studies. The working group recommends the use of these antibiotics only to augment fluoroquinolones or tetracyclines or if the preferred drugs are contraindicated, not available, or inactive in vitro in susceptibility testing. B anthracis strains exhibit natural resistance to sulfamethoxazole, trimethoprim, cefuroxime, cefotaxime sodium, aztreonam, and ceftazidime. 91,92,99 Therefore, these antibiotics should not be used.

Table 3 footnotes: Steroids may be considered as an adjunct therapy for patients with severe edema and for meningitis, based on experience with bacterial meningitis of other etiologies. (d) Other agents with in vitro activity include rifampin, vancomycin, penicillin, ampicillin, chloramphenicol, imipenem, clindamycin, and clarithromycin. Because of concerns of constitutive and inducible β-lactamases in Bacillus anthracis, penicillin and ampicillin should not be used alone. Consultation with an infectious disease specialist is advised. (e) Initial therapy may be altered based on the clinical course of the patient; 1 or 2 antimicrobial agents may be adequate as the patient improves. (f) If meningitis is suspected, doxycycline may be less optimal because of poor central nervous system penetration. (g) If intravenous (IV) ciprofloxacin is not available, oral ciprofloxacin may be acceptable because it is rapidly and well absorbed from the gastrointestinal tract with no substantial loss by first-pass metabolism. Maximum serum concentrations are attained 1 to 2 hours after oral dosing but may not be achieved if vomiting or ileus is present. (h) In children, ciprofloxacin dosage should not exceed 1 g/d. (i) The American Academy of Pediatrics recommends treatment of young children with tetracyclines for serious infections (ie, Rocky Mountain spotted fever). (j) Because of the potential persistence of spores after an aerosol exposure, antimicrobial therapy should be continued for 60 days. (k) Although tetracyclines are not recommended during pregnancy, their use may be indicated for life-threatening illness. Adverse effects on the developing teeth and bones of the fetus are dose related; therefore, doxycycline might be used for a short time (7-14 days) before 6 months of gestation. The high death rate from the infection outweighs the risk posed by the antimicrobial agent.

Pleural effusions were present in all of the first 10 patients with inhalational anthrax in 2001. Seven needed drainage of their pleural effusions, and 3 required chest tubes. 69 Future patients with inhalational anthrax should be expected to have pleural effusions that will likely require drainage.

# Postexposure Prophylaxis
Guidelines for which populations would require postexposure prophylaxis to prevent inhalational anthrax following the release of a B anthracis aerosol as a biological weapon will need to be developed by public health officials depending on epidemiological circumstances. These decisions would require estimates of the timing, location, and conditions of the exposure. 100 Ongoing case monitoring would be needed to define the high-risk groups, to direct follow-up, and to guide the addition or deletion of groups requiring postexposure prophylaxis.

There are no FDA-approved postexposure antibiotic regimens following exposure to a B anthracis aerosol. Therefore, for postexposure prophylaxis, we recommend the same antibiotic regimen as that recommended for treatment of mass casualties; prophylaxis should be continued for at least 60 days postexposure (Table 4). Preliminary analysis of US postal workers who were advised to take 60 days of antibiotic prophylaxis for exposure to B anthracis spores following the anthrax attacks of 2001 showed that 2% sought medical attention because of concern about possible severe allergic reactions related to the medications, but no persons required hospitalization because of an adverse drug reaction. 101 Many persons did not begin or complete their recommended antibiotic course for a variety of reasons, including gastrointestinal tract intolerance, underscoring the need for careful medical follow-up during the period of prophylaxis. 101 In addition, given the uncertainties regarding how many weeks or months spores may remain latent following discontinuation of postexposure prophylaxis, persons should be instructed to report flulike symptoms or febrile illness immediately to their physicians, who should then evaluate the need to initiate treatment for possible inhalational anthrax. As noted above, postexposure vaccination is recommended as an adjunct to postexposure antibiotic prophylaxis if vaccine is available.

Table 5 footnotes: Cutaneous anthrax with signs of systemic involvement, extensive edema, or lesions on the head or neck requires intravenous therapy, and a multidrug approach is recommended (Table 3). 98 (†) Ciprofloxacin or doxycycline should be considered first-line therapy. Amoxicillin can be substituted if a patient cannot take a fluoroquinolone or tetracycline-class drug. Adults are recommended to take 500 mg of amoxicillin orally 3 times a day. For children, 80 mg/kg of amoxicillin divided into 3 doses at 8-hour intervals is an option for completion of therapy after clinical improvement. The oral amoxicillin dose is based on the need to achieve appropriate minimum inhibitory concentration levels. (‡) Previous guidelines have suggested treating cutaneous anthrax for 7 to 10 days, but 60 days is recommended for bioterrorism attacks, given the likelihood of exposure to aerosolized Bacillus anthracis. (§) The American Academy of Pediatrics recommends treatment of young children with tetracyclines for serious infections (eg, Rocky Mountain spotted fever). Although tetracyclines or ciprofloxacin are not recommended during pregnancy, their use may be indicated for life-threatening illness. Adverse effects on the developing teeth and bones of a fetus are dose related; therefore, doxycycline might be used for a short time (7-14 days) before 6 months of gestation.

Table 4 footnotes: (*) Some of these recommendations are based on animal studies or in vitro studies and are not approved by the US Food and Drug Administration. (†) In vitro studies suggest ofloxacin (400 mg orally every 12 hours) or levofloxacin (500 mg orally every 24 hours) could be substituted for ciprofloxacin. (‡) In vitro studies suggest that 500 mg of tetracycline orally every 6 hours could be substituted for doxycycline. In addition, 400 mg of gatifloxacin or moxifloxacin, both fluoroquinolones with mechanisms of action consistent with ciprofloxacin, taken orally daily could be substituted. (§) According to the Centers for Disease Control and Prevention recommendations, amoxicillin is suitable for postexposure prophylaxis only after 10 to 14 days of fluoroquinolone or doxycycline treatment and then only if there are contraindications to these 2 classes of medications (eg, pregnancy, lactation, age <18 years, or intolerance of other antibiotics). Doxycycline could also be used if antibiotic susceptibility testing, exhaustion of drug supplies, or adverse reactions preclude use of ciprofloxacin. For children heavier than 45 kg, the adult dosage should be used; for children lighter than 45 kg, 2.5 mg/kg of doxycycline orally every 12 hours should be used. (¶) See "Management of Pregnant Population" for details.

# Management of Special Groups
Consensus recommendations for special groups as set forth herein reflect the clinical and evidence-based judgments of the working group and at this time do not necessarily correspond with FDA-approved use, indications, or labeling.

Children. It has been recommended that ciprofloxacin and other fluoroquinolones should not be used in children younger than 16 to 18 years because of a link to permanent arthropathy in adolescent animals and transient arthropathy in a small number of children. 94 However, balancing these risks against the risks of anthrax infection caused by an engineered antibiotic-resistant strain, the working group recommends that ciprofloxacin be used as a component of combination therapy for children with inhalational anthrax. For postexposure prophylaxis or following a mass casualty attack, monotherapy with fluoroquinolones is recommended by the working group 97 (Table 4).

The American Academy of Pediatrics has recommended that doxycycline not be used in children younger than 9 years because the drug has resulted in retarded skeletal growth in infants and discolored teeth in infants and children. 94
However, the serious risk of infection following an anthrax attack supports the consensus recommendation that doxycycline, instead of ciprofloxacin, be used in children if antibiotic susceptibility testing, exhaustion of drug supplies, or adverse reactions preclude use of ciprofloxacin. According to CDC recommendations, amoxicillin was suitable for treatment or postexposure prophylaxis of possible anthrax infection following the anthrax attacks of 2001 only after 14 to 21 days of fluoroquinolone or doxycycline administration, because of the concern about the presence of a β-lactamase. 102 In a contained casualty setting, the working group recommends that children with inhalational anthrax receive intravenous antibiotics (Table 3). In a mass casualty setting and as postexposure prophylaxis, the working group recommends that children receive oral antibiotics (Table 4). The US anthrax vaccine is licensed for use only in persons aged 18 to 65 years because studies to date have been conducted exclusively in this group. 77 No data exist for children, but based on experience with other inactivated vaccines, it is likely that the vaccine would be safe and effective.

Pregnant Women. Fluoroquinolones are not generally recommended during pregnancy because of their known association with arthropathy in adolescent animals and in small numbers of children. Animal studies have found no evidence of teratogenicity related to ciprofloxacin, but no controlled studies of ciprofloxacin in pregnant women have been conducted. Balancing these possible risks against the concerns of anthrax due to engineered antibiotic-resistant strains, the working group recommends that pregnant women receive ciprofloxacin as part of combination therapy for treatment of inhalational anthrax (Table 3). We also recommend that pregnant women receive fluoroquinolones in the usual adult dosages for postexposure prophylaxis or monotherapy treatment in the mass casualty setting (Table 4). The tetracycline class of antibiotics has been associated with both hepatic toxicity in pregnant women and fetal toxic effects, including retarded skeletal growth. 94 Balancing the risks of anthrax infection with those associated with doxycycline use in pregnancy, the working group recommends that doxycycline can be used as an alternative to ciprofloxacin as part of combination therapy in pregnant women for treatment of inhalational anthrax. For postexposure prophylaxis or in mass casualty settings, doxycycline can also be used as an alternative to ciprofloxacin in pregnant women. If doxycycline is used in pregnant women, periodic liver function testing should be performed.

No adequate controlled trials of penicillin or amoxicillin administration during pregnancy exist. However, the CDC recommends penicillin for the treatment of syphilis during pregnancy and amoxicillin as a treatment alternative for chlamydial infections during pregnancy. 94 According to CDC recommendations, amoxicillin is suitable for postexposure prophylaxis or treatment of inhalational anthrax in pregnancy only after 14 to 21 days of fluoroquinolone or doxycycline administration. 102

Ciprofloxacin (and other fluoroquinolones), penicillin, and doxycycline (and other tetracyclines) are each excreted in breast milk. Therefore, a breastfeeding woman should be treated or given prophylaxis with the same antibiotic as her infant, based on what is most safe and effective for the infant.

Immunosuppressed Persons.
The antibiotic treatment or postexposure prophylaxis for anthrax among persons who are immunosuppressed has not been studied in human or animal models of anthrax infection. Therefore, the working group recommends administering antibiotics in the same regimens recommended for immunocompetent adults and children.

# INFECTION CONTROL
There are no data to suggest that patient-to-patient transmission of anthrax occurs, and no person-to-person transmission occurred following the anthrax attacks of 2001. 18,67 Standard barrier isolation precautions are recommended for hospitalized patients with all forms of anthrax infection, but the use of high-efficiency particulate air filter masks or other measures for airborne protection is not indicated. 103 There is no need to immunize or provide prophylaxis to patient contacts (eg, household contacts, friends, coworkers) unless a determination is made that they, like the patient, were exposed to the aerosol or surface contamination at the time of the attack.

In addition to immediate notification of the hospital epidemiologist and state health department, the local hospital microbiology laboratories should be notified at the first indication of anthrax so that safe specimen processing under biosafety level 2 conditions can be undertaken, as is customary in most hospital laboratories. 56 A number of disinfectants used for standard hospital infection control, such as hypochlorite, are effective in cleaning environmental surfaces contaminated with infected bodily fluids. 22,103

Proper burial or cremation of humans and animals that have died of anthrax infection is important in preventing further transmission of the disease. Serious consideration should be given to cremation. Embalming of bodies could be associated with special risks. 103 If autopsies are performed, all related instruments and materials should be autoclaved or incinerated. 103 The CDC can provide advice on postmortem procedures in anthrax cases.

# DECONTAMINATION
Recommendations for decontamination in the event of an intentional aerosolization of B anthracis spores are based on evidence concerning aerosolization techniques, predicted spore survival, environmental exposures at Sverdlovsk and among goat hair mill workers, and environmental data collected following the anthrax attacks of 2001.

The greatest risk to humans exposed to an aerosol of B anthracis spores occurs when spores first are made airborne, the period called primary aerosolization. The aerobiological factors that affect how long spores remain airborne include the size of the dispersed particles and their electrostatic properties. 100 Technologically sophisticated dispersal methods, such as aerosol release from military aircraft of large quantities of B anthracis spores manipulated for use in a weapon, are potentially capable of exposing high numbers of victims over large areas. Recent research by Canadian investigators has demonstrated that even "low-tech" delivery systems, such as the opening of envelopes containing powdered spores in indoor environments, can rapidly deliver high concentrations of spores to persons in the vicinity. 104 In some circumstances, indoor airflows, activity patterns, and heating, ventilation, and air conditioning systems may transport spores to other parts of the building.

Following the period of primary aerosolization, B anthracis spores may settle on surfaces, possibly in high concentrations.
The risk that B anthracis spores might pose by a process of secondary aerosolization (resuspension of spores into the air) is uncertain and is likely dependent on many variables, including the quantity of spores on a surface; the physical characteristics of the powder used in the attack; the type of surface; the nature of the human or mechanical activity that occurs in the affected area; and host factors.

A variety of rapid assay kits are available to detect B anthracis spores on environmental surfaces. None of these kits has been independently evaluated or endorsed by the CDC, FDA, or Environmental Protection Agency, and their functional characteristics are not known. 105 Many false-positive results occurred following the anthrax attacks of 2001. Thus, any result using currently available rapid assay kits does not necessarily signify the presence of B anthracis; it is simply an indication that further testing is required by a certified microbiology laboratory. Similarly, the sensitivity and false-negative rate of these kits are unknown.

At Sverdlovsk, no new cases of inhalational anthrax developed beyond 43 days after the presumed date of release. None were documented during the months and years afterward, despite only limited decontamination and vaccination of 47 000 of the city's 1 million inhabitants. 59 Some have questioned whether any of the cases with onset of disease beyond 7 days after release might have represented illness following secondary aerosolization from the ground or other surfaces. It is impossible to state with certainty that secondary aerosolizations did not occur in Sverdlovsk, but it appears unlikely. The epidemic curve reported is typical for a common-source epidemic, 3,60 and it is possible to account for virtually all confirmed cases having occurred within the area of the plume on the day of the accident. Moreover, if secondary aerosolization had been important, new cases would likely have continued well beyond the observed 43 days.

Although persons working with animal hair or hides are known to be at increased risk of developing inhalational or cutaneous anthrax, surprisingly few occupational exposures in the United States have resulted in disease. During the first half of the 20th century, a significant number of goat hair mill workers were heavily exposed to aerosolized spores. Vaccination became mandatory for workers in goat hair mills only in the 1960s. Prior to that, many unvaccinated person-years of high-risk exposure had occurred, but only 13 cases of inhalational anthrax were reported. 27,54 One study of environmental exposure, conducted at a Pennsylvania goat hair mill, showed that workers inhaled up to 510 B anthracis particles of at least 5 µm in diameter per person per 8-hour shift. 54 These concentrations of spores were constantly present in the environment during the time of this study, but no cases of inhalational anthrax occurred.

Field studies using B anthracis-like surrogates have been carried out by US Army scientists seeking to determine the risk of secondary aerosolization. One study concluded that there was no significant threat to personnel in areas contaminated by 1 million spores per square meter, either from traffic on asphalt-paved roads or from a runway used by helicopters or jet aircraft. 106 A separate study showed that in areas of ground contaminated with 20 million Bacillus subtilis spores per square meter, a soldier exercising actively for a 3-hour period would inhale between 1000 and 15 000 spores. 107
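Exposure figures like those quoted above follow from back-of-the-envelope aerosol arithmetic: inhaled dose is air concentration multiplied by breathing rate and duration. A minimal sketch; the concentration and ventilation values below are invented for illustration and are not measurements from the cited studies:

```python
def inhaled_spores(conc_per_m3: float, minute_ventilation_l: float,
                   hours: float) -> float:
    """Spores inhaled = concentration (spores/m^3) x ventilation (L/min,
    converted to m^3/h) x exposure duration (h)."""
    m3_per_hour = minute_ventilation_l * 60 / 1000.0
    return conc_per_m3 * m3_per_hour * hours

# A worker breathing ~10 L/min in air holding 100 spores/m^3 for an
# 8-hour shift inhales on the order of a few hundred spores, the same
# magnitude as the mill-worker figure quoted above.
print(round(inhaled_spores(100, 10, 8)))  # -> 480
```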
Much has been written about the technical difficulty of decontaminating an environment contaminated with B anthracis spores. A classic case is the experience at Gruinard Island, Scotland. During World War II, the British military undertook explosives testing with B anthracis spores. Spores persisted and remained viable for 36 years following the conclusion of testing. Decontamination of the island occurred in stages, beginning in 1979 and ending in 1987, when the island was finally declared fully decontaminated. The total cost is unpublished, but materials required included 280 tons of formaldehyde and 2000 tons of seawater. 108

Following the anthrax attacks of 2001, substantial efforts were undertaken to decontaminate environmental surfaces exposed to B anthracis spores. Sections of the Hart Senate office building in Washington, DC, contaminated by the opening of a letter laden with B anthracis, were reopened only after months of decontamination procedures at an estimated cost of $23 million. 109 Decontamination efforts at many other buildings affected by the anthrax attacks of 2001 have not yet been completed.

Prior to the anthrax attacks of 2001, there had been no recognition or scientific study showing that B anthracis spores of "weapons grade" quality would be capable of leaking out of the edges of envelopes or through the pores of envelopes, with resulting risk to the health of those handling or processing those letters. When it became clear that the Florida case of anthrax was likely caused by a letter contaminated with B anthracis, assessment of postal workers who might have handled or processed that letter showed no illness. 69 When the anthrax cases were discovered, each was linked to a letter that had been opened. At first, there was no evidence of illness among persons handling or processing unopened mail. This fact influenced the judgment that persons handling or processing unopened B anthracis letters were not at risk. These judgments changed when illness was discovered in persons who had handled or processed unopened letters in Washington, DC.

Much remains unknown about the risks to persons handling or processing unopened letters containing B anthracis spores. It is not well understood how the mechanical systems of mail processing in a specific building would affect the risk of disease acquisition in a worker handling a contaminated letter in that facility. It is still uncertain what the minimum dose of spores required to cause infection in humans would be, although it may theoretically be as few as 1 to 3 spores. 47 The mechanisms of disease acquisition in the 2 fatal inhalational anthrax cases in New York City and in Connecticut remain unknown, although it is speculated that disease in these 2 cases followed the inhalation of small numbers of spores present in some manner in "cross-contaminated" mail.

The discovery of B anthracis spores in a contaminated letter in the office of Sen Daschle in the Hart office building led the Environmental Protection Agency to conduct tests in this office to assess the risk of secondary aerosolization of spores. Prior to the initiation of decontamination efforts in the Hart building, 17 blood agar gel plates were placed around the office and normal activity in the office was simulated. Sixteen of the 17 plates yielded B anthracis.
Although this experiment did not allow conclusions about the specific risk of persons developing anthrax infection in this context, it did demonstrate that routine activity in an environment contaminated with B anthracis spores could cause significant spore resuspension. 110

Given the above considerations, if an environmental surface is proved to be contaminated with B anthracis spores in the immediate area of a spill or in close proximity to the point of release of B anthracis biological weapons, the working group believes that decontamination of that area would likely decrease the risk of acquiring anthrax by secondary aerosolization. However, as has been demonstrated in environmental decontamination efforts following the anthrax attacks of 2001, decontamination of buildings or parts of buildings following an anthrax attack is technically difficult. For these reasons, the working group advises that decisions about methods for decontamination following an anthrax attack follow full expert analysis of the contaminated environment and the anthrax weapon used in the attack, and be made in consultation with experts on environmental remediation. If vaccines were available, postexposure vaccination might be a useful intervention for those working in highly contaminated areas, because it could further lower the risk of anthrax infection.

In the setting of an announced alleged B anthracis release, such as the series of anthrax hoaxes occurring in many areas of the United States in 1998 111 and following the anthrax attacks of 2001, any person coming in direct physical contact with a substance alleged to contain B anthracis should thoroughly wash the exposed skin and articles of clothing with soap and water. 112 In addition, any person in direct physical contact with the alleged substance should receive postexposure antibiotic prophylaxis until the substance is proved not to be B anthracis. The anthrax attacks of 2001 and new research 104 have shown that opening letters containing substantial quantities of B anthracis spores can, in certain conditions, confer risk of disease to persons at some distance from the location where the letter was opened. For this reason, when a letter is suspected of containing (or proved to contain) B anthracis, immediate consultation with local and state public health authorities and the CDC for advice on medical management is warranted.

# Additional Research
Development of a recombinant anthrax vaccine that would be more easily manufactured and would require fewer doses should remain a top priority. Rapid diagnostic assays that could reliably identify early anthrax infection and quickly distinguish it from other flulike or febrile illnesses would become critical in the event of a large-scale attack. Simple animal models for use in comparing antibiotic prophylactic and treatment strategies are also needed. Operational research to better characterize the risks posed by environmental contamination with spores, particularly inside buildings, and research on approaches to minimize risk in indoor environments by means of air filters or methods for environmental cleaning following a release are also needed. A better understanding of the genetics and pathogenesis of anthrax, as well as mechanisms of virulence and immunity, will be of importance in the prospective evaluation of new therapeutic and diagnostic strategies.
Novel therapeutic approaches with promise, such as the administration of competitors against the protective antigen complex, 113 should also be tested in animals and developed where evidence supports this. Recent developments, such as the publication of the B anthracis genome and the determination of the crystal structures of the lethal and edema factors, could hold great clinical promise for both the prevention and treatment of anthrax infection. 114

Funding/Support: Funding for this study was provided primarily by each participant's institution or agency.

Disclaimers: In many cases, the indications, dosages, and other information are not consistent with current approved labeling by the US Food and Drug Administration (FDA). The recommendations on the use of drugs and vaccine for uses not approved by the FDA do not represent the official views of the FDA or of any of the federal agencies whose scientists participated in these discussions. Unlabeled uses of the products recommended are noted in the sections of this article in which these products are discussed. Where unlabeled uses are indicated, information used as the basis for the recommendation is discussed. The views, opinions, assertions, and findings contained herein are those of the authors and should not be construed as official US Department of Health and Human Services, US Department of Defense, or US Department of Army positions, policies, or decisions unless so designated by other documentation.

Ciprofloxacin is only approved for "inhalational anthrax (postexposure)" and is not approved by the FDA for the treatment of inhalational anthrax. At this time, then, clinicians have no options that have been approved by the FDA for the treatment of inhalational anthrax. In the absence of FDA approval for any specific treatment for inhalational anthrax, clinicians must rely on other sources of guidance regarding treatment recommendations for this disease process.

Dr Tice recommends consideration of a loading dose of doxycycline and ciprofloxacin in the treatment of inhalational anthrax. We do not believe there is sufficient evidence to support changing our recommendations accordingly. Tetracyclines exhibit persistent time-dependent bactericidal effects; the time above the minimum inhibitory concentration (MIC) predicts therapeutic outcome. 1 Fluoroquinolone antibiotics, on the other hand, exhibit concentration-dependent killing with persistent effects; the ratio of the area under the curve to the MIC predicts therapeutic outcome. 2 These factors are more important clinically than steady-state levels of these drugs. In addition, we are aware of no information suggesting improvement in clinical outcome with loading doses of these classes of antibiotics, and the therapeutic efficacy of the standard recommended dosing regimens for these antibiotics (the same regimens that appear in our consensus paper) has been demonstrated in numerous clinical settings. Until more data regarding improvement in clinical outcomes following the use of loading doses for these antimicrobials exist, we are reluctant to propose any changes in the guidelines.
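The contrast between these two pharmacodynamic indices can be made concrete with a small calculation. The following is a minimal sketch, assuming a hypothetical one-compartment intravenous concentration profile with illustrative parameter values; it is not a dosing model for any particular drug or regimen.

```python
# Minimal sketch of the two PK/PD indices contrasted above: time above
# the MIC (time-dependent killing, as with tetracyclines) and AUC/MIC
# (concentration-dependent killing, as with fluoroquinolones). The
# one-compartment model and all parameter values are hypothetical.
import math

def pk_pd_indices(c0, half_life_h, tau_h, mic, dt=0.01):
    """%T>MIC and AUC/MIC over one dosing interval after an IV bolus."""
    k = math.log(2) / half_life_h          # first-order elimination rate
    t, t_above, auc = 0.0, 0.0, 0.0
    while t < tau_h:
        c = c0 * math.exp(-k * t)          # concentration at time t
        auc += c * dt                      # left Riemann sum for AUC
        if c > mic:
            t_above += dt
        t += dt
    return 100.0 * t_above / tau_h, auc / mic

pct_t, auc_mic = pk_pd_indices(c0=4.0, half_life_h=4.0, tau_h=12.0, mic=1.0)
print(f"%T>MIC = {pct_t:.0f}%, AUC/MIC per interval = {auc_mic:.1f}")
```

With these illustrative numbers, the concentration stays above the MIC for roughly two thirds of the interval. Doubling the initial concentration would double AUC/MIC, but would extend time above the MIC by only one additional half-life; that asymmetry is the pharmacodynamic point made above.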
# CORRECTION

Incorrect Wording: Subsequent to the publication of the Consensus Statement entitled "Anthrax as a Biological Weapon, 2002: Updated Recommendations for Management," published in the May 1, 2002, issue of THE JOURNAL (2002;287:2236-2252), the authors wish to make available the following updates based on information from the US Food and Drug Administration (FDA) and the Centers for Disease Control and Prevention (CDC).

In Table 3 on page 2246, the pediatric dosage of ciprofloxacin for "Initial IV [intravenous] Therapy" for inhalational anthrax in the contained casualty setting should read, "10 mg/kg every 12 h (maximum of 400 mg per dose)," and subsequent oral therapy under "Duration" should be "15 mg/kg per dose taken orally every 12 h (maximum of 500 mg per dose)." The doxycycline dosages for children should be based on weight (ie, > or ≤ 45 kg) and not on age.

In Table 4 on page 2247, the pediatric dosage of ciprofloxacin for "Initial Oral Therapy" of inhalational anthrax infection in the mass casualty setting or for postexposure prophylaxis should read, "15 mg/kg per dose taken orally every 12 h (maximum of 500 mg per dose)." The correct dosage of amoxicillin for children who weigh less than 20 kg in a mass casualty setting or for postexposure prophylaxis is "80 mg/kg to be taken orally in 3 divided doses every 8 h." The footnote marked by a section mark (§) in Table 4 should read as follows: "According to the CDC recommendations for the bioterrorist attacks in 2001, in which B anthracis was susceptible to penicillin, amoxicillin was a suitable alternative for postexposure prophylaxis in infants, children, and women who were pregnant or who were breastfeeding. Amoxicillin was also a suitable alternative for completion of 60 days of antibiotic therapy for patients in these groups with cutaneous or inhalational anthrax whose clinical illness had resolved after treatment with a ciprofloxacin- or doxycycline-based regimen (14-21 days for inhalational or complicated cutaneous anthrax; 7-10 days for uncomplicated cutaneous anthrax). Such patients required prolonged therapy because they were presumably exposed to aerosolized B anthracis."

In Table 5 on page 2247, the pediatric dosage of ciprofloxacin for treatment of cutaneous anthrax infection should be "15 mg/kg per dose taken orally every 12 h (maximum of 500 mg per dose)." The pediatric doxycycline dosage should be based on weight (ie, > or ≤ 45 kg), not age. The most current versions of Tables 3, 4, and 5 are available online at: http://jama.ama-assn.org/cgi/content/full/287/17/2236.

The textual changes are as follows: On page 2245, the sentence "Penicillin, doxycycline, and ciprofloxacin are approved by the FDA for the treatment of inhalational anthrax infection, 56,89,90,94 and other antibiotics are under study" should read, "Penicillin and doxycycline are approved by the FDA for the treatment of anthrax. 56,89,90,94 Although neither penicillin, doxycycline, nor ciprofloxacin are specifically approved by the FDA for the treatment of inhalational anthrax, these drugs may be useful when given in combination with other antimicrobial drugs." On page 2247, the sentence in the "Postexposure Prophylaxis" section of the text that says, "There are no FDA-approved postexposure antibiotic regimens following exposure to a B anthracis aerosol" should read, "Ciprofloxacin, doxycycline, and penicillin G procaine are approved by the FDA for postexposure prophylaxis of inhalational anthrax."
On page 2248 in the "Children" subsection, the sentence that begins "According to CDC recommendations . . . " should read, "According to the CDC recommendations for the bioterrorist attacks in 2001, in which B anthracis was susceptible to penicillin, amoxicillin was a suitable alternative for postexposure prophylaxis in infants and children (Table 4)." In the "Pregnant Women" subsection, the sentence that begins, "According to the CDC recommendations . . . " should read, "According to the CDC recommendations for the bioterrorist attacks in 2001, in which B anthracis was susceptible to penicillin, amoxicillin was a suitable alternative for postexposure prophylaxis in women who were pregnant or who were breastfeeding (Table 4)."

# CME ANNOUNCEMENT

We apologize for the interruption in CME and hope that you will enjoy the improved online features that will be available in early 2003.
For information about other occupational safety and health problems, call 1-800-35-NIOSH.

DHHS (NIOSH) Publication No. 93-103

# PREFACE

A memorandum has been signed between the Centers for Disease Control, National Institute for Occupational Safety and Health (NIOSH), USA, and the Nordic Expert Group for Documentation of Occupational Exposure Limits (NEG). The purpose of the memorandum is to exchange information and expertise in the area of occupational safety and health. One product of this agreement is the development of documents to provide a scientific basis for establishing recommended occupational exposure limits. The exposure limits will be developed separately by each country according to the different national policies. This document on the health effects of occupational exposure to ethyl ether is the second product of that agreement. The document was written by Björn Arvidson, Dr.

# INTRODUCTION

Ethyl ether is a colorless, highly volatile, mobile liquid with a characteristic pungent odor. Ethyl ether is a serious fire hazard, as its vapor readily forms explosive mixtures with air and oxygen. It is miscible with the lower aliphatic alcohols, benzene, chloroform, petroleum ether, other fat solvents, and many oils. The primary physiologic effect of ethyl ether is general anesthesia.

1 ppm = 3.08 mg/m3; 1 mg/m3 ≈ 0.325 ppm

Ethyl ether usually starts to oxidize soon after distillation, and ether peroxides, acetaldehyde, and acetic acid are formed. Ether peroxide is less volatile than ethyl ether and tends to concentrate during evaporation or distillation. Ether peroxides, when concentrated, are a serious detonation hazard. Ethyl ether can be tested for the presence of peroxides by shaking 10 ml of ether for one min with 1 ml of a freshly prepared 10% aqueous solution of potassium iodide in a 25-ml, glass-stoppered cylinder of colorless glass, protected from light. Appearance of a yellow color indicates peroxides (53). Other rapid tests for detecting peroxides in liquids are also available (53). Peroxides can be removed by distillation or by passage through an alumina column. If activated alumina is used, the ether is also dried in the process (53). Ferrous sulfate (30% solution in water) can be used in the proportion of 1 lb (0.4536 kg) to each 30 U.S. gallons (113.5 l) of ethyl ether to extract and destroy peroxides (7).

# PHYSICAL AND CHEMICAL PROPERTIES

Factors that delay decomposition of ethyl ether are absence of light, storage in a cool place, storage in copper tanks, or contact with a copper screen. Copper and certain other metals, particularly iron and mercury, prevent the oxidation of ethyl ether since they are oxidized preferentially and consume the oxygen that would otherwise combine with ethyl ether (3). Ethyl ether should be stored in the original metal cans rather than in glass bottles. Very dry ethyl ether undergoes oxidation much more readily than ethyl ether containing a few tenths of a percent of water (7).

Ethyl ether is produced on a large scale by dehydration of ethanol or by hydration of ethylene, both processes being carried out in the presence of sulfuric acid (3,28,80,151). Ethyl ether has a wide range of uses in the chemical industry. It is a good solvent for fats, oils, dyes, gums, resins, raw rubber, smokeless powder, perfumes, and nitrocellulose. Ethyl ether is also used in the manufacture of photographic films and pharmaceuticals, and as a reaction and extraction medium in the chemical industry.
In addition, ethyl ether is used as an inhalation anesthetic in surgery, as a refrigerant, in diesel fuels, in dry cleaning, and as a starting fuel for engines.

# USES AND OCCURRENCE

# Uses

Ethyl ether is commercially available in different grades. Depending on the consumer and use, specifications vary. In many instances the ether has to meet a specific test which is written into the specification (e.g., it may be important that the ether is free from alcohol and aldehyde, or completely anhydrous). The technical concentrated ether contains very small amounts of alcohol, water, aldehydes, peroxides, and other impurities such as sulfuric acid, sulfur dioxide, mercaptans, and ethyl esters (3,28,35,80). Some doubt exists as to whether or not these impurities as found in ethyl ether are toxic to man. In dogs, concentrations of aldehyde up to 0.5% and mercaptans up to 1% produced no significant effects. Ether peroxides in a concentration of 0.3% had no effect, whereas 0.5% caused a fall in blood pressure and respiratory difficulties. Ethyl sulfide in 1% concentration caused gastroenteritis (23). Ethyl ether containing impurities produced anesthesia in mice more slowly than pure ethyl ether (84). The more refined grades of ethyl ether, such as anesthetic ether, are obtained from technical ether by redistillation and dehydration followed by alkali or charcoal treatment (80).

To prevent pollution by anesthetic gases in operating rooms, the Hospital Engineering Cooperative Groups of Denmark made recommendations in 1974 (146). The recommended highest permissible average concentration of ethyl ether in the breathing zones of anesthesia personnel was 3 ppm (9 mg/m3).

# Air Concentrations in the Working Environment

Air concentrations of ethyl ether were measured in the laboratory of a brewing company in Illinois, USA (154). Ethyl ether was used in an extraction procedure to determine the fat content in corn. Environmental monitoring was conducted using two methods: (1) a direct reading instrument, and (2) personal and area sampling to determine the integrated average exposures over the work shift. All results for the personal and area samples were less than 1 ppm. Measurements with the direct reading instrument showed peak concentrations up to 1,000 ppm (3,080 mg/m3) around the cork gaskets and pressure release valves on the extraction apparatus. However, the vapors were released inside a plexiglass enclosure where they were properly exhausted. Personal breathing zone and area samples were taken in an analytical laboratory using solvents in the preparation of soil and water samples (155). The values for ethyl ether were below the limit of quantitation (0.03 mg/sample).

# Analytical Methods for Ethyl Ether in Air and Body Tissues

# Direct Field Methods

Analysis of ethyl-ether-contaminated air in the work area may be done by means of a commercially available combustible gas indicator. Direct reading detector tubes are also available (8, 92).

# Laboratory Methods

An air sampling technique for diethyl ether has been proposed by NIOSH (109). A known volume of air is drawn through a charcoal tube at a flow rate of 0.01 to 0.2 l/min. The charcoal in the tube is transferred to a small sample container and ethyl ether is desorbed with ethyl acetate. An aliquot of the desorbed sample is then injected into a gas chromatograph and the area of the resulting peak is determined and compared with standards.
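The arithmetic for turning such a measurement into an air concentration is straightforward. Below is a minimal sketch; the recovered mass, desorption efficiency, sampling rate, and duration are hypothetical illustration values rather than constants of the NIOSH method, and the mg/m3-to-ppm factor of 3.08 is the one given in the introduction.

```python
# Minimal sketch: converting a charcoal-tube result into an air
# concentration. All sample values below are hypothetical.
MG_M3_PER_PPM = 3.08  # conversion factor for ethyl ether (see introduction)

def air_concentration_ppm(mass_mg, desorption_eff, flow_l_min, minutes):
    """Air concentration in ppm from the mass recovered from the tube."""
    corrected_mass_mg = mass_mg / desorption_eff  # correct for incomplete desorption
    volume_m3 = flow_l_min * minutes / 1000.0     # litres sampled -> cubic metres
    return (corrected_mass_mg / volume_m3) / MG_M3_PER_PPM

# Example: 0.37 mg recovered, 90% desorption efficiency,
# 0.1 l/min for 120 min -> about 11 ppm
print(round(air_concentration_ppm(0.37, 0.90, 0.1, 120), 1))
```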
Advantages of the charcoal-tube method are a small, portable sampling device and minimal interferences, which can be eliminated by altering chromatographic conditions. A disadvantage is that the upper limit of the range of the method depends on the adsorptive capacity of the charcoal tube. In addition, the precision of the method is limited by the reproducibility of the pressure drop across the tubes.

Blood and tissue levels of ethyl ether can be analyzed by infrared analysis. This method was originally devised by Stewart et al. (144) and was used, with a slight modification, by Chenoweth et al. (33) for measuring blood and tissue levels of ethyl ether and other anesthetics in dogs. This method is also applicable to air analysis (8,31).

Ethyl ether in blood and urine can be analyzed by gas chromatography. The specimen is injected directly into a gas chromatograph equipped with a flame-ionization detector and a molecular sieve column. Specimens should be stored at 4°C and analyzed as soon as possible. Breath samples may be analyzed by direct injection of 1-2 ml, using as standards suitably prepared dilutions of ethyl ether in air. Calculation is based on a standard curve of peak height versus concentration of the standards (17).

Ethyl ether in air may also be analyzed by passing the air through a fritted glass bubbler for reaction of the ethyl ether with acidic potassium dichromate and subsequent iodometric determination (133). A modification of this method has also been used for determination of ethyl ether in blood (6, 117).

# Biological Monitoring

Tests for exposure may include expired breath for unmetabolized ethyl ether and blood for ethyl ether content (8). Blood concentrations of ethyl ether have been found to correlate with the degree of ethyl ether exposure and the extent of intoxication. According to Baselt (17), the concentration of ethyl ether in blood should not exceed a level of about 20 mg/l in asymptomatic workers. Gas chromatography can be used for determinations of ethyl ether in blood, urine, and breath samples (17). Infrared analysis can be used for detection of ethyl ether in breath and body fluids (8).

# DISTRIBUTION AND METABOLISM

# Uptake

# Uptake by Inhalation

After inhalation, ethyl ether is rapidly transferred from the alveoli to the blood. The normal alveolar membrane poses no barrier to the transfer of ethyl ether in either direction. The blood/gas distribution coefficient of ethyl ether is high: 12.1. For oil/gas, the distribution coefficient is 65 (44). The more soluble an anesthetic is in blood, the more of it must be dissolved in blood to raise its partial pressure there appreciably. Induction of ethyl ether anesthesia is therefore slow. To achieve deep anesthesia with ethyl ether in medical practice takes 15-25 min (35) (the concentration of ethyl ether used for the induction of anesthesia is usually 10 to 15 vol%, or 308,000 to 462,000 mg/m3).

# Uptake from the Gastrointestinal Tract

No studies on the uptake of ethyl ether after peroral administration were found. According to Hayhurst (68), absorption of ether from the stomach and rectum is very prompt and may be used for anesthesia.

No data were found on the extent of uptake of ethyl ether through undamaged skin in humans. A case report described a 14-year-old boy who applied ethyl ether to his scalp under plastic occlusion to treat a seborrheic dermatitis. The boy was found dead, and elevated but nonfatal concentrations of ethyl ether were found in the various postmortem tissues subjected to toxicological analysis.
The authors' conclusion was that the boy died from intoxication by resorbed ethyl ether. The authors interpreted the sublethal ethyl ether concentrations found in the various tissues as probably underestimates, owing to ethyl ether's high volatility (29).

# Distribution

The distribution of 14C-ethyl ether in the mouse was investigated with use of low-temperature autoradiography (34). Ethyl ether was administered by inhalation for 10 min. When the animals were killed immediately after the inhalation period, the anesthetic was rather uniformly distributed throughout the body, although higher concentrations were found in the brain, kidney, liver, and brown fat. After 2 hr, most of the radioactivity had left the body, but the liver, kidneys, intestines, and nasal mucous membranes were still labelled. A quantitative analysis showed that in 15 min, the radioactivity in brown fat had reached its peak relative concentration, whereas the relative concentrations of radioactivity in liver and kidneys continued to increase until the termination of the 2-hr experimental period. At 2 hr, 3.6% of the administered dose was present in metabolized form in the liver and intestine. By that time, all liver radioactivity was nonvolatile.

The distribution of ethyl ether and four other volatile organic solvents in blood was investigated in rats. The animals were exposed to 500 ppm (1,540 mg/m3) of ethyl ether for 2 hr. Forty-nine percent of the ethyl ether in the blood was found in the red blood cells. When a solution of ethyl ether was added to human plasma and red blood cell (RBC) samples, a large fraction of ethyl ether was recovered from ammonium-sulphate-precipitated plasma proteins and hemoglobin. A small fraction of ethyl ether was recovered from plasma water and RBC water. The authors conclude that proteins, chiefly hemoglobin, are the major carriers of ethyl ether and other volatile organic solvents in blood (87).

# Biotransformation

It has been estimated that about 8-10% of absorbed ethyl ether is metabolized in the body (55), whereas the remainder is excreted unchanged through the lungs. Ethyl ether is metabolized to ethanol and acetaldehyde by an inducible hepatic microsomal enzyme system, a cytochrome P450-containing monooxygenase system (32,145). Ethanol and acetaldehyde are rapidly oxidized to acetate, and the acetate subsequently enters the 2-carbon pool of intermediary metabolism. Van Dyke et al. (160) showed in rats that approximately 4% of an intraperitoneally administered dose (0.1 ml) of 14C-ethyl ether was recovered as exhaled 14CO2 during a 24-hr period, and 2% of the radioactivity was transformed into nonvolatile urinary products. Green and Cohen (55) reported that a portion of 14C-diethyl ether administered to mice by inhalation was rapidly transformed into fatty acids (palmitic, stearic, and oleic acids) and cholesterol, which were recovered from an ether extract of liver. Three other nonvolatile radioactive metabolites were tentatively identified as monoglycerides, diglycerides, and triglycerides. Two hr after administration of ethyl ether by inhalation to mice, 3.6% of the administered dose was present in metabolized form in the liver and intestine. Thin-layer radiochromatography of extracts from the liver demonstrated four nonvolatile metabolites. The major metabolite was presumed to be a glucuronide of ether (34). The hepatic microsomal oxidation of ethyl ether is catalyzed primarily by the cytochromes P450 (32). More recent investigations have implicated specific isoenzymes.
Known inhibitors of isoenzyme IIE1 strongly inhibited ether de-ethylation, and a monoclonal antibody to P450IIE1 had the same effect (24). A mixed-function oxidation known to be catalyzed by cytochrome P450IIE1, the N-demethylation of dimethylnitrosamine, was inhibited by ethyl ether anesthesia (150). These findings all indicate that ethyl ether is oxidatively metabolized primarily by P450IIE1 (24).

# Elimination

The concentration of ethyl ether in brain, blood, and muscle was investigated in rats during ether elimination. When the rats had breathed 300,000 mg/m3 (approximately 9 vol%) of ethyl ether for 10 min, there was an abrupt fall in the concentration of ethyl ether in the brain and blood immediately after discontinuation of the inhalation. When the inhalation exposures were discontinued after 1 hr, the fall in the ether concentration in the brain and blood was slower (42). The elimination of ethyl ether from fatty tissue occurred slowly and was practically finished after about 8 hr (43). According to the experience of anesthesiologists, termination of anesthesia with soluble anesthetics like ethyl ether is slower in obese individuals than in those with a lean body mass. This might partly be due to partitioning of the anesthetic agent into certain tissues (e.g., body fat) (25).

# Mechanism of Ether Resistance in Drosophila Melanogaster

A few studies have been performed to elucidate the mechanism of ethyl ether resistance in certain strains of Drosophila melanogaster. A strain named Eth-29 was found to be resistant to ethyl ether and other volatile anesthetic agents like chloroform and halothane. Ethyl ether resistance for lethality was determined using mortality as an endpoint when flies were exposed to very high concentrations of ethyl ether (111). Ethyl ether resistance to anesthesia uses loss of the avoidance reflex as an endpoint and is an incompletely dominant or polygenic character (49). The resistance for ethyl ether anesthesia and for ethyl ether lethality was found to be controlled by different genetic systems. In 18 strains of Drosophila melanogaster with different genetic characteristics, a wide variation in sensitivities to ethyl ether anesthesia, gamma-ray knock-down, and gamma-ray lethality was demonstrated. There was no correlation between DNA-repair capacity and ethyl ether sensitivity or gamma-ray knock-down sensitivity, whereas strains deficient in excision repair were found to be sensitive to gamma-ray lethality. These findings indicate that lethality is caused by DNA damage, whereas the targets for ethyl ether anesthesia should be different, possibly being the membrane (50).

# GENERAL TOXICOLOGY

# Mechanism of Action

The mechanism of action by which general anesthetics, including ethyl ether, produce reversible loss of consciousness is still unclear. Anesthesia can be produced by a wide variety of chemical agents, ranging from inert rare gases to steroid molecules (128). This apparent lack of specificity, together with the observation that general anesthesia can be reversed by high pressure, poses a unique pharmacological problem (48). Most theories concern interaction of anesthetics with either membrane lipids or hydrophobic regions of specific membrane-bound proteins (for reviews see 48, 63, 128). One hypothesis is that the anesthetic changes the function of an ion channel protein by modifying the conformation of the protein.
Some investigators suggest that the GABA receptor may be the ion channel protein that is affected by inhalation anesthetic agents (25,101). According to Halsey (63), the most appropriate concept for the mechanism of general anesthesia is a heterogeneous site of anesthetic action, including both lipid and protein membrane components linked with neuronal function.

# Effects of Ethyl Ether on Protein Synthesis in Animals

Light ethyl ether anesthesia for 1-2 min did not affect the rate of protein synthesis measured in the mammary gland, liver, intestinal mucosa, and muscle of lactating rats using a flooding dose of 3H-phenylalanine that was injected intravenously. The fractional rates of protein synthesis were estimated from incorporation of 3H-phenylalanine into tissue proteins (131).

# Acute Toxicity

# Human Experience

# Inhalation

Oxygen therapy may be used in cases of narcosis (7).

# Ingestion

Moeschlin (99) has estimated the lethal dose of ethyl ether after peroral intake to be about 30 ml. In ethyl ether addiction, tolerance develops and large daily quantities of ethyl ether may be ingested (16,52). A daily intake of up to 180 g has been reported (52).

# Animal Studies

# Inhalation

Differences in the induction mixture and the duration of anesthesia may lead to different results for lethal concentrations of inhaled ethyl ether in animals (124). During a continuous exposure for 97 min, the lethal concentration of inhaled ether for mice was 133,400 mg/m3, or 4.4 vol% (86). When rapid induction and a short duration of anesthesia were used, the concentration necessary to produce respiratory arrest in mice was 18 vol% (554,000 mg/m3) (103). The lethal concentrations for the rat, rabbit, and dog were 6.4, 10.6, and 7.16 to 19.25 vol%, respectively (139). The age of the animal influences susceptibility to inhaled ethyl ether. In rats, the median time to death (LT50) for neonates was 5 to 6.5 times greater than for adult rats, and the mean concentration of ethyl ether in blood at the respective LT50 values was 2.5 to 3 times greater in neonatal rats than in adult rats (132).

# Ingestion

The oral LD50 in 2-week-old, young adult (body weight 80-160 g), and adult (body weight 300-470 g) Sprague-Dawley rats was reported to be 2.3, 2.4, and 1.7 ml/kg b.w., respectively (83). Smyth et al. (138) reported a single peroral LD50 for rats of 3.56 ml/kg.

# Chronic Toxicity

Ethyl ether is metabolized to acetaldehyde by hepatic microsomal enzymes. Long-term ethyl ether treatment of rats leads to induction of hepatic microsomal enzymes and to proliferation of smooth endoplasmic reticulum (127).

# Kidneys

Albuminuria from tubular irritation has been reported to occur in about one-fourth of patients anesthetized with ethyl ether (68). Chronic exposure to ethyl ether during the manufacture of smokeless powder has been claimed to cause occasional cases of nephritis (28, 65, 68), but this effect has since been questioned. Hamilton (64) described a man who had worked continuously for 7 years and intermittently for 5 years in a smokeless powder factory, and who developed a severe chronic interstitial nephritis verified at autopsy. Ethyl ether has been used for dissolution of an obstructed Foley catheter balloon. The use of ethyl ether for this purpose has occasionally resulted in a chemical cystitis with permanent damage to the urinary bladder (51, 91, 105).

# Central Nervous System
Acute exposure of humans to high concentrations of ethyl ether vapor causes initial excitement followed by narcosis and respiratory depression. The different stages of ethyl ether anesthesia have been described in detail and are given in Table 1. The approximate blood and alveolar levels of ethyl ether in relation to the depth of anesthesia are given in Table 2.

Long-term exposure to low concentrations of ethyl ether vapor in industry, for example during the manufacture of smokeless powder, has been reported to cause various symptoms from the central nervous system, such as sleepiness, dizziness, excitation, headache, and psychic disturbances (47). Surgeons and nurses exposed for long periods of time to ethyl ether in the operating theatre complained of tiredness, headache, loss of appetite, irritability, and difficulties in concentrating (168).

Ethyl ether has been reported to have complex effects on learning and memory in experimental studies in mice. Ethyl ether, like strychnine, appears to be a modulator of learning and memory (96). Some investigators have described retrograde amnesia after exposure to ethyl ether, but due to differences in methodology, the amnestic effects of ethyl ether have not been found consistently (167).

Ethyl ether causes cerebrovascular dilatation at twice the minimum anesthetic concentration (the minimum anesthetic concentration is the concentration at which 50% of the subjects move in response to a surgical stimulus) (172). The cerebral blood flow (CBF), cerebral metabolic rate for oxygen (CMRO2), and cerebrovascular resistance (CVR) were measured in two groups of monkeys inhaling 2 vol% (61,600 mg/m3) and 5 vol% (154,000 mg/m3) ethyl ether in air. CVR fell by 30% when the ethyl ether concentration was increased from 2 to 5%, whereas CBF and CMRO2 were unaltered. The conclusion of this study was that the effect of ethyl ether on CVR probably reflects a complex interaction of a number of factors, such as alterations in alpha-adrenergic tone associated with changes in plasma epinephrine levels, alterations in vasoactive metabolites, and alterations in beta-adrenergic receptor activity (75).

The effects of ethyl ether on the morphology of the blood-brain barrier within the optic tectum were investigated in hatching chick embryos and young chickens. Ethyl ether was administered on a small compress of surgical gauze covering the chick beak until anesthesia was reached (1-3 min). Horseradish peroxidase (HRP) was used as a marker of vascular permeability. Ethyl ether anesthesia was reported to produce extravasation of HRP by opening the tight interendothelial junctions. Areas of tracer extravasation were more numerous in the tecta of hatching embryos than in young chickens (108).

Richards and White (122) investigated the effect of volatile anesthetics (ethyl ether, halothane, methoxyflurane, and trichloroethylene) on synaptic transmission in the dentate gyrus. The experiments were performed in vitro using a preparation of the dentate gyrus of the hippocampus in guinea pigs. Ethyl ether was applied to slices of the dentate gyrus in the gas phase by mixing ethyl ether with the O2/CO2 gas mixture that superfused the upper surface of the slice. All four anesthetics depressed synaptic transmission in the dentate gyrus. The mechanism behind this effect was interpreted to be either a reduction of transmitter release, or a decrease in the sensitivity of the postsynaptic membrane to released transmitter, or both effects together.
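The resistance figure quoted from the monkey study follows from the conventional definition of cerebrovascular resistance as perfusion pressure divided by flow. The sketch below uses hypothetical pressure and flow values, not data from the cited study, to show that a 30% fall in CVR at constant CBF implies a 30% fall in perfusion pressure.

```python
# Sketch of the conventional relation CVR = CPP / CBF. Values are
# hypothetical illustrations, not data from the monkey study above.
def cvr(cpp_mmhg, cbf_ml_100g_min):
    return cpp_mmhg / cbf_ml_100g_min

baseline = cvr(cpp_mmhg=90.0, cbf_ml_100g_min=50.0)  # 1.80
deep     = cvr(cpp_mmhg=63.0, cbf_ml_100g_min=50.0)  # 1.26
print(f"CVR change: {100.0 * (deep - baseline) / baseline:.0f}%")  # -30%
```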
EEG monitoring in animals during anesthesia with ethyl ether has shown that ethyl ether reduces neuronal excitability (93,147). The depressive effect of increasing concentrations of ethyl ether on the central nervous system is also evident from Table 1.

# Peripheral Nervous System

Larrabee and Pasternak (90) studied the concentrations of various anesthetics required to block synaptic excitation of sympathetic ganglion cells and compared them to the concentrations necessary to block conduction along sympathetic nerve fibers. The experiments were performed on in vitro preparations from cats, rats, and rabbits. Ethyl ether depressed synaptic transmission through a sympathetic ganglion at a concentration similar to that assumed to exist in the blood during surgical anesthesia. Conduction along autonomic fibers of different types (A, B, and C) was blocked by ethyl ether, but synaptic transmission was depressed more readily than conduction along any type of axon.

Ethyl ether and halothane were found to affect the kinetics of sodium and potassium currents in vitro in the crayfish giant axon. Both anesthetics caused a reversible, dose-dependent speeding up of sodium current inactivation at all membrane potentials. The activation of potassium currents was faster with ethyl ether present, but there was no change in the voltage dependence of steady-state potassium currents. A theory was proposed that the effects on sodium and potassium channel gating processes were due to either an effect on the state of the lipids surrounding the channels, or a direct effect on the protein part of the channels (18). A bath concentration of 100-300 mM (7,412-22,236 mg/l) ethyl ether blocked the conduction of single action potentials in the bifurcating axon of the lobster deep extensor muscles (58). In frog sciatic nerve, 300 mM (22,236 mg/l) ethyl ether was needed to depress single-axon sodium currents (81). Significant depression of mammalian B and C fibers was produced by 77 mM (5,707 mg/l) ethyl ether (90).

# Muscle

Ethyl ether causes muscle relaxation by its depressing effect on the central nervous system (CNS) and by affecting neuromuscular transmission and the muscle itself. The mechanism of the neuromuscular blocking action of ethyl ether seems to be similar to, but not identical with, that of d-tubocurarine (35,77). When these two drugs are used in the same patient, an additive effect results from their synergism (35). The neuromuscular block produced by ethyl ether was not effectively antagonized by edrophonium or succinylcholine (77), whereas neostigmine has been reported to reverse the effect of ethyl ether on the motor end-plate (57). The reduction of muscle contractility caused by ethyl ether might be due to a pharmacological effect at the level of the sarcoplasmic reticulum (82, 129).

# Gastrointestinal Tract

Salivary and gastric secretions are increased during light ethyl ether anesthesia but are decreased during deep anesthesia. Bowel movement is decreased due to stimulation of dilator fibers and to depression of plain muscle (162). Nausea and vomiting are common after anesthesia with ethyl ether. Ethyl ether stimulates the vomiting center in the medulla, and ethyl ether is absorbed in saliva and passes to the stomach where it irritates the mucosa (35). The incidence of postoperative nausea varies with the length of operation and depth of anesthesia (162).

# Circulatory System

Ethyl ether vapor depresses myocardial activity if added to the circulation in a heart-lung preparation (119).
Ethyl ether anesthesia causes increased sympathetic nervous activity in the dog and man (98), and the plasma levels of noradrenaline increase with increasing depth of ethyl ether anesthesia (21). Skovsted and Price (134) postulated that the increase in sympathetic preganglionic activity caused by ethyl ether anesthesia in cats was due to a central action on the medullary vasomotor center or on spinal vasomotor neurons. The direct myocardial depressant action of ethyl ether seen in the isolated heart preparation is antagonized by the positive inotropic effect of adrenaline and noradrenaline released during ethyl ether anesthesia (26). Ethyl ether seems to block parasympathetic activity in normal man (118). Ethyl ether anesthesia produces remarkably small alterations in blood pressure and pulse rate and rarely leads to cardiac arrhythmias (35).

In a study in rats, the effects of ethyl ether anesthesia on whole-body hemodynamics and organ blood flow were measured using microspheres. Anesthesia was induced using a bell jar containing a gauze pad moistened with ethyl ether, and the animals were maintained between stage II and stage III of anesthesia. Ethyl ether anesthesia initially decreased arterial pressure and the total peripheral resistance, whereas the cardiac index was increased. Later, the cardiac index returned to normal and there was an increase in the total peripheral resistance index. The blood flow to the spleen, stomach, intestine, kidneys, and colon decreased during anesthesia, whereas the blood flow to the brain and heart increased (141).

# Hematological System

Anesthetic agents may have both direct and hormone-mediated effects on the leukocytes. After ethyl ether anesthesia in man, a postoperative leukocytosis has been reported which is related to the sympathomimetic effects of ethyl ether (164). Blood samples from workers chronically exposed to ethyl ether fumes in the manufacture of smokeless powder showed polycythemia and increased numbers of leukocytes in some individuals (65).

Plasma aldosterone levels and plasma renin activity increase during ethyl ether anesthesia (15,115). The plasma antidiuretic hormone levels also increase significantly after the induction of anesthesia with ethyl ether (97). Ethyl ether anesthesia caused a significant decrease in serum calcium levels in 30 patients undergoing various routine surgical operations. Serum calcium returned to near normal levels after 24 hr. The mechanism behind this effect is not known (62).

# Laboratory Animals

Ethyl ether anesthesia administered by placing rats in an ether-saturated jar and then maintained with a nose cone for 3 min caused a significant increase in plasma adrenocorticotrophic hormone (ACTH) levels. Ethyl ether failed to raise the plasma ACTH levels of rats in which an antero-lateral hypothalamic cut and adrenalectomy had been performed 7 to 8 days previously, and plasma ACTH was also unchanged in rats exposed to ethyl ether 2 hr after an antero-lateral cut. The results are interpreted as evidence that intact neural pathways entering the medial basal hypothalamus from the antero-lateral direction are necessary for the ACTH-releasing action of ethyl ether stress (78).

Plasma corticotropin releasing hormone (CRH), arginine vasopressin (AVP), oxytocin (OXY), and ACTH levels rose to approximately twice the level of control rats 2 min after the onset of a 1-min exposure to ethyl ether.
Plasma CRH rose further 5 min after the onset of ethyl ether stress, while plasma AVP and OXY returned to the baseline level at 5 min (67). Ethyl ether anesthesia did not affect the mean concentrations or dissociation constants for rat uterine estrogen or progesterone receptors (173). No difference in plasma thyroxine levels was found between control rats and rats exposed to ethyl ether inhalation for 2 min (89). Ethyl ether anesthesia in rats increased the concentration of plasma beta-endorphin-like immunoreactivity, probably of pituitary origin. This increase was not associated with significant changes in pituitary or brainstem beta-endorphin content (121).

# IMMUNE SYSTEM

After treatment of human sera with ethyl ether, there were no significant changes in immunoglobulin levels, whereas the complement activity was lost (142). Several commonly used volatile anesthetics were screened for mutagenicity in the Salmonella/rat liver microsomal assay system. Ethyl ether was reported not to be mutagenic (165). When the genotoxic activity and potency of 135 compounds were investigated in the Ames reversion test and in a bacterial DNA-repair test, there was no evidence of genotoxic activity for ethyl ether (38). In studies of sensory irritation, complaints of nasal irritation with ethyl ether started at 200 ppm (616 mg/m3), and 300 ppm (924 mg/m3) was termed objectionable as a working atmosphere.

# CARCINOGENICITY

According to Cook (36), industrial exposure at 500 to 1,000 ppm (1,540 to 3,080 mg/m3) did not cause any demonstrable injury to health, but a limit of 500 ppm seemed justifiable to avoid irritation and complaint. Amor (5) also considered a concentration of ethyl ether in the atmosphere greater than 500 ppm (1,540 mg/m3) as indicative of unsatisfactory conditions. He listed 2,000 ppm (6,160 mg/m3) as the concentration which may lead to symptoms of illness (not specified) if exposure continues for more than a short time. Ethyl ether concentrations of 2,000 to 3,000 ppm (6,160 to 9,240 mg/m3) may occur in the air during the manufacture of smokeless powder and have been reported to result in occasional slight intoxication (69). The inhalation of ethyl ether at a concentration of 2,000 ppm (6,160 mg/m3), if continued to equilibrium, has been calculated to result in the absorption of some 6.25 g of ethyl ether and a blood concentration of 90 mg/l. According to Henderson and Haggard (69), inhalation of ethyl ether at these concentrations would probably induce dizziness in some individuals, with an increased risk for industrial accidents. Concentrations above 3,000 ppm (9,240 mg/m3) and as high as 7,000 ppm (21,560 mg/m3) have been breathed by some workers for variable periods of time. It has been claimed that these concentrations do not cause any physiological or clinical signs of injury, or any increase in the incidence of industrial accidents (7). According to Amor (5), inhalation of ethyl ether at a concentration of 8,000 ppm (24,640 mg/m3) for 1 hr will give rise to severe toxic effects (toxic effects not specified). A concentration of 35,000 ppm (107,800 mg/m3) of ethyl ether usually produces unconsciousness in 30 to 40 min (69). Postmortem blood ethyl ether concentrations of 2,880-3,750 mg/l were reported in two surgical patients who died within 2.5 hr after cessation of ethyl ether administration (30).

# Ingestion

Ethyl ether is irritating to the mucous membranes, and humans therefore usually refrain from drinking it. The "ether habit" by ingestion is well known, however.
The symptoms are said to be similar to those of chronic alcoholism, but they occur earlier (28). The lethal dose of ethyl ether after peroral intake has been estimated at about 30 ml (21).

# Observations in Animals

# Inhalation

Effects of a 35-day continuous exposure to ethyl ether in concentrations of 0.1 or 1.0 vol% (3,080 or 30,800 mg/m3) were investigated in young mice, rats, and guinea pigs which were in a phase of rapid body growth. Ethyl ether had a negligible effect on rat or mouse weight gain but produced a statistically significant weight loss in guinea pigs at the higher concentration. Livers from mice and guinea pigs treated with 1.0 vol% (30,800 mg/m3) ethyl ether showed only slight increases in lesions compared to controls, whereas no increase in lesions was seen in rats. No consistent injury to any organ other than the liver was observed. The higher dose of ethyl ether was reported to be lethal to mice and guinea pigs, but not to rats. A drawback of this study is that the authors were unable to explain the mechanism behind ethyl ether lethality. The mortality was not related to changes in blood or any tissue examined, including the bone marrow. The animals showed enlargement of the liver, but how this was related to ethyl ether lethality was unclear (143).

In dogs, maintenance of anesthesia was associated with a blood concentration of 1,700 to 1,900 mg/l (average for 20 dogs, 1,870 mg/l). The experiments lasted from 20 min to 2 hr. The knee jerk was abolished at a blood ethyl ether concentration of 1,430 mg/l and the lid reflex at 1,500 mg/l.

The results obtained for the concentration of the ethyl-ether-air mixture required for the maintenance of anesthesia, and for the lethal concentration of inhaled ethyl ether, vary between different species and also between different investigators for the same species. This variation can partly be explained by differences in induction mixtures and duration of the anesthesia (124). The results of various investigations of inhalation toxicity for ethyl ether in animals are presented in the table. The susceptibility to inhaled ethyl ether in rats is influenced by the age of the animal. The median time to death (LT50) for neonates was 5 to 6.5 times greater than for adult rats. At the respective LT50 values, the mean concentration of diethyl ether in blood was 2.5 to 3 times greater in neonatal rats than in adult rats (132).

# Ingestion

The oral LD50 in 2-week-old, young adult (body weight 80-160 g), and adult (body weight 300-470 g) rats was reported to be 2.3, 2.4, and 1.7 ml/kg b.w., respectively (83).
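As an illustration of how LD50 values such as these are derived from dose-mortality data, the following minimal sketch interpolates on log dose between the two doses that bracket 50% mortality. The dose-mortality data are hypothetical and are not taken from the studies cited above.

```python
# Minimal sketch: LD50 by linear interpolation on log dose between the
# two doses bracketing 50% mortality. Data are hypothetical.
import math

doses_ml_kg = [1.0, 1.6, 2.5, 4.0]      # hypothetical oral doses
mortality   = [0.10, 0.30, 0.70, 0.95]  # hypothetical fractions responding

def ld50(doses, resp):
    for (d1, r1), (d2, r2) in zip(zip(doses, resp), zip(doses[1:], resp[1:])):
        if r1 <= 0.5 <= r2:             # bracketing pair found
            frac = (0.5 - r1) / (r2 - r1)
            log_d = math.log10(d1) + frac * (math.log10(d2) - math.log10(d1))
            return 10.0 ** log_d
    raise ValueError("50% response not bracketed by the data")

print(f"LD50 ~ {ld50(doses_ml_kg, mortality):.1f} ml/kg")
```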
For information about other occupational safety and health problems, call 1-800-35-NIQSH ii DHHS (NIOSH) Publication No. 93-103 PREFACE A memorandum has been signed between the Centers for Disease Control, National Institute for Occupational Safety and Health (NIOSH), USA, and the Nordic Expert Group for Documentation o f Occupational Exposure Limits (NEG). The purpose o f the memorandum is to exchange information and expertise in the area o f occupational safety and health. One product of this agreement is the development of documents to provide a scientific basis for establishing recommended occupational exposure limits. The exposure limits will be developed separately by each country according to the different national policies. This document on the health effects of occupational exposure to ethyl ether is the second product of that agreement. The document was written by Björn Arvidson, Dr.# INTRODUCTION Ethyl ether is a colorless, highly volatile, mobile liquid with a characteristic pungent odor. Ethyl ether is a serious fire hazard, as its vapor readily forms explosive mixtures with air and oxygen. It is miscible with the lower aliphatic alcohols, benzene, chloroform, petroleum ether, other fat solvents, and many oils. The primary physiologic effect of ethyl ether is general anesthesia. 1 ppm = 3.08 mg/m3 1 mg/m3 " 0.325 ppm Ethyl ether usually starts to oxidize soon after distillation, and ether peroxides, acetaldehyde and acetic acid are formed. Ether peroxide is less volatile than ethyl ether and tends to concentrate during evaporation or distillation. Ether peroxides, when concentrated, are a serious detonation hazard. Ethyl ether can be tested for the presence o f peroxides by shaking 10 ml of ether for one min with 1 ml of a freshly prepared 10% aqueous solution of potassium iodide in a 25-ml, glass-stoppered cylinder of colorless glass, protected from light. Appearance of a yellow color indicates peroxides (53). Other rapid tests for detecting peroxides in liquids are also available (53). Peroxides can be removed by distillation or by passage through an alumina column. If activated alumina is used, the ether is also dried in the process (53). Ferrous sulfate (30% solution in water) can be used in the proportion of 1 lb (0.4536 kg) to each 30 U.S. gallons (113.51) of ethyl ether to extract and to destroy peroxides (7). # PHYSICAL AND CHEMICAL PROPERTIES* Factors which delay decomposition of ethyl ether are absence of light, storage in a cool place, storage in copper tanks, or contact with a copper screen. Copper and certain other metals, particularly iron and mercury, prevent the oxidation o f ethyl ether since they are oxidized preferentially and consume the oxygen which is combined with ethyl ether (3). Ethyl ether should be stored in the original metal cans rather than in glass bottles. Very dry ethyl ether undergoes oxidation much more readily than ethyl ether containing a few tenths of a percent of water (7). Ethyl ether is produced on a large scale by dehydration of ethanol or by hydration o f ethylene, both processes being carried out in the presence of sulfuric acid (3,28,80,151). Ethyl ether has a wide range of uses in the chemical industry. It is a good solvent for fats, oils, dyes, gums, resins, raw rubber, smokeless powder, perfumes, and nitrocellulose. Ethyl ether is also used in the manufacture of photographic films and pharmaceuticals, and as a reaction and extraction medium in the chemical industry. 
In addition, ethyl ether is used as an inhalation anesthetic in surgery, a refrigerant, in diesel fuels, in dry cleaning, and as a starting fuel for engines ( # USES AND OCCURRENCE # Uses Ethyl ether is commercially available in different grades. Depending on the consumer and use, specifications vary. In many instances the ether has to meet a specific test which is written in the specification (e.g., it may be important that the ether is free from alcohol and aldehyde, or completely anhydrous). The technical concentrated ether contains very small amounts of alcohol, water, aldehydes, peroxides, and other impurities such as sulfuric acid, sulfur dioxide, mercaptans, and ethyl esters (3,28,35,80). Some doubt exists as to whether or not these impurities as found in ethyl ether are toxic to man. In dogs, concentrations o f aldehyde up to 0.5% and mercaptans up to 1% produced no significant effects. Ether peroxides in a concentration of 0.3% had no effect whereas 0.5% caused a fall in blood pressure and respiratory difficulties. Ethyl sulfide in 1 % concentration caused gastroenteritis (23). Ethyl ether containing impurities produced anesthesia in mice more slowly than pure ethyl ether (84). The more refined grades of ethyl ether, such as anesthetic ether, are obtained from technical ether by redistillation and dehydration followed by alkali or charcoal treatment (80). To prevent pollution by anesthetic gases in operating rooms, the Hospital Engineering Cooperative Groups of Denmark made recommendations in 1974 (146). The recommended highest permissible average concentration in the breathing zones of anesthesia personnel for ethyl ether was 3 ppm (9 mg/m3). # Air Concentrations in the Working Air concentrations of ethyl ether were measured in the laboratory of a brewing company in Illinois, USA (154). Ethyl ether was used in an extraction procedure to determine the fat content in com. Environmental monitoring was conducted using two methods; (1) a direct reading instrument, and (2) personal and area sampling to determine the integrated average exposures over the work shift. All results for the personal and area samples were less than 1 ppm. Measurements with the direct reading instrument showed peak concentrations up to 1,000 ppm (3,080 mg/m3) around the cork gaskets and pressure release valves on the extraction apparatus. However, the vapors were released inside a plexiglass enclosure where they were properly exhausted. Personal breathing zone and area samples were taken in an analytical laboratory using solvents in the preparation o f soil and water samples (155). The values for ethyl ether were below the limit of quantitation (0.03 mg/sample). # Analytical Methods for Ethyl Ether in Air and Body Tissues # Direct Field Methods Analysis of ethyl-ether-contaminated air in the work area may be done by means of a commercially available combustible gas indicator. Direct reading detector tubes are also available (8, 92). # Laboratory Methods # An air sampling technique for diethyl ether has been proposed by NIOSH (109) . A known volume of air is drawn through a charcoal tube at a flow rate of 0.01 to 0.2 1/min. The charcoal in the tube is transferred to a small sample container and ethyl ether is desorbed with ethyl acetate. An aliquot of the desorbed sample is then injected into a gas chromatograph and the area of the resulting peak is determined and compared with standards. 
Advantages with this method is a small, portable sampling device and minimal interferences which can be eliminated by altering chromatographic conditions. A disadvantage is that the upper limit of the range of the method depends on the adsorptive capacity of the charcoal tube. In addition, the precision of the method is limited by the reproducibility of the pressure drop across the tubes. Blood and tissue levels of ethyl ether can be analyzed by infrared analysis. This method was originally devised by Stewart et al. (144) and was used, with a slight modification, by Chenoweth et al. (33) for measuring blood and tissue levels of ethyl ether and other anesthetics in dogs. This method is also applicable to air analysis (8,31). Ethyl ether in blood and urine can be analyzed by gas chromatography. The specimen is injected directly into a gas chromatograph equipped with a flame-ionization detector and a molecular sieve column. Specimens should be stored at 4°C and analyzed as soon as possible. Breath samples may be analyzed by direct injection of 1-2 ml, using as standards suitably prepared dilutions o f ethyl ether in air. Calculation is based on a standard curve of peak height versus concentration of the standards (17). Ethyl ether in air may be analyzed by passing the air through a fritted glass bubbler for reaction of the ethyl ether with acidic potassium dichromate and subsequent iodometric determination (133). A modification o f this method has also been used for determination of ethyl ether in blood (6, 117). # Biological Monitoring Tests for exposure may include expired breath for unmetabolized ethyl ether and blood for ethyl ether content (8). Blood concentrations of ethyl ether have been found to correlate with the degree of ethyl ether exposure and the extent o f intoxication. According to Baselt (17), the concentration o f ethyl ether in blood should not exceed a level o f about 20 mg/1 in asymptomatic workers. Gas chromatography can be used for determinations o f ethyl ether in blood, urine, and breath samples (17). Infrared analysis can be used for detection o f ethyl ether in breath and body fluids (8). # DISTRIBUTION AND METABOLISM # Uptake # Uptake by Inhalation After inhalation, ethyl ether is rapidly transferred from alveoli to blood. The normal alveolar membrane poses no barrier to the transfer of ethyl ether in either direction. The blood/gas distribution coefficient of ethyl ether is high-12.1. For oil/gas, the distribution coefficient is 65 (44). The more soluble an anesthetic is in blood, the more o f it must be dissolved in blood to raise its partial pressure there appreciably. Induction o f ethyl ether anesthesia is therefore slow. To achieve deep anesthesia with ethyl ether in medical practice takes 15-25 min (35) (the concentration of ethyl ether used for the induction o f anesthesia is usually 10 to 15 vol% or 308,000 to 462,000 mg/m3). # Uptake from the Gastrointestinal Tract No studies on the uptake of ethyl ether after peroral administration were found. According to Hayhursth (68), absorption of ether from the stomach and rectum is very prompt and may be used for anesthesia. No data were found on the extent of uptake of ethyl ether through undamaged skin in humans. A case report described a 14-year-old boy who applied ethyl ether to his scalp under plastic occlusion to treat a seborrheic dermatitis. The boy was found dead and elevated but nonfatal concentrations of ethyl ether were found in the various postmortem tissues subjected to toxicological analysis. 
The authors conclusion was that the boy died from intoxication of resorbed ethyl ether. The authors interpreted the sublethal ethyl ether concentrations found in various tissues as probably incorrect due to ethyl ether's high volatility (29). # Distribution The The distribution of 14C-ethyl ether in the mouse was investigated with use of low-temperature autoradiography (34). Ethyl ether was administered by inhalation for 10 min. When the animals were killed immediately after the inhalation period, the anesthetic was rather uniformly distributed throughout the body, although higher concentrations were found in the brain, kidney, liver, and brown fat. After 2 hr, most of the radioactivity had left the body, but the liver, kidneys, intestines, and nasal mucous membranes were still labelled. A quantitative analysis showed that in 15 min, the radioactivity in brown fat had reached its peak relative concentration, whereas the relative concentrations of radioactivity in liver and kidneys continued to increase until the termination of the 2-hr experimental period. At 2 hr, 3.6% of the administered dose was present in metabolized form in the liver and intestine. By that time, all liver radioactivity was nonvolatile. The distribution of ethyl ether and four other volatile organic solvents in blood was investigated in rats. The animals were exposed to 500 ppm (1540 mg/m3) of ethyl ether for 2 hr. Forty-nine percent of ethyl ether in the blood was found in the red blood cells. When a solution of ethyl ether was added to human plasma and red blood cell (RBC) samples, a large fraction of ethyl ether was recovered from ammonium-sulphate-precipitated plasma proteins and hemoglobin. A small fraction of ethyl ether was recovered from plasma water and RBC water. The authors conclude that proteins, chiefly hemoglobin, are the major carriers o f ethyl ether and other volatile organic solvents in blood (87). # Biotransformation It has been estimated that about 8-10% of absorbed ethyl ether is metabolized in the body (55) whereas the remainder is excreted unchanged through the lungs. Ethyl ether is metabolized to ethanol and acetaldehyde by an inducible hepatic microsomal enzyme system, a cytochrome P450-containing monooxygenase system (32,145). Ethanol and acetaldehyde are rapidly oxidized to acetate, and the acetate subsequently enters the 2-carbon pool o f intermediary metabolism. Van Dyke et al. (160) showed in rats that approximately 4% o f an intraperitoneallyadministered dose (0.1 ml) o f 14C-ethyl ether was recovered as exhaled 14C 0 2 during a 24-hr period, and 2% of the radioactivity was transformed into nonvolatile urinary products. Green and Cohen (55) reported that a portion of 14C-diethyl ether administered to mice by inhalation was rapidly transformed into fatty acids (palmitic, stearic, and oleic acids) and cholesterol, which were recovered from an ether extract of liver. Three other nonvolatile radioactive metabolites were tentatively identified as monoglycerides, diglycerides, and triglycerides. Two hr after administation of ethyl ether by inhalation to mice, 3.6% o f the administered dose was present in metabolized form in the liver and intestine. Thin-layer radiochromatography of extracts from the liver demonstrated four nonvolatile metabolites. The major metabolite was presumed to be a glucuronide of ether (34). The hepatic microsomal oxidation of ethyl ether is catalyzed primarily by the cytochromes P450 (32). More recent investigations have implicated specific isoenzymes. 
Known inhibitors of isoenzyme IIE1 strongly inhibited ether de-ethylation, and a monoclonal antibody to P450IIE1 had the same effect (24). A mixed-function oxidation known to be catalyzed by cytochrome P450IIE1, the N-demethylation of dimethylnitrosamine, was inhibited by ethyl ether anesthesia (150). These findings all indicate that ethyl ether is oxidatively metabolized primarily by P450IIE1 (24).

# Elimination

The concentration of ethyl ether in brain, blood, and muscle was investigated in rats during ether elimination. When the rats had breathed 300,000 mg/m3 (approximately 9 vol%) of ethyl ether for 10 min, there was an abrupt fall in the concentration of ethyl ether in the brain and blood immediately after discontinuation of the inhalation. When the inhalation exposures were discontinued after 1 hr, the fall in the ether concentration in the brain and blood was slower (42). The elimination of ethyl ether from fatty tissue occurred slowly and was practically finished after about 8 hr (43). According to the experience of anesthesiologists, termination of anesthesia with soluble anesthetics like ethyl ether is slower in obese individuals than in those with a lean body mass. This might partly be due to partitioning of the anesthetic agent into certain tissues (e.g., body fat) (25).

# Mechanism of Ether Resistance in Drosophila melanogaster

A few studies have been performed to elucidate the mechanism of ethyl ether resistance in certain strains of Drosophila melanogaster. A strain named Eth-29 was found to be resistant to ethyl ether and to other volatile anesthetic agents like chloroform and halothane. Ethyl ether resistance for lethality was determined using mortality as an endpoint when flies were exposed to very high concentrations of ethyl ether (111). Ethyl ether resistance to anesthesia uses loss of the avoidance reflex as an endpoint and is an incompletely dominant or polygenic character (49). The resistance to ethyl ether anesthesia and the resistance to ethyl ether lethality were found to be controlled by different genetic systems. In 18 strains of Drosophila melanogaster with different genetic characteristics, a wide variation in sensitivities to ethyl ether anesthesia, gamma-ray knock-down, and gamma-ray lethality was demonstrated. There was no correlation between DNA-repair capacity and ethyl ether sensitivity or gamma-ray knock-down sensitivity, whereas strains deficient in excision repair were found to be sensitive to gamma-ray lethality. These findings indicate that lethality is caused by DNA damage, whereas the targets for ethyl ether anesthesia should be different, possibly being the membrane (50).

# GENERAL TOXICOLOGY

# Mechanism of Action

The mechanism of action by which general anesthetics, including ethyl ether, produce reversible loss of consciousness is still unclear. Anesthesia can be produced by a wide variety of chemical agents, ranging from inert rare gases to steroid molecules (128). This apparent lack of specificity, together with the observation that general anesthesia can be reversed by high pressure, poses a unique pharmacological problem (48). Most theories concern interaction of anesthetics with either membrane lipids or hydrophobic regions of specific membrane-bound proteins (for reviews see 48, 63, 128). One hypothesis is that the anesthetic changes the function of an ion channel protein by modifying the conformation of the protein.
Some investigators suggest that the GABA receptor may be the ion channel protein that is affected by inhalation anesthetic agents (25,101). According to Halsey (63), the most appropriate concept for the mechanism of general anesthesia is a heterogeneous site of anesthetic action, including both lipid and protein membrane components linked with neuronal function.

# Effects of Ethyl Ether on Protein Synthesis in Animals

Light ethyl ether anesthesia for 1-2 min did not affect the rate of protein synthesis measured in the mammary gland, liver, intestinal mucosa, and muscle of lactating rats using a flooding dose of 3H-phenylalanine that was injected intravenously. The fractional rates of protein synthesis were estimated from incorporation of 3H-phenylalanine into tissue proteins (131).

# Acute Toxicity

# Human Experience

# Inhalation

Oxygen therapy may be used in cases of narcosis (7).

# Ingestion

Moeschlin (99) has estimated the lethal dose of ethyl ether after peroral intake to be about 30 ml. In ethyl ether addiction, tolerance develops and large daily quantities of ethyl ether may be ingested (16,52). A daily intake of up to 180 g has been reported (52).

# Animal Studies

# Inhalation

Differences in the induction mixture and the duration of anesthesia may lead to different results for lethal concentrations of inhaled ethyl ether in animals (124). During a continuous exposure for 97 min, the lethal concentration of inhaled ether for mice was 133,400 mg/m3, or 4.4 vol% (86). When rapid induction and a short duration of anesthesia were used, the concentration necessary to produce respiratory arrest in mice was 18 vol% (554,000 mg/m3) (103). The lethal concentrations for the rat, rabbit, and dog were 6.4, 10.6, and 7.16 to 19.25 vol%, respectively (139). The age of the animal influences susceptibility to inhaled ethyl ether. In rats, the median time to death (LT50) for neonates was 5 to 6.5 times greater than for adult rats, and the mean concentration of ethyl ether in blood at the respective LT50 values was 2.5 to 3 times greater in neonatal rats than in adult rats (132).

# Ingestion

The oral LD50 values in 2-week-old, young adult (body weight 80-160 g), and adult (body weight 300-470 g) Sprague-Dawley rats were reported to be 2.3, 2.4, and 1.7 ml/kg b.w., respectively (83). Smyth et al. (138) reported a single peroral LD50 for rats of 3.56 ml/kg. Ethyl ether is metabolized to acetaldehyde by hepatic microsomal enzymes. Long-term ethyl ether treatment of rats leads to induction of hepatic microsomal enzymes and to proliferation of smooth endoplasmic reticulum (127).

# Chronic Toxicity

# Kidneys

Albuminuria from tubular irritation has been reported to occur in about one-fourth of patients anesthetized with ethyl ether (68). Chronic exposure to ethyl ether during the manufacture of smokeless powder has been claimed to cause occasional cases of nephritis (28, 65, 68), but this effect has since been questioned. Hamilton (64) described a man who had worked continuously for 7 years and intermittently for 5 years in a smokeless powder factory, and who developed a severe chronic interstitial nephritis verified at autopsy. Ethyl ether has been used for dissolution of an obstructed Foley catheter balloon. The use of ethyl ether for this purpose has occasionally resulted in a chemical cystitis with permanent damage to the urinary bladder (51, 91, 105).
# Central Nervous System

Acute exposure of humans to high concentrations of ethyl ether vapor causes initial excitement followed by narcosis and respiratory depression. The different stages of ethyl ether anesthesia have been described in detail and are given in Table 1. The approximate blood and alveolar levels of ethyl ether in relation to the depth of anesthesia are given in Table 2. Long-term exposure to low concentrations of ethyl ether vapor in industry, for example during the manufacture of smokeless powder, has been reported to cause various symptoms from the central nervous system, such as sleepiness, dizziness, excitation, headache, and psychic disturbances (47). Surgeons and nurses exposed for long periods of time to ethyl ether in the operating theatre complained of tiredness, headache, loss of appetite, irritability, and difficulties in concentrating (168). Ethyl ether has been reported to have complex effects on learning and memory in experimental studies in mice. Ethyl ether, like strychnine, appears to be a modulator of learning and memory (96). Some investigators have described retrograde amnesia after exposure to ethyl ether, but due to differences in methodology, the amnestic effects of ethyl ether have not been found consistently (167). Ethyl ether causes cerebrovascular dilatation at twice the minimum anesthetic concentration (the minimum anesthetic concentration is the concentration at which 50% of the subjects move in response to a surgical stimulus) (172). The cerebral blood flow (CBF), cerebral metabolic rate for oxygen (CMRO2), and cerebrovascular resistance (CVR) were measured in two groups of monkeys inhaling 2 vol% (61,600 mg/m3) and 5 vol% (154,000 mg/m3) ethyl ether in air. CVR fell by 30% when the ethyl ether concentration was increased from 2 to 5%, whereas CBF and CMRO2 were unaltered. The conclusion of this study was that the effect of ethyl ether on CVR probably reflects a complex interaction of a number of factors, such as alterations in alpha-adrenergic tone associated with changes in plasma epinephrine levels, alterations in vasoactive metabolites, and alterations in beta-adrenergic receptor activity (75). The effects of ethyl ether on the morphology of the blood-brain barrier within the optic tectum were investigated in hatching chick embryos and young chickens. Ethyl ether was administered on a small compress of surgical gauze covering the chick beak until anesthesia was reached (1-3 min). Horseradish peroxidase (HRP) was used as a marker of vascular permeability. Ethyl ether anesthesia was reported to produce extravasation of HRP by opening the tight interendothelial junctions. Areas of tracer extravasation were more numerous in the tecta of hatching embryos than in young chickens (108). Richards and White (122) investigated the effect of volatile anesthetics (ethyl ether, halothane, methoxyflurane, and trichloroethylene) on synaptic transmission in the dentate gyrus. The experiments were performed in vitro using a preparation of the dentate gyrus of the hippocampus in guinea pigs. Ethyl ether was applied to slices of the dentate gyrus in the gas phase by mixing ethyl ether with the O2/CO2 gas mixture that superfused the upper surface of the slice. All four anesthetics depressed synaptic transmission in the dentate gyrus. The mechanism behind this effect was interpreted to be either a reduction of transmitter release, or a decrease in the sensitivity of the postsynaptic membrane to released transmitter, or both effects together.
EEG monitoring in animals during anesthesia with ethyl ether has shown that ethyl ether reduces neuronal excitability (93,147). The depressive effect of increasing concentrations of ethyl ether on the central nervous system is also evident from Table 1.

# Peripheral Nervous System

Larrabee and Pasternak (90) studied the concentrations of various anesthetics required to block synaptic excitation of sympathetic ganglion cells and compared these to the concentrations necessary to block conduction along sympathetic nerve fibers. The experiments were performed on in vitro preparations from cats, rats, and rabbits. Ethyl ether depressed synaptic transmission through a sympathetic ganglion at a concentration similar to that assumed to exist in the blood during surgical anesthesia. Conduction along autonomic fibers of different types (A, B, and C) was blocked by ethyl ether, but synaptic transmission was depressed more readily than conduction along any type of axon. Ethyl ether and halothane were found to affect the kinetics of sodium and potassium currents in vitro in the crayfish giant axon. Both anesthetics caused a reversible, dose-dependent speeding up of sodium current inactivation at all membrane potentials. The activation of potassium currents was faster with ethyl ether present, but there was no change in the voltage dependence of steady-state potassium currents. A theory was proposed that the effects on sodium and potassium channel gating processes were due to either an effect on the state of the lipids surrounding the channels or a direct effect on the protein part of the channels (18). A bath concentration of 100-300 mM (7,412-22,236 mg/l) ethyl ether blocked the conduction of single action potentials in the bifurcating axon of the lobster deep extensor muscles (58). In frog sciatic nerve, 300 mM (22,236 mg/l) ethyl ether was needed to depress single-axon sodium currents (81). Significant depression of mammalian B and C fibers was produced by 77 mM (5,707 mg/l) ethyl ether (90).

# Muscle

Ethyl ether causes muscle relaxation by its depressant effect on the central nervous system (CNS) and by affecting neuromuscular transmission and the muscle itself. The mechanism of the neuromuscular blocking action of ethyl ether seems to be similar to, but not identical with, that of d-tubocurarine (35,77). When these two drugs are used in the same patient, an additive effect results from their synergism (35). The neuromuscular block produced by ethyl ether was not effectively antagonized by edrophonium or succinylcholine (77), whereas neostigmine has been reported to reverse the effect of ethyl ether on the motor end-plate (57). The reduction of muscle contractility caused by ethyl ether might be due to a pharmacological effect at the level of the sarcoplasmic reticulum (82, 129).

# Gastrointestinal Tract

Salivary and gastric secretions are increased during light ethyl ether anesthesia but are decreased during deep anesthesia. Bowel movement is decreased due to stimulation of dilator fibers and to depression of plain muscle (162). Nausea and vomiting are common after anesthesia with ethyl ether. Ethyl ether stimulates the vomiting center in the medulla, and ethyl ether absorbed in saliva passes to the stomach, where it irritates the mucosa (35). The incidence of postoperative nausea varies with the length of the operation and the depth of anesthesia (162).
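The bath concentrations in the peripheral-nerve studies above are given both as molarity and as mass concentration; the two are related through the molecular weight of ethyl ether (74.12 g/mol):

$$
100~\mathrm{mM} \times 74.12~\mathrm{mg/mmol} = 7{,}412~\mathrm{mg/l}, \qquad 300~\mathrm{mM} \times 74.12~\mathrm{mg/mmol} = 22{,}236~\mathrm{mg/l},
$$

which reproduces the figures quoted for the lobster and frog preparations.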
# Circulatory System

Ethyl ether vapor depresses myocardial activity if added to the circulation in a heart-lung preparation (119). Ethyl ether anesthesia causes increased sympathetic nervous activity in the dog and man (98), and the plasma levels of noradrenaline increase with increasing depth of ethyl ether anesthesia (21). Skovsted and Price (134) postulated that the increase in sympathetic preganglionic activity caused by ethyl ether anesthesia in cats was due to a central action on the medullary vasomotor center or on spinal vasomotor neurons. The direct myocardial depressant action of ethyl ether seen in the isolated heart preparation is antagonized by the positive inotropic effect of adrenaline and noradrenaline released during ethyl ether anesthesia (26). Ethyl ether seems to block parasympathetic activity in normal man (118). Ethyl ether anesthesia produces remarkably small alterations in blood pressure and pulse rate and rarely leads to cardiac arrhythmias (35). In a study in rats, the effects of ethyl ether anesthesia on whole-body hemodynamics and organ blood flow were measured using microspheres. Anesthesia was induced using a bell jar containing a gauze pad moistened with ethyl ether, and the animals were maintained between stage II and stage III of anesthesia. Ethyl ether anesthesia initially decreased arterial pressure and total peripheral resistance, whereas the cardiac index was increased. Later, the cardiac index returned to normal and there was an increase in the total peripheral resistance index. The blood flow to the spleen, stomach, intestine, kidneys, and colon decreased during anesthesia, whereas the blood flow to the brain and heart increased (141).

# Hematological System

Anesthetic agents may have both direct and hormone-mediated effects on the leukocytes. After ethyl ether anesthesia in man, a postoperative leukocytosis has been reported, which is related to the sympathomimetic effects of ethyl ether (164). Blood samples from workers chronically exposed to ethyl ether fumes in the manufacture of smokeless powder showed polycythemia and an increased number of leukocytes in some individuals (65). Plasma aldosterone levels and plasma renin activity increase during ethyl ether anesthesia (15,115). The plasma antidiuretic hormone levels also increase significantly after the induction of anesthesia with ethyl ether (97). Ethyl ether anesthesia caused a significant decrease in serum calcium levels in 30 patients undergoing various routine surgical operations. Serum calcium returned to near-normal levels after 24 hr. The mechanism behind this effect is not known (62).

# Laboratory Animals

Ethyl ether anesthesia, administered by placing rats in an ether-saturated jar and then maintained with a nose cone for 3 min, caused a significant increase in plasma adrenocorticotropic hormone (ACTH) levels. Ethyl ether failed to raise the plasma ACTH levels of rats in which an antero-lateral hypothalamic cut and adrenalectomy had been performed 7 to 8 days previously, and plasma ACTH was also unchanged in rats exposed to ethyl ether 2 hr after an antero-lateral cut. The results are interpreted as evidence that intact neural pathways entering the medial basal hypothalamus from the antero-lateral direction are necessary for the ACTH-releasing action of ethyl ether stress (78). Plasma corticotropin-releasing hormone (CRH), arginine vasopressin (AVP), oxytocin (OXY), and ACTH levels rose to approximately twice the level of control rats 2 min after the onset of a 1-min exposure to ethyl ether.
Plasma CRH rose further 5 min after the onset of ethyl ether stress, while plasma AVP and OXY returned to the baseline level at 5 min (67). Ethyl ether anesthesia did not affect the mean concentrations or dissociation constants of rat uterine estrogen or progesterone receptors (173). No differences in plasma thyroxine levels were found between control rats and rats exposed to ethyl ether inhalation for 2 min (89). Ethyl ether anesthesia in rats increased the concentration of plasma beta-endorphin-like immunoreactivity, probably of pituitary origin. This increase was not associated with significant changes in pituitary or brainstem beta-endorphin content (121).

# IMMUNE SYSTEM

After treatment of human sera with ethyl ether, there were no significant changes in immunoglobulin levels, whereas the complement activity was lost (142).

# GENOTOXICITY

Several commonly used volatile anesthetics were screened for mutagenicity in the Salmonella/rat liver microsomal assay system. Ethyl ether was reported to be not mutagenic (165). When the genotoxic activity and potency of 135 compounds were investigated in the Ames reversion test and in a bacterial DNA-repair test, there was no evidence of genotoxic activity for ethyl ether (38).

For ethyl ether, complaints of nasal irritation started at 200 ppm (616 mg/m3), and 300 ppm (924 mg/m3) was termed objectionable as a working atmosphere. According to Cook (36), industrial exposure at 500 to 1,000 ppm (1,540 to 3,080 mg/m3) did not cause any demonstrable injury to health, but a limit of 500 ppm seemed justifiable to avoid irritation and complaint. Amor (5) also considered a concentration of ethyl ether in the atmosphere greater than 500 ppm (1,540 mg/m3) as indicative of unsatisfactory conditions. He listed 2,000 ppm (6,160 mg/m3) as the concentration which may lead to symptoms of illness (not specified) if exposure continues for more than a short time. Ethyl ether concentrations of 2,000 to 3,000 ppm (6,160 to 9,240 mg/m3) may occur in the air during the manufacture of smokeless powder and have been reported to result in occasional slight intoxication (69). The inhalation of ethyl ether at a concentration of 2,000 ppm (6,160 mg/m3), if continued to equilibrium, has been calculated to result in the absorption of some 6.25 g of ethyl ether and a blood concentration of 90 mg/l. According to Henderson and Haggard (69), inhalation of ethyl ether at these concentrations would probably induce dizziness in some individuals, with an increased risk for industrial accidents. Concentrations above 3,000 ppm (9,240 mg/m3) and as high as 7,000 ppm (21,560 mg/m3) have been breathed by some workers for variable periods of time. It has been claimed that these concentrations do not cause any physiological or clinical signs of injury, or any increase in the incidence of industrial accidents (7). According to Amor (5), inhalation of ethyl ether at a concentration of 8,000 ppm (24,640 mg/m3) for 1 hr will give rise to severe toxic effects (not specified). A concentration of 35,000 ppm (107,800 mg/m3) of ethyl ether usually produces unconsciousness in 30 to 40 min (69). Postmortem blood ethyl ether concentrations of 2,880-3,750 mg/l were reported in two surgical patients who died within 2.5 hr after cessation of ethyl ether administration (30).
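The equilibrium blood concentration cited above can be roughly cross-checked against the blood/gas distribution coefficient of 12.1 given earlier (a back-of-the-envelope estimate for orientation, not a calculation from the original source):

$$
6{,}160~\mathrm{mg/m^3} \times 10^{-3}~\mathrm{m^3/l} \times 12.1 \approx 75~\mathrm{mg/l},
$$

which is of the same order as the 90 mg/l calculated by Henderson and Haggard.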
# Ingestion

Ethyl ether is irritating to the mucous membranes, and humans therefore usually refrain from drinking it. The "ether habit" by ingestion is well known, however. The symptoms are said to be similar to those of chronic alcoholism, but they occur earlier (28). The lethal dose of ethyl ether after peroral intake has been estimated at about 30 ml (21).

# Observations in Animals

# Inhalation

Effects of a 35-day continuous exposure to ethyl ether in concentrations of 0.1 or 1.0 vol% (3,080 or 30,800 mg/m3) were investigated in young mice, rats, and guinea pigs which were in a phase of rapid body growth. Ethyl ether had a negligible effect on rat or mouse weight gain but produced a statistically significant weight loss in guinea pigs at the higher concentration. Livers from mice and guinea pigs treated with 1.0 vol% (30,800 mg/m3) ethyl ether showed only slight increases in lesions compared to controls, whereas no increase in lesions was seen in rats. No consistent injury to any organ other than the liver was observed. The higher dose of ethyl ether was reported to be lethal to mice and guinea pigs, but not to rats. A drawback of this study is that the authors were unable to explain the mechanism behind ethyl ether lethality. The mortality was not related to changes in blood or any tissue examined, including the bone marrow. The animals showed enlargement of the liver, but how this was related to ethyl ether lethality was unclear (143). In dogs, maintenance of ethyl ether anesthesia was associated with a blood concentration of 1,700 to 1,900 mg/l (average for 20 dogs, 1,870 mg/l). The experiments lasted from 20 min to 2 hr. The knee jerk was abolished at a blood ethyl ether concentration of 1,430 mg/l and the lid reflex at 1,500 mg/l. The concentrations of the ethyl-ether-air mixture required for the maintenance of anesthesia, and the lethal concentrations of inhaled ethyl ether, vary between species and also between different investigators for the same species. This variation can partly be explained by differences in induction mixtures and the duration of anesthesia (124).
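For comparison with the inhalation data, the oral LD50 values quoted earlier (2.3, 2.4, and 1.7 ml/kg in rats) can be expressed as mass doses, assuming a density for ethyl ether of about 0.713 g/ml at 20°C (a handbook value, not given in the source):

$$
2.3~\mathrm{ml/kg} \times 0.713~\mathrm{g/ml} \approx 1.6~\mathrm{g/kg}, \qquad 1.7~\mathrm{ml/kg} \times 0.713~\mathrm{g/ml} \approx 1.2~\mathrm{g/kg}.
$$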
Ceftriaxone 1 g, IM or IV, every 24 hours, or Ceftizoxime 1 g, IV, every 8 hours, or Cefotaxime 1 g, IV, every 8 hours. Patients who are allergic to β-lactam drugs should be treated with spectinomycin 2 g IM every 12 hours. *Erythromycin stearate 500 mg or erythromycin ethylsuccinate 800 mg or equivalent may be substituted for erythromycin base.

# Introduction

# User's Guide

STD and their therapies have been categorized by etiologic agent, if possible. Some syndromes overlap; for example, urethritis may be caused by Neisseria gonorrhoeae or Chlamydia trachomatis. Table 1 correlates symptoms and syndromes with etiologic organisms and directs the user to the appropriate section in this document.

# Clinician Guidelines and Public Health Considerations

Control of STD is based on four major concepts: 1) education of persons at risk on the modes of disease transmission and the means for reducing the risk of transmission; 2) detection of infection in asymptomatic persons and in persons who are symptomatic but unlikely to seek diagnostic and treatment services; 3) effective diagnosis and treatment of persons who are infected; and 4) evaluation, treatment, and counseling of sex partners of persons with an STD. Although this document deals largely with clinical aspects of STD control, the prevention of STD is based primarily on changing the sexual behaviors that put patients at risk. In special situations, such as prenatal visits and legally induced abortions, screening for STD may have greater impact in preventing complications of STD. For specific recommendations in cases of sexual assault or child abuse, see "Sexual Assault and STD." Specific guidelines for screening in each situation are beyond the scope of this document. However, whenever possible, the following laboratory screening tests for STD should be available:

*Persons at higher risk for STD include sexually active persons under 25 years of age, those who have had multiple sexual partners within the previous 6 months, and those with a history of STD. In addition, prostitutes and persons having sexual contact with prostitutes, users of illicit drugs, and inmates of detention centers have increased rates of STD and should be evaluated when seeking medical care.

Diagnosis of an STD should be considered a "sentinel event" reflecting unprotected sexual activity. Patients with one STD are at high risk for having others. Therefore, patients should be closely evaluated for other STD infections, including syphilis and human immunodeficiency virus (HIV) serology (if not performed within the previous 3 months), gonorrhea and chlamydial testing from appropriate anatomic sites, and physical examination. Women wishing to prevent unplanned pregnancy and who are not using contraception should be counseled about contraception services; ideally, contraceptives and pregnancy testing should be available at the same facility providing STD services. Annual Pap smear evaluation should also be available.

# Clinical Considerations

# Primary Prevention

Clinics and practitioners who treat patients with STD should have resources available for educating patients about risk assessment and behavioral choices. Behavioral assessment is an integral part of the STD history, and patients should be counseled on methods to lower their risk of acquiring STD, including abstinence, careful selection of partners, use of condoms and spermicides, and periodic examination.
Specific recommendations for behavioral assessment and counseling are beyond the scope of these guidelines.

# Condoms and Spermicides

Condoms and spermicides should be available in any facility providing clinical STD services. Instruction in proper use should also be provided. Although condoms do not provide absolute protection from any infection, if properly used, they reduce the risk of infection. Recommendations for the proper use of condoms (Table 2) have been made by CDC and other public health organizations.

# Special Populations

# Pregnant Women

Intrauterine or perinatally transmitted STD can have fatal or severely debilitating effects on the fetus. Routine prenatal care should include an assessment for STD, which in most cases includes serologic screening for syphilis and hepatitis B, testing for chlamydia, and gonorrhea culture (see specific sections for management of clinical disease). Prenatal screening for HIV is indicated for all patients with risk factors for HIV or with a high-risk sexual partner; some authorities recommend HIV screening of all pregnant women. Practical management issues are discussed in the sections pertaining to specific diseases. Pregnant women and their sexual partners should be questioned about STD and counseled about possible neonatal infections. Pregnant women with primary genital herpes infection, hepatitis B, primary cytomegalovirus (CMV) infection, or Group B streptococcal infection may need to be referred to an expert for management. In the absence of lesions or other evidence of active disease, tests for herpes simplex virus (HSV) and cesarean delivery are not routinely indicated for women with a history of recurrent genital herpes infection during pregnancy. Routine HPV screening is not recommended. For a fuller discussion of these issues, as well as for infections not transmitted sexually, refer to "Guidelines for Perinatal Care" (second edition, 1988), jointly written and published by the American Academy of Pediatrics and the American College of Obstetricians and Gynecologists.

# Children

Management of STD in children requires close cooperation between the clinician, laboratory, and child-protection authorities. Investigations, when indicated, should be initiated promptly. Some diseases, such as gonorrhea, syphilis, and chlamydia, if acquired after the neonatal period, are almost 100% indicative of sexual contact; in other diseases, such as HPV infection and vaginitis, the association with sexual contact is not so clear (see "Sexual Assault and STD").

# TABLE 2. Recommendations for use of condoms

1. Latex condoms should be used because they may offer greater protection against HIV and other viral STD than natural membrane condoms.
2. Condoms should be stored in a cool, dry place out of direct sunlight.
3. Condoms in damaged packages or those that show obvious signs of age (e.g., those that are brittle, sticky, or discolored) should not be used. They cannot be relied upon to prevent infection or pregnancy.
4. Condoms should be handled with care to prevent puncture.
5. The condom should be put on before any genital contact to prevent exposure to fluids that may contain infectious agents. Hold the tip of the condom and unroll it onto the erect penis, leaving space at the tip to collect semen, yet ensuring that no air is trapped in the tip of the condom.
6. Only water-based lubricants should be used.
Petroleum- or oil-based lubricants (such as petroleum jelly, cooking oils, shortening, and lotions) should not be used because they weaken the latex and may cause breakage.
7. Use of condoms containing spermicides may provide some additional protection against STD. However, vaginal use of spermicides along with condoms is likely to provide still greater protection.
8. If a condom breaks, it should be replaced immediately. If ejaculation occurs after condom breakage, the immediate use of spermicide has been suggested. However, the protective value of postejaculation application of spermicide in reducing the risk of STD transmission is unknown.
9. After ejaculation, care should be taken so that the condom does not slip off the penis before withdrawal; the base of the condom should be held throughout withdrawal. The penis should be withdrawn while still erect.
10. Condoms should never be reused.

Adapted from MMWR 1988;37:133-7.

# Patients with Multiple Episodes of STD ("Repeaters")

These patients have a disproportionately high rate of STD and should be targeted for intensive counseling on methods to reduce risk. More research is needed into methods of behavior modification for these patients, including the role of outreach and support services. In many cases, periodic call-back for STD evaluation may be indicated.

# STD Core Groups

Populations of core STD transmitters account for most STD morbidity. Although substantial regional variation occurs, in most urban areas the core groups consist largely of ethnic minority populations with low levels of education and socioeconomic attainment. In many such environments, illicit drug use and prostitution are common. Core groups are often geographically limited, permitting definition of core geographic areas as well as populations. STD programs should evaluate the occurrence of STD in their jurisdictions to define core populations and core areas for targeted education, screening, clinical outreach, call-back reexamination programs, and other control measures.

# Illicit Drug Users

STD appear to be increasingly linked to illicit drug use. Illicit drug users may be at higher risk for sexual behaviors that put them at risk for STD. In addition, illicit drug users account for an increasing proportion of HIV infections. Further research is needed into the behaviors associated with drug use and STD, particularly to facilitate behavioral and clinical interventions targeted at drug users. Outreach programs in the community and in cooperation with drug treatment programs should be considered.

# Prison and Detention Populations

Residents of short-term correctional and detention facilities often have high prevalence rates of STD. Screening and treatment for infections that are highly prevalent in the community should be provided for all inmates. In many situations, screening and treatment in the prison population is central to effective STD control. In addition, for many patients, correctional health services may be the only opportunity for interaction with health-care providers.

# Patients with HIV Infection and STD

The management of patients with STD who are coinfected with HIV presents complex clinical and behavioral issues. Because of its effect on the immune system, HIV may alter the natural histories of many STD, as well as the effect of antimicrobial therapy. Close clinical follow-up is imperative. STD infection in patients with and without HIV is a sentinel event, often indicating continued unprotected sexual activity.
Further patient counseling is indicated in these situations.

# STD Reporting and Confidentiality

Disease surveillance activities, which include the accurate identification and timely reporting of STD, form an integral part of successful disease control. Reporting assists local health authorities in identifying sexual contacts who may be infected (see next section). Reporting is also important for assessing morbidity trends. Reporting may be provider- and/or laboratory-based. Cases should be reported in accordance with local statutory requirements and in as timely a manner as possible. Clinicians who are unsure of local reporting requirements are encouraged to seek advice through their local health departments or state STD programs. STD reports are held in strictest confidence and in many jurisdictions are protected by statute from subpoena. Before any follow-up of a positive STD test is conducted by STD program representatives, these personnel consult with the provider to verify the diagnosis and treatment. Most local health departments offer STD partner notification and follow-up services for selected STD.

# Management of Sex Partners and Partner Notification

Clinical guidelines for management of sex partners are included in each disease section. Breaking the chain of transmission is crucial to STD control. Further transmission and reinfection are prevented by referral of sex partners for diagnosis and treatment. Patients should ensure that their sex partners, including those without symptoms, are referred for evaluation. Partners of patients with STD should be examined; treatment should not be provided for partners who are not examined, except in rare instances, such as when the partner is at a site remote from medical care. Appropriate referral for sex partners should be provided if care will not or cannot be provided by the initial health-care provider. Disease Intervention Specialists (DIS), i.e., public health professionals trained in STD management, can assist patients and practitioners in this process through interviewing and confidential field outreach procedures.

# AIDS and HIV Infection in the General STD Setting

The acquired immunodeficiency syndrome (AIDS) is a late manifestation of infection with human immunodeficiency virus (HIV). Most people infected with HIV remain asymptomatic for long periods. HIV infection is most often diagnosed by using HIV antibody tests. Detectable antibody usually develops within 3 months after infection. A confirmed positive antibody test means that a person is infected with HIV and is capable of transmitting the virus to others. Although a negative antibody test usually means a person is not infected, antibody tests cannot rule out infection from a recent exposure. If antibody testing is related to a specific exposure, the test should be repeated 3 and 6 months after the exposure. Antibody testing for HIV begins with a screening test, usually an enzyme-linked immunosorbent assay (ELISA). If the screening test is positive, it is followed by a more specific confirmatory test, most commonly the Western blot assay. New antibody tests are being developed and licensed that are either easier to perform or more accurate. Positive results from screening tests must be confirmed before being considered definitive.
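The screening-and-confirmation sequence described above can be summarized as a simple decision flow. The sketch below is purely illustrative; the function name, arguments, and return strings are invented for this example and are not part of any laboratory standard:

```python
def interpret_hiv_serology(elisa_reactive, western_blot_positive=None,
                           months_since_exposure=None):
    """Schematic restatement of the two-step testing logic in the text.

    A reactive screening ELISA is never definitive on its own; it must be
    confirmed by a more specific test (most commonly the Western blot).
    """
    if not elisa_reactive:
        # A negative test cannot rule out a recent exposure; the text
        # advises repeat testing 3 and 6 months after a specific exposure.
        if months_since_exposure is not None and months_since_exposure < 6:
            return "negative; repeat at 3 and 6 months after the exposure"
        return "negative"
    if western_blot_positive is None:
        return "screening test reactive; confirmatory test required"
    if western_blot_positive:
        return "confirmed positive"
    return "screening reactive but not confirmed; further evaluation needed"

# Example: a reactive screening test with confirmation still pending.
print(interpret_hiv_serology(True))
# -> screening test reactive; confirmatory test required
```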
The time between infection with HIV and development of AIDS ranges from a few months to 10 years or more. Most people who are infected with HIV will eventually have some symptoms related to that infection. In one cohort study, AIDS developed in 48% of a group of gay men within 10 years after infection, but additional AIDS cases are expected among those who have remained AIDS-free for >10 years. Therapy with zidovudine (ZDV, previously known as azidothymidine) has been shown to benefit persons in the later stages of disease (AIDS or AIDS-related conditions along with a CD4 lymphocyte count less than 200/mm3). Serious side effects, usually anemias and cytopenias, have been common during therapy with ZDV; therefore, patients taking ZDV require careful follow-up in consultation with physicians who are familiar with ZDV therapy. Clinical trials are currently evaluating ZDV therapy for persons with asymptomatic HIV infection to see if it decreases the rate of progression to AIDS. Other trials are evaluating new drugs or combinations of drugs for persons with different stages of HIV infection, including asymptomatic infections. The complete therapeutic management of HIV infection is beyond the scope of this document.

# Preventing the Sexual Transmission of HIV

The only way to prevent AIDS is to prevent the initial infection with HIV. Prevention of sexual transmission of HIV can be ensured in only two situations: 1) sexual abstinence or 2) choosing only sex partners who are not infected with HIV. Many HIV-infected persons are asymptomatic and are unaware that they are infected. Therefore, without an antibody test, infected persons are difficult to identify. AIDS case surveillance and HIV seroprevalence studies allow estimation of risk for persons in different areas; however, these population estimates may have a limited impact on an individual's sexual decisions. Although knowledge of antibody status is desirable before a sexual relationship is initiated, this information may not be available. Therefore, individuals should be counseled that when they initiate a sexual relationship they should use sexual practices that reduce the risk of HIV transmission. Sexual practices may influence the likelihood of HIV transmission during sexual contact with an infected partner. Women who practice anal intercourse with an infected partner are more likely to acquire infection than women who have only vaginal intercourse. The relative risk of transmission by oral-genital contact is probably somewhat lower than the risk of transmission by vaginal intercourse. Other STD or local trauma that breaks down the mucosal barrier to infection would be expected to increase the risk of HIV transmission. Condoms supplement natural barriers to infection and therefore reduce the risk of HIV transmission (see "Clinician Guidelines and Public Health Considerations").

# When to Test for HIV

Voluntary, confidential HIV antibody testing should be done routinely when the results may contribute either to the medical management of the person being tested or to the prevention of further transmission. Testing is important for persons with symptoms of HIV-related illnesses or with diseases such as syphilis, chancroid, herpes, or tuberculosis, for which a positive test result might affect the recommended diagnostic evaluation, treatment, or follow-up. HIV counseling and testing for persons with STD is a particularly important part of an HIV prevention program, because patients who have acquired an STD have demonstrated their potential risk for acquiring HIV. Because no vaccine or cure is available, HIV prevention requires changes in behavior by people at risk for transmitting or acquiring infection.
Therefore, patient counseling must be an integral part of any HIV testing program in an STD clinic. Counseling should be done both before and after HIV testing.

# Pretest Counseling

Pretest counseling should include assessment of the patient's risk for HIV infection and measures to reduce that risk. Intravenous (IV) drug users should be advised to stop using drugs. If they do not stop, they should not share needles. If needle-sharing continues, injection equipment should be cleaned with bleach between uses. Sexually active persons who have multiple partners should be advised to consider sexual abstinence or to enter a mutually monogamous relationship with a partner who has also been tested for HIV. Condoms should be used consistently if either or both partners are infected or have other partners. Similarly, heterosexuals with STD other than HIV should be encouraged to bring their partners in for HIV testing and to use condoms if they are not in a mutually monogamous relationship with an uninfected partner.

# Posttest Counseling and Evaluation

Persons who have negative HIV antibody tests should be told their test result by a person who understands the need to reduce unsafe sexual behaviors and can explain ways to modify sexual practices to reduce risks. Antibody tests cannot detect infections that occurred in the several weeks before the test (see above). Persons who have negative tests should understand that the negative test result does not signify protection from acquiring infection. They should be advised about the ways the virus is transmitted and how to avoid infection. Their partners' risks for HIV infection should be discussed, and partners at risk should be encouraged to be tested for HIV. Persons who test positive for HIV antibody should be told their test result by a person who is able to discuss the medical, psychological, and social implications of HIV infection. Routes of HIV transmission and methods to prevent further transmission should be emphasized. Risks to past sexual and needle-sharing partners of HIV antibody-positive patients should be discussed, and patients should be instructed in how to notify their partners and to refer them for counseling and testing. If patients are unable to notify their partners, or if they are not sure that their partners will seek counseling, physicians or health department personnel should assist, using confidential procedures, to ensure that the partners are notified. Infected women should be advised of the risk of perinatal transmission (see below), and methods of contraception should be discussed and provided. Additional follow-up, counseling, and support systems should be available to facilitate psychosocial adjustment and changes in behavior among HIV antibody-positive persons.

# Perinatal Infections

Infants born to women with HIV infection may also be infected with HIV; this risk is estimated to be 30%-40%. The mother in such a case may be asymptomatic, and her HIV infection may not be recognized at delivery. Infected neonates are usually asymptomatic, and currently HIV infection cannot be readily or easily diagnosed at birth. (A positive antibody test may reflect passively transferred maternal antibodies, and the infant must be observed over time to determine if neonatal infection is present.) Infection may not become evident until the child is 12-18 months of age. All pregnant women with a history of STD should be offered HIV counseling and testing.
Recognition of HIV infection in pregnancy permits health-care workers to inform patients about the risks of transmission to the infant and the risks of continuing pregnancy.

# Asymptomatic HIV Infections

As more HIV-infected persons are identified, primary health-care providers will need to assume increased responsibility for these patients. Most internists, pediatricians, family practitioners, and gynecologists should be qualified to provide initial evaluation of HIV-infected individuals and follow-up of those with uncomplicated HIV infection. These services should be available in all public health clinics. Health-care professionals who identify HIV-positive patients should provide posttest counseling and medical evaluation (either on site or by referral), including a physical examination, complete blood count, lymphocyte subset analysis, syphilis serology, and a purified protein derivative (PPD) skin test for tuberculosis. Psychosocial counseling resources should also be available.

# Chancroid

Because of the recent spread of H. ducreyi, chancroid has become an important STD in the United States. Its importance is enhanced by the knowledge that, outside the United States, chancroid has been associated with increased infection rates for HIV. Chancroid must be considered in the differential diagnosis of any patient with a painful genital ulcer. Painful inguinal lymphadenopathy is present in about half of all chancroid cases.

# Recommended Regimen

Erythromycin base 500 mg orally 4 times a day for 7 days, or Ceftriaxone 250 mg intramuscularly (IM) in a single dose.

# Alternative Regimen

Trimethoprim/sulfamethoxazole 160/800 mg (one double-strength tablet) orally 2 times a day for 7 days.

Comment: The susceptibility of H. ducreyi to this combination of antimicrobial agents varies throughout the world; clinical efficacy should be monitored, preferably in conjunction with monitoring of susceptibility patterns.

or Amoxicillin 500 mg plus clavulanic acid 125 mg orally 3 times a day for 7 days.

Comment: Not evaluated in the United States.

or Ciprofloxacin 500 mg orally 2 times a day for 3 days.

Comment: Although a regimen of 500 mg orally once was effective outside the United States, based on pharmacokinetics and susceptibility data, 2- or 3-day regimens of the same dose may be prudent, especially for patients coinfected with HIV. Quinolones, such as ciprofloxacin, are contraindicated during pregnancy and in children 16 years of age or younger.

# Management of Sex Partners

Sex partners within the 10 days preceding onset of symptoms in an infected patient, whether symptomatic or not, should be examined and treated with a recommended regimen.

# Follow-Up

If treatment is successful, ulcers due to chancroid improve symptomatically within 3 days and objectively (evidenced by resolution of lesions and clearing of exudate) within 7 days after institution of therapy. Clinical resolution of lymphadenopathy is slower than that of ulcers and may require needle aspiration (through healthy, adjacent skin), even during successful therapy. Patients should be observed until the ulcer is completely healed. Because of the epidemiologic association with syphilis, serological testing for syphilis should be considered within 3 months after therapy.

# Treatment Failures

If no clinical improvement is evident by 7 days after therapy, the clinician should consider whether 1) antimicrobials were taken as prescribed,
2) the H. ducreyi causing infection is resistant to the prescribed antimicrobial, 3) the diagnosis is correct, 4) coinfection with another STD agent exists, or 5) the patient is also infected with HIV. Preliminary information indicates that patients coinfected with HIV do not respond to antimicrobial therapy as well as patients not infected with HIV, especially when single-dose treatment is used. Antimicrobial susceptibility testing should be performed on H. ducreyi isolated from patients who do not respond to recommended therapies.

# Syphilis

# General Principles

# Serologic Tests

Neurosyphilis cannot be accurately diagnosed from any single test. Cerebrospinal fluid (CSF) tests should include cell count, protein, and VDRL (not RPR). The CSF leukocyte count is usually elevated (>5 WBC/mm3) when neurosyphilis is present and is a sensitive measure of the efficacy of therapy. VDRL is the standard test for CSF; when positive, it is considered diagnostic of neurosyphilis. However, it may be negative when neurosyphilis is present and cannot be used to rule out neurosyphilis. Some experts also order an FTA-ABS; this may be less specific (more false positives) but is highly sensitive. The positive predictive value of the CSF FTA-ABS is lower, but when negative, this test provides evidence against neurosyphilis.

# Penicillin Therapy

Penicillin is the preferred drug for treating patients with syphilis. Penicillin is the only proven therapy that has been widely used for patients with neurosyphilis, congenital syphilis, or syphilis during pregnancy. For patients with penicillin allergy, skin testing (with desensitization, if necessary) is optimal. Sample guidelines for skin testing and desensitization are included (see Appendix to this section). However, the minor determinant mixture for penicillin is not currently available commercially.

# Jarisch-Herxheimer Reaction

The Jarisch-Herxheimer reaction is an acute febrile reaction, often accompanied by headache, myalgia, and other symptoms, that may occur after any therapy for syphilis, and patients should be so warned. Jarisch-Herxheimer reactions are more common in patients with early syphilis. Antipyretics may be recommended, but no proven methods exist for preventing this reaction. Pregnant patients, in particular, should be warned that early labor may occur.

# Persons Exposed to Syphilis (Epidemiologic Treatment)

Persons sexually exposed to a patient with early syphilis should be evaluated clinically and serologically. If the exposure occurred within the previous 90 days, the person may be infected yet seronegative and therefore should be presumptively treated. (It may be advisable to presumptively treat persons exposed more than 90 days previously if serologic test results are not immediately available and follow-up is uncertain.) Patients who have other STD may also have been exposed to syphilis and should have a serologic test for syphilis. The dual therapy regimen currently recommended for gonorrhea (ceftriaxone and doxycycline) is probably effective against incubating syphilis. If a different, nonpenicillin antibiotic regimen is used to treat gonorrhea, the patient should have a repeat serologic test for syphilis in 3 months.
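The CSF-test logic described under Serologic Tests above is asymmetric: a positive CSF-VDRL rules neurosyphilis in, but a negative result does not rule it out. A schematic sketch, with function name, arguments, and return strings invented for illustration only:

```python
def interpret_csf_for_neurosyphilis(csf_vdrl_reactive, csf_wbc_per_mm3,
                                    csf_fta_abs_reactive):
    """Schematic sketch of the asymmetric CSF-test logic described above."""
    if csf_vdrl_reactive:
        # A positive CSF-VDRL is considered diagnostic of neurosyphilis.
        return "diagnostic of neurosyphilis"
    if not csf_fta_abs_reactive:
        # A negative CSF FTA-ABS provides evidence against neurosyphilis.
        return "evidence against neurosyphilis"
    if csf_wbc_per_mm3 > 5:
        # Pleocytosis (>5 WBC/mm3) is usual when neurosyphilis is present,
        # but a negative CSF-VDRL cannot rule the diagnosis out.
        return "not excluded; CSF pleocytosis present"
    return "not excluded by a negative CSF-VDRL alone"
```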
# Early Syphilis

# Primary and Secondary Syphilis and Early Latent Syphilis of Less than 1 Year's Duration

# Recommended Regimen

- If compliance and follow-up are ensured, erythromycin, 500 mg orally 4 times a day for 2 weeks, can be used.
- Patients who are allergic to penicillin may also be allergic to cephalosporins; therefore, caution must be used in treating a penicillin-allergic patient with a cephalosporin. However, preliminary data suggest that ceftriaxone, 250 mg IM once a day for 10 days, is curative, but careful follow-up is mandatory.

# Follow-Up

Treatment failures can occur with any regimen. Patients should be reexamined clinically and serologically at 3 months and 6 months. If nontreponemal antibody titers have not declined fourfold by 3 months with primary or secondary syphilis, or by 6 months in early latent syphilis, or if signs or symptoms persist and reinfection has been ruled out, patients should have a CSF examination and be retreated appropriately. HIV-infected patients should have more frequent follow-up, including serologic testing at 1, 2, 3, 6, 9, and 12 months. In addition to the above guidelines for 3 and 6 months, any patient with a fourfold increase in titer at any time should have a CSF examination and be treated with the neurosyphilis regimen unless reinfection can be established as the cause of the increased titer.

# Lumbar Puncture in Early Syphilis

CSF abnormalities are common in adults with early syphilis. Despite the frequency of these CSF findings, very few patients develop neurosyphilis when the treatment regimens described above are used. Therefore, unless clinical signs and symptoms of neurologic involvement exist, such as optic, auditory, cranial nerve, or meningeal symptoms, lumbar puncture is not recommended for routine evaluation of early syphilis. This recommendation also applies to immunocompromised and HIV-infected patients, since no clear data currently show that these patients need increased therapy.

# HIV Testing

All syphilis patients should be counseled concerning the risks of HIV and be encouraged to be tested for HIV.

# Late Latent Syphilis of More Than 1 Year's Duration, Gummas, and Cardiovascular Syphilis

All patients should have a thorough clinical examination. Ideally, all patients with syphilis of more than 1 year's duration should have a CSF examination; however, performance of lumbar puncture can be individualized. In older asymptomatic individuals, the yield of lumbar puncture is likely to be low; however, CSF examination is clearly indicated in the following specific situations:

# Recommended Regimen

Benzathine penicillin G, 7.2 million units total, administered as 3 doses of 2.4 million units IM, given 1 week apart for 3 consecutive weeks.

# Alternative Regimen for Penicillin-Allergic Patients (Nonpregnant)

Doxycycline, 100 mg orally 2 times a day for 4 weeks, or Tetracycline, 500 mg orally 4 times a day for 4 weeks. If patients are allergic to penicillin, alternate drugs should be used only after CSF examination has excluded neurosyphilis. Penicillin allergy is best determined by careful history taking, but skin testing may be used if the major and minor determinants are available (see Appendix).

# Follow-Up

Quantitative nontreponemal serologic tests should be repeated at 6 months and 12 months. If titers increase fourfold, if an initially high titer (≥1:32) fails to decrease, or if the patient has signs or symptoms attributable to syphilis, the patient should be evaluated for neurosyphilis and retreated appropriately.
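Nontreponemal titers are reported as doubling dilutions, so the "fourfold" criteria above correspond to a shift of two dilution steps. For example:

$$
1{:}32 \rightarrow 1{:}8~\text{(fourfold decline)}, \qquad 1{:}8 \rightarrow 1{:}32~\text{(fourfold increase)}.
$$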
# HIV Testing

All syphilis patients should be counseled concerning the risks of HIV and be encouraged to be tested for HIV antibody.

# Neurosyphilis

Central nervous system disease may occur during any stage of syphilis. Clinical evidence of neurologic involvement (e.g., optic and auditory symptoms, cranial nerve palsies) warrants CSF examination.

# Recommended Regimen

Aqueous crystalline penicillin G, 12-24 million units daily, administered as 2-4 million units IV every 4 hours, for 10-14 days.

# Alternative Regimen (If Outpatient Compliance Can Be Ensured)

Procaine penicillin, 2-4 million units IM daily, and Probenecid, 500 mg orally 4 times a day, both for 10-14 days. Many authorities recommend the addition of benzathine penicillin G, 2.4 million units IM weekly for three doses, after completion of these neurosyphilis treatment regimens. No systematically collected data have evaluated therapeutic alternatives to penicillin. Patients who cannot tolerate penicillin should be skin tested and desensitized, if necessary, or managed in consultation with an expert.

# Follow-Up

If an initial CSF pleocytosis was present, CSF examination should be repeated every 6 months until the cell count is normal. If it has not decreased at 6 months, or is not normal by 2 years, retreatment should be strongly considered.

# HIV Testing

All syphilis patients should be counseled concerning the risks of HIV and be encouraged to be tested for HIV antibody.

# Syphilis in Pregnancy

# Screening

In populations in which prenatal care utilization is not optimal, patients should be screened and, if necessary, treated at the time pregnancy is detected. In areas of high syphilis prevalence, or in patients at high risk, screening should be repeated in the third trimester and again at delivery.

# Treatment

Patients should be treated with the penicillin regimen appropriate for the woman's stage of syphilis. Tetracycline and doxycycline are contraindicated in pregnancy. Erythromycin should not be used because of the high risk of failure to cure infection in the fetus. Pregnant women with histories of penicillin allergy should first be carefully questioned regarding the validity of the history. If necessary, they should then be skin tested and either treated with penicillin or referred for desensitization (see Appendix). Women who are treated in the second half of pregnancy are at risk for premature labor and/or fetal distress if their treatment precipitates a Jarisch-Herxheimer reaction. They should be advised to seek medical attention following treatment if they notice any change in fetal movements or have any contractions. Stillbirth is a rare complication of treatment; however, since therapy is necessary to prevent further fetal damage, this concern should not delay treatment.

# Follow-Up

Monthly follow-up is mandatory so that retreatment can be given if needed. The antibody response should be the same as for nonpregnant patients.

# HIV Testing

All syphilis patients should be counseled concerning the risks of HIV and be encouraged to be tested for HIV antibody.

# Congenital Syphilis

# Who Should Be Evaluated

Infants should be evaluated if they were born to seropositive (nontreponemal test confirmed by treponemal test) women who:

An infant should not be released from the hospital until the serologic status of its mother is known.
# Evaluation of the Infant

The clinical and laboratory evaluation of infants born to women described above should include:*

*In the immediate newborn period, interpretation of these tests may be difficult; normal values vary with gestational age and are higher in preterm infants. Other causes of elevated values should also be considered. However, when an infant is being evaluated for congenital syphilis, the infant should be treated if test results cannot exclude infection.

# Therapy Decisions

Infants should be treated if they have:

- Any evidence of active disease (physical examination or x-ray); or

Even if the evaluation is normal, infants should be treated if their mothers have untreated syphilis or evidence of relapse or reinfection after treatment. Infants who meet the criteria listed in "Who Should be Evaluated" but are not fully evaluated should be assumed to be infected, and treated.

# Treatment

Treatment should consist of: 100,000-150,000 units/kg of aqueous crystalline penicillin G daily (administered as 50,000 units/kg IV every 8-12 hours) or 50,000 units/kg of procaine penicillin daily (administered once IM) for 10-14 days. If more than 1 day of therapy is missed, the entire course should be restarted. All symptomatic neonates should also have an ophthalmologic examination.

Infants who meet the criteria listed in "Who Should be Evaluated" but who after evaluation do not meet the criteria listed in "Therapy Decisions" are at low risk for congenital syphilis. If their mothers were treated with erythromycin during pregnancy, or if close follow-up cannot be assured, they should be treated with benzathine penicillin G, 50,000 units/kg IM as a one-time dose.

# Follow-Up

Seropositive untreated infants must be closely followed at 1, 2, 3, 6, and 12 months of age. In the absence of infection, nontreponemal antibody titers should be decreasing by 3 months of age and should have disappeared by 6 months of age. If these titers are found to be stable or increasing, the child should be reevaluated and fully treated. Additionally, in the absence of infection, treponemal antibodies may be present up to 1 year. If they are present beyond 1 year, the infant should be treated for congenital syphilis.

Treated infants should also be observed to ensure decreasing nontreponemal antibody titers; these should have disappeared by 6 months of age. Treponemal tests should not be used, since they may remain positive despite effective therapy if the child was infected. Infants with documented CSF pleocytosis should be reexamined every 6 months or until the cell count is normal. If the cell count is still abnormal after 2 years, or if a downward trend is not present at each examination, the infant should be retreated. The CSF-VDRL should also be checked at 6 months; if it is still reactive, the infant should be retreated.

# Therapy of Older Infants and Children

After the newborn period, children discovered to have syphilis should have a CSF examination to rule out congenital syphilis. Any child who is thought to have congenital syphilis or who has neurologic involvement should be treated with 200,000-300,000 units/kg/day of aqueous crystalline penicillin G (administered as 50,000 units/kg every 4-6 hours) for 10-14 days. Older children with definite acquired syphilis and a normal neurologic examination may be treated with benzathine penicillin G, 50,000 units/kg IM, up to the adult dose of 2.4 million units.
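The pediatric doses above are weight-based arithmetic with an adult ceiling. The following is a hedged sketch under the stated figures (Python; the function names are ours and purely illustrative, not from the guidelines):

```python
# Illustrative sketch of the weight-based dosing arithmetic described above.

ADULT_BENZATHINE_CAP_UNITS = 2_400_000  # adult dose of benzathine penicillin G

def benzathine_dose_units(weight_kg: float) -> int:
    """Benzathine penicillin G for acquired syphilis in older children:
    50,000 units/kg IM, capped at the 2.4 million unit adult dose."""
    return int(min(weight_kg * 50_000, ADULT_BENZATHINE_CAP_UNITS))

def crystalline_daily_range_units(weight_kg: float) -> tuple[int, int]:
    """Aqueous crystalline penicillin G for congenital syphilis or neurologic
    involvement after the newborn period: 200,000-300,000 units/kg/day,
    given as 50,000 units/kg every 4-6 hours."""
    return int(weight_kg * 200_000), int(weight_kg * 300_000)

# Example: a 20-kg child gets 1.0 million units of benzathine penicillin G,
# while a 60-kg adolescent is capped at the 2.4 million unit adult dose.
assert benzathine_dose_units(20) == 1_000_000
assert benzathine_dose_units(60) == 2_400_000
```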
Children with a history of penicillin allergy should be skin tested and, if necessary, desensitized (see Appendix). Follow-up should be performed as described previously.

# HIV Testing

In cases of congenital syphilis, the mother should be counseled concerning the risks of HIV and be encouraged to be tested for HIV; if her test is positive, the infant should be referred for follow-up.

# Syphilis in HIV-Infected Patients

# Diagnosis

- All sexually active patients with syphilis should be encouraged to be counseled and tested for HIV because of the frequency of association of the two diseases and the implications for clinical assessment and management.
- Neurosyphilis should be considered in the differential diagnosis of neurologic disease in HIV-infected persons.
- When clinical findings suggest that syphilis is present but serologic tests are negative or confusing, alternative tests, such as biopsy of lesions, dark-field examination, and direct fluorescent antibody staining of lesion material, should be used.
- In cases of congenital syphilis, the mother should be encouraged to be counseled and tested for HIV; if her test is positive, the infant should be referred for follow-up.

# Appendix

# Management of Patients with Histories of Penicillin Allergy

Currently, no proven alternative therapies to penicillin are available for treating patients with neurosyphilis, congenital syphilis, or syphilis in pregnancy. Therefore, skin testing, with desensitization if indicated, is recommended for these patients.

# Skin Testing

Skin testing is a rapid, safe, and accurate procedure (see below). It is also productive; 90% of patients with histories of "penicillin allergy" have negative skin tests and can be given penicillin safely. The other 10% with positive skin tests have an increased risk of being truly penicillin-allergic and should undergo desensitization. Clinics involved in STD management should be equipped and prepared to do skin testing or should establish referral mechanisms to have skin tests performed.

Skin testing is quick; four determinants, along with positive and negative controls, can be placed and read in an hour (Table 3). Skin testing is also safe, if properly performed. Patients who have had a severe, life-threatening reaction in the past year should be tested in a controlled environment, such as a hospital setting, and the determinant antigens diluted 100-fold. Other patients can be skin-tested safely in a physician-staffed clinic.

Patients with a history of penicillin allergy but with no reaction to penicillin skin tests, who are not on antihistamines and who had a positive histamine control on skin testing (Table 3), should be given penicillin, 250 mg orally, and observed for one hour. Patients who tolerate this dose well may be treated with penicillin as needed.

# Desensitization

Patients who have a positive skin test to one of the penicillin determinants can be desensitized. This is a straightforward, relatively safe procedure. Although the procedure can be done orally or intravenously, oral desensitization is thought to be safer, simpler, and easier. Desensitization should be done in a hospital setting because a serious IgE-mediated allergic reaction, although unlikely, can occur. Desensitization can be completed in 4 hours, after which the first dose of penicillin is given (Table 4). STD programs should have a referral center where patients with positive skin tests can be desensitized.
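The oral schedule referenced above (Table 4) administers escalating doses at 15-minute intervals that build to a cumulative total of about 1.3 million units. The sketch below generates a generic doubling schedule of that shape; the starting dose and step count are hypothetical placeholders, not the published Table 4 values:

```python
# Hypothetical sketch of an escalating oral desensitization schedule; the
# published Table 4 values differ. Doses double at 15-minute intervals.

def doubling_schedule(start_units: float, n_doses: int, interval_min: int = 15):
    """Yield (dose_number, elapsed_minutes, dose_units, cumulative_units)."""
    cumulative = 0.0
    for i in range(n_doses):
        dose = start_units * 2 ** i   # each dose doubles the previous one
        cumulative += dose
        yield i + 1, i * interval_min, dose, cumulative

# Example: a 14-step series starting at 100 units gives a cumulative dose on
# the order of the ~1.3 million units cited for the published schedule.
for n, minutes, dose, total in doubling_schedule(100, 14):
    print(f"dose {n:2d} at {minutes:3d} min: {dose:>9,.0f} units "
          f"(cumulative {total:>9,.0f})")
```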
After desensitization, patients must be maintained on penicillin for the duration of therapy.

# TABLE 3. Penicillin allergy skin testing (adapted from Beall*)

Note: If there has been a severe generalized reaction to penicillin in the previous year, the antigens should be diluted 100-fold, and patients should be tested in a controlled environment. Both major and minor determinants should be available for the tests to be interpretable. The patient should not have taken antihistamines in the previous 48 hours.

# Reagents

Major determinants:
- Benzylpenicilloyl-polylysine (major, Pre-Pen, 6 × 10⁻⁵ M)
- Benzylpenicillin (10⁻² M or 6,000 U/mL)

Minor determinants:
- Benzylpenicilloic acid (10⁻² M)
- Benzylpenilloic acid (10⁻² M)

Positive control (histamine, 1 mg/mL)
Negative control (buffered saline solution)

Dilute the antigens 100-fold for preliminary testing if there has been an immediate generalized reaction within the past year.

# Procedure

Epicutaneous (scratch or prick) test: apply one drop of material to the volar forearm and pierce the epidermis without drawing blood; observe for 20 minutes. If there is no wheal ≥4 mm, proceed to the intradermal test.

Intradermal test: inject 0.02 mL intradermally with a 27-gauge short-bevelled needle; observe for 20 minutes.

# Interpretation

For the test to be interpretable, the negative (saline) control must elicit no reaction and the positive (histamine) control must elicit a positive reaction.

Positive test: a wheal ≥4 mm in mean diameter to any penicillin reagent; erythema must be present.
Negative test: the wheals at the site of the penicillin reagents are equivalent to the negative control.
Indeterminate: all other results.

# TABLE 4. Oral desensitization schedule. Interval between doses, 15 minutes; elapsed time, 3 hours and 45 minutes; cumulative dose, 1.3 million units. The specific amount of drug was diluted in approximately 30 mL of water and then given orally. Adapted with permission from the New England Journal of Medicine 1985;312:1229-32.

# Lymphogranuloma Venereum

Lymphogranuloma venereum (LGV) is caused by C. trachomatis (LGV serovars). Inguinal lymphadenopathy is the most common clinical manifestation. Diagnosis is often made clinically and may be confused with chancroid. LGV is not a common cause of inguinal lymphadenopathy in the United States.

# Treatment: Genital, Inguinal, or Anorectal

# Recommended Regimen

Doxycycline 100 mg orally 2 times a day for 21 days.

# Alternative Regimen

Tetracycline 500 mg orally 4 times a day for 21 days or Erythromycin 500 mg orally 4 times a day for 21 days or Sulfisoxazole 500 mg orally 4 times a day for 21 days, or an equivalent sulfonamide course.

# Genital Herpes Simplex Virus Infections

Genital herpes is a viral disease that may be chronic and recurring and for which no known cure exists. Systemic acyclovir treatment provides partial control of the symptoms and signs of herpes episodes; it accelerates healing but does not eradicate the infection or affect the subsequent risk, frequency, or severity of recurrences after the drug is discontinued. Topical therapy with acyclovir is substantially less effective than therapy with the oral drug.

# First Clinical Episode of Genital Herpes

# Recommended Regimen

Acyclovir 200 mg orally 5 times a day for 7-10 days or until clinical resolution occurs.

# First Clinical Episode of Herpes Proctitis

# Recommended Regimen

Acyclovir 400 mg orally 5 times a day for 10 days or until clinical resolution occurs.
# Inpatient Therapy

For patients with severe disease or complications necessitating hospitalization:

# Recommended Regimen

Acyclovir 5 mg/kg body weight IV every 8 hours for 5-7 days or until clinical resolution occurs.

# Recurrent Episodes

Most episodes of recurrent herpes do not benefit from therapy with acyclovir. In severe recurrent disease, some patients who start therapy at the beginning of the prodrome or within 2 days after onset of lesions may benefit from therapy, although this has not been proven.

# Recommended Regimen

Acyclovir 200 mg orally 5 times a day for 5 days or Acyclovir 800 mg orally 2 times a day for 5 days.

# Daily Suppressive Therapy

Daily treatment reduces the frequency of recurrences by at least 75% among patients with frequent (more than six per year) recurrences. Safety and efficacy have been clearly documented among persons receiving daily therapy for up to 3 years. Acyclovir-resistant strains of HSV have been isolated from persons receiving suppressive therapy, but they have not been associated with treatment failure among immunocompetent patients. After 1 year of continuous daily suppressive therapy, acyclovir should be discontinued so that the patient's recurrence rate may be reassessed.

# Recommended Regimen

Acyclovir 200 mg orally 2 to 5 times a day* or Acyclovir 400 mg orally 2 times a day.*

*Dosage must be individualized for each patient.

# Genital Herpes among HIV-Infected Patients

The need for higher-than-standard doses of oral acyclovir among HIV-infected but immunocompetent patients has not been established. Immune status, not HIV infection alone, is the likely predictor of disease severity and response to therapy. Case reports strongly suggest that patients with clinical immunodeficiency have a more severe clinical course of anogenital herpes than do immunocompetent patients, and some health-care providers are using increased doses of acyclovir for patients with immunodeficiency. However, neither the need for nor the proper increased dosage of acyclovir has been conclusively established. Immunocompromised as well as immunocompetent hosts who fail initial therapy may benefit from an increased dosage of acyclovir. The indications for suppressive therapy among immunocompromised patients, and the dose required, are controversial. Clinical benefits to the patient must be weighed against the potential for selecting HSV strains that are resistant to acyclovir. Patients whose therapy for a recurrence fails because of resistant strains of HSV should be managed in consultation with an expert.

# Acyclovir for Treating Pregnant Patients

The safety of systemic acyclovir therapy among pregnant women has not been established. In the presence of life-threatening maternal HSV infection (e.g., disseminated infection that includes encephalitis, pneumonitis, and/or hepatitis), acyclovir administered IV is probably of value. Among pregnant women without life-threatening disease, systemic acyclovir treatment should not be used for recurrent genital herpes episodes or as suppressive therapy to prevent reactivation near term.

# Perinatal Infections

Most mothers of infants who acquire neonatal herpes lack histories of clinically evident genital herpes. The risk of transmission to the neonate from an infected mother is highest among women with primary herpes infection near the time of delivery, and it is low among women with recurrent herpes.
The results of viral cultures during pregnancy do not predict viral shedding at the time of delivery; such cultures are not routinely indicated. At the onset of labor, all women should be examined and carefully questioned about symptoms. Women without symptoms or signs of genital herpes infection or prodrome may have vaginal deliveries. For women who have a history of genital herpes or who have a sex partner with genital herpes, cultures of the birth canal at delivery may be helpful in decisions about neonatal management. Infants delivered through an infected birth canal (proven by culture or presumed by observation of lesions) should be cultured and observed carefully. Although data are limited concerning the use of acyclovir for asymptomatic infants, some experts presumptively treat infants who were exposed to HSV at delivery. Herpes cultures should be obtained from infants before therapy; positive cultures obtained 24-48 hours or more after birth indicate active viral infection.

# Counseling and Management of Sex Partners

Patients with genital herpes should be told about the natural history of their disease, with emphasis on the potential for recurrent episodes. Patients should be advised to abstain from sexual activity while lesions are present. Sexual transmission of HSV has been documented during periods without recognized lesions. Suppressive treatment with oral acyclovir reduces the frequency of recurrences but does not totally eliminate viral shedding. Genital herpes and other diseases causing genital ulcers have been associated with an increased risk of acquiring HIV infections; therefore, condoms should be used during all sexual exposures. If sex partners of patients with genital herpes have genital lesions, they may benefit from evaluation; however, evaluation of asymptomatic partners is of little value in preventing transmission of HSV. The risk of neonatal infection should be explained to all patients, male and female, with genital herpes. Women of child-bearing age with genital herpes should be advised to inform their clinicians of their history during any future pregnancy.

# Infections of Epithelial Surfaces

# Genital Warts

No therapy has been shown to eradicate HPV. HPV has been demonstrated in adjacent tissue after laser treatment of HPV-associated cervical intraepithelial neoplasia and after attempts to eliminate subclinical HPV by extensive laser vaporization of the anogenital area. The benefit of treating patients with subclinical HPV infection has not been demonstrated, and recurrence is common. The effect of genital wart treatment on HPV transmission and the natural history of HPV is unknown. Therefore, the goal of treatment is removal of exophytic warts and the amelioration of signs and symptoms, not the eradication of HPV.

Expensive therapies, toxic therapies, and procedures that result in scarring should be avoided. Sex partners should be examined for evidence of warts. Patients with anogenital warts should be made aware that they are contagious to uninfected sex partners. The use of condoms is recommended to help reduce transmission. In most clinical situations, cryotherapy with liquid nitrogen or cryoprobe is the treatment of choice for external genital and perianal warts. Cryotherapy is nontoxic, does not require anesthesia, and, if used properly, does not result in scarring. Podophyllin, trichloroacetic acid (TCA), and electrodesiccation/electrocautery are alternative therapies.
Treatment with interferon is not recommended because of its relatively low efficacy, high incidence of toxicity, and high cost. The carbon dioxide laser and conventional surgery are useful in the management of extensive warts, particularly for patients who have not responded to cryotherapy; these alternatives are not appropriate for limited lesions. Like more cost-effective treatments, these therapies do not eliminate HPV and often are associated with the recurrence of clinical cases.

# Pregnant Patients and Perinatal Infections

Cesarean delivery for prevention of transmission of HPV infection to the newborn is not indicated. In rare instances, however, cesarean delivery may be indicated for women with genital warts if the pelvic outlet is obstructed or if vaginal delivery would result in excessive bleeding. Genital papillary lesions have a tendency to proliferate and to become friable during pregnancy. Many experts advocate removal of visible warts during pregnancy, although data on this subject are limited. HPV can cause laryngeal papillomatosis in infants. The route of transmission (transplacental, birth canal, or postnatal) is unknown; therefore, the preventive value of cesarean delivery is unknown. The perinatal transmission rate is also unknown, although it must be very low, given the relatively high prevalence of genital warts and the rarity of laryngeal papillomas. Neither routine HPV screening tests nor cesarean delivery is indicated to prevent transmission of HPV infection to the newborn.

# Gonococcal Infections

Treatment of gonococcal infections in the United States is influenced by the following trends: 1) the spread of infections due to antibiotic-resistant N. gonorrhoeae, including penicillinase-producing N. gonorrhoeae (PPNG), tetracycline-resistant N. gonorrhoeae (TRNG), and strains with chromosomally mediated resistance to multiple antibiotics; 2) the high frequency of chlamydial infections in persons with gonorrhea; 3) recognition of the serious complications of chlamydial and gonococcal infections; and 4) the absence of a fast, inexpensive, and highly accurate test for chlamydial infection.

All gonorrhea cases should be diagnosed or confirmed by culture to facilitate antimicrobial susceptibility testing. The susceptibility of N. gonorrhoeae to antibiotics is likely to change over time in any locality. Therefore, gonorrhea control programs should include a system of regular antibiotic sensitivity testing of a surveillance sample of N. gonorrhoeae isolates as well as of all isolates associated with treatment failure.

Because of the wide spectrum of antimicrobial therapies effective against N. gonorrhoeae, these guidelines are not intended to be a comprehensive list of all possible treatment regimens.

# Treatment of Adults

# Uncomplicated Urethral, Endocervical, or Rectal Infections

Single-dose efficacy is a major consideration in choosing an antibiotic regimen to treat persons infected with N. gonorrhoeae. Another important concern is coexisting chlamydial infection, documented in up to 45% of gonorrhea cases in some populations. Until universal testing for chlamydia with quick, inexpensive, and highly accurate tests becomes available, persons with gonorrhea should also be treated for presumptive chlamydial infections. Generally, patients with gonorrhea infections should be treated simultaneously with antibiotics effective against both C. trachomatis and N. gonorrhoeae. Simultaneous treatment may lessen the possibility of treatment failure due to antibiotic resistance.

# Recommended Regimen

Ceftriaxone 250 mg IM once plus Doxycycline 100 mg orally 2 times a day for 7 days.

Some authorities prefer a dose of 125 mg ceftriaxone IM because it is less expensive and can be given in a volume of only 0.5 mL, which is more easily administered in the deltoid muscle. However, the 250-mg dose is recommended because it may delay the emergence of ceftriaxone-resistant strains. At this time, both doses appear highly effective for mucosal gonorrhea at all sites.

# Alternative Regimens

For patients who cannot take ceftriaxone, the preferred alternative is Spectinomycin 2 g IM in a single dose (followed by doxycycline). Other alternatives, for which experience is less extensive, include ciprofloxacin* 500 mg orally once; norfloxacin* 800 mg orally once; cefuroxime axetil 1 g orally once with probenecid 1 g; cefotaxime 1 g IM once; and ceftizoxime 500 mg IM once. All of these regimens are followed by doxycycline 100 mg orally, twice daily for 7 days.
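Encoded as a decision sketch, the first-line choices above look as follows (Python; purely illustrative, and it deliberately omits the special cases, such as the penicillin option for proven-sensitive sources, discussed next):

```python
# Illustrative encoding of the first-line choices above; not a substitute for
# the full text or for clinical judgment (special cases are omitted).

def uncomplicated_gc_regimen(tolerates_ceftriaxone: bool,
                             pregnant: bool) -> tuple[str, str]:
    """Return (antigonococcal drug, antichlamydial companion)."""
    if tolerates_ceftriaxone:
        gc_drug = "ceftriaxone 250 mg IM once"
    else:
        gc_drug = "spectinomycin 2 g IM once"  # preferred alternative
    if pregnant:
        # Tetracyclines (including doxycycline) are contraindicated in pregnancy.
        companion = "erythromycin base 500 mg orally 4 times a day for 7 days"
    else:
        companion = "doxycycline 100 mg orally 2 times a day for 7 days"
    return gc_drug, companion

print(uncomplicated_gc_regimen(tolerates_ceftriaxone=True, pregnant=False))
print(uncomplicated_gc_regimen(tolerates_ceftriaxone=False, pregnant=True))
```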
If infection was acquired from a source proven not to have penicillin-resistant gonorrhea, a penicillin such as amoxicillin, 3 g orally with 1 g probenecid, followed by doxycycline, may be used for treatment.

Doxycycline or tetracycline alone is no longer considered adequate therapy for gonococcal infections but is added for treatment of coexisting chlamydial infections. Tetracycline may be substituted for doxycycline; however, compliance may be worse, since tetracycline must be taken at a dose of 500 mg 4 times a day between meals, whereas doxycycline is taken at a dose of 100 mg 2 times a day without regard to meals. Moreover, at current prices, tetracycline costs only a little less than generic doxycycline.

*Quinolones, such as ciprofloxacin and norfloxacin, are contraindicated during pregnancy and in children 16 years of age or younger.

For patients who cannot take a tetracycline (e.g., pregnant women), erythromycin may be substituted (erythromycin base or stearate, 500 mg orally 4 times a day for 7 days, or erythromycin ethylsuccinate, 800 mg orally 4 times a day for 7 days). See "Chlamydial Infections" for further information on management of chlamydial infection.

# Special Considerations

All patients with gonorrhea should have a serologic test for syphilis and should be offered confidential counseling and testing for HIV infection. Most patients with incubating syphilis (those who are seronegative and have no clinical signs of syphilis) may be cured by any of the regimens containing β-lactams (e.g., ceftriaxone) or tetracyclines. Spectinomycin and the quinolones (ciprofloxacin, norfloxacin) have not been shown to be active against incubating syphilis. Patients treated with these drugs should have a serologic test for syphilis in 1 month. Patients with gonorrhea and documented syphilis, and gonorrhea patients who are sex partners of syphilis patients, should be treated for syphilis (see "Syphilis") as well as for gonorrhea.

Some practitioners report that mixing 1% lidocaine (without epinephrine) with ceftriaxone reduces the discomfort associated with the injection (see package insert). No adverse reactions have been associated with use of lidocaine diluent.

# Management of Sex Partners

Persons exposed to gonorrhea within the preceding 30 days should be examined, cultured, and treated presumptively.

# Follow-Up

Treatment failure following combined ceftriaxone/doxycycline therapy is rare; therefore, a follow-up culture ("test-of-cure") is not essential. A more cost-effective strategy may be a reexamination with culture 1-2 months after treatment ("rescreening"); this strategy detects both treatment failures and reinfections. Patients should return for examination if symptoms persist after treatment. Because there is less long-term experience with drugs other than ceftriaxone, patients treated with regimens other than ceftriaxone/doxycycline should have follow-up cultures obtained 4-7 days after completion of therapy.

# Treatment Failures

Persistent symptoms after treatment should be evaluated by culture for N. gonorrhoeae, and any gonococcal isolate should be tested for antibiotic sensitivity. Symptoms of urethritis may also be caused by C. trachomatis and other organisms associated with nongonococcal urethritis (see "Nongonococcal Urethritis"). Additional treatment for patients with gonorrhea should be ceftriaxone, 250 mg, followed by doxycycline.
Infections occurring after treatment with one of the recommended regimens are commonly due to reinfection rather than to treatment failure and indicate a need for improved sex-partner referral and patient education.

# Pharyngeal Gonococcal Infection

Patients with uncomplicated pharyngeal gonococcal infection should be treated with ceftriaxone 250 mg IM once. Patients who cannot be treated with ceftriaxone should be treated with ciprofloxacin 500 mg orally as a single dose. Since experience with this regimen is limited, such patients should be evaluated with repeat culture 4-7 days after treatment.

# Treatment of Gonococcal Infections in Pregnancy

Pregnant women should be cultured for N. gonorrhoeae (and tested for C. trachomatis and syphilis) at the first prenatal-care visit. For women at high risk of STD, a second culture for gonorrhea (as well as tests for chlamydia and syphilis) should be obtained late in the third trimester.

# Recommended Regimen

Ceftriaxone 250 mg IM once plus Erythromycin base 500 mg orally 4 times a day for 7 days.

Pregnant women allergic to β-lactams should be treated with spectinomycin 2 g IM once (followed by erythromycin). Follow-up cervical and rectal cultures for N. gonorrhoeae should be obtained 4-7 days after treatment is completed.

Ideally, pregnant women with gonorrhea should be treated for chlamydia on the basis of chlamydial diagnostic studies. If chlamydial diagnostic testing is not available, treatment for chlamydia should be given. Tetracyclines (including doxycycline) and the quinolones are contraindicated in pregnancy because of possible adverse effects on the fetus. Treatments for pregnant patients with chlamydial infection, acute salpingitis, and disseminated gonorrhea in pregnancy are described in the respective sections.

# Disseminated Gonococcal Infection (DGI)

Hospitalization is recommended for initial therapy, especially for patients who cannot reliably comply with treatment, have uncertain diagnoses, or have purulent synovial effusions or other complications. Patients should be examined for clinical evidence of endocarditis or meningitis. When the infecting organism is proven to be penicillin-sensitive, parenteral treatment may be switched to ampicillin 1 g every 6 hours (or equivalent). Patients treated for DGI should be tested for genital C. trachomatis infection. If chlamydial testing is not available, patients should be treated empirically for coexisting chlamydial infection.

Reliable patients with uncomplicated disease may be discharged 24-48 hours after all symptoms resolve and may complete the therapy (for a total of 1 week of antibiotic therapy) with an oral regimen of cefuroxime axetil 500 mg 2 times a day, or amoxicillin 500 mg with clavulanic acid 3 times a day, or, if not pregnant, ciprofloxacin 500 mg 2 times a day.

# Meningitis and Endocarditis

Meningitis and endocarditis caused by N. gonorrhoeae require high-dose IV therapy with an agent effective against the strain causing the disease, such as ceftriaxone 1-2 g IV every 12 hours. Optimal duration of therapy is unknown, but most authorities treat patients with gonococcal meningitis for 10-14 days and with gonococcal endocarditis for at least 4 weeks. Patients with gonococcal nephritis, endocarditis or meningitis, or recurrent DGI should be evaluated for complement deficiencies. Treatment of complicated DGI should be undertaken in consultation with an expert.
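For quick reference, the parenteral-therapy durations stated above can be tabulated; this is an illustrative summary only, and the full text governs regimen details:

```python
# Illustrative lookup of the therapy durations stated above for complicated
# gonococcal infection; consult the surrounding text for the full regimens.

GC_DURATIONS = {
    "uncomplicated DGI": "1 week of antibiotic therapy in total "
                         "(oral completion after 24-48 h symptom-free)",
    "gonococcal meningitis": "10-14 days (ceftriaxone 1-2 g IV every 12 h)",
    "gonococcal endocarditis": "at least 4 weeks (ceftriaxone 1-2 g IV every 12 h)",
}

for syndrome, duration in GC_DURATIONS.items():
    print(f"{syndrome}: {duration}")
```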
# Adult Gonococcal Ophthalmia

Adults and children over 20 kg with nonsepticemic gonococcal ophthalmia should be treated with ceftriaxone 1 g IM once. Irrigation of the eyes with saline or buffered ophthalmic solutions may be useful adjunctive therapy to eliminate discharge. All patients must have careful ophthalmologic assessment, including slit-lamp examination for ocular complications. Topical antibiotics alone are insufficient therapy and are unnecessary when appropriate systemic therapy is given. Simultaneous ophthalmic infection with C. trachomatis has been reported and should be considered for patients who do not respond promptly.

# Gonococcal Infections of Infants and Children

Child abuse should be carefully considered and evaluated (see "Sexual Assault and Abuse of Children") for any child with documented gonorrhea.

# Treatment of Infants Born to Mothers with Gonococcal Infection

Infants born to mothers with untreated gonorrhea are at high risk of infection (e.g., ophthalmia and DGI) and should be treated with a single injection of ceftriaxone (50 mg/kg IV or IM, not to exceed 125 mg). Ceftriaxone should be given cautiously to hyperbilirubinemic infants, especially premature infants. Topical prophylaxis for neonatal ophthalmia is not adequate treatment for documented infections of the eye or other sites.

# Treatment of Infants with Gonococcal Infection

Infants with documented gonococcal infections at any site (e.g., eye) should be evaluated for DGI. This evaluation should include a careful physical examination, especially of the joints, as well as blood and CSF cultures. Infants with gonococcal ophthalmia or DGI should be treated for 7 days (10 to 14 days if meningitis is present) with one of the following regimens:

# Recommended Regimen

Ceftriaxone 25-50 mg/kg/day IV or IM in a single daily dose or Cefotaxime 25 mg/kg IV or IM every 12 hours.

# Alternative Regimen

Limited data suggest that uncomplicated gonococcal ophthalmia among infants may be cured with a single injection of ceftriaxone (50 mg/kg up to 125 mg). A few experts use this regimen for children who have no clinical or laboratory evidence of disseminated disease.

If the gonococcal isolate is proven to be susceptible to penicillin, crystalline penicillin G may be given. The dose is 100,000 units/kg/day given in 2 equal doses (4 equal doses per day for infants more than 1 week old). The dose should be increased to 150,000 units/kg/day for meningitis.

Infants with gonococcal ophthalmia should receive eye irrigations with buffered saline solutions until the discharge has cleared. Topical antibiotic therapy alone is inadequate. Simultaneous infection with C. trachomatis has been reported and should be considered for patients who do not respond satisfactorily. Therefore, the mother and infant should be tested for chlamydial infection.

# Gonococcal Infections of Children

Children who weigh ≥45 kg should be treated with adult regimens. Children who weigh <45 kg who have uncomplicated vulvovaginitis, cervicitis, urethritis, pharyngitis, or proctitis should be treated as follows:

# Recommended Regimen

Ceftriaxone 125 mg IM once.

Patients who cannot tolerate ceftriaxone may be treated with: Spectinomycin 40 mg/kg IM once.

Patients weighing <45 kg with bacteremia or arthritis should be treated with ceftriaxone 50 mg/kg (maximum 1 g) once daily for 7 days. For meningitis, the duration of treatment is increased to 10-14 days and the maximum dose is 2 g.
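The weight- and severity-based rules above reduce to a short decision sketch (Python; the function name and output strings are ours, illustrative only):

```python
# Illustrative decision sketch of the pediatric gonococcal rules above;
# descriptive output only, not a prescribing tool.

def pediatric_gc_plan(weight_kg: float, bacteremia_or_arthritis: bool = False,
                      meningitis: bool = False) -> str:
    if weight_kg >= 45:
        return "use adult regimens"
    if not (bacteremia_or_arthritis or meningitis):
        return "ceftriaxone 125 mg IM once (spectinomycin 40 mg/kg IM if intolerant)"
    cap_mg = 2000 if meningitis else 1000      # 2 g vs 1 g maximum daily dose
    dose_mg = min(weight_kg * 50, cap_mg)      # 50 mg/kg once daily
    days = "10-14" if meningitis else "7"
    return f"ceftriaxone {dose_mg:.0f} mg once daily for {days} days"

print(pediatric_gc_plan(10, bacteremia_or_arthritis=True))  # 500 mg for 7 days
print(pediatric_gc_plan(30, meningitis=True))               # 1500 mg for 10-14 days
print(pediatric_gc_plan(50))                                # adult regimens
```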
Children ≥8 years of age should also be given doxycycline 100 mg 2 times a day for 7 days. All patients should be evaluated for coinfection with syphilis and C. trachomatis. Follow-up cultures are necessary to ensure that treatment has been effective.

# Prevention of Ophthalmia Neonatorum

Instillation of a prophylactic agent into the eyes of all newborn infants is recommended to prevent gonococcal ophthalmia neonatorum and is required by law in most states. Although all regimens listed below effectively prevent gonococcal eye disease, their efficacy in preventing chlamydial eye disease is not clear. Furthermore, they do not eliminate nasopharyngeal colonization with C. trachomatis. Treatment of gonococcal and chlamydial infections in pregnant women is the best method for preventing neonatal gonococcal and chlamydial disease.

# Recommended Regimen

Erythromycin (0.5%) ophthalmic ointment, once, or Tetracycline (1%) ophthalmic ointment, once, or Silver nitrate (1%) aqueous solution, once.

One of these should be instilled into the eyes of every neonate as soon as possible after delivery, and definitely within 1 hour after birth. Single-use tubes or ampules are preferable to multiple-use tubes. The efficacy of tetracycline and erythromycin in the prevention of TRNG and PPNG ophthalmia is unknown, although both are probably effective because of the high concentrations of drug in these preparations. Bacitracin is not recommended.

# Chlamydial Infections

Culture and nonculture methods for diagnosis of C. trachomatis are now available. Appropriate use of these diagnostic tests is strongly encouraged, especially for screening asymptomatic high-risk women in whom infection would otherwise be undetected. However, in clinical settings where testing for chlamydia is not routine or available, treatment often is prescribed on the basis of clinical diagnosis or as cotreatment for gonorrhea (see "Gonococcal Infections"). In clinical settings, periodic surveys should be performed to determine local chlamydial prevalence in patients with gonorrhea. Priority groups for chlamydia testing, if resources are limited, are high-risk pregnant women, adolescents, and women with multiple sexual partners.

Results of chlamydial tests should be interpreted with care. The sensitivity of all currently available laboratory tests for C. trachomatis is substantially less than 100%; thus, false-negative tests are possible. Although the specificity of nonculture

# Cytomegalovirus

Cytomegalovirus (CMV) is a common infection in pregnant women, and 0.5%-2% of all infants of CMV-infected women are congenitally infected, although most of these infants are only mildly affected. Another 5%-10% of infants are perinatally infected; these infections are without known sequelae. The risk of severe congenital disease (retardation, deafness, visual problems) is highest when primary CMV infection occurs during pregnancy, although recurrent CMV may also cause severe congenital infection. Because severe congenital infection occurs before delivery, and because CMV infection is widespread, the route of delivery should not be influenced by viral shedding. No accepted routine therapy exists for either maternal or neonatal infection.
# Ectoparasitic Infections

# Pediculosis Pubis

# Recommended Regimen

Permethrin (1%) creme rinse applied to the affected area and washed off after 10 minutes or Pyrethrins and piperonyl butoxide applied to the affected area and washed off after 10 minutes or Lindane (1%) shampoo applied for 4 minutes and then thoroughly washed off. (Not recommended for pregnant or lactating women.)

Patients should be reevaluated after 1 week if symptoms persist. Retreatment may be necessary if lice are found or eggs are observed at the hair-skin junction. Sex partners should be treated as above.

Special Considerations. Pediculosis of the eyelashes should be treated by the application of occlusive ophthalmic ointment to the eyelid margins, 2 times a day for 10 days, to smother lice and nits. Lindane or other drugs should not be applied to the eyes. Clothing or bed linen that may have been contaminated by the patient within the past 2 days should be washed and/or dried by machine (hot cycle in each) or dry cleaned.

# Scabies

# Recommended Regimen (Adults and Older Children)

Lindane (1%), 1 oz. of lotion or 30 g of cream, applied thinly to all areas of the body from the neck down and washed off thoroughly after 8 hours. (Not recommended for pregnant or lactating women.)

# Treatment Recommendations

# External Genital/Perianal Warts

# Recommended Regimen

Cryotherapy with liquid nitrogen or cryoprobe.

# Alternative Regimen

Podophyllin 10%-25% in compound tincture of benzoin. Limit the total volume of podophyllin solution applied to <0.5 mL per treatment session. Thoroughly wash off in 1-4 hours. Treat <10 cm² per session. Repeat applications at weekly intervals. Mucosal warts are more likely to respond than highly keratinized warts on the penile shaft, buttocks, and pubic areas. Contraindicated in pregnancy.

Trichloroacetic acid (80%-90%). Apply only to warts; powder with talc or sodium bicarbonate (baking soda) to remove unreacted acid. Repeat application at weekly intervals.

Electrodesiccation/electrocautery. Electrodesiccation is contraindicated in patients with cardiac pacemakers, or for lesions proximal to the anal verge. Extensive or refractory disease should be referred to an expert.

# Cervical Warts

# Recommended Regimen

For women with cervical warts, dysplasia must be excluded before treatment is begun. Management should therefore be carried out in consultation with an expert.

# Vaginal Warts

# Recommended Regimen

Cryotherapy with liquid nitrogen. (The use of a cryoprobe in the vagina is not recommended because of the risk of vaginal perforation and fistula formation.)

# Alternative Regimen

Trichloroacetic acid (80%-90%). Apply only to warts; powder with talc or sodium bicarbonate (baking soda) to remove unreacted acid. Repeat application at weekly intervals.

Podophyllin 10%-25% in compound tincture of benzoin. The treatment area must be dry before the speculum is removed. Treat <2 cm² per session. Repeat application at weekly intervals. Contraindicated in pregnancy. Extensive or refractory disease should be referred to an expert.

# Urethral Meatus Warts

# Recommended Regimen

Cryotherapy with liquid nitrogen.

# Alternative Regimen

Podophyllin 10%-25% in compound tincture of benzoin. The treatment area must be dry before contact with normal mucosa, and podophyllin must be washed off in 1-2 hours. Contraindicated in pregnancy. Extensive or refractory disease should be referred to an expert.

# Anal Warts

# Recommended Regimen

Cryotherapy with liquid nitrogen.
Extensive or refractory disease should be referred to an expert.

# Alternative Regimen

Trichloroacetic acid (80%-90%). Surgical removal.

# Oral Warts

# Recommended Regimen

Cryotherapy with liquid nitrogen.

# Alternative Regimen

Electrodesiccation/electrocautery. Surgical removal. Extensive or refractory disease should be referred to an expert.

# Treatment of Chlamydial Infections

# Recommended Regimen

Doxycycline 100 mg orally 2 times a day for 7 days or Tetracycline 500 mg orally 4 times a day for 7 days.

# Alternative Regimen

Erythromycin base 500 mg orally 4 times a day, or an equivalent salt, for 7 days or Erythromycin ethylsuccinate 800 mg orally 4 times a day for 7 days.

If erythromycin is not tolerated because of side effects, the following regimen may be effective: Sulfisoxazole 500 mg orally 4 times a day for 10 days, or equivalent.

# Test of Cure

Because antimicrobial resistance of C. trachomatis to recommended regimens has not been observed, test-of-cure evaluation is not necessary when treatment has been completed.

# Treatment of C. trachomatis in Pregnancy

Pregnant women should undergo diagnostic testing for C. trachomatis, N. gonorrhoeae, and syphilis, if possible, at their first prenatal visit and, for women at high risk, during the third trimester. Risk factors for chlamydial disease during pregnancy include young age (<25 years), past history or presence of other STD, a new sex partner within the preceding 3 months, and multiple sex partners. Ideally, pregnant women with gonorrhea should be treated for chlamydia on the basis of diagnostic studies, but if chlamydial testing is not available, treatment should be given because of the high likelihood of coinfection.

# Recommended Regimen

Erythromycin base 500 mg orally 4 times a day for 7 days.

If this regimen is not tolerated, the following regimens are recommended: Erythromycin ethylsuccinate 800 mg orally 4 times a day for 7 days or Erythromycin ethylsuccinate 400 mg orally 4 times a day for 14 days.

# Alternative if Erythromycin Cannot Be Tolerated

Amoxicillin 500 mg orally 3 times a day for 7 days (limited data exist concerning this regimen).

Erythromycin estolate is contraindicated during pregnancy, since drug-related hepatotoxicity can result.

# Sex Partners of Patients with C. trachomatis Infections

Sex partners of patients who have C. trachomatis infection should be tested and treated for C. trachomatis if their contact was within 30 days of onset of symptoms.
If testing is not available, they should be treated with the appropriate antimicrobial regimen.

# Bacterial STD Syndromes

# Nongonococcal Urethritis

Among men with urethral symptoms, nongonococcal urethritis (NGU) is diagnosed by Gram stain demonstrating abundant polymorphonuclear leukocytes without intracellular gram-negative diplococci. C. trachomatis has been implicated as the cause of NGU in about 50% of cases. Other organisms that cause 10%-15% of cases include Ureaplasma urealyticum, T. vaginalis, and herpes simplex virus. The cause of other cases is unknown.

# Recommended Regimen

Doxycycline 100 mg orally 2 times a day for 7 days or Tetracycline 500 mg orally 4 times a day for 7 days.

# Alternative Regimen

Erythromycin base 500 mg orally 4 times a day, or an equivalent salt, for 7 days or Erythromycin ethylsuccinate 800 mg orally 4 times a day for 7 days.

If high-dose erythromycin schedules are not tolerated, the following regimen is recommended: Erythromycin ethylsuccinate 400 mg orally 4 times a day for 14 days or Erythromycin base 250 mg orally 4 times a day, or an equivalent salt, for 14 days.

# Management of Sex Partners

Sex partners of men with NGU should be evaluated for STD and treated with an appropriate regimen based on the evaluation.

# Recurrent NGU Unresponsive to Conventional Therapy

Recurrent NGU may be due to lack of compliance with an initial antibiotic regimen, to reinfection due to failure to treat sex partners, or to factors currently undefined. If noncompliance or reinfection cannot be ruled out, repeat doxycycline (100 mg orally 2 times a day for 7 days) or tetracycline (500 mg orally 4 times a day for 7 days). If compliance with the initial antimicrobial agent is likely, treat with one of the above-listed regimens. If objective signs of urethritis continue after adequate treatment, these patients should be evaluated for evidence of other causes of urethritis and referred to a specialist.

# Mucopurulent Cervicitis

The presence of mucopurulent endocervical exudate often suggests mucopurulent cervicitis (MPC) due to chlamydial or gonococcal infection. Presumptive diagnosis of MPC is made by the finding of mucopurulent secretion from the endocervix, which may appear yellow when viewed on a white cotton-tipped swab (positive swab test). Patients with MPC should have a Gram stain and culture for N. gonorrhoeae, a test for C. trachomatis, and a wet mount examination for T. vaginalis.

# Recommended Regimen

If N. gonorrhoeae is found on Gram stain or culture of endocervical or urethral discharge, treatment should be the same as that recommended for uncomplicated gonorrhea in adults, including cotreatment for chlamydial infection. If N. gonorrhoeae is not found, treatment should be the same as that recommended for chlamydial infection in adults.

# Management of Sex Partners

Sex partners of women with MPC should be evaluated for STD and treated with an appropriate regimen based on the evaluation.

# Epididymitis

Among sexually active heterosexual men <35 years of age, epididymitis is most likely caused by N. gonorrhoeae or C. trachomatis. Specimens should be obtained for a urethral smear for Gram stain and culture for N. gonorrhoeae and C. trachomatis and for a urine culture. Empiric therapy based on the clinical diagnosis is recommended before culture results are available.

# Recommended Regimen

Ceftriaxone 250 mg IM once and Doxycycline 100 mg orally 2 times a day for 10 days or Tetracycline 500 mg orally 4 times a day for 10 days.
# Pelvic Inflammatory Disease Treatment Guidelines

Pelvic inflammatory disease (PID) comprises a spectrum of inflammatory disorders of the upper genital tract in women. PID may include endometritis, salpingitis, tubo-ovarian abscess, and pelvic peritonitis. Sexually transmitted organisms, especially N. gonorrhoeae and C. trachomatis, are implicated in most cases; however, endogenous organisms, such as anaerobes, gram-negative rods, streptococci, and mycoplasmas, may also be etiologic agents of disease. A confirmed diagnosis of salpingitis, and a more accurate bacteriologic diagnosis, is made by laparoscopy. Since laparoscopy is not always available, the diagnosis of PID is often based on imprecise clinical findings and culture or antigen detection tests of specimens from the lower genital tract.

Guidelines for the treatment of patients with PID have been designed to provide flexibility in therapeutic choices. PID therapy regimens are designed to provide empiric, broad-spectrum coverage of likely etiologic pathogens. Antimicrobial coverage should include N. gonorrhoeae, C. trachomatis, gram-negatives, anaerobes, Group B streptococcus, and the genital mycoplasmas. Limited data demonstrate that effective treatment of the upper genital tract pyogenic process will decrease the incidence of long-term complications such as tubal infertility and ectopic pregnancy.

Ideally, as for all intra-abdominal infections, hospitalization is recommended whenever possible, and particularly when 1) the diagnosis is uncertain; 2) surgical emergencies such as appendicitis and ectopic pregnancy cannot be excluded; 3) a pelvic abscess is suspected; 4) the patient is pregnant; 5) the patient is an adolescent (the compliance of adolescent patients with therapy is unpredictable, and the long-term sequelae of PID may be particularly severe in this group); 6) severe illness precludes outpatient management; 7) the patient is unable to follow or tolerate an outpatient regimen; 8) the patient has failed to respond to outpatient therapy; or 9) clinical follow-up within 72 hours of starting antibiotic treatment cannot be arranged. Many experts recommend that all patients with PID be hospitalized so that treatment with parenteral antibiotics can be initiated.

Selection of a treatment regimen must consider institutional availability, cost-control efforts, patient acceptance, and regional differences in antimicrobial susceptibility. These treatment regimens are recommendations only, and the specific antibiotics named are examples. Treatments used for PID will continue to be broad spectrum and empiric until more definitive studies are performed.

# Inpatient Treatment

One of the following:

# Recommended Regimen A

Cefoxitin 2 g IV every 6 hours, or cefotetan* 2 g IV every 12 hours, plus Doxycycline 100 mg every 12 hours orally or IV.

The above regimen is given for at least 48 hours after the patient clinically improves. After discharge from hospital, continuation of: Doxycycline 100 mg orally 2 times a day for a total of 10-14 days.

# Recommended Regimen B

Clindamycin 900 mg IV every 8 hours plus Gentamicin loading dose IV or IM (2 mg/kg) followed by a maintenance dose (1.5 mg/kg) every 8 hours.

The above regimen is given for at least 48 hours after the patient improves. After discharge from hospital, continuation of: Doxycycline 100 mg orally 2 times a day for 10-14 days total.
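The gentamicin arithmetic in Regimen B above is simple weight-based dosing; the following is a hedged sketch (Python; names and rounding are illustrative, and serum-level monitoring may still be elected, as noted in the rationale below):

```python
# Hedged sketch of the Regimen B gentamicin arithmetic described above.

def gentamicin_doses_mg(weight_kg: float) -> tuple[float, float]:
    """Return (loading dose, maintenance dose) in mg:
    2 mg/kg IV or IM to load, then 1.5 mg/kg every 8 hours."""
    return 2.0 * weight_kg, 1.5 * weight_kg

# Example: a 60-kg patient receives a 120-mg loading dose, then 90 mg q8h.
loading, maintenance = gentamicin_doses_mg(60)
assert (loading, maintenance) == (120.0, 90.0)
```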
Continuation of clindamycin, 450 mg orally, 5 times daily, for 10 to 14 days, may be considered as an alternative. Continuation of medication after hospital discharge is important for the treatment of possible C. trachomatis infection. Clindamycin has more complete anaerobic coverage. Although limited data suggest that clindamycin is effective against C. trachomatis infection, doxycycline remains the treatment of choice for patients with chlamydial disease. When C. trachomatis is strongly suspected or confirmed as an etiologic agent, doxycycline is the preferable alternative. In such instances, doxycycline therapy may be started during hospitalization if initiation of therapy before hospital discharge is thought likely to improve the patient's compliance.

# Rationale

Clinicians have extensive experience with both the cefoxitin/doxycycline and clindamycin/aminoglycoside combinations. Each of these regimens provides broad coverage against polymicrobial infection. Cefotetan has properties similar to those of cefoxitin and requires less frequent dosing. Clinical data are limited on third-generation cephalosporins (ceftizoxime, cefotaxime, ceftriaxone), although many authorities believe they are effective. Doxycycline administered orally has bioavailability similar to that of the IV formulation and may be given if normal gastrointestinal function is present.

*Other cephalosporins, such as ceftizoxime, cefotaxime, and ceftriaxone, which provide adequate gonococcal, other facultative gram-negative aerobic, and anaerobic coverage, may be utilized in appropriate doses.

Experimental studies suggest that aminoglycosides may not be optimal treatment for gram-negative organisms within abscesses, but clinical studies suggest that they are highly effective in the treatment of abscesses when administered in combination with clindamycin. Although short courses of aminoglycosides in healthy young women usually do not require serum-level monitoring, many practitioners may elect to monitor levels.

# Ambulatory Management of PID

# Recommended Regimen

Cefoxitin 2 g IM plus probenecid 1 g orally, concurrently, or ceftriaxone 250 mg IM, or an equivalent cephalosporin, plus Doxycycline 100 mg orally 2 times a day for 10-14 days or Tetracycline 500 mg orally 4 times a day for 10-14 days.

# Alternative for Patients Who Do Not Tolerate Doxycycline

Erythromycin, 500 mg orally 4 times a day for 10-14 days, may be substituted for doxycycline/tetracycline. This regimen, however, is based on limited clinical data.

# Management of Sex Partners

Sex partners of women with PID should be evaluated for STD. After evaluation, sex partners should be empirically treated with regimens effective against N. gonorrhoeae and C. trachomatis infections.

# Intrauterine Device (IUD)

The intrauterine device is a risk factor for the development of pelvic inflammatory disease. Although the exact effect of removing an IUD on the response of acute salpingitis to antimicrobial therapy and on the risk of recurrent salpingitis is unknown, removal of the IUD is recommended soon after antimicrobial therapy has been initiated. When an IUD is removed, contraceptive counseling is necessary.

# Sexually Transmitted Enteric Infections

Sexually transmitted gastrointestinal syndromes include proctitis, proctocolitis, and enteritis. With the exception of rectal gonococcal infection, these syndromes occur predominantly in homosexual men who participate in receptive anal intercourse.
Evaluation should include additional diagnostic procedures, such as anoscopy or sigmoidoscopy, stool examination, and culture.

Proctitis is inflammation limited to the rectum (the distal 10-12 cm) and is associated with anorectal pain, tenesmus, constipation, and discharge. N. gonorrhoeae, C. trachomatis, and HSV are the most common sexually transmitted pathogens involved. Among patients coinfected with HIV, herpes proctitis may be especially severe.

Proctocolitis is associated with symptoms of proctitis plus diarrhea and/or abdominal cramps, and the colonic mucosa is inflamed proximal to 12 cm. Etiologic organisms include Campylobacter jejuni, Shigella spp., amebiasis, and, rarely, T. pallidum or C. trachomatis (often LGV serovars). CMV may be involved in patients coinfected with HIV.

Enteritis in homosexual men usually results in diarrhea without signs of proctitis or proctocolitis. In otherwise healthy patients, Giardia lamblia is most commonly implicated. In patients also coinfected with HIV, CMV, Mycobacterium avium-intracellulare, Salmonella spp., Cryptosporidium, and Isospora must be considered. Special stool preparations are required to diagnose giardiasis or cryptosporidiosis. Additionally, some cases of enteritis may be a primary effect of HIV infection.

All patients with sexually transmitted enteric infections should be counseled and tested for HIV infection. Treatment recommendations for all enteric infections are beyond the scope of these guidelines. However, acute proctitis of recent onset in an individual who has recently practiced unprotected receptive anal intercourse is most often sexually transmitted. Such patients should be examined by anoscopy and should be evaluated for infection with N. gonorrhoeae, C. trachomatis, HSV, and T. pallidum. Treatment can be based on a specific etiologic diagnosis or can be empiric.

# Empiric Treatment for Sexually Transmitted Proctitis

# Recommended Regimen

Ceftriaxone 250 mg IM plus doxycycline 100 mg orally 2 times a day for 7 days provides adequate treatment for gonorrhea and chlamydial infection.

# Vaginal Diseases

# Trichomoniasis

# Recommended Regimen

Metronidazole 2 g orally in a single dose.

# Alternative Regimen

Metronidazole 500 mg twice daily for 7 days.

If failure occurs with either regimen, the patient should be retreated with metronidazole 500 mg twice daily for 7 days. If repeated failure occurs, the patient should be treated with a single 2-g dose of metronidazole daily for 3-5 days. Cases of additional culture-documented treatment failure in which reinfection has been excluded should be managed in consultation with an expert. Evaluation of such cases should include determination of the susceptibility of Trichomonas vaginalis to metronidazole.

# Treatment of Sex Partners

Sex partners should be treated with either the single-dose or the 7-day metronidazole regimen.

# Trichomoniasis During Pregnancy

Metronidazole is contraindicated in the first trimester of pregnancy, and its safety in the rest of pregnancy is not established. However, no other adequate therapy exists. For patients with severe symptoms after the first trimester, treatment with 2 g of metronidazole in a single dose may be considered.

# Vulvovaginal Candidiasis

Generally not considered a sexually transmitted disease, vulvovaginal candidiasis is frequently diagnosed in women presenting with symptoms involving the genitalia. Treatment with antibiotics predisposes women to the development of vulvovaginal candidiasis.
# Treatment

Many treatment regimens are effective, but 3- and 7-day regimens are superior to single-dose therapy.

# Recommended Regimen

Examples of effective regimens include the following: Miconazole nitrate (200-mg vaginal suppository), intravaginally at bedtime for 3 days or Clotrimazole (200-mg vaginal tablets), intravaginally at bedtime for 3 days or Butoconazole (2% cream, 5 g), intravaginally at bedtime for 3 days or Terconazole (80-mg suppository or 0.4% cream), intravaginally at bedtime for 3 days.

# Hepatitis B Vaccination

# Recommendation for Hepatitis B Vaccination

Hepatitis B vaccine (several FDA-approved recombinant or plasma-derived preparations are available) in dosages as recommended by the manufacturer. The vaccination series requires an initial visit and two follow-up visits. Vaccine should not be administered in the gluteal (buttocks) or quadriceps (thigh) muscle. After vaccination, testing for antibody response is not routinely indicated unless the patient is infected with HIV.

# Post-Exposure Prophylaxis

Prophylactic treatment with hepatitis B immune globulin should be considered in the following situations: sexual contact with a patient who has active hepatitis B or who contracts hepatitis B; sexual contact with a hepatitis B carrier (blood test positive for hepatitis B surface antigen). Prophylactic treatment for sexual exposure should be given within 14 days of sexual contact. Assay for preexisting immunity to HBV may be cost-effective if the individuals concerned are from populations at high risk for HBV.

# Recommendation for Post-Exposure Prophylaxis

Hepatitis B immune globulin (HBIG), 0.06 mL/kg IM in a single dose, followed by initiation of the hepatitis B vaccine series as described above.

# Perinatal Infections

Pregnant women with HBV infection can transmit hepatitis B to their infants at delivery. Infants infected at birth are at high risk for contracting chronic hepatitis B infection. Such infection can be prevented by administering HBIG and hepatitis B vaccine to the infant. Therefore, all pregnant women should be screened during their first obstetrical visit for the presence of HBsAg. If they are found to be HBsAg-positive, their newborns should be given HBIG as soon as possible after birth and subsequently should be immunized with hepatitis B vaccine.

Hepatitis guidelines are updated periodically by the Immunization Practices Advisory Committee and are published in the MMWR. Reference to the most current recommendations is advised. Persons at risk for sexual transmission of hepatitis B are also at risk for HIV and other STD. HIV coinfection reduces the humoral response to HB vaccine.

# Alternative Regimen (Scabies)

Crotamiton (10%) applied to the entire body from the neck down for 2 nights and washed off thoroughly 24 hours after the second application.

# Sexual Assault and STD

Recommendations are limited to the identification and treatment of sexually transmitted infections. Matters concerning the sensitive management of potential pregnancy and of physical and psychological trauma are important and should be addressed, but are beyond the scope of these guidelines. Victims of sexual assault are evaluated both to provide necessary medical services and to identify and collect forensic evidence. Although some information may be useful for the medical management of a victim, this information may not be admissible in court.
Some STD, such as gonorrhea and syphilis, are almost exclusively transmitted sexually and may be useful markers of sexual assault. BV is commonly found after assault but is highly prevalent in most populations, which limits its usefulness as a marker of assault. Nonculture tests have lower sensitivity and specificity than culture techniques.

# Sexual Assault
Any sexually transmissible agent, including HIV, may be transmitted during an assault. Few data exist on which to establish the risk of an assaulted person's acquiring an STD. The risk of acquiring gonococcal and/or chlamydial infections appears to be highest. Inferences about STD risk may be based on the known prevalences of these diseases in the community. If the suspected assailant is identified, that individual should be evaluated for STD to the extent possible under the law. The presence of STD within 24 hours of the assault may represent prior infection rather than assault-acquired disease. Furthermore, some syndromes, such as BV, may be nonsexually transmitted.

# Treatment
Treatment should be given for any infection identified on examination or for any infection identified in the assailant. Although the risk of infection is frequently low, the use of presumptive treatment is controversial. Some experts recommend presumptive treatment for all victims of sexual assault, whereas others reserve presumptive treatment for special circumstances, for example, when follow-up examination of the victim cannot be ensured or when treatment is specifically requested by the patient. Although no regimen covers all potential pathogens, the following regimens should be effective against gonorrhea, chlamydia, and, most likely, syphilis.

# Empiric Regimen for Victims of Sexual Assault
Ceftriaxone 250 mg, given IM, to be followed by either
Doxycycline 100 mg orally 2 times a day for 7 days
or Tetracycline HCl 500 mg orally 4 times a day for 7 days.

# Sexual Assault and Abuse of Children
The identification of a sexually transmissible agent in a child beyond the neonatal period suggests sexual abuse. However, exceptions do exist; e.g., rectal and genital infection with C. trachomatis in young children may be due to perinatally acquired infection, which may persist for up to 3 years. In addition, BV and genital mycoplasmas have been identified in both abused and nonabused children. A finding of genital warts, although suggestive of assault, is nonspecific without other evidence of sexual abuse. When the only evidence of sexual abuse is the isolation of an organism or the detection of antibodies, findings should be carefully confirmed.

# Evaluation
Among sexually abused children, the prevalence of STD appears relatively low; in most studies, C. trachomatis is the most frequently isolated organism. Sexually abused children are best managed by a team of professionals experienced in dealing with the many needs of children. Although testing for the diseases appropriate for children (Table 5) is essentially the same as that for adults, the special needs of children must be taken into account.

Recommended evaluation of suspected child abuse/assault:
- Since a child's report of assault may not be complete, specimens for culture for N. gonorrhoeae and C. trachomatis should be collected from the pharynx and rectum as well as from the vagina (girls) or urethra (boys).
- Internal pelvic examinations usually should not be performed unless indicated by the presence of a foreign body or by trauma.
- Follow-up visits should be scheduled so as to minimize trauma to the child; for asymptomatic children, an initial visit and one visit at 8-12 weeks may be sufficient.
- In cases of continuing abuse, the alleged offender may be available for medical evaluation. In such instances, care for the child may be modified when the results of the evaluation of the offender are known.

# Treatment
Treatment before diagnosis is not indicated unless evidence shows that the assailant is infected. Presumptive treatment after assault may be given if the victim or the victim's family requests it or if follow-up examination of the victim cannot be ensured.
Ceftriaxone 1 g, IM or IV, every 24 hours
or Ceftizoxime 1 g, IV, every 8 hours
or Cefotaxime 1 g, IV, every 8 hours.
Patients who are allergic to β-lactam drugs should be treated with spectinomycin 2 g IM every 12 hours.
*Erythromycin stearate 500 mg or erythromycin ethylsuccinate 800 mg or equivalent may be substituted for erythromycin base.

# Introduction
# User's Guide
STD and their therapies have been categorized by etiologic agent, if possible. Some syndromes overlap; for example, urethritis may be caused by Neisseria gonorrhoeae or Chlamydia trachomatis. Table 1 correlates symptoms and syndromes with etiologic organisms and directs the user to the appropriate section in this document.

# Clinician Guidelines and Public Health Considerations
Control of STD is based on four major concepts: 1) education of persons at risk on the modes of disease transmission and the means for reducing the risk of transmission; 2) detection of infection in asymptomatic persons and in persons who are symptomatic but unlikely to seek diagnostic and treatment services; 3) effective diagnosis and treatment of persons who are infected; and 4) evaluation, treatment, and counseling of sex partners of persons with an STD. Although this document deals largely with clinical aspects of STD control, the prevention of STD is based primarily on changing the sexual behaviors that put patients at risk.

In special situations, such as prenatal visits and legally induced abortions, screening for STD may have greater impact in preventing complications of STD. For specific recommendations in cases of sexual assault or child abuse, see "Sexual Assault and STD." Specific guidelines for screening in each situation are beyond the scope of this document. However, whenever possible, the following laboratory screening tests for STD should be available:

*Persons at higher risk for STD include sexually active persons under 25 years of age, those who have had multiple sexual partners within the previous 6 months, and those with a history of STD. In addition, prostitutes and persons having sexual contact with prostitutes, users of illicit drugs, and inmates of detention centers have increased rates of STD and should be evaluated when seeking medical care.

Diagnosis of an STD should be considered a "sentinel event" reflecting unprotected sexual activity. Patients with one STD are at high risk for having others. Therefore, patients should be closely evaluated for other STD, including syphilis and human immunodeficiency virus (HIV) serology (if not performed within the previous 3 months), gonorrhea and chlamydial testing from appropriate anatomic sites, and physical examination. Women wishing to prevent unplanned pregnancy who are not using contraception should be counseled about contraception services; ideally, contraceptives and pregnancy testing should be available at the same facility providing STD services. Annual Pap smear evaluation should also be available.

# Clinical Considerations
# Primary Prevention
Clinics and practitioners who treat patients with STD should have resources available for educating patients about risk assessment and behavioral choices. Behavioral assessment is an integral part of the STD history, and patients should be counseled on methods to lower their risk of acquiring STD, including abstinence, careful selection of partners, use of condoms and spermicides, and periodic examination.
Specific recommendations for behavioral assessment and counseling are beyond the scope of these guidelines.

# Condoms and Spermicides
Condoms and spermicides should be available in any facility providing clinical STD services. Instruction in proper use should also be provided. Although condoms do not provide absolute protection from any infection, if properly used, they reduce the risk of infection. Recommendations for the proper use of condoms (Table 2) have been made by CDC and other public health organizations.

# Special Populations
# Pregnant Women
Intrauterine or perinatally transmitted STD can have fatal or severely debilitating effects on the fetus. Routine prenatal care should include an assessment for STD, which in most cases includes serologic screening for syphilis and hepatitis B, testing for chlamydia, and gonorrhea culture (see specific sections for management of clinical disease). Prenatal screening for HIV is indicated for all patients with risk factors for HIV or with a high-risk sexual partner; some authorities recommend HIV screening of all pregnant women.

Practical management issues are discussed in the sections pertaining to specific diseases. Pregnant women and their sexual partners should be questioned about STD and counseled about possible neonatal infections. Pregnant women with primary genital herpes infection, hepatitis B, primary cytomegalovirus (CMV) infection, or group B streptococcal infection may need to be referred to an expert for management. In the absence of lesions or other evidence of active disease, tests for herpes simplex virus (HSV) and cesarean delivery are not routinely indicated for women with a history of recurrent genital herpes infection during pregnancy. Routine HPV screening is not recommended. For a fuller discussion of these issues, as well as of infections not transmitted sexually, refer to "Guidelines for Perinatal Care" (second edition, 1988), jointly written and published by the American Academy of Pediatrics and the American College of Obstetricians and Gynecologists.

# Children
Management of STD in children requires close cooperation between the clinician, laboratory, and child-protection authorities. Investigations, when indicated, should be initiated promptly. Some diseases, such as gonorrhea, syphilis, and chlamydia, if acquired after the neonatal period, are almost 100% indicative of sexual contact; for other diseases, such as HPV infection and vaginitis, the association with sexual contact is not as clear (see "Sexual Assault and STD").

# TABLE 2. Recommendations for use of condoms
1. Latex condoms should be used because they may offer greater protection against HIV and other viral STD than natural membrane condoms.
2. Condoms should be stored in a cool, dry place out of direct sunlight.
3. Condoms in damaged packages or those that show obvious signs of age (e.g., those that are brittle, sticky, or discolored) should not be used. They cannot be relied upon to prevent infection or pregnancy.
4. Condoms should be handled with care to prevent puncture.
5. The condom should be put on before any genital contact to prevent exposure to fluids that may contain infectious agents. Hold the tip of the condom and unroll it onto the erect penis, leaving space at the tip to collect semen, yet ensuring that no air is trapped in the tip of the condom.
6. Only water-based lubricants should be used.
Petroleum- or oil-based lubricants (such as petroleum jelly, cooking oils, shortening, and lotions) should not be used because they weaken the latex and may cause breakage.
7. Use of condoms containing spermicides may provide some additional protection against STD. However, vaginal use of spermicides along with condoms is likely to provide still greater protection.
8. If a condom breaks, it should be replaced immediately. If ejaculation occurs after condom breakage, the immediate use of spermicide has been suggested. However, the protective value of postejaculation application of spermicide in reducing the risk of STD transmission is unknown.
9. After ejaculation, care should be taken so that the condom does not slip off the penis before withdrawal; the base of the condom should be held throughout withdrawal. The penis should be withdrawn while still erect.
10. Condoms should never be reused.
Adapted from MMWR 1988;37:133-7.

# Patients with Multiple Episodes of STD ("Repeaters")
These patients have a disproportionately high rate of STD and should be targeted for intensive counseling on methods to reduce risk. More research is needed into methods of behavior modification for these patients, including the role of outreach and support services. In many cases, periodic call-back for STD evaluation may be indicated.

# STD Core Groups
Populations of core STD transmitters account for most STD morbidity. Although substantial regional variation occurs, in most urban areas the core groups consist largely of ethnic minority populations with low levels of education and socioeconomic attainment. In many such environments, illicit drug use and prostitution are common. Core groups are often geographically limited, permitting definition of core geographic areas as well as populations. STD programs should evaluate the occurrence of STD in their jurisdictions to define core populations and core areas for targeted education, screening, clinical outreach, call-back reexamination programs, and other control measures.

# Illicit Drug Users
STD appear to be increasingly linked to illicit drug use. Illicit drug users may be at higher risk for sexual behaviors that put them at risk for STD. In addition, illicit drug users account for an increasing proportion of HIV infections. Further research is needed into the behaviors associated with drug use and STD, particularly to facilitate behavioral and clinical interventions targeted at drug users. Outreach programs in the community and in cooperation with drug treatment programs should be considered.

# Prison and Detention Populations
Residents of short-term correctional and detention facilities often have high prevalence rates of STD. Screening and treatment for infections that are highly prevalent in the community should be provided for all inmates. In many situations, screening and treatment in the prison population is central to effective STD control. In addition, for many patients, correctional health services may be the only opportunity for interaction with health-care providers.

# Patients with HIV Infection and STD
The management of patients with STD who are coinfected with HIV presents complex clinical and behavioral issues. Because of its effect on the immune system, HIV may alter the natural histories of many STD, as well as the effect of antimicrobial therapy. Close clinical follow-up is imperative. STD infection in patients with and without HIV is a sentinel event, often indicating continued unprotected sexual activity.
Further patient counseling is indicated in these situations.

# STD Reporting and Confidentiality
Disease surveillance activities, which include the accurate identification and timely reporting of STD, form an integral part of successful disease control. Reporting assists local health authorities in identifying sexual contacts who may be infected (see next section). Reporting is also important for assessing morbidity trends. Reporting may be provider- and/or laboratory-based. Cases should be reported in accordance with local statutory requirements and in as timely a manner as possible. Clinicians who are unsure of local reporting requirements are encouraged to seek advice through their local health departments or state STD programs. STD reports are held in strictest confidence and in many jurisdictions are protected by statute from subpoena. Before any follow-up of a positive STD test is conducted by STD program representatives, these personnel consult with the provider to verify the diagnosis and treatment. Most local health departments offer STD partner notification and follow-up services for selected STD.

# Management of Sex Partners and Partner Notification
Clinical guidelines for management of sex partners are included in each disease section. Breaking the chain of transmission is crucial to STD control. Further transmission and reinfection are prevented by referral of sex partners for diagnosis and treatment. Patients should ensure that their sex partners, including those without symptoms, are referred for evaluation. Partners of patients with STD should be examined; treatment should not be provided for partners who are not examined, except in rare instances, such as when the partner is at a site remote from medical care. Appropriate referral for sex partners should be provided if care will not or cannot be provided by the initial health-care provider. Disease Intervention Specialists (DIS), i.e., public health professionals trained in STD management, can assist patients and practitioners in this process through interviewing and confidential field outreach procedures.

# AIDS and HIV Infection in the General STD Setting
The acquired immunodeficiency syndrome (AIDS) is a late manifestation of infection with human immunodeficiency virus (HIV). Most people infected with HIV remain asymptomatic for long periods. HIV infection is most often diagnosed by using HIV antibody tests. Detectable antibody usually develops within 3 months after infection. A confirmed positive antibody test means that a person is infected with HIV and is capable of transmitting the virus to others. Although a negative antibody test usually means a person is not infected, antibody tests cannot rule out infection from a recent exposure. If antibody testing is related to a specific exposure, the test should be repeated 3 and 6 months after the exposure.

Antibody testing for HIV begins with a screening test, usually an enzyme-linked immunosorbent assay (ELISA). If the screening test is positive, it is followed by a more specific confirmatory test, most commonly the Western blot assay. New antibody tests are being developed and licensed that are either easier to perform or more accurate. Positive results from screening tests must be confirmed before being considered definitive. (A short sketch of this two-step logic follows below.)

The time between infection with HIV and the development of AIDS ranges from a few months to ≥10 years. Most people who are infected with HIV will eventually have some symptoms related to that infection.
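The two-step antibody-testing sequence described above can be restated as a small decision procedure. The following minimal Python sketch is illustrative only; the function name and boolean parameters are assumptions, not part of these guidelines. It encodes the reporting rules (a positive screening test is never definitive until confirmed; a negative test after a recent exposure calls for repeat testing), not laboratory procedure.

```python
def interpret_hiv_testing(elisa_positive, western_blot_positive=None,
                          recent_exposure=False):
    """Illustrative sketch of the two-step testing logic described above.

    elisa_positive: result of the screening ELISA.
    western_blot_positive: result of the confirmatory Western blot
        (meaningful only when the screening test was positive).
    recent_exposure: testing prompted by a specific recent exposure.
    """
    if elisa_positive:
        if western_blot_positive is None:
            return "Screening positive: confirm with Western blot before reporting."
        if western_blot_positive:
            return "Confirmed positive: the person is infected and can transmit HIV."
        return "Screening positive, confirmation negative: not a definitive positive."
    # A negative antibody test cannot rule out infection from a recent exposure.
    if recent_exposure:
        return "Negative now: repeat testing 3 and 6 months after the exposure."
    return "Negative: usually means the person is not infected."

print(interpret_hiv_testing(True, True))
print(interpret_hiv_testing(False, recent_exposure=True))
```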
In one cohort study, AIDS developed in 48% of a group of gay men ≥10 years after infection, but additional AIDS cases are expected among those who have remained AIDS-free for >10 years. Therapy with zidovudine (ZDV, previously known as azidothymidine) has been shown to benefit persons in the later stages of disease (AIDS or AIDS-related conditions along with a CD4 [T4] lymphocyte count less than 200/mm³). Serious side effects, usually anemias and cytopenias, have been common during therapy with ZDV; therefore, patients taking ZDV require careful follow-up in consultation with physicians who are familiar with ZDV therapy. Clinical trials are currently evaluating ZDV therapy for persons with asymptomatic HIV infection to see whether it decreases the rate of progression to AIDS. Other trials are evaluating new drugs or combinations of drugs for persons with different stages of HIV infection, including asymptomatic infections. The complete therapeutic management of HIV infection is beyond the scope of this document.

# Preventing the Sexual Transmission of HIV
The only way to prevent AIDS is to prevent the initial infection with HIV. Prevention of sexual transmission of HIV can be ensured in only two situations: 1) sexual abstinence or 2) choosing only sex partners who are not infected with HIV. Many HIV-infected persons are asymptomatic and are unaware that they are infected; therefore, without an antibody test, infected persons are difficult to identify. AIDS case surveillance and HIV seroprevalence studies allow estimation of risk for persons in different areas; however, these population estimates may have a limited impact on an individual's sexual decisions. Although knowledge of antibody status is desirable before a sexual relationship is initiated, this information may not be available. Therefore, individuals should be counseled that when they initiate a sexual relationship they should use sexual practices that reduce the risk of HIV transmission.

Sexual practices may influence the likelihood of HIV transmission during sexual contact with an infected partner. Women who practice anal intercourse with an infected partner are more likely to acquire infection than women who have only vaginal intercourse. The relative risk of transmission by oral-genital contact is probably somewhat lower than the risk of transmission by vaginal intercourse. Other STD or local trauma that breaks down the mucosal barrier to infection would be expected to increase the risk of HIV transmission. Condoms supplement natural barriers to infection and therefore reduce the risk of HIV transmission (see "Clinician Guidelines and Public Health Considerations").

# When to Test for HIV
Voluntary, confidential HIV antibody testing should be done routinely when the results may contribute either to the medical management of the person being tested or to the prevention of further transmission. Testing is important for persons with symptoms of HIV-related illnesses or with diseases such as syphilis, chancroid, herpes, or tuberculosis, for which a positive test result might affect the recommended diagnostic evaluation, treatment, or follow-up. HIV counseling and testing for persons with STD is a particularly important part of an HIV prevention program, because patients who have acquired an STD have demonstrated their potential risk for acquiring HIV. Because no vaccine or cure is available, HIV prevention requires changes in behavior by people at risk for transmitting or acquiring infection.
Therefore, patient counseling must be an integral part of any HIV testing program in an STD clinic. Counseling should be done both before and after HIV testing.

# Pretest Counseling
Pretest counseling should include assessment of the patient's risk for HIV infection and measures to reduce that risk. Intravenous (IV) drug users should be advised to stop using drugs. If they do not stop, they should not share needles. If needle-sharing continues, injection equipment should be cleaned with bleach between uses. Sexually active persons who have multiple partners should be advised to consider sexual abstinence or to enter a mutually monogamous relationship with a partner who has also been tested for HIV. Condoms should be used consistently if either or both partners are infected or have other partners. Similarly, heterosexuals with STD other than HIV should be encouraged to bring their partners in for HIV testing and to use condoms if they are not in a mutually monogamous relationship with an uninfected partner.

# Posttest Counseling and Evaluation
Persons who have negative HIV antibody tests should be told their test result by a person who understands the need to reduce unsafe sexual behaviors and can explain ways to modify sexual practices to reduce risks. Antibody tests cannot detect infections that occurred in the several weeks before the test (see above). Persons who have negative tests should understand that a negative test result does not signify protection from acquiring infection. They should be advised about the ways the virus is transmitted and how to avoid infection. Their partners' risks for HIV infection should be discussed, and partners at risk should be encouraged to be tested for HIV.

Persons who test positive for HIV antibody should be told their test result by a person who is able to discuss the medical, psychological, and social implications of HIV infection. Routes of HIV transmission and methods to prevent further transmission should be emphasized. Risks to past sexual and needle-sharing partners of HIV antibody-positive patients should be discussed, and patients should be instructed in how to notify their partners and refer them for counseling and testing. If patients are unable to notify their partners, or if it is not certain that their partners will seek counseling, physicians or health department personnel should assist, using confidential procedures, to ensure that the partners are notified. Infected women should be advised of the risk of perinatal transmission (see below), and methods of contraception should be discussed and provided. Additional follow-up, counseling, and support systems should be available to facilitate psychosocial adjustment and changes in behavior among HIV antibody-positive persons.

# Perinatal Infections
Infants born to women with HIV infection may also be infected with HIV; this risk is estimated to be 30%-40%. The mother in such a case may be asymptomatic, and her HIV infection may not be recognized at delivery. Infected neonates are usually asymptomatic, and currently HIV infection cannot be readily or easily diagnosed at birth. (A positive antibody test may reflect passively transferred maternal antibodies, and the infant must be observed over time to determine whether neonatal infection is present.) Infection may not become evident until the child is 12-18 months of age. All pregnant women with a history of STD should be offered HIV counseling and testing.
Recognition of HIV infection in pregnancy permits health-care workers to inform patients about the risks of transmission to the infant and the risks of continuing the pregnancy.

# Asymptomatic HIV Infections
As more HIV-infected persons are identified, primary health-care providers will need to assume increased responsibility for these patients. Most internists, pediatricians, family practitioners, and gynecologists should be qualified to provide initial evaluation of HIV-infected individuals and follow-up of those with uncomplicated HIV infection. These services should be available in all public health clinics. Health-care professionals who identify HIV-positive patients should provide posttest counseling and medical evaluation (either on site or by referral), including a physical examination, complete blood count, lymphocyte subset analysis, syphilis serology, and a purified protein derivative (PPD) skin test for tuberculosis. Psychosocial counseling resources should also be available.

# Chancroid
Because of the recent spread of H. ducreyi, chancroid has become an important STD in the United States. Its importance is enhanced by the knowledge that, outside the United States, chancroid has been associated with increased infection rates for HIV. Chancroid must be considered in the differential diagnosis of any patient with a painful genital ulcer. Painful inguinal lymphadenopathy is present in about half of all chancroid cases.

# Recommended Regimen
Erythromycin base 500 mg orally 4 times a day for 7 days
or Ceftriaxone 250 mg intramuscularly (IM) in a single dose.

# Alternative Regimen
Trimethoprim/sulfamethoxazole 160/800 mg (one double-strength tablet) orally 2 times a day for 7 days.
Comment: The susceptibility of H. ducreyi to this combination of antimicrobial agents varies throughout the world; clinical efficacy should be monitored, preferably in conjunction with monitoring of susceptibility patterns.
or Amoxicillin 500 mg plus clavulanic acid 125 mg orally 3 times a day for 7 days.
Comment: Not evaluated in the United States.
or Ciprofloxacin 500 mg orally 2 times a day for 3 days.
Comment: Although a regimen of 500 mg orally once was effective outside the United States, based on pharmacokinetic and susceptibility data, 2- or 3-day regimens of the same dose may be prudent, especially for patients coinfected with HIV. Quinolones, such as ciprofloxacin, are contraindicated during pregnancy and in children 16 years of age or younger.

# Management of Sex Partners
Sex partners who had contact with the patient within the 10 days preceding the onset of the patient's symptoms, whether symptomatic or not, should be examined and treated with a recommended regimen.

# Follow-Up
If treatment is successful, ulcers due to chancroid improve symptomatically within 3 days and objectively (evidenced by resolution of lesions and clearing of exudate) within 7 days after institution of therapy. Clinical resolution of lymphadenopathy is slower than that of ulcers and may require needle aspiration (through healthy, adjacent skin), even during otherwise successful therapy. Patients should be observed until the ulcer is completely healed. Because of the epidemiologic association with syphilis, serologic testing for syphilis should be considered within 3 months after therapy.

# Treatment Failures
If no clinical improvement is evident by 7 days after therapy, the clinician should consider whether 1) antimicrobials were taken as prescribed, 2) the H. ducreyi causing the infection is resistant to the prescribed antimicrobial, 3) the diagnosis is correct, 4) coinfection with another STD agent exists, or 5) the patient is also infected with HIV.
Preliminary information indicates that patients coinfected with HIV do not respond to antimicrobial therapy as well as patients not infected with HIV, especially when single-dose treatment is used. Antimicrobial susceptibility testing should be performed on H. ducreyi isolated from patients who do not respond to recommended therapies.

# Syphilis
# General Principles
# Serologic Tests
Neurosyphilis cannot be accurately diagnosed from any single test. Cerebrospinal fluid (CSF) tests should include cell count, protein, and VDRL (not RPR). The CSF leukocyte count is usually elevated (>5 WBC/mm³) when neurosyphilis is present and is a sensitive measure of the efficacy of therapy. The VDRL is the standard test for CSF; when positive, it is considered diagnostic of neurosyphilis. However, it may be negative when neurosyphilis is present and cannot be used to rule out neurosyphilis. Some experts also order an FTA-ABS; this test may be less specific (more false positives) but is highly sensitive. The positive predictive value of the CSF FTA-ABS is lower, but when negative, this test provides evidence against neurosyphilis.

# Penicillin Therapy
Penicillin is the preferred drug for treating patients with syphilis. Penicillin is the only proven therapy that has been widely used for patients with neurosyphilis, congenital syphilis, or syphilis during pregnancy. For patients with penicillin allergy, skin testing, with desensitization if necessary, is optimal. Sample guidelines for skin testing and desensitization are included (see Appendix to this section). However, the minor determinant mixture for penicillin is not currently available commercially.

# Jarisch-Herxheimer Reaction
The Jarisch-Herxheimer reaction is an acute febrile reaction, often accompanied by headache, myalgia, and other symptoms, that may occur after any therapy for syphilis, and patients should be warned of this possibility. Jarisch-Herxheimer reactions are more common in patients with early syphilis. Antipyretics may be recommended, but no proven methods exist for preventing the reaction. Pregnant patients, in particular, should be warned that early labor may occur.

# Persons Exposed to Syphilis (Epidemiologic Treatment)
Persons sexually exposed to a patient with early syphilis should be evaluated clinically and serologically. If the exposure occurred within the previous 90 days, the person may be infected yet seronegative and therefore should be treated presumptively. (It may be advisable to presumptively treat persons exposed more than 90 days previously if serologic test results are not immediately available and follow-up is uncertain.) Patients who have other STD may also have been exposed to syphilis and should have a serologic test for syphilis. The dual-therapy regimen currently recommended for gonorrhea (ceftriaxone and doxycycline) is probably effective against incubating syphilis. If a different, nonpenicillin antibiotic regimen is used to treat gonorrhea, the patient should have a repeat serologic test for syphilis in 3 months.
• If compliance and follow-up are ensured, erythromycin, 500 mg orally 4 times a day for 2 weeks, can be used.
# Early Syphilis
# Primary and Secondary Syphilis and Early Latent Syphilis of Less than 1 Year's Duration
# Recommended Regimen
Benzathine penicillin G, 2.4 million units IM in a single dose.
• Patients who are allergic to penicillin may also be allergic to cephalosporins; therefore, caution must be used in treating a penicillin-allergic patient with a cephalosporin. However, preliminary data suggest that ceftriaxone, 250 mg IM once a day for 10 days, is curative, but careful follow-up is mandatory.

# Follow-Up
Treatment failures can occur with any regimen. Patients should be reexamined clinically and serologically at 3 months and 6 months. If nontreponemal antibody titers have not declined fourfold by 3 months with primary or secondary syphilis, or by 6 months with early latent syphilis, or if signs or symptoms persist and reinfection has been ruled out, patients should have a CSF examination and be retreated appropriately. HIV-infected patients should have more frequent follow-up, including serologic testing at 1, 2, 3, 6, 9, and 12 months. In addition to the above guidelines for 3 and 6 months, any patient with a fourfold increase in titer at any time should have a CSF examination and be treated with the neurosyphilis regimen unless reinfection can be established as the cause of the increased titer.

# Lumbar Puncture in Early Syphilis
CSF abnormalities are common in adults with early syphilis. Despite the frequency of these CSF findings, very few patients develop neurosyphilis when the treatment regimens described above are used. Therefore, unless clinical signs and symptoms of neurologic involvement exist, such as optic, auditory, cranial nerve, or meningeal symptoms, lumbar puncture is not recommended for the routine evaluation of early syphilis. This recommendation also applies to immunocompromised and HIV-infected patients, since no clear data currently show that these patients need more intensive therapy.

# HIV Testing
All syphilis patients should be counseled concerning the risks of HIV and be encouraged to be tested for HIV.

# Late Latent Syphilis of More Than 1 Year's Duration, Gummas, and Cardiovascular Syphilis
All patients should have a thorough clinical examination. Ideally, all patients with syphilis of more than 1 year's duration should have a CSF examination; however, performance of lumbar puncture can be individualized. In older asymptomatic individuals, the yield of lumbar puncture is likely to be low; however, CSF examination is clearly indicated in specific situations, such as when neurologic signs or symptoms are present or when nonpenicillin therapy is planned.

# Recommended Regimen
Benzathine penicillin G, 7.2 million units total, administered as 3 doses of 2.4 million units IM, given 1 week apart for 3 consecutive weeks.

# Alternative Regimen for Penicillin-Allergic Patients (Nonpregnant)
Doxycycline, 100 mg orally 2 times a day for 4 weeks
or Tetracycline, 500 mg orally 4 times a day for 4 weeks.
If patients are allergic to penicillin, alternative drugs should be used only after CSF examination has excluded neurosyphilis. Penicillin allergy is best determined by careful history taking, but skin testing may be used if the major and minor determinants are available (see Appendix).

# Follow-Up
Quantitative nontreponemal serologic tests should be repeated at 6 months and 12 months. If titers increase fourfold, if an initially high titer (≥1:32) fails to decrease, or if the patient has signs or symptoms attributable to syphilis, the patient should be evaluated for neurosyphilis and retreated appropriately.
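Because the follow-up criteria above turn on fourfold changes in nontreponemal titers, the arithmetic is worth making explicit. In the minimal Python sketch below, titers are expressed as reciprocal dilutions (32 for a 1:32 titer), so a fourfold change corresponds to two doubling dilutions. The function names are illustrative assumptions; the sketch encodes only the arithmetic, not clinical judgment.

```python
def adequate_titer_decline(initial_titer, followup_titer):
    """Per the follow-up criteria above, nontreponemal titers should
    decline at least fourfold (e.g., 1:32 -> 1:8, two doubling
    dilutions) by the stated interval. Titers are reciprocal dilutions."""
    return initial_titer / followup_titer >= 4

def fourfold_rise(initial_titer, followup_titer):
    """A fourfold increase at any time (e.g., 1:4 -> 1:16) should prompt
    CSF examination and neurosyphilis-regimen retreatment unless
    reinfection is established as the cause."""
    return followup_titer / initial_titer >= 4

print(adequate_titer_decline(32, 8))   # True: 1:32 falling to 1:8
print(adequate_titer_decline(32, 16))  # False: only a twofold decline
print(fourfold_rise(4, 16))            # True: 1:4 rising to 1:16
```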
# HIV Testing
All syphilis patients should be counseled concerning the risks of HIV and be encouraged to be tested for HIV antibody.

# Neurosyphilis
Central nervous system disease may occur during any stage of syphilis. Clinical evidence of neurologic involvement (e.g., optic and auditory symptoms, cranial nerve palsies) warrants CSF examination.

# Recommended Regimen
Aqueous crystalline penicillin G, 12-24 million units per day, administered as 2-4 million units every 4 hours IV, for 10-14 days.

# Alternative Regimen (If Outpatient Compliance Can Be Ensured)
Procaine penicillin, 2-4 million units IM daily,
and Probenecid, 500 mg orally 4 times a day, both for 10-14 days.
Many authorities recommend the addition of benzathine penicillin G, 2.4 million units IM weekly for three doses, after completion of these neurosyphilis treatment regimens. No systematically collected data have evaluated therapeutic alternatives to penicillin. Patients who cannot tolerate penicillin should be skin tested and desensitized, if necessary, or managed in consultation with an expert.

# Follow-Up
If an initial CSF pleocytosis was present, CSF examination should be repeated every 6 months until the cell count is normal. If the count has not decreased at 6 months, or is not normal by 2 years, retreatment should be strongly considered.

# HIV Testing
All syphilis patients should be counseled concerning the risks of HIV and be encouraged to be tested for HIV antibody.

# Syphilis in Pregnancy
# Screening
In settings in which prenatal care utilization is not optimal, patients should be screened and, if necessary, treated at the time pregnancy is detected. In areas of high syphilis prevalence, or in patients at high risk, screening should be repeated in the third trimester and again at delivery.

# Treatment
Patients should be treated with the penicillin regimen appropriate for the woman's stage of syphilis. Tetracycline and doxycycline are contraindicated in pregnancy. Erythromycin should not be used because of the high risk of failure to cure infection in the fetus. Pregnant women with histories of penicillin allergy should first be carefully questioned regarding the validity of the history. If necessary, they should then be skin tested and either treated with penicillin or referred for desensitization (see Appendix). Women who are treated in the second half of pregnancy are at risk for premature labor and/or fetal distress if their treatment precipitates a Jarisch-Herxheimer reaction. They should be advised to seek medical attention after treatment if they notice any change in fetal movements or have any contractions. Stillbirth is a rare complication of treatment; however, since therapy is necessary to prevent further fetal damage, this concern should not delay treatment.

# Follow-Up
Monthly follow-up is mandatory so that retreatment can be given if needed. The antibody response should be the same as for nonpregnant patients.

# HIV Testing
All syphilis patients should be counseled concerning the risks of HIV and be encouraged to be tested for HIV antibody.

# Congenital Syphilis
# Who Should Be Evaluated
Infants should be evaluated if they were born to seropositive (nontreponemal test confirmed by treponemal test) women who:
An infant should not be released from the hospital until the serologic status of its mother is known.
# Evaluation of the Infant
The clinical and laboratory evaluation of infants born to the women described above should include:

# Therapy Decisions
Infants should be treated if they have:
• Any evidence of active disease (physical examination or x-ray); or
Even if the evaluation is normal, infants should be treated if their mothers have untreated syphilis or evidence of relapse or reinfection after treatment. Infants who meet the criteria listed in "Who Should Be Evaluated" but are not fully evaluated should be assumed to be infected, and treated.

# Treatment
Treatment should consist of 100,000-150,000 units/kg of aqueous crystalline penicillin G daily (administered as 50,000 units/kg IV every 8-12 hours) or 50,000 units/kg of procaine penicillin daily (administered once IM) for 10-14 days. If more than 1 day of therapy is missed, the entire course should be restarted. All symptomatic neonates should also have an ophthalmologic examination.

Infants who meet the criteria listed in "Who Should Be Evaluated" but who, after evaluation, do not meet the criteria listed in "Therapy Decisions" are at low risk for congenital syphilis. If their mothers were treated with erythromycin during pregnancy, or if close follow-up cannot be assured, they should be treated with benzathine penicillin G, 50,000 units/kg IM as a one-time dose.

# Follow-Up
Seropositive untreated infants must be closely followed at 1, 2, 3, 6, and 12 months of age. In the absence of infection, nontreponemal antibody titers should be decreasing by 3 months of age and should have disappeared by 6 months of age. If these titers are found to be stable or increasing, the child should be reevaluated and fully treated. Additionally, in the absence of infection, treponemal antibodies may be present for up to 1 year. If they are present beyond 1 year, the infant should be treated for congenital syphilis.

*In the immediate newborn period, interpretation of these tests may be difficult; normal values vary with gestational age and are higher in preterm infants. Other causes of elevated values should also be considered. However, when an infant is being evaluated for congenital syphilis, the infant should be treated if test results cannot exclude infection.

Treated infants should also be observed to ensure decreasing nontreponemal antibody titers; these should have disappeared by 6 months of age. Treponemal tests should not be used, since they may remain positive despite effective therapy if the child was infected. Infants with documented CSF pleocytosis should be reexamined every 6 months or until the cell count is normal. If the cell count is still abnormal after 2 years, or if a downward trend is not present at each examination, the infant should be retreated. The CSF-VDRL should also be checked at 6 months; if it is still reactive, the infant should be retreated.

# Therapy of Older Infants and Children
After the newborn period, children discovered to have syphilis should have a CSF examination to rule out congenital syphilis. Any child who is thought to have congenital syphilis or who has neurologic involvement should be treated with 200,000-300,000 units/kg/day of aqueous crystalline penicillin G (administered as 50,000 units/kg every 4-6 hours) for 10-14 days. Older children with definite acquired syphilis and a normal neurologic examination may be treated with benzathine penicillin G, 50,000 units/kg IM, up to the adult dose of 2.4 million units.
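The weight-based regimens above reduce to simple arithmetic, sketched below in Python. The function names are illustrative assumptions; the numbers come directly from the text (50,000 units/kg per dose, dosing every 8-12 hours for the congenital aqueous regimen, and a 2.4-million-unit adult ceiling on benzathine penicillin G for older children). This is a worked illustration of the stated arithmetic, not a dosing tool.

```python
def benzathine_dose_units(weight_kg):
    """Benzathine penicillin G for acquired syphilis in older children,
    per the text above: 50,000 units/kg IM, capped at the adult dose
    of 2.4 million units."""
    return min(50_000 * weight_kg, 2_400_000)

def congenital_aqueous_daily_units(weight_kg, doses_per_day):
    """Aqueous crystalline penicillin G for congenital syphilis:
    50,000 units/kg per dose IV every 8-12 hours (2-3 doses/day),
    which yields the stated 100,000-150,000 units/kg/day total."""
    assert doses_per_day in (2, 3)  # q12h or q8h
    return 50_000 * weight_kg * doses_per_day

print(benzathine_dose_units(20))             # 1,000,000 units for a 20-kg child
print(benzathine_dose_units(60))             # capped at the 2,400,000-unit adult dose
print(congenital_aqueous_daily_units(3, 3))  # 450,000 units/day for a 3-kg neonate (q8h)
```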
Children with a history of penicillin allergy should be skin tested and, if necessary, desensitized (see Appendix). Follow-up should be performed as described previously.

# HIV Testing
In cases of congenital syphilis, the mother should be counseled concerning the risks of HIV and be encouraged to be tested for HIV; if her test is positive, the infant should be referred for follow-up.

# Syphilis in HIV-Infected Patients
# Diagnosis
• All sexually active patients with syphilis should be encouraged to be counseled and tested for HIV because of the frequency of association of the two diseases and the implications for clinical assessment and management.
• Neurosyphilis should be considered in the differential diagnosis of neurologic disease in HIV-infected persons.
• When clinical findings suggest that syphilis is present but serologic tests are negative or confusing, alternative tests, such as biopsy of lesions, dark-field examination, and direct fluorescent antibody staining of lesion material, should be used.
• In cases of congenital syphilis, the mother should be encouraged to be counseled and tested for HIV; if her test is positive, the infant should be referred for follow-up.

# Appendix
# Management of Patients with Histories of Penicillin Allergy
Currently, no proven alternative therapies to penicillin are available for treating patients with neurosyphilis, congenital syphilis, or syphilis in pregnancy. Therefore, skin testing, with desensitization if indicated, is recommended for these patients.

# Skin Testing
Skin testing is a rapid, safe, and accurate procedure (see below). It is also productive: 90% of patients with histories of "penicillin allergy" have negative skin tests and can be given penicillin safely. The other 10%, with positive skin tests, have an increased risk of being truly penicillin-allergic and should undergo desensitization. Clinics involved in STD management should be equipped and prepared to do skin testing or should establish referral mechanisms to have skin tests performed.

Skin testing is quick; four determinants, along with positive and negative controls, can be placed and read in an hour (Table 3). Skin testing is also safe, if properly performed. Patients who have had a severe, life-threatening reaction in the past year should be tested in a controlled environment, such as a hospital setting, with the determinant antigens diluted 100-fold. Other patients can be skin tested safely in a physician-staffed clinic. Patients with a history of penicillin allergy but with no reaction to penicillin skin tests, who are not taking antihistamines, and who had a positive histamine control on skin testing (Table 3) should be given penicillin, 250 mg orally, and observed for one hour. Patients who tolerate this dose well may be treated with penicillin as needed.

# Desensitization
Patients who have a positive skin test to one of the penicillin determinants can be desensitized. This is a straightforward, relatively safe procedure. Although the procedure can be done orally or intravenously, oral desensitization is thought to be safer, simpler, and easier. Desensitization should be done in a hospital setting because a serious IgE-mediated allergic reaction, although unlikely, can occur. Desensitization can be completed in 4 hours, after which the first dose of penicillin is given (Table 4). STD programs should have a referral center where patients with positive skin tests can be desensitized.
After desensitization, patients must be maintained on penicillin for the duration of therapy.

# TABLE 3. Penicillin allergy skin testing (adapted from Beall*)
Note: If there has been a severe generalized reaction to penicillin in the previous year, the antigens should be diluted 100-fold, and patients should be tested in a controlled environment. Both major and minor determinants should be available for the tests to be interpretable. The patient should not have taken antihistamines in the previous 48 hours.

Reagents
Major determinants:
Benzylpenicilloyl-polylysine (major, Pre-Pen [Taylor Pharmacal Co., Decatur, Illinois], 6 × 10⁻⁵M)
Benzylpenicillin (10⁻²M, or 6,000 U/mL)
Minor determinants:
Benzylpenicilloic acid (10⁻²M)
Benzylpenilloic acid (10⁻²M)
Positive control (histamine, 1 mg/mL)
Negative control (buffered saline solution)
Dilute the antigens 100-fold for preliminary testing if there has been an immediate generalized reaction within the past year.

Procedure
Epicutaneous (scratch or prick) test: apply one drop of material to the volar forearm and pierce the epidermis without drawing blood; observe for 20 minutes. If there is no wheal ≥4 mm, proceed to the intradermal test.
Intradermal test: inject 0.02 mL intradermally with a 27-gauge short-beveled needle; observe for 20 minutes.

Interpretation
For the test to be interpretable, the negative (saline) control must elicit no reaction and the positive (histamine) control must elicit a positive reaction.
Positive test: a wheal >4 mm in mean diameter to any penicillin reagent; erythema must be present.
Negative test: the wheals at the site of the penicillin reagents are equivalent to the negative control.
Indeterminate: all other results.
(These interpretation rules are restated as a short sketch following the genital herpes regimens below.)

Note on Table 4 (oral desensitization protocol): interval between doses, 15 minutes; elapsed time, 3 hours and 45 minutes; cumulative dose, 1.3 million units. The specific amount of drug was diluted in approximately 30 mL of water and then given orally. Adapted with permission from the New England Journal of Medicine 1985;312:1229-32.

# Lymphogranuloma Venereum
Lymphogranuloma venereum (LGV) is caused by C. trachomatis (LGV serovars). Inguinal lymphadenopathy is the most common clinical manifestation. The diagnosis is often made clinically and may be confused with chancroid. LGV is not a common cause of inguinal lymphadenopathy in the United States.

# Treatment: Genital, Inguinal, or Anorectal
# Recommended Regimen
Doxycycline 100 mg orally 2 times a day for 21 days.
# Alternative Regimen
Tetracycline 500 mg orally 4 times a day for 21 days
or Erythromycin 500 mg orally 4 times a day for 21 days
or Sulfisoxazole 500 mg orally 4 times a day for 21 days, or an equivalent sulfonamide course.

# Genital Herpes Simplex Virus Infections
Genital herpes is a viral disease that may be chronic and recurring and for which no known cure exists. Systemic acyclovir treatment provides partial control of the symptoms and signs of herpes episodes; it accelerates healing but does not eradicate the infection nor affect the subsequent risk, frequency, or severity of recurrences after the drug is discontinued. Topical therapy with acyclovir is substantially less effective than therapy with the oral drug.

# First Clinical Episode of Genital Herpes
# Recommended Regimen
Acyclovir 200 mg orally 5 times a day for 7-10 days or until clinical resolution occurs.

# First Clinical Episode of Herpes Proctitis
# Recommended Regimen
Acyclovir 400 mg orally 5 times a day for 10 days or until clinical resolution occurs.
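Returning to the penicillin skin testing procedure above, the Table 3 interpretation rules can be restated as a small decision function. In the Python sketch below, the parameter names and data layout are illustrative assumptions; the thresholds themselves (controls must validate; a wheal >4 mm in mean diameter with erythema to any penicillin reagent is positive) are taken from the table.

```python
def interpret_penicillin_skin_test(saline_wheal_mm, histamine_positive,
                                   reagent_wheals_mm, erythema_present):
    """Sketch of the Table 3 interpretation rules (parameter layout is
    illustrative). reagent_wheals_mm maps each penicillin reagent to its
    wheal diameter in mm; erythema_present maps reagents to booleans."""
    # Controls must validate: no reaction to saline, a reaction to histamine.
    if saline_wheal_mm > 0 or not histamine_positive:
        return "Uninterpretable: controls did not validate."
    positives = [r for r, mm in reagent_wheals_mm.items()
                 if mm > 4 and erythema_present.get(r, False)]
    if positives:
        return f"Positive test (reagents: {', '.join(positives)})."
    # Negative: reagent sites equivalent to the (nonreactive) saline control.
    if all(mm == 0 for mm in reagent_wheals_mm.values()):
        return "Negative test."
    return "Indeterminate: all other results."

print(interpret_penicillin_skin_test(
    saline_wheal_mm=0, histamine_positive=True,
    reagent_wheals_mm={"Pre-Pen": 6, "benzylpenicillin": 0},
    erythema_present={"Pre-Pen": True}))
```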
# Inpatient Therapy
For patients with severe disease or complications necessitating hospitalization:
# Recommended Regimen
Acyclovir 5 mg/kg body weight IV every 8 hours for 5-7 days or until clinical resolution occurs.

# Recurrent Episodes
Most episodes of recurrent herpes do not benefit from therapy with acyclovir. In severe recurrent disease, some patients who start therapy at the beginning of the prodrome or within 2 days after the onset of lesions may benefit from therapy, although this has not been proven.
# Recommended Regimen
Acyclovir 200 mg orally 5 times a day for 5 days
or Acyclovir 800 mg orally 2 times a day for 5 days.

# Daily Suppressive Therapy
Daily treatment reduces the frequency of recurrences by at least 75% among patients with frequent (more than six per year) recurrences. Safety and efficacy have been clearly documented among persons receiving daily therapy for up to 3 years. Acyclovir-resistant strains of HSV have been isolated from persons receiving suppressive therapy, but they have not been associated with treatment failure among immunocompetent patients. After 1 year of continuous daily suppressive therapy, acyclovir should be discontinued so that the patient's recurrence rate may be reassessed.
# Recommended Regimen
Acyclovir 200 mg orally 2 to 5 times a day*
or Acyclovir 400 mg orally 2 times a day.*
*Dosage must be individualized for each patient.

# Genital Herpes among HIV-Infected Patients
The need for higher-than-standard doses of oral acyclovir among HIV-infected but immunocompetent patients has not been established. Immune status, not HIV infection alone, is the likely predictor of disease severity and response to therapy. Case reports strongly suggest that patients with clinical immunodeficiency have a more severe clinical course of anogenital herpes than do immunocompetent patients, and some health-care providers are using increased doses of acyclovir for patients with immunodeficiency. However, neither the need for nor the proper increased dosage of acyclovir has been conclusively established. Immunocompromised as well as immunocompetent hosts whose initial therapy fails may benefit from an increased dosage of acyclovir. The indications for suppressive therapy among immunocompromised patients, and the dose required, are controversial. Clinical benefits to the patient must be weighed against the potential for selecting HSV strains that are resistant to acyclovir. Patients whose therapy for a recurrence fails because of resistant strains of HSV should be managed in consultation with an expert.

# Acyclovir for Treating Pregnant Patients
The safety of systemic acyclovir therapy among pregnant women has not been established. In the presence of life-threatening maternal HSV infection (e.g., disseminated infection that includes encephalitis, pneumonitis, and/or hepatitis), acyclovir administered IV is probably of value. Among pregnant women without life-threatening disease, systemic acyclovir treatment should not be used for recurrent genital herpes episodes or as suppressive therapy to prevent reactivation near term.

# Perinatal Infections
Most mothers of infants who acquire neonatal herpes lack histories of clinically evident genital herpes. The risk of transmission to the neonate from an infected mother is highest among women with primary herpes infection near the time of delivery, and it is low among women with recurrent herpes.
The results of viral cultures during pregnancy do not predict viral shedding at the time of delivery; such cultures are not routinely indicated. At the onset of labor, all women should be examined and carefully questioned about symptoms. Women without symptoms or signs of genital herpes infection or prodrome may have vaginal deliveries. For women who have a history of genital herpes or who have a sex partner with genital herpes, cultures of the birth canal at delivery may be helpful in decisions about neonatal management. Infants delivered through an infected birth canal (proven by culture or presumed by observation of lesions) should be cultured and observed carefully. Although data are limited concerning the use of acyclovir for asymptomatic infants, some experts presumptively treat infants who were exposed to HSV at delivery. Herpes cultures should be obtained from infants before therapy; positive cultures obtained 24-48 hours or more after birth indicate active viral infection.

# Counseling and Management of Sex Partners
Patients with genital herpes should be told about the natural history of their disease, with emphasis on the potential for recurrent episodes. Patients should be advised to abstain from sexual activity while lesions are present. Sexual transmission of HSV has been documented during periods without recognized lesions. Suppressive treatment with oral acyclovir reduces the frequency of recurrences but does not totally eliminate viral shedding. Genital herpes and other diseases causing genital ulcers have been associated with an increased risk of acquiring HIV infection; therefore, condoms should be used during all sexual exposures. If sex partners of patients with genital herpes have genital lesions, they may benefit from evaluation; however, evaluation of asymptomatic partners is of little value in preventing transmission of HSV. The risk of neonatal infection should be explained to all patients, male and female, with genital herpes. Women of child-bearing age with genital herpes should be advised to inform their clinicians of their history during any future pregnancy.

# Infections of Epithelial Surfaces
# Genital Warts
No therapy has been shown to eradicate HPV. HPV has been demonstrated in adjacent tissue after laser treatment of HPV-associated cervical intraepithelial neoplasia and after attempts to eliminate subclinical HPV by extensive laser vaporization of the anogenital area. The benefit of treating patients with subclinical HPV infection has not been demonstrated, and recurrence is common. The effect of genital wart treatment on HPV transmission and on the natural history of HPV is unknown. Therefore, the goal of treatment is removal of exophytic warts and the amelioration of signs and symptoms, not the eradication of HPV.

Expensive therapies, toxic therapies, and procedures that result in scarring should be avoided. Sex partners should be examined for evidence of warts. Patients with anogenital warts should be made aware that they are contagious to uninfected sex partners. The use of condoms is recommended to help reduce transmission. In most clinical situations, cryotherapy with liquid nitrogen or a cryoprobe is the treatment of choice for external genital and perianal warts. Cryotherapy is nontoxic, does not require anesthesia, and, if used properly, does not result in scarring. Podophyllin, trichloroacetic acid (TCA), and electrodesiccation/electrocautery are alternative therapies.
Treatment with interferon is not recommended because of its relatively low efficacy, high incidence of toxicity, and high cost. The carbon dioxide laser and conventional surgery are useful in the management of extensive warts, particularly for patients who have not responded to cryotherapy; these alternatives are not appropriate for limited lesions. Like the more cost-effective treatments, these therapies do not eliminate HPV and often are associated with recurrence of clinical disease.

# Pregnant Patients and Perinatal Infections
Cesarean delivery for prevention of transmission of HPV infection to the newborn is not indicated. In rare instances, however, cesarean delivery may be indicated for women with genital warts if the pelvic outlet is obstructed or if vaginal delivery would result in excessive bleeding. Genital papillary lesions have a tendency to proliferate and to become friable during pregnancy. Many experts advocate removal of visible warts during pregnancy, although data on this subject are limited. HPV can cause laryngeal papillomatosis in infants. The route of transmission (transplacental, birth canal, or postnatal) is unknown; therefore, the preventive value of cesarean delivery is unknown. The perinatal transmission rate is also unknown, although it must be very low, given the relatively high prevalence of genital warts and the rarity of laryngeal papillomas. Neither routine HPV screening tests nor cesarean delivery is indicated to prevent transmission of HPV infection to the newborn.

# Gonococcal Infections
Because of the wide spectrum of antimicrobial therapies effective against N. gonorrhoeae, these guidelines are not intended to be a comprehensive list of all possible treatment regimens.

# Treatment of Adults
# Uncomplicated Urethral, Endocervical, or Rectal Infections
Single-dose efficacy is a major consideration in choosing an antibiotic regimen to treat persons infected with N. gonorrhoeae. Another important concern is coexisting chlamydial infection, documented in up to 45% of gonorrhea cases in some populations. Until universal testing for chlamydia with quick, inexpensive, and highly accurate tests becomes available, persons with gonorrhea should also be treated for presumptive chlamydial infection. Generally, patients with gonococcal infections should be treated simultaneously with antibiotics effective against both C. trachomatis and N. gonorrhoeae. Simultaneous treatment may also lessen the possibility of treatment failure due to antibiotic resistance.

# Recommended Regimen
Ceftriaxone 250 mg IM once
plus Doxycycline 100 mg orally 2 times a day for 7 days.
Some authorities prefer a dose of 125 mg ceftriaxone IM because it is less expensive and can be given in a volume of only 0.5 mL, which is more easily administered in the deltoid muscle. However, the 250-mg dose is recommended because it may delay the emergence of ceftriaxone-resistant strains. At this time, both doses appear highly effective for mucosal gonorrhea at all sites.

# Alternative Regimens
For patients who cannot take ceftriaxone, the preferred alternative is spectinomycin 2 g IM in a single dose (followed by doxycycline). Other alternatives, for which experience is less extensive, include ciprofloxacin* 500 mg orally once; norfloxacin* 800 mg orally once; cefuroxime axetil 1 g orally once with probenecid 1 g; cefotaxime 1 g IM once; and ceftizoxime 500 mg IM once. All of these regimens are followed by doxycycline 100 mg orally, twice daily for 7 days (the overall selection logic is summarized in the sketch below).
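The recommended and alternative regimens above amount to a selection rule: choose a primary antigonococcal agent, then always append antichlamydial therapy. The Python sketch below is an illustrative condensation only; it omits several listed alternatives (e.g., the quinolones and the other single-dose cephalosporins) and incorporates the penicillin-sensitive-source and pregnancy caveats discussed in the surrounding text.

```python
def uncomplicated_gc_regimen(can_take_ceftriaxone, pregnant,
                             source_proven_penicillin_sensitive=False):
    """Illustrative condensation of the regimen-selection text above for
    uncomplicated urethral/endocervical/rectal gonorrhea. Not a
    prescribing tool; several listed alternatives are omitted."""
    if can_take_ceftriaxone:
        primary = "ceftriaxone 250 mg IM once"
    elif source_proven_penicillin_sensitive:
        # Only if the source is proven not to have penicillin-resistant gonorrhea.
        primary = "amoxicillin 3 g orally with probenecid 1 g"
    else:
        primary = "spectinomycin 2 g IM once"
    # All regimens are followed by antichlamydial therapy; tetracyclines
    # are contraindicated in pregnancy, so erythromycin is substituted.
    if pregnant:
        companion = "erythromycin base 500 mg orally 4 times a day for 7 days"
    else:
        companion = "doxycycline 100 mg orally 2 times a day for 7 days"
    return f"{primary}, followed by {companion}"

print(uncomplicated_gc_regimen(can_take_ceftriaxone=True, pregnant=False))
print(uncomplicated_gc_regimen(can_take_ceftriaxone=False, pregnant=True))
```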
If infection was acquired from a source proven not to have penicillin-resistant gonorrhea, a penicillin such as amoxicillin 3 g orally with 1 g probenecid, followed by doxycycline, may be used for treatment. Doxycycline or tetracycline alone is no longer considered adequate therapy for gonococcal infections but is added for treatment of coexisting chlamydial infections. Tetracycline may be substituted for doxycycline; however, compliance may be worse, since tetracycline must be taken at a dose of 500 mg 4 times a day between meals, whereas doxycycline is taken at a dose of 100 mg 2 times a day without regard to meals. Moreover, at current prices, tetracycline costs only a little less than generic doxycycline.

*Quinolones, such as ciprofloxacin and norfloxacin, are contraindicated during pregnancy and in children 16 years of age or younger.

For patients who cannot take a tetracycline (e.g., pregnant women), erythromycin may be substituted (erythromycin base or stearate 500 mg orally 4 times a day for 7 days, or erythromycin ethylsuccinate 800 mg orally 4 times a day for 7 days). See "Chlamydial Infections" for further information on management of chlamydial infection.

# Special Considerations

All patients with gonorrhea should have a serologic test for syphilis and should be offered confidential counseling and testing for HIV infection. Most patients with incubating syphilis (those who are seronegative and have no clinical signs of syphilis) may be cured by any of the regimens containing β-lactams (e.g., ceftriaxone) or tetracyclines. Spectinomycin and the quinolones (ciprofloxacin, norfloxacin) have not been shown to be active against incubating syphilis. Patients treated with these drugs should have a serologic test for syphilis in 1 month. Patients with gonorrhea and documented syphilis, and gonorrhea patients who are sex partners of syphilis patients, should be treated for syphilis (see "Syphilis") as well as for gonorrhea. Some practitioners report that mixing 1% lidocaine (without epinephrine) with ceftriaxone reduces the discomfort associated with the injection (see package insert). No adverse reactions have been associated with use of lidocaine diluent.

# Management of Sex Partners

Persons exposed to gonorrhea within the preceding 30 days should be examined, cultured, and treated presumptively.

# Follow-Up

Treatment failure following combined ceftriaxone/doxycycline therapy is rare; therefore, a follow-up culture ("test-of-cure") is not essential. A more cost-effective strategy may be reexamination with culture 1-2 months after treatment ("rescreening"); this strategy detects both treatment failures and reinfections. Patients should return for examination if symptoms persist after treatment. Because there is less long-term experience with drugs other than ceftriaxone, patients treated with regimens other than ceftriaxone/doxycycline should have follow-up cultures obtained 4-7 days after completion of therapy.

# Treatment Failures

Persistent symptoms after treatment should be evaluated by culture for N. gonorrhoeae, and any gonococcal isolate should be tested for antibiotic sensitivity. Symptoms of urethritis may also be caused by C. trachomatis and other organisms associated with nongonococcal urethritis (see "Nongonococcal Urethritis"). Additional treatment for patients with gonorrhea should be ceftriaxone, 250 mg, followed by doxycycline.
Infections occurring after treatment with one of the recommended regimens are commonly due to reinfection rather than to treatment failure and indicate a need for improved sex-partner referral and patient education.

# Pharyngeal Gonococcal Infection

Patients with uncomplicated pharyngeal gonococcal infection should be treated with ceftriaxone 250 mg IM once. Patients who cannot be treated with ceftriaxone should be treated with ciprofloxacin 500 mg orally as a single dose. Since experience with this regimen is limited, such patients should be evaluated with repeat culture 4-7 days after treatment.

# Treatment of Gonococcal Infections in Pregnancy

Pregnant women should be cultured for N. gonorrhoeae (and tested for C. trachomatis and syphilis) at the first prenatal-care visit. For women at high risk of STD, a second culture for gonorrhea (as well as tests for chlamydia and syphilis) should be obtained late in the third trimester.

# Recommended Regimen

Ceftriaxone 250 mg IM once plus Erythromycin base 500 mg orally 4 times a day for 7 days.

Pregnant women allergic to β-lactams should be treated with spectinomycin 2 g IM once (followed by erythromycin). Follow-up cervical and rectal cultures for N. gonorrhoeae should be obtained 4-7 days after treatment is completed. Ideally, pregnant women with gonorrhea should be treated for chlamydia on the basis of chlamydial diagnostic studies. If chlamydial diagnostic testing is not available, treatment for chlamydia should be given. Tetracyclines (including doxycycline) and the quinolones are contraindicated in pregnancy because of possible adverse effects on the fetus. Treatments for pregnant patients with chlamydial infection, acute salpingitis, and disseminated gonorrhea in pregnancy are described in the respective sections.

# Disseminated Gonococcal Infection (DGI)

Hospitalization is recommended for initial therapy, especially for patients who cannot reliably comply with treatment, have uncertain diagnoses, or have purulent synovial effusions or other complications. Patients should be examined for clinical evidence of endocarditis or meningitis. When the infecting organism is proven to be penicillin-sensitive, parenteral treatment may be switched to ampicillin 1 g every 6 hours (or equivalent). Patients treated for DGI should be tested for genital C. trachomatis infection. If chlamydial testing is not available, patients should be treated empirically for coexisting chlamydial infection. Reliable patients with uncomplicated disease may be discharged 24-48 hours after all symptoms resolve and may complete the therapy (for a total of 1 week of antibiotic therapy) with an oral regimen of cefuroxime axetil 500 mg 2 times a day, or amoxicillin 500 mg with clavulanic acid 3 times a day, or, if not pregnant, ciprofloxacin 500 mg 2 times a day.

# Meningitis and Endocarditis

Meningitis and endocarditis caused by N. gonorrhoeae require high-dose IV therapy with an agent effective against the strain causing the disease, such as ceftriaxone 1-2 g IV every 12 hours. Optimal duration of therapy is unknown, but most authorities treat patients with gonococcal meningitis for 10-14 days and with gonococcal endocarditis for at least 4 weeks. Patients with gonococcal nephritis, endocarditis or meningitis, or recurrent DGI should be evaluated for complement deficiencies. Treatment of complicated DGI should be undertaken in consultation with an expert.
# Adult Gonococcal Ophthalmia

Adults and children over 20 kg with nonsepticemic gonococcal ophthalmia should be treated with ceftriaxone 1 g IM once. Irrigation of the eyes with saline or buffered ophthalmic solutions may be useful adjunctive therapy to eliminate discharge. All patients must have careful ophthalmologic assessment, including slit-lamp examination for ocular complications. Topical antibiotics alone are insufficient therapy and are unnecessary when appropriate systemic therapy is given. Simultaneous ophthalmic infection with C. trachomatis has been reported and should be considered for patients who do not respond promptly.

# Gonococcal Infections of Infants and Children

Child abuse should be carefully considered and evaluated (see "Sexual Assault and Abuse of Children") for any child with documented gonorrhea.

# Treatment of Infants Born to Mothers with Gonococcal Infection

Infants born to mothers with untreated gonorrhea are at high risk of infection (e.g., ophthalmia and DGI) and should be treated with a single injection of ceftriaxone (50 mg/kg IV or IM, not to exceed 125 mg). Ceftriaxone should be given cautiously to hyperbilirubinemic infants, especially premature infants. Topical prophylaxis for neonatal ophthalmia is not adequate treatment for documented infections of the eye or other sites.

# Treatment of Infants with Gonococcal Infection

Infants with documented gonococcal infections at any site (e.g., eye) should be evaluated for DGI. This evaluation should include a careful physical examination, especially of the joints, as well as blood and CSF cultures. Infants with gonococcal ophthalmia or DGI should be treated for 7 days (10-14 days if meningitis is present) with one of the following regimens:

# Recommended Regimen

Ceftriaxone 25-50 mg/kg/day IV or IM in a single daily dose or Cefotaxime 25 mg/kg IV or IM every 12 hours.

# Alternative Regimen

Limited data suggest that uncomplicated gonococcal ophthalmia among infants may be cured with a single injection of ceftriaxone (50 mg/kg up to 125 mg). A few experts use this regimen for children who have no clinical or laboratory evidence of disseminated disease. If the gonococcal isolate is proven to be susceptible to penicillin, crystalline penicillin G may be given. The dose is 100,000 units/kg/day given in 2 equal doses (4 equal doses per day for infants more than 1 week old). The dose should be increased to 150,000 units/kg/day for meningitis. Infants with gonococcal ophthalmia should receive eye irrigations with buffered saline solutions until discharge has cleared. Topical antibiotic therapy alone is inadequate. Simultaneous infection with C. trachomatis has been reported and should be considered for patients who do not respond satisfactorily. Therefore, the mother and infant should be tested for chlamydial infection.

# Gonococcal Infections of Children

Children who weigh ≥45 kg should be treated with adult regimens. Children who weigh <45 kg who have uncomplicated vulvovaginitis, cervicitis, urethritis, pharyngitis, or proctitis should be treated as follows:

# Recommended Regimen

Ceftriaxone 125 mg IM once. Patients who cannot tolerate ceftriaxone may be treated with: Spectinomycin 40 mg/kg IM once. Patients weighing <45 kg with bacteremia or arthritis should be treated with ceftriaxone 50 mg/kg (maximum 1 g) once daily for 7 days. For meningitis, the duration of treatment is increased to 10-14 days and the maximum dose is 2 g.
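Several of the pediatric regimens above are weight-based with an absolute ceiling (e.g., ceftriaxone 50 mg/kg, not to exceed 125 mg, for ophthalmia; 50 mg/kg, maximum 1 g, for bacteremia or arthritis). The sketch below illustrates that arithmetic only; it is a hypothetical helper for the calculation described in the text, not clinical software.

```python
def capped_dose_mg(weight_kg: float, mg_per_kg: float, max_mg: float) -> float:
    """Weight-based dose with an absolute ceiling (illustrative arithmetic only)."""
    return min(weight_kg * mg_per_kg, max_mg)

# Ceftriaxone for neonatal ophthalmia: 50 mg/kg, not to exceed 125 mg.
print(capped_dose_mg(3.2, 50, 125))    # 125 -- the cap applies above 2.5 kg
# Ceftriaxone for bacteremia/arthritis in a child <45 kg: 50 mg/kg, maximum 1 g.
print(capped_dose_mg(15.0, 50, 1000))  # 750.0 mg once daily for 7 days
```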
Children ≥8 years of age should also be given doxycycline 100 mg 2 times a day for 7 days. All patients should be evaluated for coinfection with syphilis and C. trachomatis. Follow-up cultures are necessary to ensure that treatment has been effective.

# Prevention of Ophthalmia Neonatorum

Instillation of a prophylactic agent into the eyes of all newborn infants is recommended to prevent gonococcal ophthalmia neonatorum and is required by law in most states. Although all regimens listed below effectively prevent gonococcal eye disease, their efficacy in preventing chlamydial eye disease is not clear. Furthermore, they do not eliminate nasopharyngeal colonization with C. trachomatis. Treatment of gonococcal and chlamydial infections in pregnant women is the best method for preventing neonatal gonococcal and chlamydial disease.

# Recommended Regimen

Erythromycin (0.5%) ophthalmic ointment, once or Tetracycline (1%) ophthalmic ointment, once or Silver nitrate (1%) aqueous solution, once.

One of these should be instilled into the eyes of every neonate as soon as possible after delivery, and definitely within 1 hour after birth. Single-use tubes or ampules are preferable to multiple-use tubes. The efficacy of tetracycline and erythromycin in the prevention of TRNG and PPNG ophthalmia is unknown, although both are probably effective because of the high concentrations of drug in these preparations. Bacitracin is not recommended.

# Chlamydial Infections

Culture and nonculture methods for diagnosis of C. trachomatis are now available. Appropriate use of these diagnostic tests is strongly encouraged, especially for screening asymptomatic high-risk women in whom infection would otherwise be undetected. However, in clinical settings where testing for chlamydia is not routine or available, treatment often is prescribed on the basis of clinical diagnosis or as cotreatment for gonorrhea (see "Gonococcal Infections"). In clinical settings, periodic surveys should be performed to determine local chlamydial prevalence in patients with gonorrhea. Priority groups for chlamydia testing, if resources are limited, are high-risk pregnant women, adolescents, and women with multiple sexual partners. Results of chlamydial tests should be interpreted with care. The sensitivity of all currently available laboratory tests for C. trachomatis is substantially less than 100%; thus, false-negative tests are possible. The specificity of nonculture tests is also less than 100%, so false-positive results are possible.

# Cytomegalovirus

Cytomegalovirus (CMV) is a common infection in pregnant women, and 0.5%-2% of all infants of CMV-infected women are congenitally infected, although most of these infants are only mildly affected. Another 5%-10% of infants are perinatally infected; these infections are without known sequelae. The risk of severe congenital disease (retardation, deafness, visual problems) is highest when primary CMV infection occurs during pregnancy, although recurrent CMV may also cause severe congenital infection. Because severe congenital infection occurs before delivery, and because CMV infection is widespread, the route of delivery should not be influenced by viral shedding. No accepted routine therapy exists for either maternal or neonatal infection.
# Ectoparasitic Infections

# Pediculosis Pubis

# Recommended Regimen

Permethrin (1%) creme rinse applied to the affected area and washed off after 10 minutes or Pyrethrins and piperonyl butoxide applied to the affected area and washed off after 10 minutes or Lindane 1% shampoo applied for 4 minutes and then thoroughly washed off. (Not recommended for pregnant or lactating women.)

Patients should be reevaluated after 1 week if symptoms persist. Retreatment may be necessary if lice are found or eggs are observed at the hair-skin junction. Sex partners should be treated as above.

Special Considerations. Pediculosis of the eyelashes should be treated by the application of occlusive ophthalmic ointment to the eyelid margins, 2 times a day for 10 days, to smother lice and nits. Lindane or other drugs should not be applied to the eyes. Clothing or bed linen that may have been contaminated by the patient within the past 2 days should be washed and/or dried by machine (hot cycle in each) or dry cleaned.

# Scabies

# Recommended Regimen (Adults and Older Children)

Lindane (1%) 1 oz. of lotion or 30 g of cream applied thinly to all areas of the body from the neck down and washed off thoroughly after 8 hours. (Not recommended for pregnant or lactating women.)

# Treatment Recommendations

# External Genital/Perianal Warts

# Recommended Regimen

Cryotherapy with liquid nitrogen or cryoprobe.

# Alternative Regimen

Podophyllin 10%-25% in compound tincture of benzoin. Limit the total volume of podophyllin solution applied to <0.5 ml per treatment session. Thoroughly wash off in 1-4 hours. Treat <10 cm² per session. Repeat applications at weekly intervals. Mucosal warts are more likely to respond than highly keratinized warts on the penile shaft, buttocks, and pubic areas. Contraindicated in pregnancy.

Trichloroacetic acid (80%-90%). Apply only to warts; powder with talc or sodium bicarbonate (baking soda) to remove unreacted acid. Repeat application at weekly intervals.

Electrodesiccation/electrocautery. Electrodesiccation is contraindicated in patients with cardiac pacemakers, or for lesions proximal to the anal verge. Extensive or refractory disease should be referred to an expert.

# Cervical Warts

# Recommended Regimen

For women with cervical warts, dysplasia must be excluded before treatment is begun. Management should therefore be carried out in consultation with an expert.

# Vaginal Warts

# Recommended Regimen

Cryotherapy with liquid nitrogen. (The use of a cryoprobe in the vagina is not recommended because of the risk of vaginal perforation and fistula formation.)

# Alternative Regimen

Trichloroacetic acid (80%-90%). Apply only to warts; powder with talc or sodium bicarbonate (baking soda) to remove unreacted acid. Repeat application at weekly intervals.

Podophyllin 10%-25% in compound tincture of benzoin. Treatment area must be dry before speculum is removed. Treat <2 cm² per session. Repeat application at weekly intervals. Contraindicated in pregnancy. Extensive or refractory disease should be referred to an expert.

# Urethral Meatus Warts

# Recommended Regimen

Cryotherapy with liquid nitrogen.

# Alternative Regimen

Podophyllin 10%-25% in compound tincture of benzoin. Treatment area must be dry before contact with normal mucosa, and podophyllin must be washed off in 1-2 hours. Contraindicated in pregnancy. Extensive or refractory disease should be referred to an expert.

# Anal Warts

# Recommended Regimen

Cryotherapy with liquid nitrogen.
Extensive or refractory disease should be referred to an expert.

# Alternative Regimen

Trichloroacetic acid (80%-90%). Surgical removal.

# Oral Warts

# Recommended Regimen

Cryotherapy with liquid nitrogen.

# Alternative Regimen

Electrodesiccation/electrocautery. Surgical removal. Extensive or refractory disease should be referred to an expert.

# Gonococcal Infections

Treatment of gonococcal infections in the United States is influenced by the following trends: 1) the spread of infections due to antibiotic-resistant N. gonorrhoeae, including penicillinase-producing N. gonorrhoeae (PPNG), tetracycline-resistant N. gonorrhoeae (TRNG), and strains with chromosomally mediated resistance to multiple antibiotics; 2) the high frequency of chlamydial infections in persons with gonorrhea; 3) recognition of the serious complications of chlamydial and gonococcal infections; and 4) the absence of a fast, inexpensive, and highly accurate test for chlamydial infection. All gonorrhea cases should be diagnosed or confirmed by culture to facilitate antimicrobial susceptibility testing. The susceptibility of N. gonorrhoeae to antibiotics is likely to change over time in any locality. Therefore, gonorrhea control programs should include a system of regular antibiotic sensitivity testing of a surveillance sample of N. gonorrhoeae isolates as well as all isolates associated with treatment failure.

# Recommended Regimen

Doxycycline 100 mg orally 2 times a day for 7 days or Tetracycline 500 mg orally 4 times a day for 7 days.

# Alternative Regimen

Erythromycin base 500 mg orally 4 times a day or equivalent salt for 7 days or Erythromycin ethylsuccinate 800 mg orally 4 times a day for 7 days. If erythromycin is not tolerated because of side effects, the following regimen may be effective: Sulfisoxazole 500 mg orally 4 times a day for 10 days or equivalent.

# Test of Cure

Because antimicrobial resistance of C. trachomatis to recommended regimens has not been observed, test-of-cure evaluation is not necessary when treatment has been completed.

# Treatment of C. trachomatis in Pregnancy

Pregnant women should undergo diagnostic testing for C. trachomatis, N. gonorrhoeae, and syphilis, if possible, at their first prenatal visit and, for women at high risk, during the third trimester. Risk factors for chlamydial disease during pregnancy include young age (<25 years), past history or presence of other STD, a new sex partner within the preceding 3 months, and multiple sex partners. Ideally, pregnant women with gonorrhea should be treated for chlamydia on the basis of diagnostic studies, but if chlamydial testing is not available, treatment should be given because of the high likelihood of coinfection.

# Recommended Regimen

Erythromycin base 500 mg orally 4 times a day for 7 days. If this regimen is not tolerated, the following regimens are recommended: Erythromycin ethylsuccinate 800 mg orally 4 times a day for 7 days or Erythromycin ethylsuccinate 400 mg orally 4 times a day for 14 days.

# Alternative if Erythromycin Cannot Be Tolerated

Amoxicillin 500 mg orally 3 times a day for 7 days (limited data exist concerning this regimen). Erythromycin estolate is contraindicated during pregnancy, since drug-related hepatotoxicity can result.

# Sex Partners of Patients with C. trachomatis Infections

Sex partners of patients who have C. trachomatis infection should be tested and treated for C. trachomatis if their contact was within 30 days of onset of symptoms.
If testing is not available, they should be treated with the appropriate antimicrobial regimen.

# Bacterial STD Syndromes

# Nongonococcal Urethritis

Among men with urethral symptoms, nongonococcal urethritis (NGU) is diagnosed by Gram stain demonstrating abundant polymorphonuclear leukocytes without intracellular gram-negative diplococci. C. trachomatis has been implicated as the cause of NGU in about 50% of cases. Other organisms that cause 10%-15% of cases include Ureaplasma urealyticum, T. vaginalis, and herpes simplex virus. The cause of other cases is unknown.

# Recommended Regimen

Doxycycline 100 mg orally 2 times a day for 7 days or Tetracycline 500 mg orally 4 times a day for 7 days.

# Alternative Regimen

Erythromycin base 500 mg orally 4 times a day or equivalent salt for 7 days or Erythromycin ethylsuccinate 800 mg orally 4 times a day for 7 days. If high-dose erythromycin schedules are not tolerated, the following regimen is recommended: Erythromycin ethylsuccinate 400 mg orally 4 times a day for 14 days or Erythromycin base 250 mg orally 4 times a day or equivalent salt for 14 days.

# Management of Sex Partners

Sex partners of men with NGU should be evaluated for STD and treated with an appropriate regimen based on the evaluation.

# Recurrent NGU Unresponsive to Conventional Therapy

Recurrent NGU may be due to lack of compliance with an initial antibiotic regimen, to reinfection due to failure to treat sex partners, or to factors currently undefined. If noncompliance or reinfection cannot be ruled out, repeat doxycycline (100 mg orally 2 times a day for 7 days) or tetracycline (500 mg orally 4 times a day for 7 days). If compliance with the initial antimicrobial agent is likely, treat with one of the above-listed regimens. If objective signs of urethritis continue after adequate treatment, these patients should be evaluated for evidence of other causes of urethritis and referred to a specialist.

# Mucopurulent Cervicitis

The presence of mucopurulent endocervical exudate often suggests mucopurulent cervicitis (MPC) due to chlamydial or gonococcal infection. Presumptive diagnosis of MPC is made by the finding of mucopurulent secretion from the endocervix, which may appear yellow when viewed on a white cotton-tipped swab (positive swab test). Patients with MPC should have a Gram stain and culture for N. gonorrhoeae, a test for C. trachomatis, and a wet mount examination for T. vaginalis.

# Recommended Regimen

If N. gonorrhoeae is found on Gram stain or culture of endocervical or urethral discharge, treatment should be the same as that recommended for uncomplicated gonorrhea in adults, including cotreatment for chlamydial infection. If N. gonorrhoeae is not found, treatment should be the same as that recommended for chlamydial infection in adults.

# Management of Sex Partners

Sex partners of women with MPC should be evaluated for STD and treated with an appropriate regimen based on the evaluation.

# Epididymitis

Among sexually active heterosexual men <35 years of age, epididymitis is most likely caused by N. gonorrhoeae or C. trachomatis. Specimens should be obtained for a urethral smear for Gram stain and culture for N. gonorrhoeae and C. trachomatis and for a urine culture. Empiric therapy based on the clinical diagnosis is recommended before culture results are available.

# Recommended Regimen

Ceftriaxone 250 mg IM once and Doxycycline 100 mg orally 2 times a day for 10 days or Tetracycline 500 mg orally 4 times a day for 10 days.
# Pelvic Inflammatory Disease Treatment Guidelines

Pelvic inflammatory disease (PID) comprises a spectrum of inflammatory disorders of the upper genital tract in women. PID may include endometritis, salpingitis, tubo-ovarian abscess, and pelvic peritonitis. Sexually transmitted organisms, especially N. gonorrhoeae and C. trachomatis, are implicated in most cases; however, endogenous organisms, such as anaerobes, gram-negative rods, streptococci, and mycoplasmas, may also be etiologic agents of disease. A confirmed diagnosis of salpingitis and a more accurate bacteriologic diagnosis are made by laparoscopy. Since laparoscopy is not always available, the diagnosis of PID is often based on imprecise clinical findings and on culture or antigen detection tests of specimens from the lower genital tract.

Guidelines for the treatment of patients with PID have been designed to provide flexibility in therapeutic choices. PID therapy regimens are designed to provide empiric, broad-spectrum coverage of likely etiologic pathogens. Antimicrobial coverage should include N. gonorrhoeae, C. trachomatis, gram-negative rods, anaerobes, group B streptococcus, and the genital mycoplasmas. Limited data demonstrate that effective treatment of the upper genital tract pyogenic process will decrease the incidence of long-term complications such as tubal infertility and ectopic pregnancy.

As for all intra-abdominal infections, hospitalization is recommended whenever possible, and particularly when 1) the diagnosis is uncertain; 2) surgical emergencies such as appendicitis and ectopic pregnancy cannot be excluded; 3) a pelvic abscess is suspected; 4) the patient is pregnant; 5) the patient is an adolescent (the compliance of adolescent patients with therapy is unpredictable, and the long-term sequelae of PID may be particularly severe in this group); 6) severe illness precludes outpatient management; 7) the patient is unable to follow or tolerate an outpatient regimen; 8) the patient has failed to respond to outpatient therapy; or 9) clinical follow-up within 72 hours of starting antibiotic treatment cannot be arranged. Many experts recommend that all patients with PID be hospitalized so that treatment with parenteral antibiotics can be initiated.

Selection of a treatment regimen must consider institutional availability, cost-control efforts, patient acceptance, and regional differences in antimicrobial susceptibility. These treatment regimens are recommendations only, and the specific antibiotics named are examples. Treatments used for PID will continue to be broad spectrum and empiric until more definitive studies are performed.

# Inpatient Treatment

One of the following:

# Recommended Regimen A

Cefoxitin 2 g IV every 6 hours, or cefotetan* 2 g IV every 12 hours, plus Doxycycline 100 mg every 12 hours orally or IV.

The above regimen is given for at least 48 hours after the patient clinically improves. After discharge from the hospital, continuation of: Doxycycline 100 mg orally 2 times a day for a total of 10-14 days.

# Recommended Regimen B

Clindamycin 900 mg IV every 8 hours plus Gentamicin loading dose IV or IM (2 mg/kg) followed by a maintenance dose (1.5 mg/kg) every 8 hours.

The above regimen is given for at least 48 hours after the patient improves. After discharge from the hospital, continuation of: Doxycycline 100 mg orally 2 times a day for 10-14 days total.
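Regimen B's gentamicin dosing is likewise weight-based: a 2 mg/kg loading dose followed by 1.5 mg/kg every 8 hours. A worked example of that arithmetic follows; the 60-kg weight is hypothetical, and the sketch is illustrative only. It does not adjust for renal function, one reason many practitioners elect to monitor serum levels.

```python
weight_kg = 60.0                    # hypothetical patient weight
loading_mg = 2.0 * weight_kg        # 120 mg loading dose, IV or IM
maintenance_mg = 1.5 * weight_kg    # 90 mg per maintenance dose
doses_per_day = 24 // 8             # every 8 hours -> 3 doses per day
daily_total_mg = maintenance_mg * doses_per_day

print(f"load {loading_mg} mg, then {maintenance_mg} mg q8h "
      f"({daily_total_mg} mg/day)")  # load 120.0 mg, then 90.0 mg q8h (270.0 mg/day)
```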
Continuation of clindamycin, 450 mg orally, 5 times daily, for 10-14 days, may be considered as an alternative. Continuation of medication after hospital discharge is important for the treatment of possible C. trachomatis infection. Clindamycin has more complete anaerobic coverage. Although limited data suggest that clindamycin is effective against C. trachomatis infection, doxycycline remains the treatment of choice for patients with chlamydial disease. When C. trachomatis is strongly suspected or confirmed as an etiologic agent, doxycycline is the preferable alternative. In such instances, doxycycline therapy may be started during hospitalization if initiation of therapy before hospital discharge is thought likely to improve the patient's compliance.

# Rationale

Clinicians have extensive experience with both the cefoxitin/doxycycline and clindamycin/aminoglycoside combinations. Each of these regimens provides broad coverage against polymicrobial infection. Cefotetan has properties similar to those of cefoxitin and requires less frequent dosing. Clinical data are limited on third-generation cephalosporins (ceftizoxime, cefotaxime, ceftriaxone), although many authorities believe they are effective. Doxycycline administered orally has bioavailability similar to that of the IV formulation and may be given if normal gastrointestinal function is present.

*Other cephalosporins, such as ceftizoxime, cefotaxime, and ceftriaxone, which provide adequate gonococcal, other facultative gram-negative aerobic, and anaerobic coverage, may be utilized in appropriate doses.

Experimental studies suggest that aminoglycosides may not be optimal treatment for gram-negative organisms within abscesses, but clinical studies suggest that they are highly effective in the treatment of abscesses when administered in combination with clindamycin. Although short courses of aminoglycosides in healthy young women usually do not require serum-level monitoring, many practitioners may elect to monitor levels.

# Ambulatory Management of PID

# Recommended Regimen

Cefoxitin 2 g IM plus probenecid 1 g orally concurrently, or ceftriaxone 250 mg IM, or an equivalent cephalosporin, plus Doxycycline 100 mg orally 2 times a day for 10-14 days or Tetracycline 500 mg orally 4 times a day for 10-14 days.

# Alternative for Patients Who Do Not Tolerate Doxycycline

Erythromycin, 500 mg orally 4 times a day for 10-14 days, may be substituted for doxycycline/tetracycline. This regimen, however, is based on limited clinical data.

# Management of Sex Partners

Sex partners of women with PID should be evaluated for STD. After evaluation, sex partners should be empirically treated with regimens effective against N. gonorrhoeae and C. trachomatis infections.

# Intrauterine Device (IUD)

The intrauterine device is a risk factor for the development of pelvic inflammatory disease. Although the exact effect of removing an IUD on the response of acute salpingitis to antimicrobial therapy and on the risk of recurrent salpingitis is unknown, removal of the IUD is recommended soon after antimicrobial therapy has been initiated. When an IUD is removed, contraceptive counseling is necessary.

# Sexually Transmitted Enteric Infections

Sexually transmitted gastrointestinal syndromes include proctitis, proctocolitis, and enteritis. With the exception of rectal gonococcal infection, these syndromes occur predominantly in homosexual men who participate in receptive anal intercourse.
Evaluation should include additional diagnostic procedures, such as anoscopy or sigmoidoscopy, stool examination, and culture.

Proctitis is inflammation limited to the rectum (the distal 10-12 cm) and is associated with anorectal pain, tenesmus, constipation, and discharge. N. gonorrhoeae, C. trachomatis, and HSV are the most common sexually transmitted pathogens involved. Among patients coinfected with HIV, herpes proctitis may be especially severe.

Proctocolitis is associated with symptoms of proctitis plus diarrhea and/or abdominal cramps, and the colonic mucosa is inflamed proximal to 12 cm. Etiologic agents include Campylobacter jejuni, Shigella spp., Entamoeba histolytica (amebiasis), and, rarely, T. pallidum or C. trachomatis (often LGV serovars). CMV may be involved in patients coinfected with HIV.

Enteritis in homosexual men usually results in diarrhea without signs of proctitis or proctocolitis. In otherwise healthy patients, Giardia lamblia is most commonly implicated. In patients also coinfected with HIV, CMV, Mycobacterium avium-intracellulare, Salmonella spp., Cryptosporidium, and Isospora must be considered. Special stool preparations are required to diagnose giardiasis or cryptosporidiosis. Additionally, some cases of enteritis may be a primary effect of HIV infection.

All patients with sexually transmitted enteric infections should be counseled and tested for HIV infection. Treatment recommendations for all enteric infections are beyond the scope of these guidelines. However, acute proctitis of recent onset in an individual who has recently practiced unprotected receptive anal intercourse is most often sexually transmitted. Such patients should be examined by anoscopy and should be evaluated for infection with N. gonorrhoeae, C. trachomatis, HSV, and T. pallidum. Treatment can be based on a specific etiologic diagnosis or can be empiric.

# Empiric Treatment for Sexually Transmitted Proctitis

# Recommended Regimen

Ceftriaxone 250 mg IM plus doxycycline 100 mg orally 2 times a day for 7 days provides adequate treatment for gonorrhea and chlamydial infection.

# Vaginal Diseases

# Trichomoniasis

# Recommended Regimen

Metronidazole 2 g orally in a single dose.

# Alternative Regimen

Metronidazole 500 mg twice daily for 7 days.

If failure occurs with either regimen, the patient should be retreated with metronidazole 500 mg twice daily for 7 days. If repeated failure occurs, the patient should be treated with a single 2-g dose of metronidazole daily for 3-5 days. Cases of additional culture-documented treatment failure in which reinfection has been excluded should be managed in consultation with an expert. Evaluation of such cases should include determination of the susceptibility of Trichomonas vaginalis to metronidazole.

# Treatment of Sex Partners

Sex partners should be treated with either the single-dose or the 7-day metronidazole regimen.

# Trichomoniasis During Pregnancy

Metronidazole is contraindicated in the first trimester of pregnancy, and its safety in the rest of pregnancy is not established. However, no other adequate therapy exists. For patients with severe symptoms after the first trimester, treatment with 2 g of metronidazole in a single dose may be considered.

# Vulvovaginal Candidiasis

Although generally not considered a sexually transmitted disease, vulvovaginal candidiasis is frequently diagnosed in women presenting with symptoms involving the genitalia. Treatment with antibiotics predisposes women to the development of vulvovaginal candidiasis.
# Treatment

Many treatment regimens are effective, but 3- and 7-day regimens are superior to single-dose therapy.

# Recommended Regimen

Examples of effective regimens include the following: Miconazole nitrate (vaginal suppository 200 mg), intravaginally at bedtime for 3 days or Clotrimazole (vaginal tablets 200 mg), intravaginally at bedtime for 3 days or Butoconazole (2% cream 5 g), intravaginally at bedtime for 3 days or Terconazole 80 mg suppository or 0.4% cream, intravaginally at bedtime for 3 days.

# Hepatitis B Vaccination

# Recommendation for Hepatitis B Vaccination

Hepatitis B vaccine (several FDA-approved recombinant or plasma-derived preparations are available) in dosages as recommended by the manufacturer. The vaccination series requires an initial visit and two follow-up visits. Vaccine should not be administered in the gluteal (buttocks) or quadriceps (thigh) muscle. After vaccination, testing for antibody response is not routinely indicated unless the patient is infected with HIV.

# Post-Exposure Prophylaxis

Prophylactic treatment with hepatitis B immune globulin should be considered in the following situations: sexual contact with a patient who has active hepatitis B or who contracts hepatitis B; sexual contact with a hepatitis B carrier (blood test positive for hepatitis B surface antigen). Prophylactic treatment for sexual exposure should be given within 14 days of sexual contact. Assay for preexisting immunity to HBV may be cost-effective if the individuals concerned are from populations at high risk for HBV.

# Recommendation for Post-Exposure Prophylaxis

Hepatitis B immune globulin (HBIG) 0.06 ml/kg IM in a single dose, followed by initiation of the hepatitis B vaccine series as described above.

# Perinatal Infections

Pregnant women with HBV infection can transmit hepatitis B to their infants at delivery. Infants infected at birth are at high risk for contracting chronic hepatitis B infection. Such infection can be prevented by administering HBIG and hepatitis B vaccine to the infant. Therefore, all pregnant women should be screened during their first obstetrical visit for the presence of HBsAg. If they are found to be HBsAg-positive, their newborns should be given HBIG as soon as possible after birth and subsequently should be immunized with hepatitis B vaccine. Hepatitis guidelines are updated periodically by the Immunization Practices Advisory Committee and are published in the MMWR. Reference to the most current recommendations is advised. Persons at risk for sexual transmission of hepatitis B are also at risk for HIV and other STD. HIV coinfection reduces the humoral response to hepatitis B vaccine.

# Alternative Regimen (Scabies)

Crotamiton (10%) applied to the entire body from the neck down for 2 nights and washed off thoroughly 24 hours after the second application.

# Sexual Assault and STD

Recommendations are limited to the identification and treatment of sexually transmitted infections. Matters concerning the sensitive management of potential pregnancy and of physical and psychological trauma are important and should be addressed, but they are beyond the scope of these guidelines. Victims of sexual assault are evaluated both to provide necessary medical services and to identify and collect forensic evidence. Although some information may be useful for the medical management of a victim, this information may not be admissible in court.
Some STD, such as gonorrhea and syphilis, are almost exclusively transmitted sexually and may be useful markers of sexual assault. BV is commonly found after assault but is highly prevalent in most populations, which limits its usefulness as a marker of assault. Nonculture tests have lower sensitivity and specificity than culture techniques.

# Sexual Assault

Any sexually transmissible agent, including HIV, may be transmitted during an assault. Few data exist on which to establish the risk of an assaulted person's acquiring an STD. The risk of acquiring gonococcal and/or chlamydial infections appears to be highest. Inferences about STD risk may be based on the known prevalences of these diseases in the community. If the suspected assailant is identified, that individual should be evaluated for STD to the extent possible under the law. The presence of STD within 24 hours of the assault may represent prior infection and not assault-acquired disease. Furthermore, some syndromes, such as BV, may be nonsexually transmitted.

# Treatment

Treatment should be given for any infection identified on examination or for any infection identified in the assailant. Although the risk of infection is frequently low, the use of presumptive treatment is controversial. Some experts recommend presumptive treatment for all victims of sexual assault, whereas others reserve presumptive treatment for special circumstances, for example, when follow-up examination of the victim cannot be ensured or when treatment is specifically requested by the patient. Although no regimen covers all potential pathogens, the following regimens should be effective against gonorrhea, chlamydia, and, most likely, syphilis.

# Empiric Regimen for Victims of Sexual Assault

Ceftriaxone, 250 mg, given IM, to be followed by either Doxycycline 100 mg orally 2 times a day for 7 days or Tetracycline HCl 500 mg orally 4 times a day for 7 days.

# Sexual Assault and Abuse of Children

The identification of a sexually transmissible agent from a child beyond the neonatal period suggests sexual abuse. However, exceptions do exist; e.g., rectal and genital infection with C. trachomatis in young children may be perinatally acquired infection, which can persist for up to 3 years. In addition, BV and genital mycoplasmas have been identified in both abused and nonabused children. A finding of genital warts, although suggestive of assault, is nonspecific without other evidence of sexual abuse. When the only evidence of sexual abuse is the isolation of an organism or the detection of antibodies, findings should be carefully confirmed.

# Evaluation

Among sexually abused children, the prevalence of STD appears relatively low; in most studies, C. trachomatis is the most frequently isolated organism. Sexually abused children are best managed by a team of professionals experienced in dealing with the many needs of children. Although testing for the diseases appropriate for children (Table 5) is essentially the same as that for adults, the special needs of children must be taken into account.

Recommended evaluation of suspected child abuse/assault:

• Since a child's report of assault may not be complete, specimens for culture for N. gonorrhoeae and C. trachomatis should be collected from the pharynx and rectum as well as from the vagina (girls) or urethra (boys).

• Internal pelvic examinations usually should not be performed unless indicated by the presence of a foreign body or by trauma.
• Follow-up visits should be scheduled so as to minimize trauma to the child; for asymptomatic children, an initial visit and one visit at 8-12 weeks may be sufficient.

• In cases of continuing abuse, the alleged offender may be available for medical evaluation. In such instances, care for the child may be modified when results of the evaluation of the offender are known.

# Treatment

Treatment before diagnosis is not indicated unless evidence shows that the assailant is infected. Presumptive treatment after assault may be given if the victim or victim's family requests it or if follow-up examination of the victim cannot be ensured.
# DISCLAIMER

Mention of company name or product does not constitute endorsement by the National Institute for Occupational Safety and Health.

# DHEW (NIOSH) Publication No. 77-200

# PREFACE

The Occupational Safety and Health Act of 1970 emphasizes the need for standards to protect the health and safety of workers exposed to an ever-increasing number of potential hazards in their workplace. Pursuant to the fulfillment of this need, the National Institute for Occupational Safety and Health (NIOSH) has developed a strategy of disseminating information about adverse effects of widely used chemical or physical agents, intended to assist employers in providing protection for employees from exposure to substances considered to possess carcinogenic, mutagenic, or teratogenic potential. This strategy includes the development of Special Occupational Hazard Reviews, which serve to support and complement the other major standards development or hazards documentation activities of the Institute. The purpose of Special Occupational Hazard Reviews is to analyze and document, from a health standpoint, the problems associated with a given industrial chemical, process, or physical agent, and to recommend the implementation of engineering controls and work practices to ameliorate these problems. While Special Occupational Hazard Reviews are not intended to supplant the more comprehensive NIOSH Criteria Documents, nor the brief NIOSH Current Intelligence Bulletins, they are nevertheless prepared in such a way as to assist in the formulation of regulations. Special Occupational Hazard Reviews are disseminated to the occupational health community at large, e.g., trade associations, industries, unions, and members of the scientific community.

This review addresses ethylene oxide (ETO), which is widely used as a gaseous sterilant; it is unique for this purpose. Alternative chemicals or processes have, in themselves, serious limitations or health hazards. NIOSH recognizes, therefore, that the continued use of ETO as a gaseous sterilant is highly desirable in many situations. Recent results of tests for mutagenesis have increased the concern for potential health hazards associated with exposure to ETO. In order to assess the potential for exposure and associated hazards, NIOSH has undertaken this Special Occupational Hazard Review. An assessment is made of the evidence for toxic effects of ETO, especially with respect to mutagenic, teratogenic, and carcinogenic potentials. Additionally, a limited field survey was conducted by NIOSH to document the use, problems, and potential for human exposure in medical facilities. The results of this survey were in agreement with data made available by the American Hospital Association, the U.S. Army, other Federal agencies, and industrial and professional organizations. Based on this review, measures for control of occupational exposure are recommended.

The acute toxic effects of ETO in man and animals include acute respiratory and eye irritation, skin sensitization, vomiting, and diarrhea. Known chronic effects consist of respiratory irritation and secondary respiratory infection, anemia, and altered behavior. The observations of (a) heritable alterations in at least 13 different lower biological species following exposure to ETO, (b) alterations in the structure of the genetic material in somatic cells of the rat, and (c) covalent chemical bonding between ETO and DNA support the conclusion that continuous occupational exposure to significant concentrations of ETO may induce an increase in the frequency of mutations in human populations.
At present, however, a substantive basis for quantitative evaluation of the genetic risk to exposed human populations does not exist. No definitive epidemiological studies, and no standard long-term carcinogenesis assays, are available on which to assess carcinogenic potential. Limited tests by skin application or subcutaneous injection in mice did not reveal carcinogenicity. However, the alkylating and mutagenic properties of ETO are sufficient bases for concern about its potential carcinogenicity. Neither animal nor human data are available on which to assess the potential teratogenicity of ETO.

NIOSH recommends that ETO be considered as mutagenic and potentially carcinogenic to humans, and that occupational exposure to it be minimized by eliminating all unnecessary and improper uses of ETO in medical facilities. Whenever alternative sterilization processes are available that do not present similar or more serious hazards to the employee, they should be substituted for ETO sterilization processes whenever possible. Although this review is limited to ETO, concern is also expressed for hazards from such hydration and reaction products of ETO as ethylene glycol and ethylene chlorohydrin, the latter a teratogen to some lower biological species.

This report includes a summary of the airborne ETO concentrations measured within health care facilities as part of the field survey. NIOSH estimates that there are more than 10,000 ETO sterilizers in use in U.S. health care facilities, and that approximately 75,000 workers are potentially exposed to ETO in those facilities. Reasons for the unnecessary exposure of personnel were found to include: improper or inadequate ventilation of sterilizers, aerators, and working spaces; improper handling and/or storage of sterilized items; untrained workers operating some sterilization equipment; improper operating techniques leading to mishandling of some ETO sterilizing equipment; poor design of the sterilization facility; and design limitations of the sterilization equipment.

NIOSH recommends, based on the recent results of tests for mutagenesis, that exposure to ETO be controlled so that workers are not exposed to a concentration greater than 135 mg/cu m (75 ppm), determined during a 15-minute sampling period, as a ceiling occupational exposure limit, and, in addition, that the time-weighted average (TWA) concentration limit of 90 mg/cu m (50 ppm) for a workday not be exceeded. As additional information on the toxic effects of ETO becomes available, this recommended level for exposures of short duration may be altered. The adequacy of the current U.S. ETO standard, which was based on the data available at the time of promulgation, has not been addressed in this report. Further assessment of other ETO exposure situations, and of the adequacy of the ETO occupational exposure standard, will be undertaken during the FY 80 development of a NIOSH criteria document for occupational exposure to epoxides. In the interim, NIOSH strongly recommends that control strategies, such as those described in this document, or others considered to be more applicable to particular local situations, be implemented to assure maximum protection of the health of employees. Good work practices will help to assure their safety. Where the use of ETO is to be continued, improved techniques of exhausting the gas from the sterilizer, the aerator, and the sterilized items need to be implemented.
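The two recommended limits work together: no 15-minute sample may exceed the 75-ppm ceiling, and the full-shift time-weighted average may not exceed 50 ppm. A minimal sketch of both compliance checks follows; the area-sample data are hypothetical.

```python
# Hypothetical 15-minute area samples (ppm) spanning an 8-hour shift: 32 x 15 min.
samples_ppm = [12, 15, 40, 68, 30, 22] + [10] * 26

TWA_LIMIT_PPM = 50   # recommended workday (8-hour TWA) limit
CEILING_PPM = 75     # recommended 15-minute ceiling

# With equal 15-minute sampling intervals, the 8-hour TWA reduces to the mean.
twa = sum(samples_ppm) / len(samples_ppm)
peak = max(samples_ppm)

print(f"8-hour TWA: {twa:.1f} ppm "
      f"({'within' if twa <= TWA_LIMIT_PPM else 'exceeds'} the {TWA_LIMIT_PPM}-ppm limit)")
print(f"Peak 15-minute sample: {peak} ppm "
      f"({'within' if peak <= CEILING_PPM else 'exceeds'} the {CEILING_PPM}-ppm ceiling)")
```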
Gas sterilization should be supervised, and the areas into which ETO may escape should be monitored to prevent all unnecessary exposure of personnel. When proper control measures are instituted, the escape of ETO into the environment will be greatly reduced. Under such control, the use of ETO as a gaseous sterilant in medical facilities can be continued with considerably less risk to the health of occupationally exposed employees.

# INTRODUCTION

Ethylene oxide (ETO) is a high-volume chemical used primarily as an intermediate in the production of ethylene glycol (27% of total consumption), polyethylene terephthalate polyester fiber and film (23%), non-ionic surface-active agents (13%), and ethanolamines (9%), with production of di- and tri-ethylene glycol, choline and choline chloride, and other organic chemicals consuming most of the remaining ETO. Although it is estimated that only about 0.02% of the total amount of ETO produced in the U.S. in 1976 was used for sterilization in medical facilities, this amounted to approximately 500,000 kg. ETO is registered with the U.S. Environmental Protection Agency as a fungicide for fumigation of books; dental, pharmaceutical, medical, and scientific equipment and supplies (glass, metals, plastics, rubber, or textiles); drugs; leather; motor oil; paper; soil; bedding for experimental animals; clothing; furs; furniture; and transportation vehicles, such as jet aircraft, buses, and railroad passenger cars. It has been used also to sterilize foodstuffs such as spices, cocoa, flour, dried egg powder, desiccated coconut, dried fruits, and dehydrated vegetables (Wesley et al, 1965), and to accelerate the "maturing" of tobacco leaves (Fishbein, 1969). At one time (Stehle et al, 1924), ETO was used briefly as a possible anesthetic agent but was discarded due to toxic effects. This Hazard Review Document pertains only to the use of ETO in sterilization of medical supplies and equipment within medical and related facilities. The term "medical facilities" will be used to include hospitals, nursing and "total care" homes, medical, dental, and veterinary clinics or facilities, and certain research laboratories affiliated with medical centers.

ETO is manufactured by the catalytic oxidation of ethylene with air (or oxygen) in the presence of a silver catalyst. Since 1972, this has been the only method used in the U.S. Wurtz, in 1859, prepared ETO from ethylene chlorohydrin and potassium hydroxide. Until 1957, the chlorohydrin process was the principal method of manufacturing ETO in the U.S. In March 1973, the 13 companies which produced ETO in the U.S. and Puerto Rico had a total production of 1,892 million kg. In 1976, the annual U.S. production of ETO had grown to approximately 2,100 million kg, placing this chemical within the top twenty-five chemicals (by volume) produced in the U.S. It is used extensively worldwide, with total production in Japan in 1974 of 415 million kg. European production in 1972 has been estimated at 865 million kg.

The current U.S. standard (OSHA) for occupational exposure to ETO is 50 parts per million (ppm), as a time-weighted average (TWA) concentration for an 8-hour exposure (29 CFR 1910.1000), which corresponds approximately to 90 milligrams per cubic meter of air (mg/cu m). The USSR has a standard of 0.5 ppm (1 mg/cu m), which was adopted in 1966. Standards of 50 ppm and 20 ppm (36 mg/cu m) are in effect in the Federal Republic of Germany and Sweden, respectively.
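The ppm/mg-per-cubic-meter correspondences quoted above follow from the ideal-gas molar volume: mg/cu m = ppm x molecular weight / 24.45, taking 24.45 L/mol at 25 C and 1 atm (that convention, and ETO's molecular weight of about 44.05 g/mol, are the assumptions in this sketch). The calculation reproduces the cited figures:

```python
MW_ETO = 44.05        # g/mol, ethylene oxide
MOLAR_VOLUME = 24.45  # L/mol for an ideal gas at 25 C and 1 atm

def ppm_to_mg_per_cu_m(ppm: float, mw: float = MW_ETO) -> float:
    """Convert a gas-phase concentration from ppm to mg/cu m."""
    return ppm * mw / MOLAR_VOLUME

print(ppm_to_mg_per_cu_m(50))   # ~90 mg/cu m: the U.S. 8-hour TWA standard
print(ppm_to_mg_per_cu_m(75))   # ~135 mg/cu m: the recommended 15-minute ceiling
print(ppm_to_mg_per_cu_m(0.5))  # ~0.9 mg/cu m: the USSR standard (quoted as 1)
```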
While this review includes all known biohazards of ethylene oxide, special emphasis was devoted to its potentials for exerting carcinogenic, mutagenic, and teratogenic effects. Recent reports of the mutagenic potential of ETO, coupled with the "Report of the Secretary's Commission on Pesticides and their Relationship to Environmental Health," have prompted a reassessment of the adequacy of the current U.S. occupational exposure standard, as well as a critical appraisal regarding re-registration of the compound by the Environmental Protection Agency (as required under the Federal Insecticide, Fungicide, and Rodenticide Act, FIFRA). This latter action has resulted in a reappraisal by the Department of Health, Education, and Welfare of the use of ETO for sterilization purposes within health care facilities. During the preparation of this review, a limited field survey of ETO use in medical facilities was conducted by NIOSH in order to better assess the actual occupational exposure situation. A summary of the results of the field survey is presented and serves as a basis for the recommended control measures. Extensive use was made of information provided by other federal agencies and from industrial, trade, and professional organizations. In addition, the "Draft Technical Standard and Supporting Documentation for Ethylene Oxide," prepared under the joint NIOSH/OSHA Standards Completion Program, assisted in the preparation of the engineering and control sections of this review. NIOSH was represented on the Ethylene Oxide Subcommittee of the Committee to Coordinate Toxicology and Related Programs (CCTRP), U.S. Department of Health, Education, and Welfare. The Subcommittee met during the period January 30-March 30, 1977. Certain sections of this NIOSH Special Hazard Review were provided to the CCTRP Subcommittee for inclusion in its Risk/Benefit Analysis of public health uses of ETO (HEW, 1977). The information and recommendations that follow should aid the U.S. Department of Labor, industrial hygienists, physicians, employers, and architects or other designers of health care facilities in protecting the worker from the hazards of ETO exposure. Furthermore, it will aid the worker in recognizing the hazard.

# I. PROPERTIES

This information has been compiled primarily from the Chemical Safety Data Sheet of the Manufacturing Chemists Association for Ethylene Oxide (No. SD-38), 1971, with supplementation from other sources.

Highly exothermic and potentially explosive with alkali metal hydroxides, or highly active catalytic surfaces (such as anhydrous chlorides of Fe, Sn, and Al, and oxides of Fe and Al), or when heated. Relatively non-corrosive to materials other than certain rubbers. Relatively stable in aqueous solution, and when diluted with CO2 or gaseous halocarbons. An alkylating agent which reacts directly (and virtually irreversibly) with -COOH, -NH2, -SH, and -OH groups. Reacts with the ring nitrogen of purine and pyrimidine bases, and the amino groups of amino acids and proteins.

9. explosive limits: 3 to 100 (% by vol in air).

10. flashpoint: -6 C (20 F) (Tag open cup).

# II. USES AND OCCURRENCE IN MEDICAL FACILITIES

The routine use of ETO in medical and related health care facilities is to sterilize heat-sensitive surgical instruments, equipment, and other objects (or fluids) that come in contact with biological tissue (particularly the vascular system), or extracorporeal equipment through which blood may flow.
The absence of all microbiological life forms such as viruses, bacteria, yeast, fungi, and especially persistent spore forms is essential in order to prevent infectious diseases in patients and animals. Complete sterilization by either heat or gaseous agents is essential for many purposes, although varying degrees of disinfection with chemical germicides (which may sharply reduce the populations of many vegetative forms) may be sufficient for some applications. Heat sterilization is normally the preferred method; however, this method cannot always be employed because of the heat-sensitive nature of some items. In addition, ETO gas sterilization is more economical for some applications, such as the industrial sterilization of inexpensive disposable, i.e., single-use, items such as syringes and needles.

# A. Characterization of Occupational Exposure During ETO Sterilization Procedures

There is current large-scale industrial use of ETO gas for sterilization of medical supplies and equipment because such use is effective and economical. In addition, alternate methods often are impractical, hazardous, undependable, or uneconomical. Gaseous ETO is generally used industrially for sterilization processing of disposable sterile kits containing items such as disposable syringes and needles, disposable microbiological laboratory supplies, and life-support items such as electronic cardiac "pacemakers," blood oxygenators, and dialyzers. An estimated 80% of all such items are processed by ETO gas sterilization in the U.S.; many could not be processed, regardless of cost, by any other currently available method. The majority of industrial ETO gas sterilization is performed in fewer than 50 large (greater than 1,000 cu ft) sterilizers and in approximately the same number of smaller industrial units. It has been reported that such ETO sterilizers are operated in accord with manufacturers' recommendations, industry safety regulations, state and local fire codes, and provisions of insurance underwriters. While this review contains information applicable to the few large industrial users of ETO for sterilization, it was primarily intended for medical facility applications (as defined in the Introduction).

There are approximately 8,100 hospitals in the U.S., of which about 7,200 are members of the American Hospital Association (AHA). The AHA estimates that 5,500 to 6,500 of its member hospitals have ETO gas sterilizers. Whereas the majority of these sterilizers are small tabletop units, it is estimated that 1,000-2,000 large sterilizers (permanent installations with chamber volumes greater than 4 cu ft) are also in use. Most hospitals have more than one ETO sterilizer. In addition to hospitals, ETO sterilizers are also used in smaller medical, dental, or veterinary clinics or facilities. While the exact number of sterilizers is not known, NIOSH estimates that it exceeds 10,000 units. Of the almost 2.1 billion kg of ETO currently produced annually in the U.S., it is estimated that 500,000 kg (0.02% of the total produced) are used for sterilization within medical facilities. This use is increasing. NIOSH (1977) estimates that approximately 75,000 health care workers employed in sterilizer areas are potentially directly exposed to ETO.
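As a quick consistency check on these figures, 500,000 kg out of roughly 2.1 billion kg of annual production does round to the 0.02% quoted:

```python
share = 500_000 / 2_100_000_000   # kg used in medical facilities / kg produced
print(f"{share:.4%}")             # 0.0238% -> rounds to the ~0.02% cited
```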
In addition, an estimated 25,000 others are "casually" exposed due to improper (or inadequate) venting of sterilizers and aerators, storage (or use) of improperly/incompletely aerated ETO-sterilized items, the physical arrangement of the sterilization facility or workroom (which necessitates passage in close proximity to a gas sterilizer or aerator), and mishandling or failure of the equipment (such as, leaking sterilizer door seals). Thus, the total number of exposed workers in medical and related facilities is estimated to exceed 100,000. The principal items processed in such hospital "gas" sterilizers are air-powered surgical instruments, anesthesia supplies and equipment, cardiac catheters, endoscopes and other equipment containing lenses intended to be introduced into the human body, humidifiers and nebulizers, implantable body parts, electronic "pacemakers," ophthalmic instruments, xray supplies and related equipment, respiratory therapy supplies and equipment, some medications, reusable supplies, thermometers, and equipment contaminated by use in "isolation rooms" (i.e., containing patients with infections). Note some significant differences between the items sterilzied by the "user" and the items sterilized by the "vendor". ETO is used in sterilizers produced by at least six different manufacturers. In certain small sterilizers it is used full strength, or diluted to a composition of 84% ETO with inert ingredients. In larger sterilizers a non-explosive sterilant mixture is commercially available, either 10% ET0/90% C02, tradenamed Carboxide (TM), or 12% ETO/88% halocarbon. Halocarbon products, such as , sold under tradenames such as Freon (TM) or U-con (TM), are used. The Federal standard for occupational exposure to dichlorodifluoromethane (Refrigerant-12) is 1,000 ppm (4,550 mg/cu m) as an 8-hour TWA concentration. No federal exposure limit exists for trichloromonofluoromethane (Refrigerant-11). The impact of the possible removal (for environmental considerations) of certain halocarbon-containing products from general use has not been considered in this report. # B. ETO Residues in Sterilized Medical Equipment Residues (or byproducts) are produced mainly by two reactions of ETO: (a) from its slow chemical combination with water to form glycols, or (b) from its combination with chloride ion in the presence of water to form the chlorohydrin. Since even dry materials contain some moisture, it is apparent that glycol formation is unavoidable. Moreover, without the presence of some moisture, ETO sterilization cannot be effected. However, traces of glycols have been generally regarded as relatively harmless and permissible for human exposure. The formation of persistent, toxic ethylene chlorohydrin in foodstuffs fumigated with ETO has been described (Wesley et al, 1965). Chlorohydrins are relatively non-volatile, and are considered highly toxic substances. The Federal standard (CFR 1910(CFR .1000 for occupational exposure to ethylene chlorohydrin is 5 ppm parts of air as an 8-hour TWA concentration. Attempts have been made to determine the conditions necessary for elimination of chlorohydrin residues from foods by volatilization and decomposition at elevated temperatures. In general, these attempts have not been successful. Nor would they be applicable for most sterilized medical products, particularly those which are heat sensitive. 
While no reference to the formation and levels of ethylene chlorohydrin in medical facilities was found, the possibility that it will be present cannot be ignored. The use of ETO for sterilization of medical devices and equipment raises a number of significant questions regarding (a) the possible entrapment of ETO in a plastic item that may then exert a toxic effect when placed in contact with living tissue, and (b) the effect of (sorbed) ETO on the physical and chemical properties of the rubber and plastic items. Plastic tubing that has been sterilized with ETO has caused significant hemolysis when placed in contact with human blood (Rose et al, 1953;Clarke et al, 1966;Bain and Lowenstein, 1967). For example, Bain and Lowenstein (1967) reported that when mixed leukocyte cultures were incubated in disposable plastic tubes sterilized with ETO, survival of the cells was severely affected by a toxic residue left on the plastic. The residue was dissipated only after 4 or 5 months' storage at room temperature, whereupon the survival of cells cultured in such stored tubes returned to values similar to those obtained with ultraviolet-sterilized (control) tubes. O' Leary and Guess (1968), in their study of the toxic properties of medical plastics sterilized with gaseous ETO, presented data demonstrating the ability of ETO to remain entrapped in non-closed systems, such as, surgical tubing, gas washing bottles, plastic syringes, or plastic bottles, at various temperatures above the ETO boiling point. The hemolyzing ability of known amounts of ETO was determined. Freshly gas-sterilized plastic pharmaceutical products were shown quantitatively to produce blood cell hemolysis in proportion to the amount of ETO remaining in the plastic. Additionally, the effects of plasticizers of the ester type upon the sorption of ETO into polyvinyl chloride (PVC) products was also described. The residual ETO could present a hazard in the sterilization of such devices as plastic syringes. For example the residual ETO gas might have toxic effects due to some of its oxidation products. ETO, if not removed, may be released at a later time (i.e., while under use) and cause hemolysis, erythema, and edema of the tissues. (Clarke et al, 1966;Bain and Lowenstein, 1967;O'Leary and Guess, 1968;Kulkarni et al, 1968;and Sykes, 1964). Other studies relating to the general problem of interaction of ETO with constituents of rubber and plastic have been reported. Downey (1950) demonstrated that the mercaptobenzothiazole vulcanization acceleraters found in rubber reacted rapidly with ETO to produce (hydroxyethyl-mercapto)-thioazole, despite the fact that residual ETO concentration in the rubber tubing had been reported to have been dissipated after 5 hours' aeration. Little is known about the parenteral toxicities of these compounds, or other possible reaction products in ETO-sterilized rubber. Cunliffe and Wesley (1967) have shown that ethylene chlorohydrin was given off from PVC tubing 6 days after ETO sterilization. Gunther (1965) also demonstrated that high concentrations of ETO can be taken up by polyethylene, gum rubber, and plasticized polyvinylchloride. This re emphasizes the general problem of entrapment of ETO within sterilized plastics. Reports of local skin irritation from contact with ETOsterilized plastic items have appeared . This irritation can become severe if sensitization occurs. Items sterilized with ETO must be properly aerated before application to the human body in order to prevent such adverse reactions. # C. 
Alternatives to the Use of ETO Sterilization The reported alternatives to ETO sterilization include: steam, dry heat, steam-formaldehyde, steam at sub-atmospheric pressure, wet pasteurization, radiation (including gamma-ray, X-ray, ultraviolet radiation, and electron beam exposure), liquid glutaraldehyde, liquid or gaseous formaldehyde, propylene oxide, liquid (or gaseous) betapropiolactone, epichlorohydrin, ethylene imine, glycidaldehyde (i.e., 2,3epoxy-l-propanal), hypochlorite, peracetic acid, methyl bromide, chloropicrin, and ozone. As a completely different strategy, single use, unsterilized disposable supplies have been considered by some. Although the safety of such items has been questioned, microbial contamination capable of causing disease is rarely present . A number of other chemicals or processes have been used (or considered) for sterilization of medically related items. For various reasons their general use is limited. # III. SUMMARY OF RESULTS OF FIELD STUDIES CONDUCTED IN MEDICAL FACILITIES In February 1977, NIOSH conducted a limited field survey of hospitals to gain some perspective on situations related to the use of ETO sterilizers that might result in exposure of hospital staff and patients to ETO. The survey involved four hospitals in a metropolitan area, selected to represent both large and small health care facilities. Although a survey limited to only four medical facilities obviously can not be used with confidence as a basis for describing the conditions of the actual wide-spread use of ETO, it nevertheless does present a general impression of the usage of this compound, and some of the potential problems related to its use. The brief summary of the results of the survey presented below is from a more detailed report . In addition to the data from the NIOSH field survey, information was obtained from other government agencies including EPA, FDA, CDC, and NIH, and from the U.S. Army and U.S. Navy. Information was also made available by the Health Industry Manufacturers Association (HIMA), the American Hospital Association (AHA) and their member group, the American Society for Hospital Central Service Personnel, the Manufacturing Chemists Association (MCA), the Association for the Advancement of Medical Instrumentation (AAMI), the American National Standards Institute (ANSI), the American Society of Hospital Engineers (ASHE), ETO sterilizer and aerator manufacturers, and ETO "gas" manufacturers. From the NIOSH field survey and the above sources, the following account of the use of ETO in medical facilities, and the problems incident to such use, was developed. The results obtained in the NIOSH survey are in agreement with a similar study conducted by the U.S. Army in three of its hospitals (Army, 1977), and with data obtained by a study recently conducted by the American Hospital Assoication (Runnells, 1977). # A. ETO Sterilizer Use, and Number of Workers Potentially Exposed to ETO The number of gas sterilizers per hospital observed in the survey ranged from 1 to 9 (average of 4), with each hospital having at least one large sterilizer in the Central Supply (CS) area. Smaller sterilizers were installed in one or more of the following locations: Operating Room (OR), Surgery, Cardiac Catheterization Laboratory (Cath Lab), Anesthesiology Department, Ear-Nose-Throat (ENT) Clinic, Dental Clinic, Intensive Care Unit (ICU), Urology Department, and Inhalation Therapy Clinic. 
Additionally, some hospital-related research facilities have gas sterilizers, used in connection with the banking and transplantation of human tissue and organs (i.e., Tissue Bank), and in veterinary facilities which maintain germ-free (i.e., gnotobiotic) animal colonies. The size of the hospitals in the NIOSH survey varied from 115 beds to 750 (immediately expandable to 1135), with the size of the average hospital being between 460 and 500 beds. Nationally, the number of gas sterilizers per "average" size (i.e., 200-300 beds) hospital appears to be 2. The frequency of operation of the large sterilizers (i.e., Central Supply) varies from 3-4 cycles in 24 hours to approximately one cycle every other day. Sterilizers in other areas generally were operated 2-5 times per week. These frequencies agree with data from other sources. The number of personnel directly involved in the sterilization process varied widely. Some facilities reported that only a few (3-5) workers actually operate (i.e., load/unload) the sterilizers and aerators, and then generally only one person at a time. Such a situation occurred mostly in the CS and specific research-related areas (such as the Tissue Bank). The Cath Lab or OR technical staff appear to operate the sterilizers on a rotating, random or assigned, basis with a somewhat larger number of people (3-15) directly involved. While the number of personnel who may be exposed during operation of the sterilizing equipment may be small, many more persons were potentially exposed for a variety of reasons. In one case, access to a main lavatory required that personnel walk directly in front of two large gas sterilizers and a large aerator. In other cases, persons passing between the workroom and stock room had to walk within 2 feet of the fronts of sterilizers located in a small room or area between the CS workroom and the CS stock/storage area. Another sterilizer was located in a small appendage (vestibule) to the OR workroom, adjacent to a doorway to a heavily trafficked hallway. Within a period of less than 15 minutes, more than 30 people were observed passing within 4 feet of the sterilizer and aerator. The aerator vented directly into the workroom, providing additional ETO exposure to workers in that area. In some facilities, staff from various departments withdrawing sterile supplies from the Central Supply walk directly into the sterilizer or stock areas. ETO can enter stock areas from improper ventilation of aeration areas, or by the placing in stock of incompletely aerated items, resulting in diffusion of ETO into the ambient air. While the number of potentially exposed persons may be large, depending on the actual physical arrangements, the number most likely to be exposed was estimated to be between 6 and 60 in each of the hospitals included in this study. This range agrees with an estimate obtained from a nation-wide survey of hospital central service/central sterile supply personnel conducted by the American Society for Hospital Central Service Personnel, of the American Hospital Association, in 1977. # B. Potential Exposure Situations Encountered in the Field Study In addition to the potential for accidental exposures due to the physical arrangements, in a few cases problems were observed in installation, maintenance,, and operation that could increase the unnecessary and/or inadvertent exposure of sterilizer operators to ETO. These are: 1. Improper or Inadequate Venting of Sterilizers: a. 
Exhaust vent passed through a window and ended wit 1 foot of the intake air duct of an air conditioner. While probably very little of the ETO is actually drawn back into the room by the air conditioner intake, it does illustrate a potential source of exposure. b. An exhaust pipe from a sterilizer discharged into an open floor drain and gave off a cloud of vapors, including ETO, steam, Freon, and possibly other components (such as ethylene chlorohydrin) from the sterilizer into the machinery-room. A local ETO concentration of approximately 8,000 ppm was measured 1 foot above such a drain during a sterilization cycle. High local concentrations of ETO could be significantly reduced by minor modifications of the physical facility. In general, these situations do not result in elevated concentrations of ETO in the operator's breathing zone (BZ), but may contribute to the low background concentration of ETO in the sterilizing area. c. Incomplete sterilizer flushing prior to opening its door allowed very high concentrations of ETO to enter the room immediately upon opening the door of the sterilizer. This could create a hazardous situation for the operator attending the door. Approximately 1,200 ppm was recorded at one installation immediately upon opening the chamber door. d. An operator was able to open one sterilizer during use, to add or remove items, even though its chamber contained ETO. This represents, potentially, a very hazardous situation for the equipment operator. e. Effluent gas from sterilizers and aerators is not treated to destroy all unreacted ETO as well as hazardous byproducts such as ethylene chlorohydrin, so as to render the resulting effluent innocuous. The development of an apparatus (such as a catalytic converter or combustion device) designed to ensure the complete destruction of unreacted ETO at the end of the sterilization procedure should be given high priority to ensure worker safety, as well as to prevent environmental pollution by ETO. # Improper Aeration of Sterilized Equipment: a. With only one notable exception, all aerator cabinets in the medical facilities visited were vented directly into the room in which they were installed, or into the machinery space behind the cabinet. Airborne ETO concentrations of 300 to 500 ppm were not unusual above and behind some aerator cabinets. b. Aeration was permitted on open shelves in the CS or O workroom or stock room areas in one facility. Values of 25-50 ppm were measured 1 foot above stored items which had been sterilized more than 24 hours earlier. c. Sterilized items were stored (following aeration?) in glass cabinets with tightly fitting doors in two facilities visited; an odor of ETO was obvious upon opening one of the cabinets, indicating improper aeration before storage, and lack of venting of the cabinet. d. Exposure to ETO sterilizer gaseous products is of obvious concern, as some technicians reported skin irritation from contact with recently sterilized items, while others reported that their eyes "watered" while removing items from the sterilizer. # Inadequate Room Ventilation: Ineffective or non-existant room ventilation was noted in some facilities, which allowed the buildup of ETO to a high level within the room. A 10 x 10 x 12 foot unventilated room housed a small sterilizer which was found to have a leaking door gasket. ETO concentration at the BZ near the sterilizer was greater than 1,000 ppm. # Malfunctioning or Leaking Equipment: a. 
A leaking door seal (gasket) on a sterilizer caused an extremely high value of airborne ETO in a small, non-ventilated and closed room. Results are cited above. b. ETO leaks were observed at some gas tank valves, threaded fittings, and near some chamber fittings and piping. This could be very serious for personnel entering poorly ventilated machinery spaces. Leaks producing instrument readings ranging from 400 to 3,000 ppm were located. # Improper Operating Procedures: Sterilized items were removed from the sterilizer and transported on carts through a heavily congested hallway to the aerator or storage area. High rates of off-gassing, 200-300 ppm 1 foot above the cart, were noted during the movement of the sterilized items. # C. Measurement of Ambient Levels of ETO The method used for the sampling and analysis of airborne ETO consisted of the absorption of ETO on charcoal, and gas chromatographic determination following desorption of the ETO with carbon disulfide . A series of measurements were obtained at various locations in rooms containing sterilizers and/or aerators, before, during, and after periods of operation of the equipment. Sampling was also conducted in other places of potential exposure, such as hallways, and vent exhaust áreas. In general, the concentrations of ETO in the employee's breathing zone (BZ) were below the current federal (0SHA) standard (CFR 1910.1000) of 50 parts per million (ppm) parts of air as a time weighted average (TWA) over an 8-hour day. General area air samples collected during a 5.5 hour period in front of a bank of two ETO sterilizers and an aerator contained only 1-2 ppm of ETO. However, many values were recorded which were at, or above, the ACGIH tentative Threshold Limit Value (TLV) for short term exposure (i.e. short-term exposure limit, STEL, of 75 ppm for short-time periods of less than 15 minutes). In fact, some very high short-term BZ values were observed, including a peak of 460 ppm within the first minute after opening the sterilizer door, lasting for about 1 minute. Even higher concentrations, for shorter periods, were noted on occasion. Although there is no absolute maximum or "ceiling" value for ETO, one should regard the recommended STEL as a level not to be exceeded. # D. Conclusions 1. The survey revealed a number of conditions which resulted in unnecessary, preventable exposure of hospital staff, and possibly patients, to ETO. Many of the conditons were due to faulty equipment or improper operating procedures. These can be corrected by minor modifications, e.g., venting floor waste drains, replacing leaking sterilizer door seals or pipe joints, and improved work practices, including techniques for removing items from the sterilizer and aerator. In addition, the operator should avoid contact with both ETO gas and liquid which may remain in the flexible connecting lines when changing sterilizer gas cylinders. Protective gloves and goggles are available, and should be worn by personnel changing gas cylinders or cleaning up following an accidental spill. Gas cylinders should be changed in such a manner as not to expose the technician to ETO. Transfer carts should be used to remove sterilized items from large sterilizers, and gloves and forceps should be used whenever possible to remove items from small sterilizers. This will minimize the inhalation of ETO by the operator, and the possibility of dermal contact with ETO. 3. 
Other conditions resulted from improper installation and ventilation of sterilizers, aerators, or rooms in which such equipment was installed. This may require more extensive modifications such as additional venting ducts, high velocity vacuum pick-up ducts at the sterilizer door, improved room ventilation, and measures to exhaust decontaminated effluent in such a way as to prevent exposure within the hospital. An absolute minimum of 10 air changes per hour should be provided to all rooms containing gas sterilizers, aerators, or stored ETOsterilized items. Air should not be recirculated in these spaces. Devices which completely destroy all unreacted ETO need to be developed. 4. Modification or engineering re-design of some sterilizer chambers may be necessary to assure effective displacement of the ETO gas following sterilization. It is desirable to have repeated air flushing of the sterilizer before it is opened. A power-operated door-opening device exists on some large sterilizers. It allows the operator to push a button, then walk away, while the sterilizer door slowly opens. If, after a suitable period of time to permit the chamber to "air out," a buzzer could call the operator back to unload the chamber, safer sterilizer operations would result. In lieu of the power door, an alternative might be for the operator, upon completion of the sterilization cycle, to open the chamber door approximately 6 inches and wait 5-15 minutes before removing a load from the sterilizer. Aerators (which are forced-draft, warm air cabinets) should not vent directly into the room; rather, they should be connected to exhaust ducts to carry effluent out of the work area. Aeration performed under ambient conditions (i.e., in the open, at room temperature and atmospheric pressure) is not generally used, due to the lengthy time required to eliminate ETO "residues" from the sterilized material. In the absence of aerators, removal of ETO from sterilized items should be permitted only in dedicated, well-ventilated areas, such as in hoods. Many items are being sterilized with ETO when they could be processed by other methods, e.g., steam or ultraviolet radiation. 6. Most (but not all) hospitals provide a "standard operating procedures" manual, containing ETO sterilization techniques, and conduct thorough on-the-job training programs including the sterilization procedures with ETO. 7. Medical and Health records of personnel employed in areas in which ETO is used were not maintained in all cases. The keeping of employee medical and health records, as well as occupational exposure records needs to be improved in some medical facilities. 8. Within the past few years, sterilizer and aerator equipment manufacturers appear to be making stronger recommendations in the instruction manuals, and in training aids, courses, seminars, and in advertising as to the proper ventilation, location, use, etc., of sterilizers and aerators. Some early equipment manuals did not contain such recommendations. It is necessary that architects and designers of health care facilities, supervisors of the equipment operators, facility safety staff, industrial hygienists, etc., rigorously follow the recommendations and instructions stated by the equipment manufacturers, to protect the worker from the hazards of ETO exposure. # IV. BIOLOGIC EFFECTS OF EXPOSURE TO ETO A. Effects in Animals and Lower Biological Systems 1. Acute Toxicity Acute lethality studies in animals have been performed in four species and by five different routes of exposure. 
A wide array of responses follow acute exposure to ETO. These include: nausea, salivation, vomiting, diarrhea, lacrimation, nasal discharge, edema of the lungs, gasping, labored breathing, paralysis (particularly of the hind quarters), convulsion, and death. Deaths which occurred shortly after exposure to ETO were attributed primarily to lung edema, whereas delayed deaths often resulted from secondary infection of the lungs along with general systemic intoxication (Patty, 1963). # a. Inhalation Of the acute lethal studies, the inhalation studies of Jacobson et al (1956) are the most pertinent to occupational exposure. In those experiments, exposures to various ETO concentrations for 4-hour periods resulted in LC50's of 835 ppm for female mice, 1,460 ppm for male rats, and 960 ppm for male dogs. Hine and Rowe (1973) compiled data on inhalation exposures to illustrate the variable lethal response by species, concentration, and duration of exposure. (Table 1). In general, no deaths were reported at ETO exposure levels of 250-280 ppm for rats, guinea pigs, rabbits, cats, and dogs. b. Oral and Parenteral LD50's resulting from oral and parenteral exposures ranged between 141 and 631 mg/kg, and are listed in Table 2. Additional support for this range is provided by Patty (1963) who reported LD50's (following intragastric administration of 1 per cent aqueous solutions) of 330 mg/kg for rats, and 270 mg/kg for guinea pigs. Weil et al (1963) reported the single oral LD50 for rats of 330 (range 290-360) mg/kg. In another study reported by Patty (1963), a single 200 mg/kg dose of ethylene oxide given intragastrically (as a 10 per cent solution in olive oil), killed all 5 rats in the group, whereas all animals survived a dose of 100 mg/kg. # c. Tissue (including eye) Irritation The results of studies to determine the acute eye and tissue irritant properties of ETO (in aqueous solution) are summarized in Table 3 . The Draize (1965) system of evaluation, consisting of estimates of the highest doses having no overt actions at various sites after various methods of administration, was used in these studies. Thickening of skin and ecchymoses followed subcutaneous injection in the guinea pig, while mild irritation occurred after intradermal injection and dermal application in the rabbit. No local reactions were observed after intramuscular injections. As illustrated in Table 3, the irritant properties varied with both concentration and total dose. # Ocular Effects: The cornea and conjunctivae appear to be less sensitive to ETO than the skin of the rabbit. The reactions to the dermal and ocular applications most closely resemble those seen in actual occupational exposures. Woodard and Woodard (1971) reported slight irritation in the rabbit eye, with lacrimation and conjunctival erythema. In a series of investigations by McDonald et al (1973), ETO in a balanced salt solution was administered by both ocular instillation and injection into the anterior chamber of the rabbit eye. The maximal concentration which did not produce substantial ocular pathology varied with the specific ocular tissue examined and the route of administration, ranging from 0.1% to greater than 20%. (Table 4). Irritant effects observed after acute ocular instillation were discharge, iritis, corneal cloudiness and damage as evidenced by fluorescein staining. Conjunctival congestion, flare, iritis, corneal opacity, and fluorescein staining were observed after administration into the anterior chamber. 
Injection into the anterior chamber had more effect on the iris, the lens, and the retina than instillation into the conjunctival sac. Conversely, the latter procedure affected the cornea and the conjunctivae more markedly than the former. Neither differential effect is exceptional. (Alcon Submission, 1973) The irritation potential of a commercial ophthalmic ointment, the components of which were sterilized with ETO, was compared with that of the product prepared from non-sterilized components. The materials were applied to the eye of the rabbit in a 6-hour heavy dosing regimen (0.5 mg/dose at 20-minute intervals for 6 hours), and in a 5-day dosing regimen more comparable to clinical use (0.1 mg/dose, 5 applications/day for 5 days). Minimal conjunctival congestion was seen after the applications of ointment, but no difference was seen between the sterilized and non-sterilized products. (Alcon Submission, 1973) 2. Sub-Chronic Toxicity a. Oral and Parenteral ETO was administered to rats by gavage, 5 days/week, for 3 or 4 weeks, (Hollingsworth et al, 1956), and to rats and dogs by daily subcutaneous injections for 30 days (Woodard and Woodard, 1971). The results of these studies are summarized in Table 5. The no-effect levels for the rat were 30 mg/kg by the oral route, and 18 mg/kg by subcutaneous injection. b. Inhalation Hollingsworth et al (1956) and Jacobson et al (1956) conducted studies in which various animal species were exposed repeatedly to ETO vapor at concentrations ranging from 100 to 841 ppm. The studies are summarized in Table 6. At 100 ppm there was anemia in 1 of the 3 dogs, and 8 of 30 mice and 3 of 20 rats died during 130 exposures of 6-hour duration. At 113 ppm, no rats (of 40), guinea pigs (of 16), rabbits (of 4), or monkeys (of 2) died during 122 to 157 exposures of 7-hour duration. The male rats did have some depression of their growth. Rats of both sexes exhibited increased lung weight. However, at 204 ppm growth depression was noted, an appreciable number of rats died of secondary respiratory infection, and rabbits and monkeys developed posterior paresis. Rats and guinea pigs exhibited increased lung weights also. At higher concentrations more serious conditions developed, including severe nervous and respiratory effects and degeneration of testicular tubules. On the basis of these studies, Hine and Rowe (1973) proposed permissible exposure limits of 100 ppm for repeated exposures of 4 hours/day for a 2-week period, and 50 ppm for 7-hour exposures on a continuing basis. # Chronic Toxicity No chronic test data could be found. NIOSH has been informed that a 2-year vapor inhalation study on rats began on April 27, 1977, at the Camegie-Mellon Institute of Research, Pittsburgh, Pennsylvania. That study is supported by industry and will include cytogenic, mutagenic, and teratogenic evaluations, and a onegeneration reproduction study. A similar 2-year study on mice will also be performed. The No standard long-term carcinogenicity bioassays have been reported, although two screening experiments have been conducted. Walpole (1957) subjected 12 "stock" rats to repeated subcutaneous injections of 1 g/kg (bw) ETO in arachis oil. The exact dosing schedule was not reported, although the period of injection was 94 days. The animals were maintained for their lifetimes, during which no local sarcomas or other tumors were observed. 
In other studies, Van Duuren et al (1965) applied (by brush) 0.1 ml of a 10% solution of ETO in acetone three times each week onto the clipped dorsal skins of 30 female ICR/Ha Swiss mice. The animals were 8 weeks of age at the start of the skin painting, which was continued for their lifetimes. The median survival time was 493 days; no skin tumors were observed. Two other studies related to carcinogenicity have been conducted. Jacobson et al (1956) exposed rats, mice, and dogs to 100 ppm of ETO (6 hours/day, 5 days/week) for 6 months. There were no significant pathologic changes suggestive of a carcinogenic response. The only observed effects were decreases in red blood cell count, hemoglobin, and hematocrit in dogs. It is felt by scientists at the National Cancer Institute and the International Agency for Research on Cancer that this study was not conducted for a sufficiently long period to test the possibility that ETO causes cancer. In the other "study" (actually, a retrospective analysis of events), 86 female Swiss-Webster germ-free mice were inadvertently maintained on ETO-treated ground-corncob bedding for 150 days, and were then moved to untreated bedding for the remainder of their lives (maximum lifespan = 900 days). Tumors were observed at various sites in 63 mice. In contrast, no tumors were reported in 83 female mice that had not been exposed to the ETO-treated bedding, but were observed for 100-160 days. (Reyniers et al, 1964). The most common tumors in the exposed group were ovarian, lymphoid (malignant lymphoma), and pulmonary. While the effect is noteworthy, it is not known whether it was due to ethylene oxide, ethylene glycol or some other unknown chemical (or factor). # b. Mutagenicity Experimental attempts to induce mutations in at least 14 different species through exposure to ETO have been reported. An increased frequency of mutations was observed in 13 of the test species, the exception being a bacteriophage of Escherichia coli . The data for the species, loci/locus, mutation type, exposure time, exposure route, exposure concentration, and the significant results are summarized in Table 7. The data, indicate that several different types of genetic damage may be induced following exposure to ethylene oxide. The induction of point mutations has occurred in both procaryotic and eucaryotic organisms . While the point mutations probably were of the base-pair substitution type in Salmonella typhimurium, the mutational spectra have not been well characterized at the molecular level in other species . In Drosophila melanogaster, the classes of induced mutations included sex-linked lethals, visibles, and "minutes" . In plants, the classes of heritable changes include mutations at the chlorophyll and waxy loci and chromosomal abnormalities . Some evidence suggests that ETO may induce heritable changes in mammals, although no direct observations of such changes, comparable to those observed in lower organisms, have been reported in mammalian systems. Following three 7-hour/day exposures to 250 ppm of ETO, isochromatid and chromatid gaps and breaks were observed in bone-marrow cells sampled 24 hours after the last exposure of the rats . The frequency of gaps and breaks as a function of total exposure time or total dose was not reported. Following a single oral dose of 9 mg/kg, a significant number (P less than 0.007) of chromosomal abnormalities was observed in the red marrow cells of the femurs of rats. 
Neither doseresponse data nor the rate of return to a normal chromosomal picture was reported . The dose-response relationship for the induction of micronucleated cells in femoral marrow of Long-Evans rats as a function of the concentration of inhaled ETO was studied by Embree (1975), using groups of 5 rats exposed to 0, 10, 25, 50, 250, or 1,000 ppm of ethylene oxide for 4 hours. A statistically significant increase (P less than 0.05) of micronuclei were induced at the 50, 250, and 1,000 ppm levels. While several alkylating agents which are known to induce mutations are also known to induce the formation of micronuclei in bone marrow cells, the biological significance of this adverse effect is poorly defined. These results suggest that the current federal standard of a time-weighted average (TWA) concentration of 50 ppm parts of air for an 8-hour day may not protect exposed employees from all adverse effects resulting from ETO exposure. The most plausible molecular basis for the induction of heritable changes and other genetic effects following exposure to ethylene oxide is the alkylation of cellular constituents, including deoxyribonucleic acid (DNA). The highly-strained 3-membered ring of the epoxide is broadly reactive toward all classes of cellular nucleophiles. Reactions of ETO with protein , and with nucleic acid have been measured. The primary identifiable product of the reaction of ETO and DNA was 7hydroxyethylguanine, although a number of other minor products would also be anticipated. Since the occurrence of these chemical reactions is a consequence of the intrinsic characteristics of the chemical structures, it seems probable that analogous reactions may occur in humans who are occupationally exposed to ETO. These spontaneous reactions result in the formation of covalently altered biomolecules which may possess abnormal functional properties. In addition to the spontaneous reactions, ETO may also be consumed via detoxifing enzyme-catalysed reactions such as epoxide hydratase and glutathione-S-alkyl tranferase . The rate of consumption of ETO by such enzyme reactions would be a non-linear function of the local concentrations of the substrate. Since the non catalyzed half-time of the reaction of ETO with water is about 4,200 minutes, whereas the in vivo half-life in the mouse is only about 9 minutes , it appears that such enzymatic reactions may rapidly detoxify a large fraction of the absorbed doses in mammals. At a minimum, the greatly reduced half-life of ETO in vivo suggests a non-linear dose-response relationship for induction of genetic effects in mammals due to the existence of saturable detoxifing mechanisms. Further research is required to establish the relative magnitudes of the competing reactions as functions of the doses and of the exposure times. The observations of: (a) heritable alterations in at least 13 different species, (b) alterations in the structure of the genetic material in somatic cells of the rat, and (c) a covalent chemical reaction between ETO and DNA, support the conclusion that continuous occupational exposure to significant concentrations of ETO may induce an increase in the frequency of mutations in human populations. Embree and Ehrenberg et al have attempted to estimate the degree of genetic risk to human populations exposed to ETO by comparing the results with the effects of ionizing radiation, and expressing the result in terms of the "rad-equivalent." 
Embree's estimate (100 mrad-equivalences/ppm-hr) was based on the frequency of micronuclei in bone marrow following exposure to graded doses of ethylene oxide. At present, the mechanism of generation of micronucleated cells in bone marrow is not known, although it is possible that these cells may arise from either the non-disjunction of the chromosomes during mitosis, or malformation of the spindle. While it is conceivable that somewhat related processes may occur in functionally distinct tissue, such as spermatogonia, neither quantitative nor qualitative evidence is available to indicate the magnitude of the effect in the genetically important cell-types following exposure to ethylene oxide. Consequently, Embree's estimate appears to be highly speculative and with minimal theoretical or experimental basis. The estimate by Ehrenberg et al of the risk to humans (reported as 20 mrad-equivalences/ppm-hr) is based on the assumption that the rate and mechanism of induction of mutations affecting chlorophyll in metabolically dormant barley seeds were equivalent to the rate of induction of specific locus mutations in the metabolically active tissues of the testes of mammals. The reliability of this estimate as a quantitative measure of genetic risk to human populations exposed to ETO is open to question. In addition to the obvious physiologic differences between plants and mammals, it should be noted that forward chlorophyll mutations in barley seeds may occur at several hundred loci while the specific locus test is limited to one locus or a small number of loci . Consequently, the relative risk estimated by Ehrenberg et al may be in error by several orders of magnitude. Thus, the estimates of genetic risk by Embree and Ehrenberg et al probably do not, by themselves, provide a satisfactory basis for the development of occupational exposure standards. Nevertheless, the observations of induction of heritable changes in a broad spectrum of biological organisms, and of the intrinsic chemical reactivity of ETO with cellular constituents, suggest that ETO may have a significant influence on the spontaneous mutation rates of human populations. Kalling briefly reported data, obtained from an experiment performed by Ehrenberg and Hallstrom, on the frequency of pathological mitoses observed in phytohaemaglutinin-stimulated lymphocytes of 7 workers who had been "strongly affected" by ETO following an industrial accident 18 months earlier. Analysis of from 6 to 26 metaphases from each individual suggested an increased frequency of pathological mitoses relative to the frequency in 10 non-exposed workers. Railing's brief description of the results are insufficient to draw substantive conclusions. However, this report of induction of chromosomal abnormalities in humans is consistant with the observations of Embree in the rat. # c. Reproductive Effects/Teratogenicity No data concerning reproductive or teratogenetic effects were found for ETO. Ethylene chlorohydrin (2-chloroethonol), a reaction product of ETO, was shown to cause pronounced teratogenic effects (i.e., anterior hydrophthalmos, a form of buphthalmia) when administered to the developing chicken embryo. The compound was administered in water at pre incubation (0 hours) or at 96 hours of incubation, at a number of different doses. 
It should be noted at this point that the limited animal test data on the carcinogenic potential of ethylene chlorohydrin suggests no increased incidence (over controls) when the compound was fed (Ambrose, 1950), or injected subcutaneously (Mason et al, 1971) into rats. Ethylene chlorohydrin is to be tested for effects on reproductive processes by inhalation and by skin painting, under NCI contract. Studies have shown chromosome aberrations in rat bone marrow cells following exposure to ethylene chlorohydrin (Isakova et al, 1971, andSemenova et al, 1971), and base-pair substitutions (but not frameshift mutations) in Salmonella (Rosenkranz and Wlodkowski, 1974). Ethylene chlorohydrin also induced mutations in bacterium Klebsiella (Voogd and Van Der Vet, 1969). # d. Metabolism. A comprehensive study designed to isolate and characterize the metabolites of ETO in either lower or higher life forms has not been identified in the literature. However, Frankel-Conrat (1961, 1944 has characterized the spontaneous chemical alkylation of a protein (egg albumin) and viral ribonucleic acid. Brookes and Lawley (1961) characterized the reaction of ETO with guanine. More recently, Ehrenberg et al (1974) have reported the results from several experiments on the fate of inhaled tritium-labeled ETO in mice. The biological half-life was found to be about 9 minutes in the mouse, indicating a rapid metabolic breakdown of the molecule. The second-order rate constants for the reactions of ETO with DNA, and with liver "interphase" protein were found to be 0.9 and 3.3 1/g-hr, respectively. The hydroxyethylated DNA was observed to undergo a slow depurination reaction, a possible pre-mutation lesion, with a halflife of 340 hours. Following inhalation of tritiated ETO, radioactivity (in unidentified chemical form) was found to be associated with proteins isolated from lung, liver, kidney, brain, spleen and testes. Seventy-eight per cent of the absorbed dose was excreted into the urine within 48 hours of the inhalation exposure. Other than the possible isolation of small amounts of 7-hydroxyethyl guanine from the urine, evidence for the alkylation of DNA in vivo was not reported. Since urinary 7-hydroxyethyl guanine may also arise from the alkylation of RNA, the fraction of total absorbed dose which alkylates the genetic material remains to be determined. The study confirms that ETO is readily absorbed via the respiratory tract and is widely distributed throughout, and reacts with, components of body tissues. # B. Effects in Humans 1. Acute Toxicity: # Zernik (1933) extrapolated results of animal tests to approximate lethal dosages of ETO for man. He estimated that an exposure to 0.5 mg/1 (278 ppm) for 1 hour would be objectionable, but that the same concentration for a day would be dangerous. He further speculated that 1.0 mg/1 (556 ppm) would be sufficient to cause sickness and death after a number of hours of inhalation. Systemic poisoning due to inhalation exposure to ETO is rare, but 3 cases have been reported in which headache, vomiting, dyspnea, diarrhea, and lymphocytosis occurred (Hine and Rowe, 1963). Hess and Tilton (1950) observed a similar case. The nausea and vomiting may persist for several hours if untreated. There is also evidence that high vapor concentrations can be sufficiently irritating to cause pulmonary edema in man . 
Inhalation of high concentrations of ETO for even short exposure periods can produce a general anesthetic effect in addition to coughing, vomiting, and irritation of the eyes and respiratory passages. Early symptoms are irritation of the eyes, nose, and throat and a "peculiar taste"; effects which may be delayed are headache, nausea, vomiting, dyspnea, cyanosis, pulmonary edema, drowsiness, weakness, incoordination, electrocardiographic abnormalities and urinary excretion of bile pigments. Thiess (1963) reported on his observations in 41 cases of excessive industrial exposure to ETO, mainly due to accidents. The principal effect after excessive exposure to the vapor was vomiting, recurring periodically for hours, accompanied by nausea and headache. Short exposure to high concentrations of ETO resulted in irritation of the respiratory passages leading to bronchitis, pulmonary edema, and emphysema. At least two additional cases of accidental poisoning in man by ETO appear in the literature. Blackwood and Erskine (1938) reported cases in 6 men who became ill while working in a ship compartment adjacent to one being fumigated with a commercial mixture of 10% ETO/90% C02 . Ten female employees were reported (Anon, 1947) to have been "overcome" by ETO used as a disinfectant in a California food plant. The Department of Public Health of the State of California issued a report on this incident. Attempts at locating the report have been unsuccessful to date. In neither of these events was the concentration of ethylene oxide known. In both, the gross symptoms of the persons affected included headaches, nausea, vomiting, and respiratory irritation. No permanent ill effects were reported. A number of dermatologic conditions due to exposure to ETO have been reported. These include: Skin burns occurred in workers after prolonged contact (duration unspecified) with a 1% solution of ETO in water (Sexton and Henson, 1949). Skin contact with solutions of ETO caused characteristic burns on human volunteers. After a latent period of 1 to 5 hours, edema and erythema, progressing to vesiculation with a tendency to coalesce into blebs, and desquamation was followed by complete healing within 21 days, usually without treatment. In some cases, residual brown pigmentation occurred. Application of the pure liquid to the skin caused frostbite (due to rapid evaporation of the ETO); 3 of the 8 volunteers became sensitized to ETO solutions. Human experience indicates that extensive blister formation occurs following even brief contact with 40-80% aqueous solutions of ETO. Lesser or greater concentrations produced blisters only after more prolonged contact. ETO has been reported to be retained in rubber and leather for long periods of time, and not to be removable by washing. Wearing contaminated footwear has resulted in serious foot burns (Phillips and Kaye, 1949). Rubber shoes were donned by laboratory workers immediately after the shoes had been sterilized with ETO. Apparently the ETO had dissolved in the rubber and then diffused out to contact the skin. No such incidents occurred when similar articles of clothing were aired until free of ETO before being worn. Contact of liquid ETO with exposed skin does not cause skin irritation immediately, but blistering of the skin develops after a time if shoes or clothing wet with ETO are not removed promptly. Contact of liquid ETO with the eyes can cause severe burns. 
A workman exposed to ETO in an unstated manner was reported by McLaughlin (1946) to have suffered a corneal burn, which was said to have healed promptly. In 1963, Thiess reported that he had observed the effects of an accidental forceful blast of gaseous ETO that struck a nurse in the eye, nose and mouth, from a gas sterilizer that employed a cartridge of ethylene oxide. This caused no immediate discomfort, but within 2 hours one eye became mildly uncomfortable, and one nostril felt sore and obstructed. The only abnormality in the eye at that time was keratitis epithelialis consisting of fine gray dots in the corneal epithelium in the palpebral fissure. These dots stained with fluorescein. In 3 hours the foreign-body type irritation in the affected eye and soreness in one nostril were enough to interfere with working. The foreign-body type of discomfort reached the maximum 9 to 10 hours after the exposure, but by 15 hours pain began to subside, and in 24 hours the eye was entirely normal, both subjectively and by slit-lamp examination. Soreness in the nose persisted a little longer, but at no time was there respiratory distress or alteration of taste or sensation in the mouth. Another report by Thiess (1963) described an instance of a squirt of liquid ETO into one eye of a patient, treated immediately with copious irrigation with water; this resulted only in irritation of the conjunctivae lasting for 1 day. Thiess quoted Hollingsworth et al (1956) as having observed intense conjunctivitis in rabbit eyes after application of only one drop of ETO. This condition cleared in 4 days. # Chronic Effects a. Case Histories Exposure to low concentrations of gaseous ETO may cause delayed nausea and vomiting . Continuous exposure to even low concentrations will result in a numbing of the sense of smell . Protracted contact of the gas or the liquid with the skin was observed (Thiess, 1963) to cause toxic dermatitis with large bullae. There appeared to be little or no tendency toward development of allergic dermatitis. Evidence of irritation was seldom seen in the respiratory tract, contrary to what others had reported, and cough was not a warning symptom. The eyes of workers excessively exposed to ETO vapor occasionally showed slight irritation of the conjunctivae, but there were no injuries of the cornea, and no special treatment was felt to be needed. Thiess (1963) observed no lacrimatory effect that could be considered a warning symptom. Workers have developed severe skin irritation from wearing rubber gloves which had absorbed ETO; redness, edema, blisters and ulceration were observed (Royce and Moore, 1955). There is considerable evidence Henson, 1949, 1950] which suggests that repeated skin contact with dilute or concentrated aqueous solution of ETO results in the development of skin sensitivity of long duration. The conclusions of Sexton and Henson (1950) are listed: (1) The magnitude of the skin injury from ETO seems to be influenced and determined by the length of time of contact and the concentration. The most hazardous aqueous dilutions seem to be those in the 50% range. (2) Pure anhydrous liquid ETO does not cause primary injury to the dry skin of man. (3) Rapid vaporization of pure liquid ETO sprayed on the skin results in a freezing reaction of the skin. (4) Repeated skin injury can result in such delayed sensitivity manifestations as the "flare-up" and "spontaneous flare-up" reactions. 
Aqueous solutions of ETO in prolonged contact with the skin are well known to be vesicant (owing presumably to the reactivity of ETO with tissue proteins). The volatility of ETO, and the fact that there are rarely circumstances in which considerable concentrations are held in protracted contact with the eye may account in part for the mildness of ocular injuries which have been observed so far. The comparative insensitivity of the cornea to damage by ETO, shown in Tables 3 and 4, undoubtedly plays a part here also. The highly crosslinked structure of collagen quite possibly contributes to this relative insensitivity of the cornea; the high concentration of glutathione present in the cornea may be a factor also in this insensitivity to injury by ETO. Delayed sensitization reactions have been reported in humans, as has one case of anaphylaxis resulting from residues of ETO in renal dialysis equipment (Poothullil et al, 1974). # b. Epidemiologic Studies Few studies are known which relate directly to long-term exposure to ETO in medical facilities. Correspondence from the Veterans Administration (VA) (Cobis, 1977) indicates a very low incidence of reports of health incidents relating to use of ETO in VA medical facilities. These consist of "1 patient with minor face irritation, and 12 employee incidents, including watering eyes, nausea, and skin irritation." Cobis reported that the "incidents are now being evaluated by the medical staff to see if they can be verified or if there was any sequela." The VA experience is based on an estimated 400,000 sterilization cycles in which ETO was used in 162 hospitals and 7 outpatient clinics over an average of 8.2 years (total of accumulated years is 1,334). The VA estimates that their facilities are processing 5 million packages of sterile supplies in 79,000 sterilization cycles per year (an average of 63.3 packages per cycle). A small retrospective morbidity study was conducted on a group of 37 employees who were exposed to ETO levels of approximately 5-10 ppm for a period of 5-16 years in an ETO production plant. (Joyner, 1964). The "exposed" group consisted of males between the ages of 29 and 56. Mean exposure to ETO was 10 2/3 years, with only 3 workers exposed for less than 5 years, and 2 workers exposed for more than 15 years. The "control" group was selected from among volunteers who were participants in the periodic physical examination program, with no attempt at selection except to match ages with those of the study group. Mean length of service (in production units) of the control group was 11 2/3 years. No attempt was made to categorize the specific types of chemical exposures (to agents encountered in the petrochemical industry) of the "control" group. The study included a review of medical records (with scoring of such factors as number of visits to the dispensary, days "lost" from work, and number and type of injury reported),and a "problem-oriented" tabulation of specific illnesses diagnosed by the physicians, and comparison of the morbidity rates. In general, the ETO workers had a better health record than the "control" group. While such a study might have detected major toxic effects, the sample size, "control" group, dose levels, duration of exposure and observation period were not sufficient to assess carcinogenic potential or more subtle toxic effects. A study by Ehrenberg (1967) involved hematologic and chromosomal investigations of a group of personnel of a factory manufacturing and using ETO. 
Early in 1960, a preliminary hematological investigation was performed. Twenty-eight persons from the part of the factory where leakage of ETO from tube joints, pumps, etc, was possible (and occurred at least occasionally) were investigated, and 26 persons from other departments working with ETO were used as controls. In addition, 3 persons accidentally exposed to high concentrations of ETO were studied; these were included in the "exposed" group. The subjects were males of about the same age (i.e., 45 years). The "exposed" persons had been active in the ETO department for periods of time varying from 2 to 20 years (avg. 15 years). Since certain differences were found between exposed and control persons, the ventilation of the factory was improved. A second analysis was undertaken in the following winter (termed "1961 investigation") comprising all male employees of the factory, now classified as follows: (1) 66 persons not working in contact with ETO (including the 1960 "control group"). (2) 86 persons intermittently working in ETO premises. (3) 54 persons who had worked in contact with ETO for some period of time. (4) 37 persons permanently working in the ETO area (including the 1960 "exposed group"). (5) 8 persons exposed to high concentrations of ETO during repair and clean-up operations following a tube break in a factory. Two of the group required immediate hospitalization for respiratory difficulties. A comparison of exposed and control persons suggested that: (1) Persons intermittently exposed to ETO during several years exhibited higher absolute lymphocyte counts than did the control group in the initial study population. Following improvement of ventilation this difference became smaller, although it was still significant 1 year later in the same persons. No significant difference between the exposed and the control groups was found after the study population was enlarged, either because the difference found between the groups selected for the first analysis was accidental, or because younger persons with higher lymphocyte counts were introduced into the control group in larger numbers than into the exposed group. (2) Certain cell abnormalities were found in the exposed group (3 cases of anisocytosis, 1 of leukemia), that did not appear in the control group. (3) Lower hemoglobin values were found in the exposed group, in addition to a few cases of slight anemia. (4) Persons exposed accidentally (group 5 of the later study population) for short periods of time to high concentrations of ETO exhibited higher numbers of chromosomal aberrations than a comparable control group. NIOSH has been informed that Ehrenberg currently has plans to "follow up" members of the study group. NIOSH is currently initiating a study of approximately 2,500 workers potentially exposed to ETO during its manufacture and/or use. The study will include mortality, morbidity, and reproductive data if possible. # Sensitization Response. Allergic-type reactions have been reported in workers accidently sprayed with ETO in solution (Sexton and Henson, 1949), and in patients exposed to improperly aerated dressings (Hanifin, 1971). ETO (1% solution) was not a contact sensitizer in the occlusive patch test in guinea pigs, nor did a 0.1% ETO solution produce sensitization by the intracutaneous route in this species (Woodard and Woodard, 1971). In recent skin irritation studies, delayed sensitization of human subjects developed in response to ETO contained in polyvinyl chloride blocks and films, and in vaseline . 
This finding supports an earlier report of anaphylaxis from products of ETO sterilization of renal dialysis equipment (Poothullil et al, 1974, and Avashia, 1975).

# Hemolytic Effects.

Hemolysis has been reported (Hirose et al, 1963, and Stanley et al, 1971) with ETO-sterilized devices used for blood perfusion, and with devices used for IV administration in patients. Anemia was reported (Woodard and Woodard, 1971) to have developed in dogs in a 30-day subcutaneous toxicity study with ETO in saline solution. Dogs injected with 6-36 mg/kg daily developed anemia; the severity was dose-related. Examinations were performed prior to, and at the end of, the 30 days of dosing; hence, effects occurring earlier would not have been detected, and a re-examination of the hematologic effects appeared to be in order. Beagle dogs were dosed intravenously with an ETO/glucose solution daily for 3 weeks, with doses increased at intervals from 3 to 60 mg/kg. Three controls received only glucose. No evidence of anemia was detected (Balazs, 1976). Further clarification of these apparently conflicting results is needed. Ehrenberg (1967) observed lymphocytosis in workers exposed to ETO.

# C. Tabulation of Biologic Effects of ETO

The following tables summarize the bio-effects data described in the earlier sections of this report.

Notes to the lethality and irritation tables: Data compiled by Hine and Rowe (1963) from the studies of Jacobson et al (1956), Hollingsworth et al (1956), Waite et al (1930), and Flury and Zernik (1931). *Data of Weil et al (1963); five animals of each sex were tested at each of six dose levels. (c) Woodard and Woodard (1971) conducted the study with rabbits, in which one rabbit of each sex was exposed at each of six dose levels. (d) Injection route: IV = intravenous, IP = intraperitoneal, SC = subcutaneous.

[The mutagenicity tables were flattened in extraction; the dose, duration, and test-organism columns are only partly recoverable. The surviving entries report: a 54-fold increase and a non-monotonic dose-response curve (Ehrenberg, 1956, 1959); an increased mutation rate (magnitude not given) at 12.5 ppm, statistically significant at 50 ppm (Lindgren et al, 1969); chromosomal aberrations in up to 16% of germinated kernels (Moutschen-Dahmen et al, 1968); a greater-than-10-fold increase in waxy (forward) mutational frequency after 24 hr at 100 ppm; up to 2% chlorophyll mutations of 12 different phenotypes (Roy et al, 1973); mutations in up to 14.6% of the F-2 generation (Jana et al, 1975); a 7.2-fold increased frequency, with decreased germination and fertility (Ando, 1968); an approximately 10-fold increase in mutation frequency (MacKey, 1968); up to 5.1% chromatid breaks, erosions, and gaps (Smith et al, 1954; Bird, 1952); 2 sex-linked recessive lethals per 1,000 chromosomes and an approximately 5-fold increase in the frequency of "minutes" (Fahmy et al, 1956); lethals and translocations induced in all stages of spermatogenesis (Nakao et al, 1961); a statistically significant increase (P less than .001) in aberrations in femur bone marrow cells (Strekalova, 1971); a significant increase in dead implantations in females mated to treated males at weeks 1, 2, 3, and 5 post-exposure, and isochromatid and chromatid gaps and breaks in bone marrow cells sampled 24 hours after the last exposure (Embree, 1975); and, following an industrial spill, a significant increase (p less than 0.05) in chromosomal aberrations in 7 workers 6 weeks after exposure (Ehrenberg et al, 1967).]

# V. OCCUPATIONAL EXPOSURE LIMITS
The current U.S. standard (OSHA) for occupational exposure to ethylene oxide is 50 parts per million (ppm) parts of air, which corresponds approximately to 90 mg/cu m, as an 8-hour time-weighted average concentration, with no ceiling level stipulated (29 CFR 1910.1000). This standard was adopted from the standards established by the American National Standards Institute (ANSI). An identical exposure limit, the Threshold Limit Value (TLV), had been adopted by the American Conference of Governmental Industrial Hygienists (ACGIH, 1957). The USSR has a standard of 0.5 ppm (1 mg/cu m), which was adopted in 1966. Occupational exposure standards of 50 ppm and 20 ppm (36 mg/cu m) are recommended by the Federal Republic of Germany and Sweden, respectively. Table 8 lists the federal standard and the ACGIH recommended TLV and short-term exposure limits ("STEL") for ETO. The exposure limits for certain hydration and reaction products of ETO, i.e., ethylene chlorohydrin and ethylene glycol, are also listed, as are, for comparison, the exposure levels published by the Federal Republic of Germany, Sweden, and the USSR. NIOSH recommends, based on the recent results of tests for mutagenesis, that exposure be controlled so that workers are not exposed to ETO at a concentration greater than 135 mg/cu m (75 ppm), determined during a 15-minute sampling period, as a ceiling occupational exposure limit, with the additional provision that the TWA concentration limit of 90 mg/cu m (50 ppm) for a workday not be exceeded. As additional information on the toxic effects of ETO becomes available, this recommended level for exposures of short duration may be altered. The adequacy of the current U.S. ETO standard, which was based on the data available at the time of promulgation, has not been addressed in this report. Further assessment of other ETO exposure situations, and of the adequacy of the ETO occupational exposure standard, will be undertaken during the FY 80 development of the NIOSH Criteria Document on epoxides (including ETO). In the interim, NIOSH strongly recommends that the control strategies presented herein, or others considered to be more applicable to particular local situations, be implemented to assure maximum protection of the health of employees. Good work practices will help to assure employee safety.

a. Sustained or intermittent skin contact with liquid ETO may produce dermatitis at the site of contact. However, owing to the extreme penetrating ability of ETO, and the consequent ineffectiveness of many types of clothing materials in preventing skin contact, the use of conventional "impervious" clothing is not suggested. There are, however, certain special types of protective clothing which are effective when working with ETO. For example, one of the large ETO manufacturers provides its workers with knitted gloves which have been coated with certain polymers, including polyvinyl chloride. In addition, conscientious adherence to appropriate sanitation practices should eliminate most hazards of skin contact with ETO.

b. If ETO splashes into the eye, severe irritation may result. For this reason it is suggested that rubber-framed goggles, equipped with approved impact-resistant glass or plastic lenses, be worn whenever there is danger of the material coming in contact with the eyes (i.e., in operations which involve transport of bulk containers of ETO from the storage room to the sterilizer unit for installation).
Eye wash fountains within easy access from the immediate work area are recommended; they should be so situated that additional contact of the eyes with ETO in vapor form during washing is unlikely.

c. It is suggested that respirators be readily accessible in the event of an emergency situation resulting from an accidental spill or leak of ETO, and for use during subsequent clean-up and disposal procedures. A self-contained breathing apparatus with a full facepiece, operated in a pressure-demand or other positive-pressure mode, is suggested for this purpose. Respirators (respiratory protective devices) should be those approved under the provisions of 30 CFR 11.

# Engineering and Other Control Technology

A number of control problems were discovered during the field surveys conducted at several medical facilities which use ETO in sterilizing operations. These problems were addressed in Section III of this report, along with some recommendations for their amelioration. Those, and the following general recommendations, should be followed in all medical facilities which use ETO as a sterilant. All equipment should be operated in accordance with manufacturers' recommendations.

a. Sterilization operations involving ETO should be isolated from all non-ETO work areas.

b. ETO work areas, except for outdoor systems, should be maintained under negative pressure with respect to non-ETO work areas. This may be accomplished by continuous local exhaust ventilation so that air movement is always from non-ETO work areas to ETO work areas. Where a fan is used to effect such air movement, the fan blades and other appropriate parts of the ventilation system should be made of a nonsparking material. Local exhaust pickups should be located at areas where the possibilities for leaks are the greatest (i.e., in close proximity to sterilizing units and aerators). Exhaust air should not be discharged into any work area or into the general environment without decontamination. This may be accomplished using a catalytic converter, or by discharging exhaust air directly to the fire box of a decontamination furnace, with subsequent discharge of this air to the general environment. Sterilizing units and aerators should be closed systems. Elimination of residual ETO from both systems should be accomplished only by ventilation ducts leading directly from the sterilizers and aerators to the decontamination apparatus described above.

c. All equipment (e.g., sterilizing units, gas tanks, and aerators) should be periodically checked for leaky valves, fittings, and gaskets, and for any other malfunctioning parts. Equipment manufacturers' recommendations regarding preventive maintenance should be observed. Further, periodic measurements should be made which demonstrate the effectiveness of the local exhaust system (e.g., air velocity or static pressure).

# Medical Surveillance

a. Preplacement medical examinations should include at least: (1) comprehensive medical and work histories, with special emphasis directed to symptoms related to the eyes, blood, lungs, liver, kidneys, nervous system, and skin; (2) a comprehensive physical examination, with particular emphasis given to the pulmonary, neurologic, hepatic, renal, and ophthalmologic systems, and the skin; and (3) a complete blood count, to include at least a white cell count, a differential count, hemoglobin, and hematocrit.
(4) In addition to the medical examination, employees should be counseled by the physician to ensure that each employee is aware that ETO has been shown to induce mutations in experimental animals. The relevance of these findings in animals to male or female employees has not yet been determined. The findings do indicate, however, that employers and employees should do everything possible to minimize exposure to ETO. If a physician becomes aware of any adverse effects on the reproductive system, any cancers in individuals who have been exposed to ETO, or any abnormal offspring born to parents either or both of whom have been exposed to ETO, the information should be forwarded to the Director, NIOSH, as promptly as possible.

b. Periodic examinations should be made available on an annual basis, and more frequently if indicated by professional medical judgment based on such factors as emergencies and the pre-existing health status of the employee. These examinations should include at least: (1) interim medical and work histories; and (2) a physical examination as described above for the preplacement examination.

# Record Keeping and Availability of Records

The employer should keep accurate records on the following:

a. All measurements taken to determine employee exposure to ETO, including: (1) dates of measurement; (2) operations being monitored; (3) sampling and analytical methods used; (4) numbers, durations, and results of samples taken; and (5) names and airborne exposure concentrations of employees in monitored areas.

b. Measurements demonstrating the effectiveness of mechanical ventilation (e.g., air velocity, static pressure, or air volume exchanged), including: (1) dates of measurements; (2) types of measurements taken; and (3) results of measurements.

c. Employee medical surveillance, including: (1) full name of the employee; (2) all information obtained from medical examinations which is pertinent to ETO exposure; (3) any complaints by the employee related to exposure to ETO; and (4) any treatments for exposure to ETO, and the results of that treatment.

All of the aforementioned records should be updated at least annually. The employee's medical examination and surveillance records should be made available upon request to designated medical representatives of the Assistant Secretary of Labor for Occupational Safety and Health (OSHA) and of the Director of the National Institute for Occupational Safety and Health (NIOSH). Records of environmental and occupational monitoring should be made available upon request to authorized representatives of OSHA and NIOSH. All employees or former employees should have access to the exposure measurement records which indicate their own exposure to ETO. An employee's medical records should be made available upon written request to a physician designated by the employee or former employee. Records of all examinations should be held for at least 30 years following the termination of employment.

# Informing Employees of the Hazard

Each employee, prior to being permitted to work in an ETO sterilizing area, should receive instruction and training on:

a. The nature of the hazards and toxicity of ETO (including those hazards described above), including recognition of the signs and symptoms of acute exposure, and the importance of reporting these immediately to designated health and supervisory personnel.

b. The specific nature of the operations involving ETO which could result in exposure.

c. The purpose for, and operation of, respirator equipment.
d. The purpose for, and application of, decontamination practices.

e. The purpose for, and significance of, emergency practices and procedures, and the employee's specific role in such activities.

f. The purpose for, and nature of, medical examinations, including the advantages to the employee of participating in the medical surveillance program.

# VII. SAMPLING AND ANALYTICAL METHODS

# A. Airborne-ETO Monitoring Techniques and Equipment.

A number of techniques are available for the reliable analytical determination of low concentrations of ETO in air. These include: (1) hydration of ETO (collected in a bubbler) to ethylene glycol, followed by oxidation to formaldehyde, which is determined colorimetrically by its reaction with sodium chromotropate (Bolton et al, 1964); and (2) adsorption of the ETO on charcoal, followed by desorption with carbon disulfide and gas chromatographic determination of the ETO (the NIOSH method described in a later section). In addition, instrumentation is available for the direct quantitative determination of the concentration of ETO in air. These direct-reading instrumental methods include: (a) thermal conductivity detection, (b) gas chromatography, (c) hydrogen flame-ionization detection, and (d) infrared spectrophotometry. Other methods (as well as modifications of the above methods) and specific techniques have been described for the determination of airborne ETO concentration. These include: spectrophotometry (Pozzoli et al, 1968), colorimetry (Adler, 1965; Critchfield and Johnson, 1957; Gage, 1957), and volumetric methods (Gunther, 1965; Hollingsworth and Waling, 1955; Lubatti, 1944; and Swan, 1954). Gas chromatography of air residues from fumigated materials has been used for foodstuffs, pharmaceuticals, and surgical equipment (Adler, 1965; Ben-Yehoshua and Krinsky, 1968; Berck, 1965; Brown, 1970; Buquet and Manchon, 1970; Scudamore, 1968, 1969; Kulkarni et al, 1968; Manchon and Buquet, 1970). ETO can be determined in cigarette smoke by gas chromatography or mass spectrometry after conversion to ethylene chlorohydrin (Binder and Lindner, 1972; Muramatsu et al, 1968), and in mixtures of lower olefin oxides and aldehydes by gas chromatography (Kaliberdo and Vaabel, 1967). The limits of detection of ETO by spectrophotometry and gas chromatography in these materials are generally of the order of 1 mg/kg. The method used for the analysis of the ETO samples obtained in the NIOSH field study involved adsorption on charcoal and gas chromatographic determination following desorption with carbon disulfide. The method, known as NIOSH Analytical Method #S286, is outlined below and is described in detail in the reference.

# B. Principle of the NIOSH Standard Method.

A known volume of air is drawn through a series of two charcoal tubes to trap the ETO vapor present. The two-tube sampling arrangement is necessary to prevent sample migration (and loss) upon storage, and to ensure that the front tube is not overloaded during sampling. The charcoal in each tube is transferred to a 5-ml screw-capped sample container, and the ETO is desorbed with carbon disulfide. An aliquot of the desorbed sample is injected into a gas chromatograph, following which the area of the resulting peak is determined and compared with areas obtained from the injection of "standards" (i.e., known concentrations of ETO).

# C. Range of Sensitivity of the Method.
The method was validated over the range of ETO concentrations of 40-176 mg/cu m, at a temperature of 26 C and an atmospheric pressure of 761 mm Hg, using a 5-liter sample. Under the conditions of the recommended sample size (5 liters), the probable useful concentration range of this method is 20-270 mg/cu m, at a detector sensitivity that gives nearly full-scale deflection of the strip chart recorder for a 1.4-mg ETO sample. The method is capable of measuring much smaller amounts if the desorption efficiency is adequate; desorption efficiency must be determined over the range used. The upper limit of the range of the method is dependent on the adsorptive capacity of the front charcoal tube. This capacity varies with the ETO concentration and with the presence of other substances in the sampled air. The charcoal tube series consists of two separate large tubes; the first tube contains 40 mg of charcoal, and the second tube (used as the backup tube) contains 200 mg of charcoal. The charcoal is held in place by glass wool plugs at the tube ends. If a particular atmosphere is suspected of containing a large amount of ETO, an air sample of smaller volume should be taken.

# D. Interference.

When the amount of moisture in the air is so great that condensation actually occurs in the tube, organic vapors will not be trapped efficiently, and the breakthrough threshold is consequently decreased. Any compound which has the same retention time on the chromatography column as ETO, under the conditions described in this method, has the potential for interfering with the estimation of ETO. A change in the separation conditions, such as column packing or temperature, will generally circumvent the interference problem.

# E. Precision and Accuracy.

The coefficient of variation for this method in the range 41-176 mg/cu m is 0.103. This value corresponds to a standard deviation of 9.3 mg/cu m at the 90 mg/cu m (present federal standard) level. Advantages and disadvantages of the method, as well as other aspects of the recommended sampling and analytical method, are described in detail in the reference (NIOSH, 1977).
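To make the arithmetic behind a determination concrete, the following Python sketch runs the post-analysis calculation for a hypothetical two-tube charcoal sample and then an 8-hour TWA and ceiling check against the limits recommended in Section V. The sample masses, desorption efficiency, and exposure time series are illustrative assumptions, not values from Method #S286 or from the field survey.

```python
# Post-analysis arithmetic for a charcoal-tube ETO sample (hypothetical inputs;
# the sampling and GC procedure itself is described in NIOSH Method #S286).

MW_ETO = 44.05        # g/mol
MOLAR_VOLUME = 24.45  # liters/mol of ideal gas at 25 C and 760 mm Hg

def concentration_mg_per_cu_m(mg_front, mg_backup, desorption_eff, air_liters):
    """Total ETO recovered from both tubes, corrected for desorption
    efficiency, divided by the volume of air sampled."""
    total_mg = (mg_front + mg_backup) / desorption_eff
    return total_mg / (air_liters / 1000.0)

def mg_per_cu_m_to_ppm(c):
    return c * MOLAR_VOLUME / MW_ETO

# Hypothetical 5-liter sample: 0.40 mg found on the front tube, 0.02 mg on
# the backup tube, 90% desorption efficiency.
c = concentration_mg_per_cu_m(0.40, 0.02, 0.90, 5.0)
print(f"{c:.1f} mg/cu m = {mg_per_cu_m_to_ppm(c):.1f} ppm")

# An 8-hour TWA from consecutive (ppm, hours) samples, checked against the
# 50-ppm TWA limit and the recommended 75-ppm ceiling for 15-minute samples.
samples = [(12.0, 6.0), (70.0, 0.25), (8.0, 1.75)]   # hypothetical shift
twa = sum(ppm * hrs for ppm, hrs in samples) / 8.0
worst_short = max(ppm for ppm, hrs in samples if hrs <= 0.25)
print(f"TWA {twa:.1f} ppm (limit 50); worst 15-min sample {worst_short} ppm (ceiling 75)")
```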
# DISCLAIMER

Mention of company name or product does not constitute endorsement by the National Institute for Occupational Safety and Health.

DHEW (NIOSH) Publication No. 77-200

# PREFACE

The Occupational Safety and Health Act of 1970 emphasizes the need for standards to protect the health and safety of workers exposed to an ever-increasing number of potential hazards in their workplace. Pursuant to the fulfillment of this need, the National Institute for Occupational Safety and Health (NIOSH) has developed a strategy of disseminating information about the adverse effects of widely used chemical or physical agents, intended to assist employers in protecting employees from exposure to substances considered to possess carcinogenic, mutagenic, or teratogenic potential. This strategy includes the development of Special Occupational Hazard Reviews, which serve to support and complement the other major standards-development and hazards-documentation activities of the Institute. The purpose of Special Occupational Hazard Reviews is to analyze and document, from a health standpoint, the problems associated with a given industrial chemical, process, or physical agent, and to recommend the implementation of engineering controls and work practices to ameliorate these problems. While Special Occupational Hazard Reviews are not intended to supplant the more comprehensive NIOSH Criteria Documents, nor the brief NIOSH Current Intelligence Bulletins, they are nevertheless prepared in such a way as to assist in the formulation of regulations. Special Occupational Hazard Reviews are disseminated to the occupational health community at large, e.g., trade associations, industries, unions, and members of the scientific community. ETO is widely used as a gaseous sterilant in medical facilities and is unique for this purpose: alternative chemicals or processes have, in themselves, serious limitations or health hazards. NIOSH recognizes, therefore, that the continued use of ETO as a gaseous sterilant is highly desirable in many situations. Recent results of tests for mutagenesis have increased the concern for potential health hazards associated with exposure to ETO. In order to assess the potential for exposure and associated hazards, NIOSH has undertaken this Special Occupational Hazard Review. An assessment is made of the evidence for toxic effects of ETO, especially with respect to mutagenic, teratogenic, and carcinogenic potentials. Additionally, a limited field survey was conducted by NIOSH to document the use, problems, and potential for human exposure in medical facilities. The results of this survey were in agreement with data made available by the American Hospital Association, the U.S. Army, other Federal agencies, and industrial and professional organizations. Based on this review, measures for control of occupational exposure are recommended. The acute toxic effects of ETO in man and animals include acute respiratory and eye irritation, skin sensitization, vomiting, and diarrhea. Known chronic effects consist of respiratory irritation and secondary respiratory infection, anemia, and altered behavior. The observations of (a) heritable alterations in at least 13 different lower biological species following exposure to ETO, (b) alterations in the structure of the genetic material in somatic cells of the rat, and (c) covalent chemical bonding between ETO and DNA support the conclusion that continuous occupational exposure to significant concentrations of ETO may induce an increase in the frequency of mutations in human populations.
At present, however, a substantive basis for quantitative evaluation of the genetic risk to exposed human populations does not exist. No definitive epidemiologic studies, and no standard long-term carcinogenesis assays, are available on which to assess carcinogenic potential. Limited tests by skin application or subcutaneous injection in mice did not reveal carcinogenicity. However, the alkylating and mutagenic properties of ETO are sufficient bases for concern about its potential carcinogenicity. Neither animal nor human data are available on which to assess the potential teratogenicity of ETO. NIOSH recommends that ETO be considered as mutagenic and potentially carcinogenic to humans, and that occupational exposure to it be minimized by eliminating all unnecessary and improper uses of ETO in medical facilities. Whenever alternative sterilization processes are available which do not present similar or more serious hazards to the employee, they should be substituted for ETO sterilization processes. Although this review is limited to ETO, concern is also expressed for hazards from such hydration and reaction products of ETO as ethylene glycol and ethylene chlorohydrin, the latter a teratogen to some lower biological species. This report includes a summary of the airborne ETO concentrations measured within health care facilities as part of the field survey. NIOSH estimates that there are in excess of 10,000 ETO sterilizers in use in U.S. health care facilities, and that approximately 75,000 workers are potentially exposed to ETO in those facilities. Reasons for the unnecessary exposure of personnel were found to include: improper or inadequate ventilation of sterilizers, aerators, and working spaces; improper handling and/or storage of sterilized items; untrained workers operating some sterilization equipment; improper operating techniques leading to mishandling of some ETO sterilizing equipment; poor design of the sterilization facility; and design limitations of the sterilization equipment. NIOSH recommends, based on the recent results of tests for mutagenesis, that exposure to ETO be controlled so that workers are not exposed to a concentration greater than 135 mg/cu m (75 ppm), determined during a 15-minute sampling period, as a ceiling occupational exposure limit, with the additional provision that the time-weighted average (TWA) concentration limit of 90 mg/cu m (50 ppm) for a workday not be exceeded. As additional information on the toxic effects of ETO becomes available, this recommended level for exposures of short duration may be altered. The adequacy of the current U.S. ETO standard, which was based on the data available at the time of promulgation, has not been addressed in this report. Further assessment of other ETO exposure situations, and of the adequacy of the ETO occupational exposure standard, will be undertaken during the FY 80 development of a NIOSH criteria document for occupational exposure to epoxides. In the interim, NIOSH strongly recommends that control strategies, such as those described in this document, or others considered to be more applicable to particular local situations, be implemented to assure maximum protection of the health of employees. Good work practices will help to assure their safety. Where the use of ETO is to be continued, improved techniques of exhausting the gas from the sterilizer, the aerator, and the sterilized items need to be implemented.
Gas sterilization should be supervised, and the areas into which ETO may escape should be monitored, to prevent all unnecessary exposure of personnel. When proper control measures are instituted, the escape of ETO into the environment will be greatly reduced. Under such control, the use of ETO as a gaseous sterilant in medical facilities can be continued with considerably less risk to the health of occupationally exposed employees.

# INTRODUCTION

Ethylene oxide (ETO) is a high-volume chemical used primarily as an intermediate in the production of ethylene glycol (27% of total consumption), polyethylene terephthalate polyester fiber and film (23%), non-ionic surface-active agents (13%), and ethanolamines (9%), with production of di- and tri-ethylene glycol, choline and choline chloride, and other organic chemicals consuming most of the remaining ETO. Although it is estimated that only about 0.02% of the total amount of ETO produced in the U.S. in 1976 was used for sterilization in medical facilities, this amounted to approximately 500,000 kg. ETO is registered with the U.S. Environmental Protection Agency as a fungicide for fumigation of books; dental, pharmaceutical, medical, and scientific equipment and supplies (glass, metals, plastics, rubber, or textiles); drugs; leather; motor oil; paper; soil; bedding for experimental animals; clothing; furs; furniture; and transportation vehicles such as jet aircraft, buses, and railroad passenger cars. It has been used also to sterilize foodstuffs such as spices, cocoa, flour, dried egg powder, desiccated coconut, dried fruits, and dehydrated vegetables (Wesley et al, 1965), and to accelerate the "maturing" of tobacco leaves (Fishbein, 1969). At one time (Stehle et al, 1924), ETO was briefly investigated as a possible anesthetic agent, but was discarded due to toxic effects. This Hazard Review Document pertains only to the use of ETO in sterilization of medical supplies and equipment within medical and related facilities. The term "medical facilities" will be used to include hospitals; nursing and "total care" homes; medical, dental, and veterinary clinics or facilities; and certain research laboratories affiliated with medical centers. ETO is manufactured by the catalytic oxidation of ethylene with air (or oxygen) in the presence of a silver catalyst; since 1972, this has been the only method used in the U.S. Wurtz, in 1859, prepared ETO from ethylene chlorohydrin and potassium hydroxide, and until 1957 the chlorohydrin process was the principal method of manufacturing ETO in the U.S. In March 1973, the 13 companies which produced ETO in the U.S. and Puerto Rico had a total production of 1,892 million kg. By 1976, annual U.S. production of ETO had grown to approximately 2,100 million kg, placing this chemical among the top twenty-five chemicals (by volume) produced in the U.S. It is used extensively worldwide, with total production in Japan in 1974 of 415 million kg; European production in 1972 has been estimated at 865 million kg. The current U.S. standard (OSHA) for occupational exposure to ETO is 50 parts per million (ppm) parts of air, as a time-weighted average (TWA) concentration for an 8-hour exposure (29 CFR 1910.1000), which corresponds approximately to 90 milligrams per cubic meter of air (mg/cu m). The USSR has a standard of 0.5 ppm (1 mg/cu m), which was adopted in 1966 [Winell, 1975]. Standards of 50 ppm and 20 ppm (36 mg/cu m) are in effect in the Federal Republic of Germany and Sweden, respectively.
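The unit conversions and usage fractions quoted in this Introduction are easy to verify. The short sketch below applies the standard ppm-to-mg/cu m relation (assuming 25 C and 760 mm Hg, where one mole of gas occupies about 24.45 liters; the molecular weight of ETO is 44.05) and recomputes the sterilization share of annual production; all inputs are figures quoted in the text or standard physical constants.

```python
# Verifying the exposure-limit conversions and usage estimates quoted above.
MW_ETO = 44.05        # g/mol
MOLAR_VOLUME = 24.45  # liters/mol of ideal gas at 25 C and 760 mm Hg

def ppm_to_mg_per_cu_m(ppm, mw=MW_ETO):
    # 1 ppm (v/v) of a gas corresponds to mw / MOLAR_VOLUME mg per cubic meter
    return ppm * mw / MOLAR_VOLUME

for label, ppm in (("U.S. (OSHA)", 50), ("Sweden", 20), ("USSR", 0.5)):
    print(f"{label}: {ppm} ppm ~ {ppm_to_mg_per_cu_m(ppm):.0f} mg/cu m")
# -> 50 ppm ~ 90, 20 ppm ~ 36, 0.5 ppm ~ 1 mg/cu m, matching the quoted limits

# Fraction of annual U.S. production used for medical-facility sterilization:
print(f"{5.0e5 / 2.1e9:.3%}")   # -> 0.024%, i.e., the "about 0.02%" in the text
```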
While this review includes all known biohazards of ethylene oxide, special emphasis was devoted to its potential for exerting carcinogenic, mutagenic, and teratogenic effects. Recent reports of the mutagenic potential of ETO [Embree and Hine, 1975; Ehrenberg et al, 1974], coupled with the "Report of the Secretary's Commission on Pesticides and their Relationship to Environmental Health" [Mrak, DHEW Report, 1969], have prompted a reassessment of the adequacy of the current U.S. occupational exposure standard, as well as a critical appraisal regarding re-registration of the compound by the Environmental Protection Agency (as required under the Federal Insecticide, Fungicide, and Rodenticide Act, FIFRA). This latter action has resulted in a reappraisal by the Department of Health, Education, and Welfare of the use of ETO for sterilization purposes within health care facilities. During the preparation of this review, a limited field survey of ETO use in medical facilities was conducted by NIOSH in order to better assess the actual occupational exposure situation. A summary of the results of the field survey is presented, and serves as a basis for the recommended control measures. Extensive use was made of information provided by other federal agencies, and by industrial, trade, and professional organizations. In addition, the "Draft Technical Standard and Supporting Documentation for Ethylene Oxide," prepared under the joint NIOSH/OSHA Standards Completion Program, assisted in the preparation of the engineering and control sections of this review. NIOSH was represented on the Ethylene Oxide Subcommittee of the Committee to Coordinate Toxicology and Related Programs (CCTRP), U.S. Department of Health, Education, and Welfare. The Subcommittee met during the period January 30 - March 30, 1977. Certain sections of this NIOSH Special Hazard Review were provided to the CCTRP Subcommittee for inclusion in its Risk/Benefit Analysis of public health uses of ETO (HEW, 1977). The information and recommendations that follow should aid the U.S. Department of Labor, industrial hygienists, physicians, employers, and architects or other designers of health care facilities in protecting the worker from the hazards of ETO exposure. Furthermore, they will aid the worker in recognizing the hazard.

# I. PROPERTIES

This information has been compiled primarily from the Chemical Safety Data Sheet for Ethylene Oxide of the Manufacturing Chemists Association (No. SD-38), 1971, with supplementation from other sources. ETO is highly exothermic and potentially explosive in contact with alkali metal hydroxides or highly active catalytic surfaces (such as the anhydrous chlorides of Fe, Sn, and Al, and the oxides of Fe and Al), or when heated. It is relatively non-corrosive to materials other than certain rubbers, and relatively stable in aqueous solution and when diluted with CO2 or gaseous halocarbons. It is an alkylating agent which reacts directly (and virtually irreversibly) with -COOH, -NH2, -SH, and -OH groups; it also reacts with the ring nitrogen of purine and pyrimidine bases, and with the amino groups of amino acids and proteins.

9. explosive limits: 3 to 100 (% by vol in air).
10. flashpoint: -6 C (20 F) (Tag open cup).

# II. USES AND OCCURRENCE IN MEDICAL FACILITIES

The routine use of ETO in medical and related health care facilities is to sterilize heat-sensitive surgical instruments, equipment, and other objects (or fluids) that come in contact with biological tissue (particularly the vascular system), or extracorporeal equipment through which blood may flow.
The absence of all microbiological life forms, such as viruses, bacteria, yeasts, fungi, and especially persistent spore forms, is essential in order to prevent infectious diseases in patients and animals. Complete sterilization by either heat or gaseous agents is essential for many purposes, although varying degrees of disinfection with chemical germicides (which may sharply reduce the populations of many vegetative forms) may be sufficient for some applications. Heat sterilization is normally the preferred method; however, it cannot always be employed because of the heat-sensitive nature of some items. In addition, ETO gas sterilization is more economical for some applications, such as the industrial sterilization of inexpensive disposable (i.e., single-use) items such as syringes and needles.

# A. Characterization of Occupational Exposure During ETO Sterilization Procedures

There is current large-scale industrial use of ETO gas for sterilization of medical supplies and equipment because such use is effective and economical; in addition, alternative methods are often impractical, hazardous, undependable, or uneconomical. Gaseous ETO is generally used industrially for sterilization processing of disposable sterile kits containing items such as disposable syringes and needles, disposable microbiological laboratory supplies, and life-support items such as electronic cardiac "pacemakers," blood oxygenators, and dialysers. An estimated 80% of all such items are processed by ETO gas sterilization in the U.S. [CDC, 1977]; many could not be processed, regardless of cost, by any other currently available method [CDC, 1977]. The majority of industrial ETO gas sterilization is performed in fewer than 50 large (greater than 1,000 cu ft) sterilizers and in approximately the same number of smaller industrial units [HIMA, 1976]. It has been reported that such ETO sterilizers are operated in accord with manufacturers' recommendations, industry safety regulations, state and local fire codes, and provisions of insurance underwriters [HIMA, 1977]. While this review contains information applicable to the few large industrial users of ETO for sterilization, it is primarily intended for medical facility applications (as defined in the Introduction). There are approximately 8,100 hospitals in the U.S., of which about 7,200 are members of the American Hospital Association (AHA). The AHA estimates that 5,500 to 6,500 of its member hospitals have ETO gas sterilizers [AHA, 1976]. Whereas the majority of these sterilizers are small table-top units, it is estimated that 1,000-2,000 large sterilizers (permanent installations with chamber volumes greater than 4 cu ft) are also in use [HIMA, 1977]. Most hospitals have more than one ETO sterilizer. In addition to hospitals, ETO sterilizers are also used in smaller medical, dental, or veterinary clinics or facilities. While the exact number of sterilizers is not known, NIOSH estimates that it exceeds 10,000 units. Of the almost 2.1 billion kg of ETO currently produced annually in the U.S. [Chemical & Engineering News, 1976], it is estimated that 500,000 kg (0.02% of the total produced) are used for sterilization within medical facilities [NIOSH, 1977], and this use is increasing. NIOSH (1977) estimates that approximately 75,000 health care workers employed in sterilizer areas are potentially directly exposed to ETO.
In addition, an estimated 25,000 others are "casually" exposed due to improper (or inadequate) venting of sterilizers and aerators; storage (or use) of improperly or incompletely aerated ETO-sterilized items; the physical arrangement of the sterilization facility or workroom (which necessitates passage in close proximity to a gas sterilizer or aerator); and mishandling or failure of the equipment (such as leaking sterilizer door seals). Thus, the total number of exposed workers in medical and related facilities is estimated to exceed 100,000. The principal items processed in such hospital "gas" sterilizers are air-powered surgical instruments, anesthesia supplies and equipment, cardiac catheters, endoscopes and other equipment containing lenses intended to be introduced into the human body, humidifiers and nebulizers, implantable body parts, electronic "pacemakers," ophthalmic instruments, x-ray supplies and related equipment, respiratory therapy supplies and equipment, some medications, reusable supplies, thermometers, and equipment contaminated by use in "isolation rooms" (i.e., rooms containing patients with infections). Note that there are significant differences between the items sterilized by the "user" and the items sterilized by the "vendor". ETO is used in sterilizers produced by at least six different manufacturers. In certain small sterilizers it is used full strength, or diluted to a composition of 84% ETO with inert ingredients. For larger sterilizers, non-explosive sterilant mixtures are commercially available: either 10% ETO/90% CO2, tradenamed Carboxide (TM), or 12% ETO/88% halocarbon. Halocarbon products, sold under tradenames such as Freon (TM) or U-con (TM), are used. The Federal standard for occupational exposure to dichlorodifluoromethane (Refrigerant-12) is 1,000 ppm (4,550 mg/cu m) as an 8-hour TWA concentration; no federal exposure limit exists for trichloromonofluoromethane (Refrigerant-11). The impact of the possible removal (for environmental considerations) of certain halocarbon-containing products from general use has not been considered in this report.

# B. ETO Residues in Sterilized Medical Equipment

Residues (or byproducts) are produced mainly by two reactions of ETO: (a) its slow chemical combination with water to form glycols, or (b) its combination with chloride ion in the presence of water to form the chlorohydrin. Since even dry materials contain some moisture, it is apparent that glycol formation is unavoidable; moreover, without the presence of some moisture, ETO sterilization cannot be effected. However, traces of glycols have been generally regarded as relatively harmless and permissible for human exposure. The formation of persistent, toxic ethylene chlorohydrin in foodstuffs fumigated with ETO has been described (Wesley et al, 1965). Chlorohydrins are relatively non-volatile, and are considered highly toxic substances. The Federal standard (29 CFR 1910.1000) for occupational exposure to ethylene chlorohydrin is 5 ppm parts of air as an 8-hour TWA concentration. Attempts have been made to determine the conditions necessary for elimination of chlorohydrin residues from foods by volatilization and decomposition at elevated temperatures. In general, these attempts have not been successful; nor would they be applicable to most sterilized medical products, particularly those which are heat sensitive.
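To put rough upper bounds on the two residue reactions just described, the sketch below works out their stoichiometric mass relations. This is an illustrative calculation only; actual residue levels depend on moisture, chloride availability, temperature, and contact time.

```python
# Stoichiometric mass relations for the two ETO residue reactions:
#   ETO + H2O -> ethylene glycol        (44.05 + 18.02 = 62.07 g/mol)
#   ETO + HCl -> ethylene chlorohydrin  (44.05 + 36.46 = 80.51 g/mol)
MW = {"ETO": 44.05, "ethylene glycol": 62.07, "ethylene chlorohydrin": 80.51}

for product in ("ethylene glycol", "ethylene chlorohydrin"):
    ratio = MW[product] / MW["ETO"]
    print(f"1.0 g of ETO yields at most {ratio:.2f} g of {product}")
```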
While no reference to the formation and levels of ethylene chlorohydrin in medical facilities was found, the possibility that it will be present cannot be ignored. The use of ETO for sterilization of medical devices and equipment raises a number of significant questions regarding (a) the possible entrapment of ETO in a plastic item, which may then exert a toxic effect when the item is placed in contact with living tissue, and (b) the effect of (sorbed) ETO on the physical and chemical properties of rubber and plastic items. Plastic tubing that has been sterilized with ETO has caused significant hemolysis when placed in contact with human blood (Rose et al, 1953; Clarke et al, 1966; Bain and Lowenstein, 1967). For example, Bain and Lowenstein (1967) reported that when mixed leukocyte cultures were incubated in disposable plastic tubes sterilized with ETO, survival of the cells was severely affected by a toxic residue left on the plastic. The residue was dissipated only after 4 or 5 months' storage at room temperature, whereupon the survival of cells cultured in such stored tubes returned to values similar to those obtained with ultraviolet-sterilized (control) tubes. O'Leary and Guess (1968), in their study of the toxic properties of medical plastics sterilized with gaseous ETO, presented data demonstrating the ability of ETO to remain entrapped in non-closed systems, such as surgical tubing, gas washing bottles, plastic syringes, or plastic bottles, at various temperatures above the ETO boiling point. The hemolyzing ability of known amounts of ETO was determined, and freshly gas-sterilized plastic pharmaceutical products were shown quantitatively to produce blood cell hemolysis in proportion to the amount of ETO remaining in the plastic. Additionally, the effects of ester-type plasticizers upon the sorption of ETO into polyvinyl chloride (PVC) products were also described. The residual ETO could present a hazard in the sterilization of such devices as plastic syringes: the residual gas might have toxic effects due to some of its oxidation products, and ETO, if not removed, may be released at a later time (i.e., while the device is in use) and cause hemolysis, erythema, and edema of the tissues (Clarke et al, 1966; Bain and Lowenstein, 1967; O'Leary and Guess, 1968; Kulkarni et al, 1968; and Sykes, 1964). Other studies relating to the general problem of interaction of ETO with constituents of rubber and plastic have been reported. Downey (1950) demonstrated that the mercaptobenzothiazole vulcanization accelerators found in rubber reacted rapidly with ETO to produce (hydroxyethylmercapto)thiazole, despite the fact that the residual ETO concentration in the rubber tubing had been reported to have been dissipated after 5 hours' aeration. Little is known about the parenteral toxicities of these compounds, or of other possible reaction products in ETO-sterilized rubber. Cunliffe and Wesley (1967) have shown that ethylene chlorohydrin was given off from PVC tubing 6 days after ETO sterilization. Gunther (1965) also demonstrated that high concentrations of ETO can be taken up by polyethylene, gum rubber, and plasticized polyvinyl chloride. This re-emphasizes the general problem of entrapment of ETO within sterilized plastics. Reports of local skin irritation from contact with ETO-sterilized plastic items have appeared. This irritation can become severe if sensitization occurs. Items sterilized with ETO must be properly aerated before application to the human body in order to prevent such adverse reactions.
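The dissipation behavior described above (hours of aeration for some materials, months of shelf storage for others) can be visualized with a simple first-order desorption model. Both the model and the numbers in it are illustrative assumptions for this sketch, not data or a model taken from the studies cited.

```python
# First-order desorption sketch: residual ETO remaining after t days, for an
# item starting at r0 milligrams with a desorption half-life of t_half days.
# (Both the 10-mg starting residue and the 12-day half-life are hypothetical.)
def residue_mg(r0, t_days, t_half):
    return r0 * 0.5 ** (t_days / t_half)

for t in (1, 7, 30, 120):
    print(f"day {t:3d}: {residue_mg(10.0, t, 12.0):6.3f} mg remaining")
```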
# C. Alternatives to the Use of ETO Sterilization

The reported alternatives to ETO sterilization include: steam, dry heat, steam-formaldehyde, steam at sub-atmospheric pressure, wet pasteurization, radiation (including gamma-ray, x-ray, ultraviolet radiation, and electron beam exposure), liquid glutaraldehyde, liquid or gaseous formaldehyde, propylene oxide, liquid (or gaseous) betapropiolactone, epichlorohydrin, ethylene imine, glycidaldehyde (i.e., 2,3-epoxy-1-propanal), hypochlorite, peracetic acid, methyl bromide, chloropicrin, and ozone. As a completely different strategy, single-use, unsterilized disposable supplies have been considered by some. Although the safety of such items has been questioned, microbial contamination capable of causing disease is rarely present [HEW Rept., 1977]. A number of other chemicals or processes have been used (or considered) for sterilization of medically related items; for various reasons, their general use is limited.

# III. SUMMARY OF RESULTS OF FIELD STUDIES CONDUCTED IN MEDICAL FACILITIES

In February 1977, NIOSH conducted a limited field survey of hospitals to gain some perspective on situations related to the use of ETO sterilizers that might result in exposure of hospital staff and patients to ETO. The survey involved four hospitals in a metropolitan area, selected to represent both large and small health care facilities. Although a survey limited to only four medical facilities obviously cannot be used with confidence as a basis for describing the conditions of actual widespread use of ETO, it nevertheless presents a general impression of the usage of this compound and some of the potential problems related to its use. The brief summary of the survey results presented below is drawn from a more detailed report [Glaser, 1977]. In addition to the data from the NIOSH field survey, information was obtained from other government agencies, including EPA, FDA, CDC, and NIH, and from the U.S. Army and U.S. Navy. Information was also made available by the Health Industry Manufacturers Association (HIMA), the American Hospital Association (AHA) and their member group, the American Society for Hospital Central Service Personnel, the Manufacturing Chemists Association (MCA), the Association for the Advancement of Medical Instrumentation (AAMI), the American National Standards Institute (ANSI), the American Society of Hospital Engineers (ASHE), ETO sterilizer and aerator manufacturers, and ETO "gas" manufacturers. From the NIOSH field survey and the above sources, the following account of the use of ETO in medical facilities, and of the problems incident to such use, was developed. The results obtained in the NIOSH survey are in agreement with a similar study conducted by the U.S. Army in three of its hospitals (Army, 1977), and with data obtained in a study recently conducted by the American Hospital Association (Runnells, 1977).

# A. ETO Sterilizer Use, and Number of Workers Potentially Exposed to ETO

The number of gas sterilizers per hospital observed in the survey ranged from 1 to 9 (average of 4), with each hospital having at least one large sterilizer in the Central Supply (CS) area. Smaller sterilizers were installed in one or more of the following locations: Operating Room (OR), Surgery, Cardiac Catheterization Laboratory (Cath Lab), Anesthesiology Department, Ear-Nose-Throat (ENT) Clinic, Dental Clinic, Intensive Care Unit (ICU), Urology Department, and Inhalation Therapy Clinic.
Additionally, some hospital-related research facilities have gas sterilizers, used in connection with the banking and transplantation of human tissue and organs (i.e., in a Tissue Bank), and in veterinary facilities which maintain germ-free (i.e., gnotobiotic) animal colonies. The size of the hospitals in the NIOSH survey varied from 115 beds to 750 (immediately expandable to 1,135), with the size of the average hospital being between 460 and 500 beds. Nationally, the number of gas sterilizers per "average" size (i.e., 200-300 beds) hospital appears to be 2 [Runnells, 1977]. The frequency of operation of the large (i.e., Central Supply) sterilizers varies from 3-4 cycles in 24 hours to approximately one cycle every other day. Sterilizers in other areas generally were operated 2-5 times per week. These frequencies agree with data from other sources. The number of personnel directly involved in the sterilization process varied widely. Some facilities reported that only a few (3-5) workers actually operate (i.e., load and unload) the sterilizers and aerators, and then generally only one person at a time. Such a situation occurred mostly in the CS and specific research-related areas (such as the Tissue Bank). The Cath Lab or OR technical staff appear to operate the sterilizers on a rotating (random or assigned) basis, with a somewhat larger number of people (3-15) directly involved. While the number of personnel who may be exposed during operation of the sterilizing equipment may be small, many more persons were potentially exposed for a variety of reasons. In one case, access to a main lavatory required that personnel walk directly in front of two large gas sterilizers and a large aerator. In other cases, persons passing between the workroom and stock room had to walk within 2 feet of the fronts of sterilizers located in a small room or area between the CS workroom and the CS stock/storage area. Another sterilizer was located in a small appendage (vestibule) to the OR workroom, adjacent to a doorway to a heavily trafficked hallway; within a period of less than 15 minutes, more than 30 people were observed passing within 4 feet of the sterilizer and aerator. The aerator vented directly into the workroom, providing additional ETO exposure to workers in that area. In some facilities, staff from various departments withdrawing sterile supplies from the Central Supply walk directly into the sterilizer or stock areas. ETO can enter stock areas from improper ventilation of aeration areas, or by the placing in stock of incompletely aerated items, resulting in diffusion of ETO into the ambient air. While the number of potentially exposed persons may be large, depending on the actual physical arrangements, the number most likely to be exposed was estimated to be between 6 and 60 in each of the hospitals included in this study. This range agrees with an estimate obtained from a nationwide survey of hospital central service/central sterile supply personnel conducted in 1977 by the American Society for Hospital Central Service Personnel, of the American Hospital Association.

# B. Potential Exposure Situations Encountered in the Field Study

In addition to the potential for accidental exposures due to the physical arrangements, in a few cases problems were observed in installation, maintenance, and operation that could increase the unnecessary and/or inadvertent exposure of sterilizer operators to ETO. These are:

1. Improper or Inadequate Venting of Sterilizers:
a. An exhaust vent passed through a window and ended within 1 foot of the intake air duct of an air conditioner. While probably very little of the ETO is actually drawn back into the room by the air conditioner intake, this does illustrate a potential source of exposure.

b. An exhaust pipe from a sterilizer discharged into an open floor drain and gave off a cloud of vapors, including ETO, steam, Freon, and possibly other components (such as ethylene chlorohydrin), from the sterilizer into the machinery room. A local ETO concentration of approximately 8,000 ppm was measured 1 foot above such a drain during a sterilization cycle. High local concentrations of ETO could be significantly reduced by minor modifications of the physical facility. In general, these situations do not result in elevated concentrations of ETO in the operator's breathing zone (BZ), but they may contribute to the low background concentration of ETO in the sterilizing area.

c. Incomplete sterilizer flushing prior to opening of the door allowed very high concentrations of ETO to enter the room immediately upon opening of the sterilizer door; this could create a hazardous situation for the operator attending the door. Approximately 1,200 ppm was recorded at one installation immediately upon opening the chamber door.

d. An operator was able to open one sterilizer during use, to add or remove items, even though its chamber contained ETO. This represents, potentially, a very hazardous situation for the equipment operator.

e. Effluent gas from sterilizers and aerators is not treated to destroy unreacted ETO and hazardous byproducts such as ethylene chlorohydrin, so as to render the resulting effluent innocuous. The development of an apparatus (such as a catalytic converter or combustion device) designed to ensure the complete destruction of unreacted ETO at the end of the sterilization procedure should be given high priority, both to ensure worker safety and to prevent environmental pollution by ETO.

2. Improper Aeration of Sterilized Equipment:

a. With only one notable exception, all aerator cabinets in the medical facilities visited were vented directly into the room in which they were installed, or into the machinery space behind the cabinet. Airborne ETO concentrations of 300 to 500 ppm were not unusual above and behind some aerator cabinets.

b. Aeration was permitted on open shelves in the CS or OR workroom or stock room areas in one facility. Values of 25-50 ppm were measured 1 foot above stored items which had been sterilized more than 24 hours earlier.

c. Sterilized items were stored (following aeration?) in glass cabinets with tightly fitting doors in two facilities visited; an odor of ETO was obvious upon opening one of the cabinets, indicating improper aeration before storage and lack of venting of the cabinet.

d. Exposure to ETO sterilizer gaseous products is of obvious concern, as some technicians reported skin irritation from contact with recently sterilized items, while others reported that their eyes "watered" while removing items from the sterilizer.

3. Inadequate Room Ventilation:

Ineffective or non-existent room ventilation was noted in some facilities, which allowed the buildup of ETO to high levels within the room. A 10 x 10 x 12 foot unventilated room housed a small sterilizer which was found to have a leaking door gasket; the ETO concentration at the BZ near the sterilizer was greater than 1,000 ppm.

4. Malfunctioning or Leaking Equipment:
a. A leaking door seal (gasket) on a sterilizer caused an extremely high level of airborne ETO in a small, non-ventilated, closed room; results are cited above.

b. ETO leaks were observed at some gas tank valves, threaded fittings, and near some chamber fittings and piping. This could be very serious for personnel entering poorly ventilated machinery spaces. Leaks producing instrument readings ranging from 400 to 3,000 ppm were located.

5. Improper Operating Procedures:

Sterilized items were removed from the sterilizer and transported on carts through a heavily congested hallway to the aerator or storage area. High rates of off-gassing, 200-300 ppm 1 foot above the cart, were noted during the movement of the sterilized items.

# C. Measurement of Ambient Levels of ETO

The method used for the sampling and analysis of airborne ETO consisted of the adsorption of ETO on charcoal, and gas chromatographic determination following desorption of the ETO with carbon disulfide [NIOSH Standard Completion Set T, 1976]. A series of measurements was obtained at various locations in rooms containing sterilizers and/or aerators, before, during, and after periods of operation of the equipment. Sampling was also conducted in other places of potential exposure, such as hallways and vent exhaust areas. In general, the concentrations of ETO in the employees' breathing zone (BZ) were below the current federal (OSHA) standard (29 CFR 1910.1000) of 50 parts per million (ppm) parts of air as a time-weighted average (TWA) over an 8-hour day. General area air samples collected during a 5.5-hour period in front of a bank of two ETO sterilizers and an aerator contained only 1-2 ppm of ETO. However, many values were recorded which were at, or above, the ACGIH tentative Threshold Limit Value for short-term exposure (i.e., the short-term exposure limit, STEL, of 75 ppm for periods of less than 15 minutes). In fact, some very high short-term BZ values were observed, including a peak of 460 ppm within the first minute after opening the sterilizer door, lasting for about 1 minute. Even higher concentrations, for shorter periods, were noted on occasion. Although there is no absolute maximum or "ceiling" value for ETO, one should regard the recommended STEL as a level not to be exceeded.

# D. Conclusions

1. The survey revealed a number of conditions which resulted in unnecessary, preventable exposure of hospital staff, and possibly patients, to ETO.

2. Many of the conditions were due to faulty equipment or improper operating procedures. These can be corrected by minor modifications (e.g., venting floor waste drains, replacing leaking sterilizer door seals or pipe joints) and improved work practices, including techniques for removing items from the sterilizer and aerator. In addition, the operator should avoid contact with both ETO gas and the liquid which may remain in the flexible connecting lines when changing sterilizer gas cylinders. Protective gloves and goggles are available, and should be worn by personnel changing gas cylinders or cleaning up following an accidental spill. Gas cylinders should be changed in such a manner as not to expose the technician to ETO. Transfer carts should be used to remove sterilized items from large sterilizers, and gloves and forceps should be used whenever possible to remove items from small sterilizers. This will minimize the inhalation of ETO by the operator, and the possibility of dermal contact with ETO.
3. Other conditions resulted from improper installation and ventilation of sterilizers, aerators, or the rooms in which such equipment was installed. Correction may require more extensive modifications such as additional venting ducts, high-velocity vacuum pick-up ducts at the sterilizer door, improved room ventilation, and measures to exhaust decontaminated effluent in such a way as to prevent exposure within the hospital. An absolute minimum of 10 air changes per hour should be provided to all rooms containing gas sterilizers, aerators, or stored ETO-sterilized items (a brief airflow calculation illustrating this minimum follows these conclusions). Air should not be recirculated in these spaces. Devices which completely destroy all unreacted ETO need to be developed.

4. Modification or engineering re-design of some sterilizer chambers may be necessary to assure effective displacement of the ETO gas following sterilization. It is desirable to have repeated air flushing of the sterilizer before it is opened. A power-operated door-opening device exists on some large sterilizers; it allows the operator to push a button, then walk away, while the sterilizer door slowly opens. If, after a suitable period of time to permit the chamber to "air out," a buzzer could call the operator back to unload the chamber, safer sterilizer operations would result. In lieu of the power door, an alternative might be for the operator, upon completion of the sterilization cycle, to open the chamber door approximately 6 inches and wait 5-15 minutes before removing a load from the sterilizer. Aerators (which are forced-draft, warm-air cabinets) should not vent directly into the room; rather, they should be connected to exhaust ducts to carry effluent out of the work area. Aeration performed under ambient conditions (i.e., in the open, at room temperature and atmospheric pressure) is not generally used, due to the lengthy time required to eliminate ETO "residues" from the sterilized material. In the absence of aerators, removal of ETO from sterilized items should be permitted only in dedicated, well-ventilated areas, such as in hoods.

5. Many items are being sterilized with ETO when they could be processed by other methods, e.g., steam or ultraviolet radiation.

6. Most (but not all) hospitals provide a "standard operating procedures" manual containing ETO sterilization techniques, and conduct thorough on-the-job training programs covering the sterilization procedures with ETO.

7. Medical and health records of personnel employed in areas in which ETO is used were not maintained in all cases. The keeping of employee medical and health records, as well as occupational exposure records, needs to be improved in some medical facilities.

8. Within the past few years, sterilizer and aerator equipment manufacturers appear to be making stronger recommendations in their instruction manuals, and in training aids, courses, seminars, and advertising, as to the proper ventilation, location, and use of sterilizers and aerators. Some early equipment manuals did not contain such recommendations. It is necessary that architects and designers of health care facilities, supervisors of the equipment operators, facility safety staff, industrial hygienists, and others rigorously follow the recommendations and instructions stated by the equipment manufacturers, to protect the worker from the hazards of ETO exposure.
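As a rough illustration of conclusion 3, the following sketch converts a room volume and an air-changes-per-hour target into the exhaust airflow a ventilation system must deliver. The room dimensions reuse the 10 x 10 x 12 foot room described under Inadequate Room Ventilation; the function name and everything else in the sketch are illustrative, not part of any standard.

```python
# Illustrative calculation only: relates room volume and air changes per
# hour (ACH) to the exhaust airflow required of a ventilation system.

def required_airflow_cfm(length_ft, width_ft, height_ft, ach):
    """Exhaust airflow (cubic feet per minute) needed to achieve a given ACH."""
    room_volume_ft3 = length_ft * width_ft * height_ft
    return room_volume_ft3 * ach / 60.0  # ft3/hr divided by 60 min/hr

# The 10 x 10 x 12 ft sterilizer room cited above, at the recommended
# minimum of 10 air changes per hour:
cfm = required_airflow_cfm(10, 10, 12, ach=10)
print(f"Required exhaust airflow: {cfm:.0f} cfm")  # -> 200 cfm
```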
# IV. BIOLOGIC EFFECTS OF EXPOSURE TO ETO

# A. Effects in Animals and Lower Biological Systems

1. Acute Toxicity

Acute lethality studies in animals have been performed in four species and by five different routes of exposure. A wide array of responses follows acute exposure to ETO, including nausea, salivation, vomiting, diarrhea, lacrimation, nasal discharge, edema of the lungs, gasping, labored breathing, paralysis (particularly of the hindquarters), convulsion, and death. Deaths which occurred shortly after exposure to ETO were attributed primarily to lung edema, whereas delayed deaths often resulted from secondary infection of the lungs along with general systemic intoxication (Patty, 1963).

a. Inhalation

Of the acute lethal studies, the inhalation studies of Jacobson et al (1956) are the most pertinent to occupational exposure. In those experiments, exposures to various ETO concentrations for 4-hour periods resulted in LC50's of 835 ppm for female mice, 1,460 ppm for male rats, and 960 ppm for male dogs. Hine and Rowe (1973) compiled data on inhalation exposures to illustrate the variable lethal response by species, concentration, and duration of exposure (Table 1). In general, no deaths were reported at ETO exposure levels of 250-280 ppm for rats, guinea pigs, rabbits, cats, and dogs.

b. Oral and Parenteral

LD50's resulting from oral and parenteral exposures ranged between 141 and 631 mg/kg, and are listed in Table 2. Additional support for this range is provided by Patty (1963), who reported LD50's (following intragastric administration of 1 per cent aqueous solutions) of 330 mg/kg for rats and 270 mg/kg for guinea pigs. Weil et al (1963) reported a single oral LD50 for rats of 330 mg/kg (range 290-360). In another study reported by Patty (1963), a single 200 mg/kg dose of ethylene oxide given intragastrically (as a 10 per cent solution in olive oil) killed all 5 rats in the group, whereas all animals survived a dose of 100 mg/kg.

c. Tissue (including eye) Irritation

The results of studies to determine the acute eye and tissue irritant properties of ETO (in aqueous solution) are summarized in Table 3. The Draize (1965) system of evaluation, consisting of estimates of the highest doses having no overt actions at various sites after various methods of administration, was used in these studies. Thickening of skin and ecchymoses followed subcutaneous injection in the guinea pig, while mild irritation occurred after intradermal injection and dermal application in the rabbit. No local reactions were observed after intramuscular injections. As illustrated in Table 3, the irritant properties varied with both concentration and total dose.

# Ocular Effects:

The cornea and conjunctivae appear to be less sensitive to ETO than the skin of the rabbit. The reactions to the dermal and ocular applications most closely resemble those seen in actual occupational exposures. Woodard and Woodard (1971) reported slight irritation in the rabbit eye, with lacrimation and conjunctival erythema. In a series of investigations by McDonald et al (1973), ETO in a balanced salt solution was administered by both ocular instillation and injection into the anterior chamber of the rabbit eye. The maximal concentration which did not produce substantial ocular pathology varied with the specific ocular tissue examined and the route of administration, ranging from 0.1% to greater than 20% (Table 4). Irritant effects observed after acute ocular instillation were discharge, iritis, corneal cloudiness, and damage as evidenced by fluorescein staining.
Conjunctival congestion, flare, iritis, corneal opacity, and fluorescein staining were observed after administration into the anterior chamber. Injection into the anterior chamber had more effect on the iris, the lens, and the retina than instillation into the conjunctival sac. Conversely, the latter procedure affected the cornea and the conjunctivae more markedly than the former. Neither differential effect is exceptional (Alcon Submission, 1973).

The irritation potential of a commercial ophthalmic ointment, the components of which were sterilized with ETO, was compared with that of the product prepared from non-sterilized components. The materials were applied to the eye of the rabbit in a 6-hour heavy dosing regimen (0.5 mg/dose at 20-minute intervals for 6 hours), and in a 5-day dosing regimen more comparable to clinical use (0.1 mg/dose, 5 applications/day for 5 days). Minimal conjunctival congestion was seen after the applications of ointment, but no difference was seen between the sterilized and non-sterilized products (Alcon Submission, 1973).

2. Sub-Chronic Toxicity

a. Oral and Parenteral

ETO was administered to rats by gavage, 5 days/week, for 3 or 4 weeks (Hollingsworth et al, 1956), and to rats and dogs by daily subcutaneous injections for 30 days (Woodard and Woodard, 1971). The results of these studies are summarized in Table 5. The no-effect levels for the rat were 30 mg/kg by the oral route and 18 mg/kg by subcutaneous injection.

b. Inhalation

Hollingsworth et al (1956) and Jacobson et al (1956) conducted studies in which various animal species were exposed repeatedly to ETO vapor at concentrations ranging from 100 to 841 ppm. The studies are summarized in Table 6. At 100 ppm there was anemia in 1 of the 3 dogs, and 8 of 30 mice and 3 of 20 rats died during 130 exposures of 6-hour duration. At 113 ppm, no rats (of 40), guinea pigs (of 16), rabbits (of 4), or monkeys (of 2) died during 122 to 157 exposures of 7-hour duration. The male rats did show some growth depression, and rats of both sexes exhibited increased lung weight. At 204 ppm, growth depression was noted, an appreciable number of rats died of secondary respiratory infection, and rabbits and monkeys developed posterior paresis; rats and guinea pigs also exhibited increased lung weights. At higher concentrations more serious conditions developed, including severe nervous and respiratory effects and degeneration of testicular tubules. On the basis of these studies, Hine and Rowe (1973) proposed permissible exposure limits of 100 ppm for repeated exposures of 4 hours/day for a 2-week period, and 50 ppm for 7-hour exposures on a continuing basis.

3. Chronic Toxicity

No chronic test data could be found. NIOSH has been informed that a 2-year vapor inhalation study on rats began on April 27, 1977, at the Carnegie-Mellon Institute of Research, Pittsburgh, Pennsylvania. That study is supported by industry and will include cytogenic, mutagenic, and teratogenic evaluations, and a one-generation reproduction study. A similar 2-year study on mice will also be performed.

# a. Carcinogenicity

No standard long-term carcinogenicity bioassays have been reported, although two screening experiments have been conducted. Walpole (1957) subjected 12 "stock" rats to repeated subcutaneous injections of 1 g/kg (bw) ETO in arachis oil. The exact dosing schedule was not reported, although the period of injection was 94 days. The animals were maintained for their lifetimes, during which no local sarcomas or other tumors were observed.
In other studies, Van Duuren et al (1965) applied (by brush) 0.1 ml of a 10% solution of ETO in acetone three times each week onto the clipped dorsal skins of 30 female ICR/Ha Swiss mice. The animals were 8 weeks of age at the start of the skin painting, which was continued for their lifetimes. The median survival time was 493 days; no skin tumors were observed.

Two other studies related to carcinogenicity have been conducted. Jacobson et al (1956) exposed rats, mice, and dogs to 100 ppm of ETO (6 hours/day, 5 days/week) for 6 months. There were no significant pathologic changes suggestive of a carcinogenic response; the only observed effects were decreases in red blood cell count, hemoglobin, and hematocrit in dogs. Scientists at the National Cancer Institute and the International Agency for Research on Cancer consider that this study was not conducted for a sufficiently long period to test the possibility that ETO causes cancer.

In the other "study" (actually, a retrospective analysis of events), 86 female Swiss-Webster germ-free mice were inadvertently maintained on ETO-treated ground-corncob bedding for 150 days, and were then moved to untreated bedding for the remainder of their lives (maximum lifespan = 900 days). Tumors were observed at various sites in 63 mice. In contrast, no tumors were reported in 83 female mice that had not been exposed to the ETO-treated bedding, but were observed for 100-160 days (Reyniers et al, 1964). The most common tumors in the exposed group were ovarian, lymphoid (malignant lymphoma), and pulmonary. While the effect is noteworthy, it is not known whether it was due to ethylene oxide, ethylene glycol, or some other unknown chemical (or factor).

# b. Mutagenicity

Experimental attempts to induce mutations in at least 14 different species through exposure to ETO have been reported. An increased frequency of mutations was observed in 13 of the test species, the exception being a bacteriophage of Escherichia coli [Loveless, 1959]. The data for the species, loci/locus, mutation type, exposure time, exposure route, exposure concentration, and the significant results are summarized in Table 7. The data indicate that several different types of genetic damage may be induced following exposure to ethylene oxide.

The induction of point mutations has occurred in both procaryotic and eucaryotic organisms (Table 7). While the point mutations probably were of the base-pair substitution type in Salmonella typhimurium, the mutational spectra have not been well characterized at the molecular level in other species [Embree, 1975]. In Drosophila melanogaster, the classes of induced mutations included sex-linked lethals, visibles, and "minutes" [Bird, 1952; Fahmy and Fahmy, 1956; Fahmy and Fahmy, 1970; Nakao and Auerbach, 1961]. In plants, the classes of heritable changes include mutations at the chlorophyll and waxy loci and chromosomal abnormalities [Faberge, 1955; Ehrenberg et al, 1956; Ehrenberg et al, 1959; MacKey, 1968; Sulovska et al, 1969; Lindgren and Sulovska, 1969; Roy et al, 1969; Roy and Jana, 1973; Jana and Roy, 1975].

Some evidence suggests that ETO may induce heritable changes in mammals, although no direct observations of such changes, comparable to those observed in lower organisms, have been reported in mammalian systems. Following three 7-hour/day exposures to 250 ppm of ETO, isochromatid and chromatid gaps and breaks were observed in bone-marrow cells of rats sampled 24 hours after the last exposure [Embree, 1975].
The frequency of gaps and breaks as a function of total exposure time or total dose was not reported. Following a single oral dose of 9 mg/kg, a significant number (P less than 0.007) of chromosomal abnormalities was observed in the red marrow cells of the femurs of rats. Neither dose-response data nor the rate of return to a normal chromosomal picture was reported [Embree, 1975].

The dose-response relationship for the induction of micronucleated cells in the femoral marrow of Long-Evans rats as a function of the concentration of inhaled ETO was studied by Embree (1975), using groups of 5 rats exposed to 0, 10, 25, 50, 250, or 1,000 ppm of ethylene oxide for 4 hours. A statistically significant increase (P less than 0.05) in micronuclei was induced at the 50, 250, and 1,000 ppm levels. While several alkylating agents which are known to induce mutations are also known to induce the formation of micronuclei in bone marrow cells, the biological significance of this adverse effect is poorly defined. These results suggest that the current federal standard of a time-weighted average (TWA) concentration of 50 parts of ETO per million parts of air for an 8-hour day may not protect exposed employees from all adverse effects resulting from ETO exposure.

The most plausible molecular basis for the induction of heritable changes and other genetic effects following exposure to ethylene oxide is the alkylation of cellular constituents, including deoxyribonucleic acid (DNA). The highly strained 3-membered ring of the epoxide is broadly reactive toward all classes of cellular nucleophiles. Reactions of ETO with protein [Frankel-Conrat, 1944] and with nucleic acid [Frankel-Conrat, 1961; Ehrenberg et al, 1974] have been measured. The primary identifiable product of the reaction of ETO and DNA was 7-hydroxyethylguanine, although a number of other minor products would also be anticipated. Since the occurrence of these chemical reactions is a consequence of the intrinsic characteristics of the chemical structures, it seems probable that analogous reactions may occur in humans who are occupationally exposed to ETO. These spontaneous reactions result in the formation of covalently altered biomolecules which may possess abnormal functional properties.

In addition to the spontaneous reactions, ETO may also be consumed via detoxifying enzyme-catalyzed reactions, such as those of epoxide hydratase and glutathione-S-alkyl transferase [Arias and Jakoby, 1976]. The rate of consumption of ETO by such enzyme reactions would be a non-linear function of the local concentrations of the substrate. Since the uncatalyzed half-time of the reaction of ETO with water is about 4,200 minutes, whereas the in vivo half-life in the mouse is only about 9 minutes [Ehrenberg et al, 1974], it appears that such enzymatic reactions may rapidly detoxify a large fraction of the absorbed doses in mammals. At a minimum, the greatly reduced half-life of ETO in vivo suggests a non-linear dose-response relationship for the induction of genetic effects in mammals, due to the existence of saturable detoxifying mechanisms. Further research is required to establish the relative magnitudes of the competing reactions as functions of the doses and of the exposure times.
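A minimal numerical sketch of the half-life comparison above, assuming simple first-order kinetics for both processes: the two half-lives are those reported by Ehrenberg et al (1974), while the 60-minute observation window is an arbitrary choice for illustration.

```python
import math

def fraction_remaining(t_min, half_life_min):
    """Fraction of ETO remaining after t_min, assuming first-order decay."""
    k = math.log(2) / half_life_min  # first-order rate constant, per minute
    return math.exp(-k * t_min)

for label, t_half in [("uncatalyzed hydrolysis in water", 4200.0),
                      ("in vivo, mouse", 9.0)]:
    left = fraction_remaining(60, t_half)
    print(f"{label}: half-life {t_half:g} min, "
          f"fraction remaining after 60 min = {left:.1%}")

# About 99% of the ETO would remain after an hour of uncatalyzed hydrolysis,
# versus roughly 1% in vivo -- consistent with rapid enzymatic detoxification.
```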
The observations of (a) heritable alterations in at least 13 different species, (b) alterations in the structure of the genetic material in somatic cells of the rat, and (c) a covalent chemical reaction between ETO and DNA support the conclusion that continuous occupational exposure to significant concentrations of ETO may induce an increase in the frequency of mutations in human populations.

Embree [1975] and Ehrenberg et al [1974] have attempted to estimate the degree of genetic risk to human populations exposed to ETO by comparing the results with the effects of ionizing radiation, and expressing the result in terms of the "rad-equivalent." Embree's estimate (100 mrad-equivalents/ppm-hr) was based on the frequency of micronuclei in bone marrow following exposure to graded doses of ethylene oxide. At present, the mechanism of generation of micronucleated cells in bone marrow is not known, although it is possible that these cells may arise from either the non-disjunction of the chromosomes during mitosis or malformation of the spindle. While it is conceivable that somewhat related processes may occur in functionally distinct tissue, such as spermatogonia, neither quantitative nor qualitative evidence is available to indicate the magnitude of the effect in the genetically important cell types following exposure to ethylene oxide. Consequently, Embree's estimate appears to be highly speculative, with minimal theoretical or experimental basis.

The estimate by Ehrenberg et al [1974] of the risk to humans (reported as 20 mrad-equivalents/ppm-hr) is based on the assumption that the rate and mechanism of induction of mutations affecting chlorophyll in metabolically dormant barley seeds were equivalent to the rate of induction of specific locus mutations in the metabolically active tissues of the testes of mammals. The reliability of this estimate as a quantitative measure of genetic risk to human populations exposed to ETO is open to question. In addition to the obvious physiologic differences between plants and mammals, it should be noted that forward chlorophyll mutations in barley seeds may occur at several hundred loci, while the specific locus test is limited to one locus or a small number of loci [Ehrenberg et al, 1974]. Consequently, the relative risk estimated by Ehrenberg et al may be in error by several orders of magnitude. Thus, the estimates of genetic risk by Embree [1975] and Ehrenberg et al [1974] probably do not, by themselves, provide a satisfactory basis for the development of occupational exposure standards. Nevertheless, the observations of induction of heritable changes in a broad spectrum of biological organisms, and of the intrinsic chemical reactivity of ETO with cellular constituents, suggest that ETO may have a significant influence on the spontaneous mutation rates of human populations.

Kalling [1967] briefly reported data, obtained from an experiment performed by Ehrenberg and Hallstrom, on the frequency of pathological mitoses observed in phytohaemagglutinin-stimulated lymphocytes of 7 workers who had been "strongly affected" by ETO following an industrial accident 18 months earlier. Analysis of 6 to 26 metaphases from each individual suggested an increased frequency of pathological mitoses relative to the frequency in 10 non-exposed workers. Kalling's brief description of the results is insufficient to draw substantive conclusions.
However, this report of induction of chromosomal abnormalities in humans is consistent with the observations of Embree [1975] in the rat.

# c. Reproductive Effects/Teratogenicity

No data concerning reproductive or teratogenic effects were found for ETO. Ethylene chlorohydrin (2-chloroethanol), a reaction product of ETO, was shown to cause pronounced teratogenic effects (i.e., anterior hydrophthalmos, a form of buphthalmia) when administered to the developing chicken embryo. The compound was administered in water at preincubation (0 hours) or at 96 hours of incubation, at a number of different doses. It should be noted that the limited animal test data on the carcinogenic potential of ethylene chlorohydrin suggest no increased incidence (over controls) when the compound was fed (Ambrose, 1950) or injected subcutaneously (Mason et al, 1971) into rats. Ethylene chlorohydrin is to be tested for effects on reproductive processes by inhalation and by skin painting, under NCI contract. Studies have shown chromosome aberrations in rat bone marrow cells following exposure to ethylene chlorohydrin (Isakova et al, 1971, and Semenova et al, 1971), and base-pair substitutions (but not frameshift mutations) in Salmonella (Rosenkranz and Wlodkowski, 1974). Ethylene chlorohydrin also induced mutations in the bacterium Klebsiella (Voogd and Van Der Vet, 1969).

# d. Metabolism

A comprehensive study designed to isolate and characterize the metabolites of ETO in either lower or higher life forms has not been identified in the literature. However, Frankel-Conrat (1944, 1961) has characterized the spontaneous chemical alkylation of a protein (egg albumin) and viral ribonucleic acid, and Brookes and Lawley (1961) characterized the reaction of ETO with guanine. More recently, Ehrenberg et al (1974) have reported the results from several experiments on the fate of inhaled tritium-labeled ETO in mice. The biological half-life was found to be about 9 minutes in the mouse, indicating a rapid metabolic breakdown of the molecule. The second-order rate constants for the reactions of ETO with DNA and with liver "interphase" protein were found to be 0.9 and 3.3 l/g-hr, respectively. The hydroxyethylated DNA was observed to undergo a slow depurination reaction, a possible pre-mutation lesion, with a half-life of 340 hours. Following inhalation of tritiated ETO, radioactivity (in unidentified chemical form) was found to be associated with proteins isolated from lung, liver, kidney, brain, spleen, and testes. Seventy-eight per cent of the absorbed dose was excreted into the urine within 48 hours of the inhalation exposure. Other than the possible isolation of small amounts of 7-hydroxyethylguanine from the urine, evidence for the alkylation of DNA in vivo was not reported. Since urinary 7-hydroxyethylguanine may also arise from the alkylation of RNA, the fraction of the total absorbed dose which alkylates the genetic material remains to be determined. The study confirms that ETO is readily absorbed via the respiratory tract, and is widely distributed throughout, and reacts with, components of body tissues.

# B. Effects in Humans

1. Acute Toxicity:

Zernik (1933) extrapolated results of animal tests to approximate lethal dosages of ETO for man. He estimated that an exposure to 0.5 mg/l (278 ppm) for 1 hour would be objectionable, but that the same concentration for a day would be dangerous.
He further speculated that 1.0 mg/l (556 ppm) would be sufficient to cause sickness and death after a number of hours of inhalation. Systemic poisoning due to inhalation exposure to ETO is rare, but 3 cases have been reported in which headache, vomiting, dyspnea, diarrhea, and lymphocytosis occurred (Hine and Rowe, 1963). Hess and Tilton (1950) observed a similar case. The nausea and vomiting may persist for several hours if untreated. There is also evidence that high vapor concentrations can be sufficiently irritating to cause pulmonary edema in man [Thiess, 1963]. Inhalation of high concentrations of ETO for even short exposure periods can produce a general anesthetic effect, in addition to coughing, vomiting, and irritation of the eyes and respiratory passages. Early symptoms are irritation of the eyes, nose, and throat and a "peculiar taste"; effects which may be delayed are headache, nausea, vomiting, dyspnea, cyanosis, pulmonary edema, drowsiness, weakness, incoordination, electrocardiographic abnormalities, and urinary excretion of bile pigments.

Thiess (1963) reported on his observations in 41 cases of excessive industrial exposure to ETO, mainly due to accidents. The principal effect after excessive exposure to the vapor was vomiting, recurring periodically for hours, accompanied by nausea and headache. Short exposure to high concentrations of ETO resulted in irritation of the respiratory passages leading to bronchitis, pulmonary edema, and emphysema.

At least two additional cases of accidental poisoning in man by ETO appear in the literature. Blackwood and Erskine (1938) reported cases in 6 men who became ill while working in a ship compartment adjacent to one being fumigated with a commercial mixture of 10% ETO/90% CO2 [Carboxide]. Ten female employees were reported (Anon, 1947) to have been "overcome" by ETO used as a disinfectant in a California food plant. The Department of Public Health of the State of California issued a report on this incident; attempts at locating the report have been unsuccessful to date. In neither of these events was the concentration of ethylene oxide known. In both, the gross symptoms of the persons affected included headaches, nausea, vomiting, and respiratory irritation. No permanent ill effects were reported.

A number of dermatologic conditions due to exposure to ETO have been reported. Skin burns occurred in workers after prolonged contact (duration unspecified) with a 1% solution of ETO in water (Sexton and Henson, 1949). Skin contact with solutions of ETO caused characteristic burns on human volunteers: after a latent period of 1 to 5 hours, edema and erythema, progressing to vesiculation with a tendency to coalesce into blebs, and desquamation were followed by complete healing within 21 days, usually without treatment. In some cases, residual brown pigmentation occurred. Application of the pure liquid to the skin caused frostbite (due to rapid evaporation of the ETO); 3 of the 8 volunteers became sensitized to ETO solutions. Human experience indicates that extensive blister formation occurs following even brief contact with 40-80% aqueous solutions of ETO; lesser or greater concentrations produced blisters only after more prolonged contact. ETO has been reported to be retained in rubber and leather for long periods of time, and not to be removable by washing. Wearing contaminated footwear has resulted in serious foot burns (Phillips and Kaye, 1949).
Rubber shoes were donned by laboratory workers immediately after the shoes had been sterilized with ETO; apparently the ETO had dissolved in the rubber and then diffused out to contact the skin. No such incidents occurred when similar articles of clothing were aired until free of ETO before being worn. Contact of liquid ETO with exposed skin does not cause skin irritation immediately, but blistering of the skin develops after a time if shoes or clothing wet with ETO are not removed promptly.

Contact of liquid ETO with the eyes can cause severe burns. A workman exposed to ETO in an unstated manner was reported by McLaughlin (1946) to have suffered a corneal burn, which was said to have healed promptly. In 1963, Thiess reported that he had observed the effects of an accidental forceful blast of gaseous ETO that struck a nurse in the eye, nose, and mouth, from a gas sterilizer that employed a cartridge of ethylene oxide. This caused no immediate discomfort, but within 2 hours one eye became mildly uncomfortable, and one nostril felt sore and obstructed. The only abnormality in the eye at that time was keratitis epithelialis, consisting of fine gray dots in the corneal epithelium in the palpebral fissure; these dots stained with fluorescein. In 3 hours the foreign-body type irritation in the affected eye and the soreness in one nostril were enough to interfere with working. The foreign-body type of discomfort reached its maximum 9 to 10 hours after the exposure, but by 15 hours the pain began to subside, and in 24 hours the eye was entirely normal, both subjectively and by slit-lamp examination. Soreness in the nose persisted a little longer, but at no time was there respiratory distress or alteration of taste or sensation in the mouth. Another report by Thiess (1963) described an instance of a squirt of liquid ETO into one eye of a patient, treated immediately with copious irrigation with water; this resulted only in irritation of the conjunctivae lasting for 1 day. Thiess quoted Hollingsworth et al (1956) as having observed intense conjunctivitis in rabbit eyes after application of only one drop of ETO; this condition cleared in 4 days.

# Chronic Effects

a. Case Histories

Exposure to low concentrations of gaseous ETO may cause delayed nausea and vomiting [Hess and Tilton, 1950]. Continuous exposure to even low concentrations will result in a numbing of the sense of smell [Jacobson et al, 1954]. Protracted contact of the gas or the liquid with the skin was observed (Thiess, 1963) to cause toxic dermatitis with large bullae; there appeared to be little or no tendency toward development of allergic dermatitis. Evidence of irritation was seldom seen in the respiratory tract, contrary to what others had reported, and cough was not a warning symptom. The eyes of workers excessively exposed to ETO vapor occasionally showed slight irritation of the conjunctivae, but there were no injuries of the cornea, and no special treatment was felt to be needed. Thiess (1963) observed no lacrimatory effect that could be considered a warning symptom. Workers have developed severe skin irritation from wearing rubber gloves which had absorbed ETO; redness, edema, blisters, and ulceration were observed (Royce and Moore, 1955). There is considerable evidence [Sexton and Henson, 1949, 1950] which suggests that repeated skin contact with dilute or concentrated aqueous solutions of ETO results in the development of skin sensitivity of long duration.
The conclusions of Sexton and Henson (1950) are listed:

(1) The magnitude of the skin injury from ETO seems to be influenced and determined by the length of time of contact and the concentration. The most hazardous aqueous dilutions seem to be those in the 50% range.

(2) Pure anhydrous liquid ETO does not cause primary injury to the dry skin of man.

(3) Rapid vaporization of pure liquid ETO sprayed on the skin results in a freezing reaction of the skin.

(4) Repeated skin injury can result in such delayed sensitivity manifestations as the "flare-up" and "spontaneous flare-up" reactions.

Aqueous solutions of ETO in prolonged contact with the skin are well known to be vesicant (owing presumably to the reactivity of ETO with tissue proteins). The volatility of ETO, and the fact that there are rarely circumstances in which considerable concentrations are held in protracted contact with the eye, may account in part for the mildness of the ocular injuries which have been observed so far. The comparative insensitivity of the cornea to damage by ETO, shown in Tables 3 and 4, undoubtedly plays a part here also. The highly crosslinked structure of collagen quite possibly contributes to this relative insensitivity of the cornea; the high concentration of glutathione present in the cornea may also be a factor in this insensitivity to injury by ETO. Delayed sensitization reactions have been reported [Shupak, unpublished] in humans, as has one case of anaphylaxis resulting from residues of ETO in renal dialysis equipment (Poothullil et al, 1974).

# b. Epidemiologic Studies

Few studies are known which relate directly to long-term exposure to ETO in medical facilities. Correspondence from the Veterans Administration (VA) (Cobis, 1977) indicates a very low incidence of reports of health incidents relating to the use of ETO in VA medical facilities. These consist of "1 patient with minor face irritation, and 12 employee incidents, including watering eyes, nausea, and skin irritation." Cobis reported that the "incidents are now being evaluated by the medical staff to see if they can be verified or if there was any sequela." The VA experience is based on an estimated 400,000 sterilization cycles in which ETO was used in 162 hospitals and 7 outpatient clinics over an average of 8.2 years (total of accumulated years is 1,334). The VA estimates that its facilities are processing 5 million packages of sterile supplies in 79,000 sterilization cycles per year (an average of 63.3 packages per cycle).

A small retrospective morbidity study was conducted on a group of 37 employees who were exposed to ETO levels of approximately 5-10 ppm for a period of 5-16 years in an ETO production plant (Joyner, 1964). The "exposed" group consisted of males between the ages of 29 and 56. Mean exposure to ETO was 10 2/3 years, with only 3 workers exposed for less than 5 years, and 2 workers exposed for more than 15 years. The "control" group was selected from among volunteers who were participants in the periodic physical examination program, with no attempt at selection except to match ages with those of the study group. Mean length of service (in production units) of the control group was 11 2/3 years. No attempt was made to categorize the specific types of chemical exposures (to agents encountered in the petrochemical industry) of the "control" group.
The study included a review of medical records (with scoring of such factors as number of visits to the dispensary, days "lost" from work, and number and type of injury reported), a "problem-oriented" tabulation of specific illnesses diagnosed by the physicians, and a comparison of the morbidity rates. In general, the ETO workers had a better health record than the "control" group. While such a study might have detected major toxic effects, the sample size, "control" group, dose levels, duration of exposure, and observation period were not sufficient to assess carcinogenic potential or more subtle toxic effects.

A study by Ehrenberg (1967) involved hematologic and chromosomal investigations of a group of personnel of a factory manufacturing and using ETO. Early in 1960, a preliminary hematological investigation was performed. Twenty-eight persons from the part of the factory where leakage of ETO from tube joints, pumps, etc, was possible (and occurred at least occasionally) were investigated, and 26 persons from other departments, not working with ETO, were used as controls. In addition, 3 persons accidentally exposed to high concentrations of ETO were studied; these were included in the "exposed" group. The subjects were males of about the same age (i.e., 45 years). The "exposed" persons had been active in the ETO department for periods of time varying from 2 to 20 years (avg. 15 years). Since certain differences were found between exposed and control persons, the ventilation of the factory was improved. A second analysis was undertaken in the following winter (termed the "1961 investigation"), comprising all male employees of the factory, now classified as follows:

(1) 66 persons not working in contact with ETO (including the 1960 "control group").

(2) 86 persons intermittently working in ETO premises.

(3) 54 persons who had worked in contact with ETO for some period of time.

(4) 37 persons permanently working in the ETO area (including the 1960 "exposed group").

(5) 8 persons exposed to high concentrations of ETO during repair and clean-up operations following a tube break in a factory. Two of the group required immediate hospitalization for respiratory difficulties.

A comparison of exposed and control persons suggested the following:

(1) Persons intermittently exposed to ETO during several years exhibited higher absolute lymphocyte counts than did the control group in the initial study population. Following improvement of the ventilation this difference became smaller, although it was still significant 1 year later in the same persons. No significant difference between the exposed and the control groups was found after the study population was enlarged, either because the difference found between the groups selected for the first analysis was accidental, or because younger persons with higher lymphocyte counts were introduced into the control group in larger numbers than into the exposed group.

(2) Certain cell abnormalities were found in the exposed group (3 cases of anisocytosis, 1 of leukemia) that did not appear in the control group.

(3) Lower hemoglobin values were found in the exposed group, in addition to a few cases of slight anemia.

(4) Persons exposed accidentally (group 5 of the later study population) for short periods of time to high concentrations of ETO exhibited higher numbers of chromosomal aberrations than a comparable control group.

NIOSH has been informed that Ehrenberg currently has plans to "follow up" members of the study group.
NIOSH is currently initiating a study of approximately 2,500 workers potentially exposed to ETO during its manufacture and/or use. The study will include mortality, morbidity, and, if possible, reproductive data.

# Sensitization Response

Allergic-type reactions have been reported in workers accidentally sprayed with ETO in solution (Sexton and Henson, 1949), and in patients exposed to improperly aerated dressings (Hanifin, 1971). ETO (1% solution) was not a contact sensitizer in the occlusive patch test in guinea pigs, nor did a 0.1% ETO solution produce sensitization by the intracutaneous route in this species (Woodard and Woodard, 1971). In recent skin irritation studies, delayed sensitization of human subjects developed in response to ETO contained in polyvinyl chloride blocks and films, and in vaseline [Shupak, unpublished data]. This finding supports an earlier report of anaphylaxis from products of ETO sterilization of renal dialysis equipment (Poothullil et al, 1974, and Avashia, 1975).

# Hemolytic Effects

Hemolysis has been reported (Hirose et al, 1963, and Stanley et al, 1971) with ETO-sterilized devices used for blood perfusion, and with devices used for IV administration in patients. Anemia was reported (Woodard and Woodard, 1971) to have developed in dogs in a 30-day subcutaneous toxicity study with ETO in saline solution. Dogs injected with 6-36 mg/kg daily developed anemia; the severity was dose-related. Examinations were performed prior to, and at the end of, 30 days of dosing; hence, effects occurring earlier would not have been detected. Therefore, a re-examination of the hematologic effects appeared to be in order. Beagle dogs were dosed intravenously with an ETO/glucose solution daily for 3 weeks, with doses increased from 3 to 60 mg/kg at intervals; three controls received only glucose. No evidence of anemia was detected (Balazs, 1976). Further clarification of these apparently conflicting results is needed. Ehrenberg (1967) observed lymphocytosis in workers exposed to ETO.

# C. Tabulation of Biologic Effects of ETO

The following tables (Tables 1-7) summarize the bio-effects data described in the earlier sections of this report; only the table notes and selected entries are reproduced legibly here.

Notes: Data compiled by Hine and Rowe (1963) from the studies of Jacobson et al (1956), Hollingsworth et al (1956), Waite et al (1930), and Flury and Zernik (1931). *Data of Weil et al (1963): five animals of each sex were tested at each of six dose levels. (c) Woodard and Woodard (1971) conducted the study with rabbits, in which one rabbit of each sex was exposed at each of six dose levels. (d) Injection route: IV = intravenous, IP = intraperitoneal, SC = subcutaneous.
Legible entries from Table 7 (mutagenicity) include: a 54-fold increase with a non-monotonic dose-response curve (Ehrenberg, 1956; 1959); an increased mutation rate at 12.5 ppm (magnitude not given), statistically significant at 50 ppm (Lindgren et al, 1969); chromosomal aberrations observed in up to 16% of germinated kernels (Moutschen-Dahmen et al, 1968); a greater than 10-fold increase in mutational frequency at the waxy locus (forward mutation) after 24 hr at 100 ppm; up to 2% chlorophyll mutations of 12 different phenotypes (Roy et al, 1973); mutations observed in up to 14.6% of the F-2 generation (Jana et al, 1975); a 7.2-fold increased frequency, with decreased germination and fertility (Ando, 1968); an approximately 10-fold increase in mutation frequency (MacKey, 1968); up to 5.1% chromatid breaks, "erosions," and gaps (Smith et al, 1954); sex-linked recessive lethals in Drosophila (Bird, 1952); 2 sex-linked recessive lethals per 1,000 chromosomes (Fahmy et al, 1956); an approximately 5-fold increase in the frequency of "minutes" (Fahmy et al, 1956); lethals and translocations induced in all stages of spermatogenesis (Nakao et al, 1961); a statistically significant increase (P less than .001) in aberrations in femur bone marrow cells (Strekalova, 1971); a significant increase in dead implantations in females mated to treated males at weeks 1, 2, 3, and 5 post-exposure (Embree, 1975); isochromatid and chromatid gaps and breaks in bone marrow cells sampled 24 hours after the last exposure (Embree, 1975); and, following exposure from an industrial spill, a significant increase (P less than 0.05) in chromosomal aberrations in 7 workers 6 weeks after exposure (Ehrenberg et al, 1967).

# V. OCCUPATIONAL EXPOSURE LIMITS

The current U.S. standard (OSHA) for occupational exposure to ethylene oxide is 50 parts of ETO per million parts of air (ppm), which corresponds approximately to 90 mg/cu m, as an 8-hour time-weighted average concentration, with no ceiling level stipulated [29 CFR 1910.1000]. This standard was adopted from the standards established by the American National Standards Institute (ANSI). An identical exposure limit, the Threshold Limit Value (TLV), had been adopted by the American Conference of Governmental Industrial Hygienists (ACGIH, 1957). The USSR has a standard of 0.5 ppm (1 mg/cu m), which was adopted in 1966 [Winell, 1975]. Occupational exposure standards of 50 ppm and 20 ppm (36 mg/cu m) are recommended by the Federal Republic of Germany and Sweden, respectively [ICF Conf., 1975].

Table 8 lists the federal standard and the ACGIH recommended TLV and short-term exposure limit ("STEL") for ETO, together with the exposure limits for certain hydration and reaction products of ETO, i.e., ethylene chlorohydrin and ethylene glycol. Also shown for comparison are the exposure levels published by the Federal Republic of Germany, Sweden, and the USSR.

NIOSH recommends, based on the recent results of tests for mutagenesis, that exposure be controlled so that workers are not exposed to ETO at a concentration greater than 135 mg/cu m (75 ppm), determined during a 15-minute sampling period, as a ceiling occupational exposure limit, and, in addition, that the TWA concentration limit of 90 mg/cu m (50 ppm) for a workday not be exceeded. As additional information on the toxic effects of ETO becomes available, this recommended level for exposures of short duration may be altered. The adequacy of the current U.S. ETO standard, which was based on the data available at the time of promulgation, has not been addressed in this report. Further assessment of other ETO exposure situations, and of the adequacy of the ETO occupational exposure standard, will be undertaken during the FY 80 development of the NIOSH Criteria Document on epoxides (including ETO). In the interim, NIOSH strongly recommends that the control strategies presented herein, or others considered to be more applicable to particular local situations, be implemented to assure maximum protection of the health of employees. Good work practices will help to assure their safety.
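The ppm and mg/cu m figures quoted above, and the TWA arithmetic underlying the limits, follow from standard formulas. The sketch below (the exposure episodes are invented for illustration; only the molecular weight, molar volume, and the limits themselves come from standard values and the text) checks the 50 ppm / 90 mg/cu m and 75 ppm / 135 mg/cu m equivalences and evaluates a hypothetical work day against both limits.

```python
MW_ETO = 44.05      # g/mol, molecular weight of ethylene oxide
MOLAR_VOL = 24.45   # l/mol at 25 C and 1 atm

def ppm_to_mg_per_m3(ppm):
    """Convert a gas-phase concentration from ppm to mg per cubic meter."""
    return ppm * MW_ETO / MOLAR_VOL

def twa_8hr(episodes):
    """8-hour TWA from (concentration_ppm, duration_hr) pairs; unsampled
    time is assumed to be at zero exposure."""
    return sum(c * t for c, t in episodes) / 8.0

print(f"50 ppm = {ppm_to_mg_per_m3(50):.0f} mg/cu m")   # ~90 mg/cu m
print(f"75 ppm = {ppm_to_mg_per_m3(75):.0f} mg/cu m")   # ~135 mg/cu m

# Hypothetical day: a 15-minute peak at 460 ppm while unloading, plus a
# 2 ppm background for the rest of an 8-hour shift.
episodes = [(460, 0.25), (2, 7.75)]
print(f"TWA = {twa_8hr(episodes):.1f} ppm")
# ~16.3 ppm: below the 50 ppm TWA limit, yet the 460 ppm peak far exceeds
# the 75 ppm 15-minute ceiling -- a TWA alone can mask hazardous peaks.
```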
a. Sustained or intermittent skin contact with liquid ETO may produce dermatitis at the site of contact. However, due to the extreme penetrating ability of ETO, and the consequent ineffectiveness of many types of clothing materials in preventing skin contact, the use of conventional "impervious" clothing is not suggested. There are, however, certain special types of protective clothing which are effective when working with ETO; for example, one of the large ETO manufacturers provides its workers with knitted gloves which have been coated with certain polymers, including polyvinylchloride. In addition, conscientious adherence to appropriate sanitation practices should eliminate most hazards of skin contact with ETO.

b. If ETO splashes into the eye, severe irritation may result. For this reason it is suggested that rubber-framed goggles, equipped with approved impact-resistant glass or plastic lenses, be worn whenever there is danger of the material coming in contact with the eyes (i.e., in operations which involve transport of bulk containers of ETO from the storage room to the sterilizer unit for installation). Eye wash fountains within easy access from the immediate work area are recommended; they should be so situated that additional contact of the eyes with ETO in vapor form during washing is unlikely.

c. It is suggested that respirators be readily accessible in the event of an emergency situation resulting from an accidental spill or leak of ETO, and for use during subsequent clean-up and disposal procedures. A self-contained breathing apparatus with a full facepiece, operated in a pressure-demand or other positive-pressure mode, is suggested for this purpose. Respirators (respiratory protective devices) should be those approved under the provisions of 30 CFR 11.

# Engineering and Other Control Technology

A number of control problems were discovered during the field surveys conducted at several medical facilities which use ETO in sterilizing operations. These problems were addressed in Section III of this report, along with some recommendations for their amelioration. Those, and the following general recommendations, should be followed in all medical facilities which use ETO as a sterilant. All equipment should be operated in accordance with manufacturers' recommendations.

a. Sterilization operations involving ETO should be isolated from all non-ETO work areas.

b. ETO work areas, except for outdoor systems, should be maintained under negative pressure with respect to non-ETO work areas. This may be accomplished by continuous local exhaust ventilation so that air movement is always from non-ETO work areas to ETO work areas. Where a fan is used to effect such air movement, the fan blades and other appropriate parts of the ventilation system should be made of a nonsparking material. Local exhaust pickups should be located at areas where the possibilities for leaks are the greatest (i.e., in close proximity to sterilizing units and aerators). Exhaust air should not be discharged into any work area or into the general environment without decontamination. This may be accomplished using a catalytic converter, or by discharging exhaust air directly to the fire box of a decontamination furnace, with subsequent discharge of this air to the general environment. Sterilizing units and aerators should be closed systems.
Elimination of residual ETO from both systems should be accomplished only by ventilation ducts leading directly from the sterilizers and aerators to the decontamination apparatus described above.

c. All equipment (e.g., sterilizing units, gas tanks, and aerators) should be periodically checked for leaky valves, fittings, and gaskets, and for any other malfunctioning parts. Equipment manufacturers' recommendations regarding preventive maintenance should be observed. Further, periodic measurements should be made which demonstrate the effectiveness of the local exhaust system (e.g., air velocity or static pressure).

# Medical Surveillance

a. Preplacement medical examinations should include at least:

(1) comprehensive medical and work histories, with special emphasis directed to symptoms related to the eyes, blood, lungs, liver, kidneys, nervous system, and skin.

(2) a comprehensive physical examination, with particular emphasis given to the pulmonary, neurologic, hepatic, renal, and ophthalmologic systems, and the skin.

(3) a complete blood count, to include at least a white cell count, a differential count, hemoglobin, and hematocrit.

(4) In addition to the medical examination, employees should be counseled by the physician to ensure that each employee is aware that ETO has been shown to induce mutations in experimental animals. The relevancy of these findings in animals to male or female employees has not yet been determined. The findings do indicate, however, that employers and employees should do everything possible to minimize exposure to ETO. If a physician becomes aware of any adverse effects on the reproductive system, any cancers in individuals who have been exposed to ETO, or any abnormal offspring born to parents either or both of whom have been exposed to ETO, the information should be forwarded to the Director, NIOSH, as promptly as possible.

b. Periodic examinations should be made available on an annual basis, and more frequently if indicated by professional medical judgment based on such factors as emergencies and the pre-existing health status of the employee. These examinations should include at least:

(1) interim medical and work histories.

(2) a physical examination as described above for the preplacement examination.

# Record Keeping and Availability of Records

The employer should keep accurate records of the following:

a. All measurements taken to determine employee exposure to ETO, including:

(1) dates of measurement
(2) operations being monitored
(3) sampling and analytical method used
(4) numbers, durations, and results of samples taken
(5) names and airborne exposure concentrations of employees in monitored areas

b. Measurements demonstrating the effectiveness of mechanical ventilation (e.g., air velocity, static pressure, or air volume exchanged), including:

(1) dates of measurements
(2) types of measurements taken
(3) results of measurements

c. Employee medical surveillance, including:

(1) full name of the employee
(2) all information obtained from medical examinations which is pertinent to ETO exposure
(3) any complaints by the employee relatable to exposure to ETO
(4) any treatments for exposure to ETO, and the results of that treatment

All of the aforementioned records should be updated at least annually.
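One way to organize the exposure-measurement records enumerated in item a is sketched below. The field names and the example values are hypothetical; any record system capturing the enumerated items would satisfy the recommendation.

```python
# A minimal sketch (hypothetical field names) of an exposure-measurement
# record covering items a(1) through a(5) above.

from dataclasses import dataclass, field
from typing import List

@dataclass
class ExposureSample:
    employee_name: str          # a(5): employee in the monitored area
    concentration_ppm: float    # a(5): airborne exposure concentration
    duration_min: float         # a(4): duration of the sample

@dataclass
class ExposureMeasurementRecord:
    date: str                   # a(1): date of measurement
    operation: str              # a(2): operation being monitored
    method: str                 # a(3): sampling and analytical method used
    samples: List[ExposureSample] = field(default_factory=list)  # a(4)

record = ExposureMeasurementRecord(
    date="1978-01-15",  # hypothetical
    operation="sterilizer unloading",
    method="NIOSH Method S286 (charcoal tube / gas chromatography)",
)
record.samples.append(ExposureSample("Example Operator", 16.3, 480.0))
```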
The employee's medical examination and surveillance records should be made available upon request to designated medical representatives of the Assistant Secretary of Labor for Occupational Safety and Health (OSHA) and the Director of the National Institute for Occupational Safety and Health (NIOSH). Records of environmental and occupational monitoring should be made available upon request to authorized representatives of OSHA and NIOSH. All employees or former employees should have access to the exposure measurement records which indicate their own exposure to ETO. An employee's medical records should be made available upon written request to a physician designated by the employee or former employee. Records of all examinations should be held for at least 30 years following the termination of employment.

# Informing Employees of the Hazard

Each employee, prior to being permitted to work in an ETO sterilizing area, should receive instruction and training on:

a. The nature of the hazards and toxicity of ETO (including those hazards described above), including recognition of the signs and symptoms of acute exposure, and the importance of reporting these immediately to designated health and supervisory personnel.

b. The specific nature of the operation involving ETO which could result in exposure.

c. The purpose for, and operation of, respirator equipment.

d. The purpose for, and application of, decontamination practices.

e. The purpose for, and significance of, emergency practices and procedures, and the employee's specific role in such activities.

f. The purpose for, and nature of, medical examinations, including the advantages to the employee of participating in the medical surveillance program.

# VII. SAMPLING AND ANALYTICAL METHODS

# A. Airborne-ETO Monitoring Techniques and Equipment

A number of techniques are available for the reliable analytical determination of low concentrations of ETO in air. These include: (1) hydration of ETO (collected in a bubbler) to ethylene glycol, followed by oxidation to formaldehyde, which is determined colorimetrically by its reaction with sodium chromotropate (Bolton et al, 1964), and (2) adsorption of the ETO on charcoal, followed by desorption with carbon disulfide and gas chromatographic determination of the ETO (the NIOSH method described in a later section). In addition, instrumentation is available for the direct quantitative determination of the concentration of ETO in air. These direct-reading instrumental methods include: (a) thermal conductivity detection, (b) gas chromatography, (c) hydrogen flame-ionization detection, and (d) infrared spectrophotometry.

Other methods (as well as modifications of the above methods) and specific techniques have been described for the determination of airborne ETO concentrations. These include spectrophotometry (Pozzoli et al, 1968), colorimetry (Adler, 1965; Critchfield and Johnson, 1957; Gage, 1957), and volumetric methods (Gunther, 1965; Hollingsworth and Waling, 1955; Lubatti, 1944; and Swan, 1954). Gas chromatography of air residues from fumigated materials has been used for foodstuffs, pharmaceuticals, and surgical equipment (Adler, 1965; Ben-Yehoshua and Krinsky, 1968; Berck, 1965; Brown, 1970; Buquet and Manchon, 1970; Scudamore, 1968, 1969; Kulkarni et al, 1968; Manchon and Buquet, 1970).
ETO can be determined in cigarette smoke by gas chromatography or mass spectrometry after conversion to ethylene chlorohydrin (Binder and Lindner, 1972; Muramatsu et al, 1968), and in mixtures of lower olefin oxides and aldehydes by gas chromatography (Kaliberdo and Vaabel, 1967). The limits of detection of ETO by spectrophotometry and gas chromatography in these materials are generally of the order of 1 mg/kg. The method used for the analysis of the ETO samples obtained in the NIOSH field study involved adsorption on charcoal, and gas chromatographic determination following desorption with carbon disulfide. The method, known as NIOSH Analytical Method #S286, is outlined below and is described in detail in the reference (NIOSH, 1977).

# B. Principle of the NIOSH Standard Method

A known volume of air is drawn through a series of two charcoal tubes to trap the ETO vapor present. The two-tube sampling arrangement is necessary to prevent sample migration (and loss) upon storage, and to ensure that the front tube is not overloaded during sampling. The charcoal in each tube is transferred to a 5-ml screw-capped sample container, and the ETO is desorbed with carbon disulfide. An aliquot of the desorbed sample is injected into a gas chromatograph, following which the area of the resulting peak is determined and compared with the areas obtained from the injection of "standards" (i.e., known concentrations of ETO).

# C. Range and Sensitivity of the Method

The method was validated over the range of ETO concentrations of 40-176 mg/cu m, at a temperature of 26 C and an atmospheric pressure of 761 mm Hg, using a 5-liter sample. Under the conditions of the recommended sample size (5 liters), the probable useful concentration range of this method is 20-270 mg/cu m, at a detector sensitivity that gives nearly full-scale deflection of the strip chart recorder for a 1.4-mg ETO sample. The method is capable of measuring much smaller amounts if the desorption efficiency is adequate; desorption efficiency must be determined over the range used. The upper limit of the range of the method depends on the adsorptive capacity of the front charcoal tube. This capacity varies with the ETO concentration and with the presence of other substances in the sampled air. The charcoal tube series consists of two separate large tubes; the first tube contains 40 mg of charcoal, and the second tube (used as the backup tube) contains 200 mg of charcoal. The charcoal is held in place by glass wool plugs at the tube ends. If a particular atmosphere is suspected of containing a large amount of ETO, an air sample of smaller volume should be taken.

# D. Interference

When the amount of moisture in the air is so great that condensation actually occurs in the tube, organic vapors will not be trapped efficiently; consequently, the breakthrough threshold is decreased. Any compound which has the same retention time on the chromatographic column as ETO, under the conditions described in this method, has the potential to interfere with the estimation of ETO. A change in the separation conditions, such as column packing or temperature, will generally circumvent the interference problem.

# E. Precision and Accuracy

The coefficient of variation for this method in the range of 41-176 mg/cu m is 0.103. This value corresponds to a standard deviation of 9.3 mg/cu m at the 90 mg/cu m (present federal standard) level. Advantages and disadvantages of the method, as well as other aspects of the recommended sampling and analytical method, are described in detail in the reference (NIOSH, 1977).
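The back-calculation from charcoal-tube results to an airborne concentration, and the precision figure quoted above, reduce to simple arithmetic. The sketch below follows the general logic of the method; the sample masses and the 90% desorption efficiency are hypothetical values chosen for illustration.

```python
# A sketch of the back-calculation from charcoal-tube results to an
# airborne concentration, following the general logic of charcoal-tube
# methods such as NIOSH Method S286.

def airborne_conc_mg_per_m3(mass_front_mg, mass_backup_mg,
                            desorption_efficiency, air_volume_l):
    """Concentration = total corrected ETO mass / volume of air sampled."""
    total_mg = (mass_front_mg + mass_backup_mg) / desorption_efficiency
    return total_mg / (air_volume_l / 1000.0)  # liters -> cubic meters

# Hypothetical 5-liter sample: 0.40 mg found on the front tube, 0.02 mg on
# the backup tube, with 90% desorption efficiency.
conc = airborne_conc_mg_per_m3(0.40, 0.02, 0.90, 5.0)
print(f"Concentration: {conc:.0f} mg/cu m")  # ~93 mg/cu m

# Precision check from the text: a coefficient of variation of 0.103 at the
# 90 mg/cu m federal standard corresponds to a standard deviation of
# 0.103 * 90 = 9.3 mg/cu m.
print(f"SD at 90 mg/cu m: {0.103 * 90:.1f} mg/cu m")
```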
CDC, our planners, and our presenters wish to disclose they have no financial interests or other relationships with the manufacturers of commercial products, suppliers of commercial services, or commercial supporters, with the exception of Barbara Body, who works for LabCorp and serves on the Advisory Board of Becton Dickinson, Roche Diagnostic Systems and Roche Molecular Systems, and Nancy Strockbine, who conducted research with Inverness Medical Innovations. Presentations will not include any discussion of the unlabeled use of a product or a product under investigational use. There is no commercial support for this activity.

# Introduction

Shiga toxin-producing E. coli (STEC) cause approximately 100,000 illnesses, 3,000 hospitalizations, and 90 deaths annually in the United States, according to the last estimate in 1999 (2). Most reported STEC infections in the United States are caused by E. coli O157:H7, with an estimated 73,000 cases occurring each year (2). Non-O157 STEC bacteria also are important causes of diarrheal illness in the United States; at least 150 STEC serotypes have been associated with outbreaks and sporadic illness (2-4). In the United States, six non-O157 serogroups (O26, O45, O103, O111, O121, and O145) account for the majority of reported non-O157 STEC infections (5).

The toxins produced by STEC were named based on their similarity in structure and function to Shiga toxins produced by Shigella dysenteriae type 1 (6). Shiga toxin 1 (Stx1) is neutralized by antibodies against Shiga toxin, whereas Shiga toxin 2 (Stx2) is not neutralized by antibodies against Shiga toxin but is neutralized by homologous antibodies. STEC are also referred to as verocytotoxigenic E. coli; STEC that cause human illness are also referred to as enterohemorrhagic E. coli. In this report, all E. coli that produce a Shiga toxin are referred to as STEC. STEC serotypes are named according to their somatic (O) and flagellar (H) antigens. In this report, all STEC with the O antigen 157 are referred to as O157 STEC, regardless of whether the H7 antigen has been identified or Shiga toxin production has been confirmed. STEC with other O antigens are referred to as non-O157 STEC or by their specific O antigen.

STEC infection causes acute, often bloody, diarrhea. Approximately 8% of persons who receive a diagnosis of O157 STEC infection develop hemolytic uremic syndrome (HUS), a life-threatening condition characterized by thrombocytopenia, hemolytic anemia, and renal failure (7-9). Thrombotic thrombocytopenic purpura (TTP), a syndrome with signs and symptoms that are similar to those of HUS, is typically diagnosed in adults. When TTP is diagnosed after a diarrheal illness, the condition is usually caused by infection with O157 STEC or another STEC. In this report, regardless of the age of the patient, TTP diagnosed after a diarrheal illness is referred to as HUS (10). Whether an illness progresses to HUS depends on strain virulence and host factors (11). Although most persons with diarrhea-associated HUS have an O157 STEC infection, certain non-O157 STEC strains also can lead to HUS (3). The virulence of non-O157 STEC is partly determined by the toxins they produce; non-O157 STEC strains that produce only Stx2 are more often associated with HUS than strains that produce only Stx1 or that produce both Stx1 and Stx2 (12).
STEC infections and HUS occur in persons of all ages, but the incidence of STEC infection is highest in children aged <5 years, as is the risk for HUS (9). Although STEC infections are more common during summer months, they can occur throughout the year. STEC transmission occurs through consumption of a wide variety of contaminated foods, including undercooked ground beef, unpasteurized juice, raw milk, and raw produce (e.g., lettuce, spinach, and alfalfa sprouts); through ingestion of contaminated water; through contact with animals or their environment; and directly from person to person (e.g., in childcare settings). Both O157 STEC and O111 STEC have a low infectious dose (<100 organisms) (13); the infectious dose of other serogroups is not known.

Prompt and accurate diagnosis of STEC infection is important because appropriate treatment with parenteral volume expansion early in the course of infection might decrease renal damage and improve patient outcome (14). In addition, because antibiotic therapy in patients with STEC infections might be associated with more severe disease, prompt diagnosis is needed to ensure proper treatment. Furthermore, prompt laboratory identification of STEC strains is essential for implementation of control measures, for effective and timely outbreak responses, to detect new and emerging serotypes, and to monitor trends in disease epidemiology (1,15,16).

Most O157 STEC isolates can be readily identified in the laboratory when grown on sorbitol-containing selective media because O157 STEC cannot ferment sorbitol within 24 hours. However, many clinical laboratories do not routinely culture stool specimens for O157 STEC. In addition, selective and differential media are not available for the culture of non-O157 STEC, and even fewer laboratories culture stool specimens for these bacteria than for O157 STEC. Recently, the increased use of enzyme immunoassay (EIA) or polymerase chain reaction (PCR) to detect Shiga toxin or the genes that encode the toxins (stx1 and stx2) has facilitated the diagnosis of both O157 and non-O157 STEC infections. Although EIA and other nonculture tests are useful tools for diagnosing STEC infection, they should not replace culture; a pure culture of the pathogen obtained by the clinical laboratory (O157 STEC) or the public health laboratory (non-O157 STEC) is needed for serotyping and molecular characterization (e.g., pulsed-field gel electrophoresis [PFGE] patterns), which are essential for detecting, investigating, and controlling STEC outbreaks. Simultaneous culture of stool for O157 STEC and EIA testing for Shiga toxin is more effective for identifying STEC infections than the use of either technique alone (17,18). Because virtually all O157 STEC have the genes for Stx2 (stx2) and intimin (eae), which are found in strains that are associated with severe disease (5,12,19-22), detection of O157 STEC should prompt immediate initiation of steps such as parenteral volume expansion to reduce the risk for renal damage in the patient and the spread of infection to others.

Guidelines for clinical and laboratory identification of STEC infections have been previously published (1); this report provides the first comprehensive and detailed recommendations for isolation and identification of STEC by clinical laboratories. The recommendations are intended primarily for clinical laboratories but also are an important reference for health-care providers, public health laboratories, public health authorities, and patients and their advocates.
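The toxin-profile observations in this introduction (virtually all O157 STEC carry stx2 and eae and are associated with severe disease; non-O157 strains producing only Stx2 are more often associated with HUS than strains producing only Stx1 or both toxins) can be condensed into a short illustrative sketch. The function name and return strings are invented for this example; the logic is a paraphrase of the text, not a clinical decision rule.

```python
def hus_association_note(stx1: bool, stx2: bool, is_o157: bool) -> str:
    """Qualitative HUS-association note from a STEC toxin profile.

    Illustrative only: condenses two statements from the text and is
    not a substitute for clinical judgment or laboratory confirmation.
    """
    if not (stx1 or stx2):
        return "no Shiga toxin detected: not STEC as defined in this report"
    if is_o157 or (stx2 and not stx1):
        return "profile more often associated with HUS"
    return "profile less often associated with HUS"

# Example: a non-O157 strain producing only Stx2.
print(hus_association_note(stx1=False, stx2=True, is_o157=False))
```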
# Recommendation for Identification of STEC by Clinical Laboratories

All stools submitted for testing from patients with acute community-acquired diarrhea (i.e., for detection of the enteric pathogens Salmonella, Shigella, and Campylobacter) should be cultured for O157 STEC on selective and differential agar. These stools should be simultaneously assayed for non-O157 STEC with a test that detects the Shiga toxins or the genes encoding these toxins. All O157 STEC isolates should be forwarded as soon as possible to a state or local public health laboratory for confirmation and additional molecular characterization (i.e., PFGE analysis and virulence gene characterization). Detection of STEC or Shiga toxin should be reported promptly to the treating physician, to the public health laboratory for confirmation, isolation, and subsequent testing of the organism, and to the appropriate public health authorities for case investigation. Specimens or enrichment broths in which Shiga toxin or STEC are detected but from which O157 STEC are not recovered should be forwarded as soon as possible to a state or local public health laboratory.

# Benefits of Recommended Testing Strategy

# Identification of Additional STEC Infections and Detection of All STEC Serotypes

Evidence indicates that STEC might be detected as frequently as other bacterial pathogens. In U.S. studies, STEC were detected in 0%-4.1% of stools submitted for testing at clinical laboratories, rates similar to those of Salmonella species (1.9%-4.8%), Shigella species (0.2%-3.1%), and Campylobacter species (0.9%-9.3%) (9,17,23-31). In one study, the proportion of stools with STEC detected varied by study site (9); O157 STEC were more commonly isolated than some other enteric pathogens in northern states. The laboratory strategy of culturing stool while simultaneously testing for Shiga toxin is more sensitive than other strategies for STEC identification and ensures that all STEC serotypes will be detected (17,18,30,31) (Table 1). In addition, immediate culture ensures that O157 STEC bacteria are detected within 24 hours of the initiation of testing.

# Early Diagnosis and Improved Patient Outcome

Early diagnosis of STEC infection is important for determining the proper treatment promptly. Initiation of parenteral volume expansion early in the course of O157 STEC infection might decrease renal damage and improve patient outcome (14). Conversely, certain treatments can worsen patient outcomes; for example, antibiotics might increase the risk for HUS in patients infected with O157 STEC, and antidiarrheal medications might worsen the illness (32). Early diagnosis of STEC infection also might prevent unnecessary procedures or treatments (e.g., surgery or corticosteroids for patients with severe abdominal pain or bloody diarrhea) (33-35).

# Prompt Detection of Outbreaks

Prompt laboratory diagnosis of STEC infection facilitates rapid subtyping of STEC isolates by public health laboratories and submission of PFGE patterns to PulseNet, the national molecular subtyping network for foodborne disease surveillance (36). Rapid laboratory diagnosis and subtyping of STEC isolates leads to prompt detection of outbreaks, timely public health actions, and detection of emerging STEC strains (37,38).
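As a concrete restatement of the recommendation above, the sketch below encodes the reporting and forwarding actions that follow from the two simultaneous test arms (O157 culture on selective agar, and a Shiga toxin assay on an overnight enrichment broth). It is illustrative pseudologic with invented strings, not laboratory software.

```python
def stec_actions(o157_isolated: bool, shiga_toxin_positive: bool) -> list:
    """Actions once both arms of the parallel workflow have results.

    The two arms are started at the same time; neither waits on the other.
    Returned strings paraphrase the recommendation; they are illustrative.
    """
    actions = []
    if o157_isolated:
        actions += [
            "report presumptive O157 STEC to the treating physician",
            "forward the isolate to the public health laboratory "
            "(PFGE and virulence gene characterization)",
            "notify public health authorities for case investigation",
        ]
    if shiga_toxin_positive:
        actions.append("report Shiga toxin detection promptly")
        if not o157_isolated:
            # Toxin-positive with no O157 isolate: non-O157 STEC suspected.
            actions.append("forward the specimen or enrichment broth to the "
                           "public health laboratory for non-O157 isolation")
    if not actions:
        actions.append("no STEC evidence from these two tests")
    return actions

# Example: toxin-positive specimen with no O157 isolate.
for step in stec_actions(o157_isolated=False, shiga_toxin_positive=True):
    print("-", step)
```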
Delayed diagnosis of STEC infections might lead to secondary transmission in homes, child-care settings, nursing homes, and food service establishments (39-44) and might delay detection of multistate outbreaks related to widely distributed foods (39,45). Outbreaks caused by STEC with multiple serogroups (46) or PFGE patterns (47) have been documented.

# Criteria for STEC Testing and Specimen Selection

All stool specimens from patients with acute onset of community-acquired diarrhea and from patients with possible HUS should be tested for STEC. Many infections are missed with selective STEC testing strategies (e.g., testing only specimens from children, testing only during summer months, or testing only stools with white blood cells or blood). Some patients with STEC infection do not have visibly bloody stools, whereas some persons infected with other pathogens do have bloody stools (3,9,48,49). Therefore, the absence of blood in the stool does not rule out the possibility of a STEC-associated diarrheal illness; both O157 and non-O157 strains have been isolated from patients with nonbloody diarrhea (30-32,49-51). Similarly, white blood cells are often but not always detected in the stools of patients with STEC infection and should not be used as a criterion for STEC testing (9,39). Selective testing on the basis of patient age or season of the year also might result in undetected infections. Although STEC bacteria are isolated more frequently from children, almost half of all STEC isolates are from persons aged >12 years (5,9,49,52); testing for STEC only in specimens from children would result in many missed infections. In addition, although STEC infections are more common in summer months, infections and outbreaks occur throughout the year (5,9,32).

Stools should be tested as early as possible in the course of illness; bacteria might be difficult or impossible to detect in the stool after 1 week of illness (53,54), and the Shiga toxin genes might be lost by the bacteria (55). In certain instances, retrieval of plates from cultures obtained earlier in the illness that were not initially evaluated for STEC might be necessary. Early detection of STEC and proper patient management are especially important among children because they are the age group most likely to have an infection that develops into HUS (32).

STEC testing might not be warranted, or selective STEC testing might be appropriate, for patients who have been hospitalized for ≥3 days; infection in this setting is more likely to be caused by Clostridium difficile toxin than another enteric pathogen (39). However, when a patient is admitted to the hospital with symptoms of a diarrheal illness, a stool culture with STEC testing might be appropriate, regardless of the number of days of hospitalization. In addition, although few hospital-associated outbreaks of STEC have been reported, if a hospitalized patient is involved in a hospital-associated outbreak of diarrhea, STEC testing should be performed if tests are also being conducted for other bacterial enteric pathogens (e.g., Salmonella). Although chronic diarrhea is uncommon in patients with STEC infection, certain STEC strains have been associated with prolonged or intermittent diarrhea; therefore, testing for Shiga toxin should be considered if an alternative diagnosis (e.g., ulcerative colitis) has not been identified (56).
Testing multiple specimens is likely unnecessary unless the original specimen was not transported or tested appropriately or the test results are not consistent with the patient's signs and symptoms. After STEC bacteria are detected in a specimen, additional specimens from the same patient do not need to be tested for diagnostic purposes. To prevent additional transmission of infection, certain persons (e.g., food-service workers and children who attend child-care facilities or adults who work in these facilities) who receive a diagnosis of STEC infection might be required by state law or a specific facility to prove that they are no longer shedding the bacteria after treatment and before returning to the particular setting. Follow-up specimens are usually tested by state public health laboratories. No data exist regarding the effectiveness of excluding postsymptomatic carriers of non-O157 STEC (i.e., persons who test positive for non-O157 STEC but no longer have symptoms) from work or school settings in preventing secondary spread.

# Procedures for Collecting and Handling Specimens for STEC Diagnostic Testing

# Acceptable Specimens for Testing

Laboratories should always consult the manufacturer instructions for the assay being performed to determine procedures for specimen collection and handling, including specimen types that may be used with a particular assay or test system. The ideal specimen for testing is diarrheal stool; stool specimens should be collected as soon as possible after diarrhea begins, while the patient is acutely ill, and before any antibiotic treatment is administered. The same specimen that is collected for Salmonella, Shigella, and Campylobacter testing is acceptable for STEC culture and Shiga toxin detection. Collecting and testing specimens as soon as possible after symptom onset is important to ensure maximal sensitivity and specificity for STEC detection with available commercial diagnostic assays. Diagnostic methods such as the Shiga toxin immunoassay that target traits encoded on mobile genetic elements (e.g., phages) are less sensitive if the elements have been lost (53,57).

Shiga toxin testing should be performed on growth from broth culture or primary isolation media because this method is more sensitive and specific than direct testing of stool. In addition, because the amount of free fecal Shiga toxin in stools is often low, EIA testing of broth enrichments from stools or of growth from the primary isolation plate is recommended rather than direct testing of stools (58). Although rectal swabs are often used to collect stool from children, swabs might not contain enough stool to culture for multiple enteric pathogens and to perform STEC testing. If rectal swabs must be used to collect specimens for STEC testing, broth enrichment is recommended. Laboratories should consult the manufacturer instructions for information on the suitability of toxin testing using stool from rectal swabs. Commercially available assays have not been validated for specimens collected by endoscopy or colonoscopy. If a laboratory chooses to use an assay for patient testing with a specimen other than that included in the manufacturer's FDA-cleared package insert, under the Clinical Laboratory Improvement Amendments (CLIA) of 1988, that laboratory must first establish the performance specifications for the alternative specimen type (59). Unless STEC are isolated, results from tests on alternative specimen types should be interpreted with caution.
# Specimen Handling

Specimens should be sent to the laboratory as soon as possible for O157 STEC culture and Shiga toxin testing. Ideally, specimens should be processed as soon as they are received by the laboratory. Specimens that are not processed immediately should be refrigerated until tested; if possible, they should not be held for >24 hours unpreserved or for >48 hours in transport medium. All O157 STEC isolates and all specimens or enrichment broths in which Shiga toxin is detected but from which O157 STEC bacteria are not recovered should be forwarded to the public health laboratory as soon as possible in compliance with the receiving laboratory's guidelines.

# Transport Media

Specimens should be transported under conditions appropriate for the transport medium used and tests to be performed; appropriate transport conditions can be determined by reviewing the manufacturer instructions. Stool specimens that cannot be immediately transported to the laboratory for testing should be put into a transport medium (e.g., Cary-Blair) that is optimal for the recovery of all bacterial enteric pathogens. Laboratories should consult the manufacturer instructions for the suitability of toxin testing for stool in transport medium. If a laboratory must perform direct Shiga toxin testing of stool, the stool specimen should be refrigerated but should not be placed in transport medium. Direct toxin testing of stool should follow the manufacturer instructions.

# Culture for STEC

# O157 STEC

O157 STEC can usually be easily distinguished from most E. coli that are members of the normal intestinal flora by their inability to ferment sorbitol within 24 hours on sorbitol-containing agar isolation media. To isolate O157 STEC, a stool specimen should be plated onto a selective and differential medium such as sorbitol-MacConkey agar (SMAC) (60), cefixime tellurite-sorbitol MacConkey agar (CT-SMAC), or CHROMagar O157. After incubation for 16-24 hours at 37°C (99°F), the plate should be examined for possible O157 colonies, which are colorless on SMAC or CT-SMAC and are mauve or pink on CHROMagar O157. Both CT-SMAC and CHROMagar O157 are more selective than SMAC, which increases the sensitivity of culture for detection of O157 STEC (61,62). Sorbitol-fermenting STEC O157:H- (i.e., nonmotile [NM]), a pathogen that is uncommon in the United States and primarily reported from Germany, might not grow on CT-SMAC agar because the bacteria are susceptible to tellurite.

To identify O157 STEC, a portion of a well-isolated colony (i.e., a distinct, single colony) should be selected from the culture plate and tested in O157-specific antiserum or O157 latex reagent as recommended by the manufacturer (63). Colonies that agglutinate with one of the O157-specific reagents and do not agglutinate with normal serum or control latex reagent are presumed to be O157 STEC. At least three colonies should be screened (CDC, unpublished data, 2009). If O157 STEC bacteria are identified in any one of the three colonies, no additional colonies need to be tested. The colony in which O157 STEC are detected should be streaked onto SMAC or a nonselective agar medium such as tryptic soy agar (TSA), heart infusion agar (HIA), or blood agar and biochemically confirmed to be E. coli (e.g., using standard biochemical tests or commercial automated systems) because other bacterial species can cross-react in O157 antiserum (64-66).
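In control-flow terms, the colony-screening step just described is: test up to three well-isolated colonies in O157 latex reagent, stop at the first presumptive positive, then confirm that organism biochemically as E. coli. A minimal sketch follows; the dictionary keys and result strings are invented for illustration.

```python
def screen_colonies_for_o157(colonies, min_screened=3):
    """Screen well-isolated colonies against O157 latex reagent.

    colonies: iterable of dicts of boolean test results, e.g.
        {"o157_latex": True, "control_latex": False, "e_coli_biochem": True}
    Per the text, at least three colonies are screened, but screening
    may stop at the first colony in which O157 STEC is detected. For
    simplicity this sketch stops after the minimum of three.
    """
    for i, colony in enumerate(colonies, start=1):
        # Presumptive O157 STEC: agglutinates with O157 reagent but
        # not with the control latex reagent.
        if colony["o157_latex"] and not colony["control_latex"]:
            # Streak to SMAC or a nonselective medium (TSA, HIA, blood
            # agar) and confirm biochemically as E. coli, because other
            # species can cross-react in O157 antiserum.
            if colony["e_coli_biochem"]:
                return f"colony {i}: presumptive O157 STEC, confirmed E. coli"
        if i >= min_screened:
            break
    return "no O157 STEC detected among screened colonies"

picks = [
    {"o157_latex": False, "control_latex": False, "e_coli_biochem": False},
    {"o157_latex": True,  "control_latex": False, "e_coli_biochem": True},
]
print(screen_colonies_for_o157(picks))
```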
Before confirmation is complete at the laboratory (which might take >24 hours), the preliminary finding of O157 STEC should be reported to the treating clinician and should be documented according to laboratory policies for other time-sensitive, clinically important laboratory findings. The preliminary nature of these presumptive results and the need for test confirmation should be indicated in the report. After O157 STEC colonies have been isolated, been found to agglutinate with O157 latex reagent, and been biochemically confirmed as E. coli, a written or electronic report should be provided to the clinician and public health authorities (Table 2). All O157 STEC isolates should be forwarded to the public health laboratory as soon as possible, regardless of whether H7 testing has been attempted or completed. At the public health laboratory, O157 STEC isolates should be tested by EIA for Shiga toxin production or by PCR for the stx1 and stx2 genes. Actively motile O157 STEC strains should be tested for the H7 antigen. All O157 STEC strains should be subtyped by PFGE as soon as possible.

# Non-O157 STEC

Identification of non-O157 STEC typically occurs at the public health laboratory and not at clinical laboratories (Table 3). However, this section includes the basic techniques for reference. To isolate non-O157 STEC, the Shiga toxin-positive broth should be streaked to a relatively less selective agar (e.g., MacConkey agar, SMAC, Statens Serum Institut enteric medium, or blood agar). Traditional enteric media such as Hektoen agar, xylose-lysine-desoxycholate agar, and Salmonella-Shigella agar inhibit many E. coli and are not recommended (67). All possible O157 STEC colonies should be tested in O157 latex reagent before isolation of non-O157 STEC is attempted. Well-isolated colonies with E. coli-like morphology should be selected on the basis of sorbitol or lactose fermentation characteristics (or other characteristics specific to the medium used); most non-O157 STEC ferment both sorbitol and lactose, although exceptions have been reported (CDC, unpublished data, 2009). Colonies may be tested for Shiga toxin production by EIA or for stx1 and stx2 genes by PCR. Non-O157 STEC may be tested using commercial O-specific antisera for the most common STEC-associated O antigens (i.e., O26, O45, O103, O111, O121, and O145) (5). Non-O157 STEC isolates should be forwarded to a public health laboratory for confirmation of Shiga toxin production, serogroup determination, and PFGE subtyping.

# Nonculture Assays for Detection of Shiga Toxins and STEC

Nonculture assays that detect the Shiga toxins produced by STEC (e.g., the Shiga toxin EIA) were first introduced in the United States in 1995. The primary advantage of nonculture assays for Shiga toxin is that they can be used to detect all serotypes of STEC. In addition, nonculture assays might provide results more quickly than culture. The primary disadvantage of nonculture-based assays is that the infecting organism is not isolated for subsequent serotyping and a specific diagnosis of O157 STEC. Lack of an isolated organism limits the ability of physicians to predict the potential severity of the infection in the patient (e.g., risk for HUS), the risk for severe illness in patient contacts, and the ability of public health officials to detect and control STEC outbreaks and monitor trends in STEC epidemiology.
In addition, although the nonculture assays for Shiga toxin also detect Stx1 produced by Shigella dysenteriae type 1, infection with this organism is rare in the United States, with fewer than five cases reported each year (68).

# Shiga Toxin Immunoassays

The Center for Devices and Radiological Health of the Food and Drug Administration (FDA) has approved several immunoassays for the detection of Shiga toxin in human specimens (Table 4). Because the amount of free fecal Shiga toxin in stools is often low (58), EIA testing of enrichment broth cultures incubated overnight (16-24 hours at 37°C), rather than direct testing of stool specimens, is recommended. In addition, the manufacturer information indicates that tests performed on broth cultures have higher sensitivity and specificity than those performed on stool. No studies have determined whether one type of broth is most effective; MacConkey and gram-negative broths are both suitable. Four FDA-approved immunoassays are available in the United States (Table 4).

Reported sensitivities and specificities of Shiga toxin immunoassays vary by test format and manufacturer. The standard by which each manufacturer evaluates its tests varies; a direct comparison of performance characteristics of various immunoassays has not been made. The clinical performance characteristics of each test are available in the package insert. Clinical laboratories should evaluate these performance characteristics and verify that they can obtain performance specifications comparable to those of the manufacturer before implementing a particular test system. The College of American Pathologists (69) and the American Proficiency Institute (70) offer proficiency testing for STEC immunoassays.

Laboratories should immediately report Shiga toxin-positive specimens to the treating clinician and appropriate public health and infection control officials. Clinical laboratories should forward Shiga toxin-positive specimens or enrichment broths to a public health laboratory as soon as possible for isolation and additional characterization. In multiple studies, for reasons that are unknown, EIAs failed to detect a subset of O157 STEC that were readily identified on simultaneously plated SMAC agar, underscoring the importance of primary isolation (17,50,71-73). EIA tests also might have false-positive STEC results when other pathogens are present (1,74,75).

# PCR

PCR assays to detect the stx1 and stx2 genes are used by many public health laboratories for diagnosis and confirmation of STEC infection. Depending on the primers used, these assays can distinguish between stx1 and stx2 (76-78). Assays also have been developed that determine the specific O group of an organism, detect virulence factors such as intimin and enterohemolysin, and can differentiate among the subtypes of Shiga toxins (79-81). Because these tests are not commercially available, they are rarely used for human disease diagnosis in the United States. Most PCR assays are designed and validated for testing isolated colonies taken from plated media; some assays have been validated for testing on stool specimens subcultured to an enrichment broth and incubated for 18-24 hours. Shiga toxin PCR assays on DNA extracted from whole stool specimens are not recommended because the sensitivity is low (82). The time required to obtain PCR assay results ranges from 3 hours (if an isolate is tested) to 24-36 hours (if the specimen is first subcultured to an enrichment broth or plate).
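Verifying that a laboratory obtains performance specifications comparable to those of the manufacturer, as required before implementing a test system, comes down to 2x2 arithmetic on a verification panel; the same arithmetic shows why the predictive value of a positive result depends on how common STEC is among the specimens a laboratory receives. The panel counts and prevalence values below are invented for illustration.

```python
def sens_spec(tp, fn, tn, fp):
    """Sensitivity and specificity from a 2x2 verification panel."""
    return tp / (tp + fn), tn / (tn + fp)

def ppv(sensitivity, specificity, prevalence):
    """Positive predictive value at a given prevalence (Bayes' rule)."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# Hypothetical panel: 48/50 known positives and 96/100 known negatives correct.
se, sp = sens_spec(tp=48, fn=2, tn=96, fp=4)

# STEC was detected in roughly 0%-4.1% of submitted stools in U.S. studies,
# so the PPV of the same assay differs sharply across that range.
for prev in (0.005, 0.04):
    print(f"prevalence {prev:.1%}: PPV = {ppv(se, sp, prev):.0%}")
```

With these invented numbers, a positive result is right only about one time in ten when STEC is rare in the tested population but about half the time at the upper end of the reported prevalence range, which is one reason positive nonculture results still require culture confirmation.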
DNA-based Shiga toxin gene detection is not approved by FDA for diagnosis of human STEC infections by clinical laboratories; however, public health laboratories might use this technique for confirmatory testing after internal validation. One commercial PCR kit is available to test for STEC virulence genes (DEC Primer Mix, Mira Vista Diagnostics, Indianapolis, Indiana); however, this test is labeled for research use only, can only be used on isolates, and is not approved by FDA for diagnosis of human STEC infections. Clinical laboratories that are considering adding a DNA-based assay to their testing options need to establish performance specifications for the assay as required by CLIA (59), and reports from such testing should include a disclaimer to inform clinicians that the test is not approved by FDA (83). No commercially available proficiency testing programs are available in the United States for PCR assays that target the Shiga toxin genes; however, internal proficiency testing events and exchanges with other laboratories may be used to fulfill CLIA requirements (84).

Table footnotes:
- Some testing procedures overlap between clinical and public health laboratories. When risk for STEC transmission to the public is high (e.g., workers in restaurants or child-care facilities), public health laboratories might conduct more of the primary STEC testing.
† Simultaneous O157 STEC culture and Shiga toxin testing is recommended. Clinical laboratories should submit all STEC isolates and Shiga toxin-positive broths to a public health laboratory for additional testing.
§ Before sending the final report, public health laboratories should ensure that the isolated strain has genes for Shiga toxin, produces Shiga toxin, or has the H7 antigen.
¶ The public health laboratory may determine the O antigen or send the isolate to CDC for O antigen and H antigen determination. Public health laboratories that detect Shiga toxin by immunoassay are encouraged to use a different manufacturer kit than the one used by the clinical laboratory whose results they are confirming and should request the name of the kit used for each test.
†† DNA-based Shiga toxin gene detection is not approved by the Food and Drug Administration for diagnosis of human STEC infections by clinical laboratories; however, public health laboratories might use this technique for confirmatory testing after internal validation.

# O157 Immunoassays

One commercial immunoassay is available to test for the O157 and H7 antigens in human stools and stool cultures (ImmunoCard STAT! E. coli O157:H7; Meridian Bioscience, Cincinnati, Ohio). This rapid assay may be performed either directly on stools or on an enrichment broth culture incubated overnight (16-24 hours at 37°C). When performed directly on stool specimens, compared with culture for O157 STEC, the assay has an overall sensitivity of 81% and specificity of 97% (85,86). This test is not recommended as a first-line or primary test for diagnosis, in part because 1) the assay does not detect non-O157 STEC serogroups, 2) not all E. coli O157 produce Shiga toxin, and 3) no isolates will be available for testing at the public health laboratory. Laboratories that use this test should ensure that specimens in which O157 STEC bacteria are not detected are tested for Shiga toxin and cultured for STEC; positive specimens should also be cultured for STEC.
In clinical settings, O157 immunoassays are less useful than EIA tests that distinguish between Stx1 and Stx2 for identifying patients at risk for developing severe disease.

# Cell Cytotoxicity Assay

The Vero (African green monkey kidney) and HeLa cell lines are very sensitive to Shiga toxin because they have high concentrations of globotriaosylceramides Gb3 and Gb4, the receptors for Shiga toxin in eukaryotic cells. Sterile fecal filtrates prepared from fresh stool specimens or broth enrichments of selected colonies are inoculated onto cells and observed for typical cytopathic effect. Confirmation that the cytopathic effect is caused by Shiga toxin is performed by neutralization using anti-Stx1 and anti-Stx2 antibodies. Although very sensitive, this method is not routinely used in most clinical microbiology laboratories because the method requires familiarity with tissue culture technique, the availability of cell monolayers, and specific antibodies. Testing typically takes 48-72 hours (13).

# Specialized Diagnostic Methods

Certain specialized diagnostic methods might be used by public health laboratories for patients with HUS and during outbreak investigations. Immunomagnetic separation (IMS) is useful when the number of STEC organisms in a specimen is expected to be small (e.g., in specimens from patients who seek treatment ≥5 days after illness onset, in specimens from asymptomatic carriers, and in specimens that have been stored or transported improperly) (87,88). IMS beads labeled with O26, O103, O111, O145, or O157 antisera are commercially available. IMS is not approved by FDA for use on human specimens. Serodiagnostic methods that measure antibody responses to serogroup-specific lipopolysaccharides can provide evidence of STEC infection (89). No such tests are commercially available in the United States. CDC uses internally validated tests to detect immunoglobulin M (IgM) and immunoglobulin G (IgG) responses to infection with serogroup O157 and IgM response to infection with serogroup O111 in patient sera obtained during outbreak investigations and for special purposes.

# Forwarding Specimens and Isolates to Public Health Laboratories

# Specimens To Be Forwarded

All O157 STEC isolates growing on selective agar should be subcultured to agar slants and forwarded as soon as possible to the appropriate public health laboratory for additional characterization, in compliance with the recommendations of the receiving laboratory and shipping regulations. If agar slants are not available at the submitting laboratory, an acceptable alternative might be a swab that is heavily inoculated with representative growth and placed in transport medium. Not all specimens that test positive for Shiga toxin yield an easily identifiable O157 STEC or non-O157 STEC colony on subculture. All Shiga toxin-positive specimens or broths from which no STEC isolate was recovered should be forwarded to the appropriate public health laboratory for isolation and additional testing; shipping of Shiga toxin-positive specimens or broths should not be delayed pending bacterial growth or isolation. Broths that cannot be shipped on the day that the EIA test is performed should be stored at 4°C (39°F) until they are prepared for shipping. Public health laboratories should be prepared to accept isolates and broths for additional testing, with or without the primary stool specimen, that were Shiga toxin-positive in an EIA.
Clinical laboratories should contact the appropriate public health laboratory to determine the laboratory's preferences and applicable regulations.

# Transport Considerations

United Nations regulations (Division 6.2, Infectious Substances) stipulate that a verotoxigenic E. coli culture is a category A (United Nations number 2814) infectious substance, which is an infectious substance in a form capable of causing permanent disability or life-threatening or fatal disease in otherwise healthy humans or animals when exposure to the substance occurs. The International Air Transportation Association (IATA) and Department of Transportation (DOT) have modified their shipping guidance to comply with this requirement (90,91). Therefore, all possible and confirmed O157 STEC and non-O157 STEC isolates and Shiga toxin-positive EIA broths should be shipped as category A infectious substances. If the identity of the infectious material being transported has not been confirmed or is unknown, but the material might meet the criteria for inclusion in category A (e.g., a broth culture that is positive for Shiga toxin or a stool culture from a patient that might be part of an O157 STEC outbreak), certain IATA regulations apply (91). Both IATA and DOT require that all persons who package, ship, or transport category A infectious substances have formal, documented training every 2 years (92,93).

Category A substances must be packaged in a water-tight primary receptacle. For shipment, slants or transport swabs heavily inoculated with representative growth are preferred to plates. Plates are acceptable only in rare instances in which patient diagnosis or management would be delayed by subculturing an organism to a slant for transport; shipment of plates must be preapproved by the receiving public health laboratory. If a swab is used, the shaft should be shortened to ensure a firm fit within the plastic sheath, and the joint should be secured with parafilm to prevent leakage. When shipping enrichment broths, the cap must fit tightly enough to prevent leakage into the shipping container, and parafilm should be wrapped around the cap to provide a better seal. Slants, swabs of pure cultures, and plates (if approved by the receiving laboratory) may be shipped at ambient temperature. Stools in transport media, raw stools, and broths should be shipped with a cold pack to prevent growth of other gram-negative flora.

Commercial couriers vary regarding their acceptance of category A agents; clinical laboratories should check with their preferred commercial courier for current requirements. Shipping category A specimens by commercial couriers usually incurs a surcharge in addition to normal shipping fees. Category A infectious substances are not accepted by the U.S. Postal Service (94). Shipping by a private (noncommercial) courier that is dedicated only to the transport of clinical specimens does not exempt specimens from DOT or IATA regulations; category A specimens must be packaged according to United Nations Division 6.2 regulations with appropriate documentation, even if not being transported by a commercial carrier (94). Based on existing specifications, laboratories should collaborate to develop specifications for packaging and shipping, which should be incorporated into a standard operating procedure and followed consistently. A United Nations-approved category A shipping container must be used for cultures, and cultures must be packaged and documented according to DOT and IATA regulations (95).
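The shipping rules in this section amount to a small classification: confirmed or possible STEC isolates and Shiga toxin-positive broths are category A (UN 2814), and the material type determines temperature and packaging details. A rough sketch with invented names follows; actual decisions must follow current IATA/DOT regulations and the receiving laboratory's instructions.

```python
def shipping_plan(material):
    """Rough category A shipping notes keyed to material type.

    material: one of "isolate_slant", "swab_pure_culture", "plate",
              "enrichment_broth", "stool_in_transport_medium", "raw_stool".
    Returns (category, temperature, notes). Illustrative only; not a
    substitute for the governing regulations.
    """
    category = "Category A infectious substance (UN 2814)"
    ambient = {"isolate_slant", "swab_pure_culture", "plate"}
    temp = "ambient" if material in ambient else "cold pack"
    notes = [
        "water-tight primary receptacle",
        "UN-approved category A shipping container",
        "DOT/IATA documentation; formally trained shipper required",
    ]
    if material == "plate":
        notes.append("plates only with prior approval of the receiving laboratory")
    if material == "swab_pure_culture":
        notes.append("shorten shaft for a firm fit; seal joint with parafilm")
    if material == "enrichment_broth":
        notes.append("tight cap wrapped in parafilm to prevent leakage")
    return category, temp, notes

cat, temp, notes = shipping_plan("enrichment_broth")
print(cat, "| ship:", temp, "|", "; ".join(notes))
```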
# Interpretation of Final Results

Several tests for clinical or public health microbiology laboratories are available for the detection of STEC, and they may be used alone or in combination. No testing method is 100% sensitive or specific, and the predictive value of a positive test is affected by the patient population that a particular laboratory serves. Specificity and sensitivity might be increased by using a combination of tests. However, when test results conflict, interpretation might be difficult, especially when clinical and public health laboratory test results are compared. Clinical and public health laboratories document STEC test results in a final report (Table 2). Discordant results (e.g., positive immunoassay at a clinical laboratory but negative PCR result at a public health laboratory) might need to be discussed among the treating physician, public health epidemiologist, and clinical and public health laboratory staff members; however, the outcome of most patients' illnesses (i.e., resolution of symptoms or progression to HUS) is already known by the time the discordant laboratory findings are resolved.

Proper interpretation of test results, which is needed for appropriate patient evaluation and treatment, includes consideration of several factors, including whether the type of specimen tested was appropriate for the test (e.g., specimens from rectal swabs or whole stools placed in transport medium), the timing of the specimen collection relative to illness onset, the patient's signs and symptoms, the epidemiologic context of the patient's illness, whether the manufacturer instructions were followed precisely, and the possibility of a false-positive or false-negative test result.

# Clinical Considerations

Accurate, rapid identification of STEC, particularly of E. coli O157:H7, is critical for patient management and disease control. Therefore, the types of microbiologic tests chosen, performed, and reported and subsequent communication with treating clinicians are critical. Prompt and proper treatment of patients with a positive or presumptively positive STEC culture requires rapid and clear diagnostic enteric microbiology and reporting of data. More detailed information on clinical considerations and care of patients with STEC infection is available from recent clinical reviews (32,96,97).

# Conclusion

Accumulated findings from investigations of STEC outbreaks, studies of sporadic STEC infections, and passive and active surveillance provide compelling evidence to support the recommendation that all stools submitted for routine testing to clinical laboratories from patients with community-acquired diarrhea should be cultured for O157 STEC and simultaneously tested for non-O157 STEC with an assay that detects Shiga toxins. These recommendations should improve the accuracy of diagnosing STEC infections, facilitate assessment of risk for severe illness, promote prompt diagnosis and treatment, and improve detection of outbreaks. Because of the critical impact of time on diagnosis of STEC, treating patients, and recognizing and controlling outbreaks of STEC infections, attempting to isolate O157 STEC and detect other STEC serotypes simultaneously, rather than separately (i.e., conducting a Shiga toxin test to determine whether to culture), is recommended. Performing culture for O157 STEC while simultaneously testing for all STEC serotypes is critical.
O157 STEC are responsible for most STEC outbreaks and most cases of severe disease; almost all strains have the virulence genes stx2 and eae, which are associated with severe disease. Detection of O157 STEC within 24 hours after specimen submission to the laboratory helps physicians to rapidly assess the patient's risk for severe disease and to initiate measures to prevent serious complications, such as renal damage and death. Rapid isolation of the infecting organism helps public health officials quickly initiate measures to detect outbreaks and control the spread of infection.

Because of the dynamic nature of the Shiga toxin-converting phages and the potential of decreased diagnostic sensitivity for these pathogens later during infection, future commercial assays that target stable traits might improve diagnostic sensitivity. To facilitate diagnosis and patient management, future methods would also ideally allow for an assessment of the organism's potential to cause severe disease (e.g., related to the presence of stx2, certain stx2 subtypes, and eae). Improved isolation methods for non-O157 STEC also are needed. As nucleotide sequences for more STEC strains become available, comparative genomic studies might identify targets that can be used to improve detection, virulence profiling, and isolation strategies.

The Association of Public Health Laboratories, in conjunction with state and federal partners, is developing guidelines for receiving, testing, isolating, and characterizing STEC isolates and specimens in public health laboratories. That document will complement the guidelines in this report and will be available on the APHL website () by early 2010. Additional information on STEC is available at http://www.cdc.gov/ecoli.
CDC, our planners, and our presenters wish to disclose they have no financial interests or other relationships with the manufacturers of commercial products, suppliers of commercial services, or commercial supporters, with the exception of Barbara Body, who works for LabCorp and serves on the Advisory Board of Becton Dickinson, Roche Diagnostic Systems and Roche Molecular Systems, and Nancy Strockbine, who conducted research with Inverness Medical Innovations. Presentations will not include any discussion of the unlabeled use of a product or a product under investigational use. There is no commercial support for this activity.Recommendations and Reports 1# Introduction Shiga toxin-producing E. coli (STEC) cause approximately 100,000 illnesses, 3,000 hospitalizations, and 90 deaths annually in the United States, according to the last estimate in 1999 (2). Most reported STEC infections in the United States are caused by E. coli O157:H7, with an estimated 73,000 cases occurring each year (2). Non-O157 STEC bacteria also are important causes of diarrheal illness in the United States; at least 150 STEC serotypes have been associated with outbreaks and sporadic illness (2)(3)(4). In the United States, six non-O157 serogroups (O26, O45, O103, O111, O121, and O145) account for the majority of reported non-O157 STEC infections (5). The toxins produced by STEC were named based on their similarity in structure and function to Shiga toxins produced by Shigella dystenteriae type 1 (6). Shiga toxin 1 (Stx1) is neutralized by antibodies against Shiga toxin, whereas Shiga toxin 2 (Stx2) is not neutralized by antibodies against Shiga toxin but is neutralized by homologous antibodies. STEC are also referred to as verocytotoxigenic E. coli; STEC that cause human illness are also referred to as enterohemorrhagic E. coli. In this report, all E. coli that produce a Shiga toxin are referred to as STEC. STEC serotypes are named according to their somatic (O) and flagellar (H) antigens. In this report, all STEC with the O antigen 157 are referred to as O157 STEC, regardless of whether the H7 antigen has been identified or Shiga toxin production has been confirmed. STEC with other O antigens are referred to as non-O157 STEC or by their specific O antigen. STEC infection causes acute, often bloody, diarrhea. Approximately 8% of persons who receive a diagnosis of O157 STEC infection develop hemolytic uremic syndrome (HUS), a life-threatening condition characterized by thrombocytopenia, hemolytic anemia, and renal failure (7)(8)(9). Thrombotic thrombocytopenic purpura (TTP), a syndrome with signs and symptoms that are similar to those of HUS, is typically diagnosed in adults. When TTP is diagnosed after a diarrheal illness, the condition is usually caused by infection with O157 STEC or another STEC. In this report, regardless of the age of the patient, TTP diagnosed after a diarrheal illness is referred to as HUS (10). Whether an illness progresses to HUS depends on strain virulence and host factors (11). Although most persons with diarrhea-associated HUS have an O157 STEC infection, certain non-O157 STEC strains also can lead to HUS (3). The virulence of non-O157 STEC is partly determined by the toxins they produce; non-O157 STEC strains that produce only Stx2 are more often associated with HUS than strains that produce only Stx1 or that produce both Stx1 and Stx2 (12). 
STEC infections and HUS occur in persons of all ages, but the incidence of STEC infection is highest in children aged <5 years, as is the risk for HUS (9). Although STEC infections are more common during summer months, they can occur throughout the year. STEC transmission occurs through consumption of a wide variety of contaminated foods, including undercooked ground beef, unpasteurized juice, raw milk, and raw produce (e.g., lettuce, spinach, and alfalfa sprouts); through ingestion of contaminated water; through contact with animals or their environment; and directly from person to person (e.g., in childcare settings). Both O157 STEC and O111 STEC have a low infectious dose (<100 organisms) (13); the infectious dose of other serogroups is not known. Prompt and accurate diagnosis of STEC infection is important because appropriate treatment with parenteral volume expansion early in the course of infection might decrease renal damage and improve patient outcome (14). In addition, because antibiotic therapy in patients with STEC infections might be associated with more severe disease, prompt diagnosis is needed to ensure proper treatment. Furthermore, prompt laboratory identification of STEC strains is essential for implementation of control measures, for effective and timely outbreak responses, to detect new and emerging serotypes, and to monitor trends in disease epidemiology (1,15,16). Most O157 STEC isolates can be readily identified in the laboratory when grown on sorbitol-containing selective media because O157 STEC cannot ferment sorbitol within 24 hours. However, many clinical laboratories do not routinely culture stool specimens for O157 STEC. In addition, selective and differential media are not available for the culture of non-O157 STEC, and even fewer laboratories culture stool specimens for these bacteria than for O157 STEC. Recently, the increased use of enzyme immunoassay (EIA) or polymerase chain reaction (PCR) to detect Shiga toxin or the genes that encode the toxins (stx1 and stx2) has facilitated the diagnosis of both O157 and non-O157 STEC infections. Although EIA and other nonculture tests are useful tools for diagnosing STEC infection, they should not replace culture; a pure culture of the pathogen obtained by the clinical laboratory (O157 STEC) or the public health laboratory (non-O157 STEC) is needed for serotyping and molecular characterization (e.g., pulsed-field gel electrophoresis [PFGE] patterns), which are essential for detecting, investigating, and controlling STEC outbreaks. Simultaneous culture of stool for O157 STEC and EIA testing for Shiga toxin is more effective for identifying STEC infections than the use of either technique alone (17,18). Because virtually all O157 STEC have the genes for Stx2 (stx2) and intimin (eae), which are found in strains that are associated with severe disease (5,12,(19)(20)(21)(22), detection of O157 STEC should prompt immediate initiation of steps such as parenteral volume expansion to reduce the risk for renal damage in the patient and the spread of infection to others. Guidelines for clinical and laboratory identification of STEC infections have been previously published (1); this report provides the first comprehensive and detailed recommendations for isolation and identification of STEC by clinical laboratories. 
The recommendations are intended primarily for clinical laboratories but also are an important reference for health-care providers, public health laboratories, public health authorities, and patients and their advocates. # Recommendation for Identification of STEC by Clinical Laboratories All stools submitted for testing from patients with acute community-acquired diarrhea (i.e., for detection of the enteric pathogens Salmonella, Shigella, and Campylobacter) should be cultured for O157 STEC on selective and differential agar. These stools should be simultaneously assayed for non-O157 STEC with a test that detects the Shiga toxins or the genes encoding these toxins. All O157 STEC isolates should be forwarded as soon as possible to a state or local public health laboratory for confirmation and additional molecular characterization (i.e., PFGE analysis and virulence gene characterization). Detection of STEC or Shiga toxin should be reported promptly to the treating physician, to the public health laboratory for confirmation, isolation, and subsequent testing of the organism, and to the appropriate public health authorities for case investigation. Specimens or enrichment broths in which Shiga toxin or STEC are detected but from which O157 STEC are not recovered should be forwarded as soon as possible to a state or local public health laboratory. # Benefits of Recommended Testing Strategy # Identification of Additional STEC Infections and Detection of All STEC Serotypes Evidence indicates that STEC might be detected as frequently as other bacterial pathogens. In U.S. studies, STEC were detected in 0%-4.1% of stools submitted for testing at clinical laboratories, rates similar to those of Salmonella species (1.9%-4.8%), Shigella species (0.2%-3.1%), and Campylobacter species (0.9%-9.3%) (9,17,(23)(24)(25)(26)(27)(28)(29)(30)(31). In one study, the proportion of stools with STEC detected varied by study site (9); O157 STEC were more commonly isolated than some other enteric pathogens in northern states. The laboratory strategy of culturing stool while simultaneously testing for Shiga toxin is more sensitive than other strategies for STEC identification and ensures that all STEC serotypes will be detected (17,18,30,31) (Table 1). In addition, immediate culture ensures that O157 STEC bacteria are detected within 24 hours of the initiation of testing. # Early Diagnosis and Improved Patient outcome Early diagnosis of STEC infection is important for determining the proper treatment promptly. Initiation of parenteral volume expansion early in the course of O157 STEC infection might decrease renal damage and improve patient outcome (14). Conversely, certain treatments can worsen patient outcomes; for example, antibiotics might increase the risk for HUS in patients infected with O157 STEC, and antidiarrheal medications might worsen the illness (32). Early diagnosis of STEC infection also might prevent unnecessary procedures or treatments (e.g., surgery or corticosteroids for patients with severe abdominal pain or bloody diarrhea) (33)(34)(35). # Prompt Detection of outbreaks Prompt laboratory diagnosis of STEC infection facilitates rapid subtyping of STEC isolates by public health laboratories and submission of PFGE patterns to PulseNet, the national molecular subtyping network for foodborne disease surveillance (36). Rapid laboratory diagnosis and subtyping of STEC isolates leads to prompt detection of outbreaks, timely public health actions, and detection of emerging STEC strains (37,38). 
Delayed diagnosis of STEC infections might lead to secondary transmission in homes, child-care settings, nursing homes, and food service establishments (39,(40)(41)(42)(43)(44) and might delay detection of multistate outbreaks related to widely distributed foods (39,45). Outbreaks caused by STEC with multiple serogroups (46) or PFGE patterns (47) have been documented. # Criteria for STEC Testing and Specimen Selection All stool specimens from patients with acute onset of community-acquired diarrhea and from patients with possible HUS should be tested for STEC. Many infections are missed with selective STEC testing strategies (e.g., testing only specimens from children, testing only during summer months, or testing only stools with white blood cells or blood). Some patients with STEC infection do not have visibly bloody stools, whereas some persons infected with other pathogens do have bloody stools (3,9,48,49). Therefore, the absence of blood in the stool does not rule out the possibility of a STEC-associated diarrheal illness; both O157 and non-O157 strains have been isolated from patients with nonbloody diarrhea (30)(31)(32)(49)(50)(51). Similarly, white blood cells are often but not always detected in the stools of patients with STEC infection and should not be used as a criterion for STEC testing (9,39). Selective testing on the basis of patient age or season of the year also might result in undetected infections. Although STEC bacteria are isolated more frequently from children, almost half of all STEC isolates are from persons aged >12 years (5,9,49,52); testing for STEC only in specimens from children would result in many missed infections. In addition, although STEC infections are more common in summer months, infections and outbreaks occur throughout the year (5,9,32). Stools should be tested as early as possible in the course of illness; bacteria might be difficult or impossible to detect in the stool after 1 week of illness (53,54), and the Shiga toxin genes might be lost by the bacteria (55). In certain instances, retrieval of plates from cultures obtained earlier in the illness that were not initially evaluated for STEC might be necessary. Early detection of STEC and proper patient management are especially important among children because they are the age group most likely to have an infection that develops into HUS (32). STEC testing might not be warranted, or selective STEC testing might be appropriate, for patients who have been hospitalized for ≥3 days; infection in this setting is more likely to be caused by Clostridium difficile toxin than another enteric pathogen (39). However, when a patient is admitted to the hospital with symptoms of a diarrheal illness, a stool culture with STEC testing might be appropriate, regardless of the number of days of hospitalization. In addition, although few hospital-associated outbreaks of STEC have been reported, if a hospitalized patient is involved in a hospital-associated outbreak of diarrhea, STEC testing should be performed if tests are also being conducted for other bacterial enteric pathogens (e.g., Salmonella). Although chronic diarrhea is uncommon in patients with STEC infection, certain STEC strains have been associated with prolonged or intermittent diarrhea; therefore, testing for Shiga toxin should be considered if an alternative diagnosis (e.g., ulcerative colitis) has not been identified (56). 
Testing multiple specimens is likely unnecessary unless the original specimen was not transported or tested appropriately or the test results are not consistent with the patient's signs and symptoms. After STEC bacteria are detected in a specimen, additional specimens from the same patient do not need to be tested for diagnostic purposes. To prevent additional transmission of infection, certain persons (e.g., food-service workers and children who attend child-care facilities or adults who work in these facilities) who receive a diagnosis of STEC infection might be required by state law or a specific facility to prove that they are no longer shedding the bacteria after treatment and before returning to the particular setting. Follow-up specimens are usually tested by state public health laboratories. No data exist regarding the effectiveness of excluding postsymptomatic carriers of non-O157 STEC (i.e., persons who test positive for non-O157 STEC but no longer have symptoms) from work or school settings in preventing secondary spread.

# Procedures for Collecting and Handling Specimens for STEC Diagnostic Testing

# Acceptable Specimens for Testing

Laboratories should always consult the manufacturer instructions for the assay being performed to determine procedures for specimen collection and handling, including specimen types that may be used with a particular assay or test system. The ideal specimen for testing is diarrheal stool; stool specimens should be collected as soon as possible after diarrhea begins, while the patient is acutely ill, and before any antibiotic treatment is administered. The same specimen that is collected for Salmonella, Shigella, and Campylobacter testing is acceptable for STEC culture and Shiga toxin detection. Collecting and testing specimens as soon as possible after symptom onset is important to ensure maximal sensitivity and specificity for STEC detection with available commercial diagnostic assays. Diagnostic methods such as the Shiga toxin immunoassay that target traits encoded on mobile genetic elements (e.g., phages) are less sensitive if the elements have been lost (53,57). Shiga toxin testing should be performed on growth from broth culture or primary isolation media because this method is more sensitive and specific than direct testing of stool. In addition, because the amount of free fecal Shiga toxin in stools is often low, EIA testing of broth enrichments from stools or of growth from the primary isolation plate is recommended rather than direct testing of stools (58). Although rectal swabs are often used to collect stool from children, swabs might not contain enough stool to culture for multiple enteric pathogens and to perform STEC testing. If rectal swabs must be used to collect specimens for STEC testing, broth enrichment is recommended. Laboratories should consult the manufacturer instructions for information on the suitability of toxin testing using stool from rectal swabs. Commercially available assays have not been validated for specimens collected by endoscopy or colonoscopy. If a laboratory chooses to use an assay for patient testing with a specimen other than that included in the manufacturer's FDA-cleared package insert, under the Clinical Laboratory Improvement Amendments (CLIA) of 1988, that laboratory must first establish the performance specifications for the alternative specimen type (59). Unless STEC are isolated, results from tests on alternative specimen types should be interpreted with caution.
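The collection guidance above can be condensed into a small lookup for quick reference. The phrasing below paraphrases the text; it is not any assay's actual labeling, and the manufacturer instructions always govern.

```python
# Quick-reference paraphrase of the specimen guidance above (not labeling).
SPECIMEN_GUIDANCE = {
    "diarrheal stool": "preferred; collect early in illness, before antibiotics",
    "rectal swab": "broth enrichment recommended; may hold too little stool "
                   "for multiple enteric pathogens plus STEC testing",
    "whole stool (direct toxin testing)": "not recommended; test an enrichment "
                                          "broth or growth from the primary plate",
    "endoscopy/colonoscopy specimen": "commercial assays not validated; CLIA "
                                      "requires establishing performance "
                                      "specifications first",
}

for specimen, guidance in SPECIMEN_GUIDANCE.items():
    print(f"{specimen}: {guidance}")
```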
# Specimen Handling

Specimens should be sent to the laboratory as soon as possible for O157 STEC culture and Shiga toxin testing. Ideally, specimens should be processed as soon as they are received by the laboratory. Specimens that are not processed immediately should be refrigerated until tested; if possible, they should not be held for >24 hours unpreserved or for >48 hours in transport medium. All O157 STEC isolates and all specimens or enrichment broths in which Shiga toxin is detected but from which O157 STEC bacteria are not recovered should be forwarded to the public health laboratory as soon as possible in compliance with the receiving laboratory's guidelines.

# Transport Media

Specimens should be transported under conditions appropriate for the transport medium used and the tests to be performed; appropriate transport conditions can be determined by reviewing the manufacturer instructions. Stool specimens that cannot be immediately transported to the laboratory for testing should be put into a transport medium (e.g., Cary-Blair) that is optimal for the recovery of all bacterial enteric pathogens. Laboratories should consult the manufacturer instructions for the suitability of toxin testing for stool in transport medium. If a laboratory must perform direct Shiga toxin testing of stool, the stool specimen should be refrigerated but should not be placed in transport medium. Direct toxin testing of stool should follow the manufacturer instructions.

# Culture for STEC

# O157 STEC

O157 STEC can usually be easily distinguished from most E. coli that are members of the normal intestinal flora by their inability to ferment sorbitol within 24 hours on sorbitol-containing agar isolation media. To isolate O157 STEC, a stool specimen should be plated onto a selective and differential medium such as sorbitol-MacConkey agar (SMAC) (60), cefixime tellurite-sorbitol MacConkey agar (CT-SMAC), or CHROMagar O157. After incubation for 16-24 hours at 37°C (99°F), the plate should be examined for possible O157 colonies, which are colorless on SMAC or CT-SMAC and are mauve or pink on CHROMagar O157. Both CT-SMAC and CHROMagar O157 are more selective than SMAC, which increases the sensitivity of culture for detection of O157 STEC (61,62). Sorbitol-fermenting STEC O157:H- (i.e., nonmotile [NM]), a pathogen that is uncommon in the United States and primarily reported from Germany, might not grow on CT-SMAC agar because the bacteria are susceptible to tellurite. To identify O157 STEC, a portion of a well-isolated colony (i.e., a distinct, single colony) should be selected from the culture plate and tested in O157-specific antiserum or O157 latex reagent as recommended by the manufacturer (63). Colonies that agglutinate with one of the O157-specific reagents and do not agglutinate with normal serum or control latex reagent are presumed to be O157 STEC. At least three colonies should be screened (CDC, unpublished data, 2009). If O157 STEC bacteria are identified in any one of the three colonies, no additional colonies need to be tested. The colony in which O157 STEC are detected should be streaked onto SMAC or a nonselective agar medium such as tryptic soy agar (TSA), heart infusion agar (HIA), or blood agar and biochemically confirmed to be E. coli (e.g., using standard biochemical tests or commercial automated systems) because other bacterial species can cross-react in O157 antiserum (64-66).
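The colony-screening step lends itself to a small loop. In this hypothetical sketch, the two callables stand in for bench procedures (O157 latex agglutination and biochemical confirmation as E. coli); it simply encodes the rule of screening at least three well-isolated colonies and stopping at the first confirmed positive.

```python
# Sketch of the colony-screening rule described above; the callables are
# stand-ins for bench work, not an instrument API.

def screen_colonies(colonies, latex_positive, confirmed_e_coli, max_screened=3):
    """Return the first colony that is presumptive O157 STEC and confirms
    as E. coli, or None if none of the screened colonies qualifies."""
    for colony in colonies[:max_screened]:
        if latex_positive(colony):
            # Presumptive O157 STEC: per the text, the preliminary finding
            # is reportable now, before biochemical confirmation completes.
            if confirmed_e_coli(colony):
                return colony  # forward to the public health laboratory
    return None

# Toy example: the second colony agglutinates and confirms as E. coli.
hit = screen_colonies(
    ["colony1", "colony2", "colony3"],
    latex_positive=lambda c: c == "colony2",
    confirmed_e_coli=lambda c: True,
)
print(hit)  # colony2
```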
Before confirmation is complete at the laboratory (which might take >24 hours), the preliminary finding of O157 STEC should be reported to the treating clinician and should be documented according to laboratory policies for other time-sensitive, clinically important laboratory findings. The preliminary nature of these presumptive results and the need for test confirmation should be indicated in the report. After O157 STEC colonies have been isolated, been found to agglutinate with O157 latex reagent, and been biochemically confirmed as E. coli, a written or electronic report should be provided to the clinician and public health authorities (Table 2). All O157 STEC isolates should be forwarded to the public health laboratory as soon as possible, regardless of whether H7 testing has been attempted or completed. At the public health laboratory, O157 STEC isolates should be tested by EIA for Shiga toxin production or by PCR for the stx1 and stx2 genes. Actively motile O157 STEC strains should be tested for the H7 antigen. All O157 STEC strains should be subtyped by PFGE as soon as possible.

# Non-O157 STEC

Identification of non-O157 STEC typically occurs at the public health laboratory and not at clinical laboratories (Table 3). However, this section includes the basic techniques for reference. To isolate non-O157 STEC, the Shiga toxin-positive broth should be streaked to a relatively less selective agar (e.g., MacConkey agar, SMAC, Statens Serum Institut [SSI] enteric medium, or blood agar). Traditional enteric media such as Hektoen agar, xylose-lysine-desoxycholate agar, and Salmonella-Shigella agar inhibit many E. coli and are not recommended (67). All possible O157 STEC colonies should be tested in O157 latex reagent before isolation of non-O157 STEC is attempted. Well-isolated colonies with E. coli-like morphology should be selected on the basis of sorbitol or lactose fermentation characteristics (or other characteristics specific to the medium used); most non-O157 STEC ferment both sorbitol and lactose, although exceptions have been reported (CDC, unpublished data, 2009). Colonies may be tested for Shiga toxin production by EIA or for the stx1 and stx2 genes by PCR. Non-O157 STEC may be tested using commercial O-specific antisera for the most common STEC-associated O antigens (i.e., O26, O45, O103, O111, O121, and O145) (5). Non-O157 STEC isolates should be forwarded to a public health laboratory for confirmation of Shiga toxin production, serogroup determination, and PFGE subtyping.

# Nonculture Assays for Detection of Shiga Toxins and STEC

Nonculture assays that detect the Shiga toxins produced by STEC (e.g., the Shiga toxin EIA) were first introduced in the United States in 1995. The primary advantage of nonculture assays for Shiga toxin is that they can be used to detect all serotypes of STEC. In addition, nonculture assays might provide results more quickly than culture. The primary disadvantage of nonculture-based assays is that the infecting organism is not isolated for subsequent serotyping and a specific diagnosis of O157 STEC. Lack of an isolated organism limits the ability of physicians to predict the potential severity of the infection in the patient (e.g., risk for HUS) and the risk for severe illness in patient contacts, and it limits the ability of public health officials to detect and control STEC outbreaks and monitor trends in STEC epidemiology.
In addition, although the nonculture assays for Shiga toxin also detect Stx1 produced by Shigella dysenteriae type 1, infection with this organism is rare in the United States, with fewer than five cases reported each year (68).

# Shiga Toxin Immunoassays

The Center for Devices and Radiological Health of the Food and Drug Administration (FDA) has approved several immunoassays for the detection of Shiga toxin in human specimens (Table 4). Because the amount of free fecal Shiga toxin in stools is often low (58), EIA testing of enrichment broth cultures incubated overnight (16-24 hours at 37°C [99°F]), rather than direct testing of stool specimens, is recommended. In addition, the manufacturer information indicates that tests performed on broth cultures have higher sensitivity and specificity than those performed on stool. No studies have determined whether one type of broth is most effective; MacConkey and gram-negative broths are both suitable. Four FDA-approved immunoassays are available in the United States (Table 4). Reported sensitivities and specificities of Shiga toxin immunoassays vary by test format and manufacturer. The standard by which each manufacturer evaluates its tests varies; a direct comparison of the performance characteristics of the various immunoassays has not been made. The clinical performance characteristics of each test are available in the package insert. Clinical laboratories should evaluate these performance characteristics and verify that they can obtain performance specifications comparable to those of the manufacturer before implementing a particular test system. The College of American Pathologists (69) and the American Proficiency Institute (70) offer proficiency testing for STEC immunoassays. Laboratories should immediately report Shiga toxin-positive specimens to the treating clinician and appropriate public health and infection control officials. Clinical laboratories should forward Shiga toxin-positive specimens or enrichment broths to a public health laboratory as soon as possible for isolation and additional characterization. In multiple studies, for reasons that are unknown, EIAs failed to detect a subset of O157 STEC that were readily identified on simultaneously plated SMAC agar, underscoring the importance of primary isolation (17,50,71-73). EIA tests also might yield false-positive STEC results when other pathogens are present (1,74,75).

# PCR

PCR assays to detect the stx1 and stx2 genes are used by many public health laboratories for diagnosis and confirmation of STEC infection. Depending on the primers used, these assays can distinguish between stx1 and stx2 (76-78). Assays also have been developed that determine the specific O group of an organism, detect virulence factors such as intimin and enterohemolysin, and differentiate among the subtypes of Shiga toxins (79-81). Because these tests are not commercially available, they are rarely used for human disease diagnosis in the United States. Most PCR assays are designed and validated for testing isolated colonies taken from plated media; some assays have been validated for testing on stool specimens subcultured to an enrichment broth and incubated for 18-24 hours. Shiga toxin PCR assays on DNA extracted from whole stool specimens are not recommended because the sensitivity is low (82). The time required to obtain PCR assay results ranges from 3 hours (if an isolate is tested) to 24-36 hours (if the specimen is first subcultured to an enrichment broth or plate).
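The turnaround figures quoted above imply simple arithmetic when choosing between the two PCR workflows. The sketch below only restates the ranges given in the text; the broth-workflow sum is a rough check against the stated 24-36 hour overall range, which also absorbs specimen handling time.

```python
# Rough turnaround arithmetic for the two PCR workflows described above.
# Hour figures are taken from the text; sums are approximate.
PCR_RUN_H = 3            # PCR performed on an isolated colony
ENRICHMENT_H = (18, 24)  # broth subculture incubation, per the text

broth_lo, broth_hi = (h + PCR_RUN_H for h in ENRICHMENT_H)
print(f"isolate workflow: ~{PCR_RUN_H} h")
print(f"broth workflow:   ~{broth_lo}-{broth_hi} h (text: 24-36 h overall)")
```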
DNA-based Shiga toxin gene detection is not approved by FDA for diagnosis of human STEC infections by clinical laboratories; however, public health laboratories might use this technique for confirmatory testing after internal validation. One commercial PCR kit is available to test for STEC virulence genes (DEC Primer Mix, Mira Vista Diagnostics, Indianapolis, Indiana); however, this test is labeled for research use only, can be used only on isolates, and is not approved by FDA for diagnosis of human STEC infections. Clinical laboratories that are considering adding a DNA-based assay to their testing options need to establish performance specifications for the assay as required by CLIA (59), and reports from such testing should include a disclaimer to inform clinicians that the test is not approved by FDA (83). No commercially available proficiency testing programs are available in the United States for PCR assays that target the Shiga toxin genes; however, internal proficiency testing events and exchanges with other laboratories may be used to fulfill CLIA requirements (84).

* Some testing procedures overlap between clinical and public health laboratories. When risk for STEC transmission to the public is high (e.g., workers in restaurants or child-care facilities), public health laboratories might conduct more of the primary STEC testing.
† Simultaneous O157 STEC culture and Shiga toxin testing is recommended. Clinical laboratories should submit all STEC isolates and Shiga toxin-positive broths to a public health laboratory for additional testing.
§ Before sending the final report, public health laboratories should ensure that the isolated strain has genes for Shiga toxin, produces Shiga toxin, or has the H7 antigen.
¶ The public health laboratory may determine the O antigen or send the isolate to CDC for O antigen and H antigen determination.
** Public health laboratories that detect Shiga toxin by immunoassay are encouraged to use a different manufacturer's kit than the one used by the clinical laboratory whose results they are confirming and should request the name of the kit used for each test.
†† DNA-based Shiga toxin gene detection is not approved by the Food and Drug Administration for diagnosis of human STEC infections by clinical laboratories; however, public health laboratories might use this technique for confirmatory testing after internal validation.

# O157 Immunoassays

One commercial immunoassay is available to test for the O157 and H7 antigens in human stools and stool cultures (ImmunoCard STAT! E. coli O157:H7; Meridian Bioscience, Cincinnati, Ohio). This rapid assay may be performed either directly on stools or on an enrichment broth culture incubated overnight (16-24 hours at 37°C [99°F]). When performed directly on stool specimens, compared with culture for O157 STEC, the assay has an overall sensitivity of 81% and specificity of 97% (85,86). This test is not recommended as a first-line or primary test for diagnosis, in part because 1) the assay does not detect non-O157 STEC serogroups, 2) not all E. coli O157 produce Shiga toxin, and 3) no isolates will be available for testing at the public health laboratory. Laboratories that use this test should ensure that specimens in which O157 STEC bacteria are not detected are tested for Shiga toxin and cultured for STEC; positive specimens should also be cultured for STEC.
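The stated sensitivity and specificity also explain, via Bayes' theorem, why a rapid antigen assay is a poor stand-alone screen in routine stool testing: at the low prevalence typical of unselected specimens, most positives would be false. The prevalence values in this sketch are assumed purely for illustration; only the 81%/97% figures come from the text.

```python
# Worked example: positive predictive value (PPV) of the rapid O157 assay
# at two assumed prevalences. Sensitivity/specificity are from the text;
# the prevalence values are illustrative assumptions.

def ppv(sensitivity: float, specificity: float, prevalence: float) -> float:
    """PPV via Bayes' theorem: TP / (TP + FP)."""
    true_pos = sensitivity * prevalence
    false_pos = (1.0 - specificity) * (1.0 - prevalence)
    return true_pos / (true_pos + false_pos)

for prev in (0.005, 0.04):  # assumed low- vs. higher-prevalence populations
    print(f"prevalence {prev:.1%}: PPV = {ppv(0.81, 0.97, prev):.0%}")
# prevalence 0.5%: PPV = 12%;  prevalence 4.0%: PPV = 53%
```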
In clinical settings, O157 immunoassays are less useful than EIA tests that distinguish between Stx1 and Stx2 for identifying patients at risk for developing severe disease.

# Cell Cytotoxicity Assay

The Vero (African green monkey kidney) and HeLa cell lines are very sensitive to Shiga toxin because they have high concentrations of the globotriaosylceramides Gb3 and Gb4, the receptors for Shiga toxin in eukaryotic cells. Sterile fecal filtrates prepared from fresh stool specimens or broth enrichments of selected colonies are inoculated onto cells and observed for the typical cytopathic effect. Confirmation that the cytopathic effect is caused by Shiga toxin is performed by neutralization using anti-Stx1 and anti-Stx2 antibodies. Although very sensitive, this method is not routinely used in most clinical microbiology laboratories because it requires familiarity with tissue culture technique, the availability of cell monolayers, and specific antibodies. Testing typically takes 48-72 hours (13).

# Specialized Diagnostic Methods

Certain specialized diagnostic methods might be used by public health laboratories for patients with HUS and during outbreak investigations. Immunomagnetic separation (IMS) is useful when the number of STEC organisms in a specimen is expected to be small (e.g., in specimens from patients who seek treatment ≥5 days after illness onset, in specimens from asymptomatic carriers, and in specimens that have been stored or transported improperly) (87,88). IMS beads labeled with O26, O103, O111, O145, or O157 antisera are commercially available. IMS is not approved by FDA for use on human specimens. Serodiagnostic methods that measure antibody responses to serogroup-specific lipopolysaccharides can provide evidence of STEC infection (89). No such tests are commercially available in the United States. CDC uses internally validated tests to detect immunoglobulin M (IgM) and immunoglobulin G (IgG) responses to infection with serogroup O157 and IgM responses to infection with serogroup O111 in patient sera obtained during outbreak investigations and for special purposes.

# Forwarding Specimens and Isolates to Public Health Laboratories

# Specimens To Be Forwarded

All O157 STEC isolates growing on selective agar should be subcultured to agar slants and forwarded as soon as possible to the appropriate public health laboratory for additional characterization, in compliance with the recommendations of the receiving laboratory and shipping regulations. If agar slants are not available at the submitting laboratory, an acceptable alternative might be a swab that is heavily inoculated with representative growth and placed in transport medium. Not all specimens that test positive for Shiga toxin yield an easily identifiable O157 STEC or non-O157 STEC colony on subculture. All Shiga toxin-positive specimens or broths from which no STEC isolate was recovered should be forwarded to the appropriate public health laboratory for isolation and additional testing; shipping of Shiga toxin-positive specimens or broths should not be delayed pending bacterial growth or isolation. Broths that cannot be shipped on the day the EIA test is performed should be stored at 4°C (39°F) until they are prepared for shipping. Public health laboratories should be prepared to accept isolates and broths that were Shiga toxin-positive in an EIA for additional testing, with or without the primary stool specimen.
Clinical laboratories should contact the appropriate public health laboratory to determine the laboratory's preferences and the applicable regulations.

# Transport Considerations

United Nations regulations (Division 6.2, Infectious Substances) stipulate that a verotoxigenic E. coli culture is a category A (United Nations number 2814) infectious substance, defined as an infectious substance in a form capable of causing permanent disability or life-threatening or fatal disease in otherwise healthy humans or animals when exposure to the substance occurs. The International Air Transport Association (IATA) and the Department of Transportation (DOT) have modified their shipping guidance to comply with this requirement (90,91). Therefore, all possible and confirmed O157 STEC and non-O157 STEC isolates and Shiga toxin-positive EIA broths should be shipped as category A infectious substances. If the identity of the infectious material being transported has not been confirmed or is unknown, but the material might meet the criteria for inclusion in category A (e.g., a broth culture that is positive for Shiga toxin or a stool culture from a patient who might be part of an O157 STEC outbreak), certain IATA regulations apply (91). Both IATA and DOT require that all persons who package, ship, or transport category A infectious substances have formal, documented training every 2 years (92,93). Category A substances must be packaged in a water-tight primary receptacle. For shipment, slants or transport swabs heavily inoculated with representative growth are preferred to plates. Plates are acceptable only in rare instances in which patient diagnosis or management would be delayed by subculturing an organism to a slant for transport; shipment of plates must be preapproved by the receiving public health laboratory. If a swab is used, the shaft should be shortened to ensure a firm fit within the plastic sheath, and the joint should be secured with parafilm to prevent leakage. When shipping enrichment broths, the cap must fit tightly enough to prevent leakage into the shipping container, and parafilm should be wrapped around the cap to provide a better seal. Slants, swabs of pure cultures, and plates (if approved by the receiving laboratory) may be shipped at ambient temperature. Stools in transport media, raw stools, and broths should be shipped with a cold pack to prevent growth of other gram-negative flora. Commercial couriers vary regarding their acceptance of category A agents; clinical laboratories should check with their preferred commercial courier for current requirements. Shipping category A specimens by commercial courier usually incurs a surcharge in addition to normal shipping fees. Category A infectious substances are not accepted by the U.S. Postal Service (94). Shipping by a private (noncommercial) courier that is dedicated only to the transport of clinical specimens does not exempt specimens from DOT or IATA regulations; category A specimens must be packaged according to United Nations Division 6.2 regulations, with appropriate documentation, even if they are not being transported by a commercial carrier (94). Laboratories should collaborate to develop packaging and shipping specifications based on existing requirements; these specifications should be incorporated into a standard operating procedure and followed consistently. A United Nations-approved category A shipping container must be used for cultures, and cultures must be packaged and documented according to DOT and IATA regulations (95).
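For day-to-day use, the transport rules above behave like a checklist. The sketch below distills them into one; it is a reminder list under the stated assumptions, not a substitute for the DOT/IATA regulations or the receiving laboratory's requirements.

```python
# Illustrative pre-shipment checklist distilled from the guidance above.
# Material categories and wording paraphrase the text.

def category_a_checklist(material: str) -> list[str]:
    checks = [
        "shipper has documented category A training within the last 2 years",
        "water-tight primary receptacle (parafilm on swab joints/broth caps)",
        "United Nations-approved category A shipping container (UN 2814)",
        "DOT/IATA documentation completed",
        "courier accepts category A substances (U.S. Postal Service does not)",
    ]
    if material == "plate":
        checks.append("plate shipment preapproved by receiving laboratory")
    if material in ("raw stool", "stool in transport medium", "broth"):
        checks.append("cold pack included to suppress other gram-negative flora")
    return checks

for item in category_a_checklist("broth"):
    print("-", item)
```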
# Interpretation of Final Results

Several tests are available to clinical and public health microbiology laboratories for the detection of STEC, and they may be used alone or in combination. No testing method is 100% sensitive or specific, and the predictive value of a positive test is affected by the patient population that a particular laboratory serves. Specificity and sensitivity might be increased by using a combination of tests. However, when test results conflict, interpretation might be difficult, especially when clinical and public health laboratory test results are compared. Clinical and public health laboratories document STEC test results in a final report (Table 2). Discordant results (e.g., a positive immunoassay at a clinical laboratory but a negative PCR result at a public health laboratory) might need to be discussed among the treating physician, the public health epidemiologist, and clinical and public health laboratory staff members; however, the outcome of most patients' illnesses (i.e., resolution of symptoms or progression to HUS) is already known by the time discordant laboratory findings are resolved. Proper interpretation of test results, which is needed for appropriate patient evaluation and treatment, includes consideration of several factors: whether the type of specimen tested was appropriate for the test (e.g., specimens from rectal swabs or whole stools placed in transport medium), the timing of specimen collection relative to illness onset, the patient's signs and symptoms, the epidemiologic context of the patient's illness, whether the manufacturer instructions were followed precisely, and the possibility of a false-positive or false-negative test result.

# Clinical Considerations

Accurate, rapid identification of STEC, particularly of E. coli O157:H7, is critical for patient management and disease control. Therefore, the types of microbiologic tests chosen, performed, and reported, and the subsequent communication with treating clinicians, are critical. Prompt and proper treatment of patients with a positive or presumptively positive STEC culture requires rapid and clear diagnostic enteric microbiology and reporting of data. More detailed information on clinical considerations and care of patients with STEC infection is available from recent clinical reviews (32,96,97).

# Conclusion

Accumulated findings from investigations of STEC outbreaks, studies of sporadic STEC infections, and passive and active surveillance provide compelling evidence to support the recommendation that all stools submitted for routine testing to clinical laboratories from patients with community-acquired diarrhea should be cultured for O157 STEC and simultaneously tested for non-O157 STEC with an assay that detects Shiga toxins. These recommendations should improve the accuracy of diagnosing STEC infections, facilitate assessment of risk for severe illness, promote prompt diagnosis and treatment, and improve detection of outbreaks. Because time is critical in diagnosing STEC infection, treating patients, and recognizing and controlling outbreaks, laboratories should culture for O157 STEC while simultaneously testing for all STEC serotypes, rather than proceeding sequentially (i.e., rather than first conducting a Shiga toxin test to determine whether to culture).
O157 STEC are responsible for most STEC outbreaks and most cases of severe disease; almost all strains have the virulence genes stx2 and eae, which are associated with severe disease. Detection of O157 STEC within 24 hours after specimen submission to the laboratory helps physicians rapidly assess the patient's risk for severe disease and initiate measures to prevent serious complications, such as renal damage and death. Rapid isolation of the infecting organism helps public health officials quickly initiate measures to detect outbreaks and control the spread of infection. Because of the dynamic nature of the Shiga toxin-converting phages and the potential for decreased diagnostic sensitivity for these pathogens later during infection, future commercial assays that target stable traits might improve diagnostic sensitivity. To facilitate diagnosis and patient management, future methods would also ideally allow for an assessment of the organism's potential to cause severe disease (e.g., related to the presence of stx2, certain stx2 subtypes, and eae). Improved isolation methods for non-O157 STEC also are needed. As nucleotide sequences for more STEC strains become available, comparative genomic studies might identify targets that can be used to improve detection, virulence profiling, and isolation strategies. The Association of Public Health Laboratories (APHL), in conjunction with state and federal partners, is developing guidelines for receiving, testing, isolating, and characterizing STEC isolates and specimens in public health laboratories. That document will complement the guidelines in this report and will be available on the APHL website (http://www.aphl.org) by early 2010. Additional information on STEC is available at http://www.cdc.gov/ecoli.
BTU British thermal unit
CDC Centers for Disease Control and Prevention
CFR Code of Federal Regulations
CGA Canadian Gas Association
CO carbon monoxide
CPR cardiopulmonary resuscitation
CPSC Consumer Product Safety Commission
CSIA Chimney Safety Institute of America

# ABPA The American Backflow Prevention Association, Is an organization whose members have a common interest in protecting drinking water from contamination through cross-connection control.

# ACI American Concrete Institute, Has produced more than 400 technical documents, reports, guides, specifications, and codes for the best use of concrete. ACI conducts more than 125 educational seminars each year and has 13 certification programs for concrete practitioners, as well as a scholarship program to promote careers in the industry.

# AGA American Gas Association, Develops standards, tests, and qualifies products used in gas lines and gas appliance installations.

# AGC Associated General Contractors of America, Is dedicated to improving the construction industry by educating the industry to employ the finest skills, promoting use of the latest technology, and advocating building the best quality projects for owners, public and private.

# AMSA Association of Metropolitan Sewerage Agencies, Represents the interests of the country's wastewater treatment agencies.

# ANSI American National Standards Institute, Coordinates work among U.S. standards writing groups and works in conjunction with other groups such as ISO, ASME, and ASTM.

# ARI Air-Conditioning and Refrigeration Institute, Provides information about the 21st Century Research (21-CR) initiative, a private-public sector research collaboration of the heating, ventilation, air-conditioning, and refrigeration industry, with a focus on energy conservation, indoor environmental quality, and environmental protection.

# ASCE American Society of Civil Engineers, Provides essential value to its members, careers, partners, and the public by developing leadership, advancing technology, advocating lifelong learning, and promoting the profession.

# ASHI The American Society of Home Inspectors, Is a source of information about the home inspection profession.

# ASHRAE American Society of Heating, Refrigerating and Air-Conditioning Engineers, Writes standards and guidelines that include uniform methods of testing for rating purposes, describe recommended practices in designing and installing equipment, and provide other information to guide the industry. ASHRAE has more than 80 active standards and guideline project committees, addressing such broad areas as indoor air quality, thermal comfort, energy conservation in buildings, reducing refrigerant emissions, and the designation and safety classification of refrigerants.

Housing quality is key to the public's health. Translating that simple axiom into action is the topic of this book. In the 30 years since the first edition was published, the nation's understanding of how specific housing conditions are related to disease and injury has matured and deepened. This new edition will enable public health and housing professionals to grasp our shared responsibility to ensure that our housing stock is safe, decent, affordable, and healthy for our citizens, especially those who are particularly vulnerable and who spend more time in the home, such as children and the elderly. The Centers for Disease Control and Prevention and the U.S.
Department of Housing and Urban Development (HUD) have worked together with many others to discover ways to eliminate substandard housing conditions that harm health. For example, the advances in combating waterborne diseases were possible, in part, through standardization of indoor plumbing and sewage and the institution of federal, state, and local regulations and codes. Childhood lead poisoning has been dramatically reduced, in part, through the elimination of residential lead-based paint hazards. Other advances have been made to protect people from carbon monoxide poisoning, falls, safety hazards, electrocution, and many other risks. However, more must be done to control existing conditions and to understand emerging threats that remain poorly understood. For example, nearly 18 million Americans live with the health threat of contaminated drinking water supplies, especially in rural areas where on-site wastewater systems are prevalent. Despite progress, thousands of children still face the threat of lead poisoning from residential lead paint hazards. The increase in asthma in recent decades and its relationship to housing conditions such as excess moisture, mold, settled dust allergens, and ventilation remains the subject of intense research. The impact of energy conservation measures on the home environment is still unfolding. Simple, affordable construction techniques and materials that minimize moisture problems and indoor air pollution, improve ventilation, and promote durability and efficiency continue to be uncovered. A properly constructed and maintained home is nearly timeless in its usefulness. A home is often the biggest single investment people make. This manual will help to ensure that the investment is a sound one that promotes healthy and safe living. Home rehabilitation has increased significantly in the last few years, and HUD has prepared a nine-part series, The Rehab Guide, that can assist both residents and contractors in the rehabilitation process. For additional information, go to .

# Preface

We acknowledge the suggestions, assistance, and review of numerous individuals and organizations that went into the original and current versions of this manual. The revisions to this manual were made by a team of environmental health, housing, and public health professionals led by Professor Joe Beck, Dr. Darryl Barnett, Dr. Gary Brown, Dr. Carolyn Harvey, Professor Worley Johnson, Dr. Steve Konkel, and Professor Charles Treser. Individuals from the following organizations were involved in the various drafts of this manual:
- National Association of Housing and Redevelopment Officials;
- Department of Building, Housing and Zoning (Allentown, Pennsylvania);
- Code Enforcement Associates (East Orange, New Jersey);
- Eastern Kentucky University (Richmond, Kentucky);
- University of Washington (Seattle, Washington); and
- Battelle Memorial Institute (Columbus, Ohio).

Specifically, our gratitude goes to the following reviewers:
- Dr. David Jacobs, Martin Nee, and Dr. Peter Ashley, HUD;
- Pat Bohan, East Central University;
- James Larue, The House Mender Inc.;
- Ellen Tohn, ERT Associates;

# Definitions

Accessory building or structure: a detached building or structure in a secondary or subordinate capacity from the main or principal building or structure on the same premises.

Appropriate authority/Authority having jurisdiction (AHJ): a person within the governmental structure of the corporate unit who is charged with the administration of the appropriate code.
Ashes: the residue from burning combustible materials.

Attic: any story or floor of a building situated wholly or partly within the roof, and so designed, arranged, or built to be used for business, storage, or habitation.

Basement: the lowest story of a building, below the main floor and wholly or partially lower than the surface of the ground.

Building: a fixed construction with walls, foundation, and roof, such as a house, factory, or garage.

Bulk container: any metal garbage, rubbish, or refuse container having a capacity of 2 cubic yards or greater and which is equipped with fittings for hydraulic or mechanical emptying, unloading, or removal.

Central heating system: a single system supplying heat to one or more dwelling unit(s) or more than one rooming unit.

Chimney: a vertical masonry shaft of reinforced concrete, or other approved noncombustible, heat-resisting material, enclosing one or more flues, for the purpose of removing products of combustion from solid, liquid, or gaseous fuel.

Dilapidated: in a state of disrepair or ruin and no longer adequate for the purpose or use for which it was originally intended.

Dormitory: a building or a group of rooms in a building used for institutional living and sleeping purposes by four or more persons.

Dwelling: any enclosed space wholly or partly used or intended to be used for living, sleeping, cooking, and eating. (Temporary housing, as hereinafter defined, shall not be classified as a dwelling.) Industrialized housing and modular construction that conform to nationally accepted industry standards and are used or intended for use for living, sleeping, cooking, and eating purposes shall be classified as dwellings.

Dwelling unit: a room or group of rooms located within a dwelling forming a single habitable unit with facilities used or intended to be used by a single family for living, sleeping, cooking, and eating.

Egress: arrangements and openings to assure a safe means of exit from buildings.

Extermination: the control and elimination of insects, rodents, or other pests by eliminating their harborage places; by removing or making inaccessible materials that may serve as their food; or by poisoning, spraying, fumigating, trapping, or any other recognized and legal pest elimination methods approved by the local or state authority having such administrative authority. Extermination is one of the components of integrated pest management.

Fair market value: a price at which both buyers and sellers will do business.

Family: one or more individuals living together and sharing common living, sleeping, cooking, and eating facilities (see also Household).

Flush toilet: a toilet bowl that can be flushed with water supplied under pressure and that is equipped with a water-sealed trap above the floor level.

Garbage: animal and vegetable waste resulting from handling, preparation, cooking, serving, and nonconsumption of food.

Grade: the finished ground level adjacent to a required window.

Guest: an individual who shares a dwelling unit in a nonpermanent status for not more than 30 days.

Habitable room: a room or enclosed floor space used or intended to be used for living, sleeping, cooking, or eating purposes, excluding bathrooms, laundries, furnace rooms, pantries, kitchenettes and utility rooms of less than 50 square feet of floor space, foyers, or communicating corridors, stairways, closets, storage spaces, workshops, and hobby and recreation areas.
Health officer: the legally designated health authority of the jurisdiction or that person's authorized representative.

Heated water: water heated to a temperature of not less than 120°F-130°F (49°C-54°C) at the outlet.

Heating device: all furnaces, unit heaters, domestic incinerators, cooking and heating stoves and ranges, and other similar devices.

Household: one or more individuals living together in a single dwelling unit and sharing common living, sleeping, cooking, and eating facilities (see also Family).

Infestation: the presence within or around a dwelling of any insects, rodents, or other pests.

Integrated pest management: a coordinated approach to managing roaches, rodents, mosquitoes, and other pests that combines inspection, monitoring, treatment, and evaluation, with special emphasis on the decreased use of toxic agents.

Kitchen: any room used for the storage and preparation of foods and containing the following equipment: sink or other device for dishwashing, stove or other device for cooking, refrigerator or other device for cold storage of food, cabinets or shelves for storage of equipment and utensils, and counter or table for food preparation.

Kitchenette: a small kitchen or an alcove containing cooking facilities.

Lead-based paint: any paint or coating with lead content equal to or greater than 1 milligram per square centimeter, or 0.5% by weight.

Multiple dwelling: any dwelling containing more than two dwelling units.

Occupant: any individual, over 1 year of age, living, sleeping, cooking, or eating in or having possession of a dwelling unit or a rooming unit; except that in dwelling units a guest shall not be considered an occupant.

Operator: any person who has charge, care, control, or management of a building, or part thereof, in which dwelling units or rooming units are let.

Ordinary summer conditions: a temperature 10°F (5.6°C) below the highest recorded temperature in the locality for the prior 10-year period.

Ordinary winter conditions: a temperature 15°F (8.3°C) above the lowest recorded temperature in the locality for the prior 10-year period.

Owner: any person who alone, jointly, or severally with others (a) shall have legal title to any premises, dwelling, or dwelling unit, with or without accompanying actual possession thereof, or (b) shall have charge, care, or control of any premises, dwelling, or dwelling unit, as owner or agent of the owner, or as executor, administrator, trustee, or guardian of the estate of the owner.

Permissible occupancy: the maximum number of individuals permitted to reside in a dwelling unit, rooming unit, or dormitory.

Person: any individual, firm, corporation, association, partnership, cooperative, or government agency.

Plumbing: all of the following supplied facilities and equipment: gas pipes, gas-burning equipment, water pipes, garbage disposal units, waste pipes, toilets, sinks, installed dishwashers, bathtubs, shower baths, installed clothes washing machines, catch basins, drains, vents, and similarly supplied fixtures, and the installation thereof, together with all connections to water, sewer, or gas lines.

Privacy: the existence of conditions that will permit an individual or individuals to carry out an activity commenced without interruption or interference, either by sight or sound, by unwanted individuals.

Rat harborage: any conditions or place where rats can live, nest, or seek shelter.
Ratproofing: a form of construction that will prevent the entry or exit of rats to or from a given space or building, or prevent them from gaining access to food, water, or harborage. It consists of closing and keeping closed every opening in foundations, basements, cellars, exterior and interior walls, ground or first floors, roofs, sidewalk gratings, sidewalk openings, and other places that may be reached and entered by rats by climbing, burrowing, or other methods, by the use of materials impervious to rat gnawing and other methods approved by the appropriate authority.

Refuse: leftover and discarded organic and nonorganic solids (except body wastes), including garbage, rubbish, ashes, and dead animals.

Refuse container: a watertight container that is constructed of metal, or other durable material impervious to rodents, that is capable of being serviced without creating unsanitary conditions, or such other containers as have been approved by the appropriate authority (see also Appropriate authority). Openings into the container, such as covers and doors, shall be tight fitting.

Rooming house: any dwelling other than a hotel or motel, or that part of any dwelling, containing one or more rooming units, or one or more dormitory rooms, and in which persons either individually or as families are housed with or without meals being provided.

Rooming unit: any room or group of rooms forming a single habitable unit used or intended to be used for living and sleeping, but not for cooking purposes.

Rubbish: nonputrescible solid wastes (excluding ashes) consisting of either: (a) combustible wastes such as paper, cardboard, plastic containers, yard clippings, and wood; or (b) noncombustible wastes such as cans, glass, and crockery.

Safety: the condition of being reasonably free from danger and hazards that may cause accidents or disease.

Space heater: a self-contained heating appliance of either the convection type or the radiant type, intended primarily to heat only a limited space or area, such as one room or two adjoining rooms.

Supplied: paid for, furnished by, provided by, or under the control of the owner, operator, or agent.

System: the dynamic interrelationship of components designed to enact a vision.

Systems theory: the concept proposed to promote the dynamic interrelationship of activities designed to accomplish a unified system.

Temporary housing: any tent, trailer, mobile home, or other structure used for human shelter that is designed to be transportable and that is not attached to the ground, to another structure, or to any utility system on the same premises for more than 30 consecutive days.

Toxic substance: any chemical product applied on the surface of or incorporated into any structural or decorative material, or any other chemical, biologic, or physical agent in the home environment or its immediate surroundings, which constitutes a potential hazard to human health at acute or chronic exposure levels.

Variance: a difference between that which is required or specified and that which is permitted.

In addition to the standards and organizations listed in this section, the U.S. Justice Department enforces the requirements of the Americans with Disabilities Act (ADA) and assures that products fully comply with the provisions of the act to ensure equal access for physically challenged users.

# AWWA American Water Works Association, Promotes public health through improvement of the quality of water and develops standards for valves, fittings, and other equipment.
# CGA Canadian Gas Association, Develops standards, tests, and qualifies products used in gas lines and gas appliance installations.

# CPSC U.S. Consumer Product Safety Commission, Protects the public from unreasonable risks for serious injury or death from more than 15,000 types of consumer products. CPSC is committed to protecting consumers and families from products that pose a fire, electrical, chemical, or mechanical hazard or can injure children.

# CRBT Center for Resourceful Building Technology, Contains the online Guide to Resource-Efficient Building Elements, which provides information about environmentally efficient construction materials, including foundations, wall systems, panels, insulation, siding, roofing, doors, windows, interior finishing, and floor coverings.

# EPA U.S. Environmental Protection Agency, Protects human health and the environment.

# FM Factory Mutual, Develops standards and qualifies products for use by the general public and develops standards for materials, products, systems, and services.

# HFHI Habitat for Humanity International, Is a nonprofit, ecumenical Christian housing ministry. HFHI seeks to eliminate poverty housing and homelessness from the world and to make decent shelter a matter of conscience and action.

# ICBO The Uniform Building Code (UBC)/International Conference of Building Officials, Is the most widely adopted model building code in the world and is a proven document meeting the needs of government units charged with enforcement of building regulation. Published triennially, the UBC provides complete regulations covering all major aspects of building design and construction relating to fire and life safety and structural safety. The requirements reflect the latest technologic advances available in the building and fire- and life-safety industry.

# ICC International Code Council, Produces the most widely adopted and enforced building safety codes in the United States (I-Codes). The International Residential Code (IRC) 2003 has been adopted by many states, jurisdictions, and localities. The IRC also references several industry standards, such as ACI 318, ASCE 7, ASTM, and ANSI standards, that cover specific loads, load combinations, design methods, and material specifications.

# ISO International Organization for Standardization, Provides internationally recognized certification for manufacturers that comply with high standards of quality control; developed standards ISO-9000 through ISO-9004; and qualifies and lists products suitable for use in plumbing installations.

# MSS Manufacturers Standardization Society of the Valve and Fittings Industry, Inc., Develops technical codes and standards for the valve and fitting industry.

# NACHI The National Association of Certified Home Inspectors, Is the world's largest, most elite nonprofit inspection association.

# NAHB National Association of Home Builders, Is a trade association representing more than 220,000 residential home building and remodeling industry members. NAHB is affiliated with more than 800 state and local home builders associations around the country. NAHB urges codes and standards development and application that protect public health and safety without cost impacts that decrease affordability and consequently prevent people from moving into new, healthier, safer homes.

# NEC National Electrical Code, Protects public safety by establishing requirements for electrical wiring and equipment in virtually all buildings.
# NESC National Environmental Services Center, Is a repository for water, wastewater, solid waste, and environmental training research.

# NFPA National Fire Protection Association, Develops, publishes, and disseminates more than 300 consensus codes and standards intended to minimize the possibility and effects of fire and other risks.

# NOWRA National Onsite Wastewater Recycling Association, Provides leadership and promotes the onsite wastewater treatment and recycling industry through education, training, communication, and quality tools to support excellence in performance.

# NSF National Sanitation Foundation, Develops standards for equipment, products, and services; a nonprofit organization now known as NSF International.

# UL Underwriters Laboratories, Has developed more than 800 Standards for Safety. Millions of products and their components are tested to UL's rigorous safety standards.

# WEF Water Environment Federation, Is a not-for-profit technical and educational organization with members from varied disciplines who work toward the WEF vision of preservation and enhancement of the global water environment. The WEF network includes water quality professionals from 76 member associations in 30 countries.

# Executive Summary

The original Basic Housing Inspection manual was published in 1976 by the Center for Disease Control (now known as the Centers for Disease Control and Prevention). Its Foreword stated: "The growing numbers of new families and the increasing population in the United States have created a pressing demand for additional housing that is conducive to healthful living. These demands are increased by the continuing loss of existing housing through deterioration resulting from age and poor maintenance. Large numbers of communities in the past few years have adopted housing codes and initiated code enforcement programs to prevent further deterioration of existing housing units. This growth in housing activities has caused a serious problem for communities in obtaining qualified personnel to provide the array of housing service needed, such as information, counseling, technical advice, inspections, and enforcement. As a result many agencies throughout the country are conducting comprehensive housing inspection training courses. This publication has been designed to be an integral part of these training sessions." The original Basic Housing Inspection manual has been used successfully for several decades by public health and housing personnel across the United States. Although much has changed in the field of housing construction and maintenance, and health and safety issues have expanded, the manual continues to have value, especially as it relates to older housing. Many housing deficiencies affect health and safety. For example, lead-based paint and dust may contribute to lead poisoning in children; water leakage and mold may contribute to asthma episodes; improper use and storage of pesticides may result in unintentional poisoning; and the lack of working smoke, ionization, and carbon monoxide alarms may lead to serious injury and death. Government agencies have been very responsive to "healthy homes" issues. The U.S. Department of Housing and Urban Development (HUD) created an office with an exclusive focus on healthy homes. In 2003, CDC joined HUD in the effort to improve housing conditions through the training of environmental health practitioners, public health nurses, housing specialists, and others who have interest in and responsibility for creating healthy homes.
The revised Basic Housing Inspection manual, renamed the Healthy Housing Reference Manual, responds to the enormous changes that have occurred in housing construction methods and materials and to new knowledge about the impact of housing on health and safety. New chapters have been added, making the manual more comprehensive. For example, an entire chapter is devoted to rural water supplies and on-site wastewater treatment, and a new chapter discusses issues related to residential swimming pools and spas. At more than 200 pages, the comprehensive revised manual is designed primarily as a reference document for public health and housing professionals who work in government and industry. The Healthy Housing Reference Manual contains 14 chapters, each with a specific focus. All chapters contain annotated references and a listing of sources for additional topic information. A summary of the content of each chapter follows:

Chapter One, Housing History and Purpose, describes the history of dwellings and urbanization and housing trends during the last century.

Chapter Two, Basic Principles of Healthy Housing, describes the basic principles of healthy housing and safety (physiologic needs, psychologic needs, and protection against injury and disease) and lays the groundwork for the following chapters.

Chapter Three, Housing Regulations, reviews the history of housing regulations, followed by a discussion of zoning, housing, and building codes.

Chapter Four, Disease Vectors and Pests, provides a detailed analysis of disease vectors that have an impact on residences. It includes information on the management of mice, rats, cockroaches, fleas, flies, termites, and fire ants.

The quality of housing plays a decisive role in the health status of its occupants. Substandard housing conditions have been linked to adverse health effects such as childhood lead poisoning, asthma and other respiratory conditions, and unintentional injuries. This new and revised Healthy Housing Reference Manual is an important reference for anyone with responsibility for and interest in creating and maintaining healthy housing. The housing design and construction industry has made great progress in recent years through the development of new innovative techniques, materials technologies, and products. The HUD Rehab Guide series was developed to inform the design and construction industry about state-of-the-art materials and innovative practices in housing rehabilitation. The series focuses on building technologies, materials, components, and techniques rather than on projects such as adding a new room. The nine volumes each cover a distinct element of housing rehabilitation and feature breakthrough materials, labor-saving tools, and cost-cutting practices. The nine volumes address foundations; exterior walls; roofs; windows and doors; partitions, ceilings, floors, and stairs; kitchens and baths; electrical/electronics; heating, air conditioning, and ventilation; plumbing; and site work. Additional information about the series can be found at and . This series is an excellent adjunct to the Healthy Housing Reference Manual.

# Introduction

The term "shelter," which is often used to define housing, has a strong connection to the ultimate purpose of housing throughout the world. The mental image of a shelter is of a safe, secure place that provides both privacy and protection from the elements and the temperature extremes of the outside world. This vision of shelter, however, is complex.
The earthquake in Bam, Iran, before dawn on December 26, 2003, killed more than 30,000 people, most of whom were sleeping in their homes. Although the homes were made of the simplest construction materials, many were well over a thousand years old. Living in a home where generation after generation had been raised should provide an enormous sense of security. Nevertheless, the world press has repeatedly implied that the construction of these homes made this disaster inevitable. The homes in Iran were constructed of sun-dried mud-brick and mud.

We should think of our homes as a legacy to future generations and consider the negative environmental effects of building them to serve only one or two generations before razing or reconstructing them. Homes should be built for sustainability and for ease of future modification. We need to learn the lessons of the earthquake in Iran, as well as of the 2003 heat wave in France, which killed more than 15,000 people who lacked climate control systems in their homes. We must use our experience, history, and knowledge of both engineering and human health needs to construct housing that meets the need for privacy, comfort, recreation, and health maintenance.

Health, home construction, and home maintenance are inseparable because of their overlapping goals. Many highly trained individuals must work together to achieve quality, safe, and healthy housing. Contractors, builders, code inspectors, housing inspectors, environmental health officers, injury control specialists, and epidemiologists all are indispensable to achieving the goal of the best housing in the world for U.S. citizens. This goal is the basis for the collaboration of the U.S. Department of Housing and Urban Development (HUD) and the Centers for Disease Control and Prevention (CDC).

# Preurban Housing
The housing similarities among civilizations separated by vast distances may have been a result of a shared heritage, common influences, or chance. Caves were accepted as dwellings, perhaps because they were ready made and required little or no construction. However, in areas with no caves, simple shelters were constructed and adapted to the availability of resources and the needs of the population. Classification systems have been developed to demonstrate how dwelling types evolved in preurban indigenous settings.

# Ephemeral Dwellings
Ephemeral dwellings, also known as transient dwellings, were typical of nomadic peoples. The African bushmen and Australia's aborigines are examples of societies whose existence depends on an economy of hunting and food gathering in its simple form. Habitation of an ephemeral dwelling is generally a matter of days.

# Episodic Dwellings
Episodic housing is exemplified by the Inuit igloo, the tents of the Tungus of eastern Siberia, and the very similar tents of the Lapps of northern Europe. These groups are more sophisticated than those living in ephemeral dwellings, tend to be more skilled in hunting or fishing, inhabit a dwelling for a period of weeks, and have a greater effect on the environment. These groups also construct communal housing and often practice slash-and-burn cultivation, which is the least productive use of cropland and has a greater environmental impact than the hunting and gathering of ephemeral dwellers.

# Periodic Dwellings
Periodic dwellings are also defined as regular temporary dwellings used by nomadic tribal societies living in a pastoral economy.
This type of housing is reflected in the yurt used by the Mongolian and Kirgizian groups and the Bedouins of North Africa and western Asia. These groups' dwellings essentially demonstrate the next step in the evolution of housing, which is linked to societal development. Pastoral nomads are distinguished from people living in episodic dwellings by their homogenous cultures and the beginnings of political organization. Their environmental impact increases with their increased dependence on agriculture rather than livestock.

# Seasonal Dwellings
Schoenauer describes seasonal dwellings as reflective of societies that are tribal in nature, seminomadic, and based on agricultural pursuits that are both pastoral and marginal. Housing used by seminomads for several months or for a season can be considered semisedentary and reflective of the advancement of the concept of property, which is lacking in the preceding societies. This concept of property is primarily of communal property, as opposed to individual or personal property. This type of housing is found in diverse environmental conditions and is demonstrated in North America by the hogans and ramadas of the Navajo Indians. Similar housing can be found in Tanzania (Barabaig) and in Kenya and Tanzania (Masai).

# Semipermanent Dwellings
According to Schoenauer, sedentary folk societies or hoe peasants practicing subsistence agriculture by cultivating staple crops use semipermanent dwellings. These groups tend to live in their dwellings for varying lengths of time, usually years, as determined by their crop yields. When land needs to lie fallow, they move to more fertile areas. Groups in the Americas that used semipermanent dwellings included the Mayans, with their oval houses, and the Hopi, Zuni, and Acoma Indians in the southwestern United States, with their pueblos.

# Permanent Dwellings
The homes of sedentary agricultural societies, whose political and social organizations are defined as nations and who possess surplus agricultural products, exemplify this type of dwelling. Surplus agricultural products allowed the division of labor and the introduction of pursuits other than food production; however, agriculture remained the primary occupation for a significant portion of the population. Although they occurred at different points in time, examples of early sedentary agricultural housing can be found in English cottages, such as the Suffolk, Cornwall, and Kent cottages.

# Urbanization
Permanent dwellings went beyond simply providing shelter and protection and moved to the consideration of comfort. These structures began to find their way into what is now known as the urban setting. The earliest available evidence suggests that towns came into existence around 4000 BC. Thus began the social and public health problems that would increase as the population of cities grew in number and in sophistication. In preurban housing, the sparse concentration of people allowed for movement away from human pollution or allowed the dilution of pollution at its location. The movement of populations into urban settings placed individuals in close proximity, without the benefit of previous linkages and without the ability to relocate away from pollution or other people. Urbanization was relatively slow to begin, but once started, it accelerated rapidly. In the 1800s, only about 3% of the world's population could be found in urban settings of more than 5,000 people. This was soon to change.
The year 1900 saw the percentage increase to 13.6%, and by 1950 it had reached 29.8%. The world's urban population has grown since that time. By 1975, more than one in three of the world's people lived in an urban setting, and by 1997 almost one in two did. Industrialized countries currently find approximately 75% of their population in urban settings. The United Nations projects that in 2015 the world's urban population will rise to approximately 55% and that in industrialized nations it will rise to just over 80%.

In the Western world, one of the primary forces driving urbanization was the Industrial Revolution. The basic source of energy in the earliest phase of the Industrial Revolution was water provided by flowing rivers. Therefore, towns and cities grew next to the great waterways. Factory buildings were of wood and stone and matched the houses in which the workers lived, both in construction and in location. Workers' homes were little different in the urban setting than the agricultural homes from whence they came. However, living close to the workplace was a definite advantage for the worker of the time.

When the power source for factories changed from water to coal, steam became the driver and the construction materials became brick and cast iron, which later evolved into steel. Increasing populations in cities and towns increased social problems in overcrowded slums. The lack of inexpensive, rapid public transportation forced many workers to live close to their work. These factory areas were not the pastoral areas with which many were familiar, but were bleak with smoke and other pollutants.

The inhabitants of rural areas migrated to ever-expanding cities looking for work. Between 1861 and 1911, the population of England grew by 80%. The cities and towns of England were woefully unprepared to cope with the resulting environmental problems, such as the lack of potable water and insufficient sewerage. In this atmosphere, cholera was rampant, and death rates resembled those of Third World countries today. Children had a one in six chance of dying before the age of 1 year.

Because of urban housing problems, social reformers such as Edwin Chadwick began to appear. Chadwick's Report on an Enquiry into the Sanitary Condition of the Labouring Population of Great Britain and on the Means of its Improvement sought many reforms, some of which concerned building ventilation and open spaces around the buildings. However, Chadwick's primary contention was that the health of the working classes could be improved by proper street cleaning, drainage, sewage disposal, ventilation, and water supplies.

In the United States, Shattuck et al. wrote the Report of the Sanitary Commission of Massachusetts, which was printed in 1850. The report made 50 recommendations. Among those related to housing and building issues were recommendations for protecting schoolchildren through the ventilation and sanitation of school buildings, emphasizing town planning, and controlling overcrowded tenements and cellar dwellings. Figure 1.1 demonstrates the conditions common in the tenements.

In 1845, Dr. John H. Griscom, the City Inspector of New York, published The Sanitary Condition of the Laboring Population of New York. His document once again argued for housing reform and sanitation. Griscom is credited with being the first to use the phrase "how the other half lives."
During this time, the poor not only were subjected to the physical problems of poor housing, but also were victimized by corrupt landlords and builders.

# Trends in Housing
The term "tenement house" was first used in America and dates from the mid-19th century. It was often intertwined with the term "slum." Wright notes that in English, tenement meant "an abode for a person or for the soul, when someone else owned the property." Slum, on the other hand, initially was used at the beginning of the 19th century as a slang term for a room. By the middle of the century, slum had evolved into a term for a back dwelling occupied by the lowest members of society. Von Hoffman states that this term had, by the end of the century, begun to be used interchangeably with tenement.

The author noted that in the larger cities of the United States, the apartment house emerged in the 1830s as a housing unit of two to five stories, with each story containing apartments of two to four rooms. It was originally built for the upper group of the working class. The tenement house emerged in the 1830s when landlords converted warehouses into inexpensive housing designed to accommodate Irish and black workers. Additionally, existing large homes were subdivided and new structures were added, creating rear houses and, in the process, eliminating the traditional gardens and yards behind them. These rear houses, although new, were no healthier than the front houses, often housing up to 10 families. When this strategy became inadequate to satisfy demand, the epoch of the tenements began. Although unpopular, the tenement house grew in numbers, and, by 1850 in New York and Boston, each tenement housed an average of 65 people.

During the 1850s, the railroad house or railroad tenement was introduced. This structure was a solid, rectangular block with a narrow alley in the back. The structure was typically 90 feet long and had 12 to 16 rooms, each about 6 feet by 6 feet and holding around four people. The facility allowed no direct light or air into rooms except those facing the street or alley. A lack of hallways eliminated any semblance of privacy for the tenants. Open sewers, a single privy in the back of the building, and uncollected garbage resulted in an objectionable and unhygienic place to live. Additionally, the wood construction common at the time, coupled with coal and wood heating, made fire an ever-present danger. As a result of a series of tenement fires in 1860 in New York, such terms as death-trap and fire-trap were coined to describe the poorly constructed living facilities.

The last two decades of the 19th century saw the introduction and development of dumbbell tenements, a front and rear tenement connected by a long hall. These tenements were typically five stories, with a basement and no elevator (elevators were not required for any building of fewer than six stories). Dumbbell tenements, like other tenements, were unaesthetic and unhealthy places to live. Garbage was often thrown down the airshafts, natural light was confined to the first-floor hallway, and the public hallways contained only one or two toilets and a sink. This lack of sanitary facilities was compounded by the fact that many families took in boarders to help with expenses. In fact, 44,000 families rented space to boarders in New York in 1890, increasing to 164,000 families in 1910.
In the early 1890s, New York had a population of more than 1 million, of whom 70% were residents of multifamily dwellings. Of this group, 80% lived in tenements, mostly dumbbell tenements. The passage of the New York Tenement House Act of 1901 spelled the end of the dumbbells and the acceptance of a new tenement type developed in the 1890s: the park or central court tenement, which was distinguished by a park or open space in the middle of a group of buildings. This design was implemented to reduce the activity on the front street and to enhance the opportunity for fresh air and recreation in the courtyard. The design often included roof playgrounds, kindergartens, communal laundries, and stairways on the courtyard side.

Although the tenements did not go away, reform groups supported ideas such as suburban cottages for the working class. These cottages were two-story brick and timber, with a porch and a gabled roof. According to Wright, a Brooklyn project called Homewood consisted of 53 acres of homes in a planned neighborhood from which multifamily dwellings, saloons, and factories were banned.

Although there were many large homes for the well-to-do, single homes for the not-so-wealthy were not abundant. The first small house designed for the individual of modest means was the bungalow. According to Schoenauer, bungalows originated in India. The bungalow was introduced into the United States in 1880 with the construction of a home in Cape Cod. The bungalow, derived from designs for tropical climates, was especially popular in California.

Company towns were another trend in housing in the 19th century. George Pullman, who built railway cars in the 1880s, and John H. Patterson, of the National Cash Register Company, developed notable company towns. Wright notes that in 1917 the U.S. Bureau of Labor Standards estimated that at least 1,000 industrial firms were providing housing for their employees. The provision of housing was not necessarily altruistic; the motivation varied from company to company. Such motivations included the use of housing as a recruitment incentive for skilled workers, a method of linking the individual to the company, and a belief that a better home life would make employees happier and more productive in their jobs. Some companies, such as Firestone and Goodyear, went beyond the company town and allowed their employees to obtain loans for homes from company-established banks. A prime motivator of company town planning was sanitation, because maintaining workers' health could potentially lead to fewer workdays lost to illness. Thus, in the development of the town, significant consideration was given to sanitary issues such as window screens, sewage treatment, drainage, and water supplies.

Before World War I there was a shortage of adequate dwellings. Even after World War I, insufficient funding, a shortage of skilled labor, and a dearth of building materials compounded the problem. However, the design of homes after the war was driven in part by health considerations, such as providing good ventilation, sun orientation and exposure, potable pressurized water, and at least one private toilet. Schoenauer notes that, during the postwar years, the improved mobility of the public led to an increase in the growth of suburban areas, exemplified by the detached and sumptuous communities outside New York, such as Oyster Bay.
In the meantime, the conditions of working populations, consisting of many immigrants, began to improve with the improving economy of the 1920s. The garden apartment became popular. These units were well lighted and ventilated and had a courtyard, which was open to all and well maintained.

Immediately after World War I and during the 1920s, city population growth was outpaced by population growth in the suburbs by a factor of two. The focus at the time was on the single-family suburban dwelling. The 1920s were a time of growth, but the decade that followed the onset of the Great Depression in 1929 was one of deflation, cessation of building, loss of mortgage financing, and the plunge into unemployment of large numbers of building trade workers. Additionally, 1.5 million home loans were foreclosed during this period. In 1936, the housing market began to make a comeback; however, the 1930s would come to be known as the beginning of public housing, with increased public involvement in housing construction, as demonstrated by the many laws passed during the era.

The National Housing Act was passed by Congress in 1934 and set up the Federal Housing Administration. This agency encouraged banks, building and loan associations, and others to make loans for building homes, small business establishments, and farm buildings. If the Federal Housing Administration approved the plans, it would insure the loan. In 1937, Congress passed another National Housing Act that enabled the Federal Housing Administration to take control of slum clearance. It made 60-year loans at low interest to local governments to help them build apartment blocks. Rents in these homes were fixed and were available only to low-income families. By 1941, the agency had assisted in the construction of more than 120,000 family units.

During World War II, the focus of home building was on housing for workers who were involved in the war effort. Homes were built through federal agencies such as the Federal Housing Administration, formed in 1934 and transferred to HUD in 1965.

According to the U.S. Census Bureau (USCB), in the years since World War II, the types of homes Americans live in have changed dramatically. In 1940, most homes were considered attached houses (row houses, townhouses, and duplexes). Small apartment houses with two to four apartments had their zenith in the 1950s. In the 1960 census, two-thirds of the housing inventory was made up of one-family detached houses, a share that declined to less than 60% in the 1990 census.

The postwar years saw the expansion of suburban housing, led by William J. Levitt's Levittown on Long Island, which had a strong influence on postwar building and initiated the subdivisions and tract houses of the following decades (Figure 1.2). The 1950s and 1960s saw continued suburban development, with the growing ease of transportation marked by the expansion of the interstate highway system. As the cost of housing began to increase as a result of increased demand, a grassroots movement to provide adequate housing for the poor began to emerge. According to Wright, in the 1970s only about 25% of the population could afford a $35,000 home. According to Gaillard, Koinonia Partners, a religious organization founded in 1942 by Clarence Jordan near Albany, Georgia, was the seed for Habitat for Humanity.
Habitat for Humanity, founded in 1976 by Millard Fuller, is known for its international efforts and has constructed more than 150,000 houses in 80 countries; 50,000 of these houses are in the United States. The homes are energy-efficient and environmentally friendly to conserve resources and reduce long-term costs to the homeowners.

Builders also began promoting one-floor minihomes and no-frills homes of approximately 900 to 1,200 square feet. Manufactured housing began to increase in popularity, with mobile home manufacturers becoming some of the most profitable corporations in the United States in the early 1970s. In the 1940 census, manufactured housing was lumped into the "other" category with boats and tourist cabins; by the 1990 census, manufactured housing made up 7% of the total housing inventory. Many communities ban manufactured housing from residential neighborhoods. According to Hart et al., nearly 30% of all home sales nationwide are of manufactured housing, and more than 90% of those homes are never moved once they are anchored. According to a 2001 industry report, the demand for prefabricated housing was expected to increase by more than 3% annually, to $20 billion in 2005, with most units being manufactured homes. The largest market was expected to remain the southern United States, with the most rapid growth occurring in the West. As of 2000, five manufactured-home producers, representing 35% of the market, dominated the industry.

This industry, over the past 20 to 25 years, has been shaped by two pieces of federal legislation. The first, the Mobile Home Construction and Safety Standards Act, adopted by HUD in 1974, was passed to aid consumers through regulation and enforcement of HUD design and construction standards for manufactured homes. The second, the 1980 Housing Act, required the federal government to change the term "mobile home" to "manufactured housing" in all federal laws and literature. One of the prime reasons for this change was that these homes were in reality no longer mobile in the true sense.

The energy crisis in the United States between 1973 and 1974 had a major effect on the way Americans lived, drove, and built their homes. The high cost of both heating and cooling homes required action, and some of the action taken was ill-advised or failed to consider healthy housing concerns. Sealing homes for energy efficiency and using untried insulation materials and other energy conservation measures often resulted in major, and sometimes dangerous, buildups of indoor air pollutants, both in homes and in offices. Off-gassing building materials containing urea-formaldehyde, vinyl, and other new plastic surfaces, new glues, and even wallpapers created toxic environments. Because these newly sealed environments were not refreshed with makeup air, chemical and biologic pollutants accumulated, along with moisture that led to mold growth, representing new threats to both short-term and long-term health. The results of these actions are still being dealt with today.

# Introduction
It seems obvious that health is related to where people live. People spend 50% or more of every day inside their homes. Consequently, it makes sense that the housing environment constitutes one of the major influences on health and well-being.
Many of the basic principles of the link between housing and health were elucidated more than 60 years ago by the American Public Health Association (APHA) Committee on the Hygiene of Housing. After World War II, political scientists, sociologists, and others became interested in the relation between housing and health, mostly as an outgrowth of concern over poor housing conditions resulting from the massive influx into American cities of veterans looking for jobs. Now, at the beginning of the 21st century, there is a growing awareness that health is linked not only to the physical structure of a housing unit, but also to the neighborhood and community in which the house is located.

According to Ehlers and Steel, in 1938 a Committee on the Hygiene of Housing, appointed by APHA, created the Basic Principles of Healthful Housing, which provided guidance regarding the fundamental needs of humans as they relate to housing. These fundamental needs include physiologic and psychologic needs, protection against disease, protection against injury, protection against fire and electrical shock, and protection against toxic and explosive gases.

# Fundamental Physiologic Needs
Housing should provide for the following physiologic needs:

1. protection from the elements,
2. a thermal environment that will avoid undue heat loss,
3. a thermal environment that will permit adequate heat loss from the body,
4. an atmosphere of reasonable chemical purity,
5. adequate natural illumination and avoidance of undue daylight glare,
6. admission of direct sunlight,
7. adequate artificial illumination and avoidance of glare,
8. protection against excessive noise, and
9. adequate space for exercise and for the play of children.

Older people are at particular risk for heat-related illness, and several common mistakes increase that risk: overdressing (older people, because they may not feel the heat, may not dress appropriately in hot weather); visiting overcrowded places (trips should be scheduled during nonrush-hour times, and participation in special events should be carefully planned to avoid disease transmission); and not checking weather conditions (older people, particularly those at special risk, should stay indoors on especially hot and humid days, particularly when an air pollution alert is in effect).

USCB reported that about 75% of homes in the United States used either utility gas or electricity for heating, with utility gas accounting for about 50%. This, of course, varies by region of the country, depending on the availability of hydroelectric power. This compares with the 1940 census, which found that three-quarters of all households heated with coal or wood. Electric heat was so rare that it was not even an option on the census form of 1940. Today, coal has virtually disappeared as a household fuel. Wood had all but disappeared as a heating fuel by 1970, but made a modest comeback, reaching 4% nationally by 1990. This move over time to more flexible fuels allows a majority of today's homes to maintain healthy temperatures, although many houses still lack adequate insulation.

The fifth through the seventh physiologic concerns address adequate illumination, both natural and artificial. Research has revealed a strong relationship between light and human physiology. The effects of light on both the human eye and human skin are notable. According to Zilber, one of the physiologic responses of the skin to sunlight is the production of vitamin D. Light allows us to see. It also affects body rhythms and psychologic health. Individuals are affected daily by both the natural and the artificial lighting levels in their homes. Adequate lighting is important in allowing people to see unsanitary conditions and to prevent injury, thus contributing to a healthier and safer environment. Improper indoor lighting can also contribute to eyestrain from inadequate illumination, glare, and flicker.
Avoiding excessive noise (the eighth physiologic concern) is important in the 21st century, but the concept of noise pollution is not new. Two thousand years ago, Julius Caesar banned chariots from traveling the streets of Rome late at night. In the 19th century, numerous towns and cities prohibited the ringing of church bells. In the early 20th century, London prohibited church bells from ringing between 9:00 PM and 9:00 AM. In 1929, New York City formed a Noise Abatement Commission that was charged with evaluating noise issues and suggesting solutions. At that time, it was concluded that loud noise affected health and productivity. In 1930, this same commission determined that constant exposure to loud noise could affect worker efficiency and long-term hearing levels.

In 1974, the U.S. Environmental Protection Agency (EPA) produced a document titled Information on Levels of Environmental Noise Requisite to Protect Public Health and Welfare With an Adequate Margin of Safety. This document identified maximum levels of 55 decibels outdoors and 45 decibels indoors to prevent interference with activities, and 70 decibels for all areas to prevent hearing loss. In 1990, the United Kingdom implemented The Household Appliances (Noise Emission) Regulations to help control indoor noise from modern appliances.

Noise has physiologic impacts aside from its potential to reduce hearing ability. According to the American Speech-Language-Hearing Association, these effects include elevated blood pressure; negative cardiovascular effects; increased breathing rates; digestion and stomach disturbances; ulcers; negative effects on developing fetuses; difficulty sleeping after the noise stops; and intensification of the effects of drugs, alcohol, aging, and carbon monoxide. In addition, noise can reduce attention to tasks and impede speech communication. Finally, noise can hamper the performance of daily tasks, increase fatigue, and cause irritability.

Household noise can be controlled in various ways. Approaching the problem during initial construction is the simplest, but has not become popular. For example, in early 2003, only about 30% of homebuilders offered sound-attenuating blankets for interior walls. A sound-attenuating blanket is a lining of noise abatement products (the thickness depends on the material being used). Spray-in-place soft foam insulation can also be used as a sound dampener, as can special walking mats for floors. Actions that can help reduce household noise include installing new, quieter appliances and isolating washing machines to reduce noise from the machines and from water passing through pipes.

The ninth and final physiologic need is adequate space for exercise and play. Before industrialization in the United States and England, a preponderance of the population lived and worked in rural areas with ample space for exercise and play. As industrialization changed demographics, more people lived in cities without ample space for play and exercise. In the 19th century, society responded with the development of playgrounds and public parks. Healthful housing should include safe play and exercise areas. Many American neighborhoods are severely deficient, with no area for children to play safely. New residential areas often lack sidewalks and street lighting, and essential services often cannot be reached on foot because of highway and road configurations.

# Fundamental Psychologic Needs
Seven fundamental psychologic needs for healthy housing include the following:
1. adequate privacy for the individual,
2. opportunities for normal family life,
3. opportunities for normal community life,
4. facilities that make possible the performance of household tasks without undue physical and mental fatigue,
5. facilities for maintenance of cleanliness of the dwelling and of the person,
6. possibilities for aesthetic satisfaction in the home and its surroundings, and
7. concordance with prevailing social standards of the local community.

Privacy is a necessity to most people, to some degree and during some periods. The increase in house size and the diminishing family size have, in many instances, increased the availability of privacy. Ideally, everyone would have their own room or, if that were not possible, would share a bedroom with only one person of the same sex, excepting married couples and small children. Psychiatrists consider it important for children older than 2 years to have bedrooms separate from their parents. In addition, bedrooms and bathrooms should be accessible directly from halls or living rooms and not through other bedrooms. Beyond the psychologic value of privacy, repeated studies have shown that the lack of space and quiet caused by crowding can lead to poor school performance in children.

Coupled with a natural desire for privacy is the social desire for normal family and community life. A wholesome atmosphere requires adequate living room space and adequate space for withdrawal elsewhere during periods of entertainment. This accessibility extends beyond the walls of the home and includes easy communication with centers of culture and business, such as schools, churches, entertainment, shopping, libraries, and medical services.

# Protection Against Disease
Eight ways to protect against contaminants include the following:

1. provide a safe and sanitary water supply;
2. protect the water supply system against pollution;
3. provide toilet facilities that minimize the danger of transmitting disease;
4. protect against sewage contamination of the interior surfaces of the dwelling;
5. avoid unsanitary conditions near the dwelling;
6. exclude vermin from the dwelling, which may play a part in transmitting disease;
7. provide facilities for keeping milk and food fresh; and
8. allow sufficient space in sleeping rooms to minimize the danger of contact infection.

According to the U.S. EPA, there are approximately 160,000 public or community drinking water systems in the United States. The current estimate is that 42 million Americans (mostly in rural America) get their water from private wells or other small, unregulated water systems. The presence of adequate water, sewer, and plumbing facilities is central to the prevention, reduction, and possible elimination of water-related diseases. According to the Population Information Program, water-related diseases can be organized into four categories:

- waterborne diseases, including those caused by fecal-oral organisms and those caused by toxic substances;
- water-based diseases;
- water-related vector diseases; and
- water-scarce diseases.

Numerous studies link improvements in sanitation and the provision of potable water with significant reductions in morbidity and mortality from water-related diseases. According to studies from the 1980s, clean water and sanitation facilities have been shown to reduce infant and child mortality by as much as 55% in Third World countries.
Waterborne diseases, often referred to as "dirty-water" diseases, are the result of contamination by chemical, human, and animal wastes. Specific diseases in this group include cholera, typhoid, shigellosis, polio, meningitis, and hepatitis A and E.

Water-based diseases are caused by aquatic organisms that spend part of their life cycle in the water and another part as parasites of animals. Although rare in the United States, these diseases include dracunculiasis, paragonimiasis, clonorchiasis, and schistosomiasis. The reduction in these diseases in many countries has not only decreased rates of illness and death, but has also increased productivity by reducing days lost from work.

Water-related diseases are linked to vectors that breed and live in or near polluted and unpolluted water. These vectors are primarily mosquitoes that infect people with the disease agents of malaria, yellow fever, dengue fever, and filariasis. Although the control of vectorborne diseases is a complex matter, in the United States most of the control effort has focused on controlling habitat and breeding areas for the vectors and on reducing and controlling human cases of disease that can serve as hosts for the vector. Vectorborne diseases have recently become more of a concern in the United States with the importation of West Nile virus. West Nile virus is transmitted when a mosquito takes a blood meal from an infected bird and subsequently bites an incidental host, such as a human, dog, cat, horse, or other vertebrate. Human cases of West Nile virus in 2003 numbered 9,862, with 264 deaths.

Finally, water-scarce diseases flourish where sanitation is poor because fresh water is scarce. Diseases in this category include diphtheria, leprosy, whooping cough, tetanus, tuberculosis, and trachoma. These diseases are often transmitted when the supply of fresh water is inadequate for hand washing and basic hygiene. Such conditions are still rampant in much of the world but are essentially absent from the United States because of the extensive availability of potable drinking water.

In 2000, USCB reported that 1.4% of U.S. homes lacked plumbing facilities. This differs greatly from the 1940 census, when nearly one-half of U.S. homes lacked complete plumbing. The proportion has continually dropped, falling to about one-third in 1950 and one-sixth in 1960. Complete plumbing facilities are defined as hot and cold piped water, a bathtub or shower, and a flush toilet.

The containment of household sewage is instrumental in protecting the public from waterborne and vectorborne diseases. The 1940 census revealed that more than a third of U.S. homes had no flush toilet; in some states, 70% of homes lacked one. Of the 13 million housing units at the time without flush toilets, 11.8 million (90.7%) had an outside toilet or privy, another 1 million (7.6%) had no toilet or privy, and the remainder had a nonflush toilet in the structure. In contrast, the 2000 census data demonstrate the great progress that has been made in providing sanitary sewer facilities. Nationally, 74.8% of homes are served by a public sewer, 24.1% are served by a septic tank or cesspool, and the remaining 1.1% use other means.

Vermin, such as rodents, have long been linked to property destruction and disease. Integrated pest management, along with proper housing construction, has played a significant role in reducing vermin around the modern home.
Proper food storage, rat-proof construction, and good sanitation outside the home have served to eliminate or reduce rodent problems in the 21st-century home.

Facilities to properly store milk and food have not only been instrumental in reducing the incidence of some foodborne diseases, but have also significantly changed the diet in developed countries. Refrigeration can be traced to the ancient Chinese, Hebrews, Greeks, and Romans. In the last 150 years, great strides have been made in using refrigeration to preserve and cool food. In the early 1800s, natural ice was harvested for use as a coolant and preserver of food, and vapor compression using air and, subsequently, ammonia as a coolant was first developed in the 1850s. By the late 1870s, there were 35 commercial ice plants in the United States; by 1909, there were 2,000. However, as early as the 1890s, sources of natural ice began to be a problem as a result of pollution and sewage dumped into bodies of water, so the use of natural ice as a refrigerant began to present a health problem. Mechanical manufacture of ice provided a temporary solution, which eventually gave way to mechanical refrigeration.

Refrigeration was first used by the brewing and meatpacking industries, but most households had iceboxes (Figure 2.1), which made the ice wagon a popular icon of the late 1800s and early 1900s. In 1915, the first refrigerator, the Guardian, was introduced. This unit was the predecessor of the Frigidaire. The refrigerator became as necessary to the household as a stove or sewing machine. By 1937, nearly 6 million refrigerators were manufactured in the United States. By 1950, more than 80% of American farms and more than 90% of urban homes had a refrigerator.

Adequate living and sleeping space is also important in protecting against contagion. It is an issue not only of privacy, but of adequate room to reduce the potential for the transmission of disease. Much improvement has been made in the adequacy of living space for the U.S. family over the last 30 years. According to USCB, the average size of a new single-family home increased from 1,500 square feet in 1970 to 2,266 square feet in 2000. USCB reports that slightly less than 5% of U.S. homes were considered crowded in 1990; that is, they had more than one person per room. However, this is an increase over the 1980 census, when the figure was 4.5%, and it is the only increase since the first housing census in 1940, when one in five homes was crowded. During the 1940 census, most crowded homes were found in southern states, primarily in the rural South. Crowding is now concentrated in a few large urban areas, with more than one-fourth of all crowded units located in four metropolitan areas: Houston, Los Angeles, Miami, and New York. The rate for California did not change significantly between 1940 (13%) and 1990 (12%). Excessive crowding in homes has the potential to increase not only communicable disease transmission, but also the stress level of occupants, because modern urban individuals spend considerably more time indoors than did their 1940s counterparts.

# Protection Against Injury
A major provision for safe housing construction is developing and implementing building codes.
According to the International Code Council (ICC) one- and two-family dwelling code, the purpose of building codes is to provide minimum standards for the protection of life, limb, property, and environment and for the safety and welfare of the consumer, general public, and the owners and occupants of residential buildings regulated by the code. However, as with all types of codes, the development of innovative processes and products must be allowed a place in improving construction technology. Thus, according to the same code, building codes are not intended to limit the appropriate use of materials, appliances, equipment, or methods of design or construction not specifically prescribed by the code, if the building official determines that the proposed alternative is at least the equivalent of that prescribed in the code. Although the details of what a code should include are beyond the scope of this section, additional information is available from the ICC, an organization formed by the consolidation of Building Officials and Code Administrators International, Southern Building Code Congress International, Inc., and the International Conference of Building Officials.

According to the Home Safety Council (HSC), the leading causes of home injury deaths in 1998 were falls and poisonings, which accounted for 6,756 and 5,758 deaths, respectively. As expected, the rates and national estimates of the number of fall deaths were highest among those older than 64 years, and stairs or steps were associated with 17% of fall deaths. Overall, falls were the leading cause of nonfatal, unintentional injuries occurring at home, accounting for 5.6 million injuries. Similarly, the consumer products most often associated with emergency department visits were stairs and steps, accounting for 854,631 visits, and floors, accounting for 556,800 visits.

A national survey by HSC found that one-third of all households with stairs did not have banisters or handrails on at least one set of stairs. Homes with older persons were more likely to have banisters or handrails than were homes where young children live or visit. The survey also revealed that 48% of households have windows on the second floor or above, but only 25% have window locks or bars to prevent children from falling out. Bathtub mats or nonskid strips to reduce bathtub falls were used in 63% of American households; in senior households (age 70 years and older), 79% used mats or nonskid strips. Nineteen percent of the homes surveyed had grab bars to supplement the mats and strips. Significantly, only 39% of the group most susceptible to falls (people aged 70 years and older) used both nonskid surfaces and grab bars.

# Protection Against Fire
An important component of safe housing is controlling conditions that promote the initiation and spread of fire. Since 1992, an average of 4,266 Americans have died annually in fires, and nearly 25,000 have been injured. This fact and the following information from the United States Fire Administration (USFA) demonstrate the impact of fire safety, and of the lack of it, in the United States. The United States has one of the highest fire death rates in the industrialized world, at 13.4 deaths per million people. At least 80% of all fire deaths occur in residences.
Residential fires account for 23% of all fires and 76% of structure fires. In one- and two-family dwellings, fires start in the kitchen 25.5% of the time and in the bedroom 13.7% of the time. Apartment fires also most often start in the kitchen, but at almost twice the rate (48.5%), with bedrooms again the second most common origin at 13.4%. These USFA statistics also show that cooking is the leading cause of home fires, usually as a result of unattended cooking and human error rather than mechanical failure of the cooking units. The leading cause of fire deaths in homes is careless smoking, which can be significantly deterred by smoke alarms and smolder-resistant bedding and upholstered furniture. Heating system fires tend to be a larger problem in single-family homes than in apartments, because the heating systems in single-family homes frequently are not professionally maintained.

A number of conditions in the household can contribute to the creation or spread of fire. USFA data indicate that more than one-third of rural Americans use fireplaces, wood stoves, and other fuel-fired appliances as primary sources of heat. These same systems account for 36% of rural residential fires. Many of these fires are the result of creosote buildup in chimneys and stovepipes. These fires could be avoided by

- having chimneys inspected and cleaned by a certified chimney specialist;
- clearing the area around the hearth of debris, decorations, and flammable materials;
- using a metal mesh screen with fireplaces and leaving glass doors open while burning a fire;
- installing stovepipe thermometers to monitor flue temperatures;
- leaving air inlets on wood stoves open and never restricting the air supply to fireplaces, thus helping to reduce creosote buildup;
- using fire-resistant materials on walls around wood stoves;
- never using flammable liquids to start a fire;
- using only seasoned hardwood rather than soft, moist wood, which accelerates creosote buildup;
- building small fires that burn completely and produce less smoke;
- never burning trash, debris, or pasteboard in a fireplace;
- placing logs in the rear of the fireplace on an adequate supporting grate;
- never leaving a fire in the fireplace unattended;
- keeping the roof clear of leaves, pine needles, and other debris;
- covering the chimney with a mesh screen spark arrester; and
- removing branches hanging above the chimney, flues, or vents.

USFA also notes that manufactured homes can be susceptible to fires. More than one-fifth of residential fires in these facilities are related to the use of supplemental room heaters, such as wood- and coal-burning stoves, kerosene heaters, gas space heaters, and electrical heaters. Most fires related to supplemental heating equipment result from improper installation, maintenance, or use of the appliance.
USFA recommendations to reduce the chance of fire with these types of appliances include the following:

- placing wood stoves on noncombustible surfaces or a code-specified or listed floor surface;
- placing noncombustible materials around the opening and hearth of fireplaces;
- placing space heaters on firm, out-of-the-way surfaces to reduce tipping and subsequent spillage of fuel, and providing at least 3 feet of air space between the heating device and walls, chairs, firewood, and curtains;
- placing vents and chimneys to allow 18 inches of air space between single-wall connector pipes and combustibles and 2 inches between insulated chimneys and combustibles; and
- using only the fuel designated by the manufacturer for the appliance.

The ability to escape from a building when fire has been discovered or detected is of extreme importance. In the modern home, three key elements contribute to a safe exit during the threat of fire. The first of these is a working smoke alarm system. The average homeowner in the 1960s had never heard of a smoke alarm, but by the mid-1980s, laws in 38 states and in thousands of municipalities required smoke alarms in all new and existing residences. By 1995, 93% of all single-family and multifamily homes, apartments, nursing homes, and dormitories were equipped with alarms. The cost decreased from $1,000 for a professionally installed unit for a three-bedroom home in the 1970s to $10 for an owner-installed unit.

According to the EPA, ionization-chamber and photoelectric detectors are the two most common types of smoke detector available commercially. Helmenstein states that a smoke alarm uses one or both methods, and occasionally a heat detector, to warn of a fire. These units can be powered by a 9-volt battery, a lithium battery, or 120-volt house wiring.

Ionization detectors function using an ionization chamber and a minute source of ionizing radiation. The radiation source is americium-241 (perhaps 1/5,000th of a gram), while the ionization chamber consists of two plates separated by about a centimeter. The power source (battery or house current) applies voltage to the plates, charging one plate positively and the other negatively. The americium constantly releases alpha particles that knock electrons off the atoms in the air, ionizing the oxygen and nitrogen atoms in the chamber. The negative plate attracts the positively charged oxygen and nitrogen atoms, while the electrons are attracted to the positive plate, generating a small, continuous electric current. If smoke enters the ionization chamber, the smoke particles attach to the ions and neutralize them, so they do not reach the plate. The alarm is then triggered by the drop in current between the plates.

Photoelectric devices function in one of two ways. In the first, smoke blocks a light beam, reducing the light reaching a photocell, which sets off the alarm. In the second and more common type of photoelectric unit, smoke particles scatter the light onto a photocell, initiating an alarm. Both detector types are effective smoke sensors, and both must pass the same test to be certified as Underwriters Laboratories (UL) smoke detectors. Ionization detectors respond more quickly to flaming fires with smaller combustion particles, while photoelectric detectors respond more quickly to smoldering fires. Both types can be damaged by steam or high temperatures.
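The trigger logic common to both detector types reduces to comparing a sensed signal against its clean-air baseline: an ionization unit alarms when chamber current drops, and a light-scattering photoelectric unit alarms when photocell signal rises. The short Python sketch below is a toy illustration of that comparison only; it does not describe any real product's firmware, and the baseline values, threshold levels, and function names are invented for the example.

```python
# Toy model of smoke-detector trigger logic (illustrative only).
# Assumed values: a clean-air chamber current of 100 arbitrary units,
# a 70%-of-baseline alarm threshold, and a scattered-light alarm level of 5.

CLEAN_AIR_CURRENT = 100.0   # assumed clean-air baseline, arbitrary units
ION_ALARM_FRACTION = 0.7    # assumed: alarm if current falls below 70% of baseline
SCATTER_ALARM_LEVEL = 5.0   # assumed: alarm if scattered light exceeds this level

def ionization_alarm(chamber_current: float) -> bool:
    """Smoke neutralizes ions in the chamber, so the plate current FALLS."""
    return chamber_current < ION_ALARM_FRACTION * CLEAN_AIR_CURRENT

def photoelectric_alarm(scattered_light: float) -> bool:
    """Smoke scatters the beam onto the photocell, so the signal RISES."""
    return scattered_light > SCATTER_ALARM_LEVEL

if __name__ == "__main__":
    # Clean air: full current, almost no scattered light -> no alarm.
    print(ionization_alarm(100.0), photoelectric_alarm(0.1))   # False False
    # Smoky air: current drops, scattered light rises -> alarm.
    print(ionization_alarm(60.0), photoelectric_alarm(12.0))   # True True
```

The same comparison also helps explain the battery behavior described next: in an ionization unit, a weakening power source lowers the chamber current toward the alarm threshold.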
Photoelectric detectors are more expensive than ionization detectors and are more sensitive to minute smoke particles. However, ionization detectors have a degree of built-in security not inherent to photoelectric detectors: when the battery starts to fail in an ionization detector, the ion current falls and the alarm sounds, warning that it is time to change the battery before the detector becomes ineffective. Backup batteries may be used for photoelectric detectors that operate on the home's electrical system.

According to USFA, a properly functioning smoke alarm reduces the risk for dying in a fire by approximately 50% and is considered the single most important means of preventing house and apartment fire fatalities. Proper installation and maintenance, however, are key to their usefulness. Figure 2.2 shows a typical smoke alarm being tested. Following are key issues regarding installation and maintenance of smoke alarms:

- Smoke alarms should be installed on every level of the home, including the basement, and both inside and outside the sleeping areas.
- Smoke alarms should be installed on the ceiling or 6-8 inches below the ceiling on side walls.
- Battery replacement is imperative to proper operation. Typically, batteries should be replaced at least once a year, although some units are manufactured with a 10-year battery. A "chirping" noise from the unit indicates the need for battery replacement. A battery-operated smoke alarm has a life expectancy of 8 to 10 years.
- Battery replacement is not necessary in units that are connected to the household electrical system.
- Regardless of the type, it is crucial to test every smoke alarm monthly.

Data from HSC revealed that only 83% of individuals with fire alarms test them at least once a year, and only 19% of households with at least one smoke alarm test them quarterly.

A second element affecting escape from a building is a properly installed fire-suppression system. According to USFA, sprinkler systems began to be used more than 100 years ago in New England textile mills. Currently, few homes are protected by residential sprinkler systems. However, UL-listed home systems are available and are designed to protect homes much faster than standard commercial or industrial sprinklers. At approximately 1% of the total building price, sprinkler systems can be installed in new construction for a reasonable price. These systems can be retrofitted to existing construction and are smaller than commercial systems. In addition, homeowner insurance discounts for such systems range between 5% and 15% and are increasingly available.

The final element in escaping from a residential fire is having a fire plan. A 1999 survey conducted by USFA found that 60% of Americans have an escape plan, and 42% of these individuals have practiced the plan. Surprisingly, 26% of Americans stated they had never thought about practicing an escape plan, and 3% believed escape planning to be unnecessary. In addition, of the people whose smoke alarm sounded an alert in the year before the study, only 8% believed it to be a fire and thought they should evacuate the building.

Protection from electrical shocks and burns is also a vital element in the overall safety of the home. According to the National Fire Protection Association (NFPA), electrical distribution equipment was the third-leading cause of home fires and the second-leading cause of fire deaths in the United States between 1994 and 1998.
Specifically, NFPA reported that 38,300 home electrical fires occurred in 1998, resulting in 284 deaths, 1,184 injuries, and approximately $670 million in direct property damage. The same report indicated that the leading cause of electrical distribution fires was ground-fault or short-circuit problems. A third of home electrical distribution fires were the result of problems with fixed wiring, while cords and plugs were responsible for 17% of these fires and 28% of the deaths. Further examination of these statistics reveals that electrical fires are one of the leading types of home fires in manufactured homes. USFA data demonstrate that many electrical fires in homes are associated with improper installation of electrical devices by do-it-yourselfers. Errors attributed to this amateur electrical work include the use of improperly rated devices, such as switches or receptacles, and loose connections that lead to overheating and arcing, which in turn result in fires.

Recommendations to reduce the risk of electrical fires and electrocution include the following:

1. Use only the correct fuse size and never place pennies behind a fuse.
2. Install ground fault circuit interrupters (GFCIs) on all outlets in kitchens, bathrooms, and anywhere else near water. This can also be accomplished by installing a GFCI in the breaker box, thus protecting an entire circuit.
3. Never place combustible materials near light fixtures, especially halogen bulbs, which get very hot.
4. Use only the correct bulb size in a light fixture.
5. Use only extension cords properly rated for the job at hand.
6. Never use extension cords as a long-term solution to the need for an additional outlet; size the extension cord to the wattage to be used.
7. Never run extension cords inside walls or under rugs, because they generate heat that must be able to dissipate.

# Fire Extinguishers
A fire extinguisher should be listed and labeled by an independent testing laboratory such as FM (Factory Mutual) or UL. Fire extinguishers are labeled according to the type of fire on which they may be used. Fires involving wood or cloth, flammable liquids, electricity, or metals react differently to extinguishers. Using the wrong type of extinguisher on a fire can be dangerous and can worsen the fire. Traditionally, the labels A, B, C, and D have been used to indicate the type of fire on which an extinguisher is to be used.

Type A: Used for ordinary combustibles such as cloth, wood, rubber, and many plastics. These fires usually leave ashes after they burn: Type A extinguishers for ashes. The Type A label is in a triangle on the extinguisher.

Type B: Used for flammable-liquid fires such as oil, gasoline, paints, lacquers, grease, and solvents. These substances often come in barrels: Type B extinguishers for barrels. The Type B label is in a square on the extinguisher.

Type C: Used for electrical fires, such as those involving wiring, fuse boxes, energized electrical equipment, and other electrical sources. Electricity travels in currents: Type C extinguishers for currents. The Type C label is in a circle on the extinguisher.

Type D: Used for metal fires involving magnesium, titanium, and sodium. These fires are very dangerous and seldom handled by the general public: Type D means don't get involved. The Type D label is in a star on the extinguisher.

The higher the rating number on a Type A or Type B fire extinguisher, the more fire it can put out, but higher-rated units are often the heavier models.
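The class letters above, together with the numeric ratings explained in the paragraphs that follow, combine into compact labels such as 2A10BC. As a small illustration of how that label grammar fits together (not part of any listing standard), the Python sketch below splits such a label into its rated parts; the function name and output format are invented for this example, and the gallon and square-foot interpretations simply restate the rating rules given in the text.

```python
import re

# Parse a fire-extinguisher rating label such as "2A10BC" or "2A5B".
# Per the rating rules described in the text: the number before A counts
# units of 1.25 gallons of water equivalence (so 2A = 2.5 gallons), and
# the number before B/C represents square feet (so 10B = 10 square feet).
LABEL_PART = re.compile(r"(\d*)([ABCD]+)")

def parse_rating(label: str) -> list[tuple[str, str]]:
    """Return (classes, interpretation) pairs for each rated part of a label."""
    parts = []
    for number, classes in LABEL_PART.findall(label):
        if "A" in classes and number:
            parts.append((classes, f"{int(number) * 1.25:g} gallons water equivalence"))
        elif number:
            parts.append((classes, f"{number} square feet of coverage"))
        else:
            parts.append((classes, "class rating only, no size number"))
    return parts

print(parse_rating("2A10BC"))
# [('A', '2.5 gallons water equivalence'), ('BC', '10 square feet of coverage')]
print(parse_rating("2A5B"))
# [('A', '2.5 gallons water equivalence'), ('B', '5 square feet of coverage')]
```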
Extinguishers need care and must be recharged after every use-a partially used unit might as well be empty. An extinguisher should be placed in the kitchen and in the garage or workshop. Each extinguisher should be installed in plain view near an escape route and away from potential fire hazards such as heating appliances.

Recently, pictograms have come into use on fire extinguishers. These picture the type of fire on which an extinguisher is to be used. For instance, a Type A extinguisher has a pictogram showing burning wood. A Type C extinguisher has a pictogram showing an electrical cord and outlet. These pictograms are also used to show what not to use. For example, a Type A extinguisher also shows a pictogram of an electrical cord and outlet with a slash through it (do not use it on an electrical fire).

Fire extinguishers also have a number rating. For Type A fires, 1 means 1¼ gallons of water, 2 means 2½ gallons of water, 3 means 3¾ gallons of water, and so on. For Type B fires, the number represents square feet of coverage; for example, 2 equals 2 square feet, 5 equals 5 square feet, and so on. The Type C designation carries no number; it indicates only that the extinguishing agent is safe for use on energized electrical equipment. Fire extinguishers can also be made to extinguish more than one type of fire. For example, you might have an extinguisher with a label that reads 2A5B. This would mean this extinguisher is good for Type A fires with a 2½-gallon equivalence and for Type B fires with a 5-square-foot equivalence. A good extinguisher to have in each residential kitchen is a 2A10BC fire extinguisher. You might also get a Type A for the living room and bedrooms and an ABC for the basement and garage.

PASS is a simple acronym to remind you how to operate most fire extinguishers-pull, aim, squeeze, and sweep. Pull the pin at the top of the cylinder. Some units require the releasing of a lock latch or pressing a puncture lever. Aim the nozzle at the base of the fire. Squeeze or press the handle. Sweep the contents from side to side at the base of the fire until it goes out. Shut off the extinguisher and then watch carefully for any rekindling of the fire.

# Protection Against Toxic Gases

Protection against gas poisoning has been a problem since the use of fossil fuels was combined with relatively tight housing construction. NFPA notes that National Safety Council statistics show about 600 unintentional deaths from poisoning by gases or vapors, chiefly carbon monoxide (CO), in 1998. One-fourth of these involved heating or cooking equipment in the home. The U.S. Consumer Product Safety Commission states that in 2001 an estimated 130 deaths occurred as a result of CO poisoning from residential sources; this decrease in deaths is related to the increased use of CO detectors. In addition, approximately 10,000 cases of CO-related injuries occur each year.

# Introduction

William Pitt, arguing before the British Parliament against excise officers entering private homes to levy the Cyder Tax, eloquently articulated the long-held and cherished notion of the sanctity of private property. However, a person's right to privacy is not absolute. There has always been a tension between the rights of property owners to do whatever they desire with their property and the ability of the government to regulate uses to protect the safety, health, and welfare of the community. Few, however, would argue with the right and duty of a city government to prohibit the operation of a munitions factory or a chemical plant in the middle of a crowded residential neighborhood.
# History

The first known housing laws are in the Code of Laws of Hammurabi, King of Babylonia circa 1792-1750 BC. These laws addressed the responsibility of the home builder to construct a quality home and outlined the consequences for the builder if injury or harm came to the owner as a result of the failure to do so.

During the Puritan period (about 1620-1690), housing laws essentially governed the behavior of the members of the society. For example, no one was allowed to live alone, so bachelors, widows, and widowers were placed with other families as servants or boarders. In 1652, Boston prohibited building privies within 12 feet of the street. Around the turn of the 18th century, some New England communities implemented local ordinances that specified the size of houses. During the 17th century, additional public policies on housing were established. Because the English tradition of using wooden chimneys and thatched roofs led to fires in many dwellings, several colonies passed regulations prohibiting them.

After the early 19th century came an era of very rapid metropolitan growth along the East Coast. This growth was due largely to immigration from Europe and was spurred by the Industrial Revolution. The most serious housing problems arose in the crowded tenement districts of the rapidly growing cities, and New York responded by passing the Tenement House Act of 1867. The principal requirements of the act included the following:
- Every room occupied for sleeping, if it does not communicate directly with the external air, must have a ventilating or transom window of at least 3 square feet to the neighboring room or hall.
- A proper fire escape is necessary on every tenement or lodging house.
- The roof is to be kept in repair and the stairs are to have banisters.
- At least one toilet is required for every 20 occupants for all such houses, and those toilets must be connected to approved disposal systems.
- Cleansing of every lodging house is to be to the satisfaction of the Board of Health, which is to have access at any time.
- All cases of infectious disease are to be reported to the Board by the owner or his agent; buildings are to be inspected and, if necessary, disinfected or vacated if found to be out of repair.

There were also regulations governing distances between buildings, heights of rooms, and dimensions of windows. Although this act had some beneficial influences on overcrowding, sewage disposal, lighting, and ventilation, perhaps its greatest contribution was in laying a foundation for more stringent future legislation.

# Zoning, Housing Codes, and Building Codes

Housing is inextricably linked to the land on which it is located. Changes in the patterns of land use in the United States, shifting demographics, an awareness of the need for environmental stewardship, and competing uses for increasingly scarce (desirable) land have all placed added stress on the traditional relationship between the property owner and the community. This is certainly not a new development. In the early settlement of this country, colonists, following the precedent set by their forefathers in Great Britain, prohibited gunpowder mills and storehouses in the heavily populated portions of towns owing to the frequent fires and explosions. Later, zoning took the form of fire districts and, under implied legislative powers, wooden buildings were prohibited from certain sections of a municipality. Massachusetts passed one of the first zoning laws in 1692.
This law authorized Boston, Salem, Charlestown, and certain other market towns in the province to restrict the establishment of slaughterhouses, stillhouses, and buildings for currying leather to certain locations in each town. Few people objected to such restrictions. Still, the tension remained between the right to use one's land and the community's right to protect its citizens. In 1926, the United States Supreme Court took up the issue in Village of Euclid, Ohio, v. Ambler Realty. In this decision, the Court noted, "Until recent years, urban life was comparatively simple; but with great increase and concentration of population, problems have developed which require additional restrictions in respect of the use and occupation of private lands in urban communities." In explaining its reasoning, the Court said, "the law of nuisances may be consulted not for the purpose of controlling, but for the helpful aid of its analogies in the process of ascertaining the scope of the police power. Thus the question of whether the power exists to forbid the erection of a building of a particular kind or a particular use is to be determined, not by an abstract consideration of the building or other thing considered apart, but by considering it in connection with the circumstances and the locality… A nuisance may be merely the right thing in the wrong place-like a pig in the parlor instead of the barnyard."

Zoning, housing, and building codes were adopted to improve the health and safety of people living in communities, and, to some extent, they have performed this function. Certainly, housing and building codes, when enforced, have resulted in better constructed and maintained buildings. Zoning codes have been effective in segregating noxious and dangerous enterprises from residential areas. However, as the U.S. population has grown and changed from a rural to an urban and then to a suburban society, land use and building regulations developed for the 19th and early 20th centuries are creating new health and safety problems not envisioned in earlier times.

# Zoning and Zoning Ordinances

Zoning is essentially a means of ensuring that a community's land uses are compatible with the health, safety, and general welfare of the community. Experience has shown that some types of controls are needed to provide orderly growth in relation to the community plan for development. Just as a capital improvement program governs public improvements such as streets, parks and other recreational facilities, schools, and public buildings, so zoning governs the planning program with respect to the use of public and private property. It is very important that housing inspectors know the general nature of zoning regulations, because properties in violation of both the housing code and the zoning ordinance must be brought into full compliance with the zoning ordinance before the housing code can be enforced. In many cases, the housing inspector may be able to eliminate housing code violations through enforcement of the zoning ordinance.

# Zoning Objectives

As stated earlier, the purpose of a zoning ordinance is to ensure that the land uses within a community are regulated not only for the health, safety, and welfare of the community, but also in keeping with the comprehensive plan for community development. The provisions in a zoning ordinance that help to achieve development that provides for health, safety, and welfare are designed to do the following:
- Regulate height, bulk, and area of structures.
To provide standards for healthful housing within the community, regulations dealing with building heights, lot coverage, and floor areas must be established. These regulations then ensure that adequate natural lighting, ventilation, privacy, and recreational areas for children will be realized. These are all fundamental physiologic needs for a healthful environment. Safety from fires is enhanced by separating buildings to meet yard and open-space requirements. By requiring a minimum lot area per dwelling unit, population density controls are established.
- Avoid undue levels of noise, vibration, glare, air pollution, and odor. By providing land-use category districts, these environmental stresses upon the individual can be reduced.
- Lessen street congestion by requiring off-street parking and off-street loading.
- Facilitate adequate provision of water, sewerage, schools, parks, and playgrounds.
- Provide safety from flooding.
- Conserve property values. Through careful enforcement of the zoning ordinance provisions, property values can be stabilized and conserved.

To understand more fully the difference between zoning and subdivision regulations, building codes, and housing ordinances, the housing inspector must know what cannot be accomplished by a zoning ordinance. Items that cannot be accomplished by a zoning ordinance include the following:
- Overcrowding or substandard housing. Zoning is not retroactive and cannot correct existing conditions. These are corrected through enforcement of a minimum standards housing code.
- Materials and methods of construction. Materials and methods of construction are enforced through building codes rather than through zoning.
- Cost of construction. Quality of construction, and hence construction costs, are often regulated through deed restrictions or covenants. Zoning does, however, stabilize property values in an area by prohibiting incompatible development, such as heavy industry in the midst of a well-established subdivision.
- Subdivision design and layout. Design and layout of subdivisions, as well as provisions for parks and streets, are controlled through subdivision regulations.

# Content of the Zoning Ordinance

Zoning ordinances establish districts of whatever size, shape, and number the municipality deems best for carrying out the purposes of the zoning ordinance. Most cities use three major districts: residential (R), commercial (C), and industrial (I). These three may then be subdivided into many subdistricts, depending on local conditions; e.g., R-1 (single-unit dwellings), R-2 (duplexes), R-3 (low-rise apartment buildings), and so on. These districts specify the principal and accessory uses, exceptions, and prohibitions. In general, permitted land uses are based on the intensity of land use-a less intense land use being permitted in a more intense district, but not vice versa. For example, a single-unit residence is a less intense land use than a multiunit dwelling (defined by HUD as more than four living units) and hence would be permitted in a residential district zoned for more intense land use (e.g., R-3). A multiunit dwelling would not, however, be permitted in an R-1 district. Although zoning districts are intended to promote the health, safety, and general welfare of the community, housing trends in the last half of the 20th century have led a number of public health and planning officials to question their blind enforcement.
These individuals, citing such problems as urban sprawl, have stated that municipalities need to adopt a more flexible approach to land use regulation-one that encourages creating mixed-use spaces, increasing population densities, and reducing reliance on the automobile. These initiatives are often called smart growth programs. It is imperative, if this approach is taken, that both governmental officials and citizens be involved in the planning stage. Without this involvement, the community may end up with major problems, such as overloaded infrastructure, structures of inappropriate construction crowded together, and fire and security issues for residents. Increased density could strain the existing water, sewer, and waste collection systems, as well as fire and police services, unless proper planning is implemented.

In recent years, some ordinances have been partially based on performance standards rather than solely on land-use intensity. For example, some types of industrial developments may be permitted in a less intense use district provided that the proposed land use creates no noise, glare, smoke, dust, vibration, or other environmental stress exceeding acceptable standards, and provided further that adequate off-street parking, screening, landscaping, and similar measures are taken.

Bulk and Height Requirements. Most early zoning ordinances stated that, within a particular district, the height and bulk of any structure could not exceed certain dimensions and specified dimensions for front, side, and rear yards. Another approach was to use floor-area ratios for regulation. A floor-area ratio is the relation between the floor space of the structure and the size of the lot on which it is located. For example, a floor-area ratio of 1 would permit either a two-story building covering 50% of the lot or a one-story building covering 100% of the lot, as demonstrated in Figure 3.1. Other zoning ordinances specify the maximum amount of the lot that can be covered, or merely require that a certain amount of open space be provided for each structure, leaving the builder the flexibility to determine the location of the structure. Still other ordinances, rather than specify a particular height for the structure, specify the angle of light obstruction that will assure adequate air and light to the surrounding structures, as demonstrated in Figure 3.2.

Yard Requirements. Zoning ordinances also contain minimum requirements for front, rear, and side yards. These requirements, in addition to stating the lot dimensions, usually designate the amount of setback required. Most ordinances permit the erection of auxiliary buildings in rear yards provided that they are located at stated distances from all lot lines and provided sufficient open space is maintained. If the property is a corner lot, additional requirements are established to allow visibility for motorists.

Off-street Parking. Space for off-street parking and off-street loading, especially for commercial buildings, is also specified in zoning ordinances. These requirements are based on the relationship of floor space or seating capacity to land use. For example, a furniture store would require fewer off-street parking spaces in relation to the floor area than would a movie theater.

# Exceptions to the Zoning Code

# Nonconforming Uses

Because zoning is not retroactive, all zoning ordinances contain a provision for nonconforming uses.
If a use has already been established within a particular district before the adoption of the ordinance, it must be permitted to continue unless it can be shown to be a public nuisance. Provisions are, however, put into the ordinance to aid in eliminating nonconforming uses over time. These provisions generally prohibit a) an enlargement or expansion of the nonconforming use, b) reconstruction of the nonconforming use if more than a certain portion of the building should be destroyed, c) resumption of the use after it has been abandoned for a specified period of time, and d) changing the use to a higher classification or to another nonconforming use. Some zoning ordinances further provide a period of amortization during which nonconforming land use must be phased out.

# Variances

Zoning ordinances contain provisions for permitting variances and providing a method for granting these variances, subject to certain specified provisions. A variance may be granted when, owing to the specific conditions or use of a particular lot, an undue hardship would be imposed on the owner if the exact terms of the ordinance were enforced. A variance may be granted because of the shape, topography, or other characteristic of the lot. For example, suppose an irregularly shaped lot is located in a district having a side yard requirement of 20 feet on a side and a total lot size requirement of 10,000 square feet. Further suppose that this lot contains 10,200 square feet (and thus meets the total size requirement); however, due to the irregular shape of the lot, there would be sufficient space for only a 15-foot side yard. Because a hardship would be imposed on the owner if the exact letter of the law were applied, the owner of the property could apply to the zoning adjustment board for a variance. Because the total area of the lot is sufficient, and a lessening of the ordinance requirements would be neither detrimental to the surrounding property nor an interference with neighboring properties, a variance would probably be granted. Note that a variance is granted to the owner under specific conditions. Should use of the property change, the variance would be voided.

# Exceptions

An exception is often confused with a variance. In every city there are some necessary uses that do not correspond to the permitted land uses within the district. The zoning code recognizes, however, that if proper safeguards are provided, these uses will not have a detrimental effect on the district. An example would be a fire station that could be permitted in a residential area, provided the station house is designed and the property is properly landscaped to resemble or fit in with the characteristics of the neighborhood in which it is located.

# Administration

Zoning inspectors are essential to the zoning process because they have firsthand knowledge of a case. Often, the zoning inspector may also be the building inspector or housing inspector. Because the building inspector or housing inspector is already in the field making inspections, it is relatively easy for that individual to check compliance with the zoning ordinances. Compliance is determined by comparing the actual land use with that allowed for the area and shown on the zoning map. Each zoning ordinance has a map detailing the permitted usage for each block. Using a copy of this map, the inspector can make a preliminary check of the land use in the field.
If the use does not conform, the inspector must then contact the Zoning Board to see whether the property in question was a nonconforming use at the time of the passage of the ordinance and whether an exception or variance has been granted. In cities where up-to-date records are maintained, the inspector can verify the legality of the use while in the field. When a violation is observed and the property owners are duly notified of the violation, they have the right to request a hearing before the Zoning Board of Adjustment (also called the Zoning Board of Appeals in some cities). The board may uphold the zoning enforcement officer or may rule in favor of the property owner. If the action of the zoning officer is upheld, the property owner may, if desired, seek relief by appealing the decision to the courts; otherwise, the violation must be corrected to conform to the zoning code.

It is critical for the housing or building inspector and the zoning inspector to work closely together in municipalities where these positions and responsibilities are separate. Experience has shown that illegally converted properties are often among the most substandard encountered in the municipality and often contain especially dangerous housing code violations. In communities where the zoning code is enforced effectively, the resulting zoning compliance helps to advance, as well as sustain, many of the minimum standards of the housing code, such as occupancy, ventilation, light, and unimpeded egress. By the same token, building or housing inspectors can often aid the zoning inspector by helping eliminate some nonconforming uses through code enforcement.

# Housing Codes

A housing code, regardless of who promulgates it, is basically an environmental health protection code. Housing codes are distinguished from building codes in that they cover houses, not buildings in general. For example, the housing code requires that walls support the weight of the roof, any floors above, and the furnishings, occupants, etc., within a building. Early housing codes primarily protected only physical health; hence, they were enforced only in slum areas. In the 1970s, it was realized that, if urban blight and its associated human suffering were to be controlled, housing codes must consider both physical and mental health and must be administered uniformly throughout the community.

In preparing or revising housing codes, local officials must set standards that are more than merely minimal; standards should support a living environment that contributes positively to healthful individual and family living. The fact that a small portion of housing fails to meet a desirable standard is not a legitimate reason for retrogressive modification or abolition of a standard. The adoption of a housing ordinance that establishes low standards for existing housing serves only to legalize and perpetuate an unhealthy living environment. Wherever local conditions are such that immediate enforcement of some standards within the code would cause undue hardship for some individuals, it is better to allow some time for compliance than to eliminate an otherwise satisfactory standard. When immediate health or safety hazards are not involved, it is often wise to attempt to create a reasonable timetable for accomplishing necessary code modifications.
# History

To assist municipalities with developing legislation necessary to regulate the quality of housing, the American Public Health Association (APHA) Committee on the Hygiene of Housing prepared and published in 1952 a proposed housing ordinance. This provided a prototype on which such legislation might be based and has served as the basis for countless housing codes enacted in the United States since that time. Some municipalities enacted it without change. Others made revisions by omitting some portions, modifying others, and sometimes adding new provisions. The APHA ordinance was revised in 1969 and 1971. In 1975, APHA and CDC jointly undertook the job of rewriting and updating this model ordinance. The new ordinance was entitled the APHA-CDC Recommended Housing Maintenance and Occupancy Ordinance. The most recent model ordinance was published by APHA in 1986 as Housing and Health: APHA-CDC Recommended Minimum Housing Standards. This new ordinance is one of several model ordinances available to communities when they are interested in adopting a housing code. A community should read and consider each element within the model code to determine its local applicability. A housing code is merely a means to an end. The end is the eventual elimination of all substandard conditions within the home and the neighborhood. This end cannot be achieved if the community adopts an inadequate housing code.

# Objectives

The Housing Act of 1949 gave new impetus to existing local, state, and federal housing programs directed toward eliminating poor housing. In passing this legislation, Congress defined a new national objective by declaring that "the general welfare and security of the nation and the health and living standards of its people... require a decent home and a suitable living environment for every American family." This mandate generated an awareness that the quality of housing and the residential environment has an enormous influence on the physical and mental health and the social well-being of each individual and, in turn, on the economic, political, and social conditions in every community. Consequently, public agencies, units of government, professional organizations, and others sought ways to ensure that the quality of housing and the residential environment did not deteriorate. It soon became apparent that ordinances regulating the supplied utilities and the maintenance and occupancy of dwellings were needed. Commonly called housing codes, these ordinances establish minimum standards to make dwellings safe, sanitary, and fit for human habitation by governing their condition and maintenance, their supplied utilities and facilities, and their occupancy.

The 2003 International Code Council (ICC) International Residential Code for One- and Two-Family Dwellings (R101.3) states that "the purpose of this code is to provide minimum requirements to safeguard the public safety, health and general welfare, through affordability, structural strength, means of egress, facilities, stability, sanitation, light and ventilation, energy conservation, and safety to life and property from fire and other hazards attributed to the built environment."

# Critical Requirements of an Effective Housing Program

A housing code is limited in its effectiveness by several factors. First, if the housing code does not contain standards that adequately protect the health and well-being of the individuals, it cannot be effective.
The best-trained housing inspector, if not armed with an adequate housing code, can accomplish little in the battle against urban blight.

A second issue in establishing an effective housing code is the need to establish a baseline of current housing conditions. A systems approach requires that you establish where you are, where you are going, and how you plan to achieve your goals. In using a systems approach, it is essential to know where the program started so that the success or failure of various initiatives can be established. Without this information, success cannot be replicated, because you cannot identify the obstacles navigated or the elements of success. Many initiatives fail because program administrators lack the necessary proof of success when facing funding shortfalls and budget cuts.

A third factor affecting the quality of housing codes is budget. Without adequate funds and personnel, the community can expect to lose the battle against urban blight. It is only through a systematic enforcement effort by an adequately sized staff of properly trained inspectors that the battle can be won.

A fourth factor is the attitude of the political bodies within the area. A properly administered housing program will require upgrading substandard housing throughout the community. Frequently, this results in political pressure being exerted to prevent the enforcement of the code in certain areas of the city. If the housing effort is backed properly by all political elements, blight can be controlled and eventually eliminated within the community. If, however, the housing program is not permitted to choke out the spreading influence of substandard conditions, urban blight will spread like a cancer, engulfing greater and greater portions of the city. Similarly, an effort directed at only the most seriously blighted blocks in the city will upgrade merely those blocks, while the blight spreads elsewhere. If urban blight is to be controlled, it must be cut out in its entirety.

A fifth element that limits housing programs is whether they are supported fully by the other departments within the city. Regardless of which city agency administers the housing program, other city agencies must support its activities. In addition, great effort should be expended to obtain the support and cooperation of the community. This can be accomplished through public awareness and public information programs, which can generate either considerable support for or considerable resistance to the efforts of the program.

A sixth limitation is an inadequately or improperly trained inspection staff. Inspectors should be capable of evaluating whether a serious or a minor problem exists in matters ranging from the structural stability of a building to its health and sanitary aspects. If they do not have the authority or expertise, they should develop that expertise or establish effective and efficient agreements with overlapping agencies to ensure timely and appropriate response.

A seventh item that frequently restricts the effectiveness of a housing program is that many housing groups fail to do a complete job of evaluating housing problems. The deterioration of an area may be due to factors such as housing affordability, tax rates, or issues related to investment cost and return. In many cases, the inspection effort is restricted to merely evaluating the conditions that exist, with little or no thought given to why these conditions exist.
If a housing effort is to be successful as part of a systems approach, the question of why the homes deteriorated must be considered. Was it because of environmental stresses within the neighborhood that need to be eliminated, or was it because of apathy on the part of the occupants? In either case, if the causative agent is not removed, the inspector faces a recurring problem of maintaining the quality of that residence. It is only by eliminating the causes of deterioration that the quality of the neighborhood can be maintained. Often the regulatory agency lacks adequate authority under the code's enabling legislation to resolve the problem, or there are gaps in jurisdiction.

# Content of a Housing Code

Although all comprehensive housing codes or ordinances contain a number of common elements, the provisions adopted by individual communities will usually vary. These variations stem from differences in local policies, preferences, and, to a lesser extent, needs. They are also influenced by the standards set by the related provisions of the diverse building, electrical, and plumbing codes in use in the municipality. Within any housing code there are generally five features:
1. Definitions of terms used in the code.
2. Administrative provisions showing who is authorized to administer the code and the basic methods and procedures that must be followed in implementing and enforcing the sections of the code. Administrative provisions deal with items such as reasonable hours of inspections, whether serving violation notices is required, how to notify absentee owners or resident-owners or tenants, how to process and conduct hearings, what rules to follow in processing dwellings alleged to be unfit for human habitation, and how to occupy or use dwellings finally declared fit.
3. Substantive provisions specifying the various types of health, building, electrical, heating, plumbing, maintenance, occupancy, and use conditions that constitute violations of the housing code. These provisions can be and often are grouped into three categories: minimum facilities and equipment for dwelling units; adequate maintenance of dwellings and dwelling units, as well as their facilities and equipment; and occupancy conditions of dwellings and dwelling units.
4. Court and penalty sections outlining the basis for court action and the penalty or penalties to which the alleged violator will be subjected if proved guilty of violating one or more provisions of the code.
5. Enabling, conflict, and unconstitutionality clauses providing the date a new or amended code will take effect, the prevalence of the more stringent provision when two codes conflict, the severability of any part of the ordinance that might be found unconstitutional, and the retention of all other parts in full force and effect.

In any city following the format of the APHA-CDC Recommended Housing Maintenance and Occupancy Ordinance, the housing officer or other supervisor in charge of housing inspections will also adopt appropriate housing rules and regulations from time to time to clarify or further refine the provisions of the ordinance. When rules and regulations are used, care should be taken that the department is not overburdened with a number of minor rules and regulations. Similarly, a housing ordinance that encompasses all rules and regulations might prove difficult to maintain, because any amendment to it will require action by the political element of the community.
Some housing groups, in attempting to obtain amendments to an ordinance, have had the entire ordinance thrown out by the political bodies.

# Administrative Provisions of a Housing Code

The administrative procedures and powers of the housing inspection agency, its supervisors, and its staff are similar to other provisions in that all are based on the police power of the state to legislate for public health and safety. In addition, the administrative provisions, and to a lesser extent the court and penalty provisions, outline how the police power is to be exercised in administering and enforcing the code. Generally, the administrative elements deal with procedures for ensuring that the constitutional doctrines of reasonableness, equal protection under the law, and due process of law are observed. They also must guard against violation of prohibitions against unlawful search and seizure, impairment of obligations of contract, and unlawful delegation of authority. These factors encompass items of great importance to housing inspection supervisors, such as the inspector's right of entry, reasonable hours of inspection, proper service, and the validity of the provisions of the housing codes they administer.

Owner of Record. It is essential to file legal actions against the true owners of properties in violation of housing codes. With the advent of the computer, this is often much easier than in the past. Databases that provide this information are readily available from many offices of local government, such as the tax assessment office. The method of obtaining the name and address of the legal owner of a property in violation varies from place to place. Ordinarily, a check of the city tax records will suffice unless there is reason to believe these are not up to date. In this case, a further check of county or parish records will turn up the legal owner if state law requires deed registration there. If it does not, the advice of the municipal law department should be sought about the next steps to follow.

Due Process Requirements. Every notice, complaint, summons, or other type of legal paper concerning alleged housing code violations in a given dwelling or dwelling unit must be legally served on the proper party to be valid and to prevent harassment of innocent parties. This might be the owner, agent, or tenant, as required by the code. It is customary to require that the notice to correct existing violations and any subsequent notices or letters be served by certified or registered mail with return receipt requested. The receipt serves as proof of service if the case has to be taken to court. Due process requirements also call for clarity and specificity with respect to the alleged violations, both in the violation notices and the court complaint-summons. For this reason, special care must be taken to be complete and accurate in listing the violations and charges. To illustrate, rather than direct the violator to repair all windows where needed, the violator should be told exactly which windows and what repairs are involved. The chief limitation on the due process requirement, with respect to service of notices, lies in cases involving immediate threats to health and safety. In these instances, the inspection agency or its representative may, without notice or hearing, issue an order citing the existence of the emergency and requiring that action deemed necessary to meet the emergency be taken.
In some areas, municipal housing courts have advocates who assist both plaintiffs and defendants in preparing for the court process or in resolving the issue to avoid court.

Hearings and Condemnation Power. The purpose of a hearing is to give the alleged violator an opportunity to be heard before further action is taken by the housing inspection agency. These hearings may be very informal, involving meetings between a representative of the agency and the person ordered to take corrective action. They also may be formal hearings at which the agency head presides and at which the city and the defendant both are entitled to be represented by counsel and expert witnesses.

Informal Hearings. A violator may have questions about a violation notice, or the notice may be served at a time when personal hardship or other factors prevent a violator from meeting the terms of the notice. Therefore, many housing codes provide the opportunity for a hearing at which the violator may discuss questions or problems and seek additional time or some modification of the order. Administered in a firm but understanding manner, these hearings can serve as invaluable aids in relieving needless fears of those involved, in showing how the inspection program is designed to help them, and in winning their voluntary compliance.

Formal Hearings. Formal hearings are often quasijudicial hearings (even though the prevailing court rules of evidence do not always apply) from which an appeal may be taken to court. All witnesses must therefore be sworn in, and a record of the proceedings must be made. The formal hearing is used chiefly as the basis for determining whether a dwelling is fit for human habitation, occupancy, or use. In the event it is proved unfit, the building is condemned and the owner is given a designated amount of time either to rehabilitate it completely or to demolish it. Where local funds are available, a municipality may demolish the building and place a lien against the property to cover demolition costs if the owner fails to obey the order within the time specified. This type of condemnation hearing is a very effective means of stimulating prompt and appropriate corrective action when it is administered fairly and firmly.

Request for Inspections. Several states permit their municipalities to offer a request-for-inspection service. For a fee, the housing inspector will inspect a property for violations of the housing code before its sale so that the buyer can learn its condition in advance. Many states and localities now require owners to notify prospective purchasers of any outstanding notices of health risks or violations against their property before the sale. If they fail to do so, some codes will hold the owner liable to the purchaser and the inspection agency for violations.

Tickets for Minor Offenses. Denver, Colorado, has used minimal financial fines to prod minor violators and first offenders into correcting violations without the city resorting to court action. There are mixed views about this technique because it is akin to formal police action. Nevertheless, the action may stimulate compliance and reduce the amount of court action needed to achieve it.

Forms and Form Letters. A fairly typical set of forms and form letters is described below.
It should be stressed that inspection forms to be used for legal notices must satisfy legal standards of the code, be meaningful to the owner and sufficiently explicit about the extent and location of particular defects, be adaptable to statistical compilation for the governing body reports, and be written in a manner that will facilitate clerical and other administrative usage. The Daily Report Form. This form gives the inspection agency an accurate basis for reporting, evaluating, and, if necessary, improving the productivity and performance of its inspectors. Complaint Form. This form helps obtain full information from the complainant and thus makes the relative seriousness of the problem clear and reduces the number of crank complaints. No-entry Notice. This notice advises occupants or owners that an inspector was there and that they must return a call to the inspector. Inspection Report Form. This is the most important form in an agency. It comes in countless varieties, but if designed properly, it will ensure more productivity and more thoroughness by the inspectors, reduce the time spent in writing reports, locate all violations correctly, and reduce the time required for typing violation notices. Forms may vary widely in sophistication from a very simple form to one whose components are identified by number for use in processing the case by automation. Some forms are a combined inspection report and notice form in triplicate so that the first page can be used as the notice of violation, the second as the office record, and the third as the guide for reinspection. A covering form letter notifies the violator of the time allowed to correct the conditions listed in the report form. Violation Notice. This is the legal notice that housing code violations exist and must be corrected within the indicated amount of time. The notice may be in the form of a letter that includes the alleged violations or has a copy of these attached. It may be a standard notice form, or it may be a combined report-notice. Regardless of the type of notice used, it should make the location and nature of all violations clear and specify the exact section of the code that covers each one. The notice must advise violators of their right to a hearing. It should also indicate that the violator has a right to be represented by counsel and that failure to obtain counsel will not be accepted as grounds for postponing a hearing or court case. Hearing Forms. These should include a form letter notifying the violator of the date and time set for the hearing, a standard summary sheet on which the supervisor can record the facts presented at an informal hearing, and a hearing-decision letter for notifying all concerned of the hearing results. The latter should include the names of the violator, inspector, law department, and any other city official or agency that may be involved in the case. Reinspection Form Letters or Notices. These have the same characteristics as violation notices except that they cover the follow-up orders given to the violator who has failed to comply with the original notice within the time specified. Some agencies may use two or three types of these form letters to accommodate different degrees of response by the violator. Whether one or several are used, standardization of these letters or notices will expedite the processing of cases. Court Complaint and Summons Forms. 
These forms advise alleged violators of the charges against them and summon them to appear in court at the specified time and place. It is essential that the housing inspection agency work closely with the municipal law department in preparing these forms so that each is done in exact accord with the rules of court procedure in the relevant state and community.

Court Action Record Form. This form provides an accurate running record of the inspection agency's court actions and their results.

# Substantive Provisions of a Housing Code

A housing code is the primary tool of the housing inspector. The code spells out what the inspector may or may not do. An effort to improve housing conditions can be no better than the code allows. The substantive provisions of the code specify the minimal housing conditions acceptable to the community that developed them.

Dwelling units should have provisions for preparing at least one regularly cooked meal per day. Minimum equipment should include a kitchen sink in good working condition and properly connected to a water supply system approved by the appropriate authority. It should provide, at all times, an adequate amount of heated and unheated running water under pressure and should be connected to a sewer system approved by the appropriate authority. Cabinets or shelves, or both, for storing eating, drinking, and cooking utensils and food should be provided. These surfaces should be of sound construction and made of material that is easy to clean and that will not have a toxic or deleterious effect on food. In addition, a stove and refrigerator should be provided.

Within every dwelling there should be a room that affords privacy and is equipped with a flush toilet in good working condition. In the vicinity of the flush toilet, a lavatory sink should be provided; in no case should a kitchen sink substitute for a lavatory sink. In addition, within each dwelling unit there should be, within a room that affords privacy, a bathtub or shower, or both, in good working condition. Both the lavatory sink and the bathtub or shower should be supplied with an adequate amount of heated and unheated water under pressure, and each should be connected to an approved sewer system.

Within each dwelling unit, two or more means of egress should be provided to safe and open space at ground level. Provisions should be incorporated within the housing code to meet the safety requirements of the state and community involved.

The housing code should spell out minimum standards for lighting and ventilation within each room in the structure. In addition, minimum thermal standards should be provided. Although most codes merely require a given temperature at a given height above floor level, the community should give consideration to the use of effective temperatures. The effective temperature is a means of incorporating not only absolute temperature in degrees, but also humidity and air movement, giving a better indication of the comfort index of a room.

The code should provide that no person shall occupy or let for occupancy any dwelling or dwelling unit that does not comply with stated requirements. Generally, these requirements specify that the foundation, roof, exterior walls, doors, window spaces, and windows of the structure be sound and in good repair; that the structure be moisture-free, watertight, and reasonably weathertight; and that all structural surfaces be sound and in good repair.
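The effective-temperature concept mentioned above can be illustrated with a simple calculation. The sketch below is not taken from this manual or from any housing code; it uses Missenard's widely cited approximation, which combines dry-bulb temperature and relative humidity into a single comfort index (air movement, which also lowers effective temperature, is ignored here), and the function name is illustrative only.

```python
def effective_temperature(temp_c: float, rel_humidity_pct: float) -> float:
    """Approximate effective temperature (degrees C) using Missenard's
    formula; this sketch omits the air-movement term."""
    return temp_c - 0.4 * (temp_c - 10.0) * (1.0 - rel_humidity_pct / 100.0)

# A 22 C room feels like 22 C at 100% relative humidity,
# but only about 18.6 C when the air is dry (30% relative humidity).
print(effective_temperature(22.0, 100.0))  # 22.0
print(effective_temperature(22.0, 30.0))   # ~18.6
```

This is why a code that specifies only a thermometer reading at a fixed height can understate or overstate how warm a room actually feels to its occupants.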
HUD defines a multifamily dwelling as one that contains four or more dwelling units in a single structure. A dwelling unit is further defined as a single unit of residence for a family of one or more persons in which sleeping accommodations are provided but toileting or cooking facilities are shared by the occupants.

# Building Codes

# Chapter 4: Disease Vectors and Pests

# Introduction

The most immediate and obvious link between housing and health involves exposure to biologic, chemical, and physical agents that can affect the health and safety of the occupants of the home. Conditions such as childhood lead poisoning and respiratory illnesses caused by exposure to radon, asbestos, tobacco smoke, and other pollutants are increasingly well understood and documented. However, even 50 years ago, public health officials understood that housing conditions were linked to a broader pattern of community health. For example, in 1949, the Surgeon General released a report comparing several health status indicators among six cities having slums. The publication reported that:
- the rate of deaths from communicable disease in these areas was the same as it was for the rest of the country 50 years ago (i.e., around 1900);
- most of the tuberculosis cases came from 25% of the population of these cities; and
- the infant mortality rate was five times higher than in the rest of the country, approximately equal to what it was 50 years ago.

Housing-related health concerns include asthma episodes triggered by exposure to dust mites, cockroaches, pets, and rodents. Cockroaches, rats, and mice are more than a nuisance: they are capable of transmitting diseases to humans and thus can be vectors for significant problems that affect health and well-being. According to the 1997 American Housing Survey, rats and mice infested 2.7 million of 97 million housing units. A CDC-sponsored survey of two major American cities documented that nearly 50% of the premises were infested with rats and mice. This chapter deals with disease vectors and pests as factors related to the health of households.

# Disease Vectors and Pests

Integrated pest management (IPM) techniques are necessary to reduce the number of pests that threaten human health and property. This systems approach to the problem relies on more than one technique to reduce or eliminate pests. It can best be visualized as concentric rings of protection that reduce both the need for the riskiest and most dangerous control options and the potential for pests to evolve and develop resistance. It typically involves using some or all of the following steps:
- monitoring, identifying, and determining the level of threat from pests;
- making the environment hostile to pests;
- building the pests out by using pest-proof building materials;
- eliminating food sources, hiding areas, and other pest attractants;
- using traps and other physical elimination devices; and
- when necessary, selecting appropriate poisons for identified pests.

The above actions are discussed in more detail in the following section on the four basic strategies for controlling rodents. Most homeowners have encountered a problem with rodents, cockroaches, fleas, flies, termites, or fire ants. These pests destroy property or carry disease, or both, and can be a problem for rich and poor alike.

# Rodents

Rodents destroy property, spread disease, compete for human food sources, and are aesthetically displeasing. Rodent-associated diseases affecting humans include plague, murine typhus, leptospirosis, rickettsialpox, and rat-bite fever.
The three primary rodents of concern to the homeowner are the Norway rat (Rattus norvegicus), the roof rat (Rattus rattus), and the house mouse (Mus musculus). The term "commensal" is applied to these rodents, meaning they live at people's expense. The physical traits of each are demonstrated in Figure 4.1.

Barnett notes that the house mouse is abundant throughout the United States. The Norway rat (Figure 4.2) is found throughout the temperate regions of the world, including the United States. The roof rat is found mainly in the South and westward across the southern part of the nation to the Pacific coast.

As a group, rodents have certain behavioral characteristics that are helpful in understanding them. They have a keen sense of touch, with sensitive whiskers and guard hairs on their bodies. Thus, they favor running along walls and between objects that allow them constant contact with vertical surfaces. They are known to have poor eyesight and are alleged to be colorblind. In contrast, they have an extremely sharp sense of smell and a keen sense of taste. The word rodent is derived from the Latin verb rodere, meaning "to gnaw." The gnawing tendency leads to structural damage to buildings and initiates fires when insulation is chewed from electrical wires. Rodents will gnaw to gain entrance and to obtain food.

The roof rat (Figure 4.3) is a slender, graceful, and very agile climber. The roof rat prefers to live aboveground: indoors in attics, between floors, in walls, or in enclosed spaces; and outdoors in trees and dense vine growth. Contrasted with the roof rat, the Norway rat is at home below the ground, living in a burrow. The house mouse commonly is found living in human quarters, as suggested by its name. Signs indicative of the presence of rodents-aside from seeing live or dead rats and hearing rats-are rodent droppings, runways, and tracks (Figure 4.4). Other signs include nests, gnawings, food scraps, rat hair, urine spots, and rat body odors. Note that waste droppings from rodents are often confused with cockroach egg packets, which are smooth, segmented, and considerably smaller than a mouse dropping.

According to the Military Pest Management Handbook (MPMH), rats and mice are very suspicious of any new objects or food found in their surroundings. This characteristic is one reason rodents can survive in dangerous environments. This avoidance reaction accounts for prebaiting (baiting without poisoning) in control programs. Initially, rats or mice begin by taking only small amounts of food. If the animal becomes ill from a sublethal dose of poison, its avoidance reaction is strengthened, and a poisoning program becomes extremely difficult to complete. If rodents are hungry or exposed to an environment where new objects and food are commonly found, such as a dump, their avoidance reaction may not be as strong; in extreme cases of hunger, it may even be absent.

The first of four basic strategies for controlling rodents is to eliminate food sources. To accomplish this, it is imperative for the homeowner or occupant to do a good job of solid waste management. This requires proper storing, collecting, and disposing of refuse. The second strategy is to eliminate breeding and nesting places. This is accomplished by removing rubbish from near the home, including excess lumber, firewood, and similar materials. These items should be stored above ground with 18 inches of clearance below them.
This height does not provide a habitat for rats, which have a propensity for dark, moist places in which to burrow. Wood should not be stored directly on the ground, and trash and similar rubbish should be eliminated.

The third strategy is to construct buildings and other structures using rat-proofing methods. MPMH notes that it is much easier to manage rodents if a structure is built or modified in a way that prevents easy access by rodents. Tactics for rodent exclusion include building or covering doors and windows with metal. Rats can gnaw through wooden doors and windows in a very short time to gain entrance. All holes in a building's exterior should be sealed. Rats are capable of enlarging openings in masonry, especially if the mortar or brick is of poor quality. All openings more than ¾-inch wide should be closed, especially around pipes and conduits. Cracks around doors, gratings, windows, and other such openings should be covered if they are less than 4 feet above the ground or accessible from ledges, pipes, or wires (Figure 4.5). Additional tactics include using proper materials for rat proofing. For example, sheet metal of at least 26 gauge, ¼-inch or ½-inch hardware cloth, and cement are all suitable rat-resistant materials. However, ½-inch hardware cloth has little value against house mice. Tight fittings and self-closing doors should be constructed. Rodent runways can be hidden behind double walls; therefore, spaces between walls and floor-supporting beams should be blocked with fire stops. A proper rodent-proofing strategy must bear in mind that rats can routinely jump 2 feet vertically, dig 4 feet or more to get under a foundation, climb rough walls or smooth pipes up to 3 inches in diameter, and routinely travel on electric or telephone wires.

The first three strategies-good sanitation techniques, habitat denial, and rat proofing-should be used initially in any rodent management program. Should they fail, the fourth strategy is a killing program, which can vary from a family cat to the professional application of rodenticides. Cats can be effective against mice but typically are not useful against a rat infestation. Over-the-counter rodenticides can be purchased and used by the homeowner or occupant. These typically are in the red squill or warfarin groups. A more effective alternative is trapping. There are a variety of devices to choose from when trapping rats or mice. The two main groups of rat and mouse traps are live traps (Figure 4.6) and kill traps (Figure 4.7). Traps usually are placed along walls, near runways and burrows, and in other areas. Bait is often used to attract the rodents to the trap. To be effective, traps must be monitored and emptied or removed quickly. If a rat caught in a trap is left there, other rats may avoid the traps. A trapping strategy also may include using live traps to remove these vermin.

# Cockroaches

Cockroaches have become well adapted to living with and near humans, and their hardiness is legendary. In light of these facts, cockroach control may become a homeowner's most difficult task because of the time and special knowledge it often involves. The cockroach is considered an allergen source and an asthma trigger for residents. Although little evidence exists to link the cockroach to specific disease outbreaks, it has been demonstrated to carry Salmonella typhimurium, Entamoeba histolytica, and the poliomyelitis virus. In addition, Kamble and Keith note that most cockroaches produce a repulsive odor that can be detected in infested areas.
The sight of cockroaches can cause considerable psychologic or emotional distress in some individuals. Cockroaches do not bite, but they do have heavy leg spines that may scratch. According to MPMH, there are 55 species of cockroaches in the United States. As a group, they tend to prefer a moist, warm habitat because most are tropical in origin. Although some tropical cockroaches feed only on vegetation, cockroaches of public health interest tend to live in structures and are customarily scavengers. Cockroaches will eat a great variety of materials, including cheese and bakery products, but they are especially fond of starchy materials, sweet substances, and meat products.

Cockroaches are primarily nocturnal; daytime sightings may indicate potentially heavy infestations. They tend to hide in cracks and crevices and can move freely from room to room or between adjoining housing units via wall spaces, plumbing, and other utility installations. Entry into homes is often accomplished through food and beverage boxes, grocery sacks, animal food, and household goods carried into the home.

The species of public health interest that commonly inhabit human dwellings (Figures 4.8-4.13) include the following: German cockroach (Blattella germanica), American cockroach (Periplaneta americana), Oriental cockroach (Blatta orientalis), brownbanded cockroach (Supella longipalpa), Australian cockroach (Periplaneta australasiae), smoky-brown cockroach (Periplaneta fuliginosa), and brown cockroach (Periplaneta brunnea).

Four management strategies exist for controlling cockroaches. The first is prevention. This strategy includes inspecting items being carried into the home and sealing cracks and crevices in kitchens, bathrooms, exterior doors, and windows. Structural modifications include weather stripping and pipe collars. The second strategy is sanitation, which denies cockroaches food, water, and shelter. These efforts include quickly cleaning food particles from shelving and floors; timely washing of dinnerware; and routine cleaning under refrigerators, stoves, furniture, and similar areas. If pets are fed indoors, pet food should be stored in tight containers and not left in bowls overnight, and litter boxes should be cleaned routinely. Water sources should be denied by repairing leaking plumbing, drains, and sink traps; shelter should be denied by purging clutter, such as papers, soiled clothing, and rags. The third strategy is trapping. Commercially available cockroach traps can be used to capture roaches and serve as a monitoring device. The most effective trap placement is against vertical surfaces, primarily corners, and under sinks, in cabinets, basements, and floor drains. The fourth strategy is chemical control. The use of chemicals typically indicates that the other three strategies have been applied incorrectly; numerous insecticides are available, and appropriate information is obtainable from EPA.

# Fleas

The most important fleas as disease vectors are those that carry murine typhus and bubonic plague. In addition, fleas serve as intermediate hosts for some species of dog and rodent tapeworms that occasionally infest people. They also may act as intermediate hosts of filarial worms (heartworms) in dogs. In the United States, the most important disease related to fleas is bubonic plague, which is primarily a concern of residents in the southwestern and western parts of the country (Figure 4.14). Of approximately 2,000 species of fleas, the most common flea infesting both dogs and cats is the cat flea (Ctenocephalides felis).
Although numerous animals, both wild and domestic, can have flea infestations, it is from the exposure of domestic dogs and cats that most homeowners inherit flea infestation problems. According to MPMH, fleas are wingless insects varying from 1 to 8½ millimeters (mm) long, averaging 2 to 4 mm, that feed through a siphon or tube. They are narrow and compressed laterally, with backwardly directed spines that adapt them for moving between the hairs and feathers of mammals and birds, and they have long, powerful legs adapted for jumping. Both sexes feed on blood, and the female requires a blood meal before she can produce viable eggs. Fleas tend to be host-specific, feeding on only one type of host; however, they will infest other species in the absence of the favored host. They are found in relative abundance on animals that live in burrows and sheltered nests, whereas mammals and birds with no permanent nests, or that are exposed to the elements, tend to have light infestations.

MPMH notes that fleas undergo complete metamorphosis (egg, larva, pupa, and adult). The time it takes to complete the life cycle from egg to adult varies according to species, temperature, humidity, and food availability; under favorable conditions, some species can complete a generation in as little as 2 or 3 weeks. Flea eggs usually are laid singly or in small groups among the feathers or hairs of the host or in a nest, and they are often laid in the carpets of living quarters if the primary host is a household pet. Eggs are smooth, spherical to oval, light colored, and large enough to be seen with the naked eye. An adult female flea can produce up to 2,000 eggs in a lifetime. Flea larvae are small (2 to 5 mm), white, and wormlike, with a darker head and a body that will appear brown if they have fed on flea feces. This stage is mobile and moves away from light; thus, larvae typically are found in shaded areas or under furniture. The three larval stages are completed in 5 to 12 days, although this may take several months, depending on environmental conditions. After completing development, the larvae spin a cocoon of silk encrusted with granules of sand or various types of debris to form the pupal stage. The pupal stage can be dormant for 140 to 170 days, and in some areas of the country fleas can survive through the winter in this form. After development, the pupae are stimulated to emerge as adults by movement, pressure, or heat. The pupal form of the flea is resistant to insecticides: an initial treatment, while killing egg, larval, and adult forms, will not kill the pupae, so a reapplication will often be necessary. Adults usually are ready to feed about 24 hours after they emerge from the cocoon and will begin to feed within 10 seconds of landing on a host. Mating usually follows the initial blood meal, and egg production begins 24 to 48 hours after a blood meal. The adult flea lives approximately 100 days, depending on environmental conditions.

Following are some guidelines for controlling fleas:

- The most important principle in a total flea control program is simultaneously treating all pets and their environments (indoor and outdoor).
- Before using insecticides, thoroughly clean the environment, removing as many fleas as possible, regardless of life stage. This includes indoor vacuuming and carpet steam cleaning, with special attention paid to source points where pets spend most of their time.
- Outdoor cleanup should include mowing, yard raking, and removing organic debris from flowerbeds and under bushes.
- Insecticide should be applied to the indoor and outdoor environments and to the pet.
- Reapplication to heavily infested source points in the home and the yard may be needed to eliminate preemerged adults.

# Flies

The historical attitude of Western society toward flies has been one of aesthetic disdain. The public health view is to classify flies as biting or nonbiting. Biting flies include sand flies, horseflies, and deerflies; nonbiting flies include houseflies, bottleflies, and screwworm flies. The latter group is often referred to as synanthropic because of its close association with humans. In general, the presence of flies is a sign of poor sanitation. The primary concern of most homeowners is nonbiting flies.

According to MPMH, the housefly (Musca domestica) (Figure 4.16) is one of the most widely distributed insects, occurring throughout the United States, and is usually the predominant fly species in homes and restaurants. M. domestica is also the most prominent human-associated (synanthropic) fly in the southern United States. Because of its close association with people, its abundance, and its ability to transmit disease, it is considered a greater threat to human welfare than any other species of nonbiting fly. Each housefly can easily carry more than 1 million bacteria on its body. Some of the disease-causing agents transmitted by houseflies to humans are Shigella spp. (shigellosis: dysentery and diarrhea), Salmonella spp. (typhoid fever), Escherichia coli (traveler's diarrhea), and Vibrio cholerae (cholera). Sometimes these organisms are carried on the fly's tarsi or body hairs, and frequently they are regurgitated onto food when the fly attempts to liquefy the food for ingestion.

The fly life cycle is similar across the synanthropic group. MPMH notes that the egg and larval stages develop in animal and vegetable refuse; favored breeding sites include garbage, animal manure, spilled animal feed, and soil contaminated with organic matter. Under favorable environmental conditions, the eggs hatch in 24 hours or less. Normally, a female fly produces 500 to 600 eggs during her lifetime. The creamy white larvae (maggots) are about ½-inch long when mature and move within the breeding material to maintain optimum temperature and moisture conditions; this stage lasts an average of 4 to 7 days in warm weather. The larvae then move to dry parts of the breeding medium, or out of it onto the soil or to sheltered places under debris, to pupate; this stage usually lasts 4 to 5 days. When the pupal stage is complete, the adult fly exits the puparium, dries, hardens, and flies away to feed, with mating occurring soon after emergence. Figure 4.17 demonstrates the typical fly life cycle.

Control of the housefly hinges on good sanitation, that is, denying flies food sources and breeding sites. This includes the proper disposal of food wastes by placing garbage in cans with close-fitting lids; cans need to be washed and cleaned periodically to remove food debris. The disposal of garbage in properly operated sanitary landfills is paramount to fly control. The presence of adult flies can be addressed in various ways. Outside methods include the limited placement of common mercury vapor lamps, which tend to attract flies; less-attractive sodium vapor lamps should be used near the home.
Self-closing doors in the home will deny entrance, as will proper-fitting and well-maintained screening on doors and windows. Larger flies use homes for shelter from the cold but do not reproduce inside the home. Caulking entry points and using fly swatters are effective measures and much safer than the use of most pesticides. Insecticide "bombs" can be used in attics and other rooms that can be isolated from the rest of the house; however, these should be applied away from food, in areas where flies rest.

The blowfly is a fairly large, metallic green, gray, blue, bronze, or black fly. Blowflies may spend the winter in homes or other protected sites, but they will not reproduce during this time. They breed most commonly on decayed carcasses (e.g., dead squirrels, rodents, and birds) and in the droppings of dogs or other pets during the summer; thus, removal of these sources is imperative. Small animals, on occasion, may die inside walls or under the crawl space of a house, and a week or two later blowflies or maggots may appear. The adult blowfly is also attracted to gas leaks.

# Termites

According to Gold et al., subterranean termites are the most destructive insect pests of wood in the United States, causing more than $2 billion in damage each year. Annually, this is more property damage than that caused by fire and windstorms combined. In the natural world, these insects are beneficial because they break down dead trees and other wood materials that would otherwise accumulate; this biomass is recycled to the soil as humus. MPMH, on the other hand, notes that these insects can damage a building so severely that it may have to be replaced. Termites consume wood and other cellulose products, such as paper, cardboard, and fiberboard. They will also destroy structural timbers, pallets, crates, furniture, and other wood products, and they will damage many materials they do not normally eat as they search for food. The tunneling efforts of subterranean termites can penetrate lead- and plastic-covered electric cable and cause electrical system failure. In nature, termites may live for years in tree stumps or lumber beneath concrete buildings before they penetrate hairline cracks in floors and walls, as well as expansion joints, to search for food in areas such as interior door frames and immobile furniture. Termite management costs to homeowners are exceeded only by cockroach control costs.

Lyon notes that termites are frequently mistaken by homeowners for ants and often are referred to erroneously as white ants. Typical signs of termite infestation occur in March through June and in September and October. Swarming is an event in which a group of adult male and female reproductives leave the nest to establish a new colony. If the emergence happens inside a building, flying termites may constitute a considerable nuisance; these pests can be collected with a vacuum cleaner or otherwise disposed of without using pesticides. Each homeowner should be aware of the following signs of termite infestation:

- Pencil-thin mud tubes extending over the inside and outside surfaces of foundation walls, piers, sills, joists, and similar areas (Figures 4.18 and 4.19).
- Presence of winged termites or their shed wings on windowsills and along the edges of floors.
- Damaged wood hollowed out along the grain and lined with bits of mud or soil.

According to Oi et al., termite tubes and nests are made of mud and carton. Carton is composed of partially chewed wood, feces, and soil packed together.
Tubes maintain the high humidity required for survival, protect termites from predators, and allow termites to move from one spot to another.

According to Lyon, the reproductives can be winged or wingless, with the latter found in colonies to serve as replacements for the primary reproductives. The primary reproductives (alates) vary in color from pale yellow-brown to coal black, are 3/8-inch to ½-inch in length, are flattened dorso-ventrally, and have pale or smoke-gray to brown wings. The secondary reproductives have short wing buds and are white to cream colored. The workers are the same size as the primary reproductives, are white to grayish-white with a yellow-brown head, and are wingless. The soldiers resemble workers in that they are wingless, but soldiers have large, rectangular, yellowish-brown heads with large jaws.

MPMH states there are five families of termites found in the world, with four of them occurring in the United States. The families in the United States are Hodotermitidae (rotten-wood termites), Kalotermitidae (dry-wood termites), Rhinotermitidae (subterranean termites), and Termitidae (desert termites). Subterranean termites typically work in wood aboveground, but they must have direct contact with the ground to obtain moisture. Nonsubterranean termites colonize above the ground and feed on cellulose; however, their life cycles and methods of attack, and consequently the methods of control, are quite different. Nonsubterranean termites in the United States are commonly called dry-wood termites.

In the United States, according to MPMH, native subterranean termites are the most important and the most common. These termites include the genus Reticulitermes, occurring primarily in the continental United States, and the genus Heterotermes, occurring in the Virgin Islands, Puerto Rico, and the deserts of California and Arizona. The appearance, habits, and type of damage they cause are similar. The Formosan termite (Coptotermes formosanus) is the newest species to become established in the United States. It is a native of the Pacific Islands and spread from Hawaii and Asia to the United States during the 1960s. It is now found along the Gulf Coast, in California, and in South Carolina, and it is expected to spread to other areas as well. Formosan termites cause greater damage than do native species because of their more vigorous and aggressive behavior and their ability to rapidly reproduce, build tubes and tunnels, and seek out new items to infest. They have also shown more resistance to some soil pesticides than native species. Reproductives (swarmers) are larger than those of native species, reaching up to 5/8-inch in length, and are yellow to brown in color. Swarmers have hairy-looking wings and swarm after dusk, unlike native species, which swarm in the daytime. Formosan soldiers have more oval-shaped heads than do native species; on top of the head is an opening that emits a sticky, whitish substance.

Differentiating an ant from the dark brown or black termite reproductives can be accomplished by noting the respective wings and body shape. MPMH states that a termite has four wings of about equal length, nearly twice as long as the body; by comparison, ant wings are only a little longer than the body, and the hind pair is much shorter than the front pair. Additionally, ants typically have a narrow waist, with the abdomen connected to the thorax by a thin petiole, whereas termites do not have a narrow or pinched waist. Entomologists refer to winged ants and termites as alates. Figure 4.20a and b demonstrates the differences between the ant and the termite.
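Where the wing and waist characters described above need to be applied systematically, they amount to a short decision rule. The sketch below encodes them for illustration only: the data fields, the 10% wing-length tolerance, and the function name are assumptions rather than part of any published identification key, and an ambiguous specimen is best confirmed by a professional.

```python
# Illustrative decision rule for separating termite alates from winged ants,
# based on the characters in the text: termites have four wings of about equal
# length, nearly twice as long as the body, and no pinched waist.
from dataclasses import dataclass

@dataclass
class Alate:
    body_len_mm: float
    fore_wing_mm: float
    hind_wing_mm: float
    pinched_waist: bool  # abdomen joined to thorax by a thin petiole (ant-like)

def looks_like_termite(a: Alate) -> bool:
    # Fore and hind wings of about equal length (the tolerance is an assumption).
    wings_equal = abs(a.fore_wing_mm - a.hind_wing_mm) / max(a.fore_wing_mm, a.hind_wing_mm) < 0.10
    # Wings "nearly twice as long as the body."
    wings_long = a.fore_wing_mm >= 1.8 * a.body_len_mm
    return wings_equal and wings_long and not a.pinched_waist

# A winged ant: hind wings much shorter than fore wings, narrow waist.
print(looks_like_termite(Alate(6.0, 7.0, 4.5, pinched_waist=True)))     # False
# A termite alate: four wings of about equal length, no pinched waist.
print(looks_like_termite(Alate(6.0, 11.5, 11.0, pinched_waist=False)))  # True
```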
Dry-wood termites (Cryptotermes spp.) live entirely in moderately to extremely dry wood. They require contact with neither the soil nor any other moisture source, and they may invade isolated pieces of furniture, fence posts, utility poles, firewood, and structures. Dry-wood termite colonies are not as large as those of other species in the United States, so they can occupy small wooden articles, which are one way these insects spread to different locations. They are of major economic importance in southern California, in Arizona, and along the Gulf Coast. The West Indian dry-wood termite is a problem in Puerto Rico, the U.S. Virgin Islands, Hawaii, parts of Florida and Louisiana, and a number of U.S. Pacific Island territories. Dry-wood termites are slightly larger than most other species, ranging from ½ inch to 5/8 inch long, and are generally lighter in color.

Damp-wood termites do not need contact with damp ground as subterranean termites do, but they do require wood with a higher moisture content. Once established, however, these termites may extend into slightly drier wood. Termites of minor importance are the tree-nesting groups. The nests of these termites are found in trees, posts, and, occasionally, buildings, and their aboveground nests are connected to the soil by tubes. Tree-nesting termites may be a problem in Puerto Rico and the U.S. Virgin Islands.

The risk for encountering subterranean termites in the United States is greatest in the southeastern states and in southwestern California; in general, the risk for termite infestation tends to decrease as the latitude increases northward. According to Potter, homeowners can reduce the risk for termite attack by adhering to the following suggestions:

- Eliminate wood contact with the ground. Earth-to-wood contact provides termites with simultaneous access to food, moisture, and shelter, in conjunction with direct, hidden entry into the structure. In addition, the homeowner or occupant should be aware that pressure-treated wood is not immune to termite attack, because termites can enter through the cut ends and build tunnels over the surfaces.
- Do not allow moisture to accumulate near the home's foundation. Proper drainage, repair of plumbing, and proper grading will help to reduce the presence of moisture, which attracts termites.
- Reduce humidity in crawl spaces. Most building codes state that crawl spaces should be vented at a rate of 1 square foot of vent per 150 square feet of crawl space area. This rate can be reduced to 1 square foot per 300 to 500 square feet of crawl space area for crawl spaces equipped with a polyethylene or equivalent vapor barrier (the sketch following this list works through the arithmetic). Vent placement design includes positioning one vent within 3 feet of each building corner, and trimming and controlling shrubs so that they do not obstruct the vents is imperative. Installing 4- to 6-mil polyethylene sheeting over a minimum of 75% of the crawl space floor will reduce crawl-space moisture. Covering the entire floor of the crawl space with such material can reduce two potential home problems at one time: excess moisture and radon (Chapter 5). The barrier reduces both the absorption of moisture from the air and the release of moisture into the air of the crawl space from the underlying soil.
- Never store firewood, lumber, or other wood debris against the foundation or inside the crawl space. Termites are both attracted to and fed by this type of storage. Wood stacked in contact with a dwelling, as well as vines, trellises, and dense plant material, provides a pathway for termites to bypass soil barrier treatment.
- Use decorative wood chips and mulch sparingly. Cellulose-containing products attract termites, especially materials with moisture-holding properties, such as mulch. The homeowner should never allow these products to contact wood components of the home. Crushed stone or pea gravel is recommended as being less attractive to termites and helpful in diminishing other pest problems.
- Have the structure treated by a pest control professional. The final, and most effective, strategy to prevent infestation is to treat the soil around and beneath the building with termiticide; the treated ground is then both repellent and toxic to termites.
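As a worked example of the crawl-space venting arithmetic cited in the list above, the following minimal sketch applies the quoted ratios: 1 square foot of vent per 150 square feet of crawl space, relaxed to 1 square foot per 300 to 500 square feet when a vapor barrier is installed. The function name and the choice of the conservative end of the 300-to-500 range are assumptions, and the local building code always governs.

```python
# Crawl-space vent sizing per the rule of thumb quoted in the text.
def required_vent_area_sqft(crawl_space_sqft: float, vapor_barrier: bool) -> float:
    if vapor_barrier:
        # With a vapor barrier the cited ratio relaxes to 1:300-1:500;
        # this sketch uses the conservative (larger-vent) end of that range.
        return crawl_space_sqft / 300.0
    return crawl_space_sqft / 150.0

# Example: a 1,500-square-foot crawl space.
print(required_vent_area_sqft(1500, vapor_barrier=False))  # 10.0 sq ft of vent area
print(required_vent_area_sqft(1500, vapor_barrier=True))   # 5.0 sq ft of vent area
```

Remember, too, that placement matters as much as total area: the text calls for one vent within 3 feet of each building corner, kept clear of shrubbery.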
Figure 4.23 demonstrates some typical points of attack by subterranean termites and some faulty construction practices that can contribute to subterranean termite infestations:

1. Cracks in the foundation permit hidden points of entry from soil to sill.
2. Posts set through concrete contact the substructural soil. Watch door frames and intermediate supporting posts.
3. Wood-framing members contact earth fill under a concrete slab.
4. Form boards left in place contribute to the termite food supply.
5. Leaking pipes and dripping faucets sustain soil moisture; excess irrigation has the same effect.
6. Shrubbery blocks air flow through vents.
7. Debris supports a termite colony until a large population attacks the superstructure.
8. A heating unit accelerates termite development by maintaining the warmth of the colony on a year-round basis.
9. A foundation wall that is too low permits wood to contact soil; adding topsoil often builds the exterior grade up to sill level.
10. A footing that is too low, or soil thrown against it, causes wood-soil contact. There should be 8 inches of clean concrete between soil and pier block.
11. Stucco carried down over the concrete foundation permits hidden entrance between stucco and foundation if the bond fails.
12. Insufficient clearance for inspection also permits easy construction of termite shelter tubes from soil to wood.
13. Wood framing of the crawl hole forms wood-soil contact.

Lyon notes the following alternative termite control measures:

- Nematodes. Certain species of parasitic roundworms (nematodes) will infest and kill termites and other soil insects. Varying success has been experienced with this method because it depends on several variables, such as soil moisture and soil type.
- Sand as a physical barrier. This approach requires preconstruction planning and depends on termites being unable to manipulate the sand to create tunnels. Some research in California and Hawaii has indicated early success.
- Chemical baits. This method uses wood or laminated, texture-flavored cellulose impregnated with a toxicant and/or insect growth regulator. The worker termite feeds on the substance and carries it back to the nest, reducing or eliminating the entire colony.

According to HomeReports.com, an additional system is to strategically place a series of baits around the house, with the intention that termite colonies encounter one or more of the baits before approaching the house. Once termite activity is observed, the bait wood is replaced with a poison, which the termites bring back to the colony; the colony is then either eliminated or substantially reduced. This system is relatively new to the market, and its success depends heavily on the termites finding the bait before finding and damaging the house.

Additional measures include construction techniques that discourage termite attacks, as demonstrated in Figure 4.24. Termites often invade homes by way of the foundation, either by crawling up the exterior surface, where their activity is usually obvious, or by traveling inside hollow block masonry. One way to deter their activity is to block their access points on or through the foundation. Metal termite shields have been used for decades to deter termite movement along foundation walls and piers up to the wooden structure. A properly installed shield should be made of noncorroding metal, should have no cracks or gaps along the seams, and should extend at least 2 inches out from the foundation and 2 inches down at a 45° angle. Improperly installed (i.e., not soldered or sealed properly), damaged, or deteriorated termite shields may allow termites to reach parts of the wooden floor system.
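Because the shield specification above is essentially a short checklist (noncorroding metal, sealed seams, and at least 2 inches of projection out and down at 45°), it can be stated compactly. The sketch below does so for illustration only; the parameter names and data shape are assumptions, not part of any inspection standard.

```python
# Hypothetical checklist for the metal termite-shield specification in the text.
def shield_meets_spec(extends_out_in: float,
                      extends_down_in: float,
                      noncorroding: bool,
                      seams_sealed: bool) -> bool:
    geometry_ok = extends_out_in >= 2.0 and extends_down_in >= 2.0
    return geometry_ok and noncorroding and seams_sealed

# A shield projecting only 1 inch out fails, regardless of material.
print(shield_meets_spec(1.0, 2.0, noncorroding=True, seams_sealed=True))  # False
print(shield_meets_spec(2.0, 2.0, noncorroding=True, seams_sealed=True))  # True
```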
An alternative to using termite shields on a hollow-block foundation is to fill the block with concrete or to lay a few courses of solid or concrete-filled brick, which is often done anyway to level foundations; these are referred to as masonry caps. The same approach can be used with support piers in the crawl space. Solid caps (i.e., a continuously poured concrete cap) are best at stopping termites but are not commonly used. Concrete-filled brick caps should deter termite movement or force the termites through small gaps, thus allowing them to be spotted during an inspection.

# Fire Ants

According to MPMH, ants are among the most numerous species on earth. Ants are in the same order as wasps and bees and, because of their geographic distribution, they are universally recognized (Figure 4.25). The life cycle of the fire ant begins with the mating of the winged forms (alates) some 300 to 800 feet in the air, typically in the late spring or early summer. The male dies after mating; the newly mated queen finds a suitable moist site, drops her wings, and burrows into the soil, sealing the opening behind her. Ants undergo complete metamorphosis and, therefore, have egg, larval, pupal, and adult stages. The new queen will begin laying eggs within 24 hours. Once fully developed, she will produce approximately 1,600 eggs per day over a maximum life span of 7 years. The hatching eggs produce soft, whitish, legless larvae, which are fed by the worker ants. Pupae resemble adults in form but are soft, nonpigmented, and immobile.

There are at least three distinct castes of ants: workers, queens, and males. Typically, the males have wings, which they retain until death. Queens, the largest of the three castes, normally have wings but lose them after mating. The worker, which is also a female, is never winged, except as a rare abnormality. Within this hierarchy, mature colonies contain males and females that are capable of flight and reproduction; these are known as reproductives, and an average colony may produce approximately 4,500 of them per year. A healthy nest usually produces two nuptial flights of reproductives each year, and a healthy, mature colony may contain more than 250,000 ants. Though uncommon among ants, multiple queen colonies (10 or more queens) occur among red imported fire ants (RIFAs). RIFAs prefer open and sun-exposed areas.
They are found in cultivated fields, cemeteries, parks, and yards, and even inside cars, trucks, and recreational vehicles. RIFAs are attracted to electrical currents and are known to nest in and around heat pumps, junction boxes, and similar equipment. They are omnivorous and thus will attack most things, living or dead. Their economic effects are felt through their destruction of the seeds, fruit, shoots, and seedlings of numerous native plant species. Fire ants are known to tend pests such as scale insects, mealybugs, and aphids in order to feed on their sweet waste excretion (honeydew); RIFAs transport these insects to new feeding sites and protect them from predators. On the positive side, the fire ant is a predator of ticks and controls the ground stage of horn flies. The urban dweller with a RIFA infestation may find significant damage to landscape plants, along with reductions in the number of wild birds and mammals. RIFAs can discourage outdoor activities and be a threat to young animals or small confined pets. RIFA nests typically are not found indoors, but around homes, roadways, and structures, as well as under sidewalks. The shifting of soil after RIFAs abandon sites has resulted in collapsing structures. Figure 4.27 shows a fire ant mound with fire ants and a measure of their relative size.

The medical complications of fire ant stings have been noted in the literature since 1957. People with disabilities, people with reduced feeling in their feet and legs, young children, and those with mobility issues are at risk for sustaining numerous stings before escaping or receiving assistance. Fatalities have resulted from attacks on the elderly and on infants. Control of the fire ant is primarily focused on the mound, using attractant baits consisting of soybean oil, corn grits, or chemical agents. The bait is picked up by the worker ants and taken deep into the mound to the queen. These products typically require weeks to work. Individual mound treatment is usually most effective in the spring. The key is to locate and treat all mounds in the area to be protected; if young mounds are missed, the area can become reinfested in less than a year.

# Mosquitoes

All mosquitoes have four stages of development (egg, larva, pupa, and adult) and spend their larval and pupal stages in water. The females of some mosquito species deposit eggs on moist surfaces, such as mud or fallen leaves, that may be near water but dry; later, rain or high tides reflood these surfaces and stimulate the eggs to hatch into larvae. The females of other species deposit their eggs directly on the surface of still water in such places as ditches, street catch basins, tire tracks, streams that are drying up, and fields or excavations that hold water for some time. This water is often stagnant and close to the home, in discarded tires, ornamental pools, unused wading and swimming pools, tin cans, bird baths, plant saucers, and even gutters and flat roofs. The eggs soon hatch into larvae. In the hot summer months, larvae grow rapidly, become pupae, and emerge 1 week later as flying adult mosquitoes. A few important spring species have only one generation per year; however, most species have many generations per year, and their rapid increase in numbers becomes a problem. When adult mosquitoes emerge from the aquatic stages, they mate, and the female seeks a blood meal to obtain the protein necessary for the development of her eggs. The females of a few species may produce a first batch of eggs without this first blood meal.
After a blood meal is digested and the eggs are laid, the female mosquito again seeks a blood meal to produce a second batch of eggs. Depending on her stamina and the weather, she may repeat this process many times without mating again. The male mosquito does not take a blood meal but may feed on plant nectar; he lives for only a short time after mating. Most mosquito species survive the winter, or overwinter, in the egg stage, awaiting the spring thaw, when waters warm and the eggs hatch. A few important species spend the winter as adult, mated females, resting in protected, cool locations such as cellars, sewers, crawl spaces, and well pits. With warm spring days, these females seek a blood meal and begin the cycle again. Only a few species can overwinter as larvae.

Mosquitoborne diseases, such as malaria and yellow fever, have plagued civilization for thousands of years; newer vectorborne threats include Lyme disease (carried by ticks) and West Nile virus. Organized mosquito control in the United States has greatly reduced the incidence of mosquitoborne diseases. However, mosquitoes can still transmit a few diseases, including eastern equine encephalitis and St. Louis encephalitis. The frequency and extent of these diseases depend on a complex series of factors, and mosquito control agencies and health departments cooperate in monitoring these factors and reducing the chance for disease. It is important to recognize that young adult female mosquitoes taking their first blood meal do not transmit diseases; it is the older females that, if they picked up a disease organism in their first blood meal, can transmit the disease during the second blood meal.

The proper way to manage the mosquito problem in a community is through an organized integrated pest management system that includes all approaches that safely manage the problem; the spraying of toxic agents is but one of many approaches. When mosquitoes are numerous and interfere with living, recreation, and work, the various measures described in the following paragraphs can be used to reduce their annoyance, depending on location and conditions.

# How to Reduce the Mosquito Population

The most efficient method of controlling mosquitoes is to reduce the availability of water suitable for larval and pupal growth. Large lakes, ponds, and streams that have waves, contain mosquito-eating fish, and lack aquatic vegetation around their edges do not produce mosquitoes; mosquitoes thrive in smaller bodies of water in protected places. Examine your home and neighborhood and take the following precautions recommended by the New Jersey Agricultural Experiment Station:

- dispose of unwanted tin cans and tires;
- clean clogged roof gutters and drain flat roofs;
- turn over unused wading pools and other containers that tend to collect rainwater;
- change water in birdbaths, fountains, and troughs twice a week;
- clean and chlorinate swimming pools;
- cover containers tightly with window screen or plastic when storing rainwater for garden use during drought periods;
- flush sump-pump pits weekly; and
- stock ornamental pools with fish.

If mosquito breeding is extensive in areas such as woodland pools or roadside ditches, the problem may be too great for individual residents. In such cases, call the organized mosquito control agency in your area; these agencies have highly trained personnel who can deal with the problem effectively. Several commercially available insecticides can be effective in controlling larval and adult mosquitoes.
These chemicals are considered sufficiently safe for use by the public. Select a product whose label states that the material is effective against mosquito larvae or adults. For safe and effective use, read the label and follow the instructions for applying the material; the label lists those insects that EPA agrees are effectively controlled by the product. For use against adult mosquitoes, some liquid insecticides can be mixed according to directions and sprayed lightly on building foundations, hedges, low shrubbery, ground covers, and grasses. Do not overapply liquid insecticides: excess spray drips from the sprayed surfaces to the ground, where it is ineffective. The purpose of such sprays is to leave a fine deposit of insecticide on surfaces where mosquitoes rest; such sprays are not effective for more than 1 or 2 days.

Some insecticides are available as premixed products or in aerosol cans. These devices spray the insecticide as very small aerosol droplets that remain floating in the air and hit flying mosquitoes. Apply the sprays upwind, so the droplets drift through the area where mosquito control is desired. Rather than applying too much of these aerosols initially, it is more practical to apply them briefly but periodically, thereby eliminating those mosquitoes that recently flew into the area.

Various commercially available repellents can be purchased as a cream or lotion or in pressurized cans and then applied to the skin and clothing. Some manufacturers also offer clothing impregnated with repellents; coarse, repellent-bearing particles to be scattered on the ground; and candles whose wicks can be lit to release a repellent chemical. The effectiveness of all repellents varies from location to location, from person to person, and from mosquito to mosquito. Repellents can be especially effective in recreation areas, where mosquito control may not be conducted. All repellents should be used according to the manufacturer's instructions.

Mosquitoes are attracted by perspiration, warmth, body odor, carbon dioxide, and light. Mosquito control agencies use some of these attractants to help determine the relative number of adult mosquitoes in an area. Several devices are sold that are supposed to attract, trap, and destroy mosquitoes and other flying insects. However, if these devices are attractive to mosquitoes, they probably attract more mosquitoes into the area and may, therefore, increase rather than decrease mosquito annoyance.

# Chapter 5: Indoor Air Pollutants and Toxic Materials

# Introduction

We all face a variety of risks to our health as we go about our day-to-day lives. Driving in cars, flying in airplanes, engaging in recreational activities, and being exposed to environmental pollutants all pose varying degrees of risk. Some risks are simply unavoidable. Some we choose to accept because to do otherwise would restrict our ability to lead our lives the way we want. Some are risks we might decide to avoid if we had the opportunity to make informed choices. Indoor air pollution and exposure to hazardous substances in the home are risks we can do something about.

In the last several years, a growing body of scientific evidence has indicated that the air within homes and other buildings can be more seriously polluted than the outdoor air in even the largest and most industrialized cities. Other research indicates that people spend approximately 90% of their time indoors. Thus, for many people, the risks to health from exposure to indoor air pollution may be greater than the risks from outdoor pollution.
In addition, people exposed to indoor air pollutants for the longest periods are often those most susceptible to their effects. Such groups include the young, the elderly, and the chronically ill, especially those suffering from respiratory or cardiovascular disease.

# Indoor Air Pollution

Numerous forms of indoor air pollution are possible in the modern home. Air pollutant levels in the home increase if not enough outdoor air is brought in to dilute emissions from indoor sources and to carry indoor air pollutants out of the home. In addition, high temperature and humidity levels can increase the concentrations of some pollutants. Indoor pollutants can be placed into two groups: biologic and chemical.

# Biologic Pollutants

Biologic pollutants include bacteria, molds, viruses, animal dander, cat saliva, dust mites, cockroaches, and pollen. These biologic pollutants can be related to some serious health effects. Some biologic pollutants, such as the measles, chickenpox, and influenza viruses, are transmitted through the air; the first two are now preventable with vaccines. Influenza virus transmission, although vaccines have been developed, remains of concern in crowded indoor conditions and can be affected by ventilation levels in the home.

Common pollutants, such as pollen, originate from plants and can elicit symptoms such as sneezing, watery eyes, coughing, shortness of breath, dizziness, lethargy, fever, and digestive problems. Allergic reactions are the result of repeated exposure and immunologic sensitization to particular biologic allergens. Although pollen allergies can be bothersome, asthmatic responses to pollutants can be life threatening. Asthma is a chronic disease of the airways that causes recurrent and distressing episodes of wheezing, breathlessness, chest tightness, and coughing. Asthma can be broken down into two groups based on the causes of an attack: extrinsic (allergic) and intrinsic (nonallergic). Most people with asthma do not fall neatly into either type, but somewhere in between, displaying characteristics of both classifications.

Extrinsic asthma has a known allergic cause, such as allergies to dust mites, various pollens, grass or weeds, or pet danders; individuals with extrinsic asthma produce an excess amount of antibodies when exposed to triggers. Intrinsic asthma also has known triggers, but the connection between the trigger and the symptoms is not clearly understood, and there is no antibody hypersensitivity in intrinsic asthma. Intrinsic asthma usually starts in adulthood without a strong family history of asthma. Some of the known triggers of intrinsic asthma are infections, such as cold and flu viruses; exercise and cold air; industrial and occupational pollutants; food additives and preservatives; drugs, such as aspirin; and emotional stress.

Asthma is more common in children than in adults, with nearly 1 of every 13 school-age children having asthma. Low-income African-Americans and certain Hispanic populations suffer disproportionately, with urban inner cities having particularly severe problems. The impact of asthma on neighborhoods, school systems, and health care facilities is severe: one-third of all pediatric emergency room visits are due to asthma, and it is the fourth most prominent cause of physician office visits. Additionally, at 14 million school days lost each year, it is the leading cause of school absenteeism from chronic illness. The U.S.
population, on average, spends as much as 90% of its time indoors. Consequently, allergens and irritants from the indoor environment may play a significant role in triggering asthma episodes. A number of indoor environmental asthma triggers are biologic pollutants, including rodents (discussed in Chapter 4), cockroaches, mites, and mold.

# Cockroaches

The droppings, body parts, and saliva of cockroaches can be asthma triggers. Cockroaches are commonly found in crowded cities and in the southern United States. Allergens contained in the feces and saliva of cockroaches can cause allergic reactions or trigger asthma symptoms. A national study by Crain et al. of 994 inner-city allergic children from seven U.S. cities revealed that cockroaches were reported in 58% of the homes. The Community Environmental Health Resource Center reports that cockroach debris, such as body parts and old shells, triggers asthma attacks in individuals who are sensitized to cockroach allergen. After cockroaches have been eliminated, special attention to cleaning must be a priority to remove any remaining allergens that can act as asthma triggers.

# House Dust Mites

Another group of arthropods linked to asthma is the house dust mites. A link between asthmatic symptoms and house dust was suggested in 1921, but it was not until 1964 that investigators suggested that a mite could be responsible. Further investigation linked a number of mite species to the allergen response and revealed that humid homes have more mites and, subsequently, more allergens. In addition, researchers established that fecal pellets deposited by the mites accumulate in home fabrics and can become airborne via domestic activities such as vacuuming and dusting, resulting in inhalation by the inhabitants of the home.

House dust mites are distributed worldwide, with a minimum of 13 species identified from house dust. The two most common in the United States are the North American house dust mite (Dermatophagoides farinae) and the European house dust mite (D. pteronyssinus). According to Lyon, house dust mites thrive in homes that provide a source of food, shelter, and adequate humidity. Mites prefer relative humidity levels of 70% to 80% and temperatures of 75°F to 80°F (24°C to 27°C). Most mites are found in bedrooms, in bedding, where people spend up to a third of their lives; a typical used mattress may have from 100,000 to 10 million mites in it. In addition, carpeted floors, especially long, loose-pile carpet, provide a microhabitat for the accumulation of food and moisture for the mite, as well as protection from removal by vacuuming. The house dust mite's favorite food is human dander (skin flakes), which is shed at a rate of approximately 0.20 ounces per week. A good microscope and a trained observer are needed to detect mites. House dust mites also can be detected using diagnostic tests that measure the presence and infestation level of mites by combining dust samples collected from various places inside the home with indicator reagents.

If mites are assumed to be present and people with asthma live in the home, the precautions listed below should be taken:

- Use synthetic rather than feather and down pillows.
- Use an approved allergen-barrier cover to enclose the top and sides of mattresses and pillows and the base of the bed.
- Use a damp cloth to dust the plastic mattress cover daily.
- Change bedding and vacuum the bed base and mattress weekly.
- Use nylon or cotton cellulose blankets rather than wool blankets.
- Use hot (120°F-130°F [49°C-54°C]) water to wash all bedding, as well as room curtains.
- Eliminate or reduce fabric wall hangings, curtains, and drapes.
- Use wood, tile, linoleum, or vinyl floor covering rather than carpet. If carpet is present, vacuum regularly with a high-efficiency particulate air (HEPA) vacuum or a household vacuum with a microfiltration bag.
- Purchase stuffed toys that are machine washable.
- Use fitted sheets to help reduce the accumulation of human skin flakes on the mattress surface.

HEPA vacuums are now widely available and have been shown to be effective. A conventional vacuum tends to be inefficient as a control measure and results in a significant increase in airborne dust concentrations, but it can be used with multilayer microfiltration collection bags. Another approach to mite control is to reduce indoor humidity to below 50% and install central air conditioning. Two products are available to treat house dust mites and their allergens; these products contain the active ingredients benzyl benzoate and tannic acid.

# Pets

According to the U.S. Environmental Protection Agency (EPA), pets can be significant asthma triggers because of dead skin flakes, urine, feces, saliva, and hair. Proteins in the dander, urine, or saliva of warm-blooded animals can sensitize individuals and lead to allergic reactions or trigger asthmatic episodes. Warm-blooded animals include dogs, cats, birds, and rodents (hamsters, guinea pigs, gerbils, rats, and mice). Numerous strategies, such as the following, can diminish or eliminate animal allergens in the home:

- Eliminate animals from the home.
- Thoroughly clean the home (including floors and walls) after animal removal.
- If pets must remain in the home, reduce pet exposure in sleeping areas. Keep pets away from upholstered furniture, carpeted areas, and stuffed toys, and keep the pets outdoors as much as possible.

However, there is some evidence that pets introduced early into the home may prevent asthma. Several studies have shown that exposure to dogs and cats in the first year of life decreases a child's chances of developing allergies, and that exposure to cats significantly decreases sensitivity to cats in adulthood. Many other studies have shown a decrease in allergies and asthma among children who grew up on a farm and were around many animals.

# Mold

People are routinely exposed to more than 200 species of fungi indoors and outdoors. These include moldlike fungi, as well as other fungi, such as yeasts and mushrooms. The terms "mold" and "mildew" are nontechnical names commonly used to refer to any fungus that is growing in the indoor environment. Mold colonies may appear cottony, velvety, granular, or leathery, and may be white, gray, black, brown, yellow, greenish, or other colors. Many reproduce via the production and dispersion of spores. They usually feed on dead organic matter and, provided with sufficient moisture, can live off of many materials found in homes, such as wood, the cellulose in the paper backing on drywall, insulation, wallpaper, the glues used to bond carpet to its backing, and everyday dust and dirt. Certain molds can cause a variety of adverse human health effects, including allergic reactions and immune responses (e.g., asthma), infectious disease (e.g., histoplasmosis), and toxic effects (e.g., aflatoxin-induced liver cancer from exposure to this mold-produced toxin in food).
A recent Institute of Medicine (IOM) review of the scientific literature found sufficient evidence for an association between exposure to mold or other agents in damp indoor environments and the following conditions: upper respiratory tract symptoms, cough, wheeze, hypersensitivity pneumonitis in susceptible persons, and asthma symptoms in sensitized persons. A previous scientific review was more specific, concluding that sufficient evidence exists to support associations between fungal allergen exposure and both asthma exacerbation and upper respiratory disease. Finally, mold toxins can cause direct lung damage leading to pulmonary diseases other than asthma.

The topic of residential mold has received increasing public and media attention over the past decade. Many news stories have focused on problems associated with "toxic mold" or "black mold," which is often a reference to the toxin-producing mold Stachybotrys chartarum. This might give the impression that mold problems in homes are more frequent now than in past years; however, no good evidence supports this. Reasons for the increasing attention to this issue include high-visibility lawsuits brought by property owners against builders and developers, scientific controversies regarding the degree to which specific illness outbreaks are mold-induced, and an increase in the cost of homeowner insurance policies due to the increasing number of mold-related claims. Modern construction might be more vulnerable to mold problems for several reasons: tighter construction makes it more difficult for internally generated water vapor to escape, paper-backed drywall is in widespread use (paper is an excellent medium for mold growth when wet), and carpeting is also widely used.

Allergic Health Effects. Many molds produce numerous protein or glycoprotein allergens capable of causing allergic reactions in people. These allergens have been measured in spores as well as in other fungal fragments. An estimated 6%-10% of the general population and 15%-50% of those who are genetically susceptible are sensitized to mold allergens. Fifty percent of the 937 children tested in a large multicity asthma study sponsored by the National Institutes of Health showed sensitivity to mold, indicating the importance of mold as an asthma trigger among these children. Molds are thought to play a role in asthma in several ways: they produce many potentially allergenic compounds, and they may contribute through the release of irritants that increase the potential for sensitization or the release of toxins (mycotoxins) that affect immune response.

Toxics and Irritants. Many molds also produce mycotoxins that can be a health hazard on ingestion, dermal contact, or inhalation. Although common outdoor molds present in ambient air, such as Cladosporium cladosporioides and Alternaria alternata, do not usually produce toxins, many other mold species do. Toxin-producing fungi associated with wet buildings, such as Aspergillus versicolor, Fusarium verticillioides, Penicillium aurantiogriseum, and S. chartarum, can produce potent toxins. A single mold species may produce several different toxins, and a given mycotoxin may be produced by more than one species of fungi. Furthermore, toxin-producing fungi do not necessarily produce mycotoxins under all growth conditions; production depends on the substrate being metabolized, temperature, water content, and humidity.
Because species of toxin-producing molds generally have a higher water requirement than do common household molds, they tend to thrive only under conditions of chronic and severe water damage; Stachybotrys, for example, typically grows only under continuously wet conditions. It has been suggested that very young children may be especially vulnerable to certain mycotoxins; for example, associations have been reported between pulmonary hemorrhage (bleeding lung) deaths in infants and the presence of S. chartarum.

Causes of Mold. Mold growth can be caused by any condition that results in excess moisture. Common moisture sources include rain leaks (e.g., through roofs and wall joints); surface and groundwater leaks (e.g., poorly designed or clogged rain gutters and footing drains, basement leaks); plumbing leaks; and stagnant water in appliances (e.g., dehumidifiers, dishwashers, refrigerator drip pans, and condensing coils and drip pans in HVAC systems). Moisture problems can also be due to water vapor migration and condensation, including uneven indoor temperatures, poor air circulation, soil air entry into basements, contact of humid unconditioned air with cooled interior surfaces, and poor insulation on indoor chilled surfaces (e.g., chilled water lines). Problems can also be caused by the production of excess moisture within the home by humidifiers, unvented clothes dryers, overcrowding, and the like. Finished basements are particularly susceptible to mold problems because of the combination of poorly controlled moisture and mold-supporting materials (e.g., carpet and paper-backed sheetrock). There is also some evidence that mold spores from damp or wet crawl spaces can be transported by air currents into the upper living quarters.

Older, substandard housing occupied by low-income families can be particularly prone to mold problems because of inadequate maintenance (e.g., inoperable gutters, basement and roof leaks), overcrowding, inadequate insulation, lack of air conditioning, and poor heating. Low interior temperatures (e.g., when one or two rooms are left unheated) result in an increase in relative humidity, increasing the potential for water to condense on cold surfaces.

Mold Assessment Methods. Mold growth, or the potential for mold growth, can be detected by visual inspection for active or past microbial growth, detection of musty odors, and inspection for water staining or damage. If it is not possible or practical to inspect a residence, this information can be obtained using occupant questionnaires. Visual observation of mold growth, however, is limited by the facts that fungal elements such as spores are microscopic, that their presence is often not apparent until growth is extensive, and that growth can occur in hidden spaces (e.g., wall cavities and air ducts). Portable, hand-held moisture meters, which directly measure moisture levels in materials, may also be useful in qualitative home assessments to help pinpoint areas of potential biologic growth that may not otherwise be obvious during a visual inspection.

For routine assessments in which the goal is to identify possible mold contamination problems before remediation, it is usually unnecessary to collect and analyze air or settled dust samples for mold, because decisions about appropriate intervention strategies can typically be made on the basis of a visual inspection. Also, sampling and analysis costs can be relatively high, and the interpretation of results is not straightforward.
Air and dust monitoring may, however, be necessary in certain situations, including 1) if an individual has been diagnosed with a disease associated with fungal exposure through inhalation, 2) if it is suspected that the ventilation systems are contaminated, or 3) if the presence of mold is suspected but cannot be identified by a visual inspection or bulk sampling. Generally, indoor environments contain large reservoirs of mold spores in settled dust and contaminated building materials, of which only a relatively small amount is airborne at a given time.

Common methods for sampling for mold growth include bulk sampling, air sampling, and the collection of settled dust samples. In bulk sampling, portions of materials with visual or suspected mold growth (e.g., sections of wallboard, pieces of duct lining, carpet segments, or return air filters) are collected and directly examined to determine whether mold is growing and to identify the mold species or groups present. Surface sampling may be used in mold contamination investigations when a less destructive technique than bulk sampling is desired; for example, nondestructive samples of mold may be collected using a simple swab or adhesive tape. Air can also be sampled for mold using pumps that pull air across a filter medium, which traps airborne mold spores and fragments. It is generally recommended that outdoor air samples be collected concurrently with indoor samples, both for comparison purposes and to measure baseline ambient air conditions. Indoor contamination can be indicated by indoor mold distributions (both species and concentrations) that differ significantly from the distributions in outdoor samples. Captured mold spores can be examined under a microscope to identify the mold species or groups and determine concentrations, or they can be cultured on growth media and the resulting colonies counted and identified; both techniques require considerable expertise. Dust sampling involves the collection of settled dust samples (e.g., floor dust) using a vacuum method in which the dust is collected onto a porous filter medium or into a container. The dust is then processed in the laboratory, and the mold is identified by culturing viable spores.

Mold Standards. No standard numeric guidelines exist for assessing whether mold contamination exists in an area. In the United States, no EPA regulations or standards exist for airborne mold contaminants. Various governmental and private organizations have, however, proposed guidance on the interpretation of fungal measurements of environmental media in indoor environments (quantitative limits for fungal concentrations). Given evidence that young children may be especially vulnerable to certain mycotoxins, and in view of the potential severity of diseases associated with mycotoxin exposure, some organizations support a precautionary approach to limiting mold exposure; for example, the American Academy of Pediatrics recommends that infants under 1 year of age not be exposed at all to chronically moldy, water-damaged environments.
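To make the indoor-versus-outdoor comparison described above concrete, the sketch below flags taxa whose indoor air counts are out of line with concurrent outdoor samples. The 2:1 ratio threshold, the example counts, and the function name are illustrative assumptions only; as the text notes, real interpretation of sampling results requires considerable expertise.

```python
# Illustrative comparison of indoor vs. concurrent outdoor spore counts
# (spores per cubic meter). Indoor contamination is suggested when a taxon
# is markedly enriched indoors or is present indoors but absent outdoors.
def flag_indoor_contamination(indoor: dict, outdoor: dict, ratio: float = 2.0) -> list:
    flags = []
    for taxon, count in indoor.items():
        out = outdoor.get(taxon, 0.0)
        if out == 0.0 and count > 0.0:
            flags.append(f"{taxon}: found indoors but absent outdoors")
        elif out > 0.0 and count / out >= ratio:
            flags.append(f"{taxon}: indoor count {count:.0f} vs outdoor {out:.0f}")
    return flags

indoor = {"Cladosporium": 400.0, "Aspergillus/Penicillium": 900.0, "Stachybotrys": 40.0}
outdoor = {"Cladosporium": 1200.0, "Aspergillus/Penicillium": 150.0}
for line in flag_indoor_contamination(indoor, outdoor):
    print(line)
# Aspergillus/Penicillium: indoor count 900 vs outdoor 150
# Stachybotrys: found indoors but absent outdoors
```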
Mold Mitigation. Common intervention methods for addressing mold problems include the following:
- maintaining heating, ventilating, and air conditioning (HVAC) systems;
- changing HVAC filters frequently, as recommended by the manufacturer;
- keeping gutters and downspouts in working order and ensuring that they drain water away from the foundation;
- routinely checking, cleaning, and drying drip pans in air conditioners, refrigerators, and dehumidifiers;
- increasing ventilation (e.g., using exhaust fans or open windows to remove humidity when cooking, showering, or using the dishwasher);
- venting clothes dryers to the outside;
- maintaining a relative humidity level in the home of 40% to 60%;
- locating and removing sources of moisture (controlling dampness and humidity and repairing water leakage problems);
- cleaning or removing mold-contaminated materials;
- removing materials with severe mold growth; and
- using high-efficiency air filters.

Moisture Control. Because one of the most important factors affecting mold growth in homes is moisture level, controlling this factor is crucial in mold abatement strategies. Many simple measures can significantly control moisture: for example, maintaining indoor relative humidity within 40%-60% through the use of dehumidifiers, fixing water leakage problems, increasing ventilation in kitchens and bathrooms by using exhaust fans, venting clothes dryers to the outside, reducing the number of indoor plants, using air conditioning at times of high outdoor humidity, heating all rooms in the winter and adding heating to outside wall closets, sloping surrounding soil away from building foundations, fixing gutters and downspouts, and using a sump pump in basements prone to flooding. Vapor barriers, sump pumps, and aboveground vents can also be installed in crawl spaces to prevent moisture problems.

Removal and Cleaning of Mold-contaminated Materials. Nonporous materials (e.g., metals, glass, and hard plastics) and semiporous materials (e.g., wood and concrete) that are contaminated with mold and are still structurally sound can often be cleaned with bleach-and-water solutions. However, in some cases, the material may not be easily cleaned or may be so severely contaminated that it must be removed. It is recommended that porous materials (e.g., ceiling tiles, wallboards, and fabrics) that cannot be cleaned be removed and discarded. In severe cases, clean-up and repair of mold-contaminated buildings may be conducted using methods similar to those used for abatement of other hazardous substances, such as asbestos. For example, in situations of extensive colonization (large surface areas greater than 100 square feet or where the material is severely degraded), extreme precautions may be required, including full containment (complete isolation of the work area) with critical barriers (airlock and decontamination room) and negative pressurization, personnel trained to handle hazardous wastes, and the use of full-face respirators with HEPA filters, eye protection, and disposable full-body covering.

Worker Protection When Conducting Mold Assessment and Mitigation Projects. Activities such as cleaning or removal of mold-contaminated materials in homes, as well as investigations of the extent of mold contamination, have the potential to disturb areas of mold growth and release fungal spores and fragments into the air.
Recommended measures to protect workers during mold remediation efforts depend on the severity and nature of the mold contamination being addressed, but include the use of well-fitted particulate masks or respirators that retain particles as small as 1 micrometer or less, disposable gloves and coveralls, and protective eyewear. Guidance documents for the remediation of mold contamination have been published by several governmental and private organizations.

# Chemical Pollutants

# Carbon Monoxide

Carbon monoxide (CO) is a significant combustion pollutant in the United States and a leading cause of poisoning deaths. According to the National Fire Protection Association (NFPA), CO-related nonfire deaths are often attributed to heating and cooking equipment. The leading specific types of equipment blamed for CO-related deaths include gas-fueled space heaters, gas-fueled furnaces, charcoal grills, gas-fueled ranges, portable kerosene heaters, and wood stoves. As with fire deaths, the risk for unintentional CO death is highest for the very young (ages 4 years and younger) and the very old (ages 75 years and older).

CO is an odorless, colorless gas, produced by the incomplete combustion of carbon, that can cause sudden illness and death. Headache, dizziness, weakness, nausea, vomiting, chest pain, and confusion are the most frequent symptoms of CO poisoning. According to the American Lung Association (ALA), breathing low levels of CO can cause fatigue and increase chest pain in people with chronic heart disease. Higher levels of CO can cause flulike symptoms in healthy people, and extremely high levels cause loss of consciousness and death.

In the home, any fuel-burning appliance that is not adequately vented and maintained can be a potential source of CO. The following steps should be followed to reduce CO (as well as sulfur dioxide and oxides of nitrogen) levels:
- Never use gas-powered equipment, charcoal grills, hibachis, lanterns, or portable camping stoves in enclosed areas or indoors.
- Install a CO monitor (Figure 5.2) in appropriate areas of the home. These monitors are designed to provide a warning before potentially life-threatening levels of CO are reached.
- Choose vented appliances when possible and keep gas appliances properly adjusted to reduce the amount of CO produced by combustion. (Note: vented appliances are always preferable for several reasons: oxygen levels, carbon dioxide buildup, and humidity management.)
- Buy only certified and tested combustion appliances that meet current safety standards, as certified by Underwriters Laboratories (UL), American Gas Association (AGA) Laboratories, or equivalent.
- Ensure that all gas heaters have safety devices that shut off an improperly vented gas heater. Heaters made after 1982 use a pilot light safety system known as an oxygen depletion sensor; when inadequate fresh air exists, this system shuts off the heater before large amounts of CO can be produced.
- Use appliances that have electronic ignitions instead of pilot lights. These appliances are typically more energy efficient and eliminate the continuous low-level pollutants from pilot lights.
- Use the proper fuel in kerosene appliances.
- Install and use an exhaust fan vented to the outdoors over gas stoves.
- Have a trained professional annually inspect, clean, and tune up central heating systems (furnaces, flues, and chimneys) and repair them as needed.
- Do not idle a car inside a garage.

The U.S. Consumer Product Safety Commission (CPSC) recommends installing at least one CO alarm per household near the sleeping area. For an extra measure of safety, another alarm should be placed near the home's heating source. ALA recommends weighing the benefits of models powered by electrical outlets against models powered by batteries, which run out of power and need replacing. Battery-powered CO detectors provide continuous protection and do not require recalibration in the event of a power outage; electric-powered systems do not provide protection during a loss of power and can take up to 2 days to recalibrate. A device that can be easily self-tested and reset to ensure proper functioning should be chosen. The product should meet Underwriters Laboratories Standard UL 2034.

# Ozone

The major source of indoor ozone is outdoor ozone. Indoor levels can vary from 10% to as high as 80% of outdoor levels. The Food and Drug Administration has set a limit of 0.05 ppm of ozone in indoor air. In recent years, there have been numerous advertisements for ion generators that destroy harmful indoor air pollutants. These devices create ozone or elemental oxygen that reacts with pollutants. EPA has reviewed the evidence on ozone generators and states: "available scientific evidence shows that at concentrations that do not exceed public health standards, ozone has little potential to remove indoor air contaminants," and "there is evidence to show that at concentrations that do not exceed public health standards, ozone is not effective at removing many odor causing chemicals." Ozone is also created by the exposure of polluted air to sunlight or ultraviolet light emitters. This ozone produced outside of the home can infiltrate the house and react with indoor surfaces, creating additional pollutants.

# Environmental Tobacco Smoke or Secondhand Smoke

Like CO, environmental tobacco smoke (ETS; also known as secondhand smoke) is a product of combustion. The National Cancer Institute (NCI) states that ETS is the combination of two forms of smoke from burning tobacco products:
- Sidestream smoke, or smoke that is emitted between the puffs of a burning cigarette, pipe, or cigar; and
- Mainstream smoke, or the smoke that is exhaled by the smoker.

The EPA states that, because of their relative body size and respiratory rates, children are affected by ETS more than adults are. It is estimated that an additional 7,500 to 15,000 hospitalizations resulting from increased respiratory infections occur in children younger than 18 months of age because of ETS exposure. Figure 5.3 shows the ETS exposure levels in homes with children under age 7 years.
The following actions are recommended in the home to protect children from ETS:
- if individuals insist on smoking, increase ventilation in the smoking area by opening windows or using exhaust fans; and
- refrain from smoking in the presence of children and do not allow babysitters or others who work in the home to smoke in the home or near children.

# Volatile Organic Compounds

In the modern home, many organic chemicals are used as ingredients in household products. Organic chemicals that vaporize and become gases at normal room temperature are collectively known as VOCs. Examples of common items that can release VOCs include paints, varnishes, and wax, as well as many cleaning, disinfecting, cosmetic, degreasing, and hobby products. Levels of approximately a dozen common VOCs can be two to five times higher inside the home than outside, whether in highly industrialized areas or rural areas. VOCs that frequently pollute indoor air include toluene, styrene, xylenes, and trichloroethylene. Some of these chemicals may be emitted from aerosol products, dry-cleaned clothing, paints, varnishes, and glues.

To lower levels of VOCs in the home, follow these steps:
- use all household products according to directions;
- provide good ventilation when using these products;
- properly dispose of partially full containers of old or unneeded chemicals;
- purchase limited quantities of products; and
- minimize exposure to emissions from products containing methylene chloride, benzene, and perchlorethylene.

Formaldehyde. A prominent VOC found in household and construction products is formaldehyde. According to CPSC, these products include the glue or adhesive used in pressed wood products; preservatives in paints, coatings, and cosmetics; coatings used for permanent-press quality in fabrics and draperies; and the finish on paper products and certain insulation materials. Formaldehyde is also contained in urea-formaldehyde (UF) foam insulation installed in the wall cavities of homes as an energy conservation measure. Levels of formaldehyde increase soon after installation of this product, but these levels decline with time. In 1982, CPSC voted to ban UF foam insulation. The courts overturned the ban; however, the publicity has decreased the use of this product.

More recently, the most significant source of formaldehyde in homes has been pressed wood products made using adhesives that contain UF resins. The most significant of these is medium-density fiberboard, which contains a higher resin-to-wood ratio than any other UF pressed wood product and is generally recognized as the highest formaldehyde-emitting pressed wood product. Additional pressed wood products are produced using phenol-formaldehyde resin, which generally emits formaldehyde at a considerably slower rate than UF resin. The emission rate for both resins will change over time and will be influenced by high indoor temperatures and humidity.

Since 1985, U.S. Department of Housing and Urban Development (HUD) regulations (24 CFR 3280.308, 3280.309, and 3280.406) have permitted only the use of plywood and particleboard that conform to specified formaldehyde emission limits in the construction of prefabricated and manufactured homes. This limit was set to ensure that indoor formaldehyde levels remain below 0.4 ppm.

CPSC notes that formaldehyde is a colorless, strong-smelling gas. At an air level above 0.1 ppm, it can cause watery eyes; burning sensations in the eyes, nose, and throat; nausea; coughing; chest tightness; wheezing; skin rashes; and allergic reactions. Laboratory animal studies have revealed that formaldehyde can cause cancer in animals and may cause cancer in humans. Formaldehyde is usually present at levels less than 0.03 ppm indoors and outdoors, with rural areas generally experiencing lower concentrations than urban areas. Indoor areas that contain products that release formaldehyde can have levels greater than 0.03 ppm.

CPSC also recommends the following actions to avoid high levels of exposure to formaldehyde:
- Purchase pressed wood products that are labeled or stamped to be in conformance with American National Standards Institute criteria ANSI A208.1-1993. Use particleboard flooring marked with ANSI grades PBU, D2, or D3.
- Medium-density fiberboard should be in conformance with ANSI A208.2-1994, and hardwood plywood with ANSI/HPVA HP-1-1994 (Figure 5.4).
- Purchase furniture or cabinets that contain a high percentage of panel surfaces and edges that are laminated or coated. Unlaminated or uncoated (raw) panels of pressed wood products will generally emit more formaldehyde than those that are laminated or coated.
- Use alternative products, such as wood panel products not made with UF glues, lumber, or metal.
- Avoid the use of foamed-in-place insulation containing formaldehyde, especially UF foam insulation.
- Wash durable-press fabrics before use.

CPSC also recommends the following actions to reduce existing levels of indoor formaldehyde:
- Ventilate the home well by opening doors and windows and installing an exhaust fan(s).
- Seal the surfaces of formaldehyde-containing products that are not laminated or coated with paint, varnish, or a layer of vinyl or polyurethane-like materials.
- Remove products that release formaldehyde into the indoor air from the home.

# Radon

According to the EPA, radon is a colorless, odorless gas that occurs naturally in soil and rock and is a decay product of uranium. The U.S. Geological Survey (USGS) notes that the typical uranium content of rock and the surrounding soil is between 1 and 3 ppm. Higher levels of uranium are often contained in rock such as light-colored volcanic rock, granite, dark shale, and sedimentary rock containing phosphate. Uranium levels as high as 100 ppm may be present in various areas of the United States because of these rocks. The main source of high-level radon pollution in buildings is surrounding uranium-containing soil. Thus, the greater the level of uranium nearby, the greater the chances are that buildings in the area will have high levels of indoor radon. Figure 5.5 demonstrates the geographic variation in radon levels in the United States. Maps of the individual states and areas that have proven high for radon are available at /zonemap.html. EPA recommends that this map be supplemented with any available local data to further understand and predict the radon potential of a specific area. A free video is available from the U.S. EPA: call 1-800-438-4318 and ask for EPA 402-V-02-003 (TRT 13.10).

Radon, according to the California Geological Survey, is one of the intermediate radioactive elements formed during the radioactive decay of uranium-238, uranium-235, or thorium-232. Radon-222 is the radon isotope of most concern to public health because of its longer half-life (3.8 days). The mobility of radon gas is much greater than that of uranium and radium, which are solids at room temperature. Thus, radon can leave rocks and soil, move through fractures and pore spaces, and ultimately enter a building to collect in high concentrations. When in water, radon moves less than 1 inch before it decays, compared with 6 feet or more in dry rocks or soil.

USGS notes that radon near the surface of soil typically escapes into the atmosphere. However, where a house is present, soil air often flows toward the house foundation because of
- differences in air pressure between the soil and the house, with soil pressure often being higher;
- the presence of openings in the house's foundation; and
- increases in permeability around the basement (if present).

Houses are often constructed with loose fill under a basement slab and between the walls and exterior ground. This fill is more permeable than the original ground.
Houses typically draw less than 1% of their indoor air from the soil. However, houses with low indoor air pressures, poorly sealed foundations, and several entry points for soil air may draw up to 20% of their indoor air from the soil. USGS states that radon may also enter the home through the water system. Surface water sources typically contain little radon because it escapes into the air. In larger cities, radon is released to the air by municipal processing systems that aerate the water. However, in areas where groundwater is the main water supply for communities, small public systems and private wells are typically closed systems that do not allow radon to escape. Radon then enters the indoor air from showers, clothes washing, dishwashing, and other uses of water. Figure 5.6 shows typical entry points of radon.

Health risks of radon stem from its breakdown into "radon daughters," which emit high-energy alpha particles. These progeny enter the lungs, attach to lung tissue, and may eventually lead to lung cancer. This exposure to radon is believed to contribute to between 15,000 and 21,000 excess lung cancer deaths in the United States each year. The EPA has identified levels greater than 4 picocuries per liter (pCi/L) as levels at which remedial action should be taken; a simple screening sketch based on this action level appears at the end of this discussion. Approximately 1 in 15 homes nationwide has radon above this level, according to the U.S. Surgeon General's recent advisory. Smokers are at significantly higher risk for radon-related lung cancer.

Radon in the home can be measured either by the occupant or by a professional. Because radon has no odor or color, special devices are used to measure its presence. Radon levels vary from day to day and season to season. Short-term tests (2 to 90 days) are best if quick results are needed, but long-term tests (more than 3 months) yield better information on average year-round exposure. Measurement devices are routinely placed in the lowest occupied level of the home. The devices measure either the radon gas directly or its daughter products. The simplest devices are passive, require no electricity, and include charcoal canisters, charcoal liquid scintillation devices, alpha track detectors, and electret ion detectors. All of these devices, with the exception of the ion detector, can be purchased in hardware stores or by mail; the ion detector generally is available only through laboratories. These devices are inexpensive, primarily used for short-term testing, and require little to no training. Active devices need electrical power and include continuous monitoring devices; they are customarily more expensive and require professionally trained testers for their operation. Figure 5.7 shows examples of the charcoal tester (a; left) and the alpha track detector (b; right).

After testing and evaluation by a professional, it may be necessary to lower the radon levels in the structure. The Pennsylvania Department of Environmental Protection states that in most cases, a system with pipes and a fan, known as a subslab depressurization system, is used to reduce radon; it requires no major changes to the home. The cost typically ranges from $500 to $2,500 and averages approximately $1,000, varying with geographic region. The typical mitigation system usually has only one pipe penetrating the basement floor; the pipe may also be installed outside the house.
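To make the action level concrete, here is a minimal screening sketch, assuming hypothetical quarterly test readings in pCi/L; the readings and the simple annual averaging are illustrative assumptions, not a prescribed testing protocol.

```python
# Hypothetical radon screening sketch. Readings are in picocuries per
# liter (pCi/L); the values and averaging approach are illustrative only.

EPA_ACTION_LEVEL = 4.0  # pCi/L, level at which EPA recommends remedial action

quarterly_readings = [3.1, 5.2, 4.4, 3.9]  # assumed seasonal variation

annual_average = sum(quarterly_readings) / len(quarterly_readings)
print(f"Annual average: {annual_average:.2f} pCi/L")  # 4.15 pCi/L here

if annual_average >= EPA_ACTION_LEVEL:
    print("At or above the action level: have a professional evaluate")
    print("mitigation, e.g., a subslab depressurization system.")
else:
    print("Below the action level: retest periodically, because radon")
    print("levels vary from day to day and season to season.")
```

The sketch averages a year of readings because, as noted above, long-term tests better reflect year-round exposure than a single short-term reading.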
The Connecticut Department of Public Health notes that it is more cost-effective to include radon-resistant techniques while constructing a building than to install a reduction system in an existing home. Inclusion of radon-resistant techniques in initial construction costs approximately $350 to $500. Figure 5.8 shows examples of radon-resistant construction techniques. A passive radon-resistant system has five major parts:
1. A layer of gas-permeable material under the foundation (usually 4 inches of gravel).
2. Plastic sheeting over the gas-permeable layer.
3. Sealing and caulking of all openings in the concrete foundation floor.
4. A gas-tight, 3- or 4-inch vent pipe running from under the foundation through the house to the roof.
5. A roughed-in electrical junction box for the future installation of a fan, if needed.

These features create a physical barrier to radon entry, and the vent pipe redirects the flow of air under the foundation, preventing radon from seeping into the house.

# Pesticides

Much pesticide use could be reduced if integrated pest management (IPM) practices were used in the home. IPM is a coordinated approach to managing roaches, rodents, mosquitoes, and other pests that integrates inspection, monitoring, treatment, and evaluation, with special emphasis on decreased use of toxic agents. All pest management options, including natural, biologic, cultural, and chemical methods, should be considered, and those that have the least impact on health and the environment should be selected. Most household pests can be controlled by eliminating the habitat for the pest both inside and outside, building or screening them out, eliminating food and harborage areas, and safely using appropriate pesticides if necessary.

EPA states that 75% of U.S. households used at least one pesticide indoors during the past year and that 80% of most people's exposure to pesticides occurs indoors. Measurable levels of up to a dozen pesticides have been found in the air inside homes. Pesticides used in and around the home include products to control insects (insecticides), termites (termiticides), rodents (rodenticides), fungi (fungicides), and microbes (disinfectants). These products are found in sprays, sticks, powders, crystals, balls, and foggers.

Delaplane notes that the ancient Romans killed insect pests by burning sulfur and controlled weeds with salt. In the 1600s, ants were controlled with mixtures of honey and arsenic. U.S. farmers in the late 19th century used copper acetoarsenite (Paris green), calcium arsenate, nicotine sulfate, and sulfur to control insect pests in field crops. By World War II and afterward, numerous pesticides had been introduced, including DDT, BHC, aldrin, dieldrin, endrin, and 2,4-D.

A significant factor with regard to pesticides used in and around the home is their impact on children. According to a 2003 EPA survey, 47% of all households with children under the age of 5 years had at least one pesticide stored in an unlocked cabinet less than 4 feet off the ground, within easy reach of children. Similarly, 74% of households without children under the age of 5 also stored pesticides in an unlocked cabinet less than 4 feet off the ground. This issue is significant because 13% of all pesticide poisoning incidents occur in homes other than the child's home. The EPA notes a report by the American Association of Poison Control Centers indicating that approximately 79,000 children were involved in common household pesticide poisonings or exposures.
The health effects of pesticides vary with the product. Local effects from most products involve the eyes, nose, and throat; more severe effects, such as damage to the central nervous system and kidneys and increased cancer risk, are possible. The active and inert ingredients of pesticides can be organic compounds, which can contribute to the level of organic compounds in indoor air.

More significantly, products containing cyclodiene pesticides have been commonly associated with misapplication. Individuals inadvertently exposed during misapplication had numerous symptoms, including headaches, dizziness, muscle twitching, weakness, tingling sensations, and nausea. In addition, there is concern that these pesticides may cause long-term damage to the liver and the central nervous system, as well as an increased cancer risk. Cyclodiene pesticides were developed for use as insecticides in the 1940s and 1950s. The four main cyclodiene pesticides (aldrin, dieldrin, chlordane, and heptachlor) were used to guard soil and seed against insect infestation and to control insect pests in crops. Outside of agriculture, they were used for ant control; farm, industrial, and domestic control of fleas, flies, lice, and mites; locust control; termite control in buildings, fences, and power poles; and pest control in home gardens. No other commercial use is permitted for cyclodienes or related products; the only exception is the use of heptachlor by utility companies to control fire ants in underground cable boxes.

An EPA survey revealed that bathrooms and kitchens are the areas in the home most likely to have improperly stored pesticides. In the United States, EPA regulates pesticides under the pesticide law known as the Federal Insecticide, Fungicide, and Rodenticide Act. Since 1981, this law has required most residential-use pesticides to bear a signal word such as "danger" or "warning" and to be contained in child-resistant packaging. This type of packaging is designed to prevent or delay access by most children under the age of 5 years. EPA offers the following recommendations for preventing accidental poisoning:
- store pesticides away from the reach of children in a locked cabinet, garden shed, or similar location;
- read the product label and follow all directions exactly, especially precautions and restrictions;
- remove children, pets, and toys from areas before applying pesticides;
- if interrupted while applying a pesticide, properly close the package and ensure that the container is not within reach of children;
- do not transfer pesticides to other containers that children may associate with food or drink;
- do not place rodent or insect baits where small children have access to them;
- use child-resistant packaging properly by closing the container tightly after use;
- ensure that other caregivers for children are aware of the potential hazards of pesticides;
- teach children that pesticides are poisons and should not be handled; and
- keep the local Poison Control Center telephone number available.

# Toxic Materials

# Asbestos

Asbestos, from the Greek word meaning "inextinguishable," refers to a group of six naturally occurring mineral fibers: amosite, crocidolite, tremolite, actinolite, anthophyllite, and chrysotile. Chrysotile asbestos, also known as white asbestos, is the predominant commercial form. Asbestos is strong, flexible, resistant to heat and chemical corrosion, and insulates well.
These features led to the use of asbestos in up to 3,000 consumer products before government agencies began to phase it out in the 1970s because of its health hazards. Asbestos has been used in insulation, roofing, siding, vinyl floor tiles, fireproofing materials, texturized paint and soundproofing materials, heating appliances (such as clothes dryers and ovens), fireproof gloves, and ironing boards. Asbestos continues to be used in some products, such as brake pads. Other mineral products, such as talc and vermiculite, can be contaminated with asbestos.

The health effects of asbestos exposure are numerous and varied. Industrial studies of workers exposed to asbestos in factories and shipyards have revealed three primary health concerns from breathing high levels of asbestos fibers: lung cancer, mesothelioma (a cancer of the lining of the chest and the abdominal cavity), and asbestosis (a condition in which the lungs become scarred with fibrous tissue). The risk for all of these conditions increases as the number of fibers inhaled increases. Smoking acts synergistically with asbestos exposure, further increasing the risk for lung cancer. The latency period (from time of exposure to appearance of symptoms) of these diseases is usually about 20 to 30 years. Individuals who develop asbestosis have typically been exposed to high levels of asbestos for a long time.

Exposure levels to asbestos are measured in fibers per cubic centimeter of air. Most individuals are exposed to small amounts of asbestos in daily living activities; however, most of them do not develop health problems. According to the Agency for Toxic Substances and Disease Registry (ATSDR), if an individual is exposed, several factors determine whether the individual will be harmed. These factors include the dose (how much), the duration (how long), and the fiber type (mineral form and distribution).

ATSDR also states that children may be more adversely affected than adults. Children breathe differently and have different lung structures than adults; however, it has not been determined whether these differences cause a greater amount of asbestos fibers to stay in the lungs of a child than in the lungs of an adult. In addition, children drink more fluids per kilogram of body weight than do adults, and they can be exposed through asbestos-contaminated drinking water. Eating asbestos-contaminated soil and dust is another source of exposure for children: some children intentionally eat soil, and normal hand-to-mouth activity means that all young children eat more soil than do adults. Family members also have been exposed to asbestos carried home on the clothing of other family members who worked in asbestos mines or mills.

Breathing asbestos fibers may result in difficulty in breathing. Asbestos-related diseases usually appear many years after the first exposure and are therefore not likely to be seen in children, but people who have been exposed at a young age may be more likely to develop disease than those first exposed later in life. In the small number of studies that have specifically looked at asbestos exposure in children, there is no indication that younger people develop asbestos-related diseases more quickly than older people. Developing fetuses and infants are not likely to be exposed to asbestos through the placenta or breast milk of the mother. Results of animal studies do not indicate that exposure to asbestos is likely to result in birth defects.
A joint document issued by CPSC, EPA, and ALA notes that most products in today's homes do not contain asbestos. However, asbestos can still be found in some products and areas of the home; products that contain asbestos that could be inhaled are required to be labeled as such. Until the 1970s, many types of building products and insulation materials used in homes routinely contained asbestos.

A potential asbestos problem both inside and outside the home is vermiculite. According to the USGS, vermiculite is a claylike material that expands when heated to form wormlike particles. It is used in concrete aggregate, fertilizer carriers, insulation, potting soil, and soil conditioners. This product ceased being mined in 1992, but old stocks may still be available.

Common products that contained asbestos in the past and conditions that may release fibers include the following:
- Steam pipes, boilers, and furnace ducts insulated with an asbestos blanket or asbestos paper tape. These materials may release asbestos fibers if damaged, repaired, or removed improperly.
- Resilient floor tiles (vinyl asbestos, asphalt, and rubber), the backing on vinyl sheet flooring, and adhesives used for installing floor tile. Sanding tiles can release fibers, as may scraping or sanding the backing of sheet flooring during removal.
- Cement sheet, millboard, and paper used as insulation around furnaces and wood-burning stoves. Repairing or removing appliances may release asbestos fibers, as may cutting, tearing, sanding, drilling, or sawing insulation.
- Door gaskets in furnaces, wood stoves, and coal stoves. Worn seals can release asbestos fibers during use.
- Soundproofing or decorative material sprayed on walls and ceilings. Loose, crumbly, or water-damaged material may release fibers, as will sanding, drilling, or scraping the material.
- Patching and joint compounds for walls, ceilings, and textured paints. Sanding, scraping, or drilling these surfaces may release asbestos.
- Asbestos cement roofing, shingles, and siding. These products are not likely to release asbestos fibers unless sawed, drilled, or cut.
- Artificial ashes and embers sold for use in gas-fired fireplaces, in addition to other older household products such as fireproof gloves, stove-top pads, ironing board covers, and certain hair dryers.
- Automobile brake pads and linings, clutch facings, and gaskets.

Homeowners who believe material in their home may be asbestos should not disturb the material. Generally, material in good condition will not release asbestos fibers, and there is little danger unless the fibers are released and inhaled into the lungs. However, if disturbed, asbestos material may release fibers that can be inhaled into the lungs, where they can remain for a long time, increasing the risk for disease. Suspected asbestos-containing material should be checked regularly for damage from abrasions, tears, or water, and, if possible, access to the area should be limited. Asbestos-containing products such as asbestos gloves, stove-top pads, and ironing board covers should be discarded if damaged or worn; permission and proper disposal methods should be obtained from local health, environmental, or other appropriate officials. If asbestos material is more than slightly damaged, or if planned changes in the home might disturb it, repair or removal by a professional is needed. Before remodeling, determine whether asbestos materials are present.
Only a trained professional can confirm suspected asbestos materials that are part of a home's construction. This individual will take samples for analysis and submit them to an EPA-approved laboratory. If the asbestos material is in good shape and will not be disturbed, the best approach is to take no action and continue to monitor the material. If the material needs action to address potential exposure problems, there are two approaches to correcting the problem: repair and removal.

Repair involves sealing or covering the asbestos material. Sealing (encapsulation) involves treating the material with a sealant that either binds the asbestos fibers together or coats the material so fibers are not released. This approach is often used for pipe, furnace, and boiler insulation; however, the work should be done only by a professional who is trained to handle asbestos safely. Covering (enclosure) involves placing something over or around the material that contains asbestos to prevent the release of fibers; exposed insulated piping, for example, may be covered with a protective wrap or jacket. In the repair process, the material remains in place, undisturbed. Repair is less expensive than removal, but with any type of repair the asbestos remains in place, which may make later removal, if necessary, more difficult and costly. Repairs can be major or minor, and both must be done only by a professional trained in methods for safely handling asbestos.

Removal is usually the most expensive option and, unless required by state or local regulations, should be the last option considered in most situations, because removal poses the greatest risk for fiber release. However, removal may be required when remodeling or making major changes to the home that will disturb asbestos material. In addition, removal may be called for if asbestos material is damaged extensively and cannot otherwise be repaired. Removal is complex and must be done only by a contractor with special training. Improper removal of asbestos material may create more of a problem than simply leaving it alone.

# Lead

Many individuals recognize lead in the form often seen in tire weights and fishing equipment, but few recognize its various forms in and around the home. Lead is widespread in the environment, and people absorb lead from a variety of sources every day. Although lead has been used in numerous consumer products, the most important sources of lead exposure to children and others today are the following:
- contaminated house dust that has settled on horizontal surfaces,
- deteriorated lead-based paint,
- contaminated bare soil,
- food (which can be contaminated by lead in the air or in food containers, particularly lead-soldered food containers),
- drinking water (from corrosion of plumbing systems), and
- occupational exposure or hobbies.

Federal controls on lead in gasoline, new paint, food canning, and drinking water, as well as on lead from industrial air emissions, have significantly reduced total human exposure to lead. The number of children with blood lead levels above 10 micrograms per deciliter (µg/dL), the CDC level of concern, has declined from 1.7 million in the late 1980s to 310,000 in 1999-2002. This demonstrates that the controls have been effective, but that many children are still at risk.
CDC data show that deteriorated lead-based paint and the contaminated dust and soil it generates are the most common sources of lead exposure for children today. HUD data show that the number of houses with lead paint declined from 64 million in 1990 to 38 million in 2000. Children are more vulnerable to lead poisoning than are adults. Infants can be exposed to lead in the womb if their mothers have lead in their bodies. Infants and children can swallow and breathe lead in dirt, dust, or sand through normal hand-to-mouth contact while they play on the floor or ground; these activities make it easier for children to be exposed to lead. Other sources of exposure have included imported vinyl miniblinds, crayons, children's jewelry, and candy. In 2004, increased lead levels in drinking water from lead service pipes were observed in Washington, D.C., accompanied by increases in blood lead levels in children under the age of 6 years who were served by the water system. In some cases, children swallow nonfood items such as paint chips, which may contain very large amounts of lead, particularly in and around older houses that were painted with lead-based paint.

Many studies have verified the effect of lead exposure on IQ scores in the United States. The effects of lead exposure have been reviewed by the National Academy of Sciences. Generally, tests for blood lead levels are performed on drawn blood, not by a finger-stick test, which can be unreliable if performed improperly. Levels are measured in micrograms per deciliter and reflect the 1991 guidance from the Centers for Disease Control (a simple screening sketch based on these tiers appears at the end of this discussion):
- Children: 10 µg/dL (level of concern)-find source of lead;
- Children: 15 µg/dL and above-environmental intervention, counseling, medical monitoring;
- Children: 20 µg/dL and above-medical treatment;
- Adults: 25 µg/dL (level of concern)-find source of lead; and
- Adults: 50 µg/dL-Occupational Safety and Health Administration (OSHA) standard for medical removal from the worksite.

Adults are usually exposed to lead from occupational sources (e.g., battery construction, paint removal) or at home (e.g., paint removal, home renovations). In 1978, CPSC banned the use of lead-based paint in residential housing. Because houses are periodically repainted, the most recent layer of paint will most likely not contain lead, but the older layers underneath probably will. Therefore, the only way to accurately determine the amount of lead present in older paint is to have it analyzed. It is important that owners of homes built before 1978 be aware that layers of older paint can contain a great deal of lead. Guidelines on identifying and controlling lead-based paint hazards in housing have been published by HUD.

# Controlling Lead Hazards

The purpose of a home risk assessment is to determine, through testing and evaluation, where hazards from lead warrant remedial action. A certified inspector or risk assessor can test paint, soil, or lead dust either on-site or in a laboratory using methods such as x-ray fluorescence. HUD has published detailed protocols for risk assessments and inspections. It is important from a health standpoint that future tenants, painters, and construction workers know that lead-based paint is present, even under treated surfaces, so they can take precautions when working in areas that will generate lead dust.
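As a minimal illustration of the 1991 screening tiers quoted above, the following sketch maps a child's blood lead result to the corresponding recommended response. The function and example values are illustrative only; actual interpretation and treatment decisions rest with clinicians and public health authorities.

```python
# Minimal sketch of the 1991 CDC blood lead screening tiers for children
# quoted above. Levels are in micrograms per deciliter (ug/dL).
# Illustrative only; not a clinical decision tool.

def child_bll_action(bll_ug_dl: float) -> str:
    if bll_ug_dl >= 20:
        return "medical treatment"
    if bll_ug_dl >= 15:
        return "environmental intervention, counseling, medical monitoring"
    if bll_ug_dl >= 10:
        return "level of concern: find source of lead"
    return "below the 1991 level of concern"

for level in (5, 12, 16, 25):
    print(f"{level} ug/dL -> {child_bll_action(level)}")
```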
Whenever mitigation work is completed, it is important to have a clearance test using the dust wipe method to ensure that lead-laden dust generated during the work does not remain at levels above those established by the EPA and HUD. Such testing is required for owners of most housing that receives federal financial assistance, such as Section 8 rental housing. A building or housing file should be maintained and updated whenever any additional lead hazard control work is completed. Owners are required by law to disclose information about lead-based paint or lead-based paint hazards to buyers or tenants before completing a sales or lease contract.

All hazards should be controlled as identified in a risk assessment. Whenever extensive amounts of lead must be removed from a property, or when methods of removing toxic substances will affect the environment, it is extremely important that the owner be aware of the issues surrounding worker safety, environmental controls, and proper disposal. Appropriate architectural, engineering, and environmental professionals should be consulted when lead hazard projects are complex.

Following are brief explanations of the two approaches for controlling lead hazard risks. These controls are recommended by HUD in HUD Guidelines for the Evaluation and Control of Lead-Based Paint Hazards in Housing and are summarized here to focus on special considerations for historic housing:

Interim Controls. Short-term solutions include thorough dust removal and thorough washdown and cleanup, paint film stabilization and repainting, covering of lead-contaminated soil, and informing tenants about lead hazards. Interim controls require ongoing maintenance and evaluation.

Hazard Abatement. Long-term solutions are defined as having an expected life of 20 years or more and involve permanent removal of hazardous paint through chemicals, heat guns, or controlled sanding or abrasive methods; permanent removal of deteriorated painted features through replacement; removal or permanent covering of contaminated soil; and the use of enclosures (such as drywall) to isolate painted surfaces. The use of specialized encapsulant products can be considered permanent abatement of lead.

# Definitions Related to Lead

Deteriorated lead-based paint: Paint known to contain lead above the regulated level that shows signs of peeling, chipping, chalking, blistering, alligatoring, or otherwise separating from its substrate.

Dust removal: The process of removing dust to avoid creating a greater problem of spreading lead particles, usually through wet or damp collection and the use of HEPA vacuums.

Hazard abatement: Long-term measures to remove the hazards of lead-based paint through replacement of building components, enclosure, encapsulation, or paint removal.

Interim control: Short-term methods to remove lead dust, stabilize deteriorating painted surfaces, treat friction and impact surfaces that generate lead dust, and repaint surfaces. Maintenance can ensure that housing remains lead-safe.

Lead-based paint: Any existing paint, varnish, shellac, or other coating with lead equal to or greater than 1.0 milligram per square centimeter (mg/cm²) or greater than 0.5% by weight (5,000 ppm; 5,000 micrograms per gram; or 5,000 milligrams per kilogram). For new paint, CPSC has established 0.06% as the maximum amount of lead allowed. Lead in paint can be measured by x-ray fluorescence analyzers or by laboratory analysis performed by certified personnel and approved laboratories.

Risk assessment: An on-site investigation to determine the presence and condition of lead-based paint, including limited test samples and an evaluation of the age, condition, housekeeping practices, and uses of a residence.

Reducing and controlling lead hazards can be successfully accomplished without destroying the character-defining features and finishes of historic buildings. Federal and state laws generally support the reasonable control of lead-based paint hazards through a variety of treatments, ranging from modified maintenance to selective substrate removal. The key to protecting children, workers, and the environment is to be informed about the hazards of lead; to control exposure to lead dust, lead in soil, and lead paint chips; and to follow existing regulations. The following summarizes several important regulations that affect lead-hazard reduction projects. Owners should be aware that regulations change, and they have a responsibility to check state and local ordinances as well. Care must be taken to ensure that any procedures used to remove lead from the home protect both residents and workers from lead dust exposure.

Local Ordinances. Check with local health departments, poison control centers, and offices of housing and community development to determine whether any laws require compliance by building owners. Determine whether projects are considered abatements and will require special contractors and permits.

Owner's Responsibility. Owners are ultimately responsible for ensuring that hazardous waste generated on their own sites is properly disposed of. Owners should check with their state government to determine whether an abatement project requires a certified contractor. Owners should establish that the contractor is responsible for the safety of the crew, that all applicable laws are followed, and that transporters and disposers of hazardous waste have liability insurance as a protection for the owner. The owner should notify the contractor that lead-based paint may be present and that it is the contractor's responsibility to follow appropriate work practices to protect workers and to complete a thorough cleanup to ensure that lead-laden dust is not present after the work is completed. Renovation contractors are required by EPA to distribute an informative educational pamphlet (Protect Your Family from Lead in Your Home) to occupants before starting work that could disturb lead-based paint (/leadinfo.htm#remodeling).

# Arsenic

Lead arsenate was used legally up to 1988 in most of the orchards in the United States, often with 50 or more applications of this pesticide each year. This toxic heavy metal compound has accumulated in the soil around houses and under the numerous orchards in the country, contaminating both wells and land. These orchards are often turned into subdivisions as cities expand and sprawl occurs. Residues from lead arsenate, once used heavily on apple, pear, and other orchards, contaminate an estimated 70,000 to 120,000 acres in the state of Washington alone, some of it in areas where agriculture has been replaced with housing, according to state ecology department officials and others. Lead arsenate, which was not banned for use on food crops until 1988, was nevertheless mostly replaced by the pesticide dichlorodiphenyltrichloroethane (DDT) and its derivatives in the late 1940s. DDT was banned in the United States in 1972 but is still used elsewhere in the world.
For more than 20 years, the wood industry has infused green wood with heavy doses of arsenic to kill bugs and prevent rot. Numerous studies show that arsenic sticks to children's hands when they play on treated wood and that it is absorbed through the skin and ingested when they put their hands in their mouths. Although most uses of arsenic wood treatments were phased out by 2004, an estimated 90% of existing outdoor structures are made of arsenic-treated wood. In a study conducted by the University of North Carolina Environmental Quality Institute in Asheville, wood samples were analyzed and showed that
- older decks and play sets (7 to 15 years old) that were preserved with chromated copper arsenate expose people to just as much arsenic on the wood surface as do newer structures (less than 1 year old);
- the amount of arsenic that testers wiped off a small area of wood about the size of a 4-year-old's handprint typically far exceeds what EPA allows in a glass of water under the Safe Drinking Water Act standard; and
- arsenic in the soil from two of every five backyards or parks tested exceeded EPA's Superfund cleanup level of 20 ppm.

Figure 5.9 shows a safety warning label placed on wood products.

# Introduction

The principal function of a house is to provide protection from the elements. Our present society, however, requires that a home provide not only shelter, but also privacy, safety, and reasonable protection of our physical and mental health. A living facility that fails to offer these essentials through adequately designed and properly maintained interiors and exteriors cannot be termed "healthful housing." In this chapter, the home is considered in terms of the parts that have a bearing on its soundness, state of repair, and safety. These are some of the elements that the housing inspector must examine when making a thorough housing inspection.

Figure 6.1 shows a typical house of newer construction and includes a terminology key; the figure and the key are available in an interactive format in the glossary on the U.S. Inspect Web site. Figure 6.2 shows a typical house built between 1950 and 1980 and also includes a terminology key. The figures show the complexity and the numerous components of a home. These components form the vocabulary that is necessary to discuss housing structure inspection issues.

Key to Figure 6.1 (New Housing Terminology)

1. Ash dump (see 35)-A door or opening in the firebox that leads directly to the ash pit, through which the ashes are swept after the fire is burned out. Not all fireboxes are equipped with an ash dump.
2. Attic space-The open space within the attic area.
3. Backfill-The material used to refill an excavation around the outside of a foundation wall or pipe trench.
4. Baluster-One of a series of small pillars that is attached to and runs between the stairs and the handrails. The spacing between the balusters should be less than 4 inches to prevent small children from getting stuck between them. Balusters are considered a safety item and provide an additional barrier.
5. Baseboard trim-Typically a wood trim board that is placed against the wall around the perimeter of a room next to the floor. The intent is to conceal the joint between the floor and the wall finish.
6. Basement window-A window opening installed in the basement wall. Basement windows are occasionally below the finish grade level and will be surrounded on the exterior by a window well.
7. Blind or shutter-A lightweight frame in the form of a door located on each side of a window. They are most commonly constructed of wood (solid or louvered panels) or plastic.
Originally they were designed to close and secure over the windows for security and foul weather. Most shutters now are more likely decorative pieces that are secured to the house beside the windows.
8. Bridging-Small pieces of wood or metal strapping placed in an X-pattern between the floor joists at midspan to prevent the joists from twisting and squeaking and to provide reinforcement and distribution of stress.
9. Building paper/underlayment-Building material, usually a felt paper, that is used as a protective barrier against air and moisture passage from the area beneath the flooring, as well as providing a movement/noise isolator in hardwood flooring.
10. Ceiling joist-A horizontally placed framing member at the ceiling of the top-most living space of a house that provides a platform to which the finished ceiling material can be attached.
11. Chair rail (not shown)-Decorative trim applied around the perimeter of a room, such as a formal dining room or kitchen/breakfast nook, at approximately the same height as the back of a chair. It is sometimes used as a cap trim for wainscoting (see wainscoting).
12. Chimney-A masonry or, in more modern construction, wood-framed enclosure that surrounds and contains one or more flues and extends above the roofline.
13. Chimney cap-The metal or masonry protective covering at the top of the chimney that seals the chimney shaft from water entry between the chimney enclosure and the flue tiles.
22. Downspout-A pipe, usually of metal or vinyl, that is connected to the gutters and is used to carry the roof-water runoff down and away from the house.
23. Downspout gooseneck-A segmented section of downspout that is bent at a radius to allow the downspout to be attached to the house and to follow the bends and curves of the eaves and ground.
24. Downspout shoe-The bottom downspout gooseneck that directs the water from the downspout to the extension or splash block at the grade.
25. Downspout strap-A strap used to secure the downspout to the side of the house.
26. Drain tile-A tube or cylinder that is normally installed around the exterior perimeter of the foundation footings and that collects and directs groundwater away from the foundation of the house. The tile can be individual sections of clay or asphalt tubing or, in more recent construction, a perforated plastic drain tile approximately 4 inches in diameter. The drain tile leads either toward a sump or to an exterior discharge away from the house.
27. Entrance canopy-A small overhanging roof that shelters the front entrance.
28. Entrance stoop-An elevated platform, constructed of wood framing or masonry, at the front entry that allows visitors to stand above or out of the elements. The platform should be wide enough to allow someone to stand clear of an outward-swinging door, such as a storm door, even if one is not present.
29. Exterior siding-The decorative exterior finish on a house. Its primary function is to protect the shell of the house from the elements. The choice of siding materials varies widely to include wood, brick, metal, vinyl, concrete, stucco, and a variety of manufactured compositions such as compressed wood, compressed cellulose (paper), fiber-reinforced cement, and synthetic stucco.
30. Fascia-The visible flat front board that caps the rafter tail ends and encloses the overhang under the eave that runs along the roof edge. The gutter is usually attached at this location.
31.
Fascia/rake board-The visible flat front board that caps the rafter tail ends and encloses the overhang under the eave that runs along the roof edge and at the edge of the roofing at the gables. The gutter is usually attached to this board at the eaves.
35. Fireplace cleanout door-The access door to the ash pit beneath the fireplace. On a fireplace located inside the house, the cleanout door is usually located in the lowest accessible level of the house, such as the basement or crawl space. On a fireplace located at the outside of the house, the cleanout door will be located at the exterior of the chimney. Not all fireplaces are equipped with a cleanout door.
36. Fireplace hearth-The inner or outer floor of a fireplace, usually made of brick, tile, or stone.
37. Flashing (not shown)-The building component used to connect and cover portions of a deck, roof, or siding material to another surface such as a wall, a chimney, a vent pipe, or anywhere that runoff is heavy or where two dissimilar materials meet. The flashing is mainly intended to prevent water entry and is usually made of rubber, tar, asphalt, or various metals.
38. Floor joists-The main subfloor framing members that support the floor span. Joists are usually made of engineered wood I-beams or 2×8 or larger lumber.
39. Foundation footing-The base on which the foundation wall rests. The footing is wider than the foundation wall to spread out the load it is bearing and to help prevent settling.
40. Foundation wall-The concrete block, concrete slab, or other nonwood material that extends below or partly below grade and provides support for exterior walls and other structural parts of the building.
41. Framing studs-2×4 or 2×6 vertical framing members used to construct walls and partitions, usually spaced 12 to 24 inches apart.
42. Gable framing-The vertical and horizontal framing members that make up and support the end of a building, as distinguished from the front or rear side. A gable is the triangular end of an exterior wall above the eaves.
43. Garage door-The door for vehicle passage into the garage area. Typical garage doors consist of multiple jointed panels of wood, metal, or fiberglass.
44. Girder-A large beam supporting floor joists at the same level as the sills; a larger or principal beam used to support concentrated loads at isolated points along its length.
45. Gravel fill-A bed of coarse rock fragments or pebbles that is laid atop the existing soil before pouring the concrete slab. The gravel serves a dual purpose of breaking surface tension on the concrete slab and providing a layer that interrupts the capillary action of subsurface moisture toward the concrete slab. Typically, polyethylene sheeting will be installed between the gravel fill and the concrete slab for further moisture proofing.
46. Gutter-A channel used for carrying water runoff, usually located at the eaves of a house and connected to a downspout. The primary purpose of the gutters and downspouts is to carry roof-water runoff as far away from the house as possible.
47. Insulation-A manufactured or natural material that resists heat flow and is installed in a house's shell to keep heat in the house in the winter and out of the house in the summer. The most common form of insulation is fiberglass, whether in batts or blown-in material, along with cellulose, rigid foam boards, sprayed-in foam, and rock wool.
48.
48. Jack/king stud-The framing stud, sometimes called the trimmer, that supports the header above a window, door, or other opening within a bearing wall. Depending on the size of the opening, there may be several jack studs on either side of the opening.
49. Mantel-The ornamental or decorative facing around a fireplace, including a shelf attached to the breast or backing wall above the fireplace.
50. Moisture/vapor barrier-A nonporous material, such as plastic or polyethylene sheeting, used to retard the movement of water vapor into walls and attics and prevent condensation in them. A vapor barrier is also installed in crawl space areas to prevent moisture vapor from entering up through the ground.
51. Newel post-The post at the top and bottom of the handrails, and anywhere along the stair run where the handrails change direction. The newel post is securely anchored into the underlying floor framing or the stair stringer to provide stability to the handrails.
52. Reinforcing lath-A strip of wood or metal attached to studs and used as a foundation for plastering, slating, or tiling. Lath has been replaced by gypsum board in most modern construction.
53. Ridge board/beam-The board placed on edge at the top-most point of the roof framing, into which the upper ends of the rafters are joined or attached.
54. Roofing-The finished surface at the top of the house, which must be able to withstand the effects of the elements (i.e., wind, rain, snow, hail, etc.). A wide variety of materials are available, including asphalt shingles, wood shakes, metal roofing, ceramic and concrete tiles, and slate, with asphalt shingles making up the bulk of the material used.
55. Roof rafters-Inclined structural framing members that support the roof, running from the exterior wall to the ridge beam. Rafters directly support the roof sheathing and create the angle or slope of the roof.
56. Roof sheathing-The material used to cover the outside surface of the roof framing to provide lateral and rack support to the roof, as well as a nailing surface for the roofing material. This material most commonly consists of plywood, OSB, or horizontally laid wood boards.
57. Sidewalk-A walkway that provides a direct, all-weather approach to an entry. The sidewalk can be constructed of poured concrete, laid stone, concrete pavers, or gravel contained between borders or curbs.
58. Sill plate-The horizontal wood member that is anchored to the foundation masonry to provide a nailing surface for floors or walls built above.
59. Silt fabric-A porous fabric that acts as a barrier between the backfilled soil (see backfill) and the gravel surrounding the drain tile. This barrier prevents soil particles from blocking the movement of groundwater to the drain tile.
60. Soffit/lookout block-Rake cross-bracing between the fly rafters and end gable rafters to which the soffit is nailed.
65. Subfloor-Boards or plywood, installed over joists, on which the finish floor rests.
66. Support post-A vertical framing member usually designed to carry or support a beam or girder. In newer construction, a metal lally (pronounced "lolly") column is commonly used, as well as 4×4- or 6×6-inch wood posts.
67. Tar-Otherwise known as asphalt, tar is a very thick, dark brown/black substance used as a sealant or waterproofing agent. It is usually produced naturally by the breakdown of animal and vegetable matter that has been buried and compressed deep underground.
Tar is also manufactured-a hydrocarbon by-product or residue left over after the distillation of petroleum. It is commonly used as a sealant or patch for roof penetrations, such as plumbing vents and chimney flashing. Tar is also used as a sealer on concrete and masonry foundation walls before they are backfilled.
68. Termite shield-A metal flashing installed below the sill plate that acts as a deterrent to keep termites from reaching the sill plate.
69. Top plate-The topmost horizontal framing member of a framed wall. Most construction practices require the top plate to be doubled in thickness.
70. Wainscoting-The wooden paneling of the lower part of an interior wall up to approximately waist height, or between 36 and 48 inches from the floor.
71. Wall insulation-A manufactured or natural material that resists heat flow and is installed in a house's shell to keep the heat in the house in the winter and the coolness in the house in the summer. Fiberglass batts are the most common form of wall insulation.
72. Wall sheathing-The material used to cover the outside surface of the wall framing, providing lateral and shear support to the wall as well as a nailing surface for the exterior siding.
4. Chimney flashing-Sheet metal flashing that provides a tight joint between chimney and roof.
5. Firebrick-An ordinary brick cannot withstand the heat of direct fire, so special firebrick is used to line the fireplace. In newer construction, fireplaces are constructed with prefabricated metal inserts.
6. Ash dump-A trap door to let the ashes drop to a pit below, where they may be easily removed.
7. Cleanout door-The door to the ash pit or the bottom of a chimney through which the chimney can be cleaned.
8. Chimney breast-The inside face or front of a fireplace chimney.
9. Hearth-The floor of a fireplace that extends into the room for safety purposes.
# Roof
10. Ridge-The top intersection of two opposite adjoining roof surfaces.
11. Ridge board-The board that follows along under the ridge.
12. Roof rafters-The structural members that support the roof.
13. Collar beam-Not a beam at all; this tie keeps the roof from spreading and connects similar rafters on opposite sides of the roof.
14. Roof insulation-An insulating material (usually rock wool or fiberglass) in blanket form placed between the roof rafters to keep a house warm in the winter and cool in the summer.
15. Roof sheathing-The boards that provide the base for the finished roof. In newer construction, roof sheathing is composed of sheets of plywood or oriented strand board (OSB).
16. Roofing-The wood, asphalt, or asbestos shingles-or tile, slate, or metal-that form the outer protection against the weather.
17. Cornice-A decorative element made of molded members, usually placed at or near the top of an exterior or interior wall.
18. Gutter-The trough that gathers rainwater from a roof.
19. Downspout-The pipe that leads the water down from the gutter.
20. Storm sewer tile-The underground pipe that receives the water from the downspouts and carries it to the sewer. In newer construction, plastic-type materials have replaced tile.
21. Gable-The triangular end of a building with a sloping roof.
22. Barge board-The fascia or board at the gable just under the edge of the roof.
23. Louvers-A series of slanted slots arranged to keep out rain, yet allow ventilation.
# Walls and Floors
24. Corner post-The vertical member at the corner of the frame, made up to receive inner and outer covering materials.
25. Studs-The vertical wood members of the house, usually 2×4s at minimum and spaced every 16 inches.
26. Sill-The board that is laid first on the foundation, and on which the frame rests.
27. Plate-The board laid across the top ends of the studs to hold them even and tight.
28. Corner bracing-Diagonal strips to keep the frame square and plumb.
29. Sheathing-The first layer of outer wall covering nailed to the studs.
30. Joist-The structural members or beams that hold up the floor or ceiling, usually 2×10s or 2×12s spaced 16 inches apart.
31. Bridging-Cross-bridging or solid members at the middle or third points of joist spans that brace one joist against the next and prevent them from twisting.
32. Subflooring-Typically plywood or particle wood that is laid over the joists.
33. Flooring paper-A felt paper laid on the rough floor to stop air infiltration and, to some extent, noise.
34. Finish flooring-Hardwood of tongued-and-grooved strips, carpet, or vinyl products (tile, linoleum).
35. Building paper or sheathing-Paper or plasticized material placed outside the sheathing, not as a vapor barrier, but to prevent water and air from leaking in. Building paper is also used as a tarred felt under shingles or siding to keep out moisture or wind.
36. Beveled siding-Sometimes called clapboards; boards with a thick butt and a thin upper edge lapped to shed water. In newer construction, vinyl, aluminum, or fiber cement siding and stucco are more prevalent.
37. Wall insulation-A blanket of wool or reflective foil placed inside the walls.
38. Metal lath-A mesh made from sheet metal onto which plaster or other composite surfacing materials can be applied. In newer construction, 4-×8-foot sheets of gypsum board (sheetrock) have replaced lath.
62. Balusters-Vertical rods or spindles supporting a rail.
# Foundation
The word "foundation" is used to mean
- construction below grade, such as footings, cellar, or basement;
- the composition of the earth on which the building rests; and
- special construction, such as pilings and piers used to support the building.
The foundation bed may be composed of solid rock, sand, gravel, or unconsolidated sand or clay. Rock, sand, or gravel are the most reliable foundation materials. Figure 6.3 shows the three most common foundations for homes. Unconsolidated sand and clay, though found in many sections of the country, are not as desirable for foundations because they are subject to sliding and settling. Capillary breaks have been identified as a key way of reducing moisture incursion in new construction.
The footing distributes the weight of the building over a sufficient area of ground to ensure that the foundation walls will stand properly. Footings are usually concrete; however, in the past, wood and stone were used, and some older houses were constructed without footings. Although it is usually difficult to determine the condition of a footing without excavating the foundation, a footing in a state of disrepair, or the lack of a footing, will usually be indicated either by large cracks or by settlement in the foundation walls. Foundation wall cracks of this kind, called "Z" cracks, are usually diagonal, starting from the top, the bottom, or the end of the wall (Figure 6.4). Cracks that do not extend to at least one edge of the wall may not be caused by foundation problems; such cracks may be due to other structural problems and should also be reported. The foundation walls support the weight of the structure and transfer this weight to the footings.
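Because the footing's job, as described above, is to spread the building's weight over enough soil area, a rough bearing-pressure check is simple arithmetic. The sketch below is illustrative only: the wall load, footing width, and allowable soil pressure are hypothetical numbers, not values from this manual, and actual footing design must follow local code and a soils evaluation.

```python
# Illustrative check of strip-footing bearing pressure (hypothetical numbers).
# Pressure on the soil = load per linear foot / footing width.

def bearing_pressure_psf(load_per_lin_ft_lb: float, footing_width_in: float) -> float:
    """Return soil pressure in pounds per square foot for a strip footing."""
    width_ft = footing_width_in / 12.0
    return load_per_lin_ft_lb / width_ft

# Assumed example: a wall delivering 2,000 lb per linear foot onto a
# 16-inch-wide footing, over soil assumed to allow 2,500 psf.
pressure = bearing_pressure_psf(2000, 16)
print(f"Bearing pressure: {pressure:.0f} psf")  # ~1,500 psf
print("OK for assumed soil" if pressure <= 2500 else "Footing too narrow")
```

Widening the footing lowers the pressure on the soil, which is exactly why a footing is made wider than the wall it carries.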
The foundation walls may be made of stone, brick, concrete, or concrete blocks. The exterior should be moisture proofed with either a membrane of waterproof material or a coating of portland cement mortar. The membrane may consist of plastic sheeting or a sandwich of standard roofing felt joined and covered with tar or asphalt. The purpose of waterproofing the foundation and walls is to prevent water from penetrating the wall material and leaving the basement or cellar walls damp. Holes in the foundation walls are common in many old houses. These holes may be caused by missing bricks or blocks. Holes and cracks in a foundation wall are undesirable because they make a convenient entry for rats and other rodents and also indicate the possibility of further structural deterioration.
Basement problems are a major complaint of homeowners. Concrete is naturally porous (12%-18% air). When it cures, surplus water creates a network of interconnected capillaries. These pores let in liquid water, water vapor, and radon gas. Like a sponge, concrete draws water from several feet away. As concrete ages, the pores get bigger as a result of freezing, thawing, and erosion. Concrete paints, waterproofing sealers, and cement coatings are a temporary fix; they crack or peel and cannot stop gases such as water vapor and radon. Damp basement air spreads mold and radon through the house. Efflorescence (white powder stains) and musty odors are telltale signs of moisture problems.
To resolve this potential problem in crawl spaces, 6-mil plastic sheets should be laid as vapor barriers over the entire crawl space floor. The sheets should overlap each other by at least 6 inches and should be taped in place. The plastic should extend up the perimeter walls by about 6 inches and should be attached to the interior walls of the crawl space with mastic or batten strips. All of the perimeter walls should be insulated, and insulation should be placed between the joists at the top of the walls. Vents, which may need to be opened in the late spring and closed in the fall, should not be blocked. If not properly managed, moisture originating in the crawl space can cause problems with wood flooring and create many biologic threats to health and property. A properly placed vapor barrier can prevent or reduce problem moisture from entering the home.
# Vapor Barriers for Concrete Slab Homes
Strip flooring and related products should be protected from moisture migrating through the slab. Proper on-grade or above-grade construction requires that a vapor barrier be placed beneath the slab. Moisture tests should be done to determine the suitability of the slab before installing wood products. A vapor barrier equivalent to 4- or 6-mil polyethylene should be installed on top of the slab to further protect the wood products and the residents of the home.
# Wall and Ceiling Vapor Barriers
Wall and ceiling vapor barriers should go on the heated side of the insulation and are necessary in cold climates. Water vapor flows from areas of high pressure (indoors in winter) through the wall to areas of low pressure (outdoors in winter). People and their pets produce large quantities of water vapor simply by breathing, and considerable additional moisture is created in the home by everyday activities such as washing clothes, cooking, and personal hygiene. The purpose of the vapor barrier is to prevent this moisture from entering the wall and freezing, then draining and causing damage. In addition, wet insulation has very little insulating value.
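The condensation behavior described above can be made concrete with a standard approximation: if an interior surface is colder than the dew point of the room air, moisture will condense on it. The sketch below uses the Magnus formula, a common meteorological approximation rather than anything specified in this manual, and the room conditions in the example are assumed.

```python
import math

# Estimate the dew point with the Magnus approximation (a standard
# formula, not from this manual). Surfaces colder than the dew point
# will collect the condensation described above.

def dew_point_c(temp_c: float, rel_humidity_pct: float) -> float:
    a, b = 17.27, 237.7  # Magnus coefficients, valid roughly 0-60 C
    gamma = (a * temp_c) / (b + temp_c) + math.log(rel_humidity_pct / 100.0)
    return (b * gamma) / (a - gamma)

# Assumed example: 21 C (70 F) room air at 50% relative humidity.
dp = dew_point_c(21.0, 50.0)
print(f"Dew point: {dp:.1f} C")  # ~10 C; a colder window pane will sweat
```

This is why cold-climate guidance puts the vapor barrier on the heated side of the insulation: it keeps humid indoor air away from surfaces inside the wall that sit below the dew point.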
Insulation with the vapor barrier misplaced will allow the vapor to condense in the insulation and then freeze. In cold climates, this ice can build up all winter and run out onto the floor in the spring. Such moisture buildup blisters paint, rots sheathing, and destroys the insulating value of the insulation.
Basement remodeling traps invisible water vapor, causing mold and mildew. Most basements start leaking within 10 to 15 years, so basement walls and floors should be sealed and preserved before they deteriorate. The basement floor should be concrete placed on at least 6 inches of gravel. The gravel distributes groundwater movement under the concrete floor, reducing the possibility of water penetrating the floor. A waterproof membrane, such as plastic sheeting, should be laid before the concrete is placed for additional protection against flooding and the infiltration of radon and other gases. The basement floor should be gradually, but uniformly, sloped from all directions toward a drain or a series of drains. These drains permit the basement or cellar to drain if it becomes flooded. Water or moisture marks on the floor and walls are signs of ineffective waterproofing or moisture proofing. Cellar doors, hatchways, and basement windows should be weather-tight and rodent-proof. A hatchway can be inspected by standing at the lower portion with the doors closed; if daylight can be seen, the door needs to be sealed or repaired.
# Crawl Space Vapor Barriers
Throughout the United States, even in desert areas, there is moisture in the ground from absorbed groundwater. Even in an apparently dry crawl space, a large amount of water is entering; it is drying out as fast as it enters, which causes high moisture levels in the crawl space and elsewhere in the house. A solid vapor barrier is recommended in all crawl spaces and should be required if moisture problems exist. This vapor barrier, if properly installed, also reduces the infiltration of radon gas. Of course, if the moisture is coming from above ground, a vapor barrier will collect and hold it; therefore, any source of moisture must be found and eliminated. The source may be as obvious as sweating pipes or as difficult to spot as condensation on surfaces. The solution can be as simple as applying insulation to exposed sections of piping or complex enough to require power exhaust fans and additional insulation and vapor barriers. The more common causes of moisture problems in a new home are moisture trapped within the structure during construction and a continuing source of excess moisture from the basement, crawl space, or slab.
# House Framing
Many types of house-framing systems are found in various sections of the country; however, most framing systems include the elements described in this section.
# Foundation Sills
The purpose of the sill is to provide support or a bearing surface for the outside walls of the building. The sill is the first part of the frame to be placed and rests directly on the foundation wall. It is often bolted to the foundation wall with sill anchors. In many homes, metal straps cemented into the foundation wall are bent around and secured to the sill. It is good practice to protect the sill against termites by extending the foundation wall to at least 18 inches above the ground and using a noncorroding metal shield continuously around the outside top of the foundation wall.
# Flooring Systems
The flooring system is composed of a combination of girders, joists, subflooring, and finished flooring that may be made of concrete, steel, or wood. Joists are laid perpendicular to the girders, about 16 inches on center, and the subflooring is attached to them. If the subfloor is wood, it may be nailed, glued, or screwed at either right angles or diagonally to the joists. Many homes are built with wood I-joists or trusses rather than solid wood joists.
In certain framing systems, a girder supports the joists and is usually a larger section than the joists it supports. Girders are found in framing systems where there are no interior bearing walls or where the span between bearing walls is too great for the joists. The most common application of a girder is to support the first floor. Often a board known as a ledger is applied to the side of a wood girder or beam to form a ledge for the joists to rest upon. The girder, in turn, is supported by wood posts or steel "lally columns" that extend from the cellar or basement floor to the girder.
# Studs
For years, wall studs were composed of wood and were 2×4 inches; but, with the demand for greater energy efficiency in homes, that standard no longer holds true. Frame studs up to 6 inches wide are used to increase the area available for insulation material, and the increased stud size allows wider spacing between framing members. There are now alternatives to conventional wood studs, specifically, insulated concrete forms, structural insulated panels, light-gauge steel, and combined steel and wood.
The advantages of light-gauge steel include the following:
- weighs 60% less than equivalent wood units and has greater strength and durability;
- is impervious to termites and other damage-causing pests;
- stays true and does not warp;
- is noncombustible; and
- is recyclable.
The disadvantages of steel include these:
- steel is an excellent thermal conductor and requires additional external insulation;
- as a newer product, it is unfamiliar to craftsmen, engineers, and code officials; and
- different construction tools are required.
The combined steel and wood framing system includes light-gauge steel studs with 6-inch wooden stud pieces attached to the top and bottom to allow easy attachment to traditional wood frame materials.
There are two types of walls or partitions: bearing and nonbearing. A bearing wall is constructed at right angles to the joists it supports. A nonbearing wall, or partition, acts as a screen or enclosure; hence, the headers in it are often parallel to the joists of the floor above. In general, studs, like joists, are spaced 16 inches on center. In light construction, such as garages and summer cottages, wider stud spacing is common. Openings for windows or doors must be framed in studs; this framing consists of horizontal members (headers) and vertical members (trimmers or jack studs). Because the vertical spaces between studs can act as flues to transmit flames in the event of a fire, fire stops are important in preventing or retarding fire from spreading through a building by way of air passages in walls, floors, and partitions. Fire stops are wood obstructions placed between studs or floor joists to block fire from spreading in these natural flue spaces.
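As a quick illustration of the on-center layout convention described above, the number of framing members in a run is the number of spacing intervals plus one for the member that closes the far end. The function and the example dimensions below are hypothetical, not from this manual.

```python
import math

# Count framing members (studs or joists) laid out "on center":
# one member per spacing interval, plus one to close the far end.
# Example dimensions are hypothetical.

def member_count(run_length_in: float, on_center_in: float = 16.0) -> int:
    return math.ceil(run_length_in / on_center_in) + 1

# A 20-foot (240-inch) wall framed 16 inches on center:
print(member_count(240))      # 16 studs
# The same wall at 24 inches on center (light construction):
print(member_count(240, 24))  # 11 studs
```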
# Interior Walls
Many types of materials are used for covering interior walls and ceilings, but the principal type is drywall. The generic term "drywall" typically refers to gypsum board, which is also called wallboard or known by the brand name Sheetrock. Gypsum board is a sheet material composed of a gypsum filler faced with paper. In drywall construction, gypsum boards are fastened to the studs either vertically or horizontally and then painted. The edges along the length of the sheet are slightly recessed to receive joint cement and tape. Drywall finish requires little, if any, waiting time before it can be applied. Other drywall finishes include plywood, fiberboard, or wood in various sizes and forms.
Plaster was once quite popular for interior walls. Plaster is a mixture (usually of lime, sand, and water) applied in two or three coats to lath to form a hard wall surface. A plaster finish requires a base on which plaster can be spread. Wood lath at one time was the plaster base most commonly used, but today gypsum-board lath is more popular. Gypsum lath may be perforated to improve the bond and thus lengthen the time the plaster can remain intact when exposed to fire; building codes in some cities require that gypsum lath be perforated. Expanded-metal lath also may be used as a plaster base. Expanded-metal lath consists of sheet metal slit and expanded to form openings to hold the plaster. Plaster is applied over the base to a minimum thickness of ½ inch. Because wood framing members may dry after the house is completed, some shrinkage can be expected, which, in turn, may cause plaster cracks to develop around openings and in corners. Strips of lath embedded in the plaster at these locations help prevent such cracks. Bathrooms have unique moisture exposure problems, and locally code-approved cement board should be used around bath and shower enclosures.
# Stairways
The purpose of stairway dimension standards is to ensure adequate headroom and uniformity in riser and tread size. Interior stairways (Figure 6.5) should be no less than 44 inches wide; the width may be reduced to 36 inches when permitted by local or state code in one- and two-family dwellings. Stairs with closed risers should have risers of no more than 8¼ inches and treads of no less than 9 inches plus 1-inch nosing. Basement stairs, often constructed with open risers, should meet the same riser and tread dimensions. The headroom in all parts of the stair enclosure should be no less than 80 inches. Dimensions of exterior stairways should be the same as those of interior stairways, except that the headroom requirement does not apply. Staircases should have handrails that are between 1¼ and 2⅝ inches wide, particularly if the staircases have more than four steps. Handrails should be shaped so they can be readily grasped for safety and placed so they are easily accessible. Handrails should be 4⅛ inches from the wall and 34 to 38 inches above the leading edge of the stairway treads. Handrails should not end in any manner, or have projections, that can snag clothing.
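The stairway dimensions above lend themselves to a simple check. The sketch below encodes the limits stated in this section (8¼-inch maximum riser, 9-inch minimum tread plus 1-inch nosing, 80-inch headroom, 34- to 38-inch handrail height); the function and the example stair are hypothetical, and the applicable local code always governs.

```python
# Check a stair design against the dimensional guidance above.
# All inputs are in inches; the example stair is hypothetical.

def check_stairway(riser: float, tread: float, headroom: float,
                   handrail_height: float) -> list:
    problems = []
    if riser > 8.25:
        problems.append(f'riser {riser}" exceeds the 8 1/4" maximum')
    if tread < 9.0:
        problems.append(f'tread {tread}" is under the 9" minimum (plus 1" nosing)')
    if headroom < 80.0:
        problems.append(f'headroom {headroom}" is under the 80" minimum')
    if not 34.0 <= handrail_height <= 38.0:
        problems.append(f'handrail {handrail_height}" is outside the 34-38" range')
    return problems

issues = check_stairway(riser=8.5, tread=9.5, headroom=82, handrail_height=36)
print(issues or "stairway meets the guidance")  # flags the 8.5-inch riser
```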
# Windows
The six general classifications of windows (Figure 6.6) are as follows:
- Double-hung sash windows that move up or down, balanced by weights hung on chains, ropes, or springs on each side;
- Casement sash windows that are hinged at the side and can be hung so they will swing out or in;
- Awning windows that usually have two or more glass panes that are hinged at the top and swing out horizontally;
- Sliding windows that usually have two or more glass panes that slide past one another on a horizontal track;
- Fixed windows that are generally for increased light entry and decorative effect; and
- Skylight windows for increased room illumination and decoration that can be built to open.
The principal parts of a window, shown in three-dimensional view in Figure 6.7 and face-on and side view in Figure 6.8, are the following:
Drip cap-A separate piece of wood projecting over the top of the window; a component of the window casing. The drip cap protects against moisture.
Window trough-The cut or groove in which the sash of the window slides or rests.
Window sill-The shelf on the bottom edge of a window, either a projecting part of the window frame or the bottom of the wall recess that the window fits into. The sill contains the trough and protects against moisture.
Recent technological advancements-new materials, coatings, design, and construction features-make it possible to choose windows that balance winter heating and summer cooling needs without sacrificing versatility or style. To ensure that the windows, doors, or skylights selected are appropriate for the region in which they will be installed, Energy Star certification labels include a climate region map.
Some window glass is made of tempered glass to resist breakage. Some windows are made of laminated glass, which resists breakage but, if broken, produces glass shards too small to cause injury. The glazing, or glass, can be a solid glass sheet (single glazed) or have two layers of glass (double glazed) separated by a spacer. Air trapped between the glass layers provides some insulation value. Triple-glazed windows have three pieces of glass, or two layers of glass with a low-emissivity film suspended between them. Triple-glazed windows have advantages where extremes in weather and temperature are the norm. They also can reduce sound transmission to a greater degree than can single- or double-glazed windows.
The variety of exterior door systems has increased significantly over the past 5 to 10 years. Many combine several different materials to make a realistic, if not actual, wood door that provides both beauty and enhanced security.
# Exterior House Doors
Exterior door frames are ordinarily of softwood plank, with the sides rabbeted to receive the door in the same way as casement windows. At the foot is a sill, made of hardwood or another material, such as aluminum, to withstand the wear of traffic and sloped down and out to shed water. Doors often come equipped with door sweeps to conserve energy. The four primary categories of modern exterior doors are steel, fiberglass, composites, and wood.
# Doors
There are many styles of doors for both exterior and interior use. Exterior doors must, in addition to offering privacy, protect the interior of the structure from the elements. Various parts of a door are the same as the corresponding parts of a window. A door's function is best determined by the material from which it is made, how it looks, and how it operates. When doors are used for security, they are typically made from heavy materials and have durable, effective locks and hinges. A door that lets in light or allows people to look out onto the yard, such as a sliding glass door or a french door, will have multiple panes (also called lights) or be made almost completely of glass.
Houses have many exterior and interior door options. Exterior doors are typically far sturdier than interior doors, need to be weather tight, and must ensure security for the home. Exterior doors are also more decorative than most interior doors and may cost a considerable amount. Typical exterior doors include front entry doors, back doors, french doors, dutch doors, sliding glass doors, patio doors, and garage doors. Most doors are made of wood or materials made to look like wood. Fiberglass composite and steel doors often have polymer or vinyl coatings embossed with wood grain; some even have cellulose-based coatings that can be stained like wood doors. Wood doors are made from every kind of wood imaginable, hardwoods being the most durable and elegant. Wood doors insulate better than glass; composite and steel doors provide even more insulation and durability, as well as better security, than does wood.
# Garage Doors
Garage doors open in almost any configuration needed for the design of the home. Installing most garage doors is complex and dangerous enough that only a building professional should attempt it. Garage doors often include very strong springs that can come loose and severely injure the unsuspecting installer. Garage door springs are under extreme tension because of the heavy loads they must lift, which makes them dangerous to adjust.
A garage door may suffer from any of several problems. The most common problem is that the door becomes difficult to lift and lower. This may be something that can be resolved with simple adjustments, or it may be more serious. If the door is connected to an electric opener, the opener mechanism can be disconnected from the door by pulling the release cord or lever; if the door then works manually, the problem is with the electric opener. A door that seems unusually hard to lift may have a problem with spring tension. Wood doors should be properly painted or stained both outside and inside. If only the outside of a garage door is finished, the door may warp and moisture may cause the paint to peel.
Rules issued by the Consumer Product Safety Commission on December 3, 1992, specify entrapment protection requirements for garage doors. The rules require that residential garage door openers contain one of the following:
- An external entrapment protection device, such as an electric eye that sees an object obstructing the door without having actual contact with the object. A door-edge sensor is a similar device that acts much like the door-edge sensors on elevator doors.
- A constant contact control button, which is a wall-mounted button requiring a person to hold in the control button continuously for the door to close completely. If the button is released before the door closes, the door reverses and opens to the highest position.
- A sticker on all newly manufactured garage door openers warning consumers of the potential entrapment hazard. The sticker is to be placed near the wall-mounted control button.
Steel-The most common exterior door sold today is steel. Humidity will not cause a steel door to warp or twist. Steel doors often have synthetic wood-grain embossed finishes that accept stain.
Just about every steel exterior door is filled with some type of foam. This foam allows the doors to achieve R-values almost five times that of an ordinary wood door. Metal is often used as a veneer over the frame. In general, the horizontal members are called rails and the vertical members are called stiles. Every door has a top and bottom rail, and some have intermediate rails. There are always at least two stiles, one on each side of the door.
Fiberglass-The second most frequently selected exterior door is fiberglass. Fiberglass doors are similar to steel doors but tend to be much more resistant to denting. (Steel doors can be dented quite easily.) Fiberglass doors also are stainable and have rich, realistic wood graining. Fiberglass doors are insulated with foam and have high R-values.
Composite materials-The third most common exterior door is made of composite materials. These doors often blend two materials together. Their composite fiber-reinforced core can be twice as strong as wood, and it will not rot, warp, or twist when subjected to high levels of humidity.
Wood-The last major category of doors is wood. Solid wood doors range from inexpensive to true works of art. Their downside is that, if not sealed properly against humidity, they can warp and bow and will then fit poorly in their frames. Other types of wooden doors are described below.
- Batten doors are often found on older homes. They are made of boards nailed together in various ways. The simplest is two layers nailed to each other at right angles, usually with each layer at 45° to the vertical. Another type of batten door consists of vertical boards nailed at right angles to several (two to four) cross strips called ledgers, with diagonal bracing members nailed between the ledgers. If vertical members corresponding to the ledgers are added at the sides, the verticals are called frames. Batten doors are often found in cellars and other places where appearance is not a factor and economy is desired.
- Solid flush doors are perfectly flat, usually on both sides, although occasionally they are made flush on one side and paneled on the other. Flush doors sometimes are solid planking, but they are commonly veneered and possess a core of small pieces of white pine or other wood. These pieces are glued together with staggered end joints. Along the sides, top, and bottom are glued ¾-inch edge strips of the same wood, used to create a smooth surface that can be cut or planed. The front and back faces are then covered with a ⅛-inch to ¼-inch layer of veneer. Solid flush doors may be used on both the interior and exterior.
- Hollow-core doors, like solid flush doors, are perfectly flat; but, unlike solid doors, the core consists mainly of a grid of crossed wooden slats or some other type of grid construction. Faces are three-ply plywood instead of one or two plies of veneer, and the surface veneer may be any species of wood, usually hardwood. The edges of the core are solid wood and are made wide enough at the appropriate places to accommodate locks and butts. Doors of this kind are considerably lighter than solid flush doors. Hollow-core doors are usually used as interior doors.
Many doors are paneled, with most panels consisting of solid wood or plywood, either raised or flat, although exterior doors frequently have one or more panels of glass. A door may have a single panel or as many as nine. Paneled doors may be used on both the interior and exterior.
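The claim above, that a foam-filled steel door can reach roughly five times the R-value of an ordinary wood door, can be translated into heat flow with the standard steady-state relationship Q = A·ΔT/R. In the sketch below, the R-values (roughly R-2 for solid wood and R-10 for foam-filled steel), the door area, and the temperature difference are all illustrative assumptions, not figures from this manual.

```python
# Compare steady-state heat loss through two door types using
# Q = area * temperature_difference / R  (BTU per hour).
# R-values are illustrative: the text above says a foam-filled steel
# door can reach roughly five times the R-value of a wood door.

DOOR_AREA_SQFT = 20.0  # roughly a 3 ft x 6 ft 8 in door (assumed)
DELTA_T_F = 50.0       # 70 F inside, 20 F outside (assumed)

def heat_loss_btu_per_hr(r_value: float) -> float:
    return DOOR_AREA_SQFT * DELTA_T_F / r_value

for label, r in [("solid wood door (assumed R-2)", 2.0),
                 ("foam-filled steel door (assumed R-10)", 10.0)]:
    print(f"{label}: {heat_loss_btu_per_hr(r):.0f} BTU/hr")
```

Under these assumptions the wood door loses about 500 BTU/hr and the steel door about 100 BTU/hr, which is simply the fivefold R-value difference inverted.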
The frame of a doorway is the portion to which the door is hinged. It consists of two side jambs and a head jamb, with an integral or attached stop against which the door closes.
# Roof Framing
# Rafters
One of a series of structural roof members spanning from an exterior wall to a ridge beam or ridge board. Rafters serve the same purpose for the roof as joists do for floors, that is, providing support for sheathing and roofing material. They are typically placed on 16-inch centers.
# Collar Beam
Collar beams are ties between rafters on opposite sides of the roof. If the attic is to be used for rooms, the collar beam may double as the ceiling joist.
# Purlin
A purlin is the horizontal member that forms the support for the rafters at the intersection of the two slopes of a gambrel roof.
# Ridge Board
A ridge board is a horizontal member that forms a lateral tie to make the rafters secure.
# Hip
A hip is like a ridge, except that it slopes. It is the intersection of two adjacent, rather than two opposite, roof planes.
# Roof Sheathing
The manner in which roof sheathing is applied depends upon the type of roofing material. Roof boards may vary from tongue-and-groove lumber to plywood panels.
# Dormer
The term "dormer window" is applied to all windows in the roof of a building, whatever their size or shape.
# Roofs
# Asphalt Shingle
The principal damage to asphalt shingle roofs is caused by strong winds on shingles nailed close to the ridge line of the roof. Usually the shingles affected by winds are those in the four or five courses nearest the ridge and in the area extending about 5 feet down from the edge or rake of the roof.
# EPDM
Ethylene propylene diene monomer (EPDM) is a single-ply roofing system. EPDM allows extreme structural movement without splitting or cracking and retains its pliability in a wide range of temperatures.
# Asphalt Built-up Roofs
Asphalt roofs may be unsurfaced (a coating of bitumen being exposed directly to the weather) or surfaced (with slag or gravel embedded in the bituminous coating). Using surfacing material is desirable as a protection against wind damage and the elements. This type of roof should have enough pitch to drain water readily.
# Coal Tar Pitch Built-up Roofs
This type of roof must be surfaced with slag or gravel. A coal tar pitch built-up roof should always be used on a deck pitched less than ½ inch per foot, that is, where water may collect and stand. This type of roof should be inspected on completion, 6 months later, and then at least once a year, preferably in the fall. When the top coating of bitumen shows damage or has become badly weathered, it should be renewed.
# Slate Roofs
The most common problem with slate roofs is the replacement of broken slates. Otherwise, slate roofs normally render long service with little or no repair.
# Tile Roofs
Replacement of broken shingle tiles is the main maintenance problem with tile roofs. Tile is one of the most expensive roofing materials, but it requires very little maintenance and gives long service.
# Copper Roofs
Usually made of 16-ounce copper sheeting and applied to permanent structures, copper roofs require practically no maintenance or repair when properly installed. Proper installation allows for expansion and contraction with changes in temperature.
# Galvanized Iron Roofs
The principal maintenance for galvanized iron roofs involves removing rust and keeping the roof well painted. Leaks can be corrected by renailing, caulking, or replacing all or part of the sheet or sheets in disrepair.
# Wood Shingle Roofs
The most important factors for wood shingle roofs are their pitch and exposure, the character of the wood, the kind of nails used, and the preservative treatment given the shingles. At one time these roofs were treated with creosote and coal tar preservatives. Because they are made from a flammable material, insurance companies frequently charge higher rates for wood shingle roofs.
# Roof Flashing
Valleys formed by the junction of two downward roof slopes (as in gambrel roofs, which have two pitches, steeper on the lower slope and flatter toward the ridge, designed to provide more space on upper floors) may be open or closed. In a closed valley, the slates, tiles, or shingles of one side meet those of the other, and the flashing below them may be comparatively narrow. In an open valley, the flashing, which may be made of zinc, copper, or aluminum, is laid in a continuous strip extending 12 to 18 inches on each side of the valley, while the tiles or slates do not come within 4 to 6 inches of it. The ridges built up on a sloping roof where it runs down against a vertical projection, such as a chimney or a skylight, should be weatherproofed with flashing. Failure of roof flashing is usually due to exposed nails that have come loose; the loose nails allow the flashing to lift, resulting in leakage. Flashings made of lead or coated with lead should not be used. The use of a thin, self-sticking rubber ice and water shield under flashings and on the edges of roofs is now common practice. The shield helps reduce leakage and ice backup in cold climates, preventing serious damage to this part of the home.
# Gutters and Leaders
Gutters and leaders should be of noncombustible materials and should not be made of lead, lead-coated copper, or any other formulation containing lead. They should be securely fastened to the structure and should spill into a storm sewer, not a sanitary sewer, if the neighborhood has one. When there is no storm sewer, a concrete or stone block, called a splash block, placed on the ground beneath the leader prevents water from eroding the lawn. Gutters should be checked every spring and fall and cleaned when necessary. Gutters must be placed or installed to ensure that water drains away from the foundation of the house. Soil around the home should be graded in a manner that also drains water away from the foundation.
# Exterior Walls and Trim
Exterior walls are enclosure walls whose purpose is not only to make the building weather tight but also to allow the building to dry out. In most one- to three-story buildings they also serve as bearing walls. These walls may be made of many different materials (Figure 6.9). Brick is often used to cover framed exterior walls. In this situation, the brick is only one course thick and is called a brick veneer. It supports nothing but itself and is kept from toppling by ties connected to the frame wall. In frame construction, the base material of the exterior walls is called sheathing. The sheathing material may be square-edge, shiplap, or tongue-and-groove boards, or plywood or oriented strand board (OSB). Sheathing, in addition to serving as a base for the finished siding material, stiffens the frame to resist sway caused by wind. It is for this reason that board sheathing is applied diagonally on frame buildings: its role is to brace the walls effectively to keep them from racking.
Many types of sidings, shingles, and other exterior coverings are applied over the sheathing. Vinyl siding; wood siding; brick, cedar, and other wood shingles or shakes; asphalt; concrete; clapboard; common siding (called bevel siding); composition siding; cement shingles; fiber cement (e.g., Hardiplank); and aluminum siding are commonly used for exterior coverings. In older homes, asbestos-cement siding shingles can still be found as an exterior application or underneath various types of aluminum or vinyl siding. Clapboard and common siding differ only in the length of the pieces. Composition siding is made of felt, grit, and asphalt, often shaped to look like brick. Asbestos-cement shingles, which were used until the early 1970s, are rigid and produce a siding that is fire-resistant but also a health hazard. Cedar wood shingles and aluminum are manufactured with a backer board that provides insulating and fire-resistant qualities.
Vinyl siding is manufactured from polyvinyl chloride (PVC), a building material that has replaced metal as the prime material for many industrial, commercial, and consumer products. PVC has many years of performance as a construction material, providing impact resistance, rigidity, and strength. The use of vinyl siding is not without controversy, because the vinyl chloride monomer from which PVC is made is a known human carcinogen, and accidental fires in vinyl-sided buildings are more dangerous because vinyl produces toxic vapors when heated.
# Putting It All Together
The next section shows a home being built by Habitat for Humanity. This small, one-family home represents all of the processes that would also be used for a far more expensive and elaborate dwelling. The homebuilding shown in the following pictures was done by an industrial arts class to educate and train a new generation of construction specialists and homebuilders.
A. The foundation trench for a new home has horizontal metal rods, also called reinforcement rods or rebar, to increase the strength of the concrete. After the concrete hardens, a perforated pipe 4 to 6 inches in diameter is placed beside it to collect water and allow it to drain away from the foundation. This pipe is the footing drain, and the poured concrete beside it is the footer. The footing drain is important in removing water from the base of the home. It also serves the secondary purpose of moisture control in the home and provides a venting route for radon gas. The holes dug near the legs of the workers will be filled with concrete and form the footer that will hold up the porch of the home. To assist in preventing capillary action from wicking water from the foundation to the wooden structure, a polyethylene sheet is placed over the footer before the concrete foundation is poured or the cinderblock foundation is built.
B. The concrete on top of the footer is leveled to establish a surface for the foundation of the home. Once the footer has hardened, the perforated drainage pipe will be laid on the outside of the poured foundation wall. The reinforcing rods were positioned in the trench before the concrete was poured.
C. Concrete will be poured into this form on top of the footer to create the foundation of the home. Again, reinforcing rods are added to ensure that the concrete has lateral strength as well as the strength to support the home. Once the concrete has hardened and become seasoned, the forms will be removed to reveal the finished poured concrete foundation over the perforated drainage pipe.
Not shown is a newer technique of using insulating polystyrene forms and ties in a building foundation.
D. Foundations are not always poured concrete; they are often cinderblock or similar materials that are cemented in place to form the load-bearing wall. The arrow shows the concrete chute delivering concrete into the form. Long poles are pushed into the freshly poured concrete to remove air pockets that would weaken the foundation. Care must be taken to ensure that the forms are appropriately supported before the concrete is poured. Often tar, plastic, or other waterproof materials are placed on the outside of the foundation up to ground level to further divert moisture from the house to the footing drains.
E. Gravel fill is placed outside the finished poured concrete foundation. This ensures that moisture does not stand around the foundation; instead, the moisture is routed to the footing drain for fast dispersal.
F. A termite shield is established on top of the concrete wall (foundation) just below the sill of the home. The sill is typically made of pressure- and insecticide-treated wood to ensure stability and long life. A cinderblock foundation will be used to support the storage shed attached to the house. Note the potential for inadvertent sabotage of the termite shield if a shield is not also installed on top of the cinderblock foundation.
G. The OSB subfloor, the joists supporting the floor, and the metal bridging used to keep the joists from twisting can be seen from the crawl space under the home. If the material used for the flooring or external sheathing of the home is made of plywood or a composition that is not waterproof, the material must be protected from rain to prevent deterioration and germination of mold spores. Some glues or resins release toxic vapors for years if deterioration is allowed to begin.
H. The flooring material of the first floor of the home is OSB applied to the subfloor with both glue and wood screws. Where possible, the screws should extend through the subfloor and into the joists below to prevent squeaking.
I. The interior wall framing is composed of studs traditionally referred to as 2×4s. The horizontal member at the top of the studs is called a girt or a ribbon. In this case the builders have used two 2×4s, placing one on top of the other. Because the outside walls have used studs that are 2×6-inch boards, the girts or ribbons on top of these are also double 2×6-inch boards.
J. The exterior wall framing is composed of studs that are 2×6-inch boards. The horizontal member extending from one exterior wall to the other is called a girder and is a prime support for the second floor of the home. The larger studs in the exterior wall are used both for greater strength and to provide greater energy efficiency for the home. The lintels above the windows and doors distribute the weight of the second floor and roof across the studs located on each side of the openings in the frame.
K. The joists above the first floor are connected to the central girder of the home by steel brackets. These brackets provide a far more effective alternative than toenailing the joists in place or notching the girder to hold them.
L. The subroof or roof sheathing is applied from the bottom up, with temporary traction boards nailed to the subroof to allow safe installation of the material. The subroof is placed on the rafters up to the ridge board of the roof.
A waterproof material will be added to the subroof before the final roofing material is installed.
M. An interior wall is installed to create second-floor rooms. The subroof has been installed, and the exterior wood of the home has been covered with plastic sheathing or a housewrap to protect it from moisture.
N. Flashing material, such as sheet metal, is installed at critical locations to make sure that water does not enter the home where the joints and angles of a roof meet: where the dormer roof and the walls of the dormer meet the roof, where windows penetrate the walls, where the vent stack penetrates the roof, where the porch roof meets the front wall, and at skylights and the eaves of the house.
O. A safety scaffold is standing at the rear of the home, and the final roofing material has been applied, in addition to the exterior vinyl siding.
# Chapter 7: Environmental Barriers
# Introduction
Damaging moisture originates not only from outside a home; it is created inside the home as well. Moisture is produced by smoking; breathing; burning candles; washing and drying clothes; and using fireplaces, gas stoves, furnaces, humidifiers, and air conditioning. Leaks from plumbing, unvented bathrooms, dishwashers, sinks, toilets, and garbage disposal units also create moisture problems because they are not always found before water damage or mold growth occurs. Figure 7.1 provides an overview of the sources of moisture and types of air pollutants that can enter a home.
Solving moisture problems is often expensive and time-consuming. The first step is to do a moisture inventory and rank the problems; those that are easiest and least expensive to resolve should be addressed first. For example, many basement leaks have been eliminated by making sure sump pumps and downspouts drain away from the house. On the other hand, moisture seeping through basement or foundation walls is often very expensive to repair. Eliminating such moisture is seldom as simple as coating the interior wall; it often requires expert consultation and excavating around the perimeter of the house to install or clean clogged footing drains. Sealing the outside of the basement walls and coating the exterior foundation wall with tar or other waterproofing compounds are often the only solutions.
Moisture condensation occurs in both winter and summer. The following factors increase the probability of condensation:
- Homes that are ineffectively insulated and are not sealed against air infiltration in cold climates can have major moisture problems.
- Cool interior surfaces such as pipes, windows, tile floors, and metal appliances; air conditioner coils with poor outside drainage; masonry or concrete surfaces; toilet tanks; and, in the winter, outside walls and ceilings can develop moisture buildup from condensation. If the temperature of an interior surface is low enough to reach the dew point, moisture in the air will condense on it and enhance the growth of mold.
- Dehumidifiers used in regions where outside humidity levels are normally 80% or higher have a moisture-collecting tank that should be cleaned and disinfected regularly to prevent the growth of mold and bacteria. It is best if dehumidifiers have a drain line continuously discharging directly to the outside or into a properly plumbed trap. This is also true in climates where air conditioning units are used on a full-time or seasonal basis.
Their cooling pans provide an excellent environment for the growth of allergenic or pathogenic organisms.
- Moisture removed from clothing by clothes dryers builds up in the dryer vent if the vent is clogged by lint or improperly configured. Moisture buildup in this vent can result in mold growth and, if leakage occurs, damage to the structure of the home. The vent over the cooking area of the kitchen should also be checked routinely for moisture or grease buildup.
# Roof
The control of moisture in a home is of paramount importance, and it is no surprise that moisture control begins with the design and integrity of the roof. Many types of surfacing materials are used for roofs-stone, composition asphalt, plastic, or metal, for example. Some have relatively short lives and some, such as slate and tile, have extraordinarily long lives. As with nearly all construction materials, tradeoffs must be made in terms of cost, thermal efficiency, and longevity. However, all roofs have two things in common: the need to shed moisture and the need to protect the interior from the environment.
When evaluating the roof of a home, the first thing to observe is the roofline against the sky to see if the roof's ridge board is straight and level. If the roofline is not straight, it could mean that serious deterioration has taken place in the structure of the home as a result of improper construction, weight buildup, a deteriorated or broken ridge beam, or rotting rafters. Whatever the cause, the focus of an inspection must be to locate the extent of the damage.
# Roof Inspection
1. Is the roofline of the house straight?
2. Are there ripples or waves in the roof?
3. What is the condition of the gutters and downspouts?
4. What is the condition of the boards the gutters are attached to?
5. Does the flashing appear to be separated or damaged?
6. Is there any apparent damage in the attic, or can sunlight be seen through the roof?
7. Is there mold or discoloration on the rafters or roof sheathing?
8. Is there evidence of corrosion between the gutter and downspouts and any metal roofing or aluminum siding?
9. Do the downspouts route the water away from the base or foundation of the home?
10. Are the gutters covered or free of leaves? Are they sagging or separating from the fascia?
11. Does the gutter provide a mosquito-breeding area by holding water?
The next area to inspect is around the flashing on the roof. Flashing is used around any structure that penetrates the surface of a roof or where the roofline changes direction. These areas include chimneys, gas vents, attic vents, dormers, and raised and lowered roof surfaces. One of the best ways to locate a leak around flashing is to go into the attic and look carefully. Leaks often are discovered when it rains; but if it is not raining, the underside of the roof can be examined with the attic lights off for pinpoints of daylight. Roofing material should lie relatively flat and should not wave or ripple. The roof should be checked for missing or damaged shingles, areas where flashing should be installed, elevation changes in roof surfaces, and evidence of decomposing or displaced surfaces around the edge of the roof.
# Insulation
A house must be able to breathe; air must not be trapped inside but must be allowed to exit the home along with its moisture. Moisture buildup in the home will lead to both mold and bacteria growth. Figure 7.2 shows insulation blown into an attic to a depth of approximately 12 inches (Figure 7.3).
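For a rough sense of what the approximately 12-inch blown-in depth shown in Figure 7.2 provides, loose-fill fiberglass is often rated at about R-2.2 to R-2.7 per inch of depth. That per-inch range is a common industry rule of thumb, not a figure from this manual, so treat the sketch below as an estimate only.

```python
# Estimate the R-value of blown (loose-fill) fiberglass attic insulation
# from its depth. The R-2.2 to R-2.7 per inch range is a common rule of
# thumb for loose-fill fiberglass, not a figure from this manual.

def attic_r_value_range(depth_in, r_per_inch=(2.2, 2.7)):
    low, high = r_per_inch
    return depth_in * low, depth_in * high

lo, hi = attic_r_value_range(12.0)  # the ~12-inch depth shown in Figure 7.2
print(f"Approximately R-{lo:.0f} to R-{hi:.0f}")  # roughly R-26 to R-32
```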
Figure 7.4 shows the area extending from the house under the roof, known as the soffit. The soffit is perforated so that air can flow into the attic and up through the ridge vents to ventilate the attic. If insulation is too thick or installed improperly, it restricts proper air turnover in the attic; the resulting moisture or extreme temperatures can cause mold or bacteria growth, delamination of the plywood and particleboard, and premature aging of the roof's subsurface and shingles. Care also must be taken in cold climates to ensure that the insulation has a vapor barrier and that it is installed face down.
When insulation is placed in the walls of a home, a thin plastic vapor barrier should be placed over the insulation facing the inside of the home. The purpose of this vapor barrier is to keep moisture produced inside the house from compromising the insulation. If the barrier is not installed, warm, moist air will move through the drywall and into the insulated wall cavity. When the air cools, moisture will condense on the fibers of the insulation, making it wet; cellulose insulation in particular will absorb and hold the moisture. Wetness reduces the effectiveness of the insulation and provides a favorable environment for the growth of bacteria and mold.
# Siding
Good siding should be attractive, durable, insect- and vermin-resistant, waterproof, and capable of holding a weather-resistant coating. Fire-resistant siding and roofing are important in many areas where wildfires are common and are required by many local building codes. All exterior surfaces will eventually deteriorate, regardless of manufacturer warranties or claims.
Leaks in the home from the outside occur in many predictable locations. The exterior siding or brick should be checked for cracks or gaps in protective surfaces. Where plumbing, air vents, electrical outlets, or communication lines extend through an exterior wall, they should be carefully checked to ensure an airtight seal around those openings. The exterior surface of the home has doors, windows, and other openings. These openings should be caulked routinely, and the drainage gutters along their tops should be checked to ensure that they drain properly. Exterior surface materials include stucco, vinyl, asbestos shingles, brick, metal (aluminum), fiber cement, exterior plywood, hardwood, painted or coated wood, glass, and tile, some of which are discussed in this chapter.
# Stucco
Synthetic stucco (exterior insulation and finish system; EIFS) is a multilayered exterior finish that has been used in Europe since shortly after World War II, when contractors found it to be a good repair choice for buildings damaged during the war. North American builders began using EIFS in the 1980s, first in commercial buildings, then as an exterior finish to wood frame houses. EIFS has three layers:
- Inner layer-foam insulation board secured to the exterior wall surface, often with adhesive;
- Middle layer-a polymer and cement base coat applied to the top of the insulation, then reinforced with glass fiber mesh; and
- Exterior layer-a textured finish coat.
EIFS layers bond to form a covering that does not breathe. If moisture seeps in, it can become trapped behind the layers; with no place to go, the constant exposure to moisture can lead to rot in wood and other vulnerable materials within the home. Ripples in the stucco can be a sign of a problem.
Ripples in the stucco could be a sign of a problem. On the surface it may look as though nothing is wrong, but beneath the surface the stucco may have cracked from settling of the house. With a properly installed moisture barrier, no moisture should be able to seep behind the EIFS, including moisture originating inside the home. Drains in the foundation can be designed to let any moisture that does seep in escape. Other signs of problems are mold or mildew on the interior or exterior of the home, swollen wood around door and window frames, blistered or peeling paint, and cracked EIFS or cracked sealant.

# Vinyl
Standard vinyl siding is made from thin, flexible sheets of plastic about 2 mm thick, precolored and bent into shape during manufacturing. The sheets interlock as they are placed above one another. Because temperature and sunlight cause vinyl to expand and contract, it fits into deep channels at the corners and around windows and doors. The channels are deep enough that, as the siding contracts, it remains within the channel. Siding composed of either vinyl or aluminum will expand and contract in response to temperature change. This requires careful attention to the manufacturer's specifications during application. Cutting the siding too short causes exposed surfaces when the siding contracts, resulting in moisture damage and eventual leakage. Even small cracks exposing the undersurface can create major damage.

# Fiber Cement
Fiber cement siding is an engineered composite-material product that is extremely stable and durable. Fiber cement siding is made from a combination of cellulose fiber material, cement and silica sand, water, and other additives. Fiber cement siding is fire resistant and useful in high-moisture areas. The fiber cement mixture is formed into siding or individual boards, then dried and cured using superheated steam under pressure. The drying and curing process ensures that the fiber cement siding has very low moisture content, which makes the product stable-no warping or excessive movement-and gives it a surface good for painting. Weight is a minor concern with fiber cement products: they weigh about 1½ times what comparably sized composite wood products do. Other concerns relate to cutting fiber cement: cutting produces a fine dust with microscopic silica fibers, so personal protective equipment (a respirator and goggles) is necessary. In addition, special tools are needed for cutting.

# Brick
Brick homes may seem on the surface to be nearly maintenance free. This is true in some cases, but, like all surfaces, brick also degrades. Although this degradation takes longer in brick than in other materials, repairing brick is complex and quite expensive. There are two basic types of brick homes. One is brick veneer, which is a thin brick set to the outside of a wooden stud wall. The brick is not actually the supporting wall. Brick veneer typically has the same pattern of bricks around the doors and windows; a true brick wall will have brick arches or heavy steel plates above the doors and other openings of the building. Some brick walls have wooden studs behind the brick to provide an area for insulation, plumbing, vents, and wiring. It is important that weep holes and flashing be installed in brick homes to control moisture. Improperly constructed building footers can result in major damage to the exterior brick surface of a home by allowing moisture, insects, and vermin to enter. A crack, such as the one in Figure 7.5, is an example of such a failure. This type of damage will require much more than just a mortar patch.
Buildings constructed of concrete block also experience footer failure. The damage is reason not to skimp when installing and inspecting the footing and reinforces the need for an appropriate concrete mix, rebar, and footing drains.

Vinyl siding has some environmental and health concerns, as do most exterior treatments. Vinyl chloride monomer, from which polyvinyl chloride siding is made, is a strong carcinogen and, when heated, releases toxic gases and vapors. Under normal conditions, however, significant exposures to vinyl chloride monomer are unlikely.

# Asbestos
Older homes were often sided with composites containing asbestos. This type of siding was very popular in the early 1940s. It was heavily used through the 1950s and decreasingly used up until the early 1960s. The siding is typically white, although it may be painted. It is often about ¼-inch thick and very brittle and was sold in sections of about 12×18 inches. The composite is quite heavy and very slatelike in difficulty of application. As it ages, it becomes even more brittle, and the surface erodes and becomes powdery. This siding, when removed, must be disposed of in accordance with local, state, and federal laws regulating the disposal of asbestos materials. The workers and the site must be carefully managed and protected from contamination. The composite had several virtues as siding. It was quite resistant to fire, was not attractive to insects or vermin, provided very good insulation, and did not grow mold readily. Because of its very brittle nature, it could be damaged by children playing and, as a result, often was covered later with aluminum siding.

# Metal
If metal siding is used, the mounting fasteners (nails or screws) must be compatible with the metal composition of the siding, or the siding or fasteners will corrode. This corrosion is due to galvanic response. Galvanic response (corrosion) can produce devastating results that often are noticed only when it is too late. It should always be considered in inspections and is preventable in nearly all cases. When two dissimilar metals, such as aluminum and steel, are coupled and subjected to a corrosive environment (such as air, water, salt spray, or cleaning solutions), the more active metal (aluminum) becomes an anode and corrodes through exfoliation or pitting. This can happen with plumbing, roofing, siding, gutters, metal venting, and heating and air conditioning systems. When two metals are electrically connected to each other in a conductive environment, electrons flow from the more active metal to the less active because of the difference in electrical potential, the so-called "driving force." When the most active metal (anode) supplies current, it will gradually dissolve into ions in the electrolyte and, at the same time, produce electrons, which the least active metal (cathode) will receive through the metallic connection with the anode. The result is that the cathode will be negatively polarized and hence protected against corrosion. Thus, less noble metals are more susceptible to corrosion. An example of protecting an appliance such as an iron-bodied water heater would be to ensure that piping connections are of similar material when possible and to follow the manufacturer's good practice and instructions on using dielectric (nonconducting) unions. Figure 7.6 shows examples of electrochemical kinetics in pipes that were connected to dissimilar metals.

# Metal Corrosion Prevention
- Use like metals when possible.
- Use metals with similar electronegativity levels.
- Use dielectric unions for plumbing.
- Use anodes that are inexpensive to replace.
Remember: Use metals with less susceptibility to protect metals that are more susceptible to corrosion.

# Introduction
One of the primary differences between rural and urban housing is that much of the infrastructure that is often taken for granted by the urban resident does not exist in the rural environment. Examples range from fire and police protection to drinking water and sewage disposal. This chapter is intended to provide basic knowledge about the sources of drinking water typically used for homes in the rural environment. It is estimated that at least 15% of the population of the United States is not served by approved public water systems. Instead, they use individual wells and very small drinking water systems not covered by the Safe Drinking Water Act (SDWA); these wells and systems are often untested and contaminated. Many of these wells are dug rather than drilled. Such shallow sources frequently are contaminated with both chemicals and bacteria. Figure 8.1 shows the change in water supply source in the United States from 1970 to 1990. According to the 2003 American Housing Survey, of the 105,843,000 homes in the United States, water is provided to 92,324,000 (87.2%) by a public or private business; 13,097,000 (12.4%) have a well (11,276,000 drilled, 919,000 dug, and 902,000 not reported).

# Water Sources
The primary sources of drinking water are groundwater and surface water. In addition, precipitation (rain and snow) can be collected and contained. The initial quality of the water depends on the source. Surface water (lakes, reservoirs, streams, and rivers), the drinking water source for approximately 50% of our population, is generally of poor quality and requires extensive treatment. Groundwater, the source for the other approximately 50% of our population, is of better quality. However, it still may be contaminated by agricultural runoff or surface and subsurface disposal of liquid waste, including leachate from solid waste landfills. Other sources, such as spring water and rain water, are of varying levels of quality, but each can be developed and treated to render it potable.

Most water systems consist of a water source (such as a well, spring, or lake), some type of tank for storage, and a system of pipes for distribution. Means to treat the water to remove harmful bacteria or chemicals may also be required. The system can be as simple as a well, a pump, and a pressure tank to serve a single home. It may be a complex system, with elaborate treatment processes, multiple storage tanks, and a large distribution system serving thousands of homes. Regardless of system size, the basic principles to assure the safety and potability of water are common to all systems. Large-scale water supply systems tend to rely on surface water resources, and smaller water systems tend to use groundwater. Groundwater is pumped from wells drilled into aquifers. Aquifers are geologic formations where water pools, often deep in the ground. Some aquifers are actually higher than the surrounding ground surface, which can result in flowing springs or artesian wells. Artesian wells are often drilled; once the aquifer is penetrated, the water flows onto the surface of the ground because of the hydrologic pressure from the aquifer.
SDWA defines a public water system as one that provides piped water to at least 25 persons or 15 service connections for at least 60 days per year. Such systems may be owned by homeowner associations, investor-owned water companies, local governments, and others. Water not from a public water supply, and which serves one or only a few homes, is called a private supply. Private water supplies are, for the most part, unregulated. Community water systems are public systems that serve people year-round in their homes. The U.S. Environmental Protection Agency (EPA) also regulates other kinds of public water systems-such as those at schools, factories, campgrounds, or restaurants-that have their own water supply.

The quantity of water in an aquifer and the water produced by a well depend on the nature of the rock, sand, or soil in the aquifer from which the well withdraws water. Drinking water wells may be shallow (50 feet or less) or deep (more than 1,000 feet). On average, our society uses almost 100 gallons of drinking water per person per day. Traditionally, water use rates are described in units of gallons per capita per day (gallons used by one person in 1 day). Of the drinking water supplied by public water systems, only a small portion is actually used for drinking. Residential water consumers use most drinking water for other purposes, such as toilet flushing, bathing, cooking, cleaning, and lawn watering. The amount of water we use in our homes varies during the day:
- Lowest rate of use-11:30 pm to 5:00 am;
- Sharp rise/high use-5:00 am to noon (peak hourly use from 7:00 am to 8:00 am);
- Moderate use-noon to 5:00 pm (lull around 3:00 pm); and
- Increasing evening use-5:00 pm to 11:00 pm (second minor peak, 6:00 pm to 8:00 pm).

# Source Location
The location of any source of water under consideration as a potable supply, whether individual or community, should be carefully evaluated for potential sources of contamination. As a general practice, the maximum distance that economics, land ownership, geology, and topography will allow should separate a water source from potential contamination sources. Table 8.1 details some of the sources of contamination and gives minimum distances recommended by EPA to separate pollution sources from the water source. Water withdrawn directly from rivers, lakes, or reservoirs cannot be assumed to be clean enough for human consumption unless it receives treatment. Water pumped from underground aquifers will require some level of treatment. Believing that surface water or soil-filtered water has purified itself is dangerous and unjustified. Clear water is not necessarily safe water. To assess the level of treatment a water source requires, follow these steps:
- Determine the quality needed for the intended purpose (drinking water quality needs to be evaluated under the SDWA).
- For wells and springs, test the water for bacteriologic quality. This should be done with several samples taken over a period of time to establish a history on the source. With few exceptions, surface water and groundwater sources are always presumed to be bacteriologically unsafe and, as a minimum, must be disinfected.
- Analyze for chemical quality, including both legal (primary drinking water) standards and aesthetic (secondary) standards.
- Determine the economic and technical constraints (e.g., cost of equipment, operation and maintenance costs, cost of alternative sources, availability of power).
- Treat if necessary and feasible.
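Returning to the consumption figures above, a quick estimate of a household's daily and peak-hour demand helps in sizing pumps, pressure tanks, and storage. The sketch below is illustrative and not from the manual; the peak-hour multiplier is an assumed typical value, since actual peaking varies by household.

```python
# Rough household water-demand estimate based on the rule of thumb
# of ~100 gallons per capita per day (gpcd) cited above.
GPCD = 100.0            # average use, gallons per person per day
PEAK_HOUR_FACTOR = 3.0  # assumed ratio of peak-hour to average-hour use

def daily_demand_gal(occupants: int) -> float:
    """Average daily household demand in gallons."""
    return occupants * GPCD

def peak_hour_demand_gpm(occupants: int) -> float:
    """Estimated peak-hour demand in gallons per minute."""
    avg_gpm = daily_demand_gal(occupants) / (24 * 60)
    return avg_gpm * PEAK_HOUR_FACTOR

occupants = 4
print(f"Average daily demand: {daily_demand_gal(occupants):.0f} gal")
print(f"Estimated peak-hour demand: {peak_hour_demand_gpm(occupants):.2f} gpm")
```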
# Well Construction
Many smaller communities obtain drinking water solely from underground aquifers. In addition, according to the last census with data on water supply systems, 15% of people in the United States are on individual water supply systems. In some sections of the country, there may be a choice of individual water supply sources that will supply water throughout the year. Some areas of the country may be limited to one source. The various sources of water include drilled wells, driven wells, jetted wells, dug wells, bored wells, springs, and cisterns. Table 8.2 provides a more detailed description of some of these wells. Regardless of the choice for a water supply source, special safety precautions must be taken to assure the potability of the water. Drainage should be away from a well. The casings of the well should be sealed with grout or some other mastic material to ensure that surface water does not seep along the well casing to the water source. In Figure 8.2, the concrete grout has been reinforced with steel and a drain away from the casing has been provided to assist in protecting this water source. Additionally, research suggests that a minimum of 10 feet of soil is essential to filter unwanted biologic organisms from the water source. However, if the area of well construction has any sources of chemical contamination nearby, the local public health authority should be contacted. In areas with karst topography (areas characterized by a limestone landscape with caves, fissures, and underground streams), wells of any type are a health risk because of the long distances that both chemical and biologic contaminants can travel.

When determining where a water well is to be located, several factors should be considered:
- the groundwater aquifer to be developed,
- depth of the water-bearing formations,
- the type of rock formations that will be encountered,
- freedom from flooding, and
- relation to existing or potential sources of contamination.

The overriding concern is to protect any kind of well from pollution, primarily bacterial contamination. Groundwater found in sand, clay, and gravel formations is more likely to be safe than groundwater extracted from limestone and other fractured rock formations. Whatever the strata, wells should be protected from
- surface water entering directly into the top of the well,
- groundwater entering below ground level without filtering through at least 10 feet of earth, and
- surface water entering the space between the well casing and surrounding soil.

Also, a well should be located in such a way that it is accessible for maintenance, inspection, and pump or pipe replacement when necessary. Driven wells (Figure 8. ) penetrate only a short distance into the ground and are quite shallow, resulting in frequent contamination by both chemical and bacterial sources.

# Sanitary Design and Construction
Whenever a water-bearing formation is penetrated (as in well construction), a direct route of possible water contamination exists unless satisfactory precautions are taken. Wells should be provided with casing or pipe to an adequate depth to prevent caving and to permit sealing of the earth formation to the casing with watertight cement grout or bentonite clay, from a point just below the surface to as deep as necessary to prevent entry of contaminated water. Once construction of the well is completed, the top of the well casing should be covered with a sanitary seal, an approved well cap, or a pump mounting that completely covers the well opening (Figure 8.3). If pumping at the design rate causes drawdown in the well, a vent through a tapped opening should be provided. The upper end of the vent pipe should be turned downward and suitably screened to prevent the entry of insects and foreign matter.

# Pump Selection
A variety of pump types and sizes exist to meet the needs of individual or community water systems. Some of the factors to be considered in selecting a pump for a specific application are well depth, system design pressure, demand rate in gallons per minute, availability of power, and economics.

# Dug and Drilled Wells
Dug wells (Figures 8.4 and 8.5) were one of the most common types of wells for individual water supply in the United States before the 1950s. They were often constructed with one person digging the hole with a shovel and another pulling the dirt from the hole with a rope, pulley, and bucket. Of course, this required a hole of rather large circumference, with the size increasing the potential for leakage from the surface. The dug well also was traditionally quite shallow, often less than 25 feet, which often resulted in the water source being contaminated by surface water as it ran through cracks and crevices in the ground to the aquifer. Dug wells provide potable water only if they are properly located and the water source is free of biologic and chemical contamination. The general rule is, the deeper the well, the more likely the aquifer is to be free of contaminants, as long as surface water does not leak into the well without sufficient soil filtration.

Two basic processes are used to remediate dug wells. One is to dig around the well to a depth of 10 feet and install a solid slab with a hole in it to accommodate a well casing and an appropriate seal (Figures 8.4 and 8.5). The dirt is then backfilled over the slab to the surface, and the casing is equipped with a vent and second seal, similar to a drilled well, as shown in Figure 8.6. This results in a considerable reduction in the area of the casing that needs to be protected. Experience has shown that the disturbed dirt used for backfilling over the buried slab will continue to release bacteria into the well for a short time after modification. Most experts in well modification suggest installing a chlorination system on all dug wells to disinfect the water because of their shallow depth and possible biologic impurity during changing drainage and weather conditions above ground.

Figure 8.7 shows a dug well near the front porch of a house and within 5 feet of a drainage ditch and 6 feet of a rural road. This well is likely to be contaminated with the pesticide used to termite-proof the home and by whatever runs off the nearby road and drainage ditch. The well shown is about 15 feet deep. The brick structure around the well holds the centrifugal pump and a heater to keep the water from freezing. Although dangerous to drink from, this well is typical of dug wells used in rural areas of the United States for drinking water. Samples should not be taken from such wells because they instill a false sense of security if they are negative for both chemicals and biologic organisms. The quality of the water in such wells can change in just a few hours through infiltration of drainage water. Figure 8.8 shows the septic tank discharge in the drainage ditch 5 feet upstream of the dug well in Figure 8.7. This potential combination of drinking water and waste disposal presents an extreme risk to the people serviced by the dug well. Sampling is not the answer; the water source should be changed under the supervision of qualified environmental health professionals.
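Because Table 8.1's separation distances are not reproduced here, the sketch below uses placeholder distances purely for illustration; the actual minimums must come from Table 8.1 or the local health authority. It shows how a siting check against nearby contamination sources can be automated, using the Figure 8.7 well as the test case.

```python
# Illustrative well-siting check. The minimum separation distances
# below are PLACEHOLDERS, not EPA values; substitute the minimums
# from Table 8.1 or your local health authority.
MIN_SEPARATION_FT = {
    "septic tank": 50,
    "drainage ditch": 25,
    "roadway": 25,
    "barnyard": 100,
}

def siting_problems(nearby: dict[str, float]) -> list[str]:
    """Return the sources that are closer than the minimum separation."""
    problems = []
    for source, distance_ft in nearby.items():
        minimum = MIN_SEPARATION_FT.get(source)
        if minimum is not None and distance_ft < minimum:
            problems.append(f"{source}: {distance_ft:.0f} ft (minimum {minimum} ft)")
    return problems

# The dug well in Figure 8.7: 5 ft from a drainage ditch, 6 ft from a road.
for problem in siting_problems({"drainage ditch": 5, "roadway": 6}):
    print("Too close to", problem)
```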
Figure 8.9 shows a drilled well. On the left side of the picture is the corner of the porch of the home. The well appears not to have a sanitary well seal and is likely open to the air, so it will accept contaminants into the casing. Because the well is so close to the house, the casing is open, and the land slopes toward the well, it is a major candidate for contamination and is not a safe water source.

# Springs
Another source of water for individual water supply is natural springs. A spring is groundwater that reaches the surface because of the natural contours of the land. Springs are common in rolling hillside and mountain areas. Some provide an ample supply of water, but most provide water only seasonally. Without proper precautions, the water may be biologically or chemically contaminated and not considered potable. To obtain satisfactory (potable) water from a spring, it is necessary to
- find the source,
- properly develop the spring,
- eliminate surface water outcroppings above the spring to its source,
- prevent animals from accessing the spring area, and
- provide continuous chlorination.

Figure 8.10 illustrates a properly developed spring. Note that the line supplying the water is well underground, the spring box is watertight, and surface water runoff is diverted away from the area. Also be aware that the water quality of a spring can change rapidly.

# Cisterns
A cistern is a watertight, traditionally underground reservoir that is filled with rainwater draining from the roof of a building. Cisterns will not provide an ample supply of water for any extended period of time unless the amount of water used is severely restricted. Because the water comes off the roof, a pipe is generally installed to allow redirection of the first few minutes of rainwater until the water flows clear. Disinfection is, nevertheless, of utmost importance. Diverting the first flow of water does not assure safe, nonpolluted water, because chemicals and biologic waste from birds and other animals can migrate from catchment surfaces and from windblown sources. In addition, rainwater has a low pH, which can corrode plumbing pipes and fixtures if not treated.

# Disinfection of Water Supplies
Water supplies can be disinfected by a variety of methods, including chlorination, ozonation, ultraviolet radiation, heat, and iodination. The advantages and disadvantages of each method are noted in Table 8.3. (Ozone gas, for example, is unstable and must be generated at the point of use.) An understanding of certain terms (see the definitions box below) is necessary in discussing chlorination. Table 8.4 is a chlorination guide for specific water conditions.

Chlorine is the most commonly used water disinfectant. It is available in liquid, powder, gas, and tablet form. Chlorine gas is often used for municipal water disinfection, but can be hazardous if mishandled. Recommended liquid, powder, and tablet forms of chlorine include the following:
- Liquid-Chlorine laundry bleach (about 5% chlorine); swimming pool disinfectant or concentrated chlorine bleach (12%-17% chlorine).
- Powder-Chlorinated lime (25% chlorine), dairy sanitizer (30% chlorine), and high-test calcium hypochlorite (65%-75% chlorine).
- Tablets-High-test calcium hypochlorite (65%-75% chlorine).
- Gas-Gas chlorine is an economical and convenient way to use large amounts of chlorine. It is stored in steel cylinders ranging in size from 100 to 2,000 pounds.
The packager fills these cylinders with liquid chlorine to approximately 85% of their total volume; the remaining 15% is occupied by chlorine gas. These ratios are required to prevent tank rupture at high temperatures. It is important that direct sunlight never reach gas cylinders. It is also important that the user of chlorine know the maximum withdrawal rate of gas per day per cylinder. For example, the maximum withdrawal rate from a 150-pound cylinder is approximately 40 pounds per day at room temperature discharging to atmospheric pressure.

# Definitions of Terms Related to Chlorination
Breakpoint chlorination-A process sometimes used to ensure the presence of free chlorine in public water supplies by adding enough chlorine to the water to satisfy the chlorine demand and to react with all dissolved ammonia that might be present. The concentrations of chlorine needed to treat a variety of water conditions are listed in Table 8.4.

Chlorine concentration-The concentration (amount) of chlorine in a volume of water, measured in parts per million (ppm). In 1 million gallons of water, a chlorine concentration of 1 ppm would require 8.34 pounds of 100% chlorine.

Contact time-The time, after chlorine addition and before use, given for disinfection to occur. For groundwater systems, contact time is minimal. However, in surface water systems, a contact time of 20 to 30 minutes is common.

Dosage-The total amount of chlorine added to water, given in parts per million (ppm) or milligrams per liter (mg/L).

Demand-Chlorine used up by reacting with particles of organic matter, such as slimes, or with other chemicals and minerals that may be present; the difference between the amount of chlorine applied to water and the total available chlorine remaining at the end of a specified contact period.

Residual-The amount of chlorine left after the demand is met; available (free) chlorine. This portion provides a ready reserve for bactericidal action. Both combined and free chlorine make up chlorine residual and are involved in disinfection. Total available chlorine = free chlorine + combined chlorine.

Parts per million-The number of parts of a substance in 1 million parts of water; 1 ppm corresponds to 1 milligram per liter (mg/L).

# Chlorine Carrier Solutions
On small systems or individual wells, a high-chlorine carrier solution is mixed in a tank in the pump house and pumped by the chlorinator into the system. Table 8.5 shows how to make a 200-ppm carrier solution. By using 200 ppm, only small quantities of this carrier have to be added. Depending on the system, other stock solutions may be needed to better use existing chemical feed equipment.

# Routine Water Chlorination (Simple)
Most chlorinated public water supplies use routine water chlorination. Enough chlorine is added to the water to meet the chlorine demand, plus enough extra to supply 0.2 to 0.5 ppm of free chlorine when checked after 20 minutes. Simple chlorination may not be enough to kill certain viruses. Chlorine as a disinfectant increases in effectiveness as the chlorine residual is increased and as the contact time is increased. Chlorine solutions should be mixed and chlorinators adjusted according to the manufacturer's instructions. Chlorine solutions deteriorate gradually when standing. Fresh solutions must be prepared as necessary to maintain the required chlorine residual. Chlorine residual should be tested at least once a week to assure effective equipment operation and solution strengths.
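The definitions above combine into simple dosing arithmetic: dosage = demand + desired residual, and each ppm applied to a million gallons takes 8.34 pounds of 100% chlorine, scaled up for weaker products. The sketch below is illustrative only, using the 5% laundry-bleach strength from the list above.

```python
# Chlorine dosing arithmetic from the definitions above:
#   dosage (ppm) = demand (ppm) + desired residual (ppm)
#   pounds of 100% chlorine = volume (million gal) x dosage (ppm) x 8.34
POUNDS_PER_PPM_PER_MG = 8.34  # lb of pure chlorine per ppm per million gallons

def product_pounds(volume_million_gal: float, demand_ppm: float,
                   residual_ppm: float, product_strength: float) -> float:
    """Pounds of a chlorine product (e.g., 0.05 for 5% bleach) to dose the water."""
    dosage_ppm = demand_ppm + residual_ppm
    pure_lb = volume_million_gal * dosage_ppm * POUNDS_PER_PPM_PER_MG
    return pure_lb / product_strength

# 0.1 million gallons, 1.5-ppm demand, 0.5-ppm free residual, 5% bleach:
print(f"{product_pounds(0.1, 1.5, 0.5, 0.05):.1f} lb of 5% bleach")
```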
A dated record should be kept of solution preparation, type, proportion of chlorine used, and residual-test results. Sensing devices are available that will automatically shut off the pump and activate a warning bell or light when the chlorinator needs servicing.

# Well Water Shock Chlorination
Shock chlorination is used to control iron and sulfate-reducing bacteria and to eliminate fecal coliform bacteria in a water system. To be effective, shock chlorination must disinfect the following: the entire well depth, the formation around the bottom of the well, the pressure system, water treatment equipment, and the distribution system. To accomplish this, a large volume of superchlorinated water is siphoned down the well to displace the water in the well and some of the water in the formation around the well. Check the specifications on the water treatment equipment to ensure appropriate protection of the equipment. With shock chlorination, the entire system-from the water-bearing formation through the well bore and the distribution system-is exposed to water with a concentration of chlorine strong enough to kill iron and sulfate-reducing bacteria. The shock chlorination process is complex and tedious. Exact procedures and concentrations of chlorine for effective shock treatment are available.
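Because the exact shock-treatment procedures are published elsewhere, the sketch below only illustrates the underlying volume arithmetic: how much water a well casing holds and how much household bleach would raise that water to a given concentration. The 200-ppm target is borrowed from the carrier-solution discussion above and stands in for whatever concentration the actual procedure specifies.

```python
import math

# Illustrative volume arithmetic for dosing the water standing in a well.
GAL_PER_CUBIC_FT = 7.48
BLEACH_PPM = 50_000  # 5% household bleach is ~50,000 ppm chlorine

def casing_volume_gal(diameter_in: float, water_depth_ft: float) -> float:
    """Gallons of water standing in a cylindrical well casing."""
    radius_ft = (diameter_in / 12.0) / 2.0
    return math.pi * radius_ft**2 * water_depth_ft * GAL_PER_CUBIC_FT

def bleach_gallons(water_gal: float, target_ppm: float) -> float:
    """Gallons of bleach to bring the water to the target concentration."""
    return water_gal * target_ppm / BLEACH_PPM

# A 6-inch casing holding 80 ft of water, dosed to a 200-ppm stand-in target:
water = casing_volume_gal(6.0, 80.0)
print(f"Standing water: {water:.0f} gal; bleach needed: {bleach_gallons(water, 200):.2f} gal")
```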
# Backflow, Back-Siphonage, and Other Water Quality Problems
In addition to contamination at its source, water can become contaminated by biologic materials and by toxic construction or joint materials as it flows through the water distribution system in the home. Water flowing backward (backflow) in the pipes can suck materials back (back-siphonage) into the water distribution system, creating equally hazardous conditions. Other water quality problems relate to hardness, dissolved iron and iron bacteria, acidity, turbidity, color, odor, and taste.

# Backflow
Backflow is any unwanted flow of nonpotable water into a potable water system. The direction of flow is the reverse of that intended for the system. Backflow may be caused by numerous factors and conditions. For example, the reverse pressure gradient may be a result of either a loss of pressure in the supply main (back-siphonage) or the flow from a pressurized system through an unprotected cross-connection (back-pressure). A reverse flow in a distribution main or in a customer's system can be created by a change of system pressure wherein the pressure at the supply point becomes lower than the pressure at the point of use. When this happens, the water at the point of use will be siphoned back into the system, potentially polluting or contaminating it. It is also possible that contaminated or polluted water could continue to backflow into the public distribution system. The point at which nonpotable water comes in contact with potable water is called a cross-connection. Examples of backflow causes include supplemental supplies, such as a standby fire protection tank; fire pumps; chemical feed pumps that overpower the potable water system pressure; and sprinkler systems.

# Back-Siphonage
Back-siphonage is a siphon action in an undesirable or reverse direction. When there is a direct or indirect connection between a potable water supply and water of questionable quality due to poor plumbing design or installation, there is always a possibility that the public water supply may become contaminated. Some examples of common plumbing defects are
- washbasins, sterilizers, and sinks with submerged inlets or threaded hose bibs and hoses;
- oversized booster pumps that overtax the supply capability of the main and thus develop negative pressure;
- submerged inlets and fire pumps (if the fire pumps are directly connected into the water main, a negative pressure will develop); and
- a threaded hose bib in a health-care facility (which is technically a cross-connection).

There are many techniques and devices for preventing backflow and back-siphonage. Some examples are
- vacuum breakers (nonpressure and pressure);
- backflow preventers (reduced pressure principle, double gate-double check valves, swing connection, and air gap-double diameter separation);
- surge tanks (booster pumps for tanks, fire system make-up tank, and covering potable tanks); and
- color coding in all buildings where there is any possibility of connecting two separate systems or taking water from the wrong source (blue-potable, yellow-nonpotable, and other-chemical and gases).

An air gap is a physical separation between the incoming water line and the maximum level in a container of at least twice the diameter of the incoming water line. If an air gap cannot be installed, then a vacuum breaker should be installed. Vacuum breakers, unlike air gaps, must be installed carefully and maintained regularly. Vacuum breakers are not completely failsafe.

# Other Water Quality Problems
Water not only has to be safe to drink; it should also be aesthetically pleasing. Various water conditions affect water quality. Table 8.6 describes symptoms, causes, measurements, and how to correct these problems. Two examples from the table:

Iron. Iron is common in soft water and when water hardness is above 175 ppm. Atomic absorption (AA) units or numerous colorimetric test kits measure iron in ppm. Any measurement above 0.3 ppm will cause problems. To treat soft water that contains no iron but picks it up in the distribution lines, add calcium to the water with calcite (limestone) units. To treat hard water containing iron ions, install a sodium zeolite ion exchange unit. To treat soft water containing iron, carbon dioxide must be neutralized, followed by a manganese zeolite unit.

Iron bacteria (red slime appears in toilet). Caused by bacteria that act in the presence of iron. Check under the toilet tank cover for a slippery, jelly-like coating.

# Protecting the Groundwater Supply
Follow these tips to help protect the quality of groundwater supplies:
- Periodically inspect exposed parts of wells for cracked, corroded, or damaged well casings; broken or missing well caps; and settling and cracking of surface seals.
- Slope the area around wells to drain surface runoff away from the well.
- Install a well cap or sanitary seal to prevent unauthorized use of, or entry into, a well.
- Disinfect wells at least once a year with bleach or hypochlorite granules, according to the manufacturer's directions.
- Have wells tested once a year for coliform bacteria, nitrates, and other constituents of concern.
- Keep accurate records of any well maintenance, such as disinfection or sediment removal, that requires the use of chemicals in the well.
- Hire a certified well driller for new well construction, modification, or abandonment and closure.
- Avoid mixing or using pesticides, fertilizers, herbicides, degreasers, fuels, and other pollutants near wells.
- Do not dispose of waste in dry or abandoned wells.
- Do not cut off well casings below the land surface.
- Pump and inspect septic systems as often as recommended by local health departments.
- Never dispose of hazardous materials (e.g., paint, paint stripper, floor stripper compounds) in a septic system.

The housing inspector's prime concern while inspecting plumbing is to ensure the provision of a safe water supply system, an adequate drainage system, and ample and proper fixtures and equipment that do not contaminate water. The inspector must make sure that the system moves waste safely from the home and protects the occupants from backup of waste and dangerous gases. This chapter covers the major features of a residential plumbing system and the basic plumbing terms and principles the inspector must know and understand to identify housing code violations that involve plumbing. It will also assist in identifying the more complicated defects that the inspector should refer to the appropriate agencies. This chapter is not a plumbing code, but it should provide a base of knowledge sufficient to evaluate household systems.

# Elements of a Plumbing System
The primary purposes of a plumbing system are
- to bring an adequate and potable supply of hot and cold water to the inhabitants of a house, and
- to drain all wastewater and sewage discharge from fixtures into the public sewer or a private disposal system.

It is, therefore, very important that the housing inspector be completely familiar with all elements of these systems so that inadequacies of the structure's plumbing and other code violations will be recognized. To aid the inspector in understanding the plumbing system, a schematic of a home plumbing system is shown in Figure 9.1.

# Water Service
The piping of a house service line should be as short as possible. Elbows and bends should be kept to a minimum because they reduce water pressure and, therefore, the supply of water to fixtures in the house. The house service line also should be protected from freezing. Four feet of soil is a commonly accepted depth to bury the line to prevent freezing. This depth varies, however, across the country from north to south. The local or state plumbing code should be consulted for recommended depths. The minimum service line size should be ¾ inch. The minimum water supply pressure should be 40 pounds per square inch (psi), no cement or concrete joints should be allowed, no glue joints between different types of plastic should be allowed, and no female-threaded PVC fittings should be used. The materials used for a house service line may be approved plastic, copper, cast iron, steel, or wrought iron. The connections used should be compatible with the type of pipe used. A typical house service installation is pictured in Figure 9.2. The elements of the service installation are described below.

Corporation stop-The corporation stop is connected to the water main. This connection is usually made of brass and can be connected to the main with a special tool without shutting off the municipal supply. The valve incorporated in the corporation stop permits the pressure to be maintained in the main while the service to the building is completed.

Curb stop-The curb stop is a similar valve used to isolate the building from the main for repairs, nonpayment of water bills, or flooded basements. Because the corporation stop is usually under the street and it is necessary to break the pavement to reach the valve, the curb stop is used as the isolation valve.

Curb stop box-The curb stop box is an access box to the curb stop for opening and closing the valve. A long-handled wrench is used to reach the valve.

Meter stop-The meter stop is a valve placed on the street side of the water meter to isolate it for installation or maintenance. Many codes require a gate valve on the house side of the meter to shut off water for plumbing repairs. The curb and meter stops can be ruined in a short time if used very frequently.
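The advice above about short service runs and few fittings can be made quantitative. The sketch below is not from the manual; it estimates friction loss with the widely used Hazen-Williams formula (in its U.S. customary form) and counts each elbow as a few feet of equivalent pipe length, a common rule of thumb. The roughness coefficient and equivalent lengths are typical handbook values, and the 0.75-inch diameter is a simplification of the actual inside diameter of nominal ¾-inch pipe.

```python
# Hazen-Williams friction loss, U.S. customary form:
#   psi lost per foot = 4.52 * Q^1.852 / (C^1.852 * d^4.87)
# where Q = flow (gpm), d = inside diameter (in), C = roughness coefficient.
def psi_loss(flow_gpm: float, diameter_in: float, length_ft: float,
             c: float = 140.0) -> float:
    """Approximate pressure loss (psi) over a pipe run."""
    per_ft = 4.52 * flow_gpm**1.852 / (c**1.852 * diameter_in**4.87)
    return per_ft * length_ft

# A 60-ft, 3/4-inch copper service line (C ~ 140) at 6 gpm,
# with four elbows counted as ~2 ft of equivalent length each:
equivalent_ft = 60 + 4 * 2
print(f"Friction loss: {psi_loss(6.0, 0.75, equivalent_ft):.1f} psi")
```

Every extra bend adds equivalent length, which is why a short, straight service line preserves more of the 40-psi minimum supply pressure at the fixtures.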
# Definitions of Terms Related to Home Water Systems
Air chambers-Pressure-absorbing devices that eliminate water hammer. Air chambers should be installed as close as possible to the valves or faucet and at the end of long runs of pipe.

Air gap (drainage system)-The unobstructed vertical distance through the free atmosphere between the outlet of a water pipe and the flood level rim of the receptacle into which it is discharging.

Air gap (water distribution system)-The unobstructed vertical distance through the free atmosphere between the lowest opening from any pipe or faucet supplying water to a tank, plumbing fixture, or other device and the flood level rim of the receptacle.

Backflow-The flow of water or other liquids, mixtures, or substances into the distributing pipes of a potable water supply from any source or sources other than the intended source. Back siphonage is one type of backflow.

Back siphonage-The flowing back of used, contaminated, or polluted water from a plumbing fixture or vessel into a potable water supply because of negative pressure in the pipe.

Branch-Any part of the piping system other than the main, riser, or stack.

Branch vent-A vent connecting one or more individual vents with a vent stack.

Building drain-Part of the lowest piping of a drainage system that receives the discharge from soil, waste, or other drainage pipes inside the walls of the building (house) and conveys it to the building sewer beginning 3 feet outside the building wall.

Cross connection-Any physical connection or arrangement between two otherwise separate piping systems (one of which contains potable water and the other of which contains water of unknown or questionable safety, steam, gas, or a chemical) whereby there may be a flow from one system to the other, the direction of flow depending on the pressure differential between the two systems. (See Backflow and Back siphonage.)

Disposal field-An area containing a series of one or more trenches lined with coarse aggregate and conveying the effluent from a septic tank through vitrified clay pipe or perforated, nonmetallic pipe, laid in such a manner that the flow will be distributed with reasonable uniformity into natural soil.

Drain-Any pipe that carries wastewater or waterborne waste in a building (house) drainage system.

Flood level rim-The top edge of a receptacle from which water overflows.

Flushometer valve-A device that discharges a predetermined quantity of water to fixtures for flushing purposes and is closed by direct water pressure.

Flushometer toilet-A toilet using a flushometer valve that uses pressure from the water supply system rather than the force of gravity to discharge water into the bowl, designed to use less water than conventional flush toilets.

Flush valve-A device located at the bottom of the tank for flushing toilets and similar fixtures.

Grease trap-See Interceptor.

Hot water-Potable water heated to at least 120°F-130°F (49°C-54°C) and used for cooking, cleaning, washing dishes, and bathing.

Insanitary-Unclean enough to endanger health.

Interceptor-A device to separate and retain deleterious, hazardous, or undesirable matter from normal waste and permit normal sewage or liquid waste to discharge into the drainage system by gravity.

Leader-An exterior drainage pipe for conveying storm water from roof or gutter drains to the building storm drain, combined building sewer, or other means of disposal.

Main sewer-See Public sewer.

Main vent-The principal artery of the venting system, to which vent branches may be connected.

Pneumatic-Pertaining to devices making use of compressed air, as in pressure tanks boosted by pumps.

Potable water-Water having no impurities present in amounts sufficient to cause disease or harmful physiologic effects and conforming in its bacteriologic and chemical quality to the requirements of the U.S. Environmental Protection Agency's Safe Drinking Water Act or meeting the regulations of other agencies having jurisdiction.

P & T (pressure and temperature) relief valve-A safety valve installed on a hot water storage tank to limit temperature and pressure of the water.

P-trap-A trap with a vertical inlet and a horizontal outlet.

Public sewer-A common sewer directly controlled by public authority.

Relief vent-An auxiliary vent that permits additional circulation of air in or between drainage and vent systems.

Septic tank-A watertight receptacle that receives the discharge of a building's sanitary drain system or part thereof and is designed and constructed to separate solid from liquid, digest organic matter through a period of detention, and allow the liquids to discharge into the soil outside of the tank through a system of open-joint or perforated piping or through a seepage pit.

Sewerage system-A system comprising all piping, appurtenances, and treatment facilities used for the collection and disposal of sewage, except plumbing inside and in connection with buildings served, and the building drain.

Soil pipe-The pipe that directs the sewage of a house to the receiving sewer, building drain, or building sewer.

Soil stack-The vertical piping that terminates in a roof vent and carries off the vapors of a plumbing system.

Stack vent-An extension of a soil or waste stack above the highest horizontal drain connected to the stack, sometimes called a waste vent or a soil vent.

Storm sewer-A sewer used for conveying rain water, surface water, condensate, cooling water, or similar liquid waste.

Trap-A fitting or device that provides a liquid seal to prevent the emission of sewer gases without materially affecting the flow of sewage or wastewater through it.

Vacuum breaker-A device to prevent backflow (back siphonage) by means of an opening through which air may be drawn to relieve negative pressure (vacuum).

Vapor lock-A bubble of air that restricts the flow of water in a pipe.

Vent stack-The vertical vent pipe installed to provide air circulation to and from the drainage system and that extends through one or more stories.

Water hammer-The loud thump of water in a pipe when a valve or faucet is suddenly closed.

Water service pipe-The pipe from the water main or other source of potable water supply to the water-distributing system of the building served.

Water supply system-Consists of the water service pipe, the water-distributing pipes, the necessary connecting pipes, fittings, control valves, and all appurtenances in or adjacent to the building or premises.

Wet vent-A vent that receives the discharge of waste other than from water closets.

Yoke vent-A pipe connecting upward from a soil or waste stack to a vent stack to prevent pressure changes in the stacks.

The water meter is a device used to measure the amount of water used in the house. It is usually the property of the water provider and is a very delicate instrument that should not be abused. In cold climates, the water meter is often inside the home to keep it from freezing. When the meter is located inside the home, the company providing the water must make appointments to read the meter, which often results in higher water costs unless the meter is equipped with a signal that can be observed from the outside. The water meter is not shown in Figure 9.2 because of regional differences in location of the unit. Because the electric system is sometimes grounded to an older home's water line, a grounding loop device should be installed around the meter. Many meters come with a yoke that maintains electrical continuity even though the meter is removed.

# Hot and Cold Water Main Lines
The hot and cold water main lines are usually hung from the basement ceiling or in the crawl space of the home and are attached to the water meter and hot water tank on one side and the fixture supply risers on the other. These pipes should be installed neatly and should be supported by pipe hangers or straps of sufficient strength and number to prevent sagging. Older homes that have copper pipe with soldered joints can pose a lead poisoning risk, particularly to children. In 1986, Congress banned lead solder containing greater than 0.2% lead and restricted the lead content of faucets, pipes, and other plumbing materials to no more than 8%. The water should be tested to determine the presence or level of lead in the water. Until such tests can be conducted, the water should be run for about 2 minutes in the morning to flush any such material from the line.

Hot and cold water lines should be approximately 6 inches apart unless the hot water line is insulated. This is to ensure that the cold water line does not pick up heat from the hot water line. The supply mains should have a drain valve stop and waste valve to remove water from the system for repairs. These valves should be on the low end of the line or on the end of each fixture riser. The fixture risers start at the basement main and rise vertically to the fixtures on the upper floors. In a one-family dwelling, riser branches will usually proceed from the main riser to each fixture grouping. In any event, the fixture risers should not depend on the branch risers for support, but should be supported with a pipe bracket.

The size of basement mains and risers depends on the number of fixtures supplied. However, a ¾-inch pipe is usually the minimum size used. This allows for deposits on the pipe due to hardness in the water and will usually give satisfactory volume and pressure. In homes without basements, the water lines are preferably located in the crawl space or under the slab. The water lines are sometimes placed in the attic; however, because of freezing, condensation, or leaks, this placement can result in major water damage to the home. In two-story or multistory homes, the water line placement for the second floor is typically between the studs and, then, for the shortest distance to the fixture, between the joists of the upper floors.

# Hot and Cold Water Piping Materials
Care must be taken when choosing the piping materials. Some state and local plumbing codes prohibit using some of the materials listed below in water distribution systems.
Polyvinyl Chloride (PVC). PVC is used to make plastic pipe. PVC piping has several applications in and around homes, such as in underground sprinkler systems, piping for swimming pool pumping systems, and low-pressure drain systems. PVC piping is also used for water service between the meter and the building. PVC is one of the most commonly used materials in the marketplace; it is found in packaging, construction and automotive materials, toys, and medical equipment.

Chlorinated PVC (CPVC). CPVC is a slightly yellow plastic pipe used inside homes. It has a long service life, but is not quite as tough as copper. Some areas with corrosive water will benefit from using chlorinated PVC piping. CPVC piping is designed and recommended for use in hot and cold potable water distribution systems.

Copper. Copper comes in three grades:
- M for thin-wall pipe (used mainly inside homes);
- L for thicker-wall pipe (used mainly outside for water services); and
- K, the thickest (used mainly between water mains and the water meter).

Copper lasts a long time, is durable, and connects well to valves. It should not be installed if the water has a pH of 6.5 or less. Most public utilities supply water at a pH between 7.2 and 8.0. Many utilities that have source water with a pH below 6.5 treat the water to raise the pH. Private well water systems often have a pH below 6.5. When this is the case, installing a treatment system to make the water less acidic is a good idea.

Galvanized Steel. Galvanized pipe corrodes rather easily. The typical life of this piping is about 40 years. One of the primary problems with galvanized steel is that, in saturated water, the pipe will become severely restricted by corrosion that eventually fills the pipe completely. Another problem is that the mismatch of metals between brass valves and the steel results in corrosion. Whenever steel pipe meets copper or brass, the steel pipe will rapidly corrode. Dielectric unions can be used between copper and steel pipes; however, these unions will close off flow in a short time. The problem with dielectric unions is that they break the grounding effect if a live electrical wire comes in contact with a pipe. Some cities require the two pipes to be bonded electrically to maintain the safety of grounded pipes.

PEX. PEX is an acronym for cross-linked polyethylene. "PE" refers to the raw material used to make PEX (polyethylene), and "X" refers to the cross-linking of the polyethylene across its molecular chains. The molecular chains are linked into a three-dimensional network that makes PEX remarkably durable within a wide range of temperatures, pressures, and chemicals. PEX is flexible and can be installed with fewer fittings than rigid plumbing systems. It is a good choice for repiping and for new homes and works well in corrosive water conditions. PEX stretches to accommodate the expansion of freezing water and then returns to its original size when the water thaws. Although it is highly freeze-resistant, no material is freeze-proof.

Kitec. Kitec is a multipurpose pressure pipe that unites the advantages of both metal and plastic. It is made of an aluminum tube laminated to interior and exterior layers of plastic. Kitec provides a composite piping system for a wide range of applications, often beyond the scope of metal or plastic alone. Unlike copper and steel materials, Kitec is noncorroding and resists most acids, salt solutions, alkalis, fats, and oils.

Poly.
Poly pipe is a soft plastic pipe that comes in coils and is used for cold water. It can crack with age or wear through from rocks. Other weak points can be the stainless steel clamps or galvanized couplings.

Polybutylene. Polybutylene pipe is a soft plastic pipe. This material is no longer recommended because of early chemical breakdown. Individuals with a house, mobile home, or other structure that has polybutylene piping with acetal plastic fittings may be eligible for financial relief if they have replaced that plumbing system. For claims information, call 1-800-392-7591 or go to www.pbpipe.com.

# Hot Water Safety
In the United States, more than 112,000 people enter a hospital emergency room each year with scald burns. Of these, 6,700 (6%) have to be hospitalized. Almost 3,000 of these scald burns come from tap water in the home. The three high-risk groups are children under the age of 5 years, persons with disabilities, and adults over the age of 65 years. It takes only 1 second to get a serious third-degree burn from water that is 156°F (69°C). Tap water is too hot if instant coffee granules melt in it. Young children, some persons with disabilities, and elderly people are particularly vulnerable to tap water burns. Children cannot always tell the hot water faucets from the cold water faucets. Children have delicate skin and often cannot get out of hot water quickly, so they suffer hot water burns most frequently. Elderly persons and persons with disabilities are less agile and more prone to falls in the bathtub. They also may have diseases, such as diabetes, that make them unable to feel heat in some regions of the body, such as the hands and feet. Third-degree burns can occur quickly-in 1 second at 156°F (69°C), in 2 seconds at 149°F (65°C), in 5 seconds at 140°F (60°C), and in 15 seconds at 133°F (56°C). A tap-water temperature of 120°F-130°F (49°C-54°C) is hot enough for washing clothes, bedding, and dishes. Even at 130°F (54°C), water takes only a few minutes of constant contact to produce a third-degree burn. Few people bathe at temperatures above 110°F (43°C), nor should they. Water heater thermostats should be set at about 120°F (49°C) for safety and to save 18% of the energy used at 140°F (60°C). Antiscald devices for faucets and showerheads to regulate water temperature can help prevent burns. A plumber should install and calibrate these devices. Most hot water tank installations now require an expansion tank to reduce pressure fluctuations and a heat trap to keep hot water from escaping up pipes.
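The burn-time figures above lend themselves to a simple lookup. The sketch below is illustrative only; it encodes the temperature/time pairs cited above and flags thermostat setpoints against the recommended 120°F.

```python
# Time to a third-degree burn at various tap-water temperatures,
# using the figures cited above (°F -> seconds of contact).
BURN_TIME_SECONDS = {133: 15, 140: 5, 149: 2, 156: 1}
RECOMMENDED_SETPOINT_F = 120  # thermostat setting recommended above

def burn_time(temp_f: float) -> float | None:
    """Seconds of contact for a third-degree burn, or None if below 133°F."""
    times = [secs for thresh, secs in BURN_TIME_SECONDS.items() if temp_f >= thresh]
    return min(times) if times else None

for setting in (120, 135, 150):
    t = burn_time(setting)
    if setting <= RECOMMENDED_SETPOINT_F:
        print(f"{setting}°F: at or below the recommended setting")
    elif t is None:
        print(f"{setting}°F: above recommended; prolonged contact can still burn")
    else:
        print(f"{setting}°F: third-degree burn in about {t:.0f} seconds")
```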
# Types of Water Flow Controls
It is essential that valves be used in a water system to allow the system to be controlled in a safe and efficient manner. The number, type, and size of valves required will depend on the size and complexity of the system. Most valves can be purchased in sizes and types to match the pipe sizes used in water system installations. Listed below are some of the more commonly encountered valves, with a description of their basic functions.

Shutoff Valves. Shutoff valves should be installed between the pump and the pressure tank and between the pressure tank and the service entry to a building. Globe, gate, and ball valves are common shutoff valves. Gate and ball valves cause less friction loss than do globe valves; ball valves last longer and leak less than do gate valves. Shutoff valves allow servicing of parts of the system without draining the entire system.

Flow-control Valves. Flow-control valves provide uniform flow at varying pressures. They are sometimes needed to regulate or limit the use of water because of limited water flow from low-yielding wells or an inadequate pumping system. They also may be needed with some treatment equipment. These valves are often used to limit flow to a fixture. Orifices, mechanical valves, or diaphragm valves are used to restrict the flow to any one service line or complete system and to assure a minimum flow rate to all outlets.

Relief Valves. Relief valves permit water or air to escape from the system to relieve excess pressure. They are spring-controlled and are usually adjustable to relieve varying pressures, generally above 60 psi. Relief valves should be installed in systems that may develop pressures exceeding the rated limits of the pressure tank or distribution system. Positive displacement and submersible pumps and water heaters can develop these excessive pressures. The relief valve should be installed between the pump and the first shutoff valve and must be capable of discharging the flow rate of the pump. A combined pressure and temperature relief valve is needed on all water heaters. Combination pressure and vacuum relief valves also should be installed to prevent vacuum damage to the system.

Pressure-reducing Valves. A pressure-reducing valve is used to reduce line pressure. On main lines, this allows the use of thinner-walled pipe and protects house plumbing. Sometimes these valves are installed on individual services to protect plumbing.

Altitude Valves. Often an altitude valve is installed at the base of a water storage tank to prevent it from overflowing. Altitude valves sense the tank level through a pressure line to the tank. An adjustable spring allows setting the level so that the valve closes and prevents more inflow when the tank becomes full.

Foot Valves. A foot valve is a special type of check valve installed at the end of a suction pipe or below the jet in a well to prevent backflow and loss of prime. The valve should be of good quality and cause little friction loss.

Check Valves. Check valves have a function similar to foot valves. They permit water flow in only one direction through a pipe. A submersible pump may use several check valves. One is located at the top of the pump to prevent backflow from causing back spin of the impellers. Some systems use another check valve and a snifter valve. They will be in the drop pipe or pitless unit in the well casing and allow a weep hole located between the two valves to drain part of the pipe. When the pump is started, it will force the air from the drained part of the pipe into the pressure tank, thus recharging the pressure tank.

Frost-proof Faucets. Frost-proof faucets are installed outside a house, with the shutoff valve extending into the heated house to prevent freezing. After each use, the water between the valve and the outlet drains, provided the hose is disconnected, so water is not left to freeze.

Frost-proof Hydrants. Frost-proof hydrants make outdoor water service possible during cold weather without the danger of freezing. The shutoff valve is buried below the frost line. To avoid submerging it, which might result in contamination and back siphonage, the stop-and-waste valve must drain freely into a rock bed. These hydrants are sometimes prohibited by local or state health authorities.

Float Valves. Float valves respond to a high water level to close an inlet pipe, as in a tank-type toilet.

Miscellaneous Switches. Float switches respond to a high and/or low water level, as with an intermediate storage tank.
Pressure switches with a low-pressure cutoff stop the pump motor if the line pressure drops to the cutoff point. Low-flow cutoff switches are used with submersible pumps to stop the pump if the water discharge falls below a predetermined minimum operating pressure. High-pressure cutoff switches are used to stop pumps if the system pressure rises above a predetermined maximum. Paddle-type flow switches detect flow by means of a paddle placed in the pipe that operates a mechanical switch when flow in the pipe pushes the paddle.

The inadvertent contamination of a public water supply as a result of incorrectly installed plumbing fixtures is a potential public health problem in all communities. Continuous surveillance by environmental health personnel is necessary to know whether such public health hazards have developed as a result of additions or alterations to an approved system. All environmental health specialists should learn to recognize the three general types of defects found in potable water supply systems: backflow, back siphonage, and overhead leakage into open potable water containers. If identified, these conditions should be corrected immediately to prevent the spread of disease or poisoning from high concentrations of organic or inorganic chemicals in the water.

# Water Heaters
Water heaters (Figure 9.3) are usually powered by electricity, fuel oil, gas, or, in rare cases, coal or wood. They consist of a space for heating the water and a storage tank for providing hot water over a limited period of time. All water heaters should be fitted with a temperature-pressure (T&P) relief valve, no matter what fuel is used. The installation port for these valves may be found on the top or on the side of the tank near the top. T&P valves should not be placed close to a wall or door jamb, where they would be inaccessible for inspection and use. Hot water tanks sometimes are sold without the T&P valve, and it must be purchased separately. This fact alone should encourage individual permitting and inspection by counties and municipalities to ensure that the valves are installed. The T&P valve should be inspected a minimum of once per year. A properly installed T&P valve will operate when either the temperature or the pressure becomes too high because of an interruption of the water supply or a faulty thermostat.

Figure 9.3 shows the correct installation of a gas water heater; Figure 9.4 shows the placement of the T&P valve. Particular care should be paid to the exhaust port of the T&P valve. This vent should be directed to within 6 inches of the floor, and care must be taken to avoid reducing the diameter of the vent or creating any unnecessary bends in the discharge pipe; most codes will allow only one 90° bend in the vent. The point is to avoid any constriction that could slow the release of steam from the tank and allow explosive pressure to build up. Water heaters that are installed on wooden floors should have water collection pans with a drainage tube that drains to a proper drain. The pan should be checked on a regular basis.

# Tankless Water Heaters
A tankless unit has a heating device that is activated by the flow of water when a hot water valve is opened. Once activated, the heater delivers a constant supply of hot water. The output of the heater, however, limits the rate of the heated water flow. Demand water heaters are available in propane (LP), natural gas, or electric models.
They come in a variety of sizes for different applications, such as a whole-house water heater, a hot water source for a remote bathroom or hot tub, or a boiler to provide hot water for a home heating system. They can also be used as a booster for dishwashers, washing machines, and solar or wood-fired domestic hot water systems. The appeal of demand water heaters is not only the elimination of tank standby losses and the resulting lower operating costs, but also the fact that the heater delivers hot water continuously. Most tankless models have a life expectancy of more than 20 years; in contrast, storage tank water heaters last 10 to 15 years. Most tankless models also have easily replaceable parts that can extend their life by many more years.

# Traps
As mentioned earlier, the purpose of a trap is to seal out sewer gases from the structure. Because a plumbing system is subject to wide variations in flow, and this flow originates in many different sections of the system, pressures vary widely in the waste lines. These pressure differences tend to remove the water seal in the trap. The waste system must be properly vented to prevent the traps from siphoning dry, thus losing their water seal and allowing gas from the sewer into the building.

Objectionable Traps. The S-trap and the ¾ S-trap (Figure 9.7) should not be used in plumbing installations. They are almost impossible to ventilate properly, and the ¾ S-trap forms a perfect siphon. Mechanical traps were introduced to counteract this problem. It has been found, however, that the corrosive liquids flowing in the system corrode or jam these mechanical traps; for this reason, most plumbing codes prohibit them. The bag trap, an extreme form of S-trap, is seldom found. Traps are used only to prevent the escape of sewer gas into the structure. They do not compensate for pressure variations; only proper venting will eliminate pressure problems.

# Ventilation
A plumbing system is ventilated to prevent trap seal loss, material deterioration, and flow retardation.

Trap Seal Loss. The seal in a plumbing trap may be lost because of siphonage (direct and indirect, or momentum), back pressure, evaporation, capillary attraction, or wind effect. The first two are probably the most common causes of loss. Figure 9.8 depicts the siphonage process; Figure 9.9 depicts loss of trap seal. If a waste pipe is placed vertically after the fixture trap, as in an S-trap, the wastewater continues to flow after the fixture is emptied and clears the trap. This happens because the air pressure on the water in the fixture is greater than the air pressure in the waste pipe: the water discharging into the waste pipe removes air from that pipe and thereby creates a negative pressure in the waste line. In the case of indirect or momentum siphonage, the flow of water past the entrance to a fixture drain in the waste pipe removes air from the fixture drain. This reduces the air pressure in the fixture drain, and the entire assembly acts as an aspirator. (Figures 9.10 and 9.11 show plumbing configurations that would allow this type of siphonage to occur.)

Back Pressure. The flow of water in a soil pipe varies according to the fixtures being used. Small flows tend to cling to the sides of the pipe, but large ones form a slug of waste as they drop. As this slug of water falls down the pipe, the air in front of it becomes pressurized. As the pressure builds, it seeks an escape point. This point is either a vent or a fixture outlet.
If the vent is plugged or there is no vent, the only escape for this air is the fixture outlet. The air pressure forces the trap seal up the pipe into the fixture; if the pressure is great enough, the seal is blown out of the fixture entirely. Figures 9.8 and 9.9 illustrate the potential for this type of problem. Large water flow past the vent can aspirate the water from the trap, while water flow approaching the trap can blow the water out of the trap.

Vent Sizing. Vent pipe installation is similar to that of soil and waste pipe, and the same fixture unit criteria are used. Table 9.3 shows minimum vent pipe sizes. Vent pipes of less than 1¼ inches in diameter should not be used; vents smaller than this tend to clog and do not perform their function.

Individual Fixture Ventilation. Figure 9.12 shows a typical installation of a wall-hung plumbing unit. This type of ventilation is generally used for sinks, drinking fountains, and similar fixtures. Air admittance valves are often used for individual fixtures. Figure 9.13 shows a typical installation of a bathtub or shower ventilation system. Figure 9.14 shows the proper vent connection for toilet fixtures, and Figure 9.15 shows a janitor's sink or slop sink that has the proper P-trap. For the plumbing fixture to work properly, it must be vented as in Figures 9.13 and 9.14.

Unit Venting. Figures 9.10 and 9.11 show a back-to-back shared ventilation system for various plumbing fixtures. The unit venting system is commonly used in apartment buildings. This type of system saves a great deal of money and space when fixtures are placed back-to-back in separate apartments. It does, however, pose a problem if the vents are undersized, because one fixture will aspirate the water from the other fixture's trap. Figure 9.16 shows a double combination Y-trap used for joining the fixtures to the common soil pipe fixture on the other side of the wall.

Wet Venting. Bathroom fixture groupings are commonly wet vented; that is, the vent pipe also is used as a waste line.

# Total Drainage System
The drain, soil waste, and vent systems are all connected, and the inspector should remember the following fundamentals: Working vents must provide air to all fixtures to ensure the movement of waste into the sewer. Improperly vented fixtures drain slowly and clog often; they also present a health risk if highly toxic and explosive sewer gases enter the home. Correct venting is shown in Figures 9.10-9.15; incorrect venting is shown in Figures 9.8 and 9.9. A wet vent can result in one trap siphoning the other dry when large volumes of water are poured down the drain; wet vents are not permitted by many state plumbing codes because of this potential for self-siphoning.

Backup of sewage into sinks, dishwashers, and other appliances is always a possibility unless the system is equipped with air gaps or vacuum breakers. All connections to the potable water system must be a minimum of two pipe diameters above the overflow of the appliance and, in some cases, where flat surfaces are near, two and one-half pipe diameters above the overflow (a rule illustrated in the sketch below). A simple demonstration of how easily siphoning can occur is to hold a glass of water with food coloring in it so that the tip of a faucet is in the colored water. If the sink's vegetable sprayer is directed into a second glass and sprayed, in most cases the colored water will be aspirated into the faucet and then out of the sprayer into the second glass. Weed or pest killer attachments that hook to garden hoses work on the same principle.
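The air gap rule just described is simple arithmetic, and a minimal sketch can make it concrete. The factors of 2 and 2.5 come from the text above; the function name and the boolean flag for "near a flat surface" are illustrative assumptions, and local plumbing codes govern actual installations.

```python
def minimum_air_gap(pipe_diameter_in: float, near_flat_surface: bool = False) -> float:
    """Minimum air gap (inches) between a potable water outlet and the
    overflow rim of the fixture it serves, per the rule above:
    2 pipe diameters normally, 2.5 near flat surfaces."""
    factor = 2.5 if near_flat_surface else 2.0
    return factor * pipe_diameter_in

# Example: a 1/2-inch supply line discharging near a flat surface
# must terminate at least 1.25 inches above the fixture overflow.
print(minimum_air_gap(0.5, near_flat_surface=True))  # 1.25
```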
Figure 9.17 shows an outside hose bib equipped with a vacuum breaker. In areas of the United States that freeze, these vacuum breakers must be removed seasonally because they trap water in a section of the line that can freeze and burst; many vacuum breakers sold today automatically drain to prevent freeze damage. Devices that pull water from a utility may create negative pressures that can damage water piping and pull dangerous substances into the line at the same time. These devices include power sprayers that hook to the home hose bib (outside faucet) and pressurize the spray by creating a vacuum on the supply side.

# Corrosion Control
To understand the proper maintenance procedures for preventing and eliminating water quality problems in plumbing systems, it is necessary to understand the process used to determine the chemical aggressiveness of water. The process is used to determine when additional treatment is needed. Water that is out of balance can produce many negative outcomes, from toxic water to damaged and ruined equipment. Water dissolves and carries materials when it is not saturated. An equilibrium among pH, temperature, alkalinity, and hardness controls water's ability to create scale or to dissolve material. If water is saturated with harmless or beneficial substances, such as calcium, the threat of damage can be mitigated. The Langelier method, developed in the early 1930s, is used in boiler management, municipal water treatment, and swimming pools to provide this balance. In the Langelier index, a saturation value above 0.3 is scale forming, and a saturation value below 0.3 is corrosive.

The susceptibility of metals to corrosion, from most susceptible to least susceptible, is as follows: magnesium, zinc, aluminum, cadmium, mild steel, cast iron, stainless steel (active), lead-tin solder, lead, tin, brass, gun metal, aluminum bronze, copper, copper-nickel alloy, Monel, titanium, stainless steel (passive), silver, gold, and platinum.

Minimum fixture supply, vent, and drain line sizes are as follows:

Fixture | Supply Line (inches) | Vent Line (inches) | Drain Line (inches)
Bathtub | 1/2 | 1 1/2 | 1 1/2
Kitchen sink | 1/2 | 1 1/2 | 1 1/2
Lavatory | 3/8 | 1 1/4 | 1 1/4
Laundry sink | 1/2 | 1 1/2 | 1 1/2
Shower | 1/2 | 2 | 2
Toilet tank | 3/8 | 3 | 3

# Water Conservation
How much attention should be paid to fixtures that just drip a little water or that will not quite shut off? At 30 drops per minute, you will lose and pay for 54 gallons per month. At 60 drops per minute, you will lose and pay for 113 gallons per month. At 120 drops per minute, you will lose and pay for 237 gallons per month. (The sketch at the end of this section turns these figures into a simple calculation.) Even these losses are small compared with the 5 to 7 gallons per flush used by older toilets; if a toilet is not properly maintained, the loss of water and its effect on the monthly water bill can be considerable. Lower flow toilets have been mandated to save precious and limited resources. Most pre-1992 toilets used up to 7 gallons per flush; toilets have since evolved to use 5.5, then 3.5, and now 1.6 gallons per flush. With the changes in the water usage laws in 1992, there were many customer complaints, and plumbers were in the bad position of installing products that nobody wanted to use. New and updated products now work better than the old water wasters.

According to the U.S. Environmental Protection Agency (EPA), in 2000 a typical U.S. family of four spent approximately $820 every year on water and sewer fees, plus another $230 in energy to heat water. In many cities, water and sewer costs can be more than twice those amounts. Many people do not realize how much money they can save by taking simple steps to save water, and they do not know the cumulative effects small changes can have on water resources and environmental quality. Fixing a leaky faucet, toilet, or lawn-watering system can reduce water consumption, and changing to water-efficient plumbing fixtures and appliances can result in major water and energy savings. Summer droughts remind many of the need to appreciate clean water as an invaluable resource. As the U.S. population increases, the need for clean water supplies continues to grow dramatically and puts additional stress on our limited water resources. We can all take steps to save and conserve this valuable resource. The EPA suggests the following steps homeowners should take right away to save water and money:

- Stop leaks!-Check indoor water-using appliances and devices for leaks. Pay particular attention to toilets that leak.
- Take showers-Showers use considerably less water than do baths.
- Replace shower heads-Replacement shower heads are available that reduce water use.
- Turn the water off when not needed-While brushing your teeth, turn the water off until you need to rinse.
- Replace your old toilet-The largest water user inside the home is the toilet. If a home was built before 1992 and the toilet has never been replaced, it is very likely not a water-efficient, 1.6-gallons-per-flush toilet. Choose a replacement toilet carefully to ensure that what you save per individual flush, you do not lose because you must flush more often.
- Replace your clothes washer-The second largest water user in the home is the washing machine. Energy Star-rated washers that also have a water factor at or lower than 9.5 use 35%-50% less water and 50% less energy per load. This saves money on both water and energy bills.
- Plant the right plants with proper landscape design and irrigation-Select plants that are appropriate for the local climate. A 100% turf lawn in a dry desert climate uses a significant amount of water. Homeowners should also consider the benefits of a more natural landscape or wildscape.
- Water plants only as needed-Most water wasted in the garden comes from watering when plants do not need it or from not maintaining the irrigation system. If watering manually, set a timer and move the hose promptly. Make sure the irrigation controller has a rain shutoff device and that it is appropriately scheduled; newer irrigation systems have sensors to prevent watering while it is raining. Drip irrigation should be considered where practical.
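Returning to the drip figures at the start of this section: they imply a conversion factor of roughly 23,000 drops per gallon. A minimal sketch using that back-calculated constant; actual drop sizes vary, so the constant and the function name are assumptions for illustration only.

```python
DROPS_PER_GALLON = 23_000  # back-calculated from the figures above; drop size varies

def monthly_loss_gallons(drops_per_minute: float, days: int = 30) -> float:
    """Approximate water lost to a dripping fixture over a month."""
    drops = drops_per_minute * 60 * 24 * days
    return drops / DROPS_PER_GALLON

for rate in (30, 60, 120):
    print(f"{rate} drops/min: about {monthly_loss_gallons(rate):.0f} gal/month")
# Roughly reproduces the 54, 113, and 237 gallons-per-month figures cited above.
```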
# Putting It All Together
These photographs, taken during construction of a home by Habitat for Humanity, show various plumbing elements discussed in this chapter.

A. Hot and cold copper water lines and drain, P-trap and vent, and vent for the washer drain shown. When a house is vacant for a while, the P-trap should be filled with water to prevent sewer gas from entering the home; mineral oil added to the water can slow the loss of fluid in the trap.

B. Hot and cold water pipes, soil pipe, and vent shown.

C. Vent for the sink and toilet, soil pipe, and cap for toilet connection shown. A wax or plastic seal shaped like a donut will be placed on the cap before the toilet is bolted down.

D. Mixing and antiscald water flow control, vent for fixture, hot and cold water lines, and bathtub overflow shown. At this point in construction, insulation might be considered for the hot water lines.

E. Water service and wastewater line.
Polyethylene water service pipe entering the home through the concrete basement wall is shown, with a white plastic adapter between the polyethylene water service pipe and the ¾-inch copper water line. A short distance above the adapter is a pressure-reducing valve. To the right of the water line is the 4-inch PVC wastewater line.

# Chapter 10: On-site Wastewater Treatment

# Introduction
The French are considered the first to use an underground septic tank system, in the 1870s. By the mid-1880s, two-chamber, automatic siphoning septic tank systems, similar to those used today, were being installed in the United States. Even now, more than a century later, septic tank systems represent a major household wastewater treatment option: fully one-fourth to one-third of the homes in the United States use such a system.

On-site sewage disposal systems are used in rural areas where houses are spaced so far apart that a sewer system would be too expensive to install, or in areas around cities where the city government has not yet provided sewers to which the homes can connect. In these areas, people install their own private sewage treatment plants. As populations continue to expand beyond the reach of municipal sewer systems, more families are relying on individual on-site wastewater treatment systems and private water supplies. The close proximity of on-site water and wastewater systems in subdivisions and other developed areas, reliance on marginal or poor soils for on-site wastewater disposal, and a general lack of understanding by homeowners about proper septic tank system maintenance pose a significant threat to public health. The expertise on inspecting, maintaining, and installing these systems generally rests with the environmental health staff of local county or city health departments.

These private disposal systems are typically called septic tank systems. A septic tank is a sewage holding device made of concrete, steel, fiberglass, polyethylene, or other approved material, buried in a yard, which may hold 1,000 gallons or more of wastewater. Wastewater flows from the home into the tank at one end and leaves the tank at the other (Figure 10.1). In a typical system, effluent leaves the home through a pipe, enters the septic tank, and travels through a distribution box to a trench absorption system composed of perforated pipe.

Proper maintenance of septic tanks is a public health necessity. Enteric diseases such as cryptosporidiosis, giardiasis, salmonellosis, hepatitis A, and shigellosis may be transmitted through human excrement, and historically, major epidemics of cholera and typhoid fever were primarily caused by improper disposal of wastewater. The earliest epidemiology lesson learned was through the effort of Dr. John Snow of England (1813-1858) during a devastating cholera epidemic in London. Dr. Snow, known as the father of field epidemiology, discovered that the city's water supply was being contaminated by improper disposal of human waste. He published a brief pamphlet, On the Mode of Communication of Cholera, suggesting that cholera is a contagious disease caused by a poison that reproduces in the human body and is found in the vomitus and stools of cholera patients. He believed that the main, although not the only, means of transmission was water contaminated with this poison. This differed from the commonly accepted belief at the time that diseases were transmitted by inhaling vapors.
# Treatment of Human Waste
Safe, sanitary, nuisance-free disposal of wastewater is a public health priority in all population groups, small and large, rural or urban. Wastewater should be disposed of in a manner that ensures that

- community or private drinking water supplies are not threatened;
- direct human exposure is not possible;
- waste is inaccessible to vectors, insects, rodents, or other possible carriers;
- all environmental laws and regulations are complied with; and
- odor or aesthetic nuisances are not created.

In Figure 10.2, a straight pipe from a nearby home discharges untreated sewage that flows from a shallow drainage ditch to a roadside mountain creek in which many children and some adults wade and fish. The clear water (Figure 10.3) is quite deceptive in terms of the health hazard presented. A 4-mile walk along the creek revealed 12 additional pipes that were also releasing untreated sewage. Some people in the area reportedly regard this creek as a source of drinking water.

# Epidemiology
John Snow, a London physician, was among the first to use anesthesia. It is his work in epidemiology, however, that earned him his reputation as a prototype for epidemiologists. Dr. Snow's brief 1849 pamphlet, On the Mode of Communication of Cholera, caused no great stir, and his theory that the city's water supply was contaminated was only one of many proposed during the epidemic. Snow, however, was able to prove his theory in 1854, when another severe epidemic of cholera occurred in London. Through painstaking documentation of cholera cases and correlation of the comparative incidence of cholera among subscribers to the city's two water companies, he showed that cholera occurred much more frequently in customers of one water company. This company drew its water from the lower Thames, which had become contaminated with London sewage, whereas the other company obtained water from the upper Thames. Snow's evidence soon gained many converts. A striking incident during this epidemic has become legendary. In one neighborhood, at the intersection of Cambridge Street and Broad Street, the concentration of cholera cases was so great that the number of deaths reached over 500 in 10 days. Snow investigated the situation and concluded that the cases were clustered around the Broad Street pump. He advised an incredulous but panicked assembly of officials to have the pump handle removed, and when this was done, the epidemic was contained. Snow was a skilled practitioner as well as an epidemiologist, and his creative use of the scientific information of his time is an appropriate example for those interested in disease prevention and control.

Raw or untreated domestic wastewater (sewage) is primarily water, containing only 0.1% impurities that must be treated and removed. Domestic wastewater contains biodegradable organic materials and, very likely, pathogens. The primary purpose of wastewater treatment is to remove impurities and release the treated effluent into the ground or a stream. There are various processes for accomplishing this:

- Centralized treatment-Publicly owned treatment works (POTWs) that use primary (physical) treatment and secondary (biologic) treatment on a large scale to treat flows of up to millions of gallons or liters per day,
- Treatment on-site-Septic tanks and absorption fields or variations thereof, and
- Stabilization ponds (lagoons)-Centralized treatment for populations of 10,000 or less when soil conditions are marginal and land space is ample.
Not included are pit privies and compost toilets. Historically, wastewater disposal systems are categorized as water-carrying and nonwater-carrying. Nonwater-carried human fecal waste can be contained and decomposed on-site, the primary examples being the pit privy and the compost toilet. These systems are not practical for individual residences because they are inconvenient and expose users to inclement weather, biting insects, and odors. Because of the depth of the disposal pit, privies may also introduce waste directly into groundwater. These types of systems are, however, often used and may be acceptable in low-water-use settings such as small campsites or along nature trails.

# On-site Wastewater Treatment Systems
As urban sprawl continues and the population increases in rural areas, the cost of building additional sewage disposal systems increases. One of the prime reasons for annexation is to increase the tax base without increasing the cost of municipal government. The governments involved often buy into short-term tax gains at massive long-term costs for eventual infrastructure improvements to annexed communities. Installing septic tank systems to provide on-site disposal is common, but it is a temporary solution at best. Because property size must be sufficient to allow space for septic system replacement, the cost to a municipality of later installing a centralized sewer system is dramatically increased by the large lot sizes.

Two microbiologic processes occur in all methods that attempt to decompose domestic wastewater: anaerobic decomposition (by bacteria that do not require oxygen) and aerobic decomposition (by bacteria that require oxygen). Aerobic decomposition is generally preferred because aerobic bacteria decompose organic matter (sewage) much faster than anaerobic bacteria do, and odors are less likely. Centralized wastewater treatment facilities use aerobic processes, as do most types of lagoons; septic tank systems use both processes.

# Septic Tank Systems
Approximately 21% of American homes are served by on-site sewage disposal systems. Of these, 95% are septic tank field systems. Septic tank systems are used as a means of on-site wastewater treatment in many homes, both rural and urban, in the United States. If maintained and operated within acceptable parameters, they are capable of properly treating wastewater for a limited number of years and will need both routine maintenance and, eventually, major repairs. Proper placement and installation are key to the successful operation of any on-site wastewater treatment system, but septic tank systems have a finite life expectancy, and all such systems will eventually fail and need to be replaced. Figure 10.4 shows a typical rural home with a well and a septic system.

Septic tank systems generally are composed of the septic tank, distribution box, absorption field (also known as the soil drain field), and leach field. The septic tank serves three purposes: sedimentation of solids in the wastewater, storage of solids, and anaerobic breakdown of organic materials. To place the septic tank and absorption field so that they will not contaminate water wells, groundwater, or streams, the system should be 10 feet from the house and other structures, at least 5 feet from property lines, 50 feet from water wells, and 25 feet from streams (these setbacks are encoded in the sketch below). The entire system area should be easily identifiable; there have been occasions when owners have paved over or built over the area.
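The setback distances just listed lend themselves to a simple validation check. The sketch below encodes only the figures from the text above; the feature names and the function itself are illustrative, and local health codes govern the actual required distances.

```python
# Minimum separation distances (feet) from the text above.
MIN_SETBACKS_FT = {
    "house_or_structure": 10,
    "property_line": 5,
    "water_well": 50,
    "stream": 25,
}

def setback_violations(distances_ft: dict[str, float]) -> list[str]:
    """Return the features whose measured distance from the septic
    system is less than the minimum listed in the text."""
    return [
        feature
        for feature, minimum in MIN_SETBACKS_FT.items()
        if distances_ft.get(feature, float("inf")) < minimum
    ]

# Example: a well 40 feet away fails the 50-foot minimum.
print(setback_violations({"water_well": 40, "stream": 30}))  # ['water_well']
```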
The local health code authorities must be consulted on the required distances in their area because of soil and groundwater issues.

Aerobic, or aerated, septic systems use a suspended growth wastewater treatment process and can remove suspended solids that are not removed by simple sedimentation. Under appropriate conditions, aerobic units may also provide for nitrification of ammonia, as well as significant pathogen reduction. Some type of primary treatment usually precedes the aerated tank. The tanks contain an aeration chamber, with either mechanical aerators or blowers or air diffusers, and an area for final clarification/settling. Aerobic units may be designed as either continuous flow or batch flow systems; continuous flow units are the most commercially available. Effluent from the aerated tank is conveyed either by gravity flow or by pumping, either to further treatment/pretreatment processes or to final treatment and disposal in a subsurface soil disposal system. Various types of pretreatment may be used ahead of the aerobic units, including septic tanks and trash traps. The batch flow system collects and treats wastewater over a period of time, then discharges the settled effluent at the end of the cycle.

Aerobic units may be used by individual or clustered residences and establishments for treating wastewater before either further treatment/pretreatment or final on-site subsurface treatment and disposal. These units are particularly applicable where enhanced pretreatment is important and where suitable land for final on-site disposal of wastewater effluent is limited. Because of their need for routine maintenance to ensure proper operation and performance, aerobic units may be well suited to multiple-home or commercial applications, where economies of scale tend to reduce maintenance and repair costs per user. The lower organic and suspended solids content of the effluent may allow a reduction in the land area required for subsurface disposal systems.

A properly functioning septic tank will remove approximately 75% of the suspended solids, oil, and grease from effluent. Because the detention time in the tank is 24 hours or less, there is not a major kill of pathogenic bacteria; the bacteria will be removed in the absorption field (drain field). However, some soils and soil conditions prevent a drain field from absorbing effluent from the septic tank.

Septic tanks are sized to retain the total volume of sewage produced by a household in a 24-hour period. Normally, a 1,000-gallon tank is the minimum size to use. State or local codes generally require larger tanks as the potential occupancy of the home increases (e.g., 1,250 gallons for four bedrooms) and may require two tanks in succession when inadequate soils require an alternative system installation (the sketch below illustrates this sizing pattern). Figure 10.5 shows a typical septic tank.

Distribution boxes are not required by most on-site plumbing codes or by the U.S. Environmental Protection Agency. When used, distribution boxes provide a convenient inspection port. In addition, if a split system absorption field is installed (two separate absorption trench systems), the distribution box is a convenient place to install a diversion valve for annually alternating absorption fields.
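The tank-sizing rule above rests on only two data points from this text: a 1,000-gallon minimum and 1,250 gallons for four bedrooms. A minimal sketch of that pattern; the 250-gallon step assumed for homes with more than four bedrooms is an illustrative extrapolation, not a code value, and state or local code always controls.

```python
def minimum_tank_gallons(bedrooms: int) -> int:
    """Rough minimum septic tank size based on the figures above:
    1,000 gallons through three bedrooms, 1,250 for four. Sizes for
    larger homes are an assumed extrapolation, not a code value."""
    if bedrooms <= 3:
        return 1000
    if bedrooms == 4:
        return 1250
    return 1250 + 250 * (bedrooms - 4)  # assumption: +250 gal per extra bedroom

print(minimum_tank_gallons(3))  # 1000
print(minimum_tank_gallons(4))  # 1250
```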
# Absorption Field Site Evaluation
The absorption field has a variety of names, including leach field, tile field, drain field, disposal field, and nitrification field. The effluent from the septic tank is directed to the absorption field for final treatment. The suitability of the soil, along with the other factors noted below, determines the best way to properly treat and dispose of the wastewater.

Most, but unfortunately not all, states require areas not served by publicly owned sewers to be preapproved for on-site wastewater disposal before home construction through a permitting process. This process typically requires a site evaluation by a local environmental health specialist, a soil scientist, or, in some cases, a private contractor. To assist in the site evaluation process, soil survey maps from the local soil conservation service office may be used to provide general information about soils in the area. The form shown in Figure 10.6 is typical of those used in conducting a soil evaluation.

Sites for on-site wastewater disposal are first evaluated for use with a conventional septic tank system. Evaluation factors include site topography, landscape position, soil texture, soil structure, internal drainage, depth to rock or other restrictive horizons, and useable area. If the criteria are met, a permit is issued to allow the installation of a conventional septic tank system. Areas that do not meet the criteria for a conventional system may meet less-restrictive criteria for an alternative type of system. Many sites are unsuitable for any type of on-site wastewater disposal system because of severe topographic limitations, poor soils, or other evaluation criteria. Such sites should not be used for on-site wastewater disposal because of the high likelihood of system failure. Some states and localities may require a percolation test as part of the site evaluation process. As a primary evaluation method, percolation tests are a poor indicator of the ability of a soil to treat and move wastewater throughout the year. However, information obtained by percolation tests may be useful when used in conjunction with a comprehensive soil analysis.

# Absorption Field Trench
A conventional absorption field trench (Figure 10.7), also known as a rock lateral system, is the most common system used on level land or land with moderate slopes where there is adequate soil depth above the water table or other restrictive horizons. The effluent from the septic tank flows through solid piping to a distribution box or, in many cases, straight to the absorption field. With the conventional system and most alternative systems, the effluent flows through perforated pipes into gravel-filled trenches and subsequently seeps through the gravel into the soil. The local regulatory agency should be consulted about the acceptable depth of the absorption field trench; some states require as much as 4 feet of separation between the bottom of the trench and the groundwater. The depth of absorption field trenches should be at least 18 inches and ideally no deeper than 24 inches, and the absorption field pipe should be laid flat, with no slope. There should be a minimum of 12 to 18 inches of acceptable soil below the bottom of the trench to any bedrock, water table, or restrictive horizon. The length of the trench should not exceed 100 feet for systems using a distribution box; serpentine systems may be several hundred feet long. The bottom 6 inches of the trench should be filled with crushed or fragmented clean rock or gravel. Perforated 4-inch-diameter pipe is laid on top of the gravel, then covered with an additional 2 inches of rock and leveled, for a total of 12 inches.
A geotextile material or a biodegradable material such as straw should be placed over the gravel before backfilling the trench to prevent soil from clogging the spaces between the rocks. One or more monitoring ports should be installed in the absorption area, extending to the bottom of the gravel, to allow measurement of the actual liquid depth in the gravel. This is essential for subsequent testing of the adequacy of the system.

As a general rule, using longer and narrower trenches to meet square footage requirements produces a better working and longer-lasting ground absorption sewage disposal system. Studies have shown that as septic systems age, the majority of effluent absorption by the soil occurs through lateral movement through the trench sidewalls. Because sidewall area is proportional to trench length, longer and narrower trenches (such as 400 feet long by 2 feet wide instead of 200 feet by 4 feet to obtain the same 800 square feet of bottom area) double the sidewall area available for lateral movement of wastewater.

# Alternative Septic Tank Systems
As the cost of land for home building increases and the availability of land decreases, land that was once considered unsuitable is being developed. This land often has poor soil and drainage properties, and such sites require a considerable amount of engineering skill to design an acceptable wastewater disposal system. In many cases, sites cannot accommodate seepage systems at a reasonable cost. Alternative systems are primarily regulated by state and local government, and approval must be obtained from the appropriate regulatory agencies before use. Even if a site is approved in one state or county jurisdiction, a similar site may not be approved in another.

The primary difficulty with septic tank systems is treating effluent in slowly permeable or marginal soils. Low-water-use devices, when installed, may make it possible to use a small percentage of septic tank systems in marginal soil. However, low-water-use devices are usually required as part of a larger effort to develop a usable alternative sewage disposal system. There are numerous alternative sewage disposal methods that can be used when conventional subsurface disposal is not appropriate. Some of the more common alternative systems are described below.

# Mound Systems
A mound system (Table 10.1) is elevated above the natural soil surface to achieve the desired vertical separation from a water table or impervious material. The elevation is accomplished by placing sand fill material on top of the best native soil stratum. At least 1 foot of naturally occurring soil is necessary for a mound system to function properly. Minimizing water usage in the home also is critical to prevent effluent from weeping through the sides of the mound (Figure 10.8).

Table 10.1. Advantages and disadvantages of mound systems

Advantages:
- May be used in areas with high groundwater, bedrock, or restrictive clay soil near the surface
- Space efficient compared with conventional rock lateral systems
- Allows home building in areas unsuitable for below-grade systems
- Water softener wastes, common household chemicals, and detergents are not harmful to the system

Disadvantages:
- Must be installed on relatively level lots
- Periodic flushing of the distribution network is required
- May be expensive
- May be difficult to design
- Regular inspection of the pumps and controls is necessary to keep the system in proper working condition

When a mound system is constructed, the septic tank usually receives wastewater from the house by gravity flow.
A lift station is located in the second compartment or in a separate tank to pump the effluent up to the distribution piping in the mound. Floats in the lift station control the size of the pumped effluent dose. An alarm should be installed to alert the homeowner of pump failure so that repairs can be made before the pump tank overfills.

# Low-Pressure Pipe Systems
Low-pressure pipe (LPP) systems may also be used where the soil profile is shallow. These systems are similar to mounds except that they use the naturally occurring soil as it exists on-site instead of elevating the disposal field with soil fill material. LPP systems are installed with a trenching machine at depths of 12 to 18 inches. The LPP system consists of a septic tank, high-water alarm, pumping tank, supply line, manifold, lateral lines, and submersible effluent pump (Figure 10.9). When septic tank effluent rises to the level of the pump control in the pumping tank, the pump turns on and effluent moves through the supply line and distribution laterals. The laterals contain small holes and are typically placed 3 to 8 feet apart. From the trenches, the effluent moves into the soil, where it is treated. The pump turns off when the effluent falls to the lower control. Dosing takes place one to two times daily, depending on the amount of effluent generated. The time between doses allows the effluent to be absorbed into the soil and also allows oxygen to reenter the soil to break down solids that may be left behind. If the pump malfunctions, an alarm notifies the homeowner to contact a qualified septic system contractor; the pump must be repaired or replaced quickly to prevent the pump tank from overflowing. Table 10.2 shows the advantages and disadvantages of LPP systems.

Table 10.2. Advantages and disadvantages of low-pressure pipe systems

Advantages:
- Space requirements are nearly half those of a conventional septic tank system
- Can be installed on irregular lot shapes and sizes
- Can be installed at shallower depths and requires less topsoil cover
- Provides alternating dosing and resting cycles
- Installation sites are left in their natural condition

Disadvantages:
- Some systems may gradually accumulate solids at the dead ends of the lateral lines; therefore, maintenance is required
- Electric components are necessary
- Design and installation may be difficult; installers with experience with such systems should be sought

# Plant-rock Filter Systems (Constructed Wetlands)
Considered experimental in some states, plant-rock filter systems are being used with great success in Kentucky, Louisiana, and Michigan. Plant-rock filters generally consist of a two-compartment septic tank, a rock filter, and a small overflow lateral (absorption) field. Overflow from the septic tank is directed into the rock filter. The rock filter is a long, narrow trench (3 to 5 feet wide and 60 to 100 feet long) lined with leakproof polyvinyl chloride or butyl plastic to which rock is added. Rock 2 to 4 inches in diameter is used below the effluent flow line, with larger rock above (Figure 10.10). Plant-rock filter systems are typically sized to allow 1.3 cubic feet of rock area per gallon of total daily waste flow; a typical size for a three-bedroom house would be 468 square feet of interior area (the sketch below works through this arithmetic). Various width-to-length ratios within the parameters listed above can be used to obtain the necessary square footage, and the trenches can even be designed in an "L" shape to accommodate small building lots.
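The 468-square-foot example follows directly from the sizing rule: at the 120-gallons-per-bedroom design figure cited later in this chapter, a three-bedroom house generates 360 gallons per day, and 360 × 1.3 = 468. A minimal sketch, assuming a 1-foot media depth so that cubic feet of rock equal square feet of interior area; the function name and the depth parameter are illustrative choices.

```python
GALLONS_PER_BEDROOM_PER_DAY = 120  # design figure cited later in this chapter
ROCK_CUBIC_FT_PER_GALLON = 1.3     # sizing rule from the text above

def plant_rock_filter_area_sqft(bedrooms: int, media_depth_ft: float = 1.0) -> float:
    """Interior area of a plant-rock filter cell. With the assumed
    1-foot media depth, cubic feet of rock equal square feet of area."""
    daily_flow = bedrooms * GALLONS_PER_BEDROOM_PER_DAY
    rock_volume = daily_flow * ROCK_CUBIC_FT_PER_GALLON
    return rock_volume / media_depth_ft

print(plant_rock_filter_area_sqft(3))  # 468.0, matching the example above
```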
Treatment begins in the septic tank. The partially treated wastewater enters the lined plant-rock filter cell through solid piping and is distributed across the cell. The plants within the system introduce oxygen into the wastewater through their roots. As the wastewater becomes oxygenated, beneficial microorganisms and fungi thrive on and around the roots, which leads to digestion of organic matter. In addition, large amounts of water are lost through evapotranspiration. The kinds of plants most widely used in these systems include cattails, bulrush, water lilies, many varieties of iris, and nutgrass. Winter temperatures have little effect, because the roots do the work in these systems and stay alive during the winter months. Discharge from wetlands systems may require disinfection. The advantages and disadvantages of the plant-rock filter system are shown in Table 10.3.

Table 10.3. Advantages and disadvantages of plant-rock filter systems

Advantages:
- Approximately one-third the size of conventional septic tank absorption field systems
- Can be placed on irregular or segmented lots
- May be placed in areas with shallow water tables, high bedrock, or restrictive horizons
- Relatively low maintenance

Disadvantages:
- May be slightly more costly to install
- Disinfection required for effluent discharge
- Installers knowledgeable about the system may be hard to find
- Life span of the system is unknown because of its relative newness
- Perceived by some as unsightly

# Maintaining On-site Wastewater Treatment Systems
Do's and don'ts inside the house:

- Do conserve water. Putting too much water into the septic system can eventually lead to system failure. (Typical water use is about 60 gallons per day for each person in the family.) The standard drain field is designed for a capacity of 120 gallons per bedroom; systems near capacity may not work. Water conservation will extend the life of the system and reduce the chances of system failure.
- Do fix dripping faucets and leaking toilets.
- Do avoid long showers.
- Do use washing machines and dishwashers only for full loads.
- Do not allow the water to run continually when brushing teeth or shaving.
- Do avoid disposing of the following items down sink drains or toilets: chemicals, sanitary napkins, tissues, cigarette butts, grease, cooking oil, pesticides, kitty litter, coffee grounds, disposable diapers, stockings, or nylons.
- Do not install garbage disposals.
- Do not use septic tank additives or cleaners. They are unnecessary, and some of the chemicals can contaminate the groundwater.

Do's and don'ts for outside maintenance:

- Do maintain adequate vegetative cover over the absorption field.
- Do not allow surface waters to flow over the tank and drain field areas. (Diversion ditches or subsurface tiles may be used to direct water away from the system.)
- Do not allow heavy equipment, trucks, or automobiles to drive across any part of the system.
- Do not dig into the absorption field or build additions near the septic system or the repair area.
- Do make sure a concrete riser (or manhole) is installed over the tank if the tank is not within 6 inches of the surface, providing easy access for measuring and pumping solids. (Note: All tanks should have two manholes, one positioned over the inlet device and one over the outlet device.)

There is no need to add any commercial substance to "start" or clean a tank to keep it operating properly; such products may actually hinder the natural bacterial action that takes place inside a septic tank.
The fecal material, cereal grain, salt, baking soda, vegetable oil, detergents, and vitamin supplements that routinely make their way from the house to the tank are far superior to any additive.

# Symptoms of Septic System Problems
These symptoms can mean a serious septic system problem:

- Sewage backup in drains or toilets (often a black liquid with a disagreeable odor).
- Slow flushing of toilets. Many of the drains will drain much slower than usual, despite the use of plungers or drain-cleaning products. This also can be the result of a clogged plumbing vent or a nonvented fixture.
- Surface flow of wastewater. Sometimes liquid seeps along the surface of the ground near the septic system. It may or may not have much of an odor and will range from very clear to black in color.
- Lush green grass over the absorption field, even during dry weather. Often this indicates that an excessive amount of liquid from the system is moving up through the soil instead of down, as it should. Although some upward movement of liquid from the absorption field is good, too much could indicate major problems.
- The presence of nitrates or bacteria in the drinking water well. This indicates that liquid from the system may be flowing into the well through the ground or over the surface. Water tests available from the local health department will indicate whether this is a problem.
- Buildup of aquatic weeds or algae in lakes or ponds adjacent to the home. This may indicate that nutrient-rich septic system waste is leaching into the surface water, which may lead to both inconvenience and possible health problems.
- Unpleasant odors around the house. Often, an improperly vented plumbing system or a failing septic system causes a buildup of disagreeable odors.

Table 10.4 is a guide to troubleshooting septic tank problems.

# Septic Tank Inspection
The first priority in the inspection process is the safety of the homeowner, neighbors, workers, and anyone else for whom the process could create a hazard.

- Do not enter septic tanks or cesspools.
- Do not work alone on these tanks.
- Do not bend or lean over septic tanks or cesspools.
- Note, and take appropriate action regarding, unsafe tank covers.
- Note unsanitary conditions or maintenance needs (sewage backups, odor, seepage).
- Do not bring sewage-contaminated clothing into the home.
- Have current tetanus inoculations if working in septic tank inspection.

Methane and hydrogen sulfide gases are produced in a septic tank, and both are toxic and explosive. Hydrogen sulfide gas is quite deceptive: it can have a very strong odor one moment, but after exposure, the odor may no longer be noticed.

# Inspection Process
As sewage enters a septic tank, the rate of flow is reduced and heavy solids settle, forming sludge. Grease and other light solids rise to the surface, forming a scum. The sludge and scum (Figure 10.11) are retained and break down while the clarified effluent (liquid) is discharged to the absorption field. Sludge eventually accumulates in the bottom of all septic tanks; the buildup is slower in warm climates than in colder climates. The only way to determine the sludge depth is to measure it with a probe inserted through an inspection port in the tank's lid. Do not put this job off until the tank fills and the toilet overflows; if this happens, damage to the absorption field could occur and be expensive to repair.

# Scum Measurement
The floating scum thickness can be measured with a probe.
The scum thickness and the vertical distance from the bottom of the scum to the bottom of the inlet can also be measured. If the bottom of the scum gets within 3 inches of the outlet, scum and grease can enter the absorption field. If grease gets into the absorption field, percolation is impaired and the field can fail. If the scum is near the bottom of the tee, the septic tank needs to be cleaned out. The scum thickness can best be measured through the large inspection port; scum should never be closer than 3 inches to the bottom of the baffle. The scum thickness is observed by breaking through it with a probe, usually a pole.

Table 10.4. Guide to troubleshooting septic tank problems

Problem: Wastewater backs up into the building, or plumbing fixtures are sluggish or do not drain well.
Possible causes: Excess water entering the septic tank system, plumbing installed improperly, roots clogging the system, plumbing lines blocked, pump failure, absorption field damaged or crushed by vehicular traffic.
Remedies: Fix leaks, stop using the garbage disposal, clean the septic tank and inspect pumps, replace broken pipes, seal pipe connections, and avoid planting willow trees close to system lines. Do not allow vehicles to drive or park over lines. Contact the local health department for guidance.

Problem: Wastewater surfaces in the yard.
Possible causes: Excess water entering the septic tank system, system blockage, improper system elevations, undersized soil treatment system, pump failure, absorption field damaged or crushed by vehicular traffic.
Remedies: Fix leaks, clean the septic tank and check pumps, make sure the distribution box is free of debris and functioning properly, fence off the area until the problem is fixed, and call in experts. Contact the local health department for guidance.

Problem: Sewage odors indoors.
Possible causes: Sewage surfacing in the yard, improper plumbing, sewage backing up in the building, trap under a sink dried out, roof vent pipe frozen shut.
Remedies: Repair plumbing, clean the septic tank and check pumps, replace water in drain pipes, and thaw the vent pipe. Contact the local health department for guidance.

Problem: Sewage odors outdoors.
Possible causes: Source other than the owner's system, sewage surfacing in the yard, manhole or inspection pipes damaged or partially removed, downdraft from vent pipes on the roof.
Remedies: Clean the tank and check pumps, replace damaged inspection port covers, and replace or repair the absorption field. Contact the local health department for guidance.

Problem: Contaminated drinking water or surface water.
Possible causes: System too close to a well, water table, or fractured bedrock; cesspool or dry well being used; improper well construction; broken water supply or wastewater lines.
Remedies: Improperly located wells must be sealed in strict accordance with state and local codes. Abandon the well and locate a new one far from, and upslope of, the septic system; fix all broken lines; stop using the cesspool or dry well. Contact the local health department for guidance.

Problem: Distribution pipes and soil treatment system freeze in winter.
Possible causes: Improper construction, check valve in lift station not working, heavy equipment traffic compacting soil in the area, low flow rate, lack of use.
Remedies: Examine the check valve, keep heavy equipment such as cars off the area, increase water usage, have a friend run water while you are away on vacation, operate the septic tank as a holding tank, and do not use antifreeze. Contact the local health department for guidance.
# Chapter 11: Electricity

# Introduction
Two basic codes concerned with residential wiring are important to the housing inspector. The first is the local electrical code, whose purpose is to safeguard persons, as well as buildings and their contents, from hazards arising from the use of electricity for light, heat, and power. The electrical code contains basic minimum provisions considered necessary for safety. Compliance with this code and proper maintenance will result in an installation essentially free from hazards, but not necessarily efficient, convenient, or adequate for good service or future expansion. Most local electrical codes are modeled after the National Electrical Code, published by the National Fire Protection Association (NFPA). Reference to the "code" in the remainder of this chapter will be to the National Electrical Code, unless specified otherwise.

# Definitions of Terms Related to Electricity
Ampere-The unit for measuring the intensity of flow of electricity. Its symbol is "I."
Bonding-Electrically connecting metal components to eliminate differences in electrical potential between them and to prevent components and piping systems from reaching an elevated voltage potential.
Circuit-The flow of electricity through two or more wires from the supply source to one or more outlets and back to the source.
Circuit breaker-A safety device used to break the flow of electricity by opening the circuit automatically in the event of overloading, or used to open or close the circuit manually.
Conductor-Any substance capable of conveying an electric current. In the home, copper wire is usually used.
- A bare conductor is one with no insulation or covering.
- A covered conductor is one covered with one or more layers of insulation.

An electrical installation that was safe and adequate under the provisions of the electrical code at the time of installation is not necessarily safe and adequate today. An example is the grounding of a home electrical system. In the past, electrical systems could be grounded to the home's plumbing system. Today, many plumbing systems are no longer constructed of conductive material but are made of plastic or polyvinyl chloride-based materials. The current recommendation for grounding a home electrical system is to use two 8-foot by 5/8-inch copper ground rods, spaced 6 feet apart and connected by a continuous (unbroken) piece of copper wire (the size of this wire corresponds to the size of the system main). It is also highly recommended that the system be grounded to the incoming water line, if it is conductive, or to the nearest conductive cold water supply line.

Hazards often occur because of overloaded wiring systems or usage not in conformity with the code. This happens when the initial wiring did not provide for later increases in the use of electricity. For this reason, it is recommended that initial installations be adequate and that reasonable provision be made for future increases in the use of electricity.

The other code that contains electrical provisions is the local housing code. It establishes minimum standards for artificial and natural lighting and ventilation, specifies the minimum number of electric outlets and lighting fixtures per room, and prohibits temporary wiring except under certain circumstances. In addition, the housing code usually requires that all components of an electrical system be installed and maintained in a safe condition to prevent fire or electric shock.

Voltage is a measure of the force at which electricity is delivered; it is similar to pressure in a water supply system. Current is measured in amperes and is the quantity of flow of electricity; it is similar to measuring water flow in gallons per second. A watt is equal to volts times amperes. It is a measure of how much power is flowing, and electricity is sold in quantities of kilowatt-hours.
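Because a watt is volts times amperes, current draw and energy cost follow from simple division and multiplication. In the sketch below, the 115-volt default comes from this chapter, while the example wattage and the price per kilowatt-hour are illustrative assumptions, not figures from the text.

```python
def amperes(watts: float, volts: float = 115.0) -> float:
    """Current drawn by a load: I = W / V."""
    return watts / volts

def monthly_kwh_cost(watts: float, hours_per_day: float,
                     dollars_per_kwh: float = 0.12, days: int = 30) -> float:
    """Energy cost for a load; electricity is billed in kilowatt-hours."""
    kwh = watts / 1000 * hours_per_day * days
    return kwh * dollars_per_kwh

print(round(amperes(1500), 1))             # a 1,500-W heater draws ~13.0 A at 115 V
print(round(monthly_kwh_cost(1500, 4), 2)) # ~$21.60/month at the assumed rate
```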
The earth, by virtue of moisture contained within the soil, serves as a very effective conductor. Therefore, in power transmission, instead of having both the hot and neutral wires carried by the transmission poles, one lead of the generator is connected to the ground, which serves as a conductor (Figure 11.1). All electrical utility wires carried by the transmission towers are considered hot, or charged. At the house, or point where the electricity is to be used, the circuit is completed by another connection to ground. The electric power utility provides a ground somewhere in its local distribution system; therefore, there is a ground wire in addition to the hot wires within the service drop. In Figure 11.1, this ground can be seen at the power pole that contains the step-down transformer.

In addition to the ground connection provided by the electric utility, every building is required to have an independent ground called a system ground. The system ground is a connection to ground from one of the current-carrying conductors of the electrical system. System grounding, applied to limit overvoltages in the event of a fault, provides personnel safety, provides a positive means of detecting and isolating ground faults, and improves service reliability. The system ground's main purpose is thus to protect the electrical system itself, and it offers only limited protection to the user. The system ground serves the same purpose as the power company's ground; however, it has a lower resistance because it is closer to the building. The equipment ground, by contrast, protects people from potential harm during the use of electrical equipment, for example, when damaged insulation energizes the case or cabinet of an appliance.

# Electric Service Entrance
Service Drop. To prevent accidental contact by people, the entrance head (Figure 11.2) should be attached to the building at least 10 feet above ground. The conductor should clear all roofs by at least 8 feet and residential driveways by 12 feet. For public streets, alleys, roads, and driveways on other than residential property, the clearance must be 18 feet (the sketch below encodes these minimums). The wires of the conductor should be of sufficient size to carry the load and not smaller than No. 8 copper wire or equivalent. For connecting wire from the entrance head to the service drop wires, the code requires that the service entrance conductors be installed either below the level of the service head or below the termination of the service entrance cable sheath. Drip loops must be formed on individual conductors to prevent water from entering the electric service system. The wires that form the entrance cable should extend 36 inches from the entrance head to provide sufficient length to connect the service drop wires to the building with insulators. The entrance cable may be a special type of armored outdoor cable, or it may be enclosed in a conduit.

The electric power meter may be located either inside or outside the building. In either instance, the meter must be located before the main power disconnect. Figure 11.3 shows an armored cable service entrance with a fuse system; newer construction will have circuit breakers, as shown in Figure 11.4. The armored cable is anchored to the building with metal straps spaced every 4 feet. The cable is run down the wall and through a hole drilled through the building, and is then connected to the service panel, which should be located within 1 foot of where the cable enters the building. The ground wire need not be insulated; it may be either solid or stranded copper, or a material with an equivalent resistance. Figure 11.5 shows the use of thin-wall conduit in a service entrance.
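The service-drop clearances above reduce to a table of minimums. The sketch below encodes only the figures given in this text; the category names are illustrative, and the NEC and local codes govern actual installations.

```python
# Minimum service-drop clearances (feet) from the text above.
MIN_CLEARANCE_FT = {
    "attachment_above_ground": 10,
    "over_roof": 8,
    "over_residential_driveway": 12,
    "over_public_street_or_nonresidential_driveway": 18,
}

def clearance_ok(location: str, measured_ft: float) -> bool:
    """True if a measured clearance meets the minimum listed above."""
    return measured_ft >= MIN_CLEARANCE_FT[location]

print(clearance_ok("over_residential_driveway", 11))  # False: below the 12-ft minimum
print(clearance_ok("over_roof", 9))                   # True: meets the 8-ft minimum
```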
The ground wire need not be insulated; it may be either solid or stranded copper, or a material with an equivalent resistance. Figure 11.5 shows the use of thin-wall conduit in a service entrance.

# Underground Service

When wires are run underground, they must be protected from moisture and physical damage. The opening in the building foundation where the underground service enters the building must be moisture proof. Refer to local codes for information about allowable materials for this type of service entrance.

# Electric Meter

The electric meter (Figure 11.6) may be located inside or outside the building. The meter itself is weatherproof and is plugged into a weatherproof socket. The electric power company furnishes the meter; the socket may or may not be furnished by the power company.

# Grounding

The system ground consists of grounding the incoming neutral wire and the neutral wire of the branch circuits. The equipment ground consists of grounding the metal parts of the service entrance, such as the service switch, as well as the service entrance conduit, armor, or cable. Poor grounding at any point can result in a person providing a more effective route to ground than the intended ground, resulting in electrocution. This can occur when damaged insulation allows electricity to flow into the case or cabinet of an appliance.

The system must be grounded by two 8-foot by 5/8-inch copper ground rods driven into the ground and connected by a continuous (unbroken) piece of copper wire (the size of this wire corresponds to the size of the system main). It is highly recommended that the system also be grounded to the incoming water line, or to the nearest cold water supply line, if it is metal. The usual ground connection is to a conductive water pipe of the city water system. The connection should be made to the street side of the water meter, as shown in Figure 11.7. If the water meter is located near the street curb, the ground connection should be made to the cold water pipe as close as possible to where it enters the building. It is not unusual for a water meter to be removed from the building for service. If the ground connection is made at a point in the water piping system on the building side of the water meter, the ground circuit will be broken on removal of the meter. This broken ground circuit is a shock hazard if both sides of the water meter connections are touched simultaneously. Local or state codes should be checked to determine compliance with correct grounding protocols.

Increasingly, the connections between water meters and pipes are electrically very poor. In this case, if the ground connection is made on the building side of the water meter, there may not be an effective ground. To prevent both of the aforementioned situations, the code requires effective bonding by a properly sized jumper wire around any equipment that is likely to be disconnected for repairs or replacement. Often, the house ground will have been disconnected; therefore, the housing inspector should always check the house ground to see that it is properly connected. Figure 11.8 shows a typical grounding scheme at the service box of a residence. In this figure, only the grounded neutral wires are shown. The neutral strap is a conductive bare metal strip that is riveted directly to the service box. This conductive strip forms a collective ground that joins the ground wires from the service entrance, branch circuits, and house ground.
Follow these key grounding points:
- Use two metal rods driven 8 feet into the ground.
- Bond around water heaters and filters to ensure grounding.
- If water pipes are used, they must be supplemented with a second ground.
- Ground rods must be driven to full depth.
- If ground rod resistance exceeds 25 ohms, install two rods a minimum of 6 feet apart.
- When properly grounded, the metal frame of a building can be used as a ground point.
- Do not use underground gas lines as a ground.
- Provide external grounds to other systems, such as satellite, telephone, and other services, to further protect the electrical system from surges.
- If the water service pipes to the home are not metal, or if all of the service components in the home are not metal, then the water system cannot play a role in grounding.

Bonding is necessary to provide a route for electricity to flow around isolated elements of a piping system, ensuring that electrical potential is minimized both to protect the system from corrosion and to protect individuals from electrical shock.

# Two- or Three-wire Electric Services

One of the wires in every electrical service installation is supposed to be grounded. This neutral wire should always be white. The hot wires are usually black, red, or some other color, but never white. The potential difference, or voltage, between the hot wires and the ground or neutral wire of a normal residential electrical system is 115 volts. Thus, where there is a two-wire installation (one hot and one neutral), only 115 volts are available. When three wires are installed (two hot and one neutral), either 115 or 230 volts are available. In a three-wire system, the voltage between the neutral and either of the hot wires is 115 volts; between the two hot wires, it is 230 volts (Figure 11.9). The major advantage of a three-wire system is that it permits the operation of heavy electrical equipment such as clothes dryers, cooking ranges, and air conditioners, the majority of which require 230-volt circuits. In addition, the three-wire system is split at the service panel into two 115-volt systems to supply power for small appliances and electric lights. The result is a doubling of the number of circuits and, possibly, a corresponding increase in the number of branch circuits, which reduces the probability of fire caused by overloaded circuits when electrical demands are high.

# Residential Wiring Adequacy

The use of electricity in the home has risen sharply since the 1930s, but many homeowners have failed to repair or improve their wiring to keep it safe and up to date. In the 1970s, the code recommended that the main distribution panel in a home have a minimum capacity of 100 amperes. Because the number of appliances that use electricity has continued to grow, so has the size of recommended panels. For a normal house (2,500 to 3,500 square feet), a 200-ampere panel is recommended. The panel should be of the breaker type, with a main breaker for the entire system (Figure 11.4). Fuse boxes are not recommended for new housing. This size service is sufficient in a one-family house or dwelling unit to provide safe and adequate electricity for the lighting, refrigerator, iron, and an 8,000-watt cooking range, plus other appliances requiring a total of up to 10,000 watts. Some older homes have only a 60-ampere, three-wire service (Figure 11.10). It is recommended that these homes be rewired to at least the 200-ampere minimum recommended in the code.
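The difference between these service sizes is easiest to see as available wattage (volts times amperes). The sketch below is a rough illustration only; actual usable capacity depends on code demand factors:

```python
# Approximate available power for the service sizes discussed above.
# Illustrative sketch; real sizing must apply code demand factors.
def service_capacity_watts(amperes: float, volts: float) -> float:
    return amperes * volts

for amps, volts in ((30, 115), (60, 230), (100, 230), (200, 230)):
    # the old 30-ampere service is two-wire, so only 115 volts are available
    print(f"{amps}-ampere service: {service_capacity_watts(amps, volts):,.0f} watts")
# 30 A: 3,450 W; 60 A: 13,800 W; 100 A: 23,000 W; 200 A: 46,000 W
```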
The 60-amp service is safely capable of supplying current for only lighting and portable appliances, plus either a cooking range and regular dryer (4,500 watts) or an electric hot water heater (2,500 watts); it cannot handle additional major appliances. Other older homes have only a 30-ampere, 115-volt, two-wire service (Figure 11.11). This system can safely handle only a limited amount of lighting, a few minor appliances, and no major appliances. This size service is therefore substandard in terms of the modern household's needs for electricity. Furthermore, it is a fire hazard and a threat to the safety of the home and its occupants.

# Wire Sizes and Types

Aluminum wiring, used in some homes from the mid-1960s to the early 1970s, is a potential fire hazard. According to the U.S. Consumer Product Safety Commission (CPSC), fires and even deaths have been caused by this hazard. Problems due to expansion can cause overheating at connections between the wire and devices (switches and outlets) or at splices. CPSC research shows that homes wired with aluminum wire manufactured before 1972 are 55 times more likely to have one or more connections reach fire hazard conditions than are homes wired with copper. Post-1972 aluminum wire is also a concern; the introduction of aluminum wire alloys around 1972 did not solve most of the connection failure problems. Aluminum wiring is still permitted and used for certain applications, including residential service entrance wiring and single-purpose higher-amperage circuits, such as 240-volt air conditioning or electric range circuits.

# Reducing Risk

Only two remedies for aluminum wiring have been recommended by the CPSC: discontinued use of the aluminum circuit, or the less costly option of adding copper connecting "pigtail" wires between the aluminum wire and the wired device (receptacle, switch, or other device). The pigtail connection must be made using only a special connector and special crimping tool licensed by the AMP Corporation. Emergency temporary repairs necessary to keep an essential circuit in service might be possible following other procedures described by the CPSC and in accordance with local electrical codes.

# Wire Sizes

Electric power actually flows along the surface of the wire. It flows with relative ease (little resistance) in some materials, such as copper and aluminum, and with a substantial amount of resistance in iron. If iron wire were used, it would have to be 10 times as large as copper wire to be as effective in conducting electricity. In fine electronics, gold is the preferred conductor because of its resistance to corrosion and its very high conductivity.

Electricity is the movement of electrons from an area of higher potential to one of lower potential. An analogy to how electricity flows is the flow of water along the path of least resistance or down a hill. All it takes to create the potential for electricity is a collection of electrons and a pathway for them to flow along a conductor to an area of lesser concentration. When a person walks across a nylon carpet in times of low atmospheric humidity, his or her body will often collect electrons and serve as a capacitor (a storage container for electrons). When that person nears a grounding source, the electrons will often jump from a finger to the ground, creating a spark and a small shock. A number preceded by the letters AWG (American Wire Gauge) indicates copper wire size.
As the AWG number of the wire becomes smaller, the size and current capacity of the wire increase. AWG 14 is most commonly found in older residential branch circuits. AWG 14 wire should be used only in a branch circuit with a 15-ampere capacity and no more than a 1,500-watt demand. Wire sizes AWG 16, 18, and 20 are progressively smaller than AWG 14 and are used for extension wires or low-voltage systems.

Wire of the correct size must be used for two reasons: current capacity and voltage drop or loss. When current flows through a wire, it creates heat; the greater the flow, the greater the amount of heat generated. Because heat generation is proportional to the square of the current, doubling the amperes without changing the wire size increases the amount of heat four times. The heat is electric energy (electrons) that has been converted into heat energy by the resistance of the wire. The heat created by the coils in a toaster is an example of designed resistance to create heat. Most heat developed by an electrical conductor is wasted; therefore, the electric energy used to generate it is also wasted. If the amount of heat generated by the flow of current through a wire becomes excessive, a fire may result. Therefore, the code sets the maximum permissible current that may flow through a wire of a given type and size. The blue box provides examples of current capacities for copper wire of various sizes. In addition to generating heat, attempting to force more current through a wire than it is designed to carry causes a reduction in voltage. Certain appliances, such as induction-type electric motors, may be damaged if operated at too low a voltage.

# Wire Types

All wires must be marked to indicate the maximum working voltage, the proper type letter or letters for the type of wire specified in the code, the manufacturer's name or trademark, and the AWG size or circular-mil area (Figure 11.12). A variety of wire types can be used for a wide range of temperature and moisture conditions. The code should be consulted to determine the proper wire for specific conditions.

[Blue box: Maximum Current Recommended for AWG Wire Size]

# Types of Cable

Nonmetallic sheathed cable consists of wires wrapped in plastic and then a paper layer, followed by another spiral layer of paper, and enclosed in a fabric braid treated with moisture- and fire-resistant compounds. Figure 11.12 shows this type of cable, which is often marketed under the name Romex. This type of cable can be used only indoors and in permanently dry locations. Romex-type wiring is normally used in residential construction; however, when cost permits, a conduit-based system is recommended.

Armored cable is commonly known by the BX or Flexsteel trade names. The wires are wrapped in a tough paper and covered with a strong, spiral, flexible steel armor. This type of cable is shown in Figure 11.13 and may be used only in permanently dry indoor locations. Armored cable must be supported by a strap or staple every 6 feet and within 24 inches of every switch or junction box, except for concealed runs in old work where it is impossible to mount straps. Cables are also available with other outer coatings of metals, such as copper, bronze, and aluminum, for use in a variety of conditions.

# Flexible Cords

CPSC estimates that about 4,000 injuries associated with electric extension cords are treated in hospital emergency rooms each year. About half of the injuries involve fractures, lacerations, contusions, or sprains from people tripping over extension cords.
Thirteen percent of the injuries involve children younger than 5 years of age; electrical burns to the mouth account for half the injuries to young children. CPSC also estimates that about 3,300 residential fires originate in extension cords each year, killing 50 people and injuring about 270 others. The most frequent causes of such fires are short circuits, overloading, and damage to or misuse of extension cords.

# The Problem

Following are CPSC investigations of injuries that illustrate the major injury patterns associated with extension cords: children putting extension cords in their mouths, overloaded cords, worn or damaged cords, and tripping over cords:
- A 15-month-old girl put an extension cord in her mouth and suffered an electrical burn. She required surgery.
- Two young children were injured in a fire caused by an overloaded extension cord in their family's home. A lamp, TV set, and electric heater had been plugged into a single, light-duty extension cord.
- A 65-year-old woman was treated for a fractured ankle after tripping over an extension cord.

# The Standards

The code says that many cord-connected appliances should be equipped with polarized grounding plugs. Polarized plugs can be inserted into an outlet only one way because one blade is slightly wider than the other. Polarization and grounding ensure that parts of an appliance that could pose a higher risk of electric shock if they become live are instead connected to the neutral, or grounded, side of the circuit. Such electrical products should be used only with polarized or grounded extension cords.

Voluntary industry safety standards, including those of Underwriters Laboratories (UL), now require that general-use extension cords have safety closures, warning labels, rating information about the electrical current, and other features for the protection of children and other consumers. In addition, UL-listed extension cords now must be constructed with 16-gauge or larger wire or be equipped with integral fuses. The 16-gauge wire is rated to carry 13 amperes (up to 1,560 watts), as compared with the formerly used 18-gauge cords, which were rated for 10 amperes (up to 1,200 watts).

# Safety Suggestions

The following are CPSC recommendations for purchasing and safely using extension cords:
- Use extension cords only when necessary and only on a temporary basis.
- Use polarized extension cords with polarized appliances.
- Make sure cords do not dangle from counters or tabletops, where they can be pulled down or tripped over.
- Replace cracked or worn extension cords with new 16-gauge cords that have the listing of a nationally recognized testing laboratory, safety closures, and other safety features.
- With cords lacking safety closures, cover any unused outlets with electrical tape or with plastic caps to prevent the chance of a child making contact with the live circuit.
- Insert plugs fully so that no part of the prongs is exposed when an extension cord is in use.
- When disconnecting cords, pull the plug rather than the cord itself.
- Teach children not to play with plugs and outlets.
- Use only three-wire extension cords for appliances with three-prong plugs. Never remove the third (round or U-shaped) prong, which is a safety feature designed to reduce the risk for shock and electrocution.
- In locations where furniture or beds may be pushed against an extension cord where the cord joins the plug, use a special angle extension cord specifically designed for these instances.
- Check the plug and the body of the extension cord while the cord is in use. Some warming of these plastic parts is expected when cords are being used at their maximum rating; however, if the cord feels hot or if there is a softening of the plastic, this is a warning that the plug wires or connections are failing and that the extension cord should be discarded and replaced.
- Never use an extension cord while it is coiled or looped. Never cover any part of an extension cord with newspapers, clothing, rugs, or any other objects while the cord is in use. Never place an extension cord where it is likely to be damaged by heavy furniture or foot traffic.
- Do not use staples or nails to attach extension cords to a baseboard or to another surface. This could damage the cord and present a shock or fire hazard.
- Do not overload extension cords by plugging in appliances that draw a total of more watts than the rating of the cord.
- Use special heavy-duty extension cords for high-wattage appliances such as air conditioners, portable electric heaters, and freezers.
- When using outdoor tools and appliances, use only extension cords labeled for outdoor use.

# Wiring

# Open Wiring

Open wiring is a wiring method using knobs, nonmetallic tubes, cleats, and flexible tubing for the protection and support of insulated conductors run in or on buildings and not concealed by the structure. The term "open wiring" does not mean exposed, bare wiring. In dry locations, when not exposed to severe physical damage, conductors may be separately encased in flexible tubing. Tubing should be in continuous lengths not exceeding 15 feet and secured to the surface by straps not more than 4½ feet apart. Tubing should be separated from other conductors by at least 2½ inches, with a permanently maintained airspace between the conductors and any pipes they cross.

# Concealed Knob and Tube Wiring

Concealed knob and tube wiring is a wiring method using knobs, tubes, and flexible nonmetallic tubing for the protection and support of insulated wires concealed in the hollow spaces of walls and ceilings of buildings. This wiring method is similar to open wiring and, like open wiring, is usually found only in older buildings.

# Electric Service Panel

The service switch is a main switch that disconnects the entire electrical system at one time. The main fuses or circuit breakers are usually located within the service switch box. The branch circuit fuses or circuit breakers may also be located within this box. According to the code, the switch must be externally operable. This condition is fulfilled if the switch can be operated without exposing the operator to electrically active parts. Figure 11.14 shows a 200-ampere service box. Figure 11.15 shows an external "hinged switch" power shutoff installed on the outside of a home. Most older homes do not have hinged switches. Instead, the main fuse is mounted on a small insulated block that can be pulled out of the switch; when this block is removed, the circuit is broken. In some installations, the service switch is a "solid neutral" switch, meaning that neither the switch nor a fuse breaks the neutral wire in the switch. When circuit breakers are used in homes instead of fuses, main circuit breakers may or may not be required. If it takes more than six movements of the hand to open all the branch-circuit breakers, a main breaker, switch, or fuse is required ahead of the branch-circuit breakers.
Thus, a house with seven or more branch circuits requires a separate disconnecting means or a main circuit breaker ahead of the branch-circuit breakers.

# Over-current Devices

The amperage (current flow) in any wire is limited to the maximum permitted by using an over-current device of the size specified by the code. Four types of over-current devices are common: circuit breakers, ground fault circuit interrupters (GFCIs), arc-fault circuit interrupters (AFCIs), and fuses. The over-current device must be rated at a capacity equal to or lower than that of the wire of the circuit it protects.

# Circuit Breakers (Fuseless Service Panels)

A circuit breaker looks something like an ordinary electric light switch. Figure 11.14 shows the service box in a 200-ampere fuseless system, and Figure 11.15 shows a service switch. There is a handle that may be used to turn power on or off, and inside is a simple mechanism that, in case of a circuit overload, trips the switch and breaks the circuit. The circuit breaker may be reset by turning the switch to off and then back to the on position. A circuit breaker is capable of taking harmless short-period overloads (such as the heavy initial current required to start a washing machine or air conditioner) without tripping, but protects against prolonged overloads. After the cause of trouble has been located and corrected, power is easily restored.

Fuseless service panels, or breaker boxes, are usually divided into the following circuits:
- A 100-ampere or larger main circuit breaker that shuts off all power.
- A 40-ampere circuit for an appliance such as an electric cooking range.
- A 30-ampere circuit for a clothes dryer, water heater, heat pump, or central air conditioning.
- A 20-ampere circuit for small appliances and power tools.
- A 15-ampere circuit for general-purpose lighting, TVs, VCRs, computers, and vacuum cleaners.
- Space for new circuits to be added if needed for future use.

# Ground Fault Circuit Interrupters

Unlike circuit breakers and fuses, GFCIs are installed to protect the user from electrocution. These devices provide protection against electrical shock and electrocution from ground faults or from contact with live parts by a grounded individual. They constantly monitor the electrical current flowing into a product. If the electricity flowing through the product differs even slightly from that returning, the GFCI will quickly shut off the current. GFCIs detect amounts of electricity much smaller than those required for a fuse or circuit breaker to activate and shut off the circuit.

UL lists three types of GFCIs designed for home use that are readily available, fairly inexpensive, and simple to install:
- Wall receptacle GFCI-This type of GFCI (Figure 11.16) is used in place of a standard receptacle found throughout the house. It fits into a standard outlet box and protects against ground faults whenever an electrical product is plugged into the outlet. If strategically located, it will also provide protection to downstream receptacles.
- Circuit breaker GFCI-In homes equipped with circuit breakers, this type of GFCI may be installed in a panel box to protect selected circuits. A circuit breaker GFCI serves a dual purpose: it shuts off electricity in the event of a ground fault and will also trip when a short circuit or an overload occurs.
- Portable GFCI-A portable GFCI requires no special knowledge or equipment to install.
One type contains the GFCI circuitry in a self-contained enclosure with plug blades in the back and receptacle slots in the front. It can be plugged into a receptacle, and the electrical product is then plugged into the GFCI. Another type of portable GFCI is an extension cord combined with a GFCI; it adds flexibility in using receptacles that are not protected by GFCIs.

Once a GFCI is installed, it must be checked monthly to determine that it is operating properly. Pressing the test button checks the unit; the GFCI should disconnect the power to that outlet. Pressing the reset button reconnects the power. If the GFCI does not disconnect the power, have it checked by a qualified, certified electrician. GFCIs should be installed on circuits in the following areas: garages, bathrooms, kitchens, crawl spaces, unfinished basements, hot tubs and spas, pool electronics, and exterior outlets. However, they are not required on single outlets that serve major appliances.

# Arc-fault Circuit Interrupters

Arc-fault circuit interrupters (AFCIs) are newer devices intended to provide fire protection by opening the circuit if an arcing fault is detected. An arcing fault is an electric spark or hot plasma field that extends from the hot wire to a ground. An arc is a luminous discharge of electricity across an insulating medium, or simply a spark across an air gap. Arcs occur every day in homes: an arc occurs inside the switch when a light is turned on, toy racecars and trains create arcs, and the motors inside hair dryers and power drills have tiny arcs. All of these are controlled arcs. It is the uncontrolled or nondesigned arc that is a serious fire hazard in the home. The arc-fault circuit interrupter looks like the GFCI unit (Figure 11.17), but it is not designed to protect against electric shock. Because most electrical wiring in a home is hidden from view, many arc faults go undetected and continue arcing indefinitely; left in this arcing state, the potential for fire increases. It is estimated that about one third of home electrical fires are caused by arcing faults. Normal fuses and circuit breakers are not capable of detecting arc faults and therefore will not open the circuit and stop the flow of electricity.

# Fused Ampere Service Panel (Fuse Box)

Fuse-type panel boxes are generally found in older homes. They are as safe and adequate as a circuit breaker of equivalent capacity, provided fuses of the proper size are used. A fuse, like a circuit breaker, is designed to protect a circuit against overloading and short circuits, and it does so in two ways. When a fuse is blown by a short circuit, the metal strip is instantly heated to an extremely high temperature, and this heat causes it to vaporize; a fuse blown by a short circuit may be easily recognized because the window of the fuse usually becomes discolored. In a fuse blown by an overload, the metal strip melts at its weakest point, breaking the flow of current to the load; in this case, the window of the fuse remains clear, so a blown fuse caused by an overload may also be easily recognized. Sometimes, although a fuse has not blown, the bottom of the fuse may be severely discolored and pitted. This indicates a loose connection because the fuse was not screwed in properly. It is critical to check that all fuses are properly rated for the designed amperage; installing a fuse of higher amperage than recommended presents a significant fire hazard.

Generally, all fused panel boxes are wired similarly for two- and three-wire systems. In a two-wire-circuit panel box, the black or red hot wire is connected to a terminal of the main disconnect, and the white or light gray neutral wire is connected to the neutral strip, which is then grounded to the pipe on the street side of the water meter. In a three-wire system, the black and red hot wires are connected to separate terminals of the main disconnect, and the neutral wire is grounded the same as for a two-wire system. Below each fuse is a terminal to which a black or red wire is connected. The white or light gray neutral wires are then connected to the neutral strip. Each fuse indicates a separate circuit (Figure 11.18).
- Nontamperable fuses-All ordinary plug fuses have the same diameter and physical appearance, regardless of their current capacity, whereas nontamperable fuses are sized by amperage load. Thus, with regular fuses, if a circuit designed for a 15-ampere fuse is overloaded so that the 15-ampere fuse blows, nothing will prevent a person from replacing the 15-ampere fuse with a 20- or 30-ampere fuse, which may not blow. If a circuit wired with 14-gauge wire (current capacity 15 amperes) is fused with a 20- or 30-ampere fuse and an overload develops, more current could pass through the circuit than the 14-gauge wire is safely capable of carrying. The result would be a heating of the wire and a potential fire.
- Type-S fuses-Type-S fuses have different lengths and diameters of threads for each amperage capacity. An adapter is first inserted into the ordinary fuse holder, which adapts the holder to only one capacity of fuse. Once the adapter is inserted, it cannot be removed.
- Cartridge fuses-A cartridge fuse protects an electric circuit in the same manner as the ordinary plug fuse already described. Cartridge fuses are often used as main fuses.

# Electric Circuits

An electric circuit in good repair carries electricity through two or three wires from the source of supply to an outlet and back to the source. A branch circuit is an electric circuit that supplies current to a limited number of outlets and fixtures. A residence generally has many branch circuits, each protected against short circuits and overloads by a 15- or 20-ampere fuse or circuit breaker. The number of outlets per branch circuit varies from building to building. The code requires enough light circuits so that 3 watts of power are available for each square foot of floor area in a house. A circuit wired with 14-gauge wire and protected by a 15-ampere over-current protection device provides 15 × 115 = 1,725 watts; each such circuit is therefore enough for 1,725 ÷ 3 = 575 square feet. Note that 575 is a minimum figure; if future use is considered, 500 or even 400 square feet per branch circuit should be used.

General-purpose circuits provide electric power for lighting, radio, TV, and small portable appliances. However, the larger electric appliances usually found in the kitchen consume more power and must have their own special circuits. Section 220-3b of the code requires two special circuits to serve only appliance outlets in the kitchen, laundry, pantry, family room, dining room, and breakfast room. Both circuits must be extended to the kitchen; either one or both of these circuits may serve the other rooms. No lighting outlets may be connected to these circuits, and they must be wired with 12-gauge wire and protected by a 20-ampere over-current device.
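The branch-circuit arithmetic above reduces to a short calculation. Here is a minimal sketch of it (illustrative only; the 3-watts-per-square-foot figure is the code minimum cited above):

```python
# Branch-circuit capacity and the floor area one lighting circuit can
# serve, per the 3-watts-per-square-foot code minimum discussed above.
def circuit_watts(amperes: float, volts: float = 115) -> float:
    return amperes * volts

def max_area_served_sqft(amperes: float, watts_per_sqft: float = 3) -> float:
    return circuit_watts(amperes) / watts_per_sqft

print(circuit_watts(15))          # 1725.0 watts on a 15-ampere lighting circuit
print(max_area_served_sqft(15))   # 575.0 square feet, the minimum figure cited
```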
Each special appliance circuit will have a capacity of 20 × 115 = 2,300 watts, which is not excessive when toasters often require more than 1,600 watts. It is customary to provide a circuit for each of the following appliances: range, water heater, washing machine, clothes dryer, garbage disposal, dishwasher, furnace, water pump, air conditioner, heat pump, and air compressor. These circuits may be either 115 volts or 230 volts, depending on the particular appliance or motor installed.

# Outlet Switches and Junction Boxes

The code requires that every switch, outlet, and joint in wire or cable be housed in a box, and every fixture must be mounted on a box. Most boxes are made of plastic or of metal with a galvanized coating. When a cable of any style is used for wiring, the code requires that it be securely anchored with a connector to each box it enters.

# Grounding Outlets

An electrical appliance may appear to be in good repair and yet be a danger to the user. Older portable electric drills consist of an electric motor inside a metal casing; when the switch is depressed, the current flows to the motor and the drill rotates. As a result of wear, however, the insulation on the wire inside the drill may deteriorate and allow the hot side of the power cord to come into contact with the metal casing. This will not affect the operation of the drill. A person fully clothed using the drill in the living room, which has a dry floor, will not receive a shock even though he or she is in contact with the electrified drill case, because the dry floor keeps the operator's body from being grounded. If standing on a wet basement floor, however, the operator's body might be grounded, and when the electrified drill case is touched, current will pass through the operator's body.

To protect people from electrocution, the drill case is usually connected to the system ground by means of a wire called an appliance ground. In this instance, as the drill is plugged in, current will flow between the shorted hot wire and the drill case and cause the over-current device to break the circuit; the appliance ground has thus protected the human operator. Newer appliances and tools are equipped with two-prong polarized plugs, as discussed in the standards section of this manual.

The appliance ground (Figure 11.19) is the third wire found on many appliances. The appliance ground is of no use unless the outlet into which the appliance is plugged is grounded. The outlet is grounded by being in physical contact with a grounded outlet box; the outlet box, in turn, is grounded by having a third ground wire, or a grounded conduit, as part of the circuit wiring. All new buildings are required to have grounded outlets.

A two-lead circuit tester can be used to test the outlet. The circuit tester lights when both of its leads are plugged into the two elongated parallel openings of the outlet. In addition, the tester lights when one lead is plugged into the round third opening and the other is plugged into the hot side of the outlet. Most problems can be identified using inexpensive testers resembling a plug with three leads, which can be purchased in most hardware stores at very reasonable prices. If a conventional two-opening outlet is used, it may be grounded if the screw that holds the outlet cover plate is electrically connected to the third-wire ground. The tester should light when one lead is in contact with a clean, paint-free metal outlet cover plate screw and the other with the hot side of the outlet. If the tester does not light, the outlet is not grounded.
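The outlet tests just described amount to a simple decision rule. The sketch below encodes that logic for illustration only; the parameter names (whether the tester lights across each pair of openings) are assumptions, not terms from the code:

```python
# Decision logic for the two-lead outlet test described above.
# Each argument records whether the tester lights across that pair
# of openings; the names are illustrative.
def outlet_status(lights_hot_to_neutral: bool, lights_hot_to_ground: bool) -> str:
    if not lights_hot_to_neutral:
        return "dead outlet or miswired - investigate further"
    if lights_hot_to_ground:
        return "live and grounded"
    return "live but NOT grounded"

print(outlet_status(True, True))    # live and grounded
print(outlet_status(True, False))   # live but NOT grounded
```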
If a two-opening outlet is grounded, it may be adapted for use by a three-wire appliance by using an adapter. The loose-wire portion or screw tab of the adapter should be secured behind the metal screw of the outlet cover plate. Many appliances, such as electric shavers and some new hand tools, are double insulated and are safe without a third ground wire.

# Polarized Plugs and Connectors

Plugs are polarized or unpolarized. Polarization helps reduce the potential for shock. Consumers can easily identify polarized plugs: one blade-the neutral blade-is slightly wider than the other. Three-conductor plugs are automatically polarized because they can be inserted only one way. Polarized plugs are used to connect the most exposed parts of an appliance to the grounded (neutral) wire so that if you are touching a ground (such as a pipe, bathtub, or faucet) and an exposed part of an appliance (the case, the threaded part of a light bulb socket, etc.), you will not get an electrical shock. Many appliances, such as electric drills, are doubly insulated, so the probability of any exposed part of the appliance being connected, by a short or other problem in the appliance, to either wire is very small. Such devices often use unpolarized plugs, where the two prongs of the plug are identical.

# Common Electrical Violations

The most obvious things that a housing inspector must check are the power supply; the type, location, and condition of the wiring; and the number and condition of the wall outlets or ceiling fixtures required by the local code. In making an investigation, the following considerations will serve as useful guides.
- Power supply-Where is it, is it grounded properly, and is it at least of the minimum capacity required to supply current safely for lighting and the major and minor appliances in the dwelling?
- Panel box covers or doors-These should be accessible only from the front and should be sealed in such a way that they can be operated safely, without danger of contact with live or exposed parts of the wiring system.
- Switch, outlet, and junction boxes-These also must be covered to protect against the danger of electric shock.
- Frayed or bare wires-These are usually the result of long use and the drying out and cracking of the insulation, which leave the wires exposed, or of constant friction and rough handling of the wire, which cause it to fray or become bare. Wiring in this condition constitutes a safety hazard, and correction of such defects should be ordered immediately.
- Electric cords under rugs or other floor coverings-Putting electric cords in locations such as these is prohibited because of the potential fire hazard caused by continuing contact between these heat-bearing cords and flammable floor coverings. Direct the occupant to shift the cords to a safe location, explain why, and make sure it is done before you leave.
- Ground fault circuit interrupters-All bathroom, kitchen, and workroom outlets-where the shock hazard is great-should be GFCI outlets. Check for missing or nonfunctioning GFCI outlets.
- Bathroom lighting-Bathrooms should include at least one permanently installed ceiling or wall light fixture with a wall switch and plate located and maintained so that there is no danger of short circuiting from the use of other bathroom facilities or from splashing water. Fixtures and cover plates should be insulated or grounded.
- Lighting of public hallways, stairways, landings, and foyers-A common standard is sufficient lighting to provide 10 foot-candles on every part of these areas at all times; sufficient lighting means that people can clearly see their feet on all parts of the stairways and halls. Public halls and stairways in structures containing fewer than three dwelling units may instead be supplied with conveniently located light switches controlling an adequate lighting system that can be turned on when needed, rather than full-time lighting.
- Habitable room lighting-The standard here may be two floor convenience outlets (although floor outlets are dangerous unless protected by proper dust and water covers) or one convenience outlet and one wall or ceiling electric light fixture. This number is an absolute and often inadequate minimum, given the contemporary widespread use of electricity in the home. The minimum should be the number required to provide adequate lighting and power for the lighting and appliances normally used in each room.
- Octopus outlets or wiring-This term is applied to outlets into which plugs have been inserted to permit more than two lights or portable appliances, such as a TV, lamp, or radio, to be connected to the electrical system. The condition occurs where the number of outlets is insufficient to accommodate the normal use of the room. This practice overloads the circuit and is a potential source of fire.
- Outlet covers-Every outlet and receptacle must be covered by a protective plate to prevent contact of its wiring or terminals with the body, combustible objects, or water.

Following are six situations that can cause danger and should also be corrected.

# Excessive or Faulty Fusing

The wire's capacity must not be exceeded by the fuse or circuit breaker capacity, nor left unprotected by faulty fusing or circuit breakers. Fuses and circuit breakers are safety devices designed to "blow" to protect against overloading of the electrical system or one or more of its circuits. Pennies placed under fuses are there to bypass the fuse; they are illegal and must be removed. Overfusing is done for the same reason and can be prevented by installing modern fusestats, which prevent the use of any fuse of a higher amperage than can be handled by the circuit it serves.

# Cords Run Through Walls or Doorways and Hanging Cords or Wires

This makeshift installation is often the work of an unqualified handyman or do-it-yourself occupant. The inspector should check the local electrical code to determine the policy regarding this type of installation.

# Temporary Wiring

Temporary wiring should not be allowed, with the exception of extension cords that go directly from portable lights and electric fixtures to convenience outlets.

# Excessively Long Extension Cords

City code standards often limit the length of loose cords or extension lines to a maximum of 8 feet. This is necessary because cords that are too long will overheat if they are overloaded or if a short circuit develops, creating a fire hazard. This requirement does not apply to specially designed extension cords for operating portable tools and trouble lights.

# Dead or Dummy Outlets

These are sometimes installed to deceive the housing inspector. All outlets must be tested, or the occupants questioned, to determine whether the outlets are live and functioning properly. A dead outlet cannot be counted toward compliance with the code.
# Aluminum Wiring Inside the Home

Although aluminum is an excellent conductor, it tends to oxidize on the conducting surface. The nonconductive oxidized face of the conductor will arc to the remaining conductive surfaces, and this arcing can result in fire.

# Inspection Steps

The basic tools required by a housing inspector for making an electrical inspection are fuse and circuit testers and a flashlight. The first thing to remember is that you are in an unfamiliar house and do not know the layout. The second thing to remember is that you are dealing with electricity-take no chances. Go to the site of the ground, usually at the water meter, and check the ground. It should connect to the water line on the street side of the water meter or be equipped with a jumper wire. Do not touch any box or wire until you are sure of the ground.

To sum up, the housing inspector investigates specified electrical elements in a house to detect any obvious evidence of an insufficient power supply, to ensure the availability of adequate and safe lighting and electrical facilities, and to discover and have corrected any obvious hazards. Because electricity is a technical, complicated field, the housing inspector, when in doubt, should consult his or her supervisor. The inspector cannot, however, close the case until appropriate corrective action has been taken on all such referrals.

# Chapter 12: Heating, Ventilation, and Air Conditioning

# Definitions of Terms Related to HVAC Systems

Air duct-A formed conduit that carries warm or cold air from the furnace or air conditioner and back again.

Antiflooding control-A safety control that shuts off fuel and ignition when excessive fuel oil accumulates in the appliance.

Appliance:
- High heat-A unit that operates with a flue entrance temperature of combustion products above 1,500°F (820°C).
- Medium heat-Same as high heat, except above 600°F (320°C).
- Low heat-Same as high heat, except below 600°F (320°C).

Boiler:
- High pressure-A boiler furnishing steam at a pressure of 15 psi or more.
- Low pressure (hot water or steam)-A boiler furnishing steam at a pressure of less than 15 psi or hot water at not more than 30 psi.

Burner-A device that mixes fuel, air, and ignition in a combustion chamber.

# Introduction

The quote below provides a profound lesson in the need for housing to provide protection from both heat and cold.

"In many temperate countries, death rates during the winter season are 10%-25% higher than those in the summer." World Health Organization, Health Evidence Network, November 1, 2004.

This chapter provides a general overview of the heating and cooling of today's homes. Heating and cooling are not merely a matter of comfort, but of survival. Both very cold and very hot temperatures can threaten health. Excessive exposure to heat is referred to as heat stress; excessive exposure to cold is referred to as cold stress.

In a very hot environment, the most serious health risk is heat stroke. Heat stroke requires immediate medical attention and can be fatal or leave permanent damage. Heat stroke fatalities occur every summer. Heat exhaustion and fainting are less serious types of illness; typically they are not fatal, but they do interfere with a person's ability to work.

At very cold temperatures, the most serious concern is the risk for hypothermia, or dangerous overcooling of the body. Another serious effect of cold exposure is frostbite, or freezing of exposed extremities such as fingers, toes, nose, and ear lobes. Hypothermia can be fatal if immediate medical attention is not received.
Heat and cold are dangerous because the victims of heat stroke and hypothermia often do not notice the symptoms. This means that family, neighbors, and friends are essential for early recognition of the onset of these conditions; the affected individual's survival depends on others to identify the symptoms and to seek medical help. Family, neighbors, and friends must be particularly diligent during heat or cold waves to check on individuals who live alone.

Heat stroke signs and symptoms include sudden and severe fatigue, nausea, dizziness, rapid pulse, lightheadedness, confusion, unconsciousness, extremely high temperature, and a hot and dry skin surface. An individual who appears disoriented or confused, seems euphoric or unaccountably irritable, or suffers from malaise or flulike symptoms should be moved to a cool location, and medical advice should be sought immediately.

Although symptoms vary from person to person, the warning signs of heat exhaustion include confusion and profuse and prolonged sweating. The person should be removed from the heat, cooled, and heavily hydrated.

Carbon monoxide (CO) detector-A device used to detect CO, a colorless, odorless gas resulting from incomplete combustion of fuel (specific gravity of 0.97 vs. 1.00 for air). CO detectors should be placed on each floor of the structure at eye level and should have an audible alarm and, when possible, a digital readout.

Chimney-A vertical shaft containing one or more passageways.

Factory-built chimney-A tested and accredited flue for venting gas appliances, incinerators, and solid- or liquid-fuel-burning appliances.

Masonry chimney-A field-constructed chimney built of masonry and lined with terra cotta flue tile or firebrick.

Metal chimney-A field-constructed chimney of metal.

Chimney connector-A pipe or breeching that connects the heating appliance to the chimney.

Clearance-The distance separating the appliance, chimney connector, plenum, and flue from the nearest surface of combustible material.

Central cooling system-An electric- or gas-powered system containing an outside compressor, cooling coils, and a ducting system inside the structure designed to supply cool air uniformly throughout the structure.

Central heating system-A flue-connected boiler or furnace installed as an integral part of the structure and designed to supply heat adequately for the structure.

Collectors-The key components of active solar systems, designed to meet the specific temperature requirements and climate conditions for different end uses. Several types of solar collectors exist: flat-plate collectors, evacuated-tube collectors, concentrating collectors, and transpired air collectors.

Controls:
- High-low limit control-An automatic control that responds to liquid-level changes and shuts the system down if the limits are exceeded.
- Primary safety control-The automatic safety control intended to prevent abnormal discharge of fuel at the burner in case of ignition failure or flame failure.
- Combustion safety control-A primary safety control that responds to flame properties, sensing the presence of flame and causing fuel to be shut off in the event of flame failure.

Convector-A radiator that supplies a maximum amount of heat by convection, using many closely spaced metal fins fitted onto pipes that carry hot water or steam and thereby heat the circulating air.

Conversion-A boiler or furnace, flue connected, originally designed for solid fuel but converted for liquid or gas fuel.
Damper-A valve for regulating draft on coal-fired equipment, generally located on the exhaust side of the combustion chamber, usually in the chimney connector. Dampers are not allowed on oil- and gas-fired equipment.

Draft hood-A device placed in, and made a part of, the vent connector (chimney connector or smoke pipe) from an appliance, or in the appliance itself. The hood is designed to a) ensure the ready escape of the products of combustion in the event of no draft, back draft, or stoppage beyond the draft hood; b) prevent a backdraft from entering the appliance; and c) neutralize the effect of stack action of the chimney flue on appliance operation.

Draft regulator-A device that maintains a desired draft in oil-fired appliances by automatically reducing the chimney draft to the desired value. This device is sometimes referred to in the field as an air-balance, air-stat, flue velocity control, or barometric damper.

Fuel oil-A liquid mixture or compound derived from petroleum that does not emit flammable vapor below a temperature of 125°F (52°C).

Heat-The warming of a building, apartment, or room by a furnace or electrical stove.

Heating plant-The furnace, boiler, or other heating device used to generate steam, hot water, or hot air, which is then circulated through a distribution system. It typically uses coal, gas, oil, or wood as its source of heat.

Limit control-A thermostatic device installed in the duct system to shut off the supply of heat at a predetermined temperature of the circulated air.

Oil burner-A device for burning oil in heating appliances such as boilers, furnaces, water heaters, and ranges. A burner of this type may be a pressure-atomizing gun type, a horizontal or vertical rotary type, or a mechanical or natural draft-vaporizing type.

Oil stove-A flue-connected, self-contained, self-supporting oil-burning range or room heater equipped with an integral tank not exceeding 10 gallons; it may be designed to be connected to a separate oil tank.

Plenum chamber-An air compartment to which one or more distributing air ducts are connected.

Pump, automatic oil-A device that automatically pumps oil from the supply tank and delivers it in specific quantities to an oil-burning appliance. The pump is designed to stop pumping automatically if the oil supply line breaks.

Radiant heat-A method of heating a building by means of electric coils, hot water, or steam pipes installed in the floors, walls, or ceilings.

Register-A grille-covered opening in a floor, ceiling, or wall through which hot or cold air can be introduced into a room. It may or may not be arranged to permit closing the grille.

Room heater-A self-contained, freestanding heating appliance (space heater) intended for installation in the space being heated and not intended for duct connection.

Smoke detector-A device installed in several rooms of the structure to warn of the presence of smoke. It should provide an audible alarm. It can be battery powered, electric, or both. If the unit is battery powered, the batteries should be tested on a routine basis and changed once per year. If the unit is equipped with a 10-year battery, it is not necessary to replace the battery every year.

Tank-A separate tank connected, directly or by pump, to an oil-burning appliance.

Thimble-A metal or terra cotta lining for a chimney or furnace pipe.

Valve (main shut-off valve)-A manually operated valve in an oil line used to turn the oil supply to the burner on or off.
Vent system-The gas vent or chimney and vent connector, if used, assembled to form a continuous, unobstructed passageway from the gas appliance to the outside atmosphere to remove vent gases.

Warning signs of hypothermia include nausea, fatigue, dizziness, irritability, and euphoria. Individuals also experience pain in their extremities (e.g., hands, feet, ears) and severe shivering. People who exhibit these symptoms, particularly the elderly and young, should be moved to a heated shelter, and medical advice should be sought when appropriate.

The function of a heating, ventilation, and air conditioning (HVAC) system is to provide for more than human health and comfort. The HVAC system produces heat, cool air, and ventilation, and it helps control dust and moisture, which can lead to adverse health effects. The variables to be controlled are temperature, air quality, air motion, and relative humidity. Temperature should be maintained uniformly throughout the heated or cooled area; in practice, there is typically a 6°F to 10°F (3°C to 6°C) variation in room temperature from floor to ceiling. The adequacy of the HVAC system and the airtightness of the structure or room determine the degree of personal safety and comfort within the dwelling.

Gas, electricity, oil, coal, wood, and solar energy are the main energy sources for home heating and cooling. Heating systems commonly used are steam, hot water, and hot air. A housing inspector should have knowledge of the various heating fuels and systems to be able to determine their adequacy and safety in operation. To cover fully all aspects of the heating and cooling system, the entire area and the physical components of the system must be considered.

# Heating

Fifty-one percent of the homes in the United States are heated with natural gas, 30% are heated with electricity, and 9% with fuel oil. The remaining homes are heated with bottled fuel, wood, coal, geothermal, wind, or solar energy. Any home using combustion as a source of heating, cooling, or cooking, or that has an attached garage, should have appropriately located and maintained carbon monoxide (CO) detectors. According to U.S. Consumer Product Safety Commission (CPSC) data collected in 2000, CO kills 200 people and sends more than 10,000 to the hospital each year. The standard fuels for heating are discussed below.

# Standard Fuels

# Gas

More than 50% of American homes use gas fuel. Gas fuels are colorless gases. Some have a characteristic pungent odor; others are odorless and cannot be detected by smell. Although gas fuels are easily handled in heating equipment, their presence in air in appreciable quantities becomes a serious health hazard. Gases diffuse readily in the air, making explosive mixtures possible. A mixture of combustible gas and air that is ignited burns with such high velocity that an explosive force is created. Because of these characteristics of gas fuels, precautions must be taken to prevent leaks, and care must be exercised when gas-fired equipment is lit. Gas is broadly classified as natural or manufactured.
- Natural gas-This gas is a mixture of several combustible and inert gases. It is one of the richest gases and is obtained from wells ordinarily located in petroleum-producing areas. The heat content may vary from 700 to 1,300 British thermal units (BTUs) per cubic foot, with a generally accepted average figure of 1,000 BTUs per cubic foot.
Natural gases are distributed through pipelines to the point of use and are often mixed with manufactured gas to maintain a guaranteed BTU content.
- Manufactured gas-This gas, as distributed, is usually a combination of certain proportions of gases produced from coke, coal, and petroleum. Its BTU value per cubic foot is generally closely regulated, and costs are determined on a guaranteed BTU basis, usually 520 to 540 BTUs per cubic foot.
- Liquefied petroleum gas-The principal products of liquefied petroleum gas are butane and propane. Butane and propane are derived from natural gas or petroleum refinery gas and are chemically classified as hydrocarbon gases. Butane and propane are on the borderline between a liquid and a gaseous state: at ordinary atmospheric pressure, butane is a gas above 33°F (0.6°C), and propane is a gas above -42°F (-41°C). These gases are mixed to produce commercial gas suitable for various climatic conditions. Butane and propane are heavier than air. The heat content of butane is 3,274 BTUs per cubic foot, whereas that of propane is 2,519 BTUs per cubic foot.

Gas burners should be equipped with an automatic shutoff that operates if the flame fails. Shutoff valves should be located within 1 foot of the burner connection and on the output side of the meter.

Caution: Liquefied petroleum gas is heavier than air; therefore, the gas will accumulate at the bottom of confined areas. If a leak develops, care should be taken to ventilate the area before lighting the appliance.

# Electricity

Electricity has gained popularity for heating in many regions, particularly where costs are competitive with other sources of heat energy; usage increased from 2% of homes in 1960 to 30% in 2000. With an electric system, the housing inspector should rely mainly on the electrical inspector to determine proper installation. There are a few items, however, to check to ensure safe use of the equipment. Check that the units are approved by an accredited testing agency and installed according to the manufacturer's specifications. Most convector-type units must be installed at least 2 inches above floor level, not only to ensure that proper convection currents are established through the unit, but also to allow sufficient air insulation from any combustible flooring material. The housing inspector should check for curtains that extend too close to the unit or loose, long-pile rugs that are too close. A distance of 6 inches on the floor and 12 inches on the walls should separate rugs or curtains from the appliance.

Heat pumps are air conditioners that contain a valve that allows switching between air conditioning and heating. When the valve is switched one way, the heat pump acts like an air conditioner; when it is switched the other way, it reverses the flow of refrigerant and acts like a heater. Cold is the absence of heat energy: to cool something, heat must be removed; to warm something, heat energy must be supplied. Heat pumps do both. A heat pump has a few additions beyond the typical air conditioner: a reversing valve, two thermal expansion valves, and two bypass valves. The reversing valve allows the unit to provide both cooling and heating. Figure 12.1 shows a heat pump in cooling mode. The unit operates as follows:
- The compressor compacts the refrigerant vapor and pumps it to the reversing valve.
- The reversing valve directs the compressed vapor to the outside heat exchanger (condenser), where the refrigerant is cooled and condensed to a liquid.
- The air blowing through the condenser coil removes heat from the refrigerant.
- The liquid refrigerant bypasses the first thermal expansion valve and flows to the second thermal expansion valve at the inside heat exchanger (evaporator), where it expands into the evaporator and becomes vapor.
- The refrigerant picks up heat energy from the air blowing across the evaporator coil, and cool air comes out the other side of the coil. The cool air is ducted to the occupied space as air-conditioned air.
- The refrigerant vapor then goes back to the reversing valve to be directed to the compressor, starting the refrigeration cycle all over again.

Heat pumps are quite efficient in their use of energy. However, heat pumps operating in heating mode often freeze up; that is, the outdoor coils collect ice. The heat pump has to melt this ice periodically, so it switches itself back to air conditioner mode to heat up the coils. To avoid pumping cold air into the house while in air conditioner mode, the heat pump also uses electric strip heaters to warm the cold air that the air conditioner is delivering. Once the ice is melted, the heat pump switches back to heating mode and turns off the strip heaters.

# Fuel Oil
Fuel oils are derived from petroleum, which consists primarily of compounds of hydrogen and carbon (hydrocarbons) and smaller amounts of nitrogen and sulfur. Domestic fuel oils are controlled by rigid specifications. Six grades of fuel oil-numbered 1 through 6-are generally used in heating systems; the lighter two grades are used primarily for domestic heating:
- Grade Number 1-a volatile, distillate oil for use in burners that prepare fuel for burning solely by vaporization (oil-fired space heaters).
- Grade Number 2-a moderate-weight, volatile, distillate oil used for burners that prepare oil for burning by a combination of vaporization and atomization. This grade of oil is commonly used in domestic heating furnaces.

Heating values of oil vary from approximately 152,000 BTU per gallon for Number 6 oil to 136,000 BTU per gallon for Number 1 oil. Oil is more widely used today than coal and provides a more automatic source of heat and comfort, but it also requires more complicated systems and controls. If the oil supply is in the basement or cellar area, certain code regulations must be followed (Figure 12.2). No more than two 275-gallon tanks may be installed above ground in the lowest story of any one building. The International Residential Code (IRC) recommends a maximum fuel oil storage of 660 gallons. The tank shall not be closer than 7 feet horizontally to any boiler, furnace, stove, or exposed flame. Fuel oil lines should be embedded in a concrete or cement floor or protected against damage if they run across the floor. Each tank must have a shutoff valve that will stop the flow if a leak develops in the line to the burner or in the burner itself. A leak-tight liner or pan should be installed under tanks and lines located above the floor; it contains potential leaks so that oil does not spread over the floor and create a fire hazard. The tank or tanks must be vented to the outside, and a gauge showing the quantity of oil in the tank or tanks must be tight and operative. Steel tanks constructed before 1985 had a life expectancy of 12-20 years. Tanks must be off the floor and on a stable base to prevent settlement or movement that may rupture the connections.
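To put the heating values quoted above in perspective, the short sketch below estimates the heat content of a standard 275-gallon tank. The 140,000 BTU-per-gallon figure for Number 2 oil is an assumed midpoint between the Number 1 and Number 6 values given above, and the 100,000 BTU-per-hour burner input is likewise an illustrative assumption, not a value from this manual.

```python
# Rough energy content of a residential fuel-oil tank, using the heating
# values quoted in the text. The Number 2 figure is an assumed midpoint;
# check the supplier's specification for the actual value.

BTU_PER_GALLON = {
    1: 136_000,   # Grade Number 1 (from the text)
    2: 140_000,   # Grade Number 2 (assumed, between Numbers 1 and 6)
    6: 152_000,   # Grade Number 6 (from the text)
}

def tank_energy_btu(gallons, grade):
    """Total heat content of a full tank for the given oil grade."""
    return gallons * BTU_PER_GALLON[grade]

# A standard 275-gallon basement tank of Number 2 oil:
total = tank_energy_btu(275, 2)
print(f"{total:,} BTU")                      # 38,500,000 BTU
# At an assumed steady 100,000 BTU/hr burner input, that is roughly:
print(f"{total / 100_000:.0f} hours of full-rate firing")   # 385 hours
```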
Figure 12.3 shows a buried outside tank installation. In 1985, federal legislation was passed requiring that the exterior components of underground storage tanks (USTs) installed after 1985 resist the effects of pressure, vibration, and movement. Federal regulations for USTs exclude the following: farm and residential tanks of 1,100 gallons or less capacity; tanks storing heating oil used on the premises; tanks on or above the floor of basements; septic tanks; flow-through process tanks; all tanks with a capacity of 110 gallons (420 liters) or less; and emergency spill and overfill tanks. A review of local and state regulations should be completed before installing underground tanks because many jurisdictions do not allow burial of gas or oil tanks.

The U.S. Department of Energy began the Million Solar Roofs Initiative in 1997 to install solar energy systems on more than 1 million U.S. buildings by 2010.

# Central Heating Units
The boiler should be placed in a separate room whenever possible; this is usually required in new construction. In most housing inspections, however, the inspector is dealing with existing conditions and must adapt the situation as closely as possible to acceptable safety standards. In many old buildings, the furnace is located in the center of the cellar or basement, a location that does not lend itself to practical conversion to a boiler room. Consider the physical requirements for a boiler or furnace:
- Ventilation-More circulating air is required for the boiler room than for a habitable room, both to reduce the heat buildup caused by the boiler or furnace and to supply oxygen for combustion.
- Fire protection rating-As specified by various codes (fire code, building code, and insurance underwriters), the fire regulations must be strictly adhered to in areas surrounding the boiler or furnace. The minimum clearance for a boiler or furnace from a wall or ceiling is shown in Figures 12.4 and 12.5.

Asbestos was used in numerous places on furnaces to protect buildings from fire and to prevent lost heat; Figure 12.6 shows asbestos-coated heating ducts, for example. Where asbestos insulation is found, it must be handled with care (breathing protection and protective clothing), and care must be taken to prevent or contain its release into the air.

Placing the furnace or boiler in a small, enclosed room makes it difficult to supply air and ventilation for the room. Where codes and local authority permit, it may be more practical to place the furnace or boiler in an open area. The ceiling above the furnace should be protected to a distance of 3 feet beyond all furnace or boiler appurtenances, and this area should be free of all storage material. The furnace or boiler should be on a firm foundation of concrete if located in the cellar or basement. If codes permit furnace installations on the first floor, they must be consulted for proper setting and location.

# Heating Boilers
The term boiler is applied to the single heat source that can supply either steam or hot water (a boiler is often called a heater). Boilers may be classified according to several kinds of characteristics. They are typically made from cast iron or steel. Their construction design may be sectional, portable, fire-tube, water-tube, or special. Domestic heating boilers are generally of the low-pressure type, with a maximum working pressure of 15 pounds per square inch (psi) for steam and 30 psi for hot water. All boilers have a combustion chamber for burning fuel. Automatic fuel-firing devices help supply the fuel and control the combustion.
Hand firing is accomplished by the provision of a grate, ash pit, and controllable drafts to admit air under the fuel bed and over it through slots in the firing door. A check draft is required at the smoke pipe connection to control chimney draft. The gas passes from the combustion chamber to the flue passages (smoke pipe), which are designed for maximum possible transfer of heat from the gas. Provisions must be made for cleaning the flue passages.

Cast-iron boilers are usually shipped in sections and assembled at the site. They are generally classified as
- square or rectangular boilers with vertical sections; and
- round, square, or rectangular boilers with horizontal pancake sections.

Most steel boilers are assembled units with welded steel construction and are called portable boilers. Large boilers are installed in refractory brick settings on the site. Above the combustion chamber, a group of tubes is suspended, usually horizontally, between two headers. If flue gases pass through the tubes and water surrounds them, the boiler is designated as the fire-tube type; when water flows through the tubes, it is termed water-tube. Fire-tube is the predominant type.

# Heating Furnaces
Heating furnaces are the heat sources used when air is the heat-carrying medium. When air circulates because of the different densities of the heated and cooled air, the furnace is a gravity type. A fan may be included for the air circulation; this type is called a mechanical warm-air furnace. Furnaces may be of cast iron or steel and burn various types of fuel. Some new furnaces achieve fuel efficiencies as high as 95%. Furnaces with an efficiency of 90% or greater use two heat exchangers instead of one. Energy savings come not only from the increased efficiency, but also from improved comfort at lower thermostat settings.

# Fuel-burning Furnaces
Coal is still used as a heating fuel in some localities throughout the United States, in residences, schools, colleges and universities, small manufacturing facilities, and other facilities located near coal sources. In many older furnaces, the coal is stoked or fed into the firebox by hand. The single-retort, underfeed-type bituminous coal stoker is the most commonly used domestic automatic-stoking steam or hot-water boiler (Figure 12.7). The stoker consists of a coal hopper, a screw for conveying coal from hopper to retort, a fan that supplies air for combustion, a transmission for driving the coal feed and fan, and an electric motor for supplying power. The air for combustion is admitted to the fuel through tuyeres (air inlets) at the top of the retort. The stoker feeds coal to the furnace intermittently in accordance with temperature or pressure demands.

Oil burners are broadly designated as distillate, domestic, and commercial or industrial. Distillate burners are usually found in oil-fired space heaters. Domestic oil burners are usually power driven and are used in domestic heating plants. Commercial or industrial burners are used in larger central-heating plants for steam or power generation. Domestic oil burners vaporize and atomize the oil and deliver a predetermined quantity of oil and air to the combustion chambers; they operate automatically to maintain a desired temperature. Gun-type burners atomize the oil either by oil pressure or by low-pressure air forced through a nozzle. The oil system of a pressure-atomizing burner consists of a strainer, pump, pressure-regulating valve, shutoff valve, and atomizing nozzle.
The air system consists of a power-driven fan and an air tube that surrounds the nozzle and electrode assembly. The fan and oil pump are generally connected directly to the motor. Oil pressures normally used are about 100 psi, but pressures considerably in excess of this are sometimes used.

The form and parts of low-pressure, air-atomizing burners are similar to those of high-pressure atomizing (gun) burners (Figure 12.8), except for the addition of a small air pump and a different way of delivering air and oil to the nozzle or orifice. The atomizing-type burner, sometimes known as a radiant or suspended-flame burner, atomizes oil by throwing it from the circumference of a rapidly rotating, motor-driven cup. The burner is installed so that the driving parts are protected from the heat of the flame by a hearth of refractory material at about the grate elevation. Oil is fed by pump or gravity; the draft is mechanical or a combination of natural and mechanical. Horizontal rotary burners were originally designed for commercial and industrial use but are available in sizes suitable for domestic use. In this burner, fuel oil is atomized as it is thrown in a conical spray from a rapidly rotating cup. Horizontal rotary burners use electric-gas or gas-pilot ignition and operate with a wide range of fuels, primarily Numbers 1 and 2 fuel oil.

Primary safety controls for burner operation are necessary. An antiflooding device must be part of the system to stop oil flow if ignition in the burner should fail. Likewise, a stack control is necessary to shut off the burner if safe stack temperatures are exceeded, cutting off all power to the burner; the control's reset button must be pressed before a restart can be attempted. Newer models use an electric-eye (photocell) control on the burner itself.

On the basis of the method used to ignite fuels, burners are divided into five groups:
- Electric-A high-voltage electric spark in the path of an oil and air mixture causes ignition. This electric spark may be continuous or may operate only long enough to ignite the oil. Electric ignition is almost universally used. Electrodes are located near the nozzles, but not in the path of the oil spray.
- Gas pilot-A small gas pilot light that burns continuously is frequently used. Gas pilots usually have expanding gas valves that automatically increase flame size when a motor circuit starts. After a fixed interval, the flame reverts to normal size (Figure 12.9).
- Electric gas-An electric spark ignites a gas jet, which in turn ignites the oil-air mixture.
- Oil pilot-A small oil flame is used.
- Manual-A burning wick or torch is placed in the combustion space through peepholes and thus ignites the charge. The operator should stand to one side of the fire door to guard against injury from a chance explosion.

The refractory lining or material should be an insulating, fireproof, bricklike substance, never ordinary firebrick. The insulating brick should be set on end to build a 2½-inch-thick wall. The size and shape of the refractory pot vary from furnace to furnace; the shape can be either round or square, whichever is more convenient to build. It is important to use a special cement having properties similar to those of the insulating refractory-type brick.

# Steam Heating Systems
Steam heating systems are classified according to the pipe arrangement, accessories used, method of returning the condensate to the boiler, method of expelling air from the system, or the type of control used.
The successful operation of a steam heating system consists of generating steam in sufficient quantity to offset building heat loss at maximum efficiency, expelling entrapped air, and returning all condensate to the boiler rapidly. Steam cannot enter a space filled with air or with water at a pressure equal to the steam pressure; it is important, therefore, to eliminate air and remove water from the distribution system. All hot pipelines exposed to contact by residents must be properly insulated or guarded.

Steam heating systems use the following methods to return the condensate to the boiler:
- Gravity one-pipe air-vent system-One of the earliest types used, this method returns condensate to the boiler by gravity. This system is generally found in one-building-type heating systems. The steam is supplied by the boiler and carried through a single main or pipe to the radiators, as shown in Figure 12.10. Return of the condensate depends on hydrostatic head; therefore, the end of the steam main, where it attaches to the boiler, must be full of water (termed a wet return) for a distance above the boiler line to create a pressure-drop balance between the boiler and the steam main. Radiators are equipped with an inlet valve and an air valve. The air valve permits venting of air from the radiator and its displacement by steam. Condensate is drained from the radiator through the same pipe that supplies steam.
- Two-pipe steam vapor system with return trap-The two-pipe vapor system with boiler return trap and air eliminator is an improvement on the one-pipe system. The return connection of the radiator has a thermostatic trap that permits flow of condensate and air only from the radiator and prevents steam from leaving the radiator. Because the return main is at atmospheric pressure or less, a boiler return trap is installed to equalize condensate return pressure with boiler pressure.

# Water Heating Systems
All water heating systems are similar in design and operating principle. The one-pipe gravity water heating system is the most elementary of the gravity systems and is shown in Figure 12.11. Water is heated at the lowest point in the system. It rises through a single main because of the difference in density between hot and cold water. The supply riser or radiator branch takes off from the top of the main to supply water to the radiators. After the water gives up heat in the radiator, it goes back to the same main through return piping from the radiator. This cooler return water mixes with water in the supply main and causes the water to cool a little; as a result, the next radiator on the system has a lower emission rate and must be larger. Note in Figure 12.11 that the high points of the hot water system are vented and the low points are drained. In this case, the radiators are the high points and the heater is the low point.
- One-pipe forced-feed system-If a pump or circulator is introduced in the main near the heater of the one-pipe system, it becomes a forced system that can be used for much larger applications than the gravity type and can operate at higher water temperatures. When the water is moving faster and at higher temperatures, the system is more responsive, with smaller temperature drops and smaller radiators for the same heating load.
- Two-pipe gravity system-A one-pipe gravity system may become a two-pipe system if the return radiator branch connects to a second main that returns water to the heater (Figure 12.12).
Water temperature is practically the same in each radiator.
- Two-pipe forced-circulation system-This system is similar to the one-pipe forced-circulation system except that it uses the same piping arrangement found in the two-pipe gravity system.
- Expansion tanks-When water is heated, it expands; therefore, an expansion tank is necessary in a hot water system. The expansion tank, either open or closed, must be of sufficient size to permit a change in water volume within the heating system. If the expansion tank is open, it must be placed at least 3 feet above the highest point of the system and requires a vent and an overflow. The open tank is usually in an attic, where it needs protection from freezing. The closed expansion tank is found in modern installations. An air cushion in the tank compresses and expands according to the change of volume and pressure in the system. Closed tanks are usually at the low point in the system and close to the heater; they can, however, be placed at almost any location within the heating system.

# Air Heating Systems
Gravity Warm-air Heating Systems. These operate because of the difference in specific gravity of warm air and cold air. Warm air is lighter than cold air and rises if cold air is available to replace it (Figure 12.13).
- Operation-Satisfactory operation of a gravity warm-air heating system depends on three factors: size of the warm-air and cold-air ducts, heat loss of the building, and heat available from the furnace.
- Heat distribution-The most common source of trouble in these systems is insufficient pipe area, usually in the return or cold-air duct. The total cross-section area of the cold duct or ducts must be at least equal to the total cross-section area of all warm ducts.
- Pipeless furnaces-The pipeless hot-air furnace is the simplest type of hot-air furnace and is suitable for small homes where all rooms can be grouped about a single large register. Other pipeless gravity furnaces are often installed at floor level; these are really oversized jacketed space heaters. The most common difficulty experienced with this type of furnace is supplying a return air opening of sufficient size on the floor.

Forced Warm-air Heating Systems. The design of a forced-warm-air heating system is very similar to that of the gravity system, except that a fan or blower is added to increase air movement. Because of the assistance of the fan or blower, the pitch of the ducts or leaders can be disregarded; it is therefore practical to deliver heated air in the most convenient places.
- Operation-In a forced-air system, operation of the fan or blower must be controlled by the air temperature in the bonnet through a blower control (furnacestat). The blower control starts the fan or blower when the temperature reaches a certain point and turns the fan or blower off when the temperature drops to a predetermined point.
- Heat distribution-Dampers in the various warm-air ducts control distribution of warm air either at the branch takeoff or at the warm-air outlet. Humidifiers are often mounted in the supply bonnet to regulate the humidity within the residence.

# Space Heaters
Space heaters are the least desirable type of heating from the viewpoint of fire safety and housing inspection. A space heater is a self-contained, free-standing air-heating appliance intended for installation in the space being heated and not intended for duct connection. According to the CPSC, consumers are not using care when purchasing and using space heaters.
Space heaters cause approximately 21,800 residential fires a year, and 300 people die in these fires. An estimated 6,000 persons receive hospital emergency room care for burn injuries associated with contacting hot surfaces of space heaters, mostly in nonfire situations. Individuals using space heaters should observe the following precautions:
- Read and follow the manufacturer's operating instructions. A good practice is to read the instructions and warning labels aloud to all members of the household to be certain that everyone understands how to operate the heater safely. Keep the owner's manual in a convenient place to refer to when needed.
- Choose a space heater that has been tested and certified by a nationally recognized testing laboratory. These heaters meet specific safety standards.
- Buy a heater that is the correct size for the area you want to heat. The wrong size heater could produce more pollutants and may not be an efficient use of energy.
- Choose models that have automatic safety switches that turn off the unit if it is tipped over accidentally.
- Select a space heater with a guard around the flame area or heating element. Place the heater on a level, hard, nonflammable surface, not on rugs or carpets or near bedding or drapes. Keep the heater at least 3 feet from bedding, drapes, furniture, or other flammable materials.
- Keep doors open to the rest of the house if you are using an unvented fuel-burning space heater. This helps prevent pollutant buildup and promotes proper combustion. Follow the manufacturer's instructions for oil heaters to provide sufficient combustion air to prevent CO production.
- Never leave a space heater on when you go to sleep. Never place a space heater close to any sleeping person.
- Turn the space heater off if you leave the area. Keep children and pets away from space heaters. Children should not be permitted to either adjust the controls or move the heater.
- Keep any portable heater at least 3 feet away from curtains, newspapers, or anything that might burn.
- Have a smoke detector with fresh batteries on each level of the house and a CO detector outside the sleeping area. Install a CO monitor near oil space heaters at the height recommended by the manufacturer.
- Be aware that mobile homes require specially designed heating equipment. Only electric or vented fuel-fired heaters should be used.
- Have gas and kerosene space heaters inspected annually.
- Do not hang items to dry above or on the heater.
- Keep all heaters out of exit and high-traffic areas.
- Keep portable electric heaters away from sinks, tubs, and other wet or damp places to avoid deadly electric shocks.
- Never use or store flammable liquids (such as gasoline) around a space heater. The flammable vapors can flow from one part of the room to another and be ignited by the open flame or by an electrical spark.

# Coal-fired Space Heaters (Cannon Stove)
A coal stove is made entirely of cast iron. Coal on the grates receives primary air for combustion through the grates from the ash-door draft intake. Combustible gases driven from the coal by heat burn in the barrel of the stove, where they receive additional, or secondary, air through the feed door. The sides and top of the stove absorb the heat of combustion and radiate it to the surrounding space. Coal stoves must be vented to the flue.

# Oil-fired Space Heaters
Oil-fired space heaters have atmospheric vaporizing-type burners.
The burners require a light grade of fuel oil that vaporizes easily and at comparatively low temperatures. In addition, the oil must be such that it leaves only a small amount of carbon residue and ash within the heater. Oil stoves must be vented. The burner of an oil-fired space heater consists essentially of a bowl, 8 to 13 inches in diameter, with perforations in the side that admit air for combustion. The upper part of the bowl has a flame ring or collar. Figure 12.15 shows a perforated-sleeve burner. When several space heaters are installed in a building, an oil supply from an outside tank to all heaters is often desirable. Figure 12.16 shows the condition of a burner flame with different rates of fuel flow and indicates the ideal flame height.

# Electric Space Heaters
Electric space heaters do not need to be vented.

# Gas-fired Space Heaters
The three types of gas-fired space heaters (natural, manufactured, and liquefied petroleum gas) have a similar construction. All gas-fired space heaters must be vented to prevent a dangerous buildup of poisonous gases. Each unit console consists of an enameled steel cabinet with top and bottom circulating grilles or openings, gas burners, heating elements, gas pilot, and a gas valve. The heating element or combustion chamber is usually cast iron.

Caution: All gas-fired space heaters and their connections must be approved by the American Gas Association (AGA). They must be installed in accordance with the recommendations of that organization or the local code.

# Venting
Use of proper venting materials and correct installation of venting for gas-fired space heaters are necessary to minimize the harmful effects of condensation and to ensure that combustion products are carried off. (Approximately 12 gallons of water are produced in the burning of 1,000 cubic feet of natural gas. The inner surface of the vent must therefore be heated above the dew point of the combustion products to prevent water from forming in the flue.) A horizontal vent must be given an upward pitch of at least 1 inch per foot of horizontal distance. When the smoke pipe extends through floors or walls, the metal pipe must be insulated from the floor or wall system by an air space (Figure 12.17). Sharp bends should be avoided: a 90° vent elbow has a resistance to flow equivalent to that of a straight section of pipe with a length 10 times the elbow diameter. Be sure that vents are of rigid construction and resistant to corrosion by flue gas products. Several types of venting material are available, such as B-vent and other ceramic-type materials. A chimney lined with firebrick-type terra cotta must be relined with an acceptable vent material if it is to be used for venting gas-fired appliances. The same size vent pipe should be used throughout its length. A vent should never be smaller than the heater outlet except when two or more vents converge from separate heaters; to determine the size of the common vent beyond the point of convergence, one-half the area of each smaller vent should be added to the area of the largest heater's vent. Vents should be installed with the male ends of the inner liner down to ensure that condensate is kept within the pipes on a cold start. The vertical length of each vent or stack should be at least 2 feet greater than the length between the horizontal connection and the stack. Remember that the more heat a unit extracts from the flue gases, the lower their temperature and the more byproducts of combustion are likely to be produced. These byproducts are sometimes referred to as soot and creosote.
These byproducts will build up in vents, stacks, and chimneys. They are extremely flammable and can fuel a fire in these units hot enough to penetrate the heat shielding and throw burning material onto the roof of the home. The vent should be run at least 3 feet above any projection within 20 feet of the building to place it above a possible pressure zone due to wind currents (Figure 12.18). A weather cap should prevent entrance of rain and snow. Gas-fired space heaters, as well as gas furnaces and water heaters, must be equipped with a backdraft diverter (Figure 12.19) to protect the heater against downdrafts and excessive updrafts. Only draft diverters approved by the AGA should be used.

The combustion chamber or firebox must be insulated from the floor, usually with an airspace of 15 to 18 inches. The firebox is sometimes insulated within the unit, which allows for less clearance between the firebox and combustibles. Floors should be protected where coal space heaters are located; the floor protection allows hot coals and ashes to cool off if dropped while being removed from the ash chamber. Noncombustible walls and materials should be used when they are exposed to heated surfaces. For space heaters, a top or ceiling clearance of 36 inches, a wall clearance of 18 inches, and a smoke pipe clearance of 18 inches are recommended.

# Hydronic Systems
Hydronic (circulating water) systems involving traditional baseboards can be single-pipe or two-pipe; radiant systems are also an option. All hydronic systems require an expansion tank to compensate for the increase in water volume when it is heated (the volume of 50°F [10°C] water increases almost 4% when it is heated to 200°F [93°C]). Massachusetts has a prototype set of hydronic system requirements.

Single-pipe hydronic systems are most commonly used in residences. They use a single pipe with hot water flowing in a series loop from radiator to radiator. The drawback to this arrangement is that the temperature of the water decreases as it moves through each radiator; thus, larger radiators are needed for locations downstream in the loop. A common solution is multiple loops or zones, each with its own temperature control and with circulation provided by a small pump or zone valve in each loop.

Two-pipe hydronic systems use one pipe for supplying hot water to the radiators and a second pipe for returning the water from the radiators to the boiler. There are also direct- and reverse-return arrangements. The direct-return system can be difficult to balance because the pressure drop through the nearest radiator piping can be significantly less than that for the farthest radiator. Reverse-return systems take care of the balancing problem but require the expense of additional piping. Orifice plates at radiator inlets or balancing valves at radiator outlets can also be used to balance the pressure drops in a direct-return system.

# Direct Vent Wall Furnaces
Direct vent wall furnaces are specifically designed for areas where flues or chimneys are not available or cannot be used. The furnace is directly vented to the outside, and external air is used to support combustion. The air on the inside is warmed as it recirculates around a sealed chamber.

# Cooling
# Air Conditioning
Many old homes relied on passive cooling-opening windows and doors and using shading devices-during the summer months. Homes were designed with windows on opposite walls to encourage cross ventilation, and large shade trees reduced solar heat gains.
This approach is still viable, and improved thermal performance (insulating value) windows are available that allow for larger window areas to let in more air in the summer without the heat-loss penalty in the winter. However, increased outdoor noise levels, pollution, and security concerns make relying on open windows a less attractive option in some locations today.

An air conditioning system of some kind may be installed in the home. It may be a window air conditioner or through-the-wall unit for cooling one or two rooms, or a central split-system air conditioner or heat pump. In any event, the performance of these systems, in terms of providing adequate comfort without excessive energy use, should be investigated. The age of the equipment alone will provide some indication: if the existing system is more than 10 years old, replacement should be considered because it is much less efficient than today's systems and is nearing the end of its useful life.

The refrigerant commonly used in today's residential air conditioners is R-22. Because of the suspicion that R-22 depletes the ozone layer, manufacturers will be prohibited from producing units with R-22 in 2010. The leading replacements for R-22 are R-134a and R-410A, and new products are now available with these non-ozone-depleting refrigerants.

The performance measure for electric air conditioners with capacities of less than 65,000 BTU per hour is the seasonal energy efficiency ratio (SEER). SEER is a rating of cooling performance based on representative residential loads, reported in BTUs of cooling per watt-hour of electric energy consumption. It includes energy used by the unit's compressor, fans, and controls. The higher the SEER, the more efficient the system. However, the highest-SEER unit may not provide the most comfort: in humid climates, some of the highest-SEER units exhibit poor dehumidification capability because they operate at higher evaporator temperatures to attain the higher efficiency.

In a split system, indoor air is cooled as it passes over the indoor coil (evaporator), where the refrigerant absorbs heat and vaporizes. The refrigerant gas then travels through refrigerant piping to the outdoor unit, where it is pressurized in an electrically driven compressor, raising its temperature and pressure, and returned to a liquid state in the condenser as it releases, or dumps, the heat to the outdoors. A fan draws outdoor air in over the condenser coil. The use of two-speed indoor fans can be advantageous in this type of system because the cooling load often requires higher airflows than the heating load; the lower speed can be used for the heating season and for improved dehumidification performance during the cooling season. The condenser unit for a house air conditioner is shown in Figure 12.21.

# Circulation Fans
Air movement can make a person feel comfortable even when dry-bulb temperatures are elevated. A circulation fan (ceiling or portable) that creates an airspeed of 150 to 200 feet per minute can compensate for a 4°F (about 2°C) increase in temperature. Ceiling circulation fans also can be beneficial in the heating season by redistributing warm air that collects along the ceiling, but they can be noisy.

# Evaporation Coolers
In dry climates, as in the southwestern United States, an evaporative cooler or "swamp" cooler may provide sufficient cooling. This system cools an airstream by evaporating water into it; the airstream's relative humidity increases while the dry-bulb temperature decreases. A 95°F (35°C), 15% relative humidity airstream can be conditioned to 75°F (24°C), 50% relative humidity.
The simplest direct systems are centrally located and use a pump to supply water to a saturated pad over which the supply air is blown. Indirect systems use a heat exchanger between the airstream that is cooled by evaporating water and the supply airstream; the moisture level of the supply airstream is not affected as it is cooled. Evaporative coolers have lower installation and operating costs than electric air conditioning, and no ozone-depleting refrigerant is involved. They provide high levels of ventilation because they typically condition and supply 100% outside air. The disadvantages are that bacterial contamination can result if the unit is not properly maintained and that they are appropriate only for dry, hot climates.

# Safety
Cooling homes with window air conditioners requires attention to the maintenance requirements of the unit. The filter must be cleaned or replaced as recommended by the manufacturer, and the drip pan should be checked to ensure that proper drainage from the unit is occurring. The pans should be rinsed and disinfected as recommended by the manufacturer. Both bacteria and fungi can establish themselves in these areas and present serious health hazards.

Condensation forms on the cooling coils of central air units inside and outside the home. These units should have a properly installed drip pan and should be drained according to the manufacturer's instructions. They also should receive routine maintenance, flushing, and disinfection. In the spring, before starting the air conditioner, the unit should be checked by a professional or someone familiar with the operation of the system. This is a good time to check the drip line(s) for conditions such as plugs, cracks, or bacterial contamination, because many of these lines are plastic. The drip pan should be cleaned thoroughly and disinfected if necessary, or replaced; a plugged drip line can cause water damage by overflow from the drip pan. In the fall, the heating unit also should be checked before starting the system. Care should be taken with both window air conditioning units and central air systems to use quality air filters that are designed for the specific units and meet the specifications required by the system's manufacturer.

The housing inspector should be on the alert for unvented, open-flame heaters. Coil-type, wall-mounted water heaters that do not have safety relief valves are not permitted. Kerosene (portable) units for cooking or heating should be prohibited; generally, open-flame portable units are not allowed under fire safety regulations. In oil heating units, other than integral tank units, the oil tank must be filled and vented outside the building: filling oil tanks within buildings is prohibited. Cutoff switches should be close to the entry but outside of a boiler room.

# Chimneys
Chimneys (Figure 12.22) are often an integral part of a building. Masonry chimneys must be tight and sound; flues should be terra cotta-lined; and, where no linings are installed, the brick should be tight to permit proper draft and elimination of combustion gases. Chimneys that act as flues for gas-fired equipment must be lined with either B-vent or terra cotta. When a portion of the chimney above the roof either loses insulation or the insulation peels back, it indicates potential poisonous gas release or water leakage problems and a need for rebuilding. Exterior deterioration of the chimney, if neglected too long, will permit erosion from within the flues and eventually block the flue opening.
Rusted flashing at the roof level will also contribute to the chimney's deterioration. Efflorescence (salt accumulation) on the inside wall of the chimney below the roof, and on the outside of the chimney if exposed, is a telltale sign of water penetration and flue gas escape and a sign of chimney deterioration. During rainy seasons, if terra cotta chimneys leak, dark areas on the masonry reveal the flues inside the chimney so that they can actually be counted. When this condition occurs, it usually requires 2 or 3 months to dry out. After drying out, the mortar joints are discolored (brown). After a few years of this type of deterioration, the joints can be distinguished whether the chimney is wet or dry. These conditions usually develop when coal is used and become more pronounced 2 to 5 years after conversion to oil or gas.

An unlined chimney can be checked for deterioration below the roofline by looking for residue deposited at the base of the chimney, usually accessible through a cleanout (door or plug) or breeching. Red granular or fine powder showing through coal or oil soot will generally indicate, if present in quantity (a handful), that deterioration is excessive and repairs are needed. Unlined chimneys serving gas units will be devoid of soot but will usually show similar telltale brick powder and deterioration. Manufactured gas has a greater tendency than natural gas to dehydrate and decompose brick in chimney flues. For gas installations in older homes, utility companies usually specify chimney requirements before installation; therefore, older chimneys may require the installation of terra cotta liners, nonlead-lined copper liners, stainless steel liners, or transite pipe.

Black carbon deposits around the top of the chimney usually indicate an oil burner operating with a low air ratio and high oil consumption. Prolonged operation at this burner setting results in long carbon deposits down the chimney for 4 to 6 feet or more and should indicate to the inspector a possibility of poor burner maintenance, accenting the need to be more thorough on the next inspection. This type of condition can also result from other causes, such as improper chimney height or exterior obstructions (trees or buildings) that cause downdrafts or insufficient draft or contribute to a faulty heating operation. Rust spots and soot-mold usually occur on deteriorated galvanized smoke pipe.

# Fireplaces
Careful attention should be given to construction of the fireplace (Figure 12.23). Improperly built fireplaces are a serious safety and fire hazard. The most common causes of fireplace fires are thin walls, combustible materials such as studding or trim against the sides and back of the fireplace, wood mantels, and unsafe hearths. Fireplace walls should be no less than 8 inches thick; if built of stone or hollow masonry units, they should be no less than 12 inches thick. The faces of all walls exposed to fire should be lined with firebrick or other suitable fire-resistant material. When the lining consists of 4 inches of firebrick, that lining thickness may be included in the required minimum thickness of the wall. The fireplace hearth should be constructed of brick, stone, tile, or similar incombustible material and should be supported on a fireproof slab or on a brick arch. The hearth should extend at least 20 inches beyond the chimney breast and no less than 12 inches beyond each side of the fireplace opening along the chimney breast.
The combined thickness of the hearth and its supporting construction should be no less than 6 inches at any point. It is important that all wooden beams, joists, and studs be set off from the fireplace and chimney so that there is no less than 2 inches of clearance between the wood members and the sidewalls of the fireplace or chimney and no less than 4 inches of clearance between wood members and the back wall of the fireplace.

A gas-log set is primarily a decorative appliance. It includes a grate holding ceramic logs, simulated embers, a gas burner, and a variable flame controller. These sets can be installed in most existing fireplaces. There are two principal types: vented and unvented. Vented types require a chimney flue for exhausting the gases. They are only 20% to 30% efficient, and most codes require that the flue damper be fixed open, which results in an easy exit path for heated room air. Unvented types operate like the burner on a gas stove, and the combustion products are emitted into the room. They are more efficient because no heat is lost up the flue, and most are equipped with oxygen-depletion sensors. However, unvented types are banned in some states, including Massachusetts and California.

Gas fireplaces incorporate a gas-log set into a complete firebox unit with a glass door. Some have built-in dampers, smoke shelves, and heat-circulating features that allow them to provide both radiant and convective heat. Units can have push-button ignition, remote control, variable heat controls, and thermostats. Gas fireplaces are more efficient than gas logs, with efficiencies of 60% to 80%. Many draw combustion air in from the outside and are direct vented, eliminating the need for a chimney. Some of these units are wall-furnace rated.

There are also electric fireplaces that provide the ambience of a fire and, if desired, a small amount of resistance heat. These units have no venting requirements. The advantages of gas and electric fireplaces are that they produce none of the ashes or flying sparks of wood-burning fireplaces, and they are not affected by the wood-burning bans imposed in some areas when air quality standards are not met. Direct-vented gas or electric models eliminate the need for a chimney. The disadvantage is that the cost of the equipment and of running the gas line can be high.

# Introduction
Using energy efficiently can reduce the cost of heating, ventilating, and air conditioning, which account for a significant part of the overall cost of housing. Energy costs recur month to month and are hard to reduce after a home has been designed and built. The development of an energy-efficient home or building must be thought through using a systems approach. Planning for energy efficiency involves considering where the air is coming from, how it is treated, and where it is desired in the home. Improper use or installation of sealing and insulating materials may lead to moisture saturation or retention, encouraging the growth of mold, bacteria, and viruses. In addition, toxic chemicals may be created or contained within the living environment. These building errors may result in major health hazards. The major issues that must be balanced in using a systems approach to energy efficiency are energy cost and availability, long-term affordability and sustainability, comfort and efficiency, and health and safety.
# Energy Systems
Making sound decisions in designing, constructing, or updating dwellings will not only ensure greater use and enjoyment of the space, but can also significantly lower energy bills and help residents avoid adverse health effects. Systematic planning for energy efficiency also can assist prospective homeowners in qualifying for mortgages, because lower fuel bills translate into lower total housing and utility payments. Some banks and credit unions take this into account when qualifying prospective homeowners for mortgages: "energy-efficient" mortgages provide buyers with special benefits when purchasing an energy-efficient home.

Energy use and efficiency should be addressed in the context of selection of fuel types and appliances, location of the equipment, equipment sizing and backup systems, and programmed use when making decisions on space heating and cooling. A price is paid for poor design and lack of proper insulation of dwellings, both in dollars for utility bills and in comfort of the occupants. The layout of rooms and the overall tightness of a house in terms of air exchange affect energy requirements. In addition, home occupants and owners often are called on to make relatively minor decisions affecting total energy consumption, such as selecting lighting fixtures and bulbs and selecting settings for thermostats. Buying energy-efficient appliances can save energy, but the largest reduction in energy use comes from major decisions, such as considering the R-value of roof systems, insulation, and windows.

# R-values
Thermal resistance (a material's resistance to heat flow) is rated by R-value. Higher R-values mean greater insulating power, which means greater household energy savings and commensurate cost savings. Table 13.1 is a guideline for choosing R-values that are right for a particular home based on the climate, household heating system, and area in which it is located. Another way of understanding R-value is to see it as the resistance to heat loss from a warmer inside temperature to the outside temperature through a material or building envelope (wall, ceiling or roof assembly, or window). Total heat loss is a function of the thermal conductivity of the materials, the area, the time, and the construction of the house.

The R-value of thermal insulation depends on the type of material, its thickness, and its density. In calculating the R-value of a multilayered installation, the R-values of the individual layers are added. Installing more insulation increases R-value and the resistance to heat flow. The effectiveness of an insulated wall or ceiling also depends on how and where the insulation is installed. For example, insulation that is compressed will not provide its full rated R-value. Also, the overall R-value of a wall or ceiling will be somewhat different from the R-value of the insulation itself because some heat flows around the insulation through the studs and joists. That is, the overall R-value of a wall with insulation between wood studs is less than the R-value of the insulation itself because the wood provides a thermal short-circuit around the insulation. The short-circuiting through metal framing is much greater than that through wood-framed walls; sometimes the metal wall's overall R-value can be as low as half the insulation's R-value. With careful design, this short-circuiting can be reduced.

# Roofs
Roofs are composite structures, with composite R-values. The total R-value for the roof components shown in Figure 13.1 is 14.54 (Table 13.2).
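The layered-R rule described above lends itself to a simple calculation. The sketch below sums the R-values of an assumed roof assembly and then estimates steady-state heat loss with the standard relation Q = A × ΔT / R. The layer names and values are illustrative assumptions, not the actual entries of Table 13.2.

```python
# A minimal sketch of the layered R-value rule: the R-values of the
# individual layers are added, and heat flow drops as total R rises.
# Layer values below are illustrative, not taken from Table 13.2.

def composite_r_value(layers):
    """Sum the R-values of the individual layers (ft2*F*hr/BTU)."""
    return sum(r for _, r in layers)

def heat_loss_btu_per_hr(area_sqft, delta_t_f, r_total):
    """Steady-state conduction loss through an assembly: Q = A * dT / R."""
    return area_sqft * delta_t_f / r_total

roof_layers = [
    ("outside air film", 0.17),
    ("asphalt shingles", 0.44),
    ("plywood sheathing (1/2 in.)", 0.62),
    ("fiberglass batt (R-11)", 11.0),
    ("gypsum board (1/2 in.)", 0.45),
    ("inside air film", 0.61),
]

r_total = composite_r_value(roof_layers)           # about 13.3 for these layers
q = heat_loss_btu_per_hr(1000, 70 - 30, r_total)   # 1,000 ft2 roof, 40F difference
print(f"Composite R-value: {r_total:.2f}")
print(f"Heat loss: {q:,.0f} BTU/hr")               # about 3,000 BTU/hr
```

Doubling the batt from R-11 to R-22 in this sketch would raise the composite value to about 24 and cut the conductive loss nearly in half, which is the sense in which added insulation pays for itself in the discussion that follows.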
In general, a composite structure with a composite R-value of more than R-38 provides a substantial barrier to heat loss. Of course, in the winter the outside air temperature varies significantly between locations such as Pensacola, Florida, and Fairbanks, Alaska, and this affects the cost-effectiveness of additional insulation and of construction using various roofing components (Table 13.2). The location of a house is usually fixed once the lot is purchased; however, the homeowner should consider the value of additional insulation by comparing its cost with the savings resulting from the increase in energy efficiency. Roof construction, including components such as ridge vents and insulating materials, is quite important and is often one of the more cost-effective ways to lower energy costs.

# Ridge Vents
Ridge vents are important to roofs for at least three reasons. First, ridge vents help lower the temperature in the roof structure and, consequently, in the attic and in the habitable space below. Second, ridge vents and rotating turbine vents help prolong the life of the roofing materials, particularly asphalt shingles and plywood sheathing. Third, ridge vents assist in air circulation and help avoid problems with excessive moisture.

# Fan-powered Attic Ventilation
Attic ventilators are small fans that remove hot air and reduce attic temperature. Adequate inlet vents are important; typically these vents are located under the eaves of the house. The fan should be located near the peak of the roof for best performance.

# White Roof Surface
A white roof surface combined with any of the measures listed above will improve their performance significantly. The white surface reflects much of the sun's heat and keeps the roof much cooler than a typical roof.

# Insulation
Insulation forms a barrier to the outside elements. It can help ensure that occupants are comfortable and that the home is energy-efficient. Ceiling insulation improves comfort and cuts electricity or natural gas costs for heating and cooling. For instance, the use of R-19 insulation in houses in Hawaii could have the following results:
- Reduce indoor air temperature by 4°F (about 2°C) in the afternoon.
- Lower the ceiling temperature, perhaps by more than 15°F (about 8°C). Insulation can reduce ceiling temperatures from 101°F (38°C) in bright sun on Oahu to 83°F (28°C) (Figure 13.2).
- Reduce or eliminate the need for an air conditioner.

Energy savings, of course, will vary depending on energy prices. The payback afforded by additional insulation or investment in energy conservation measures is the average amount of time required for the initial capital cost to be recovered through the savings in energy bills. A payback of 3 to 5 years might be economic, because the average homeowner stays in a home about that long. However, payback criteria can vary by individual, and renters, for example, often face the dilemma of not wanting to make improvements whose benefits they may not be able to fully realize.

Described below are a few insulation alternatives. To achieve maximum effect, the method of installation and the type of insulation are of considerable importance. The proper placement of moisture barriers is essential: if insulation becomes moisture-saturated, its resistance to energy loss is significantly reduced.
Barriers to moisture should be installed toward the living area because significant moisture is generated in the home through respiration, cooking, and the combustion of heating fuels.

Cellulose or fiberglass insulation is the most cost-effective insulation. Blown-in cellulose or fiberglass and fiberglass batts are similar in cost and performance, and recycled cellulose insulation may be available. For the best performance, insulation should be 5 to 6 inches thick. It can be installed in attics of new and existing homes. It is typically the best choice for framed ceilings in new homes, but can be costly to install in existing framed ceilings. It is very important that this type of insulation be treated for fire resistance.

Foamboard (R-10 at 1.5 to 2 inches) provides more insulation per inch than cellulose or fiberglass, but is also more expensive. It is best where other insulation cannot be used, such as open-beam ceilings, and is applicable for new construction or when roofing is replaced on an existing home. Two common materials are polystyrene and polyisocyanurate: polystyrene is better in moist conditions, and polyisocyanurate has a higher R-value per inch. However, some of these insulations present serious fire-spread hazards. They should be evaluated to ensure that they are covered with fire-retardant materials and meet local fire and building codes.

Radiant barrier insulation is a reflective foil sheet installed under the roof deck like regular roof sheathing. The effectiveness of a radiant barrier (Figure 13.2) depends on its emissivity (the relative power of the surface to emit heat by radiation); in general, the shinier the foil, the better. Radiant barrier insulation cuts the amount of heat radiated from the hot roof to the ceiling below. It may be draped over the rafters before the roof is installed or stapled to the underside of the rafters. The shiny side should face downward for best performance. Some manufacturers claim that a radiant barrier prevents up to 97% of the sun's heat from entering the attic.

# Wall Insulation
As shown in Table 13.1, it makes sense to insulate to high R-values in the ceiling. Insulation in walls should range from R-11 in relatively mild climate zones to R-38 in New England, the northern Midwest, the Great Lakes, and the Rocky Mountain states of Colorado and Wyoming. Insulation requirements vary within climate zones in these states and areas as well (for instance, mountainous areas and areas farther north may have more heating-degree days). The same logic of installing insulation applies to both ceilings and walls: the insulation should provide a barrier to heat and moisture transfer and buildup from inside the dwelling, where temperatures will generally be in the 68°F to 72°F (20°C to 22°C) range, compared with the much colder or hotter temperatures outside. The key to heat loss is the difference in temperatures and the time over which the heat transfer takes place across a given area or surface. The choice of heating system, from gas or oil to heat pump to electric resistance, will also affect the payback of additional wall insulation because of variation in fuel prices. For regions identified as "cold," careful attention should be paid to selecting the fuel type; in particular, a heat pump may not be a practical option.

A homeowner exploring designs and construction methods should examine the value of using structural insulated panels.
The incorporation of high levels of insulation directly from the factory on building wall and ceiling components makes them outstanding barriers to heat and moisture. These integrated systems, if appropriately used, can save substantial amounts of energy when compared with traditional stick-built systems using 2×4 or 2×6 lumber. Also, building energy-efficient features (as well as electrical, plumbing, and other elements) directly into the building envelope at the factory can result in labor cost savings over the more traditional methods of construction. # Floor Insulation Warm air expands and rises above surrounding cooler air. This process of heat transfer is called convection. Warm air, which is lighter, rises and, as it cools, falls, creating a convection current of air. The two other processes of heat transfer are conduction (kinetic energy transferred from particle to particle, such as in a water-or electrically heated floor) and radiation (radiant energy emitted in the form of waves or particles such as in a fireplace or hot glowing heating element). Floor insulation limits all three modes of heat loss. A warmer floor reduces the temperature difference that drives convection. Floor insulation also directly impedes conduction and radiation to the colder air below the floor. # Batt Insulation The advantage of floor insulation lies in adding extra R-value without a significant increase in cost. It is cheaper to put more insulation under the floor than to add foam sheathing or change the type of wall construction to accommodate greater insulation levels. Like walls, floor cavities should be completely filled with insulation-without gaps, missing insulation, or cavity voids. Floor insulation must contact the subfloor and both joists. In many cases, it is worth the extra cost to buy enough insulation to fill the entire cavity. The amount of floor insulation required by some codes can be less than the space available. For example, an R-19 fiberglass batt is 6¼ inches thick. A floor framed with 2×8s is about 7½ inches deep, while a 2×10 floor is 9½ inches deep. A builder following a code's minimum insulation level will leave extra space that will allow for greater In some areas, it's common to hang plastic mesh over floor joists. Installers drop the insulation onto the mesh before the subfloor is installed. However, hanging the mesh creates sagging bellies. Insulation compresses near the framing and sags in the middle. Mesh should be attached to the bottom of the floor framing . Each stage of increased floor insulation, from R-19 to R-30 or R-30 to R-38, can save energy over the life of the house. This energy translates into energy savings that are multiples of the initial installation costs. Floor insulation will generate the greatest savings in colder climates; in moderate climates, the target insulation level should depend on economics. # Blow-In Insulation A blown-in insulation system allows the builder or insulator to fill the entire cavity completely, even around pipes, wires and other appurtenances. Using well-trained installers will pay dividends in quality workmanship. # Doors Today there is an endless variety of doors, ranging from metal doors with or without insulation to hollow core to solid wood. When properly installed into fitted frames, doors serve as a heat barrier to maintain indoor temperatures. 
# Blow-In Insulation
A blown-in insulation system allows the builder or insulator to fill the entire cavity completely, even around pipes, wires, and other appurtenances. Using well-trained installers will pay dividends in quality workmanship.

# Doors
Today there is an endless variety of doors, ranging from metal doors with or without insulation to hollow-core and solid wood doors. When properly installed into fitted frames, doors serve as a heat barrier that helps maintain indoor temperatures. Quality insulated metal doors are best if they have a thermal break between the interior and exterior metal surfaces; this keeps heat from being transferred from one side to the other.

# Standard Doors
Because doors take up a small percentage of a wall, insulating them is not as high a priority as insulating walls and ceilings. That said, heat loss follows the path of least resistance; therefore, doors should be selected that are functional and add to the energy efficiency of the house. Doors usually have lower R-values than the surrounding wall. Storm doors can add R-1 to R-2 to the existing door's R-value. They are a valuable addition to doors that are frequently used and those that are exposed to cold winds, snow, and other weather. Screens allow natural breezes to circulate air from outside, rather than relying totally on air-conditioning, which can be energy intensive. When considering replacement doors, select insulated metal foam-core doors. Besides providing insulation, metal doors offer good security, seal more tightly, and tend to warp less. Metal doors also are more soundproof than conventional wood doors.

# Sliding Glass Doors
Although sliding glass doors have aesthetic appeal, they have very low R-values and hence are minimally energy efficient. To improve the energy efficiency of existing sliding glass doors, the homeowner should ensure that they seal tightly and are properly weather-stripped. Additionally, heavy insulated drapes with weights, which impede airflow, can cut down on heat loss through sliding glass doors.

# Door Installation
Doors must be installed as recommended by the manufacturer. Care must be taken to ensure that doors are installed in a manner that does not trap moisture or allow the unintended introduction of air. Numerous types of sealing materials are available, ranging from foam and plastic to metal flanging and magnetic strips.

# Hot Water Systems
The hot water tank can be insulated to make it more efficient, unless its heat loss is used within the space where it is located. Special insulation is available for this type of appliance, and insulating the tank will reduce the energy required to deliver the hot water needed by the occupants of the dwelling. Of course, any pipe that is subject to extreme temperatures also should be insulated to decrease heat loss.

# Windows
Windows by nature are transparent.
They allow occupants of a dwelling to see outside and bring in sunlight and heat from the sun. They make space more pleasant and often provide lighting for tasks undertaken in the space. Especially in winter, these desirable characteristics help offset the heat loss. Heat gain in the summer through windows can be undesirable. Rather than giving windows up, it is important to use them prudently and to keep energy considerations in mind in their design and their insulating characteristics (air, glass, plastic, or gas filler). Good design takes advantage of daylighting. Weather-stripping and sealing leaks around windows can enhance comfort and energy savings. Energy Star windows are highly recommended. Simple housekeeping measures can improve heat retention. Heat loss follows the path of least resistance: caulking, weather-stripped framing, and films can help. These measures are relatively labor intensive, low to very low in cost, and can be quite satisfying to the homeowner if accomplished correctly. On the other hand, it is not easy to find the perfect materials or even replacement parts for old windows. When working with older windows, remember the risk of lead-based paint and of dispersing toxic lead dust into the work area. Please refer to the lead section of Chapter 5, Indoor Air Pollutants and Toxic Materials.

# Caulking and Weather-Stripping
According to the U.S. Department of Energy, caulking and weather-stripping offer substantial housekeeping benefits in preventing energy loss or unwanted heat gain.

# Caulking
Caulks are airtight compounds (usually latex or silicone) that fill cracks and holes. Before applying new caulk, old caulk or paint residue remaining around a window should be removed using a putty knife, stiff brush, or special solvent. After old caulk is removed, new caulk can then be applied to all joints in the window frame and the joint between the frame and the wall. The best time to apply caulk is during dry weather when the outdoor temperature is above 45°F (7.2°C). Low humidity is important during application to prevent cracks from swelling with moisture. Warm temperatures are also necessary so the caulk will set properly and adhere to the surface.

# Weather-stripping
Weather-stripping consists of narrow pieces of metal, vinyl, rubber, felt, or foam that seal the contact area between the fixed and movable sections of a window joint. It should be applied between the sash and the frame, but should not interfere with the operation of the window.

# Replacing Window Frames
The heat-loss characteristics and the airtightness of a window vary with the type and quality of the window frame. The types of available window frames are fixed-pane, casement, double- and single-hung, horizontal sliding, hopper, and awning. Each type varies in energy efficiency. Correctly installed fixed-pane windows are the most airtight and inexpensive choice, but are not suited to places that require ventilation. The air infiltration properties of casement windows (which open sideways with hand cranks), awning windows (which are similar to casement windows but have hinges at the top), and hopper windows (inverted awning windows with hinges at the bottom) are moderate. Double-hung windows, which have top and bottom sashes (the part of the window that can slide), tend to be leaky. The advantage of the single-hung window over the double-hung is that it tends to restrict air leakage because there is only one moving part.
Horizontal sliding windows, though suitable for small, narrow spaces, provide minimal ventilation and are the least airtight. Buildings with large, older windows often have weight cavities that house the counterbalances used to raise and lower heavy sashes. These cavities should be insulated to reduce energy loss.

# Tinted Windows
Another way to conserve energy is the installation of tinted windows. Window tinting can be installed that will both conserve energy and prevent damaging ultraviolet light from entering the room and potentially fading wood surfaces, fabrics, and carpeting. Low-emissivity coatings, called low-e coatings, are also available. These coatings are designed for specific geographic regions.

# Reducing Heat Loss and Condensation
The energy efficiency of windows is measured in terms of their U-values (a measure of heat conductance) or their R-values; the U-value is the reciprocal of the R-value (U = 1/R). Aside from a few highly energy-efficient exceptions, window R-values range from 0.9 to 3.0 (a simple conversion sketch appears after the Glazing discussion below). When comparing different windows, it is advisable to apply the following guidance for R- and U-values:
- Confirm that the R- and U-values are based on standards set by the American Society of Heating, Refrigerating and Air-Conditioning Engineers.
- Confirm that the R- and U-values are calculated for the entire window, including the frame.
- Compare R- and U-values only for windows of the same style and size.
The R-value of a window in an actual house is affected by the type of glazing material, the number of layers of glass, the amount of space between layers and the nature of the gas filling them, the heat-conducting properties of the frame and spacer materials, and the airtightness associated with manufacturing. For windows, rating and approval by the National Fenestration Rating Council (or an equivalent body) is strongly recommended. Please refer to the window section of Chapter 6, Housing Structure.

# Glazing
Glazing refers to cutting and fitting windowpanes into frames. Glass has traditionally been the material of choice for windowpanes, but that is changing. Several new materials are available that can increase the energy efficiency of windows. These include the following:
- Low-emissivity (low-e) glass uses a surface coating to minimize transmission of heat through the window by reflecting 40% to 70% of incident heat while letting full light pass through the pane.
- Heat-absorbing glass is specially tinted to absorb approximately 45% of the incoming solar energy; some of this energy passes through the pane.
- Reflective glass has a reflective film that reduces heat gain by reflecting most of the incident solar radiation.
- Plastic glazing materials such as acrylic, polycarbonate, polyester, polyvinyl fluoride, and polyethylene are stronger, lighter, cheaper, and easier to cut than glass. However, they are less durable and tend to be affected by the weather more than glass is.
- Storm windows can improve the energy efficiency of single-pane windows. The simplest example of a storm window is plastic film, available in prepackaged kits, taped to the inside of the window frame. Because film can affect visibility and is easily damaged, a better choice is to attach rigid or semirigid plastic sheets, such as plexiglass, acrylic, polycarbonate, or fiber-reinforced polyester, directly to the window frame, or to mount them in channels around the frame on the outside of the building. Care should be taken in installation to avoid ripples or blemishes that will affect visibility.
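Because U is simply the reciprocal of R, converting between the two ratings and comparing windows is straightforward. The sketch below uses assumed example U-factors, not NFRC ratings for actual products:

```python
# Sketch: convert between window U-factors and R-values (R = 1/U) and
# compare the relative heat flow of two whole-window ratings. The U-factors
# below are illustrative assumptions.

def r_from_u(u: float) -> float:
    """R-value from a U-factor."""
    return 1.0 / u

def u_from_r(r: float) -> float:
    """U-factor from an R-value."""
    return 1.0 / r

single_pane_u = 1.1    # assumed typical single-pane whole-window U-factor
low_e_double_u = 0.35  # assumed low-e double-pane whole-window U-factor

for name, u in [("single pane", single_pane_u), ("low-e double pane", low_e_double_u)]:
    print(f"{name}: U = {u:.2f}, R = {r_from_u(u):.1f}")

# Heat flow through the glazing is proportional to U, so the low-e unit
# passes about 0.35 / 1.1, or roughly one-third, of the heat that the
# single-pane window does under the same conditions.
```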
# Layering
The insulating capacity of single-pane windows is minimal, around R-1. Multiple layers of glass can be used to increase the energy efficiency of windows. Double- or triple-pane windows have air-filled or gas-filled spaces, coupled with multiple panes, that resist heat flow. The space between the panes is critical because air spaces that are too wide (more than ⅝ inch) or too narrow (less than ½ inch) allow excessive heat transfer. Modern windows use inert gases, such as argon and krypton, to fill the spaces between panes because these gases are much more resistant to heat flow than air is. These gas-filled windows are more expensive than regular double-pane windows.
- Frame and spacer materials may be aluminum, wood, vinyl, fiberglass, or a combination of these materials, such as vinyl- or aluminum-clad wood.
- Aluminum frames are strong and are ideal for customized window design, but they conduct heat and are prone to condensation. The deterioration of these frames can be avoided by anodizing or coating. Their thermal resistance can be boosted using continuous strips of plastic between the interior and exterior of the frame.
- Wood frames are superior to aluminum frames in having higher R-values, tolerance to temperature extremes, and resistance to condensation. On the other hand, wood frames require considerable maintenance in the form of painting or staining. Improper maintenance can lead to rot or warping.
- Vinyl window frames made from polyvinyl chloride are available in a wide range of styles and shapes, can be easily customized, have moderate R-values, and can be competitively priced. Large windows made with vinyl frames are reinforced using aluminum or steel bars. Vinyl windows should be selected only after consideration of the concerns surrounding the use of vinyl materials and their off-gassing characteristics.
- Fiberglass frames have the highest R-values and are not given to warping, shrinking, swelling, rotting, or corroding. Fiberglass is not weather-resistant, however, so it should be painted. Some fiberglass frames are hollow; others are filled with fiberglass insulation.
- Spacers separating the panes of multipane windows are commonly made of aluminum, but aluminum conducts heat. In addition, in cold weather, the thermal resistance around the edge of such a window is lower than that in the center, allowing heat to escape and condensation to occur along the edges.
- Polyvinyl chloride foam separators placed along the edges of the frame reduce heat loss and condensation. Window manufacturers use foam separators, nylon spacers, and insulation materials such as polystyrene and rock wool between the glass panes inside windows.

# Other Options
Shades, shutters, and drapes used on windows inside the house reduce heat loss in the winter and heat gain in the summer. Heat gain during summer can also be minimized by the use of awnings, exterior shutters, or screens. These cost-effective window treatments should be considered before deciding on window replacement. By considering orientation, daylighting, the storage or reflection of solar energy, and the materials used within the house and on the building envelope, heat loss and gain can be decreased.

# Solar Energy
Solar energy is a form of renewable energy available to homeowners for heating, cooling, and lighting. Many of the more energy-efficient new structures are designed to store solar energy.
Remodeled structures may be retrofitted to increase energy efficiency by improving insulation characteristics, improving airflow and the airtightness of the structure, and enhancing the ability to use solar energy. Solar energy systems are either active or passive. Whereas active solar systems use some type of mechanical power to collect, store, and distribute the sun's energy, passive systems use the materials and design elements of the structure itself.

# Active Solar Systems
Active solar systems use devices to collect, convert, and deliver solar energy. Solar collectors on roofs or other south-facing surfaces can be used to heat water and air and to generate electricity. Active solar systems can be installed in new or existing buildings and periodically need to be inspected and maintained. Active solar energy equipment consists of collectors, a storage tank, piping or ductwork, fans, motors, and other hardware. Flat panel collectors (Figure 13.5) can be placed on the roof or on walls. Typically, the collector is a sandwich of one or two sheets of glass or plastic over an air space above a metal absorber plate, which is painted black to enhance heat absorption. Once the sun's energy has been collected and converted to heat, the heat is transferred to a liquid storage tank. The heated liquid travels through coils in the hot water tank, and the heat is transferred to the water and perhaps to the heating system. Most hot water systems use a liquid collector system because it is more efficient and less costly than an air-type system. In the southwestern United States, solar roof ponds have become popular for solar cooling. Evaporative cooling systems depend on water vaporization to lower the temperature of the air. These have been shown to be more effective in dry climates than in areas with extremely high relative humidity. In certain climates, like those in the Hawaiian Islands, using solar energy is cost-effective for providing hot water. Some builders even include it as a standard feature in their homes. The total cost to the homeowner of a solar energy system consists of capital, operational, and maintenance costs. The real cost of capital may be lowered by the availability of tax credits offered at the federal (to lower federal income taxes) and state levels. Homeowners and builders can benefit from tax credits because they lower the total upfront investment cost of installing active solar systems. This upfront cost is the major portion of the total cost of using solar energy, because operation and maintenance costs are small in comparison with initial system costs.

# Passive Solar Systems
Buildings designed to use passive solar energy have features incorporated into their design that absorb and slowly release the sun's heat. In cold climates, the design allows the light and heat of the sun to be stored in the structure while insulating against the cold. In warm climates, the best effect is achieved by admitting light while rejecting heat. A building using passive solar systems may have the following features in the floor plan:
- Large south-facing windows
- Small windows in other directions, particularly on the north side of the structure
- Designs that allow daylight and solar heat to permeate the main living areas
- Special glass to block ultraviolet radiation
- Building materials that absorb and slowly reradiate the solar heat
- Structural features such as overhangs, baffles, and summer shading to eliminate summer overheating.
Passive design can be a direct-gain system, in which the sun shines directly into the building, heating it and storing heat in the building materials (concrete, stone floor slabs, and masonry partitions). Alternatively, it may be an indirect-gain system, in which the thermal mass is located between the sun and the living space. Isolated gain is yet another type of system, one separated from the main living area (such as a sunroom or a solar greenhouse), with convective loops carrying heat into the living space. Energy Star is a program supported and promoted by the U.S. Environmental Protection Agency (EPA) that helps individuals protect the environment through superior energy efficiency. For the individual in his or her home, energy-efficient choices can save families about one-third on their energy bills, with similar reductions in greenhouse gas emissions, without sacrificing features, style, or comfort. When replacing household products, look for ones that have earned the Energy Star label; these products meet strict energy-efficiency guidelines set by EPA and the U.S. Department of Energy. When looking for a new home, look for one that has earned Energy Star approval. If you are planning to make larger improvements to your home, EPA offers tools and resources to help you plan and undertake projects to reduce your energy bills and improve home comfort. In 2004 alone, Americans, with the help of Energy Star, saved enough energy to power 24 million homes and avoid greenhouse gas emissions equivalent to those from 20 million cars, all while saving $10 billion.

# Conducting an Energy Audit
Energy audits can help identify areas where energy investments can be made, thereby reducing the energy used in lighting, heating, cooling, and meeting other demands of housing occupants. An inspection can evaluate whether energy-saving measures comply with codes and with accepted or written standards. For example, if a new addition requires the equivalent of R-19 insulation in the ceilings, this can be validated in the inspection process. Whereas an audit is generally informational, an inspection should validate that materials and workmanship have yielded a structure that protects the occupants from the elements, such as rain, snow, wind, cold, and heat. Potentially hazardous situations within a structure should be evaluated in an inspection. In the case of energy efficiency, the overall goal of a housing inspection is to identify potentially hazardous conditions and to help create conditions that enhance, rather than put at risk, the health and welfare of the occupants. The housing inspector should be aware that there is variation (sometimes quite significant) in heating degree days, cooling loads, and relative humidity within given regions. Local and regional topography, as well as site conditions, can affect temperatures and moisture. Numerous Web sites listed in this chapter's Additional Sources of Information section discuss the procedures for conducting energy audits. Local and regional utilities often offer audit services and assist with selecting cost-effective conservation measures for given areas of the United States.

# Introduction
Swimming is one of the best forms of exercise available, and a residential swimming pool can provide much pleasure. Nevertheless, it takes a great deal of work and expense to make and keep the pool water clean and free of floating debris.
Without a doubt, a properly maintained and operated pool is quite rewarding. Home pools, however, are sometimes referred to as attractive nuisances or hazards. It is essential to be able to evaluate the risks associated with a pool. A regulatory agent or consultant must understand the total engineered pool system and be capable of identifying all equipment, valves, and piping systems. The piping system for a pool should be color-coded to help the pool operator or the owner determine the correct way to operate the swimming pool. The specific goal is to protect the owners, their families, and others who may be attracted to a residential pool. Residential pools and spas should provide clean, clear water; water free of disease agents; and a safe recreational environment. In addition, residential pools and spas should have effective, properly operating equipment and effective maintenance and operation.

# Childproofing
Although it seems obvious, close supervision of young children is vital for families with a residential pool. A common scenario is a young child leaving the house without the parent or caregiver realizing it. Children are drawn to water, and they can drown even if they know how to swim. All children should be supervised at all times while in and around a pool. The key to preventing pool tragedies is to provide layers of protection. These layers include limiting pool access, using pool alarms, closely supervising children, and being prepared in case of an emergency. The U.S. Consumer Product Safety Commission (CPSC) offers these tips to prevent drowning (the dimensional guidance is summarized in the sketch following this list):
- Fences and walls should be at least 4 feet high and installed completely around the pool. The bottom of the fence should be no more than 2 inches above grade. Openings in the fence should be a maximum of 4 inches. A fence should be difficult to climb over.
- Fence gates should be self-closing and self-latching. The latch should be out of a small child's reach. The gate should open away from the pool; the latch should face the pool.
- Any doors with direct pool access should have an audible alarm that sounds for 30 seconds. The alarm control must be mounted a minimum of 54 inches high and must reset automatically.
- If the house forms one side of the barrier to the pool, then doors leading from the house to the pool should be protected with alarms that produce a sound when a door is opened.
- Young children who have taken swimming lessons should not be considered "drown proof"; young children should always be watched carefully while swimming.
- A power safety cover (a motor-powered barrier that can be placed over the water area) can be used when the pool is not in use.
- Rescue equipment and a telephone should be kept by the pool; emergency numbers should be posted. Knowing cardiopulmonary resuscitation (CPR) can be a lifesaver.
- For aboveground pools, steps and ladders should be secured and locked or removed when the pool is not in use.
- Babysitters should be instructed about potential hazards to young children in and around swimming pools and about their need for constant supervision.
- If a child is missing, the pool should always be checked first. Seconds count in preventing death or disability.
- Pool alarms can be used as an added precaution. Underwater pool alarms can be used in conjunction with power safety covers. CPSC advises consumers to use remote alarm receivers so the alarm can be heard inside the house or in other places away from the pool area.
- Toys and flotation devices should be used in pools only under supervision; they should not be used in place of supervision.
- Well-maintained rescue equipment (including a ring buoy with an attached line and/or a shepherd's crook rescue pole) should be kept by the pool.
- Emergency procedures should be clearly written and posted in the pool area.
- All caregivers must know how to swim, know how to get emergency help, and know CPR.
- Children should be taught to swim (swimming classes are not recommended for children under the age of 4 years) and should always swim with a buddy.
- Alcohol should not be consumed during or just before swimming or while supervising children.
- To prevent choking, chewing gum and eating should be avoided while swimming, diving, or playing in water.
- Water depth should be checked before entering a pool. The American Red Cross recommends 9 feet as a minimum depth for diving and jumping.
- Rules should be posted in easily seen areas. Rules should state "no running," "no pushing," "no drinking," and "never swim alone." Be sure to enforce the rules.
- Tables, chairs, and other objects should be placed well away from the pool fence to prevent children from using them to climb into the pool area.
- When the pool is not in use, all toys should be removed to prevent children from playing with or reaching for them and unintentionally falling into the water.
- A clear view of the pool from the house should be ensured by removing vegetation and other obstacles that block the view.
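Several of the CPSC tips above are dimensional and can be checked mechanically. This minimal sketch encodes only those numeric criteria (4-foot fence height, 2-inch bottom clearance, 4-inch openings, 54-inch alarm control height); the function and field names are hypothetical:

```python
# Sketch: check a pool barrier against the CPSC dimensional guidance listed
# above. Field names are hypothetical; this covers only the numeric criteria,
# not qualitative ones such as climbability or latch placement.

def check_barrier(fence_height_in, bottom_gap_in, opening_in, alarm_height_in):
    problems = []
    if fence_height_in < 48:
        problems.append("fence shorter than 4 feet")
    if bottom_gap_in > 2:
        problems.append("more than 2 inches between fence bottom and grade")
    if opening_in > 4:
        problems.append("fence openings larger than 4 inches")
    if alarm_height_in < 54:
        problems.append("door alarm control mounted below 54 inches")
    return problems or ["meets the dimensional guidance above"]

print(check_barrier(fence_height_in=45, bottom_gap_in=3, opening_in=4, alarm_height_in=54))
```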
# Hazards
Numerous issues need to be considered before building a residential pool: the location of overhead power lines, the installation and maintenance of ground-fault circuit interrupters, electrical system grounding, electrical wiring sizing, the location of the pool, and the type of vegetation near the pool. The commonly used solar covers that rest on the surface of the pool and capture the sun's heat do an excellent job of increasing the pool temperature, but they also increase the risk for drowning. If children or pets fall in and sink below the cover, it can be nearly impenetrable if they attempt to surface under it. Winterizing the pool also can be hazardous. The water in most belowground pools is seldom drained because groundwater pressure can damage the structure of the pool. Therefore, water in most home pools is only lowered below the frost line for winter protection. In these cases, a pool cover is installed to keep debris and leaves from filling the pool in the winter months. The pool cover becomes an excellent mosquito-breeding area before the pool is reopened in the spring because of the decomposing vegetation on the cover, the rain that accumulates on top of it during the winter, and the eggs laid on it in early fall and early spring. The cover also provides ideal conditions for mosquitoes to breed: stagnant water, protection from wind that can sink floating eggs, the near absence of predators, and warm water created by the pool cover collecting heat just below the surface (Figure 14.1).

# Public Health Issues
Current epidemiologic evidence indicates that correctly constructed and operated swimming pools are not a major public health problem. In 1999, however, an outbreak of Campylobacter jejuni was associated with a private pool that did not have continuous chlorine disinfection and reportedly had ducks swimming in the pool.

# Diseases
- Intestinal diseases: Escherichia coli O157:H7, typhoid fever, paratyphoid fever, amoebic dysentery, leptospirosis, cryptosporidiosis (highly chlorine resistant), and bacillary dysentery can be a problem where water is polluted by domestic or animal sewage or waste. Swimming pools have also been implicated in outbreaks of leptospirosis.
- Respiratory diseases: Colds, sinusitis, and septic sore throat can spread more readily in swimming areas as a result of close contact or improperly treated pool water, coupled with lowered resistance caused by exertion.
- Eye, ear, nose, throat, and skin infections: The exposure of delicate mucous membranes, the movement of harmful organisms into ear and nasal passages, the excessive use of water-treatment chemicals, and the presence of harmful agents in water can contribute to eye, ear, nose, throat, and skin infections. Close physical contact and the presence of fomites (such as towels) also help to spread athlete's foot, impetigo, and dermatitis.

# Injuries
Injuries and drowning deaths are by far the greatest problem at swimming pools. Lack of bather supervision is a prime cause, as is the improper construction, use, and maintenance of equipment. Injuries include evisceration, electrocution, entrapment, and entanglement. Some particular problem areas include the following:
- loose or poorly located diving boards,
- slippery decks or pool bottoms,
- poorly designed or located water slides,
- projecting or ungrated pipes and drains that can catch hair or body parts,
- drain grates of inadequate size,
- improperly installed or maintained electrical equipment, and
- improperly vented chlorinators and mishandled chlorine materials.

# Water Testing Equipment
It is essential that correct equipment be used and maintained for assessing the water quality of both swimming pools and spas. The operators of pools and spas need to monitor a wide range of chemicals that influence pool operations and water quality. Their equipment should test for chlorine, bromine, pH, alkalinity, hardness, and cyanuric acid buildup. Chlorine should be measurable over a range of 0 to 10 parts per million (ppm). Water pH levels should be accurately measured with an acid or base test. A kit to check pool chemical levels usually includes N,N-diethyl-p-phenylene-diamine (DPD) tablet tests for free and total chlorine, and other one-step tablet tests for pH, total alkalinity, calcium hardness, and cyanuric acid. The homeowner should determine acid or base demand using an already-reacted pH sample in dropper bottles. Paper test strips with multiple tests (including chlorine, bromine, and pH) are also available, but the reliability of these tests varies greatly. If used, they should be kept fresh, protected from heat and moisture, and checked against other test systems periodically if water quality problems persist. Swimming pools are engineered systems, with demanding safety and sanitary requirements that result in rather sophisticated design standards and water treatment systems.
The size, shape, and operating system of the pool are based on the following considerations:
- the intended use of the pool and the maximum expected bather loading;
- the selection of skimmers, scuppers, or gutters, depending on the purpose, size, and shape of the pool;
- the recirculation pump, whose horsepower and impeller configuration are based on the distance, volume, and height of the water to be pumped;
- the filters, which are sized on the volume of water to be treated, the maximum gallons (liters) of water per minute that can be delivered by the pump, and the type of filter media selected; and
- the chemical feeder sizes and types, which are based on the chemicals used, the total quantity of water in the system, expected use rates, and external environmental factors, such as the amount of sunlight and wind that affect the system.

# Disinfection
The length of time it takes to disinfect a pool depends, for example, on the type of fecal accident and the chlorine level chosen to disinfect the pool. The relationship is expressed as a CT value: the disinfectant concentration C (in ppm) multiplied by the inactivation time T (in minutes). If a fecal accident is a formed stool, the chlorine level determines the time needed to inactivate Giardia: the CT value for Giardia is 45, and the value for Cryptosporidium is 9,600. If a different chlorine concentration or inactivation time is used, the CT value must remain the same. For example, to determine the length of time needed to disinfect a pool at 15 ppm after a diarrheal accident, the following formula is used: C×T = 9,600. Solving for time: T = 9,600 ÷ 15 ppm = 640 minutes, or about 10.7 hours. It would thus take 10.7 hours to inactivate Cryptosporidium at 15 ppm. The same can be done for Giardia by using its CT value of 45. CDC has Web sites that contain excellent information about safe swimming recommendations, recreational water diseases, and disinfection procedures for fecal accidents.

# Content Turnover Rate
The number of times a pool's contents can be filtered through its filtration equipment in a 24-hour period is the turnover rate of the pool. Because the filtered water is diluted with the nonfiltered water of the pool, the turbidity continually decreases. Once the pool water has reached equilibrium with the sources of contamination, a 6-hour turnover will result in 98% clarification if the pool is properly designed. A typical-use pool should have a pump and filtration system capable of pumping the entire contents of the pool through the filters every 6 hours. To determine compliance with this 6-hour turnover standard, the following formula is used:

Turnover time (hours) = pool volume (gallons) ÷ [flow rate (gallons per minute) × 60 (minutes per hour)]

Following is a sample calculation of the pool content turnover time using the rate-of-flow reading from the flow meter:

Turnover time = 90,000 gallons ÷ (180 gallons per minute × 60 minutes per hour) = 90,000 ÷ 10,800 = an 8.3-hour turnover

The above pool would not meet the required turnover rate of 6 hours. The cause could be improperly sized piping or restrictions in the piping, an undersized pump, or undersized or clogged filters. This turnover rate would probably result in cloudy water if the pool is used at the normal bather load. The decreased circulation would also make it difficult for the disinfecting equipment to meet the required levels.
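Both calculations above reduce to one-line formulas. This sketch mirrors the worked examples in the text (CT values of 45 and 9,600; a 90,000-gallon pool with a 180-gpm flow reading):

```python
# Sketch of the two calculations above: inactivation time from a CT value,
# and pool turnover time from volume and flow rate. Values mirror the
# worked examples in the text.

CT_GIARDIA = 45             # ppm-minutes (formed-stool accident)
CT_CRYPTOSPORIDIUM = 9600   # ppm-minutes (diarrheal accident)

def inactivation_hours(ct_value: float, chlorine_ppm: float) -> float:
    """T = CT / C, converted from minutes to hours."""
    return ct_value / chlorine_ppm / 60.0

def turnover_hours(pool_gallons: float, flow_gpm: float) -> float:
    """Turnover time = pool volume / (flow rate x 60 minutes per hour)."""
    return pool_gallons / (flow_gpm * 60.0)

print(f"{inactivation_hours(CT_CRYPTOSPORIDIUM, 15):.1f} h")  # 10.7 h at 15 ppm
print(f"{turnover_hours(90_000, 180):.1f} h")                 # 8.3 h -> fails the 6-h standard
```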
# Filters
Pool filters are not designed to remove bacteria but to make the water in the pool clear. Normal tap water looks quite dingy if used to fill a pool and, in some cases, the bottom of the pool is not visible. The maximum turbidity level of a pool should be less than 0.5 nephelometric turbidity units. Pool filters should be sized to ensure that the complete contents of the pool pass through the filter once every 6 hours. Home pools typically use one of three types of filters.

# High-rate Sand Filters
High-rate sand filters were introduced more than 30 years ago and reduced the size of the conventional sand filter by 80%. The sand filter is the most popular filter on the market. High-rate sand filters use a silica sand that has been strained to give it a uniform size; it is referred to as pool-grade #20 silica sand. The sand is normally 0.45 millimeters (mm) to 0.55 mm in diameter. As water passes through the filter, the sharp edges of the sand trap dirt from the pool water. When the backpressure of the filter increases by 3 to 5 psi, the filter needs to be cleaned. This is usually accomplished by reversing the flow of the water through the filter and flushing the dirt out the waste pipe until the water being discharged appears clear. These filters perform best at flow rates below 15 to 20 gallons per minute per square foot of filter surface, depending on the manufacturer of the filter.

# Cartridge Filters
Cartridge filters have been around for many years but only recently have gained popularity in the pool industry. They are similar to the filter on a car engine: the water is passed through the cartridge and returned to the pool. When the pressure of a cartridge filter increases approximately 5 psi, the pump is turned off and the top of the filter is removed. The cartridge is removed and either discarded and replaced or, in some cases, washed.

# Diatomaceous Earth
Diatomaceous earth (DE) is a porous powder made from the skeletons of billions of microscopic organisms (diatoms) that were buried millions of years ago. There are two primary types of DE filters, but they both work the same way: water comes into the filter, passes through the DE, and is returned to the pool. If properly sized and operated, DE filters are considered by some to provide the highest quality of water. They are capable of filtering the smallest particle size of all the filter types. It is usually adequate to change the DE once every 30 days. However, if the pool water is very dirty, it is not uncommon to change it 3-4 times a day until the water is clear. The frequency of backwashing will depend on many factors, including the size of the filter, the flow rate of the plumbing, and the bather load in the pool. When the pressure reading on the filter reaches the level set in the manufacturer's manual, the filter is ready for backwashing.

# Filter Loading Rates
The specification plate on the side of approved residential or commercial swimming pool filters contains such information as the manufacturer, type of filter, serial number, surface area, and designed loading rate. Knowing the surface area of the filter permits calculation of the number of gallons flowing through the filter per minute, as sketched below. An excessive flow rate can push the media into the pool or force pool solids and materials through the media, resulting in turbid water. Figure 14.2 shows a typical home pool treatment system. Regulations typically specify how much water can be filtered through the various types of pool filtration systems.
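Given the surface area and design loading rate from the specification plate, the filter's deliverable flow can be compared with the flow needed for a 6-hour turnover. The surface area and the 15-gpm-per-square-foot loading rate below are assumed example values, not specifications for any particular filter:

```python
# Sketch: does a filter support the 6-hour turnover standard? The design
# loading rate (gpm per square foot) comes from the filter's specification
# plate; the values used here are assumed examples.

def filter_flow_gpm(surface_sqft: float, loading_gpm_per_sqft: float) -> float:
    """Maximum flow the filter can handle at its design loading rate."""
    return surface_sqft * loading_gpm_per_sqft

def required_flow_gpm(pool_gallons: float, turnover_hours: float = 6.0) -> float:
    """Flow needed to pass the whole pool through the filter in one turnover."""
    return pool_gallons / (turnover_hours * 60.0)

available = filter_flow_gpm(surface_sqft=4.9, loading_gpm_per_sqft=15.0)
needed = required_flow_gpm(90_000)
print(f"filter delivers {available:.0f} gpm; {needed:.0f} gpm needed")
# ~74 gpm falls well short of the 250 gpm a 90,000-gallon pool needs for a
# 6-hour turnover, so a larger filter area (or additional filters) is required.
```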
# Disinfectants
Many disinfectants are used in pools and spas around the world, including halogen-based compounds (chlorine, bromine, iodine), ozone, and ultraviolet light with hydrogen peroxide. Those used most often are chlorine, bromine, and iodine, and each has advantages and limitations. Chlorine: Pools can be disinfected with chlorine-releasing compounds, including hypochlorite salt compounds. Calcium hypochlorite is inexpensive and popular for cold-water pools, but it is not suitable for hot pools and spas because it will promote scaling on heat exchangers and piping. Chlorine levels can be rapidly reduced with heavy use, and regular checks should be made to ensure that disinfection is maintained. Some adjustment of pH is required for most forms of chlorine disinfection. When chlorine gas is used, a fairly high alkalinity needs to be maintained to neutralize the acid formed during dosing. Sodium hypochlorite is a liquid form of chlorine with a pH of 13; it causes a slight increase in the pH of the pool water, which should be adjusted with an acidic mixture. The sun's rays will degrade sodium hypochlorite. Chlorinated isocyanurates are available in three forms: granular, tablet, and stick. The granular form contains 55%-62% available chlorine, and the stick and tablet forms contain 89% available chlorine. Tables 14.1-14.4 serve as a quick problem-solving reference for the home pool owner and operator. The CDC Web site (www.cdc.gov/healthyswimming) provides a great deal of useful information for both the inspector and the homeowner.

# Pool Water Hardness and Alkalinity
The ideal range of water hardness for a plaster pool is 200 to 275 ppm. The ideal range for a vinyl, painted, or fiberglass surface is 175 to 225 ppm. Excess hardness causes scaling, discoloration, and filter inefficiency. Lower-than-recommended hardness results in corrosion of most contact surfaces. Alkalinity should be 80 to 120 ppm. High alkalinity causes scale and high chlorine demand; low alkalinity causes unstable pH. Sodium bicarbonate will raise the alkalinity level. The pool water will be cloudy if alkalinity is over 200 ppm (a simple range check is sketched at the end of this section).

# Liquid Chemical Feeders
Positive Displacement Pump
A positive displacement pump is preferable to erosion disinfectant feeders. Positive displacement pumps can be set to administer varied and specific chemical dosage rates to ensure that a pool does not become contaminated with harmful microorganisms. A positive displacement pump does need routine cleaning, descaling, and servicing. Running a weak muriatic acid or vinegar solution through the pump weekly can minimize most major servicing of the pump. Most service on the pump involves one of four areas:

# Erosion and Flow-through Disinfectant Feeders
These feeders work by the action of water moving around a solid cake of chlorine and eroding the cake. The feeders work quite well for smaller pools, but they require considerable care and maintenance. The variables that affect the effectiveness of erosion feeders are
1. the solubility of the chlorine cake or tablet;
2. the surface area of the cake or tablet;
3. the amount of water flowing around the cake or tablet;
4. the concentration of chlorine in the cake or tablet; and
5. the number of cakes or tablets in the feeder.
Note: For safety reasons, the disinfectant cake must not be accessible.
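The target ranges above can be encoded in a simple range check. The hardness and alkalinity ranges come from this section; the pH range of 7.2 to 7.8 is a commonly cited target and is an assumption here, because the text does not specify one:

```python
# Sketch: compare test-kit readings with the target ranges given above.
# Hardness ranges come from the text (plaster vs. vinyl/painted/fiberglass);
# the pH range is an assumed, commonly cited target.

RANGES = {
    "hardness_plaster_ppm": (200, 275),
    "hardness_vinyl_ppm": (175, 225),
    "alkalinity_ppm": (80, 120),
    "pH": (7.2, 7.8),  # assumption; not specified in this section
}

def evaluate(readings: dict) -> list:
    """Return a note for every reading outside its target range."""
    notes = []
    for name, value in readings.items():
        low, high = RANGES[name]
        if value < low:
            notes.append(f"{name} low ({value} < {low})")
        elif value > high:
            notes.append(f"{name} high ({value} > {high})")
    return notes or ["all readings within target ranges"]

print(evaluate({"hardness_plaster_ppm": 310, "alkalinity_ppm": 95, "pH": 7.1}))
```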
# Spas and Hot Tubs
Hot tubs (large tubs filled with hot water for one or more people) and spas (tubs with aerating or swirling water) are used for pleasure and are increasingly being recommended for therapy. The complexity of these devices increases with each new model manufactured. Newer models often have both ozone and ultraviolet light emitters for enhanced disinfection (see the Disinfectants section earlier in this chapter). However, the environment of the spa or hot tub, if not cleaned and operated correctly, can become a culture medium for microorganisms. Because the warm water is at the ideal temperature for the growth of microorganisms, good disinfection is critical. Table 14.5 provides suggested hot tub and spa operating parameters. It is essential that all equipment works properly and that the units are cleaned and disinfected on a routine basis. Monitoring the water temperature is very important and, depending on the health of the user, can be a matter of life and death. Time in the heated water should be limited, and the water temperature for pregnant users should be below 103°F (39.4°C) to protect the unborn baby.
Btu British thermal unit
CDC Centers for Disease Control and Prevention
CFR Code of Federal Regulations
CGA Canadian Gas Association
CO carbon monoxide
CPR cardiopulmonary resuscitation
CPSC Consumer Product Safety Commission
CSIA Chimney Safety Institute of America

ABPA The American Backflow Prevention Association, http://abpa.org Is an organization whose members have a common interest in protecting drinking water from contamination through cross-connections.

ACI American Concrete Institute, http://www.concrete.org/general/home.asp Has produced more than 400 technical documents, reports, guides, specifications, and codes for the best use of concrete. ACI conducts more than 125 educational seminars each year and has 13 certification programs for concrete practitioners, as well as a scholarship program to promote careers in the industry.

AGA American Gas Association, http://www.aga.org Develops standards, tests, and qualifies products used in gas lines and gas appliance installations.

AGC Associated General Contractors of America, http://www.agc.org Is dedicated to improving the construction industry by educating the industry to employ the finest skills, promoting use of the latest technology, and advocating building the best-quality projects for owners, both public and private.

AMSA Association of Metropolitan Sewerage Agencies, http://www.amsa-cleanwater.org Represents the interests of the country's wastewater treatment agencies.

ANSI American National Standards Institute, http://www.ansi.org Coordinates work among U.S. standards-writing groups and works in conjunction with other groups, such as ISO, ASME, and ASTM.

ARI Air-Conditioning and Refrigeration Institute, http://www.ari.org Provides information about the 21st Century Research (21-CR) initiative, a private-public sector research collaboration of the heating, ventilation, air-conditioning, and refrigeration industry, with a focus on energy conservation, indoor environmental quality, and environmental protection.

ASCE American Society of Civil Engineers, http://www.asce.org Provides essential value to its members, their careers, partners, and the public by developing leadership, advancing technology, advocating lifelong learning, and promoting the profession.

ASHI The American Society of Home Inspectors, http://www.ashi.org Is a source of information about the home inspection profession.

ASHRAE American Society of Heating, Refrigerating and Air-Conditioning Engineers, http://www.ashrae.org Writes standards and guidelines that include uniform methods of testing for rating purposes, describe recommended practices in designing and installing equipment, and provide other information to guide the industry. ASHRAE has more than 80 active standards and guideline project committees, addressing such broad areas as indoor air quality, thermal comfort, energy conservation in buildings, reducing refrigerant emissions, and the designation and safety classification of refrigerants.

Housing quality is key to the public's health. Translating that simple axiom into action is the topic of this book. In the 30 years since the first edition was published, the nation's understanding of how specific housing conditions are related to disease and injury has matured and deepened.
This new edition will enable public health and housing professionals to grasp our shared responsibility to ensure that our housing stock is safe, decent, affordable, and healthy for our citizens, especially those who are particularly vulnerable and who spend more time in the home, such as children and the elderly. The Centers for Disease Control and Prevention and the U.S. Department of Housing and Urban Development (HUD) have worked together with many others to discover ways to eliminate substandard housing conditions that harm health. For example, the advances in combating waterborne diseases were possible, in part, through the standardization of indoor plumbing and sewage disposal and the institution of federal, state, and local regulations and codes. Childhood lead poisoning has been dramatically reduced, in part, through the elimination of residential lead-based paint hazards. Other advances have been made to protect people from carbon monoxide poisoning, falls, safety hazards, electrocution, and many other risks. However, more must be done to control existing conditions and to understand emerging threats that remain poorly understood. For example, nearly 18 million Americans live with the health threat of contaminated drinking water supplies, especially in rural areas where on-site wastewater systems are prevalent. Despite progress, thousands of children still face the threat of lead poisoning from residential lead paint hazards. The increase in asthma in recent decades and its relationship to housing conditions such as excess moisture, mold, settled dust allergens, and ventilation remains the subject of intense research. The impact of energy conservation measures on the home environment is still unfolding. Simple, affordable construction techniques and materials that minimize moisture problems and indoor air pollution, improve ventilation, and promote durability and efficiency continue to be uncovered. A properly constructed and maintained home is nearly timeless in its usefulness. A home is often the biggest single investment people make. This manual will help to ensure that the investment is a sound one that promotes healthy and safe living. Home rehabilitation has increased significantly in the last few years, and HUD has prepared a nine-part series, The Rehab Guide, that can assist both residents and contractors in the rehabilitation process. For additional information, go to http://www.huduser.org/publications/destech/rehabgui.html.

# Preface
We acknowledge the suggestions, assistance, and review of the numerous individuals and organizations that went into the original and current versions of this manual. The revisions to this manual were made by a team of environmental health, housing, and public health professionals led by Professor Joe Beck, Dr. Darryl Barnett, Dr. Gary Brown, Dr. Carolyn Harvey, Professor Worley Johnson, Dr. Steve Konkel, and Professor Charles Treser. Individuals from the following organizations were involved in the various drafts of this manual:
• National Association of Housing and Redevelopment Officials;
• Department of Building, Housing and Zoning (Allentown, Pennsylvania);
• Code Enforcement Associates (East Orange, New Jersey);
• Eastern Kentucky University (Richmond, Kentucky);
• University of Washington (Seattle, Washington); and
• Battelle Memorial Institute (Columbus, Ohio).
Specifically, our gratitude goes to the following reviewers:
• Dr. David Jacobs, Martin Nee, and Dr.
Peter Ashley, HUD;
• Pat Bohan, East Central University;
• James Larue, The House Mender Inc.;
• Ellen Tohn, ERT Associates.

# Definitions
Accessory building or structure: a detached building or structure in a secondary or subordinate capacity from the main or principal building or structure on the same premises.
Appropriate authority/Authority having jurisdiction (AHJ): a person within the governmental structure of the corporate unit who is charged with the administration of the appropriate code.
Ashes: the residue from burning combustible materials.
Attic: any story or floor of a building situated wholly or partly within the roof, and so designed, arranged, or built to be used for business, storage, or habitation.
Basement: the lowest story of a building, below the main floor and wholly or partially lower than the surface of the ground.
Building: a fixed construction with walls, foundation, and roof, such as a house, factory, or garage.
Bulk container: any metal garbage, rubbish, or refuse container having a capacity of 2 cubic yards or greater and equipped with fittings for hydraulic or mechanical emptying, unloading, or removal.
Central heating system: a single system supplying heat to one or more dwelling units or more than one rooming unit.
Chimney: a vertical masonry shaft of reinforced concrete, or other approved noncombustible, heat-resisting material, enclosing one or more flues, for the purpose of removing products of combustion from solid, liquid, or gaseous fuel.
Dilapidated: in a state of disrepair or ruin and no longer adequate for the purpose or use for which it was originally intended.
Dormitory: a building or a group of rooms in a building used for institutional living and sleeping purposes by four or more persons.
Dwelling: any enclosed space wholly or partly used or intended to be used for living, sleeping, cooking, and eating. (Temporary housing, as hereinafter defined, shall not be classified as a dwelling.) Industrialized housing and modular construction that conform to nationally accepted industry standards and are used or intended for use for living, sleeping, cooking, and eating purposes shall be classified as dwellings.
Dwelling unit: a room or group of rooms located within a dwelling forming a single habitable unit with facilities used or intended to be used by a single family for living, sleeping, cooking, and eating.
Egress: arrangements and openings to assure a safe means of exit from buildings.
Extermination: the control and elimination of insects, rodents, or other pests by eliminating their harborage places; by removing or making inaccessible materials that may serve as their food; or by poisoning, spraying, fumigating, trapping, or any other recognized and legal pest elimination method approved by the local or state authority having such administrative authority. Extermination is one of the components of integrated pest management.
Fair market value: a price at which both buyers and sellers will do business.
Family: one or more individuals living together and sharing common living, sleeping, cooking, and eating facilities (see also Household).
Flush toilet: a toilet bowl that can be flushed with water supplied under pressure and that is equipped with a water-sealed trap above the floor level.
Garbage: animal and vegetable waste resulting from the handling, preparation, cooking, serving, and nonconsumption of food.
Grade: the finished ground level adjacent to a required window.
Guest: an individual who shares a dwelling unit in a nonpermanent status for not more than 30 days.
Habitable room: a room or enclosed floor space used or intended to be used for living, sleeping, cooking, or eating purposes, excluding bathrooms, laundries, furnace rooms, pantries, kitchenettes and utility rooms of less than 50 square feet of floor space, foyers, communicating corridors, stairways, closets, storage spaces, workshops, and hobby and recreation areas.
Health officer: the legally designated health authority of the jurisdiction or that person's authorized representative.
Heated water: water heated to a temperature of not less than 120°F-130°F (49°C-54°C) at the outlet.
Heating device: all furnaces, unit heaters, domestic incinerators, cooking and heating stoves and ranges, and other similar devices.
Household: one or more individuals living together in a single dwelling unit and sharing common living, sleeping, cooking, and eating facilities (see also Family).
Infestation: the presence within or around a dwelling of any insects, rodents, or other pests.
Integrated pest management: a coordinated approach to managing roaches, rodents, mosquitoes, and other pests that combines inspection, monitoring, treatment, and evaluation, with special emphasis on decreased use of toxic agents.
Kitchen: any room used for the storage and preparation of foods and containing the following equipment: a sink or other device for dishwashing, a stove or other device for cooking, a refrigerator or other device for cold storage of food, cabinets or shelves for storage of equipment and utensils, and a counter or table for food preparation.
Kitchenette: a small kitchen or an alcove containing cooking facilities.
Lead-based paint: any paint or coating with a lead content equal to or greater than 1 milligram per square centimeter, or 0.5% by weight.
Multiple dwelling: any dwelling containing more than two dwelling units.
Occupant: any individual, over 1 year of age, living, sleeping, cooking, or eating in, or having possession of, a dwelling unit or a rooming unit; except that in dwelling units a guest shall not be considered an occupant.
Operator: any person who has charge, care, control, or management of a building, or part thereof, in which dwelling units or rooming units are let.
Ordinary summer conditions: a temperature 10°F (5.6°C) below the highest recorded temperature in the locality for the prior 10-year period.
Ordinary winter conditions: a temperature 15°F (8.3°C) above the lowest recorded temperature in the locality for the prior 10-year period.
Owner: any person who alone, jointly, or severally with others (a) shall have legal title to any premises, dwelling, or dwelling unit, with or without accompanying actual possession thereof, or (b) shall have charge, care, or control of any premises, dwelling, or dwelling unit, as owner or agent of the owner, or as executor, administrator, trustee, or guardian of the estate of the owner.
Permissible occupancy: the maximum number of individuals permitted to reside in a dwelling unit, rooming unit, or dormitory.
Person: any individual, firm, corporation, association, partnership, cooperative, or government agency.
Plumbing: all of the following supplied facilities and equipment: gas pipes, gas-burning equipment, water pipes, garbage disposal units, waste pipes, toilets, sinks, installed dishwashers, bathtubs, shower baths, installed clothes-washing machines, catch basins, drains, vents, and similarly supplied fixtures, and the installation thereof, together with all connections to water, sewer, or gas lines.
Privacy: the existence of conditions that permit an individual or individuals to carry out an activity without interruption or interference, by sight or sound, from unwanted individuals.
Rat harborage: any conditions or place where rats can live, nest, or seek shelter.
Ratproofing: a form of construction that will prevent the entry or exit of rats to or from a given space or building, or prevent them from gaining access to food, water, or harborage. It consists of closing and keeping closed every opening in foundations, basements, cellars, exterior and interior walls, ground or first floors, roofs, sidewalk gratings, sidewalk openings, and other places that may be reached and entered by rats by climbing, burrowing, or other methods, by the use of materials impervious to rat gnawing and other methods approved by the appropriate authority.
Refuse: leftover and discarded organic and nonorganic solids (except body wastes), including garbage, rubbish, ashes, and dead animals.
Refuse container: a watertight container constructed of metal or another durable material impervious to rodents, that is capable of being serviced without creating unsanitary conditions, or such other containers as have been approved by the appropriate authority (see also Appropriate authority). Openings into the container, such as covers and doors, shall be tight fitting.
Rooming house: any dwelling other than a hotel or motel, or that part of any dwelling, containing one or more rooming units or one or more dormitory rooms, in which persons either individually or as families are housed, with or without meals being provided.
Rooming unit: any room or group of rooms forming a single habitable unit used or intended to be used for living and sleeping, but not for cooking purposes.
Rubbish: nonputrescible solid wastes (excluding ashes) consisting of either (a) combustible wastes, such as paper, cardboard, plastic containers, yard clippings, and wood; or (b) noncombustible wastes, such as cans, glass, and crockery.
Safety: the condition of being reasonably free from danger and hazards that may cause accidents or disease.
Space heater: a self-contained heating appliance of either the convection type or the radiant type, intended primarily to heat only a limited space or area, such as one room or two adjoining rooms.
Supplied: paid for, furnished by, provided by, or under the control of the owner, operator, or agent.
System: the dynamic interrelationship of components designed to enact a vision.
Systems theory: the concept proposed to promote the dynamic interrelationship of activities designed to accomplish a unified system.
Temporary housing: any tent, trailer, mobile home, or other structure used for human shelter that is designed to be transportable and that is not attached to the ground, to another structure, or to any utility system on the same premises for more than 30 consecutive days.
Toxic substance: any chemical product applied on the surface of or incorporated into any structural or decorative material, or any other chemical, biologic, or physical agent in the home environment or its immediate surroundings, that constitutes a potential hazard to human health at acute or chronic exposure levels.

Variance: a difference between that which is required or specified and that which is permitted.

In addition to the standards and organizations listed in this section, the U.S. Justice Department enforces the requirements of the Americans with Disabilities Act (ADA) (http://www.ada.gov) and assures that products fully comply with the provisions of the act to ensure equal access for physically challenged users.

# AWWA American Water Works Association, http://www.awwa.org
Promotes public health through improvement of the quality of water and develops standards for valves, fittings, and other equipment.

# CGA Canadian Gas Association, http://www.cga.ca
Develops standards, tests, and qualifies products used in gas lines and gas appliance installations.

# CPSC U.S. Consumer Product Safety Commission, http://www.cpsc.gov
Protects the public from unreasonable risks for serious injury or death from more than 15,000 types of consumer products. CPSC is committed to protecting consumers and families from products that pose a fire, electrical, chemical, or mechanical hazard or can injure children.

# CRBT Center for Resourceful Building Technology, http://www.crbt.org
Contains the online Guide to Resource-Efficient Building Elements, which provides information about environmentally efficient construction materials, including foundations, wall systems, panels, insulation, siding, roofing, doors, windows, interior finishing, and floor coverings.

# EPA U.S. Environmental Protection Agency, http://www.epa.gov
Protects human health and the environment.

# FM Factory Mutual, http://fmglobal.com
Develops standards and qualifies products for use by the general public and develops standards for materials, products, systems, and services.

# HFHI Habitat for Humanity International, http://www.habitat.org
Is a nonprofit, ecumenical Christian housing ministry. HFHI seeks to eliminate poverty housing and homelessness from the world and to make decent shelter a matter of conscience and action.

# ICBO The Uniform Building Code (UBC)/International Conference of Building Officials, http://www.iccsafe.org
Is the most widely adopted model building code in the world and is a proven document meeting the needs of government units charged with enforcement of building regulation. Published triennially, the UBC provides complete regulations covering all major aspects of building design and construction relating to fire and life safety and structural safety. The requirements reflect the latest technologic advances available in the building and fire- and life-safety industry.

# ICC International Code Council, http://www.iccsafe.org
Produces the most widely adopted and enforced building safety codes in the United States (I-Codes). The International Residential Code (IRC) 2003 has been adopted by many states, jurisdictions, and localities. The IRC also references several industry standards, such as ACI 318, ASCE 7, ASTM, and ANSI standards, that cover specific loads, load combinations, design methods, and material specifications.
# ISO International Organization for Standardization, http://www.iso.org
Provides internationally recognized certification for manufacturers that comply with high standards of quality control, developed standards ISO-9000 through ISO-9004, and qualifies and lists products suitable for use in plumbing installations.

# MSS Manufacturers Standardization Society of the Valve and Fittings Industry, Inc., http://www.mss-hq.com
Develops technical codes and standards for the valve and fitting industry.

# NACHI The National Association of Certified Home Inspectors, http://www.nachi.org/index.htm
Is the world's largest, most elite nonprofit inspection association.

# NAHB National Association of Home Builders, http://www.nahb.org
Is a trade association representing more than 220,000 residential home building and remodeling industry members. NAHB is affiliated with more than 800 state and local home builders associations around the country. NAHB urges codes and standards development and application that protect public health and safety without cost impacts that decrease affordability and consequently prevent people from moving into new, healthier, safer homes.

# NEC National Electrical Code, http://www.nfpa.org
Protects public safety by establishing requirements for electrical wiring and equipment in virtually all buildings.

# NESC National Environmental Services Center, http://www.nesc.wvu.edu/nesc/nesc_about.htm
Is a repository for water, wastewater, solid waste, and environmental training research.

# NFPA National Fire Protection Association, http://www.nfpa.org/index.asp
Develops, publishes, and disseminates more than 300 consensus codes and standards intended to minimize the possibility and effects of fire and other risks.

# NOWRA National Onsite Wastewater Recycling Association, http://www.nowra.org
Provides leadership and promotes the onsite wastewater treatment and recycling industry through education, training, communication, and quality tools to support excellence in performance.

# NSF National Sanitation Foundation, http://www.nsf.org
Develops standards for equipment, products, and services; a nonprofit organization now known as NSF International.

# UL Underwriters Laboratories, http://www.ul.com
Has developed more than 800 Standards for Safety. Millions of products and their components are tested to UL's rigorous safety standards.

# WEF Water Environment Federation, http://www.wef.org
Is a not-for-profit technical and educational organization with members from varied disciplines who work toward the WEF vision of preservation and enhancement of the global water environment. The WEF network includes water quality professionals from 76 member associations in 30 countries.

# Executive Summary

The original Basic Housing Inspection manual was published in 1976 by the Center for Disease Control (now known as the Centers for Disease Control and Prevention). Its Foreword stated: "The growing numbers of new families and the increasing population in the United States have created a pressing demand for additional housing that is conducive to healthful living. These demands are increased by the continuing loss of existing housing through deterioration resulting from age and poor maintenance. Large numbers of communities in the past few years have adopted housing codes and initiated code enforcement programs to prevent further deterioration of existing housing units.
This growth in housing activities has caused a serious problem for communities in obtaining qualified personnel to provide the array of housing services needed, such as information, counseling, technical advice, inspections, and enforcement. As a result, many agencies throughout the country are conducting comprehensive housing inspection training courses. This publication has been designed to be an integral part of these training sessions."

The original Basic Housing Inspection manual has been successfully used for several decades by public health and housing personnel across the United States. Although much has changed in the field of housing construction and maintenance, and health and safety issues have expanded, the manual continues to have value, especially as it relates to older housing.

Many housing deficiencies affect health and safety. For example, lead-based paint and dust may contribute to lead poisoning in children; water leakage and mold may contribute to asthma episodes; improper use and storage of pesticides may result in unintentional poisoning; and the lack of working smoke and carbon monoxide alarms may contribute to serious injury and death.

Government agencies have been very responsive to "healthy homes" issues. The U.S. Department of Housing and Urban Development (HUD) created an office with an exclusive focus on healthy homes. In 2003, CDC joined HUD in the effort to improve housing conditions through the training of environmental health practitioners, public health nurses, housing specialists, and others who have interest and responsibility for creating healthy homes.

The revised Basic Housing Inspection manual, renamed the Healthy Housing Reference Manual, responds to the enormous changes that have occurred in housing construction methods and materials and to new knowledge related to the impact of housing on health and safety. New chapters have been added, making the manual more comprehensive. For example, an entire chapter is devoted to rural water supplies and on-site wastewater treatment, and a new chapter discusses issues related to residential swimming pools and spas. At over 200 pages, the comprehensive revised manual is designed primarily as a reference document for public health and housing professionals who work in government and industry.

The Healthy Housing Reference Manual contains 14 chapters, each with a specific focus. All chapters contain annotated references and a listing of sources for additional topic information. A summary of the content of each chapter follows:

Chapter One, Housing History and Purpose, describes the history of dwellings and urbanization and housing trends during the last century.

Chapter Two, Basic Principles of Healthy Housing, describes the basic principles of healthy housing and safety (physiologic needs, psychologic needs, and protection against injury and disease) and lays the groundwork for following chapters.

Chapter Three, Housing Regulations, reviews the history of housing regulations, followed by a discussion of zoning, housing, and building codes.

Chapter Four, Disease Vectors and Pests, provides a detailed analysis of disease vectors that affect residences. It includes information on the management of mice, rats, cockroaches, fleas, flies, termites, and fire ants.

The quality of housing plays a decisive role in the health status of its occupants.
Substandard housing conditions have been linked to adverse health effects such as childhood lead poisoning, asthma and other respiratory conditions, and unintentional injuries. This new and revised Healthy Housing Reference Manual is an important reference for anyone with responsibility for and interest in creating and maintaining healthy housing.

The housing design and construction industry has made great progress in recent years through the development of innovative techniques, materials technologies, and products. The HUD Rehab Guide series was developed to inform the design and construction industry about state-of-the-art materials and innovative practices in housing rehabilitation. The series focuses on building technologies, materials, components, and techniques rather than on projects such as adding a new room. The nine volumes each cover a distinct element of housing rehabilitation and feature breakthrough materials, labor-saving tools, and cost-cutting practices. The volumes address foundations; exterior walls; roofs; windows and doors; partitions, ceilings, floors, and stairs; kitchens and baths; electrical/electronics; heating, air conditioning, and ventilation; plumbing; and site work. Additional information about the series can be found at http://www.huduser.org/publications/destech/rehabgui.html and http://www.pathnet.org/sp.asp?id=997. This series is an excellent adjunct to the Healthy Housing Reference Manual.

# Introduction

The term "shelter," which is often used to define housing, has a strong connection to the ultimate purpose of housing throughout the world. The mental image of a shelter is of a safe, secure place that provides both privacy and protection from the elements and the temperature extremes of the outside world. This vision of shelter, however, is complex. The earthquake in Bam, Iran, before dawn on December 26, 2003, killed in excess of 30,000 people, most of whom were sleeping in their homes. Although the homes were made of the simplest construction materials, many were well over a thousand years old. Living in a home where generation after generation had been raised should provide an enormous sense of security. Nevertheless, the world press has repeatedly implied that the construction of these homes made the disaster inevitable.

The homes in Iran were constructed of sun-dried mud-brick and mud. We should think of our homes as a legacy to future generations and consider the negative environmental effects of building them to serve only one or two generations before razing or reconstructing them. Homes should be built for sustainability and for ease in future modification. We need to learn the lessons of the earthquake in Iran, as well as of the 2003 heat wave in France, which killed in excess of 15,000 people who lacked climate control systems in their homes. We must use our experience, history, and knowledge of both engineering and human health needs to construct housing that meets the need for privacy, comfort, recreation, and health maintenance.

Health, home construction, and home maintenance are inseparable because of their overlapping goals. Many highly trained individuals must work together to achieve quality, safe, and healthy housing. Contractors, builders, code inspectors, housing inspectors, environmental health officers, injury control specialists, and epidemiologists all are indispensable to achieving the goal of the best housing in the world for U.S. citizens. This goal is the basis for the collaboration of the U.S.
Department of Housing and Urban Development (HUD) and the Centers for Disease Control and Prevention (CDC).

# Preurban Housing

The housing similarities among civilizations separated by vast distances may have been a result of a shared heritage, common influences, or chance. Caves were accepted as dwellings, perhaps because they were ready made and required little or no construction. However, in areas with no caves, simple shelters were constructed and adapted to the availability of resources and the needs of the population. Classification systems have been developed to demonstrate how dwelling types evolved in preurban indigenous settings [1].

# Ephemeral Dwellings

Ephemeral dwellings, also known as transient dwellings, were typical of nomadic peoples. The African bushmen and Australia's aborigines are examples of societies whose existence depends on an economy of hunting and food gathering in its simple form. Habitation of an ephemeral dwelling is generally a matter of days.

# Episodic Dwellings

Episodic housing is exemplified by the Inuit igloo, the tents of the Tungus of eastern Siberia, and the very similar tents of the Lapps of northern Europe. These groups are more sophisticated than those living in ephemeral dwellings, tend to be more skilled in hunting or fishing, inhabit a dwelling for a period of weeks, and have a greater effect on the environment. These groups also construct communal housing and often practice slash-and-burn cultivation, which is the least productive use of cropland and has a greater environmental impact than the hunting and gathering of ephemeral dwellers.

# Periodic Dwellings

Periodic dwellings are also defined as regular temporary dwellings used by nomadic tribal societies living in a pastoral economy. This type of housing is reflected in the yurt used by the Mongolian and Kirgizian groups and the Bedouins of North Africa and western Asia. These groups' dwellings essentially demonstrate the next step in the evolution of housing, which is linked to societal development. Pastoral nomads are distinguished from people living in episodic dwellings by their homogenous cultures and the beginnings of political organization. Their environmental impact increases with their increased dependence on agriculture rather than livestock.

# Seasonal Dwellings

Schoenauer [1] describes seasonal dwellings as reflective of societies that are tribal in nature, seminomadic, and based on agricultural pursuits that are both pastoral and marginal. Housing used by seminomads for several months or for a season can be considered semisedentary and reflective of the advancement of the concept of property, which is lacking in the preceding societies. This concept of property is primarily of communal property, as opposed to individual or personal property. This type of housing is found in diverse environmental conditions and is demonstrated in North America by the hogans and ramadas of the Navajo Indians. Similar housing can be found in Tanzania (Barabaig) and in Kenya and Tanzania (Masai).

# Semipermanent Dwellings

According to Schoenauer [1], sedentary folk societies or hoe peasants practicing subsistence agriculture by cultivating staple crops use semipermanent dwellings. These groups tend to live in their dwellings for varying amounts of time, usually years, as defined by their crop yields. When land needs to lie fallow, they move to more fertile areas.
Groups in the Americas that used semipermanent dwellings included the Mayans, with their oval houses, and the Hopi, Zuni, and Acoma Indians in the southwestern United States, with their pueblos.

# Permanent Dwellings

The homes of sedentary agricultural societies, whose political and social organizations are defined as nations and who possess surplus agricultural products, exemplify this type of dwelling. Surplus agricultural products allowed the division of labor and the introduction of pursuits other than food production; however, agriculture remained the primary occupation for a significant portion of the population. Although they occurred at different points in time, examples of early sedentary agricultural housing can be found in English cottages, such as the Suffolk, Cornwall, and Kent cottages [1].

# Urbanization

Permanent dwellings went beyond simply providing shelter and protection and moved to the consideration of comfort. These structures began to find their way into what is now known as the urban setting. The earliest available evidence suggests that towns came into existence around 4000 BC. Thus began the social and public health problems that would increase as the population of cities increased in number and in sophistication. In preurban housing, the sparse concentration of people allowed for movement away from human pollution or allowed the dilution of pollution at its location. The movement of populations into urban settings placed individuals in close proximity, without the benefit of previous linkages and without the ability to relocate away from pollution or other people.

Urbanization was relatively slow to begin, but once started, it accelerated rapidly. In the 1800s, only about 3% of the world's population lived in urban settlements of more than 5,000 people. This was soon to change. The year 1900 saw the percentage increase to 13.6%, and it reached 29.8% in 1950. The world's urban population has grown since that time. By 1975, more than one in three of the world's population lived in an urban setting, with almost one out of every two living in urban areas by 1997. Industrialized countries currently find approximately 75% of their population in an urban setting. The United Nations projects that in 2015 the world's urban population will rise to approximately 55% and that in industrialized nations it will rise to just over 80%.

In the Western world, one of the primary forces driving urbanization was the Industrial Revolution. The basic source of energy in the earliest phase of the Industrial Revolution was water provided by flowing rivers; therefore, towns and cities grew next to the great waterways. Factory buildings were of wood and stone and matched the houses in which the workers lived, both in construction and in location. Workers' homes were little different in the urban setting than the agricultural homes from whence they came. However, living close to the workplace was a definite advantage for the worker of the time. When the power source for factories changed from water to coal, steam became the driver and the construction materials became brick and cast iron, which later evolved into steel. Increasing populations in cities and towns increased social problems in overcrowded slums. The lack of inexpensive, rapid public transportation forced many workers to live close to their work. These factory areas were not the pastoral areas with which many were familiar, but were bleak with smoke and other pollutants.
The inhabitants of rural areas migrated to ever-expanding cities looking for work. Between 1861 and 1911, the population of England grew by 80%. The cities and towns of England were woefully unprepared to cope with the resulting environmental problems, such as the lack of potable water and insufficient sewerage. In this atmosphere, cholera was rampant, and death rates resembled those of Third World countries today. Children had a one in six chance of dying before the age of 1 year. Because of urban housing problems, social reformers such as Edwin Chadwick began to appear. Chadwick's Report on an Enquiry into the Sanitary Condition of the Labouring Population of Great Britain and on the Means of its Improvement [2] sought many reforms, some of which concerned building ventilation and open spaces around the buildings. However, Chadwick's primary contention was that the health of the working classes could be improved by proper street cleaning, drainage, sewage disposal, ventilation, and water supplies.

In the United States, Shattuck et al. [3] wrote the Report of the Sanitary Commission of Massachusetts, which was printed in 1850. The report made 50 recommendations. Among those related to housing and building issues were recommendations for protecting school children through the ventilation and sanitation of school buildings, emphasizing town planning, and controlling overcrowded tenements and cellar dwellings. Figure 1.1 demonstrates the conditions common in the tenements. In 1845, Dr. John H. Griscom, the City Inspector of New York, published The Sanitary Condition of the Laboring Population of New York [4]. His document expressed once again the argument for housing reform and sanitation. Griscom is credited with being the first to use the phrase "how the other half lives." During this time, the poor were not only subjected to the physical problems of poor housing, but also were victimized by corrupt landlords and builders.

# Trends in Housing

The term "tenement house" was first used in America and dates from the mid-nineteenth century. It was often intertwined with the term "slum." Wright [5] notes that in English, tenement meant "an abode for a person or for the soul, when someone else owned the property." Slum, on the other hand, initially was used at the beginning of the 19th century as a slang term for a room. By the middle of the century, slum had evolved into a term for a back dwelling occupied by the lowest members of society. Von Hoffman [6] states that this term had, by the end of the century, begun to be used interchangeably with tenement. The author noted that in the larger cities of the United States, the apartment house emerged in the 1830s as a housing unit of two to five stories, with each story containing apartments of two to four rooms. It was originally built for the upper group of the working class. The tenement house emerged in the 1830s when landlords converted warehouses into inexpensive housing designed to accommodate Irish and black workers. Additionally, existing large homes were subdivided and new structures were added, creating rear houses and, in the process, eliminating the traditional gardens and yards behind them. These rear houses, although new, were no healthier than the front houses, often housing up to 10 families. When this strategy became inadequate to satisfy demand, the epoch of the tenements began. Although unpopular, the tenement house grew in numbers, and, by 1850 in New York and Boston, each tenement housed an average of 65 people.
During the 1850s, the railroad house or railroad tenement was introduced. This structure was a solid, rectangular block with a narrow alley in the back. The structure was typically 90 feet long and had 12 to 16 rooms, each about 6 feet by 6 feet and holding around four people. The facility allowed no direct light or air into rooms except those facing the street or alley. Further complicating this structure was the lack of privacy for the tenants: the absence of hallways eliminated any semblance of privacy. Open sewers, a single privy in the back of the building, and uncollected garbage resulted in an objectionable and unhygienic place to live. Additionally, the wood construction common at the time, coupled with coal and wood heating, made fire an ever-present danger. As a result of a series of tenement fires in 1860 in New York, such terms as death-trap and fire-trap were coined to describe the poorly constructed living facilities [6].

The last two decades of the 19th century saw the introduction and development of dumbbell tenements, a front and rear tenement connected by a long hall. These tenements were typically five stories, with a basement and no elevator (elevators were not required for any building of fewer than six stories). Dumbbell tenements, like other tenements, were unaesthetic and unhealthy places to live. Garbage was often thrown down the airshafts, natural light was confined to the first-floor hallway, and the public hallways contained only one or two toilets and a sink. This apparent lack of sanitary facilities was compounded by the fact that many families took in boarders to help with expenses. In fact, 44,000 families rented space to boarders in New York in 1890, a number that increased to 164,000 families in 1910. In the early 1890s, New York had a population of more than 1 million, of which 70% were residents of multifamily dwellings. Of this group, 80% lived in tenements, mostly dumbbell tenements.

The passage of the New York Tenement House Act of 1901 spelled the end of the dumbbells and the acceptance of a new tenement type developed in the 1890s, the park or central court tenement, which was distinguished by a park or open space in the middle of a group of buildings. This design was implemented to reduce the activity on the front street and to enhance the opportunity for fresh air and recreation in the courtyard. The design often included roof playgrounds, kindergartens, communal laundries, and stairways on the courtyard side.

Although the tenements did not go away, reform groups supported ideas such as suburban cottages to be developed for the working class. These cottages were two-story brick and timber structures with a porch and a gabled roof. According to Wright [5], a Brooklyn project called Homewood consisted of 53 acres of homes in a planned neighborhood from which multifamily dwellings, saloons, and factories were banned.

Although there were many large homes for the well-to-do, single homes for the not-so-wealthy were not abundant. The first small house designed for the individual of modest means was the bungalow. According to Schoenauer [1], bungalows originated in India. The bungalow was introduced into the United States in 1880 with the construction of a home in Cape Cod. The bungalow, derived from designs for tropical climates, was especially popular in California.

Company towns were another trend in housing in the 19th century. George Pullman, who built railway cars in the 1880s, and John H.
Patterson, of the National Cash Register Company, developed notable company towns. Wright [5] notes that in 1917 the U.S. Bureau of Labor Standards estimated that at least 1,000 industrial firms were providing housing for their employees. The provision of housing was not necessarily altruistic; the motivation for providing it varied from company to company. Such motivations included the use of housing as a recruitment incentive for skilled workers, a method of linking the individual to the company, and a belief that a better home life would make employees happier and more productive in their jobs. Some companies, such as Firestone and Goodyear, went beyond the company town and allowed their employees to obtain loans for homes from company-established banks. A prime motivator of company town planning was sanitation, because maintaining workers' health could potentially lead to fewer workdays lost to illness. Thus, in the development of the town, significant consideration was given to sanitary issues such as window screens, sewage treatment, drainage, and water supplies.

Before World War I, there was a shortage of adequate dwellings. Even after World War I, insufficient funding, a shortage of skilled labor, and a dearth of building materials compounded the problem. However, the design of homes after the war was driven in part by health considerations, such as providing good ventilation, sun orientation and exposure, potable pressurized water, and at least one private toilet. Schoenauer [1] notes that, during the postwar years, the improved mobility of the public led to an increase in the growth of suburban areas, exemplified by the detached and sumptuous communities outside New York, such as Oyster Bay. In the meantime, the conditions of working populations, consisting of many immigrants, began to improve with the improving economy of the 1920s. The garden apartment became popular. These units were well lighted and ventilated and had a courtyard, which was open to all and well maintained.

Immediately after World War I and during the 1920s, city population growth was outpaced by population growth in the suburbs by a factor of two. The focus at the time was on the single-family suburban dwelling. The 1920s were a time of growth, but the following decade, beginning with the Great Depression in 1929, was one of deflation, cessation of building, loss of mortgage financing, and the plunge into unemployment of large numbers of building trade workers. Additionally, 1.5 million home loans were foreclosed during this period. In 1936, the housing market began to make a comeback; however, the 1930s would come to be known as the beginning of public housing, with increased public involvement in housing construction, as demonstrated by the many laws passed during the era [5].

The National Housing Act was passed by Congress in 1934 and set up the Federal Housing Administration. This agency encouraged banks, building and loan associations, and others to make loans for building homes, small business establishments, and farm buildings. If the Federal Housing Administration approved the plans, it would insure the loan. In 1937, Congress passed another National Housing Act that enabled the Federal Housing Administration to take control of slum clearance. It made 60-year loans at low interest to local governments to help them build apartment blocks. Rents in these homes were fixed, and the homes were available only to low-income families.
By 1941, the agency had assisted in the construction of more than 120,000 family units. During World War II, the focus of home building was on housing for workers who were involved in the war effort. Homes were built through federal agencies such as the Federal Housing Administration, formed in 1934 and transferred to HUD in 1965.

According to the U.S. Census Bureau (USCB) [7], in the years since World War II, the types of homes Americans live in have changed dramatically. In 1940, most homes were considered attached houses (row houses, townhouses, and duplexes). Small apartment houses with two to four apartments had their zenith in the 1950s. In the 1960 census, two-thirds of the housing inventory was made up of one-family detached houses, a share that declined to less than 60% in the 1990 census.

The postwar years saw the expansion of suburban housing, led by William J. Levitt's Levittown on Long Island, which had a strong influence on postwar building and initiated the subdivisions and tract houses of the following decades (Figure 1.2). The 1950s and 1960s saw continued suburban development, with the growing ease of transportation marked by the expansion of the interstate highway system. As the cost of housing began to increase as a result of increased demand, a grassroots movement to provide adequate housing for the poor began to emerge. According to Wright [5], in the 1970s only about 25% of the population could afford a $35,000 home.

According to Gaillard [8], Koinonia Partners, a religious organization founded in 1942 by Clarence Jordan near Albany, Georgia, was the seed for Habitat for Humanity. Habitat for Humanity, founded in 1976 by Millard Fuller, is known for its international efforts and has constructed more than 150,000 houses in 80 countries; 50,000 of these houses are in the United States. The homes are energy-efficient and environmentally friendly to conserve resources and reduce long-term costs to the homeowners.

Builders also began promoting one-floor minihomes and no-frills homes of approximately 900 to 1,200 square feet. Manufactured housing began to increase in popularity, with mobile home manufacturers becoming some of the most profitable corporations in the United States in the early 1970s. In the 1940 census, manufactured housing was lumped into the "other" category with boats and tourist cabins; by the 1990 census, manufactured housing made up 7% of the total housing inventory. Many communities ban manufactured housing from residential neighborhoods. According to Hart et al. [9], nearly 30% of all home sales nationwide are of manufactured housing, and more than 90% of those homes are never moved once they are anchored. According to a 2001 industry report, the demand for prefabricated housing was expected to increase in excess of 3% annually, to $20 billion in 2005, with most units being manufactured homes. The largest market is expected to continue in the southern part of the United States, with the most rapid growth occurring in the western part of the country. As of 2000, five manufactured-home producers, representing 35% of the market, dominated the industry.

This industry, over the past 20 to 25 years, has been affected by two pieces of federal legislation. The first, the Mobile Home Construction and Safety Standards Act, adopted by HUD in 1974, was passed to aid consumers through regulation and enforcement of HUD design and construction standards for manufactured homes.
The second, the 1980 Housing Act, required the federal government to change the term "mobile home" to "manufactured housing" in all federal laws and literature. One of the prime reasons for this change was that these homes were no longer truly mobile.

The energy crisis in the United States between 1973 and 1974 had a major effect on the way Americans lived, drove, and built their homes. The high cost of both heating and cooling homes required action, and some of the action taken was ill-advised or failed to consider healthy housing concerns. Sealing homes, using untried insulation materials, and taking other energy conservation actions often resulted in major and sometimes dangerous buildups of indoor air pollutants, both in homes and in offices. Sealing buildings for energy efficiency and using off-gassing building materials containing urea-formaldehyde, vinyl, and other new plastic surfaces, new glues, and even wallpapers created toxic environments. These newly sealed environments were not refreshed with makeup air, which resulted in the accumulation of both chemical and biologic pollutants and of moisture leading to mold growth, representing new threats to both short-term and long-term health. The results of these actions are still being dealt with today.

# Introduction

It seems obvious that health is related to where people live. People spend 50% or more of every day inside their homes. Consequently, it makes sense that the housing environment constitutes one of the major influences on health and well-being. Many of the basic principles of the link between housing and health were elucidated more than 60 years ago by the American Public Health Association (APHA) Committee on the Hygiene of Housing. After World War II, political scientists, sociologists, and others became interested in the relation between housing and health, mostly as an outgrowth of concern over poor housing conditions resulting from the massive influx into American cities of veterans looking for jobs. Now, at the beginning of the 21st century, there is a growing awareness that health is linked not only to the physical structure of a housing unit, but also to the neighborhood and community in which the house is located.

According to Ehlers and Steel [1], in 1938 a Committee on the Hygiene of Housing, appointed by APHA, created the Basic Principles of Healthful Housing, which provided guidance regarding the fundamental needs of humans as they relate to housing. These fundamental needs include physiologic and psychologic needs, protection against disease, protection against injury, protection against fire and electrical shock, and protection against toxic and explosive gases.

# Fundamental Physiologic Needs

Housing should provide for the following physiologic needs:

1. protection from the elements;
2. a thermal environment that will avoid undue heat loss;
3. a thermal environment that will permit adequate heat loss from the body;
4. an atmosphere of reasonable chemical purity;
5. adequate daylight illumination and avoidance of undue daylight glare;
6. direct sunlight;
7. adequate artificial illumination and avoidance of glare;
8. protection against excessive noise; and
9. adequate space for exercise and play.

Several behaviors put older people at particular risk during hot weather:

• Overdressing: older people, because they may not feel the heat, may not dress appropriately in hot weather.
• Visiting overcrowded places: trips should be scheduled during nonrush-hour times, and participation in special events should be carefully planned to avoid disease transmission.
• Not checking weather conditions: older people, particularly those at special risk, should stay indoors on especially hot and humid days, particularly when an air pollution alert is in effect.
USCB [3] reported that about 75% of homes in the United States used either utility gas or electricity for heating purposes, with utility gas accounting for about 50%. This, of course, varies with the region of the country, depending on the availability of hydroelectric power. This compares with the 1940 census, which found that three-quarters of all households heated with coal or wood. Electric heat was so rare that it was not even an option on the census form of 1940. Today, coal has virtually disappeared as a household fuel. Wood all but disappeared as a heating fuel by 1970, but made a modest comeback, reaching 4% nationally by 1990. This move over time to more flexible fuels allows a majority of today's homes to maintain healthy temperatures, although many houses still lack adequate insulation.

The fifth through seventh physiologic concerns address adequate illumination, both natural and artificial. Research has revealed a strong relationship between light and human physiology. The effects of light on both the human eye and human skin are notable. According to Zilber [4], one of the physiologic responses of the skin to sunlight is the production of vitamin D. Light allows us to see. It also affects body rhythms and psychologic health. Average individuals are affected daily by both natural and artificial lighting levels in their homes. Adequate lighting is important in allowing people to see unsanitary conditions and to prevent injury, thus contributing to a healthier and safer environment. Improper indoor lighting can also contribute to eyestrain from inadequate illumination, glare, and flicker.

Avoiding excessive noise (the eighth physiologic concern) is important in the 21st century. However, the concept of noise pollution is not new. Two thousand years ago, Julius Caesar banned chariots from traveling the streets of Rome late at night. In the 19th century, numerous towns and cities prohibited the ringing of church bells. In the early 20th century, London prohibited church bells from ringing between 9:00 PM and 9:00 AM. In 1929, New York City formed a Noise Abatement Commission that was charged with evaluating noise issues and suggesting solutions. At that time, it was concluded that loud noise affected health and productivity. In 1930, this same commission determined that constant exposure to loud noises could affect worker efficiency and long-term hearing levels. In 1974, the U.S. Environmental Protection Agency (EPA) produced a document titled Information on Levels of Environmental Noise Requisite to Protect Public Health and Welfare With an Adequate Margin of Safety [5]. This document identified maximum levels of 55 decibels outdoors and 45 decibels indoors to prevent interference with activities, and 70 decibels for all areas to prevent hearing loss. In 1990, the United Kingdom implemented The Household Appliances (Noise Emission) Regulations [6] to help control indoor noise from modern appliances.

Noise has physiologic impacts aside from the potential to reduce hearing ability. According to the American Speech-Language-Hearing Association [7], these effects include elevated blood pressure; negative cardiovascular effects; increased breathing rates; digestion and stomach disturbances; ulcers; negative effects on developing fetuses; difficulty sleeping after the noise stops; and the intensification of the effects of drugs, alcohol, aging, and carbon monoxide. In addition, noise can reduce attention to tasks and impede speech communication. Finally, noise can hamper the performance of daily tasks, increase fatigue, and cause irritability.
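The 1974 EPA guidance cited above reduces to three numeric levels: 45 dB indoors and 55 dB outdoors to prevent activity interference, and 70 dB everywhere to prevent hearing loss. As a minimal illustration (not part of any EPA publication; the function name and single-reading interface are assumptions of this sketch), a sound-level reading could be screened against those levels as follows:

```python
# Hypothetical screening of a sound-level reading against the guidance
# levels in EPA's 1974 "Levels Document" cited above. The EPA levels are
# defined as long-term averages, so a real assessment would use averaged
# measurements rather than a single reading; this sketch is illustrative.

def noise_flags(level_db: float, location: str) -> list[str]:
    """Return the guidance levels that a sound level exceeds."""
    flags = []
    # 45 dB indoors / 55 dB outdoors: activity-interference levels.
    interference_limit = 45.0 if location == "indoor" else 55.0
    if level_db > interference_limit:
        flags.append(
            f"exceeds the {interference_limit:.0f} dB activity-interference level ({location})"
        )
    # 70 dB in all areas: hearing-loss protection level.
    if level_db > 70.0:
        flags.append("exceeds the 70 dB hearing-loss protection level")
    return flags

if __name__ == "__main__":
    # A 72 dB indoor reading exceeds both guidance levels.
    for message in noise_flags(72.0, "indoor"):
        print(message)
```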
Household noise can be controlled in various ways. Approaching the problem during initial construction is the simplest, but has not become popular. For example, in early 2003, only about 30% of homebuilders offered sound-attenuating blankets for interior walls. A sound-attenuating blanket is a lining of noise abatement products (the thickness depends on the material being used). Spray-in-place soft foam insulation can also be used as a sound dampener, as can special walking mats for floors. Actions that can help reduce household noise include installing new, quieter appliances and isolating washing machines to reduce the noise of the machines themselves and of water passing through pipes.

The ninth and final physiologic need is for adequate space for exercise and play. Before industrialization in the United States and England, a preponderance of the population lived and worked in rural areas with ample space for exercise and play. As industrialization changed demographics, more people lived in cities without such space. In the 19th century, society responded with the development of playgrounds and public parks. Healthful housing should include the provision of safe play and exercise areas. Many American neighborhoods are severely deficient, with no area for children to play safely. New residential areas often do not have sidewalks or street lighting, nor are essential services reachable on foot, because of highway and road configurations.

# Fundamental Psychologic Needs

Seven fundamental psychologic needs for healthy housing include the following:

1. adequate privacy for the individual;
2. opportunities for normal family life;
3. opportunities for normal community life;
4. facilities that make possible the performance of household tasks without undue physical and mental fatigue;
5. facilities for maintenance of cleanliness of the dwelling and of the person;
6. possibilities for aesthetic satisfaction in the home and its surroundings; and
7. concordance with prevailing social standards of the local community.

Privacy is a necessity to most people, to some degree and during some periods. The increase in house size and the diminishing family size have, in many instances, increased the availability of privacy. Ideally, everyone would have a room of his or her own or, if that were not possible, would share a bedroom with only one person of the same sex, excepting married couples and small children. Psychiatrists consider it important for children older than 2 years to have bedrooms separate from their parents. In addition, bedrooms and bathrooms should be accessible directly from halls or living rooms and not through other bedrooms. Beyond the psychologic value of privacy, repeated studies have shown that the lack of space and quiet caused by crowding can lead to poor school performance in children.

Coupled with a natural desire for privacy is the social desire for normal family and community life. A wholesome atmosphere requires adequate living room space and adequate space for withdrawal elsewhere during periods of entertainment. This accessibility extends beyond the walls of the home and includes easy communication with centers of culture and business, such as schools, churches, entertainment, shopping, libraries, and medical services.

# Protection Against Disease

Eight ways to protect against contaminants include the following:

1. provide a safe and sanitary water supply;
2. protect the water supply system against pollution;
3. provide toilet facilities that minimize the danger of transmitting disease;
4. protect against sewage contamination of the interior surfaces of the dwelling;
5. avoid unsanitary conditions near the dwelling;
6. exclude vermin from the dwelling, which may play a part in transmitting disease;
7. provide facilities for keeping milk and food fresh; and
8. allow sufficient space in sleeping rooms to minimize the danger of contact infection.

According to the U.S. EPA [8], there are approximately 160,000 public or community drinking water systems in the United States. The current estimate is that 42 million Americans (mostly in rural America) get their water from private wells or other small, unregulated water systems. The presence of adequate water, sewer, and plumbing facilities is central to the prevention, reduction, and possible elimination of water-related diseases. According to the Population Information Program [9], water-related diseases can be organized into four categories:

• waterborne diseases, including those caused by both fecal-oral organisms and those caused by toxic substances;
• water-based diseases;
• water-related vector diseases; and
• water-scarce diseases.

Numerous studies link improvements in sanitation and the provision of potable water with significant reductions in morbidity and mortality from water-related diseases. According to studies from the 1980s, clean water and sanitation facilities have been shown to reduce infant and child mortality by as much as 55% in Third World countries.

Waterborne diseases, often referred to as "dirty-water" diseases, are the result of contamination from chemical, human, and animal wastes. Specific diseases in this group include cholera, typhoid, shigellosis, polio, meningitis, and hepatitis A and E.

Water-based diseases are caused by aquatic organisms that spend part of their life cycle in the water and another part as parasites of animals. Although rare in the United States, these diseases include dracunculiasis, paragonimiasis, clonorchiasis, and schistosomiasis. The reduction in these diseases in many countries has not only led to decreased rates of illness and death, but has also increased productivity through a reduction in days lost from work.

Water-related diseases are linked to vectors that breed and live in or near polluted and unpolluted water. These vectors are primarily mosquitoes that infect people with the disease agents for malaria, yellow fever, dengue fever, and filariasis. While the control of vectorborne diseases is a complex matter, in the United States most of the control focus has been on managing habitat and breeding areas for the vectors and on reducing and controlling human cases of the disease, which can serve as hosts for the vector. Vectorborne diseases have recently become more of a concern in the United States with the importation of the West Nile virus. Transmission of West Nile virus occurs when a mosquito vector takes a blood meal from an infected bird and later feeds on an incidental host, such as a dog, cat, horse, or other vertebrate. The human cases of West Nile virus in 2003 numbered 9,862, with 264 deaths.

Finally, water-scarce diseases are diseases that flourish where sanitation is poor because of a scarcity of fresh water. Diseases included in this category are diphtheria, leprosy, whooping cough, tetanus, tuberculosis, and trachoma. These diseases are often transmitted when the supply of fresh water is inadequate for hand washing and basic hygiene.
These conditions are still rampant in much of the world, but are essentially absent from the United States because of the extensive availability of potable drinking water. In 2000, USCB [10] reported that 1.4% of U.S. homes lacked plumbing facilities. This differs greatly from the 1940 census, when nearly one-half of U.S. homes lacked complete plumbing. The proportion has continually dropped, falling to about one-third in 1950 and then to one-sixth in 1960. Complete plumbing facilities are defined as hot and cold piped water, a bathtub or shower, and a flush toilet.

The containment of household sewage is instrumental in protecting the public from waterborne and vectorborne diseases. The 1940 census revealed that more than a third of U.S. homes had no flush toilet; in some states, 70% of homes lacked one. Of the 13 million housing units at the time without flush toilets, 11.8 million (90.7%) had an outside toilet or privy, another 1 million (7.6%) had no toilet or privy, and the remainder had a nonflush toilet in the structure. In contrast to these figures, the 2000 census data demonstrate the great progress that has been made in providing sanitary sewer facilities. Nationally, 74.8% of homes are served by a public sewer, 24.1% are served by a septic tank or cesspool, and the remaining 1.1% use other means.

Vermin, such as rodents, have long been linked to property destruction and disease. Integrated pest management, along with proper housing construction, has played a significant role in reducing vermin around the modern home. Proper food storage, ratproofing construction, and good sanitation outside the home have served to eliminate or reduce rodent problems in the 21st-century home.

Facilities to properly store milk and food have not only been instrumental in reducing the incidence of some foodborne diseases, but have also significantly changed the diet in developed countries. Refrigeration can be traced to the ancient Chinese, Hebrews, Greeks, and Romans. In the last 150 years, great strides have been made in using refrigeration to preserve and cool food. Vapor compression using air and, subsequently, ammonia as a coolant was first developed in the 1850s. In the early 1800s, natural ice was extracted for use as a coolant and preserver of food. By the late 1870s, there were 35 commercial ice plants in the United States and, by 1909, there were 2,000. However, as early as the 1890s, sources of natural ice began to be a problem as a result of pollution and sewage dumped into bodies of water. Thus, the use of natural ice as a refrigerant began to present a health problem. Mechanical manufacture of ice provided a temporary solution, which eventually led to mechanical refrigeration. Refrigeration was first used by the brewing and meatpacking industries, but most households had iceboxes (Figure 2.1), which made the ice wagon a popular icon of the late 1800s and early 1900s. In 1915, the first refrigerator, the Guardian, was introduced. This unit was the predecessor of the Frigidaire. The refrigerator became as necessary to the household as a stove or sewing machine. By 1937, nearly 6 million refrigerators were manufactured in the United States. By 1950, in excess of 80% of American farms and more than 90% of urban homes had a refrigerator.

Adequate living and sleeping space are also important in protecting against contagion. It is an issue not only of privacy, but of having enough room to reduce the potential for person-to-person disease transmission.
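The census crowding measure used in the next paragraph counts a unit as crowded when it has more than one person per room. A minimal sketch of that check, with hypothetical field names (the census definition involves detailed rules about which rooms count; the habitable-room notion from the Definitions section is used here as a stand-in):

```python
# Minimal sketch of the "more than one person per room" crowding measure
# discussed below. Field names are illustrative, not census definitions.

from dataclasses import dataclass

@dataclass
class DwellingUnit:
    occupants: int
    rooms: int  # count of habitable rooms (assumed to be at least 1)

    def persons_per_room(self) -> float:
        return self.occupants / self.rooms

    def is_crowded(self) -> bool:
        # Crowded means strictly more than one person per room.
        return self.persons_per_room() > 1.0

if __name__ == "__main__":
    unit = DwellingUnit(occupants=5, rooms=4)
    print(unit.persons_per_room())  # 1.25
    print(unit.is_crowded())        # True
```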
Much improvement has been made in the adequacy of living space for the U.S. family over the last 30 years. According to USCB [11], the average size of new single-family homes has increased from a 1970 average of 1,500 square feet to a 2000 average of 2,266 square feet. USCB [11] reports that slightly less than 5% of U.S. homes were considered crowded in 1990; that is, they had more than one person per room. However, this is an increase from the 1980 census, when the figure was 4.5%, and the only increase since the first housing census in 1940, when one in five homes was crowded. During the 1940 census, most crowded homes were found in southern states, primarily in the rural South. Crowding has become common in a few large urban areas, with more than one-fourth of all crowded units located in four metropolitan areas: Houston, Los Angeles, Miami, and New York. The rate for California has not changed significantly between 1940 (13%) and 1990 (12%). Excessive crowding in homes has the potential to increase not only communicable disease transmission, but also the stress level of occupants, because modern urban individuals spend considerably more time indoors than did their 1940s counterparts.

# Protection Against Injury

A major provision for safe housing construction is the development and implementation of building codes. According to the International Code Council one- and two-family dwelling code, the purpose of building codes is to provide minimum standards for the protection of life, limb, property, and environment, and for the safety and welfare of the consumer, general public, and the owners and occupants of residential buildings regulated by the code [12]. However, as with all types of codes, the development of innovative processes and products must be allowed to take a place in improving construction technology. Thus, according to the one- and two-family dwelling code, building codes are not intended to limit the appropriate use of materials, appliances, equipment, or methods of design or construction not specifically prescribed by the code, if the building official determines that the proposed alternative is at least equivalent to that prescribed in the code. While the details of what a code should include are beyond the scope of this section, additional information can be found at http://www.iccsafe.org/, the Web site of the International Code Council (ICC). ICC is an organization formed by the consolidation of the Building Officials and Code Administrators International, the Southern Building Code Congress International, Inc., and the International Conference of Building Officials [12].

According to the Home Safety Council (HSC) [13], the leading causes of home injury deaths in 1998 were falls and poisonings, which accounted for 6,756 and 5,758 deaths, respectively. As expected, the rates and national estimates of the number of fall deaths were highest among those older than 64 years, and stairs or steps were associated with 17% of fall deaths. Overall, falls were the leading cause of nonfatal, unintentional injuries occurring at home, accounting for 5.6 million injuries. Similar to the mortality statistics, the consumer products most often associated with emergency department visits were stairs and steps, accounting for 854,631 visits, and floors, accounting for 556,800 visits.
A national survey by HSC found that one-third of all households with stairs did not have banisters or handrails on at least one set of stairs. Homes with older persons were more likely to have banisters or handrails than those where young children live or visit. The survey also revealed that 48% of households have windows on the second floor or above, but only 25% have window locks or bars to prevent children from falling out. Bathtub mats or nonskid strips to reduce bathtub falls were used in 63% of American households; in senior households (age 70 years and older), 79% used mats or nonskid strips. Nineteen percent of the homes surveyed had grab bars to supplement the mats and strips. Significantly, only 39% of the group most susceptible to falls (people aged 70 years and older) used both nonskid surfaces and grab bars.

# Protection Against Fire

An important component of safe housing is to control conditions that promote the initiation and spread of fire. Between 1992 and, an average of 4,266 Americans died annually in fires and nearly 25,000 were injured. This fact and the following information from the United States Fire Administration (USFA) [14] demonstrate the impact of fire safety, and the lack of it, in the United States. The United States has one of the highest fire death rates in the industrialized world, with 13.4 deaths per million people. At least 80% of all fire deaths occur in residences. Residential fires account for 23% of all fires and 76% of structure fires. In one- and two-family dwellings, fires start in the kitchen 25.5% of the time and originate in the bedroom 13.7% of the time. Apartment fires also most often start in the kitchen, but at almost twice the rate (48.5%), with bedrooms again the second most common place of origin, at 13.4%. These USFA statistics also disclose that cooking is the leading cause of home fires, usually as a result of unattended cooking and human error rather than mechanical failure of the cooking units. The leading cause of fire deaths in homes is careless smoking, which can be significantly deterred by smoke alarms and smolder-resistant bedding and upholstered furniture. Heating system fires tend to be a larger problem in single-family homes than in apartments, because the heating systems in single-family homes frequently are not professionally maintained.

A number of conditions in the household can contribute to the creation or spread of fire. The USFA data indicate that more than one-third of rural Americans use fireplaces, wood stoves, and other fuel-fired appliances as primary sources of heat. These same systems account for 36% of rural residential fires. Many of these fires are the result of creosote buildup in chimneys and stovepipes.
These fires could be avoided by
• inspecting and cleaning by a certified chimney specialist;
• clearing the area around the hearth of debris, decorations, and flammable materials;
• using a metal mesh screen with fireplaces and leaving glass doors open while burning a fire;
• installing stovepipe thermometers to monitor flue temperatures;
• leaving air inlets on wood stoves open and never restricting air supply to fireplaces, thus helping to reduce creosote buildup;
• using fire-resistant materials on walls around wood stoves;
• never using flammable liquids to start a fire;
• using only seasoned hardwood rather than soft, moist wood, which accelerates creosote buildup;
• building small fires that burn completely and produce less smoke;
• never burning trash, debris, or pasteboard in a fireplace;
• placing logs in the rear of the fireplace on an adequate supporting grate;
• never leaving a fire in the fireplace unattended;
• keeping the roof clear of leaves, pine needles, and other debris;
• covering the chimney with a mesh-screen spark arrester; and
• removing branches hanging above the chimney, flues, or vents.

USFA [14] also notes that manufactured homes can be susceptible to fires. More than one-fifth of residential fires in these homes are related to the use of supplemental room heaters, such as wood- and coal-burning stoves, kerosene heaters, gas space heaters, and electrical heaters. Most fires related to supplemental heating equipment result from improper installation, maintenance, or use of the appliance. USFA recommendations to reduce the chance of fire with these types of appliances include the following:
• placing wood stoves on noncombustible surfaces or a code-specified or listed floor surface;
• placing noncombustible materials around the opening and hearth of fireplaces;
• placing space heaters on firm, out-of-the-way surfaces to reduce tipping over and subsequent spillage of fuel, and providing at least 3 feet of air space between the heating device and walls, chairs, firewood, and curtains;
• placing vents and chimneys to allow 18 inches of air space between single-wall connector pipes and combustibles and 2 inches between insulated chimneys and combustibles; and
• using only the fuel designated by the manufacturer for the appliance.

The ability to escape from a building when fire has been discovered or detected is of extreme importance. In the modern home, three key elements contribute to a safe exit during the threat of fire. The first is a working smoke alarm system. The average homeowner in the 1960s had never heard of a smoke alarm, but by the mid-1980s, laws in 38 states and in thousands of municipalities required smoke alarms in all new and existing residences. By 1995, 93% of all single-family and multifamily homes, apartments, nursing homes, and dormitories were equipped with alarms. The cost decreased from $1,000 for a professionally installed unit for a three-bedroom home in the 1970s to $10 for an owner-installed unit. According to the EPA [15], ionization-chamber and photoelectric detectors are the two most common types of smoke detector available commercially. Helmenstein [16] states that a smoke alarm uses one or both methods, and occasionally a heat detector, to warn of a fire. These units can be powered by a 9-volt battery, a lithium battery, or 120-volt house wiring. Ionization detectors function using an ionization chamber and a minute source of ionizing radiation.
The radiation source is americium-241 (perhaps 1/5,000th of a gram), and the ionization chamber consists of two plates separated by about a centimeter. The power source (battery or house current) applies voltage to the plates, charging one plate positively and the other negatively. The americium constantly releases alpha particles that knock electrons off the atoms in the air, ionizing the oxygen and nitrogen atoms in the chamber. The negative plate attracts the positively charged oxygen and nitrogen atoms, while the electrons are attracted to the positive plate, generating a small, continuous electric current. If smoke enters the ionization chamber, the smoke particles attach to the ions and neutralize them so that they do not reach the plates. The alarm is then triggered by the drop in current between the plates [16]. Photoelectric devices function in one of two ways. In the first, smoke blocks a light beam, reducing the light reaching the photocell, which sets off the alarm. In the second and more common type of photoelectric unit, smoke particles scatter the light onto a photocell, initiating an alarm. Both detector types are effective smoke sensors, and both must pass the same test to be certified as Underwriters Laboratories (UL) smoke detectors. Ionization detectors respond more quickly to flaming fires with smaller combustion particles, while photoelectric detectors respond more quickly to smoldering fires. Both can be damaged by steam or high temperatures. Photoelectric detectors are more expensive than ionization detectors and are more sensitive to minute smoke particles. However, ionization detectors have a degree of built-in security not inherent to photoelectric detectors: when the battery starts to fail in an ionization detector, the ion current falls and the alarm sounds, warning that it is time to change the battery before the detector becomes ineffective (a short sketch of this trigger logic follows at the end of this discussion). Backup batteries may be used for photoelectric detectors that are operated from the home's electrical system. According to USFA [14], a properly functioning smoke alarm diminishes the risk for dying in a fire by approximately 50% and is considered the single most important means of preventing house and apartment fire fatalities. Proper installation and maintenance, however, are key to their usefulness. Figure 2.2 shows a typical smoke alarm being tested. Following are key points regarding installation and maintenance of smoke alarms:
• Smoke alarms should be installed on every level of the home, including the basement, and both inside and outside the sleeping area.
• Smoke alarms should be installed on the ceiling or 6-8 inches below the ceiling on side walls.
• Battery replacement is imperative to proper operation. Typically, batteries should be replaced at least once a year, although some units are manufactured with a 10-year battery. A "chirping" noise from the unit indicates the need for battery replacement. A battery-operated smoke alarm has a life expectancy of 8 to 10 years.
• Battery replacement is not necessary in units that are connected to the household electrical system.
• Regardless of the type, it is crucial to test every smoke alarm monthly. Data from HSC [13] revealed that 83% of individuals with smoke alarms test them at least once a year, but only 19% of households with at least one smoke alarm test them quarterly.
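The trigger logic described above, in which any sufficient drop in ion current sounds the alarm, explains why both smoke and a fading battery set off an ionization detector. The following minimal sketch models that behavior; the normalized current values and the threshold are illustrative assumptions, not measurements from any actual detector.

```python
# Illustrative model of an ionization smoke detector's trigger logic.
# All numeric values are arbitrary assumptions for demonstration only.

BASELINE_CURRENT = 1.0   # normalized ion current: clean air, fresh battery
ALARM_THRESHOLD = 0.7    # alarm sounds when current falls below this level

def ion_current(smoke_density: float, battery_level: float) -> float:
    """Smoke particles neutralize ions, and a weak battery lowers the
    plate voltage; either effect reduces the measured current."""
    return BASELINE_CURRENT * battery_level * (1.0 - smoke_density)

def alarm(smoke_density: float, battery_level: float) -> bool:
    return ion_current(smoke_density, battery_level) < ALARM_THRESHOLD

# Smoke entering the chamber trips the alarm ...
assert alarm(smoke_density=0.4, battery_level=1.0)
# ... and so does a failing battery: this is the built-in low-battery warning.
assert alarm(smoke_density=0.0, battery_level=0.6)
# Clean air with a healthy battery keeps the alarm silent.
assert not alarm(smoke_density=0.0, battery_level=1.0)
```

The point of the sketch is that the detector does not distinguish the cause of the current drop; that single mechanism yields both the smoke response and the low-battery chirp described above.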
A second element contributing to a safe escape is a properly installed fire-suppression system. According to USFA [14], sprinkler systems were first used more than 100 years ago in New England textile mills. Currently, few homes are protected by residential sprinkler systems. However, UL-listed home systems are available and are designed to respond much faster than standard commercial or industrial sprinklers. In new construction, sprinkler systems can be installed at a reasonable price, approximately 1% of the total building price. These systems can be retrofitted to existing construction and are smaller than commercial systems. In addition, homeowner insurance discounts for such systems range between 5% and 15% and are increasingly available. The final element in escaping from a residential fire is having a fire plan. A 1999 survey conducted by USFA [14] found that 60% of Americans have an escape plan, and that 42% of these individuals have practiced the plan. Surprisingly, 26% of Americans stated they had never thought about practicing an escape plan, and 3% believed escape planning to be unnecessary. In addition, of the people who had a smoke alarm sound an alert in the year before the study, only 8% believed it to be a fire and thought they should evacuate the building.

Protection from electrical shocks and burns is also a vital element in the overall safety of the home. According to the National Fire Protection Association (NFPA) [17], electrical distribution equipment was the third-leading cause of home fires and the second-leading cause of fire deaths in the United States between 1994 and 1998. Specifically, NFPA reported that 38,300 home electrical fires occurred in 1998, resulting in 284 deaths, 1,184 injuries, and approximately $670 million in direct property damage. The same report indicated that the leading cause of electrical distribution fires was ground-fault or short-circuit problems. A third of the home electrical distribution fires resulted from problems with fixed wiring, while cords and plugs were responsible for 17% of these fires and 28% of the deaths. These statistics also reveal that electrical fires are among the leading types of home fires in manufactured homes. USFA [14] data demonstrate that many electrical fires in homes are associated with improper installation of electrical devices by do-it-yourselfers. Errors attributed to this amateur electrical work include the use of improperly rated devices, such as switches or receptacles, and loose connections that lead to overheating and arcing, resulting in fires. Recommendations to reduce the risk of electrical fires and electrocution include the following:
1. Use only the correct fuse size and do not place pennies behind a fuse.
2. Install ground fault circuit interrupters (GFCIs) on all outlets in kitchens, bathrooms, and anywhere else near water. This can also be accomplished by installing a GFCI in the breaker box, thus protecting an entire circuit.
3. Never place combustible materials near light fixtures, especially halogen bulbs, which get very hot.
4. Use only the correct bulb size in a light fixture.
5. Use only properly rated extension cords for the job at hand, sizing the cord to the wattage to be carried.
6. Never use extension cords as a long-term solution to the need for an additional outlet.
7. Never run extension cords inside walls or under rugs, because they generate heat that must be able to dissipate.

# Fire Extinguishers

A fire extinguisher should be listed and labeled by an independent testing laboratory such as FM (Factory Mutual) or UL.
Fire extinguishers are labeled according to the type of fire on which they may be used. Fires involving wood or cloth, flammable liquids, electrical equipment, or combustible metals react differently to extinguishing agents, and using the wrong type of extinguisher on a fire can be dangerous and can worsen the fire. Traditionally, the labels A, B, C, and D have been used to indicate the type of fire on which an extinguisher is to be used.

Type A: Used for ordinary combustibles such as cloth, wood, rubber, and many plastics. These fires usually leave ashes after they burn; think of Type A extinguishers for ashes. The Type A label is in a triangle on the extinguisher.

Type B: Used for flammable-liquid fires such as oil, gasoline, paints, lacquers, grease, and solvents. These substances often come in barrels; think of Type B extinguishers for barrels. The Type B label is in a square on the extinguisher.

Type C: Used for electrical fires, such as in wiring, fuse boxes, energized electrical equipment, and other electrical sources. Electricity travels in currents; think of Type C extinguishers for currents. The Type C label is in a circle on the extinguisher.

Type D: Used for metal fires involving magnesium, titanium, and sodium. These fires are very dangerous and seldom handled by the general public; think of Type D as don't get involved. The Type D label is in a star on the extinguisher.

The higher the rating number on an A or B fire extinguisher, the more fire it can put out, but higher-rated units are often the heavier models. Extinguishers need care and must be recharged after every use; a partially used unit might as well be empty. An extinguisher should be placed in the kitchen and in the garage or workshop. Each extinguisher should be installed in plain view near an escape route and away from potential fire hazards such as heating appliances. Recently, pictograms have come into use on fire extinguishers. These picture the type of fire on which an extinguisher is to be used. For instance, a Type A extinguisher has a pictogram showing burning wood, and a Type C extinguisher has a pictogram showing an electrical cord and outlet. Pictograms are also used to show what not to use; for example, a Type A extinguisher may also show a pictogram of an electrical cord and outlet with a slash through it (do not use it on an electrical fire). Fire extinguishers also carry a number rating. For Type A fires, 1 means 1¼ gallons of water, 2 means 2½ gallons, 3 means 3¾ gallons, and so on. For Type B and Type C fires, the number represents square feet; for example, 2 equals 2 square feet and 5 equals 5 square feet. Fire extinguishers can also be made to extinguish more than one type of fire. For example, an extinguisher labeled 2A5B is good for Type A fires with a 2½-gallon equivalency and for Type B fires with a 5-square-foot equivalency (a short sketch of how to decode such labels follows at the end of this section). A good extinguisher to have in each residential kitchen is a 2A10BC unit; a Type A might also be placed in the living room and bedrooms, and an ABC in the basement and garage. PASS is a simple acronym to remind you how to operate most fire extinguishers: pull, aim, squeeze, and sweep. Pull the pin at the top of the cylinder (some units require releasing a lock latch or pressing a puncture lever). Aim the nozzle at the base of the fire. Squeeze or press the handle. Sweep the contents from side to side at the base of the fire until it goes out. Shut off the extinguisher and then watch carefully for any rekindling of the fire.
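Because the label grammar is regular (a number followed by the letters it rates), these ratings are easy to decode mechanically. The sketch below is illustrative only; it applies the equivalences stated above (each Type A unit equals 1¼ gallons of water, and Type B and C numbers are read as square feet) to labels in the common "2A10BC" form.

```python
import re

GALLONS_PER_A_UNIT = 1.25  # each Type A unit equals 1 1/4 gallons of water

def parse_rating(label: str) -> dict:
    """Parse an extinguisher rating such as '2A10BC' into
    {'A': 2, 'B': 10, 'C': 10}. A number is shared by every
    letter that follows it, so '10BC' rates both B and C at 10."""
    ratings = {}
    for number, letters in re.findall(r"(\d+)([ABCD]+)", label.upper()):
        for letter in letters:
            ratings[letter] = int(number)
    return ratings

def describe(label: str) -> str:
    r = parse_rating(label)
    parts = []
    if "A" in r:
        parts.append(f"Type A: {r['A'] * GALLONS_PER_A_UNIT:g}-gallon water equivalency")
    for t in "BC":
        if t in r:
            parts.append(f"Type {t}: {r[t]} square feet")
    return "; ".join(parts)

print(describe("2A10BC"))
# Type A: 2.5-gallon water equivalency; Type B: 10 square feet; Type C: 10 square feet
```

As always, the authoritative information is the label on the extinguisher itself; a decoder like this is only a convenience for comparing units.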
# Protection Against Toxic Gases

Gas poisoning has been a hazard ever since the use of fossil fuels was combined with relatively tight housing construction. NFPA [17] notes that, according to National Safety Council statistics, unintentional poisonings by gas or vapors, chiefly carbon monoxide (CO), numbered about 600 in 1998. One-fourth of these involved heating or cooking equipment in the home. The U.S. Consumer Product Safety Commission [18] states that in 2001 an estimated 130 deaths occurred as a result of CO poisoning from residential sources; this decrease in deaths is related to the increased use of CO detectors. In addition, approximately 10,000 cases of CO-related injuries occur each year.

# Introduction

William Pitt, arguing before the British Parliament against excise officers entering private homes to levy the Cyder Tax, eloquently articulated the long-held and cherished notion of the sanctity of private property. However, a person's right to privacy is not absolute. There has always been a tension between the rights of property owners to do whatever they desire with their property and the ability of the government to regulate uses to protect the safety, health, and welfare of the community. Few, however, would argue with the right and duty of a city government to prohibit the operation of a munitions factory or a chemical plant in the middle of a crowded residential neighborhood.

# History

The first known housing laws appear in the Code of Laws of Hammurabi [1], king of Babylonia circa 1792-1750 BC. These laws addressed the responsibility of the home builder to construct a quality home and outlined the consequences for the builder if injury or harm came to the owner as a result of failure to do so. During the Puritan period (about 1620-1690), housing laws essentially governed the behavior of the members of the society. For example, no one was allowed to live alone, so bachelors, widows, and widowers were placed with other families as servants or boarders. In 1652, Boston prohibited building privies within 12 feet of the street. Around the turn of the 18th century, some New England communities implemented local ordinances that specified the size of houses. During the 17th century, additional public policies on housing were established: because the English tradition of using wooden chimneys and thatched roofs led to fires in many dwellings, several colonies passed regulations prohibiting them. The early 19th century brought an era of very rapid metropolitan growth along the East Coast, due largely to immigration from Europe and spurred by the Industrial Revolution. The most serious conditions developed in the crowded tenements of the large cities, leading to the first comprehensive tenement house legislation [2]. The principal requirements of the act included the following:
• Every room occupied for sleeping, if it does not communicate directly with the external air, must have a ventilating or transom window of at least 3 square feet to the neighboring room or hall.
• A proper fire escape is required on every tenement or lodging house.
• The roof is to be kept in repair, and the stairs are to have banisters.
• At least one toilet is required for every 20 occupants in all such houses, and those toilets must be connected to approved disposal systems.
• Cleansing of every lodging house is to be to the satisfaction of the Board of Health, which is to have access at any time.
• All cases of infectious disease are to be reported to the Board by the owner or his agent; buildings are to be inspected and, if necessary, disinfected or vacated if found to be out of repair.

There were also regulations governing distances between buildings, heights of rooms, and dimensions of windows. Although this act had some beneficial influence on overcrowding, sewage disposal, lighting, and ventilation, perhaps its greatest contribution was in laying a foundation for more stringent future legislation.

# Zoning, Housing Codes, and Building Codes

Housing is inextricably linked to the land on which it is located. Changes in the patterns of land use in the United States, shifting demographics, an awareness of the need for environmental stewardship, and competing uses for increasingly scarce (desirable) land have all placed added stress on the traditional relationship between the property owner and the community. This is not a new development. In the early settlement of this country, colonists followed the precedent set by their forefathers in Great Britain and prohibited gunpowder mills and storehouses in the heavily populated portions of towns, owing to frequent fires and explosions. Later, zoning took the form of fire districts, and, under implied legislative powers, wooden buildings were prohibited in certain sections of a municipality. Massachusetts passed one of the first zoning laws in 1692. This law authorized Boston, Salem, Charlestown, and certain other market towns in the province to restrict the establishment of slaughterhouses, stillhouses, and buildings for currying leather to certain locations in each town. Few people objected to such restrictions. Still, the tension remained between the right to use one's land and the community's right to protect its citizens. In 1926, the United States Supreme Court took up the issue in Village of Euclid, Ohio, v. Ambler Realty [7]. In this decision, the Court noted, "Until recent years, urban life was comparatively simple; but with great increase and concentration of population, problems have developed which require additional restrictions in respect of the use and occupation of private lands in urban communities." In explaining its reasoning, the Court said, "the law of nuisances may be consulted not for the purpose of controlling, but for the helpful aid of its analogies in the process of ascertaining the scope of the police power. Thus the question of whether the power exists to forbid the erection of a building of a particular kind or a particular use is to be determined, not by an abstract consideration of the building or other thing considered apart, but by considering it in connection with the circumstances and the locality… A nuisance may be merely the right thing in the wrong place-like a pig in the parlor instead of the barnyard." Zoning, housing, and building codes were adopted to improve the health and safety of people living in communities, and, to some extent, they have performed this function. Certainly, housing and building codes, when enforced, have resulted in better constructed and maintained buildings. Zoning codes have been effective in segregating noxious and dangerous enterprises from residential areas. However, as the U.S. population has grown and changed from a rural to an urban and then to a suburban society, land-use and building regulations developed for the 19th and early 20th centuries are creating new health and safety problems not envisioned in earlier times.
# Zoning and Zoning Ordinances

Zoning is essentially a means of ensuring that a community's land uses are compatible with the health, safety, and general welfare of the community. Experience has shown that some types of controls are needed to provide orderly growth in relation to the community's plan for development. Just as a capital improvement program governs public improvements such as streets, parks and other recreational facilities, schools, and public buildings, so zoning governs the use of public and private property. It is very important that housing inspectors know the general nature of zoning regulations, because properties in violation of both the housing code and the zoning ordinance must be brought into full compliance with the zoning ordinance before the housing code can be enforced. In many cases, the housing inspector may be able to eliminate housing code violations through enforcement of the zoning ordinance.

# Zoning Objectives

As stated earlier, the purpose of a zoning ordinance is to ensure that land uses within a community are regulated not only for the health, safety, and welfare of the community, but also in keeping with the comprehensive plan for community development. The provisions in a zoning ordinance that help to achieve such development are designed to do the following:
• Regulate the height, bulk, and area of structures. To provide established standards for healthful housing within the community, regulations dealing with building heights, lot coverage, and floor areas must be established. These regulations ensure adequate natural lighting, ventilation, privacy, and recreational areas for children, all fundamental physiologic needs for a healthful environment. Safety from fire is enhanced by separating buildings to meet yard and open-space requirements, and requiring a minimum lot area per dwelling unit establishes population density controls.
• Avoid undue levels of noise, vibration, glare, air pollution, and odor. By providing land-use category districts, these environmental stresses upon the individual can be reduced.
• Lessen street congestion by requiring off-street parking and off-street loading.
• Facilitate adequate provision of water, sewerage, schools, parks, and playgrounds.
• Provide safety from flooding.
• Conserve property values. Through careful enforcement of the zoning ordinance provisions, property values can be stabilized and conserved.

To understand the difference between zoning and subdivision regulations, building codes, and housing ordinances, the housing inspector must also know what cannot be accomplished by a zoning ordinance, including the following:
• Overcrowding or substandard housing. Zoning is not retroactive and cannot correct existing conditions; these are corrected through enforcement of a minimum-standards housing code.
• Materials and methods of construction. These are enforced through building codes rather than through zoning.
• Cost of construction. Quality of construction, and hence construction cost, is often regulated through deed restrictions or covenants. Zoning does, however, stabilize property values in an area by prohibiting incompatible development, such as heavy industry in the midst of a well-established subdivision.
• Subdivision design and layout.
Design and layout of subdivisions, as well as provisions for parks and streets, are controlled through subdivision regulations.

# Content of the Zoning Ordinance

Zoning ordinances establish districts of whatever size, shape, and number the municipality deems best for carrying out the purposes of the zoning ordinance. Most cities use three major districts: residential (R), commercial (C), and industrial (I). These three may then be subdivided into many subdistricts, depending on local conditions, e.g., R-1 (single-unit dwellings), R-2 (duplexes), R-3 (low-rise apartment buildings), and so on. These districts specify the principal and accessory uses, exceptions, and prohibitions [8]. In general, permitted land uses are based on the intensity of land use: a less intense land use is permitted in a more intense district, but not vice versa. For example, a single-unit residence is a less intense land use than a multiunit dwelling (defined by HUD as more than four living units) and hence would be permitted in a residential district zoned for more intense land use (e.g., R-3). A multiunit dwelling would not, however, be permitted in an R-1 district. Although zoning districts are intended to promote the health, safety, and general welfare of the community, housing trends in the last half of the 20th century led a number of public health and planning officials to question their inflexible enforcement. These officials, citing such problems as urban sprawl, have stated that municipalities need to adopt a more flexible approach to land-use regulation, one that encourages creating mixed-use spaces, increasing population densities, and reducing reliance on the automobile. These initiatives are often called smart growth programs. If this approach is taken, it is imperative that both governmental officials and citizens be involved in the planning stage. Without this involvement, the community may end up with major problems, such as overloaded infrastructure, structures of inappropriate construction crowded together, and fire and security issues for residents. Increased density could strain the existing water, sewer, and waste collection systems, as well as fire and police services, unless proper planning is implemented. In recent years, some ordinances have been based partially on performance standards rather than solely on land-use intensity. For example, some types of industrial development may be permitted in a less intense use district, provided that the proposed land use creates no noise, glare, smoke, dust, vibration, or other environmental stress exceeding acceptable standards, and provided further that adequate off-street parking, screening, landscaping, and similar measures are taken.

Bulk and Height Requirements. Most early zoning ordinances stated that, within a particular district, the height and bulk of any structure could not exceed certain dimensions, and they specified dimensions for front, side, and rear yards. Another approach was to regulate through floor-area ratios. A floor-area ratio is the relation between the floor space of the structure and the size of the lot on which it is located. For example, a floor-area ratio of 1 would permit either a two-story building covering 50% of the lot or a one-story building covering 100% of the lot, as demonstrated in Figure 3.1.
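The arithmetic behind Figure 3.1 can be made explicit. The short sketch below assumes only the definition just given (total floor area divided by lot area) and an arbitrary 10,000-square-foot lot chosen for illustration.

```python
def floor_area_ratio(total_floor_area: float, lot_area: float) -> float:
    """Floor-area ratio (FAR): total floor space divided by lot size."""
    return total_floor_area / lot_area

LOT = 10_000  # square feet; an assumed lot size for illustration

# Two-story building covering 50% of the lot: 2 floors x 5,000 sq ft each.
print(floor_area_ratio(2 * 0.5 * LOT, LOT))  # 1.0

# One-story building covering 100% of the lot: 1 floor x 10,000 sq ft.
print(floor_area_ratio(1 * 1.0 * LOT, LOT))  # 1.0

# Either configuration yields a FAR of 1, so both would satisfy an
# ordinance that caps the floor-area ratio at 1 for the district.
```

This is why FAR-based regulation leaves the builder more flexibility than fixed height limits: any trade-off between footprint and stories that keeps the ratio under the cap is permitted.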
Other zoning ordinances specify the maximum amount of the lot that can be covered, or merely require that a certain amount of open space be provided for each structure, leaving the builder the flexibility to determine the structure's location. Still other ordinances, rather than specifying a particular height for the structure, specify the angle of light obstruction that will assure adequate air and light to the surrounding structures, as demonstrated in Figure 3.2.

Yard Requirements. Zoning ordinances also contain minimum requirements for front, rear, and side yards. These requirements, in addition to stating the lot dimensions, usually designate the amount of setback required. Most ordinances permit the erection of auxiliary buildings in rear yards, provided that they are located at stated distances from all lot lines and that sufficient open space is maintained. If the property is a corner lot, additional requirements are established to allow visibility for motorists.

Off-street Parking. Requirements for off-street parking and off-street loading, especially for commercial buildings, are also contained in zoning ordinances. These requirements are based on the relationship of floor space or seating capacity to land use. For example, a furniture store would require fewer off-street parking spaces in relation to its floor area than would a movie theater.

# Exceptions to the Zoning Code

# Nonconforming Uses

Because zoning is not retroactive, all zoning ordinances contain a provision for nonconforming uses. If a use was already established within a particular district before the adoption of the ordinance, it must be permitted to continue unless it can be shown to be a public nuisance. Provisions are, however, put into the ordinance to aid in eliminating nonconforming uses over time. These provisions generally prohibit a) enlargement or expansion of the nonconforming use, b) reconstruction of the nonconforming use if more than a certain portion of the building is destroyed, c) resumption of the use after it has been abandoned for a specified period of time, and d) changing the use to a higher classification or to another nonconforming use. Some zoning ordinances further provide a period of amortization during which nonconforming land use must be phased out.

# Variances

Zoning ordinances contain provisions for permitting variances and provide a method for granting them, subject to certain specified conditions. A variance may be granted when, owing to the specific conditions or use of a particular lot, an undue hardship would be imposed on the owner if the exact letter of the ordinance were enforced. A variance may be granted because of the shape, topography, or other characteristic of the lot. For example, suppose an irregularly shaped lot is located in a district having a side yard requirement of 20 feet on a side and a total lot size requirement of 10,000 square feet. Further suppose that this lot contains 10,200 square feet (and thus meets the total size requirement); however, due to the irregular shape of the lot, there is sufficient space for only a 15-foot side yard. Because a hardship would be imposed on the owner if the exact letter of the law were applied, the owner of the property could apply to the zoning adjustment board for a variance.
Because the total area of the lot is sufficient, and because a lessening of the ordinance requirements would neither be detrimental to the surrounding property nor interfere with neighboring properties, a variance would probably be granted. Note that a variance is granted to the owner under specific conditions; should use of the property change, the variance would be voided.

# Exceptions

An exception is often confused with a variance. In every city there are some necessary uses that do not correspond to the permitted land uses within a district. The zoning code recognizes, however, that with proper safeguards these uses need not have a detrimental effect on the district. An example would be a fire station permitted in a residential area, provided the station house is designed, and the property landscaped, to resemble or fit in with the characteristics of the neighborhood in which it is located.

# Administration

Zoning inspectors are essential to the zoning process because they have firsthand knowledge of a case. Often, the zoning inspector may also be the building inspector or housing inspector. Because the building or housing inspector is already in the field making inspections, it is relatively easy for that individual to check compliance with the zoning ordinances. Compliance is determined by comparing the actual land use with that allowed for the area and shown on the zoning map. Each zoning ordinance has a map detailing the permitted usage for each block. Using a copy of this map, the inspector can make a preliminary check of the land use in the field. If the use does not conform, the inspector must then contact the Zoning Board to see whether the property in question was a nonconforming use at the time of the passage of the ordinance and whether an exception or variance has been granted. In cities where up-to-date records are maintained, the inspector can check the use in the field. When a violation is observed and the property owners are duly notified, they have the right to request a hearing before the Zoning Board of Adjustment (also called the Zoning Board of Appeals in some cities). The board may uphold the zoning enforcement officer or may rule in favor of the property owner. If the action of the zoning officer is upheld, the property owner may seek relief by appealing the decision to the courts; otherwise, the violation must be corrected to conform to the zoning code. It is critical for the housing or building inspector and the zoning inspector to work closely together in municipalities where these positions and responsibilities are separate. Experience has shown that illegally converted properties are often among the most substandard encountered in the municipality and often contain especially dangerous housing code violations. In communities where the zoning code is enforced effectively, the resulting compliance helps to advance, as well as sustain, many of the minimum standards of the housing code, such as occupancy, ventilation, light, and unimpeded egress. By the same token, building or housing inspectors can often aid the zoning inspector by helping eliminate some nonconforming uses through code enforcement.

# Housing Codes

A housing code, regardless of who promulgates it, is basically an environmental health protection code. Housing codes are distinguished from building codes in that they cover houses, not buildings in general.
For example, the housing code requires that walls support the weight of the roof, any floors above, and the furnishings, occupants, and other loads within a building. Early housing codes primarily protected only physical health; hence, they were enforced only in slum areas. In the 1970s, it was realized that if urban blight and its associated human suffering were to be controlled, housing codes must consider both physical and mental health and must be administered uniformly throughout the community. In preparing or revising housing codes, local officials must maintain a level of standards that is not merely minimal: standards should maintain a living environment that contributes positively to healthful individual and family living. The fact that a small portion of housing fails to meet a desirable standard is not a legitimate reason for retrogressive modification or abolition of that standard. The adoption of a housing ordinance that establishes low standards for existing housing serves only to legalize and perpetuate an unhealthy living environment. Wherever local conditions are such that immediate enforcement of some standards within the code would cause undue hardship for some individuals, it is better to allow some time for compliance than to eliminate an otherwise satisfactory standard. When immediate health or safety hazards are not involved, it is often wise to create a reasonable timetable for accomplishing necessary code modifications.

# History

To assist municipalities with developing the legislation necessary to regulate the quality of housing, the American Public Health Association (APHA) Committee on the Hygiene of Housing prepared and published a proposed housing ordinance in 1952. This provided a prototype on which such legislation might be based and has served as the basis for countless housing codes enacted in the United States since that time. Some municipalities enacted it without change; others made revisions by omitting some portions, modifying others, and sometimes adding new provisions [9]. The APHA ordinance was revised in 1969 and 1971. In 1975, APHA and CDC jointly undertook the job of rewriting and updating this model ordinance. The new ordinance was entitled the APHA-CDC Recommended Housing Maintenance and Occupancy Ordinance [10]. The most recent model ordinance was published by APHA in 1986 as Housing and Health: APHA-CDC Recommended Minimum Housing Standards [11]. This ordinance is one of several model ordinances available to communities interested in adopting a housing code. A community should read and consider each element within a model code to determine its local applicability. A housing code is merely a means to an end, the end being the eventual elimination of all substandard conditions within the home and the neighborhood. This end cannot be achieved if the community adopts an inadequate housing code.

# Objectives

The Housing Act of 1949 [12] gave new impetus to existing local, state, and federal housing programs directed toward eliminating poor housing. In passing this legislation, Congress defined a new national objective by declaring that "the general welfare and security of the nation and the health and living standards of its people... require a decent home and a suitable living environment for every American family."
This mandate generated an awareness that the quality of housing and the residential environment has an enormous influence on the physical and mental health and the social well-being of each individual and, in turn, on the economic, political, and social conditions of every community. Consequently, public agencies, units of government, professional organizations, and others sought ways to ensure that the quality of housing and the residential environment did not deteriorate. It soon became apparent that ordinances regulating the supplied utilities and the maintenance and occupancy of dwellings were needed. Commonly called housing codes, these ordinances establish minimum standards to make dwellings safe, sanitary, and fit for human habitation by governing their condition and maintenance, their supplied utilities and facilities, and their occupancy. The 2003 International Code Council (ICC) [13,14] International Residential Code for One- and Two-Family Dwellings (R101.3) states that "the purpose of this code is to provide minimum requirements to safeguard the public safety, health and general welfare, through affordability, structural strength, means of egress, facilities, stability, sanitation, light and ventilation, energy conservation, safety to life and property from fire and other hazards attributed to the built environment."

# Critical Requirements of an Effective Housing Program

A housing code is limited in its effectiveness by several factors. First, if the housing code does not contain standards that adequately protect the health and well-being of individuals, it cannot be effective. The best-trained housing inspector, if not armed with an adequate housing code, can accomplish little in the battle against urban blight. A second issue in establishing an effective housing code is the need to establish a baseline of current housing conditions. A systems approach requires that you establish where you are, where you are going, and how you plan to achieve your goals. In using a systems approach, it is essential to know where the program started so that the success or failure of various initiatives can be established. Without this information, success cannot be replicated, because you can identify neither the obstacles navigated nor the elements of success. Many initiatives fail because program administrators lack the necessary proof of success when facing funding shortfalls and budget cuts. A third factor affecting the quality of housing codes is budget. Without adequate funds and personnel, the community can expect to lose the battle against urban blight. It is only through a systematic enforcement effort by an adequately sized staff of properly trained inspectors that the battle can be won. A fourth factor is the attitude of the political bodies within the area. A properly administered housing program will require upgrading substandard housing throughout the community. Frequently, this results in political pressure being exerted to prevent enforcement of the code in certain areas of the city. If the housing effort is backed properly by all political elements, blight can be controlled and eventually eliminated within the community. If, however, the housing program is not permitted to choke out the spreading influence of substandard conditions, urban blight will spread like a cancer, engulfing greater and greater portions of the city.
Similarly, an effort directed at only the most seriously blighted blocks in the city will upgrade merely those blocks while the blight spreads elsewhere. If urban blight is to be controlled, it must be cut out in its entirety. A fifth element that limits housing programs is whether they are fully supported by the other departments within the city. Regardless of which city agency administers the housing program, other city agencies must support its activities. In addition, great effort should be expended to obtain the support and cooperation of the community. This can be pursued through public awareness and public information programs, which can produce considerable support for, or considerable resistance to, the efforts of the program. A sixth limitation is an inadequately or improperly trained inspection staff. Inspectors should be capable of evaluating whether a serious or a minor problem exists in matters ranging from the structural stability of a building to its health and sanitary aspects. If they do not have the authority or expertise, they should develop that expertise or establish effective and efficient agreements with overlapping agencies to ensure timely and appropriate response. A seventh item that frequently restricts the effectiveness of a housing program is that many housing groups fail to do a complete job of evaluating housing problems. The deterioration of an area may be due to factors such as housing affordability, tax rates, or issues related to investment cost and return. In many cases, the inspection effort is restricted to merely evaluating the conditions that exist, with little or no thought given to why those conditions exist. If a housing effort is to be successful as part of a systems approach, the question of why the homes deteriorated must be considered. Was it because of environmental stresses within the neighborhood that need to be eliminated, or was it because of apathy on the part of the occupants? In either case, if the causative agent is not removed, the inspector faces an annual problem of maintaining the quality of that residence. It is only by eliminating the causes of deterioration that the quality of the neighborhood can be maintained. Often, the regulatory authority lacks, within the code's enabling legislation, the authority needed to resolve the problem, or there are gaps in jurisdiction.

# Content of a Housing Code

Although all comprehensive housing codes or ordinances contain a number of common elements, the provisions adopted by communities will usually vary. These variations stem from differences in local policies, preferences, and, to a lesser extent, needs. They are also influenced by the standards set by the related provisions of the diverse building, electrical, and plumbing codes in use in the municipality. Within any housing code there are generally five features:
1. Definitions of terms used in the code.
2. Administrative provisions showing who is authorized to administer the code and the basic methods and procedures that must be followed in implementing and enforcing its sections. Administrative provisions deal with items such as reasonable hours of inspection, whether serving violation notices is required, how to notify absentee owners, resident-owners, or tenants, how to process and conduct hearings, what rules to follow in processing dwellings alleged to be unfit for human habitation, and how dwellings finally declared fit may be occupied or used.
3. Substantive provisions specifying the various types of health, building, electrical, heating, plumbing, maintenance, occupancy, and use conditions that constitute violations of the housing code. These provisions can be, and often are, grouped into three categories: minimum facilities and equipment for dwelling units; adequate maintenance of dwellings and dwelling units, as well as their facilities and equipment; and occupancy conditions of dwellings and dwelling units.
4. Court and penalty sections outlining the basis for court action and the penalty or penalties to which the alleged violator will be subjected if proved guilty of violating one or more provisions of the code.
5. Enabling, conflict, and unconstitutionality clauses providing the date a new or amended code will take effect, the prevalence of the more stringent provision when two codes conflict, the severability of any part of the ordinance that might be found unconstitutional, and the retention of all other parts in full force and effect.

In any city following the format of the APHA-CDC Recommended Housing Maintenance and Occupancy Ordinance [10], the housing officer or other supervisor in charge of housing inspections will also adopt appropriate housing rules and regulations from time to time to clarify or further refine the provisions of the ordinance. When rules and regulations are used, care should be taken that the department is not overburdened with minor rules and regulations. Similarly, a housing ordinance that encompasses all rules and regulations might run into difficulty, because any amendment to it will require action by the political element of the community. Some housing groups, in attempting to obtain amendments to an ordinance, have had the entire ordinance thrown out by the political bodies.

# Administrative Provisions of a Housing Code

The administrative procedures and powers of the housing inspection agency, its supervisors, and its staff are similar to other provisions in that all are based on the police power of the state to legislate for public health and safety. In addition, the administrative provisions and, to a lesser extent, the court and penalty provisions outline how the police power is to be exercised in administering and enforcing the code. Generally, the administrative elements deal with procedures for ensuring that the constitutional doctrines of reasonableness, equal protection under the law, and due process of law are observed. They also must guard against violation of prohibitions against unlawful search and seizure, impairment of obligations of contract, and unlawful delegation of authority. These factors encompass items of great importance to housing inspection supervisors, such as the inspector's right of entry, reasonable hours of inspection, proper service, and the validity of the provisions of the housing codes they administer.

Owner of Record. It is essential to file legal actions against the true owners of properties in violation of housing codes. With the advent of the computer, establishing ownership is often much easier than in the past: databases that provide this information are readily available from many offices of local government, such as the tax assessment office. The method of obtaining the name and address of the legal owner of a property in violation varies from place to place. Ordinarily, a check of the city tax records will suffice unless there is reason to believe these are not up to date.
In this case, a further check of county or parish records will turn up the legal owner if state law requires deed registration there. If it does not, the advice of the municipal law department should be sought about the next steps to follow.

Due Process Requirements. Every notice, complaint, summons, or other type of legal paper concerning alleged housing code violations in a given dwelling or dwelling unit must be legally served on the proper party to be valid and to prevent harassment of innocent parties. This party might be the owner, agent, or tenant, as required by the code. It is customary to require that the notice to correct existing violations, and any subsequent notices or letters, be served by certified or registered mail with return receipt requested. The receipt serves as proof of service if the case has to be taken to court. Due process requirements also call for clarity and specificity with respect to the alleged violations, both in the violation notices and in the court complaint-summons. For this reason, special care must be taken to be complete and accurate in listing the violations and charges. To illustrate, rather than directing the violator to repair all windows where needed, the violator should be told exactly which windows and what repairs are involved. The chief limitation on the due process requirement with respect to service of notices lies in cases involving immediate threats to health and safety. In these instances, the inspection agency or its representative may, without notice or hearing, issue an order citing the existence of the emergency and requiring that action deemed necessary to meet the emergency be taken. In some areas, municipal housing courts have advocates who assist both plaintiffs and defendants in preparing for the court process or in resolving the issue to avoid court.

Hearings and Condemnation Power. The purpose of a hearing is to give the alleged violator an opportunity to be heard before further action is taken by the housing inspection agency. Hearings may be very informal, involving meetings between a representative of the agency and the person ordered to take corrective action, or they may be formal hearings at which the agency head presides and at which the city and the defendant both are entitled to be represented by counsel and expert witnesses.

Informal Hearings. A violator may have questions about a violation notice, or the notice may be served at a time when personal hardship or other factors prevent the violator from meeting its terms. Therefore, many housing codes provide the opportunity for a hearing at which the violator may discuss questions or problems and seek additional time or some modification of the order. Administered in a firm but understanding manner, these hearings can serve as invaluable aids in relieving needless fears of those involved, in showing how the inspection program is designed to help them, and in winning their voluntary compliance.

Formal Hearings. Formal hearings are often quasijudicial hearings (even though the prevailing court rules of evidence do not always apply) from which an appeal may be taken to court. All witnesses must therefore be sworn in, and a record of the proceedings must be made. The formal hearing is used chiefly as the basis for determining whether a dwelling is fit for human habitation, occupancy, or use. In the event it is proved unfit, the building is condemned, and the owner is given a designated amount of time either to rehabilitate it completely or to demolish it.
Where local funds are available, a municipality may demolish the building and place a lien against the property to cover demolition costs if the owner fails to obey the order within the time specified. This type of condemnation hearing is a very effective means of stimulating prompt and appropriate corrective action when it is administered fairly and firmly.

Request for Inspections. Several states permit their municipalities to offer a request-for-inspection service. For a fee, the housing inspector will inspect a property for violations of the housing code before its sale so that the buyer can learn its condition in advance. Many states and localities now require owners to notify prospective purchasers, before the sale, of any outstanding notice of health risk or violation against their property. If they fail to do so, some codes will hold the owner liable to the purchaser and the inspection agency for the violations.

Tickets for Minor Offenses. Denver, Colorado, has used minimal fines to prod minor violators and first offenders into correcting violations without the city resorting to court action. There are mixed views about this technique because it is akin to formal police action. Nevertheless, the action may stimulate compliance and reduce the amount of court action needed to achieve it.

Forms and Form Letters. A fairly typical set of forms and form letters is described below. It should be stressed that inspection forms to be used for legal notices must satisfy the legal standards of the code; be meaningful to the owner and sufficiently explicit about the extent and location of particular defects; be adaptable to statistical compilation for governing-body reports; and be written in a manner that will facilitate clerical and other administrative usage.

The Daily Report Form. This form gives the inspection agency an accurate basis for reporting, evaluating, and, if necessary, improving the productivity and performance of its inspectors.

Complaint Form. This form helps obtain full information from the complainant, thus making the relative seriousness of the problem clear and reducing the number of crank complaints.

No-entry Notice. This notice advises occupants or owners that an inspector was there and that they must return a call to the inspector.

Inspection Report Form. This is the most important form in an agency. It comes in countless varieties, but if designed properly, it will ensure more productivity and more thoroughness by the inspectors, reduce the time spent writing reports, locate all violations correctly, and reduce the time required for typing violation notices. Forms may vary widely in sophistication, from a very simple form to one whose components are identified by number for automated case processing. Some forms are a combined inspection report and notice form in triplicate, so that the first page can be used as the notice of violation, the second as the office record, and the third as the guide for reinspection. A covering form letter notifies the violator of the time allowed to correct the conditions listed in the report form.

Violation Notice. This is the legal notice that housing code violations exist and must be corrected within the indicated amount of time. The notice may be in the form of a letter that includes the alleged violations or has a copy of them attached. It may be a standard notice form, or it may be a combined report-notice.
Regardless of the type of notice used, it should make the location and nature of all violations clear and specify the exact section of the code that covers each one. The notice must advise violators of their right to a hearing. It should also indicate that the violator has a right to be represented by counsel and that failure to obtain counsel will not be accepted as grounds for postponing a hearing or court case.

Hearing Forms. These should include a form letter notifying the violator of the date and time set for the hearing, a standard summary sheet on which the supervisor can record the facts presented at an informal hearing, and a hearing-decision letter for notifying all concerned of the hearing results. The latter should include the names of the violator, inspector, law department, and any other city official or agency that may be involved in the case.

Reinspection Form Letters or Notices. These have the same characteristics as violation notices except that they cover the follow-up orders given to the violator who has failed to comply with the original notice within the time specified. Some agencies may use two or three types of these form letters to accommodate different degrees of response by the violator. Whether one or several are used, standardization of these letters or notices will expedite the processing of cases.

Court Complaint and Summons Forms. These forms advise alleged violators of the charges against them and summon them to appear in court at the specified time and place. It is essential that the housing inspection agency work closely with the municipal law department in preparing these forms so that each is done in exact accord with the rules of court procedure in the relevant state and community.

Court Action Record Form. This form provides an accurate running record of the inspection agency's court actions and their results.

# Substantive Provisions of a Housing Code

A housing code is the primary tool of the housing inspector. The code spells out what the inspector may or may not do, and an effort to improve housing conditions can be no better than the code allows. The substantive provisions of the code specify the minimal housing conditions acceptable to the community that developed them. Dwelling units should have provisions for preparing at least one regularly cooked meal per day. Minimum equipment should include a kitchen sink in good working condition, properly connected to a water supply system approved by the appropriate authority, providing at all times an adequate amount of heated and unheated running water under pressure, and connected to a sewer system approved by the appropriate authority. Cabinets or shelves, or both, for storing eating, drinking, and cooking utensils and food should be provided. These surfaces should be of sound construction and made of material that is easy to clean and that will not have a toxic or deleterious effect on food. In addition, a stove and refrigerator should be provided. Within every dwelling there should be a room that affords privacy and is equipped with a flush toilet in good working condition. In the vicinity of the flush toilet, a lavatory sink should be provided; in no case should a kitchen sink substitute for a lavatory sink. In addition, within each dwelling unit there should be, within a room that affords privacy, a bathtub or shower, or both, in good working condition.
# Substantive Provisions of a Housing Code

A housing code is the primary tool of the housing inspector. The code spells out what the inspector may or may not do. An effort to improve housing conditions can be no better than the code allows. The substantive provisions of the code specify the minimal housing conditions acceptable to the community that developed them.

Dwelling units should have provisions for preparing at least one regularly cooked meal per day. Minimum equipment should include a kitchen sink in good working condition that is properly connected to a water supply system approved by the appropriate authority. It should provide, at all times, an adequate amount of heated and unheated running water under pressure and should be connected to a sewer system approved by the appropriate authority. Cabinets or shelves, or both, for storing eating, drinking, and cooking utensils and food should be provided. These surfaces should be of sound construction and made of material that is easy to clean and that will not have a toxic or deleterious effect on food. In addition, a stove and refrigerator should be provided.

Within every dwelling there should be a room that affords privacy and is equipped with a flush toilet in good working condition. Within the vicinity of the flush toilet, a lavatory sink should be provided; in no case should a kitchen sink substitute for a lavatory sink. In addition, within each dwelling unit there should be, within a room that affords privacy, a bathtub or shower, or both, in good working condition. Both the lavatory sink and the bathtub or shower should be supplied with an adequate amount of heated and unheated water under pressure, and each should be connected to an approved sewer system.

Within each dwelling unit, two or more means of egress to safe and open space at ground level should be provided. Provisions should be incorporated within the housing code to meet the safety requirements of the state and community involved. The housing code should spell out minimum standards for lighting and ventilation within each room in the structure. In addition, minimum thermal standards should be provided. Although most codes merely require a given temperature at a given height above floor level, the community should give consideration to the use of effective temperatures. The effective temperature incorporates not only absolute temperature in degrees, but also humidity and air movement, giving a better indication of the comfort of a room.
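As an illustration of how such an index combines these variables, the sketch below uses one published approximation, Steadman's apparent temperature in its non-radiation form. It is offered only as an example of the concept, not as the formula any particular housing code requires:

```python
import math

def apparent_temperature(t_air_c: float, rel_humidity_pct: float, wind_m_s: float) -> float:
    """Steadman's apparent temperature (non-radiation form), in degrees C.

    Combines air temperature, humidity (via water vapor pressure), and air
    movement into a single comfort figure, the same idea as the 'effective
    temperature' described above, though codes may specify a different index.
    """
    # Water vapor pressure (hPa) from temperature and relative humidity.
    e = (rel_humidity_pct / 100.0) * 6.105 * math.exp(17.27 * t_air_c / (237.7 + t_air_c))
    return t_air_c + 0.33 * e - 0.70 * wind_m_s - 4.00

# The same 20 deg C room feels cooler when the air is dry and moving:
print(round(apparent_temperature(20.0, 60.0, 0.1), 1))  # -> 20.5
print(round(apparent_temperature(20.0, 20.0, 1.0), 1))  # -> 16.8
```

The point for code writers is that the same thermometer reading can correspond to noticeably different comfort levels once humidity and drafts are taken into account.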
The code should provide that no person shall occupy or let for occupancy any dwelling or dwelling unit that does not comply with the stated requirements. Generally, these requirements specify that the foundation, roof, exterior walls, doors, window spaces, and windows of the structure be sound and in good repair; that the structure be moisture-free, watertight, and reasonably weathertight; and that all structural surfaces be sound and in good repair. HUD defines a multifamily dwelling unit as one that contains four or more dwelling units in a single structure. A dwelling unit is further defined as a single unit of residence for a family of one or more persons in which sleeping accommodations are provided but toileting or cooking facilities are shared by the occupants.

# Building Codes

# Chapter 4: Disease Vectors and Pests

# Introduction

The most immediate and obvious link between housing and health involves exposure to biologic, chemical, and physical agents that can affect the health and safety of the occupants of the home. Conditions such as childhood lead poisoning and respiratory illnesses caused by exposure to radon, asbestos, tobacco smoke, and other pollutants are increasingly well understood and documented. However, even 50 years ago, public health officials understood that housing conditions were linked to a broader pattern of community health. For example, in 1949, the Surgeon General released a report comparing several health status indicators among six cities having slums. The publication reported that:
• the rate of deaths from communicable disease in these areas was the same as it was for the rest of the country 50 years ago (i.e., around 1900);
• most of the tuberculosis cases came from 25% of the population of these cities; and
• the infant mortality rate was five times higher than in the rest of the country, approximately equal to what it was 50 years ago.

Housing-related health concerns include asthma episodes triggered by exposure to dust mites, cockroaches, pets, and rodents. Cockroaches, rats, and mice are also vectors for significant problems that affect health and well-being; they are capable of transmitting diseases to humans. According to the 1997 American Housing Survey, rats and mice infested 2.7 million of 97 million housing units. A CDC-sponsored survey of two major American cities documented that nearly 50% of the premises were infested with rats and mice. This chapter deals with disease vectors and pests as factors related to the health of households.

# Disease Vectors and Pests

Integrated pest management (IPM) techniques are necessary to reduce the number of pests that threaten human health and property. This systems approach relies on more than one technique to reduce or eliminate pests. It can best be visualized as concentric rings of protection that reduce the need for the riskiest and most dangerous control options and the potential for pests to evolve and develop. It typically involves some or all of the following steps:
• monitoring, identifying, and determining the level of threat from pests;
• making the environment hostile to pests;
• building the pests out by using pest-proof building materials;
• eliminating food sources, hiding areas, and other pest attractants;
• using traps and other physical elimination devices; and
• when necessary, selecting appropriate poisons for identified pests.

These actions are discussed in more detail in the following section on the four basic strategies for controlling rodents. Most homeowners have encountered a problem with rodents, cockroaches, fleas, flies, termites, or fire ants. These pests destroy property or carry disease, or both, and can be a problem for rich and poor alike.

# Rodents

Rodents destroy property, spread disease, compete for human food sources, and are aesthetically displeasing. Rodent-associated diseases affecting humans include plague, murine typhus, leptospirosis, rickettsialpox, and rat-bite fever. The three primary rodents of concern to the homeowner are the Norway rat (Rattus norvegicus), the roof rat (Rattus rattus), and the house mouse (Mus musculus). The term "commensal" is applied to these rodents, meaning that they live at people's expense. The physical traits of each are demonstrated in Figure 4.1.

Barnett [1] notes that the house mouse is abundant throughout the United States. The Norway rat (Figure 4.2) is found throughout the temperate regions of the world, including the United States. The roof rat is found mainly in the South and across the entire nation to the Pacific coast. As a group, rodents have certain behavioral characteristics that are helpful in understanding them. They are perceptive to touch, with sensitive whiskers and guard hairs on their bodies; thus, they favor running along walls and between objects that allow them constant contact with vertical surfaces. They are known to have poor eyesight and are alleged to be colorblind. By contrast, they have an extremely sharp sense of smell and a keen sense of taste. The word rodent is derived from the Latin verb rodere, meaning "to gnaw." The gnawing tendency leads to structural damage to buildings and initiates fires when insulation is chewed from electrical wires. Rodents will gnaw to gain entrance and to obtain food.

The roof rat (Figure 4.3) is a slender, graceful, and very agile climber. The roof rat prefers to live aboveground: indoors in attics, between floors, in walls, or in enclosed spaces; and outdoors in trees and dense vine growth. In contrast with the roof rat, the Norway rat is at home below ground, living in a burrow. The house mouse commonly is found living in human quarters, as suggested by its name. Signs indicative of the presence of rodents, aside from seeing live or dead rats and hearing rats, are rodent droppings, runways, and tracks (Figure 4.4). Other signs include nests, gnawings, food scraps, rat hair, urine spots, and rat body odors.
Note that waste droppings from rodents are often confused with cockroach egg packets, which are smooth, segmented, and considerably smaller than a mouse dropping.

According to the Military Pest Management Handbook (MPMH) [2], rats and mice are very suspicious of any new objects or food found in their surroundings. This characteristic is one reason rodents can survive in dangerous environments, and this avoidance reaction accounts for prebaiting (baiting without poisoning) in control programs. Initially, rats or mice begin by taking only small amounts of food. If the animal becomes ill from a sublethal dose of poison, its avoidance reaction is strengthened, and a poisoning program becomes extremely difficult to complete. If rodents are hungry or exposed to an environment where new objects and food are commonly found, such as a dump, their avoidance reaction may not be as strong; in extreme cases of hunger, it may even be absent.

The first of four basic strategies for controlling rodents is to eliminate food sources. To accomplish this, it is imperative that the homeowner or occupant do a good job of solid waste management. This requires proper storing, collecting, and disposing of refuse.

The second strategy is to eliminate breeding and nesting places. This is accomplished by removing rubbish from near the home, including excess lumber, firewood, and similar materials. These items should be stored aboveground with 18 inches of clearance below them; this height does not provide a habitat for rats, which have a propensity for dark, moist places in which to burrow. Wood should not be stored directly on the ground, and trash and similar rubbish should be eliminated.

The third strategy is to construct buildings and other structures using rat-proofing methods. MPMH notes that it is much easier to manage rodents if a structure is built or modified in a way that prevents easy access by rodents. Tactics for rodent exclusion include building or covering doors and windows with metal; rats can gnaw through wooden doors and windows in a very short time to gain entrance. All holes in a building's exterior should be sealed. Rats are capable of enlarging openings in masonry, especially if the mortar or brick is of poor quality. All openings more than ¾-inch wide should be closed, especially around pipes and conduits. Cracks around doors, gratings, windows, and other such openings should be covered if they are less than 4 feet above the ground or accessible from ledges, pipes, or wires (Figure 4.5). Additional tactics include using proper materials for rat proofing. For example, sheet metal of at least 26 gauge, ¼-inch or ½-inch hardware cloth, and cement are all suitable rat-resistant materials; however, ½-inch hardware cloth has little value against house mice. Tight fittings and self-closing doors should be used. Rodent runways can run behind double walls; therefore, spaces between walls and floor-supporting beams should be blocked with fire stops. A proper rodent-proofing strategy must bear in mind that rats can routinely jump 2 feet vertically, dig 4 feet or more to get under a foundation, climb rough walls or smooth pipes up to 3 inches in diameter, and routinely travel on electric or telephone wires.
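The dimensional rules above can be restated as a simple screening test. The sketch below merely encodes those thresholds; it is a reading aid, not a substitute for an inspection:

```python
def should_seal(opening_width_in: float,
                height_above_ground_ft: float,
                reachable_from_ledge_pipe_or_wire: bool) -> bool:
    """Flag an exterior opening or crack for rodent-proofing.

    Restates the rules above: close any opening wider than 3/4 inch, and
    cover cracks around doors, gratings, and windows that are less than
    4 feet above the ground or reachable from ledges, pipes, or wires.
    """
    if opening_width_in > 0.75:
        return True
    return height_above_ground_ft < 4.0 or reachable_from_ledge_pipe_or_wire

print(should_seal(1.0, 6.0, False))  # True: wider than 3/4 inch
print(should_seal(0.5, 2.0, False))  # True: low crack within reach of rats
print(should_seal(0.5, 6.0, False))  # False: narrow and out of reach
```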
The first three strategies (good sanitation techniques, habitat denial, and rat proofing) should be used initially in any rodent management program. Should they fail, the fourth strategy is a killing program, which can vary from a family cat to the professional application of rodenticides. Cats can be effective against mice, but typically are not useful against a rat infestation. Over-the-counter rodenticides can be purchased and used by the homeowner or occupant; these typically are in the red squill or warfarin groups. A more effective alternative is trapping. There are a variety of devices to choose from when trapping rats or mice. The two main groups of rat and mouse traps are live traps (Figure 4.6) and kill traps (Figure 4.7). Traps usually are placed along walls, near runways and burrows, and in other areas. Bait is often used to attract the rodents to the trap. To be effective, traps must be monitored and emptied or removed quickly; if a rat caught in a trap is left there, other rats may avoid the traps. A trapping strategy also may include using live traps to remove these vermin.

# Cockroaches

Cockroaches have become well adapted to living with and near humans, and their hardiness is legendary. In light of these facts, cockroach control may become a homeowner's most difficult task because of the time and special knowledge it often involves. The cockroach is considered an allergen source and an asthma trigger for residents. Although little evidence exists to link the cockroach to specific disease outbreaks, it has been demonstrated to carry Salmonella typhimurium, Entamoeba histolytica, and the poliomyelitis virus. In addition, Kamble and Keith [6] note that most cockroaches produce a repulsive odor that can be detected in infested areas. The sight of cockroaches can cause considerable psychologic or emotional distress in some individuals. They do not bite, but they do have heavy leg spines that may scratch.

According to MPMH [2], there are 55 species of cockroaches in the United States. As a group, they tend to prefer a moist, warm habitat because most are tropical in origin. Although some tropical cockroaches feed only on vegetation, cockroaches of public health interest tend to live in structures and are customarily scavengers. Cockroaches will eat a great variety of materials, including cheese and bakery products, but they are especially fond of starchy materials, sweet substances, and meat products. Cockroaches are primarily nocturnal; daytime sightings may indicate potentially heavy infestations. They tend to hide in cracks and crevices and can move freely from room to room or between adjoining housing units via wall spaces, plumbing, and other utility installations. Entry into homes is often accomplished through food and beverage boxes, grocery sacks, animal food, and household goods carried into the home.

The species of public health interest that commonly inhabit human dwellings (Figures 4.8-4.13) include the following: German cockroach (Blattella germanica), American cockroach (Periplaneta americana), Oriental cockroach (Blatta orientalis), brownbanded cockroach (Supella longipalpa), Australian cockroach (Periplaneta australasiae), smoky-brown cockroach (Periplaneta fuliginosa), and brown cockroach (Periplaneta brunnea).

Four management strategies exist for controlling cockroaches. The first is prevention. This strategy includes inspecting items being carried into the home and sealing cracks and crevices in kitchens, bathrooms, exterior doors, and windows. Structural modifications would include weather stripping and pipe collars. The second strategy is sanitation, which denies cockroaches food, water, and shelter.
These efforts include quickly cleaning food particles from shelving and floors; timely washing of dinnerware; and routine cleaning under refrigerators, stoves, furniture, and similar areas. If pets are fed indoors, pet food should be stored in tight containers and not left in bowls overnight. Litter boxes should be cleaned routinely. Access to water should be denied by fixing leaking plumbing, drains, and sink traps, and by purging clutter such as papers, soiled clothing, and rags. The third strategy is trapping. Commercially available cockroach traps can be used to capture roaches and serve as a monitoring device. The most effective trap placement is against vertical surfaces, primarily corners, and under sinks, in cabinets, basements, and floor drains. The fourth strategy is chemical control. The use of chemicals typically indicates that the other three strategies have been applied incorrectly. Numerous insecticides are available, and appropriate information is obtainable from EPA.

# Fleas

The most important fleas as disease vectors are those that carry murine typhus and bubonic plague. In addition, fleas serve as intermediate hosts for some species of dog and rodent tapeworms that occasionally infest people. They also may act as intermediate hosts of filarial worms (heartworms) in dogs. In the United States, the most important disease related to fleas is bubonic plague, which is primarily a concern of residents in the southwestern and western parts of the country (Figure 4.14). Of approximately 2,000 species of flea, the most common flea infesting both dogs and cats is the cat flea, Ctenocephalides felis. Although numerous animals, both wild and domestic, can have flea infestations, it is from the exposure of domestic dogs and cats that most homeowners inherit flea infestation problems.

According to MPMH [2], fleas are wingless insects varying from 1 to 8½ millimeters (mm) long, averaging 2 to 4 mm, and feed through a siphon or tube. They are narrow and compressed laterally, with backwardly directed spines that adapt them for moving between the hairs and feathers of mammals and birds. They have long, powerful legs adapted for jumping. Both sexes feed on blood, and the female requires a blood meal before she can produce viable eggs. Fleas tend to be host-specific, feeding on only one type of host; however, they will infest other species in the absence of the favored host. They are found in relative abundance on animals that live in burrows and sheltered nests, while mammals and birds with no permanent nests or that are exposed to the elements tend to have light infestations.

MPMH [2] notes that fleas undergo complete metamorphosis (egg, larva, pupa, and adult). The time it takes to complete the life cycle from egg to adult varies according to the species, temperature, humidity, and food availability. Under favorable conditions, some species can complete a generation in as little as 2 or 3 weeks. Flea eggs usually are laid singly or in small groups among the feathers or hairs of the host or in a nest; they are often laid in the carpets of living quarters if the primary host is a household pet. Eggs are smooth, spherical to oval, light colored, and large enough to be seen with the naked eye. An adult female flea can produce up to 2,000 eggs in a lifetime. Flea larvae are small (2 to 5 mm), white, and wormlike, with a darker head and a body that will appear brown if they have fed on flea feces. This stage is mobile and will move away from light; thus, larvae typically will be found in shaded areas or under furniture. In 5 to 12 days they complete the three larval stages, although this may take several months, depending on environmental conditions. The larvae, after completing development, spin a cocoon of silk encrusted with granules of sand or various types of debris to form the pupal stage. The pupal stage can be dormant for 140 to 170 days, and in some areas of the country fleas can survive through the winter. The pupae, after development, are stimulated to emerge as adults by movement, pressure, or heat. The pupal form of the flea is resistant to insecticides; an initial treatment, while killing egg, larval, and adult forms, will not kill the pupae, so a reapplication will often be necessary. The adult forms are usually ready to feed about 24 hours after they emerge from the cocoon and will begin to feed within 10 seconds of landing on a host. Mating usually follows the initial blood meal, and egg production begins 24 to 48 hours after a blood meal. The adult flea lives approximately 100 days, depending on environmental conditions.

Following are some guidelines for controlling fleas:
• The most important principle in a total flea control program is simultaneously treating all pets and their environments (indoor and outdoor).
• Before using insecticides, thoroughly clean the environment, removing as many fleas as possible, regardless of form. This includes indoor vacuuming and carpet steam cleaning. Special attention should be paid to source points where pets spend most of their time.
• Outdoor cleanup should include mowing, yard raking, and removing organic debris from flowerbeds and under bushes.
• Insecticide should be applied to the indoor and outdoor environments and to the pet.
• Reapplication to heavily infested source points in the home and the yard may be needed to eliminate preemerged adults.
# Flies

The historical attitude of Western society toward flies has been one of aesthetic disdain. The public health view is to classify flies as biting or nonbiting. Biting flies include sand flies, horseflies, and deerflies. Nonbiting flies include houseflies, bottleflies, and screwworm flies. The latter group is often referred to as synanthropic because of its close association with humans. In general, the presence of flies is a sign of poor sanitation. The primary concern of most homeowners is nonbiting flies.

According to MPMH [2], the housefly (Musca domestica) (Figure 4.16) is one of the most widely distributed insects, occurring throughout the United States, and is usually the predominant fly species in homes and restaurants. M. domestica is also the most prominent human-associated (synanthropic) fly in the southern United States. Because of its close association with people, its abundance, and its ability to transmit disease, it is considered a greater threat to human welfare than any other species of nonbiting fly. Each housefly can easily carry more than 1 million bacteria on its body. Some of the disease-causing agents transmitted by houseflies to humans are Shigella spp. (shigellosis, i.e., dysentery and diarrhea), Salmonella spp. (typhoid fever), Escherichia coli (traveler's diarrhea), and Vibrio cholerae (cholera). Sometimes these organisms are carried on the fly's tarsi or body hairs, and frequently they are regurgitated onto food when the fly attempts to liquefy it for ingestion.

The fly life cycle is similar across the synanthropic group. MPMH [2] notes that the egg and larval stages develop in animal and vegetable refuse. Favorite breeding sites include garbage, animal manure, spilled animal feed, and soil contaminated with organic matter. Favorable environmental conditions will result in the eggs hatching in 24 hours or less. Normally, a female fly will produce 500 to 600 eggs during her lifetime. The creamy, white larvae (maggots) are about ½-inch long when mature and move within the breeding material to maintain optimum temperature and moisture conditions. This stage lasts an average of 4 to 7 days in warm weather. The larvae then move to dry parts of the breeding medium, or out of it onto the soil or sheltered places under debris, to pupate; this stage usually lasts 4 to 5 days. When the pupal stage is complete, the adult fly exits the puparium, dries, hardens, and flies away to feed, with mating occurring soon after emergence. Figure 4.17 demonstrates the typical fly life cycle.
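Summing the stage durations quoted above shows why housefly numbers build so quickly in warm weather. The short sketch below does that arithmetic; the stage lengths are the figures given above, and everything else is illustrative:

```python
# Warm-weather stage durations (days) quoted above.
STAGES = {
    "egg": (0.5, 1.0),    # eggs hatch in 24 hours or less
    "larva": (4.0, 7.0),  # larval stage averages 4 to 7 days
    "pupa": (4.0, 5.0),   # pupal stage usually lasts 4 to 5 days
}

shortest = sum(low for low, _ in STAGES.values())
longest = sum(high for _, high in STAGES.values())
print(f"egg-to-adult: {shortest:.1f} to {longest:.1f} days")
# egg-to-adult: 8.5 to 13.0 days

# With each female laying 500 to 600 eggs over her lifetime, a single
# overlooked breeding site can seed a new generation every week or two.
```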
Housefly control hinges on good sanitation, that is, denying flies food sources and breeding sites. This includes the proper disposal of food wastes by placing garbage in cans with close-fitting lids; cans need to be washed periodically to remove food debris. The disposal of garbage in properly operated sanitary landfills is paramount to fly control.

The presence of adult flies can be addressed in various ways. Outside methods include limited placement of common mercury vapor lamps, which tend to attract flies; less-attractive sodium vapor lamps should be used near the home. Self-closing doors in the home will deny entrance, as will the use of proper-fitting and well-maintained screening on doors and windows. Larger flies use homes for shelter from the cold but do not reproduce inside the home. Caulking entry points and using fly swatters are effective and much safer than the use of most pesticides. Insecticide "bombs" can be used in attics and other rooms that can be isolated from the rest of the house; however, these should be applied away from food, in areas where flies rest.

The blowfly is a fairly large, metallic green, gray, blue, bronze, or black fly. Blowflies may spend the winter in homes or other protected sites, but will not reproduce during this time. Blowflies breed most commonly on decayed carcasses (e.g., dead squirrels, rodents, and birds) and in droppings of dogs or other pets during the summer; thus, removal of these sources is imperative. Small animals, on occasion, may die inside walls or under the crawl space of a house; a week or two later, blowflies or maggots may appear. The adult blowfly is also attracted to gas leaks.

# Termites

According to Gold et al. [11], subterranean termites are the most destructive insect pests of wood in the United States, causing more than $2 billion in damage each year. Annually, this is more property damage than that caused by fire and windstorms combined. In the natural world, these insects are beneficial because they break down dead trees and other wood materials that would otherwise accumulate; this biomass breakdown is recycled to the soil as humus. MPMH [2], on the other hand, notes that these insects can damage a building so severely that it may have to be replaced. Termites consume wood and other cellulose products, such as paper, cardboard, and fiberboard. They will also destroy structural timbers, pallets, crates, furniture, and other wood products.
In addition, they will damage many materials they do not normally eat as they search for food. The tunneling efforts of subterranean termites can penetrate lead- and plastic-covered electric cable and cause electrical system failure. In nature, termites may live for years in tree stumps or in lumber beneath concrete buildings before they penetrate hairline cracks in floors and walls, as well as expansion joints, to search for food in areas such as interior door frames and immobile furniture. Termite management costs to homeowners are exceeded only by cockroach control costs.

Lyon [12] notes that termites are frequently mistaken by homeowners for ants and often are referred to erroneously as white ants. Swarming, an event in which a group of adult male and female reproductives leaves the nest to establish a new colony, typically occurs from March through June and in September and October and is among the most visible signs of termite infestation. If the emergence happens inside a building, flying termites may constitute a considerable nuisance; these pests can be collected with a vacuum cleaner or otherwise disposed of without using pesticides. Each homeowner should be aware of the following signs of termite infestation:
• Pencil-thin mud tubes extending over the inside and outside surfaces of foundation walls, piers, sills, joists, and similar areas (Figures 4.18 and 4.19).
• Presence of winged termites or their shed wings on windowsills and along the edges of floors.
• Damaged wood hollowed out along the grain and lined with bits of mud or soil.

According to Oi et al. [15], termite tubes and nests are made of mud and carton. Carton is composed of partially chewed wood, feces, and soil packed together. Tubes maintain the high humidity termites require for survival, protect termites from predators, and allow termites to move from one spot to another.

According to Lyon [12], the reproductives can be winged or wingless, with the latter found in colonies to serve as replacements for the primary reproductives. The primary reproductives (alates) vary in color from pale yellow-brown to coal black, are 3/8-inch to ½-inch in length, are flattened dorso-ventrally, and have pale or smoke-gray to brown wings. The secondary reproductives have short wing buds and are white to cream colored. The workers are the same size as the primary reproductives, are white to grayish-white with a yellow-brown head, and are wingless. The soldiers resemble workers in that they are wingless, but soldiers have large, rectangular, yellowish and brown heads with large jaws.

Differentiating ants from the dark brown or black termite reproductives can be accomplished by noting the respective wings and body shapes. MPMH [2] states that a termite has four wings of about equal length and that the wings are nearly twice as long as the body. By comparison, ant wings are only a little longer than the body, and the hind pair is much shorter than the front pair. Additionally, ants typically have a narrow waist, with the abdomen connected to the thorax by a thin petiole; termites do not have a narrow or pinched waist. Figure 4.20a and b demonstrates the differences between the ant and the termite. Entomologists refer to winged ants and termites as alates.
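The wing and waist traits above reduce to a rule of thumb, sketched below for a winged specimen. The inputs are field observations; this is an aid to reading the comparison, not a substitute for professional identification:

```python
def ant_or_termite(wings_equal_length: bool,
                   wings_twice_body_length: bool,
                   pinched_waist: bool) -> str:
    """Apply the wing and waist rules above to a winged specimen (alate)."""
    if pinched_waist or not wings_equal_length:
        return "ant"
    if wings_twice_body_length:
        return "termite"
    return "uncertain"

# Four equal wings nearly twice the body length, no pinched waist:
print(ant_or_termite(True, True, False))   # termite
# Thin petiole and hind wings much shorter than the front pair:
print(ant_or_termite(False, False, True))  # ant
```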
MPMH [2] states that there are five families of termites in the world, four of which occur in the United States: Hodotermitidae (rotten-wood termites), Kalotermitidae (dry-wood termites), Rhinotermitidae (subterranean termites), and Termitidae (desert termites). Subterranean termites typically work in wood aboveground, but must have direct contact with the ground to obtain moisture. Nonsubterranean termites colonize above the ground and feed on cellulose; however, their life cycles and methods of attack, and consequently the methods of controlling them, are quite different. Nonsubterranean termites in the United States are commonly called dry-wood termites.

In the United States, according to MPMH [2], native subterranean termites are the most important and the most common. These termites include the genus Reticulitermes, occurring primarily in the continental United States, and the genus Heterotermes, occurring in the Virgin Islands, Puerto Rico, and the deserts of California and Arizona. The appearance, habits, and type of damage they cause are similar.

The Formosan termite (Coptotermes formosanus) is the newest species to become established in the United States. It is a native of the Pacific Islands and spread from Hawaii and Asia to the United States during the 1960s. It is now found along the Gulf Coast, in California, and in South Carolina, and is expected to spread to other areas as well. Formosan termites cause greater damage than native species because of their more vigorous and aggressive behavior and their ability to rapidly reproduce, build tubes and tunnels, and seek out new items to infest. They have also shown more resistance to some soil pesticides than native species. Reproductives (swarmers) are larger than those of native species, reaching up to 5/8-inch in length, and are yellow to brown in color. Swarmers have hairy-looking wings and swarm after dusk, unlike native species, which swarm in the daytime. Formosan soldiers have more oval-shaped heads than native species; on top of the head is an opening that emits a sticky, whitish substance.

Dry-wood termites (Cryptotermes spp.) live entirely in moderately to extremely dry wood. They require contact with neither the soil nor any other moisture source and may invade isolated pieces of furniture, fence posts, utility poles, firewood, and structures. Dry-wood termite colonies are not as large as those of other species in the United States, so they can occupy small wooden articles, which is one way these insects spread to different locations. They are of major economic importance in southern California, Arizona, and along the Gulf Coast. The West Indian dry-wood termite is a problem in Puerto Rico, the U.S. Virgin Islands, Hawaii, parts of Florida and Louisiana, and a number of U.S. Pacific Island territories. Dry-wood termites are slightly larger than most other species, ranging from ½-inch to 5/8-inch long, and are generally lighter in color.

Damp-wood termites do not need contact with damp ground as subterranean termites do, but they do require a higher moisture content in wood; once established, however, these termites may extend into slightly drier wood. Termites of minor importance are the tree-nesting groups. The nests of these termites are found in trees, posts, and, occasionally, buildings. Their aboveground nests are connected to the soil by tubes. Tree-nesting termites may be a problem in Puerto Rico and the U.S. Virgin Islands.

The risk for encountering subterranean termites in the United States is greater in the southeastern states and in southwestern California; in general, the risk for termite infestations decreases as the latitude increases northward.
According to Potter [19], homeowners can reduce the risk for termite attack by adhering to the following suggestions:
• Eliminate wood contact with the ground. Earth-to-wood contact provides termites with simultaneous access to food, moisture, and shelter, in conjunction with direct, hidden entry into the structure. In addition, the homeowner or occupant should be aware that pressure-treated wood is not immune to termite attack, because termites can enter through the cut ends and build tunnels over the surfaces.
• Do not allow moisture to accumulate near the home's foundation. Proper drainage, repair of plumbing, and proper grading will help to reduce the presence of moisture, which attracts termites.
• Reduce humidity in crawl spaces. Most building codes state that crawl spaces should be vented at a rate of 1 square foot of vent per 150 square feet of crawl space area. For crawl spaces equipped with a polyethylene or equivalent vapor barrier, this rate can be reduced to 1 square foot of vent per 300 to 500 square feet of crawl space area (a worked sizing example appears below). Vent placement design includes positioning one vent within 3 feet of each building corner, and trimming and controlling shrubs so that they do not obstruct the vents is imperative. Installing 4- to 6-mil polyethylene sheeting over a minimum of 75% of the crawl space floor will reduce crawl-space moisture. Covering the entire floor of the crawl space with such material can reduce two potential home problems at one time: excess moisture and radon (Chapter 5). The barrier will reduce both the absorption of moisture from the air and the release of moisture from the underlying soil into the crawl-space air.
• Never store firewood, lumber, or other wood debris against the foundation or inside the crawl space. Termites are both attracted to and fed by this type of storage. Wood stacked in contact with a dwelling, as well as vines, trellises, and dense plant material, provides a pathway for termites to bypass a soil barrier treatment.
• Use decorative wood chips and mulch sparingly. Cellulose-containing products attract termites, especially materials with moisture-holding properties, such as mulch. The homeowner should never allow these products to contact wood components of the home. Crushed stone or pea gravel is recommended as less attractive to termites and helpful in diminishing other pest problems.
• Have the structure treated by a pest control professional. The final, and most effective, strategy to prevent infestation is to treat the soil around and beneath the building with a termiticide; the treated ground is then both repellent and toxic to termites.

Figure 4.23 demonstrates some typical points of attack by subterranean termites and some faulty construction practices that can contribute to subterranean termite infestations. The numbered callouts in that figure are:
1. Cracks in the foundation permit hidden points of entry from soil to sill.
2. Posts through concrete in contact with substructural soil; watch door frames and intermediate supporting posts.
3. Wood framing members in contact with earth fill under a concrete slab.
4. Form boards left in place contribute to the termite food supply.
5. Leaking pipes and dripping faucets sustain soil moisture; excess irrigation has the same effect.
6. Shrubbery blocking air flow through vents.
7. Debris supports a termite colony until a large population attacks the superstructure.
8. A heating unit accelerates termite development by maintaining the warmth of the colony on a year-round basis.
9. A foundation wall that is too low permits wood to contact soil; adding topsoil often builds the exterior grade up to sill level.
10. A footing that is too low, or soil thrown against it, causes wood-soil contact; there should be 8 inches of clean concrete between soil and pier block.
11. Stucco carried down over the concrete foundation permits hidden entrance between the stucco and the foundation if the bond fails.
12. Insufficient clearance for inspection also permits easy construction of termite shelter tubes from soil to wood.
13. Wood framing of the crawl hole forming wood-soil contact.
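The venting ratios in the crawl-space suggestion above are easy to misapply, so the following sketch works through the arithmetic. The 1:150 and 1:300 figures are those quoted above; the conservative end of the 300-to-500 range is assumed for the vapor-barrier case:

```python
def min_vent_area_sqft(crawl_space_sqft: float, has_vapor_barrier: bool) -> float:
    """Minimum free vent area: 1 sq ft per 150 sq ft of crawl space, or
    1 sq ft per 300 sq ft when a polyethylene (or equivalent) vapor
    barrier is installed (the conservative end of the 300-500 range)."""
    ratio = 300.0 if has_vapor_barrier else 150.0
    return crawl_space_sqft / ratio

# Example: a 1,500-square-foot crawl space
print(min_vent_area_sqft(1500, has_vapor_barrier=False))  # 10.0 sq ft of vent
print(min_vent_area_sqft(1500, has_vapor_barrier=True))   # 5.0 sq ft of vent
# Remember also to place one vent within 3 feet of each building corner.
```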
Lyon [12] notes the following alternative termite control measures:
• Nematodes. Certain species of parasitic roundworms (nematodes) will infest and kill termites and other soil insects. Varying success has been experienced with this method because it depends on several variables, such as soil moisture and soil type.
• Sand as a physical barrier. This would require preconstruction planning and would depend on termites being unable to manipulate the sand to create tunnels. Some research in California and Hawaii has indicated early success.
• Chemical baits. This method uses wood or laminated, texture-flavored cellulose impregnated with a toxicant and/or insect growth regulator. The worker termite feeds on the substance and carries it back to the nest, reducing or eliminating the entire colony.

According to HomeReports.com [20], an additional system is to strategically place a series of baits around the house. The intention is for termite colonies to encounter one or more of the baits before approaching the house. Once termite activity is observed, the bait wood is replaced with a poison. The termites bring the poison back to the colony, and the colony is either eliminated or substantially reduced. This system is relatively new to the market, and its success depends heavily on the termites finding the bait before finding and damaging the house.

Additional measures include construction techniques that discourage termite attacks, as demonstrated in Figure 4.24. Termites often invade homes by way of the foundation, either by crawling up the exterior surface, where their activity is usually obvious, or by traveling inside hollow block masonry. One way to deter their activity is to block their access points on or through the foundation. Metal termite shields have been used for decades to deter termite movement along foundation walls and piers up to the wooden structure. Improperly installed (i.e., not soldered or sealed properly), damaged, or deteriorated termite shields may allow termites to reach parts of the wooden floor system. Shields should be made of noncorroding metal and have no cracks or gaps along the seams, and the shielding should extend at least 2 inches out and 2 inches down at a 45° angle from the foundation wall. An alternative to using termite shields on a hollow-block foundation is to fill the block with concrete or to put in a few courses of solid or concrete-filled brick (which is often done anyway to level foundations); these are referred to as masonry caps. The same approach can be used with support piers in the crawl space. Solid caps (i.e., a continuously poured concrete cap) are best at stopping termites, but are not commonly used. Concrete-filled brick caps should deter termite movement or force the termites through small gaps, thus allowing them to be spotted during an inspection [21].

# Fire Ants

According to MPMH [2], ants are among the most numerous species on earth. Ants are in the same order as wasps and bees and, because of their geographic distribution, are universally recognized (Figure 4.25). The life cycle of the fire ant begins with the mating of the winged forms (alates) some 300 to 800 feet in the air, typically in the late spring or early summer. The male dies after mating; the newly mated queen finds a suitable moist site, drops her wings, and burrows into the soil, sealing the opening behind her. Ants undergo complete metamorphosis and, therefore, have egg, larval, pupal, and adult stages. The new queen will begin laying eggs within 24 hours. Once fully developed, she will produce approximately 1,600 eggs per day over a maximum life span of 7 years. Soft, whitish, legless larvae are produced from the hatching eggs. These larvae are fed by the worker ants. Pupae resemble adults in form, but are soft, nonpigmented, and lack mobility. There are at least three distinct castes of ants: workers, queens, and males. Typically, the males have wings, which they retain until death. Queens, the largest of the three castes, normally have wings, but lose them after mating.
The worker, which is also a female, is never winged, except as a rare abnormality. Within this hierarchy, mature colonies contain males and females that are capable of flight and reproduction. These are known as reproductives, and an average colony may produce approximately 4,500 of them per year. A healthy nest usually produces two nuptial flights of reproductives each year, and a healthy, mature colony may contain more than 250,000 ants. Though uncommon among ants generally, multiple-queen colonies (10 to …) occur among red imported fire ants (RIFAs).

RIFAs prefer open and sun-exposed areas. They are found in cultivated fields, cemeteries, parks, and yards, and even inside cars, trucks, and recreational vehicles. RIFAs are attracted to electrical currents and are known to nest in and around heat pumps, junction boxes, and similar areas. They are omnivorous; thus, they will attack most things, living or dead. Their economic effects are felt through their destruction of the seeds, fruit, shoots, and seedlings of numerous native plant species. Fire ants are known to tend pests such as scale insects, mealybugs, and aphids in order to feed on their sweet waste excretion (honeydew); RIFAs transport these insects to new feeding sites and protect them from predators. The positive side of RIFA infestation is that the fire ant is a predator of ticks and controls the ground stage of horn flies.

The urban dweller with a RIFA infestation may find significant damage to landscape plants, along with reductions in the number of wild birds and mammals. RIFAs can discourage outdoor activities and be a threat to young animals or small confined pets. RIFA nests typically are not found indoors, but around homes, roadways, and structures, as well as under sidewalks; the shifting of soil after RIFAs abandon a site has resulted in collapsing structures. Figure 4.27 shows a fire ant mound with fire ants and a measure of their relative size. The medical complications of fire ant stings have been noted in the literature since 1957. People with disabilities or reduced feeling in their feet and legs, young children, and those with mobility issues are at risk for sustaining numerous stings before escaping or receiving assistance. Fatalities have resulted from attacks on the elderly and on infants.
Control of the fire ant focuses primarily on the mound, using attractant baits consisting of soybean oil, corn grits, or chemical agents. The bait is picked up by the worker ants and taken deep into the mound to the queen. These products typically require weeks to work. Individual mound treatment is usually most effective in the spring. The key is to locate and treat all mounds in the area to be protected; if young mounds are missed, the area can become reinfested in less than a year.

# Mosquitoes

All mosquitoes have four stages of development (egg, larva, pupa, and adult) and spend their larval and pupal stages in water. The females of some mosquito species deposit eggs on moist surfaces, such as mud or fallen leaves, that may be near water but are dry at the time; later, rain or high tides reflood these surfaces and stimulate the eggs to hatch into larvae. The females of other species deposit their eggs directly on the surface of still water in such places as ditches, street catch basins, tire tracks, streams that are drying up, and fields or excavations that hold water for some time. This water is often stagnant and close to the home in discarded tires, ornamental pools, unused wading and swimming pools, tin cans, bird baths, plant saucers, and even gutters and flat roofs. The eggs soon hatch into larvae. In the hot summer months, larvae grow rapidly, become pupae, and emerge 1 week later as flying adult mosquitoes. A few important spring species have only one generation per year; however, most species have many generations per year, and their rapid increase in numbers becomes a problem.

When adult mosquitoes emerge from the aquatic stages, they mate, and the female seeks a blood meal to obtain the protein necessary for the development of her eggs (the females of a few species may produce a first batch of eggs without this first blood meal). After a blood meal is digested and the eggs are laid, the female mosquito again seeks a blood meal to produce a second batch of eggs. Depending on her stamina and the weather, she may repeat this process many times without mating again. The male mosquito does not take a blood meal, but may feed on plant nectar; he lives for only a short time after mating.

Most mosquito species survive the winter, or overwinter, in the egg stage, awaiting the spring thaw, when waters warm and the eggs hatch. A few important species spend the winter as adult, mated females, resting in protected, cool locations, such as cellars, sewers, crawl spaces, and well pits. With warm spring days, these females seek a blood meal and begin the cycle again. Only a few species can overwinter as larvae.

Mosquitoborne diseases, such as malaria and yellow fever, have plagued civilization for thousands of years. Newer vectorborne threats include West Nile virus, which is carried by mosquitoes, and Lyme disease, which is carried by ticks rather than mosquitoes. Organized mosquito control in the United States has greatly reduced the incidence of mosquitoborne diseases; however, mosquitoes can still transmit a few, including eastern equine encephalitis and St. Louis encephalitis. The frequency and extent of these diseases depend on a complex series of factors, and mosquito control agencies and health departments cooperate in monitoring these factors and reducing the chance for disease. It is important to recognize that young adult female mosquitoes taking their first blood meal do not transmit diseases; it is instead the older females who, having picked up a disease organism in an earlier blood meal, can transmit the disease during a later one.
The proper way to manage a community's mosquito problem is through an organized integrated pest management program that draws on every approach that can safely reduce the problem; the spraying of toxic agents is but one of many. When mosquitoes are numerous and interfere with living, recreation, and work, the various measures described in the following paragraphs can reduce their annoyance, depending on location and conditions.

# How to Reduce the Mosquito Population

The most efficient method of controlling mosquitoes is to reduce the availability of water suitable for larval and pupal growth. Large lakes, ponds, and streams that have waves, contain mosquito-eating fish, and lack aquatic vegetation around their edges do not produce mosquitoes; mosquitoes thrive instead in smaller bodies of water in protected places. Examine your home and neighborhood and take the following precautions recommended by the New Jersey Agricultural Experiment Station [24]:
• dispose of unwanted tin cans and tires;
• clean clogged roof gutters and drain flat roofs;
• turn over unused wading pools and other containers that tend to collect rainwater;
• change water in birdbaths, fountains, and troughs twice a week;
• clean and chlorinate swimming pools;
• cover containers tightly with window screen or plastic when storing rainwater for garden use during drought periods;
• flush sump-pump pits weekly; and
• stock ornamental pools with fish.

If mosquito breeding is extensive in areas such as woodland pools or roadside ditches, the problem may be too great for individual residents. In such cases, call the organized mosquito control agency in your area; these agencies have highly trained personnel who can deal with the problem effectively.

Several commercially available insecticides can be effective in controlling larval and adult mosquitoes. These chemicals are considered sufficiently safe for use by the public. Select a product whose label states that the material is effective against mosquito larvae or adults; the label lists those insects that the EPA agrees are effectively controlled by the product. For safe and effective use, read the label and follow the instructions for applying the material. For use against adult mosquitoes, some liquid insecticides can be mixed according to directions and sprayed lightly on building foundations, hedges, low shrubbery, ground covers, and grasses. Do not overapply liquid insecticides; excess spray drips from the sprayed surfaces to the ground, where it is ineffective. The purpose of such sprays is to leave a fine deposit of insecticide on surfaces where mosquitoes rest, and they are not effective for more than 1 or 2 days.

Some insecticides are available as premixed products or in aerosol cans. These devices spray the insecticide as very small aerosol droplets that remain floating in the air and hit flying mosquitoes. Apply the sprays upwind, so the droplets drift through the area where mosquito control is desired. Rather than applying too much of these aerosols initially, it is more practical to apply them briefly but periodically, thereby eliminating those mosquitoes that have recently flown into the area.

Various commercially available repellents can be purchased as a cream or lotion or in pressurized cans and then applied to the skin and clothing. Some manufacturers also offer clothing impregnated with repellents; coarse, repellent-bearing particles to be scattered on the ground; and candles whose wicks can be lit to release a repellent chemical.
The effectiveness of all repellents varies from location to location, from person to person, and from mosquito to mosquito. Repellents can be especially effective in recreation areas, where mosquito control may not be conducted. All repellents should be used according to the manufacturer's instructions.

Mosquitoes are attracted by perspiration, warmth, body odor, carbon dioxide, and light; mosquito control agencies use some of these attractants to help determine the relative number of adult mosquitoes in an area. Several devices are sold that are supposed to attract, trap, and destroy mosquitoes and other flying insects. However, if these devices are attractive to mosquitoes, they probably attract more mosquitoes into the area and may, therefore, increase rather than decrease mosquito annoyance.

# Chapter 5: Indoor Air Pollutants and Toxic Materials

# Introduction

We all face a variety of risks to our health as we go about our day-to-day lives. Driving in cars, flying in airplanes, engaging in recreational activities, and being exposed to environmental pollutants all pose varying degrees of risk. Some risks are simply unavoidable. Some we choose to accept because to do otherwise would restrict our ability to lead our lives the way we want. Some are risks we might decide to avoid if we had the opportunity to make informed choices. Indoor air pollution and exposure to hazardous substances in the home are risks we can do something about.

In the last several years, a growing body of scientific evidence has indicated that the air within homes and other buildings can be more seriously polluted than the outdoor air in even the largest and most industrialized cities. Other research indicates that people spend approximately 90% of their time indoors. Thus, for many people, the risks to health from exposure to indoor air pollution may be greater than the risks from outdoor pollution. In addition, the people exposed to indoor air pollutants for the longest periods are often those most susceptible to their effects. Such groups include the young, the elderly, and the chronically ill, especially those suffering from respiratory or cardiovascular disease [1].

# Indoor Air Pollution

Numerous forms of indoor air pollution are possible in the modern home. Air pollutant levels in the home increase if not enough outdoor air is brought in to dilute emissions from indoor sources and to carry indoor air pollutants out of the home. In addition, high temperature and humidity levels can increase the concentrations of some pollutants. Indoor pollutants can be placed into two groups: biologic and chemical.

# Biologic Pollutants

Biologic pollutants include bacteria, molds, viruses, animal dander, cat saliva, dust mites, cockroaches, and pollen. These biologic pollutants can be related to some serious health effects. Some biologic agents, such as the measles, chickenpox, and influenza viruses, are transmitted through the air. The first two are now preventable with vaccines; influenza transmission, although vaccines have been developed, remains a concern in crowded indoor conditions and can be affected by ventilation levels in the home. Common pollutants, such as pollen, originate from plants and can elicit symptoms such as sneezing, watery eyes, coughing, shortness of breath, dizziness, lethargy, fever, and digestive problems. Allergic reactions are the result of repeated exposure and immunologic sensitization to particular biologic allergens.
Although pollen allergies can be bothersome, asthmatic responses to pollutants can be life threatening. Asthma is a chronic disease of the airways that causes recurrent and distressing episodes of wheezing, breathlessness, chest tightness, and coughing [2]. Asthma can be divided into two groups based on the causes of an attack: extrinsic (allergic) and intrinsic (nonallergic). Most people with asthma do not fall neatly into either type, but somewhere in between, displaying characteristics of both classifications. Extrinsic asthma has a known allergic cause, such as allergies to dust mites, various pollens, grass or weeds, or pet danders; individuals with extrinsic asthma produce an excess amount of antibodies when exposed to triggers. Intrinsic asthma also has recognized triggers, but the connection between the trigger and the symptoms is not clearly understood, and there is no antibody hypersensitivity in intrinsic asthma. Intrinsic asthma usually starts in adulthood, without a strong family history of asthma. Some of the known triggers of intrinsic asthma are infections, such as cold and flu viruses; exercise and cold air; industrial and occupational pollutants; food additives and preservatives; drugs such as aspirin; and emotional stress.

Asthma is more common in children than in adults, with nearly 1 of every 13 school-age children having asthma [3]. Low-income African-Americans and certain Hispanic populations suffer disproportionately, with urban inner cities having particularly severe problems. The impact of asthma on neighborhoods, school systems, and health care facilities is severe: one-third of all pediatric emergency room visits are due to asthma, and it is the fourth most prominent cause of physician office visits. Additionally, at 14 million school days lost each year, it is the leading cause of school absenteeism attributed to chronic illness [4]. The U.S. population, on average, spends as much as 90% of its time indoors. Consequently, allergens and irritants from the indoor environment may play a significant role in triggering asthma episodes. A number of indoor environmental asthma triggers are biologic pollutants, including rodents (discussed in Chapter 4), cockroaches, mites, and mold.

# Cockroaches

The droppings, body parts, and saliva of cockroaches can be asthma triggers. Cockroaches are commonly found in crowded cities and in the southern United States. Allergens contained in the feces and saliva of cockroaches can cause allergic reactions or trigger asthma symptoms. A national study by Crain et al. [5] of 994 inner-city allergic children from seven U.S. cities revealed that cockroaches were reported in 58% of the homes. The Community Environmental Health Resource Center reports that cockroach debris, such as body parts and old shells, triggers asthma attacks in individuals who are sensitized to cockroach allergen [6]. After cockroaches have been eliminated, special attention to cleaning must be a priority in order to remove any remaining allergens that can act as asthma triggers.

# House Dust Mites

Another group of arthropods linked to asthma is house dust mites. In 1921, a link was suggested between asthmatic symptoms and house dust, but it was not until 1964 that investigators suggested that a mite could be responsible. Further investigation linked a number of mite species to the allergen response and revealed that humid homes have more mites and, subsequently, more allergens.
In addition, researchers established that fecal pellets deposited by the mites accumulated in home fabrics and could become airborne via domestic activities such as vacuuming and dusting, resulting in inhalation by the inhabitants of the home. House dust mites are distributed worldwide, with a minimum of 13 species identified from house dust. The two most common in the United States are the North American house dust mite (Dermatophagoides farinae) and the European house dust mite (D. pteronyssinus). According to Lyon [7], house dust mites thrive in homes that provide a source of food and shelter and adequate humidity. Mites prefer relative humidity levels of 70% to 80% and temperatures of 75°F to 80°F (24°C to 27°C). Most mites are found in bedrooms in bedding, where they spend up to a third of their lives. A typical used mattress may have from 100,000 to 10 million mites in it. In addition, carpeted floors, especially long, loose pile carpet, provide a microhabitat for the accumulation of food and moisture for the mite, and also provide protection from removal by vacuuming. The house dust mite's favorite food is human dander (skin flakes), which are shed at a rate of approximately 0.20 ounces per week. A good microscope and a trained observer are imperative in detecting mites. House dust mites also can be detected using diagnostic tests that measure the presence and infestation level of mites by combining dust samples collected from various places inside the home with indicator reagents [7]. Assuming the presence of mites, the precautions listed below should be taken if people with asthma are present in the home: • Use synthetic rather than feather and down pillows. • Use an approved allergen barrier cover to enclose the top and sides of mattresses and pillows and the base of the bed. • Use a damp cloth to dust the plastic mattress cover daily. • Change bedding and vacuum the bed base and mattress weekly. • Use nylon or cotton cellulose blankets rather than wool blankets. • Use hot (120°F-130°F [49°C-54°C]) water to wash all bedding, as well as room curtains. • Eliminate or reduce fabric wall hangings, curtains, and drapes. • Use wood, tile, linoleum, or vinyl floor covering rather than carpet. If carpet is present, vacuum regularly with a high-efficiency particulate air (HEPA) vacuum or a household vacuum with a microfiltration bag. • Purchase stuffed toys that are machine washable. • Use fitted sheets to help reduce the accumulation of human skin on the mattress surface. HEPA vacuums are now widely available and have also been shown to be effective [8]. A conventional vacuum tends to be inefficient as a control measure and results in a significant increase in airborne dust concentrations, but can be used with multilayer microfiltration collection bags. Another approach to mite control is reducing indoor humidity to below 50% and installing central air conditioning. Two products are available to treat house dust mites and their allergens. These products contain the active ingredients benzyl benzoate and tannic acid. # Pets According to the U.S. Environmental Protection Agency (EPA) [9], pets can be significant asthma triggers because of dead skin flakes, urine, feces, saliva, and hair. Proteins in the dander, urine, or saliva of warm-blooded animals can sensitize individuals and lead to allergic reactions or trigger asthmatic episodes. Warm-blooded animals include dogs, cats, birds, and rodents (hamsters, guinea pigs, gerbils, rats, and mice). 
Numerous strategies, such as the following, can diminish or eliminate animal allergens in the home:
• Eliminate animals from the home.
• Thoroughly clean the home (including floors and walls) after animal removal.
• If pets must remain in the home, reduce pet exposure in sleeping areas. Keep pets away from upholstered furniture, carpeted areas, and stuffed toys, and keep the pets outdoors as much as possible.

However, there is some evidence that pets introduced early into the home may prevent asthma. Several studies have shown that exposure to dogs and cats in the first year of life decreases a child's chances of developing allergies [10] and that exposure to cats significantly decreases sensitivity to cats in adulthood [11]. Many other studies have shown a decrease in allergies and asthma among children who grew up on a farm and were around many animals [12].

# Mold

People are routinely exposed to more than 200 species of fungi indoors and outdoors [13]. These include moldlike fungi, as well as other fungi such as yeasts and mushrooms. The terms "mold" and "mildew" are nontechnical names commonly used to refer to any fungus that is growing in the indoor environment. Mold colonies may appear cottony, velvety, granular, or leathery, and may be white, gray, black, brown, yellow, greenish, or other colors. Many reproduce via the production and dispersion of spores. They usually feed on dead organic matter and, provided with sufficient moisture, can live off many materials found in homes, such as wood, cellulose in the paper backing on drywall, insulation, wallpaper, glues used to bond carpet to its backing, and everyday dust and dirt.

Certain molds can cause a variety of adverse human health effects, including allergic reactions and immune responses (e.g., asthma), infectious disease (e.g., histoplasmosis), and toxic effects (e.g., aflatoxin-induced liver cancer from exposure to this mold-produced toxin in food) [14]. A recent Institute of Medicine (IOM) review of the scientific literature found sufficient evidence for an association between exposure to mold or other agents in damp indoor environments and the following conditions: upper respiratory tract symptoms, cough, wheeze, hypersensitivity pneumonitis in susceptible persons, and asthma symptoms in sensitized persons [15]. A previous scientific review was more specific in concluding that sufficient evidence exists to support associations between fungal allergen exposure and asthma exacerbation and upper respiratory disease [13]. Finally, mold toxins can cause direct lung damage leading to pulmonary diseases other than asthma [13].

The topic of residential mold has received increasing public and media attention over the past decade. Many news stories have focused on problems associated with "toxic mold" or "black mold," which is often a reference to the toxin-producing mold Stachybotrys chartarum. This might give the impression that mold problems in homes are more frequent now than in past years; however, no good evidence supports this. Reasons for the increasing attention to this issue include high-visibility lawsuits brought by property owners against builders and developers, scientific controversies regarding the degree to which specific illness outbreaks are mold-induced, and an increase in the cost of homeowner insurance policies due to the increasing number of mold-related claims.
Modern construction might be more vulnerable to mold problems for several reasons: tighter construction makes it more difficult for internally generated water vapor to escape, paper-backed drywall is widely used (paper is an excellent medium for mold growth when wet), and carpeting is widespread.

Allergic Health Effects. Many molds produce numerous protein or glycoprotein allergens capable of causing allergic reactions in people. These allergens have been measured in spores as well as in other fungal fragments. An estimated 6%-10% of the general population and 15%-50% of those who are genetically susceptible are sensitized to mold allergens [13]. Fifty percent of the 937 children tested in a large multicity asthma study sponsored by the National Institutes of Health showed sensitivity to mold, indicating the importance of mold as an asthma trigger among these children [16]. Molds are thought to play a role in asthma in several ways. Molds produce many potentially allergenic compounds, and molds may play a role in asthma via release of irritants that increase potential for sensitization or release of toxins (mycotoxins) that affect immune response [13].

Toxics and Irritants. Many molds also produce mycotoxins that can be a health hazard on ingestion, dermal contact, or inhalation [14]. Although common outdoor molds present in ambient air, such as Cladosporium cladosporioides and Alternaria alternata, do not usually produce toxins, many other mold species do [17]. Toxin-producing species associated with wet buildings, such as Aspergillus versicolor, Fusarium verticillioides, Penicillium aurantiogriseum, and S. chartarum, can produce potent toxins [17]. A single mold species may produce several different toxins, and a given mycotoxin may be produced by more than one species of fungus. Furthermore, toxin-producing fungi do not necessarily produce mycotoxins under all growth conditions; production depends on the substrate being metabolized, temperature, water content, and humidity [17]. Because species of toxin-producing molds generally have a higher water requirement than do common household molds, they tend to thrive only under conditions of chronic and severe water damage [18]. For example, Stachybotrys typically grows only under continuously wet conditions [19]. It has been suggested that very young children may be especially vulnerable to certain mycotoxins [19,20]. For example, associations have been reported between pulmonary hemorrhage (bleeding lung) deaths in infants and the presence of S. chartarum [21-24].

Causes of Mold. Mold growth can be caused by any condition resulting in excess moisture. Common moisture sources include rain leaks (e.g., on roofs and wall joints); surface and groundwater leaks (e.g., poorly designed or clogged rain gutters and footing drains, basement leaks); plumbing leaks; and stagnant water in appliances (e.g., dehumidifiers, dishwashers, refrigerator drip pans, and condensing coils and drip pans in HVAC systems). Moisture problems can also be due to water vapor migration and condensation, including uneven indoor temperatures, poor air circulation, soil air entry into basements, contact of humid unconditioned air with cooled interior surfaces, and poor insulation on indoor chilled surfaces (e.g., chilled water lines). Problems can also be caused by the production of excess moisture within homes from humidifiers, unvented clothes dryers, overcrowding, and similar sources.
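Condensation forms whenever humid air contacts a surface colder than the air's dew point. As an illustration only (this sketch is not part of the original guidance; the function name and example values are hypothetical), the widely used Magnus-Tetens approximation can estimate the dew point from room temperature and relative humidity:

```python
import math

def dew_point_c(temp_c: float, rh_percent: float) -> float:
    """Approximate dew point (deg C) using the Magnus-Tetens formula."""
    a, b = 17.27, 237.7  # Magnus coefficients for water vapor over liquid water
    gamma = (a * temp_c) / (b + temp_c) + math.log(rh_percent / 100.0)
    return (b * gamma) / (a - gamma)

# Example: room air at 21 deg C (70 deg F) and 60% relative humidity.
td = dew_point_c(21.0, 60.0)
print(f"Dew point: {td:.1f} deg C")  # about 12.9 deg C (55 deg F)
# Any indoor surface colder than this (e.g., an uninsulated chilled-water
# line or a poorly insulated exterior wall in winter) will collect
# condensation and can support mold growth.
```

Lowering indoor relative humidity lowers the dew point, which is why the humidity-control measures described later in this chapter are central to preventing condensation-driven mold growth.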
Finished basements are particularly susceptible to mold problems caused by the combination of poorly controlled moisture and mold-supporting materials (e.g., carpet, paper-backed sheetrock) [15]. There is also some evidence that mold spores from damp or wet crawl spaces can be transported through air currents into the upper living quarters. Older, substandard housing occupied by low-income families can be particularly prone to mold problems because of inadequate maintenance (e.g., inoperable gutters, basement and roof leaks), overcrowding, inadequate insulation, lack of air conditioning, and poor heating. Low interior temperatures (e.g., when one or two rooms are left unheated) result in an increase in the relative humidity, increasing the potential for water to condense on cold surfaces.

Mold Assessment Methods. Mold growth or the potential for mold growth can be detected by visual inspection for active or past microbial growth, detection of musty odors, and inspection for water staining or damage. If it is not possible or practical to inspect a residence, this information can be obtained using occupant questionnaires. Visual observation of mold growth, however, is limited in several ways: fungal elements such as spores are microscopic, their presence is often not apparent until growth is extensive, and growth can occur in hidden spaces (e.g., wall cavities, air ducts). Portable, hand-held moisture meters, for the direct measurement of moisture levels in materials, may also be useful in qualitative home assessments to aid in pinpointing areas of potential biologic growth that may not otherwise be obvious during a visual inspection [14].

For routine assessments in which the goal is to identify possible mold contamination problems before remediation, it is usually unnecessary to collect and analyze air or settled dust samples for mold analysis because decisions about appropriate intervention strategies can typically be made on the basis of a visual inspection [25]. Also, sampling and analysis costs can be relatively high, and the interpretation of results is not straightforward. Air and dust monitoring may, however, be necessary in certain situations, including 1) if an individual has been diagnosed with a disease associated with fungal exposure through inhalation, 2) if it is suspected that the ventilation systems are contaminated, or 3) if the presence of mold is suspected but cannot be identified by a visual inspection or bulk sampling [26]. Generally, indoor environments contain large reservoirs of mold spores in settled dust and contaminated building materials, of which only a relatively small amount is airborne at a given time.

Common methods for sampling for mold growth include bulk sampling techniques, air sampling, and collection of settled dust samples. In bulk sampling, portions of materials with visual or suspected mold growth (e.g., sections of wallboard, pieces of duct lining, carpet segments, or return air filters) are collected and directly examined to determine if mold is growing and to identify the mold species or groups that are present. Surface sampling may also be used in mold contamination investigations when a less destructive technique than bulk sampling is desired. For example, nondestructive samples of mold may be collected using a simple swab or adhesive tape [14]. Air can also be sampled for mold using pumps that pull air across a filter medium, which traps airborne mold spores and fragments.
It is generally recommended that outdoor air samples be collected concurrently with indoor samples to establish baseline ambient air conditions for comparison. Indoor contamination can be indicated by indoor mold distributions (both species and concentrations) that differ significantly from the distributions in outdoor samples [14]. Captured mold spores can be examined under a microscope to identify the mold species or groups and determine concentrations, or they can be cultured on growth media and the resulting colonies counted and identified. Both techniques require considerable expertise. Dust sampling involves the collection of settled dust samples (e.g., floor dust) using a vacuum method in which the dust is collected onto a porous filter medium or into a container. The dust is then processed in the laboratory and the mold identified by culturing viable spores.

Mold Standards. No standard numeric guidelines exist for assessing whether mold contamination exists in an area. In the United States, no EPA regulations or standards exist for airborne mold contaminants [26]. Various governmental and private organizations have, however, proposed guidance on the interpretation of fungal measures of environmental media in indoor environments (quantitative limits for fungal concentrations). Given evidence that young children may be especially vulnerable to certain mycotoxins [18], and in view of the potential severity of diseases associated with mycotoxin exposure, some organizations support a precautionary approach to limiting mold exposure [19]. For example, the American Academy of Pediatrics recommends that infants under 1 year of age not be exposed at all to chronically moldy, water-damaged environments [18].

Mold Mitigation. Common intervention methods for addressing mold problems include the following:
• maintaining heating, ventilating, and air conditioning (HVAC) systems;
• changing HVAC filters frequently, as recommended by the manufacturer;
• keeping gutters and downspouts in working order and ensuring that they drain water away from the foundation;
• routinely checking, cleaning, and drying drip pans in air conditioners, refrigerators, and dehumidifiers;
• increasing ventilation (e.g., using exhaust fans or open windows to remove humidity when cooking, showering, or using the dishwasher);
• venting clothes dryers to the outside;
• maintaining an ideal relative humidity level in the home of 40% to 60%;
• locating and removing sources of moisture (controlling dampness and humidity and repairing water leakage problems);
• cleaning or removing mold-contaminated materials;
• removing materials with severe mold growth; and
• using high-efficiency air filters.

Moisture Control. Because one of the most important factors affecting mold growth in homes is moisture level, controlling this factor is crucial in mold abatement strategies. Many simple measures can significantly control moisture: maintaining indoor relative humidity at 40%-60% through the use of dehumidifiers, fixing water leakage problems, increasing ventilation in kitchens and bathrooms by using exhaust fans, venting clothes dryers to the outside, reducing the number of indoor plants, using air conditioning at times of high outdoor humidity, heating all rooms in the winter and adding heating to outside-wall closets, sloping surrounding soil away from building foundations, fixing gutters and downspouts, and using a sump pump in basements prone to flooding [27].
Vapor barriers, sump pumps, and aboveground vents can also be installed in crawlspaces to prevent moisture problems [28].

Removal and Cleaning of Mold-contaminated Materials. Nonporous materials (e.g., metals, glass, and hard plastics) and semiporous materials (e.g., wood and concrete) that are contaminated with mold and are still structurally sound can often be cleaned with bleach-and-water solutions. However, in some cases, the material may not be easily cleaned or may be so severely contaminated that it must be removed. It is recommended that porous materials (e.g., ceiling tiles, wallboards, and fabrics) that cannot be cleaned be removed and discarded [29]. In severe cases, clean-up and repair of mold-contaminated buildings may be conducted using methods similar to those used for abatement of other hazardous substances such as asbestos [30]. For example, in situations of extensive colonization (large surface areas greater than 100 square feet or where the material is severely degraded), extreme precautions may be required, including full containment (complete isolation of the work area) with critical barriers (airlock and decontamination room) and negative pressurization, personnel trained to handle hazardous wastes, and the use of full-face respirators with HEPA filters, eye protection, and disposable full-body covering [26].

Worker Protection When Conducting Mold Assessment and Mitigation Projects. Activities such as cleaning or removal of mold-contaminated materials in homes, as well as investigations of mold contamination extent, have the potential to disturb areas of mold growth and release fungal spores and fragments into the air. Recommended measures to protect workers during mold remediation efforts depend on the severity and nature of the mold contamination being addressed, but include the use of well-fitted particulate masks or respirators that retain particles as small as 1 micrometer or less, disposable gloves and coveralls, and protective eyewear [31].

# Chemical Pollutants

# Carbon Monoxide

Carbon monoxide (CO) is a significant combustion pollutant in the United States and a leading cause of poisoning deaths [32]. According to the National Fire Protection Association (NFPA), CO-related nonfire deaths are often attributed to heating and cooking equipment. The leading specific types of equipment blamed for CO-related deaths include gas-fueled space heaters, gas-fueled furnaces, charcoal grills, gas-fueled ranges, portable kerosene heaters, and wood stoves. As with fire deaths, the risk for unintentional CO death is highest for the very young (ages 4 years and younger) and the very old (ages 75 years and older). CO is an odorless, colorless gas that can cause sudden illness and death; it results from the incomplete combustion of carbon-based fuels. Headache, dizziness, weakness, nausea, vomiting, chest pain, and confusion are the most frequent symptoms of CO poisoning. According to the American Lung Association (ALA) [33], breathing low levels of CO can cause fatigue and increase chest pain in people with chronic heart disease. Higher levels of CO can cause flulike symptoms in healthy people, and extremely high levels cause loss of consciousness and death. In the home, any fuel-burning appliance that is not adequately vented and maintained can be a potential source of CO.
The following steps should be followed to reduce CO (as well as sulfur dioxide and oxides of nitrogen) levels:
• Never use gas-powered equipment, charcoal grills, hibachis, lanterns, or portable camping stoves in enclosed areas or indoors.
• Install a CO monitor (Figure 5.2) in appropriate areas of the home. These monitors are designed to provide a warning before potentially life-threatening levels of CO are reached.
• Choose vented appliances when possible and keep gas appliances properly adjusted to decrease the combustion to CO. (Note: Vented appliances are always preferable for several reasons: oxygen levels, carbon dioxide buildup, and humidity management.)
• Buy only certified and tested combustion appliances that meet current safety standards, as certified by Underwriters Laboratories (UL), American Gas Association (AGA) Laboratories, or an equivalent body.
• Ensure that all gas heaters possess safety devices that shut off an improperly vented gas heater. Heaters made after 1982 use a pilot light safety system known as an oxygen depletion sensor. When inadequate fresh air exists, this system shuts off the heater before large amounts of CO can be produced.
• Use appliances that have electronic ignitions instead of pilot lights. These appliances are typically more energy efficient and eliminate the continuous low-level pollutants from pilot lights.
• Use the proper fuel in kerosene appliances.
• Install and use an exhaust fan vented to the outdoors over gas stoves.
• Have a trained professional annually inspect, clean, and tune up central heating systems (furnaces, flues, and chimneys) and repair them as needed.
• Do not idle a car inside a garage.

The U.S. Consumer Product Safety Commission (CPSC) recommends installing at least one CO alarm per household near the sleeping area. For an extra measure of safety, another alarm should be placed near the home's heating source. ALA recommends weighing the benefits of models powered by electrical outlets against models powered by batteries, which run out of power and need replacing. Battery-powered CO detectors provide continuous protection and do not require recalibration in the event of a power outage. Electric-powered systems do not provide protection during a loss of power and can take up to 2 days to recalibrate. A device that can be easily self-tested and reset to ensure proper functioning should be chosen. The product should meet Underwriters Laboratories Standard UL 2034.

# Ozone

The major source of indoor ozone is outdoor ozone [34]. Indoor levels can vary from 10% of the outdoor air level to as high as 80% of the outdoor air level. The Food and Drug Administration has set a limit of 0.05 ppm of ozone in indoor air. In recent years, there have been numerous advertisements for ion generators that destroy harmful indoor air pollutants. These devices create ozone or elemental oxygen that reacts with pollutants. EPA has reviewed the evidence on ozone generators and states: "available scientific evidence shows that at concentrations that do not exceed public health standards, ozone has little potential to remove indoor air contaminants," and "there is evidence to show that at concentrations that do not exceed public health standards, ozone is not effective at removing many odor causing chemicals" [35]. Ozone is also created by the exposure of polluted air to sunlight or ultraviolet light emitters. Ozone produced outside the home can infiltrate the house and react with indoor surfaces, creating additional pollutants.

# Environmental Tobacco Smoke or Secondhand Smoke

Like CO, environmental tobacco smoke (ETS; also known as secondhand smoke) is a product of combustion. The National Cancer Institute (NCI) [36] states that ETS is the combination of two forms of smoke from burning tobacco products:
• Sidestream smoke, or smoke that is emitted between the puffs of a burning cigarette, pipe, or cigar; and
• Mainstream smoke, or the smoke that is exhaled by the smoker.

The EPA [37] states that, because of their relative body size and respiratory rates, children are affected by ETS more than adults are. It is estimated that an additional 7,500 to 15,000 hospitalizations resulting from increased respiratory infections occur in children younger than 18 months of age because of ETS exposure. Figure 5.3 shows the ETS exposure levels in homes with children under age 7 years. The following actions are recommended in the home to protect children from ETS:
• if individuals insist on smoking, increase ventilation in the smoking area by opening windows or using exhaust fans; and
• refrain from smoking in the presence of children and do not allow babysitters or others who work in the home to smoke in the home or near children.

# Volatile Organic Compounds

In the modern home, many organic chemicals are used as ingredients in household products. Organic chemicals that vaporize and become gases at normal room temperature are collectively known as volatile organic compounds (VOCs). Examples of common items that can release VOCs include paints, varnishes, and wax, as well as many cleaning, disinfecting, cosmetic, degreasing, and hobby products. Levels of approximately a dozen common VOCs can be two to five times higher inside the home than outside, whether in highly industrialized areas or rural areas. VOCs that frequently pollute indoor air include toluene, styrene, xylenes, and trichloroethylene. Some of these chemicals may be emitted from aerosol products, dry-cleaned clothing, paints, varnishes, and glues. To lower levels of VOCs in the home, follow these steps:
• use all household products according to directions;
• provide good ventilation when using these products;
• properly dispose of partially full containers of old or unneeded chemicals;
• purchase limited quantities of products; and
• minimize exposure to emissions from products containing methylene chloride, benzene, and perchlorethylene.

A prominent VOC found in household and construction products is formaldehyde. According to CPSC [40], these products include the glue or adhesive used in pressed wood products; preservatives in paints, coatings, and cosmetics; coatings used for permanent-press quality in fabrics and draperies; and the finish on paper products and certain insulation materials. Formaldehyde is also contained in urea-formaldehyde (UF) foam insulation installed in the wall cavities of homes as an energy conservation measure. Levels of formaldehyde increase soon after installation of this product, but these levels decline with time. In 1982, CPSC voted to ban UF foam insulation. The courts overturned the ban; however, the publicity has decreased the use of this product. More recently, the most significant source of formaldehyde in homes has been pressed wood products made using adhesives that contain UF resins [41].
The most significant of these is medium-density fiberboard, which contains a higher resin-to-wood ratio than any other UF pressed wood product and is generally recognized as the highest formaldehyde-emitting pressed wood product. Additional pressed wood products are produced using phenol-formaldehyde resin, which generally emits formaldehyde at a considerably slower rate than UF resin. The emission rate for both resins will change over time and will be influenced by high indoor temperatures and humidity. Since 1985, U.S. Department of Housing and Urban Development (HUD) regulations (24 CFR 3280.308, 3280.309, and 3280.406) have permitted only the use of plywood and particleboard that conform to specified formaldehyde emission limits in the construction of prefabricated and manufactured homes [42]. This limit was intended to ensure that indoor formaldehyde levels remain below 0.4 ppm.

CPSC [40] notes that formaldehyde is a colorless, strong-smelling gas. At an air level above 0.1 ppm, it can cause watery eyes; burning sensations in the eyes, nose, and throat; nausea; coughing; chest tightness; wheezing; skin rashes; and allergic reactions. Laboratory animal studies have revealed that formaldehyde can cause cancer in animals and may cause cancer in humans. Formaldehyde is usually present at levels less than 0.03 ppm indoors and outdoors, with rural areas generally experiencing lower concentrations than urban areas. Indoor areas that contain products that release formaldehyde can have levels greater than 0.03 ppm.

CPSC also recommends the following actions to avoid high levels of exposure to formaldehyde:
• Purchase pressed wood products that are labeled or stamped to be in conformance with American National Standards Institute criteria ANSI A208.1-1993. Use particleboard flooring marked with ANSI grades PBU, D2, or D3. Medium-density fiberboard should be in conformance with ANSI A208.2-1994 and hardwood plywood with ANSI/HPVA HP-1-1994 (Figure 5.4).
• Purchase furniture or cabinets that contain a high percentage of panel surfaces and edges that are laminated or coated. Unlaminated or uncoated (raw) panels of pressed wood products will generally emit more formaldehyde than those that are laminated or coated.
• Use alternative products, such as wood panel products not made with UF glues, lumber, or metal.
• Avoid the use of foamed-in-place insulation containing formaldehyde, especially UF foam insulation.
• Wash durable-press fabrics before use.

CPSC also recommends the following actions to reduce existing levels of indoor formaldehyde:
• Ventilate the home well by opening doors and windows and installing exhaust fans.
• Seal the surfaces of formaldehyde-containing products that are not laminated or coated with paint, varnish, or a layer of vinyl or polyurethane-like materials.
• Remove from the home products that release formaldehyde into the indoor air.

# Radon

According to the EPA [43], radon is a colorless, odorless gas that occurs naturally in soil and rock as a decay product of uranium. The U.S. Geological Survey (USGS) [44] notes that the typical uranium content of rock and the surrounding soil is between 1 and 3 ppm. Higher levels of uranium are often contained in rock such as light-colored volcanic rock, granite, dark shale, and sedimentary rock containing phosphate. Uranium levels as high as 100 ppm may be present in various areas of the United States because of these rocks.
The main source of high-level radon pollution in buildings is surrounding uranium-containing soil. Thus, the greater the level of uranium nearby, the greater the chances are that buildings in the area will have high levels of indoor radon. Figure 5.5 demonstrates the geographic variation in radon levels in the United States. Maps of the individual states and areas that have proven high for radon are available at http://www.epa.gov/iaq/radon/zonemap.html. EPA recommends that this map be supplemented with any available local data to further understand and predict the radon potential of a specific area. A free video is available from the U.S. EPA: call 1-800-438-4318 and ask for EPA 402-V-02-003 (TRT 13.10).

Radon, according to the California Geological Survey [45], is one of the intermediate radioactive elements formed during the radioactive decay of uranium-238, uranium-235, or thorium-232. Radon-222 is the radon isotope of most concern to public health because of its longer half-life (3.8 days). The mobility of radon gas is much greater than that of uranium and radium, which are solids at room temperature. Thus, radon can leave rocks and soil, move through fractures and pore spaces, and ultimately enter a building to collect in high concentrations. In water, radon moves less than 1 inch before it decays, compared with 6 feet or more in dry rocks or soil.

USGS [44] notes that radon near the surface of soil typically escapes into the atmosphere. However, where a house is present, soil air often flows toward the house foundation because of
• differences in air pressure between the soil and the house, with soil pressure often being higher;
• presence of openings in the house's foundation; and
• increases in permeability around the basement (if present).

Houses are often constructed with loose fill under a basement slab and between the walls and exterior ground. This fill is more permeable than the original ground. Houses typically draw less than 1% of their indoor air from the soil. However, houses with low indoor air pressures, poorly sealed foundations, and several entry points for soil air may draw up to 20% of their indoor air from the soil. USGS [44] states that radon may also enter the home through water systems. Surface water sources typically contain little radon because it escapes into the air. In larger cities, radon is released to the air by municipal processing systems that aerate the water. However, in areas where groundwater is the main water supply for communities, small public systems and private wells are typically closed systems that do not allow radon to escape. Radon then enters the indoor air from showers, clothes washing, dishwashing, and other uses of water. Figure 5.6 shows typical entry points of radon.

The health risks of radon stem from its breakdown into "radon daughters," which emit high-energy alpha particles. These progeny enter the lungs, attach themselves, and may eventually lead to lung cancer. Exposure to radon is believed to contribute to between 15,000 and 21,000 excess lung cancer deaths in the United States each year. The EPA has identified levels greater than 4 picocuries per liter (pCi/L) as levels at which remedial action should be taken. Approximately 1 in 15 homes nationwide has radon above this level, according to a recent advisory from the U.S. Surgeon General [46]. Smokers are at significantly higher risk for radon-related lung cancer. Radon in the home can be measured either by the occupant or by a professional.
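To make the action level concrete, the following minimal sketch (not part of the original guidance) maps a measured radon concentration to the follow-up steps described above. The function name is illustrative, and the 2 pCi/L "consider fixing" threshold reflects broader EPA consumer guidance rather than this chapter:

```python
EPA_ACTION_LEVEL = 4.0  # pCi/L, the EPA remedial-action level cited above

def radon_advice(measured_pci_per_liter: float) -> str:
    """Return follow-up guidance for a home radon measurement."""
    if measured_pci_per_liter >= EPA_ACTION_LEVEL:
        return "At or above the action level: confirm with a second test, then mitigate."
    if measured_pci_per_liter >= 2.0:
        return "Below the action level, but consider mitigation."
    return "Low: no action needed; retest after major renovations or foundation work."

print(radon_advice(5.2))  # an example reading above the action level
```

Because radon levels vary over time, a single short-term reading near a threshold is best confirmed with a longer-term test, as the next paragraphs explain.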
Because radon has no odor or color, special devices are used to measure its presence. Radon levels vary from day to day and season to season. Short-term tests (2 to 90 days) are best if quick results are needed, but long-term tests (more than 3 months) yield better information on average year-round exposure. Measurement devices are routinely placed in the lowest occupied level of the home. The devices measure either the radon gas directly or its daughter products. The simplest devices are passive, require no electricity, and include the charcoal canister, charcoal liquid scintillation device, alpha track detector, and electret ion detector [47]. All of these devices, with the exception of the ion detector, can be purchased in hardware stores or by mail; the ion detector generally is available only through laboratories. These devices are inexpensive, primarily used for short-term testing, and require little to no training. Active devices, however, need electrical power and include continuous monitoring devices. They are customarily more expensive and require professionally trained testers for their operation. Figure 5.7 shows examples of the charcoal tester (a; left) and the alpha track detector (b; right).

After testing and evaluation by a professional, it may be necessary to lower the radon levels in the structure. The Pennsylvania Department of Environmental Protection [48] states that in most cases, a system with pipes and a fan is used to reduce radon. This system, known as a subslab depressurization system, requires no major changes to the home. The cost typically ranges from $500 to $2,500 and averages approximately $1,000, varying with geographic region. The typical mitigation system usually has only one pipe penetrating through the basement floor; the pipe also may be installed outside the house. The Connecticut Department of Public Health [49] notes that it is more cost-effective to include radon-resistant techniques while constructing a building than to install a reduction system in an existing home. Inclusion of radon-resistant techniques in initial construction costs approximately $350 to $500 [50]. Figure 5.8 shows examples of radon-resistant construction techniques. A passive radon-resistant system has five major parts:
1. A layer of gas-permeable material (usually 4 inches of gravel) under the foundation.
2. Plastic sheeting over the gas-permeable layer.
3. Sealing and caulking of all openings in the concrete foundation floor.
4. A gas-tight, 3- or 4-inch vent pipe running from under the foundation through the house to the roof.
5. A roughed-in electrical junction box for the future installation of a fan, if needed.

These features create a physical barrier to radon entry, and the vent pipe redirects the flow of air under the foundation, preventing radon from seeping into the house.

# Pesticides

Much pesticide use could be reduced if integrated pest management (IPM) practices were used in the home. IPM is a coordinated approach to managing roaches, rodents, mosquitoes, and other pests that integrates inspection, monitoring, treatment, and evaluation, with special emphasis on decreased use of toxic agents. All pest management options, including natural, biologic, cultural, and chemical methods, should be considered, and those that have the least impact on health and the environment should be selected.
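This selection principle, choosing the least-toxic option that is still effective, can be sketched in a few lines of code. The example below is purely illustrative: the option names, scores, and effectiveness cutoff are hypothetical and are not drawn from any agency guidance.

```python
from dataclasses import dataclass

@dataclass
class PestControlOption:
    name: str
    effectiveness: int  # 1 (low) to 5 (high), judged from inspection/monitoring
    toxicity: int       # 1 (least toxic) to 5 (most toxic)

def choose_least_toxic(options: list[PestControlOption]) -> PestControlOption:
    # Keep only options effective enough to solve the problem, then pick
    # the one with the smallest health and environmental impact.
    workable = [o for o in options if o.effectiveness >= 3]
    return min(workable, key=lambda o: o.toxicity)

options = [
    PestControlOption("seal cracks and screen vents", 4, 1),
    PestControlOption("remove food and water sources", 5, 1),
    PestControlOption("gel bait in tamper-resistant stations", 4, 2),
    PestControlOption("broadcast insecticide spray", 5, 5),
]
print(choose_least_toxic(options).name)  # a least-toxic workable option
```

In practice, the effectiveness scores would come from the inspection and monitoring steps of the IPM cycle, and the evaluation step would update them over time.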
Most household pests can be controlled by eliminating the habitat for the pest both inside and outside, building or screening them out, eliminating food and harborage areas, and safely using appropriate pesticides if necessary. EPA [51] states that 75% of U.S. households used at least one pesticide indoors during the past year and that 80% of most people's exposure to pesticides occurs indoors. Measurable levels of up to a dozen pesticides have been found in the air inside homes. Pesticides used in and around the home include products to control insects (insecticides), termites (termiticides), rodents (rodenticides), fungi (fungicides), and microbes (disinfectants). These products are found in sprays, sticks, powders, crystals, balls, and foggers.

Delaplane [52] notes that the ancient Romans killed insect pests by burning sulfur and controlled weeds with salt. In the 1600s, ants were controlled with mixtures of honey and arsenic. U.S. farmers in the late 19th century used copper acetoarsenite (Paris green), calcium arsenate, nicotine sulfate, and sulfur to control insect pests in field crops. By World War II and afterward, numerous pesticides had been introduced, including DDT, BHC, aldrin, dieldrin, endrin, and 2,4-D.

A significant concern with pesticides used in and around the home is their impact on children. According to a 2003 EPA survey, 47% of all households with children under the age of 5 years had at least one pesticide stored in an unlocked cabinet less than 4 feet off the ground, within easy reach of children. Similarly, 74% of households without children under the age of 5 also stored pesticides in an unlocked cabinet less than 4 feet off the ground. This matters because 13% of all pesticide poisoning incidents occur in homes other than the child's own home. The EPA [53] notes a report by the American Association of Poison Control Centers indicating that approximately 79,000 children were involved in common household pesticide poisonings or exposures.

The health effects of pesticides vary with the product. Local effects from most products involve the eyes, nose, and throat; more severe consequences, such as effects on the central nervous system and kidneys and increased cancer risks, are possible. The active and inert ingredients of pesticides can be organic compounds, which can contribute to the level of organic compounds in indoor air. More significantly, products containing cyclodiene pesticides have been commonly associated with misapplication. Individuals inadvertently exposed during misapplication have had numerous symptoms, including headaches, dizziness, muscle twitching, weakness, tingling sensations, and nausea. In addition, there is concern that these pesticides may cause long-term damage to the liver and the central nervous system, as well as an increased cancer risk.

Cyclodiene pesticides were developed for use as insecticides in the 1940s and 1950s. The four main cyclodiene pesticides-aldrin, dieldrin, chlordane, and heptachlor-were used to guard soil and seed against insect infestation and to control insect pests in crops. Outside of agriculture they were used for ant control; farm, industrial, and domestic control of fleas, flies, lice, and mites; locust control; termite control in buildings, fences, and power poles; and pest control in home gardens. No other commercial use is permitted for cyclodiene or related products.
The only exception is the use of heptachlor by utility companies to control fire ants in underground cable boxes. An EPA survey [53] revealed that bathrooms and kitchens are the areas in the home most likely to have improperly stored pesticides. In the United States, EPA regulates pesticides under the pesticide law known as the Federal Insecticide, Fungicide, and Rodenticide Act. Since 1981, this law has required most residential-use pesticides to bear a signal word such as "danger" or "warning" and to be contained in child-resistant packaging. This type of packaging is designed to prevent or delay access by most children under the age of 5 years. EPA offers the following recommendations for preventing accidental poisoning:
• store pesticides away from the reach of children in a locked cabinet, garden shed, or similar location;
• read the product label and follow all directions exactly, especially precautions and restrictions;
• remove children, pets, and toys from areas before applying pesticides;
• if interrupted while applying a pesticide, properly close the package and ensure that the container is not within reach of children;
• do not transfer pesticides to other containers that children may associate with food or drink;
• do not place rodent or insect baits where small children have access to them;
• use child-resistant packaging properly by closing the container tightly after use;
• ensure that other caregivers for children are aware of the potential hazards of pesticides;
• teach children that pesticides are poisons and should not be handled; and
• keep the local Poison Control Center telephone number available.

# Toxic Materials

# Asbestos

Asbestos, from the Greek word meaning "inextinguishable," refers to a group of six naturally occurring mineral fibers: amosite, crocidolite, tremolite, actinolite, anthophyllite, and chrysotile. Chrysotile asbestos, also known as white asbestos, is the predominant commercial form. Asbestos is strong, flexible, resistant to heat and chemical corrosion, and insulates well. These features led to the use of asbestos in up to 3,000 consumer products before government agencies began to phase it out in the 1970s because of its health hazards. Asbestos has been used in insulation, roofing, siding, vinyl floor tiles, fireproofing materials, texturized paint and soundproofing materials, heating appliances (such as clothes dryers and ovens), fireproof gloves, and ironing boards. Asbestos continues to be used in some products, such as brake pads. Other mineral products, such as talc and vermiculite, can be contaminated with asbestos.

The health effects of asbestos exposure are numerous and varied. Studies of workers exposed to asbestos in factories and shipyards have revealed three primary health concerns from breathing high levels of asbestos fibers: lung cancer; mesothelioma, a cancer of the lining of the chest and the abdominal cavity; and asbestosis, a condition in which the lungs become scarred with fibrous tissue. The risk for all of these conditions increases as the number of fibers inhaled increases. Smoking acts synergistically with asbestos exposure, further increasing the risk for lung cancer. The latency period (from time of exposure to appearance of symptoms) of these diseases is usually 20 to 30 years. Individuals who develop asbestosis have typically been exposed to high levels of asbestos for a long time.
Exposure levels to asbestos are measured in fibers per cubic centimeter of air. Most individuals are exposed to small amounts of asbestos in daily living; however, the great majority of them do not develop health problems. According to the Agency for Toxic Substances and Disease Registry (ATSDR), if an individual is exposed, several factors determine whether the individual will be harmed [54]. These factors include the dose (how much), the duration (how long), and the fiber type (mineral form and distribution).

ATSDR also states that children may be more adversely affected than adults [54]. Children breathe differently and have different lung structures than adults; however, it has not been determined whether these differences cause a greater amount of asbestos fibers to stay in the lungs of a child than in the lungs of an adult. In addition, children drink more fluids per kilogram of body weight than do adults, and they can be exposed through asbestos-contaminated drinking water. Eating asbestos-contaminated soil and dust is another source of exposure for children: certain children intentionally eat soil, and children's hand-to-mouth activities mean that all young children eat more soil than do adults. Family members also have been exposed to asbestos carried home on the clothing of other family members who worked in asbestos mines or mills. Breathing asbestos fibers may result in difficulty in breathing. Asbestos-related diseases usually appear many years after the first exposure and are therefore not likely to be seen in children, but people who were exposed to asbestos at a young age may be more likely to contract diseases than those who are first exposed later in life. In the small number of studies that have specifically looked at asbestos exposure in children, there is no indication that younger people develop asbestos-related diseases more quickly than older people. Developing fetuses and infants are not likely to be exposed to asbestos through the placenta or breast milk of the mother. Results of animal studies do not indicate that exposure to asbestos is likely to result in birth defects.

A joint document issued by CPSC, EPA, and ALA notes that most products in today's homes do not contain asbestos. However, asbestos can still be found in certain products and areas of the home; products that contain asbestos that could be inhaled are required to be labeled as such. Until the 1970s, many types of building products and insulation materials used in homes routinely contained asbestos. A potential asbestos problem both inside and outside the home is vermiculite. According to the USGS [55], vermiculite is a claylike material that expands when heated to form wormlike particles. It is used in concrete aggregate, fertilizer carriers, insulation, potting soil, and soil conditioners. This product ceased being mined in 1992, but old stocks may still be available.

Common products that contained asbestos in the past, and the conditions that may release fibers, include the following:
• Steam pipes, boilers, and furnace ducts insulated with an asbestos blanket or asbestos paper tape. These materials may release asbestos fibers if damaged, repaired, or removed improperly.
• Resilient floor tiles (vinyl asbestos, asphalt, and rubber), the backing on vinyl sheet flooring, and adhesives used for installing floor tile. Sanding tiles can release fibers, as may scraping or sanding the backing of sheet flooring during removal.
• Cement sheet, millboard, and paper used as insulation around furnaces and wood-burning stoves. Repairing or removing appliances may release asbestos fibers, as may cutting, tearing, sanding, drilling, or sawing insulation.
• Door gaskets in furnaces, wood stoves, and coal stoves. Worn seals can release asbestos fibers during use.
• Soundproofing or decorative material sprayed on walls and ceilings. Loose, crumbly, or water-damaged material may release fibers, as will sanding, drilling, or scraping the material.
• Patching and joint compounds for walls, ceilings, and textured paints. Sanding, scraping, or drilling these surfaces may release asbestos.
• Asbestos cement roofing, shingles, and siding. These products are not likely to release asbestos fibers unless sawed, drilled, or cut.
• Artificial ashes and embers sold for use in gas-fired fireplaces, in addition to other older household products such as fireproof gloves, stove-top pads, ironing board covers, and certain hair dryers.
• Automobile brake pads and linings, clutch facings, and gaskets.

Homeowners who believe material in their home may contain asbestos should not disturb it. Material in good condition generally will not release asbestos fibers; there is little danger unless fibers are released and inhaled into the lungs, where they can remain for a long time, increasing the risk for disease. Suspected asbestos-containing material should be checked regularly for damage from abrasions, tears, or water. If possible, access to the area should be limited. Asbestos-containing products such as asbestos gloves, stove-top pads, and ironing board covers should be discarded if damaged or worn. Permission and proper disposal methods should be obtained from local health, environmental, or other appropriate officials.

If asbestos material is more than slightly damaged, or if planned changes in the home might disturb it, repair or removal by a professional is needed. Before remodeling, determine whether asbestos materials are present. Only a trained professional can confirm suspected asbestos materials that are part of a home's construction. This individual will take samples for analysis and submit them to an EPA-approved laboratory. If the asbestos material is in good shape and will not be disturbed, the best approach is to take no action and continue to monitor the material. If the material needs action to address potential exposure problems, there are two approaches to correcting the problem: repair and removal.

Repair involves sealing or covering the asbestos material. Sealing (encapsulation) involves treating the material with a sealant that either binds the asbestos fibers together or coats the material so fibers are not released. This approach is often used for pipe, furnace, and boiler insulation; however, this work should be done only by a professional who is trained to handle asbestos safely. Covering (enclosure) involves placing something over or around the material that contains asbestos to prevent release of fibers; exposed insulated piping, for example, may be covered with a protective wrap or jacket. In the repair process, the material remains in position undisturbed. Repair is less expensive than removal, but because the asbestos remains in place, repair may make later removal of asbestos, if necessary, more difficult and costly.
Repairs can be major or minor. Both must be done only by a professional trained in methods for safely handling asbestos. Removal is usually the most expensive option and, unless required by state or local regulations, should be the last option considered in most situations, because removal poses the greatest risk for fiber release. However, removal may be required when remodeling or making major changes to the home that will disturb asbestos material. In addition, removal may be called for if asbestos material is damaged extensively and cannot be otherwise repaired. Removal is complex and must be done only by a contractor with special training. Improper removal of asbestos material may create more of a problem than simply leaving it alone.

# Lead

Many individuals recognize lead in the form often seen in tire weights and fishing equipment, but few recognize its various forms in and around the home. The Merriam-Webster Dictionary [56] defines lead as a heavy, soft, malleable, bluish-white metallic element. Lead is widespread in the environment, and people absorb lead from a variety of sources every day. Although lead has been used in numerous consumer products, the most important sources of lead exposure to children and others today are the following:
• contaminated house dust that has settled on horizontal surfaces,
• deteriorated lead-based paint,
• contaminated bare soil,
• food (which can be contaminated by lead in the air or in food containers, particularly lead-soldered food containers),
• drinking water (from corrosion of plumbing systems), and
• occupational exposure or hobbies.

Federal controls on lead in gasoline, new paint, food canning, and drinking water, as well as on lead from industrial air emissions, have significantly reduced total human exposure to lead. The number of children with blood lead levels above 10 micrograms per deciliter (µg/dL), the level designated in 1991 as the level of concern, declined from 1.7 million in the late 1980s to 310,000 in 1999-2002. This demonstrates that the controls have been effective, but that many children are still at risk. CDC data show that deteriorated lead-based paint and the contaminated dust and soil it generates are the most common sources of children's exposure today. HUD data show that the number of houses with lead paint declined from 64 million in 1990 to 38 million in 2000 [57].

Children are more vulnerable to lead poisoning than are adults. Infants can be exposed to lead in the womb if their mothers have lead in their bodies. Infants and children can swallow and breathe lead in dirt, dust, or sand through normal hand-to-mouth contact while they play on the floor or ground; these activities make it easier for children to be exposed to lead. Other sources of exposure have included imported vinyl miniblinds, crayons, children's jewelry, and candy. In 2004, elevated lead levels associated with lead water service pipes were observed in the drinking water in Washington, D.C., accompanied by increases in blood lead levels in children under the age of 6 years who were served by the water system [58]. In some cases, children swallow nonfood items such as paint chips, which may contain very large amounts of lead, particularly in and around older houses that were painted with lead-based paint. Many studies have verified the effect of lead exposure on IQ scores in the United States; the effects of lead exposure have been reviewed by the National Academy of Sciences [59]. Generally, tests for blood lead levels should use drawn blood rather than a finger-stick test, which can be unreliable if performed improperly.
Units are measured in micrograms per deciliter and reflect the 1991 guidance from the Centers for Disease Control and Prevention [60]:
• Children: 10 µg/dL (level of concern)-find source of lead;
• Children: 15 µg/dL and above-environmental intervention, counseling, medical monitoring;
• Children: 20 µg/dL and above-medical treatment;
• Adults: 25 µg/dL (level of concern)-find source of lead; and
• Adults: 50 µg/dL-Occupational Safety and Health Administration (OSHA) standard for medical removal from the worksite.

Adults are usually exposed to lead from occupational sources (e.g., battery construction, paint removal) or at home (e.g., paint removal, home renovations). In 1978, CPSC banned the use of lead-based paint in residential housing. Because houses are periodically repainted, the most recent layer of paint will most likely not contain lead, but the older layers underneath probably will. Therefore, the only way to accurately determine the amount of lead present in older paint is to have it analyzed. It is important that owners of homes built before 1978 be aware that layers of older paint can contain a great deal of lead. Guidelines on identifying and controlling lead-based paint hazards in housing have been published by HUD [61].

# Controlling Lead Hazards

The purpose of a home risk assessment is to determine, through testing and evaluation, where hazards from lead warrant remedial action. A certified inspector or risk assessor can test paint, soil, or lead dust either on-site or in a laboratory using methods such as x-ray fluorescence. HUD has published detailed protocols for risk assessments and inspections [61]. It is important from a health standpoint that future tenants, painters, and construction workers know that lead-based paint is present, even under treated surfaces, so they can take precautions when working in areas that will generate lead dust. Whenever mitigation work is completed, it is important to have a clearance test using the dust wipe method to ensure that lead-laden dust generated during the work does not remain at levels above those established by the EPA and HUD. Such testing is required for owners of most housing that receives federal financial assistance, such as Section 8 rental housing. A building or housing file should be maintained and updated whenever any additional lead hazard control work is completed. Owners are required by law to disclose information about lead-based paint or lead-based paint hazards to buyers or tenants before completing a sales or lease contract [62].

All hazards should be controlled as identified in a risk assessment. Whenever extensive amounts of lead must be removed from a property, or when methods of removing toxic substances will affect the environment, it is extremely important that the owner be aware of the issues surrounding worker safety, environmental controls, and proper disposal. Appropriate architectural, engineering, and environmental professionals should be consulted when lead hazard projects are complex. Following are brief explanations of the two approaches for controlling lead hazard risks. These controls are recommended by HUD in HUD Guidelines for the Evaluation and Control of Lead-Based Paint Hazards in Housing [61] and are summarized here to focus on special considerations for historic housing:

Interim Controls. Short-term solutions include thorough dust removal and thorough washdown and cleanup, paint film stabilization and repainting, covering of lead-contaminated soil, and informing tenants about lead hazards.
Interim controls require ongoing maintenance and evaluation.

Hazard Abatement. Long-term solutions are defined as having an expected life of 20 years or more and involve permanent removal of hazardous paint through chemicals, heat guns, or controlled sanding or abrasive methods; permanent removal of deteriorated painted features through replacement; removal or permanent covering of contaminated soil; and the use of enclosures (such as drywall) to isolate painted surfaces. The use of specialized encapsulant products can be considered permanent abatement of lead.

# Definitions Related to Lead

Deteriorated lead-based paint: Paint known to contain lead above the regulated level that shows signs of peeling, chipping, chalking, blistering, alligatoring, or otherwise separating from its substrate.

Dust removal: The process of removing dust to avoid creating a greater problem of spreading lead particles, usually through wet or damp collection and use of HEPA vacuums.

Hazard abatement: Long-term measures to remove the hazards of lead-based paint through replacement of building components, enclosure, encapsulation, or paint removal.

Interim control: Short-term methods to remove lead dust, stabilize deteriorating painted surfaces, treat friction and impact surfaces that generate lead dust, and repaint surfaces. Maintenance can ensure that housing remains lead-safe.

Lead-based paint: Any existing paint, varnish, shellac, or other coating with lead equal to or greater than 1.0 milligrams per square centimeter (mg/cm²) or greater than 0.5% by weight (5,000 ppm, 5,000 micrograms per gram [µg/g], or 5,000 milligrams per kilogram [mg/kg]). For new paint, CPSC has established 0.06% as the maximum amount of lead allowed. Lead in paint can be measured by x-ray fluorescence analyzers or by laboratory analysis performed by certified personnel and approved laboratories.

Risk assessment: An on-site investigation to determine the presence and condition of lead-based paint, including limited test samples and an evaluation of the age, condition, housekeeping practices, and uses of a residence.

Reducing and controlling lead hazards can be successfully accomplished without destroying the character-defining features and finishes of historic buildings. Federal and state laws generally support the reasonable control of lead-based paint hazards through a variety of treatments, ranging from modified maintenance to selective substrate removal. The key to protecting children, workers, and the environment is to be informed about the hazards of lead; to control exposure to lead dust, lead in soil, and lead paint chips; and to follow existing regulations. The following summarizes several important regulations that affect lead-hazard reduction projects. Owners should be aware that regulations change, and they have a responsibility to check state and local ordinances as well. Care must be taken to ensure that any procedures used to remove lead from the home protect both residents and workers from lead dust exposure.

Local Ordinances. Check with local health departments, poison control centers, and offices of housing and community development to determine whether any laws require compliance by building owners. Determine whether projects are considered abatements and will require special contractors and permits.

Owner's Responsibility. Owners are ultimately responsible for ensuring that hazardous waste is properly disposed of when it is generated on their own sites.
# Arsenic
Lead arsenate was used legally up to 1988 in most of the orchards in the United States. Often, 50 or more applications of this pesticide were made each year. This toxic heavy metal compound has accumulated in the soil around houses and under the numerous orchards in the country, contaminating both wells and land. These orchards are often turned into subdivisions as cities expand and sprawl occurs. Residues from the pesticide lead arsenate, once used heavily on apple, pear, and other orchards, contaminate an estimated 70,000 to 120,000 acres in the state of Washington alone, some of it in areas where agriculture has been replaced with housing, according to state ecology department officials and others. Lead arsenate, which was not banned for use on food crops until 1988, nevertheless was mostly replaced by the pesticide dichlorodiphenyltrichloroethane (DDT) and its derivatives in the late 1940s. DDT was banned in the United States in 1972, but is used elsewhere in the world.
For more than 20 years, the wood industry has infused green wood with heavy doses of arsenic to kill bugs and prevent rot. Numerous studies show that arsenic sticks to children's hands when they play on treated wood, and it is absorbed through the skin and ingested when they put their hands in their mouths. Although most uses of arsenic wood treatments were phased out by 2004, an estimated 90% of existing outdoor structures are made of arsenic-treated wood [65]. In a study conducted by the University of North Carolina Environmental Quality Institute in Asheville, wood samples were analyzed and showed that
• Older decks and play sets (7 to 15 years old) that were preserved with chromated copper arsenate expose people to just as much arsenic on the wood surface as do newer structures (less than 1 year old).
• The amount of arsenic that testers wiped off a small area of wood about the size of a 4-year-old's handprint typically far exceeds what EPA allows in a glass of water under the Safe Drinking Water Act standard.
• Arsenic in the soil from two of every five backyards or parks tested exceeded EPA's Superfund cleanup level of 20 ppm.
Figure 5.9 shows a safety warning label placed on wood products.
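Screening soil readings against the cleanup level cited above is a simple exceedance count. A minimal sketch, with made-up sample values purely for illustration:

```python
SUPERFUND_ARSENIC_LIMIT_PPM = 20.0  # EPA Superfund cleanup level cited above

def screen_soil_samples(samples_ppm: list[float]) -> dict:
    """Summarize which soil arsenic readings exceed the cleanup level."""
    exceedances = [s for s in samples_ppm if s > SUPERFUND_ARSENIC_LIMIT_PPM]
    return {
        "n_samples": len(samples_ppm),
        "n_exceed": len(exceedances),
        "fraction_exceed": len(exceedances) / len(samples_ppm) if samples_ppm else 0.0,
    }

# Two of five hypothetical samples over the limit, echoing the study finding above:
print(screen_soil_samples([4.2, 25.0, 11.8, 31.5, 7.0]))
```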
# Introduction
The principal function of a house is to provide protection from the elements. Our present society, however, requires that a home provide not only shelter, but also privacy, safety, and reasonable protection of our physical and mental health. A living facility that fails to offer these essentials through adequately designed and properly maintained interiors and exteriors cannot be termed "healthful housing." In this chapter, the home is considered in terms of the parts that have a bearing on its soundness, state of repair, and safety. These are some of the elements that the housing inspector must examine when making a thorough housing inspection.
Figure 6.1 shows a typical house of newer construction with a terminology key; the figure and the key are available in an interactive format in the glossary on the U.S. Inspect Web site [1]. Figure 6.2 shows a typical house built between 1950 and 1980 and also includes a terminology key. The figures show the complexity and the numerous components of a home. These components form the vocabulary that is necessary to discuss housing structure inspection issues.
Key to Figure 6.1 (New Housing Terminology)
1. Ash dump (see 35)-A door or opening in the firebox that leads directly to the ash pit, through which the ashes are swept after the fire has burned out. Not all fireboxes are equipped with an ash dump.
2. Attic space-The open space within the attic area.
3. Backfill-The material used to refill an excavation around the outside of a foundation wall or pipe trench.
4. Baluster-One of a series of small pillars attached to and running between the stairs and the handrail. The spacing between the balusters should be less than 4 inches to prevent small children from getting stuck between them. Balusters are considered a safety item and provide an additional barrier.
5. Baseboard trim-Typically a wood trim board placed against the wall around the perimeter of a room next to the floor. The intent is to conceal the joint between the floor and wall finish.
6. Basement window-A window opening installed in the basement wall. Basement windows are occasionally below the finish grade level and will be surrounded on the exterior by a window well.
7. Blind or shutter-A lightweight frame in the form of a door located on each side of a window. They are most commonly constructed of wood (solid or louvered panels) or plastic. Originally they were designed to close and secure over the windows for security and foul weather. Most shutters today are decorative pieces secured to the house beside the windows.
8. Bridging-Small pieces of wood or metal strapping placed in an X-pattern between the floor joists at midspan to prevent the joists from twisting and squeaking and to provide reinforcement and distribution of stress.
9. Building paper/underlayment-Building material, usually a felt paper, that is used as a protective barrier against air and moisture passage from the area beneath the flooring, as well as providing a movement/noise isolator in hardwood flooring.
10. Ceiling joist-A horizontally placed framing member at the ceiling of the top-most living space of a house that provides a platform to which the finished ceiling material can be attached.
11. Chair rail (not shown)-Decorative trim applied around the perimeter of a room, such as a formal dining room or kitchen/breakfast nook, at approximately the same height as the back of a chair. It is sometimes used as a cap trim for wainscoting (see wainscoting).
12. Chimney-A masonry enclosure or, in more modern construction, a wood-framed enclosure that surrounds and contains one or more flues and extends above the roofline.
13. Chimney cap-The metal or masonry protective covering at the top of the chimney that seals the chimney shaft from water entry between the chimney enclosure and the flue tiles.
22. Downspout-A pipe, usually of metal or vinyl, that is connected to the gutters and is used to carry the roof-water runoff down and away from the house.
23. Downspout gooseneck-A segmented section of downspout that is bent at a radius to allow the downspout to be attached to the house and to follow the bends and curves of the eaves and ground.
24. Downspout shoe-The bottom downspout gooseneck that directs the water from the downspout to the extension or splash block at the grade.
25. Downspout strap-A strap used to secure the downspout to the side of the house.
26. Drain tile-A tube or cylinder that is normally installed around the exterior perimeter of the foundation footings and that collects and directs groundwater away from the foundation of the house. The tile can be individual sections of clay or asphalt tubing or, in more recent construction, a perforated-plastic drain tile approximately 4 inches in diameter. The drain tile leads either toward a sump or to an exterior discharge away from the house.
27. Entrance canopy-A small overhanging roof that shelters the front entrance.
28. Entrance stoop-An elevated platform, constructed of wood framing or masonry, at the front entry that allows visitors to stand out of the elements. The platform should be wide enough to allow someone to stand on it while opening an outward-swinging door, such as a storm door, even if one is not present.
29. Exterior siding-The decorative exterior finish on a house. Its primary function is to protect the shell of the house from the elements. The choice of siding materials varies widely to include wood, brick, metal, vinyl, concrete, stucco, and a variety of manufactured compositions such as compressed wood, compressed cellulose (paper), fiber-reinforced cement, and synthetic stucco.
30. Fascia-The visible flat front board that caps the rafter tail ends and encloses the overhang under the eave that runs along the roof edge. The gutter is usually attached at this location.
31. Fascia/rake board-The visible flat front board that caps the rafter tail ends and encloses the overhang under the eave that runs along the roof edge and at the edge of the roofing at the gables. The gutter is usually attached to this board at the eaves.
35. Fireplace cleanout door-The access door to the ash pit beneath the fireplace. On a fireplace located inside the house, the cleanout door is usually located in the lowest accessible level of the house, such as the basement or crawl space. On a fireplace located at the outside of the house, the cleanout door will be located at the exterior of the chimney. Not all fireplaces are equipped with a cleanout door.
36. Fireplace hearth-The inner or outer floor of a fireplace, usually made of brick, tile, or stone.
37. Flashing (not shown)-The building component used to connect and cover portions of a deck, roof, or siding material to another surface such as a wall, a chimney, a vent pipe, or anywhere that runoff is heavy or where two dissimilar materials meet. The flashing is mainly intended to prevent water entry and is usually made of rubber, tar, asphalt, or various metals.
38. Floor joists-The main subfloor framing members that support the floor span. Joists are usually made of engineered wood I-beams or 2×8 or larger lumber.
39. Foundation footing-The base on which the foundation wall rests. The footing is wider than the foundation wall to spread out the load it is bearing and to help prevent settling.
40. Foundation wall-The concrete block, concrete slab, or other nonwood material that extends below or partly below grade and provides support for exterior walls and other structural parts of the building.
41. Framing studs-A 2×4 or 2×6 vertical framing member used to construct walls and partitions, usually spaced 12 to 24 inches apart.
42. Gable framing-The vertical and horizontal framing members that make up and support the end of a building as distinguished from the front or rear side. A gable is the triangular end of an exterior wall above the eaves.
43. Garage door-The door for vehicle passage into the garage area. Typical garage doors consist of multiple jointed panels of wood, metal, or fiberglass.
44. Girder-A large beam supporting floor joists at the same level as the sills; a larger or principal beam used to support concentrated loads at isolated points along its length.
45. Gravel fill-A bed of coarse rock fragments or pebbles laid atop the existing soil before pouring the concrete slab. The gravel serves a dual purpose: breaking surface tension on the concrete slab and providing a layer that interrupts capillary action of subsurface moisture from reaching the concrete slab. Typically, polyethylene sheeting will be installed between the gravel fill and the concrete slab for further moisture proofing.
46. Gutter-A channel used for carrying water runoff, usually located at the eaves of a house and connected to a downspout. The primary purpose of the gutters and downspouts is to carry roof-water runoff as far away from the house as possible.
47. Insulation-A manufactured or natural material that resists heat flow and is installed in a house's shell to keep the heat in the house in the winter and the coolness in the house in the summer. The most common form of insulation is fiberglass, whether in batts or blown-in material, along with cellulose, rigid foam boards, sprayed-in foam, and rock wool.
48. Jack/king stud-The framing stud, sometimes called the trimmer, that supports the header above a window, door, or other opening within a bearing wall. Depending on the size of the opening, there may be several jack studs on either side of the opening.
49. Mantel-The ornamental or decorative facing around a fireplace, including a shelf that is attached to the breast or backing wall above the fireplace.
50. Moisture/vapor barrier-A nonporous material, such as plastic or polyethylene sheeting, that is used to retard the movement of water vapor into walls and attics and prevent condensation in them. A vapor barrier is also installed in crawl space areas to prevent moisture vapor from entering up through the ground.
51. Newel post-The post at the top and bottom of the handrails, and anywhere along the stair run that creates a directional change in the handrails. The newel post is securely anchored into the underlying floor framing or the stair stringer to provide stability to the handrails.
52. Reinforcing lath-A strip of wood or metal attached to studs and used as a foundation for plastering, slating, or tiling. Lath has been replaced by gypsum board in most modern construction.
53. Ridge board/beam-The board placed on edge at the top-most point of the roof framing, into which the upper ends of the rafters are joined or attached.
54. Roofing-The finished surface at the top of the house that must be able to withstand the effects of the elements (i.e., wind, rain, snow, hail, etc.).
A wide variety of roofing materials are available, including asphalt shingles, wood shakes, metal roofing, ceramic and concrete tiles, and slate, with asphalt shingles making up the bulk of the material used.
55. Roof rafters-Inclined structural framing members that support the roof, running from the exterior wall to the ridge beam. Rafters directly support the roof sheathing and create the angle or slope of the roof.
56. Roof sheathing-The material used to cover the outside surface of the roof framing to provide lateral and rack support to the roof, as well as to provide a nailing surface for the roofing material. This material most commonly consists of plywood, OSB, or horizontally laid wood boards.
57. Sidewalk-A walkway that provides a direct, all-weather approach to an entry. The sidewalk can be constructed of poured concrete, laid stone, concrete pavers, or gravel contained between borders or curbs.
58. Sill plate-The horizontal wood member that is anchored to the foundation masonry to provide a nailing surface for floors or walls built above.
59. Silt fabric-A porous fabric that acts as a barrier between the backfilled soil (see backfill) and the gravel surrounding the drain tile. This barrier prevents soil particles from blocking the movement of groundwater to the drain tile.
60. Soffit/lookout block-Rake cross-bracing between the fly rafters and end gable rafters to which the soffit is nailed.
Stair rail-
65. Subfloor-Boards or plywood, installed over joists, on which the finish floor rests.
66. Support post-A vertical framing member usually designed to carry or support a beam or girder. In newer construction, a metal lally (pronounced "lolly") column is commonly used, as well as 4×4- or 6×6-inch wood posts.
67. Tar-Otherwise known as asphalt, tar is a very thick, dark brown/black substance that is used as a sealant or waterproofing agent. It is usually produced naturally by the breakdown of animal and vegetable matter that has been buried and compressed deep underground. Tar is also manufactured-a hydrocarbon by-product or residue left over after the distillation of petroleum. It is commonly used as a sealant or patch for roof penetrations, such as plumbing vents and chimney flashing. Tar is also used as a sealer on concrete and masonry foundation walls before they have been backfilled.
68. Termite shield-A metal flashing installed below the sill plate that acts as a deterrent to keep termites from reaching the sill plate.
69. Top plate-The topmost horizontal framing members of a framed wall. Most construction practices require the top plate to be doubled in thickness.
70. Wainscoting-The wooden paneling of the lower part of an interior wall up to approximately waist height, or between 36 and 48 inches from the floor.
71. Wall insulation-A manufactured or natural material that resists heat flow and is installed in a house's shell to keep the heat in the house in the winter and the coolness in the house in the summer. Fiberglass batts are the most common form of wall insulation.
72. Wall sheathing-The material used to cover the outside surface of the wall framing that provides lateral and shear support to the wall, as well as a nailing surface for the exterior siding.
Key to Figure 6.2
4. Chimney flashing-Sheet metal flashing that provides a tight joint between chimney and roof.
5. Firebrick-An ordinary brick cannot withstand the heat of direct fire, so special firebrick is used to line the fireplace. In newer construction, fireplaces are constructed with prefabricated metal inserts.
6. Ash dump-A trap door to let the ashes drop to a pit below, from which they may be easily removed.
7. Cleanout door-The door to the ash pit or the bottom of a chimney through which the chimney can be cleaned.
8. Chimney breast-The inside face or front of a fireplace chimney.
9. Hearth-The floor of a fireplace that extends into the room for safety purposes.
# Roof
10. Ridge-The top intersection of two opposite adjoining roof surfaces.
11. Ridge board-The board that follows along under the ridge.
12. Roof rafters-The structural members that support the roof.
13. Collar beam-Not a beam at all; this tie keeps the roof from spreading and connects similar rafters on opposite sides of the roof.
14. Roof insulation-An insulating material (usually rock wool or fiberglass) in blanket form placed between the roof rafters to keep a house warm in the winter and cool in the summer.
15. Roof sheathing-The boards that provide the base for the finished roof. In newer construction, roof sheathing is composed of sheets of plywood or oriented strand board (OSB).
16. Roofing-The wood, asphalt, or asbestos shingles-or tile, slate, or metal-that form the outer protection against the weather.
17. Cornice-A decorative element made of molded members, usually placed at or near the top of an exterior or interior wall.
18. Gutter-The trough that gathers rainwater from a roof.
19. Downspout-The pipe that leads the water down from the gutter.
20. Storm sewer tile-The underground pipe that receives the water from the downspouts and carries it to the sewer. In newer construction, plastic-type materials have replaced tile.
21. Gable-The triangular end of a building with a sloping roof.
22. Barge board-The fascia or board at the gable just under the edge of the roof.
23. Louvers-A series of slanted slots arranged to keep out rain, yet allow ventilation.
# Walls and Floors
24. Corner post-The vertical member at the corner of the frame, made up to receive inner and outer covering materials.
25. Studs-The vertical wood members of the house, usually 2×4s at minimum and spaced every 16 inches.
26. Sill-The board that is laid first on the foundation, and on which the frame rests.
27. Plate-The board laid across the top ends of the studs to hold them even and tight.
28. Corner bracing-Diagonal strips to keep the frame square and plumb.
29. Sheathing-The first layer of outer wall covering nailed to the studs.
30. Joist-The structural members or beams that hold up the floor or ceiling, usually 2×10s or 2×12s spaced 16 inches apart.
31. Bridging-Cross-bridging or solid members at the middle or third points of joist spans that brace one joist to the next and prevent them from twisting.
32. Subflooring-Typically plywood or particle wood that is laid over the joists.
33. Flooring paper-A felt paper laid on the rough floor to stop air infiltration and, to some extent, noise.
34. Finish flooring-Hardwood of tongue-and-groove strips, carpet, or vinyl products (tile, linoleum).
35. Building paper or sheathing-Paper or plasticized material placed outside the sheathing, not as a vapor barrier, but to prevent water and air from leaking in. Building paper is also used as a tarred felt under shingles or siding to keep out moisture or wind.
36. Beveled siding-Sometimes called clapboards, with a thick butt and a thin upper edge lapped to shed water. In newer construction, vinyl, aluminum, or fiber cement siding and stucco are more prevalent.
37. Wall insulation-A blanket of wool or reflective foil placed inside the walls.
38. Metal lath-A mesh made from sheet metal onto which plaster or other composite surfacing materials can be applied. In newer construction, 4×8-foot sheets of gypsum board (sheetrock) have replaced lath.
62. Balusters-Vertical rods or spindles supporting a rail.
# Foundation
The word "foundation" is used to mean
• construction below grade, such as footings, cellar, or basement;
• the composition of the earth on which the building rests; and
• special construction, such as pilings and piers used to support the building.
The foundation bed may be composed of solid rock, sand, gravel, or unconsolidated sand or clay. Rock, sand, or gravel are the most reliable foundation materials. Figure 6.3 shows the three most common foundations for homes. Unconsolidated sand and clay, though found in many sections of the country, are not as desirable for foundations because they are subject to sliding and settling [1]. Capillary breaks have been identified as a key way of reducing moisture incursion in new construction [3].
The footing distributes the weight of the building over a sufficient area of ground to ensure that the foundation walls will stand properly. Footings are usually concrete; however, in the past, wood and stone have been used. Some older houses were constructed without footings. Although it is usually difficult to determine the condition of a footing without excavating the foundation, a footing in a state of disrepair, or the lack of a footing, will usually be indicated either by large cracks or by settlement in the foundation walls. Foundation wall cracks are usually diagonal, starting from the top, the bottom, or the end of the wall (Figure 6.4); this type of crack is called a "Z" crack. Cracks that do not extend to at least one edge of the wall may not be caused by foundation problems. Such wall cracks may be due to other structural problems and should also be reported.
The foundation walls support the weight of the structure and transfer this weight to the footings. The foundation walls may be made of stone, brick, concrete, or concrete blocks. The exterior should be moisture proofed with either a membrane of waterproof material or a coating of portland cement mortar. The membrane may consist of plastic sheeting or a sandwich of standard roofing felt joined and covered with tar or asphalt. The purpose of waterproofing the foundation and walls is to prevent water from penetrating the wall material and leaving the basement or cellar walls damp. Holes in the foundation walls are common in many old houses. These holes may be caused by missing bricks or blocks. Holes and cracks in a foundation wall are undesirable because they make a convenient entry for rats and other rodents and also indicate the possibility of further structural deterioration.
Basement problems are a major complaint of homeowners [4-9]. Concrete is naturally porous (12%-18% air). When it cures, surplus water creates a network of interconnected capillaries. These pores let in liquid water, water vapor, and radon gas. Like a sponge, concrete draws water from several feet away. As concrete ages, the pores get bigger as a result of freezing, thawing, and erosion. Concrete paints, waterproofing sealers, and cement coatings are a temporary fix: they crack or peel and cannot stop gases such as water vapor and radon. Damp basement air spreads mold and radon through the house. Efflorescence (white powder stains) and musty odors are telltale signs of moisture problems.
Basement remodeling traps invisible water vapor, causing mold and mildew. Most basements start leaking within 10 to 15 years. The basement walls and floors should be sealed and preserved before they deteriorate. The basement floor should be concrete placed on at least 6 inches of gravel. The gravel distributes groundwater movement under the concrete floor, reducing the possibility of the water penetrating the floor. A waterproof membrane, such as plastic sheeting, should be laid before the concrete is placed for additional protection against flooding and the infiltration of radon and other gases. The basement floor should be gradually, but uniformly, sloped from all directions toward a drain or a series of drains. These drains permit the basement or cellar to drain if it becomes flooded. Water or moisture marks on the floor and walls are signs of ineffective waterproofing or moisture proofing. Cellar doors, hatchways, and basement windows should be weather-tight and rodent-proof. A hatchway can be inspected by standing at the lower portion with the doors closed; if daylight can be seen, the door needs to be sealed or repaired.
# Vapor Barriers
# Crawl Space Vapor Barriers
Throughout the United States, even in desert areas, there is moisture in the ground from absorbed groundwater. Even in an apparently dry crawl space, a large amount of water may be entering; the moisture is drying out as fast as it is entering, which causes high moisture levels in the crawl space and elsewhere in the house. A solid vapor barrier is recommended in all crawl spaces and should be required if moisture problems exist [10]. This vapor barrier, if properly installed, also reduces the infiltration of radon gas. Of course, if the moisture is coming from above ground, a vapor barrier will collect and hold the moisture. Therefore, any source of moisture must be found and eliminated. The source may be as obvious as sweating pipes, or may be more difficult to spot, such as condensation on surfaces. The solution can be as simple as applying insulation to exposed sections of the piping or complex enough to require power exhaust fans and the addition of insulation and vapor barriers. The more common causes of moisture problems in a new home are moisture trapped within the structure during construction and a continuing source of excess moisture from the basement, crawl space, or slab. To resolve this potential problem, 6-mil plastic sheets should be laid as vapor barriers over the entire crawl space floor. The sheets should overlap each other by at least 6 inches and should be taped in place. The plastic should extend up the perimeter walls by about 6 inches and should be attached to the interior walls of the crawl space with mastic or batten strips. All of the perimeter walls should be insulated, and insulation should be between the joists at the top of the walls. Vents, which may need to be opened in the late spring and closed in the fall, should not be blocked. If not properly managed, moisture originating in the crawl space can cause problems with wood flooring and create many biologic threats to health and property. A properly placed vapor barrier can prevent or reduce problem moisture from entering the home.
# Vapor Barriers for Concrete Slab Homes
Strip flooring and related products should be protected from moisture migrating through the slab. Proper on-grade or above-grade construction requires that a vapor barrier be placed beneath the slab. Moisture tests should be done to determine the suitability of the slab before installing wood products. A vapor barrier equivalent to 4- or 6-mil polyethylene should be installed on top of the slab to further protect the wood products and the residents of the home.
# Wall and Ceiling Vapor Barriers
Wall and ceiling vapor barriers should go on the heated side of the insulation and are necessary in cold climates. Water vapor flows from areas of high pressure (indoors in winter) through the wall to an area of low pressure (outdoors in winter). People and their pets produce amazing quantities of water vapor by breathing, and additional moisture in considerable quantities is created in the home by everyday activities such as washing clothes, cooking, and personal hygiene. The purpose of the vapor barrier is to prevent this moisture from entering the wall, freezing, and then draining and causing damage. In addition, wet insulation has very little insulating value. Insulation with the vapor barrier misplaced will allow the vapor to condense in the insulation and then freeze. In cold climates, this ice can actually build up all winter and run out on the floor in the spring. Such moisture buildup blisters paint, rots sheathing, and destroys the insulating value of insulation.
# House Framing
Many types of house-framing systems are found in various sections of the country; however, most framing systems include the elements described in this section.
# Foundation Sills
The purpose of the sill is to provide support or a bearing surface for the outside walls of the building. The sill is the first part of the frame to be placed and rests directly on the foundation wall. It is often bolted to the foundation wall by sill anchors. In many homes, metal straps cemented into the foundation wall are bent around and secured to the sill. It is good practice to protect the sill against termites by extending the foundation wall to at least 18 inches above the ground and using a noncorroding metal shield continuously around the outside top of the foundation wall.
# Flooring Systems
The flooring system is composed of a combination of girders, joists, subflooring, and finished flooring that may be made up of concrete, steel, or wood. Joists are laid perpendicular to the girders, at about 16 inches on center, and the subflooring is attached to them. If the subfloor is wood, it may be nailed, glued, or screwed at either right angles or diagonally to the joists. Many homes are built with wood I-joists or trusses rather than solid wood joists. In certain framing systems, a girder supports the joists and is usually a larger section than the joists it supports. Girders are found in framing systems where there are no interior bearing walls or where the span between bearing walls is too great for the joists. The most common application of a girder is to support the first floor. Often a board known as a ledger is applied to the side of a wood girder or beam to form a ledge for the joists to rest upon. The girder, in turn, is supported by wood posts or steel "lally columns" that extend from the cellar or basement floor to the girder.
# Studs
For years, wall studs were composed of wood and were 2×4 inches; but, with the demand for greater energy efficiency in homes, that standard no longer holds true. Frame studs up to 6 inches wide are used to increase the area available for placing insulation material. The increased size of the studs allows for wider spacing between them.
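On-center stud layout is simple modular arithmetic. The following is a minimal sketch of that arithmetic, assuming a plain straight wall with no door or window openings; the function name and the end-stud convention shown are illustrative:

```python
def stud_positions(wall_length_in: float, spacing_in: float = 16.0) -> list[float]:
    """Centerline positions (inches) for studs laid out on-center,
    with an extra stud always placed at the far end of the wall."""
    positions = []
    x = 0.0
    while x < wall_length_in:
        positions.append(x)
        x += spacing_in
    positions.append(wall_length_in)  # end stud closes the wall run
    return positions

# A 12-foot (144-inch) wall framed 16 inches on center needs 10 studs:
layout = stud_positions(144)
print(len(layout), layout)  # 10 [0.0, 16.0, 32.0, ..., 144.0]
```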
There are now alternatives to conventional wood studs, specifically insulated concrete forms, structural insulated panels, light-gauge steel, and combined steel and wood [11-13]. The advantages of light-gauge steel include the following:
• it weighs 60% less than equivalent wood units and has greater strength and durability;
• it is impervious to termites and other damage-causing pests;
• it stays true and does not warp;
• it is noncombustible; and
• it is recyclable.
The disadvantages of steel include these:
• steel is an excellent thermal conductor and requires additional external insulation;
• as a new product, it is unfamiliar to craftsmen, engineers, and code officials; and
• different construction tools are required.
The combined steel and wood framing system includes light-gauge steel studs with 6-inch wooden stud pieces attached to the top and bottom to allow easy attachment to traditional wood frame materials.
There are two types of walls or partitions: bearing and nonbearing. A bearing wall is constructed at right angles to support the joists. A nonbearing wall, or partition, acts as a screen or enclosure; hence, the headers in it are often parallel to the joists of the floor above. In general, studs, like joists, are spaced 16 inches on center. In light construction, such as garages and summer cottages, wider spacing of studs is common. Openings for windows or doors must be framed in studs. This framing consists of horizontal members (headers) and vertical members (trimmers or jack studs). Because the vertical spaces between studs can act as flues to transmit flames in the event of a fire, fire stops are important in preventing or retarding fire from spreading through a building by way of air passages in walls, floors, and partitions. Fire stops are wood obstructions placed between studs or floor joists to block fire from spreading in these natural flue spaces.
# Interior Walls
Many types of materials are used for covering interior walls and ceilings, but the principal type is drywall. The generic term "drywall" is typically used when talking about gypsum board; it is also called wallboard or referred to by the brand name Sheetrock. Gypsum board is a sheet material composed of a gypsum filler faced with paper. In drywall construction, gypsum boards are fastened to the studs either vertically or horizontally and then painted. The edges along the length of the sheet are slightly recessed to receive joint cement and tape. Drywall finish, composed of gypsum board, is a material that requires little, if any, water for application. Other drywall finishes include plywood, fiberboard, or wood in various sizes and forms.
Plaster was once quite popular for interior walls. Plaster is a mixture (usually of lime, sand, and water) applied in two or three coats to lath to form a hard wall surface. A plaster finish requires a base on which plaster can be spread. Wood lath at one time was the plaster base most commonly used, but today gypsum-board lath is more popular. Gypsum lath may be perforated to improve the bond and thus lengthen the time the plaster can remain intact when exposed to fire. Building codes in some cities require that gypsum lath be perforated. Expanded-metal lath also may be used as a plaster base. Expanded-metal lath consists of sheet metal slit and expanded to form openings to hold the plaster. Plaster is applied over the base to a minimum thickness of ½ inch.
Because wood-framing members may dry after the house is completed, some shrinkage can be expected, which, in turn, may cause plaster cracks to develop around openings and in corners. Strips of lath embedded in the plaster at these locations help prevent cracks. Bathrooms have unique moisture exposure problems, and cement board approved by local code should be used around bath and shower enclosures.
# Stairways
The purpose of stairway dimension standards is to ensure adequate headroom and uniformity in riser and tread size. Interior stairways (Figure 6.5) should be no less than 44 inches wide. The width of a stairway may be reduced to 36 inches when permitted by local or state code in one- and two-family dwellings. Stairs with closed risers should have maximum risers of 8¼ inches and minimum treads of 9 inches plus 1-inch nosing. Basement stairs are often constructed with open risers. These stairs should have maximum risers of 8¼ inches and minimum treads of 9 inches plus 1-inch nosing. The headroom in all parts of the stair enclosure should be no less than 80 inches. Dimensions of exterior stairways should be the same as those of interior stairways, except that the headroom requirement does not apply. Staircases should have handrails that are between 1¼ and 2⅝ inches wide, particularly if the staircases have more than four steps. Handrails should be shaped so they can be readily grasped for safety and placed so they are easily accessible. Handrails should be 4⅛ inches from the wall and 34 to 38 inches above the leading edge of the stairway treads. Handrails should not end abruptly or have projections that can snag clothing.
# Windows
The six general classifications of windows (Figure 6.6) are as follows [1]:
• Double-hung sash windows that move up or down, balanced by weights hung on chains, ropes, or springs on each side;
• Casement sash windows that are hinged at the side and can be hung so they will swing out or in;
• Awning windows that usually have two or more glass panes that are hinged at the top and swing out horizontally;
• Sliding windows that usually have two or more glass panes that slide past one another on a horizontal track;
• Fixed windows that are generally for increased light entry and decorative effect; and
• Skylight windows for increased room illumination and decoration that can be built to open.
The principal parts of a window, shown in three-dimensional view in Figure 6.7 and in face-on and side view in Figure 6.8, are the following:
Drip cap-A separate piece of wood projecting over the top of the window; a component of the window casing. The drip cap protects against moisture.
Window trough-The cut or groove in which the sash of the window slides or rests.
Window sill-The shelf on the bottom edge of a window, either a projecting part of the window frame or the bottom of the wall recess that the window fits into. The sill contains the trough and protects against moisture.
Recent technological advancements-new materials, coatings, design, and construction features-make it possible to choose windows that balance winter heating and summer cooling needs without sacrificing versatility or style. To ensure that the windows, doors, or skylights selected are appropriate for the region in which they are to be installed, Energy Star certification labels include a climate region map. Some window glass is made of tempered glass to resist breakage.
Some windows are made of laminated glass, which resists breakage but, if broken, produces glass shards too small to cause injury [14]. The glazing, or glass, can be a solid glass sheet (single glazed) or have two layers of glass (double glazed) separated by a spacer. Air trapped between the glass layers provides some insulation value. Triple-glazed windows have three pieces of glass, or two layers of glass with a low-emissivity film suspended between them. Triple-glazed windows have advantages where extremes in weather and temperature are the norm. They also can reduce sound transmission to a greater degree than can single- or double-glazed windows.
# Doors
There are many styles of doors for both exterior and interior use. Exterior doors must, in addition to offering privacy, protect the interior of the structure from the elements. Various parts of a door are the same as the corresponding parts of a window. A door's function is best determined by the material from which it is made, how it looks, and how it operates. When doors are used for security, they are typically made from heavy materials and have durable, effective locks and hinges. A door that lets in light or allows people to look out onto the yard, such as a sliding glass door or a french door, will have multiple panes (also called lights) or be made almost completely of glass.
Houses have many exterior and interior door options. Exterior doors are typically far sturdier than interior doors and need to be weather-tight and ensure security for the home. Exterior doors are also more decorative than most interior doors and may cost a considerable amount. Typical exterior doors include front entry doors, back doors, french doors, dutch doors, sliding glass doors, patio doors, and garage doors. Most doors are made of wood or materials made to look like wood. Fiberglass composite and steel doors often have polymer or vinyl coatings embossed with wood grain; some even have cellulose-based coatings that can be stained like wood doors. Wood doors are made from every kind of wood imaginable, hardwoods being the most durable and elegant. Wood doors insulate better than glass; composite and steel doors provide even more insulation and durability, as well as better security, than wood does. The variety of exterior door systems has increased significantly over the past 5 to 10 years. Many combine several different materials to make a realistic, if not actual wood, door that provides both beauty and enhanced security.
# Exterior House Doors
Exterior door frames are ordinarily of softwood plank, with the side rabbeted to receive the door in the same way as casement windows. At the foot is a sill, made of hardwood or another material, such as aluminum, to withstand the wear of traffic and sloped down and out to shed water. Doors often come equipped with door sweeps to conserve energy. The four primary categories of modern exterior doors are steel, fiberglass, composites, and wood.
Steel-The most common exterior door sold today is steel. Humidity will not cause a steel door to warp or twist. Steel doors often have synthetic wood-grain embossed finishes that accept stain. Just about every steel exterior door is filled with some type of foam. This foam allows the doors to achieve R-values almost five times that of an ordinary wood door. Metal is often used as a veneer frame. In general, the horizontal members are called rails and the vertical members are called stiles. Every door has a top and bottom rail, and some may have intermediate rails. There are always at least two stiles, one on each side of the door.
Fiberglass-The second most frequently selected exterior door is fiberglass. Fiberglass doors are similar to steel doors, but tend to be much more resistant to denting. (Steel doors can be dented quite easily.) Fiberglass doors also are stainable and have rich, realistic wood graining. Fiberglass doors are insulated with foam and have high R-values.
Composite materials-The third most common exterior door is made of composite materials. These doors often are of two materials blended together. Their composite fiber-reinforced core can be twice as strong as wood. This composite core will not rot, warp, or twist when subjected to high levels of humidity.
Wood-The last major category of doors is wood. Solid wood doors range from inexpensive to true works of art. Their downside is that they can warp and bow if not sealed properly against humidity and will then fit poorly in their frames. Other types of wooden doors are described below.
• Batten doors are often found on older homes. They are made of boards nailed together in various ways. The simplest is two layers nailed to each other at right angles, usually with each layer at 45° to the vertical. Another type of batten door consists of vertical boards nailed at right angles to several (two to four) cross strips called ledgers, with diagonal bracing members nailed between the ledgers. If vertical members corresponding to ledgers are added at the sides, the verticals are called frames. Batten doors are often found in cellars and other places where appearance is not a factor and economy is desired.
• Solid flush doors are perfectly flat, usually on both sides, although occasionally they are made flush on one side and paneled on the other. Flush doors sometimes are solid planking, but they are commonly veneered and possess a core of small pieces of white pine or other wood. These pieces are glued together with staggered end joints. Along the sides, top, and bottom are glued ¾-inch edge strips of the same wood, used to create a smooth surface that can be cut or planed. The front and back faces are then covered with a ⅛-inch to ¼-inch layer of veneer. Solid flush doors may be used on both the interior and exterior.
• Hollow-core doors, like solid flush doors, are perfectly flat; but, unlike solid doors, the core consists mainly of a grid of crossed wooden slats or some other type of grid construction. Faces are three-ply plywood instead of one or two plies of veneer, and the surface veneer may be any species of wood, usually hardwood. The edges of the core are solid wood and are made wide enough at the appropriate places to accommodate locks and butts. Doors of this kind are considerably lighter than solid flush doors. Hollow-core doors are usually used as interior doors.
Many doors are paneled, with most panels consisting of solid wood or plywood, either raised or flat, although exterior doors frequently have one or more panels of glass. One or more panels may be used, and some doors have as many as nine panels. Paneled doors may be used on both the interior and exterior. The frame of a doorway is the portion to which the door is hinged. It consists of two side jambs and a head jamb, with an integral or attached stop against which the door closes.
# Garage Doors
Garage doors open in almost any configuration needed for the design of the home. Installing most garage doors is complex and dangerous enough that only a building professional should attempt it. Garage doors often include very strong springs that can come loose and severely injure the unsuspecting installer. Garage door springs are under extreme tension because of the heavy loads they must lift, which makes them dangerous to adjust. A garage door may suffer from any of several problems. The most common problem is that the door becomes difficult to lift and lower. This may be something that can be resolved with simple adjustments, or it may be more serious. If the door is connected to an electric opener, the opener mechanism can be disconnected from the door by pulling the release cord or lever. If the door then works manually, the problem is with the electric opener. A door that seems unusually hard to lift may have a problem with spring tension. Wood doors should be properly painted or stained both outside and inside. If only the outside of a garage door is finished, the door may warp and moisture may cause the paint to peel.
Rules issued by the Consumer Product Safety Commission on December 3, 1992, specify entrapment protection requirements for garage doors [15]. The rules require that residential garage door openers contain one of the following:
• An external entrapment protection device, such as an electric eye that sees an object obstructing the door without having actual contact with the object. A door-edge sensor is a similar device that acts much like the door-edge sensors on elevator doors.
• A constant contact control button, which is a wall-mounted button requiring a person to hold in the control button continuously for the door to close completely. If the button is released before the door closes, the door reverses and opens to the highest position.
• A sticker on all newly manufactured garage door openers warning consumers of the potential entrapment hazard. The sticker is to be placed near the wall-mounted control button.
# Roof Framing
# Rafters
One of a series of structural roof members spanning from an exterior wall to a ridge board or beam. Rafters serve the same purpose for the roof as joists do for floors, that is, providing support for sheathing and roofing material. They are typically placed on 16-inch centers.
# Collar Beam
Collar beams are ties between rafters on opposite sides of the roof. If the attic is to be used for rooms, the collar beam may double as the ceiling joist.
# Purlin
A purlin is the horizontal member that forms the support for the rafters at the intersection of the two slopes of a gambrel roof.
# Ridge Board
A ridge board is a horizontal member that forms a lateral tie to make rafters secure.
# Hip
A hip is like a ridge, except that it slopes. It is the intersection of two adjacent, rather than two opposite, roof planes.
# Roof Sheathing
The manner in which roof sheathing is applied depends upon the type of roofing material. Roof boards may vary from tongue-and-groove lumber to plywood panels.
# Dormer
The term "dormer window" is applied to all windows in the roof of a building, whatever their size or shape.
# Roofs
# Asphalt Shingle
The principal damage to asphalt shingle roofs is caused by strong winds on shingles nailed close to the ridge line of the roof. Usually the shingles affected by winds are those in the four or five courses nearest the ridge and in the area extending about 5 feet down from the edge or rake of the roof.
# EPDM
Ethylene propylene diene monomer (EPDM) is a single-ply roofing system. EPDM allows extreme structural movement without splitting or cracking and retains its pliability in a wide range of temperatures.
# Asphalt Built-up Roofs
Asphalt roofs may be unsurfaced (a coating of bitumen being exposed directly to the weather) or surfaced (with slag or gravel embedded in the bituminous coating). Using surfacing material is desirable as a protection against wind damage and the elements. This type of roof should have enough pitch to drain water readily.
# Coal Tar Pitch Built-up Roofs
This type of roof must be surfaced with slag or gravel. A coal tar pitch built-up roof should always be used on a deck pitched less than ½ inch per foot, that is, where water may collect and stand (a simple pitch calculation is sketched at the end of this roofing discussion). This type of roof should be inspected on completion, 6 months later, and then at least once a year, preferably in the fall. When the top coating of bitumen shows damage or has become badly weathered, it should be renewed.
# Slate Roofs
The most common problem with slate roofs is the replacement of broken slates. Otherwise, slate roofs normally render long service with little or no repair.
# Tile Roofs
Replacement of broken shingle tiles is the main maintenance problem with tile roofs. Tile is one of the most expensive roofing materials, but it requires very little maintenance and gives long service.
# Copper Roofs
Usually made of 16-ounce copper sheeting and applied to permanent structures, copper roofs require practically no maintenance or repair when properly installed. Proper installation allows for expansion and contraction with changes in temperature.
# Galvanized Iron Roofs
The principal maintenance for galvanized iron roofs involves removing rust and keeping the roof well painted. Leaks can be corrected by renailing, caulking, or replacing all or part of the sheet or sheets in disrepair.
# Wood Shingle Roofs
The most important factors for wood shingle roofs are their high pitch and exposure, the character of the wood, the kind of nails used, and the preservative treatment given the shingles. At one time these roofs were treated with creosote and coal tar preservatives. Because they are made from a flammable material, insurance companies frequently charge higher rates for wood shingle roofs.
# Roof Flashing
Valleys in roofs (such as gambrel roofs, which have two pitches designed to provide more space on upper floors and are steeper on their lower slope and flatter toward the ridge) that are formed by the junction of two downward slopes may be open or closed. In a closed valley, the slates, tiles, or shingles of one side meet those of the other, and the flashing below them may be comparatively narrow. In an open valley, the flashing, which may be made of zinc, copper, or aluminum, is laid in a continuous strip, extending 12 to 18 inches on each side of the valley, while the tiles or slates do not come within 4 to 6 inches of it. The ridges built up on a sloping roof where it runs down against a vertical projection, like a chimney or a skylight, should be weatherproofed with flashing. Failure of roof flashing is usually due to exposed nails that have come loose. The loose nails allow the flashing to lift, resulting in leakage. Flashings made of lead or coated with lead should not be used. The use of a thin, self-sticking rubber ice and water shield under flashings and on the edge of roofs is now common practice. The shield helps reduce leakage and ice backup in cold climates, preventing serious damage to this part of the home.
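The ½-inch-per-foot figure above is simply a ratio of rise to horizontal run. A minimal sketch of that arithmetic, assuming the pitch is measured as inches of rise per foot of run; the guidance strings merely echo the text above and the names are illustrative:

```python
def pitch_in_per_ft(rise_in: float, run_ft: float) -> float:
    """Roof pitch expressed as inches of rise per foot of horizontal run."""
    return rise_in / run_ft

def built_up_roof_choice(rise_in: float, run_ft: float) -> str:
    # Per the guidance above: coal tar pitch built-up roofing is for decks
    # pitched less than 1/2 inch per foot, where water may collect and stand.
    if pitch_in_per_ft(rise_in, run_ft) < 0.5:
        return "near-flat deck: use a coal tar pitch built-up roof"
    return "deck sheds water readily: asphalt built-up roofing is an option"

# A deck rising 4 inches over a 12-foot run is pitched 0.33 in/ft:
print(built_up_roof_choice(rise_in=4, run_ft=12))  # coal tar pitch indicated
```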
# Gutters and Leaders
Gutters and leaders should be of noncombustible materials and should not be made of lead, lead-coated copper, or any other formulation containing lead. They should be securely fastened to the structure and spill into a storm sewer, not a sanitary sewer, if the neighborhood has one. When there is no storm sewer, a concrete or stone block placed on the ground beneath the leader prevents water from eroding the lawn. This stone block is called a splash block. Gutters should be checked every spring and fall and cleaned when necessary. Gutters must be placed or installed to ensure that water drains away from the foundation of the house. Soil around the home should be graded in a manner that also drains water away from the foundation of the home.
# Exterior Walls and Trim
Exterior walls are enclosure walls whose purpose is not only to make the building weather-tight, but also to allow the building to dry out. In most one- to three-story buildings they also serve as bearing walls. These walls may be made of many different materials (Figure 6.9). Brick is often used to cover framed exterior walls. In this situation, the brick is only one course thick and is called a brick veneer. It supports nothing but itself and is kept from toppling by ties connected to the frame wall.
In frame construction, the base material of the exterior walls is called sheathing. The sheathing material may be square-edge, shiplap, or tongue-and-groove boards, or plywood or oriented strand board (OSB). Sheathing, in addition to serving as a base for the finished siding material, stiffens the frame to resist sway caused by wind. It is for this reason that sheathing is applied diagonally on frame buildings: its role is to brace the walls effectively to keep them from racking.
Many types of sidings, shingles, and other exterior coverings are applied over the sheathing. Vinyl siding; wood siding; brick, cedar, and other wood shingles or shakes; asphalt; concrete; clapboard; common siding (called bevel siding); composition siding; cement shingles; fiber cement (e.g., Hardiplank); and aluminum siding are commonly used for exterior coverings. In older homes, asbestos-cement siding shingles can still be found as an exterior application or underneath various types of aluminum or vinyl siding. Clapboard and common siding differ only in the length of the pieces. Composition siding is made of felt, grit, and asphalt, often shaped to look like brick. Asbestos-cement shingles, which were used until the early 1970s, are rigid and produce a siding that is fire-resistant, but they are also a health hazard. Cedar wood shingles and aluminum siding are manufactured with a backer board that gives insulation and fire-resistant qualities.
Vinyl siding is manufactured from polyvinyl chloride (PVC), a building material that has replaced metal as the prime material for many industrial, commercial, and consumer products. PVC has many years of performance as a construction material, providing impact resistance, rigidity, and strength. The use of vinyl siding is not without controversy, because PVC is known to cause cancer in humans. Accidental fires in vinyl-sided buildings are more dangerous because vinyl produces toxic vapors when heated.
# Putting It All Together
The next section shows a home being built by Habitat for Humanity. This small, one-family home represents all of the processes that would also be used for a far more expensive and elaborate dwelling.
The homebuilding shown in the following pictures was done by an industrial arts class to educate and train a new generation of construction specialists and homebuilders.
A. The foundation trench for a new home has horizontal metal rods, also called reinforcement rods or rebar, to increase the strength of the concrete. After the concrete hardens, a perforated pipe 4 to 6 inches in diameter is placed beside it to collect water and allow it to drain away from the foundation. This pipe is the footing drain, and the poured concrete beside it is the footer. The footing drain is important in removing water from the base of the home. It also serves the secondary purpose of moisture control in the home and provides a venting route for radon gas. The holes dug near the legs of the workers will be filled with concrete and form the footer that will hold up the porch of the home. To assist in preventing capillary action from wicking water from the foundation to the wooden structure, a polyethylene sheet is placed over the footer before the concrete foundation is poured or a cinderblock foundation is built.
B. The concrete on top of the footer is leveled to establish a surface for the foundation of the home. Once the footer has hardened, the perforated drainage pipe will be laid on the outside of the poured foundation wall. The reinforcing rods were positioned in the trench before the concrete was poured.
C. Concrete will be poured into this form on top of the footer to create the foundation of the home. Again, reinforcing rods are added to ensure that the concrete has both lateral strength and the strength to support the home. Once the concrete has hardened and seasoned, the forms will be removed to reveal the finished poured concrete foundation over the perforated drainage pipe. Not shown is a newer technique of using insulating polystyrene forms and ties in a building foundation.
D. Foundations are not always poured concrete; they are often cinderblock or similar materials that are cemented in place to form the load-bearing wall. The arrow shows the concrete chute delivering concrete into the form. Long poles are pushed into the freshly poured concrete to remove air pockets that would weaken the foundation. Care must be taken to ensure that the forms are appropriately supported before the concrete is poured. Often tar, plastic, or other waterproof materials are placed on the outside of the foundation up to ground level to further divert moisture from the house to the footing drains.
E. Gravel fill is placed outside the finished poured concrete foundation. This ensures that moisture does not stand around the foundation for any length of time. The moisture is routed to the footing drain for fast dispersal.
F. A termite shield is installed on top of the concrete (foundation) wall just below the sill of the home. The sill is typically made of pressure- and insecticide-treated wood to ensure stability and long life. A cinderblock foundation will be used to support the storage shed attached to the house. Note the potential for inadvertently defeating the termite shield if a shield is not also installed on top of the cinderblock foundation.
G. The OSB subfloor, the joists supporting the floor, and the metal bridging used to keep the joists from twisting can be seen from the crawl space under the home. If the material used for the flooring or external sheathing of the home is made of plywood or a composition that is not waterproof, the material must be protected from rain to prevent deterioration and germination of mold spores. Some glues or resins release toxic vapors for years if deterioration is allowed to begin.
Often tar, plastic, or other waterproof materials are placed on the outside of the foundation down to ground level to further divert moisture from the house to the footing drains.

# H.

The flooring material of the first floor of the home is OSB applied to the subfloor with both glue and wood screws. Where possible, the screws should extend into the subfloor and the joist below it to prevent squeaking.

# I.

The interior wall framing is composed of studs traditionally referred to as 2×4s. The horizontal member at the top of the studs is called a girt or a ribbon. In this case the builders have used two 2×4s, placing one on top of the other. Because the outside walls use studs that are 2×6-inch boards, the girts or ribbons on top of these are also double 2×6-inch boards.

# J.

The exterior wall framing is composed of studs that are 2×6-inch boards. The horizontal member extending from one exterior wall to the other is called a girder and is a prime support for the second floor of the home. The larger studs in the exterior wall are used both for greater strength and to provide greater energy efficiency for the home. The lintels above the windows and doors distribute the weight of the second floor and roof across the studs located on each side of the openings in the frame.

# K.

The joists above the first floor are connected to the central girder of the home by steel brackets. These brackets are a far more effective way to hold the joists in place than toenailing them or notching the girder to hold them.

# L.

The subroof or roof sheathing is applied from the bottom up, with temporary traction boards nailed to the subroof to allow safe installation of the material. The subroof is placed on the rafters up to the ridge board of the roof. A waterproof material will be added to the subroof before installing the final roofing material.

# M.

An interior wall is installed to create second-floor rooms. The subroof has been installed. The exterior wood of the home has been covered with plastic sheathing or a housewrap to protect it from moisture.

# N.

Flashing material, such as sheet metal, is installed at critical locations to make sure that water does not enter the home where the joints and angles of the roof meet: where the dormer roof and walls meet the main roof, where windows penetrate the walls, where the vent stack penetrates the roof, where the porch roof meets the front wall, and at skylights and the eaves of the house.

# O.

A safety scaffold is standing at the rear of the home, and the final roofing material has been applied, in addition to the exterior vinyl siding.

# Introduction

Damaging moisture originates not only from outside a home; it is created inside the home as well. Moisture is produced by smoking; breathing; burning candles; washing and drying clothes; and using fireplaces, gas stoves, furnaces, humidifiers, and air conditioning. Leaks from plumbing, unvented bathrooms, dishwashers, sinks, toilets, and garbage disposal units also create moisture problems because they are not always found before water damage or mold growth occurs. Figure 7.1 provides an overview of the sources of moisture and types of air pollutants that can enter a home. Solving moisture problems is often expensive and time-consuming. The first step is to do a moisture inventory and rank the problems. Problems that are easiest and least expensive to resolve should be addressed first.
For example, many basement leaks have been eliminated by making sure sump pumps and downspouts drain away from the house. On the other hand, moisture seeping through basement or foundation walls is often very expensive to repair. Eliminating such moisture is seldom as simple as coating the interior wall; it often requires expert consultation and excavating around the perimeter of the house to install or clean clogged footing drains. Sealing the outside of the basement walls and coating the exterior foundation wall with tar or other waterproofing compounds are often the only solutions.

Moisture condensation occurs in both winter and summer. The following factors increase the probability of condensation:

• Homes in cold climates that are ineffectively insulated and not sealed against air infiltration can develop major moisture problems.
• Cool interior surfaces such as pipes, windows, tile floors, and metal appliances; air conditioner coils with poor outside drainage; masonry or concrete surfaces; toilet tanks; and, in the winter, outside walls and ceilings can result in moisture buildup from condensation. If the temperature of an interior surface is low enough to reach the dew point, moisture in the air will condense on it and enhance the growth of mold (see the sketch after this list).
• Dehumidifiers used in regions where outside humidity levels are normally 80% or higher have a moisture-collecting tank that should be cleaned and disinfected regularly to prevent the growth of mold and bacteria. It is best if dehumidifiers have a drain line continuously discharging directly to the outside or into a properly plumbed trap. The same is true in climates where air conditioning units are used on a full-time or seasonal basis; their cooling pans provide an excellent environment for the growth of allergenic or pathogenic organisms.
• Moisture removed from clothing by clothes dryers will build up in the dryer vent if the vent is clogged by lint or improperly configured. Moisture buildup in this vent can result in mold growth and, if leakage occurs, damage to the structure of the home. The vent over the cooking area of the kitchen also should be checked routinely for moisture or grease buildup.
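The dew-point condition in the list above can be made concrete with a little arithmetic. The sketch below uses the Magnus approximation, which is not part of this manual; treat it as an illustration of why cool surfaces in humid rooms collect condensation, not as an inspection tool.

```python
# A minimal sketch, not from the manual: estimate the dew point with the
# Magnus approximation. Any interior surface at or below this temperature
# (window glass, cold pipes, an outside wall in winter) will collect
# condensation and can support mold growth.
import math

def dew_point_c(air_temp_c: float, relative_humidity_pct: float) -> float:
    """Approximate dew point in degrees Celsius."""
    a, b = 17.62, 243.12  # Magnus coefficients for water vapor over liquid
    gamma = math.log(relative_humidity_pct / 100.0) + a * air_temp_c / (b + air_temp_c)
    return b * gamma / (a - gamma)

# Room air at 21 C (70 F) and 50% relative humidity condenses on any
# surface at or below about 10 C (50 F).
print(round(dew_point_c(21.0, 50.0), 1))
```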
# Roof

The control of moisture in a home is of paramount importance. It is no surprise that moisture control begins with the design and integrity of the roof. Many types of surfacing materials are used for roofs: stone, composition asphalt, plastic, or metal, for example. Some have relatively short lives and some, such as slate and tile, have extraordinarily long lives. As with nearly all construction materials, tradeoffs must be made in terms of cost, thermal efficiency, and longevity. However, all roofs have two things in common: the need to shed moisture and to protect the interior from the environment.

When evaluating the roof of a home, the first thing to observe is the roofline against the sky to see if the roof's ridge board is straight and level. If the roofline is not straight, it could mean that serious deterioration has taken place in the structure of the home as a result of improper construction, weight buildup, a deteriorated or broken ridge beam, or rotting rafters. Whatever the cause, the focus of an inspection must be to locate the extent of the damage.

# Roof Inspection

1. Is the roofline of the house straight?
2. Are there ripples or waves in the roof?
3. What is the condition of the gutters and downspouts?
4. What is the condition of the boards the gutters are attached to?
5. Does the flashing appear to be separated or damaged?
6. Is there any apparent damage in the attic, or can sunlight be seen through the roof?
7. Is there mold or discoloration on the rafters or roof sheathing?
8. Is there evidence of corrosion between the gutter and downspouts and any metal roofing or aluminum siding?
9. Do the downspouts route the water away from the base or foundation of the home?
10. Are the gutters covered or free of leaves? Are they sagging or separating from the fascia?
11. Does the gutter provide a mosquito-breeding area by holding water?

The next area to inspect is around the flashing on the roof. Flashing is used around any structure that penetrates the surface of a roof or where the roofline takes another direction. These areas include chimneys, gas vents, attic vents, dormers, and raised and lowered roof surfaces. One of the best ways to locate a leak around flashing is to go into the attic and look carefully. Leaks often are discovered when it rains; but if it is not raining, the underside of the roof can be examined with the attic lights off for pinpoints of daylight. Roofing material should lie relatively flat and should not wave or ripple. The roof should be checked for missing or damaged shingles, areas where flashing should be installed, elevation changes in roof surfaces, and evidence of decomposing or displaced surfaces around the edge of the roof [1-3].

# Insulation

A house must be able to breathe; air must not be trapped inside, but must be allowed to exit the home with its moisture. Moisture buildup in the home will lead to both mold and bacteria growth. Figure 7.2 shows insulation being blown into an attic to a depth of approximately 12 inches (Figure 7.3). Figure 7.4 shows the area extending from the house under the roof, known as the soffit. The soffit is perforated so that air can flow into the attic and up through the ridge vents to ventilate the attic. If insulation is too thick or installed improperly, it restricts proper air turnover in the attic; the resulting moisture or extreme temperatures can cause mold or bacteria growth, delamination of the plywood and particleboards, and premature aging of the roof's subsurface and shingles. Care also must be taken in cold climates to ensure that the insulation has a vapor barrier and that it is installed face down.

When insulation is placed in the walls of a home, a thin plastic vapor barrier should be placed over the insulation facing the inside of the home. The purpose of this vapor barrier is to keep moisture produced inside the house from compromising the insulation. If the barrier is not installed, warm, moist air will move through the drywall and into the insulated wall cavity. When the air cools, moisture will condense on the fibers of the insulation, making it wet; if it is cellulose insulation, it will absorb and hold the moisture. Wetness reduces the effectiveness of the insulation and provides a favorable environment for the growth of bacteria and mold [4,5].
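Attic ventilation of the soffit-to-ridge type described above is commonly sized with a rule of thumb that does not appear in this manual: about 1 square foot of net free vent area per 300 square feet of attic floor when a ceiling vapor barrier is present, split roughly evenly between soffit and ridge. The sketch below is illustrative only; the applicable building code governs.

```python
# Illustrative only: the common 1:300 attic ventilation rule of thumb
# (an assumption here, not a requirement stated in this manual).

def attic_vent_area_sqft(attic_floor_sqft: float, ratio: float = 300.0):
    """Return (total, soffit, ridge) net free vent area in square feet."""
    total = attic_floor_sqft / ratio
    return total, total / 2.0, total / 2.0

total, soffit, ridge = attic_vent_area_sqft(1200.0)
print(f"{total:.1f} sq ft total: {soffit:.1f} at the soffits, {ridge:.1f} at the ridge")
```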
# Siding

Good siding should be attractive, durable, insect- and vermin-resistant, waterproof, and capable of holding a weather-resistant coating. Fire-resistant siding and roofing are important in many areas where wildfires are common and are required by many local building codes. All exterior surfaces will eventually deteriorate, regardless of manufacturer warranties or claims. Leaks in the home from the outside occur in many predictable locations. The exterior siding or brick should be checked for cracks or gaps in protective surfaces. Where plumbing, air vents, electrical outlets, or communication lines extend through an exterior wall, they should be carefully checked to ensure an airtight seal around those openings. The exterior surface of the home has doors, windows, and other openings; these should be caulked routinely, and the drainage gutters along their tops should be checked to ensure that they drain properly. Exterior surface materials include stucco, vinyl, asbestos shingles, brick, metal (aluminum), fiber cement, exterior plywood, hardwood, painted or coated wood, glass, and tile, some of which are discussed in this chapter [6,7].

# Stucco

Synthetic stucco (exterior insulation and finish system; EIFS) is a multilayered exterior finish that has been used in Europe since shortly after World War II, when contractors found it to be a good repair choice for buildings damaged during the war. North American builders began using EIFS in the 1980s, first in commercial buildings, then as an exterior finish for wood frame houses. EIFS has three layers:

• Inner layer: foam insulation board secured to the exterior wall surface, often with adhesive;
• Middle layer: a polymer and cement base coat applied to the top of the insulation, then reinforced with glass fiber mesh; and
• Exterior layer: a textured finish coat.

EIFS layers bond to form a covering that does not breathe. If moisture seeps in, it can become trapped behind the layers. With no place to go, constant exposure to moisture can lead to rot in wood and other vulnerable materials within the home. Ripples in the stucco could be a sign of a problem. On the surface it may look as if nothing is wrong, but beneath the surface the stucco may have cracked from settling of the house. With a properly installed moisture barrier, no moisture should be able to seep behind the EIFS, including moisture originating inside the home. Drains in the foundation can be designed to enable moisture that does seep in to escape. Other signs of problems are mold or mildew on the interior or exterior of the home, swollen wood around door and window frames, blistered or peeling paint, and cracked EIFS or cracked sealant.

# Vinyl

Standard vinyl siding is made from thin, flexible sheets of plastic about 2 mm thick, precolored and bent into shape during manufacturing. The sheets interlock as they are placed above one another. Because temperature and sunlight cause vinyl to expand and contract, it fits into deep channels at the corners and around windows and doors. The channels are deep enough that as the siding contracts, it remains within the channel. Siding composed of either vinyl or aluminum will expand and contract in response to temperature change. This requires careful attention to the manufacturer's specifications during application. Cutting the siding too short leaves exposed surfaces when the siding contracts, resulting in moisture damage and eventual leakage. Even small cracks exposing the undersurface can create major damage.

Vinyl has some environmental and health concerns, as do most exterior treatments. Vinyl chloride monomer, from which polyvinyl chloride siding is made, is a strong carcinogen and, when heated, releases toxic gases and vapors. Under normal conditions, significant exposures to vinyl chloride monomer are unlikely.

# Fiber Cement

Fiber cement siding is an engineered composite-material product that is extremely stable and durable. It is made from a combination of cellulose fiber material, cement, silica sand, water, and other additives. Fiber cement siding is fire resistant and useful in high-moisture areas. The fiber cement mixture is formed into siding or individual boards, then dried and cured using superheated steam under pressure.
The drying and curing process assures that fiber cement siding has very low moisture content, which makes the product stable (no warping or excessive movement) and its surface good for painting. Weight is a minor concern with fiber cement products: they weigh about 1½ times what comparably sized composite wood products do. Other concerns relate to cutting fiber cement: cutting produces a fine dust with microscopic silica fibers, so personal protective equipment (a respirator and goggles) is necessary. In addition, special tools are needed for cutting.

# Brick

Brick homes may seem on the surface to be nearly maintenance free. This is true in some cases, but, like all surfaces, brick also degrades. Although this degradation takes longer in brick than in other materials, repairing brick is complex and quite expensive. There are two basic types of brick homes. One is brick veneer, in which a thin brick is set to the outside of a wooden stud wall; the brick is not actually the supporting wall. The other is the true (structural) brick wall. Brick veneer typically has the same pattern of bricks around the doors and windows; a true brick wall will have brick arches or heavy steel plates above the doors and other openings of the building. Some brick walls have wooden studs behind the brick to provide an area for insulation, plumbing, vents, and wiring. It is important that weep holes and flashing be installed in brick homes to control moisture.

Improperly constructed building footers can result in major damage to the exterior brick surface of a home by allowing moisture, insects, and vermin to enter. A crack, such as the one in Figure 7.5, is an example of such a failure. This type of damage will require much more than just a mortar patch. Buildings constructed of concrete block also experience footer failure. Such damage is reason not to skimp when installing and inspecting the footing, and it reinforces the need for an appropriate concrete mix, rebar, and footing drains.

# Asbestos

Older homes were often sided with composites containing asbestos. This type of siding was very popular in the early 1940s. It was heavily used through the 1950s and decreasingly used up until the early 1960s. The siding is typically white, although it may be painted. It is often about ¼-inch thick and very brittle and was sold in sections of about 12×18 inches. The composite is quite heavy and very slatelike in difficulty of application. As it ages, it becomes even more brittle, and the surface erodes and becomes powdery.
This siding, when removed, must be disposed of in accordance with local, state, and federal laws regulating the disposal of asbestos materials. The workers and the site must be carefully managed and protected from contamination. The composite had several virtues as siding. It was quite resistant to fire, was not attractive to insects or vermin, provided very good insulation, and did not grow mold readily. Because of its very brittle nature, it could be damaged by children playing and, as a result, often was covered later with aluminum siding.

# Metal

If metal siding is used, the mounting fasteners (nails or screws) must be compatible with the metal composition of the siding, or the siding or fasteners will corrode. This corrosion is due to a galvanic response. Galvanic corrosion can produce devastating results that often are noticed only when it is too late. It should always be considered in inspections and is preventable in nearly all cases. When two dissimilar metals, such as aluminum and steel, are coupled and subjected to a corrosive environment (such as air, water, salt spray, or cleaning solutions), the more active metal (aluminum) becomes an anode and corrodes through exfoliation or pitting. This can happen with plumbing, roofing, siding, gutters, metal venting, and heating and air conditioning systems.

When two metals are electrically connected to each other in a conductive environment, electrons flow from the more active metal to the less active because of the difference in their electrical potential, the so-called "driving force." When the more active metal (anode) supplies current, it will gradually dissolve into ions in the electrolyte and, at the same time, produce electrons, which the least active metal (cathode) will receive through the metallic connection with the anode. The result is that the cathode will be negatively polarized and hence be protected against corrosion. Thus, less noble metals are more susceptible to corrosion. An example of protecting an appliance such as an iron-bodied water heater would be to ensure that piping connections are of similar material when possible and to follow the manufacturer's good practice and instructions on using dielectric (not electrically conductive) unions [8]. Figure 7.6 shows examples of electrochemical kinetics in pipes that were connected to dissimilar metals (see the sketch after the prevention list below).

# Metal Corrosion Prevention

• Use like metals when possible.
• Use metals with similar electronegativity levels.
• Use dielectric unions for plumbing.
• Use anodes that are inexpensive to replace.
• Remember: use metals with less susceptibility to protect metals that are more susceptible to corrosion.
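The anode/cathode behavior described under Metal can be summarized numerically. The sketch below uses approximate standard electrode potentials (values supplied here for illustration; they are not given in this manual) to predict which member of a dissimilar-metal couple corrodes.

```python
# Illustrative only: approximate standard electrode potentials in volts
# (supplied for this example, not taken from the manual). The more
# negative (more active) metal becomes the anode and corrodes.
POTENTIAL_V = {
    "aluminum": -1.66,
    "zinc": -0.76,
    "steel (iron)": -0.44,
    "copper": +0.34,
}

def corroding_metal(metal_a: str, metal_b: str) -> str:
    """Return the metal that acts as the anode when the two are coupled."""
    return min((metal_a, metal_b), key=POTENTIAL_V.__getitem__)

# Aluminum siding hung with steel fasteners: the aluminum is the anode,
# which matches the exfoliation and pitting described above.
print(corroding_metal("aluminum", "steel (iron)"))
```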
# Introduction

One of the primary differences between rural and urban housing is that much of the infrastructure taken for granted by the urban resident does not exist in the rural environment. Examples range from fire and police protection to drinking water and sewage disposal. This chapter is intended to provide basic knowledge about the sources of drinking water typically used for homes in the rural environment.

It is estimated that at least 15% of the population of the United States is not served by approved public water systems. Instead, they use individual wells and very small drinking water systems not covered by the Safe Drinking Water Act (SDWA); these wells and systems are often untested and contaminated [1]. Many of these wells are dug rather than drilled. Such shallow sources frequently are contaminated with both chemicals and bacteria. Figure 8.1 shows the change in water supply sources in the United States from 1970 to 1990. According to the 2003 American Housing Survey, of the 105,843,000 homes in the United States, water is provided to 92,324,000 (87.2%) by a public or private business; 13,097,000 (12.4%) have a well (11,276,000 drilled, 919,000 dug, and 902,000 not reported) [3].

# Water Sources

The primary sources of drinking water are groundwater and surface water. In addition, precipitation (rain and snow) can be collected and contained. The initial quality of the water depends on the source. Surface water (lakes, reservoirs, streams, and rivers), the drinking water source for approximately 50% of our population, is generally of poor quality and requires extensive treatment. Groundwater, the source for the other approximately 50%, is of better quality. However, it still may be contaminated by agricultural runoff or by surface and subsurface disposal of liquid waste, including leachate from solid waste landfills. Other sources, such as spring water and rain water, are of varying levels of quality, but each can be developed and treated to render it potable.

Most water systems consist of a water source (such as a well, spring, or lake), some type of tank for storage, and a system of pipes for distribution. Means to treat the water to remove harmful bacteria or chemicals may also be required. The system can be as simple as a well, a pump, and a pressure tank serving a single home, or it may be a complex system with elaborate treatment processes, multiple storage tanks, and a large distribution system serving thousands of homes. Regardless of system size, the basic principles for assuring the safety and potability of water are common to all systems.

Large-scale water supply systems tend to rely on surface water resources; smaller water systems tend to use groundwater. Groundwater is pumped from wells drilled into aquifers, the geologic formations in which water collects, often deep in the ground. In some aquifers the water is under enough pressure that it rises above the surrounding ground surface, which can result in flowing springs or artesian wells. Artesian wells are often drilled; once the aquifer is penetrated, the water flows onto the surface of the ground because of the hydrologic pressure from the aquifer.

SDWA defines a public water system as one that provides piped water to at least 25 persons or 15 service connections for at least 60 days per year. Such systems may be owned by homeowner associations, investor-owned water companies, local governments, and others. Water not from a public water supply, and which serves one or only a few homes, is called a private supply. Private water supplies are, for the most part, unregulated. Community water systems are public systems that serve people year-round in their homes. The U.S. Environmental Protection Agency (EPA) also regulates other kinds of public water systems, such as those at schools, factories, campgrounds, or restaurants, that have their own water supply.

The quantity of water in an aquifer and the water produced by a well depend on the nature of the rock, sand, or soil in the aquifer from which the well withdraws water. Drinking water wells may be shallow (50 feet or less) or deep (more than 1,000 feet). On average, our society uses almost 100 gallons of drinking water per person per day. Traditionally, water use rates are described in units of gallons per capita per day (gallons used by one person in 1 day). Of the drinking water supplied by public water systems, only a small portion is actually used for drinking. Residential water consumers use most drinking water for other purposes, such as toilet flushing, bathing, cooking, cleaning, and lawn watering. The amount of water we use in our homes varies during the day:

• Lowest rate of use-11:30 pm to 5:00 am,
• Sharp rise/high use-5:00 am to noon (peak hourly use from 7:00 am to 8:00 am),
• Moderate use-noon to 5:00 pm (lull around 3:00 pm), and
• Increasing evening use-5:00 pm to 11:00 pm (second minor peak, 6:00 pm to 8:00 pm).
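The consumption figures above translate directly into simple sizing arithmetic. This is a minimal sketch: the 100 gallons per capita per day figure comes from the text, while the threefold peak-hour multiplier is an assumption added here for illustration.

```python
# A minimal sketch of household demand arithmetic. The 100 gallons per
# capita per day (gpcd) figure comes from the text; the 3x peak-hour
# multiplier is an assumption for illustration only.
GPCD = 100.0

def daily_demand_gal(persons: int) -> float:
    return persons * GPCD

def peak_hour_gpm(persons: int, peaking_factor: float = 3.0) -> float:
    """Rough peak-hour flow in gallons per minute."""
    average_gpm = daily_demand_gal(persons) / (24 * 60)
    return average_gpm * peaking_factor

print(daily_demand_gal(4))          # 400 gallons per day for four people
print(round(peak_hour_gpm(4), 2))   # about 0.83 gpm during the morning peak
```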
# Source Location

The location of any source of water under consideration as a potable supply, whether individual or community, should be carefully evaluated for potential sources of contamination. As a general practice, a water source should be separated from potential contamination sources by the maximum distance that economics, land ownership, geology, and topography will allow. Table 8.1 details some of the sources of contamination and gives minimum distances recommended by EPA to separate pollution sources from the water source.

Water withdrawn directly from rivers, lakes, or reservoirs cannot be assumed to be clean enough for human consumption unless it receives treatment. Even water pumped from underground aquifers will require some level of treatment. Believing that surface water or soil-filtered water has purified itself is dangerous and unjustified: clear water is not necessarily safe water. To assess the level of treatment a water source requires, follow these steps:

• Determine the quality needed for the intended purpose (drinking water quality needs to be evaluated under the SDWA).
• For wells and springs, test the water for bacteriologic quality. This should be done with several samples taken over a period of time to establish a history on the source. With few exceptions, surface water and groundwater sources are always presumed to be bacteriologically unsafe and, as a minimum, must be disinfected.
• Analyze for chemical quality, including both legal (primary drinking water) standards and aesthetic (secondary) standards.
• Determine the economic and technical constraints (e.g., cost of equipment, operation and maintenance costs, cost of alternative sources, availability of power).
• Treat if necessary and feasible.

# Well Construction

Many smaller communities obtain drinking water solely from underground aquifers. In addition, according to the last census with data on water supply systems, 15% of people in the United States are on individual water supply systems. In some sections of the country, there may be a choice of individual water supply sources that will supply water throughout the year; other areas may be limited to one source. The various sources of water include drilled wells, driven wells, jetted wells, dug wells, bored wells, springs, and cisterns. Table 8.2 provides a more detailed description of some of these wells.

Regardless of the choice of water supply source, special safety precautions must be taken to assure the potability of the water. Drainage should be away from a well. The casing of the well should be sealed with grout or some other mastic material to ensure that surface water does not seep along the well casing to the water source. In Figure 8.2, the concrete grout has been reinforced with steel, and a drain away from the casing has been provided to help protect this water source. Additionally, research suggests that a minimum of 10 feet of soil is essential to filter unwanted biologic organisms from the water source. If the area of well construction has any sources of chemical contamination nearby, however, the local public health authority should be contacted. In areas with karst topography (areas characterized by a limestone landscape with caves, fissures, and underground streams), wells of any type are a health risk because of the long distances that both chemical and biologic contaminants can travel.

When determining where a water well is to be located, several factors should be considered (a screening sketch follows this list):

• the groundwater aquifer to be developed,
• the depth of the water-bearing formations,
• the type of rock formations that will be encountered,
• freedom from flooding, and
• relation to existing or potential sources of contamination.
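The siting factors above, together with the separation distances in Table 8.1, lend themselves to a simple screening check. The distances in this sketch are hypothetical placeholders, not the EPA minimums; substitute the Table 8.1 values or those of the local health authority.

```python
# Illustrative only: screen a candidate well site against minimum setback
# distances. These numbers are hypothetical placeholders -- use the EPA
# minimums in Table 8.1 or local requirements, not these values.
MIN_SETBACK_FT = {
    "septic tank": 50,
    "disposal field": 100,
    "barnyard": 100,
    "roadway ditch": 25,
}

def setback_violations(site_distances_ft: dict) -> list:
    """Return the contamination sources that sit too close to the well."""
    return [source for source, distance in site_distances_ft.items()
            if distance < MIN_SETBACK_FT.get(source, 0)]

print(setback_violations({"septic tank": 40, "disposal field": 150}))
# -> ['septic tank']
```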
The overriding concern is to protect any kind of well from pollution, primarily bacterial contamination. Groundwater found in sand, clay, and gravel formations is more likely to be safe than groundwater extracted from limestone and other fractured rock formations. Whatever the strata, wells should be protected from

• surface water entering directly into the top of the well,
• groundwater entering below ground level without filtering through at least 10 feet of earth, and
• surface water entering the space between the well casing and the surrounding soil.

Also, a well should be located so that it is accessible for maintenance, inspection, and pump or pipe replacement when necessary. Driven wells are driven directly into the ground and are quite shallow, resulting in frequent contamination by both chemical and bacterial sources.

# Sanitary Design and Construction

Whenever a water-bearing formation is penetrated (as in well construction), a direct route of possible water contamination exists unless satisfactory precautions are taken. Wells should be provided with casing or pipe to an adequate depth to prevent caving and to permit sealing of the earth formation to the casing with watertight cement grout or bentonite clay, from a point just below the surface to as deep as necessary to prevent entry of contaminated water. Once construction of the well is completed, the top of the well casing should be covered with a sanitary seal, an approved well cap, or a pump mounting that completely covers the well opening (Figure 8.3). If pumping at the design rate causes drawdown in the well, a vent through a tapped opening should be provided. The upper end of the vent pipe should be turned downward and suitably screened to prevent the entry of insects and foreign matter.

# Pump Selection

A variety of pump types and sizes exist to meet the needs of individual or community water systems. Some of the factors to be considered in selecting a pump for a specific application are well depth, system design pressure, demand rate in gallons per minute, availability of power, and economics.

# Dug and Drilled Wells

Dug wells (Figures 8.4 and 8.5) were one of the most common types of wells for individual water supply in the United States before the 1950s.
They were often constructed with one person digging the hole with a shovel and another pulling the dirt from the hole with a rope, pulley, and bucket. Of course, this required a hole of rather large circumference, with the size increasing the potential for leakage from the surface. The dug well also was traditionally quite shallow, often less than 25 feet, which often resulted in the water source being contaminated by surface water as it ran through cracks and crevices in the ground to the aquifer. Dug wells provide potable water only if they are properly located and the water source is free of biologic and chemical contamination. The general rule is, the deeper the well, the more likely the aquifer is to be free of contaminants, as long as surface water does not leak into the well without sufficient soil filtration.

Two basic processes are used to remediate dug wells. One is to dig around the well to a depth of 10 feet and install a solid slab with a hole in it to accommodate a well casing and an appropriate seal (Figures 8.4 and 8.5). The dirt is then backfilled over the slab to the surface, and the casing is equipped with a vent and second seal, similar to a drilled well, as shown in Figure 8.6. This results in a considerable reduction in the area of the casing that needs to be protected. Experience has shown that the disturbed dirt used for backfilling over the buried slab will continue to release bacteria into the well for a short time after modification. Most experts in well modification suggest installing a chlorination system on all dug wells to disinfect the water because of their shallow depth and possible biologic impurity during changing drainage and weather conditions above ground.

Figure 8.7 shows a dug well near the front porch of a house and within 5 feet of a drainage ditch and 6 feet of a rural road. This well is likely to be contaminated by the pesticide used to termite-proof the home and by whatever runs off the nearby road and drainage ditch. The well shown is about 15 feet deep. The brick structure around the well holds the centrifugal pump and a heater to keep the water from freezing. Although dangerous to drink from, this well is typical of dug wells used in rural areas of the United States for drinking water. Samples should not be taken from such wells because, if they are negative for both chemicals and biologic organisms, they instill a false sense of security: the quality of the water in such wells can change in just a few hours through infiltration of drainage water. Figure 8.8 shows a septic tank discharging into the drainage ditch 5 feet upstream of the dug well in Figure 8.7. This combination of drinking water and waste disposal presents an extreme risk to the people served by the dug well. Sampling is not the answer; the water source should be changed under the supervision of qualified environmental health professionals.

Figure 8.9 shows a drilled well. On the left side of the picture is the corner of the porch of the home. The well appears not to have a sanitary well seal; it is likely open to the air and will accept contaminants into the casing. Because the well is so close to the house, the casing is open, and the land slopes toward the well, it is a major candidate for contamination and not a safe water source.

# Springs

Another source of water for individual water supply is the natural spring. A spring is groundwater that reaches the surface because of the natural contours of the land. Springs are common in rolling hillside and mountain areas. Some provide an ample supply of water, but most provide water only seasonally. Without proper precautions, the water may be biologically or chemically contaminated and not considered potable. To obtain satisfactory (potable) water from a spring, it is necessary to

• find the source,
• properly develop the spring,
• eliminate surface water outcroppings above the spring to its source,
• prevent animals from accessing the spring area, and
• provide continuous chlorination.

Figure 8.10 illustrates a properly developed spring. Note that the line supplying the water is well underground, the spring box is watertight, and surface water runoff is diverted away from the area. Also be aware that the water quality of a spring can change rapidly.

# Cisterns

A cistern is a watertight, traditionally underground reservoir that is filled with rainwater draining from the roof of a building. Cisterns will not provide an ample supply of water for any extended period of time unless the amount of water used is severely restricted. Because the water comes off the roof, a pipe is generally installed to allow redirection of the first few minutes of rainwater until the water flows clear. Disinfection is, nevertheless, of utmost importance. Diverting the first flow of water does not assure safe, nonpolluted water, because chemicals and biologic waste from birds and other animals can migrate from catchment surfaces and from windblown sources. In addition, rainwater has a low pH, which can corrode plumbing pipes and fixtures if not treated.
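Whether a cistern can carry a household between rains is a matter of simple arithmetic. The sketch below uses the standard conversion of about 0.623 gallons per square foot of roof per inch of rain; the conversion factor and the 80% collection efficiency are assumptions added here, not figures from the manual.

```python
# A minimal sketch of cistern yield arithmetic (assumptions, not manual
# figures): one inch of rain on one square foot of roof yields about 0.623
# gallons; the 0.8 factor allows for first-flush diversion and other losses.
GAL_PER_SQFT_PER_INCH = 0.623

def cistern_yield_gal(roof_area_sqft: float, rainfall_in: float,
                      collection_efficiency: float = 0.8) -> float:
    return roof_area_sqft * rainfall_in * GAL_PER_SQFT_PER_INCH * collection_efficiency

# A 1,000 sq ft roof in a month with 3 inches of rain collects roughly:
print(round(cistern_yield_gal(1000.0, 3.0)))  # about 1,495 gallons
```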
# Disinfection of Water Supplies

Water supplies can be disinfected by a variety of methods, including chlorination, ozonation, ultraviolet radiation, heat, and iodination. The advantages and disadvantages of each method are noted in Table 8.3 (ozone gas, for example, is unstable and must be generated at the point of use). An understanding of certain terms (see the definitions below) is necessary in talking about chlorination, and Table 8.4 is a chlorination guide for specific water conditions.

Chlorine is the most commonly used water disinfectant. It is available in liquid, powder, gas, and tablet form. Chlorine gas is often used for municipal water disinfection, but can be hazardous if mishandled. Recommended liquid, powder, and tablet forms of chlorine include the following:

• Liquid: chlorine laundry bleach (about 5% chlorine) and swimming pool disinfectant or concentrated chlorine bleach (12%-17% chlorine).
• Powder: chlorinated lime (25% chlorine), dairy sanitizer (30% chlorine), and high-test calcium hypochlorite (65%-75% chlorine).
• Tablets: high-test calcium hypochlorite (65%-75% chlorine).
• Gas: gas chlorine is an economical and convenient way to use large amounts of chlorine. It is stored in steel cylinders ranging in size from 100 to 2,000 pounds. The packager fills these cylinders with liquid chlorine to approximately 85% of their total volume; the remaining 15% is occupied by chlorine gas. These ratios are required to prevent tank rupture at high temperatures. Direct sunlight should never reach gas cylinders, and the user must know the maximum withdrawal rate of gas per day per cylinder. For example, the maximum withdrawal rate from a 150-pound cylinder is approximately 40 pounds per day at room temperature discharging to atmospheric pressure.

# Definitions of Terms Related to Chlorination

Breakpoint chlorination-A process sometimes used to ensure the presence of free chlorine in public water supplies by adding enough chlorine to the water to satisfy the chlorine demand and to react with all dissolved ammonia that might be present. The concentrations of chlorine needed to treat a variety of water conditions are listed in Table 8.4.

Chlorine concentration-The concentration (amount) of chlorine in a volume of water, measured in parts per million (ppm). In 1 million gallons of water, a chlorine concentration of 1 ppm would require 8.34 pounds of 100% chlorine.

Contact time-The time, after chlorine addition and before use, given for disinfection to occur. For groundwater systems, contact time is minimal. However, in surface water systems, a contact time of 20 to 30 minutes is common.

Dosage-The total amount of chlorine added to water, given in parts per million (ppm) or milligrams per liter (mg/L).

Demand-Chlorine used up by reacting with particles of organic matter, such as slimes, or with other chemicals and minerals that may be present; the difference between the amount of chlorine applied to water and the total available chlorine remaining at the end of a specified contact period.

Residual-The amount of chlorine left after the demand is met; available (free) chlorine. This portion provides a ready reserve for bactericidal action. Both combined and free chlorine make up chlorine residual and are involved in disinfection: total available chlorine = free chlorine + combined chlorine.

Parts per million-A measure of concentration; 1 ppm is equivalent to 1 milligram per liter (mg/L).
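The definitions above contain everything needed for the basic chlorination calculation: dosage is demand plus the desired residual, and 1 ppm in 1 million gallons takes 8.34 pounds of 100% chlorine, so weaker products take proportionally more. A worked sketch:

```python
# A worked version of the pounds-of-chlorine arithmetic from the
# definitions above: dosage (ppm) x millions of gallons x 8.34 lb, scaled
# up for products weaker than 100% chlorine.

def chlorine_lb(dosage_ppm: float, million_gal: float,
                product_strength_pct: float = 100.0) -> float:
    return dosage_ppm * million_gal * 8.34 / (product_strength_pct / 100.0)

# Example: a demand of 1.5 ppm plus a 0.5 ppm free residual (2.0 ppm
# dosage) in 0.5 million gallons, using 65% calcium hypochlorite:
print(round(chlorine_lb(2.0, 0.5, 65.0), 1))  # about 12.8 lb
```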
# Chlorine Carrier Solutions

On small systems or individual wells, a high-chlorine carrier solution is mixed in a tank in the pump house and pumped by the chlorinator into the system. Table 8.5 shows how to make a 200-ppm carrier solution. By using 200 ppm, only small quantities of this carrier have to be added. Depending on the system, other stock solutions may be needed to make better use of existing chemical feed equipment.

# Routine Water Chlorination (Simple)

Most chlorinated public water supplies use routine water chlorination: enough chlorine is added to the water to meet the chlorine demand, plus enough extra to supply 0.2 to 0.5 ppm of free chlorine when checked after 20 minutes. Simple chlorination may not be enough to kill certain viruses. Chlorine as a disinfectant increases in effectiveness as the chlorine residual and the contact time are increased. Chlorine solutions should be mixed and chlorinators adjusted according to the manufacturer's instructions. Chlorine solutions deteriorate gradually when standing, so fresh solutions must be prepared as necessary to maintain the required chlorine residual. Chlorine residual should be tested at least once a week to assure effective equipment operation and solution strengths. A dated record should be kept of solution preparation, type, proportion of chlorine used, and residual-test results. Sensing devices are available that will automatically shut off the pump and activate a warning bell or light when the chlorinator needs servicing.

# Well Water Shock Chlorination

Shock chlorination is used to control iron and sulfate-reducing bacteria and to eliminate fecal coliform bacteria in a water system. To be effective, shock chlorination must disinfect the entire well depth, the formation around the bottom of the well, the pressure system, the water treatment equipment, and the distribution system. To accomplish this, a large volume of super-chlorinated water is siphoned down the well to displace the water in the well and some of the water in the formation around the well. Check the specifications on the water treatment equipment to ensure appropriate protection of the equipment. With shock chlorination, the entire system, from the water-bearing formation through the well bore and the distribution system, is exposed to water with a concentration of chlorine strong enough to kill iron and sulfate-reducing bacteria. The shock chlorination process is complex and tedious. Exact procedures and concentrations of chlorine for effective shock treatment are available [6,7].
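One practical piece of the shock chlorination job is estimating how much water stands in the well bore, since the super-chlorinated water must displace it. The geometry sketch below is an illustration only; as the text says, exact procedures and chlorine concentrations should come from references [6,7].

```python
# Illustrative only: estimate the standing water in a well bore that shock
# chlorination must displace. Chlorine concentrations are deliberately
# omitted; take exact procedures from references [6,7].
import math

GAL_PER_CUFT = 7.48

def well_bore_gallons(casing_diameter_in: float, water_depth_ft: float) -> float:
    radius_ft = casing_diameter_in / 12.0 / 2.0
    return math.pi * radius_ft ** 2 * water_depth_ft * GAL_PER_CUFT

# A 6-inch casing holding 80 feet of water:
print(round(well_bore_gallons(6.0, 80.0)))  # about 117 gallons
```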
# Backflow, Back-siphonage, and Other Water Quality Problems

In addition to contamination at its source, water can become contaminated with biologic materials and toxic construction or unsuitable joint materials as it flows through the water distribution system in the home. Water flowing backwards (backflow) in the pipes sucks materials back (back-siphonage) into the water distribution system, creating equally hazardous conditions. Other water quality problems relate to hardness, dissolved iron and iron bacteria, acidity, turbidity, color, odor, and taste.

# Backflow

Backflow is any unwanted flow of nonpotable water into a potable water system; the direction of flow is the reverse of that intended for the system. Backflow may be caused by numerous factors and conditions. For example, the reverse pressure gradient may be a result of either a loss of pressure in the supply main (back-siphonage) or flow from a pressurized system through an unprotected cross-connection (back-pressure). A reverse flow in a distribution main or in a customer's system can be created by a change of system pressure wherein the pressure at the supply point becomes lower than the pressure at the point of use. When this happens, the water at the point of use will be siphoned back into the system, potentially polluting or contaminating it. Contaminated or polluted water could even continue to backflow into the public distribution system. The point at which nonpotable water comes in contact with potable water is called a cross-connection. Examples of backflow causes include supplemental supplies, such as a standby fire protection tank; fire pumps; chemical feed pumps that overpower the potable water system pressure; and sprinkler systems.

# Back-Siphonage

Back-siphonage is a siphon action in an undesirable or reverse direction. When there is a direct or indirect connection between a potable water supply and water of questionable quality because of poor plumbing design or installation, there is always a possibility that the public water supply may become contaminated. Some examples of common plumbing defects are

• washbasins, sterilizers, and sinks with submerged inlets or threaded hose bibs and hoses;
• oversized booster pumps that overtax the supply capability of the main and thus develop negative pressure;
• submerged inlets and fire pumps (if the fire pumps are directly connected into the water main, a negative pressure will develop); and
• a threaded hose bib in a health-care facility (which is technically a cross-connection).

There are many techniques and devices for preventing backflow and back-siphonage. Some examples are

• vacuum breakers (nonpressure and pressure);
• backflow preventers (reduced pressure principle, double gate-double check valves, swing connection, and air gap-double diameter separation);
• surge tanks (booster pumps for tanks, fire system make-up tanks, and covered potable tanks); and
• color coding in all buildings where there is any possibility of connecting two separate systems or taking water from the wrong source (blue for potable, yellow for nonpotable, and other colors for chemicals and gases).

An air gap is a physical separation between the incoming water line and the maximum level in a container of at least twice the diameter of the incoming water line. If an air gap cannot be installed, a vacuum breaker should be installed. Vacuum breakers, unlike air gaps, must be installed carefully and maintained regularly, and they are not completely failsafe.
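The air gap rule above reduces to a one-line check: the vertical separation must be at least twice the diameter of the incoming water line.

```python
# The air gap rule from the text as a check: the gap must be at least
# twice the diameter of the incoming water line.

def air_gap_ok(supply_diameter_in: float, gap_in: float) -> bool:
    return gap_in >= 2.0 * supply_diameter_in

print(air_gap_ok(0.75, 1.0))  # False: a 3/4-inch line needs at least 1.5 inches
print(air_gap_ok(0.75, 2.0))  # True
```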
# Other Water Quality Problems

Water not only has to be safe to drink; it should also be aesthetically pleasing. Various water conditions affect water quality. Table 8.6 describes symptoms, causes, measurements, and how to correct these problems. Two examples from that table:

• Iron. Iron is common in soft water and when water hardness is above 175 ppm. Atomic absorption (AA) units or numerous colorimetric test kits measure iron in ppm; any measurement above 0.3 ppm will cause problems. To treat soft water that contains no iron but picks it up in distribution lines, add calcium to the water with calcite (limestone) units. To treat hard water containing iron ions, install a sodium zeolite ion exchange unit. To treat soft water containing iron, the carbon dioxide must be neutralized, followed by a manganese zeolite unit.
• Iron bacteria (red slime appears in the toilet). Caused by bacteria that act in the presence of iron. Check under the toilet tank cover for a slippery, jelly-like coating.

# Protecting the Groundwater Supply

Follow these tips to help protect the quality of groundwater supplies:

• Periodically inspect exposed parts of wells for cracked, corroded, or damaged well casings; broken or missing well caps; and settling and cracking of surface seals.
• Slope the area around wells to drain surface runoff away from the well.
• Install a well cap or sanitary seal to prevent unauthorized use of, or entry into, a well.
• Disinfect wells at least once a year with bleach or hypochlorite granules, according to the manufacturer's directions.
• Have wells tested once a year for coliform bacteria, nitrates, and other constituents of concern.
• Keep accurate records of any well maintenance, such as disinfection or sediment removal, that requires the use of chemicals in the well.
• Hire a certified well driller for new well construction, modification, or abandonment and closure.
• Avoid mixing or using pesticides, fertilizers, herbicides, degreasers, fuels, and other pollutants near wells.
• Do not dispose of waste in dry or abandoned wells.
• Do not cut off well casings below the land surface.
• Pump and inspect septic systems as often as recommended by local health departments.
• Never dispose of hazardous materials (e.g., paint, paint stripper, floor stripper compounds) in a septic system.

# Introduction

The housing inspector's prime concern while inspecting plumbing is to ensure the provision of a safe water supply system, an adequate drainage system, and ample and proper fixtures and equipment that do not contaminate water. The inspector must make sure that the system moves waste safely from the home and protects the occupants from backup of waste and dangerous gases. This chapter covers the major features of a residential plumbing system and the basic plumbing terms and principles the inspector must know and understand to identify housing code violations that involve plumbing. It will also assist in identifying the more complicated defects that the inspector should refer to the appropriate agencies. This chapter is not a plumbing code, but it should provide a base of knowledge sufficient to evaluate household systems.

# Elements of a Plumbing System

The primary purposes of a plumbing system are

• to bring an adequate and potable supply of hot and cold water to the inhabitants of a house, and
• to drain all wastewater and sewage discharge from fixtures into the public sewer or a private disposal system.

It is, therefore, very important that the housing inspector be completely familiar with all elements of these systems so that inadequacies of the structure's plumbing and other code violations will be recognized. To aid the inspector in understanding the plumbing system, a schematic of a home plumbing system is shown in Figure 9.1.

# Water Service

The piping of a house service line should be as short as possible. Elbows and bends should be kept to a minimum because they reduce water pressure and, therefore, the supply of water to fixtures in the house. The house service line also should be protected from freezing. Four feet of soil is a commonly accepted depth to bury the line to prevent freezing; this depth varies, however, across the country from north to south.
The local or state plumbing code should be consulted for recommended depths. The minimum service line size should be ¾ inch. The minimum water supply pressure should be 40 pounds per square inch (psi); no cement or concrete joints should be allowed, no glue joints between different types of plastic should be allowed, and no female threaded PVC fittings should be used. The materials used for a house service line may be approved plastic, copper, cast iron, steel, or wrought iron. The connections used should be compatible with the type of pipe used. A typical house service installation is pictured in Figure 9.2. The elements of the service installation are described below.

Corporation stop-The corporation stop is connected to the water main. This connection is usually made of brass and can be connected to the main with a special tool without shutting off the municipal supply. The valve incorporated in the corporation stop permits the pressure to be maintained in the main while the service to the building is completed.

Curb stop-The curb stop is a similar valve used to isolate the building from the main for repairs, nonpayment of water bills, or flooded basements. Because the corporation stop is usually under the street and it is necessary to break the pavement to reach the valve, the curb stop is used as the isolation valve.

Curb stop box-The curb stop box is an access box to the curb stop for opening and closing the valve. A long-handled wrench is used to reach the valve.

Meter stop-The meter stop is a valve placed on the street side of the water meter to isolate it for installation or maintenance. Many codes require a gate valve on the house side of the meter to shut off water for plumbing repairs. The curb and meter stops can be ruined in a short time if used very frequently.

The water meter is a device used to measure the amount of water used in the house. It is usually the property of the water provider and is a very delicate instrument that should not be abused. In cold climates, the water meter is often inside the home to keep it from freezing. When the meter is located inside the home, the company providing the water must make appointments to read the meter, which often results in higher water costs unless the meter is equipped with a signal that can be observed from the outside. The water meter is not shown in Figure 9.2 because of regional differences in the location of the unit. Because the electric system is sometimes grounded to an older home's water line, a grounding loop device should be installed around the meter. Many meters come with a yoke that maintains electrical continuity even though the meter is removed.

# Definitions of Terms Related to Home Water Systems

Air chambers-Pressure-absorbing devices that eliminate water hammer. Air chambers should be installed as close as possible to the valves or faucet and at the end of long runs of pipe.

Air gap (drainage system)-The unobstructed vertical distance through the free atmosphere between the outlet of a water pipe and the flood level rim of the receptacle into which it is discharging.

Air gap (water distribution system)-The unobstructed vertical distance through the free atmosphere between the lowest opening from any pipe or faucet supplying water to a tank, plumbing fixture, or other device and the flood level rim of the receptacle.

Backflow-The flow of water or other liquids, mixtures, or substances into the distributing pipes of a potable water supply from any source or sources other than the intended source. Back siphonage is one type of backflow.

Back siphonage-The flowing back of used, contaminated, or polluted water from a plumbing fixture or vessel into a potable water supply because of negative pressure in the pipe.

Branch-Any part of the piping system other than the main, riser, or stack.

Branch vent-A vent connecting one or more individual vents with a vent stack.

Building drain-Part of the lowest piping of a drainage system that receives the discharge from soil, waste, or other drainage pipes inside the walls of the building (house) and conveys it to the building sewer beginning 3 feet outside the building wall.

Cross connection-Any physical connection or arrangement between two otherwise separate piping systems (one of which contains potable water and the other water of unknown or questionable safety, steam, gas, or chemicals) whereby there may be a flow from one system to the other, the direction of flow depending on the pressure differential between the two systems. (See Backflow and Back siphonage.)
Disposal field-An area containing a series of one or more trenches lined with coarse aggregate and conveying the effluent from a septic tank through vitrified clay pipe or perforated, nonmetallic pipe, laid in such a manner that the flow will be distributed with reasonable uniformity into natural soil.

Drain-Any pipe that carries wastewater or waterborne waste in a building (house) drainage system.

Flood level rim-The top edge of a receptacle from which water overflows.

Flushometer valve-A device that discharges a predetermined quantity of water to fixtures for flushing purposes and is closed by direct water pressure.

Flushometer toilet-A toilet using a flushometer valve that uses pressure from the water supply system rather than the force of gravity to discharge water into the bowl; designed to use less water than conventional flush toilets.

Flush valve-A device located at the bottom of the tank for flushing toilets and similar fixtures.

Grease trap-See Interceptor.

Hot water-Potable water heated to at least 120°F-130°F (49°C-54°C) and used for cooking, cleaning, washing dishes, and bathing.

Insanitary-Unclean enough to endanger health.

Interceptor-A device to separate and retain deleterious, hazardous, or undesirable matter from normal waste and permit normal sewage or liquid waste to discharge into the drainage system by gravity.

Leader-An exterior drainage pipe for conveying storm water from roof or gutter drains to the building storm drain, combined building sewer, or other means of disposal.

Main sewer-See Public sewer.

Main vent-The principal artery of the venting system, to which vent branches may be connected.

Pneumatic-Pertaining to devices making use of compressed air, as in pressure tanks boosted by pumps.

Potable water-Water having no impurities present in amounts sufficient to cause disease or harmful physiologic effects and conforming in its bacteriologic and chemical quality to the requirements of the U.S. Environmental Protection Agency's Safe Drinking Water Act or meeting the regulations of other agencies having jurisdiction.

P & T (pressure and temperature) relief valve-A safety valve installed on a hot water storage tank to limit the temperature and pressure of the water.

P-trap-A trap with a vertical inlet and a horizontal outlet.

Public sewer-A common sewer directly controlled by public authority.

Relief vent-An auxiliary vent that permits additional circulation of air in or between drainage and vent systems.

Septic tank-A watertight receptacle that receives the discharge of a building's sanitary drain system or part thereof and is designed and constructed to separate solids from liquid, digest organic matter through a period of detention, and allow the liquids to discharge into the soil outside of the tank through a system of open-joint or perforated piping or through a seepage pit.
Sewerage system-A system comprising all piping, appurtenances, and treatment facilities used for the collection and disposal of sewage, except plumbing inside and in connection with the buildings served, and the building drain.

Soil pipe-The pipe that directs the sewage of a house to the receiving sewer, building drain, or building sewer.

Soil stack-The vertical piping that terminates in a roof vent and carries off the vapors of a plumbing system.

Stack vent-An extension of a soil or waste stack above the highest horizontal drain connected to the stack; sometimes called a waste vent or a soil vent.

Storm sewer-A sewer used for conveying rain water, surface water, condensate, cooling water, or similar liquid waste.

Trap-A fitting or device that provides a liquid seal to prevent the emission of sewer gases without materially affecting the flow of sewage or wastewater through it.

Vacuum breaker-A device to prevent backflow (back siphonage) by means of an opening through which air may be drawn to relieve negative pressure (vacuum).

Vapor lock-A bubble of air that restricts the flow of water in a pipe.

Vent stack-The vertical vent pipe installed to provide air circulation to and from the drainage system and that extends through one or more stories.

Water hammer-The loud thump of water in a pipe when a valve or faucet is suddenly closed.

Water service pipe-The pipe from the water main or other source of potable water supply to the water-distributing system of the building served.

Water supply system-The water service pipe, the water-distributing pipes, the necessary connecting pipes, fittings, control valves, and all appurtenances in or adjacent to the building or premises.

Wet vent-A vent that receives the discharge of waste other than from water closets.

Yoke vent-A pipe connecting upward from a soil or waste stack to a vent stack to prevent pressure changes in the stacks.

# Hot and Cold Water Main Lines

The hot and cold water main lines are usually hung from the basement ceiling or in the crawl space of the home and are attached to the water meter and hot water tank on one side and the fixture supply risers on the other. These pipes should be installed neatly and should be supported by pipe hangers or straps of sufficient strength and number to prevent sagging. Older homes that have copper pipe with soldered joints can pose a lead poisoning risk, particularly to children. In 1986, Congress banned lead solder containing greater than 0.2% lead and restricted the lead content of faucets, pipes, and other plumbing materials to no more than 8%. The water should be tested to determine the presence or level of lead in it. Until such tests can be conducted, the water should be run for about 2 minutes in the morning to flush any such material from the line.

Hot and cold water lines should be approximately 6 inches apart unless the hot water line is insulated. This is to ensure that the cold water line does not pick up heat from the hot water line [2]. The supply mains should have a drain valve (a stop-and-waste valve) to remove water from the system for repairs. These valves should be on the low end of the line or on the end of each fixture riser. The fixture risers start at the basement main and rise vertically to the fixtures on the upper floors. In a one-family dwelling, riser branches will usually proceed from the main riser to each fixture grouping. In any event, the fixture risers should not depend on the branch risers for support, but should be supported with a pipe bracket.
Water hammer-The loud thump of water in a pipe when a valve or faucet is suddenly closed. Water service pipe-The pipe from the water main or other sources of potable water supply to the water-distributing system of the building served. Water supply system-Consists of the water service pipe, the water-distributing pipes, the necessary connecting pipes, fittings, control valves, and all appurtenances in or adjacent to the building or premises. Wet vent-A vent that receives the discharge of waste other than from water closets. Yoke vent-A pipe connecting upward from a soil or waste stack to a vent stack to prevent pressure changes in the stacks. # Definitions of Terms Related to Home Water Systems The size of basement mains and risers depends on the number of fixtures supplied. However, a ¾-inch pipe is usually the minimum size used. This allows for deposits on the pipe due to hardness in the water and will usually give satisfactory volume and pressure. In homes without basements, the water lines are preferably located in the crawl space or under the slab. The water lines are sometimes placed in the attic; however, because of freezing, condensation, or leaks, this placement can result in major water damage to the home. In two-story or multistory homes, the water line placement for the second floor is typically between the studs and, then, for the shortest distance to the fixture, between the joists of the upper floors. # Hot and Cold Water Piping Materials Care must be taken when choosing the piping materials. Some state and local plumbing codes prohibit using some of the materials listed below in water distribution systems. Polyvinyl Chloride (PVC). PVC is used to make plastic pipe. PVC piping has several applications in and around homes such as in underground sprinkler systems, piping for swimming pool pumping systems, and low-pressure drain systems. PVC piping is also used for water service between the meter and building [3]. PVC, or polyvinyl chloride, is one of the most commonly used materials in the marketplace. It is in packaging, construction and automotive material, toys, and medical equipment. # Chlorinated PVC (CPVC). CPVC is a slightly yellow plastic pipe used inside homes. It has a long service life, but is not quite as tough as copper. Some areas with corrosive water will benefit by using chlorinated PVC piping. CPVC piping is designed and recommended for use in hot and cold potable water distribution systems [4]. Copper. Copper comes in three grades: • M for thin wall pipe (used mainly inside homes); • L for thicker wall pipe (used mainly outside for water services); and • K, the thickest (used mainly between water mains and the water meter). Copper lasts a long time, is durable, and connects well to valves. It should not be installed if the water has a pH of 6.5 or less. Most public utilities supply water at a pH between 7.2 and 8.0. Many utilities that have source water with a pH below 6.5 treat the water to raise the pH. Private well water systems often have a pH below 6.5. When this is the case, installing a treatment system to make the water less acidic is a good idea [5]. Galvanized Steel. Galvanized pipe corrodes rather easily. The typical life of this piping is about 40 years. One of the primary problems with galvanized steel is that, in saturated water, the pipe will become severely restricted by corrosion that eventually fills the pipe completely. Another problem is that the mismatch of metals between the brass valves and the steel results in corrosion. 
Whenever steel pipe meets copper or brass, the steel pipe rapidly corrodes. Dielectric unions can be used between copper and steel pipes; however, these unions will close off flow in a short time. The problem with dielectric unions is that they break the grounding effect if a live electrical wire comes in contact with a pipe. Some cities require the two pipes to be bonded electrically to maintain the safety of grounded pipes.

PEX. PEX is an acronym for cross-linked polyethylene: "PE" refers to the raw material used to make PEX (polyethylene), and "X" refers to the cross-linking of the polyethylene across its molecular chains. The molecular chains are linked into a three-dimensional network that makes PEX remarkably durable within a wide range of temperatures, pressures, and chemicals [6]. PEX is flexible and can be installed with fewer fittings than rigid plumbing systems. It is a good choice for repiping and for new homes and works well in corrosive water conditions. PEX stretches to accommodate the expansion of freezing water and then returns to its original size when the water thaws. Although it is highly freeze-resistant, no material is freeze-proof.

Kitec. Kitec is a multipurpose pressure pipe that unites the advantages of both metal and plastic. It is made of an aluminum tube laminated to interior and exterior layers of plastic. Kitec provides a composite piping system for a wide range of applications, often beyond the scope of metal or plastic alone. Unlike copper and steel, Kitec is noncorroding and resists most acids, salt solutions, alkalis, fats, and oils.

Poly. Poly pipe is a soft plastic pipe that comes in coils and is used for cold water. It can crack with age or wear through from rocks. Other weak points can be the stainless steel clamps or galvanized couplings.

Polybutylene [Discontinued]. Polybutylene pipe is a soft plastic pipe. This material is no longer recommended because of early chemical breakdown. Individuals with a house, mobile home, or other structure that has polybutylene piping with acetal plastic fittings may be eligible for financial relief if they have replaced that plumbing system. For claims information, call 1-800-392-7591 or go to www.pbpipe.com.

# Hot Water Safety

In the United States, more than 112,000 people enter a hospital emergency room each year with scald burns. Of these, about 6,700 (6%) must be hospitalized, and almost 3,000 of these scald burns come from tap water in the home. The three high-risk groups are children under the age of 5 years, the handicapped, and adults over the age of 65 years.

It takes only 1 second to get a serious third-degree burn from water that is 156°F (69°C). Tap water is too hot if instant coffee granules melt in it. Young children, some handicapped individuals, and elderly people are particularly vulnerable to tap water burns. Children cannot always tell the hot water faucets from the cold water faucets; they have delicate skin and often cannot get out of hot water quickly, so they suffer hot water burns most frequently. Elderly and handicapped persons are less agile and more prone to falls in the bathtub. They also may have diseases, such as diabetes, that make them unable to feel heat in some regions of the body, such as the hands and feet. Third-degree burns can occur quickly: in 1 second at 156°F (69°C), in 2 seconds at 149°F (65°C), in 5 seconds at 140°F (60°C), and in 15 seconds at 133°F (56°C).
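Those four data points reduce to a simple lookup. The sketch below (Python; the function and variable names are illustrative, not from this manual) encodes the published burn times and reports the listed figure for a given tap-water temperature:

```python
# Time (seconds) to a third-degree burn at a given tap-water
# temperature, per the figures cited in this chapter.
BURN_DATA = [
    (156, 1),   # 156 F (69 C) -> about 1 second
    (149, 2),   # 149 F (65 C) -> about 2 seconds
    (140, 5),   # 140 F (60 C) -> about 5 seconds
    (133, 15),  # 133 F (56 C) -> about 15 seconds
]

def seconds_to_burn(temp_f: float):
    """Return the listed burn time for the hottest data point at or
    below temp_f; None if temp_f is below all listed temperatures."""
    for temp, seconds in BURN_DATA:  # entries are ordered hottest first
        if temp_f >= temp:
            return seconds
    return None  # below 133 F the text gives no figure

if __name__ == "__main__":
    for t in (160, 140, 120):
        s = seconds_to_burn(t)
        msg = f"about {s} s to a third-degree burn" if s else "no listed figure"
        print(f"{t} F: {msg}")
```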
A tap-water temperature of 120°F-130°F (49°C-54°C) is hot enough for washing clothes, bedding, and dishes. Even at 130°F (54°C), water takes only a few minutes of constant contact to produce a third-degree burn. Few people bathe at temperatures above 110°F (43°C), nor should they. Water heater thermostats should be set at about 120°F (49°C) for safety and to save 18% of the energy used at 140°F (60°C). Antiscald devices for faucets and showerheads regulate water temperature and can help prevent burns; a plumber should install and calibrate these devices. Most hot water tank installations now require an expansion tank to reduce pressure fluctuations and a heat trap to keep hot water from escaping up the pipes.

# Types of Water Flow Controls

It is essential that valves be used in a water system so that the system can be controlled in a safe and efficient manner. The number, type, and size of valves required depend on the size and complexity of the system. Most valves can be purchased in sizes and types to match the pipe sizes used in water system installations. Listed below are some of the more commonly encountered valves, with a description of their basic functions.

Shutoff Valves. Shutoff valves should be installed between the pump and the pressure tank and between the pressure tank and the service entry to a building. Globe, gate, and ball valves are common shutoff valves. Gate and ball valves cause less friction loss than globe valves; ball valves last longer and leak less than gate valves. Shutoff valves allow servicing of parts of the system without draining the entire system.

Flow-control Valves. Flow-control valves provide uniform flow at varying pressures. They are sometimes needed to regulate or limit the use of water because of limited water flow from low-yielding wells or an inadequate pumping system, and they may be needed with some treatment equipment. These valves are often used to limit flow to a fixture. Orifices, mechanical valves, or diaphragm valves are used to restrict the flow to any one service line or complete system and to ensure a minimum flow rate to all outlets.

Relief Valves. Relief valves permit water or air to escape from the system to relieve excess pressure. They are spring-controlled and are usually adjustable to relieve varying pressures, generally above 60 psi. Relief valves should be installed in systems that may develop pressures exceeding the rated limits of the pressure tank or distribution system; positive displacement pumps, submersible pumps, and water heaters can all develop these excessive pressures. The relief valve should be installed between the pump and the first shutoff valve and must be capable of discharging the flow rate of the pump. A combined pressure and temperature relief valve is needed on all water heaters. Combination pressure and vacuum relief valves also should be installed to prevent vacuum damage to the system.

Pressure-reducing Valves. A pressure-reducing valve is used to reduce line pressure. On main lines, this allows the use of thinner-walled pipe and protects house plumbing. Sometimes these valves are installed on individual services to protect plumbing.

Altitude Valves. An altitude valve is often installed at the base of an elevated water storage tank to prevent it from overflowing. Altitude valves sense the tank level through a pressure line to the tank. An adjustable spring allows setting the level so that the valve closes and prevents more inflow when the tank becomes full.
Foot Valves. A foot valve is a special type of check valve installed at the end of a suction pipe or below the jet in a well to prevent backflow and loss of prime. The valve should be of good quality and cause little friction loss.

Check Valves. Check valves function much like foot valves: they permit water to flow in only one direction through a pipe. A submersible pump may use several check valves. One is located at the top of the pump to prevent backflow from causing back spin of the impellers. Some systems use another check valve and a snifter valve. These are placed in the drop pipe or pitless unit in the well casing and allow a weep hole located between the two valves to drain part of the pipe. When the pump is started, it forces the air from the drained part of the pipe into the pressure tank, thus recharging the pressure tank.

Frost-proof Faucets. Frost-proof faucets are installed outside a house, with the shutoff valve extending into the heated house to prevent freezing. After each use, the water between the valve and the outlet drains, provided the hose is disconnected, so no water is left to freeze.

Frost-proof Hydrants. Frost-proof hydrants make outdoor water service possible during cold weather without the danger of freezing. The shutoff valve is buried below the frost line. To avoid submerging it, which might result in contamination and back siphonage, the stop-and-waste valve must drain freely into a rock bed. These hydrants are sometimes prohibited by local or state health authorities.

Float Valves. Float valves respond to a high water level to close an inlet pipe, as in a tank-type toilet.

Miscellaneous Switches. Float switches respond to a high and/or low water level, as with an intermediate storage tank. Pressure switches with a low-pressure cutoff stop the pump motor if the line pressure drops to the cutoff point. Low-flow cutoff switches are used with submersible pumps to stop the pump if the water discharge falls below a predetermined minimum operating pressure. High-pressure cutoff switches are used to stop pumps if the system pressure rises above a predetermined maximum. Paddle-type flow switches detect flow by means of a paddle placed in the pipe that operates a mechanical switch when flow in the pipe pushes the paddle.

The inadvertent contamination of a public water supply as a result of incorrectly installed plumbing fixtures is a potential public health problem in all communities. Continuous surveillance by environmental health personnel is necessary to determine whether such public health hazards have developed as a result of additions or alterations to an approved system. All environmental health specialists should learn to recognize the three general types of defects found in potable water supply systems: backflow, back siphonage, and overhead leakage into open potable water containers. If identified, these conditions should be corrected immediately to prevent the spread of disease or poisoning from high concentrations of organic or inorganic chemicals in the water.

# Water Heaters

Water heaters (Figure 9.3) are usually powered by electricity, fuel oil, gas, or, in rare cases, coal or wood. They consist of a space for heating the water and a storage tank for providing hot water over a limited period of time. All water heaters should be fitted with a temperature and pressure (T&P) relief valve, no matter what fuel is used. The installation port for these valves may be found on the top or on the side of the tank near the top.
T&P valves should not be placed close to a wall or door jamb, where they would be inaccessible for inspection and use. Hot water tanks are sometimes sold without the T&P valve, which must then be purchased separately. This fact alone should encourage individual permitting and inspection by counties and municipalities to ensure that the valves are installed. The T&P valve should be inspected at least once a year. A properly installed T&P valve will operate when either the temperature or the pressure becomes too high due to an interruption of the water supply or a faulty thermostat. Figure 9.3 shows the correct installation of a gas water heater, and Figure 9.4 shows the placement of the T&P valve.

Particular care should be paid to the exhaust port of the T&P valve. The vent should be directed to within 6 inches of the floor, and care must be taken to avoid reducing the diameter of the vent or creating unnecessary bends in the discharge pipe; most codes allow only one 90° bend in the vent. The point is to avoid any constriction that could slow the release of steam from the tank and permit an explosive pressure buildup.

Water heaters installed on wooden floors should have water collection pans with a drainage tube that leads to a proper drain. The pan should be checked on a regular basis.

# Tankless Water Heaters

A tankless unit has a heating device that is activated by the flow of water when a hot water valve is opened. Once activated, the heater delivers a constant supply of hot water; the output of the heater, however, limits the rate of the heated water flow. Demand water heaters are available in propane (LP), natural gas, and electric models. They come in a variety of sizes for different applications, such as a whole-house water heater, a hot water source for a remote bathroom or hot tub, or a boiler to provide hot water for a home heating system. They can also be used as a booster for dishwashers, washing machines, and solar or wood-fired domestic hot water systems [7].

The appeal of demand water heaters is not only the elimination of tank standby losses and the resulting lower operating costs, but also the fact that the heater delivers hot water continuously. Most tankless models have a life expectancy of more than 20 years; in contrast, storage tank water heaters last 10 to 15 years. Most tankless models have easily replaceable parts that can extend their life by many more years.

# Traps

As mentioned earlier, the purpose of a trap is to seal out sewer gases from the structure. Because a plumbing system is subject to wide variations in flow, and this flow originates in many different sections of the system, pressures vary widely in the waste lines. These pressure differences tend to remove the water seal in the trap. The waste system must be properly vented to prevent the traps from siphoning dry, thus losing their water seal and allowing gas from the sewer into the building.

Objectionable Traps. The S-trap and the ¾ S-trap (Figure 9.7) should not be used in plumbing installations. They are almost impossible to ventilate properly, and the ¾ S-trap forms a perfect siphon. Mechanical traps were introduced to counteract this problem. It has been found, however, that the corrosive liquids flowing in the system corrode or jam these mechanical traps; for this reason, most plumbing codes prohibit them. The bag trap, an extreme form of S-trap, is seldom found. Traps are used only to prevent the escape of sewer gas into the structure. They do not compensate for pressure variations; only proper venting will eliminate pressure problems.
# Ventilation

A plumbing system is ventilated to prevent trap seal loss, material deterioration, and flow retardation.

Trap Seal Loss. The seal in a plumbing trap may be lost due to direct siphonage, indirect (momentum) siphonage, back pressure, evaporation, capillary attraction, or wind effect. The first two are probably the most common causes of loss. Figure 9.8 depicts the siphonage process; Figure 9.9 depicts loss of trap seal.

If a waste pipe is placed vertically after the fixture trap, as in an S-trap, the wastewater continues to flow after the fixture is emptied and clears the trap. This happens because the air pressure on the water in the fixture is greater than the air pressure in the waste pipe: the discharging water removes the air from the waste pipe and thereby creates a negative pressure in the waste line. In the case of indirect or momentum siphonage, the flow of water past the entrance to a fixture drain in the waste pipe removes air from the fixture drain. This reduces the air pressure in the fixture drain, and the entire assembly acts as an aspirator. (Figures 9.10 and 9.11 show plumbing configurations that would allow this type of siphonage to occur.)

Back Pressure. The flow of water in a soil pipe varies according to the fixtures being used. Small flows tend to cling to the sides of the pipe, but large ones form a slug of waste as they drop. As this slug of water falls down the pipe, the air in front of it becomes pressurized. As the pressure builds, it seeks an escape point: either a vent or a fixture outlet. If the vent is plugged or there is no vent, the only escape for this air is the fixture outlet. The air pressure forces the trap seal up the pipe into the fixture; if the pressure is great enough, the seal is blown out of the fixture entirely. Figures 9.8 and 9.9 illustrate the potential for this type of problem. Large water flow past the vent can aspirate the water from the trap, while water flow approaching the trap can blow the water out of the trap.

Vent Sizing. Vent pipe installation is similar to that of soil and waste pipe, and the same fixture unit criteria are used. Table 9.3 shows minimum vent pipe sizes. Vent pipes of less than 1¼ inches in diameter should not be used; vents smaller than this tend to clog and do not perform their function.
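Per-fixture minimums like these lend themselves to a simple table lookup. The sketch below (Python; the names are illustrative) encodes the fixture sizing table reproduced later in this chapter and checks a proposed vent diameter against both the per-fixture minimum and the 1¼-inch floor noted above:

```python
# Minimum supply, vent, and drain sizes (inches) by fixture, taken
# from the fixture sizing table reproduced later in this chapter.
MIN_SIZES = {
    #  fixture         (supply, vent, drain)
    "bathtub":       (0.5,   1.5,  1.5),
    "kitchen sink":  (0.5,   1.5,  1.5),
    "lavatory":      (0.375, 1.25, 1.25),
    "laundry sink":  (0.5,   1.5,  1.5),
    "shower":        (0.5,   2.0,  2.0),
    "toilet tank":   (0.375, 3.0,  3.0),
}

ABSOLUTE_MIN_VENT = 1.25  # vents under 1 1/4 in. tend to clog

def vent_ok(fixture: str, vent_diameter_in: float) -> bool:
    """True if the proposed vent meets both the per-fixture minimum
    and the absolute 1 1/4-inch minimum."""
    _, min_vent, _ = MIN_SIZES[fixture]
    return vent_diameter_in >= max(min_vent, ABSOLUTE_MIN_VENT)

print(vent_ok("shower", 1.5))     # False: a shower needs a 2-in. vent
print(vent_ok("lavatory", 1.25))  # True
```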
Individual Fixture Ventilation. Figure 9.12 shows a typical installation of a wall-hung plumbing unit. This type of ventilation is generally used for sinks, drinking fountains, and so forth. Air admittance valves are often used for individual fixtures. Figure 9.13 shows a typical installation of a bathtub or shower ventilation system, Figure 9.14 shows the proper vent connection for toilet fixtures, and Figure 9.15 shows a janitor's sink or slop sink with a proper P-trap. For a plumbing fixture to work properly, it must be vented as in Figures 9.13 and 9.14.

Unit Venting. Figures 9.10 and 9.11 show a back-to-back shared ventilation system for various plumbing fixtures. The unit venting system is commonly used in apartment buildings. This type of system saves a great deal of money and space when fixtures are placed back-to-back in separate apartments. It does, however, pose a problem if the vents are undersized, because one vent will aspirate the water from the other trap. Figure 9.16 shows a double combination Y-trap used for joining the fixtures to the common soil pipe on the other side of the wall.

Wet Venting. Bathroom fixture groupings are commonly wet vented; that is, the vent pipe also serves as a waste line.

# Total Drainage System

The drain, soil waste, and vent systems are all connected, and the inspector should remember the following fundamentals:

Working vents must provide air to all fixtures to ensure the movement of waste into the sewer. Improperly vented fixtures drain slowly and clog often. They also present a health risk if highly toxic and explosive sewer gases enter the home. Correct venting is shown in Figures 9.10-9.15; incorrect venting is shown in Figures 9.8 and 9.9.

A wet vent can result in one trap siphoning another dry when large volumes of water are poured down the drain. Wet vents are not permitted by many state plumbing codes because of this potential for self-siphoning.

Backup of sewage into sinks, dishwashers, and other appliances is always a possibility unless the system is equipped with air gaps or vacuum breakers. All connections to the potable water system must be a minimum of two pipe diameters above the overflow of the appliance and, in some cases where flat surfaces are near, two and one-half pipe diameters above the overflow.

A simple demonstration of how easily siphoning can occur is to hold a glass of water containing food coloring with the tip of a faucet in the colored water. If the sink's vegetable sprayer is directed into a second glass and sprayed, in most cases the colored water will be aspirated into the faucet and then out of the sprayer into the second glass. Weed or pest killer attachments that hook to garden hoses work on the same principle.

Figure 9.17 shows an outside hose bib equipped with a vacuum breaker. In areas of the United States that freeze, these vacuum breakers must be removed in winter because they trap water in a section of the line that can freeze and burst; many vacuum breakers sold today automatically drain to prevent freeze damage. Devices that pull water from a utility may create negative pressures that can damage water piping and pull dangerous substances into the line at the same time. These devices include power sprayers that hook to the home hose bib (outside faucet) and pressurize the spray by creating a vacuum on the supply side.

# Corrosion Control

To understand the proper maintenance procedures for preventing and eliminating water quality problems in plumbing systems, it is necessary to understand the process used to determine the chemical aggressiveness of water; this process determines when additional treatment is needed. Water that is out of balance can result in many negative outcomes, from toxic water to damaged and ruined equipment. Water dissolves and carries materials when it is not saturated. An equilibrium among pH, temperature, alkalinity, and hardness controls water's ability to form scale or to dissolve material. If water is saturated with harmless or beneficial substances, such as calcium, the threat of damage can be mitigated. The Langelier method, developed in the early 1930s, is used in boiler management, municipal water treatment, and swimming pools to provide this balance. In the Langelier index, a saturation value above +0.3 is scale forming, and a value below −0.3 is corrosive.
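The chapter cites the Langelier method without giving its formula. As a rough illustration, the sketch below (Python) uses one widely published approximation of the saturation pH (an assumption on our part, not taken from this manual) to compute the index from pH, temperature, total dissolved solids, calcium hardness, and alkalinity:

```python
import math

def langelier_index(ph, temp_c, tds_mg_l, ca_hardness_mg_l, alkalinity_mg_l):
    """Approximate Langelier saturation index: LSI = pH - pHs.
    pHs is estimated with a commonly published approximation (the
    manual itself does not supply the formula). Hardness and
    alkalinity are expressed as mg/L CaCO3."""
    a = (math.log10(tds_mg_l) - 1) / 10
    b = -13.12 * math.log10(temp_c + 273) + 34.55
    c = math.log10(ca_hardness_mg_l) - 0.4
    d = math.log10(alkalinity_mg_l)
    phs = (9.3 + a + b) - (c + d)
    return ph - phs

lsi = langelier_index(ph=7.5, temp_c=25, tds_mg_l=300,
                      ca_hardness_mg_l=150, alkalinity_mg_l=120)
# Values above about +0.3 suggest scale-forming water; values
# below about -0.3 suggest corrosive water.
print(f"LSI = {lsi:+.2f}")
```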
The susceptibility of metals to corrosion, from most to least susceptible, is as follows: magnesium, zinc, aluminum, cadmium, mild steel, cast iron, stainless steel (active), lead-tin solder, lead, tin, brass, gun metal, aluminum bronze, copper, copper-nickel alloy, Monel, titanium, stainless steel (passive), silver, gold, and platinum.

Minimum fixture supply, vent, and drain line sizes (inches):

| Fixture | Supply line | Vent line | Drain line |
|---|---|---|---|
| Bathtub | ½ | 1½ | 1½ |
| Kitchen sink | ½ | 1½ | 1½ |
| Lavatory | ⅜ | 1¼ | 1¼ |
| Laundry sink | ½ | 1½ | 1½ |
| Shower | ½ | 2 | 2 |
| Toilet tank | ⅜ | 3 | 3 |

# Water Conservation

How much attention should be paid to fixtures that drip a little water or that will not quite shut off? At 30 drops per minute, you will lose and pay for 54 gallons per month; at 60 drops per minute, 113 gallons per month; and at 120 drops per minute, 237 gallons per month. This is only a small loss of water compared with the 5 to 7 gallons per flush used by an older, properly functioning toilet; if the toilet is not properly maintained, the loss of water and its effect on the monthly water bill can be incredible.

Lower-flow toilets have been mandated to save precious and limited resources. Most pre-1992 toilets used up to 7 gallons per flush; toilets have since evolved to use 5.5, then 3.5, and now 1.6 gallons per flush. With the changes in the water usage laws in 1992, there were many customer complaints, and plumbers were in the bad position of installing products that nobody wanted to use. New and updated products now work better than the old water wasters.

According to the EPA, in 2000 a typical U.S. family of four spent approximately $820 per year on water and sewer fees, plus another $230 in energy to heat water. In many cities, water and sewer costs can be more than twice those amounts. Many people do not realize how much money they can save by taking simple steps to save water, nor do they know the cumulative effects small changes can have on water resources and environmental quality. Fixing a leaky faucet, toilet, or lawn-watering system can reduce water consumption, and changing to water-efficient plumbing fixtures and appliances can result in major water and energy savings [9,10].

Summer droughts remind many of the need to appreciate clean water as an invaluable resource. As the U.S. population increases, the need for clean water supplies continues to grow dramatically and puts additional stress on our limited water resources. We can all take steps to save and conserve this valuable resource. The EPA [11] suggests the following steps homeowners should take right away to save water and money:

• Stop leaks!-Check indoor water-using appliances and devices for leaks. Pay particular attention to toilets that leak.
• Take showers-Showers use considerably less water than baths.
• Replace shower heads-Replacement shower heads are available that reduce water use.
• Turn the water off when not needed-While brushing your teeth, turn the water off until you need to rinse.
• Replace your old toilet-The largest water user inside the home is the toilet. If a home was built before 1992 and the toilet has never been replaced, it is very likely not a water-efficient, 1.6-gallons-per-flush toilet. Choose a replacement toilet carefully to ensure that what you save per individual flush, you do not lose by having to flush more often.
• Replace your clothes washer-The second largest water user in the home is the washing machine. Energy Star-rated washers that also have a water factor at or lower than 9.5 use 35%-50% less water and 50% less energy per load, saving money on both water and energy bills.
• Plant the right plants with proper landscape design and irrigation-Select plants that are appropriate for the local climate; a 100% turf lawn in a dry desert climate uses a significant amount of water. Homeowners should also consider the benefits of a more natural landscape or wildscape.
• Water plants only as needed-Most water wasted in the garden comes from watering when plants do not need it or from not maintaining the irrigation system. If watering manually, set a timer and move the hose promptly. Make sure the irrigation controller has a rain shutoff device and that it is appropriately scheduled; newer irrigation systems have sensors to prevent watering while it is raining. Drip irrigation should be considered where practical.

# Putting It All Together

The following photographs, taken during construction of a home by Habitat for Humanity, show various plumbing elements discussed in this chapter.

A. Hot and cold copper water lines and drain, P-trap and vent, and vent for the washer drain. When a house is vacant for a while, the P-trap should be filled with water to prevent sewer gas from entering the home; mineral oil added to the water can slow the loss of fluid in the trap.

B. Hot and cold water pipes, soil pipe, and vent.

C. Vent for the sink and toilet, soil pipe, and cap for the toilet connection. A wax or plastic seal shaped like a donut will be placed on the cap before the toilet is bolted down.

D. Mixing and antiscald water flow control, vent for the fixture, hot and cold water lines, and bathtub overflow. At this point in construction, insulation might be considered for the hot water lines.

E. Water service and wastewater line: polyethylene water service pipe entering the home through the concrete basement wall, with a white plastic adapter between the polyethylene pipe and the ¾-inch copper water line. A short distance above the adapter is a pressure-reducing valve; to the right of the water line is the 4-inch PVC wastewater line.

# Chapter 10: On-site Wastewater Treatment

# Introduction

The French are considered the first to use an underground septic tank system, in the 1870s. By the mid-1880s, two-chamber, automatic siphoning septic tank systems, similar to those used today, were being installed in the United States. Even now, more than a century later, septic tank systems represent a major household wastewater treatment option: fully one-fourth to one-third of the homes in the United States use such a system [1].

On-site sewage disposal systems are used in rural areas where houses are spaced so far apart that a sewer system would be too expensive to install, or in areas around cities where the city government has not yet provided sewers to which the homes can connect. In these areas, people install their own private sewage treatment plants. As populations continue to expand beyond the reach of municipal sewer systems, more families are relying on individual on-site wastewater treatment systems and private water supplies.
The close proximity of on-site water and wastewater systems in subdivisions and other developed areas, reliance on marginal or poor soils for on-site wastewater disposal, and a general lack of understanding by homeowners about proper septic tank system maintenance pose a significant threat to public health. The expertise for inspecting, maintaining, and installing these systems generally rests with the environmental health staff of local county or city health departments.

These private disposal systems are typically called septic tank systems. A septic tank is a sewage holding device made of concrete, steel, fiberglass, polyethylene, or other approved material. This cistern, buried in a yard, may hold 1,000 gallons or more of wastewater. Wastewater flows from the home into the tank at one end and leaves the tank at the other (Figure 10.1: effluent leaves the home through a pipe, enters the septic tank, and travels through a distribution box to a trench absorption system composed of perforated pipe) [2].

Proper maintenance of septic tanks is a public health necessity. Enteric diseases such as cryptosporidiosis, giardiasis, salmonellosis, hepatitis A, and shigellosis may be transmitted through human excrement. Historically, major epidemics of cholera and typhoid fever were primarily caused by improper disposal of wastewater. The earliest epidemiology lesson was learned through the efforts of Dr. John Snow of England (1813-1858) during a devastating cholera epidemic in London [3]. Dr. Snow, known as the father of field epidemiology, discovered that the city's water supply was being contaminated by improper disposal of human waste. He published a brief pamphlet, On the Mode of Communication of Cholera, suggesting that cholera is a contagious disease caused by a poison that reproduces in the human body and is found in the vomitus and stools of cholera patients. He believed that the main, although not the only, means of transmission was water contaminated with this poison. This differed from the commonly accepted belief at the time that diseases were transmitted by inhaling vapors.

# Treatment of Human Waste

Safe, sanitary, nuisance-free disposal of wastewater is a public health priority in all population groups, small and large, rural or urban. Wastewater should be disposed of in a manner that ensures that
• community or private drinking water supplies are not threatened;
• direct human exposure is not possible;
• waste is inaccessible to vectors, insects, rodents, or other possible carriers;
• all environmental laws and regulations are complied with; and
• odor or aesthetic nuisances are not created.

In Figure 10.2, a straight pipe from a nearby home discharges untreated sewage that flows from a shallow drainage ditch to a roadside mountain creek in which many children and some adults wade and fish. The clear water (Figure 10.3) is quite deceptive in terms of the health hazard presented. A 4-mile walk along the creek revealed 12 additional pipes that were also releasing untreated sewage. Some people in the area reportedly regard this creek as a source of drinking water.

Raw or untreated domestic wastewater (sewage) is primarily water, containing only about 0.1% impurities that must be treated and removed. Domestic wastewater contains biodegradable organic materials and, very likely, pathogens. The primary purpose of wastewater treatment is to remove these impurities and release the treated effluent into the ground or a stream.
There are various processes for accomplishing this:
• Centralized treatment-Publicly owned treatment works (POTWs) that use primary (physical) treatment and secondary (biologic) treatment on a large scale to treat flows of up to millions of gallons or liters per day;
• Treatment on-site-Septic tanks and absorption fields or variations thereof; and
• Stabilization ponds (lagoons)-Centralized treatment for populations of 10,000 or less when soil conditions are marginal and land space is ample.

Not included are pit privies and compost toilets. Historically, wastewater disposal systems are categorized as water-carrying and non-water-carrying. Non-water-carried human fecal waste can be contained and decomposed on-site, the primary examples being the pit privy and the compost toilet. These systems are not practical for individual residences because they are inconvenient and they expose users to inclement weather, biting insects, and odors. Because of the depth of the disposal pit, privies may introduce waste directly into groundwater. It should be noted that these types of systems are often used and may be acceptable in low-water-use settings such as small campsites or along nature trails [4-6].

# Epidemiology

John Snow, a London physician, was among the first to use anesthesia. It is his work in epidemiology, however, that earned him his reputation as a prototype for epidemiologists. Dr. Snow's brief 1849 pamphlet, On the Mode of Communication of Cholera, caused no great stir, and his theory that the city's water supply was contaminated was only one of many proposed during the epidemic. Snow, however, was able to prove his theory in 1854, when another severe epidemic of cholera occurred in London. Through painstaking documentation of cholera cases and correlation of the comparative incidence of cholera among subscribers to the city's two water companies, he showed that cholera occurred much more frequently among customers of one water company. This company drew its water from the lower Thames, which had become contaminated with London sewage, whereas the other company obtained water from the upper Thames. Snow's evidence soon gained many converts.

A striking incident during this epidemic has become legendary. In one neighborhood, at the intersection of Cambridge Street and Broad Street, the concentration of cholera cases was so great that the number of deaths reached more than 500 in 10 days. Snow investigated the situation and concluded that the cases were clustered around the Broad Street pump. He advised an incredulous but panicked assembly of officials to have the pump handle removed, and when this was done, the epidemic was contained. Snow was a skilled practitioner as well as an epidemiologist, and his creative use of the scientific information of his time is an appropriate example for those interested in disease prevention and control [3].

# On-site Wastewater Treatment Systems

As urban sprawl continues and the population increases in rural areas, the cost of building additional sewage disposal systems increases. One of the prime reasons for annexation is to increase the tax base without increasing the cost of municipal government. The governments involved often buy into short-term tax gains at massive long-term costs for eventual infrastructure improvements to annexed communities. Installing septic tank systems to provide on-site disposal is common, but it is a temporary solution at best.
Because property size must be sufficient to allow space for septic system replacement, the cost to a municipality of later installing a centralized sewer system is dramatically increased by the large lot sizes.

Two microbiologic processes occur in all methods of decomposing domestic wastewater: anaerobic decomposition (by bacteria that do not require oxygen) and aerobic decomposition (by bacteria that require oxygen). Aerobic decomposition is generally preferred because aerobic bacteria decompose organic matter (sewage) much faster than anaerobic bacteria do and because odors are less likely. Centralized wastewater treatment facilities use aerobic processes, as do most types of lagoons; septic tank systems use both processes.

# Septic Tank Systems

Approximately 21% of American homes are served by on-site sewage disposal systems; of these, 95% are septic tank field systems. Septic tank systems are used for on-site wastewater treatment in many homes, both rural and urban, in the United States. If maintained and operated within acceptable parameters, they are capable of properly treating wastewater for a limited number of years and will need both routine maintenance and, eventually, major repairs. Proper placement and installation are key to the successful operation of any on-site wastewater treatment system, but septic tank systems have a finite life expectancy, and all such systems will eventually fail and need to be replaced. Figure 10.4 shows a typical rural home with a well and a septic system.

Septic tank systems generally are composed of the septic tank, distribution box, and absorption field (also known as the soil drain field or leach field). The septic tank serves three purposes: sedimentation of solids in the wastewater, storage of solids, and anaerobic breakdown of organic materials.

To ensure that the septic tank and absorption field do not contaminate water wells, groundwater, or streams, the system should be 10 feet from the house and other structures, at least 5 feet from property lines, 50 feet from water wells, and 25 feet from streams. The entire system area should be easily identifiable; there have been occasions when owners have paved or built over the area. Local health code authorities must be consulted on the required distances in their area because of soil and groundwater issues.

Aerobic, or aerated, septic systems use a suspended-growth wastewater treatment process and can remove suspended solids that are not removed by simple sedimentation. Under appropriate conditions, aerobic units may also provide nitrification of ammonia as well as significant pathogen reduction. Some type of primary treatment usually precedes the aerated tank. The tanks contain an aeration chamber, with either mechanical aerators or blowers or air diffusers, and an area for final clarification and settling. Aerobic units may be designed as either continuous-flow or batch-flow systems; the continuous-flow type is the most commercially available. Effluent from the aerated tank is conveyed by gravity flow or pumping either to further treatment or pretreatment processes or to final treatment and disposal in a subsurface soil disposal system. Various types of pretreatment may be used ahead of the aerobic units, including septic tanks and trash traps. The batch-flow system collects and treats wastewater over a period of time, then discharges the settled effluent at the end of the cycle [8].
Aerobic units may be used by individual or clustered residences and establishments to treat wastewater before either further treatment/pretreatment or final on-site subsurface treatment and disposal. These units are particularly applicable where enhanced pretreatment is important and where the availability of land suitable for final on-site disposal of wastewater effluent is limited. Because of their need for routine maintenance to ensure proper operation and performance, aerobic units may be well suited for multiple-home or commercial applications, where economies of scale tend to reduce maintenance and repair costs per user. The lower organic and suspended-solids content of the effluent may allow a reduction in the land area required for subsurface disposal systems.

A properly functioning septic tank removes approximately 75% of the suspended solids, oil, and grease from effluent. Because the detention time in the tank is 24 hours or less, there is no major kill of pathogenic bacteria; the bacteria are removed in the absorption field (drain field). However, some soils and soil conditions prevent a drain field from absorbing effluent from the septic tank.

Septic tanks are sized to retain the total volume of sewage produced by a household in a 24-hour period. Normally, a 1,000-gallon tank is the minimum size used. State or local codes generally require larger tanks as the potential occupancy of the home increases (e.g., 1,250 gallons for four bedrooms) and may require two tanks in succession when inadequate soils require an alternative system installation. Figure 10.5 shows a typical septic tank.

Distribution boxes are not required by most on-site plumbing codes or by the U.S. Environmental Protection Agency. When used, distribution boxes provide a convenient inspection port. In addition, if a split-system absorption field is installed (two separate absorption trench systems), the distribution box is a convenient place to install a diversion valve for annually alternating absorption fields.
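The tank-sizing rule above (a 1,000-gallon minimum, stepped up with occupancy) can be expressed as a small function. The sketch below (Python) is illustrative only: the 250-gallon step per bedroom beyond three is an assumption consistent with the four-bedroom example in the text, and actual sizes are set by state and local codes.

```python
def septic_tank_size_gallons(bedrooms: int) -> int:
    """Illustrative sizing: 1,000-gallon minimum, stepped up for
    larger homes. The text cites 1,250 gallons for four bedrooms;
    the 250-gallon-per-bedroom step is an assumed extrapolation."""
    if bedrooms <= 3:
        return 1000
    return 1000 + 250 * (bedrooms - 3)

for n in (2, 3, 4, 5):
    print(f"{n} bedrooms -> {septic_tank_size_gallons(n)} gallons "
          "(verify against local code)")
```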
# Absorption Field Site Evaluation

The absorption field has a variety of names, including leach field, tile field, drain field, disposal field, and nitrification field. The effluent from the septic tank is directed to the absorption field for final treatment. The suitability of the soil, along with the other factors noted below, determines the best way to properly treat and dispose of the wastewater.

Most, but unfortunately not all, states require areas not served by publicly owned sewers to be preapproved for on-site wastewater disposal before home construction through a permitting process. This process typically requires a site evaluation by a local environmental health specialist, soil scientist, or, in some cases, a private contractor. To assist in the site evaluation, soil survey maps from the local soil conservation service office may be used to provide general information about soils in the area. The form shown in Figure 10.6 is typical of those used in conducting a soil evaluation.

Sites for on-site wastewater disposal are first evaluated for use with a conventional septic tank system. Evaluation factors include site topography, landscape position, soil texture, soil structure, internal drainage, depth to rock or other restrictive horizons, and usable area. If the criteria are met, a permit is issued to allow the installation of a conventional septic tank system. Areas that do not meet the criteria for a conventional system may meet less-restrictive criteria for an alternative type of system. Many sites are unsuitable for any type of on-site wastewater disposal system because of severe topographic limitations, poor soils, or other evaluation criteria; such sites should not be used for on-site wastewater disposal because of the high likelihood of system failure.

Some states and localities may require a percolation test as part of the site evaluation. As a primary evaluation method, percolation tests are a poor indicator of a soil's ability to treat and move wastewater throughout the year. However, information obtained from percolation tests may be useful in conjunction with a comprehensive soil analysis.

# Absorption Field Trench

A conventional absorption field trench (Figure 10.7), also known as a rock lateral system, is the most common system used on level land or land with moderate slopes where there is adequate soil depth above the water table or other restrictive horizons. The effluent from the septic tank flows through solid piping to a distribution box or, in many cases, straight to the absorption field. With the conventional system and most alternative systems, the effluent flows through perforated pipes into gravel-filled trenches and subsequently seeps through the gravel into the soil.

The local regulatory agency should be consulted about the acceptable depth of the absorption field trench. Some states require as much as 4 feet of separation between the bottom of the trench and the groundwater. The depth of absorption field trenches should be at least 18 inches and ideally no deeper than 24 inches, and the absorption field pipe should be laid flat, with no slope. There should be a minimum of 12 to 18 inches of acceptable soil below the bottom of the trench to any bedrock, water table, or restrictive horizon. The length of the trench should not exceed 100 feet for systems using a distribution box; serpentine systems may be several hundred feet long. The trench should be filled with crushed or fragmented clean rock or gravel in its bottom 6 inches. Perforated 4-inch-diameter pipe is laid on top of the gravel, then covered with an additional 2 inches of rock and leveled, for a total of 12 inches. A geotextile or biodegradable material such as straw should be placed over the gravel before backfilling the trench to prevent soil from clogging the spaces between the rocks. One or more monitoring ports should be installed in the absorption area, extending to the bottom of the gravel, to allow measurement of the actual liquid depth in the gravel; this is essential for subsequent testing of the adequacy of the system.

As a general rule, using longer and narrower trenches to meet square-footage requirements produces a better working and longer lasting ground absorption sewage disposal system. Studies have shown that as septic systems age, the majority of effluent absorption occurs by lateral movement through the trench sidewalls. Longer, narrower trenches (such as 400 feet long by 2 feet wide instead of 200 feet by 4 feet to obtain 800 square feet) greatly increase the sidewall area available for lateral movement of wastewater, as the sketch below illustrates.
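The arithmetic behind that rule is easy to check. The sketch below (Python) compares the two trench configurations cited above, assuming an 18-inch (1.5-foot) effective gravel depth, an illustrative value within the 18- to 24-inch range given earlier: both configurations yield 800 square feet of bottom area, but the longer trench doubles the sidewall area.

```python
def trench_areas(length_ft: float, width_ft: float, depth_ft: float):
    """Bottom area and total sidewall area (both long sides)
    of a single absorption trench."""
    bottom = length_ft * width_ft
    sidewall = 2 * length_ft * depth_ft
    return bottom, sidewall

# The two configurations compared in the text, with an assumed
# 1.5-ft effective depth for illustration.
for length, width in ((200, 4), (400, 2)):
    bottom, side = trench_areas(length, width, 1.5)
    print(f"{length} ft x {width} ft: bottom {bottom:.0f} sq ft, "
          f"sidewall {side:.0f} sq ft")
# Both give 800 sq ft of bottom area, but the longer, narrower
# trench doubles the sidewall area (1,200 vs. 600 sq ft).
```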
# Alternative Septic Tank Systems

As the cost of land for home building increases and its availability decreases, land that was once considered unsuitable is being developed. This land often has poor soil and drainage properties. Such sites require a considerable amount of engineering skill to design an acceptable wastewater disposal system, and in many cases they cannot be served by seepage systems at a reasonable cost. These systems are primarily regulated by state and local government, and approval must be obtained from the appropriate regulatory agencies before use. Even if a site is approved in one state or county jurisdiction, a similar site may not be approved in another.

The primary difficulty with septic tank systems is treating effluent in slowly permeable or marginal soils. Low-water-use devices, when installed, may make it possible to use a small percentage of septic tank systems in marginal soil; however, low-water-use devices are usually required as part of a larger effort to develop a usable alternative sewage disposal system. There are numerous alternative sewage disposal methods that can be used when regular subsurface disposal is not appropriate [11]. Some of the more common alternative systems are described below.

# Mound Systems

A mound system is elevated above the natural soil surface to achieve the desired vertical separation from a water table or impervious material. The elevation is accomplished by placing sand fill material on top of the best native soil stratum. At least 1 foot of naturally occurring soil is necessary for a mound system to function properly. Minimizing water usage in the home is also critical to prevent effluent from weeping through the sides of the mound (Figure 10.8).

When a mound system is constructed, the septic tank usually receives wastewater from the house by gravity flow. A lift station, located in the second compartment or in a separate tank, pumps the effluent up to the distribution piping in the mound. Floats in the lift station control the size of the pumped effluent dose. An alarm should be installed to alert the homeowner to pump failure so that repairs can be made before the pump tank overfills. The advantages and disadvantages of mound systems are shown in Table 10.1.

Table 10.1. Advantages and disadvantages of mound systems

| Advantages | Disadvantages |
|---|---|
| May be used in areas with high groundwater, bedrock, or restrictive clay soil near the surface | Must be installed on relatively level lots |
| Space efficient compared with conventional rock lateral systems | Periodic flushing of the distribution network is required |
| Allows home building in areas unsuitable for below-grade systems | System may be expensive |
| Water softener wastes, common household chemicals, and detergents are not harmful to the system | System may be difficult to design |
| | Regular inspection of the pumps and controls is necessary to keep the system in proper working condition |

# Low-Pressure Pipe Systems

Low-pressure pipe (LPP) systems may also be used where the soil profile is shallow. These systems are similar to mounds except that they use the naturally occurring soil as it exists on-site instead of elevating the disposal field with fill material. LPP systems are installed with a trenching machine at depths of 12 to 18 inches. The LPP system consists of a septic tank, high-water alarm, pumping tank, supply line, manifold, lateral lines, and submersible effluent pump (Figure 10.9).

When septic tank effluent rises to the level of the pump control in the pumping tank, the pump turns on and effluent moves through the supply line and distribution laterals. The laterals contain small holes and are typically placed 3 to 8 feet apart. From the trenches, the effluent moves into the soil, where it is treated. The pump turns off when the effluent falls to the lower control. Dosing takes place one to two times daily, depending on the amount of effluent generated. The time between doses allows the effluent to be absorbed into the soil and allows oxygen to reenter the soil to break down any solids left behind. If the pump malfunctions, an alarm notifies the homeowner to contact a qualified septic system contractor; the pump must be repaired or replaced quickly to prevent the pump tank from overflowing. The advantages and disadvantages of LPP systems are shown in Table 10.2.

Table 10.2. Advantages and disadvantages of low-pressure pipe systems

| Advantages | Disadvantages |
|---|---|
| Space requirements are nearly half those of a conventional septic tank system | Some low-pressure pipe systems may gradually accumulate solids at the dead ends of the lateral lines; therefore, maintenance is required |
| Can be installed on irregular lot shapes and sizes | Electric components are necessary |
| Can be installed at shallower depths and requires less topsoil cover | Design and installation may be difficult; installers with experience with such systems should be sought |
| Provides alternating dosing and resting cycles | |
| Installation sites are left in their natural condition | |
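The float-and-alarm behavior described above amounts to simple on/off hysteresis. The sketch below (Python) models it; the class name and the level thresholds are invented for illustration and are not taken from this manual.

```python
class DosingPumpController:
    """Minimal sketch of the float logic described for LPP systems:
    the pump starts at the upper float, stops at the lower float,
    and a high-water alarm warns of pump failure. Thresholds are
    illustrative only."""

    def __init__(self, on_level=30.0, off_level=10.0, alarm_level=40.0):
        self.on_level = on_level      # effluent depth (in.) that starts a dose
        self.off_level = off_level    # depth at which the dose ends
        self.alarm_level = alarm_level
        self.pump_running = False

    def update(self, level: float) -> None:
        if level >= self.alarm_level:
            print("ALARM: high water - pump may have failed")
        if level >= self.on_level:
            self.pump_running = True   # dose the distribution laterals
        elif level <= self.off_level:
            self.pump_running = False  # dose complete; soil rests

ctrl = DosingPumpController()
for level in (5, 32, 20, 8, 41):
    ctrl.update(level)
    print(f"level {level} in. -> pump {'on' if ctrl.pump_running else 'off'}")
```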
# Plant-rock Filter Systems (Constructed Wetlands)

Considered experimental in some states, plant-rock filter systems are being used with great success in Kentucky, Louisiana, and Michigan. Plant-rock filters generally consist of a two-compartment septic tank, a rock filter, and a small overflow lateral (absorption) field. Overflow from the septic tank is directed into the rock filter. The rock filter is a long, narrow trench (3 to 5 feet wide and 60 to 100 feet long) lined with leak-proof polyvinyl chloride or butyl plastic to which rock is added: 2- to 4-inch-diameter rock is used below the effluent flow line and larger rock above (Figure 10.10).

Plant-rock filter systems are typically sized to allow 1.3 cubic feet of rock area per gallon of total daily waste flow; a typical size for a three-bedroom house would be 468 square feet of interior area. Various width-to-length ratios within the parameters listed above can be used to obtain the necessary square footage, and the trenches can even be designed in an "L" shape to accommodate small building lots.

Treatment begins in the septic tank. The partially treated wastewater enters the lined plant-rock filter cell through solid piping and is distributed across the cell. The plants within the system introduce oxygen into the wastewater through their roots. As the wastewater becomes oxygenated, beneficial microorganisms and fungi thrive on and around the roots, digesting organic matter. In addition, large amounts of water are lost through evapotranspiration. The plants most widely used in these systems include cattails, bulrush, water lilies, many varieties of iris, and nutgrass. Winter temperatures have little effect because the roots, which stay alive during the winter months, do the work in these systems. Discharge from wetlands systems may require disinfection. The advantages and disadvantages of plant-rock filter systems are shown in Table 10.3.

Table 10.3. Advantages and disadvantages of plant-rock filter systems

| Advantages | Disadvantages |
|---|---|
| Approximately one-third the size of conventional septic tank absorption field systems | May be slightly more costly to install |
| Can be placed on irregular or segmented lots | Disinfection required for effluent discharge |
| May be placed in areas with shallow water tables, high bedrock, or restrictive horizons | May not find installers knowledgeable about the system |
| Relatively low maintenance | Life span of the system is unknown because of its relative newness |
| | Perceived by some as unsightly |
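The sizing example above can be reproduced with the design flow of 120 gallons per bedroom per day cited later in this chapter. In the sketch below (Python), the 1.3 factor is treated as square feet of interior filter area per gallon of daily flow (an interpretation on our part, since the manual's phrasing mixes cubic and square feet); this interpretation reproduces the 468-square-foot figure for a three-bedroom house.

```python
SIZING_FACTOR = 1.3     # sq ft of filter area per gallon of daily flow (assumed)
GAL_PER_BEDROOM = 120   # design flow per bedroom, cited in this chapter

def plant_rock_filter_area(bedrooms: int) -> float:
    """Interior filter area for a plant-rock system, applying the
    1.3 sizing factor to an assumed flow of 120 gal/bedroom/day."""
    daily_flow = bedrooms * GAL_PER_BEDROOM
    return SIZING_FACTOR * daily_flow

area = plant_rock_filter_area(3)
print(f"Three-bedroom house: {area:.0f} sq ft")  # 468, matching the text
```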
# Maintaining On-site Wastewater Treatment Systems

Do's and don'ts inside the house:
• Do conserve water. Putting too much water into the septic system can eventually lead to system failure. Typical water use is about 60 gallons per day for each person in the family, and the standard drain field is designed for a capacity of 120 gallons per bedroom; systems near capacity may not work. Water conservation will extend the life of the system and reduce the chances of system failure.
• Do fix dripping faucets and leaking toilets.
• Do avoid long showers.
• Do use washing machines and dishwashers only for full loads.
• Do not allow the water to run continually when brushing teeth or shaving.
• Do avoid disposing of the following items down sink drains or toilets: chemicals, sanitary napkins, tissues, cigarette butts, grease, cooking oil, pesticides, kitty litter, coffee grounds, disposable diapers, stockings, or nylons.
• Do not install garbage disposals.
• Do not use septic tank additives or cleaners. They are unnecessary, and some of the chemicals can contaminate the groundwater.

Do's and don'ts for outside maintenance:
• Do maintain adequate vegetative cover over the absorption field.
• Do not allow surface waters to flow over the tank and drain field areas. (Diversion ditches or subsurface tiles may be used to direct water away from the system.)
• Do not allow heavy equipment, trucks, or automobiles to drive across any part of the system.
• Do not dig into the absorption field or build additions near the septic system or the repair area.
• Do make sure a concrete riser (or manhole) is installed over the tank if the tank is not within 6 inches of the surface, providing easy access for measuring and pumping solids. (Note: all tanks should have two manholes, one positioned over the inlet device and one over the outlet device.)

There is no need to add any commercial substance to "start" or clean a tank to keep it operating properly; such additives may actually hinder the natural bacterial action that takes place inside a septic tank. The fecal material, cereal grain, salt, baking soda, vegetable oil, detergents, and vitamin supplements that routinely make their way from the house to the tank are far superior to any additive.

# Symptoms of Septic System Problems

These symptoms can mean you have a serious septic system problem:
• Sewage backup in drains or toilets (often a black liquid with a disagreeable odor).
• Slow flushing of toilets. Many of the drains will drain much slower than usual, despite the use of plungers or drain-cleaning products. This can also result from a clogged plumbing vent or a nonvented fixture.
• Surface flow of wastewater. Sometimes liquid seeps along the surface of the ground near the septic system. It may or may not have much of an odor and will range from very clear to black in color.
• Lush green grass over the absorption field, even during dry weather. Often this indicates that an excessive amount of liquid from the system is moving up through the soil instead of down, as it should. Although some upward movement of liquid from the absorption field is good, too much could indicate major problems.
# Symptoms of Septic System Problems
These symptoms can mean you have a serious septic system problem:
• Sewage backup in drains or toilets (often a black liquid with a disagreeable odor).
• Slow flushing of toilets. Many of the drains will drain much slower than usual, despite the use of plungers or drain-cleaning products. This also can be the result of a clogged plumbing vent or a nonvented fixture.
• Surface flow of wastewater. Sometimes liquid seeps along the surface of the ground near your septic system. It may or may not have much of an odor and will range from very clear to black in color.
• Lush green grass over the absorption field, even during dry weather. Often, this indicates that an excessive amount of liquid from the system is moving up through the soil, instead of down, as it should. Although some upward movement of liquid from the absorption field is good, too much could indicate major problems.
• The presence of nitrates or bacteria in the drinking water well. This indicates that liquid from the system may be flowing into the well through the ground or over the surface. Water tests available from the local health department will indicate whether this is a problem.
• Buildup of aquatic weeds or algae in lakes or ponds adjacent to your home. This may indicate that nutrient-rich septic system waste is leaching into the surface water, which may lead to both inconvenience and possible health problems.
• Unpleasant odors around the house. Often, an improperly vented plumbing system or a failing septic system causes a buildup of disagreeable odors.

Table 10.4 is a guide to troubleshooting septic tank problems.

Table 10.4. Troubleshooting septic tank problems

Problem: Wastewater backs up into the building, or plumbing fixtures are sluggish or do not drain well.
Possible causes: Excess water entering the septic tank system, plumbing installed improperly, roots clogging the system, plumbing lines blocked, pump failure, absorption field damaged or crushed by vehicular traffic.
Remedies: Fix leaks, stop using garbage disposal, clean septic tank and inspect pumps, replace broken pipes, seal pipe connections, avoid planting willow trees close to system lines. Do not allow vehicles to drive or park over lines. Contact the local health department for guidance.

Problem: Wastewater surfaces in the yard.
Possible causes: Excess water entering the septic tank system, system blockage, improper system elevations, undersized soil treatment system, pump failure, absorption field damaged or crushed by vehicular traffic.
Remedies: Fix leaks, clean septic tank and check pumps, make sure the distribution box is free of debris and functioning properly, fence off the area until the problem is fixed, call in experts. Contact the local health department for guidance.

Problem: Sewage odors indoors.
Possible causes: Sewage surfacing in yard, improper plumbing, sewage backing up in the building, trap under sink dried out, roof vent pipe frozen shut.
Remedies: Repair plumbing, clean septic tank and check pumps, replace water in drain pipes, thaw vent pipe. Contact the local health department for guidance.

Problem: Sewage odors outdoors.
Possible causes: Source other than owner's system, sewage surfacing in yard, manhole or inspection pipes damaged or partially removed, downdraft from vent pipes on roof.
Remedies: Clean tank and check pumps, replace damaged inspection port covers, replace or repair absorption field. Contact the local health department for guidance.

Problem: Contaminated drinking water or surface water.
Possible causes: System too close to a well, water table, or fractured bedrock; cesspool or dry well being used; improper well construction; broken water supply or wastewater lines.
Remedies: Improperly located wells must be sealed in strict accordance with state and local codes. Abandon the well and locate a new one far from, and upslope of, the septic system; fix all broken lines; stop using the cesspool or dry well. Contact the local health department for guidance.

Problem: Distribution pipes and soil treatment system freeze in winter.
Possible causes: Improper construction, check valve in lift station not working, heavy equipment traffic compacting soil in the area, low flow rate, lack of use.
Remedies: Examine the check valve, keep heavy equipment such as cars off the area, increase water usage, have a friend run water while away on vacation, operate the septic tank as a holding tank, do not use antifreeze. Contact the local health department for guidance.

# Septic Tank Inspection
The first priority in the inspection process is the safety of the homeowner, neighbors, workers, and anyone else for whom the process could create a hazard.
• Do not enter septic tanks or cesspools.
• Do not work alone on these tanks.
• Do not bend or lean over septic tanks or cesspools.
• Note and take appropriate action regarding unsafe tank covers.
• Note unsanitary conditions or maintenance needs (sewage backups, odor, seepage).
• Do not bring sewage-contaminated clothing into the home.
• Have current tetanus inoculations if working in septic tank inspection.

Methane and hydrogen sulfide gases are produced in a septic tank. They are both toxic and explosive. Hydrogen sulfide gas is quite deceptive: it can have a very strong odor one moment, but after exposure, the odor may no longer be noticed.

# Inspection Process
As sewage enters a septic tank, the rate of flow is reduced and heavy solids settle, forming sludge. Grease and other light solids rise to the surface, forming a scum. The sludge and scum (Figure 10.11) are retained and break down while the clarified effluent (liquid) is discharged to the absorption field. Sludge eventually accumulates in the bottom of all septic tanks. The buildup is slower in warm climates than in colder climates. The only way to determine the sludge depth is to measure the sludge with a probe inserted through an inspection port in the tank's lid. Do not put this job off until the tank fills and the toilet overflows. If this happens, damage to the absorption field could occur and be expensive to repair.

# Scum Measurement
The floating scum thickness can be measured with a probe, as can the vertical distance from the bottom of the scum to the bottom of the inlet. If the bottom of the scum gets within 3 inches of the outlet, scum and grease can enter the absorption field. If grease gets into the absorption field, percolation is impaired and the field can fail. If the scum is near the bottom of the tee, the septic tank needs to be cleaned out. The scum thickness can best be measured through the large inspection port and is observed by breaking through the scum with a probe, usually a pole. Scum should never be closer than 3 inches to the bottom of the baffle.
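The 3-inch clearance rule above lends itself to a simple check. Below is a minimal sketch in Python; the measurement name is illustrative, and the rule is the one stated in the text (scum bottom within 3 inches of the outlet baffle means the tank needs pumping).

```python
# Check the scum clearance rule described above: if the bottom of the scum
# layer is within 3 inches of the bottom of the outlet baffle, the tank
# should be cleaned out before grease can reach the absorption field.

MIN_CLEARANCE_IN = 3  # inches, per the rule stated in the text

def needs_pumping(scum_bottom_to_outlet_in: float) -> bool:
    """scum_bottom_to_outlet_in: measured distance from the bottom of the
    scum layer down to the bottom of the outlet device, in inches."""
    return scum_bottom_to_outlet_in <= MIN_CLEARANCE_IN

print(needs_pumping(2.5))  # True -- scum is too close; clean the tank
print(needs_pumping(6.0))  # False -- adequate clearance for now
```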
# Chapter 11: Electricity

# Definitions of Terms Related to Electricity
Ampere-The unit for measuring the intensity of flow of electricity. Its symbol is "I."
Bonding-Connecting metal components to eliminate differences in electrical potential between them and to prevent components and piping systems from having an elevated voltage potential.
Circuit-The flow of electricity through two or more wires from the supply source to one or more outlets and back to the source.
Circuit breaker-A safety device used to break the flow of electricity by opening the circuit automatically in the event of overloading, or used to open or close the circuit manually.
Conductor-Any substance capable of conveying an electric current. In the home, copper wire is usually used.
• A bare conductor is one with no insulation or covering.
• A covered conductor is one covered with one or more layers of insulation.

# Introduction
Two basic codes concerned with residential wiring are important to the housing inspector. The first is the local electrical code. The purpose of this code is to safeguard persons as well as buildings and their contents from hazards arising from the use of electricity for light, heat, and power. The electrical code contains basic minimum provisions considered necessary for safety. Compliance with this code and proper maintenance will result in an installation essentially free from hazards, but not necessarily efficient, convenient, or adequate for good service or future expansion. Most local electrical codes are modeled after the National Electrical Code, published by the National Fire Protection Association (NFPA). Reference to the "code" in the remainder of this chapter will be to the National Electrical Code, unless specified otherwise [1].

An electrical installation that was safe and adequate under the provisions of the electrical code at the time of installation is not necessarily safe and adequate today. An example is the grounding of a home electrical system. In the past, electrical systems could be grounded to the home's plumbing system. Today, many plumbing systems are no longer constructed of conductive material, but are made of plastic or polyvinyl chloride-based materials. The current recommendation for grounding a home electrical system is to use two 8-foot by 5/8-inch copper ground rods, spaced 6 feet apart and connected by a continuous (unbroken) piece of copper wire (the size of this wire corresponds to the size of the system main). It is also highly recommended that the system be grounded to the incoming water line if it is conductive, or to the nearest conductive cold water supply line.

Hazards often occur because of overloaded wiring systems or usage not in conformity with the code, typically because the initial wiring did not provide for later increases in the use of electricity. For this reason, it is recommended that initial installations be adequate and that reasonable provision be made for further increases in the use of electricity.

The other code that contains electrical provisions is the local housing code. It establishes minimum standards for artificial and natural lighting and ventilation, specifies the minimum number of electric outlets and lighting fixtures per room, and prohibits temporary wiring except under certain circumstances. In addition, the housing code usually requires that all components of an electrical system be installed and maintained in a safe condition to prevent fire or electric shock.

Voltage is a measure of the force at which electricity is delivered; it is similar to pressure in a water supply system. Current is measured in amperes and is the quantity of flow of electricity; it is similar to measuring water flow in gallons per second. A watt is equal to volts times amperes and is a measure of how much power is flowing. Electricity is sold in quantities of kilowatt-hours.

The earth, by virtue of the moisture contained within the soil, serves as a very effective conductor. Therefore, in power transmission, instead of having both the hot and neutral wires carried by the transmission poles, one lead of the generator is connected to the ground, which serves as a conductor (Figure 11.1). All electrical utility wires carried by the transmission towers are considered hot or charged. At the house, or point where the electricity is to be used, the circuit is completed by another connection to ground. The electric power utility provides a ground somewhere in its local distribution system; therefore, there is a ground wire in addition to the hot wires within the service drop. In Figure 11.1 this ground can be seen at the power pole that contains the step-down transformer. In addition to the ground connection provided by the electric utility, every building is required to have an independent ground called a system ground.
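As a worked example of the volts-amperes-watts relation above, here is a minimal sketch in Python. The 115-volt figure is the residential circuit voltage used throughout this chapter; the 15-ampere rating is a common general-purpose circuit discussed later.

```python
# Power (watts) = voltage (volts) x current (amperes), as described above.
volts = 115      # typical residential branch-circuit voltage in this chapter
amperes = 15     # rating of a common general-purpose circuit
watts = volts * amperes
print(watts)     # 1725 -- watts available on a 15-ampere, 115-volt circuit

# Electricity is billed by the kilowatt-hour: energy = power x time.
hours = 2
kwh = watts * hours / 1000
print(kwh)       # 3.45 kilowatt-hours consumed by a full load running 2 hours
```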
The system ground is a connection to ground from one of the current-carrying conductors of the electrical system. System grounding, applied to limit overvoltages in the event of a fault, provides personnel safety, provides a positive means of detecting and isolating ground faults, and improves service reliability. The system ground's main purpose is therefore to protect the electrical system itself; it offers only limited protection to the user. The system ground serves the same purpose as the power company's ground; however, it has a lower resistance because it is closer to the building. The equipment ground protects people from potential harm during the use of electrical equipment and appliances.

# Electric Service Entrance

Service Drop
To prevent accidental contact by people, the entrance head (Figure 11.2) should be attached to the building at least 10 feet above ground. The conductor should clear all roofs by at least 8 feet and residential driveways by 12 feet. For public streets, alleys, roads, and driveways on other than residential property, the clearance must be 18 feet. The wires of the conductor should be of sufficient size to carry the load and not smaller than No. 8 copper wire or equivalent. For connecting wire from the entrance head to the service drop wires, the code requires that the service entrance conductors be installed either below the level of the service head or below the termination of the service entrance cable sheath. Drip loops must be formed on individual conductors to prevent water from entering the electric service system. The wires that form the entrance cable should extend 36 inches from the entrance head to provide sufficient length to connect the service drop wires to the building with insulators. The entrance cable may be a special type of armored outdoor cable, or it may be enclosed in a conduit. The electric power meter may be located either inside or outside the building. In either instance, the meter must be located before the main power disconnect. Figure 11.3 shows an armored cable service entrance with a fuse system. Newer construction will have circuit breakers, as shown in Figure 11.4. The armored cable is anchored to the building with metal straps spaced every 4 feet. The cable is run down the wall and through a hole drilled through the building. The cable is then connected to the service panel, which should be located within 1 foot of where the cable enters the building. The ground wire need not be insulated; it may be either solid or stranded copper, or a material with an equivalent resistance. Figure 11.5 shows the use of thin-wall conduit in a service entrance.

# Underground Service
When wires are run underground, they must be protected from moisture and physical damage. The opening in the building foundation where the underground service enters the building must be moisture proof. Refer to local codes for information about allowable materials for this type of service entrance.

# Electric Meter
The electric meter (Figure 11.6) may be located inside or outside the building. The meter itself is weatherproof and is plugged into a weatherproof socket. The electric power company furnishes the meter; the socket may or may not be furnished by the power company.

# Grounding
The system ground consists of grounding the neutral incoming wire and the neutral wire of the branch circuits. The equipment ground consists of grounding the metal parts of the service entrance, such as the service switch, as well as the service entrance conduit, armor, or cable.
Poor grounding at any point can result in a person providing a more effective route to ground than the intended ground, resulting in electrocution. This can occur when damaged insulation allows electricity to flow into the case or cabinet of an appliance. The system must be grounded by two 5/8-inch copper ground rods, at least 8 feet in length, driven into the ground and connected by a continuous (unbroken) piece of copper wire (the size of this wire corresponds to the size of the system main). It is highly recommended that the system also be grounded to the incoming water line, or the nearest cold water supply line, if it is metal.

The usual ground connection is to a conductive water pipe of the city water system. The connection should be made to the street side of the water meter, as shown in Figure 11.7. If the water meter is located near the street curb, then the ground connection should be made to the cold water pipe as close as possible to where it enters the building. It is not unusual for a water meter to be removed from the building for service. If the ground connection is made at a point in the water piping system on the building side of the water meter, the ground circuit will be broken on removal of the meter. This broken ground circuit is a shock hazard if both sides of the water meter connections are touched simultaneously. Local or state codes should be checked to determine compliance with correct grounding protocols. Increasingly, the connections between water meters and pipes are electrically very poor; in that case, if the ground connection is made on the building side of the water meter, there may not be an effective ground. To prevent both of these situations, the code requires effective bonding by a properly sized jumper wire around any equipment that is likely to be disconnected for repairs or replacement.

Often, the house ground will be disconnected. Therefore, the housing inspector should always check the house ground to see that it is properly connected. Figure 11.8 shows a typical grounding scheme at the service box of a residence. In this figure, only the grounded neutral wires are shown. The neutral strap is a conductive bare metal strip that is riveted directly to the service box. This conductive strip forms a collective ground that joins the ground wires from the service entrance, branch circuits, and house ground.

Follow these key grounding points:
• Use two metal rods driven 8 feet into the ground.
• Bond around water heaters and filters to assure grounding.
• If water pipes are used, they must be supplemented with a second ground.
• Ground rods must be driven to full depth.
• If ground rod resistance exceeds 25 ohms, install two rods a minimum of 6 feet apart.
• When properly grounded, the metal frame of a building can be used as a ground point.
• Do not use underground gas lines as a ground.
• Provide external grounds to other systems, such as satellite, telephone, and other services, to further protect the electrical system from surges.
• If the water service pipes to the home are not metal, or if all of the service components in the home are not metal, then the water system cannot play a role in grounding.

Bonding provides a route for electricity to flow around isolated elements of a piping system, minimizing differences in electrical potential both to protect the system from corrosion and to protect individuals from electrical shock.
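Two of the grounding points above reduce to simple numeric checks. The sketch below (Python, illustrative only; the thresholds are the ones stated in the list) shows how an inspector's checklist might encode them.

```python
# Encode two of the grounding rules listed above as simple checks.

MAX_ROD_RESISTANCE_OHMS = 25  # above this, a second rod is required
MIN_ROD_SPACING_FT = 6        # minimum spacing between two ground rods

def second_rod_required(measured_resistance_ohms: float) -> bool:
    return measured_resistance_ohms > MAX_ROD_RESISTANCE_OHMS

def rod_spacing_ok(spacing_ft: float) -> bool:
    return spacing_ft >= MIN_ROD_SPACING_FT

print(second_rod_required(40.0))  # True -- 40 ohms exceeds the 25-ohm limit
print(rod_spacing_ok(8.0))        # True -- 8 feet satisfies the 6-foot minimum
```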
# Two- or Three-wire Electric Services
One of the wires in every electrical service installation is supposed to be grounded. This neutral wire should always be white. The hot wires are usually black, red, or some other color, but never white. The potential difference, or voltage, between the hot wires and the ground or neutral wire of a normal residential electrical system is 115 volts. Thus, where there is a two-wire installation (one hot and one neutral), only 115 volts are available. When three wires are installed (two hot and one neutral), either 115 or 230 volts are available. In a three-wire system, the voltage between the neutral and either of the hot wires is 115 volts; between the two hot wires, it is 230 volts (Figure 11.9). The major advantage of a three-wire system is that it permits the operation of heavy electrical equipment, such as clothes dryers, cooking ranges, and air conditioners, most of which require 230-volt circuits. In addition, the three-wire system is split at the service panel into two 115-volt systems to supply power for small appliances and electric lights. The result is a doubling of the number of available circuits and a corresponding increase in the possible number of branch circuits, which reduces the probability of fire from overloaded circuits as electrical demands grow.

# Residential Wiring Adequacy
The use of electricity in the home has risen sharply since the 1930s, yet many homeowners have failed to repair or improve their wiring to keep it safe and up to date. In the 1970s, the code recommended that the main distribution panel in a home be a minimum of 100 amperes. Because the number of appliances that use electricity has continued to grow, so has the size of recommended panels. For a normal house (2,500 to 3,500 square feet), a 200-ampere panel is recommended. The panel must be of the breaker type, with a main breaker for the entire system (Figure 11.4). Fuse boxes are not recommended for new housing. This type of service is sufficient in a one-family house or dwelling unit to provide safe and adequate electricity for the lighting, refrigerator, iron, and an 8,000-watt cooking range, plus other appliances requiring a total of up to 10,000 watts.

Some older homes have a 60-ampere, three-wire service (Figure 11.10). It is recommended that these homes be rewired to at least the 200-ampere service recommended in the code. A 60-ampere service is safely capable of supplying current only for lighting and portable appliances, plus either a cooking range and regular dryer (4,500 watts) or an electric hot water heater (2,500 watts); it cannot handle additional major appliances. Other older homes have only a 30-ampere, 115-volt, two-wire service (Figure 11.11). This system can safely handle only a limited amount of lighting, a few minor appliances, and no major appliances. This size service is therefore substandard in terms of the modern household's needs for electricity; furthermore, it is a fire hazard and a threat to the safety of the home and the occupants.
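To see why a 200-ampere panel comfortably covers the loads just listed, here is a minimal calculation in Python. The appliance wattages are the ones cited above; the 230-volt figure is the three-wire service voltage described earlier. This is a rough illustration, not a code load calculation.

```python
# Rough service-load check using the figures cited above: an 8,000-watt
# range plus other appliances totaling 10,000 watts, on 230-volt service.

range_watts = 8_000
other_watts = 10_000
service_volts = 230

total_amperes = (range_watts + other_watts) / service_volts
print(round(total_amperes, 1))  # 78.3 amperes

# 78.3 A would overwhelm an old 60-ampere service but leaves ample headroom
# on the 200-ampere panel recommended for new construction.
```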
# Wire Sizes and Types
Aluminum wiring, used in some homes from the mid-1960s to the early 1970s, is a potential fire hazard [3]. According to the U.S. Consumer Product Safety Commission (CPSC), fires and even deaths have been caused by this hazard. Problems due to expansion can cause overheating at connections between the wire and devices (switches and outlets) or at splices. CPSC research shows that homes wired with aluminum wire manufactured before 1972 are 55 times more likely to have one or more connections reach fire hazard conditions than are homes wired with copper. Post-1972 aluminum wire is also a concern: the aluminum wire alloys introduced around 1972 did not solve most of the connection failure problems. Aluminum wiring is still permitted and used for certain applications, including residential service entrance wiring and single-purpose, higher-amperage circuits such as 240-volt air conditioning or electric range circuits.

# Reducing Risk
Only two remedies for aluminum wiring have been recommended by the CPSC: discontinued use of the aluminum circuit, or the less costly option of adding copper connecting "pigtail" wires between the aluminum wire and the wired device (receptacle, switch, or other device). The pigtail connection must be made using only a special connector and special crimping tool licensed by the AMP Corporation. Emergency temporary repairs necessary to keep an essential circuit in service might be possible following other procedures described by the CPSC, in accordance with local electrical codes [4,5].

# Wire Sizes
Electric power actually flows along the surface of the wire. It flows with relative ease (little resistance) in some materials, such as copper and aluminum, and with a substantial amount of resistance in iron. If iron wire were used, it would have to be 10 times as large as copper wire to be as effective in conducting electricity. In fine electronics, gold is the preferred conductor because of its resistance to corrosion and its very high conductivity.

Electricity is the movement of electrons from an area of higher potential to one of lower potential, analogous to water flowing along the path of least resistance or down a hill. All it takes to create the potential for electricity is a collection of electrons and a pathway for them to flow to an area of lesser concentration along a conductor. When a person walks across a nylon carpet in times of low atmospheric humidity, his or her body will often collect electrons and serve as a capacitor (a storage container for electrons). When that person nears a grounding source, the electrons will often jump from a finger to the ground, creating a spark and a small shock.

Copper wire sizes are indicated by a number preceded by the letters AWG (American Wire Gauge) [6]. As the AWG number of the wire becomes smaller, the size and current capacity of the wire increase. AWG 14 is most commonly found in older residential branch circuits. AWG 14 wire should be used only in a branch circuit with a 15-ampere capacity or no more than a 1,500-watt demand. Wire sizes AWG 16, 18, and 20 are progressively smaller than AWG 14 and are used for extension wires or low-voltage systems.

Wire of the correct size must be used for two reasons: current capacity and voltage drop or loss. When current flows through a wire, it creates heat; the greater the flow, the greater the amount of heat generated. (Doubling the amperes without changing the wire size increases the amount of heat by four times.) The heat is electric energy (electrons) that has been converted into heat energy by the resistance of the wire. The heat created by the coils in a toaster is an example of designed resistance to create heat. Most heat developed by an electrical conductor is wasted; therefore, the electric energy used to generate it is also wasted.
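The parenthetical rule above (doubling the current quadruples the heat) follows from resistive heating growing with the square of the current. The sketch below illustrates this in Python. The ampacity table is illustrative: the AWG 14 and AWG 12 values match figures cited in this chapter, but the AWG 10 value is a typical figure assumed here, and the governing limits should always be taken from the code.

```python
# Resistive heating in a wire grows with the square of the current
# (P = I^2 * R), which is why doubling the amperes quadruples the heat.

def heat_watts(amperes: float, resistance_ohms: float) -> float:
    return amperes ** 2 * resistance_ohms

R = 0.5  # ohms -- an arbitrary illustrative wire resistance
print(heat_watts(10, R))   # 50.0 watts
print(heat_watts(20, R))   # 200.0 watts -- double the current, four times the heat

# Typical ampacities for common copper branch-circuit sizes (illustrative;
# consult the code for the governing values).
TYPICAL_AMPACITY = {14: 15, 12: 20, 10: 30}

def overfused(awg: int, protection_amperes: int) -> bool:
    """True if the fuse/breaker rating exceeds what the wire can safely carry."""
    return protection_amperes > TYPICAL_AMPACITY[awg]

print(overfused(14, 20))  # True -- a 20-A fuse on AWG 14 wire is a fire hazard
```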
If the amount of heat generated by the flow of current through a wire becomes excessive, a fire may result. Therefore, the code sets the maximum permissible current that may flow through a wire of a given type and size; the sidebar "Maximum Current Recommended for AWG Wire Size" (values not reproduced here) gives examples of current capacities for copper wire of various sizes. In addition to generating heat, attempting to force more current through a wire than it is designed to carry reduces the voltage delivered. Certain appliances, such as induction-type electric motors, may be damaged if operated at too low a voltage.

# Wire Types
All wires must be marked to indicate the maximum working voltage, the proper type letter or letters for the type of wire specified in the code, the manufacturer's name or trademark, and the AWG size or circular-mil area (Figure 11.12). A variety of wire types can be used for a wide range of temperature and moisture conditions. The code should be consulted to determine the proper wire for specific conditions.

# Types of Cable
Nonmetallic sheathed cable consists of wires wrapped in plastic and then a paper layer, followed by another spiral layer of paper, and enclosed in a fabric braid treated with moisture- and fire-resistant compounds. Figure 11.12 shows this type of cable, which is often marketed under the name Romex. This type of cable can be used only indoors and in permanently dry locations. Romex-type wiring is normally used in residential construction; however, when cost permits, a conduit-based system is recommended.

Armored cable is commonly known by the BX or Flexsteel trade names. Its wires are wrapped in a tough paper and covered with a strong, spiral, flexible steel armor. This type of cable is shown in Figure 11.13 and may be used only in permanently dry indoor locations. Armored cable must be supported by a strap or staple every 6 feet and within 24 inches of every switch or junction box, except for concealed runs in old work where it is impossible to mount straps. Cables are also available with other outer coatings of metals, such as copper, bronze, and aluminum, for use in a variety of conditions.

# Flexible Cords
CPSC estimates that about 4,000 injuries associated with electric extension cords are treated in hospital emergency rooms each year. About half of the injuries involve fractures, lacerations, contusions, or sprains from people tripping over extension cords. Thirteen percent of the injuries involve children younger than 5 years of age; electrical burns to the mouth account for half the injuries to young children [7]. CPSC also estimates that about 3,300 residential fires originate in extension cords each year, killing 50 people and injuring about 270 others [7]. The most frequent causes of such fires are short circuits, overloading, and damage to or misuse of extension cords.

# The Problem
Following are CPSC investigations of injuries that illustrate the major injury patterns associated with extension cords: children putting extension cords in their mouths, overloaded cords, worn or damaged cords, and tripping over cords.
• A 15-month-old girl put an extension cord in her mouth and suffered an electrical burn. She required surgery.
• Two young children were injured in a fire caused by an overloaded extension cord in their family's home. A lamp, TV set, and electric heater had been plugged into a single, light-duty extension cord.
• A 65-year-old woman was treated for a fractured ankle after tripping over an extension cord.

# The Standards
The code says that many cord-connected appliances should be equipped with polarized grounding plugs. Polarized plugs can be inserted into an outlet only one way because one blade is slightly wider than the other. Polarization and grounding ensure that certain parts of appliances that could present a higher risk of electric shock if they became live are instead connected to the neutral, or grounded, side of the circuit. Such electrical products should be used only with polarized or grounded extension cords. Voluntary industry safety standards, including those of Underwriters Laboratories (UL), now require that general-use extension cords have safety closures, warning labels, rating information about the electrical current, and other features for the protection of children and other consumers. In addition, UL-listed extension cords now must be constructed with 16-gauge or larger wire or be equipped with integral fuses. The 16-gauge wire is rated to carry 13 amperes (up to 1,560 watts), as compared with the formerly used 18-gauge cords, which were rated for 10 amperes (up to 1,200 watts).

# Safety Suggestions
The following are CPSC recommendations [7] for purchasing and safely using extension cords:
• Use extension cords only when necessary and only on a temporary basis.
• Use polarized extension cords with polarized appliances.
• Make sure cords do not dangle from counters or tabletops, where they can be pulled down or tripped over.
• Replace cracked or worn extension cords with new 16-gauge cords that have the listing of a nationally recognized testing laboratory, safety closures, and other safety features.
• With cords lacking safety closures, cover any unused outlets with electrical tape or with plastic caps to prevent the chance of a child making contact with the live circuit.
• Insert plugs fully so that no part of the prongs is exposed when an extension cord is in use.
• When disconnecting cords, pull the plug rather than the cord itself.
• Teach children not to play with plugs and outlets.
• Use only three-wire extension cords for appliances with three-prong plugs. Never remove the third (round or U-shaped) prong, which is a safety feature designed to reduce the risk for shock and electrocution.
• In locations where furniture or beds may be pushed against an extension cord where the cord joins the plug, use a special angle extension cord specifically designed for such situations.
• Check the plug and the body of the extension cord while the cord is in use. Noticeable warming of these plastic parts is expected when cords are being used at their maximum rating. If the cord feels hot or if the plastic is softening, this is a warning that the plug wires or connections are failing and that the extension cord should be discarded and replaced.
• Never use an extension cord while it is coiled or looped. Never cover any part of an extension cord with newspapers, clothing, rugs, or any objects while the cord is in use. Never place an extension cord where it is likely to be damaged by heavy furniture or foot traffic.
• Do not use staples or nails to attach extension cords to a baseboard or another surface. This could damage the cord and present a shock or fire hazard.
• Do not overload extension cords by plugging in appliances that draw a total of more watts than the rating of the cord (see the sketch after this list).
• Use special heavy-duty extension cords for high-wattage appliances such as air conditioners, portable electric heaters, and freezers.
• When using outdoor tools and appliances, use only extension cords labeled for outdoor use.
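A minimal sketch of the overload check referenced in the list above, in Python. The 1,560-watt rating is the 16-gauge figure cited under The Standards; the appliance wattages are illustrative assumptions.

```python
# Check whether a set of appliances would overload a 16-gauge extension
# cord, which UL rates at 13 amperes (about 1,560 watts, per the text).

CORD_RATING_WATTS = 1_560

appliances = {"lamp": 100, "tv": 150, "space_heater": 1_500}  # illustrative wattages

total = sum(appliances.values())
print(total)                      # 1750
print(total > CORD_RATING_WATTS)  # True -- this combination overloads the cord
```

Note that this is essentially the combination (lamp, TV set, and electric heater on a single light-duty cord) that caused the fire in the CPSC incident described earlier.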
# Wiring

# Open Wiring
Open wiring is a wiring method using knobs, nonmetallic tubes, cleats, and flexible tubing for the protection and support of insulated conductors run in or on buildings and not concealed by the structure. The term "open wiring" does not mean exposed, bare wiring. In dry locations, when not exposed to severe physical damage, conductors may be separately encased in flexible tubing. Tubing should be in continuous lengths not exceeding 15 feet and secured to the surface by straps not more than 4½ feet apart. The tubing should be separated from other conductors by at least 2½ inches, with a permanently maintained airspace between the conductors and any pipes they cross.

# Concealed Knob and Tube Wiring
Concealed knob and tube wiring is a wiring method using knobs, tubes, and flexible nonmetallic tubing for the protection and support of insulated wires concealed in the hollow spaces of walls and ceilings of buildings. This wiring method is similar to open wiring and, like open wiring, is usually found only in older buildings.

# Electric Service Panel
The service switch is a main switch that disconnects the entire electrical system at one time. The main fuses or circuit breakers are usually located within the service switch box; the branch circuit fuses or circuit breakers may also be located within this box. According to the code, the switch must be externally operable. This condition is fulfilled if the switch can be operated without exposing the operator to electrically active parts. Figure 11.14 shows a 200-amp service box. Figure 11.15 shows an external "hinged switch" power shutoff installed on the outside of a home. Most older homes do not have hinged switches. Instead, the main fuse is mounted on a small insulated block that can be pulled out of the switch; when this block is removed, the circuit is broken. In some installations, the service switch is a "solid neutral" switch, meaning that neither the switch nor a fuse breaks the neutral wire in the switch.

When circuit breakers are used in homes instead of fuses, main circuit breakers may or may not be required. If it takes more than six movements of the hand to open all the branch-circuit breakers, a main breaker, switch, or fuse is required ahead of the branch-circuit breakers. Thus, a house with seven or more branch circuits requires a separate disconnect means or a main circuit breaker ahead of the branch-circuit breakers.

# Over-current Devices
The amperage (current flow) in any wire is limited to the maximum permitted by an over-current device of the size specified by the code. Four types of over-current devices are common: circuit breakers, ground fault circuit interrupters (GFCIs), arc fault circuit interrupters (AFCIs), and fuses. The over-current device must be rated at equal or lower capacity than the wire of the circuit it protects.

Circuit Breakers (Fuseless Service Panels)
A circuit breaker looks something like an ordinary electric light switch. Figure 11.14 shows the service box in a 200-amp fuseless system. Figure 11.15 shows a service switch. There is a handle that may be used to turn power on or off.
Inside is a simple mechanism that, in case of a circuit overload, trips the switch and breaks the circuit. The circuit breaker may be reset by turning the switch to off and then back to the on position. A circuit breaker can take harmless short-period overloads (such as the heavy initial current required to start a washing machine or air conditioner) without tripping, but it protects against prolonged overloads. After the cause of trouble has been located and corrected, power is easily restored.

Fuseless service panels, or breaker boxes, are usually divided into the following circuits:
• A 100-ampere or larger main circuit breaker that shuts off all power.
• A 40-ampere circuit for an appliance such as an electric cooking range.
• A 30-ampere circuit for a clothes dryer, water heater, heat pump, or central air conditioning.
• A 20-ampere circuit for small appliances and power tools.
• A 15-ampere circuit for general-purpose lighting, TVs, VCRs, computers, and vacuum cleaners.
• Space for new circuits to be added if needed for future use.

# Ground Fault Circuit Interrupters
Unlike circuit breakers and fuses, GFCIs are installed to protect the user from electrocution. These devices provide protection against electrical shock and electrocution from ground faults or from contact with live parts by a grounded individual. They constantly monitor the electrical current flowing into a product. If the electricity flowing through the product differs even slightly from that returning, the GFCI will quickly shut off the current. GFCIs detect amounts of electricity much smaller than those required for a fuse or circuit breaker to activate and shut off the circuit.

UL lists three types of GFCIs designed for home use that are readily available, fairly inexpensive, and simple to install:
• Wall receptacle GFCI-This type of GFCI (Figure 11.16) is used in place of a standard receptacle found throughout the house. It fits into a standard outlet box and protects against ground faults whenever an electrical product is plugged into the outlet. If strategically located, it will also provide protection to downstream receptacles.
• Circuit breaker GFCI-In homes equipped with circuit breakers, this type of GFCI may be installed in a panel box to protect selected circuits. A circuit breaker GFCI serves a dual purpose: it shuts off electricity in the event of a ground fault, and it will also trip when a short circuit or an overload occurs.
• Portable GFCI-A portable GFCI requires no special knowledge or equipment to install. One type contains the GFCI circuitry in a self-contained enclosure with plug blades in the back and receptacle slots in the front. It can be plugged into a receptacle, and the electrical product plugged into the GFCI. Another type of portable GFCI is an extension cord combined with a GFCI; it adds flexibility in using receptacles that are not protected by GFCIs.

Once a GFCI is installed, it must be checked monthly to determine that it is operating properly. A unit can be checked by pressing the test button; the GFCI should disconnect the power to that outlet. Pressing the reset button reconnects the power. If the GFCI does not disconnect the power, have it checked by a qualified, certified electrician. GFCIs should be installed on circuits in the following areas: garages, bathrooms, kitchens, crawl spaces, unfinished basements, hot tubs and spas, pool electronics, and exterior outlets. However, they are not required on single outlets that serve major appliances.
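The monitoring principle described above can be expressed as a small sketch. This is a minimal illustration in Python, not a description of any actual device's circuitry; the 5-milliampere trip threshold is the nominal figure for UL Class A household GFCIs and is an assumption here, since the text does not give a number.

```python
# A GFCI compares the current flowing out on the hot conductor with the
# current returning on the neutral. Any imbalance means current is leaking
# to ground -- possibly through a person -- and the device opens the circuit.

TRIP_THRESHOLD_A = 0.005  # ~5 milliamperes, nominal UL Class A figure (assumed)

def gfci_trips(hot_amperes: float, neutral_amperes: float) -> bool:
    leakage = abs(hot_amperes - neutral_amperes)
    return leakage >= TRIP_THRESHOLD_A

print(gfci_trips(10.000, 10.000))  # False -- all current returns; no fault
print(gfci_trips(10.000, 9.990))   # True -- about 10 mA leaking to ground; trip
```

Note how much smaller this threshold is than the 15- or 20-ampere rating of the breaker on the same circuit, which is why a GFCI can protect a person when an ordinary over-current device cannot.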
# Arc-fault Circuit Interrupters
Arc-fault circuit interrupters (AFCIs) are new devices intended to provide fire protection by opening the circuit if an arcing fault is detected. An arcing fault is an electric spark or hot plasma field that extends from the hot wire to a ground. An arc is a luminous discharge of electricity across an insulating medium, or simply a spark across an air gap. Arcs occur every day in homes: an arc occurs inside the switch when a light is turned on, toy racecars and trains create arcs, and the motors inside hair dryers and power drills have tiny arcs. All of these are controlled arcs. It is the uncontrolled, nondesigned arc that is a serious fire hazard in the home. The arc-fault circuit interrupter looks like the GFCI unit (Figure 11.17), but it is not designed to protect against electric shock.

Because most electrical wiring in a home is hidden from view, many arc faults go undetected and continue arcing indefinitely. If left in this arcing state, the potential for fire increases. It is estimated that about one third of fires are caused by arcing faults. Normal fuses and circuit breakers are not capable of detecting arc faults and therefore will not open the circuit and stop the flow of electricity.

# Fused Ampere Service Panel (Fuse Box)
Fuse-type panel boxes are generally found in older homes. They are as safe and adequate as a circuit breaker of equivalent capacity, provided fuses of the proper size are used. A fuse, like a circuit breaker, is designed to protect a circuit against overloading and short circuits, and it does so in two ways. When a fuse is blown by a short circuit, the metal strip is instantly heated to an extremely high temperature and vaporizes. A fuse blown by a short circuit may be easily recognized because the window of the fuse usually becomes discolored. In a fuse blown by an overload, the metal strip melts at its weakest point, breaking the flow of current to the load.
In this case, the window of the fuse remains clear; therefore, a blown fuse caused by an overload may also be easily recognized. Sometimes, although a fuse has not blown, the bottom of the fuse may be severely discolored and pitted. This indicates a loose connection because the fuse was not screwed in properly. It is critical to check that all fuses are properly rated for the designed amperage. Installing a fuse with a higher amperage than recommended presents a significant fire hazard.

Generally, all fused panel boxes are wired similarly for two- and three-wire systems. In a two-wire-circuit panel box, the black or red hot wire is connected to a terminal of the main disconnect, and the white or light gray neutral wire is connected to the neutral strip, which is then grounded to the pipe on the street side of the water meter. In a three-wire system, the black and red hot wires are connected to separate terminals of the main disconnect, and the neutral wire is grounded the same as for a two-wire system. Below each fuse is a terminal to which a black or red wire is connected. The white or light gray neutral wires are then connected to the neutral strip. Each fuse indicates a separate circuit (Figure 11.18).

Several types of fuses are in common use:
• Nontamperable fuses-All ordinary plug fuses have the same diameter and physical appearance, regardless of their current capacity, whereas nontamperable fuses are sized by amperage load. With regular fuses, if a circuit designed for a 15-ampere fuse is overloaded so that the 15-ampere fuse blows, nothing prevents a person from replacing it with a 20- or 30-ampere fuse, which may not blow. If a circuit wired with 14-gauge wire (current capacity 15 amperes) is fused with a 20- or 30-ampere fuse and an overload develops, more current than the 14-gauge wire can safely carry could pass through the circuit. The result would be heating of the wire and a potential fire.
• Type-S fuses-Type-S fuses have different lengths and diameter threads for each amperage capacity. An adapter is first inserted into the ordinary fuse holder, which adapts the fuse holder for only one capacity of fuse. Once the adapter is inserted, it cannot be removed.
• Cartridge fuses-A cartridge fuse protects an electric circuit in the same manner as the ordinary plug fuse already described. Cartridge fuses are often used as main fuses.

# Electric Circuits
An electric circuit in good repair carries electricity through two or three wires from the source of supply to an outlet and back to the source. A branch circuit is an electric circuit that supplies current to a limited number of outlets and fixtures. A residence generally has many branch circuits, each protected against short circuits and overloads by a 15- or 20-ampere fuse or circuit breaker. The number of outlets per branch circuit varies from building to building. The code requires enough light circuits that 3 watts of power will be available for each square foot of floor area in a house. A circuit wired with 14-gauge wire and protected by a 15-ampere over-current protection device provides 15 × 115 = 1,725 watts; each such circuit is enough for 1,725 ÷ 3 = 575 square feet. Note that 575 square feet is a minimum figure; if future use is considered, 500 or even 400 square feet per branch circuit should be used.

These branch circuits provide electric power for lighting, radio, TV, and small portable appliances. However, the larger electric appliances usually found in the kitchen consume more power and must have their own special circuits. Section 220-3b of the code requires two special circuits to serve only appliance outlets in the kitchen, laundry, pantry, family room, dining room, and breakfast room. Both circuits must be extended to the kitchen; either one or both of these circuits may serve the other rooms. No lighting outlets may be connected to these circuits, and they must be wired with 12-gauge wire and protected by a 20-ampere over-current device. Each circuit will have a capacity of 20 × 115 = 2,300 watts, which is not excessive when toasters alone often require more than 1,600 watts. It is customary to provide a separate circuit for each of the following appliances: range, water heater, washing machine, clothes dryer, garbage disposal, dishwasher, furnace, water pump, air conditioner, heat pump, and air compressor. These circuits may be either 115 volts or 230 volts, depending on the particular appliance or motor installed.
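A worked version of the branch-circuit arithmetic above, as a minimal Python sketch. The 3-watts-per-square-foot and 115-volt figures are the ones given in the text.

```python
# Branch-circuit capacity and the floor area it can serve, per the code
# figures cited above: 3 watts per square foot of floor area.

VOLTS = 115
WATTS_PER_SQFT = 3

def circuit_watts(amperes: int) -> int:
    return amperes * VOLTS

def area_served(amperes: int) -> float:
    return circuit_watts(amperes) / WATTS_PER_SQFT

print(circuit_watts(15), area_served(15))  # 1725 watts, 575.0 square feet
print(circuit_watts(20))                   # 2300 watts -- kitchen appliance circuit
```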
# Outlet Switches and Junction Boxes
The code requires that every switch, outlet, and joint in wire or cable be housed in a box, and every fixture must be mounted on a box. Most boxes are made of plastic or of metal with a galvanized coating. When cable of any style is used for wiring, the code requires that it be securely anchored with a connector to each box it enters.

# Grounding Outlets
An electrical appliance may appear to be in good repair and yet be a danger to the user. Older portable electric drills consist of an electric motor inside a metal casing. When the switch is depressed, the current flows to the motor and the drill rotates. As a result of wear, however, the insulation on the wire inside the drill may deteriorate and allow the hot side of the power cord to come in contact with the metal casing. This will not affect the operation of the drill. A person fully clothed using the drill in a living room with a dry floor will not receive a shock, even though he or she is in contact with the electrified drill case, because the dry floor keeps the operator's body from being grounded. If standing on a wet basement floor, however, the operator's body might be grounded, and when the electrified drill case is touched, current will pass through the operator's body.

To protect people from electrocution, the drill case is usually connected to the system ground by means of a wire called an appliance ground. In this instance, as soon as the drill is plugged in, current will flow between the shorted hot wire and the drill case and cause the over-current device to break the circuit. Thus, the appliance ground protects the human operator. Newer appliances and tools are equipped with two-prong polarized plugs, as discussed in the standards section of this chapter.

The appliance ground (Figure 11.19) is the third wire found on many appliances. The appliance ground is of no use unless the outlet into which the appliance is plugged is grounded. The outlet is grounded by being in physical contact with a grounded outlet box, and the outlet box is grounded by having a third ground wire, or a grounded conduit, as part of the circuit wiring. All new buildings are required to have grounded outlets.

A two-lead circuit tester can be used to test the outlet. The circuit tester lights when both of its leads are plugged into the two elongated parallel openings of the outlet. In addition, the tester lights when one lead is plugged into the round third opening and the other is plugged into the hot side of the outlet. Most problems can be identified using inexpensive testers resembling a plug with three leads, which can be purchased in most hardware stores for very reasonable prices. If a conventional two-opening outlet is used, it may be grounded if the screw that holds the outlet cover plate is electrically connected to the third-wire ground. The tester should light when one lead is in contact with a clean, paint-free metal outlet cover plate screw and the other with the hot side of the outlet. If the tester does not light, the outlet is not grounded. If a two-opening outlet is grounded, it may be adapted for use by a three-wire appliance by using an adapter; the loose-wire portion or screw tab of the adapter should be secured behind the metal screw of the outlet plate cover. Many appliances, such as electric shavers and some new hand tools, are double insulated and are safe without a third ground wire.

# Polarized Plugs and Connectors
Plugs are polarized or unpolarized. Polarization helps reduce the potential for shock. Consumers can easily identify polarized plugs: one blade is wider than the other. Three-conductor plugs are automatically polarized because they can be inserted only one way.
Polarized plugs are used to connect the most-exposed part of an appliance to the grounded (neutral) side of the circuit so that if you are touching a ground (such as a pipe, bathtub, or faucet) and an exposed part of the appliance (the case, the threaded part of a light bulb socket, etc.), you will not get an electrical shock. Many appliances, such as electric drills, are double insulated, so the probability of any exposed part of the appliance being connected, by a short or other problem in the appliance, to either wire is very small. Such devices often use unpolarized plugs, on which the two prongs are identical.

# Common Electrical Violations
The most obvious things that a housing inspector must check are the power supply; the type, location, and condition of the wiring; and the number and condition of the wall outlets or ceiling fixtures required by the local code. In making an investigation, the following considerations will serve as useful guides.
• Power supply-Where is it, is it grounded properly, and is it at least of the minimum capacity required to supply current safely for lighting and the major and minor appliances in the dwelling?
• Panel box covers or doors-These should be accessible only from the front and should be sealed in such a way that they can be operated safely without the danger of contact with live or exposed parts of the wiring system.
• Switch, outlet, and junction boxes-These also must be covered to protect against the danger of electric shock.
• Frayed or bare wires-These are usually the result of long use and the drying out and cracking of the insulation, which leave the wires exposed, or of constant friction and rough handling of the wire, which cause it to fray or become bare. Wiring in this condition constitutes a safety hazard, and correction of such defects should be ordered immediately.
• Electric cords under rugs or other floor coverings-Putting electric cords in locations such as these is prohibited because of the potential fire hazard caused by continuing contact between these heat-bearing cords and the flammable floor coverings. Direct the occupant to shift the cords to a safe location, explain why, and make sure it is done before you leave.
• Ground fault circuit interrupters-All bathroom, kitchen, and workroom outlets-where the shock hazard is great-should be GFCI outlets. Check for the lack of, or nonuse of, GFCI outlets.
• Bathroom lighting-Bathrooms should include at least one permanently installed ceiling or wall light fixture with a wall switch and plate located and maintained so that there is no danger of short circuiting from use of other bathroom facilities or from splashing water. Fixture or cover plates should be insulated or grounded.
• Lighting of public hallways, stairways, landings, and foyers-A common standard is lighting sufficient to provide 10 foot-candles of illumination on every part of these areas at all times; in practical terms, people should be able to see their feet clearly on all parts of the stairways and halls. Public halls and stairways in structures containing fewer than three dwelling units may instead be supplied with conveniently located light switches controlling an adequate lighting system that may be turned on when needed, rather than full-time lighting.
• Habitable room lighting-The standard here may be two floor convenience outlets (although floor outlets are dangerous unless protected by proper dust and water covers) or one convenience outlet and one wall or ceiling electric light fixture.
This number is an absolute and often inadequate minimum, given the contemporary widespread use of electricity in the home. The minimum should be the number required to provide adequate lighting and power for the lighting and appliances normally used in each room.
• Octopus outlets or wiring-This term is applied to outlets into which plugs have been inserted to connect more than two lights or portable appliances, such as a TV, lamp, or radio, to the electrical system. The condition occurs where the number of outlets is insufficient to accommodate the normal use of the room. This practice overloads the circuit and is a potential source of fire.
• Outlet covers-Every outlet and receptacle must be covered by a protective plate to prevent contact of its wiring or terminals with the body, combustible objects, or water.

Following are six situations that can cause danger and should also be corrected.

# Excessive or Faulty Fusing
The capacity of the fuse or circuit breaker must not exceed the capacity of the wire it protects, and the circuit must not be left unprotected by faulty fuses or circuit breakers. Fuses and circuit breakers are safety devices designed to "blow" to protect against overloading the electrical system or one or more of its circuits. A penny placed under a fuse bypasses the fuse's protection; this practice is illegal, and such pennies must be removed. Overfusing is done for the same reason and can be prevented by installing modern fusestats, which prevent the use of any fuse of a higher amperage than can be handled by the circuit it serves.

# Cords Run Through Walls or Doorways and Hanging Cords or Wires
This makeshift installation often is the work of an unqualified handyman or a do-it-yourself occupant. The inspector should check the local electrical code to determine the policy regarding this type of installation.

# Temporary Wiring
Temporary wiring should not be allowed, with the exception of extension cords that go directly from portable lights and electric fixtures to convenience outlets.

# Excessively Long Extension Cords
City code standards often limit the length of loose cords or extension lines to a maximum of 8 feet. This is necessary because cords that are too long will overheat if overloaded or if a short circuit develops and, thus, create a fire hazard. This requirement does not apply to specially designed extension cords for operating portable tools and trouble lights.

# Dead or Dummy Outlets
These are sometimes installed to deceive the housing inspector. All outlets must be tested, or the occupants questioned, to determine whether the outlets are live and functioning properly. A dead outlet cannot be counted toward compliance with the code.

# Aluminum Wiring Inside the Home
Although aluminum is an excellent conductor, it tends to oxidize on the conducting surface. The nonconductive oxidized face of the conductor will arc from the remaining conductive surfaces, and this arcing can result in fire.

# Inspection Steps
The basic tools required by a housing inspector for making an electrical inspection are fuse and circuit testers and a flashlight. The first thing to remember is that you are in an unfamiliar house whose layout you do not know. The second thing to remember is that you are dealing with electricity-take no chances. Go to the site of the ground, usually at the water meter, and check the ground. It should connect to the water line on the street side of the water meter or be equipped with a jumper wire. Do not touch any box or wire until you are sure of the ground.
To sum up, the housing inspector investigates specified electrical elements in a house to detect any obvious evidence of an insufficient power supply, to ensure the availability of adequate and safe lighting and electrical facilities, and to discover and correct any obvious hazard. Because electricity is a technical, complicated field, the housing inspector, when in doubt, should consult his or her supervisor. The inspector cannot, however, close the case until appropriate corrective action has been taken on all such referrals.

# Introduction
"In many temperate countries, death rates during the winter season are 10%-25% higher than those in the summer." World Health Organization, Health Evidence Network, November 1, 2004.

The quote above is a profound lesson in the need for housing to provide protection from both heat and cold. This chapter provides a general overview of the heating and cooling of today's homes. Heating and cooling are not merely a matter of comfort, but of survival: both very cold and very hot temperatures can threaten health. Excessive exposure to heat is referred to as heat stress, and excessive exposure to cold is referred to as cold stress.

In a very hot environment, the most serious health risk is heat stroke. Heat stroke requires immediate medical attention, can be fatal or leave permanent damage, and claims lives every summer. Heat exhaustion and fainting are less serious types of illness; typically they are not fatal, but they do interfere with a person's ability to work. At very cold temperatures, the most serious concern is the risk for hypothermia, or dangerous overcooling of the body. Another serious effect of cold exposure is frostbite, or freezing of exposed extremities such as fingers, toes, nose, and ear lobes. Hypothermia can be fatal if immediate medical attention is not received.

Heat and cold are dangerous because the victims of heat stroke and hypothermia often do not notice the symptoms. This means that family, neighbors, and friends are essential for early recognition of the onset of these conditions; the affected individual's survival depends on others to identify symptoms and to seek medical help.

Heat stroke signs and symptoms include sudden and severe fatigue, nausea, dizziness, rapid pulse, lightheadedness, confusion, unconsciousness, extremely high temperature, and a hot and dry skin surface. An individual who appears disoriented or confused, seems euphoric or unaccountably irritable, or suffers from malaise or flulike symptoms should be moved to a cool location, and medical advice should be sought immediately.
Family, neighbors, and friends must be particularly diligent during heat or cold waves to check on individuals who live alone. Although symptoms vary from person to person, the warning signs of heat exhaustion include confusion and profuse and prolonged sweating. The person should be removed from the heat, cooled, and heavily hydrated. Carbon monoxide (CO) detector-A device used to detect CO (specific gravity of 0.97 vs. 1.00 for oxygen, a colorless odorless gas resulting from combustion of fuel). CO detectors should be placed on each floor of the structure at eye level and should have an audible alarm and, when possible, a digital readout at eye level. Chimney-A vertical shaft containing one or more passageways. Factory-built chimney-A tested and accredited flue for venting gas appliances, incinerators and solid or liquid fuel-burning appliances. Masonry chimney-A field-constructed chimney built of masonry and lined with terra cotta flue or firebrick. Metal chimney-A field-constructed chimney of metal. Chimney connector-A pipe or breeching that connects the heating appliance to the chimney. Clearance-The distance separating the appliance, chimney connector, plenum, and flue from the nearest surface of combustible material. Central cooling system-An electric or gas-powered system containing an outside compressor, cooling coils, and a ducting system inside the structure designed to supply cool air uniformly throughout the structure. Central heating system-A flue connected boiler or furnace installed as an integral part of the structure and designed to supply heat adequately for the structure. # Collectors- The key component of active solar systems and are designed to meet the specific temperature requirements and climate conditions for different end uses. Several types of solar collectors exist: flat-plate collectors, evacuated-tube collectors, concentrating collectors, and transpired air collectors. # Controls: • High-low limit control-An automatic control that responds to liquid level changes and will shut down if they are exceeded. • Primary safety control-The automatic safety control intended to prevent abnormal discharge of fuel at the burner in case of ignition failure or flame failure. Combustion safety control-A primary safety control that responds to flame properties, sensing the presence of flame and causing fuel to be shut off in event of flame failure. Convector-A radiator that supplies a maximum amount of heat by convection, using many closely spaced metal fins fitted onto pipes that carry hot water or steam and thereby heat the circulating air. Conversion-A boiler or furnace, flue connected, originally designed for solid fuel but converted for liquid or gas fuel. Damper-A valve for regulating draft on coal-fired equipment. Generally located on the exhaust side of the combustion chamber, usually in the chimney connector. Dampers are not allowed on oil-and gas-fired equipment. Draft hood-A device placed in and made a part of the vent connector (chimney connector or smoke pipe) from an appliance, or in the appliance itself. The hood is designed to a) ensure the ready escape of the products of combustion in the event of no draft, back draft, or stoppage beyond the draft hood; b) prevent backdraft from entering the appliance; and c) neutralize the effect of stack action of the chimney flue upon appliance operation. 
# Definitions of Terms Related to HVAC Systems Draft regulator-A device that functions to maintain a desired draft in oil-fired appliances by automatically reducing the chimney draft to the desired value. Sometimes this device is referred to in the field as air-balance, air-stat, or flue velocity control or barometer damper. Fuel oil-A liquid mixture or compound derived from petroleum that does not emit flammable vapor below a temperature of 125°F (52°C). Heat-The warming of a building, apartment, or room by a furnace or electrical stove. Heating plant-The furnace, boiler, or the other heating devices used to generate steam, hot water, or hot air, which then is circulated through a distribution system. It typically uses coal, gas, oil or wood as its source of heat. Limit control-A thermostatic device installed in the duct system to shut off the supply of heat at a predetermined temperature of the circulated air. Oil burner-A device for burning oil in heating appliances such as boilers, furnaces, water heaters, and ranges. A burner of this type may be a pressure-atomizing gun type, a horizontal or vertical rotary type, or a mechanical or natural draft-vaporizing type. Oil stove-A flue-connected, self-contained, self-supporting oil-burning range or room heater equipped with an integral tank not exceeding 10 gallons; it may be designed to be connected to a separate oil tank. Plenum chamber-An air compartment to which one or more distributing air ducts are connected. Pump, automatic oil-A device that automatically pumps oil from the supply tank and delivers it in specific quantities to an oil-burning appliance. The pump or device is designed to stop pumping automatically if the oil supply line breaks. Radiant heat-A method of heating a building by means of electric coils, hot water, or steam pipes installed in the floors, walls, or ceilings. Register-A grille-covered opening in a floor, ceiling, or wall through which hot or cold air can be introduced into a room. It may or may not be arranged to permit closing the grille. Room heater-A self-contained, freestanding heating appliance (space heater) intended for installation in the space being heated and not intended for duct connection. Smoke detector-A device installed in several rooms of the structure to warn of the presence of smoke. It should provide an audible alarm. It can be battery powered or electric, or both. If the unit is battery powered, the batteries should be tested or checked on a routine basis and changed once per year. If the unit is equipped with a 10-year battery, it is not necessary to replace the battery every year. Tank-A separate tank connected, directly or by pump, to an oil-burning appliance. Thimble-A metal or terra cotta lining for a chimney or furnace pipe. # Valve (main shut-off valve)- A manually operated valve in an oil line used to turn the oil supply to the burner on or off. Vent system-The gas vent or chimney and vent connector, if used, assembled to form a continuous, unobstructed passageway from the gas appliance to the outside atmosphere to remove vent gases. # Definitions of Terms Related to HVAC Systems Warning signs of hypothermia include nausea, fatigue, dizziness, irritability, or euphoria. Individuals also experience pain in their extremities (e.g., hands, feet, ears) and severe shivering. People who exhibit these symptoms, particularly the elderly and young, should be moved to a heated shelter and medical advice should be sought when appropriate. 
The function of a heating, ventilation, and air conditioning (HVAC) system is to provide for more than human health and comfort. The HVAC system produces heat, cool air, and ventilation, and helps control dust and moisture, which can lead to adverse health effects. The variables to be controlled are temperature, air quality, air motion, and relative humidity. Temperature should be maintained as uniformly as possible throughout the heated or cooled area; in practice, there is typically a 6°F to 10°F (3°C to 6°C) variation in room temperature from floor to ceiling. The adequacy of the HVAC system and the air-tightness of the structure or room determine the degree of personal safety and comfort within the dwelling.

Gas, electricity, oil, coal, wood, and solar energy are the main energy sources for home heating and cooling. Heating systems commonly used are steam, hot water, and hot air. A housing inspector should have knowledge of the various heating fuels and systems to be able to determine their adequacy and safety in operation. To cover fully all aspects of the heating and cooling system, the entire area and the physical components of the system must be considered.

# Heating

Fifty-one percent of the homes in the United States are heated with natural gas, 30% are heated with electricity, and 9% with fuel oil. The remaining 10% are heated with bottled fuel, wood, coal, geothermal, wind, or solar energy [1]. Any home using combustion as a source of heating, cooling, or cooking, or that has an attached garage, should have appropriately located and maintained carbon monoxide (CO) gas detectors. According to the U.S. Consumer Product Safety Commission (CPSC), on the basis of data collected in 2000, CO kills 200 people and sends more than 10,000 to the hospital each year. The standard fuels for heating are discussed below.

# Standard Fuels

# Gas

More than 50% of American homes use gas fuel. Gas fuels are colorless gases. Some have a characteristic pungent odor; others are odorless and cannot be detected by smell. Although gas fuels are easily handled in heating equipment, their presence in air in appreciable quantities becomes a serious health hazard. Gases diffuse readily in the air, making explosive mixtures possible. When certain proportions of combustible gas and air are ignited, the mixture burns with such high velocity that an explosive force is created. Because of these characteristics of gas fuels, precautions must be taken to prevent leaks, and care must be exercised when gas-fired equipment is lit. Gas is broadly classified as natural or manufactured.

• Natural gas-This gas is a mixture of several combustible and inert gases. It is one of the richest gases and is obtained from wells ordinarily located in petroleum-producing areas. The heat content may vary from 700 to 1,300 British thermal units (BTUs) per cubic foot, with a generally accepted average figure of 1,000 BTUs per cubic foot. Natural gas is distributed through pipelines to the point of use and is often mixed with manufactured gas to maintain a guaranteed BTU content.
• Manufactured gas-This gas, as distributed, is usually a combination of certain proportions of gases produced from coke, coal, and petroleum. Its BTU value per cubic foot is generally closely regulated, and costs are determined on a guaranteed BTU basis, usually 520 to 540 BTUs per cubic foot.
• Liquefied petroleum gas-The principal products of liquefied petroleum gas are butane and propane. Butane and propane are derived from natural gas or petroleum refinery gas and are chemically classified as hydrocarbon gases.
Specifically, butane and propane are on the borderline between a liquid and a gaseous state. At ordinary atmospheric pressure, butane is a gas above 33°F (0.6°C) and propane is a gas above -42°F (-41°C). These gases are mixed to produce commercial gas suitable for various climatic conditions. Butane and propane are heavier than air. The heat content of butane is 3,274 BTUs per cubic foot, whereas that of propane is 2,519 BTUs per cubic foot. Gas burners should be equipped with an automatic shutoff that operates if the flame fails. Shutoff valves should be located within 1 foot of the burner connection and on the output side of the meter.

Caution: Liquefied petroleum gas is heavier than air; therefore, the gas will accumulate at the bottom of confined areas. If a leak develops, care should be taken to ventilate the appliance before lighting it.

# Electricity

Electricity has gained popularity for heating in many regions, particularly where costs are competitive with other sources of heat energy; usage increased from 2% of homes in 1960 to 30% in 2000. With an electric system, the housing inspector should rely mainly on the electrical inspector to determine proper installation. There are a few items, however, to check to ensure safe use of the equipment. Check that the units are approved by an accredited testing agency and installed according to the manufacturer's specifications. Most convector-type units must be installed at least 2 inches above floor level, not only to ensure that proper convection currents are established through the unit, but also to allow sufficient air insulation from any combustible flooring material. The housing inspector should check for curtains or loose, long-pile rugs that extend too close to the unit. A distance of 6 inches on the floor and 12 inches on the walls should separate rugs or curtains from the appliance.

Heat pumps are air conditioners that contain a valve that allows switching between air-conditioner and heater modes. When the valve is switched one way, the heat pump acts like an air conditioner; when it is switched the other way, it reverses the flow of refrigerant and acts like a heater. Cold is the absence of heat energy: to cool something, heat must be removed; to warm something, heat energy must be provided. Heat pumps do both. A heat pump has a few additions beyond the typical air conditioner: a reversing valve, two thermal expansion valves, and two bypass valves. The reversing valve allows the unit to provide both cooling and heating. Figure 12.1 shows a heat pump in cooling mode. The unit operates as follows:

• The compressor compresses the refrigerant vapor and pumps it to the reversing valve.
• The reversing valve directs the compressed vapor to the outside heat exchanger (condenser), where the refrigerant is cooled and condensed to a liquid.
• The air blowing through the condenser coil removes heat from the refrigerant.
• The liquid refrigerant bypasses the first thermal expansion valve and flows to the second thermal expansion valve at the inside heat exchanger (evaporator), where it expands into the evaporator and becomes vapor.
• The refrigerant picks up heat energy from the air blowing across the evaporator coil, and cool air comes out the other side of the coil. The cool air is ducted to the occupied space as air-conditioned air.
• The refrigerant vapor then goes back to the reversing valve to be directed to the compressor and start the refrigeration cycle all over again.
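The cooling-mode cycle above can be summarized as an ordered path through the unit's components. The Python sketch below simply traces that path; the component names and step descriptions paraphrase the list above and are not drawn from any manufacturer's documentation.

```python
# Minimal sketch of the cooling-mode refrigerant path described above.
# Component names are illustrative; a real system runs continuously.

COOLING_PATH = [
    ("compressor", "refrigerant vapor is compressed"),
    ("reversing valve", "directs vapor to the outdoor coil"),
    ("outdoor coil (condenser)", "heat is rejected; vapor condenses to liquid"),
    ("expansion valve (indoor)", "liquid expands, dropping pressure and temperature"),
    ("indoor coil (evaporator)", "refrigerant absorbs heat from indoor air"),
    ("reversing valve", "vapor is routed back to the compressor"),
]

def trace(path):
    for step, (component, action) in enumerate(path, start=1):
        print(f"{step}. {component}: {action}")

trace(COOLING_PATH)
# In heating mode the reversing valve swaps the roles of the two coils:
# the indoor coil becomes the condenser and the outdoor coil the evaporator.
```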
Heat pumps [3] are quite efficient in their use of energy. However, heat pumps often freeze up; that is, the coils in the outside air collect ice. The heat pump has to melt this ice periodically, so it switches itself back to air-conditioner mode to heat up the coils. To avoid pumping cold air into the house while in air-conditioner mode, the heat pump also uses electric strip heaters to warm the cold air that the air conditioner is pumping out. Once the ice is melted, the heat pump switches back to heating mode and turns off the strip heaters.

# Fuel Oil

Fuel oils are derived from petroleum, which consists primarily of compounds of hydrogen and carbon (hydrocarbons) and smaller amounts of nitrogen and sulfur. Domestic fuel oils are controlled by rigid specifications. Six grades of fuel oil-numbered 1 through 6-are generally used in heating systems; the lighter two grades are used primarily for domestic heating:

• Grade Number 1-A volatile, distillate oil for use in burners that prepare fuel for burning solely by vaporization (oil-fired space heaters).
• Grade Number 2-A moderate-weight, volatile, distillate oil used for burners that prepare oil for burning by a combination of vaporization and atomization. This grade of oil is commonly used in domestic heating furnaces.

Heating values of oil vary from approximately 152,000 BTUs per gallon for Number 6 oil to 136,000 BTUs per gallon for Number 1 oil. Oil is more widely used today than coal and provides a more automatic source of heat and comfort; it also requires more complicated systems and controls. If the oil supply is in the basement or cellar area, certain code regulations must be followed (Figure 12.2) [4][5][6][7]. No more than two 275-gallon tanks may be installed above ground in the lowest story of any one building. The IRC recommends a maximum fuel oil storage of 660 gallons. The tank shall not be closer than 7 feet horizontally to any boiler, furnace, stove, or exposed flame. Fuel oil lines should be embedded in a concrete or cement floor or protected against damage if they run across the floor. Each tank must have a shutoff valve that will stop the flow if a leak develops in the line to the burner or in the burner itself. A leak-tight liner or pan should be installed under tanks and lines located above the floor to contain potential leaks so that oil does not spread over the floor and create a fire hazard. The tank or tanks must be vented to the outside, and a gauge showing the quantity of oil in the tank or tanks must be tight and operative. Steel tanks constructed before 1985 had a life expectancy of 12-20 years. Tanks must be off the floor and on a stable base to prevent settlement or movement that may rupture the connections. Figure 12.3 shows a buried outside tank installation.

In 1985, federal legislation was passed requiring that the exterior components of underground storage tanks (USTs) installed after 1985 resist the effects of pressure, vibration, and movement. Federal regulations for USTs exclude the following: farm and residential tanks of 1,100 gallons (4,164 liters) or less capacity; tanks storing heating oil used on the premises; tanks on or above the floor of basements; septic tanks; flow-through process tanks; all tanks with a capacity of 110 gallons or less; and emergency spill and overfill tanks [8]. A review of local and state regulations should be completed before installing underground tanks because many jurisdictions do not allow burial of gas or oil tanks.
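As a rough illustration, the above-ground tank limits just described can be expressed as a simple compliance check. The Python sketch below assumes the figures quoted above (two 275-gallon tanks, a 660-gallon IRC maximum, and 7 feet of horizontal clearance); the function and argument names are hypothetical, and local codes govern in practice.

```python
# Illustrative check of the above-ground fuel-oil tank rules quoted above.
# Not a substitute for the applicable fire or building code.

def check_oil_tanks(tank_gallons: list[float], feet_to_nearest_flame: float) -> list[str]:
    violations = []
    if len(tank_gallons) > 2 or any(g > 275 for g in tank_gallons):
        violations.append("no more than two 275-gallon tanks above ground")
    if sum(tank_gallons) > 660:
        violations.append("total storage exceeds the 660-gallon IRC maximum")
    if feet_to_nearest_flame < 7:
        violations.append("tank closer than 7 feet horizontally to a flame source")
    return violations

print(check_oil_tanks([275, 275], feet_to_nearest_flame=6.5))
# ['tank closer than 7 feet horizontally to a flame source']
```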
The U.S. Department of Energy's Million Solar Roofs Initiative, begun in 1997, aims to install solar energy systems in more than 1 million U.S. buildings by 2010.

# Central Heating Units

The boiler should be placed in a separate room whenever possible, which is usually required in new construction. In most housing inspections, however, the inspector is dealing with existing conditions and must adapt the situation as closely as possible to acceptable safety standards. In many old buildings, the furnace is located in the center of the cellar or basement. This location does not lend itself to practical conversion to a boiler room. Consider the physical requirements for a boiler or furnace.

• Ventilation-More circulating air is required for the boiler room than for a habitable room, both to reduce the heat buildup caused by the boiler or furnace and to supply oxygen for combustion.
• Fire protection rating-As specified by various codes (fire code, building code, and insurance underwriters), the fire regulations must be strictly adhered to in areas surrounding the boiler or furnace. The minimum clearance for a boiler or furnace from a wall or ceiling is shown in Figures 12.4 and 12.5.

Asbestos was used in numerous places on furnaces to protect buildings from fire and to prevent lost heat; Figure 12.6 shows asbestos-coated heating ducts, for example. Where asbestos insulation is found, it must be handled with care (breathing protection and protective clothing), and care must be taken to prevent or contain its release into the air [10].

Enclosing the furnace or boiler can make it difficult to supply adequate air for combustion and ventilation; where codes and the local authority permit, it may be more practical to place the furnace or boiler in an open area. The ceiling above the furnace should be protected to a distance of 3 feet beyond all furnace or boiler appurtenances, and this area should be free of all storage material. The furnace or boiler should be on a firm foundation of concrete if located in the cellar or basement. If codes permit furnace installations on the first floor, they must be consulted for proper setting and location.

# Heating Boilers

The term boiler is applied to the single heat source that can supply either steam or hot water (a boiler is often called a heater). Boilers may be classified according to several kinds of characteristics. They are typically made from cast iron or steel. Their construction design may be sectional, portable, fire-tube, water-tube, or special. Domestic heating boilers are generally of the low-pressure type, with a maximum working pressure of 15 pounds per square inch (psi) for steam and 30 psi for hot water. All boilers have a combustion chamber for burning fuel. Automatic fuel-firing devices help supply the fuel and control the combustion. Hand firing is accomplished by the provision of a grate, ash pit, and controllable drafts to admit air under the fuel bed and over it through slots in the firing door. A check draft is required at the smoke pipe connection to control chimney draft. The gas passes from the combustion chamber to the flue passages (smoke pipe), which are designed for maximum possible transfer of heat from the gas. Provisions must be made for cleaning flue passages.

Cast-iron boilers are usually shipped in sections and assembled at the site. They are generally classified as
• square or rectangular boilers with vertical sections; and
• round, square, or rectangular boilers with horizontal pancake sections.

Most steel boilers are assembled units with welded steel construction and are called portable boilers.
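A minimal sketch of the low-pressure limits stated above (15 psi for steam, 30 psi for hot water), assuming the thresholds are inclusive as this section words them; the function name is hypothetical:

```python
# Sketch of the domestic low-pressure boiler limits stated above.

def is_low_pressure(medium: str, working_psi: float) -> bool:
    """True if the working pressure is within the domestic low-pressure limits."""
    limits = {"steam": 15, "hot water": 30}
    return working_psi <= limits[medium]

print(is_low_pressure("steam", 12))      # True
print(is_low_pressure("hot water", 45))  # False: exceeds the 30 psi limit
```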
Large boilers are installed in refractory brick settings on the site. Above the combustion chamber, a group of tubes is suspended, usually horizontally, between two headers. If flue gases pass through the tubes and water surrounds them, the boiler is designated as the fire-tube type. When water flows through the tubes, it is termed water-tube. The fire-tube type predominates.

# Heating Furnaces

Heating furnaces are the heat sources used when air is the heat-carrying medium. When air circulates because of the different densities of the heated and cooled air, the furnace is a gravity type. A fan may be included for the air circulation; this type is called a mechanical warm-air furnace. Furnaces may be of cast iron or steel and burn various types of fuel. Some new furnaces achieve fuel efficiencies as high as 95%. Furnaces with an efficiency of 90% or greater use two heat exchangers instead of one. Energy savings come not only from the increased efficiency, but also from improved comfort at lower thermostat settings.

# Fuel-burning Furnaces

Some localities throughout the United States still use coal as a heating fuel, including residences, schools, colleges and universities, small manufacturing facilities, and other facilities located near coal sources. In many older furnaces, the coal is stoked or fed into the firebox by hand. The single-retort, underfeed-type bituminous coal stoker is the type most commonly used with domestic automatic-stoking steam or hot-water boilers (Figure 12.7). The stoker consists of a coal hopper, a screw for conveying coal from hopper to retort, a fan that supplies air for combustion, a transmission for driving the coal feed and fan, and an electric motor for supplying power. The air for combustion is admitted to the fuel through tuyeres (air inlets) at the top of the retort. The stoker feeds coal to the furnace intermittently in accordance with temperature or pressure demands.

Oil burners are broadly designated as distillate, domestic, and commercial or industrial. Distillate burners are usually found in oil-fired space heaters. Domestic oil burners are usually power driven and are used in domestic heating plants. Commercial or industrial burners are used in larger central-heating plants for steam or power generation. Domestic oil burners vaporize and atomize the oil and deliver a predetermined quantity of oil and air to the combustion chamber. Domestic oil burners operate automatically to maintain a desired temperature. Gun-type burners atomize the oil either by oil pressure or by low-pressure air forced through a nozzle. The oil system of a pressure-atomizing burner consists of a strainer, pump, pressure-regulating valve, shutoff valve, and atomizing nozzle. The air system consists of a power-driven fan and an air tube that surrounds the nozzle and electrode assembly. The fan and oil pump are generally connected directly to the motor. Oil pressures normally used are about 100 psi, but pressures considerably in excess of this are sometimes used. The form and parts of low-pressure, air-atomizing burners are similar to those of high-pressure atomizing (gun) burners (Figure 12.8), except for the addition of a small air pump and a different way of delivering air and oil to the nozzle or orifice. The rotary atomizing burner, sometimes known as a radiant or suspended-flame burner, atomizes oil by throwing it from the circumference of a rapidly rotating motor-driven cup.
The burner is installed so that the driving parts are protected from the heat of the flame by a hearth of refractory material at about the grate elevation. Oil is fed by pump or gravity; the draft is mechanical or a combination of natural and mechanical. Horizontal rotary burners were originally designed for commercial and industrial use but are available in sizes suitable for domestic use. In this burner, fuel oil is atomized as it is thrown in a conical spray from a rapidly rotating cup. Horizontal rotary burners use electric-gas or gas-pilot ignition and operate with a wide range of fuels, primarily Numbers 1 and 2 fuel oil.

Primary safety controls for burner operation are necessary. An antiflooding device must be part of the system to stop the oil flow if ignition in the burner fails. Likewise, a stack control is necessary to shut off the burner, cutting off all power to it, if stack temperatures are exceeded. The reset button must be pressed before the burner can be restarted. Newer models use an electric-eye type of control mounted on the burner itself.

On the basis of the method used to ignite the fuel, burners are divided into five groups:

• Electric-A high-voltage electric spark in the path of the oil and air mixture causes ignition. This electric spark may be continuous or may operate only long enough to ignite the oil. Electric ignition is almost universally used. Electrodes are located near the nozzles, but not in the path of the oil spray.
• Gas pilot-A small gas pilot light that burns continuously is frequently used. Gas pilots usually have expanding gas valves that automatically increase flame size when the motor circuit starts. After a fixed interval, the flame reverts to normal size (Figure 12.9).
• Electric gas-An electric spark ignites a gas jet, which in turn ignites the oil-air mixture.
• Oil pilot-A small oil flame is used.
• Manual-A burning wick or torch is placed in the combustion space through peepholes and thus ignites the charge. The operator should stand to one side of the fire door to guard against injury from a chance explosion.

The refractory lining or material should be an insulating, fireproof, bricklike substance, never ordinary firebrick. The insulating brick should be set on end to build a 2½-inch-thick wall. The size and shape of the refractory pot vary from furnace to furnace. The shape can be either round or square, whichever is more convenient to build. It is most important to use a special cement having properties similar to those of the insulating refractory-type brick.

# Steam Heating Systems

Steam heating systems are classified according to the pipe arrangement, the accessories used, the method of returning the condensate to the boiler, the method of expelling air from the system, or the type of control used. The successful operation of a steam heating system consists of generating steam in sufficient quantity to equalize building heat loss at maximum efficiency, expelling entrapped air, and returning all condensate to the boiler rapidly. Steam cannot enter a space filled with air or water at a pressure equal to the steam pressure. It is important, therefore, to eliminate air and remove water from the distribution system. All hot pipelines exposed to contact by residents must be properly insulated or guarded. Steam heating systems use the following methods to return the condensate to the boiler:

• Gravity one-pipe air-vent system-One of the earliest types used, this method returns condensate to the boiler by gravity.
This system is generally found in one-building-type heating systems. The steam is supplied by the boiler and carried through a single-pipe system to the radiators, as shown in Figure 12.10. Return of the condensate is dependent on hydrostatic head. Therefore, the end of the steam main, where it attaches to the boiler, must be full of water (termed a wet return) for a distance above the boiler water line to create a pressure-drop balance between the boiler and the steam main. Radiators are equipped with an inlet valve and an air valve. The air valve permits venting of air from the radiator and its displacement by steam. Condensate is drained from the radiator through the same pipe that supplies steam.

• Two-pipe steam vapor system with return trap-The two-pipe vapor system with boiler return trap and air eliminator is an improvement on the one-pipe system. The return connection of the radiator has a thermostatic trap that permits flow of condensate and air only from the radiator and prevents steam from leaving the radiator. Because the return main is at atmospheric pressure or less, a boiler return trap is installed to equalize condensate return pressure with boiler pressure.

# Water Heating Systems

All water heating systems are similar in design and operating principle. The one-pipe gravity water heating system is the most elementary of the gravity systems and is shown in Figure 12.10. Water is heated at the lowest point in the system. It rises through a single main because of the difference in density between hot and cold water. The supply riser or radiator branch takes off from the top of the main to supply water to the radiators. After the water gives up heat in the radiator, it goes back to the same main through return piping from the radiator. This cooler return water mixes with water in the supply main and causes the water to cool a little. As a result, the next radiator on the system has a lower emission rate and must be larger. Note in Figure 12.11 that the high points of the hot water system are vented and the low points are drained. In this case, the radiators are the high points and the heater is the low point.

• One-pipe forced-feed system-If a pump or circulator is introduced in the main near the heater of the one-pipe system, it becomes a forced system that can be used for much larger applications than the gravity type. This system can operate at higher water temperatures than the gravity system can. Because the water is moving faster and at higher temperatures, the result is a more responsive system, with smaller temperature drops and smaller radiators for the same heating load.
• Two-pipe gravity system-A one-pipe gravity system becomes a two-pipe system if the return radiator branch connects to a second main that returns water to the heater (Figure 12.12). Water temperature is practically the same in the entire radiator.
• Two-pipe forced-circulation system-This system is similar to a one-pipe forced-circulation system except that it uses the same piping arrangement found in the two-pipe gravity system.
• Expansion tanks-When water is heated, it expands. Therefore, an expansion tank is necessary in a hot water system. The expansion tank, either open or closed, must be of sufficient size to permit a change in water volume within the heating system. If the expansion tank is open, it must be placed at least 3 feet above the highest point of the system, and it requires a vent and an overflow. The open tank is usually in an attic, where it needs protection from freezing.
The closed expansion tank is found in modern installations. An air cushion in the tank compresses and expands according to the change of volume and pressure in the system. Closed tanks are usually at the low point in the system and close to the heater. They can, however, be placed at almost any location within the heating system.

# Air Heating Systems

Gravity Warm-air Heating Systems. These operate because of the difference in specific gravity of warm air and cold air. Warm air is lighter than cold air and rises if cold air is available to replace it (Figure 12.13).

• Operation-Satisfactory operation of a gravity warm-air heating system depends on three factors: the size of the warm-air and cold-air ducts, the heat loss of the building, and the heat available from the furnace.
• Heat distribution-The most common source of trouble in these systems is insufficient pipe area, usually in the return or cold-air duct. The total cross-section area of the cold duct or ducts must be at least equal to the total cross-section area of all warm ducts.
• Pipeless furnaces-The pipeless hot-air furnace is the simplest type of hot-air furnace and is suitable for small homes where all rooms can be grouped around a single large register. Other pipeless gravity furnaces are often installed at floor level; these are really oversized jacketed space heaters. The most common difficulty experienced with this type of furnace is supplying a return-air opening of sufficient size in the floor.

Forced Warm-air Heating Systems. The design of a forced-warm-air heating system is very similar to that of the gravity system, except that a fan or blower is added to increase air movement. Because of the assistance of the fan or blower, the pitch of the ducts or leaders can be disregarded; therefore, it is practical to deliver heated air to the most convenient places.

• Operation-In a forced-air system, operation of the fan or blower must be controlled by the air temperature in the bonnet or by a blower control (furnace-stat). The blower control starts the fan or blower when the temperature reaches a certain point and turns the fan or blower off when the temperature drops to a predetermined point.
• Heat distribution-Dampers in the various warm-air ducts control distribution of warm air, either at the branch takeoff or at the warm-air outlet. Humidifiers are often mounted in the supply bonnet to regulate the humidity within the residence.

# Space Heaters

Space heaters are the least desirable type of heating from the viewpoint of fire safety and housing inspection. A space heater is a self-contained, free-standing air-heating appliance intended for installation in the space being heated and not intended for duct connection. According to the CPSC, consumers often do not use care when purchasing and using space heaters. Approximately 21,800 residential fires are caused by space heaters each year, and 300 people die in these fires. An estimated 6,000 persons receive hospital emergency room care for burn injuries associated with contacting hot surfaces of space heaters, mostly in nonfire situations. Individuals using space heaters should observe the following precautions:

• Read and follow the manufacturer's operating instructions. A good practice is to read aloud the instructions and warning labels to all members of the household to be certain that everyone understands how to operate the heater safely. Keep the owner's manual in a convenient place to refer to when needed.
• Choose a space heater that has been tested and certified by a nationally recognized testing laboratory.
These heaters meet specific safety standards.

• Buy a heater that is the correct size for the area you want to heat. The wrong size heater could produce more pollutants and may not be an efficient use of energy.
• Choose models that have automatic safety switches that turn off the unit if it is tipped over accidentally.
• Select a space heater with a guard around the flame area or heating element. Place the heater on a level, hard, nonflammable surface, not on rugs or carpets or near bedding or drapes. Keep the heater at least 3 feet from bedding, drapes, furniture, or other flammable materials.
• Keep doors open to the rest of the house if you are using an unvented fuel-burning space heater. This helps prevent pollutant buildup and promotes proper combustion. Follow the manufacturer's instructions for oil heaters to provide sufficient combustion air to prevent CO production.
• Never leave a space heater on when you go to sleep. Never place a space heater close to any sleeping person.
• Turn the space heater off if you leave the area. Keep children and pets away from space heaters. Children should not be permitted either to adjust the controls or to move the heater.
• Keep any portable heater at least 3 feet away from curtains, newspapers, or anything that might burn.
• Have a smoke detector with fresh batteries on each level of the house and a CO detector outside the sleeping area. Install a CO monitor near oil space heaters at the height recommended by the manufacturer.
• Be aware that mobile homes require specially designed heating equipment. Only electric or vented fuel-fired heaters should be used.
• Have gas and kerosene space heaters inspected annually.
• Do not hang items to dry above or on the heater.
• Keep all heaters out of exits and high-traffic areas.
• Keep portable electric heaters away from sinks, tubs, and other wet or damp places to avoid deadly electric shocks.
• Never use or store flammable liquids (such as gasoline) around a space heater. The flammable vapors can flow from one part of the room to another and be ignited by an open flame or by an electrical spark.

# Coal-fired Space Heaters (Cannon Stove)

A coal stove is made entirely of cast iron. Coal on the grates receives primary air for combustion through the grates from the ash-door draft intake. Combustible gases driven from the coal by heat burn in the barrel of the stove, where they receive additional, or secondary, air through the feed door. The sides and top of the stove absorb the heat of combustion and radiate it to the surrounding space. Coal stoves must be vented to the flue.

# Oil-fired Space Heaters

Oil-fired space heaters have atmospheric vaporizing-type burners. The burners require a light grade of fuel oil that vaporizes easily and at comparatively low temperatures. In addition, the oil must leave only a small amount of carbon residue and ash within the heater. Oil stoves must be vented. The burner of an oil-fired space heater consists essentially of a bowl, 8 to 13 inches in diameter, with perforations in the side that admit air for combustion. The upper part of the bowl has a flame ring or collar. Figure 12.15 shows a perforated-sleeve burner. When several space heaters are installed in a building, an oil supply from an outside tank to all heaters is often desirable. Figure 12.16 shows the condition of a burner flame at different rates of fuel flow and indicates the ideal flame height.

# Electric Space Heaters

Electric space heaters do not need to be vented.
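The 3-foot clearance rule that recurs in the precautions above can be checked mechanically. A minimal Python sketch follows, with hypothetical item names and distances:

```python
# Sketch of the 3-foot space heater clearance rule from the precautions above.

CLEARANCE_FT = 3  # minimum distance from bedding, drapes, and other combustibles

def heater_placement_ok(distances_ft: dict[str, float]) -> list[str]:
    """Return the combustibles that are closer than the recommended 3 feet."""
    return [item for item, d in distances_ft.items() if d < CLEARANCE_FT]

too_close = heater_placement_ok({"curtains": 2.0, "sofa": 4.5, "newspapers": 1.0})
print(too_close)  # ['curtains', 'newspapers']
```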
# Gas-fired Space Heaters

The three types of gas-fired space heaters (natural, manufactured, and liquefied petroleum gas) have a similar construction. All gas-fired space heaters must be vented to prevent a dangerous buildup of poisonous gases. Each unit console consists of an enameled steel cabinet with top and bottom circulating grilles or openings, gas burners, heating elements, a gas pilot, and a gas valve. The heating element or combustion chamber is usually cast iron.

Caution: All gas-fired space heaters and their connections must be approved by the American Gas Association (AGA). They must be installed in accordance with the recommendations of that organization or the local code.

# Venting

Use of proper venting materials and correct installation of venting for gas-fired space heaters are necessary to minimize the harmful effects of condensation and to ensure that combustion products are carried off. (Approximately 12 gallons of water are produced in the burning of 1,000 cubic feet of natural gas. The inner surface of the vent must therefore be heated above the dew point of the combustion products to prevent water from forming in the flue.) A horizontal vent must be given an upward pitch of at least 1 inch per foot of horizontal distance. When the smoke pipe extends through floors or walls, the metal pipe must be insulated from the floor or wall system by an air space (Figure 12.17). Sharp bends should be avoided: a 90° vent elbow has a resistance to flow equivalent to that of a straight section of pipe with a length 10 times the elbow diameter.

Be sure that vents are of rigid construction and resistant to corrosion by flue gas products. Several types of venting material are available, such as B-vent and various ceramic-type materials. A chimney lined with firebrick-type terra cotta must be relined with an acceptable vent material if it is to be used for venting gas-fired appliances. The same size vent pipe should be used throughout its length. A vent should never be smaller than the heater outlet except when two or more vents converge from separate heaters. To determine the size of the common vent beyond the point of convergence, one-half the area of each vent should be added to the area of the largest heater's vent (see the sketch below). Vents should be installed with the male ends of the inner liner down to ensure that condensate is kept within the pipes on a cold start. The vertical length of each vent or stack should be at least 2 feet greater than the length between the horizontal connection and the stack.

Remember that the more conductive the unit, the lower the temperature of combustion and the more by-products of combustion are likely to be produced. These by-products are sometimes referred to as soot and creosote. They build up in vents, stacks, and chimneys; they are extremely flammable; and the resulting fire in these units can be hot enough to penetrate the heat shielding and throw burning material onto the roof of the home.

The vent should be run at least 3 feet above any projection within 20 feet of the building to place it above a possible pressure zone due to wind currents (Figure 12.18). A weather cap should prevent entrance of rain and snow. Gas-fired space heaters, as well as gas furnaces and water heaters, must be equipped with a backdraft diverter (Figure 12.19) to protect heaters against downdrafts and excessive updrafts. Only draft diverters approved by the AGA should be used. The combustion chamber or firebox must be insulated from the floor, usually with an airspace of 15 to 18 inches.
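The vent-convergence rule above is ambiguous as worded; one common reading is sketched below, assuming "one-half the area of each vent" refers to each vent other than the largest. The figures and function names are illustrative only.

```python
# Sketch of one reading of the vent-convergence sizing rule stated above:
# the common vent area equals the area of the largest vent plus one-half
# the area of each of the other converging vents.

import math

def vent_area(diameter_in: float) -> float:
    """Cross-sectional area of a round vent, in square inches."""
    return math.pi * (diameter_in / 2) ** 2

def common_vent_area(vent_diameters_in: list[float]) -> float:
    areas = sorted((vent_area(d) for d in vent_diameters_in), reverse=True)
    return areas[0] + 0.5 * sum(areas[1:])

# Two heaters with 4-inch and 3-inch vents converging into one stack:
area = common_vent_area([4.0, 3.0])
print(f"required common vent area: {area:.1f} sq in")  # ~16.1 sq in
```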
The firebox is sometimes insulated within the unit, which allows a smaller clearance between the firebox and combustibles. Floors should be protected where coal space heaters are located. The floor protection allows hot coals and ashes to cool off if dropped while being removed from the ash chamber. Noncombustible walls and materials should be used when they are exposed to heated surfaces. For space heaters, a top or ceiling clearance of 36 inches, a wall clearance of 18 inches, and a smoke pipe clearance of 18 inches are recommended.

# Hydronic Systems

Hydronic (circulating water) systems involving traditional baseboards can be single-pipe or two-pipe. Radiant systems are also an option. All hydronic systems require an expansion tank to compensate for the increase in water volume when it is heated (i.e., the volume of 50°F [10°C] water increases almost 4% when it is heated to 200°F [93°C]).

Single-pipe hydronic systems are most commonly used in residences. They use a single pipe with hot water flowing in a series loop from radiator to radiator. Massachusetts has a prototype set of hydronic system requirements [11]. The drawback to this arrangement is that the temperature of the water decreases as it moves through each radiator. Thus, larger radiators are needed for locations downstream in the loop. A common solution is multiple loops or zones, each with its own temperature control and with circulation provided by a small pump or zone valve in each loop.

Two-pipe hydronic systems use one pipe to supply hot water to the radiators and a second pipe to return the water from the radiators to the boiler. There are both direct- and reverse-return arrangements. The direct-return system can be difficult to balance because the pressure drop through the nearest radiator piping can be significantly less than that for the farthest radiator. Reverse-return systems take care of the balancing problem but require the expense of additional piping. Orifice plates at radiator inlets or balancing valves at radiator outlets can also be used to balance the pressure drops in a direct-return system.

# Direct Vent Wall Furnaces

Direct vent wall furnaces are specifically designed for areas where flues or chimneys are not available or cannot be used. The furnace is directly vented to the outside, and external air is used to support combustion. The air on the inside is warmed as it recirculates around a sealed chamber.

# Cooling

# Air Conditioning

Many old homes relied on passive cooling-opening windows and doors and using shading devices-during the summer months. Homes were designed with windows on opposite walls to encourage cross ventilation, and large shade trees reduced solar heat gains. This approach is still viable, and improved thermal performance (insulating value) windows are available that allow for larger window areas to let in more air in the summer without the heat-loss penalty in the winter. However, increased outdoor noise levels, pollution, and security issues make relying on open windows a less attractive option in some locations today.

An air conditioning system of some kind may be installed in the home. It may be a window air conditioner or through-the-wall unit for cooling one or two rooms, or a central split-system air conditioner or heat pump. In any event, the performance of these systems, in terms of providing adequate comfort without excessive energy use, should be investigated. The age of the equipment alone will provide some indication.
If the existing system is more than 10 years old, replacement should be considered because it is much less efficient than today's systems and is nearing the end of its useful life. The refrigerant commonly used in today's residential air conditioners is R-22. Because R-22 is suspected of depleting the ozone layer, manufacturers will be prohibited from producing units with R-22 in 2010. The leading replacements for R-22 are R-134A and R-410A, and new products are now available with these non-ozone-depleting refrigerants.

The performance measure for electric air conditioners with capacities less than 65,000 BTUs is the seasonal energy efficiency ratio (SEER). SEER is a rating of cooling performance based on representative residential loads. It is reported in units of BTUs of cooling per watt-hour of electric energy consumption and includes the energy used by the unit's compressor, fans, and controls. The higher the SEER, the more efficient the system. However, the highest-SEER unit may not provide the most comfort. In humid climates, some of the highest-SEER units exhibit poor dehumidification capability because they operate at higher evaporator temperatures to attain the higher efficiency.

The refrigerant gas travels through refrigerant piping to the outdoor unit, where it is pressurized in an electrically driven compressor, raising its temperature and pressure, and returned to a liquid state in the condenser as it releases, or dumps, the heat to the outdoors. A fan draws outdoor air in over the condenser coil. The use of two-speed indoor fans can be advantageous in this type of system because the cooling load often requires higher airflows than the heating load. The lower speed can be used for the heating season and for improved dehumidification performance during the cooling season. The condenser unit for a house air conditioner is shown in Figure 12.21.

# Circulation Fans

Air movement can make a person feel comfortable even when dry-bulb temperatures are elevated. A circulation fan (ceiling or portable) that creates an airspeed of 150 to 200 feet per minute can compensate for a 4°F (2°C) increase in temperature. Ceiling circulation fans also can be beneficial in the heating season by redistributing warm air that collects along the ceiling, but they can be noisy.

# Evaporation Coolers

In dry climates, as in the southwestern United States, an evaporative cooler or "swamp" cooler may provide sufficient cooling. This system cools an airstream by evaporating water into it; the airstream's relative humidity increases while the dry-bulb temperature decreases. A 95°F (35°C), 15% relative humidity airstream can be conditioned to 75°F (24°C), 50% relative humidity. The simplest direct systems are centrally located and use a pump to supply water to a saturated pad over which the supply air is blown. Indirect systems use a heat exchanger between the airstream that is cooled by evaporating water and the supply airstream; the moisture level of the supply airstream is not affected as it is cooled.

Evaporation coolers have lower installation and operating costs than electric air conditioning, and no ozone-depleting refrigerant is involved. They provide high levels of ventilation because they typically condition and supply 100% outside air. The disadvantages are that bacterial contamination can result if the system is not properly maintained and that they are appropriate only for dry, hot climates.

# Safety

Cooling homes with window air conditioners requires attention to the maintenance requirements of the unit.
The filter must be cleaned or replaced as recommended by the manufacturer, and the drip pan should be checked to ensure that proper drainage from the unit is occurring. The pans should be rinsed and disinfected as recommended by the manufacturer. Both bacteria and fungi can establish themselves in these areas and present serious health hazards. Condensation forms on the cooling coils of central air units inside and outside the home. These units should have a properly installed drip pan and should be drained according to the manufacturer's instructions. They also should receive routine maintenance, flushing, and disinfection.

In the spring, before starting the air conditioner, the unit should be checked by a professional or someone familiar with the operation of the system. This is a good time to check the drip line(s) for conditions such as plugs, cracks, or bacterial contamination because many of these lines are plastic. The drip pan should be cleaned thoroughly and disinfected if necessary, or replaced. A plugged drip line can cause water damage by overflow from the drip pan. In the fall, the heating unit also should be checked before starting the system. Care should be taken with both window air conditioning units and central air systems to use quality air filters that are designed for the specific units and meet the specifications required by the system's manufacturer.

The housing inspector should be on the alert for unvented, open-flame heaters. Coil-type, wall-mounted water heaters that do not have safety relief valves are not permitted. Kerosene (portable) units for cooking or heating should be prohibited. Generally, open-flame portable units are not allowed under fire safety regulations. In oil heating units other than integral tank units, the oil tank must be filled and vented outside the building; filling oil tanks within buildings is prohibited. Cutoff switches should be close to the entry but outside of the boiler room.

# Chimneys

Chimneys (Figure 12.22) are often an integral part of a building. Masonry chimneys must be tight and sound; flues should be terra cotta-lined; and, where no linings are installed, the brick should be tight to permit proper draft and elimination of combustion gases. Chimneys that act as flues for gas-fired equipment must be lined with either B-vent or terra cotta. When a portion of the chimney above the roof either loses insulation or the insulation peels back, it indicates potential poisonous-gas release or water leakage problems and a need for rebuilding. Exterior deterioration of the chimney, if neglected too long, will permit erosion from within the flues and eventually block the flue opening. Rusted flashing at the roof level will also contribute to the chimney's deterioration. Efflorescence on the inside wall of the chimney below the roof, and on the outside of the chimney if exposed, will show salt accumulations-a telltale sign of water penetration, flue gas escape, and chimney deterioration. During rainy seasons, if terra cotta chimneys leak, dark areas show the number of flues inside the masonry chimney so that they can actually be counted. When this condition occurs, it usually requires 2 or 3 months to dry out. After drying out, the mortar joints are discolored (brown). After a few years of this type of deterioration, the joints can be distinguished whether the chimney is wet or dry. These conditions usually develop when coal is used and become more pronounced 2 to 5 years after conversion to oil or gas.
An unlined chimney can be checked for deterioration below the roofline by looking for residue deposited at the base of the chimney, usually accessible through a cleanout (door or plug) or breeching. Red granular or fine powder showing through coal or oil soot, if present in quantity (a handful), generally indicates that deterioration is excessive and repairs are needed. Unlined chimneys serving gas units will be devoid of soot but will usually show similar telltale brick powder and deterioration (see the chimney plan in Figure 12.22 [4]). Manufactured gas has a greater tendency to dehydrate and decompose brick in chimney flues than does natural gas. For gas installations in older homes, utility companies usually specify chimney requirements before installation; therefore, older chimneys may require the installation of terra cotta liners, nonlead-lined copper liners, stainless steel liners, or transite pipe.

Black carbon deposits around the top of the chimney usually indicate an oil burner operating with a low air ratio and high oil consumption. Prolonged operation at this burner setting results in carbon deposits extending down the chimney for 4 to 6 feet or more and should indicate to the inspector the possibility of poor burner maintenance. This underscores the need to be more thorough on the next inspection. This type of condition can also result from other causes, such as improper chimney height or exterior obstructions, such as trees or buildings, that cause downdrafts or insufficient draft or contribute to a faulty heating operation. Rust spots and soot-mold usually occur on deteriorated galvanized smoke pipe.

# Fireplaces

Careful attention should be given to the construction of the fireplace (Figure 12.23). Improperly built fireplaces are a serious safety and fire hazard. The most common causes of fireplace fires are thin walls; combustible materials, such as studding or trim, against the sides and back of the fireplace; wood mantels; and unsafe hearths. Fireplace walls should be no less than 8 inches thick; if built of stone or hollow masonry units, they should be no less than 12 inches thick. The faces of all walls exposed to fire should be lined with firebrick or other suitable fire-resistant material. When the lining consists of 4 inches of firebrick, the lining thickness may be included in the required minimum thickness of the wall.

The fireplace hearth should be constructed of brick, stone, tile, or similar incombustible material and should be supported on a fireproof slab or on a brick arch. The hearth should extend at least 20 inches beyond the chimney breast and no less than 12 inches beyond each side of the fireplace opening along the chimney breast. The combined thickness of the hearth and its supporting construction should be no less than 6 inches at any point. It is important that all wooden beams, joists, and studs be set off from the fireplace and chimney so that there is no less than 2 inches of clearance between the wood members and the sidewalls of the fireplace or chimney, and no less than 4 inches of clearance between wood members and the back wall of the fireplace.

A gas-log set is primarily a decorative appliance. It includes a grate holding ceramic logs, simulated embers, a gas burner, and a variable flame controller. These sets can be installed in most existing fireplaces. There are two principal types: vented and unvented. Vented types require a chimney flue for exhausting the gases.
They are only 20% to 30% efficient, and most codes require that the flue damper be fixed open, which results in an easy exit path for heated room air. Unvented types operate like the burner on a gas stove, and the combustion products are emitted into the room. They are more efficient because no heat is lost up the flue, and most are equipped with oxygen-depletion sensors. However, unvented types are banned in some states, including Massachusetts and California.

Gas fireplaces incorporate a gas-log set into a complete firebox unit with a glass door. Some have built-in dampers, smoke shelves, and heat-circulating features that allow them to provide both radiant and convective heat. Units can have push-button ignition, remote control, variable heat controls, and thermostats. Gas fireplaces are more efficient than gas logs, with efficiencies of 60% to 80%. Many draw combustion air in from the outside and are direct vented, eliminating the need for a chimney. Some of these units are wall-furnace rated.

There are also electric fireplaces that provide the ambience of a fire and, if desired, a small amount of resistance heat. These units have no venting requirements. The advantages are that there are no ashes or flying sparks, as occur with wood-burning fireplaces, and that they are not affected by wood-burning bans imposed in some areas when air quality standards are not met. Direct-vented gas models and electric models eliminate the need for a chimney. The disadvantages are that the cost of the equipment and of running a gas line can be high.

# Introduction

Using energy efficiently can reduce the cost of heating, ventilating, and air conditioning, which accounts for a significant part of the overall cost of housing. Energy costs recur month to month and are hard to reduce after a home has been designed and built. The development of an energy-efficient home or building must be thought through using a systems approach. Planning for energy efficiency involves considering where the air is coming from, how it is treated, and where it is desired in the home. Improper use or installation of sealing and insulating materials may lead to moisture saturation or retention, encouraging the growth of mold, bacteria, and viruses. In addition, toxic chemicals may be created or contained within the living environment. These building errors may result in major health hazards. The major issues that must be balanced in using a systems approach to energy efficiency are energy cost and availability, long-term affordability and sustainability, comfort and efficiency, and health and safety.

# Energy Systems

Making sound decisions in designing, constructing, or updating dwellings not only will ensure greater use and enjoyment of the space, but also can significantly lower energy bills and help residents avoid adverse health effects. Systematic planning for energy efficiency also can assist prospective homeowners in qualifying for mortgages because lower fuel bills translate into lower total housing and utility payments. Some banks and credit unions take this into account when qualifying prospective homeowners for mortgages. "Energy-efficient" mortgages provide buyers with special benefits when purchasing an energy-efficient home.
Energy use and efficiency should be addressed in the context of selection of fuel types and appliances, location of the equipment, equipment sizing and backup systems, and programmed use when making decisions on space conditioning. A price is paid for poor design and lack of proper insulation of dwellings, both in dollars for utility bills and in comfort of the occupants. The layout of rooms and the overall tightness of a house in terms of air exchange affect energy requirements. In addition, home occupants and owners often are called on to make relatively minor decisions affecting total energy consumption, such as selecting lighting fixtures and bulbs and selecting settings for thermostats. Buying energy-efficient appliances can save energy, but the largest reduction in energy use can be derived from major decisions, such as considering the R-value of roof systems, insulation, and windows.

# R-values

Thermal resistance (a material's resistance to heat flow) is rated by R-value. Higher R-values mean greater insulating power, which means greater household energy savings and commensurate cost savings. Table 13.1 is a guideline for choosing R-values that are right for a particular home based on the climate, household heating system, and area in which it is located. Another way of understanding R-value is to see it as the resistance to heat loss from a warmer inside temperature to the outside temperature through a material or building envelope (wall, ceiling or roof assembly, or window). Total heat loss is a function of the thermal conductivity of materials, area, time, and construction of a house. The R-value of thermal insulation depends on the type of material, its thickness, and its density. In calculating the R-value of a multilayered installation, the R-values of the individual layers are added. Installing more insulation increases R-value and the resistance to heat flow. The effectiveness of an insulated wall or ceiling also depends on how and where the insulation is installed. For example, insulation that is compressed will not provide its full rated R-value. Also, the overall R-value of a wall or ceiling will be somewhat different from the R-value of the insulation itself because some heat flows around the insulation through the studs and joists. That is, the overall R-value of a wall with insulation between wood studs is less than the R-value of the insulation itself because the wood provides a thermal short-circuit around the insulation. The short-circuiting through metal framing is much greater than that through wood-framed walls; sometimes the metal wall's overall R-value can be as low as half the insulation's R-value. With careful design, this short-circuiting can be reduced.
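To make the layer-addition arithmetic concrete, the following minimal sketch (in Python; the layer R-values, wall area, and temperatures are illustrative assumptions, not values from this manual's tables) sums the R-values of an assembly and estimates the resulting steady-state heat loss.

```python
# Minimal sketch: composite R-value of a layered assembly and the
# steady-state heat loss through it. All values below are illustrative
# assumptions, not figures from this manual.

layers = {
    "inside air film": 0.68,
    "1/2-inch gypsum board": 0.45,
    "R-13 fiberglass batt": 13.0,
    "1/2-inch plywood sheathing": 0.62,
    "wood siding": 0.80,
    "outside air film": 0.17,
}

# R-values of layers in series simply add.
composite_r = sum(layers.values())

# Heat loss Q = area x temperature difference / R, in Btu per hour
# when area is in square feet and the difference is in degrees F.
area_sqft = 400.0        # assumed wall area
delta_t = 70.0 - 30.0    # assumed indoor minus outdoor temperature

q_btu_per_hour = area_sqft * delta_t / composite_r

print(f"Composite R-value: {composite_r:.2f}")
print(f"Heat loss: {q_btu_per_hour:.0f} Btu/h")
```

Note that this simple sum ignores the thermal short-circuiting through studs described above; a whole-wall R-value that accounts for framing would be somewhat lower.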
Roof construction, including components such as ridge vents and insulating materials, is quite important and is often one of the more cost-effective ways to lower energy costs.

# Ridge Vents

Ridge vents are important to roofs for at least three reasons. First, ridge vents help lower the temperature in the roof structure and, consequently, in the attic and in the habitable space below. Second, ridge vents and rotating turbine vents help prolong the life of the roofing materials, particularly asphalt shingles and plywood sheathing. Third, ridge vents assist in air circulation and help avoid problems with excessive moisture.

# Fan-powered Attic Ventilation

Attic ventilators are small fans that remove hot air and reduce attic temperature. Adequate inlet vents are important; typically these vents are located under the eaves of the house. The fan should be located near the peak of the roof for best performance.

# White Roof Surface

A white roof surface combined with any of the measures listed above will improve their performance significantly. The white surface reflects much of the sun's heat and keeps the roof much cooler than a typical roof.

# Insulation

Insulation forms a barrier to the outside elements. It can help ensure that occupants are comfortable and that the home is energy-efficient. Ceiling insulation improves comfort and cuts electricity or natural gas costs for heating and cooling. For instance, the use of R-19 insulation in houses in Hawaii [3] could have the following results:

• Reduce indoor air temperature by 4°F (2.2°C) in the afternoon.
• Lower the ceiling temperature, perhaps by more than 15°F (8.3°C). A radiant barrier can reduce ceiling temperatures from 101°F (38°C) in bright sun on Oahu to 83°F (28°C) (Figure 13.2).
• Reduce or eliminate the need for an air-conditioner.

Energy savings, of course, will vary depending on energy prices. The payback afforded by additional insulation or other investment in energy conservation measures is the average amount of time it will take for the initial capital cost to be recovered through savings in energy bills. A payback of 3 to 5 years might be economical, because the average homeowner stays in a home that long. However, payback criteria can vary by individual; renters, for example, often face the dilemma of not wanting to make improvements whose benefits they may not be able to fully realize. Described below are a few insulation alternatives. To achieve maximum effect, the method of installation and type of insulation are of considerable importance. The proper placement of moisture barriers is essential. If insulation becomes moisture-saturated, its resistance to energy loss is significantly reduced. Barriers to moisture should be installed toward the living area because significant moisture is generated in the home through respiration, cooking, and the combustion of heating fuels. Cellulose or fiberglass insulation is the most cost-effective insulation. Blown-in cellulose or fiberglass and fiberglass batts are similar in cost and performance. Recycled cellulose insulation may be available. For the best performance, insulation should be 5 to 6 inches thick. It can be installed in attics of new and existing homes. It is typically the best choice for framed ceilings in new homes, but can be costly to install in existing framed ceilings. It is very important that this type of insulation be treated for fire resistance.
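As a rough illustration of the payback criterion described above, this short sketch (with hypothetical installed-cost and savings figures, not measured data) divides the upfront cost of an insulation upgrade by its estimated annual energy savings.

```python
# Simple-payback sketch for an insulation upgrade.
# Both dollar figures are hypothetical placeholders.

upgrade_cost = 900.00      # installed cost of the added insulation ($)
annual_savings = 240.00    # estimated energy savings per year ($)

payback_years = upgrade_cost / annual_savings
print(f"Simple payback: {payback_years:.1f} years")  # 3.8 years
```

An upgrade that pays for itself in about 3.8 years would fall within the 3- to 5-year window that the text suggests many homeowners find acceptable.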
Foamboard (R-10, 1.5 to 2 inches) provides more insulation per inch than does cellulose or fiberglass, but is also more expensive. It is best where other insulation cannot be used, such as open-beam ceilings. It is applicable for new construction or when roofing is replaced on an existing home. Two common materials are polystyrene and polyisocyanurate. Polystyrene is better in moist conditions, and polyisocyanurate has a higher R-value per inch. However, some of these insulations present serious fire-spread hazards. They should be evaluated to ensure that they are covered with fire-retardant materials and meet local fire and building codes. Radiant barrier insulation is a reflective foil sheet installed under the roof deck like regular roof sheathing. The effectiveness of a radiant barrier (Figure 13.2) depends on its emissivity (the relative power of the surface to emit heat by radiation); in general, the shinier the foil, the better. Radiant barrier insulation cuts the amount of heat radiated from the hot roof to the ceiling below. It may be draped over the rafters before the roof is installed or stapled to the underside of the rafters. The shiny side should face downward for best performance. Some manufacturers claim that a radiant barrier prevents up to 97% of the sun's heat from entering the attic.

# Wall Insulation

As shown in Table 13.1, it makes sense to insulate to high R-values in the ceiling. Insulation in walls should range from R-11 in relatively mild climate zones to R-38 in New England, the northern Midwest, the Great Lakes, and the Rocky Mountain states of Colorado and Wyoming. Insulation requirements vary within climate zones in these states and areas as well (for instance, mountainous areas and areas farther north may have more heating-degree days). The same logic of installing insulation applies to both ceilings and walls: the insulation should provide a barrier to heat and moisture transfer and buildup from inside the dwelling, where temperatures will generally be in the 68°F to 72°F (20°C to 22°C) range, compared with the much colder or hotter temperatures outside. The key to heat loss is the difference in temperatures and the time over which the heat transfer takes place across a given area or surface. The choice of heating system (gas/oil, heat pump, or electric resistance) will also affect the payback of additional wall insulation because of variation in fuel prices. For regions identified as "cold," careful attention should be given to selecting the energy fuel type; in particular, a heat pump may not be a practical option. A homeowner exploring designs and construction methods should examine the value of using structural insulated panels. The incorporation of high levels of insulation directly from the factory into building wall and ceiling components makes them outstanding barriers to heat and moisture. These integrated systems, if appropriately used, can save substantial amounts of energy when compared with traditional stick-built systems using 2×4 or 2×6 lumber. Also, building energy-efficient features (as well as electrical, plumbing, and other elements) directly into the building envelope at the factory can result in labor cost savings over more traditional methods of construction.

# Floor Insulation

Warm air expands and rises above surrounding cooler air. This process of heat transfer is called convection. Warm air, which is lighter, rises and, as it cools, falls, creating a convection current of air.
The two other processes of heat transfer are conduction (kinetic energy transferred from particle to particle, such as in a water- or electrically heated floor) and radiation (radiant energy emitted in the form of waves or particles, such as from a fireplace or a hot glowing heating element). Floor insulation limits all three modes of heat loss. A warmer floor reduces the temperature difference that drives convection. Floor insulation also directly impedes conduction and radiation to the colder air below the floor.

# Batt Insulation

The advantage of floor insulation lies in adding extra R-value without a significant increase in cost. It is cheaper to put more insulation under the floor than to add foam sheathing or change the type of wall construction to accommodate greater insulation levels. Like walls, floor cavities should be completely filled with insulation, without gaps, missing insulation, or cavity voids. Floor insulation must contact the subfloor and both joists. In many cases, it is worth the extra cost to buy enough insulation to fill the entire cavity. The amount of floor insulation required by some codes can be less than the space available. For example, an R-19 fiberglass batt is 6¼ inches thick. A floor framed with 2×8s is about 7½ inches deep, while a 2×10 floor is 9½ inches deep. A builder following a code's minimum insulation level will leave extra space that will allow for greater heat loss. To avoid this situation, the batt must be pushed up into the cavity. With the proper support, this can be done. Springy metal rods are commonly used to hold insulation up in the top of the floor cavity. Another viable option is the use of plastic straps. Figure 13.3 shows batt insulation improperly applied to the floor above a crawl space or a basement. The thickness of typical fiberglass batts can assist the designer and the builder in creating a floor system that works for the occupants. Table 13.3 shows a list of R-values, along with the associated batt thickness. Individual brands can vary by as much as 1 inch. In some areas, it is common to hang plastic mesh over floor joists. Installers drop the insulation onto the mesh before the subfloor is installed. However, hanging the mesh creates sagging bellies: insulation compresses near the framing and sags in the middle. Mesh should be attached to the bottom of the floor framing [4]. Each stage of increased floor insulation, from R-19 to R-30 or R-30 to R-38, can save energy over the life of the house. This energy translates into energy savings that are multiples of the initial installation costs. Floor insulation will generate the greatest savings in colder climates; in moderate climates, the target insulation level should depend on economics.

# Cavity Fill

According to Oikos, a commercial Web site devoted to serving professionals whose work promotes sustainable design and construction, "Buying a thicker batt may be a better option than trying to lift a thinner batt into the proper position. Material costs will climb slightly but labor should be the same. Attaching the insulation support to the bottom of the floor joist will be easier. It could also lead to a higher quality job because there is less chance for compression or gaps" (Figure 13.4) [4].

# Blow-In Insulation

A blown-in insulation system allows the builder or insulator to fill the entire cavity completely, even around pipes, wires, and other appurtenances. Using well-trained installers will pay dividends in quality workmanship.

# Doors

Today there is an endless variety of doors, ranging from metal doors with or without insulation to hollow-core and solid wood doors. When properly installed in fitted frames, doors serve as a heat barrier that helps maintain indoor temperatures. Quality metal doors with insulation are best if they have a thermal break between the interior and exterior metal surfaces; this keeps heat from being transferred from one side to the other.

# Standard Doors

Because doors take up a small percentage of a wall, insulating them is not as high a priority as insulating walls and ceilings. That said, heat loss follows the path of least resistance; therefore, doors should be selected that are functional and add to the energy efficiency of the house. Doors usually have lower R-values than the surrounding wall. Storm doors can add R-1 to R-2 to the existing door's R-value. They are a valuable addition to doors that are frequently used and those that are exposed to cold winds, snow, and other weather. Screens allow natural breezes to circulate air from outside, rather than relying entirely on air-conditioning, which can be energy intensive.
When considering replacement doors, select insulated, metal foam-core doors. Besides insulation, metal doors provide good security, seal more tightly, and tend to warp less. Metal doors also are more soundproof than conventional wood doors.

# Sliding Glass Doors

Although sliding glass doors have aesthetic appeal, they have very low R-values and hence are minimally energy efficient. To improve the energy efficiency of existing sliding glass doors, the homeowner should ensure that they seal tightly and are properly weather-stripped. Additionally, heavy insulated drapes with weights, which impede airflow, can cut down on heat loss through sliding glass doors.

# Door Installation

Doors must be installed as recommended by the manufacturer. Care must be taken to ensure that doors are installed in a manner that does not trap moisture or allow unintended introduction of air. Numerous types of sealing materials are available, ranging from foam and plastic to metal flanging and magnetic strips.

# Hot Water Systems

The hot water tank can be insulated to make it more efficient, unless its heat loss usefully warms the space where it is located. Special insulation is available for this type of appliance, and insulating the tank will reduce the energy required to deliver the hot water needed by the occupants of the dwelling. Of course, any pipe that is subject to extreme temperatures also should be insulated to decrease heat loss.

# Windows

Windows by nature are transparent. They allow occupants of a dwelling to see outside and bring in sunlight and heat from the sun. They make space more pleasant and often provide lighting for tasks undertaken in the space. Especially in the winter, these desirable characteristics help offset the heat loss. Heat gain in the summer through windows can be undesirable. Rather than give windows up, it is important to use them prudently and to keep energy considerations in mind in their design and their insulating characteristics (air, glass, plastic, or gas filler). Good design takes advantage of daylighting. Weather-stripping and sealing leaks around windows can enhance comfort and energy savings. Energy Star windows are highly recommended. Housekeeping measures can improve the efficiency of retaining heat. Heat loss follows the path of least resistance: caulking, weather-stripped framing, and window films can help.
These measures are relatively labor intensive, low to very low in cost, and can be quite satisfying to the homeowner if accomplished correctly. On the other hand, it is not easy to find the perfect materials, or even replacement parts, for old windows. When working with older windows, remember the risk of lead-based paint and the dispersion of toxic lead dust into the work area; please refer to the lead section of Chapter 5, Indoor Air Pollutants and Toxic Materials.

# Caulking and Weather-Stripping

According to the U.S. Department of Energy, caulking and weather-stripping offer substantial housekeeping benefits in preventing energy loss or unwanted heat gain.

# Caulking

Caulks are airtight compounds (usually latex or silicone) that fill cracks and holes. Before applying new caulk, old caulk or paint residue remaining around a window should be removed using a putty knife, stiff brush, or special solvent. After old caulk is removed, new caulk can then be applied to all joints in the window frame and the joint between the frame and the wall. The best time to apply caulk is during dry weather when the outdoor temperature is above 45°F (7.2°C). Low humidity is important during application to prevent cracks from swelling with moisture. Warm temperatures are also necessary so the caulk will set properly and adhere to the surface [5].

# Weather-stripping

Weather-stripping consists of narrow pieces of metal, vinyl, rubber, felt, or foam that seal the contact area between the fixed and movable sections of a window joint. It should be applied between the sash and the frame, but should not interfere with the operation of the window [6].

# Replacing Window Frames

The heat-loss characteristics and the airtightness of a window vary with the type and quality of the window frame. The types of available window frames are fixed-pane, casement, double- and single-hung, horizontal sliding, hopper, and awning. Each type varies in energy efficiency. Correctly installed fixed-pane windows are the most airtight and inexpensive choice, but are not suited to places that require ventilation. The air-infiltration properties of casement windows (which open sideways with hand cranks), awning windows (which are similar to casement windows but have hinges at the top), and hopper windows (inverted awning windows with hinges at the bottom) are moderate. Double-hung windows, which have top and bottom sashes (the part of the window that can slide), tend to be leaky. The advantage of the single-hung window over the double-hung is that it tends to restrict air leakage because there is only one moving part. Horizontal sliding windows, though suitable for small, narrow spaces, provide minimal ventilation and are the least airtight. Large older windows often have weight cavities that house the counterbalances that make it easy to raise and lower heavy sashes; these cavities should be insulated to reduce energy loss.

# Tinted Windows

Another way to conserve energy is the installation of tinted windows. Window tinting can both conserve energy and prevent damaging ultraviolet light from entering the room, where it can fade wood surfaces, fabrics, and carpeting. Low-emissivity coatings, called low-e coatings, are also available. These coatings are designed for specific geographic regions.

# Reducing Heat Loss and Condensation

The energy efficiency of windows is measured in terms of their U-values (a measure of the conductance of heat) or their R-values.
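Because the U-value is simply the reciprocal of the R-value (U = 1/R), a short conversion sketch such as the following can help when comparing windows whose labels use different units. The specific ratings shown are illustrative, not taken from any particular product.

```python
# U-value and R-value are reciprocals: U = 1 / R.
# The window ratings below are illustrative examples only.

def u_to_r(u_factor: float) -> float:
    """Convert a U-factor (Btu/h per square foot per degree F) to an R-value."""
    return 1.0 / u_factor

def r_to_u(r_value: float) -> float:
    """Convert an R-value to a U-factor."""
    return 1.0 / r_value

print(f"A U-0.35 window is about R-{u_to_r(0.35):.1f}")  # about R-2.9
print(f"An R-3.0 window is about U-{r_to_u(3.0):.2f}")   # about U-0.33
```

A lower U-value, like a higher R-value, indicates a window that loses less heat.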
Aside from a few highly energy-efficient exceptions, window R-values range from 0.9 to 3.0. When comparing different windows, it is advisable to keep the following guidance on R- and U-values in mind:

• R- and U-values are based on standards set by the American Society of Heating, Refrigerating, and Air-Conditioning Engineers [7].
• R- and U-values should be calculated for the entire window, which includes the frame.
• R- and U-values being compared should represent the same style and size of window.

The R-value of a window in an actual house is affected by the type of glazing material, the number of layers of glass, the amount of space between layers and the nature of the gas filling them, the heat-conducting properties of the frame and spacer materials, and the airtightness associated with manufacturing. For windows, rating and approval by the National Fenestration Rating Council or an equivalent body is strongly recommended [8]. Please refer to the window section of Chapter 6, Housing Structure.

# Glazing

Glazing refers to cutting and fitting windowpanes into frames. Glass has traditionally been the material of choice for windowpanes, but that is changing. Several new materials are available that can increase the energy efficiency of windows. These include the following:

• Low-emissivity (low-e) glass uses a surface coating to minimize transmission of heat through the window by reflecting 40% to 70% of incident heat while letting full light pass through the pane.
• Heat-absorbing glass is specially tinted to absorb approximately 45% of the incoming solar energy; some of this energy passes through the pane.
• Reflective glass has a reflective film that reduces heat gain by reflecting most of the incident solar radiation.
• Plastic glazing materials such as acrylic, polycarbonate, polyester, polyvinyl fluoride, and polyethylene are stronger, lighter, cheaper, and easier to cut than glass. However, they are less durable and tend to be affected by the weather more than glass is.
• Storm windows can improve the energy efficiency of single-pane windows. The simplest example of a storm window is plastic film, available in prepackaged kits, taped to the inside of the window frame. Because film can affect visibility and is easily damaged, a better choice is to attach rigid or semirigid plastic sheets (such as plexiglass, acrylic, polycarbonate, or fiber-reinforced polyester) directly to the window frame or to mount them in channels around the frame on the outside of the building. Care should be taken in installation to avoid ripples or blemishes that will affect visibility.

# Layering

The insulating capacity of single-pane windows is minimal, around R-1. Multiple layers of glass can be used to increase the energy efficiency of windows. Double- or triple-pane windows have air-filled or gas-filled spaces, coupled with multiple panes that resist heat flow. The space between the panes is critical because air spaces that are too wide (more than ⅝ inch) or too narrow (less than ½ inch) allow excessive heat transfer. Modern windows use inert gases, such as argon and krypton, to fill the spaces between panes because these gases are much more resistant to heat flow than air is. These gas-filled windows are more expensive than regular double-pane windows.

• Frame and spacer materials may be aluminum, wood, vinyl, fiberglass, or a combination of these materials, such as vinyl- or aluminum-clad wood.
• Aluminum frames are strong and are ideal for customized window design, but they conduct heat and are prone to condensation. Deterioration of these frames can be avoided by anodizing or coating, and their thermal resistance can be boosted by placing continuous strips of plastic between the interior and exterior of the frame.
• Wood frames are superior to aluminum frames in having higher R-values, tolerance of temperature extremes, and resistance to condensation. On the other hand, wood frames require considerable maintenance in the form of painting or staining; improper maintenance can lead to rot or warping.
• Vinyl window frames, made from polyvinyl chloride, are available in a wide range of styles and shapes, can be easily customized, have moderate R-values, and can be competitively priced. Large vinyl frames are reinforced with aluminum or steel bars. Vinyl windows should be selected only after consideration of the concerns surrounding the use of vinyl materials and their off-gassing characteristics.
• Fiberglass frames have the highest R-values and are not prone to warping, shrinking, swelling, rotting, or corroding. Fiberglass is not weather-resistant, however, so it should be painted. Some fiberglass frames are hollow; others are filled with fiberglass insulation.
• Spacers separating the panes in multipane windows are often aluminum, but aluminum conducts heat. In addition, in cold weather, the thermal resistance around the edge of such a window is lower than that in the center, allowing heat to escape and condensation to occur along the edges.
• Polyvinyl chloride foam separators placed along the edges of the frame reduce heat loss and condensation. Window manufacturers use foam separators, nylon spacers, and insulation materials such as polystyrene and rock wool between the glass panes inside windows.

# Other Options

Shades, shutters, and drapes used on windows inside the house reduce heat loss in the winter and heat gain in the summer. Heat gain during summer can also be minimized by the use of awnings, exterior shutters, or screens. These cost-effective window treatments should be considered before deciding on window replacement. By considering orientation, daylighting, the storage or reflection of energy from sunshine, and the materials used within the house and on the building envelope, heat loss and gain can be decreased.

# Solar Energy

Solar energy is a form of renewable energy available to homeowners for heating, cooling, and lighting. Many of the more energy-efficient new structures are designed to store solar energy. Remodeled structures may be retrofitted to increase energy efficiency by improving insulation characteristics, improving airflow and the airtightness of the structure, and enhancing the ability to use solar energy. Solar energy systems are either active or passive. Whereas active solar systems use some type of mechanical power to collect, store, and distribute the sun's energy, passive systems use the materials and design elements of the structure itself.

# Active Solar Systems

Active solar systems use devices to collect, convert, and deliver solar energy. Solar collectors on roofs or other south-facing surfaces can be used to heat water and air and to generate electricity. Active solar systems can be installed in new or existing buildings and periodically need to be inspected and maintained. Active solar energy equipment consists of collectors, a storage tank, piping or ductwork, fans, motors, and other hardware.
Flat-panel collectors (Figure 13.5) can be placed on the roof or on walls. Typically, the collector is a sandwich of one or two sheets of glass or plastic over an air space above a metal absorber plate, which is painted black to enhance heat absorption. After the sun's energy is collected and converted to heat, the heat is transferred to a liquid storage tank. The heated liquid travels through coils in the hot water tank, and the heat is transferred to the water and, in some installations, to the heating system. Most hot water systems use a liquid collector system because it is more efficient and less costly than an air-type system. In the southwestern United States, solar roof ponds have become popular for solar cooling. Evaporative cooling systems depend on water vaporization to lower the temperature of the air; these have been shown to be more effective in dry climates than in areas with extremely high relative humidity. In certain climates, such as that of the Hawaiian Islands, using solar energy is cost-effective for providing hot water, and some builders even include it as a standard feature in their homes. The total cost to the homeowner of a solar energy system consists of capital, operational, and maintenance costs. The real cost of capital may be lowered by tax credits offered at the federal level (to lower federal income taxes) and in some states. Homeowners and builders can benefit from tax credits because they lower the upfront investment cost of installing an active solar system, which is the major portion of the total cost of using solar energy; operation and maintenance costs are small in comparison with initial system costs.

# Passive Solar Systems

Buildings designed to use passive solar energy have features incorporated into their design that absorb and slowly release the sun's heat. In cold climates, the design allows the light and heat of the sun to be stored in the structure while insulating against the cold. In warm climates, the best effect is achieved by admitting light while rejecting heat. A building using passive solar systems may have the following features in the floor plan:

• Large south-facing windows
• Small windows in other directions, particularly on the north side of the structure
• Designs that allow daylight and solar heat to permeate the main living areas
• Special glass to block ultraviolet radiation
• Building materials that absorb and slowly reradiate solar heat
• Structural features such as overhangs, baffles, and summer shading to prevent summer overheating.

Passive design can be a direct-gain system, in which the sun shines directly into the building, heating it, and the heat is stored in the building materials (concrete, stone floor slabs, and masonry partitions). Alternatively, it may be an indirect-gain system, in which the thermal mass is located between the sun and the living space. Isolated gain is yet another type of system: the collection area is separated from the main living area (for example, a sunroom or a solar greenhouse), with convective loops carrying heat into the living space.

# Energy Star

Energy Star is a program supported and promoted by the U.S. Environmental Protection Agency (EPA) that helps individuals protect the environment through superior energy efficiency. For the individual household, energy-efficient choices can save families about one third on their energy bill, with similar savings in greenhouse gas emissions, without sacrificing features, style, or comfort.
When replacing household products, look for ones that have earned the Energy Star; these products meet strict energy-efficiency guidelines set by EPA and the U.S. Department of Energy. When looking for a new home, look for one that has earned Energy Star approval. If you are planning to make larger improvements to your home, EPA offers tools and resources to help you plan and undertake projects to reduce your energy bills and improve home comfort [9]. In 2004 alone, Americans, with the help of Energy Star, saved enough energy to power 24 million homes and avoid greenhouse gas emissions equivalent to those from 20 million cars, all while saving $10 billion.

# Conducting an Energy Audit

Energy audits can help identify areas where energy investments can be made, thereby reducing the energy used in lighting, heating, cooling, or meeting other demands of housing occupants. An inspection can evaluate energy-saving measures for soundness and for compliance with codes and accepted or written standards. For example, if a new addition requires the equivalent of R-19 insulation in the ceilings, this can be validated in the inspection process. Whereas an audit is generally informational, an inspection should validate that materials and workmanship have yielded a structure that protects the occupants from the elements, such as rain, snow, wind, cold, and heat. Potentially hazardous situations within a structure should be evaluated in an inspection. The overall goal of a housing inspection, in the case of energy efficiency, is to identify potentially hazardous conditions and to help create conditions under which the health and welfare of the occupants can be enhanced rather than put at risk. The housing inspector should be aware that there is variation (sometimes quite significant) in heating degree days, cooling loads, and relative humidity within given regions. Local and regional topography, as well as site conditions, can affect temperatures and moisture. Numerous Web sites listed in this chapter's Additional Sources of Information section discuss the procedures for conducting energy audits. Local and regional utilities often offer audit services and assist with selecting cost-effective conservation measures for given areas of the United States.

# Introduction

Swimming is one of the best forms of exercise available, and a residential swimming pool can provide much pleasure. Nevertheless, it takes a great deal of work and expense to keep pool water clean and free of floating debris. Without a doubt, a properly maintained and operated pool is quite rewarding. Home pools, however, are sometimes referred to as attractive nuisances or hazards. It is essential to be able to evaluate the risks associated with a pool. A regulatory agent or consultant must understand the total engineered pool system and be capable of identifying all equipment, valves, and piping systems. The piping system for a pool should be color-coded to assist the pool operator or the owner in determining the correct way to operate the swimming pool. The specific goal is to protect the owners, their families, and others who may be attracted to a residential pool. Residential pools and spas should provide clean, clear water that is free of disease agents; a safe recreational environment; effective, properly operating equipment; and effective maintenance and operation.
# Childproofing

Although it seems obvious, close supervision of young children is vital for families with a residential pool. A common scenario is a young child leaving the house without the parent or caregiver realizing it. Children are drawn to water, and they can drown even if they know how to swim. All children should be supervised at all times while in and around a pool. The key to preventing pool tragedies is to provide layers of protection. These layers include limiting pool access, using pool alarms, closely supervising children, and being prepared in case of an emergency. The U.S. Consumer Product Safety Commission (CPSC) offers these tips to prevent drowning:

• Fences and walls should be at least 4 feet high and installed completely around the pool. The gap between the bottom of the fence and grade should be no more than 2 inches. Openings in the fence should be a maximum of 4 inches. A fence should be difficult to climb over.
• Fence gates should be self-closing and self-latching. The latch should be out of a small child's reach. The gate should open away from the pool; the latch should face the pool.
• Any doors with direct pool access should have an audible alarm that sounds for 30 seconds. The alarm control must be a minimum of 54 inches high and reset automatically.
• If the house forms one side of the barrier to the pool, doors leading from the house to the pool should be protected with alarms that sound when a door is opened.
• Young children who have taken swimming lessons should not be considered "drown proof"; young children should always be watched carefully while swimming.
• A power safety cover (a motor-powered barrier that can be placed over the water area) can be used when the pool is not in use.
• Rescue equipment and a telephone should be kept by the pool; emergency numbers should be posted. Knowing cardiopulmonary resuscitation (CPR) can be a lifesaver.
• For aboveground pools, steps and ladders should be secured and locked or removed when the pool is not in use.
• Babysitters should be instructed about potential hazards to young children in and around swimming pools and their need for constant supervision.
• If a child is missing, the pool should always be checked first; seconds count in preventing death or disability.
• Pool alarms can be used as an added precaution. Underwater pool alarms can be used in conjunction with power safety covers. CPSC advises consumers to use remote alarm receivers so the alarm can be heard inside the house or in other places away from the pool area.
• Toys and flotation devices should be used in pools only under supervision; they should not be used in place of supervision.
• Well-maintained rescue equipment (including a ring buoy with an attached line and/or a shepherd's crook rescue pole) should be kept by the pool.
• Emergency procedures should be clearly written and posted in the pool area.
• All caregivers must know how to swim, know how to get emergency help, and know CPR.
• Children should be taught to swim (swimming classes are not recommended for children under the age of 4 years) and should always swim with a buddy.
• Alcohol should not be consumed during or just before swimming or while supervising children.
• To prevent choking, chewing gum and eating should be avoided while swimming, diving, or playing in water.
• Water depth should be checked before entering a pool. The American Red Cross recommends 9 feet as a minimum depth for diving and jumping.
• Rules should be posted in easily seen areas.
Rules should state "no running," "no pushing," "no drinking," and "never swim alone." Be sure to enforce the rules.

• Tables, chairs, and other objects should be placed well away from the pool fence to prevent children from using them to climb into the pool area.
• When the pool is not in use, all toys should be removed to prevent children from playing with or reaching for them and unintentionally falling into the water.
• A clear view of the pool from the house should be ensured by removing vegetation and other obstacles that block the view.

# Hazards

Numerous issues need to be considered before building a residential pool: the location of overhead power lines, installation and maintenance of ground fault circuit interrupters, electrical system grounding, electrical wiring sizing, location of the pool, and the type of vegetation near the pool. The commonly used solar covers that rest on the surface of the pool and trap the sun's heat do an excellent job of increasing the pool temperature, but they also increase the risk of drowning. If children or pets fall in and sink below the cover, it can be nearly impenetrable if they attempt to surface under it. Winterizing the pool also can be hazardous. The water in most belowground pools is seldom drained, because groundwater pressure can damage the structure of an empty pool. Therefore, water in most home pools is only lowered below the frost line for winter protection. In these cases, a pool cover is installed to keep debris and leaves from filling the pool in the winter months. The pool cover becomes an excellent mosquito-breeding area before the pool is reopened in the spring because of the decomposing vegetation on the pool cover, the rain that accumulates on top of the cover during the winter, and the eggs laid on the cover in early fall and early spring. The cover also provides ideal conditions for mosquitoes to breed: stagnant water, protection from wind that can sink floating eggs, the near absence of predators, and warm water created by the pool cover collecting heat just below the surface (Figure 14.1).

# Public Health Issues

Current epidemiologic evidence indicates that correctly constructed and operated swimming pools are not a major public health problem. In 1999, an outbreak of Campylobacter jejuni was associated with a private pool that did not have continuous chlorine disinfection and reportedly had ducks swimming in the pool [1].

# Diseases

• Intestinal diseases: Escherichia coli O157:H7, typhoid fever, paratyphoid fever, amoebic dysentery, leptospirosis, cryptosporidiosis (highly chlorine resistant), and bacillary dysentery can be a problem where water is polluted by domestic or animal sewage or waste. Swimming pools have also been implicated in outbreaks of leptospirosis.
• Respiratory diseases: Colds, sinusitis, and septic sore throat can spread more readily in swimming areas as a result of close contact or improperly treated pool water, coupled with lowered resistance because of exertion.
• Eye, ear, nose, throat, and skin infections: The exposure of delicate mucous membranes, the movement of harmful organisms into ear and nasal passages, the excessive use of water-treatment chemicals, and the presence of harmful agents in water can contribute to eye, ear, nose, throat, and skin infections. Close physical contact and the presence of fomites (such as towels) also help spread athlete's foot, impetigo, and dermatitis.

# Injuries

Injuries and drowning deaths are by far the greatest problem at swimming pools.
Lack of bather supervision is a prime cause, as is the improper construction, use, and maintenance of equipment. Injuries include evisceration, electrocution, entrapment, and entanglement. Some particular problem areas include the following:

• loose or poorly located diving boards,
• slippery decks or pool bottoms,
• poorly designed or located water slides,
• projecting or ungrated pipes and drains that can catch hair or body parts,
• drain grates of inadequate size,
• improperly installed or maintained electrical equipment, and
• improperly vented chlorinators and mishandled chlorine materials.

# Water Testing Equipment

It is essential that correct equipment be used and maintained for assessing the water quality of both swimming pools and spas. The operators of pools and spas need to monitor a wide range of chemicals that influence pool operations and water quality. Their equipment should test for chlorine, bromine, pH, alkalinity, hardness, and cyanuric acid buildup. Chlorine should be measurable over a range of 0 to 10 parts per million (ppm). Water pH levels should be accurately measured with an acid or base test. A kit to check pool chemical levels usually includes N,N-diethyl-p-phenylene-diamine (DPD) tablet tests for free and total chlorine, and other one-step tablet tests for pH, total alkalinity, calcium hardness, and cyanuric acid. The homeowner should determine acid or base demand using an already-reacted pH sample in dropper bottles. Paper test strips with multiple tests (including chlorine, bromine, and pH) are also available, but the reliability of these tests varies greatly. If used, they should be kept fresh, protected from heat and moisture, and checked against other test systems periodically if water quality problems persist. Swimming pools are engineered systems, with demanding safety and sanitary requirements that result in rather sophisticated design standards and water treatment systems. The size, shape, and operating system of the pool are based on the following considerations:

• the intended use of the pool and the maximum expected bather loading;
• the selection of skimmers, scuppers, or gutters, depending on the purpose, size, and shape of the pool;
• the recirculation pump, whose horsepower and impeller configuration are based on the distance, volume, and height of the water to be pumped;
• the filters, which are sized on the volume of water to be treated, the maximum gallons (liters) of water per minute that can be delivered by the pump, and the type of filter media selected; and
• the chemical feeder sizes and types, which are based on the chemicals used, the total quantity of water in the system, expected use rates, and external environmental factors, such as the amount of sunlight and wind that affect the system.

# Disinfection

The length of time it takes to disinfect a pool after a fecal accident depends on the type of accident and the chlorine level chosen to disinfect the pool. Disinfection times are calculated from CT values, where C is the free chlorine concentration in ppm and T is the time in minutes: the CT value for inactivating Giardia is 45, and the value for Cryptosporidium is 9,600. A formed stool raises concern mainly about Giardia; a diarrheal accident raises concern about Cryptosporidium. If a different chlorine concentration or inactivation time is used, the CT value must remain the same. For example, to determine the length of time needed to disinfect a pool at 15 ppm after a diarrheal accident, the following formula is used: C×T = 9,600. Solving for time: T = 9,600 ÷ 15 ppm = 640 minutes, or approximately 10.7 hours. It would thus take about 10.7 hours to inactivate Cryptosporidium at 15 ppm.
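The CT arithmetic above generalizes readily. Here is a minimal sketch (in Python; the CT values of 45 and 9,600 are those stated above, and the function name is illustrative):

```python
# Inactivation time from a CT value: C (ppm) x T (minutes) = CT.
# CT values as given in the text: 45 for Giardia, 9,600 for Cryptosporidium.

CT_VALUES = {"Giardia": 45.0, "Cryptosporidium": 9600.0}

def inactivation_time_hours(organism: str, free_chlorine_ppm: float) -> float:
    """Return the hours needed at a given free chlorine level (ppm)."""
    minutes = CT_VALUES[organism] / free_chlorine_ppm
    return minutes / 60.0

# Reproduces the worked example: 9,600 / 15 ppm = 640 minutes = 10.7 hours.
print(f"{inactivation_time_hours('Cryptosporidium', 15.0):.1f} hours")
print(f"{inactivation_time_hours('Giardia', 15.0) * 60:.0f} minutes")  # 3 minutes
```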
You can do the same calculation for Giardia by using its CT value of 45. CDC has Web sites that contain excellent information about safe swimming recommendations, recreational water diseases, and disinfection procedures for fecal accidents [3,4].

# Content Turnover Rate

The number of times a pool's contents can be filtered through its filtration equipment in a 24-hour period is the turnover rate of the pool. Because the filtered water is diluted with the nonfiltered water of the pool, the turbidity continually decreases. Once the pool water has reached equilibrium with the sources of contamination, a 6-hour turnover rate will result in 98% clarification if the pool is properly designed. A typical-use pool should have a pump and filtration system capable of pumping the entire contents of the pool through the filters every 6 hours. To determine compliance with this 6-hour turnover standard, the following formula is used (a short code sketch of this calculation appears after the filter discussion below):

Turnover rate (hours) = pool volume (gallons) ÷ [flow rate (gallons per minute) × 60 (minutes per hour)]

Following is a sample calculation of the pool content turnover rate using the rate-of-flow reading from the flow meter:

Turnover rate = 90,000 (pool volume in gallons) ÷ [180 (gallons per minute) × 60 (minutes per hour)] = 90,000 ÷ 10,800 = 8.3 hours

The above pool would not meet the required turnover rate of 6 hours. The cause could be improperly sized piping or restrictions in the piping, an undersized pump, or undersized or clogged filters. This turnover rate would probably result in cloudy water if the pool is used at the normal bather load. The decreased circulation would also make it difficult for the disinfecting equipment to meet the required levels.

# Filters

Pool filters are not designed to remove bacteria, but to make the water in the pool clear. Normal tap water looks quite dingy if used to fill a pool and, in some cases, the bottom of the pool is not visible. The maximum turbidity level of a pool should be less than 0.5 nephelometric turbidity units. Pool filters should be sized to ensure that the complete contents of the pool pass through the filter once every 6 hours. Home pools typically use one of three types of filters.

# High-rate Sand Filters

High-rate sand filters were introduced more than 30 years ago and reduced the size of the conventional sand filter by 80%. The sand filter is the most popular filter on the market. High-rate sand filters use a silica sand that has been strained to give it a uniform size. It is referred to as pool-grade #20 silica sand. The sand is normally 0.45 millimeters (mm) to 0.55 mm in diameter. As water passes through the filter, the sharp edges of the sand trap dirt from the pool water. When the backpressure of the filter increases by 3 to 5 psi, the filter needs to be cleaned. This is usually accomplished by reversing the flow of the water through the filter and flushing the dirt out the waste pipe until the water being discharged appears clear. These filters perform best at flow rates below 15 to 20 gallons per minute per square foot of filter area, depending on the manufacturer of the filter.

# Cartridge Filters

Cartridge filters have been around for many years, but only recently have gained popularity in the pool industry. They are similar to the filter on a car engine. The water is passed through the cartridge and returned to the pool. When the pressure of a cartridge filter increases by approximately 5 psi, the pump is turned off and the top of the filter is removed. The cartridge is removed and either discarded and replaced or, in some cases, washed.
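Returning to the turnover-rate formula above, the following short sketch (in Python, using the figures from the worked example) performs the same compliance check:

```python
# Turnover rate = pool volume (gallons) / (flow rate in gpm x 60 min/hour).

def turnover_hours(pool_gallons: float, flow_gpm: float) -> float:
    """Hours needed to move one full pool volume through the filter."""
    return pool_gallons / (flow_gpm * 60.0)

# Worked example from the text: 90,000 gallons at 180 gallons per minute.
hours = turnover_hours(90_000, 180)
print(f"Turnover rate: {hours:.1f} hours")       # 8.3 hours
print("Meets 6-hour standard:", hours <= 6.0)    # False
```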
# Diatomaceous Earth

Diatomaceous earth (DE) is a porous powder made from the silica skeletons of billions of microscopic organisms (diatoms) that were buried millions of years ago. There are two primary types of DE filters, but they both work the same way. Water comes into the filter, passes through the DE, and is returned to the pool. If properly sized and operated, DE filters are considered by some to provide the highest quality of water. They are capable of filtering the smallest particle size of all the filter types. It is usually adequate to change the DE once every 30 days. However, if the pool water is very dirty, it is not uncommon to change it three to four times a day until the water is clear. The frequency of backwashing will depend on many factors, including the size of the filter, the flow rate of the plumbing, and the bather load in the pool. When the pressure reading on the filter reaches the level set in the manufacturer's manual, the filter is ready for backwashing.

# Filter Loading Rates

The specification plate on the side of approved residential or commercial swimming pool filters contains such information as the manufacturer, type of filter, serial number, surface area, and designed loading rate. Knowing the surface area of the filter permits calculation of the number of gallons flowing through the filter per minute. An excessive flow rate can push the media into the pool or force pool solids and materials through the media, resulting in turbid water. Figure 14.2 shows a typical home pool treatment system. Regulations typically specify how much water can be filtered through the various types of pool filtration systems.

# Disinfectants

Many disinfectants are used in pools and spas around the world, including halogen-based compounds (chlorine, bromine, iodine), ozone, and ultraviolet light with hydrogen peroxide. Those used most often are chlorine, bromine, and iodine, and each has advantages and limitations.

Chlorine-Pools can be disinfected with chlorine-releasing compounds, including hypochlorite salt compounds. Calcium hypochlorite is inexpensive and popular for cold-water pools, but is not suitable for hot pools and spas because it promotes scaling on heat exchangers and piping. Chlorine levels can be rapidly depleted by heavy use, and regular checks should be made to ensure that disinfection is maintained. Some adjustment of pH is required for most forms of chlorine disinfection. When chlorine gas is used, a fairly high alkalinity needs to be maintained to neutralize the acid formed during dosing [5]. Sodium hypochlorite is a liquid form of chlorine with a pH of 13; it causes a slight increase in the pH of the pool water, which should be adjusted with an acidic mixture. The sun's rays will degrade sodium hypochlorite. Chlorinated isocyanurates are available in three forms: granular, tablet, and stick. The granular form contains 55%-62% available chlorine, and the stick and tablet forms contain 89% available chlorine [6].

Bromine-Bromine needs to be used at levels twice those of chlorine to achieve similar disinfection. Bromine is available as the sodium or potassium salts. In the presence of ammonia, bromine rapidly forms relatively unstable ammonia bromamines that possess disinfection efficiencies comparable to that of free bromine. It is also unnecessary to destroy ammonia bromamines because they do not produce irritating odors [5].

Iodine-Potassium iodide is a white, crystalline chemical. It needs an oxidizer, such as hypochlorite, to react with organic debris and bacteria. Iodine does not react with ammonia, hair, or bathing suits, or cause eye irritation, but it can react with metals, producing greenish-colored pool water [6].

Ozone-Ozone is a very powerful oxidant and is effective against viruses. It can only be generated at the point of use, and commercial generation units are safe for use. Ozone dosing is practical only where water is circulated off-pool, because adequate ozone-water mixing is essential for maximum oxidation.

Tables 14.1-14.4 serve as a quick problem-solving reference for the home pool owner and operator. The CDC Web site (www.cdc.gov/healthyswimming) provides a great deal of useful information for both the inspector and the homeowner.

# Pool Water Hardness and Alkalinity

The ideal range of water hardness for a plaster pool is 200 to 275 ppm. The ideal range for a vinyl, painted, or fiberglass surface is 175 to 225 ppm. Excess hardness causes scaling, discoloration, and filter inefficiency. Less-than-recommended hardness results in corrosion of most contact surfaces. Alkalinity should be 80 to 120 ppm.
High alkalinity causes scale and high chlorine demand; low alkalinity causes unstable pH. Sodium bicarbonate will raise the alkalinity level. The pool water will be cloudy if alkalinity is over 200 ppm.

# Liquid Chemical Feeders

Positive Displacement Pump

A positive displacement pump is preferable to erosion disinfectant feeders. Positive displacement pumps can be set to administer varied and specific chemical dosage rates to ensure that a pool does not become contaminated with harmful microorganisms. A positive displacement pump does need routine cleaning, descaling, and servicing. Running a weak muriatic acid or vinegar solution through the pump weekly can minimize most major servicing of the pump. Most service on the pump involves one of four areas:

# Erosion and Flow-through Disinfectant Feeders

These feeders work by the action of water moving around a solid cake of chlorine and eroding it. The feeders work quite well for smaller pools, but require considerable care and maintenance. The variables that affect the effectiveness of erosion feeders are

1. solubility of the chlorine cake or tablet;
2. surface area of the cake or tablet;
3. amount of water flowing around the cake or tablet;
4. concentration of chlorine in the cake or tablet; and
5. number of cakes or tablets in the feeder.

Note: For safety reasons, the disinfectant cake must not be accessible.

# Spas and Hot Tubs

Hot tubs (large tubs filled with hot water for one or more people) and spas (tubs with aerating or swirling water) are used for pleasure and are increasingly being recommended for therapy. The complexity of these devices increases with each new model manufactured. Newer models often have both ozone and ultraviolet light emitters for enhanced disinfection (see the Disinfectants section earlier in this chapter). However, the environment of the spa and hot tub, if not cleaned and operated correctly, can become a culture medium for microorganisms. Because the warm water is at the ideal temperature for growth of microorganisms, good disinfection is critical. Table 14.5 provides suggested hot tub and spa operating parameters. It is essential that all equipment works properly and that the units are cleaned and disinfected on a routine basis. Monitoring the water temperature is very important and, depending on the health of the user, can be a matter of life and death. Time in the heated water should be limited, and the temperature for pregnant users should be below 103°F (39°C) to protect the unborn baby.

From Chapter 6, Housing Structure: the front porch of the home shown is constructed of pressure-treated, insect-resistant lumber. The use of such lumber should be carefully evaluated with respect to the chemicals used and the potential for human exposure to the treated wood. Composite wood products and plastic decking materials, such as Trex, are available as an alternative to pressure-treated wood. A proper hand railing and balusters should be installed.

"We never know the worth of water till the well is dry."
Thomas Fuller, Gnomologia, 1732

# Drainage System

Water is brought into a house, used, and discharged through the drainage system. This system is a sanitary drainage system, carrying only interior wastewater.

# Sanitary Drainage System

The proper sizing of the sanitary drain or house drain depends on the number of fixtures it serves. The usual minimum size is 4 inches in diameter.
The materials used are usually cast iron, vitrified clay, plastic, and, in rare cases, lead. The top two pipe choices for drain, waste, and vent (DWV) systems are PVC and ABS. For proper flow in the drain, the pipe should be sized and angled so that it runs approximately half full. This ensures proper scouring action so that the solids contained in the waste are not deposited in the pipe. Joining PVC in a DWV system is a two-step process requiring a primer and then cement; ABS uses cement only. In most cases, the decision between them will be made on the basis of which material is sold in an area. Few areas stock both materials, because local contractors usually favor one or the other. ABS costs more than PVC in many areas, but Schedule 40 PVC DWV solid-core pipe is stronger than ABS. Their durability is similar.

Size of House Drain. The Uniform Plumbing Code Committee has developed a method of sizing house drains in terms of fixture units. One fixture unit equals approximately 7½ gallons of water per minute, the surge flow rate of water discharged from a wash basin in 1 minute. All other fixtures have been related to this unit. Fixture unit values are shown in Table 9.1.

Grade of House Drain. A house drain should be sloped toward the sewer to ensure scouring of the drain. The usual pitch of a house or building sewer is a ¼-inch drop per foot of length. The size of the drain is based on the fixture units flowing into the pipe and the slope of the drain. Table 9.2 shows the required pipe size for the system.

House Drain Installation. Typical branch connections to the main are shown in Figure 9.5.

Fixture and Branch Drains. A branch drain is a waste pipe that collects the waste from two or more fixtures and conveys it to the sewer. It is sized in the same way as the sewer, taking into account that all toilets must have a minimum 3-inch-diameter drain and that only two toilets may connect into one 3-inch drain. All branch drains must join the house drain with a Y-fitting, as shown in Figure 9.5. The same is true for fixture drains joining branch drains. The Y-fitting is used to eliminate, as much as possible, the deposit of solids in or near the connection; a buildup of these solids will block the drain. Recommended minimum sizes of fixture drains are shown in Table 9.2.

# Traps

A plumbing trap is a device used in a waste system to prevent the passage of sewer gas into the structure without appreciably hindering the fixture's discharge. All fixtures connected to a household plumbing system should have a trap installed in the line. The effects of sewer gases on the human body are well known; many of the gases are extremely harmful, and certain sewer gases are explosive.

P-trap. The most commonly used trap is the P-trap (Figure 9.6). The depth of the seal in a trap is usually 2 inches; a deep-seal trap has a 4-inch seal.

# Sludge Measurement

To measure sludge, make a sludge-measuring stick using a long pole with at least 3 feet of white cloth (e.g., an old towel) on the end. Lower the measuring stick into the tank, behind the outlet baffle to avoid scum particles, until it touches the tank bottom. It is best to pump each tank every 2 to 3 years; annual checking of the sludge level is recommended. The sludge level must never be allowed to rise within 6 inches of the bottom of the outlet baffle. In two-compartment tanks, be sure to check both compartments. When a septic tank is pumped, there is no need to deliberately leave any residual solids.
Enough will remain after pumping to restart the biologic processes.

# Flow of Electric Current
Electricity is usually created by a generator that converts mechanical energy into electrical energy. The electricity may be the result of water, steam, or wind powering or turning a generator. The electricity is then run through a transformer, where the voltage is increased to several hundred thousand volts and, in some instances, to a million or more volts. This high voltage is necessary to increase the efficiency of power transmission over long distances.

# Definitions of Terms Related to Electricity
Conductor gauge-A numeric system used to label electric conductor sizes, given in American Wire Gauge (AWG). The larger the AWG number, the smaller the wire size.

Current-The flow of electricity through a circuit.
• Alternating current is an electric current that reverses its direction of flow at regular intervals; for example, it alternates 60 times every second in a 60-cycle system. This type of power is commonly found in homes.
• Direct current is an electric current flowing in one direction. This type of current is not commonly found in today's homes.

Electricity-Energy that can be used to run household appliances; it can produce light, heat, shocks, and numerous other effects.

Fuse-A safety device that cuts off the flow of electricity when the current flowing through the fuse exceeds its rated capacity.

Ground-To connect with the earth, as to ground an electric wire directly to the earth or indirectly through a water pipe or some other conductor. Usually, a green-colored wire is used for grounding the whole electrical system to the earth. A copper wire is usually used to ground individual electrical components of the whole system. (The home inspector should never assume that insulation color wiring codes have been used appropriately.)

Ground fault circuit interrupter (GFCI)-A device intended to protect people from electric shock. It de-energizes a circuit, or a portion of a circuit, within an established very brief period of time when the current to ground exceeds some predetermined value (less than that required to operate the overcurrent protective device of the supply circuit).

Hot wires-Those that carry the electric current or power to the load; they are usually black or red.

Insulator-A material that will not permit the passage of electricity.

Kilowatt-hour (KWH)-The amount of energy supplied by one kilowatt (1,000 watts) for 1 hour (3,600 seconds), equal to 3,600,000 joules. Electric bills are usually figured by the number of KWHs consumed.

Neutral wire-The third wire in a three-wire distribution circuit; it is usually white or light gray and is connected to the ground.

Resistance-A measure of the difficulty of electric current to pass through a given material; its unit is the ohm.

Service-The conductors and equipment for delivering energy from the electricity supply system to the wiring system of the premises.

Service drop-The overhead service conductors from the last pole or other aerial support to and including the splices, if any, connecting to the service entrance conductors at the building or other structure.

Service panel-Main panel or cabinet through which electricity is brought to the building and distributed. It contains the main disconnect switch and fuses or circuit breakers.

Short circuit-An improper connection between hot and neutral wires (or between a hot wire and ground) that bypasses the load, allowing excessive current to flow through the circuit.
Volt-The unit for measuring electrical pressure, or electromotive force. Its symbol is "E."

Voltage drop-A voltage loss when wires carry current. The longer the cord, the greater the voltage drop.

Watt-The unit of electric power. Volts times amperes = watts.

This high transmission voltage is stepped down (reduced) to normal 115/230-volt household current by a transformer located near the point of use (residence). The electricity is then transmitted to the house by a series of wires called a service drop. In areas where the electric wiring is underground, the wires leading to the building are buried in the ground. For electric current to flow, it must travel from a higher to a lower potential voltage. In an electrical system, the hot wires (black or red) are at a higher potential than the neutral or ground wires (white or green).

# Coal
The four types of coal are anthracite, bituminous, subbituminous, and lignite. Coal is prepared in many sizes and combinations of sizes. The combustible portions of coal are fixed carbon, volatile matter (hydrocarbons), and small amounts of sulfur. In combination with these are noncombustible elements composed of moisture and impurities that form ash. The various types differ in heat content, which is determined by analysis and is expressed in British thermal units per pound. Improper coal furnace operation can result in an extremely hazardous and unhealthy home. Ventilation of the area surrounding the furnace is very important to prevent heat buildup and to supply air for combustion.

# Solar Energy
Solar energy has gained popularity in the last 25 years as the cost of installing solar panels and battery storage has decreased. Improved technology for panels, panel installation, piping, and batteries has created a much larger market. Solar energy has largely been used to heat water. Today, there are more than a million solar water-heating systems in the United States. Solar water heaters use direct sun to heat either water or a heat-transfer fluid in collectors [3]. That water is then stored for use as needed, with a conventional system providing any necessary additional heating. A typical system will reduce the need for conventional water heating by about two-thirds, minimizing the cost of electricity or fossil fuel and thus the environmental impact associated with their use.

Bromine-Bromine needs to be used at levels twice those of chlorine to achieve similar disinfection. Bromine is available as sodium or potassium salts. In the presence of ammonia, bromine rapidly forms relatively unstable ammonia bromamines that possess disinfection efficiencies comparable to that of free bromine. It is also unnecessary to destroy ammonia bromamines because they do not produce irritating odors [5].

Iodine-Potassium iodide is a white, crystalline chemical. It needs an oxidizer, such as hypochlorite, to react with organic debris and bacteria. Iodine does not react with ammonia, hair, or bathing suits, or cause eye irritation, but it can react with metals, producing greenish-colored pool water [6].

Ozone-Ozone is a very powerful oxidant and is effective against viruses. It can be generated only at the point of use; commercial generation units are safe for use. Ozone dosing is only practical where water is circulated off-pool, because adequate ozone-water mixing is essential for maximum oxidation.
Ozone generators may be of the ultraviolet lamp or corona discharge type. The efficiency of the ultraviolet lamp declines with time, and the lamp and the associated activated charcoal filter will need replacement [5].

Ultraviolet Light-Ultraviolet light, like ozone, is sometimes used for off-pool water disinfection. Ultraviolet light has no effect on pH or color and has little effect on the chemical composition of the water. However, color, turbidity, and the chemical composition of the water can interfere with ultraviolet light transmission, so the water must be adequately treated before ultraviolet light exposure. Hydrogen peroxide is often used for this purpose because it is relatively safe in low concentrations, is nonflammable, and produces oxygen and water as end products. For the ultraviolet light plus hydrogen peroxide system to be effective, it must operate 24 hours a day. Ultraviolet light disinfection is not pH dependent, but the addition of hydrogen peroxide results in slightly acidic conditions [5].

Silver-copper Ionization-Sanitizing can be accomplished by using an ionizing unit that introduces silver and copper ions into the water by electrolysis, that is, by passing a low electric current through silver and copper electrodes. The limiting factors in using this system in the pool and spa are cost, slow bactericidal action, and potentially high contaminant levels caused by bather loads. Also, black spots can form on pool surfaces if the proper parameters of water chemistry are not maintained. An approved chemical disinfectant must be used with an ionizing unit [6].

The effective use of halogen disinfectants is based on the pH, hardness, and alkalinity of the water. Improper pH, hardness, and alkalinity levels in the pool can render even high levels of disinfectant useless in killing disease-causing organisms. Table 14.1 summarizes water-quality problems that affect pools and suggests corrective actions.

# Effect of pH
The ideal pH to avoid eye irritation is 7.3. Bacteria- or algae-killing effectiveness improves at an even lower pH. National standards typically recommend a range of 7.2 to 7.6, which is cost-effective. Table 14.2 demonstrates the loss of disinfection as pH increases; a short worked example at the end of this section shows why.

# Chlorine Disinfectants
The options for selecting the form of chlorine disinfectant to use in pools are quite varied, and the choices are complex. Table 14.3 gives the properties of each form. Gas chlorine costs the least, and the relative cost of each form of chlorine increases as you move right across the table. The cost of the disinfectant tends to be lower the higher the concentration of available chlorine. The safety issues are more complex than they might appear. The hazards of gas chlorine are well known. The solid forms of chlorine, such as calcium hypochlorite, are quite reactive. When exposed to organic compounds, they can generate a great deal of heat and are potentially explosive. Because solid chlorine seems inert to the untrained worker, it is often stored beside motor oil or gasoline or left where moisture can start a chemical reaction. Even a pencil with a graphite core that drops from a shirt pocket into a container of calcium hypochlorite could result in a chemical reaction leading to a fire that would release free chlorine gas [7]. The following chemical reactions produce chlorine byproducts that reduce the effectiveness of chlorine and cause most eye irritation.
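The reactions themselves did not survive in this copy of the manual; the byproducts described, however, are the chloramines, formed stepwise when free chlorine (hypochlorous acid) combines with ammonia introduced by bathers. The standard reaction series is:

NH3 + HOCl → NH2Cl (monochloramine) + H2O
NH2Cl + HOCl → NHCl2 (dichloramine) + H2O
NHCl2 + HOCl → NCl3 (nitrogen trichloride) + H2O

The pH effect noted under Effect of pH can also be made concrete. Free chlorine in water partitions between hypochlorous acid (HOCl), the strongly germicidal form, and the much weaker hypochlorite ion (OCl-), in an acid-base equilibrium with a pKa of roughly 7.5 near typical pool temperatures. The short Python sketch below computes the HOCl fraction at several pH values; the pKa constant and the pH list are illustrative assumptions for this example, not values taken from Table 14.2.

```python
# Minimal sketch: fraction of free chlorine present as germicidal HOCl.
# Assumes the Henderson-Hasselbalch relationship and a pKa of about 7.5
# for HOCl (the pKa shifts slightly with water temperature).

PKA_HOCL = 7.5  # assumed acid dissociation constant for hypochlorous acid

def hocl_fraction(ph: float) -> float:
    """Return the fraction of free chlorine in the active HOCl form at a given pH."""
    return 1.0 / (1.0 + 10.0 ** (ph - PKA_HOCL))

if __name__ == "__main__":
    for ph in (7.0, 7.3, 7.6, 8.0):
        print(f"pH {ph:.1f}: about {hocl_fraction(ph):.0%} of free chlorine is HOCl")
```

Run over this range, the sketch shows the active fraction falling from roughly three-quarters at pH 7.0 to under one-quarter at pH 8.0, which is why the recommended 7.2 to 7.6 range balances disinfection power against eye irritation and corrosion.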
# DISCLAIMER
Mention of company names or products does not constitute endorsement by the National Institute for Occupational Safety and Health.

DHHS (NIOSH) Publication No. 81-131

# PREFACE
The Occupational Safety and Health Act of 1970 emphasizes the need to protect the health and safety of workers occupationally exposed to an ever-increasing number of potential hazards. Consequently, the National Institute for Occupational Safety and Health (NIOSH) has implemented a program to evaluate the adverse health effects of chemical and physical agents and industrial processes. This summary of NIOSH's Occupational Hazard Assessment of Coal Liquefaction is a result of that program. By addressing the hazard while direct coal liquefaction technology is still being developed, the risk of potential adverse health effects can be substantially reduced in both experimental and commercial plants.

While the occupational hazard assessment, Volume II, is a critical, lengthy review of the scientific and technical information available and discusses the occupational safety and health issues of pilot plant operations, this summary document presents the highlights in a shorter, more usable form for a wider audience. Both documents are intended for use by workers, industry, trade associations, government agencies, scientific and technical investigators, and the public; those interested in a more thorough and detailed discussion should read the Occupational Hazard Assessment of Coal Liquefaction. The information and recommendations presented should facilitate development of specific procedures for hazard control in individual workplaces by persons immediately responsible for health and safety. NIOSH will periodically update and evaluate new data and information as they become available and, at the appropriate time, will consider proposing recommendations for a standard to protect workers in commercial coal liquefaction facilities.

Ronald F. Coene, P.E.
Acting Director
National Institute for Occupational Safety and Health

# SYNOPSIS
The coal liquefaction process converts coal into hydrocarbon products. In doing so, it both uses and produces toxicants potentially hazardous to worker health. The toxicants are varied, ranging from simple chemicals to complex mixtures of organic carcinogens. The possible health effects are also varied. Animal studies have demonstrated that some of the process chemicals have produced tumors at the site of application. Some workers may be exposed by inhalation of gases, vapors, or airborne particles; skin contact with airborne material; contact with contaminated surfaces; or accidental ingestion. Some of these same chemicals have caused severe long-term effects, such as skin and lung cancer, in workers in similar industries. Other potential adverse health effects may include fatal poisoning from inhalation, severe respiratory irritation, and chemical burns. Fires and explosions can also occur. This document is designed to provide information that can be used to guard against these potential adverse health effects.
Measures such as engineering controls, specific work practices, personal protective equipment, medical surveillance and exposure monitoring, recordkeeping, and emergency plans and procedures are recommended.

# INTRODUCTION
Coal liquefaction is a process that converts coal into hydrocarbon products. Most often, the major products are condensed aromatic liquids, with some gases and solids also produced, depending on operating conditions and the type of coal and process used. Liquefaction is accomplished by breaking the chemical and physical bonds of the coal in the presence of a hydrogen source. The four major coal liquefaction processes are: (1) pyrolysis, in which the coal is heated in the absence of oxygen and hydrogen is abstracted from coal hydrogen donors; (2) solvent extraction, which is performed at high temperatures and pressures in the presence of hydrogen and a solvent; (3) direct hydrogenation, in which the coal slurry is hydrogenated in contact with a catalyst under high pressure and temperature; and (4) indirect liquefaction, in which the coal is gasified with steam and oxygen and then catalytically converted to liquid products. This assessment is concerned with direct liquefaction, ie, processes 1-3.

In the United States, coal liquefaction technology is only beginning to develop. It currently consists of a small number of process development units (PDUs) for establishing process feasibility, plus pilot plants. Processing capacities range from less than 5 tons per day (4.5 Mg/d) for the PDUs up to 600 tons per day (544 Mg/d) for the pilot plants. These pilot plants are primarily a testing ground for determining optimum operating conditions for future commercial plants. Previous projections based on plans for development of synthetic fuel technology show that as many as 12 commercial-sized liquefaction plants may be operating by 1995 (assuming each produces approximately 50,000 bbl/d). Collectively, these plants are likely to employ more than 12,000 workers [1]. An increased number of coal miners must also be considered part of the total impact of coal liquefaction technology on occupational safety and health.

In addition to evaluating health effects, this summary document recommends engineering controls, work practices, personal protective equipment and clothing, medical surveillance, and exposure monitoring to reduce the health risks and provide for increased worker safety in coal liquefaction plants. Greater detail is given in the full-length document [2]. NIOSH's recommendations can be used for planning occupational safety and health programs and as a basis for technology development.

In the full-length document on coal liquefaction, the occupational hazards associated with the direct process are thoroughly reviewed. NIOSH has determined that chemicals and physical agents identified in the process could present serious health and safety hazards to the workers in pilot plants and in future commercial plants unless precautions are taken. One serious health hazard associated with coal liquefaction processes is cancer [3-5]. Certain aromatic amines and polycyclic hydrocarbons measured at two pilot plants investigated by NIOSH are known carcinogens or co-carcinogens [6].
In addition, coal liquefaction materials applied to the skin of laboratory animals have produced cancerous growths [7-10]. Coal liquefaction materials have also caused adverse reproductive effects in laboratory animals [11,12]. A number of other chemicals found in coal liquefaction processes may adversely affect the respiratory and nervous systems, as well as the blood, liver, and kidneys.

# HEALTH EFFECTS
Because coal liquefaction is an emerging industry, the opportunity for epidemiologic investigation has been limited to evaluation of worker medical surveillance programs in pilot plants. In 1960, Sexton [13] suggested that workers exposed to coal liquefaction materials have an incidence of cancerous and precancerous skin lesions greater than expected in a similar population not occupationally exposed to the same hazard. In spite of a number of limitations in this study (including a small number of exposed workers, no appropriate comparison population, and a lack of information concerning work histories, medical histories, and exposure levels), the excess incidence of cancerous and precancerous skin lesions reported would still be sufficient to cause concern. Two other reports [14,15] demonstrate that the most common medical problems at pilot plants have been dermatitis, eye irritation, and thermal burns. Although there are limitations in the epidemiologic data available on coal liquefaction, there are strong implications from animal toxicologic and human epidemiologic studies on analogous compounds that certain agents in the coal liquefaction process pose serious health hazards. Such examples include chemical agents found in coal tar products [16] and coke oven emissions [17]. Therefore, every possible precautionary measure should be taken to protect the worker.

Processing of abrasive slurries, particularly at high temperatures and pressures, accelerates erosion and corrosion that may cause leaks in process equipment. Leaks increase the potential for worker skin contact with process materials, inhalation exposure, burns, and fires. In gas handling systems, leaks increase the possibility of worker exposure to high, and potentially fatal, levels of carbon monoxide and hydrogen sulfide. Because coal liquefaction processes operate above the autoignition temperature of many process materials, leaks could result in immediate fires. Furthermore, many process materials are explosive, and leaks may result in a mixture that exceeds its lower explosive limit. This could result in an explosion.

The following paragraphs describe specific health effects. Some guidance is given on elements of the appropriate physical examinations and laboratory tests.

# Skin Effects
NIOSH has previously reported on some chemicals that are similar to those in coal liquefaction processes and that affect the skin. For example, creosote has caused keratitis, and crude naphtha, creosote, and residual pitch have caused skin cancer [16]. These materials can also inflame hair follicles and sebaceous glands. Cresol causes a burning sensation, erythema, localized anesthesia, and a brown discoloration of the skin upon contact [18]. NIOSH has also described skin effects as major concerns in workers exposed to carbon black process materials [19], refined petroleum solvents [20], coke oven emissions [17], and phenol [21].
Skin sensitization may occur after skin contact with or inhalation of many different chemicals. Patch testing should be used as a diagnostic aid only after a worker has symptoms of skin sensitization. Patch tests must not be used as a preplacement or screening technique, because they may cause sensitization; medical history and physical examination are much safer ways of estimating whether a worker is sensitized. Skin lesions should be monitored, and written and photographic records of their development should be made and kept. If lesions change, the worker should be referred to a dermatologist.

# Liver Effects
NIOSH has previously concluded that exposure to cresol may result in damage to the liver and that medical surveillance with emphasis on any preexisting liver disorder should be conducted on workers exposed to cresol [18]. Because of the vast reserve functional capacity of the liver, acute hepatotoxicity or severe cumulative chronic damage may occur before recognizable symptoms appear. These symptoms include nausea, vomiting, diarrhea, weakness, general malaise, and jaundice. Numerous screening tests for liver dysfunction exist; those most frequently used are serum bilirubin, serum glutamic-oxaloacetic transaminase (SGOT), serum glutamic-pyruvic transaminase (SGPT), gamma glutamyl transpeptidase (GGTP), and isocitric dehydrogenase.

# Kidney Effects
NIOSH has previously concluded that exposure to cresol [18], carbon disulfide [22], coke oven emissions [17], and coal tar products [16] can result in impaired renal function. Renal function impairment can be screened by urinalysis augmented by more specific tests.

# Respiratory System Effects
Inhalation of chemicals in coal liquefaction plants may result in damage to the respiratory system. For example, sulfur dioxide and ammonia are respiratory tract irritants, and substantial exposures to ammonia can produce chronic bronchitis, laryngitis, tracheitis, bronchopneumonia, and pulmonary edema [23]. Exposure to sulfur dioxide can cause asphyxia and severe chemical bronchopneumonia [24]. Inhalation of coke oven emissions and coal tar products can lead to chronic lung disease, including cancer [16,17]. Inhalation and pulmonary deposition of free silica can cause silicosis, a form of pulmonary fibrosis. If a worker does not have symptoms of respiratory distress, physical examination alone may not detect early pulmonary illness. Therefore, a chest X-ray examination is recommended at the time of employment and thereafter at the discretion of the physician. In addition, pulmonary function tests, ie, forced vital capacity (FVC) and forced expiratory volume in 1 second (FEV1), are also recommended screening tests.

# Blood Effects
Because exposure to benzene has been linked with leukemia [25], workers exposed to it should have a complete blood count (CBC) as a screening test for blood disorders. A CBC includes determination of hemoglobin (Hb) concentration, hematocrit, red blood cell (RBC) count, reticulocyte count, white blood cell (WBC) count, and WBC differential count.

# Central Nervous System Effects
Cresol [18] is among the chemicals in coal liquefaction processes that can affect the central nervous system.

# MEDICAL SURVEILLANCE
Medical monitoring is essential to protect workers in coal liquefaction plants from adverse health effects.
To be effective, medical surveillance should be both well timed and thorough to detect adverse effects on worker health as early as possible. Thoroughness in medical surveillance is necessary because of the many chemical and physical agents a worker may be exposed to. Worker exposure is not predictable; it can occur whenever any closed process equipment accidentally leaks, vents, or ruptures. Not all adverse effects of exposure in coal liquefaction plants are known, but most of the major organs (the liver, kidneys, lungs, heart, and skin) may be affected.

Medical surveillance should consist of preplacement, periodic, and termination examinations. Preplacement examinations set a baseline for the worker's general health and for specific tests, eg, future audiometric tests. During these examinations, a worker's physical ability to perform his job while using a respirator can be determined. These examinations can also detect some preexisting conditions that may be aggravated by, or make the worker more vulnerable to, the hazards of coal liquefaction processes. If such a condition is found, the employer should be notified and the worker fully counseled on these hazards. Periodic examinations allow monitoring of worker health to assess changes resulting from exposure to materials in coal liquefaction processes. Workers should be encouraged to report any change in their health to the medical department; self-examination is important to health monitoring. Periodic examinations should be performed at least annually. Physical examinations of workers prior to termination of employment will provide complete information to the worker and to the medical surveillance program.

All three types of examinations should be comprehensive and include medical and work histories, physical examinations, and special tests. Medical histories should focus on preexisting disorders. Work histories are important because past exposures may predispose the worker to adverse effects; for example, past exposure to coal-derived products may have caused sensitization. Physical examinations should be thorough, and some clinical tests may be useful as general screening measures. In the screening tests, worker acceptance of tests should be weighed against the information that the test will yield. Physicians should consider the test sensitivity, the seriousness of the disorder, and whether the disorder can result from exposure to coal liquefaction materials. They should choose tests based on their knowledge of exposure-effect relationships, the worker's exposure conditions, and any preexisting factors. If possible, they should choose tests with simple sample collection, processing, and analysis. The physician should ensure that appropriate followup tests and procedures are carried out if test results are abnormal.

# Exposure Monitoring
Industrial hygiene monitoring is used to determine whether worker exposure to chemical and physical hazards is within the limits set by OSHA or recommended by NIOSH or other voluntary standard-setting organizations, and to indicate where corrective measures are needed. However, no exposure limits have been established for many substances that may contaminate the workplace air in coal liquefaction plants. Exposure monitoring is necessary where exposure to suspect cancer-causing agents or other toxic materials might occur.
For these substances, exposure monitoring is important because it can spot failures in engineering controls and work practices and produce data that can be used to develop exposure-effect relationships. It is not possible now to predict all of the chemicals in coal liquefaction plants that may have toxic effects. In addition, the possible interactive effects of individual chemicals should be considered when designing the monitoring program. A workplace material survey should be conducted to tabulate workplace materials that may be released into the atmosphere or may contaminate the skin. Guidelines for sampling and analysis may be found in NIOSH's Occupational Exposure Sampling Strategy Manual, the NIOSH Manual of Analytical Methods, and OSHA's Industrial Hygiene Field Operations Manual. All processes and work operations using materials known to be toxic or hazardous should be evaluated.

Many materials in coal liquefaction plants are complex mixtures of hydrocarbons occurring as vapors, aerosols, or particulates. Although it is impractical to measure each component of the mixture, measuring the cyclohexane-extractable fraction of total particulate samples will yield data useful for evaluating worker exposure. Chemical analysis procedures developed for coal tar pitch volatiles can be applied to coal liquefaction materials. However, one must not infer that the permissible exposure limit (PEL) of 0.15 mg/cu m for the benzene-soluble fraction of total particulate matter during the carbonization of coal, established for coke oven emissions, is safe for workers in coal liquefaction plants. Instead, monitoring results should be interpreted with toxicologic data on specific coal liquefaction materials, or mixtures of them.

Alarms should be installed to monitor substances or hazards that could immediately threaten life and health, such as hydrogen sulfide, carbon monoxide, oxygen deficiency, and potential explosive hazards. NIOSH has published recommendations for the use of hydrogen sulfide alarms, which should be installed in areas where high concentrations of hydrogen sulfide may be released. NIOSH also recommends that alarms be developed for other gases where life-threatening exposures could occur.

# Recordkeeping
Monitoring worker health and the working environment is essential in an occupational health program. Coal liquefaction is a developing technology, with many hazards still unknown. Recordkeeping is particularly critical to this technology, where the occupational environment is difficult to characterize. Standardization of the recordkeeping systems is recommended because it will allow easy cross-comparison of data from similar plants and thereby facilitate a more comprehensive examination of the hazards in the industry as a whole. Therefore, coal liquefaction plants should have a system that records at least the following information.

(a) Employment History. For each worker there should be a work history detailing job classifications, plant location codes, and time spent on each job. Compensation claim reports and death certificates should be included.
(b) Medical History. A medical history comprising personal health history, medical examination records, and reported illnesses and injuries should be maintained for each worker.

(c) Industrial Hygiene Data. Records should be kept of personal and area sampling that state monitoring results, note whether personal protective equipment was used, and identify the worker(s) monitored or the plant location codes for areas where sampling was performed. Estimates of the frequency and extent of skin contamination by coal-derived materials should be recorded annually for each worker.

# ENGINEERING CONTROLS
Engineering controls of workplace hazards can be applied to both pilot and commercial plants. Engineering design for demonstration coal liquefaction plants is currently in the developmental stage, and it should emphasize prevention of worker exposure by ensuring process containment, limiting worker exposure during maintenance, and providing for equipment reliability. The effects of erosion, corrosion, and failure of seals, valves, and other process components should be minimized. Design and quality control of pressure vessels should ensure reliability of vessel integrity. Pressure safety and release valves must be designed according to well-established engineering practices and located so that operating personnel are not exposed to releases. Control rooms should be structurally designed to protect operating personnel and should remain functional in an accident. Process sampling equipment should be designed to minimize the release of process material. Many of the engineering design considerations discussed in this and the full-length document are addressed in existing standards, codes, and regulations.

A formal system safety program should be initiated to provide for early recognition of safety problems, analysis of design, identification of potential hazards, and specification of safety controls and procedures. At a minimum, the safety program should include:
(1) Scheduled reviews and analyses;
(2) Review responsibilities of designated workers;
(3) Required methods of analyses; and
(4) Requirements for documentation and certification.

Before equipment is removed or serviced, it should be isolated from the process stream and flushed and purged where possible. Gas purges should be disposed of by a flare or some other effective method. In areas where flammable materials are collected, the flammable gas and vapor concentration must be maintained at less than its lower explosive limit. If workers enter areas where flammable or toxic materials collect, ventilation must be adequate to maintain safe concentrations.

Workers who sample process streams may be exposed to significant amounts of process material. During plant visits, NIOSH observed workers using a nozzle to direct the material into small cans, a technique that could lead to excessive and unnecessary exposures. In other plants, NIOSH observed a safer method and recommends it where practicable: the use of a closed-loop system designed to remove flammable or toxic material from the sampling lines by flushing and purging prior to removing the sampling bomb. These flushings from the sampling lines should be collected and disposed of properly.
To minimize plugging of lines and equipment through coking and solidification of coal slurries, instrumentation and controls should be provided. For example, lines and equipment should be heat-traced to maintain temperatures greater than the pour point of the material. However, sometimes plugging will occur regardless of the preventive measures taken. Under these circumstances, local ventilation and/or respirators should be provided and used to limit exposure to hazardous materials during decoking or hydroblasting. Contaminated water from hydroblasting should be collected, treated, and recycled or disposed of properly. Where practicable, equipment used to handle fluids containing solids should be flushed and purged during shutdown to minimize potential coal slurry solidification or settling of solids.

Thus, proper engineering design and controls must include safeguards specifically designed to protect workers. Good engineering design, when combined with safe operating procedures for full-scale coal liquefaction plants, should provide inherent safeguards. By removing pathways for release of toxic substances to the workplace, the need for minimizing exposures to workers will be reduced, as will reliance on personal protective equipment.

# SPECIFIC WORK PRACTICES
There are workplace safety and health programs in coal liquefaction pilot plants similar to programs in petroleum refineries and the chemical industry. Most coal liquefaction pilot plants have written procedures for operational safety practices, including lockout of electrical equipment, tagout of valves, fire and rescue brigades, safe work permits, vessel entry permits, wearing of safety glasses and hardhats, and housekeeping. Developers of occupational safety and health programs should use (1) the general industry standards specified in the Code of Federal Regulations (29 CFR Part 1910), (2) voluntary guidelines from similar industries, (3) NIOSH recommendations describing specific work practices for similar materials, (4) recommendations from equipment manufacturers, and (5) professional judgment and operating experience.

For engineering controls, supplemental work practices, and personal protective equipment to be effective, employers should ensure that workers are informed of the hazards associated with the materials present, the procedures for handling materials and cleaning up spills, and the use of personal protective equipment and emergency procedures. Where possible, oral instructions should be augmented by written and audiovisual materials to ensure that workers are always aware of the most current procedures and techniques for ensuring their safety and health. In addition, worker comprehension of instructions should be evaluated.

# Operating Procedures
Procedures are being developed for coal liquefaction pilot plants for each phase of operation, including startup, normal operation, routine maintenance, normal shutdown, emergency shutdown, and shutdown for extended periods. Provisions should be made in these procedures for safely storing process materials, preventing solidification of dissolved coal, cleaning up spills, and decontaminating equipment that requires maintenance.
Preventing leaks is a major problem during startup of high-pressure coal liquefaction systems; therefore, systems should be gradually pressurized and checked for leaks, especially at valve outlets, blinds, and flanges. Recently repaired, serviced, or replaced equipment should be carefully examined. If no leaks are found, the system should be slowly brought up to operating pressure and temperature. If leaks occur, appropriate repairs should be made. Equipment that operates at high pressures should be routinely inspected to determine maintenance needs. Intervals should be established for monitoring and replacing equipment susceptible to erosion and corrosion. Monitoring requirements, schedules, and replacement intervals should be part of a quality assurance program.

# Confined Space Entry
NIOSH has made recommendations that are generally applicable to entry in confined spaces in coal liquefaction facilities. These recommendations pertain to entry permits, testing work atmospheres, monitoring, medical surveillance, personnel training, labeling and posting, preparations such as isolation, lockout, tagout, purging, and ventilation, safety equipment and clothing, rescue equipment, and recordkeeping. One coal liquefaction facility visited by NIOSH recommends an air changeover rate of six times per hour during entry and work in confined spaces.

# Restricted Areas
Access should be controlled at coal liquefaction plants to prevent entry by persons unfamiliar with the hazards present, the precautions necessary, and the plant emergency procedures. Access at one coal liquefaction facility is controlled by using warning signs, red lights, and barriers. Because this facility is small, the use of doors as physical barriers is feasible for access control. Fences and gates serve a similar purpose in larger operations. Employers should keep a log of workers entering restricted areas to provide a record of who is exposed in high-risk operations. A visitors' log would aid in accounting for people at the plant in an accident or emergency. Visitors should be informed of the hazards, necessary precautions, and emergency procedures.

# Spill Decontamination
Spills of toxic liquids should be cleaned up at the earliest safe opportunity. Equipment should be designed to ensure that spills and leaks will be contained with a minimum of worker exposure. Small spills may be contained by a sorbent; spent sorbent should then be disposed of in a safe manner. If spills must be removed by hand, only workers who are instructed and trained in safe decontamination and disposal procedures should supervise and perform cleanup. Employers must supply appropriate personal protective equipment and ensure that workers understand how to use it.

Dried tar is difficult to remove from surfaces. For this reason, easily strippable coatings may be used where tar may spill. Because such coatings do not adhere well, the tar can be removed with the coating, and the clean surface can then be recoated. In some plants where such coatings are not used, manual scraping and chipping, commercial cleansers, or steam have been used. These methods further increase the hazard to cleanup personnel. For example, steam stripping may generate airborne contaminants, as well as increase the potential for burns.
For these reasons, steam stripping is discouraged. Where organic solvents are used for cleanup, special care is necessary to limit exposure to solvent vapors and aerosols. This may include the use of local ventilation and approved respirators. Hand tools and portable equipment may be contaminated and should be cleaned before reuse. They can be steam-cleaned in an enclosed system or cleaned by vapor degreasing and ultrasonic agitation. The residue should be contained and disposed of properly, and solvents should be well controlled.

# Personal Hygiene
Where engineering controls are inadequate to prevent exposure, workers should be extremely alert to the potential hazards of direct contact with coal-derived products. They must exercise good personal hygiene by (1) avoiding touching their faces, genitals, or rectal areas without first washing their hands; (2) washing skin with soap and water after direct contact with coal-derived products; and (3) washing their hands, forearms, face, and neck after completing each operation where there was known or suspected contact with coal-derived products. Workers must also be alert to sores that do not heal properly, dead skin, and changes in warts and moles. All suspicious lesions should be reported immediately to the medical department because they may be early signs of cancer.

To ensure good personal hygiene in plants, workers must take showers at the end of each shift. Workers should thoroughly wash their hair and hairline during showers and should carefully clean skin creases and fingernails. Double locker rooms are also necessary to segregate contaminated clothing from street clothing. Employers should provide an adequate number of washrooms throughout the plant and encourage their use. Washrooms should be close to lunchrooms so workers can wash before eating. Employers should ensure that lunchrooms remain uncontaminated to minimize the likelihood of inhalation or ingestion of coal-derived products. Employers should require workers to remove contaminated protective clothing and wash before entering the lunchroom. Bar soap should be provided in all plant showers, and lanolin-based or equivalent waterless hand cleaners should be provided in all plant washrooms and in the locker facility. Organic solvents should not be used, because they may cause the contaminants to further penetrate the skin and they are inherently toxic in many cases.

# PERSONAL PROTECTIVE EQUIPMENT AND CLOTHING
Design of coal liquefaction plants must focus on engineering controls to effectively minimize worker exposure to coal-derived products. In those instances where engineering controls do not provide adequate protection (mechanical breakdown, for example), good work practices become the primary means of safeguarding health. If the work practices are also inadequate and workers are directly exposed to coal-derived products, it may be necessary to use personal protective equipment and clothing.

# Respiratory Protection
Use of respirators for protection is acceptable only when engineering controls and standard work practices are impractical and cannot be improved. Because coal liquefaction is a fledgling industry, there should be few circumstances where equipment needs to be retrofitted to improve engineering control.
Ideally, equipment should be initially designed to provide adequate control. However, if design is inadequate and engineering controls do need to be installed after plants are in operation, respirators must be used until engineering controls are in effect. Other circumstances during which respirators may be necessary for short periods include maintenance, emergencies, leaks, spills, and fires. When respirators are used, they should reduce inhalation and ingestion of airborne contaminants and provide life support in oxygen-deficient atmospheres.

Although respirators are useful for reducing exposure to hazardous materials, they have several undesirable aspects. Problems associated with using respirators include poor communication and hearing, reduced field of vision, increased fatigue and reduced worker efficiency, strain on heart and lungs, skin irritation or dermatitis caused by perspiration or facial contact with the respirator, and discomfort. Medical screening should be conducted to determine a worker's ability to wear a respirator. Facial fit is crucial for effective use of most negative-pressure respirators, because if leaks occur, air contaminants may be inhaled. Facial hair, such as beards or long sideburns, and facial movements can prevent good respirator fit; therefore, it is important that careful fit testing be conducted to assure an adequate fit.

Training workers to properly use, handle, and maintain respirators helps to achieve maximum effectiveness of respiratory protection. Minimum training requirements for workers and supervisors have been established by the Department of Labor (29 CFR 1910.134). These requirements include handling the respirator, proper fitting, testing the facepiece-to-face seal, and wearing the respirator in uncontaminated workplaces during a long trial period. This training should enable the worker to determine whether the respirator is operating properly by checking it for cleanliness, leaks, proper fit, and spent cartridges or filters. Employers should impress upon workers that respiratory protection is necessary if engineering controls are not sufficient. Respirator facepieces need to be maintained regularly to remove contamination and to slow the aging of rubber parts. Employers should consult the manufacturers' recommendations on cleaning methods, and care should be taken not to use solvents that may deteriorate rubber parts.

# Gloves
Gloves are usually worn at coal liquefaction pilot plants in cold weather, when heavy equipment is handled, and in areas where there is hot process equipment. Where gloves will not cause a significant safety hazard, they should be worn to protect the hands from process materials. Gloves made of several types of materials have been used in coal liquefaction pilot plants, including cotton mill gloves, vinyl-coated heavy rubber gloves, and neoprene rubber-lined cotton gloves. No quantitative data on glove penetration by coal liquefaction materials have been reported. One should assume, however, that gloves and other protective clothing do not provide complete protection against skin contact. To prevent contamination of glove linings, workers should wash their hands before putting on gloves.
Because penetration by toxic chemicals may occur in a relatively short time, gloves should be discarded following noticeable contamination.

# Work Clothing and Footwear
Work clothing and footwear can reduce exposure to coal-derived products, especially heavy oils. Work clothing should be supplied by the employer. The clothing program at one coal liquefaction pilot plant provides each process-area worker with 15 sets of shirts, slacks, T-shirts, underpants, and cotton socks; 3 jackets; and 1 rubber raincoat. In another facility, thermal underwear for use in cold weather was provided. Work clothing, including workshoes and hardhats, should be changed at the end of every workshift or as soon as possible when contaminated. All work clothing and footwear should be left at the plant at the end of each workshift, and the employer should be responsible for handling and laundering the garments. Because of the volume of laundry involved, on-site laundry facilities would be convenient. Any commercial laundering establishment that cleans work clothing should receive oral and written warning of potential hazards that might result from handling contaminated clothing. Operators of coal liquefaction plants should require written acknowledgment from laundering establishments that proper work procedures have been adopted.

# Hearing Protection
In certain instances, engineering controls will not be adequate, and exposure to noise levels exceeding the NIOSH-recommended limit (85 dBA for an 8-hour period) may occur in some areas. Under these circumstances, workers should be provided with protective devices to prevent hearing loss. There are two basic types of hearing protectors available: earmuffs that fit over the ear and earplugs that are inserted into the ear. Workers should choose the type of hearing protector they want to wear. Some workers may find earplugs uncomfortable, and others may be unable to wear earmuffs with their glasses, hardhats, or respirators. A hearing conservation program should be established. As part of this program, workers should be instructed in the care, use, and cleaning of hearing protectors. The program should also evaluate the need for protection against noise in various areas of the plant and provide the workers with a choice of hearing protectors suitable for those areas. Workers should be cautioned not to use earplugs that are contaminated with coal liquefaction materials. (A worked sketch of how a daily noise dose is computed against the 85-dBA limit appears at the end of this summary.)

# Eye and Face Protection
Eye and face protection, such as safety glasses and faceshields, should be provided for and worn by all workers engaged in activities in which hazardous materials may contact the eyes or face. In addition, full-length plastic faceshields should be worn in areas where contact with tar or tar oil is likely, except when full-facepiece respirators are being worn.

Emergency personnel should also be trained according to guidelines in 29 CFR 1910, Subpart L, giving special attention to systems handling coal-derived materials and to any associated special procedures. Information on special firefighting and rescue procedures, protective clothing requirements, and breathing apparatus needs should be distributed for areas where materials might be released from the process equipment during a fire or explosion.
These procedures should be incorporated into written standard operating procedures, and copies should be given to all emergency personnel.

# EMERGENCY PLANS AND PROCEDURES
Plant emergency services should be adequate to control such situations until community-provided emergency services arrive. Where a community fire department is staffed with permanent, professionally trained workers, a plant manager may rely more on the emergency services of that department. When community services are relied on for emergency situations, the plant emergency plan should include provisions for close coordination with these services, frequent exercises with them, and adequate training in the potential hazards of various systems in the plant. Emergency medical personnel, such as nurses or those with first-aid training, should be present at the plant at all times. Immediate response is needed when a delay in treatment would endanger life. These cases may involve inhalation of toxic gases such as hydrogen sulfide or carbon monoxide and asphyxiation due to oxygen displacement by inert gases such as nitrogen.
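The Hearing Protection discussion above cites the NIOSH-recommended limit of 85 dBA as an 8-hour exposure. NIOSH pairs that limit with a 3-dB exchange rate: every 3-dB increase in sound level halves the allowable exposure time. The Python sketch below computes a daily noise dose from that rule; the shift segments in the example are illustrative assumptions, not measurements from any coal liquefaction plant.

```python
# Minimal sketch: daily noise dose against the NIOSH recommended exposure
# limit (85 dBA as an 8-hour exposure, 3-dB exchange rate). A dose over
# 100% indicates the recommended limit has been exceeded.

def allowed_hours(level_dba: float) -> float:
    """Permissible exposure duration, in hours, at a given A-weighted level."""
    return 8.0 / (2.0 ** ((level_dba - 85.0) / 3.0))

def daily_dose(segments: list[tuple[float, float]]) -> float:
    """Sum of (hours at level / allowed hours at level), as a percentage."""
    return 100.0 * sum(hours / allowed_hours(level) for level, hours in segments)

if __name__ == "__main__":
    # (sound level in dBA, hours at that level) for one hypothetical shift
    shift = [(85.0, 4.0), (88.0, 2.0), (94.0, 0.5)]
    print(f"Daily noise dose: {daily_dose(shift):.0f}%")
```

For the hypothetical shift shown, the dose works out to 150%, signaling that hearing protectors or noise controls would be needed even though no single segment seems extreme on its own.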
# DISCLAIMER Mention of company names or products does not co n stitu te endorsement by the National In s titu te for Occupational Safety and Health. # D H H S (N IO S H ) Publication No. 81-131 # PREFACE The Occupational Safety and Health Act of 1970 emphasizes the need to p ro tect the h ealth and safety of workers occupationally exposed to an everincreasing number of p o te n tia l hazards. Consequently, the N ational I n s titu te fo r Occupational Safety and H ealth (NIOSH) has implemented a program to evaluate the adverse h ealth e ffe c ts of chemical and physical agents and in d u stria l processes. This summary of NIOSH's Occupational Hazard Assessment of Coal L iquefaction is a re s u lt of th at program. By addressing the hazard while d ire c t coal liq u e fa c tio n technology is s t i l l being developed, the ris k of p o te n tia l adverse h ealth e ffe c ts can be su b sta n tia lly reduced in both experim ental and commercial p la n ts. While the occupational hazard assessm ent, Volume I I , is a c r i t ic a l, lengthy review of the s c ie n tific and technical inform ation a v ailab le and discusses the occupational safety and health issues of p ilo t p lan t o p eration s, th is summary document presents the h ig h lig h ts in a sh o rte r, more useable form fo r a wider audience. Both documents are intended for use by w orkers, in dustry , trade a sso c ia tio n s, government agencies, s c ie n tific and tech n ical in v e stig a to rs, and the public; those in te re ste d in a more thorough and d etailed discussion should read the Occupational Hazard Assessment of Coal L iquefaction. The inform ation and recommendations presented should f a c ilita te development of sp e c ific procedures for hazard control in individual workplaces by persons immediately responsible for h ealth and safety . NIOSH w ill p e rio d ica lly update and evaluate new data and inform ation as they become av ailab le and, a t the appropriate tim e, w ill consider proposing recommendations for a standard to protect workers in commercial coal liq u efactio n f a c ilitie s . Ronald F. Coene, P.E. Acting D irector N ational I n s titu te fo r Occupational Safety and Health # SYNOPSIS The coal liq u efactio n process converts coal in to hydrocarbon products. In doing so, i t both uses and produces to xican ts p o te n tia lly hazardous to worker h e alth . The to x ican ts are v aried , ranging from simple chem icals to complex m ixtures or organic carcinogens. The possib le h ealth e ffe c ts are also v aried . Animal stu d ies have demonstrated th a t some of the process chemicals have produced tumors a t the s ite of ap p licatio n . Some workers may be exposed by in h alatio n of gases, vapors, or airborne p a rtic le s , skin contact w ith airborne m ateria l, contact w ith contaminated su rfaces, or accid en tal in g estio n . Some of these same chemicals have caused severe long-term e ffe c ts such as skin and lung cancer in workers in sim ilar in d u strie s. Other p o te n tia l adverse h ealth e ffe c ts may include fa ta l poisoning from in h a la tio n , severe re sp ira to ry ir r ita tio n , and chemical burns. F ires and explosions can also occur. This document is designed to provide inform ation th a t can be used to guard against these p o te n tia l adverse h ealth e ffe c ts. 
Measures such as engineering co n tro ls, sp e c ific work p ra c tic e s, personal p ro te c tiv e equipment, medical su rv eillan ce and exposure m onitoring, recordkeeping, and emergency plans and procedures are recommended. INTRODUCTION # INTRODUCTION Coal liq u e fa c tio n is a process th at converts coal in to hydrocarbon products. Most o ften , the major products are condensed arom atic liq u id s, with some gases and so lid s also produced, pending on operating conditions and the type of coal and process used. L iquefaction is accomplished by breaking the chemical and physical bonds of the coal in the presence of a hydrogen source. The four major coal liq u efactio n processes are: (1) p y ro ly sis, in which the coal is heated in the absence of oxygen, and hydrogen is ab stracted from coal hydrogen donors; (2) solvent e x tractio n , which is performed a t high tem peratures and pressures in the presence of hydrogen and a solvent; (3) d ire c t hydrogenation, in which the coal slu rry is hydrogenated in contact w ith a c a ta ly st under high pressure and tem perature; and (4) in d ire c t liq u e fa c tio n , in which the coal is g asified w ith steam and oxygen and then c a ta ly tic a lly converted to liq u id products. This assessment is concerned w ith d ire c t liq u e fa c tio n , ie , processes 1-3. In the United S ta te s, coal liq u efactio n technology is only beginning to develop. I t cu rren tly c o n sists of a small number of process development u n its (PDU) fo r e stab lish in g process fe a s ib ility and p ilo t p lan ts. Processing cap a c ities range from le ss than 5 tons per day (4.5 Mg/d) for the PDU's up to 600 tons per day (544 Mg/d) fo r the p ilo t p la n ts. These p ilo t p lan ts are prim arily a te stin g ground for determ ining optimum operating conditions for fu tu re commercial p lan ts. Previous p ro jectio n s based on plans for development of sy n th etic fuel technology show th a t as many as 12 comm ercial-sized liq u efactio n p lan ts may be operating by 1995 (assuming each produces approximately 50,000 b b l/d ). C o llectiv ely , these p lan ts are lik e ly to employ more than 12,000 workers [1 ], An increased number of coal miners must also be considered part of the to ta l impact of coal liq u efactio n technology on occupational safety and h e alth . In addition to evaluating health e ffe c ts , th is summary document recommends engineering co n tro ls, work p ra c tic e s, personal p ro tectiv e equipment and clo th in g , medical su rv eillan ce, and exposure m onitoring to reduce the h ealth risk s and provide fo r increased worker safety in coal liq u efactio n p lan ts. G reater d e ta il is given in the fu ll-le n g th document [2]. NIOSH's recommendations can be used fo r planning occupational safety and h ealth programs and as a basis for technology development. In the fu ll-le n g th document on coal liq u e fa c tio n , the occupational hazards associated w ith the d ire c t process are thoroughly reviewed. NIOSH has determined th a t chemicals and physical agents id e n tifie d in the process could present serious h ealth and safety hazards to the workers in p ilo t p lants and in future commercial p lants unless precautions are taken. One serious h ealth hazard associated w ith coal liq u efactio n processes is cancer [3][4][5]. C ertain arom atic amines and polycyclic hydrocarbons measured a t two p ilo t p lan ts in v estig ated by NIOSH are known carcinogens or co-carcinogens [6]. 
In addition, coal liquefaction materials applied to the skin of laboratory animals have produced cancerous growths [7-10]. Coal liquefaction materials have also caused adverse reproductive effects in laboratory animals [11,12]. A number of other chemicals found in coal liquefaction processes may adversely affect the respiratory and nervous systems, as well as the blood, liver, and kidneys.

# HEALTH EFFECTS

Because coal liquefaction is an emerging industry, the opportunity for epidemiologic investigation has been limited to evaluation of worker medical surveillance programs in pilot plants. In 1960, Sexton [13] suggested that workers exposed to coal liquefaction materials have an incidence of cancerous and precancerous skin lesions greater than expected in a similar population not occupationally exposed to the same hazard. In spite of a number of limitations in this study (including a small number of exposed workers, no appropriate comparison population, and a lack of information concerning work histories, medical histories, and exposure levels), the excess incidence of cancerous and precancerous skin lesions reported would still be sufficient to cause concern. Two other reports [14,15] demonstrate that the most common medical problems at pilot plants have been dermatitis, eye irritation, and thermal burns.

Although there are limitations in the epidemiologic data available on coal liquefaction, there are strong implications from animal toxicologic and human epidemiologic studies on analogous compounds that certain agents in the coal liquefaction process pose serious health hazards. Such examples include chemical agents found in coal tar products [16] and coke oven emissions [17]. Therefore, every possible precautionary measure should be taken to protect the worker.

Processing of abrasive slurries, particularly at high temperatures and pressures, accelerates erosion and corrosion that may cause leaks in process equipment. Leaks increase the potential for worker skin contact with process materials, inhalation exposure, burns, and fires. In gas handling systems, leaks increase the possibility of worker exposure to high, and potentially fatal, levels of carbon monoxide and hydrogen sulfide. Because coal liquefaction processes operate above the autoignition temperature of many process materials, leaks could result in immediate fires. Furthermore, many process materials are explosive, and leaks may result in a mixture that exceeds its lower explosive limit. This could result in an explosion.

The following paragraphs describe specific health effects. Some guidance is given on elements of the appropriate physical examinations and laboratory tests.

# Skin Effects

NIOSH has previously reported on some chemicals that are similar to those in coal liquefaction processes and that affect the skin. For example, creosote has caused keratitis, and crude naphtha, creosote, and residual pitch have caused skin cancer [16]. These materials can also inflame hair follicles and sebaceous glands. Cresol causes a burning sensation, erythema, localized anesthesia, and a brown discoloration of the skin upon contact [18].
NIOSH has also described skin effects as major concerns in workers exposed to carbon black process materials [19], refined petroleum solvents [20], coke oven emissions [17], and phenol [21]. Skin sensitization may occur after skin contact with or inhalation of many different chemicals. Patch testing should be used as a diagnostic aid only after a worker has symptoms of skin sensitization. Patch tests must not be used as a preplacement or screening technique, because they may cause sensitization; medical history and physical examination are much safer ways of estimating whether a worker is sensitized. Skin lesions should be monitored, and written and photographic records of their development should be made and kept. If lesions change, the worker should be referred to a dermatologist.

# Liver Effects

NIOSH has previously concluded that exposure to cresol may result in damage to the liver and that medical surveillance with emphasis on any preexisting liver disorder should be conducted on workers exposed to cresol [18]. Because of the vast reserve functional capacity of the liver, acute hepatotoxicity or severe cumulative chronic damage may occur before recognizable symptoms appear. These symptoms include nausea, vomiting, diarrhea, weakness, general malaise, and jaundice. Numerous screening tests for liver dysfunction exist; those most frequently used are serum bilirubin, serum glutamic-oxaloacetic transaminase (SGOT), serum glutamic-pyruvic transaminase (SGPT), gamma glutamyl transpeptidase (GGTP), and isocitric dehydrogenase.

# Kidney Effects

NIOSH has previously concluded that exposure to cresol [18], carbon disulfide [22], coke oven emissions [17], and coal tar products [16] can result in impaired renal function. Renal function impairment can be screened by urinalysis augmented by more specific tests.

# Respiratory System Effects

Inhalation of chemicals in coal liquefaction plants may result in damage to the respiratory system. For example, sulfur dioxide and ammonia are respiratory tract irritants, and substantial exposures to ammonia can produce chronic bronchitis, laryngitis, tracheitis, bronchopneumonia, and pulmonary edema [23]. Exposure to sulfur dioxide can cause asphyxia and severe chemical bronchopneumonia [24]. Inhalation of coke oven emissions and coal tar products can lead to chronic lung disease, including cancer [16,17]. Inhalation and pulmonary deposition of free silica can cause silicosis, a form of pulmonary fibrosis. If a worker does not have symptoms of respiratory distress, physical examination alone may not detect early pulmonary illness. Therefore, a chest X-ray examination is recommended at the time of employment and thereafter at the discretion of the physician. In addition, pulmonary function tests, ie, forced vital capacity (FVC) and forced expiratory volume in 1 second (FEV 1), are also recommended screening tests (a short worked screening example follows the Blood Effects subsection below).

# Blood Effects

Because exposure to benzene has been linked with leukemia [25], workers exposed to it should have a complete blood count (CBC) as a screening test for blood disorders. A CBC includes determination of hemoglobin (Hb) concentration, hematocrit, red blood cell (RBC) count, reticulocyte count, white blood cell (WBC) count, and WBC differential count.
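The document recommends FVC and FEV 1 as screening tests but does not specify decision criteria. As a minimal sketch of how such results might be flagged for physician review, the following assumes the widely used FEV1/FVC cutoff of 0.70; the function name and the threshold are illustrative conventions, not values taken from this document.

```python
def spirometry_screen(fev1_l, fvc_l, ratio_cutoff=0.70):
    """Flag a spirometry result for physician follow-up.

    fev1_l : forced expiratory volume in 1 second (liters)
    fvc_l  : forced vital capacity (liters)
    The 0.70 FEV1/FVC cutoff is a common clinical convention,
    not a value taken from this document.
    """
    ratio = fev1_l / fvc_l
    return ratio, ratio < ratio_cutoff

ratio, flagged = spirometry_screen(fev1_l=2.8, fvc_l=4.1)
print(f"FEV1/FVC = {ratio:.2f}; refer for follow-up: {flagged}")
```

A flagged result would be one trigger for the followup tests and physician referral described above, alongside the worker's history and examination findings.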
# Central Nervous System Effects

Cresol [18] is among the chemicals found in coal liquefaction processes that can adversely affect the central nervous system.

# Medical Surveillance

Medical monitoring is essential to protect workers in coal liquefaction plants from adverse health effects. To be effective, medical surveillance should be both well timed and thorough to detect adverse effects on worker health as early as possible. Thoroughness in medical surveillance is necessary because of the many chemical and physical agents a worker may be exposed to. Worker exposure is not predictable; it can occur whenever any closed process equipment accidentally leaks, vents, or ruptures. Not all adverse effects of exposure in coal liquefaction plants are known, but most of the major organs (the liver [18], kidneys [22], lungs [28], heart [26], and skin [20]) may be affected.

Medical surveillance should consist of preplacement, periodic, and termination examinations. Preplacement examinations set a baseline for the worker's general health and for specific tests, eg, future audiometric tests. During these examinations, a worker's physical ability to perform his job while using a respirator can be determined. These examinations can also detect some preexisting conditions that may be aggravated by, or make the worker more vulnerable to, the hazards of coal liquefaction processes. If such a condition is found, the employer should be notified and the worker fully counseled on these hazards. Periodic examinations allow monitoring of worker health to assess changes resulting from exposure to materials in coal liquefaction processes. Workers should be encouraged to report any change in their health to the medical department; self-examination is important to health monitoring. Periodic examinations should be performed at least annually. Physical examinations of workers prior to termination of employment will provide complete information to the worker and to the medical surveillance program.

All three types of examinations should be comprehensive and include medical and work histories, physical examinations, and special tests. Medical histories should focus on preexisting disorders. Work histories are important because past exposures may predispose the worker to adverse effects; for example, past exposure to coal-derived products may have caused sensitization. Physical examinations should be thorough, and some clinical tests may be useful as general screening measures. In the screening tests, worker acceptance of tests should be weighed against the information that the test will yield. Physicians should consider the test sensitivity, the seriousness of the disorder, and whether the disorder can result from exposure to coal liquefaction materials. They should choose tests based on their knowledge of exposure-effect relationships, the worker's exposure conditions, and any preexisting factors. If possible, they should choose tests with simple sample collection, processing, and analysis. The physician should ensure that appropriate followup tests and procedures are carried out if test results are abnormal.

# Exposure Monitoring

Industrial hygiene monitoring is used to determine whether worker exposure to chemical and physical hazards is within the limits set by OSHA or recommended by NIOSH or other voluntary standard-setting organizations, and to indicate where corrective measures are needed.
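Compliance with such limits is normally judged against a full-shift time-weighted average (TWA) exposure. As a minimal sketch (the function and sample values below are illustrative, not from this document), the standard 8-hour TWA is the duration-weighted sum of the measured task concentrations divided by 8 hours:

```python
def twa_8hr(samples):
    """Compute an 8-hour time-weighted average exposure.

    samples: list of (concentration, duration_hours) tuples covering
    the monitored portion of the shift. Unsampled time is treated as
    zero exposure here, which understates the TWA if exposure continued.
    """
    return sum(c * t for c, t in samples) / 8.0

# Hypothetical shift: 3 h at 0.10 mg/m3, 4 h at 0.20 mg/m3, 1 h at 0.05 mg/m3
exposure = twa_8hr([(0.10, 3), (0.20, 4), (0.05, 1)])
print(f"8-hr TWA = {exposure:.3f} mg/m3")  # approximately 0.144 mg/m3
```

The resulting TWA can then be compared with the applicable OSHA or NIOSH limit for the substance sampled.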
However, no exposure limits have been established for many substances that may contaminate the workplace air in coal liquefaction plants. Exposure monitoring is necessary where exposure to suspect cancer-causing agents or other toxic materials might occur. For these substances, exposure monitoring is important because it can spot failures in engineering controls and work practices, and produce data that can be used to develop exposure-effect relationships. It is not possible now to predict all of the chemicals in coal liquefaction plants that may have toxic effects. In addition, the possible interactive effects of individual chemicals should be considered when designing the monitoring program.

A workplace material survey should be conducted to tabulate workplace materials that may be released into the atmosphere or may contaminate the skin. Guidelines for sampling and analysis may be found in NIOSH's Occupational Exposure Sampling Strategy Manual [29], the NIOSH Manual of Analytical Methods [30-32], and OSHA's Industrial Hygiene Field Operations Manual [33]. All processes and work operations using materials known to be toxic or hazardous should be evaluated.

Many materials in coal liquefaction plants are complex mixtures of hydrocarbons occurring as vapors, aerosols, or particulates. Although it is impractical to measure each component of the mixture, measuring the cyclohexane-extractable fraction of total particulate samples will yield data useful for evaluating worker exposure. Chemical analysis procedures developed for coal tar pitch volatiles can be applied to coal liquefaction materials. However, one must not infer that the permissible exposure limit (PEL) of 0.15 mg/cu m for the benzene-soluble fraction of total particulate matter during the carbonization of coal, established for coke oven emissions, is safe for workers in coal liquefaction plants. Instead, monitoring results should be interpreted with toxicologic data on specific coal liquefaction materials or mixtures of them.

Alarms should be installed to monitor substances or hazards that could immediately threaten life and health, such as hydrogen sulfide, carbon monoxide, oxygen deficiency, and potential explosive hazards. NIOSH has published recommendations for the use of hydrogen sulfide alarms [34], which should be installed in areas where high concentrations of hydrogen sulfide may be released. NIOSH also recommends that alarms be developed for other gases where life-threatening exposures could occur.

# Recordkeeping

Monitoring worker health and the working environment is essential in an occupational health program. Coal liquefaction is a developing technology, with many hazards still unknown. Recordkeeping is particularly critical to this technology, where the occupational environment is difficult to characterize. Standardization of the recordkeeping systems is recommended because it will allow easy cross-comparison of data from similar plants and thereby facilitate a more comprehensive examination of the hazards in the industry as a whole [35]. Therefore, coal liquefaction plants should have a system that records at least the following information (an illustrative schema sketch follows the list).
(a) Employment History: For each worker there should be a work history detailing job classifications, plant location codes, and time spent on each job. Compensation claim reports and death certificates should be included.

(b) Medical History: A medical history comprising personal health history, medical examination records, and reported illnesses and injuries should be maintained for each worker.

(c) Industrial Hygiene Data: Records should be kept of personal and area sampling that state monitoring results, note whether personal protective equipment was used, and identify worker(s) monitored or plant location codes for areas where sampling was performed. Estimates of the frequency and extent of skin contamination by coal-derived materials should be recorded annually for each worker.
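To illustrate the kind of standardized record structure the list above implies, here is a minimal sketch using Python dataclasses. All type and field names are illustrative choices for this sketch, not a schema prescribed by NIOSH.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class ExposureSample:
    """One personal or area sample: item (c), industrial hygiene data."""
    date: str
    plant_location_code: str
    result: float                    # monitoring result, eg mg/m3
    ppe_used: bool
    worker_id: Optional[str] = None  # None for area samples

@dataclass
class WorkerRecord:
    """Standardized per-worker record covering items (a) through (c)."""
    worker_id: str
    employment_history: List[dict] = field(default_factory=list)  # (a) jobs, locations, dates
    medical_history: List[dict] = field(default_factory=list)     # (b) exams, illnesses, injuries
    skin_contamination_estimates: List[str] = field(default_factory=list)  # (c) annual notes
```

Keeping the same field layout across plants is what makes the cross-plant comparisons recommended above straightforward.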
# ENGINEERING CONTROLS

Engineering controls of workplace hazards can be applied to both pilot and commercial plants. Engineering design for demonstration coal liquefaction plants is currently in the developmental stage, and it should emphasize prevention of worker exposure by ensuring process containment, limiting worker exposure during maintenance, and providing for equipment reliability. The effects of erosion, corrosion, and failure of seals, valves, and other process components should be minimized. Design and quality control of pressure vessels should ensure reliability of vessel integrity. Pressure safety and release valves must be designed according to well-established engineering practices and located so that operating personnel are not exposed to releases. Control rooms should be structurally designed to protect operating personnel and should remain functional in an accident. Process sampling equipment should be designed to minimize the release of process material. Many of the engineering design considerations discussed in this and the full-length document are addressed in existing standards, codes, and regulations [2].

A formal system safety program should be initiated to provide for early recognition of safety problems, analysis of design, identification of potential hazards, and specification of safety controls and procedures. At a minimum, the safety program should include: (1) scheduled reviews and analyses; (2) review responsibilities of designated workers; (3) required methods of analyses; and (4) requirements for documentation and certification.

Before equipment is removed or serviced, it should be isolated from the process stream and flushed and purged where possible. Gas purges should be disposed of by a flare or some other effective method. In areas where flammable materials are collected, the flammable gas and vapor concentration must be maintained at less than its lower explosive limit. If workers enter areas where flammable or toxic materials collect, ventilation must be adequate to maintain safe concentrations.

Workers who sample process streams may be exposed to significant amounts of process material. During plant visits, NIOSH observed workers using a nozzle to direct the material into small cans [14], a technique that could lead to excessive and unnecessary exposures. In other plants, NIOSH observed a safer method and recommends it where practicable: the use of a closed-loop system designed to remove flammable or toxic material from the sampling lines by flushing and purging prior to removing the sampling bomb [14]. These flushings from the sampling lines should be collected and disposed of properly.

To minimize plugging of lines and equipment through coking and solidification of coal slurries, instrumentation and controls should be provided. For example, lines and equipment should be heat-traced to maintain temperatures greater than the pour point of the material. However, sometimes plugging will occur regardless of the preventive measures taken. Under these circumstances, local ventilation and/or respirators should be provided and used to limit exposure to hazardous materials during decoking or hydroblasting. Contaminated water from hydroblasting should be collected, treated, and recycled or disposed of properly. Where practicable, equipment used to handle fluids containing solids should be flushed and purged during shutdown to minimize potential coal slurry solidification or settling of solids.

Thus, proper engineering design and controls must include safeguards specifically designed to protect workers. Good engineering design, when combined with safe operating procedures for full-scale coal liquefaction plants, should provide inherent safeguards. By removing pathways for release of toxic substances to the workplace, the need for minimizing exposures to workers will be reduced, as will reliance on personal protective equipment.

# SPECIFIC WORK PRACTICES

There are workplace safety and health programs in coal liquefaction pilot plants similar to programs in petroleum refineries and the chemical industry. Most coal liquefaction pilot plants have written procedures for operational safety practices, including lockout of electrical equipment, tagout of valves, fire and rescue brigades, safe work permits, vessel entry permits, wearing of safety glasses and hardhats, and housekeeping. Developers of occupational safety and health programs should use (1) the general industry standards specified in the Code of Federal Regulations (29 CFR Part 1910), (2) voluntary guidelines from similar industries, (3) NIOSH recommendations describing specific work practices for similar materials [16,17,20], (4) recommendations from equipment manufacturers, and (5) professional judgment and operating experience.

For engineering controls, supplemental work practices, and personal protective equipment to be effective, employers should ensure that workers are informed of the hazards associated with the materials present, the procedures for handling materials and cleaning up spills, and the use of personal protective equipment and emergency procedures. Where possible, oral instructions should be augmented by written and audiovisual materials to ensure that workers are always aware of the most current procedures and techniques for ensuring their safety and health. In addition, worker comprehension of instructions should be evaluated.
# Operating Procedures

Procedures are being developed for coal liquefaction pilot plants for each phase of operation, including startup, normal operation, routine maintenance, normal shutdown, emergency shutdown, and shutdown for extended periods. Provisions should be made in these procedures for safely storing process materials, preventing solidification of dissolved coal, cleaning up spills, and decontaminating equipment that requires maintenance.

Preventing leaks is a major problem during startup of high-pressure coal liquefaction systems; therefore, systems should be gradually pressurized and checked for leaks, especially at valve outlets, blinds, and flanges. Recently repaired, serviced, or replaced equipment should be carefully examined. If no leaks are found, the system should be slowly brought up to operating pressure and temperature. If leaks occur, appropriate repairs should be made. Equipment that operates at high pressures should be routinely inspected to determine maintenance needs. Intervals should be established for monitoring and replacing equipment susceptible to erosion and corrosion. Monitoring requirements, schedules, and replacement intervals should be part of a quality assurance program.

# Confined Space Entry

NIOSH has made recommendations that are generally applicable to entry in confined spaces in coal liquefaction facilities [36]. These recommendations pertain to entry permits, testing work atmospheres, monitoring, medical surveillance, personnel training, labeling and posting, preparations such as isolation, lockout, tagout, purging, and ventilation, safety equipment and clothing, rescue equipment, and recordkeeping. One coal liquefaction facility visited by NIOSH recommends an air changeover rate of six times per hour during entry and work in confined spaces.

# Restricted Areas

Access should be controlled at coal liquefaction plants to prevent entry by persons unfamiliar with the hazards present, the precautions necessary, and the plant emergency procedures. Access at one coal liquefaction facility is controlled by using warning signs, red lights, and barriers [37]. Because this facility is small, the use of doors as physical barriers is feasible for access control. Fences and gates serve a similar purpose in larger operations. Employers should keep a log of workers entering restricted areas to provide a record of who is exposed in high-risk operations. A visitors' log would aid in accounting for people at the plant in an accident or emergency. Visitors should be informed of the hazards, necessary precautions, and emergency procedures.

# Spill Decontamination

Spills of toxic liquids should be cleaned up at the earliest safe opportunity. Equipment should be designed to ensure that spills and leaks will be contained with a minimum of worker exposure. Small spills may be contained by a sorbent; spent sorbent should then be disposed of in a safe manner. If spills must be removed by hand, only workers who are instructed and trained in safe decontamination and disposal procedures should supervise and perform cleanup. Employers must supply appropriate personal protective equipment and ensure that workers understand how to use it. Dried tar is difficult to remove from surfaces.
For this reason, easily strippable coatings may be used where tar may spill. Because such coatings do not adhere well, the tar can be removed with the coating. The clean surface can then be recoated. In some plants where such coatings are not used, manual scraping and chipping, commercial cleansers, or steam have been used [38]. These methods further increase the hazard to cleanup personnel. For example, steam stripping may generate airborne contaminants, as well as increase the potential for burns. For these reasons, steam stripping is discouraged. Where organic solvents are used for cleanup, special care is necessary to limit exposure to solvent vapors and aerosols. This may include the use of local ventilation and approved respirators. Hand tools and portable equipment may be contaminated and should be cleaned before reuse. They can be steam-cleaned in an enclosed system or cleaned by vapor degreasing and ultrasonic agitation. The residue should be contained and disposed of properly, and solvents should be well controlled.

# Personal Hygiene

Where engineering controls are inadequate to prevent exposure, workers should be extremely alert to the potential hazards of direct contact with coal-derived products. They must exercise good personal hygiene by (1) avoiding touching their faces, genitals, or rectal areas without first washing their hands; (2) washing skin with soap and water after direct contact with coal-derived products; and (3) washing their hands, forearms, face, and neck after completing each operation where there was known or suspected contact with coal-derived products. Workers must also be alert to sores that do not heal properly, dead skin, and changes in warts and moles. All suspicious lesions should be reported immediately to the medical department because they may be early signs of cancer.

To ensure good personal hygiene in plants, workers must take showers at the end of each shift. Workers should thoroughly wash their hair and hairline during showers and should carefully clean skin creases and fingernails. Double locker rooms are also necessary to segregate contaminated clothing from street clothing. Employers should provide an adequate number of washrooms throughout the plant and encourage their use. Washrooms should be close to lunchrooms so workers can wash before eating. Employers should ensure that lunchrooms remain uncontaminated to minimize the likelihood of inhalation or ingestion of coal-derived products. Employers should require workers to remove contaminated protective clothing and wash before entering the lunchroom. Bar soap should be provided in all plant showers, and lanolin-based or equivalent waterless hand cleaners should be provided in all plant washrooms and in the locker facility. Organic solvents should not be used, because they may cause the contaminants to further penetrate the skin and they are inherently toxic in many cases.

# PERSONAL PROTECTIVE EQUIPMENT AND CLOTHING

Design of coal liquefaction plants must focus on engineering controls to effectively minimize worker exposure to coal-derived products. In those instances where engineering controls do not provide adequate protection (mechanical breakdown, for example), good work practices become the primary means of safeguarding health.
If the work practices are also inadequate and workers are directly exposed to coal-derived products, it may be necessary to use personal protective equipment and clothing.

# Respiratory Protection

Use of respirators for protection is acceptable only when engineering controls and standard work practices are impractical and cannot be improved. Because coal liquefaction is a fledgling industry, there should be few circumstances where equipment needs to be retrofitted to improve engineering control. Ideally, equipment should be initially designed to provide adequate control. However, if design is inadequate and engineering controls do need to be installed after plants are in operation, respirators must be used until engineering controls are in effect. Other circumstances during which respirators may be necessary for short periods include maintenance, emergencies, leaks, spills, and fires. When respirators are used, they should reduce inhalation and ingestion of airborne contaminants and provide life support in oxygen-deficient atmospheres.

Although respirators are useful for reducing exposure to hazardous materials, they have several undesirable aspects. Problems associated with using respirators include poor communication and hearing, reduced field of vision, increased fatigue and reduced worker efficiency, strain on heart and lungs, skin irritation or dermatitis caused by perspiration or facial contact with the respirator, and discomfort. Medical screening should be conducted to determine a worker's ability to wear a respirator. Facial fit is crucial for effective use of most negative-pressure respirators, because if leaks occur, air contaminants may be inhaled. Facial hair, such as beards or long sideburns, and facial movements can prevent good respirator fit; therefore, it is important that careful fit testing be conducted to assure an adequate fit.

Training workers to properly use, handle, and maintain respirators helps to achieve maximum effectiveness of respiratory protection. Minimum training requirements for workers and supervisors have been established by the Department of Labor (29 CFR 1910.134). These requirements include handling the respirator, proper fitting, testing facepiece-to-face seal, and wearing the respirator in uncontaminated workplaces during a long trial period. This training should enable the worker to determine whether the respirator is operating properly by checking it for cleanliness, leaks, proper fit, and spent cartridges or filters. Employers should impress upon workers that respiratory protection is necessary if engineering controls are not sufficient. Respirator facepieces need to be maintained regularly to remove contamination and to slow down the aging of rubber parts. Employers should consult the manufacturers' recommendations on cleaning methods, and care should be taken not to use solvents that may deteriorate rubber parts.

# Gloves

Gloves are usually worn at coal liquefaction pilot plants in cold weather, when heavy equipment is handled, and in areas where there is hot process equipment. Where gloves will not cause a significant safety hazard, they should be worn to protect the hands from process materials.
Gloves made of several types of materials have been used in coal liquefaction pilot plants, including cotton mill gloves, vinyl-coated heavy rubber gloves, and neoprene rubber-lined cotton gloves. No quantitative data on glove penetration by coal liquefaction materials have been reported. One should assume, however, that gloves and other protective clothing do not provide complete protection against skin contact. To prevent contamination of glove linings, workers should wash their hands before putting on gloves. Because penetration by toxic chemicals may occur in a relatively short time, gloves should be discarded following noticeable contamination.

# Work Clothing and Footwear

Work clothing and footwear can reduce exposure to coal-derived products, especially heavy oils. Work clothing should be supplied by the employer. The clothing program at one coal liquefaction pilot plant provides each process-area worker with 15 sets of shirts, slacks, T-shirts, underpants, and cotton socks; 3 jackets; and 1 rubber raincoat [39]. In another facility, thermal underwear for use in cold weather was provided [14]. Work clothing, including workshoes and hardhats, should be changed at the end of every workshift or as soon as possible when contaminated. All work clothing and footwear should be left at the plant at the end of each workshift, and the employer should be responsible for handling and laundering the garments. Because of the volume of laundry involved, onsite laundry facilities would be convenient. Any commercial laundering establishment that cleans work clothing should receive oral and written warning of potential hazards that might result from handling contaminated clothing. Operators of coal liquefaction plants should require written acknowledgment from laundering establishments that proper work procedures have been adopted.

# Hearing Protection

In certain instances, engineering controls will not be adequate, and exposure to noise levels exceeding the NIOSH-recommended limit (85 dBA for an 8-hour period) may occur in some areas. Under these circumstances, workers should be provided with protective devices to prevent hearing loss. (A worked dose calculation follows the Eye and Face Protection subsection below.) There are two basic types of hearing protectors available: earmuffs that fit over the ear and earplugs that are inserted into the ear. Workers should choose the type of hearing protector they want to wear. Some workers may find earplugs uncomfortable, and others may be unable to wear earmuffs with their glasses, hardhats, or respirators. A hearing conservation program should be established. As part of this program, workers should be instructed in the care, use, and cleaning of hearing protectors. The program should also evaluate the need for protection against noise in various areas of the plant and provide the workers with a choice of hearing protectors suitable for those areas. Workers should be cautioned not to use earplugs that are contaminated with coal liquefaction materials.

# Eye and Face Protection

Eye and face protection, such as safety glasses and faceshields, should be provided for and worn by all workers engaged in activities in which hazardous materials may contact the eyes or face. In addition, full-length plastic faceshields should be worn in areas where contact with tar or tar oil is likely, except when full-facepiece respirators are being worn.
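To make the 85-dBA, 8-hour limit in the Hearing Protection subsection concrete: NIOSH evaluates intermittent noise with a 3-dB exchange rate, under which the allowable duration halves for every 3-dB increase in level. The sketch below applies that rule to a hypothetical day; the function names and the exposure profile are illustrative, not taken from this document.

```python
def allowed_hours(level_dba, criterion=85.0, exchange_rate=3.0):
    """Permissible duration at a given level under a 3-dB exchange rate.
    At 85 dBA the allowance is 8 h; it halves for each 3-dB increase."""
    return 8.0 / (2 ** ((level_dba - criterion) / exchange_rate))

def daily_dose(exposures):
    """Noise dose as a percentage; 100% corresponds to the full daily limit.
    exposures: list of (level_dba, hours_at_that_level)."""
    return 100.0 * sum(t / allowed_hours(l) for l, t in exposures)

# Hypothetical shift: 6 h at 82 dBA plus 2 h at 91 dBA near process equipment
dose = daily_dose([(82, 6), (91, 2)])
print(f"Daily noise dose = {dose:.0f}%")  # about 138%; protection is needed
```

A dose above 100% signals that hearing protection and the hearing conservation measures described above are warranted in those areas.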
# EMERGENCY PLANS AND PROCEDURES

Plant emergency services should be adequate to control emergencies such as fires or explosions until community-provided emergency services arrive. Where a community fire department is staffed with permanent, professionally trained workers, a plant manager may rely more on the emergency services of that department. When community services are relied on for emergency situations, the plant emergency plan should include provisions for close coordination with these services, frequent exercises with them, and adequate training in the potential hazards of various systems in the plant. Emergency medical personnel, such as nurses or those with first-aid training, should be present at the plant at all times. Immediate response is needed when a delay in treatment would endanger life. These cases may involve inhalation of toxic gases such as hydrogen sulfide or carbon monoxide and asphyxiation due to oxygen displacement by inert gases such as nitrogen.

Emergency personnel should also be trained according to guidelines in 29 CFR 1910, Subpart L, giving special attention to systems handling coal-derived materials and to any associated special procedures. Information on special firefighting and rescue procedures, protective clothing requirements, and breathing apparatus needs should be distributed for areas where materials might be released from the process equipment during a fire or explosion. These procedures should be incorporated into written standard operating procedures, and copies should be given to all emergency personnel.

# ACKNOWLEDGEMENTS

Funding for this project was provided by the Environmental Protection Agency under the Interagency Energy-Environment Research and Development Program.
Through the Act, Congress charged NIOSH with recommending occupational safety and health standards and describing exposure limits that are safe for various periods of employment. These limits include but are not limited to the exposures at which no worker will suffer diminished health, functional capacity, or life expectancy as a result of his or her work experience. By means of criteria documents, NIOSH communicates these recommended standards to regulatory agencies (including the Occupational Safety and Health Administration [OSHA]), health professionals in academic institutions, industry, organized labor, public interest groups, and others in the occupational safety and health community. Criteria documents contain a critical review of the scientific and technical information about the prevalence of hazards, the existence of safety and health risks, and the adequacy of control methods.

This criteria document is derived from reviews of information from human and animal studies of the toxicity of refractory ceramic fibers (RCFs) and is intended to describe the potential health effects of occupational exposure to airborne fibers of this material. RCFs are amorphous synthetic fibers produced by the melting and blowing or spinning of calcined kaolin clay or a combination of alumina, silica, and other oxides. RCFs belong to the class of synthetic vitreous fibers (SVFs), a class that also includes fibers of glass wool, rock wool, slag wool, and specialty glass. RCFs are used in commercial applications requiring lightweight, high-heat insulation (e.g., furnace and kiln insulation). Commercial production of RCFs began in the 1950s in the United States, and production increased dramatically in the 1970s. Domestic production of RCFs in 1997 totaled approximately 107.7 million lb. Currently, total U.S. production has been estimated at 80 million lb per year, which constitutes 1% to 2% of SVFs produced worldwide. In the United States, approximately 31,500 workers have the potential for occupational exposure to RCFs during distribution, handling, installation, and removal. More than 800 of these workers are employed directly in the manufacturing of RCFs and RCF products.

With increasing production of RCFs, concerns about exposures to airborne fibers prompted animal inhalation studies that have indicated an increased incidence of mesotheliomas in hamsters and lung cancer in rats following exposure to RCFs. Studies of workers who manufacture RCFs have shown a positive association between increased exposure to RCFs and the development of pleural plaques, skin and eye irritation, and respiratory symptoms and conditions (including dyspnea, wheezing, and chronic cough). In addition, current and former RCF production workers have shown decrements in pulmonary function. After evaluating this evidence, NIOSH proposes a recommended exposure limit (REL) for RCFs of 0.5 fiber per cubic centimeter (f/cm³) of air as a time-weighted average (TWA) concentration for up to a 10-hr work shift during a 40-hr workweek. Limiting airborne RCF exposures to this concentration will minimize the risk for lung cancer and irritation of the eyes and upper respiratory system and is achievable based on a review of exposure monitoring data collected from RCF manufacturers and users. However, because a residual risk of cancer (lung cancer and pleural mesothelioma) may still exist at the REL, continued efforts should be made toward reducing exposures to less than 0.2 f/cm³.
Engineering controls, appropriate respiratory protection programs, and other preventive measures should be implemented to minimize worker exposures to RCFs. NIOSH urges employers to disseminate this information to workers and customers. NIOSH also requests that professional and trade associations and labor organizations inform their members about the hazards of exposure to RCFs.

# Executive Summary

The National Institute for Occupational Safety and Health (NIOSH) has reviewed data characterizing occupational exposure to airborne refractory ceramic fibers (RCFs) and information about potential health effects obtained from experimental and epidemiologic studies. From this review, NIOSH determined that occupational exposure to RCFs is associated with adverse respiratory effects as well as skin and eye irritation and may pose a carcinogenic risk based on the results of chronic animal inhalation studies. In chronic animal inhalation studies, exposure to RCFs produced an increased incidence of mesotheliomas in hamsters and lung cancer in rats. The potential role of nonfibrous particulates generated during inhalation exposures in the animal studies complicates the issue of determining the exact mechanisms and doses associated with the toxicity of RCFs in producing carcinogenic effects. The induction of mesotheliomas and sarcomas in rats and hamsters following intrapleural and intraperitoneal implantation of RCFs provided additional evidence for the carcinogenic potential of RCFs. Lung tumors have also been observed in rats exposed to RCFs by intratracheal instillation.

In contrast to the carcinogenic effects of RCFs observed in experimental animal studies, epidemiologic studies have found no association between occupational exposure to airborne RCFs and an excess rate of pulmonary fibrosis or lung cancer. However, studies of worker populations with occupational exposure to airborne RCFs have shown an association between exposure and the formation of pleural plaques, increased prevalence of respiratory symptoms and conditions (dyspnea, wheezing, chronic cough), decreases in pulmonary function, and skin, eye, and upper respiratory tract irritation. Increased decrements in pulmonary function among workers exposed to RCFs who are current or former cigarette smokers indicate an apparent synergistic effect between smoking and RCF exposure. This finding is consistent with studies of other dust-exposed populations.

The implementation of improved engineering controls and work practices in RCF manufacturing processes and end uses has led to dramatic declines in airborne fiber exposure concentrations, which in turn have lowered the risk of symptoms and health effects for exposed workers. In 2002, the Refractory Ceramic Fibers Coalition (RCFC) established the Product Stewardship Program (PSP), which was endorsed by the Occupational Safety and Health Administration (OSHA). Contained in the PSP were recommendations for an RCF exposure guideline of 0.5 fiber per cubic centimeter (f/cm³) of air as a time-weighted average (TWA), based on the contention that exposures at this concentration could be achieved in most industries that manufactured or used RCFs. At this time, the available health data do not provide sufficient evidence for deriving a precise health-based occupational exposure limit to protect workers. The risk for mesothelioma at 0.5 f/cm³ is not known but cannot be discounted.
Evidence from epidemiologic studies showed that higher exposures in the past resulted in pleural plaques in workers, indicating that RCFs do reach pleural tissue. Both implantation studies in rats and inhalation studies in hamsters show that RCFs can cause mesothelioma. Because of limitations in the hamster data, the risk of mesothelioma cannot be quantified. However, the fact that no mesothelioma has been found in workers and that pleural plaques appear to be less likely in workers with lower exposures suggests a lower risk for mesothelioma at the REL. Because residual risks of cancer (lung cancer and pleural mesothelioma) and irritation may still exist at the REL, NIOSH further recommends that all reasonable efforts be made to work toward reducing exposures to less than 0.2 f/cm³. At this concentration, the risks of lung cancer are estimated to be between 0.03/1,000 and 0.47/1,000 (based on extrapolations of risk models from Moolgavkar et al. and Yu and Oberdörster).

Maintaining airborne RCF concentrations below the REL requires a comprehensive safety and health program that includes provisions for the monitoring of worker exposures, installation and routine maintenance of engineering controls, and the training of workers in good work practices. Industry-led efforts have likewise promoted these actions by establishing the PSP. NIOSH believes that maintaining exposures below the REL is achievable at most manufacturing operations and many user applications, and that the incorporation of an action level (AL) of 0.25 f/cm³ in the exposure monitoring strategy will help employers determine when workplace exposure concentrations are approaching the REL. The AL concept has been an integral element of occupational standards recommended in NIOSH criteria documents and in comprehensive standards promulgated by OSHA and the Mine Safety and Health Administration (MSHA). NIOSH also recommends that employers implement additional measures under a comprehensive safety and health program, including hazard communication, respiratory protection programs, smoking cessation, and medical monitoring. These elements, in combination with efforts to maintain airborne RCF concentrations below the REL, will further protect the health of workers.

# Glossary

FVC: Forced vital capacity, or the maximum volume of air (in liters) that can be forcibly expired from the lungs following a maximal inspiration.

High-efficiency particulate air (HEPA) filter: A dry-type filter used to remove airborne particles with an efficiency equal to or greater than 99.97% for 0.3-µm particles. The lowest filtering efficiency of 99.97% is associated with 0.3-µm particles, which is approximately the most penetrating particle size for particulate filters.

Inspirable dust: The fraction of airborne particles that would be inspired through the mouth and nose of a worker.

MAN: A refractory ceramic fiber produced by the Johns Manville Company.

Occupational medical monitoring (incorporating medical screening, surveillance): The periodic medical evaluation of workers to identify potential health effects and symptoms related to occupational exposures or environmental conditions in the workplace. An occupational medical monitoring program is a secondary prevention method based on two processes, screening and surveillance. Occupational medical screening focuses on early detection of health outcomes for individual workers.
Screening may involve an occupational history assessment, medical examination, and medical tests to detect the presence of toxicants or early pathologic changes before the worker would normally seek clinical care for symptomatic disease. Occupational medical surveillance involves the ongoing evaluation of the health status of a group of workers through the collection and analysis of health data for the purpose of disease prevention and for evaluating the effectiveness of intervention programs.

Pleural plaques: Discrete areas of thickening that are generally on the parietal pleura and are most commonly located at the midcostal and posterior costal areas, the dome of the diaphragm, and the mediastinal pleura. Presence of plaques is an indication of exposure to a fibrous silicate, most frequently asbestos.

Radiographic opacity: A shadow on a chest X-ray film generally associated with a fibrogenic response to dust retained in the lungs. Opacities are classified by size, shape, location, and profusion according to guidelines established by the International Labor Office (www.ilo.org/public/english/support/publ/books.htm).

Refractory ceramic fiber (RCF): An amorphous, synthetic fiber (Chemical Abstracts Service No. 142844-00-6) produced by melting and blowing or spinning calcined kaolin clay or a combination of alumina (Al2O3) and silicon dioxide (SiO2). Oxides may be added such as zirconia, ferric oxide, titanium oxide, magnesium oxide, calcium oxide, and alkalies. The percentage (by weight) of components is as follows: alumina, 20% to 80%; silicon dioxide, 20% to 80%; and other oxides in smaller amounts.

Respirable-sized fiber: Particles >5 µm long with an aspect ratio >3:1 and diameter ≤1.3 µm.

Shot: Nonfibrous particulate that is generated during the production of RCFs from the original melt batch.

Standardized mortality ratio (SMR): The ratio of the observed number of deaths (from a specified cause) to the expected number of deaths (from that same cause) that has been adjusted to account for demographic differences (e.g., age, sex, race) between the study population and the referent population.

Synthetic vitreous fiber (SVF): Any of a number of manufactured fibers produced by the melting and subsequent fiberization of kaolin clay, sand, rock, slag, etc. Fibrous glass, mineral wool, ceramic fibers, and alkaline earth silicate wools are the major types of SVF, also called man-made mineral fiber (MMMF) or man-made vitreous fiber (MMVF).

Thoracic-sized fiber: Particles >5 µm long with aspect ratio >3:1 and a diameter <3 to 3.5 µm. Thoracic refers to particles penetrating to the thorax (50% cut at 10-µm aerodynamic diameter). Mineral and vitreous fibers with diameters 3 to 3.5 µm have an aerodynamic diameter of approximately 10 µm.

# 1 Recommendations for a Refractory Ceramic Fiber (RCF) Standard

The National Institute for Occupational Safety and Health (NIOSH) recommends that exposure to airborne refractory ceramic fibers (RCFs) be controlled in the workplace by implementing the recommendations presented in this document. These recommendations are designed to protect the safety and health of workers for up to a 10-hr work shift during a 40-hr workweek over a 40-year working lifetime. Observance of these recommendations should prevent or greatly reduce the risks of eye and skin irritation and adverse respiratory health effects (including lung cancer) in workers with exposure to airborne RCFs. Preventive efforts are primarily focused on controlling and minimizing airborne fiber concentrations to which workers are exposed. Exposure monitoring, hazard communication, training, respiratory protection programs, and medical monitoring are also important elements of a comprehensive program to protect the health of workers exposed to RCFs. These elements are described briefly in this chapter and in greater detail in Chapter 9.

# Recommended Exposure Limit (REL)

NIOSH recommends that occupational exposures to airborne RCFs be limited to 0.5 fiber per cubic centimeter (f/cm³) of air as a time-weighted average (TWA) concentration for up to a 10-hr work shift during a 40-hr workweek, measured according to NIOSH Method 7400 (B rules). This recommended exposure limit (REL) is intended to reduce the risk of lung cancer, mesothelioma, and other adverse respiratory health effects (including irritation and compromised pulmonary function) associated with excessive RCF exposure in the workplace. Limiting exposures will also protect workers' eyes and skin from the mechanical irritation associated with exposure to RCFs. In most manufacturing operations, it is currently possible to limit airborne RCF concentrations to 0.5 f/cm³ or less. Exceptions may occur during RCF finishing operations and during the installation and removal of RCF products, when the nature of job activities presents a challenge to meeting the REL. For these operations, additional protective measures are recommended. Engineering and administrative controls, respirator use, and other preventive measures should be implemented to minimize exposures for workers in RCF industry sectors where airborne RCF concentrations exceed the REL. NIOSH urges employers to disseminate this information to workers and customers, and RCF manufacturers should convey this information to downstream users. NIOSH also requests that professional and trade associations and labor organizations inform their members about the hazards of exposure to RCFs.

# Definitions and Characteristics

# Naturally Occurring Mineral Fibers

Many types of mineral fibers occur naturally. Asbestos is the most prominent of these fibers because of its industrial application. The asbestos minerals include both the serpentine asbestos (chrysotile) and the amphibole mineral fibers, including actinolite, amosite, anthophyllite, crocidolite, and tremolite. Since ancient times, mineral fibers have been mined and processed for use as insulation because of their high tensile strength, resistance to heat, durability in acids and other chemicals, and light weight. The predominant forms of asbestos mined and used today are chrysotile (~95%), crocidolite (<5%), and amosite (<1%). For the purposes of this document, naturally occurring mineral fibers are distinguishable from synthetic vitreous fibers (SVFs) based on the crystalline structure of the mineral fibers. This property causes the mineral fibers to fracture longitudinally when subjected to mechanical stresses, thereby producing more fibers of decreasing diameter. By contrast, SVFs are amorphous and fracture transversely, resulting in more fibers of decreasing length until the segments are no longer of sufficient length to be considered fibers. Naturally occurring mineral fibers are generally more durable and less soluble than SVFs, a property that accounts for the biopersistence and toxicity of mineral fibers in vivo.
# RCFs

RCFs are a type of SVF; they are amorphous synthetic fibers produced from the melting and blowing or spinning of calcined kaolin clay or a combination of alumina (Al2O3) and silicon dioxide (SiO2). Oxides such as zirconia, ferric oxide, titanium oxide, magnesium oxide, calcium oxide, and alkalies may be added. The percentage of components (by weight) is as follows: alumina, 20% to 80%; silicon dioxide, 20% to 80%; and other oxides in smaller amounts. Like the naturally occurring mineral fibers, RCFs possess the desired qualities of heat resistance, tensile strength, durability, and light weight. On a continuum, however, RCFs are less durable (i.e., more soluble) than the least durable asbestos fiber (chrysotile) but more durable than most fibrous glass and other types of SVFs.

# SVFs

SVFs include a number of manmade (not naturally occurring) fibers that are produced by the melting and subsequent fiberization of kaolin clay, sand, rock, slag, and other materials. The major types of SVFs are fibrous glass, mineral wool (slag wool, rock wool), and ceramic fibers (including RCFs). SVFs are also frequently referred to as manmade mineral fibers (MMMFs) or manmade vitreous fibers (MMVFs).

# Sampling and Analysis

Employers shall perform air sampling and analysis to determine airborne concentrations of RCFs according to NIOSH Method 7400 (B rules), provided in Appendix A of this document. (A worked example of converting fiber counts to an airborne concentration appears after the Sampling Surveys list below.)

# Exposure Monitoring

Employers shall perform exposure monitoring as follows:

- Establish a workplace exposure monitoring program for worksites where RCFs or RCF products are manufactured, handled, used, installed, or removed. Include in this program routine area and personal monitoring of airborne fiber concentrations.
- Design a monitoring strategy that can be used to
  - evaluate a worker's exposure to RCFs,
  - assess the effectiveness of engineering controls, work practices, and other factors in controlling airborne fiber concentrations, and
  - identify work areas or job tasks in which worker exposures are routinely high and thus require additional efforts to reduce them.

# Sampling Surveys

Employers shall conduct exposure monitoring surveys to ensure that worker exposures (measured by full-shift samples) do not exceed the REL. Because adverse respiratory health effects may occur at the REL, it is desirable to achieve lower concentrations whenever possible. When workers are potentially exposed to airborne RCFs, employers shall conduct exposure monitoring surveys as follows:

- Collect representative personal samples over the entire work shift.
- Perform periodic sampling at least annually and whenever any major process change takes place or whenever another reason exists to suspect that exposure concentrations may have changed.
- Collect all routine personal samples in the breathing zones of the workers.
- If workers are exposed to concentrations above the REL, perform more frequent exposure monitoring as engineering changes are implemented and until at least two consecutive samples indicate that exposures no longer exceed the REL.
- Notify all workers of monitoring results and of any actions taken to reduce their exposures.
- When developing an exposure sampling strategy, consider variations in work and production schedules as well as the inherent variability in most area sampling.

# Focused sampling

When sampling to determine whether worker RCF exposures are below the REL, a focused sampling strategy may be more practical than a random sampling approach.
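To make the Method 7400 measurement concrete, the sketch below converts a phase-contrast-microscopy fiber count into an airborne concentration using the general count-to-concentration relationship for membrane-filter methods. The default parameter values shown (a 25-mm filter with roughly 385 mm² effective collection area and a 0.00785 mm² counting field) are typical defaults, and the sample numbers are hypothetical; consult the published method for the exact constants, B-rule counting criteria, and quality requirements.

```python
def fiber_concentration(fibers_counted, fields_counted, air_volume_liters,
                        blank_fibers=0.0, blank_fields=100,
                        filter_area_mm2=385.0, field_area_mm2=0.00785):
    """Estimate airborne fiber concentration (f/cm3) from a filter count.

    Implements the general count-to-concentration relationship used in
    membrane-filter methods such as NIOSH 7400: blank-corrected fiber
    density on the filter (f/mm2), scaled by the effective filter area
    and divided by the volume of air sampled.
    """
    density = (fibers_counted / fields_counted
               - blank_fibers / blank_fields) / field_area_mm2  # f/mm2
    return density * filter_area_mm2 / (air_volume_liters * 1000.0)  # f/cm3

# Hypothetical sample: 98 fibers in 20 fields, 800 L of air sampled
c = fiber_concentration(98, 20, 800.0)
print(f"Concentration = {c:.2f} f/cm3")  # about 0.30 f/cm3
```

In this hypothetical case the result sits above the 0.25 f/cm³ action level but below the 0.5 f/cm³ REL, which under the strategy described below would call for increased monitoring and a review of controls.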
A focused sampling strategy targets workers perceived to be exposed to the highest concentrations of a hazardous substance. This strategy is most efficient for identifying exposures above the REL if maximum-risk workers and time periods are accurately identified. Short tasks involving high concentrations of airborne fibers could result in elevated exposures over full work shifts. Sampling strategies such as those used by Corn and Esmen, Rice et al., and Maxim et al. have been developed and used specifically in RCF manufacturing facilities to monitor airborne fiber concentrations. In these strategies, representative workers are selected for sampling and are grouped according to dust zones, uniform job titles, or functional job categories. These approaches are intended to reduce the number of required samples and increase the confidence of identifying workers at similar risk.

# Area sampling

Area sampling may be useful in exposure monitoring to determine sources of airborne RCFs and to assess the effectiveness of engineering controls.

# Action Level

An action level (AL) at half the REL (0.25 f/cm³) shall be used to determine when additional controls are needed or when administrative actions should be taken to reduce exposure to RCFs. The purpose of an AL is to indicate when worker exposures to hazardous substances may be approaching the REL. When air samples contain concentrations at or above the AL, the probability is high that worker exposures to the hazardous substance exceed the REL. The AL is a statistically derived concept permitting the employer to have confidence (e.g., 95%) that if results from personal air samples are below the AL, the probability is small that worker exposures are above the REL. NIOSH has concluded that the use of an AL permits the employer to monitor hazardous workplace exposures without daily sampling. The AL concept has served as the basis for defining the elements of an occupational standard in NIOSH criteria documents and in comprehensive standards promulgated by the Occupational Safety and Health Administration (OSHA) and the Mine Safety and Health Administration (MSHA).

# Hazard Communication

Employers shall take the following measures to inform workers about RCF hazards:

- Establish a safety and health training program for all workers who manufacture, use, handle, install, or remove RCF products or perform other activities that bring them into contact with RCFs.
- Inform employees and contract workers about hazardous substances in their work areas.
- Instruct workers about how to get information from material safety data sheets (MSDSs) for RCFs and other chemicals. Provide MSDSs onsite and make them easily accessible.
- Inform workers about adverse respiratory health effects associated with RCF exposures.
- In work involving the removal of refractory insulation materials, make workers aware of the potential for exposure to respirable crystalline silica, the health effects related to this exposure, and methods for reducing exposures.
- Make workers who smoke cigarettes or use other tobacco products aware of their increased risk of developing RCF-induced respiratory symptoms and conditions (see Sections 1.12 and 9.6 for recommendations about smoking cessation programs).

# Training

Employers shall provide the following training for workers exposed to RCFs:

- Train workers to detect hazardous situations.
- Inform workers about practices or operations that may generate high airborne fiber concentrations (e.g., cutting and sanding RCF boards and other RCF products).
- Train workers how to protect themselves by using proper work practices, engineering controls, and personal protective equipment (PPE).

# Product Formulation

One factor recognized as contributing to the toxicity of an inhaled fiber is its durability and resistance to degradation in the respiratory tract. Chemical characteristics place RCFs among the most durable SVFs. As a result, an inhaled RCF that is deposited in the alveolar region of the lung will persist longer in the lungs than a less durable fiber. Therefore, NIOSH recommends substituting a less durable fiber for RCFs, or reformulating the chemistry of RCFs toward this end, to reduce the hazard for exposed workers. As part of product stewardship efforts, several RCF producers within the Refractory Ceramic Fibers Coalition (RCFC) have developed new and less biopersistent fibers termed alkaline earth silicate wools. Newly developed fibers should undergo industry-sponsored testing before their selection and commercial use to exclude possible adverse health effects from exposure.

# Engineering Controls and Work Practices

# Engineering Controls

Employers shall use and maintain appropriate engineering controls to keep airborne concentrations of RCFs at or below the REL during the manufacture, use, handling, installation, and removal of RCF products. Engineering controls for RCFs include the following:

- Local exhaust ventilation or dust collection systems at or near dust-generating processes
  - Band saws used in RCF manufacturing and finishing operations have been fitted with such engineering controls to capture fibers and dust during cutting operations, thereby reducing exposures for the band saw operator.
  - Disc sanders fitted with similar local exhaust ventilation systems effectively reduce airborne RCF concentrations during the sanding of vacuum-formed RCF products.
- Enclosed processes used during manufacturing to keep airborne fibers contained and separated from workers
- Water knives (high-pressure water jets) that effectively cut and trim the edges of RCF blanket while suppressing dust and limiting the generation of airborne fibers

# Work Practices

Employers shall implement appropriate work practices to help keep worker exposures at or below the REL for RCFs. The following work practices are recommended to help reduce concentrations of airborne fibers:

- Limit the use of power tools unless they are equipped with local exhaust or dust collection systems.
  - Be aware that manually powered hand tools generate less dust and fewer airborne fibers, but they often require additional physical effort and time and may increase the risk of musculoskeletal disorders.
  - The additional physical effort required by hand tools may also increase the rate and depth of breathing and consequently affect the inhalation rate and deposition of fibers in the lungs.
- Use ergonomically correct tools and proper workstation design to reduce the risk of musculoskeletal disorders.
- Use high-efficiency particulate air-filtered (HEPA-filtered) vacuums, and use wet sweeping to suppress airborne fiber and dust concentrations during cleanup.
- When removing after-service RCF products, dampen insulation with a light water spray to prevent fibers and dust from becoming airborne. (However, use caution when dampening refractory linings during installation: water can damage refractory-lined equipment, causing the generation of steam and possible explosion during heating.)
- Clean work areas regularly, using a HEPA-filtered vacuum or wet sweeping, to minimize the accumulation of debris.
- Ensure that workers wear long-sleeved clothing, gloves, and eye protection when performing potentially dusty activities involving RCFs or RCF products. For some activities, disposable clothing or coveralls may be preferred.

# Respiratory Protection

Respirators shall be used while performing any task for which the exposure concentration is unknown or has been documented to be higher than the NIOSH REL of 0.5 f/cm³ as a TWA. However, respirators shall not be used as the primary means of controlling worker exposures. When possible, use other methods for minimizing worker exposures to RCFs:

- Product substitution
- Engineering controls
- Changes in work practices

Use respirators when available engineering controls and work practices do not adequately control worker exposures below the REL for RCFs. NIOSH recognizes that controlling exposures to RCFs is a particular challenge during the finishing stages of RCF product manufacturing and during the installation and removal of refractory materials.

# Respiratory Protection Program

When respiratory protection is needed, employers shall establish a comprehensive respiratory protection program as described in the OSHA respiratory protection standard. Elements of a respiratory protection program must be established and described in a written plan that is specific to the workplace. The plan must include the following elements:

- Procedures that comply with the medical evaluation and with the cleaning, storing, and maintenance requirements of the standard
- A designated program administrator who is qualified to administer the respiratory protection program

Employers shall update the written program as necessary to account for changes in the workplace that affect respirator use. In addition, employers are required to provide at no cost to workers all equipment, training, and medical evaluations required under the respiratory protection program.

# Respirator Selection

When exposures to airborne RCFs exceed the REL, proper respiratory protection shall be selected as follows:

- Select, at a minimum, a half-mask, air-purifying respirator equipped with a 100-series particulate filter. This respirator has an assigned protection factor (APF) of 10.
- Provide a higher level of protection, and prevent facial or eye irritation from RCF exposure, by using a full-facepiece, air-purifying respirator equipped with a 100-series filter, or use any powered, air-purifying respirator equipped with a tight-fitting full facepiece.
- Consider providing a supplied-air respirator with a full facepiece for workers who remove after-service RCF insulation (e.g., furnace insulation) and are therefore exposed to high and unpredictable concentrations of RCFs. These respirators provide a greater level of respiratory protection. Use them whenever the work task involves potentially high concentrations of airborne fibers.
- Always perform a comprehensive assessment of workplace exposures to determine the presence of other possible contaminants (such as silica) and to ensure that proper respiratory protection is used.
- Use only respirators approved by NIOSH and MSHA.

# Sanitation and Hygiene

Employers shall take the following measures to protect workers potentially exposed to RCFs:

- Do not permit smoking, eating, or drinking in areas where workers may contact RCFs.
- Provide showering and changing areas free from contamination where workers can store work clothes and change into street clothes before leaving the work site.
- Provide services for laundering work clothes so that workers do not take contaminated clothes home.
- Protect laundry workers handling RCF-contaminated clothes from airborne concentrations that are above the REL.

Workers shall take the following protective measures:

- Do not smoke, eat, or drink in areas potentially contaminated with RCFs.
- If fibers get on the skin, wash with warm water and mild soap. Apply skin-moisturizing cream or lotion as needed to avoid irritation caused by frequent washing.
- Wear long-sleeved clothing, gloves, and eye protection when performing potentially dusty activities involving RCFs. Vacuum this clothing with a HEPA-filtered vacuum before leaving the work area.
- Do not use compressed air to clean the work area or clothing, and do not shake clothing to remove dust. These practices create a greater respiratory hazard by suspending dust and fibers in the air.
- Do not wear work clothes or protective equipment home. Change into clean clothes before leaving the work site.

# Medical Monitoring

Medical monitoring (in combination with resulting intervention strategies) represents secondary prevention and should not replace primary prevention efforts to control airborne fiber concentrations and worker exposures to RCFs. However, compliance with the REL for RCFs (0.5 f/cm³) does not guarantee that all workers will be free from the risk of RCF-induced respiratory irritation or respiratory health effects. Therefore, medical monitoring is especially important, and employers shall establish a medical monitoring program as follows:

- Collect baseline data for all employees before they begin work with RCFs. Continue periodic medical screening throughout their lifetime.
- Use medical surveillance, which involves the aggregate collection and analysis of medical screening data, to identify occupations, activities, and work processes in need of additional primary prevention efforts.
- Include all workers potentially exposed to RCFs (in both manufacturing and end-use industries) in an occupational medical monitoring program.
- Provide workers with information about the purposes of medical monitoring, the health benefits of the program, and the procedures involved.
- Include the following workers (who could receive the greatest benefits from medical screening) in the medical monitoring program:
  - Workers exposed to elevated fiber concentrations (e.g., all workers exposed to airborne fiber concentrations above the AL of 0.25 f/cm³, as described in Section 9.3)
  - Workers in areas or in specific jobs and activities (regardless of airborne fiber concentration) in which one or more workers have symptoms or respiratory changes apparently related to RCF exposure
  - Workers who may have been previously exposed to asbestos or other recognized occupational respiratory hazards that place them at an increased risk of respiratory disease

# Elements of the Medical Monitoring Program

Include the following elements in a medical monitoring program for workers exposed to RCFs: (1) an initial medical examination, (2) periodic medical examinations at regularly scheduled intervals, (3) more frequent and detailed medical examinations as needed on the basis of the findings from these examinations, (4) worker training, (5) written reports of medical findings, (6) quality assurance, and (7) evaluation. These elements are described in the following subsections.
# Initial (baseline) examination

Perform an initial (baseline) examination as near as possible to the date of beginning employment (within 3 months) and include the following:

- A standardized occupational history questionnaire that gathers information about all past jobs, with (1) special emphasis on those with potential exposure to dust and mineral fibers, (2) a description of all duties and potential exposures for each job, and (3) a description of all protective equipment the worker has used

# Periodic examinations

Administer periodic examinations (including a physical examination of the respiratory system and the skin, spirometric testing, a respiratory symptom update questionnaire, and an occupational history update questionnaire) at regular intervals determined by the medical monitoring program director. Determine the frequency of the periodic medical examinations according to the following guidelines:

- For workers with fewer than 10 years since first exposure to RCFs, conduct periodic examinations at least once every 5 years.
- For workers with 10 or more years since first exposure to RCFs, conduct periodic examinations at least once every 2 years.

A chest X-ray and spirometric testing are important on initial examination and may also be appropriate medical screening tests during periodic examinations for detecting respiratory system changes, especially in workers with more than 10 years since first exposure to RCFs. A qualified health care provider should consult with the worker to determine whether the benefits of periodic chest X-rays warrant the additional exposure to radiation.

# More frequent evaluations

Workers may need to undergo more frequent and detailed medical evaluations if the attending physician determines that the worker has any of the following indications:

- New or worsening respiratory symptoms or findings (e.g., chronic cough, difficult breathing, wheezing, reduced lung function, or radiographic indications of pleural plaques or fibrosis)

# Written reports of medical findings

Findings, test results, or diagnoses that have no bearing on the worker's ability to work with RCFs shall not be included in the report to the employer. Safeguards to protect the confidentiality of the worker's medical records shall be enforced in accordance with all applicable regulations and guidelines.

# Quality assurance

Employers shall do the following to ensure the effective implementation of a medical monitoring program:

- Ensure that workers follow the qualified health care provider's recommended exposure restrictions for RCFs and other workplace hazards.
- Ensure that workers use appropriate PPE if they are exposed to RCF concentrations above the REL.
- Encourage workers to participate in the medical monitoring program and to report any symptoms promptly to the program director.
- Provide any medical evaluations that are part of the medical monitoring program at no cost to the workers.
- When implementing job reassignments recommended by the medical program director, ensure that workers do not lose wages, benefits, or seniority.
- Ensure that the medical monitoring program director communicates regularly with the employer's safety and health personnel (e.g., industrial hygienists) to identify work areas that may require control measures to minimize exposures to workplace hazards.
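As a simple illustration of the periodicity guidance above, the sketch below encodes the recommended examination intervals (5 years for workers with fewer than 10 years since first RCF exposure; 2 years thereafter). The helper function and dates are hypothetical, not part of the NIOSH recommendation text.

```python
from datetime import date

def exam_interval_years(first_exposure: date, as_of: date) -> int:
    """Maximum recommended interval between periodic examinations:
    5 years if <10 years since first RCF exposure, otherwise 2 years."""
    years_since = (as_of - first_exposure).days / 365.25
    return 5 if years_since < 10 else 2

# A worker first exposed in 1998 is examined at least every 5 years;
# a worker first exposed in 1990 is examined at least every 2 years.
print(exam_interval_years(date(1998, 6, 1), date(2006, 1, 1)))  # -> 5
print(exam_interval_years(date(1990, 6, 1), date(2006, 1, 1)))  # -> 2
```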
# Evaluation

Employers shall evaluate their medical monitoring programs as follows:

- Periodically have standardized medical screening data aggregated and evaluated by an epidemiologist or other knowledgeable person to identify patterns of worker health that may be linked to work activities and practices requiring additional primary preventive efforts.
- Combine routine aggregate assessments of medical screening data with evaluations of exposure monitoring data to identify needed changes in work areas or exposure conditions.

# Labeling and Posting

Employers shall post warning labels and signs as follows:

- Post warning signs in work areas where respiratory protection is required, for example:

  RESPIRATORY PROTECTION REQUIRED IN THIS AREA

- Print all labels and warning signs in both English and the predominant language of workers who do not read English.
- Verbally inform workers about the hazards and instructions printed on the labels and signs if they are unable to read them.

# Smoking Cessation

NIOSH recognizes a synergistic effect between exposure to RCFs and cigarette smoking: this effect increases the risk of adverse respiratory health effects induced by RCFs. In studies of workers exposed to various airborne contaminants, combined exposures to smoking and airborne dust have been shown to contribute to an increased risk of occupational respiratory diseases, including chronic bronchitis, emphysema, and lung cancer. NIOSH recommends that all workers who smoke and are potentially exposed to RCFs participate in smoking cessation programs. Employers shall take the following measures:

- Establish smoking cessation programs to inform workers about the increased hazards of combined cigarette smoking and exposure to RCFs.
- Provide assistance and encouragement for workers who want to quit smoking.
- Prohibit smoking in the workplace.
- Disseminate information about health promotion and the harmful effects of smoking.
- Offer smoking cessation programs to workers at no cost to participants.
- Encourage activities that promote physical fitness and other healthy lifestyle practices affecting respiratory and cardiovascular health (e.g., through training programs, employee assistance programs, and health education campaigns).

# Background

In 1977, NIOSH reviewed health effects data on occupational exposure to fibrous glass and determined the principal adverse health effects to be skin, eye, and upper respiratory tract irritation as well as the potential for nonmalignant respiratory disease. At that time, NIOSH recommended the following:

"Occupational exposure to fibrous glass shall be controlled so that no worker is exposed at an airborne concentration greater than 3,000,000 fibers per cubic meter of air (3 fibers per cubic centimeter of air); . . . airborne concentrations determined as total fibrous glass shall be limited to a TWA of 5 milligrams per cubic meter of air."

NIOSH also stated that until more information became available, this recommendation should be applied to other MMMFs, also called SVFs. Since then, additional data have become available from studies in animals and humans exposed to RCFs. The purpose of this report is to review and evaluate these studies and other information about RCFs.

As noted earlier, SVFs are amorphous and fracture transversely into shorter fibers. In contrast, the crystalline structure of mineral fibers such as asbestos causes the fibers to fracture along the longitudinal plane under mechanical stresses, resulting in more fibers with the same length but smaller diameters.
These differences in morphology and cleavage patterns suggest that, for comparable operations, work with SVFs is less likely than work with asbestos to generate high concentrations of airborne fibers, since large-diameter fibers settle out of the air faster than small-diameter fibers. During the manufacturing of RCFs, approximately 50% of the product (by weight) is generated as fiber; the remaining 50% is a byproduct made up of nonfibrous particulate material called shot. Selected physical characteristics of RCFs are presented in Table 2-1. The addition of stabilizers and binders alters the durability and heat resistance of RCFs.

Generally, three types of RCFs are manufactured, and a fourth, after-service fiber (often recognized in the literature) is distinguished according to its unique chemistry and morphology. Table 2-2 presents the chemistries of the four fiber types, numbered RCF1 through RCF4. RCF1 is a kaolin fiber; RCF2 is an alumina/silica/zirconia fiber; RCF3 is a high-purity (alumina/silica) fiber; and RCF4 is an after-service fiber, characterized by devitrification (i.e., formation of the silica polymorph cristobalite), which occurs during product use over an extended period at temperatures exceeding 1,050 to 1,100 °C (>1,900 °F). Another fiber subcategory is RCF1a, prepared from commercial RCFs using a less aggressive method than that used to prepare RCF1 for animal inhalation studies. RCF1a is distinguished from the RCF1 used in chronic animal inhalation studies by its greater concentration of longer fibers and fewer nonfibrous particles. The lower ratio of respirable nonfibrous particles to fibers in RCF1a compared with RCF1 has been shown to affect lung deposition and clearance in animal inhalation studies. Chapter 5 presents additional discussion of animal studies and test fiber characteristics.

# Chemical and Physical Properties of RCFs

# Fiber Dimensions

Fibers of biological importance are those that become airborne and have dimensions within inhalable, thoracic, and respirable size ranges. Thoracic-sized fibers (<3 to 3.5 µm in diameter) and respirable-sized fibers (<1.3 µm in diameter) with lengths up to 200 µm are capable of reaching the portion of the respiratory system below the larynx. Respirable-sized fibers are of biological concern because they are capable of reaching the lower airways and gas exchange regions of the lungs when inhaled. Longer or thicker airborne fibers generally settle out of suspension or, if inhaled, are generally filtered out in the nasal passages or deposited in the upper airways. Thoracic-sized fibers that are inhaled and deposited in the upper respiratory tract are generally cleared more readily from the lung, but they have the potential to cause irritation and produce respiratory symptoms. Fiber dimensions are thus a significant factor in determining fiber deposition within the lung, biopersistence, and toxicity.

RCFs and other SVFs are manufactured to meet specified nominal diameters according to the fiber type and intended use. RCFs are produced with nominal diameters of 1.2 to 3 µm. Typical diameters for individual RCFs in RCF-containing products (as measured by Mast et al.) range from 0.1 to 20 µm, with lengths ranging from 5 to 200 µm. In bulk samples taken from three RCF blanket insulation products, more than 80% of the fibers counted by phase contrast optical microscopy (PCM) were <3 µm in diameter.
This result is consistent with those from another study of bulk samples of RCF insulation materials, which found the fibers to have geometric mean diameters (GMD) ranging from 1.5 to 2.8 µm (arithmetic mean diameter range = 2.3 to 3.9 µm; median diameter range = 1.6 to 3.3 µm). Studies of airborne fiber size distributions in RCF manufacturing operations indicate that these fibers meet the criteria for thoracic- and respirable-sized fibers. One early study of three domestic RCF production facilities found that approximately 90% of airborne fibers were thoracic-sized; 68% of fibers measured 2.4 to 20 µm long, and 19% of the fibers were >20 µm long. On the basis of TEM analysis of 3,357 RCFs observed on 98 air samples collected in RCF manufacturing sites, Allshouse reported that 99.7% of the fibers had diameters in the thoracic and respirable size range. Measurements of airborne fibers in the European RCF manufacturing industry are comparable: Rood reported that all fibers observed were in the thoracic and respirable size range (i.e., diameter <3 to 3.5 µm), with median diameters ranging from 0.5 to 1.0 µm and median lengths from 8 to 23 µm.

Cheng et al. analyzed an air sample for fibers during removal of after-service RCF blanket insulation from a refinery furnace. Fiber diameters ranged from 0.5 to 6 µm, with a median diameter of 1.6 µm. Fiber lengths ranged from 5 to 220 µm. Of 100 fibers randomly selected and analyzed from the air sample, 87% were within the thoracic and respirable size range. Another study of exposures to airborne fibers in industrial furnaces during installation and removal of RCF materials found GMD values of 0.38 and 0.57 µm, respectively.

# Fiber Durability

Fiber durability can affect the biologic activity of fibers inhaled and deposited in the respiratory system. Durable fibers are more biopersistent, thereby increasing the potential for causing a biological effect. The durability of a fiber is measured by the amount of time it takes for the fiber to fragment mechanically into shorter fibers or to dissolve in biological fluids. RCFs tested in vitro in a solution of neutral pH (modified Gamble's solution) had a dissolution rate of 1 to 10 ng/cm² per hr. This test is biologically relevant because the solution approximates the conditions of pulmonary interstitial fluid. By comparison, other SVFs (glass and slag wools) are more soluble, with dissolution rates in the hundreds of ng/cm² per hr.

Along a continuum of fiber durability determined in tests using simulated lung fluids at pH 7.4, the asbestos fiber crocidolite has a dissolution rate of <1 ng/cm² per hr; RCF1 and MMVF32 (E glass) have dissolution rates of 1 to 10 ng/cm² per hr; MMVF21 has a dissolution rate of 15 to 25 ng/cm² per hr; other fibrous glass and slag wools have dissolution rates in the range of 50 to 400 ng/cm² per hr; and the alkaline earth silicate wools have dissolution rates ranging from approximately 60 to 1,000 ng/cm² per hr. Chrysotile, which is considered the most soluble form of asbestos, has a dissolution rate of <1 to 2 ng/cm² per hr. RCFs dissolve more rapidly than chrysotile, even though RCFs are thicker (by an order of magnitude) than chrysotile fibers. The rate of dissolution is an important fiber characteristic that affects the clearance time and biopersistence of the fiber in the lung. The significance of fiber dimension, clearance, and dissolution (i.e., breakage, solubility) is discussed in Chapter 6.
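To put the dissolution rates above in perspective, a commonly used first-order approximation treats a fiber as a shrinking cylinder whose radius decreases at a rate k/ρ, giving a total dissolution time of t = ρd/(2k). The sketch below applies that textbook approximation with an assumed RCF density of about 2.7 g/cm³; it is illustrative only and is not a calculation taken from this document.

```python
def dissolution_time_years(diameter_um: float, k_ng_cm2_hr: float,
                           density_g_cm3: float = 2.7) -> float:
    """Estimate the time for a cylindrical fiber to dissolve completely,
    assuming a constant surface dissolution rate k (ng/cm^2 per hr).
    The radius shrinks at dr/dt = -k/rho, so t = rho * d / (2k)."""
    d_cm = diameter_um * 1e-4                 # um -> cm
    rho_ng_cm3 = density_g_cm3 * 1e9          # g/cm^3 -> ng/cm^3
    t_hr = rho_ng_cm3 * d_cm / (2.0 * k_ng_cm2_hr)
    return t_hr / (24 * 365)

# A 1-um fiber at 1 to 10 ng/cm^2 per hr (the in vitro RCF range cited above):
print(f"{dissolution_time_years(1.0, 10):.1f} yr")  # ~1.5 yr at 10 ng/cm^2/hr
print(f"{dissolution_time_years(1.0, 1):.1f} yr")   # ~15.4 yr at 1 ng/cm^2/hr
```

Even under this crude model, the order-of-magnitude gap between RCFs and the more soluble glass and slag wools translates into years rather than weeks of residence time, which is the intuition behind the biopersistence discussion in Chapter 6.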
# RCF Production and Potential for Worker Exposure

# Production

RCF production in the United States began in 1942 on an experimental basis, but RCFs were not commercially available until 1953. Sales of RCFs were modest initially, but they began to expand when the material gained acceptance as an economical alternative insulation for high-temperature kilns and furnaces. Commercial production of RCFs first reached significant levels in the 1970s, as oil shortages necessitated reductions in energy consumption. The growing demand for RCFs has also been strongly influenced by the recognition of health effects associated with exposure to asbestos-containing materials and by the increasingly stringent regulation of these products in the United States and many other countries.

# Potential for Worker Exposure

Approximately 31,500 workers in the United States are potentially exposed to RCFs during manufacturing, processing, or end use. A similar number of workers are potentially exposed to RCFs in Europe. Of these workers, about 800 (3%) are employed in the actual manufacturing of RCFs and RCF products.

# RCF Manufacturing Process

The manufacture of RCFs (Figure 3-1) begins with the blending of raw materials, which may include kaolin clay, alumina, silica, and zirconia, in a batch house. The batch mix is then transferred either manually or automatically to a furnace to be melted at temperatures exceeding 1,600 °C. On reaching a specified temperature and viscosity in the furnace, the molten batch mixture drains from the furnace and is fiberized, either through exposure to pressurized air or by flowing through a series of spinning wheels. Fans are used to create a partial vacuum that pulls the fibers into a collection or settling chamber. RCFs may then be conveyed pneumatically to a bagging area for packaging as bulk fiber. Some bulk fiber may be used directly in this form, or it may be processed to form textiles, felts, boards, cements, and other specialty items.

[Figure 3-1: The RCF manufacturing process. Fibers are produced by blowing or spinning molten material drained from the furnace; bulk fiber is packaged; felt or blanket is cured in a tempering oven; products are shipped, stored, or fabricated into specialty products.]

Other RCFs are formed into blankets as bulk fiber in the collection chamber settles onto a conveyor belt. The blanket passes through a needle-felting machine that interlocks the fibers and compresses the blanket to a specified thickness. From the needler, the blanket is conveyed to a tempering oven, where lubricants added in the settling chamber are burned off; the blanket is then cut to the desired size and packaged. As with the bulk fiber, the RCF blanket may undergo additional fabrication to create other specialty products. Many of these processes are automated and are monitored by machine operators. Postproduction processes such as cutting, sanding, packaging, handling, and shipping are more labor intensive, but the potential exists for exposure to airborne fibers throughout production.

# RCF Products and Uses

RCFs may be used in bulk fiber form or as one of the RCF specialty products in the form of mats, paper, textiles, felts, and boards. Because of their ability to withstand temperatures exceeding 1,000 °C, RCFs are used predominantly in industrial applications, including insulation, reinforcement, and thermal protection for furnaces and kilns.
RCFs can also be found in automobile catalytic converters, in consumer products that operate at high temperatures (e.g., toasters, ovens, woodstoves), and in space shuttle tiles. RCFs have been formed into noise-control blankets and used as a replacement for refractory bricks in industrial kilns and furnaces. RCFs have found increasing application as reinforcements in specialized metal matrix composites (MMCs), especially in the automotive and aerospace industries. A summary of RCF products and applications is provided here.

# Examples of Products

- Blankets: high-temperature insulation produced from spun RCFs in the form of a mat or blanket.
- Boards: high-temperature insulation produced from bulk fibers in the form of a compressed rigid board. (Boards have a higher density than blankets and are used as core material or in sandwich assemblies.)
- Bulk RCFs: fibers with high-temperature resistance, used as feedstock in manufacturing processes or in other applications for which product consistency is critical, typically in the manufacture of other ceramic-fiber-based products.

# Exposure Assessment

The conventional method used to assess the characteristics and concentrations of exposures to airborne fibers is to collect personal and environmental (area) air samples for laboratory analysis. Personal samples are the preferred method for estimating the exposure characteristics of a worker performing specific tasks. For personal sampling, a worker is equipped with the air sampling equipment, and the collection medium is positioned within the worker's breathing zone. Area sampling is performed to evaluate exposure characteristics associated with an area or process. Sampling equipment for area sampling is stationary; personal sampling, by contrast, allows for mobility because the equipment accompanies the worker throughout the sampling period.

# Sampling for Airborne Fibers

The two NIOSH methods for the sampling and analysis of airborne fibers of asbestos and other fibrous materials are as follows:

- Method 7400 describes air sampling and analysis by PCM.
- Method 7402 describes air sampling and analysis by TEM.

Both methods (listed in the NIOSH Manual of Analytical Methods and provided in Appendix A) involve using an air sampling pump connected to a cassette. The cassette consists of a conductive cowl equipped with a 25-mm cellulose ester membrane filter (0.45- to 1.2-µm pore size). The pump is used to draw air through the sampling cassette at a constant flow rate between 0.5 and 16 L/min. Airborne fibers and other particulates are trapped on the filter for analysis by microscopy. Methods 7400 and 7402 can be used to count the number of fibers (and therefore to calculate concentration based on the volume of air sampled) and to measure fiber dimensions. Fiber concentration is reported as the number of fibers per cubic centimeter of air (f/cm³). Although the two methods differ in the preparation of the sampling media for analysis, the major distinction between them is the resolving capability of the microscope. With PCM, 0.25 µm is approximately the diameter of the thinnest fibers that can be observed. TEM has a lower resolution limit well below the diameter of the smallest RCF (~0.02 to 0.05 µm). TEM also allows for qualitative analysis of fibers using an energy-dispersive X-ray analyzer (EDXA) to determine elemental composition and selected area electron diffraction (SAED) to compare diffraction patterns with reference patterns for identification.
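To illustrate how a PCM fiber count becomes a reported concentration under Method 7400-style counting, the sketch below scales the net fiber density on the filter by the effective collection area and divides by the sampled air volume. The constants used (385 mm² effective collection area for a 25-mm cassette, 0.00785 mm² graticule field area) are typical published values assumed here for illustration, not parameters quoted in this document.

```python
def pcm_concentration(fibers_counted: float, fields_counted: int,
                      flow_rate_lpm: float, sample_minutes: float,
                      blank_fibers_per_field: float = 0.0,
                      filter_area_mm2: float = 385.0,
                      field_area_mm2: float = 0.00785) -> float:
    """Estimate airborne fiber concentration (f/cm^3) from a PCM count.

    Fiber density on the filter (f/mm^2) = net fibers per field / field area.
    Concentration (f/cm^3) = density * filter area / air volume (cm^3).
    """
    net_per_field = fibers_counted / fields_counted - blank_fibers_per_field
    density_per_mm2 = net_per_field / field_area_mm2
    air_volume_cm3 = flow_rate_lpm * sample_minutes * 1000.0  # L -> cm^3
    return density_per_mm2 * filter_area_mm2 / air_volume_cm3

# Example: 100 fibers over 100 fields, sampled at 2 L/min for a 480-min shift
print(f"{pcm_concentration(100, 100, 2.0, 480):.3f} f/cm^3")  # ~0.051
```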
# NIOSH Fiber-Counting Rules

The appendix to NIOSH Method 7400 specifies two sets of fiber-counting rules that vary according to the parameters used to define a fiber. Under the A rules, any particle >5 µm long with an aspect ratio (length to width) >3:1 is considered a fiber; the A rules place no upper limit on fiber diameter. Under the B rules, a fiber is defined as being >5 µm long with an aspect ratio ≥5:1 and a diameter <3 µm. Fibers counted under the B rules are therefore a subset of those counted under the A rules, because not all particles with a >3:1 aspect ratio also meet the ≥5:1 aspect ratio criterion and are <3 µm in diameter.

# Sampling for Total or Respirable Airborne Particulates

Airborne exposures generated during work with RCFs may also be estimated by sampling for general dust concentrations. Sampling for particulates not otherwise regulated is described in NIOSH Method 0500 for total dust concentrations and in NIOSH Method 0600 for the respirable fraction. Both methods (also included in Appendix A) use a sampling pump to pull air through a filter that traps suspended particulates. NIOSH Method 0600 uses a size-selective sampling apparatus (cyclone) to separate the respirable fraction of airborne material from the nonrespirable fraction. The mass of airborne particulates on the filter is measured gravimetrically, and the airborne concentration is determined as the ratio of the particulate mass to the volume of air sampled, reported as mg/m³ (or µg/m³). This method does not distinguish fibers from nonfibrous airborne particles. No NIOSH REL exists for either total or respirable particulates not otherwise regulated. The OSHA permissible exposure limit (PEL) for particulates not otherwise regulated is 15 mg/m³ for total particulates and 5 mg/m³ for respirable particulates as 8-hr TWA concentrations. The American Conference of Governmental Industrial Hygienists (ACGIH) threshold limit value (TLV) for particles (insoluble or poorly soluble) not otherwise specified is 10 mg/m³ for inhalable particles and 3 mg/m³ for respirable particles as 8-hr TWA concentrations.

# Sampling for Airborne Silica

Because silica is a major constituent of RCFs, the potential exists for exposure to silica during work with RCFs (e.g., in manufacturing or during removal of after-service RCF furnace insulation). As with sampling for respirable particulates, sampling for respirable silica involves using a pump to draw air through a cyclone before collecting respirable airborne particles on a filter. Qualitative and quantitative analysis of the sample for silica content can be performed using analytical methods such as X-ray diffraction (NIOSH Method 7500).

# Industrial Hygiene Surveys and Exposure Assessments

Assessments of occupational exposures, including quantitative measurement of airborne fiber concentrations associated with manufacturing, handling, and using RCFs, have been performed using industrial hygiene surveys and air sampling techniques at multiple worksites.
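Many of the survey results that follow are summarized as geometric means (GMs) and geometric standard deviations (GSDs), the natural statistics for the approximately log-normal exposure distributions noted later in this chapter. A minimal sketch of how these statistics are computed from a set of measured concentrations (the example values below are invented):

```python
import math

def gm_gsd(values):
    """Geometric mean and geometric standard deviation of positive values.
    GM = exp(mean(ln x)); GSD = exp(stdev(ln x)). GSD is dimensionless."""
    logs = [math.log(v) for v in values]
    mean_log = sum(logs) / len(logs)
    var_log = sum((x - mean_log) ** 2 for x in logs) / (len(logs) - 1)
    return math.exp(mean_log), math.exp(math.sqrt(var_log))

# Example: five full-shift fiber concentrations (f/cm^3)
gm, gsd = gm_gsd([0.12, 0.30, 0.08, 0.55, 0.21])
print(f"GM = {gm:.2f} f/cm^3, GSD = {gsd:.2f}")  # GM = 0.20, GSD = 2.13
```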
Sources of monitoring data that characterize occupational exposures to RCFs include the following:

- University of Pittsburgh studies of exposures at RCF manufacturing sites in the 1970s
- International (Canadian, Swedish, Australian) industrial hygiene surveys of occupational exposures to RCFs
- A study of end-user exposures to RCF insulation products by researchers at Johns Hopkins University
- NIOSH Health Hazard Evaluations (HHEs) of occupational exposures to RCFs

# University of Pittsburgh Survey of Exposures During RCF Manufacturing

In the mid-1970s, researchers from the University of Pittsburgh conducted environmental monitoring to assess worker exposures to airborne fibers at domestic RCF manufacturing facilities. This research effort was one of the pioneering studies in the use of workplace exposure groupings, or dust zones, for establishing a sampling strategy. In a series of industrial hygiene surveys, Esmen et al. collected 215 full-shift air samples at three RCF manufacturing plants. Most of the airborne fibers measured were <4.0 µm in diameter and <50 µm long, with a GMD of 0.7 µm and a geometric mean length (GML) of 13 µm.

# University of Cincinnati Study of Exposures During RCF Manufacturing

In 1987, researchers from the University of Cincinnati initiated an industry-wide epidemiologic study of workers who manufacture RCFs. One aim of the study was to characterize current and former exposures to RCFs and silica in U.S. RCF manufacturing facilities. Data from initial surveys conducted at five RCF manufacturing plants indicated airborne RCFs with a GMD ranging from 0.25 to 0.6 µm and a GML ranging from 3.8 to 11.0 µm. The airborne TWA fiber concentrations for these five plants ranged from <0.01 to 1.57 f/cm³. After the first two rounds of quarterly sampling, Rice et al. had collected data from 484 fiber count samples (382 samples with values greater than the analytic limit of detection [LOD], 39 overloaded samples, 36 samples with values below the LOD, and 27 samples voided because of tampering or pump failure). They also collected 35 samples from persons working with raw materials; these were analyzed quantitatively and qualitatively for respirable mass and for silica polymorphs (quartz, tridymite, and cristobalite). A sampling strategy was developed by identifying more than 100 job functions across the 5 facilities. These job functions were consolidated into industry job titles based on similarities of function, proximity to certain processes, and exposure characteristics within designated dust zones. TWA fiber concentrations by job title [Bennett et al. 1989; Breysse et al. 1990] ranged from below the analytical LOD to 1.54 f/cm³.

Of the 35 samples analyzed for the silica polymorphs, quantifiable silica was found in 5 samples: 4 of the samples contained cristobalite in concentrations ranging from 20 to 78 µg/m³, and 1 of the samples contained 70 µg/m³ of quartz. The measurable silica exposures occurred among workers employed as raw material handlers and furnace operators. As the study progressed, approximately 1,820 work history interviews were conducted and evaluated to refine uniform job titles and to identify dust zones according to the method of Corn and Esmen. Four years of sampling data (1987-1991) were merged with historic sampling data to construct exposure estimates for 81 job titles in 7 facilities for specified time periods. Overall exposures decreased.
The maximum exposure estimated was 10 f/cm³ in the 1950s for carding in a textile operation; subsequent changes in engineering, process, and ventilation reduced exposure estimates for all 20 job titles to near or below 1 f/cm³. The study reported that at more recent operations (1987-1991), exposure estimates ranged from below the analytic LOD to 0.66 f/cm³. Subsequently, Rice et al. published the results of an analysis of exposure estimates for 10 years of follow-up sampling (1991-2001). The study showed that exposures decreased for 25% of job titles, remained stable for 53%, and increased for 22%. Of the job titles with increased exposure estimates, 9 estimates were >0.1 f/cm³ (range = 0.1 to 0.21 f/cm³), and 19 estimates were <0.1 f/cm³. The exposure estimates for this study do not include adjustments for respirator use.

# RCFC/EPA Consent Agreement Monitoring Data

In 1993, the RCFC and the EPA entered into a negotiated 5-year consent agreement to determine the magnitude of RCF exposures in the primary RCF manufacturing industry and in secondary RCF-use industries. Another purpose of this consent agreement was to document changes in RCF exposures during the 5 years of the agreement (1993-1998). The Quality Assurance Project Plan in the consent agreement contains the analytical protocols, statistical design, description of the program objectives, and timetables for meeting the objectives.

During each year of the consent agreement, a minimum of 720 personal air samples (measured as 8-hr TWAs) were collected according to a stratified random sampling plan. Of these, 320 samples were collected in RCF manufacturing and processing (primary) facilities. The remaining 400 samples were collected in RCF customer facilities, referred to as end-use (secondary) facilities. (For these analyses, fibers were defined as having an aspect ratio ≥5:1 and a length >0.5 µm when sized by transmission electron microscopy; for scanning electron microscopy, fibers were defined with the same aspect ratio and a length >5 µm. In the source tables, "fore" denotes the area of the plant before the furnace.) The researchers collected a total of 4,576 samples. A number of the end-use facilities were randomly selected from a list of known purchasers of RCF products. The remainder consisted of facilities that volunteered for sampling once they learned of the consent agreement.

The strata from which the 720 samples were collected consist of eight functional job categories derived so that results could be aggregated for comparison across industries, facilities, and similar job functions. This categorization was based on the approach instituted by Corn and Esmen. Appendix B lists definitions and major work tasks for each functional job category. TWA and task-length average air sampling data were gathered according to NIOSH Method 7400 (B rules) and analyzed using PCM and TEM. Data on respirator use (by type) were also collected.

As background for the consent agreement monitoring plan, baseline (now referred to as historical) information about airborne fiber concentrations was obtained through personal sampling of workers at RCF manufacturing facilities from January 1989 to May 1993. Exposure monitoring strategies used during the baseline period (1989-1993) provided the framework for the consent agreement (1993-1998) monitoring protocol.
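The stratified random sampling plan described above fixes the number of samples drawn from each stratum (functional job category). One common way to set those numbers is allocation proportional to stratum size; the sketch below illustrates that generic approach with invented category names and worker counts. It is an illustration of stratified allocation in general, not the consent agreement's actual allocation rule.

```python
def allocate_samples(strata_sizes: dict, total_samples: int) -> dict:
    """Proportionally allocate a fixed number of samples across strata.
    Rounding may leave the total one or two samples off; adjust the
    largest stratum if an exact total is required."""
    n_workers = sum(strata_sizes.values())
    return {s: round(total_samples * k / n_workers)
            for s, k in strata_sizes.items()}

# Hypothetical worker counts per functional job category at one facility
strata = {"assembly": 60, "fiber production": 40,
          "finishing": 25, "mixing/forming": 15}
print(allocate_samples(strata, 40))
# -> {'assembly': 17, 'fiber production': 11, 'finishing': 7, 'mixing/forming': 4}
```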
Tables 4-4 through 4-6 summarize RCF exposure concentrations for the baseline (1989-1993) and consent agreement monitoring (1993-1998) periods by manufacturing and end-use sectors. A comparison of values from Tables 4-4, 4-5, and 4-6 with those in Table 4-3 indicates that average airborne concentrations for 1993-1998 were lower than those for the preceding baseline sampling period (1989-1993). However, a comparison of values in Tables 4-5 and 4-6 shows that average concentrations for the entire 5-year consent agreement monitoring period (1993-1998) are equal to those of year 5 (i.e., no change).

After the first 3 years (1993-1996) of the consent agreement monitoring period, Maxim et al. performed interim analyses of these data combined with historical data from the baseline monitoring period (1989-1993). The following conclusions about RCF exposures were based on these analyses of data from 1,600 baseline samples and 3,200 consent agreement samples:

- Airborne concentrations of RCFs are generally decreasing in the workplace.
- Ninety percent of airborne concentrations of RCFs in the workplace are below 1 f/cm³.
- RCF concentrations have an approximately log-normal distribution.
- Significant differences exist in workplace concentrations by facility.
- Workplace concentrations vary with functional job category.
- Respirator usage varies with the worker's functional job category and the associated average fiber concentration.
- Workplace samples have a lower ratio of respirable nonfibrous particles to fibers than samples used in initial animal inhalation studies.

Functional job categories with the highest average TWA fiber concentrations include removal (AM = 1.2 f/cm³), finishing (AM = 0.8 f/cm³), and installation (AM = 0.4 f/cm³). The remaining functional job categories had average TWA concentrations near or below 0.3 f/cm³. Although different jobs and activities are associated with the three higher-exposure functional job categories, similarities exist that contribute to exposure concentrations. First, removal and installation activities are performed at remote jobsites, where implementing fixed engineering controls may be difficult or impractical for reducing airborne fiber concentrations. Removal also requires more mechanical energy and may involve fracturing the structure of the RCF product, resulting in fiber release and higher concentrations of airborne fibers. Finishing activities are performed at fixed locations where it is possible to implement engineering controls, but they too involve mechanical energy to shape RCF products by drilling, sanding, and sawing. These processes also result in the dispersal of airborne fibers.

Regarding the particle-to-fiber ratio, Maxim et al. found average workplace values to be much lower (0.53; n = 10; range not reported) than the average ratio (9.1; n = 7) in the samples used in a series of animal inhalation toxicity studies with RCFs.

Monitoring performed during the baseline period (August 1989-May 1993) and the 5-year consent agreement period (June 1993-May 1998) provided data from nearly 6,200 air samples in the domestic RCF industry. Table 4-6 presents the summary statistics of workplace RCF exposure concentrations for the baseline (historical) and consent agreement monitoring data.
The data suggest that (1) the AMs and GMs of RCF concentrations were higher for workers during the baseline period than during the more recent (consent agreement monitoring) period, and (2) AM and GM exposure concentrations were lower for workers in manufacturing facilities than at end-use sites.

# Exposures During Installation and Removal of RCF Furnace Insulation

To evaluate exposures to airborne dust associated with removing RCF furnace insulation, Gantner conducted surveys with air sampling at five sites. The surveys were performed at sites where workers removed modules or blanket-type insulation manually using knives or trowels. During removal activities, workers wore disposable, single-use respirators, disposable protective clothing or their own personal clothing, and (in some cases) goggles or other protective eyewear. Personal sampling was performed for total dust concentration as well as respirable dust concentration using a cyclone. Area samples were collected in the center of the work zones (industrial furnaces) at 9 ft above the floor, which was at the breathing-zone level of the workers, who were on scaffolding. A total of 24 air samples were collected, including 14 personal samples (9 for respirable dust and 5 for total dust concentrations) and 10 area samples (3 for respirable dust and 7 for total dust concentrations).

Cheng et al. studied exposures to RCFs during the installation and removal of RCF insulation in 13 furnaces at 6 refineries and 2 chemical plants. Air samples were collected and analyzed according to NIOSH Method 7400 (A rules); sampling times ranged from 15 to 300 min. Samples collected during minor maintenance and inspection tasks (n = 27) showed GM concentrations of 0.08 to 0.39 f/cm³ (range = 0.02 to 17 f/cm³). Sampling performed during installation of RCF insulation (n = 59) revealed GM concentrations of 0.14 to 0.62 f/cm³ (range = 0.02 to 2.6 f/cm³). The highest exposures were observed in samples collected during removal of RCF insulation (n = 32), with GM concentrations of 0.02 to 1.3 f/cm³ (range = <0.01 to 17 f/cm³). Workers working outside of enclosed spaces (furnaces) were rarely exposed to concentrations above 0.2 f/cm³. One sample of after-service RCF insulation was analyzed for fiber diameter and length: the median diameter was 1.6 µm (range = 0.5 to 6 µm), and lengths ranged from 5 to 220 µm. Of 100 fibers randomly selected and analyzed from the air sample, 87% were within the respirable size range. Four personal samples were collected during removal of after-service RCF modules and fire bricks and were analyzed for respirable crystalline silica (cristobalite). These samples revealed concentrations ranging from 0.03 to 0.2 mg/m³ (GM = 0.06 mg/m³).

At a Dutch oil refinery, van den Bergen et al. performed personal air monitoring for airborne fibers to assess worker exposures during the removal of RCF insulation from expansion seams in a heat-treating furnace. The 8-hr TWA exposures for the 5 workers sampled ranged from 9 to 50 f/cm³ (GM = 16 f/cm³). Sweeney and Gilgrist also monitored worker exposures to airborne RCFs and respirable silica during the removal of RCF materials from furnaces. Personal samples from two workers taken during the removal of after-service RCF insulation revealed exposures of 0.15 and 0.16 f/cm³. Exposures to total particulate (18.3 and 22.4 mg/m³ as 8-hr TWAs) were above the OSHA PEL of 15 mg/m³. Exposure concentrations for respirable dust containing crystalline silica (2.4% and 4.3%) were also above the OSHA PEL.
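Two small calculations recur in these survey reports: the silica-dependent OSHA PEL for respirable dust (the Table Z-3 formula, 10/(%SiO2 + 2) mg/m³) and the conversion of a task-length sample to an equivalent 8-hr TWA (assuming zero exposure for the unsampled remainder of the shift). The sketch below illustrates both using these standard industrial hygiene formulas; it is not code or a procedure taken from this document.

```python
def silica_pel_mg_m3(percent_quartz: float) -> float:
    """OSHA PEL (Table Z-3) for respirable dust containing quartz:
    10 / (%SiO2 + 2) mg/m^3 as an 8-hr TWA."""
    return 10.0 / (percent_quartz + 2.0)

def equivalent_8hr_twa(task_conc: float, task_minutes: float) -> float:
    """Equivalent 8-hr TWA for a task-length sample, assuming zero
    exposure during the unsampled remainder of the 480-min shift."""
    return task_conc * task_minutes / 480.0

# Respirable dusts containing 2.4% and 4.3% quartz (the samples cited above):
print(f"{silica_pel_mg_m3(2.4):.1f} mg/m^3")  # ~2.3
print(f"{silica_pel_mg_m3(4.3):.1f} mg/m^3")  # ~1.6
# A hypothetical 160-min removal task at 0.44 mg/m^3 respirable quartz:
print(f"{equivalent_8hr_twa(0.44, 160):.3f} mg/m^3")  # ~0.147
```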
The elevated concentrations of respirable and total dust were associated with the removal of conventional refractory lining using jackhammers, crowbars, and hammers. A worker performing routing to install new RCF insulation was exposed at 1.29 f/cm³ (8-hr TWA). A personal sample from another worker using a bandsaw to cut new RCF insulation revealed a concentration of 1.02 f/cm³ as an 8-hr TWA.

In the RCF industry, worker exposures to respirable crystalline silica (including quartz, cristobalite, and tridymite) may occur during the use of silica in manufacturing, the removal of after-service insulation, and waste disposal. Focusing on exposures of workers who install, use, or remove RCF insulation, Maxim et al. collected 158 personal air samples analyzed for respirable quartz, cristobalite, and tridymite over the RCFC/EPA 5-year consent agreement monitoring period (1993-1998). A total of 42 removal projects were sampled. For small jobs, all workers engaged in insulation removal were sampled; for larger jobs, workers were selected at random for sampling. Air sampling and analysis were performed according to NIOSH Method 7500 for crystalline silica by X-ray diffraction; sampling times ranged from 37 to 588 min (AM = 260 min, standard deviation [SD] = 129 min). The short sampling times reflect the short duration of RCF insulation removal tasks (a benefit over the time-intensive removal of conventional refractories). Removal of RCF blankets and modules is performed with knives, pitchforks, rakes, and water lances, or by hand-peeling. The study noted that most (>90%) workers wear respirators (with protection factors from 10 to 50 or more) when removing insulation. Analysis of the 158 samples found the following:

- Fourteen samples had task-time respirable quartz concentrations ranging from 0.01 to 0.44 mg/m³ (equivalent 8-hr TWA range = 0.004 to 0.148 mg/m³); the remainder of the samples were below the LOD.
- Three samples had detectable concentrations of cristobalite that were below the NIOSH REL of 0.05 mg/m³.
- One sample contained tridymite (0.2 mg/m³) at a concentration exceeding the NIOSH REL of 0.05 mg/m³.

# International (Canadian, Swedish, and Australian) Surveys of RCF Exposure

Perrault et al. reported on the characteristics of fiber exposures that occurred during the use of synthetic fiber insulation materials on construction sites in Canada. Fiber dimensions were measured from bulk samples of insulation materials used at five construction sites. Area air samples were also collected during the installation of composite RCF and glass wool insulation, glass wool alone, rock wool (both blown and sprayed on), and RCFs alone. Respirable fiber concentrations were highest during removal and installation of RCFs (0.39 to 3.51 f/cm³), compared with concentrations measured during installation of rock wool (0.15 to 0.32 f/cm³), composite RCF and glass wool (0.04 f/cm³), and glass wool alone (0.01 f/cm³). Diameters of fibers in bulk samples differed significantly from diameters of airborne fibers. RCFs had the smallest GMD of fibers in bulk samples (0.38 to 0.55 µm), compared with glass wool (0.93 µm) and rock wool (1.1 to 3.9 µm). For airborne fibers, rock wool (sprayed on) had a GMD of 2.0 µm, followed by RCFs (1.1 µm), composite RCFs and glass wool (0.71 µm), glass wool (0.5 µm), and blown rock wool (0.5 µm). Elemental analysis and comparison of bulk samples with air samples revealed a greater concentration of fibers with oxides of silicon and aluminum in the air samples.
For sites with either glass wool or rock wool insulation, airborne samples contained fewer fibers with silicon oxide as the sole constituent than did bulk samples. The authors concluded that airborne fiber concentrations were affected by the type of fiber material used and the confinement of the worksites. The authors also concluded that characterization of fibers in bulk samples is not a good representation of the physical and chemical parameters of the airborne fibers.

A report by the Swedish National Institute for Occupational Health describes exposure to RCFs in smelters and foundries based on industrial hygiene surveys and sampling at 4 facilities: a specialty steel foundry (2,500 workers), a metal smelting plant (1,500 workers), an aluminum foundry (450 workers), and an iron foundry (450 workers). RCF products were used in these plants in ladles, tapping spouts, holding furnaces, heat treatment furnaces, and spill protection mats. Workers and contractors were placed into three exposure categories, depending on their potential for exposure (as determined by distance from a fiber source). The highest exposures to airborne ceramic fibers (category 1) had median concentrations of 0.26 to 1.2 f/cm³ and involved about 3% (n = 160) of the workers at the plants surveyed. Secondary exposures (categories 2 and 3) involved another 33% (n = 1,650) of the workers and had median concentrations of 0.03 to 0.24 f/cm³. During certain operations such as removal or demolition of RCF materials in enclosed spaces, concentrations of up to 210 f/cm³ were measured. Total dust concentrations increased with fiber concentration and were as high as 600 mg/m³ during demolition and 60 mg/m³ during reinsulation. Median fiber diameters from bulk samples analyzed by electron microscopy ranged from 0.6 to 1.5 µm, which was comparable to the diameters of airborne fibers. On the basis of air sampling data, fiber dose (assuming a working lifetime of 40 years) was estimated for 8 occupations with category 1 exposures. Dose estimates ranged from 0.05 fiber-years/cm³ for a cleaner to 85 fiber-years/cm³ for a bricklayer or contractor. Dose estimates for the 6 other occupations ranged from 0.6 to 3.1 fiber-years/cm³.

Researchers at the Australian National Occupational Health and Safety Commission established a technical working group to investigate typical exposures in SVF manufacturing and user industries. The RCF manufacturing industry is relatively small in Australia: 2 plants employing roughly 40 workers have been manufacturing RCFs since 1976 and 1977. Since the plants began manufacturing RCFs, 152 persons have been involved with production. Airborne fiber concentrations in both plants decreased over time as a result of (1) the introduction of a national exposure standard of 0.5 f/cm³ for synthetic fibers and a secondary standard of 2 mg/m³ for inspirable dust, (2) the use of various controls and handling technologies, and (3) increased awareness of dust suppression by the workforce. GM concentrations of airborne fibers before implementation of the synthetic fiber exposure standard (1983-1990) measured 0.52 f/cm³ (geometric standard deviation [GSD] = 3.9) and 0.29 f/cm³ (GSD = 2.5) for plants 1 and 2, respectively. GM concentrations for the subsequent period (1991-1996) dropped to 0.11 f/cm³ (GSD = 4.1) at plant 1 and 0.27 f/cm³ (GSD = 3.3) at plant 2.
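Because workplace RCF concentrations are approximately log-normal (as noted in the consent agreement findings above), GM/GSD summaries like those reported here can be translated into an estimated fraction of shifts exceeding a limit; the same reasoning underlies the action level concept in Chapter 1. The sketch below is a standard log-normal exceedance calculation, offered as an illustration rather than a method prescribed by this document.

```python
import math

def exceedance_fraction(gm: float, gsd: float, limit: float) -> float:
    """Estimated fraction of exposures above `limit` for a log-normal
    distribution with geometric mean `gm` and geometric SD `gsd`."""
    z = (math.log(limit) - math.log(gm)) / math.log(gsd)
    # Standard normal survival function via the complementary error function
    return 0.5 * math.erfc(z / math.sqrt(2))

# Example: GM = 0.27 f/cm^3, GSD = 3.3 (plant 2 above) vs. the 0.5 f/cm^3 REL
print(f"{exceedance_fraction(0.27, 3.3, 0.5):.0%}")  # ~30% of shifts
```

The example shows why a GM well below the REL does not by itself guarantee compliance: with a wide GSD, a substantial fraction of individual shifts can still exceed the limit.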
# Johns Hopkins University Industrial Hygiene Surveys
A report of RCF end-user exposure data prepared for the Thermal Insulation Manufacturers Association (TIMA) showed that using blanket, bulk, and vacuum-formed RCFs during certain operations resulted in high fiber concentrations. For example, 25 personal air samples collected from workers installing RCF blanket modules had an AM 8-hr TWA concentration of 1.36 f/cm3 (SD=1.15). The fibers were collected and analyzed using NIOSH Method 7400 (B rules). Seventeen vacuum formers had AM exposure concentrations of 0.71 f/cm3 (SD=0.83) while using bulk RCF products. Twenty-eight workers with the job title vacuum-formed RCF cast finisher had AM exposures of 1.55 f/cm3 (SD=1.51). Table 4-7 summarizes exposure data collected for the 17 occupations sampled during the study. Scanning electron microscopy (SEM) was used to measure dimensions of approximately 3,500 fibers from selected air samples of the 17 occupations. GM fiber diameters ranged from 0.9 to 1.5 µm, and GM fiber lengths ranged from 20.4 to 36.1 µm. Fiber aspect ratios based on these data ranged between 16:1 and 30:1.
# NIOSH HHEs and Additional Sources of RCF Exposure Data
NIOSH has conducted HHEs involving potential exposures to RCFs at the following workplaces: an RCF manufacturing facility, a steel foundry, a power plant, a foundry, and a railroad car wheel and axle production facility.
# Discussion
Recent and historical environmental monitoring data indicate that airborne concentrations of RCFs include fibers in the thoracic and respirable size range (<3.5 µm in diameter and <200 µm long). Workers are exposed to these concentrations during primary RCF manufacturing, secondary manufacturing or processing, and end-use activities such as RCF installation and removal. Sampling data from studies of domestic primary RCF manufacturing sites indicate that average airborne fiber concentrations have steadily declined by nearly 2 orders of magnitude over the past 2 decades. Rice et al. report an estimated maximum airborne concentration of 10 f/cm3 associated with an RCF manufacturing process in the 1950s. Esmen et al. reported average exposure concentrations ranging from 0.05 to 2.6 f/cm3 in RCF manufacturing facilities in the mid- to late 1970s. During the late 1980s, Rice et al. calculated average airborne concentrations in manufacturing facilities that ranged from <LOD to 0.66 f/cm3. Maxim et al. [1994, 1997, 2000a] report that from the late 1980s through 1997, concentrations ranged from an AM of <0.3 to 0.6 f/cm3 (GM 0.2 f/cm3). For many RCF manufacturing processes, reductions in exposure concentrations have been realized through improved ventilation, engineering or process changes, and product stewardship programs. Several functional job categories continue to be associated with fiber concentrations that exceed the average; these include finishing operations during manufacturing, removal operations, and installation performed by end users. Activities in these three categories require additional mechanical energy in handling RCF products (e.g., sawing, drilling, cutting, sanding), which increases the generation of airborne fibers. Removal and installation activities are performed at remote sites where conventional engineering strategies and fixed controls are more difficult to implement.
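Short-duration, high-exposure tasks such as these are typically reported as equivalent 8-hr TWA concentrations; the task-time-to-TWA conversions quoted in these surveys (e.g., a 0.44-mg/m3 task sample equating to 0.148 mg/m3 as an 8-hr TWA) are consistent with assuming zero exposure during the unsampled remainder of the shift. A minimal sketch of that conversion (the function name and values are illustrative):

```python
def eight_hr_twa(task_conc, task_minutes, shift_minutes=480, background=0.0):
    """Convert a task-length sample to an equivalent 8-hr TWA concentration.

    Assumes the 'background' concentration (default zero) applies to the
    unsampled remainder of the shift -- a common convention when short,
    high-exposure tasks dominate the day's exposure.
    """
    rest = shift_minutes - task_minutes
    return (task_conc * task_minutes + background * rest) / shift_minutes

# Illustrative: a 0.44 mg/m3 respirable quartz task sample over ~160 min
print(eight_hr_twa(0.44, 160))   # ~0.147 mg/m3, close to the 0.148 quoted above
```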
For certain operations in which airborne fiber concentrations are greater (such as removal of RCF products from furnaces), jobs are performed for short periods and almost universally with the use of respiratory protection. One additional consideration during work involving RCF exposure is the potential for exposure to respirable silica in the forms of quartz, tridymite, and cristobalite. Although the potential for such exposure exists in primary manufacturing (because silica is a major component of RCFs), monitoring data indicate that these exposures are generally low. Maxim et al. reported that many airborne silica samples collected to assess exposures during installation and removal of RCF products contained concentrations below the LOD, with average concentrations of respirable silica ranging from 0.01 to 0.44 mg/m3 (equivalent 8-hr TWA range=0.004 to 0.148 mg/m3). Other studies indicate a greater potential for exposure to respirable silica (especially in the form of cristobalite) during removal of after-service RCF materials. Processes associated with high concentrations of airborne fibers generally generate high concentrations of total and respirable dust as well.
# Effects of Exposure
Animal studies report the concentration(s) to which the animals were exposed. The distinction between administered exposure concentration and received dose is important when analyzing these studies. The dose affecting the target tissues is known only when the amount of fiber present in the lung is measured and reported. To analyze the results of RCF studies, the number of fibers per exposure, their dimensions, their durabilities, and the delivered dose should be considered when making comparisons and drawing conclusions about potential and relative toxicity.
# Intrapleural, Intraperitoneal, and Intratracheal Studies
Instillation and implantation studies deliver fibers directly to the trachea, pleural cavity, or peritoneal cavity, bypassing some of the defense and clearance mechanisms that act on inhaled fibers. Implantation of fibers into either the pleural or abdominal cavity delivers fibers directly to the pleural or abdominal mesothelium, bypassing some or all of the normal defense and clearance mechanisms of the respiratory tract. Intratracheal instillation delivers fibers directly to the trachea, bypassing the upper respiratory tract. These exposure methods do not mimic an occupational inhalation exposure of several hours per day for several days per week over an extended period. However, one advantage of these studies is that they allow the administration of a precise dose of fibers that can be replicated between animals. They also permit the administration of higher doses than may be obtainable by inhalation exposure.
# Health Effects in Animals (In Vivo Studies)
The health effects of RCF exposures have been evaluated in animal studies using intrapleural, intraperitoneal, intratracheal, and inhalation routes of exposure. All of these routes have demonstrated the carcinogenic potential of RCFs. Chronic inhalation studies provide information that is most relevant to the occupational route of exposure and human risk assessment. Mechanistic information about fiber toxicity may also be derived from other types of studies. Studies investigating the cellular effects of RCFs in vitro are reviewed in Section 5.2 and Appendix C.
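A recurring issue in interpreting administered doses, developed in the next paragraph, is that equal gravimetric doses of different fiber types can contain vastly different numbers of fibers. A rough cylinder-volume calculation shows the scale of the difference; the dimensions and densities below are assumed, illustrative values rather than measurements from the studies reviewed here:

```python
import math

def fibers_per_mg(diameter_um, length_um, density_g_cm3):
    """Approximate fiber count per milligram, modeling each fiber as a cylinder."""
    radius_cm = (diameter_um / 2) * 1e-4          # um -> cm
    length_cm = length_um * 1e-4
    mass_mg = math.pi * radius_cm**2 * length_cm * density_g_cm3 * 1000.0
    return 1.0 / mass_mg

# Assumed, illustrative dimensions and densities:
rcf_like = fibers_per_mg(diameter_um=1.0, length_um=20.0, density_g_cm3=2.7)
chrysotile_like = fibers_per_mg(diameter_um=0.1, length_um=5.0, density_g_cm3=2.6)

print(f"RCF-like fiber:       ~{rcf_like:.2e} fibers/mg")
print(f"chrysotile-like fiber: ~{chrysotile_like:.2e} fibers/mg")
print(f"ratio: ~{chrysotile_like / rcf_like:.0f}x more fibers per mg for the thinner fiber")
```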
When comparing the effects of a fiber dose in animal studies, it is possible to compare fibers on a gravimetric basis (effect per unit weight) or a per-fiber basis (effect per number of fibers). The same gravimetric dose of different fiber types may contain vastly different numbers of fibers because of differences in their dimensions. RCF1 is a relatively thick fiber compared with many types of asbestos, such as chrysotile, a fiber commonly used as a positive control in pulmonary carcinogenesis experiments in animals (see Table 2-2 for descriptions of RCF1, RCF2, RCF3, and RCF4). A gravimetric dose of RCF1 usually contains far fewer fibers than the same gravimetric dose of chrysotile asbestos, making a direct comparison of their effects difficult when the number of fibers per unit weight is not reported. Comparison on a per-fiber basis rather than a weight basis provides information most applicable to occupational risk assessment. Although the results of implantation and instillation studies may not be directly applicable to occupational exposure and human health effects, they provide important information about the potential toxicity of RCFs. Experiments that control fiber dimensions and other variables provide information about the physiological characteristics relevant to fiber toxicity. They also provide a less expensive, quicker means to screen the potential toxicity of a fiber than inhalation studies. Many of the implantation and instillation studies reviewed here report the administered fiber dose on a gravimetric basis rather than on a per-fiber basis. Some studies assess the toxicity of both RCFs and asbestos independently, which allows for the comparison of these fibers on a gravimetric basis but not on a per-fiber basis.
# Intraperitoneal Implantation Studies
In intraperitoneal studies, fibers are implanted directly into the abdominal cavity, bypassing the respiratory system defense and clearance mechanisms that act on inhaled fibers. Although the implanted fibers act on some of the same target cell types as the fibers of an inhalation exposure (such as the mesothelium), the effects elicited in the abdominal mesothelium cannot be assumed to be identical to the response of the pleural mesothelium. Table 5-1 summarizes the results of three RCF intraperitoneal implantation studies, briefly described below. Davis et al. dosed Wistar rats with 25 mg of ceramic aluminum silicate dust by intraperitoneal injection. Tumors were induced in 3 of 32 rats: 2 fibrosarcomas and 1 mesothelioma. Smith et al. dosed Osborne Mendel (OM) rats and Syrian hamsters with 25 mg of RCFs by intraperitoneal injection. Abdominal mesothelioma induction rates were 83% (19/23) in OM rats and 13% (2/15) and 24% (5/21) in two groups of male hamsters. Crocidolite asbestos at 25 mg induced abdominal mesotheliomas in 80% (20/25) of OM rats and 32% (8/25) of hamsters. The difference in tumor incidence reported by Davis et al. and Smith et al. may be explained in part by differences in fiber length: 83% of the RCF fibers used by Smith et al. had a length >10 µm and 86% had a diameter <2.0 µm, whereas 90% of the ceramic aluminum silicate material used by Davis et al. had a length <3 µm and a diameter <0.3 µm. Pott et al. dosed female Wistar rats by intraperitoneal injection with 9 or 15 mg/week for 5 weeks of two ceramic (aluminum silicate) wool fibers, Fibrefrax (RCFs) and MAN (Manville RCFs); total doses of 45 and 75 mg, respectively, were administered.
Fifty percent of the Fibrefrax fibers had a length <8.3 µm and a diameter <0.91 µm. Exposure to Fibrefrax fibers induced abdominal tumors (sarcomas, mesotheliomas, or carcinomas) in 68% of the rats. Fifty percent of the MAN fibers had a length <6.9 µm and a diameter <1.1 µm. The number of fibers in different length categories was not reported. Exposure to MAN fibers induced abdominal tumors in 22% of the rats. Chrysotile (UICC/B) injected intraperitoneally at a single dose of 0.05, 0.25, or 1.00 mg induced abdominal tumors in 19%, 62%, or 86% of rats, respectively. Fifty percent of the chrysotile fibers had a length <0.9 µm and a diameter <0.11 µm. The number of fibers per dose was not reported for the ceramic fibers and asbestos. Saline induced tumors in 2% of rats.
# Intrapleural Implantation Studies
Intrapleural implantation studies permit the investigation of the effect of RCFs directly on the pleural mesothelium while controlling variables such as inhalation kinetics and translocation. Table 5-2 summarizes the results of the intrapleural study of Wagner et al. Intrapleural injection of 20 mg of ceramic fiber (unspecified type) or 20 mg of each of two samples of chrysotile produced mesotheliomas in 10% (3/31), 64% (23/36), and 66% (21/32) of Wistar rats, respectively. The mean ceramic fiber diameter was 0.5 to 1.0 µm. The lengths of the chrysotile fibers were mostly <6 µm. The chrysotile fiber diameter, RCF fiber length, and number of fibers per dose were not reported, making a direct comparison of the samples difficult.
# Intratracheal Instillation Studies
The technique of intratracheal instillation has the advantage of affecting the same target tissues (other than the upper respiratory tract) as an inhalation exposure. Other advantages, compared with inhalation exposure, include a simpler technique, lower cost, accurate dosing, and the ability to deliver materials (such as long fibers) that may not be respirable to rodents. The faster dose rate and bolus delivery of tracheal instillation may affect the response of the lung defense mechanisms, resulting in differences in clearance and biopersistence relative to an inhalation exposure. Intratracheal instillation may also produce clumping of fibers, with a resulting effect on fiber distribution and clearance. Intratracheal instillation results in a heavier, more centralized distribution pattern; inhalation exposure results in a more evenly and widely distributed pattern. Table 5-3 summarizes the results of two RCF intratracheal instillation studies, briefly described below. In the study by Smith et al., Syrian golden hamsters and OM rats were dosed with 2 mg of RCFs (Fibrefrax) suspended in saline by intratracheal instillation once a week for 5 weeks (10 mg total). The animals were maintained for the rest of their lives. Approximately 50% of the RCFs were <20 µm long, with a mean fiber diameter of 1.8 µm. No primary lung tumors developed in RCF-exposed animals. These animals did not have an increased incidence of pulmonary fibrosis or tumor production compared with controls; however, the rats had a statistically significant increase in bronchoalveolar metaplasia. The median lifespan was 479 days for hamsters and 736 days for rats. Hamsters (median lifespan 657 days) and rats (median lifespan 663 days) exposed on the same dosing schedule to 2 mg of crocidolite asbestos had a statistically significant increase in bronchoalveolar lung tumors in 20 of 27 (74%) and 2 of 25 (8%) animals, respectively.
The fiber numbers per dose were not reported. Manville reported a statistically significant increase in lung tumors in Fischer 344 rats exposed intratracheally to 2 mg of RCF1, RCF2, RCF3, or RCF4 in saline, each delivered as 0.2 ml of a 10-mg/ml suspension; a positive control group received 0.66 mg of Canadian chrysotile (0.2 ml of a 3.3-mg/ml suspension), and a control group received 0.2 ml of the vehicle (not specified). The exposure groups comprised 109, 107, 109, and 108 rats for RCF1 through RCF4, respectively, 55 rats for chrysotile, and 118 vehicle controls; the sex ratio for all groups was approximately 2 male rats to 1 female rat. Animals were terminally sacrificed at 128 weeks, with interim sacrifices at 13, 26, 52, 78, and 104 weeks. RCF1, RCF2, RCF3, and RCF4 exposure resulted in adenomas or adenocarcinomas in 6 of 109 (5.5%), 4 of 107 (3.7%), 4 of 109 (3.7%), and 7 of 108 (6.5%) rats, respectively. One mesothelioma was identified in a rat exposed to RCF2. Exposure to 0.66 mg of chrysotile asbestos resulted in primary lung tumors in 8 of 55 rats (14.5%). The fiber dimensions and numbers per dose were not reported.
# Chronic Inhalation Studies
In animal bioassays, administering RCFs by chronic inhalation most closely mimics the occupational route of exposure. Exposure to RCFs over a time period that approximates the lifespan of the animal provides the most accurate prediction of the potential pathogenicity and carcinogenicity of these fibers in animals. The effects seen in animals may be used to predict the effects of these fibers in humans, although interspecies differences exist in respiratory anatomy, physiology, and tissue sensitivity. Chronic inhalation studies provide the best means to predict the critical disease endpoints of cancer induction and nonmalignant respiratory disease that may occur in humans because of fiber exposure. Five chronic RCF inhalation studies have been conducted in rats or hamsters. These studies are summarized in Tables 5-4 and 5-5 and are described below. Davis et al. exposed Wistar rats by whole-body inhalation to 10 mg/m3 (95 f/cm3) of ceramic (aluminum silicate glass) dust for 7 hr/day, 5 days/week for 12 months. Ninety percent of the exposure fibers were short (<3 µm) and thin (<0.3 µm). The ratio of nonfibrous particulate to fibers was 4:1. Eight of 48 exposed rats (17%) developed pulmonary neoplasms: 1 adenoma, 3 bronchial carcinomas, and 4 histiocytomas. Interstitial fibrosis was observed. No pulmonary tumors were observed in control animals. Smith et al. exposed OM rats and Syrian golden hamsters by nose-only inhalation to 10.8±3.4 mg/m3 (200 f/cm3) of ceramic fiber (Fibrefrax) for 6 hr/day, 5 days/week for 24 months. The ratio of nonfibrous particulate to fibers was 33:1. Exposure to RCFs did not induce pulmonary tumors in rats: one RCF-exposed rat and one chamber control rat each developed a primary lung tumor. Rats exposed to RCFs had more severe pulmonary lesions than hamsters, and a greater percentage of rats than hamsters had fibrosis (22% versus 1%, respectively). Under similar conditions, exposure to 7 mg/m3 (3,000 f/cm3) of crocidolite asbestos produced pulmonary tumors in 3 of 57 rats, including 1 mesothelioma and 2 bronchoalveolar tumors. No pulmonary tumors were observed in crocidolite-exposed hamsters. Exposure to slag wool at 10 mg/m3 (200 f/cm3) and to several fibrous glasses at similar gravimetric concentrations did not result in pulmonary neoplasms (not shown in Table 5-4). Mast et al.
exposed Fischer 344 rats by nose-only inhalation to 30 mg/m3 of one of four types of RCFs (187±53 WHO f/cm3 for RCF1, 220±52 for RCF2, 182±66 for RCF3, and 153±49 for RCF4) for 6 hr/day, 5 days/week for 24 months; animals were then held until sacrifice at 30 months. Groups of 3 to 6 animals were sacrificed at 3, 6, 9, 12, 15, 18, and 24 months to examine lesions and determine fiber lung burdens. Other animals were removed from exposure at the same time points and held until sacrifice at 24 months. Positive control rats were exposed to 10 mg/m3 (1.06±1.14×10⁴ WHO f/cm3) of chrysotile under similar exposure conditions. RCF fibers with a mean diameter of 1 µm and mean lengths of 20 to 30 µm were selected. A ratio of nonfibrous particulate to fibers of 1.02-1.88:1 was reported. Interstitial fibrosis was first observed at 6 months with RCF1, RCF2, and RCF3 exposure and at 12 months with RCF4 exposure. Pleural fibrosis was first observed at 9 months with RCF1, RCF2, and RCF3 exposure and at 12 months with RCF4 exposure. A progression in the severity of pleural fibrosis was seen in animals exposed to 30 mg/m3 for 24 months and examined 6 months post exposure. The incidence of total lung tumors was significantly increased over controls after exposure to RCF1, RCF2, and RCF3 but not RCF4. Neoplastic disease, including adenomas and carcinomas, was observed in all treatment groups: with RCF1, in 16 of 123 rats (13%); RCF2, 9 of 121 (7.4%); RCF3, 19 of 121 (15.7%); RCF4, 4 of 118 (3.4%); and chrysotile, 13 of 69 (18.5%). Mesotheliomas were induced in some rats in all treatment groups: 2 with RCF1, 3 with RCF2, 2 with RCF3, 1 with RCF4, and 1 in the chrysotile exposure group. All mesotheliomas were detected at or after 24 months of exposure. Most RCF fibers recovered in the lung were 5 to 10 µm long, regardless of exposure time and recovery time. An 80% reduction in fiber lung burden was seen in rats allowed to recover for 21 months following 3 months of RCF exposure. Mast et al. exposed Fischer 344 rats by nose-only inhalation to 0 (air), 3, 9, or 16 mg/m3 (0, 26±12, 75±35, or 120±35 WHO f/cm3) of RCF1 for 6 hr/day, 5 days/week for 24 months and held them until sacrifice at 30 months. Fibers were selected by size as in the earlier Mast et al. study. A ratio of nonfibrous particulate to fibers of 0.9-1.5:1 was reported. Groups of 3 to 6 animals were sacrificed at 3, 6, 9, 12, 18, and 24 months to examine lesions and determine fiber lung burdens. Other animals were removed from exposure at the same time points and held until sacrifice at 24 months. Interstitial fibrosis was observed after 12 months of exposure in the 9- and 16-mg/m3 exposure groups. Pulmonary fibrosis was first observed after 12 months with 16-mg/m3 exposure and after 18 months with 9-mg/m3 exposure. The mean Wagner grades of pulmonary cellular change and fibrosis in rats exposed to 0, 3, 9, 16, and 30 mg/m3 of RCFs for 24 months were 1.0, 3.2, 4.0, 4.2, and 4.0, respectively. Rats exposed at the same range of doses for 24 months and allowed to recover for 6 months had mean Wagner grades of 1.0, 2.9, 3.8, 4.0, and 4.3. The severity of interstitial and pleural fibrosis was similar between animals sacrificed at 24 months and those allowed 6 months of recovery following the 24 months of exposure. The incidence of pulmonary neoplasms was not statistically different from that of controls in any exposure group. One pleural mesothelioma was observed in the 9-mg/m3 exposure group. A dose-related increase occurred in fiber lung burden.
Fiber lengths of 5 to 10 µm were most prevalent among the fibers recovered from the lung after 3 months of exposure followed by 21 months of recovery, after 12 months of exposure, and after 24 months of exposure, at all doses of RCFs. Animals exposed for 3 or 6 months and then allowed to recover until sacrifice at 24 months had lung burdens reduced by 96% to 97% compared with animals not allowed recovery time. McConnell et al. exposed Syrian golden hamsters by nose-only inhalation to 30 mg/m3 (256±58 WHO f/cm3) of RCF1 for 6 hr/day, 5 days/week for 18 months and held them until sacrifice at 20 months. Positive control animals were exposed to 10 mg/m3 (8.4±9.0×10⁴ WHO f/cm3) of chrysotile asbestos. Groups of 3 to 6 animals were sacrificed at 3, 6, 9, 12, 15, and 18 months to examine lesions and determine fiber lung burdens. Other animals were removed from exposure at the same time points and held until sacrifice at 20 months. Interstitial and pleural fibrosis were first observed after 6 months of exposure in RCF-exposed hamsters. No pulmonary neoplasms developed. Forty-two of 102 (41.2%) RCF-exposed animals developed pleural mesotheliomas, most after 18 months of exposure. Animals exposed to chrysotile developed more severe interstitial and pleural fibrosis than those exposed to RCFs. No neoplasms were observed in the lungs or pleura of the chrysotile-exposed or air control animals. After 6 months of exposure followed by 12 months of recovery, the greatest percentage of fibers retained in the lungs had lengths of 5 to 10 µm and diameters <5 µm. McConnell et al. also conducted a multidose chronic study of the effects of amosite inhalation in hamsters; the data can be compared with the effects of RCF1. Syrian golden hamsters were exposed to 0.8 (36±23 WHO f/cm3), 3.7 (165±61 WHO f/cm3), or 7 mg/m3 (263±90 WHO f/cm3) of amosite asbestos. Pleural mesothelioma incidences of 3.6%, 25.9%, and 19.5%, respectively, were reported. The aerosol mean diameter of the amosite asbestos was 0.60±0.25 µm; its aerosol mean length was 13.4±16.7 µm. The dimensions of this asbestos fiber were more similar to those of the RCFs used in the chronic inhalation study of McConnell et al. than was the chrysotile asbestos used as the positive control in that same study. NIOSH analyzed the hamster data from the RCF and amosite studies. A dose-response model was developed for amosite and used to predict the amosite response at the one and only dose at which RCFs were tested in hamsters. The modeled amosite response was then compared with the observed RCF response. These results are presented in Figures 5-1 and 5-2. Log-probit, log-logistic, multistage, and unrestricted Weibull models were analyzed. The transformation for the log-probit and log-logistic models was log(fibers/cm3 + 1); the dose metric for the multistage and Weibull models was fibers/cm3, as these models did not require a log transformation. Results of the log-probit model analysis of these data indicated RCF/amosite relative potency estimates of 1.85 and 1.19, using WHO fibers and fibers >20 µm as the dose metric, respectively. The model fits were poor when the amosite high-dose group was included and when fibers >20 µm were used as the dose metric. Sensitivity analyses in which the high-dose amosite group was dropped suggest that the relative potency of RCFs to amosite could be as low as 0.66, based on the log-probit model.
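A minimal sketch of the kind of log-probit fit just described, with a probit link applied to the log(fibers/cm3 + 1) transform. The response counts below are chosen to match the amosite mesothelioma percentages quoted above, but the group sizes are hypothetical, and no attempt is made to reproduce the NIOSH analysis itself (scipy is assumed to be available):

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

# Amosite dose groups (WHO f/cm3) with mesothelioma counts; the percentages
# match those quoted above (0%, 3.6%, 25.9%, 19.5%), but n values are assumed.
dose = np.array([0.0, 36.0, 165.0, 263.0])
affected = np.array([0, 2, 22, 17])
n = np.array([83, 55, 85, 87])

x = np.log(dose + 1.0)  # the log(fibers/cm3 + 1) transform noted above

def neg_log_lik(params):
    a, b = params
    p = np.clip(norm.cdf(a + b * x), 1e-9, 1 - 1e-9)  # probit link
    return -np.sum(affected * np.log(p) + (n - affected) * np.log(1 - p))

fit = minimize(neg_log_lik, x0=[-2.0, 0.5], method="Nelder-Mead")
a, b = fit.x

# Predicted amosite response at the single concentration where RCF1 was tested
# in hamsters (~256 WHO f/cm3); comparing the observed RCF response with this
# prediction is one way to express a relative potency.
print("predicted response at 256 f/cm3:", norm.cdf(a + b * np.log(256.0 + 1.0)))
```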
Results using the log-logistic, multistage, and Weibull models were similar to those using the log-probit model, with an overall range of RCF/amosite relative potency estimates from these models, using all four amosite dose groups, of 1.03 to 1.89. Although no clear toxicologic basis exists for disregarding the high-dose amosite data, sensitivity analyses excluding these data suggest that the potency of RCFs relative to amosite could be as low as 0.47, based on the multistage model. These models indicate that the plausible carcinogenic potency estimates for RCFs relative to amosite, based on hamster mesotheliomas, range from about half to nearly twice the carcinogenicity of amosite.
# Discussion of RCF Studies in Animals
The intrapleural, intraperitoneal, and intratracheal RCF studies have demonstrated the carcinogenicity of RCFs. Because of the nonphysiologic delivery of fibers by these methods, it is difficult to compare their results with those of an inhalation exposure. Although tracheal instillation may result in different distribution patterns than an inhalation exposure, this route of exposure is useful as a screening test for relative toxicity and for comparing the toxicity of new materials with that of materials for which data already exist. Tracheal instillation is also useful when testing fibers respirable by humans but not by rodents. Chronic inhalation studies provide the data most relevant to occupational exposure to RCFs. The RCF chronic animal inhalation studies described above allow comparison of the health effects of different doses of RCF1, of different types of RCFs, and of the interspecies susceptibility of the rat and hamster to RCF exposure. Results of the multidose chronic inhalation testing of RCF1 in rats indicate the pathogenic potential of RCFs at high doses. The incidence of total lung tumors was significantly increased over controls after exposure to 30 mg/m3 of RCF1, RCF2, and RCF3 but not RCF4. A dose-response relationship was demonstrated for nonneoplastic pulmonary changes in rats exposed to 3, 9, and 16 mg/m3 of RCFs. The severity of interstitial and pleural fibrosis was similar between animals sacrificed at 24 months and those allowed 6 months of recovery following the 24-month exposure. Spontaneous primary pulmonary mesotheliomas are rare in rats. Therefore, the presence of any mesothelioma in treated animals is biologically significant and warrants caution. Comparing the chronic effects of RCF1 with those of its positive control, chrysotile asbestos, in the hamster is difficult because of the differences in dose, dimensions, and durability of the two fibers tested. Differences in the physical characteristics and biopersistence of RCF1 and amosite asbestos must also be considered before extrapolating these animal data to human risk. Hamsters showed a greater susceptibility to mesothelioma induction after RCF1 exposure than did rats under similar exposure conditions. Chronic inhalation studies of amosite asbestos in hamsters showed no pulmonary neoplasms, but high incidences of mesothelioma occurred at doses of 125 and 250 f/cm3. Many of the mesotheliomas in the more recent hamster studies were identified only on microscopic examination [Mast et al. 1995a; McConnell et al. 1995, 1999]. Previous studies reporting mesotheliomas only by macroscopic identification may have underestimated the mesothelioma incidence.
Recent short-term inhalation studies indicate that hamster mesothelial cells may have a more pronounced inflammatory and proliferative response to RCF1 exposure than those of rats [Everitt 1997; Gelzleichter et al. 1996a,b, 1999]. The reasons for this species difference in response to RCFs have not been explained. The results of these animal studies indicate the need to include the hamster as a sensitive test species in studies in which pleural mesothelioma is an endpoint of concern. Results from Mast et al. indicate that under the conditions studied, exposure to RCF4 may have a less pronounced effect on pulmonary pathology than exposure to RCF1, RCF2, and RCF3. Rats exposed to RCF4 did not have a significant increase in total lung tumors compared with controls; those exposed to RCF1, RCF2, and RCF3 did. Exposure to RCF4 produced a less severe fibrosis than was seen in the other RCF exposure groups. Differences in the dimensions or physical properties of RCF4 may explain why its respiratory effects differ from those of RCF1, RCF2, and RCF3. RCF4 was produced by heating RCF1 in a furnace at 2,400 °F for 24 hr. This "after-service" fiber contained approximately 27% free crystalline silica, and silicotic nodules were observed in the RCF4-exposed animals. RCF4 fibers were shorter (~34% between 5 and 10 µm) and thicker (~35% <0.5 µm) than those of RCF1, RCF2, and RCF3. The particle content of the RCF test material may have been responsible for some of the respiratory pathology observed in these studies. However, an analysis of the ratio of nonfibrous to fibrous particulates in the reviewed studies does not indicate a correlation between particulate content and observed effects. Smith et al. performed testing with the highest particulate-to-fiber ratio (33:1) and did not report a high tumor incidence. Comparing studies based on the ratio of nonfibrous particulates to fibers is complicated by differences among the studies in fiber preparation, doses tested, fiber dimensions, and methods of fiber analysis. The techniques used to detect and measure nonfibrous particulates have improved over time, so comparisons of recent and older studies may reflect these inconsistencies. These chronic RCF inhalation studies indicate the ability of RCFs to induce cancer in two laboratory species: mesotheliomas in hamsters and pulmonary tumors in rats. The late onset of tumors indicates the importance of chronic studies on the effects of RCF exposure. Short-term intraperitoneal, intrapleural, intratracheal, and inhalation studies provide important information about the action of fibers, the fiber characteristics associated with toxicity, and potential toxicity. Currently, it is only through lifespan toxicologic testing in animals that the respiratory and other chronic health effects of RCFs can be accurately assessed.
# Lung Overload Argument Regarding Inhalation Studies in Animals
Mast et al. published a review interpreting the results of chronic inhalation studies of RCF1 in rats and hamsters. In the review, the authors suggest the possibility that the maximum tolerated dose (MTD) may have been exceeded and that lung overload may have compromised the pulmonary clearance mechanisms of test animals. Building on the concept of lung overload (first advanced by Bolton et al.), Mast et al. considered particulate coexposure (i.e., nonfibrous particulate or shot) to be a confounding factor that may have had a major effect on the observed chronic adverse effects.
The authors propose that the MTD was exceeded at the highest exposure concentration of 30 mg/m3 for RCF1 in the rat bioassay. The concept of pulmonary overload in the Fischer 344 rat is based on the recognition that excessive particulate exposures (>1,500 µg/rat, according to Bolton et al.) eventually reduce the clearance effectiveness of the lungs, causing the normal linear clearance kinetics to follow a nonlinear pattern. On a cellular level, the overload conditions may result in alveolar macrophages becoming engorged with particulate, pulmonary and alveolar inflammation, increased translocation of particles to the interstitium and lymph, granuloma formation, pulmonary fibrosis, and lung tumors, depending on the duration and severity of the overload. Ambiguity about the definition of the MTD for chronic inhalation studies with animals was also a concern expressed by the authors. One reference recognizes the MTD as that which causes "a significant functional impairment of lung clearance." At a National Toxicology Program (NTP) workshop on establishing exposure concentrations for inhalation studies in animals, it was concluded that the highest exposure concentration should produce only minimal changes in lung defense mechanisms as measured by clearance. At a similar workshop convened by the EPA, it was proposed that the MTD for fiber inhalation studies is equivalent to the lung dose produced at the maximum achievable concentration (MAC). The MAC is calculated as the highest fiber concentration, based on a 90-day study, that results in significant changes in alveolar macrophage clearance rates, lung burden normalized to exposure concentration, cell proliferation, inflammation, lung weight, and other measures. The methodology described for the RCF chronic inhalation studies involved procedures (i.e., wet cyclone separation technology) for removing the nonfibrous particulate fraction from the commercial fiber (RCF1) used for the inhalation exposures. This process resulted in an aerosol with a 9.1:1 particle-to-fiber ratio, compared with the study by Smith et al., which reported 33 nonfibrous particles per fiber in airborne exposures. Results from Esmen et al. indicate that despite a poor correlation between the mass of total airborne dust and the fiber concentration measured in RCF manufacturing, fibers generally constitute only a small portion of the total dust. This finding is consistent with other reported measures of occupational exposures to airborne RCFs. However, Maxim et al. reported an average particle-to-fiber ratio of 0.53:1 (n=10, range not reported), or roughly 1 particle to 2 fibers, in RCF manufacturing facilities. Muhle and Bellmann conducted a 5-day inhalation study with Fischer 344 rats to measure the biopersistence of RCF1 (with the 9:1 particulate-to-fiber ratio) and RCF1a (RCF1 further processed to reduce particulate mass). The study showed a 1.5-fold longer time-weighted half-life for RCF1 (t1/2=78 days) compared with RCF1a (t1/2=54 days). That study also involved a 3-week inhalation experiment with Fischer 344 rats, in which the clearance half-life of RCF1 (t1/2=103 days) was almost twice that of RCF1a (t1/2=54 days). In a follow-up study by Brown et al., female Wistar rats were exposed to RCF1 and RCF1a by inhalation for 3 weeks and followed for 12 months to evaluate alveolar macrophage clearance and inflammation. The exposure concentrations were 130 fibers/ml (fibers >20 µm) for RCF1 and 125 fibers/ml (fibers >20 µm) for RCF1a.
The nonfibrous content of RCF1 was approximately 25%, whereas that of RCF1a was 2%. The mean diameter of the nonfibrous particles was 2 to 3 µm. The aerosol exposure to RCF1 contained twice as many short fibers (<20 µm) as that to RCF1a and twice the total dust mass (fibers plus nonfibrous dust) of RCF1a (51 versus 25.8 mg/m3). At the end of the inhalation period, animals exposed to RCF1a had a higher pulmonary concentration of long fibers but lower concentrations of short fibers and nonfibrous particles. The difference in particle content was enhanced in the lungs: 15 times more particles were found in the lungs of the RCF1-exposed animals than in those exposed to RCF1a, whereas the aerosol exposures differed only eightfold in particle number. Tran et al. examined how overloading the alveolar macrophage defense system affects the clearance of fibers versus that of nonfibrous particles. Modeling was performed based on data for rats exposed by inhalation to titanium dioxide (TiO2) at 1, 10, and 50 mg/m3 or to glass wool (MMVF10) at 3, 16, and 30 mg/m3. Lung burdens and clearance kinetics during exposure (0 to 100 weeks) were compared with those at 3, 10, and 38 days post-exposure. The models showed that overload of the lung by fibers and by nonfibrous particles is similar when the fibers are short (<15 µm). This observation is plausible, as nonfibrous particles and short fibers smaller than the diameter of the alveolar macrophage are most readily engulfed and cleared via the macrophages. When this defense is overwhelmed (lung burden ≥10 mg), these particles are cleared less effectively. For fibers longer than 15 µm, phagocytosis by alveolar macrophages is reduced. As fiber length increases, fibers tend to be cleared by dissolution and by disintegration of long fibers into shorter fibers or fragments. Therefore, clearance of long fibers is not affected by the overloading of macrophage-mediated defenses with shorter fibers or nonfibrous particles. The exposure concentrations for the RCF chronic inhalation bioassays were measured and reported as mass in mg/m3. Monitoring of exposures by gravimetric analysis does not distinguish fibers from nonfibrous particulate, although fiber concentrations and dimensions were also checked by phase contrast and electron microscopy. Consequently, the particulate fraction was included in the dose measurements. This complicates efforts to compare the relative toxicity of fibers, nonfibrous particulate, and total combined particulate, especially regarding the lung overload hypothesis. During production of RCFs and RCF products, however, the nonfibrous particulate fraction is associated with the fiber, as shown in Table 2-1 (i.e., 20% to 50% of RCFs by weight is nonfibrous particulate). This suggests that occupational exposures to airborne RCFs necessarily involve coexposures to a fraction of nonfibrous particulate, a suggestion supported by exposure assessment studies.
# Cellular and Molecular Effects of RCFs (In Vitro Studies)
The cellular and molecular effects of RCF exposures have been studied with two different objectives. One purpose of these in vitro studies is to provide a quicker, less expensive, and more controlled alternative to animal toxicity testing. These experiments are best interpreted by comparing their results with those of in vivo experiments.
The second objective of in vitro studies is to provide data that may help to explain the pathogenesis and mechanisms of action of RCFs at the cellular and molecular levels. These cytotoxicity and genotoxicity studies are best interpreted by comparing the effects of RCFs with those of other SVFs and asbestos fibers. In vitro studies serve as screening tools and provide insights into the molecular mechanisms of fibers; they are an important complement to animal studies. Currently it is not possible to use these data to derive the NIOSH REL for RCFs. For this reason, a discussion of in vitro studies is included here, but the more comprehensive summaries of studies are included in Appendix C. The toxicity of fibers has been attributed to their dose, dimensions, and durability. Any test system designed to assess the potential toxicity of fibers must address these factors. Durability is difficult to assess using in vitro studies because of their acute time course. However, in vitro studies provide an opportunity to study the effects of varying doses and dimensions of fibers in a quicker, more efficient manner than animal testing. They do not currently provide data that can be extrapolated to occupational risk assessment. The association between fiber dimension and toxicity has been documented and reviewed: RCFs may have different toxicities depending on fiber length relative to macrophage size, with longer fibers being more toxic. Fiber length has been correlated with the cytotoxicity of glass fibers. Manville code 100 (JM-100) fiber samples with average lengths of 3, 4, 7, 17, and 33 µm were assessed for their effects on LDH activity and rat alveolar macrophage function. The greatest cytotoxicity was reported for the 17- and 33-µm samples, indicating that length is an important factor in the toxicity of this fiber. Multiple macrophages were observed attached along the length of long fibers. Relatively short fibers (<20 µm) were usually phagocytized by one rat alveolar macrophage; longer fibers were phagocytized by two or more macrophages. Incomplete or frustrated phagocytosis may play a role in the increased toxicity of longer fibers. Long fibers (17 µm average length) were a more potent inducer of tumor necrosis factor (TNF) production and transcription factor activation than shorter fibers (7 µm average length). These studies demonstrate the important role of length in fiber toxicity and suggest that the capacity for macrophage phagocytosis may be a critical factor in determining fiber toxicity. Several of the in vitro RCF studies (summarized in Appendix C) reported a direct association between longer fiber length and greater cytotoxicity. Hart et al. reported the shortest fibers to be the least cytotoxic. Brown et al. reported an association between length, but not diameter, and cytotoxic activity. Wright et al. reported that cytotoxicity was correlated with fibers >8 µm long. Yegles et al. reported that the longest and thickest fibers were the most cytotoxic; the four most cytotoxic fibers had GM lengths ≥13 µm and GM diameters >0.5 µm. The production of abnormal anaphases and telophases was associated with Stanton fibers (length >8 µm and diameter <0.25 µm). Hart et al. reported that cytotoxicity increased with increasing average fiber length from 1.4 to 22 µm but did not increase further with average lengths from 22 to 31 µm. Additional studies assessing the cytotoxicity of specific RCF fiber lengths are needed.
Such studies will help to describe the association between fiber length and toxicity for RCFs and may allow determination of a threshold length above which toxicity increases significantly. In addition to providing data on the correlation between fiber length and toxicity, in vitro studies have provided data on the relative toxicity of RCFs compared with other fibers, although some uncertainties remain in the interpretation of these studies because of differences in fiber doses, dimensions, and durabilities. RCFs have direct and indirect effects on cells and alter gene function in similar ways. They are capable of inducing enzyme release and cell hemolysis. They may decrease cell viability, inhibit proliferation, and affect the production of TNF and reactive oxygen species (ROS). They induce necrosis in rat pleural mesothelial cells. They may also induce free radicals, micronuclei, polynuclei, chromosomal breakage, and hyperdiploid cells in vitro. In vitro studies provide an excellent opportunity for investigating the pathogenesis of RCFs. However, comparisons between in vitro studies are difficult to make because of differences in fiber doses, dimensions, preparations, and compositions. Important information such as fiber length distribution is not always determined. Even when comparable fibers are studied, the cell line or conditions under which they are tested may vary. Much of the research to date has been done in rodent cell lines and in cells that are not related to the primary target organ. In vitro studies using human pulmonary cell lines should provide pathogenesis data most relevant to human health risk assessment. Short-term in vitro studies cannot take into account the influence of fiber dissolution and fiber compositional changes that may occur over time. In an in vivo exposure, fibers are continually modified physically, chemically, and structurally by components of the lung environment. This complex set of conditions is difficult to recreate in vitro. Just as it is unlikely that only one factor is an accurate predictor of fiber toxicity, it is unlikely that any one in vitro test is able to predict fiber toxicity.
# Health Effects in Humans
# Morbidity and Mortality Studies
Two major research efforts evaluated the morbidity of RCF-exposed workers: one conducted in U.S. plants and one in European plants. The initial European cross-sectional study, conducted in 1986, evaluated workers at seven RCF manufacturing plants [Rossiter et al. 1994; Trethowan et al. 1995]. A followup cross-sectional study conducted in 1996 evaluated the same medical endpoints in workers from six of these seven European manufacturing plants (one plant had ceased operation) [Cowie et al. 1999, 2001]. Current and former workers were included as study subjects in the followup study. The studies of U.S. plants began in 1987 and involved evaluations of current workers at five RCF manufacturing plants and former workers at two RCF manufacturing plants [Lemasters et al. 1994, 1998; Lockey et al. 1993, 1996, 1998, 2002]. In the United States, the earliest commercial production of RCFs and RCF products began in 1953; in Europe, RCF production began in 1968. The demographics of the U.S. and European populations were similar at the time they were studied, although the average age of U.S. workers was slightly higher than that of the workforce in the 1986 European studies because of the earlier development of this industry in the United States.
The mean age of the European RCF workers was 37.7 years in the 1986 study; in the 1996 study it was 42.0 years for male and 39.4 years for female workers. In the U.S. RCF manufacturing industry, the average age is 40 for current workers and 45 for former workers. The mean duration of employment in the European cohort was 10.2 years (range 7.2 to 13.8 years) in 1986 and 13.0 years in 1996. The U.S. study reports the mean duration of employment for 23 workers with pleural plaques as 13.6 years (±9.8); the median is 11.2 years (range 1.4 to 32.7). The following text and Table 5-6 summarize these studies; a discussion of the mortality studies is also presented in Section 5.3.5, and two HHEs of workplaces involving workers exposed to RCFs are described in Section 5.3.6. Notes on the study populations summarized in Table 5-6 follow. In the 1986 European study, from a possible 708 current workers, 628 eligible participants were identified and 596 had chest X-ray examinations; depending on the analysis, either 51 female workers and 13 unexplained others, or 2 unreadable films and those of the 51 female workers, were excluded. The 1996 European study included current workers at six ceramic fiber manufacturing plants in three European countries as well as leavers from the first three European studies [Burge et al. 1995; Rossiter et al. 1994; Trethowan et al. 1995]. In the U.S. studies, from a possible 868 eligible current and former workers at two plant sites, 148 were eliminated for lack of exposure characterization data or loss to followup; of the remaining 720 workers, 68 did not agree to chest X-ray examinations. From a possible 963 eligible current workers at five plant sites, 209 female workers were excluded, as were 393 male workers with fewer than 5 PFT sessions.
# Radiographic Analyses
In both the European and U.S. studies cited in Table 5-6, the study populations included workers at multiple plants involved in the manufacture of RCFs or RCF products. As part of the investigation of potential effects of exposure to airborne RCFs, chest radiography was performed. In all studies, chest radiographs were read independently by three readers using the International Labour Office (ILO) 1980 International Classification of the Radiographs of Pneumoconioses. Identifiers on films were masked to ensure a blind review by readers, and quality control measures and tests of agreement were used to check consistency among the readers. For each type of abnormality analyzed, the median of the three readings for each film was used.
# Pleural abnormalities
In the 1986 study of European RCF workers, results of the chest radiography indicated a prevalence of 2.8% (15/543) for pleural abnormalities among male workers. Of the 15 cases with pleural abnormalities, 4 had bilateral diffuse thickening (1 with calcification), 1 showed bilateral pleural calcification only, 7 presented with unilateral diffuse thickening, and 3 showed costophrenic angle blunting only. The possibility for confounding effects was recognized because of other exposures: 52% of workers reported previous employment in dusty jobs, including 4.5% with prior asbestos exposures and 7% with prior MMMF exposures. When female workers were included in the same population, Trethowan et al. reported a prevalence of 2.7% (16/592) for pleural abnormalities. Two cases were known to have previous exposure to asbestos, and the possibility for exposure to other respiratory hazards was acknowledged for other persons with pleural abnormalities. Cowie et al.
[1999, 2001] reported pleural abnormalities in 10% (78/774) of the workers examined in the 1996 European study. (In that study, the respiratory health assessment is reported by Cowie et al. [1999, 2001] and the exposure assessment by Groat et al. [1999]; exposure estimates for the U.S. study were reconstructed by Rice et al. [1994, 1996].) In a case-control analysis nested within the U.S. cohort, cases and controls were reinterviewed with additional questions regarding asbestos exposure (application, manipulation, and distance from exposure), and asbestos exposure was categorized from the interview data (rating index: high, medium, or low). Twenty cases of pleural plaque were identified in the cohort: 18 production workers and 2 nonproduction workers. In the cross-sectional European analyses, Rossiter et al. reported an association between pleural abnormalities and age; however, no attempt was made to assess whether an association existed between pleural abnormalities and RCF exposure. Trethowan et al. also noted that pleural abnormalities were related to age but not independently to ceramic fiber exposures. Cowie et al. found that pleural abnormalities were associated with time since first RCF exposure (RCF latency) after adjusting for duration of asbestos exposure and time since first asbestos exposure (odds ratio [OR]=2.9). A latency validity review was also conducted, involving analysis of 205 historical chest radiographs available for workers with pleural changes. The purpose of the review was to confirm that for persons with pleural plaques, a biologically plausible latency period (≥5 years) existed between initial RCF exposure and appearance of a pleural plaque. Of 18 pleural plaque cases for which historical chest radiographs were available, only 1 had a latency period of <5 years from initial RCF production work to recognition of a pleural plaque. A subsequent analysis by Lockey et al. included chest radiographs for 625 current workers, obtained every 3 years at 5 RCF manufacturing sites, and for 383 former workers at 2 of the 5 sites. Pleural changes were seen in 27 workers (2.7%), of which 19 were bilateral plaques (70%) and 3 were unilateral plaques (11%). Cumulative RCF exposure (>135 fiber-months/cm3) was significantly associated with pleural changes (OR=6.0, 95% CI=1.4-31.0). The researchers noted an increasing but nonsignificant trend relating interstitial changes to both RCF exposure duration in a production job and cumulative RCF exposure.
# Parenchymal Opacities
In the 1986 European study, Rossiter et al. found that 7% (38/543) of the current male workers had small parenchymal opacities with median profusion of 1/0 or more. No large parenchymal opacities were observed. Both predominantly rounded (n=23, or 4.2%) and predominantly irregular (n=15, or 2.8%) small parenchymal opacities were identified. In a subsequent analysis of small opacities for both male and female workers, Trethowan et al. noted that the prevalence of small opacities increased with age, smoking, and previous exposure to asbestos but not with cumulative RCF exposure; no description of the analysis was provided. Cowie et al. reported that 10 of 51 (19.6%) men with RCF exposure before 1971 had small opacities of category 1/0 or greater. Eight of these 10 had been exposed to asbestos, and 9 were either current or ex-smokers. In the U.S. study, no analyses were performed to assess the relationship between small opacities and RCF exposure because of the small number of cases (n=4) identified by Lemasters et al. [1994].
# Respiratory Conditions and Symptom Analyses
Using respiratory health questionnaires, the U.S.
and European studies sought to identify respiratory conditions and symptoms that could be associated with exposure to RCFs. Lockey et al. administered to 717 subjects a standardized respiratory symptoms questionnaire that included questions about the following symptoms and conditions: chronic cough, chronic phlegm, dyspnea grades 1 and 2 (described in the Definitions section of this document), wheezing, asthma, pleurisy, and pleuritic chest pain. Logistic regression analyses were adjusted for age, sex, smoking (pack years), duration of asbestos exposure, duration of production employment, duration of other hazardous occupational respiratory exposure, and time since last RCF employment. With the exception of asthma, for which self-selection out of production jobs may have occurred, adjusted ORs for respiratory symptoms were significantly elevated in production workers compared with nonproduction workers. Results of a subsequent analysis of 742 RCF workers by Lemasters et al. indicated that the prevalence of respiratory symptoms and conditions (except for asthma) was approximately twofold to fivefold higher in production than in nonproduction workers. The most frequently reported symptom for male production workers was dyspnea grade 1 (15.7%, compared with 2.5% for nonproduction workers), followed by wheezing (10.3%, compared with 3.8%). The prevalence of one or more respiratory symptoms and conditions among female production workers was 40.7%, compared with 20.3% for nonproduction workers. Trethowan et al. examined the relationship of dry cough, chronic bronchitis, dyspnea (two grades), wheeze, stuffy nose, eye irritation, and skin irritation to current and cumulative RCF exposure estimates among 628 workers. Current exposures were based on air sampling measurements taken in association with the respiratory health survey. The researchers noted that eye and skin irritation were frequent in all plants and increased significantly, as did dyspnea and wheeze, with increasing current exposure concentrations (i.e., 0.2 to 0.6 and ≥0.6 f/cm3) after controlling for age, sex, and smoking habits. The most frequent symptom, nasal stuffiness (in 55% of the group), showed no clear association with increasing current exposure. Chronic bronchitis, with a prevalence of 12% among all workers, also appeared unaffected by increasing current exposure concentration. Dry cough, eye irritation, and skin irritation all seemed to be associated with increasing exposure, especially at the highest exposure concentration (≥0.6 f/cm3). Analyses of cumulative exposure to respirable fibers showed statistically significant associations with dyspnea but no apparent association with chronic bronchitis or wheeze. In a separate analysis of the same cohort, Burge et al. [1995] investigated the relative importance of respirable RCF exposure versus inspirable dust exposure in predicting respiratory symptoms and conditions. The study found that workers' current exposures to both inspirable dust and respirable fibers were significantly related to respiratory symptoms.
# Pulmonary Function Testing
In cross-sectional analyses of the U.S. cohort, pulmonary function test (PFT) results were modeled with adjustment for >7 RCF production years, smoking status (pack years, current versus former smoker), weight, and plant location (categorical). The analysis found decreases in FVC and FEV1 for workers employed >7 years in production compared with nonproduction workers. In longitudinal analyses of followup production years (i.e., from initial PFT to final PFT) and followup cumulative exposure (i.e., from initial PFT to final PFT), neither of these variables had an effect on FVC or FEV1.
These results led the authors to conclude that more recent exposure concentrations during 1980-1994 had no adverse effect on the longitudinal trend of pulmonary function. Decrements in FVC and FEV1 noted in the initial cross-sectional analyses of PFT data were believed to be related to earlier, higher exposure concentrations.
# Mortality Studies
Table 5-8 presents findings from a cohort mortality study of two U.S. RCF production plants reported by Lockey et al. The study is based on a cohort of 684 male workers who were employed for at least 1 year between January 1, 1950, and June 1, 1988. Five workers were lost to followup and 46 were deceased. Because this is a relatively new industry (40 years at the time of the study) that has experienced recent growth of the workforce at the plants studied, person-years at risk were limited at higher latencies (for example, only 126.37 person-years with >30 years since first RCF job). Using standardized mortality ratios (SMRs), the authors found no statistically significant excess mortality. In a subsequent risk analysis, Walker et al. compared the mortality observed in the cohort with that predicted if RCFs had carcinogenic potencies similar to those of asbestos. The observed mortality was lower than would be expected if RCFs had the potency of chrysotile, but the difference is not statistically significant. For mesothelioma, the authors concluded that the anticipated numbers of deaths under hypotheses of asbestos-like potency are too small to be rejected by the zero cases seen in the RCF cohorts. NIOSH researchers noted that this analysis by Walker et al. was not based on the most current update of the RCF cohort. In addition, the asbestos risk assessment models used by Walker et al. were fitted to studies with longer followup periods than the cohort of RCF workers. Because these models do not specify length of followup, it is not possible to adjust for these differences. Consequently, it is likely that the RCF cohort has not been followed for a sufficient length of time to demonstrate the risks that were observed in the asbestos cohorts. NIOSH believes the mortality study by Lemasters et al. and the risk analysis by Walker et al. have insufficient power for detecting lung cancer risk based on what would be predicted for asbestos.
# NIOSH HHEs
As part of its mission as a public health agency, NIOSH performs HHEs at the request of workers, employers, or labor organizations to investigate occupational hazards associated with a workplace or work-related activity. One such HHE involved evaluating worker exposures to ceramic fibers at a company manufacturing steel forgings. At the facility, furnaces for heat-treating steel ingots were lined with RCF felt and batting, and this lining required regular maintenance and replacement. Among the workers interviewed were six bricklayers involved in furnace lining maintenance. Four of the bricklayers reported having experienced irritation of exposed skin areas and of the throat during the handling and installation of the RCF-containing insulation. On the basis of the reported symptoms and their consistency with known effects of RCFs, the symptoms of irritation were attributed to RCF exposure. No attempt was made to measure airborne fiber concentrations. Another NIOSH HHE resulted from an OSHA inspection that identified 18 cases of occupational lung disease recorded in 1 year at a plant manufacturing fire bricks, ceramic fiber products, and other thermal insulation components from kaolin. About 600 workers were potentially exposed to respiratory hazards that included not only RCFs but also kaolin dust, crystalline silica dust, and (for maintenance workers) asbestos.
# NIOSH HHEs

As part of its mission as a public health agency, NIOSH performs HHEs at the request of workers, employers, or labor organizations to investigate occupational hazards associated with a workplace or work-related activity. One such HHE involved evaluating worker exposures to ceramic fibers at a company manufacturing steel forgings. At the facility, furnaces for heat-treating steel ingots were lined with RCF felt and batting, and this lining required regular maintenance and replacement. Among the workers interviewed were six bricklayers involved in furnace lining maintenance. Four of the bricklayers reported having experienced irritation of exposed skin areas and of the throat during the handling and installation of the RCF-containing insulation. On the basis of the reported symptoms and their consistency with known effects of RCFs, the symptoms of irritation were attributed to RCF exposure. No attempt was made to measure airborne fiber concentrations.

Another NIOSH HHE resulted from an OSHA inspection that identified 18 cases of occupational lung disease recorded in 1 year at a plant manufacturing fire bricks, ceramic fiber products, and other thermal insulation components from kaolin. About 600 workers were potentially exposed to respiratory hazards that included not only RCFs but also kaolin dust, crystalline silica dust, and (for maintenance workers) asbestos. A total of 38 workers had been referred to a pulmonary physician for evaluation based on 2 rounds of chest X-ray screening of the workforce in 1980 and 1986. Diagnoses were related to pleural thickening (n=10), pleural plaques (n=3), diffuse pulmonary fibrosis (n=21), mesothelioma (n=1), and other miscellaneous conditions. At least 20 of these cases were classified as work-related by the pulmonologist who evaluated the cases. The nonoccupational classification of some of the remaining 18 cases was questioned by a NIOSH physician who performed a retrospective record review. The 38 cases were reclassified on the basis of job histories into those likely to have been exposed to RCFs (n=19, including 4 with pleural abnormalities and 8 with diffuse fibrosis) and those unlikely to have been exposed to RCFs (n=19, including 9 with pleural abnormalities, 13 with fibrosis, and 1 with mesothelioma). However, no attempt was made to analyze further for an association of the cases with exposure to RCFs. The report implied that occupational exposure to kaolin dust and to asbestos caused many or all of the job-related conditions.

# Discussion

The radiographic analyses of the U.S. and 1996 European worker groups suggest an association between pleural abnormalities, including pleural plaques, and RCF exposure. It is less apparent from Rossiter et al. whether such an association was investigated. Trethowan et al. report that pleural abnormalities were not independently related to RCF exposure. Differences between the findings of the U.S. studies and those of the initial European studies may be related to the long latency before pleural abnormalities, in particular pleural plaques, become detectable following RCF exposure. Workers exposed to asbestos developed asbestos-associated pleural plaques after a latency period of more than 15 years after initial exposure and, in some cases, after 30 to 57 years. In the U.S. study, the investigators began to supplement the standard posteroanterior view with left and right 45° oblique view films as a standard practice for radiographic surveillance. This methodology, known as a film triad, was evaluated against the posteroanterior-only view to determine the reliability, sensitivity, and specificity of each method. The evaluation, involving 652 subjects in the RCF study, showed that the film triad had considerably higher interreader reliability (κ=0.59) than the posteroanterior-only method (κ=0.44). The authors concluded that the film triad method provides an optimum approach.

The U.S. and 1986 European studies yielded little evidence of an association between radiographic parenchymal opacities and RCF exposure. In the U.S. study, small opacities were rare. Small opacities of profusion category 1/0 or greater were more frequent in the 1986 European study, but exposures to silica and other dusts were believed to account for many of these cases. The results of statistical analyses either did not implicate RCF exposure or yielded results only slightly suggestive of an RCF exposure effect. In the 1996 evaluation of the European cohort, small opacities of category 1/0 or greater were positively associated with RCF exposures that occurred before 1971. Ten of the 51 (19.6%) male workers exposed before 1971 developed category 1/0 or greater opacities; 8 had also been exposed to asbestos, and 9 were either current or ex-smokers.
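Interreader reliability figures such as the κ values quoted above come from Cohen's kappa; a minimal sketch computing it from a hypothetical two-reader agreement table:

```python
# Minimal sketch of the interreader reliability statistic (Cohen's kappa)
# quoted for the film triad versus posteroanterior-only readings. The
# 2x2 agreement table below is hypothetical, not the study's data.
import numpy as np

# rows: reader A (abnormal/normal); columns: reader B (abnormal/normal)
table = np.array([[45, 15],
                  [10, 582]], dtype=float)

n = table.sum()
p_observed = np.trace(table) / n              # raw agreement
row = table.sum(axis=1) / n
col = table.sum(axis=0) / n
p_expected = (row * col).sum()                # chance agreement

kappa = (p_observed - p_expected) / (1 - p_expected)
print(f"kappa = {kappa:.2f}")
```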
Both the U.S. and the European [Burge et al. 1995; Cowie et al. 1999] studies found that occupational exposure to RCFs is associated with various reported respiratory symptoms and conditions after adjusting for the effects of age, sex, and smoking. Exposure to RCF concentrations in the range of 0.2 to 0.6 f/cm³ was associated with statistically significant increases in eye irritation (OR=2.16, 95% CI=1.32-3.54), stuffy nose (OR=2.06, 95% CI=1.25-3.39), and dry cough (OR=2.53, 95% CI=1.25-5.11) compared with exposure concentrations lower than 0.2 f/cm³. Increasing ORs were demonstrated for RCF exposure concentrations greater than 0.6 f/cm³ compared with lower exposure concentrations. Lockey et al. found that dyspnea was associated with cumulative exposure >15 fiber-months/cm³ (that is, >1.25 fiber-years/cm³) relative to exposure to ≤15 fiber-months/cm³ (dyspnea grade 1: OR=2.1, 95% CI=1.3-3.3; dyspnea grade 2: OR=3.8, 95% CI=1.6-9.4). Lockey et al. also found statistically significant associations between cumulative RCF exposure and chronic cough (OR=2.0, 95% CI=1.0-4.0) and pleurisy (OR=5.4, 95% CI=1.4-20.2). Lemasters et al. also noted associations (P<0.05) between employment in an RCF production job and increased prevalence of dyspnea and the presence of at least one respiratory symptom or condition. Recurrent chest illness in the European cohort was associated with cumulative exposure to respirable fibers and was most strongly associated with cumulative exposure to respirable dust.

In cross-sectional analyses involving spirometric testing, both the U.S. and 1986 European studies found that cumulative RCF exposure was associated with pulmonary function decrements among current and former smokers. The 1996 European study demonstrated decrements in current smokers only. The observed decreased pulmonary function in the European workers remained significantly associated with cumulative RCF exposure, even after controlling for cumulative exposure to inspirable dust. A longitudinal analysis of data from multiple PFTs by Lockey et al. led the researchers to conclude that exposures to RCFs between 1987 and 1994 were not associated with decreased pulmonary function. The findings from the U.S. and European studies suggest that decrements in pulmonary function observed in current and former smokers result from an interactive effect between smoking and RCF exposure.

# Carcinogenicity Risk Assessment Analyses

The literature contains three significant independent risk analyses of occupational exposure to RCFs and potential health effects. In each of these analyses, health effects data derived from multidose and MTD studies with rats were used with models to extrapolate risks to human populations. The modeling of effects observed in experimental animal studies was necessitated by the lack of adequate data on adverse health effects in humans with occupational exposures to RCFs. The three analyses, described in detail below and in Table 5-9, are those of the Dutch Expert Committee on Occupational Standards (DECOS), Fayerweather et al., and Moolgavkar et al.

# DECOS

In 1995, DECOS (a workgroup of the Health Council of the Netherlands) published a report evaluating the health effects of occupational exposure to SVFs. The purpose of the report was to establish health-based recommended occupational exposure limits for specific types of SVFs. As one of the criteria for determining the airborne exposure limits for six distinct types of SVFs, risk assessments were performed for each fiber type, including RCFs.
The risk analysis for RCFs was based on the assumption that RCFs are a potential human carcinogen, as indicated by the positive results of carcinogenicity testing with animals. A health-based recommended occupational exposure limit was determined using the following rationale:

1. If the carcinogenic potential of RCFs is caused by a nongenotoxic mechanism, an occupational exposure limit of 1 respirable f/cm³ as an 8-hr TWA should be recommended, based on an NOAEL of 25 f/cm³ and a safety factor of 25.

2. If the carcinogenic potential of RCFs is linked to a genotoxic mechanism, a model assuming a linear relationship between dose and the response (cancer) should be used to establish the occupational exposure limit. The model indicated that an excess cancer risk of 4×10⁻³ is associated with a TWA exposure to 5.6 respirable f/cm³ based on 40 years of occupational exposure. A cancer risk of 4×10⁻⁵ is associated with exposure to 0.056 f/cm³, and a linear extrapolation indicated that occupational exposure to 1 respirable f/cm³ as an 8-hr TWA for 40 years is associated with a cancer risk of 7×10⁻⁴.

The DECOS analysis relied on data from a long-term multidose study with rats exposed to kaolin ceramic fibers. These data showed that exposure by inhalation to 25 f/cm³ (3 mg/m³) for 24 months produced a negligible amount of fibrosis (mean Wagner score of 3.2). Consequently, the Dutch committee viewed 25 f/cm³ as the NOAEL for fibrosis. The report also notes that at the time of publication, no data existed from retrospective cohort mortality or morbidity and case-control studies of persons with occupational exposures to RCFs. The linear modeling approach in this analysis of the exposure-response relationship using the animal data does not take into consideration possible differences in dosimetry and lung burden between rats and humans.

# Fayerweather et al.

Fayerweather et al. conducted a study primarily focusing on the risk assessment of occupational exposures for glass fiber insulation installers. They also performed risk analyses for several other types of SVFs, including RCFs; only the analysis of RCFs is presented here. This analysis applied an EPA linearized multistage model (representing a linear, nonthreshold dose-response) to data from rat multidose and MTD chronic inhalation bioassays to determine exposures at which "no significant risk" occurs, i.e., no more than one additional cancer case per 100,000 exposed persons. Nonlinear models were also used for comparison: the Weibull 1.5-hit nonthreshold model (representing the nonlinear, nonthreshold dose-response curve) and the Weibull 2-hit threshold model (representing the nonlinear, threshold dose-response curve). Fiber inhalation by rats was equated to humans by determining the fibers per day per kilogram of body weight for the animals and using an exposure scenario of 4 hr/day (consistent with insulation installers' schedules), 5 days/week, and 50 weeks/year over 40 working years of a 70-year lifespan. The RCFC interpreted the results of the analysis with the linearized multistage model to represent a risk of 3.8×10⁻⁵ for developing lung cancer over the working lifetime at an exposure concentration of 1 f/cm³. Using the nonlinear models, estimates of nonsignificant exposures (i.e., a working lifetime exposure associated with no more than 1 additional cancer case per 100,000 exposed persons) were 2 and 3 orders of magnitude higher; conversely, the risk estimates for exposure to 1 f/cm³ for a working lifetime were lower using the Weibull 1.5-hit nonthreshold and Weibull 2-hit threshold models.
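The DECOS linear extrapolation (item 2 above) can be verified directly by proportional scaling; a one-function sketch using only the figures quoted in the DECOS rationale:

```python
# Sketch verifying the DECOS linear extrapolation: under the genotoxic
# (linear, nonthreshold) assumption, excess risk scales in direct
# proportion to concentration. Reference values are those quoted above.
risk_ref, conc_ref = 4e-3, 5.6      # excess cancer risk at 5.6 f/cm3

def risk_at(conc_f_cm3):
    return risk_ref * conc_f_cm3 / conc_ref

print(f"risk at 1 f/cm3     = {risk_at(1.0):.1e}")    # ~7e-4, as quoted
print(f"risk at 0.056 f/cm3 = {risk_at(0.056):.1e}")  # ~4e-5, as quoted
```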
# Moolgavkar et al.

This report describes a quantitative assessment of the risk of lung cancer associated with occupational exposure to RCFs. A major premise underlying the risk assessment is that, at the tissue level, humans are as susceptible to RCFs as rats. The risk analysis was performed using data from two chronic inhalation bioassays of RCFs in male Fischer 344 rats. Dosimetry in the risk assessment was based on a fiber deposition and clearance model developed by Yu et al. that was used to estimate the lung burdens of fibers in humans. The dose-response model used for the risk assessment was the two-mutation clonal expansion model, commonly referred to as the Moolgavkar-Venzon-Knudson (MVK) model. The MVK model was fitted to the rat bioassay data to estimate the proportional increase in the lung tumor initiation rate in RCF-exposed rats relative to the background initiation rate in nonexposed rats. An MVK model for human lung cancer was then created by fitting the model to the age-specific lung cancer incidence for either of two human cohorts. Finally, the human lung cancer rate for a given tissue dose was estimated by increasing the tumor initiation rate in the human model by the same proportional amount that an identical tissue dose would increase the initiation rate in the MVK model for rats. The assumption was made that, for any given tissue dose, the proportional increase in the lung tumor initiation rate (relative to the background rate) is the same in humans as in rats. The two human cohorts used for the human modeling were a nonsmoking American Cancer Society (ACS) cohort [Peto et al. 1992] and a cohort of steel industry workers not exposed to coke oven emissions.

The excess risk was also calculated for exposure concentrations of 0.75 f/cm³, 0.5 f/cm³, and 0.25 f/cm³. These risk estimates are presented in Table 5-10. As shown in Table 5-10, the highest risk estimates at each of the three exposure concentrations are associated with the linear model, followed by the exponential model; the lowest risk estimates are associated with the quadratic model. At each exposure concentration, more conservative risk estimates are obtained for the ACS cohort than for the steel industry cohort. At the recommended exposure guideline established by the RCFC (0.5 f/cm³), the highest risk estimate (linear model, steel industry cohort) is the maximum likelihood estimate (MLE) of 5.3×10⁻⁴, or 5.3/10,000 (95% upper confidence limit [UCL]=2.9×10⁻³). At 0.5 f/cm³, the risk estimates for the steel industry cohort are roughly 1 order of magnitude (a factor of 10) lower with the exponential model (MLE=7.3×10⁻⁵, 95% UCL=9.1×10⁻⁵) and 2 orders of magnitude lower with the quadratic model (MLE=3.5×10⁻⁶, 95% UCL=1.1×10⁻⁵). At the lowest exposure concentration (0.25 f/cm³), the highest risk estimate (steel industry cohort, linear model) was the MLE of 2.7×10⁻⁴ (95% UCL=1.4×10⁻³). On average, the risk estimates from the three models using the steel industry cohort are 3 to 4 times higher than the corresponding model values for the ACS cohort. The authors concluded that the risk estimates based on the two cohorts "represent bounds on risks likely to be seen in occupational cohorts." However, an occupational cohort is unlikely to share the nonsmoking status of the ACS cohort. Therefore, of the two human populations used for model fitting in the Moolgavkar et al. risk assessment, the steel industry cohort may be the preferable cohort for estimating the risks from occupational exposures to RCFs.
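To make the two-mutation (MVK) framework concrete, the following sketch integrates a deterministic, mean-field simplification of the model: initiated cells arise from normal cells at rate ν·X, expand clonally at net rate (α − β), and convert to malignant cells at rate µ, giving a hazard h(t) = µ·I(t). All parameter values are illustrative assumptions, not the fitted values of Moolgavkar et al.

```python
# Deterministic mean-field sketch of the two-mutation clonal expansion
# (MVK) model: dI/dt = nu*X + (alpha - beta)*I, hazard h(t) = mu*I(t).
# Parameter values are illustrative assumptions only.
import numpy as np

nu_X = 1e-7 * 1e7        # initiation events per year (nu * normal cells)
alpha, beta = 0.10, 0.09 # initiated-cell birth/death rates (per year)
mu = 1e-7                # malignant conversion rate (per cell-year)

dt = 0.01
ages = np.arange(0, 70, dt)
I = 0.0                  # initiated-cell pool
cum_hazard = 0.0
for _ in ages:
    I += (nu_X + (alpha - beta) * I) * dt   # clonal expansion step
    cum_hazard += mu * I * dt               # accumulate hazard

lifetime_prob = 1 - np.exp(-cum_hazard)
print(f"background lifetime tumor probability ~ {lifetime_prob:.4f}")
# An RCF exposure would be represented by scaling nu_X (initiation)
# upward during the exposure window, matching the proportional-increase
# assumption described above.
```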
The Moolgavkar et al. report also indicates the airborne fiber concentrations estimated to result in an excess lifetime cancer risk of 10⁻⁴ (1 in 10,000) based on the approaches used by DECOS and Fayerweather et al. and using the MVK model for both the ACS cohort and the steel industry cohort. With the DECOS linearized, nonthreshold model approach, an excess lifetime cancer risk of 10⁻⁴ was calculated to result from a fiber concentration of 0.14 f/cm³. Using the linearized multistage model approach described in Fayerweather et al., a fiber concentration of 2.6 f/cm³ was estimated to correspond to an excess lifetime cancer risk of 10⁻⁴. With the MVK exponential model, an excess lifetime cancer risk of 10⁻⁴ was determined for fiber concentrations of 0.7 f/cm³ for the steel industry cohort and 2.7 f/cm³ for the ACS cohort.
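Each concentration above is effectively the inversion of a model's low-dose slope at the 10⁻⁴ target. Under an assumed linear low-dose behavior, the same slopes scale to other risk targets, as this sketch shows; the slopes are back-calculated from the figures in the text and are approximations, not published unit risks:

```python
# Sketch: back-calculating an approximate unit risk (risk per f/cm3)
# from each reported "concentration at 1e-4 excess lifetime risk," then
# scaling to a tenfold lower target under a linearity assumption.
reported = {  # model -> f/cm3 at 1e-4 excess lifetime risk (from text)
    "DECOS linearized": 0.14,
    "Fayerweather multistage": 2.6,
    "MVK exponential, steel cohort": 0.7,
    "MVK exponential, ACS cohort": 2.7,
}

for model, conc_1e4 in reported.items():
    unit_risk = 1e-4 / conc_1e4      # approximate risk per f/cm3
    conc_1e5 = 1e-5 / unit_risk      # concentration at 1e-5 target
    print(f"{model}: unit risk ~ {unit_risk:.1e} per f/cm3; "
          f"1e-5 risk at ~{conc_1e5:.3f} f/cm3")
```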
# Discussion

The estimated lung fiber burden used for dosimetry in the analysis by Moolgavkar et al. is a methodological improvement over the risk assessment for RCFs by Fayerweather et al., which was based solely on the inhaled fiber concentration. Modeling lung burden dosimetry should, in theory, compensate for the known differences between rats and humans in fiber deposition and clearance. Similarly, using an MVK model for dose-response estimation could compensate for differences in cell mutation and proliferation rates in rats and humans. However, some key parameter values in the MVK and lung dosimetry models are poorly known. For example, the dosimetry model for humans has been validated with only three human tissue samples taken from workers whose exposures to RCFs were not measured. A review and comparison of risk modeling approaches for RCFs by Maxim et al. describes the three models discussed here as well as additional, more sophisticated variations of quantitative risk analyses for RCFs. Using approaches such as benchmark dose modeling, Maxim et al. produced RCF unit potency values ranging from 1.4×10⁻⁴ to 7.2×10⁻⁴.

A common weakness among all three of the risk analyses stems from uncertainty about possible differences in the sensitivity of human lungs to fibers compared with rat lungs. The possibility of such a difference is acknowledged in the report by Moolgavkar et al., but the effect of this uncertainty on the risk estimates is not explored quantitatively. As an example, Pott et al. estimated that in the case of asbestos fibers, humans are approximately 200-fold more sensitive than rats on the basis of fiber concentration in air. Pott et al. further noted that a crocidolite inhalation study that was negative in the rat resulted in a rat lung fiber concentration that was more than 1,000-fold greater than the fiber concentrations in the lungs of asbestos workers with mesotheliomas. In support of this analysis, results of a study by Rödelsperger and Woitowitz led the authors to conclude that humans are at least 6,000 times more sensitive than rats to a given tissue concentration of amphibole fibers. Although amphibole asbestos fibers have physicochemical characteristics that differ from those of RCFs, these findings raise questions about using experimental animal data to predict human health effects and about assuming that target tissues in humans and rats are equally sensitive to RCF toxicity.

The lung cancer risk estimates for RCFs derived by Moolgavkar et al. may also be underestimated for occupationally exposed workers because of several basic assumptions made in the lung tissue dosimetry. Tissue dosimetry modeling in the Moolgavkar et al. risk assessment is based on the assumption that a worker is exposed to RCFs for 8 hr/day, 5 days/week, 52 weeks/year, from age 20 to 50. An alternative analysis, in which the assumption was changed to 8 hr/day, 5 days/week, 50 weeks/year from age 20 to 60, was also described but not presented in detail. In both cases, the breathing rate for light work was assumed to be 13.5 liters/minute. Additional information might be gained from assuming an exposure period of 8 hr/day, 5 days/week, 50 weeks/year, from age 20 to 65, with a breathing rate matching the International Commission on Radiological Protection "Reference Man" value for light work, which is 20 liters/minute. In addition, the cumulative excess risk of lung cancer was calculated only through age 70. This practice may underestimate the lifetime risk of lung cancer in the exposed cohort, since a substantial fraction of the cohort may be expected to survive beyond age 70. The excess risk might instead be calculated in a competing-risks framework using actuarial methods until most or all of the cohort is presumed to have died of competing risks (generally by age 85). Finally, the risk estimates derived by Moolgavkar et al. were based solely on data from studies with rats, ignoring data from studies of hamsters. Because 42% of the hamsters in these studies developed mesotheliomas, using this database for the risk assessment would produce higher estimates of risk than the analysis based on the rat data.

# 6 Discussion and Summary of Fiber Toxicity

# Significance of Studies with RCFs

Three major sources of data contributing to the literature on RCFs are (1) experimental studies with animals and in vitro bioassays, (2) epidemiologic studies of populations with occupational exposure to RCFs (primarily during manufacturing), and (3) exposure assessment studies that provide quantitative and qualitative measurements of exposures as well as the physical and chemical characteristics of airborne RCFs. Each of these sources of information is considered integral to this criteria document for providing a more comprehensive evaluation of occupational exposure to RCFs and their potential health consequences.

Data from inhalation studies with animals exposed to RCFs have demonstrated statistically significant increases in the induction of lung tumors in rats and mesotheliomas in hamsters. Other inhalation studies with RCFs have shown pathobiologic inflammatory responses in lung and pleural tissues. Implantation and instillation methods have also been used in animal studies with RCFs to determine the potential effects of these fibers on target tissues. These studies have recognized limitations for interpreting results because the exposure techniques bypass the natural defense and clearance mechanisms associated with the normal route of exposure (i.e., inhalation). However, they are useful for demonstrating mechanisms of toxicity and comparative measures of toxicity for different agents. RCFs implanted into the pleural and abdominal cavities of various strains of rats and hamsters have produced mesotheliomas, sarcomas, and carcinomas at the sites of fiber implantation. Similar tumorigenic responses have been observed following intratracheal instillation of RCFs. These data provide additional evidence of the carcinogenic effects of RCFs in exposed laboratory animals.

Epidemiological data have not associated occupational exposure to RCFs under current exposure conditions with increased incidence of pleural mesothelioma or lung cancer. However, in epidemiologic studies of workers in RCF manufacturing facilities, increased exposures to airborne fibers have been linked to pleural plaques, small radiographic parenchymal opacities, decreased pulmonary function, respiratory symptoms and conditions (pleurisy, dyspnea, cough), and skin and eye irritation. Many of the respiratory effects showed a statistically significant association with RCF exposure after controlling or adjusting for potential confounders, including cigarette smoking and exposure to nonfibrous dust. Yet in PFTs, the interactive effect between smoking and RCF exposure was especially pronounced, based on the finding that RCF-associated decreases in pulmonary function were limited to current and former smokers. The interactive effect between exposure to airborne fibers and cigarette smoke has been previously documented (e.g., Selikoff et al.). However, unlike male workers, nonsmoking female workers did show statistically significant decreases in PFT results associated with RCF exposure. Analyses of data from multiple PFT sessions have led researchers to conclude that decreases in pulmonary function were more strongly influenced by the higher exposures to airborne RCFs that occurred in the past. This conclusion seems plausible, since historical air-sampling data indicate that airborne fiber concentrations were much higher in the first decades of RCF manufacturing and that former workers had potentially higher exposures.

Multiple studies have been performed to characterize the concentrations and characteristics of airborne exposures to RCFs in the workplace. Current and historical environmental monitoring data indicate that airborne exposures to RCFs include fibers in the respirable size range (<3.5 µm in diameter and <200 µm long). These exposures occur in primary RCF manufacturing as well as in secondary industries such as RCF installation and removal. Sampling data from studies of domestic, primary RCF manufacturing sites indicate that average airborne fiber concentrations have steadily declined by nearly 2 orders of magnitude over the past 2 decades. For example, Rice et al. report a maximum exposure estimate of 10 f/cm³ associated with an RCF manufacturing process in the 1950s, and Esmen et al. measured average exposure concentrations ranging from 0.05 to 2.6 f/cm³ in RCF facilities in the middle to late 1970s. Although the potential exists for exposure to respirable crystalline silica in the forms of quartz, tridymite, and cristobalite during work with RCFs, exposure monitoring data indicate that these exposures are generally low. Maxim et al. report that many airborne samples of crystalline silica collected during the installation and removal of RCF products contain concentrations below the LOD, with average concentrations of respirable crystalline silica per measurable task ranging from 0.01 to 0.44 mg/m³ (equivalent 8-hr TWA range=0.004 to 0.148 mg/m³). Other studies have shown greater potential for exposure to respirable crystalline silica (especially in the form of cristobalite) during the removal of after-service RCF materials.
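The equivalent 8-hr TWA range quoted for the task-based silica samples follows from time-weighting each task concentration over a full shift; a minimal sketch (the 160-minute task duration is a hypothetical value chosen to land near the upper end of the quoted range):

```python
# Sketch of the 8-hr TWA conversion used for task-based samples: a high
# task concentration over part of a shift dilutes to a lower full-shift
# equivalent. Task durations here are assumed values.
def eight_hour_twa(samples):
    """samples: list of (concentration mg/m3, duration min); unsampled
    time is treated as zero exposure."""
    return sum(c * t for c, t in samples) / 480.0

# hypothetical: one 160-minute RCF removal task at 0.44 mg/m3
print(f"TWA = {eight_hour_twa([(0.44, 160)]):.3f} mg/m3")
# ~0.147 mg/m3, near the upper end of the 0.004-0.148 mg/m3 equivalent
# TWA range quoted above.
```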
For processes associated with higher concentrations of airborne respirable fibers, there are also generally greater concentrations of total and respirable dusts.

# Factors Affecting Fiber Toxicity

To accurately interpret the results of experimental and epidemiologic studies with RCFs, it is important to consider the recognized factors that contribute to fiber toxicity for RCFs and other SVFs in general. The major determinants of fiber toxicity have been identified as fiber dose (or its surrogate, airborne fiber exposure), fiber dimensions (length and diameter), and fiber durability (especially as it affects fiber biopersistence in the lungs).

# Fiber Dose

The measurement of airborne fiber concentrations is frequently used as a surrogate for assessing dose and health risk to workers. Analyses of historical and current air sampling data indicate that occupational exposure concentrations of airborne RCFs have decreased dramatically in the manufacturing sector. In chronic inhalation studies of RCFs, both rats and hamsters were exposed to a range of size-separated RCF concentrations in a nose-only inhalation protocol. When airborne RCFs are generated, half or more of the aerosol is composed of respirable particles of unfiberized material that was formerly a component of the fiber. Because of the nature of this mixed exposure, it is difficult to determine the relative contributions of the airborne fibers and nonfibrous particulates to the adverse effects observed in humans and animals. It has been postulated that the nonfibrous particulates may have contributed to an overload effect in the Mast et al. animal studies with RCFs. Burge et al. have suggested that the health effects seen in RCF-exposed workers are a consequence of combined particulate and fiber exposure, but that the decrements in lung function are more related to fiber exposure combined with smoking. Other studies have shown that for processes associated with higher concentrations of airborne respirable fibers, there is also a greater concentration of total and respirable dust.

# Fiber Dimensions

Throughout the literature, studies support the theory that fiber toxicity is related to fiber dimensions. Initially, fiber dimensions (length and diameter) play a significant role in determining the deposition site of a fiber in the lungs. Longer and thicker (>3.5 µm in diameter) fibers are preferentially deposited in the upper airways by the mechanisms of impaction or interception. Timbrell suggested that direct interception plays an important role in the deposition of fibers, as the fiber comes into contact with the airway wall and is deposited. Fibers deposited in the larger ciliated airways are generally cleared via the mucociliary escalator. Thinner fibers tend to maneuver past airway bifurcations into smaller and smaller airways until their dimensions dictate deposition either by sedimentation or by diffusion. Another factor that may enhance deposition is the electrostatic charge a fiber can accumulate during dust-generating processes in occupational settings. The fiber charge may affect its attraction to the lung surface, causing the fiber to be deposited by electrostatic precipitation. Although the dimensional characteristics and geometry of a fiber influence its deposition in the respiratory tract, the fiber's length and chemical properties dictate its clearance and retention once it has been deposited within the alveolar region.
For a fiber that traverses the respiratory airways and is deposited in the gas exchange region, possible fates include dissolution, clearance via phagocytic cells (alveolar macrophages) in the alveoli, or translocation through membranes into interstitial tissues. Both test animals and workers have been exposed to RCFs of similar length and diameter, and these exposures include fibers of respirable dimensions. Because rats and other rodents are obligate nasal breathers, fibers greater than about 1 µm in diameter are too large for deposition in their alveoli. By comparison, humans can inhale and deposit fibers up to 3.5 µm in diameter in the thoracic and gas exchange regions of the lung. This physiological difference prevents the evaluation of fibers with diameters of about 1 to 3.5 µm (which would have human relevance) in rodent inhalation studies.

The role of fiber size in inducing biological effects is well documented and reviewed in the literature. Stanton et al. hypothesized that glass fibers longer than 8 µm with diameters thinner than 0.25 µm have high carcinogenic potential. In a review of the significance of fiber size to mesothelioma etiology, Timbrell concluded that thin fibers with an upper diameter limit of 0.1 µm are more potent for producing diseases of the parietal pleura (e.g., mesothelioma and pleural plaques) than thicker fibers. That value for fiber diameter is cited by Lippmann in his asbestos exposure indices for mesothelioma. Oberdörster studied the effects of both long (>10-16 µm) and short (<10 µm) fibers on alveolar macrophage functions, concluding that both will lead to inflammatory reactions, although a distinct difference exists in the long-term effects because of the differential clearance of fibers of different sizes.

Alveolar macrophages constitute the first line of defense against particles deposited in the alveoli; they migrate to sites where fibers are deposited and phagocytize them. The engulfed fibers are then moved by the macrophages toward the mucociliary escalator and removed from the respiratory tract. The ability of the macrophages to clear fibers is size-dependent. Short fibers (<15 µm long) can usually be phagocytized by a single rat alveolar macrophage, and researchers have suggested that incomplete or frustrated phagocytosis of longer fibers may play a role in their increased toxicity. Fiber length has been correlated with the cytotoxicity of glass fibers, with the greatest cytotoxicity observed for fibers 17 and 33 µm long compared with shorter fiber samples. Long fibers (17 µm average length) also tend to be a more potent inducer of TNF production and transcription factor activation than short fibers (7 µm average length).

When comparing the dimensions of airborne fibers with those found in the lungs, it is important to consider the preferential clearance of shorter fibers as well as the effects of fiber dissolution and breakage. Yu et al. evaluated these factors in a study that led to the development of a clearance model for RCFs in rat lungs. Results of that study confirmed that fibers 10 to 20 µm long are cleared more slowly than those <10 µm long because of the incomplete phagocytosis of long fibers by macrophages. The preferential clearance of shorter fibers has also been documented in studies with chrysotile asbestos and other mineral fibers, in which the average length of retained fibers increased during a discrete period following deposition. This increase might also be explained by the longitudinal cleavage pattern of asbestos fibers, which results in longer fibers of decreasing diameters. By contrast, any breakage of RCFs would occur perpendicular to the longitudinal plane, resulting in shorter fibers of the same diameter.
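A minimal numeric sketch of this size-dependent clearance: two retained-fiber pools cleared at different first-order rates, showing the retained mean length drifting upward even though no individual fiber lengthens. The rate constants and counts are illustrative assumptions, not parameters from the Yu et al. model.

```python
# Sketch: preferential clearance of short fibers shifts the retained
# length distribution upward over time. All values are illustrative.
import numpy as np

k_short, k_long = 0.015, 0.004   # per day; short (<10 um) clears faster
n_short, n_long = 800.0, 200.0   # deposited fiber counts (hypothetical)
len_short, len_long = 6.0, 15.0  # representative lengths (um)

for day in (0, 90, 270):
    s = n_short * np.exp(-k_short * day)   # surviving short fibers
    l = n_long * np.exp(-k_long * day)     # surviving long fibers
    mean_len = (s * len_short + l * len_long) / (s + l)
    print(f"day {day:3d}: retained mean length = {mean_len:.1f} um")
```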
For the clearance model developed by Yu et al., the effect of fiber breakage was also assessed from experimental data and incorporated into the model. The authors concluded that the simultaneous effect of fiber breakage and differential clearance leads to only a small change in the fiber size distribution in the lung. This result suggests that the dimensions of fibers in the lung are closely related to the dimensions of fibers measured in airborne samples (adjusted for deposition); thus, most short fibers in the lungs originated as short fibers in airborne exposures.

The dimensions of airborne fibers have also been characterized for workers with occupational exposure to RCFs. One study of domestic RCF manufacturing facilities found that approximately 90% of airborne fibers were shorter than 10 µm. Measurements of airborne fibers in the European RCF manufacturing industry are comparable: Rood reported that all fibers observed were in the thoracic and respirable size range (i.e., diameter <3 µm), with median diameters ranging from 0.5 to 1.0 µm and median lengths from 8 to 23 µm. During removal of RCF products, Cheng et al. found that 87% of airborne fibers were within the respirable size range, with fiber diameters ranging from 0.5 to 6 µm (median diameter=1.6 µm) and fiber lengths ranging from 5 to 220 µm. Another study of airborne fiber dimensions measured during installation and removal of RCF materials in industrial furnaces reported geometric mean diameters of 0.38 and 0.57 µm, respectively.

# Fiber Durability

Biopersistence (and specifically the retention time of the fiber in the lungs) is considered to be an important predictor of fiber toxicity. Fiber solubility affects the biopersistence of fibers deposited within the lung and is a key determinant of fiber toxicity. Bender and Hadley suggest that the important considerations for fiber durability include the following:

- Fiber size, particularly length as it relates to the dimensions of the alveolar macrophages
- Fiber dissolution rate
- Mechanical properties of the fibers, including partially dissolved and/or digested fibers
- Overloading of the normal clearance mechanisms of the lung

Bignon et al. argue that fibers that are biopersistent in vivo and in vitro are more biologically active than less durable fibers. The durability of RCFs provides a basis for suggesting that these fibers might persist long enough to induce biological effects similar to those of asbestos. In vitro durability tests have shown RCFs to be highly resistant to dissolution in biologically relevant mixtures such as Gamble's solution. The persistence of RCFs in both the peritoneal cavity and the lung has been recognized in experimental studies. Hammad et al. sacrificed rats exposed to either slag wool or ceramic fibers via inhalation at 5, 30, 90, 180, or 270 days after exposure. The lungs of the animals were ashed in a low-temperature asher, and the fiber content of the lungs was evaluated by PCM. The researchers found that 24% of the deposited RCFs persisted in the lungs of rats sacrificed 270 days following exposure. In the same study, the lungs of rats exposed to slag wool contained only 6% of the deposited slag wool fibers 270 days after exposure, compared with those sacrificed 5 days following inhalation.
From these results, it was concluded that RCFs follow the clearance pattern of relatively durable fibers that persist, translocate, or are removed by some mechanism other than dissolution. Similar results were obtained in the study by Mast et al., which showed that RCFs are persistent in the lungs of rats exposed by inhalation. Specifically, compared with the fiber burden in the lungs of animals sacrificed 3 months after exposure (recovery), the lungs of animals sacrificed after 21 months of recovery contained approximately 20% of the deposited fibers. Of the retained fibers (measured with both SEM and TEM techniques), 54% to 75% had diameters <0.5 µm, and more than 90% were 5 to 20 µm long.

Researchers have suggested that fibers deposited in the gas exchange region with lengths less than the diameter of an alveolar macrophage are phagocytized and cleared via the mucociliary system or the lymph channels. Dissolution of fibers within the alveolar macrophage occurs if the fibers are not resistant to the acidic intracellular conditions (pH of approximately 5). Fibers that are not engulfed by alveolar macrophages are subjected to a pH of 7.4 in the extracellular fluid of the lung. A study of SVF durability in rat alveolar macrophages reports that RCFs are much less soluble than glass wool and rock wool fibers, based on the amounts of silicon (Si) and iron (Fe) dissolved from the fibers in vitro. RCFs in rat alveolar macrophage culture dissolved less than 10 mg Si/m² of fiber surface area and less than 1 mg Fe/m² of fiber surface area; glass wool dissolved more than 50 mg Si/m², and rock wool dissolved nearly 2 mg Fe/m² when measured over comparable time periods. However, degradation and dissolution of deposited RCFs may still occur, based on the findings of higher dissolution of aluminum (Al) from RCFs (0.8 to 2.4 mg Al/m²) in alveolar macrophages than from the other SVFs. In another study, SEM analysis of fibers recovered from the lungs of rats 6 months after inhalation of RCFs revealed an eroded appearance, leading the researchers to conclude that dissolution of Si in the fibers is a plausible mechanism for long-term fiber clearance.

SVFs in general are less durable than asbestos fibers. RCFs, however, are more durable than many other SVFs, with a dissolution rate only somewhat higher than that of chrysotile asbestos. Under the extracellular conditions in the lung, chrysotile (the most soluble form of asbestos) has a dissolution rate of <1 to 2 ng/cm²/hr. RCFs have a similar dissolution rate of about 1 to 10 ng/cm²/hr under conditions experienced in pulmonary interstitial fluid. Other, more soluble SVFs can be 10 to 1,000 times less durable. At the measured solubility rate, an RCF with a 1-µm diameter would take more than 1,000 days to dissolve completely.
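The >1,000-day figure is consistent with a shrinking-cylinder estimate of dissolution lifetime; a sketch under stated assumptions (congruent surface dissolution and an assumed fiber density of about 2.6 g/cm³):

```python
# Shrinking-cylinder sketch of fiber dissolution lifetime: a fiber of
# radius r0 dissolving uniformly at surface rate k loses radius at k/rho.
# The density and congruent-dissolution behavior are assumptions.
RHO = 2.6e9          # fiber density, ng/cm3 (assumed ~2.6 g/cm3)

def days_to_dissolve(diameter_um, k_ng_cm2_hr):
    r0_cm = (diameter_um / 2) * 1e-4
    hours = r0_cm * RHO / k_ng_cm2_hr
    return hours / 24

for k in (1.0, 10.0):   # ng/cm2/hr, the range quoted for RCFs above
    print(f"k = {k:4.1f} ng/cm2/hr -> "
          f"{days_to_dissolve(1.0, k):,.0f} days for a 1-um fiber")
# At the slow end of the quoted range the lifetime is well over 1,000
# days, consistent with the estimate in the text.
```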
# Summary of RCF Toxicity and Exposure Data

In addition to the main determinants of fiber toxicity (dose, dimension, and durability), other factors such as elemental composition, surface area, and surface composition can also influence the toxicity of a fiber. Thus, it is difficult to predict a fiber's potential for human toxicity based solely on in vitro or in vivo tests. Based on consideration of these factors, the major findings from the RCF animal and human studies are as follows:

- Toxicologic evidence from experimental inhalation studies indicates that RCFs are capable of producing lung tumors in laboratory rats and mesotheliomas in hamsters. However, interpreting these studies with regard to RCF potency and its implication for occupationally exposed human populations is complicated by the issue of coexposure to fibers and nonfibrous respirable particulate.
- The durability of RCFs contributes to the biopersistence of these fibers both in vivo and in vitro.
- Cytotoxicity and genotoxicity studies indicate that RCFs are capable of inducing enzyme release and cell hemolysis; affect mediator release; affect cell viability and may inhibit proliferation; and may induce free radicals, micronuclei, polynuclei, chromosomal breakage, and hyperdiploid cells.
- Exposure monitoring results indicate that airborne fibers measured in both the manufacturing and end-use sectors of the RCF industry have dimensions that fall within the thoracic and respirable size ranges.
- Epidemiologic studies of workers in the RCF manufacturing industry report an association between increased exposures to airborne fibers and the occurrence of pleural plaques, other radiographic abnormalities, respiratory symptoms, decreased pulmonary function, and eye and skin irritation. Current occupational exposures to RCFs have not been linked to decreases in the pulmonary function of workers.
- Worker exposures to airborne fibers in the RCF industry have decreased substantially over the past 20 years or more, reportedly as the result of increased hazard awareness and the design and implementation of engineering controls.

These observations warrant concern for the continued control and reduction of occupational exposures to airborne RCFs.

# 7 Existing Standards and Recommendations

Standards and guidelines for controlling worker exposures to RCFs vary in the United States. Other governments and international agencies have also developed recommendations and occupational exposure limits that apply to RCFs. Table 7-1 presents a summary of occupational exposure limit standards and guidelines for RCFs.
Within the United States, the RCFC formally established a recommended exposure guideline of 0.5 f/cm³ as an element of its product stewardship program known as PSP 2002, which was endorsed by OSHA as a 5-year voluntary program. As part of that program, the RCFC recommends that workers wear respirators whenever the workplace fiber concentration is unknown or when airborne concentrations exceed 0.5 f/cm³. This exposure guideline was established by the RCFC on October 31, 1997, and replaces the previous exposure guideline of 1 f/cm³ set by the RCFC in 1991. Before this agreement, the OSHA General Industry Standard was most applicable to RCFs, requiring that a worker's exposure to airborne dust containing <1% quartz and no asbestos be limited to an 8-hr PEL of 5 mg/m³ for respirable dust and 15 mg/m³ for total dust.

NIOSH has not previously commented on occupational exposure to RCFs. However, in addressing the health hazards of another SVF (fibrous glass), NIOSH recommended an exposure limit (REL) of 3 f/cm³ as a TWA for glass fibers with diameters ≤3.5 µm and lengths ≥10 µm for up to 10 hr/day during a 40-hr workweek. NIOSH also recommended that airborne concentrations determined as total fibrous glass be limited to 5 mg/m³ of air as a TWA. At that time, NIOSH concluded that exposure to glass fibers caused eye, skin, and respiratory irritation. NIOSH also stated that until more information became available, this recommendation should be applied to other SVFs.

The Agency for Toxic Substances and Disease Registry (ATSDR) calculated an inhalation minimal risk concentration of 0.03 f/cm³ for humans based on extrapolation from animal studies. The Agency used macrophage aggregation, the most sensitive indicator of inflammation from RCFs, as the basis for this value. Calculation of this value is based on exposure assumptions for general public health that differ from those used in models for determining occupational exposure limits.

ACGIH proposed a TLV of 0.1 f/cm³ as an 8-hr TWA for RCFs under its notice of intended changes to the 1998 TLVs. On further review, this concentration was revised to 0.2 f/cm³. ACGIH also classifies RCFs as a suspected human carcinogen (A2 designation). On the basis of a weight-of-evidence carcinogenic risk assessment, the EPA classified RCFs as a Group B2 carcinogen (probable human carcinogen based on sufficient animal data). The ACGIH and EPA designations are consistent with that of the International Agency for Research on Cancer (IARC), which classified ceramic fibers, including RCFs, as "possibly carcinogenic to humans" (Group 2B).

DECOS determined that "RCFs may pose a carcinogenic risk for humans" and set a health-based recommended occupational exposure limit of 1 f/cm³. The German Commission for the Investigation of Health Hazards of Chemical Compounds in the Work Area published a review of fibrous dusts classifying RCFs as category IIIA2, citing "positive results [for carcinogenicity] from inhalation studies (often supported by the results of other studies with intraperitoneal, intrapleural, or intratracheal administration)." In the United Kingdom, the Health and Safety Commission of the Health and Safety Executive has established a maximum exposure limit for RCFs of 1.0 f/ml of air, with the additional advisory to reduce exposures to the lowest reasonably practicable concentration, based on the category 2 carcinogen classification for RCFs.
# 8 Basis for the Recommended Standard

# Background

In the Occupational Safety and Health Act of 1970 (Public Law 91-596), Congress mandated that NIOSH develop and recommend criteria for identifying and controlling workplace hazards that may result in occupational illness or injury. In fulfilling this mission, NIOSH continues to investigate the potential health effects of exposure to naturally occurring and synthetic airborne fibers. This interest stems from the results of research studies confirming asbestos fibers as human carcinogens. Significant increases in the production of RCFs during the 1970s and concerns about potential health effects led to experimental and epidemiological studies as well as worker exposure monitoring. Chronic animal inhalation studies demonstrated the carcinogenic potential of RCFs, with a statistically significant increase in the incidence of lung cancer or mesothelioma in two laboratory species, rats and hamsters. Evidence of pleural plaques observed in persons with occupational exposures to airborne RCFs is clinically similar to that observed in asbestos-exposed persons after the initial years of their occupational exposures to asbestos.

NIOSH considers the discovery of pleural plaques in U.S. studies of RCF manufacturing workers to be a significant finding because the plaques were correlated with RCF exposure. In addition, NIOSH considers the respiratory symptoms and conditions (including dyspnea, wheezing, coughing, and pleurisy) observed in RCF workers to be adverse health effects associated with exposure to airborne RCFs. An association between inhaling RCFs and fibrotic or carcinogenic effects has been documented in animals, but no evidence of such effects has been found in workers in the RCF manufacturing industry. The lack of such an association could be influenced by the small population of workers in this industry, the long latency period between initial exposure and the development of measurable effects, the limited number of persons with extended exposure to elevated concentrations of airborne fibers, and declining occupational exposure concentrations. However, the evidence from animal studies suggests that RCFs should be considered a potential occupational carcinogen. This classification is consistent with the conclusions of ACGIH, EPA, DECOS, and IARC. (See discussion in Chapter 7.) Given these considerations, the NIOSH objective in developing an REL for RCFs is to reduce the possible risk of lung cancer and mesothelioma. In addition, maintaining exposures below the REL will also help to prevent other adverse effects, including irritation of the skin, eyes, and respiratory tract in exposed workers.

To establish an REL for RCFs, NIOSH took into account not only the animal and human health data but also exposure information describing the extent to which RCF exposures can be controlled at different workplaces. On the basis of this evaluation, NIOSH considers an REL of 0.5 f/cm³ (as a TWA for up to 10 hr/day during a 40-hr workweek) to be achievable for most workplaces where RCFs or RCF products are manufactured, used, or handled. Maintaining exposures at the REL will minimize the risk for lung cancer and reduce the risk of irritation of the eyes and upper respiratory system. The residual risks of lung cancer at the REL are estimated to be 0.073 to 1.2 per 1,000, based on extrapolations of risk models from Moolgavkar et al. and Yu and Oberdörster. The risk for mesothelioma at the REL of 0.5 f/cm³ is not known but cannot be discounted. Evidence from epidemiologic studies showed that higher exposures in the past resulted in pleural plaques in workers, indicating that RCFs do reach pleural tissue. Both implantation studies in rats and inhalation studies in hamsters have shown that RCF fibers can cause mesothelioma. Because of limitations in the hamster data, the risk of mesothelioma cannot be quantified. However, the fact that no mesothelioma has been found in workers and that pleural plaques appear to be less likely to occur in workers with lower exposures suggests a lower risk for mesothelioma at the REL.

Because residual risks of cancer (lung cancer and pleural mesothelioma) and irritation may exist at the REL, NIOSH further recommends that all reasonable efforts be made to work toward reducing exposures to less than 0.2 f/cm³. At this concentration, the risks of lung cancer are estimated to be 0.03 to 0.47 per 1,000, based on extrapolations of risk models from Sciences International, Moolgavkar et al., and Yu and Oberdörster. Maintaining airborne RCF concentrations at or below the REL requires the implementation of a comprehensive safety and health program that includes routine monitoring of worker exposures, installation and routine maintenance of engineering controls, and worker training in good work practices.
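The residual-risk figures quoted at 0.5 and 0.2 f/cm³ behave approximately linearly in concentration, as a quick consistency sketch shows (the underlying models are not exactly linear, so this is an approximation, not a replacement for the published extrapolations):

```python
# Consistency sketch: the quoted lung cancer risk ranges scale roughly
# linearly between the REL (0.5 f/cm3) and the 0.2 f/cm3 target.
risk_at_rel = (0.073e-3, 1.2e-3)     # per worker, from the text
scale = 0.2 / 0.5

low, high = (r * scale for r in risk_at_rel)
print(f"predicted range at 0.2 f/cm3: "
      f"{low*1e3:.3f} to {high*1e3:.2f} per 1,000")
# ~0.029 to 0.48 per 1,000, close to the quoted 0.03 to 0.47 per 1,000,
# so the reported ranges behave approximately linearly in this interval.
```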
To ensure that worker exposures are routinely maintained below the REL, NIOSH recommends that an AL of 0.25 f/cm³ be part of the workplace exposure monitoring strategy to verify that all exposure control efforts (e.g., engineering controls and work practices) are in place and working properly. The purpose of the AL is to indicate when worker exposures to RCFs may be approaching the REL. Exposure measurements at or above the AL indicate that worker exposures cannot be assumed, with a high degree of confidence, to be below the REL. The AL is a statistically derived concept permitting the employer to have confidence (e.g., 95%) that if exposure measurements are below the AL, only a small probability exists that the exposure concentrations are above the REL. When exposures exceed the AL, employers should take immediate action (e.g., determine the source of exposure, identify measures for controlling exposure) to ensure that exposures are maintained below the exposure limit. NIOSH has concluded that an AL allows for the periodic monitoring of worker exposures in the workplace so that resources do not need to be devoted to conducting daily exposure measurements. The AL concept has been an integral element of recommended occupational standards in NIOSH criteria documents and in comprehensive standards promulgated by OSHA and MSHA.
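The statistical idea behind the AL can be illustrated with a small simulation; the lognormal form and the day-to-day geometric standard deviation are assumptions chosen for illustration:

```python
# Monte Carlo sketch of the action-level idea: if a worker's true mean
# exposure sat exactly at the REL, how often would a single full-shift
# sample still fall below the AL? GSD and lognormal form are assumed.
import numpy as np

REL, AL = 0.5, 0.25               # f/cm3
gsd = 2.0                         # assumed day-to-day geometric SD
sigma = np.log(gsd)
mu = np.log(REL) - sigma**2 / 2   # lognormal with arithmetic mean = REL

rng = np.random.default_rng(1)
samples = rng.lognormal(mu, sigma, 100_000)
p_below_al = (samples < AL).mean()
print(f"P(sample < AL | true mean = REL) ~ {p_below_al:.2f}")
# Roughly a quarter of single-shift samples still fall below the AL in
# this scenario, which is why repeated measurements below the AL, not a
# single one, underpin the 95% confidence statement above.
```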
# Rationale for the REL

The recommendation to limit occupational exposures to airborne RCFs to a TWA of 0.5 f/cm³ is based on data from animal and human studies, risk assessments, and the availability of methods to control RCF exposures at the REL in many workplaces. Establishing the REL for RCFs is consistent with the mission of NIOSH mandated in the Occupational Safety and Health Act of 1970, which states that NIOSH is obligated to "develop criteria dealing with toxic materials and harmful physical agents and substances which will describe exposure levels that are safe for various periods of employment, including but not limited to the exposure levels at which no employee will suffer impaired health or functional capacities or diminished life expectancy as a result of his work experience."

The carcinogenicity findings from the chronic nose-only inhalation assays of RCF1 in rats and hamsters warrant concern about possible health effects in workers exposed to RCFs. Although no increase in lung cancer or mesothelioma mortality has been observed in worker populations exposed to RCFs, radiographic analyses indicate an association between pleural changes (including pleural plaques) and RCF exposure, and both the U.S. and the European [Trethowan et al. 1995; Burge et al. 1995; Cowie et al. 1999, 2001] studies have found RCF-associated respiratory symptoms, pulmonary function reductions, and pleural abnormalities among RCF production workers. Several independent evaluations have quantitatively estimated the risk of lung cancer for workers exposed to RCFs at various concentrations. NIOSH evaluated these studies to determine whether an appropriate qualitative or quantitative assessment of lung cancer risk could be achieved. In addition, exposure information was collected during the 5-year monitoring period covered under the consent agreement between the RCFC and EPA. NIOSH used the exposure information to evaluate the feasibility of controlling workplace exposures at manufacturing and end-use facilities where RCFs and RCF products are handled.

# Carcinogenesis in Animal Studies

Chronic inhalation studies with RCFs demonstrate significant increases in the incidence of mesothelioma in hamsters and lung cancer in rats. Tables 8-1 through 8-4 present a synopsis of the major findings of these studies. Results from chronic animal inhalation studies with chrysotile and amosite asbestos are also presented (i.e., results for the positive control groups); these data provide a reference point for determining the relative toxicity of RCFs.

Chronic inhalation exposure to RCF1 at 30 mg/m³ (187 WHO f/cm³) induced a 13% (16/123) incidence of lung tumors in F344 rats. The incidence of lung cancer at lower doses did not show a statistically significant difference from the unexposed control group. Lung fiber burdens in the multidose chronic rat study revealed a dose-response relationship. In the rat, 16 mg/m³ (120 WHO f/cm³) appeared to be the NOAEL for lung cancer, and 3 mg/m³ (26 WHO f/cm³) appeared to be the NOAEL for fibrosis. Although it has been suggested that fibrosis in animals is a precursor to carcinogenesis, a definite link has not been shown for RCFs or other fibers. No lung cancers were found in hamsters exposed to RCF1.

Chronic inhalation exposure to RCF1 at 30 mg/m³ induced a 41% (42/102) incidence of mesotheliomas in Syrian golden hamsters. Determining a dose-response relationship for the induction of mesothelioma is not possible with currently available data, because only a single exposure dose was tested in the hamster and because of the low, sporadic occurrence of mesothelioma in the exposed rats. Yet the occurrence of mesotheliomas in the rat and the high incidence in the hamster are biologically significant because the spontaneous incidence of mesotheliomas in rats and hamsters has historically been very low.

To assess the significance of the mesothelioma incidence observed in RCF-exposed hamsters, these results were compared with those obtained from hamsters that were exposed to chrysotile asbestos and were used as positive controls for the study (see Tables 8-3 and 8-4). However, the chrysotile-exposed hamsters failed to develop any tumors and therefore could not be considered true positive controls. Based on these results, a potency ranking could not be assigned for RCFs relative to chrysotile, since the carcinogenic response rate for the latter was zero in this study. The two fibers tested also differed with regard to their dose and fiber dimensions. The McConnell et al. study of hamsters exposed to amosite asbestos provides dose-response data for comparison with the RCF1 data of McConnell et al. (see Tables 8-3 and 8-4). These separate studies examined the effects of RCF1 or amosite asbestos in hamsters using relatively similar exposure conditions, experimental conditions, and fiber dimensions. A concentration of 215 RCF1 WHO f/cm³ induced mesotheliomas in 41% of hamsters. Interstitial and pleural fibrosis were first observed at 13 weeks following amosite asbestos exposure and at 6 months following RCF1 exposure. Although average fiber dimensions for the RCF1 and amosite asbestos samples were similar, the RCF1 sample contained a higher percentage of fibers longer than 20 µm. A dose-response analysis using the mesothelioma data from the RCF and amosite asbestos hamster studies is presented in Section 5.1.2. This analysis may not predict the mesothelioma risk in humans, since RCF1 contained a greater percentage of fibers longer than 20 µm and because of differences in fiber durability.
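The tumor incidences quoted above are simple proportions; the exact (Clopper-Pearson) confidence intervals they carry can be computed from the counts alone, as in this sketch:

```python
# Exact (Clopper-Pearson) 95% CIs for the tumor incidences quoted above.
from scipy.stats import beta

def exact_ci(k, n, alpha=0.05):
    lo = beta.ppf(alpha / 2, k, n - k + 1) if k > 0 else 0.0
    hi = beta.ppf(1 - alpha / 2, k + 1, n - k) if k < n else 1.0
    return lo, hi

for label, k, n in [("rat lung tumors, 30 mg/m3", 16, 123),
                    ("hamster mesotheliomas, 30 mg/m3", 42, 102)]:
    lo, hi = exact_ci(k, n)
    print(f"{label}: {k}/{n} = {k/n:.1%} (95% CI {lo:.1%}-{hi:.1%})")
```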
Amosite asbestos is a more durable fiber with a longer in vivo half-life than RCF1 (see Table 8-5). Yet RCFs are more durable and less soluble than many other types of SVFs that have not demonstrated carcinogenicity in experimental studies. This characteristic is significant, as the durability of asbestos and SVFs (including RCFs) may be linked to the risk of lung cancer in animals chronically exposed to these fibers. Because of the long latency period for the development of mesotheliomas in humans, Berry hypothesized that fibers of sufficient durability are needed to cause this disease in humans. Extrapolation of the RCF dose-response data for lung cancer and mesothelioma in exposed rodents should therefore take into account the durability of RCFs in humans.

Some evidence indicates that rats are less sensitive than humans to the development of lung cancer and mesothelioma from exposure to asbestos and may therefore represent an inappropriate model for human risk assessment. Pott et al. hypothesized that in chronic inhalation studies, rats may have a lower sensitivity to inorganic fiber toxicity than humans: the lung cancer risk from inhaling asbestos may be two orders of magnitude lower in rats than in humans, and the mesothelioma risk from inhaling asbestos may be three orders of magnitude lower in rats. Rödelsperger and Woitowitz measured amphibole fiber concentrations in the lung tissues of humans with mesothelioma and compared the results with fiber burdens reported in rats. A significantly increased OR for mesothelioma (OR=4.8, 95% CI=1.05-21.7) was seen in humans with amphibole concentrations between 0.1 and 0.2 fibers/µg of dried lung tissue. The lowest tissue concentration reported to produce a significant carcinogenic response in rat inhalation studies of amphiboles (specifically crocidolite) was 1,250 fibers/µg of dried lung tissue. By comparing these results, Rödelsperger and Woitowitz estimated that humans are at least 6,000 times more sensitive than rats to a given tissue concentration of amphibole fibers. This work is disputed by other scientists who favor the rat as an appropriate model for evaluating the risk of lung cancer in humans. Limitations of the Rödelsperger and Woitowitz and Pott analyses (discussed earlier) include the lack of a dose-response analysis, reliance on a single epidemiologic study, inadequate comparisons of exposure duration, failure to account for the potentially multiplicative effect of smoking and asbestos exposure, lack of consideration of latency and clearance, and different fiber measurement techniques.

In summary, the multiple factors affecting the comparability of different fiber types and animal models reported in the literature make it difficult to determine whether the carcinogenic potency of RCFs in animals is similar to that in humans. Extrapolation of the animal data to humans is further complicated by a limited understanding of the mechanisms of fiber toxicity. Consequently, any extrapolation of the cancer risk found in animals exposed to RCFs must account for differences between humans and rodents with regard to fiber deposition and clearance patterns, uncertainty about the role of RCF durability in potentiating an adverse effect, and possible species differences in sensitivity to fibers.

# Health Effects Studies of Workers Exposed to RCFs

Two major research efforts have evaluated the morbidity of workers exposed to airborne fibers in the RCF manufacturing industry: one conducted in the United States and the other in Europe.
The objective of each was to evaluate the relationship between occupational exposure to RCFs and potential adverse health effects. These studies contained multiple components, including standardized respiratory and occupational history questionnaires, chest radiographs, pulmonary function tests of workers, and air sampling to estimate worker exposures.

The first studies of European plants were conducted in 1986 and included current workers at seven RCF manufacturing plants. A followup cross-sectional study conducted in 1996 evaluated the same medical endpoints in workers from six of these seven European manufacturing plants (one plant had ceased operation) [Cowie et al. 1999, 2001]. Current as well as former workers were included as study subjects in the followup study. The studies of U.S. plants began in 1987 and involved evaluations of current workers at five RCF manufacturing plants and former workers at two of the plants [Lemasters et al. 1994, 1998, 2003; Lockey et al. 1993, 1996, 1998, 2002].

In the United States, the earliest commercial production of RCFs and RCF products began in 1953. In Europe, RCF production began in 1968. The demographics of the U.S. and European populations were similar at the time they were studied, but the average age and duration of employment for the U.S. workers were slightly higher than for the workforce in the 1986 European studies because of the earlier development of this industry in the United States.

# Pleural changes in humans

The radiographic analyses of the U.S. and 1996 European populations in RCF manufacturing detected an association between pleural changes and RCF exposure. In the initial European studies, Trethowan et al. found pleural abnormalities in a small number of RCF workers who had other confounding exposures that did not include asbestos. Differences observed in pleural abnormalities between the U.S. and European worker populations may be related to the latency of exposure required to cause specific pleural changes, especially the formation of pleural plaques, which were first observed in studies of the U.S. RCF manufacturing industry, with its longer latency period. Historical air sampling data also indicate that airborne fiber concentrations were much higher in early U.S. RCF manufacturing. Therefore, in addition to their longer overall latency, RCF manufacturing workers in the United States probably had generally higher exposures in the early years of the industry than did their European counterparts. These factors might explain the appearance of RCF-associated pleural plaques in the U.S. studies before their detection in the European studies.

The U.S. and 1986 European studies yielded little evidence of an association between radiographic parenchymal opacities and RCF exposure. In the U.S. study, small opacities were rare, with only three cases noted in one report and only one case (with small rounded opacities of profusion category 3/2 attributable to prior kaolin mine work) noted in the other. Small opacities of profusion category 1/0 or greater were more frequent in the European study by Trethowan et al., but confounding exposures were believed to account for many of these cases. The results of statistical analyses indicated either no association with RCF exposure or an association slightly suggestive of an RCF exposure effect.
In a more comprehensive evaluation of the European study population, small opacities of category 1/0 or greater were positively associated with RCF exposures that occurred before 1971.

# Respiratory symptoms, irritation, and other conditions in humans

In both the U.S. and the European studies, respiratory symptoms and conditions were associated with exposure to airborne RCFs. Between the 0.2 to 0.6 f/cm³ and >0.6 f/cm³ RCF exposure groups, a statistically significant increase occurred in the ORs for wheezing. Dyspnea was associated with cumulative exposure to >15 fiber-months/cm³ (i.e., >1.25 fiber-years/cm³) relative to cumulative exposure to ≤15 fiber-months/cm³ (dyspnea grade 1: OR=2.1, 95% CI=1.3-3.3; dyspnea grade 2: OR=3.8, 95% CI=1.6-9.4) after adjusting for smoking and other potential confounders. Lockey et al. also found a statistically significant association between cumulative RCF exposure and pleurisy (OR=5.4, 95% CI=1.4-20.2) and an elevated but nonsignificant association between cumulative RCF exposure and chronic cough (OR=2.0, 95% CI=1.0-4.0). Lemasters et al. also noted associations (P<0.05) between employment in an RCF production job and increased prevalence of dyspnea and the presence of at least one respiratory symptom after adjusting the data for confounders. Recurrent chest illness in the European study population was associated with the estimated cumulative exposure to thoracic-sized fibers but was more strongly associated with estimated cumulative exposure to thoracic-sized dust [Cowie et al. 1999, 2001].

In cross-sectional analyses, both the U.S. and the 1986 European studies found that cumulative RCF exposure is associated with pulmonary function decrements among current and former smokers. Lemasters et al. also found statistically significant deficits in pulmonary function measures for nonsmoking female workers. The decreased pulmonary function in the European study population remained significantly associated with cumulative RCF exposure, even after controlling for cumulative dust exposure. The 1996 European study found pulmonary function decrements only in current smokers; the investigators concluded that the RCF concentrations occurring after 1987 were not associated with decreased pulmonary function and that decreases in pulmonary function were instead more closely related to the typically higher concentrations that occurred before this time period. The U.S. and European studies suggest that the decrements in pulmonary function observed primarily in current and former smokers are evidence of an interactive effect between smoking and RCF exposure.

# Carcinogenic Risk in Humans

Moolgavkar et al. derived risk estimates for lung cancer in humans on the basis of the results from the two chronic bioassays of RCFs in male Fischer 344 rats. Several models (linear, quadratic, exponential) were used to estimate and compare risks using reference populations composed of either a nonsmoking American Cancer Society (ACS) cohort or a cohort of steel workers not exposed to coke oven emissions (see Table 5-10 for risk estimates). The exponential model provided the best statistical fit to the data. The linear model provided the highest estimates of human lung cancer risks from exposure to RCFs when used with the referent steel worker cohort (considered to be most representative of workers exposed to RCFs because it includes blue collar workers who smoke). Lung cancer risk estimates were calculated using each model at exposure concentrations of 0.25 f/cm³, 0.5 f/cm³, 0.75 f/cm³, and 1.0 f/cm³.
The RCF-related lung cancer risk determined from the linear model for the lowest concentration (0.25 f/cm³) was 0.27/1,000 for the cohort of steel workers, compared with 0.036/1,000 using the exponential model and 0.00088/1,000 using the quadratic model with the same referent population.

The risk estimates incorporated multiple assumptions, including a human breathing rate of 13.5 L/min (considered light work) and multiple criteria for defining the length of time a worker could be exposed to RCFs over a working lifetime. Higher risk estimates could be expected if the assumptions more closely represented those used by NIOSH and OSHA: (1) a human breathing rate of 20 L/min and (2) a worker exposure duration of 8 hr/day, 5 days/wk, 50 wk/yr, from age 20 to 65, with the risk calculated beyond age 70 (e.g., to age 85). Furthermore, the calculated risk estimates could underestimate the lung cancer risk to humans because the models assumed that the tissue sensitivity to RCFs in the rat is equal to that in humans. Although the lung cancer risk estimates derived from the rat data are reason for concern, estimates of human risk for mesothelioma based on the high incidence (41%) of mesothelioma in hamsters cannot be appropriately modeled, since only a single exposure dose was administered in the study.
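To see how much these exposure assumptions matter, the sketch below (simple illustrative arithmetic, not the published risk models) compares the number of fibers inhaled over a working lifetime under the light-work breathing rate used in the models (13.5 L/min) with the NIOSH/OSHA rate of 20 L/min, holding the 8-hr/day, 5-day/wk, 50-wk/yr, 45-year schedule fixed for both. The nearly 50% larger inhaled volume under the NIOSH/OSHA breathing rate is one reason the published estimates could understate risk.

```python
# Illustrative arithmetic only: lifetime inhaled fiber count at a fixed
# airborne concentration under two breathing-rate assumptions. The schedule
# (8 hr/day, 5 days/wk, 50 wk/yr, ages 20-65) follows the NIOSH/OSHA
# convention described in the text and is applied to both rates here.

def lifetime_fibers(conc_f_per_cm3: float, breathing_l_per_min: float,
                    hr_day: int = 8, day_wk: int = 5,
                    wk_yr: int = 50, years: int = 45) -> float:
    """Fibers inhaled over a working lifetime at a constant concentration."""
    minutes = hr_day * 60 * day_wk * wk_yr * years
    air_cm3 = breathing_l_per_min * minutes * 1_000  # 1 L = 1,000 cm^3
    return conc_f_per_cm3 * air_cm3

for rate in (13.5, 20.0):  # light-work rate vs. NIOSH/OSHA rate, in L/min
    print(f"{rate} L/min -> {lifetime_fibers(0.25, rate):.2e} fibers at 0.25 f/cm^3")
```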
Primarily on the basis of chronic animal inhalation studies, NIOSH concludes that RCFs are a potential occupational carcinogen. Furthermore, the evidence of pleural plaques [Lockey et al. 1996] observed in persons with occupational exposures to airborne RCFs is clinically similar to that observed in asbestos-exposed persons after the initial years of their occupational asbestos exposures. NIOSH considers the discovery of pleural plaques in U.S. studies of RCF manufacturing workers to be a significant finding because the plaques were correlated with RCF exposure. In addition, NIOSH considers the respiratory symptoms and conditions (including dyspnea, wheeze, cough, and pleurisy) in RCF workers to be adverse health effects that have been associated with exposure to airborne fibers of RCFs. Insufficient evidence exists to document an association between fibrotic or carcinogenic effects and the inhalation of RCFs by workers in the RCF manufacturing industry, though these effects have been demonstrated in animal studies. The lack of an observed association between RCF exposure and these effects among workers could be affected by one or more factors relating to the study population: the relatively small cohort, the proportion of workers with short tenure relative to what might be expected (on the basis of an asbestos analogy) in terms of disease latency, and the limited cumulative exposures to RCFs of many workers.

# Controlling RCF Exposures in the Workplace

Table 8-6 summarizes exposure monitoring data collected by the Refractory Ceramic Fibers Coalition (RCFC) under a consent agreement with the EPA.† These data indicate that exposures to RCFs during 1993-1998 had an arithmetic mean (AM) fiber concentration of about 0.3 f/cm³ for manufacturing and nearly 0.6 f/cm³ for end users. Maxim et al. [1997, 1999a] reported results for both manufacturing and end-use sectors in which airborne fiber concentrations through 1997 were reduced to AMs of <0.3-0.6 f/cm³.

† Fiber concentrations were determined during monitoring performed over a 5-year period (1993-1998) under the Refractory Ceramic Fibers Coalition/Environmental Protection Agency (RCFC/EPA) consent agreement. Concentrations were determined by NIOSH Method 7400 "B" counting rules.

The exposure monitoring data collected as part of the RCFC/EPA consent agreement provide assurance that when appropriate engineering controls and work practices are used, airborne exposure to RCFs can be maintained for most functional job categories (FJCs) at or below the recommended exposure limit (REL) of 0.5 f/cm³. For many manufacturing processes, reductions in exposures have resulted from improved ventilation, engineering or process changes, and product stewardship programs. These data provide the basis for the NIOSH determination that a REL of 0.5 f/cm³ as a time-weighted average (TWA) can be achieved. Although many RCF manufacturing and end-user job tasks involve exposures to RCFs at concentrations below 0.5 f/cm³, exposure monitoring data also indicate that not all FJCs may be able to achieve these concentrations consistently. FJCs that currently experience airborne AM fiber concentrations >0.5 f/cm³ include finishing (manufacturing and end use) and removal (end use). Typical processing during finishing operations (e.g., sawing, drilling, cutting, sanding) often requires high-energy sources that tend to generate larger quantities of airborne dust and fibers. RCF insulation removal is performed at remote sites where conventional engineering controls and fixed ventilation systems are more difficult to implement. Some operations, such as removal of RCF insulation tiles from furnaces, can release high airborne fiber concentrations. However, removal of RCF insulation tiles is not routine and is generally accomplished in a short period of time, and workers almost universally wear personal protective equipment (PPE) and respiratory protection during these job tasks.

NIOSH acknowledges that the frequent use of PPE, including respirators, may be required for some workers handling RCFs or RCF products. The frequent use of PPE may be required for job tasks in which (1) routinely high airborne concentrations of RCFs exist (e.g., finishing, insulation removal), (2) the airborne concentration of RCFs is unknown or unpredictable, or (3) airborne concentrations are highly variable because of environmental conditions or the manner in which the job task is performed. In all work environments where RCFs or RCF products are handled, control of exposure through engineering controls should be the highest priority.

# Summary

The following summarize the relevant information used as the basis for the NIOSH assessment of occupational exposures to RCFs:

- Airborne concentrations of RCFs have been characterized as containing fibers with dimensions in the thoracic and respirable size ranges.
- RCFs are among the most durable types of SVFs. In tests of solubility, RCFs are nearly as durable as chrysotile asbestos but significantly less durable than amphibole asbestos fibers such as amosite.
- Chronic, nose-only inhalation studies with RCFs in animals show a statistically significant increased incidence of lung tumors in rats and pleural mesotheliomas in hamsters. These data support the NIOSH determination that RCFs are a potential occupational carcinogen.
- Epidemiologic studies of workers in the RCF manufacturing industry show an increased incidence of pleural plaques, respiratory symptoms (dyspnea and cough), skin and eye irritation, and decreased pulmonary function related to increasing exposures to airborne fibers. Some of these conditions are documented for exposure concentrations in a range as low as 0.2 to 0.6 f/cm³.
- Studies of workers exposed to airborne RCFs show no evidence of excess risk for lung cancer or mesothelioma. However, the inability to detect such an association could be because of (1) the low statistical power for detecting an effect, (2) the short latency period for most workers occupationally exposed, and (3) the historically low and decreasing fiber exposures that have occurred in this industry.
- Risk assessment analyses using data from chronic inhalation studies in rats indicate that the excess risk of developing lung cancer when exposed to RCFs at a TWA of 0.5 f/cm³ for a working lifetime is 0.073 to 1.2/1,000. However, on the basis of the assumptions used in the risk analyses, NIOSH concludes that this estimate is more likely to underestimate than to overestimate the risk to RCF-exposed workers. Reducing the RCF TWA concentration to 0.2 f/cm³ would reduce the risk for lung cancer to 0.03 to 0.47/1,000 (see the sketch following this list). OSHA attempts to set permissible exposure limits (PELs) for carcinogens that reflect an estimated risk of 1/1,000 but is limited by considerations of technologic and economic feasibility.
- RCF exposure data gathered under a consent agreement between RCFC and EPA, which included a 5-year comprehensive air monitoring program (1993-1998), indicate that airborne exposure concentrations of RCFs have been decreasing. Monitoring results show that 75% to >95% of TWA exposure concentration measurements in all FJCs (with one exception) were below 1.0 f/cm³. In all but two of the eight FJCs, >70% of TWA measurements were below the RCFC recommended exposure guideline of 0.5 f/cm³. On the basis of these data, NIOSH has concluded that the REL of 0.5 f/cm³ can be achieved in most workplaces where RCFs or RCF products are manufactured or used.
- The combined effect of mixed exposures to fibers and nonfibrous particulates may contribute to increased irritation of the respiratory tract, skin, and eyes. Engineering controls and appropriate work practices used to keep airborne RCF concentrations below the REL should help to minimize airborne exposures to nonfibrous particulates as well. Because the ratio of fibers to nonfibrous particulate in airborne exposures may vary among job tasks, exposure monitoring should include efforts to characterize particulate composition and to control and minimize airborne fibers and nonfibrous particulate accordingly.

From the assessment described above and throughout this document, NIOSH concludes that on a continuum of fiber toxicity, RCFs relate more closely to asbestos than to fibrous glass and other SVFs and should be handled accordingly. NIOSH considers all asbestos types to be carcinogens and has established a REL of 0.1 f/cm³ for airborne asbestos fibers. This value was determined on the basis of extensive human and animal health effects data and the recognized limits of analytical methods.
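The risk figures in the summary above scale linearly with the TWA concentration, and that proportionality can be verified directly, as in the following sketch (simple arithmetic, assuming strict linear low-dose scaling):

```python
# Linear rescaling of the excess lung cancer risk range quoted above
# (0.073-1.2 per 1,000 at a 0.5-f/cm^3 TWA) to a 0.2-f/cm^3 TWA.

def scale_risk(risk_per_1000: float, ref_conc: float, new_conc: float) -> float:
    """Rescale an excess-risk estimate assuming risk is linear in concentration."""
    return risk_per_1000 * new_conc / ref_conc

print(scale_risk(0.073, 0.5, 0.2))  # ~0.03 per 1,000, as stated in the text
print(scale_risk(1.2, 0.5, 0.2))    # ~0.48 per 1,000 (text reports 0.47,
                                    # presumably from unrounded inputs)
```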
Recognizing that RCFs are carcinogens in animal studies, and given the limitations in deriving an exposure value that reflects no excess risk of lung cancer or mesothelioma for humans, NIOSH recommends that every effort be made to keep exposures below the REL of 0.5 f/cm³ as a TWA for up to 10 hr/day in a 40-hr workweek. These efforts will further reduce the risk for malignant respiratory disease and protect workers from conditions and symptoms deriving from irritation of the respiratory tract, skin, and eyes. From the analysis of historical exposure data (see Chapter 4) and the exposure data collected as part of the RCFC/EPA consent agreement monitoring program (Table 8-6), NIOSH has determined that compliance with the REL for RCFs is achievable in most manufacturing and end-use job categories. Although routine attainment of TWA exposures below the REL may not currently occur for all job tasks, it represents a reasonable objective that can be achieved through modification of the job task or the introduction or improvement of ventilation controls.

# Guidelines for Protecting Worker Health

The following guidelines for protecting worker health and minimizing worker exposures to RCFs are considered minimum precautions that should be adopted as part of a site-specific safety and health plan to be developed and overseen by appropriate and qualified personnel.

# Informing Workers about Hazards

# Safety and Health Training Program

Employers should establish a safety and health training program for all workers who manufacture, use, handle, install, or remove RCF products or perform other activities that bring them into contact with RCFs. As part of this training program, employers should do the following:

- Inform all potentially exposed workers (including contract workers) about RCF-associated health risks such as skin, eye, and respiratory irritation and lung cancer.
- Provide material safety data sheets (MSDSs) on site: make MSDSs readily available to workers, and instruct workers how to interpret information from MSDSs.
- Teach workers to recognize and report adverse respiratory effects associated with RCFs.
- Train workers to detect hazardous situations.
- State the need to wear appropriate respiratory protection and protective clothing in areas where airborne RCFs may exceed the REL.
- Establish procedures for reporting hazards and giving feedback about actions taken to correct them.
- Instruct workers about using safe work practices and appropriate PPE.
- Inform workers about practices or operations that may generate high concentrations of airborne fibers (such as cutting and sanding of RCF boards and other RCF products).
- Make workers who remove refractory insulation materials aware of the following:
  - Their potential for exposure to respirable crystalline silica
  - Health effects related to this exposure
  - Methods for reducing their exposure
  - Types of PPE that may be required (including respirators)

# Labeling and Posting

Although workers should have received training about RCF exposure hazards and methods for protecting themselves, labels and signs serve as important reminders and provide warnings to workers who may not ordinarily work in the area. Employers should do the following:

- Post warning labels and signs about RCF-associated health risks at entrances and inside work areas where airborne concentrations of RCFs may exceed the REL.
- If respiratory protection is required, post the following statement: RESPIRATORS REQUIRED IN THIS AREA.
- Print all labels and warning signs in both English and the predominant language of workers who do not read English. If workers are unable to read the labels and signs, inform them verbally about the hazards and instructions printed on the labels and signs.

# Engineering Controls

Engineering controls should be the principal method for minimizing exposure to RCFs in the workplace.
# Ventilation

Achieving reduced concentrations of airborne RCFs depends on adequate engineering controls such as local exhaust ventilation systems that are properly constructed and maintained. Local exhaust ventilation systems that employ hoods and ductwork to remove fibers from the workplace atmosphere have been used by RCF manufacturers. One example is a slotted-hood dust collection system placed over a mixing tank so that airborne fibers are captured and collected in a baghouse with HEPA filters. Other types of local exhaust ventilation or dust collection systems may be used at or near dust-generating operations to capture airborne fibers. Band saws used in RCF manufacturing and finishing operations have been fitted with such engineering controls to capture fibers and dust during cutting operations and thereby reduce exposures for the band saw operator. Disc sanders fitted with similar local exhaust ventilation systems are effective in reducing airborne RCF concentrations during sanding of vacuum-formed RCF products. For quality control laboratories or laboratories where production samples are prepared for analyses, exhaust ventilation systems should be designed to capture and contain dust. For guidance in designing local exhaust ventilation systems, see Industrial Ventilation: A Manual of Recommended Practice, 25th edition, Recommended Industrial Ventilation Guidelines, and the OSHA ventilation standard.

Additional engineering controls have been evaluated by the Bureau of Mines for minimizing airborne dust in underground mining operations and at industrial sand plants. These controls may also have applications for RCF finishing, installation, and removal operations. The use of air showers (also known as a canopy air curtain or an overhead air supply island) involves positioning an air supply over the head of a worker to provide a flow of clean, filtered air to the worker's breathing zone [Volkwein et al. 1982, 1988]. Proper design and evaluation are critical for ensuring that filtration is adequate to remove airborne fibers from the air supply. Selection of the air supply flow rate is also important to ensure that the velocity delivered to the worker's breathing zone is sufficient to overcome cross drafts and maintain a clean airflow.

# Tool selection and modification

The RCFC has reported that using hand tools instead of powered tools can significantly reduce airborne concentrations of dust. However, hand tools often require additional physical effort and time, and they may increase the risk of musculoskeletal disorders. Employers should therefore use ergonomically correct tools and proper workstation design to avoid ergonomic hazards. The additional physical effort required to use hand tools may also increase the rate and depth of breathing and may consequently affect the inhalation and deposition of fibers. For operations such as cutting, sawing, grinding, drilling, and sanding, the high level of mechanical energy applied to RCF products with power tools increases the potential for elevated concentrations of airborne fibers.
Examples of how airborne fiber concentrations are affected by the equipment used to process RCF products include the following (a short sketch at the end of this section translates such reductions into resulting concentrations):

- A test of hand sawing versus the use of a powered jigsaw showed an 81% reduction in concentrations of airborne dust generated.
- A comparison of hand sanding versus power sanding showed a 90% reduction in concentrations of airborne dust generated.
- When a light water mist is applied to the surface of a vacuum-formed board before sanding, airborne dust concentrations are reduced by 89% for hand sanding and 88% for powered sanding.
- The use of a cork bore (core drill) rather than an electric drill with a twist bit for cutting holes in RCF product forms reduces airborne dust concentrations by about 85%.

# Engineering controls for RCF finishing operations

Researchers at NIOSH have been working with industrial hygienists at RCFC member facilities to study the effectiveness of engineering controls designed for and applied to RCF finishing operations. Because hand tools are not always a practical solution to the requirements of manufacturing and end-use facilities, engineering controls are being designed and evaluated for use with powered tools. A joint project between NIOSH and RCFC, initiated in 1998, investigated engineering controls for use with a pedestal belt/disc sander [Dunn et al. 2000, 2004]. These units are frequently used by manufacturers as well as customer facilities to produce vacuum-formed boards sized to the required dimensions. A continuous misting nozzle and a simple local exhaust ventilation system were integrated for use on the pedestal sanding unit. The mister consisted of a standard atomization nozzle set for a low water flow rate to minimize part degradation. The local exhaust ventilation system used two hoods or pickup points with a total airflow of 700 ft³/min. During production of vacuum-formed boards, these two controls reduced fiber concentrations in the workers' breathing zones [Dunn et al. 2000, 2004]. These studies highlight the potential for significant reductions in worker exposure using well-designed and well-maintained engineering controls, but their effectiveness needs to be validated in the field.

# Wet methods for dust suppression

Fiber counts are lower in more humid atmospheres. Examples of using water to suppress dust concentrations include the following:

- At one RCF textile facility, misters have been added above broad looms and tape looms to decrease fiber concentrations.
- Water knives (high-pressure water jets) are used to cut and trim edges of RCF blanket while suppressing dust and limiting the generation of airborne fibers.
- During the installation of RCF modules in a furnace, a procedure called tamping is typically performed. After modules are put in place on the furnace wall, they are compressed by placing a 1-ft length of 2 × 4 lumber against the modules and tapping it lightly with a hammer. The process helps ensure that the RCF modules are installed tightly in place. When a light water spray is applied to the surface of the modules before tamping, airborne fiber concentrations are reduced by about 75%. The water is applied with a garden-type sprayer set on mist, using about 1 gal of water per 100 ft² of surface area.

However, caution is advised regarding the dampening of refractory linings during installation: introducing water can damage refractory-lined equipment during heating through explosive spalling from the generation of steam.
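To put the percent reductions reported in this section on a common footing, the sketch below (hypothetical baseline value; illustrative only) converts each reported reduction into the concentration remaining from a fixed starting level:

```python
# Illustrative only: concentration remaining after the percent reductions
# reported above, applied to a single hypothetical baseline level.

def after_reduction(baseline: float, pct: float) -> float:
    """Concentration remaining after a stated percent reduction."""
    return baseline * (1 - pct / 100)

baseline = 2.0  # hypothetical airborne dust level, arbitrary units
for control, pct in [("hand saw instead of powered jigsaw", 81),
                     ("hand sanding instead of power sanding", 90),
                     ("water mist before hand sanding", 89),
                     ("core drill instead of twist bit", 85),
                     ("water spray before tamping", 75)]:
    print(f"{control}: {after_reduction(baseline, pct):.2f}")
```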
After-service RCF insulation removed from furnaces may be wetted to reduce the release of fibers.

# Isolation

Some manufacturing processes may be enclosed to keep airborne fibers contained and separated from workers. When possible, isolate workers from direct contact with RCFs by using automated equipment operated from a closed control booth or room. Maintain the control room at greater air pressure than that surrounding the process equipment so that air flows out rather than in. Make sure workers take special precautions (such as using PPE) when they must enter the general work area to perform process checks, adjustments, maintenance, assembly-line tasks, and related operations.

# Product Reformulation

One factor that contributes to the toxicity of an inhaled fiber is its durability and resistance to degradation in the respiratory tract. The chemical characteristics of RCFs make them one of the most durable types of SVFs. As a result, an inhaled RCF of specific dimensions will persist longer in the lungs. Modifying the physical characteristics of RCFs or reformulating their chemistry to produce less durable fibers are recommended options for reducing the hazard for exposed workers. Such an approach has been taken by one RCF manufacturer in developing two more soluble types of SVF. However, caution is advised in developing and distributing such modified fibers. Possible adverse health effects of newly developed fibers should be evaluated before they are introduced into commerce, and appropriate testing should be performed to provide information about fiber toxicology and the potential adverse health effects associated with exposure to these fibers.

# Work Practices and Hygiene

Use good work practices to help reduce exposure to airborne fibers. These practices include the following:

- Limit the use of power tools unless they are equipped with local exhaust or dust collection systems. When possible, use hand tools, which generate less dust and fewer airborne particles.
- Use HEPA-filtered vacuums, wet sweeping, or a properly enclosed wet vacuum system for cleaning up dust containing RCFs.
- During removal of RCF products, dampen insulation with a light water spray to keep fibers and dust from becoming airborne.
- Clean work areas regularly with HEPA-filtered vacuums or wet sweeping methods to minimize the accumulation of debris.
- Limit access to areas where workers may be exposed to airborne RCFs: permit only workers who are essential to the process or operation.

Use good hygiene and sanitation to protect workers as well as people outside the workplace who might be contaminated with take-home dust and fibers:

- Do not allow workers to smoke, eat, or drink in work areas where they contact RCFs.
- If RCFs get on the skin, wash with warm water and mild soap. Apply skin moisturizing cream as needed to avoid dryness or irritation from repeated washing.
- Vacuum contaminated clothes with a HEPA-filtered vacuum before leaving the work area. Do not use compressed air to clean the work area or clothing, and do not shake clothes to remove dust.
- Do not wear contaminated clothes outside the work area. Instead, take the following measures to prevent taking contaminants home: change into street clothes before going home; leave contaminated clothes at the workplace to be laundered by the employer; and store street clothes in separate areas of the workplace to keep from contaminating them.
- Provide workers with showers and have them shower before leaving work.
- Prohibit removal of contaminated clothes or other items from the workplace.

# Personal Protective Equipment

Wear long sleeves, gloves, and eye protection when performing dusty activities involving RCFs.

# Respiratory Protection

NIOSH recommends using a respirator for any task involving RCF exposures that are unknown or that have been documented to be higher than the NIOSH REL of 0.5 f/cm³ (TWA). Respirators should not be used as the primary means of controlling worker exposures. Instead, NIOSH recommends using other exposure-reduction methods, such as product substitution, engineering controls, and changes in work practices. However, respirators may be necessary when available engineering controls and work practices do not adequately control worker exposures below the REL for RCFs. NIOSH recognizes this control to be a particular challenge in the finishing stages of RCF product manufacturing as well as during the installation and removal of RCF insulation materials.

If respiratory protection is needed, the employer should establish a comprehensive respiratory protection program as described in the OSHA respiratory protection standard. Elements of the program should be established and described in a written plan that is specific to the workplace. In selecting respirators for protection against RCFs:

- At a minimum, use a half-mask, air-purifying respirator equipped with a 100 series particulate filter (this respirator has an assigned protection factor [APF] of 10).
- For a higher level of protection and for prevention of facial or eye irritation, use a full-facepiece, air-purifying respirator (equipped with a 100 series filter) or any powered, air-purifying respirator equipped with a tight-fitting full facepiece.
- For greater respiratory protection when the work involves potentially high airborne fiber concentrations (such as removal of after-service furnace insulation), use a supplied-air respirator equipped with a full facepiece, since airborne exposure to RCFs can be high and unpredictable.

A comprehensive assessment of workplace exposures should always be performed to determine the presence of other possible contaminants (such as silica) and to ensure that the proper respiratory protection is used.

# Exposure Monitoring

To evaluate worker exposure to airborne RCFs, perform the exposure sampling survey as follows:

- Collect representative personal samples for the entire work shift using NIOSH Method 7400 (B rules).
- Perform periodic sampling at least annually and whenever any major process change takes place or another reason exists to suspect that exposure concentrations may have changed.
- Collect all routine personal samples in the breathing zones of the workers.
- For workers exposed to concentrations above the REL, perform more frequent exposure monitoring until at least two consecutive samples indicate that the worker's exposures no longer exceed the REL.
- Notify all workers about monitoring results and any actions taken to reduce their exposures.
- Make sure that any sampling strategy considers variations in work and production schedules as well as the inherent exposure variability in most workplaces.

Similar exposure monitoring strategies have been recommended by the American Industrial Hygiene Association, which advises that if measured exposures are less than 10% of the designated exposure limit (for example, the REL or PEL), there is a high degree of certainty that the exposure limit will not be exceeded.
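As a concrete illustration of the monitoring logic above, the following sketch (hypothetical sample values) computes a full-shift TWA from consecutive personal samples and applies both the REL comparison and the AIHA-style 10%-of-limit screening criterion:

```python
# Illustrative sketch: full-shift TWA from consecutive personal samples,
# compared against the NIOSH REL and a 10%-of-limit screening criterion.
# The sample concentrations and durations below are hypothetical.

REL = 0.5  # f/cm^3, TWA

def full_shift_twa(samples: list[tuple[float, float]]) -> float:
    """samples: (concentration in f/cm^3, duration in minutes) pairs."""
    total_minutes = sum(t for _, t in samples)
    return sum(c * t for c, t in samples) / total_minutes

shift = [(0.35, 240), (0.60, 120), (0.20, 240)]  # hypothetical shift segments
twa = full_shift_twa(shift)
print(f"TWA = {twa:.2f} f/cm^3")
if twa > REL:
    print("Above the REL: increase monitoring frequency and review controls.")
elif twa < 0.1 * REL:
    print("Below 10% of the REL: exposure limit unlikely to be exceeded.")
else:
    print("Below the REL: continue periodic monitoring.")
```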
# Action Level

# Sampling Strategies

When sampling to determine whether worker exposures are below the REL, a focused sampling strategy may be more practical than random sampling. A focused sampling strategy targets workers perceived to have the highest exposure concentrations. A focused strategy is most efficient for identifying exposures above the REL if maximum-risk workers and time periods are accurately identified. Focused sampling may also help identify short-duration tasks involving high airborne fiber concentrations that could result in elevated exposures over a full work shift.

Sampling strategies such as those used by Corn and Esmen, Rice et al., and Maxim et al. have been derived and used in RCF manufacturing facilities to monitor airborne fiber concentrations by selecting representative workers for sampling. The representative workers are grouped according to dust zones, uniform job titles, or functional job categories. These approaches are intended to reduce the number of required samples while increasing the confidence of identifying workers at similar risk. Area sampling may also be useful in exposure monitoring for determining sources of airborne RCF exposures and assessing the effectiveness of engineering controls.
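A focused strategy of the kind described above can be expressed compactly: group prior measurements by functional job category and target the worker with the highest result in each group for full-shift sampling. The sketch below uses hypothetical workers, categories, and values.

```python
# Illustrative sketch of focused sampling: within each functional job
# category, flag the worker with the highest prior measurement as the
# maximum-risk worker to sample first. All data below are hypothetical.
from collections import defaultdict

prior = {  # worker -> (functional job category, last result in f/cm^3)
    "worker A": ("finishing", 0.62),
    "worker B": ("finishing", 0.40),
    "worker C": ("mixing/forming", 0.18),
    "worker D": ("mixing/forming", 0.25),
    "worker E": ("removal", 0.55),
}

by_fjc = defaultdict(list)
for worker, (fjc, conc) in prior.items():
    by_fjc[fjc].append((worker, conc))

for fjc, group in sorted(by_fjc.items()):
    worker, conc = max(group, key=lambda wc: wc[1])
    print(f"{fjc}: sample {worker} first (last result {conc} f/cm^3)")
```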
# Medical Monitoring

NIOSH recommends periodic medical evaluation, or medical monitoring, of RCF-exposed workers to identify potential health effects and symptoms that may be related to contact with airborne fibers. The following sections describe the objectives of medical monitoring and the elements of a medical monitoring program for workers exposed to RCFs.

# Objectives of Medical Monitoring

The primary goals of a workplace medical monitoring program are (1) early identification of adverse health effects that may be related to exposures at work and (2) detection of possible health trends within groups of exposed workers. These goals are based on the premise that early detection, subsequent treatment, and workplace interventions will ensure the continued health of the affected workforce.

Medical monitoring and the resulting interventions represent secondary prevention and should not replace primary prevention efforts to minimize worker exposures to RCFs. In the case of RCFs, medical monitoring is especially important because achieving compliance with the REL of 0.5 f/cm³ does not ensure that all workers will be free from the risk of respiratory irritation or chronic respiratory disease caused by occupational exposure. Early identification of respiratory system changes and symptoms associated with RCF exposures (such as decreased pulmonary function, irritation, dyspnea, chronic cough, wheezing, and pleural plaques) may signal the need for more intensive medical monitoring and reassessment of existing controls to minimize the risk of long-term adverse health effects. An ongoing medical monitoring program also serves to inform workers of potential health risks and promotes an understanding of the need for and support of exposure control activities.

A medical monitoring program serves as an effective secondary prevention method on two levels: screening and surveillance. Medical screening in the workplace focuses on the early detection of health outcomes for individual workers and may involve an occupational history, medical examination, and application of specific medical tests to detect the presence of toxicants or early pathologic changes before the worker would normally seek clinical care for symptomatic disease. By contrast, medical surveillance (described in Section 9.5) involves the ongoing evaluation of the health status of a group of workers through the collection and aggregate analysis of health data for the purpose of preventing disease and evaluating the effectiveness of intervention programs.

# Criteria for Medical Screening

To determine whether tests or procedures for medical screening are appropriate and relevant to a given hazard (in this case, exposure to airborne RCFs), several factors must be weighed.

# Recommended Program Elements

Recommended elements of a medical monitoring program for workers exposed to RCFs include provisions for an initial medical examination and periodic medical examinations at regularly scheduled intervals. Depending on the findings from these examinations, more frequent and detailed medical examinations may be necessary. Worker education should also be included as a component of the medical monitoring program. Specific elements of the examinations and scheduling are described below and illustrated in the flow chart in Figure 9-1.

# Initial medical examination

An initial (baseline) examination should be performed as near as possible to the date of beginning employment (within 3 months). The initial medical examination should include a medical and occupational history along with baseline tests such as a chest X-ray and spirometry.

# Periodic medical examinations

Periodic examinations should be scheduled on the basis of time since first RCF exposure (see Figure 9-1): if less than 10 years have passed since first RCF exposure, examinations should be conducted at least every 5 years; for workers with 10 or more years since first exposure to RCFs, periodic examinations should be conducted at least every 2 years.

A chest X-ray and spirometric testing are important upon initial examination and may also be appropriate medical screening tests during periodic examinations for detecting respiratory system changes, especially in workers with more than 10 years since first exposure to RCFs. The value of periodic chest X-rays in a medical monitoring program should be evaluated by a qualified health care provider in consultation with the worker to assess whether the benefits of testing warrant the additional exposure to radiation. As with the frequency of periodic examinations, the utility of the chest X-ray as a medical test becomes greater for workers with more than 10 years since first exposure to RCFs (based on the latency period between first exposure and the appearance of noticeable respiratory system changes). Because persons with advanced fiber-related pleural changes experience difficulty in breathing as the parietal and visceral surfaces become adherent and lose flexibility, it may prove beneficial to detect fibrotic changes in the early stages so steps may be taken to prevent further lung damage. Similar recommendations have been made for asbestos-exposed workers diagnosed with pleural fibrosis or pleural plaques to prevent more serious types of respiratory disease.

# More frequent medical examinations

Any worker should undergo more frequent and detailed medical evaluation if he or she has new or worsening respiratory symptoms or findings (for example, chronic cough, difficulty breathing, wheezing, or reduced pulmonary function).
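The examination schedule described above reduces to a single decision on time since first exposure, as the following sketch (an illustrative encoding of the stated intervals) shows:

```python
# Illustrative encoding of the periodic-examination intervals stated above:
# at least every 5 years when <10 years since first RCF exposure, and at
# least every 2 years once 10 or more years have elapsed.

def exam_interval_years(years_since_first_exposure: float) -> int:
    """Maximum recommended interval (years) between periodic examinations."""
    return 5 if years_since_first_exposure < 10 else 2

for y in (3, 9, 10, 25):
    print(f"{y} years since first exposure -> exam at least every "
          f"{exam_interval_years(y)} years")
```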
# Employer Actions

The employer should ensure that the qualified health care provider's recommended restrictions of a worker's exposure to RCFs or to other workplace hazards are followed and that the REL for RCFs is not exceeded without requiring the use of PPE. Efforts to encourage worker participation in the medical monitoring program and to report symptoms promptly to the program director are essential for the program's success. Medical evaluations performed as part of the medical monitoring program should be provided by the employer at no cost to the participating workers. If the recommended restrictions determined by the medical program director include job reassignment, such reassignment should be implemented with the assurance of economic protection for the worker. When medical removal or job reassignment is indicated, the affected worker should not suffer loss of wages, benefits, or seniority. The employer should ensure that the medical monitoring program director communicates regularly with the employer's safety and health personnel (such as industrial hygienists), employee representatives, and safety and health committees to identify work areas that may require evaluation and implementation of control measures to minimize the risk from exposure to hazards.

# Surveillance of Health Outcomes

Standardized medical screening data should be periodically aggregated and evaluated by an epidemiologist or other knowledgeable person to identify patterns of worker health that may be linked to work activities and practices requiring additional primary prevention efforts. Routine aggregate assessments of medical screening data should be used in combination with evaluations of exposure monitoring data to identify changes needed in work areas or exposure conditions. One example of surveillance using analyses of medical screening data is the ongoing epidemiologic study of RCF workers described in the RCFC product stewardship plan referred to as PSP 2000. Elements of this plan may be adapted and modified by other employers to develop medical surveillance programs for workers who are potentially exposed to RCFs.

# Smoking Cessation

Because the decrements in pulmonary function among RCF-exposed workers have been observed primarily in current and former smokers, suggesting an interactive effect between smoking and RCF exposure, employers should encourage workers who smoke to participate in smoking cessation programs.

# Research Needs

In addition, NIOSH recommends that the following steps be taken with regard to RCF research:

1. Conduct basic scientific investigations, including in vitro and in vivo animal studies, to delineate the mechanism of action for RCF toxicity.
2. Conduct comparable studies for other SVFs and natural fibers so that the mechanistic data can be compared. For instance, Coffin et al. examined the ability of different synthetic and natural fibers to induce mesotheliomas and suggested that, in addition to fiber length and width, currently undefined intrinsic surface characteristics of the fibers are directly related to their mesothelioma induction potency.

The research strategy should also consider the usefulness of integrating fiber data from various scientific disciplines (toxicology, epidemiology, industrial hygiene, occupational medicine) to elucidate the characteristics of fibers.
The cellular and molecular effects of RCF exposures have been studied with two different objectives. One purpose of these in vitro studies is to provide a quicker, less expensive, and more controlled alternative to animal toxicity testing. These experiments, which strive to act as screening tests or alternatives to animal studies, are best interpreted by comparing their results with those of in vivo experiments. The second objective of in vitro studies is to provide data that may help to explain the pathogenesis and mechanisms of action of RCFs at the cellular and molecular levels. These cytotoxicity and genotoxicity studies are best interpreted by comparing the effects of RCFs with those of other SVFs and asbestos fibers. In vitro studies serve as an important complement to animal studies and provide important tools for studying the molecular mechanisms of fibers. However, it is not yet possible to use these data in the derivation of an REL, and strong conclusions relevant to human health cannot be drawn from in vitro studies alone.

One point to consider when reviewing these data is the relevance of the cell types studied. Many studies to date have examined the effects of RCFs on rodent cell lines. The cytotoxic effects of RCFs may vary with cell size, volume, and lineage. Effects observed in cells from organs other than the lung, or in species other than the human, may not be similar to those elicited in human pulmonary cells. The human alveolar macrophage has a volume several times greater than that of the rat alveolar macrophage. Macrophage size and volume may affect (1) the size range of fibers that can be phagocytized, dissolved, and cleared by the lungs and (2) the resulting pathogenicity of the fiber. Even the use of a human lung cell line does not guarantee that in vitro results will be directly applicable to the intact human response: the in vivo integration of stimuli from the nervous, hormonal, and cardiovascular systems cannot be reproduced in vitro.

Another point to consider when reviewing these data is the number and definitions of variables used in different studies. Variables include differences in fiber type, fiber length, fiber dose, cell type, and length of exposure tested, among others. Disparate results between studies make strong conclusions from in vitro studies difficult. At the same time, these studies may provide important data regarding the mechanism of action of RCFs that would not be obtainable in other testing venues.

RCFs may exert their effects on pulmonary target cells via direct or indirect mechanisms. Direct mechanisms are the resultant effects when fibers come in direct physical contact with cells. Direct cytotoxic effects of RCFs include effects on cell viability, responses, and proliferation. Indirect cellular effects of RCFs involve the interaction of fibers with inflammatory cells that may be activated to produce inflammatory mediators. These mediators may affect target cells directly or may attract other cells that act on target cells. An inflammatory cell type often used in RCF in vitro studies is the pulmonary macrophage. Pulmonary macrophages are the first line of defense against inhaled material that deposits in the alveoli; among other functions, they attempt to phagocytize particles deposited in the lung. Effects of RCF exposure on macrophages and other inflammatory cells are assessed by measuring inflammatory mediator release in vitro.

Three important groups of inflammatory mediators are cytokines, reactive oxygen species (ROS), and lipid mediators (prostaglandins and leukotrienes). Cytokines that have been implicated in the inflammatory process include tumor necrosis factor (TNF) and the interleukins (ILs). TNF and many ILs stimulate the deposition of fibroblast collagen, an initial step in fibrosis, and prostaglandins (PGs) inhibit these effects. ROS include hydroxyl radicals, hydrogen peroxide, and superoxide anion radicals. Oxidative stress occurs when the ROS level in a cell exceeds its antioxidant level and may result in damage to deoxyribonucleic acid (DNA), lipids, and proteins. Either direct or indirect effects of RCFs may result in genotoxic effects on pulmonary target cells.
Changes in the genetic material may be important in tumor development. Genotoxic effects may be assessed through the analysis of chromosome changes or alterations in gene expression following exposure to RCFs. The following summary of RCF in vitro studies examines their direct effects on cell proliferation and viability and their indirect effects via release of TNF, ROS, and other inflammatory mediators. The genotoxic effects of RCFs are also examined and summarized.

Luoto et al. evaluated the effects of RCFs, quartz, and several man-made vitreous fibers (MMVFs) on lactate dehydrogenase (LDH) levels in rat alveolar macrophages and on hemolysis in sheep erythrocytes. RCF1, RCF2, RCF3, and RCF4 at 1.0 mg/ml induced a lower release of LDH (less than 20% of control) from rat alveolar macrophages compared with quartz (approximately 40% of control); RCF1 stimulated the lowest amount of LDH release. In hamster tracheal epithelial cells, necrosis was observed with MMVF10 fibers at 24 hr; no necrosis was present at 72 hr.

RCF1, RCF2, RCF3, and RCF4, as well as asbestos and other fibers, had a dose-dependent cytotoxic effect, as measured by cell detachment, in the human alveolar epithelial cell line A549. Cell detachment is associated with epithelial damage, an important step in the inflammatory process, and these cells are a primary target of inhaled fibers. When equivalent doses (10, 25, 50, and 100 µg/ml) of the various fibers were tested, all RCFs had a less significant effect than both crocidolite and amosite asbestos. When the dose data were adjusted for the number of fibers, RCF1, RCF2, and RCF3 were more cytotoxic than RCF4 and crocidolite.

In an assay determining the ability of fibers to induce an increase in the diameter of human A549 cells, an elutriated ceramic fiber (unspecified type) had a midrange of activity compared with 12 other fibers. It was more active than most varieties of amosite tested (but not UICC amosite) but less active than the chrysotile fibers. An association was found between increasing fiber length and assay activity. When these same fiber samples were tested for colony inhibition in V79/4 Chinese hamster lung fibroblasts, the ceramic fiber had even less effect than the TiO2 control. Analysis of all fibers upheld the association between increasing length and increased activity. In both assays, fiber diameter was not related to activity in most cases.

Chrysotile asbestos at 10 µg/cm² and crocidolite asbestos at 5 µg/cm² altered porcine aortic endothelial cell morphology and increased neutrophil adherence. RCF1 fibers at 10 µg/cm² did not change cell morphology or increase neutrophil binding.

These studies suggest that RCFs may have some direct cytotoxic effects similar to those of asbestos: they are capable of inducing enzyme release and cell hemolysis, and they may decrease cell viability and inhibit proliferation. In most studies, however, the effects of RCFs were much less pronounced than the effects of asbestos at similar gravimetric concentrations. Fiber length was demonstrated to be an important factor in determining cell responses in many studies.

# C.2 Indirect Effects of RCFs: Effects on Inflammatory Cells

In addition to direct effects on target cells, RCFs may have indirect mechanisms of action by acting on inflammatory cells. Inflammatory cells, such as pulmonary macrophages, may respond to fiber exposure by releasing inflammatory mediators that initiate the process of pulmonary inflammation and fibrosis. Cytokines and ROS are among the inflammatory mediators released.
Many studies, summarized below and in Table C-2, have investigated this link between fiber exposure and mediator release to try to elucidate the mechanism of action of RCFs.

Cytokines are a class of proteins involved in regulating processes such as cell secretion, proliferation, and differentiation. One of the cytokines most commonly analyzed in RCF cytotoxicity studies is TNF, which has been implicated in silica- and asbestos-induced pulmonary fibrosis. TNF and many ILs are associated with collagen deposition (an initial stage of fibrosis), and PGs inhibit these effects. Experiments on the effects of RCF exposure on TNF production in various cell types have had differing results.

TNF secretion has been associated with exposure to asbestos both in vitro and in vivo. In vitro incubation of human alveolar macrophages from normal volunteers with 25 µg/ml chrysotile asbestos resulted in increased levels of TNF secretion. Alveolar macrophages from six human subjects occupationally exposed to asbestos for more than 10 years secreted increased amounts of the cytokines TNF, IL-6, PGE2, and IL-1β in vitro. Five human subjects occupationally exposed for less than 10 years did not show increases in these cytokines. The two exposure groups were matched for age, smoking history, and diagnosis. The increased TNF secretion under both in vitro and chronic in vivo asbestos-exposure conditions suggests its importance in the response of the lung to fiber exposure, although the small exposure group sizes warrant caution in drawing strong conclusions.

When equal numbers (8.2 × 10⁶) of various fiber types, including RCF1, RCF2, RCF3, and RCF4, were incubated separately with rat alveolar macrophages, silicon carbide whiskers and amosite and crocidolite asbestos stimulated the highest TNF release. RCF1, RCF2, RCF3, and RCF4 showed no significant increase in TNF release compared with the control. In contrast, ceramic fibers (unspecified type) at 50 µg/ml (1.72 × 10⁵ fibers) significantly increased TNF release compared with controls in rat alveolar macrophages. Potassium octatitanate whiskers and chrysotile and crocidolite asbestos induced the greatest TNF release.

Alveolar macrophages exposed to either 300 or 1,000 µg/ml RCFs or 1,000 µg/ml asbestos showed a significant increase in TNF production. At 300 µg/ml RCFs, a significant elevation occurred in leukotriene B4; at 1,000 µg/ml RCFs, significant elevations occurred in both leukotriene B4 and prostaglandin E2. Levels induced at lower doses did not differ from controls. At equivalent doses, the effect on the levels of all mediators measured was greater after asbestos exposure than after RCF exposure.

Chrysotile A, chrysotile B, crocidolite, MMVF21, RCF1, and silicon carbide at 100 µg/ml caused significantly increased synthesis of TNF mRNA after 90 minutes of incubation with rat alveolar macrophages. After 4 hr of incubation, chrysotile A still showed significantly increased TNF mRNA production, while all other fibers were at baseline concentrations. None of the fibers studied increased TNF release at 90 minutes. However, increased TNF bioactivity occurred after 4 hr of incubation with chrysotile A, chrysotile B, crocidolite, or MMVF21. RCF1 at 100 µg/ml did not increase TNF production under these conditions.

Chrysotile asbestos and alumina silicate ceramic fibers increased in vitro alveolar macrophage TNF production both in rats exposed to cigarette smoke in vivo and in rats unexposed to smoke.
Asbestos at 50 and 100 µg/ml induced significantly greater TNF release in rats exposed to cigarette smoke than in unexposed rats. No significant differences were found between groups at any of the RCF doses tested (25, 50, and 100 µg/ml). RCF exposure, in contrast to chrysotile exposure, therefore did not have a significant synergistic effect with cigarette smoke exposure.

In addition to cytokines such as TNF, another group of inflammatory mediators that has been studied in vitro is the ROS. These mediators, also referred to as reactive oxygen metabolites (ROMs), are normally produced during cellular aerobic metabolism and in phagocytic cells in response to particle exposure. One molecular effect of asbestos exposure has been demonstrated to be the induction of ROS. Oxidative stress occurs when the ROS level in a cell exceeds the antioxidant level. ROS may damage DNA, lipids, and proteins and have been implicated in carcinogenesis. This research suggests that free radical activity may be involved in the pathogenesis of fiber-induced lung disease.

The ability of RCFs to induce the release of free radicals has been studied in rodent alveolar macrophages. RF1 and RF2 (Japanese ceramic fibers) at 200 µg/ml induced significant production of superoxide anion and a significant increase in intracellular free calcium in guinea pig alveolar macrophages. Both superoxide anion production and increased intracellular calcium are associated with oxidative stress; superoxide anions may generate hydrogen peroxide and hydroxyl radicals, classified as ROS or free radicals. RF2 exposure resulted in a significant depletion of glutathione, a cellular antioxidant that protects cells against oxidative stress; depletion of glutathione is associated with oxidative stress. The RFs did not affect hydrogen peroxide production. In each test, the effects of chrysotile were significantly greater than those of the RFs.

RCF1, MMVF10, and amosite asbestos at 8.24 × 10⁶ f/ml induced a significant depletion of intracellular glutathione in rat alveolar macrophages. RCF1 had effects similar to those of amosite asbestos, whereas MMVF10 caused the greatest depletion of glutathione.

RCF1, RCF2, and RCF3 induced a greater production of ROMs in human polymorphonuclear cell cultures than did RCF4 and chrysotile. A dose-dependent production of ROMs was seen for all RCFs and other MMVFs tested from 25 to 500 µg/ml. Quartz had a greater effect on ROM production than all fibers tested.

RCF1 had a high binding capacity for rat immunoglobulin G (IgG), a normal component of lung lining fluid. At doses >100 µg of RCF1, fibers coated with IgG induced significantly increased superoxide anion release. This finding supports the premise that lung lining fluid and other substances that fibers contact in vivo may significantly alter the effect of fibers on cells. IgG-coated long-fiber amosite asbestos, despite a poor binding affinity for IgG, induced a superoxide anion release response comparable to that of coated RCF1.

Brown et al. investigated the ability of RCF1, amosite asbestos, silicon carbide, MMVF10, Cole 100/475 glass fiber, and RCF4 to cause translocation of the transcription factor NF-κB to the nucleus in A549 lung epithelial cells. RCF1, amosite asbestos, and silicon carbide produced a significant dose-dependent translocation of NF-κB to the nucleus; the other fibers tested did not. Equal fiber numbers were tested.
These cytotoxicity studies indicate that RCFs may share some aspects of their mechanism of action with asbestos. Both affect the production of TNF and ROS as well as cell viability and proliferation, although the effects of RCFs have usually been less pronounced than those of asbestos. Results of in vitro studies are difficult to compare, even among studies of the same fiber types, because of differences in study design, fiber concentrations and characteristics, and endpoints.

# C.3 Genotoxic Effects of RCFs

In addition to research assessing the cytotoxicity of RCFs, studies have also assessed their genotoxicity. Most genotoxicity assays assess changes in or damage to genetic material. Methods that have been used to investigate the genotoxicity of fibers include cell-free or in vitro cell systems investigating DNA damage, studies of aneuploidy or polyploidy, studies of chromosome damage or mutation, gene mutation assays, and investigations of cell growth regulation. Several studies, described below and in Table C-3, have examined the ability of RCFs to produce genotoxic changes in comparison with asbestos.

Several fibers, including RCF1 and RCF4, were assayed for free-radical-generating activity using a DNA assay and a salicylate assay. The DNA plasmid assay showed only amosite asbestos to have free radical activity; the salicylate assay showed amosite as well as RCF1 to have free radical activity. Coating the fibers with lung surfactant decreased their hydroxyl radical generation. The difference in RCF1 results between the two assays was proposed to result from increased iron release from RCF1 in the salicylate assay; an iron chelator inhibited the RCF hydroxylation of salicylate. RCF4 showed no free radical activity.

When equal fiber numbers were compared, RCF1, RCF2, RCF3, and RCF4 had minimal free-radical-generating activity on plasmid DNA compared with crocidolite and amosite asbestos. RCFs and other MMVFs produced a small but nonsignificant amount of DNA damage. This damage was mediated by hydroxyl radicals. No correlation was found between iron content and free radical generation. At 9.3 × 10⁵ fibers per assay, amosite produced substantial free radical damage to plasmid DNA. Amosite significantly upregulated the transcription factors AP-1 and NF-κB; RCF1 had a much smaller effect, on AP-1 upregulation only.

SVFs, including ceramic fibers (unspecified), were reported to form hydroxyl radicals based on the formation of the DNA adduct 8-hydroxydeoxyguanosine (8-OH-dG) from 2-deoxyguanosine (dG) in calf thymus DNA and in solution. Ceramic and glass wool fibers had poor hydroxylating capabilities relative to rock wool and slag wool fibers. Hydroxyl radical scavengers, such as dimethyl sulfoxide, decreased the hydroxylation; compounds that increase hydroxyl radical formation, such as hydrogen peroxide, increased it. Rock wool in combination with cigarette smoke condensate caused a synergistic increase in 8-OH-dG formation; ceramic and glass wool fibers did not have synergistic effects with cigarette smoke.

RCF1, RCF2, RCF3, and RCF4 induced nuclear abnormalities, including micronuclei and polynuclei, in Chinese hamster ovary cells. Micronuclei may form when chromosomes or fragments of chromosomes are separated during mitosis; polynuclei may arise when cytokinesis fails after mitosis. The incidence of micronuclei and polynuclei after exposure to 20 µg/cm² RCF ranged from 22% to 33%.
At 5 µg/cm², chrysotile and crocidolite induced nuclear abnormality incidences of 49% and 28%, respectively.

Amosite, chrysotile, and crocidolite asbestos and ceramic fibers caused a significant increase in micronuclei in human amniotic fluid cells. The response was dose-dependent with asbestos fiber exposure but not with ceramic fiber exposure. Significant increases in chromosomal breakage and hyperdiploid cells were noted after asbestos and ceramic fiber exposure.

RCF1, RCF3, and RCF4 did not induce anaphase aberrations in rat pleural mesothelial cells. Of all fibers tested, UICC chrysotile was the most genotoxic on the basis of weight, number of fibers with a length >4 µm, and number of fibers meeting Stanton's and Pott's criteria.

The effects of fibers on the mRNA levels of c-fos and c-jun proto-oncogenes and ornithine decarboxylase (ODC) in hamster tracheal epithelial (HTE) cells and rodent pleural mesothelial (RPM) cells were examined. ODC is a rate-limiting enzyme in the synthesis of polyamines, compounds involved in cell proliferation and tumor promotion. In HTE cells, crocidolite induced a significant dose-dependent increase in levels of c-jun and ODC mRNA but not c-fos mRNA. RCF1 induced only small, non-dose-dependent increases in ODC mRNA levels. In RPM cells, crocidolite fibers at 2.5 µg/cm² significantly elevated levels of c-fos and c-jun mRNA. RCF1 increased proto-oncogene expression only at the cytotoxic concentration of 25 µg/cm²; no significant effect was seen at concentrations ≤5 µg/cm².

RCF1 fibers were nonmutagenic in the human-hamster hybrid cell line A_L, whereas chrysotile was a significant inducer of mutations in the same system.

These studies demonstrate that RCFs may share some genotoxic mechanisms with asbestos, including induction of free radicals, micronuclei, polynuclei, chromosomal breakage, and hyperdiploid cells. Other studies have demonstrated that, with certain methods and doses, RCFs did not induce anaphase aberrations and induced proto-oncogene expression only at cytotoxic concentrations. RCFs were nonmutagenic in human-hamster hybrid cells.

# C.4 Discussion of In Vitro Studies

The toxicity of fibers has been attributed to their dose, dimensions, and durability, and any test system designed to assess the potential toxicity of fibers must address these factors. Durability is difficult to assess using in vitro studies because of their acute time course. However, in vitro studies provide an opportunity to study the effects of varying doses and dimensions of fibers more quickly and efficiently than animal testing. Although they provide important information about mechanism of action, they do not currently provide data that can be extrapolated to occupational risk assessment.

The association between fiber dimension and toxicity has been documented and reviewed, and fiber length has been correlated with the cytotoxicity of glass fibers. Manville code 100 (JM-100) fiber samples with average lengths of 3, 4, 7, 17, and 33 µm were assessed for their effects on LDH activity and rat alveolar macrophage function. The greatest cytotoxicity was reported for the 17-µm and 33-µm samples, indicating that length is an important factor in the toxicity of this fiber. Multiple macrophages were observed attached along the length of long fibers. Relatively short fibers (<20 µm long) were usually phagocytized by a single rat alveolar macrophage; longer fibers were phagocytized by two or more macrophages.
Incomplete, or frustrated, phagocytosis may play a role in the increased toxicity of longer fibers. Long fibers (17 µm average length) were more potent inducers of TNF production and transcription factor activation than shorter fibers (7 µm average length). These studies demonstrate the important role of length in fiber toxicity and suggest that the capacity for macrophage phagocytosis may be a critical factor in determining fiber toxicity. The toxicity of individual fibers of the same type of RCF may therefore differ according to their size in relation to alveolar macrophages.

Several RCF in vitro studies reported a direct association between longer fiber length and greater cytotoxicity. Hart et al. reported the shortest fibers to be the least cytotoxic. Brown et al. reported an association between length and cytotoxic activity but not between diameter and activity. Wright et al. reported that cytotoxicity was correlated with fibers >8 µm in length. Yegles et al. reported that the longest and thickest fibers were the most cytotoxic: the four most cytotoxic fibers had GM lengths ≥13 µm and GM diameters >0.5 µm. The production of abnormal anaphases and telophases was associated with Stanton fibers (length >8 µm and diameter ≤0.25 µm). Hart et al. reported that cytotoxicity increased with fiber length up to 20 µm. All of these studies demonstrate the influence of fiber dimensions on cytotoxicity. Other studies have not reported the length distribution of the fiber samples used. When studies are done with RCFs in which specific lengths are assessed for cytotoxicity (as has been done with glass fibers), it will be possible to determine the strength of the association between RCF fiber length and toxicity and whether a threshold length exists above which toxicity increases steeply.

In addition to providing data on the correlation between fiber length and toxicity, in vitro studies have provided data on the relative toxicity of RCFs compared with asbestos. Uncertainties exist in the interpretation of these studies because of differences in fiber doses, dimensions, and durabilities. RCFs do appear to share some mechanisms of action with asbestos (see references in Tables C-1, C-2, and C-3). They have similar direct and indirect effects on cells and alter gene function in similar ways. They are capable of inducing enzyme release and cell hemolysis. They may decrease cell viability and inhibit proliferation. Both affect the production of tumor necrosis factor and ROS as well as cell viability and proliferation, and both induce necrosis in rat pleural mesothelial cells. They also may induce free radicals, micronuclei, polynuclei, chromosomal breakage, and hyperdiploid cells in vitro.

In vitro studies also provide an excellent opportunity for investigating the pathogenesis of RCF-induced disease. However, comparisons between in vitro studies are difficult because of differences in fiber doses, dimensions, preparations, and compositions. Important information, such as fiber length distribution, is not always determined. Even when comparable fibers are studied, the cell line or the conditions under which they are tested may vary. Much of the research to date has been done in rodent cell lines and in cells that are not related to the primary target organ. In vitro studies using human pulmonary cell lines should provide the pathogenesis data most relevant to human health risk assessment.
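To make the dimensional criteria above concrete, the short sketch below screens a set of fiber measurements against the Stanton criterion cited in the text (length >8 µm and diameter ≤0.25 µm) and computes a geometric mean of the kind reported by Yegles et al. The fiber measurements are hypothetical, and the sketch is illustrative only, not a validated toxicity screen.

```python
"""Sketch: screening hypothetical fiber measurements against the Stanton
dimensional criterion (length >8 um, diameter <=0.25 um) named above."""
from math import exp, log

# (length_um, diameter_um) pairs; these values are hypothetical
fibers = [(3.1, 0.8), (12.4, 0.2), (9.6, 0.15), (25.0, 1.1), (6.2, 0.4)]

def is_stanton(length_um: float, diameter_um: float) -> bool:
    """Stanton dimensional criterion as given in the text."""
    return length_um > 8.0 and diameter_um <= 0.25

def geometric_mean(values):
    """Geometric mean, the statistic behind the GM lengths cited above."""
    return exp(sum(log(v) for v in values) / len(values))

stanton_fraction = sum(is_stanton(l, d) for l, d in fibers) / len(fibers)
gm_length = geometric_mean([l for l, _ in fibers])
print(f"Stanton fraction: {stanton_fraction:.0%}, GM length: {gm_length:.1f} um")
```

In practice, such screening would be applied to full size distributions measured by microscopy rather than to a handful of fibers.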
Short-term in vitro studies cannot take into account the influence of fiber dissolution and fiber compositional changes that may occur over time. During in vivo exposure, fibers are continually modified physically, chemically, and structurally by components of the lung environment; this complex set of conditions is difficult to recreate in vitro. Just as it is unlikely that any single factor will accurately predict fiber toxicity, it is even more unlikely that any one in vitro test will predict fiber toxicity. The best results are obtained by assessing toxicity in several in vitro tests and comparing the findings with in vivo results. In vitro studies provide an excellent opportunity to investigate factors important to fiber toxicity, such as dose, dimension, surface area, and physicochemical composition, and they yield information that is an important supplement to the data of chronic inhalation studies. They do not, however, currently provide information that can be directly applied to human health risk assessment and the development of occupational exposure limits.
Through the Act, Congress charged NIOSH with recommending occupational safety and health standards and describing exposure limits that are safe for various periods of employment. These limits include but are not limited to the exposures at which no worker will suffer diminished health, functional capacity, or life expectancy as a result of his or her work experience. By means of criteria documents, NIOSH communicates these recommended standards to regulatory agencies (including the Occupational Safety and Health Administration [OSHA]), health professionals in academic institutions, industry, organized labor, public interest groups, and others in the occupational safety and health community. Criteria documents contain a critical review of the scientific and technical information about the prevalence of hazards, the existence of safety and health risks, and the adequacy of control methods.

This criteria document is derived from reviews of information from human and animal studies of the toxicity of refractory ceramic fibers (RCFs) and is intended to describe the potential health effects of occupational exposure to airborne fibers of this material. RCFs are amorphous synthetic fibers produced by the melting and blowing or spinning of calcined kaolin clay or a combination of alumina, silica, and other oxides. RCFs belong to the class of synthetic vitreous fibers (SVFs), materials that also include fibers of glass wool, rock wool, slag wool, and specialty glass. RCFs are used in commercial applications requiring lightweight, high-heat insulation (e.g., furnace and kiln insulation). Commercial production of RCFs began in the 1950s in the United States, and production increased dramatically in the 1970s. Domestic production of RCFs in 1997 totaled approximately 107.7 million lb. Currently, total U.S. production has been estimated at 80 million lb per year, which constitutes 1% to 2% of SVFs produced worldwide. In the United States, approximately 31,500 workers have the potential for occupational exposure to RCFs during distribution, handling, installation, and removal. More than 800 of these workers are employed directly in the manufacturing of RCFs and RCF products.

With increasing production of RCFs, concerns about exposures to airborne fibers prompted animal inhalation studies, which have indicated an increased incidence of mesotheliomas in hamsters and lung cancer in rats following exposure to RCFs. Studies of workers who manufacture RCFs have shown a positive association between increased exposure to RCFs and the development of pleural plaques, skin and eye irritation, and respiratory symptoms and conditions (including dyspnea, wheezing, and chronic cough). In addition, current and former RCF production workers have shown decrements in pulmonary function.

After evaluating this evidence, NIOSH proposes a recommended exposure limit (REL) for RCFs of 0.5 fiber per cubic centimeter (f/cm³) of air as a time-weighted average (TWA) concentration for up to a 10-hr work shift during a 40-hr workweek. Limiting airborne RCF exposures to this concentration will minimize the risk for lung cancer and irritation of the eyes and upper respiratory system and is achievable based on a review of exposure monitoring data collected from RCF manufacturers and users. However, because a residual risk of cancer (lung cancer and pleural mesothelioma) may still exist at the REL, continued efforts should be made toward reducing exposures to less than 0.2 f/cm³.
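As an illustration of how the REL is applied, the sketch below computes a full-shift TWA concentration from task-level measurements and compares it with the 0.5 f/cm³ REL and the 0.2 f/cm³ goal noted above. The task names, durations, and concentrations are hypothetical.

```python
"""Sketch: full-shift TWA fiber concentration vs. the REL (0.5 f/cm^3)
and the 0.2 f/cm^3 goal. All task data are hypothetical."""

REL = 0.5   # f/cm^3, TWA for up to a 10-hr shift during a 40-hr workweek
GOAL = 0.2  # f/cm^3, concentration to work toward

# (task, duration in hours, measured concentration in f/cm^3)
tasks = [("kiln relining", 2.0, 1.1), ("cleanup", 1.0, 0.6), ("other duties", 7.0, 0.1)]

shift_hours = sum(hours for _, hours, _ in tasks)
twa = sum(hours * conc for _, hours, conc in tasks) / shift_hours

status = ("exceeds the REL" if twa > REL else
          "meets the REL and the 0.2 f/cm^3 goal" if twa < GOAL else
          "meets the REL but exceeds the 0.2 f/cm^3 goal")
print(f"{shift_hours:.0f}-hr TWA = {twa:.2f} f/cm^3 ({status})")
```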
Engineering controls, appropriate respiratory protection programs, and other preventive measures should be implemented to minimize worker exposures to RCFs. NIOSH urges employers to disseminate this information to workers and customers. NIOSH also requests that professional and trade associations and labor organizations inform their members about the hazards of exposure to RCFs.

# Executive Summary

The National Institute for Occupational Safety and Health (NIOSH) has reviewed data characterizing occupational exposure to airborne refractory ceramic fibers (RCFs) and information about potential health effects obtained from experimental and epidemiologic studies. From this review, NIOSH determined that occupational exposure to RCFs is associated with adverse respiratory effects as well as skin and eye irritation and may pose a carcinogenic risk based on the results of chronic animal inhalation studies.

In chronic animal inhalation studies, exposure to RCFs produced an increased incidence of mesotheliomas in hamsters [McConnell et al. 1995] and lung cancer in rats [Mast et al. 1995a,b]. The potential role of nonfibrous particulates generated during inhalation exposures in the animal studies complicates the issue of determining the exact mechanisms and doses associated with the toxicity of RCFs in producing carcinogenic effects [Mast et al. 2000]. The induction of mesotheliomas and sarcomas in rats and hamsters following intrapleural and intraperitoneal implantation of RCFs provided additional evidence for the carcinogenic potential of RCFs [Wagner et al. 1973; Davis et al. 1984; Smith et al. 1987; Pott et al. 1987]. Lung tumors have also been observed in rats exposed to RCFs by intratracheal instillation [Manville Corporation 1991].

In contrast to the carcinogenic effects of RCFs observed in experimental animal studies, epidemiologic studies have found no association between occupational exposure to airborne RCFs and an excess rate of pulmonary fibrosis or lung cancer. However, studies of worker populations with occupational exposure to airborne RCFs have shown an association between exposure and the formation of pleural plaques, increased prevalence of respiratory symptoms and conditions (dyspnea, wheezing, chronic cough), decreases in pulmonary function, and skin, eye, and upper respiratory tract irritation [Lemasters et al. 1994, 1998; Lockey et al. 1996]. Increased decrements in pulmonary function among workers exposed to RCFs who are current or former cigarette smokers indicate an apparent synergistic effect between smoking and RCF exposure [Lemasters et al. 1998]. This finding is consistent with studies of other dust-exposed populations.

The implementation of improved engineering controls and work practices in RCF manufacturing processes and end uses has led to dramatic declines in airborne fiber exposure concentrations [Rice et al. 1996, 1997; Maxim et al. 2000a], which in turn have lowered the risk of symptoms and health effects for exposed workers. In 2002, the Refractory Ceramic Fibers Coalition (RCFC) established the Product Stewardship Program (PSP), which was endorsed by the Occupational Safety and Health Administration (OSHA). Contained in the PSP were recommendations for an RCF exposure guideline of 0.5 fiber per cubic centimeter (f/cm³) of air as a time-weighted average (TWA), based on the contention that exposures at this concentration could be achieved in most industries that manufactured or used RCFs. At this time, the available health data do not provide sufficient evidence for deriving a precise health-based occupational exposure limit to protect workers.

The risk for mesothelioma at 0.5 f/cm³ is not known but cannot be discounted. Evidence from epidemiologic studies showed that higher exposures in the past resulted in pleural plaques in workers, indicating that RCFs do reach pleural tissue. Both implantation studies in rats and inhalation studies in hamsters show that RCFs can cause mesothelioma. Because of limitations in the hamster data, the risk of mesothelioma cannot be quantified. However, the fact that no mesothelioma has been found in workers and that pleural plaques appear to be less likely in workers with lower exposures suggests a lower risk for mesothelioma at the REL.

Because residual risks of cancer (lung cancer and pleural mesothelioma) and irritation may still exist at the REL, NIOSH further recommends that all reasonable efforts be made to reduce exposures to less than 0.2 f/cm³. At this concentration, the risks of lung cancer are estimated to be between 0.03/1,000 and 0.47/1,000 (based on extrapolations of risk models from Moolgavkar et al. [1999] and Yu and Oberdörster [2000]).

Maintaining airborne RCF concentrations below the REL requires a comprehensive safety and health program that includes provisions for monitoring worker exposures, installing and routinely maintaining engineering controls, and training workers in good work practices. Industry-led efforts have likewise promoted these actions through the PSP. NIOSH believes that maintaining exposures below the REL is achievable at most manufacturing operations and many user applications, and that incorporating an action level (AL) of 0.25 f/cm³ in the exposure monitoring strategy will help employers determine when workplace exposure concentrations are approaching the REL. The AL concept has been an integral element of occupational standards recommended in NIOSH criteria documents and in comprehensive standards promulgated by OSHA and the Mine Safety and Health Administration (MSHA). NIOSH also recommends that employers implement additional measures under a comprehensive safety and health program, including hazard communication, respiratory protection programs, smoking cessation, and medical monitoring. These elements, in combination with efforts to maintain airborne RCF concentrations below the REL, will further protect the health of workers.

# Glossary

# FVC: Forced vital capacity, or the maximum volume of air (in liters) that can be forcibly expired from the lungs following a maximal inspiration.

# High-efficiency particulate air (HEPA) filter: A dry-type filter used to remove airborne particles with an efficiency equal to or greater than 99.97% for 0.3-µm particles. The lowest filtering efficiency (99.97%) is associated with 0.3-µm particles, approximately the most penetrating particle size for particulate filters.

# Inspirable dust: The fraction of airborne particles that would be inspired through the mouth and nose of a worker.

# MAN: A refractory ceramic fiber produced by the Johns Manville Company.

# Occupational medical monitoring (incorporating medical screening, surveillance): The periodic medical evaluation of workers to identify potential health effects and symptoms related to occupational exposures or environmental conditions in the workplace.
An occupational medical monitoring program is a secondary prevention method based on two processes: screening and surveillance. Occupational medical screening focuses on early detection of health outcomes for individual workers. Screening may involve an occupational history assessment, a medical examination, and medical tests to detect the presence of toxicants or early pathologic changes before the worker would normally seek clinical care for symptomatic disease. Occupational medical surveillance involves the ongoing evaluation of the health status of a group of workers through the collection and analysis of health data for the purposes of disease prevention and evaluation of the effectiveness of intervention programs.

# Pleural plaques: Discrete areas of thickening that are generally on the parietal pleura and are most commonly located at the midcostal and posterior costal areas, the dome of the diaphragm, and the mediastinal pleura. The presence of plaques is an indication of exposure to a fibrous silicate, most frequently asbestos.

# Radiographic opacity: A shadow on a chest X-ray film generally associated with a fibrogenic response to dust retained in the lungs [Morgan 1995]. Opacities are classified by size, shape, location, and profusion according to guidelines established by the International Labor Office [ILO 2000] (www.ilo.org/public/english/support/publ/books.htm).

# Refractory ceramic fiber (RCF): An amorphous, synthetic fiber (Chemical Abstracts Service No. 142844-00-6) produced by melting and blowing or spinning calcined kaolin clay or a combination of alumina (Al₂O₃) and silicon dioxide (SiO₂). Oxides may be added, such as zirconia, ferric oxide, titanium oxide, magnesium oxide, calcium oxide, and alkalies. The percentage (by weight) of components is as follows: alumina, 20% to 80%; silicon dioxide, 20% to 80%; and other oxides in smaller amounts.

# Respirable-sized fiber: A particle >5 µm long with an aspect ratio >3:1 and a diameter ≤1.3 µm.

# Shot: Nonfibrous particulate generated from the original melt batch during the production of RCFs.

# Standardized mortality ratio (SMR): The ratio of the observed number of deaths (from a specified cause) to the expected number of deaths (from that same cause), adjusted to account for demographic differences (e.g., age, sex, race) between the study population and the referent population.

# Synthetic vitreous fiber (SVF): Any of a number of manufactured fibers produced by the melting and subsequent fiberization of kaolin clay, sand, rock, slag, etc. Fibrous glass, mineral wool, ceramic fibers, and alkaline earth silicate wools are the major types of SVF, which is also called man-made mineral fiber (MMMF) or man-made vitreous fiber (MMVF).

# Thoracic-sized fiber: A particle >5 µm long with an aspect ratio >3:1 and a diameter <3 to 3.5 µm. Thoracic refers to particles penetrating to the thorax (50% cut at 10-µm aerodynamic diameter). Mineral and vitreous fibers with diameters of 3 to 3.5 µm have an aerodynamic diameter of approximately 10 µm.
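The dimensional definitions above lend themselves to a simple decision rule. The sketch below classifies a single fiber measurement using those definitions, taking the conservative 3-µm end of the <3 to 3.5 µm thoracic diameter range; the example fiber is hypothetical, and a respirable-sized fiber is reported as such even though it also meets the thoracic criteria.

```python
"""Sketch: applying the glossary's dimensional definitions to one fiber.
Thresholds come from the definitions above; the example is hypothetical."""

def classify_fiber(length_um: float, diameter_um: float) -> str:
    # A fiber must be >5 um long with an aspect ratio >3:1.
    if length_um <= 5.0 or (length_um / diameter_um) <= 3.0:
        return "not a countable fiber"
    if diameter_um <= 1.3:
        return "respirable-sized fiber"
    if diameter_um < 3.0:  # conservative end of the <3 to 3.5 um range
        return "thoracic-sized fiber"
    return "fiber, larger than thoracic size"

print(classify_fiber(13.0, 0.7))  # -> "respirable-sized fiber"
```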
# 1 Recommendations for a Refractory Ceramic Fiber (RCF) Standard

The National Institute for Occupational Safety and Health (NIOSH) recommends that exposure to airborne refractory ceramic fibers (RCFs) be controlled in the workplace by implementing the recommendations presented in this document. These recommendations are designed to protect the safety and health of workers for up to a 10-hr work shift during a 40-hr workweek over a 40-year working lifetime. Observance of these recommendations should prevent or greatly reduce the risks of eye and skin irritation and adverse respiratory health effects (including lung cancer) in workers with exposure to airborne RCFs. Preventive efforts are primarily focused on controlling and minimizing the airborne fiber concentrations to which workers are exposed. Exposure monitoring, hazard communication, training, respiratory protection programs, and medical monitoring are also important elements of a comprehensive program to protect the health of workers exposed to RCFs. These elements are described briefly in this chapter and in greater detail in Chapter 9.

# Recommended Exposure Limit (REL)

NIOSH recommends that occupational exposures to airborne RCFs be limited to 0.5 fiber per cubic centimeter (f/cm³) of air as a time-weighted average (TWA) concentration for up to a 10-hr work shift during a 40-hr workweek, measured according to NIOSH Method 7400 (B rules) [NIOSH 1998]. This recommended exposure limit (REL) is intended to reduce the risks of lung cancer, mesothelioma, and other adverse respiratory health effects (including irritation and compromised pulmonary function) associated with excessive RCF exposure in the workplace. Limiting exposures will also protect workers' eyes and skin from the mechanical irritation associated with exposure to RCFs.

In most manufacturing operations, it is currently possible to limit airborne RCF concentrations to 0.5 f/cm³ or less. Exceptions may occur during RCF finishing operations and during the installation and removal of RCF products, when the nature of job activities presents a challenge to meeting the REL. For these operations, additional protective measures are recommended. Engineering and administrative controls, respirator use, and other preventive measures should be implemented to minimize exposures for workers in RCF industry sectors where airborne RCF concentrations exceed the REL. NIOSH urges employers to disseminate this information to workers and customers, and RCF manufacturers should convey this information to downstream users. NIOSH also requests that professional and trade associations and labor organizations inform their members about the hazards of exposure to RCFs.

# Definitions and Characteristics

# Naturally Occurring Mineral Fibers

Many types of mineral fibers occur naturally. Asbestos is the most prominent of these fibers because of its industrial application. The asbestos minerals include both the serpentine asbestos (chrysotile) and the amphibole mineral fibers, including actinolite, amosite, anthophyllite, crocidolite, and tremolite [Peters and Peters 1980]. Since ancient times, mineral fibers have been mined and processed for use as insulation because of their high tensile strength, resistance to heat, durability in acids and other chemicals, and light weight. The predominant forms of asbestos mined and used today are chrysotile (~95%), crocidolite (<5%), and amosite (<1%). For the purposes of this document, naturally occurring mineral fibers are distinguishable from synthetic vitreous fibers (SVFs) based on the crystalline structure of the mineral fibers. This property causes the mineral fibers to fracture longitudinally when subjected to mechanical stresses, thereby producing more fibers of decreasing diameter.
By contrast, SVFs are amorphous and fracture transversely, resulting in more fibers of decreasing length until the segments are no longer long enough to be considered fibers. Naturally occurring mineral fibers are generally more durable and less soluble than SVFs, a property that accounts for the biopersistence and toxicity of mineral fibers in vivo.

# RCFs

RCFs are a type of SVF; they are amorphous synthetic fibers produced from the melting and blowing or spinning of calcined kaolin clay or a combination of alumina (Al₂O₃) and silicon dioxide (SiO₂). Oxides such as zirconia, ferric oxide, titanium oxide, magnesium oxide, calcium oxide, and alkalies may be added. The percentage of components (by weight) is as follows: alumina, 20% to 80%; silicon dioxide, 20% to 80%; and other oxides in smaller amounts. Like the naturally occurring mineral fibers, RCFs possess the desired qualities of heat resistance, tensile strength, durability, and light weight. On a continuum, however, RCFs are less durable (i.e., more soluble) than the least durable asbestos fiber (chrysotile) but more durable than most fibrous glass and other types of SVFs.

# SVFs

SVFs include a number of manmade (not naturally occurring) fibers that are produced by the melting and subsequent fiberization of kaolin clay, sand, rock, slag, and other materials. The major types of SVFs are fibrous glass, mineral wool (slag wool, rock wool), and ceramic fibers (including RCFs). SVFs are also frequently referred to as manmade mineral fibers (MMMFs) or manmade vitreous fibers (MMVFs).

# Sampling and Analysis

Employers shall perform air sampling and analysis to determine airborne concentrations of RCFs according to NIOSH Method 7400 (B rules) [NIOSH 1998], provided in Appendix A of this document.

# Exposure Monitoring

Employers shall perform exposure monitoring as follows:

■ Establish a workplace exposure monitoring program for worksites where RCFs or RCF products are manufactured, handled, used, installed, or removed.

■ Include in this program routine area and personal monitoring of airborne fiber concentrations.

■ Design a monitoring strategy that can be used to
- evaluate a worker's exposure to RCFs;
- assess the effectiveness of engineering controls, work practices, and other factors in controlling airborne fiber concentrations; and
- identify work areas or job tasks in which worker exposures are routinely high and thus require additional efforts to reduce them.

# Sampling Surveys

Employers shall conduct exposure monitoring surveys to ensure that worker exposures (measured by full-shift samples) do not exceed the REL. Because adverse respiratory health effects may occur at the REL, it is desirable to achieve lower concentrations whenever possible. When workers are potentially exposed to airborne RCFs, employers shall conduct exposure monitoring surveys as follows:

■ Collect representative personal samples over the entire work shift [NIOSH 1997a].

■ Perform periodic sampling at least annually and whenever any major process change takes place or whenever another reason exists to suspect that exposure concentrations may have changed.

■ Collect all routine personal samples in the breathing zones of the workers.

■ If workers are exposed to concentrations above the REL, perform more frequent exposure monitoring as engineering changes are implemented and until at least two consecutive samples indicate that exposures no longer exceed the REL [NIOSH 1977a].
■ Notify all workers of monitoring results and of any actions taken to reduce their exposures.

■ When developing an exposure sampling strategy, consider variations in work and production schedules as well as the inherent variability in most area sampling [NIOSH 1995a].

# Focused sampling

When sampling to determine whether worker RCF exposures are below the REL, a focused sampling strategy may be more practical than a random sampling approach. A focused sampling strategy targets workers perceived to be exposed to the highest concentrations of a hazardous substance [Leidel and Busch 1994]. This strategy is most efficient for identifying exposures above the REL if maximum-risk workers and time periods are accurately identified. Short tasks involving high concentrations of airborne fibers could result in elevated exposures over full work shifts. Sampling strategies such as those used by Corn and Esmen [1979], Rice et al. [1997], and Maxim et al. [1997] have been developed and used specifically in RCF manufacturing facilities to monitor airborne fiber concentrations. In these strategies, representative workers are selected for sampling and are grouped according to dust zones, uniform job titles, or functional job categories. These approaches are intended to reduce the number of required samples and increase the confidence of identifying workers at similar risk.

# Area sampling

Area sampling may be useful in exposure monitoring to determine sources of airborne RCFs and to assess the effectiveness of engineering controls.

# Action Level

An action level (AL) at half the REL (0.25 f/cm³) shall be used to determine when additional controls are needed or when administrative actions should be taken to reduce exposure to RCFs. The purpose of an AL is to indicate when worker exposures to hazardous substances may be approaching the REL. When air samples contain concentrations at or above the AL, the probability is high that worker exposures to the hazardous substance exceed the REL. The AL is a statistically derived concept permitting the employer to have confidence (e.g., 95%) that if results from personal air samples are below the AL, the probability is small that worker exposures are above the REL. NIOSH has concluded that the use of an AL permits the employer to monitor hazardous workplace exposures without daily sampling. The AL concept has served as the basis for defining the elements of an occupational standard in NIOSH criteria documents and in comprehensive standards promulgated by the Occupational Safety and Health Administration (OSHA) and the Mine Safety and Health Administration (MSHA).

# Hazard Communication

Employers shall take the following measures to inform workers about RCF hazards:

■ Establish a safety and health training program for all workers who manufacture, use, handle, install, or remove RCF products or perform other activities that bring them into contact with RCFs.

■ Inform employees and contract workers about hazardous substances in their work areas.

■ Instruct workers about how to get information from material safety data sheets (MSDSs) for RCFs and other chemicals.

■ Provide MSDSs onsite and make them easily accessible.

■ Inform workers about adverse respiratory health effects associated with RCF exposures.

■ In work involving the removal of refractory insulation materials, make workers aware of the potential for exposure to respirable crystalline silica, the health effects related to this exposure, and methods for reducing exposures.
■ Make workers who smoke cigarettes or use other tobacco products aware of their increased risk of developing RCF-induced respiratory symptoms and conditions (see Sections 1.12 and 9.6 for recommendations about smoking cessation programs).

# Training

Employers shall provide the following training for workers exposed to RCFs:

■ Train workers to detect hazardous situations.

■ Inform workers about practices or operations that may generate high airborne fiber concentrations (e.g., cutting and sanding RCF boards and other RCF products).

■ Train workers how to protect themselves by using proper work practices, engineering controls, and personal protective equipment (PPE).

# Product Formulation

One factor recognized as contributing to the toxicity of an inhaled fiber is its durability and resistance to degradation in the respiratory tract. Chemical characteristics place RCFs among the most durable SVFs. As a result, an inhaled RCF that is deposited in the alveolar region of the lung will persist longer in the lungs than a less durable fiber. Therefore, NIOSH recommends substituting a less durable fiber for RCFs or reformulating the chemistry of RCFs toward this end to reduce the hazard for exposed workers. As part of product stewardship efforts, several RCF producers within the Refractory Ceramic Fibers Coalition (RCFC) have developed new and less biopersistent fibers termed alkaline earth silicate wools [Maxim et al. 1999b]. Newly developed fibers should undergo industry-sponsored testing before their selection and commercial use to exclude possible adverse health effects from exposure.

# Engineering Controls and Work Practices

# Engineering Controls

Employers shall use and maintain appropriate engineering controls to keep airborne concentrations of RCFs at or below the REL during the manufacture, use, handling, installation, and removal of RCF products. Engineering controls for RCFs include the following:

■ Local exhaust ventilation or dust collection systems at or near dust-generating operations
- Band saws used in RCF manufacturing and finishing operations have been fitted with such engineering controls to capture fibers and dust during cutting operations, thereby reducing exposures for the band saw operator [Venturin 1998].
- Disc sanders fitted with similar local exhaust ventilation systems effectively reduce airborne RCF concentrations during the sanding of vacuum-formed RCF products [Dunn et al. 2004].

■ Enclosed processes used during manufacturing to keep airborne fibers contained and separated from workers

■ Water knives, which are high-pressure water jets that effectively cut and trim the edges of RCF blanket while suppressing dust and limiting the generation of airborne fibers

# Work Practices

Employers shall implement appropriate work practices to help keep worker exposures at or below the REL for RCFs. The following work practices are recommended to help reduce concentrations of airborne fibers:

■ Limit the use of power tools unless they are equipped with local exhaust or dust collection systems.
- Be aware that manually powered hand tools generate less dust and fewer airborne fibers, but they often require additional physical effort and time and may increase the risk of musculoskeletal disorders.
- The additional physical effort required by hand tools may also increase the rate and depth of breathing and consequently affect the inhalation rate and deposition of fibers in the lungs.
■ Use ergonomically correct tools and proper workstation design to reduce the risk of musculoskeletal disorders.

■ Use high-efficiency particulate air (HEPA)-filtered vacuums.

■ Use wet sweeping to suppress airborne fiber and dust concentrations during cleanup.

■ When removing after-service RCF products, dampen insulation with a light water spray to prevent fibers and dust from becoming airborne. (However, use caution when dampening refractory linings during installation, since water can damage refractory-lined equipment, causing the generation of steam and possible explosion during heating.)

■ Clean work areas regularly using a HEPA-filtered vacuum or wet sweeping to minimize the accumulation of debris.

■ Ensure that workers wear long-sleeved clothing, gloves, and eye protection when performing potentially dusty activities involving RCFs or RCF products. For some activities, disposable clothing or coveralls may be preferred.

# Respiratory Protection

Respirators shall be used while performing any task for which the exposure concentration is unknown or has been documented to be higher than the NIOSH REL of 0.5 f/cm³ as a TWA. However, respirators shall not be used as the primary means of controlling worker exposures. When possible, use other methods for minimizing worker exposures to RCFs:

■ Product substitution

■ Engineering controls

■ Changes in work practices

Use respirators when available engineering controls and work practices do not adequately control worker exposures below the REL for RCFs. NIOSH recognizes that controlling exposures to RCFs is a particular challenge during the finishing stages of RCF product manufacturing and during the installation and removal of refractory materials.

# Respiratory Protection Program

When respiratory protection is needed, employers shall establish a comprehensive respiratory protection program as described in the OSHA respiratory protection standard [29 CFR 1910.134]. Elements of a respiratory protection program must be established and described in a written plan that is specific to the workplace. Among other elements, the plan must include procedures for ensuring that workers comply with the medical evaluation and with the cleaning, storing, and maintenance requirements of the standard, and it must name a designated program administrator who is qualified to administer the respiratory protection program. Employers shall update the written program as necessary to account for changes in the workplace that affect respirator use. In addition, employers are required to provide at no cost to workers all equipment, training, and medical evaluations required under the respiratory protection program.

# Respirator Selection

When conditions of exposure to airborne RCFs exceed the REL, proper respiratory protection shall be selected as follows:

■ Select, at a minimum, a half-mask, air-purifying respirator equipped with a 100-series particulate filter. This respirator has an assigned protection factor (APF) of 10.

■ Provide a higher level of protection and prevent facial or eye irritation from RCF exposure by using a full-facepiece, air-purifying respirator equipped with a 100-series filter, or use any powered, air-purifying respirator equipped with a tight-fitting full facepiece.

■ Consider providing a supplied-air respirator with a full facepiece for workers who remove after-service RCF insulation (e.g., furnace insulation) and are therefore exposed to high and unpredictable concentrations of RCFs. These respirators provide a greater level of respiratory protection.
Use them whenever the work task involves potentially high concentrations of airborne fibers.

■ Always perform a comprehensive assessment of workplace exposures to determine the presence of other possible contaminants (such as silica) and to ensure that proper respiratory protection is used.

■ Use only respirators approved by NIOSH and MSHA.

# Sanitation and Hygiene

Employers shall take the following measures to protect workers potentially exposed to RCFs:

■ Do not permit smoking, eating, or drinking in areas where workers may contact RCFs.

■ Provide showering and changing areas free from contamination where workers can store work clothes and change into street clothes before leaving the work site.

■ Provide services for laundering work clothes so that workers do not take contaminated clothes home.

■ Protect laundry workers handling RCF-contaminated clothes from airborne concentrations that are above the REL.

Workers shall take the following protective measures:

■ Do not smoke, eat, or drink in areas potentially contaminated with RCFs.

■ If fibers get on the skin, wash with warm water and mild soap.

■ Apply skin-moisturizing cream or lotion as needed to avoid irritation caused by frequent washing.

■ Wear long-sleeved clothing, gloves, and eye protection when performing potentially dusty activities involving RCFs. Vacuum this clothing with a HEPA-filtered vacuum before leaving the work area.

■ Do not use compressed air to clean the work area or clothing, and do not shake clothing to remove dust. These practices create a greater respiratory hazard by making dust and fibers airborne.

■ Do not wear work clothes or protective equipment home. Change into clean clothes before leaving the work site.

# Medical Monitoring

Medical monitoring (in combination with resulting intervention strategies) represents secondary prevention and should not replace primary prevention efforts to control airborne fiber concentrations and worker exposures to RCFs. However, compliance with the REL for RCFs (0.5 f/cm³) does not guarantee that all workers will be free from the risk of RCF-induced respiratory irritation or respiratory health effects. Therefore, medical monitoring is especially important, and employers shall establish a medical monitoring program as follows:

■ Collect baseline data for all employees before they begin work with RCFs.

■ Continue periodic medical screening throughout their lifetime.

■ Use medical surveillance, which involves the aggregate collection and analysis of medical screening data, to identify occupations, activities, and work processes in need of additional primary prevention efforts.

■ Include all workers potentially exposed to RCFs (in both manufacturing and end-use industries) in an occupational medical monitoring program.

■ Provide workers with information about the purposes of medical monitoring, the health benefits of the program, and the procedures involved.
■ Include the following workers (who could receive the greatest benefits from medical screening) in the medical monitoring program:
- Workers exposed to elevated fiber concentrations (e.g., all workers exposed to airborne fiber concentrations above the AL of 0.25 f/cm³, as described in Section 9.3)
- Workers in areas or in specific jobs and activities (regardless of airborne fiber concentration) in which one or more workers have symptoms or respiratory changes apparently related to RCF exposure
- Workers who may have been previously exposed to asbestos or other recognized occupational respiratory hazards that place them at increased risk of respiratory disease

# Elements of the Medical Monitoring Program

Include the following elements in a medical monitoring program for workers exposed to RCFs: (1) an initial medical examination, (2) periodic medical examinations at regularly scheduled intervals, (3) more frequent and detailed medical examinations as needed on the basis of the findings from these examinations, (4) worker training, (5) written reports of medical findings, (6) quality assurance, and (7) evaluation. These elements are described in the following subsections.

# Initial (baseline) examination

Perform an initial (baseline) examination as near as possible to the date of beginning employment (within 3 months) and include the following:

■ A standardized respiratory symptom questionnaire [Ferris 1978, or the most recent equivalent]

■ A standardized occupational history questionnaire that gathers information about all past jobs, with (1) special emphasis on those with potential exposure to dust and mineral fibers, (2) a description of all duties and potential exposures for each job, and (3) a description of all protective equipment the worker has used

# Periodic examinations

Administer periodic examinations (including a physical examination of the respiratory system and the skin, spirometric testing, a respiratory symptom update questionnaire, and an occupational history update questionnaire) at regular intervals determined by the medical monitoring program director. Determine the frequency of the periodic medical examinations according to the following guidelines, restated as a short sketch at the end of this section:

■ For workers with fewer than 10 years since first exposure to RCFs, conduct periodic examinations at least once every 5 years.

■ For workers with 10 or more years since first exposure to RCFs, conduct periodic examinations at least once every 2 years.

A chest X-ray and spirometric testing are important on initial examination and may also be appropriate medical screening tests during periodic examinations for detecting respiratory system changes, especially in workers with more than 10 years since first exposure to RCFs. A qualified health care provider should consult with the worker to determine whether the benefits of periodic chest X-rays warrant the additional exposure to radiation.

# More frequent evaluations

Workers may need to undergo more frequent and detailed medical evaluations if the attending physician finds any of the following indications:

■ New or worsening respiratory symptoms or findings (e.g., chronic cough, difficult breathing, wheezing, reduced lung function, or radiographic indications of pleural plaques or fibrosis)

Findings, test results, or diagnoses that have no bearing on the worker's ability to work with RCFs shall not be included in the report to the employer. Safeguards to protect the confidentiality of the worker's medical records shall be enforced in accordance with all applicable regulations and guidelines.
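The examination-frequency guideline above reduces to a simple rule on years since first exposure, shown in the sketch below with hypothetical worker histories.

```python
"""Sketch: the periodic-examination frequency rule stated above,
expressed as a function. Worker histories are hypothetical."""

def exam_interval_years(years_since_first_exposure: float) -> int:
    """At least every 5 years if <10 years since first RCF exposure,
    at least every 2 years thereafter."""
    return 5 if years_since_first_exposure < 10 else 2

for years in (3, 9, 10, 25):
    print(f"{years} yr since first exposure -> exam at least every "
          f"{exam_interval_years(years)} yr")
```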
# Quality assurance

Employers shall do the following to ensure the effective implementation of a medical monitoring program:

■ Ensure that workers follow the qualified health care provider's recommended exposure restrictions for RCFs and other workplace hazards.

■ Ensure that workers use appropriate PPE if they are exposed to RCF concentrations above the REL.

■ Encourage workers to participate in the medical monitoring program and to report any symptoms promptly to the program director.

■ Provide any medical evaluations that are part of the medical monitoring program at no cost to the workers.

■ When implementing job reassignments recommended by the medical program director, ensure that workers do not lose wages, benefits, or seniority.

■ Ensure that the medical monitoring program director communicates regularly with the employer's safety and health personnel (e.g., industrial hygienists) to identify work areas that may require control measures to minimize exposures to workplace hazards.

# Evaluation

Employers shall evaluate their medical monitoring programs as follows:

■ Periodically have standardized medical screening data aggregated and evaluated by an epidemiologist or other knowledgeable person to identify patterns of worker health that may be linked to work activities and practices requiring additional primary preventive efforts.

■ Combine routine aggregate assessments of medical screening data with evaluations of exposure monitoring data to identify needed changes in work areas or exposure conditions.

# Labeling and Posting

Employers shall post warning labels and signs as follows:

RESPIRATORY PROTECTION REQUIRED IN THIS AREA

■ Print all labels and warning signs in both English and the predominant language of workers who do not read English.

■ Verbally inform workers about the hazards and instructions printed on the labels and signs if they are unable to read them.

# Smoking Cessation

NIOSH recognizes a synergistic effect between exposure to RCFs and cigarette smoking. This effect increases the risk of adverse respiratory health effects induced by RCFs. In studies of workers exposed to various airborne contaminants, combined exposures to smoking and airborne dust have been shown to contribute to an increased risk of occupational respiratory diseases, including chronic bronchitis, emphysema, and lung cancer [Morgan 1994; Barnhart 1994]. Employers shall take the following measures:

■ Establish smoking cessation programs to inform workers about the increased hazards of combined cigarette smoking and exposure to RCFs.

■ Provide assistance and encouragement for workers who want to quit smoking.

■ Prohibit smoking in the workplace.

■ Disseminate information about health promotion and the harmful effects of smoking.

■ Offer smoking cessation programs to workers at no cost to participants.

■ Encourage activities that promote physical fitness and other healthy lifestyle practices affecting respiratory and cardiovascular health (e.g., through training programs, employee assistance programs, and health education campaigns).

NIOSH recommends that all workers who smoke and are potentially exposed to RCFs participate in smoking cessation programs.

# Background

In 1977, NIOSH reviewed health effects data on occupational exposure to fibrous glass and determined the principal adverse health effects to be skin, eye, and upper respiratory tract irritation as well as the potential for nonmalignant respiratory disease.
At that time, NIOSH recommended the following:

Occupational exposure to fibrous glass shall be controlled so that no worker is exposed at an airborne concentration greater than 3,000,000 fibers per cubic meter of air (3 fibers per cubic centimeter of air); . . . airborne concentrations determined as total fibrous glass shall be limited to a TWA of 5 milligrams per cubic meter of air [NIOSH 1977].

NIOSH also stated that until more information became available, this recommendation should be applied to other MMMFs, also called SVFs. Since then, additional data have become available from studies of animals and humans exposed to RCFs. The purpose of this report is to review and evaluate these studies and other information about RCFs.

In contrast, the crystalline structure of mineral fibers such as asbestos causes the fibers to fracture along the longitudinal plane under mechanical stresses, resulting in more fibers of the same length but smaller diameters. These differences in morphology and cleavage patterns suggest that work with SVFs is less likely than work with asbestos to generate high concentrations of airborne fibers for comparable operations, since large-diameter fibers settle out of the air faster than small-diameter fibers [Assuncao and Corn 1975; Cherrie et al. 1986; Lippmann 1990].

During the manufacturing of RCFs, approximately 50% of the product (by weight) is generated as fiber, and 50% is a byproduct of nonfibrous particulate material called shot. Selected physical characteristics of RCFs are presented in [RCFC 1996]. The addition of stabilizers and binders alters the durability and heat resistance of RCFs. Generally, three types of RCFs are manufactured, and a fourth, after-service fiber (often recognized in the literature) is distinguished according to its unique chemistry and morphology. Table 2-2 presents the chemistries of the four fiber types, numbered RCF1 through RCF4. RCF1 is a kaolin fiber; RCF2 is an alumina/silica/zirconia fiber; RCF3 is a high-purity (alumina/silica) fiber; and RCF4 is an after-service fiber characterized by devitrification (i.e., formation of the silica polymorph cristobalite), which occurs during product use over an extended period at temperatures exceeding 1,050 to 1,100 °C (>1,900 °F). Another fiber subcategory is RCF1a, prepared from commercial RCFs using a less aggressive method than that used to prepare RCF1 for animal inhalation studies [Brown et al. 2000]. RCF1a is distinguished from the RCF1 used in chronic animal inhalation studies by its greater concentration of longer fibers and fewer nonfibrous particles. The lower ratio of respirable nonfibrous particles to fibers in RCF1a compared with RCF1 has been shown to affect lung deposition and clearance in animal inhalation studies [Brown et al. 2000; Bellman et al. 2001]. Chapter 5 presents additional discussion of animal studies and test fiber characteristics.

# Chemical and Physical Properties of RCFs

# Fiber Dimensions

Fibers of biological importance are those that become airborne and have dimensions within the inhalable, thoracic, and respirable size ranges. Thoracic-sized fibers (<3 to 3.5 µm in diameter) and respirable-sized fibers (<1.3 µm in diameter) with lengths up to 200 µm [Timbrell 1982; Lippmann 1990; Baron 1996] are capable of reaching the portion of the respiratory system below the larynx. Respirable-sized fibers are of biological concern because they are capable of reaching the lower airways and gas-exchange regions of the lungs when inhaled.
Longer or thicker airborne fibers generally settle out of suspension or, if inhaled, are generally filtered out in the nasal passages or deposited in the upper airways. Thoracic-sized fibers that are inhaled and deposited in the upper respiratory tract are generally cleared more readily from the lung, but they have the potential to cause irritation and produce respiratory symptoms. Fiber dimensions are a significant factor in determining fiber deposition within the lung, biopersistence, and toxicity.

RCFs and other SVFs are manufactured to meet specified nominal diameters according to the fiber type and intended use. RCFs are produced with nominal diameters of 1.2 to 3 µm [Esmen et al. 1979; Vu 1988; TIMA 1993]. Typical diameters for individual RCFs in RCF-containing products (as measured by Mast et al. [1995a]) range from 0.1 to 20 µm, with lengths ranging from 5 to 200 µm [IARC 1988]. In bulk samples taken from three RCF blanket insulation products, more than 80% of the fibers counted by phase contrast optical microscopy (PCM) were <3 µm in diameter [Brown 1992]. This result is consistent with those from another study of bulk samples of RCF insulation materials [Christensen et al. 1993], which found the fibers to have geometric mean diameters (GM_D) ranging from 1.5 to 2.8 µm (arithmetic mean [AM] diameter range = 2.3 to 3.9 µm; median diameter range = 1.6 to 3.3 µm).

Studies of airborne fiber size distributions in RCF manufacturing operations indicate that these fibers meet the criteria for thoracic- and respirable-sized fibers. One early study of three domestic RCF production facilities found that approximately 90% of airborne fibers were <3 µm in diameter, and 95% were <4 µm in diameter and <50 µm long [Esmen et al. 1979]. The study showed that the diameter and length distributions of airborne fibers in the facilities were consistent, with a GM_D of 0.7 µm and a geometric mean length (GM_L) of 13 µm. Another study [Lentz et al. 1999] used these data in combination with monitoring data from two additional studies at RCF manufacturing plants [MacKinnon et al. 2001; Maxim et al. 1997] to review the characteristics of fibers sized from 118 air samples covering 20 years. Fibers with diameters <1 µm (n = 3,711) were measured by transmission electron microscopy (TEM). Of these, 52% had diameters <0.4 µm, 75% had diameters <0.6 µm, and 89% had diameters <0.8 µm. Fiber lengths ranged from <0.6 to >20 µm, with 68% of fibers measuring 2.4 to 20 µm long and 19% of fibers >20 µm long. On the basis of TEM analysis of 3,357 RCFs observed on 98 air samples collected at RCF manufacturing sites, Allshouse [1995] reported that 99.7% of the fibers had diameters <3 µm and 64% had lengths >10 µm. Measurements of airborne fibers in the European RCF manufacturing industry are comparable: Rood [1988] reported that all fibers observed were in the thoracic and respirable size range (i.e., diameter <3 to 3.5 µm), with median diameters ranging from 0.5 to 1.0 µm and median lengths from 8 to 23 µm.

Cheng et al. [1992] analyzed an air sample for fibers during removal of after-service RCF blanket insulation from a refinery furnace. Fiber diameters ranged from 0.5 to 6 µm, with a median diameter of 1.6 µm. Fiber lengths ranged from 5 to 220 µm. Of 100 fibers randomly selected and analyzed from the air sample, 87% were within the thoracic and respirable size range.
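Size-distribution statistics such as the GM_D of 0.7 µm reported by Esmen et al. are often paired with a geometric standard deviation (GSD) to estimate the fraction of fibers below a diameter cutoff, assuming a lognormal distribution. The sketch below illustrates that calculation; the GSD of 2.0 is assumed purely for illustration, since the text does not report one.

```python
"""Sketch: fraction of fibers below a diameter cutoff for an assumed
lognormal distribution. GM_D of 0.7 um is from the text; GSD is assumed."""
from math import erf, log, sqrt

def lognormal_cdf(x: float, gm: float, gsd: float) -> float:
    """P(diameter < x) for a lognormal with geometric mean gm and GSD gsd."""
    return 0.5 * (1.0 + erf((log(x) - log(gm)) / (log(gsd) * sqrt(2.0))))

GM_D, GSD = 0.7, 2.0  # um; GSD is a hypothetical value
for cutoff in (1.3, 3.0):  # respirable and thoracic diameter cutoffs
    frac = lognormal_cdf(cutoff, GM_D, GSD)
    print(f"fraction with diameter < {cutoff} um: {frac:.0%}")
```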
Another study of exposures to airborne fibers in industrial furnaces during the installation and removal of RCF materials found GMD values of 0.38 and 0.57 µm, respectively [Perrault et al. 1992].

# Fiber Durability

Fiber durability can affect the biologic activity of fibers inhaled and deposited in the respiratory system. Durable fibers are more biopersistent, thereby increasing the potential for causing a biological effect. The durability of a fiber is measured by the amount of time it takes for the fiber to fragment mechanically into shorter fibers or to dissolve in biological fluids. RCFs tested in vitro with a solution of neutral pH (modified Gamble's solution) had a dissolution rate of 1 to 10 ng/cm² per hr [Leineweber 1984]. This test is biologically relevant because the solution approximates the conditions of pulmonary interstitial fluid. By comparison, other SVFs (glass and slag wools) are more soluble, with dissolution rates in the hundreds of ng/cm² per hr [Scholze and Conradt 1987]. Along a continuum of fiber durability determined in tests using simulated lung fluids at pH 7.4, the asbestos fiber crocidolite has a dissolution rate of <1 ng/cm² per hr; RCF1 and MMVF32 (E glass) have dissolution rates of 1 to 10 ng/cm² per hr; MMVF21 has a dissolution rate of 15 to 25 ng/cm² per hr; other fibrous glasses and slag wools have dissolution rates in the range of 50 to 400 ng/cm² per hr; and the alkaline earth silicate wools have dissolution rates ranging from approximately 60 to 1,000 ng/cm² per hr [Christensen et al. 1994; Maxim et al. 1999b; Moore et al. 2001]. Chrysotile, which is considered the most soluble form of asbestos, has a dissolution rate of <1 to 2 ng/cm² per hr. RCFs dissolve more rapidly than chrysotile, even though RCF diameters are thicker (by an order of magnitude) than those of chrysotile. The rate of dissolution is an important fiber characteristic that affects the clearance time and biopersistence of the fiber in the lung. The significance of fiber dimension, clearance, and dissolution (i.e., breakage and solubility) is discussed in Chapter 6.

# RCF Production and Potential for Worker Exposure

# Production

RCF production in the United States began in 1942 on an experimental basis, but RCFs were not commercially available until 1953. Sales of RCFs were modest initially, but they began to expand when the material gained acceptance as an economical alternative insulation for high-temperature kilns and furnaces. Commercial production of RCFs first reached significant levels in the 1970s, as oil shortages necessitated reductions in energy consumption. The growing demand for RCFs has also been strongly influenced by the recognition of health effects associated with exposure to asbestos-containing materials and by the increasingly stringent regulation of these products in the United States and many other countries.

# Potential for Worker Exposure

Approximately 31,500 workers in the United States are potentially exposed to RCFs during manufacturing, processing, or end use. A similar number of workers are potentially exposed to RCFs in Europe. Of these workers, about 800 (3%) are employed in the actual manufacturing of RCFs and RCF products [Maxim et al. 1997; RCFC 2004].

# RCF Manufacturing Process

The manufacture of RCFs (Figure 3-1) begins with the blending of raw materials, which may include kaolin clay, alumina, silica, and zirconia, in a batch house. The batch mix is then transferred either manually or automatically to a furnace to be melted at temperatures exceeding 1,600°C.
On reaching a specified temperature and viscosity in the furnace, the molten batch mixture drains from the furnace and is fiberized, either through exposure to pressurized air or by flowing through a series of spinning wheels [Hill 1983]. (Figure 3-1 outlines the overall process: fibers are produced by blowing or spinning molten material drained from the furnace; bulk fiber is packaged; felt or blanket is cured in a tempering oven; and products are shipped, stored, or fabricated into specialty products.) Fans are used to create a partial vacuum that pulls the fibers into a collection or settling chamber. RCFs may then be conveyed pneumatically to a bagging area for packaging as bulk fiber. Some bulk fiber may be used directly in this form, or it may be processed to form textiles, felts, boards, cements, and other specialty items. Other RCFs are formed into blankets as bulk fiber in the collection chamber settles onto a conveyor belt. The blanket passes through a needle felting machine that interlocks the fibers and compresses the blanket to a specified thickness. From the needler, the blanket is conveyed to a tempering oven, where the lubricants added in the settling chamber are burned off; the blanket is then cut to the desired size and packaged. As with the bulk fiber, the RCF blanket may undergo additional fabrication to create other specialty products. Many of the processes are automated and are monitored by machine operators. Postproduction processes such as cutting, sanding, packaging, handling, and shipping are more labor intensive, but the potential for exposure to airborne fibers exists throughout production.

# RCF Products and Uses

RCFs may be used in bulk fiber form or as one of the RCF specialty products in the form of mats, paper, textiles, felts, and boards [RCFC 1996]. Because of their ability to withstand temperatures exceeding 1,000°C, RCFs are used predominantly in industrial applications, including insulation, reinforcement, and thermal protection for furnaces and kilns. RCFs can also be found in automobile catalytic converters, in consumer products that operate at high temperatures (e.g., toasters, ovens, woodstoves), and in space shuttle tiles. RCFs have been formed into noise-control blankets [Thornton et al. 1984] and used as a replacement for refractory bricks in industrial kilns and furnaces [RCFC 1996]. RCFs have also found increasing application as reinforcements in specialized metal matrix composites (MMCs), especially in the automotive and aerospace industries [Stacey 1988]. A summary of RCF products and applications is provided here.

# Examples of Products

■ Blankets: high-temperature insulation produced from spun RCFs in the form of a mat or blanket
■ Boards: high-temperature insulation produced from bulk fibers in the form of a compressed rigid board (boards have a higher density than blankets and are used as core material or in sandwich assemblies)
■ Bulk RCFs: fibers with high-temperature resistance, used as feedstock in manufacturing processes or in other applications for which product consistency is critical, typically in the manufacture of other ceramic-fiber-based products

The conventional method used to assess the characteristics and concentrations of exposures to airborne fibers is to collect personal and environmental (area) air samples for laboratory analysis. Personal samples are the preferred method for estimating the exposure characteristics of a worker performing specific tasks.
For personal sampling, a worker is equipped with the air sampling equipment, and the collection medium is positioned within the worker's breathing zone. Area sampling is performed to evaluate exposure characteristics associated with an area or process. Sampling equipment for area sampling is stationary, in contrast to personal sampling equipment, which accompanies the worker throughout the sampling period.

# Sampling for Airborne Fibers

The two NIOSH methods for the sampling and analysis of airborne fibers of asbestos and other fibrous materials are as follows:

■ Method 7400 describes air sampling and analysis by PCM.
■ Method 7402 describes air sampling and analysis by TEM.

Both methods (listed in the NIOSH Manual of Analytical Methods [NIOSH 1998] and provided in Appendix A) involve using an air sampling pump connected to a cassette. The cassette consists of a conductive cowl equipped with a 25-mm cellulose ester membrane filter (0.45- to 1.2-µm pore size). The pump is used to draw air through the sampling cassette at a constant flow rate between 0.5 and 16 L/min. Airborne fibers and other particulates are trapped on the filter for analysis using microscopic methods. Methods 7400 and 7402 can be used to count the number of fibers (and therefore to calculate concentration on the basis of the volume of air sampled) and to measure fiber dimensions. Fiber concentration is reported as the number of fibers per cubic centimeter of air (f/cm³). Although the two methods differ in the preparation of the sampling media for analysis, the major distinction between them is the resolving capability of the microscope. With PCM, 0.25 µm is approximately the diameter of the thinnest fibers that can be observed [Dement and Wallingford 1990]. TEM has a lower resolution limit, well below the diameter of the smallest RCF (~0.02 to 0.05 µm) [Middleton 1982]. TEM also allows for qualitative analysis of fibers using an energy-dispersive X-ray analyzer (EDXA) to determine elemental composition and selected area electron diffraction (SAED) to compare diffraction patterns with reference patterns for identification.

# NIOSH Fiber-Counting Rules

The appendix to NIOSH Method 7400 specifies two sets of fiber-counting rules that vary according to the parameters used to define a fiber. Under the A rules, any particle >5 µm long with an aspect ratio (length to width) >3:1 is considered a fiber; no upper limit is placed on fiber diameter. Under the B rules, a fiber is defined as being >5 µm long with an aspect ratio ≥5:1 and a diameter <3 µm. Maxim et al. [1997] found that fiber counts made using the NIOSH Method 7400 B rules equal approximately 95% of the counts determined using the WHO reference method. In studies with other SVFs, Lees et al. [1993] also found that fiber exposure estimates were slightly higher using the A rules but were comparable to the values obtained using the B rules. Breysse et al. [1999] reported a similar finding when comparing RCF fiber counts determined by both A and B rules: the ratio of A to B counts was 1.33. These results suggest that for airborne RCF exposures, most fibers with a >3:1 aspect ratio also meet the ≥5:1 aspect ratio criterion and are <3 µm in diameter.

# Sampling for Total or Respirable Airborne Particulates

Airborne exposures generated during work with RCFs may also be estimated by sampling for general dust concentrations.
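Both the fiber-counting and gravimetric approaches described in this section reduce to the same arithmetic: the quantity of analyte collected on the filter divided by the volume of air sampled. The sketch below applies the Method 7400 A- and B-rule fiber definitions quoted above and converts a PCM fiber count to f/cm³ and a filter mass gain to mg/m³. The filter and graticule constants are typical values assumed for illustration; they are not specified in this document.

```python
# Sketch of the arithmetic behind the sampling methods above.
# Fiber definitions follow the Method 7400 A and B counting rules as
# quoted in the text; filter/graticule constants are assumed values.

def is_fiber_a_rules(length_um: float, width_um: float) -> bool:
    """A rules: length > 5 um and aspect ratio (length:width) > 3:1."""
    return length_um > 5.0 and length_um / width_um > 3.0

def is_fiber_b_rules(length_um: float, width_um: float) -> bool:
    """B rules: length > 5 um, aspect ratio >= 5:1, diameter < 3 um."""
    return length_um > 5.0 and length_um / width_um >= 5.0 and width_um < 3.0

def pcm_concentration_f_per_cm3(fibers_counted: int,
                                fields_counted: int,
                                air_volume_L: float,
                                filter_area_mm2: float = 385.0,    # assumed 25-mm filter
                                field_area_mm2: float = 0.00785):  # assumed graticule field
    """Concentration = (fiber density on filter x filter area) / air volume."""
    density_per_mm2 = fibers_counted / (fields_counted * field_area_mm2)
    return density_per_mm2 * filter_area_mm2 / (air_volume_L * 1000.0)  # L -> cm^3

def gravimetric_mg_per_m3(filter_gain_mg: float, air_volume_L: float) -> float:
    """Dust concentration = particulate mass / air volume (Methods 0500/0600)."""
    return filter_gain_mg / (air_volume_L / 1000.0)  # L -> m^3

# Example: 64 fibers in 100 fields on an 8-hr sample drawn at 2 L/min.
volume_L = 2.0 * 60 * 8
print(f"{pcm_concentration_f_per_cm3(64, 100, volume_L):.3f} f/cm^3")
print(f"{gravimetric_mg_per_m3(1.5, volume_L):.2f} mg/m^3")
```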
Sampling for particulates not otherwise regulated is described in NIOSH Method 0500 for total dust concentrations and in NIOSH Method 0600 for the respirable fraction [NIOSH 1998]. Both methods (also included in Appendix A) use a sampling pump to pull air through a filter that traps suspended particulates. NIOSH Method 0600 uses a size-selective sampling apparatus (cyclone) to separate the respirable fraction of airborne material from the nonrespirable fraction. The mass of airborne particulates on the filter is measured by gravimetric analysis, and the airborne concentration is determined as the ratio of particulate mass to the volume of air sampled, reported in mg/m³ (or µg/m³). This method does not distinguish fibers from nonfibrous airborne particles. No NIOSH REL exists for either total or respirable particulates not otherwise regulated. The OSHA permissible exposure limit (PEL) for particulates not otherwise regulated is 15 mg/m³ for total particulates and 5 mg/m³ for respirable particulates as 8-hr TWA concentrations. The American Conference of Governmental Industrial Hygienists (ACGIH) threshold limit value (TLV) for particles (insoluble or poorly soluble) not otherwise specified is 10 mg/m³ for inhalable particles and 3 mg/m³ for respirable particles as 8-hr TWA concentrations [ACGIH 2005].

# Sampling for Airborne Silica

Because silica is a major constituent of RCFs, the potential exists for exposure to silica during work with RCFs (e.g., in manufacturing or during removal of after-service RCF furnace insulation). As with sampling for respirable particulates, sampling for respirable silica involves using a pump to draw air through a cyclone before collecting respirable airborne particles on a filter. Qualitative and quantitative analysis of the sample for silica content can be performed by analytical methods such as X-ray diffraction (NIOSH Method 7500).

# Industrial Hygiene Surveys and Exposure Assessments

Assessments of occupational exposures, including quantitative measurement of airborne fiber concentrations associated with manufacturing, handling, and using RCFs, have been performed using industrial hygiene surveys and air sampling techniques at multiple worksites. Sources of monitoring data that characterize occupational exposures to RCFs include the following:

■ University of Pittsburgh studies of exposures at RCF manufacturing sites in the 1970s [Corn and Esmen 1979; Esmen et al. 1979]
■ An ongoing University of Cincinnati epidemiologic study with exposure assessments that use historical monitoring data and current monitoring strategies [Rice et al. 1994, 1996, 1997]
■ A 5-year consent agreement between the RCFC and the U.S. Environmental Protection Agency (EPA) to monitor worker exposures in RCF manufacturing plants and among secondary users of RCFs and RCF products [RCFC 1993; Everest 1998; Maxim et al. 1994, 1997, 2000a]
■ Studies of exposure to airborne fibers during the installation and removal of RCF insulation in industrial furnaces [Gantner 1986; Cheng et al. 1992; van den Bergen et al. 1994; Sweeney and Gilgrist 1998; Maxim et al. 1999b]
■ International (Canadian, Swedish, Australian) industrial hygiene surveys of occupational exposures to RCFs [Perrault et al. 1992; Krantz et al. 1994; Rogers et al. 1997]
■ A study of end-user exposures to RCF insulation products by researchers at Johns Hopkins University [Corn et al. 1992]
■ NIOSH Health Hazard Evaluations (HHEs) of occupational exposures to RCFs

# University of Pittsburgh Survey of Exposures During RCF Manufacturing

In the mid-1970s, researchers from the University of Pittsburgh conducted environmental monitoring to assess worker exposures to airborne fibers at domestic RCF manufacturing facilities. This research effort was one of the pioneering studies in the use of workplace exposure groupings, or dust zones, for establishing a sampling strategy [Corn and Esmen 1979]. In a series of industrial hygiene surveys, Esmen et al. [1979] collected 215 full-shift air samples at three RCF manufacturing plants. Approximately 95% of the airborne fibers measured were <4.0 µm in diameter and <50 µm long, with a GMD of 0.7 µm and a GML of 13 µm.

# University of Cincinnati Study of Exposures During RCF Manufacturing

In 1987, researchers from the University of Cincinnati initiated an industry-wide epidemiologic study of workers who manufacture RCFs. One aim of the study was to characterize current and former exposures to RCFs and silica in U.S. RCF manufacturing facilities. Data from initial surveys conducted at five RCF manufacturing plants indicated airborne RCFs with a GMD ranging from 0.25 to 0.6 µm and a GML ranging from 3.8 to 11.0 µm [Lockey et al. 1990]. The airborne TWA fiber concentrations for these five plants ranged from <0.01 to 1.57 f/cm³. After the first two rounds of quarterly sampling, Rice et al. [1994] had collected data from 484 fiber count samples (382 samples with values greater than the analytic limit of detection [LOD], 39 overloaded samples, 36 samples with values below the LOD, and 27 samples voided because of tampering or pump failure). They also collected 35 samples from persons working with raw materials; these were analyzed quantitatively and qualitatively for respirable mass and for silica polymorphs (quartz, tridymite, and cristobalite). A sampling strategy was developed by identifying more than 100 job functions across the 5 facilities. These job functions were consolidated into industry job titles based on similarities of function, proximity to certain processes, and exposure characteristics within designated dust zones. Fiber concentrations measured in these samples, reported in earlier analyses [Bennett et al. 1989; Breysse et al. 1990], ranged from below the analytical LOD to 1.54 f/cm³. Of the 35 samples analyzed for the silica polymorphs, quantifiable silica was found in 5 samples: 4 contained cristobalite in concentrations ranging from 20 to 78 µg/m³, and 1 contained 70 µg/m³ quartz. The measurable silica exposures occurred among workers employed as raw material handlers and furnace operators.

As the study progressed, approximately 1,820 work history interviews were conducted and evaluated to refine uniform job titles and to identify dust zones according to the method of Corn and Esmen [1979]. Four years of sampling data (1987–1991) were merged with historical sampling data to construct exposure estimates for 81 job titles in 7 facilities for specified time periods [Rice et al. 1997]. Overall, exposures decreased. The maximum exposure estimated was 10 f/cm³ in the 1950s for carding in a textile operation; subsequent changes in engineering, process, and ventilation reduced exposure estimates for all 20 job titles to near or below 1 f/cm³ [Rice et al. 1996, 1997]. The study reported that for more recent operations (1987–1991), exposure estimates ranged from below the analytic LOD to 0.66 f/cm³. Subsequently, Rice et al. [2005] published the results of an analysis of exposure estimates for 10 years of follow-up sampling (1991–2001).
The study showed that exposures decreased for 25% of job titles, remained stable for 53%, and increased for 22%. Of the job titles with increased exposure estimates, 9 estimates were >0.1 f/cm³ (range=0.1 to 0.21 f/cm³), and 19 estimates were <0.1 f/cm³. The exposure estimates for this study do not include adjustments for respirator use.

# RCFC/EPA Consent Agreement Monitoring Data

In 1993, the RCFC and the EPA entered into a negotiated 5-year consent agreement to determine the magnitude of RCF exposures in the primary RCF manufacturing industry and in secondary RCF-use industries [RCFC 1993; Maxim et al. 1994, 1997; Everest 1998]. Another purpose of this consent agreement was to document changes in RCF exposures during the 5 years of the agreement (1993–1998). The Quality Assurance Project Plan in the consent agreement contains the analytical protocols, the statistical design, a description of the program objectives, and timetables for meeting the objectives [RCFC 1993]. During each year of the consent agreement, a minimum of 720 personal air samples (measured as 8-hr TWAs) were collected according to a stratified random sampling plan. Of these, 320 samples were collected in RCF manufacturing and processing (primary) facilities; the remaining 400 samples were collected in RCF customer facilities, referred to as end-use (secondary) facilities. (For these analyses, fibers were defined as having an aspect ratio ≥5:1 and a length >0.5 µm when sized by transmission electron microscopy; for scanning electron microscopy, fibers were defined by the same aspect ratio and a length >5 µm.) The researchers collected a total of 4,576 samples. A number of the end-use facilities were randomly selected from a list of known purchasers of RCF products; the remainder were facilities that volunteered for sampling once they learned of the consent agreement.

The strata from which the 720 samples were collected consist of eight functional job categories derived so that results could be aggregated for comparison across industries, facilities, and similar job functions [RCFC 1993]. This categorization was based on the approach instituted by Corn and Esmen [1979]. Appendix B lists definitions and major work tasks for each functional job category. TWA and task-length average air sampling data were gathered according to NIOSH Method 7400 (B rules) and analyzed using PCM and TEM. Data on respirator use (by type) were also collected [Maxim et al. 1998].

As background for the consent agreement monitoring plan, baseline (now referred to as historical) information about airborne fiber concentrations was obtained through personal sampling of workers at RCF manufacturing facilities from January 1989 to May 1993. The exposure monitoring strategies used during the baseline period (1989–1993) provided the framework for the consent agreement (1993–1998) monitoring protocol. Tables 4-4 through 4-6 summarize the air sampling data for the baseline (1989–1993) and consent agreement monitoring (1993–1998) periods by manufacturing and end-use sectors. A comparison of values from Tables 4-4, 4-5, and 4-6 with those in Table 4-3 indicates that average airborne concentrations for 1993–1998 were lower than those for the preceding baseline sampling period (1989–1993).
However, a comparison of values in Tables 4-5 and 4-6 shows that average concentrations for the entire 5-year consent agreement monitoring period (1993–1998) are equal to those of year 5 (i.e., no change). After the first 3 years (1993–1996) of the consent agreement monitoring period, Maxim et al. [1997] performed interim analyses of these data combined with historical data from the baseline monitoring period (1989–1993). The following conclusions about RCF exposures were based on these analyses of data from 1,600 baseline samples and 3,200 consent agreement samples:

■ Airborne concentrations of RCFs are generally decreasing in the workplace.
■ Ninety percent of airborne concentrations of RCFs in the workplace are below 1 f/cm³.
■ RCF concentrations have an approximately log-normal distribution.
■ Significant differences exist in workplace concentrations by facility.
■ Workplace concentrations vary with functional job category.
■ Respirator usage varies with the worker's functional job category and the associated average fiber concentration.
■ Workplace samples have a lower ratio of respirable nonfibrous particles to fibers than the samples used in initial animal inhalation studies [Mast et al. 1995a,b; McConnell et al. 1995].

Functional job categories with the highest average TWA fiber concentrations include removal (AM=1.2 f/cm³), finishing (AM=0.8 f/cm³), and installation (AM=0.4 f/cm³). The remaining functional job categories had average TWA concentrations near or below 0.3 f/cm³. Although different jobs and activities are associated with the three higher-exposure functional job categories, similarities exist that contribute to exposure concentrations. Removal and installation activities are performed at remote jobsites, where implementing fixed engineering controls to reduce airborne fiber concentrations may be difficult or impractical. Removal also requires more mechanical energy and may involve fracturing the structure of the RCF product, resulting in fiber release and higher concentrations of airborne fibers. Finishing activities are performed at fixed locations where it is possible to implement engineering controls, but they also involve mechanical energy to shape RCF products by drilling, sanding, and sawing; these processes likewise disperse airborne fibers.

Regarding the particle-to-fiber ratio, Maxim et al. [1997] found average workplace values to be much lower (0.53; n=10; range not reported) than the average ratio (9.1; n=7) in the samples used in a series of animal inhalation toxicity studies with RCFs [Mast et al. 1995a,b, 2000; McConnell et al. 1995]. Monitoring performed during the baseline period (August 1989–May 1993) and the 5-year consent agreement period (June 1993–May 1998) provided data from nearly 6,200 air samples in the domestic RCF industry. Table 4-6 presents the summary statistics of workplace RCF exposure concentrations for the baseline (historical) and consent agreement monitoring data. The data suggest that (1) the AMs and GMs of RCF concentrations were higher for workers during the baseline period than during the more recent (consent agreement) period, and (2) AM and GM exposure concentrations were lower for workers in manufacturing facilities than at end-use sites.
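As an illustration of the log-normal characterization reported by Maxim et al. [1997], the sketch below estimates the fraction of 8-hr TWA samples expected to exceed a benchmark concentration, given a GM and GSD. The GM and GSD values are hypothetical, chosen only to show the arithmetic.

```python
# Sketch: for an approximately log-normal exposure distribution (as
# Maxim et al. [1997] describe), estimate the fraction of 8-hr TWA
# samples exceeding a benchmark such as 1 f/cm^3.
import math

def fraction_exceeding(gm: float, gsd: float, benchmark: float) -> float:
    """P(X > benchmark) for a lognormal distribution with given GM and GSD."""
    z = (math.log(benchmark) - math.log(gm)) / math.log(gsd)
    return 0.5 * math.erfc(z / math.sqrt(2.0))  # standard normal tail

gm, gsd = 0.2, 2.5  # hypothetical GM (f/cm^3) and GSD
print(f"P(TWA > 1 f/cm^3) = {fraction_exceeding(gm, gsd, 1.0):.1%}")
```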
# Exposures During Installation and Removal of RCF Furnace Insulation

To evaluate exposures to airborne dust associated with removing RCF furnace insulation, Gantner [1986] conducted surveys with air sampling at five sites where workers removed modules or blanket-type insulation manually using knives or trowels. During removal activities, workers wore disposable, single-use respirators; disposable protective clothing or their own personal clothing; and (in some cases) goggles or other protective eyewear. Personal sampling was performed for total dust concentration as well as for respirable dust concentration using a cyclone. Area samples were collected in the center of the work zones (industrial furnaces) at 9 ft above the floor, the breathing zone level of the workers, who were on scaffolding. A total of 24 air samples were collected, including 14 personal samples (9 for respirable dust and 5 for total dust concentrations) and 10 area samples (3 for respirable dust and 7 for total dust concentrations).

Cheng et al. [1992] studied exposures to RCFs during the installation and removal of RCF insulation in 13 furnaces at 6 refineries and 2 chemical plants. Air samples were collected and analyzed according to NIOSH Method 7400 (A rules); sampling times ranged from 15 to 300 min. Samples collected during minor maintenance and inspection tasks (n=27) showed GM concentrations of 0.08 to 0.39 f/cm³ (range=0.02 to 17 f/cm³). Sampling performed during installation of RCF insulation (n=59) revealed GM concentrations of 0.14 to 0.62 f/cm³ (range=0.02 to 2.6 f/cm³). The highest exposures were observed in samples collected during removal of RCF insulation (n=32), with GM concentrations of 0.02 to 1.3 f/cm³ (range=<0.01 to 17 f/cm³). Workers outside of enclosed spaces (furnaces) were rarely exposed to more than 0.2 f/cm³. One sample of after-service RCF insulation was analyzed for fiber diameter and length: the median diameter was 1.6 µm (range=0.5 to 6 µm), and lengths ranged from 5 to 220 µm. Of 100 fibers randomly selected and analyzed from the air sample, 87% were within the respirable size range. Four personal samples collected during removal of after-service RCF modules and fire bricks were analyzed for respirable crystalline silica (cristobalite); concentrations ranged from 0.03 to 0.2 mg/m³ (GM=0.06 mg/m³).

At a Dutch oil refinery, van den Bergen et al. [1994] performed personal air monitoring for airborne fibers to assess worker exposures during the removal of RCF insulation from expansion seams in a heat-treating furnace. The 8-hr TWA exposures for the 5 workers sampled ranged from 9 to 50 f/cm³ (GM=16 f/cm³). Sweeney and Gilgrist [1998] also monitored worker exposures to airborne RCFs and respirable silica during the removal of RCF materials from furnaces. Personal samples from two workers taken during the removal of after-service RCF insulation revealed exposures of 0.15 and 0.16 f/cm³. Exposures to total particulate (18.3 and 22.4 mg/m³ as 8-hr TWAs) were above the OSHA PEL of 15 mg/m³. Exposure concentrations for respirable dust containing crystalline silica (2.4% and 4.3%) were also above the OSHA PEL. The elevated concentrations of respirable and total dust were associated with removal of conventional refractory lining using jackhammers, crowbars, and hammers. A worker performing routing to install new RCF insulation was exposed at 1.29 f/cm³ (8-hr TWA).
Personal samples from another worker using a bandsaw to cut new RCF insulation revealed a concentration of 1.02 f/cm³ as an 8-hr TWA.

In the RCF industry, worker exposures to respirable crystalline silica (including quartz, cristobalite, and tridymite) may occur during the use of silica in manufacturing, the removal of after-service insulation, and waste disposal. Focusing on exposures of workers who install, use, or remove RCF insulation, Maxim et al. [1999a] collected 158 personal air samples analyzed for respirable quartz, cristobalite, and tridymite over the RCFC/EPA 5-year consent agreement monitoring period (1993–1998). A total of 42 removal projects were sampled. For small jobs, all workers engaged in insulation removal were sampled; for larger jobs, workers were selected at random for sampling. Air sampling and analysis were performed according to NIOSH Method 7500 for crystalline silica by X-ray diffraction; sampling times ranged from 37 to 588 min (AM=260 min, standard deviation [SD]=129 min). Short sampling times reflected the short duration of RCF insulation removal tasks (a benefit over time-intensive removal of conventional refractories). Removal of RCF blankets and modules is performed by using knives, pitchforks, rakes, and water lances, or by hand-peeling. The study noted that most (>90%) workers wear respirators (with protection factors from 10 to 50 or more) when removing insulation. Analysis of the 158 samples found the following:

■ Fourteen samples had task-time respirable quartz concentrations ranging from 0.01 to 0.44 mg/m³ (equivalent 8-hr TWA range=0.004 to 0.148 mg/m³); the remainder of the samples were below the LOD.
■ Three samples had detectable concentrations of cristobalite, all below the NIOSH REL of 0.05 mg/m³.
■ One sample contained tridymite (0.2 mg/m³) at a concentration exceeding the NIOSH REL of 0.05 mg/m³.

# International (Canadian, Swedish, and Australian) Surveys of RCF Exposure

Perrault et al. [1992] reported on the characteristics of fiber exposures that occurred during the use of synthetic fiber insulation materials at construction sites in Canada. Fiber dimensions were measured from bulk samples of insulation materials used at five construction sites. Area air samples were also collected during the installation of composite RCF and glass wool insulation, glass wool alone, rock wool (both blown and sprayed on), and RCFs alone. Respirable fiber concentrations were highest during removal and installation of RCFs (0.39 to 3.51 f/cm³) compared with concentrations measured during installation of rock wool (0.15 to 0.32 f/cm³), composite RCF and glass wool (0.04 f/cm³), and glass wool alone (0.01 f/cm³). Diameters of fibers in bulk samples differed significantly from diameters of airborne fibers. RCFs had the smallest GMD among fibers in bulk samples (0.38 to 0.55 µm) compared with glass wool (0.93 µm) and rock wool (1.1 to 3.9 µm). For airborne fibers, sprayed-on rock wool had a GMD of 2.0 µm, followed by RCFs (1.1 µm), composite RCFs and glass wool (0.71 µm), glass wool (0.5 µm), and blown rock wool (0.5 µm). Elemental analysis and comparison of bulk samples with air samples revealed a greater concentration of fibers with oxides of silicon and aluminum in the air samples. For sites with either glass wool or rock wool insulation, airborne samples contained fewer fibers with silicon oxide as the sole constituent than did bulk samples.
The authors concluded that airborne fiber concentrations were affected by the type of fiber material used and the confinement of the worksites. The authors also concluded that characterization of fibers in bulk samples does not adequately represent the physical and chemical parameters of the airborne fibers.

A report by the Swedish National Institute for Occupational Health [Krantz et al. 1994] describes exposure to RCFs in smelters and foundries, based on industrial hygiene surveys and sampling at 4 facilities: a specialty steel foundry (2,500 workers), a metal smelting plant (1,500 workers), an aluminum foundry (450 workers), and an iron foundry (450 workers). RCF products were used in these plants in ladles, tapping spouts, holding furnaces, heat treatment furnaces, and spill protection mats. Workers and contractors were placed into three exposure categories according to their potential for exposure (as determined by distance from a fiber source). Workers with the highest exposures to airborne ceramic fibers (category 1) had median concentrations of 0.26 to 1.2 f/cm³ and made up about 3% (n=160) of the workers at the plants surveyed. Secondary exposures (categories 2 and 3) involved another 33% (n=1,650) of the workers, with median concentrations of 0.03 to 0.24 f/cm³. During certain operations, such as removal or demolition of RCF materials in enclosed spaces, concentrations of up to 210 f/cm³ were measured. Total dust concentrations increased with fiber concentration and were as high as 600 mg/m³ during demolition and 60 mg/m³ during reinsulation. Median fiber diameters from bulk samples analyzed by electron microscopy ranged from 0.6 to 1.5 µm, comparable to the diameters of the airborne fibers. On the basis of the air sampling data, fiber dose (assuming a working lifetime of 40 years [fiber concentration × exposure time per year × 40 years]) was estimated for 8 occupations with category 1 exposures. Dose estimates ranged from 0.05 fiber-years/cm³ for a cleaner to 85 fiber-years/cm³ for a bricklayer or contractor. Dose estimates for the 6 other occupations ranged from 0.6 to 3.1 fiber-years/cm³.

Researchers at the Australian National Occupational Health and Safety Commission established a technical working group to investigate typical exposures in SVF manufacturing and user industries [Rogers et al. 1997]. The RCF manufacturing industry in Australia is relatively small: 2 plants, employing roughly 40 workers, have manufactured RCFs since 1976 and 1977, and 152 persons have been involved with production since the plants began operating. Airborne fiber concentrations in both plants decreased over time as a result of (1) the introduction of a national exposure standard of 0.5 f/cm³ for synthetic fibers and a secondary standard of 2 mg/m³ for inspirable dust, (2) the use of various controls and handling technologies, and (3) increased awareness of dust suppression by the workforce. GM concentrations of airborne fibers before implementation of the synthetic fiber exposure standard (1983–1990) measured 0.52 f/cm³ (geometric standard deviation [GSD]=3.9) at plant 1 and 0.29 f/cm³ (GSD=2.5) at plant 2. GM concentrations for the subsequent period (1991–1996) dropped to 0.11 f/cm³ (GSD=4.1) at plant 1 and 0.27 f/cm³ (GSD=3.3) at plant 2.
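The working-lifetime dose arithmetic used by Krantz et al. [1994] (fiber concentration × exposure time per year × 40 years) can be sketched directly; the occupations and values below are illustrative stand-ins, not the study's raw data.

```python
# Sketch of the working-lifetime dose arithmetic described by
# Krantz et al. [1994]: cumulative dose (fiber-years/cm^3) =
# fiber concentration x fraction of the year exposed x 40 years.
# Job names and values are hypothetical.

def fiber_years(conc_f_per_cm3: float, fraction_of_year_exposed: float,
                working_years: int = 40) -> float:
    return conc_f_per_cm3 * fraction_of_year_exposed * working_years

examples = {
    "cleaner (brief, infrequent exposure)": (0.05, 0.025),
    "furnace maintenance (periodic)":       (0.30, 0.25),
    "bricklayer/contractor (routine)":      (2.10, 1.0),
}
for job, (conc, frac) in examples.items():
    print(f"{job}: {fiber_years(conc, frac):.2f} fiber-years/cm^3")
```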
# Johns Hopkins University Industrial Hygiene Surveys

A report of RCF end-user exposure data prepared for the Thermal Insulation Manufacturers Association (TIMA) showed that using blanket, bulk, and vacuum-formed RCFs during certain operations resulted in high fiber concentrations [Corn et al. 1992]. For example, 25 personal air samples collected from workers installing RCF blanket modules had an AM 8-hr TWA concentration of 1.36 f/cm³ (SD=1.15). The fibers were collected and analyzed using NIOSH Method 7400 (B rules). Seventeen vacuum formers had AM exposure concentrations of 0.71 f/cm³ (SD=0.83) while using bulk RCF products. Twenty-eight workers with the job title of vacuum-formed RCF cast finisher had AM exposures of 1.55 f/cm³ (SD=1.51). Table 4-7 summarizes the exposure data collected for the 17 occupations sampled during the study. Scanning electron microscopy (SEM) was used to measure the dimensions of approximately 3,500 fibers from selected air samples of the 17 occupations. GM fiber diameters ranged from 0.9 to 1.5 µm, and GM fiber lengths ranged from 20.4 to 36.1 µm. Fiber aspect ratios based on these data ranged between 16:1 and 30:1.

# NIOSH HHEs and Additional Sources of RCF Exposure Data

NIOSH has conducted HHEs involving potential exposures to RCFs at the following workplaces: an RCF manufacturing facility [Lyman 1992], a steel foundry [O'Brien et al. 1990], a power plant [Cantor and Gorman 1987], a foundry [Gorman 1987], and a railroad car wheel and axle production facility [Hewett 1996].

# Discussion

Recent and historical environmental monitoring data [Esmen et al. 1979; Cantor and Gorman 1987; Gorman 1987; O'Brien et al. 1990; Cheng et al. 1992; Brown 1992; Corn et al. 1992; Lyman 1992; Allshouse 1995; Hewett 1996] indicate that airborne concentrations of RCFs include fibers in the thoracic and respirable size range (<3.5 µm in diameter and <200 µm long [Timbrell 1982; Lippmann 1990; Baron 1996]). Workers are exposed to these concentrations during primary RCF manufacturing, secondary manufacturing or processing, and end-use activities such as RCF installation and removal. Sampling data from studies of domestic primary RCF manufacturing sites indicate that average airborne fiber concentrations have steadily declined, by nearly 2 orders of magnitude, over the past 2 decades. Rice et al. [1997] report an estimated maximum airborne concentration of 10 f/cm³ associated with an RCF manufacturing process in the 1950s. Esmen et al. [1979] measured average exposure concentrations ranging from 0.05 to 2.6 f/cm³ in RCF manufacturing facilities in the mid- to late 1970s. During the late 1980s, Rice et al. [1994, 1996, 1997] calculated average airborne concentrations in manufacturing facilities that ranged from below the LOD to 0.66 f/cm³. Maxim et al. [1994, 1997, 2000a] report that from the late 1980s through 1997, concentrations ranged from an AM of <0.3 to 0.6 f/cm³ (GM ≈0.2 f/cm³). For many RCF manufacturing processes, reductions in exposure concentrations have been realized through improved ventilation, engineering or process changes, and product stewardship programs [Rice et al. 1996; Maxim et al. 1999b]. Several functional job categories continue to be associated with fiber concentrations that exceed the average; these include finishing operations during manufacturing, removal operations, and installation performed by end users.
Activities in these three categories require additional mechanical energy for handling RCF products (e.g., sawing, drilling, cutting, sanding), which increases the generation of airborne fibers. Removal and installation activities are performed at remote sites where conventional engineering strategies and fixed controls are more difficult to implement. For certain operations in which airborne fiber concentrations are greater (such as removal of RCF products from furnaces), jobs are performed for short periods and almost universally with the use of respiratory protection [Maxim et al. 1998].

One additional consideration during work involving RCF exposure is the potential for exposure to respirable silica in the forms of quartz, tridymite, and cristobalite. Although the potential for such exposure exists in primary manufacturing (because silica is a major component of RCFs), monitoring data indicate that these exposures are generally low. Maxim et al. [1999a] reported that many airborne silica samples collected to assess exposures during installation and removal of RCF products contained concentrations below the LOD, with average concentrations of respirable silica ranging from 0.01 to 0.44 mg/m³ (equivalent 8-hr TWA range=0.004 to 0.148 mg/m³). Other studies indicate a greater potential for exposure to respirable silica (especially in the form of cristobalite) during removal of after-service RCF materials [Gantner 1986; Cheng et al. 1992; Perrault et al. 1992; van den Bergen et al. 1994; Sweeney and Gilgrist 1998]. Processes associated with high concentrations of airborne fibers generally generate high concentrations of total and respirable dust as well [Esmen et al. 1979; Krantz et al. 1994].

# Effects of Exposure

Animal studies report the concentration(s) to which the animals were exposed. The distinction between administered exposure concentration and received dose is important when analyzing these studies: the dose affecting the target tissues is known only when the amount of fiber present in the lung is measured and reported. To analyze the results of RCF studies, the number of fibers per exposure, their dimensions, their durability, and the delivered dose should all be considered when making comparisons and drawing conclusions about potential and relative toxicity.

# Intrapleural, Intraperitoneal, and Intratracheal Studies

Instillation and implantation studies deliver fibers directly to the trachea, pleural cavity, or peritoneal cavity, bypassing some of the defense and clearance mechanisms that act on inhaled fibers. Implantation of fibers into either the pleural or the abdominal cavity delivers fibers directly to the pleural or abdominal mesothelium, bypassing some or all of the normal defense and clearance mechanisms of the respiratory tract. Intratracheal instillation delivers fibers directly to the trachea, bypassing the upper respiratory tract. These exposure methods do not mimic an occupational inhalation exposure of several hours per day, several days per week, over an extended period. However, one advantage of these studies is that they allow the administration of a precise dose of fibers that can be replicated between animals. They also permit the administration of higher doses than may be obtainable by inhalation exposure.

# Health Effects in Animals (In Vivo Studies)

The health effects of RCF exposure have been evaluated in animal studies using intrapleural, intraperitoneal, intratracheal, and inhalation routes of exposure.
All of these routes have demonstrated the carcinogenic potential of RCFs. Chronic inhalation studies provide the information most relevant to the occupational route of exposure and to human risk assessment. Mechanistic information about fiber toxicity may also be derived from other types of studies. Studies investigating the cellular effects of RCFs in vitro are reviewed in Section 5.2 and Appendix C.

When comparing the effects of a fiber dose in animal studies, it is possible to compare fibers on a gravimetric basis (effect per unit weight) or on a fiber basis (effect per number of fibers). The same gravimetric dose of different fiber types may contain vastly different numbers of fibers because of differences in their dimensions. RCF1 is a relatively thick fiber compared with many types of asbestos, such as chrysotile, a fiber commonly used as a positive control in pulmonary carcinogenesis experiments in animals (see Table 2-2 for descriptions of RCF1, RCF2, RCF3, and RCF4). A gravimetric dose of RCF1 usually contains far fewer fibers than the same gravimetric dose of chrysotile asbestos fibers, making a direct comparison of their effects difficult when the number of fibers per unit weight is not reported. Comparison on a per-fiber basis rather than a weight basis provides the information most applicable to occupational risk assessment.

Although the results of implantation and instillation studies may not be directly applicable to occupational exposure and human health effects, they provide important information about the potential toxicity of RCFs. Experiments that control fiber dimensions and other variables provide information about the physiological characteristics relevant to fiber toxicity, and they offer a less expensive, quicker means of screening the potential toxicity of a fiber than inhalation studies. Many of the implantation and instillation studies reviewed here report the administered fiber dose on a gravimetric basis rather than on a per-fiber basis. Some studies assess the toxicity of both RCFs and asbestos independently, which allows for the comparison of these fibers on a gravimetric basis but not on a per-fiber basis.

# Intraperitoneal Implantation Studies

In intraperitoneal studies, fibers are implanted directly into the abdominal cavity, bypassing the respiratory system defense and clearance mechanisms that act on inhaled fibers. Although the implanted fibers act on some of the same target cell types as the fibers of an inhalation exposure (such as the mesothelium), the effects elicited in the abdominal mesothelium cannot be assumed to be identical to the response of the pleural mesothelium. Table 5-1 summarizes the results of three RCF intraperitoneal implantation studies [Davis et al. 1984; Smith et al. 1987; Pott et al. 1987]. A brief description of these studies follows.

Davis et al. [1984] dosed Wistar rats with 25 mg of ceramic aluminum silicate dust by intraperitoneal injection. Tumors were induced in 3 of 32 rats: 2 fibrosarcomas and 1 mesothelioma. Smith et al. [1987] dosed Osborne-Mendel (OM) rats and Syrian hamsters with 25 mg of RCFs by intraperitoneal injection. Abdominal mesothelioma induction rates were 83% (19/23) in OM rats and 13% (2/15) and 24% (5/21) in two groups of male hamsters. Crocidolite asbestos at 25 mg induced abdominal mesotheliomas in 80% (20/25) of OM rats and 32% (8/25) of hamsters. The difference in tumor incidence reported by Davis et al. [1984] and Smith et al. [1987] may be explained in part by differences in fiber length.
Eighty-three percent of the RCF fibers used by Smith et al. [1987] were >10 µm long, and 86% had diameters <2.0 µm. Ninety percent of the ceramic aluminum silicate material used by Davis et al. [1984] was <3 µm long with diameters <0.3 µm. Pott et al. [1987] dosed female Wistar rats by intraperitoneal injection with 9 or 15 mg/week for 5 weeks of 2 ceramic (aluminum silicate) wool fibers, Fibrefrax (RCFs) and MAN (Manville RCFs), for total doses of 45 and 75 mg, respectively. Fifty percent of the Fibrefrax fibers were <8.3 µm long with diameters <0.91 µm. Exposure to Fibrefrax fibers induced abdominal tumors (sarcomas, mesotheliomas, or carcinomas) in 68% of the rats. Fifty percent of the MAN fibers were <6.9 µm long with diameters <1.1 µm. The number of fibers in different length categories was not reported. Exposure to MAN fibers induced abdominal tumors in 22% of the rats. Chrysotile (UICC/B) injected intraperitoneally at a single dose of 0.05, 0.25, or 1.00 mg induced abdominal tumors in 19%, 62%, or 86% of rats, respectively. Fifty percent of the chrysotile fibers were <0.9 µm long with diameters <0.11 µm. The number of fibers per dose was not reported for the ceramic fibers or the asbestos. Saline induced tumors in 2% of rats.

# Intrapleural Implantation Studies

Intrapleural implantation studies permit the investigation of the effects of RCFs directly on the pleural mesothelium while controlling variables such as inhalation kinetics and translocation. Table 5-2 summarizes the results of the intrapleural study of Wagner et al. [1973]. Intrapleural injection of 20 mg of ceramic fiber (unspecified type) or 20 mg of each of two samples of chrysotile produced mesotheliomas in 10% (3/31), 64% (23/36), and 66% (21/32) of Wistar rats, respectively. The mean ceramic fiber diameter was 0.5 to 1.0 µm. The chrysotile fibers were mostly <6 µm long. The chrysotile fiber diameter, the RCF fiber length, and the number of fibers per dose were not reported, making a direct comparison of the samples difficult.

# Intratracheal Instillation Studies

The technique of intratracheal instillation has the advantage of affecting the same target tissues (other than the upper respiratory tract) as an inhalation exposure. Other advantages, compared with inhalation exposure, include a simpler technique, lower cost, accurate dosing, and the ability to deliver materials (such as long fibers) that may not be respirable to rodents [Driscoll et al. 2000]. However, the faster dose rate and bolus delivery of tracheal instillation may affect the response of the lung defense mechanisms, resulting in differences in clearance and biopersistence relative to an inhalation exposure. Intratracheal instillation may also produce clumping of fibers, with a resulting effect on fiber distribution and clearance [Davis et al. 1996; Driscoll et al. 2000]. Intratracheal instillation results in a heavier, more centralized distribution pattern; inhalation exposure results in a more even and widely distributed pattern [Brain et al. 1976]. Table 5-3 summarizes the results of two RCF intratracheal instillation studies [Smith et al. 1987; Manville 1991]. A brief description of these studies follows.

In the study by Smith et al. [1987], Syrian golden hamsters and OM rats were dosed with 2 mg of RCFs (Fibrefrax) suspended in saline by intratracheal instillation once a week for 5 weeks (10 mg total). The animals were maintained for the rest of their lives. Approximately 50% of the RCFs were <20 µm long, with a mean fiber diameter of 1.8 µm.
No primary lung tumors developed in the RCF-exposed animals, and they did not have an increased incidence of pulmonary fibrosis or tumor production compared with controls; however, the rats had a statistically significant increase in bronchoalveolar metaplasia. The median lifespan was 479 days for hamsters and 736 days for rats. Hamsters (median lifespan 657 days) and rats (median lifespan 663 days) exposed on the same dosing schedule to 2 mg of crocidolite asbestos had a statistically significant increase in bronchoalveolar lung tumors, in 20 of 27 (74%) and 2 of 25 (8%) animals, respectively. The fiber numbers per dose were not reported.

Manville [1991] reported a statistically significant increase in lung tumors in Fischer rats exposed intratracheally to 2 mg of RCF1, RCF2, RCF3, or RCF4 in saline. Animals were terminally sacrificed at 128 weeks, with interim sacrifices at 13, 26, 52, 78, and 104 weeks. RCF1, RCF2, RCF3, and RCF4 exposure resulted in adenomas or adenocarcinomas in 6 of 109 (5.5%), 4 of 107 (3.7%), 4 of 109 (3.7%), and 7 of 108 (6.5%) rats, respectively. One mesothelioma was identified in a rat exposed to RCF2. Exposure to 0.66 mg of chrysotile asbestos resulted in primary lung tumors in 8 of 55 rats (14.5%). The fiber dimensions and numbers per dose were not reported. (Table 5-3 also lists the group sizes and dosing volumes: each 2-mg RCF dose was administered as 0.2 ml of a 10-mg/ml suspension, the 0.66-mg chrysotile dose as 0.2 ml of a 3.3-mg/ml suspension, and a control group of 118 male rats received 0.2 ml of a vehicle that was not specified.)

# Chronic Inhalation Studies

In animal bioassays, administering RCFs by chronic inhalation most closely mimics the occupational route of exposure. Exposure to RCFs over a time period that approximates the lifespan of the animal provides the most accurate prediction of the potential pathogenicity and carcinogenicity of these fibers in animals. The effects seen in animals may be used to predict the effects of these fibers in humans, although interspecies differences exist in respiratory anatomy, physiology, and tissue sensitivity. Chronic inhalation studies provide the best means to predict the critical disease endpoints of cancer induction and nonmalignant respiratory disease that may occur in humans because of fiber exposure [McConnell 1995; Vu et al. 1996].

Five chronic RCF inhalation studies have been conducted in rats or hamsters [Davis et al. 1984; Smith et al. 1987; Mast et al. 1995a,b; McConnell et al. 1995]. These studies are summarized in Tables 5-4 and 5-5 and are described below.

Davis et al. [1984] exposed Wistar rats by whole-body inhalation to 10 mg/m³ (95 f/cm³) of ceramic (aluminum silicate glass) dust for 7 hr/day, 5 days/week, for 12 months. Ninety percent of the exposure fibers were short (<3 µm) and thin (<0.3 µm). The ratio of nonfibrous particulate to fibers was 4:1. Eight of 48 exposed rats (17%) developed pulmonary neoplasms: 1 adenoma, 3 bronchial carcinomas, and 4 histiocytomas. Interstitial fibrosis was observed. No pulmonary tumors were observed in control animals.

Smith et al. [1987] exposed OM rats and Syrian golden hamsters by nose-only inhalation to 10.8±3.4 mg/m³ (200 f/cm³) of ceramic fiber (Fibrefrax) for 6 hr/day, 5 days/week, for 24 months. The ratio of nonfibrous particulate to fibers was 33:1.
Exposure to RCFs did not induce a significant increase in pulmonary tumors in rats: one RCF-exposed rat and one chamber control rat developed primary lung tumors. Rats exposed to RCFs had more severe pulmonary lesions than hamsters, and a greater percentage of rats than hamsters had fibrosis (22% versus 1%, respectively). Under similar conditions, exposure to 7 mg/m³ (3,000 f/cm³) of crocidolite asbestos produced pulmonary tumors in 3 of 57 rats, including 1 mesothelioma and 2 bronchoalveolar tumors. No pulmonary tumors were observed in crocidolite-exposed hamsters. Exposure to slag wool at 10 mg/m³ (200 f/cm³) and to several fibrous glasses at similar gravimetric concentrations did not result in pulmonary neoplasms (not shown in Table 5-4).

Mast et al. [1995a] exposed Fischer 344 rats by nose-only inhalation to 30 mg/m³ of one of four types of RCFs (187±53 WHO f/cm³ for RCF1; 220±52 WHO f/cm³ for RCF2; 182±66 WHO f/cm³ for RCF3; 153±49 WHO f/cm³ for RCF4) for 6 hr/day, 5 days/week, for 24 months, and held the animals until sacrifice at 30 months. Groups of 3 to 6 animals were sacrificed at 3, 6, 9, 12, 15, 18, and 24 months to examine lesions and determine fiber lung burdens. Other animals were removed from exposure at the same time points and held until sacrifice at 24 months. Positive control rats were exposed to 10 mg/m³ (1.06±1.14×10⁴ WHO f/cm³) of chrysotile under similar exposure conditions. RCF fibers with a mean diameter of 1 µm and mean lengths of 20 to 30 µm were selected. A ratio of nonfibrous particulate to fibers of 1.02–1.88:1 was reported. Interstitial fibrosis was first observed at 6 months with RCF1, RCF2, and RCF3 exposure and at 12 months with RCF4 exposure. Pleural fibrosis was first observed at 9 months with RCF1, RCF2, and RCF3 exposure and at 12 months with RCF4 exposure. A progression in the severity of pleural fibrosis was seen in animals exposed to 30 mg/m³ for 24 months and examined at 6 months post-exposure. The incidence of total lung tumors was significantly increased over controls after exposure to RCF1, RCF2, and RCF3 but not RCF4. Neoplastic disease, including adenomas and carcinomas, was observed in all treatment groups: RCF1, 16 of 123 rats (13%); RCF2, 9 of 121 (7.4%); RCF3, 19 of 121 (15.7%); RCF4, 4 of 118 (3.4%); and chrysotile, 13 of 69 (18.5%). Mesotheliomas were induced in some rats in all treatment groups: 2 with RCF1, 3 with RCF2, 2 with RCF3, 1 with RCF4, and 1 in the chrysotile exposure group. All mesotheliomas were detected at or after 24 months of exposure. Most RCF fibers recovered from the lung were 5 to 10 µm long, regardless of exposure time and recovery time. An 80% reduction in fiber lung burden was seen in rats allowed to recover for 21 months following 3 months of RCF exposure.

Mast et al. [1995b] exposed Fischer 344 rats by nose-only inhalation to 0 (air), 3, 9, or 16 mg/m³ (0, 26±12, 75±35, or 120±35 WHO f/cm³) of RCF1 for 6 hr/day, 5 days/week, for 24 months and held them until sacrifice at 30 months. Fibers were selected by size as in Mast et al. [1995a]. A ratio of nonfibrous particulate to fibers of 0.9–1.5:1 was reported. Groups of 3 to 6 animals were sacrificed at 3, 6, 9, 12, 18, and 24 months to examine lesions and determine fiber lung burdens. Other animals were removed from exposure at the same time points and held until sacrifice at 24 months. Interstitial fibrosis was observed after 12 months of exposure in the 9- and 16-mg/m³ exposure groups. Pleural fibrosis was first observed after 12 months with 16-mg/m³ exposure and after 18 months with 9-mg/m³ exposure.
The mean Wagner grades of pulmonary cellular change and fibrosis in rats exposed to 0, 3, 9, 16, and 30 mg/m³ of RCFs for 24 months were 1.0, 3.2, 4.0, 4.2, and 4.0, respectively. Rats exposed at the same range of doses for 24 months and allowed to recover for 6 months had mean Wagner grades of 1.0, 2.9, 3.8, 4.0, and 4.3. The severity of interstitial and pleural fibrosis was similar between the animals sacrificed at 24 months and those allowed 6 months of recovery following the 24 months of exposure. The incidence of pulmonary neoplasms was not statistically different from that of the controls in any exposure group. One pleural mesothelioma was observed in the 9-mg/m³ exposure group. A dose-related increase occurred in fiber lung burden. Fiber lengths of 5 to 10 µm were most prevalent among the fibers recovered from the lung after 3 months of exposure followed by 21 months of recovery, after 12 months of exposure, and after 24 months of exposure at all doses of RCFs. Animals exposed for 3 or 6 months and then allowed to recover until sacrifice at 24 months had lung burdens reduced by 96% to 97% compared with animals not allowed recovery time.

McConnell et al. [1995] exposed Syrian golden hamsters by nose-only inhalation to 30 mg/m³ of RCF1 (256±58 WHO f/cm³) for 6 hr/day, 5 days/week, for 18 months and held them until sacrifice at 20 months. Positive control animals were exposed to 10 mg/m³ (8.4±9.0×10⁴ WHO f/cm³) of chrysotile asbestos. Groups of 3 to 6 animals were sacrificed at 3, 6, 9, 12, 15, and 18 months to examine lesions and determine fiber lung burdens. Other animals were removed from exposure at the same time points and held until sacrifice at 20 months. Interstitial and pleural fibrosis were first observed after 6 months of exposure in RCF-exposed hamsters. No pulmonary neoplasms developed; however, 42 of 102 (41.2%) RCF-exposed animals developed pleural mesotheliomas, most of them after 18 months of exposure. Animals exposed to chrysotile developed more severe interstitial and pleural fibrosis than those exposed to RCFs. No neoplasms were observed in the lungs or pleura of the chrysotile-exposed or air control animals. Among the fibers retained in the lungs after 6 months of exposure followed by 12 months of recovery, the greatest percentage had lengths of 5 to 10 µm and diameters <5 µm.

McConnell et al. [1999] conducted a multidose chronic study of the effects of amosite inhalation in hamsters; the data can be compared with the effects of RCF1. Syrian golden hamsters were exposed to 0.8 mg/m³ (36±23 WHO f/cm³), 3.7 mg/m³ (165±61 WHO f/cm³), or 7 mg/m³ (263±90 WHO f/cm³) of amosite asbestos. Pleural mesothelioma incidences of 3.6%, 25.9%, and 19.5%, respectively, were reported. The aerosol mean diameter of the amosite asbestos was 0.60±0.25 µm, and its aerosol mean length was 13.4±16.7 µm. The dimensions of this asbestos fiber were more similar to those of the RCFs used in the chronic inhalation study of McConnell et al. [1995] than were those of the chrysotile asbestos used as the positive control in that same study.

NIOSH [Dankovic 2001] analyzed the hamster data from the RCF study [McConnell et al. 1995] and the amosite study [McConnell et al. 1999]. A dose-response model was developed for amosite and was used to predict the amosite response at the one and only dose at which RCFs were tested in hamsters. The modeled amosite response was then compared with the observed RCF response. These results are presented in Figures 5-1 and 5-2.
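A minimal sketch of the log-probit approach described below is given here: the amosite dose-response is fit by maximum likelihood, and the fitted curve is used to predict the response at the single RCF dose. The group sizes and case counts are hypothetical stand-ins chosen only to approximate the reported amosite incidences, and the crude observed/predicted ratio at the end is one simple way to express relative potency; it is not the NIOSH estimation procedure.

```python
# Sketch of a log-probit fit, response = Phi(a + b*log(dose + 1)),
# in the spirit of the NIOSH analysis [Dankovic 2001]. Doses are the
# reported WHO fiber concentrations; group sizes and case counts are
# hypothetical stand-ins, not the study data.
import numpy as np
from scipy.stats import norm, binom
from scipy.optimize import minimize

amosite_doses = np.array([0.0, 36.0, 165.0, 263.0])  # WHO f/cm^3
group_n       = np.array([100, 100, 100, 100])       # hypothetical group sizes
cases         = np.array([0, 4, 26, 20])             # ~3.6%, 25.9%, 19.5%

def neg_log_lik(params):
    a, b = params
    p = norm.cdf(a + b * np.log(amosite_doses + 1.0))
    p = np.clip(p, 1e-9, 1.0 - 1e-9)                 # keep the likelihood finite
    return -binom.logpmf(cases, group_n, p).sum()

fit = minimize(neg_log_lik, x0=np.array([-3.0, 0.5]), method="Nelder-Mead")
a_hat, b_hat = fit.x

rcf_dose = 256.0                                     # single RCF dose tested
predicted_amosite = norm.cdf(a_hat + b_hat * np.log(rcf_dose + 1.0))
observed_rcf = 42 / 102                              # McConnell et al. [1995]

print(f"predicted amosite response at {rcf_dose:.0f} f/cm^3: {predicted_amosite:.2f}")
print(f"observed RCF response: {observed_rcf:.2f}")
print(f"crude relative potency (observed/predicted): {observed_rcf/predicted_amosite:.2f}")
```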
Log-probit, log-logistic, multistage, and unrestricted Weibull models were analyzed. The dose transformation for the log-probit and log-logistic models was log(fibers/cm³ + 1). The dose metric for the multistage and Weibull models was fibers/cm³, as these models did not require a log transformation. Results of the log-probit analysis of these data indicated RCF/amosite relative potency estimates of 1.85 using WHO fibers and 1.19 using fibers >20 µm as the dose metric. The model fits were poor when the amosite high-dose group was included and when fibers >20 µm were used as the dose metric. Sensitivity analyses in which the high-dose amosite group was dropped suggest that the relative potency of RCFs to amosite could be as low as 0.66 based on the log-probit model. Results using the log-logistic, multistage, and Weibull models were similar to those using the log-probit model; across these models, with all four amosite dose groups included, the RCF/amosite relative potency estimates ranged from 1.03 to 1.89. Although no clear toxicologic basis exists for disregarding the high-dose amosite data, sensitivity analyses excluding these data suggest that the potency of RCFs relative to amosite could be as low as 0.47, based on the multistage model. These models indicate that the plausible carcinogenic potency estimates for RCFs relative to amosite, based on hamster mesotheliomas, range from about half to nearly twice the carcinogenicity of amosite.

# Discussion of RCF Studies in Animals

The intrapleural, intraperitoneal, and intratracheal RCF studies have demonstrated the carcinogenicity of RCFs. Because these methods deliver fibers nonphysiologically, it is difficult to compare their results with those of an inhalation exposure. Although tracheal instillation may produce different fiber distribution patterns than inhalation, this route of exposure is useful as a screening test for relative toxicity and for comparing the toxicity of new materials with that of materials for which data already exist [Driscoll et al. 2000]. Tracheal instillation is also useful for testing fibers that are respirable by humans but not by rodents.

Chronic inhalation studies provide the data most relevant to occupational exposure to RCFs. The RCF chronic animal inhalation studies described above allow comparison of the health effects of different doses of RCF1, of different types of RCFs, and of the susceptibility of the rat and the hamster to RCF exposure. Results of the multidose chronic inhalation testing of RCF1 in rats indicate the pathogenic potential of RCFs at high doses [Mast et al. 1995a,b]. The incidence of total lung tumors was significantly increased over controls after exposure to 30 mg/m³ RCF1, RCF2, and RCF3 but not RCF4. A dose-response relationship was demonstrated for nonneoplastic pulmonary changes in rats exposed to 3, 9, and 16 mg/m³ RCFs. The severity of interstitial and pleural fibrosis was similar between animals sacrificed at 24 months and those allowed 6 months of recovery following the 24-month exposure. Spontaneous primary pulmonary mesotheliomas are rare in rats [Analytical Sciences Incorporated 1999]; therefore, the presence of any mesothelioma in treated animals is biologically significant and warrants caution. Comparing the chronic effects of RCF1 with those of its positive control, chrysotile asbestos, in the hamster is difficult because of the differences in dose, dimensions, and durability of the two fibers tested [McConnell et al. 1995; Dankovic 2001].
Differences in the physical characteristics and biopersistence of RCF1 and amosite asbestos must be considered before extrapolating these animal data to human risk. Hamsters showed a greater susceptibility than rats to mesothelioma induction after RCF1 exposure under similar exposure conditions [Mast et al. 1995a; McConnell et al. 1995]. Chronic inhalation studies of amosite asbestos in hamsters showed no pulmonary neoplasms, but high incidences of mesothelioma occurred at doses of 125 and 250 f/cm³ [McConnell et al. 1999]. Many of the mesotheliomas in the more recent hamster studies were identified only on microscopic examination [Mast et al. 1995a; McConnell et al. 1995, 1999]. Previous studies reporting mesotheliomas only by macroscopic identification may therefore have underestimated mesothelioma incidence. Recent short-term inhalation studies indicate that hamster mesothelial cells may have a more pronounced inflammatory and proliferative response to RCF1 exposure than those of rats [Everitt 1997; Gelzleichter et al. 1996a,b, 1999]. The reasons for this species difference in response to RCFs have not been explained. The results of these animal studies indicate the need to include the hamster as a sensitive test species in studies in which pleural mesothelioma is an endpoint of concern.

Results from Mast et al. [1995a] indicate that under the conditions studied, exposure to RCF4 may have a less pronounced effect on pulmonary pathology than exposure to RCF1, RCF2, and RCF3. Rats exposed to RCF4 did not have a significant increase in total lung tumors compared with controls; those exposed to RCF1, RCF2, and RCF3 did. Exposure to RCF4 also produced a less severe fibrosis than was seen in the other RCF exposure groups. Differences in the dimensions or physical properties of RCF4 may explain why its respiratory effects differ from those of RCF1, RCF2, and RCF3. RCF4 was produced by heating RCF1 in a furnace at 2,400°F for 24 hr. This "after-service" fiber contained approximately 27% free crystalline silica, and silicotic nodules were observed in the RCF4-exposed animals. RCF4 fibers were also shorter (~34% between 5 and 10 µm) and thicker (~35% <0.5 µm) than those of RCF1, RCF2, and RCF3.

The particle content of the RCF test material may have been responsible for some of the respiratory pathology observed in these studies. However, an analysis of the ratio of nonfibrous to fibrous particulates in the reviewed studies does not indicate a correlation between particulate content and observed effects. Smith et al. [1987] performed testing with the highest particulate-to-fiber ratio (33:1) and did not report a high tumor incidence. Comparing studies on the basis of the ratio of nonfibrous particulates to fibers is complicated by differences among the studies in fiber preparation, doses tested, fiber dimensions, and methods of fiber analysis. The techniques used to detect and measure nonfibrous particulates have improved over time, so comparisons of recent and older studies may reflect these inconsistencies.

These chronic RCF inhalation studies indicate the ability of RCFs to induce cancer in two laboratory species: mesotheliomas in hamsters and pulmonary tumors in rats. The late onset of tumors indicates the importance of chronic studies on the effects of RCF exposure.
Short-term intraperitoneal, intrapleural, intratracheal, and inhalation studies provide important information about the action of fibers, the fiber characteristics associated with toxicity, and potential toxicity. Currently, it is only through lifespan toxicologic testing in animals that the respiratory and other chronic health effects of RCFs can be accurately assessed.

# Lung Overload Argument Regarding Inhalation Studies in Animals

Mast et al. [2000] published a review interpreting the results of the chronic inhalation studies of RCF1 in rats and hamsters [Mast et al. 1995a,b; McConnell et al. 1995]. In the review, the authors suggest that the maximum tolerated dose (MTD) may have been exceeded and that lung overload may have compromised the pulmonary clearance mechanisms of the test animals. Building on the concept of lung overload (first advanced by Bolton et al. [1983]), Mast et al. [2000] considered particulate coexposure (i.e., nonfibrous particulate or shot) a confounding factor that may have had a major effect on the observed chronic adverse effects. The authors propose that the MTD was exceeded at the highest exposure concentration of 30 mg/m³ for RCF1 in the rat bioassay. The concept of pulmonary overload in the Fischer 344 rat is based on the recognition that excessive particulate exposures (>1,500 µg/rat, according to Bolton et al. [1983]) eventually reduce the clearance effectiveness of the lungs, causing the normally linear clearance kinetics to follow a nonlinear pattern. On a cellular level, overload conditions may result in alveolar macrophages becoming engorged with particulate, pulmonary and alveolar inflammation, increased translocation of particles to the interstitium and lymph, granuloma formation, pulmonary fibrosis, and lung tumors, depending on the duration and severity of the overload [Mast et al. 2000].

Ambiguity about the definition of the MTD for chronic inhalation studies in animals was also a concern expressed by the authors. One reference [Morrow 1986] defines the MTD as the dose that causes "a significant functional impairment of lung clearance." A National Toxicology Program (NTP) workshop on establishing exposure concentrations for inhalation studies in animals concluded that the highest exposure concentration should produce only minimal changes in lung defense mechanisms as measured by clearance [Lewis et al. 1989]. At a similar workshop convened by the EPA, it was proposed that the MTD for fiber inhalation studies is equivalent to the lung dose produced at the maximum achievable concentration (MAC) [Vu et al. 1996]. The MAC is calculated as the highest fiber concentration, based on a 90-day study, that results in significant changes in alveolar macrophage clearance rates, lung burden normalized to exposure concentration, cell proliferation, inflammation, lung weight, and other measures.

The methodology described for the RCF chronic inhalation studies involved procedures (i.e., wet cyclone separation technology) for removing the nonfibrous particulate fraction from the commercial fiber (RCF1) used for the inhalation exposures [Mast et al. 1995a,b, 2000; McConnell et al. 1995]. This process resulted in an aerosol with a 9.1:1 particle-to-fiber ratio [Maxim et al. 1997; Mast et al. 2000], compared with the study by Smith et al. [1987], which reported 33 nonfibrous particles per fiber in airborne exposures.
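A back-of-the-envelope sketch shows why overload is a plausible concern at the 30-mg/m³ dose relative to the >1,500 µg/rat threshold cited from Bolton et al. [1983]. The minute ventilation and deposition fraction below are rough assumptions for illustration, not measured study values, and the calculation ignores clearance, so it overstates the retained burden.

```python
# Sketch: cumulative deposited mass at the high RCF1 dose vs. the overload
# threshold. Ventilation and deposition fraction are assumed, not measured.
EXPOSURE_MG_M3 = 30.0          # aerosol concentration in the rat bioassay
MINUTE_VENT_L = 0.2            # assumed rat minute ventilation, L/min
DEPOSITION_FRACTION = 0.1      # assumed fraction of inhaled mass deposited
HOURS_PER_DAY = 6.0            # daily exposure duration in the bioassay

inhaled_m3_per_day = MINUTE_VENT_L / 1000.0 * 60.0 * HOURS_PER_DAY
deposited_ug_per_day = (EXPOSURE_MG_M3 * 1000.0
                        * inhaled_m3_per_day * DEPOSITION_FRACTION)
days_to_threshold = 1500.0 / deposited_ug_per_day   # Bolton et al. [1983]
print(f"~{deposited_ug_per_day:.0f} ug deposited/day; "
      f"1,500 ug reached in ~{days_to_threshold:.0f} exposure days (no clearance)")
```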
Results from Esmen et al. [1979] indicate that despite a poor correlation between the mass of total airborne dust and the fiber concentration measured in RCF manufacturing, fibers generally constitute only a small portion of the total dust. This finding is consistent with other reported measures of occupational exposures to airborne RCFs [Krantz et al. 1994; Trethowan et al. 1995]. However, Maxim et al. [1997] reported an average particle-to-fiber ratio of 0.53:1 (n=10, range not reported), or roughly 1 particle to 2 fibers, in RCF manufacturing facilities.

Muhle and Bellmann [1996] conducted a 5-day inhalation study with Fischer 344 rats to measure the biopersistence of RCF1 (with the 9:1 particulate-to-fiber ratio) and RCF1a (RCF1 further processed to reduce particulate mass). The study showed a 1.5-fold longer time-weighted half-life for RCF1 (t½=78 days) than for RCF1a (t½=54 days). That study also included a 3-week inhalation experiment with Fischer 344 rats, in which the clearance half-life of RCF1 (t½=103 days) was almost twice that of RCF1a (t½=54 days). In a follow-up study by Brown et al. [2000], female Wistar rats were exposed to RCF1 and RCF1a by inhalation for 3 weeks and followed for 12 months to evaluate alveolar macrophage clearance and inflammation. The exposure concentrations were 130 fibers/ml of fibers >20 µm for RCF1 and 125 fibers/ml of fibers >20 µm for RCF1a. The nonfibrous content of RCF1 was approximately 25%, whereas that of RCF1a was 2%. The mean diameter of the nonfibrous particles was 2 to 3 µm. The aerosol exposure to RCF1 contained twice as many short fibers (<20 µm) as that to RCF1a and twice the total dust mass (fibers plus nonfibrous dust: 51 versus 25.8 mg/m³). At the end of the inhalation period, animals exposed to RCF1a had a higher pulmonary concentration of long fibers but lower concentrations of short fibers and nonfibrous particles. The difference in particle content was enhanced in the lungs: 15 times more particles were found in the lungs of the RCF1-exposed animals than in those exposed to RCF1a, whereas the aerosol exposures differed only eightfold in particle number.

Tran et al. [1997] examined how overloading the alveolar macrophage defense system affects the clearance of fibers versus that of nonfibrous particles. Modeling was performed with data from rats exposed by inhalation to titanium dioxide (TiO₂) at 1, 10, or 50 mg/m³ or to glass wool (MMVF10) at 3, 16, or 30 mg/m³. Lung burdens and clearance kinetics during exposure (0 to 100 weeks) were compared with those at 3, 10, and 38 days post-exposure. The models showed that overloading of the lung by fibers is similar to overloading by nonfibrous particles when the fibers are short (<15 µm). This observation is plausible, because nonfibrous particles and fibers smaller than the diameter of the alveolar macrophage are most readily engulfed and cleared by macrophages. When this defense is overwhelmed (lung burden ≥10 mg), these particles are cleared less effectively. For fibers longer than 15 µm, phagocytosis by alveolar macrophages is reduced; as fiber length increases, fibers tend instead to be cleared by dissolution and by disintegration into shorter fibers or fragments. Therefore, clearance of long fibers is not affected by the overloading of macrophage-mediated defenses with shorter fibers or nonfibrous particles.
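A minimal sketch, assuming simple first-order (exponential) clearance, shows what the half-lives reported above imply for the retained fiber burden over time. First-order kinetics is itself a simplification: as noted above, overload conditions make clearance nonlinear.

```python
# Sketch: retained burden under first-order clearance, using the
# Muhle and Bellmann [1996] half-lives quoted above.
import math

def retained_fraction(t_days: float, half_life_days: float) -> float:
    """Fraction of the initial lung burden remaining after t_days,
    assuming first-order (exponential) clearance."""
    k = math.log(2) / half_life_days   # clearance rate constant, per day
    return math.exp(-k * t_days)

for label, t_half in [("RCF1 (5-day study)", 78.0),
                      ("RCF1a (5-day study)", 54.0),
                      ("RCF1 (3-week study)", 103.0)]:
    print(f"{label}: t1/2 = {t_half:.0f} d, "
          f"retained at 180 d = {retained_fraction(180, t_half):.1%}")
```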
The exposure concentrations for the RCF chronic inhalation bioassays were measured and reported as mass in mg/m³. Monitoring of exposures by gravimetric analysis does not distinguish fibers from nonfibrous particulate, although fiber concentrations and dimensions were also checked by phase contrast and electron microscopy [Mast et al. 1995a,b]. Consequently, the particulate fraction was included in the dose measurements. This complicates efforts to compare the relative toxicity of fibers, nonfibrous particulate, and total combined particulate, especially with regard to the lung overload hypothesis. During production of RCFs and RCF products, however, the nonfibrous particulate fraction is associated with the fiber, as shown in Table 2-1 (i.e., 20% to 50% of RCFs by weight is nonfibrous particulate). This suggests that occupational exposures to airborne RCFs necessarily involve coexposures to a fraction of nonfibrous particulate, a suggestion supported by exposure assessment studies [Esmen et al. 1979; Krantz et al. 1994; van den Bergen et al. 1994; Trethowan et al. 1995; Maxim et al. 1997; Mast et al. 2000].

# Cellular and Molecular Effects of RCFs (In Vitro Studies)

The cellular and molecular effects of RCF exposures have been studied with two different objectives. One purpose of these in vitro studies is to provide a quicker, less expensive, and more controlled alternative to animal toxicity testing. These experiments are best interpreted by comparing their results with those of in vivo experiments. The second objective of in vitro studies is to provide data that may help explain the pathogenesis and mechanisms of action of RCFs at the cellular and molecular levels. These cytotoxicity and genotoxicity studies are best interpreted by comparing the effects of RCFs with those of other SVFs and asbestos fibers. In vitro studies serve as screening tools and provide insights into the molecular mechanisms of fibers; they are an important complement to animal studies. However, it is not currently possible to use these data to derive the NIOSH REL for RCFs. For this reason, a discussion of in vitro studies is included here, and more comprehensive summaries of the studies are provided in Appendix C.

The toxicity of fibers has been attributed to their dose, dimensions, and durability, and any test system designed to assess the potential toxicity of fibers must address these factors. Durability is difficult to assess in vitro because of the acute time course of these studies. However, in vitro studies provide an opportunity to study the effects of varying doses and dimensions of fibers more quickly and efficiently than animal testing, although they do not currently provide data that can be extrapolated to occupational risk assessment.

The association between fiber dimension and toxicity has been documented and reviewed [Stanton et al. 1977, 1981; Pott et al. 1987; Warheit 1994]. RCFs may have different toxicities depending on fiber length relative to macrophage size, with longer fibers being more toxic. Fiber length has been correlated with the cytotoxicity of glass fibers [Blake et al. 1998]: Manville code 100 (JM-100) fiber samples with average lengths of 3, 4, 7, 17, and 33 µm were assessed for their effects on lactate dehydrogenase (LDH) activity and rat alveolar macrophage function. The greatest cytotoxicity was reported for the 17- and 33-µm samples, indicating that length is an important factor in the toxicity of this fiber. Multiple macrophages were observed attached along the length of long fibers.
Relatively short fibers (<20 µm) were usually phagocytized by a single rat alveolar macrophage, whereas longer fibers were phagocytized by two or more macrophages [Luoto et al. 1994]. Incomplete or frustrated phagocytosis may play a role in the increased toxicity of longer fibers. Long fibers (17 µm average length) were a more potent inducer of tumor necrosis factor (TNF) production and transcription factor activation than shorter fibers (7 µm average length) [Ye et al. 1999]. These studies demonstrate the important role of length in fiber toxicity and suggest that the capacity for macrophage phagocytosis may be a critical factor in determining fiber toxicity.

Several of the in vitro RCF studies (summarized in Appendix C) reported a direct association between longer fiber length and greater cytotoxicity. Hart et al. [1992] reported the shortest fibers to be the least cytotoxic. Brown et al. [1986] reported an association between length, but not diameter, and cytotoxic activity. Wright et al. [1986] reported that cytotoxicity was correlated with fibers >8 µm long. Yegles et al. [1995] reported that the longest and thickest fibers were the most cytotoxic; the four most cytotoxic fibers had geometric mean (GM) lengths ≥13 µm and GM diameters >0.5 µm. The production of abnormal anaphases and telophases was associated with Stanton fibers (those with lengths >8 µm and diameters <0.25 µm). Hart et al. [1994] reported that cytotoxicity increased with increasing average fiber length from 1.4 to 22 µm but did not increase further with average lengths from 22 to 31 µm. Additional studies assessing the cytotoxicity of specific RCF fiber lengths are needed. Such studies will help describe the association between fiber length and toxicity for RCFs and may allow determination of a threshold length above which toxicity increases significantly.

In addition to providing data on the correlation between fiber length and toxicity, in vitro studies have provided data on the toxicity of RCFs relative to other fibers, although some uncertainties remain in the interpretation of these studies because of differences in fiber doses, dimensions, and durabilities. RCFs have direct and indirect effects on cells and alter gene function in ways similar to other durable fibers. They are capable of inducing enzyme release and cell hemolysis; they may decrease cell viability and inhibit proliferation; they affect the production of TNF and reactive oxygen species (ROS); they induce necrosis in rat pleural mesothelial cells; and they may induce free radicals, micronuclei, polynuclei, chromosomal breakage, and hyperdiploid cells in vitro.

In vitro studies provide an excellent opportunity for investigating the pathogenesis of RCFs. However, comparisons between in vitro studies are difficult because of differences in fiber doses, dimensions, preparations, and compositions, and important information such as fiber length distribution is not always determined. Even when comparable fibers are studied, the cell line or the conditions under which the cells are tested may vary. Much of the research to date has been done in rodent cell lines and in cells not related to the primary target organ; in vitro studies using human pulmonary cell lines should provide pathogenesis data most relevant to human health risk assessment. Short-term in vitro studies also cannot account for the fiber dissolution and compositional changes that may occur over time.
In an in vivo exposure, fibers are continually modified physically, chemically, and structurally by components of the lung environment. This complex set of conditions is difficult to recreate in vitro. Just as it is unlikely that any one factor is an accurate predictor of fiber toxicity, it is unlikely that any one in vitro test can predict fiber toxicity.

# Health Effects in Humans

# Morbidity and Mortality Studies

Two major research efforts evaluated the morbidity of RCF-exposed workers: one conducted in U.S. plants and one in European plants [Rossiter et al. 1994; Trethowan et al. 1995; Burge et al. 1995]. A followup cross-sectional study conducted in 1996 evaluated the same medical endpoints in workers from six of the seven European manufacturing plants (one plant had ceased operation) [Cowie et al. 1999, 2001]; current and former workers were included as study subjects in the followup study. The studies of U.S. plants began in 1987 and involved evaluations of current workers at five RCF manufacturing plants and former workers at two RCF manufacturing plants [Lemasters et al. 1994, 1998; Lockey et al. 1993, 1996, 1998, 2002].

In the United States, the earliest commercial production of RCFs and RCF products began in 1953; in Europe, RCF production began in 1968. The demographics of the U.S. and European populations were similar at the time they were studied, although the average age of U.S. workers was slightly higher than that of the workforce in the 1986 European studies because of the earlier development of this industry in the United States. The mean age of the European RCF workers was 37.7 years in the 1986 study [Trethowan et al. 1995]; in the 1996 study, it was 42.0 years for male and 39.4 years for female workers [Cowie et al. 1999]. In the U.S. RCF manufacturing industry, the average age was 40 years for current workers and 45 years for former workers [Lemasters et al. 1994]. The mean duration of employment in the European cohort was 10.2 years (range 7.2 to 13.8 years) in 1986 [Trethowan et al. 1995] and 13.0 years in 1996 [Cowie et al. 1999]. The U.S. study reported a mean duration of employment for the 23 workers with pleural plaques of 13.6 years (±9.8), with a median of 11.2 years (range 1.4 to 32.7) [Lemasters et al. 1994].

The following text and Table 5-6 summarize these studies; a summary of the mortality studies is also presented in Section 5.3.5 [Lockey et al. 1993; Lemasters et al. 2003]. (Footnotes to Table 5-6, not reproduced here, describe cohort selection: of a possible 708 current workers in the European study, 628 eligible participants were identified and 596 had chest X-ray examinations, with films of 51 female workers, 2 unreadable films, and 13 other exclusions removed from various analyses; the followup included current workers at six European ceramic fiber plants in three countries as well as leavers from the first three European studies [Burge et al. 1995; Rossiter et al. 1994; Trethowan et al. 1995]; of 868 eligible current and former U.S. workers at 2 plant sites, 148 were eliminated for lack of exposure characterization data or loss to followup, and 68 of the remaining 720 declined chest X-ray examinations; and of 963 eligible current workers at five U.S. plant sites, 209 female workers and 393 male workers with fewer than 5 PFT sessions were excluded from the pulmonary function analyses.)
Two NIOSH health hazard evaluations (HHEs) of workplaces with workers exposed to RCFs are also described in Section 5.3.6 [Kominsky 1978; Lyman 1992].

# Radiographic Analyses

In both the European and U.S. studies cited in Table 5-6, the study populations included workers at multiple plants involved in the manufacture of RCFs or RCF products. As part of the investigation of potential effects of exposure to airborne RCFs, chest radiography was performed. In all studies, chest radiographs were read independently by three readers using the International Labour Office (ILO) 1980 International Classification of the Radiographs of Pneumoconioses [ILO 1980]. Identifiers on films were masked to ensure a blind review by readers, and quality control measures and tests of agreement were used to check consistency among the readers. For each type of abnormality analyzed, the median of the three readings for each film was used.

# Pleural abnormalities

In the 1986 study of European RCF workers, results of the chest radiography indicated a prevalence of 2.8% (15/543) for pleural abnormalities among male workers [Rossiter et al. 1994]. Of the 15 cases with pleural abnormalities, 4 had bilateral diffuse thickening (1 with calcification), 1 showed bilateral pleural calcification only, 7 presented with unilateral diffuse thickening, and 3 showed costophrenic angle blunting only. The possibility of confounding was recognized because of other exposures: 52% of workers reported previous employment in dusty jobs, including 4.5% with prior asbestos exposures and 7% with prior MMMF exposures. When female workers were included in the same population, Trethowan et al. [1995] reported a prevalence of 2.7% (16/592) for pleural abnormalities. Two cases were known to have previous exposure to asbestos, and the possibility of exposure to other respiratory hazards was acknowledged for other persons with pleural abnormalities. Cowie et al. [1999, 2001] reported pleural abnormalities in 10% (78/774) of workers. In the U.S. cohort study, 20 cases of pleural plaque were identified: 18 production workers and 2 nonproduction workers [Rice et al. 1994, 1996]. Controls and cases were reinterviewed with additional questions regarding asbestos exposure (application, manipulation, and distance from exposure), and asbestos exposure was categorized from the interview data (rating index = high, medium, or low).

Rossiter et al. [1994] reported an association of pleural abnormalities with age (χ²=18.85, P=0.0008) but made no attempt to assess whether an association existed between pleural abnormalities and RCF exposure. Trethowan et al. [1995] also noted that pleural abnormalities were related to age but not independently to ceramic fiber exposure. Cowie et al. [1999, 2001] reported pleural abnormalities to be associated with age, exposure to asbestos, and body mass index (weight divided by height squared); when the data were unadjusted for age, an association existed between pleural changes and years worked at the plant. Lemasters et al. [1994] found that pleural abnormalities were associated with time since first RCF exposure (RCF latency) after adjusting for duration of asbestos exposure and time since first asbestos exposure (odds ratio [OR]=2.9 [95% CI=0.…]) [Lockey et al. 1996].
A latency validity review was also conducted, involving analysis of 205 historical chest radiographs available for workers with pleural changes. The purpose of the review was to confirm that for persons with pleural plaques, a biologically plausible latency period (≥5 years) existed between initial RCF exposure and appearance of a pleural plaque. Of the 18 pleural plaque cases for which historical chest radiographs were available, only 1 had a latency period of <5 years from initial RCF production work to recognition of a pleural plaque. A subsequent analysis by Lockey et al. [2002] included chest radiographs obtained every 3 years for 625 current workers at 5 RCF manufacturing sites and for 383 former workers at 2 of the 5 sites. Pleural changes were seen in 27 workers (2.7%); 19 of these (70%) were bilateral plaques and 3 (11%) were unilateral plaques. Cumulative RCF exposure (>135 fiber-months/cm³) was significantly associated with pleural changes (OR=6.0, 95% CI=1.4-31.0). The researchers also noted an increasing but nonsignificant trend relating interstitial changes to RCF exposure duration in a production job and to cumulative RCF exposure.

# Parenchymal Opacities

In the 1986 European study, Rossiter et al. [1994] found that 7% (38/543) of the current male workers had small parenchymal opacities with median profusion of 1/0 or more. No large parenchymal opacities were observed. Both predominantly rounded (n=23, or 4.2%) and predominantly irregular (n=15, or 2.8%) small parenchymal opacities were identified. In a subsequent analysis of small opacities for both male and female workers, Trethowan et al. [1995] noted that the prevalence of small opacities increased with age, smoking, and previous exposure to asbestos but not with cumulative RCF exposure; no description of the analysis was provided. Cowie et al. [1999] reported that 10 of 51 (19.6%) men with RCF exposure before 1971 had small opacities of category 1/0 or greater. Eight of these 10 had been exposed to asbestos, and 9 were either current or former smokers. In the U.S. study, no analyses were performed to assess the relationship between small opacities and RCF exposure because of the small number of cases (n=4) identified [Lemasters et al. 1994].

# Respiratory Conditions and Symptom Analyses

Using respiratory health questionnaires, the U.S. and European studies sought to identify respiratory conditions and symptoms that could be associated with exposure to RCFs. Lockey et al. [1993] administered to 717 subjects a standardized respiratory symptom questionnaire that included questions about the following symptoms and conditions: chronic cough, chronic phlegm, dyspnea grades 1 and 2 (described in the Definitions section of this document), wheezing, asthma, pleurisy, and pleuritic chest pain. Logistic regression analyses were adjusted for age, sex, smoking (pack-years), duration of asbestos exposure, duration of production employment, duration of other hazardous occupational respiratory exposure, and time since last RCF employment. With the exception of asthma, for which self-selection out of production jobs may have occurred, adjusted ORs for respiratory symptoms were significantly elevated in production workers compared with nonproduction workers. Results of a subsequent analysis of 742 RCF workers by Lemasters et al. [1998] indicated that the prevalence of respiratory symptoms and conditions (except asthma) was approximately twofold to fivefold higher in production than in nonproduction workers.
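The cumulative exposure metric used in these analyses (fiber-months/cm³) is simply airborne concentration integrated over employment time. A minimal sketch follows; the job history is hypothetical and chosen only to illustrate the arithmetic behind thresholds such as the >135 fiber-months/cm³ category above.

```python
# Sketch: cumulative exposure as the sum of concentration x duration
# over a (hypothetical) job history.
job_history = [
    # (fiber concentration in f/cm3, duration in months)
    (2.5, 36),   # e.g., early production job at higher historical levels
    (0.5, 120),  # later production job
    (0.1, 60),   # nonproduction job
]

cumulative = sum(conc * months for conc, months in job_history)
print(f"cumulative exposure = {cumulative:.0f} fiber-months/cm3")
print("exceeds 135 fiber-months/cm3:", cumulative > 135)
```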
The most frequently reported symptom among male production workers was dyspnea grade 1 (15.7%, compared with 2.5% for nonproduction workers), followed by wheezing (10.3%, compared with 3.8%). The prevalence of one or more respiratory symptoms and conditions among female production workers was 40.7%, compared with 20.3% for nonproduction workers.

Trethowan et al. [1995] examined the relationship of dry cough, chronic bronchitis, dyspnea (two grades), wheeze, stuffy nose, eye irritation, and skin irritation to current and cumulative RCF exposure estimates among 628 workers. Current exposures were based on air sampling measurements taken in association with the respiratory health survey. Eye and skin irritation were frequent in all plants and increased significantly, as did dyspnea and wheeze, with increasing current exposure concentration (i.e., 0.2 to 0.6 and ≥0.6 f/cm³) after controlling for age, sex, and smoking habits. The most frequent symptom, nasal stuffiness (reported by 55% of the group), showed no clear association with increasing current exposure. Chronic bronchitis, with a prevalence of 12% among all workers, also appeared unaffected by increasing current exposure concentration. Dry cough, eye irritation, and skin irritation all appeared to be associated with increasing exposure, especially at the highest exposure concentration (≥0.6 f/cm³). Analyses of cumulative exposure to respirable fibers showed statistically significant associations with dyspnea but no apparent association with chronic bronchitis or wheeze. In a separate analysis of the same cohort, Burge et al. [1995] investigated the relative importance of respirable RCF exposure versus inspirable dust exposure in predicting respiratory symptoms and conditions. That analysis found workers' current exposures to both inspirable dust and respirable fibers were related (P<0.05) to dry cough, stuffy nose, eye and skin irritation, and breathlessness (dyspnea) after adjustment for the effects of smoking, sex, age, and plant. Only skin irritation was significantly associated with current RCF exposure after controlling for exposure to inspirable dust. Burge et al. [1995] did not analyze the relationship between symptoms and cumulative exposure indices. Cowie et al. [1999, 2001] reported that recurrent chest illness was associated with estimated cumulative exposure to respirable fibers but was not significantly associated with recent exposure.

# Pulmonary Function Testing

Cowie et al. [1999, 2001] observed that RCF-exposed male workers (n=692) showed decreases in FEV₁ and FVC only among current smokers, the strongest association being with estimated cumulative exposure to respirable fibers. The average estimated decrease in FEV₁ and FVC was mild, approximately 100 ml. Female RCF-exposed workers (n=82) had a decreased FEV₁ with increasing cumulative exposure to respirable fibers and to respirable and total dust; among the female workers, cumulative exposure to total dust was most strongly associated with decreased pulmonary function measurements. Lemasters et al. [1998] analyzed PFT data for 736 male and female current workers at five U.S. RCF plants. They reported decreases in the percentage of predicted FVC and FEV₁ with every 10 years of RCF production work. Although the decreases were greatest among current and former male smokers, they were greater than the decreases associated with smoking alone. No significant changes were noted in the pulmonary function of RCF production workers who never smoked.
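The cross-sectional analyses described above regress lung function on age, smoking, and RCF production history. The following is a hedged sketch of that kind of model, not the published analysis: the data are synthetic, and all variable names and coefficients are illustrative only.

```python
# Sketch: OLS regression of percent-predicted FVC on age, smoking, and
# years in RCF production. Entirely synthetic data for illustration.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 300
df = pd.DataFrame({
    "age": rng.uniform(20, 60, n),
    "pack_years": rng.exponential(10, n),
    "production_years": rng.uniform(0, 25, n),
})
# Synthetic outcome with a small assumed decrement per production year
df["fvc_pct_pred"] = (105 - 0.15 * df.age - 0.2 * df.pack_years
                      - 0.3 * df.production_years + rng.normal(0, 8, n))

model = smf.ols("fvc_pct_pred ~ age + pack_years + production_years",
                data=df).fit()
print(model.params)   # recovers roughly -0.3 per production year
```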
A separate study by Lockey et al. [1998] involved longitudinal analysis of data from a cohort of 361 current male RCF workers hired before June 30, 1990, who had participated in at least five PFT sessions between 1987 and 1994. By comparison, nonparticipants excluded from the analysis by these criteria were, on average, older and heavier, were more likely to smoke, and had lower height-adjusted and percent-predicted lung function values. Cross-sectional analysis of the initial pulmonary function session used a regression model that included coefficients for age, ≤7 versus >7 RCF production years, smoking status (pack-years, current versus former smoker), weight, and plant location (categorical). The analysis found decreases in FVC and FEV₁ for workers employed >7 years in production compared with nonproduction workers. In longitudinal analyses of followup production years (i.e., from initial PFT to final PFT) and followup cumulative exposure (i.e., from initial PFT to final PFT), neither variable had an effect on FVC or FEV₁. These results led the authors to conclude that the more recent exposure concentrations during 1980-1994 had no adverse effect on the longitudinal trend of pulmonary function. Decrements in FVC and FEV₁ noted in the initial cross-sectional analyses of PFT data were believed to be related to the earlier, higher exposure concentrations.

# Mortality Studies

Table 5-8 presents findings from a cohort mortality study of two U.S. RCF production plants reported by Lockey et al. [1993]. The study is based on a cohort of 684 male workers at two RCF production plants who were employed for at least 1 year between January 1, 1950, and June 1, 1988. Five workers were lost to followup, and 46 were deceased. Because this is a relatively new industry (40 years at the time of the study) that has experienced recent growth of the workforce at the plants studied, person-years at risk were limited at longer latencies (for example, only 126.37 person-years with >30 years since first RCF job). Using standardized mortality ratios (SMRs), the authors found no significant excess in mortality. The mortality was also lower than would be expected if RCFs had the potency of chrysotile, but the difference is not statistically significant. For mesothelioma, the authors concluded that the numbers of deaths anticipated under hypotheses of asbestos-like potency are too small to be rejected by the zero cases seen in the RCF cohorts [Walker et al. 2002]. NIOSH researchers noted that this analysis by Walker et al. was not based on the most current update of the RCF cohort. In addition, the asbestos risk assessment models used by Walker et al. [2002] were fitted to studies with longer followup periods than that of the RCF worker cohort. Because these models do not specify length of followup, it is not possible to adjust for these differences. Consequently, it is likely that the RCF cohort has not been followed long enough to demonstrate the risks observed in the asbestos cohorts. NIOSH believes the mortality study by Lemasters et al. [2003] and the risk analysis by Walker et al. [2002] have insufficient power for detecting lung cancer risk based on what would be predicted for asbestos.

# NIOSH HHEs

As part of its mission as a public health agency, NIOSH performs HHEs at the request of workers, employers, or labor organizations to investigate occupational hazards associated with a workplace or work-related activity.
One such HHE involved evaluating worker exposures to ceramic fibers at a company manufacturing steel forgings [Kominsky 1978]. At the facility, furnaces for heat-treating steel ingots were lined with RCF felt and batting, and this lining required regular maintenance and replacement. Among the workers interviewed were six bricklayers involved in furnace lining maintenance. Four of the bricklayers reported having experienced irritation of exposed skin areas and of the throat during the handling and installation of the RCF-containing insulation. On the basis of the reported symptoms and their consistency with the known effects of RCFs, the symptoms of irritation were attributed to RCF exposure. No attempt was made to measure airborne fiber concentrations.

Another NIOSH HHE [Lyman 1992] resulted from an OSHA inspection that identified 18 cases of occupational lung disease recorded in 1 year at a plant manufacturing fire bricks, ceramic fiber products, and other thermal insulation components from kaolin. About 600 workers were potentially exposed to respiratory hazards that included not only RCFs but also kaolin dust, crystalline silica dust, and (for maintenance workers) asbestos. A total of 38 workers had been referred to a pulmonary physician for evaluation based on 2 rounds of chest X-ray screening of the workforce in 1980 and 1986. Diagnoses were related to pleural thickening (n=10), pleural plaques (n=3), diffuse pulmonary fibrosis (n=21), mesothelioma (n=1), and other miscellaneous conditions. At least 20 of these cases were classified as work-related by the pulmonologist who evaluated them. The nonoccupational classification of some of the remaining 18 cases was questioned by a NIOSH physician who performed a retrospective record review. The 38 cases were reclassified on the basis of job histories into those likely to have been exposed to RCFs (n=19, including 4 with pleural abnormalities and 8 with diffuse fibrosis) and those unlikely to have been exposed to RCFs (n=19, including 9 with pleural abnormalities, 13 with fibrosis, and 1 with mesothelioma). However, no attempt was made to analyze further for an association of the cases with exposure to RCFs. The report implied that occupational exposure to kaolin dust and to asbestos caused many or all of the job-related conditions.

# Discussion

The radiographic analyses of the U.S. and 1996 European worker groups suggest an association between pleural abnormalities, including pleural plaques, and RCF exposure [Lemasters et al. 1994; Lockey et al. 1996; Cowie et al. 1999]. From Rossiter et al. [1994] it is less apparent whether such an association was investigated. Trethowan et al. [1995] reported that pleural abnormalities were not independently related to RCF exposure. Differences between the findings of the U.S. studies and those of the initial European studies may be related to the long latency before pleural abnormalities, in particular pleural plaques, become detectable following RCF exposure. Workers exposed to asbestos developed asbestos-associated pleural plaques after latency periods of more than 15 years after initial exposure [Hillerdal 1994] and, in some cases, after 30 to 57 years [Begin et al. 1996]. Lockey et al. [2002] began to supplement the posteroanterior view with left and right 45° oblique view films as a standard practice for radiographic surveillance. This methodology, known as a film triad, was evaluated against the posteroanterior-only view to determine the reliability, sensitivity, and specificity of each method [Lawson et al. 2001].
The evaluation, involving 652 subjects in the RCF study, showed that the film triad had considerably higher interreader reliability (κ=0.59) than the posteroanterior-only method (κ=0.44). The authors concluded that the film triad method provides an optimal approach.

The U.S. and 1986 European studies yielded little evidence of an association between radiographic parenchymal opacities and RCF exposure. In the U.S. study, small opacities were rare [Lockey et al. 1996]. Small opacities of profusion category 1/0 or greater were more frequent in the 1986 European study [Trethowan et al. 1995], but exposures to silica and other dusts were believed to account for many of these cases. The results of statistical analyses did not implicate RCF exposure [Trethowan et al. 1995] or were only slightly suggestive of an RCF exposure effect [Rossiter et al. 1994]. In the 1996 evaluation of the European cohort, small opacities of category 1/0 or greater were positively associated with RCF exposures that occurred before 1971 [Cowie et al. 1999]. Ten of the 51 (19.6%) male workers exposed before 1971 developed category 1/0 or greater opacities; 8 of these had also been exposed to asbestos, and 9 were either current or former smokers.

Both the U.S. [Lockey et al. 1993; Lemasters et al. 1998] and European [Burge et al. 1995; Cowie et al. 1999] studies found that occupational exposure to RCFs is associated with various reported respiratory symptoms and conditions, after adjusting for the effects of age, sex, and smoking. Exposure to RCF concentrations in the range of 0.2 to 0.6 f/cm³ was associated with statistically significant increases in eye irritation (OR=2.16, 95% CI=1.32-3.54), stuffy nose (OR=2.06, 95% CI=1.25-3.39), and dry cough (OR=2.53, 95% CI=1.25-5.11) compared with exposure concentrations lower than 0.2 f/cm³ [Trethowan et al. 1995]. Increased ORs were demonstrated for RCF exposure concentrations greater than 0.6 f/cm³ compared with concentrations <0.2 f/cm³ for wheeze (P<0.0001), dyspnea (P<0.05), eye irritation (P<0.0001), skin irritation (P<0.0001), and dry cough (P<0.05) but not for stuffy nose or chronic bronchitis [Trethowan et al. 1995]. Lockey et al. [1993] found that dyspnea was significantly associated with exposure to >15 fiber-months/cm³ (that is, >1.25 fiber-years/cm³) relative to exposure to ≤15 fiber-months/cm³ (dyspnea grade 1: OR=2.1, 95% CI=1.3-3.3; dyspnea grade 2: OR=3.8, 95% CI=1.6-9.4). Lockey et al. [1993] also found statistically significant associations between cumulative RCF exposure and chronic cough (OR=2.0, 95% CI=1.0-4.0) and pleurisy (OR=5.4, 95% CI=1.4-20.2). Lemasters et al. [1998] also noted associations (P<0.05) between employment in an RCF production job and increased prevalence of dyspnea and the presence of at least one respiratory symptom or condition. Recurrent chest illness in the European cohort was associated with cumulative exposure to respirable fibers and was most strongly associated with cumulative exposure to respirable dust [Cowie et al. 1999].

In cross-sectional analyses involving spirometric testing, both the U.S. [Lockey et al. 1998; Lemasters et al. 1998] and 1986 European [Trethowan et al. 1995; Burge et al. 1995] studies found that cumulative RCF exposure was associated with pulmonary function decrements among current and former smokers. The 1996 European study demonstrated decrements in current smokers only [Cowie et al. 1999].
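Odds ratios and confidence intervals of the kind quoted above are computed from 2×2 exposure-by-outcome tables. The sketch below uses hypothetical counts and a simple Wald interval on the log odds ratio; the published ORs were additionally adjusted for covariates via logistic regression, which this sketch omits.

```python
# Sketch: odds ratio and 95% Wald CI from a 2x2 table (hypothetical counts).
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """a,b = cases/noncases among exposed; c,d = cases/noncases in referent."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)    # SE of log(OR)
    lo, hi = (math.exp(math.log(or_) + s * z * se) for s in (-1, 1))
    return or_, lo, hi

# Hypothetical: dry cough at 0.2-0.6 f/cm3 vs. <0.2 f/cm3
print("OR=%.2f (95%% CI=%.2f-%.2f)" % odds_ratio_ci(30, 170, 14, 200))
```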
The observed decreases in pulmonary function among the European workers remained significantly associated with cumulative RCF exposure even after controlling for cumulative exposure to inspirable dust [Burge et al. 1995]. A longitudinal analysis of data from multiple PFTs by Lockey et al. [1998] led the researchers to conclude that exposures to RCFs between 1987 and 1994 were not associated with decreased pulmonary function. The findings from the U.S. and European studies suggest that the decrements in pulmonary function observed in current and former smokers result from an interactive effect between smoking and RCF exposure.

# Carcinogenicity Risk Assessment Analyses

The literature contains three significant independent risk analyses of occupational exposure to RCFs and potential health effects. In each of these analyses, health effects data derived from multidose and MTD studies with rats were used with models to extrapolate risks to human populations. Modeling of effects observed in experimental animal studies was necessitated by the lack of adequate data on adverse health effects in humans with occupational exposures to RCFs. The three analyses, described in detail below and in Table 5-9, are those of the Dutch Expert Committee on Occupational Standards (DECOS) [1995], Fayerweather et al. [1997], and Moolgavkar et al. [1999].

# DECOS [1995]

In 1995, DECOS (a workgroup of the Health Council of the Netherlands) published a report evaluating the health effects of occupational exposure to SVFs. The purpose of the report was to establish health-based recommended occupational exposure limits for specific types of SVFs. As one of the criteria for determining the airborne exposure limits for six distinct types of SVFs, risk assessments were performed for each fiber type, including RCFs. The risk analysis for RCFs was based on the assumption that RCFs are a potential human carcinogen, as indicated by the positive results of carcinogenicity testing in animals. A health-based recommended occupational exposure limit was determined using the following rationale:

1. If the carcinogenic potential of RCFs is caused by a nongenotoxic mechanism, an occupational exposure limit of 1 respirable f/cm³ as an 8-hr TWA should be recommended, based on an NOAEL of 25 f/cm³ and a safety factor of 25.

2. If the carcinogenic potential of RCFs is linked to a genotoxic mechanism, a model assuming a linear relationship between dose and response (cancer) should be used to establish the occupational exposure limit. The model indicated that an excess cancer risk of 4×10⁻³ is associated with a TWA exposure to 5.6 respirable f/cm³ over 40 years of occupational exposure. A cancer risk of 4×10⁻⁵ is accordingly associated with exposure to 0.056 f/cm³, and linear extrapolation indicates that occupational exposure to 1 respirable f/cm³ as an 8-hr TWA for 40 years is associated with a cancer risk of 7×10⁻⁴.

The DECOS analysis relied on data from a long-term multidose study of rats exposed to kaolin ceramic fibers [Bunn et al. 1993; Mast et al. 1995b]. These data showed that exposure by inhalation to 25 f/cm³ (3 mg/m³) for 24 months produced a negligible amount of fibrosis (mean Wagner score of 3.2). Consequently, the Dutch committee viewed 25 f/cm³ as the NOAEL for fibrosis. The report also notes that at the time of publication, no data existed from retrospective cohort mortality or morbidity and case-control studies of persons with occupational exposures to RCFs.
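The linear extrapolation in the genotoxic branch of the DECOS rationale is simple proportional scaling, which can be checked directly from the figures quoted above:

```python
# Arithmetic check of the DECOS linear (genotoxic-mechanism) extrapolation.
risk_at_5_6 = 4e-3                 # excess cancer risk at 5.6 f/cm3 (40-yr TWA)
unit_risk = risk_at_5_6 / 5.6      # risk per f/cm3 under linearity

for conc in (5.6, 1.0, 0.056):
    print(f"{conc:>6} f/cm3 -> excess risk ~ {unit_risk * conc:.1e}")
# 1 f/cm3 gives ~7e-4 and 0.056 f/cm3 gives ~4e-5, matching the
# committee's figures quoted above.
```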
The linear modeling approach used in this analysis of the exposure-response relationship in the animal data does not take into consideration possible differences in dosimetry and lung burden between rats and humans.

# Fayerweather et al. [1997]

Fayerweather et al. [1997] conducted a study focused primarily on the risk assessment of occupational exposures for glass fiber insulation installers. They also performed risk analyses for several other types of SVFs, including RCFs; only the analysis for RCFs is presented here. This analysis applied an EPA linearized multistage model (representing a linear, nonthreshold dose-response) to data from the rat multidose and MTD chronic inhalation bioassays [Mast et al. 1995a,b] to determine exposures at which "no significant risk" occurs, i.e., no more than one additional cancer case per 100,000 exposed persons. Nonlinear models were also used for comparison: the Weibull 1.5-hit nonthreshold model (representing a nonlinear, nonthreshold dose-response curve) and the Weibull 2-hit threshold model (representing a nonlinear, threshold dose-response curve). Fiber inhalation by rats was equated to that by humans by determining the fibers/day•kg of body weight for the animals and using an exposure scenario of 4 hr/day (consistent with insulation installers' schedules), 5 days/week, and 50 weeks/year over 40 working years of a 70-year lifespan. RCFC interpreted the results of the analysis with the linearized multistage model to represent a risk of 3.8×10⁻⁵ for developing lung cancer over the working lifetime at an exposure concentration of 1 f/cm³ [RCFC 1998]. Using the nonlinear models, estimates of nonsignificant exposures (i.e., a working lifetime exposure associated with no more than 1 additional cancer case per 100,000 exposed persons) were 2 and 3 orders of magnitude higher. Conversely, the risk estimates for exposure to 1 f/cm³ for a working lifetime were lower using the Weibull 1.5-hit nonthreshold and Weibull 2-hit threshold models.

# Moolgavkar et al. [1999]

This report describes a quantitative assessment of the risk of lung cancer associated with occupational exposure to RCFs [Moolgavkar et al. 1999]. A major premise underlying the risk assessment is that humans are as susceptible to RCFs as rats at the tissue level. The risk analysis was performed using data from the two chronic inhalation bioassays of RCFs in male Fischer 344 rats [Mast et al. 1995a,b]. Dosimetry in the risk assessment was based on a fiber deposition and clearance model developed by Yu et al. [1996], which was used to estimate the lung burdens of fibers in humans. The dose-response model used for the risk assessment was the two-mutation clonal expansion model, commonly referred to as the Moolgavkar-Venzon-Knudson (MVK) model. The MVK model was fitted to the rat bioassay data to estimate the proportional increase in the rat lung tumor initiation rate in RCF-exposed rats, relative to the background initiation rate in nonexposed rats. An MVK model for human lung cancer was then created by fitting the model to the age-specific lung cancer incidence of either of two human cohorts. Finally, the human lung cancer rate for a given tissue dose was estimated by increasing the tumor initiation rate in the human model by the same proportional amount that an identical tissue dose would increase the initiation rate in the MVK model for rats.
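The following is a much-simplified sketch of the two-stage clonal expansion idea, not the fitted model of Moolgavkar et al. [1999]: with constant parameters and small cumulative tumor probability, the hazard is approximately h(t) = νμ(e^{(α−β)t} − 1)/(α − β), where ν is the initiation rate, μ the second-mutation rate, and α and β the division and death rates of initiated cells. All parameter values below are hypothetical, and exposure is assumed, as in the report, to scale the initiation rate proportionally.

```python
# Sketch of the approximate two-stage (MVK) hazard; parameters hypothetical.
import math

def mvk_hazard(t, nu, mu1, alpha, beta):
    """Approximate hazard of the two-stage clonal expansion model
    (valid when the cumulative tumor probability is small)."""
    g = alpha - beta   # net proliferation rate of initiated cells, per year
    return nu * mu1 * (math.exp(g * t) - 1.0) / g

baseline = dict(nu=1.0, mu1=1e-7, alpha=0.2, beta=0.1)   # hypothetical values
for factor in (1.0, 1.5, 2.0):   # proportional increase in initiation rate
    pars = dict(baseline, nu=baseline["nu"] * factor)
    print(f"initiation x{factor}: hazard at age 70 = "
          f"{mvk_hazard(70.0, **pars):.2e} per year")
```

Because the approximate hazard is linear in ν, a proportional increase in the initiation rate produces the same proportional increase in the hazard, which is the mechanism by which the rat-derived initiation-rate increase is transferred to the human model.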
The assumption was made that, for any given tissue dose, the proportional increase in the lung tumor initiation rate (relative to the background rate) is the same in humans as in rats. The two human cohorts used for the modeling were a nonsmoking American Cancer Society (ACS) cohort [Peto et al. 1992] and a steel industry cohort. The excess risk was calculated for exposure concentrations of 0.75 f/cm³, 0.5 f/cm³, and 0.25 f/cm³; these risk estimates are presented in Table 5-10. As shown in Table 5-10, the highest risk estimates at each of the three exposure concentrations are associated with the linear model, followed by the exponential model; the lowest risk estimates are associated with the quadratic model. At each exposure concentration, more conservative risk estimates are obtained for the ACS cohort than for the steel industry cohort. At the recommended exposure guideline established by the RCFC (0.5 f/cm³), the highest risk estimate (linear model, steel industry cohort) is the MLE of 5.3×10⁻⁴, or 5.3/10,000 (95% UCL=2.9×10⁻³). At 0.5 f/cm³, the risk estimates for the steel industry cohort are roughly 1 order of magnitude (factor of 10) lower with the exponential model (MLE=7.3×10⁻⁵, 95% UCL=9.1×10⁻⁵) and 2 orders of magnitude lower with the quadratic model (MLE=3.5×10⁻⁶, 95% UCL=1.1×10⁻⁵). At the lowest exposure concentration (0.25 f/cm³), the highest risk estimate (steel industry cohort, linear model) was the MLE of 2.7×10⁻⁴ (95% UCL=1.4×10⁻³). On average, the risk estimates from the 3 models using the steel industry cohort are 3 to 4 times higher than the corresponding model values for the ACS cohort. The authors concluded that the risk estimates based on the two cohorts "represent bounds on risks likely to be seen in occupational cohorts." However, an occupational cohort is unlikely to share the nonsmoking status of the ACS cohort; therefore, of the two human populations used for model fitting in the Moolgavkar et al. [1999] risk assessment, the steel industry cohort may be the preferable one for estimating the risks from occupational exposures to RCFs.

The Moolgavkar et al. [1999] report also indicates the airborne fiber concentrations estimated to result in an excess lifetime cancer risk of 10⁻⁴ (1 in 10,000) based on the approaches used by DECOS [1995] and Fayerweather et al. [1997] and using the MVK model for both the ACS and steel industry cohorts. With the DECOS [1995] linearized, nonthreshold model approach, an excess lifetime cancer risk of 10⁻⁴ was calculated to result from a fiber concentration of 0.14 f/cm³. Using the linearized multistage model approach described in Fayerweather et al. [1997], a fiber concentration of 2.6 f/cm³ was estimated to correspond to an excess lifetime cancer risk of 10⁻⁴. With the MVK exponential model, an excess lifetime cancer risk of 10⁻⁴ was determined for fiber concentrations of 0.7 f/cm³ for the steel industry cohort and 2.7 f/cm³ for the ACS cohort [Moolgavkar et al. 1999].

# Discussion

The estimated lung fiber burden used for dosimetry in the analysis by Moolgavkar et al. [1999] is a methodological improvement over the risk assessment for RCFs by Fayerweather et al. [1997], which was based solely on the inhaled fiber concentration. Modeling lung burden dosimetry should, in theory, compensate for the known differences between rats and humans in fiber deposition and clearance.
Similarly, using an MVK model for dose-response estimation could compensate for differences in cell mutation and proliferation rates between rats and humans. However, some key parameter values in the MVK and lung dosimetry models are poorly known. For example, the dosimetry model for humans has been validated with only three human tissue samples, taken from workers whose exposures to RCFs were not measured [Yu et al. 1997]. A review and comparison of risk modeling approaches for RCFs by Maxim et al. [2003] describes the three analyses discussed here as well as additional, more sophisticated variations of quantitative risk analysis for RCFs. Using approaches such as benchmark dose modeling, Maxim et al. [2003] produced RCF unit potency values ranging from 1.4×10⁻⁴ to 7.2×10⁻⁴.

A common weakness of all three risk analyses stems from uncertainty about possible differences between human and rat lungs in sensitivity to fibers. The possibility of such a difference is acknowledged in the report by Moolgavkar et al. [1999], but the effect of this uncertainty on the risk estimates is not explored quantitatively. As an example, Pott et al. [1994] estimated that in the case of asbestos fibers, humans are approximately 200-fold more sensitive than rats on the basis of fiber concentration in air. Pott et al. [1994] further noted that a crocidolite inhalation study that was negative in the rat resulted in a rat lung fiber concentration more than 1,000-fold greater than the fiber concentrations in the lungs of asbestos workers with mesotheliomas. In support of this analysis, results of a study by Rödelsperger and Woitowitz [1995] led those authors to conclude that humans are at least 6,000 times more sensitive than rats to a given tissue concentration of amphibole fibers. Although amphibole asbestos fibers have physicochemical characteristics that differ from those of RCFs, these findings raise questions about using experimental animal data to predict human health effects and about assuming that target tissues in humans and rats are equally sensitive to RCF toxicity.

The lung cancer risk estimates for RCFs derived by Moolgavkar et al. [1999] may also be underestimates for occupationally exposed workers because of several basic assumptions made in the lung tissue dosimetry. Tissue dosimetry modeling in the Moolgavkar et al. [1999] risk assessment assumes that a worker is exposed to RCFs for 8 hr/day, 5 days/week, 52 weeks/year, from age 20 to 50 [Moolgavkar et al. 1999]. An alternative analysis, in which the assumption was changed to 8 hr/day, 5 days/week, 50 weeks/year from age 20 to 60, was also described but not presented in detail. In both cases, the breathing rate for light work was assumed to be 13.5 liters/minute. Additional information might be gained by assuming an exposure period of 8 hr/day, 5 days/week, 50 weeks/year, from age 20 to 65, with a breathing rate matching the International Commission on Radiological Protection "Reference Man" value for light work, which is 20 liters/minute [ICRP 1994]. In addition, the cumulative excess risk of lung cancer was calculated only through age 70 [Moolgavkar et al. 1999]. This practice may underestimate the lifetime risk of lung cancer in the exposed cohort, because a substantial fraction of the cohort may be expected to survive beyond age 70.
The excess risk might also be calculated in a competing-risks framework, using actuarial methods, until most or all of the cohort is presumed to have died because of competing risks (generally by age 85). Finally, risk estimates derived by Moolgavkar et al. [1999] were based solely on data from studies with rats, ignoring data from studies of hamsters [McConnell et al. 1995]. Because 42% of the hamsters in these studies developed mesotheliomas, using this database for the risk assessment would produce higher estimates of risk than the analysis based on the rat data.

6 Discussion and Summary of Fiber Toxicity

# Significance of Studies with RCFs

Three major sources of data contributing to the literature on RCFs are (1) experimental studies with animals and in vitro bioassays, (2) epidemiologic studies of populations with occupational exposure to RCFs (primarily during manufacturing), and (3) exposure assessment studies that provide quantitative and qualitative measurements of exposures as well as the physical and chemical characteristics of airborne RCFs. Each of these sources of information is considered integral to this criteria document for providing a more comprehensive evaluation of occupational exposure to RCFs and their potential health consequences.

Data from inhalation studies with animals exposed to RCFs have demonstrated statistically significant increases in the induction of lung tumors in rats and mesotheliomas in hamsters [Mast et al. 1995a,b; McConnell et al. 1995]. Other inhalation studies with RCFs have shown pathobiologic inflammatory responses in lung and pleural tissues [Gelzleichter et al. 1996a,b]. Implantation and instillation methods have also been used in animal studies with RCFs to determine the potential effects of these fibers on target tissues. These studies have recognized limitations for interpreting results because the exposure techniques bypass the natural defense and clearance mechanisms associated with the normal route of exposure (i.e., inhalation). However, they are useful for demonstrating mechanisms of toxicity and comparative measures of toxicity for different agents. RCFs implanted into the pleural and abdominal cavities of various strains of rats and hamsters have produced mesotheliomas, sarcomas, and carcinomas at the sites of fiber implantation [Wagner et al. 1973; Davis et al. 1984; Pott et al. 1987]. Similar tumorigenic responses have been observed following intratracheal instillation of RCFs [Manville Corporation 1991]. These data provide additional evidence of the carcinogenic effects of RCFs in exposed laboratory animals.

Epidemiologic data have not associated occupational exposure to RCFs under current exposure conditions with an increased incidence of pleural mesothelioma or lung cancer [Lockey et al. 1993; Lemasters et al. 1998]. However, in epidemiologic studies of workers in RCF manufacturing facilities [Lemasters et al. 1994; Lockey et al. 1993, 1996; Rossiter et al. 1994; Trethowan et al. 1995; Burge et al. 1995; Cowie et al. 1999], increased exposures to airborne fibers have been linked to pleural plaques, small radiographic parenchymal opacities, decreased pulmonary function, respiratory symptoms and conditions (pleurisy, dyspnea, cough), and skin and eye irritation. Many of the respiratory effects showed a statistically significant association with RCF exposure after controlling or adjusting for potential confounders, including cigarette smoking and exposure to nonfibrous dust. Yet in pulmonary function tests (PFTs), the interactive effect between smoking and RCF exposure was especially pronounced, based on the finding that RCF-associated decreases in pulmonary function were limited to current and former smokers [Lockey et al. 1998; Lemasters et al. 1998; Trethowan et al. 1995; Burge et al. 1995]. The interactive effect between exposure to airborne fibers and cigarette smoke has been previously documented (e.g., Selikoff et al. [1968]). However, unlike male workers, nonsmoking female workers did show statistically significant decreases in PFT results associated with RCF exposure [Lemasters et al. 1998]. Analyses of data from multiple PFT sessions [Lockey et al. 1998] have led researchers to conclude that decreases in pulmonary function were more strongly influenced by the higher exposures to airborne RCFs that occurred in the past. This conclusion seems plausible, since historical air-sampling data indicate that airborne fiber concentrations were much higher in the first decades of RCF manufacturing and that former workers had potentially higher exposures.

Multiple studies have been performed to characterize the concentrations and characteristics of airborne exposures to RCFs in the workplace. Current and historical environmental monitoring data [Esmen et al. 1979; Cantor and Gorman 1987; Gorman 1987; O'Brien et al. 1990; Cheng et al. 1992; Brown 1992; Corn et al. 1992; Lyman 1992; Allshouse 1995; Hewett 1996] indicate that airborne exposures to RCFs include fibers in the respirable size range (<3.5 µm in diameter and <200 µm long [Timbrell 1965; Lippmann 1990; Baron 1996]). These exposures occur in primary RCF manufacturing as well as in secondary industries such as RCF installation and removal. Sampling data from studies of domestic, primary RCF manufacturing sites indicate that average airborne fiber concentrations have steadily declined by nearly 2 orders of magnitude over the past 2 decades. For example, Rice et al. [1997] report a maximum exposure estimate of 10 f/cm³ associated with an RCF manufacturing process in the 1950s, and Esmen et al. [1979] measured average exposure concentrations ranging from 0.05 to 2.6 f/cm³ in RCF facilities in the middle to late 1970s. Rice et al. [1994, 1996, 1997] suggest average concentrations in manufacturing ranging from <LOD (limit of detection) to 0.66 f/cm³ in the late 1980s, and Maxim et al. [1994, 1997, 2000a] report that concentrations from the late 1980s through 1997 ranged from an AM of <0.3 to 0.6 f/cm³ (GM=0.2 f/cm³). For many manufacturing processes, even greater reductions in exposures have been realized through improved ventilation, engineering or process changes, and product stewardship programs [Rice et al. 1996; Maxim et al. 1999b]. Although the potential exists for exposure to respirable crystalline silica in the forms of quartz, tridymite, and cristobalite during work with RCFs, exposure monitoring data indicate that these exposures are generally low [Rice et al. 1994]. Maxim et al. [1999b] report that many airborne samples of crystalline silica collected during the installation and removal of RCF products contain concentrations below the LOD, with average concentrations of respirable crystalline silica per measurable task ranging from 0.01 to 0.44 mg/m³ (equivalent 8-hr TWA range=0.004 to 0.148 mg/m³).
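The task-to-TWA conversion used in that comparison is a standard time weighting of concentration over the full shift. The sketch below applies it to a hypothetical task profile; the task concentrations and durations are invented for illustration and are not taken from Maxim et al. [1999b].

```python
# 8-hr TWA from task-based samples: TWA = sum(c_i * t_i) / 480 min.
# Task concentrations and durations below are hypothetical illustrations.

tasks = [
    (0.44, 60),   # (concentration mg/m3, duration min): e.g., removal task
    (0.05, 120),  # cleanup
    (0.00, 300),  # remainder of shift, assumed no exposure
]

twa = sum(c * t for c, t in tasks) / 480
print(f"8-hr TWA = {twa:.3f} mg/m3")   # a 1-hr task at 0.44 contributes 0.055
```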
Other studies have shown a greater potential for exposure to respirable crystalline silica (especially in the form of cristobalite) during the removal of after-service RCF materials [Gantner 1986; Cheng et al. 1992; Perrault et al. 1992; van den Bergen et al. 1994; Sweeney and Gilgrist 1998]. For processes associated with higher concentrations of airborne respirable fibers, there are also generally greater concentrations of total and respirable dusts [Esmen et al. 1979; Krantz et al. 1994].

# Factors Affecting Fiber Toxicity

To accurately interpret the results of experimental and epidemiologic studies with RCFs, it is important to consider the recognized factors that contribute to fiber toxicity for RCFs and other SVFs in general. The major determinants of fiber toxicity have been identified as fiber dose (or its surrogate, airborne fiber exposure), fiber dimensions (length and diameter), and fiber durability (especially as it affects fiber biopersistence in the lungs) [Bignon et al. 1994; Bunn et al. 1992; Bender and Hadley 1994; Christensen et al. 1994; Lockey and Wiese 1992; Moore et al. 2001].

# Fiber Dose

The measurement of airborne fiber concentrations is frequently used as a surrogate for assessing dose and health risk to workers. Analyses of historical and current air sampling data indicate that occupational exposure concentrations of airborne RCFs have decreased dramatically in the manufacturing sector [Maxim et al. 1997; Rice et al. 1997]. In chronic inhalation studies of RCFs [Mast et al. 1995a,b; McConnell et al. 1995], both rats and hamsters were exposed to a range of size-separated RCF concentrations in a nose-only inhalation protocol. When airborne RCFs are generated, half or more of the aerosol is composed of respirable particles of unfiberized material that was formerly a component of the fiber [Mast et al. 1995a,b]. Because of the nature of this mixed exposure, it is difficult to determine the relative contributions of the airborne fibers and nonfibrous particulates to the adverse effects observed in humans and animals. It has been postulated that the nonfibrous particulates may have contributed to an overload effect in the Mast et al. [1995a,b] animal studies with RCFs [Yu et al. 1994; Mast et al. 1995a,b; Brown et al. 2000]. Burge et al. [1995] have suggested that the health effects seen in RCF-exposed workers are a consequence of combined particulate and fiber exposure, but that the decrements in lung function are more related to fiber exposure combined with smoking. Other studies have shown that for processes associated with higher concentrations of airborne respirable fibers, there is also a greater concentration of total and respirable dust [Esmen et al. 1979; Krantz et al. 1994].

# Fiber Dimensions

Throughout the literature, studies support the theory that fiber toxicity is related to fiber dimensions [Timbrell 1982, 1989; Harris and Timbrell 1977; Stanton et al. 1977, 1981; Lippmann 1988]. Initially, fiber dimensions (length and diameter) play a significant role in determining the deposition site of a fiber in the lungs. Longer and thicker (>3.5 µm in diameter) fibers are preferentially deposited in the upper airways by the mechanisms of impaction [Yu et al. 1986] or interception. Timbrell [1965] suggested that direct interception plays an important role in the deposition of fibers, as the fiber comes into contact with the airway wall and is deposited. Fibers deposited in the larger ciliated airways are generally cleared via the mucociliary escalator.
Thinner fibers tend to maneuver past airway bifurcations into smaller and smaller airways until their dimensions dictate deposition by either sedimentation or diffusion [Asgharian and Yu 1989]. Another factor that may enhance deposition is the electrostatic charge a fiber can accumulate during dust-generating processes in occupational settings [Vincent 1985]. The fiber charge may affect its attraction to the lung surface, causing the fiber to be deposited by electrostatic precipitation. Although the dimensional characteristics and geometry of a fiber influence its deposition in the respiratory tract, the fiber's length and chemical properties dictate its clearance and retention once it has been deposited within the alveolar region. For a fiber that traverses the respiratory airways and is deposited in the gas exchange region, possible fates include dissolution, clearance via phagocytic cells (alveolar macrophages) in the alveoli, or translocation through membranes into interstitial tissues.

Both test animals and workers have been exposed to RCFs of similar length and diameter [Allshouse 1995], and these exposures include fibers of respirable dimensions [Esmen et al. 1979; Lockey et al. 1990; Cheng et al. 1992]. Since rats and other rodents are obligate nasal breathers, fibers greater than about 1 µm in diameter are too large for deposition in their alveoli [Jones 1993]. By comparison, humans can inhale and deposit fibers up to 3.5 µm in diameter in the thoracic and gas exchange regions of the lung. This physiological difference prevents the evaluation of fibers with diameters of about 1 to 3.5 µm (which would have human relevance) in rodent inhalation studies.

The role of fiber size in inducing biological effects is well documented and reviewed in the literature [Stanton et al. 1977, 1981; Pott et al. 1987; Warheit 1994]. Stanton et al. [1977] hypothesized that glass fibers longer than 8 µm with diameters thinner than 0.25 µm had high carcinogenic potential. In a review of the significance of fiber size to mesothelioma etiology, Timbrell [1989] concluded that thinner fibers, with an upper diameter limit of 0.1 µm, are more potent for producing diseases of the parietal pleura (e.g., mesothelioma and pleural plaques) than thicker fibers. That value for fiber diameter is cited by Lippmann [1988] in his asbestos exposure indices for mesothelioma. Oberdörster [1994] studied the effects of both long (>10-16 µm) and short (<10 µm) fibers on alveolar macrophage functions, concluding that both will lead to inflammatory reactions, although a distinct difference exists in the long-term effects because of the differential clearance of fibers of different sizes.

Alveolar macrophages constitute the first line of defense against particles deposited in the alveoli; they migrate to sites where fibers are deposited and phagocytize them. The engulfed fibers are then moved by the macrophages toward the mucociliary escalator and removed from the respiratory tract. The ability of the macrophages to clear fibers is size-dependent. Short fibers (<15 µm long) can usually be phagocytized by one rat alveolar macrophage [Luoto et al. 1994; Morgan et al. 1982; Oberdörster et al. 1988; Oberdörster 1994], whereas longer fibers may be engulfed by two or more macrophages. Blake et al. [1998] have suggested that incomplete or frustrated phagocytosis may play a role in the increased toxicity of longer fibers. Fiber length has been correlated with the cytotoxicity of glass fibers [Blake et al. 1998], with the greatest cytotoxicity for fibers 17 and 33 µm long compared with shorter fiber samples.
Long fibers (17 µm average length) tend to be more potent inducers of TNF production and transcription factor activation than short fibers (7 µm average length) [Ye et al. 1999].

When comparing the dimensions of airborne fibers with those found in the lungs, it is important to consider the preferential clearance of shorter fibers as well as the effects of fiber dissolution and breakage. Yu et al. [1996] evaluated these factors in a study that led to the development of a clearance model for RCFs in rat lungs. Results of that study confirmed that fibers 10 to 20 µm long are cleared more slowly than those <10 µm long because of the incomplete phagocytosis of long fibers by macrophages. The preferential clearance of shorter fibers has also been documented in studies with chrysotile asbestos and other mineral fibers, in which the average length of retained fibers increased during a discrete period following deposition [Coin et al. 1992; Churg 1994]. This increase might also be explained by the longitudinal cleavage pattern of asbestos fibers, which results in longer fibers of decreasing diameters [Coin et al. 1992]. By contrast, any breakage of RCFs would occur perpendicular to the longitudinal plane, resulting in shorter fibers of the same diameter. For the clearance model developed by Yu et al. [1996], the effect of fiber breakage was also assessed from experimental data and incorporated into the model. The authors concluded that the simultaneous effect of fiber breakage and differential clearance leads to only a small change in the fiber size distribution in the lung. This result suggests that the dimensions of fibers in the lung are closely related to the dimensions of fibers measured in airborne samples (adjusted for deposition); thus, most short fibers in the lungs originated as short fibers in airborne exposures.

The dimensions of airborne fibers have also been characterized for workers with occupational exposure to RCFs. One study of domestic RCF manufacturing facilities found that approximately 90% of airborne fibers were <3 µm in diameter, and 95% of airborne fibers were <4 µm in diameter and <50 µm long [Esmen et al. 1979]. The study showed that the diameter and length distributions of airborne fibers in the facilities were consistent with a geometric mean diameter (GMD) of 0.7 µm and a geometric mean length (GML) of 13 µm. Another air sampling study of domestic RCF manufacturing sites reported that 99.7% of the fibers had diameters <3 µm and 64% had lengths >10 µm [Allshouse 1995]. Measurements of airborne fibers in the European RCF manufacturing industry are comparable: Rood [1988] reported that all fibers observed were in the thoracic and respirable size range (i.e., diameter <3 µm), with median diameters ranging from 0.5 to 1.0 µm and median lengths from 8 to 23 µm. During removal of RCF products, Cheng et al. [1992] found that 87% of airborne fibers were within the respirable size range, with fiber diameters ranging from 0.5 to 6 µm (median diameter=1.6 µm) and fiber lengths ranging from 5 to 220 µm. Another study [Perrault et al. 1992] of airborne fiber dimensions measured during installation and removal of RCF materials in industrial furnaces reported GMD values of 0.38 and 0.57 µm, respectively.
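Fiber size distributions of this kind are commonly summarized as lognormal. As a rough illustration, the sketch below estimates the fraction of fibers thinner than 3 µm from the GMD of 0.7 µm reported by Esmen et al. [1979]; the geometric standard deviation (GSD) of 2.0 is an assumed value chosen only to make the example concrete, not a measured result.

```python
# Fraction of fibers below a diameter cutoff for a lognormal distribution:
# P(D < cut) = Phi(ln(cut/GM) / ln(GSD)). The GSD here is an assumed value.
from statistics import NormalDist
from math import log

gm_um, gsd = 0.7, 2.0   # GM from Esmen et al. [1979]; GSD assumed
cut_um = 3.0            # diameter criterion discussed in the text

frac = NormalDist().cdf(log(cut_um / gm_um) / log(gsd))
print(f"~{100 * frac:.0f}% of fibers thinner than {cut_um} um")   # ~98%
```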
# Fiber Durability

Biopersistence (specifically, the retention time of the fiber in the lungs) is considered an important predictor of fiber toxicity. Fiber solubility affects the biopersistence of fibers deposited within the lung and is a key determinant of fiber toxicity. Bender and Hadley [1994] suggest that some of the important considerations for fiber durability include the following:

■ Fiber size, particularly length as it relates to the dimensions of the alveolar macrophages
■ Fiber dissolution rate
■ Mechanical properties of the fibers, including partially dissolved and/or digested fibers
■ Overloading of the normal clearance mechanisms of the lung

Bignon et al. [1994] argue that fibers that are biopersistent in vivo and in vitro are more biologically active than less durable fibers. The durability of RCFs [Hammad et al. 1988; Luoto et al. 1995] provides a basis for suggesting that these fibers might persist long enough to induce biological effects similar to those of asbestos. In vitro durability tests have shown RCFs to be highly resistant to dissolution in biologically relevant mixtures such as Gamble's solution [Scholze and Conradt 1987]. The persistence of RCFs in both the peritoneal cavity [Bellmann et al. 1987] and the lung [Hammad et al. 1988] has been recognized in experimental studies. Hammad et al. [1988] sacrificed rats exposed to either slag wool or ceramic fibers via inhalation at 5, 30, 90, 180, or 270 days after exposure. The lungs of the animals were ashed in a low-temperature asher, and the fiber content of the lungs was evaluated by PCM. The researchers found that 24% of the deposited RCFs persisted in the lungs of rats sacrificed 270 days following exposure. In the same study, the lungs of rats exposed to slag wool contained only 6% of the slag wool fibers 270 days after exposure compared with those sacrificed 5 days following inhalation. From these results, it was concluded that RCFs follow a clearance pattern of relatively durable fibers that persist, translocate, or are removed by some mechanism other than dissolution. Similar results were obtained in the study by Mast et al. [1995b], which showed that RCFs are persistent in the lungs of rats exposed by inhalation. Specifically, compared with the fiber burden in the lungs of animals sacrificed 3 months after exposure (recovery), the lungs of animals sacrificed after 21 months of recovery contained approximately 20% of the deposited fibers. Of the retained fibers (measured with both SEM and TEM techniques), 54% to 75% had diameters <0.5 µm, and more than 90% were 5 to 20 µm long.

Researchers have suggested that fibers deposited in the gas exchange region with lengths less than the diameter of an alveolar macrophage are phagocytized and cleared via the mucociliary system or the lymph channels. Dissolution of fibers within the alveolar macrophage occurs if the fibers are not resistant to the acidic intracellular conditions (pH ≈ 5) [Nyberg et al. 1989]. Fibers that are not engulfed by alveolar macrophages are subjected to a pH of 7.4 in the extracellular fluid of the lung. A study of SVF durability in rat alveolar macrophages reports that RCFs are much less soluble than glass wool and rock wool fibers, based on the amounts of silicon (Si) and iron (Fe) dissolved from the fibers in vitro [Luoto et al. 1995]. RCFs in rat alveolar macrophage culture dissolved less than 10 mg Si/m² of fiber surface area and less than 1 mg Fe/m² of fiber surface area. Glass wool dissolved more than 50 mg Si/m², and rock wool dissolved nearly 2 mg Fe/m², when measured over comparable time periods.
However, degradation and dissolution of deposited RCFs may still occur, based on findings of higher dissolution of aluminum (Al) from RCFs (0.8 to 2.4 mg Al/m²) in alveolar macrophages than from the other SVFs [Luoto et al. 1995]. In another study, SEM analysis of fibers recovered from the lungs of rats 6 months after inhalation of RCFs revealed an eroded appearance, leading the researchers to conclude that dissolution of Si in the fibers is a plausible mechanism for long-term fiber clearance [Yamato et al. 1994].

SVFs in general are less durable than asbestos fibers. RCFs are more durable than many other SVFs, with a dissolution rate somewhat higher than that of chrysotile asbestos. Under the extracellular conditions in the lung, chrysotile (the most soluble form of asbestos) has a dissolution rate of <1 to 2 ng/cm²/hr. RCFs have a similar dissolution rate of about 1 to 10 ng/cm²/hr under conditions experienced in pulmonary interstitial fluid. Other, more soluble SVFs can be 10 to 1,000 times less durable [Scholze and Conradt 1987; Christensen et al. 1994; Maxim et al. 1999b; Moore et al. 2001]. At the measured solubility rate, an RCF with a 1-µm diameter would take more than 1,000 days to dissolve completely [Leineweber 1984].
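That lifetime estimate follows from simple shrinking-cylinder geometry: a fiber dissolving from its surface at a mass flux k loses radius at a rate k/ρ, so the time to dissolve is roughly t = rρ/k. The sketch below reproduces the order of magnitude; the RCF density of 2.7 g/cm³ and the mid-range dissolution rate are assumptions chosen for illustration.

```python
# Time for a fiber to dissolve, shrinking-cylinder model: t = r * rho / k.
# Density (rho) and the chosen dissolution rate are illustrative assumptions.

rho_ng_cm3 = 2.7e9       # assumed RCF density: 2.7 g/cm3 = 2.7e9 ng/cm3
k_ng_cm2_hr = 5.0        # mid-range of the 1-10 ng/cm2/hr cited above
radius_cm = 0.5e-4       # 1-um-diameter fiber

hours = radius_cm * rho_ng_cm3 / k_ng_cm2_hr
print(f"~{hours / 24:.0f} days to dissolve")   # ~1,100 days, i.e., >1,000 days
```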
# Summary of RCF Toxicity and Exposure Data

In addition to the main determinants of fiber toxicity (dose, dimension, and durability), other factors such as elemental composition and surface area can also influence the toxicity of a fiber. Thus, it is difficult to predict a fiber's potential for human toxicity based solely on in vitro or in vivo tests. Based on consideration of these factors, the major findings from the RCF animal and human studies are as follows:

■ Toxicologic evidence from experimental inhalation studies indicates that RCFs are capable of producing lung tumors in laboratory rats and mesotheliomas in hamsters [Mast et al. 1995a,b; McConnell et al. 1995]. However, interpreting these studies with regard to RCF potency and its implication for occupationally exposed human populations is complicated by the issue of coexposure to fibers and nonfibrous respirable particulate.

■ The durability of RCFs contributes to the biopersistence of these fibers both in vivo and in vitro [Bellmann et al. 1987; Scholze and Conradt 1987; Lockey and Wiese 1992].

■ Cytotoxicity and genotoxicity studies indicate that RCFs
- are capable of inducing enzyme release and cell hemolysis [Wright et al. 1986; Fujino et al. 1995; Leikauf et al. 1995; Luoto et al. 1997],
- affect mediator release [Morimoto et al. 1993; Ljungman et al. 1994; Fujino et al. 1995; Leikauf et al. 1995; Hill et al. 1996; Cullen et al. 1997; Gilmour et al. 1997; Luoto et al. 1997; Wang et al. 1999],
- may decrease cell viability and inhibit proliferation [Yegles et al. 1995; Okayasu et al. 1999; Hart et al. 1992], and
- may induce free radicals, micronuclei, polynuclei, chromosomal breakage, and hyperdiploid cells [Brown et al. 1998; Dopp et al. 1997; Hart et al. 1992].

■ Exposure monitoring results indicate that airborne fibers measured in both the manufacturing and end-use sectors of the RCF industry have dimensions that fall within the thoracic and respirable size ranges [Esmen et al. 1979; Lockey et al. 1990; Cheng et al. 1992].

■ Epidemiologic studies of workers in the RCF manufacturing industry report an association between increased exposures to airborne fibers and the occurrence of pleural plaques, other radiographic abnormalities, respiratory symptoms, decreased pulmonary function, and eye and skin irritation [Lemasters et al. 1994, 1998; Lockey et al. 1996; Trethowan et al. 1995; Burge et al. 1995]. Current occupational exposures to RCFs have not been linked to decreases in the pulmonary function of workers [Lockey et al. 1998].

■ Worker exposures to airborne fibers in the RCF industry have decreased substantially over the past 20 years or more, reportedly as the result of increased hazard awareness and the design and implementation of engineering controls [Rice et al. 1997; Maxim et al. 1997].

These observations warrant concern for the continued control and reduction of occupational exposures to airborne RCFs.

7 Existing Standards and Recommendations

Standards and guidelines for controlling worker exposures to RCFs vary in the United States. Other governments and international agencies have also developed recommendations and occupational exposure limits that apply to RCFs. Table 7-1 presents a summary of occupational exposure limit standards and guidelines for RCFs.

Within the United States, the RCFC formally established a recommended exposure guideline of 0.5 f/cm³ as an element of its product stewardship program known as PSP 2002, which was endorsed by OSHA as a 5-year voluntary program [OSHA 2002]. As part of that program, the RCFC recommends that workers wear respirators whenever the workplace fiber concentration is unknown or when airborne concentrations exceed 0.5 f/cm³. This exposure guideline was established by the RCFC on October 31, 1997, and replaces the previous exposure guideline of 1 f/cm³ set by the RCFC in 1991.
Before this agreement, the OSHA General Industry Standard was most applicable to RCFs, requiring that a worker's exposure to airborne dust containing <1% quartz and no asbestos be limited to an 8-hr PEL of 5 mg/m³ for respirable dust and 15 mg/m³ for total dust [29 CFR 1910.1000].

NIOSH has not previously commented on occupational exposure to RCFs. However, in addressing health hazards for another SVF (fibrous glass), NIOSH [1977] recommended an exposure limit (REL) of 3 f/cm³ as a TWA for glass fibers with diameters ≤3.5 µm and lengths ≥10 µm for up to 10 hr/day during a 40-hr workweek. NIOSH also recommended that airborne concentrations determined as total fibrous glass be limited to 5 mg/m³ of air as a TWA. At that time, NIOSH concluded that exposure to glass fibers caused eye, skin, and respiratory irritation. NIOSH also stated that until more information became available, this recommendation should be applied to other SVFs.

The Agency for Toxic Substances and Disease Registry (ATSDR) calculated an inhalation minimal risk concentration of 0.03 f/cm³ for humans based on extrapolation from animal studies [ATSDR 2002]. The Agency used macrophage aggregation, the most sensitive indicator of inflammation from RCFs, as the basis for this value. Calculation of this value is based on exposure assumptions for general public health that differ from those used in models for determining occupational exposure limits.

ACGIH proposed a TLV of 0.1 f/cm³ as an 8-hr TWA for RCFs under its notice of intended changes to the 1998 TLVs [ACGIH 1998]. On further review, this concentration was revised to 0.2 f/cm³ [ACGIH 2000]. ACGIH also classifies RCFs as a suspected human carcinogen (A2 designation) [ACGIH 2005]. On the basis of a weight-of-evidence carcinogenic risk assessment, the EPA [1993] classified RCFs as a Group B2 carcinogen (probable human carcinogen based on sufficient animal data). The ACGIH and EPA designations are consistent with that of the International Agency for Research on Cancer (IARC), which classified ceramic fibers, including RCFs, as "possibly carcinogenic to humans (Group 2B)" [IARC 1988, 2002]. The IARC characterization was based on "sufficient evidence for the carcinogenicity of ceramic fibers in experimental animals" and a lack of data on the carcinogenicity of ceramic fibers to humans [IARC 1988, 2002].

DECOS [1995] determined that "RCFs may pose a carcinogenic risk for humans" and set a health-based recommended occupational exposure limit of 1 f/cm³. The German Commission for the Investigation of Health Hazards of Chemical Compounds in the Work Area published a review of fibrous dusts [Pott 1997] classifying RCFs as category IIIA2, citing "positive results (for carcinogenicity) from inhalation studies (often supported by the results of other studies with intraperitoneal, intrapleural, or intratracheal administration)." In the United Kingdom, the Health and Safety Commission of the Health and Safety Executive has established a maximum exposure limit for RCFs of 1.0 f/ml of air, with the additional advisory to reduce exposures to concentrations as low as reasonably practicable, based on the category 2 carcinogen classification for RCFs [HSE 2004].

8 Basis for the Recommended Standard

# Background

In the Occupational Safety and Health Act of 1970 (Public Law 91-596), Congress mandated that NIOSH develop and recommend criteria for identifying and controlling workplace hazards that may result in occupational illness or injury. In fulfilling this mission, NIOSH continues to investigate the potential health effects of exposure to naturally occurring and synthetic airborne fibers. This interest stems from the results of research studies confirming asbestos fibers as human carcinogens. Significant increases in the production of RCFs during the 1970s and concerns about potential health effects led to experimental and epidemiological studies as well as worker exposure monitoring.

Chronic animal inhalation studies demonstrated the carcinogenic potential of RCFs, with a statistically significant increase in the incidence of lung cancer or mesothelioma in two laboratory species, rats and hamsters [Bunn et al. 1993; Mast et al. 1995a; McConnell et al. 1995]. Evidence of pleural plaques observed in persons with occupational exposures to airborne RCFs is clinically similar to that observed in asbestos-exposed persons after the initial years of their occupational exposures to asbestos [Hourihane et al. 1966; Becklake et al. 1970; Dement et al. 1986]. NIOSH considers the discovery of pleural plaques in U.S. studies of RCF manufacturing workers to be a significant finding because the plaques were correlated with RCF exposure [Lemasters et al. 1994; Lockey et al. 1996]. In addition, NIOSH considers the respiratory symptoms and conditions (including dyspnea, wheezing, coughing, and pleurisy) observed in RCF workers to be adverse health effects associated with exposure to airborne RCFs [Lemasters et al. 1998; Lockey et al. 1993; Trethowan et al. 1995; Cowie et al. 1999].

An association between inhaling RCFs and fibrotic or carcinogenic effects has been documented in animals, but no evidence of such effects has been found in workers in the RCF manufacturing industry. The lack of such an association could be influenced by the small population of workers in this industry, the long latency period between initial exposure and development of measurable effects, the limited number of persons with extended exposure to elevated concentrations of airborne fibers, and declining occupational exposure concentrations. However, the evidence from animal studies suggests that RCFs should be considered a potential occupational carcinogen. This classification is consistent with the conclusions of ACGIH, EPA, DECOS, and IARC. (See discussion in Chapter 7.)

Given these considerations, the NIOSH objective in developing an REL for RCFs is to reduce the possible risk of lung cancer and mesothelioma. In addition, maintaining exposures below the REL will also help to prevent other adverse effects, including irritation of the skin, eyes, and respiratory tract in exposed workers. To establish an REL for RCFs, NIOSH took into account not only the animal and human health data but also exposure information describing the extent to which RCF exposures can be controlled at different workplaces. On the basis of this evaluation, NIOSH considers an REL of 0.5 f/cm³ (as a TWA for up to 10 hr/day during a 40-hr workweek) to be achievable for most workplaces where RCFs or RCF products are manufactured, used, or handled.

Maintaining exposures at the REL will minimize the risk for lung cancer and reduce the risk of irritation of the eyes and upper respiratory system. The residual risks of lung cancer at the REL are estimated to be 0.073 to 1.2 per 1,000 based on extrapolations of risk models from Moolgavkar et al. [1999] and Yu and Oberdörster [2000]. The risk for mesothelioma at the REL of 0.5 f/cm³ is not known but cannot be discounted. Evidence from epidemiologic studies showed that higher exposures in the past resulted in pleural plaques in workers, indicating that RCFs do reach pleural tissue. Both implantation studies in rats and inhalation studies in hamsters have shown that RCF fibers can cause mesothelioma. Because of limitations in the hamster data, the risk of mesothelioma cannot be quantified. However, the fact that no mesothelioma has been found in workers and that pleural plaques appear to be less likely to occur in workers with lower exposures suggests a lower risk for mesothelioma at the REL.

Because residual risks of cancer (lung cancer and pleural mesothelioma) and irritation may exist at the REL, NIOSH further recommends that all reasonable efforts be made to work toward reducing exposures to less than 0.2 f/cm³. At this concentration, the risks of lung cancer are estimated to be 0.03 to 0.47 per 1,000 based on extrapolations of risk models from Sciences International [1998], Moolgavkar et al. [1999], and Yu and Oberdörster [2000]. Maintaining airborne RCF concentrations at or below the REL requires the implementation of a comprehensive safety and health program that includes routine monitoring of worker exposures, installation and routine maintenance of engineering controls, and worker training in good work practices.
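The paired estimates at 0.5 and 0.2 f/cm³ cited above are roughly consistent with risk scaling linearly with concentration in this range, as the following check illustrates. This is a sketch only; the underlying models are not linear everywhere, and the published values are the authoritative ones.

```python
# Consistency check: do the residual-risk ranges scale ~linearly from
# 0.5 f/cm3 to 0.2 f/cm3? (Illustrative only; see the cited risk models.)

risk_at_05 = (0.073, 1.2)     # excess lung cancer risk per 1,000 workers
scale = 0.2 / 0.5             # concentration ratio

low, high = (r * scale for r in risk_at_05)
print(f"predicted at 0.2 f/cm3: {low:.3f} to {high:.2f} per 1,000")
# -> 0.029 to 0.48 per 1,000, close to the published 0.03 to 0.47
```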
To ensure that worker exposures are routinely maintained below the REL, NIOSH recommends that an AL of 0.25 f/cm³ be part of the workplace exposure monitoring strategy to ensure that all exposure control efforts (e.g., engineering controls and work practices) are in place and working properly. The purpose of the AL is to indicate when worker exposures to RCFs may be approaching the REL. The AL is a statistically derived concept permitting the employer to have confidence (e.g., 95%) that if exposure measurements are below the AL, only a small probability exists that the exposure concentrations are above the REL. Exposure measurements at or above the AL therefore signal an increased probability that concentrations of RCFs exceed the REL. When exposures exceed the AL, employers should take immediate action (e.g., determine the source of exposure, identify measures for controlling exposure) to ensure that exposures are maintained below the exposure limit. NIOSH has concluded that an AL allows for the periodic monitoring of worker exposures in the workplace so that resources do not need to be devoted to conducting daily exposure measurements. The AL concept has been an integral element of recommended occupational standards in NIOSH criteria documents and in comprehensive standards promulgated by OSHA and MSHA.
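One way to see why action is warranted at an AL set at half the REL is to treat daily TWA exposures as lognormally distributed. In the sketch below, the day-to-day GSD of 2.5 is an assumed, illustrative variability, not a value prescribed by NIOSH; even so, it shows that with the median exposure at the AL, a nontrivial fraction of shifts could exceed the REL, which is why exceeding the AL should trigger investigation rather than be treated as compliant.

```python
# If daily TWA exposures are lognormal with the median at the AL, what is
# the chance a given shift exceeds the REL? GSD = 2.5 is an assumed value.
from statistics import NormalDist
from math import log

REL, AL, gsd = 0.5, 0.25, 2.5   # f/cm3
z = log(REL / AL) / log(gsd)
p_exceed = 1 - NormalDist().cdf(z)
print(f"P(shift TWA > REL | median = AL) = {p_exceed:.2f}")   # ~0.22
```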
# Rationale for the REL

The recommendation to limit occupational exposures to airborne RCFs to a TWA of 0.5 f/cm³ is based on data from animal and human studies, risk assessments, and the availability of methods to control RCF exposures at the REL in many workplaces. Establishing the REL for RCFs is consistent with the mission of NIOSH mandated in the Occupational Safety and Health Act of 1970. This Act states that NIOSH is obligated to "develop criteria dealing with toxic materials and harmful physical agents and substances which will describe exposure levels that are safe for various periods of employment, including but not limited to the exposure levels at which no employee will suffer impaired health or functional capacities or diminished life expectancy as a result of his work experience."

The carcinogenicity findings from the chronic nose-only inhalation assays of RCF1 in rats and hamsters [Mast et al. 1995a,b; McConnell et al. 1995] warrant concern about possible health effects in workers exposed to RCFs. Although no increase in lung cancer or mesothelioma mortality has been observed in worker populations exposed to RCFs, radiographic analyses indicate an association between pleural changes (including pleural plaques) and RCF exposure [Lemasters et al. 1994; Lockey et al. 1996; Cowie et al. 1999, 2001]. Both the U.S. [Lockey et al. 1993; Lemasters et al. 1998] and the European [Trethowan et al. 1995; Burge et al. 1995; Cowie et al. 1999, 2001] studies have found RCF-associated respiratory symptoms, pulmonary function reductions, and pleural abnormalities among RCF production workers. Several independent evaluations have quantitatively estimated the risk of lung cancer for workers exposed to RCFs at various concentrations [DECOS 1995; Fayerweather et al. 1997; Moolgavkar et al. 1999]. NIOSH evaluated these studies to determine whether an appropriate qualitative or quantitative assessment of lung cancer risk could be achieved. In addition, exposure information was collected during the 5-year monitoring period covered under the consent agreement between RCFC and EPA [Maxim et al. 1994, 1998]. NIOSH used the exposure information to evaluate the feasibility of controlling workplace exposures at manufacturing and end-use facilities where RCFs and RCF products are handled.

# Carcinogenesis in Animal Studies

Chronic inhalation studies with RCFs demonstrate significant increases in the incidence of mesothelioma in hamsters and lung cancer in rats. Tables 8-1 through 8-4 present a synopsis of the major findings of these studies [Mast et al. 1995a,b; McConnell et al. 1995]. Results from chronic animal inhalation studies with chrysotile and amosite are also presented (i.e., results for the positive control groups); these data provide a reference point for determining the relative toxicity of RCFs [Mast et al. 1995a; McConnell et al. 1999].

Chronic inhalation exposure to RCF1 at 30 mg/m³ (187 WHO f/cm³) induced a 13% (16/123) incidence of lung tumors in F344 rats [Mast et al. 1995a]. The incidence of lung cancer at lower doses did not show a statistically significant difference from the unexposed control group. Lung fiber burdens in the multidose chronic rat study revealed a dose-response relationship [Mast et al. 1995b]. In the rat, 16 mg/m³ (120 WHO f/cm³) appeared to be the NOAEL for lung cancer, and 3 mg/m³ (26 WHO f/cm³) appeared to be the NOAEL for fibrosis. Although it has been suggested that fibrosis in animals is a precursor to carcinogenesis, a definite link has not been shown for RCFs or other fibers. No lung cancers were found in hamsters exposed to RCF1 [McConnell et al. 1995].

Chronic inhalation exposure to RCF1 at 30 mg/m³ induced a 41% (42/102) incidence of mesotheliomas in Syrian golden hamsters [McConnell et al. 1995]. Determining a dose-response relationship for inducing mesothelioma is not possible based on currently available data because of the single exposure dose tested in the hamster and because of the low, sporadic occurrence of mesothelioma in the exposed rats [Mast et al. 1995a]. Yet the occurrence of mesotheliomas in the rat and the high incidence in the hamster are biologically significant because the spontaneous incidence of mesotheliomas in rats and hamsters has historically been very low [Analytical Sciences Incorporated 1999].

To assess the significance of the mesothelioma incidence observed in RCF-exposed hamsters, these results were compared with those obtained from hamsters that were exposed to chrysotile asbestos and were used as positive controls for the study [McConnell et al. 1995] (see Tables 8-3 and 8-4). However, the chrysotile-exposed hamsters failed to develop any tumors and therefore could not be considered true positive controls. Based on these results, a potency ranking could not be assigned for RCFs relative to chrysotile, since the carcinogenic response rate for the latter was zero in this study. The two fibers tested also differed with regard to their dose and fiber dimensions.

The McConnell et al. [1999] study of hamsters exposed to amosite asbestos provides dose-response data for comparison with the RCF1 data of McConnell et al. [1995] (see Tables 8-3 and 8-4). These separate studies examined the effects of RCF1 or amosite asbestos in hamsters using relatively similar exposure conditions, experimental conditions, and fiber dimensions [McConnell et al. 1995, 1999]. Exposure to 263 WHO f/cm³ and 165 WHO f/cm³ of amosite asbestos induced pleural mesotheliomas in 20% and 26% of the hamsters, respectively [McConnell et al. 1999].
A concentration of 215 WHO f/cm³ of RCF1 induced mesotheliomas in 41% of hamsters [McConnell et al. 1995]. Interstitial and pleural fibrosis were first observed at 13 weeks following amosite asbestos exposure and at 6 months following RCF1 exposure. Although average fiber dimensions for the RCF1 and amosite asbestos samples were similar, the RCF1 sample contained a higher percentage of fibers longer than 20 µm [McConnell et al. 1995, 1999]. Longer fibers have been associated with increased toxicity in experimental animal studies [Davis et al. 1986; Pott et al. 1987; Davis and Jones 1988; Warheit 1994; Blake et al. 1998].

Results from a dose-response analysis using the mesothelioma data from the RCF and amosite asbestos hamster studies [McConnell et al. 1995, 1999] indicated that the carcinogenic potency estimates for RCFs ranged from about half to two times the carcinogenic potency estimates for amosite asbestos [Dankovic 2001] (see Section 5.1.2). This analysis may not predict the mesothelioma risk in humans, since RCF1 contained a greater percentage of fibers longer than 20 µm and because of differences in fiber durability. Amosite asbestos is a more durable fiber with a longer in vivo half-life than RCF1 [Maxim et al. 1999b; Hesterberg et al. 1993] (see Table 8-5). Yet RCFs are more durable and less soluble than many other types of SVFs that have not demonstrated carcinogenicity in experimental studies. This characteristic is significant, as the durability of asbestos and SVFs (including RCFs) may be linked to the risk of lung cancer in animals chronically exposed to these fibers [Bignon et al. 1994; Bender and Hadley 1994; Hammad et al. 1988; Luoto et al. 1995]. Because of the long latency period for the development of mesotheliomas in humans, Berry [1999] hypothesized that fibers of sufficient durability are needed to cause this disease in humans. Extrapolation of the RCF dose-response data for lung cancer and mesothelioma in exposed rodents should take into account the durability of RCFs in humans.

Some evidence indicates that rats are less sensitive than humans to the development of lung cancer and mesothelioma from exposure to asbestos and may therefore represent an inappropriate model for human risk assessment. Pott et al. [1994] hypothesized that in chronic inhalation studies, rats may have a lower sensitivity to inorganic fiber toxicity than humans. The lung cancer risk from inhaling asbestos may be two orders of magnitude lower in rats than in humans, and the mesothelioma risk from inhaling asbestos may be three orders of magnitude lower for rats. Rödelsperger and Woitowitz [1995] measured amphibole fiber concentrations in the lung tissues of humans with mesothelioma and compared the results with fiber burdens reported in rats. A significantly increased OR (OR=4.8, 95% CI=1.05-21.7) for mesothelioma was seen in humans with amphibole concentrations between 0.1 and 0.2 fibers/µg of dried lung tissue. The lowest tissue concentration reported to produce a significant carcinogenic response in rat inhalation studies of amphiboles (specifically crocidolite) was 1,250 fibers/µg of dried lung tissue. By comparing these results, Rödelsperger and Woitowitz [1995] estimated that humans are at least 6,000 times more sensitive than rats to a given tissue concentration of amphibole fibers. This work is disputed by other scientists, who favor the rat as an appropriate model for evaluating the risk of lung cancer in humans [Maxim and McConnell 2001].
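The 6,000-fold figure follows directly from the two tissue concentrations just cited, as this small check illustrates.

```python
# Rat-to-human sensitivity ratio implied by the cited lung tissue burdens.
rat_f_per_ug = 1250      # lowest rat burden with a carcinogenic response
human_f_per_ug = 0.2     # upper end of the human range with an elevated OR

print(f"rat/human burden ratio: {rat_f_per_ug / human_f_per_ug:,.0f}x")
# -> 6,250x, i.e., humans at least ~6,000 times more sensitive
```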
Limitations of the Rödelsperger and Woitowitz [1995] and Pott et al. [1994] analyses (discussed earlier) include the lack of a dose-response analysis, analysis of only one epidemiologic study, inadequate comparisons of exposure duration, lack of accounting for the potentially multiplicative effect of smoking and asbestos exposure, lack of consideration of latency and clearance, and different fiber measurement techniques. In summary, multiple factors affecting the comparability of different fiber types and animal models reported in the literature make it difficult to determine whether the carcinogenic potency of RCFs in animals is similar to that in humans. Extrapolation of the animal data to humans is further complicated by a limited understanding of the mechanisms of fiber toxicity. Consequently, any extrapolation of the cancer risk found in animals exposed to RCFs must account for differences between humans and rodents with regard to fiber deposition and clearance patterns, uncertainty about the role of RCF durability in potentiating an adverse effect, and possible species differences in sensitivity to fibers.

# Health Effects Studies of Workers Exposed to RCFs

Two major research efforts evaluated the morbidity of workers exposed to airborne fibers in the RCF manufacturing industry. One study was conducted in the United States and the other in Europe. The objective of each was to evaluate the relationship between occupational exposure to RCFs and potential adverse health effects. These studies contained multiple components, including standardized respiratory and occupational history questionnaires, chest radiographs, pulmonary function tests of workers, and air sampling to estimate worker exposures. The first studies of European plants were conducted in 1986 and included current workers at seven RCF manufacturing plants [Rossiter et al. 1994; Trethowan et al. 1995; Burge et al. 1995]. A followup cross-sectional study conducted in 1996 evaluated the same medical endpoints in workers from six of these seven European manufacturing plants (one plant had ceased operation) [Cowie et al. 1999, 2001]. Current as well as former workers were included as study subjects in the followup study. The studies of U.S. plants began in 1987 and involved evaluations of current workers at five RCF manufacturing plants and former workers at two of the plants [Lemasters et al. 1994, 1998, 2003; Lockey et al. 1993, 1996, 1998, 2002]. In the United States, the earliest commercial production of RCFs and RCF products began in 1953. In Europe, RCF production began in 1968. The demographics of the U.S. and European populations were similar at the time they were studied, but the average age and duration of employment for the U.S. workers were slightly higher than for the workforce in the 1986 European studies because of the earlier development of this industry in the United States.

# Pleural changes in humans

The radiographic analyses of the U.S. and 1996 European populations in RCF manufacturing detected an association between pleural changes and RCF exposure [Lemasters et al. 1994; Lockey et al. 1996; Cowie et al. 1999, 2001]. In the initial European studies, Trethowan et al. [1995] found pleural abnormalities in a small number of RCF workers who had other confounding exposures that did not include asbestos.
Differences observed in pleural abnormalities between the U.S. and European worker populations may be related to the latency of exposure required to cause specific pleural changes [Hillerdal 1994; Begin et al. 1996], especially the formation of pleural plaques, which were first observed in studies of the U.S. RCF manufacturing industry, with its longer latency period. Historical air sampling data also indicate that airborne fiber concentrations were much higher in early U.S. RCF manufacturing. Therefore, in addition to their longer overall latency, RCF manufacturing workers in the United States probably had generally higher exposures in the early years of the industry than did their European counterparts. These factors might explain the appearance of RCF-associated pleural plaques in the U.S. studies before their detection in the European studies.

The U.S. and 1986 European studies yielded little evidence of an association between radiographic parenchymal opacities and RCF exposure. In the U.S. study, small opacities were rare, with only three cases noted in one report [Lockey et al. 1996] and only one case (with small rounded opacities of profusion category 3/2 attributable to prior kaolin mine work) noted in the other [Lemasters et al. 1994]. Small opacities of profusion category 1/0 or greater were more frequent in the European study by Trethowan et al. [1995], but confounding exposures were believed to account for many of these cases. The results of statistical analyses indicated either no association with RCF exposure or an association slightly suggestive of an RCF exposure effect [Rossiter et al. 1994]. In a more comprehensive evaluation of the European study population, small opacities of category 1/0 or greater were positively associated with RCF exposures that occurred before 1971 [Cowie et al. 1999].

# Respiratory symptoms, irritation, and other conditions in humans

In both the U.S. [Lockey et al. 1993] and the European [Trethowan et al. 1995; Burge et al. 1995; Cowie et al. 1999, 2001] studies, occupational exposure to RCFs was associated with various reported respiratory conditions or irritation symptoms after adjusting for the effects of smoking. Worker exposure to RCFs at concentrations of 0.2 to 0.6 f/cm³ was associated with statistically significant increases in eye irritation (OR=2.16, 95% CI=1.32-3.54), stuffy nose (OR=2.06, 95% CI=1.25-3.39), and dry cough (OR=2.53, 95% CI=1.25-5.11) compared with exposure to concentrations lower than 0.2 f/cm³ [Trethowan et al. 1995]. Between the 0.2 to 0.6 f/cm³ and >0.6 f/cm³ RCF exposure groups, a statistically significant increase occurred in the ORs for wheezing (P<0.0001), grade 2 dyspnea (P<0.05), eye irritation (P<0.0001), and skin irritation (P<0.0001), but not for stuffy nose [Trethowan et al. 1995]. Lockey et al. [1993] found that dyspnea was significantly associated with cumulative exposure to >15 fiber-months/cm³ (i.e., >1.25 fiber-years/cm³) relative to cumulative exposure to ≤15 fiber-months/cm³ (dyspnea grade 1: OR=2.1, 95% CI=1.3-3.3; dyspnea grade 2: OR=3.8, 95% CI=1.6-9.4) after adjusting for smoking and other potential confounders. Lockey et al. [1993] also found a statistically significant association between cumulative RCF exposure and pleurisy (OR=5.4, 95% CI=1.4-20.2), and an elevated but nonsignificant association between cumulative RCF exposure and chronic cough (OR=2.0, 95% CI=1.0-4.0).
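For readers less familiar with these statistics, an odds ratio and its 95% confidence interval can be computed from a 2×2 exposure-by-symptom table, as sketched below. The counts are invented for illustration and do not correspond to any of the cited studies.

```python
# Odds ratio and 95% CI from a 2x2 table; counts are hypothetical.
from math import exp, log, sqrt

a, b = 40, 60   # exposed group: with symptom, without symptom
c, d = 20, 80   # referent group: with symptom, without symptom

odds_ratio = (a * d) / (b * c)
se = sqrt(1/a + 1/b + 1/c + 1/d)          # SE of log(OR)
lo = exp(log(odds_ratio) - 1.96 * se)
hi = exp(log(odds_ratio) + 1.96 * se)
print(f"OR = {odds_ratio:.2f} (95% CI = {lo:.2f}-{hi:.2f})")
# -> OR = 2.67 (95% CI = 1.42-5.02); CI excluding 1.0 indicates significance
```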
Lemasters et al. [1998] also noted associations (P<0.05) between employment in an RCF production job and an increased prevalence of dyspnea and the presence of at least one respiratory symptom after adjusting the data for confounders. Recurrent chest illness in the European study population was associated with the estimated cumulative exposure to thoracic-sized fibers but was more strongly associated with the estimated cumulative exposure to thoracic-sized dust [Cowie et al. 1999, 2001].

In cross-sectional analyses, both the U.S. [Lockey et al. 1998; Lemasters et al. 1998] and the 1986 European [Trethowan et al. 1995; Burge et al. 1995] studies found that cumulative RCF exposure is associated with pulmonary function decrements among current and former smokers. Lemasters et al. [1998] also found statistically significant deficits in pulmonary function measures for nonsmoking female workers. The decreased pulmonary function in the European study population remained significantly associated with cumulative RCF exposure, even after controlling for cumulative dust exposure [Burge et al. 1995]. The 1996 European study found pulmonary function decrements only in current smokers [Cowie et al. 1999, 2001]. In a longitudinal analysis of data from multiple serial pulmonary function tests, Lockey et al. [1998] concluded that the more recent RCF concentrations occurring after 1987 were not associated with decreased pulmonary function; rather, decreases in pulmonary function were more closely related to the typically higher concentrations that occurred before this time period. The U.S. and European studies suggest that decrements in pulmonary function observed primarily in current and former smokers are evidence of an interactive effect between smoking and RCF exposure.

# Carcinogenic Risk in Humans

Moolgavkar et al. [1999] derived risk estimates for lung cancer in humans on the basis of the results from the two chronic bioassays of RCFs in male Fischer 344 rats [Mast et al. 1995a,b]. Several models (linear, quadratic, exponential) were used to estimate and compare risks using reference populations composed of either a nonsmoking ACS cohort or a cohort of steel workers not exposed to coke oven emissions (see Table 5-10 for risk estimates). The exponential model provided the best statistical fit of the data. The linear model provided the highest estimates of human lung cancer risks from exposure to RCFs when used with the referent steel workers cohort (considered to be most representative of workers exposed to RCFs because it includes blue-collar workers who smoke). Lung cancer risk estimates were calculated using each model at exposure concentrations of 0.25 f/cm³, 0.5 f/cm³, 0.75 f/cm³, and 1.0 f/cm³. The RCF-related lung cancer risk determined from the linear model for the lowest concentration (0.25 f/cm³) was 0.27/1,000 for the cohort of steel workers, compared with 0.036/1,000 using the exponential model and 0.00088/1,000 using the quadratic model with the same referent population. The risk estimates incorporated multiple assumptions, including a human breathing rate of 13.5 L/min (considered light work) and multiple criteria for defining the length of time a worker could be exposed to RCFs over a working lifetime.
Higher risk estimates could be expected if the assumptions more closely represented those used by NIOSH and OSHA: (1) a human breathing rate of 20 L/min and (2) a worker exposure duration of 8 hr/day, 5 days/week, 50 weeks/year, from age 20 to 65, with the risk calculated beyond age 70 (e.g., to age 85). Furthermore, the calculated risk estimates could be an underestimation of the lung cancer risk to humans because the models assumed that the tissue sensitivity to RCFs in the rat is equal to that in humans. Although the lung cancer risk estimates derived from the rat data are reason for concern, estimates of human risk for mesothelioma from the high incidence (41%) of mesothelioma in hamsters cannot be appropriately modeled, since only a single exposure concentration was administered in the study.

Primarily on the basis of chronic animal inhalation studies [Mast et al. 1995a,b; McConnell et al. 1995], NIOSH concludes that RCFs are a potential occupational carcinogen. Furthermore, the evidence of pleural plaques [Lemasters et al. 1994; Lockey et al. 1996] observed in persons with occupational exposures to airborne RCFs is clinically similar to that observed in asbestos-exposed persons after the initial years of their occupational asbestos exposures [Hourihane et al. 1966; Becklake et al. 1970; Dement et al. 1986]. NIOSH considers the discovery of pleural plaques in U.S. studies of RCF manufacturing workers to be a significant finding because the plaques were correlated with RCF exposure [Lemasters et al. 1994; Lockey et al. 1996]. In addition, NIOSH considers the respiratory symptoms and conditions (including dyspnea, wheeze, cough, and pleurisy) [Lemasters et al. 1998; Lockey et al. 1993; Trethowan et al. 1995; Burge et al. 1995; Cowie et al. 1999] in RCF workers to be adverse health effects that have been associated with exposure to airborne fibers of RCFs. Insufficient evidence exists to document an association between fibrotic or carcinogenic effects and the inhalation of RCFs by workers in the RCF manufacturing industry, though these effects have been demonstrated in animal studies. The lack of an observed association between RCF exposure and these effects among workers could be affected by one or more factors, including several relating to the study population: the relatively small cohort, the proportion of workers with short tenure relative to what might be expected (on the basis of an asbestos analogy) in terms of disease latency, and workers with limited cumulative exposures to RCFs.

# Controlling RCF Exposures in the Workplace

Table 8-6 summarizes exposure monitoring data collected by the RCFC under a consent agreement with the EPA [Everest 1998; Maxim et al. 1997]. (Fiber concentrations in Table 8-6 were determined during monitoring performed over a 5-year period, 1993-1998, under the Refractory Ceramic Fibers Coalition/Environmental Protection Agency (RCFC/EPA) consent agreement; concentrations were determined by NIOSH Method 7400 "B" counting rules.) These data indicate that exposures to RCFs during 1993-1998 had an AM fiber concentration of about 0.3 f/cm³ for manufacturing and nearly 0.6 f/cm³ for end users. Maxim et al. [1997, 1999a] reported results for both manufacturing and end-use sectors in which airborne fiber concentrations through 1997 were reduced to an AM of <0.3 to 0.6 f/cm³. The exposure monitoring data collected as part of the RCFC/EPA consent agreement provide assurance that when appropriate engineering controls and work practices are used, airborne exposure to RCFs can be maintained at or below the REL of 0.5 f/cm³ for most functional job categories (FJCs). For many manufacturing processes, reductions in exposures have resulted from improved ventilation, engineering or process changes, and product stewardship programs [Rice et al. 1996; Maxim et al. 1999b].
The exposure monitoring data provide the basis for the NIOSH determination that a REL of 0.5 f/cm³ as a TWA can be achieved. Although many RCF manufacturing and end-user job tasks have exposures to RCFs at concentrations below 0.5 f/cm³, the monitoring data also indicate that not all FJCs may be able to achieve these concentrations consistently. FJCs that currently experience airborne AM fiber concentrations >0.5 f/cm³ include finishing (manufacturing and end use) and removal (end use). Typical processing during finishing operations (e.g., sawing, drilling, cutting, sanding) often requires high-energy sources that tend to generate larger quantities of airborne dust and fibers. RCF insulation removal activities are performed at remote sites where conventional engineering controls and fixed ventilation systems are more difficult to implement. Some operations, such as removal of RCF insulation tiles from furnaces, can release high airborne fiber concentrations; however, removal of RCF insulation tiles is not routine and is generally accomplished in a short period of time. Workers almost universally wear PPE and respiratory protection during these job tasks [Maxim et al. 1997, 1998].

NIOSH acknowledges that the frequent use of PPE, including respirators, may be required for some workers handling RCFs or RCF products. The frequent use of PPE may be required during job tasks for which (1) routinely high airborne concentrations of RCFs exist (e.g., finishing, insulation removal), (2) the airborne concentration of RCFs is unknown or unpredictable, or (3) airborne concentrations are highly variable because of environmental conditions or the manner in which the job task is performed. In all work environments where RCFs or RCF products are handled, control of exposure through engineering controls should be the highest priority.

# Summary

The following points summarize the relevant information used as the basis for the NIOSH assessment of occupational exposures to RCFs:
■ Airborne concentrations of RCFs have been characterized as containing fibers of dimensions in the thoracic and respirable size ranges. RCFs are among the most durable types of SVFs. In tests of solubility, RCFs are nearly as durable as chrysotile asbestos but significantly less durable than amphibole asbestos fibers such as amosite.
■ Chronic, nose-only inhalation studies with RCFs in animals show a statistically significant increased incidence of lung tumors in rats and pleural mesotheliomas in hamsters. These data support the NIOSH determination that RCFs are a potential occupational carcinogen.
■ Epidemiologic studies of workers in the RCF manufacturing industry show an increased incidence of pleural plaques, respiratory symptoms (dyspnea and cough), skin and eye irritation, and decreased pulmonary function related to increasing exposures to airborne fibers. Some of these conditions are documented for exposure concentrations in a range as low as 0.2 to 0.6 f/cm³. Studies of workers exposed to airborne RCFs show no evidence of excess risk for lung cancer or mesothelioma.
However, the inability to detect such an association could result from (1) the low statistical power for detecting an effect, (2) the short latency period for most workers occupationally exposed, and (3) the historically low and decreasing fiber exposures that have occurred in this industry.
■ Risk assessment analyses using data from chronic inhalation studies in rats indicate that the excess risk of developing lung cancer from exposure to RCFs at a TWA of 0.5 f/cm³ for a working lifetime is 0.073 to 1.2/1,000. However, on the basis of the assumptions used in the risk analyses, NIOSH concludes that this estimate is more likely to underestimate than to overestimate the risk to RCF-exposed workers. Reduction of the RCF TWA concentration to 0.2 f/cm³ would reduce the risk for lung cancer to 0.03 to 0.47/1,000. OSHA attempts to set PELs for carcinogens that reflect an estimated risk of 1/1,000 but is limited by considerations of technologic and economic feasibility.
■ RCF exposure data gathered under a consent agreement between RCFC and EPA, which included a 5-year comprehensive air monitoring program (1993–1998), indicate that airborne exposure concentrations to RCFs have been decreasing. Monitoring results show that 75% to >95% of TWA exposure concentration measurements in all FJCs (with one exception) were below 1.0 f/cm³. In all but two of the eight FJCs, >70% of TWA measurements were below the RCFC recommended exposure guideline of 0.5 f/cm³. On the basis of the review of these data, NIOSH has concluded that the REL of 0.5 f/cm³ can be achieved in most workplaces where RCFs or RCF products are manufactured or used.
■ The combined effect of mixed exposures to fibers and nonfibrous particulates may contribute to increased irritation of the respiratory tract, skin, and eyes. Engineering controls and appropriate work practices used to keep airborne RCF concentrations below the REL should help to minimize airborne exposures to nonfibrous particulates as well. Because the ratio of fibers to nonfibrous particulate in airborne exposures may vary among job tasks, exposure monitoring should include efforts to characterize particulate composition and to control and minimize airborne fibers and nonfibrous particulate accordingly.

From the assessment described above and throughout this document, NIOSH concludes that on a continuum of fiber toxicity, RCFs relate more closely to asbestos than to fibrous glass and other SVFs and should be handled accordingly. NIOSH considers all asbestos types to be carcinogens and has established a REL of 0.1 f/cm³ for airborne asbestos fibers. This value was determined on the basis of extensive human and animal health effects data and the recognized limits of analytical methods. Recognizing that RCFs are carcinogens in animal studies and given the limitations in deriving an exposure value that reflects no excess risk of lung cancer or mesothelioma for humans, NIOSH recommends that every effort be made to keep exposures below the REL of 0.5 f/cm³ as a TWA for up to 10 hr/day in a 40-hr workweek. These efforts will further reduce the risk for malignant respiratory disease and protect workers from conditions and symptoms deriving from irritation of the respiratory tract, skin, and eyes.
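The risk figures quoted in the summary above scale approximately in proportion to the TWA concentration; the brief sketch below reproduces the arithmetic for the reduction from 0.5 to 0.2 f/cm³. It illustrates proportional scaling only; the underlying dose-response models are described in Chapter 5.

```
# Proportional scaling of the working-lifetime excess-risk range from the
# REL of 0.5 f/cm3 down to 0.2 f/cm3.
risk_at_rel = (0.073, 1.2)       # per 1,000 workers at 0.5 f/cm3 (TWA)
scale = 0.2 / 0.5

low, high = (r * scale for r in risk_at_rel)
print(f"risk at 0.2 f/cm3: {low:.2f} to {high:.2f} per 1,000")
# ~0.03 to 0.48 per 1,000; the document quotes 0.03 to 0.47, the small
# difference reflecting the nonlinearity of some of the fitted models.
```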
From the analysis of historical exposure data (see Chapter 4) and the exposure data collected as part of the RCFC/EPA consent agreement monitoring program (Table 8-6), NIOSH has determined that compliance with the REL for RCFs is achievable in most manufacturing and end-use job categories. Although routine attainment of TWA exposures below the REL may not yet occur for all job tasks, it represents a reasonable objective that can be achieved through modification of the job task or the introduction or improvement of ventilation controls.

# 9 Guidelines for Protecting Worker Health

The following guidelines for protecting worker health and minimizing worker exposures to RCFs are considered minimum precautions that should be adopted as part of a site-specific safety and health plan developed and overseen by appropriate and qualified personnel.

# Informing Workers about Hazards

# Safety and Health Training Program

Employers should establish a safety and health training program for all workers who manufacture, use, handle, install, or remove RCF products or perform other activities that bring them into contact with RCFs. As part of this training program, employers should do the following:
■ Inform all potentially exposed workers (including contract workers) about RCF-associated health risks such as skin, eye, and respiratory irritation and lung cancer.
■ Provide MSDSs on site:
-Make MSDSs readily available to workers.
-Instruct workers how to interpret information from MSDSs.
■ Teach workers to recognize and report adverse respiratory effects associated with RCFs.
■ Train workers to detect hazardous situations.
■ Establish procedures for reporting hazards and giving feedback about actions taken to correct them.
■ Instruct workers about using safe work practices and appropriate PPE.
■ Inform workers about practices or operations that may generate high concentrations of airborne fibers (such as cutting and sanding of RCF boards and other RCF products).
■ Make workers who remove refractory insulation materials aware of the following:
-Their potential for exposure to respirable crystalline silica
-Health effects related to this exposure
-Methods for reducing their exposure
-Types of PPE that may be required (including respirators)
■ State the need to wear appropriate respiratory protection and protective clothing in areas where airborne RCFs may exceed the REL.
■ If respiratory protection is required, post the following statement:

RESPIRATORS REQUIRED IN THIS AREA.

# Labeling and Posting

Although workers should have received training about RCF exposure hazards and methods for protecting themselves, labels and signs serve as important reminders and provide warnings to workers who may not ordinarily work in the area. Employers should do the following:
■ Post warning labels and signs about RCF-associated health risks at entrances and inside work areas where airborne concentrations of RCFs may exceed the REL.
■ Print all labels and warning signs in both English and the predominant language of workers who do not read English.
■ If workers are unable to read the labels and signs, inform them verbally about the hazards and instructions printed on the labels and signs.

# Hazard Prevention and Control

# Engineering Controls

Engineering controls should be the principal method for minimizing exposure to RCFs in the workplace.

# Ventilation

Achieving reduced concentrations of airborne RCFs depends on adequate engineering controls such as local exhaust ventilation systems that are properly constructed and maintained. Local exhaust ventilation systems that employ hoods and ductwork to remove fibers from the workplace atmosphere have been used by RCF manufacturers. One example is a slotted-hood dust collection system placed over a mixing tank so that airborne fibers are captured and collected in a baghouse with HEPA filters [RCFC 1996]. Other types of local exhaust ventilation or dust collection systems may be used at or near dust-generating processes to capture airborne fibers. Band saws used in RCF manufacturing and finishing operations have been fitted with such engineering controls to capture fibers and dust during cutting operations and thereby reduce exposures for the band saw operator [Venturin 1998]. Disc sanders fitted with similar local exhaust ventilation systems are effective in reducing airborne RCF concentrations during sanding of vacuum-formed RCF products [Dunn et al. 2004]. For quality control laboratories or laboratories where production samples are prepared for analyses, exhaust ventilation systems should be designed to capture and contain dust. For guidance in designing local exhaust ventilation systems, see Industrial Ventilation: A Manual of Recommended Practice, 25th edition [ACGIH 2005], Recommended Industrial Ventilation Guidelines [Hagopian and Bastress 1976], and the OSHA ventilation standard [29 CFR 1910.94].
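As a rough illustration of the hood-sizing arithmetic covered by the ACGIH manual cited above, the sketch below applies the classic equation for a plain (unflanged) hood, Q = V(10x² + A). The dimensions and capture velocity in the example are hypothetical and are not drawn from this document.

```
# Required airflow for a plain (unflanged) exhaust hood, per the classic
# ACGIH capture-velocity equation Q = V * (10 * x**2 + A).
# Units: V in ft/min, x in ft, A in ft2, Q in ft3/min.

def plain_hood_airflow(capture_velocity_fpm: float,
                       distance_ft: float,
                       face_area_ft2: float) -> float:
    """Volumetric flow (ft3/min) needed to achieve a given capture
    velocity at a given distance in front of a plain hood opening."""
    return capture_velocity_fpm * (10 * distance_ft**2 + face_area_ft2)

# Hypothetical example: 100 ft/min capture velocity at 1 ft from a
# 1 ft x 2 ft hood opening.
q = plain_hood_airflow(100, 1.0, 2.0)
print(f"required airflow: {q:.0f} ft3/min")  # 1,200 ft3/min
```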
Additional engineering controls have been evaluated by the Bureau of Mines for minimizing airborne dust in underground mining operations and at industrial sand plants; these controls may also have applications for RCF finishing, installation, and removal operations. The use of air showers (also known as a canopy air curtain or an overhead air supply island) involves positioning an air supply over the head of a worker to provide a flow of clean, filtered air to the worker's breathing zone [Volkwein et al. 1982, 1988]. Proper design and evaluation are critical for ensuring that filtration is adequate to remove airborne fibers from the air supply. Selection of the air supply flow rate is also important to make sure that the velocity delivered to the worker's breathing zone is sufficient to overcome cross drafts and maintain a clean air flow.

# Tool selection and modification

The RCFC has reported that using hand tools instead of powered tools can significantly reduce airborne concentrations of dust. However, hand tools often require additional physical effort and time, and they may increase the risk of musculoskeletal disorders. Employers should therefore use ergonomically correct tools and proper workstation design to avoid ergonomic hazards. The additional physical effort required to use hand tools may also increase the rate and depth of breathing and may consequently affect the inhalation and deposition of fibers. For operations such as cutting, sawing, grinding, drilling, and sanding, the high level of mechanical energy applied to RCF products with power tools increases the potential for elevated concentrations of airborne fibers. Examples [Carborundum 1992] of how airborne fiber concentrations are affected by the equipment used to process RCF products include the following:
■ A test of hand sawing versus the use of a powered jigsaw showed an 81% reduction in concentrations of airborne dust generated.
■ A comparison of hand sanding versus power sanding showed a 90% reduction in concentrations of airborne dust generated.
■ When a light water mist is applied to the surface of a vacuum-formed board before sanding, airborne dust concentration is reduced by 89% for hand sanding and 88% for powered sanding.
■ The use of a cork bore (core drill) versus an electric drill with a twist bit for cutting holes in RCF product forms reduces airborne dust concentrations by about 85%.

# Engineering controls for RCF finishing operations

Researchers at NIOSH have been working with industrial hygienists at RCFC member facilities to study the effectiveness of engineering controls designed for and applied to RCF finishing operations. Because hand tools are not always a practical solution to the requirements of manufacturing and end-use facilities, engineering controls are being designed and evaluated for use with powered tools. A joint project between NIOSH and RCFC, initiated in 1998, investigated engineering controls for use with a pedestal belt/disc sander [Dunn et al. 2000, 2004]. These units are frequently used by manufacturers as well as customer facilities to cut vacuum-formed boards to the required dimensions. A continuous misting nozzle and a simple local exhaust ventilation system were integrated for use on the pedestal sanding unit. The mister consisted of a standard atomization nozzle that was set for a low water flow rate to minimize part degradation. The local exhaust ventilation system used two hoods or pickup points with a total airflow of 700 ft³/min. During production of vacuum-formed boards, these two controls substantially reduced fiber concentrations in the operator's breathing zone (the percentage decreases in airborne fibers for the belt and disc sanding operations are reported by Dunn et al. [2000, 2004]). These studies highlight the potential for significant reductions in worker exposure using well-designed and well-maintained engineering controls, but their effectiveness needs to be validated in the field.

# Wet methods for dust suppression

Fiber counts are lower in more humid atmospheres. Examples of using water to suppress dust concentrations are described as follows:
■ At one RCF textile facility, misters have been added above broad looms and tape looms to decrease fiber concentrations.
■ Water knives are high-pressure water jets that are used to cut and trim edges of RCF blanket while suppressing dust and limiting the generation of airborne fibers.
■ During the installation of RCF modules in a furnace, a procedure called tamping is typically performed. After modules are put in place on the furnace wall, the modules are compressed by placing a 1-ft length of 2- by 4-in. lumber against the modules and tapping it lightly with a hammer. The process helps ensure that the RCF modules are installed tightly in place. When a light water spray is applied to the surface of the modules before tamping, airborne fiber concentration is reduced by about 75% [Carborundum 1993]. The water is applied with a garden-type sprayer set on mist, using about 1 gal of water/100 ft² of surface area. However, caution is advised regarding the dampening of refractory linings during installation: the introduction of water can damage refractory-lined equipment during heating through explosive spalling caused by the generation of steam.
■ After-service RCF insulation removed from furnaces may be wetted to reduce the release of fibers.
# Isolation

Some manufacturing processes may be enclosed to keep airborne fibers contained and separated from workers.
■ When possible, isolate workers from direct contact with RCFs by using automated equipment operated from a closed control booth or room.
■ Maintain the control room at greater air pressure than that surrounding the process equipment so that air flows out rather than in.
■ Make sure workers take special precautions (such as using PPE) when they must enter the general work area to perform process checks, adjustments, maintenance, assembly-line tasks, and related operations.

# Product Reformulation

One factor that contributes to the toxicity of an inhaled fiber is its durability and resistance to degradation in the respiratory tract. The chemical characteristics of RCFs make them one of the most durable types of SVFs. As a result, an inhaled RCF of given dimensions will persist longer in the lungs. Modifying the physical characteristics of RCFs or reformulating their chemistry to produce less durable fibers are recommended options for reducing the hazard for exposed workers. Such an approach has been taken by one RCF manufacturer in developing two more soluble types of SVF [Maxim et al. 1999b]. However, caution is advised in developing and distributing such modified fibers: possible adverse health effects of newly developed fibers should be evaluated before they are introduced into commerce. Appropriate testing of these fibers should be performed to provide information about fiber toxicology and the potential adverse health effects associated with exposure to them.
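A simplified way to see why durability matters is the surface-dissolution approximation, in which the time for a fiber to dissolve completely scales as t = ρd/(2k_dis), with ρ the fiber density, d its diameter, and k_dis the dissolution rate constant. The sketch below applies this relation; the density and k_dis values are hypothetical placeholders, not measured values for any commercial fiber.

```
# Approximate fiber lifetime under uniform surface dissolution:
# t = (density * diameter) / (2 * k_dis).
# All input values below are hypothetical, for illustration only.

def dissolution_time_days(diameter_um: float,
                          density_g_cm3: float,
                          k_dis_ng_cm2_hr: float) -> float:
    """Approximate time (days) for a fiber of the given diameter to
    dissolve completely at dissolution rate constant k_dis."""
    diameter_cm = diameter_um * 1e-4
    density_ng_cm3 = density_g_cm3 * 1e9
    hours = density_ng_cm3 * diameter_cm / (2 * k_dis_ng_cm2_hr)
    return hours / 24

# Same 1-um fiber with a durable vs. a more soluble composition
# (k_dis values assumed):
print(dissolution_time_days(1.0, 2.7, 1.0))    # durable: ~5,600 days
print(dissolution_time_days(1.0, 2.7, 100.0))  # more soluble: ~56 days
```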
# Work Practices and Hygiene

Use good work practices to help reduce exposure to airborne fibers. These practices include the following:
■ Limit the use of power tools unless they are equipped with local exhaust or dust collection systems. When possible, use hand tools, which generate less dust and fewer airborne particles.
■ Use HEPA-filtered vacuums, wet sweeping, or a properly enclosed wet vacuum system for cleaning up dust containing RCFs.
■ During removal of RCF products, dampen insulation with a light water spray to keep fibers and dust from becoming airborne.
■ Clean work areas regularly with HEPA-filtered vacuums or with wet sweeping methods to minimize the accumulation of debris.
■ Limit access to areas where workers may be exposed to airborne RCFs: permit only workers who are essential to the process or operation.

Use good hygiene and sanitation to protect workers as well as people outside the workplace who might be contaminated with take-home dust and fibers:
■ Do not allow workers to smoke, eat, or drink in work areas where they contact RCFs.
■ If RCFs get on the skin, wash with warm water and mild soap.
■ Apply skin moisturizing cream as needed to avoid dryness or irritation from repeated washing.
■ Vacuum contaminated clothes with a HEPA-filtered vacuum before leaving the work area.
-Do not use compressed air to clean the work area or clothing.
-Do not shake clothes to remove dust.
■ Do not wear contaminated clothes outside the work area. Instead, take the following measures to prevent taking contaminants home:
-Change into street clothes before going home.
-Leave contaminated clothes at the workplace to be laundered by the employer.
-Store street clothes in separate areas of the workplace to keep from contaminating them.
■ Provide workers with showers and have them shower before leaving work.
■ Prohibit removal of contaminated clothes or other items from the workplace [NIOSH 1995b].

# Personal Protective Equipment

Wear long sleeves, gloves, and eye protection when performing dusty activities involving RCFs.

# Respiratory Protection

NIOSH recommends using a respirator for any task involving RCF exposures that are unknown or have been documented to be higher than the NIOSH REL of 0.5 f/cm³ (TWA). Respirators should not be used as the primary means of controlling worker exposures. Instead, NIOSH recommends using other exposure-reduction methods, such as product substitution, engineering controls, and changes in work practices. However, respirators may be necessary when available engineering controls and work practices do not adequately control worker exposures below the REL for RCFs. NIOSH recognizes this control to be a particular challenge in the finishing stages of RCF product manufacturing as well as during the installation and removal of RCF insulation materials. If respiratory protection is needed, the employer should establish a comprehensive respiratory protection program as described in the OSHA respiratory protection standard [29 CFR 1910.134]. Elements of the respiratory protection program should be established and described in a written plan that is specific to the workplace. In selecting respirators, the following guidance applies:
■ At a minimum, use a half-mask, air-purifying respirator equipped with a 100-series particulate filter (this respirator has an assigned protection factor [APF] of 10).
■ For a higher level of protection and for prevention of facial or eye irritation, use a full-facepiece, air-purifying respirator (equipped with a 100-series filter) or any powered, air-purifying respirator equipped with a tight-fitting full facepiece.
■ For greater respiratory protection when the work involves potentially high airborne fiber concentrations (such as removal of after-service furnace insulation), use a supplied-air respirator equipped with a full facepiece, since airborne exposure to RCFs can be high and unpredictable.

A comprehensive assessment of workplace exposures should always be performed to determine the presence of other possible contaminants (such as silica) and to ensure that proper respiratory protection is used.

# Exposure Monitoring

To evaluate workers' exposure to airborne RCFs, perform the exposure sampling survey as follows:
■ Collect representative personal samples for the entire work shift using NIOSH Method 7400 (B rules) [NIOSH 1977a, 1998].
■ Perform periodic sampling at least annually and whenever any major process change takes place or another reason exists to suspect that exposure concentrations may have changed.
■ Collect all routine personal samples in the breathing zones of the workers.
■ For workers exposed to concentrations above the REL, perform more frequent exposure monitoring until at least two consecutive samples indicate that the worker's exposures no longer exceed the REL.
■ Notify all workers about monitoring results and any actions taken to reduce their exposures.
■ Make sure that any sampling strategy considers variations in work and production schedules as well as the inherent exposure variability in most workplaces [NIOSH 1995a].

Similar exposure monitoring strategies have been recommended by the American Industrial Hygiene Association, which advises that if measured exposures are less than 10% of the designated exposure limit (for example, the REL or PEL), there is a high degree of certainty that the exposure limit will not be exceeded.
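For orientation, the sketch below works through the concentration arithmetic used with phase contrast microscopy counts under NIOSH Method 7400. The counts, blank values, flow rate, and sampling time are hypothetical; the graticule field and effective filter areas reflect the method's typical 25-mm cassette setup.

```
# Airborne fiber concentration (f/cm3) from phase contrast microscopy
# counts, following the NIOSH Method 7400 calculation scheme:
#   fiber density E (f/mm2) = (F/n_f - B/n_b) / A_f
#   concentration C (f/cm3) = E * A_c / (V * 1000), V in liters.
FIELD_AREA_MM2 = 0.00785    # Walton-Beckett graticule field area
FILTER_AREA_MM2 = 385.0     # effective collection area, 25-mm filter

def fiber_concentration_f_cc(fibers_counted: float, fields_counted: int,
                             blank_fibers: float, blank_fields: int,
                             flow_l_min: float, minutes: float) -> float:
    """Full-shift airborne fiber concentration in f/cm3."""
    density = ((fibers_counted / fields_counted)
               - (blank_fibers / blank_fields)) / FIELD_AREA_MM2
    volume_l = flow_l_min * minutes
    return density * FILTER_AREA_MM2 / (volume_l * 1000)

# Hypothetical full-shift sample: 100 fibers in 20 fields, blank of
# 2 fibers in 100 fields, 2.0 L/min for 480 min.
c = fiber_concentration_f_cc(100, 20, 2, 100, 2.0, 480)
print(f"TWA concentration: {c:.2f} f/cm3 (REL = 0.5 f/cm3)")  # ~0.25
```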
# Action Level

# Sampling Strategies

When sampling to determine whether worker exposures are below the REL, a focused sampling strategy may be more practical than random sampling. A focused sampling strategy targets workers perceived to have the highest exposure concentrations [Leidel and Busch 1994]. A focused strategy is most efficient for identifying exposures above the REL if maximum-risk workers and time periods are accurately identified. Focused sampling may also help identify short-duration tasks involving high airborne fiber concentrations that could result in elevated exposures over a full work shift. Sampling strategies such as those of Corn and Esmen [1979], Rice et al. [1997], and Maxim et al. [1997] have been derived and used in RCF manufacturing facilities to monitor airborne fiber concentrations by selecting representative workers for sampling. The representative workers are grouped according to dust zones, uniform job titles, or functional job categories. These approaches are intended to reduce the number of required samples while increasing the confidence of identifying workers at similar risk. Area sampling may also be useful in exposure monitoring for determining sources of airborne RCF exposures and assessing the effectiveness of engineering controls.

# Medical Monitoring

NIOSH recommends periodic medical evaluation, or medical monitoring, of RCF-exposed workers to identify potential health effects and symptoms that may be related to contact with airborne fibers. The following sections describe the objectives of medical monitoring and the elements of a medical monitoring program for workers exposed to RCFs.

# Objectives of Medical Monitoring

The primary goals of a workplace medical monitoring program are (1) the early identification of adverse health effects that may be related to exposures at work and (2) the identification of possible health trends within groups of exposed workers. These goals are based on the premise that early detection, subsequent treatment, and workplace interventions will ensure the continued health of the affected workforce. Medical monitoring and resulting interventions represent secondary prevention and should not replace primary prevention efforts to minimize worker exposures to RCFs. In the case of RCFs, medical monitoring is especially important because achieving compliance with the REL of 0.5 f/cm³ does not assure that all workers will be free from the risk of respiratory irritation or chronic respiratory disease caused by occupational exposure. Early identification of respiratory system changes and symptoms associated with RCF exposures (such as decreased pulmonary function, irritation, dyspnea, chronic cough, wheezing, and pleural plaques) may signal the need for more intensive medical monitoring and the assessment of existing controls to minimize the risk of long-term adverse health effects. An ongoing medical monitoring program also serves to inform workers of potential health risks and promotes an understanding of the need for and support of exposure control activities.

A medical monitoring program serves as an effective secondary prevention method on two levels: screening and surveillance. Medical screening in the workplace focuses on the early detection of health outcomes for individual workers and may involve an occupational history, medical examination, and application of specific medical tests to detect the presence of toxicants or early pathologic changes before the worker would normally seek clinical care for symptomatic disease.
By contrast, medical surveillance (described in Section 9.5) involves the ongoing evaluation of the health status of a group of workers through the collection and aggregate analysis of health data for the purpose of preventing disease and evaluating the effectiveness of intervention programs.

# Criteria for Medical Screening

To determine whether tests or procedures for medical screening are appropriate and relevant to a given hazard (in this case, exposure to airborne RCFs), a number of factors should be considered.

# Recommended Program Elements

Recommended elements of a medical monitoring program for workers exposed to RCFs include provisions for an initial medical examination and periodic medical examinations at regularly scheduled intervals. Depending on the findings from these examinations, more frequent and detailed medical examinations may be necessary. Worker education should also be included as a component of the medical monitoring program. Specific elements of the examinations and scheduling are described below and illustrated in the flow chart diagram in Figure 9-1.

# Initial medical examination

An initial (baseline) examination should be performed as near as possible to the date of beginning employment (within 3 months). The initial medical examination should include elements such as an occupational history and the baseline screening tests discussed below (a chest X-ray and spirometric testing).

# Periodic medical examinations

The recommended frequency of periodic examinations depends on the time elapsed since a worker's first exposure to RCFs:
■ If time since first RCF exposure is <10 years, examinations should be conducted at least every 5 years.
■ If time since first RCF exposure is ≥10 years, examinations should be conducted at least every 2 years.

A chest X-ray and spirometric testing are important upon initial examination and may also be appropriate medical screening tests during periodic examinations for detecting respiratory system changes, especially in workers with more than 10 years since first exposure to RCFs. The value of periodic chest X-rays in a medical monitoring program should be evaluated by a qualified health care provider in consultation with the worker to assess whether the benefits of testing warrant the additional exposure to radiation. As with the frequency of periodic examinations, the utility of the chest X-ray as a medical test becomes greater for workers with more than 10 years since first exposure to RCFs (based on the latency period between first exposure and the appearance of noticeable respiratory system changes). Because persons with advanced fiber-related pleural changes experience difficulty in breathing as the parietal and visceral surfaces become adherent and lose flexibility, it may prove beneficial to detect fibrotic changes in the early stages so steps may be taken to prevent further lung damage. Similar recommendations have been made for asbestos-exposed workers diagnosed with pleural fibrosis or pleural plaques to prevent more serious types of respiratory disease [Balmes 1990].

# More frequent medical examinations

Any worker should undergo more frequent and detailed medical evaluation if he or she has new or worsening respiratory symptoms or findings (for example, chronic cough, difficulty breathing, wheezing, or reduced pulmonary function).
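The examination intervals above reduce to a simple decision rule, sketched below for illustration; the examining health care provider, not a lookup table, determines the actual schedule for any individual worker.

```
# Illustrative scheduling rule for periodic medical examinations,
# following the intervals described in the text.
from datetime import date

def exam_interval_years(years_since_first_exposure: float) -> int:
    """Maximum recommended interval between periodic examinations."""
    return 5 if years_since_first_exposure < 10 else 2

def next_exam_due(last_exam: date, first_exposure: date) -> date:
    """Date by which the next periodic examination should occur."""
    years_exposed = (last_exam - first_exposure).days / 365.25
    interval = exam_interval_years(years_exposed)
    return last_exam.replace(year=last_exam.year + interval)

# Hypothetical worker first exposed in 1998, last examined mid-2005:
print(next_exam_due(date(2005, 6, 1), date(1998, 3, 15)))  # 2010-06-01
```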
# Employer Actions

The employer should assure that the qualified health care provider's recommended restrictions of a worker's exposure to RCFs or to other workplace hazards are followed and that the REL for RCFs is not exceeded without requiring the use of PPE. Efforts to encourage worker participation in the medical monitoring program and to report symptoms promptly to the program director are essential for the program's success. Medical evaluations performed as part of the medical monitoring program should be provided by the employer at no cost to the participating workers. If the recommended restrictions determined by the medical program director include job reassignment, such reassignment should be implemented with the assurance of economic protection for the worker. When medical removal or job reassignment is indicated, the affected worker should not suffer loss of wages, benefits, or seniority. The employer should also ensure that the medical monitoring program director communicates regularly with the employer's safety and health personnel (such as industrial hygienists), employee representatives, and safety and health committees to identify work areas that may require evaluation and implementation of control measures to minimize the risk from exposure to hazards.

# Surveillance of Health Outcomes

Standardized medical screening data should be periodically aggregated and evaluated by an epidemiologist or other knowledgeable person to identify patterns of worker health that may be linked to work activities and practices that require additional primary prevention efforts. Routine aggregate assessments of medical screening data should be used in combination with evaluations of exposure monitoring data to identify changes needed in work areas or exposure conditions. One example of surveillance using analyses of medical screening data is the ongoing epidemiologic study of RCF workers described in the RCFC product stewardship plan referred to as PSP 2000 [RCFC 2001]. Elements of this plan may be adapted and modified by other employers to develop medical surveillance programs for workers who are potentially exposed to RCFs.

# Smoking Cessation

In addition, NIOSH recommends that the following steps be taken with regard to RCF research:
1. Conduct basic scientific investigations, including in vitro and in vivo animal studies, to delineate the mechanism of action for RCF toxicity.
2. Conduct comparable studies for other SVFs and natural fibers so that the mechanistic data can be compared. For instance, Coffin et al. [1992] examined the ability of different synthetic and natural fibers to induce mesotheliomas; they suggested that in addition to fiber length and width, currently undefined intrinsic surface characteristics of the fibers are directly related to their mesothelioma induction potency.

The research strategy also considers the usefulness of integrating fiber data from various scientific disciplines (toxicology, epidemiology, industrial hygiene, occupational medicine) to elucidate the characteristics of fibers.

The cellular and molecular effects of RCF exposures have been studied with two different objectives. One purpose of these in vitro studies is to provide a quicker, less expensive, and more controlled alternative to animal toxicity testing. These experiments, which strive to act as screening tests or alternatives to animal studies, are best interpreted by comparing their results with those of in vivo experiments. The second objective of in vitro studies is to provide data that may help to explain the pathogenesis and mechanisms of action of RCFs at the cellular and molecular levels. These cytotoxicity and genotoxicity studies are best interpreted by comparing the effects of RCFs with those of other SVFs and asbestos fibers. In vitro studies serve as an important complement to animal studies and provide important tools for studying the molecular mechanisms of fibers. It is not yet possible to use these data in the derivation of an REL, and drawing strong conclusions relevant to human health from these in vitro studies alone is not possible.

One point to consider when reviewing these data is the relevance of the cell types studied. Many studies to date have examined the effects of RCFs on rodent cell lines. The cytotoxic effects of RCFs may vary with cell size, volume, and lineage. Effects observed in cells from organs other than the lung, or in species other than the human, may not be similar to those elicited in human pulmonary cells. The human alveolar macrophage, for example, has a volume several times greater than that of the rat alveolar macrophage [Krombach et al. 1997]. Macrophage size and volume may affect (1) the size range of fibers that can be phagocytized, dissolved, and cleared by the lungs and (2) the resulting pathogenicity of the fiber. Even the use of a human lung cell line does not guarantee that in vitro results will be directly applicable to the intact human response: the in vivo integration of stimuli from the nervous, hormonal, and cardiovascular systems cannot be reproduced in vitro. Another point to consider when reviewing these data is the number and definitions of variables used in different studies. Variables include differences in fiber type, fiber length, fiber dose, cell type, and length of exposure tested, among others. Disparate results between studies make strong conclusions from in vitro studies difficult. At the same time, these studies may provide important data regarding the mechanism of action of RCFs that would not be obtainable in other testing venues.

RCFs may exert their effects on pulmonary target cells via direct or indirect mechanisms. Direct mechanisms are the resultant effects when fibers come in direct physical contact with cells. Direct cytotoxic effects of RCFs include effects on cell viability, responses, and proliferation. Indirect cellular effects of RCFs involve the interaction of fibers with inflammatory cells that may be activated to produce inflammatory mediators. These mediators may affect target cells directly or may attract other cells that act on target cells. An inflammatory cell type often used in RCF in vitro studies is the pulmonary macrophage. Pulmonary macrophages are the first line of defense against inhaled material that deposits in the alveoli; among other functions, they attempt to phagocytize particles deposited in the lung. Effects of RCF exposure on macrophages and other inflammatory cells are assessed by the measurement of inflammatory mediator release in vitro. Three important groups of inflammatory mediators are cytokines, ROS, and lipid mediators (prostaglandins and leukotrienes). Some of the cytokines that have been implicated in the inflammatory process include TNF and interleukins (ILs). TNF and many ILs stimulate the deposition of fibroblast collagen, an initial step in fibrosis, whereas prostaglandins (PGs) inhibit these effects. ROS include hydroxyl radicals, hydrogen peroxide, and superoxide anion radicals. Oxidative stress occurs when the ROS level in a cell exceeds its antioxidant level and may result in damage to deoxyribonucleic acid (DNA), lipids, and proteins. Either direct or indirect effects of RCFs may result in genotoxic effects on pulmonary target cells.
Changes in the genetic material may be important in tumor development [Solomon et al. 1991]. Genotoxic effects may be assessed through the analysis of chromosome changes or alterations in gene expression following exposure to RCFs. The following summary of RCF in vitro studies examines their direct effects on cell proliferation and viability and their indirect effects via release of TNF, ROS, and other inflammatory mediators. The genotoxic effects of RCFs are also examined and summarized.

# C.1 Direct Effects of RCFs

Luoto et al. [1997] evaluated the effects of RCFs, quartz, and several MMVFs on LDH levels in rat alveolar macrophages and on hemolysis in sheep erythrocytes. RCF1, RCF2, RCF3, and RCF4 at 1.0 mg/ml induced a lower release of LDH (less than 20% of control) from rat alveolar macrophages compared with quartz (approximately 40% of control). RCF1 stimulated the lowest amount of LDH release (less than 10% of control), lower even than TiO₂ (approximately 15% of control). RCF1, RCF2, RCF3, RCF4, MMVF10, MMVF11, MMVF21, and MMVF22 at 0.5, 2.5, and 5.0 mg/ml induced a dose-dependent increase in sheep erythrocyte hemolysis. RCF1 and RCF3 induced slightly more hemolysis than the other MMVFs; the hemolytic activity of the MMVFs was similar to that of TiO₂ and much less than that of quartz.

RCF1, RCF2, RCF3, and RCF4 as well as asbestos and other fibers had a dose-dependent effect on cytotoxicity, as measured by cell detachment, in the human alveolar epithelial cell line A549 [Cullen et al. 1997]. Cell detachment is associated with epithelial damage, an important step in the inflammatory process, and these cells are a primary target of inhaled fibers. When equivalent doses (10, 25, 50, and 100 µg/ml) were tested with various fibers, all RCFs had a smaller effect than both crocidolite and amosite asbestos. When the dose data were adjusted for the number of fibers, RCF1, RCF2, and RCF3 were more cytotoxic than RCF4 and crocidolite.

In an assay determining the ability of fibers to induce an increase in the diameter of human A549 cells, an elutriated ceramic fiber (unspecified type) had a midrange of activity when compared with 12 other fibers [Brown et al. 1986]. It was more active than most varieties of amosite tested (but not UICC amosite) but less active than the chrysotile fibers. An association was found between increasing length and assay activity. When these same fiber samples were tested for colony inhibition in V79/4 Chinese hamster lung fibroblasts, the ceramic fiber had even less effect than the TiO₂ control. Analysis of all fibers upheld the association between increasing length and increased activity. In both assays, fiber diameter was not related to activity in most cases.

Chrysotile asbestos at 10 µg/cm² and crocidolite asbestos at 5 µg/cm² altered porcine aortic endothelial cell morphology and increased neutrophil adherence [Treadwell et al. 1996]. RCF1 fibers at 10 µg/cm² did not change cell morphology or increase neutrophil binding.

These studies suggest that RCFs may have some direct cytotoxic effects similar to those of asbestos. They are capable of inducing enzyme release and cell hemolysis, and they may decrease cell viability and inhibit proliferation. In most studies, the effects of RCFs were much less pronounced than the effects of asbestos at similar gravimetric concentrations. Fiber length was demonstrated to be an important factor in determining the cell responses in many studies.

# C.2 Indirect Effects of RCFs: Effects on Inflammatory Cells

In addition to direct effects on target cells, RCFs may have indirect mechanisms of action by acting on inflammatory cells. Inflammatory cells, such as pulmonary macrophages, may respond to fiber exposure by releasing inflammatory mediators that initiate the process of pulmonary inflammation and fibrosis.
Cytokines and ROS are among the inflammatory mediators released. Many studies, summarized below and in Table C-2, have investigated this link between fiber exposure and mediator release to try to elucidate the mechanism of action of RCFs. Cytokines are a class of proteins that are involved in regulating processes such as cell secretion, proliferation, and differentiation. One of the cytokines most commonly analyzed in RCF cytotoxicity studies is TNF, which has been implicated in silica- and asbestos-induced pulmonary fibrosis [Piguet et al. 1990; Lemaire and Ouellet 1996]. TNF and many ILs are associated with collagen deposition (an initial stage of fibrosis), and PGs inhibit these effects. Experiments on the effects of RCF exposure on TNF production in various cell types have had differing results.

TNF secretion has been associated with exposure to asbestos both in vitro and in vivo [Perkins et al. 1993]. In vitro incubation of human alveolar macrophages from normal volunteers with 25 µg/ml chrysotile asbestos resulted in increased levels of TNF secretion. Alveolar macrophages from six human subjects occupationally exposed to asbestos for more than 10 years secreted increased amounts of the cytokines TNF, IL-6, PGE₂, and IL-1β in vitro; macrophages from five human subjects occupationally exposed for less than 10 years did not show increases in these cytokines. The two exposure groups were matched for age, smoking history, and diagnosis. The increased TNF secretion under both in vitro and chronic in vivo asbestos-exposed conditions suggests its importance in the response of the lung to fiber exposure, although the small exposure group sizes warrant caution in drawing strong conclusions.

When equal numbers (8.2 × 10⁶) of various fiber types, including RCF1, RCF2, RCF3, and RCF4, were incubated separately with rat alveolar macrophages, silicon carbide whiskers and amosite and crocidolite asbestos stimulated the highest TNF release [Cullen et al. 1997]. RCF1, RCF2, RCF3, and RCF4 showed no significant increase in TNF release compared with the control. In contrast, ceramic fibers (unspecified type) at 50 µg/ml (1.72 × 10⁵ fibers) significantly increased TNF release compared with controls in rat alveolar macrophages [Fujino et al. 1995]. Potassium octatitanate whisker and chrysotile and crocidolite asbestos induced the greatest TNF release.

Alveolar macrophages exposed to either 300 or 1,000 µg/ml RCFs or 1,000 µg/ml asbestos showed a significant increase in TNF production [Leikauf et al. 1995]. At 300 µg/ml RCFs, a significant elevation occurred in leukotriene B₄; at 1,000 µg/ml RCFs, significant elevations occurred in both leukotriene B₄ and prostaglandin E₂. Levels induced at lower doses were not different from controls. At equivalent doses, the effect on the levels of all mediators measured was greater after asbestos exposure than after RCF exposure.

Chrysotile A, chrysotile B, crocidolite, MMVF21, RCF1, and silicon carbide at 100 µg/ml caused a significantly increased synthesis of TNF mRNA after 90 minutes of incubation with rat alveolar macrophages [Ljungman et al. 1994]. After 4 hr of incubation, chrysotile A still showed significantly increased TNF mRNA production, and all other fibers were at baseline concentrations. None of the fibers studied increased TNF release at 90 minutes. However, increased TNF bioactivity occurred after 4 hr of incubation with chrysotile A, chrysotile B, crocidolite, or MMVF21. RCF1 at 100 µg/ml did not increase TNF production under these conditions.
Chrysotile asbestos and alumina silicate ceramic fibers increased in vitro alveolar macrophage TNF production both in rats exposed to cigarette smoke in vivo and in rats unexposed to smoke [Morimoto et al. 1993]. Asbestos at 50 and 100 µg/ml induced a significantly greater TNF release in rats exposed to cigarette smoke versus unexposed rats. No significant differences were found between groups at any of the doses of RCF fibers tested (25, 50, and 100 µg/ml). RCF exposure, in contrast to chrysotile, did not have a significant synergistic effect with cigarette smoke exposure.

In addition to cytokines such as TNF, another group of inflammatory mediators that has been studied in vitro is the ROS. These mediators, also referred to as reactive oxygen metabolites (ROMs), are normally produced during the process of cellular aerobic metabolism and in phagocytic cells in response to particle exposure. One molecular effect of asbestos exposure has been demonstrated to be the induction of ROS [Kamp et al. 1992]. Oxidative stress occurs when the ROS level in a cell exceeds the antioxidant level. ROS may result in damage to DNA, lipids, and proteins and have been implicated as having a role in carcinogenesis [Klaunig et al. 1998; Vallyathan et al. 1998]. This research has suggested that free radical activity may be involved in the pathogenesis of fiber-induced lung disease.

The ability of RCFs to induce the release of free radicals has been studied in rodent alveolar macrophages. RF1 and RF2 (Japanese ceramic fibers) at 200 µg/ml induced a significant production of superoxide anion and a significant increase in intracellular free calcium in guinea pig alveolar macrophages [Wang et al. 1999]. Both superoxide anion and increased intracellular calcium are associated with oxidative stress; superoxide anions may generate hydrogen peroxide and hydroxyl radical, classified as ROS or free radicals. RF2 exposure resulted in a significant depletion of glutathione, a cellular antioxidant that protects cells against oxidative stress; depletion of glutathione is associated with oxidative stress. The RFs did not affect hydrogen peroxide production. In each test, the effects of chrysotile were significantly greater than those of the RFs.

RCF1, MMVF10, and amosite asbestos at 8.24 × 10⁶ f/ml induced a significant depletion of intracellular glutathione in rat alveolar macrophages [Gilmour et al. 1997]. RCF1 had effects similar to amosite asbestos, whereas MMVF10 caused the greatest depletion of glutathione. RCF1, RCF2, and RCF3 induced a greater production of ROMs in human polymorphonuclear cell cultures than RCF4 and chrysotile [Luoto et al. 1997]. A dose-dependent production of ROMs was seen for all RCFs and other MMVFs tested from 25 to 500 µg/ml. Quartz had a greater effect on ROM production than all fibers tested.

RCF1 had a high binding capacity for rat immunoglobulin G (IgG), a normal component of lung lining fluid [Hill et al. 1996]. At doses >100 µg RCF1, fibers coated with IgG induced a significantly increased superoxide anion release. This supports the premise that lung lining fluid and other substances that fibers encounter in vivo may significantly alter the effect of fibers on cells. IgG-coated long-fiber amosite asbestos, in spite of a poor binding affinity for IgG, induced a superoxide anion release response comparable to that of coated RCF1. Brown et al.
[1999] investigated the ability of RCF1, amosite asbestos, silicon carbide, MMVF10, Cole 100/475 glass fiber, and RCF4 to cause translocation of the transcription factor NF-κB to the nucleus in A549 lung epithelial cells. RCF1, amosite asbestos, and silicon carbide produced a significant dose-dependent translocation of NF-κB to the nucleus; the other fibers tested did not. Equal fiber numbers were tested.

These cytotoxicity studies indicate that RCFs may share some aspects of their mechanism of action with asbestos. Both affect the production of TNF and ROS as well as cell viability and proliferation, although the effects of RCFs have usually been less pronounced than those of asbestos. Results of in vitro studies are difficult to compare, even within studies of different fiber types, because of different study designs, fiber concentrations and characteristics, and endpoints.

# C.3 Genotoxic Effects of RCFs

In addition to research assessing the cytotoxicity of RCFs, studies have also assessed their genotoxicity. Most genotoxicity assays assess changes in or damage to genetic material. Methods that have been used to investigate the genotoxicity of fibers include cell-free or in vitro cell systems investigating DNA damage, studies of aneuploidy or polyploidy, studies of chromosome damage or mutation, gene mutation assays, and investigations of cell growth regulation [Jaurand 1997]. Several studies, described below and in Table C-3, have examined the ability of RCFs to produce genotoxic changes in comparison with asbestos.

Several fibers, including RCF1 and RCF4, were assayed for free-radical-generating activity using a DNA assay and a salicylate assay [Brown et al. 1998]. The DNA plasmid assay showed only amosite asbestos to have free radical activity; the salicylate assay showed amosite as well as RCF1 to have free radical activity. Coating the fibers with lung surfactant decreased their hydroxyl radical generation. The differing RCF1 results in the two assays were proposed to be a result of increased iron release from RCF1 in the salicylate assay, and an iron chelator inhibited the RCF hydroxylation of salicylate. RCF4 showed no free radical activity.

When equal fiber numbers were compared, RCF1, RCF2, RCF3, and RCF4 had minimal free-radical-generating activity on plasmid DNA compared with crocidolite and amosite asbestos [Gilmour et al. 1995]. RCFs and other MMVFs produced a small but insignificant amount of DNA damage. This damage was mediated by hydroxyl radicals. No correlation was found between iron content and free radical generation. At 9.3 × 10⁵ fibers per assay, amosite produced substantial free radical damage to plasmid DNA [Gilmour et al. 1997]. Amosite significantly upregulated the transcription factors AP-1 and NF-κB; RCF1 had a much smaller effect on AP-1 upregulation only.

SVFs, including ceramic fibers (unspecified), were reported to form hydroxyl radicals on the basis of the formation of the DNA adduct 8-hydroxydeoxyguanosine (8-OH-dG) from 2-deoxyguanosine (dG) in calf thymus DNA and in solution. Ceramic and glass wool fibers had poor hydroxylating capabilities relative to rockwool and slag wool fibers. Hydroxyl radical scavengers, such as dimethyl sulfoxide, decreased the hydroxylation; compounds that increase hydroxyl radical formation, such as hydrogen peroxide, increased it.
Rockwool in combination with cigarette smoke condensate caused a synergistic increase in 8-OH-dG formation; ceramic and glass wool fibers did not have synergistic effects with cigarette smoke.

RCF1, RCF2, RCF3, and RCF4 induced nuclear abnormalities, including micronuclei and polynuclei, in Chinese hamster ovary cells [Hart et al. 1992]. Micronuclei may form when chromosomes or fragments of chromosomes are separated during mitosis; polynuclei may arise when cytokinesis fails after mitosis. The incidence of micronuclei and polynuclei after exposure to 20 µg/cm² RCFs was from 22% to 33%. At 5 µg/cm², chrysotile and crocidolite induced nuclear abnormalities of 49% and 28%, respectively. Amosite, chrysotile, and crocidolite asbestos and ceramic fibers caused a significant increase in micronuclei in human amniotic fluid cells [Dopp et al. 1997]. The response was dose-dependent with asbestos fiber exposure but not with ceramic fiber exposure. Significant increases in chromosomal breakage and hyperdiploid cells were noted after asbestos and ceramic fiber exposure.

RCF1, RCF3, and RCF4 did not induce anaphase aberrations in rat pleural mesothelial cells [Yegles et al. 1995]. Of all fibers tested, UICC chrysotile was the most genotoxic on the basis of weight, the number of fibers with a length >4 µm, and the number of fibers corresponding to Stanton's and Pott's criteria [Stanton et al. 1981; Pott et al. 1987].

The effects of fibers on the mRNA levels of the c-fos and c-jun proto-oncogenes and ornithine decarboxylase (ODC) in hamster tracheal epithelial (HTE) cells and rodent pleural mesothelial (RPM) cells have also been examined [Janssen et al. 1994]. ODC is a rate-limiting enzyme in the synthesis of polyamines, compounds involved in cell proliferation and tumor promotion. In HTE cells, crocidolite induced a significant dose-dependent increase in levels of c-jun and ODC mRNA but not c-fos mRNA; RCF1 induced only small, non-dose-dependent increases in ODC mRNA levels. In RPM cells, crocidolite fibers at 2.5 µg/cm² significantly elevated levels of c-fos and c-jun mRNA. RCF1 increased proto-oncogene expression at cytotoxic concentrations of 25 µg/cm²; no significant effect was seen at concentrations ≤5 µg/cm². RCF1 fibers were nonmutagenic in the human-hamster hybrid cell line AL [Okayasu et al. 1999], whereas chrysotile was a significant inducer of mutations in the same system.

These studies demonstrate that RCFs may share some genotoxic mechanisms with asbestos, including induction of free radicals, micronuclei, polynuclei, chromosomal breakage, and hyperdiploid cells. Other studies have demonstrated that, using certain methods and doses, RCFs did not induce anaphase aberrations and induced proto-oncogene expression only at cytotoxic concentrations. RCFs were nonmutagenic in human-hamster hybrid cells.

# C.4 Discussion of In Vitro Studies

The toxicity of fibers has been attributed to their dose, dimensions, and durability, and any test system designed to assess the potential toxicity of fibers must address these factors. Durability is difficult to assess using in vitro studies because of their acute time course. However, in vitro studies provide an opportunity to study the effects of varying doses and dimensions of fibers in a quicker, more efficient manner than animal testing. Although they provide important information about mechanisms of action, they do not currently provide data that can be extrapolated to occupational risk assessment.
The association between fiber dimension and toxicity has been documented and reviewed [Stanton et al. 1977, 1981; Pott et al. 1987; Warheit 1994]. Fiber length has been correlated with the cytotoxicity of glass fibers [Blake et al. 1998]: Manville code 100 (JM-100) fiber samples with average lengths of 3, 4, 7, 17, and 33 µm were assessed for their effects on LDH activity and rat alveolar macrophage function, and the greatest cytotoxicity was reported for the 17-µm and 33-µm samples, indicating that length is an important factor in the toxicity of this fiber. Multiple macrophages were observed attached along the length of long fibers. Relatively short fibers, <20 µm long, were usually phagocytized by one rat alveolar macrophage [Luoto et al. 1994]; longer fibers were phagocytized by two or more macrophages. Incomplete, or frustrated, phagocytosis may play a role in the increased toxicity of longer fibers. Long fibers (17 µm average length) were a more potent inducer of TNF production and transcription factor activation than shorter fibers (7 µm average length) [Ye et al. 1999]. These studies demonstrate the important role of length in fiber toxicity and suggest that the capacity for macrophage phagocytosis may be a critical factor in determining fiber toxicity. The toxicity of individual fibers of the same type of RCF may therefore differ according to their size in relation to alveolar macrophages.

Several RCF in vitro studies reported a direct association between longer fiber length and greater cytotoxicity. Hart et al. [1992] reported the shortest fibers to be the least cytotoxic. Brown et al. [1986] reported an association between length and cytotoxic activity but not between diameter and activity. Wright et al. [1986] reported that cytotoxicity was correlated with fibers >8 µm in length. Yegles et al. [1995] reported that the longest and thickest fibers were the most cytotoxic: the four most cytotoxic fibers had GM lengths ≥13 µm and GM diameters >0.5 µm, and the production of abnormal anaphases and telophases was associated with Stanton fibers (length >8 µm and diameter ≤0.25 µm). Hart et al. [1994] reported that cytotoxicity increased with fiber length up to 20 µm. All of these studies demonstrate the importance of fiber dimensions in cytotoxicity. Other studies have not reported the length distribution of the fiber samples used. When studies are done with RCFs in which specific lengths are assessed for cytotoxicity (as has been done with glass fibers [Blake et al. 1998]), it will be possible to determine the strength of the association between RCF fiber length and toxicity and to determine whether a threshold length exists above which toxicity increases steeply.
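One way to see how such a length threshold interacts with a fiber sample is to assume, as is common for airborne fibers, that lengths are approximately lognormally distributed. The sketch below computes the fraction of fibers longer than 8 µm for two hypothetical samples with different geometric means; the GM and GSD values are illustrative only.

```
# Fraction of fibers exceeding a length threshold, assuming a lognormal
# length distribution with geometric mean (GM) and geometric standard
# deviation (GSD): P(L > x) = 1 - Phi(ln(x/GM) / ln(GSD)).
from math import log, erf, sqrt

def fraction_longer_than(threshold_um: float, gm_um: float,
                         gsd: float) -> float:
    """P(length > threshold) for a lognormal length distribution."""
    z = log(threshold_um / gm_um) / log(gsd)
    return 0.5 * (1 - erf(z / sqrt(2)))

# Two hypothetical samples, threshold of 8 um:
print(f"{fraction_longer_than(8, 13, 2.0):.2f}")  # GM 13 um: ~0.76
print(f"{fraction_longer_than(8, 4, 2.0):.2f}")   # GM 4 um:  ~0.16
```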
They also may induce free radicals, micronuclei, polynuclei, chromosomal breakage, and hyperdiploid cells in vitro. In vitro studies also provide an excellent opportunity for investigating the pathogenesis of RCFs. However, comparisons between in vitro studies are difficult to make because of differences in fiber doses, dimensions, preparations, and compositions. Important information, such as fiber length distribution, is not always determined. Even when comparable fibers are studied, the cell line or the conditions under which they are tested may vary. Much of the research to date has been done in rodent cell lines and in cells that are not related to the primary target organ. In vitro studies using human pulmonary cell lines should provide pathogenesis data most relevant to human health risk assessment. Short-term in vitro studies cannot take into account the influence of fiber dissolution and fiber compositional changes that may occur over time. In an in vivo exposure, fibers are continually modified physically, chemically, and structurally by components of the lung environment. This complex set of conditions is difficult to recreate in vitro. Just as it is unlikely that any one factor will accurately predict fiber toxicity, it is even more unlikely that any single in vitro test will do so. Best results are obtained by assessing toxicity with several in vitro tests and comparing the findings with in vivo results. In vitro studies provide an excellent opportunity to investigate factors important to fiber toxicity such as dose, dimension, surface area, and physicochemical composition. They provide information that is an important supplement to the data of chronic inhalation studies. They do not currently provide information that can be directly applied to human health risk assessment and the development of occupational exposure limits.

# Acknowledgments

This document was prepared by the Education and Information Division (EID), Paul Schulte, Ph.D., Director. Thomas Lentz, Ph.D.; Kathleen MacMahon, D.V.M.; Ralph Zumwalde; and Clayton Doak were the principal authors of this document. Other members of the Document Development Branch, in particular Marie Haring Sweeney, Ph.D., were extremely helpful in providing technical reviews and comments. For contributions to the technical content and review of this document, the authors gratefully acknowledge the following NIOSH personnel:

The authors wish to thank Susan Afanuh, Vanessa Becks, Gino Fazio, Anne Hamilton, and Jane Weber for their editorial support and contributions to the design and layout of this document. Clerical support in preparing this document was provided by Judith Curless, Pamela Graydon, Rosmarie Hagedorn, Norma Lou Helton, and Kristina Wasmund. Finally, special appreciation is expressed to the following individuals and organizations for serving as independent external reviewers and providing comments that contributed to the development of this document: Chris Burley, European Ceramic Fibres Industry Association, Paris, France.

LDH release (less than 10% of control), lower even than TiO₂ (approximately 15% of control). RCF1, RCF2, RCF3, RCF4, MMVF10, MMVF11, MMVF21, and MMVF22 at 0.5, 2.5, and 5.0 mg/ml induced a dose-dependent increase in sheep erythrocyte hemolysis. RCF1 and RCF3 induced slightly more hemolysis than the other MMVFs. The hemolytic activity of MMVFs was similar to that of TiO₂ and much less than that of quartz.
At doses of 100, 300, and 1,000 µg/ml, RCFs (unspecified type) induced an increased release of LDH from rat macrophages [Leikauf et al. 1995]. At equivalent gravimetric doses of 1,000 µg/ml, the effects of RCFs were much less than those of silica. Ceramic fibers (unspecified type) at 50 µg/ml induced no difference in LDH levels compared with negative controls in rat alveolar macrophages [Fujino et al. 1995]. Chrysotile, crocidolite, amosite, and anthophyllite asbestos all induced significant increases in LDH and β-glucuronidase levels. Ceramic fibers also induced a significant increase in β-glucuronidase, but much less than that induced by each of the asbestos fiber types. In the permanent macrophage-like cell line P388D1, an elutriated ceramic fiber (unspecified type) at 10 or 50 µg/ml had no significant effect on cell viability after 24 or 48 hr, as measured by the trypan blue assay [Wright et al. 1986]. The elutriation process used for this experiment provided mainly respirable fibers. All other fibers examined, excluding short-fiber amosite, reduced viability. Although specific data on the effect of fiber exposure on enzyme release were not presented, an association between decreasing cell viability and increasing loss of intracellular glucosaminidase and LDH was reported under most conditions investigated. Cytotoxicity was correlated with fiber lengths greater than 8 µm when all fiber types were combined. The effect of several fibers on the viability of rat pleural mesothelial cells was investigated [Yegles et al. 1995]. On a per-weight basis, the rank order of cytotoxicity was National Institute of Environmental Health Sciences (NIEHS) chrysotile, RCF3, MMVF10 and RCF1, Calidria chrysotile, RCF4, and all others. Based on the total number of fibers, the rank order of cytotoxicity was RCF3, MMVF10, RCF1, RCF4, MMVF11, NIEHS chrysotile, amosite, and all others. Cytotoxicity was dependent on fiber dimensions, as the longest (RCF3, MMVF10, RCF1, MMVF11) or thickest (RCF4, RCF1, MMVF11, RCF3) fibers were the most cytotoxic. RCF1, RCF2, RCF3, and RCF4 were found to inhibit the proliferation and colony-forming efficiency of Chinese hamster ovary cells in vitro [Hart et al. 1992]. The inhibition was concentration-dependent. RCF4 was the least cytotoxic, RCF2 was intermediate, and RCF1 and RCF3 were the most cytotoxic. A correlation existed between average fiber length and toxicity, with the shortest fibers being the least cytotoxic. LC₅₀s for the RCFs ranged from 10 to 30 µg/cm². In each assay, the RCFs were less cytotoxic than the positive controls, crocidolite (LC₅₀ = 5 µg/cm²) and chrysotile (LC₅₀ = 1 µg/cm²) asbestos. At 0 to 80 µg/cm², RCF1, tremolite, and erionite were significantly less cytotoxic to human-hamster hybrid A_L cells than chrysotile, as determined by the surviving fraction of colonies after fiber exposure [Okayasu et al. 1999]. RCF1, crocidolite asbestos, and MMVF10 at 25 µg/cm² induced focal necrosis in rat pleural mesothelial cells after 24 hr that became a more obvious necrosis by 72 hr [Janssen et al. 1994]. At 72 hr, the qualitative effects of 25 µg/cm² RCF1 were comparable to those of 5 µg/cm² crocidolite asbestos. In contrast, minimal necrosis was seen at 25 µg/cm² crocidolite asbestos, RCF2, and
Multiple venues encourage or permit the public to come in contact with animals, resulting in millions of human-animal contacts each year. These settings include county or state fairs, petting zoos, animal swap meets, pet stores, zoologic institutions, circuses, carnivals, farm tours, livestock-birthing exhibits, educational exhibits at schools, and wildlife photo opportunities. Although multiple benefits of human-animal contact exist, infectious diseases, rabies exposures, injuries, and other human health problems associated with these settings are of concern. Rabid or potentially rabid animals in public settings can result in extensive public health investigation and action. Infectious disease outbreaks reported during the previous decade have been attributed to multiple organisms, including Escherichia coli O157:H7, Salmonella, Coxiella burnetii, Mycobacterium tuberculosis, and ringworm. Such incidents have substantial medical, public health, legal, and economic effects. This report provides standardized recommendations for public health officials, veterinarians, animal venue operators, animal exhibitors, visitors to animal venues and exhibits, and others concerned with disease control and with minimizing risks associated with animals in public settings. The recommendation to wash hands is the single most important prevention step for reducing the risk for disease transmission. Other critical recommendations are that venues include transition areas between animal areas and nonanimal areas (where food is sold) and that animals are properly cared for and managed in public settings. In addition, this report recommends educating venue operators, staff, exhibitors, and visitors regarding the risk for disease transmission where animal contact is possible.

# Introduction

Contact with animals in public settings (e.g., fairs, farm tours, and petting zoos) provides opportunities for entertainment and education concerning animals and animal husbandry. However, inadequate understanding of disease transmission and animal behavior can lead to infectious diseases, rabies exposures, injuries, and other health problems among visitors, especially children, in these settings. Diseases transmitted from animals to humans are called zoonoses or zoonotic diseases. Of particular concern are situations in which substantial numbers of persons are exposed to zoonotic disease or become ill, necessitating public health investigation and medical follow-up. A 2004 review identified >25 human infectious disease outbreaks during 1990-2000 associated with visitors to animal exhibits (1). The National Association of State Public Health Veterinarians (NASPHV) recognizes the positive benefits of human-animal contact. NASPHV considers that the risks of these contacts can be minimized in properly supervised and managed settings by using appropriately selected animals that receive regular health examinations and preventive care. Although eliminating all risk from animal contacts might not be achievable, this report provides standardized recommendations for minimizing disease and injury. NASPHV recommends that local and state public health, agricultural, environmental, and wildlife agencies, and other organizations use these recommendations to establish their own guidelines or regulations for reducing the risk for disease from human-animal contact in public settings.
Multiple venues exist where public contact with animals is permitted (e.g., animal displays, petting zoos, animal swap meets, pet stores, zoologic institutions, nature parks, circuses, carnivals, farm tours, livestock-birthing exhibits, county or state fairs, schools, and wildlife photo opportunities). Persons responsible for managing these venues are encouraged to use the information in this report to reduce risk. Guidelines to reduce risks for disease from animals in health-care facilities and from service animals (e.g., guide dogs) have been developed (2)(3)(4). These settings are not specifically addressed in this report, although the general principles and recommendations might be applicable.

# Enteric (Intestinal) Diseases

Infections with enteric bacteria and parasites pose the highest risk for human disease from animals in public settings (5). Healthy animals can harbor multiple human enteric pathogens, and certain of these organisms have a low infectious dose (6)(7)(8). Because of the popularity of animal venues, a substantial number of persons might be exposed to these organisms. Reports of illness and outbreaks among visitors to fairs, farms, and petting zoos have been documented. Pathogens linked to outbreaks include Escherichia coli O157:H7, Campylobacter, Salmonella, and Cryptosporidium (9)(10)(11)(12)(13)(14)(15)(16)(17). Although these reports usually document cattle, sheep, and goats as sources of infection, poultry (18)(19)(20)(21) and other domestic and wild animals are also potential sources. The primary mode of transmission for enteric pathogens is the fecal-oral route. Because animal fur, hair, skin, and saliva (22) can become contaminated with fecal organisms, transmission might occur when persons pet, touch, or are licked by animals. Transmission has also occurred from fecal contamination of food, including raw milk (23)(24)(25), sticky foods (e.g., cotton candy [26]), water (27)(28)(29), and environmental surfaces (12,18,30,31). Animals infected with enteric pathogens (e.g., E. coli O157:H7, Salmonella, and Campylobacter) frequently exhibit no signs of illness and might shed pathogens intermittently. Therefore, although removing ill animals (especially those with diarrhea) is necessary to protect animal and human health, it is not sufficient: animals that appear to be healthy might still be infectious and contaminate the environment. Certain organisms survive for months or years in the environment (32)(33)(34)(35)(36). Because of intermittent shedding and the limitations of laboratory tests, culturing fecal specimens or other attempts to identify, screen, and remove infected animals might not be effective in eliminating the risk for transmission. Antimicrobial treatment of animals cannot be depended upon to eliminate infection and shedding of enteric pathogens or to prevent reinfection. Multiple factors increase the probability of transmission at animal exhibits. Animals are more likely to shed pathogens because of stress induced by prolonged transportation, confinement, crowding, and increased contact with persons (37)(38)(39)(40)(41)(42)(43). Commingled animals increase the probability that animals shedding organisms will infect other animals. The prevalence of certain enteric pathogens might be higher in young animals (44)(45)(46), which are frequently exhibited in petting zoos. Shedding of E. coli O157:H7 and Salmonella is highest in the summer and fall, when substantial numbers of traveling animal exhibits, agricultural fairs, and petting zoos are scheduled (43,47,48).
The risk for infections or outbreaks is increased by certain human factors and behaviors. These factors include inadequate hand washing, venues that attract substantial numbers of children, a lack of close supervision of children, hand-to-mouth activities (e.g., use of pacifiers, thumb-sucking, smoking, and eating) in proximity to animals, and a lack of awareness of the risk. The layout and maintenance of facilities and animal exhibits can also contribute to the risk for infection. Risk factors include inadequate hand-washing facilities (1), structural deficiencies associated with temporary food-service facilities, inadequate separation between animal exhibits and food-consumption areas (49), and contaminated or inadequately maintained drinking water and sewage/manure disposal systems (27-29,31).

# Lessons from Outbreaks

Two E. coli O157:H7 outbreaks in Pennsylvania and Washington State led CDC to establish recommendations for enteric disease prevention in animal contact settings (http://www.cdc.gov/foodborneoutbreaks/pulication/recomm_farm_animal.htm). Common findings in both outbreaks were animal contact at farms open to the public and inadequate hand washing (14,16). In the Pennsylvania outbreak, 51 persons (median age: 4 years) became ill within 10 days of visiting a dairy farm, and eight (16%) developed hemolytic uremic syndrome (HUS), a potentially fatal consequence of E. coli O157:H7 infection. The same strain of E. coli O157:H7 was isolated from cattle, case-patients, and the farm environment. In addition to the reported cases, an increased number of diarrhea cases in the community were attributed to visiting the farm. An assessment of the farm environment determined that 1) no areas existed for eating and drinking that were separate from the animal contact areas, and 2) the limited hand-washing facilities were not configured for children (14). Failure to properly wash hands was also a contributing factor in other outbreaks caused by Cryptosporidium (11) and Salmonella (12). The protective effect of hand washing and the persistence of organisms in the environment were demonstrated in an outbreak of Salmonella infections at a Colorado zoo. Sixty-five cases (the majority of them children) were associated with touching a wooden barrier around the Komodo dragon exhibit. Noninfected children were substantially more likely to have washed their hands after visiting the exhibit. Salmonella was isolated from 39 case-patients, a Komodo dragon, and the wooden barrier (12). During 2000-2001 at a Minnesota children's farm day camp, washing hands with soap after touching a calf and washing hands before going home were protective factors in two outbreaks involving multiple enteric organisms (50). A total of 84 illnesses were documented among attendees. Implicated organisms for the human infections were E. coli O157:H7, Cryptosporidium parvum, non-O157 Shiga toxin-producing E. coli (STEC), Salmonella enterica serotype Typhimurium, and Campylobacter jejuni. These organisms, as well as Giardia, were also isolated from the calves. Risk factors for children included caring for an ill calf and getting visible manure on their hands. Enteric pathogens can contaminate and persist in animal housing areas. For example, E. coli O157:H7 can survive in soil for months (31,32,34,51). Prolonged environmental persistence of pathogens was documented in an Ohio outbreak of E.
coli O157 infections in which 23 persons became ill at a fair after handling sawdust, attending a dance, or eating and drinking in a building where animals were exhibited during the previous week (31). Fourteen weeks after the fair ended, E. coli O157 was isolated from multiple environmental sources within the building, including sawdust on the floor and dust on the rafters. Forty-two weeks after the fair ended, E. coli O157 was recovered from sawdust on the floor. Transmission of E. coli O157:H7 from airborne dust was implicated in an Oregon county fair outbreak with 60 cases, the majority of them children (18). Illness was associated with visiting an exhibition hall that housed goats, sheep, pigs, rabbits, and poultry but was not associated with touching animals or their pens, eating, or inadequate hand washing. The same organism was recovered from ill persons and the building. In 2004, an outbreak of E. coli O157:H7 infection was associated with attendance at a goat and sheep petting zoo at the North Carolina State Fair (51). Health officials investigated 112 case-patients, including 15 who had HUS. The same strain of E. coli O157:H7 infecting case-patients was isolated from the animal bedding 10 days after the fair was over. The strain was also isolated from the soil after the animal bedding was removed. The effect of improper facility design was illustrated by one of the most substantial waterborne outbreaks in the United States (28,29). Approximately 800 suspected cases of E. coli O157:H7 and Campylobacter were identified among attendees at a New York county fair where the water and sewage systems had deficiencies. # Sporadic Infections Multiple sporadic infections, not identified as part of recognized outbreaks, have been associated with animal environments. A study of sporadic E. coli O157:H7 infections among selected U.S. states and counties determined that case-patients, especially children, were more likely to have visited a farm with cows than healthy persons (52). Additional studies also documented an association between E. coli O157:H7 infection and visiting a farm (53) or living in a rural area (54). Studies of human cryptosporidiosis have documented contact with cattle or visiting farms as risk factors for infection (55)(56)(57). A case-control study identified multiple factors associated with Campylobacter infection, including raw milk consumption and contact with farm animals (58). In other studies, farm residents were at a lower risk for infection with Cryptosporidium (55) and E. coli O157:H7 (59) than farm visitors, probably because the residents had acquired immunity to the infection as a result of their early and frequent exposure to these organisms. # Additional Health Concerns Although enteric diseases are the most commonly reported health risks associated with animals in public settings, multiple other health risks are of concern. For example, allergies can be associated with animal dander, scales, fur, feathers, body wastes (urine), and saliva (60)(61)(62). Additional health concerns addressed in this report include injuries, rabies exposures, and other infections. # Injuries Injuries associated with animals in public settings include bites, kicks, falls, scratches, stings, crushing of the hands or feet, and being pinned between the animal and a fixed object. These injuries have been associated with multiple species, including big cats (e.g., tigers), monkeys, domestic animals, and zoo animals. 
The settings have included public stables, petting zoos, traveling photo opportunities, schools, children's parties, and animal rides.*

# Rabies Exposures

Contact with mammals might expose persons to rabies through contamination of mucous membranes, bites, scratches, or other wounds with infected saliva or nervous tissue. Although no human rabies deaths caused by animal contact in public exhibits have been recorded, multiple rabies exposures have occurred, requiring extensive public health investigation and medical follow-up. For example, in the previous decade, thousands of persons have received rabies postexposure prophylaxis (PEP) after being exposed to rabid or potentially rabid animal species (including cats, goats, bears, sheep, ponies, and dogs) at 1) a pet store in New Hampshire (63), 2) a county fair in New York State (64), 3) petting zoos in Iowa (65,66) and Texas (J.H. Wright, DVM, Texas Department of Health, personal communication, 2004), and 4) school and rodeo events in Wyoming (1). Substantial public health and medical care challenges associated with potential mass rabies exposures include difficulty in identifying and contacting persons, correctly assessing exposure risks, and providing timely medical treatment. Prompt assessment and treatment are critical for this disease, which is usually fatal.

# Other Infections

Multiple bacterial, viral, fungal, and parasitic agents have been associated with animal contact. These organisms are transmitted through various modes. Infections from animal bites are common and frequently require extensive treatment or hospitalization. Bacterial pathogens that are frequently associated with animal bites include Pasteurella, Staphylococcus, Streptococcus, Capnocytophaga canimorsus, Bartonella henselae (cat-scratch disease), and Streptobacillus moniliformis (rat-bite fever). Certain monkey species (especially macaques) that are kept as pets or used in public exhibitions can be infected with herpes B virus, either asymptomatically or with mild oral lesions. Human exposure through bites or fluids can result in a fatal meningoencephalitis (67,68). Because of difficulties with laboratory testing to confirm monkey infection and the high prevalence of herpes B, monkey bites can require intensive public health and medical follow-up. Skin contact with animals in public settings might also result in human infection. Fifteen cases of ringworm infection (club lamb fungus) caused by Trichophyton species and Microsporum gypseum were documented among owners and family members who exhibited lambs in Georgia during a show season (69). Ringworm infection in 23 persons and multiple animal species was traced to a Microsporum canis infection in a hand-reared zoo tiger cub (70). Orf virus infections (contagious ecthyma or sore mouth) have occurred in goats and sheep at a children's petting zoo (71) and in a lamb used for an Easter photo opportunity (M. Eidson, DVM, New York State Department of Health, personal communication, 2003). After handling various species of infected exotic animals, a zoo attendant experienced an extensive papular skin rash from a cowpox-like virus (72). In 2003, multiple cases of monkeypox occurred among persons who had had contact with infected prairie dogs at a child care center (73,74). Ecto- and endoparasites pose concerns when humans and exhibit animals interact. Sarcoptes scabiei is a skin mite that infests humans and animals, including swine, dogs, cats, foxes, cattle, and coyotes (75,76).
Although human infestation from animal sources is usually self-limiting, skin irritation and itching might occur for multiple days and be difficult to diagnose (75)(76)(77). Animal fleas bite humans, which increases the risk for infection or allergic reaction. In addition, fleas are the intermediate host for a tapeworm species that can infect children. Multiple other animal helminths might infect humans through fecal-oral contact or through contact with animals or contaminated earth (78,79). Parasite control through veterinary care and proper husbandry, coupled with hand washing, reduces the risks associated with ecto- and endoparasites (80). Tuberculosis (TB) is another disease of concern in certain animal settings. Twelve circus elephant handlers at an exotic animal farm in Illinois were infected with Mycobacterium tuberculosis, and one handler had signs consistent with active disease after three elephants died of TB. Medical history and testing of the handlers indicated that the elephants had been a probable source of exposure for the majority of the human infections (81). At a zoo in Louisiana, seven animal handlers who were previously negative for TB tested positive after a Mycobacterium bovis outbreak in rhinoceroses and monkeys (82). Because of concerns regarding the risk for exposure to the public, the U.S. Department of Agriculture (USDA) developed guidelines regarding removal of infected animals from public contact (83). Zoonotic pathogens might also be transmitted by direct or indirect contact with reproductive fluids, aborted fetuses, or newborns from infected dams. Live-birthing exhibits, usually involving livestock (e.g., cattle, pigs, goats, or sheep), are popular at agricultural fairs. Although the public usually does not have direct contact with animals during birthing, newborns and their dams are frequently available for petting and observation afterward. Q fever (Coxiella burnetii), leptospirosis, listeriosis, brucellosis, and chlamydiosis are serious zoonoses that can be associated with contact with reproductive materials (84). C. burnetii is a rickettsial organism that most frequently infects cattle, sheep, and goats. The disease can cause abortion in animals, but more frequently the infection is asymptomatic. During parturition, infected animals shed substantial numbers of organisms that might become aerosolized. The majority of persons exposed to C. burnetii develop an asymptomatic infection, but clinical illness can range from an acute influenza-like illness to life-threatening endocarditis. A Q fever outbreak involving 95 confirmed case-patients and 41 hospitalizations was linked to goats and sheep giving birth at petting zoos. These petting zoos were located in indoor shopping malls, indicating that indoor-birthing exhibits might pose an increased risk for Q fever transmission (85). Chlamydophila psittaci infections cause respiratory disease (commonly called psittacosis) and are usually acquired from psittacine birds (86). For example, an outbreak of C. psittaci pneumonia occurred among the staff at the Copenhagen Zoo in Denmark (87). On limited occasions, chlamydial infections acquired from sheep, goats, and birds have resulted in reproductive problems in humans (86,88,89).

# Recommendations

Guidelines and recommendations from multiple organizations contributed to the recommendations in this report. A limited number of states have specific guidelines or legislation for petting zoo exhibitors and other animal exhibition venues (1,16,90-92).
However, in the United Kingdom, recommendations to prevent enteric infections at animal exhibitions and agricultural fairs were developed in 1989 (93), 1995 (94), and 2000 (95). In the United States, the American Zoo and Aquarium Association has accreditation standards for reducing risks of animal contact with the public in zoologic parks (96). In accordance with the Animal Welfare Act, the USDA Animal Care program licenses and inspects certain animal exhibits for humane treatment of animals, but this act is not intended for human health protection. No federal laws address the risk for transmission of pathogens at venues where the public has contact with animals. However, in 2001, CDC issued guidelines to reduce the risk for enteric pathogens (16). CDC has also issued recommendations for preventing transmission of Salmonella from reptiles to humans (97). The Association for Professionals in Infection Control and Epidemiology (APIC) developed guidelines to address risks associated with the use of service animals in health-care settings (2). Opportunities for animal contact with the public occur in various settings. The recommendations provided in this report should be tailored to specific settings and incorporated into guidelines and regulations developed at the state or local level. This report should be disseminated to persons who own or manage animals in public settings. State and local human and animal health agencies should make educational materials available to venue operators and other interested persons (90,91,98). Incidents of disease transmission or injury should be promptly reported to public health authorities and investigated.

# Educational Responsibilities of Venue Operators

Education is essential to reduce risks associated with animal contact in public settings. Animal owners, exhibit operators, and their staff should be educated to make appropriate management decisions. In addition, the public should be educated so that they can weigh the benefits and risks of animal contact and take appropriate measures to reduce risks. Recommendations include the following:
- Operator education. Venue operators should familiarize themselves with the basic risk-reduction recommendations contained in this report. The responsibility of the operator is to apply these recommendations to specific settings and provide basic education to staff and visitors (e.g., using signage, stickers, handouts, or verbal information).

# General Recommendations for Managing Public and Animal Contact

The public's contact with animals should occur in settings where controls are in place to reduce the potential for injuries or disease and increase the probability that exposures will be reported, documented, and handled appropriately. The design of facilities or contact settings should minimize the risk for exposure and facilitate hand washing (Box 1). Certain jurisdictions might choose to establish more restrictive recommendations in areas where animal contact is specifically encouraged (e.g., petting zoos). Requirements for the design of facilities or contact settings might include double barriers to prevent contact with animals or contaminated surfaces except in specified interaction areas. Manure disposal and wastewater runoff should occur in areas where the risk for exposure to pedestrians is eliminated or reduced. Control methods should focus on facility design and management.
Recommendations regarding the management of animals in public settings should address animal areas (where animal contact is possible or encouraged), transition areas, and nonanimal areas (areas in which animals are not permitted, with the exception of service animals) (Figure). Specific guidelines might be necessary for certain settings (e.g., schools). Recommendations for cleaning procedures should be tailored to the specific situation (Appendix).

# Animal Areas

Recommendations should be applied both to settings in which animal contact is possible (e.g., county fairs) and settings in which direct animal contact is encouraged (e.g., petting zoos). However, in settings where direct animal contact is encouraged, additional precautions should be taken to reduce the risk for injuries and disease transmission. For areas where animal contact is possible, the entry and exit points for animal contact areas should be designed to facilitate proper visitor flow through transition areas (Figure). These transition areas should include educational information and hand-washing facilities. Fences, gates, or other types of barriers can restrict uncontrolled access to animals and animal contact areas and ensure that visitors enter and exit through transition areas. Animal feed and water should not be accessible to the public. In addition, in buildings where animals live, adequate ventilation is essential for both animals (99) and humans. Food and beverages. No food or beverages should be allowed in animal areas. In addition, smoking, carrying toys, and use of pacifiers, spill-proof cups ("sippy cups"), and baby bottles should not be permitted in animal areas. Cleaning procedures. Manure and soiled animal bedding should be removed promptly. Animal waste and specific tools for waste removal (e.g., shovels and pitchforks) should be confined to designated areas restricted from public access. Manure and soiled bedding should not be transported or removed through nonanimal areas or transition areas used by visitors. If this is unavoidable, precautions should be taken to avoid spillage and aerosolization. During events where animal contact is encouraged, periodic disinfection of the venue might reduce the risk for disease transmission during the event. Supervision of children. Children should be closely supervised during contact with animals to discourage contact with manure and soiled bedding. Hand-to-mouth contact (e.g., thumb-sucking) should also be discouraged. Appropriate hand washing should be required. Additional recommendations for groups at high risk, including children aged <5 years, are outlined in this report (see Additional Recommendations). Staff. Trained staff should be present in areas where animal contact is permitted to encourage appropriate human-animal interactions, reduce risk for exposure (e.g., by promptly cleaning up wastes), and process reports of injuries and exposures. Feeding animals. If feeding animals is permitted, only food sold by the venue for that purpose should be allowed. Food sold for animal consumption should not be eaten by humans and should not be provided in containers that can be eaten by persons (e.g., ice cream cones). This policy will reduce the risk for animal bites and the probability of children eating food that has come into contact with animals. Use of animal areas for public (nonanimal) activities. Zoonotic pathogens can contaminate the environment for substantial periods (31). If animal areas need to be used for public events (e.g., weddings and dances), these areas should be cleaned and disinfected, particularly if food and beverages are served. Materials with smooth, impervious surfaces (e.g., steel, plastic, and sealed concrete) are easier to clean than other materials (e.g., wood or dirt floors). Removing organic material (bedding, feed, and manure) before using disinfectants is important. A list of disinfectants is included in this report (Appendix).

# BOX 1. Hand-washing recommendations to reduce disease transmission from animals in public settings

Hand washing is the single most important prevention step for reducing disease transmission.

# How to Wash Hands
1. Wet hands with running water; place soap in palms; rub together to make a lather; scrub hands vigorously for 20 seconds; rinse soap off hands; then dry hands with a disposable towel.
2. If possible, turn off the faucet by using a disposable towel.
3. Assist young children with washing their hands.

# Hand-Washing Facilities or Stations
- Hand-washing facilities should be accessible and sufficient for the maximum anticipated attendance, and configured for use by children, adults, and persons with disabilities.
- Hands should always be washed after leaving animal areas and before eating or drinking.
- Hand-washing stations should be conveniently located between animal and nonanimal areas and in food concession areas.
- Maintenance should include routine cleaning and restocking of towels and soap.
- Running water should be of sufficient volume and pressure to remove soil from hands. Volume and pressure might be substantially reduced if the water supply is furnished from a holding tank; therefore, a permanent pressured water supply is preferable.
- The hand-washing unit should be designed so that both hands are free for hand washing.
- Hot water is preferable, but if the hand-washing stations are supplied with only cold water, a soap that emulsifies easily in cold water should be provided.
- Communal basins, where water is used by more than one person, do not constitute adequate hand-washing facilities.

# Hand-Washing Agents
- Liquid soap dispensed by a hand or foot pump is recommended.
- Alcohol-based hand sanitizers are effective against multiple common disease agents (e.g., Escherichia coli, Salmonella, and Campylobacter) when soap and water are not available. However, they are ineffective against certain organisms (i.e., bacterial spores, Cryptosporidium, and certain viruses).
- Hand sanitizers are less effective if hands are visibly soiled. Therefore, visible contamination and dirt should be removed to the extent possible before using hand sanitizers.

# Hand-Washing Signs
At venues where human-animal contact occurs, signs regarding proper hand-washing practices are critical to reduce disease transmission.
- Signs that are reminders to wash hands should be posted at exits from animal areas.

# Example of a Hand-Washing Sign
Directions for Washing Hands. When:
- After going to the toilet
- After exiting animal areas
- Before eating
- Before preparing foods

# Transition Areas Between Animal and Nonanimal Areas

Providing transition areas for visitors to pass through when entering and exiting animal areas is critical. The transition areas between animal and nonanimal areas should be designated as clearly as possible, even if they need to be conceptual rather than physical (Figure).
In these areas, information should be provided regarding 1) the prevention of infection and injury and 2) the location of hand-washing facilities, with instructions for visitors to wash their hands upon exiting.
- Signs informing visitors that they are entering an animal area should be posted at the entrance transition areas. These signs should also instruct visitors not to eat, drink, or place their hands in their mouths while in the animal area. Visitors should be discouraged from taking strollers, baby bottles, pacifiers, food, and beverages into areas where animal contact is encouraged or where contact with animal manure or bedding can occur. Visitor traffic should be controlled to avoid overcrowding the animal area.
- Exit transition areas should be marked with signs instructing the public to wash their hands. Hand-washing stations should be available and accessible to all visitors, including children and persons with disabilities (Box 1).

# Nonanimal Areas

Nonanimal areas are areas in which animals are not permitted, with the exception of service animals.
- Food and beverages should be prepared, served, and consumed only in the designated nonanimal areas. Hand-washing facilities should be available where food or beverages are served (Box 1).
- If animals or animal products (e.g., animal pelts, animal waste, and owl pellets) (100) are used for educational purposes in nonanimal areas (Box 2), the nonanimal areas should be cleaned (Appendix). Animals and animal products should not be brought into school cafeterias and other food-consumption areas.

# FIGURE. Examples of designs for animal contact settings, including clearly designated animal areas,§ nonanimal areas,* and transition areas with hand-washing stations and signage†
* Nonanimal areas - areas in which animals are not permitted, except for service animals (e.g., guide dogs). Food and beverages should be prepared, served, and consumed only in the designated nonanimal areas.
† Signs should be in different formats depending on the audience (e.g., children and persons who do not speak English). Nonwritten information (e.g., verbal instructions and videos) can also be used.
§ Animal area - areas in which animal contact is possible (e.g., county fairs) or is encouraged (e.g., petting zoos).

# Animal-Specific Guidelines
- Fish: Use disposable gloves when cleaning aquariums, and do not dispose of aquarium water in sinks used for food preparation or for obtaining drinking water.

# Animals Not Recommended in School Settings
- Wild or exotic animals (e.g., lions, tigers, ocelots, and bears).
- Nonhuman primates (e.g., monkeys and apes).
- Mammals at higher risk for transmitting rabies (e.g., bats, raccoons, skunks, foxes, and coyotes).
- Wolf-dog hybrids.
- Aggressive or unpredictable animals, wild or domestic.
- Stray animals with unknown health and vaccination history.
- Venomous or toxin-producing spiders, insects, reptiles, and amphibians.

# Animal Care and Management

The risk for disease or injuries from animal contacts can be reduced by carefully managing the specific animals used for such contacts. The following recommendations should be considered for management of animals in contact with the public.
- Animal care. Animals should be monitored daily by their owners or caretakers for signs of illness, and they should receive appropriate veterinary care. Ill animals and animals from herds with a recent history of abortion or diarrhea should not be exhibited.
Animals should be housed to minimize stress and overcrowding, which can increase shedding of microorganisms. Options to reduce the burden of enteric pathogens need to be evaluated, particularly for animals that are at higher risk and that will be used in venues where animal contact is encouraged.

# Additional Recommendations
- Populations at high risk. Groups at high risk for serious infection include persons with waning immunity (e.g., older adults); children aged <5 years; and persons who are cognitively impaired, pregnant, or immunocompromised (e.g., persons with human immunodeficiency virus/acquired immunodeficiency syndrome, without a functioning spleen, or on immunosuppressive therapy). Persons at high risk should take heightened precautions at any animal exhibit. In addition to thorough and frequent hand washing, heightened precautions might include avoiding contact with animals and their environment (e.g., pens, bedding, and manure). Animals of particular concern for transmitting enteric diseases include young ruminants, young poultry, reptiles, amphibians, and ill animals. For young children, risk for exposure might be reduced if they are closely supervised by adults, carried by adults in animal areas, or have animal contact only over a barrier. These measures discourage animals from jumping on or nuzzling children and minimize contact with feces and soiled bedding.
- Consumption of unpasteurized products. Unpasteurized dairy products (e.g., milk, cheese, and yogurt) as well as unpasteurized apple cider or juices should not be consumed.
- Drinking water. Local public health authorities should inspect drinking water systems before use. Only potable water should be used for human consumption. Back-flow prevention devices should be installed between outlets in livestock areas and water lines supplying other uses on the grounds. If the water supply is from a well, adequate distance should be maintained from possible sources of contamination (e.g., animal-holding areas and manure piles). Maps of the water distribution system should be available for use in identifying potential or actual problems. The use of outdoor hoses should be minimized, and hoses should not be left on the ground. Hoses that are accessible to the public should be labeled "not for human consumption." Operators and managers of animal contact settings in which treated municipal water is not available should consider methods for disinfection of their water supply.

# Conclusion

NASPHV recognizes the benefits of human-animal contact. However, infectious diseases, rabies exposures, injuries, and other human health problems have occurred in animal contact settings secondary to human-animal contact. These incidents have substantial medical, public health, legal, and economic effects. The recommendation to wash hands is the single most important prevention step for reducing the risk for disease transmission. The standardized recommendations in this report should be used by public health officials, veterinarians, venue operators, animal exhibitors, and other persons concerned with disease control to minimize risks associated with animals in public settings.

# Appendix: Disinfectants and Their Properties

All surfaces should be cleaned thoroughly before disinfection. For basic disinfection, a 1:100 dilution of household bleach (i.e., 2.5 tablespoons/gallon) or a 1:1,000 dilution of quaternary ammonium compounds (e.g., Roccal-D® or Zephiran®) may be used.
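As a quick arithmetic check of these dilutions, the following minimal Python sketch converts a 1:N dilution into tablespoons of concentrate per gallon of water, using the conversion 1 US gallon = 256 tablespoons; the constant and function names are illustrative, and the sketch is not part of the NASPHV recommendations.

```python
# Minimal sketch: converting the 1:N disinfectant dilutions above into
# measurable volumes. Conversion used: 1 US gallon = 128 fl oz = 256 tbsp.
# Illustrative only; not part of the NASPHV recommendations.

TBSP_PER_GALLON = 256

def concentrate_tbsp_per_gallon(dilution: int) -> float:
    """Tablespoons of concentrate per gallon of water for a 1:dilution mix."""
    return TBSP_PER_GALLON / dilution

# 1:100 household bleach -> 2.56 tbsp/gallon (the text rounds to 2.5)
print(f"bleach 1:100   -> {concentrate_tbsp_per_gallon(100):.2f} tbsp/gallon")
# 1:1,000 quaternary ammonium -> ~0.26 tbsp (~3/4 teaspoon) per gallon
print(f"quat   1:1,000 -> {concentrate_tbsp_per_gallon(1000):.2f} tbsp/gallon")
```

The computed values (2.56 and 0.26 tablespoons per gallon) are consistent with the 2.5-tablespoon figure given above for bleach.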
This appendix includes instructions for disinfection when a particular organism has been identified. All compounds require a contact time of >10 minutes. Local

# Goal and Objectives

This MMWR provides evidence-based guidelines for reducing risks associated with animals in public settings. The recommendations were developed by the National Association of State Public Health Veterinarians, in consultation with representatives from CDC, the National Assembly of State Animal Health Officials, the U.S. Department of Agriculture, the American Veterinary Medical Association, and the Council of State and Territorial Epidemiologists. The goal of this report is to provide guidelines for public health officials, veterinarians, animal venue operators, animal exhibitors, and others concerned with disease control to minimize risks associated with animals in public settings. Upon completion of this activity, the reader should be able to describe 1) the reasons for the development of the guidelines; 2) the disease risks associated with animals in public settings; 3) populations at high risk; and 4) recommended prevention and control methods to reduce disease risks. To receive continuing education credit, please answer all of the following questions.

1. Which of the following is true about the reasons why these recommendations were developed?
A. Animal contacts are too risky and thus must be regulated against.
B. Only petting zoos are of concern for disease risk.
C. Multiple venues allow public contact with animals and thus pose a disease risk.
D. These recommendations were developed to control zoonoses, which are diseases contracted only at zoos.
E. Following these guidelines will eliminate all disease risk.

Which of the following is true about guidelines for visiting and resident animals in schools?
A. Baby chicks and ducks are an excellent choice for all children in school settings because of their small size.
B. Animals may be allowed in food settings (e.g., a school cafeteria) if they have a health certificate from a veterinarian.
C. Animals should not be allowed to roam or fly free, and areas for contact should be designated.
D. A and B.
E. A, B, and C.
F. None of the above.

If no licensed rabies vaccine exists for an animal species on display in a petting zoo, options to manage human rabies exposure risk include . . .
A. using an animal born from a vaccinated mother if it is too young to vaccinate.
B. penning the animal each night in a cage or pen that will exclude rabies reservoirs (e.g., bats and skunks).
C. maintaining a record of visitors to facilitate visitor follow-up.
D. asking a veterinarian to vaccinate the animal off-label with a rabies vaccine.
E. testing the animal for presence of rabies antibodies.
F. A, B, C, and D.

Which best describes your professional activities?
A. Physician.
Multiple venues encourage or permit the public to come in contact with animals, resulting in millions of human-animal contacts each year. These settings include county or state fairs, petting zoos, animal swap meets, pet stores, zoologic institutions, circuses, carnivals, farm tours, livestock-birthing exhibits, educational exhibits at schools, and wildlife photo opportunities. Although multiple benefits of humananimal contact exist, infectious diseases, rabies exposures, injuries, and other human health problems associated with these settings are of concern. Rabid or potentially rabid animals in public settings can result in extensive public health investigation and action. Infectious disease outbreaks reported during the previous decade have been attributed to multiple organisms, including Escherichia coli O157:H7, Salmonella, Coxiella burnetti, Mycobacterium tuberculosis, and ringworm. Such incidents have substantial medical, public health, legal, and economic effects. This report provides standardized recommendations for public health officials, veterinarians, animal venue operators, animal exhibitors, visitors to animal venues and exhibits, and others concerned with disease-control and with minimizing risks associated with animals in public settings. The recommendation to wash hands is the single most important prevention step for reducing the risk for disease transmission. Other critical recommendations are that venues include transition areas between animal areas and nonanimal areas (where food is sold) and that animals are properly cared for and managed in public settings. In addition, this report recommends educating venue operators, staff, exhibitors, and visitors regarding the risk for disease transmission where animal contact is possible.# Introduction Contact with animals in public settings (e.g., fairs, farm tours, and petting zoos) provides opportunities for entertainment and education concerning animals and animal husbandry. However, inadequate understanding of disease transmission and animal behavior can lead to infectious diseases, rabies exposures, injuries, and other health problems among visitors, especially children, in these settings. Diseases called zoonoses or zoonotic diseases can be transmitted from animals to humans. Of particular concern are situations in which substantial numbers of persons are exposed to zoonotic disease or become ill, necessitating public health investigation and medical follow-up. A 2004 review identified >25 human infectious disease outbreaks during 1990-2000 associated with visitors to animal exhibits (1). The National Association of State Public Health Veterinarians (NASPHV) recognizes the positive benefits of human-animal contact. NASPHV considers that the risks of these contacts can be minimized in properly supervised and managed settings by using appropriately selected animals that receive regular health examinations and preventive care. Although eliminating all risk from animal contacts might not be achievable, this report provides standardized recommendations for minimizing disease and injury. NASPHV recommends that local and state public health, agricultural, environmental, and wildlife agencies, and other organizations use these recommendations to establish their own guidelines or regulations for reducing the risk for disease from human-animal contact in public settings. 
Multiple venues exist where public contact with animals is permitted (e.g., animal displays, petting zoos, animal swap meets, pet stores, zoologic institutions, nature parks, circuses, carnivals, farm tours, livestock-birthing exhibits, county or state fairs, schools, and wildlife photo opportunities). Persons responsible for managing these venues are encouraged to use the information in this report to reduce risk. Guidelines to reduce risks for disease from animals in healthcare facilities and service animals (e.g., guide dogs) have been developed (2)(3)(4). These settings are not specifically addressed in this report, although the general principles and recommendations might be applicable. # Enteric (Intestinal) Diseases Infections with enteric bacteria and parasites pose the highest risk for human disease from animals in public settings (5). Healthy animals harbor multiple human enteric pathogens. Certain organisms have a low infectious dose (6)(7)(8). Because of the popularity of animal venues, a substantial number of persons might be exposed to these organisms. Reports of illness and outbreaks among visitors to fairs, farms, and petting zoos have been documented. Pathogens linked to outbreaks include Escherichia coli O157:H7, Campylobacter, Salmonella, and Cryptosporidium (9)(10)(11)(12)(13)(14)(15)(16)(17). Although these reports usually document cattle, sheep, and goats as sources for infection, poultry (18)(19)(20)(21) and other domestic and wild animals also are potential sources. The primary mode of transmission for enteric pathogens is the fecal-oral route. Because animal fur, hair, skin, and saliva (22) can become contaminated with fecal organisms, transmission might occur when persons pet, touch, or are licked by animals. Transmission has also occurred from fecal contamination of food, including raw milk (23)(24)(25), sticky foods (e.g., cotton candy [26]), water (27)(28)(29), and environmental surfaces (12,18,30,31). Animals infected with enteric pathogens (e.g., E. coli O157:H7, Salmonella, and Campylobacter) frequently exhibit no signs of illness and might shed pathogens intermittently. Therefore, although removing ill animals (especially those with diarrhea) is necessary to protect animal and human health, it is not sufficient: animals that appear to be healthy might still be infectious and contaminate the environment. Certain organisms live months or years in the environment (32)(33)(34)(35)(36). Because of intermittent shedding and limitations of laboratory tests, culturing fecal specimens or other attempts to identify, screen, and remove infected animals might not be effective in eliminating the risk for transmission. Antimicrobial treatment of animals cannot be depended upon to eliminate infection and shedding of enteric pathogens or to prevent reinfection. Multiple factors increase the probability of transmission at animal exhibits. Animals are more likely to shed pathogens because of stress induced by prolonged transportation, confinement, crowding, and increased contact with persons (37)(38)(39)(40)(41)(42)(43). Commingled animals increase the probability that animals shedding organisms will infect other animals. The prevalence of certain enteric pathogens might be higher in young animals (44)(45)(46), which are frequently exhibited by petting zoos. Shedding of E. coli O157:H7 and Salmonella is highest in the summer and fall when substantial numbers of traveling animal exhibits, agricultural fairs, and petting zoos are scheduled (43,47,48). 
The risk for infections or outbreaks is increased by certain human factors and behaviors. These factors include inadequate hand washing, venues that attract substantial numbers of children, a lack of close supervision of children, hand-to-mouth activities (e.g., use of pacifiers, thumb-sucking, smoking, and eating) in proximity to animals, and a lack of awareness of the risk. The layout and maintenance of facilities and animal exhibits can also contribute to the risk for infection. Risk factors include inadequate hand-washing facilities (1), structural deficiencies as-sociated with temporary food-service facilities, inadequate separation between animal exhibits and food-consumption areas (49), and contaminated or inadequately maintained drinking water and sewage/manure disposal systems (27)(28)(29)31). # Lessons from Outbreaks Two E. coli O157:H7 outbreaks in Pennsylvania and Washington State led to CDC establishing recommendations for enteric disease prevention in animal contact settings (http:// www.cdc.gov/foodborneoutbreaks/pulication/recomm_ farm_animal.htm). Findings in both outbreaks were animal contact at farms open to the public and inadequate hand washing (14,16). In the Pennsylvania outbreak, 51 persons (median age: 4 years) became ill within 10 days of visiting a dairy farm, and eight (16%) developed hemolytic uremic syndrome (HUS), a potentially fatal consequence of E. coli O157:H7 infection. The same strain of E. coli O157:H7 was isolated from cattle, case-patients, and the farm environment. In addition to the reported cases, an increased number of diarrhea cases in the community were attributed to visiting the farm. An assessment of the farm environment determined that 1) no areas existed for eating and drinking that were separate from the animal contact areas, and 2) the limited hand-washing facilities were not configured for children (14). Failure to properly wash hands was also a contributing factor in other outbreaks caused by Cryptosporidium (11) and Salmonella (12). The protective effect of hand washing and the persistence of organisms in the environment were demonstrated in an outbreak of Salmonella infections at a Colorado zoo. Sixty-five cases (the majority of them children) were associated with touching a wooden barrier around the Komodo dragon exhibit. Noninfected children were substantially more likely to have washed their hands after visiting the exhibit. Salmonella was isolated from 39 case-patients, a Komodo dragon, and the wooden barrier (12). During 2000-2001 at a Minnesota children's farm day camp, washing hands with soap after touching a calf and washing hands before going home were protective factors in two outbreaks involving multiple enteric organisms (50). A total of 84 illnesses were documented among attendees. Implicated organisms for the human infections were E. coli O157:H7, Cryptosporidium parvum, non-O157 Shiga toxin-producing E. coli (STEC), Salmonella enterica serotype Typhimurium, and Campylobacter jejuni. These organisms, as well as Giardia, were also isolated from the calves. Risk factors for children included caring for an ill calf and getting visible manure on their hands. Enteric pathogens can contaminate and persist in animal housing areas. For example, E. coli O157:H7 can survive in soil for months (31,32,34,51). Prolonged environmental persistence of pathogens was documented in an Ohio outbreak of E. 
coli O157 infections in which 23 persons became ill at a fair after handling sawdust, attending a dance, or eating and drinking in a building where animals had been exhibited during the previous week (31). Fourteen weeks after the fair ended, E. coli O157 was isolated from multiple environmental sources within the building, including sawdust on the floor and dust on the rafters. Forty-two weeks after the fair ended, E. coli O157 was still recovered from sawdust on the floor. Transmission of E. coli O157:H7 from airborne dust was implicated in an Oregon county fair outbreak with 60 cases, the majority of them children (18). Illness was associated with visiting an exhibition hall that housed goats, sheep, pigs, rabbits, and poultry but was not associated with touching animals or their pens, eating, or inadequate hand washing. The same organism was recovered from ill persons and the building. In 2004, an outbreak of E. coli O157:H7 infection was associated with attendance at a goat and sheep petting zoo at the North Carolina State Fair (51). Health officials investigated 112 case-patients, including 15 who had HUS. The same strain of E. coli O157:H7 infecting case-patients was isolated from the animal bedding 10 days after the fair was over. The strain was also isolated from the soil after the animal bedding had been removed. The effect of improper facility design was illustrated by one of the most substantial waterborne outbreaks in the United States (28,29). Approximately 800 suspected cases of E. coli O157:H7 and Campylobacter infection were identified among attendees at a New York county fair where the water and sewage systems had deficiencies.

# Sporadic Infections

Multiple sporadic infections, not identified as part of recognized outbreaks, have been associated with animal environments. A study of sporadic E. coli O157:H7 infections among selected U.S. states and counties determined that case-patients, especially children, were more likely than healthy persons to have visited a farm with cows (52). Additional studies also documented an association between E. coli O157:H7 infection and visiting a farm (53) or living in a rural area (54). Studies of human cryptosporidiosis have documented contact with cattle or visiting farms as risk factors for infection (55)(56)(57). A case-control study identified multiple factors associated with Campylobacter infection, including raw milk consumption and contact with farm animals (58). In other studies, farm residents were at lower risk for infection with Cryptosporidium (55) and E. coli O157:H7 (59) than farm visitors, probably because the residents had acquired immunity as a result of early and frequent exposure to these organisms.

# Additional Health Concerns

Although enteric diseases are the most commonly reported health risks associated with animals in public settings, multiple other health risks are of concern. For example, allergies can be associated with animal dander, scales, fur, feathers, body wastes (urine), and saliva (60)(61)(62). Additional health concerns addressed in this report include injuries, rabies exposures, and other infections.

# Injuries

Injuries associated with animals in public settings include bites, kicks, falls, scratches, stings, crushing of the hands or feet, and being pinned between the animal and a fixed object. These injuries have been associated with multiple species, including big cats (e.g., tigers), monkeys, domestic animals, and zoo animals.
The settings have included public stables, petting zoos, traveling photo opportunities, schools, children's parties, and animal rides.

# Rabies Exposures

Contact with mammals might expose persons to rabies through contamination of mucous membranes, bites, scratches, or other wounds with infected saliva or nervous tissue. Although no human rabies deaths caused by animal contact in public exhibits have been recorded, multiple rabies exposures have occurred, requiring extensive public health investigation and medical follow-up. For example, during the preceding decade, thousands of persons received rabies postexposure prophylaxis (PEP) after being exposed to rabid or potentially rabid animals (including cats, goats, bears, sheep, ponies, and dogs) at 1) a pet store in New Hampshire (63), 2) a county fair in New York State (64), 3) petting zoos in Iowa (65,66) and Texas (J.H. Wright, DVM, Texas Department of Health, personal communication, 2004), and 4) school and rodeo events in Wyoming (1). Substantial public health and medical challenges associated with potential mass rabies exposures include difficulties in identifying and contacting exposed persons, correctly assessing exposure risks, and providing timely medical treatment. Prompt assessment and treatment are critical for this disease, which is nearly always fatal once symptoms develop.

# Other Infections

Multiple bacterial, viral, fungal, and parasitic agents have been associated with animal contact. These organisms are transmitted through various modes. Infections from animal bites are common and frequently require extensive treatment or hospitalization. Bacterial pathogens frequently associated with animal bites include Pasteurella, Staphylococcus, Streptococcus, Capnocytophaga canimorsus, Bartonella henselae (cat-scratch disease), and Streptobacillus moniliformis (rat-bite fever). Certain monkey species (especially macaques) that are kept as pets or used in public exhibitions can be infected with herpes B virus, either asymptomatically or with mild oral lesions. Human exposure through bites or fluids can result in a fatal meningoencephalitis (67,68). Because of difficulties with laboratory testing to confirm infection in monkeys and the high prevalence of herpes B, monkey bites can require intensive public health and medical follow-up. Skin contact with animals in public settings might also result in human infection. Fifteen cases of ringworm infection (club lamb fungus) caused by Trichophyton species and Microsporum gypseum were documented among owners and family members who exhibited lambs in Georgia during a show season (69). Ringworm infection in 23 persons and multiple animal species was traced to a Microsporum canis infection in a hand-reared zoo tiger cub (70). Orf virus infections (contagious ecthyma or sore mouth) have occurred in goats and sheep at a children's petting zoo (71) and in a lamb used for an Easter photo opportunity (M. Eidson, DVM, New York State Department of Health, personal communication, 2003). After handling various species of infected exotic animals, a zoo attendant experienced an extensive papular skin rash from a cowpox-like virus (72). In 2003, multiple cases of monkeypox occurred among persons who had had contact with infected prairie dogs, including at a child care center (73,74). Ecto- and endoparasites also pose concerns when humans and exhibit animals interact. Sarcoptes scabiei is a skin mite that infests humans and animals, including swine, dogs, cats, foxes, cattle, and coyotes (75,76).
Although human infestation from animal sources is usually self-limited, skin irritation and itching might occur for multiple days and be difficult to diagnose (75)(76)(77). Animal fleas also bite humans, increasing the risk for infection or allergic reaction; in addition, fleas are the intermediate host for a tapeworm species that can infect children. Multiple other animal helminths might infect humans through fecal-oral contact or through contact with animals or contaminated soil (78,79). Parasite control through veterinary care and proper husbandry, coupled with hand washing, reduces the risks associated with ecto- and endoparasites (80).

Tuberculosis (TB) is another disease of concern in certain animal settings. Twelve circus elephant handlers at an exotic animal farm in Illinois were infected with Mycobacterium tuberculosis, and one handler had signs consistent with active disease after three elephants died of TB. Medical history and testing of the handlers indicated that the elephants had been a probable source of exposure for the majority of the human infections (81). At a zoo in Louisiana, seven animal handlers who were previously negative for TB tested positive after a Mycobacterium bovis outbreak in rhinoceroses and monkeys (82). Because of concerns regarding the risk for exposure to the public, the U.S. Department of Agriculture (USDA) developed guidelines regarding removal of infected animals from public contact (83).

Zoonotic pathogens might also be transmitted by direct or indirect contact with reproductive fluids, aborted fetuses, or newborns from infected dams. Live-birthing exhibits, usually involving livestock (e.g., cattle, pigs, goats, or sheep), are popular at agricultural fairs. Although the public usually does not have direct contact with animals during birthing, newborns and their dams are frequently available for petting and observation afterward. Q fever (Coxiella burnetii), leptospirosis, listeriosis, brucellosis, and chlamydiosis are serious zoonoses that can be associated with contact with reproductive materials (84). C. burnetii is a rickettsial organism that most frequently infects cattle, sheep, and goats. The organism can cause abortion in animals, but more frequently the infection is asymptomatic. During parturition, infected animals shed substantial numbers of organisms that might become aerosolized. The majority of persons exposed to C. burnetii develop an asymptomatic infection, but clinical illness can range from an acute influenza-like illness to life-threatening endocarditis. A Q fever outbreak involving 95 confirmed case-patients and 41 hospitalizations was linked to goats and sheep giving birth at petting zoos located in indoor shopping malls, indicating that indoor birthing exhibits might pose an increased risk for Q fever transmission (85).

Chlamydophila psittaci infections cause respiratory disease (commonly called psittacosis) and are usually acquired from psittacine birds (86). For example, an outbreak of C. psittaci pneumonia occurred among staff at a zoo in Copenhagen, Denmark (87). On limited occasions, chlamydial infections acquired from sheep, goats, and birds have resulted in reproductive problems in humans (86,88,89).

# Recommendations

Guidelines and recommendations from multiple organizations contributed to the recommendations in this report. A limited number of states have specific guidelines or legislation for petting zoo exhibitors and other animal exhibition venues (1,16,90,91,92).
However, in the United Kingdom, recommendations to prevent enteric infections at animal exhibitions and agricultural fairs were developed in 1989 (93), 1995 (94), and 2000 (95). In the United States, the American Zoo and Aquarium Association has accreditation standards for reducing the risks of public animal contact in zoologic parks (96). Under the Animal Welfare Act, USDA Animal Care licenses and inspects certain animal exhibits for humane treatment of animals, but the act is not intended to protect human health. No federal laws address the risk for transmission of pathogens at venues where the public has contact with animals. However, in 2001, CDC issued guidelines to reduce the risk for enteric pathogens (16). CDC has also issued recommendations for preventing transmission of Salmonella from reptiles to humans (97). The Association for Professionals in Infection Control and Epidemiology (APIC) developed guidelines to address risks associated with the use of service animals in health-care settings (2).

Opportunities for animal contact with the public occur in various settings. Recommendations provided in this report should be tailored to specific settings and incorporated into guidelines and regulations developed at the state or local level. This report should be disseminated to persons who own or manage animals in public settings. State and local human and animal health agencies should make educational materials available to venue operators and other interested persons (90,91,98). Incidents of disease transmission or injury should be promptly reported to public health authorities and investigated.

# Educational Responsibilities of Venue Operators

Education is essential to reduce risks associated with animal contact in public settings. Animal owners, exhibit operators, and their staff should be educated to make appropriate management decisions. In addition, the public should be educated so that they can weigh the benefits and risks of animal contact and take appropriate measures to reduce risks. Recommendations include the following:

• Operator education. Venue operators should familiarize themselves with the basic risk-reduction recommendations contained in this report. The responsibility of the operator is to apply these recommendations to specific settings and provide basic education to staff and visitors (e.g., using signage, stickers, handouts, or verbal information).

# General Recommendations for Managing Public and Animal Contact

The public's contact with animals should occur in settings where controls are in place to reduce the potential for injuries or disease and to increase the probability that exposures will be reported, documented, and handled appropriately. The design of facilities or contact settings should minimize the risk for exposure and facilitate hand washing (Box 1). Certain jurisdictions might choose to establish more restrictive recommendations in areas where animal contact is specifically encouraged (e.g., petting zoos). Requirements for the design of facilities or contact settings might include double barriers to prevent contact with animals or contaminated surfaces except in specified interaction areas. Manure disposal and wastewater runoff should be confined to areas where the risk for pedestrian exposure is eliminated or reduced. Control methods should focus on facility design and management.
Recommendations regarding the management of animals in public settings should address animal areas (where animal contact is possible or encouraged), transition areas, and nonanimal areas (areas in which animals are not permitted, with the exception of service animals) (Figure). Specific guidelines might be necessary for certain settings (e.g., schools [Box 2]). Recommendations for cleaning procedures should be tailored to the specific situation (Appendix).

# Animal Areas

Recommendations should be applied both to settings in which animal contact is possible (e.g., county fairs) and settings in which direct animal contact is encouraged (e.g., petting zoos). However, in settings where direct animal contact is encouraged, additional precautions should be taken to reduce the risk for injuries and disease transmission. For areas where animal contact is possible, the design of the entry and exit points for animal contact areas should be planned to facilitate proper visitor flow through transition areas (Figure). These transition areas should include educational information and hand-washing facilities. Fences, gates, or other types of barriers can restrict uncontrolled access to animals and animal contact areas and ensure that visitors enter and exit through transition areas. Animal feed and water should not be accessible to the public. In addition, in buildings where animals live, adequate ventilation is essential for both animals (99) and humans.

Food and beverages. No food or beverages should be allowed in animal areas. In addition, smoking, carrying toys, and use of pacifiers, spill-proof cups ("sippy cups"), and baby bottles should not be permitted in animal areas.

Cleaning procedures. Manure and soiled animal bedding should be removed promptly. Animal waste and specific tools for waste removal (e.g., shovels and pitchforks) should be confined to designated areas restricted from public access. Manure and soiled bedding should not be transported or removed through nonanimal areas or transition areas used by visitors. If this is unavoidable, precautions should be taken to avoid spillage and aerosolization. During events where animal contact is encouraged, periodic disinfection of the venue might reduce the risk for disease transmission during the event.

Supervision of children. Children should be closely supervised during contact with animals to discourage contact with manure and soiled bedding. Hand-to-mouth contact (e.g., thumb-sucking) should also be discouraged. Appropriate hand washing should be required. Additional recommendations for groups at high risk, including children aged <5 years, are outlined in this report (see Additional Recommendations).

Staff. Trained staff should be present in areas where animal contact is permitted to encourage appropriate human-animal interactions, reduce risk for exposure (e.g., by promptly cleaning up wastes), and process reports of injuries and exposures.

Feeding animals. If feeding animals is permitted, only food sold by the venue for that purpose should be allowed. Food sold for animal consumption should not be eaten by humans and should not be provided in containers that can be eaten by persons (e.g., ice cream cones). This policy will reduce the risk for animal bites and the probability of children eating food that has come into contact with animals.

Use of animal areas for public (nonanimal) activities. Zoonotic pathogens can contaminate the environment for substantial periods (31). If animal areas need to be used for public events (e.g., weddings and dances), these areas should be cleaned and disinfected, particularly if food and beverages are served. Materials with smooth, impervious surfaces (e.g., steel, plastic, and sealed concrete) are easier to clean than other materials (e.g., wood or dirt floors). Removing organic material (bedding, feed, and manure) before using disinfectants is important. A list of disinfectants is included in this report (Appendix).

# BOX 1. Hand-washing recommendations to reduce disease transmission from animals in public settings

Hand washing is the single most important prevention step for reducing disease transmission.

# How to Wash Hands
1. Wet hands with running water; place soap in palms; rub together to make a lather; scrub hands vigorously for 20 seconds; rinse soap off hands; then dry hands with a disposable towel.
2. If possible, turn off the faucet by using a disposable towel.
3. Assist young children with washing their hands.

# Hand-Washing Facilities or Stations
• Hand-washing facilities should be accessible and sufficient for the maximum anticipated attendance, and configured for use by children, adults, and persons with disabilities.
• Hands should always be washed after leaving animal areas and before eating or drinking.
• Hand-washing stations should be conveniently located between animal and nonanimal areas and in food concession areas.
• Maintenance should include routine cleaning and restocking of towels and soap.
• Running water should be of sufficient volume and pressure to remove soil from hands. Volume and pressure might be substantially reduced if the water supply is furnished from a holding tank; therefore, a permanent pressured water supply is preferable.
• The hand-washing unit should be designed so that both hands are free for hand washing.
• Hot water is preferable, but if hand-washing stations are supplied with only cold water, a soap that emulsifies easily in cold water should be provided.
• Communal basins, in which water is used by more than one person, do not constitute adequate hand-washing facilities.

# Hand-Washing Agents
• Liquid soap dispensed by a hand or foot pump is recommended.
• Alcohol-based hand sanitizers are effective against multiple common disease agents (e.g., Escherichia coli, Salmonella, and Campylobacter) when soap and water are not available. However, they are ineffective against certain organisms (e.g., bacterial spores, Cryptosporidium, and certain viruses).
• Hand sanitizers are less effective if hands are visibly soiled; therefore, visible contamination and dirt should be removed to the extent possible before using them.

# Hand-Washing Signs
At venues where human-animal contact occurs, signs regarding proper hand-washing practices are critical to reduce disease transmission.
• Signs that are reminders to wash hands should be posted at exits from animal areas.

# Example of a Hand-Washing Sign: Directions for Washing Hands
When:
• After going to the toilet
• After exiting animal areas
• Before eating
• Before preparing foods

# Transition Areas Between Animal and Nonanimal Areas

Providing transition areas for visitors to pass through when entering and exiting animal areas is critical. The transition areas between animal and nonanimal areas should be designated as clearly as possible, even if they need to be conceptual rather than physical (Figure).
In these areas, information should be provided regarding 1) prevention of infection and injury and 2) the location of hand-washing facilities, with instructions for visitors to wash their hands upon exiting.

• Signs informing visitors that they are entering an animal area should be posted at the entrance transition areas. These signs should also instruct visitors not to eat, drink, or place their hands in their mouth while in the animal area. Visitors should be discouraged from taking strollers, baby bottles, pacifiers, food, and beverages into areas where animal contact is encouraged or where contact with animal manure or bedding can occur. Visitor traffic should be controlled to avoid overcrowding the animal area.

• Exit transition areas should be marked with signs instructing the public to wash their hands. Hand-washing stations should be available and accessible to all visitors, including children and persons with disabilities (Box 1).

# Nonanimal Areas

Nonanimal areas are areas in which animals are not permitted, with the exception of service animals.

• Food and beverages should be prepared, served, and consumed only in the designated nonanimal areas. Hand-washing facilities should be available where food or beverages are served (Box 1).

• If animals or animal products (e.g., animal pelts, animal waste, and owl pellets) (100) are used for educational purposes in nonanimal areas (Box 2), the nonanimal areas should be cleaned (Appendix). Animals and animal products should not be brought into school cafeterias and other food-consumption areas.

# FIGURE. Examples of designs for animal contact settings, including clearly designated animal areas,§ nonanimal areas,* and transition areas with hand-washing stations and signage†

* Nonanimal areas -Areas in which animals are not permitted, except for service animals (e.g., guide dogs). Food and beverages should be prepared, served, and consumed only in the designated nonanimal areas.
† Signs should be in different formats depending on the audience (e.g., children and persons who do not speak English). Nonwritten information (e.g., verbal instructions and videos) can also be used.
§ Animal area -Areas in which animal contact is possible (e.g., county fairs) or is encouraged (e.g., petting zoos).

# Animal-Specific Guidelines
• Fish -Use disposable gloves when cleaning aquariums, and do not dispose of aquarium water in sinks used for food preparation or for obtaining drinking water.

# Animals Not Recommended in School Settings
• Wild or exotic animals (e.g., lions, tigers, ocelots, and bears).
• Nonhuman primates (e.g., monkeys and apes).
• Mammals at higher risk for transmitting rabies (e.g., bats, raccoons, skunks, foxes, and coyotes).
• Wolf-dog hybrids.
• Aggressive or unpredictable animals, wild or domestic.
• Stray animals with unknown health and vaccination history.
• Venomous or toxin-producing spiders, insects, reptiles, and amphibians.

# Animal Care and Management

The risk for disease or injuries from animal contacts can be reduced by carefully managing the specific animals used for such contacts. These recommendations should be considered for management of animals in contact with the public.

• Animal care. Animals should be monitored daily by their owners or caretakers for signs of illness, and they should receive appropriate veterinary care. Ill animals and animals from herds with a recent history of abortion or diarrhea should not be exhibited.
Animals should be housed to minimize stress and overcrowding, which can increase shedding of microorganisms. Options to reduce the burden of enteric pathogens need to be evaluated, particularly for animals that are at higher risk and that will be used in venues where animal contact is encouraged.

# Additional Recommendations

• Populations at high risk. Groups at high risk for serious infection include persons with waning immunity (e.g., older adults); children aged <5 years; and persons who are cognitively impaired, pregnant, or immunocompromised (e.g., persons with human immunodeficiency virus/acquired immunodeficiency syndrome, persons without a functioning spleen, or persons on immunosuppressive therapy). Persons at high risk should take heightened precautions at any animal exhibit. In addition to thorough and frequent hand washing, heightened precautions might include avoiding contact with animals and their environment (e.g., pens, bedding, and manure). Animals of particular concern for transmitting enteric diseases include young ruminants, young poultry, reptiles, amphibians, and ill animals. For young children, risk for exposure might be reduced if they are closely supervised by adults, carried by adults in animal areas, or have animal contact only over a barrier. These measures discourage animals from jumping on or nuzzling children and minimize contact with feces and soiled bedding.

• Consumption of unpasteurized products. Unpasteurized dairy products (e.g., milk, cheese, and yogurt) and unpasteurized apple cider or juices should not be consumed.

• Drinking water. Local public health authorities should inspect drinking water systems before use. Only potable water should be used for human consumption. Back-flow prevention devices should be installed between outlets in livestock areas and water lines supplying other uses on the grounds. If the water supply is from a well, adequate distance should be maintained from possible sources of contamination (e.g., animal-holding areas and manure piles). Maps of the water distribution system should be available for use in identifying potential or actual problems. The use of outdoor hoses should be minimized, and hoses should not be left on the ground. Hoses that are accessible to the public should be labeled "not for human consumption." Operators and managers of animal contact settings in which treated municipal water is not available should consider methods for disinfecting their water supply.

# Conclusion

NASPHV recognizes the benefits of human-animal contact. However, infectious diseases, rabies exposures, injuries, and other human health problems have occurred in animal contact settings. These incidents have substantial medical, public health, legal, and economic effects. Hand washing is the single most important prevention step for reducing the risk for disease transmission. The standardized recommendations in this report should be used by public health officials, veterinarians, venue operators, animal exhibitors, and other persons concerned with disease control to minimize risks associated with animals in public settings.

# Appendix: Disinfectants and Their Properties

All surfaces should be cleaned thoroughly before disinfection. For basic disinfection, a 1:100 dilution of household bleach (i.e., 2.5 tablespoons/gallon) or a 1:1,000 dilution of quaternary ammonium compounds (e.g., Roccal-D® or Zephiran®) may be used.
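As a check on the stated equivalences, the following worked conversion (added here for clarity; it is not part of the original appendix) assumes 1 US gallon = 256 tablespoons ≈ 3,785 mL:

$$
\frac{2.5\ \text{tbsp bleach}}{1\ \text{gallon water}} \;=\; \frac{2.5\ \text{tbsp}}{256\ \text{tbsp}} \;\approx\; \frac{1}{102} \;\approx\; 1{:}100
$$

By the same arithmetic, a 1:1,000 dilution corresponds to roughly 3,785 mL / 1,000 ≈ 3.8 mL (about 3/4 teaspoon) of quaternary ammonium concentrate per gallon of water.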
This appendix includes instructions for disinfection when a particular organism has been identified. All compounds require a contact time of >10 minutes.

# Goal and Objectives

This MMWR provides evidence-based guidelines for reducing risks associated with animals in public settings. The recommendations were developed by the National Association of State Public Health Veterinarians, in consultation with representatives from CDC, the National Assembly of State Animal Health Officials, the U.S. Department of Agriculture, the American Veterinary Medical Association, and the Council of State and Territorial Epidemiologists. The goal of this report is to provide guidelines for public health officials, veterinarians, animal venue operators, animal exhibitors, and others concerned with disease control to minimize risks associated with animals in public settings. Upon completion of this activity, the reader should be able to describe 1) the reasons for the development of the guidelines; 2) the disease risks associated with animals in public settings; 3) populations at high risk; and 4) recommended prevention and control methods to reduce disease risks.
In July 1975, a private physician submitted a blood sample to the Center for Disease Control (CDC) to be analyzed for kepone, a chlorinated hydrocarbon pesticide. The sample had been obtained from a kepone production worker who suffered from weight loss, nystagmus, and tremors. CDC notified the State epidemiologist that high levels of kepone were present in the blood sample, and he initiated an epidemiologic investigation which revealed other employees suffering from similar symptoms. It was evident to the State official after visiting the plant that the employees had been exposed to kepone at extremely high concentrations through inhalation, ingestion, and skin absorption. He recommended that the plant be closed, and company management complied. Of the 113 current and former employees of this kepone-manufacturing plant examined, more than half exhibited clinical symptoms of kepone poisoning. Medical histories of tremors (called "kepone shakes" by employees), visual disturbances, loss of weight, nervousness, insomnia, pain in the chest and abdomen and, in some cases, infertility and loss of libido were reported. The employees also complained of vertigo and lack of muscular coordination. The intervals between exposure and onset of the signs and symptoms varied between patients but appeared to be dose-related. NIOSH responded to a request for technical assistance from the Regional Director of the Occupational Safety and Health Administration (OSHA) by sending an industrial hygienist from the regional office to assist in the investigation of the working conditions in the plant. Information on kepone was collected from several sources, including one of the principal distributors of the material. No data were available regarding the effect of kepone on man or the mechanism of metabolism and excretion. NIOSH has concluded that kepone produced physical injury to workers exposed to it but has not determined what the dose-response relationship is because of the lack of environmental exposure information. NIOSH has identified fewer than 50 establishments processing or formulating pesticides using kepone and has estimated that 600 workers are potentially exposed to kepone. (NIOSH is unaware of any plant in the US which is currently manufacturing kepone; the only known plant manufacturing it was closed in July 1975.) NIOSH has recently received a report on a carcinogenesis bioassay of technical-grade kepone which was conducted by the National Cancer Institute using Osborne-Mendel rats and B6C3F1 mice. Kepone was administered in the diet at two tolerated dosages. In addition to the clinical signs of toxicity, which were seen in both species, a significant increase (P < 0.05) of hepatocellular carcinoma was found in rats given large dosages of kepone and in mice at both dosages. Rats and mice also had extensive hyperplasia of the liver. In view of these findings, NIOSH must assume that kepone is a potential human carcinogen. Since we are unable to establish a safe exposure level, NIOSH recommends that the workplace environmental level for kepone be limited to 1 µg/cu m as a time-weighted average concentration for up to a 10-hour workday, 40-hour workweek, as an emergency standard.

# I. RECOMMENDATIONS FOR A KEPONE STANDARD

The National Institute for Occupational Safety and Health (NIOSH) recommends that worker exposure to kepone in the workplace be controlled by adherence to the following sections.
The standard is designed to protect the health and safety of workers for up to a 10-hour workday, 40-hour workweek over a working lifetime. Compliance with all sections of the standard should prevent the overt adverse effects currently reported from exposure to kepone in the workplace and materially reduce the risk of cancer from occupational exposure to kepone. The standard is measurable by techniques that are valid, reproducible, and available. Sufficient technology exists to permit compliance with the recommended standard. The standard will be subject to review and revision as necessary. "Occupational exposure to kepone" is defined as exposure to airborne kepone at concentrations greater than one-half of the workplace environmental limit for kepone. Exposure to kepone at concentrations less than one-half of the workplace environmental limit will not require adherence to the following sections, except for 3, 4(a), 5, 6(b,c,e,f), and 7.

# Section 1 -Environmental (Workplace Air)

Concentration of Kepone. Kepone shall be controlled in the workplace so that the airborne concentration of kepone, sampled and analyzed according to the procedures in Appendices I and II, is not greater than 1 µg/cu m of breathing zone air. Procedures for sampling and analysis of kepone in air shall be as provided in Appendices I and II, or by any method shown to be equivalent in precision, accuracy, and sensitivity to the methods specified.

# Section 2 -Medical

Employers shall make medical surveillance available to all workers occupationally exposed to kepone, including personnel periodically exposed during routine maintenance or emergency operations. Periodic examinations shall be made available at least on an annual basis.

(a) Preplacement and periodic medical examinations shall include:

(1) A comprehensive or interim work history.
(2) A comprehensive medical history to include, but not be limited to, information on conditions which may preclude further exposure to kepone.
(3) A complete physical examination with particular attention to any neurologic findings such as tremor, opsoclonia (jumpy, irregular eye movements in all directions), short-memory difficulty, exaggerated startle reflex, and signs of muscular incoordination, and to atrophy of testicles, splenomegaly, hepatomegaly, and jaundice.
(4) An evaluation of the worker's ability to use negative or positive pressure respirators.

(b) For workers with occupational exposure to kepone, preplacement and periodic medical examinations shall include liver function studies, as considered necessary by the responsible physician.

(c) Medical examinations shall be made available to all workers with signs or symptoms of skin irritation likely to have been the result of exposure to kepone.

(d) If clinical evidence of adverse effects due to kepone is developed from these medical examinations, the worker shall be kept under a physician's care until the worker has completely recovered or maximal improvement has occurred.

(e) Initial examinations for presently employed workers shall be offered within 6 months of the promulgation of a standard incorporating these recommendations.

(f) Pertinent medical records shall be available to the designated medical representatives of the Secretary of Health, Education, and Welfare, of the Secretary of Labor, of the employee or former employee, and of the employer.
(g) Medical records shall be maintained for all employees with occupational exposure to kepone and for maintenance personnel with periodic exposure. All pertinent medical records with supporting documents shall be retained at least 30 years after the individual's employment is terminated.

# Section 3 -Labeling and Posting

(a) Labels on kepone and kepone-containing products shall carry a warning to the following effect: Avoid contact with skin and eyes. Avoid breathing dust or solution spray. In case of contact, immediately flush eyes with plenty of water for at least 15 minutes. Call a physician. Flush skin with water. Wash clothing before reuse. Use fresh clothing daily. Take showers after work, using plenty of soap. Change to clean street clothing before leaving place of employment.

(b) In areas where there is occupational exposure to kepone, the following warning sign shall be posted in readily visible locations, particularly at the entrances to the area:

DANGER!
EXTREME HEALTH HAZARD
CANCER-SUSPECT AGENT
NERVE-DESTROYING AGENT USED IN THIS AREA
UNAUTHORIZED PERSONS KEEP OUT

This sign shall be printed both in English and in the predominant language of non-English-speaking workers, if any, unless employers use other equally effective means to ensure that these workers know the hazards associated with kepone and the locations of areas in which there is occupational exposure to kepone. Employers shall ensure that illiterate workers also know these hazards and the locations of these areas.

# Section 4 -Personal Protective Equipment and Protective Clothing

(a) Protective Clothing

(1) Coveralls or other full-body protective clothing shall be worn in areas where there is occupational exposure to kepone. Protective clothing shall be changed at least daily at the end of the shift and more frequently if it should become grossly contaminated.

(2) Impervious gloves, aprons, and footwear shall be worn at operations where solutions of kepone may contact the skin. Protective gloves shall be worn at operations where dry kepone or materials containing kepone are handled and may contact the skin.

(3) Eye protective devices shall be provided by the employer and used by the employees where contact of kepone with eyes is likely. Selection, use, and maintenance of eye protective equipment shall be in accordance with the provisions of the American National Standard Practice for Occupational and Educational Eye and Face Protection, ANSI Z87.1-1968. Unless eye protection is afforded by a respirator hood or facepiece, protective goggles or a face shield shall be worn at operations where there is danger of contact of the eyes with dry or wet materials containing kepone because of spills, splashes, or excessive dust or mists in the air.

(4) The employer shall ensure that all personal protective devices are inspected regularly and maintained in clean and satisfactory working condition.

(5) Work clothing may not be taken home by employees. The employer shall provide for maintenance and laundering of protective clothing.

(6) The employer shall ensure that precautions necessary to protect laundry personnel are taken while soiled protective clothing is being laundered.

(7) The employer shall ensure that kepone is not discharged into municipal waste treatment systems or the community air.

(b) Respiratory Protection from Kepone

Engineering controls shall be used wherever feasible to maintain airborne kepone concentrations at or below that recommended in Section 1 above.
Compliance with the environmental exposure limit by the use of respirators is allowed only when airborne kepone concentrations are in excess of the workplace environmental limit because required engineering controls are being installed or tested, when nonroutine maintenance or repair is being accomplished, or during emergencies. When a respirator is thus permitted, it shall be selected and used in accordance with the following requirements:

(1) For the purpose of determining if it is necessary for workers to wear respirators, the employer shall measure the airborne concentration of kepone in the workplace initially and thereafter whenever process, worksite, climate, or control changes occur which are likely to increase the airborne concentration of kepone.

(2) The employer shall ensure that no worker is exposed to kepone above the workplace environmental limit because of improper respirator selection, fit, use, or maintenance.

(3) A respiratory protection program meeting the requirements of 29 CFR 1910.134 and 30 CFR 11, which incorporates the American National Standard Practices for Respiratory Protection Z88.2-1969, shall be established and enforced by the employer.

(4) The employer shall provide respirators in accordance with Table 1-1 and shall ensure that the employee uses the respirator provided.

(5) Respirators described in Table 1-1 shall be those approved under the provisions of 29 CFR 1910.134 and 30 CFR 11.

(6) The employer shall ensure that respirators are adequately cleaned, and that employees are instructed on the use of respirators assigned to them, their location in the workplace, and on how to test for leakage.

(7) Where an emergency may develop which could result in employee injury from kepone, the employer shall provide an escape device as listed in Table 1-1.

# Section 5 -Informing Employees of Hazards

At the beginning of employment or assignment for work in a kepone area, employees with occupational exposure to kepone shall be informed of the hazards, relevant signs and symptoms of overexposure, appropriate emergency procedures, and proper conditions and precautions for the safe use of kepone. Instruction shall include, as a minimum, all information in Appendix III which is applicable to the specific kepone product or material to which there is exposure. This information shall be posted in the work area and kept on file, readily accessible to the worker at all places of employment where kepone is involved in unit processes and operations. A continuing educational program shall be instituted to ensure that all workers have current knowledge of job hazards, proper maintenance procedures, and cleanup methods, and that they know how to use respiratory protective equipment and protective clothing correctly. Information as specified in Appendix III shall be recorded on the Material Safety Data Sheet or a similar form approved by the Occupational Safety and Health Administration, US Department of Labor.

# NOTICE

Appendices I, II, and III will be supplied when they become available.

# Section 6 -Work Practices

(a) Control of Airborne Contamination

Emission of airborne particulates (dust, mist, spray, etc) of kepone shall be controlled at the sources of dispersion by means of effective and properly maintained methods such as fully enclosed operations and local exhaust ventilation. Other methods may be used if they are shown to effectively control airborne concentrations of kepone within the limit of the recommended standard.
Air discharged from facilities in which kepone is manufactured, processed, stored, or used may not contain kepone at concentrations greater than 5% of the workplace environmental limit in Section 1 or at concentrations greater than those permitted by other ordinances, statutes, and regulations (see the example calculation at the end of this section).

(b) Control of Contact with Skin and Eyes

(1) Employees working in areas where contact of skin or eyes with kepone, dry or wet, is possible shall wear full-body protective clothing, including neck and head coverings, and gloves, in accordance with Section 4(a).

(2) Clean protective clothing shall be put on before each work shift.

(3) If, during the shift, the clothing becomes wetted with a solution, slurry, or paste of a kepone material, or grossly contaminated with a dry form of such material, it shall be removed promptly and placed in a special container for garments for decontamination or disposal. The employee shall wash the contaminated skin area thoroughly with soap and a copious amount of water. A complete shower is preferred after anything but limited, minor contact. Clean protective clothing shall then be put on before resuming work. When working directly with kepone, with unsealed containers of kepone, or with kepone in other than fully enclosed operations, protective devices and clothing shall be removed and the arms, hands, and face thoroughly washed after working with kepone, and at 30-minute intervals when working with kepone for extended periods of time.

(4) Small areas of skin (principally the hands) contaminated by contact with kepone shall be washed immediately and thoroughly with an abundance of water. Water shall be easily accessible in the work areas through low-pressure, free-running hose lines or showers.

(5) If kepone comes into contact with the eyes, the eyes should be flushed with a large volume of low-pressure flowing water for at least 15 minutes. Medical attention shall be obtained without delay, but not at the expense of thoroughly flushing the eyes.

(c) Procedures for emergencies, including firefighting, shall be established to meet foreseeable events. Necessary emergency equipment, including appropriate respiratory protective devices, shall be kept in readily accessible locations. Only self-contained breathing apparatus with positive pressure in the facepiece shall be used in firefighting. Appropriate respirators shall be available for use during evacuation.

(d) Special supervision and care shall be exercised to ensure that the exposures of repair and maintenance personnel to kepone are within the limit prescribed by this standard.

(e) Prompt cleaning of spills of kepone:

(1) No dry sweeping shall be performed. Wet methods or dry vacuuming shall be used as appropriate.

(2) Wet spills and flushing of wet or dry spills shall be channeled for appropriate treatment or collection for disposal. They may not be channeled directly into the municipal sanitary sewer system.

(f) General requirements:

(1) Good housekeeping practices shall be observed to prevent or minimize contamination of areas and equipment and to prevent build-up of such contamination.

(2) Good personal hygiene practices shall be encouraged.

(3) Equipment shall be kept in good repair and free of leaks.

(4) Containers of dry kepone shall be kept covered insofar as is practical.
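For concreteness, the discharge ceiling in Section 6(a) — and the identical 5% figure applied to eating facilities in Section 7(b) below — can be stated numerically (a worked value added for clarity, not text of the standard):

$$
0.05 \times 1\ \mu\text{g/cu m} \;=\; 0.05\ \mu\text{g/cu m}
$$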
# Section 7 -Sanitation

(a) Washing Facilities

Emergency showers and eye-flushing fountains with adequate pressure of cool water shall be provided and be quickly accessible in areas where contact of skin or eyes with kepone may occur. This equipment shall be frequently inspected and maintained in good working condition. Showers and washbasins shall be provided in the employees' locker areas. Employers shall ensure that employees exposed to kepone during their work shift wash before eating or smoking breaks taken during the work shift.

(b) Food Facilities

Food storage and preparation as well as eating shall be prohibited in areas where kepone is handled, processed, or stored. Eating facilities provided for employees shall be located in areas in which the airborne concentrations of kepone are not greater than 5% of the workplace environmental limit in Section 1. Surfaces in these areas shall be kept free of kepone. Employers shall ensure that before employees enter premises reserved for eating, food storage, or food preparation, they remove protective clothing. Washing facilities should be accessible nearby.

(c) Employees may not smoke in areas where kepone is handled, processed, or stored.

(d) Clothing and Locker Room Facilities

Locker room facilities shall be provided in an area without occupational exposure to kepone for employees required to change clothing before and after work. The facilities shall provide for the storing of street clothing and clean work clothing separately from soiled work clothing. Showers and washbasins should be located in the locker area to encourage good personal hygiene. Covered containers should be provided for work clothing discarded at the end of the shift or after a contamination incident. The clothing shall be held in these containers until removed for decontamination or disposal.

# Section 8 -Monitoring and Recordkeeping Requirements

Workers are not considered to have occupational exposure to kepone if, on the basis of a professional industrial hygiene survey, the airborne concentration of kepone in an area where kepone is handled, processed, or stored is sufficiently low that a sampling volume equal to or greater than 1.0 cu m is necessary in order to collect 0.5 µg of kepone. The minimum quantity of kepone which must be collected in order to determine with reliability the presence of kepone in a sample is 0.5 µg. In order to determine that kepone is present in workplace air only at concentrations equal to or less than 0.5 µg/cu m, it is necessary that each sample of airborne kepone that is analyzed for the purpose of making this determination be the residue from the filtration of at least 1.0 cu m of workplace air. All samples of airborne kepone shall be analyzed by the chemical analytical method in Appendix II. Records of these surveys, including the basis for concluding that there is no occupational exposure to kepone, shall be maintained until a new survey is conducted. In workplaces where kepone is handled or processed, surveys shall be repeated annually and when any process change indicates a need for reevaluation. The requirements set forth below apply to areas in which there is occupational exposure to kepone. Employers shall maintain records of workplace environmental exposures to kepone based on the following sampling, analytical, and recording schedules:

(a) In all monitoring, samples representative of the exposure in the breathing zone of employees shall be collected by personal samplers.
(b) An adequate number of samples shall be taken in order to permit construction of TWA exposures for every operation or process; a worked example follows this section. Except as otherwise determined by a professional industrial hygienist, the minimum number of representative TWA determinations for an operation or process shall be based on the number of workers exposed, as provided in Table 1-2.

(c) The first determination of workers' exposures to airborne kepone shall be completed within 6 months after the promulgation of a standard incorporating these recommendations.

(d) A reevaluation of the exposures of workers to airborne kepone shall be made within 30 days after installation of a new process or process changes.

(e) Samples of airborne kepone shall be collected and analyzed at least every 2 months for those work areas with occupational exposure to kepone.

(f) A reevaluation of workers' exposures to airborne kepone shall be repeated at 1-week intervals when the airborne concentration has been found to exceed the recommended workplace environmental limit. In such cases, suitable controls shall be instituted and monitoring shall continue at 1-week intervals until 3 consecutive surveys indicate the adequacy of controls.

(g) Records of all sampling and analysis of airborne kepone and of medical examinations shall be maintained for at least 30 years after the individual's employment is terminated. Records shall indicate the details of 1) the type of personal protective devices, if any, in use at the time of sampling, and 2) the methods of sampling and analysis used. Each employee shall be able to obtain information on his own exposure. If an employer who has or has had employees with occupational exposure to kepone ceases business without a successor, he shall forward their records by registered mail to the Director, National Institute for Occupational Safety and Health.

(h) Regulated areas shall be established where:

(1) Kepone is manufactured, reacted, mixed with other substances, repackaged, stored, handled, or used; and

(2) Airborne concentrations of kepone are in excess of the workplace environmental limit in Section 1.

(i) Access to the regulated areas designated by Section 8(h) shall be limited to authorized persons. A daily roster shall be made of persons authorized to enter; these rosters shall be maintained for 30 years.

(j) Employers shall ensure that before employees leave a regulated area they remove and leave protective clothing at the point of exit.
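The monitoring arithmetic in Sections 1 and 8 can be made concrete with a short calculation. The sketch below (Python; added for illustration and not part of the standard — the shift durations, concentrations, and function names are invented) computes a time-weighted average from personal-sampler readings, compares it with the 1 µg/cu m limit, and derives the minimum air volume needed to demonstrate that kepone is present at no more than 0.5 µg/cu m.

```python
# Illustrative only: values and names below are hypothetical, not from the standard.
TWA_LIMIT_UG_PER_M3 = 1.0   # Section 1: 1 ug/cu m as a 10-hour TWA
MIN_DETECTABLE_UG = 0.5     # Section 8: smallest reliably measurable mass of kepone

def twa(samples):
    """Time-weighted average: TWA = sum(C_i * t_i) / sum(t_i).

    samples: list of (concentration in ug/cu m, duration in hours) tuples.
    """
    total_time = sum(t for _, t in samples)
    return sum(c * t for c, t in samples) / total_time

def min_sample_volume_m3(target_conc_ug_per_m3):
    """Air volume (cu m) that must be filtered to collect MIN_DETECTABLE_UG
    when the air is at the target concentration."""
    return MIN_DETECTABLE_UG / target_conc_ug_per_m3

# Invented personal-sampler readings covering a 10-hour shift.
shift = [(0.8, 3.0), (1.4, 2.0), (0.6, 3.0), (1.1, 2.0)]
exposure = twa(shift)
print(f"10-hour TWA: {exposure:.2f} ug/cu m")
print("Over limit" if exposure > TWA_LIMIT_UG_PER_M3 else "Within limit")

# Section 8: showing air is at or below 0.5 ug/cu m requires filtering
# at least 0.5 ug / (0.5 ug per cu m) = 1.0 cu m of workplace air per sample.
print(f"Minimum sample volume at 0.5 ug/cu m: {min_sample_volume_m3(0.5):.1f} cu m")
```

With these invented readings, the 10-hour TWA works out to 0.92 µg/cu m, within the recommended limit; the same TWA formula underlies the per-operation determinations referenced in Table 1-2.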
In July 197 5, a private physician submitted a blood sample to the Center for Disease Control (CDC) to be analyzed for kepone, a chlorinated hydro carbon pesticide. The sample had been obtained from a kepone production worker who suffered from weight loss, nystagmus, and tremors. CDC notified the State epidemiologist that high levels of kepone were present in blood sample, and he initiated an epidemiologic investigation which revealed other employees suffering with sim ilar symptoms. It was evident to the State official after visiting the plant that the employees had been exposed to kepone at extrem ely high concentrations through inhalation, ingestion, and skin absorption. He recommended that the plant be closed, and company man agement complied. Of the 113 current and form er employees of this kepone-manufacturing plant examined, more than half exhibited clinical symptoms of kepone poisoning. Medical histories of tremors (called " kepone shakes" by em ployees), visual disturbances, loss of weight, nervousness, insomnia, pain in the chest and abdomen and, in some cases, infertility and loss of libido were reported. The employees also complained of vertigo and lack of muscular coordination. The intervals between exposure and onset of the signs and symptoms varied between patients but appeared to be dose-related. NIOSH responded to a request for technical assistance from the Regional Director of the Occupational Safety and Health Administration (OSHA) by sending an industrial hygienist from the regional office to assist in the investi gation of the working conditions in the plant. Information on kepone was collected from several sources including one of the principal distributors of the material. No data were available regarding the effect of kepone on man or the mechanism of metabolism and excretion. N IO SH has concluded that kepone produced physical injury to workers exposed to it but has not deter mined what the dose-response relationship is because of the lack of en vironmental exposure information. NIOSH has identified fewer than 50 establishments processing or form ulating pesticLes using kepone and has estimated that 6 0 0 workers are potentially exposed to kepone. (NIOSH is unaware of any plant in the US which is currently manufacturing kepone; the only known plant manufacturing it was closed in July 1975.) NIOSH has recently received a report on a carcinogenesis bioassay of technical grade kepone which was conducted by the National Cancer Institute using Osborne-Mendel rats and B6C3F1 mice. Kepone was adm inistered in the diet at two tolerated dosages. In addition to the clinical signs of toxicity, which were seen in both species, a significant increase (P < .0 5 ) of hepato cellular carcinoma in rats given large dosages of kepone and in mice at both dosages was found. Rats and mice also had extensive hyperplasia of the liver. In view of these findings, NIOSH must assume that kepone is a potential human carcinogen. Since we are unable to establish a safe exposure level, NIOSH recommends that the workplace e n v ir o n m e n t a l level for kepone be lim ited to 1 /¿g/cu m as a time-weighted average concentration for up to a 10-hour workday, 40-hour workweek, as an emergency standard.# I. RECOM MENDATIONS FOR A KEPONE STANDARD The National Institute for Occupational Safety and Health (NIOSH) recommends that worker exposure to kepone in the workplace be controlled by adherence to the following sections. 
The standard is designed to protect the health and safety of workers for up to a 10-hour workday, 40-hour workweek over a working lifetime. Compliance with all sections of the standard should prevent the overt adverse effects currently reported from exposure to kepone in the workplace and materially reduce the risk of cancer from occupational exposure to kepone. The standard is measurable by techniques that are valid, reproducible, and available. Sufficient technology exists to permit compliance with the recommended standard. The standard will be subject to review and revision as necessary. "Occupational exposure to kepone" is defined as exposure to airborne kepone at concentrations greater than one-half of the workplace environmental limit for kepone. Exposure to kepone at concentrations less than one-half of the workplace environmental limit will not require adherence to the following sections, except for 3, 4(a), 5, 6(b,c,e,f), and 7.
# Section 1 -Environmental (Workplace Air)
# Concentration of Kepone
Kepone shall be controlled in the workplace so that the airborne concentration of kepone, sampled and analyzed according to the procedures in Appendices I and II, is not greater than 1 µg/cu m of breathing zone air. Procedures for sampling and analysis of kepone in air shall be as provided in Appendices I and II, or by any method shown to be equivalent in precision, accuracy, and sensitivity to the methods specified.
# Section 2 -Medical
Employers shall make medical surveillance available to all workers occupationally exposed to kepone, including personnel periodically exposed during routine maintenance or emergency operations. Periodic examinations shall be made available at least on an annual basis.
(a) Preplacement and periodic medical examinations shall include: (1) A comprehensive or interim work history. (2) A comprehensive medical history, including but not limited to information on conditions which may preclude further exposure to kepone. (3) A complete physical examination with particular attention to any neurologic findings such as tremor, opsoclonia (jumpy, irregular eye movements in all directions), short-memory difficulty, exaggerated startle reflex, and signs of muscular incoordination, and to atrophy of testicles, splenomegaly, hepatomegaly, and jaundice. (4) An evaluation of the worker's ability to use negative or positive pressure respirators.
(b) For workers with occupational exposure to kepone, preplacement and periodic medical examinations shall include liver function studies, as considered necessary by the responsible physician.
(c) Medical examinations shall be made available to all workers with signs or symptoms of skin irritation likely to have been the result of exposure to kepone.
(d) If clinical evidence of adverse effects due to kepone is developed from these medical examinations, the worker shall be kept under a physician's care until the worker has completely recovered or maximal improvement has occurred.
(e) Initial examinations for presently employed workers shall be offered within 6 months of the promulgation of a standard incorporating these recommendations.
(f) Pertinent medical records shall be available to the designated medical representatives of the Secretary of Health, Education, and Welfare, of the Secretary of Labor, of the employee or former employee, and of the employer.
(g) Medical records shall be maintained for all employees with occupational exposure to kepone and for maintenance personnel with periodic exposure. All pertinent medical records with supporting documents shall be retained at least 30 years after the individual's employment is terminated.
# Section 3 -Labeling and Posting
(a) Containers of kepone and of materials containing kepone shall carry a warning label that includes the following instructions: Avoid contact with skin and eyes. Avoid breathing dust or solution spray. In case of contact, immediately flush eyes with plenty of water for at least 15 minutes. Call a physician. Flush skin with water. Wash clothing before reuse. Use fresh clothing daily. Take showers after work, using plenty of soap. Change to clean street clothing before leaving place of employment.
(b) In areas where there is occupational exposure to kepone, the following warning sign shall be posted in readily visible locations, particularly at the entrances to the area:
DANGER!
EXTREME HEALTH HAZARD
CANCER-SUSPECT AGENT
NERVE-DESTROYING AGENT USED IN THIS AREA
UNAUTHORIZED PERSONS KEEP OUT
This sign shall be printed both in English and in the predominant language of non-English-speaking workers, if any, unless employers use other equally effective means to ensure that these workers know the hazards associated with kepone and the locations of areas in which there is occupational exposure to kepone. Employers shall ensure that illiterate workers also know these hazards and the locations of these areas.
# Section 4 -Personal Protective Equipment and Protective Clothing
(a) Protective Clothing
(1) Coveralls or other full-body protective clothing shall be worn in areas where there is occupational exposure to kepone. Protective clothing shall be changed at least daily at the end of the shift and more frequently if it should become grossly contaminated.
(2) Impervious gloves, aprons, and footwear shall be worn at operations where solutions of kepone may contact the skin. Protective gloves shall be worn at operations where dry kepone or materials containing kepone are handled and may contact the skin.
(3) Eye protective devices shall be provided by the employer and used by the employees where contact of kepone with eyes is likely. Selection, use, and maintenance of eye protective equipment shall be in accordance with the provisions of the American National Standard Practice for Occupational and Educational Eye and Face Protection, ANSI Z87.1-1968. Unless eye protection is afforded by a respirator hood or facepiece, protective goggles or a face shield shall be worn at operations where there is danger of contact of the eyes with dry or wet materials containing kepone because of spills, splashes, or excessive dust or mists in the air.
(4) The employer shall ensure that all personal protective devices are inspected regularly and maintained in clean and satisfactory working condition.
(5) Work clothing may not be taken home by employees. The employer shall provide for maintenance and laundering of protective clothing.
(6) The employer shall ensure that precautions necessary to protect laundry personnel are taken while soiled protective clothing is being laundered.
(7) The employer shall ensure that kepone is not discharged into municipal waste treatment systems or the community air.
(b) Respiratory Protection from Kepone
Engineering controls shall be used wherever feasible to maintain airborne kepone concentrations at or below that recommended in Section 1 above.
Compliance with the environmental exposure limit by the use of respirators is allowed only when airborne kepone concentrations are in excess of the workplace environmental limit because required engineering controls are being installed or tested, when nonroutine maintenance or repair is being accomplished, or during emergencies. When a respirator is thus permitted, it shall be selected and used in accordance with the following requirements:
(1) For the purpose of determining if it is necessary for workers to wear respirators, the employer shall measure the airborne concentration of kepone in the workplace initially and thereafter whenever process, worksite, climate, or control changes occur which are likely to increase the airborne concentration of kepone.
(2) The employer shall ensure that no worker is exposed to kepone above the workplace environmental limit because of improper respirator selection, fit, use, or maintenance.
(3) A respiratory protection program meeting the requirements of 29 CFR 1910.134 and 30 CFR 11 which incorporates the American National Standard Practices for Respiratory Protection Z88.2-1969 shall be established and enforced by the employer.
(4) The employer shall provide respirators in accordance with Table 1-1 and shall ensure that the employee uses the respirator provided.
(5) Respirators described in Table 1-1 shall be those approved under the provisions of 29 CFR 1910.134 and 30 CFR 11.
(6) The employer shall ensure that respirators are adequately cleaned, and that employees are instructed on the use of respirators assigned to them, their location in the workplace, and on how to test for leakage.
(7) Where an emergency may develop which could result in employee injury from kepone, the employer shall provide an escape device as listed in Table 1-1.
# Section 5 -Informing Employees of Hazards from Kepone
At the beginning of employment or assignment for work in a kepone area, employees with occupational exposure to kepone shall be informed of the hazards, relevant signs and symptoms of overexposure, appropriate emergency procedures, and proper conditions and precautions for the safe use of kepone. Instruction shall include, as a minimum, all information in Appendix III which is applicable to the specific kepone product or material to which there is exposure. This information shall be posted in the work area and kept on file, readily accessible to the worker at all places of employment where kepone is involved in unit processes and operations. A continuing educational program shall be instituted to ensure that all workers have current knowledge of job hazards, proper maintenance procedures, and cleanup methods, and that they know how to use respiratory protective equipment and protective clothing correctly. Information as specified in Appendix III shall be recorded on the Material Safety Data Sheet or a similar form approved by the Occupational Safety and Health Administration, US Department of Labor.
# NOTICE
Appendices I, II, and III will be supplied when they become available.
# Section 6 -Work Practices
(a) Control of Airborne Contamination
Emission of airborne particulates (dust, mist, spray, etc) of kepone shall be controlled at the sources of dispersion by means of effective and properly maintained methods such as fully enclosed operations and local exhaust ventilation. Other methods may be used if they are shown to effectively control airborne concentrations of kepone within the limit of the recommended standard.
Air discharged from facilities in which kepone is manufactured, processed, stored, or used may not contain kepone at concentrations greater than 5% of the workplace environmental limit in Section 1 or at concentrations greater than those permitted by other ordinances, statutes, and regulations.
(b) Control of Contact with Skin and Eyes
(1) Employees working in areas where contact of skin or eyes with kepone, dry or wet, is possible shall wear full-body protective clothing, including neck and head coverings, and gloves, in accord with Section 4(a).
(2) Clean protective clothing shall be put on before each work shift.
(3) If, during the shift, the clothing becomes wetted with a solution, slurry, or paste of a kepone material, or grossly contaminated with a dry form of such material, it shall be removed promptly and placed in a special container for garments for decontamination or disposal. The employee shall wash the contaminated skin area thoroughly with soap and a copious amount of water. A complete shower is preferred after anything but limited, minor contact. Then, clean protective clothing shall be put on before resuming work. When working directly with kepone, with unsealed containers of kepone, or with kepone in other than fully enclosed operations, protective devices and clothing shall be removed and the arms, hands, and face thoroughly washed after working with kepone, and at 30-minute intervals when working with kepone for extended periods of time.
(4) Small areas of skin (principally the hands) contaminated by contact with kepone shall be washed immediately and thoroughly with an abundance of water. Water shall be easily accessible in the work areas through low-pressure, free-running hose lines or showers.
(5) If kepone comes into contact with the eyes, they should be flushed with a large volume of low-pressure flowing water for at least 15 minutes. Medical attention shall be obtained without delay, but not at the expense of thoroughly flushing the eyes.
(c) Procedures for emergencies, including firefighting, shall be established to meet foreseeable events. Necessary emergency equipment, including appropriate respiratory protective devices, shall be kept in readily accessible locations. Only self-contained breathing apparatus with positive pressure in the facepiece shall be used in firefighting. Appropriate respirators shall be available for use during evacuation.
(d) Special supervision and care shall be exercised to ensure that the exposures of repair and maintenance personnel to kepone are within the limit prescribed by this standard.
(e) Prompt cleaning of spills of kepone: (1) No dry sweeping shall be performed. Wet methods or dry vacuuming shall be used as appropriate. (2) Wet spills and flushing of wet or dry spills shall be channeled for appropriate treatment or collection for disposal. They may not be channeled directly into the municipal sanitary sewer system.
(f) General requirements: (1) Good housekeeping practices shall be observed to prevent or minimize contamination of areas and equipment and to prevent build-up of such contamination. (2) Good personal hygiene practices shall be encouraged. (3) Equipment shall be kept in good repair and free of leaks. (4) Containers of dry kepone shall be kept covered insofar as is practical.
# Section 7 -Sanitation
(a) Washing Facilities
Emergency showers and eye-flushing fountains with adequate pressure of cool water shall be provided and be quickly accessible in areas where contact of skin or eyes with kepone may occur. This equipment shall be frequently inspected and maintained in good working condition. Showers and washbasins shall be provided in the employees' locker areas. Employers shall ensure that employees exposed to kepone during their work shift wash before eating or smoking periods taken during the work shift.
(b) Food Facilities
Food storage and preparation as well as eating shall be prohibited in areas where kepone is handled, processed, or stored. Eating facilities provided for employees shall be located in areas in which the airborne concentrations of kepone are not greater than 5% of the workplace environmental limit in Section 1. Surfaces in these areas shall be kept free of kepone. Employers shall ensure that before employees enter premises reserved for eating, food storage, or food preparation, they remove protective clothing. Washing facilities should be accessible nearby.
(c) Employees may not smoke in areas where kepone is handled, processed, or stored.
(d) Clothing and Locker Room Facilities
Locker room facilities shall be provided in an area without occupational exposure to kepone for employees required to change clothing before and after work. The facilities shall provide for the storing of street clothing and clean work clothing separately from soiled work clothing. Showers and washbasins should be located in the locker area to encourage good personal hygiene. Covered containers should be provided for work clothing discarded at the end of the shift or after a contamination incident. The clothing shall be held in these containers until removed for decontamination or disposal.
# Section 8 -Monitoring and Recordkeeping Requirements
Workers are not considered to have occupational exposure to kepone if, on the basis of a professional industrial hygiene survey, the airborne concentration of kepone in an area where kepone is handled, processed, or stored is sufficiently low that a sampling volume equal to or greater than 1.0 cu m is necessary in order to collect 0.5 µg of kepone. The minimum quantity of kepone which must be collected in order to determine with reliability the presence of kepone in a sample is 0.5 µg. In order to determine that kepone is present in workplace air only at concentrations equal to or less than 0.5 µg/cu m, it is necessary that each sample of airborne kepone that is analyzed for the purpose of making this determination be the residue from the filtration of at least 1.0 cu m of workplace air. All samples of airborne kepone shall be analyzed by the chemical analytical method in Appendix II. Records of these surveys, including the basis for concluding that there is no occupational exposure to kepone, shall be maintained until a new survey is conducted. In workplaces where kepone is handled or processed, surveys shall be repeated annually and when any process change indicates a need for reevaluation. The requirements set forth below apply to areas in which there is occupational exposure to kepone. Employers shall maintain records of workplace environmental exposures to kepone based on the following sampling, analytical, and recording schedules:
(a) In all monitoring, samples representative of the exposure in the breathing zone of employees shall be collected by personal samplers.
(b) An adequate number of samples shall be taken in order to permit construction of TWA exposures for every operation or process (a worked sketch of this calculation follows this section). Except as otherwise determined by a professional industrial hygienist, the minimum number of representative TWA determinations for an operation or process shall be based on the number of workers exposed as provided in Table 1-2.
(c) The first determination of the worker's exposures to airborne kepone shall be completed within 6 months after the promulgation of a standard incorporating these recommendations.
(d) A reevaluation of the exposures of workers to airborne kepone shall be made within 30 days after installation of a new process or process changes.
(e) Samples of airborne kepone shall be collected and analyzed at least every 2 months for those work areas with occupational exposure to kepone.
(f) A reevaluation of the worker's exposures to airborne kepone shall be repeated at 1-week intervals when the airborne concentration has been found to exceed the recommended workplace environmental limit. In such cases, suitable controls shall be instituted and monitoring shall continue at 1-week intervals until 3 consecutive surveys indicate the adequacy of controls.
(g) Records of all sampling and analysis of airborne kepone and of medical examinations shall be maintained for at least 30 years after the individual's employment is terminated. Records shall indicate the details of (1) the type of personal protective devices, if any, in use at the time of sampling, and (2) the methods of sampling and analysis used. Each employee shall be able to obtain information on his own exposure. If an employer who has or has had employees with occupational exposure to kepone ceases business without a successor, he shall forward their records by registered mail to the Director, National Institute for Occupational Safety and Health.
(h) Regulated areas shall be established and posted where: (1) Kepone is manufactured, reacted, mixed with other substances, repackaged, stored, handled, or used; and (2) Airborne concentrations of kepone are in excess of the workplace environmental limit in Section 1.
(i) Access to the regulated areas designated by Section 8(h) shall be limited to authorized persons. A daily roster shall be made of persons authorized to enter; these rosters shall be maintained for 30 years.
(j) Employers shall ensure that before employees leave a regulated area they remove and leave protective clothing at the point of exit.
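The TWA construction required by paragraph (b), the 1 µg/cu m limit in Section 1, and the 0.5 µg detection-volume rule in Section 8 reduce to simple arithmetic. The following Python sketch is illustrative only and is not part of the recommended standard; the sampler readings are hypothetical:

```python
# Minimal sketch (not part of the standard): computing a worker's TWA exposure
# from personal air-sampler results and checking it against the recommended
# workplace environmental limit for a 10-hour workday.

LIMIT_UG_PER_CU_M = 1.0        # Section 1 workplace environmental limit
MIN_DETECTABLE_MASS_UG = 0.5   # Section 8 minimum reliably detectable mass

def twa(samples):
    """samples: list of (concentration in ug/cu m, duration in hours).
    Returns the time-weighted average over the total sampled time."""
    total_time = sum(t for _, t in samples)
    return sum(c * t for c, t in samples) / total_time

def min_sample_volume(target_ug_per_cu_m):
    """Air volume (cu m) that must be filtered to collect the minimum
    detectable mass when the true concentration equals the target."""
    return MIN_DETECTABLE_MASS_UG / target_ug_per_cu_m

# Hypothetical sampler readings over one 10-hour shift.
shift = [(0.8, 4.0), (1.6, 2.0), (0.4, 4.0)]
exposure = twa(shift)
print(f"10-hour TWA: {exposure:.2f} ug/cu m")
print("Exceeds limit" if exposure > LIMIT_UG_PER_CU_M else "Within limit")

# Section 8: to rule out exposure above 0.5 ug/cu m (one-half the limit),
# at least 1.0 cu m of workplace air must be sampled.
print(f"Required volume at 0.5 ug/cu m: {min_sample_volume(0.5):.1f} cu m")
```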
CDC recommends that all states and territories conduct case surveillance for human immunodeficiency virus (HIV) infection as an extension of current acquired immunodeficiency syndrome (AIDS) surveillance activities. The expansion of national surveillance to include both HIV infection and AIDS cases is a necessary response to the impact of advances in antiretroviral therapy, the implementation of new HIV treatment guidelines, and the increased need for epidemiologic data regarding persons at all stages of HIV disease. Expanded surveillance will provide additional data about HIV-infected populations to enhance local, state, and federal efforts to prevent HIV transmission, improve allocation of resources for treatment services, and assist in evaluating the impact of public health interventions. CDC will provide technical assistance to all state and territorial health departments to continue or establish HIV and AIDS case surveillance systems and to evaluate the performance of their surveillance programs. This report includes a revised case definition for HIV infection in adults and children, recommended program practices, and performance and security standards for conducting HIV/AIDS surveillance by local, state, and territorial health departments. The revised surveillance case definition and associated recommendations become effective January 1, 2000.# INTRODUCTION AIDS surveillance has been the cornerstone of national efforts to monitor the spread of HIV infection in the United States and to target HIV-prevention programs and health-care services. Although AIDS is the end-stage of the natural history of HIV infection, in the past, monitoring AIDS-defining conditions provided population-based data that reflected changes in the incidence of HIV infection. However, recent advances in HIV treatment have slowed the progression of HIV disease for infected persons on treatment and contributed to a decline in AIDS incidence. These advances in treatment have diminished the ability of AIDS surveillance data to represent trends in the incidence of HIV infection or the impact of the epidemic on the health-care system. As a consequence, the capacity of local, state, and federal public health agencies to monitor the HIV epidemic has been compromised (1)(2)(3). In response to these changes and following consultations with multiple and diverse constituencies (including representatives of public health, government, and community organizations), CDC and the Council of State and Territorial Epidemiologists (CSTE) have recommended that all states and territories include surveillance for HIV infection as an extension of their AIDS surveillance activities (1,4 ). In this manner, the HIV/AIDS epidemic can be tracked more accurately and appropriate information about HIV infection and AIDS can be made available to policymakers. CDC continues to support a diverse set of epidemiologic methods to characterize persons affected by the epidemic in the United States (5)(6)(7)(8)(9)(10). Although HIV/AIDS case surveillance represents only one component among multiple necessary surveillance strategies, this report focuses primarily on CDC's recommendation to implement HIV case reporting nationwide. This report provides a revised case definition for HIV infection in adults and children, recommended program practices, and performance and security standards for conducting HIV/AIDS surveillance by local, state, and territorial health departments. 
The case definition for HIV infection was revised in consultation with CSTE and includes the current AIDS surveillance criteria as a component (11 ). The recommended program practices and performance and security standards are based on a) the established practices of AIDS and other public health surveillance systems; b) reviews of state and local surveillance programs, confidentiality statutes, and security procedures; c) studies of the performance of surveillance systems; d) ongoing evaluations of determinants of test-seeking or test-avoidance in relation to state policies and practices on HIV testing and reporting; and e) discussions at a consultation held by CDC and CSTE in May 1997. A draft of this report was made available for public comment from December 10, 1998, to January 11, 1999, through a notice published in the Federal Register (12 ). # BACKGROUND # History of AIDS and HIV Case Surveillance Since the epidemic was first identified in the United States in 1981, populationbased AIDS surveillance (i.e., reporting of AIDS cases and their characteristics to public health authorities for epidemiologic analysis) has been used to track the progression of the HIV epidemic from the initial case reports of opportunistic illnesses caused by a then unknown agent in a few large cities to the reporting of 711,344 AIDS cases nationwide through June 30, 1999 (5,(13)(14)(15). The AIDS reporting criteria have been periodically revised to incorporate new understanding of HIV disease and changes in medical practice (16)(17)(18)(19). In the absence of effective therapy for HIV infection, AIDS surveillance data have reliably detected changing patterns of HIV transmission and reflected the effect of HIV-prevention programs on the incidence of HIV infection and related illnesses in specific populations (20)(21)(22)(23)(24)(25). Because of these attributes, AIDS surveillance data have been used as a basis for allocating many federal resources for HIV treatment and care services and as the epidemiologic basis for planning local HIV-prevention services. With the advent of more effective therapy that slows the progression of HIV disease, AIDS surveillance data no longer reliably reflect trends in HIV transmission and do not accurately represent the need for prevention and care services (26,27 ). In 1996, national AIDS incidence and AIDS deaths declined for the first time during the HIV epidemic (Figure 1). These declines have been primarily attributed to the early use of combination antiretroviral therapy, which delays the progression to AIDS and death for persons with HIV infection (1)(2)(3)9 ). Revised HIV treatment guidelines recommend antiretroviral therapy for many HIV-infected persons in whom AIDS-defining conditions have not yet developed (28)(29)(30). In addition, antiretroviral treatment of pregnant women and their newborns has reduced perinatal HIV transmission and resulted in dramatic declines in the incidence of perinatally acquired AIDS (31,32 ) (Figure 2). In response to these changes in HIV treatment practices and the information needs of public health and other policymakers, CDC and CSTE have recommended that all states and territories extend their AIDS case surveillance activities to include HIV case surveillance and the reporting of HIV-exposed infants (1,4,33 ). Since 1985, many states have implemented HIV case reporting as part of their comprehensive HIV/AIDS surveillance programs. 
As of November 1, 1999, a total of 34 states and the Virgin Islands (VI) had implemented HIV case surveillance using the same confidential system for name-based case reporting for both HIV infection and AIDS; two of these states conduct pediatric surveillance only (5 ) (Figure 3). Areas that conduct integrated HIV/AIDS surveillance for adults, adolescents, and children have reported 42% of cumulative U.S. AIDS cases. In addition, four states (Illinois, Maine, Maryland, and Massachusetts) and Puerto Rico, representing 11% of cumulative AIDS cases, are reporting cases of HIV infection using a coded identifier rather than patient name. Washington has implemented HIV reporting by patient name to enable public health follow-up; after services and referrals are offered, names are converted into codes. In most other states, HIV case reporting is under consideration or laws, rules, or regulations enabling HIV surveillance are expected to be implemented during 2000. In contrast to AIDS case surveillance, HIV case surveillance provides data to better characterize populations in which HIV infection has been newly diagnosed, including persons with evidence of recent HIV infection such as adolescents and young adults (13-24-year-olds) (34,35 ). Of the 52,690 HIV infections diagnosed from January 1994 through June 1997 in 25 states that conducted name-based HIV surveillance throughout this period, 14% of cases occurred in persons aged 13-24 years. In comparison, of the 20,215 persons in whom AIDS was diagnosed in these 25 states, only 3% of cases occurred in persons aged 13-24 years. Thus, AIDS case surveillance alone does not accurately reflect the extent of the HIV epidemic among adolescents and young adults. Compared with persons reported with AIDS, those reported with HIV infection in these 25 states were more likely to be women and from racial/ethnic minorities (36 ) (Table 1). These patterns reflect the characteristics of populations that were affected by the epidemic more recently, but they might also reflect changes in testing practices or behaviors (6,36,37 ). Compared with the diagnosis of AIDS, which can be delayed among HIV-infected persons receiving antiretroviral therapy, the first diagnosis of HIV infection is not delayed by treatment but is affected by testing behaviors and targeted testing programs. In addition, in these 25 states as of June 30, 1999, the total number of persons (159,083) who were reported as living with either a diagnosis of HIV infection (90,699) or AIDS (68,384) was 133% greater than that represented by the number living with AIDS alone (5 ). Therefore, these states have documented that the combined prevalence of those living with a diagnosis of HIV infection and those living with AIDS provides a more realistic and useful estimate of the resources needed for patient care and services than does AIDS prevalence alone. States with confidential name-based HIV case surveillance systems have used data on all perinatally exposed children to document the sharp decline in perinatally acquired HIV infection, the increase in the proportion of infected pregnant women who have been tested for HIV infection before delivery, and the high proportion of HIV-infected pregnant women who accept zidovudine therapy (31,(38)(39)(40)(41)(42)(43)(44). These findings contribute to HIV-prevention policy development.
CSTE and the American Academy of Pediatrics have recommended that all states and territories conduct pediatric HIV surveillance that includes all perinatally exposed infants to facilitate follow-up to assess infection status and access to care (11,31,33,40,45 ). Persons can choose to be tested for HIV in the following ways: a) anonymously -whereby identifying information, including patient name and other locating information, is not linked to the HIV test result (e.g., at anonymous testing sites) and b) confidentially -whereby the HIV test result is linked to identifying information such as patient and provider names (e.g., at medical clinics). In states that require HIV case reporting, providers in confidential medical or testing sites are required to report HIV-infected persons to public health authorities. Not all persons infected with HIV are tested, and of those who are, testing occurs at different stages of their infection. Therefore, HIV surveillance data provide a minimum estimate of the number of infected persons and are most representative of persons who have had HIV infection diagnosed in medical clinics and other confidential diagnostic settings. The data represent the characteristics of persons who recognize their risk and seek confidential testing, who are offered HIV testing (e.g., pregnant women and clients at sexually transmitted disease clinics), who are required to be tested (e.g., blood donors and military recruits), and who are tested because they present with symptoms of HIV-related illnesses. CDC estimated that, in 1996, approximately two thirds of all infected persons in the United States had HIV infection diagnosed in such settings (46 ). HIV surveillance data might not represent untested persons or those who seek testing at anonymous test sites or with home collection kits; such persons are not reported to confidential HIV/AIDS surveillance systems. However, the availability of anonymous testing is important in promoting knowledge of HIV status among at-risk populations and provides an opportunity for counseling to reduce high-risk behaviors and voluntary referrals to appropriate medical diagnosis and prevention services. Despite their current limitations, HIV and AIDS case surveillance data together can provide a clearer picture of the HIV epidemic than AIDS case surveillance data alone. Therefore, CDC and CSTE continue to recommend that all areas implement HIV case reporting as part of a comprehensive strategy to monitor HIV infection and HIV disease. The strategy should also include surveys of the incidence and prevalence of HIV infection; AIDS case surveillance; monitoring HIV-related mortality; supplemental research and evaluation studies, including behavioral surveillance; and statistical estimation of the incidence and prevalence of infection and disease. # Considerations in Implementing Nationwide HIV Case Surveillance The nationwide implementation of the 1993 expanded AIDS surveillance case definition prompted renewed discussions of the rationale and need for data representing HIV-infected persons who did not meet the AIDS-defining criteria.
Because many states were considering implementing HIV reporting, CDC held a consultation in 1993 with public health and community representatives to discuss relevant issues and concerns. Community representatives' main concerns were that the security and confidentiality standards of surveillance programs might not be sufficient to prevent disclosures of information and that many persons at risk for HIV infection might therefore delay seeking HIV counseling and testing because of these confidentiality concerns. The consensus of the consultants was that few published studies were of sufficient scientific quality to assess these concerns. Therefore, the consultants identified several areas that required additional research and policy development before CDC and CSTE should consider recommending further expansion of HIV surveillance efforts. These areas included a) the impact of reporting policies on testing behaviors and practices, including the decreased availability of anonymous testing in some states; b) the role of surveillance data in linking reported persons to prevention and care programs; c) the development of recommended standards for the security and confidentiality of publicly held HIV/AIDS surveillance data; and d) determining whether alternatives to reporting of patient names would reduce confidentiality risks while meeting the needs for high-quality surveillance data. In response to the consultants' recommendations, CDC initiated several research projects to a) assess the effect of confidential name-based HIV surveillance on persons' willingness to seek HIV testing and care; b) review program practices and legal requirements for the security and confidentiality of state and local HIV/AIDS surveillance data; and c) evaluate the performance of coded-identifier-based surveillance systems. Findings from these projects and expert advice from participants at numerous technical meetings and consultations held during the intervening period have guided formulation of the policies and practices recommended in this report. The findings from these projects are summarized in the following three subsections: HIV surveillance and testing behavior, HIV surveillance using non-name-based unique identifiers, and confidentiality of HIV surveillance data. # HIV Surveillance and Testing Behavior Few studies have characterized test-or care-seeking behaviors in relation to state HIV reporting policies. A 1988 general population study of previous or planned use of HIV testing services did not identify an association of reporting policy with testing behavior (47 ). In contrast, interviews of persons seeking anonymous testing in 1989 documented that many would avoid testing if a positive test resulted in name reporting or partner notification (48 ). A review of the published literature on HIV testing behaviors highlighted several limitations and biases in previous studies (49 ), including small numbers, lack of geographic and risk-group representativeness, and analysis of intent to test rather than of actual testing behavior. An additional limitation of the available literature is that studies published 5-10 years ago might not reflect actual testing behaviors in the current treatment era. Literature that highlights potential misuse of public health surveillance data might have the unintended effect of increasing test avoidance among some at-risk persons (50 ). 
Examining knowledge of and perceptions about testing and reporting, as well as actual testing behavior, in the context of current treatment advances and evolving HIV reporting policies, can address some of the limitations of previous research. To determine the effect of changes in reporting policies on actual testing behaviors among persons seeking testing at publicly funded HIV counseling and testing sites, CDC and six state health departments reviewed data routinely collected from these sites to compare HIV testing patterns during the 12 months before and the 12 months after implementation of HIV case surveillance (51 ). In these areas, the number of HIV tests increased in four states and decreased in two states; the declines were not statistically significant. All the analysis periods (25-month periods during 1992-1996) antedated the widespread beneficial effects of highly active antiretroviral therapy. Slight variability in testing trends was observed among racial/ethnic subgroups and HIV-risk exposure categories; however, these data do not suggest that, in these states, the policy of implementing HIV case reporting adversely affected test-seeking behaviors overall (52 ). CDC also supported studies by researchers at the University of California at San Francisco and participating state health departments to identify the most important determinants of test seeking or test avoidance among high-risk populations and to assess the impact of changes in HIV testing and HIV reporting policies. Data from these surveys of high-risk persons in nine selected states about their perceptions and knowledge of HIV testing and HIV reporting practices documented that few respondents had knowledge of the HIV reporting policies in their respective states (53,54 ). In surveys conducted during 1995-1996, respondents reported high levels of testing, with approximately three fourths reporting that they had had an HIV test. The most commonly reported factors (by nearly half of respondents) that might have contributed to delays in seeking testing or not getting tested were fear of having HIV infection diagnosed or belief that they were not likely to be HIV infected. "Reporting to the government" was a concern that might have contributed to a delay in seeking HIV testing for 11% of heterosexuals, 18% of injecting-drug users, and 22% of men who have sex with men; <1%, 3%, and 2% of respondents in these risk groups, respectively, indicated that this was their main concern. Concern about name-based reporting of HIV infections to the government was a factor for not testing for HIV for 13% of heterosexuals, 18% of injecting-drug users, and 28% of men who have sex with men. As the main factor for not testing for HIV, concern about name-based reporting to the government was substantially lower in all risk groups (1% of heterosexuals, 1% of injecting-drug users, and 4% of men who have sex with men) (55 ). These findings suggest that name-based reporting policies might deter a small proportion of persons with high-risk sex or drug-using behaviors from seeking testing and, therefore, support the need for strict adherence to confidentiality safeguards of public health testing and surveillance data. In addition, the survey documented that the availability of an anonymous testing option is consistently associated with higher rates of intention to test in the future. 
In this survey, high levels of testing, together with high levels of test delay or avoidance associated with reasons other than concern about name reporting, suggest that addressing these other concerns may have a greater effect on testing behavior. For example, 59% of men who have sex with men reported being "afraid to find out" as a factor for not testing, and 27% reported it as the main factor for not testing. In addition, 52% of men who have sex with men reported "unlikely to have been exposed" as a factor for not testing, and 17% reported it as the main factor. In a companion survey of persons reported with AIDS in eight of these same states, participants who had recognized their HIV risk and sought testing at anonymous testing sites reported entering care at an earlier stage of HIV disease than persons who were first tested in a confidential testing setting (e.g., STD clinics, medical clinics, or hospitals), where persons are frequently first tested when they become ill (56 ). These data suggest that anonymous testing options are important in promoting timely knowledge of HIV status for some at-risk persons. # HIV Surveillance Using Non-Name-Based Unique Identifiers To assess the feasibility of using alternatives to confidential name-based methods for HIV surveillance, several states implemented reporting of cases of HIV infection or CD4 (a marker of immunosuppression in HIV-infected persons) laboratory test results using various numeric or alphanumeric codes. Other states considered or tried to conduct case surveillance without name identifiers by using codes designed for nonsurveillance purposes (e.g., codes intended for use in tracking patients in case-management systems) (57 ). In May 1995, CDC convened a meeting at which these states identified operational, technical, and scientific challenges in conducting surveillance using coded identifiers rather than patient names. The states recommended that CDC evaluate additional coded identifiers and assist them in documenting and disseminating the results of their findings. In addition, CDC supported research to evaluate the performance of a coded unique identifier (UI) in two states that implemented a non-name-based HIV case-reporting system while maintaining name-based surveillance methods for AIDS (58 ). The study, conducted by Maryland and Texas during 1994-1996 in collaboration with CDC, documented nearly 50% incomplete reporting, in part because the social security number necessary to construct the identifier code was not uniformly available in medical or laboratory records. In Maryland, provider-maintained logs were needed to link the UI to name-based medical records to obtain follow-up data (e.g., on HIV risk/exposure). A more recent evaluation conducted by the Maryland Department of Health and Mental Hygiene (MDHMH) reported data from a publicly funded counseling and testing site and documented a higher level of completeness of HIV reporting (88%) than the 50% documented in the previous study (58,59 ). MDHMH reports that their code is unique to a given person and that assignment of two different codes to the same person is unlikely. That is, the probability that a given code can distinguish one person from any other is >99% if all the elements of the code are complete and accurate. No published evaluations have assessed the probability of assigning the same code to different persons, which could occur if elements of the code were missing.
In contrast to MDHMH's findings, analogous evaluations in Texas, as well as studies that used more diverse methods in Los Angeles and New Jersey, failed to identify a code that performs as well as name-based methods (58,(60)(61)(62)(63)(64)(65)(66)(67). On the basis of published evaluations (58 ), Texas recently switched to name-based HIV case surveillance. In addition to Maryland, three other states (Illinois, Maine, and Massachusetts) and Puerto Rico recently implemented HIV reporting using four different coded identifiers. CDC will assist these states in implementing their systems, establishing standardized criteria for assessing the overall performance of their systems, as well as assessing whether the required standards are achieved. Additional evaluations will be conducted by the respective state health departments, in collaboration with CDC, to determine a) the ability of coded identifiers to accurately track disease progression from HIV infection to AIDS to death, b) their utility for evaluating public health efforts to eliminate perinatal HIV transmission, c) their acceptability, and d) their usefulness in matching to other databases (e.g., tuberculosis). # Confidentiality of HIV Surveillance Data A 1994 review of state confidentiality laws that protect HIV surveillance data documented that all states and many localities have legal safeguards for confidentiality of government-held health data (68 ). These laws provide greater protection than laws protecting the confidentiality of information in health records held by private healthcare providers. Most states have specific statutory protections for public health data related to HIV infection and other STDs. However, state legal protections vary, and CDC supports additional efforts to strengthen privacy protections for public health data. On the basis of input from expert legal and public health consultants, the Model State Public Health Privacy Act (69 ) was developed by an independent contractor at the behest of CSTE. If enacted by states, the provisions of the Model Act would ensure the confidentiality of surveillance data, strengthen statutory protections against disclosure, and preclude the intended or unintended use of surveillance data for non-public health purposes. CDC has reviewed state and local security policies and procedures for HIV/AIDS surveillance data. Since 1981, states have conducted AIDS surveillance, and few breaches of security have resulted in the unauthorized release of data (70,71 ). Because survival has improved for HIV-infected persons, information about them might be maintained in public health surveillance databases for longer periods. This has resulted in increased concerns about confidentiality of surveillance data among public health and community groups (72 ). Therefore, CDC has issued technical guidance for security procedures that include enhanced confidentiality and security safeguards as evaluation criteria for federal funding of state HIV/AIDS surveillance activities (73 ). The receipt of federal surveillance funding depends on the recipient's ability to ensure the physical security and confidentiality of case reports. At the federal level, HIV/AIDS surveillance data are protected by several federal statutes, which ensure that CDC will not release HIV/AIDS surveillance data for non-public health purposes (e.g., for use in criminal, civil, or administrative proceedings). Privacy is also ensured by the removal of names and the encryption of data transmitted to CDC. 
On the basis of the importance of maintaining the confidentiality of persons in whom HIV infection has been diagnosed by public or private health-care providers, CDC has recommended additional standards to enhance the security and confidentiality of HIV and AIDS surveillance data (74,75 ). # GUIDELINES FOR SURVEILLANCE OF HIV INFECTION AND AIDS HIV Surveillance Case Definition for Adults and Children CDC, in collaboration with CSTE, has established a new case definition for HIV infection in adults and children that includes revised surveillance criteria for HIV infection and incorporates the surveillance criteria for AIDS (17-19,76 ) (Appendix). HIV infection and AIDS case reports forwarded to CDC should be based on this definition. For adults and children aged ≥18 months, the HIV surveillance case definition includes laboratory and clinical evidence specifically indicative of HIV infection and severe HIV disease (AIDS). For children aged <18 months (except for those who acquired HIV infection other than by perinatal transmission), the HIV surveillance case definition updates the definition in the 1994 revised classification system. In addition, the new case definition is based on recent data regarding the sensitivity and specificity of HIV diagnostic tests in infants and clinical guidelines for Pneumocystis carinii pneumonia (PCP) prophylaxis for children (19,(77)(78)(79)(80)(81)(82)(83)(84)(85)(86)(87)(88) and for use of antiretroviral agents for pediatric HIV infection (30 ). The revised surveillance case definitions for adults and children become effective January 1, 2000. # HIV/AIDS Case Surveillance Practices and Standards CDC and CSTE recommend that all states require reporting to public health surveillance of all cases of perinatal HIV exposure in infants, the earliest diagnosis of HIV infection (exclusive of anonymous tests) and the earliest diagnosis of AIDS in persons of all ages, and deaths among these persons (4,33 ). Such reporting should constitute the core minimum performance standard for HIV/AIDS surveillance in all states and territories. CDC provides federal funds and technical assistance to states to establish and conduct active HIV/AIDS surveillance programs. On the basis of feasibility, needs, and resources, areas may be funded to implement additional surveillance activities (e.g., supplemental research and evaluation studies and serologic surveys), but these approaches might not be necessary in all areas. The following recommended practices update and revise the CDC Guidelines for HIV/AIDS Surveillance released in 1996 and updated in 1998 as a technical guide for state and local HIV/AIDS surveillance programs (34,(73)(74)(75). Recommended practices represent CDC's guidance for best public health practice based on available scientific data. Programmatic standards set minimum requirements for states to receive support from CDC for HIV/AIDS surveillance activities. # Recommended Surveillance Practices - All state and local programs should collect a standard set of surveillance data for all cases that meet the reporting criteria for HIV infection and AIDS. The standard data set includes the a) patient identifier, b) earliest date of diagnosis of HIV infection, c) earliest date of diagnosis of an AIDS-defining condition, d) demographic information (e.g., date of birth, race/ethnicity, and sex) and residence (i.e., city and state) at diagnosis of HIV infection and of AIDS, e) HIV risk exposure, f) facility of diagnosis, and g) date of death and state of residence at death. 
In addition to this information, the date of HIV diagnostic testing, the results of these tests, and exposure to antiretroviral treatment for reducing perinatal HIV transmission should be collected for all infants with perinatal exposures to HIV. Surveillance information, without patient identifiers, should be encrypted and forwarded to CDC through the HIV/AIDS Reporting System (or equivalent) in accordance with current practice. To address specific public health information needs, local surveillance programs can cross-match HIV and AIDS surveillance data with other public health data (e.g., tuberculosis data) and collect supplemental surveillance data on all or a representative sample of cases. CDC will provide technical assistance and recommend standardized surveillance methods to assist in collecting supplemental surveillance information. - On the basis of studies of coded identifier systems conducted in at least eight states, published evaluations of name-based and code-based surveillance systems, and CDC's assessment of the quality and reproducibility of the available data, CDC has concluded that confidential name-based HIV/AIDS surveillance systems are most likely to meet the necessary performance standards (36,58,60-67,89,90 ), as well as to serve the public health purposes for which surveillance data are required. Therefore, CDC advises that state and local surveillance programs use the same confidential name-based approach for HIV surveillance as is currently used for AIDS surveillance nationwide. However, CDC recognizes that some states have adopted, and others may elect to adopt, coded case identifiers for public health reporting of HIV infection. CDC will provide technical assistance to all state and local areas to continue or establish HIV/AIDS surveillance systems and to evaluate their surveillance programs using standardized methods and criteria whether they use name or coded identifiers. - HIV and AIDS surveillance should be used to identify rare or previously unrecognized modes of HIV transmission, unusual clinical or virologic manifestations, and other cases of public health importance. Providers are the most likely and timely source of identifying unusual laboratory or clinical cases. They are encouraged to promptly report atypical cases to local, state, or territorial public health officials for follow-up. CDC will provide technical assistance to state and local health departments conducting such investigations and will revise public health recommendations based on the findings, as appropriate. - HIV and AIDS case surveillance efforts should result in collection of data from all private and public sources of HIV-related testing and care services. Laboratoryinitiated surveillance methods should identify all cases that meet the laboratory reporting criteria for HIV infection and/or AIDS. However, these methods will require follow-up with the provider to verify the infection status or clinical stage and obtain complete demographic and exposure risk data. HIV-infected persons who are initially tested anonymously are eligible to be reported to CDC's HIV/AIDS surveillance database only after they have had HIV infection diagnosed in a confidential testing setting (e.g., by a health-care provider) and have test results or clinical conditions that meet the HIV and/or AIDS reporting criteria. 
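For illustration only, the standard surveillance data set enumerated above could be represented as a simple record structure. The field names below are hypothetical and do not reflect the actual HIV/AIDS Reporting System schema:

```python
# Illustrative sketch only: one way a surveillance program might structure the
# standard HIV/AIDS case data set described above. Field names are hypothetical
# and do not represent CDC's HIV/AIDS Reporting System.
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class HivAidsCaseReport:
    patient_identifier: str              # name-based or coded, per state policy
    hiv_diagnosis_date: Optional[date]   # earliest diagnosis of HIV infection
    aids_diagnosis_date: Optional[date]  # earliest AIDS-defining condition
    date_of_birth: date
    race_ethnicity: str
    sex: str
    residence_at_diagnosis: str          # city and state at diagnosis
    hiv_risk_exposure: Optional[str]     # may require epidemiologic follow-up
    facility_of_diagnosis: str
    date_of_death: Optional[date] = None
    residence_at_death: Optional[str] = None
```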
- All state and local surveillance programs should regularly publish, in print or electronically, aggregated HIV/AIDS surveillance data in a format that facilitates use of these data by federal, state, and local public health agencies, HIV-prevention community planning groups and care-planning councils, academic institutions, providers and institutions that have reported cases, community-based organizations, and the general public. Presentation of surveillance data should be consistent with established policies for data release that preclude the direct or indirect identification of a person with HIV infection or AIDS. CDC will increase its efforts to coordinate requests for HIV/AIDS surveillance data across federal government agencies to use state/local surveillance resources efficiently. CDC will also develop specific guidelines for analyzing and interpreting HIV/AIDS surveillance data. - All state and local surveillance programs should conduct regular, ongoing assessments of the performance of the surveillance system and redirect efforts and resources to ensure timely reporting of complete, representative, and accurate data. CDC will provide technical assistance and recommend standardized evaluation methods to assist states in achieving the highest possible level of performance and to promote comparability of data throughout the United States. # Minimum Performance Standards - To provide accurate and timely data for monitoring HIV/AIDS trends and ensuring a reliable measure of the number of persons in need of HIV-related prevention and care services, state and local HIV/AIDS surveillance systems should use reporting methods that provide case reporting that is complete (≥85%) and timely (≥66% of cases reported within 6 months of diagnosis). In addition, evaluation studies should demonstrate that the approach used to conduct surveillance (i.e., name or coded identifier) must result in accurate case counts (≤5% duplicate case reports and ≤5% incorrectly matched case reports). Finally, at least 85% of reported cases or a representative sample should have information regarding risk for HIV infection after epidemiologic follow-up is completed. All HIV/AIDS surveillance systems should collect the recommended standard data in a reliable and valid manner, allow matching to other public health databases (e.g., death registries) to benefit specific public health goals, and allow identification and follow-up of individual cases of public health importance. - To assess the quality of HIV and AIDS case surveillance as specified in the performance standards, states and local surveillance programs must conduct periodic evaluation studies. CDC will recommend several evaluation methods to enable states to select methods best suited to their program needs and resources. States should also evaluate the representativeness of their HIV case reports by monitoring the potential impact of HIV surveillance on test-seeking patterns and behaviors and review the extent to which surveillance data are being used for planning, targeting, and evaluating HIV-prevention programs and services. The goal of these performance evaluations is to enhance the quality and usefulness of surveillance data for public health action. During the next several years (i.e., 2000-2002), CDC will assist states in transitioning to an integrated HIV/AIDS surveillance system by evaluating current performance levels, instituting revised program operations and policies as necessary, and then reassessing performance. 
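The minimum performance standards above are straightforward to operationalize as periodic evaluation metrics. The following is a minimal sketch, assuming a hypothetical record format and an external estimate of the true number of cases (e.g., from a completeness study); it is not a CDC-specified evaluation method:

```python
# Illustrative sketch: checking case reports against the minimum performance
# standards quoted above. Thresholds come from the text; the record format and
# the estimate of true case counts are hypothetical.
from datetime import timedelta

COMPLETENESS_MIN = 0.85   # >=85% of estimated cases reported
TIMELINESS_MIN = 0.66     # >=66% reported within 6 months of diagnosis
DUPLICATES_MAX = 0.05     # <=5% duplicate case reports
RISK_INFO_MIN = 0.85      # >=85% with risk information after follow-up

def evaluate(reports, estimated_true_cases):
    """reports: list of dicts with 'diagnosis_date', 'report_date',
    'is_duplicate', and 'risk_known' keys (hypothetical format).
    Returns whether each minimum standard is met."""
    n = len(reports)
    unique = [r for r in reports if not r["is_duplicate"]]
    # Approximate "6 months" as 183 days for the timeliness check.
    timely = [r for r in unique
              if r["report_date"] - r["diagnosis_date"] <= timedelta(days=183)]
    return {
        "completeness": len(unique) / estimated_true_cases >= COMPLETENESS_MIN,
        "timeliness": len(timely) / len(unique) >= TIMELINESS_MIN,
        "duplicates": (n - len(unique)) / n <= DUPLICATES_MAX,
        "risk_info": sum(r["risk_known"] for r in unique) / len(unique)
                     >= RISK_INFO_MIN,
    }
```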
Following this transition period, CDC will evaluate and award proposals for federal funding of state and local surveillance programs based on their capacity to meet these performance standards. At that time, CDC will require that recipients of federal funds for HIV/AIDS case surveillance adopt surveillance methods and practices that will enable them to achieve the standards, to ensure that federal funds are awarded responsibly.
# Recommended Security and Confidentiality Practices
- State and local programs should document their security policies and procedures and ensure their availability for periodic review.
- State and local health departments should minimize storage and retention of unnecessary or redundant paper or electronic reports and should review their data-retention policies for consistency with CDC technical guidelines (73-75). States should consider and evaluate removing names from surveillance records when the names no longer serve the public health purpose for which they were collected. Policies should provide the flexibility to remove cases that were reported in error or that are determined on follow-up not to be infected with HIV. CDC will develop guidance for confirming HIV-infection status as testing and vaccine technologies evolve.
- State and local health departments should also review their confidentiality practices to determine whether additional protections should be established (e.g., before implementation of HIV case surveillance). States that plan to implement HIV case surveillance should review their current confidentiality statutes to determine whether they need to be strengthened. States should consider the Model State Public Health Privacy Act (69) in developing their statutory protections of HIV/AIDS surveillance data. Confidentiality laws should protect surveillance data that are transmitted (in a secure and confidential manner consistent with CDC's HIV/AIDS surveillance program requirements) to other public health programs as part of evaluation studies or for follow-up of cases of special public health importance. The penalties for violating privacy and security should apply to all recipients of HIV/AIDS case surveillance information.
- To further enhance security and confidentiality of data, states are encouraged to implement a double-key encryption and decryption system, in which identifying information encrypted by states using the first key can be decrypted for access only by using the second key. CDC will develop this option at the request of states that wish to reassure HIV-infected persons that HIV and AIDS surveillance data will be held confidentially and will be used only for specified public health purposes. CDC will hold the second key under an Assurance of Confidentiality under Section 308(d) of the Public Health Service Act, which governs how CDC uses or releases surveillance data voluntarily shared with CDC by the states. Under this assurance, CDC is prohibited from providing that key to a state planning to use HIV/AIDS surveillance data for non-public health purposes.
# Minimum Security and Confidentiality Standards
The security and confidentiality policies and procedures of state and local surveillance programs should be consistent with CDC standards for the security of HIV/AIDS surveillance data (73,74). The minimum security criteria were established following reviews of all state and numerous local health department HIV/AIDS surveillance programs.
In general, the reviews documented that health departments have achieved a high level of security and that most state health departments meet or exceed the minimum standards. Beginning in 2000, CDC will require that recipients of federal funds for HIV/AIDS surveillance establish the minimum security standards and include their security policy in applications for surveillance funds (73,74). Examples of these standards include the following:
- Electronic HIV/AIDS surveillance data should be protected by computer encryption during data transfer. States should continue the established practice of not including personal identifying information in HIV/AIDS surveillance data forwarded to CDC.
- HIV and AIDS surveillance records should be located in a physically secured area and should be protected by coded passwords and computer encryption.
- Access to the HIV/AIDS surveillance registry should be restricted to a minimum number of authorized surveillance staff, who are designated by a responsible authorizing official, have been trained in confidentiality procedures, and are aware of penalties for unauthorized disclosure of surveillance information.
- Public health programs that receive HIV/AIDS information from matching of public health databases should have security and confidentiality protections, and penalties for unauthorized disclosure, equivalent to those for HIV/AIDS surveillance data and personnel.
- Use of HIV/AIDS surveillance data for research purposes should be approved by appropriate institutional review boards, and persons conducting the research must sign confidentiality statements.
- HIV and AIDS surveillance data made available for epidemiologic analyses must not include names or other identifying information. State and local data-release policies should ensure that the release of data for statistical purposes does not result in the direct or indirect identification of persons reported with HIV infection or AIDS.
- In the rare instance of a possible security breach of HIV/AIDS surveillance data, state and local health departments should promptly investigate and report confirmed breaches to CDC, enabling CDC to provide technical assistance, develop recommendations for improved security measures, and provide oversight in monitoring changes in program practices.
# Relation to HIV-Prevention and HIV-Care Programs: Recommended Practices
At the federal level, the primary function of HIV/AIDS surveillance is collecting accurate and timely epidemiologic data for public health planning and policy. Consequently, CDC is authorized to provide federal funds to states through surveillance cooperative agreements, both to achieve the goals of the national surveillance program and to assist states in developing their surveillance programs in accordance with state and local laws and practices. Federal funds authorized for HIV/AIDS surveillance are not provided to states for developing or providing prevention or treatment case-management services; funds for such services are provided by CDC and other federal agencies under separate authorizations. Whether and how states establish a link between individual case-patients reported to their HIV/AIDS surveillance programs and other health department programs and services for HIV prevention and treatment is within the purview of the states.
However, in considering or establishing such linkages, CDC recommends the following:
- The implementation of HIV case surveillance should not interfere with HIV-prevention programs, including those that offer anonymous HIV counseling and testing services. Unless prohibited by state law or regulation, as a condition of federal funding for HIV prevention under a separate authorization, CDC requires that states and local areas provide anonymous HIV counseling and testing services. CDC strongly recommends that states that prohibit anonymous HIV testing change this practice, given the overriding public health objective of encouraging persons to become aware of their HIV serologic status. CDC does not view the availability of publicly funded anonymous counseling and HIV testing as incompatible with the ability to conduct HIV case surveillance in the population.
- HIV testing services should be voluntary and preceded by informed consent in accordance with local laws (91).
- Both public and private providers should refer persons in whom HIV infection has been diagnosed to programs that provide HIV care, treatment, and comprehensive prevention case-management services.
- Provider-based referrals of patients to prevention and care services should provide a timely, effective, and efficient means of ensuring that persons in whom HIV infection has been diagnosed receive needed services.
- States should consult with providers, prevention- and care-planning bodies, and public health professionals in developing the policies and practices necessary to effect these linkages; should require that recipients of HIV/AIDS surveillance information be subject to the same penalties for unauthorized disclosure as HIV/AIDS surveillance personnel; and should evaluate the effectiveness of this public health approach. Such an evaluation should ensure that the public health objectives of such linkages are achieved without unnecessarily increasing security and confidentiality risks to surveillance data or decreasing the acceptability of surveillance programs to health-care providers and affected communities. Providers and affected communities, including HIV-prevention community planning groups, should participate with health departments in planning and implementing surveillance strategies, as well as programs and services.
# COMMENTARY
# Surveillance Case Definition for HIV Infection and AIDS
The revised case definition for HIV infection in adults and children integrates reporting criteria for HIV infection and AIDS in a single case definition and incorporates new laboratory tests in the laboratory criteria for HIV case reporting. The 2000 case definition for HIV infection includes HIV nucleic acid (DNA or RNA) detection tests that were not commercially available when the AIDS case definition was revised in 1993. The revised case definition for HIV infection also permits states to report cases to CDC based on the result of any test licensed for diagnosing HIV infection in the United States. Although the reporting criteria generally reflect the recommendations for diagnosing HIV infection, the HIV reporting criteria are intended for public health surveillance and are not designed for making a diagnosis in an individual patient. The laboratory criteria include the serologic HIV tests described in the clinical standards for diagnosing HIV infection (92-95); a simplified sketch of these reporting criteria follows.
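To show how the adult laboratory reporting criteria can be rendered for surveillance processing, the following is a minimal Python sketch; the function and its boolean inputs are hypothetical simplifications that deliberately omit the full set of licensed tests and the clinical criteria.

```python
def meets_hiv_lab_criteria(screening_reactive: bool,
                           confirmatory_positive: bool,
                           virologic_positive: bool) -> bool:
    """Simplified adult laboratory reporting criteria: a repeatedly reactive
    screening test (e.g., EIA) confirmed by a supplemental test, OR a
    positive result on an HIV virologic (nucleic acid) test. For
    surveillance illustration only -- not a rule for diagnosing a patient."""
    return (screening_reactive and confirmatory_positive) or virologic_positive
```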
The pediatric HIV reporting criteria include criteria for monitoring all children with perinatal exposure to HIV and reflect recent advances in diagnostic approaches that permit the diagnosis of HIV infection during the first months of life. With HIV nucleic acid detection tests, HIV infection can be detected in nearly all infants aged ≥1 month. The timing of the HIV serologic and HIV nucleic acid detection tests and the number of HIV nucleic acid detection tests in the definitive and presumptive criteria for HIV infection are based on the recommended practices for diagnosing infection in children aged <18 months and on evaluations of the performance of these tests for children in this age group (30,77-88).
The clinical criteria in the case definition for HIV infection are included to ensure the complete reporting of cases with documented evidence of HIV infection or conditions meeting the AIDS case definition. The AIDS-defining conditions are included as part of the single case definition for HIV infection. In adults and adolescents aged ≥13 years, criteria for presumptive and definitive AIDS-defining conditions have not been revised since 1993 and continue to include the laboratory markers of severe HIV-related immunosuppression and the opportunistic illnesses indicative of severe HIV disease, which greatly increase mortality risks.
# Effect of National HIV Case Surveillance on Reporting Trends
Changes in the HIV reporting criteria will have little effect on reporting trends in states already conducting HIV case surveillance. However, the number of cases of HIV infection reported nationally will increase, primarily because of implementation of HIV surveillance by the remaining states and local areas. Many of the states that will implement HIV case surveillance in the future have high AIDS incidence rates. Similar to the effect on AIDS surveillance trends after the implementation of the revised reporting criteria in 1993, the initiation of HIV surveillance by additional states might result in a sudden and large increase in HIV case reports (96). On the basis of CDC's estimate that approximately 220,000 HIV-infected persons without AIDS-defining conditions had had HIV infection diagnosed in confidential testing settings and resided in states that were not conducting HIV case surveillance at the end of 1996 (46), the possibility exists that this number of persons could be reported with HIV infection from these states in 2000. However, reporting of prevalent HIV infections is more likely to be spread over several years, and the annual increases will most likely be more modest. Initially, most case reports will represent persons whose HIV infection was diagnosed before the implementation of HIV surveillance. As the reporting of prevalent cases of HIV infection reaches full implementation nationwide, the number of HIV case reports will decrease, and case reports will increasingly represent persons with recent diagnoses of HIV infection.
To facilitate interpretation of HIV surveillance data, and given that CDC strongly promotes continued availability of anonymous testing options, evaluations of HIV/AIDS surveillance systems will include assessments of the representativeness of HIV case surveillance data. These assessments will include special surveys to evaluate the delays between HIV testing and entry to care.
In addition, these evaluations will be useful in determining the effectiveness of program efforts to refer persons into care services after HIV infection has been diagnosed in anonymous testing settings.
AIDS cases have declined nationwide; however, because AIDS surveillance trends are affected by the incidence of HIV infection as well as by the effect of treatment on the progression of HIV disease, future AIDS trends cannot be predicted. AIDS surveillance will continue to be important in evaluating access to care for different populations and in identifying changes in trends that might signal a decrease in the effectiveness of treatment. The long-term benefits of antiretroviral therapy and antimicrobial prophylaxis for AIDS-related illnesses continue to be defined. In addition, various factors (e.g., access, adherence, treatment costs, and viral resistance) will influence the use and effectiveness of these therapies and their effects on AIDS incidence and mortality trends (97-99).
Because trends in new diagnoses of HIV infection are affected by when in the course of disease a person seeks or is offered HIV testing, such trends do not reflect the incidence of HIV infection in the population. In addition, because all HIV-infected persons in the population might not have had the infection diagnosed, these data do not represent total HIV prevalence in the population. Currently, interpretation of these data is complicated by several factors. First, persons might have HIV infection diagnosed and, later during the same calendar year, have AIDS diagnosed, which can complicate presentation of the data. Second, delays in reporting cases of HIV infection tend to be shorter than those for AIDS cases, necessitating development of stage-specific statistical adjustments. Third, methods of imputing exposure risk data for AIDS cases have been developed based on historical patterns of reclassification after investigation, but comparable methods for cases of HIV infection have only recently become available at the national level. Finally, whether a trend in the number of new HIV diagnoses is stable, increasing, or decreasing might reflect current or historical HIV transmission patterns, changes in testing behaviors, and/or the stage of the epidemic in the local geographic area.
Overall, in the United States, the incidence of HIV infection peaked approximately 15 years ago, and the annual number of new HIV infections has been stable at approximately 40,000 since 1992, when CDC estimated the prevalence of HIV infection to be in the range of 650,000-900,000 infected persons (100,101). Based on HIV and AIDS case surveillance data, CDC estimates that the prevalence of HIV infection at the end of 1998 was in the range of 800,000-900,000 infected persons. Of these persons, approximately 625,000 (range: 575,000-675,000) had had HIV infection or AIDS diagnosed (CDC, unpublished data, 1999). Because the annual number of new infections in recent years is lower than during the peak incidence years, over time the remaining untested or anonymously tested infected persons will have HIV infection diagnosed through test seeking, targeted testing, entry to care, or progression of disease to AIDS. Ultimately, the number of new diagnoses of HIV infection will decrease each year as they increasingly represent the smaller pool of more recently infected persons.
Thus, in states that have been conducting HIV case reporting for several years, the number of new diagnoses of HIV infection is expected to decrease and then stabilize at a lower rate if the number of new infections remains stable. For states that newly implement HIV reporting, a large bolus of reported prevalent infections is expected, followed by a decline in the annual number of new cases until the number stabilizes at a lower level. Recently, because of the impact of highly active antiretroviral therapy on survival, the estimated number of new infections each year probably exceeds the number of deaths, and the prevalence of HIV infection might be increasing by a small proportion of total prevalence. Thus, during the transition period to nationwide HIV-infection reporting, measures of the combined prevalence of HIV infection diagnoses and AIDS diagnoses will be most useful in projecting the need for resources for care and prevention. Trends in the numbers of new cases reported will not provide immediate insights into the dynamics of the epidemic because prevalent case reports represent a mixture of new and old HIV infections. Within the next several years, however, all states will be able to characterize new diagnoses of HIV infection, or a representative sample, by demographic and clinical characteristics, providing meaningful insights into actual HIV transmission patterns, and will have characterized the health and service needs of the population of prevalent HIV-infected persons. CDC will develop analysis profiles, statistical adjustments for reporting delays, methods for imputation of risk data, and recommendations for data presentation to assist states in analyzing and interpreting their HIV/AIDS surveillance data during this transition period.
# HIV/AIDS Surveillance Practices
Laboratories will be an increasingly important source of information from which to initiate reporting. HIV infection is frequently diagnosed in the outpatient clinical setting, and laboratory-initiated reporting will be particularly useful in identifying outpatient sources of HIV testing (89), although contact with individual providers is necessary to complete the reporting process. The routine collection of HIV and CD4 test data from laboratories and managed-care organizations promotes completeness of reporting and can increase the simplicity and efficiency of initial case-finding activities by local surveillance programs. Nonetheless, repeated testing of the same persons results in multiple reports and necessitates labor-intensive follow-up to eliminate duplicates. CDC is increasing its efforts to promote standards in laboratory reporting and to facilitate the transfer of data from public health and commercial laboratories to health departments.
Performance criteria for HIV and AIDS surveillance are necessary to ensure that surveillance data are of sufficient quality to target prevention and care resources and to detect emerging trends in the HIV epidemic. Evaluations of HIV and AIDS surveillance programs have documented that areas should be able to meet these performance criteria (5,36,61-67,89,90). According to these evaluations of name-based surveillance systems, the completeness of HIV surveillance (from 79% to approximately 95%) and of AIDS surveillance (from 85% to approximately 95%) is high, and reporting is timely, with nearly one half of AIDS cases and three quarters of cases of HIV infection reported to the national HIV/AIDS reporting system within 3 months of diagnosis (5).
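Timeliness figures of this kind are what underlie the reporting-delay adjustments mentioned above: an observed count is divided by the estimated probability that a case diagnosed a given number of months ago has already been reported. A minimal Python sketch follows; the cumulative probabilities are illustrative assumptions, not published CDC estimates.

```python
import bisect

# Illustrative cumulative probabilities that a case is reported within a
# given number of months of diagnosis (assumed values for this sketch).
HORIZON_MONTHS = [3, 6, 12, 24]
CUM_PROB_REPORTED = [0.50, 0.70, 0.85, 0.98]

def delay_adjusted_count(observed: int, months_since_diagnosis: int) -> float:
    """Inflate an observed case count to account for cases diagnosed
    months_since_diagnosis ago that have not yet been reported."""
    i = bisect.bisect_left(HORIZON_MONTHS, months_since_diagnosis)
    p = CUM_PROB_REPORTED[min(i, len(CUM_PROB_REPORTED) - 1)]
    return observed / p

# Example: 70 cases observed for a diagnosis quarter 6 months old implies
# roughly 70 / 0.70 = 100 eventual cases.
print(delay_adjusted_count(70, 6))  # 100.0
```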
CDC estimates that the duplication rate for cases of HIV infection reported from different states to the national surveillance database was approximately 2%; for AIDS cases, the rate was approximately 3% (5,36). The performance criteria also reflect the need for public health surveillance systems to identify and follow up on cases of public health importance.
On the basis of current evaluation studies of non-name-based case identifiers and the current infrastructure of state and local health departments, name-based methods for collecting and reporting public health data provide the most feasible, simple, and reliable means of ensuring timely, accurate, and complete reporting of persons in whom HIV infection or AIDS has been diagnosed. Confidential name-based reporting also facilitates follow-up of perinatally exposed infants to determine their infection status and of persons reported with HIV infection to determine progression to AIDS and vital status (36,42). A name-based patient identifier allows providers to report cases directly from their name-based medical records, facilitates elimination of duplicate case reports, enables cross-matching of HIV and AIDS data with other name-based public health data (e.g., tuberculosis surveillance), and permits follow-up with providers to collect information regarding risk for HIV infection and other data of public health importance. Through follow-up with providers, the HIV/AIDS surveillance system has provided an effective means to identify rare or unusual modes of HIV transmission and infection with rare strains of HIV and to improve prevention of HIV-related opportunistic illnesses (102-106). CDC will assist states in monitoring the impact of changing medical interventions, epidemiology, and HIV case surveillance policies on test- and care-seeking behaviors.
# Security and Confidentiality of HIV and AIDS Surveillance
The revision of the case definition for HIV infection provides an opportunity to review and strengthen state and local confidentiality laws and regulations. Although state HIV/AIDS surveillance confidentiality laws and regulations adequately protect privacy compared with the statutory protections of other health-care data, state statutes differ in the degree of privacy protection afforded health information and in the criteria for permissible disclosures of personal information. Most state statutes describe some permissible disclosures of public health information. To help ensure uniform confidentiality protections, the Georgetown University Law Center developed the Model State Public Health Privacy Act (69), with expert consultation from public health, legislative, legal, and community advocacy representatives. The model legislative language protects confidential, identifiable information held by state and local public health departments against unauthorized and inappropriate non-public health uses but still allows public health officials to use surveillance information to accomplish the public health objectives defined by the law (69). CDC recommends that states planning to implement HIV case surveillance consider adopting the model legislation, if necessary, to strengthen the current level of protection of public health data.
Although HIV/AIDS surveillance systems have exemplary records of security and confidentiality, it is essential for all programs to identify ways to strengthen data protection because of the perceived greater sensitivity of HIV case surveillance compared with AIDS case surveillance alone (71). Providing accurate public education and factual media messages to inform vulnerable populations, as well as promoting testing programs that facilitate referrals into treatment and prevention services, will be important to ensure that test seeking and acceptance are not adversely affected as additional states implement HIV case reporting. The revised security standards (74) promote enhancements to further reduce any potential for disclosure of sensitive surveillance data. CDC continues to evaluate methods to further enhance data security, including the use of coding and encryption of data collected in the HIV/AIDS reporting system.
# HIV Prevention and Care
CDC has published guidelines concerning the provision and targeting of HIV counseling and testing services (29,41,107-111) and provides support for most public sources of HIV testing. The availability of anonymous HIV testing services might be particularly important for persons who delay seeking testing because of a concern that others might learn of their serologic status (55). Studies have documented that the availability of anonymous HIV testing is associated with increased numbers of persons seeking testing services (112-115). Anonymous HIV testing services are a required element of federally supported prevention programs unless prohibited by state law or regulation. Currently, 39 states, Puerto Rico, and the District of Columbia provide anonymous HIV testing services.
CDC advises that the decision to refer persons reported to the surveillance system to prevention and care services (e.g., partner counseling and referral services [PCRS]) be made at the local level. PCRS programs provide HIV counseling and testing to persons who might be unaware of HIV risk exposures, and these services are a required component of federally sponsored HIV-prevention programs (116,117). The provision of such services to persons in whom HIV infection or AIDS has been diagnosed, especially those who receive services in publicly funded testing and clinic settings, is conducted successfully by states regardless of whether they have implemented HIV reporting (118). Referrals from surveillance to other health department services, when they occur, should be established in a manner that ensures both the quality of the surveillance data and the security of the surveillance system, as well as the quality, confidentiality, and voluntary nature of HIV-prevention services (119). At the federal level, the primary function of HIV/AIDS surveillance remains the provision of accurate epidemiologic data for public health information, planning, and evaluation.
Persons in whom HIV infection has been diagnosed at either confidential or anonymous test sites should be promptly referred to facilities that provide confidential HIV care. Recent studies have documented disparities in ensuring timely testing and access to care by demographic, socioeconomic, and other factors (120,121).
Although not directly responsible for the delivery of medical care, CDC provides federal direction for state and local programs that facilitate referral of HIV-infected persons from counseling and testing centers and health education/risk-reduction programs to HIV care facilities. CDC has developed guidelines to strengthen the system of referrals between HIV testing sites and care programs, in part by increasing coordination with the Health Resources and Services Administration and the Ryan White CARE Act grantees (122). To provide further guidance, CDC has participated in developing model contract language for Medicaid programs that serve persons with HIV infection to ensure cooperation with public health authorities in case reporting and follow-up. A well-developed and well-implemented HIV and AIDS case surveillance system is integral to public health efforts to identify disparities, target programs and resources to vulnerable populations, and assess the impact of these programs in reducing infection, disease, and premature death.
CDC is undertaking a national effort to further reduce perinatal HIV transmission in the United States. This effort will incorporate HIV counseling and voluntary testing, treatment, and outreach to pregnant women, especially those who are racial/ethnic minorities and substance abusers, and will integrate prevention and treatment services for women and children. Surveillance for perinatally HIV-exposed and HIV-infected children will remain a critical measure of the effectiveness of this campaign (32,40,41,123,124).
# CONCLUSION
The implementation of a national surveillance network to include both HIV and AIDS case reporting is a necessary response to epidemiologic trends and new standards for HIV care (125-127). Integrated HIV/AIDS surveillance programs will provide data to characterize persons in whom HIV infection has been newly diagnosed, including those with evidence of recent infection, persons with severe HIV disease (AIDS), and those dying of HIV disease or AIDS. The revised HIV surveillance case definition and the establishment of minimum performance standards will promote uniform case ascertainment and will ensure that the surveillance data are of sufficient quality for effective planning and allocation of resources for prevention and care programs.
# Appendix
# Revised Surveillance Case Definition for HIV Infection*
This revised definition of HIV infection, which applies to any HIV (e.g., HIV-1 or HIV-2), is intended for public health surveillance only. It incorporates the reporting criteria for HIV infection and AIDS into a single case definition. The revised criteria for HIV infection update the definition of HIV infection implemented in 1993 (18); the revised HIV criteria apply to AIDS-defining conditions for adults (18) and children (17,19), which require laboratory evidence of HIV. This definition is not presented as a guide to clinical diagnosis or for other uses (17,18).
Clinical or Other Criteria (if the above laboratory criteria are not met)
OR
Clinical or Other Criteria (if the above definitive or presumptive laboratory criteria are not met)
- Determined by a physician to be "not infected," and a physician has noted the results of the preceding HIV diagnostic tests in the medical record, AND
- NO other laboratory or clinical evidence of HIV infection (i.e., has not had any positive virologic tests, if performed, and has not had an AIDS-defining condition)
IV. A child aged <18 months born to an HIV-infected mother will be categorized as having perinatal exposure to HIV infection if the child does not meet the criteria for HIV infection (II) or the criteria for "not infected with HIV" (III).
*HIV nucleic acid (DNA or RNA) detection tests are the virologic methods of choice to exclude infection in children aged <18 months. Although HIV culture can be used for this purpose, it is more complex and expensive to perform and is less well standardized than nucleic acid detection tests. The use of p24 antigen testing to exclude infection in children aged <18 months is not recommended because of its lack of sensitivity.
§ In adults, adolescents, and children infected by other than perinatal exposure, plasma viral RNA nucleic acid tests should NOT be used in lieu of licensed HIV screening tests (e.g., repeatedly reactive enzyme immunoassay). In addition, a negative (i.e., undetectable) plasma HIV-1 RNA test result does not rule out the diagnosis of HIV infection.
CDC recommends that all states and territories conduct case surveillance for human immunodeficiency virus (HIV) infection as an extension of current acquired immunodeficiency syndrome (AIDS) surveillance activities. The expansion of national surveillance to include both HIV infection and AIDS cases is a necessary response to the impact of advances in antiretroviral therapy, the implementation of new HIV treatment guidelines, and the increased need for epidemiologic data regarding persons at all stages of HIV disease. Expanded surveillance will provide additional data about HIV-infected populations to enhance local, state, and federal efforts to prevent HIV transmission, improve allocation of resources for treatment services, and assist in evaluating the impact of public health interventions. CDC will provide technical assistance to all state and territorial health departments to continue or establish HIV and AIDS case surveillance systems and to evaluate the performance of their surveillance programs. This report includes a revised case definition for HIV infection in adults and children, recommended program practices, and performance and security standards for conducting HIV/AIDS surveillance by local, state, and territorial health departments. The revised surveillance case definition and associated recommendations become effective January 1, 2000.
# INTRODUCTION
AIDS surveillance has been the cornerstone of national efforts to monitor the spread of HIV infection in the United States and to target HIV-prevention programs and health-care services. Although AIDS is the end-stage of the natural history of HIV infection, in the past, monitoring AIDS-defining conditions provided population-based data that reflected changes in the incidence of HIV infection. However, recent advances in HIV treatment have slowed the progression of HIV disease for infected persons on treatment and contributed to a decline in AIDS incidence. These advances in treatment have diminished the ability of AIDS surveillance data to represent trends in the incidence of HIV infection or the impact of the epidemic on the health-care system. As a consequence, the capacity of local, state, and federal public health agencies to monitor the HIV epidemic has been compromised (1-3).
In response to these changes and following consultations with multiple and diverse constituencies (including representatives of public health, government, and community organizations), CDC and the Council of State and Territorial Epidemiologists (CSTE) have recommended that all states and territories include surveillance for HIV infection as an extension of their AIDS surveillance activities (1,4). In this manner, the HIV/AIDS epidemic can be tracked more accurately, and appropriate information about HIV infection and AIDS can be made available to policymakers. CDC continues to support a diverse set of epidemiologic methods to characterize persons affected by the epidemic in the United States (5-10). Although HIV/AIDS case surveillance represents only one component among multiple necessary surveillance strategies, this report focuses primarily on CDC's recommendation to implement HIV case reporting nationwide. This report provides a revised case definition for HIV infection in adults and children, recommended program practices, and performance and security standards for conducting HIV/AIDS surveillance by local, state, and territorial health departments.
The case definition for HIV infection was revised in consultation with CSTE and includes the current AIDS surveillance criteria as a component (11). The recommended program practices and performance and security standards are based on a) the established practices of AIDS and other public health surveillance systems; b) reviews of state and local surveillance programs, confidentiality statutes, and security procedures; c) studies of the performance of surveillance systems; d) ongoing evaluations of determinants of test seeking or test avoidance in relation to state policies and practices on HIV testing and reporting; and e) discussions at a consultation held by CDC and CSTE in May 1997. A draft of this report was made available for public comment from December 10, 1998, to January 11, 1999, through a notice published in the Federal Register (12).
# BACKGROUND
# History of AIDS and HIV Case Surveillance
Since the epidemic was first identified in the United States in 1981, population-based AIDS surveillance (i.e., reporting of AIDS cases and their characteristics to public health authorities for epidemiologic analysis) has been used to track the progression of the HIV epidemic, from the initial case reports of opportunistic illnesses caused by a then unknown agent in a few large cities to the reporting of 711,344 AIDS cases nationwide through June 30, 1999 (5,13-15). The AIDS reporting criteria have been periodically revised to incorporate new understanding of HIV disease and changes in medical practice (16-19). In the absence of effective therapy for HIV infection, AIDS surveillance data have reliably detected changing patterns of HIV transmission and reflected the effect of HIV-prevention programs on the incidence of HIV infection and related illnesses in specific populations (20-25). Because of these attributes, AIDS surveillance data have been used as a basis for allocating many federal resources for HIV treatment and care services and as the epidemiologic basis for planning local HIV-prevention services.
With the advent of more effective therapy that slows the progression of HIV disease, AIDS surveillance data no longer reliably reflect trends in HIV transmission and do not accurately represent the need for prevention and care services (26,27). In 1996, national AIDS incidence and AIDS deaths declined for the first time during the HIV epidemic (Figure 1). These declines have been primarily attributed to the early use of combination antiretroviral therapy, which delays the progression to AIDS and death for persons with HIV infection (1-3,9). Revised HIV treatment guidelines recommend antiretroviral therapy for many HIV-infected persons in whom AIDS-defining conditions have not yet developed (28-30). In addition, antiretroviral treatment of pregnant women and their newborns has reduced perinatal HIV transmission and resulted in dramatic declines in the incidence of perinatally acquired AIDS (31,32) (Figure 2). In response to these changes in HIV treatment practices and the information needs of public health and other policymakers, CDC and CSTE have recommended that all states and territories extend their AIDS case surveillance activities to include HIV case surveillance and the reporting of HIV-exposed infants (1,4,33). Since 1985, many states have implemented HIV case reporting as part of their comprehensive HIV/AIDS surveillance programs.
As of November 1, 1999, a total of 34 states and the Virgin Islands (VI) had implemented HIV case surveillance using the same confidential system for name-based case reporting for both HIV infection and AIDS; two of these states conduct pediatric surveillance only (5) (Figure 3). Areas that conduct integrated HIV/AIDS surveillance for adults, adolescents, and children have reported 42% of cumulative U.S. AIDS cases. In addition, four states (Illinois, Maine, Maryland, and Massachusetts) and Puerto Rico, representing 11% of cumulative AIDS cases, are reporting cases of HIV infection using a coded identifier rather than the patient's name. Washington has implemented HIV reporting by patient name to enable public health follow-up; after services and referrals are offered, names are converted into codes. In most other states, HIV case reporting is under consideration, or laws, rules, or regulations enabling HIV surveillance are expected to be implemented during 2000.
In contrast to AIDS case surveillance, HIV case surveillance provides data to better characterize populations in which HIV infection has been newly diagnosed, including persons with evidence of recent HIV infection, such as adolescents and young adults aged 13-24 years (34,35). Of the 52,690 HIV infections diagnosed from January 1994 through June 1997 in 25 states that conducted name-based HIV surveillance throughout this period, 14% occurred in persons aged 13-24 years. In comparison, of the 20,215 persons in whom AIDS was diagnosed in these 25 states, only 3% were aged 13-24 years. Thus, AIDS case surveillance alone does not accurately reflect the extent of the HIV epidemic among adolescents and young adults. Compared with persons reported with AIDS, those reported with HIV infection in these 25 states were more likely to be women and members of racial/ethnic minorities (36) (Table 1). These patterns reflect the characteristics of populations that were affected by the epidemic more recently, but they might also reflect changes in testing practices or behaviors (6,36,37). Compared with the diagnosis of AIDS, which can be delayed among HIV-infected persons receiving antiretroviral therapy, the first diagnosis of HIV infection is not delayed by treatment but is affected by testing behaviors and targeted testing programs. In addition, in these 25 states as of June 30, 1999, the total number of persons (159,083) reported as living with either a diagnosis of HIV infection (90,699) or AIDS (68,384) was 133% greater than the number living with AIDS alone (5). Therefore, these states have documented that the combined prevalence of those living with a diagnosis of HIV infection and those living with AIDS provides a more realistic and useful estimate of the resources needed for patient care and services than does AIDS prevalence alone.
States with confidential name-based HIV case surveillance systems have used data on all perinatally exposed children to document the sharp decline in perinatally acquired HIV infection, the increase in the proportion of infected pregnant women who have been tested for HIV infection before delivery, and the high proportion of HIV-infected pregnant women who accept zidovudine therapy (31,38-44). These findings contribute to HIV-prevention policy development.
CSTE and the American Academy of Pediatrics have recommended that all states and territories conduct pediatric HIV surveillance that includes all perinatally exposed infants to facilitate follow-up to assess infection status and access to care (11,31,33,40,45).
Persons can choose to be tested for HIV in the following ways: a) anonymously, whereby identifying information, including patient name and other locating information, is not linked to the HIV test result (e.g., at anonymous testing sites), and b) confidentially, whereby the HIV test result is linked to identifying information such as patient and provider names (e.g., at medical clinics). In states that require HIV case reporting, providers in confidential medical or testing sites are required to report HIV-infected persons to public health authorities. Not all persons infected with HIV are tested, and of those who are, testing occurs at different stages of their infection. Therefore, HIV surveillance data provide a minimum estimate of the number of infected persons and are most representative of persons who have had HIV infection diagnosed in medical clinics and other confidential diagnostic settings. The data represent the characteristics of persons who recognize their risk and seek confidential testing, who are offered HIV testing (e.g., pregnant women and clients at sexually transmitted disease [STD] clinics), who are required to be tested (e.g., blood donors and military recruits), and who are tested because they present with symptoms of HIV-related illnesses. CDC estimated that, in 1996, approximately two thirds of all infected persons in the United States had had HIV infection diagnosed in such settings (46). HIV surveillance data might not represent untested persons or those who seek testing at anonymous test sites or with home collection kits; such persons are not reported to confidential HIV/AIDS surveillance systems. However, the availability of anonymous testing is important in promoting knowledge of HIV status among at-risk populations and provides an opportunity for counseling to reduce high-risk behaviors and for voluntary referrals to appropriate medical diagnosis and prevention services.
Despite their current limitations, HIV and AIDS case surveillance data together can provide a clearer picture of the HIV epidemic than AIDS case surveillance data alone. Therefore, CDC and CSTE continue to recommend that all areas implement HIV case reporting as part of a comprehensive strategy to monitor HIV infection and HIV disease. The strategy should also include surveys of the incidence and prevalence of HIV infection; AIDS case surveillance; monitoring of HIV-related mortality; supplemental research and evaluation studies, including behavioral surveillance; and statistical estimation of the incidence and prevalence of infection and disease.
# Considerations in Implementing Nationwide HIV Case Surveillance
The nationwide implementation of the 1993 expanded AIDS surveillance case definition prompted renewed discussions of the rationale and need for data representing HIV-infected persons who did not meet the AIDS-defining criteria.
Because many states were considering implementing HIV reporting, CDC held a consultation in 1993 with public health and community representatives to discuss relevant issues and concerns. Community representatives' main concerns were that the security and confidentiality standards of surveillance programs might not be sufficient to prevent disclosures of information and that many persons at risk for HIV infection might therefore delay seeking HIV counseling and testing because of these confidentiality concerns. The consensus of the consultants was that few published studies were of sufficient scientific quality to assess these concerns. Therefore, the consultants identified several areas that required additional research and policy development before CDC and CSTE should consider recommending further expansion of HIV surveillance efforts. These areas included a) the impact of reporting policies on testing behaviors and practices, including the decreased availability of anonymous testing in some states; b) the role of surveillance data in linking reported persons to prevention and care programs; c) the development of recommended standards for the security and confidentiality of publicly held HIV/AIDS surveillance data; and d) determination of whether alternatives to reporting of patient names would reduce confidentiality risks while meeting the needs for high-quality surveillance data.
In response to the consultants' recommendations, CDC initiated several research projects to a) assess the effect of confidential name-based HIV surveillance on persons' willingness to seek HIV testing and care; b) review program practices and legal requirements for the security and confidentiality of state and local HIV/AIDS surveillance data; and c) evaluate the performance of coded-identifier-based surveillance systems. Findings from these projects and expert advice from participants at numerous technical meetings and consultations held during the intervening period have guided formulation of the policies and practices recommended in this report. The findings from these projects are summarized in the following three subsections: HIV surveillance and testing behavior, HIV surveillance using non-name-based unique identifiers, and confidentiality of HIV surveillance data.
# HIV Surveillance and Testing Behavior
Few studies have characterized test- or care-seeking behaviors in relation to state HIV reporting policies. A 1988 general population study of previous or planned use of HIV testing services did not identify an association between reporting policy and testing behavior (47). In contrast, interviews of persons seeking anonymous testing in 1989 documented that many would avoid testing if a positive test resulted in name reporting or partner notification (48). A review of the published literature on HIV testing behaviors highlighted several limitations and biases in previous studies (49), including small numbers, lack of geographic and risk-group representativeness, and analysis of intent to test rather than of actual testing behavior. An additional limitation of the available literature is that studies published 5-10 years ago might not reflect actual testing behaviors in the current treatment era. Literature that highlights potential misuse of public health surveillance data might have the unintended effect of increasing test avoidance among some at-risk persons (50).
Examining knowledge of and perceptions about testing and reporting, as well as actual testing behavior, in the context of current treatment advances and evolving HIV reporting policies can address some of the limitations of previous research. To determine the effect of changes in reporting policies on actual testing behaviors among persons seeking testing at publicly funded HIV counseling and testing sites, CDC and six state health departments reviewed data routinely collected from these sites to compare HIV testing patterns during the 12 months before and the 12 months after implementation of HIV case surveillance (51). In these areas, the number of HIV tests increased in four states and decreased in two states; the declines were not statistically significant. All the analysis periods (25-month periods during 1992-1996) antedated the widespread beneficial effects of highly active antiretroviral therapy. Slight variability in testing trends was observed among racial/ethnic subgroups and HIV-risk exposure categories; however, these data do not suggest that, in these states, the policy of implementing HIV case reporting adversely affected test-seeking behaviors overall (52).
CDC also supported studies by researchers at the University of California at San Francisco and participating state health departments to identify the most important determinants of test seeking or test avoidance among high-risk populations and to assess the impact of changes in HIV testing and HIV reporting policies. Surveys of high-risk persons in nine selected states about their perceptions and knowledge of HIV testing and reporting practices documented that few respondents knew the HIV reporting policies of their respective states (53,54). In surveys conducted during 1995-1996, respondents reported high levels of testing, with approximately three fourths reporting that they had had an HIV test. The most commonly reported factors (by nearly half of respondents) that might have contributed to delays in seeking testing or to not getting tested were fear of having HIV infection diagnosed or belief that they were not likely to be HIV infected. "Reporting to the government" was a concern that might have contributed to a delay in seeking HIV testing for 11% of heterosexuals, 18% of injecting-drug users, and 22% of men who have sex with men; <1%, 3%, and 2% of respondents in these risk groups, respectively, indicated that this was their main concern. Concern about name-based reporting of HIV infections to the government was a factor in not testing for HIV for 13% of heterosexuals, 18% of injecting-drug users, and 28% of men who have sex with men. As the main factor for not testing, concern about name-based reporting to the government was substantially less common in all risk groups (1% of heterosexuals, 1% of injecting-drug users, and 4% of men who have sex with men) (55). These findings suggest that name-based reporting policies might deter a small proportion of persons with high-risk sexual or drug-using behaviors from seeking testing and therefore support the need for strict adherence to confidentiality safeguards for public health testing and surveillance data. In addition, the survey documented that the availability of an anonymous testing option is consistently associated with higher rates of intention to test in the future.
In this survey, high levels of testing, together with high levels of test delay or avoidance for reasons other than concern about name reporting, suggest that addressing these other concerns may have a greater effect on testing behavior. For example, 59% of men who have sex with men reported being "afraid to find out" as a factor in not testing, and 27% reported it as the main factor for not testing. In addition, 52% of men who have sex with men reported being "unlikely to have been exposed" as a factor in not testing, and 17% reported it as the main factor. In a companion survey of persons reported with AIDS in eight of these same states, participants who had recognized their HIV risk and sought testing at anonymous testing sites reported entering care at an earlier stage of HIV disease than persons who were first tested in a confidential testing setting (e.g., STD clinics, medical clinics, or hospitals), where persons are frequently first tested when they become ill (56). These data suggest that anonymous testing options are important in promoting timely knowledge of HIV status for some at-risk persons.
# HIV Surveillance Using Non-Name-Based Unique Identifiers
To assess the feasibility of using alternatives to confidential name-based methods for HIV surveillance, several states implemented reporting of cases of HIV infection or CD4 (a marker of immunosuppression in HIV-infected persons) laboratory test results using various numeric or alphanumeric codes. Other states considered or tried to conduct case surveillance without name identifiers by using codes designed for nonsurveillance purposes (e.g., codes intended for use in tracking patients in case-management systems) (57). In May 1995, CDC convened a meeting at which these states identified operational, technical, and scientific challenges in conducting surveillance using coded identifiers rather than patient names. The states recommended that CDC evaluate additional coded identifiers and assist them in documenting and disseminating the results of their findings. In addition, CDC supported research to evaluate the performance of a coded unique identifier (UI) in two states that implemented a non-name-based HIV case-reporting system while maintaining name-based surveillance methods for AIDS (58). The study, conducted by Maryland and Texas during 1994-1996 in collaboration with CDC, documented nearly 50% incomplete reporting, in part because the social security number necessary to construct the identifier code was not uniformly available in medical or laboratory records. In Maryland, provider-maintained logs were needed to link the UI to name-based medical records to obtain follow-up data (e.g., on HIV risk/exposure). A more recent evaluation conducted by the Maryland Department of Health and Mental Hygiene (MDHMH), using data from a publicly funded counseling and testing site, documented a higher level of completeness of HIV reporting (88%) than the 50% documented in the previous study (58,59). MDHMH reports that its code is unique to a given person and that assignment of two different codes to the same person is unlikely; that is, the probability that a given code can distinguish one person from any other is >99% if all the elements of the code are complete and accurate. No published evaluations have assessed the probability of assigning the same code to different persons, which could occur if elements of the code were missing.
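The completeness problem described above can be illustrated with a hypothetical coded identifier. The element layout in this Python sketch is an assumption for illustration and is not the actual Maryland UI algorithm; it shows how a single missing element (e.g., an unavailable social security number) makes the code unbuildable and the case unreportable.

```python
from typing import Optional

def coded_identifier(ssn_last4: Optional[str], dob: Optional[str],
                     sex: Optional[str], race: Optional[str]) -> Optional[str]:
    """Build a hypothetical non-name code from record elements.
    Returns None when any element is missing, mirroring how incomplete
    medical or laboratory records translate into unreportable cases."""
    if not all([ssn_last4, dob, sex, race]):
        return None
    return f"{ssn_last4}{dob.replace('-', '')}{sex[0].upper()}{race[:2].upper()}"

print(coded_identifier("1234", "1970-01-31", "F", "BLACK"))  # '123419700131FBL'
print(coded_identifier(None, "1970-01-31", "F", "BLACK"))    # None -> lost case
```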
In contrast to MDHMH's findings, analogous evaluations in Texas, as well as studies that used more diverse methods in Los Angeles and New Jersey, failed to identify a code that performs as well as name-based methods (58,60-67). On the basis of published evaluations (58), Texas recently switched to name-based HIV case surveillance. In addition to Maryland, three other states (Illinois, Maine, and Massachusetts) and Puerto Rico recently implemented HIV reporting using four different coded identifiers. CDC will assist these states in implementing their systems, in establishing standardized criteria for assessing the overall performance of their systems, and in assessing whether the required standards are achieved. Additional evaluations will be conducted by the respective state health departments, in collaboration with CDC, to determine a) the ability of coded identifiers to accurately track disease progression from HIV infection to AIDS to death; b) their utility for evaluating public health efforts to eliminate perinatal HIV transmission; c) their acceptability; and d) their usefulness in matching to other databases (e.g., tuberculosis).
# Confidentiality of HIV Surveillance Data
A 1994 review of state confidentiality laws that protect HIV surveillance data documented that all states and many localities have legal safeguards for the confidentiality of government-held health data (68). These laws provide greater protection than laws protecting the confidentiality of information in health records held by private health-care providers. Most states have specific statutory protections for public health data related to HIV infection and other STDs. However, state legal protections vary, and CDC supports additional efforts to strengthen privacy protections for public health data. On the basis of input from expert legal and public health consultants, the Model State Public Health Privacy Act (69) was developed by an independent contractor at the behest of CSTE. If enacted by states, the provisions of the Model Act would ensure the confidentiality of surveillance data, strengthen statutory protections against disclosure, and preclude the intended or unintended use of surveillance data for non-public health purposes.
CDC has reviewed state and local security policies and procedures for HIV/AIDS surveillance data. Since 1981, states have conducted AIDS surveillance, and few breaches of security have resulted in the unauthorized release of data (70,71). Because survival has improved for HIV-infected persons, information about them might be maintained in public health surveillance databases for longer periods, which has resulted in increased concerns among public health and community groups about the confidentiality of surveillance data (72). Therefore, CDC has issued technical guidance for security procedures that includes enhanced confidentiality and security safeguards as evaluation criteria for federal funding of state HIV/AIDS surveillance activities (73). The receipt of federal surveillance funding depends on the recipient's ability to ensure the physical security and confidentiality of case reports. At the federal level, HIV/AIDS surveillance data are protected by several federal statutes, which ensure that CDC will not release HIV/AIDS surveillance data for non-public health purposes (e.g., for use in criminal, civil, or administrative proceedings). Privacy is also ensured by the removal of names and the encryption of data transmitted to CDC.
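The "double-key" arrangement described in the recommendations is, in effect, asymmetric encryption. The following is a minimal Python sketch using the third-party `cryptography` package; the key size and the choice of RSA with OAEP padding are illustrative assumptions, not the system CDC would deploy.

```python
# Requires the third-party `cryptography` package (pip install cryptography).
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

OAEP = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# The "second key" (private) would be held by CDC under the Assurance of
# Confidentiality; the "first key" (public) is distributed to state programs.
second_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
first_key = second_key.public_key()

ciphertext = first_key.encrypt(b"case identifying information", OAEP)  # state side
plaintext = second_key.decrypt(ciphertext, OAEP)  # only the second-key holder
assert plaintext == b"case identifying information"
```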
On the basis of the importance of maintaining the confidentiality of persons in whom HIV infection has been diagnosed by public or private health-care providers, CDC has recommended additional standards to enhance the security and confidentiality of HIV and AIDS surveillance data (74,75 ).

# GUIDELINES FOR SURVEILLANCE OF HIV INFECTION AND AIDS

# HIV Surveillance Case Definition for Adults and Children

CDC, in collaboration with CSTE, has established a new case definition for HIV infection in adults and children that includes revised surveillance criteria for HIV infection and incorporates the surveillance criteria for AIDS (17-19,76 ) (Appendix). HIV infection and AIDS case reports forwarded to CDC should be based on this definition. For adults and children aged ≥18 months, the HIV surveillance case definition includes laboratory and clinical evidence specifically indicative of HIV infection and severe HIV disease (AIDS). For children aged <18 months (except for those who acquired HIV infection other than by perinatal transmission), the HIV surveillance case definition updates the definition in the 1994 revised classification system. In addition, the new case definition is based on recent data regarding the sensitivity and specificity of HIV diagnostic tests in infants and clinical guidelines for Pneumocystis carinii pneumonia (PCP) prophylaxis for children (19,77-88 ) and for use of antiretroviral agents for pediatric HIV infection (30 ). The revised surveillance case definitions for adults and children become effective January 1, 2000.

# HIV/AIDS Case Surveillance Practices and Standards

CDC and CSTE recommend that all states require reporting to public health surveillance of all cases of perinatal HIV exposure in infants, the earliest diagnosis of HIV infection (exclusive of anonymous tests) and the earliest diagnosis of AIDS in persons of all ages, and deaths among these persons (4,33 ). Such reporting should constitute the core minimum performance standard for HIV/AIDS surveillance in all states and territories. CDC provides federal funds and technical assistance to states to establish and conduct active HIV/AIDS surveillance programs. On the basis of feasibility, needs, and resources, areas may be funded to implement additional surveillance activities (e.g., supplemental research and evaluation studies and serologic surveys), but these approaches might not be necessary in all areas. The following recommended practices update and revise the CDC Guidelines for HIV/AIDS Surveillance released in 1996 and updated in 1998 as a technical guide for state and local HIV/AIDS surveillance programs (34,73-75 ). Recommended practices represent CDC's guidance for best public health practice based on available scientific data. Programmatic standards set minimum requirements for states to receive support from CDC for HIV/AIDS surveillance activities.

# Recommended Surveillance Practices

• All state and local programs should collect a standard set of surveillance data for all cases that meet the reporting criteria for HIV infection and AIDS (a record sketch illustrating this data set follows this list). The standard data set includes the a) patient identifier, b) earliest date of diagnosis of HIV infection, c) earliest date of diagnosis of an AIDS-defining condition, d) demographic information (e.g., date of birth, race/ethnicity, and sex) and residence (i.e., city and state) at diagnosis of HIV infection and of AIDS, e) HIV risk exposure, f) facility of diagnosis, and g) date of death and state of residence at death.
In addition to this information, the date of HIV diagnostic testing, the results of these tests, and exposure to antiretroviral treatment for reducing perinatal HIV transmission should be collected for all infants with perinatal exposures to HIV. Surveillance information, without patient identifiers, should be encrypted and forwarded to CDC through the HIV/AIDS Reporting System (or equivalent) in accordance with current practice. To address specific public health information needs, local surveillance programs can cross-match HIV and AIDS surveillance data with other public health data (e.g., tuberculosis data) and collect supplemental surveillance data on all or a representative sample of cases. CDC will provide technical assistance and recommend standardized surveillance methods to assist in collecting supplemental surveillance information.

• On the basis of studies of coded identifier systems conducted in at least eight states, published evaluations of name-based and code-based surveillance systems, and CDC's assessment of the quality and reproducibility of the available data, CDC has concluded that confidential name-based HIV/AIDS surveillance systems are most likely to meet the necessary performance standards (36,58,60-67,89,90 ), as well as to serve the public health purposes for which surveillance data are required. Therefore, CDC advises that state and local surveillance programs use the same confidential name-based approach for HIV surveillance as is currently used for AIDS surveillance nationwide. However, CDC recognizes that some states have adopted, and others may elect to adopt, coded case identifiers for public health reporting of HIV infection. CDC will provide technical assistance to all state and local areas to continue or establish HIV/AIDS surveillance systems and to evaluate their surveillance programs using standardized methods and criteria, whether they use name-based or coded identifiers.

• HIV and AIDS surveillance should be used to identify rare or previously unrecognized modes of HIV transmission, unusual clinical or virologic manifestations, and other cases of public health importance. Providers are the most likely and timely source of identifying unusual laboratory or clinical cases. They are encouraged to promptly report atypical cases to local, state, or territorial public health officials for follow-up. CDC will provide technical assistance to state and local health departments conducting such investigations and will revise public health recommendations based on the findings, as appropriate.

• HIV and AIDS case surveillance efforts should result in collection of data from all private and public sources of HIV-related testing and care services. Laboratory-initiated surveillance methods should identify all cases that meet the laboratory reporting criteria for HIV infection and/or AIDS. However, these methods will require follow-up with the provider to verify the infection status or clinical stage and obtain complete demographic and exposure risk data. HIV-infected persons who are initially tested anonymously are eligible to be reported to CDC's HIV/AIDS surveillance database only after they have had HIV infection diagnosed in a confidential testing setting (e.g., by a health-care provider) and have test results or clinical conditions that meet the HIV and/or AIDS reporting criteria.
• All state and local surveillance programs should regularly publish, in print or electronically, aggregated HIV/AIDS surveillance data in a format that facilitates use of these data by federal, state, and local public health agencies, HIV-prevention community planning groups and care-planning councils, academic institutions, providers and institutions that have reported cases, community-based organizations, and the general public. Presentation of surveillance data should be consistent with established policies for data release that preclude the direct or indirect identification of a person with HIV infection or AIDS. CDC will increase its efforts to coordinate requests for HIV/AIDS surveillance data across federal government agencies to use state/local surveillance resources efficiently. CDC will also develop specific guidelines for analyzing and interpreting HIV/AIDS surveillance data.

• All state and local surveillance programs should conduct regular, ongoing assessments of the performance of the surveillance system and redirect efforts and resources to ensure timely reporting of complete, representative, and accurate data. CDC will provide technical assistance and recommend standardized evaluation methods to assist states in achieving the highest possible level of performance and to promote comparability of data throughout the United States.
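As a concrete illustration of the standard data set named in the first recommended practice above, the following sketch models items a) through g) as a single record type. The field names are this sketch's own; they are illustrative and do not reproduce the actual HIV/AIDS Reporting System schema.

```python
# Illustrative record type for the standard surveillance data set (items a-g)
# listed in the recommended practices above. Field names are assumptions for
# this sketch, not the HIV/AIDS Reporting System's actual schema.
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class HIVCaseReport:
    patient_identifier: str              # a) name-based or coded, per state law
    hiv_diagnosis_date: date             # b) earliest diagnosis of HIV infection
    aids_diagnosis_date: Optional[date]  # c) earliest AIDS-defining condition, if any
    date_of_birth: date                  # d) demographics ...
    race_ethnicity: str
    sex: str
    residence_at_diagnosis: str          #    ... and city/state at diagnosis
    hiv_risk_exposure: Optional[str]     # e) may await epidemiologic follow-up
    diagnosing_facility: str             # f) facility of diagnosis
    date_of_death: Optional[date] = None          # g) if deceased ...
    residence_at_death: Optional[str] = None      #    ... and state of residence
```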
# Minimum Performance Standards

• To provide accurate and timely data for monitoring HIV/AIDS trends and ensuring a reliable measure of the number of persons in need of HIV-related prevention and care services, state and local HIV/AIDS surveillance systems should use reporting methods that provide case reporting that is complete (≥85%) and timely (≥66% of cases reported within 6 months of diagnosis). In addition, evaluation studies should demonstrate that the approach used to conduct surveillance (i.e., name-based or coded identifier) results in accurate case counts (≤5% duplicate case reports and ≤5% incorrectly matched case reports); a sketch of these checks follows this section. Finally, at least 85% of reported cases or a representative sample should have information regarding risk for HIV infection after epidemiologic follow-up is completed. All HIV/AIDS surveillance systems should collect the recommended standard data in a reliable and valid manner, allow matching to other public health databases (e.g., death registries) to benefit specific public health goals, and allow identification and follow-up of individual cases of public health importance.

• To assess the quality of HIV and AIDS case surveillance as specified in the performance standards, states and local surveillance programs must conduct periodic evaluation studies. CDC will recommend several evaluation methods to enable states to select methods best suited to their program needs and resources. States should also evaluate the representativeness of their HIV case reports by monitoring the potential impact of HIV surveillance on test-seeking patterns and behaviors and review the extent to which surveillance data are being used for planning, targeting, and evaluating HIV-prevention programs and services. The goal of these performance evaluations is to enhance the quality and usefulness of surveillance data for public health action. During the next several years (i.e., 2000-2002), CDC will assist states in transitioning to an integrated HIV/AIDS surveillance system by evaluating current performance levels, instituting revised program operations and policies as necessary, and then reassessing performance. Following this transition period, CDC will evaluate and award proposals for federal funding of state and local surveillance programs based on their capacity to meet these performance standards. At that time, CDC will require that recipients of federal funds for HIV/AIDS case surveillance adopt surveillance methods and practices that will enable them to achieve the standards to ensure that federal funds are awarded responsibly.
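The numeric thresholds above lend themselves to a simple evaluation check. This sketch assumes an evaluation study has already produced a de-duplicated report list, a duplicate count, and an estimate of the true number of diagnoses; the function name and inputs are illustrative, not a prescribed CDC method.

```python
# Sketch of a check against the minimum performance standards stated above.
# Thresholds come from the text; inputs would come from a state evaluation
# study (estimated true case count, duplicate count, and report dates).
from datetime import date

def meets_minimum_standards(reports: list[tuple[date, date]],
                            estimated_true_cases: int,
                            duplicates: int) -> bool:
    """reports holds (diagnosis_date, report_date) pairs for unique cases."""
    completeness = len(reports) / estimated_true_cases
    timely = sum(1 for dx, rpt in reports if (rpt - dx).days <= 183)  # ~6 months
    duplicate_rate = duplicates / (len(reports) + duplicates)
    return (completeness >= 0.85 and            # complete: >=85% of cases reported
            timely / len(reports) >= 0.66 and   # timely: >=66% within 6 months
            duplicate_rate <= 0.05)             # accurate: <=5% duplicate reports
```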
# Recommended Security and Confidentiality Practices

• State and local programs should document their security policies and procedures and ensure their availability for periodic review.

• State and local health departments should minimize storage and retention of unnecessary or redundant paper or electronic reports and should review their data-retention policies consistent with CDC technical guidelines (73-75 ). States should consider and evaluate removing names from surveillance records when they no longer serve the public health purpose for which they were collected. Policies should provide the flexibility to remove cases that were reported in error or that are determined on follow-up not to be infected with HIV. CDC will develop guidance for confirming HIV-infection status as testing and vaccine technologies evolve.

• State and local health departments should also review their confidentiality practices to determine whether additional protections should be established (e.g., before implementation of HIV case surveillance). States that plan to implement HIV case surveillance should review their current confidentiality statutes to determine whether they need to be strengthened. The Model State Public Health Privacy Act (69 ) should be considered by states in developing their statutory protections of HIV/AIDS surveillance data. Confidentiality laws should protect surveillance data that are transmitted (in a secure and confidential manner consistent with CDC's HIV/AIDS surveillance program requirements) to other public health programs as part of evaluation studies or for follow-up of cases of special public health importance. The penalties for violating privacy and security should apply to all recipients of HIV/AIDS case surveillance information.

• To further enhance security and confidentiality of data, states are encouraged to implement use of a double-key encryption and decryption system, in which identifying information encrypted by states using the first key can only be decrypted for access using the second key (see the sketch following this list). CDC will develop this option at the request of states that wish to reassure HIV-infected persons that HIV and AIDS surveillance data will be held confidentially and will be used only for specified public health purposes. CDC will hold the second key under an Assurance of Confidentiality under Section 308(d) of the Public Health Service Act, which governs how CDC uses or releases surveillance data voluntarily shared with CDC by the states. Under this assurance, CDC is prohibited from providing that key to a state planning to use HIV/AIDS surveillance data for non-public health purposes.
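The double-key arrangement in the last practice above is described only at a high level, so the sketch below is an interpretation: it models the idea with two layered symmetric keys from the pip-installable `cryptography` package, though an asymmetric scheme would serve the stated description equally well. The separation of key custody, not the particular cipher, is the point: identifiers sealed by a state cannot be recovered without the second, escrowed key.

```python
# Minimal sketch of a "double-key" arrangement, assuming the `cryptography`
# package. The document describes the system only at a high level; here it
# is modeled as two layered symmetric keys, so identifying fields sealed by
# the state cannot be opened without the second key (held, per the text, by
# CDC under a Section 308(d) Assurance of Confidentiality).
from cryptography.fernet import Fernet

state_key = Fernet.generate_key()    # retained by the state health department
second_key = Fernet.generate_key()   # escrowed under the federal assurance

def seal(identifier: bytes) -> bytes:
    """State-side: encrypt with the state key, then with the second key."""
    return Fernet(second_key).encrypt(Fernet(state_key).encrypt(identifier))

def unseal(token: bytes) -> bytes:
    """Decryption requires both keys; neither party alone can recover names."""
    return Fernet(state_key).decrypt(Fernet(second_key).decrypt(token))

sealed = seal(b"DOE, JANE 1961-03-15")
assert unseal(sealed) == b"DOE, JANE 1961-03-15"
```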
# Minimum Security and Confidentiality Standards

The security and confidentiality policies and procedures of state and local surveillance programs should be consistent with CDC standards for the security of HIV/AIDS surveillance data (73,74 ). The minimum security criteria were established following reviews of all state and numerous local health department HIV/AIDS surveillance programs. In general, the reviews documented that health departments have achieved a high level of security and that most state health departments meet or exceed the minimum standards. Beginning in 2000, CDC will require that recipients of federal funds for HIV/AIDS surveillance establish the minimum security standards and include their security policy in applications for surveillance funds (73,74 ). Examples of these standards include the following:

• Electronic HIV/AIDS surveillance data should be protected by computer encryption during data transfer. States should continue the established practice of not including personal identifying information in HIV/AIDS surveillance data forwarded to CDC.

• HIV and AIDS surveillance records should be located in a physically secured area and should be protected by coded passwords and computer encryption.

• Access to the HIV/AIDS surveillance registry should be restricted to a minimum number of authorized surveillance staff, who are designated by a responsible authorizing official, have been trained in confidentiality procedures, and are aware of penalties for unauthorized disclosure of surveillance information.

• Public health programs that receive HIV/AIDS information from matching of public health databases should have security and confidentiality protections and penalties for unauthorized disclosure equivalent to those for HIV/AIDS surveillance data and personnel.

• Use of HIV/AIDS surveillance data for research purposes should be approved by appropriate institutional review boards, and persons conducting the research must sign confidentiality statements.

• HIV and AIDS surveillance data made available for epidemiologic analyses must not include names or other identifying information. State and local data release policies should ensure that the release of data for statistical purposes does not result in the direct or indirect identification of persons reported with HIV infection and AIDS.

• In the rare instance of a possible security breach of HIV/AIDS surveillance data, state and local health departments should promptly investigate and report confirmed breaches to CDC. This reporting enables CDC to provide technical assistance, develop recommendations for improvements in security measures, and provide oversight in monitoring changes in program practices.

# Relation to HIV-Prevention and HIV-Care Programs: Recommended Practices

At the federal level, the primary function of HIV/AIDS surveillance is collecting accurate and timely epidemiologic data for public health planning and policy. Consequently, CDC is authorized to provide federal funds to states through surveillance cooperative agreements, both to achieve the goals of the national surveillance program and to assist states in developing their surveillance programs in accordance with state and local laws and practices. Federal funds authorized for HIV/AIDS surveillance are not provided to states for developing or providing prevention or treatment case-management services; funds for such services are provided by CDC and other federal agencies under separate authorizations. Whether and how states establish a link between individual case-patients reported to their HIV/AIDS surveillance programs and other health department programs and services for HIV prevention and treatment is within the purview of the states. However, in considering or establishing such linkages, CDC recommends the following:

• The implementation of HIV case surveillance should not interfere with HIV-prevention programs, including those that offer anonymous HIV counseling and testing services. Unless prohibited by state law or regulation, as a condition of federal funding for HIV prevention under a separate authorization, CDC requires that states and local areas provide anonymous HIV counseling and testing services. CDC strongly recommends that states which prohibit anonymous HIV testing change this practice, given the overriding public health objective of encouraging persons to become aware of their HIV serologic status. CDC does not view the availability of publicly funded anonymous counseling and HIV testing as incompatible with the ability to conduct HIV case surveillance in the population.

• HIV testing services should be offered on a voluntary basis and preceded by informed consent in accordance with local laws (91 ).

• Both public and private providers should refer persons in whom HIV infection has been diagnosed to programs that provide HIV care, treatment, and comprehensive prevention case-management services.

• Provider-based referrals of patients to prevention and care services should provide a timely, effective, and efficient means of ensuring that persons in whom HIV infection has been diagnosed receive needed services.

• States should consult with providers, prevention- and care-planning bodies, and public health professionals in developing the policies and practices necessary to effect these linkages; should require that recipients of HIV/AIDS surveillance information be subject to the same penalties for unauthorized disclosure as HIV/AIDS surveillance personnel; and should evaluate the effectiveness of this public health approach. Such an evaluation should ensure that the public health objectives of such linkages are achieved without unnecessarily increasing security and confidentiality risks to surveillance data or decreasing the acceptability of surveillance programs to health-care providers and affected communities. Providers and affected communities, including HIV-prevention community planning groups, should participate with health departments in planning and implementing surveillance strategies, as well as programs and services.

# COMMENTARY

# Surveillance Case Definition for HIV Infection and AIDS

The revised case definition for HIV infection in adults and children integrates reporting criteria for HIV infection and AIDS in a single case definition and incorporates new laboratory tests in the laboratory criteria for HIV case reporting. The 2000 case definition for HIV infection includes HIV nucleic acid (DNA or RNA) detection tests that were not commercially available when the AIDS case definition was revised in 1993. The revised case definition for HIV infection also permits states to report cases to CDC based on the result of any test licensed for diagnosing HIV infection in the United States. Although the reporting criteria generally reflect the recommendations for diagnosing HIV infection, the HIV reporting criteria are for public health surveillance and are not designed for making a diagnosis for an individual patient. The laboratory criteria include the serologic HIV tests described in the clinical standards for diagnosing HIV infection (92-95 ).
The pediatric HIV reporting criteria include criteria for monitoring all children with perinatal exposures to HIV and reflect recent advances in diagnostic approaches that permit the diagnosis of HIV infection during the first months of life. With HIV nucleic acid detection tests, HIV infection can be detected in nearly all infants aged ≥1 month. The timing of the HIV serologic and HIV nucleic acid detection tests and the number of HIV nucleic acid detection tests in the definitive and presumptive criteria for HIV infection are based on the recommended practices for diagnosing infection in children aged <18 months and on evaluations of the performance of these tests for children in this age group (30,77-88 ). The clinical criteria in the case definition for HIV infection are included to ensure the complete reporting of cases with documented evidence of HIV infection or conditions meeting the AIDS case definition. The AIDS-defining conditions are included as part of the single case definition for HIV infection. In adults and adolescents aged ≥13 years, criteria for presumptive and definitive AIDS-defining conditions have not been revised since 1993 and continue to include the laboratory markers of severe HIV-related immunosuppression and the opportunistic illnesses indicative of severe HIV disease, which greatly increase mortality risks.

# Effect of National HIV Case Surveillance on Reporting Trends

Changes in the HIV reporting criteria will have little effect on reporting trends in states already conducting HIV case surveillance. However, the number of cases of HIV infection reported nationally will increase, primarily because of implementation of HIV surveillance by the remaining states and local areas. Many of the states that will implement HIV case surveillance in the future have high AIDS incidence rates. Similar to the effect on AIDS surveillance trends after the implementation of the revised reporting criteria in 1993, the initiation of HIV surveillance by additional states might result in a sudden and large increase in HIV case reports (96 ). On the basis of CDC's estimate that approximately 220,000 HIV-infected persons without AIDS-defining conditions had had HIV infection diagnosed in confidential testing settings and resided in states that were not conducting HIV case surveillance at the end of 1996 (46 ), the possibility exists that this number of persons could be reported with HIV infection from these states in 2000. However, reporting of prevalent HIV infections is more likely to be spread over several years, and the annual increases will most likely be more modest. Initially, most case reports will represent persons whose HIV infection was diagnosed before the implementation of HIV surveillance. As the reporting of prevalent cases of HIV infection reaches full implementation nationwide, the number of HIV case reports will decrease, and case reports will increasingly represent persons with recent diagnoses of HIV infection. To facilitate interpretation of HIV surveillance data, and given that CDC strongly promotes continued availability of anonymous testing options, evaluations of HIV/AIDS surveillance systems will include assessments of the representativeness of HIV case surveillance data. These assessments will include special surveys to evaluate the delays between HIV testing and entry to care.
In addition, these evaluations will be useful in determining the effectiveness of program efforts to refer persons into care services after the diagnosis of HIV infection in anonymous testing settings. AIDS cases have declined nationwide; however, because AIDS surveillance trends are affected by the incidence of HIV infection, as well as the effect of treatment on the progression of HIV disease, future AIDS trends cannot be predicted. AIDS surveillance will continue to be important in evaluating access to care for different populations and in identifying changes in trends that might signal a decrease in the effectiveness of treatment. The long-term benefits of antiretroviral therapy and antimicrobial prophylaxis for AIDS-related illnesses continue to be defined. In addition, various factors (e.g., access, adherence, treatment costs, and viral resistance) will influence the use and effectiveness of these therapies and their effects on AIDS incidence and mortality trends (97-99 ). Because trends in new diagnoses of HIV infection are affected by when in the course of disease a person seeks or is offered HIV testing, such trends do not reflect the incidence of HIV infection in the population. In addition, because all HIV-infected persons in the population might not have had the infection diagnosed, these data do not represent total HIV prevalence in the population. Currently, interpretation of these data is complicated by several factors. First, persons might have HIV infection diagnosed and later during the same calendar year have AIDS diagnosed, which can complicate presentation of the data. Second, delays in reporting cases of HIV infection tend to be shorter than for AIDS cases, necessitating development of stage-specific statistical adjustments. Third, methods of imputation of exposure risk data for AIDS cases have been developed based on historical patterns of reclassification after investigation, but comparable methods for cases of HIV infection have only recently become available at the national level. Finally, whether a trend in the number of new HIV diagnoses is stable, increasing, or decreasing might reflect current or historical HIV transmission patterns, changes in testing behaviors, and/or the stage of the epidemic in the local geographic area. Overall, in the United States, the incidence of HIV infection peaked approximately 15 years ago, and the annual number of HIV infections has been stable at approximately 40,000 since 1992, when CDC estimated the prevalence of HIV infection in the range of 650,000-900,000 infected persons (100,101 ). Based on HIV and AIDS case surveillance data, CDC estimates that the prevalence of HIV infection at the end of 1998 was in the range of 800,000-900,000 infected persons. Of these persons, approximately 625,000 (range: 575,000-675,000) had had HIV infection or AIDS diagnosed (CDC, unpublished data, 1999). Because the annual number of new infections in recent years is relatively lower than during the peak incidence years, over time the remaining untested or anonymously tested infected persons will have HIV infection diagnosed through test-seeking, targeted testing, entry to care, or progression of disease to AIDS. Ultimately, the number of new diagnoses of HIV infection will decrease each year as they increasingly represent the smaller pool of more recently infected persons.
Thus, in states that have been conducting HIV case reporting for several years, the number of new diagnoses of HIV infection is expected to decrease, then stabilize at a lower rate if the number of new infections remains stable. For states that newly implement HIV reporting, a large bolus of reported prevalent infections is expected to occur, followed by a decline in the annual number of new cases until the number stabilizes at a lower level. Since highly active antiretroviral therapy began to improve survival, the estimated number of new infections each year has probably exceeded the number of deaths, and the prevalence of HIV infection might be increasing by a small proportion of total prevalence. Thus, during the transition period to nationwide HIV-infection reporting, measures of the combined prevalence of HIV infection diagnoses and AIDS diagnoses will be most useful in projecting the need for resources for care and prevention. Trends in the numbers of new cases reported will not provide immediate insights into the dynamics of the epidemic because prevalent case reports represent a mixture of new and old HIV infections. Within the next several years, however, all states will be able to characterize new diagnoses of HIV infection, or a representative sample, by demographic and clinical characteristics that will provide meaningful insights into actual HIV transmission patterns, and they will have characterized the health and service needs of the population of prevalent HIV-infected persons. CDC will develop analysis profiles, statistical adjustments for reporting delays and imputation of risk data, and recommendations for data presentation to assist states in analyzing and interpreting their HIV/AIDS surveillance data during this transition period.

# HIV/AIDS Surveillance Practices

Laboratories will be an increasingly important source of information from which to initiate reporting. HIV infection is frequently diagnosed in the outpatient clinical setting, and laboratory-initiated reporting will be particularly useful in identifying outpatient sources of HIV testing (89 ), although contact with individual providers is necessary to complete the reporting process. The routine collection of HIV and CD4 test data from laboratories and managed-care organizations promotes completeness of reporting and may increase the simplicity and efficiency of initial case-finding activities by local surveillance programs. Nonetheless, repeated testing of the same persons results in multiple reports and necessitates labor-intensive follow-up to eliminate duplicates. CDC is increasing its efforts to promote standards in laboratory reporting and to facilitate the transfer of data from public health and commercial laboratories to health departments. Performance criteria for HIV and AIDS surveillance are necessary to ensure that surveillance data are of sufficient quality to target prevention and care resources and to detect emerging trends in the HIV epidemic. Evaluations of HIV and AIDS surveillance programs have documented that areas should be able to meet these performance criteria (5,36,61-67,89,90 ). According to these evaluations of name-based surveillance systems, the completeness of HIV surveillance (from 79% to approximately 95%) and AIDS surveillance (from 85% to approximately 95%) is high, and reporting is timely, with nearly one half of AIDS cases and three quarters of cases of HIV infection reported to the national HIV/AIDS reporting system within 3 months of diagnosis (5 ).
CDC estimates that the duplication rate of cases of HIV infection reported from different states to the national surveillance database was approximately 2%; for AIDS cases, the rate was approximately 3% (5,36 ). The performance criteria also reflect the need for public health surveillance systems to identify and follow up on cases of public health importance. On the basis of current evaluation studies of non-name-based case identifiers and the current infrastructure of state and local health departments, name-based methods for collecting and reporting public health data provide the most feasible, simple, and reliable means for ensuring timely, accurate, and complete reporting of persons in whom HIV infection or AIDS has been diagnosed. Confidential name-based reporting also facilitates follow-up of perinatally exposed infants to determine their infection status and of persons reported with HIV infection to determine progression to AIDS and vital status (36,42 ). A name-based patient identifier allows providers to report cases directly from their name-based medical records, facilitates elimination of duplicate case reports (illustrated in the sketch at the end of this section), enables cross-matching of HIV and AIDS data with other name-based public health data (e.g., tuberculosis surveillance), and permits follow-up with providers to collect information regarding risk for HIV infection and other data of public health importance. Through follow-up with providers, the HIV/AIDS surveillance system has provided an effective means to identify rare or unusual modes of HIV transmission and infection with rare strains of HIV and to improve prevention of HIV-related opportunistic illnesses (102-106 ). CDC will assist states in monitoring the impact of changing medical interventions, epidemiology, and HIV case surveillance policies on test- and care-seeking behaviors.
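A toy example makes the duplicate-elimination point concrete: with name-based records, repeat reports of the same person collapse onto one registry entry under a simple matching key. The key used here (normalized name plus date of birth) is this sketch's own choice, not a prescribed CDC matching algorithm.

```python
# Illustrative de-duplication pass of the kind a name-based registry makes
# straightforward: case reports are keyed on normalized name plus date of
# birth, so repeat reports of the same person collapse to one record.
from datetime import date

def dedup_key(last: str, first: str, dob: date) -> tuple:
    return (last.strip().upper(), first.strip().upper(), dob)

reports = [
    ("Doe", "Jane", date(1961, 3, 15), "clinic A"),
    ("DOE", "jane ", date(1961, 3, 15), "hospital B"),   # same person, re-reported
    ("Roe", "Richard", date(1958, 11, 2), "clinic A"),
]
registry = {}
for last, first, dob, source in reports:
    registry.setdefault(dedup_key(last, first, dob), []).append(source)

print(len(registry))  # 2 unique persons from 3 reports
```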
# Security and Confidentiality of HIV and AIDS Surveillance

The revision of the case definition for HIV infection provides an opportunity to review and strengthen state and local confidentiality laws and regulations. Although state HIV/AIDS surveillance confidentiality laws and regulations adequately protect privacy compared with the statutory protections of other health-care data, state statutes differ in the degree of privacy protections afforded health information and the criteria for permissible disclosures of personal information. Most state statutes describe some permissible disclosures of public health information. To help ensure uniform confidentiality protections, the Georgetown University Law Center developed the Model State Public Health Privacy Act (69 ). Public health, legislative, legal, and community advocacy representatives provided expert consultation. The model legislative language protects confidential, identifiable information held by state and local public health departments against unauthorized and inappropriate non-public health uses but still allows public health officials to use surveillance information to accomplish the public health objectives defined by the law (69 ). CDC recommends that states planning to implement HIV case surveillance consider adopting the model legislation, if necessary, to strengthen the current level of protection of public health data. Although HIV/AIDS surveillance systems have exemplary records of security and confidentiality, it is essential for all programs to identify ways to strengthen data protection because of a perceived greater sensitivity of HIV case surveillance compared with that of AIDS case surveillance alone (71 ). Providing accurate public education and factual media messages to inform vulnerable populations, as well as promoting testing programs that facilitate referrals into treatment and prevention services, will be important to ensure that test seeking and acceptance are not adversely affected as additional states implement HIV case reporting. The revised security standards (74 ) promote enhancements to further reduce any potential for disclosure of sensitive surveillance data. CDC continues to conduct evaluations of methods to further enhance data security, including the use of coding and encryption of data collected in the HIV/AIDS reporting system.

# HIV Prevention and Care

CDC has published guidelines concerning the provision and targeting of HIV counseling and testing services (29,41,107-111 ) and provides support for most public sources of HIV testing. The availability of anonymous HIV testing services might be particularly important for persons who delay seeking testing because of a concern that others might learn of their serologic status (55 ). Studies have documented that the availability of anonymous HIV testing is associated with increased numbers of persons seeking testing services (112-115 ). Anonymous HIV testing services are a required element of federally supported prevention programs unless prohibited by state law or regulation. Currently, 39 states, Puerto Rico, and the District of Columbia provide anonymous HIV testing services. CDC advises that the decision to refer persons reported to the surveillance system to prevention and care services (e.g., partner counseling and referral services [PCRS]) be made at the local level. PCRS programs provide HIV counseling and testing to persons who might be unaware of HIV risk exposures, and these services are a required component of federally sponsored HIV-prevention programs (116,117 ). The provision of such services to persons in whom HIV infection or AIDS has been diagnosed, especially those who receive services in publicly funded testing and clinic settings, is conducted successfully by states regardless of whether they have implemented HIV reporting (118 ). Referrals from surveillance to other health department services, when they occur, should be established in a manner that ensures both the quality of the surveillance data and the security of the surveillance system, as well as the quality, confidentiality, and voluntary nature of HIV-prevention services (119 ). At the federal level, the primary function of HIV/AIDS surveillance remains the provision of accurate epidemiologic data for public health information, planning, and evaluation. Persons in whom HIV infection has been diagnosed at either confidential or anonymous test sites should be promptly referred to facilities that provide confidential HIV care. Recent studies have documented disparities in ensuring timely testing and access to care by demographic, socioeconomic, and other factors (120,121 ).
Although not directly responsible for the delivery of medical care, CDC provides federal direction for state and local programs that facilitate referral of HIV-infected persons from counseling and testing centers and health education/risk-reduction programs to HIV care facilities. CDC has developed guidelines to strengthen the system of referrals between HIV testing sites and care programs, in part by increasing coordination with the Health Resources and Services Administration and the Ryan White CARE Act grantees (122 ). To provide further guidance, CDC has participated in developing model contract language for Medicaid programs that serve persons with HIV infection to ensure cooperation with public health authorities in case reporting and follow-up. A well-developed and well-implemented HIV and AIDS case surveillance system is integral to public health efforts to identify disparities, target programs and resources to vulnerable populations, and assess the impact of these programs in reducing infection, disease, and premature death. CDC is undertaking a national effort to further reduce perinatal HIV transmission in the United States. This effort will incorporate HIV counseling and voluntary testing, treatment, and outreach to pregnant women, especially those who are racial/ethnic minorities and substance abusers, and will integrate prevention and treatment services for women and children. Surveillance for perinatally HIV-exposed and HIV-infected children will remain a critical measure of the effectiveness of this campaign (32,40,41,123,124 ).

# CONCLUSION

The implementation of a national surveillance network to include both HIV and AIDS case reporting is a necessary response to epidemiologic trends and new standards for HIV care (125-127 ). Integrated HIV/AIDS surveillance programs will provide data to characterize persons in whom HIV infection has been newly diagnosed, including those with evidence of recent infection, persons with severe HIV disease (AIDS), and those dying of HIV disease or AIDS. The revised HIV surveillance case definition and the establishment of minimum performance standards will promote uniform case ascertainment and will ensure that the surveillance data are of sufficient quality for effective planning and allocation of resources for prevention and care programs.

# Appendix

# Revised Surveillance Case Definition for HIV Infection*

This revised definition of HIV infection, which applies to any HIV (e.g., HIV-1 or HIV-2), is intended for public health surveillance only. It incorporates the reporting criteria for HIV infection and AIDS into a single case definition. The revised criteria for HIV infection update the definition of HIV infection implemented in 1993 (18 ); the revised HIV criteria apply to AIDS-defining conditions for adults (18 ) and children (17,19 ), which require laboratory evidence of HIV. This definition is not presented as a guide to clinical diagnosis or for other uses (17,18 ).

III. A child aged <18 months born to an HIV-infected mother is categorized as "not infected with HIV" under the following clinical or other criteria (if the definitive or presumptive laboratory criteria are not met):

• Determined by a physician to be "not infected", and a physician has noted the results of the preceding HIV diagnostic tests in the medical record, AND NO other laboratory or clinical evidence of HIV infection (i.e., has not had any positive virologic tests, if performed, and has not had an AIDS-defining condition).

IV. A child aged <18 months born to an HIV-infected mother will be categorized as having perinatal exposure to HIV infection if the child does not meet the criteria for HIV infection (II) or the criteria for "not infected with HIV" (III).

*HIV nucleic acid (DNA or RNA) detection tests are the virologic methods of choice to exclude infection in children aged <18 months. Although HIV culture can be used for this purpose, it is more complex and expensive to perform and is less well standardized than nucleic acid detection tests. The use of p24 antigen testing to exclude infection in children aged <18 months is not recommended because of its lack of sensitivity.

§ In adults, adolescents, and children infected by other than perinatal exposure, plasma viral RNA nucleic acid tests should NOT be used in lieu of licensed HIV screening tests (e.g., repeatedly reactive enzyme immunoassay). In addition, a negative (i.e., undetectable) plasma HIV-1 RNA test result does not rule out the diagnosis of HIV infection.
Incidence rates of suicide and attempted suicide among adolescents and young adults aged 15-24 years remain high. In 1992, to aid communities in developing new or augmenting existing suicide prevention programs directed toward this age group, CDC's National Center for Injury Prevention and Control published Youth Suicide Prevention Programs: A Resource Guide. The Resource Guide describes the rationale and evidence for the effectiveness of various suicide prevention strategies, and it identifies model programs that incorporate these strategies. This summary of the Resource Guide describes eight suicide prevention strategies and provides general recommendations for the development, implementation, and evaluation of suicide prevention programs targeted toward this age group.

# INTRODUCTION

The continued high rates of suicide among adolescents (i.e., persons aged 15-19 years) and young adults (persons aged 20-24 years) (Table 1) have heightened the need for allocation of prevention resources. To better focus these resources, CDC's National Center for Injury Prevention and Control recently published Youth Suicide Prevention Programs: A Resource Guide (1 ). The guide describes the rationale and evidence for the effectiveness of various suicide prevention strategies and identifies model programs that incorporate these strategies. It is intended as an aid for communities interested in developing or augmenting suicide prevention programs targeted toward adolescents and young adults. This report summarizes the eight prevention strategies described in the Resource Guide.

# METHODOLOGY

Suicide prevention programs were identified by contacting suicide prevention experts in the United States and Canada and asking them to name and describe suicide prevention programs for adolescents and young adults that, based on their experience and assessment, were likely to be effective in preventing suicide. After compiling an initial list, program representatives were contacted and asked to describe the number of persons exposed to the intervention, the number of years the program had been operating, the nature and intensity of the intervention, and the availability of data to facilitate evaluation. Program representatives were also asked to identify other programs that they considered exemplary. Representatives from these programs were contacted and asked to describe their programs. The list of programs was further supplemented by contacting program representatives who participated in the 1990 national meeting of the American Association of Suicidology and by soliciting program contacts through Newslink, the association's newsletter. Suicide prevention programs on the list were then categorized according to the nature of the prevention strategy using a framework of eight suicide prevention strategies:

- School gatekeeper training. This type of program is designed to help school staff (e.g., teachers, counselors, and coaches) identify and refer students at risk for suicide. These programs also teach staff how to respond to suicide or other crises in the school.

- Community gatekeeper training. These programs train community members (e.g., clergy, police, merchants, and recreation staff) and clinical health-care providers who see adolescent and young adult patients (e.g., physicians and nurses) to identify and refer persons in this age group who are at risk for suicide.

- General suicide education. Students learn about suicide, its warning signs, and how to seek help for themselves or others.
These programs often incorporate a variety of activities that develop self-esteem and social competency.

- Screening programs. A questionnaire or other screening instrument is used to identify high-risk adolescents and young adults and provide further assessment and treatment. Repeated assessment can be used to measure changes in attitudes or behaviors over time, to test the effectiveness of a prevention strategy, and to detect potential suicidal behavior.

- Peer support programs. These programs, which can be conducted in or outside of school, are designed to foster peer relationships and competency in social skills among high-risk adolescents and young adults.

- Crisis centers and hotlines. Trained volunteers and paid staff provide telephone counseling and other services for suicidal persons. Such programs also may offer a "drop-in" crisis center and referral to mental health services.

- Restriction of access to lethal means. Activities are designed to restrict access to handguns, drugs, and other common means of suicide.

- Intervention after a suicide. These programs focus on friends and relatives of persons who have committed suicide. They are partially designed to help prevent or contain suicide clusters and to help adolescents and young adults cope effectively with the feelings of loss that follow the sudden death or suicide of a peer.

After categorizing suicide prevention efforts according to this framework, an expert group at CDC reviewed the list to identify recurrent themes across the different categories and to suggest directions for future research and intervention.

# FINDINGS

The following conclusions were derived from information published in the Resource Guide:

- Strategies in suicide prevention programs for adolescents and young adults focus on two general themes. Although the eight strategies for suicide prevention programs for adolescents and young adults differ, they can be classified into two conceptual categories:

  - Strategies to identify and refer suicidal adolescents and young adults for mental health care. This category includes active strategies (e.g., general screening programs and targeted screening in the event of a suicide) and passive strategies (e.g., training school and community gatekeepers, providing general education about suicide, and establishing crisis centers and hotlines). Some passive strategies are designed to lower barriers to self-referral, and others seek to increase referrals by persons who recognize suicidal tendencies in someone they know.

  - Strategies to address known or suspected risk factors for suicide among adolescents and young adults. These interventions include promoting self-esteem and teaching stress management (e.g., general suicide education and peer support programs); developing support networks for high-risk adolescents and young adults (peer support programs); and providing crisis counseling (crisis centers, hotlines, and interventions to minimize contagion in the context of suicide clusters). Although restricting access to the means of committing suicide may be critically important in reducing risk, none of the programs reviewed placed major emphasis on this strategy.

- Suicide prevention efforts targeted for young adults are rare. With a few important exceptions, most programs have been targeted toward adolescents in high school, and these programs generally do not extend to include young adults.
Although the reasons for this phenomenon are not clear, the focus of prevention efforts on adolescents may be because they are relatively easy to access in comparison with young adults, who may be working or in college. In addition, persons who design and implement such efforts may not realize that the suicide rate for young adults is substantially higher than the rate for adolescents (Table 1).

- Links between suicide prevention programs and existing community mental health resources are frequently inadequate. In many instances, suicide prevention programs directed toward adolescents and young adults have not established close working ties with traditional community mental health resources. Inadequate communication with local mental health service agencies obviously reduces the potential effectiveness of programs that seek to identify and refer suicidal adolescents and young adults for mental health care.

- Some potentially successful strategies are applied infrequently, yet other strategies are applied commonly. Despite evidence that restricting access to lethal means of suicide (e.g., firearms and lethal dosages of drugs) can help to prevent suicide among adolescents and young adults, this strategy was not a major focus of any of the programs identified. Other promising strategies, such as peer support programs for those who have attempted suicide or others at high risk, are rarely incorporated into current programs. In contrast, school-based education on suicide is a common strategy. This approach is relatively simple to implement, and it is a cost-effective way to reach a large proportion of adolescents. However, evidence to indicate the effectiveness of school-based suicide education is sparse. Educational interventions often consist of a brief, one-time lecture on the warning signs of suicide, a method that is unlikely to have substantial or sustained impact and that may not reach high-risk students (e.g., those who have considered or attempted suicide). Further, students who have attempted suicide previously may react more negatively to such curricula than students who have not. The relative balance of the positive and the potentially negative effects of these general educational approaches is unclear.

- Many programs with potential for reducing suicide among adolescents and young adults are not considered or evaluated as suicide prevention programs. Programs designed to improve other psychosocial problem areas among adolescents and young adults (e.g., alcohol- and drug-abuse treatment programs or programs that provide help and services to runaways, pregnant teenagers, and/or high school dropouts) often address risk factors for suicide. However, such programs are rarely considered suicide prevention programs, and evaluations of such programs rarely consider their effect on suicidal behavior. A review of the suicide prevention programs discussed in the Resource Guide indicated that only a small number maintained working relationships with these other programs.

- The effectiveness of suicide prevention programs has not been demonstrated. The lack of evaluation research is the single greatest obstacle to improving current efforts to prevent suicide among adolescents and young adults. Without evidence to support the potential of a program for reducing suicidal behavior, recommending one approach over another for any given population is difficult.
# RECOMMENDATIONS

Because current scientific information about the efficacy of suicide prevention strategies is insufficient, the Resource Guide does not recommend one strategy over another. However, the following general recommendations should be considered:

- Ensure that suicide prevention programs are linked as closely as possible with professional mental health resources in the community. Strategies designed to increase referrals of at-risk adolescents and young adults can be successful only to the extent that trained counselors are available and mechanisms for linking at-risk persons with resources are operational.

- Avoid reliance on one prevention strategy. Most of the programs reviewed already incorporate several of the eight strategies described. However, as noted, certain strategies tend to predominate despite insufficient evidence of their effectiveness. Given the limited knowledge regarding the effectiveness of any one program, a multi-faceted approach to suicide prevention is recommended.

- Incorporate promising, but underused, strategies into current programs where possible. Restricting access to lethal means of committing suicide may be the most promising underused strategy. Parents should be taught to recognize the warning signs for suicide and encouraged to restrict their teenagers' access to lethal means. Peer support groups for adolescents and young adults who have exhibited suicidal behaviors or who have contemplated and/or attempted suicide also appear promising but should be implemented carefully. Establishment of working relationships with other prevention programs, such as alcohol- and drug-abuse treatment programs, may enhance suicide prevention efforts. Furthermore, when school-based education is used, program planners should consider broad curricula that address suicide prevention in conjunction with other adolescent health issues before considering curricula that address only suicide.

- Expand suicide prevention efforts for young adults. The suicide rate for persons in this age group is substantially higher than that for adolescents, yet programs targeted toward them are sparse. More prevention efforts should be targeted toward young adults at high risk for suicide.

- Incorporate evaluation efforts into suicide prevention programs. Planning, process, and outcome evaluation are important components of any public health effort. Efforts to conduct outcome evaluation are imperative given the lack of knowledge regarding the effectiveness of suicide prevention programs. Outcome evaluation should include measures such as incidence of suicidal behavior or measures closely associated with such incidence (e.g., measures of suicidal ideation, clinical depression, and alcohol abuse). Program directors should be aware that suicide prevention efforts, like most health interventions, may have unforeseen negative consequences. Evaluation measures should be designed to detect such consequences.

# INTRODUCTION

Suicide rates among adolescents and young adults have increased sharply in recent decades: from 1950 through 1990, the rate of suicide for persons 15-24 years of age increased from 4.5 to 13.5 per 100,000 (1,2 ). In comparison with older persons, adolescents and young adults who commit suicide are less likely to be clinically depressed or to have certain other mental disorders (3 ) that are important risk factors for suicide among persons in all age groups (4 ).
This finding has led to research directed at the identification of other preventable risk factors for suicide among young persons. One risk factor that has emerged from this research is suicide "contagion," a process by which exposure to the suicide or suicidal behavior of one or more persons influences others to commit or attempt suicide (5 ). Evidence suggests that the effect of contagion is not confined to suicides occurring in discrete geographic areas. In particular, nonfictional newspaper and television coverage of suicide has been associated with a statistically significant excess of suicides (6 ). The effect of contagion appears to be strongest among adolescents (7,8 ), and several well-publicized "clusters" among young persons have occurred (9-11 ). These findings have prompted efforts on the part of many suicide-prevention specialists, public health practitioners, and researchers to curtail the reporting of suicide, especially youth suicide, in newspapers and on television. Such efforts were often counterproductive, and news articles about suicides were written without the valuable input of well-informed suicide-prevention specialists and others in the community. In November 1989, the Association of State and Territorial Health Officials and the New Jersey Department of Health convened a workshop* at which suicidologists, public health officials, researchers, psychiatrists, and psychologists worked directly with news media professionals from around the country to share their concerns and perspectives on this problem and explore ways in which suicide, especially suicide among persons 15-24 years of age, could be reported with minimal potential for suicide contagion and without compromising the independence or professional integrity of news media professionals. A set of general concerns about and recommendations for reducing the possibility of media-related suicide contagion were developed at this workshop, and characteristics of news coverage that appear to foster suicide contagion were described. This report summarizes these concerns, recommendations, and characteristics and provides hypothetical examples of news reports that have high and low potential for causing suicide contagion (see Appendix).

*CDC, which participated in developing the concepts for discussion and assisted in the operations of this workshop, supports these recommendations.

# GENERAL CONCERNS AND RECOMMENDATIONS

The following concerns and recommendations should be reviewed and understood by health professionals, suicidologists, public officials, and others who provide information for reporting of suicide:

- Suicide is often newsworthy, and it will probably be reported. The mission of a news organization is to report to the public information on events in the community. If a suicide is considered newsworthy, it will probably be reported. Health-care providers should realize that efforts to prevent news coverage may not be effective, and their goal should be to assist news professionals in their efforts toward responsible and accurate reporting.

- "No comment" is not a productive response to media representatives who are covering a suicide story. Refusing to speak with the media does not prevent coverage of a suicide; rather, it precludes an opportunity to influence what will be contained in the report. Nevertheless, public officials should not feel obligated to provide an immediate answer to difficult questions.
They should, however, be prepared to provide a reasonable timetable for giving such answers or be able to direct the media to someone who can provide the answers. - All parties should understand that a scientific basis exists for concern that news coverage of suicide may contribute to the causation of suicide. Efforts by persons trying to minimize suicide contagion are easily misinterpreted. Health officials must take the time to explain the carefully established, scientific basis for their concern about suicide contagion and how the potential for contagion can be reduced by responsible reporting. - Some characteristics of news coverage of suicide may contribute to contagion, and other characteristics may help prevent suicide. Clinicians and researchers acknowledge that it is not news coverage of suicide per se, but certain types of news coverage, that promote contagion. Persons concerned with preventing suicide contagion should be aware that certain characteristics of news coverage, rather than news coverage itself, should be avoided. - Health professionals or other public officials should not try to tell reporters what to report or how to write the news regarding suicide. If the nature and apparent mechanisms of suicide contagion are understood, the news media are more likely to present the news in a manner that minimizes the likelihood of such contagion. Instead of dictating what should be reported, public officials should explain the potential for suicide contagion associated with certain types of reports and should suggest ways to minimize the risk for contagion (see Appendix). - Public officials and the news media should carefully consider what is to be said and reported regarding suicide. Reporters generally present the information that they are given. Impromptu comments about a suicide by a public official can result in harmful news coverage. Given the potential risks, public officials and the media should seek to minimize these risks by carefully considering what is to be said and reported regarding suicide. # ASPECTS OF NEWS COVERAGE THAT CAN PROMOTE SUICIDE CONTAGION Clinicians, researchers, and other health professionals at the workshop agreed that to minimize the likelihood of suicide contagion, reporting should be concise and factual. Although scientific research in this area is not complete, workshop participants believed that the likelihood of suicide contagion may be increased by the following actions: - Presenting simplistic explanations for suicide. Suicide is never the result of a single factor or event, but rather results from a complex interaction of many factors and usually involves a history of psychosocial problems (12 ). Public officials and the media should carefully explain that the final precipitating event was not the only cause of a given suicide. Most persons who have committed suicide have had a history of problems that may not have been acknowledged during the acute aftermath of the suicide. Cataloguing the problems that could have played a causative role in a suicide is not necessary, but acknowledgment of these problems is recommended. - Engaging in repetitive, ongoing, or excessive reporting of suicide in the news. Repetitive and ongoing coverage, or prominent coverage, of a suicide tends to promote and maintain a preoccupation with suicide among at-risk persons, especially among persons 15-24 years of age. This preoccupation appears to be associated with suicide contagion. 
Information presented to the media should include the association between such coverage and the potential for suicide contagion. Public officials and media representatives should discuss alternative approaches for coverage of newsworthy suicide stories. - Providing sensational coverage of suicide. By its nature, news coverage of a suicidal event tends to heighten the general public's preoccupation with suicide. This reaction is also believed to be associated with contagion and the development of suicide clusters. Public officials can help minimize sensationalism by limiting, as much as possible, morbid details in their public discussions of suicide. News media professionals should attempt to decrease the prominence of the news report and avoid the use of dramatic photographs related to the suicide (e.g., photographs of the funeral, the deceased person's bedroom, and the site of the suicide). - Reporting "how-to" descriptions of suicide. Describing technical details about the method of suicide is undesirable. For example, reporting that a person died from carbon monoxide poisoning may not be harmful; however, providing details of the mechanism and procedures used to complete the suicide may facilitate imitation of the suicidal behavior by other atrisk persons. - Presenting suicide as a tool for accomplishing certain ends. Suicide is usually a rare act of a troubled or depressed person. Presentation of suicide as a means of coping with personal problems (e.g., the break-up of a relationship or retaliation against parental discipline) may suggest suicide as a potential coping mechanism to at-risk persons. Although such factors often seem to trigger a suicidal act, other psychopathological problems are almost always involved. If suicide is presented as an effective means for accomplishing specific ends, it may be perceived by a potentially suicidal person as an attractive solution. - Glorifying suicide or persons who commit suicide. News coverage is less likely to contribute to suicide contagion when reports of community expressions of grief (e.g., public eulogies, flying flags at half-mast, and erecting permanent public memorials) are minimized. Such actions may contribute to suicide contagion by suggesting to susceptible persons that society is honoring the suicidal behavior of the deceased person, rather than mourning the person's death. - Focusing on the suicide completer's positive characteristics. Empathy for family and friends often leads to a focus on reporting the positive aspects of a suicide completer's life. For example, friends or teachers may be quoted as saying the deceased person "was a great kid" or "had a bright future," and they avoid mentioning the troubles and problems that the deceased person experienced. As a result, statements venerating the deceased person are often reported in the news. However, if the suicide completer's problems are not acknowledged in the presence of these laudatory statements, suicidal behavior may appear attractive to other at-risk persons-especially those who rarely receive positive reinforcement for desirable behaviors. # CONCLUSION In addition to recognizing the types of news coverage that can promote suicide contagion, the workshop participants strongly agreed that reporting of suicide can have several direct benefits. 
Specifically, community efforts to address this problem can be strengthened by news coverage that describes the help and support available in a community, explains how to identify persons at high risk for suicide, or presents information about risk factors for suicide. An ongoing dialogue between news media professionals and health and other public officials is the key to facilitating the reporting of this information. Although no one could say for sure why Doe killed himself, his classmates, who did not want to be quoted, said Doe and his girlfriend, Jane, also a sophomore at the high school, had been having difficulty. Doe was also known to have been a zealous player of fantasy video games. School closed at noon Monday, and buses were on hand to transport students who wished to attend Doe's funeral. School officials said almost all the student body of 1,200 attended. Flags in town were flown at half staff in his honor. Members of the School Committee and the Board of Selectmen are planning to erect a memorial flag pole in front of the high school. Also, a group of Doe's friends intend to plant a memorial tree in City Park during a ceremony this coming Sunday at 2:00 p.m. Doe was born in Otherville and moved to this town 10 years ago with his parents and sister, Ann. He was an avid member of the high school swim team last spring, and he enjoyed collecting comic books. He had been active in local youth organizations, although he had not attended meetings in several months. # Alternative Report with Low Potential for Promoting Suicide Contagion John Doe, Jr., 15, of Maplewood Drive, died Friday from a self-inflicted gunshot wound. John, the son of Mary and John Doe, Sr., was a sophomore at City High School. John had lived in Anytown since moving here 10 years ago from Otherville, where he was born. His funeral was held Sunday. School counselors are available for any students who wish to talk about his death. In addition to his parents, John is survived by his sister, Ann. # Appendix
Incidence rates of suicide and attempted suicide among adolescents and young adults aged 15-24 years remain at high levels. In 1992, to aid communities in developing new or augmenting existing suicide prevention programs directed toward this age group, CDC's National Center for Injury Prevention and Control published Youth Suicide Prevention Programs: A Resource Guide. The Resource Guide describes the rationale and evidence for the effectiveness of various suicide prevention strategies, and it identifies model programs that incorporate these strategies. This summary of the Resource Guide describes eight suicide prevention strategies and provides general recommendations for the development, implementation, and evaluation of suicide prevention programs targeted toward this age group.

# INTRODUCTION

The continued high rates of suicide among adolescents (i.e., persons aged 15-19 years) and young adults (persons aged 20-24 years) (Table 1) have heightened the need for allocation of prevention resources. To better focus these resources, CDC's National Center for Injury Prevention and Control recently published Youth Suicide Prevention Programs: A Resource Guide (1). The guide describes the rationale and evidence for the effectiveness of various suicide prevention strategies and identifies model programs that incorporate these strategies. It is intended as an aid for communities interested in developing or augmenting suicide prevention programs targeted toward adolescents and young adults. This report summarizes the eight prevention strategies described in the Resource Guide.

# METHODOLOGY

Suicide prevention programs were identified by contacting suicide prevention experts in the United States and Canada and asking them to name and describe suicide prevention programs for adolescents and young adults that, based on their experience and assessment, were likely to be effective in preventing suicide. After an initial list was compiled, program representatives were contacted and asked to describe the number of persons exposed to the intervention, the number of years the program had been operating, the nature and intensity of the intervention, and the availability of data to facilitate evaluation. Program representatives were also asked to identify other programs that they considered exemplary. Representatives from these programs were contacted and asked to describe their programs. The list of programs was further supplemented by contacting program representatives who participated in the 1990 national meeting of the American Association of Suicidology and by soliciting program contacts through Newslink, the association's newsletter. Suicide prevention programs on the list were then categorized according to the nature of the prevention strategy, using a framework of eight suicide prevention strategies:

- School gatekeeper training. This type of program is designed to help school staff (e.g., teachers, counselors, and coaches) identify and refer students at risk for suicide. These programs also teach staff how to respond to suicide or other crises in the school.
- Community gatekeeper training. These programs train community members (e.g., clergy, police, merchants, and recreation staff) and clinical health-care providers who see adolescent and young adult patients (e.g., physicians and nurses) to identify and refer persons in this age group who are at risk for suicide.
- General suicide education. Students learn about suicide, its warning signs, and how to seek help for themselves or others.
These programs often incorporate a variety of activities that develop self-esteem and social competency.

- Screening programs. A questionnaire or other screening instrument is used to identify high-risk adolescents and young adults and provide further assessment and treatment. Repeated assessment can be used to measure changes in attitudes or behaviors over time, to test the effectiveness of a prevention strategy, and to detect potential suicidal behavior.
- Peer support programs. These programs, which can be conducted in or outside of school, are designed to foster peer relationships and competency in social skills among high-risk adolescents and young adults.
- Crisis centers and hotlines. Trained volunteers and paid staff provide telephone counseling and other services for suicidal persons. Such programs also may offer a "drop-in" crisis center and referral to mental health services.
- Restriction of access to lethal means. Activities are designed to restrict access to handguns, drugs, and other common means of suicide.
- Intervention after a suicide. These programs focus on friends and relatives of persons who have committed suicide. They are partially designed to help prevent or contain suicide clusters and to help adolescents and young adults cope effectively with the feelings of loss that follow the sudden death or suicide of a peer.

After categorizing suicide prevention efforts according to this framework, an expert group at CDC reviewed the list to identify recurrent themes across the different categories and to suggest directions for future research and intervention.

# FINDINGS

The following conclusions were derived from information published in the Resource Guide:

- Strategies in suicide prevention programs for adolescents and young adults focus on two general themes. Although the eight strategies differ, they can be classified into two conceptual categories:
  - Strategies to identify and refer suicidal adolescents and young adults for mental health care. This category includes active strategies (e.g., general screening programs and targeted screening in the event of a suicide) and passive strategies (e.g., training school and community gatekeepers, providing general education about suicide, and establishing crisis centers and hotlines). Some passive strategies are designed to lower barriers to self-referral, and others seek to increase referrals by persons who recognize suicidal tendencies in someone they know.
  - Strategies to address known or suspected risk factors for suicide among adolescents and young adults. These interventions include promoting self-esteem and teaching stress management (e.g., general suicide education and peer support programs); developing support networks for high-risk adolescents and young adults (peer support programs); and providing crisis counseling (crisis centers, hotlines, and interventions to minimize contagion in the context of suicide clusters). Although restricting access to the means of committing suicide may be critically important in reducing risk, none of the programs reviewed placed major emphasis on this strategy.
- Suicide prevention efforts targeted for young adults are rare. With a few important exceptions, most programs have been targeted toward adolescents in high school, and these programs generally do not extend to include young adults.
Although the reasons for this phenomenon are not clear, prevention efforts may focus on adolescents because they are relatively easy to reach in comparison with young adults, who may be working or in college. In addition, persons who design and implement such efforts may not realize that the suicide rate for young adults is substantially higher than the rate for adolescents (Table 1).

- Links between suicide prevention programs and existing community mental health resources are frequently inadequate. In many instances, suicide prevention programs directed toward adolescents and young adults have not established close working ties with traditional community mental health resources. Inadequate communication with local mental health service agencies obviously reduces the potential effectiveness of programs that seek to identify and refer suicidal adolescents and young adults for mental health care.
- Some potentially successful strategies are applied infrequently, whereas others are applied commonly. Despite evidence that restricting access to lethal means of suicide (e.g., firearms and lethal dosages of drugs) can help to prevent suicide among adolescents and young adults, this strategy was not a major focus of any of the programs identified. Other promising strategies, such as peer support programs for those who have attempted suicide or are otherwise at high risk, are rarely incorporated into current programs. In contrast, school-based education on suicide is a common strategy. This approach is relatively simple to implement, and it is a cost-effective way to reach a large proportion of adolescents. However, evidence of the effectiveness of school-based suicide education is sparse. Educational interventions often consist of a brief, one-time lecture on the warning signs of suicide, a method that is unlikely to have substantial or sustained impact and that may not reach high-risk students (e.g., those who have considered or attempted suicide). Further, students who have previously attempted suicide may react more negatively to such curricula than students who have not. The relative balance of the positive and the potentially negative effects of these general educational approaches is unclear.
- Many programs with potential for reducing suicide among adolescents and young adults are not considered or evaluated as suicide prevention programs. Programs designed to address other psychosocial problems among adolescents and young adults (e.g., alcohol- and drug-abuse treatment programs or programs that provide help and services to runaways, pregnant teenagers, and/or high school dropouts) often address risk factors for suicide. However, such programs are rarely considered suicide prevention programs, and evaluations of such programs rarely consider their effect on suicidal behavior. A review of the suicide prevention programs discussed in the Resource Guide indicated that only a small number maintained working relationships with these other programs.
- The effectiveness of suicide prevention programs has not been demonstrated. The lack of evaluation research is the single greatest obstacle to improving current efforts to prevent suicide among adolescents and young adults. Without evidence to support the potential of a program for reducing suicidal behavior, recommending one approach over another for any given population is difficult.
# RECOMMENDATIONS

Because current scientific information about the efficacy of suicide prevention strategies is insufficient, the Resource Guide does not recommend one strategy over another. However, the following general recommendations should be considered:

- Ensure that suicide prevention programs are linked as closely as possible with professional mental health resources in the community. Strategies designed to increase referrals of at-risk adolescents and young adults can be successful only to the extent that trained counselors are available and mechanisms for linking at-risk persons with resources are operational.
- Avoid reliance on one prevention strategy. Most of the programs reviewed already incorporate several of the eight strategies described. However, as noted, certain strategies tend to predominate despite insufficient evidence of their effectiveness. Given the limited knowledge regarding the effectiveness of any one program, a multifaceted approach to suicide prevention is recommended.
- Incorporate promising, but underused, strategies into current programs where possible. Restricting access to lethal means of committing suicide may be the most promising underused strategy. Parents should be taught to recognize the warning signs for suicide and encouraged to restrict their teenagers' access to lethal means. Peer support groups for adolescents and young adults who have exhibited suicidal behaviors or who have contemplated and/or attempted suicide also appear promising but should be implemented carefully. Establishment of working relationships with other prevention programs, such as alcohol- and drug-abuse treatment programs, may enhance suicide prevention efforts. Furthermore, when school-based education is used, program planners should consider broad curricula that address suicide prevention in conjunction with other adolescent health issues before considering curricula that address only suicide.
- Expand suicide prevention efforts for young adults. The suicide rate for persons in this age group is substantially higher than that for adolescents, yet programs targeted toward them are sparse. More prevention efforts should be targeted toward young adults at high risk for suicide.
- Incorporate evaluation efforts into suicide prevention programs. Planning, process, and outcome evaluation are important components of any public health effort. Efforts to conduct outcome evaluation are imperative given the lack of knowledge regarding the effectiveness of suicide prevention programs. Outcome evaluation should include measures such as the incidence of suicidal behavior or measures closely associated with such incidence (e.g., measures of suicidal ideation, clinical depression, and alcohol abuse). Program directors should be aware that suicide prevention efforts, like most health interventions, may have unforeseen negative consequences. Evaluation measures should be designed to detect such consequences.

# INTRODUCTION

Suicide rates among adolescents and young adults have increased sharply in recent decades: from 1950 through 1990, the rate of suicide for persons 15-24 years of age increased from 4.5 to 13.5 per 100,000 (1,2). In comparison with older persons, adolescents and young adults who commit suicide are less likely to be clinically depressed or to have certain other mental disorders (3) that are important risk factors for suicide among persons in all age groups (4).
This finding has led to research directed at identifying other preventable risk factors for suicide among young persons. One risk factor that has emerged from this research is suicide "contagion," a process by which exposure to the suicide or suicidal behavior of one or more persons influences others to commit or attempt suicide (5). Evidence suggests that the effect of contagion is not confined to suicides occurring in discrete geographic areas. In particular, nonfictional newspaper and television coverage of suicide has been associated with a statistically significant excess of suicides (6). The effect of contagion appears to be strongest among adolescents (7,8), and several well-publicized "clusters" among young persons have occurred (9-11).

These findings have prompted efforts by many suicide-prevention specialists, public health practitioners, and researchers to curtail the reporting of suicide, especially youth suicide, in newspapers and on television. Such efforts were often counterproductive, and news articles about suicides were written without the valuable input of well-informed suicide-prevention specialists and others in the community.

In November 1989, the Association of State and Territorial Health Officials and the New Jersey Department of Health convened a workshop* at which suicidologists, public health officials, researchers, psychiatrists, and psychologists worked directly with news media professionals from around the country to share their concerns and perspectives on this problem and to explore ways in which suicide, especially suicide among persons 15-24 years of age, could be reported with minimal potential for suicide contagion and without compromising the independence or professional integrity of news media professionals. A set of general concerns about and recommendations for reducing the possibility of media-related suicide contagion was developed at this workshop, and characteristics of news coverage that appear to foster suicide contagion were described. This report summarizes these concerns, recommendations, and characteristics and provides hypothetical examples of news reports that have high and low potential for causing suicide contagion (see Appendix).

*CDC, which participated in developing the concepts for discussion and assisted in the operations of this workshop, supports these recommendations.

# GENERAL CONCERNS AND RECOMMENDATIONS

The following concerns and recommendations should be reviewed and understood by health professionals, suicidologists, public officials, and others who provide information for reporting of suicide:

- Suicide is often newsworthy, and it will probably be reported. The mission of a news organization is to report to the public information on events in the community. If a suicide is considered newsworthy, it will probably be reported. Health-care providers should realize that efforts to prevent news coverage may not be effective, and their goal should be to assist news professionals in their efforts toward responsible and accurate reporting.
- "No comment" is not a productive response to media representatives who are covering a suicide story. Refusing to speak with the media does not prevent coverage of a suicide; rather, it precludes an opportunity to influence what will be contained in the report. Nevertheless, public officials should not feel obligated to provide an immediate answer to difficult questions.
They should, however, be prepared to provide a reasonable timetable for giving such answers or be able to direct the media to someone who can provide the answers.

- All parties should understand that a scientific basis exists for concern that news coverage of suicide may contribute to the causation of suicide. Efforts by persons trying to minimize suicide contagion are easily misinterpreted. Health officials must take the time to explain the carefully established, scientific basis for their concern about suicide contagion and how the potential for contagion can be reduced by responsible reporting.
- Some characteristics of news coverage of suicide may contribute to contagion, and other characteristics may help prevent suicide. Clinicians and researchers acknowledge that it is not news coverage of suicide per se, but certain types of news coverage, that promote contagion. Persons concerned with preventing suicide contagion should therefore focus on avoiding those characteristics of coverage rather than coverage itself.
- Health professionals or other public officials should not try to tell reporters what to report or how to write the news regarding suicide. If the nature and apparent mechanisms of suicide contagion are understood, the news media are more likely to present the news in a manner that minimizes the likelihood of such contagion. Instead of dictating what should be reported, public officials should explain the potential for suicide contagion associated with certain types of reports and should suggest ways to minimize the risk for contagion (see Appendix).
- Public officials and the news media should carefully consider what is to be said and reported regarding suicide. Reporters generally present the information that they are given. Impromptu comments about a suicide by a public official can result in harmful news coverage. Given the potential risks, public officials and the media should minimize these risks by carefully considering what is to be said and reported regarding suicide.

# ASPECTS OF NEWS COVERAGE THAT CAN PROMOTE SUICIDE CONTAGION

Clinicians, researchers, and other health professionals at the workshop agreed that, to minimize the likelihood of suicide contagion, reporting should be concise and factual. Although scientific research in this area is not complete, workshop participants believed that the likelihood of suicide contagion may be increased by the following actions:

- Presenting simplistic explanations for suicide. Suicide is never the result of a single factor or event, but rather results from a complex interaction of many factors and usually involves a history of psychosocial problems (12). Public officials and the media should carefully explain that the final precipitating event was not the only cause of a given suicide. Most persons who have committed suicide have had a history of problems that may not have been acknowledged during the acute aftermath of the suicide. Cataloguing the problems that could have played a causative role in a suicide is not necessary, but acknowledgment of these problems is recommended.
- Engaging in repetitive, ongoing, or excessive reporting of suicide in the news. Repetitive, ongoing, or prominent coverage of a suicide tends to promote and maintain a preoccupation with suicide among at-risk persons, especially among persons 15-24 years of age. This preoccupation appears to be associated with suicide contagion.
Information presented to the media should include the association between such coverage and the potential for suicide contagion. Public officials and media representatives should discuss alternative approaches for coverage of newsworthy suicide stories.

- Providing sensational coverage of suicide. By its nature, news coverage of a suicidal event tends to heighten the general public's preoccupation with suicide. This reaction is also believed to be associated with contagion and the development of suicide clusters. Public officials can help minimize sensationalism by limiting, as much as possible, morbid details in their public discussions of suicide. News media professionals should attempt to decrease the prominence of the news report and avoid the use of dramatic photographs related to the suicide (e.g., photographs of the funeral, the deceased person's bedroom, and the site of the suicide).
- Reporting "how-to" descriptions of suicide. Describing technical details about the method of suicide is undesirable. For example, reporting that a person died from carbon monoxide poisoning may not be harmful; however, providing details of the mechanism and procedures used to complete the suicide may facilitate imitation of the suicidal behavior by other at-risk persons.
- Presenting suicide as a tool for accomplishing certain ends. Suicide is usually the rare act of a troubled or depressed person. Presentation of suicide as a means of coping with personal problems (e.g., the break-up of a relationship or retaliation against parental discipline) may suggest suicide as a potential coping mechanism to at-risk persons. Although such factors often seem to trigger a suicidal act, other psychopathological problems are almost always involved. If suicide is presented as an effective means for accomplishing specific ends, it may be perceived by a potentially suicidal person as an attractive solution.
- Glorifying suicide or persons who commit suicide. News coverage is less likely to contribute to suicide contagion when reports of community expressions of grief (e.g., public eulogies, flying flags at half-staff, and erecting permanent public memorials) are minimized. Such actions may contribute to suicide contagion by suggesting to susceptible persons that society is honoring the suicidal behavior of the deceased person, rather than mourning the person's death.
- Focusing on the suicide completer's positive characteristics. Empathy for family and friends often leads to a focus on reporting the positive aspects of a suicide completer's life. For example, friends or teachers may be quoted as saying the deceased person "was a great kid" or "had a bright future," while avoiding mention of the troubles and problems that the deceased person experienced. As a result, statements venerating the deceased person are often reported in the news. However, if the suicide completer's problems are not acknowledged alongside these laudatory statements, suicidal behavior may appear attractive to other at-risk persons, especially those who rarely receive positive reinforcement for desirable behaviors.

# CONCLUSION

In addition to recognizing the types of news coverage that can promote suicide contagion, the workshop participants strongly agreed that reporting of suicide can have several direct benefits.
Specifically, community efforts to address this problem can be strengthened by news coverage that describes the help and support available in a community, explains how to identify persons at high risk for suicide, or presents information about risk factors for suicide. An ongoing dialogue between news media professionals and health and other public officials is the key to facilitating the reporting of this information.

# Appendix

# Example of a News Report with High Potential for Promoting Suicide Contagion

Although no one could say for sure why Doe killed himself, his classmates, who did not want to be quoted, said Doe and his girlfriend, Jane, also a sophomore at the high school, had been having difficulty. Doe was also known to have been a zealous player of fantasy video games. School closed at noon Monday, and buses were on hand to transport students who wished to attend Doe's funeral. School officials said almost all of the student body of 1,200 attended. Flags in town were flown at half-staff in his honor. Members of the School Committee and the Board of Selectmen are planning to erect a memorial flag pole in front of the high school. Also, a group of Doe's friends intend to plant a memorial tree in City Park during a ceremony this coming Sunday at 2:00 p.m. Doe was born in Otherville and moved to this town 10 years ago with his parents and sister, Ann. He was an avid member of the high school swim team last spring, and he enjoyed collecting comic books. He had been active in local youth organizations, although he had not attended meetings in several months.

# Alternative Report with Low Potential for Promoting Suicide Contagion

John Doe, Jr., 15, of Maplewood Drive, died Friday from a self-inflicted gunshot wound. John, the son of Mary and John Doe, Sr., was a sophomore at City High School. John had lived in Anytown since moving here 10 years ago from Otherville, where he was born. His funeral was held Sunday. School counselors are available for any students who wish to talk about his death. In addition to his parents, John is survived by his sister, Ann.
# Foreword

As West Nile virus (WNV) spread and became established across the United States following its first identification in New York City in 1999, the responses of all levels of the public health system have resulted in a detailed understanding of WNV transmission ecology and epidemiology, as well as development of systems and procedures to reduce human risk. This includes an expanded capacity to diagnose and monitor WNV infections in humans, measure WNV transmission activity in vector mosquitoes, and implement effective WNV control programs. These guidelines, which update the third revision released in 2003, incorporate this new knowledge with the goal of providing guidance to health departments and other public health entities in monitoring and mitigating WNV risk to humans.

Human disease surveillance provides an ongoing nationwide assessment of the human impact of WNV, and over the past decade has demonstrated where WNV incidence and total disease burden are greatest. However, human disease surveillance, by itself, is limited in its ability to predict the large focal outbreaks that have come to characterize this disease. These outbreaks typically intensify over as little as a couple of weeks, yet human case reports are lagging indicators of risk because they arrive weeks after the time of infection. Thus, environmental surveillance (monitoring enzootic and epizootic WNV transmission in mosquitoes and birds) forms a timelier index of risk and is an important cornerstone for implementing effective WNV risk reduction efforts. Research and operational experience show that increases in WNV infection rates in mosquito populations can provide an indicator of developing outbreak conditions several weeks in advance of increases in human infections.

Communities that have a history of WNV, particularly metropolitan areas with large human populations at risk, should implement comprehensive, integrated vector management (IVM) programs that incorporate monitoring of mosquito abundance and infection rates. Mosquito-based WNV surveillance programs should use strategies that ensure data are comparable over time and space and are designed to detect trends in WNV transmission levels. Programs should use quantitative indicators such as the WNV infection rate or the vector index to represent WNV transmission activity in mosquito populations. Programs must be sustainable over the long term in order to provide sufficient information to link surveillance indicators with the degree of human risk. Consistency also requires that mosquito collections be repeated at regular (weekly) intervals over the course of the transmission season and that collections are made at fixed collecting sites. Only by maintaining consistency can monitoring programs provide information useful in crafting thresholds to support decisions about vector control efforts and other interventions.

Other surveillance modalities, such as sentinel chickens and dead bird surveillance, may be a valuable adjunct to mosquito-based surveillance in quantifying epizootic activity in some settings.

IVM programs must be proactive and make plans in advance for addressing increasing levels of WNV risk. The objective of IVM is to implement control measures sufficient to maintain mosquito abundance below levels that result in high risk of WNV transmission to humans. By establishing action thresholds based on the abundance of WNV-infected vector mosquitoes, IVM programs can monitor risk and the effectiveness of their programs, and implement more directed vector control efforts as needed. IVM implies that all of the tools available for managing mosquito populations should be considered for use as needed to maintain vector populations at low levels. Source reduction and larval control activities can be effective in maintaining low vector abundance, but adult mosquito control efforts through ground or aerially applied pesticides complement proactive vector management programs and should not be relegated to the status of a "last resort" measure to be used only during outbreaks. Aggressive and timely efforts to reduce the number of infected adult mosquitoes will optimally impact human WNV case incidence when environmental surveillance indicates substantial WNV epizootic activity or when many human cases occur early in the season (e.g., June or July). This document provides guidance for communities and public health agencies developing new programs or enhancing existing WNV management programs. The CDC Division of Vector-Borne Diseases is available to provide additional consultation and technical assistance.
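The vector index and action thresholds mentioned above combine species abundance with estimated infection prevalence. As a rough illustration only, not a prescribed CDC algorithm, the following Python sketch computes a vector index, defined here as the sum over vector species of average mosquitoes collected per trap night multiplied by the estimated proportion infected, and compares it with a program-defined action threshold; the species counts, infection proportions, and threshold value are all hypothetical.

```python
# Minimal sketch of a vector index calculation; all numbers are hypothetical.

def vector_index(species_data):
    """Sum over species of (mosquitoes per trap night) x (proportion infected)."""
    return sum(
        d["per_trap_night"] * d["infection_proportion"]
        for d in species_data.values()
    )

# Hypothetical weekly surveillance summary for two vector species.
week = {
    "Cx. pipiens":  {"per_trap_night": 12.0, "infection_proportion": 0.004},
    "Cx. tarsalis": {"per_trap_night":  6.5, "infection_proportion": 0.002},
}

ACTION_THRESHOLD = 0.05  # hypothetical, program-specific trigger value

vi = vector_index(week)
if vi >= ACTION_THRESHOLD:
    print(f"Vector index {vi:.3f} meets the action threshold; escalate control.")
else:
    print(f"Vector index {vi:.3f} is below the threshold; continue routine program.")
```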
# Introduction

Ten years have passed since the 2003 publication of the 3rd edition of "West Nile Virus in the United States: Guidelines for Surveillance, Prevention, and Control." At that time, only 4 years after WNV was first detected in New York City, the virus had already established itself across approximately the eastern half of the country and produced the largest epidemic of arboviral encephalitis ever experienced in the United States. Knowledge about WNV epidemiology and transmission ecology was expanding rapidly, but numerous gaps remained in our understanding of how this relatively new exotic disease would affect public health, what monitoring practices would provide the best indicators of human risk, and what interventions would be most effective in reducing human infections. Thus, large portions of the WNV Guidelines 3rd edition were predicated on relatively limited research and operational experience, and were dedicated to identifying and prioritizing specific basic and operational research directions.

Since that time, WNV has expanded to the point that it can now be found in all 48 contiguous states and has produced two additional, large nationwide epidemics in 2003 and 2012. Considerable new information about WNV epidemiology, ecology, and control has also been generated since 2003. The objective of this 4th edition of the WNV guidelines is to consolidate this information and describe how these new findings can be used to better monitor WNV and mitigate its public health impact.

This document was produced through a comprehensive review of the published literature related to WNV epidemiology, diagnostics, transmission ecology, environmental surveillance, and vector control. The publications were reviewed for relevance to developing operational surveillance and control programs, and selected for inclusion in a draft document by a technical development group of CDC subject matter experts. Numerous stakeholder groups were asked to review the document. We view the recommendations contained in these guidelines as the best that can be derived from the currently available information and will provide updates as new information about WNV epidemiology, ecology, or intervention becomes available.
# West Nile Virus Epidemiology and Ecology

WNV, a mosquito-transmitted member of the genus Flavivirus in the family Flaviviridae, was discovered in northwest Uganda in 1937 (Smithburn et al. 1940) but was not viewed as a potentially important public health threat until it was associated with epidemics of fever and encephalitis in the Middle East in the 1950s (Taylor et al. 1956, Paz 2006). In the following years, WNV was associated with sporadic outbreaks of human disease across portions of Africa, the Middle East, India, Europe, and Asia (Hubalek and Halouzka 1999). In the mid to late 1990s, outbreaks occurred more frequently in the Mediterranean Basin, and large outbreaks occurred in Romania and the Volga delta in southern Russia (Hayes et al. 2005). The first domestically acquired human cases of WNV disease in the Western Hemisphere were detected in New York City in 1999 (Nash et al. 2001). WNV spread rapidly during the following years, and by 2005 had established sustained transmission foci in much of the hemisphere, with an overall distribution that extended from central Canada to southern Argentina (Gubler 2007). WNV transmission persists across this large, ecologically diverse expanse, and as a result this virus is recognized as the most widely distributed arbovirus in the world (Kramer et al. 2008).

WNV has become enzootic in all 48 contiguous states, and evidence of transmission in the form of infected humans, mosquitoes, birds, horses, or other mammals has been reported from 96% of U.S. counties. This extensive distribution is due to the ability of WNV to establish and persist in the wide variety of ecosystems present across the country. WNV has been detected in 65 different mosquito species in the U.S., though it appears that only a few Culex species drive epizootic and epidemic transmission. The most important vectors are Cx. pipiens in the northern half of the country, Cx. quinquefasciatus in the southern states, and Cx. tarsalis in the western states, where it overlaps with Cx. pipiens and Cx. quinquefasciatus (Fig. 1) (Andreadis et al. 2004, Kilpatrick et al. 2006a, Godsey et al. 2010). However, the population structure of Cx. pipiens and Cx. quinquefasciatus is more complex than indicated in Fig. 1, as these species readily hybridize and produce a stable hybrid zone across the United States. Barr (1957) set the limits of the hybrid zone at 36°N and 39°N based on measurements of the male genitalia. Subsequent work using microsatellites (Huang et al. 2008, Edillo et al. 2009, Kothera et al. 2009, Kothera et al. 2013, Savage and Kothera 2012) and other molecular markers (Huang et al. 2011) indicates that the hybrid zone extends farther north and south than suggested by Barr (1957). In the middle latitudes of the U.S., both nominal species and hybrids may be present and are commonly reported as Cx. pipiens complex mosquitoes (Savage et al. 2007, Savage and Kothera 2012). The implications for WNV transmission of population genetics and hybridization patterns in Cx. pipiens and Cx. quinquefasciatus are not well understood.

Figure 1. Approximate geographic distribution of the primary WNV vectors, Cx. pipiens, Cx. quinquefasciatus, and Cx. tarsalis (modified from Darsie and Ward 2005).

Culex salinarius has been identified as an important enzootic and epidemic vector in the northeastern U.S. (Anderson et al. 2004, Molaei et al. 2006). Other mosquito species, including Cx. restuans, Cx. nigripalpus, and Cx.
stigmatosoma, may contribute to early season amplification or serve as accessory bridge vectors in certain regions, but their role is less well understood (Kilpatrick et al. 2005).

WNV has been detected in hundreds of bird species in the United States (CDC 2012). However, relatively few species function as primary amplifiers of the virus, and a small subset of bird species may significantly influence WNV transmission dynamics locally (Hamer et al. 2009). For example, the American robin (Turdus migratorius) can play a key role as an amplifier host, even in locations where it is present in relatively low abundance (Kilpatrick et al. 2006b).

As a result of this extensive distribution in the U.S., WNV is now the most frequent cause of arboviral disease in the country. Since 1999, WNV disease cases have been reported from all 48 contiguous states and two-thirds of all U.S. counties. Though widely distributed, WNV transmission is temporally and spatially heterogeneous. Human WNV disease cases have been reported during every month of the year in the United States, but as is characteristic of zoonotic arboviruses in temperate climates, intense transmission is limited to the summer and early fall months; 94% of human cases have been reported from July through September (CDC 2010), and approximately two-thirds of reported cases occurred during a 6-week period from mid-July through the end of August.

At a national level, the annual incidence and number of cases reported have varied dramatically since 1999 (CDC 2010a). Weather, especially temperature, is an important modifier of WNV transmission; it has been correlated with increased incidence of human disease at regional and national scales (Soverow et al. 2009) and likely drives the annual fluctuations in numbers of cases reported at the national level. However, WNV epidemiology is characterized by focal and sometimes intense outbreaks. Epidemiological data gathered since 1999 demonstrate regions in the United States with recurring high levels of WNV transmission and risk to humans (Fig. 2). High average annual incidence of WNV disease occurs in the West Central and Mountain regions (CDC 2010), with the highest cumulative incidence of infection occurring in the central plains states (i.e., South Dakota, Wyoming, and North Dakota) (Petersen et al. 2012). The greatest disease burden occurs where areas of moderate to high incidence intersect metropolitan counties with correspondingly high human population densities.

# Limits to Prediction: The Need for Surveillance

WNV outbreaks have been associated on a local level with a variety of parameters, including urban habitats in the Northeast and agricultural habitats in the western United States (Bowden et al. 2011), rural irrigated landscapes (DeGroote and Sugumaran 2012), increased temperature (Hartley et al. 2012), specific precipitation patterns, socioeconomic factors such as housing age and community drainage patterns (Ruiz et al. 2007), per capita income (DeGroote and Sugumaran 2012), and neglected swimming pool density (Reisen et al. 2008, Harrigan et al. 2010). Despite these documented associations with a variety of biotic and abiotic factors, and recognition that certain regions experience more frequent outbreaks and higher levels of human disease risk, no models have been developed to provide long-term predictions of how and where these factors will combine to produce outbreaks.
The unpredictable nature of WNV outbreaks necessitates the establishment and maintenance of surveillance systems capable of detecting increases in WNV transmission activity, along with the ability to respond to the surveillance data with effective, disease-reducing interventions. Such surveillance and control programs can be costly to maintain. However, it is important that communities with large human populations in areas with documented WNV risk establish and maintain surveillance for human cases as well as effective integrated vector management programs that incorporate environmental surveillance components capable of providing indicators predictive of human risk. This document provides guidance for developing systems to 1) monitor WNV enzootic and epizootic transmission activity as indicators of human risk; 2) maintain surveillance for human infections and disease to monitor trends in WNV disease burden and clinical presentation; and 3) implement prevention and control programs that reduce community-level risk by managing vector mosquito populations and that reduce individual risk by promoting effective personal protection measures.

# Surveillance

# Objectives of WNV Surveillance

WNV surveillance consists of two distinct but complementary activities. Epidemiological surveillance measures WNV human disease to quantify disease burden and identify seasonal, geographic, and demographic patterns in human morbidity and mortality (Lindsey et al. 2008). Environmental surveillance monitors local WNV activity in vectors and non-human vertebrate hosts in advance of epidemic activity affecting humans. In addition to monitoring disease burden and distribution, epidemiological surveillance has been instrumental in characterizing clinical disease presentation and disease outcome, as well as identifying high-risk populations and factors associated with serious WNV disease (Lindsey et al. 2012). Epidemiological surveillance has also detected and quantified alternative routes of WNV transmission to humans, such as contaminated blood donations and organ transplantation (Pealer et al. 2003, Nett et al. 2012).

Epidemiological and environmental surveillance for WNV was facilitated by development and implementation of ArboNET, the national arbovirus surveillance system (Lindsey et al. 2012). ArboNET was developed in 2000 as a comprehensive surveillance data capture platform to monitor WNV infections in humans, mosquitoes, birds, and other animals as the virus spread and became established across the country. This comprehensive approach was essential to tracking the progression of WNV activity and remains a significant source of data adding to our understanding of the epidemiology and ecology of WNV.

In the absence of effective human WNV vaccines, preventing disease in humans depends on application of measures to keep infected mosquitoes from biting people. A principal objective of WNV environmental surveillance is to quantify the intensity of virus transmission in a region in order to provide a predictive index of human infection risk. This risk prediction, along with information about the local conditions and habitats that produce WNV vector mosquitoes, can be used to inform an integrated vector management program and the associated decisions about implementing prevention and control interventions (Nasci 2013). Though epidemiological surveillance is essential for understanding WNV disease burden, human case surveillance by itself is insufficient for predicting outbreaks (Reisen and Brault 2007).
WNV outbreaks can develop quickly, with the majority of human cases occurring over a few weeks during the peak of transmission. The time from human infection to onset of symptoms, diagnosis, and reporting can be several weeks or longer. As a result, human WNV case reports lag well behind the mosquito transmission that initiated the infections. By monitoring WNV infection prevalence in mosquito vectors and incidence in non-human vertebrate hosts, and comparing these indices to historical environmental and epidemiological surveillance data, conditions associated with increasing human risk can be detected 2-4 weeks in advance of human disease onset (Kwan et al. 2012a). This provides additional lead time for critical vector control interventions and public education programs to be put in place. The following sections describe the elements of epidemiological and environmental WNV surveillance and how they may be used to monitor and predict risk and to trigger interventions.

# Human Surveillance

# Routes of Transmission

WNV is transmitted to humans primarily through the bite of infected mosquitoes (Campbell et al. 2001). However, person-to-person transmission can occur through transfusion of infected blood products or solid organ transplantation (Pealer et al. 2003, Iwamoto et al. 2003). Intrauterine transmission and probable transmission via human milk also have been described but appear to be uncommon (O'Leary et al. 2006, Hinckley et al. 2007). Percutaneous infection and aerosol infection have occurred in laboratory workers, and an outbreak of WNV infection among turkey handlers also raised the possibility of aerosol transmission (CDC 2003a).

Since 2003, the U.S. blood supply has been routinely screened for WNV RNA; as a result, transfusion-associated WNV infection is rare (CDC 2003b). The Food and Drug Administration recommends that blood collection agencies perform WNV nucleic acid amplification testing (NAAT) year-round on all blood donations, either in minipools of six or sixteen donations (depending on test specifications) or as individual donations. Organ and tissue donors are not routinely screened for WNV infection (Nett et al. 2012).

# Clinical Presentation

An estimated 70-80% of human WNV infections are subclinical or asymptomatic (Mostashari et al. 2001, Zou et al. 2010). Most symptomatic persons experience an acute systemic febrile illness that often includes headache, myalgia, or arthralgia; gastrointestinal symptoms and a transient maculopapular rash also are commonly reported (Watson et al. 2004, Hayes et al. 2005b, Zou et al. 2010). Less than 1% of infected persons develop neuroinvasive disease, which typically manifests as meningitis, encephalitis, or acute flaccid paralysis (Hayes et al. 2005b). WNV meningitis is clinically indistinguishable from aseptic meningitis due to most other viruses (Sejvar and Marfin 2006). Patients with WNV encephalitis usually present with seizures, mental status changes, focal neurologic deficits, or movement disorders (Sejvar and Marfin 2006). WNV acute flaccid paralysis is often clinically and pathologically identical to poliovirus-associated poliomyelitis, with damage of anterior horn cells, and may progress to respiratory paralysis requiring mechanical ventilation (Sejvar and Marfin 2006). WNV-associated Guillain-Barré syndrome has also been reported and can be distinguished from WNV poliomyelitis by clinical manifestations and electrophysiologic testing (Sejvar and Marfin 2006).
Cardiac dysrhythmias, myocarditis, rhabdomyolysis, optic neuritis, uveitis, chorioretinitis, orchitis, pancreatitis, and hepatitis have been described rarely with WNV infection (Hayes et al. 2005b).

# Clinical Evaluation and Diagnosis

WNV disease should be considered in the differential diagnosis of febrile or acute neurologic illnesses associated with recent exposure to mosquitoes, blood transfusion, or organ transplantation, and of illnesses in neonates whose mothers were infected with WNV during pregnancy or while breastfeeding. In addition to other more common causes of encephalitis and aseptic meningitis (e.g., herpes simplex virus and enteroviruses), other arboviruses (e.g., La Crosse, St. Louis encephalitis, Eastern equine encephalitis, Western equine encephalitis, and Powassan viruses) should also be considered in the differential etiology of suspected WNV illness.

WNV infections are most frequently confirmed by detection of anti-WNV immunoglobulin (Ig) M antibodies in serum or cerebrospinal fluid (CSF). The presence of anti-WNV IgM is usually good evidence of recent WNV infection but may indicate infection with another closely related flavivirus (e.g., St. Louis encephalitis virus). Because anti-WNV IgM can persist in some patients for >1 year, a positive test result occasionally may reflect past infection unrelated to current disease manifestations. Serum collected within 8 days of illness onset may lack detectable IgM, and the test should be repeated on a convalescent-phase sample. IgG antibody generally is detectable shortly after the appearance of IgM and persists for years. Plaque-reduction neutralization tests (PRNT) can be performed to measure specific virus-neutralizing antibodies. A fourfold or greater rise in neutralizing antibody titer between acute- and convalescent-phase serum specimens collected 2 to 3 weeks apart may be used to confirm recent WNV infection and to discriminate between cross-reacting antibodies from closely related flaviviruses.

Viral culture and WNV NAAT can be performed on acute-phase serum, CSF, or tissue specimens. However, by the time most immunocompetent patients present with clinical symptoms, WNV RNA may no longer be detectable, so negative results should not be used to rule out an infection. NAAT may have utility in certain clinical settings as an adjunct to detection of IgM. Among patients with West Nile fever, combining NAAT and IgM detection identified more cases than either procedure alone (Tilley et al. 2006). NAAT may also prove useful in immunocompromised patients, when antibody development is delayed or absent. Immunohistochemical staining can detect WNV antigens in fixed tissue, but negative results are not definitive. See the Laboratory Diagnosis and Testing chapter for additional information.
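The fourfold-rise criterion described above is a simple ratio test on paired titers. As a minimal, hypothetical sketch, with titers expressed as reciprocal dilutions (e.g., 20 for a 1:20 dilution):

```python
# Minimal sketch of the fourfold-rise criterion for paired PRNT titers.
# Titers are reciprocal dilutions; the function name and values are
# illustrative only, not a clinical decision tool.

def fourfold_rise(acute_titer, convalescent_titer):
    """True if the convalescent titer is at least four times the acute titer."""
    return convalescent_titer >= 4 * acute_titer

print(fourfold_rise(20, 80))  # True: a rise from 1:20 to 1:80 meets the criterion
print(fourfold_rise(40, 80))  # False: only a twofold rise
```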
# Passive Surveillance and Case Investigation

WNV disease is a nationally notifiable condition and is reportable in most, if not all, states and territories. Most disease cases are reported to public health authorities by public health or commercial laboratories; healthcare providers also submit reports of suspected cases. State and local health departments are responsible for ensuring that reported human disease cases meet the national case definitions. The most recent case definitions for confirmed and probable neuroinvasive and non-neuroinvasive domestic arboviral diseases were approved by the Council of State and Territorial Epidemiologists (CSTE) in 2011 (Appendix 1).

Presumptive WNV-viremic donors are identified through universal screening of the blood supply; case definitions and reporting practices for viremic donors vary by jurisdiction and blood services agency. All identified WNV disease cases and presumptive viremic blood donors should be investigated promptly. Jurisdictions may choose to interview the patient's health care provider, the patient, or both, depending on information needs and resources. Whenever possible, the following information should be gathered:

- Basic demographic information (age, sex, race/ethnicity, state and county of residence)
- Clinical syndrome (e.g., asymptomatic blood donor, uncomplicated fever, meningitis, encephalitis, acute flaccid paralysis)
- Illness onset date and/or date of blood donation
- Whether the patient was hospitalized and whether he/she survived or died
- Travel history in the 4 weeks prior to onset
- Whether the patient was an organ donor or a transplant recipient in the 4 weeks prior to onset
- Whether the patient was a blood donor or blood transfusion recipient in the 4 weeks prior to onset
- Whether the patient was pregnant at illness onset
- If the patient is an infant, whether he/she was breastfed before illness onset

If the patient donated blood, tissues, or organs in the 4 weeks prior to illness onset, immediately inform the blood or tissue bank and public health authorities. Similarly, any WNV infections temporally associated with blood transfusion or organ transplantation should be reported. Prompt reporting of these cases will facilitate the identification and quarantine of any remaining infected products and the identification of any other exposed recipients so they may be managed appropriately.

Passive surveillance systems depend on clinicians considering the diagnosis of an arboviral disease, obtaining the appropriate diagnostic test, and reporting laboratory-confirmed cases to public health authorities. Because of incomplete diagnosis and reporting, the incidence of WNV disease is underestimated. Reported neuroinvasive disease cases are considered the most accurate indicator of WNV activity in humans because of the substantial associated morbidity. In contrast, reported cases of non-neuroinvasive disease are more likely to be affected by disease awareness and healthcare-seeking behavior in different communities and by the availability and specificity of the laboratory tests performed. Surveillance data for non-neuroinvasive disease should be interpreted with caution and generally should not be used to make comparisons between geographic areas or over time.

# Enhanced Surveillance Activities

Enhanced surveillance for human disease cases should be considered, particularly when environmental or human surveillance data suggest that an outbreak is suspected or anticipated. Educating healthcare providers and infection control nurses about the need for arbovirus testing and reporting of all suspected cases could increase the sensitivity of the surveillance system. This may be accomplished through distribution of print materials, participation in local hospital meetings and grand rounds, and providing lectures and seminars. Public health agencies should also work to establish guidelines and protocols with local blood collection agencies for reporting WNV-viremic blood donors. At the end of the year, an active review of medical records and laboratory results from local hospitals and associated commercial laboratories should be conducted to identify any previously unreported cases.
In addition, an active review of appropriate records from blood collection agencies should be conducted to identify any positive donors that were not reported.

# Environmental Surveillance

# Mosquito-Based WNV Surveillance

Mosquito-based surveillance consists of systematically collecting mosquito samples and screening them for arboviruses. Mosquitoes become infected with WNV primarily by taking blood meals from infected birds. However, WNV may be passed from infected female mosquitoes to their eggs, resulting in infected offspring (i.e., vertical transmission) (Anderson et al. 2006). Vertical transmission is likely responsible for virus maintenance over the winter in northern parts of the country, but the extent of its contribution to virus amplification and human risk during the peak transmission season is not well understood.

The principal enzootic and epidemic vectors vary regionally across the United States. In the northern states, Culex pipiens mosquitoes are the primary vectors (Savage et al. 2007), with Cx. salinarius serving as an important vector in portions of the Northeast (Anderson et al. 2004, Molaei et al. 2006). Culex quinquefasciatus is the main vector in the southern states, and Culex tarsalis is an important vector in western states, where it overlaps the distribution of Cx. pipiens and Cx. quinquefasciatus and likely enhances transmission in these areas (Reisen et al. 2005, Andreadis 2012). Therefore, mosquito-based surveillance programs for WNV in the United States primarily target these Culex species and may include other species suspected of contributing to transmission and human risk in local areas.

Mosquito-based surveillance is an integral component of an integrated vector management program and is the primary tool for quantifying WNV transmission and human risk (Moore et al. 1993). The principal functions of a mosquito-based surveillance program are to:

- Collect data on mosquito population abundance and virus infection rates in those populations.
- Provide indicators of the threat of human infection and disease and identify geographic areas of high risk.
- Support decisions regarding the need for and timing of intervention activities (i.e., enhanced vector control efforts and public education programs).
- Monitor the effectiveness of vector control efforts.

Mosquito-based WNV monitoring has several positive attributes that contribute to its value in surveillance programs:

- Quick turn-around of results. Mosquito samples can be processed quickly, usually within a few days. Some programs maintain local, in-house laboratories where samples are processed daily, leading to rapid results.
- Collecting adult mosquitoes provides information about vector species community composition, relative abundance, and infection rates. This provides the data needed for rapid computation of infection indices and timely risk assessment.
- Maintaining consistent programs over the long term provides a baseline of historical data that can be used to evaluate risk and guide control operations.

However, there are some limitations to mosquito-based surveillance. Virus may not be detected in the mosquito population if infection rates are very low (i.e., early in the transmission season) or if only small sample sizes are tested. In addition, WNV transmission ecology varies regionally, and surveillance practices vary among programs (e.g., number and type of traps, testing procedures), which limits the degree to which surveillance data can be compared across regions.
This precludes setting national thresholds for assessing risk and triggering interventions. Developing useful thresholds requires consistent effort across seasons, to ensure that the surveillance indices and their association with human risk are comparable over time, and may require mosquito surveillance and human disease incidence data from several transmission seasons.

# Specimen Collection and Types of Traps

Adult mosquitoes are collected using a variety of trapping techniques. Adequate sampling requires regular (weekly) trapping at fixed sites throughout the community that are representative of the habitat types present in the area. The trap types commonly used for WNV surveillance sample either host-seeking mosquitoes or gravid mosquitoes (those carrying eggs) seeking a place to lay eggs (an oviposition site).

Traps used to sample host-seeking mosquitoes are available in several configurations. The most commonly used are based on the CDC miniature light trap (Sudia and Chamberlain 1962). The CDC miniature light trap and similar configurations are lightweight and use batteries to power a light source and fan motor. CO2 (usually dry ice) is frequently used as an additional attractant. In some programs, the light sources are removed to minimize the capture of other nocturnal insects that are attracted to light, such as moths and beetles; in those cases CO2 is the only attractant used. The advantage of light traps is that they collect a wide range of mosquito species (McCardle et al. 2004), which provides information about both primary and secondary vectors and a better understanding of the species composition in an area. A limitation of light traps is that the collections in certain locations and times may consist largely of unfed, nulliparous individuals (McCardle et al. 2004), which greatly reduces the likelihood of detecting WNV and other arboviruses. Also, not all mosquito species are attracted to light traps (Miller et al. 1969), and the numbers captured may not reflect the population size of a particular species (Bidlingmayer 1967). Light traps are of little use in sampling day-active mosquitoes such as Ae. albopictus (Haufe and Burgess 1960, Unlu and Farajollah 2012), though these species can be collected in other traps such as the BG Sentinel (Krokel et al. 2006). However, the role of these species in WNV transmission is not well understood. The three major WNV vectors (Cx. pipiens, Cx. quinquefasciatus, and Cx. tarsalis) can be collected in light traps, and some surveillance programs rely on light traps alone. This should be done with the understanding that, while effective in collecting large numbers of Cx. tarsalis, light traps typically collect relatively few Cx. pipiens or Cx. quinquefasciatus, and the resulting small sample sizes may reduce the ability to accurately estimate WNV infection rates.

Gravid traps are more effective in collecting Cx. pipiens and Cx. quinquefasciatus in urban areas (Andreadis and Armstrong 2007, Reisen et al. 1999). Gravid traps target gravid females (i.e., those carrying mature eggs) of the Cx. pipiens complex (Reiter et al. 1986). The strength of gravid traps is that gravid females have previously taken a blood meal, which greatly increases the likelihood of capturing WNV-infected individuals and thus detecting virus. Gravid traps can be baited with attractants such as fresh or dry grass-clipping infusions, rabbit chow infusions, cow manure, fish oil, or other materials that mimic the stagnant water in habitats where these species lay eggs.
The different infusions vary in attractiveness (Burkett et al. 2004, Lampman et al. 1996). Infusion preparations should be kept consistent within a surveillance program, because variations may lead to changes in the number and/or type of species captured. One limitation of gravid traps is that they selectively capture mosquitoes in the Cx. pipiens complex, and therefore provide limited information on species composition within a region (Reiter et al. 1986).

Several other traps may be used to collect mosquitoes for WNV monitoring. Collecting resting mosquitoes provides a good representation of vector population structure, since unfed, gravid, and blood-fed females (as well as males) may be collected (Service 1992). Because resting collections typically provide samples that are representative of the population, they can also provide more representative WNV infection rates. Resting mosquitoes can be collected using suction traps such as the CDC resting trap (Panella et al. 2011), and by using handheld or backpack mechanical aspirators (Nasci 1981) to remove mosquitoes from natural resting harborage or artificial resting structures (e.g., wooden resting boxes, red boxes, fiber pots, and other similar containers) (Service 1992). Because of the wide variety of resting sites and the low density of resting mosquitoes in most locations, sampling resting populations is labor intensive, and sufficient sample sizes are often difficult to obtain. Host-baited traps, usually employing chickens or pigeons as bait, can collect mosquito vectors of interest in large numbers. However, these methods require the use of live animals and adherence to animal use requirements and permitting. In addition, they are similar to light traps in that collections in certain locations and times may consist largely of unfed, nulliparous individuals. Human landing collections have been used to accurately measure the population of human-feeding mosquitoes in an area and can be quite valuable in monitoring the effectiveness of adult mosquito control efforts. However, human landing collections may expose collectors to infected mosquitoes and are not recommended as a sampling procedure in areas where WNV transmission is occurring.

# Specimen Handling and Processing

Because mosquito-based WNV surveillance relies on identifying WNV in the collected mosquitoes through detection of viral proteins, viral RNA, or live virus (see Diagnostics section), efforts should be made to handle and process the specimens in a way that minimizes exposure to conditions (e.g., heat, successive freeze-thaw cycles) that would degrade the virus. Optimally, a cold chain should be maintained from the time mosquitoes are removed from the traps to the time they are delivered to the processing laboratory, and through any short-term storage and processing. Transport mosquitoes from the field in a cooler, either with cold packs or on dry ice. Sort and identify the mosquitoes to species on a chill table if available. This is particularly important if the specimens will be tested for infectious virus or viral antigen. However, lack of a cold chain does not appear to reduce the ability to detect WNV RNA by RT-PCR (Turell et al. 2002). Mosquitoes are generally tested in pools of 50 to 100 specimens grouped by species, date, and location of collection, as sketched below. Larger pool sizes may result in a loss of sensitivity (Sutherland and Nasci 2007). If using a commercial WNV assay, follow the package instructions, which may indicate smaller pool sizes (see Diagnostics section).
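As a concrete illustration of the grouping step, the sketch below pools identified specimens by species, collection date, and site, caps each pool at 50 specimens, and retains only females (routine programs usually test only females, as noted in the next paragraph). It is a hypothetical helper written for this discussion; the field names are assumptions, not part of any published protocol.

```python
from collections import defaultdict

MAX_POOL_SIZE = 50  # larger pools can reduce assay sensitivity

def build_pools(specimens, max_pool_size=MAX_POOL_SIZE):
    """Group mosquito specimens into test pools.

    specimens: iterable of dicts with keys
      'species', 'sex', 'collection_date', 'site'.
    Returns a list of pools; each pool records its grouping keys and
    the number of specimens it contains.
    """
    groups = defaultdict(int)
    for s in specimens:
        if s["sex"] != "female":      # routine programs test females only
            continue
        key = (s["species"], s["collection_date"], s["site"])
        groups[key] += 1

    pools = []
    for (species, date, site), count in groups.items():
        while count > 0:
            size = min(count, max_pool_size)
            pools.append({"species": species, "collection_date": date,
                          "site": site, "size": size})
            count -= size
    return pools
```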
Usually only female mosquitoes are tested in routine WNV surveillance programs. If WNV screening is not done immediately after mosquito identification and pooling, the pooled samples should be stored frozen, optimally at -70°C, although temperatures below freezing may suffice for short-term storage.

# Mosquito-Based Surveillance Indicators

Data derived from mosquito surveillance include estimates of mosquito species abundance and the WNV infection rate in those mosquito populations. The indices derived from these data vary in information content, in their ability to be compared over time and space, and in their association with WNV transmission levels and human risk. The five indicators that have commonly been used are:

- Vector abundance
- Number of positive pools
- Percent of pools positive
- Infection rate
- Vector index

Vector abundance provides a measure of the relative number of mosquitoes in an area during a particular sampling period. It is simply the total number of mosquitoes of a particular species collected, divided by the number of trap nights conducted during a specified sampling period, and is expressed as the number per trap night. High mosquito densities have been associated with arboviral disease outbreaks (Olson et al. 1979, Eldridge 2004); thus, risk assessments involve estimating mosquito abundance (Tonn et al. 1969, Bang et al. 1981). In the 2010 WNV outbreak in Maricopa County, Arizona, Cx. quinquefasciatus densities were higher in outbreak areas than in non-outbreak areas (Godsey et al. 2012). Vector abundance can provide measures of population abundance that are useful as thresholds in conducting proactive integrated vector management and in monitoring the outcome of mosquito control efforts. However, high mosquito abundance may occur in the absence of virus or detectable virus amplification, and WNV outbreaks often occur when abundance is low but the mosquito population is older and the infection rate is high. As with all environmental surveillance efforts, numerous trapping locations and regular collecting are required to obtain spatially and temporally representative data.

Number of positive pools is the total number of WNV-positive mosquito pools detected in a given surveillance location and period. It is sometimes separated by species, or may be a tally of the total number of positive pools for all species tested. While detection of positive pools provides evidence of WNV activity in an area, expressing the total number of positive pools without a denominator (e.g., number of pools tested) limits the comparative value and the ability of this index to convey relative levels of WNV transmission activity. Expressing WNV activity as the number of positive pools is not recommended, since the same data can be used to produce more informative indices.

Percent of pools positive is calculated by dividing the number of WNV-positive pools by the total number of pools tested and expressing the result as a percentage. This is an improvement over the number of positive pools as an index of relative WNV transmission activity, since it provides a rough estimate of the rate of WNV in the mosquitoes tested and can be used to compare WNV activity over time and place. However, the comparative value is limited unless the number of pools tested is relatively large and the number of mosquitoes per pool remains constant.
Using percent of pools positive as an index of WNV activity is not recommended; as with the number of positive pools index, the same data can be used to produce the more informative infection rate and Vector Index measures of WNV activity.

The infection rate in a vector population is an estimate of the prevalence of WNV-infected mosquitoes in the population and is a good indicator of human risk. There are two commonly used methods for calculating and expressing the infection rate. The minimum infection rate (MIR) for a given mosquito species is calculated by dividing the number of WNV-positive pools by the total number of mosquitoes tested (not the number of pools tested). The MIR is based on the assumption that infection rates are generally low and that only one mosquito is positive in a positive pool. The MIR can be expressed as a proportion or percent of the sample that is WNV-positive, but is commonly expressed as the number infected per 1,000 tested, because infection rates are usually low. The maximum likelihood estimate (MLE) of the infection rate does not require the assumption of one positive mosquito per positive pool, and provides a more accurate estimate when infection rates are high (Gu et al. 2008); thus, it is the preferred method of estimating the infection rate, particularly during outbreaks. The MLE and MIR are similar when infection rates are low. Estimates of infection rate provide a useful, quantitative basis for comparison, allowing evaluation of changes in infection rate over time and space. These indices also permit use of variable pool numbers and pool sizes while retaining comparability. Infection rate indices have been used successfully in systems associating infection rates with human risk (Bell et al. 2005). Larger sample sizes improve the accuracy of the infection rate indicators. The MLE requires more complex calculations than the MIR; however, a Microsoft Excel® Add-In to compute infection rates from pooled data is available from CDC.

The Vector Index (VI) is an estimate of the abundance of infected mosquitoes in an area and incorporates information describing the vector species present in the area, the relative abundance of those species, and the WNV infection rate in each species into a single index (Gujaral et al. 2007). The VI is calculated by multiplying the average number of mosquitoes collected per trap night by the proportion infected with WNV, and is expressed as the average number of infected mosquitoes collected per trap night in the area during the sampling period. In areas where more than one WNV vector species is present, a VI is calculated for each of the important vector species, and the individual VIs are summed to represent a combined estimate of infected vector abundance. By summing the VI for the key vector species, the combined VI accommodates the fact that WNV transmission may involve one or more vectors in an area. Increases in VI reflect increases in risk of human disease (Kwan et al. 2012) and have demonstrated significantly better predictive ability than estimates of vector abundance or infection rate alone, clearly demonstrating the value of combining vector abundance and WNV infection rate information to generate a more meaningful risk index. As with other surveillance indicators, the accuracy of the Vector Index depends on the number of trap nights used to estimate abundance and the number of specimens tested for virus to estimate the infection rate.
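To make the arithmetic concrete, the sketch below computes the MIR, a maximum-likelihood infection rate for pools of possibly unequal size, and the Vector Index. It is an illustration written for this document, not the CDC Excel Add-In or the Appendix 2 procedure, and the function names are hypothetical. The MLE is found by bisection on the score (the derivative of the log-likelihood), which is monotone decreasing in the infection rate.

```python
def mir(positive_pools, total_mosquitoes, per=1000):
    """Minimum infection rate: assumes one infected mosquito per positive pool."""
    return per * positive_pools / total_mosquitoes

def mle_infection_rate(pools, per=1000):
    """MLE of the per-mosquito infection rate from pooled test results.

    pools: list of (pool_size, tested_positive) tuples.
    The log-likelihood sums m*log(1-p) over negative pools and
    log(1-(1-p)**m) over positive pools; its derivative is strictly
    decreasing, so bisection on the score locates the maximum.
    """
    pos = [m for m, hit in pools if hit]
    neg = [m for m, hit in pools if not hit]
    if not pos:
        return 0.0
    if not neg:
        return per  # every pool positive: estimate sits at the boundary

    def score(p):
        q = 1.0 - p
        return (sum(m * q ** (m - 1) / (1.0 - q ** m) for m in pos)
                - sum(neg) / q)

    lo, hi = 1e-12, 1.0 - 1e-12
    for _ in range(100):  # score > 0 to the left of the MLE
        mid = (lo + hi) / 2.0
        lo, hi = (mid, hi) if score(mid) > 0 else (lo, mid)
    return per * (lo + hi) / 2.0

def vector_index(species_data):
    """Sum over species of (mosquitoes per trap night) x (proportion infected).

    species_data: list of dicts with keys 'collected', 'trap_nights', 'pools'.
    """
    vi = 0.0
    for d in species_data:
        abundance = d["collected"] / d["trap_nights"]
        p_hat = mle_infection_rate(d["pools"], per=1)  # proportion, not per-1,000
        vi += abundance * p_hat
    return vi
```

For example, 2 positive pools among 40 pools of 50 females (2,000 mosquitoes) give an MIR of 1.0 per 1,000; at such low prevalence the MLE is nearly identical.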
Instructions for calculating the Vector Index in a system with multiple vector species present are in Appendix 2.

# Use of Mosquito-Based Indicators

The mosquito-based surveillance indicators have two important roles in WNV surveillance and response programs. First, they can provide quantifiable thresholds for proactive vector control efforts. By identifying thresholds for vector abundance and infection rate that are below levels associated with disease outbreaks, integrated vector management programs can institute proactive measures to maintain mosquito populations at levels below which WNV amplification can occur. Second, if thresholds related to outbreak levels of transmission can be identified, surveillance can be used to determine when proactive measures have been insufficient to dampen virus amplification and more aggressive measures, such as wide-scale aerial application of mosquito adulticides and more aggressive public education messaging, are required to prevent or stop an outbreak.

# Bird-Based WNV Surveillance

WNV amplifies in nature by replicating to high levels in a variety of bird species, which then transmit the virus to mosquitoes during several days of sustained high-level viremia. In addition to infection from mosquito bites, some birds may become infected by consuming infected prey such as insects, small mammals, or other birds, or in rare cases, through direct contact with other infected birds. Thus, monitoring infections in wild and/or captive birds can be effective for determining whether WNV may be active in a region and, in some cases, can provide a quantitative index of risk for human infections. A hallmark of the North American strain of WNV is its propensity to kill a number of the birds it infects (326 affected species were reported to ArboNET through 2012), particularly among the corvids (species of the family Corvidae, including crows, ravens, magpies, and jays) and a few other sensitive species (Komar 2003). More than a decade after the introduction of WNV across the North American continent, numerous bird species continue to suffer mortality from WNV infection, albeit at lower rates. Robust studies of the development of avian resistance to fatal WNV infection are lacking; however, one study of the American crow suggested that resistance is developing at a rate of approximately 1% per year (Reed et al. 2007). There are two basic strategies for avian mortality surveillance of WNV: virus testing of selected carcasses and spatial analysis of dead bird sightings. While several bird species experience high mortality from WNV infection, most birds survive the infection and develop a lifelong immune response that can be detected by serology. Detection of antibodies in young birds, or of seroconversion in serially sampled older birds, is another approach to monitoring WNV transmission (Komar 2001).

# Dead Bird Reporting

Dead birds need not be collected and tested in order to be useful in WNV surveillance programs. Simply collecting information about the temporal and spatial patterns of bird deaths in an area has provided information about WNV activity. To develop a dead bird reporting WNV surveillance program, public participation is essential and must be encouraged through an effective public education and outreach program.
A database should be established to record and analyze dead bird sightings, with the following suggested data: caller identification and call-back number, date observed, location geocoded to the highest feasible resolution, species, and condition. Birds in good condition (unscavenged and without obvious decomposition or maggot infestation) may be sampled or retrieved for sampling and laboratory testing (see Avian Morbidity/Mortality Testing). Dead bird reporting systems, though they are passive surveillance systems, provide the widest surveillance net possible, extending to any area where a person is present to observe a dead bird. A strong carcass reporting system can guide selective sampling of carcasses for WNV testing. These systems have been used with some success to estimate risk of human infection (Eidson et al. 2001, Mostashari et al. 2003, Carney et al. 2011).

Though useful, dead bird surveillance systems have limitations. Maintaining public interest and willingness to participate is essential to these programs but difficult to sustain. The surveillance is passive and qualitative, and can only be used to assess risk of infection to people in areas where sufficient data are collected to populate risk models such as DYCAST (Carney et al. 2011) and SaTScan (Mostashari et al. 2003). Over time, bird populations are becoming resistant to morbidity and mortality (Reed et al. 2009). A system for carcass reporting needs to be developed and communicated to multiple agencies in a community and to the public (Eidson 2001). This system should include assistance to the public for identifying birds (specialized websites are available). Other causes of bird mortality may produce a false alarm for WNV activity; however, such an alarm might alert the public health and wildlife disease communities to other pathogens or health threats. A subset of reported bird deaths should be investigated to confirm WNV activity through carcass testing.
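A minimal sketch of such a sighting record and the condition screen described above follows; the class and field names are hypothetical, and a production system would add validation, geocoding, and species-identification support.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Sighting:
    """One dead bird report, mirroring the suggested data elements above."""
    caller_id: str
    callback_number: str
    date_observed: date
    latitude: float           # geocode to the highest feasible resolution
    longitude: float
    species: str              # assisted identification, e.g., via websites
    condition: str            # "good", "scavenged", "decomposed", ...

def eligible_for_testing(s: Sighting) -> bool:
    """Only carcasses in good condition are worth retrieving for testing."""
    return s.condition == "good"
```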
# Avian Morbidity/Mortality Testing

In programs where the objective of avian morbidity/mortality testing is simply early detection of WNV activity, and not production of a quantitative index of human risk, testing of moribund or dead birds should be initiated when local adult mosquito activity begins in the spring and continue as long as local WNV activity remains undetected in the area. Once WNV is detected in dead birds in an area, or if prevention and control actions have been initiated, continued detection of WNV in carcasses in that area does not provide additional information about WNV activity and is not necessary or cost-effective. However, the number of WNV-infected dead birds can contribute to an effective human risk index (Kwan et al. 2012a). Contact with WNV-infected carcasses presents a potential health hazard to handlers (Fonseca et al. 2005). Appropriate biosafety precautions should be taken when handling carcasses in the field and in the laboratory. More detailed guidelines for sampling avian carcasses are available in Appendix 3.

To maximize the sensitivity of this surveillance system, a variety of bird species should be tested, but corvids should be emphasized if they are present (Nemeth et al. 2007a). In dead corvids and other birds, bloody pulp from immature feathers, and tissues collected at necropsy such as brain, heart, kidney, or skin, harbor very high viral loads, and any of these specimen types is sufficient for sensitive detection of WNV (Panella et al. 2001, Komar et al. 2002, Docherty et al. 2004, Nemeth et al. 2009, Johnson et al. 2010). Oral swabs and breast feathers are easy specimens to collect in the field, avoid the need to transfer dead birds to the laboratory, do not require a cold chain, and are effective for detecting WNV in dead corvids (Komar et al. 2002, Nemeth et al. 2009). They are less sensitive for WNV detection in non-corvids; however, the reduced sensitivity of testing non-corvids using these tissue types can be offset by sampling more carcasses. The number of bird specimens tested will depend upon resources and whether WNV-infected birds have already been found in the area; triage of specimens by species or by geographic location may be appropriate in some jurisdictions. Several studies have demonstrated the effectiveness of avian mortality testing for early detection of WNV activity (Eidson et al. 2001, Julian et al. 2002, Guptill et al. 2003, Nemeth et al. 2007b, Patnaik et al. 2007, Kwan et al. 2012a). Wildlife rehabilitation clinics can be a good source of specimens derived from carcasses (Nemeth et al. 2007b). Collecting samples from living birds that are showing signs of illness requires the assistance of a veterinarian or wildlife technician. Dead crows and raptors alarm the public, and their carcasses are easily spotted (Ward et al. 2008); however, in regions with few or no crows, carcasses may be less obvious. Eye aspirates have been shown to be a sensitive and fast sampling protocol for WNV detection in corvid carcasses brought to the laboratory for testing (Lim et al. 2009).

# Live Bird Serology

The use of living birds as sentinels for monitoring WNV transmission requires serially blood-sampling a statistically valid number of avian hosts. Captive chickens, frequently referred to as sentinel chickens (though other species have been used), provide the most convenient source of blood for this purpose. Blood may be collected from a wing vein, the jugular vein, or on Nobuto® strips by pricking the chicken's comb with a lancet. There is no standard protocol for implementing a sentinel chicken program; it can be tailored to the specific circumstances of each surveillance jurisdiction, though sentinel chicken systems generally employ flocks of 6-10 birds at each site and bleed each bird weekly or every other week throughout the WNV transmission season. Sentinel chicken-based WNV surveillance systems have provided evidence of WNV transmission several weeks in advance of human cases (Healy et al. 2012).

While serially sampling free-ranging bird species is very labor intensive, it can provide information about seroconversion in amplifier hosts, similar to the data provided by sentinel chickens. Quantifying seroprevalence in free-ranging birds may provide additional information that benefits surveillance programs (Komar 2001). For example, a serosurvey of the local resident bird population (in particular, juvenile birds) following the arbovirus transmission season may help determine which local species may be important amplifiers of WNV in the surveillance area. This in turn could be used to map areas of greatest risk in relation to the populations of amplifier hosts. Furthermore, a serosurvey of adult birds just prior to the arbovirus transmission season can detect pre-existing levels of antibody in the bird population. High levels would suggest less opportunity for WNV amplification, because many adult bird species transfer maternal antibodies to their offspring, which can delay or inhibit WNV amplification among the population of juvenile birds that emerges each summer.
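The sketch below shows the basic seroprevalence arithmetic behind such serosurveys and flock-based sentinel monitoring; it is a hypothetical illustration, and the 10% comparison merely echoes the Los Angeles observation described next rather than a universal threshold.

```python
def seroprevalence(n_seropositive: int, n_sampled: int) -> float:
    """Fraction of sampled birds with detectable WNV antibodies."""
    if n_sampled == 0:
        raise ValueError("no birds sampled")
    return n_seropositive / n_sampled

# Example: 7 of 120 juvenile birds seropositive after the season.
prev = seroprevalence(7, 120)   # ~0.058

# Low pre-season seroprevalence in amplifier hosts leaves more room for
# amplification; 0.10 mirrors the Los Angeles finding discussed below
# and is not a general-purpose threshold.
amplification_possible = prev < 0.10
```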
In Los Angeles, California, serosurveys of local amplifier hosts during winter determined that subsequent outbreaks occurred only after seroprevalence dipped below 10% in these birds (Kwan et al. 2012b).

There are numerous positive attributes of sentinel chicken and other live-bird serology surveillance systems. Sentinel chickens are captive, so a seroconversion event indicates local transmission and the presence of infected mosquitoes in the area. Chickens are preferred blood-feeding hosts of Cx. pipiens and Cx. quinquefasciatus, which are important urban vectors of WNV. Chickens can be used to monitor seroconversions to multiple arboviruses of public health importance (i.e., WN, SLE, WEE, and EEE viruses) simultaneously. However, there are also a number of important limitations to these systems. Determination that a chicken has seroconverted typically occurs 3-4 weeks after the transmission event, and reporting of a positive chicken may not precede the first local case of human disease caused by WNV (Patnaik et al. 2007, Kwan et al. 2010, Unlu et al. 2009). However, a rapid screening assay can cut this time in half (Cheng & Su 2011) and improve the predictive capability of sentinel chicken systems (Healy 2012). Use of sentinel birds requires institutional animal use and care protocols and other authorization permits. Linking patterns in sentinel chicken seroconversion with human risk requires multiple years of data collection.

# Horses and Other Vertebrates

Horses are susceptible to encephalitis due to WNV infection; thus, equine cases of WNV-induced encephalitis may serve a sentinel function in the absence of other environmental surveillance programs. Equine health is an important economic issue, so severe disease in horses comes to the attention of the veterinary community. Use of horses as sentinels for active WNV surveillance is theoretically possible but practically infeasible. Widespread use of equine WNV vaccines decreases the incidence of equine WNV disease, and survivors of natural infections are protected from disease, reducing the usefulness of equines as sentinels. Veterinarians, veterinary service societies/agencies, and state agriculture departments are essential partners in any surveillance activities involving WNV infections in horses. Equine disease due to WNV is rare in tropical ecosystems; however, WNV frequently infects horses in the tropics. Detection of seroconversions in horses has been suggested as a sentinel system to detect risk of WNV transmission to people in Puerto Rico and other tropical locations (Phoutrides et al. 2011, Mattar et al. 2011).

Small numbers of other mammal species have been affected by WNV. Dead squirrels are tested for WNV along with dead birds in some jurisdictions (Reisen et al. 2013). Among domestic mammals, the most important have been the camelids, such as llamas and alpacas (Whitehead & Bedenice 2009). As with horses, these come to the attention of veterinarians, and any veterinary case of disease due to WNV may be used for passive surveillance. Dogs and cats become infected with WNV (Austgen et al. 2004) and have been shown to seroconvert to WNV during human disease outbreaks (Kile et al. 2005, Resnick et al. 2008); however, their value as primary WNV surveillance tools has not been evaluated, and their use is not recommended at this time. There is no evidence that dogs or cats develop sufficient viremia to become amplifier hosts (Austgen et al. 2004).
# Evaluation of Environmental Surveillance Systems

The objective of environmental surveillance is to detect local WNV activity in advance of transmission to humans, in order to guide the implementation of interventions and prevent human infections and disease. The methods for mosquito and bird surveillance outlined above are widely used to monitor WNV transmission levels, but few studies have directly compared the performance of these early warning systems in their ability to measure or predict human risk. Single-factor surveillance systems, in which only one method of surveillance is used to monitor WNV activity and predict risk of subsequent human infection, have been shown to give valuable early information (Carney et al. 2011, Ginsberg et al. 2010). Comparisons of single-factor surveillance systems have been done on a regional basis. In comparing mosquito and sentinel chicken systems in Louisiana, Unlu et al. (2009) demonstrated that positive mosquito pools occurred before detection of human cases, but seroconversions of sentinel chickens were detected after human cases occurred. They also compared mosquito trapping methods and found that while gravid traps collected more total mosquitoes and more WNV-positive mosquitoes, sentinel chicken box traps detected WNV-positive mosquitoes earlier. Another study showed that sentinel chicken seroconversions were the first indication of WNV activity (Blackmore et al. 2003). A comparison of mosquito, sentinel chicken, and dead-bird systems found that all provided evidence of WNV transmission before human cases occurred, but that mosquito surveillance performed better than the other systems (Healy et al. 2012).

A few WNV surveillance programs have been developed that incorporate multiple WNV monitoring systems and other environmental parameters (e.g., temperature and rainfall patterns). These generally require a historical database of environmental and epidemiological surveillance data in order to develop and validate the complex quantitative associations that they employ. An example of a multi-factor system is the California Mosquito-Borne Virus Surveillance and Response Plan (CMVRA), which uses mosquitoes, birds, and environmental conditions to perform a mosquito-borne virus risk assessment. Individual components of the system are scored to provide a value that corresponds to a normal season, emergency planning, or an epidemic situation. Kwan et al. (2012) compared the single-factor, mosquito-based Vector Index with the single-factor, bird-based Dynamic Continuous-Area Space-Time (DYCAST) system and the multiple-factor CMVRA system, and found that the CMVRA system, using avian, mosquito, and environmental information, provided better prediction of WNV high-risk periods. However, the Vector Index and DYCAST systems also provided useful predictive information.

# ArboNET

ArboNET, the national arboviral surveillance system, was developed by CDC and state health departments in 2000 in response to the emergence of WNV in 1999 (CDC 2010). In 2003, the system was expanded to include other domestic and imported arboviruses of public health significance. ArboNET is an electronic surveillance system administered by CDC's Division of Vector-Borne Diseases (DVBD). Human arboviral disease data are reported from all states and three local jurisdictions. In addition to human disease cases, ArboNET maintains data on arboviral infections among human viremic blood donors, non-human mammals, sentinel animals, dead birds, and mosquitoes.
# Data Collected

Variables collected for human disease cases include patient age, sex, race, and county and state of residence; date of illness onset; case status (i.e., confirmed, probable, suspected, or not a case); clinical syndrome (e.g., encephalitis, meningitis, or uncomplicated fever); whether the illness resulted in hospitalization; and whether the illness was fatal. Cases reported as encephalitis (including meningoencephalitis), meningitis, or acute flaccid paralysis are collectively referred to as neuroinvasive disease; others are considered non-neuroinvasive disease. Acute flaccid paralysis can occur with or without encephalitis or meningitis. Information regarding potential non-mosquito-borne transmission (e.g., blood transfusion or organ transplant recipient, breast-fed infant, or laboratory worker) and recent donation of blood or solid organs should be reported if applicable. An optional set of variables related to clinical symptoms and diagnostic testing was added to ArboNET in 2012.

Blood donors identified as presumptively viremic by NAAT screening of the donation by a blood collection agency are also reported to ArboNET. For the purposes of national surveillance, a presumptively viremic donor is defined as a person with a blood donation that meets at least one of the following criteria: a) one reactive NAAT with a signal-to-cutoff (S/CO) ratio ≥ 17; or b) two reactive NAATs. Reporting of donors who do not meet these criteria should wait until follow-up testing is completed.
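Expressed as a decision rule, this surveillance definition looks like the following sketch; the function name and inputs are hypothetical, with S/CO values as reported by the screening NAAT.

```python
def presumptive_viremic_donor(sco_ratios) -> bool:
    """Apply the national surveillance definition to one blood donation.

    sco_ratios: signal-to-cutoff ratios of the reactive NAATs on this
    donation (empty if no test was reactive).
    """
    reactive = list(sco_ratios)
    # a) one reactive NAAT with S/CO >= 17, or b) two reactive NAATs
    return any(r >= 17 for r in reactive) or len(reactive) >= 2

# Examples: one strongly reactive test qualifies; one weakly reactive
# test awaits follow-up testing; two reactive tests qualify.
assert presumptive_viremic_donor([23.5])
assert not presumptive_viremic_donor([9.8])
assert presumptive_viremic_donor([9.8, 4.2])
```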
The date of blood donation is reported in addition to the variables routinely reported for disease cases. WNV disease in non-human mammals (primarily horses) and WNV infections in trapped mosquitoes, dead birds, and sentinel animals are also reported to ArboNET. Variables collected for non-human WNV infections include species, state and county, and date of specimen collection or symptom onset. The total number of mosquitoes or birds tested weekly can also be reported. Detailed descriptions of all variables collected by ArboNET and instructions for reporting are included in the ArboNET User Guide, which can be requested from DVBD by phone (970-261-6400) or email ([email protected]).

# Data Transmission

Jurisdictions can transmit data to ArboNET using one or more of four methods supported by DVBD: 1) jurisdictions that have a commercially- or state-developed electronic surveillance system can upload records from their system using an Extensible Markup Language (XML) message; 2) jurisdictions can upload records from a Microsoft Access database provided by CDC DVBD using an XML message; 3) jurisdictions may enter records manually using a CDC web site; or 4) jurisdictions can report cases through the CDC NEDSS Base System (NBS), and DVBD will download records directly from NBS to ArboNET. ArboNET data are maintained in a Microsoft Structured Query Language (SQL) Server database inside CDC's firewall. Users can access data via a password-protected website, but are limited to viewing data only from their own jurisdiction. The ArboNET website and database are maintained by CDC information technology staff and are backed up nightly.

# Dissemination of ArboNET Data

CDC epidemiologists periodically review and analyze ArboNET surveillance data and disseminate results to stakeholders via direct communication, briefs in Morbidity and Mortality Weekly Reports and Epi-X, comprehensive annual summary reports, and DVBD's website. CDC also provides ArboNET data to the U.S. Geological Survey to produce maps of domestic arboviral activity, which are then posted on the USGS website. Surveillance reports are typically updated weekly during the transmission season and monthly during the off-season. A final report is released in the spring of the following year. CDC provides limited-use ArboNET data sets to the general public by formal request. Data release guidelines have been updated to be consistent with those developed by CDC and the Council of State and Territorial Epidemiologists (CSTE).

# Limitations of ArboNET Data

Human surveillance for WNV disease is largely passive and relies on the receipt of information from physicians, laboratories, and other reporting sources by state health departments. Neuroinvasive disease cases are likely to be consistently reported because of the substantial morbidity associated with this clinical syndrome. Non-neuroinvasive disease cases are inconsistently reported because of a less severe spectrum of illness, geographic differences in disease awareness and healthcare-seeking behavior, and variable capacity for laboratory testing. Surveillance data for non-neuroinvasive disease cases should be interpreted with caution and generally should not be used to make comparisons between geographic areas or over time. Accordingly, ratios of reported neuroinvasive disease cases to non-neuroinvasive disease cases should not be interpreted as a measure of WNV virulence in an area.

ArboNET does not routinely collect information regarding clinical signs and symptoms or diagnostic laboratory test results. Therefore, misclassification of the various syndromes caused by WNV cannot be detected. In addition, ArboNET does not routinely collect information regarding the specific laboratory methods used to confirm each case. The most common laboratory tests used to diagnose arboviral disease are IgM antibody-capture enzyme immunoassays (EIA). Although these assays are relatively specific, false-positive results and cross-reactions occur between WNV and other flaviviruses (e.g., St. Louis encephalitis or dengue viruses). Positive IgM results should be confirmed by additional tests, especially plaque-reduction neutralization; however, such confirmatory testing often is not performed.

While the electronic mechanisms for data transmission allow for rapid case reporting, the inclusion of both clinical and laboratory criteria in the surveillance case definition creates delays between the occurrence of cases and their reporting. Provisional data are disseminated to allow for monitoring of regional and national epidemiology during the arboviral transmission season. However, these reports generally lag several weeks behind the occurrence of the cases comprising them, and the data may change substantially before they are finalized. For this reason, provisional data from the current transmission season should not be combined with or compared to provisional or final data from previous years. The collection and reporting of non-human WNV surveillance data are highly variable among states (and even between regions within states) and change from year to year. Because of this variability, non-human surveillance data should not be used to compare arboviral activity between geographic areas or over time. For more information about ArboNET, please contact the Division of Vector-Borne Diseases by phone (970-261-6400) or email ([email protected]).
# Laboratory Diagnosis and Testing

Public health programs responsible for monitoring WNV activity and implementing effective interventions rely on accurate information about human disease and environmental indicators of risk. To be successful, these programs must be supported by diagnostic laboratories capable of performing the required range of tests. Numerous serological and virus detection protocols have been developed to diagnose human infections and to enable environmental surveillance programs to monitor the presence of WNV in vector mosquitoes and non-human vertebrate hosts. The characteristics and uses of these tests are outlined in the sections below.

# Biocontainment - Laboratory Safety Issues

Laboratory-associated infections with WNV have been reported in the literature. The Subcommittee on Arbovirus Laboratory Safety in 1980 reported 15 human infections from laboratory accidents, one of which was attributed to aerosol exposure. In addition, two parenteral inoculations have been reported during work with animals. WNV may be present in blood, serum, tissues, and CSF of infected humans, birds, mammals, and reptiles. The virus has also been found in the oral fluids and feces of birds. Parenteral inoculation with contaminated materials poses the greatest hazard; contact exposure of broken skin is a possible risk. Sharps precautions should be strictly adhered to when handling potentially infectious materials. Workers performing necropsies on infected animals may be at high risk of infection.

Manipulation of infectious stocks of WNV should be conducted in BSL-3 laboratory space. However, diagnostic specimens from any source that have not been tested may be handled under BSL-2 conditions. If a specimen is suspected of harboring infectious WNV, it is recommended that it be manipulated under BSL-3 conditions, such as within a Class II Type A biological safety cabinet. Necropsies of birds or other animals expected to harbor high-titered WNV infections should be performed under BSL-3 conditions. Containment specifications are available in the Centers for Disease Control and Prevention/National Institutes of Health publication Biosafety in Microbiological and Biomedical Laboratories (BMBL), which is available online.

# Shipping of Agents

Shipping and transport of WNV and clinical specimens should follow current International Air Transport Association (IATA) and Department of Commerce recommendations. Because of the threat to the domestic animal population, a U.S. Department of Agriculture (USDA) shipping permit is required for transport of known WNV isolates. For more information, consult the IATA dangerous goods Web site and the Web site of the USDA Animal and Plant Health Inspection Service (APHIS) National Center for Import/Export.

# Human Diagnosis

In most patients, infection with WNV and many of the other arboviruses that cause encephalitis is clinically inapparent or causes a nonspecific viral syndrome. Numerous pathogens cause encephalitis, aseptic meningitis, and febrile disease with clinical symptoms and presentations similar to those caused by WNV and should be considered in the differential diagnosis. Definitive diagnosis of WNV can only be made by laboratory testing using specific reagents.
Selection of diagnostic test procedures should take into consideration the range of pathogens in the differential diagnosis, the criteria for classifying a WNV case as confirmed or probable, and the capabilities of the primary and confirming diagnostic laboratories. The case definitions for neuroinvasive and non-neuroinvasive disease caused by WNV and the other arboviral pathogens present in the United States specify the presence of clinically compatible symptoms accompanied by laboratory evidence of recent infection (Appendix 1).

Appropriate selection of diagnostic procedures and accurate interpretation of findings require information describing the patient and the diagnostic specimen. For human specimens, the following data must accompany sera, CSF, or tissue specimens for results to be properly interpreted and reported: 1) symptom onset date (when known); 2) date of sample collection; 3) unusual immunological status of the patient (e.g., immunosuppression); 4) state and county of residence; 5) travel history (especially travel to flavivirus-endemic areas); 6) history of prior vaccination (e.g., against yellow fever, Japanese encephalitis, or tick-borne encephalitis viruses); and 7) a brief clinical summary including clinical diagnosis (e.g., encephalitis, aseptic meningitis). Minimally, onset and sample collection dates are required to perform and interpret initial screening tests; the remaining information is required to evaluate any specimens positive on initial screening. If possible, a convalescent serum sample taken at least 14 days after the acute sample should be obtained to enable confirmation by serological testing.
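As a simple illustration of these minimum-data requirements, the sketch below screens an incoming specimen record before testing; the field names are hypothetical stand-ins for the data elements listed above.

```python
REQUIRED_FOR_SCREENING = ("onset_date", "collection_date")

RECOMMENDED_FOR_INTERPRETATION = (
    "immune_status", "state_county", "travel_history",
    "flavivirus_vaccination_history", "clinical_summary",
)

def ready_for_initial_screening(record: dict) -> bool:
    """Onset and collection dates are the minimum needed to perform and
    interpret the initial screening tests."""
    return all(record.get(field) for field in REQUIRED_FOR_SCREENING)

def missing_for_interpretation(record: dict) -> list:
    """Fields still needed before a screen-positive result can be fully
    evaluated and reported."""
    return [f for f in RECOMMENDED_FOR_INTERPRETATION if not record.get(f)]
```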
# Serology

The front-line screening assay for laboratory diagnosis of human WNV infection is the IgM assay. Currently, the FDA has cleared four commercially available test kits, from different manufacturers, for detection of WNV IgM antibodies. These four kits are used in many commercial and public health laboratories in the United States. In addition, the CDC-defined IgM and IgG ELISAs can be used; protocols and reagents are available from the CDC DVBD Diagnostic Laboratory (Martin et al. 2000; Johnson et al. 2000). There is also a microsphere-based immunoassay for the detection of IgM antibodies that can differentiate WNV from SLE (Johnson et al. 2005). Because the IgM and IgG ELISA tests can cross-react between flaviviruses (e.g., SLE, dengue, yellow fever, WN), they should be viewed as screening tests only. For a case to be considered confirmed, serum samples that are antibody-positive on initial screening should be evaluated by a more specific test; currently the plaque reduction neutralization test (PRNT) is the recommended test for differentiating between flavivirus infections. Though WNV is the most common cause of arboviral encephalitis in the United States, several other arboviral encephalitides are present in the country and in other regions of the world. Specimens submitted for WNV testing should also be tested by ELISA and PRNT against other arboviruses known to be active or present in the area or in the region where the patient traveled.

# Virus Detection Assays

Numerous procedures have been developed for detecting viable WNV, WNV antigen, or WNV RNA in human diagnostic samples, many of which have been adapted to detecting WNV in other vertebrates and in mosquito samples. These procedures vary in their sensitivity, specificity, and the time required to conduct the test (Table 1).

Table 1. Characteristic sensitivity and time required for West Nile infectious virus, viral RNA, or viral antigen detection assays.

Among the tests listed in Table 1, the VectorTest, antigen capture ELISA, and Rapid Analyte Measurement Platform were developed specifically for testing mosquitoes for WNV antigen, were subsequently adapted to testing bird and other vertebrate samples, and are not used for human diagnostic testing. Additional details about these tests are contained in the following sections on mosquito and bird diagnostic tests. The remaining tests have been used in various human diagnostic assays.

Among the most sensitive procedures for detecting WNV in samples are those using RT-PCR to detect WNV RNA in human CSF, serum, and other tissues. Fluorogenic 5' nuclease techniques (real-time PCR) and nucleic acid sequence-based amplification (NASBA) methods have been developed and validated for specific human diagnostic applications (Briese et al. 2000; Shi et al. 2001; Lanciotti et al. 2000; Lanciotti et al. 2001) and for detecting WNV RNA in blood donations (Busch et al. 2005). WNV presence can also be demonstrated by isolation of viable virus from samples taken from clinically ill humans. Appropriate samples for virus isolation include CSF (serum samples may be useful very early in infection) and brain tissue (taken at biopsy or postmortem). Virus isolation should be performed in known susceptible mammalian (e.g., Vero) or mosquito (e.g., C6/36) cell lines. Mosquito-origin cells may not show obvious cytopathic effect and must be screened by immunofluorescence or RT-PCR. Confirmation of virus isolate identity can be accomplished by indirect immunofluorescence assay (IFA) using virus-specific monoclonal antibodies or by nucleic acid detection. The IFA using well-defined murine monoclonal antibodies (MAbs) is an efficient, economical, and rapid method to identify flaviviruses. MAbs are available that can differentiate WNV and SLE virus from each other and from other flaviviruses. Incorporating MAbs specific for other arboviruses known to circulate in various regions will increase rapid diagnostic capacity (Briese et al. 2000; Shi et al. 2001; Lanciotti et al. 2000).

While these tests can be quite sensitive, virus isolation and RT-PCR to detect WNV RNA in sera or CSF of clinically ill patients have limited utility in diagnosing human WNV neuroinvasive disease, because of the low-level viremia present in most cases at the time of clinical presentation. However, one study demonstrated that combining detection of IgM with detection of WNV RNA in plasma significantly increased the number of WNV non-neuroinvasive (i.e., fever) cases detected (Tilley et al. 2006). Virus isolation or RT-PCR on serum may be helpful in confirming human WNV infection in immunocompromised patients when antibody development is delayed or absent. Immunohistochemistry (IHC) using virus-specific MAbs on brain tissue has been very useful in identifying both human and avian cases of WNV infection. In suspected fatal cases, IHC should be performed on formalin-fixed autopsy, biopsy, and necropsy material, ideally collected from multiple anatomic regions of the brain, including the brainstem, midbrain, and cortex (Bhatnagar et al. 2007).
To maintain Clinical Laboratory Improvement Amendments (CLIA) certification, CLIA recommendations for performing and interpreting human diagnostic tests should be followed. Laboratories doing WNV serology or RNA-detection testing are invited to participate in the annual proficiency testing available from CDC's Division of Vector-Borne Diseases in Fort Collins, Colorado. To obtain additional information about the proficiency testing program and about training in arbovirus diagnostic procedures, contact the Division of Vector-Borne Diseases by phone (970-261-6400) or email ([email protected]).

# Non-Human Laboratory Diagnosis

The following sections discuss techniques that may be applied to mosquitoes and non-human vertebrate specimens collected for the purpose of diagnosing WNV infections. Many of the virus detection procedures are identical to those described in the Human Diagnosis section, but several procedures have been developed specifically for these sample types.

# Mosquitoes

# Identification and Pooling

Mosquitoes should be identified to species or to the lowest feasible taxonomic unit. Specimens are placed into pools of 50 specimens or fewer based on species, sex, location, trap type, and date of collection. Larger pool sizes can be used in some assays, with a loss of sensitivity (Sutherland and Nasci 2007). If resources are limited, testing of mosquitoes for surveillance purposes can be limited to the primary vector species.

# Homogenizing and Centrifugation

After adding the appropriate media, mosquito pools can be macerated or ground by a variety of techniques, including mortar and pestle, vortexing sealed tubes containing one or more copper-clad BBs, or use of commercially available tissue homogenizing apparatus (Savage et al. 2007). After grinding, samples are centrifuged and an aliquot is removed for WNV testing. Because mosquito pools may contain WNV and other pathogenic viruses that may be aerosolized during processing, laboratory staff should take appropriate safety precautions, including use of a Class II Type A biological safety cabinet, appropriate personal protective equipment (PPE), and good biosafety practices.

# Virus Detection

Virus isolation in Vero cell culture remains the standard for confirmation of WNV-positive pools (Beaty et al. 1989, Savage et al. 1999, Lanciotti et al. 2000). Virus isolation provides the benefit of detecting other viruses that may be contained in the mosquitoes, a feature that is lost with test procedures that target virus-specific nucleotide sequences or proteins. However, Vero cell culture is expensive and requires specialized laboratory facilities; thus, nucleic acid assays have largely replaced virus isolation as the detection and confirmatory assay methods of choice. Virus isolation requires that mosquito pools be ground in a medium that protects the virus from degradation, such as BA-1 (Lanciotti et al. 2000), and that an aliquot be preserved at -70°C to retain virus viability for future testing. Nucleic acid detection assays are the most sensitive assays for detection and confirmation of WNV in mosquito pools (Lanciotti et al. 2000). Real-time RT-PCR assays with different primer sets may be used for both detection and confirmation of virus in mosquito pools. Standard RT-PCR primers are also available (Kuno et al. 1998). Nucleic acids may be extracted from an aliquot of the mosquito pool homogenate by hand, using traditional methods or kits, or with automated robots in high-throughput laboratories (Savage et al. 2007).
West Nile virus antigen detection assays are available in ELISA format (Tsai et al. 1987) and in two commercial kits that employ lateral-flow wicking assays developed specifically for testing mosquitoes (Komar et al. 2002, Panella et al. 2001, Burkhalter et al. 2006). The antigen capture ELISA of Hunt et al. (2002) and the RAMP (Rapid Analyte Measurement Platform; Response Biomedical Corp., Burnaby, British Columbia, Canada) WNV test are approximately equal in sensitivity and detect WNV in mosquito pools at concentrations as low as 10^3.1 PFU/ml (Burkhalter et al. 2006). The VecTest (Medical Analysis Systems, Inc., Camarillo, CA) is less sensitive, detecting WNV in mosquito pools at concentrations of 10^5.17 PFU/ml. The VecTest (evaluated by Burkhalter et al. 2006) is no longer available, but is similar to a lateral-flow wicking assay marketed as the VecTOR Test (VecTOR Test Systems, Inc., Thousand Oaks, CA). Although the antigen detection assays are less sensitive than nucleic acid detection assays, they have been evaluated in operational surveillance programs (Mackay et al. 2004; Lampman et al. 2006; Williges et al. 2009; Kesavaraju et al. 2012) and can provide valuable WNV infection rate data when employed consistently in a mosquito surveillance program.
# Non-Human Vertebrates
# Serology
Diagnostic kits for serologic diagnosis of WNV infection in clinically ill domestic animals are not commercially available. An IgM-capture ELISA has been developed for use in horses and can be readily adapted to other animal species for which anti-IgM antibody reagents are commercially available. Alternatively, seroconversion demonstrated by IgG, neutralizing antibody, or hemagglutination inhibition (HAI) assays in acute and convalescent serum samples collected 2-3 weeks apart can be used for screening. The latter two approaches do not require species-specific reagents and thus have broad applicability. The ELISA format may also be used as an inhibition or competition ELISA, which likewise avoids species-specific reagents. A popular blocking ELISA has been applied to a variety of vertebrate species with very high specificity and sensitivity, reducing the need for a second confirmatory test (Blitvich et al. 2003a, 2003b). Similarly, the microsphere immunoassay, when used comparatively with WNV antigen-coated beads and SLEV antigen-coated beads, performs with high specificity and sensitivity (Johnson et al. 2005). Typically, a 90% plaque-reduction neutralization test (PRNT90) with endpoint titration is used to confirm serologic results in non-human vertebrates. Plaque-reduction thresholds below 80% are not recommended. Because of the cross-reactive potential of anti-flavivirus antibodies, the PRNT must be comparative, performed simultaneously with St. Louis encephalitis virus (SLEV). PRNT requires the use of a BSL-3 laboratory and Vero cell culture. The PRNT has been adapted to BSL-2 laboratory conditions using a recombinant chimeric virus featuring the WNV envelope glycoprotein gene in a yellow fever virus backbone (Chimeravax®, originally developed as a live-attenuated vaccine candidate). For PRNTs, the Chimeravax provided equivalent results for bird sera, and 10- to 100-fold lower titers for equine sera. The same serologic techniques applied to clinically ill animals may also be used for serosurveys of healthy animals or for healthy sentinel animals sampled serially.
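As a minimal sketch of the comparative PRNT logic, the function below applies the 4-fold criterion discussed in the next paragraph to a pair of reciprocal endpoint titers. The positivity cutoff of 10 is an illustrative placeholder, not a prescribed value.

```python
# Minimal sketch of comparative PRNT interpretation for non-human
# vertebrates, applying the 4-fold criterion discussed in the next
# paragraph. Titers are reciprocal endpoint titers (e.g., 320 = 1:320);
# the positivity cutoff of 10 is an illustrative placeholder.

def interpret_prnt(wnv_titer: int, slev_titer: int,
                   positive_cutoff: int = 10) -> str:
    """Classify a comparative WNV/SLEV PRNT result."""
    if wnv_titer < positive_cutoff and slev_titer < positive_cutoff:
        return "Negative"
    if wnv_titer >= 4 * slev_titer:
        return "WNV infection"
    if slev_titer >= 4 * wnv_titer:
        return "SLEV infection"
    # Titers equal or within 2-fold: cross-reaction cannot be ruled out.
    return "Undifferentiated flavivirus infection"

print(interpret_prnt(320, 40))  # WNV infection (>= 4-fold greater)
print(interpret_prnt(80, 40))   # Undifferentiated flavivirus infection
```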
Serologic techniques for WNV diagnosis should not be applied to carcasses, as in many cases of fatal WNV infection the host dies before a detectable immune response develops. Furthermore, some morbid or moribund animals that have WNV antibodies from past infection may currently be infected with a pathogen other than WNV. Fatal WNV cases should have readily detectable WNV in their tissues. As with human diagnostic samples, serologic results from non-human vertebrates must be interpreted with caution and with an understanding of the cross-reactive tendencies of WNV and other flaviviruses. For primary WNV infections, a low rate of cross-reactivity is expected (<5%), and misdiagnoses are avoided by requiring that the reciprocal anti-WNV titer be at least 4-fold greater than the corresponding anti-SLEV titer. In rare cases, a secondary flavivirus infection due to WNV in a host with a history of SLEV infection may boost the older anti-SLEV titer to a greater level than the anti-WNV titer, resulting in a misdiagnosis of SLEV infection, a phenomenon known as "original antigenic sin." Some serum samples will have endpoint titers for WNV and SLEV that are the same or only 2-fold different. While it is possible that such a result reflects past infection with both viruses, cross-reaction from one or the other, or even from a third, indeterminate flavivirus, cannot be ruled out. Such a result should be reported as "undifferentiated flavivirus infection."
# Virus Detection
Methods for WNV detection, isolation, and identification are the same as described for human and mosquito diagnostics. Specimens typically used are tissues and/or fluids from acutely ill and/or dead animals. Virus detection in apparently healthy animals is low-yield, inefficient, and not cost-effective, and should not be considered for routine surveillance programs. In bird, mammal, and reptile carcasses, tissue tropism varies among individuals within a species and across species. Some animals, such as horses, resemble humans in having few tissues with detectable virus particles or viral RNA at necropsy. Others, such as certain bird species, may have fulminant infections with high viral loads in almost every tissue.
# Prevention and Control
# Integrated Vector Management
Mosquito abatement programs successfully employ integrated pest management (IPM) principles to reduce mosquito abundance, providing important community services that protect quality of life and public health (Rose 2001). Prevention and control of WNV and other zoonotic arboviral diseases are accomplished most effectively through a comprehensive integrated vector management (IVM) program applying the principles of IPM. IVM is based on an understanding of the underlying biology of the arbovirus transmission system and uses regular monitoring of vector mosquito populations and WNV activity levels to determine if, when, and where interventions are needed to keep mosquito numbers below levels that produce risk of human disease, and to respond appropriately when risk exceeds acceptable levels. Operationally, IVM is anchored by a monitoring program providing data that describe:
- Conditions and habitats that produce vector mosquitoes.
- Abundance of those mosquitoes over the course of a season.
- WNV transmission activity levels, expressed as the WNV infection rate in mosquito vectors.
- Parameters that influence local mosquito populations and WNV transmission.
These data inform decisions about implementing mosquito control activities appropriate to the situation, such as:
- Source reduction through habitat modification.
- Larval mosquito control using the appropriate methods for the habitat.
- Adult mosquito control using pesticides applied from trucks or aircraft when established thresholds have been exceeded.
- Community education efforts related to WNV risk levels and intervention activities.
Monitoring also provides quality control for the program, allowing evaluation of:
- Effectiveness of larval control efforts.
- Effectiveness of adult control efforts.
- Causes of control failures (e.g., undetected larval sources, pesticide resistance, equipment failure).
# Surveillance Programs
Effective IVM for WNV prevention relies on a sustained, consistent surveillance program that targets vector species. The objectives are to identify and map larval production sites by season, monitor adult mosquito abundance, monitor vector infection rates, document the need for control based on established thresholds, and monitor control efficacy. Surveillance can be subdivided into three categories based on the objective of the surveillance effort. The surveillance elements are complementary, however, and in combination they provide the information required for IVM decisions.
# Larval Mosquito Surveillance
Larval surveillance involves identifying and sampling a wide range of aquatic habitats to locate the sources of vector mosquitoes, maintaining a database of these locations, and keeping a record of the larval control measures applied to each. This requires trained inspectors to identify larval production sites, collect larval specimens on a regular basis from known larval habitats, and perform systematic surveillance for new sources. This information is used to determine where and when source reduction or larval control efforts should be implemented.
# Adult Mosquito Surveillance
Adult mosquito surveillance is used to quantify the relative abundance of adult vector mosquitoes and to describe their spatial distribution. It also provides specimens for evaluating the incidence of WNV infection in vector mosquitoes (see the Surveillance chapter for more information). Adult mosquito surveillance programs require standardized, consistent surveillance efforts in order to provide data appropriate for monitoring trends in vector activity, setting action thresholds, and evaluating control efforts. Various methods are available for monitoring adult mosquitoes. Most frequently used in WNV surveillance are CO2-baited CDC miniature-style light traps for monitoring host-seeking Culex tarsalis (and potential bridge vector species) and gravid traps for monitoring Cx. quinquefasciatus, Cx. pipiens, and Cx. restuans populations. Adult mosquito surveillance should consist of a series of collecting sites at which mosquitoes are sampled using both gravid and light traps on a regular schedule. Fixed trap sites allow monitoring of trends in mosquito abundance and virus activity over time and are essential for obtaining information to evaluate WNV risk and guide control efforts. Additional trap sites can be used on an ad hoc basis to provide additional information about the extent of virus transmission activity and the effectiveness of control efforts.
# WNV Transmission Activity
Monitoring WNV transmission activity in the environment before human cases occur is an essential component of an IVM program to reduce WNV risk.
Without this information, it is impossible to set thresholds for vector mosquito population management or to take appropriate action before an outbreak is in progress (Table 2). WNV transmission activity can be monitored by tracking the WNV infection rate in vector mosquito populations, WNV-related avian mortality, seroconversion to WNV in sentinel chickens, seroprevalence/seroconversion in wild birds, and WNV veterinary (primarily horse) cases. The methods for monitoring and calculating indices of WNV activity are described in detail in the Environmental Surveillance section of the Surveillance chapter.
# Mosquito Control Activities
Guided by the surveillance elements of the program, integrated mosquito control efforts are implemented to maintain vector populations below thresholds that would facilitate WNV amplification and increase human risk. Failing that, efforts to reduce the abundance of WNV-infected biting adult mosquitoes must be implemented quickly to prevent risk levels from increasing to the point of a human disease outbreak. Properly implemented, a program monitoring mosquito abundance and WNV activity in the vector mosquito population will provide warning when risk levels are increasing. Because of delays in onset of disease following infection, and further delays related to seeking medical care, diagnosis, and reporting of human disease, WNV surveillance based on human case reports lags behind increases in risk and is not sufficiently sensitive to allow timely implementation of outbreak control measures.
# Larval Mosquito Control
The objective of the larval mosquito control component of an IVM program is to manage mosquito populations before they emerge as adults. This can be an efficient method of managing mosquito populations if the breeding sites are accessible. However, larval control may not attain the levels of mosquito population reduction needed to maintain WNV risk at low levels and must be accompanied by measures to control adult mosquito populations as well. In outbreak situations, larval control complements adult mosquito control measures by preventing new vector mosquitoes from being produced. However, larval control alone cannot stop a WNV outbreak once virus amplification has reached levels causing human infections. Numerous methods are available for controlling larval mosquitoes. Source reduction is the elimination or removal of habitats that produce mosquitoes; it can range from draining roadside ditches to properly disposing of discarded tires and other trash containers. Only through a thorough surveillance program will mosquito sources be identified and appropriately removed. To effectively control vector mosquito populations through source reduction, all sites capable of producing vector mosquitoes must be identified and routinely inspected for the presence of mosquito larvae or pupae. This is difficult to accomplish with the WNV vector species Cx. quinquefasciatus and Cx. pipiens, which readily use cryptic sites such as storm drainage systems, grey-water storage cisterns, and stormwater runoff impoundments. Vacant houses with unmaintained swimming pools, ponds, and similar water features are difficult to identify and contribute significant numbers of adult mosquitoes to local populations. To manage mosquitoes produced in habitats that are not conducive to source reduction, pesticides registered by EPA for larval mosquito control are applied when larvae are detected.
Several larval mosquito control pesticides are available; a discussion of their attributes and limitations is beyond the scope of this overview. More detailed information about pesticides used for larval mosquito control is available from the U.S. EPA. No single larvicide product will work effectively in every habitat where WNV vectors are found. An adequate, properly trained field staff is required to identify larval production sites and implement the appropriate management tools for each site.
# Adult Mosquito Control
Source reduction and larvicide treatments may be inadequate to maintain vector populations at levels low enough to limit virus amplification. The objective of the adult mosquito control component of an IVM program is to complement the larval management program by reducing the abundance of adult mosquitoes in an area, thereby reducing the number of eggs laid in breeding sites. Adult mosquito control is also intended to reduce the abundance of biting, infected adult mosquitoes in order to prevent them from transmitting WNV to humans and to break the mosquito-bird transmission cycle. In situations where vector abundance is increasing above acceptable levels, targeted adulticide applications using pesticides registered by EPA for this purpose can help maintain vector abundance below threshold levels. More detailed information about pesticides used for adult mosquito control is available from the U.S. EPA. Pesticides for adult mosquito control can be applied from hand-held application devices or from trucks or aircraft. Hand-held or truck-based applications are useful for managing relatively small areas but are limited in their capacity to treat large areas quickly during an outbreak. In addition, gaps in coverage may occur during truck-based applications because of limitations of the road infrastructure. Aerial application of mosquito control adulticides is required when large areas must be treated quickly, and can be particularly valuable because controlling WNV vectors such as Cx. quinquefasciatus or Cx. pipiens often requires multiple, closely spaced treatments (Andis et al. 1987). Both truck- and aerially applied pesticides for adult mosquito control are applied using ultra-low-volume (ULV) technology, in which a very small volume of pesticide is applied per acre as an aerosol of minute droplets designed to contain sufficient pesticide to kill mosquitoes contacted by the droplets. Information describing ULV spray technology and the factors affecting the effectiveness of ground- and aerially applied ULV pesticides is reviewed in Mount et al. 1996, Mount 1998, and Bonds 2012.
# Risk and Safety of Vector Control Pesticides and Practices
Insecticides to control larval and adult mosquitoes are registered specifically for that use by the U.S. Environmental Protection Agency (EPA). Instructions provided on the product labels prescribe the required application and use parameters and must be followed carefully. Properly applied, these products do not negatively affect human health or the environment. Research has demonstrated that ULV application of mosquito control adulticides did not produce detectable exposure or increases in asthma events among persons living in treated areas (Karpati et al. 2004, Currier et al. 2005, Duprey et al. 2008). The risks from WNV demonstrably exceed the risks from mosquito control practices (Davis and Peterson 2008, Macedo et al. 2010, Peterson et al. 2006).
# Legal Action to Achieve Access or Control
Individually owned private properties may be major sources of mosquito production. Examples include accumulations of discarded tires or other trash, and neglected swimming pools and similar water features that become stagnant and produce mosquitoes. Local public health statutes or public nuisance regulations may be employed to gain access for surveillance and control, or to require the property owner to mitigate the problem. Executing such legal actions may be a prolonged process, during which adult mosquitoes continue to be produced. Proactive communication with residents and public education programs may alleviate the need for legal action. However, legal efforts may be required to eliminate persistent mosquito production sites.
# Quality Control
Pesticide products and application procedures (for both larval and adult control) must be evaluated periodically to ensure that an effective rate of application is being used and that the desired degree of control is obtained. Application procedures should be evaluated regularly (minimally, once each season) to ensure that equipment is functioning properly, delivering the correct dosages and droplet parameters, and to determine the appropriate label rates for local use. Finally, mosquito populations should be evaluated routinely to ensure that insecticide resistance is not emerging.
# Records
Surveillance data describing vector sources, abundance, and infection rates; records of control efforts (e.g., source reduction, larvicide applications, adulticide applications); and quality control data must be maintained and used to evaluate IVM needs and performance. Long-term data are essential for tracking trends and evaluating levels of risk.
# Insecticide Resistance Management
To delay or prevent the development of insecticide resistance in vector populations, integrated vector management programs should include a resistance management component (Florida Coordinating Council on Mosquito Control 1998). Ideally, this should include annual monitoring of the status of resistance in the target populations to:
- Provide baseline data for program planning and pesticide selection before the start of control operations.
- Detect resistance at an early stage so that timely management can be implemented.
- Continuously monitor the effect of control strategies on insecticide resistance.
Monitoring resistance in the vector population is essential and is useful in determining the potential causes of control failures, should they occur. CDC has developed an assay to determine whether a particular pesticide formulation (the combination of the active ingredient and inactive ingredients) is able to kill mosquito vectors. The technique, referred to as the CDC bottle bioassay, is simple, rapid, and economical compared with alternatives. The results can help guide the choice of insecticide used for spraying. A practical laboratory manual that describes how to perform and interpret the CDC bottle bioassay is available online. For additional information about obtaining and performing the bottle bioassay, contact CDC at [email protected]. The integrated vector management program should include options for managing resistance that are appropriate for local conditions. The techniques regularly used include the following:
- Management by moderation: preventing the onset of resistance by:
o Using dosages no lower than the lowest label rate, to avoid genetic selection.
o Using less frequent applications.
o Using chemicals of short environmental persistence.
o Avoiding slow-release formulations that increase selection for resistance.
o Avoiding use of the same class of insecticide to control both adults and immature stages.
o Applying insecticides locally. Currently, most districts treat only hot spots; area-wide treatments are used only during public health alerts or outbreaks.
o Leaving certain generations, population segments, or areas untreated.
o Establishing high thresholds for treating nuisance mosquitoes with insecticides except during public health alerts or outbreaks.
- Management by continued suppression. This strategy is used in regions of high value or persistent high risk (e.g., heavily populated regions or locations with recurring WNV outbreaks) where mosquitoes must be kept at very low densities. It does not mean saturating the environment with pesticides, but rather saturating the defense mechanisms of the insect with insecticide dosages that can overcome resistance. This is achieved by applying dosages within label rates but sufficiently high to be lethal to heterozygous individuals that are partially resistant. If the heterozygous individuals are killed, resistance will be slow to emerge. This method should not be used if any significant portion of the population in question is fully resistant. Another, more commonly used approach is the addition of synergists that inhibit existing detoxification enzymes and thus eliminate the competitive advantage of partially resistant individuals. Commonly, the synergist of choice in mosquito control is piperonyl butoxide (PBO).
- Management by multiple attack consists of achieving effective control through several different and independent pressures, such that selection by any one of them is below the level required for the development of resistance in the mosquito population. This strategy involves using insecticides with different modes of action in mixtures or in rotations. There are economic limitations associated with this approach (e.g., the costs of switching chemicals or of storage space for them), and critical variables in addition to pesticide mode of action must be taken into consideration (e.g., mode of resistance inheritance, frequency of mutations, population dynamics of the target species, availability of refuges, and migration). General recommendations are to evaluate resistance patterns at least annually and to reassess the need for rotating insecticides at annual or longer intervals.
# Continuing Education
Continuing education for operational vector control workers is required to instill or refresh knowledge related to practical mosquito control. Training focuses on safety, applied technology, and the requirements of the regulated certification programs mandated by most states. Training should also include information on the identification of mosquito species, their behavior and ecology, and appropriate methods of control.
# Vector Management in Public Health Emergencies
Intensive early-season adult mosquito control efforts can decrease WNV transmission activity (Lothrop et al. 2008) and reduce human risk. However, depending on local conditions, proactive vector management may not maintain mosquito populations at levels low enough to avoid the development of outbreaks. As evidence of sustained or intensified virus transmission in a region increases, emergency vector control efforts to reduce the abundance of infected, biting adult mosquitoes must be implemented.
This is particularly important in areas where vector surveillance indicates that infection rates in Culex mosquitoes are continually increasing or are being sustained at high levels, that potential accessory vectors (e.g., mammal-feeding mosquito species) are infected with WNV, or that multiple human cases or viremic blood donors have been reported. Delaying adulticide applications until numerous human cases occur negates the value and purpose of the surveillance system. Timely application of adulticides interrupts WNV transmission and prevents human cases (Carney et al. 2005).
# Guidelines for a Phased Response to WNV Surveillance Data
The objective of a phased response to WNV surveillance data is to implement public health interventions appropriate to the level of WNV risk in a community (Table 2). A surveillance program adequate to monitor WNV activity levels associated with human risk must be in place in order to detect epizootic transmission in advance of human disease outbreaks. The surveillance programs and environmental surveillance indicators described above demonstrate that enzootic/epizootic WNV transmission can be detected several weeks before the onset of human disease, allowing implementation of effective interventions (Mostashari et al. 2003, Unlu et al. 2009). All communities should prepare for WNV activity. For reasons that are not well understood, some regions are at risk for higher levels of WNV transmission and epidemics than others (CDC 2010), but there is evidence of WNV presence, and of the risk of human disease and outbreaks, in most counties of the contiguous 48 states. The ability to develop a useful phased response depends upon some form of WNV monitoring in the community to provide the information needed to gauge risk levels. Measures of the intensity of WNV epizootic transmission in a region, preferably from environmental surveillance indicators, should be considered when determining the level of the public health response. As noted previously, human case reports lag weeks behind human infection events and are poor indicators of current risk levels. Effective public health action depends on interpreting the best available surveillance data and initiating prompt and aggressive intervention when necessary (a schematic sketch of this decision logic follows at the end of this section).
# Individual and Community-Based Prevention Education
Without an effective human vaccine, the only way to prevent human infection and disease is to prevent infected mosquitoes from biting people. This can be accomplished through community-based integrated vector management programs, as described above, and through effective personal protection behaviors and practices, such as mosquito avoidance, use of personal repellents, and removal of residential mosquito sources. Prevention messages and programs promoting individual and community measures to decrease the risk of WNV infection should be employed. Public education and risk communication activities should reflect the degree of WNV risk in a community, as noted in Table 2. Messages should acknowledge the seriousness of the disease without promoting undue fear or panic in the target population. Fear-driven messages may heighten the sense of powerlessness people express in dealing with vector-borne diseases like WNV. Messages should be clear and consistent with the recommendations of coordinating agencies. Use plain language, and adapt materials for low-literacy and non-English-speaking audiences.
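To make the phased-response idea concrete, the sketch below maps a mosquito-based surveillance indicator to a response level. The choice of indicator (the Vector Index, defined in a later appendix) and the numeric trigger values are hypothetical placeholders; actual phases and thresholds should come from Table 2 and locally derived surveillance data.

```python
# Schematic sketch of phased-response decision logic. The Vector Index
# thresholds below are hypothetical placeholders; real programs derive
# trigger values from local historical surveillance data (see Table 2).

PHASES = [
    # (upper bound on Vector Index, response appropriate to that level)
    (0.10, "Low risk: routine larval control and public education"),
    (0.50, "Elevated risk: targeted adulticiding, intensified messaging"),
    (float("inf"), "Outbreak risk: emergency area-wide adult mosquito control"),
]

def response_level(vector_index: float) -> str:
    """Return the response phase for the current Vector Index."""
    for upper_bound, action in PHASES:
        if vector_index < upper_bound:
            return action

print(response_level(0.07))  # routine-phase response
print(response_level(0.62))  # outbreak-phase response
```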
# MESSAGING ABOUT PERSONAL AND HOUSEHOLD PREVENTION
The best way to prevent West Nile virus disease is to avoid mosquito bites.
- Use insect repellents when you go outdoors.
- Wear long sleeves and pants during dawn and dusk.
- Repair or install screens on windows and doors. Use air conditioning, if you have it.
- Remove mosquito sources from around your home.
# MESSAGING ABOUT COMMUNITY PROTECTION
Using insecticides to control adult mosquitoes, whether applied from trucks at ground level or by aircraft, is a common, safe, and effective practice in the United States.
- In many parts of the country, spraying insecticides is part of an annual effort to reduce the number of adult mosquitoes, which in turn reduces the risk of WNV and other diseases spread by mosquitoes.
- Insecticides used in aerial spraying are no different from those sprayed by trucks. Aerial spraying can cover larger areas faster, which helps to reduce the number of infected mosquitoes and slow the spread of illness.
- These products are specifically designed to reduce the number of adult mosquitoes.
- Insecticides used in mosquito control are reviewed for safety and effectiveness and registered by EPA.
# Individual-Level Actions and Behaviors to Reduce WNV Risk
# Repellents
CDC recommends the use of products containing active ingredients that have been registered by the U.S. Environmental Protection Agency (EPA) for use as repellents applied to skin and clothing. EPA registration of repellent active ingredients indicates the materials have been reviewed and approved for efficacy and human safety when applied according to the instructions on the label.
Repellents for use on skin and clothing: CDC evaluation of information contained in peer-reviewed scientific literature and data available from EPA has identified several EPA-registered products that provide repellent activity sufficient to help people avoid the bites of disease-carrying mosquitoes. Products containing these active ingredients typically provide reasonably long-lasting protection:
- DEET (Chemical Name: N,N-diethyl-m-toluamide or N,N-diethyl-3-methylbenzamide)
- Picaridin (KBR 3023; Chemical Name: 2-(2-hydroxyethyl)-1-piperidinecarboxylic acid 1-methylpropyl ester)
- Oil of lemon eucalyptus* or PMD (Chemical Name: para-menthane-3,8-diol), the synthesized version of oil of lemon eucalyptus
- IR3535 (Chemical Name: 3-[N-butyl-N-acetyl]-aminopropionic acid, ethyl ester)
* Note: This recommendation refers to EPA-registered repellent products containing the active ingredient oil of lemon eucalyptus (or PMD). "Pure" oil of lemon eucalyptus (e.g., essential oil) has not received similar validated testing for safety and efficacy, is not registered with EPA as an insect repellent, and is not covered by this CDC recommendation.
EPA characterizes the active ingredients DEET and picaridin as "conventional repellents," and oil of lemon eucalyptus, PMD, and IR3535 as "biopesticide repellents," which are derived from natural materials. More information on repellent active ingredients is available from EPA.
Published data indicate that repellent efficacy and duration of protection vary considerably among products and among mosquito species, and are markedly affected by ambient temperature, amount of perspiration, exposure to water, abrasive removal, and other factors. In general, higher concentrations of active ingredient provide a longer duration of protection, regardless of the active ingredient, although concentrations above approximately 50% do not offer a marked increase in protection time.
Products with <10% active ingredient may offer only limited protection, often 1-2 hours. Products with sustained-release or controlled-release (microencapsulated) formulations, even with lower active ingredient concentrations, may provide longer protection times.
Repellents for use on clothing: Certain products containing permethrin are recommended for use on clothing, shoes, bed nets, and camping gear, and are registered with EPA for this use. Permethrin is highly effective as an insecticide and as a repellent. Permethrin-treated clothing repels mosquitoes and retains this effect after repeated laundering. Permethrin repellents should be reapplied following the label instructions. Some commercial clothing products are available pretreated with permethrin. Additional information about CDC's repellent recommendations is available online.
Mosquito bites can also be avoided simply by not going outdoors when mosquitoes are biting, and recommendations to avoid outdoor activity when and where high WNV activity levels have been detected are a component of prevention programs. Recommendations to avoid being outdoors from dusk to dawn may conflict with neighborhood social patterns, community events, or the practices of persons without air conditioning. It is important to communicate that the primary WNV vectors are active from dusk until dawn, and to emphasize that repellent use is protective and should be practiced when outdoors during these prime mosquito-biting hours.
# Reduce Mosquito Production at the Home
Encourage residents to regularly empty standing water from items outside the home, such as rain gutters, flowerpots, old tires, empty containers, buckets, and wading pools. Water in birdbaths should be changed at least once per week. CDC provides resources that list these and other steps for reducing mosquito populations (www.cdc.gov/ncidod/dvbid/westnile/qa/habitats.htm).
# Community-Level Actions to Reduce WNV Risk
# Community Protection Measures
At the community level, reporting dead birds and nuisance mosquito problems, advocating for organized mosquito abatement, and participating in community mobilization projects that address sources of mosquitoes such as trash, standing water, or neglected swimming pools are activities that can help protect individuals and at-risk groups.
# Communicating About Mosquito Control
Area-wide control of mosquitoes that transmit WNV is done most effectively at the community level. Public understanding and acceptance of emergency adult mosquito control operations using insecticides is critical to their success, especially where these measures are unfamiliar. Questions about the products being used, their safety, and their effects on the environment are common. Improved communication about surveillance, and about how decisions to use mosquito adulticides are made, may help residents weigh the risks and benefits of control. When possible, provide detailed information about the schedule for adulticiding through newspapers, radio, government-access television, the Internet, recorded phone messages, or other means your agency uses to communicate successfully with its constituencies.
# Community Engagement and Education
Knowing the norms and cultural practices of communities is invaluable when informing the public, and for gaining support and assistance for routine vector-management practices and enhanced personal protection during outbreaks.
It is essential to know how targeted audiences communicate with each other and with government officials, and to identify the key audiences and how to reach them. Identifying which risk communication principles work best can better prepare the community to provide information effectively during an outbreak. Translating complex scientific jargon into understandable concepts promotes community understanding and acceptance. The following sections describe selected best practices for targeting high-risk groups, offer suggestions for cultivating partnerships with media and communities, and present selected outreach measures for mobilizing communities.
# Prevention Strategies for High-Risk Groups
Audience members have different disease-related concerns and motivations for action. Proper message targeting (including the use of plain language) permits better use of limited communication and prevention resources. The following population segments require specific targeting:
# Persons over age 50
While persons of any age can be infected with WNV, surveillance data indicate that persons over age 50 are at higher risk for severe disease and death due to WNV infection. Collaborate with organizations that have established relationships with mature adults, such as AARP, senior centers, and programs for adult learners. Include images of older adults in promotional material. Identify activities in your area through which older adults may be exposed to mosquito bites (e.g., jogging, walking, golf, and gardening).
# Persons with Outdoor Exposure
Data suggest that persons engaged in extensive outdoor work or recreational activities are at greater risk of being bitten by mosquitoes that may be infected with WNV. Develop opportunities to inform people engaged in outdoor activities about WNV. Encourage the use of repellent and protective clothing, particularly if outdoor activities occur during the dusk-to-dawn hours. Local spokespersons (e.g., union officials, job-site supervisors, golf pros, sports organizations, lawn care professionals, public works officials, gardening experts) may be useful collaborators. Place messages in locations where people engage in outdoor activities (e.g., parks, golf courses, hiking trails).
# Homeless Persons
Extensive outdoor exposure and limited financial resources in this group present special challenges. Application of repellents to exposed skin and clothing may be the most appropriate prevention measure for this population. Work with social service groups in your area to educate this population segment and provide repellents.
# Residences Lacking Window Screens
The absence of intact window and door screens is a likely risk factor for exposure to mosquito bites. Focus attention on the need to repair screens and provide access to resources for doing so. Partner with community organizations that can assist older persons or others facing financial or physical barriers to screen installation or repair.
# Partnerships with Media and the Community
Cultivate relationships with the media (radio, TV, newspaper, web-based news outlets). Obtain media training for at least one member of your staff, and designate that individual as the organization's spokesperson. Develop clear press releases and an efficient system for answering press inquiries. Many communities have heard similar prevention messages repeated for several years, and securing the public's attention when risk levels increase can be a challenge.
Evaluate and update WNV prevention messages annually, and test new messages with different population segments to evaluate their effectiveness. Develop partnerships with agencies and organizations that have relationships with populations at higher risk (such as persons over 50) or that are recognized as community leaders (e.g., churches, service groups). Working through sources trusted by the target audience can heighten the credibility of, and attention to, messages. Partnerships with businesses that sell materials to fix or install window screens, or that sell insect repellent, may be useful in some settings (e.g., local hardware stores, grocery stores).
# Community Mobilization and Outreach
Community mobilization can improve education and help achieve behavior change goals. Promote the concept that health departments and mosquito control programs require community assistance to reduce WNV risk. A community task force that includes civic, business, public health, and environmental interests can be valuable in achieving buy-in from various segments of the community and in developing a common message. Community mobilization activities can include clean-up days to eliminate mosquito breeding sites. Community outreach involves presenting messages in person, in addition to media and educational materials, and involving citizens in prevention activities. Hearing the message of personal prevention from community leaders can validate the importance of the disease, and health promotion events and activities reinforce the importance of prevention in a community setting. Selected tools and resources that may enhance outreach efforts are listed here:
# Social Media
The increasing popularity of social media has been accompanied by a rise in health information-seeking through these interactive outlets. Outreach can be conducted using Twitter, Facebook, YouTube, blogs, and other websites that may reach constituents less connected to traditional media sources. Social media posts can be used as attention-grabbers, providing links that direct users to webpages or other resources with more complete information.
# Online Resources
The Internet has become a primary source of health information for many Americans. Unfortunately, it is impossible to police the vast amount of content available online for quality and accuracy. Encourage constituents to seek advice from credible sources, and make sure local official public health agency websites are clear, accurate, and up to date. Useful information is available from a number of resources. The U.S. Environmental Protection Agency (EPA) is the government's regulatory agency for insecticide and insect repellent use, safety, and effectiveness, and provides information about mosquito control insecticides and repellents, including guidance for using repellents safely and a search tool (/#searchform) that allows the user to examine the protection time afforded by registered repellents containing various concentrations of the active ingredients. A number of other organizations have developed useful tools and information that can be adapted to local needs; examples include the American Mosquito Control Association (www.mosquito.org) and the National Pesticide Information Center (NPIC) (www.npic.orst.edu).
# BACKGROUND
The establishment of West Nile (WN) virus across North America has been accompanied by expanded efforts to monitor WN virus transmission activity in many communities.
Surveillance programs use various indicators to demonstrate virus activity. These include detecting evidence of virus in dead birds, dead horses, and mosquitoes, and detecting antibody against WN virus in sentinel birds, wild birds, or horses (Reisen & Brault 2007). While all of these surveillance practices can demonstrate the presence of WN virus in an area, few provide reliable, quantitative indices that are useful in predictive surveillance programs. Only indices derived from a known and quantifiable surveillance effort conducted over time in an area will provide information that adequately reflects trends in virus transmission activity related to human risk. Of the practices listed above, surveillance effort is controlled and quantifiable only in mosquito-based and sentinel-chicken-based programs. In these programs, the number of sentinel chicken flocks and chickens, and the number of mosquito traps set per week, are known, allowing calculation of meaningful infection rates that reflect virus transmission activity.
# PREMISE BEHIND DEVELOPING THE VECTOR INDEX
Mosquito-based arbovirus surveillance provides three pieces of information: the variety of species comprising the mosquito community; the density of each species' population (in terms of the number collected per trap unit of a given trap type); and, if the specimens are tested for the presence of arboviruses, the incidence of the agent in the mosquito population. Taken individually, each parameter describes one aspect of the vector community that may affect human risk, but the individual elements do not give a comprehensive estimate of the number of potentially infectious vectors seeking hosts at a given time in the surveillance area. To adequately express the arbovirus transmission risk posed by a vector population, information from all three parameters (vector species presence, vector species density, vector species infection rate) must be considered. The Vector Index (VI) combines all three parameters quantified through standard mosquito surveillance procedures into a single value (Gujaral et al. 2007, Kwan et al. 2012). The VI is simply the estimated average number of infected mosquitoes collected per trap night, summed for the key vector species in the area. Summing the VI for the key vector species incorporates the contributions of more than one species and recognizes that West Nile (WN) virus transmission may involve one or more primary vectors and several accessory or bridge vectors in an area.
# Deriving the Vector Index from routine mosquito surveillance data
The Vector Index is expressed as:
VI = Σi (Ni × Pi)
where, for each key vector species i, Ni is the average number of mosquitoes collected per trap night and Pi is the estimated WNV infection rate (the proportion of mosquitoes infected), with the sum taken over the key vector species in the area.
# Background
Arthropod-borne viruses (arboviruses) are transmitted to humans primarily through the bites of infected mosquitoes, ticks, sand flies, or midges. Other modes of transmission for some arboviruses include blood transfusion, organ transplantation, perinatal transmission, consumption of unpasteurized dairy products, breastfeeding, and laboratory exposures. More than 130 arboviruses are known to cause human disease. Most arboviruses of public health importance belong to one of three virus genera: Flavivirus, Alphavirus, and Bunyavirus.
# Clinical description
Most arboviral infections are asymptomatic. Clinical disease ranges from mild febrile illness to severe encephalitis. For the purposes of surveillance and reporting, arboviral disease cases are often categorized into two primary groups based on their clinical presentation: neuroinvasive disease and non-neuroinvasive disease.
Neuroinvasive disease: Many arboviruses cause neuroinvasive disease such as aseptic meningitis, encephalitis, or acute flaccid paralysis (AFP). These illnesses are usually characterized by the acute onset of fever with stiff neck, altered mental status, seizures, limb weakness, cerebrospinal fluid (CSF) pleocytosis, or abnormal neuroimaging. AFP may result from anterior ("polio") myelitis, peripheral neuritis, or post-infectious peripheral demyelinating neuropathy (i.e., Guillain-Barré syndrome). Less common neurological manifestations, such as cranial nerve palsies, also occur.
Non-neuroinvasive disease: Most arboviruses are capable of causing an acute systemic febrile illness (e.g., West Nile fever) that may include headache, myalgias, arthralgias, rash, or gastrointestinal symptoms. Rarely, myocarditis, pancreatitis, hepatitis, or ocular manifestations such as chorioretinitis and iridocyclitis can occur.
# Clinical criteria for diagnosis
A clinically compatible case of arboviral disease is defined as follows:
Neuroinvasive disease
- Fever (≥100.4°F or 38°C) as reported by the patient or a health-care provider, AND
- Meningitis, encephalitis, acute flaccid paralysis, or other acute signs of central or peripheral neurologic dysfunction, as documented by a physician, AND
- Absence of a more likely clinical explanation.
Non-neuroinvasive disease
- Fever (≥100.4°F or 38°C) as reported by the patient or a health-care provider, AND
- Absence of neuroinvasive disease, AND
- Absence of a more likely clinical explanation.
# Laboratory criteria for diagnosis
- Isolation of virus from, or demonstration of specific viral antigen or nucleic acid in, tissue, blood, CSF, or other body fluid, OR
- Four-fold or greater change in virus-specific quantitative antibody titers in paired sera, OR
- Virus-specific IgM antibodies in serum with confirmatory virus-specific neutralizing antibodies in the same or a later specimen, OR
- Virus-specific IgM antibodies in CSF and a negative result for other IgM antibodies in CSF for arboviruses endemic to the region where exposure occurred, OR
- Virus-specific IgM antibodies in CSF or serum.
# Case classification
# Confirmed:
Neuroinvasive disease: A case that meets the above clinical criteria for neuroinvasive disease and one or more of the following laboratory criteria for a confirmed case:
- Isolation of virus from, or demonstration of specific viral antigen or nucleic acid in, tissue, blood, CSF, or other body fluid, OR
- Four-fold or greater change in virus-specific quantitative antibody titers in paired sera, OR
- Virus-specific IgM antibodies in serum with confirmatory virus-specific neutralizing antibodies in the same or a later specimen, OR
- Virus-specific IgM antibodies in CSF and a negative result for other IgM antibodies in CSF for arboviruses endemic to the region where exposure occurred.
Non-neuroinvasive disease: A case that meets the above clinical criteria for non-neuroinvasive disease and one or more of the following laboratory criteria for a confirmed case:
- Isolation of virus from, or demonstration of specific viral antigen or nucleic acid in, tissue, blood, CSF, or other body fluid, OR
- Four-fold or greater change in virus-specific quantitative antibody titers in paired sera, OR
- Virus-specific IgM antibodies in serum with confirmatory virus-specific neutralizing antibodies in the same or a later specimen, OR
- Virus-specific IgM antibodies in CSF and a negative result for other IgM antibodies in CSF for arboviruses endemic to the region where exposure occurred.
# Probable:
Neuroinvasive disease: A case that meets the above clinical criteria for neuroinvasive disease and the following laboratory criterion:
- Virus-specific IgM antibodies in CSF or serum, but with no other testing.
Non-neuroinvasive disease: A case that meets the above clinical criteria for non-neuroinvasive disease and the following laboratory criterion:
- Virus-specific IgM antibodies in CSF or serum, but with no other testing.
# Comment:
# Interpreting arboviral laboratory results
- Serologic cross-reactivity. In some instances, arboviruses from the same genus produce cross-reactive antibodies. In geographic areas where two or more closely related arboviruses occur, serologic testing for more than one virus may be needed, and the results compared, to determine the specific causative virus. For example, such testing might be needed to distinguish antibodies resulting from infections with viruses within a genus, e.g., flaviviruses such as West Nile, St. Louis encephalitis, Powassan, dengue, or Japanese encephalitis viruses.
- Rise and fall of IgM antibodies. For most arboviral infections, IgM antibodies are generally first detectable 3 to 8 days after onset of illness and persist for 30 to 90 days, but longer persistence has been documented (e.g., up to 500 days for West Nile virus). Serum collected within 8 days of illness onset may not have detectable IgM, and testing should be repeated on a convalescent-phase sample to rule out arboviral infection in those with a compatible clinical syndrome.
- Persistence of IgM antibodies. Arboviral IgM antibodies may be detected in some patients months or years after their acute infection. Therefore, the presence of these virus-specific IgM antibodies may signify a past infection unrelated to the current acute illness. Finding virus-specific IgM antibodies in CSF, or a four-fold or greater change in virus-specific antibody titers between acute- and convalescent-phase serum specimens, provides additional laboratory evidence that the arbovirus was the likely cause of the patient's recent illness. Clinical and epidemiologic history should also be carefully considered.
- Persistence of IgG and neutralizing antibodies. Arboviral IgG and neutralizing antibodies can persist for many years following a symptomatic or asymptomatic infection. Therefore, the presence of these antibodies alone is only evidence of previous infection, and clinically compatible cases with IgG, but not IgM, should be evaluated for other etiologic agents.
- Arboviral serologic assays. Assays for the detection of IgM and IgG antibodies commonly include enzyme-linked immunosorbent assay (ELISA), microsphere immunoassay (MIA), or immunofluorescence assay (IFA). These assays provide a presumptive diagnosis and should have confirmatory testing performed. Confirmatory testing involves the detection of arbovirus-specific neutralizing antibodies using assays such as the plaque-reduction neutralization test (PRNT).
- Other information to consider. Vaccination history, detailed travel history, date of symptom onset, and knowledge of potentially cross-reactive arboviruses known to circulate in the geographic area should be considered when interpreting results.
# Imported arboviral diseases
Human disease cases due to dengue or yellow fever viruses are nationally notifiable to CDC using specific case definitions.
However, many other exotic arboviruses (e.g., chikungunya, Japanese encephalitis, tick-borne encephalitis, Venezuelan equine encephalitis, and Rift Valley fever viruses) are important public health risks for the United States, as competent vectors exist that could allow sustained transmission if imported arboviral pathogens became established. Health-care providers and public health officials should maintain a high index of clinical suspicion for cases of potentially exotic or unusual arboviral etiology, particularly in international travelers. If a suspected case occurs, it should be reported to the appropriate local/state health agencies and CDC.
Surveillance of dead birds for WNV has proven useful for the early detection of WNV in the United States. In recent months, it has also proven useful for the early detection of highly pathogenic H5N1 avian influenza A (HPAI H5N1, hereafter referred to as H5N1 virus) in Europe. Given the potential for H5N1 to infect wild birds in North America in the future, the following interim guidance is offered to support the efforts of states conducting avian mortality surveillance.
# General Considerations for States Conducting Avian Mortality Surveillance
- If different agencies within a state are separately responsible for conducting surveillance for WNV or H5N1 among wild birds, the sharing of resources, including dead birds submitted for testing, may increase the efficiency of both systems.
- Any dead bird might be infected with any one of a number of zoonotic diseases currently present in the United States (US), such as WNV. However, in countries where H5N1 has been found in captive and wild birds, it frequently has resulted in multiple deaths within and across species, and if H5N1 enters the US, it is likely to result in the death of wild birds. If wild birds in the US are exposed to the virus, both single dead birds and groups of dead birds should be considered potentially infected.
- Avian mortality due to the introduction of H5N1 could occur at any time of the year, whereas WNV is more often detected when mosquitoes are active.
- To date, no human infections with WNV have been confirmed to result from contact with live or dead wild birds in outdoor settings.
- Most human H5N1 cases overseas have been associated with close contact with infected poultry or their environment; however, a very small number of cases appear to be related to the handling of infected wild birds or their feathers or feces without the use of proper personal protective equipment (PPE). There is no evidence of H5N1 transmission to humans from exposure to H5N1 virus-contaminated water during swimming; however, this may be theoretically possible.
- Although handling infected birds is unlikely to lead to infection, persons who develop an influenza-like illness after handling sick or dead birds should seek medical attention. Their health care provider should report the incident to public health agencies if clinical symptoms or laboratory test results indicate possible H5N1 or WNV infection.
# Infection Control and Health and Safety Precautions
These guidelines are intended for any person handling dead birds. The risk of infection with WNV from such contact is small. The risk of infection with H5N1 from handling dead birds is difficult to quantify and is likely to vary with each situation. Risk is related to the nature of the work environment, the number of birds to be collected, and the potential for aerosolization of bird feces, body fluids, or other tissues.
The most important factor influencing the degree of infection risk from handling wild birds is whether H5N1 has been reported in the area. Local public health officials can be consulted to help select the most appropriate PPE for the situation.
# General Precautions for Collection of Single Dead Birds
(These precautions are applicable to employees as well as the general public.)
When collecting dead birds, the risk of infection from WNV, H5N1, or any other pathogen may be eliminated by avoiding contamination of mucous membranes, eyes, and skin with material from the birds. This can be accomplished by eliminating any direct contact with dead birds through the following safety precautions:
- When picking up any dead bird, wear disposable impermeable gloves and place the carcass directly into a plastic bag. Gloves should be changed if torn or otherwise damaged. If gloves are not available, use an inverted double-plastic-bag technique for picking up carcasses, or use a shovel to scoop the carcass into a plastic bag.
- In situations in which the bird carcass is in a wet environment, or in other situations in which splashing or aerosolization of viral particles is likely to occur during disposal, safety goggles or glasses and a surgical mask may be worn to protect mucous membranes against splashed droplets or particles.
- Bird carcasses should be double-bagged and placed in a trash receptacle that is secured from access by children and animals. If the carcass will be submitted for testing, hold it in a cool location until pickup or delivery to authorities. Carcasses should not be held in close contact with food (e.g., not in a household refrigerator or picnic cooler).
- After handling any dead bird, avoid touching the face with gloved or unwashed hands. Any PPE that was used (e.g., gloves, safety glasses, mask) should be discarded or disinfected* when done, and hands should then be washed with soap and water (or cleaned with an alcohol-based hand gel when soap and water are not available).
- If possible, before disposing of the bird, members of the public may wish to consult with their local animal control, health, wildlife, or agricultural agency (or other such entity) to inquire whether dead bird reports are being tallied and whether the dead bird in question might be a candidate for WNV or H5N1 testing.
# Additional Precautions for Personnel Tasked with Collecting Dead Birds in Higher-Risk Settings (e.g., when collecting large numbers or in confined indoor spaces, particularly once H5N1 has been confirmed in an area)
- Minimize any work activities that generate airborne particles. For example, during the cleanup phase of bird removal, avoid washing surfaces with pressurized water or cleaner (i.e., pressure washing), which could theoretically aerosolize H5N1 viral particles that could then be inhaled. If aerosolization is unavoidable, the use of a filtering facepiece respirator (e.g., N95) would be prudent, particularly while handling large quantities of dead birds repeatedly as part of regular work requirements.
- If using safety glasses, a mask, or a respirator, do not remove them until after gloves have been removed and hands have been washed with soap and water (or cleaned with an alcohol-based hand gel when soap and water are not available). After PPE has been removed, hands should immediately be cleaned again.
- Personal protective equipment worn (e.g., gloves, mask, or clothing) should be disinfected* or discarded. Discuss appropriate biosafety practices and PPE use with your employer.
# *Recommendations for PPE Disinfection

For machine-washable, reusable PPE: Disinfect the PPE in a washing machine with detergent in a normal wash cycle. Adding bleach will increase the speed of viral inactivation, as will hot water, but detergent alone in cold water will be effective. Follow the manufacturer's recommendations for drying the PPE. Non-machine-washable, reusable PPE should be cleaned following the manufacturer's recommendations for cleaning.

# Laboratory Biosafety Recommendations

Laboratory handling of routine diagnostic specimens of avian carcasses requires a minimum of BSL-2 laboratory safety precautions. However, if either WNV or H5N1 infection of the specimens is suspected on the basis of previous surveillance findings, at a minimum BSL-3 precautions are advisable. Consult your institutional biosafety officer for specific recommendations. Biosafety levels are described at www.cdc.gov/od/ohs/biosfty/bmbl4/bmbl4s3.htm.

# Additional Information Sources
# Foreword

As West Nile virus (WNV) spread and became established across the United States following its first identification in New York City in 1999, the responses of all levels of the public health system have resulted in a detailed understanding of WNV transmission ecology and epidemiology, as well as the development of systems and procedures to reduce human risk. This includes an expanded capacity to diagnose and monitor WNV infections in humans, measure WNV transmission activity in vector mosquitoes, and implement effective WNV control programs. These guidelines, which update the third revision released in 2003, incorporate this new knowledge with the goal of providing guidance to health departments and other public health entities in monitoring and mitigating WNV risk to humans.

Human disease surveillance provides an ongoing nationwide assessment of the human impact of WNV, and over the past decade has demonstrated where WNV incidence and total disease burden are greatest. However, human disease surveillance, by itself, is limited in its ability to predict the large focal outbreaks that have come to characterize this disease. These outbreaks typically intensify over as little as a couple of weeks, and human case reports are lagging indicators of risk because they occur weeks after the time of infection. Thus, environmental surveillance (monitoring enzootic and epizootic WNV transmission in mosquitoes and birds) forms a timelier index of risk, and is an important cornerstone for implementing effective WNV risk reduction efforts. Research and operational experience show that increases in WNV infection rates in mosquito populations can provide an indicator of developing outbreak conditions several weeks in advance of increases in human infections.

Communities that have a history of WNV, particularly metropolitan areas with large human populations at risk, should implement comprehensive, integrated vector management (IVM) programs that incorporate monitoring of mosquito abundance and infection rates. Mosquito-based WNV surveillance programs should use strategies that assure data are comparable over time and space, and are designed to detect trends in WNV transmission levels. Programs should employ quantitative indicators such as the WNV infection rate or vector index to represent WNV transmission activity in mosquito populations. Programs must be sustainable over the long term in order to provide sufficient information to link surveillance indicators with the degree of human risk. Consistency also requires that mosquito collections be repeated at regular (weekly) intervals over the course of the transmission season, and that collections are made at fixed collecting sites. Only through maintaining consistency can monitoring programs provide information useful in crafting thresholds to support decisions about vector control efforts and other interventions.
Other surveillance modalities, such as sentinel chickens and dead bird surveillance, may be a valuable adjunct to mosquito-based surveillance in quantifying epizootic activity in some settings.

IVM programs must be proactive and make plans in advance for addressing increasing levels of WNV risk. The objective of IVM is to implement control measures sufficient to maintain mosquito abundance below levels that result in high risk of WNV transmission to humans. By establishing action thresholds based on the abundance of WNV-infected vector mosquitoes, IVM programs can monitor risk and the effectiveness of their programs, and implement more directed vector control efforts as needed. IVM implies that all of the tools available for managing mosquito populations should be considered for use as needed to maintain vector populations at low levels. Source reduction and larval control activities can be effective in maintaining low vector abundance, but adult mosquito control efforts through ground or aerially applied pesticides complement proactive vector management programs and should not be relegated to the status of a "last resort" measure to be used only during outbreaks. Aggressive and timely efforts to reduce the number of infected adult mosquitoes will optimally impact human WNV case incidence when environmental surveillance indicates substantial WNV epizootic activity or when many human cases occur early in the season (e.g., June or July).

This document provides guidance for communities and public health agencies developing new programs or enhancing existing WNV management programs. The CDC Division of Vector-Borne Diseases is available to provide additional consultation and technical assistance.

# Introduction

Ten years have passed since the 2003 publication of the 3rd edition of "West Nile virus in the United States: Guidelines for surveillance, prevention and control". At that time, only 4 years after West Nile virus (WNV) was first detected in New York City, the virus had already established itself across approximately the eastern half of the country and produced the largest epidemic of arboviral encephalitis ever experienced in the United States. Knowledge about WNV epidemiology and transmission ecology was expanding rapidly, but numerous gaps remained in our understanding of how this relatively new exotic disease would affect public health, what monitoring practices would provide the best indicators of human risk, and what interventions would be most effective in reducing human infections. Thus, large portions of the WNV Guidelines 3rd edition were predicated on relatively limited research and operational experience, and were dedicated to identifying and prioritizing specific basic and operational research directions.

Since that time, WNV has expanded to the point that it can now be found in all 48 contiguous states and has produced two additional, large nationwide epidemics in 2003 and 2012. Considerable new information about WNV epidemiology, ecology and control has also been generated since 2003. The objective of this 4th edition of the WNV guidelines is to consolidate this information and describe how these new findings can be used to better monitor WNV and mitigate its public health impact. This document was produced through a comprehensive review of the published literature related to WNV epidemiology, diagnostics, transmission ecology, environmental surveillance, and vector control. The publications were reviewed for relevance to developing operational surveillance and control programs, and selected for inclusion in a draft document by a technical development group of CDC subject matter experts. Numerous stakeholder groups were asked to review the document. We view the recommendations contained in these guidelines as the best that can be derived from the currently available information, and will provide updates as new information about WNV epidemiology, ecology, or intervention becomes available.
# West Nile Virus Epidemiology and Ecology

WNV, a mosquito-transmitted member of the genus Flavivirus in the family Flaviviridae, was discovered in northwest Uganda in 1937 (Smithburn et al. 1940) but was not viewed as a potentially important public health threat until it was associated with epidemics of fever and encephalitis in the Middle East in the 1950s (Taylor et al. 1956, Paz 2006). In the following years, WNV was associated with sporadic outbreaks of human disease across portions of Africa, the Middle East, India, Europe and Asia (Hubalek and Halouzka 1999). In the mid-to-late 1990s, outbreaks occurred more frequently in the Mediterranean Basin, and large outbreaks occurred in Romania and the Volga delta in southern Russia (Hayes et al. 2005). The first domestically acquired human cases of WNV disease in the Western Hemisphere were detected in New York City in 1999 (Nash et al. 2001). WNV spread rapidly during the following years, and by 2005 had established sustained transmission foci in much of the hemisphere, with an overall distribution that extended from central Canada to southern Argentina (Gubler 2007). WNV transmission persists across this large, ecologically diverse expanse, and as a result this virus is recognized as the most widely distributed arbovirus in the world (Kramer et al. 2008).

WNV has become enzootic in all 48 contiguous states, and evidence of transmission in the form of infected humans, mosquitoes, birds, horses, or other mammals has been reported from 96% of U.S. counties. This extensive distribution is due to the ability of WNV to establish and persist in the wide variety of ecosystems present across the country. WNV has been detected in 65 different mosquito species in the U.S., though it appears that only a few Culex species drive epizootic and epidemic transmission. The most important vectors are Cx. pipiens in the northern half of the country, Cx. quinquefasciatus in the southern states, and Cx. tarsalis in the western states, where it overlaps with Cx. pipiens and Cx. quinquefasciatus (Fig. 1) (Andreadis et al. 2004, Kilpatrick et al. 2006a, Godsey et al. 2010). However, the population structure of Culex pipiens and Cx. quinquefasciatus is more complex than indicated in Fig. 1, as these species readily hybridize and produce a stable hybrid zone across the United States. Barr (1957) set the limits of the hybrid zone at 36°N and 39°N based on measurements of the male genitalia. Subsequent work using microsatellites (Huang et al. 2008, Edillo et al. 2009, Kothera et al. 2009, Kothera et al. 2013, Savage and Kothera 2012) and other molecular markers (Huang et al. 2011) indicates that the hybrid zone extends farther north and south than suggested by Barr (1957). In the middle latitudes of the US, both nominal species and hybrids may be present and are commonly reported as Cx. pipiens complex mosquitoes (Savage et al. 2007, Savage and Kothera 2012). The implications of Cx. pipiens-Cx. quinquefasciatus population genetics and hybridization patterns for WNV transmission are not well understood.

Figure 1. Approximate geographic distribution of the primary WNV vectors, Cx. pipiens, Cx. quinquefasciatus and Cx. tarsalis (modified from Darsie and Ward 2005).

Culex salinarius has been identified as an important enzootic and epidemic vector in the northeastern U.S. (Anderson et al. 2004, Molaei et al. 2006). Other mosquito species, including Cx. restuans, Cx. nigripalpus and Cx. stigmatosoma, may contribute to early season amplification or serve as accessory bridge vectors in certain regions, but their role is less well understood (Kilpatrick et al. 2005).
WNV has been detected in hundreds of bird species in the United States (CDC 2012). However, relatively few species function as primary amplifiers of the virus, and a small subset of bird species may significantly influence WNV transmission dynamics locally (Hamer et al. 2009). For example, the American robin (Turdus migratorius) can play a key role as an amplifier host, even in locations where it is present in relatively low abundance (Kilpatrick et al. 2006b).

As a result of this extensive distribution in the U.S., WNV is now the most frequent cause of arboviral disease in the country. Since 1999, WNV disease cases have been reported from all 48 contiguous states and two-thirds of all U.S. counties. Though widely distributed, WNV transmission is temporally and spatially heterogeneous. Human WNV disease cases have been reported during every month of the year in the United States, but as is characteristic of zoonotic arboviruses in temperate climates, intense transmission is limited to the summer and early fall months; 94% of human cases have been reported from July through September (CDC 2010), and approximately two-thirds of reported cases occurred during a 6-week period from mid-July through the end of August. At a national level, the annual incidence and number of cases reported have varied dramatically since 1999 (CDC 2010a). Weather, especially temperature, is an important modifier of WNV transmission; it has been correlated with increased incidence of human disease at regional and national scales (Soverow et al. 2009), and likely drives the annual fluctuations in numbers of cases reported at the national level. However, WNV epidemiology is characterized by focal and sometimes intense outbreaks. Epidemiological data gathered since 1999 demonstrate regions in the United States with recurring high levels of WNV transmission and risk to humans (Fig. 2). High average annual incidence of WNV disease occurs in the West Central and Mountain regions (CDC 2010), with the highest cumulative incidence of infection occurring in the central plains states (i.e., South Dakota, Wyoming and North Dakota) (Petersen et al. 2012). The greatest disease burden occurs where areas of moderate to high incidence intersect metropolitan counties with correspondingly high human population densities.

# Limits to Prediction - The Need for Surveillance

WNV outbreaks have been associated on a local level with a variety of parameters, including urban habitats in the Northeast and agricultural habitats in the western United States (Bowden et al. 2011), rural irrigated landscapes (DeGroote and Sugumaran 2012), increased temperature (Hartley et al. 2012), specific precipitation patterns, several socioeconomic factors such as housing age and community drainage patterns (Ruiz et al. 2007), per capita income (DeGroote and Sugumaran 2012), and neglected swimming pool density (Reisen et al. 2008, Harrigan et al. 2010). Despite these documented associations with a variety of biotic and abiotic factors, and recognition that certain regions experience more frequent outbreaks and higher levels of human disease risk, no models have been developed to provide long-term predictions of how and where these factors will combine to produce outbreaks.
The unpredictable nature of WNV outbreaks necessitates the establishment and maintenance of surveillance systems capable of detecting increases in WNV transmission activity, along with the ability to respond to the surveillance data with effective, disease-reducing interventions. Such surveillance and control programs can be costly to maintain. However, it is important that communities with large human populations in areas with documented WNV risk establish and maintain surveillance for human cases and effective integrated vector management programs that incorporate environmental surveillance components capable of providing indicators predictive of human risk. This document provides guidance for developing systems to 1) monitor WNV enzootic and epizootic transmission activity as indicators of human risk; 2) maintain surveillance for human infections and disease to monitor trends in WNV disease burden and clinical presentation; and 3) implement prevention and control programs that reduce community-level risk by managing vector mosquito populations and that reduce individual risk by promoting effective personal protection measures.

# Surveillance

# Objectives of WNV Surveillance

WNV surveillance consists of two distinct, but complementary, activities. Epidemiological surveillance measures WNV human disease to quantify disease burden and identify seasonal, geographic, and demographic patterns in human morbidity and mortality (Lindsey et al. 2008). Environmental surveillance monitors local WNV activity in vectors and non-human vertebrate hosts in advance of epidemic activity affecting humans. In addition to monitoring disease burden and distribution, epidemiological surveillance has been instrumental in characterizing clinical disease presentation and disease outcome, as well as identifying high-risk populations and factors associated with serious WNV disease (Lindsey et al. 2012). Epidemiological surveillance has also detected and quantified alternative routes of WNV transmission to humans, such as contaminated blood donations and organ transplantation (Pealer et al. 2003, Nett et al. 2012).

Epidemiological and environmental surveillance for WNV was facilitated by the development and implementation of ArboNET, the national arbovirus surveillance system (Lindsey et al. 2012). ArboNET was developed in 2000 as a comprehensive surveillance data capture platform to monitor WNV infections in humans, mosquitoes, birds, and other animals as the virus spread and became established across the country. This comprehensive approach was essential to tracking the progression of WNV activity and remains a significant source of data adding to our understanding of the epidemiology and ecology of WNV.

In the absence of effective human WNV vaccines, preventing disease in humans depends on application of measures to keep infected mosquitoes from biting people. A principal objective of WNV environmental surveillance is to quantify the intensity of virus transmission in a region in order to provide a predictive index of human infection risk. This risk prediction, along with information about the local conditions and habitats that produce WNV vector mosquitoes, can be used to inform an integrated vector management program and the associated decisions about implementing prevention and control interventions (Nasci 2013). Though epidemiological surveillance is essential for understanding WNV disease burden, human case surveillance by itself is insufficient for predicting outbreaks (Reisen and Brault 2007).
WNV outbreaks can develop quickly, with the majority of human cases occurring over a few weeks during the peak of transmission. The time from human infection to onset of symptoms, diagnosis, and reporting can be several weeks or longer. As a result, human WNV case reports lag well behind the mosquito transmission that initiated the infections. By monitoring WNV infection prevalence in mosquito vectors and incidence in non-human vertebrate hosts, and comparing these indices to historical environmental and epidemiological surveillance data, conditions associated with increasing human risk can be detected 2-4 weeks in advance of human disease onset (Kwan et al. 2012a). This provides additional lead time for critical vector control interventions and public education programs to be put in place. The following sections describe the elements of epidemiological and environmental WNV surveillance and how they may be used to monitor and predict risk and to trigger interventions.

# Human Surveillance

# Routes of Transmission

WNV is transmitted to humans primarily through the bite of infected mosquitoes (Campbell et al. 2001). However, person-to-person transmission can occur through transfusion of infected blood products or solid organ transplantation (Pealer et al. 2003, Iwamoto et al. 2003). Intrauterine transmission and probable transmission via human milk also have been described but appear to be uncommon (O'Leary et al. 2006, Hinckley et al. 2007). Percutaneous infection and aerosol infection have occurred in laboratory workers, and an outbreak of WNV infection among turkey handlers also raised the possibility of aerosol transmission (CDC 2003a).

Since 2003, the U.S. blood supply has been routinely screened for WNV RNA; as a result, transfusion-associated WNV infection is rare (CDC 2003b). The Food and Drug Administration recommends that blood collection agencies perform a WNV nucleic acid amplification test (NAAT) year-round on all blood donations, either in minipools of six or sixteen donations (depending on test specifications) or as individual donations. Organ and tissue donors are not routinely screened for WNV infection (Nett et al. 2012).

# Clinical Presentation

An estimated 70-80% of human WNV infections are subclinical or asymptomatic (Mostashari et al. 2001, Zou et al. 2010). Most symptomatic persons experience an acute systemic febrile illness that often includes headache, myalgia, or arthralgia; gastrointestinal symptoms and a transient maculopapular rash also are commonly reported (Watson et al. 2004, Hayes et al. 2005b, Zou et al. 2010). Less than 1% of infected persons develop neuroinvasive disease, which typically manifests as meningitis, encephalitis, or acute flaccid paralysis (Hayes et al. 2005b). WNV meningitis is clinically indistinguishable from aseptic meningitis due to most other viruses (Sejvar and Marfin 2006). Patients with WNV encephalitis usually present with seizures, mental status changes, focal neurologic deficits, or movement disorders (Sejvar and Marfin 2006). WNV acute flaccid paralysis is often clinically and pathologically identical to poliovirus-associated poliomyelitis, with damage of anterior horn cells, and may progress to respiratory paralysis requiring mechanical ventilation (Sejvar and Marfin 2006). WNV-associated Guillain-Barré syndrome has also been reported and can be distinguished from WNV poliomyelitis by clinical manifestations and electrophysiologic testing (Sejvar and Marfin 2006).
Cardiac dysrhythmias, myocarditis, rhabdomyolysis, optic neuritis, uveitis, chorioretinitis, orchitis, pancreatitis, and hepatitis have been described rarely with WNV infection (Hayes et al. 2005b).

# Clinical Evaluation and Diagnosis

WNV disease should be considered in the differential diagnosis of febrile or acute neurologic illnesses associated with recent exposure to mosquitoes, blood transfusion, or organ transplantation, and of illnesses in neonates whose mothers were infected with WNV during pregnancy or while breastfeeding. In addition to other more common causes of encephalitis and aseptic meningitis (e.g., herpes simplex virus and enteroviruses), other arboviruses (e.g., La Crosse, St. Louis encephalitis, Eastern equine encephalitis, Western equine encephalitis and Powassan viruses) should also be considered in the differential etiology of suspected WNV illness.

WNV infections are most frequently confirmed by detection of anti-WNV immunoglobulin (Ig) M antibodies in serum or cerebrospinal fluid (CSF). The presence of anti-WNV IgM is usually good evidence of recent WNV infection, but may indicate infection with another closely related flavivirus (e.g., St. Louis encephalitis virus). Because anti-WNV IgM can persist in some patients for >1 year, a positive test result occasionally may reflect past infection unrelated to current disease manifestations. Serum collected within 8 days of illness onset may lack detectable IgM, and the test should be repeated on a convalescent-phase sample. IgG antibody generally is detectable shortly after the appearance of IgM and persists for years. Plaque-reduction neutralization tests (PRNT) can be performed to measure specific virus-neutralizing antibodies. A fourfold or greater rise in neutralizing antibody titer between acute- and convalescent-phase serum specimens collected 2 to 3 weeks apart may be used to confirm recent WNV infection and to discriminate between cross-reacting antibodies from closely related flaviviruses.

Viral culture and WNV NAAT can be performed on acute-phase serum, CSF, or tissue specimens. However, by the time most immunocompetent patients present with clinical symptoms, WNV RNA may not be detectable; thus, negative results should not be used to rule out an infection. NAAT may have utility in certain clinical settings as an adjunct to detection of IgM. Among patients with West Nile fever, combining NAAT and IgM detection identified more cases than either procedure alone (Tilley et al. 2006). NAAT may prove useful in immunocompromised patients, when antibody development is delayed or absent. Immunohistochemical staining can detect WNV antigens in fixed tissue, but negative results are not definitive. See the Laboratory Diagnosis and Testing chapter for additional information.

# Passive Surveillance and Case Investigation

WNV disease is a nationally notifiable condition and is reportable in most, if not all, states and territories. Most disease cases are reported to public health authorities by public health or commercial laboratories; healthcare providers also submit reports of suspected cases. State and local health departments are responsible for ensuring that reported human disease cases meet the national case definitions. The most recent case definitions for confirmed and probable neuroinvasive and non-neuroinvasive domestic arboviral diseases were approved by the Council of State and Territorial Epidemiologists (CSTE) in 2011 (Appendix 1).
Presumptive WNV-viremic donors are identified through universal screening of the blood supply; case definitions and reporting practices for viremic donors vary by jurisdiction and blood services agency. All identified WNV disease cases and presumptive viremic blood donors should be investigated promptly. Jurisdictions may choose to interview the patient's health care provider, the patient, or both, depending on information needs and resources. Whenever possible, the following information should be gathered:

 Basic demographic information (age, sex, race/ethnicity, state and county of residence)
 Clinical syndrome (e.g., asymptomatic blood donor, uncomplicated fever, meningitis, encephalitis, acute flaccid paralysis)
 Illness onset date and/or date of blood donation
 Whether the patient was hospitalized and whether he/she survived or died
 Travel history in the four weeks prior to onset
 Whether the patient was an organ donor or a transplant recipient in the four weeks prior to onset
 Whether the patient was a blood donor or blood transfusion recipient in the four weeks prior to onset
 Whether the patient was pregnant at illness onset
 If the patient is an infant, whether he/she was breastfed before illness onset

If the patient donated blood, tissues or organs in the four weeks prior to illness onset, immediately inform the blood or tissue bank and public health authorities. Similarly, any WNV infections temporally associated with blood transfusion or organ transplantation should be reported. Prompt reporting of these cases will facilitate the identification and quarantine of any remaining infected products and the identification of any other exposed recipients so they may be managed appropriately.

Passive surveillance systems depend on clinicians considering the diagnosis of an arboviral disease, obtaining the appropriate diagnostic test, and reporting laboratory-confirmed cases to public health authorities. Because of incomplete diagnosis and reporting, the incidence of WNV disease is underestimated. Reported neuroinvasive disease cases are considered the most accurate indicator of WNV activity in humans because of the substantial associated morbidity. In contrast, reported cases of non-neuroinvasive disease are more likely to be affected by disease awareness and healthcare-seeking behavior in different communities and by the availability and specificity of laboratory tests performed. Surveillance data for non-neuroinvasive disease should be interpreted with caution and generally should not be used to make comparisons between geographic areas or over time.

# Enhanced Surveillance Activities

Enhanced surveillance for human disease cases should be considered when environmental or human surveillance data suggest that an outbreak is occurring or anticipated. Educating healthcare providers and infection control nurses about the need for arbovirus testing and the reporting of all suspected cases could increase the sensitivity of the surveillance system. This may be accomplished through distribution of print materials, participation in local hospital meetings and grand rounds, and provision of lectures/seminars. Public health agencies should also work to establish guidelines and protocols with local blood collection agencies for reporting WNV-viremic blood donors. At the end of the year, an active review of medical records and laboratory results from local hospitals and associated commercial laboratories should be conducted to identify any previously unreported cases.
In addition, an active review of appropriate records from blood collection agencies should be conducted to identify any positive donors that were not reported.

# Environmental Surveillance

# Mosquito-Based WNV Surveillance

Mosquito-based surveillance consists of the systematic collection of mosquito samples and the screening of those samples for arboviruses. Mosquitoes become infected with WNV primarily by taking blood meals from infected birds. However, WNV may also be passed from infected female mosquitoes to their eggs, resulting in infected offspring (i.e., vertical transmission) (Anderson et al. 2006). Vertical transmission is likely responsible for virus maintenance over the winter in northern parts of the country, but the extent of its contribution to virus amplification and human risk during the peak transmission season is not well understood.

The principal enzootic and epidemic vectors vary regionally across the United States. In the northern states, Culex pipiens mosquitoes are the primary vectors (Savage et al. 2007), with Cx. salinarius serving as an important vector in portions of the Northeast (Anderson et al. 2004, Molaei et al. 2006). Culex quinquefasciatus is the main vector in the southern states, and Culex tarsalis is an important vector in western states, where it overlaps the distribution of Cx. pipiens and Cx. quinquefasciatus and likely enhances transmission in these areas (Reisen et al. 2005, Andreadis 2012). Therefore, mosquito-based surveillance programs for WNV in the United States primarily target these Culex species and may include other species suspected of contributing to transmission and human risk in local areas.

Mosquito-based surveillance is an integral component of an integrated vector management program and is the primary tool for quantifying WNV transmission and human risk (Moore et al. 1993). The principal functions of a mosquito-based surveillance program are to:

 Collect data on mosquito population abundance and virus infection rates in those populations.
 Provide indicators of the threat of human infection and disease and identify geographic areas of high risk.
 Support decisions regarding the need for and timing of intervention activities (i.e., enhanced vector control efforts and public education programs).
 Monitor the effectiveness of vector control efforts.

Mosquito-based WNV monitoring has several positive attributes that contribute to its value in surveillance programs. These include:

 Quick turn-around of results. Mosquito samples can be processed quickly, usually within a few days. Some programs maintain local, in-house laboratories where samples are processed daily, leading to rapid results.
 Collecting adult mosquitoes provides information about vector species community composition, relative abundance, and infection rates. This provides the data needed for rapid computation of infection indices and timely risk assessment.
 Maintaining consistent programs over the long term provides a baseline of historical data that can be used to evaluate risk and guide control operations.

However, there are some limitations to mosquito-based surveillance. Virus may not be detected in the mosquito population if infection rates are very low (e.g., early in the transmission season) or if only small sample sizes are tested. In addition, WNV transmission ecology varies regionally, and surveillance practices vary among programs (e.g., number and type of traps, testing procedures), which limits the degree to which surveillance data can be compared across regions.
This prohibits setting national thresholds for assessing risk and implementing interventions. Developing useful thresholds requires consistent effort across seasons to assure that the surveillance indices and their association with human risk are comparable over time, and may require mosquito surveillance and human disease incidence data from several transmission seasons.

# Specimen Collection and Types of Traps

Adult mosquitoes are collected using a variety of trapping techniques. Adequate sampling requires regular (weekly) trapping at fixed sites throughout the community that are representative of the habitat types present in the area. The trap types commonly used for WNV surveillance sample host-seeking mosquitoes or gravid mosquitoes (carrying eggs) seeking a place to lay eggs (an oviposition site).

Traps used to sample host-seeking mosquitoes are available in several configurations. The most commonly used are based on the CDC miniature light trap (Sudia and Chamberlain 1962). The CDC miniature light trap and similar configurations are lightweight and use batteries to power a light source and fan motor. CO2 (usually dry ice) is frequently used as an additional attractant. In some programs, the light sources are removed to minimize the capture of other nocturnal insects that are attracted to light, such as moths and beetles. In those cases, CO2 is the only attractant used. The advantage of light traps is that they collect a wide range of mosquito species (McCardle et al. 2004), which provides information about both primary and secondary vectors and a better understanding of the species composition in an area. A limitation of light traps is that the collections in certain locations and times may consist largely of unfed, nulliparous individuals (McCardle et al. 2004), which greatly reduces the likelihood of detecting WNV and other arboviruses. Also, not all mosquito species are attracted to light traps (Miller et al. 1969), and the numbers captured may not reflect the population size of a particular species (Bidlingmayer 1967). Light traps are of little use in sampling day-active mosquitoes such as Ae. albopictus (Haufe and Burgess 1960, Unlu and Farajollahi 2012), though these species can be collected in other traps such as the BG Sentinel (Krokel et al. 2006). However, the role of these species in WNV transmission is not well understood. The three major WNV vectors (Cx. pipiens, Cx. quinquefasciatus and Cx. tarsalis) can be collected in light traps, and some surveillance programs rely on light traps alone. This should be done with the understanding that, while effective in collecting large numbers of Cx. tarsalis, light traps typically collect relatively few Cx. pipiens or Cx. quinquefasciatus, and the resulting small sample sizes may reduce the ability to accurately estimate WNV infection rates. Gravid traps are more effective in collecting Cx. pipiens and Cx. quinquefasciatus in urban areas (Andreadis and Armstrong 2007, Reisen et al. 1999).

Gravid traps target gravid females (i.e., those carrying mature eggs) of the Cx. pipiens complex (Reiter et al. 1986). The strength of gravid traps is that gravid females have previously taken a blood meal, which greatly increases the likelihood of capturing WNV-infected individuals and thus detecting virus. Gravid traps can be baited with attractants such as fresh or dry grass clipping infusions, rabbit chow infusions, cow manure, fish oil, or other materials that mimic the stagnant water of the habitats where these species lay eggs.
The different infusions vary in attractiveness (Burkett et al. 2004, Lampman et al. 1996). It is advisable that infusion preparations be consistent within a surveillance program, because variations may lead to changes in the number and/or type of species captured. One limitation of gravid traps is that they selectively capture mosquitoes in the Cx. pipiens complex, and therefore provide limited information on species composition within a region (Reiter et al. 1986).

Several other traps may be used to collect mosquitoes for WNV monitoring. Collecting resting mosquitoes provides a good representation of vector population structure, since unfed, gravid and blood-fed females (as well as males) may be collected (Service 1992). Because resting collections typically provide samples that are representative of the population, they can also provide more representative WNV infection rates. Resting mosquitoes can be collected using suction traps such as the CDC resting trap (Panella et al. 2011), and by using handheld or backpack mechanical aspirators (Nasci 1981) to remove mosquitoes from natural resting harborage or artificial resting structures (e.g., wooden resting boxes, red boxes, fiber pots and other similar containers) (Service 1992). Because of the wide variety of resting sites and the low density of resting mosquitoes in most locations, sampling resting populations is labor intensive, and sufficient sample sizes are often difficult to obtain.

Host-baited traps, usually employing chickens or pigeons as bait, can collect mosquito vectors of interest in large numbers. However, these methods require the use of live animals and adherence to animal use requirements and permitting. In addition, they are similar to light traps in that collections in certain locations and times may consist largely of unfed, nulliparous individuals. Human landing collections have been used to accurately measure the population of human-feeding mosquitoes in an area and can be quite valuable in monitoring the effectiveness of adult mosquito control efforts. However, human landing collections may expose collectors to infected mosquitoes and are not recommended as a sampling procedure in areas where WNV transmission is occurring.

# Specimen Handling and Processing

Because mosquito-based WNV surveillance relies on identifying WNV in the collected mosquitoes through detection of viral proteins, viral RNA, or live virus (see Diagnostics section), efforts should be made to handle and process the specimens in a way that minimizes exposure to conditions (e.g., heat, successive freeze-thaw cycles) that would degrade the virus. Optimally, a cold chain should be maintained from the time mosquitoes are removed from the traps to the time they are delivered to the processing laboratory, and through any short-term storage and processing. Transport mosquitoes from the field in a cooler, either with cold packs or on dry ice. Sort and identify the mosquitoes to species on a chill table, if available. This is particularly important if the specimens will be tested for infectious virus or viral antigen. However, lack of a cold chain does not appear to reduce the ability to detect WNV RNA by RT-PCR (Turell et al. 2002). Mosquitoes are generally tested in pools of 50 to 100 specimens grouped by species, date, and location of collection. Larger pool sizes may result in a loss of sensitivity (Sutherland and Nasci 2007). If a commercial WNV assay is used, follow the package instructions, which may specify smaller pool sizes (see Diagnostics section).
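To make the pooling step concrete, the minimal sketch below groups identified specimens into pools capped at 50, keyed by species, collection date, and trap site. This is only an illustration of the grouping logic described above; the record layout, the make_pools() helper, and the sample data are invented for the example, not a prescribed format.

```python
# Illustrative sketch only: grouping identified female mosquitoes into test
# pools of at most 50 specimens, keyed by species, collection date, and site.
from collections import defaultdict

def make_pools(specimens, max_pool_size=50):
    """specimens: iterable of (species, collection_date, site) tuples,
    one per mosquito. Returns (species, date, site, pool_size) records."""
    counts = defaultdict(int)
    for key in specimens:
        counts[key] += 1              # tally specimens sharing species/date/site
    pools = []
    for (species, date, site), n in sorted(counts.items()):
        while n > 0:                  # split each group into pools of <= 50
            size = min(n, max_pool_size)
            pools.append((species, date, site, size))
            n -= size
    return pools

# 120 Cx. pipiens from one trap night become pools of 50, 50, and 20:
catch = [("Cx. pipiens", "2012-07-09", "site-03")] * 120
print(make_pools(catch))
```

A program using a commercial assay whose package instructions specify smaller pools could simply lower max_pool_size to match.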
Usually, only female mosquitoes are tested in routine WNV surveillance programs. If WNV screening is not done immediately after mosquito identification and pooling, the pooled samples should be stored frozen, optimally at -70°C, though temperatures below freezing may suffice for short-term storage.

# Mosquito-Based Surveillance Indicators

Data derived from mosquito surveillance include estimates of mosquito species abundance and the WNV infection rate in those mosquito populations. The indices derived from those data vary in information content, ability to be compared over time and space, and association with WNV transmission levels and levels of human risk. The five indicators that have commonly been used are:

 Vector abundance
 Number of positive pools
 Percent of pools positive
 Infection rate
 Vector index

Vector abundance provides a measure of the relative number of mosquitoes in an area during a particular sampling period. It is simply the total number of mosquitoes of a particular species collected, divided by the number of trapping nights conducted during a specified sampling period, and is expressed as the number/trap night. High mosquito densities have been associated with arboviral disease outbreaks (Olson et al. 1979, Eldridge 2004); thus, risk assessments involve estimating mosquito abundance (Tonn et al. 1969, Bang et al. 1981). In the 2010 WNV outbreak in Maricopa County, AZ, Cx. quinquefasciatus densities were higher in outbreak areas than in non-outbreak areas (Godsey et al. 2012). Vector abundance can provide measures of population abundance that are useful as thresholds in conducting proactive integrated vector management and in monitoring the outcome of mosquito control efforts. However, high mosquito abundance may occur in the absence of virus or detectable virus amplification, and WNV outbreaks often occur when abundance is low but the mosquito population is older and the infection rate is high. As with all environmental surveillance efforts, numerous trapping locations and regular collecting are required to obtain spatially and temporally representative data.

Number of positive pools is the total number of WNV-positive mosquito pools detected in a given surveillance location and period. It is sometimes separated by species, or may be a tally of the total number of positive pools for all species tested. While detection of positive pools provides evidence of WNV activity in an area, expressing the total number of positive pools without a denominator (e.g., number of pools tested) limits the comparative value and the ability of this index to convey relative levels of WNV transmission activity. Expressing WNV activity as the number of positive pools is not recommended, since the same data can be used to produce more informative indices.

Percent of pools positive is calculated by expressing the number of WNV-positive pools, divided by the total number of pools tested, as a percentage. This is an improvement over number of positive pools as an index of relative WNV transmission activity, since it provides a rough estimate of the rate of WNV in the mosquitoes tested and can be used to compare WNV activity over time and place. However, the comparative value is limited unless the number of pools tested is relatively large and the number of mosquitoes per pool remains constant.
Using percent of pools positive as an index of WNV activity is not recommended; as with the number of positive pools index, the same data can be used to produce the more informative infection rate and Vector Index measures of WNV activity.

The infection rate in a vector population is an estimate of the prevalence of WNV-infected mosquitoes in the population and is a good indicator of human risk. There are two commonly used methods for calculating and expressing the infection rate. The minimum infection rate (MIR) for a given mosquito species is calculated by dividing the number of WNV-positive pools by the total number of mosquitoes tested (not the number of pools tested). The MIR is based on the assumption that infection rates are generally low and that only one mosquito is positive in a positive pool. The MIR can be expressed as a proportion or percent of the sample that is WNV positive, but is commonly expressed as the number infected/1000 tested because infection rates are usually low. The maximum likelihood estimate (MLE) of the infection rate does not require the assumption of one positive mosquito per positive pool, and provides a more accurate estimate when infection rates are high (Gu et al. 2008); thus, it is the preferred method of estimating the infection rate, particularly during outbreaks. The MLE and MIR are similar when infection rates are low. Estimates of infection rate provide a useful, quantitative basis for comparison, allowing evaluation of changes in infection rate over time and space. These indices also permit use of variable pool numbers and pool sizes while retaining comparability. Infection rate indices have been used successfully in systems associating infection rates with human risk (Bell et al. 2005). Larger sample sizes improve the accuracy of the infection rate indicators. The MLE requires more complex calculations than the MIR; however, a Microsoft Excel® Add-In to compute infection rates from pooled data is available (http://www.cdc.gov/ncidod/dvbid/westnile/software.htm).

The Vector Index (VI) is an estimate of the abundance of infected mosquitoes in an area; it incorporates information describing the vector species that are present in the area, the relative abundance of those species, and the WNV infection rate in each species into a single index (Gujaral et al. 2007). The VI is calculated by multiplying the average number of mosquitoes collected per trap night by the proportion infected with WNV, and is expressed as the average number of infected mosquitoes collected per trap night in the area during the sampling period. In areas where more than one WNV vector mosquito species is present, a VI is calculated for each of the important vector species and the individual VIs are summed to represent a combined estimate of infected vector abundance. By summing the VI for the key vector species, the combined VI accommodates the fact that WNV transmission may involve one or more vectors in an area. Increases in VI reflect increases in risk of human disease (Kwan et al. 2012) and have demonstrated significantly better predictive ability than estimates of vector abundance or infection rate alone, clearly demonstrating the value of combining information on vector abundance and WNV infection rates to generate a more meaningful risk index. As with other surveillance indicators, the accuracy of the Vector Index is dependent upon the number of trap nights used to estimate abundance and the number of specimens tested for virus to estimate the infection rate.
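To illustrate the arithmetic behind these indicators, the sketch below computes the MIR, a maximum likelihood estimate obtained by numerically solving the pooled-testing likelihood, and a combined multi-species Vector Index. It is a minimal illustration consistent with the definitions above, not a replacement for the Excel Add-In or Appendix 2; the function names and the sample data are invented.

```python
# Minimal sketch of the indicator arithmetic described above; data are invented.

def mir_per_1000(positive_pools, mosquitoes_tested):
    """Minimum infection rate: assumes one infected mosquito per positive pool."""
    return 1000.0 * positive_pools / mosquitoes_tested

def mle_rate(pools, tol=1e-10):
    """MLE of the per-mosquito infection rate from pooled tests.
    pools: iterable of (pool_size, tested_positive) covering every pool;
    pool sizes may vary. Finds the root of the log-likelihood derivative
    by bisection."""
    pos = [n for n, hit in pools if hit]
    neg = [n for n, hit in pools if not hit]
    if not pos:
        return 0.0
    if not neg:
        return 1.0  # boundary case: every pool tested positive
    def score(p):   # derivative of the log-likelihood at infection rate p
        q = 1.0 - p
        return sum(n * q ** (n - 1) / (1.0 - q ** n) for n in pos) - sum(neg) / q
    lo, hi = 1e-12, 1.0 - 1e-12
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        lo, hi = (mid, hi) if score(mid) > 0 else (lo, mid)
    return (lo + hi) / 2.0

def vector_index(species_data):
    """Combined VI: sum over species of (females per trap night) x
    (estimated infection proportion).
    species_data: {species: (total_collected, trap_nights, pools)}"""
    return sum(collected / nights * mle_rate(pools)
               for collected, nights, pools in species_data.values())

# One positive pool among four pools of 50 (200 mosquitoes tested):
pools = [(50, True), (50, False), (50, False), (50, False)]
print(mir_per_1000(1, 200))        # MIR = 5.0 infected per 1,000 tested
print(1000 * mle_rate(pools))      # MLE ~= 5.7 per 1,000 (no one-per-pool assumption)
print(vector_index({"Cx. tarsalis": (300, 20, pools)}))  # ~0.09 infected/trap night
```

The example shows why the two rates diverge: the MIR counts exactly one infected mosquito per positive pool, while the MLE allows a positive pool to contain more than one.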
Instructions for calculating the Vector Index in a system with multiple vector species present are in Appendix 2.

# Use of Mosquito-Based Indicators

The mosquito-based surveillance indicators have two important roles in WNV surveillance and response programs. First, they can provide quantifiable thresholds for proactive vector control efforts. By identifying thresholds for vector abundance and infection rate that are below levels associated with disease outbreaks, integrated vector management programs can institute proactive measures to maintain mosquito populations at levels below which WNV amplification can occur. Second, if thresholds related to outbreak levels of transmission can be identified, surveillance can be used to determine when proactive measures have been insufficient to dampen virus amplification and more aggressive measures, such as wide-scale aerial application of mosquito adulticides and more aggressive public education messaging, are required to prevent or stop an outbreak.

# Bird-Based WNV Surveillance

WNV amplifies in nature by replicating to high levels in a variety of bird species, which then transmit the virus to mosquitoes during several days of sustained high-level viremia. In addition to infection from mosquito bites, some birds may become infected by consuming infected prey items such as insects, small mammals, or other birds, or in rare cases, through direct contact with other infected birds. Thus, monitoring infections in wild and/or captive birds can be effective for determining whether WNV may be active in a region and, in some cases, can provide a quantitative index of risk for human infections.

A hallmark of the North American strain of WNV is its propensity to kill a number of the birds it infects (326 affected species were reported to ArboNET through 2012), particularly among the corvids (species of the family Corvidae, including crows, ravens, magpies and jays) and a few other sensitive species (Komar 2003). More than a decade after the introduction of WNV to the North American continent, numerous bird species continue to suffer mortality from WNV infection, albeit at lower rates. Robust studies of the development of avian resistance to fatal WNV infection are lacking. However, one study of the American crow suggested that resistance is developing at a rate of approximately 1% per year (Reed et al. 2007). There are two basic strategies for avian mortality surveillance of WNV: virus testing of selected carcasses and spatial analysis of dead bird sightings. While several bird species experience high mortality from WNV infection, most birds survive the infection and develop a lifelong immune response that can be detected by serology. Detection of antibodies in young birds, or of seroconversions in serially sampled older birds, is another approach to monitoring WNV transmission (Komar 2001).

# Dead Bird Reporting

Dead birds need not be collected and tested in order to be useful in WNV surveillance programs. Simply collecting information about the temporal and spatial patterns of bird deaths in an area has provided information about WNV activity. In order to develop a dead bird reporting WNV surveillance program, public participation is essential and must be encouraged through an effective public education and outreach program.
A database should be established to record and analyze dead bird sightings, with the following suggested data: caller identification and call-back number, date observed, location geocoded to the highest feasible resolution, species, and condition. Birds in good condition (unscavenged and without obvious decomposition or maggot infestation) may be sampled or retrieved for sampling and laboratory testing (see Avian Morbidity/Mortality Testing). Dead bird reporting systems, though they are passive surveillance systems, provide the widest surveillance net possible, extending to any area where a person is present to observe a dead bird. A strong carcass reporting system can guide selective sampling of carcasses for WNV testing. These systems have been used with some success to estimate risk of human infection (Eidson et al. 2001, Mostashari et al. 2003, Carney et al. 2011).

Though useful, there are limitations to dead bird surveillance systems. Public interest and willingness to participate are essential to these programs but are difficult to sustain. The surveillance is passive and qualitative, and can only be used to assess risk of infection to people in areas where sufficient data are collected to populate risk models such as DYCAST (Carney et al. 2011) and SaTScan (Mostashari et al. 2003). Over time, bird populations are becoming resistant to WNV-associated morbidity and mortality (Reed et al. 2009). A system for carcass reporting needs to be developed and communicated to multiple agencies in a community and to the public (Eidson 2001). This system should include assistance to the public for identifying birds (specialized websites are available). Other causes of bird mortality may produce a false alarm for WNV activity. However, such an alarm might alert the public health and wildlife disease communities to other pathogens or health threats. A subset of reported bird deaths should be investigated to confirm WNV activity through carcass testing.

# Avian Morbidity/Mortality Testing

In programs where the objective of avian morbidity/mortality testing is simply early detection of WNV activity, and not production of a quantitative index of human risk, testing of moribund or dead birds should be initiated when local adult mosquito activity begins in the spring and continue as long as local WNV activity remains undetected in the area. Once WNV is detected in dead birds in an area, or if prevention and control actions have been initiated, continued detection of WNV in carcasses in that area does not provide additional information about WNV activity and is not necessary or cost-effective. However, the number of WNV-infected dead birds can contribute to an effective human risk index (Kwan et al. 2012a). Contact with WNV-infected carcasses presents a potential health hazard to handlers (Fonseca et al. 2005). Appropriate biosafety precautions should be taken when handling carcasses in the field and in the laboratory. More detailed guidelines for sampling avian carcasses are available in Appendix 3.

To maximize sensitivity of this surveillance system, a variety of bird species should be tested, but corvids should be emphasized if they are present (Nemeth et al. 2007a). In dead corvids and other birds, bloody pulp from immature feathers, and tissues collected at necropsy such as brain, heart, kidney, or skin, harbor very high viral loads, and any of these specimen types is sufficient for sensitive detection of WNV (Panella et al. 2001, Komar et al. 2002, Docherty et al. 2004, Nemeth et al. 2009, Johnson et al. 2010).
Oral swabs and breast feathers are easy specimens to collect in the field; they avoid the need to transfer dead birds to the laboratory, do not require a cold chain, and are effective for detecting WNV in dead corvids (Komar et al. 2002, Nemeth et al. 2009). They are less sensitive for WNV detection in non-corvids; however, the reduced sensitivity of testing non-corvids using these specimen types can be offset by sampling more carcasses. The number of bird specimens tested will depend upon resources and whether WNV-infected birds have already been found in the area; triage of specimens by species or by geographic location may be appropriate in some jurisdictions. Several studies have demonstrated the effectiveness of avian mortality testing for early detection of WNV activity (Eidson et al. 2001, Julian et al. 2002, Guptill et al. 2003, Nemeth et al. 2007b, Patnaik et al. 2007, Kwan et al. 2012a). Wildlife rehabilitation clinics can be a good source of specimens derived from carcasses (Nemeth et al. 2007b). Collecting samples from living birds that are showing signs of illness requires the assistance of a veterinarian or wildlife technician. Dead crows and raptors alarm the public, and their carcasses are easily spotted (Ward et al. 2008). However, in regions with few or no crows, carcasses may be less obvious. Eye aspirates have been shown to be a sensitive and fast sampling protocol for WNV detection in corvid carcasses brought to the laboratory for testing (Lim et al. 2009).

# Live Bird Serology

The use of living birds as sentinels for monitoring WNV transmission requires serially blood-sampling a statistically valid number of avian hosts. Captive chickens, frequently referred to as sentinel chickens (though other species have been used), provide the most convenient source of blood for this purpose. Blood may be collected from a wing vein, the jugular vein, or on Nobuto® strips by pricking the chicken's comb with a lancet. There is no standard protocol for implementing a sentinel chicken program. It can be tailored to the specific circumstances of each surveillance jurisdiction, though sentinel chicken systems generally employ flocks of 6-10 birds at each site and bleed each bird weekly or every other week throughout the WNV transmission season. Sentinel chicken-based WNV surveillance systems have provided evidence of WNV transmission several weeks in advance of human cases (Healy et al. 2012).

While serially sampling free-ranging bird species is very labor intensive, it can provide information about seroconversion in amplifier hosts, similar to the data provided by sentinel chickens. Quantifying seroprevalence in free-ranging birds may provide additional information that benefits surveillance programs (Komar 2001). For example, a serosurvey of the local resident bird population (in particular, juvenile birds) following the arbovirus transmission season may help determine which local species may be important amplifiers of WNV in the surveillance area. This in turn could be used to map areas of greatest risk in relation to the populations of amplifier hosts. Furthermore, a serosurvey of adult birds just prior to the arbovirus transmission season can detect pre-existing levels of antibody in the bird population. High levels would suggest less opportunity for WNV amplification, because many adult bird species transfer maternal antibodies to their offspring, which can delay or inhibit WNV amplification among the population of juvenile birds that emerges each summer.
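Because these serosurvey interpretations rest on seroprevalence estimated from modest sample sizes, a short sketch of the point estimate with an approximate 95% Wilson score confidence interval may be useful; the function name and the counts in the example are invented for illustration.

```python
# Illustrative sketch: seroprevalence estimate with an approximate 95%
# Wilson score interval; the counts below are invented for the example.
import math

def seroprevalence(positives, sampled, z=1.96):
    """Return (point estimate, lower bound, upper bound) as proportions."""
    p = positives / sampled
    denom = 1.0 + z ** 2 / sampled
    center = (p + z ** 2 / (2 * sampled)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / sampled
                                   + z ** 2 / (4 * sampled ** 2))
    return p, max(0.0, center - half), min(1.0, center + half)

# 12 antibody-positive of 150 juvenile birds sampled after the season:
print(seroprevalence(12, 150))  # ~0.08 (95% CI roughly 0.046-0.135)
```

The width of the interval in this example is a reminder that comparisons against a fixed cut-point, such as the 10% threshold discussed next, are only as precise as the sample size allows.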
In Los Angeles, California, serosurveys of local amplifier hosts during winter determined that subsequent outbreaks occurred only after seroprevalence dipped below 10% in these birds (Kwan et al. 2012b).

There are numerous positive attributes of sentinel chicken and other live-bird serology surveillance systems. Sentinel chickens are captive, so a seroconversion event indicates local transmission and the presence of infected mosquitoes in the area. Chickens are preferred blood-feeding hosts of Cx. pipiens and Cx. quinquefasciatus, which are important urban vectors of WNV. Chickens can be used to monitor seroconversions to multiple arboviruses of public health importance (i.e., WN, SLE, WEE and EEE viruses) simultaneously. However, there are also a number of important limitations to these systems. Determination that a chicken has seroconverted typically occurs 3-4 weeks after the transmission event, and reporting of a positive chicken may not precede the first local case of human disease caused by WNV (Patnaik et al. 2007, Kwan et al. 2010, Unlu et al. 2009). However, a rapid screening assay can cut this time in half (Cheng & Su 2011) and improve the predictive capability of sentinel chicken systems (Healy et al. 2012). Use of sentinel birds requires institutional animal use and care protocols, as well as other authorization permits. Linking patterns in sentinel chicken seroconversion with human risk requires multiple years of data collection.

# Horses and Other Vertebrates

Horses are susceptible to encephalitis due to WNV infection; thus, equine cases of WNV-induced encephalitis may serve a sentinel function in the absence of other environmental surveillance programs. Equine health is an important economic issue, so severe disease in horses comes to the attention of the veterinary community. Use of horses as sentinels for active WNV surveillance is theoretically possible but practically infeasible. Widespread use of equine WNV vaccines decreases the incidence of equine WNV disease, and survivors of natural infections are protected from disease, reducing the usefulness of equines as sentinels. Veterinarians, veterinary service societies/agencies, and state agriculture departments are essential partners in any surveillance activities involving WNV infections in horses. Equine disease due to WNV is rare in tropical ecosystems. However, WNV frequently infects horses in the tropics. Detection of seroconversions in horses has been suggested as a sentinel system to detect risk of WNV transmission to people in Puerto Rico and other tropical locations (Phoutrides et al. 2011, Mattar et al. 2011).

Small numbers of other mammal species have been affected by WNV. Dead squirrels are tested for WNV along with dead birds in some jurisdictions (Reisen et al. 2013). Among domestic mammals, the most important have been the camelids, such as llamas and alpacas (Whitehead & Bedenice 2009). As with horses, these cases come to the attention of veterinarians, and any veterinary case of disease due to WNV may be used for passive surveillance. Dogs and cats become infected with WNV (Austgen et al. 2004) and have been shown to seroconvert to WNV during human disease outbreaks (Kile et al. 2005, Resnick et al. 2008); however, their value as primary WNV surveillance tools has not been evaluated, and their use is not recommended at this time. There is no evidence that dogs or cats develop sufficient viremia to become amplifier hosts (Austgen et al. 2004).
# Evaluation of Environmental Surveillance Systems

The objective of environmental surveillance is to detect local WNV activity in advance of transmission to humans in order to guide the implementation of interventions and to prevent human infections and disease from occurring. The methods for mosquito and bird surveillance outlined above are widely used to monitor WNV transmission levels, but few studies have directly compared the performance of these early warning systems in their ability to measure or predict human risk. Single-factor surveillance systems, in which only one method of surveillance is used to monitor WNV activity and predict risk of subsequent human infection, have been shown to give valuable early information (Carney et al. 2011, Ginsberg et al. 2010). Comparisons of single-factor surveillance systems have been done on a regional basis. In comparing mosquito and sentinel chicken systems in Louisiana, Unlu et al. (2009) demonstrated that positive mosquito pools occurred before detection of human cases, but seroconversions of sentinel chickens were detected after human cases occurred. They also compared mosquito trapping methods and found that while gravid traps collected more total mosquitoes and WNV-positive mosquitoes, sentinel chicken box traps detected WNV-positive mosquitoes earlier. Another study showed that sentinel chicken seroconversions were the first indication of WNV activity (Blackmore et al. 2003). A comparison of mosquito, sentinel chicken and dead-bird systems found that all provided evidence of WNV transmission before human cases occurred, but that mosquito surveillance performed better than the other systems (Healy et al. 2012). A few WNV surveillance programs have been developed that incorporate multiple WNV monitoring systems and other environmental parameters (e.g., temperature and rainfall patterns). These generally require a historical database of environmental and epidemiological surveillance data in order to develop and validate the complex quantitative associations that they employ. An example of a multi-factor system is the California Mosquito-Borne Virus Surveillance and Response Plan (CMVRA) (http://westnile.ca.gov/downloads.php?download_id=2321&filename=2012 CA Response Plan 5-8-12.pdf), which uses mosquitoes, birds and environmental conditions to perform a mosquito-borne virus risk assessment. Individual components of the system are scored to provide a value that corresponds to a normal season, emergency planning, or an epidemic situation. Kwan et al. (2012) compared the single-factor, mosquito-based Vector Index, the single-factor bird-based Dynamic Continuous-Area Space-Time (DYCAST) system, and the multiple-factor CMVRA system, and found that the CMVRA system using avian, mosquito and environmental information provided better prediction of WNV high-risk periods. However, the Vector Index and DYCAST systems also provided useful, predictive information.

# ArboNET

ArboNET, the national arboviral surveillance system, was developed by CDC and state health departments in 2000 in response to the emergence of WNV in 1999 (CDC 2010). In 2003, the system was expanded to include other domestic and imported arboviruses of public health significance. ArboNET is an electronic surveillance system administered by CDC's Division of Vector-Borne Diseases (DVBD). Human arboviral disease data are reported from all states and three local jurisdictions.
In addition to human disease cases, ArboNET maintains data on arboviral infections among human viremic blood donors, non-human mammals, sentinel animals, dead birds, and mosquitoes.

# Data Collected

Variables collected for human disease cases include patient age, sex, race, and county and state of residence; date of illness onset; case status (i.e., confirmed, probable, suspected, or not a case); clinical syndrome (e.g., encephalitis, meningitis, or uncomplicated fever); whether illness resulted in hospitalization; and whether the illness was fatal. Cases reported as encephalitis (including meningoencephalitis), meningitis, or acute flaccid paralysis are collectively referred to as neuroinvasive disease; others are considered non-neuroinvasive disease. Acute flaccid paralysis can occur with or without encephalitis or meningitis. Information regarding potential non-mosquito-borne transmission (e.g., blood transfusion or organ transplant recipient, breast-fed infant, or laboratory worker) and recent donation of blood or solid organs should be reported if applicable. An optional set of variables related to clinical symptoms and diagnostic testing was added to ArboNET in 2012. Blood donors identified as presumptively viremic by NAAT screening of the donation by a blood collection agency are also reported to ArboNET. For the purposes of national surveillance, a presumptively viremic donor is defined as a person with a blood donation that meets at least one of the following criteria: a) one reactive NAAT with a signal-to-cutoff (S/CO) ratio ≥ 17; or b) two reactive NAATs (a decision rule sketched at the end of this section). Reporting of donors who do not meet these criteria should wait until follow-up testing is completed. The date of blood donation is reported in addition to the variables routinely reported for disease cases. WNV disease in non-human mammals (primarily horses) and WNV infections in trapped mosquitoes, dead birds, and sentinel animals are also reported to ArboNET. Variables collected for non-human WNV infections include species, state and county, and date of specimen collection or symptom onset. The total number of mosquitoes or birds tested weekly can also be reported. Detailed descriptions of all variables collected by ArboNET and instructions for reporting are included in the ArboNET User Guide, which can be requested from DVBD by phone (970-261-6400) or email ([email protected]).

# Data Transmission

Jurisdictions can transmit data to ArboNET using one or more of four methods supported by DVBD: 1) jurisdictions that have a commercially- or state-developed electronic surveillance system can upload records from their system using an Extensible Markup Language (XML) message; 2) jurisdictions can upload records from a Microsoft Access database provided by CDC DVBD using an XML message; 3) jurisdictions may enter records manually using a CDC web site (https://wwwn.cdc.gov/arbonet/); or 4) jurisdictions can report cases through the CDC NEDSS Base System (NBS) and DVBD will download records directly from NBS to ArboNET. ArboNET data are maintained in a Microsoft Structured Query Language (SQL) Server database inside CDC's firewall. Users can access data via a password-protected website, but are limited to viewing data only from their own jurisdiction. The ArboNET website and database are maintained by CDC information technology staff and are backed up nightly.
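As a concrete illustration, the presumptively viremic donor definition above reduces to a two-part decision rule. The sketch below encodes it in Python; the function name and input format are hypothetical and not part of ArboNET's actual reporting interface.

```python
def is_presumptive_viremic_donor(sco_ratios: list[float]) -> bool:
    """Apply the national surveillance definition to a donation's NAAT results.

    sco_ratios: signal-to-cutoff ratio for each reactive NAAT on the donation
    (non-reactive tests are simply omitted from the list).
    """
    # Criterion a): one reactive NAAT with an S/CO ratio >= 17
    if any(ratio >= 17 for ratio in sco_ratios):
        return True
    # Criterion b): two reactive NAATs, regardless of S/CO ratio
    return len(sco_ratios) >= 2

# Donations meeting neither criterion are held until follow-up testing.
assert is_presumptive_viremic_donor([18.2]) is True      # meets a)
assert is_presumptive_viremic_donor([5.0, 6.1]) is True  # meets b)
assert is_presumptive_viremic_donor([5.0]) is False      # report after follow-up
```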
# Dissemination of ArboNET Data

CDC epidemiologists periodically review and analyze ArboNET surveillance data and disseminate results to stakeholders via direct communication, briefs in Morbidity and Mortality Weekly Reports and Epi-X, comprehensive annual summary reports, and DVBD's website. CDC also provides ArboNET data to the U.S. Geological Survey to produce maps of domestic arboviral activity, which are then posted on its website (http://diseasemaps.usgs.gov/). Surveillance reports are typically updated weekly during the transmission season and monthly during the off-season. A final report is released in the spring of the following year. CDC provides limited-use ArboNET data sets to the general public by formal request. Data release guidelines have been updated to be consistent with those developed by CDC and the Council of State and Territorial Epidemiologists (CSTE).

# Limitations of ArboNET Data

Human surveillance for WNV disease is largely passive, and relies on the receipt of information from physicians, laboratories, and other reporting sources by state health departments. Neuroinvasive disease cases are likely to be consistently reported because of the substantial morbidity associated with this clinical syndrome. Non-neuroinvasive disease cases are inconsistently reported because of a less severe spectrum of illness, geographic differences in disease awareness and healthcare-seeking behavior, and variable capacity for laboratory testing. Surveillance data for non-neuroinvasive disease cases should be interpreted with caution and generally should not be used to make comparisons between geographic areas or over time. Accordingly, ratios of reported neuroinvasive disease cases to non-neuroinvasive disease cases should not be interpreted as a measure of WNV virulence in an area. ArboNET does not routinely collect information regarding clinical signs and symptoms or diagnostic laboratory test results. Therefore, misclassification of the various syndromes caused by WNV cannot be detected. In addition, ArboNET does not routinely collect information regarding the specific laboratory methods used to confirm each case. The most common laboratory tests used to diagnose arboviral disease are IgM antibody-capture enzyme immunoassays (EIAs). Although these assays are relatively specific, false-positive results and cross-reactions occur between WNV and other flaviviruses (e.g., St. Louis encephalitis or dengue viruses). Positive IgM results should be confirmed by additional tests, especially plaque-reduction neutralization. However, such confirmatory testing often is not performed. While the electronic mechanisms for data transmission allow for rapid case reporting, the inclusion of both clinical and laboratory criteria in the surveillance case definition creates delays between the occurrence of cases and their reporting. Provisional data are disseminated to allow for monitoring of regional and national epidemiology during the arboviral transmission season. However, these reports generally lag several weeks behind the occurrence of the cases comprising them, and the data may change substantially before they are finalized. For this reason, provisional data from the current transmission season should not be combined with or compared to provisional or final data from previous years. The collection and reporting of non-human WNV surveillance data are highly variable among states (and even between regions within states) and change from year to year.
Because of this variability, non-human surveillance data should not be used to compare arboviral activity between geographic areas or over time. For more information about ArboNET, please contact the Division of Vector-Borne Diseases by phone: 970-261-6400 or email: [email protected].

# Laboratory Diagnosis and Testing

Public health programs responsible for monitoring WNV activity and implementing effective interventions rely on accurate information about human disease and environmental indicators of risk. To be successful, these programs must be supported by diagnostic laboratories capable of performing the required range of tests. Numerous serological and virus detection testing protocols have been developed to diagnose human infections and cases, and to enable environmental surveillance programs to monitor the presence of WNV in vector mosquitoes and non-human vertebrate hosts. The characteristics and uses of these tests are outlined in the sections below.

# Biocontainment - Laboratory Safety Issues

Laboratory-associated infections with WNV have been reported in the literature. The Subcommittee on Arbovirus Laboratory Safety in 1980 reported 15 human infections from laboratory accidents. One of these infections was attributed to aerosol exposure. In addition, two parenteral inoculations have been reported during work with animals. WNV may be present in blood, serum, tissues and CSF of infected humans, birds, mammals and reptiles. The virus has been found in the oral fluids and feces of birds. Parenteral inoculation with contaminated materials poses the greatest hazard; contact exposure of broken skin is a possible risk. Sharps precautions should be strictly adhered to when handling potentially infectious materials. Workers performing necropsies on infected animals may be at high risk of infection. Manipulation of infectious stocks of WNV should be conducted in BSL-3 laboratory space. However, diagnostic specimens from any source that have not been tested may be handled in BSL-2 conditions. If the specimen is suspected of harboring infectious WNV, it is recommended that it be manipulated in BSL-3 conditions, such as within a Class II Type A biological safety cabinet. Necropsies of birds or other animals expected to harbor high-titered WNV infections should be performed under BSL-3 conditions. Containment specifications are available in the Centers for Disease Control and Prevention/National Institutes of Health publication Biosafety in Microbiological and Biomedical Laboratories (BMBL). This document can be found online at: http://www.cdc.gov/biosafety/publications/bmbl5/.

# Shipping of Agents

Shipping and transport of WNV and clinical specimens should follow current International Air Transport Association (IATA) and Department of Commerce recommendations. Because of the threat to the domestic animal population, a U.S. Department of Agriculture (USDA) shipping permit is required for transport of known WNV isolates. For more information, visit the IATA dangerous goods Web site at: http://www.iata.org/publications/dgr/Pages/index.aspx, and the USDA Animal and Plant Health Inspection Service (APHIS), National Center for Import/Export's Web site at: http://www.aphis.usda.gov/animal_health/vet_biologics/vb_notices.shtml.

# Human Diagnosis

In most patients, infection with WNV and many of the other arboviruses that cause encephalitis is clinically inapparent or causes a nonspecific viral syndrome.
Numerous pathogens cause encephalitis, aseptic meningitis and febrile disease with clinical symptoms and presentations similar to those caused by WNV and should be considered in the differential diagnosis. Definitive diagnosis of WNV can only be made by laboratory testing using specific reagents. Selection of diagnostic test procedures should take into consideration the range of pathogens in the differential diagnosis, the criteria for classifying a WNV case as confirmed or probable, as well as the capability of the primary and confirming diagnostic laboratories. The case definition for neuroinvasive and non-neuroinvasive disease caused by WNV and the other arboviral pathogens present in the United States specifies presence of clinically compatible symptoms accompanied by laboratory evidence of recent infection (Appendix 1). Appropriate selection of diagnostic procedures and accurate interpretation of findings require information describing the patient and the diagnostic specimen. For human specimens, the following data must accompany sera, CSF or tissue specimens for results to be properly interpreted and reported: 1) symptom onset date (when known); 2) date of sample collection; 3) unusual immunological status of patient (e.g., immunosuppression); 4) state and county of residence; 5) travel history (especially in flavivirus-endemic areas); 6) history of prior vaccination (e.g., yellow fever, Japanese encephalitis, or tick-borne encephalitis viruses); and 7) brief clinical summary including clinical diagnosis (e.g., encephalitis, aseptic meningitis). Minimally, onset and sample collection dates are required to perform and interpret initial screening tests. The remaining information is required to evaluate any specimens positive on initial screening. If possible, a convalescent serum sample taken at least 14 days following the acute sample should be obtained to enable confirmation by serological testing.

# Serology

The front-line screening assay for laboratory diagnosis of human WNV infection is the IgM assay. Currently, the FDA has cleared four commercially available test kits from different manufacturers for detection of WNV IgM antibodies. These four kits are used in many commercial and public health laboratories in the United States. In addition, the CDC-defined IgM and IgG ELISAs can be used; protocols and reagents are available from the CDC DVBD Diagnostic Laboratory (Martin et al. 2000; Johnson et al. 2000). There is also a microsphere-based immunoassay for the detection of IgM antibodies that can differentiate WNV from SLE (Johnson et al. 2005). Because the IgM and IgG ELISA tests can cross-react between flaviviruses (e.g., SLE, dengue, yellow fever, WN), they should be viewed as screening tests only. For a case to be considered confirmed, serum samples that are antibody-positive on initial screening should be evaluated by a more specific test; currently the plaque reduction neutralization test (PRNT) is the recommended test for differentiating between flavivirus infections. Though WNV is the most common cause of arboviral encephalitis in the United States, there are several other arboviral encephalitides present in the country and in other regions of the world. Specimens submitted for WNV testing should also be tested by ELISA and PRNT against other arboviruses known to be active or present in the area or in the region where the patient traveled.
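When paired acute and convalescent sera are available, serologic confirmation rests on demonstrating a significant (conventionally fourfold or greater) rise in titer between the two samples. A minimal sketch of that comparison follows, assuming titers recorded as reciprocal endpoint dilutions (1:20 entered as 20); the function name is illustrative.

```python
def fourfold_rise(acute_titer: int, convalescent_titer: int) -> bool:
    """True if the convalescent titer shows a >= 4-fold rise over the acute titer.

    Titers are reciprocal endpoint dilutions (1:20 -> 20).
    """
    return convalescent_titer >= 4 * acute_titer

# Acute serum 1:10, convalescent serum (>= 14 days later) 1:80 -> seroconversion
assert fourfold_rise(10, 80) is True
assert fourfold_rise(10, 20) is False  # a 2-fold change is not confirmatory
```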
# Virus Detection Assays

Numerous procedures have been developed for detecting viable WNV, WNV antigen or WNV RNA in human diagnostic samples, many of which have been adapted to detecting WNV in other vertebrates and in mosquito samples. These procedures vary in their sensitivity, specificity, and time required to conduct the test (Table 1). Among the tests listed in Table 1, the VecTest, antigen capture ELISA, and Rapid Analyte Measurement Platform were developed specifically for testing mosquitoes for WNV antigen, were subsequently adapted to testing bird and other vertebrate samples, and are not used for human diagnostic testing. Additional details about these tests are contained in the following sections on mosquito and bird diagnostic tests. The remaining tests have been used in various human diagnostic assays.

Table 1. Characteristic sensitivity and time required for West Nile infectious virus, viral RNA or viral antigen detection assays.

Among the most sensitive procedures for detecting WNV in samples are those using RT-PCR to detect WNV RNA in human CSF, serum and other tissues. Fluorogenic 5' nuclease techniques (real-time PCR) and nucleic acid sequence-based amplification (NASBA) methods have been developed and validated for specific human diagnostic applications (Briese et al. 2000; Shi et al. 2001; Lanciotti et al. 2000; Lanciotti et al. 2001) and for detecting WNV RNA in blood donations (Busch et al. 2005). WNV presence can be demonstrated by isolation of viable virus from samples taken from clinically ill humans. Appropriate samples include CSF (serum samples may be useful very early in infection) and brain tissue (taken at biopsy or postmortem). Virus isolation should be performed in known susceptible mammalian (e.g., Vero) or mosquito cell lines (e.g., C6/36). Mosquito-origin cells may not show obvious cytopathic effect and must be screened by immunofluorescence or RT-PCR. Confirmation of virus isolate identity can be accomplished by indirect immunofluorescence assay (IFA) using virus-specific monoclonal antibodies or nucleic acid detection. The IFA using well-defined murine monoclonal antibodies (MAbs) is an efficient, economical, and rapid method to identify flaviviruses. MAbs are available that can differentiate WNV and SLE virus from each other and from other flaviviruses. Incorporating MAbs specific for other arboviruses known to circulate in various regions will increase the rapid diagnostic capacities of state public health laboratories (Briese et al. 2000; Shi et al. 2001; Lanciotti et al. 2000). While these tests can be quite sensitive, virus isolation and RT-PCR to detect WNV RNA in sera or CSF of clinically ill patients have limited utility in diagnosing human WNV neuroinvasive disease due to the low-level viremia present in most cases at the time of clinical presentation. However, one study demonstrated that combining detection of IgM with detection of WNV RNA in plasma significantly increased the number of WNV non-neuroinvasive (i.e., fever) cases detected (Tilley et al. 2006). Virus isolation or RT-PCR on serum may be helpful in confirming human WNV infection in immunocompromised patients when antibody development is delayed or absent. Immunohistochemistry (IHC) using virus-specific MAbs on brain tissue has been very useful in identifying both human and avian cases of WNV infection.
In suspected fatal cases, IHC should be performed on formalin-fixed autopsy, biopsy, and necropsy material, ideally collected from multiple anatomic regions of the brain, including the brainstem, midbrain, and cortex (Bhatnagar et al. 2007). To maintain Clinical Laboratory Improvement Amendments (CLIA) certification, CLIA recommendations for performing and interpreting human diagnostic tests should be followed. Laboratories doing WNV serology or RNA-detection testing are invited to participate in the annual proficiency testing that is available from CDC's Division of Vector-Borne Diseases in Fort Collins, Colorado. To obtain additional information about the proficiency testing program and about training in arbovirus diagnostic procedures, contact the Division of Vector-Borne Diseases by phone: 970-261-6400 or email: [email protected].

# Non-Human Laboratory Diagnosis

The following sections discuss techniques that may be applied to mosquitoes and non-human vertebrate specimens that are collected for the purpose of diagnosing WNV infections. Many of the virus detection procedures are identical to those described in the Human Diagnosis section, but several procedures have been developed specifically for these sample types.

# Mosquitoes

# Identification and Pooling

Mosquitoes should be identified to species or lowest taxonomic unit. Specimens are placed into pools of 50 specimens or fewer based on species, sex, location, trap type, and date of collection. Larger pool sizes can be used in some assays with loss of sensitivity (Sutherland and Nasci 2007). If resources are limited, testing of mosquitoes for surveillance purposes can be limited to the primary vector species.

# Homogenizing and Centrifugation

After adding the appropriate media, mosquito pools can be macerated or ground by a variety of techniques, including mortar and pestle, vortexing sealed tubes containing one or more copper-clad BBs, or use of commercially available tissue-homogenizing apparatus (Savage et al. 2007). After grinding, samples are centrifuged and an aliquot is removed for WNV testing. Because mosquito pools may contain WNV and other pathogenic viruses that may be aerosolized during processing, laboratory staff should take appropriate safety precautions, including use of a Class II Type A biological safety cabinet, appropriate personal protective equipment (PPE), and good biosafety practices.

# Virus Detection

Virus isolation in Vero cell culture remains the standard for confirmation of WNV-positive pools (Beaty et al. 1989, Savage et al. 1999, Lanciotti et al. 2000). Virus isolation provides the benefit of detecting other viruses that may be contained in the mosquitoes, a feature that is lost using test procedures that target virus-specific nucleotide sequences or proteins. However, Vero cell culture is expensive and requires specialized laboratory facilities; thus, nucleic acid assays have largely replaced virus isolation as detection and confirmatory assay methods of choice. Virus isolation requires that mosquito pools be ground in a media that protects the virus from degradation, such as BA-1 (Lanciotti et al. 2000), and preservation of an aliquot at -70°C to retain virus viability for future testing. Nucleic acid detection assays are the most sensitive assays for virus detection and confirmation of WNV in mosquito pools (Lanciotti et al. 2000). Real-time RT-PCR assays with different primer sets may be used for both detection and confirmation of virus in mosquito pools.
Standard RT-PCR primers are also available (Kuno et al. 1998). Nucleic acids may be extracted from an aliquot of the mosquito pool homogenate by hand using traditional methods or with kits, or with automated robots in high-throughput laboratories (Savage et al. 2007). West Nile virus antigen detection assays are available in ELISA format (Tsai et al. 1987) and in two commercial kits that employ lateral flow wicking assays, developed specifically for testing mosquitoes (Komar et al. 2002, Panella et al. 2001, Burkhalter et al. 2006). The antigen capture ELISA of Hunt et al. 2002 and the RAMP (Rapid Analyte Measurement Platform, Response Biomedical Corp, Burnaby, British Columbia, Canada) WNV test are approximately equal in sensitivity and detect WNV in mosquito pools at concentrations as low as 10^3.1 PFU/ml (Burkhalter et al. 2006). The VecTest (Medical Analysis Systems, Inc., Camarillo, CA) is less sensitive and detects WNV in mosquito pools at concentrations of 10^5.17 PFU/ml. The VecTest (evaluated by Burkhalter et al. 2006) is no longer available, but is similar to a lateral flow wicking assay marketed as VecTOR Test (VecTOR Test Systems, Inc., Thousand Oaks, CA). Although the antigen detection assays are less sensitive than nucleic acid detection assays, they have been evaluated in operational surveillance programs (Mackay et al. 2004, Lampman et al. 2006, Williges et al. 2009, Kesavaraju et al. 2012) and can provide valuable WNV infection rate data when employed consistently in a mosquito surveillance program.

# Non-Human Vertebrates

# Serology

Diagnostic kits for serologic diagnosis of WNV infection in clinically ill domestic animals are not commercially available. An IgM-capture ELISA has been developed for use in horses, and can be readily adapted to other animal species where anti-IgM antibody reagents are commercially available. Alternatively, seroconversion measured by IgG, neutralizing antibody, or hemagglutination inhibition (HAI) assays in acute and convalescent serum samples collected 2-3 weeks apart can be used for screening. The latter two approaches do not require species-specific reagents and thus have broad applicability. Inhibition or competition ELISA formats likewise avoid the use of species-specific reagents. A popular blocking ELISA has been applied to a variety of vertebrate species with very high specificity and sensitivity, reducing the necessity of a second confirmatory test (Blitvich et al. 2003a, 2003b). Similarly, the microsphere immunoassay, when used comparatively with WNV antigen-coated beads and SLEV antigen-coated beads, performs with high specificity and sensitivity (Johnson et al. 2005). Typically, a confirmatory 90% plaque-reduction neutralization test (PRNT90) with end-point titration is used to confirm serology in non-human vertebrates. Plaque-reduction thresholds below 80% are not recommended. Because of the cross-reactive potential of anti-flavivirus antibodies, the PRNT must be comparative, performed simultaneously with St. Louis encephalitis virus (SLEV). PRNT requires the use of a BSL-3 laboratory and Vero cell culture. The PRNT has been adapted to BSL-2 laboratory conditions using a recombinant chimeric virus featuring the WNV envelope glycoprotein gene in a yellow fever virus backbone (Chimeravax®, originally developed as a live-attenuated vaccine candidate). For PRNTs, the Chimeravax provided equivalent results for bird sera, and 10- to 100-fold lower titers for equine sera.
The same serologic techniques applied to clinically ill animals may also be used for serosurveys of healthy vertebrates or for serially sampled sentinel animals. Serologic techniques for WNV diagnosis should not be applied to carcasses, as in many cases of fatal WNV infection the host will die before a detectable immune response develops. Furthermore, some morbid or moribund animals that have WNV antibodies due to past infection may be currently infected with a pathogen other than WNV. Fatal cases should have readily detectable WNV in their tissues. As with human diagnostic samples, serologic results from non-human vertebrates must be interpreted with caution and with an understanding of the cross-reactive tendencies of WNV and other flaviviruses. For primary WNV infections, a low rate of cross-reactivity is expected (<5%), and misdiagnoses are avoided by the requirement that the reciprocal anti-WNV titer be a minimum of 4-fold greater than the corresponding anti-SLEV titer. In rare cases, a secondary flavivirus infection due to WNV in a host with a history of SLEV infection may boost the older anti-SLEV titer to greater levels than the anti-WNV titer, resulting in a misdiagnosis of SLEV infection, a phenomenon known as "original antigenic sin". Some serum samples will have endpoint titers for WNV and SLEV that are the same or just 2-fold different. While it is possible that this serologic result is due to past infections with both of these viruses, it is impossible to rule out cross-reaction from one or the other, or even from a third indeterminate flavivirus. Such a result should be presented as "Undifferentiated flavivirus infection" (see the sketch at the end of this section).

# Virus Detection

Methods for WNV detection, isolation and identification are the same as described for human and mosquito diagnostics. Specimens typically used are tissues and/or fluids from acutely ill and/or dead animals. Virus detection in apparently healthy animals is low-yield and not cost-effective, and should not be considered for routine surveillance programs. In bird, mammal and reptile carcasses, tissue tropisms have varied among individuals within a species, and across species. Some animals, such as horses, resemble humans in having few tissues with detectable virus particles or viral RNA at necropsy. Others, such as certain bird species, may have fulminant infections with high viral loads in almost every tissue.
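The comparative PRNT interpretation rules described under Serology above (a 4-fold or greater difference between reciprocal anti-WNV and anti-SLEV titers, with smaller differences left undifferentiated) map directly onto a small decision function. This is a sketch of the logic only, with illustrative naming and titers assumed as reciprocal endpoint dilutions:

```python
def interpret_comparative_prnt(wnv_titer: int, slev_titer: int) -> str:
    """Interpret comparative PRNT90 endpoint titers (reciprocal dilutions).

    A diagnosis requires one titer to exceed the other by at least 4-fold;
    titers that are equal or within 2-fold cannot be differentiated.
    """
    if wnv_titer == 0 and slev_titer == 0:
        return "No flavivirus neutralizing antibody detected"
    if wnv_titer >= 4 * slev_titer:
        return "WNV infection"
    if slev_titer >= 4 * wnv_titer:
        return "SLEV infection"
    return "Undifferentiated flavivirus infection"

assert interpret_comparative_prnt(320, 40) == "WNV infection"
assert interpret_comparative_prnt(80, 40) == "Undifferentiated flavivirus infection"
```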
# Prevention and Control

# Integrated Vector Management

Mosquito abatement programs successfully employ integrated pest management (IPM) principles to reduce mosquito abundance, providing important community services to protect quality of life and public health (Rose 2001). Prevention and control of WNV and other zoonotic arboviral diseases is accomplished most effectively through a comprehensive, integrated vector management (IVM) program applying the principles of IPM. IVM is based on an understanding of the underlying biology of the arbovirus transmission system, and utilizes regular monitoring of vector mosquito populations and WNV activity levels to determine if, when, and where interventions are needed to keep mosquito numbers below levels that produce risk of human disease, and to respond appropriately to reduce risk when it exceeds acceptable levels. Operationally, IVM is anchored by a monitoring program providing data that describe:

- Conditions and habitats that produce vector mosquitoes.
- Abundance of those mosquitoes over the course of a season.
- WNV transmission activity levels expressed as the WNV infection rate in mosquito vectors.
- Parameters that influence local mosquito populations and WNV transmission.

These data inform decisions about implementing mosquito control activities appropriate to the situation, such as:

- Source reduction through habitat modification.
- Larval mosquito control using the appropriate methods for the habitat.
- Adult mosquito control using pesticides applied from trucks or aircraft when established thresholds have been exceeded.
- Community education efforts related to WNV risk levels and intervention activities.

Monitoring also provides quality control for the program, allowing evaluation of:

- Effectiveness of larval control efforts.
- Effectiveness of adult control efforts.
- Causes of control failures (e.g., undetected larval sources, pesticide resistance, equipment failure).

# Surveillance Programs

Effective IVM for WNV prevention relies on a sustained, consistent surveillance program that targets vector species. The objectives are to identify and map larval production sites by season, monitor adult mosquito abundance, monitor vector infection rates, document the need for control based on established thresholds, and monitor control efficacy. Surveillance can be subdivided into three categories based on the objective of the surveillance effort. However, the surveillance elements are complementary, and in combination provide the information required for IVM decisions.

# Larval Mosquito Surveillance

Larval surveillance involves identifying and sampling a wide range of aquatic habitats to identify the sources of vector mosquitoes, maintaining a database of these locations, and keeping a record of larval control measures applied to each. This requires trained inspectors to identify larval production sites, to collect larval specimens on a regular basis from known larval habitats, and to perform systematic surveillance for new sources. This information is used to determine where and when source reduction or larval control efforts should be implemented.

# Adult Mosquito Surveillance

Adult mosquito surveillance is used to quantify the relative abundance of adult vector mosquitoes and to describe their spatial distribution. This process also provides specimens for evaluating the incidence of WNV infection in vector mosquitoes (see the Surveillance chapter for more information). Adult mosquito surveillance programs require standardized and consistent surveillance efforts in order to provide data appropriate for monitoring trends in vector activity, for setting action thresholds, and for evaluating control efforts. Various methods are available for monitoring adult mosquitoes. Most frequently used in WNV surveillance are the CO2-baited CDC miniature-style light traps for monitoring host-seeking Culex tarsalis (and potential bridge vector species) and gravid traps to monitor Cx. quinquefasciatus, Cx. pipiens and Cx. restuans populations. Adult mosquito surveillance should consist of a series of collecting sites at which mosquitoes are sampled using both gravid and light traps on a regular schedule. Fixed trap sites allow monitoring of trends in mosquito abundance and virus activity over time and are essential for obtaining information to evaluate WNV risk and to guide control efforts. Additional trap sites can be utilized on an ad hoc basis to provide additional information about the extent of virus transmission activity and effectiveness of control efforts.
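Two widely used quantitative indicators derived from these collections are the infection rate in tested mosquito pools and the Vector Index mentioned earlier (abundance multiplied by infection rate, summed across vector species). The sketch below uses the simple minimum infection rate (MIR), which assumes one infected mosquito per positive pool, rather than the maximum-likelihood estimate many programs prefer; the data layout and names are illustrative only.

```python
def minimum_infection_rate(positive_pools: int, mosquitoes_tested: int) -> float:
    """MIR per 1,000 mosquitoes tested; assumes one positive mosquito per pool."""
    return 1000.0 * positive_pools / mosquitoes_tested

def vector_index(species_data: dict) -> float:
    """Sum over species of (females per trap night) x (proportion infected).

    species_data maps species name -> dict with keys:
      'per_trap_night': average females collected per trap night
      'positive_pools', 'tested': pool results for the same interval
    """
    vi = 0.0
    for d in species_data.values():
        proportion_infected = d["positive_pools"] / d["tested"]  # MIR as a proportion
        vi += d["per_trap_night"] * proportion_infected
    return vi

week = {
    "Cx. pipiens":  {"per_trap_night": 12.4, "positive_pools": 3, "tested": 600},
    "Cx. restuans": {"per_trap_night": 4.1,  "positive_pools": 1, "tested": 250},
}
print(minimum_infection_rate(4, 850))  # ~4.7 per 1,000 tested
print(vector_index(week))              # ~0.08 infected vectors per trap night
```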
# WNV Transmission Activity

Monitoring WNV transmission activity in the environment before human cases occur is an essential component of an IVM program to reduce WNV risk. Without this information, it is impossible to set thresholds for vector mosquito population management and to take appropriate action before an outbreak is in progress (Table 2). WNV transmission activity can be monitored by tracking the WNV infection rate in vector mosquito populations, WNV-related avian mortality, seroconversion to WNV in sentinel chickens, seroprevalence/seroconversion in wild birds, and WNV veterinary (primarily horse) cases. The methods for monitoring and calculating indices of WNV activity are described in detail in the Environmental Surveillance section of the Surveillance chapter.

# Mosquito Control Activities

Guided by the surveillance elements of the program, integrated efforts to control mosquitoes are implemented to maintain vector populations below thresholds that would facilitate WNV amplification and increase human risk. Failing that, efforts to reduce the abundance of WNV-infected biting adult mosquitoes must be quickly implemented to prevent risk levels from increasing to the point of a human disease outbreak. Properly implemented, a program monitoring mosquito abundance and WNV activity in the vector mosquito population will provide a warning of when risk levels are increasing. Because of delays in onset of disease following infection, and delays related to seeking medical care, diagnosis, and reporting of human disease, WNV surveillance based on human case reports lags behind increases in risk and is not sufficiently sensitive to allow timely implementation of outbreak control measures.

# Larval Mosquito Control

The objective of the larval mosquito control component of an IVM program is to manage mosquito populations before they emerge as adults. This can be an efficient method of managing mosquito populations if the mosquito breeding sites are accessible. However, larval control may not attain the levels of mosquito population reduction needed to maintain WNV risk at low levels, and must be accompanied by measures to control the adult mosquito populations as well. In outbreak situations, larval control complements adult mosquito control measures by preventing new vector mosquitoes from being produced. However, larval control alone is not able to stop WNV outbreaks once virus amplification has reached levels causing human infections. Numerous methods are available for controlling larval mosquitoes. Source reduction is the elimination or removal of habitats that produce mosquitoes. This can range from draining roadside ditches to properly disposing of discarded tires and other trash containers. Only through a thorough surveillance program will mosquito sources be identified and appropriately removed. In order to effectively control vector mosquito populations through source reduction, all sites capable of producing vector mosquitoes must be identified and routinely inspected for the presence of mosquito larvae or pupae. This is difficult to accomplish with the WNV vector species Cx. quinquefasciatus and Cx. pipiens, which readily utilize cryptic sites such as storm drainage systems, grey water storage cisterns and storm water runoff impoundments. Unmaintained swimming pools, ponds and similar water features at vacant properties are difficult to identify and can contribute significant numbers of adult mosquitoes to local populations.
To manage mosquitoes produced in habitats that are not conducive to source reduction, pesticides registered by EPA for larval mosquito control are applied when larvae are detected. Several larval mosquito control pesticides are available, and a discussion of their attributes and limitations is beyond the scope of this overview. More detailed information about pesticides used for larval mosquito control is available from the U.S. EPA (http://www2.epa.gov/mosquitocontrol/controlling-mosquitoes-larval-stage). No single larvicide product will work effectively in every habitat where WNV vectors are found. An adequate field staff with proper training is required to properly identify larval production sites and implement the appropriate management tools for that site.

# Adult Mosquito Control

Source reduction and larvicide treatments may be inadequate to maintain vector populations at levels sufficiently low to limit virus amplification. The objective of the adult mosquito control component of an IVM program is to complement the larval management program by reducing the abundance of adult mosquitoes in an area, thereby reducing the number of eggs laid in breeding sites. Adult mosquito control is also intended to reduce the abundance of biting, infected adult mosquitoes in order to prevent them from transmitting WNV to humans and to break the mosquito-bird transmission cycle. In situations where vector abundance is increasing above acceptable levels, targeted adulticide applications using pesticides registered by EPA for this purpose can assist in maintaining vector abundance below threshold levels. More detailed information about pesticides used for adult mosquito control is available from the U.S. EPA (http://www2.epa.gov/mosquitocontrol/controlling-adult-mosquitoes). Pesticides for adult mosquito control can be applied from hand-held application devices or from trucks or aircraft. Hand-held or truck-based applications are useful to manage relatively small areas, but are limited in their capacity to treat large areas quickly during an outbreak. In addition, gaps in coverage may occur during truck-based applications due to limitations of the road infrastructure. Aerial application of mosquito control adulticides is required when large areas must be treated quickly, and can be particularly valuable because controlling WNV vectors such as Cx. quinquefasciatus or Cx. pipiens often requires multiple, closely spaced treatments (Andis et al. 1987). Both truck- and aerially-applied pesticides for adult mosquito control are applied using ultra-low-volume (ULV) technology, in which a very small volume of pesticide is applied per acre as an aerosol of minute droplets designed to contain sufficient pesticide to kill mosquitoes that are contacted by the droplets. Information describing ULV spray technology and the factors affecting effectiveness of ground- and aerially-applied ULV pesticides is reviewed in Mount et al. 1996, Mount 1998, and Bonds 2012.

# Risk and Safety of Vector Control Pesticides and Practices

Insecticides to control larval and adult mosquitoes are registered specifically for that use by the U.S. Environmental Protection Agency (EPA). Instructions provided on the product labels prescribe the required application and use parameters, and must be carefully followed. Properly applied, these products do not negatively affect human health or the environment.
Research has demonstrated that ULV application of mosquito control adulticides did not produce detectable exposure or increases in asthma events in persons living in treated areas (Karpati et al. 2004, Currier et al. 2005, Duprey et al. 2008). The risks from WNV demonstrably exceed the risks from mosquito control practices (Davis and Peterson 2008, Macedo et al. 2010, Peterson et al. 2006).

# Legal Action to Achieve Access or Control

Individually-owned private properties may be major sources of mosquito production. Examples include accumulations of discarded tires or other trash, and neglected swimming pools and similar water features that become stagnant and produce mosquitoes. Local public health statutes or public nuisance regulations may be employed to gain access for surveillance and control, or to require the property owner to mitigate the problem. Executing such legal actions may be a prolonged process during which adult mosquitoes are continuously produced. Proactive communication with residents and public education programs may alleviate the need to use legal actions. However, legal efforts may be required to eliminate persistent mosquito production sites.

# Quality Control

Pesticide products and application procedures (for both larval and adult control) must periodically be evaluated to ensure an effective rate of application is being used and that the desired degree of control is obtained. Application procedures should be evaluated regularly (minimally once each season) to assure equipment is functioning properly to deliver the correct dosages and droplet parameters and to determine appropriate label rates to use locally. Finally, mosquito populations should routinely be evaluated to ensure insecticide resistance is not emerging.

# Records

Surveillance data describing vector sources, abundance and infection rates, records of control efforts (e.g., source reduction, larvicide applications, adulticide applications), and quality control data must be maintained and used to evaluate IVM needs and performance. Long-term data are essential to track trends and to evaluate levels of risk.

# Insecticide Resistance Management

In order to delay or prevent the development of insecticide resistance in vector populations, integrated vector management programs should include a resistance management component (Florida Coordinating Council on Mosquito Control 1998). Ideally, this should include annual monitoring of the status of resistance in the target populations to:

- Provide baseline data for program planning and pesticide selection before the start of control operations.
- Detect resistance at an early stage so that timely management can be implemented.
- Continuously monitor the effect of control strategies on insecticide resistance.

Monitoring resistance in the vector population is essential, and is useful in determining the potential causes of control failures, should they occur. CDC has developed an assay to determine if a particular pesticide formulation (the combination of the active ingredient in the insecticide and inactive ingredients) is able to kill mosquito vectors. The technique, referred to as the CDC bottle bioassay, is simple, rapid, and economical compared with alternatives. The results can help guide the choice of insecticide used for spraying. A practical laboratory manual that describes how to perform and interpret the CDC bottle bioassay is available online (http://www.cdc.gov/malaria/resources/pdf/fsp/ir_manual/ir_cdc_bioassay_en.pdf).
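Bottle bioassay results reduce to percent mortality at a predetermined diagnostic time. The sketch below tallies that figure; the 90% cutoff shown is a placeholder assumption for illustration only, and the diagnostic times and interpretation thresholds in the CDC manual linked above govern actual use.

```python
def percent_mortality(dead_at_diagnostic_time: int, total_exposed: int) -> float:
    """Percent of mosquitoes dead or knocked down at the diagnostic time."""
    return 100.0 * dead_at_diagnostic_time / total_exposed

def interpret(mortality_pct: float, susceptible_cutoff: float = 90.0) -> str:
    """Placeholder interpretation; consult the CDC bottle bioassay manual
    for the actual diagnostic times and thresholds for each insecticide."""
    if mortality_pct >= susceptible_cutoff:
        return "susceptible"
    return "possible resistance - confirm and investigate mechanism"

# Example: across replicate bottles, 87 of 100 mosquitoes dead at diagnostic time
m = percent_mortality(87, 100)
print(f"{m:.0f}% mortality -> {interpret(m)}")
```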
For additional information about obtaining and performing the bottle bioassay, contact CDC at [email protected]. The integrated vector management program should include options for managing resistance that are appropriate for the local conditions. The techniques regularly used include the following:

- Management by moderation - preventing onset of resistance by:
  - Using dosages no lower than the lowest label rate to avoid genetic selection.
  - Using less frequent applications.
  - Using chemicals of short environmental persistence.
  - Avoiding slow-release formulations that increase selection for resistance.
  - Avoiding the use of the same class of insecticide to control both adults and immature stages.
  - Applying locally. Currently, most districts treat only hot spots. Area-wide treatments are used only during public health alerts or outbreaks.
  - Leaving certain generations, population segments, or areas untreated.
  - Establishing high thresholds for pest mosquito mitigation using insecticides except during public health alerts or outbreaks.
- Management by continued suppression. This strategy is used in regions of high value or persistent high risk (e.g., heavily populated regions or locations with recurring WNV outbreaks) where mosquitoes must be kept at very low densities. This does not mean saturation of the environment by pesticides, but rather the saturation of the defense mechanisms of the insect by insecticide dosages that can overcome resistance. This is achieved by the application of dosages within label rates but sufficiently high to be lethal to heterozygous individuals that are partially resistant. If the heterozygous individuals are killed, resistance will be slow to emerge. This method should not be used if any significant portion of the population in question is fully resistant. Another, more commonly used approach is the addition of synergists that inhibit existing detoxification enzymes and thus eliminate the competitive advantage of these individuals. Commonly, the synergist of choice in mosquito control is piperonyl butoxide (PBO).
- Management by multiple attack consists of achieving effective control through the action of several different and independent pressures such that selection for any one of them would be below that required for the development of resistance in the mosquito population. This strategy involves the use of insecticides with different modes of action in mixtures or in rotations. There are economic limitations associated with this approach (e.g., costs of switching chemicals or having storage space for them), and critical variables in addition to the pesticide mode of action that must be taken into consideration (i.e., mode of resistance inheritance, frequency of mutations, population dynamics of the target species, availability of refuges, and migration).

General recommendations are to evaluate resistance patterns at least annually and to consider rotating insecticides at annual or longer intervals.

# Continuing Education

Continuing education for operational vector control workers is required to instill or refresh knowledge related to practical mosquito control. Training focuses on safety, applied technology, and requirements for the regulated certification program mandated by most states. Training should also include information on the identification of mosquito species, their behavior, ecology, and appropriate methods of control.
# Vector Management in Public Health Emergencies

Intensive early season adult mosquito control efforts can decrease WNV transmission activity (Lothrop et al. 2008) and result in reduced human risk. However, depending on local conditions, proactive vector management may not maintain mosquito populations at levels sufficiently low to avoid development of outbreaks. As evidence of sustained or intensified virus transmission in a region increases, emergency vector control efforts to reduce the abundance of infected, biting adult mosquitoes must be implemented. This is particularly important in areas where vector surveillance indicates that infection rates in Culex mosquitoes are continually increasing or being sustained at high levels, that potential accessory vectors (e.g., mammal-feeding mosquito species) are infected with WNV, or that multiple human cases or viremic blood donors have been reported. Delaying adulticide applications until numerous human cases occur negates the value and purpose of the surveillance system. Timely application of adulticides interrupts WNV transmission and prevents human cases (Carney et al. 2005).

# Guidelines for a Phased Response to WNV Surveillance Data

The objective of a phased response to WNV surveillance data is to implement public health interventions appropriate to the level of WNV risk in a community (Table 2). A surveillance program adequate to monitor WNV activity levels associated with human risk must be in place in order to provide detection of epizootic transmission in advance of human disease outbreaks. The surveillance programs and environmental surveillance indicators described above demonstrate that enzootic/epizootic WNV transmission can be detected several weeks before the onset of human disease, allowing for implementation of effective interventions (Mostashari et al. 2003, Unlu et al. 2009). All communities should prepare for WNV activity. For reasons that are not well understood, some regions are at risk of higher levels of WNV transmission and epidemics than others (CDC 2010), but there is evidence of WNV presence and the risk of human disease and outbreaks in most counties in the contiguous 48 states. The ability to develop a useful phased response depends upon the existence of some form of WNV monitoring in the community to provide the information needed to gauge risk levels. Measures of the intensity of WNV epizootic transmission in a region, preferably from environmental surveillance indicators, should be considered when determining the level of the public health response. As noted previously, human case reports lag weeks behind human infection events and are poor indicators of current risk levels. Effective public health actions depend on interpreting the best available surveillance data and initiating prompt and aggressive intervention when necessary.

# Individual and Community-Based Prevention Education

Without an effective human vaccine, the only way to prevent human infection and disease is to prevent infected mosquitoes from biting people. This can be accomplished by the efforts of community-based integrated vector management programs, as described above, and by effective personal protection behaviors and practices, such as mosquito avoidance, use of personal repellents, and removal of residential mosquito sources. Prevention messages and programs for promoting individual and community measures to decrease risk of WNV infection should be employed.
Public education and risk communication activities should reflect the degree of WNV risk in a community, as noted in Table 2. Messages should acknowledge the seriousness of the disease without promoting undue fear or panic in the target population. Fear-driven messages may heighten the powerlessness people express in dealing with vector-borne diseases like WNV. Messages should be clear and consistent with the recommendations of coordinating agencies. Use plain language, and adapt materials for lower literacy and non-English speaking audiences.

# MESSAGING ABOUT PERSONAL AND HOUSEHOLD PREVENTION

The best way to prevent West Nile virus disease is to avoid mosquito bites.

- Use insect repellents when you go outdoors.
- Wear long sleeves and pants during dawn and dusk.
- Repair or install screens on windows and doors.
- Use air conditioning, if you have it.
- Remove mosquito sources from around your home.

# Individual-Level Actions and Behaviors to Reduce WNV Risk

# Repellents

CDC recommends the use of products containing active ingredients which have been registered by the U.S. Environmental Protection Agency (EPA) for use as repellents applied to skin and clothing. EPA registration of repellent active ingredients indicates the materials have been reviewed and approved for efficacy and human safety when applied according to the instructions on the label.

Repellents for use on skin and clothing: CDC evaluation of information contained in peer-reviewed scientific literature and data available from EPA has identified several EPA-registered products that provide repellent activity sufficient to help people avoid the bites of disease-carrying mosquitoes. Products containing these active ingredients typically provide reasonably long-lasting protection:

- DEET (Chemical Name: N,N-diethyl-m-toluamide or N,N-diethyl-3-methyl-benzamide)
- Picaridin (KBR 3023, Chemical Name: 2-(2-hydroxyethyl)-1-piperidinecarboxylic acid 1-methylpropyl ester)
- Oil of Lemon Eucalyptus* or PMD (Chemical Name: para-Menthane-3,8-diol), the synthesized version of oil of lemon eucalyptus
- IR3535 (Chemical Name: 3-[N-Butyl-N-acetyl]-aminopropionic acid, ethyl ester)

* Note: This recommendation refers to EPA-registered repellent products containing the active ingredient oil of lemon eucalyptus (or PMD). "Pure" oil of lemon eucalyptus (e.g., essential oil) has not received similar, validated testing for safety and efficacy, is not registered with EPA as an insect repellent, and is not covered by this CDC recommendation.

EPA characterizes the active ingredients DEET and Picaridin as "conventional repellents" and Oil of Lemon Eucalyptus, PMD, and IR3535 as "biopesticide repellents", which are derived from natural materials. For more information on repellent active ingredients see (http://www2.epa.gov/mosquitocontrol).

Published data indicate that repellent efficacy and duration of protection vary considerably among products and among mosquito species, and are markedly affected by ambient temperature, amount of perspiration, exposure to water, abrasive removal, and other factors. In general, higher concentrations of active ingredient provide longer duration of protection, regardless of the active ingredient, although concentrations above ~50% do not offer a marked increase in protection time. Products with <10% active ingredient may offer only limited protection, often from 1-2 hours. Products that offer sustained-release or controlled-release (micro-encapsulated) formulations, even with lower active ingredient concentrations, may provide longer protection times.

Repellents for use on clothing: Certain products containing permethrin are recommended for use on clothing, shoes, bed nets, and camping gear, and are registered with EPA for this use. Permethrin is highly effective as an insecticide and as a repellent. Permethrin-treated clothing repels mosquitoes and retains this effect after repeated laundering. The permethrin repellents should be reapplied following the label instructions. Some commercial clothing products are available pretreated with permethrin. Additional information about CDC's repellent recommendations is available at (http://www.cdc.gov/ncidod/dvbid/westnile/RepellentUpdates.htm).

Mosquito bites can be avoided simply by not going outdoors when mosquitoes are biting, and recommendations to avoid outdoor activity when and where high WNV activity levels have been detected are a component of prevention programs. Recommendations to avoid being outdoors from dusk to dawn may conflict with neighborhood social patterns, community events, or the practices of persons without air-conditioning. It is important to communicate that the primary WNV vectors are active from dusk until dawn. Emphasize that repellent use is protective, and that repellents should be used when outdoors during the prime mosquito-biting hours.

# Reduce Mosquito Production at the Home

Encourage residents to regularly empty standing water from items outside homes such as rain gutters, flowerpots, old tires, empty containers, buckets, and wading pools. Water in birdbaths should be changed at least once per week. CDC provides resources that list these and other steps for reducing mosquito populations (www.cdc.gov/ncidod/dvbid/westnile/qa/habitats.htm).

# Community-Level Actions to Reduce WNV Risk

# Community Protection Measures

At the community level, reporting dead birds and nuisance mosquito problems, advocating for organized mosquito abatement, and participating in community mobilization projects to address sources of mosquitoes such as trash, standing water or neglected swimming pools are activities that can help protect individuals and at-risk groups.

# MESSAGING ABOUT COMMUNITY PROTECTION

Using insecticides to control adult mosquitoes, whether applied from trucks at ground level or by aircraft, is a common, safe and effective practice in the United States.

- In many parts of the country, spraying insecticides is part of an annual effort to reduce the number of adult mosquitoes, which in turn reduces the risk of WNV and other diseases spread by mosquitoes.
- Insecticides used in aerial spraying are no different from those sprayed by trucks. Aerial spraying can cover larger areas faster, which helps to reduce the number of infected mosquitoes and slow the spread of illness.
- These products are specifically designed to reduce the number of adult mosquitoes. Insecticides used in mosquito control are reviewed for safety and effectiveness and registered by EPA.

# Communicating About Mosquito Control

Area-wide control of mosquitoes that transmit WNV is most effectively done at a community level. Public understanding and acceptance of emergency adult mosquito control operations using insecticides are critical to their success, especially where these measures are unfamiliar. Questions about the products being used, their safety, and their effects on the environment are common. Improved communication about surveillance and how decisions to use mosquito adulticides are made may help residents weigh the risks and benefits of control.
When possible, provide detailed information regarding the schedule for adulticiding through newspapers, radio, government-access television, the internet, recorded phone messages, or other means your agency uses to successfully communicate with its constituencies.
# Community Engagement and Education
Knowing the norms and cultural practices of communities is invaluable for informing the public, for gaining support and assistance for routine vector-management practices, and for enhancing personal protection during outbreaks. It is essential to know how targeted audiences communicate with each other and with government officials, and to identify the key audiences and how to reach them. Identifying which risk communication principles work best can better prepare the community to effectively provide information during an outbreak. Translating complex scientific jargon into understandable concepts promotes community understanding and acceptance. The following provides a description of selected best practices for targeting high-risk groups, offers suggestions for cultivating partnerships with media and communities, and provides select outreach measures for mobilizing communities.
# Prevention Strategies for High-Risk Groups
Audience members have different disease-related concerns and motivations for action. Proper message targeting (including use of plain language) permits better use of limited communication and prevention resources. The following are some population segments that require specific targeting:
# Persons over age 50
While persons of any age can be infected with WNV, surveillance data indicate that persons over age 50 are at higher risk for severe disease and death due to WNV infection. Collaborate with organizations that have an established relationship with mature adults, such as the AARP, senior centers, and programs for adult learners. Include images of older adults in your promotional material. Identify activities in your area where older adults may be exposed to mosquito bites (e.g., jogging, walking, golf, and gardening).
# Persons with Outdoor Exposure
Data suggest that persons engaged in extensive outdoor work or recreational activities are at greater risk of being bitten by mosquitoes that may be infected with WNV. Develop opportunities to inform people engaged in outdoor activities about WNV. Encourage use of repellent and protective clothing, particularly if outdoor activities occur during dusk-to-dawn hours. Local spokespersons (e.g., union officials, job-site supervisors, golf pros, sports organizations, lawn care professionals, public works officials, gardening experts) may be useful collaborators. Place messages in locations where people engage in outdoor activities (e.g., parks, golf courses, hiking trails).
# Homeless Persons
Extensive outdoor exposure and limited financial resources in this group present special challenges. Application of repellents to exposed skin and clothing may be the most appropriate prevention measure for this population. Work with social service groups in your area to educate and provide repellents to this population segment.
# Residences Lacking Window Screens
The absence of intact window/door screens is a likely risk factor for exposure to mosquito bites. Focus attention on the need to repair screens, and provide access to resources to do so. Partner with community organizations that can assist older persons or others with financial or physical barriers to screen installation or repair.
# Partnerships with Media and the Community
Cultivate relationships with the media (radio, TV, newspaper, web-based news outlets). Obtain media training for at least one member of your staff, and designate that individual as the organization's spokesperson. Develop clear press releases and an efficient system to answer press inquiries. Many communities have heard similar prevention messages repeated for several years. Securing the public's attention when risk levels increase can be a challenge. Evaluate and update WNV prevention messages annually, and test new messages with different population segments to evaluate effectiveness. Develop partnerships with agencies/organizations that have relationships with populations at higher risk (such as persons over 50) or are recognized as community leaders (e.g., churches, service groups). Working through sources trusted by the target audience can heighten the credibility of and attention to messages. Partnerships with businesses that sell materials to fix or install window screens or that sell insect repellent may be useful in some settings (e.g., local hardware stores, grocery stores).
# Community Mobilization and Outreach
Community mobilization can improve education and help achieve behavior change goals. Promote the concept that health departments and mosquito control programs require community assistance to reduce WNV risk. A community task force that includes civic, business, public health, and environmental interests can be valuable in achieving buy-in from various segments of the community and in developing a common message. Community mobilization activities can include clean-up days to eliminate mosquito breeding sites. Community outreach involves presenting messages in person, in addition to media and educational materials, and involving citizens in prevention activities. Hearing the message of personal prevention from community leaders can validate the importance of the disease. Health promotion events and activities reinforce the importance of prevention in a community setting. Selected tools and resources that may enhance your outreach efforts are listed here:
# Social Media
The increasing popularity of social media has been accompanied by a rise in health information-seeking through these interactive outlets. Outreach can be conducted using Twitter, Facebook, YouTube, blogs, and other websites that may reach constituents less connected to traditional media sources. Social media posts can serve as attention-grabbers, providing links that direct users to webpages or other resources with more complete information.
# Online Resources
The Internet has become a primary source of health information for many Americans. Unfortunately, it is impossible to police the vast amount of content available online for quality and accuracy. Encourage constituents to seek advice from credible sources. Make sure local official public health agency websites are clear, accurate, and up to date. Useful information is available from a number of resources: The U.S. Environmental Protection Agency (EPA) is the government's regulatory agency for insecticide and insect repellent use, safety, and effectiveness. Information about mosquito control insecticides and repellents is available at (http://www2.epa.gov/mosquitocontrol).
These include guidance for using repellents safely (http://epa.gov/pesticides/insect/safe.htm) and a search tool to assist in finding a repellent that is right for you (http://cfpub.epa.gov/oppref/insect/#searchform), which allows the user to examine the protection time afforded by registered repellents containing various concentrations of the active ingredients. A number of organizations have developed useful tools and information that can be adapted for local needs. Examples include the American Mosquito Control Association (www.mosquito.org) and the National Pesticide Information Center (NPIC) (www.npic.orst.edu).
# BACKGROUND
The establishment of West Nile (WN) virus across North America has been accompanied by expanded efforts to monitor WN virus transmission activity in many communities. Surveillance programs use various indicators to demonstrate virus activity. These include detecting evidence of virus in dead birds, dead horses, and mosquitoes, and detecting antibody against WN virus in sentinel birds, wild birds, or horses (Reisen & Brault 2007). While all of these surveillance practices can demonstrate the presence of WN virus in an area, few provide reliable, quantitative indices that may be useful in predictive surveillance programs. Only indices derived from a known and quantifiable surveillance effort conducted over time in an area will provide information that adequately reflects trends in virus transmission activity that may be related to human risk. Of the practices listed above, surveillance efforts are controlled and quantifiable only in mosquito- and sentinel chicken-based programs. In these programs, the number of sentinel chicken flocks/number of chickens and the number of mosquito traps set per week are known, which allows calculation of meaningful infection rates that reflect virus transmission activity.
# PREMISE BEHIND DEVELOPING THE VECTOR INDEX
Mosquito-based arbovirus surveillance provides three pieces of information: the variety of species comprising the mosquito community; the density of each species' population (in terms of the number collected in each trap unit of a given trap type); and, if the specimens are tested for the presence of arboviruses, the incidence of the agent in the mosquito population. Taken individually, each parameter describes one aspect of the vector community that may affect human risk, but the individual elements do not give a comprehensive estimate of the number of potentially infectious vectors seeking hosts at a given time in the surveillance area. To express the arbovirus transmission risk posed by a vector population adequately, information from all three parameters (vector species presence, vector species density, vector species infection rate) must be considered. The Vector Index (VI) combines all three of the parameters quantified through standard mosquito surveillance procedures in a single value (Gujral et al. 2007, Kwan et al. 2012). The VI is simply the estimated average number of infected mosquitoes collected per trap night, summed over the key vector species in the area. Summing the VI across the key vector species incorporates the contribution of more than one species and recognizes the fact that West Nile (WN) virus transmission may involve one or more primary vectors and several accessory or bridge vectors in an area.
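To make that definition concrete, the short sketch below computes a VI from trap data. The species names, trap counts, and infection rates are invented for illustration and are not from this guidance; only the calculation itself (density multiplied by infection rate, summed over key vector species) follows the definition above.

```python
# Illustrative sketch: computing a Vector Index (VI) from mosquito
# surveillance data. All species names and numbers are hypothetical.

# Average number of mosquitoes collected per trap night, by key vector species
avg_per_trap_night = {"Culex tarsalis": 24.0, "Culex pipiens": 9.5}

# Estimated proportion of each species infected with WN virus
# (e.g., an infection-rate estimate derived from pooled testing)
infection_rate = {"Culex tarsalis": 0.004, "Culex pipiens": 0.002}

# VI = sum over key vector species of (density x infection rate):
# the estimated number of infected mosquitoes collected per trap night.
vector_index = sum(
    avg_per_trap_night[sp] * infection_rate[sp] for sp in avg_per_trap_night
)
print(f"Vector Index: {vector_index:.3f} infected mosquitoes per trap night")
```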
# Deriving the Vector Index from routine mosquito surveillance data
The Vector Index is expressed as the sum, over the key vector species, of each species' average number of mosquitoes collected per trap night multiplied by its estimated infection rate: VI = Σ (average number of species i collected per trap night × proportion of species i infected).
# Background
Arthropod-borne viruses (arboviruses) are transmitted to humans primarily through the bites of infected mosquitoes, ticks, sand flies, or midges. Other modes of transmission for some arboviruses include blood transfusion, organ transplantation, perinatal transmission, consumption of unpasteurized dairy products, breast feeding, and laboratory exposures. More than 130 arboviruses are known to cause human disease. Most arboviruses of public health importance belong to one of three virus genera: Flavivirus, Alphavirus, and Bunyavirus.
# Clinical description
Most arboviral infections are asymptomatic. Clinical disease ranges from mild febrile illness to severe encephalitis. For the purposes of surveillance and reporting, arboviral disease cases are often categorized into two primary groups based on their clinical presentation: neuroinvasive disease and non-neuroinvasive disease.
Neuroinvasive disease: Many arboviruses cause neuroinvasive disease such as aseptic meningitis, encephalitis, or acute flaccid paralysis (AFP). These illnesses are usually characterized by the acute onset of fever with stiff neck, altered mental status, seizures, limb weakness, cerebrospinal fluid (CSF) pleocytosis, or abnormal neuroimaging. AFP may result from anterior ("polio") myelitis, peripheral neuritis, or post-infectious peripheral demyelinating neuropathy (i.e., Guillain-Barré syndrome). Less common neurological manifestations, such as cranial nerve palsies, also occur.
Non-neuroinvasive disease: Most arboviruses are capable of causing an acute systemic febrile illness (e.g., West Nile fever) that may include headache, myalgias, arthralgias, rash, or gastrointestinal symptoms. Rarely, myocarditis, pancreatitis, hepatitis, or ocular manifestations such as chorioretinitis and iridocyclitis can occur.
# Clinical criteria for diagnosis
A clinically compatible case of arboviral disease is defined as follows:
Neuroinvasive disease
- Fever (≥100.4°F or 38°C) as reported by the patient or a health-care provider, AND
- Meningitis, encephalitis, acute flaccid paralysis, or other acute signs of central or peripheral neurologic dysfunction, as documented by a physician, AND
- Absence of a more likely clinical explanation.
# Non-neuroinvasive disease
- Fever (≥100.4°F or 38°C) as reported by the patient or a health-care provider, AND
- Absence of neuroinvasive disease, AND
- Absence of a more likely clinical explanation.
# Laboratory criteria for diagnosis
- Isolation of virus from, or demonstration of specific viral antigen or nucleic acid in, tissue, blood, CSF, or other body fluid, OR
- Four-fold or greater change in virus-specific quantitative antibody titers in paired sera, OR
- Virus-specific IgM antibodies in serum with confirmatory virus-specific neutralizing antibodies in the same or a later specimen, OR
- Virus-specific IgM antibodies in CSF and a negative result for other IgM antibodies in CSF for arboviruses endemic to the region where exposure occurred, OR
- Virus-specific IgM antibodies in CSF or serum.
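Read as decision rules, these criteria lend themselves to a compact sketch. The hedged Python example below shows the four-fold titer check (titers given as reciprocal dilutions, so 1:16 is 16) and the confirmed/probable split defined in the next section; the function names and inputs are illustrative assumptions, not an official CDC algorithm.

```python
# Illustrative sketch of the arboviral case-classification logic.
# Names and structure are hypothetical, not an official CDC algorithm.

def fourfold_rise(acute_titer: int, convalescent_titer: int) -> bool:
    """True if paired sera show a four-fold or greater change in
    virus-specific antibody titer (e.g., 1:16 rising to 1:64)."""
    return convalescent_titer >= 4 * acute_titer

def classify(meets_clinical_criteria: bool,
             confirmatory_lab_evidence: bool,
             igm_only: bool) -> str:
    """Classify a clinically compatible case of arboviral disease.

    confirmatory_lab_evidence: virus isolation/antigen/nucleic acid,
      a four-fold titer change, IgM plus confirmatory neutralizing
      antibodies, or CSF IgM with negative results for other regional
      arboviruses.
    igm_only: virus-specific IgM in CSF or serum with no other testing.
    """
    if not meets_clinical_criteria:
        return "not a case"
    if confirmatory_lab_evidence:
        return "confirmed"
    if igm_only:
        return "probable"
    return "not a case"

assert fourfold_rise(16, 64)                  # 1:16 -> 1:64 qualifies
assert classify(True, False, True) == "probable"
```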
# Case classification
# Confirmed:
Neuroinvasive disease: A case that meets the above clinical criteria for neuroinvasive disease and one or more of the following laboratory criteria for a confirmed case:
- Isolation of virus from, or demonstration of specific viral antigen or nucleic acid in, tissue, blood, CSF, or other body fluid, OR
- Four-fold or greater change in virus-specific quantitative antibody titers in paired sera, OR
- Virus-specific IgM antibodies in serum with confirmatory virus-specific neutralizing antibodies in the same or a later specimen, OR
- Virus-specific IgM antibodies in CSF and a negative result for other IgM antibodies in CSF for arboviruses endemic to the region where exposure occurred.
# Non-neuroinvasive disease:
A case that meets the above clinical criteria for non-neuroinvasive disease and one or more of the following laboratory criteria for a confirmed case:
- Isolation of virus from, or demonstration of specific viral antigen or nucleic acid in, tissue, blood, CSF, or other body fluid, OR
- Four-fold or greater change in virus-specific quantitative antibody titers in paired sera, OR
- Virus-specific IgM antibodies in serum with confirmatory virus-specific neutralizing antibodies in the same or a later specimen, OR
- Virus-specific IgM antibodies in CSF and a negative result for other IgM antibodies in CSF for arboviruses endemic to the region where exposure occurred.
# Probable:
Neuroinvasive disease: A case that meets the above clinical criteria for neuroinvasive disease and the following laboratory criterion:
- Virus-specific IgM antibodies in CSF or serum, but with no other testing.
# Non-neuroinvasive disease:
A case that meets the above clinical criteria for non-neuroinvasive disease and the laboratory criterion for a probable case:
- Virus-specific IgM antibodies in CSF or serum, but with no other testing.
# Comment:
# Interpreting arboviral laboratory results
- Serologic cross-reactivity. In some instances, arboviruses from the same genus produce cross-reactive antibodies. In geographic areas where two or more closely related arboviruses occur, serologic testing for more than one virus may be needed and the results compared to determine the specific causative virus. For example, such testing might be needed to distinguish antibodies resulting from infections within genera, e.g., flaviviruses such as West Nile, St. Louis encephalitis, Powassan, dengue, or Japanese encephalitis viruses.
- Rise and fall of IgM antibodies. For most arboviral infections, IgM antibodies are generally first detectable 3 to 8 days after onset of illness and persist for 30 to 90 days, but longer persistence has been documented (e.g., up to 500 days for West Nile virus). Serum collected within 8 days of illness onset may not have detectable IgM, and testing should be repeated on a convalescent-phase sample to rule out arboviral infection in those with a compatible clinical syndrome.
- Persistence of IgM antibodies. Arboviral IgM antibodies may be detected in some patients months or years after their acute infection. Therefore, the presence of these virus-specific IgM antibodies may signify a past infection and be unrelated to the current acute illness. Finding virus-specific IgM antibodies in CSF, or a four-fold or greater change in virus-specific antibody titers between acute- and convalescent-phase serum specimens, provides additional laboratory evidence that the arbovirus was the likely cause of the patient's recent illness.
Clinical and epidemiologic history also should be carefully considered.
- Persistence of IgG and neutralizing antibodies. Arboviral IgG and neutralizing antibodies can persist for many years following a symptomatic or asymptomatic infection. Therefore, the presence of these antibodies alone is only evidence of previous infection, and clinically compatible cases with the presence of IgG, but not IgM, should be evaluated for other etiologic agents.
- Arboviral serologic assays. Assays for the detection of IgM and IgG antibodies commonly include enzyme-linked immunosorbent assay (ELISA), microsphere immunoassay (MIA), or immunofluorescence assay (IFA). These assays provide a presumptive diagnosis and should have confirmatory testing performed. Confirmatory testing involves the detection of arboviral-specific neutralizing antibodies utilizing assays such as the plaque reduction neutralization test (PRNT).
- Other information to consider. Vaccination history, detailed travel history, date of onset of symptoms, and knowledge of potentially cross-reactive arboviruses known to circulate in the geographic area should be considered when interpreting results.
# Imported arboviral diseases
Human disease cases due to dengue or yellow fever viruses are nationally notifiable to CDC using specific case definitions. However, many other exotic arboviruses (e.g., chikungunya, Japanese encephalitis, tick-borne encephalitis, Venezuelan equine encephalitis, and Rift Valley fever viruses) are important public health risks for the United States because competent vectors exist that could allow for sustained transmission if imported arboviral pathogens became established. Health-care providers and public health officials should maintain a high index of clinical suspicion for cases of potentially exotic or unusual arboviral etiology, particularly in international travelers. If a suspected case occurs, it should be reported to the appropriate local/state health agencies and CDC.
# Interim Guidance for Avian Mortality Surveillance
Surveillance of dead birds for WNV has proven useful for the early detection of WNV in the United States. In recent months, it has also proven useful for the early detection of highly pathogenic H5N1 avian influenza A (HPAI H5N1, hereafter referred to as H5N1 virus) in Europe. Given the potential for H5N1 to infect wild birds in North America in the future, the following interim guidance is offered to support the efforts of states conducting avian mortality surveillance.
# General Considerations for States Conducting Avian Mortality Surveillance
- If different agencies within a state are separately responsible for conducting surveillance for WNV or H5N1 among wild birds, the sharing of resources, including dead birds submitted for testing, may increase the efficiency of both systems.
- Any dead bird might be infected with any one of a number of zoonotic diseases currently present in the United States (US), such as WNV. However, in countries where H5N1 has been found in captive and wild birds, it frequently has resulted in multiple deaths within and across species, and if H5N1 enters the US, it is likely to result in the death of wild birds. If wild birds in the US are exposed to the virus, both single dead birds and groups of dead birds should be considered potentially infected.
- Avian mortality due to the introduction of H5N1 could occur at any time of the year, whereas WNV is more often detected when mosquitoes are active.
- To date, no human infections of WNV have been confirmed due to contact with live or dead wild birds in outdoor settings.
- Most human H5N1 cases overseas have been associated with close contact with infected poultry or their environment; however, a very small number of cases appear to be related to the handling of infected wild birds or their feathers or feces without the use of proper personal protective equipment (PPE). There is no evidence of H5N1 transmission to humans from exposure to H5N1 virus-contaminated water during swimming; however, this may be theoretically possible. (http://www.who.int/csr/disease/avian_influenza/guidelines/pharmamanagement/en/)
- Although handling infected birds is unlikely to lead to infection, persons who develop an influenza-like illness after handling sick or dead birds should seek medical attention. Their health-care provider should report the incident to public health agencies if clinical symptoms or laboratory test results indicate possible H5N1 or WNV infection.
# Infection Control and Health and Safety Precautions
These guidelines are intended for any person handling dead birds. The risk of infection with WNV from such contact is small. The risk of infection with H5N1 from handling dead birds is difficult to quantify and is likely to vary with each situation. Risk is related to the nature of the work environment, the number of birds to be collected, and the potential for aerosolization of bird feces, body fluids, or other tissues. The most important factor influencing the degree of infection risk from handling wild birds is whether H5N1 has been reported in the area. Local public health officials can be consulted to help in selecting the most appropriate PPE for the situation.
# General Precautions for Collection of Single Dead Birds
(These precautions are applicable to employees as well as the general public.)
When collecting dead birds, the risk of infection from WNV, H5N1, or any other pathogen may be eliminated by avoiding contamination of mucous membranes, eyes, and skin by material from the birds. This can be accomplished by eliminating any direct contact with dead birds via use of the following safety precautions:
- When picking up any dead bird, wear disposable impermeable gloves and place it directly into a plastic bag. Gloves should be changed if torn or otherwise damaged. If gloves are not available, use an inverted double-plastic-bag technique for picking up carcasses or use a shovel to scoop the carcass into a plastic bag.
- In situations in which the bird carcass is in a wet environment, or in other situations in which splashing or aerosolization of viral particles is likely to occur during disposal, safety goggles or glasses and a surgical mask may be worn to protect mucous membranes against splashed droplets or particles.
- Bird carcasses should be double-bagged and placed in a trash receptacle that is secured from access by children and animals. If the carcass will be submitted for testing, hold it in a cool location until pickup or delivery to authorities. Carcasses should not be held in close contact with food (e.g., not in a household refrigerator or picnic cooler).
- After handling any dead bird, avoid touching the face with gloved or unwashed hands.
- Any PPE that was used (e.g., gloves, safety glasses, mask) should be discarded or disinfected* when done, and hands should then be washed with soap and water (or use an alcohol-based hand gel when soap and water are not available).
(http://www.cdc.gov/cleanhands/)
- If possible, before disposing of the bird, members of the public may wish to consult with their local animal control, health, wildlife, or agricultural agency or other such entity to inquire whether dead bird reports are being tallied and whether the dead bird in question might be a candidate for WNV or H5N1 testing.
# Additional Precautions for Personnel Tasked with Collecting Dead Birds in Higher-Risk Settings
(e.g., when collecting large numbers of birds or working in confined indoor spaces, particularly once H5N1 has been confirmed in an area)
- Minimize any work activities that generate airborne particles. For example, during the cleanup phase of the bird removal, avoid washing surfaces with pressurized water or cleaner (i.e., pressure washing), which could theoretically aerosolize H5N1 viral particles that could then be inhaled. If aerosolization is unavoidable, the use of a filtering face-piece respirator (e.g., N95) would be prudent, particularly while handling large quantities of dead birds repeatedly as part of regular work requirements.
- If using safety glasses, a mask, or a respirator, do not remove them until after gloves have been removed and hands have been washed with soap and water (or use an alcohol-based hand gel when soap and water are not available). After PPE has been removed, hands should immediately be cleaned again (http://www.cdc.gov/cleanhands/). Personal protective equipment worn (e.g., gloves, mask, or clothing) should be disinfected* or discarded.
- Discuss appropriate biosafety practices and PPE use with your employer.
# *Recommendations for PPE Disinfection
For machine-washable, reusable PPE: Disinfect PPE in a washing machine with detergent in a normal wash cycle. Adding bleach will increase the speed of viral inactivation, as will hot water, but detergent alone in cold water will be effective. Follow manufacturer recommendations for drying the PPE. Non-machine-washable, reusable PPE should be cleaned following the manufacturer's recommendations for cleaning.
# Laboratory Biosafety Recommendations
Laboratory handling of routine diagnostic specimens of avian carcasses requires a minimum of BSL-2 laboratory safety precautions. However, if either WNV or H5N1 infection of the specimens is suspected on the basis of previous surveillance findings, at a minimum BSL-3 precautions are advisable. Consult your institutional biosafety officer for specific recommendations. Biosafety levels are described at www.cdc.gov/od/ohs/biosfty/bmbl4/bmbl4s3.htm.
# 1993 Sexually Transmitted Diseases Treatment Guidelines
These guidelines for the treatment of patients with sexually transmitted diseases (STDs) were developed by staff members of CDC after consultation with a group of invited experts who met in Atlanta on January 19-21, 1993. Included are new recommendations for single-dose oral therapy for gonococcal infections, chlamydial infections, and chancroid; new regimens for the treatment of bacterial vaginosis (BV) and outpatient management of pelvic inflammatory disease (PID); a new patient-applied medication for treatment of genital warts; and a revised approach to the management of victims of sexual assault. This report includes new sections on subclinical human papillomavirus (HPV) infections and cervical cancer screening for women who attend STD clinics or who have a history of STDs. These recommendations also include expanded sections on the management of patients with asymptomatic human immunodeficiency virus (HIV) infection; vulvovaginal candidiasis (VVC); STDs among patients coinfected with HIV; and STDs among infants, children, and pregnant women.
# INTRODUCTION
Physicians and other health-care providers have a critical role in the effort to prevent and treat sexually transmitted diseases (STDs). These recommendations for the treatment of STDs are intended to assist with that effort. They were developed by CDC staff members in consultation with a group of invited experts. This report was produced through a multi-stage process. Beginning in the spring of 1992, CDC personnel systematically reviewed the literature on each of the major STDs, focusing on data and reports that have become available since the 1989 STD Treatment Guidelines were published. Background papers were written, and tables of evidence were constructed summarizing the type of study (e.g., randomized controlled trial, case series), the study population and setting, the treatments or other interventions, the outcome measures assessed, the reported findings, and the weaknesses and biases in study design and analysis. For these reviews and tables, published abstracts and peer-reviewed journal articles were considered. CDC personnel then developed a draft document based on those reviews. In January 1993, invited consultants assembled in Atlanta for a 3-day meeting. CDC personnel presented the key questions on STD treatment suggested by their literature reviews and presented the data available to answer those questions. Where relevant, the questions focused on four principal outcomes of STD therapy: a) microbiologic cure, b) alleviation of signs and symptoms, c) prevention of sequelae, and d) prevention of transmission. The consultants then assessed whether the questions identified were the appropriate ones, ranked them in order of priority, and attempted to arrive at answers using the available evidence. In addition, the consultants evaluated the quality of evidence supporting the answers based on the number and types of studies and the quality of those studies. In several areas, the process diverged from that described above. The section on STD/HIV prevention guidelines was reviewed for comment by experts who had not been present at the January meeting, as well as by additional experts on STD/HIV prevention at CDC. The recommendations for STD screening during pregnancy were developed after CDC staff reviewed the published recommendations of other expert groups that were convened by CDC and other organizations.
The sections on HIV infection and early intervention and on hepatitis B virus (HBV) also are principally a compilation of recommendations developed by other experts and are provided in this report for the convenience of those who use this document. Throughout this document, the evidence used as the basis for specific recommendations is briefly discussed. More comprehensive, annotated discussions of such evidence will appear in background papers that will be submitted for publication in 1994. These recommendations were developed in consultation with experts whose experience is primarily with the treatment of patients in public STD clinics. These recommendations also should be applicable to other patient-care settings, including family planning clinics, private doctors' offices, and other primary-care facilities. When using these guidelines, consideration should be given to the disease prevalence and to other characteristics of the practice setting. These recommendations should not be construed as standards or as inflexible rules, but as a source of clinical guidance within the United States. These recommendations focus on the treatment and counseling of individual patients and do not address other community services and interventions that also play important roles in STD/HIV prevention. Clinical and laboratory diagnoses are described when such information is related to therapy. For a more comprehensive discussion of diagnosis, refer to CDC's STD Clinical Practice Guidelines, 1991 (1).
# STD/HIV PREVENTION GUIDELINES
Prevention and control of STDs are based on four major concepts: first, education of those at risk on the means for reducing the risk for transmission; second, detection of asymptomatically infected individuals and of persons who are symptomatic but unlikely to seek diagnostic and treatment services; third, effective diagnosis and treatment of those who are infected; and fourth, evaluation, treatment, and counseling of sex partners of persons who have an STD. Although this document deals largely with secondary prevention, namely the clinical aspects of STD control, primary prevention of STDs is based on changing the sexual behaviors that place patients at risk. Physicians and other health-care providers have an important role in the prevention of STDs. In addition to interrupting transmission by treating persons who have bacterial and parasitic STDs, clinicians have the opportunity to provide patient education and counseling and to participate in identifying and treating infected sex partners.
# Prevention Methods
# Condoms
When used consistently and correctly, condoms are very effective in preventing a variety of STDs, including HIV infection. Multiple cohort studies, including those of serodiscordant couples, have demonstrated a strong protective effect of condom use against HIV infection. Condoms are regulated as medical devices and are subject to random sampling and testing by the Food and Drug Administration (FDA). Each latex condom manufactured in the United States is tested electronically for holes before packaging. Condom breakage rates during use are low in the United States (≤2 per 100 condoms tested). Condom failure usually results from inconsistent or incorrect use rather than from condom breakage. Patients should be advised that condoms must be used consistently and correctly to be effective in preventing STDs. Patients should also be instructed in the correct use of condoms. The following recommendations ensure the proper use of condoms:
- Use a new condom with each act of intercourse.
- Carefully handle the condom to avoid damaging it with fingernails, teeth, or other sharp objects.
- Put the condom on after the penis is erect and before any genital contact with the partner.
- Ensure that no air is trapped in the tip of the condom.
- Ensure that there is adequate lubrication during intercourse, possibly requiring the use of exogenous lubricants.
- Use only water-based lubricants (e.g., K-Y Jelly™ or glycerine) with latex condoms (oil-based lubricants that can weaken latex should never be used).
- Hold the condom firmly against the base of the penis during withdrawal, and withdraw while the penis is still erect to prevent slippage.
# Condoms and Spermicides
The effectiveness of spermicides in preventing HIV transmission is unknown. No data exist to indicate that condoms lubricated with spermicides are more effective than other lubricated condoms in protecting against the transmission of HIV infection and other STDs. Therefore, latex condoms with or without spermicides are recommended.
# Female Condoms
Laboratory studies indicate that the female condom (Reality™), a lubricated polyurethane sheath with a ring on each end that is inserted into the vagina, is an effective mechanical barrier to viruses, including HIV. Aside from a small study of trichomoniasis, no clinical studies have been completed to evaluate protection from HIV infection or other STDs. However, an evaluation of the female condom's effectiveness in pregnancy prevention was conducted during a 6-month period for 147 women in the United States. The estimated 12-month failure rate for pregnancy prevention among the 147 women was 26%.
# Vaginal Spermicides, Sponges, Diaphragms
As demonstrated in several cohort studies, vaginal spermicides (i.e., film, gel, suppositories; contraceptive foam has not been studied) used alone without condoms reduce the risk for cervical gonorrhea and chlamydia, but protection against HIV infection has not been established in human studies. The vaginal contraceptive sponge protects against cervical gonorrhea and chlamydia but, as evidenced by cohort studies, increases the risk for candidiasis. Diaphragm use has been demonstrated to protect against cervical gonorrhea, chlamydia, and trichomoniasis, but only in case-control and cross-sectional studies; no cohort studies have been performed. Gonorrhea and chlamydia among women usually involve the cervix as a portal of entry, whereas other STD pathogens (including HIV) may infect women through the vagina or vulva, as well as the cervix. Protection of women against HIV infection should not be assumed from the use of vaginal spermicides, vaginal sponges, or diaphragms. The role of spermicides, sponges, and diaphragms in preventing STDs among men has not been studied.
# Nonbarrier Contraception, Surgical Sterilization, Hysterectomy
Women who are not at risk for pregnancy may incorrectly perceive themselves to be at no risk for STDs, including HIV infection. Nonbarrier contraceptive methods offer no protection against HIV or other STDs. Women using hormonal contraception (oral contraceptives, Norplant™, Depo-Provera™), who have been surgically sterilized, or who have had hysterectomies should be counseled regarding the use of condoms and the risk for STDs, including HIV infection.
# Prevention Messages
Preventing the spread of STDs requires that persons at risk for transmitting or acquiring infections change their behaviors. When risks have been identified, the health-care provider has an opportunity to deliver prevention messages.
Counseling skills (i.e., respect, compassion, and a nonjudgmental attitude) are essential to the effective delivery of prevention messages. Techniques that can be effective in developing rapport with the patient include using open-ended questions, using language that the patient understands, and reassuring the patient that treatment will be provided regardless of considerations such as ability to pay, citizenship or immigration status, language spoken, or lifestyle. Prevention messages should be tailored to the patient, with consideration given to his or her specific risks. Messages should include a description of measures, such as the following, that the person can take to avoid acquiring or transmitting STDs:
- The most effective way to prevent sexual transmission of HIV infection and other STDs is to avoid sexual intercourse with an infected partner.
- If a person chooses to have sexual intercourse with a partner whose infection status is unknown or who is infected with HIV or other STDs, men should use a new latex condom with each act of intercourse.
- When a male condom cannot be used, couples should consider using a female condom.
# Injection Drug Users
Prevention messages appropriate for injection drug users are the following:
- Enroll or continue in a drug treatment program.
- Do not, under any circumstances, use injection equipment (needles, syringes) that has been used by another person.
- Persons who continue to use injection equipment that has been used by other persons should first clean the equipment with bleach and water. (Disinfecting with bleach does not sterilize the equipment and does not guarantee that HIV is inactivated. However, thoroughly and consistently cleaning injection equipment with bleach should reduce the rate of HIV transmission when equipment is shared.)
# HIV Prevention Counseling
Knowledge of one's HIV status and appropriate counseling are thought to play an important role in initiating behavior change. Counseling associated with HIV testing has two main components: pretest and posttest counseling. During pretest counseling, the clinician should conduct a personalized risk assessment, explain the meaning of positive and negative test results, ask for informed consent for the HIV test, and help the person develop a realistic, personalized risk-reduction plan. During posttest counseling, the clinician should inform the patient of the results, review the meaning of the results, and reinforce prevention messages. If the patient is HIV positive, posttest counseling should include referral for follow-up medical services and for social and psychological services, if needed. HIV-seronegative persons at continuing risk for HIV infection also may benefit from referral for additional counseling and prevention services. HIV counseling is considered to be an important HIV-prevention strategy, although its efficacy in reducing risk behavior is still under evaluation. By ensuring that counseling is empathic and "client-centered," clinicians will be able to develop a realistic appraisal of the person's risk and help him or her develop a specific and realistic HIV-prevention plan (2).
# Partner Notification and Management of Sex Partners
Patients with STDs should ensure that their sex partners, including those without symptoms, are referred for evaluation. Providers should be prepared to assist in that effort. In most circumstances, partners of patients with STDs should be examined.
When a diagnosis of a treatable STD is considered likely, appropriate antibiotics should be administered even though there may be no clinical signs of infection and before laboratory test results are available. In most states, the local or state health department can assist in notifying the partners of patients with selected STDs, especially HIV, syphilis, gonorrhea, and chlamydia. Breaking the chain of transmission is crucial to STD control. For treatable STDs, further transmission and reinfection can be prevented by referral of sex partners for diagnosis, treatment, and counseling. The following two strategies are used for partner notification: a) patient referral (index patients notify their partners), and b) provider referral (partners named by infected patients are notified and counseled by health department staff). When a physician refers an infected person to a local or state health department, trained professionals may interview the patient to obtain the names of and locating information about all of his or her sex partners. Every health department protects the privacy of patients in partner notification activities. Because of the advantage of confidentiality, many patients prefer that public health officials notify partners. If a patient with HIV infection refuses to notify partners while continuing to place them at risk, the physician has an ethical and legal responsibility to inform persons that they are at risk for HIV infection. This duty to warn may be most applicable to primary-care physicians, who often have knowledge about a patient's social and familial relationships. The decision to invoke the duty-to-warn measure should be a last resort, applicable only in cases in which all efforts to persuade the patient to disclose positive test results to those who need to know have failed. Although compelling ethical, theoretical, and public health reasons exist to undertake partner notification, the efficacy of partner notification as an STD prevention strategy is under evaluation, and its effectiveness may be disease-specific. Clinical guidelines for sex partner management and recommendations for partner notification for specific STDs are included for each STD addressed in this report.
# Reporting and Confidentiality
The accurate identification and timely reporting of STDs form an integral part of successful disease control. Reporting assists local health authorities in identifying sex partners who may be infected. Reporting also is important for assessing morbidity trends. STD/HIV and acquired immunodeficiency syndrome (AIDS) cases should be reported in accordance with local statutory requirements and in a timely manner. Syphilis, gonorrhea, and AIDS are reportable diseases in every state. The requirements for reporting other STDs and asymptomatic HIV infection differ from state to state, and clinicians should be familiar with local STD reporting requirements. Reporting may be provider- and/or laboratory-based. Clinicians who are unsure of local reporting requirements should seek advice from local health departments or state STD programs. STD and HIV reports are held in strictest confidence and in many jurisdictions are protected by statute from subpoena. Further, before any follow-up of a positive STD test is conducted by program representatives, these persons consult with the patient's health-care provider to verify the diagnosis and treatment.
# SPECIAL POPULATIONS
# Pregnant Women
Intrauterine or perinatally transmitted STDs can have fatal or severely debilitating effects on a fetus.
Pregnant women and their sex partners should be questioned about STDs and should be counseled about the possibility of neonatal infections.
# Recommended Screening Tests
The following screening tests are recommended for pregnant women:
- A serologic test for syphilis. All women should be screened serologically for syphilis during the early stages of pregnancy. In populations in which utilization of prenatal care is not optimal, rapid plasma reagin (RPR)-card test screening and treatment, if that test is reactive, should be performed at the time a pregnancy is diagnosed. For patients at high risk, screening should be repeated in the third trimester and again at delivery. (Some states mandate screening all women at delivery.) No infant should be discharged from the hospital without the syphilis serologic status of its mother having been determined at least once during pregnancy and, preferably, again at delivery. Any woman who delivers a stillborn infant after 20 weeks' gestation should be tested for syphilis.
- A serologic test for hepatitis B surface antigen (HBsAg).
- A test for Neisseria gonorrhoeae.
- A test for Chlamydia trachomatis. Pregnant women at increased risk (<25 years of age, or those with a new partner or more than one partner) should be tested and treated, if necessary, during the third trimester to prevent maternal postnatal complications and chlamydial infection among infants. Screening during the first trimester might permit prevention of adverse effects of chlamydia during pregnancy. However, the evidence for adverse effects during pregnancy is minimal. If screening is performed only during the first trimester, a longer period exists for acquiring infection before delivery.
- A test for HIV infection. Patients with risk factors for HIV or with a high-risk sex partner should be tested for HIV infection. Some authorities recommend offering HIV tests to all pregnant women, particularly in areas of high HIV seroprevalence. Appropriate counseling should be provided, and informed consent for HIV testing should be obtained.
# Other Issues
Other STD-related issues to be considered are as follows:
- Pregnant women with primary genital herpes, HBV, primary cytomegalovirus (CMV) infection, or group B streptococcal infection, and women who have syphilis and who are allergic to penicillin, may need to be referred to an expert for management.
- In the absence of lesions during the third trimester, routine serial cultures for herpes simplex virus (HSV) are not indicated for women with a history of recurrent genital herpes. However, obtaining cultures from such women at the time of delivery may be useful in guiding neonatal management. "Prophylactic" caesarean section is not indicated for women who do not have active genital lesions at the time of delivery.
- The presence of genital warts is not considered an indication for caesarean section.
For a more detailed discussion of these issues, as well as of infections not transmitted sexually, refer to Guidelines for Perinatal Care (3).
NOTE: The sources for these guidelines for screening of pregnant women include Guide to Clinical Preventive Services (4), Guidelines for Perinatal Care (3), and Recommendations for the Prevention and Management of Chlamydia trachomatis Infections, 1993 (5). These sources are not entirely consistent in their recommendations.
The Guide to Clinical Preventive Services recommends routine testing for gonorrhea at the first prenatal visit, with repeat testing for those at increased risk, and selective screening for chlamydia at the first prenatal visit. The Guidelines for Perinatal Care does not specifically recommend screening for either gonorrhea or chlamydia, but recommends screening for STDs in the third trimester for women at risk. The Recommendations for the Prevention and Management of Chlamydia trachomatis Infections, 1993 recommend screening for chlamydia during the third trimester for all pregnant women <25 years of age or for any woman with a new sex partner or multiple partners. Recommendations to screen pregnant women for STDs are based on disease severity and sequelae, prevalence in the population, costs, medical/legal considerations (including state laws), and other factors. The screening recommendations in this report are more comprehensive (i.e., if followed, more women will be screened for more STDs than would be screened by following other recommendations) and are compatible with other CDC guidelines. Physicians should select a screening strategy compatible with their practice population and setting. # Adolescents Health-care providers who provide care for patients with sexually transmitted infections should be aware of several issues that relate specifically to adolescents. The rates of many STDs are highest among adolescents; e.g., the rate of gonorrhea is highest among persons 15-19 years of age. Clinic-based studies have demonstrated that the prevalence of chlamydial infections, and possibly of HPV infections, also is highest among adolescents. All adolescents in the United States can consent to the confidential diagnosis and treatment of STDs. Medical care for these conditions can be provided to adolescents without parental consent or knowledge. Furthermore, in many states adolescents can consent to HIV counseling and testing. The style and content of counseling and health education should be adapted for adolescents. Discussions should be appropriate for the patient's developmental level and should identify risky behaviors, such as sex and drug use behaviors. Care and counseling should be direct and nonjudgmental. # Children Management of children with STDs requires close cooperation between the clinician, laboratory, and child-protection authorities. Investigations, when indicated, should be initiated promptly. Some diseases, such as gonorrhea, syphilis, and chlamydia, if acquired after the neonatal period, are almost 100% indicative of sexual contact. For other diseases, such as HPV infection and vaginitis, the association with sexual contact is not as clear (see Sexual Assault and STDs). # Persons with HIV Infection The management of patients infected with HIV and patients infected with both HIV and other STDs presents complex clinical and behavioral issues. For that reason, these issues are addressed throughout this report (see HIV Infection and Early Intervention and specific disease sections). Because of its effects on the immune system, HIV infection may alter the natural histories of many STDs and the effect of antimicrobial therapy. Such effects are likely to occur as the degree of immunosuppression advances; frequent or severe episodes of some STDs or failure to respond appropriately to therapy should lead the health-care provider to consider HIV infection as a cause. Close clinical follow-up of patients infected with both HIV and STDs is imperative. 
STD infection among patients with or without HIV is a sentinel event, often indicating unprotected sexual activity. Further patient counseling is needed in such situations.
# HIV INFECTION AND EARLY INTERVENTION
Infection with HIV produces a spectrum of disease that progresses from no apparent illness to AIDS as a late manifestation. The pace of disease progression is variable. The median time between infection with HIV and the development of AIDS among adults is 10 years, with a range from a few months to ≥12 years. Most adults and adolescents infected with HIV remain symptom-free for long periods, but viral replication can be detected in asymptomatic persons and increases substantially as the immune system deteriorates. Most people who are infected with HIV will eventually have symptoms related to the infection. In cohort studies of adults infected with HIV, data indicated that symptoms developed in 70%-85% of infected adults, and AIDS developed in 55%-62%, within 12 years after infection. Additional cases are expected to occur among those who have remained AIDS-free for >12 years. Greater awareness of risky behaviors by both patients and health-care providers has led to increased testing for HIV and earlier diagnosis of HIV infection, often before symptoms develop (though emotional or psychological problems may occur). Such early identification of HIV infection is important for several reasons. Treatments are available to slow the decline of immune system function. Persons infected with HIV who have altered immune function are also at increased risk for infections such as tuberculosis (TB), bacterial pneumonia, and Pneumocystis carinii pneumonia (PCP), for which preventive measures are available. Because of its effect on the immune system, HIV affects the diagnosis, evaluation, treatment, and follow-up of many other diseases and may affect the efficacy of antimicrobial therapy for some STDs. Close clinical follow-up after treatment for STDs is imperative. During early infection, persons with HIV and their families can be educated about the disease, become linked with a support network that addresses their needs, and be connected with care systems effective in maintaining good health and delaying the onset of symptoms. Early diagnosis also offers the opportunity for counseling and for assistance in preventing the transmission of HIV infection to others. For the purpose of these recommendations, early intervention for HIV is defined as care for persons infected with HIV who are without symptoms. However, recently detected HIV infection may not have been recently acquired. Persons newly diagnosed with HIV may be at many different stages of the infection. Therefore, early intervention also involves assuming the responsibility for coordinating care and for arranging access to resources necessary to meet the medical, psychological, and social needs of persons with more advanced HIV infection.
# Diagnostic Testing for HIV-1 and HIV-2
HIV infection is most often diagnosed by using HIV-1 antibody tests. Antibody testing begins with a sensitive screening test such as the enzyme-linked immunosorbent assay (ELISA) or a rapid assay. If confirmed by Western blot or another supplemental test, a positive antibody test means that a person is infected with HIV and is capable of transmitting the virus to others. HIV antibody is detectable in ≥95% of patients within 6 months of infection.
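As a minimal illustration of the screening-then-confirmation sequence just described, the hedged sketch below encodes the two-step interpretation; the function name and result strings are illustrative assumptions, not part of the guidelines.

```python
# Illustrative sketch of two-step HIV-1 antibody test interpretation:
# a sensitive screen (ELISA or rapid assay) followed by a more specific
# confirmatory test (Western blot or IFA). Names are hypothetical.
from typing import Optional

def interpret_hiv1_antibody_tests(screening_reactive: bool,
                                  confirmatory_positive: Optional[bool]) -> str:
    """Interpret HIV-1 antibody results per the two-step algorithm."""
    if not screening_reactive:
        # A negative screen usually means no infection, but cannot rule
        # out infection acquired <6 months before the test.
        return "negative (repeat later if recent exposure is suspected)"
    if confirmatory_positive is None:
        return "screening reactive; confirmatory test required"
    if confirmatory_positive:
        # Confirmed positive: infected and capable of transmitting HIV.
        return "positive (confirmed)"
    return "screening reactive, confirmation not positive; follow up"

print(interpret_hiv1_antibody_tests(True, True))   # positive (confirmed)
```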
Although a negative antibody test usually means a person is not infected, antibody tests cannot rule out infection that occurred <6 months before the test. Because there is transplacental passage of maternal HIV antibody, antibody tests for HIV are expected to be positive in the serum of both infected and uninfected infants born to a seropositive mother. Passively acquired HIV antibody falls to undetectable levels among most infants by 15 months of age. A definitive determination of HIV infection for an infant <15 months of age should be based either on the presence of antibody to HIV in conjunction with a compatible immunologic profile and clinical course or on laboratory evidence of HIV in blood or tissues by culture, nucleic acid detection, or antigen detection. Specific recommendations for the diagnostic testing of HIV are listed below:
- Informed consent must be obtained before an HIV test is performed. Some states require written consent. (See HIV Prevention Counseling for a discussion of pretest and posttest counseling.)
- Positive screening tests for HIV antibody must be confirmed by a more specific confirmatory test (either the Western blot assay or an indirect immunofluorescence assay) before being considered definitive for confirming HIV infection.
- Persons with positive HIV tests must receive medical and psychosocial evaluation and monitoring services, or be referred for these services.
The prevalence of HIV-2 in the United States is extremely low, and CDC does not recommend routine testing for HIV-2 in settings other than blood centers, unless demographic or behavioral information suggests that HIV-2 infection might be present. Those at risk for HIV-2 infection include persons from a country in which HIV-2 is endemic and the sex partners of such persons. (As of July 1992, HIV-2 was endemic in parts of West Africa, and an increased prevalence of HIV-2 had been reported in Angola, France, Mozambique, and Portugal.) Additionally, testing for HIV-2 should be conducted when there is clinical evidence or suspicion of HIV disease in the absence of a positive test for antibodies to HIV-1 (6).
# Counseling for Patients with HIV Infection
Behavioral and psychosocial services are an integral part of HIV early intervention. Patients usually experience emotional distress when first informed of a positive HIV test result, and also later when notified of changes in immunologic markers, when antiviral or prophylactic therapy is initiated, and when symptoms develop. Patients face several major adaptive challenges: a) accepting the possibility of a curtailed life span, b) coping with others' reactions to a stigmatizing illness, c) developing strategies for maintaining physical and emotional health, and d) initiating changes in behavior to prevent HIV transmission. Many patients also require assistance with making reproductive choices, gaining access to health services and health insurance, and confronting employment discrimination. Interrupting HIV transmission depends upon changes in behavior by those persons at risk for transmitting or acquiring infection. Though some viral culture studies suggest that antiviral treatment reduces viral burden, clinical data are insufficient to determine whether therapy might reduce the probability of transmission. Infected persons, as potential sources of new infections, must receive extra attention and support to help break chains of transmission and to prevent infection of others.
Targeting behavior-change programs toward HIV-infected persons and their sex partners, or toward those with whom they share needles, is an important adjunct to current AIDS prevention efforts. Specific recommendations for counseling patients with HIV infection are listed below:
- Persons who test positive for HIV antibody should be counseled by a person who is able to discuss the medical, psychological, and social implications of HIV infection.
- Appropriate social support and psychological resources should be available, either on site or through referral, to assist patients in coping with emotional distress.
- Persons who continue to be at risk for transmitting HIV should receive assistance in changing or avoiding behaviors that can transmit infection to others.

# Initial Evaluation and Planning for Care

Practice settings for offering early HIV care vary depending upon local resources and needs. Primary-care providers and outpatient facilities must ensure that appropriate resources are available for each patient and must avoid fragmentation of care; it is preferable for persons with HIV infection to receive care from a single source that is able to provide comprehensive care for all stages of HIV infection. However, the limited availability of such resources often results in the need to coordinate care among outpatient, inpatient, and specialist providers in different locations. Because of the progressive nature of HIV and the increased risk for bacterial infections, including TB, even before HIV infection becomes advanced, it is essential to establish specific provisions for handling the medical, psychological, and social problems likely to arise at any stage of infection. An important component of early intervention is effective linkage with referral settings where off-hours care and specialty services are available.

Development of an appropriate plan for care involves the following:
- Identification of patients in need of immediate medical care (e.g., patients with symptomatic HIV infection or emotional crisis) and of those in need of antiretroviral therapy or prophylaxis for opportunistic infections (e.g., PCP).
- Evaluation for the presence of diseases associated with HIV, such as TB and STDs.
- Administration of recommended vaccinations.
- Case management or referral for case management.
- Counseling (see Counseling for Patients with HIV Infection).

The CD4+ T-lymphocyte count is the best laboratory indicator of clinical progression, and comprehensive management strategies for HIV infection are typically stratified by CD4+ count. Either the absolute number or the percentage of CD4+ T cells may be determined. The CD4+ percentage is more consistent across successive measurements for the same person, and less affected by delays in specimen processing, than the absolute CD4+ count. However, most clinical trials have used the absolute CD4+ count to evaluate the need for and timing of therapeutic interventions. Patients with CD4+ counts >500/µL usually do not demonstrate evidence of clinical immunosuppression. Patients with 200-500 CD4+ cells/µL are more likely to develop HIV-related symptoms and to require medical intervention. Patients with CD4+ counts <200 cells/µL, or with constitutional symptoms (e.g., unexplained fever >37.8 C for ≥2 weeks), are at increased risk for developing complicated HIV disease. Such patients should be managed in a comprehensive treatment setting with access to specialty resources and hospitalization.
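The stratification just described amounts to a simple threshold rule. The sketch below is illustrative only: the function name, parameter names, and return labels are invented here and are not a clinical decision tool.

```python
def stratify_by_cd4(cd4_cells_per_ul: int,
                    constitutional_symptoms: bool = False) -> str:
    """Map an absolute CD4+ count to the management strata described above."""
    if constitutional_symptoms or cd4_cells_per_ul < 200:
        # Increased risk for complicated HIV disease: manage in a
        # comprehensive treatment setting with specialty resources.
        return "comprehensive treatment setting"
    if cd4_cells_per_ul <= 500:
        # More likely to develop HIV-related symptoms and to require
        # medical intervention.
        return "closer monitoring; intervention may be required"
    # >500 cells/uL: usually no evidence of clinical immunosuppression.
    return "routine follow-up"
```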
Providers unable to offer therapeutic management of HIV may use the initial evaluation to identify the need for prompt referral to appropriate resources. The initial evaluation of HIV-positive patients should include the following essential components:
- A detailed history, including a sexual history, a substance abuse history, and a review of systems for specific HIV-related symptoms.
- A physical examination; for females, this examination should include a gynecologic examination.
- For females, testing for N. gonorrhoeae and C. trachomatis, a Papanicolaou (Pap) smear, and a wet mount examination of vaginal secretions.
- A syphilis serology.
- A CD4+ T-lymphocyte analysis.
- Complete blood and platelet counts.
- A purified protein derivative (PPD) tuberculin skin test by the Mantoux method and anergy testing with two delayed-type hypersensitivity (DTH) antigens (Candida, mumps, or tetanus toxoid) administered by the Mantoux method or a multipuncture device.
- A thorough psychosocial evaluation, including ascertainment of behavioral factors indicating risk for transmitting HIV and elucidation of information about any partners who should be notified about possible exposure to HIV.

# Preventive Therapy for TB

Studies conducted among persons with and without HIV infection suggest that HIV infection can depress tuberculin reactions before signs and symptoms of HIV infection develop. Cutaneous anergy (defined as a skin test response of ≤3 mm to all DTH antigens) may be present among ≥10% of asymptomatic persons with CD4+ counts >500 cells/µL and among >60% of persons with CD4+ counts <200 cells/µL. HIV-positive persons with a PPD reaction of ≥5 mm induration are considered to be infected with M. tuberculosis and should be evaluated for preventive treatment with isoniazid after active TB has been excluded. Anergic persons whose risk for tuberculous infection is estimated to be ≥10%, based on available prevalence data, also should be considered for preventive therapy. For further details regarding evaluation of patients for TB, refer to Purified Protein Derivative (PPD)-Tuberculin Anergy and HIV Infection: Guidelines for Anergy Testing and Management of Anergic Persons at Risk of Tuberculosis (7).

Preliminary results from a randomized clinical trial suggest that treatment with isoniazid is effective for preventing active TB among HIV-infected persons. The usual regimen is isoniazid 10 mg/kg daily, up to a maximum adult dose of 300 mg daily. Twelve months of isoniazid preventive treatment is recommended for persons with HIV infection. For further details regarding preventive therapy for TB, refer to The Use of Preventive Therapy for Tuberculous Infection in the United States (8) and Management of Persons Exposed to Multidrug-Resistant Tuberculosis (9).

# Recommended Immunizations for Adults and Adolescents

Specific recommendations for immunization of persons infected with HIV are listed below:
- Pneumococcal vaccine and annual influenza vaccine should be administered.
- Persons who are at increased risk for acquiring HBV infection and who lack evidence of immunity may receive a three-dose schedule of hepatitis B vaccine, with postvaccination serologic testing between 1 and 6 months after the vaccination series.

Recommendations for vaccinating HIV-infected persons are based on expert opinion and the consensus of the Advisory Committee on Immunization Practices (ACIP).
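Two of the dosing and scheduling rules in this section lend themselves to a brief worked sketch. The isoniazid calculation follows the 10 mg/kg daily rule with its 300-mg adult cap; the hepatitis B dates assume the conventional 0-, 1-, and 6-month intervals, which is an assumption on our part (the text specifies only a "three-dose schedule" with serologic testing 1-6 months afterward). Function names are invented for illustration.

```python
from datetime import date, timedelta

def isoniazid_daily_dose_mg(weight_kg: float) -> float:
    """Isoniazid preventive therapy: 10 mg/kg daily, up to 300 mg (adult cap)."""
    return min(10.0 * weight_kg, 300.0)

def hepatitis_b_series(first_dose: date) -> list:
    """Approximate dates for a three-dose series (assumed 0/1/6-month spacing)."""
    return [first_dose,
            first_dose + timedelta(days=30),    # ~1 month after dose 1
            first_dose + timedelta(days=182)]   # ~6 months after dose 1

# Example: a 25-kg child receives 250 mg; a 70-kg adult is capped at 300 mg.
assert isoniazid_daily_dose_mg(25) == 250.0
assert isoniazid_daily_dose_mg(70) == 300.0
```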
No clinical data exist to document the efficacy of inactivated vaccines among HIV-infected persons, and pneumococcal vaccine failures have been reported. However, the use of inactivated vaccines may be beneficial for persons with HIV infection, and there is no evidence that these vaccines are harmful. Immunogenicity studies have suggested a generally poorer response among HIV-infected persons, with higher response rates among asymptomatic persons than among those with advanced HIV disease.

Current evidence indicates that HIV infection does not increase susceptibility to HBV, nor does it increase the severity of clinical disease. The presence of HIV infection is not, by itself, an indication for hepatitis B vaccine, but HIV-infected persons are at increased risk for becoming chronic carriers after hepatitis B infection. Because the routes of transmission of HBV parallel those of HIV, efforts to modify risky behaviors must be the primary focus of prevention efforts. However, vaccine should be administered to HIV-infected patients who continue to have a high likelihood of HBV exposure.

Persons with HIV infection also are at increased risk for invasive Haemophilus influenzae type b (Hib) disease and for complications from measles. Immunization against Hib and measles should be considered for asymptomatic HIV-infected persons who may have an increased risk for exposure to these infections. For further details on immunization of HIV-infected patients, refer to Recommendations of the Advisory Committee on Immunization Practices (ACIP): Use of Vaccines and Immune Globulins in Persons with Altered Immunocompetence (10).

# Follow-Up Evaluation

No controlled studies are available to serve as the basis for recommending specific follow-up tests or follow-up intervals. The suggested frequency of monitoring is based on the slow decrease in CD4+ counts observed among patients in cohort studies, but it should be modified depending on the patient's psychological status, the presence of symptoms, or both. Repeat evaluation for STDs also is important in the follow-up of HIV-infected persons and should be performed for all persons who continue to be sexually active. Follow-up evaluation should be performed every 6 months and should include the following:
- An interim history and physical examination;
- A complete blood count, platelet count, and lymphocyte subset analysis;
- Re-evaluation of psychosocial status and behavioral factors indicating risk for transmitting HIV.

To follow CD4+ measurements, providers should use the same laboratory and, optimally, obtain each specimen at the same time of day. When unexpected or discrepant results are obtained, or when major treatment decisions are to be made, health-care providers should consider repeating the CD4+ measurement after at least 1 week. More frequent laboratory monitoring (every 3-4 months) is warranted if CD4+ results indicate that a patient is approaching a threshold at which a clinical intervention may be indicated.

# Continuing Management of Patients with Early HIV Infection

Providing comprehensive, continuing management of patients with early HIV infection can include additional diagnostic studies (e.g., chest x-ray, serum chemistry, and antibody testing for toxoplasmosis and hepatitis B), antiretroviral therapy and monitoring, and PCP prophylaxis. Treatment of HIV infection and prophylaxis against opportunistic infections continue to evolve rapidly. This treatment should be undertaken in consultation with physicians who are familiar with the care of persons with HIV infection.
The complete therapeutic management of HIV infection is beyond the scope of this document.

# Antiretroviral Therapy

The optimal time for initiating antiretroviral therapy has not yet been established. Zidovudine (ZDV) at a dose of 500 mg/day (100 mg orally every 4 hours while the patient is awake) has been recommended for symptomatic persons with <500 CD4+ T-cells/µL and for asymptomatic persons with <300 CD4+ T-cells/µL. This recommendation is based on short-term follow-up in three randomized clinical trials demonstrating that the initiation of ZDV therapy delays progression to advanced disease. Evidence for improved long-term survival after early treatment is less conclusive. The effects of ZDV may be transient, possibly because of the development of viral resistance or other factors. Sequential or combination therapy with other antiretroviral agents could prove more efficacious. Whether other daily dosages, dose schedules, or dosages based on body weight would result in greater therapeutic benefit or fewer side effects is not known. Providers should work with patients to design a treatment strategy that is both clinically sound and appropriate for each patient's needs, priorities, and circumstances.

An initial dosage of 600 mg/day in divided doses has been recommended by a panel of experts convened by the National Institute of Allergy and Infectious Diseases (NIAID). Preliminary data suggest that ZDV can yield therapeutic results when the dosing interval is increased to 8 hours and at doses of 200 mg three times daily. Antiretroviral efficacy is diminished at doses <300 mg/day, and it has been suggested that higher oral doses may be required to achieve effective levels in the central nervous system.

No data support the use of antiretroviral drugs other than ZDV as initial therapy. Didanosine (DDI) is recommended for persons who are intolerant of ZDV or who experience progression of symptoms despite ZDV. Two 100 mg tablets of DDI are recommended every 12 hours for persons who weigh ≥60 kg; the recommended dose for adults <60 kg is one 100 mg tablet plus one 25 mg tablet every 12 hours. Two tablets are recommended at each dose so that adequate buffering is provided to prevent gastric acid degradation of the drug. Benefits have been reported from other antiretroviral regimens, including treatment with combinations of ZDV, DDC (dideoxycytidine), and DDI, or switching therapy to DDI after long-term therapy with ZDV. However, experience with these alternatives is insufficient to serve as a basis for recommendations. Providers managing patients who are taking antiretroviral therapy should be familiar with evidence being developed in several clinical trials. Current information is available from the NIAID AIDS Clinical Trials Information Service, 1-800-TRIALS-A.

Side effects that are serious (e.g., anemia, cytopenia, pancreatitis, and peripheral neuropathy) or uncomfortable (e.g., nausea, vomiting, headaches, and insomnia) are common during antiretroviral therapy. Although hematologic toxicity from ZDV is less common with the lower doses recommended, approximately 2% of patients who receive 500 mg/day manifest severe anemia by the 18th month of treatment, most within the 3rd through 8th months of treatment. Careful hematologic monitoring of patients receiving ZDV is recommended.

# PCP Prophylaxis

Adults and adolescents with CD4+ counts <200 cells/µL or with unexplained fever (>100 F) for ≥2 weeks, and any patient with a previous episode of PCP, should receive PCP prophylaxis.
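The weight-based DDI rule and the prophylaxis criteria above can be restated as two small functions. This is a minimal sketch for illustration only; the function and parameter names are invented here and are not part of the guidelines.

```python
def ddi_dose_q12h(weight_kg: float) -> str:
    """DDI dosing every 12 hours by body weight (two tablets for buffering)."""
    if weight_kg >= 60:
        return "two 100 mg tablets"
    return "one 100 mg tablet plus one 25 mg tablet"

def needs_pcp_prophylaxis(cd4_cells_per_ul: int,
                          weeks_of_unexplained_fever: float,
                          prior_pcp: bool) -> bool:
    """True if any of the text's criteria for PCP prophylaxis is met."""
    return (cd4_cells_per_ul < 200
            or weeks_of_unexplained_fever >= 2
            or prior_pcp)
```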
Prophylaxis should be continued for the lifetime of the patient. Based upon evidence from randomized controlled clinical trials, the Public Health Service Task Force on Antipneumocystis Prophylaxis has recommended the following regimens for PCP prophylaxis among adults and adolescents:
- Oral trimethoprim-sulfamethoxazole (TMP-SMX) at a dose of one double-strength tablet (800 mg SMX and 160 mg TMP) orally once a day.
- For patients unable to tolerate TMP-SMX: aerosolized pentamidine administered by either the Respirgard II nebulizer (300 mg once a month) or the Fisoneb nebulizer (an initial loading regimen of five 60 mg doses during a 2-week period, followed by a 60 mg dose every 2 weeks).

The efficacy of alternatives for patients unable to tolerate TMP-SMX, including dapsone 100 mg orally once a day and sulfa desensitization, has not been studied extensively. For further details on PCP prophylaxis, refer to Recommendations for Prophylaxis Against Pneumocystis carinii Pneumonia for Adults and Adolescents Infected with Human Immunodeficiency Virus (11).

# Management of Sex Partners

The rationale for implementing partner notification is that early diagnosis and treatment of HIV infection may reduce morbidity and offers the opportunity to encourage risk-reducing behaviors. Two complementary notification processes, patient referral and provider referral, can be used to identify partners. With patient referral, patients inform their own partners directly of their exposure to HIV infection. With provider referral, trained health department personnel locate partners on the basis of the names, descriptions, and addresses provided by the patient. During the notification process, the anonymity of patients is protected; their names are not revealed to the sex or needle-sharing partners who are notified. Many state health departments provide assistance with provider referral upon request.

One randomized trial suggested that provider referral is more effective in notifying partners than patient referral: 50% of partners in the provider referral group were notified, whereas only 7% of partners were notified by subjects in the patient referral group. However, few data demonstrate whether behavioral change takes place as a result of partner notification, and many patients are reluctant to disclose the names of partners because of concern about discrimination, disruption of relationships, and loss of confidentiality for the partners.

For persons infected with HIV, the term "partner" includes not only sex partners but also persons with whom needles or other injecting equipment are shared. Partner notification is a means of identifying and concentrating risk-reduction efforts on persons at high risk for contracting or transmitting HIV infection. Partner notification for HIV infection must be confidential and should depend upon the voluntary cooperation of the patient.

Specific recommendations for implementing partner notification procedures are listed below:
- Persons who are HIV-positive should be encouraged to notify their partners and to refer them for counseling and testing. Providers should assist in this process, if desired by the patient, either directly or through referral to health department partner notification programs.
- If patients are unwilling to notify their partners, or if it cannot be assured that their partners will seek counseling, physicians or health department personnel should use confidential procedures to assure that the partners are notified.

# Special Considerations

# Pregnancy

Women who are HIV-infected should be specifically informed about the risk for perinatal infection. Current evidence indicates that 15%-39% of infants born to HIV-infected mothers are infected with HIV, and the virus also can be transmitted from an infected mother by breastfeeding. Pregnancy does not appear to increase maternal morbidity or mortality among HIV-infected patients. Women should be counseled about their options regarding pregnancy. The objective of counseling is to provide HIV-infected women with current information for making reproductive decisions, analogous to the model used in genetic counseling. Contraceptive, prenatal, and abortion services should be available on site or by referral.

Minimal information is available on the use of ZDV or other antiretroviral drugs during pregnancy. Trials to evaluate the efficacy of ZDV in preventing perinatal transmission, and its safety during pregnancy, are being conducted. A case series of 43 pregnant women has been published; dosages of ZDV ranged from 300 to 1,200 mg/day. ZDV was well tolerated, and there were no malformations among the newborns in this series. Although this observation is encouraging, a case series of this size cannot be used to infer that ZDV is not teratogenic. Burroughs Wellcome Co. and Hoffmann-LaRoche, Inc., in cooperation with CDC, maintain a registry to assess the effects of the use of ZDV and DDC during pregnancy. Women who receive either ZDV or DDC during pregnancy should be reported to this registry (1-800-722-9292, ext. 58465).

# HIV Infection Among Infants and Children

Infants and young children with HIV infection differ from adults and adolescents with respect to the diagnosis, clinical presentation, and management of HIV disease. For example, total lymphocyte and absolute CD4+ cell counts are much higher in infants and children than in healthy adults and are age dependent. Specific indications and dosages for both antiretroviral and prophylactic therapy have been developed for children (12). Other modifications must be made in the health services recommended for infants and children, such as avoiding vaccination with live oral polio vaccine when a child (or a close household contact) is infected with HIV.

State laws differ regarding consent of minors (persons <18 years of age) for HIV counseling and testing, evaluation, treatment services, and participation in clinical trials. Although most adolescents receive adult doses of antiretroviral and prophylactic therapy, no data exist on modification of these dosages during puberty. Management of infants, children, and adolescents who are known or suspected to be infected with HIV requires referral to, or close consultation with, physicians familiar with the manifestations and treatment of pediatric HIV infection.

# DISEASES CHARACTERIZED BY GENITAL ULCERS

# Management of the Patient with Genital Ulcers

In the United States, most patients with genital ulcers have genital herpes, syphilis, or chancroid. The relative frequency of each varies by geographic area and patient population, but in most areas of the United States genital herpes is the most common of these diseases. More than one of these diseases may be present in at least 3%-10% of patients with genital ulcers.
Each disease has been associated with an increased risk for HIV infection. Because a diagnosis based only on the history and physical examination is often inaccurate, evaluation of all persons with genital ulcers should include a serologic test for syphilis and possibly other tests. Although ideally all of these tests should be conducted for each patient with a genital ulcer, use of tests other than a serologic test for syphilis may be based on test availability and clinical or epidemiologic suspicion. Specific tests for the evaluation of genital ulcers are listed below:
- Darkfield examination or direct immunofluorescence test for Treponema pallidum,
- Culture or antigen test for HSV, and
- Culture for Haemophilus ducreyi.

HIV testing should be considered in the management of patients with genital ulcers, especially for those with syphilis or chancroid. A health-care provider often must treat a patient before test results are available; even after complete testing, at least one-quarter of patients with genital ulcers have no laboratory-confirmed diagnosis. In that circumstance, the clinician should treat for the diagnosis considered most likely. Many experts recommend treatment for both chancroid and syphilis if the diagnosis is unclear or if the patient resides in a community in which chancroid morbidity is notable (especially when diagnostic capabilities for chancroid and syphilis are not ideal).

# Chancroid

Chancroid is endemic in many areas of the United States and also occurs in discrete outbreaks. Chancroid is well established as a cofactor for HIV transmission, and high rates of HIV infection among patients with chancroid have been reported in the United States and in other countries. As many as 10% of patients with chancroid may be coinfected with T. pallidum or HSV.

Definitive diagnosis of chancroid requires identification of H. ducreyi on special culture media that are not commercially available; even with these media, sensitivity is no higher than 80% and is usually lower. A probable diagnosis, for both clinical and surveillance purposes, may be made if the person has one or more painful genital ulcers and a) no evidence of T. pallidum infection by darkfield examination of ulcer exudate or by a serologic test for syphilis performed at least 7 days after onset of ulcers, and b) either a clinical presentation of the ulcer(s) that is not typical of disease caused by HSV or negative HSV test results. The combination of a painful ulcer with tender inguinal adenopathy (which occurs among one-third of patients) is suggestive of chancroid; when accompanied by suppurative inguinal adenopathy, it is almost pathognomonic.

# Treatment

Successful treatment cures infection, resolves clinical symptoms, and prevents transmission to others. In extensive cases, scarring may result despite successful therapy.

# Recommended Regimens

Azithromycin 1 g orally in a single dose,
or
Ceftriaxone 250 mg intramuscularly (IM) in a single dose,
or
Erythromycin base 500 mg orally 4 times a day for 7 days.

All three regimens are effective for the treatment of chancroid among patients without HIV infection. Azithromycin and ceftriaxone offer the advantage of single-dose therapy. Antimicrobial resistance to ceftriaxone and azithromycin has not been reported. Although two isolates resistant to erythromycin were reported from Asia a decade ago, similar isolates have not been reported since.
# Alternative Regimens

Amoxicillin 500 mg plus clavulanic acid 125 mg orally 3 times a day for 7 days,
or
Ciprofloxacin 500 mg orally 2 times a day for 3 days.

NOTE: Ciprofloxacin is contraindicated for pregnant and lactating women and for children and adolescents ≤17 years of age.

These regimens have not been evaluated as extensively as the recommended regimens; neither has been studied in the United States.

# Other Management Considerations

Patients should be tested for HIV infection at the time of diagnosis. If initial results are negative, patients also should be tested 3 months later for both syphilis and HIV.

# Follow-Up

Patients should be re-examined 3-7 days after initiation of therapy. If treatment is successful, ulcers improve symptomatically within 3 days and objectively within 7 days after therapy. If no clinical improvement is evident, the clinician must consider whether a) the diagnosis is correct, b) coinfection with another STD agent exists, c) the patient is infected with HIV, d) treatment was not taken as instructed, or e) the H. ducreyi strain causing the infection is resistant to the prescribed antimicrobial. The time required for complete healing is related to the size of the ulcer; large ulcers may require ≥2 weeks. Clinical resolution of fluctuant lymphadenopathy is slower than that of ulcers and may require needle aspiration through adjacent intact skin, even during otherwise successful therapy.

# Management of Sex Partners

Persons who had sexual contact with a patient who has chancroid within the 10 days before onset of the patient's symptoms should be examined and treated, even in the absence of symptoms.

# Special Considerations

# Pregnancy

The safety of azithromycin for pregnant and lactating women has not been established. Ciprofloxacin is contraindicated during pregnancy. No adverse effects of chancroid on pregnancy outcome or on the fetus have been reported.

# HIV Infection

Patients coinfected with HIV should be monitored closely; they may require courses of therapy longer than those recommended in this report. Healing may be slower among HIV-infected persons, and treatment failures do occur, especially after shorter-course treatment regimens. Because data on the therapeutic efficacy of the recommended ceftriaxone and azithromycin regimens among patients infected with HIV are limited, those regimens should be used for persons known to be infected with HIV only if follow-up can be assured. Some experts suggest using the 7-day erythromycin regimen for treating HIV-infected persons.

# Genital Herpes Simplex Virus Infections

Genital herpes is a viral disease that may be recurrent and has no cure. Two serotypes of HSV have been identified, HSV-1 and HSV-2; most cases of genital herpes are caused by HSV-2. On the basis of serologic studies, approximately 30 million persons in the United States may have genital HSV infection. Most infected persons never recognize signs suggestive of genital herpes; some have symptoms shortly after infection and then never again. A minority of the total infected U.S. population has recurrent episodes of genital lesions. Some cases of first clinical episode genital herpes are manifested by extensive disease that requires hospitalization. Many cases of genital herpes are acquired from persons who do not know they have genital HSV infection or who were asymptomatic at the time of sexual contact.
Randomized trials show that systemic acyclovir provides partial control of the symptoms and signs of herpes episodes when used to treat first clinical episodes or when used as suppressive therapy. However, acyclovir neither eradicates latent virus nor affects the subsequent risk, frequency, or severity of recurrences after administration of the drug is discontinued. Topical therapy with acyclovir is substantially less effective than the oral drug, and its use is discouraged. Episodes of HSV infection among HIV-infected patients may require more aggressive therapy. Immunocompromised persons may have prolonged episodes with extensive disease; for these persons, infections caused by acyclovir-resistant strains require selection of alternate antiviral agents.

# First Clinical Episode of Genital Herpes

# Recommended Regimen

Acyclovir 200 mg orally 5 times a day for 7-10 days or until clinical resolution is attained.

# First Clinical Episode of Herpes Proctitis

# Recommended Regimen

Acyclovir 400 mg orally 5 times a day for 10 days or until clinical resolution is attained.

# Recurrent Episodes

When treatment is instituted during the prodrome or within 2 days of onset of lesions, some patients with recurrent disease experience limited benefit from therapy. However, because early treatment can seldom be administered, most immunocompetent patients with recurrent disease do not benefit from acyclovir treatment, and it is not generally recommended.

# Recommended Regimen

Acyclovir 200 mg orally 5 times a day for 5 days,
or
Acyclovir 400 mg orally 3 times a day for 5 days,
or
Acyclovir 800 mg orally 2 times a day for 5 days.

# Daily Suppressive Therapy

Daily suppressive therapy reduces the frequency of HSV recurrences by at least 75% among patients with frequent recurrences (i.e., six or more recurrences per year). Suppressive treatment with oral acyclovir does not totally eliminate symptomatic or asymptomatic viral shedding or the potential for transmission. Safety and efficacy have been documented among persons receiving daily therapy for as long as 5 years. Acyclovir-resistant strains of HSV have been isolated from some persons receiving suppressive therapy, but these strains have not been associated with treatment failure among immunocompetent patients. After 1 year of continuous suppressive therapy, acyclovir should be discontinued to allow assessment of the patient's rate of recurrent episodes.

# Recommended Regimen

Acyclovir 400 mg orally 2 times a day.

# Alternative Regimen

Acyclovir 200 mg orally 3-5 times a day. The goal of the alternative regimen is to identify for each patient the lowest dose that provides relief from frequently recurring symptoms.

# Severe Disease

Intravenous (IV) therapy should be provided for patients with severe disease or complications necessitating hospitalization (e.g., disseminated infection that includes encephalitis, pneumonitis, or hepatitis).

# Recommended Regimen

Acyclovir 5-10 mg/kg body weight IV every 8 hours for 5-7 days or until clinical resolution is attained.

# Other Management Considerations

Other considerations for managing patients with genital HSV infection are as follows:
- Patients should be advised to abstain from sexual activity while lesions are present.
- Patients with genital herpes should be told about the natural history of the disease, with emphasis on the potential for recurrent episodes, asymptomatic viral shedding, and sexual transmission. Sexual transmission of HSV has been documented to occur during periods without evidence of lesions.
Many cases are transmitted during such asymptomatic periods.
- The use of condoms should be encouraged during all sexual exposures.
- The risk for neonatal infection should be explained to all patients, male and female, with genital herpes. Women of childbearing age who have genital herpes should be advised to inform the health-care providers who care for them during pregnancy about their HSV infection.

# Management of Sex Partners

Sex partners of patients who have genital herpes are likely to benefit from evaluation and counseling. Symptomatic sex partners should be managed in the same manner as any patient with genital lesions. However, the majority of persons with genital HSV infection do not have a history of typical genital lesions. These asymptomatic persons may benefit from evaluation and counseling; thus, even asymptomatic partners should be asked about histories of typical and atypical genital lesions and encouraged to examine themselves for lesions in the future. Commercially available HSV type-specific antibody tests have not demonstrated adequate performance characteristics, and their use is not currently recommended. Sensitive and specific type-specific serum antibody assays now used in research settings might contribute to future intervention strategies. Should tests with adequate sensitivity and specificity become commercially available, it might be possible to accurately identify asymptomatic persons infected with HSV-2, to focus counseling on how to detect lesions by self-examination, and to reduce the risk for transmission to sex partners.

# Special Considerations

# Allergy, Intolerance, or Adverse Reactions

Effective alternatives to therapy with acyclovir are not available.

# HIV Infection

Lesions caused by HSV are relatively common among patients infected with HIV. Intermittent or suppressive therapy with oral acyclovir may be needed. The optimal acyclovir dosage for HIV-infected persons has not been established, but experience strongly suggests that immunocompromised patients benefit from increased dosage. Regimens such as 400 mg orally 3-5 times a day, as used for other immunocompromised persons, have been found useful. Therapy should be continued until clinical resolution is attained. For severe disease, IV acyclovir therapy may be required. If lesions persist in a patient receiving acyclovir treatment, resistance to acyclovir should be suspected, and the patient should be managed in consultation with an expert. For severe disease caused by proven or suspected acyclovir-resistant strains, hospitalization should be considered. Foscarnet, 40 mg/kg body weight IV every 8 hours until clinical resolution is attained, appears to be the best available treatment.

# Pregnancy

The safety of systemic acyclovir therapy among pregnant women has not been established. Burroughs Wellcome Co., in cooperation with CDC, maintains a registry to assess the effects of the use of acyclovir during pregnancy. Women who receive acyclovir during pregnancy should be reported to this registry (1-800-722-9292, ext. 58465). Current registry findings do not indicate an increase in the number of birth defects identified among the prospective reports when compared with the number expected in the general population. Moreover, no consistent pattern of abnormalities emerges among retrospective reports. These findings provide some assurance when counseling women who have had inadvertent prenatal exposure to acyclovir.
However, the accumulated case histories comprise a sample of insufficient size for reaching reliable and definitive conclusions regarding the risks of acyclovir treatment to pregnant women and their fetuses. In the presence of life-threatening maternal HSV infection (e.g., disseminated infection that includes encephalitis, pneumonitis, or hepatitis), IV acyclovir is indicated. Among pregnant women without life-threatening disease, systemic acyclovir should not be used to treat recurrences, nor should it be used as suppressive therapy near term (or at other times during pregnancy) to prevent reactivation.

# Perinatal Infections

Most mothers of infants who acquire neonatal herpes lack histories of clinically evident genital herpes. The risk for transmission to the neonate from an infected mother appears highest among women with first-episode genital herpes near the time of delivery and is low (≤3%) among women with recurrent herpes. The results of viral cultures during pregnancy do not predict viral shedding at the time of delivery, and such cultures are not routinely indicated.

At the onset of labor, all women should be questioned carefully about symptoms of genital herpes and should be examined. Women without symptoms or signs of genital herpes infection (or its prodrome) may deliver their babies vaginally. Among women who have a history of genital herpes, or who have a sex partner with genital herpes, cultures of the birth canal at delivery may aid in decisions relating to neonatal management.

Infants delivered through an infected birth canal (proven by virus isolation or presumed by observation of lesions) should be followed carefully, including virus cultures obtained 24-48 hours after birth. Available data do not support the routine use of acyclovir as anticipatory treatment for asymptomatic infants delivered through an infected birth canal. Treatment should be reserved for infants who develop evidence of clinical disease and for those with positive postpartum cultures. All infants with evidence of neonatal herpes should be treated with systemic acyclovir or vidarabine; refer to the Report of the Committee on Infectious Diseases, American Academy of Pediatrics (13). For ease of administration and lower toxicity, acyclovir (30 mg/kg/day for 10-14 days) is the preferred drug. The care of these infants should be managed in consultation with an expert.

# Lymphogranuloma Venereum

Lymphogranuloma venereum (LGV), a rare disease in the United States, is caused by serovars L1, L2, or L3 of C. trachomatis. The most common clinical manifestation of LGV among heterosexuals is tender inguinal lymphadenopathy, which is most commonly unilateral. Women and homosexually active men may have proctocolitis or inflammatory involvement of perirectal or perianal lymphatic tissues, resulting in fistulas and strictures. When patients seek care, most no longer have the self-limited genital ulcer that sometimes occurs at the site of inoculation. The diagnosis is usually made serologically and by exclusion of other causes of inguinal lymphadenopathy or genital ulcers.

# Treatment

Treatment cures infection and prevents ongoing tissue damage, although the tissue reaction can result in scarring. Buboes may require aspiration or incision and drainage through intact skin. Doxycycline is the preferred treatment.

# Recommended Regimen

Doxycycline 100 mg orally 2 times a day for 21 days.
# Alternative Regimens

Erythromycin 500 mg orally 4 times a day for 21 days,
or
Sulfisoxazole 500 mg orally 4 times a day for 21 days (or an equivalent sulfonamide course).

# Follow-Up

Patients should be followed clinically until signs and symptoms have resolved.

# Management of Sex Partners

Persons who have had sexual contact with a patient who has LGV within the 30 days before onset of the patient's symptoms should be examined, tested for urethral or cervical chlamydial infection, and treated.

# Special Considerations

# Pregnancy

Pregnant and lactating women should be treated with the erythromycin regimen.

# HIV Infection

Persons with both HIV infection and LGV should be treated according to the regimens cited above.

# Syphilis

# General Principles

# Background

Syphilis is a systemic disease caused by T. pallidum. Patients with syphilis may seek treatment for signs or symptoms of primary infection (an ulcer or chancre at the site of infection), secondary infection (manifestations that include rash, mucocutaneous lesions, and adenopathy), or tertiary infection (cardiac, neurologic, ophthalmic, auditory, or gummatous lesions). Infections also may be detected during the latent stage by serologic testing. Patients with latent syphilis who are known to have been infected within the preceding year are considered to have early latent syphilis; others have late latent syphilis or syphilis of unknown duration. Theoretically, treatment for late latent syphilis (as well as for tertiary syphilis) requires therapy of longer duration because the organisms are dividing more slowly; however, the validity of this distinction and of its timing is unproven.

# Diagnostic Considerations and Use of Serologic Tests

Darkfield examinations and direct fluorescent antibody tests of lesion exudate or tissue are the definitive methods for diagnosing early syphilis. A presumptive diagnosis is possible with the use of two types of serologic tests for syphilis: a) nontreponemal tests (e.g., the Venereal Disease Research Laboratory [VDRL] test and the rapid plasma reagin [RPR] test) and b) treponemal tests (e.g., the fluorescent treponemal antibody absorbed [FTA-ABS] test and the microhemagglutination assay for antibody to T. pallidum [MHA-TP]). The use of one type of test alone is not sufficient for diagnosis.

Nontreponemal test antibody titers usually correlate with disease activity, and results should be reported quantitatively. A fourfold change in titer, equivalent to a change of two dilutions (e.g., from 1:16 to 1:4, or from 1:8 to 1:32), is necessary to demonstrate a substantial difference between two nontreponemal test results obtained using the same serologic test. A patient who has a reactive treponemal test usually will have a reactive test for a lifetime, regardless of treatment or disease activity (although 15%-25% of patients treated during the primary stage may revert to being serologically nonreactive after 2-3 years). Treponemal test antibody titers correlate poorly with disease activity and should not be used to assess the response to treatment.

Sequential serologic tests should be performed using the same testing method (e.g., VDRL or RPR) by the same laboratory. The VDRL and RPR are equally valid, but quantitative results from the two tests cannot be compared directly because RPR titers are often slightly higher than VDRL titers.

Abnormal results of serologic testing (unusually high, unusually low, and fluctuating titers) have been observed among HIV-infected patients. For such patients, the use of other tests (e.g., biopsy and direct microscopy) should be considered.
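The dilution arithmetic above can be made explicit: a "fourfold change" is a shift of two doubling dilutions. In this minimal sketch, titers are represented by their reciprocals (a 1:16 titer is written 16); the function name is invented for illustration.

```python
def titer_change_is_fourfold(old_titer: int, new_titer: int) -> bool:
    """True if two nontreponemal titers differ by at least two dilutions."""
    ratio = max(old_titer, new_titer) / min(old_titer, new_titer)
    return ratio >= 4

# Examples from the text: 1:16 -> 1:4 and 1:8 -> 1:32 are both fourfold changes.
assert titer_change_is_fourfold(16, 4)
assert titer_change_is_fourfold(8, 32)
assert not titer_change_is_fourfold(8, 16)  # only one dilution
```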
Despite these occasional abnormal results, serologic tests appear to be accurate and reliable for the diagnosis of syphilis and for the evaluation of treatment response for the vast majority of HIV-infected patients.

No single test can be used to diagnose neurosyphilis in all patients. The diagnosis of neurosyphilis can be made on the basis of various combinations of reactive serologic test results, abnormalities of the cerebrospinal fluid (CSF) cell count or protein, or a reactive VDRL-CSF (the RPR is not performed on CSF), with or without clinical manifestations. The CSF leukocyte count is usually elevated (>5 WBC/mm³) when active neurosyphilis is present, and it is also a sensitive measure of the effectiveness of therapy. The VDRL-CSF is the standard serologic test for CSF; when reactive in the absence of substantial contamination of the CSF with blood, it is considered diagnostic of neurosyphilis. However, the VDRL-CSF may be nonreactive when neurosyphilis is present. Some experts recommend performing an FTA-ABS test on CSF. The CSF FTA-ABS is less specific for neurosyphilis (i.e., it yields more false positives) than the VDRL-CSF; however, the test is believed to be highly sensitive.

# Treatment

Parenteral penicillin G is the preferred drug for treatment of all stages of syphilis. The preparation(s) used (i.e., benzathine, aqueous procaine, or aqueous crystalline), the dosage, and the length of treatment depend on the stage and clinical manifestations of the disease. The efficacy of penicillin for the treatment of syphilis was well established through clinical experience before the value of randomized controlled clinical trials was recognized. Therefore, nearly all the recommendations for the treatment of syphilis are based on expert opinion reinforced by case series, open clinical trials, and 50 years of clinical experience.

Parenteral penicillin G is the only therapy with documented efficacy for neurosyphilis or for syphilis during pregnancy. Patients with neurosyphilis, and pregnant women with syphilis in any stage, who report penicillin allergy should almost always be treated with penicillin, after desensitization if necessary. Skin testing for penicillin allergy may be useful for some patients and in some settings (see Management of the Patient With a History of Penicillin Allergy). However, the minor determinants needed for penicillin skin testing are not available commercially.

The Jarisch-Herxheimer reaction is an acute febrile reaction, often accompanied by headache, myalgia, and other symptoms, that may occur within the first 24 hours after any therapy for syphilis; patients should be advised of this possible adverse reaction. The Jarisch-Herxheimer reaction is common among patients with early syphilis. Antipyretics may be recommended, but no proven methods exist for preventing this reaction. Among pregnant women, the Jarisch-Herxheimer reaction may induce early labor or cause fetal distress; this concern should not prevent or delay therapy (see Syphilis During Pregnancy).

# Management of Sex Partners

Sexual transmission of T. pallidum occurs only when mucocutaneous syphilitic lesions are present; such manifestations are uncommon after the first year of infection.
However, persons sexually exposed to a patient with syphilis in any stage should be evaluated clinically and serologically according to the following recommendations:
- Persons who were exposed to a patient with primary, secondary, or early latent (duration <1 year) syphilis within the preceding 90 days might be infected even if seronegative, and therefore should be treated presumptively.
- Persons who were sexually exposed to a patient with primary, secondary, or early latent (duration <1 year) syphilis >90 days before examination should be treated presumptively if serologic test results are not available immediately and the opportunity for follow-up is uncertain.
- For purposes of partner notification and presumptive treatment of exposed sex partners, patients who have syphilis of unknown duration and who have high nontreponemal serologic test titers (≥1:32) may be considered to be infected with early syphilis.
- Long-term sex partners of patients with late syphilis should be evaluated clinically and serologically for syphilis.

The time periods before treatment used for identifying at-risk sex partners are 3 months plus the duration of symptoms for primary syphilis, 6 months plus the duration of symptoms for secondary syphilis, and 1 year for early latent syphilis.

# Primary and Secondary Syphilis

# Treatment

Four decades of experience indicate that parenteral penicillin G is effective in achieving local cure (healing of lesions and prevention of sexual transmission) and in preventing late sequelae. However, no adequately conducted comparative trials have been performed to guide the selection of an optimal penicillin regimen (i.e., dose, duration, and preparation). Substantially fewer data are available on nonpenicillin regimens.

# Recommended Regimen for Adults

Nonallergic patients with primary or secondary syphilis should be treated with the following regimen:

Benzathine penicillin G, 2.4 million units IM in a single dose.

NOTE: Recommendations for treating pregnant women and HIV-infected persons for syphilis are discussed in separate sections.

# Recommended Regimen for Children

After the newborn period, children diagnosed with syphilis should have a CSF examination to exclude a diagnosis of neurosyphilis, and birth and maternal medical records should be reviewed to assess whether the child has congenital or acquired syphilis (see Congenital Syphilis). Children with acquired primary or secondary syphilis should be evaluated (including consultation with child-protection services) and treated using the following pediatric regimen (see Sexual Assault or Abuse of Children):

Benzathine penicillin G, 50,000 units/kg IM, up to the adult dose of 2.4 million units, in a single dose.

# Other Management Considerations

All patients with syphilis should be tested for HIV. In areas with a high prevalence of HIV, patients with primary syphilis should be retested for HIV after 3 months.

Patients who have syphilis and who also have symptoms or signs suggesting neurologic disease (e.g., meningitis) or ophthalmic disease (e.g., uveitis) should be evaluated fully for neurosyphilis and syphilitic eye disease (including CSF analysis and ocular slit-lamp examination) and should be treated according to the results of this evaluation.

Invasion of the CSF by T. pallidum, with accompanying CSF abnormalities, is common among adults who have primary or secondary syphilis. However, few patients develop neurosyphilis after treatment with the regimens described in this report.
Therefore, unless clinical signs or symptoms of neurologic involvement are present (e.g., auditory, cranial nerve, meningeal, or ophthalmic manifestations), lumbar puncture is not recommended for the routine evaluation of patients with primary or secondary syphilis.

# Follow-Up

Treatment failures can occur with any regimen. However, assessing the response to treatment is often difficult, and no definitive criteria for cure or failure exist. Serologic test titers may decline more slowly among patients with a prior syphilis infection. Patients should be re-examined clinically and serologically at 3 months and again at 6 months.

Patients with signs or symptoms that persist or recur, or who have a sustained fourfold increase in nontreponemal test titer compared with either the baseline titer or a subsequent result, can be considered to have failed treatment or to have been reinfected. These patients should be re-treated after evaluation for HIV infection. Unless reinfection is likely, lumbar puncture also should be performed.

Failure of nontreponemal test titers to decline fourfold by 3 months after therapy for primary or secondary syphilis identifies persons at risk for treatment failure. Those persons should be evaluated for HIV infection. Optimal management of such patients is unclear if they are HIV negative; at a minimum, these patients should have additional clinical and serologic follow-up. If further follow-up cannot be assured, re-treatment is recommended. Some experts recommend CSF examination in such situations. When patients are re-treated, most experts recommend three weekly injections of benzathine penicillin G, 2.4 million units IM, unless CSF examination indicates that neurosyphilis is present.

# Management of Sex Partners

Refer to General Principles, Management of Sex Partners.

# Special Considerations

# Penicillin Allergy

Nonpregnant penicillin-allergic patients who have primary or secondary syphilis should be treated with the following regimen:

Doxycycline 100 mg orally 2 times a day for 2 weeks,
or
Tetracycline 500 mg orally 4 times a day for 2 weeks.

There is less clinical experience with doxycycline than with tetracycline, but compliance is likely to be better with doxycycline. Therapy for a patient who cannot tolerate either doxycycline or tetracycline should be based upon whether the patient's compliance with the therapy regimen and with follow-up examinations can be assured. For nonpregnant patients whose compliance with therapy and follow-up can be assured, an alternative regimen is erythromycin 500 mg orally 4 times a day for 2 weeks. Various ceftriaxone regimens also may be considered. Patients whose compliance with therapy or follow-up cannot be assured should be desensitized, if necessary, and treated with penicillin. Skin testing for penicillin allergy may be useful in some situations (see Management of the Patient With a History of Penicillin Allergy).

Erythromycin is less effective than the other recommended regimens. Data on ceftriaxone are limited, and experience has been too brief to permit identification of late failures. The optimal dose and duration of ceftriaxone have not been established, but regimens that provide 8-10 days of treponemicidal levels in the blood should be used. Single-dose ceftriaxone therapy is not effective for treating syphilis.
# Pregnancy

Pregnant patients who are allergic to penicillin should be treated with penicillin, after desensitization if necessary (see Management of the Patient With a History of Penicillin Allergy and Syphilis During Pregnancy).

# HIV Infection

Refer to Syphilis Among HIV-Infected Patients.

# Latent Syphilis

Latent syphilis is defined as those periods after infection with T. pallidum when patients are seroreactive but show no other evidence of disease. Patients who have latent syphilis and who acquired syphilis within the preceding year are classified as having early latent syphilis. Acquisition of syphilis within the preceding year can be demonstrated on the basis of documented seroconversion, a fourfold or greater increase in the titer of a nontreponemal serologic test, a history of symptoms of primary or secondary syphilis, or a sex partner with primary, secondary, or early latent syphilis (documented independently as duration <1 year). Nearly all other patients have latent syphilis of unknown duration and should be managed as if they had late latent syphilis.

# Treatment

Treatment of latent syphilis is intended to prevent the occurrence or progression of late complications. Although clinical experience supports the effectiveness of penicillin in achieving those goals, limited evidence is available to guide the choice of specific regimens, and very little evidence supports the use of nonpenicillin regimens.

# Recommended Regimens for Adults

These regimens are for nonallergic patients with a normal CSF examination (if performed).

# Early Latent Syphilis

Benzathine penicillin G, 2.4 million units IM in a single dose.

# Late Latent Syphilis or Latent Syphilis of Unknown Duration

Benzathine penicillin G, 7.2 million units total, administered as 3 doses of 2.4 million units IM each, at 1-week intervals.

# Recommended Regimens for Children

After the newborn period, children diagnosed with syphilis should have a CSF examination to exclude neurosyphilis, and birth and maternal medical records should be reviewed to assess whether the child has congenital or acquired syphilis (see Congenital Syphilis). Older children with acquired latent syphilis should be evaluated as described for adults and treated using the following pediatric regimens (see Sexual Assault or Abuse of Children). These regimens are for nonallergic children who have acquired syphilis and who have had a normal CSF examination.

# Early Latent Syphilis

Benzathine penicillin G, 50,000 units/kg IM, up to the adult dose of 2.4 million units, in a single dose.

# Late Latent Syphilis or Latent Syphilis of Unknown Duration

Benzathine penicillin G, 50,000 units/kg IM (up to the adult dose of 2.4 million units) for each of 3 doses (total 150,000 units/kg, up to the adult total dose of 7.2 million units).

# Other Management Considerations

All patients with latent syphilis should be evaluated clinically for evidence of tertiary disease (e.g., aortitis, neurosyphilis, gumma, and iritis). The recommended therapy for patients with latent syphilis may not be optimal therapy for persons with asymptomatic neurosyphilis. However, the yield from CSF examination, in terms of newly diagnosed cases of neurosyphilis, is low.
Patients with any one of the criteria listed below should have a CSF examination before treatment:
- Neurologic or ophthalmic signs or symptoms;
- Other evidence of active syphilis (e.g., aortitis, gumma, or iritis);
- Treatment failure;
- HIV infection;
- A serum nontreponemal titer ≥1:32, unless the duration of infection is known to be <1 year; or
- Nonpenicillin therapy planned, unless the duration of infection is known to be <1 year.

If dictated by circumstances and patient preferences, a CSF examination may be performed for persons who do not meet these criteria. If a CSF examination is performed and the results show abnormalities consistent with CNS syphilis, the patient should be treated for neurosyphilis (see Neurosyphilis). All syphilis patients should be tested for HIV.

# Follow-Up

Quantitative nontreponemal serologic tests should be repeated at 6 months and again at 12 months. Limited data are available to guide evaluation of the response to therapy for patients with latent syphilis. If titers increase fourfold, if an initially high titer (≥1:32) fails to decline at least fourfold (two dilutions) within 12-24 months, or if the patient develops signs or symptoms attributable to syphilis, the patient should be evaluated for neurosyphilis and re-treated appropriately.

# Management of Sex Partners

Refer to General Principles, Management of Sex Partners.

# Special Considerations

# Penicillin Allergy

For patients who have latent syphilis and who are allergic to penicillin, nonpenicillin therapy should be used only after a CSF examination has excluded neurosyphilis. Nonpregnant, penicillin-allergic patients should be treated with one of the following regimens:

Doxycycline 100 mg orally 2 times a day,
or
Tetracycline 500 mg orally 4 times a day.

Either drug is administered for 2 weeks if the duration of infection is known to have been <1 year; otherwise, for 4 weeks.

# Pregnancy

Pregnant patients who are allergic to penicillin should be treated with penicillin, after desensitization if necessary (see Management of the Patient With a History of Penicillin Allergy and Syphilis During Pregnancy).

# HIV Infection

Refer to Syphilis Among HIV-Infected Patients.

# Late Syphilis

Late (tertiary) syphilis refers to gummatous and cardiovascular syphilis, but not to neurosyphilis. Nonallergic patients without evidence of neurosyphilis should be treated with the following regimen.

# Recommended Regimen

Benzathine penicillin G, 7.2 million units total, administered as 3 doses of 2.4 million units IM, at 1-week intervals.

# Other Management Considerations

Patients with symptomatic late syphilis should undergo CSF examination before therapy. Some experts treat all patients who have cardiovascular syphilis with a neurosyphilis regimen. The complete management of patients with cardiovascular or gummatous syphilis is beyond the scope of these guidelines; these patients should be managed in consultation with experts.

# Follow-Up

Minimal evidence is available regarding follow-up of patients with late syphilis. The clinical response depends partly on the nature of the lesions.

# Management of Sex Partners

Refer to General Principles, Management of Sex Partners.

# Special Considerations

# Penicillin Allergy

Patients allergic to penicillin should be treated according to the treatment regimens recommended for late latent syphilis.
# Pregnancy
Pregnant patients who are allergic to penicillin should be treated with penicillin, after desensitization, if necessary (see Management of the Patient With a History of Penicillin Allergy and Syphilis During Pregnancy).
# HIV Infection
Refer to Syphilis Among HIV-Infected Patients.
# Neurosyphilis
# Treatment
Central nervous system disease can occur during any stage of syphilis. A patient with syphilis who has clinical evidence of neurologic involvement (e.g., ophthalmic or auditory symptoms, or cranial nerve palsies) warrants a CSF examination. Although four decades of experience have confirmed the effectiveness of penicillin, the evidence to guide the choice of the best regimen is limited. Syphilitic eye disease is frequently associated with neurosyphilis, and patients with this disease should be treated according to the neurosyphilis treatment recommendations. A CSF examination should be performed for all such patients to identify those with CSF abnormalities who should have follow-up CSF examinations to assess the response to treatment. Patients who have neurosyphilis or syphilitic eye disease (e.g., uveitis, neuroretinitis, or optic neuritis) and who are not allergic to penicillin should be treated with the following regimen.
# Recommended Regimen
12-24 million units of aqueous crystalline penicillin G daily, administered as 2-4 million units IV every 4 hours, for 10-14 days.
If compliance with therapy can be assured, patients may be treated with the following alternative regimen.
# Alternative Regimen
2.4 million units of procaine penicillin IM daily, plus probenecid 500 mg orally 4 times a day, both for 10-14 days.
The durations of these regimens are shorter than that of the regimen used for late syphilis in the absence of neurosyphilis. Therefore, some experts administer benzathine penicillin G, 2.4 million units IM, after completion of these neurosyphilis treatment regimens to provide a comparable total duration of therapy.
# Other Management Considerations
Other considerations in the management of the patient with neurosyphilis are the following:
- All patients with syphilis should be tested for HIV.
- Many experts recommend treating patients with evidence of auditory disease caused by syphilis in the same manner as for neurosyphilis, regardless of the findings on CSF examination.
# Follow-Up
If CSF pleocytosis was present initially, the CSF examination should be repeated every 6 months until the cell count is normal. Follow-up CSF examinations also may be used to evaluate changes in the VDRL-CSF or CSF protein in response to therapy, though changes in these two parameters are slower and persistent abnormalities are of less certain importance. If the cell count has not decreased at 6 months, or if the CSF is not entirely normal by 2 years, re-treatment should be considered.
# Management of Sex Partners
Refer to General Principles, Management of Sex Partners.
# Special Considerations
# Penicillin Allergy
No data have been collected systematically to evaluate therapeutic alternatives to penicillin for the treatment of neurosyphilis. Therefore, patients who report being allergic to penicillin should be treated with penicillin, after desensitization if necessary, or should be managed in consultation with an expert. In some situations, skin testing to confirm penicillin allergy may be useful (see Management of the Patient With a History of Penicillin Allergy).
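Note that the neurosyphilis regimen above is stated both as a daily total (12-24 million units) and as a per-dose schedule (2-4 million units every 4 hours). As a purely illustrative cross-check of that arithmetic (not part of the recommendations; names are our own):

```python
# Illustrative cross-check of the IV penicillin schedule for neurosyphilis:
# 2-4 million units every 4 hours -> 6 doses/day -> 12-24 million units/day.
DOSE_RANGE_UNITS = (2_000_000, 4_000_000)  # per dose, every 4 hours
DOSES_PER_DAY = 24 // 4                    # = 6

daily_low, daily_high = (d * DOSES_PER_DAY for d in DOSE_RANGE_UNITS)
assert (daily_low, daily_high) == (12_000_000, 24_000_000)
```

Over a 10-14 day course, this schedule delivers 120-336 million units in total.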
# Pregnancy
Pregnant patients who are allergic to penicillin should be treated with penicillin, after desensitization if necessary (see Syphilis During Pregnancy).
# HIV Infection
Refer to Syphilis Among HIV-Infected Patients.
# Syphilis Among HIV-Infected Patients
# Diagnostic Considerations
Unusual serologic responses have been observed among HIV-infected persons who also have syphilis. Most reports involved serologic titers that were higher than expected, but false-negative serologic test results and delayed appearance of seroreactivity also have been reported. Nevertheless, both treponemal and nontreponemal serologic tests for syphilis are accurate for the majority of patients with syphilis and HIV coinfection. When clinical findings suggest that syphilis is present but serologic tests are nonreactive or confusing, it may be helpful to perform alternative tests such as biopsy of a lesion, darkfield examination, or direct fluorescent antibody staining of lesion material. Neurosyphilis should be considered in the differential diagnosis of neurologic disease among HIV-infected persons.
# Treatment
Although adequate research-based evidence is not available, published case reports and expert opinion suggest that HIV-infected patients with early syphilis are at increased risk for neurologic complications and have higher rates of treatment failure with currently recommended regimens. The magnitude of these risks, although not precisely defined, is probably small. No treatment regimens have been demonstrated to be more effective in preventing the development of neurosyphilis than those recommended for patients without HIV infection. Careful follow-up after therapy is essential.
# Primary and Secondary Syphilis Among HIV-Infected Patients
# Treatment
Treatment with benzathine penicillin G, 2.4 million units IM, as for patients without HIV infection, is recommended. Some experts recommend additional treatments, such as multiple doses of benzathine penicillin G as suggested for late syphilis, or other supplemental antibiotics in addition to benzathine penicillin G 2.4 million units IM.
# Other Management Considerations
CSF abnormalities are common among HIV-infected patients who have primary or secondary syphilis, but these abnormalities are of unknown prognostic significance. Most HIV-infected patients respond appropriately to currently recommended penicillin therapy; however, some experts recommend CSF examination before therapy and modification of treatment accordingly.
# Follow-Up
Patients should be evaluated clinically and serologically for treatment failure at 1 month and at 2, 3, 6, 9, and 12 months after therapy. Although of unproven benefit, some experts recommend performing a CSF examination after therapy (i.e., at 6 months). HIV-infected patients who meet the criteria for treatment failure should undergo CSF examination and be re-treated in the same manner as patients without HIV infection. CSF examination and re-treatment also should be strongly considered for patients in whom the suggested fourfold decrease in nontreponemal test titer does not occur within 3 months after therapy for primary or secondary syphilis. Most experts would re-treat such patients with benzathine penicillin G, 7.2 million units (as 3 weekly doses of 2.4 million units each), if the CSF examination is normal.
# Special Considerations
# Penicillin Allergy
Penicillin regimens should be used to treat HIV-infected patients in all stages of syphilis.
Skin testing to confirm penicillin allergy may be used (see Management of the Patient With a History of Penicillin Allergy), but data on the utility of that approach among immunocompromised patients are inadequate. Patients may be desensitized, then treated with penicillin.
# Latent Syphilis Among HIV-Infected Patients
# Diagnostic Considerations
Patients who have both latent syphilis (regardless of apparent duration) and HIV infection should undergo CSF examination before treatment.
# Treatment
A patient with latent syphilis, HIV infection, and a normal CSF examination can be treated with benzathine penicillin G, 7.2 million units (as 3 weekly doses of 2.4 million units each).
# Special Considerations
# Penicillin Allergy
Penicillin regimens should be used to treat all stages of syphilis among HIV-infected patients. Skin testing to confirm penicillin allergy may be used (see Management of the Patient With a History of Penicillin Allergy), but data on the utility of that approach among immunocompromised patients are inadequate. Patients may be desensitized, then treated with penicillin.
# Syphilis During Pregnancy
All women should be screened serologically for syphilis during the early stages of pregnancy. In populations in which utilization of prenatal care is not optimal, screening with an RPR card test, and treatment if that test is reactive, should be performed at the time pregnancy is diagnosed. In communities and populations with a high prevalence of syphilis, and for patients at high risk, serologic testing should be repeated during the third trimester and again at delivery. (Some states mandate screening at delivery for all women.) Any woman who delivers a stillborn infant after 20 weeks' gestation should be tested for syphilis. No infant should leave the hospital without the serologic status of the infant's mother having been determined at least once during pregnancy.
# Diagnostic Considerations
Seropositive pregnant women should be considered infected unless a treatment history is clearly documented in a medical or health department record and sequential serologic antibody titers have appropriately declined.
# Treatment
Penicillin is effective for preventing transmission to fetuses and for treating established infection among fetuses. Evidence is insufficient, however, to determine whether the specific, recommended penicillin regimens are optimal.
# Recommended Regimens
Treatment during pregnancy should be the penicillin regimen appropriate for the woman's stage of syphilis. Some experts recommend additional therapy (e.g., a second dose of benzathine penicillin G, 2.4 million units IM) 1 week after the initial dose, particularly for women in the third trimester of pregnancy and for women who have secondary syphilis during pregnancy.
# Other Management Considerations
Women who are treated for syphilis during the second half of pregnancy are at risk for premature labor or fetal distress, or both, if their treatment precipitates the Jarisch-Herxheimer reaction. These women should be advised to seek medical attention after treatment if they notice any change in fetal movements or if they have contractions. Stillbirth is a rare complication of treatment; however, because therapy is necessary to prevent further fetal damage, that concern should not delay treatment. All patients with syphilis should be tested for HIV.
# Follow-Up
Serologic titers should be checked monthly until the adequacy of treatment has been assured. The antibody response should be appropriate for the stage of disease.
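Serologic follow-up throughout these guidelines is expressed in fold changes of nontreponemal titers: because titers are reciprocal doubling dilutions, a fourfold change equals two dilution steps (e.g., 1:32 falling to 1:8). A minimal sketch of that arithmetic, for illustration only (function names are our own):

```python
import math

def titer_value(titer: str) -> int:
    """Parse a reciprocal dilution such as '1:32' to the integer 32."""
    return int(titer.split(":")[1])

def fold_change(before: str, after: str) -> float:
    """Fold decline between two nontreponemal titers; 4.0 means fourfold."""
    return titer_value(before) / titer_value(after)

def dilution_steps(before: str, after: str) -> float:
    """Number of doubling dilutions; a fourfold change is two dilutions."""
    return math.log2(fold_change(before, after))

assert fold_change("1:32", "1:8") == 4.0      # fourfold decline
assert dilution_steps("1:32", "1:8") == 2.0   # = two dilutions
```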
# Management of Sex Partners
Refer to General Principles, Management of Sex Partners.
# Special Considerations
# Penicillin Allergy
There are no proven alternatives to penicillin. A pregnant woman with a history of penicillin allergy should be treated with penicillin, after desensitization, if necessary. Skin testing may be helpful for some patients and in some settings (see Management of the Patient With a History of Penicillin Allergy). Tetracycline and doxycycline are contraindicated during pregnancy. Erythromycin should not be used, because it cannot be relied upon to cure an infected fetus.
# Congenital Syphilis
# Diagnostic Considerations
# Who Should Be Evaluated
Infants should be evaluated for congenital syphilis if they were born to seropositive (nontreponemal test confirmed by treponemal test) women who meet any of the following criteria:
- Have untreated syphilis;* or
- Were treated for syphilis during pregnancy with erythromycin; or
- Were treated for syphilis <1 month before delivery; or
- Were treated for syphilis during pregnancy with the appropriate penicillin regimen, but nontreponemal antibody titers did not decrease sufficiently after therapy to indicate an adequate response (≥ fourfold decrease); or
- Do not have a well-documented history of treatment for syphilis; or
- Were treated appropriately before pregnancy but had insufficient serologic follow-up to assure that they had responded appropriately to treatment and are not currently infected (≥ fourfold decrease for patients treated for early syphilis; stable or declining titers ≤1:4 for other patients).
No infant should leave the hospital without the serologic status of the infant's mother having been documented at least once during pregnancy. Serologic testing also should be performed at delivery in communities and populations at risk for congenital syphilis. Serologic tests can be nonreactive among infants infected late during their mother's pregnancy.
# Evaluation of the Infant
The clinical and laboratory evaluation of infants born to women described above should include the following:
- A thorough physical examination for evidence of congenital syphilis;
- A quantitative nontreponemal serologic test for syphilis performed on the infant's serum (not on cord blood);
- CSF analysis for cells, protein, and VDRL;
- Long-bone x-rays;
- Other tests as clinically indicated (e.g., chest x-ray; complete blood count, differential, and platelet count; liver function tests);
- For infants who have no evidence of congenital syphilis on the above evaluation, determination of the presence of specific antitreponemal IgM antibody by a testing method recognized by CDC as having either provisional or standard status;
- Pathologic examination of the placenta or umbilical cord using specific fluorescent antitreponemal antibody staining.
*A woman treated with a regimen other than those recommended for treatment of syphilis (for pregnant women or otherwise) in these guidelines should be considered untreated.
# Treatment
# Therapy Decisions
Infants should be treated for presumed congenital syphilis if they were born to mothers who, at delivery, had untreated syphilis or who had evidence of relapse or reinfection after treatment (see Congenital Syphilis, Diagnostic Considerations).
Additional criteria for presumptively treating infants for congenital syphilis are as follows:
- Physical evidence of active disease;
- X-ray evidence of active disease;
- A reactive VDRL-CSF or, for infants born to seroreactive mothers, an abnormal* CSF white blood cell count or protein, regardless of CSF serology;
- A serum quantitative nontreponemal serologic titer that is at least fourfold greater than the mother's titer;†
- Specific antitreponemal IgM antibody detected by a testing method that has been given provisional or standard status by CDC;
- Meeting the previously cited criteria for "Who Should Be Evaluated" without having been fully evaluated (see Congenital Syphilis, Diagnostic Considerations).
*In the immediate newborn period, interpretation of CSF test results may be difficult: normal values vary with gestational age and are higher among preterm infants. Other causes of elevated values also should be considered when an infant is being evaluated for congenital syphilis. Although values as high as 25 white blood cells (WBC)/mm³ and protein of 150 mg/dL occur among normal neonates, some experts recommend that lower values (5 WBC/mm³ and protein of 40 mg/dL) be considered the upper limits of normal. The infant should be treated if test results cannot exclude infection.
†The absence of a fourfold greater titer for an infant cannot be used as evidence against congenital syphilis.
# NOTE: Infants with clinically evident congenital syphilis should have an ophthalmologic examination.
# Recommended Regimens
Aqueous crystalline penicillin G, 100,000-150,000 units/kg/day (administered as 50,000 units/kg IV every 12 hours during the first 7 days of life and every 8 hours thereafter) for 10-14 days, or
Procaine penicillin G, 50,000 units/kg IM daily in a single dose for 10-14 days.
If more than 1 day of therapy is missed, the entire course should be restarted.
An infant whose complete evaluation was normal and whose mother was a) treated for syphilis during pregnancy with erythromycin, or b) treated for syphilis <1 month before delivery, or c) treated with an appropriate regimen before or during pregnancy but did not yet have an adequate serologic response, should be treated with benzathine penicillin G, 50,000 units/kg IM in a single dose. In some cases, infants with a normal complete evaluation for whom follow-up can be assured can be followed closely without treatment.
# Treatment of Older Infants and Children With Congenital Syphilis
After the newborn period, children diagnosed with syphilis should have a CSF examination to exclude neurosyphilis, and records should be reviewed to assess whether the child has congenital or acquired syphilis (see Primary and Secondary Syphilis and Latent Syphilis). Any child who is thought to have congenital syphilis (or who has neurologic involvement) should be treated with aqueous crystalline penicillin G, 200,000-300,000 units/kg/day IV or IM (administered as 50,000 units/kg every 4-6 hours) for 10-14 days.
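The aqueous penicillin regimen for neonates above changes its dosing interval with postnatal age: every 12 hours (two doses, 100,000 units/kg/day) during the first 7 days of life, then every 8 hours (three doses, 150,000 units/kg/day). A minimal sketch of that schedule, for illustration only (the function name and structure are our own, not part of the guidelines):

```python
def congenital_syphilis_iv_schedule(weight_kg: float, age_days: int):
    """Illustrative only: aqueous crystalline penicillin G schedule for
    congenital syphilis per the regimen above (50,000 units/kg per dose;
    every 12 hours through day 7 of life, every 8 hours thereafter)."""
    per_dose = 50_000 * weight_kg
    interval_h = 12 if age_days <= 7 else 8
    doses_per_day = 24 // interval_h
    return per_dose, interval_h, per_dose * doses_per_day

# A 3-kg neonate: 150,000 units q12h (300,000 units/day) in the first week,
# then 150,000 units q8h (450,000 units/day) thereafter.
assert congenital_syphilis_iv_schedule(3, 5) == (150_000, 12, 300_000)
assert congenital_syphilis_iv_schedule(3, 10) == (150_000, 8, 450_000)
```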
# Follow-Up
A seroreactive infant (or an infant whose mother was seroreactive at delivery) who is not treated for congenital syphilis during the perinatal period should receive careful follow-up examinations at 1, 2, 3, 6, and 12 months of age. Nontreponemal antibody titers should decline by 3 months of age and should be nonreactive by 6 months of age if the infant was not infected and the titers were the result of passive transfer of antibody from the mother. If these titers are found to be stable or increasing, the child should be re-evaluated, including a CSF examination, and fully treated. Passively transferred treponemal antibodies may be present for as long as 1 year; if they are present for >1 year, the infant should be re-evaluated and treated for congenital syphilis. Treated infants also should be followed every 2-3 months to assure that nontreponemal antibody titers decline; the titers should become nonreactive by 6 months of age (the response may be slower for infants treated after the neonatal period). Treponemal tests should not be used to evaluate the response to treatment, because test results can remain positive despite effective therapy if the child was infected. Infants with CSF pleocytosis should undergo CSF examination every 6 months until the cell count is normal. If the cell count is still abnormal after 2 years, or if a downward trend is not present at each examination, the child should be re-treated. The VDRL-CSF also should be checked at 6 months; if it is still reactive, the infant should be re-treated. Follow-up of children treated for congenital syphilis after the newborn period should be the same as that prescribed for congenital syphilis among neonates.
# Special Considerations
# Penicillin Allergy
Children who require treatment for syphilis after the newborn period, but who have a history of penicillin allergy, should be treated with penicillin, after desensitization, if necessary. Skin testing may be helpful for some patients and in some settings (see Management of the Patient With a History of Penicillin Allergy).
# HIV Infection
Mothers of infants with congenital syphilis should be tested for HIV. Infants born to mothers who have HIV infection should be referred for evaluation and appropriate follow-up. No data exist to suggest that infants with congenital syphilis whose mothers are coinfected with HIV require different evaluation, therapy, or follow-up for syphilis than is recommended for all infants.
# Management of the Patient With a History of Penicillin Allergy
No proven alternatives to penicillin are available for treating neurosyphilis, congenital syphilis, or syphilis among pregnant women. Penicillin also is recommended for use, whenever possible, with HIV-infected patients. Unfortunately, 3%-10% of the adult population in the United States have experienced urticaria, angioedema, or anaphylaxis (upper airway obstruction, bronchospasm, or hypotension) with penicillin therapy. Re-administration of penicillin can cause severe immediate reactions among these patients. Because anaphylactic reactions to penicillin can be fatal, every effort should be made to avoid administering penicillin to penicillin-allergic patients, unless the anaphylactic sensitivity has been removed by acute desensitization. However, only approximately 10% of persons who report a history of severe allergic reactions to penicillin are still allergic: with the passage of time after an allergic reaction, most persons who have experienced a severe reaction stop expressing penicillin-specific IgE. These persons can be treated safely with penicillin. Many studies have found that skin testing with the major and minor determinants can reliably identify persons at high risk for penicillin reactions. Although these reagents are easily generated and have been available in academic centers for >30 years, currently only penicilloyl-poly-L-lysine (Pre-Pen, the major determinant) and penicillin G are available commercially.
Experts estimate that testing with only the major determinant and penicillin G detects 90%-97% of currently allergic patients. However, because skin testing without the minor determinants would still miss 3%-10% of allergic patients, and because serious or fatal reactions can occur among these minor-determinant-positive patients, experts suggest caution when the full battery of skin-test reagents listed below is not available.
# Recommendations
If the full battery of skin-test reagents is available, including the major and minor determinants (see Penicillin Allergy Skin Testing), patients who report a history of penicillin reaction and who are skin-test negative can receive conventional penicillin therapy. Skin-test-positive patients should be desensitized.
If the full battery of skin-test reagents, including the minor determinants, is not available, the patient should be skin tested using penicilloyl (the major determinant, Pre-Pen) and penicillin G. Those with positive tests should be desensitized. Some experts believe that persons with negative tests, in that situation, should be regarded as probably allergic and should be desensitized. Others suggest that those with negative skin tests can be test-dosed gradually with oral penicillin in a monitored setting in which treatment for an anaphylactic reaction is possible.
# Penicillin Allergy Skin Testing
Patients at high risk for anaphylaxis (i.e., those with a history of penicillin-related anaphylaxis, asthma or other diseases that would make anaphylaxis more dangerous, or therapy with beta-adrenergic blocking agents) should be tested with 100-fold dilutions of the full-strength skin-test reagents before testing with full-strength reagents. In these situations, patients should be tested in a monitored setting in which treatment for an anaphylactic reaction is possible. If possible, the patient should not have taken antihistamines (e.g., chlorpheniramine maleate or terfenadine during the past 24 hours, diphenhydramine HCl or hydroxyzine during the past 4 days, or astemizole during the past 3 weeks).
# Reagents (Adapted from Beall*)
# Major Determinant
- Benzylpenicilloyl poly-L-lysine (Pre-Pen) (6 × 10⁻⁵ M).
# Minor Determinant Precursors†
- Benzylpenicillin G (10⁻² M, 3.3 mg/mL, 6000 U/mL),
- Benzylpenicilloate (10⁻² M, 3.3 mg/mL),
- Benzylpenilloate (or penicilloyl propylamine) (10⁻² M, 3.3 mg/mL).
# Positive Control
- Commercial histamine for epicutaneous skin testing (1 mg/mL).
# Negative Control
- Diluent used to dissolve the other reagents, usually phenol saline.
*Reprinted with permission from G.N. Beall in Annals of Internal Medicine.
†Aged penicillin is not an adequate source of minor determinants. Penicillin G should be freshly prepared or should come from a fresh-frozen source.
# Procedures
Dilute the antigens 100-fold for preliminary testing if the patient has had a life-threatening reaction, or 10-fold if the patient has had another type of immediate, generalized reaction within the past year.
Epicutaneous (prick) tests. Duplicate drops of skin-test reagent are placed on the volar surface of the forearm. The underlying epidermis is pierced with a 26-gauge needle without drawing blood. An epicutaneous test is positive if the average wheal diameter after 15 minutes is 4 mm larger than that of the negative controls; otherwise, the test is negative. The histamine controls should be positive, to assure that results are not falsely negative because of the effect of antihistaminic drugs.
Intradermal test.
If epicutaneous tests are negative, duplicate 0.02-mL intradermal injections of the negative control and the antigen solutions are made into the volar surface of the forearm, using a 26- or 27-gauge needle on a syringe. The crossed diameters of the wheals induced by the injections should be recorded. An intradermal test is positive if the average wheal diameter 15 minutes after injection is 2 mm or more larger than the initial wheal size and also is at least 2 mm larger than that of the negative controls; otherwise, the test is negative.
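The reading rules for the two skin tests reduce to simple wheal-diameter comparisons. A minimal sketch of that logic, for illustration only (function names and parameters are our own, and the criteria are read as "at least" thresholds):

```python
def epicutaneous_positive(avg_wheal_mm: float, neg_control_mm: float) -> bool:
    """Prick test: positive if the average wheal at 15 minutes is
    at least 4 mm larger than the negative control."""
    return avg_wheal_mm - neg_control_mm >= 4.0

def intradermal_positive(avg_wheal_mm: float, initial_wheal_mm: float,
                         neg_control_mm: float) -> bool:
    """Intradermal test: positive if the average wheal at 15 minutes is
    at least 2 mm larger than both the initial wheal and the negative control."""
    return (avg_wheal_mm - initial_wheal_mm >= 2.0 and
            avg_wheal_mm - neg_control_mm >= 2.0)

# Example: a 9-mm average wheal against a 3-mm negative control is a
# positive prick test.
assert epicutaneous_positive(9.0, 3.0)
```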
# Desensitization
Patients who have a positive skin test to one of the penicillin determinants can be desensitized. This is a straightforward, relatively safe procedure that can be performed orally or IV. Although the two approaches have not been compared, oral desensitization is thought to be safer, simpler, and easier. Patients should be desensitized in a hospital setting, because serious IgE-mediated allergic reactions, although unlikely, can occur. Desensitization usually can be completed in approximately 4 hours, after which the first dose of penicillin is given (Table 1). STD programs should have a referral center where patients with positive skin tests can be desensitized. After desensitization, patients must be maintained on penicillin continuously for the duration of the course of therapy.
# DISEASES CHARACTERIZED BY URETHRITIS AND CERVICITIS
# Management of the Patient With Urethritis
Urethritis, or inflammation of the urethra, when caused by infection is characterized by the discharge of mucoid or purulent material and by burning during urination. However, asymptomatic infections are common. The two bacterial agents primarily responsible for urethritis among men are N. gonorrhoeae and C. trachomatis. Testing to determine the specific diagnosis is recommended, both because these infections are reportable to state health departments and because a specific diagnosis may improve treatment compliance and the likelihood of partner notification. If diagnostic tools (e.g., a Gram stain and microscope) are unavailable, health-care providers should treat patients for both infections. The added expense of treating a person with nongonococcal urethritis (NGU) for both infections also should encourage the health-care provider to make a specific diagnosis. (See Nongonococcal Urethritis, Chlamydial Infections, and Gonococcal Infections.)
# Nongonococcal Urethritis
NGU, or inflammation of the urethra not caused by gonococcal infection, is characterized by a mucoid or purulent urethral discharge. In the presence or absence of a discharge, NGU may be diagnosed by the finding of ≥5 polymorphonuclear leukocytes per oil-immersion field on a smear of an intraurethral swab specimen. Increasingly, the leukocyte esterase test (LET) is being used to screen urine from asymptomatic males for evidence of urethritis (either gonococcal or nongonococcal). The diagnosis of urethritis among males tested with LET should be confirmed with a Gram-stained smear of a urethral swab specimen. C. trachomatis is the most frequent cause of NGU (23%-55% of cases); however, prevalence varies among age groups, with lower prevalence found among older men. Ureaplasma urealyticum causes 20%-40% of cases, and Trichomonas vaginalis 2%-5%. HSV is occasionally responsible for cases of NGU. The etiology of the remaining cases of NGU is unknown. Complications of NGU among men infected with C. trachomatis include epididymitis and Reiter's syndrome. Female sex partners of men who have NGU are at risk for chlamydial infection and associated complications.
# Recommended Regimen
Doxycycline 100 mg orally 2 times a day for 7 days.*
# Alternative Regimens
Erythromycin base 500 mg orally 4 times a day for 7 days or Erythromycin ethylsuccinate 800 mg orally 4 times a day for 7 days.
If a patient cannot tolerate high-dose erythromycin schedules, one of the following regimens may be used: Erythromycin base 250 mg orally 4 times a day for 14 days or Erythromycin ethylsuccinate 400 mg orally 4 times a day for 14 days.
*Azithromycin 1 g in a single dose, according to the manufacturer's data, is equivalent to doxycycline; however, that study has not been published in a peer-reviewed journal. For a discussion comparing azithromycin and doxycycline, refer to Chlamydial Infections.
Treatment with the recommended regimen has been demonstrated in most cases to result in alleviation of symptoms and in microbiologic cure of infection. If the etiologic organism is susceptible to the antimicrobial agent used, sequelae specific to that organism will be prevented, as will further transmission; this is especially important for cases of NGU caused by C. trachomatis.
# Follow-Up
Patients should be instructed to return for evaluation if symptoms persist or recur after completion of therapy. Patients with persistent or recurrent urethritis should be re-treated with the initial regimen if they failed to comply with the treatment regimen or if they were re-exposed to an untreated sex partner. Otherwise, a wet-mount examination and culture of an intraurethral swab specimen for T. vaginalis should be performed; if these are negative, the patient should be re-treated with an alternative regimen extended to 14 days (e.g., erythromycin base 500 mg orally 4 times a day for 14 days). The use of alternative regimens ensures treatment of possible tetracycline-resistant U. urealyticum. Effective regimens have not been identified for treating patients who experience persistent symptoms or frequent recurrences after treatment with doxycycline and erythromycin. Urologic examinations do not usually reveal a specific etiology. Such patients should be assured that, although they have persistent or frequently recurring urethritis, the condition is not known to cause complications for them or their sex partners and is not known to be sexually transmitted. However, men exposed to a new sex partner should be re-evaluated. Symptoms alone, without documentation of signs or laboratory evidence of urethral inflammation, are not a sufficient basis for re-treatment.
# Management of Sex Partners
Patients should be instructed to refer sex partners for evaluation and treatment. Because exposure intervals have received limited evaluation, the following recommendations are somewhat arbitrary. Sex partners of symptomatic patients should be evaluated and treated if their last sexual contact with the index patient was within 30 days of onset of symptoms. If the index patient is asymptomatic, sex partners whose last sexual contact with the index patient was within 60 days of diagnosis should be evaluated and treated. If the patient's last sexual intercourse preceded these time intervals, the most recent sex partner should be treated. A specific diagnosis may facilitate partner referral and partner cooperation; therefore, testing for both gonorrhea and chlamydia is encouraged. Patients should be instructed to abstain from sexual intercourse until patient and partners are cured. In the absence of a microbiologic test-of-cure, this means until therapy is completed and patient and partners are without symptoms or signs.
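The partner-referral windows in the paragraph above, which recur in essentially the same form for chlamydial and gonococcal infections later in these guidelines, follow one rule: 30 days before symptom onset for symptomatic index patients, 60 days before diagnosis for asymptomatic ones, and the most recent partner regardless. A minimal sketch of that rule, for illustration only (names and structure are our own):

```python
from datetime import date, timedelta

def partner_needs_evaluation(last_contact: date, reference: date,
                             index_symptomatic: bool, most_recent: bool) -> bool:
    """Illustrative only: the exposure-window rule described above.
    `reference` is the date of symptom onset for symptomatic index
    patients and the date of diagnosis for asymptomatic ones."""
    window = timedelta(days=30 if index_symptomatic else 60)
    # The most recent partner is evaluated and treated even outside the window.
    return most_recent or reference - last_contact <= window

# Contact 45 days before symptom onset: outside the 30-day window,
# but still evaluated if that person is the most recent partner.
onset = date(1996, 6, 1)
assert not partner_needs_evaluation(onset - timedelta(days=45), onset, True, False)
assert partner_needs_evaluation(onset - timedelta(days=45), onset, True, True)
```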
# Special Considerations
# HIV Infection
Persons with HIV infection and NGU should receive the same treatment as patients without HIV infection.
# Management of the Patient With Mucopurulent Cervicitis
Mucopurulent cervicitis (MPC) is characterized by a yellow endocervical exudate visible in the endocervical canal or on an endocervical swab specimen. Some experts also make the diagnosis on the basis of an increased number of polymorphonuclear leukocytes on a cervical Gram stain. The condition is asymptomatic among many women, but some experience an abnormal vaginal discharge and abnormal vaginal bleeding (e.g., after intercourse). The condition can be caused by C. trachomatis or N. gonorrhoeae, although in most cases neither organism can be isolated. Patients with MPC should have cervical specimens tested for C. trachomatis and cultured for N. gonorrhoeae. MPC is not a sensitive predictor of infection with these organisms, however, because most women who have C. trachomatis or N. gonorrhoeae do not have MPC.
# Treatment
The results of tests for C. trachomatis or N. gonorrhoeae should determine the need for treatment, unless the likelihood of infection with either organism is high or the patient is unlikely to return for treatment. Treatment for MPC should follow these rules (see the sketch after this list):
- Treat for both gonorrhea and chlamydia in patient populations with a high prevalence of both infections, such as patients seen at many STD clinics.
- Treat for chlamydia only, if the prevalence of N. gonorrhoeae is low but the likelihood of chlamydia is substantial.
- Await test results if the prevalence of both infections is low and if compliance with a recommendation for a return visit is likely.
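As referenced above, the three rules amount to a small decision table driven by local prevalence and the likelihood of follow-up. The following is a hedged, illustrative sketch only; the boolean inputs and names are our own inventions, since the guidelines deliberately leave "high" and "low" prevalence to clinical judgment:

```python
def mpc_initial_management(gc_prevalence_high: bool,
                           chlamydia_likely: bool,
                           will_return: bool) -> str:
    """Illustrative decision sketch mirroring the three rules above;
    not a clinical algorithm."""
    if gc_prevalence_high and chlamydia_likely:
        return "treat for gonorrhea and chlamydia"
    if chlamydia_likely:          # gonorrhea prevalence low
        return "treat for chlamydia only"
    if will_return:               # both prevalences low
        return "await test results"
    return "treat presumptively"  # follow-up unlikely, per the intro sentence

assert mpc_initial_management(True, True, False) == "treat for gonorrhea and chlamydia"
assert mpc_initial_management(False, True, True) == "treat for chlamydia only"
```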
# Follow-Up
Follow-up should be as recommended for the infections for which the woman is being treated. Management of sex partners of women with MPC should be appropriate for the STD (C. trachomatis or N. gonorrhoeae) identified. Partners should be notified, examined, and treated on the basis of test results. However, partners of patients who are treated presumptively should receive the same treatment as the index patient.
# Special Considerations
# HIV Infection
Persons with HIV infection and MPC should receive the same treatment as patients without HIV infection.
# Chlamydial Infections
Chlamydial genital infection is common among adolescents and young adults in the United States. Asymptomatic infection is common among both men and women. Testing sexually active adolescent girls for chlamydial infection should be routine during gynecologic examinations, even if symptoms are not present. Screening of young adult women 20-24 years of age also is suggested, particularly for those who do not consistently use barrier contraceptives and who have new or multiple partners. Periodic surveys of chlamydial prevalence among these groups should be conducted to confirm the validity of using these recommendations in specific clinical settings.
# Chlamydial Infections Among Adolescents and Adults
The following recommended treatment regimens and the alternative regimens relieve symptoms and cure infection. Among women, several important sequelae may result from C. trachomatis infection, the most serious among them being PID, ectopic pregnancy, and infertility. Some women with apparently uncomplicated cervical infection already have subclinical upper-reproductive-tract infection. Treatment of cervical infection is believed to reduce the likelihood of sequelae, although few studies have demonstrated that antimicrobial therapy reduces the risk of subsequent ascending infection or decreases the incidence of the long-term complications of tubal infertility and ectopic pregnancy. Treatment of infected patients prevents transmission to sex partners and, for infected pregnant women, may prevent transmission of C. trachomatis to infants during birth. Treatment of sex partners helps to prevent re-infection of the index patient and infection of other partners. Because of the high prevalence of coinfection with C. trachomatis among patients with gonococcal infection, presumptive treatment for chlamydia of patients being treated for gonorrhea is appropriate, particularly if no diagnostic test for C. trachomatis infection will be performed (see Gonococcal Infections).
# Recommended Regimens
Doxycycline 100 mg orally 2 times a day for 7 days, or Azithromycin 1 g orally in a single dose.
# Alternative Regimens
Ofloxacin 300 mg orally 2 times a day for 7 days or Erythromycin base 500 mg orally 4 times a day for 7 days or Erythromycin ethylsuccinate 800 mg orally 4 times a day for 7 days or Sulfisoxazole 500 mg orally 4 times a day for 10 days (efficacy inferior to that of the other regimens).
Doxycycline and azithromycin appear similar in efficacy and toxicity; however, the safety and efficacy of azithromycin for persons ≤15 years of age have not been established. Doxycycline has a longer history of extensive use, safety, and efficacy, and the advantage of low cost. Azithromycin has the advantage of single-dose administration. Ofloxacin is similar in efficacy to doxycycline and azithromycin, but it is more expensive than doxycycline, cannot be used during pregnancy or for persons ≤17 years of age, and offers no advantage in dosing. Ofloxacin is the only quinolone with proven efficacy against chlamydial infection. Sulfisoxazole is the least desirable treatment because of its inferior efficacy.
# Follow-Up
Patients need not be retested for chlamydia after completing treatment with doxycycline, azithromycin, or ofloxacin unless symptoms persist or re-infection is suspected. Retesting may be considered 3 weeks after completion of treatment with erythromycin, sulfisoxazole, or amoxicillin. The validity of chlamydial culture performed <3 weeks after completion of therapy to identify patients who did not respond to therapy has not been established; false-negative results may occur because of small numbers of chlamydial organisms. In addition, nonculture tests conducted <3 weeks after completion of therapy for patients successfully treated may sometimes be false-positive because of continued excretion of dead organisms. Some studies have demonstrated high rates of infection among women retested several months after treatment, presumably because of re-infection. Rescreening women several months after treatment may be an effective strategy for detecting further morbidity in some populations.
# Management of Sex Partners
Patients should be instructed to refer their sex partners for evaluation and treatment. Because exposure intervals have received limited evaluation, the following recommendations are somewhat arbitrary. Sex partners of symptomatic patients with C.
trachomatis should be evaluated and treated for chlamydia if their last sexual contact with the index patient was within 30 days of onset of the index patient's symptoms. If the index patient is asymptomatic, sex partners whose last sexual contact with the index patient was within 60 days of diagnosis should be evaluated and treated. Health-care providers should treat the most recent sex partner even if the last sexual intercourse took place before these time intervals. Patients should be instructed to avoid sex until they and their partners are cured. In the absence of a microbiologic test-of-cure, this means until therapy is completed and patient and partner(s) are without symptoms.
# Special Considerations
# Pregnancy
Doxycycline and ofloxacin are contraindicated for pregnant women, and sulfisoxazole is contraindicated for pregnant women near term and for women who are nursing. The safety and efficacy of azithromycin for pregnant and lactating women have not been established. Repeat testing, preferably by culture, after completion of therapy with the following regimens is recommended, because few data exist regarding the effectiveness of these regimens and because the frequent gastrointestinal side effects of erythromycin may discourage a patient from complying with the prescribed treatment.
# Recommended Regimen for Pregnant Women
Erythromycin base 500 mg orally 4 times a day for 7 days.
# Alternative Regimens for Pregnant Women
Erythromycin base 250 mg orally 4 times a day for 14 days or Erythromycin ethylsuccinate 800 mg orally 4 times a day for 7 days or Erythromycin ethylsuccinate 400 mg orally 4 times a day for 14 days or, if erythromycin cannot be tolerated, Amoxicillin 500 mg orally 3 times a day for 7-10 days.
# NOTE: Erythromycin estolate is contraindicated during pregnancy because of drug-related hepatotoxicity. Few data exist concerning the efficacy of amoxicillin.
# HIV Infection
Persons with HIV infection and chlamydial infection should receive the same treatment as patients without HIV infection.
# Chlamydial Infections Among Infants
Prenatal screening of pregnant women can prevent chlamydial infection among neonates. Pregnant women <25 years of age and those with new or multiple sex partners should, in particular, be targeted for screening. Periodic surveys of chlamydial prevalence can be conducted to confirm the validity of using these recommendations in specific clinical settings. C. trachomatis infection of neonates results from perinatal exposure to the mother's infected cervix. The prevalence of C. trachomatis infection generally exceeds 5% among pregnant women, regardless of race/ethnicity or socioeconomic status. Neonatal ocular prophylaxis with silver nitrate solution or antibiotic ointments is ineffective in preventing perinatal transmission of chlamydial infection from mother to infant; however, ocular prophylaxis with those agents does prevent gonococcal ophthalmia and should be continued for that reason (see Prevention of Ophthalmia Neonatorum). Initial C. trachomatis perinatal infection involves the mucous membranes of the eye, oropharynx, urogenital tract, and rectum. C. trachomatis infection among neonates is most often recognized as conjunctivitis developing 5-12 days after birth. Chlamydia is the most frequent identifiable infectious cause of ophthalmia neonatorum. C. trachomatis also is a common cause of subacute, afebrile pneumonia with onset from 1 to 3 months of age.
Asymptomatic infections of the oropharynx, genital tract, and rectum among neonates also occur.
# Ophthalmia Neonatorum Caused by C. trachomatis
A chlamydial etiology should be considered for all infants with conjunctivitis through 30 days of age.
# Diagnostic Considerations
Sensitive and specific methods for diagnosing chlamydial ophthalmia among neonates include isolation by tissue culture and nonculture tests (e.g., direct fluorescent antibody tests and immunoassays). Giemsa-stained smears are specific for C. trachomatis but are not sensitive. Specimens must contain conjunctival cells, not exudate alone. Specimens for culture isolation and nonculture tests should be obtained from the everted eyelid using a Dacron-tipped swab or the swab specified by the manufacturer's test kit. A specific diagnosis of C. trachomatis infection confirms the need for chlamydial treatment not only for the neonate, but also for the mother and her sex partner(s). Ocular exudate from infants being evaluated for chlamydial conjunctivitis also should be tested for N. gonorrhoeae.
# Recommended Regimen
Erythromycin 50 mg/kg/day orally divided into 4 doses for 10-14 days.
Topical antibiotic therapy alone is inadequate for treatment of chlamydial infection and is unnecessary when systemic treatment is undertaken.
# Follow-Up
The possibility of chlamydial pneumonia should be considered. The efficacy of erythromycin treatment is approximately 80%; a second course of therapy may be required. Follow-up of infants to determine resolution is recommended.
# Management of Mothers and Their Sex Partners
The mothers of infants who have chlamydial infection, and those mothers' sex partners, should be evaluated and treated according to the treatment recommendations for adults with chlamydial infections (see Chlamydial Infections Among Adolescents and Adults).
# Infant Pneumonia Caused by C. trachomatis
Characteristic signs of chlamydial pneumonia among infants include a repetitive staccato cough with tachypnea, and hyperinflation and bilateral diffuse infiltrates on a chest roentgenogram. Wheezing is rare, and infants are typically afebrile. Peripheral eosinophilia, documented in a complete blood count, sometimes is observed among infants with chlamydial pneumonia. Because variation from this clinical presentation is common, initial treatment and diagnostic tests should encompass C. trachomatis for all infants 1-3 months of age who have possible pneumonia.
# Diagnostic Considerations
Specimens for chlamydial testing should be collected from the nasopharynx. Tissue culture remains the definitive standard for diagnosing chlamydial pneumonia; nonculture tests can be used with the knowledge that nonculture tests of nasopharyngeal specimens have lower sensitivity and specificity than nonculture tests of ocular specimens. Tracheal aspirates and lung biopsy specimens, if collected, should be tested for C. trachomatis. The microimmunofluorescence test for C. trachomatis antibody is useful but not widely available; an acute IgM antibody titer ≥1:32 is strongly suggestive of C. trachomatis pneumonia. Because of the delay in obtaining test results for chlamydia, the decision to include an agent active against C. trachomatis in the antibiotic regimen frequently must be made on the basis of clinical and radiologic findings. Conducting tests for chlamydial infection is worthwhile, not only to assist in the management of the infant's illness, but also to determine the need for treatment of the mother and her sex partners.
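Both the ophthalmia regimen above and the pneumonia regimen that follows use the same weight-based oral erythromycin schedule, with the daily total divided into four doses. A minimal worked example, for illustration only (the function name is ours):

```python
def erythromycin_infant_dose_mg(weight_kg: float,
                                daily_mg_per_kg: float = 50.0,
                                doses_per_day: int = 4) -> float:
    """Illustrative only: size of a single dose (mg) for the infant
    erythromycin regimen above (50 mg/kg/day divided into 4 doses)."""
    return weight_kg * daily_mg_per_kg / doses_per_day

# A 4-kg infant: 200 mg/day total, i.e., 50 mg every 6 hours.
assert erythromycin_infant_dose_mg(4.0) == 50.0
```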
# Recommended Regimen
Erythromycin 50 mg/kg/day orally divided into 4 doses for 10-14 days.
# Follow-Up
The effectiveness of erythromycin treatment is approximately 80%; a second course of therapy may be required. Follow-up of infants is recommended to determine that the pneumonia has resolved. Some infants with chlamydial pneumonia have had abnormal pulmonary function tests later in childhood.
# Management of Mothers and Their Sex Partners
Mothers of infants who have chlamydial infection, and those mothers' sex partners, should be evaluated and treated according to the recommended treatment of adults with chlamydial infections (see Chlamydial Infections Among Adolescents and Adults).
# Infants Born to Mothers Who Have Chlamydial Infection
Infants born to mothers who have untreated chlamydia are at high risk for infection and should be evaluated and treated in the same manner as infants with ophthalmia neonatorum caused by C. trachomatis.
# Chlamydial Infections Among Children
Sexual abuse must be considered as a cause of chlamydial infection among preadolescent children, although perinatally transmitted C. trachomatis infection of the nasopharynx, urogenital tract, and rectum may persist beyond 1 year (see Sexual Assault or Abuse of Children). Because of the potential for a criminal investigation and legal proceedings for sexual abuse, the diagnosis of C. trachomatis among preadolescent children requires the high specificity provided by isolation in cell culture. Cultures should be confirmed by microscopic identification of the characteristic intracytoplasmic inclusions, preferably by fluorescein-conjugated monoclonal antibodies specific for C. trachomatis.
# Diagnostic Considerations
Nonculture chlamydia tests should not be used because of the possibility of false-positive test results. With respiratory tract specimens, false-positive results can occur because of cross-reaction of test reagents with Chlamydia pneumoniae; with genital and anal specimens, false-positive results can occur because of cross-reaction with fecal flora.
# Recommended Regimens
# Children who weigh <45 kg
Erythromycin 50 mg/kg/day divided into four doses for 10-14 days.
# NOTE: The effectiveness of erythromycin treatment is approximately 80%; a second course of therapy may be required.
# Children who weigh ≥45 kg but who are <8 years of age
Use the adult erythromycin regimens (see Chlamydial Infections Among Adolescents and Adults).
# Children ≥8 years of age
Use the adult regimens of doxycycline or tetracycline (see Chlamydial Infections Among Adolescents and Adults). Adult regimens of azithromycin also may be considered for adolescents.
# Other Management Considerations
See Sexual Assault or Abuse of Children.
# Follow-Up
Follow-up cultures are necessary to ensure that treatment has been effective.
# Gonococcal Infections
# Gonococcal Infections Among Adolescents and Adults
An estimated 1 million new infections with N. gonorrhoeae occur in the United States each year. Most infections among men produce symptoms that cause the patient to seek curative treatment soon enough to prevent serious sequelae, but not soon enough to prevent transmission to others. Many infections among women do not produce recognizable symptoms until complications such as PID have occurred. PID, whether symptomatic or asymptomatic, can cause tubal scarring leading to infertility or ectopic pregnancy.
Because gonococcal infections among women often are asymptomatic, a primary measure for controlling gonorrhea in the United States has been the screening of high-risk women.
# Uncomplicated Gonococcal Infections
# Recommended Regimens
Ceftriaxone 125 mg IM in a single dose or Cefixime 400 mg orally in a single dose or Ciprofloxacin 500 mg orally in a single dose or Ofloxacin 400 mg orally in a single dose
# PLUS
A regimen effective against possible coinfection with C. trachomatis, such as doxycycline 100 mg orally 2 times a day for 7 days.
Many antibiotics are safe and effective for treating gonorrhea, eradicating N. gonorrhoeae, ending the possibility of further transmission, relieving symptoms, and reducing the chances of sequelae. Selection of a treatment regimen for N. gonorrhoeae infection requires consideration of the anatomic site of infection, the resistance of N. gonorrhoeae strains to antimicrobials, the possibility of concurrent infection with C. trachomatis, and the side effects and costs of the various treatment regimens. Because coinfection with C. trachomatis is common, persons treated for gonorrhea should be treated presumptively with a regimen that is effective against C. trachomatis (see Chlamydial Infections). Most experts agree that other regimens recommended for the treatment of C. trachomatis infection also are likely to be satisfactory for the treatment of coinfection (see Chlamydial Infections). However, studies have not been conducted to investigate possible interactions between other treatments for N. gonorrhoeae and C. trachomatis, including interactions influencing the effectiveness and side effects of cotreatment. In clinical trials, these recommended regimens cured >95% of anal and genital infections; any of the regimens may be used for uncomplicated anal or genital infection. Published studies indicate that ceftriaxone 125 mg and ciprofloxacin 500 mg can cure ≥90% of pharyngeal infections; if pharyngeal infection is a concern, one of these two regimens should be used.
Ceftriaxone in a single dose of either 125 mg or 250 mg provides sustained, high bactericidal levels in the blood. Extensive clinical experience indicates that both doses are safe and effective for the treatment of uncomplicated gonorrhea at all sites. In the past, the 250 mg dose was recommended on the supposition that routine use of a higher dose might forestall the development of resistance. However, on the basis of ceftriaxone's activity against N. gonorrhoeae, its pharmacokinetics, and the results in clinical trials of doses as low as 62.5 mg, the 125 mg dose appears to have a therapeutic reserve at least as large as that of other accepted treatment regimens. No ceftriaxone-resistant strains of N. gonorrhoeae have been reported. The drawbacks of ceftriaxone are that it is expensive, it is currently unavailable in vials of <250 mg, and it must be administered by injection. Some health-care providers believe that the discomfort of the injection may be reduced by using 1% lidocaine solution as a diluent. Ceftriaxone also may abort incubating syphilis, a concern when gonorrhea treatment is not accompanied by a 7-day course of doxycycline or erythromycin for the presumptive treatment of chlamydia.
Cefixime has an antimicrobial spectrum similar to that of ceftriaxone, but the 400 mg oral dose does not provide as high or as sustained a bactericidal level as does 125 mg of ceftriaxone.
Cefixime appears to be effective against pharyngeal gonococcal infection, but few patients with pharyngeal infection have been included in studies. No gonococcal strains resistant to cefixime have been reported. The advantage of cefixime is that it can be administered orally. Whether the 400 mg dose can cure incubating syphilis is not known.
Ciprofloxacin at a dose of 500 mg provides sustained bactericidal levels in the blood. Clinical trials have demonstrated that both 250 mg and 500 mg doses are safe and effective for the treatment of uncomplicated gonorrhea at all sites. Most clinical experience in the United States has been with the 500 mg dose. Ciprofloxacin can be administered orally and is less expensive than ceftriaxone. No resistance has been reported in the United States, but strains with decreased susceptibility to some quinolones are becoming common in Asia and have been reported in North America. The 500 mg dose is recommended, rather than the 250 mg dose, because of the trend toward decreasing susceptibility to quinolones and because of rare reports of treatment failure. Quinolones are contraindicated for pregnant or nursing women and for persons ≤17 years of age on the basis of information from animal studies. Quinolones are not active against T. pallidum.
Ofloxacin is active against N. gonorrhoeae, has favorable pharmacokinetics, and its 400 mg dose has been effective for the treatment of uncomplicated anal and genital gonorrhea. In published studies, a 400 mg dose cured 22 (88%) of 25 pharyngeal infections.
# Alternative Regimens
- Spectinomycin 2 g IM in a single dose. Spectinomycin has the disadvantages of being injectable, expensive, inactive against T. pallidum, and relatively ineffective against pharyngeal gonorrhea. In addition, resistant strains have been reported in the United States. However, spectinomycin remains useful for the treatment of patients who can tolerate neither cephalosporins nor quinolones.
- Injectable cephalosporin regimens other than ceftriaxone 125 mg that have demonstrated efficacy against uncomplicated anal or genital gonococcal infections include ceftizoxime 500 mg IM in a single dose, cefotaxime 500 mg IM in a single dose, cefotetan 1 g IM in a single dose, and cefoxitin 2 g IM in a single dose. None of these injectable cephalosporins offers any advantage over ceftriaxone, and there is less clinical experience with them for the treatment of uncomplicated gonorrhea. Of these four regimens, ceftizoxime 500 mg appears to be the most effective, according to cumulative experience in published clinical trials.
- Oral cephalosporin regimens other than cefixime 400 mg include cefuroxime axetil 1 g orally in a single dose and cefpodoxime proxetil 200 mg orally in a single dose. These two regimens have anti-gonococcal activity and pharmacokinetics less favorable than those of the 400 mg cefixime regimen, and there is less clinical experience with them in the treatment of gonorrhea. They have not been very effective against pharyngeal infections among the few patients studied.
- Quinolone regimens other than ciprofloxacin 500 mg and ofloxacin 400 mg include enoxacin 400 mg orally in a single dose, lomefloxacin 400 mg orally in a single dose, and norfloxacin 800 mg orally in a single dose. They appear to be safe and effective for the treatment of uncomplicated gonorrhea, but none appears to offer any advantage over ciprofloxacin 500 mg or ofloxacin 400 mg. Enoxacin and norfloxacin are active against N.
gonorrhoeae, have favorable pharmacokinetics, and have been effective in clinical trials, but there is minimal experience with their use in the United States. Lomefloxacin is effective against N. gonorrhoeae and has very favorable pharmacokinetics, but few published clinical studies support its use for the treatment of gonorrhea, and there is little experience with its use in the United States.
Many other antimicrobials are active against N. gonorrhoeae. These guidelines are not intended to be a comprehensive list of all effective treatment regimens.
# Other Management Considerations
Persons treated for gonorrhea should be screened for syphilis by serology when gonorrhea is first detected. Gonorrhea treatment regimens that include ceftriaxone or a 7-day course of either doxycycline or erythromycin may cure incubating syphilis, but few data relevant to this topic are available.
# Follow-Up
Persons who have uncomplicated gonorrhea and who are treated with any of the regimens in these guidelines need not return for a test-of-cure. Persons with symptoms persisting after treatment should be evaluated by culture for N. gonorrhoeae, and any gonococci isolated should be tested for antimicrobial susceptibility. Infections detected after treatment with one of the recommended regimens more commonly occur because of reinfection rather than treatment failure, indicating a need for improved sex-partner referral and patient education. Persistent urethritis, cervicitis, or proctitis also may be caused by C. trachomatis and other organisms.
# Management of Sex Partners
Patients should be instructed to refer sex partners for evaluation and treatment. Sex partners of symptomatic patients who have N. gonorrhoeae infection should be evaluated and treated for N. gonorrhoeae and C. trachomatis infections if their last sexual contact with the patient was within 30 days of onset of the patient's symptoms. If the index patient is asymptomatic, sex partners whose last sexual contact with the patient was within 60 days of diagnosis should be evaluated and treated. Health-care providers should treat the most recent sex partner if the last sexual intercourse took place before those time periods. Patients should be instructed to avoid sexual intercourse until patient and partner(s) are cured. In the absence of a microbiologic test-of-cure, this means until therapy is completed and patient and partner(s) are without symptoms.
# Special Considerations
# Allergy, Intolerance, or Adverse Reactions
Persons who cannot tolerate cephalosporins should, in general, be treated with quinolones. Those who can take neither cephalosporins nor quinolones should be treated with spectinomycin, except for patients who are suspected or known to have pharyngeal infection. For pharyngeal infections among persons who can tolerate neither cephalosporins nor quinolones, some studies suggest that trimethoprim/sulfamethoxazole may be effective at a dose of 720 mg trimethoprim/3,600 mg sulfamethoxazole orally once a day for 5 days.
# Pregnancy
Pregnant women should not be treated with quinolones or tetracyclines. Those infected with N. gonorrhoeae should be treated with a recommended or alternative cephalosporin. Women who cannot tolerate a cephalosporin should be administered a single dose of spectinomycin 2 g IM. Erythromycin is the recommended treatment for presumptive or diagnosed C. trachomatis infection during pregnancy (see Chlamydial Infections).
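The allergy and pregnancy rules above form a small selection cascade: a cephalosporin first, a quinolone if cephalosporins are not tolerated, and spectinomycin if neither class is tolerated, with special handling for pharyngeal infection and pregnancy. A hedged sketch of that cascade, for illustration only (the function and parameter names are ours, not the guidelines'):

```python
def gonorrhea_drug_class(cephalosporin_ok: bool, quinolone_ok: bool,
                         pharyngeal: bool, pregnant: bool) -> str:
    """Illustrative selection cascade from the text above;
    not a substitute for the full recommendations."""
    if pregnant:
        # Quinolones and tetracyclines are contraindicated in pregnancy.
        return "cephalosporin" if cephalosporin_ok else "spectinomycin"
    if cephalosporin_ok:
        return "cephalosporin"
    if quinolone_ok:
        return "quinolone"
    # Neither class tolerated: spectinomycin, unless the pharynx is involved,
    # where spectinomycin is relatively ineffective.
    return "consider trimethoprim/sulfamethoxazole" if pharyngeal else "spectinomycin"

assert gonorrhea_drug_class(False, True, False, False) == "quinolone"
assert gonorrhea_drug_class(False, False, False, True) == "spectinomycin"
```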
# Special Considerations
# Allergy, Intolerance, or Adverse Reactions
Persons who cannot tolerate cephalosporins should, in general, be treated with quinolones. Those who can take neither cephalosporins nor quinolones should be treated with spectinomycin, except for patients who are suspected or known to have pharyngeal infection. For pharyngeal infections among persons who can tolerate neither cephalosporins nor quinolones, some studies suggest that trimethoprim/sulfamethoxazole may be effective at a dose of 720 mg trimethoprim/3,600 mg sulfamethoxazole orally once a day for 5 days.
# Pregnancy
Pregnant women should not be treated with quinolones or tetracyclines. Those infected with N. gonorrhoeae should be treated with a recommended or alternative cephalosporin. Women who cannot tolerate a cephalosporin should be administered a single dose of spectinomycin 2 g IM. Erythromycin is the recommended treatment for presumptive or diagnosed C. trachomatis infection during pregnancy (see Chlamydial Infections).
# HIV Infection
Persons with HIV infection and gonococcal infection should receive the same treatment as persons not infected with HIV.
# Gonococcal Conjunctivitis
Only one North American study of the treatment of gonococcal conjunctivitis among adults has been published in recent years. In that study, 12 of 12 patients responded favorably to a single 1 g IM injection of ceftriaxone. The recommendations that follow reflect the opinions of expert consultants.
# Treatment
Recommended Regimen
Ceftriaxone 1 g IM in a single dose, with lavage of the infected eye with saline solution once.
# Management of Sex Partners
As for uncomplicated infections, patients should be instructed to refer sex partner(s) for evaluation and treatment (see Uncomplicated Gonococcal Infections, Management of Sex Partners).
# Disseminated Gonococcal Infection
Disseminated gonococcal infection (DGI) results from gonococcal bacteremia. It often produces petechial or pustular acral skin lesions, asymmetrical arthralgias, tenosynovitis, or septic arthritis, and it is occasionally complicated by hepatitis and, rarely, by endocarditis or meningitis. Strains of N. gonorrhoeae that cause DGI tend to cause little genital inflammation. These strains have become uncommon in the United States during the past decade. No North American studies of the treatment of DGI have been published recently. The recommendations that follow reflect the opinions of expert consultants.
# Treatment
Hospitalization is recommended for initial therapy, especially for patients who cannot be relied on to comply with treatment, for those for whom the diagnosis is uncertain, and for those who have purulent synovial effusions or other complications. Patients should be examined for clinical evidence of endocarditis and meningitis. Patients treated for DGI should be treated presumptively for concurrent C. trachomatis infection.
# Recommended Initial Regimen
Ceftriaxone 1 g IM or IV every 24 hours.
All regimens should be continued for 24-48 hours after improvement begins; therapy may then be switched to one of the following regimens to complete a full week of antimicrobial therapy:
Cefixime 400 mg orally 2 times a day or
Ciprofloxacin 500 mg orally 2 times a day.
# Alternative Initial Regimens
NOTE: Ciprofloxacin is contraindicated for children, adolescents ≤17 years of age, and pregnant and lactating women.
# Management of Sex Partners
Gonococcal infection is often asymptomatic in sex partners of patients with DGI. As for uncomplicated infections, patients should be instructed to refer sex partner(s) for evaluation and treatment (see Uncomplicated Gonococcal Infections, Management of Sex Partners).
# Gonococcal Meningitis and Endocarditis
Recommended Initial Regimen
Ceftriaxone 1-2 g IV every 12 hours.
Therapy for meningitis should be continued for 10-14 days and for endocarditis for at least 4 weeks. Treatment of complicated DGI should be undertaken in consultation with an expert.
# Management of Sex Partners
As for uncomplicated infections, patients should be instructed to refer sex partners for evaluation and treatment (see Uncomplicated Gonococcal Infections, Management of Sex Partners).
# Gonococcal Infections Among Infants
Gonococcal infection among neonates usually results from peripartum exposure to infected cervical exudate of the mother. It is usually an acute illness beginning 2-5 days after birth.
The incidence of neonatal N. gonorrhoeae infection varies among U.S. communities and depends on the prevalence of infection among pregnant women, on whether pregnant women are screened for gonorrhea, and on whether newborns receive ophthalmia prophylaxis. The prevalence of infection is <1% in most prenatal patient populations but may be higher in some settings. Of greatest concern are the complications of ophthalmia neonatorum and sepsis, including arthritis and meningitis. Less serious manifestations at sites of infection include rhinitis, vaginitis, urethritis, and inflammation at sites of intrauterine fetal monitoring.
# Ophthalmia Neonatorum Caused by N. gonorrhoeae
In most patient populations in the United States, C. trachomatis and nonsexually transmitted agents are more common causes of neonatal conjunctivitis than N. gonorrhoeae. However, N. gonorrhoeae is especially important because gonococcal ophthalmia may result in perforation of the globe and in blindness.
# Diagnostic Considerations
Infants at high risk for gonococcal ophthalmia in the United States are those who do not receive ophthalmia prophylaxis, whose mothers have had no prenatal care, or whose mothers have a history of STDs or substance abuse. The presence of typical Gram-negative diplococci in a Gram-stained smear of conjunctival exudate suggests a diagnosis of N. gonorrhoeae conjunctivitis. Such patients should be treated presumptively for gonorrhea after appropriate cultures for N. gonorrhoeae are obtained; appropriate chlamydial testing should be done simultaneously. The decision not to treat presumptively for N. gonorrhoeae among patients without evidence of gonococci on a Gram-stained smear of conjunctival exudate, or among patients for whom a Gram-stained smear cannot be performed, must be made on a case-by-case basis after the previously described risk factors are considered. A specimen of conjunctival exudate also should be cultured for isolation of N. gonorrhoeae, because culture is needed for definitive microbiologic identification and for antibiotic susceptibility testing. Such definitive testing is required because of the public health and social consequences for the infant and mother that may result from a diagnosis of gonococcal ophthalmia. Moraxella catarrhalis and other Neisseria species are uncommon causes of neonatal conjunctivitis that can mimic N. gonorrhoeae on Gram-stained smear. To differentiate N. gonorrhoeae from M. catarrhalis and other Neisseria species, the laboratory should be instructed to perform confirmatory tests on any colonies that meet presumptive criteria for N. gonorrhoeae.
# Recommended Regimen
Ceftriaxone 25-50 mg/kg IV or IM in a single dose, not to exceed 125 mg.
NOTE: Topical antibiotic therapy alone is inadequate and is unnecessary if systemic treatment is administered.
# Other Management Considerations
Simultaneous infection with C. trachomatis has been reported and should be considered for patients who do not respond satisfactorily. The mother and infant should be tested for chlamydial infection at the same time that gonorrhea testing is done (see Ophthalmia Neonatorum Caused by C. trachomatis). Ceftriaxone should be administered cautiously to infants with elevated bilirubin levels, especially premature infants.
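Several regimens in this section pair a per-kilogram dose with an absolute ceiling (e.g., ceftriaxone 25-50 mg/kg, not to exceed 125 mg). The arithmetic is simply the smaller of the two values; a minimal sketch follows (hypothetical helper name; illustrative only, not a dosing tool):

```python
def capped_dose_mg(weight_kg: float, mg_per_kg: float, ceiling_mg: float) -> float:
    """Weight-based dose limited by an absolute maximum, as in the
    neonatal ceftriaxone regimen above (25-50 mg/kg, max 125 mg)."""
    return min(weight_kg * mg_per_kg, ceiling_mg)

# A 3 kg neonate at 25 mg/kg computes to 75 mg; a 6 kg infant at
# 25 mg/kg would compute to 150 mg but is capped at 125 mg.
print(capped_dose_mg(3.0, 25, 125))  # 75.0
print(capped_dose_mg(6.0, 25, 125))  # 125.0
```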
# Follow-Up
Infants should be admitted to the hospital and evaluated for signs of disseminated infection (e.g., sepsis, arthritis, and meningitis). One dose of ceftriaxone is adequate for gonococcal conjunctivitis, but many pediatricians prefer to maintain infants on antibiotics until cultures are negative at 48-72 hours. The decision on duration of therapy should be made with input from experienced physicians.
# Management of Mothers and Their Sex Partners
The mothers of infants with gonococcal infection and their sex partners should be evaluated and treated following the recommendations for treatment of gonococcal infections in adults (see Gonococcal Infections Among Adolescents and Adults).
# Disseminated Gonococcal Infection Among Infants
Sepsis, arthritis, meningitis, or any combination thereof are rare complications of neonatal gonococcal infection. Gonococcal scalp abscesses also may develop as a result of fetal monitoring. Detection of gonococcal infection among neonates who have sepsis, arthritis, meningitis, or scalp abscesses requires cultures of blood, CSF, and joint aspirate on chocolate agar. Cultures of specimens from the conjunctiva, vagina, oropharynx, and rectum onto gonococcal selective medium are useful to identify sites of primary infection, especially if inflammation is present. Positive Gram-stained smears of exudate, CSF, or joint aspirate provide a presumptive basis for initiating treatment for N. gonorrhoeae. Diagnoses based on positive Gram-stained smears or presumptive isolation by culture should be confirmed with definitive tests on culture isolates.
# Recommended Regimen
Ceftriaxone 25-50 mg/kg/day IV or IM in a single daily dose for 7 days (10-14 days if meningitis is documented); or
Cefotaxime 25 mg/kg IV or IM every 12 hours for 7 days (10-14 days if meningitis is documented).
# Prophylactic Treatment for Infants Whose Mothers Have Gonococcal Infection
Infants born to mothers who have untreated gonorrhea are at high risk for infection.
# Recommended Regimen in the Absence of Signs of Gonococcal Infection
Ceftriaxone 25-50 mg/kg IV or IM, not to exceed 125 mg, in a single dose.
# Other Management Considerations
Simultaneous infection with C. trachomatis has been reported; the mother and infant should be tested for chlamydial infection.
# Follow-Up
Follow-up examination is not required.
# Management of Mothers and Their Sex Partners
The mothers of infants with gonococcal infection and the mothers' sex partners should be evaluated and treated following the recommendations for treatment of gonococcal infections among adults (see Gonococcal Infections).
# Gonococcal Infections Among Children
After the neonatal period, sexual abuse is the most common cause of gonococcal infection among preadolescent children (see Sexual Assault or Abuse of Children). Vaginitis is the most common manifestation of gonococcal infection among preadolescent children; PID following vaginal infection appears to be less common than among adults. Among sexually abused children, anorectal and pharyngeal infections with N. gonorrhoeae are common and are frequently asymptomatic.
# Diagnostic Considerations
Because of the potential medical/legal use of the test results for N. gonorrhoeae among children, only standard culture systems for the isolation of N. gonorrhoeae should be used for these children. Nonculture gonococcal tests, including Gram-stained smear, DNA probes, and EIA tests, should not be used; none of these tests has been approved by the FDA for use in the oropharynx, rectum, or genital tract of children. Specimens from the vagina, urethra, pharynx, or rectum should be streaked onto selective media for isolation of N. gonorrhoeae.
All presumptive isolates of N. gonorrhoeae should be confirmed by at least two tests that involve different principles (e.g., biochemical, enzyme substrate, or serologic). Isolates should be preserved to permit additional or repeated analysis.
# Recommended Regimen for Children
# Children Who Weigh ≥45 kg
Children who weigh ≥45 kg should be administered the same treatment regimens as those recommended for adults (see Gonococcal Infections).
# Children Who Weigh <45 kg
The following treatment recommendations are for children with uncomplicated gonococcal vulvovaginitis, cervicitis, urethritis, pharyngitis, or proctitis.
Ceftriaxone 125 mg IM in a single dose.
# Alternative Regimen
Spectinomycin 40 mg/kg (maximum 2 g) IM in a single dose.
# Children Who Weigh <45 kg and Who Have Bacteremia, Arthritis, or Meningitis
Recommended Regimen
Ceftriaxone 50 mg/kg (maximum 1 g) IM or IV in a single daily dose for 7 days.
NOTE: For meningitis, increase the duration of treatment to 10-14 days and the maximum dose to 2 g.
# Follow-Up
Follow-up cultures of specimens from infected sites are necessary to ensure that treatment has been effective.
# Other Management Considerations
Only parenteral cephalosporins are recommended for use among children. Ceftriaxone is approved for all gonococcal indications among children; cefotaxime is approved for gonococcal ophthalmia only. Oral cephalosporins (cefixime, cefuroxime axetil, cefpodoxime) have not been evaluated adequately in the treatment of gonococcal infections among pediatric patients to recommend their use, and the pharmacokinetic activity of these drugs among adults cannot be extrapolated to children. All children with gonococcal infections should be evaluated for coinfection with syphilis and C. trachomatis. For a discussion of issues regarding sexual assault, refer to Sexual Assault or Abuse of Children.
# Ophthalmia Neonatorum Prophylaxis
Instillation of a prophylactic agent into the eyes of all newborn infants is recommended to prevent gonococcal ophthalmia neonatorum and is required by law in most states. Although all the regimens that follow effectively prevent gonococcal eye disease, their efficacy in preventing chlamydial eye disease is not clear, and they do not eliminate nasopharyngeal colonization with C. trachomatis. Treatment of gonococcal and chlamydial infections among pregnant women is the best method for preventing neonatal gonococcal and chlamydial disease. However, ocular prophylaxis should continue because it can prevent gonococcal ophthalmia and because, in some populations, >10% of pregnant women may receive no prenatal care.
# Prophylaxis
# Recommended Preparations
Silver nitrate (1%) aqueous solution in a single application or
Erythromycin (0.5%) ophthalmic ointment in a single application or
Tetracycline (1%) ophthalmic ointment in a single application.
One of the above preparations should be instilled into the eyes of every neonate as soon as possible after delivery. If prophylaxis is delayed (i.e., not administered in the delivery room), hospitals should establish a monitoring system to ensure that all infants receive prophylaxis. All infants should be administered ocular prophylaxis, whether delivery is vaginal or caesarean. Single-use tubes or ampules are preferable to multiple-use tubes. Bacitracin is not effective.
# DISEASES CHARACTERIZED BY VAGINAL DISCHARGE
# Management of the Patient with Vaginitis
Vaginitis is usually characterized by a vaginal discharge or by vulvar itching and irritation; a vaginal odor may be present.
The three diseases most commonly associated with vaginal discharge are trichomoniasis (caused by T. vaginalis), BV (caused by a replacement of the normal vaginal flora by an overgrowth of anaerobic microorganisms and Gardnerella vaginalis), and candidiasis (usually caused by Candida albicans). MPC caused by C. trachomatis or N. gonorrhoeae may, uncommonly, cause a vaginal discharge. Although vulvovaginal candidiasis usually is not transmitted sexually, it is included here because it is a common infection among women being evaluated for STDs.
The diagnosis of vaginitis is made by pH and microscopic examination of fresh samples of the discharge. The pH of the vaginal secretions can be determined with narrow-range pH paper; an elevated pH (>4.5) is typical of BV or trichomoniasis. One way to examine the discharge is to dilute a sample in 1-2 drops of 0.9% normal saline solution on one slide and in 10% potassium hydroxide (KOH) solution on a second slide. An amine odor detected immediately after applying KOH suggests either BV or trichomoniasis. A cover slip is placed on each slide, and the slides are examined under a microscope at low- and high-dry power. The motile T. vaginalis or the clue cells of BV are usually identified easily in the saline specimen; the yeast or pseudohyphae of Candida species are identified more easily in the KOH specimen. The presence of objective signs of vulvar inflammation in the absence of vaginal pathogens, along with a minimal amount of discharge, suggests the possibility of mechanical or chemical irritation of the vulva. Culture for T. vaginalis or Candida species is more sensitive than microscopic examination, but the specificity of culture for Candida species in diagnosing vaginitis is less clear. Laboratory testing fails to identify a cause among a substantial minority of women.
# Bacterial Vaginosis
BV is a clinical syndrome resulting from replacement of the normal H₂O₂-producing Lactobacillus spp in the vagina with high concentrations of anaerobic bacteria (e.g., Bacteroides spp, Mobiluncus spp), G. vaginalis, and Mycoplasma hominis. This condition is the most prevalent cause of vaginal discharge or malodor; however, half the women who meet clinical criteria for BV have no symptoms. The cause of the microbial alteration is not fully understood. Although BV is associated with sexual activity, in that women who have never been sexually active are rarely affected and acquisition of BV is associated with having multiple sex partners, BV is not considered exclusively an STD. Treatment of the male sex partner has not been found beneficial in preventing the recurrence of BV.
# Diagnostic Considerations
BV may be diagnosed by the use of clinical or Gram stain criteria. Clinical criteria require three of the following four symptoms or signs:
- A homogeneous, white, noninflammatory discharge that adheres to the vaginal walls;
- The presence of clue cells on microscopic examination;
- pH of vaginal fluid >4.5;
- A fishy odor of vaginal discharge before or after addition of 10% KOH (whiff test).
When Gram stain is used, determining the relative concentration of the bacterial morphotypes characteristic of the altered flora of BV is an acceptable laboratory method for diagnosing BV. Culture of G. vaginalis is not recommended as a diagnostic tool because it is not specific; G. vaginalis can be isolated from vaginal cultures of half of normal women.
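The "three of four" clinical rule above (often called the Amsel criteria) can be stated compactly; the following sketch is illustrative only, with hypothetical names:

```python
def meets_bv_clinical_criteria(adherent_discharge: bool, clue_cells: bool,
                               vaginal_ph: float, whiff_test_positive: bool) -> bool:
    """BV is diagnosed clinically when at least three of the four
    findings listed above are present."""
    findings = (
        adherent_discharge,     # homogeneous, white, adherent discharge
        clue_cells,             # clue cells on microscopic examination
        vaginal_ph > 4.5,       # elevated vaginal fluid pH
        whiff_test_positive,    # fishy odor with 10% KOH
    )
    return sum(findings) >= 3
```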
# Treatment
The principal goal of therapy is to relieve vaginal symptoms and signs. Therefore, only women with symptomatic disease require treatment. Because male sex partners of women with BV are not symptomatic, and because treatment of male partners has not been shown to alter either the clinical course of BV in women during treatment or the relapse/reinfection rate, preventing transmission to men is not a goal of therapy.
Many of the bacterial species characterizing BV have been recovered from the endometrium or salpinx of women with PID. BV has been associated with endometritis, PID, and vaginal cuff cellulitis following invasive procedures such as endometrial biopsy, hysterectomy, hysterosalpingography, placement of an IUD, caesarean section, and uterine curettage. A randomized controlled trial found that treatment of BV with metronidazole substantially reduced post-abortion PID. Based on these data, it may be reasonable to consider treatment of BV (symptomatic or asymptomatic) before surgical abortion procedures are performed. However, more data are needed before treatment of asymptomatic BV can be recommended when other invasive procedures are performed.
# Recommended Regimen
Metronidazole 500 mg orally 2 times a day for 7 days.
NOTE: Patients should be advised to avoid using alcohol during treatment with metronidazole and for 24 hours thereafter.
# Alternative Regimens
Metronidazole 2 g orally in a single dose.
The following alternative regimens have been effective in clinical trials, although experience with them is limited:
Clindamycin cream, 2%, one full applicator (5 g) intravaginally at bedtime for 7 days; or
Metronidazole gel, 0.75%, one full applicator (5 g) intravaginally, 2 times a day for 5 days; or
Clindamycin 300 mg orally 2 times a day for 7 days.
Oral metronidazole has been shown in numerous studies to be efficacious for the treatment of BV, resulting in relief of symptoms and improvement in clinical course and flora disturbances. Based on efficacy data from four randomized controlled trials, the overall cure rates are 95% for the 7-day regimen and 84% for the 2 g single-dose regimen. Some health-care providers remain concerned about the possibility of metronidazole mutagenicity, which has been suggested by experiments on animals using extremely high and prolonged doses; however, there is no evidence for mutagenicity in humans. Some health-care providers prefer the intravaginal route because it avoids systemic side effects such as mild-to-moderate gastrointestinal upset and unpleasant taste (mean peak serum concentrations of metronidazole following intravaginal administration are <2% of those of standard 500 mg oral doses, and the mean bioavailability of clindamycin cream is approximately 4%).
# Follow-Up
Follow-up visits are not necessary if symptoms resolve. Recurrence of BV is common. The alternative regimens listed above may be used to treat recurrent disease. No long-term maintenance regimen with any therapeutic agent is currently available.
# Management of Sex Partners
Treatment of sex partners in clinical trials has influenced neither the woman's response to therapy nor the relapse or recurrence rate. Therefore, routine treatment of sex partners is not recommended.
# Special Considerations
# Allergy or Intolerance to the Recommended Therapy
Clindamycin cream is preferred in case of allergy or intolerance to metronidazole. Metronidazole gel can be considered for patients who do not tolerate systemic metronidazole, but patients allergic to oral metronidazole should not be administered metronidazole vaginally.
# Pregnancy
Because metronidazole is contraindicated during the first trimester of pregnancy, clindamycin vaginal cream is the preferred treatment for BV during the first trimester (clindamycin cream is recommended instead of oral clindamycin because of the general desire to limit the exposure of the fetus to medication). During the second and third trimesters of pregnancy, oral metronidazole can be used, although vaginal metronidazole gel or clindamycin cream may be preferable. BV has been associated with adverse outcomes of pregnancy (e.g., premature rupture of the membranes, preterm labor, and preterm delivery), and the organisms found in increased concentration in BV also are commonly present in postpartum or post-caesarean endometritis. Whether treatment of BV among pregnant women would reduce the risk of adverse pregnancy outcomes is unknown; randomized controlled trials have not been conducted.
# HIV Infection
Persons with HIV and BV should receive the same treatment as persons without HIV.
# Trichomoniasis
Trichomoniasis is caused by the protozoan T. vaginalis. The majority of men infected with T. vaginalis are asymptomatic, but many women are symptomatic. Among women, T. vaginalis typically causes a diffuse, malodorous, yellow-green discharge with vulvar irritation. There is recent evidence of a possible relationship between vaginal trichomoniasis and adverse pregnancy outcomes, particularly premature rupture of the membranes and preterm delivery.
# Recommended Regimen
Metronidazole 2 g orally in a single dose.
# Alternative Regimen
Metronidazole 500 mg twice daily for 7 days.
Only metronidazole is available in the United States for the treatment of trichomoniasis. In randomized clinical trials, both of the recommended metronidazole regimens have resulted in cure rates of approximately 95%. Treatment of the patient and sex partner results in relief of symptoms, microbiologic cure, and reduction of transmission. Metronidazole gel has been approved for the treatment of BV, but it has not been studied for the treatment of trichomoniasis; earlier preparations of metronidazole for topical vaginal therapy demonstrated low efficacy against trichomoniasis.
# Follow-Up
Follow-up is unnecessary for men and for women who become asymptomatic after treatment. Infections with strains of T. vaginalis that have diminished susceptibility to metronidazole occur; however, most of these organisms respond to higher doses of metronidazole.
# HIV Infection
Persons with HIV infection and trichomoniasis should receive the same treatment as persons without HIV.
# Vulvovaginal Candidiasis
Vulvovaginal candidiasis (VVC) is caused by C. albicans or, occasionally, by other Candida spp, Torulopsis sp, or other yeasts. An estimated 75% of women will experience at least one episode of VVC during their lifetime, and 40%-45% will experience two or more episodes. A small percentage of women (probably <5%) experience recurrent VVC (RVVC). Typical symptoms of VVC include pruritus and vaginal discharge. Other symptoms may include vaginal soreness, vulvar burning, dyspareunia, and external dysuria. None of these symptoms is specific for VVC. VVC usually is not acquired or transmitted sexually.
# Diagnostic Considerations
A diagnosis of Candida vaginitis is suggested clinically by pruritus in the vulvar area together with erythema of the vagina or vulva; a white discharge may occur. The diagnosis can be made when a woman has signs and symptoms of vaginitis and when either a wet preparation or Gram stain of vaginal discharge demonstrates yeasts or pseudohyphae, or a culture or other test yields a positive result for a yeast species. Vaginitis caused solely by Candida infection is associated with a normal vaginal pH (≤4.5). Use of 10% KOH in wet preparations improves the visualization of yeast and mycelia by disrupting cellular material that may obscure the yeast or pseudohyphae. Identifying Candida in the absence of symptoms should not lead to treatment, because approximately 10%-20% of women normally harbor Candida spp and other yeasts in the vagina. VVC may be present concurrently with STDs.
# Treatment
Topical formulations provide effective treatment for VVC. The topically applied azole drugs are more effective than nystatin. Treatment with azoles results in relief of symptoms and negative cultures among 80%-90% of patients after therapy is completed.
# Recommended Regimens
The following intravaginal formulations are recommended for the treatment of VVC.
Butoconazole 2% cream 5 g intravaginally for 3 days;* or
Clotrimazole 1% cream 5 g intravaginally for 7-14 days;† or
Clotrimazole 100 mg vaginal tablet for 7 days;† or
Clotrimazole 100 mg vaginal tablet, two tablets for 3 days; or
Clotrimazole 500 mg vaginal tablet, one tablet in a single application; or
Miconazole 2% cream 5 g intravaginally for 7 days;*† or
Miconazole 200 mg vaginal suppository, one suppository for 3 days;* or
Miconazole 100 mg vaginal suppository, one suppository for 7 days;*† or
Tioconazole 6.5% ointment 5 g intravaginally in a single application;* or
Terconazole 0.4% cream 5 g intravaginally for 7 days; or
Terconazole 0.8% cream 5 g intravaginally for 3 days; or
Terconazole 80 mg suppository, one suppository for 3 days.*
*These creams and suppositories are oil-based and may weaken latex condoms and diaphragms. Refer to product labeling for further information.
†Over-the-counter (OTC) preparations.
Although the information is not conclusive, single-dose treatments should probably be reserved for cases of uncomplicated, mild-to-moderate VVC. Multi-day regimens (3- and 7-day) are the preferred treatment for severe or complicated VVC. Preparations for intravaginal administration of both miconazole and clotrimazole are now available over-the-counter (OTC), and women with VVC can choose one of those preparations; the duration of treatment with either preparation is 7 days. Self-medication with OTC preparations should be advised only for women who have been diagnosed previously with VVC and who experience a recurrence of the same symptoms. Any woman whose symptoms persist after using an OTC preparation or who experiences a recurrence of symptoms within 2 months should seek medical care.
# Alternative Regimens
Several trials have demonstrated that oral azole agents such as fluconazole, ketoconazole, and itraconazole may be as effective as topical agents. The optimum dose and duration of oral therapy have not been established, but a range of 1-5 days of treatment, depending on the agent, has been effective in clinical trials. The ease of administration of oral agents is an advantage over topical therapies; however, the potential for toxicity associated with using a systemic drug, particularly ketoconazole, must be considered. No oral agent is currently approved by the FDA for the treatment of acute VVC.
# Follow-Up
Patients should be instructed to return for follow-up visits only if symptoms persist or recur. Women who experience three or more episodes of VVC per year should be evaluated for predisposing conditions (see Recurrent Vulvovaginal Candidiasis).
# Management of Sex Partners
VVC usually is not acquired through sexual intercourse, and treatment of sex partners has not been demonstrated to reduce the frequency of recurrences. Therefore, routine notification or treatment of sex partners is not warranted. A minority of male sex partners may have balanitis, which is characterized by erythematous areas on the glans in conjunction with pruritus or irritation. These partners may benefit from treatment with topical antifungal agents to relieve symptoms.
# Special Considerations
# Allergy or Intolerance to the Recommended Therapy
Topical agents usually are free of systemic side effects, although local burning or irritation may occur. Oral agents occasionally cause nausea, abdominal pain, and headaches. Therapy with the oral azoles has been associated rarely with abnormal elevations of liver enzymes.
Hepatotoxicity secondary to ketoconazole therapy has been estimated to occur in 1 of every 10,000-15,000 exposed persons. Clinically important interactions may occur when these oral agents are administered with other drugs, including terfenadine, rifampin, astemizole, phenytoin, cyclosporin A, coumarin-like agents, and oral hypoglycemic agents.
# Pregnancy
VVC is common during pregnancy. Only topical azole therapies should be used for the treatment of pregnant women. The most effective treatments that have been studied for pregnant women are clotrimazole, miconazole, butoconazole, and terconazole. Many experts recommend 7 days of therapy during pregnancy.
# HIV Infection
Acute VVC occurs frequently among women with HIV infection and may be more severe for these women than for other women. However, insufficient information exists to determine the optimal management of VVC in HIV-infected women. Until such information becomes available, women with HIV infection and acute VVC should be treated following the same regimens as for women without HIV infection.
# Recurrent Vulvovaginal Candidiasis
RVVC, usually defined as three or more episodes of symptomatic VVC annually, affects a small proportion of women (probably <5%). The natural history and pathogenesis of RVVC are poorly understood. Risk factors for RVVC include diabetes mellitus, immunosuppression, broad-spectrum antibiotic use, corticosteroid use, and HIV infection, although the majority of women with RVVC have no apparent predisposing conditions. Clinical trials addressing the management of RVVC have involved continuing therapy between episodes.
# Treatment
The optimal treatment for RVVC has not been established. Ketoconazole 100 mg orally once daily for up to 6 months reduces the frequency of episodes of RVVC. Current studies are evaluating weekly intravaginal administration of clotrimazole, as well as oral therapy with itraconazole and fluconazole, in the treatment of RVVC. All cases of RVVC should be confirmed by culture before maintenance therapy is initiated. Although patients with RVVC should be evaluated for predisposing conditions, routinely performing HIV testing for women with RVVC who do not have HIV risk factors is unwarranted.
# Follow-Up
Patients who are receiving treatment for RVVC should receive regular follow-up to monitor the effectiveness of therapy and the occurrence of side effects.
# Management of Sex Partners
Treatment of sex partners does not prevent recurrences, and routine therapy is not warranted. However, partners with symptomatic balanitis or penile dermatitis should be treated with a topical agent.
# Special Considerations
# HIV Infection
Insufficient information exists to determine the optimal management of RVVC among HIV-infected women. Until such information becomes available, management should be the same as for other women with RVVC.
# PELVIC INFLAMMATORY DISEASE
PID comprises a spectrum of inflammatory disorders of the upper genital tract among women and may include any combination of endometritis, salpingitis, tubo-ovarian abscess, and pelvic peritonitis. Sexually transmitted organisms, especially N. gonorrhoeae and C. trachomatis, are implicated in the majority of cases; however, microorganisms that can be part of the vaginal flora, such as anaerobes, G. vaginalis, H. influenzae, enteric Gram-negative rods, and Streptococcus agalactiae, also can cause PID. Some experts also believe that M. hominis and U. urealyticum are etiologic agents of PID.
# Diagnostic Considerations
Because of the wide variation in symptoms and signs among women with this condition, a clinical diagnosis of acute PID is difficult. Many women with PID exhibit subtle or mild symptoms that are not readily recognized as PID; consequently, delay in diagnosis and effective treatment probably contributes to inflammatory sequelae in the upper reproductive tract. Laparoscopy can be used to obtain a more accurate diagnosis of salpingitis and a more complete bacteriologic diagnosis. However, this diagnostic tool often is neither readily available for acute cases nor easily justifiable when symptoms are mild or vague. Moreover, laparoscopy will not detect endometritis and may not detect subtle inflammation of the fallopian tubes. Consequently, the diagnosis of PID usually is made on the basis of clinical findings.
The clinical diagnosis of acute PID is also imprecise. Data indicate that a clinical diagnosis of symptomatic PID has a positive predictive value (PPV) for salpingitis of 65%-90% when compared with laparoscopy as the standard. The PPV of a clinical diagnosis of acute PID varies depending on epidemiologic characteristics and the clinical setting, with higher PPVs among sexually active young (especially teenage) women and among patients attending STD clinics or from settings with high rates of gonorrhea or chlamydia. In all settings, however, no single historical, physical, or laboratory finding is both sensitive and specific for the diagnosis of acute PID (i.e., can be used both to detect all cases of PID and to exclude all women without PID). Combinations of diagnostic findings that improve either sensitivity (detecting more women who have PID) or specificity (excluding more women who do not have PID) do so only at the expense of the other. For example, requiring two or more findings excludes more women without PID but also reduces the number of women with PID who are detected.
Many episodes of PID go unrecognized. Although some women may have asymptomatic PID, others remain undiagnosed because the patient or the health-care provider fails to recognize the implications of mild or nonspecific symptoms or signs, such as abnormal bleeding, dyspareunia, or vaginal discharge ("atypical PID"). Because of the difficulty of diagnosis and the potential for damage to the reproductive health of women even by apparently mild or atypical PID, experts recommend that providers maintain a low threshold for diagnosing PID. Even so, the effect of early treatment of women with asymptomatic or atypical PID on important long-term clinical outcomes is unknown. The following recommendations for diagnosing PID are intended to help health-care providers recognize when PID should be suspected and when they need to obtain additional information to increase diagnostic certainty. These recommendations are based in part on the fact that diagnosis and management of other common causes of lower abdominal pain (e.g., ectopic pregnancy, acute appendicitis, and functional pain) are unlikely to be impaired by initiating empiric antimicrobial therapy for PID.
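The dependence of the PPV on the clinical setting noted above is the usual prevalence effect. As a reminder (this is the standard Bayes relation, not a formula stated in these guidelines), for sensitivity $Se$, specificity $Sp$, and prevalence $p$:

$$\mathrm{PPV} = \frac{Se \cdot p}{Se \cdot p + (1 - Sp)(1 - p)}$$

With fixed $Se$ and $Sp$, a higher prevalence of salpingitis (as among sexually active young women in STD clinics) yields a higher PPV, which is why the reported 65%-90% range varies with the population examined.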
No single therapeutic regimen has been established for persons with PID. When selecting a treatment regimen, health-care providers should consider availability, cost, patient acceptance, and regional differences in the antimicrobial susceptibility of the likely pathogens. Many experts recommend that all patients with PID be hospitalized so that supervised treatment with parenteral antibiotics can be initiated.
Hospitalization is especially recommended when any of the following criteria are met (a schematic check follows the list):
- The diagnosis is uncertain, and surgical emergencies such as appendicitis and ectopic pregnancy cannot be excluded;
- Pelvic abscess is suspected;
- The patient is pregnant;
- The patient is an adolescent (among adolescents, compliance with therapy is unpredictable);
- The patient has HIV infection;
- Severe illness or nausea and vomiting preclude outpatient management;
- The patient is unable to follow or tolerate an outpatient regimen;
- The patient has failed to respond clinically to outpatient therapy; or
- Clinical follow-up within 72 hours of starting antibiotic treatment cannot be arranged.
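The schematic check referred to above: hospitalization is triggered by any single criterion, not by a score. A minimal sketch (hypothetical labels; illustrative only):

```python
def recommend_hospitalization(criteria_met: dict[str, bool]) -> bool:
    """Return True if any hospitalization criterion from the list
    above is met; the rule is 'any', not a weighted score."""
    return any(criteria_met.values())

# Example: an adolescent with a suspected pelvic abscess.
print(recommend_hospitalization({
    "uncertain_diagnosis": False,
    "pelvic_abscess_suspected": True,
    "pregnant": False,
    "adolescent": True,
    "hiv_infection": False,
    "severe_illness_or_vomiting": False,
    "cannot_tolerate_outpatient_regimen": False,
    "failed_outpatient_therapy": False,
    "no_72_hour_followup": False,
}))  # True
```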
# Inpatient Treatment
Clinicians have extensive experience with both of the following regimens, and multiple randomized trials have demonstrated the efficacy of each.
# Regimen A
Cefoxitin 2 g IV every 6 hours or cefotetan 2 g IV every 12 hours, PLUS
Doxycycline 100 mg IV or orally every 12 hours.
NOTE: This regimen should be continued for at least 48 hours after the patient demonstrates substantial clinical improvement, after which doxycycline 100 mg orally 2 times a day should be continued for a total of 14 days. Doxycycline administered orally has bioavailability similar to that of the IV formulation and may be administered if normal gastrointestinal function is present. Clinical data are limited for other second- or third-generation cephalosporins (e.g., ceftizoxime, cefotaxime, and ceftriaxone), which might replace cefoxitin or cefotetan; although many authorities believe they also are effective therapy for PID, they are less active than cefoxitin or cefotetan against anaerobic bacteria.
# Regimen B
Ofloxacin 400 mg orally 2 times a day for 14 days, PLUS
Either clindamycin 450 mg orally 4 times a day, or metronidazole 500 mg orally 2 times a day, for 14 days.
Clinical trials have demonstrated that the cefoxitin regimen is effective in obtaining a short-term clinical response. Fewer data support the use of ceftriaxone or other third-generation cephalosporins but, based on their similarities to cefoxitin, they also are considered effective. No data exist regarding the use of oral cephalosporins for the treatment of PID. Ofloxacin is effective against both N. gonorrhoeae and C. trachomatis, and one clinical trial demonstrated the effectiveness of oral ofloxacin in obtaining a short-term clinical response in PID. Despite the results of this trial, there is concern about ofloxacin's lack of anaerobic coverage; the addition of clindamycin or metronidazole provides this coverage, and clindamycin, but not metronidazole, further enhances the Gram-positive coverage of the regimen.
# Alternative Outpatient Regimens
Information regarding other outpatient regimens is limited. The combination of amoxicillin/clavulanic acid plus doxycycline was effective in obtaining a short-term clinical response in one clinical trial, but many of the patients had to discontinue the regimen because of gastrointestinal symptoms.
# Follow-Up
Hospitalized patients receiving IV therapy should show substantial clinical improvement (e.g., defervescence; reduction in direct or rebound abdominal tenderness; and reduction in uterine, adnexal, and cervical motion tenderness) within 3-5 days of initiation of therapy. Patients who do not demonstrate improvement within this period usually require further diagnostic workup, surgical intervention, or both.
If the provider elects to prescribe outpatient therapy, a follow-up examination should be performed within 72 hours, using the criteria for clinical improvement described above. Because of the risk for persistent infection, particularly with C. trachomatis, patients should have a microbiologic re-examination 7-10 days after completing therapy. Some experts also recommend rescreening for C. trachomatis and N. gonorrhoeae 4-6 weeks after completing therapy.
# Management of Sex Partners
Evaluation and treatment of sex partners of women who have PID is imperative because of the risk for reinfection and the high likelihood of urethral gonococcal or chlamydial infection in the partner. Because nonculture, and perhaps culture, tests for C. trachomatis and N. gonorrhoeae are thought to be insensitive among asymptomatic men, sex partners should be treated empirically with regimens effective against both of these infections, regardless of the apparent etiology of PID or the pathogens isolated from the infected woman. Even in clinical settings in which only women are seen, special arrangements should be made to provide care for male sex partners of women with PID. When providing such care is not feasible, health-care providers should ensure that sex partners are referred for appropriate treatment.
# Special Considerations
# Pregnancy
Pregnant women with suspected PID should be hospitalized and treated with parenteral antibiotics.
# HIV Infection
Differences in the clinical manifestations of PID between HIV-infected women and noninfected women have not been described clearly. However, in one study, HIV-infected women with PID tended to have leukopenia or a less pronounced leukocytosis than women who were not HIV-infected, and they were more likely to require surgical intervention. HIV-infected women who develop PID should be managed aggressively; hospitalization and inpatient therapy with one of the IV antimicrobial regimens described in this report are recommended.
# EPIDIDYMITIS
Among men <35 years of age, epididymitis is most often caused by C. trachomatis or N. gonorrhoeae. Epididymitis caused by Gram-negative enteric organisms occurs more frequently among men >35 years of age and among men who have recently undergone urinary tract instrumentation or surgery.
# Diagnostic Considerations
Men with epididymitis typically have unilateral testicular pain and tenderness; palpable swelling of the epididymis is usually present. Testicular torsion, a surgical emergency, should be considered in all cases but is more frequent among adolescents. Emergency testing for torsion may be indicated when the onset of pain is sudden, pain is severe, or test results available during the initial visit do not permit a diagnosis of urethritis or urinary tract infection. The evaluation of men for epididymitis should include the following procedures:
- A Gram-stained smear of urethral exudate or an intraurethral swab specimen for N. gonorrhoeae and for NGU (≥5 polymorphonuclear leukocytes per oil immersion field);
- A culture of urethral exudate or an intraurethral swab specimen for N. gonorrhoeae;
- A test of an intraurethral swab specimen for C. trachomatis; and
- Culture and Gram-stained smear of uncentrifuged urine for Gram-negative bacteria.
# Treatment
Empiric therapy is indicated before culture results are available. Treatment of epididymitis caused by C. trachomatis or N. gonorrhoeae will result in microbiologic cure of infection, improve signs and symptoms, and prevent transmission to others. Patients with suspected sexually transmitted epididymitis should be treated with an antimicrobial regimen effective against C. trachomatis and N. gonorrhoeae.
Confirmation of these agents by testing will assist in partner notification efforts, but current tests for C. trachomatis are not sufficiently sensitive to exclude infection with that agent.
# Recommended Regimen
Ceftriaxone 250 mg IM in a single dose and
Doxycycline 100 mg orally 2 times a day for 10 days.
The effect of substituting the 125 mg dose of ceftriaxone recommended for the treatment of uncomplicated N. gonorrhoeae infection, or the azithromycin regimen recommended for the treatment of C. trachomatis infection, is unknown. As an adjunct to therapy, bed rest and scrotal elevation are recommended until fever and local inflammation have subsided.
# Alternative Regimen
Ofloxacin 300 mg orally 2 times a day for 10 days.
NOTE: Ofloxacin is contraindicated for persons ≤17 years of age.
# Follow-Up
Failure to improve within 3 days requires re-evaluation of both the diagnosis and the therapy, and consideration of hospitalization. Patients whose swelling and tenderness persist after completion of antimicrobial therapy should be evaluated for testicular cancer and for tuberculous or fungal epididymitis.
# Management of Sex Partners
Patients with epididymitis that is known or suspected to be caused by N. gonorrhoeae or C. trachomatis should be instructed to refer sex partners for evaluation and treatment. Sex partners of these patients should be referred if their contact with the index patient was within 30 days of onset of symptoms. Patients should be instructed to avoid sexual intercourse until patient and partner(s) are cured. In the absence of a microbiologic test-of-cure, this means until therapy is completed and patient and partner(s) are without symptoms.
# GENITAL WARTS
The choice of treatment for genital warts should be guided by the anatomic site, size, and number of warts, as well as by the expense, efficacy, convenience, and potential for adverse effects of the available therapies. Patients with extensive or refractory disease should be referred to an expert.
Carbon dioxide laser and conventional surgery are useful in the management of extensive warts, particularly for those patients who have not responded to other regimens; these alternatives are not appropriate for the treatment of limited lesions. One randomized trial of laser therapy indicated an efficacy of 43%, with recurrence among 95% of patients. A randomized trial of surgical excision demonstrated an efficacy of 93%, with recurrences among 29% of patients. Neither these therapies nor the more cost-effective treatments eliminate HPV infection.
Interferon therapy is not recommended because of its cost and its association with a high frequency of adverse side effects; its efficacy is no greater than that of other available therapies. Two randomized trials established systemic interferon alpha to be no more effective than placebo. The efficacy of interferon injected directly into genital warts (intralesional therapy) during two randomized trials was 44%-61%, with recurrences among none to 67% of patients.
Therapy with 5-fluorouracil cream has not been evaluated in controlled studies, frequently causes local irritation, and is not recommended for the treatment of genital warts.
Cryotherapy is relatively inexpensive, does not require anesthesia, and does not result in scarring if performed properly. Special equipment is required, and most patients experience moderate pain during and after the procedure. Efficacy during four randomized trials was 63%-88%, with recurrences among 21%-39% of patients.
# External Genital/Perianal Warts
Therapy with 0.5% podofilox solution is relatively inexpensive, simple to use, and safe, and it is self-applied by patients at home.
Unlike podophyllin, podofilox is a pure compound with a stable shelf-life, and it does not need to be washed off. Most patients experience mild-to-moderate pain or local irritation after treatment. Heavily keratinized warts may not respond as well as those on moist mucosal surfaces. To apply the podofilox solution safely and effectively, the patient must be able to see and reach the warts easily. Efficacy during five recent randomized trials was 45%-88%, with recurrences among 33%-60% of patients.
Podophyllin therapy is relatively inexpensive, simple to use, and safe. Compared with other available therapies, a larger number of treatments may be required. Most patients experience mild-to-moderate pain or local irritation after treatment, and heavily keratinized warts may not respond as well as those on moist mucosal surfaces. Efficacy in four recent randomized trials was 32%-79%, with recurrences among 27%-65% of patients.
Few data on the efficacy of TCA are available. One randomized trial among men demonstrated 81% efficacy and recurrence among 36% of patients; the frequency of adverse reactions was similar to that seen with cryotherapy. One study among women showed efficacy and frequency of patient discomfort similar to those of podophyllin. No data on the efficacy of bichloroacetic acid are available.
Few data on the efficacy of electrodesiccation are available. One randomized trial of electrodesiccation demonstrated an efficacy of 94%, with recurrences among 22% of patients; another randomized trial, of diathermocoagulation, demonstrated an efficacy of 35%. Local anesthesia is required, and patient discomfort is usually moderate.
# Cervical Warts
For women with exophytic cervical warts, dysplasia must be excluded before treatment is begun. Management should be carried out in consultation with an expert.
# Vaginal Warts
# Oral Warts
Cryotherapy with liquid nitrogen or
Electrodesiccation or electrocautery or
Surgical removal.
# Follow-Up
After warts have responded to therapy, follow-up is not necessary. Annual cytologic screening is recommended for women with or without genital warts. The presence of genital warts is not an indication for colposcopy.
# Management of Sex Partners
Examination of sex partners is not necessary for the management of genital warts because the role of reinfection is probably minimal. However, many sex partners have obvious exophytic warts and may desire treatment; partners also may benefit from counseling. Patients with exophytic anogenital warts should be made aware that they are contagious to uninfected sex partners. The majority of partners, however, are probably already subclinically infected with HPV, even if they do not have visible warts. No practical screening tests for subclinical infection are available. The use of condoms may reduce transmission to partners likely to be uninfected, such as new partners; however, the period of communicability is unknown. Experts speculate that HPV infection may persist throughout a patient's lifetime in a dormant state and become infectious intermittently. Whether patients with subclinical HPV infection are as contagious as patients with exophytic warts is unknown.
# Special Considerations
# Pregnancy
The use of podophyllin and podofilox is contraindicated during pregnancy. Genital papillary lesions have a tendency to proliferate and to become friable during pregnancy.
Many experts advocate removal of visible warts during pregnancy. HPV types 6 and 11 can cause laryngeal papillomatosis among infants. The route of transmission (transplacental, birth canal, or postnatal) is unknown, and laryngeal papillomatosis has occurred among infants delivered by caesarean section; hence, the preventive value of caesarean delivery is unknown. Caesarean delivery must not be performed solely to prevent transmission of HPV infection to the newborn. However, in rare instances, caesarean delivery may be indicated for women with genital warts if the pelvic outlet is obstructed or if vaginal delivery would result in excessive bleeding.
# HIV Infection
Persons infected with HIV may not respond to therapy for HPV as well as persons without HIV infection.
# Subclinical Genital HPV Infection (Without Exophytic Warts)
Subclinical genital HPV infection is much more common than exophytic warts among both men and women. Infection often is diagnosed indirectly on the cervix by Pap smear, colposcopy, or biopsy, and on the penis, vulva, and other genital skin by the appearance of white areas after application of acetic acid. However, acetowhitening is not a specific test for HPV infection, and false-positive tests are common. Definitive diagnosis of HPV infection relies on detection of viral nucleic acid (DNA or RNA) or capsid proteins. Pap smear diagnosis of HPV generally does not correlate well with detection of HPV DNA in cervical cells. Cell changes attributed to HPV in the cervix are similar to those of mild dysplasia and often regress spontaneously without treatment. Tests for the detection of several types of HPV DNA in cells scraped from the cervix are now widely available, but the clinical utility of these tests for managing patients is not known. Management decisions should not be made on the basis of HPV DNA tests. Screening for subclinical genital HPV infection using DNA tests or acetic acid is not recommended.
# Treatment
In the absence of coexistent dysplasia, treatment is not recommended for subclinical genital HPV infection diagnosed by Pap smear, colposcopy, biopsy, acetic acid soaking of genital skin or mucous membranes, or the detection of HPV nucleic acids (DNA or RNA) or capsid antigen, because the diagnosis often is questionable and no therapy has been demonstrated to eradicate infection. HPV has been demonstrated in adjacent tissue both after laser treatment of HPV-associated dysplasia and after attempts to eliminate subclinical HPV by extensive laser vaporization of the anogenital area of men and women. In the presence of coexistent dysplasia, management should be based on the grade of dysplasia.
# Management of Sex Partners
Examination of sex partners is not necessary. The majority of partners are probably already infected subclinically with HPV. No practical screening tests for subclinical infection are available. The use of condoms may reduce transmission to partners likely to be uninfected, such as new partners; however, the period of communicability is unknown. Experts speculate that HPV infection may persist throughout a patient's lifetime in a dormant state and become infectious intermittently. Whether patients with subclinical HPV infection are as contagious as patients with exophytic warts is unknown.
# HEPATITIS B
# Vaccination Eligibility
Persons known to be at high risk for acquiring HBV (e.g., persons with multiple sex partners, sex partners of HBV carriers, and injecting drug users) should be advised of their risk for HBV infection (as well as HIV infection) and of the means to reduce their risk (i.e., exclusivity in sexual relationships, use of condoms, avoidance of nonsterile drug injection equipment, and HBV vaccination). Selected high-risk groups for which HBV vaccination is recommended by the ACIP include the following persons:
- Sexually active homosexual and bisexual men,
- Men and women diagnosed as having recently acquired another STD, and
- Persons who have had more than one sex partner in the preceding 6 months.
Such persons should be vaccinated unless they are immune to HBV as a result of past infection or vaccination. Refer to Hepatitis B Virus: A Comprehensive Strategy for Eliminating Transmission in the United States Through Universal Child Vaccination, Recommendations of the Advisory Committee on Immunization Practices (ACIP) (17).
# Screening for Antibody Versus Vaccination Without Screening
The prevalence of past HBV infection among sexually active homosexual men and among injecting drug users is high. Serologic screening for evidence of past infection before vaccinating members of these groups may be cost-effective, depending on the relative costs of laboratory testing and vaccine. Among persons attending STD clinics, it may be cost-effective to screen older persons for past infection: in a recent study of 2,000 STD clinic patients who accepted HBV vaccination, 28% of those ≥25 years of age had evidence of past infection, compared with only 7% of persons <25 years of age.
Past infection with HBV can be detected with a serologic test for antibody to the hepatitis B core antigen (anti-HBc). Immunity can be demonstrated by a test for antibody to the hepatitis B surface antigen (anti-HBs). The HBV carrier state can be detected by a test for HBsAg. If only a test for anti-HBc is used to screen for susceptibility to infection, persons immune because of prior vaccination will be falsely classified as susceptible; if only a test for anti-HBs is used, carriers will be falsely classified as susceptible.
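The three markers described in the preceding paragraph form a small decision table. The sketch below is a simplified illustration only (real serologic interpretation involves additional states and confirmatory testing):

```python
def interpret_hbv_serology(anti_hbc: bool, anti_hbs: bool, hbsag: bool) -> str:
    """Simplified reading of the markers above: anti-HBc marks past
    infection, anti-HBs marks immunity, HBsAg marks the carrier state."""
    if hbsag:
        return "carrier"
    if anti_hbs and anti_hbc:
        return "immune (past infection)"
    if anti_hbs:
        return "immune (vaccination)"
    if anti_hbc:
        return "past infection, immunity uncertain"  # isolated anti-HBc
    return "susceptible"
```

This also shows why single-marker screening misclassifies: anti-HBc alone labels vaccinated persons susceptible, and anti-HBs alone labels carriers susceptible.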
# Vaccination Schedules
The usual vaccination schedule is three doses of vaccine at 0, 1, and 6 months. An alternative schedule of 0, 1, 2, and 12 months also has been approved for one vaccine. The dose for adults is 1 mL, which must be administered IM in the deltoid muscle, not in a buttock. For persons 11-19 years of age, the dose is either 0.5 mL or 1 mL, depending on the manufacturer of the vaccine.
# Management of Persons Exposed to HBV
Susceptible persons exposed to HBV through sexual contact with a person who has acute or chronic HBV infection should receive postexposure prophylaxis with 0.06 mL/kg of HBIG in a single IM dose within 14 days of their last exposure; early administration may be more effective. HBIG administration should be followed by the standard three-dose immunization series with HBV vaccine, beginning at the time of HBIG administration.
# Special Considerations
# Pregnancy
Pregnancy is not a contraindication to administration of HBV vaccine or HBIG.
# HIV Infection
Among HIV-infected persons, HBV infection is more likely to lead to chronic HBV carriage. HIV infection also impairs the response to HBV vaccine. Therefore, HIV-infected persons who are vaccinated should be tested for anti-HBs 1-2 months after the third vaccine dose. Revaccination with one or more doses should be considered for those who do not respond to vaccination initially. Those who do not respond to additional doses should be advised that they may remain susceptible.
# PROCTITIS, PROCTOCOLITIS, AND ENTERITIS
Sexually transmitted gastrointestinal syndromes include proctitis, proctocolitis, and enteritis. Proctitis occurs predominantly among persons who participate in anal intercourse, and enteritis occurs among those whose sexual practices include oral-fecal contact. Proctocolitis may be acquired by either route, depending on the pathogen. Evaluation should include appropriate diagnostic procedures, such as anoscopy or sigmoidoscopy, stool examination, and culture.
Proctitis is an inflammation limited to the rectum (the distal 10-12 cm) that is associated with anorectal pain, tenesmus, and rectal discharge. N. gonorrhoeae, C. trachomatis (including LGV serovars), T. pallidum, and HSV are the most common sexually transmitted pathogens involved. Among patients coinfected with HIV, herpes proctitis may be especially severe.
Proctocolitis is associated with the symptoms of proctitis plus diarrhea and/or abdominal cramps, with inflammation of the colonic mucosa extending beyond 12 cm. Pathogenic organisms include Campylobacter spp, Shigella spp, Entamoeba histolytica, and, rarely, C. trachomatis (LGV serovars). CMV or other opportunistic agents may be involved among immunosuppressed patients with HIV infection.
Enteritis usually results in diarrhea and abdominal cramping without signs of proctitis or proctocolitis. In otherwise healthy patients, Giardia lamblia is most commonly implicated. Among patients with HIV infection, other infections that generally are not sexually transmitted may occur, including CMV, Mycobacterium avium-intracellulare, Salmonella spp, Cryptosporidium, Microsporidium, and Isospora. Multiple stool examinations may be necessary to detect Giardia, and special stool preparations are required to diagnose cryptosporidiosis and microsporidiosis. Additionally, enteritis may be a primary effect of HIV infection.
When laboratory diagnostic capabilities are available, treatment should be based on the specific diagnosis. Diagnostic and treatment recommendations for all enteric infections are beyond the scope of these guidelines.
# Treatment
Acute proctitis of recent onset among persons who have recently practiced receptive anal intercourse is most often sexually transmitted. Such patients should be examined by anoscopy and should be evaluated for infection with HSV, N. gonorrhoeae, C. trachomatis, and T. pallidum. If anorectal pus is found on examination, or if polymorphonuclear leukocytes are found on a Gram-stained smear of anorectal secretions, the following therapy may be prescribed pending results of further laboratory tests.
# Recommended Regimen
Ceftriaxone 125 mg IM (or another agent effective against anal and genital gonorrhea) and
Doxycycline 100 mg orally 2 times a day for 7 days.
NOTE: For patients with herpes proctitis, refer to Genital Herpes Simplex Virus Infections.
# Follow-Up
Follow-up should be based on the specific etiology and the severity of clinical symptoms. Reinfection may be difficult to distinguish from treatment failure.
# Management of Sex Partners
Partners of patients with sexually transmitted enteric infections should be evaluated for any diseases diagnosed in the index patient.
# ECTOPARASITIC INFECTIONS
# Pediculosis Pubis
Patients with pediculosis pubis (pubic lice) usually seek medical attention because of pruritus.
Commonly, they also notice lice on pubic hair.
# Recommended Regimens
Lindane 1% shampoo applied for 4 minutes and then thoroughly washed off (not recommended for pregnant or lactating women or for children <2 years of age)
or
Permethrin 1% creme rinse applied to affected areas and washed off after 10 minutes
or
Pyrethrins with piperonyl butoxide applied to the affected area and washed off after 10 minutes.
# Follow-Up Examination 2 Weeks After Assault
Examination for sexually transmitted infections should be repeated 2 weeks after the assault. Because infectious agents acquired through assault may not have produced sufficient concentrations of organisms to result in positive tests at the initial examination, culture and wet mount tests should be repeated at the 2-week visit unless prophylactic treatment has already been provided.
# Follow-Up Examination 12 Weeks After Assault
Serologic tests for syphilis and HIV infection should be performed 12 weeks after the assault. If these tests are positive, testing of the sera collected at the initial examination will assist in determining whether the infection antedated the assault.
# Prophylaxis
Although not all experts agree, most patients probably benefit from prophylaxis because a) follow-up of patients who have been sexually assaulted can be difficult, and b) patients may be reassured if offered treatment or prophylaxis for possible infection. The following prophylactic measures address the more common microorganisms:
- HBV vaccination (see HEPATITIS B).
- Antimicrobial therapy: empiric regimen for chlamydial, gonococcal, and trichomonal infections and for BV.
# Recommended Regimen
Ceftriaxone 125 mg IM in a single dose
PLUS
Metronidazole 2 g orally in a single dose
PLUS
Doxycycline 100 mg orally 2 times a day for 7 days.
# NOTE: For patients requiring alternative treatments, see the appropriate sections of this report addressing those agents.
# Other Management Considerations
At the initial examination and, as indicated, at follow-up examinations, patients should be counseled regarding the following:
- Symptoms of STDs and the need for immediate examination if symptoms occur, and
- Use of condoms for sexual intercourse until STD prophylactic treatment is completed.
Obtaining the indicated specimens requires skill to avoid psychological and physical trauma to the child. The clinical manifestations of some sexually transmitted infections differ between children and adults. Examinations and specimen collection should be conducted by practitioners who have experience and training in the evaluation of abused or assaulted children.
A principal purpose of the examination is to obtain evidence of an infection that is likely to have been sexually transmitted. However, because of the legal and psychosocial consequences of a false-positive diagnosis, only tests with high specificities should be used. Additional cost and time are justified to obtain such tests.
The scheduling of examinations should depend on the history of assault or abuse. If initial exposure is recent, infectious agents acquired through the exposure may not have produced sufficient concentrations of organisms to result in positive tests. A follow-up visit approximately 2 weeks after the last sexual exposure should include a repeat physical examination and collection of additional specimens. To allow sufficient time for antibody to develop, another follow-up visit approximately 12 weeks after the last sexual exposure also is necessary to collect sera.
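Because both the adult post-assault schedule and the child-abuse evaluation key their return visits to the same 2-week and 12-week intervals after the last sexual exposure, the date arithmetic can be stated compactly. The sketch below is illustrative only; the function name and dictionary keys are assumptions, while the intervals come from the text.

```python
# Hypothetical helper: compute the follow-up windows described above from
# the date of the last sexual exposure. Not a scheduling tool.
from datetime import date, timedelta

def follow_up_schedule(last_exposure: date) -> dict:
    return {
        # repeat physical examination, cultures, and wet mount tests
        "repeat_exam_and_cultures": last_exposure + timedelta(weeks=2),
        # sera for syphilis, HIV, and HBV once antibody has had time to develop
        "serology_visit": last_exposure + timedelta(weeks=12),
    }

print(follow_up_schedule(date(1993, 1, 21)))
# {'repeat_exam_and_cultures': datetime.date(1993, 2, 4),
#  'serology_visit': datetime.date(1993, 4, 15)}
```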
A single examination may be sufficient if the child was abused for an extended period of time, or if the last suspected episode of abuse took place some time before the child received the medical evaluation.
The following recommendation for scheduling examinations is a general guide. The exact timing and nature of follow-up contacts should be determined on an individual basis and should take into account the patient's psychological and social needs. Compliance with follow-up appointments may be improved when law enforcement personnel or child protective services are involved.
# Initial and 2-Week Examinations
During the initial examination and 2-week follow-up examination (if indicated), the following should be performed:
- Cultures for N. gonorrhoeae from specimens collected from the pharynx and anus in both sexes, the vagina in girls, and the urethra in boys. Cervical specimens are not recommended for prepubertal girls. For boys, a meatal specimen of urethral discharge is an adequate substitute for an intraurethral swab specimen when discharge is present. Only standard culture systems for the isolation of N. gonorrhoeae should be used. All presumptive isolates of N. gonorrhoeae should be confirmed by at least two tests that involve different principles (e.g., biochemical, enzyme substrate, or serologic methods). Isolates should be preserved in case additional or repeated testing is needed.
- Cultures for C. trachomatis from specimens collected from the anus in both sexes and from the vagina in girls. Limited information suggests that the likelihood of recovering chlamydia from the urethra of prepubertal boys is too low to justify the trauma involved in obtaining an intraurethral specimen; a urethral specimen should be obtained if urethral discharge is present. Pharyngeal specimens for C. trachomatis also are not recommended for either sex because the yield is low, perinatally acquired infection may persist beyond infancy, and culture systems in some laboratories do not distinguish between C. trachomatis and C. pneumoniae.
# Reporting
All states, as well as the District of Columbia, Puerto Rico, Guam, the Virgin Islands, and American Samoa, have laws that require the reporting of child abuse. The exact requirements vary from state to state, but, generally, if there is reasonable cause to suspect child abuse, it must be reported. Health-care providers should contact their state or local child protective service agency about child abuse reporting requirements in their areas.
# Recommended Regimen
Metronidazole 2 g orally in a single dose.
# Alternative Regimen
Metronidazole 500 mg twice daily for 7 days.
Only metronidazole is available in the United States for the treatment of trichomoniasis. In randomized clinical trials, both of the recommended metronidazole regimens have resulted in cure rates of approximately 95%. Treatment of the patient and sex partner results in relief of symptoms, microbiologic cure, and reduction of transmission. Metronidazole gel has been approved for the treatment of BV, but it has not been studied for the treatment of trichomoniasis; earlier preparations of metronidazole for topical vaginal therapy demonstrated low efficacy against trichomoniasis.
# Follow-Up
Follow-up is unnecessary for men and for women who become asymptomatic after treatment. Infections caused by strains of T. vaginalis with diminished susceptibility to metronidazole occur; however, most of these organisms respond to higher doses of metronidazole.
If failure occurs with either regimen, the patient should be retreated with metronidazole 500 mg 2 times a day for 7 days. If repeated failure occurs, the patient should be treated with metronidazole 2 g orally once daily for 3-5 days. Patients with culture-documented infection who do not respond to the regimens described in this report, and in whom reinfection has been excluded, should be managed in consultation with an expert. Evaluation of such cases should include determination of the susceptibility of T. vaginalis to metronidazole.
# Management of Sex Partners
Sex partners should be treated. Patients should be instructed to avoid sex until patient and partner(s) are cured. In the absence of a microbiologic test-of-cure, this means when therapy has been completed and patient and partner(s) are without symptoms.
# Special Considerations
# Allergy, Intolerance, or Adverse Reactions
Effective alternatives to therapy with metronidazole are not available.
# Pregnancy
The use of metronidazole is contraindicated in the first trimester of pregnancy. Patients may be treated after the first trimester with 2 g of metronidazole in a single dose.
# Minimum Criteria
Empiric treatment of PID should be instituted on the basis of the presence of all of the following three minimum clinical criteria for pelvic inflammation and in the absence of an established cause other than PID:
- Lower abdominal tenderness,
- Adnexal tenderness, and
- Cervical motion tenderness.
# Additional Criteria
For women with severe clinical signs, more elaborate diagnostic evaluation is warranted because incorrect diagnosis and management may cause unnecessary morbidity. These additional criteria may be used to increase the specificity of the diagnosis.
Listed below are the routine criteria for diagnosing PID:
- Oral temperature >38.3 °C,
- Abnormal cervical or vaginal discharge,
- Elevated erythrocyte sedimentation rate,
- Elevated C-reactive protein,
- Laboratory documentation of cervical infection with N. gonorrhoeae or C. trachomatis.
Listed below are the elaborate criteria for diagnosing PID:
- Histopathologic evidence of endometritis on endometrial biopsy,
- Tubo-ovarian abscess on sonography or other radiologic tests,
- Laparoscopic abnormalities consistent with PID.
Although initial treatment decisions can be made before bacteriologic diagnosis of C. trachomatis or N. gonorrhoeae infection, such a diagnosis emphasizes the need to treat sex partners.
# Treatment
PID therapy regimens must provide empiric, broad-spectrum coverage of likely pathogens. Antimicrobial coverage should include N. gonorrhoeae, C. trachomatis, Gram-negative facultative bacteria, anaerobes, and streptococci. Although several antimicrobial regimens have proven effective in achieving clinical and microbiologic cure in randomized clinical trials with short-term follow-up, few studies have assessed and compared the elimination of infection from the endometrium and fallopian tubes or the incidence of long-term complications such as tubal infertility and ectopic pregnancy.
# Regimen B
Clindamycin 900 mg IV every 8 hours,
PLUS
Gentamicin loading dose IV or IM (2 mg/kg of body weight) followed by a maintenance dose (1.5 mg/kg) every 8 hours (see the dosing sketch below).
NOTE: This regimen should be continued for at least 48 hours after the patient demonstrates substantial clinical improvement, then followed with doxycycline 100 mg orally 2 times a day or clindamycin 450 mg orally 4 times a day to complete a total of 14 days of therapy.
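As a minimal illustration of the weight-based arithmetic in Regimen B, the sketch below computes the loading and maintenance doses from the 2 mg/kg and 1.5 mg/kg figures above. The function name and output format are assumptions, and this is not a dosing tool; it only restates the multiplication the regimen implies.

```python
# Illustrative sketch of the gentamicin arithmetic in Regimen B:
# 2 mg/kg loading dose, then 1.5 mg/kg every 8 hours.
def gentamicin_doses(weight_kg: float) -> dict:
    return {
        "loading_mg": 2.0 * weight_kg,      # single IV or IM loading dose
        "maintenance_mg": 1.5 * weight_kg,  # repeated every 8 hours
        "maintenance_interval_h": 8,
    }

print(gentamicin_doses(60.0))
# {'loading_mg': 120.0, 'maintenance_mg': 90.0, 'maintenance_interval_h': 8}
```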
When tubo-ovarian abscess is present, many health-care providers use clindamycin for continued therapy rather than doxycycline, because it provides more effective anaerobic coverage. Clindamycin administered intravenously appears to be effective against C. trachomatis infection; however, the effectiveness of oral clindamycin against C. trachomatis has not been determined.
Alternative Inpatient Regimens. Limited data support the use of other inpatient regimens, but two regimens have undergone at least one clinical trial and have broad-spectrum coverage. Ampicillin/sulbactam plus doxycycline has good anaerobic coverage and appears to be effective for patients with a tubo-ovarian abscess. Intravenous ofloxacin has been studied as a single agent. A regimen of ofloxacin plus either clindamycin or metronidazole provides broad-spectrum coverage. Evidence is insufficient to support the use of any single-agent regimen for inpatient treatment of PID.
# Outpatient Treatment
Clinical trials of outpatient regimens have provided little information regarding intermediate and long-term outcomes. The following regimens provide coverage against the common etiologic agents of PID, but evidence from clinical trials supporting their use is limited. The second regimen provides broader coverage against anaerobic organisms but costs substantially more than the other regimen. Patients who do not respond to outpatient therapy within 72 hours should be hospitalized to confirm the diagnosis and to receive parenteral therapy.
# Regimen A
Cefoxitin 2 g IM plus probenecid 1 g orally in a single dose concurrently, or ceftriaxone 250 mg IM or another parenteral third-generation cephalosporin (e.g., ceftizoxime or cefotaxime),
PLUS
Doxycycline 100 mg orally 2 times a day for 14 days.
# Special Considerations
# HIV Infection
Persons with HIV infection and uncomplicated epididymitis should receive the same treatment as persons without HIV infection. Fungal and mycobacterial causes of epididymitis are more common, however, among patients who are immunocompromised.
# HUMAN PAPILLOMAVIRUS INFECTION
# Genital Warts
Exophytic genital and anal warts are benign growths most commonly caused by HPV types 6 or 11. Other types that may be present in the anogenital region (e.g., types 16, 18, 31, 33, and 35) have been strongly associated with genital dysplasia and carcinoma. These types are usually associated with subclinical infection but occasionally are found in exophytic warts.
# Treatment
The goal of treatment is removal of exophytic warts and the amelioration of signs and symptoms, not the eradication of HPV. No therapy has been shown to eradicate HPV. HPV has been identified in adjacent tissue after laser treatment of HPV-associated cervical intraepithelial neoplasia and after attempts to eliminate subclinical HPV by extensive laser vaporization of the anogenital area.
Genital warts are generally benign growths that cause minor or no symptoms aside from their cosmetic appearance. Treatment of external genital warts is not likely to influence the development of cervical cancer. Numerous randomized clinical trials and other treatment studies have demonstrated that currently available therapeutic methods are 22%-94% effective in clearing external exophytic genital warts, and that recurrence rates are high (usually at least 25% within 3 months) with all modalities. Several well-designed studies have indicated that treatment is more successful for genital warts that are small and that have been present <1 year.
No studies have assessed whether treatment of exophytic warts reduces transmission of HPV. Many experts speculate that exophytic warts may be more infectious than subclinical infection, and therefore that the risk for transmission might be reduced by "debulking" genital warts. Most experts agree that recurrences of genital warts more commonly result from reactivation of subclinical infection than from reinfection by a sex partner. The effect of treatment on the natural history of HPV is unknown. If left untreated, genital warts may resolve on their own, remain unchanged, or grow. In placebo-controlled studies, genital warts have cleared spontaneously without treatment in 20%-30% of patients within 3 months.
# Regimens
Treatment of genital warts should be guided by the preference of the patient. Expensive therapies, toxic therapies, and procedures that result in scarring should be avoided. A specific treatment regimen should be chosen with consideration given to
# CERVICAL CANCER SCREENING FOR WOMEN WHO ATTEND STD CLINICS OR WHO HAVE A HISTORY OF STDs
Women who have a history of STDs are at increased risk for cervical cancer, and women attending STD clinics may have additional characteristics that place them at even higher risk. Prevalence studies have found that precursor lesions for cervical cancer occur approximately five times more often among women attending STD clinics than among women attending family planning clinics.
The Pap smear (cervical smear) is an effective and relatively low-cost screening test for invasive cervical cancer and squamous intraepithelial lesions (SIL)*, the precursors of cervical cancer. The screening guidelines of both the American College of Obstetricians and Gynecologists and the American Cancer Society recommend annual Pap smears for sexually active women. Although these guidelines take the position that Pap smears can be obtained less frequently in some situations, women who attend STD clinics or who have a history of STDs should be screened annually because of their increased risk for cervical cancer. Moreover, reports from STD clinics indicate that many women do not understand the purpose or importance of Pap smears, and many women who have had a pelvic examination believe they have had a Pap smear when they actually have not.
# Recommendations
Whenever a woman has a pelvic examination for STD screening, the health-care provider should inquire about the result of her last Pap smear and should discuss the following information with the patient:
- Purpose and importance of the Pap smear,
- Whether a Pap smear was obtained during the clinic visit,
- Need for a Pap smear each year,
- Names of local providers or referral clinics that can obtain Pap smears and adequately follow up results (if a Pap smear was not obtained during this examination).
If a woman has not had a Pap smear during the previous 12 months, a Pap smear should be obtained as part of the routine pelvic examination in most situations. Health-care providers should be aware that, after a pelvic examination, many women may believe they have had a Pap smear when they actually have not, and therefore may report having had a recent Pap smear. In STD clinics, a Pap smear should be obtained during the routine clinical evaluation of women who have not had a documented normal smear within the past 12 months.
*The 1988 Bethesda System for Reporting Cervical/Vaginal Cytologic Diagnoses introduced the new terms low-grade squamous intraepithelial lesion (SIL) and high-grade SIL.
Low-grade SIL encompasses cellular changes associated with HPV and mild dysplasia/cervical intraepithelial neoplasia 1 (CIN 1). High-grade SIL includes moderate dysplasia/CIN 2, severe dysplasia/CIN 3, and carcinoma in situ (CIS)/CIN 3 (16).
A woman may benefit from receiving printed information about Pap smears and a report containing a statement that a Pap smear was obtained during her clinic visit. Whenever possible, a copy of the Pap smear result should be sent to the patient for her records.
# Follow-Up
If a Pap smear shows severe inflammation with reactive cellular changes, the woman should be advised to have another Pap smear within 3 months. If possible, underlying infection should be treated before the repeat Pap smear is obtained.
If a Pap smear shows either SIL (or equivalent) or atypical squamous cells of undetermined significance (ASCUS), the woman should be notified promptly and appropriate follow-up initiated. Appropriate follow-up of a Pap smear showing high-grade SIL (or equivalent) should always include referral to a health-care provider who has the capacity to provide a colposcopic examination of the lower genital tract and, if indicated, colposcopically directed biopsies. Because clinical follow-up of abnormal Pap smears with colposcopy and biopsy is beyond the scope of many public clinics, including most STD clinics, in most situations women with Pap smears demonstrating these abnormalities will need to be referred to other local providers or clinics. Women with either low-grade SIL or ASCUS need similar follow-up, although some experts believe that, in some situations, a repeat Pap smear may be a satisfactory alternative to referral for colposcopy and biopsy.
# Other Management Considerations
Other considerations in performing Pap smears are the following:
- The Pap smear is not an effective screening test for STDs;
- If a woman is menstruating, a Pap smear should be postponed and the woman should be advised to have a Pap smear at the earliest opportunity;
- If a woman has an obvious severe cervicitis, the Pap smear may be deferred until after antibiotic therapy has been completed to obtain an optimum smear;
- A woman with external genital warts does not require Pap smears more frequently than a woman without warts, unless otherwise indicated.
# Special Considerations
Pregnancy
Women who are pregnant should have a Pap smear as part of routine prenatal care. A cytobrush may be used for obtaining Pap smears from pregnant women, although care should be taken not to disrupt the mucous plug.
# HIV Infection
Recent studies have documented an increased prevalence of SIL among women infected with HIV. HIV also may hasten the progression of precursor lesions to invasive cervical cancer; however, evidence supporting such a progression is limited. The following provisional recommendations for Pap smear screening among HIV-infected women are based partially on consultation with experts in the care and management of cervical cancer and HIV infection among women. These provisional recommendations may be altered in the future as more information regarding cervical disease among HIV-infected women becomes available:
- Women who are HIV-infected should be advised to have a comprehensive gynecologic examination, including a Pap smear, as part of their initial medical evaluation.
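The Follow-Up rules earlier in this section amount to a small triage table, restated below as illustrative decision logic. The result labels and function name are assumptions; clinical judgment and local referral capacity govern actual disposition.

```python
# Illustrative restatement of the Pap smear follow-up rules above.
# Not a clinical decision tool.
def pap_follow_up(result: str) -> str:
    if result == "high-grade SIL":
        # Always refer for colposcopy and, if indicated, directed biopsy.
        return "refer for colposcopy and, if indicated, directed biopsy"
    if result in ("low-grade SIL", "ASCUS"):
        # Similar follow-up; some experts accept a repeat Pap smear
        # in some situations instead of immediate referral.
        return "refer for colposcopy, or repeat Pap smear per expert judgment"
    if result == "severe inflammation with reactive changes":
        return "treat underlying infection if possible; repeat Pap within 3 months"
    return "normal: repeat annually for women who attend STD clinics"
```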
# HEPATITIS B
Hepatitis B is a common STD. Sexual transmission accounts for an estimated one-third to two-thirds of the estimated 200,000 to 300,000 new HBV infections that occurred annually in the United States during the past 10 years. Of persons infected as adults, 6%-10% become chronic HBV carriers. These persons are capable of transmitting HBV to others and are at risk for developing fatal complications. HBV leads to an estimated 5,000 deaths annually in the United States from cirrhosis of the liver and hepatocellular carcinoma.
The risk of perinatal HBV infection among infants born to HBV-infected mothers ranges from 10% to 85%, depending on the mother's hepatitis B e antigen status. Infected newborns usually become HBV carriers and are at high risk for developing chronic liver disease.
# Prevention
Infection of both adults and neonates can be readily prevented with a safe and effective vaccine that has been used in the United States for more than 10 years. Universal vaccination of newborns is now recommended (17). The use of hepatitis B immune globulin (HBIG) combined with vaccination can prevent infection among persons exposed sexually to HBV if administered within 14 days of exposure.
The lindane regimen remains the least expensive therapy for pediculosis pubis; toxicity (as indicated by seizure and aplastic anemia) has not been reported when treatment is limited to the recommended 4-minute period. Permethrin has less potential for toxicity in the event of inappropriate use.
# Other Management Considerations
The recommended regimens should not be applied to the eyes. Pediculosis of the eyelashes should be treated by applying occlusive ophthalmic ointment to the eyelid margins two times a day for 10 days. Bedding and clothing should be decontaminated (machine washed or machine dried using the heat cycle, or dry-cleaned) or removed from body contact for at least 72 hours. Fumigation of living areas is not necessary.
# Follow-Up
Patients should be evaluated after 1 week if symptoms persist. Re-treatment may be necessary if lice are found or if eggs are observed at the hair-skin junction. Patients who are not responding to one of the recommended regimens should be retreated with an alternative regimen.
# Management of Sex Partners
Sex partners within the last month should be treated.
# Special Considerations
Pregnancy
Pregnant and lactating women should be treated with permethrin or pyrethrins with piperonyl butoxide.
# HIV Infection
Persons with HIV infection and pediculosis pubis should receive the same treatment as those without HIV infection.
# Scabies
The predominant symptom of scabies is pruritus. For pruritus to occur, sensitization to Sarcoptes scabiei must occur. Among persons with their first infection, sensitization takes several weeks to develop, whereas pruritus may occur within 24 hours after reinfestation. Scabies among adults may be sexually transmitted, although scabies among children usually is not.
# Recommended Regimen
Permethrin cream (5%) applied to all areas of the body from the neck down and washed off after 8-14 hours,
or
Lindane (1%) 1 oz. of lotion or 30 g of cream applied thinly to all areas of the body from the neck down and washed off thoroughly after 8 hours.
# NOTE: Lindane should not be used following a bath, and it should not be used by persons with extensive dermatitis, pregnant or lactating women, or children <2 years of age.
# Alternative Regimen
Crotamiton (10%) applied to the entire body from the neck down, nightly for 2 consecutive nights, and washed off 24 hours after the second application.
Permethrin is effective and safe but costs more than lindane. Lindane is effective in most areas of the country, but lindane resistance has been reported in some areas of the world, including parts of the United States. Seizures have occurred when lindane was applied after a bath or used by patients with extensive dermatitis. Aplastic anemia following lindane use also has been reported.
# Other Management Considerations
Bedding and clothing should be decontaminated (machine washed or machine dried using the hot cycle, or dry-cleaned) or removed from body contact for at least 72 hours. Fumigation of living areas is not necessary.
# Follow-Up
Pruritus may persist for several weeks. Some experts recommend re-treatment after 1 week for patients who are still symptomatic; other experts recommend re-treatment only if live mites can be observed. Patients who are not responding to the recommended treatment should be retreated with an alternative regimen.
# Management of Sex Partners
Both sexual and close personal or household contacts within the last month should be examined and treated.
# Special Considerations
# Pregnant Women, Infants, and Young Children
Infants, young children, and pregnant and lactating women should not be treated with lindane. They may be treated with permethrin or crotamiton regimens.
# HIV Infection
Persons with HIV infection and uncomplicated scabies should receive the same treatment as persons without HIV infection. Persons with HIV infection and others who are immunosuppressed are at increased risk for Norwegian scabies, a disseminated dermatologic infection. Such patients should be managed in consultation with an expert.
# SEXUAL ASSAULT AND STDs
# Adults and Adolescents
Recommendations in this report are limited to the identification and treatment of sexually transmitted infections and conditions commonly identified in the management of such infections. The documentation of findings, the collection of specimens for forensic purposes, and the management of potential pregnancy or physical and psychological trauma are beyond the scope of these recommendations.
Among sexually active adults, the identification of sexually transmitted infections following assault is usually more important for the psychological and medical management of the patient than for legal purposes, if the infection could have been acquired before the assault. Trichomoniasis, chlamydia, gonorrhea, and BV appear to be the infections most commonly diagnosed among women following sexual assault. Because the prevalence of these conditions is substantial among sexually active women, their presence after the assault does not necessarily signify acquisition during the assault. Chlamydial and gonococcal infections among females are of special concern because of the possibility of ascending infection.
# Evaluation for Sexually Transmitted Infections
# Initial Examination
An initial examination should include the following procedures:
- Cultures for N. gonorrhoeae and C. trachomatis from specimens collected from any sites of penetration or attempted penetration. If chlamydial culture is not available, nonculture tests for chlamydia are an acceptable substitute, although false-negative test results are more common with nonculture tests and false-positive test results may occur.
If a nonculture test is used, a positive test result should be verified with a second test based on a different diagnostic principle or with a blocking antibody or competitive probe procedure.
- Wet mount and culture of a vaginal swab specimen for T. vaginalis infection. If vaginal discharge or malodor is evident, the wet mount should also be examined for evidence of BV and yeast infection.
- Collection of a serum sample to be preserved for subsequent analysis if follow-up serologic tests are positive (see Follow-Up Examination 12 Weeks After Assault).
# Risk for Acquiring HIV Infection
Although HIV-antibody seroconversion has been reported among persons whose only known risk factor was sexual assault or sexual abuse, the risk for acquiring HIV infection through sexual assault is minimal in most instances. Although the overall rate of transmission of HIV from an HIV-infected person during a single act of heterosexual intercourse is thought to be low (<1%), this risk depends on many factors. Prophylactic treatment for HIV is not known to be effective and is not generally recommended in this situation. However, all persons should be offered HIV counseling and testing after the assault.
Raising the issue of the potential for HIV infection during the initial medical evaluation may add to the acute psychological stress the patient may be experiencing because of the assault. An alternative is to address the issue at the 2-week follow-up appointment, when the patient may be better able to receive this information and give informed consent for HIV testing. All persons electing to be tested for HIV should receive pretest and posttest counseling.
# Sexual Assault or Abuse of Children
Recommendations in this report are limited to the identification and treatment of sexually transmitted infections. Management of the psychosocial and legal aspects of the sexual assault or abuse of children is important, but is beyond the scope of these recommendations.
The identification of sexually transmissible agents among children beyond the neonatal period suggests sexual abuse. However, there are exceptions; for example, rectal or genital infection with C. trachomatis among young children may be the result of perinatally acquired infection and may persist for as long as 3 years. In addition, BV and genital mycoplasmas have been identified among children who have been abused and among those who have not been abused. A finding of genital warts, although suggestive of assault, is not specific for sexual abuse without other evidence. When the only evidence of sexual abuse is the isolation of an organism or the detection of antibodies to a sexually transmissible agent, findings should be confirmed and the implications carefully considered.
# Evaluation for Sexually Transmitted Infections
Examinations of children for sexual assault or abuse should be conducted so as to minimize trauma to the child. The decision to evaluate the child for STDs must be made on an individual basis. Situations involving a high risk for STDs and a strong indication for testing include the following:
- A suspected offender is known to have an STD or to be at high risk for STDs (e.g., multiple partners or past history of STD),
- The child has symptoms or signs of an STD,
- There is a high STD prevalence in the community.
Only standard culture systems for the isolation of C. trachomatis should be used.
The isolation of C. trachomatis should be confirmed by microscopic identification of inclusions by staining with fluorescein-conjugated monoclonal antibody specific for C. trachomatis. Isolates should be preserved. Nonculture tests for chlamydia are not sufficiently specific for use in circumstances involving possible child abuse or assault.
# Examination 12 Weeks After Assault
An examination approximately 12 weeks after the last suspected sexual exposure is recommended to allow time for antibodies to infectious agents to develop. Serologic tests for the agents listed below should be considered:
- T. pallidum,
- HIV,
- HBV.
The prevalence of these infections varies greatly among communities and depends upon whether risk factors are known to be present in the abuser or assailant. Also, results of HBV tests must be interpreted carefully, because HBV may be transmitted by nonsexual modes as well as sexually. The choice of tests must be made on a case-by-case basis.
# Presumptive Treatment
Few data are available upon which to establish the risk of a child's acquiring a sexually transmitted infection as a result of sexual abuse. The risk is believed to be low in most circumstances, although documentation to support this position is inadequate. Presumptive treatment for children who have been sexually assaulted or abused is not widely recommended because girls appear to be at lower risk for ascending infection than adolescent or adult women, and regular follow-up can usually be assured. However, some children or their parents or guardians may be very concerned about the possibility of contracting an STD, even if the risk is perceived by the health-care practitioner to be low. Addressing patient concerns may be an appropriate indication for presumptive treatment in some settings.
These guidelines for the treatment of patients with sexually transmitted diseases (STDs) were developed by staff members of CDC after consultation with a group of invited experts who met in Atlanta on January 19-21, 1993. Included are new recommendations for single-dose oral therapy for gonococcal infections, chlamydial infections, and chancroid; new regimens for the treatment of bacterial vaginosis (BV) and outpatient management of pelvic inflammatory disease (PID); a new patient-applied medication for treatment of genital warts; and a revised approach to the management of victims of sexual assault. This report includes new sections on subclinical human papillomavirus (HPV) infections and cervical cancer screening for women who attend STD clinics or who have a history of STDs. These recommendations also include expanded sections on the management of patients with asymptomatic human immunodeficiency virus (HIV) infection; vulvovaginal candidiasis (VVC); STDs among patients coinfected with HIV; and STDs among infants, children, and pregnant women.
# INTRODUCTION
Physicians and other health-care providers have a critical role in the effort to prevent and treat sexually transmitted diseases (STDs). These recommendations for the treatment of STDs are intended to assist with that effort. They were developed by CDC staff members in consultation with a group of invited experts.*
This report was produced through a multi-stage process. Beginning in the spring of 1992, CDC personnel systematically reviewed the literature on each of the major STDs, focusing on data and reports that have become available since the 1989 STD Treatment Guidelines were published. Background papers were written, and tables of evidence were constructed summarizing the type of study (e.g., randomized controlled trial, case series), the study population and setting, the treatments or other interventions, the outcome measures assessed, the reported findings, and the weaknesses and biases in study design and analysis. For these reviews and tables, published abstracts and peer-reviewed journal articles were considered. CDC personnel then developed a draft document based on those reviews.
In January 1993, the invited consultants assembled in Atlanta for a 3-day meeting. CDC personnel presented the key questions on STD treatment suggested by their literature reviews and presented the data available to answer those questions. Where relevant, the questions focused on four principal outcomes of STD therapy: a) microbiologic cure, b) alleviation of signs and symptoms, c) prevention of sequelae, and d) prevention of transmission. The consultants then assessed whether the questions identified were the appropriate ones, ranked them in order of priority, and attempted to arrive at answers using the available evidence. In addition, the consultants evaluated the quality of evidence supporting the answers on the basis of the number and types of studies and the quality of those studies.
In several areas, the process diverged from that described above. The section on STD/HIV prevention guidelines was reviewed for comment by experts who had not been present at the January meeting, as well as by additional experts on STD/HIV prevention at CDC. The recommendations for STD screening during pregnancy were developed after CDC staff reviewed the published recommendations of other expert groups that were convened by CDC and other organizations.
The sections on HIV infection and early intervention and on hepatitis B virus (HBV) also are principally a compilation of recommendations developed by other experts and are provided in this report for the convenience of those who use this document.
Throughout this document, the evidence used as the basis for specific recommendations is briefly discussed. More comprehensive, annotated discussions of such evidence will appear in background papers that will be submitted for publication in 1994.
These recommendations were developed in consultation with experts whose experience is primarily with the treatment of patients in public STD clinics. The recommendations also should be applicable to other patient-care settings, including family planning clinics, private doctors' offices, and other primary-care facilities. When using these guidelines, consideration should be given to the disease prevalence and to other characteristics of the practice setting. These recommendations should not be construed as standards or as inflexible rules, but as a source of clinical guidance within the United States.
These recommendations focus on the treatment and counseling of individual patients and do not address other community services and interventions that also play important roles in STD/HIV prevention. Clinical and laboratory diagnoses are described when such information is related to therapy. For a more comprehensive discussion of diagnosis, refer to CDC's STD Clinical Practice Guidelines, 1991 (1).
# STD/HIV PREVENTION GUIDELINES
Prevention and control of STDs is based on four major concepts: first, education of those at risk on the means for reducing the risk for transmission; second, detection of asymptomatically infected individuals and of persons who are symptomatic but unlikely to seek diagnostic and treatment services; third, effective diagnosis and treatment of those who are infected; and fourth, evaluation, treatment, and counseling of the sex partners of persons who have an STD. Although this document deals largely with secondary prevention, namely the clinical aspects of STD control, primary prevention of STDs is based on changing the sexual behaviors that place patients at risk. Physicians and other health-care providers have an important role in the prevention of STDs. In addition to interrupting transmission by treating persons who have bacterial and parasitic STDs, clinicians have the opportunity to provide patient education and counseling and to participate in identifying and treating infected sex partners.
# Prevention Methods
# Condoms
When used consistently and correctly, condoms are very effective in preventing a variety of STDs, including HIV infection. Multiple cohort studies, including those of serodiscordant couples, have demonstrated a strong protective effect of condom use against HIV infection. Condoms are regulated as medical devices and are subject to random sampling and testing by the Food and Drug Administration (FDA). Each latex condom manufactured in the United States is tested electronically for holes before packaging. Condom breakage rates during use are low in the United States (≤2 per 100 condoms tested). Condom failure usually results from inconsistent or incorrect use rather than condom breakage.
Patients should be advised that condoms must be used consistently and correctly to be effective in preventing STDs. Patients should also be instructed in the correct use of condoms. The following recommendations ensure the proper use of condoms:
• Use a new condom with each act of intercourse.
• Carefully handle the condom to avoid damaging it with fingernails, teeth, or other sharp objects.
• Put the condom on after the penis is erect and before any genital contact with the partner.
• Ensure that no air is trapped in the tip of the condom.
• Ensure that there is adequate lubrication during intercourse, possibly requiring the use of exogenous lubricants.
• Use only water-based lubricants (e.g., K-Y Jelly™ or glycerine) with latex condoms (oil-based lubricants [e.g., petroleum jelly, shortening, mineral oil, massage oils, body lotions, or cooking oil] can weaken latex and should never be used).
• Hold the condom firmly against the base of the penis during withdrawal, and withdraw while the penis is still erect to prevent slippage.
# Condoms and Spermicides
The effectiveness of spermicides in preventing HIV transmission is unknown. No data exist to indicate that condoms lubricated with spermicides are more effective than other lubricated condoms in protecting against the transmission of HIV infection and other STDs. Therefore, latex condoms with or without spermicides are recommended.
# Female Condoms
Laboratory studies indicate that the female condom (Reality™), a lubricated polyurethane sheath with a ring on each end that is inserted into the vagina, is an effective mechanical barrier to viruses, including HIV. Aside from a small study of trichomoniasis, no clinical studies have been completed to evaluate protection from HIV infection or other STDs. However, an evaluation of the female condom's effectiveness in pregnancy prevention was conducted during a 6-month period for 147 women in the United States. The estimated 12-month failure rate for pregnancy prevention among the 147 women was 26%.
# Vaginal Spermicides, Sponges, Diaphragms
As demonstrated in several cohort studies, vaginal spermicides (i.e., film, gel, suppositories; contraceptive foam has not been studied) used alone without condoms reduce the risk for cervical gonorrhea and chlamydia, but protection against HIV infection has not been established in human studies. The vaginal contraceptive sponge protects against cervical gonorrhea and chlamydia but increases the risk for candidiasis, as evidenced by cohort studies. Diaphragm use has been demonstrated to protect against cervical gonorrhea, chlamydia, and trichomoniasis, but only in case-control and cross-sectional studies; no cohort studies have been performed. Gonorrhea and chlamydia among women usually involve the cervix as a portal of entry, whereas other STD pathogens (including HIV) may infect women through the vagina or vulva, as well as the cervix. Protection of women against HIV infection should not be assumed from the use of vaginal spermicides, vaginal sponges, or diaphragms. The role of spermicides, sponges, and diaphragms in preventing STDs among men has not been studied.
# Nonbarrier Contraception, Surgical Sterilization, Hysterectomy
Women who are not at risk for pregnancy may incorrectly perceive themselves to be at no risk for STDs, including HIV infection. Nonbarrier contraceptive methods offer no protection against HIV or other STDs. Women using hormonal contraception (oral contraceptives, Norplant™, Depo-Provera™), women who have been surgically sterilized, and women who have had hysterectomies should be counseled regarding the use of condoms and the risk for STDs, including HIV infection.
# Prevention Messages
Preventing the spread of STDs requires that persons at risk for transmitting or acquiring infections change their behaviors.
When risks have been identified, the health-care provider has an opportunity to deliver prevention messages. Counseling skills (i.e., respect, compassion, and a nonjudgmental attitude) are essential to the effective delivery of prevention messages. Techniques that can be effective in developing rapport with the patient include using open-ended questions, using language that the patient understands, and reassuring the patient that treatment will be provided regardless of considerations such as ability to pay, citizenship or immigration status, language spoken, or lifestyle.
Prevention messages should be tailored to the patient, with consideration given to his or her specific risks. Messages should include a description of measures, such as the following, that the person can take to avoid acquiring or transmitting STDs:
• The most effective way to prevent sexual transmission of HIV infection and other STDs is to avoid sexual intercourse with an infected partner.
• If a person chooses to have sexual intercourse with a partner whose infection status is unknown or who is infected with HIV or other STDs, men should use a new latex condom with each act of intercourse.
• When a male condom cannot be used, couples should consider using a female condom.
# Injection Drug Users
Prevention messages appropriate for injection drug users are the following:
• Enroll or continue in a drug treatment program.
• Do not, under any circumstances, use injection equipment (needles, syringes) that has been used by another person.
• Persons who continue to use injection equipment that has been used by other persons should first clean the equipment with bleach and water. (Disinfecting with bleach does not sterilize the equipment and does not guarantee that HIV is inactivated. However, thoroughly and consistently cleaning injection equipment with bleach should reduce the rate of HIV transmission when equipment is shared.)
# HIV Prevention Counseling
Knowledge of one's HIV status and appropriate counseling are thought to play an important role in initiating behavior change. Counseling associated with HIV testing has two main components: pretest and posttest counseling. During pretest counseling, the clinician should conduct a personalized risk assessment, explain the meaning of positive and negative test results, ask for informed consent for the HIV test, and help the person develop a realistic, personalized risk-reduction plan. During posttest counseling, the clinician should inform the patient of the results, review the meaning of the results, and reinforce prevention messages. If the patient is HIV positive, posttest counseling should include referral for follow-up medical services and, if needed, for social and psychological services. HIV-seronegative persons at continuing risk for HIV infection also may benefit from referral for additional counseling and prevention services.
HIV counseling is considered to be an important HIV-prevention strategy, although its efficacy in reducing risk behavior is still under evaluation. By ensuring that counseling is empathic and "client-centered," clinicians will be able to develop a realistic appraisal of the person's risk and help him or her develop a specific and realistic HIV-prevention plan (2).
# Partner Notification and Management of Sex Partners
Patients with STDs should ensure that their sex partners, including those without symptoms, are referred for evaluation. Providers should be prepared to assist in that effort.
In most circumstances, partners of patients with STDs should be examined. When a diagnosis of a treatable STD is considered likely, appropriate antibiotics should be administered even though there may be no clinical signs of infection and before laboratory test results are available. In most states, the local or state health department can assist in notifying the partners of patients with selected STDs, especially HIV, syphilis, gonorrhea, and chlamydia.
Breaking the chain of transmission is crucial to STD control. For treatable STDs, further transmission and reinfection can be prevented by referral of sex partners for diagnosis, treatment, and counseling. The following two strategies are used for partner notification: a) patient referral (index patients notify their partners) and b) provider referral (partners named by infected patients are notified and counseled by health department staff). When a physician refers an infected person to a local or state health department, trained professionals may interview the patient to obtain the names of and locating information about all of his or her sex partners. Every health department protects the privacy of patients in partner notification activities. Because of this advantage of confidentiality, many patients prefer that public health officials notify partners.
If a patient with HIV infection refuses to notify partners while continuing to place them at risk, the physician has an ethical and legal responsibility to inform those persons that they are at risk for HIV infection. This duty to warn may be most applicable to primary care physicians, who often have knowledge of a patient's social and familial relationships. The decision to invoke the duty-to-warn measure should be a last resort, applicable only in cases in which all efforts to persuade the patient to disclose positive test results to those who need to know have failed. Although compelling ethical, theoretical, and public health reasons exist to undertake partner notification, the efficacy of partner notification as an STD prevention strategy is under evaluation, and its effectiveness may be disease-specific. Clinical guidelines for sex partner management and recommendations for partner notification for specific STDs are included for each STD addressed in this report.
# Reporting and Confidentiality
The accurate identification and timely reporting of STDs form an integral part of successful disease control. Reporting assists local health authorities in identifying sex partners who may be infected. Reporting also is important for assessing morbidity trends. STD/HIV and acquired immunodeficiency syndrome (AIDS) cases should be reported in accordance with local statutory requirements and in a timely manner. Syphilis, gonorrhea, and AIDS are reportable diseases in every state. The requirements for reporting other STDs and asymptomatic HIV infection differ from state to state, and clinicians should be familiar with local STD reporting requirements. Reporting may be provider-based and/or laboratory-based. Clinicians who are unsure of local reporting requirements should seek advice from local health departments or state STD programs.
STD and HIV reports are held in strictest confidence and, in many jurisdictions, are protected by statute from subpoena. Furthermore, before any follow-up of a positive STD test is conducted by program representatives, these persons consult with the patient's health-care provider to verify the diagnosis and treatment.
# SPECIAL POPULATIONS
# Pregnant Women
Intrauterine or perinatally transmitted STDs can have fatal or severely debilitating effects on a fetus. Pregnant women and their sex partners should be questioned about STDs and should be counseled about the possibility of neonatal infections.
# Recommended Screening Tests
The following screening tests are recommended for pregnant women:
• A serologic test for syphilis
All women should be screened serologically for syphilis during the early stages of pregnancy. In populations in which utilization of prenatal care is not optimal, rapid plasma reagin (RPR)-card test screening and treatment, if that test is reactive, should be performed at the time a pregnancy is diagnosed. For patients at high risk, screening should be repeated in the third trimester and again at delivery. (Some states mandate screening all women at delivery.) No infant should be discharged from the hospital without the syphilis serologic status of its mother having been determined at least once during pregnancy and, preferably, again at delivery. Any woman who delivers a stillborn infant after 20 weeks' gestation should be tested for syphilis.
• A serologic test for hepatitis B surface antigen (HBsAg)
• A test for Neisseria gonorrhoeae
• A test for Chlamydia trachomatis
Pregnant women at increased risk (i.e., those <25 years of age or those with a new partner or more than one partner) should be tested and treated, if necessary, during the third trimester to prevent maternal postnatal complications and chlamydial infection among infants. Screening during the first trimester might permit prevention of adverse effects of chlamydia during pregnancy. However, the evidence for adverse effects during pregnancy is minimal. If screening is performed only during the first trimester, a longer period exists for acquiring infection before delivery.
• A test for HIV infection
Patients with risk factors for HIV or with a high-risk sex partner should be tested for HIV infection. Some authorities recommend offering HIV tests to all pregnant women, particularly in areas of high HIV seroprevalence. Appropriate counseling should be provided, and informed consent for HIV testing should be obtained.
# Other Issues
Other STD-related issues to be considered are as follows:
• Pregnant women with primary genital herpes, HBV, primary cytomegalovirus (CMV) infection, or group B streptococcal infection, and women who have syphilis and who are allergic to penicillin, may need to be referred to an expert for management.
• In the absence of lesions during the third trimester, routine serial cultures for herpes simplex virus (HSV) are not indicated for women with a history of recurrent genital herpes. However, obtaining cultures from such women at the time of delivery may be useful in guiding neonatal management. "Prophylactic" caesarean section is not indicated for women who do not have active genital lesions at the time of delivery.
• The presence of genital warts is not considered an indication for caesarean section.
For a more detailed discussion of these issues, as well as of infections not transmitted sexually, refer to Guidelines for Perinatal Care (3).
NOTE: The sources for these guidelines for the screening of pregnant women include Guide to Clinical Preventive Services (4), Guidelines for Perinatal Care (3), and Recommendations for the Prevention and Management of Chlamydia trachomatis Infections, 1993 (5). These sources are not entirely consistent in their recommendations.
The Guide to Clinical Preventive Services recommends routine testing for gonorrhea at the first prenatal visit, with repeat testing for those at increased risk, and selective screening for chlamydia at the first prenatal visit. The Guidelines for Perinatal Care does not specifically recommend screening for either gonorrhea or chlamydia, but recommends screening for STDs in the third trimester for women at risk. The Recommendations for the Prevention and Management of Chlamydia trachomatis Infections, 1993 recommend screening for chlamydia during the third trimester for all pregnant women <25 years of age or for any woman with a new sex partner or multiple partners.
Recommendations to screen pregnant women for STDs are based on disease severity and sequelae, prevalence in the population, costs, medical/legal considerations (including state laws), and other factors. The screening recommendations in this report are more comprehensive (i.e., if followed, more women will be screened for more STDs than would be screened by following other recommendations) and are compatible with other CDC guidelines. Physicians should select a screening strategy compatible with their practice population and setting.
# Adolescents
Health-care providers who care for patients with sexually transmitted infections should be aware of several issues that relate specifically to adolescents. The rates of many STDs are highest among adolescents; e.g., the rate of gonorrhea is highest among persons 15-19 years of age. Clinic-based studies have demonstrated that the prevalence of chlamydial infections, and possibly of HPV infections, also is highest among adolescents.
All adolescents in the United States can consent to the confidential diagnosis and treatment of STDs. Medical care for these conditions can be provided to adolescents without parental consent or knowledge. Furthermore, in many states adolescents can consent to HIV counseling and testing. The style and content of counseling and health education should be adapted for adolescents. Discussions should be appropriate for the patient's developmental level and should identify risky behaviors, such as sex and drug-use behaviors. Care and counseling should be direct and nonjudgmental.
# Children
Management of children with STDs requires close cooperation between the clinician, the laboratory, and child-protection authorities. Investigations, when indicated, should be initiated promptly. Some diseases, such as gonorrhea, syphilis, and chlamydia, if acquired after the neonatal period, are almost 100% indicative of sexual contact. For other diseases, such as HPV infection and vaginitis, the association with sexual contact is not as clear (see Sexual Assault and STDs).
# Persons with HIV Infection
The management of patients infected with HIV and of patients infected with both HIV and other STDs presents complex clinical and behavioral issues. For that reason, these issues are addressed throughout this report (see HIV Infection and Early Intervention and the specific disease sections). Because of its effects on the immune system, HIV infection may alter the natural histories of many STDs and the effect of antimicrobial therapy. Such effects are likely to occur as the degree of immunosuppression advances; frequent or severe episodes of some STDs, or failure to respond appropriately to therapy, should lead the health-care provider to consider HIV infection as a cause. Close clinical follow-up of patients infected with both HIV and STDs is imperative.
STD infection among patients with or without HIV is a sentinel event, often indicating unprotected sexual activity; further patient counseling is needed in such situations.

# HIV INFECTION AND EARLY INTERVENTION
Infection with HIV produces a spectrum of disease that progresses from no apparent illness to AIDS as a late manifestation. The pace of disease progression is variable. The median time between infection with HIV and the development of AIDS among adults is 10 years, with a range from a few months to ≥12 years. Most adults and adolescents infected with HIV remain symptom-free for long periods, but viral replication can be detected in asymptomatic persons and increases substantially as the immune system deteriorates. Most persons who are infected with HIV will eventually have symptoms related to the infection. In cohort studies of adults infected with HIV, symptoms developed in 70%-85% of infected adults, and AIDS developed in 55%-62%, within 12 years after infection. Additional cases are expected to occur among those who have remained AIDS-free for >12 years.
Greater awareness of risky behaviors by both patients and health-care providers has led to increased testing for HIV and earlier diagnosis of HIV infection, often before symptoms develop (although emotional or psychological problems may accompany early diagnosis). Such early identification of HIV infection is important for several reasons. Treatments are available to slow the decline of immune system function. Persons who are infected with HIV and who have altered immune function are at increased risk for infections such as tuberculosis (TB), bacterial pneumonia, and Pneumocystis carinii pneumonia (PCP), for which preventive measures are available. Because of its effect on the immune system, HIV affects the diagnosis, evaluation, treatment, and follow-up of many other diseases and may affect the efficacy of antimicrobial therapy for some STDs. Close clinical follow-up after treatment for STDs is imperative.
During early infection, persons with HIV and their families can be educated about the disease and become linked with a support network that addresses their needs, as well as with care systems effective in maintaining good health and delaying the onset of symptoms. Early diagnosis also offers the opportunity for counseling and for assistance in preventing the transmission of HIV infection to others.
For the purpose of these recommendations, early intervention for HIV is defined as care for persons infected with HIV who are without symptoms. However, recently detected HIV infection may not have been recently acquired; persons newly diagnosed with HIV may be at many different stages of the infection. Therefore, early intervention also involves assuming the responsibility for coordinating care and for arranging access to resources necessary to meet the medical, psychological, and social needs of persons with more advanced HIV infection.

# Diagnostic Testing for HIV-1 and HIV-2
HIV infection is most often diagnosed by using HIV-1 antibody tests. Antibody testing begins with a sensitive screening test such as the enzyme-linked immunosorbent assay (ELISA) or a rapid assay. If confirmed by Western blot or another supplemental test, a positive antibody test means that a person is infected with HIV and is capable of transmitting the virus to others. HIV antibody is detectable in ≥95% of patients within 6 months of infection.
Although a negative antibody test usually means a person is not infected, antibody tests cannot rule out infection that occurred <6 months before the test. Since there is transplacental passage of maternal HIV antibody, antibody tests for HIV are expected to be positive in the serum of both infected and uninfected infants born to a seropositive mother. Passively acquired HIV antibody falls to undetectable levels among most infants by 15 months of age. A definitive determination of HIV infection for an infant <15 months of age should be based either on the presence of antibody to HIV in conjunction with a compatible immunologic profile and clinical course or on laboratory evidence of HIV in blood or tissues by culture, nucleic acid, or antigen detection. Specific recommendations for the diagnostic testing of HIV are listed below: • Informed consent must be obtained before an HIV test is performed. Some states require written consent. See HIV Prevention Counseling for a discussion of pretest and posttest counseling. • Positive screening tests for HIV antibody must be confirmed by a more specific confirmatory test (either the Western blot assay or indirect immunofluorescence assay [IFA]) before being considered definitive for confirming HIV infection. • Persons with positive HIV tests must receive medical and psychosocial evaluation and monitoring services, or be referred for these services. The prevalence of HIV-2 in the United States is extremely low, and CDC does not recommend routine testing for HIV-2 in settings other than blood centers, unless demographic or behavioral information suggests that HIV-2 infection might be present. Those at risk for HIV-2 infection include persons from a country in which HIV-2 is endemic or the sex partners of such persons. (As of July 1992, HIV-2 was endemic in parts of West Africa and an increased prevalence of HIV-2 had been reported in Angola, France, Mozambique, and Portugal.) Additionally, testing for HIV-2 should be conducted when there is clinical evidence or suspicion of HIV disease in the absence of a positive test for antibodies to HIV-1 (6 ). # Counseling for Patients with HIV Infection Behavioral and psychosocial services are an integral part of HIV early intervention. Patients usually experience emotional distress when first being informed of a positive HIV test result, and also later when notified of changes in immunologic markers, when antiviral or prophylactic therapy is initiated, and when symptoms develop. Patients face several major adaptive challenges: a) accepting the possibility of a curtailed life span, b) coping with others' reactions to a stigmatizing illness, c) developing strategies for maintaining physical and emotional health, and d) initiating changes in behavior to prevent HIV transmission. Many patients also require assistance with making reproductive choices, gaining access to health services and health insurance, and confronting employment discrimination. Interrupting HIV transmission depends upon changes in behavior by those persons at risk for transmitting or acquiring infection. Though some viral culture studies suggest that antiviral treatment reduces viral burden, clinical data are insufficient to determine whether therapy might reduce the probability of transmission. Infected persons, as potential sources of new infections, must receive extra attention and support to help break chains of transmission and to prevent infection of others. 
Targeting behavior change programs toward HIV-infected persons and their sex partners, or those with whom they share needles, is an important adjunct to current AIDS prevention efforts. Specific recommendations for counseling patients with HIV infection are listed below: • Persons who test positive for HIV antibody should be counseled by a person who is able to discuss the medical, psychological, and social implications of HIV infection. • Appropriate social support and psychological resources should be available, either on site or through referral, to assist patients in coping with emotional distress. • Persons who continue to be at risk for transmitting HIV should receive assistance in changing or avoiding behaviors that can transmit infection to others. # Initial Evaluation and Planning for Care Practice settings for offering early HIV care are variable, depending upon local resources and needs. Primary-care providers and outpatient facilities must ensure that appropriate resources are available for each patient and must avoid fragmentation of care; it is preferable for persons with HIV infection to receive care from a single source that is able to provide comprehensive care for all stages of HIV infection. But the limited availability of such resources often results in the need to coordinate care among outpatient, inpatient, and specialist providers in different locations. Because of the progressive nature of HIV and the increased risk for bacterial infections, including TB-even before HIV infection becomes advanced-it is essential to establish specific provisions for handling the medical, psychological, and social problems likely to arise at any stage of infection. An important component of early intervention is effective linkage with referral settings where off-hours care and specialty services are available. Development of an appropriate plan for care involves the following: • Identification of patients in need of immediate medical care (e.g., patients with symptomatic HIV infection or emotional crisis) and of those in need of antiretroviral therapy or prophylaxis for opportunistic infections (e.g., PCP). • Evaluation for the presence of diseases associated with HIV, such as TB and STDs. • Administration of recommended vaccinations. • Case management or referral for case management. • Counseling (see Counseling for Patients with HIV Infection). The CD4+ T-lymphocyte count is the best laboratory indicator of clinical progression, and comprehensive management strategies for HIV infection are typically stratified by CD4 count. Either the absolute number or the percentage of CD4+ T cells may be determined. CD4+ percentage is more consistent than absolute CD4+ count with successive measurements for the same person and less variable with delays in specimen processing. However, most clinical trials have used absolute CD4+ count to evaluate the need for and timing of therapeutic interventions. Patients with CD4+ counts >500/µL usually do not demonstrate evidence of clinical immunosuppression. Patients with 200-500 CD4+ cells/µL are more likely to develop HIV-related symptoms and to require medical intervention. Patients with CD4+ counts <200 cells/µL, and those with higher CD4+ counts who develop thrush or unexplained fever (temperature >37.8 C for ≥2 weeks) are at increased risk for developing complicated HIV disease. Such patients should be managed in a comprehensive treatment setting with access to specialty resources and hospitalization. 
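The stratification described above reduces to a simple threshold rule. The following is a minimal illustrative sketch in Python; the function name, labels, and parameters are hypothetical and are not part of these guidelines:

```python
# Illustrative sketch of the CD4+ strata described above; not a
# clinical decision tool. All names here are hypothetical.

def cd4_stratum(cd4_cells_per_ul: int,
                has_thrush: bool = False,
                weeks_unexplained_fever: float = 0) -> str:
    """Map an absolute CD4+ T-cell count (cells/uL) to the management
    strata discussed in the text."""
    # <200 cells/uL, or thrush / unexplained fever for >=2 weeks at a
    # higher count, marks increased risk for complicated HIV disease
    # and the need for a comprehensive treatment setting.
    if cd4_cells_per_ul < 200 or has_thrush or weeks_unexplained_fever >= 2:
        return "increased risk for complicated HIV disease"
    if cd4_cells_per_ul <= 500:
        return "200-500 cells/uL: HIV-related symptoms more likely"
    return ">500 cells/uL: clinical immunosuppression unusual"

print(cd4_stratum(650))                    # >500 stratum
print(cd4_stratum(350))                    # 200-500 stratum
print(cd4_stratum(450, has_thrush=True))   # increased risk despite count
```

Note that the sketch uses the absolute count, as most clinical trials have; as stated above, the CD4+ percentage is less variable across successive measurements and may be preferred for serial comparisons.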
Providers unable to offer therapeutic management of HIV may use the initial evaluation to identify the need for prompt referral to appropriate resources. The initial evaluation of HIV-positive patients should include the following essential components: • A detailed history, including sexual history, substance abuse history, and a review of systems for specific HIV-related symptoms. • A physical examination; for females, this examination should include a gynecologic examination. • For females, testing for N. gonorrhoeae, C. trachomatis, a Papanicolaou (Pap) smear, and wet mount examination of vaginal secretions. • A syphilis serology. • A CD4+ T-lymphocyte analysis. • Complete blood and platelet counts. • A purified protein derivative (PPD) tuberculin skin test by the Mantoux method and anergy testing with two delayed-type hypersensitivity (DTH) antigens (Candida, mumps, or tetanus toxoid) administered by the Mantoux method or a multipuncture device. • A thorough psychosocial evaluation, including ascertainment of behavioral factors indicating risk for transmitting HIV and elucidation of information about any partners who should be notified about possible exposure to HIV. # Preventive Therapy for TB Studies conducted among persons with and without HIV infection have suggested that HIV infection can depress tuberculin reactions before signs and symptoms of HIV infection develop. Cutaneous anergy (defined as skin test response of ≤3 mm to all DTH antigens) may be present among ≥10% of asymptomatic persons with CD4+ counts >500 cells/µL, and among >60% of persons with CD4+ counts <200. HIV-positive persons with a PPD reaction ≥5 mm induration are considered to be infected with M. tuberculosis and should be evaluated for preventive treatment with isoniazid after active TB has been excluded. Anergic persons whose risk for tuberculous infection is estimated to be ≥10%, based on available prevalence data, also should be considered for preventive therapy. For further details regarding evaluation of patients for TB, refer to Purified Protein Derivative (PPD-tuberculin anergy) and HIV Infection: Guidelines for Anergy Testing and Management of Anergic Persons at Risk of Tuberculosis (7 ). The preliminary results from a randomized clinical trial suggest that treatment with isoniazid is effective for preventing active TB among HIV-infected persons. The usual regimen is isoniazid 10 mg/kg daily, up to a maximum adult dose of 300 mg daily. Twelve months of isoniazid preventive treatment is recommended for persons with HIV infection. For further details regarding preventive therapy for TB, refer to The Use of Preventive Therapy for Tuberculous Infection in the United States (8 ) and Management of Persons Exposed to Multidrug-Resistant Tuberculosis (9 ). # Recommended Immunizations for Adults and Adolescents Specific recommendations for immunization of persons infected with HIV are listed below: • Pneumococcal vaccination and an annual influenza vaccination should be administered. • Persons at increased risk for acquiring HBV and who lack evidence of immunity may receive a three-dose schedule of hepatitis B vaccine, with postvaccination serologic testing between 1 and 6 months after the vaccination series. Recommendations for vaccinating HIV-infected persons are based on expert opinions and consensus of the Advisory Committee on Immunization Practices (ACIP). 
No clinical data exist to document the efficacy of inactivated vaccines among HIV-infected persons, and pneumococcal vaccine failures have been reported. However, the use of inactivated vaccines may be beneficial for persons with HIV infection, and there is no evidence that they are harmful. Immunogenicity studies have suggested a generally poorer response among HIV-infected persons, with higher response rates among asymptomatic persons than among those with advanced HIV disease.
Current evidence indicates that HIV infection does not increase susceptibility to HBV, nor does it increase the severity of clinical disease. The presence of HIV infection is not an indication for hepatitis B vaccine, but HIV-infected persons are at increased risk for becoming chronic carriers after hepatitis B infection. Because the routes of transmission of HBV parallel those of HIV, efforts to modify risky behaviors must be the primary focus of prevention. However, vaccine should be administered to HIV-infected patients who continue to have a high likelihood for HBV exposure.
Persons with HIV infection also are at increased risk for invasive Haemophilus influenzae type b (Hib) disease and for complications from measles. Immunization against Hib and measles should be considered for asymptomatic HIV-infected persons who may have an increased risk for exposure to these infections. For further details on immunization of HIV-infected patients, refer to Recommendations of the Advisory Committee on Immunization Practices (ACIP): Use of Vaccines and Immune Globulins in Persons with Altered Immunocompetence (10 ).

# Follow-Up Evaluation
There have been no controlled studies to serve as the basis for recommending specific follow-up tests or follow-up intervals. The suggested frequency of monitoring is based on the slow decrease in CD4+ counts observed among patients in cohort studies, but it should be modified depending on the patient's psychological status, the presence of symptoms, or both. Repeat evaluation for STDs also is important in the follow-up of HIV-infected persons and should be performed for all persons who continue to be sexually active. Follow-up evaluation should be performed every 6 months and should include the following:
• An interim history and physical examination;
• A complete blood count, platelet count, and lymphocyte subset analysis; and
• Re-evaluation of psychosocial status and behavioral factors indicating risk for transmitting HIV.
To follow CD4+ measurements, providers should use the same laboratory and, optimally, obtain each specimen at the same time of day. When unexpected or discrepant results are obtained, or when major treatment decisions are to be made, health-care providers should consider repeating the CD4+ measurement after at least 1 week. More frequent laboratory monitoring, every 3-4 months, is indicated if CD4+ results suggest that a patient is approaching a threshold at which a clinical intervention may be indicated.

# Continuing Management of Patients with Early HIV Infection
Providing comprehensive, continuing management of patients with early HIV infection can include additional diagnostic studies (e.g., chest x-ray, serum chemistry, and antibody testing for toxoplasmosis and hepatitis B), antiretroviral therapy and monitoring, and PCP prophylaxis. Treatment of HIV infection and prophylaxis against opportunistic infections continue to evolve rapidly; this treatment should be undertaken in consultation with physicians who are familiar with the care of persons with HIV infection.
The complete therapeutic management of HIV infection is beyond the scope of this document.

# Antiretroviral Therapy
The optimal time for initiating antiretroviral therapy has not yet been established. Zidovudine (ZDV) at a dose of 500 mg/day (100 mg orally every 4 hours while the patient is awake) has been recommended for symptomatic persons with <500 CD4+ T-cells/µL and for asymptomatic persons with <300 CD4+ T-cells/µL. This recommendation is based on results of short-term follow-up in three randomized clinical trials demonstrating that the initiation of ZDV therapy delays progression to advanced disease. Evidence for improved long-term survival after early treatment is less conclusive. The effects of ZDV may be transient, possibly because of the development of viral resistance or other factors. Sequential or combination therapy with other antiretroviral agents could be more efficacious. Whether other daily dosages, dose schedules, or dosages based on body weight would result in greater therapeutic benefit or fewer side effects is not known. Providers should work with patients to design a treatment strategy that is both clinically sound and appropriate for each individual patient's needs, priorities, and circumstances.
An initial dosage of 600 mg/day in divided doses has been recommended by a panel of experts convened by the National Institute of Allergy and Infectious Diseases (NIAID). Preliminary data suggest ZDV can yield therapeutic results when the dosing interval is increased to 8 hours and at doses of 200 mg three times daily. Antiretroviral efficacy is diminished at doses <300 mg/day, and it has been suggested that higher oral doses may be required to achieve effective levels in the central nervous system.
There are no data to support the use of antiretroviral drugs other than ZDV as initial therapy. Didanosine (DDI) is recommended for persons who are intolerant of ZDV or who experience progression of symptoms despite ZDV. Two 100 mg tablets of DDI are recommended every 12 hours for persons who weigh ≥60 kg; the recommended dose for adults <60 kg is one 100 mg tablet and one 25 mg tablet every 12 hours. Two tablets are recommended at each dose so that adequate buffering is provided to prevent gastric acid degradation of the drug.
Benefits have been reported from other antiretroviral regimens, including treatment with combinations of ZDV, DDC (dideoxycytidine [zalcitabine]), and DDI, or switching therapy to DDI after long-term therapy with ZDV. Experience with these alternatives is insufficient to serve as a basis for recommendations. Providers managing patients who are taking antiretroviral therapy should be familiar with evidence being developed in several clinical trials. Current information is available from the NIAID AIDS Clinical Trials Information Service, 1-800-TRIALS-A.
Side effects that are serious (e.g., anemia, cytopenia, pancreatitis, and peripheral neuropathy) and uncomfortable (e.g., nausea, vomiting, headaches, and insomnia) are common during antiretroviral therapy. Although hematologic toxicity from ZDV is less common with the lower doses recommended, approximately 2% of patients who receive 500 mg/day manifest severe anemia by the 18th month of treatment, most within the 3rd through 8th months. Careful hematologic monitoring of patients receiving ZDV is recommended.
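The ZDV schedule and the weight-based DDI rule above reduce to simple arithmetic, sketched below for illustration only; the helper names are hypothetical, and this is not a prescribing tool:

```python
# Minimal sketch of the dosing arithmetic described above; all names
# are hypothetical and this is not a prescribing tool.

def zdv_daily_total(mg_per_dose: int = 100, doses_while_awake: int = 5) -> int:
    """The recommended 500 mg/day corresponds to 100 mg orally every
    4 hours while the patient is awake (about five doses)."""
    return mg_per_dose * doses_while_awake

def ddi_tablets_per_dose(weight_kg: float) -> tuple:
    """DDI tablets per 12-hour dose: two 100 mg tablets for persons
    >=60 kg; one 100 mg tablet plus one 25 mg tablet for adults <60 kg.
    Two tablets are always given so that adequate buffering prevents
    gastric acid degradation of the drug."""
    return (100, 100) if weight_kg >= 60 else (100, 25)

assert zdv_daily_total() == 500
assert sum(ddi_tablets_per_dose(70)) == 200
assert sum(ddi_tablets_per_dose(55)) == 125
```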
# PCP Prophylaxis
Adults and adolescents who have <200 CD4+ T-cells/µL or constitutional symptoms (e.g., thrush or unexplained fever >100 F for ≥2 weeks), and any patient with a previous episode of PCP, should receive PCP prophylaxis. Prophylaxis should be continued for the lifetime of the patient. Based upon evidence from randomized controlled clinical trials, the Public Health Service Task Force on Antipneumocystis Prophylaxis has recommended the following regimens for PCP prophylaxis among adults and adolescents:
• Oral trimethoprim-sulfamethoxazole (TMP-SMX) at a dose of one double-strength tablet (800 mg SMX and 160 mg TMP) orally once a day.
• For patients unable to tolerate TMP-SMX: aerosol pentamidine administered by either the Respirgard II nebulizer (300 mg once a month) or the Fisoneb nebulizer (initial loading regimen of five 60 mg doses during a 2-week period, followed by a 60 mg dose every 2 weeks).
The efficacy of alternatives for patients unable to tolerate TMP-SMX, including dapsone 100 mg orally once a day and sulfa desensitization, has not been studied extensively. For further details on PCP prophylaxis, refer to Recommendations for Prophylaxis Against Pneumocystis carinii Pneumonia for Adults and Adolescents Infected with Human Immunodeficiency Virus (11 ).
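Restated as a decision rule, the eligibility criteria above might look as follows; this helper is hypothetical and illustrative only:

```python
# Hypothetical restatement of the PCP prophylaxis criteria above;
# names are illustrative and this is not a clinical tool.

def needs_pcp_prophylaxis(cd4_cells_per_ul: int,
                          has_thrush: bool,
                          weeks_unexplained_fever: float,
                          prior_pcp_episode: bool) -> bool:
    """True if any criterion in the text is met: CD4+ <200 cells/uL,
    constitutional symptoms (thrush, or unexplained fever >100 F for
    >=2 weeks), or any previous episode of PCP."""
    constitutional = has_thrush or weeks_unexplained_fever >= 2
    return cd4_cells_per_ul < 200 or constitutional or prior_pcp_episode

# Example: CD4+ of 350 cells/uL with 3 weeks of unexplained fever
# still qualifies, as does any prior episode regardless of count.
assert needs_pcp_prophylaxis(350, False, 3, False)
assert needs_pcp_prophylaxis(650, False, 0, True)
```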
# Management of Sex Partners
The rationale for implementing partner notification is that early diagnosis and treatment of HIV infection may reduce morbidity and offer the opportunity to encourage risk-reducing behaviors. Two complementary notification processes, patient referral and provider referral, can be used to identify partners. With patient referral, patients inform their own partners directly of their exposure to HIV infection. With provider referral, trained health department personnel locate partners on the basis of the names, descriptions, and addresses provided by the patient. During the notification process, the anonymity of patients is protected; their names are not revealed to sex or needle-sharing partners who are notified. Many state health departments provide assistance with provider referral partner notification upon request.
One randomized trial suggested that provider referral is more effective in notifying partners than patient referral. In that trial, 50% of partners in the provider referral group were notified, whereas only 7% of partners were notified by subjects in the patient referral group. However, few data demonstrate whether behavioral change takes place as a result of partner notification, and many patients are reluctant to disclose the names of partners because of concern about discrimination, disruption of relationships, and loss of confidentiality for the partners.
When referring to persons infected with HIV, the term "partner" includes not only sex partners but also injecting drug users who share needles or other injecting equipment. Partner notification is a means of identifying and concentrating risk-reduction efforts on persons at high risk for contracting or transmitting HIV infection. Partner notification for HIV infection must be confidential and should depend upon the voluntary cooperation of the patient. Specific recommendations for implementing partner notification procedures are listed below:
• Persons who are HIV-positive should be encouraged to notify their partners and to refer them for counseling and testing. Providers should assist in this process, if desired by the patient, either directly or through referral to health department partner notification programs.
• If patients are unwilling to notify their partners, or if it cannot be assured that their partners will seek counseling, physicians or health department personnel should use confidential procedures to assure that the partners are notified.

# Special Considerations
# Pregnancy
Women who are HIV-infected should be specifically informed about the risk for perinatal infection. Current evidence indicates that 15%-39% of infants born to HIV-infected mothers are infected with HIV, and the virus also can be transmitted from an infected mother by breastfeeding. Pregnancy among HIV-infected patients does not appear to increase maternal morbidity or mortality. Women should be counseled about their options regarding pregnancy. The objective of counseling is to provide HIV-infected women with current information for making reproductive decisions, analogous to the model used in genetic counseling. Contraceptive, prenatal, and abortion services should be available on site or by referral.
Minimal information is available on the use of ZDV or other antiretroviral drugs during pregnancy. Trials to evaluate the efficacy of ZDV in preventing perinatal transmission, and its safety during pregnancy, are being conducted. A case series of 43 pregnant women has been published; dosages of ZDV ranged from 300 to 1,200 mg/day. ZDV was well tolerated, and there were no malformations among the newborns in this series. Although this observation is encouraging, such a small series of case reports without adverse findings cannot be used to infer that ZDV is not teratogenic. Burroughs Wellcome Co. and Hoffmann-LaRoche, Inc., in cooperation with CDC, maintain a registry to assess the effects of the use of ZDV and DDC during pregnancy. Women who receive either ZDV or DDC during pregnancy should be reported to this registry (1-800-722-9292, ext. 58465).

# HIV Infection Among Infants and Children
Infants and young children with HIV infection differ from adults and adolescents with respect to the diagnosis, clinical presentation, and management of HIV disease. For example, total lymphocyte counts and absolute CD4+ cell counts are much higher in infants and children than in healthy adults and are age dependent. Specific indications and dosages for both antiretroviral and prophylactic therapy have been developed for children (12 ). Other modifications must be made in health services that are recommended for infants and children, such as avoiding vaccination with live oral polio vaccine when a child (or a close household contact) is infected with HIV. State laws differ regarding consent of minor persons (<18 years of age) for HIV counseling and testing, evaluation, treatment services, and participation in clinical trials. Although most adolescents receive adult doses of antiretroviral and prophylactic therapy, there are no data on modification of these dosages during puberty. Management of infants, children, and adolescents who are known or suspected to be infected with HIV requires referral to, or close consultation with, physicians familiar with the manifestations and treatment of pediatric HIV infection.

# DISEASES CHARACTERIZED BY GENITAL ULCERS
# Management of the Patient with Genital Ulcers
In the United States, most patients with genital ulcers have genital herpes, syphilis, or chancroid.
The relative frequency of each varies by geographic area and patient population, but in most areas of the United States genital herpes is the most common of these diseases. More than one of these diseases may be present among at least 3%-10% of patients with genital ulcers. Each disease has been associated with an increased risk for HIV infection.
A diagnosis based only on history and physical examination is often inaccurate. Therefore, evaluation of all persons with genital ulcers should include a serologic test for syphilis and possibly other tests. Although ideally all of these tests should be conducted for each patient with a genital ulcer, use of such tests (other than a serologic test for syphilis) may be based on test availability and clinical or epidemiologic suspicion. Specific tests for the evaluation of genital ulcers are listed below:
• Darkfield examination or direct immunofluorescence test for Treponema pallidum,
• Culture or antigen test for HSV, and
• Culture for Haemophilus ducreyi.
HIV testing should be considered in the management of patients with genital ulcers, especially for those with syphilis or chancroid. A health-care provider often must treat a patient before test results are available (even after complete testing, at least one quarter of patients with genital ulcers have no laboratory-confirmed diagnosis). In that circumstance, the clinician should treat for the diagnosis considered most likely. Many experts recommend treatment for both chancroid and syphilis if the diagnosis is unclear or if the patient resides in a community in which chancroid morbidity is notable (especially when diagnostic capabilities for chancroid and syphilis are not ideal).

# Chancroid
Chancroid is endemic in many areas of the United States and also occurs in discrete outbreaks. Chancroid has been well established as a co-factor for HIV transmission, and a high rate of HIV infection among patients with chancroid has been reported in the United States and in other countries. As many as 10% of patients with chancroid may be coinfected with T. pallidum or HSV.
Definitive diagnosis of chancroid requires identification of H. ducreyi on special culture media that are not commercially available; even when these media are used, sensitivity is no higher than 80% and is usually lower. A probable diagnosis, for both clinical and surveillance purposes, may be made if the person has one or more painful genital ulcers, and a) no evidence of T. pallidum infection by darkfield examination of ulcer exudate or by a serologic test for syphilis performed at least 7 days after onset of ulcers, and b) either the clinical presentation of the ulcer(s) is not typical of disease caused by HSV or the HSV test results are negative. The combination of a painful ulcer with tender inguinal adenopathy (which occurs among one-third of patients) is suggestive of chancroid; when accompanied by suppurative inguinal adenopathy, it is almost pathognomonic.

# Treatment
Successful treatment cures infection, resolves clinical symptoms, and prevents transmission to others. In extensive cases, scarring may result despite successful therapy.

# Recommended Regimens
Azithromycin 1 g orally in a single dose,
or Ceftriaxone 250 mg intramuscularly (IM) in a single dose,
or Erythromycin base 500 mg orally 4 times a day for 7 days.
All three regimens are effective for the treatment of chancroid among patients without HIV infection. Azithromycin and ceftriaxone offer the advantage of single-dose therapy.
Antimicrobial resistance to ceftriaxone and azithromycin has not been reported. Although two isolates resistant to erythromycin were reported from Asia a decade ago, similar isolates have not been reported. # Alternative Regimens Amoxicillin 500 mg plus clavulanic acid 125 mg orally 3 times a day for 7 days, or Ciprofloxacin 500 mg orally 2 times a day for 3 days. # NOTE: Ciprofloxacin is contraindicated for pregnant and lactating women, children, and adolescents ≤17 years of age. These regimens have not been evaluated as extensively as the recommended regimens; neither has been studied in the United States. # Other Management Considerations Patients should be tested for HIV infection at the time of diagnosis. Patients also should be tested 3 months later for both syphilis and HIV, if initial results are negative. # Follow-Up Patients should be re-examined 3-7 days after initiation of therapy. If treatment is successful, ulcers improve symptomatically within 3 days and improve objectively within 7 days after therapy. If no clinical improvement is evident, the clinician must consider whether a) the diagnosis is correct, b) coinfection with another STD agent exists, c) the patient is infected with HIV, d) treatment was not taken as instructed, or e) the H. ducreyi strain causing infection is resistant to the prescribed antimicrobial. The time required for complete healing is related to the size of the ulcer; large ulcers may require ≥2 weeks. Clinical resolution of fluctuant lymphadenopathy is slower than that of ulcers and may require needle aspiration through adjacent intact skin-even during successful therapy. # Management of Sex Partners Persons who had sexual contact with a patient who has chancroid within the 10 days before onset of the patient's symptoms should be examined and treated. The examination and treatment should be administered even in the absence of symptoms. # Special Considerations Pregnancy The safety of azithromycin for pregnant and lactating women has not been established. Ciprofloxacin is contraindicated during pregnancy. No adverse effects of chancroid on pregnancy outcome or on the fetus have been reported. # HIV Infection Patients coinfected with HIV should be closely monitored. These patients may require courses of therapy longer than those recommended in this report. Healing may be slower among HIV-infected persons and treatment failures do occur, especially after shorter-course treatment regimens. Since data on therapeutic efficacy with the recommended ceftriaxone and azithromycin regimens among patients infected with HIV are limited, those regimens should be used among persons known to be infected with HIV only if follow-up can be assured. Some experts suggest using the erythromycin 7-day regimen for treating HIV-infected persons. # Genital Herpes Simplex Virus Infections Genital herpes is a viral disease that may be recurrent and has no cure. Two serotypes of HSV have been identified: HSV-1 and HSV-2; most cases of genital herpes are caused by HSV-2. On the basis of serologic studies, approximately 30 million persons in the United States may have genital HSV infection. Most infected persons never recognize signs suggestive of genital herpes; some will have symptoms shortly after infection and then never again. A minority of the total infected U.S. population will have recurrent episodes of genital lesions. Some cases of first clinical episode genital herpes are manifested by extensive disease that requires hospitalization. 
Many cases of genital herpes are acquired from persons who do not know that they have a genital infection with HSV or who were asymptomatic at the time of the sexual contact. Randomized trials show that systemic acyclovir provides partial control of the symptoms and signs of herpes episodes when used to treat first clinical episodes, or when used as suppressive therapy. However, acyclovir neither eradicates latent virus nor affects subsequent risk, frequency, or severity of recurrences after administration of the drug is discontinued. Topical therapy with acyclovir is substantially less effective than the oral drug and its use is discouraged. Episodes of HSV infection among HIV-infected patients may require more aggressive therapy. Immunocompromised persons may have prolonged episodes with extensive disease. For these persons, infections caused by acyclovir-resistant strains require selection of alternate antiviral agents. # First Clinical Episode of Genital Herpes # Recommended Regimen Acyclovir 200 mg orally 5 times a day for 7-10 days or until clinical resolution is attained. # First Clinical Episode of Herpes Proctitis # Recommended Regimen Acyclovir 400 mg orally 5 times a day for 10 days or until clinical resolution is attained. # Recurrent Episodes When treatment is instituted during the prodrome or within 2 days of onset of lesions, some patients with recurrent disease experience limited benefit from therapy. However, since early treatment can seldom be administered, most immunocompetent patients with recurrent disease do not benefit from acyclovir treatment, and it is not generally recommended. # Recommended Regimen Acyclovir 200 mg orally 5 times a day for 5 days, or Acyclovir 400 mg orally 3 times a day for 5 days, or Acyclovir 800 mg orally 2 times a day for 5 days. # Daily Suppressive Therapy Daily suppressive therapy reduces the frequency of HSV recurrences by at least 75% among patients with frequent recurrences (i.e., six or more recurrences per year). Suppressive treatment with oral acyclovir does not totally eliminate symptomatic or asymptomatic viral shedding or the potential for transmission. Safety and efficacy have been documented among persons receiving daily therapy for as long as 5 years. Acyclovir-resistant strains of HSV have been isolated from some persons receiving suppressive therapy, but these strains have not been associated with treatment failure among immunocompetent patients. After 1 year of continuous suppressive therapy, acyclovir should be discontinued to allow assessment of the patient's rate of recurrent episodes. # Recommended Regimen Acyclovir 400 mg orally 2 times a day. # Alternative Regimen Acyclovir 200 mg orally 3-5 times a day. The goal of the alternative regimen is to identify for each patient the lowest dose that provides relief from frequently recurring symptoms. # Severe Disease Intravenous (IV) therapy should be provided for patients with severe disease or complications necessitating hospitalization (e.g., disseminated infection that includes encephalitis, pneumonitis, or hepatitis). # Recommended Regimen Acyclovir 5-10 mg/kg body weight IV every 8 hours for 5-7 days or until clinical resolution is attained. # Other Management Considerations Other considerations for managing patients with genital HSV infection are as follows: • Patients should be advised to abstain from sexual activity while lesions are present. 
• Patients with genital herpes should be told about the natural history of the disease, with emphasis on the potential for recurrent episodes, asymptomatic viral shedding, and sexual transmission. Sexual transmission of HSV has been documented to occur during periods without evidence of lesions. Many cases are transmitted during such asymptomatic periods. The use of condoms should be encouraged during all sexual exposures. The risk for neonatal infection should be explained to all patients-male and female-with genital herpes. Women of childbearing age who have genital herpes should be advised to inform health-care providers who care for them during pregnancy about their HSV infection. # Management of Sex Partners Sex partners of patients who have genital herpes are likely to benefit from evaluation and counseling. Symptomatic sex partners should be managed in the same manner as any patient with genital lesions. However, the majority of persons with genital HSV infection do not have a history of typical genital lesions. These asymptomatic persons may benefit from evaluation and counseling; thus, even asymptomatic partners should be queried about histories of typical and atypical genital lesions and encouraged to examine themselves for lesions in the future. Commercially available HSV type-specific antibody tests have not demonstrated adequate performance characteristics; their use is not currently recommended. Sensitive and specific type-specific serum antibody assays now utilized in research settings might contribute to future intervention strategies. Should tests with adequate sensitivity and specificity become commercially available, it might be possible to accurately identify asymptomatic persons infected with HSV-2, to focus counseling on how to detect lesions by self-examination, and to reduce the risk for transmission to sex partners. # Special Considerations # Allergy, Intolerance, or Adverse Reactions Effective alternatives to therapy with acyclovir are not available. # HIV Infection Lesions caused by HSV are relatively common among patients infected with HIV. Intermittent or suppressive therapy with oral acyclovir may be needed. The acyclovir dosage for HIV-infected persons is controversial, but experience strongly suggests that immunocompromised patients benefit from increased dosage. Regimens such as 400 mg orally 3 to 5 times a day, as used for other immunocompromised persons, have been found useful. Therapy should be continued until clinical resolution is attained. For severe disease, IV acyclovir therapy may be required. If lesions persist among patients undergoing acyclovir treatment, resistance to acyclovir should be suspected. These patients should be managed in consultation with an expert. For severe disease because of proven or suspected acyclovir-resistant strains, hospitalization should be considered. Foscarnet, 40 mg/kg body weight IV every 8 hours until clinical resolution is attained, appears to be the best available treatment. # Pregnancy The safety of systemic acyclovir therapy among pregnant women has not been established. Burroughs Wellcome Co., in cooperation with CDC, maintains a registry to assess the effects of the use of acyclovir during pregnancy. Women who receive acyclovir during pregnancy should be reported to this registry (1-800-722-9292, ext. 58465). Current registry findings do not indicate an increase in the number of birth defects identified among the prospective reports when compared with those expected in the general population. 
Moreover, no consistent pattern of abnormalities emerges among retrospective reports. These findings provide some assurance in counseling women who have had inadvertent prenatal exposure to acyclovir. However, the accumulated case histories comprise a sample of insufficient size for reaching reliable and definitive conclusions regarding the risks of acyclovir treatment to pregnant women and to their fetuses.
In the presence of life-threatening maternal HSV infection (e.g., disseminated infection that includes encephalitis, pneumonitis, or hepatitis), acyclovir administered IV is indicated. Among pregnant women without life-threatening disease, systemic acyclovir should not be used to treat recurrences, nor should it be used as suppressive therapy near term (or at other times during pregnancy) to prevent reactivation.

# Perinatal Infections
Most mothers of infants who acquire neonatal herpes lack histories of clinically evident genital herpes. The risk for transmission to the neonate from an infected mother appears highest among women with first-episode genital herpes near the time of delivery, and is low (≤3%) among women with recurrent herpes. The results of viral cultures during pregnancy do not predict viral shedding at the time of delivery, and such cultures are not routinely indicated.
At the onset of labor, all women should be carefully questioned about symptoms of genital herpes and should be examined. Women without symptoms or signs of genital herpes infection (or prodrome) may deliver their babies vaginally. Among women who have a history of genital herpes, or who have a sex partner with genital herpes, cultures of the birth canal at delivery may aid in decisions relating to neonatal management.
Infants delivered through an infected birth canal (proven by virus isolation or presumed by observation of lesions) should be followed carefully, including with virus cultures obtained 24-48 hours after birth. Available data do not support the routine use of acyclovir as anticipatory treatment for asymptomatic infants delivered through an infected birth canal. Treatment should be reserved for infants who develop evidence of clinical disease and for those with positive postpartum cultures. All infants with evidence of neonatal herpes should be treated with systemic acyclovir or vidarabine; refer to the Report of the Committee on Infectious Diseases, American Academy of Pediatrics (13 ). For ease of administration and lower toxicity, acyclovir (30 mg/kg/day for 10-14 days) is the preferred drug. The care of these infants should be managed in consultation with an expert.

# Lymphogranuloma Venereum
Lymphogranuloma venereum (LGV), a rare disease in the United States, is caused by serovars L1, L2, or L3 of C. trachomatis. The most common clinical manifestation of LGV among heterosexuals is tender inguinal lymphadenopathy, which is usually unilateral. Women and homosexually active men may have proctocolitis or inflammatory involvement of perirectal or perianal lymphatic tissues, resulting in fistulas and strictures. When patients seek care, most no longer have the self-limited genital ulcer that sometimes occurs at the site of inoculation. The diagnosis is usually made serologically and by exclusion of other causes of inguinal lymphadenopathy or genital ulcers.

# Treatment
Treatment cures infection and prevents ongoing tissue damage, although tissue reaction can result in scarring. Buboes may require aspiration or incision and drainage through intact skin.
Doxycycline is the preferred treatment.

# Recommended Regimen
Doxycycline 100 mg orally 2 times a day for 21 days.

# Alternative Regimens
Erythromycin 500 mg orally 4 times a day for 21 days,
or Sulfisoxazole 500 mg orally 4 times a day for 21 days (or an equivalent sulfonamide course).

# Follow-Up
Patients should be followed clinically until signs and symptoms have resolved.

# Management of Sex Partners
Persons who have had sexual contact with a patient who has LGV within the 30 days before onset of the patient's symptoms should be examined, tested for urethral or cervical chlamydial infection, and treated.

# Special Considerations
# Pregnancy
Pregnant and lactating women should be treated with the erythromycin regimen.

# HIV Infection
Persons with HIV infection and LGV should be treated following the regimens previously cited.

# Syphilis
# General Principles
# Background
Syphilis is a systemic disease caused by T. pallidum. Patients with syphilis may seek treatment for signs or symptoms of primary infection (an ulcer or chancre at the site of infection), secondary infection (manifestations that include rash, mucocutaneous lesions, and adenopathy), or tertiary infection (cardiac, neurologic, ophthalmic, auditory, or gummatous lesions). Infections also may be detected during the latent stage by serologic testing. Patients with latent syphilis who are known to have been infected within the preceding year are considered to have early latent syphilis; others have late latent syphilis or syphilis of unknown duration. Theoretically, treatment for late latent syphilis (as well as tertiary syphilis) requires therapy of longer duration because organisms are dividing more slowly; however, the validity of this division and its timing are unproven.

# Diagnostic Considerations and Use of Serologic Tests
Darkfield examinations and direct fluorescent antibody tests of lesion exudate or tissue are the definitive methods for diagnosing early syphilis. Presumptive diagnosis is possible with the use of two types of serologic tests for syphilis: a) nontreponemal tests (e.g., the Venereal Disease Research Laboratory [VDRL] test and the RPR), and b) treponemal tests (e.g., the fluorescent treponemal antibody absorbed [FTA-ABS] test and the microhemagglutination assay for antibody to T. pallidum [MHA-TP]). The use of one type of test alone is not sufficient for diagnosis.
Nontreponemal test antibody titers usually correlate with disease activity, and results should be reported quantitatively. A fourfold change in titer, equivalent to a change of two dilutions (e.g., from 1:16 to 1:4, or from 1:8 to 1:32), is necessary to demonstrate a substantial difference between two nontreponemal test results obtained using the same serologic test. A patient who has a reactive treponemal test usually will have a reactive test for a lifetime, regardless of treatment or disease activity (although 15%-25% of patients treated during the primary stage may revert to being serologically nonreactive after 2-3 years). Treponemal test antibody titers correlate poorly with disease activity and should not be used to assess response to treatment.
Sequential serologic tests should be performed using the same testing method (e.g., VDRL or RPR) by the same laboratory. The VDRL and RPR are equally valid, but quantitative results from the two tests cannot be directly compared because RPR titers are often slightly higher than VDRL titers.
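Because nontreponemal titers move in doubling dilutions, the fourfold rule above is base-2 arithmetic. The following is a minimal sketch, assuming titers are recorded as reciprocals (e.g., 16 for a 1:16 result); the helper names are hypothetical:

```python
# Illustrative arithmetic for comparing nontreponemal titers; the
# function names are hypothetical, not part of these guidelines.
import math

def dilution_steps(titer_a: int, titer_b: int) -> float:
    """Number of doubling dilutions between two titers, given as
    reciprocals (e.g., 16 for a 1:16 result)."""
    return abs(math.log2(titer_b) - math.log2(titer_a))

def is_substantial_difference(titer_a: int, titer_b: int) -> bool:
    """A substantial difference requires a fourfold change, i.e., at
    least two dilutions, between results from the same test type."""
    return dilution_steps(titer_a, titer_b) >= 2

# The examples from the text: 1:16 -> 1:4 and 1:8 -> 1:32 are both
# two-dilution (fourfold) changes; a single dilution is not.
assert is_substantial_difference(16, 4)
assert is_substantial_difference(8, 32)
assert not is_substantial_difference(8, 16)
```

As the text notes, such comparisons are meaningful only between results obtained with the same test type (VDRL with VDRL, RPR with RPR), ideally from the same laboratory.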
Abnormal results of serologic testing (unusually high, unusually low, or fluctuating titers) have been observed among HIV-infected patients. For such patients, use of other tests (e.g., biopsy and direct microscopy) should be considered. However, serologic tests appear to be accurate and reliable for the diagnosis of syphilis and for the evaluation of treatment response for the vast majority of HIV-infected patients.
No single test can be used to diagnose neurosyphilis among all patients. The diagnosis of neurosyphilis can be made on the basis of various combinations of reactive serologic test results, abnormalities of cerebrospinal fluid (CSF) cell count or protein, or a reactive VDRL-CSF (the RPR is not performed on CSF), with or without clinical manifestations. The CSF leukocyte count is usually elevated (>5 WBC/mm³) when active neurosyphilis is present, and it is also a sensitive measure of the effectiveness of therapy. The VDRL-CSF is the standard serologic test for CSF; when reactive in the absence of substantial contamination of the CSF with blood, it is considered diagnostic of neurosyphilis. However, the VDRL-CSF may be nonreactive when neurosyphilis is present. Some experts recommend performing an FTA-ABS test on CSF. The CSF FTA-ABS is less specific for neurosyphilis (i.e., it yields more false positives) than the VDRL-CSF; however, the test is believed to be highly sensitive.

# Treatment
Parenteral penicillin G is the preferred drug for treatment of all stages of syphilis. The preparation(s) used (i.e., benzathine, aqueous procaine, or aqueous crystalline), the dosage, and the length of treatment depend on the stage and clinical manifestations of disease. The efficacy of penicillin for the treatment of syphilis was well established through clinical experience before the value of randomized controlled clinical trials was recognized. Therefore, nearly all the recommendations for the treatment of syphilis are based on expert opinion, reinforced by case series, open clinical trials, and 50 years of clinical experience.
Parenteral penicillin G is the only therapy with documented efficacy for neurosyphilis or for syphilis during pregnancy. Patients with neurosyphilis, and pregnant women with syphilis in any stage, who report penicillin allergy should almost always be treated with penicillin, after desensitization if necessary. Skin testing for penicillin allergy may be useful for some patients and in some settings (see Management of the Patient With a History of Penicillin Allergy). However, the minor determinants needed for penicillin skin testing are not available commercially.
The Jarisch-Herxheimer reaction is an acute febrile reaction, accompanied by headache, myalgia, and other symptoms, that may occur within the first 24 hours after any therapy for syphilis; patients should be advised of this possible adverse reaction. The Jarisch-Herxheimer reaction is common among patients with early syphilis. Antipyretics may be recommended, but there are no proven methods for preventing this reaction. The Jarisch-Herxheimer reaction may induce early labor or cause fetal distress among pregnant women; this concern should not prevent or delay therapy (see Syphilis During Pregnancy).

# Management of Sex Partners
Sexual transmission of T. pallidum occurs only when mucocutaneous syphilitic lesions are present; such manifestations are uncommon after the first year of infection.
However, persons sexually exposed to a patient with syphilis in any stage should be evaluated clinically and serologically according to the following recommendations: • Persons who were exposed to a patient with primary, secondary, or latent (duration <1 year) syphilis within the preceding 90 days might be infected even if seronegative, and therefore should be treated presumptively. • Persons who were sexually exposed to a patient with primary, secondary, or latent (duration <1 year) syphilis >90 days before examination should be treated presumptively if serologic test results are not available immediately, and the opportunity for follow-up is uncertain. • For purposes of partner notification and presumptive treatment of exposed sex partners, patients who have syphilis of unknown duration and who have high nontreponemal serologic test titers (≥1:32) may be considered to be infected with early syphilis. • Long-term sex partners of patients with late syphilis should be evaluated clinically and serologically for syphilis. The time periods before treatment used for identifying at-risk sex partners are 3 months plus duration of symptoms for primary syphilis, 6 months plus duration of symptoms for secondary syphilis, and 1 year for early latent syphilis. # Primary and Secondary Syphilis Treatment Four decades of experience indicate that parenteral penicillin G is effective in achieving local cure (healing of lesions and prevention of sexual transmission) and in preventing late sequelae. However, no adequately conducted comparative trials have been performed to guide the selection of an optimal penicillin regimen (i.e., dose, duration, and preparation). Substantially fewer data on nonpenicillin regimens are available. # Recommended Regimen for Adults Nonallergic patients with primary or secondary syphilis should be treated with the following regimen: Benzathine penicillin G, 2.4 million units IM in a single dose. # NOTE: Recommendations for treating pregnant women and HIV-infected persons for syphilis are discussed in separate sections. # Recommended Regimen for Children After the newborn period, children diagnosed with syphilis should have a CSF examination to exclude a diagnosis of neurosyphilis, and birth and maternal medical records should be reviewed to assess whether the child has congenital or acquired syphilis (see Congenital Syphilis). Children with acquired primary or secondary syphilis should be evaluated (including consultation with child-protection services) and treated using the following pediatric regimen (see Sexual Assault or Abuse of Children). Benzathine penicillin G, 50,000 units/kg IM, up to the adult dose of 2.4 million units in a single dose. # Other Management Considerations All patients with syphilis should be tested for HIV. In areas with high HIV prevalence, patients with primary syphilis should be retested for HIV after 3 months. Patients who have syphilis and who also have symptoms or signs suggesting neurologic disease (e.g., meningitis) or ophthalmic disease (e.g., uveitis) should be fully evaluated for neurosyphilis and syphilitic eye disease (including CSF analysis and ocular slit-lamp examination). Such patients should be treated appropriately according to the results of this evaluation. Invasion of CSF by T. pallidum with accompanying CSF abnormalities is common among adults who have primary or secondary syphilis. However, few patients develop neurosyphilis after treatment with the regimens described in this report. 
Therefore, unless clinical signs or symptoms of neurologic involvement are present (e.g., auditory, cranial nerve, meningeal, or ophthalmic manifestations), lumbar puncture is not recommended for the routine evaluation of patients with primary or secondary syphilis.

# Follow-Up
Treatment failures can occur with any regimen. However, assessing response to treatment is often difficult, and no definitive criteria for cure or failure exist. Serologic test titers may decline more slowly among patients with a prior syphilis infection. Patients should be re-examined clinically and serologically at 3 months and again at 6 months.
Patients who have signs or symptoms that persist or recur, or who have a sustained fourfold increase in nontreponemal test titer compared with either the baseline titer or a subsequent result, can be considered to have failed treatment or to be reinfected. These patients should be re-treated after evaluation for HIV infection. Unless reinfection is likely, lumbar puncture also should be performed.
Failure of nontreponemal test titers to decline fourfold by 3 months after therapy for primary or secondary syphilis identifies persons at risk for treatment failure. Those persons should be evaluated for HIV infection. Optimal management of such patients is unclear if they are HIV negative. At a minimum, these patients should have additional clinical and serologic follow-up. If further follow-up cannot be assured, re-treatment is recommended. Some experts recommend CSF examination in such situations. When patients are re-treated, most experts recommend re-treatment with three weekly injections of benzathine penicillin G 2.4 million units IM, unless CSF examination indicates that neurosyphilis is present.
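The follow-up rules above, a sustained fourfold rise at any point or failure to decline fourfold by 3 months, can be restated as a screening sketch; the helper below is hypothetical and illustrative only, not a clinical tool:

```python
# Hypothetical restatement of the follow-up rules above; titers are
# reciprocals (1:64 -> 64). Illustrative only, not a clinical tool.

def fourfold_or_more(lower: int, higher: int) -> bool:
    """True if `higher` is at least fourfold (two dilutions) above `lower`."""
    return higher >= 4 * lower

def assess_response(baseline_titer: int, followup_titer: int,
                    months_since_therapy: int,
                    signs_persist_or_recur: bool) -> str:
    # Persistent/recurrent signs or a sustained fourfold rise: treat as
    # failure or reinfection (evaluate for HIV; consider lumbar puncture).
    if signs_persist_or_recur or fourfold_or_more(baseline_titer, followup_titer):
        return "failed treatment or reinfected"
    # No fourfold decline by 3 months flags risk for treatment failure.
    if months_since_therapy >= 3 and not fourfold_or_more(followup_titer, baseline_titer):
        return "at risk for treatment failure"
    return "responding"

# Example: a baseline of 1:64 that has fallen only to 1:32 by 3 months
# has not declined fourfold and flags risk for treatment failure.
assert assess_response(64, 32, 3, False) == "at risk for treatment failure"
assert assess_response(64, 16, 3, False) == "responding"
```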
# Pregnancy
Pregnant patients who are allergic to penicillin should be treated with penicillin, after desensitization, if necessary (see Management of the Patient With a History of Penicillin Allergy and Syphilis During Pregnancy).
# HIV Infection
Refer to Syphilis Among HIV-Infected Patients.
# Latent Syphilis
Latent syphilis is defined as those periods after infection with T. pallidum when patients are seroreactive but show no other evidence of disease. Patients who have latent syphilis and who have acquired syphilis within the preceding year are classified as having early latent syphilis. Patients can be demonstrated to have acquired syphilis within the preceding year on the basis of documented seroconversion, a fourfold or greater increase in titer of a nontreponemal serologic test, a history of symptoms of primary or secondary syphilis, or a sex partner with primary, secondary, or latent syphilis (documented independently as duration <1 year). Nearly all others have latent syphilis of unknown duration and should be managed as if they had late latent syphilis.
# Treatment
Treatment of latent syphilis is intended to prevent occurrence or progression of late complications. Although clinical experience supports belief in the effectiveness of penicillin in achieving those goals, limited evidence is available for guidance in choosing specific regimens. There is very little evidence to support the use of nonpenicillin regimens.
# Recommended Regimens for Adults
These regimens are for nonallergic patients with normal CSF examination (if performed).
# Early Latent Syphilis
Benzathine penicillin G, 2.4 million units IM in a single dose.
# Late Latent Syphilis or Latent Syphilis of Unknown Duration
Benzathine penicillin G, 7.2 million units total, administered as 3 doses of 2.4 million units IM each, at 1-week intervals.
# Recommended Regimens for Children
After the newborn period, children diagnosed with syphilis should have a CSF examination to exclude neurosyphilis, and birth and maternal medical records should be reviewed to assess whether the child has congenital or acquired syphilis (see Congenital Syphilis). Older children with acquired latent syphilis should be evaluated as described for adults and treated using the following pediatric regimens (see Sexual Assault or Abuse of Children). These regimens are for nonallergic children who have acquired syphilis and who have had a normal CSF examination.
# Early Latent Syphilis
Benzathine penicillin G, 50,000 units/kg IM, up to the adult dose of 2.4 million units in a single dose.
# Late Latent Syphilis or Latent Syphilis of Unknown Duration
Benzathine penicillin G, 50,000 units/kg IM, up to the adult dose of 2.4 million units, administered as 3 doses at 1-week intervals (total 150,000 units/kg, up to the adult total dose of 7.2 million units).
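The pediatric regimens above are weight-based with an adult-dose ceiling. The following minimal arithmetic sketch is illustrative only, not clinical software; the function name is hypothetical.

```python
# Illustrative arithmetic for the weight-based pediatric regimens above;
# not clinical software. Doses are benzathine penicillin G, in units IM.

UNITS_PER_KG = 50_000
ADULT_SINGLE_DOSE = 2_400_000  # 2.4 million units

def pediatric_dose_units(weight_kg: float) -> int:
    """One benzathine penicillin G dose, capped at the adult dose."""
    return min(round(weight_kg * UNITS_PER_KG), ADULT_SINGLE_DOSE)

# Early latent: a single dose. Late latent or unknown duration: 3 weekly doses.
for kg in (20, 48, 70):
    single = pediatric_dose_units(kg)
    print(f"{kg} kg -> {single:,} units per dose; 3-dose total {3 * single:,} units")
# At >=48 kg the adult cap applies: 2,400,000 units per dose, 7,200,000 total.
```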
# Other Management Considerations
All patients with latent syphilis should be evaluated clinically for evidence of tertiary disease (e.g., aortitis, neurosyphilis, gumma, and iritis). Recommended therapy for patients with latent syphilis may not be optimal therapy for persons with asymptomatic neurosyphilis. However, the yield from CSF examination, in terms of newly diagnosed cases of neurosyphilis, is low. Patients with any one of the criteria listed below should have a CSF examination before treatment:
• Neurologic or ophthalmic signs or symptoms;
• Other evidence of active syphilis (e.g., aortitis, gumma, iritis);
• Treatment failure;
• HIV infection;
• Serum nontreponemal titer ≥1:32, unless duration of infection is known to be <1 year; or
• Nonpenicillin therapy planned, unless duration of infection is known to be <1 year.
If dictated by circumstances and patient preferences, CSF examination may be performed for persons who do not meet the criteria listed above. If a CSF examination is performed and the results show abnormalities consistent with CNS syphilis, the patient should be treated for neurosyphilis (see Neurosyphilis). All patients with syphilis should be tested for HIV.
# Follow-Up
Quantitative nontreponemal serologic tests should be repeated at 6 months and again at 12 months. Limited data are available to guide evaluation of the response to therapy for a patient with latent syphilis. If titers increase fourfold, or if an initially high titer (≥1:32) fails to decline at least fourfold (two dilutions) within 12-24 months, or if the patient develops signs or symptoms attributable to syphilis, the patient should be evaluated for neurosyphilis and re-treated appropriately.
# Management of Sex Partners
Refer to General Principles, Management of Sex Partners.
# Special Considerations
# Penicillin Allergy
For patients who have latent syphilis and who are allergic to penicillin, nonpenicillin therapy should be used only after CSF examination has excluded neurosyphilis. Nonpregnant, penicillin-allergic patients should be treated with the following regimens: Doxycycline 100 mg orally 2 times a day or Tetracycline 500 mg orally 4 times a day. Both drugs are administered for 2 weeks if duration of infection is known to have been <1 year; otherwise, for 4 weeks.
# Pregnancy
Pregnant patients who are allergic to penicillin should be treated with penicillin, after desensitization, if necessary (see Management of the Patient With a History of Penicillin Allergy and Syphilis During Pregnancy).
# HIV Infection
Refer to Syphilis Among HIV-Infected Patients.
# Late Syphilis
Late (tertiary) syphilis refers to patients with gumma and patients with cardiovascular syphilis, but not to neurosyphilis. Nonallergic patients without evidence of neurosyphilis should be treated with the following regimen.
# Recommended Regimen
Benzathine penicillin G, 7.2 million units total, administered as 3 doses of 2.4 million units IM, at 1-week intervals.
# Other Management Considerations
Patients with symptomatic late syphilis should undergo CSF examination before therapy. Some experts treat all patients who have cardiovascular syphilis with a neurosyphilis regimen. The complete management of patients with cardiovascular or gummatous syphilis is beyond the scope of these guidelines. These patients should be managed in consultation with experts.
# Follow-Up
There is minimal evidence regarding follow-up of patients with late syphilis. Clinical response depends partly on the nature of the lesions.
# Management of Sex Partners
Refer to General Principles, Management of Sex Partners.
# Special Considerations
# Penicillin Allergy
Patients allergic to penicillin should be treated according to the treatment regimens recommended for late latent syphilis.
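The nonpenicillin regimens above key their duration to the known duration of infection, and the late-syphilis penicillin-allergy recommendation points back to the same rule. A minimal sketch of the 2-week versus 4-week choice, illustrative only; the function name is hypothetical.

```python
# Duration rule for the doxycycline/tetracycline regimens above:
# 2 weeks if the infection is known to be <1 year old, otherwise 4 weeks.

def nonpenicillin_duration_weeks(duration_known_under_1yr: bool) -> int:
    return 2 if duration_known_under_1yr else 4

print(nonpenicillin_duration_weeks(True))   # 2 -- early latent syphilis
print(nonpenicillin_duration_weeks(False))  # 4 -- late latent or unknown duration
```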
# Pregnancy
Pregnant patients who are allergic to penicillin should be treated with penicillin, after desensitization, if necessary (see Management of the Patient With a History of Penicillin Allergy and Syphilis During Pregnancy).
# HIV Infection
Refer to Syphilis Among HIV-Infected Patients.
# Neurosyphilis
# Treatment
Central nervous system disease can occur during any stage of syphilis. A patient who has syphilis and clinical evidence of neurologic involvement (e.g., ophthalmic or auditory symptoms, cranial nerve palsies) warrants a CSF examination. Although four decades of experience have confirmed the effectiveness of penicillin, the evidence to guide the choice of the best regimen is limited. Syphilitic eye disease is frequently associated with neurosyphilis, and patients with this disease should be treated according to neurosyphilis treatment recommendations. CSF examination should be performed on all such patients to identify those with CSF abnormalities who should have follow-up CSF examinations to assess response to treatment. Patients who have neurosyphilis or syphilitic eye disease (e.g., uveitis, neuroretinitis, or optic neuritis) and who are not allergic to penicillin should be treated with the following regimen.
# Recommended Regimen
12-24 million units aqueous crystalline penicillin G daily, administered as 2-4 million units IV every 4 hours, for 10-14 days. If compliance with therapy can be assured, patients may be treated with the following alternative regimen.
# Alternative Regimen
2.4 million units procaine penicillin IM daily, plus probenecid 500 mg orally 4 times a day, both for 10-14 days. The durations of these regimens are shorter than that of the regimen used for late syphilis in the absence of neurosyphilis. Therefore, some experts administer benzathine penicillin, 2.4 million units IM, after completion of these neurosyphilis treatment regimens to provide a comparable total duration of therapy.
# Other Management Considerations
Other considerations in the management of the patient with neurosyphilis are the following:
• All patients with syphilis should be tested for HIV.
• Many experts recommend treating patients with evidence of auditory disease caused by syphilis in the same manner as for neurosyphilis, regardless of the findings on CSF examination.
# Follow-Up
If CSF pleocytosis was present initially, CSF examination should be repeated every 6 months until the cell count is normal. Follow-up CSF examinations also may be used to evaluate changes in the VDRL-CSF or CSF protein in response to therapy, although changes in these two parameters occur more slowly, and persistent abnormalities are of less certain importance. If the cell count has not decreased at 6 months, or if the CSF is not entirely normal by 2 years, re-treatment should be considered.
# Management of Sex Partners
Refer to General Principles, Management of Sex Partners.
# Special Considerations
# Penicillin Allergy
No data have been collected systematically for evaluation of therapeutic alternatives to penicillin for treatment of neurosyphilis. Therefore, patients who report being allergic to penicillin should be treated with penicillin, after desensitization if necessary, or should be managed in consultation with an expert. In some situations, skin testing to confirm penicillin allergy may be useful (see Management of the Patient With a History of Penicillin Allergy).
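As a quick arithmetic check of the recommended aqueous penicillin G regimen for neurosyphilis given earlier in this section: dosing every 4 hours is 6 doses per day, so 2-4 million units per dose yields the stated daily total of 12-24 million units. A trivial sketch, illustrative only:

```python
# Daily-total check for the aqueous penicillin G neurosyphilis regimen above.
DOSES_PER_DAY = 24 // 4  # one dose every 4 hours -> 6 doses per day

for per_dose_million_units in (2, 4):
    daily = per_dose_million_units * DOSES_PER_DAY
    print(f"{per_dose_million_units} MU every 4 hours -> {daily} MU/day")
# Output: 2 MU -> 12 MU/day; 4 MU -> 24 MU/day, matching the 12-24 MU range.
```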
# Pregnancy Pregnant patients who are allergic to penicillin should be treated with penicillin, after desensitization if necessary (see Syphilis During Pregnancy). # HIV Infection Refer to Syphilis Among HIV-Infected Patients. # Syphilis Among HIV-Infected Patients Diagnostic Considerations Unusual serologic responses have been observed among HIV-infected persons who also have syphilis. Most reports involved serologic titers that were higher than expected, but false-negative serologic test results or delayed appearance of seroreactivity have also been reported. Nevertheless, both treponemal and nontreponemal serologic tests for syphilis are accurate for the majority of patients with syphilis and HIV coinfection. When clinical findings suggest that syphilis is present, but serologic tests are nonreactive or confusing, it may be helpful to perform such alternative tests as biopsy of a lesion, darkfield examination, or direct fluorescent antibody staining of lesion material. Neurosyphilis should be considered in the differential diagnosis of neurologic disease among HIV-infected persons. # Treatment Although adequate research-based evidence is not available, published case reports and expert opinion suggest that HIV-infected patients with early syphilis are at increased risk for neurologic complications and have higher rates of treatment failure with currently recommended regimens. The magnitude of these risks, although not precisely defined, is probably small. No treatment regimens have been demonstrated to be more effective in preventing development of neurosyphilis than those recommended for patients without HIV infection. Careful follow-up after therapy is essential. # Primary and Secondary Syphilis Among HIV-Infected Patients Treatment Treatment with benzathine penicillin G 2.4 million units IM, as for patients without HIV infection, is recommended. Some experts recommend additional treatments, such as multiple doses of benzathine penicillin G as suggested for late syphilis, or other supplemental antibiotics in addition to benzathine penicillin G 2.4 million units IM. # Other Management Considerations CSF abnormalities are common among HIV-infected patients who have primary or secondary syphilis, but these abnormalities are of unknown prognostic significance. Most HIV-infected patients respond appropriately to currently recommended penicillin therapy; however, some experts recommend CSF examination before therapy and modification of treatment accordingly. # Follow-Up Patients should be evaluated clinically and serologically for treatment failure at 1 month and at 2, 3, 6, 9, and 12 months after therapy. Although of unproven benefit, some experts recommend performing CSF examination after therapy (i.e., at 6 months). HIV-infected patients who meet the criteria for treatment failure should undergo CSF examination and be retreated just as for patients without HIV infection. CSF examination and re-treatment also should be strongly considered for patients in whom the suggested fourfold decrease in nontreponemal test titer does not occur within 3 months for primary or secondary syphilis. Most experts would re-treat patients with benzathine penicillin G 7.2 million units (as 3 weekly doses of 2.4 million units each) if the CSF examination is normal. # Special Considerations # Penicillin Allergy Penicillin regimens should be used to treat HIV-infected patients in all stages of syphilis. 
Skin testing to confirm penicillin allergy may be used (see Management of the Patient With a History of Penicillin Allergy), but data on the utility of that approach among immunocompromised patients are inadequate. Patients may be desensitized, then treated with penicillin.
# Latent Syphilis Among HIV-Infected Patients
# Diagnostic Considerations
Patients who have both latent syphilis (regardless of apparent duration) and HIV infection should undergo CSF examination before treatment.
# Treatment
A patient with latent syphilis, HIV infection, and a normal CSF examination can be treated with benzathine penicillin G 7.2 million units (as 3 weekly doses of 2.4 million units each).
# Special Considerations
# Penicillin Allergy
Penicillin regimens should be used to treat all stages of syphilis among HIV-infected patients. Skin testing to confirm penicillin allergy may be used (see Management of the Patient With a History of Penicillin Allergy), but data on the utility of that approach in immunocompromised patients are inadequate. Patients may be desensitized, then treated with penicillin.
# Syphilis During Pregnancy
All women should be screened serologically for syphilis during the early stages of pregnancy. In populations in which utilization of prenatal care is not optimal, RPR-card test screening (and treatment, if that test is reactive) should be performed at the time a pregnancy is diagnosed. In communities and populations with high syphilis prevalence or for patients at high risk, serologic testing should be repeated during the third trimester and again at delivery. (Some states mandate screening at delivery for all women.) Any woman who delivers a stillborn infant after 20 weeks' gestation should be tested for syphilis. No infant should leave the hospital without the serologic status of the infant's mother having been determined at least once during pregnancy.
# Diagnostic Considerations
Seropositive pregnant women should be considered infected unless treatment history is clearly documented in a medical or health department record and sequential serologic antibody titers have appropriately declined.
# Treatment
Penicillin is effective for preventing transmission to fetuses and for treating established infection among fetuses. Evidence is insufficient, however, to determine whether the specific, recommended penicillin regimens are optimal.
# Recommended Regimens
Treatment during pregnancy should be the penicillin regimen appropriate for the woman's stage of syphilis. Some experts recommend additional therapy (e.g., a second dose of benzathine penicillin 2.4 million units IM) 1 week after the initial dose, particularly for women in the third trimester of pregnancy and for women who have secondary syphilis during pregnancy.
# Other Management Considerations
Women who are treated for syphilis during the second half of pregnancy are at risk for premature labor or fetal distress, or both, if their treatment precipitates the Jarisch-Herxheimer reaction. These women should be advised to seek medical attention after treatment if they notice any change in fetal movements or if they have contractions. Stillbirth is a rare complication of treatment; however, because therapy is necessary to prevent further fetal damage, that concern should not delay treatment. All patients with syphilis should be tested for HIV.
# Follow-Up
Serologic titers should be checked monthly until adequacy of treatment has been assured. The antibody response should be appropriate for the stage of disease.
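Returning to the Diagnostic Considerations above: the presumption rule is conjunctive, so a seropositive pregnant woman is presumed infected unless treatment is clearly documented and titers have appropriately declined. A minimal sketch of that logic; the field names are hypothetical and the code is illustrative only.

```python
# Presumption-of-infection rule for seropositive pregnant women, as above.
# Both conditions must hold to avoid presuming infection. Illustrative only.

def presume_infected(treatment_documented: bool, titers_declined: bool) -> bool:
    return not (treatment_documented and titers_declined)

print(presume_infected(True, True))   # False: documented treatment plus declining titers
print(presume_infected(True, False))  # True: documentation alone is not sufficient
```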
# Management of Sex Partners Refer to General Principles, Management of Sex Partners. # Special Considerations # Penicillin Allergy There are no proven alternatives to penicillin. A pregnant woman with a history of penicillin allergy should be treated with penicillin, after desensitization, if necessary. Skin testing may be helpful for some patients and in some settings (see Management of the Patient With a History of Penicillin Allergy). Tetracycline and doxycycline are contraindicated during pregnancy. Erythromycin should not be used because it cannot be relied upon to cure an infected fetus. # Congenital Syphilis Diagnostic Considerations Who Should Be Evaluated Infants should be evaluated for congenital syphilis if they were born to seropositive (nontreponemal test confirmed by treponemal test) women who meet the following criteria: • Have untreated syphilis;* or • Were treated for syphilis during pregnancy with erythromycin; or • Were treated for syphilis <1 month before delivery; or • Were treated for syphilis during pregnancy with the appropriate penicillin regimen, but nontreponemal antibody titers did not decrease sufficiently after therapy to indicate an adequate response (≥ fourfold decrease); or • Do not have a well-documented history of treatment for syphilis; or • Were treated appropriately before pregnancy but had insufficient serologic follow-up to assure that they had responded appropriately to treatment and are not currently infected (≥ fourfold decrease for patients treated for early syphilis; stable or declining titers ≤1:4 for other patients). No infant should leave the hospital without the serologic status of the infant's mother having been documented at least once during pregnancy. Serologic testing also should be performed at delivery in communities and populations at risk for congenital syphilis. Serologic tests can be nonreactive among infants infected late during their mother's pregnancy. # Evaluation of the Infant The clinical and laboratory evaluation of infants born to women described above should include the following: • A thorough physical examination for evidence of congenital syphilis; • A quantitative nontreponemal serologic test for syphilis performed on the infant's sera (not on cord blood); • CSF analysis for cells, protein, and VDRL; • Long bone x-rays; • Other tests as clinically indicated (e.g., chest x-ray, complete blood count, differential and platelet count, liver function tests); • For infants who have no evidence of congenital syphilis on the above evaluation, determination of presence of specific antitreponemal IgM antibody by a testing method recognized by CDC as having either provisional or standard status; • Pathologic examination of the placenta or amniotic cord using specific fluorescent antitreponemal antibody staining. *A woman treated with a regimen other than those recommended for treatment of syphilis (for pregnant women or otherwise) in these guidelines should be considered untreated. # Treatment Therapy Decisions Infants should be treated for presumed congenital syphilis if they were born to mothers who, at delivery, had untreated syphilis or who had evidence of relapse or reinfection after treatment (see Congenital Syphilis, Diagnostic Considerations). 
Additional criteria for presumptively treating infants with congenital syphilis are as follows:
• Physical evidence of active disease;
• X-ray evidence of active disease;
• A reactive VDRL-CSF or, for infants born to seroreactive mothers, an abnormal* CSF white blood cell count or protein, regardless of CSF serology;
• A serum quantitative nontreponemal serologic titer that is at least fourfold greater than the mother's titer†;
• Specific antitreponemal IgM antibody detected by a testing method that has been given provisional or standard status by CDC;
• If they meet the previously cited criteria for "Who Should Be Evaluated," but have not been fully evaluated (see Congenital Syphilis, Diagnostic Considerations).
*In the immediate newborn period, interpretation of CSF test results may be difficult; normal values vary with gestational age and are higher in preterm infants. Other causes of elevated values also should be considered when an infant is being evaluated for congenital syphilis. Though values as high as 25 white blood cells (WBC)/mm³ and 150 mg protein/dL occur among normal neonates, some experts recommend that lower values (5 WBC/mm³ and 40 mg/dL) be considered the upper limits of normal. The infant should be treated if test results cannot exclude infection.
†The absence of a fourfold greater titer for an infant cannot be used as evidence against congenital syphilis.
# NOTE: Infants with clinically evident congenital syphilis should have an ophthalmologic examination.
# Recommended Regimens
Aqueous crystalline penicillin G, 100,000-150,000 units/kg/day (administered as 50,000 units/kg IV every 12 hours during the first 7 days of life and every 8 hours thereafter) for 10-14 days, or
Procaine penicillin G, 50,000 units/kg IM daily in a single dose for 10-14 days.
If more than 1 day of therapy is missed, the entire course should be restarted. An infant whose complete evaluation was normal and whose mother was a) treated for syphilis during pregnancy with erythromycin, or b) treated for syphilis <1 month before delivery, or c) treated with an appropriate regimen before or during pregnancy but did not yet have an adequate serologic response should be treated with benzathine penicillin G, 50,000 units/kg IM in a single dose. In some cases, infants with a normal complete evaluation for whom follow-up can be assured can be followed closely without treatment.
# Treatment of Older Infants and Children with Congenital Syphilis
After the newborn period, children diagnosed with syphilis should have a CSF examination to exclude neurosyphilis, and records should be reviewed to assess whether the child has congenital or acquired syphilis (see Primary and Secondary Syphilis and Latent Syphilis). Any child who is thought to have congenital syphilis (or who has neurologic involvement) should be treated with aqueous crystalline penicillin G, 200,000-300,000 units/kg/day IV or IM (administered as 50,000 units/kg every 4-6 hours) for 10-14 days.
# Follow-Up
A seroreactive infant (or an infant whose mother was seroreactive at delivery) who is not treated for congenital syphilis during the perinatal period should receive careful follow-up examinations at 1, 2, 3, 6, and 12 months of age. Nontreponemal antibody titers should decline by 3 months of age and should be nonreactive by 6 months of age if the infant was not infected and the titers were the result of passive transfer of antibody from the mother.
If these titers are found to be stable or increasing, the child should be re-evaluated, including CSF examination, and fully treated. Passively transferred treponemal antibodies may be present for as long as 1 year. If they are present >1 year, the infant should be re-evaluated and treated for congenital syphilis. Treated infants also should be followed every 2-3 months to assure that nontreponemal antibody titers decline; these infants should have become nonreactive by 6 months of age (response may be slower for infants treated after the neonatal period). Treponemal tests should not be used to evaluate response to treatment, because test results can remain positive despite effective therapy if the child was infected. Infants with CSF pleocytosis should undergo CSF examination every 6 months until the cell count is normal. If the cell count is still abnormal after 2 years, or if a downward trend is not present at each examination, the child should be re-treated. The VDRL-CSF also should be checked at 6 months; if still reactive, the infant should be re-treated. Follow-up of children treated for congenital syphilis after the newborn period should be the same as that prescribed for congenital syphilis among neonates.
# Special Considerations
# Penicillin Allergy
Children who require treatment for syphilis after the newborn period, but who have a history of penicillin allergy, should be treated with penicillin after desensitization, if necessary. Skin testing may be helpful in some patients and settings (see Management of the Patient With a History of Penicillin Allergy).
# HIV Infection
Mothers of infants with congenital syphilis should be tested for HIV. Infants born to mothers who have HIV infection should be referred for evaluation and appropriate follow-up. No data exist to suggest that infants with congenital syphilis whose mothers are coinfected with HIV require different evaluation, therapy, or follow-up for syphilis than is recommended for all infants.
# Management of the Patient With a History of Penicillin Allergy
No proven alternatives to penicillin are available for treating neurosyphilis, congenital syphilis, or syphilis among pregnant women. Penicillin also is recommended for use, whenever possible, with HIV-infected patients. Unfortunately, 3%-10% of the adult population in the United States has experienced urticaria, angioedema, or anaphylaxis (upper airway obstruction, bronchospasm, or hypotension) with penicillin therapy. Re-administration of penicillin can cause severe immediate reactions among these patients. Because anaphylactic reactions to penicillin can be fatal, every effort should be made to avoid administering penicillin to penicillin-allergic patients, unless the anaphylactic sensitivity has been removed by acute desensitization. However, only approximately 10% of persons who report a history of severe allergic reactions to penicillin are still allergic. With the passage of time after an allergic reaction to penicillin, most persons who have experienced a severe reaction stop expressing penicillin-specific IgE. These persons can be treated safely with penicillin. Many studies have found that skin testing with the major and minor determinants can reliably identify persons at high risk for penicillin reactions. Although these reagents are easily generated and have been available in academic centers for >30 years, currently only penicilloyl-poly-L-lysine (Pre-Pen, the major determinant) and penicillin G are available commercially.
Experts estimate that testing with only the major determinant and penicillin G detects 90%-97% of currently allergic patients. However, because skin testing without the minor determinants would still miss 3%-10% of allergic patients, and because serious or fatal reactions can occur among these minor-determinant-positive patients, experts suggest caution when the full battery of skin-test reagents listed below is not available.
# Recommendations
If the full battery of skin-test reagents is available, including the major and minor determinants (see Penicillin Allergy Skin Testing), patients who report a history of penicillin reaction and are skin-test negative can receive conventional penicillin therapy. Skin-test positive patients should be desensitized. If the full battery of skin-test reagents, including the minor determinants, is not available, the patient should be skin tested using penicilloyl (the major determinant, Pre-Pen) and penicillin G. Those with positive tests should be desensitized. Some experts believe that persons with negative tests, in that situation, should be regarded as probably allergic and should be desensitized. Others suggest that those with negative skin tests can be test-dosed gradually with oral penicillin in a monitored setting in which treatment for an anaphylactic reaction is possible.
# Penicillin Allergy Skin Testing
Patients at high risk for anaphylaxis (i.e., a history of penicillin-related anaphylaxis, asthma or other diseases that would make anaphylaxis more dangerous, or therapy with beta-adrenergic blocking agents) should be tested with 100-fold dilutions of the full-strength skin-test reagents before testing with full-strength reagents. In these situations, patients should be tested in a monitored setting in which treatment for an anaphylactic reaction is possible. If possible, the patient should not have taken antihistamines (e.g., chlorpheniramine maleate or terfenadine during the past 24 hours, diphenhydramine HCl or hydroxyzine during the past 4 days, or astemizole during the past 3 weeks).
# Reagents (Adapted from Beall*)
# Major Determinant
• Penicilloyl-poly-L-lysine (Pre-Pen).
# Minor Determinant Precursors†
• Benzylpenicillin G (10⁻² M, 3.3 mg/mL, 6,000 U/mL),
• Benzylpenicilloate (10⁻² M, 3.3 mg/mL),
• Benzylpenilloate (or penicilloyl propylamine) (10⁻² M, 3.3 mg/mL).
# Positive Control
• Commercial histamine for epicutaneous skin testing (1 mg/mL).
# Negative Control
• Diluent used to dissolve other reagents, usually phenol saline.
*Reprinted with permission from G.N. Beall in Annals of Internal Medicine.
†Aged penicillin is not an adequate source of minor determinants. Penicillin G should be freshly prepared or should come from a fresh-frozen source.
# Procedures
Dilute the antigens 100-fold for preliminary testing if the patient has had a life-threatening reaction, or 10-fold if the patient has had another type of immediate, generalized reaction within the past year.
Epicutaneous (prick) tests. Duplicate drops of skin-test reagent are placed on the volar surface of the forearm. The underlying epidermis is pierced with a 26-gauge needle without drawing blood. An epicutaneous test is positive if the average wheal diameter after 15 minutes is 4 mm larger than that of negative controls; otherwise, the test is negative. The histamine controls should be positive to assure that results are not falsely negative because of the effect of antihistaminic drugs.
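A minimal sketch of the prick-test reading rule above, assuming wheal diameters are recorded in millimeters for the duplicate test drops and the negative controls, and reading "4 mm larger" as at least 4 mm; illustrative only, not clinical software.

```python
# Epicutaneous (prick) test reading, per the rule above: positive if the
# average test-wheal diameter at 15 minutes is at least 4 mm larger than
# the average negative-control wheal diameter. Illustrative only.

def prick_test_positive(test_wheals_mm: list[float],
                        control_wheals_mm: list[float]) -> bool:
    average = lambda values: sum(values) / len(values)
    return average(test_wheals_mm) - average(control_wheals_mm) >= 4

print(prick_test_positive([6.0, 8.0], [2.0, 2.0]))  # True: 7 mm vs 2 mm average
print(prick_test_positive([4.0, 4.0], [2.0, 2.0]))  # False: only 2 mm larger
```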
Intradermal test. If epicutaneous tests are negative, duplicate 0.02 mL intradermal injections of negative control and antigen solutions are made into the volar surface of the forearm using a 26- or 27-gauge needle on a syringe. The crossed diameters of the wheals induced by the injections should be recorded. An intradermal test is positive if the average wheal diameter 15 minutes after injection is at least 2 mm larger than the initial wheal size and also is at least 2 mm larger than that of the negative controls; otherwise, the tests are negative.
# Desensitization
Patients who have a positive skin test to one of the penicillin determinants can be desensitized. This is a straightforward, relatively safe procedure that can be done orally or IV. Although the two approaches have not been compared, oral desensitization is thought to be safer, simpler, and easier. Patients should be desensitized in a hospital setting because serious IgE-mediated allergic reactions, although unlikely, can occur. Desensitization usually can be completed in approximately 4 hours, after which the first dose of penicillin is given (Table 1). STD programs should have a referral center where patients with positive skin tests can be desensitized. After desensitization, patients must be maintained on penicillin continuously for the duration of the course of therapy.
# DISEASES CHARACTERIZED BY URETHRITIS AND CERVICITIS
# Management of the Patient with Urethritis
Urethritis, or inflammation of the urethra, is caused by an infection and is characterized by the discharge of mucoid or purulent material and by burning during urination. However, asymptomatic infections are common. The two bacterial agents primarily responsible for urethritis among men are N. gonorrhoeae and C. trachomatis. Testing to determine the specific diagnosis is recommended because both of these infections are reportable to state health departments and because a specific diagnosis may improve treatment compliance and the likelihood of partner notification. If diagnostic tools (e.g., Gram stain and microscope) are unavailable, health-care providers should treat patients for both infections. The added expense of treating a person with nongonococcal urethritis (NGU) for both infections also should encourage the health-care provider to make a specific diagnosis. (See Nongonococcal Urethritis, Chlamydial Infections, and Gonococcal Infections.)
# Nongonococcal Urethritis
NGU, or inflammation of the urethra not caused by gonococcal infection, is characterized by a mucoid or purulent urethral discharge. In the presence or absence of a discharge, NGU may be diagnosed by ≥5 polymorphonuclear leukocytes per oil immersion field on a smear of an intraurethral swab specimen. Increasingly, the leukocyte esterase test (LET) is being used to screen urine from asymptomatic males for evidence of urethritis (either gonococcal or nongonococcal). The diagnosis of urethritis among males tested with LET should be confirmed with a Gram-stained smear of a urethral swab specimen. C. trachomatis is the most frequent cause of NGU (23%-55% of cases); however, prevalence varies among age groups, with lower prevalence found among older men. Ureaplasma urealyticum causes 20%-40% of cases, and Trichomonas vaginalis 2%-5%. HSV is occasionally responsible for cases of NGU. The etiology of the remaining cases of NGU is unknown.
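Putting the diagnostic thresholds above together, a minimal classification sketch follows. It is illustrative only and not a substitute for the full evaluation described in the text; the Gram-stain branch (gram-negative intracellular diplococci indicating gonococcal infection) reflects standard practice and is an assumption, since the text does not spell out that criterion.

```python
# Urethritis classification per the criteria above: >=5 polymorphonuclear
# leukocytes (PMNs) per oil-immersion field documents urethritis; the Gram
# stain separates gonococcal urethritis from NGU. Illustrative only.

def classify_urethritis(pmns_per_field: int,
                        gram_negative_intracellular_diplococci: bool) -> str:
    if pmns_per_field < 5:
        return "urethritis not documented"
    if gram_negative_intracellular_diplococci:
        return "gonococcal urethritis"
    return "nongonococcal urethritis (NGU)"

print(classify_urethritis(12, False))  # nongonococcal urethritis (NGU)
print(classify_urethritis(3, False))   # urethritis not documented
```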
Complications of NGU among men infected with C. trachomatis include epididymitis and Reiter's syndrome. Female sex partners of men who have NGU are at risk for chlamydial infection and associated complications.
# Recommended Regimen
Doxycycline 100 mg orally 2 times a day for 7 days.*
# Alternative Regimens
Erythromycin base 500 mg orally 4 times a day for 7 days or Erythromycin ethylsuccinate 800 mg orally 4 times a day for 7 days. If a patient cannot tolerate high-dose erythromycin schedules, one of the following regimens may be used: Erythromycin base 250 mg orally 4 times a day for 14 days or Erythromycin ethylsuccinate 400 mg orally 4 times a day for 14 days.
*Azithromycin 1 g in a single dose, according to manufacturer's data, is equivalent to doxycycline. However, this study has not been published in a peer-reviewed journal. For a discussion comparing azithromycin and doxycycline, refer to Chlamydial Infections.
Treatment with the recommended regimen has been demonstrated in most cases to result in alleviation of symptoms and in microbiologic cure of infection. If the etiologic organism is susceptible to the antimicrobial agent used, sequelae specific to that organism will be prevented, as will further transmission; this is especially important for cases of NGU caused by C. trachomatis.
# Follow-Up
Patients should be instructed to return for evaluation if symptoms persist or recur after completion of therapy. Patients with persistent or recurrent urethritis should be re-treated with the initial regimen if they failed to comply with the treatment regimen or if they were re-exposed to an untreated sex partner. Otherwise, a wet mount examination and culture of an intraurethral swab specimen for T. vaginalis should be performed; if negative, the patient should be re-treated with an alternative regimen extended to 14 days (e.g., erythromycin base 500 mg orally 4 times a day for 14 days). The use of alternative regimens ensures treatment of possible tetracycline-resistant U. urealyticum. Effective regimens have not been identified for treating patients who experience persistent symptoms or frequent recurrences after treatment with doxycycline and erythromycin. Urologic examinations do not usually reveal a specific etiology. Such patients should be assured that, although they have persistent or frequently recurring urethritis, the condition is not known to cause complications among them or their sex partners and is not known to be sexually transmitted. However, men exposed to a new sex partner should be re-evaluated. Symptoms alone, without documentation of signs or laboratory evidence of urethral inflammation, are not a sufficient basis for re-treatment.
# Management of Sex Partners
Patients should be instructed to refer sex partners for evaluation and treatment. Because exposure intervals have received limited evaluation, the following recommendations are somewhat arbitrary. Sex partners of symptomatic patients should be evaluated and treated if their last sexual contact with the index patient was within 30 days of onset of symptoms. If the index patient is asymptomatic, sex partners whose last sexual contact with the index patient was within 60 days of diagnosis should be evaluated and treated. If the patient's last sexual intercourse preceded the time intervals previously described, the most recent sex partner should be treated. A specific diagnosis may facilitate partner referral and partner cooperation. Therefore, testing for both gonorrhea and chlamydia is encouraged. Patients should be instructed to abstain from sexual intercourse until patient and partners are cured.
In the absence of microbiologic test-of-cure, this means until therapy is completed and patient and partners are without symptoms or signs.
# Special Considerations
# HIV Infection
Persons with HIV infection and NGU should receive the same treatment as patients without HIV infection.
# Management of the Patient With Mucopurulent Cervicitis
Mucopurulent cervicitis (MPC) is characterized by a yellow endocervical exudate visible in the endocervical canal or in an endocervical swab specimen. Some experts also make the diagnosis on the basis of an increased number of polymorphonuclear leukocytes on cervical Gram stain. The condition is asymptomatic among many women, but some may experience an abnormal vaginal discharge and abnormal vaginal bleeding (e.g., following intercourse). The condition can be caused by C. trachomatis or N. gonorrhoeae, although in most cases neither organism can be isolated. Patients with MPC should have cervical specimens tested for C. trachomatis and cultured for N. gonorrhoeae. MPC is not a sensitive predictor of infection, because most women with C. trachomatis or N. gonorrhoeae do not have MPC.
# Treatment
The results of tests for C. trachomatis or N. gonorrhoeae should determine the need for treatment, unless the likelihood of infection with either organism is high or unless the patient is unlikely to return for treatment. Treatment for MPC should include the following:
• Treatment for gonorrhea and chlamydia in patient populations with a high prevalence of both infections, such as patients seen at many STD clinics.
• Treatment for chlamydia only, if the prevalence of N. gonorrhoeae is low but the likelihood of chlamydia is substantial.
• Awaiting test results if the prevalence of both infections is low and if compliance with a recommendation for a return visit is likely.
# Follow-Up
Follow-up should be as recommended for the infections for which the woman is being treated. Management of sex partners of women with MPC should be appropriate for the STD (C. trachomatis or N. gonorrhoeae) identified. Partners should be notified, examined, and treated on the basis of test results. However, partners of patients who are treated presumptively should receive the same treatment as the index patient.
# Special Considerations
# HIV Infection
Persons with HIV infection and MPC should receive the same treatment as patients without HIV infection.
# Chlamydial Infections
Chlamydial genital infection is common among adolescents and young adults in the United States. Asymptomatic infection is common among both men and women. Testing sexually active adolescent girls for chlamydial infection should be routine during gynecologic examination, even if symptoms are not present. Screening of young adult women 20-24 years of age also is suggested, particularly for those who do not consistently use barrier contraceptives and who have new or multiple partners. Periodic surveys of chlamydial prevalence among these groups should be conducted to confirm the validity of using these recommendations in specific clinical settings.
# Chlamydial Infections Among Adolescents and Adults
The following recommended treatment regimens and the alternative regimens relieve symptoms and cure infection. Among women, several important sequelae may result from C. trachomatis infection, the most serious being PID, ectopic pregnancy, and infertility. Some women with apparently uncomplicated cervical infection already have subclinical upper reproductive tract infection.
Treatment of cervical infection is believed to reduce the likelihood of sequelae, although few studies have demonstrated that antimicrobial therapy reduces the risk of subsequent ascending infections or decreases the incidence of long-term complications such as tubal infertility and ectopic pregnancy. Treatment of infected patients prevents transmission to sex partners, and treatment of infected pregnant women may prevent transmission of C. trachomatis to infants during birth. Treatment of sex partners helps to prevent reinfection of the index patient and infection of other partners. Because of the high prevalence of coinfection with C. trachomatis among patients with gonococcal infection, presumptive treatment for chlamydia of patients being treated for gonorrhea is appropriate, particularly if no diagnostic test for C. trachomatis infection will be performed (see Gonococcal Infections).
# Recommended Regimens
Doxycycline 100 mg orally 2 times a day for 7 days, or Azithromycin 1 g orally in a single dose.
# Alternative Regimens
Ofloxacin 300 mg orally 2 times a day for 7 days or Erythromycin base 500 mg orally 4 times a day for 7 days or Erythromycin ethylsuccinate 800 mg orally 4 times a day for 7 days or Sulfisoxazole 500 mg orally 4 times a day for 10 days (inferior efficacy to the other regimens).
Doxycycline and azithromycin appear similar in efficacy and toxicity; however, the safety and efficacy of azithromycin for persons ≤15 years of age have not been established. Doxycycline has a longer history of extensive use, safety, and efficacy, and it has the advantage of low cost. Azithromycin has the advantage of single-dose administration. Ofloxacin is similar in efficacy to doxycycline and azithromycin, but it is more expensive than doxycycline, cannot be used during pregnancy or with persons ≤17 years of age, and offers no advantage in dosing. Ofloxacin is the only quinolone with proven efficacy against chlamydial infection. Sulfisoxazole is the least desirable treatment because of its inferior efficacy.
# Follow-Up
Patients do not need to be retested for chlamydia after completing treatment with doxycycline or azithromycin unless symptoms persist or reinfection is suspected. Retesting may be considered 3 weeks after completion of treatment with erythromycin, sulfisoxazole, or amoxicillin; it is usually unnecessary if the patient was treated with doxycycline, azithromycin, or ofloxacin. The validity of chlamydial culture testing performed <3 weeks after completion of therapy to identify patients who failed therapy has not been established. False-negative results may occur because of small numbers of chlamydial organisms. In addition, nonculture tests conducted <3 weeks after completion of therapy for patients successfully treated may sometimes be false-positive because of the continued excretion of dead organisms. Some studies have demonstrated high rates of infection among women retested several months after treatment, presumably because of reinfection. Rescreening women several months after treatment may be an effective strategy for detecting further morbidity in some populations.
# Management of Sex Partners
Patients should be instructed to refer their sex partners for evaluation and treatment. Because exposure intervals have received limited evaluation, the following recommendations are somewhat arbitrary. Sex partners of symptomatic patients with C.
trachomatis should be evaluated and treated for chlamydia if their last sexual contact with the index patient was within 30 days of onset of the index patient's symptoms. If the index patient is asymptomatic, sex partners whose last sexual contact with the index patient was within 60 days of diagnosis should be evaluated and treated. Health-care providers should treat the last sex partner even if last sexual intercourse took place before the foregoing time intervals. Patients should be instructed to avoid sex until they and their partners are cured. In the absence of microbiologic test-of-cure, this means until therapy is completed and patient and partner(s) are without symptoms. # Special Considerations Pregnancy Doxycycline and ofloxacin are contraindicated for pregnant women, and sulfisoxazole is contraindicated for women during pregnancy near-term and for women who are nursing. The safety and efficacy of azithromycin among pregnant and lactating women have not been established. Repeat testing, preferably by culture, after completing therapy with the following regimens is recommended because there are few data regarding the effectiveness of these regimens, and the frequent gastrointestinal side effects of erythromycin may discourage a patient from complying with the prescribed treatment. # Recommended Regimen for Pregnant Women Erythromycin base 500 mg orally 4 times a day for 7 days. # Alternative Regimens for Pregnant Women Erythromycin base 250 mg orally 4 times a day for 14 days or Erythromycin ethylsuccinate 800 mg orally 4 times a day for 7 days or Erythromycin ethylsuccinate 400 mg orally 4 times a day for 14 days or If erythromycin cannot be tolerated: Amoxicillin 500 mg orally 3 times a day for 7-10 days. # NOTE: Erythromycin estolate is contraindicated during pregnancy because of drug-related hepatotoxicity. Few data exist concerning the efficacy of amoxicillin. # HIV Infection Persons with HIV infection and chlamydial infection should receive the same treatment as patients without HIV infection. # Chlamydial Infections Among Infants Prenatal screening of pregnant women can prevent chlamydial infection among neonates. Pregnant women <25 years of age and those with new or multiple sex partners should, in particular, be targeted for screening. Periodic surveys of chlamydial prevalence can be conducted to confirm the validity of using these recommendations in specific clinical settings. C. trachomatis infection of neonates results from perinatal exposure to the mother's infected cervix. The prevalence of C. trachomatis infection generally exceeds 5% among pregnant women, regardless of race/ethnicity or socioeconomic status. Neonatal ocular prophylaxis with silver nitrate solution or antibiotic ointments is ineffective in preventing perinatal transmission of chlamydial infection from mother to infant. However, ocular prophylaxis with those agents does prevent gonococcal ophthalmia and should be continued for that reason (see Prevention of Ophthalmia Neonatorum). Initial C. trachomatis perinatal infection involves mucous membranes of the eye, oropharynx, urogenital tract, and rectum. C. trachomatis infection among neonates can most often be recognized because of conjunctivitis developing 5-12 days after birth. Chlamydia is the most frequent identifiable infectious cause of ophthalmia neonatorum. C. trachomatis also is a common cause of subacute, afebrile pneumonia with onset from 1 to 3 months of age. 
Asymptomatic infections of the oropharynx, genital tract, and rectum among neonates also occur.
# Ophthalmia Neonatorum Caused by C. trachomatis
A chlamydial etiology should be considered for all infants with conjunctivitis through 30 days of age.
# Diagnostic Considerations
Sensitive and specific methods to diagnose chlamydial ophthalmia in the neonate include isolation by tissue culture and nonculture tests (e.g., direct fluorescent antibody tests and immunoassays). Giemsa-stained smears are specific for C. trachomatis, but are not sensitive. Specimens must contain conjunctival cells, not exudate alone. Specimens for culture isolation and nonculture tests should be obtained from the everted eyelid using a Dacron-tipped swab or the swab specified by the manufacturer's test kit. A specific diagnosis of C. trachomatis infection confirms the need for chlamydial treatment not only for the neonate, but also for the mother and her sex partner(s). Ocular exudate from infants being evaluated for chlamydial conjunctivitis also should be tested for N. gonorrhoeae.
# Recommended Regimen
Erythromycin 50 mg/kg/day orally divided into 4 doses for 10-14 days. Topical antibiotic therapy alone is inadequate for treatment of chlamydial infection and is unnecessary when systemic treatment is undertaken.
# Follow-Up
The possibility of chlamydial pneumonia should be considered. The efficacy of erythromycin treatment is approximately 80%; a second course of therapy may be required. Follow-up of infants to determine resolution is recommended.
# Management of Mothers and Their Sex Partners
The mothers of infants who have chlamydial infection and the mothers' sex partners should be evaluated and treated following the treatment recommendations for adults with chlamydial infections (see Chlamydial Infections Among Adolescents and Adults).
# Infant Pneumonia Caused by C. trachomatis
Characteristic signs of chlamydial pneumonia among infants include a repetitive staccato cough with tachypnea, and hyperinflation and bilateral diffuse infiltrates on a chest roentgenogram. Wheezing is rare, and infants are typically afebrile. Peripheral eosinophilia, documented in a complete blood count, is sometimes observed among infants with chlamydial pneumonia. Because variation from this clinical presentation is common, initial treatment and diagnostic tests should encompass C. trachomatis for all infants 1-3 months of age who have possible pneumonia.
# Diagnostic Considerations
Specimens should be collected from the nasopharynx for chlamydial testing. Tissue culture remains the definitive standard for diagnosing chlamydial pneumonia; nonculture tests can be used with the knowledge that nonculture tests of nasopharyngeal specimens produce lower sensitivity and specificity than nonculture tests of ocular specimens. Tracheal aspirates and lung biopsy specimens, if collected, should be tested for C. trachomatis. The microimmunofluorescence test for C. trachomatis antibody is useful but not widely available; an acute IgM antibody titer ≥1:32 is strongly suggestive of C. trachomatis pneumonia. Because of the delay in obtaining test results for chlamydia, the decision to include an agent active against C. trachomatis in the antibiotic regimen must frequently be made on the basis of clinical and radiologic findings. Conducting tests for chlamydial infection is worthwhile, not only to assist in the management of an infant's illness, but also to determine the need for treatment of the mother and her sex partners.
# Recommended Regimen
Erythromycin 50 mg/kg/day orally divided into 4 doses for 10-14 days.
# Follow-Up
The effectiveness of erythromycin treatment is approximately 80%; a second course of therapy may be required. Follow-up of infants is recommended to determine that the pneumonia has resolved. Some infants with chlamydial pneumonia have had abnormal pulmonary function tests later in childhood.
# Management of Mothers and Their Sex Partners
Mothers of infants who have chlamydial infection and the mothers' sex partners should be evaluated and treated according to the recommended treatment of adults with chlamydial infections (see Chlamydial Infections Among Adolescents and Adults).
# Infants Born to Mothers Who Have Chlamydial Infection
Infants born to mothers who have untreated chlamydia are at high risk for infection and should be evaluated and treated as for infants with ophthalmia neonatorum caused by C. trachomatis.
# Chlamydial Infections Among Children
Sexual abuse must be considered a cause of chlamydial infection among preadolescent children, although perinatally transmitted C. trachomatis infection of the nasopharynx, urogenital tract, and rectum may persist beyond 1 year (see Sexual Assault or Abuse of Children). Because of the potential for a criminal investigation and legal proceedings for sexual abuse, diagnosis of C. trachomatis among preadolescent children requires the high specificity provided by isolation in cell culture. The cultures should be confirmed by microscopic identification of the characteristic intracytoplasmic inclusions, preferably by fluorescein-conjugated monoclonal antibodies specific for C. trachomatis.
# Diagnostic Considerations
Nonculture chlamydia tests should not be used because of the possibility of false-positive test results. With respiratory tract specimens, false-positive results can occur because of cross-reaction of test reagents with Chlamydia pneumoniae; with genital and anal specimens, false-positive results occur because of cross-reaction with fecal flora.
# Recommended Regimen
# Children who weigh <45 kg
Erythromycin 50 mg/kg/day divided into four doses for 10-14 days.
# NOTE: The effectiveness of erythromycin treatment is approximately 80%; a second course of therapy may be required.
# Children who weigh ≥45 kg but who are <8 years of age
Use the same treatment regimens for these children as the adult regimens of erythromycin (see Chlamydial Infections Among Adolescents and Adults).
# Children ≥8 years of age
Use the same treatment regimens for these children as the adult regimens of doxycycline or tetracycline (see Chlamydial Infections Among Adolescents and Adults). Adult regimens of azithromycin also may be considered for adolescents.
# Other Management Considerations
See Sexual Assault or Abuse of Children.
# Follow-Up
Follow-up cultures are necessary to ensure that treatment has been effective.
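The pediatric erythromycin regimens above split a 50 mg/kg/day total into four doses, with adult regimens taking over at 45 kg. The following arithmetic sketch is illustrative only, not clinical software; the function name is hypothetical.

```python
# Per-dose arithmetic for the erythromycin regimen above:
# 50 mg/kg/day divided into 4 oral doses, for children who weigh <45 kg.

DAILY_MG_PER_KG = 50
DOSES_PER_DAY = 4

def erythromycin_per_dose_mg(weight_kg: float) -> float:
    if weight_kg >= 45:
        raise ValueError("use adult regimens for children who weigh >=45 kg")
    return weight_kg * DAILY_MG_PER_KG / DOSES_PER_DAY

print(erythromycin_per_dose_mg(4.0))   # 50.0 mg per dose (200 mg/day) for a 4 kg infant
print(erythromycin_per_dose_mg(20.0))  # 250.0 mg per dose (1,000 mg/day)
```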
# Gonococcal Infections
# Gonococcal Infections Among Adolescents and Adults
An estimated 1 million new infections with N. gonorrhoeae occur in the United States each year. Most infections among men produce symptoms that cause the person to seek curative treatment soon enough to prevent serious sequelae, but not soon enough to prevent transmission to others. Many infections among women do not produce recognizable symptoms until complications such as PID have occurred. PID, whether symptomatic or asymptomatic, can cause tubal scarring leading to infertility or ectopic pregnancy. Because gonococcal infections among women are often asymptomatic, a primary measure for controlling gonorrhea in the United States has been the screening of high-risk women.
# Uncomplicated Gonococcal Infections
# Recommended Regimens
Ceftriaxone 125 mg IM in a single dose or Cefixime 400 mg orally in a single dose or Ciprofloxacin 500 mg orally in a single dose or Ofloxacin 400 mg orally in a single dose
PLUS
A regimen effective against possible coinfection with C. trachomatis, such as doxycycline 100 mg orally 2 times a day for 7 days.
Many antibiotics are safe and effective for treating gonorrhea, eradicating N. gonorrhoeae, ending the possibility of further transmission, relieving symptoms, and reducing the chances of sequelae. Selection of a treatment regimen for N. gonorrhoeae infection requires consideration of the anatomic site of infection, the resistance of N. gonorrhoeae strains to antimicrobials, the possibility of concurrent infection with C. trachomatis, and the side effects and costs of the various treatment regimens. Because coinfection with C. trachomatis is common, persons treated for gonorrhea should be treated presumptively with a regimen that is effective against C. trachomatis (see Chlamydial Infections). Most experts agree that other regimens recommended for the treatment of C. trachomatis infection are also likely to be satisfactory for the treatment of coinfection (see Chlamydial Infections). However, studies have not been conducted to investigate possible interactions between other treatments for N. gonorrhoeae and C. trachomatis, including interactions influencing the effectiveness and side effects of cotreatment. In clinical trials, these recommended regimens cured >95% of anal and genital infections; any of the regimens may be used for uncomplicated anal or genital infection. Published studies indicate that ceftriaxone 125 mg and ciprofloxacin 500 mg can cure ≥90% of pharyngeal infections. If pharyngeal infection is a concern, one of these two regimens should be used. Ceftriaxone in a single dose of either 125 mg or 250 mg provides sustained, high bactericidal levels in the blood. Extensive clinical experience indicates that both doses are safe and effective for the treatment of uncomplicated gonorrhea at all sites. In the past, the 250 mg dose was recommended on the supposition that the routine use of a higher dose may forestall the development of resistance. However, on the basis of ceftriaxone's activity against N. gonorrhoeae, its pharmacokinetics, and the results in clinical trials of doses as low as 62.5 mg, the 125 mg dose appears to have a therapeutic reserve at least as large as that of other accepted treatment regimens. No ceftriaxone-resistant strains of N. gonorrhoeae have been reported. The drawbacks of ceftriaxone are that it is expensive, is currently unavailable in vials of <250 mg, and must be administered by injection. Some health-care providers believe that the discomfort of the injection may be reduced by using 1% lidocaine solution as a diluent. Ceftriaxone also may abort incubating syphilis, a concern when gonorrhea treatment is not accompanied by a 7-day course of doxycycline or erythromycin for the presumptive treatment of chlamydia. Cefixime has an antimicrobial spectrum similar to that of ceftriaxone, but the 400 mg oral dose does not provide as high or as sustained a bactericidal level as does 125 mg of ceftriaxone.
Cefixime appears to be effective against pharyngeal gonococcal infection, but few patients with pharyngeal infection have been included in studies. No gonococcal strains resistant to cefixime have been reported. The advantage of cefixime is that it can be administered orally. It is not known whether the 400 mg dose can cure incubating syphilis.
Ciprofloxacin at a dose of 500 mg provides sustained bactericidal levels in the blood. Clinical trials have demonstrated that both 250 mg and 500 mg doses are safe and effective for the treatment of uncomplicated gonorrhea at all sites. Most clinical experience in the United States has been with the 500 mg dose. Ciprofloxacin can be administered orally and is less expensive than ceftriaxone. No resistance has been reported in the United States, but strains with decreased susceptibility to some quinolones are becoming common in Asia and have been reported in North America. The 500 mg dose is recommended, rather than the 250 mg dose, because of the trend toward decreasing susceptibility to quinolones and because of rare reports of treatment failure. Quinolones are contraindicated for pregnant or nursing women and for persons ≤17 years of age on the basis of information from animal studies. Quinolones are not active against T. pallidum.
Ofloxacin is active against N. gonorrhoeae and has favorable pharmacokinetics; the 400 mg dose has been effective for the treatment of uncomplicated anal and genital gonorrhea. In published studies, a 400 mg dose cured 22 (88%) of 25 pharyngeal infections.
# Alternative Regimens
• Spectinomycin 2 g IM in a single dose. Spectinomycin has the disadvantages of being injectable, expensive, inactive against T. pallidum, and relatively ineffective against pharyngeal gonorrhea. In addition, resistant strains have been reported in the United States. However, spectinomycin remains useful for the treatment of patients who can tolerate neither cephalosporins nor quinolones.
• Injectable cephalosporin regimens other than ceftriaxone 125 mg that have demonstrated efficacy against uncomplicated anal or genital gonococcal infections include ceftizoxime 500 mg IM in a single dose; cefotaxime 500 mg IM in a single dose; cefotetan 1 g IM in a single dose; and cefoxitin 2 g IM in a single dose. None of these injectable cephalosporins offers any advantage compared with ceftriaxone, and there is less clinical experience with them for the treatment of uncomplicated gonorrhea. Of these four regimens, ceftizoxime 500 mg appears to be the most effective according to cumulative experience in published clinical trials.
• Oral cephalosporin regimens other than cefixime 400 mg include cefuroxime axetil 1 g orally in a single dose and cefpodoxime proxetil 200 mg orally in a single dose. These two regimens have antigonococcal activity and pharmacokinetics that are less favorable than those of the 400 mg cefixime regimen, and there is less clinical experience with them in the treatment of gonorrhea. They have not been very effective against pharyngeal infections among the few patients studied.
• Quinolone regimens other than ciprofloxacin 500 mg and ofloxacin 400 mg include enoxacin 400 mg orally in a single dose; lomefloxacin 400 mg orally in a single dose; and norfloxacin 800 mg orally in a single dose. They appear to be safe and effective for the treatment of uncomplicated gonorrhea, but none appears to offer any advantage over ciprofloxacin at a dose of 500 mg or ofloxacin at 400 mg. Enoxacin and norfloxacin are active against N.
gonorrhoeae, have favorable pharmacokinetics, and have been effective in clinical trials, but there is minimal experience with their use in the United States. Lomefloxacin is effective against N. gonorrhoeae and has very favorable pharmacokinetics, but there are few published clinical studies to support its use for the treatment of gonorrhea, and there is little experience with its use in the United States.
Many other antimicrobials are active against N. gonorrhoeae. These guidelines are not intended to be a comprehensive list of all effective treatment regimens.
# Other Management Considerations
Persons treated for gonorrhea should be screened for syphilis by serology when gonorrhea is first detected. Gonorrhea treatment regimens that include ceftriaxone or a 7-day course of either doxycycline or erythromycin may cure incubating syphilis, but few data relevant to this topic are available.
# Follow-Up
Persons who have uncomplicated gonorrhea and who are treated with any of the regimens in these guidelines need not return for a test-of-cure. Persons with symptoms persisting after treatment should be evaluated by culture for N. gonorrhoeae, and any gonococci isolated should be tested for antimicrobial susceptibility. Infections detected after treatment with one of the recommended regimens more commonly occur because of reinfection rather than treatment failure, indicating a need for improved sex partner referral and patient education. Persistent urethritis, cervicitis, or proctitis also may be caused by C. trachomatis and other organisms.
# Management of Sex Partners
Patients should be instructed to refer sex partners for evaluation and treatment. Sex partners of symptomatic patients who have N. gonorrhoeae infection should be evaluated and treated for N. gonorrhoeae and C. trachomatis infections if their last sexual contact with the patient was within 30 days of onset of the patient's symptoms. If the index patient is asymptomatic, sex partners whose last sexual contact with the patient was within 60 days of diagnosis should be evaluated and treated. If the patient's last sexual intercourse took place before those time periods, health-care providers should treat the patient's most recent sex partner. Patients should be instructed to avoid sexual intercourse until patient and partner(s) are cured. In the absence of a microbiologic test-of-cure, this means until therapy is completed and patient and partner(s) are without symptoms.
# Special Considerations
# Allergy, Intolerance, or Adverse Reactions
Persons who cannot tolerate cephalosporins should, in general, be treated with quinolones. Those who can take neither cephalosporins nor quinolones should be treated with spectinomycin, except for those patients who are suspected or known to have pharyngeal infection. For pharyngeal infections among persons who can tolerate neither cephalosporins nor quinolones, some studies suggest that trimethoprim/sulfamethoxazole may be effective at a dose of 720 mg trimethoprim/3,600 mg sulfamethoxazole orally once a day for 5 days.
# Pregnancy
Pregnant women should not be treated with quinolones or tetracyclines. Those infected with N. gonorrhoeae should be treated with a recommended or alternative cephalosporin. Pregnant women who cannot tolerate a cephalosporin should be administered a single dose of 2 g of spectinomycin IM. Erythromycin is the recommended treatment for presumptive or diagnosed C. trachomatis infection during pregnancy (see Chlamydial Infections).
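The regimen-selection considerations and partner-referral time windows described above are essentially decision rules, so they can be expressed compactly in code. The following Python sketch is illustrative only: the function names, parameter names, and return strings are hypothetical, and the logic covers only the cases discussed above (pharyngeal concern, pregnancy, age, and drug intolerance), not every clinical contingency.

```python
from datetime import date, timedelta

def gonorrhea_regimen(pharyngeal_concern: bool, pregnant: bool, age_years: int,
                      tolerates_cephalosporins: bool,
                      tolerates_quinolones: bool) -> str:
    """Hypothetical helper reflecting the selection rules described above."""
    # Quinolones are contraindicated for pregnant or nursing women and
    # for persons <=17 years of age.
    quinolones_ok = tolerates_quinolones and not pregnant and age_years > 17
    if tolerates_cephalosporins:
        # Ceftriaxone 125 mg IM is effective at all sites, including the pharynx.
        return "Ceftriaxone 125 mg IM in a single dose"
    if quinolones_ok:
        # Ciprofloxacin 500 mg also cures >=90% of pharyngeal infections.
        return "Ciprofloxacin 500 mg orally in a single dose"
    if pharyngeal_concern:
        # Spectinomycin is relatively ineffective against pharyngeal gonorrhea;
        # some studies suggest TMP/SMX in this situation (see text above).
        return ("Trimethoprim/sulfamethoxazole 720 mg/3,600 mg orally "
                "once a day for 5 days")
    return "Spectinomycin 2 g IM in a single dose"

def partner_within_referral_window(last_contact: date, reference_date: date,
                                   index_symptomatic: bool) -> bool:
    """reference_date is the symptom-onset date for a symptomatic index
    patient and the diagnosis date otherwise. A partner outside the window
    is not ignored: the most recent partner is still treated."""
    window = timedelta(days=30 if index_symptomatic else 60)
    return reference_date - last_contact <= window
```

Whichever single-dose regimen is chosen, the text above adds presumptive co-treatment for C. trachomatis (e.g., doxycycline 100 mg orally 2 times a day for 7 days), and quinolone contraindications are screened before that drug class is considered.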
# HIV Infection
Persons with HIV infection and gonococcal infection should receive the same treatment as persons not infected with HIV.
# Gonococcal Conjunctivitis
Only one North American study of the treatment of gonococcal conjunctivitis among adults has been published in recent years. In that study, 12 of 12 patients responded favorably to a single 1 g IM injection of ceftriaxone. The recommendations that follow reflect the opinions of expert consultants.
# Treatment
# Recommended Regimen
A single 1 g dose of ceftriaxone should be administered IM, and the infected eye should be lavaged with saline solution once.
# Management of Sex Partners
As for uncomplicated infections, patients should be instructed to refer sex partner(s) for evaluation and treatment (see Uncomplicated Gonococcal Infections, Management of Sex Partners).
# Disseminated Gonococcal Infection
Disseminated gonococcal infection (DGI) results from gonococcal bacteremia and often causes petechial or pustular acral skin lesions, asymmetrical arthralgias, tenosynovitis, or septic arthritis; it is occasionally complicated by hepatitis and, rarely, by endocarditis or meningitis. Strains of N. gonorrhoeae that cause DGI tend to cause little genital inflammation. These strains have become uncommon in the United States during the past decade.
No North American studies of the treatment of DGI have been published recently. The recommendations that follow reflect the opinions of expert consultants.
# Treatment
Hospitalization is recommended for initial therapy, especially for patients who cannot be relied on to comply with treatment, for those for whom the diagnosis is uncertain, and for those who have purulent synovial effusions or other complications. Patients should be examined for clinical evidence of endocarditis and meningitis. Patients treated for DGI should be treated presumptively for concurrent C. trachomatis infection.
# Recommended Initial Regimen
Ceftriaxone 1 g IM or IV every 24 hours.
All regimens should be continued for 24-48 hours after improvement begins; then therapy may be switched to one of the following regimens to complete a full week of antimicrobial therapy:
Cefixime 400 mg orally 2 times a day or
Ciprofloxacin 500 mg orally 2 times a day.
# Alternative Initial Regimens
NOTE: Ciprofloxacin is contraindicated for children, adolescents ≤17 years of age, and pregnant and lactating women.
# Management of Sex Partners
Gonococcal infection is often asymptomatic in sex partners of patients with DGI. As for uncomplicated infections, patients should be instructed to refer sex partner(s) for evaluation and treatment (see Uncomplicated Gonococcal Infections, Management of Sex Partners).
# Gonococcal Meningitis and Endocarditis
# Recommended Initial Regimen
Ceftriaxone 1-2 g IV every 12 hours.
Therapy for meningitis should be continued for 10-14 days and for endocarditis for at least 4 weeks. Treatment of complicated DGI should be undertaken in consultation with an expert.
# Management of Sex Partners
As for uncomplicated infections, patients should be instructed to refer sex partners for evaluation and treatment (see Uncomplicated Gonococcal Infections, Management of Sex Partners).
# Gonococcal Infections Among Infants
Gonococcal infection among neonates usually results from peripartum exposure to infected cervical exudate of the mother and is usually an acute illness that begins 2-5 days after birth. The incidence of N. gonorrhoeae among neonates varies among U.S.
communities and depends on the prevalence of infection among pregnant women, whether pregnant women are screened for gonorrhea, and whether newborns receive ophthalmia prophylaxis. The prevalence of infection is <1% in most prenatal patient populations but may be higher in some settings. Of greatest concern are the complications of ophthalmia neonatorum and sepsis, including arthritis and meningitis. Less serious manifestations at sites of infection include rhinitis, vaginitis, urethritis, and inflammation at sites of intrauterine fetal monitoring.
# Ophthalmia Neonatorum Caused by N. gonorrhoeae
In most patient populations in the United States, C. trachomatis and nonsexually transmitted agents are more common causes of neonatal conjunctivitis than N. gonorrhoeae. However, N. gonorrhoeae is especially important because gonococcal ophthalmia may result in perforation of the globe and in blindness.
# Diagnostic Considerations
Infants at high risk for gonococcal ophthalmia in the United States are those who do not receive ophthalmia prophylaxis, whose mothers have had no prenatal care, or whose mothers have a history of STDs or substance abuse. The presence of typical Gram-negative diplococci in a Gram-stained smear of conjunctival exudate suggests a diagnosis of N. gonorrhoeae conjunctivitis. Such patients should be treated presumptively for gonorrhea after appropriate cultures for N. gonorrhoeae are obtained; appropriate chlamydial testing should be done simultaneously. The decision not to treat presumptively for N. gonorrhoeae among patients without evidence of gonococci on a Gram-stained smear of conjunctival exudate, or among patients for whom a Gram-stained smear cannot be performed, must be made on a case-by-case basis after considering the previously described risk factors.
A specimen of conjunctival exudate also should be cultured for isolation of N. gonorrhoeae, because culture is needed for definitive microbiologic identification and for antibiotic susceptibility testing. Such definitive testing is required because of the public health and social consequences for the infant and mother that may result from a diagnosis of gonococcal ophthalmia. Moraxella catarrhalis and other Neisseria species are uncommon causes of neonatal conjunctivitis that can mimic N. gonorrhoeae on Gram-stained smear. To differentiate N. gonorrhoeae from M. catarrhalis and other Neisseria species, the laboratory should be instructed to perform confirmatory tests on any colonies that meet presumptive criteria for N. gonorrhoeae.
# Recommended Regimen
Ceftriaxone 25-50 mg/kg IV or IM in a single dose, not to exceed 125 mg.
NOTE: Topical antibiotic therapy alone is inadequate and is unnecessary if systemic treatment is administered.
# Other Management Considerations
Simultaneous infection with C. trachomatis has been reported and should be considered for patients who do not respond satisfactorily. The mother and infant should be tested for chlamydial infection at the same time that gonorrhea testing is done (see Ophthalmia Neonatorum Caused by C. trachomatis). Ceftriaxone should be administered cautiously to infants with elevated bilirubin levels, especially premature infants.
# Follow-Up
Infants should be admitted to the hospital and evaluated for signs of disseminated infection (e.g., sepsis, arthritis, and meningitis). One dose of ceftriaxone is adequate for gonococcal conjunctivitis, but many pediatricians prefer to maintain infants on antibiotics until cultures are negative at 48-72 hours.
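The single-dose neonatal regimen above is weight-based with an absolute ceiling, which makes the arithmetic easy to get wrong in one direction or the other. A minimal sketch of that arithmetic, assuming a hypothetical function name (this illustrates only the stated 25-50 mg/kg, 125 mg maximum rule and is not dosing software):

```python
def neonatal_ceftriaxone_mg(weight_kg: float, mg_per_kg: float = 50.0) -> float:
    """Single-dose ceftriaxone for gonococcal ophthalmia neonatorum:
    25-50 mg/kg IV or IM, not to exceed 125 mg (per the regimen above)."""
    if not 25.0 <= mg_per_kg <= 50.0:
        raise ValueError("mg/kg must fall within the stated 25-50 mg/kg range")
    return min(weight_kg * mg_per_kg, 125.0)

# Example: a 3.2 kg neonate at 50 mg/kg would compute to 160 mg,
# but the 125 mg ceiling applies.
assert neonatal_ceftriaxone_mg(3.2) == 125.0
assert neonatal_ceftriaxone_mg(2.0, mg_per_kg=25.0) == 50.0
```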
The decision on duration of therapy should be made with input from experienced physicians.
# Management of Mothers and Their Sex Partners
The mothers of infants with gonococcal infection and their sex partners should be evaluated and treated following the recommendations for treatment of gonococcal infections in adults (see Gonococcal Infections Among Adolescents and Adults).
# Disseminated Gonococcal Infection Among Infants
Sepsis, arthritis, meningitis, or any combination thereof are rare complications of neonatal gonococcal infection. Gonococcal scalp abscesses also may develop as a result of fetal monitoring. Detection of gonococcal infection among neonates who have sepsis, arthritis, meningitis, or scalp abscesses requires cultures of blood, CSF, and joint aspirate on chocolate agar. Cultures of specimens from the conjunctiva, vagina, oropharynx, and rectum onto gonococcal selective medium are useful to identify sites of primary infection, especially if inflammation is present. Positive Gram-stained smears of exudate, CSF, or joint aspirate provide a presumptive basis for initiating treatment for N. gonorrhoeae. Diagnoses based on positive Gram-stained smears or presumptive isolation by culture should be confirmed with definitive tests on culture isolates.
# Recommended Regimen
Ceftriaxone 25-50 mg/kg/day IV or IM in a single daily dose for 7 days (10-14 days if meningitis is documented); or
Cefotaxime 25 mg/kg IV or IM every 12 hours for 7 days (10-14 days if meningitis is documented).
# Prophylactic Treatment for Infants Whose Mothers Have Gonococcal Infection
Infants born to mothers who have untreated gonorrhea are at high risk for infection.
# Recommended Regimen in the Absence of Signs of Gonococcal Infection
Ceftriaxone 25-50 mg/kg IV or IM, not to exceed 125 mg, in a single dose.
# Other Management Considerations
If simultaneous infection with C. trachomatis has been reported, mother and infant should be tested for chlamydial infection.
# Follow-Up
Follow-up examination is not required.
# Management of Mothers and Their Sex Partners
The mothers of infants with gonococcal infection and the mothers' sex partners should be evaluated and treated following the recommendations for treatment of gonococcal infections among adults (see Gonococcal Infections).
# Gonococcal Infections Among Children
After the neonatal period, sexual abuse is the most common cause of gonococcal infection among preadolescent children (see Sexual Assault or Abuse of Children). Vaginitis is the most common manifestation of gonococcal infection among preadolescent children. PID following vaginal infection appears to be less common than among adults. Among sexually abused children, anorectal and pharyngeal infections with N. gonorrhoeae are common and are frequently asymptomatic.
# Diagnostic Considerations
Because of the potential medical/legal use of test results for N. gonorrhoeae among children, only standard culture systems for the isolation of N. gonorrhoeae should be used to diagnose N. gonorrhoeae for these children. Nonculture gonococcal tests, including Gram-stained smears, DNA probes, and EIA tests, should not be used; none of these tests has been approved by the FDA for use in the oropharynx, rectum, or genital tract of children. Specimens from the vagina, urethra, pharynx, or rectum should be streaked onto selective media for isolation of N. gonorrhoeae. All presumptive isolates of N.
gonorrhoeae should be confirmed by at least two tests that involve different principles, e.g., biochemical, enzyme substrate, or serologic. Isolates should be preserved to permit additional or repeated analysis.
# Recommended Regimen for Children
# Children Who Weigh ≥45 kg
Children who weigh ≥45 kg should be administered the same treatment regimens as those recommended for adults (see Gonococcal Infections).
# Children Who Weigh <45 kg
The following treatment recommendations are for children with uncomplicated gonococcal vulvovaginitis, cervicitis, urethritis, pharyngitis, or proctitis.
Ceftriaxone 125 mg IM in a single dose.
# Alternative Regimen
Spectinomycin 40 mg/kg (maximum 2 g) IM in a single dose.
# Children Who Weigh <45 kg and Who Have Bacteremia, Arthritis, or Meningitis
# Recommended Regimen
Ceftriaxone 50 mg/kg (maximum 1 g) IM or IV in a single daily dose for 7 days.
NOTE: For meningitis, increase the duration of treatment to 10-14 days and the maximum dose to 2 g.
# Follow-Up
Follow-up cultures of specimens from infected sites are necessary to ensure that treatment has been effective.
# Other Management Considerations
Only parenteral cephalosporins are recommended for use among children. Ceftriaxone is approved for all gonococcal indications among children; cefotaxime is approved for gonococcal ophthalmia only. Oral cephalosporins (cefixime, cefuroxime axetil, cefpodoxime) have not been evaluated adequately in the treatment of gonococcal infections among pediatric patients to recommend their use. The pharmacokinetic activity of these drugs among adults cannot be extrapolated to children.
All children with gonococcal infections should be evaluated for coinfection with syphilis and C. trachomatis. For a discussion of issues regarding sexual assault, refer to Sexual Assault or Abuse of Children.
# Ophthalmia Neonatorum Prophylaxis
Instillation of a prophylactic agent into the eyes of all newborn infants is recommended to prevent gonococcal ophthalmia neonatorum and is required by law in most states. Although all the regimens that follow effectively prevent gonococcal eye disease, their efficacy in preventing chlamydial eye disease is not clear. Furthermore, they do not eliminate nasopharyngeal colonization with C. trachomatis. Treatment of gonococcal and chlamydial infections among pregnant women is the best method for preventing neonatal gonococcal and chlamydial disease. However, ocular prophylaxis should continue because it can prevent gonococcal ophthalmia and because, in some populations, >10% of pregnant women may receive no prenatal care.
# Prophylaxis
# Recommended Preparations
Silver nitrate (1%) aqueous solution in a single application or
Erythromycin (0.5%) ophthalmic ointment in a single application or
Tetracycline ophthalmic ointment (1%) in a single application.
One of the above preparations should be instilled into the eyes of every neonate as soon as possible after delivery. If prophylaxis is delayed (i.e., not administered in the delivery room), hospitals should establish a monitoring system to ensure that all infants receive prophylaxis. All infants should be administered ocular prophylaxis, whether delivery is vaginal or caesarean. Single-use tubes or ampules are preferable to multiple-use tubes. Bacitracin is not effective.
# DISEASES CHARACTERIZED BY VAGINAL DISCHARGE
# Management of the Patient with Vaginitis
Vaginitis is characterized by a vaginal discharge (usually) or vulvar itching and irritation; a vaginal odor may be present.
The three diseases most commonly characterized by vaginitis are trichomoniasis (caused by T. vaginalis), BV (caused by a replacement of the normal vaginal flora by an overgrowth of anaerobic microorganisms and Gardnerella vaginalis), and candidiasis (usually caused by Candida albicans). MPC caused by C. trachomatis or N. gonorrhoeae occasionally causes vaginal discharge. Although vulvovaginal candidiasis is not usually transmitted sexually, it is included here because it is a common infection among women being evaluated for STDs.
The diagnosis of vaginitis is made by pH and microscopic examination of fresh samples of the discharge. The pH of the vaginal secretions can be determined with narrow-range pH paper; an elevated pH (>4.5) is typical of BV or trichomoniasis. One way to examine the discharge is to dilute one sample in 1-2 drops of 0.9% normal saline solution on one slide and a second sample in 10% potassium hydroxide (KOH) solution on a second slide. An amine odor detected immediately after applying KOH suggests either BV or trichomoniasis. A cover slip is placed on each slide, and the slides are examined under a microscope at low and high dry power. The motile T. vaginalis or the clue cells of BV are usually identified easily in the saline specimen. The yeast or pseudohyphae of Candida species are identified more easily in the KOH specimen. The presence of objective signs of vulvar inflammation in the absence of vaginal pathogens, along with a minimal amount of discharge, suggests the possibility of mechanical or chemical irritation of the vulva. Culture for T. vaginalis or Candida species is more sensitive than microscopic examination, but the specificity of culture for Candida species to diagnose vaginitis is less clear. Laboratory testing fails to identify a cause among a substantial minority of women.
# Bacterial Vaginosis
BV is a clinical syndrome resulting from replacement of the normal H2O2-producing Lactobacillus spp in the vagina with high concentrations of anaerobic bacteria (e.g., Bacteroides spp, Mobiluncus spp), G. vaginalis, and Mycoplasma hominis. This condition is the most prevalent cause of vaginal discharge or malodor. However, half the women who meet clinical criteria for BV have no symptoms. The cause of the microbial alteration is not fully understood. Although BV is associated with sexual activity in that women who have never been sexually active are rarely affected and acquisition of BV is associated with having multiple sex partners, BV is not considered exclusively an STD. Treatment of the male sex partner has not been found beneficial in preventing the recurrence of BV.
# Diagnostic Considerations
BV may be diagnosed by the use of clinical or Gram stain criteria. Clinical criteria require three of the following four symptoms or signs (a minimal checklist sketch appears later in this section):
• A homogeneous, white, noninflammatory discharge that adheres to the vaginal walls;
• The presence of clue cells on microscopic examination;
• pH of vaginal fluid >4.5;
• A fishy odor of vaginal discharge before or after addition of 10% KOH (whiff test).
When Gram stain is used, determining the relative concentration of the bacterial morphotypes characteristic of the altered flora of BV is an acceptable laboratory method for diagnosing BV. Culture of G. vaginalis is not recommended as a diagnostic tool because it is not specific: G. vaginalis can be isolated from vaginal cultures of half of normal women.
# Treatment
The principal goal of therapy is to relieve vaginal symptoms and signs.
Therefore, only women with symptomatic disease require treatment. Because male sex partners of women with BV are not symptomatic, and because treatment of male partners has not been shown to alter either the clinical course of BV in women during treatment or the relapse/reinfection rate, preventing transmission to men is not a goal of therapy.
Many of the bacterial flora characterizing BV have been recovered from the endometrium or salpinx of women with PID. BV has been associated with endometritis, PID, and vaginal cuff cellulitis following invasive procedures such as endometrial biopsy, hysterectomy, hysterosalpingography, placement of an IUD, caesarean section, and uterine curettage. A randomized controlled trial found that treatment of BV with metronidazole substantially reduced post-abortion PID. Based on these data, it may be reasonable to consider treatment of BV (symptomatic or asymptomatic) before performing surgical abortion procedures. However, more data are needed before treatment of asymptomatic BV can be recommended when performing other invasive procedures.
# Recommended Regimen
Metronidazole 500 mg orally 2 times a day for 7 days.
NOTE: Patients should be advised to avoid using alcohol during treatment with metronidazole and for 24 hours thereafter.
# Alternative Regimens
Metronidazole 2 g orally in a single dose.
The following alternative regimens have been effective in clinical trials, although experience with these regimens is limited:
Clindamycin cream, 2%, one full applicator (5 g) intravaginally at bedtime for 7 days; or
Metronidazole gel, 0.75%, one full applicator (5 g) intravaginally, 2 times a day for 5 days; or
Clindamycin 300 mg orally 2 times a day for 7 days.
Oral metronidazole has been shown in numerous studies to be efficacious for the treatment of BV, resulting in relief of symptoms and improvement in clinical course and flora disturbances. Based on efficacy data from four randomized controlled trials, the overall cure rates are 95% for the 7-day regimen and 84% for the 2 g single-dose regimen. Some health-care providers remain concerned about the possibility of metronidazole mutagenicity, which has been suggested by experiments on animals using extremely high and prolonged doses; however, there is no evidence for mutagenicity in humans. Some health-care providers prefer the intravaginal route because of the lack of systemic side effects such as mild-to-moderate gastrointestinal upset and unpleasant taste (mean peak serum concentrations of metronidazole following intravaginal administration are <2% of those of standard 500 mg oral doses, and the mean bioavailability of clindamycin cream is about 4%).
# Follow-Up
Follow-up visits are not necessary if symptoms resolve. Recurrence of BV is common. The alternative regimens may be used for treatment of recurrent disease. No long-term maintenance regimen with any therapeutic agent is currently available.
# Management of Sex Partners
Treatment of sex partners in clinical trials has not influenced the woman's response to therapy, nor has it influenced the relapse or recurrence rate. Therefore, routine treatment of sex partners is not recommended.
# Special Considerations
# Allergy or Intolerance to the Recommended Therapy
Clindamycin cream is preferred in case of allergy or intolerance to metronidazole. Metronidazole gel can be considered for patients who do not tolerate systemic metronidazole, but patients allergic to oral metronidazole should not be administered metronidazole vaginally.
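The clinical criteria listed under Diagnostic Considerations above amount to a three-of-four counting rule. A minimal sketch, assuming hypothetical parameter names (it encodes only the counting rule, not the clinical judgment behind each finding):

```python
def meets_bv_clinical_criteria(homogeneous_discharge: bool,
                               clue_cells_present: bool,
                               vaginal_ph: float,
                               whiff_test_positive: bool) -> bool:
    """Clinical diagnosis of BV requires three of the four findings above."""
    findings = (
        homogeneous_discharge,   # adherent, white, noninflammatory discharge
        clue_cells_present,      # clue cells on microscopic examination
        vaginal_ph > 4.5,        # elevated vaginal fluid pH
        whiff_test_positive,     # fishy amine odor with 10% KOH
    )
    return sum(findings) >= 3

# Example: elevated pH plus clue cells alone (two findings) do not suffice.
assert meets_bv_clinical_criteria(False, True, 5.0, False) is False
assert meets_bv_clinical_criteria(True, True, 5.0, False) is True
```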
# Pregnancy
Because metronidazole is contraindicated during the first trimester of pregnancy, clindamycin vaginal cream is the preferred treatment for BV during the first trimester (clindamycin cream is recommended instead of oral clindamycin because of the general desire to limit the exposure of the fetus to medication). During the second and third trimesters of pregnancy, oral metronidazole can be used, although vaginal metronidazole gel or clindamycin cream may be preferable. BV has been associated with adverse outcomes of pregnancy (e.g., premature rupture of the membranes, preterm labor, and preterm delivery), and the organisms found in increased concentration in BV are also commonly present in postpartum or post-caesarean endometritis. Whether treatment of BV among pregnant women would reduce the risk of adverse pregnancy outcomes is unknown; randomized controlled trials have not been conducted.
# HIV Infection
Persons with HIV and BV should receive the same treatment as persons without HIV.
# Trichomoniasis
Trichomoniasis is caused by the protozoan T. vaginalis. The majority of men infected with T. vaginalis are asymptomatic, but many women are symptomatic. Among women, T. vaginalis typically causes a diffuse, malodorous, yellow-green discharge with vulvar irritation. There is recent evidence of a possible relationship between vaginal trichomoniasis and adverse pregnancy outcomes, particularly premature rupture of the membranes and preterm delivery.
# HIV Infection
Persons with HIV infection and trichomoniasis should receive the same treatment as persons without HIV.
# Vulvovaginal Candidiasis
Vulvovaginal candidiasis (VVC) is caused by C. albicans or, occasionally, by other Candida spp, Torulopsis sp, or other yeasts. An estimated 75% of women will experience at least one episode of VVC during their lifetime, and 40%-45% will experience two or more episodes. A small percentage of women (probably <5%) experience recurrent VVC (RVVC). Typical symptoms of VVC include pruritus and vaginal discharge. Other symptoms may include vaginal soreness, vulvar burning, dyspareunia, and external dysuria. None of these symptoms is specific for VVC. VVC usually is not sexually acquired or transmitted.
# Diagnostic Considerations
A diagnosis of Candida vaginitis is suggested clinically by pruritus in the vulvar area together with erythema of the vagina or vulva; a white discharge may occur. The diagnosis can be made when a woman has signs and symptoms of vaginitis and when either a wet preparation or Gram stain of vaginal discharge demonstrates yeasts or pseudohyphae or a culture or other test yields a positive result for a yeast species. Vaginitis caused solely by Candida infection is associated with a normal vaginal pH (≤4.5). Use of 10% KOH in wet preparations improves the visualization of yeast and mycelia by disrupting cellular material that may obscure the yeast or pseudohyphae. Identifying Candida in the absence of symptoms should not lead to treatment, because approximately 10%-20% of women normally harbor Candida spp and other yeasts in the vagina. VVC may be present concurrently with STDs.
# Treatment
Topical formulations provide effective treatment for VVC. The topically applied azole drugs are more effective than nystatin. Treatment with azoles results in relief of symptoms and negative cultures among 80%-90% of patients after therapy is completed.
# Recommended Regimens
The following intravaginal formulations are recommended for the treatment of VVC.
Butoconazole 2% cream 5 g intravaginally for 3 days;* or
Clotrimazole 1% cream 5 g intravaginally for 7-14 days;† or
Clotrimazole 100 mg vaginal tablet for 7 days;† or
Clotrimazole 100 mg vaginal tablet, two tablets for 3 days; or
Clotrimazole 500 mg vaginal tablet, one tablet in a single application; or
Miconazole 2% cream 5 g intravaginally for 7 days;*† or
Miconazole 200 mg vaginal suppository, one suppository for 3 days;* or
Miconazole 100 mg vaginal suppository, one suppository for 7 days;*† or
Tioconazole 6.5% ointment 5 g intravaginally in a single application;* or
Terconazole 0.4% cream 5 g intravaginally for 7 days; or
Terconazole 0.8% cream 5 g intravaginally for 3 days; or
Terconazole 80 mg suppository, one suppository for 3 days.*
*These creams and suppositories are oil-based and may weaken latex condoms and diaphragms. Refer to product labeling for further information.
†Over-the-counter (OTC) preparations.
Although information is not conclusive, single-dose treatments should probably be reserved for cases of uncomplicated, mild-to-moderate VVC. Multi-day regimens (3- and 7-day) are the preferred treatment for severe or complicated VVC.
Preparations for intravaginal administration of both miconazole and clotrimazole are now available over-the-counter (OTC [nonprescription]), and women with VVC can choose one of those preparations. The duration of treatment with either preparation is 7 days. Self-medication with OTC preparations should be advised only for women who have been diagnosed previously with VVC and who experience a recurrence of the same symptoms. Any woman whose symptoms persist after using an OTC preparation or who experiences a recurrence of symptoms within 2 months should seek medical care.
# Alternative Regimens
Several trials have demonstrated that oral azole agents such as fluconazole, ketoconazole, and itraconazole may be as effective as topical agents. The optimum dose and duration of oral therapy have not been established, but a range of 1-5 days of treatment, depending on the agent, has been effective in clinical trials. The ease of administration of oral agents is an advantage over topical therapies. However, the potential for toxicity associated with using a systemic drug, particularly ketoconazole, must be considered. No oral agent is currently approved by the FDA for the treatment of acute VVC.
# Follow-Up
Patients should be instructed to return for follow-up visits only if symptoms persist or recur. Women who experience three or more episodes of VVC per year should be evaluated for predisposing conditions (see Recurrent Vulvovaginal Candidiasis).
# Management of Sex Partners
VVC is not acquired through sexual intercourse; treatment of sex partners has not been demonstrated to reduce the frequency of recurrences. Therefore, routine notification or treatment of sex partners is not warranted. A minority of male sex partners may have balanitis, which is characterized by erythematous areas on the glans in conjunction with pruritus or irritation. These partners may benefit from treatment with topical antifungal agents to relieve symptoms.
# Special Considerations
# Allergy or Intolerance to the Recommended Therapy
Topical agents are usually free of systemic side effects, although local burning or irritation may occur. Oral agents occasionally cause nausea, abdominal pain, and headaches. Therapy with the oral azoles has been associated rarely with abnormal elevations of liver enzymes.
Hepatotoxicity secondary to ketoconazole therapy has been estimated to occur in 1 in 10,000 to 1 in 15,000 exposed persons. Clinically important interactions may occur when these oral agents are administered with other drugs, including terfenadine, rifampin, astemizole, phenytoin, cyclosporin A, coumarin-like agents, and oral hypoglycemic agents.
# Pregnancy
VVC is common during pregnancy. Only topical azole therapies should be used for the treatment of pregnant women. The most effective treatments that have been studied for pregnant women are clotrimazole, miconazole, butoconazole, and terconazole. Many experts recommend 7 days of therapy during pregnancy.
# HIV Infection
Acute VVC occurs frequently among women with HIV infection and may be more severe for these women than for other women. However, insufficient information exists to determine the optimal management of VVC in HIV-infected women. Until such information becomes available, women with HIV infection and acute VVC should be treated following the same regimens as women without HIV infection.
# Recurrent Vulvovaginal Candidiasis
RVVC, usually defined as three or more episodes of symptomatic VVC annually, affects a small proportion of women (probably <5%). The natural history and pathogenesis of RVVC are poorly understood. Risk factors for RVVC include diabetes mellitus, immunosuppression, broad-spectrum antibiotic use, corticosteroid use, and HIV infection, although the majority of women with RVVC have no apparent predisposing conditions. Clinical trials addressing the management of RVVC have involved continuing therapy between episodes.
# Treatment
The optimal treatment for RVVC has not been established. Ketoconazole 100 mg orally once daily for up to 6 months reduces the frequency of episodes of RVVC. Current studies are evaluating weekly intravaginal administration of clotrimazole, as well as oral therapy with itraconazole and fluconazole, in the treatment of RVVC. All cases of RVVC should be confirmed by culture before maintenance therapy is initiated. Although patients with RVVC should be evaluated for predisposing conditions, routinely performing HIV testing for women with RVVC who do not have HIV risk factors is unwarranted.
# Follow-Up
Patients who are receiving treatment for RVVC should receive regular follow-up to monitor the effectiveness of therapy and the occurrence of side effects.
# Management of Sex Partners
Treatment of sex partners does not prevent recurrences, and routine therapy is not warranted. However, partners with symptomatic balanitis or penile dermatitis should be treated with a topical agent.
# Special Considerations
# HIV Infection
Insufficient information exists to determine the optimal management of RVVC among HIV-infected women. Until such information becomes available, management should be the same as for other women with RVVC.
# PELVIC INFLAMMATORY DISEASE
PID comprises a spectrum of inflammatory disorders of the upper genital tract among women and may include any combination of endometritis, salpingitis, tuboovarian abscess, and pelvic peritonitis. Sexually transmitted organisms, especially N. gonorrhoeae and C. trachomatis, are implicated in the majority of cases; however, microorganisms that can be part of the vaginal flora, such as anaerobes, G. vaginalis, H. influenzae, enteric Gram-negative rods, and Streptococcus agalactiae, also can cause PID. Some experts also believe that M. hominis and U. urealyticum are etiologic agents of PID.
# Diagnostic Considerations
Because of the wide variation in symptoms and signs among women with this condition, a clinical diagnosis of acute PID is difficult. Many women with PID exhibit subtle or mild symptoms that are not readily recognized as PID. Consequently, delay in diagnosis and effective treatment probably contributes to inflammatory sequelae in the upper reproductive tract. Laparoscopy can be used to obtain a more accurate diagnosis of salpingitis and a more complete bacteriologic diagnosis. However, this diagnostic tool is often neither readily available for acute cases nor easily justifiable when symptoms are mild or vague. Moreover, laparoscopy will not detect endometritis and may not detect subtle inflammation of the fallopian tubes. Consequently, the diagnosis of PID is usually made on the basis of clinical findings.
The clinical diagnosis of acute PID is also imprecise. Data indicate that a clinical diagnosis of symptomatic PID has a positive predictive value (PPV) for salpingitis of 65%-90% when compared with laparoscopy as the standard. The PPV of a clinical diagnosis of acute PID varies depending on epidemiologic characteristics and the clinical setting, with a higher PPV among sexually active young (especially teenage) women and among patients attending STD clinics or from settings with high rates of gonorrhea or chlamydia. In all settings, however, no single historical, physical, or laboratory finding is both sensitive and specific for the diagnosis of acute PID (i.e., can be used both to detect all cases of PID and to exclude all women without PID). Combinations of diagnostic findings that improve either sensitivity (detecting more women who have PID) or specificity (excluding more women who do not have PID) do so only at the expense of the other. For example, requiring two or more findings excludes more women without PID but also reduces the number of women with PID who are detected.
Many episodes of PID go unrecognized. Although some women may have asymptomatic PID, others are undiagnosed because the patient or the health-care provider fails to recognize the implications of mild or nonspecific symptoms or signs, such as abnormal bleeding, dyspareunia, or vaginal discharge ("atypical PID"). Because of the difficulty of diagnosis and the potential for damage to the reproductive health of women even by apparently mild or atypical PID, experts recommend that providers maintain a low threshold for the diagnosis of PID. Even so, the effect of early treatment of asymptomatic or atypical PID on important long-term clinical outcomes is unknown.
The following recommendations for diagnosing PID are intended to help health-care providers recognize when PID should be suspected and when they need to obtain additional information to increase diagnostic certainty. These recommendations are based in part on the fact that the diagnosis and management of other common causes of lower abdominal pain (e.g., ectopic pregnancy, acute appendicitis, and functional pain) are unlikely to be impaired by initiating empiric antimicrobial therapy for PID.
No single therapeutic regimen has been established for persons with PID. When selecting a treatment regimen, health-care providers should consider availability, cost, patient acceptance, and regional differences in the antimicrobial susceptibility of the likely pathogens. Many experts recommend that all patients with PID be hospitalized so that supervised treatment with parenteral antibiotics can be initiated.
Hospitalization is especially recommended when any of the following criteria are met (a minimal checklist sketch appears after the Follow-Up discussion below):
• The diagnosis is uncertain, and surgical emergencies such as appendicitis and ectopic pregnancy cannot be excluded,
• Pelvic abscess is suspected,
• The patient is pregnant,
• The patient is an adolescent (among adolescents, compliance with therapy is unpredictable),
• The patient has HIV infection,
• Severe illness or nausea and vomiting preclude outpatient management,
• The patient is unable to follow or tolerate an outpatient regimen,
• The patient has failed to respond clinically to outpatient therapy, or
• Clinical follow-up within 72 hours of starting antibiotic treatment cannot be arranged.
# Inpatient Treatment
Experts have experience with both of the following regimens, and multiple randomized trials have demonstrated the efficacy of each.
# Regimen A
Cefoxitin 2 g IV every 6 hours or cefotetan 2 g IV every 12 hours,
PLUS
Doxycycline 100 mg IV or orally every 12 hours.
NOTE: This regimen should be continued for at least 48 hours after the patient demonstrates substantial clinical improvement, after which doxycycline 100 mg orally 2 times a day should be continued for a total of 14 days. Doxycycline administered orally has bioavailability similar to that of the IV formulation and may be administered if normal gastrointestinal function is present.
Clinical data are limited for other second- or third-generation cephalosporins (e.g., ceftizoxime, cefotaxime, and ceftriaxone), which might replace cefoxitin or cefotetan, although many authorities believe they also are effective therapy for PID. However, they are less active than cefoxitin or cefotetan against anaerobic bacteria.
# Regimen B
Ofloxacin 400 mg orally 2 times a day for 14 days,
PLUS
Either clindamycin 450 mg orally 4 times a day, or metronidazole 500 mg orally 2 times a day, for 14 days.
Clinical trials have demonstrated that the cefoxitin regimen is effective in obtaining short-term clinical response. Fewer data support the use of ceftriaxone or other third-generation cephalosporins, but, based on their similarities to cefoxitin, they also are considered effective. No data exist regarding the use of oral cephalosporins for the treatment of PID.
Ofloxacin is effective against both N. gonorrhoeae and C. trachomatis. One clinical trial demonstrated the effectiveness of oral ofloxacin in obtaining short-term clinical response with PID. Despite the results of this trial, there is concern related to ofloxacin's lack of anaerobic coverage; the addition of clindamycin or metronidazole provides this coverage. Clindamycin, but not metronidazole, further enhances the Gram-positive coverage of the regimen.
# Alternative Outpatient Regimens
Information regarding other outpatient regimens is limited. The combination of amoxicillin/clavulanic acid plus doxycycline was effective in obtaining short-term clinical response in one clinical trial, but many of the patients had to discontinue the regimen because of gastrointestinal symptoms.
# Follow-Up
Hospitalized patients receiving IV therapy should show substantial clinical improvement (e.g., defervescence, reduction in direct or rebound abdominal tenderness, and reduction in uterine, adnexal, and cervical motion tenderness) within 3-5 days of initiation of therapy. Patients who do not demonstrate improvement within this time period usually require further diagnostic workup or surgical intervention, or both.
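The hospitalization criteria listed above form an any-of rule: meeting any single criterion supports inpatient treatment. A minimal sketch, with hypothetical flag names that paraphrase the list above (this illustrates the rule's structure, not a triage tool):

```python
PID_HOSPITALIZATION_CRITERIA = (
    "diagnosis_uncertain",          # surgical emergencies cannot be excluded
    "pelvic_abscess_suspected",
    "pregnant",
    "adolescent",                   # compliance with therapy is unpredictable
    "hiv_infection",
    "severe_illness_or_vomiting",   # precludes outpatient management
    "cannot_follow_outpatient_regimen",
    "failed_outpatient_therapy",
    "no_72_hour_follow_up",
)

def recommend_hospitalization(findings: dict[str, bool]) -> bool:
    """Any single criterion being met supports hospitalization."""
    return any(findings.get(criterion, False)
               for criterion in PID_HOSPITALIZATION_CRITERIA)

# Example: pregnancy alone is sufficient.
assert recommend_hospitalization({"pregnant": True}) is True
assert recommend_hospitalization({}) is False
```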
If the provider elects to prescribe outpatient therapy, follow-up examination should be performed within 72 hours, using the criteria for clinical improvement previously described. Because of the risk for persistent infection, particularly with C. trachomatis, patients should have a microbiologic re-examination 7-10 days after completing therapy. Some experts also recommend rescreening for C. trachomatis and N. gonorrhoeae 4-6 weeks after completing therapy.
# Management of Sex Partners
Evaluation and treatment of sex partners of women who have PID is imperative because of the risk for reinfection and the high likelihood of urethral gonococcal or chlamydial infection of the partner. Because nonculture, and perhaps culture, tests for C. trachomatis and N. gonorrhoeae are thought to be insensitive among asymptomatic men, sex partners should be treated empirically with regimens effective against both of these infections, regardless of the apparent etiology of PID or the pathogens isolated from the infected woman.
Even in clinical settings in which only women are seen, special arrangements should be made to provide care for the male sex partners of women with PID. When this is not feasible, health-care providers should ensure that sex partners are appropriately referred for treatment.
# Special Considerations
# Pregnancy
Pregnant women with suspected PID should be hospitalized and treated with parenteral antibiotics.
# HIV Infection
Differences in the clinical manifestations of PID between HIV-infected women and noninfected women have not been described clearly. However, in one study, HIV-infected women with PID tended to have leukopenia or a lesser degree of leukocytosis than women who were not HIV-infected, and they were more likely to require surgical intervention. HIV-infected women who develop PID should be managed aggressively. Hospitalization and inpatient therapy with one of the IV antimicrobial regimens described in this report are recommended.
# EPIDIDYMITIS
Among men <35 years of age, epididymitis is most often caused by N. gonorrhoeae or C. trachomatis. Epididymitis caused by sexually transmitted Escherichia coli infection also occurs among homosexual men who are the insertive partners during anal intercourse. Sexually transmitted epididymitis is usually accompanied by urethritis, which is often asymptomatic. Nonsexually transmitted epididymitis associated with urinary tract infections caused by Gram-negative enteric organisms is more common among men >35 years of age and among men who have recently undergone urinary tract instrumentation or surgery.
# Diagnostic Considerations
Men with epididymitis typically have unilateral testicular pain and tenderness; palpable swelling of the epididymis is usually present. Testicular torsion, a surgical emergency, should be considered in all cases but is more frequent among adolescents. Emergency testing for torsion may be indicated when the onset of pain is sudden, pain is severe, or test results available during the initial visit do not permit a diagnosis of urethritis or urinary tract infection. The evaluation of men for epididymitis should include the following procedures:
• A Gram-stained smear of urethral exudate or an intraurethral swab specimen for N. gonorrhoeae and for NGU (≥5 polymorphonuclear leukocytes per oil immersion field),
• A culture of urethral exudate or an intraurethral swab specimen for N. gonorrhoeae,
• A test of an intraurethral swab specimen for C.
trachomatis,
• Culture and Gram-stained smear of uncentrifuged urine for Gram-negative bacteria.
# Treatment
Empiric therapy is indicated before culture results are available. Treatment of epididymitis caused by C. trachomatis or N. gonorrhoeae will result in microbiologic cure of infection, improve signs and symptoms, and prevent transmission to others. Patients with suspected sexually transmitted epididymitis should be treated with an antimicrobial regimen effective against C. trachomatis and N. gonorrhoeae; confirmation of these agents by testing will assist in partner notification efforts, but current tests for C. trachomatis are not sufficiently sensitive to exclude infection with that agent.
# Recommended Regimen
Ceftriaxone 250 mg IM in a single dose and
Doxycycline 100 mg orally 2 times a day for 10 days.
The effect of substituting the 125 mg dose of ceftriaxone recommended for treatment of uncomplicated N. gonorrhoeae, or the azithromycin regimen recommended for treatment of C. trachomatis, is unknown. As an adjunct to therapy, bed rest and scrotal elevation are recommended until fever and local inflammation have subsided.
# Alternative Regimen
Ofloxacin 300 mg orally 2 times a day for 10 days.
NOTE: Ofloxacin is contraindicated for persons ≤17 years of age.
# Follow-Up
Failure to improve within 3 days requires re-evaluation of both the diagnosis and the therapy, and consideration of hospitalization. Swelling and tenderness that persist after completion of antimicrobial therapy should be evaluated for testicular cancer and for tuberculous or fungal epididymitis.
# Management of Sex Partners
Patients with epididymitis that is known or suspected to be caused by N. gonorrhoeae or C. trachomatis should be instructed to refer sex partners for evaluation and treatment. Sex partners of these patients should be referred if their contact with the index patient was within 30 days of onset of symptoms. Patients should be instructed to avoid sexual intercourse until patient and partner(s) are cured. In the absence of a microbiologic test-of-cure, this means until therapy is completed and patient and partner(s) are without symptoms.
# HUMAN PAPILLOMAVIRUS INFECTION (GENITAL WARTS)
Treatment decisions should be guided by the anatomic site, size, and number of warts as well as the expense, efficacy, convenience, and potential for adverse effects of the available therapies. Patients with extensive or refractory disease should be referred to an expert. Carbon dioxide laser and conventional surgery are useful in the management of extensive warts, particularly for those patients who have not responded to other regimens; these alternatives are not appropriate for the treatment of limited lesions. One randomized trial of laser therapy indicated an efficacy of 43%, with recurrence among 95% of patients. A randomized trial of surgical excision demonstrated an efficacy of 93%, with recurrences among 29% of patients. These therapies and more cost-effective treatments do not eliminate HPV infection.
Interferon therapy is not recommended because of its cost and its association with a high frequency of adverse side effects; its efficacy is no greater than that of other available therapies. Two randomized trials established systemic interferon alpha to be no more effective than placebo. The efficacy of interferon injected directly into genital warts (intralesional therapy) during two randomized trials was 44%-61%, with recurrences among none to 67% of patients. Therapy with 5-fluorouracil cream has not been evaluated in controlled studies, frequently causes local irritation, and is not recommended for the treatment of genital warts.
Cryotherapy is relatively inexpensive, does not require anesthesia, and does not result in scarring if performed properly. Special equipment is required, and most patients experience moderate pain during and after the procedure. Efficacy during four randomized trials was 63%-88%, with recurrences among 21%-39% of patients.
# External Genital/Perianal Warts
Therapy with 0.5% podofilox solution is relatively inexpensive, simple to use, safe, and self-applied by patients at home. Unlike podophyllin, podofilox is a pure compound with a stable shelf-life, and it does not need to be washed off. Most patients experience mild to moderate pain or local irritation after treatment. Heavily keratinized warts may not respond as well as those on moist mucosal surfaces. To apply the podofilox solution safely and effectively, the patient must be able to see and reach the warts easily. Efficacy during five recent randomized trials was 45%-88%, with recurrences among 33%-60% of patients.
Podophyllin therapy is relatively inexpensive, simple to use, and safe. Compared with other available therapies, a larger number of treatments may be required. Most patients experience mild to moderate pain or local irritation after treatment. Heavily keratinized warts may not respond as well as those on moist mucosal surfaces. Efficacy in four recent randomized trials was 32%-79%, with recurrences among 27%-65% of patients.
Few data on the efficacy of TCA are available. One randomized trial among men demonstrated 81% efficacy and recurrence among 36% of patients; the frequency of adverse reactions was similar to that seen with the use of cryotherapy. One study among women showed efficacy and frequency of patient discomfort to be similar to those of podophyllin. No data on the efficacy of bichloroacetic acid are available.
Few data on the efficacy of electrodesiccation are available. One randomized trial of electrodesiccation demonstrated an efficacy of 94%, with recurrences among 22% of patients; another randomized trial, of diathermocoagulation, demonstrated an efficacy of 35%. Local anesthesia is required, and patient discomfort is usually moderate.
# Cervical Warts
For women who have exophytic cervical warts, dysplasia must be excluded before treatment is begun. Management should be carried out in consultation with an expert.
# Vaginal Warts
# Oral Warts
Cryotherapy with liquid nitrogen or
Electrodesiccation or electrocautery or
Surgical removal.
# Follow-Up
After warts have responded to therapy, follow-up is not necessary. Annual cytologic screening is recommended for women with or without genital warts. The presence of genital warts is not an indication for colposcopy.
# Management of Sex Partners
Examination of sex partners is not necessary for the management of genital warts because the role of reinfection is probably minimal. Many sex partners have obvious exophytic warts and may desire treatment; partners also may benefit from counseling. Patients with exophytic anogenital warts should be made aware that they are contagious to uninfected sex partners. The majority of partners, however, are probably already subclinically infected with HPV, even if they do not have visible warts. No practical screening tests for subclinical infection are available. Even after removal of warts, patients may harbor HPV in surrounding normal tissue, as may persons without exophytic warts. The use of condoms may reduce transmission to partners likely to be uninfected, such as new partners; however, the period of communicability is unknown.
Experts speculate that HPV infection may persist throughout a patient's lifetime in a dormant state and become infectious intermittently. Whether patients with subclinical HPV infection are as contagious as patients with exophytic warts is unknown.
# Special Considerations
# Pregnancy
The use of podophyllin and podofilox is contraindicated during pregnancy. Genital papillary lesions have a tendency to proliferate and to become friable during pregnancy. Many experts advocate the removal of visible warts during pregnancy. HPV types 6 and 11 can cause laryngeal papillomatosis among infants. The route of transmission (transplacental, birth canal, or postnatal) is unknown, and laryngeal papillomatosis has occurred among infants delivered by caesarean section. Hence, the preventive value of caesarean delivery is unknown; caesarean delivery must not be performed solely to prevent transmission of HPV infection to the newborn. However, in rare instances, caesarean delivery may be indicated for women with genital warts if the pelvic outlet is obstructed or if vaginal delivery would result in excessive bleeding.
# HIV Infection
Persons infected with HIV may not respond to therapy for HPV as well as persons without HIV.
# Subclinical Genital HPV Infection (Without Exophytic Warts)
Subclinical genital HPV infection is much more common than exophytic warts among both men and women. Infection is often diagnosed indirectly on the cervix by Pap smear, colposcopy, or biopsy and on the penis, vulva, and other genital skin by the appearance of white areas after application of acetic acid. Acetowhitening is not a specific test for HPV infection, and false-positive tests are common. Definitive diagnosis of HPV infection relies on detection of viral nucleic acid (DNA or RNA) or capsid proteins. Pap smear diagnosis of HPV generally does not correlate well with detection of HPV DNA in cervical cells. Cell changes attributed to HPV in the cervix are similar to those of mild dysplasia and often regress spontaneously without treatment. Tests for the detection of several types of HPV DNA in cells scraped from the cervix are now widely available, but the clinical utility of these tests for managing patients is not known. Management decisions should not be made on the basis of HPV DNA tests. Screening for subclinical genital HPV infection using DNA tests or acetic acid is not recommended.
# Treatment
In the absence of coexistent dysplasia, treatment is not recommended for subclinical genital HPV infection diagnosed by Pap smear, colposcopy, biopsy, acetic acid soaking of genital skin or mucous membranes, or the detection of HPV nucleic acids (DNA or RNA) or capsid antigen, because the diagnosis often is questionable and no therapy has been demonstrated to eradicate infection. HPV has been demonstrated in adjacent tissue after laser treatment of HPV-associated dysplasia and after attempts to eliminate subclinical HPV by extensive laser vaporization of the anogenital area of men and women. In the presence of coexistent dysplasia, management should be based on the grade of dysplasia.
# Management of Sex Partners
Examination of sex partners is not necessary. The majority of partners are probably already infected subclinically with HPV. No practical screening tests for subclinical infection are available. The use of condoms may reduce transmission to partners likely to be uninfected, such as new partners; however, the period of communicability is unknown.
Experts speculate that HPV infection may persist throughout a patient's lifetime in a dormant state and become infectious intermittently. Whether patients with subclinical HPV infection are as contagious as patients with exophytic warts is unknown. # Vaccination Eligibility Persons known to be at high risk for acquiring HBV (e.g., persons with multiple sex partners, sex partners of HBV carriers, or injecting drug users) should be advised of their risk for HBV infection (as well as HIV infection) and the means to reduce their risk (i.e., exclusivity in sexual relationships, use of condoms, avoidance of nonsterile drug injection equipment, and HBV vaccination). Selected high-risk groups for which HBV vaccination is recommended by the ACIP include the following persons: • Sexually active homosexual and bisexual men, • Men and women diagnosed as having recently acquired another STD, • Persons who have had more than one sex partner in the preceding 6 months. Such persons should be vaccinated unless they are immune to HBV as a result of past infection or vaccination. Refer to Hepatitis B Virus: A Comprehensive Strategy for Eliminating Transmission in the United States Through Universal Child Vaccination, Recommendations of the Advisory Committee on Immunization Practices (ACIP) (17 ). # Screening for Antibody Versus Vaccination Without Screening The prevalence of past HBV infection among sexually active homosexual men and among injecting drug users is high. Serologic screening for evidence of past infection before vaccinating members of these groups may be cost-effective, depending on the relative costs of laboratory testing and vaccine. Among those attending STD clinics, it may be cost-effective to screen older persons for past infection. During a recent study of 2,000 STD clinic patients who accepted HBV vaccination, 28% of those ≥25 years of age had evidence of past infection, whereas only 7% of persons <25 years of age had evidence of past infection. Past infection with HBV can be detected with a serologic test for antibody to the hepatitis B core antigen (anti-HBc). Immunity can be demonstrated by a test for antibody to the hepatitis B surface antigen (anti-HBs). The HBV carrier state can be detected by a test for HBsAg. If only a test for anti-HBc is used to screen for susceptibility to infection, persons immune because of prior vaccination will be falsely classified as susceptible. If only a test for anti-HBs is used, carriers will be falsely classified as susceptible. # Vaccination Schedules The usual vaccination schedule is three doses of vaccine at 0, 1, and 6 months. An alternative schedule of 0, 1, 2, and 12 months also has been approved for one vaccine. The dose is 1 mL for adults, which must be administered IM in the deltoid-not in a buttock. For persons 11-19 years of age, the dose is either 0.5 or 1 mL, depending on the manufacturer of the vaccine. # Management of Persons Exposed to HBV Susceptible persons exposed to HBV through sexual contact with a person who has acute or chronic HBV infection should receive postexposure prophylaxis with 0.06 mL/kg of HBIG in a single IM dose within 14 days of their last exposure; early administration may be more effective. HBIG administration should be followed by the standard three-dose immunization series with HBV vaccine beginning at the time of HBIG administration. # Special Considerations Pregnancy Pregnancy is not a contraindication to HBV or HBIG vaccine administration. 
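The screening logic described above (see Screening for Antibody Versus Vaccination Without Screening) maps three serologic markers to a patient's HBV status. The following is a minimal sketch in Python of that mapping; the function name and status labels are illustrative assumptions for this example, not terminology from these guidelines:

```python
def hbv_screening_status(anti_hbc: bool, anti_hbs: bool, hbsag: bool) -> str:
    """Interpret HBV screening serology as described above.

    anti_hbc -- antibody to hepatitis B core antigen (marks past infection)
    anti_hbs -- antibody to hepatitis B surface antigen (marks immunity)
    hbsag    -- hepatitis B surface antigen (marks the carrier state)
    """
    if hbsag:
        # Testing HBsAg is what keeps carriers from being misclassified
        # as susceptible when only anti-HBs is checked.
        return "HBV carrier"
    if anti_hbs and not anti_hbc:
        # Testing anti-HBs is what keeps vaccinated persons from being
        # misclassified as susceptible when only anti-HBc is checked.
        return "immune (prior vaccination)"
    if anti_hbs and anti_hbc:
        return "immune (past infection)"
    if anti_hbc:
        return "past infection detected by anti-HBc"
    return "susceptible; candidate for vaccination"
```

The first two branches correspond to the two misclassifications noted above: omitting the HBsAg test misreads carriers as susceptible, and omitting the anti-HBs test misreads vaccinated persons as susceptible.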
# HIV Infection Among HIV-infected persons, HBV infection is more likely to lead to chronic HBV carriage. HIV infection also impairs the response to HBV vaccine. Therefore, HIV-infected persons who are vaccinated should be tested for anti-HBs 1-2 months after the third vaccine dose. Revaccination with one or more doses should be considered for those who do not respond to vaccination initially. Those who do not respond to additional doses should be advised that they may remain susceptible. # PROCTITIS, PROCTOCOLITIS, AND ENTERITIS Sexually transmitted gastrointestinal syndromes include proctitis, proctocolitis, and enteritis. Proctitis occurs predominantly among persons who participate in anal intercourse, and enteritis occurs among those whose sexual practices include oral-fecal contact. Proctocolitis may be acquired by either route, depending on the pathogen. Evaluation should include appropriate diagnostic procedures, such as anoscopy or sigmoidoscopy, stool examination, and culture. Proctitis is an inflammation limited to the rectum (the distal 10-12 cm) that is associated with anorectal pain, tenesmus, and rectal discharge. N. gonorrhoeae, C. trachomatis (including LGV serovars), T. pallidum, and HSV are the most common sexually transmitted pathogens involved. Among patients coinfected with HIV, herpes proctitis may be especially severe. Proctocolitis is associated with symptoms of proctitis plus diarrhea and/or abdominal cramps and inflammation of the colonic mucosa extending to 12 cm. Pathogenic organisms include Campylobacter spp., Shigella spp., Entamoeba histolytica, and, rarely, C. trachomatis (LGV serovars). CMV or other opportunistic agents may be involved among immunosuppressed patients with HIV infection. Enteritis usually results in diarrhea and abdominal cramping without signs of proctitis or proctocolitis. In otherwise healthy patients, Giardia lamblia is most commonly implicated. Among patients with HIV infection, other infections that are not generally sexually transmitted may occur, including CMV, Mycobacterium avium-intracellulare, Salmonella spp., Cryptosporidium, Microsporidium, and Isospora. Multiple stool examinations may be necessary to detect Giardia, and special stool preparations are required to diagnose cryptosporidiosis and microsporidiosis. Additionally, enteritis may be a primary effect of HIV infection. When laboratory diagnostic capabilities are available, treatment should be based on the specific diagnosis. Diagnostic and treatment recommendations for all enteric infections are beyond the scope of these guidelines. # Treatment Acute proctitis of recent onset among persons who have recently practiced receptive anal intercourse is most often sexually transmitted. Such patients should be examined by anoscopy and should be evaluated for infection with HSV, N. gonorrhoeae, C. trachomatis, and T. pallidum. If anorectal pus is found on examination, or if polymorphonuclear leukocytes are found on a Gram-stained smear of anorectal secretions, the following therapy may be prescribed pending results of further laboratory tests. # Recommended Regimen Ceftriaxone 125 mg IM (or another agent effective against anal and genital gonorrhea) and Doxycycline 100 mg orally 2 times a day for 7 days. # NOTE: For patients with herpes proctitis, refer to Genital Herpes Simplex Virus Infections. # Follow-Up Follow-up should be based on specific etiology and severity of clinical symptoms. Reinfection may be difficult to distinguish from treatment failure.
# Management of Sex Partners Partners of patients with sexually transmitted enteric infections should be evaluated for any diseases diagnosed in the index patient. # ECTOPARASITIC INFECTIONS Pediculosis Pubis Patients with pediculosis pubis (pubic lice) usually seek medical attention because of pruritus. Commonly, they also notice lice on pubic hair. # Recommended Regimens Lindane 1% shampoo applied for 4 minutes and then thoroughly washed off (not recommended for pregnant or lactating women or for children <2 years of age) or Permethrin 1% creme rinse applied to affected areas and washed off after 10 minutes or Pyrethrins with piperonyl butoxide applied to the affected area and washed off after 10 minutes. The lindane regimen remains the least expensive therapy; toxicity (as indicated by seizure and aplastic anemia) has not been reported when treatment is limited to the recommended 4-minute period. Permethrin has less potential for toxicity in the event of inappropriate use. # Other Management Considerations The recommended regimens should not be applied to the eyes. Pediculosis of the eyelashes should be treated by applying occlusive ophthalmic ointment to the eyelid margins two times a day for 10 days. Bedding and clothing should be decontaminated (machine washed or machine dried using the heat cycle or dry-cleaned) or removed from body contact for at least 72 hours. Fumigation of living areas is not necessary. # Follow-Up Patients should be evaluated after 1 week if symptoms persist. Re-treatment may be necessary if lice are found or if eggs are observed at the hair-skin junction. Patients who are not responding to one of the recommended regimens should be retreated with an alternative regimen. # Management of Sex Partners Sex partners within the last month should be treated. # Special Considerations Pregnancy Pregnant and lactating women should be treated with permethrin or pyrethrins with piperonyl butoxide. # HIV Infection Persons with HIV infection and pediculosis pubis should receive the same treatment as those without HIV infection. # Follow-Up Examination 2 Weeks After Assault Examination for sexually transmitted infections should be repeated 2 weeks after the assault. Because infectious agents acquired through assault may not have produced sufficient concentrations of organisms to result in positive tests at the initial examination, culture and wet mount tests should be repeated at the 2-week visit unless prophylactic treatment has already been provided. # Follow-Up Examination 12 Weeks After Assault Serologic tests for syphilis and HIV infection should be performed 12 weeks after the assault. If positive, testing of the sera collected at the initial examination will assist in determining whether the infection antedated the assault. # Prophylaxis Although not all experts agree, most patients probably benefit from prophylaxis because a) follow-up of patients who have been sexually assaulted can be difficult, and b) patients may be reassured if offered treatment or prophylaxis for possible infection. The following prophylactic measures address the more common microorganisms: • HBV vaccination (see HEPATITIS B). • Antimicrobial therapy: empiric regimen for chlamydial, gonococcal, and trichomonal infections and for BV. # Recommended Regimen Ceftriaxone 125 mg IM in a single dose PLUS Metronidazole 2 g orally in a single dose PLUS Doxycycline 100 mg orally 2 times a day for 7 days. # NOTE: For patients requiring alternative treatments, see the appropriate sections of this report addressing those agents. # Other Management Considerations At the initial examination and, as indicated, at follow-up examinations, patients should be counseled regarding the following: • Symptoms of STDs and the need for immediate examination if symptoms occur, and • Use of condoms for sexual intercourse until STD prophylactic treatment is completed. Obtaining the indicated specimens requires skill to avoid psychological and physical trauma to the child. The clinical manifestations of some sexually transmitted infections are different in children compared with adults. Examinations and specimen collection should be conducted by practitioners who have experience and training in the evaluation of abused or assaulted children. A principal purpose of the examination is to obtain evidence of an infection that is likely to have been sexually transmitted. However, because of the legal and psychosocial consequences of a false-positive diagnosis, only tests with high specificities should be used. Additional cost and time are justified to obtain such tests. The scheduling of examinations should depend on the history of assault or abuse. If initial exposure is recent, infectious agents acquired through the exposure may not have produced sufficient concentrations of organisms to result in positive tests.
A follow-up visit approximately 2 weeks after the last sexual exposure should include a repeat physical examination and collection of additional specimens. To allow sufficient time for antibody to develop, another follow-up visit approximately 12 weeks after the last sexual exposure also is necessary to collect sera. A single examination may be sufficient if the child was abused for an extended period of time, or if the last suspected episode of abuse took place some time before the child received the medical evaluation. The following recommendation for scheduling examinations is a general guide. The exact timing and nature of follow-up contacts should be determined on an individual basis and should be considerate of the patient's psychological and social needs. Compliance with follow-up appointments may be improved when law enforcement personnel or child protective services are involved. # Initial and 2-Week Examinations During the initial examination and 2-week follow-up examination (if indicated), the following should be performed: • Cultures for N. gonorrhoeae from specimens collected from the pharynx and anus in both sexes, the vagina in girls, and the urethra in boys. Cervical specimens are not recommended for prepubertal girls. For boys, a meatal specimen of urethral discharge is an adequate substitute for an intraurethral swab specimen when discharge is present. Only standard culture systems for the isolation of N. gonorrhoeae should be used. All presumptive isolates of N. gonorrhoeae should be confirmed by at least two tests that involve different principles (e.g., biochemical, enzyme substrate, or serologic methods). Isolates should be preserved in case additional or repeated testing is needed. • Cultures for C. trachomatis from specimens collected from the anus in both sexes and from the vagina in girls. Limited information suggests that the likelihood of recovering chlamydia from the urethra of prepubertal boys is too low to justify the trauma involved in obtaining an intraurethral specimen. A urethral specimen should be obtained if urethral discharge is present. Pharyngeal specimens for C. trachomatis also are not recommended for either sex because the yield is low, perinatally acquired infection may persist beyond infancy, and culture systems in some laboratories do not distinguish between C. trachomatis and C. pneumoniae. # Reporting Every state, the District of Columbia, Puerto Rico, Guam, the Virgin Islands, and American Samoa have laws that require the reporting of child abuse. The exact requirements vary from state to state but, generally, if there is reasonable cause to suspect child abuse, it must be reported. Health-care providers should contact their state or local child protective service agency about child abuse reporting requirements in their areas. # TRICHOMONIASIS # Recommended Regimen Metronidazole 2 g orally in a single dose. # Alternative Regimen Metronidazole 500 mg twice daily for 7 days. Only metronidazole is available in the United States for the treatment of trichomoniasis. In randomized clinical trials, both of the recommended metronidazole regimens have resulted in cure rates of approximately 95%. Treatment of the patient and sex partner results in relief of symptoms, microbiologic cure, and reduction of transmission. Metronidazole gel has been approved for the treatment of BV, but it has not been studied for the treatment of trichomoniasis. Earlier preparations of metronidazole for topical vaginal therapy demonstrated low efficacy against trichomoniasis.
# Follow-Up Follow-up is unnecessary for men and for women who become asymptomatic after treatment. Infections by strains of T. vaginalis with diminished susceptibility to metronidazole occur. However, most of these organisms respond to higher doses of metronidazole. If failure occurs with either regimen, the patient should be retreated with metronidazole 500 mg 2 times a day for 7 days. If repeated failure occurs, the patient should be treated with metronidazole 2 g once daily for 3-5 days. Patients with culture-documented infection who do not respond to the regimens described in this report and in whom reinfection has been excluded should be managed in consultation with an expert. Evaluation of such cases should include determination of the susceptibility of T. vaginalis to metronidazole. # Management of Sex Partners Sex partners should be treated. Patients should be instructed to avoid sex until patient and partner(s) are cured. In the absence of microbiologic test-of-cure, this means when therapy has been completed and patient and partner(s) are without symptoms. # Special Considerations # Allergy, Intolerance, or Adverse Reactions Effective alternatives to therapy with metronidazole are not available. # Pregnancy The use of metronidazole is contraindicated in the first trimester of pregnancy. Patients may be treated after the first trimester with 2 g of metronidazole in a single dose. # PELVIC INFLAMMATORY DISEASE # Minimum Criteria Empiric treatment of PID should be instituted on the basis of the presence of all of the following three minimum clinical criteria for pelvic inflammation and in the absence of an established cause other than PID: • Lower abdominal tenderness, • Adnexal tenderness, and • Cervical motion tenderness. # Additional Criteria For women with severe clinical signs, more elaborate diagnostic evaluation is warranted because incorrect diagnosis and management may cause unnecessary morbidity. These additional criteria may be used to increase the specificity of the diagnosis. Listed below are the routine criteria for diagnosing PID: • Oral temperature >38.3°C, • Abnormal cervical or vaginal discharge, • Elevated erythrocyte sedimentation rate, • Elevated C-reactive protein, • Laboratory documentation of cervical infection with N. gonorrhoeae or C. trachomatis. Listed below are the elaborate criteria for diagnosing PID: • Histopathologic evidence of endometritis on endometrial biopsy, • Tubo-ovarian abscess on sonography or other radiologic tests, • Laparoscopic abnormalities consistent with PID. Although initial treatment decisions can be made before bacteriologic diagnosis of C. trachomatis or N. gonorrhoeae infection, such a diagnosis emphasizes the need to treat sex partners. # Treatment PID therapy regimens must provide empiric, broad-spectrum coverage of likely pathogens. Antimicrobial coverage should include N. gonorrhoeae, C. trachomatis, Gram-negative facultative bacteria, anaerobes, and streptococci. Although several antimicrobial regimens have proven effective in achieving clinical and microbiologic cure in randomized clinical trials with short-term follow-up, few studies have been done to assess and compare elimination of infection of the endometrium and fallopian tubes, or the incidence of long-term complications such as tubal infertility and ectopic pregnancy. # Regimen B Clindamycin 900 mg IV every 8 hours, PLUS Gentamicin loading dose IV or IM (2 mg/kg of body weight) followed by a maintenance dose (1.5 mg/kg) every 8 hours.
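Because the gentamicin doses in Regimen B are weight based, a worked example may be useful; the 70-kg body weight below is a hypothetical value chosen only for illustration, not a figure from these guidelines:

\[
\text{loading dose} = 2\,\tfrac{\text{mg}}{\text{kg}} \times 70\,\text{kg} = 140\,\text{mg IV or IM},
\qquad
\text{maintenance dose} = 1.5\,\tfrac{\text{mg}}{\text{kg}} \times 70\,\text{kg} = 105\,\text{mg every 8 hours}.
\]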
NOTE: This regimen should be continued for at least 48 hours after the patient demonstrates substantial clinical improvement, then followed with doxycycline 100 mg orally 2 times a day or clindamycin 450 mg orally 4 times a day to complete a total of 14 days of therapy. When tubo-ovarian abscess is present, many health-care providers use clindamycin for continued therapy rather than doxycycline, because it provides more effective anaerobic coverage. Clindamycin administered intravenously appears to be effective against C. trachomatis infection; however, the effectiveness of oral clindamycin against C. trachomatis has not been determined. Alternative Inpatient Regimens. Limited data support the use of other inpatient regimens, but two regimens have undergone at least one clinical trial and have broad-spectrum coverage. Ampicillin/sulbactam plus doxycycline has good anaerobic coverage and appears to be effective for patients with a tubo-ovarian abscess. Intravenous ofloxacin has been studied as a single agent. A regimen of ofloxacin plus either clindamycin or metronidazole provides broad-spectrum coverage. Evidence is insufficient to support the use of any single-agent regimen for inpatient treatment of PID. # Outpatient Treatment Clinical trials of outpatient regimens have provided little information regarding intermediate and long-term outcomes. The following regimens provide coverage against the common etiologic agents of PID, but evidence from clinical trials supporting their use is limited. The second regimen provides broader coverage against anaerobic organisms but costs substantially more than the other regimen. Patients who do not respond to outpatient therapy within 72 hours should be hospitalized to confirm the diagnosis and to receive parenteral therapy. # Regimen A Cefoxitin 2 g IM plus probenecid, 1 g orally in a single dose concurrently, or ceftriaxone 250 mg IM or other parenteral third-generation cephalosporin (e.g., ceftizoxime or cefotaxime), PLUS Doxycycline 100 mg orally 2 times a day for 14 days. # Special Considerations # HIV Infection Persons with HIV infection and uncomplicated epididymitis should receive the same treatment as persons without HIV. Fungal and mycobacterial causes of epididymitis are more common, however, among patients who are immunocompromised. # HUMAN PAPILLOMAVIRUS INFECTION # Genital Warts Exophytic genital and anal warts are benign growths most commonly caused by HPV types 6 or 11. Other types that may be present in the anogenital region (e.g., types 16, 18, 31, 33, and 35) have been strongly associated with genital dysplasia and carcinoma. These types are usually associated with subclinical infection, but occasionally are found in exophytic warts. # Treatment The goal of treatment is removal of exophytic warts and the amelioration of signs and symptoms-not the eradication of HPV. No therapy has been shown to eradicate HPV. HPV has been identified in adjacent tissue after laser treatment of HPV-associated cervical intraepithelial neoplasia and after attempts to eliminate subclinical HPV by extensive laser vaporization of the anogenital area. Genital warts are generally benign growths that cause minor or no symptoms aside from their cosmetic appearance. Treatment of external genital warts is not likely to influence the development of cervical cancer.
A multitude of randomized clinical trials and other treatment studies have demonstrated that currently available therapeutic methods are 22%-94% effective in clearing external exophytic genital warts, and that recurrence rates are high (usually at least 25% within 3 months) with all modalities. Several well-designed studies have indicated that treatment is more successful for genital warts that are small and that have been present <1 year. No studies have assessed if treatment of exophytic warts reduces transmission of HPV. Many experts speculate that exophytic warts may be more infectious than subclinical infection, and therefore, the risk for transmission might be reduced by "debulking" genital warts. Most experts agree that recurrences of genital warts more commonly result from reactivation of subclinical infection than reinfection by a sex partner. The effect of treatment on the natural history of HPV is unknown. If left untreated, genital warts may resolve on their own, remain unchanged, or grow. In placebo-controlled studies, genital warts have cleared spontaneously without treatment in 20%-30% of patients within 3 months. # Regimens Treatment of genital warts should be guided by the preference of the patient. Expensive therapies, toxic therapies, and procedures that result in scarring should be avoided. A specific treatment regimen should be chosen with consideration given to # CERVICAL CANCER SCREENING FOR WOMEN WHO ATTEND STD CLINICS OR WHO HAVE A HISTORY OF STDs Women who have a history of STDs are at increased risk for cervical cancer, and women attending STD clinics may have additional characteristics that place them at even higher risk. Prevalence studies have found that precursor lesions for cervical cancer occur approximately five times more often among women attending STD clinics than among women attending family planning clinics. The Pap smear (cervical smear) is an effective and relatively low-cost screening test for invasive cervical cancer and squamous intraepithelial lesions (SIL)*, the precursors of cervical cancer. The screening guidelines of both the American College of Obstetricians and Gynecologists and the American Cancer Society recommend annual Pap smears for sexually active women. Although these guidelines take the position that Pap smears can be obtained less frequently in some situations, women who attend STD clinics or who have a history of STDs should be screened annually because of their increased risk for cervical cancer. Moreover, reports from STD clinics indicate that many women do not understand the purpose or importance of Pap smears, and many women who have had a pelvic examination believe they have had a Pap smear when they actually have not. # Recommendations Whenever a woman has a pelvic examination for STD screening, the health-care provider should inquire about the result of her last Pap smear and should discuss the following information with the patient: • Purpose and importance of the Pap smear, • Whether a Pap smear was obtained during the clinic visit, • Need for a Pap smear each year, • Names of local providers or referral clinics that can obtain Pap smears and adequately follow up results (if a Pap smear was not obtained during this examination). If a woman has not had a Pap smear during the previous 12 months, a Pap smear should be obtained as part of the routine pelvic examination in most situations. 
Health-care providers should be aware that, after a pelvic examination, many women may believe they have had a Pap smear when they actually have not, and therefore may report they have had a recent Pap smear. In STD clinics, a Pap smear should be obtained during the routine clinical evaluation of women who have not had a documented normal smear within the past 12 months. A woman may benefit from receiving printed information about Pap smears and a report containing a statement that a Pap smear was obtained during her clinic visit. Whenever possible, a copy of the Pap smear result should be sent to the patient for her records. # Follow-Up If a Pap smear shows severe inflammation with reactive cellular changes, the woman should be advised to have another Pap smear within 3 months. If possible, underlying infection should be treated before the repeat Pap smear is obtained. If a Pap smear shows either SIL (or equivalent) or atypical squamous cells of undetermined significance (ASCUS), the woman should be notified promptly and appropriate follow-up initiated. Appropriate follow-up of Pap smears showing a high-grade SIL (or equivalent) should always include referral to a health-care provider who has the capacity to provide a colposcopic examination of the lower genital tract and, if indicated, colposcopically directed biopsies. Because clinical follow-up of abnormal Pap smears with colposcopy and biopsy is beyond the scope of many public clinics, including most STD clinics, in most situations women with Pap smears demonstrating these abnormalities will need to be referred to other local providers or clinics. Women with either a low-grade SIL or ASCUS also need similar follow-up, although some experts believe that, in some situations, a repeat Pap smear may be a satisfactory alternative to referral for colposcopy and biopsy. # Other Management Considerations Other considerations in performing Pap smears are the following: • The Pap smear is not an effective screening test for STDs; • If a woman is menstruating, a Pap smear should be postponed, and the woman should be advised to have a Pap smear at the earliest opportunity; • If a woman has obvious severe cervicitis, the Pap smear may be deferred until after antibiotic therapy has been completed to obtain an optimum smear; • A woman with external genital warts does not require Pap smears more frequently than a woman without warts, unless otherwise indicated. # Special Considerations Pregnancy Women who are pregnant should have a Pap smear as part of routine prenatal care. A cytobrush may be used for obtaining Pap smears from pregnant women, although care should be taken not to disrupt the mucous plug. # HIV Infection Recent studies have documented an increased prevalence of SIL among women infected with HIV. Also, HIV may hasten the progression of precursor lesions to invasive cervical cancer; however, evidence supporting such a progression is limited. *The 1988 Bethesda System for Reporting Cervical/Vaginal Cytologic Diagnoses introduced the new terms low-grade squamous intraepithelial lesion (SIL) and high-grade SIL. Low-grade SIL encompasses cellular changes associated with HPV and mild dysplasia/cervical intraepithelial neoplasia 1 (CIN 1). High-grade SIL includes moderate dysplasia/CIN 2, severe dysplasia/CIN 3, and carcinoma in situ (CIS)/CIN 3 (16 ).
The following provisional recommendations for Pap smear screening among HIV-infected women are based partially on consultation with experts in the care and management of cervical cancer and HIV infection among women. These provisional recommendations may be altered in the future as more information regarding cervical disease among HIV-infected women becomes available: • Women who are HIV-infected should be advised to have a comprehensive gynecologic examination, including a Pap smear, as part of their initial medical evaluation. # HEPATITIS B Hepatitis B is a common STD. Sexual transmission accounts for an estimated one-third to two-thirds of the estimated 200,000 to 300,000 new HBV infections that occurred annually in the United States during the past 10 years. Of persons infected as adults, 6%-10% become chronic HBV carriers. These persons are capable of transmitting HBV to others and are at risk for developing fatal complications. HBV leads to an estimated 5,000 deaths annually in the United States from cirrhosis of the liver and hepatocellular carcinoma. The risk of perinatal HBV infection among infants born to HBV-infected mothers ranges from 10% to 85%, depending on the mother's hepatitis B e antigen status. Infected newborns usually become HBV carriers and are at high risk for developing chronic liver disease. # Prevention Infection of both adults and neonates can be readily prevented with a safe and effective vaccine that has been used in the United States for more than 10 years. Universal vaccination of newborns is now recommended (17 ). The use of hepatitis B immune globulin (HBIG) combined with vaccination can prevent infection among persons exposed sexually to HBV if administered within 14 days of exposure. # Scabies The predominant symptom of scabies is pruritus. For pruritus to occur, sensitization to Sarcoptes scabiei must occur. Among persons with their first infection, sensitization takes several weeks to develop, while pruritus may occur within 24 hours after reinfestation.
Scabies among adults may be sexually transmitted, although scabies among children is usually not sexually transmitted. # Recommended Regimen Permethrin cream (5%) applied to all areas of the body from the neck down and washed off after 8-14 hours, or Lindane (1%) 1 oz. of lotion or 30 g of cream applied thinly to all areas of the body from the neck down and washed off thoroughly after 8 hours. # NOTE: Lindane should not be used following a bath, and it should not be used by persons with extensive dermatitis, pregnant or lactating women, or children <2 years of age. # Alternative Regimen Crotamiton (10%) applied to the entire body from the neck down, nightly for 2 consecutive nights, and washed off 24 hours after the second application. Permethrin is effective and safe but costs more than lindane. Lindane is effective in most areas of the country, but lindane resistance has been reported in some areas of the world, including parts of the United States. Seizures have occurred when lindane was applied after a bath or used by patients with extensive dermatitis. Aplastic anemia following lindane use also has been reported. # Other Management Considerations Bedding and clothing should be decontaminated (machine washed or machine dried using the hot cycle or dry-cleaned) or removed from body contact for at least 72 hours. Fumigation of living areas is not necessary. # Follow-Up Pruritus may persist for several weeks. Some experts recommend re-treatment after 1 week for patients who are still symptomatic; other experts recommend re-treatment only if live mites can be observed. Patients who are not responding to the recommended treatment should be retreated with an alternative regimen. # Management of Sex Partners Both sexual and close personal or household contacts within the last month should be examined and treated. # Special Considerations # Pregnant Women, Infants, and Young Children Infants, young children, and pregnant and lactating women should not be treated with lindane. They may be treated with permethrin or crotamiton regimens. # HIV Infection Persons with HIV infection and uncomplicated scabies should receive the same treatment as persons without HIV infection. Persons with HIV infection and others who are immunosuppressed are at increased risk for Norwegian scabies, a disseminated dermatologic infection. Such patients should be managed in consultation with an expert. # SEXUAL ASSAULT AND STDs # Adults and Adolescents Recommendations in this report are limited to the identification and treatment of sexually transmitted infections and conditions commonly identified in the management of such infections. The documentation of findings and collection of specimens for forensic purposes and the management of potential pregnancy or physical and psychological trauma are beyond the scope of these recommendations. Among sexually active adults, the identification of sexually transmitted infections following assault is usually more important for the psychological and medical management of the patient than for legal purposes, because the infection could have been acquired before the assault. Trichomoniasis, chlamydia, gonorrhea, and BV appear to be the infections most commonly diagnosed among women following sexual assault. Since the prevalence of these conditions is substantial among sexually active women, their presence (postassault) does not necessarily signify acquisition during the assault.
Chlamydial and gonococcal infections among females are of special concern because of the possibility of ascending infection. # Evaluation for Sexually Transmitted Infections # Initial Examination An initial examination should include the following procedures: • Cultures for N. gonorrhoeae and C. trachomatis from specimens collected from any sites of penetration or attempted penetration. If chlamydial culture is not available, nonculture tests for chlamydia are an acceptable substitute, although false-negative test results are more common with nonculture tests and false-positive test results may occur. If a nonculture test is used, a positive test result should be verified with a second test based on a different diagnostic principle or with a blocking antibody or competitive probe procedure. • Wet mount and culture of a vaginal swab specimen for T. vaginalis infection. If vaginal discharge or malodor is evident, the wet mount should also be examined for evidence of BV and yeast infection. • Collection of a serum sample to be preserved for subsequent analysis if follow-up serologic tests are positive (see Follow-Up Examination 12 Weeks After Assault). # Risk for Acquiring HIV Infection Although HIV-antibody seroconversion has been reported among persons whose only known risk factor was sexual assault or sexual abuse, the risk for acquiring HIV infection through sexual assault is minimal in most instances. Although the overall rate of transmission of HIV from an HIV-infected person during a single act of heterosexual intercourse is thought to be low (<1%), this risk depends on many factors. Prophylactic treatment for HIV is not known to be effective and is not generally recommended in this situation. However, all persons should be offered HIV counseling and testing after the assault. Raising the issue of the potential for HIV infection during the initial medical evaluation may add to the acute psychological stress the patient may be experiencing because of the assault. An alternative is to address the issue at the 2-week follow-up appointment, when the patient may be better able to receive this information and give informed consent for HIV testing. All persons electing to be tested for HIV should receive pretest and posttest counseling. # Sexual Assault or Abuse of Children Recommendations in this report are limited to the identification and treatment of sexually transmitted infections. Management of the psychosocial and legal aspects of the sexual assault or abuse of children is important, but is beyond the scope of these recommendations. The identification of sexually transmissible agents among children beyond the neonatal period suggests sexual abuse. However, there are exceptions; for example, rectal or genital infection with C. trachomatis among young children may be the result of perinatally acquired infection and may persist for as long as 3 years. In addition, BV and genital mycoplasmas have been identified among children who have been abused and among those who have not been abused. A finding of genital warts, although suggestive of assault, is not specific for sexual abuse without other evidence. When the only evidence of sexual abuse is the isolation of an organism or the detection of antibodies to a sexually transmissible agent, findings should be confirmed and the implications carefully considered. # Evaluation for Sexually Transmitted Infections Examinations of children for sexual assault or abuse should be conducted so as to minimize trauma to the child.
The decision to evaluate the child for STDs must be made on an individual basis. Situations involving a high risk for STDs and a strong indication for testing include the following: • A suspected offender is known to have an STD or to be at high risk for STDs (e.g., multiple partners or past history of STD), • The child has symptoms or signs of an STD, • There is a high STD prevalence in the community. Only standard culture systems for the isolation of C. trachomatis should be used. The isolation of C. trachomatis should be confirmed by microscopic identification of inclusions by staining with fluorescein-conjugated monoclonal antibody specific for C. trachomatis. Isolates should be preserved. Nonculture tests for chlamydia are not sufficiently specific for use in circumstances involving possible child abuse or assault. # Examination 12 Weeks After Assault An examination approximately 12 weeks after the last suspected sexual exposure is recommended to allow time for antibodies to infectious agents to develop. Serologic tests for the agents listed below should be considered: • T. pallidum, • HIV, • HBV. The prevalence of these infections varies greatly among communities, and depends upon whether risk factors are known to be present in the abuser or assailant. Also, results of HBV tests must be interpreted carefully, because HBV may be transmitted by nonsexual modes as well as sexually. The choice of tests must be made on a case-by-case basis. # Presumptive Treatment There are few data upon which to establish the risk of a child's acquiring a sexually transmitted infection as a result of sexual abuse. The risk is believed to be low in most circumstances, although documentation to support this position is inadequate. Presumptive treatment for children who have been sexually assaulted or abused is not widely recommended because girls appear to be at lower risk for ascending infection than adolescent or adult women, and regular follow-up can usually be assured. However, some children or their parents or guardians may be very concerned about the possibility of contracting an STD, even if the risk is perceived by the health-care practitioner to be low. Addressing patient concerns may be an appropriate indication for presumptive treatment in some settings.
# BACKGROUND The optimal way to reduce the morbidity, mortality, and socioeconomic consequences of injuries is to prevent their occurrence. 1,2 When an injury does occur, however, emergency medical service (EMS) providers must ensure that patients receive prompt emergency care at the scene and are transported to an appropriate health care facility for further evaluation and treatment. Determining the facility to which an injured patient should be transported can have a profound impact on subsequent morbidity and mortality. Although basic emergency services are generally consistent across emergency departments, certain hospitals known as "trauma centers" have additional expertise and equipment for treating severely injured patients. Trauma centers are classified by state or local authorities depending on the scope of resources and services available, ranging from Level I, which provides the highest level of care, to Level IV. Not all injured patients can or should be transported to a Level I trauma center. Patients with less severe injuries might be served better by transport to the nearest emergency department. Transporting all injured patients to Level I trauma centers, when many do not require that high a level of resources and expertise, unnecessarily burdens those facilities and makes them less available for the most severely injured patients. Research has shown that the level of care an injured patient receives can also have a significant impact on health outcome. The National Study on the Costs and Outcomes of Trauma (NSCOT) evaluated the effect of trauma center care on mortality in moderately to severely injured patients and identified a 25% reduction in mortality for severely injured patients who received care at a Level I trauma center rather than at a nontrauma center. 3 # DEVELOPMENT OF THE FIELD TRIAGE DECISION SCHEME: THE NATIONAL TRAUMA TRIAGE PROTOCOL The Centers for Disease Control and Prevention (CDC) has taken an increasingly active role in the intersection between public health and acute injury care, including the publication of the Acute Injury Care Research Agenda: Guiding Research for the Future. 4 Building on these activities, CDC and the American College of Surgeons-Committee on Trauma (ACS-COT), with additional financial support from the National Highway Traffic Safety Administration (NHTSA), convened a series of meetings of the National Expert Panel on Field Triage to guide the 2006 revision of the Triage Decision Scheme. The expert panel was assembled to bring additional expertise to the revision process (e.g., EMS, emergency medicine, public health, the automotive industry, other federal agencies) in order to:
• provide a vigorous review of the available evidence;
• assist with the dissemination of the revised scheme, and the rationale behind it, to a larger public health and acute injury care community;
• emphasize the need for additional research in field triage; and
• establish the foundation for future revisions.
The major outcome of these meetings was the creation of the Field Triage Decision Scheme: The National Trauma Triage Protocol (Decision Scheme) (see Appendix A). 5 # VEHICLE TELEMATICS AND ADVANCED AUTOMATIC COLLISION NOTIFICATION During the National Expert Panel on Field Triage meetings, members discussed the potential for vehicle telematics to more accurately guide trauma triage decisions. Telematics is defined as the combination of telecommunications and computing.
6 Vehicle telematics systems combine and integrate directly into the vehicle's electrical architecture, cellular communications technology, Global Positioning System (GPS) satellite location capability, and sophisticated voice recognition. 7 While vehicle telematics provide a wide array of services, Advanced Automatic Collision Notification (AACN) was the telematics service that was of particular interest to the National Expert Panel members. AACN is the successor to Automatic Crash Notification (ACN) and is found on a number of motor vehicles. (AACN is now installed in approximately 5 million vehicles in the United States and Canada.) AACN alerts emergency services that a vehicle crash has occurred and automatically summons assistance. 7 When a crash has occurred (as determined by various sensors, airbag deployment, or seatbelt pretensioners), the AACN system initiates an emergency wireless call to a telematics service provider (OnStar, ATX, etc.) to deliver the vehicle's GPS location and crash-related data, and opens a voice communications channel to the emergency call center. AACN improves the data sent from the ACN version by including crash severity data collected by in-vehicle sensors. # INCORPORATION OF "VEHICLE TELEMATICS CONSISTENT WITH HIGH RISK FOR INJURY" INTO THE DECISION SCHEME In earlier versions of the Decision Scheme, a number of vehicle crash characteristics were incorporated into the prehospital triage decision evaluation. These included, among others, high vehicle speed, vehicle deformity >20 inches, and intrusion >12 inches for unbelted occupants as mechanism of injury criteria. National Automotive Sampling System Crashworthiness Data System (NASS-CDS) data indicate that risk for injury, impact direction, and increasing crash severity are linked. 8 An analysis of 621 Australian motor vehicle crashes indicated that high-speed impacts (>60 km/hr [>35 mph]) were associated with major injury, defined as Injury Severity Score (ISS) >15, ICU admission >24 hours requiring mechanical ventilation, urgent surgery, or death (OR = 1.5; CI: 1.1-2.2). 9 Previously, the usefulness of vehicle speed had been limited because of the challenges to EMS personnel in estimating impact speed accurately. New AACN technology installed in some automobiles can, however, identify vehicle location, measure change in velocity ("delta V"), and detect the crash's principal direction of force, airbag deployment, rollover, and the occurrence of multiple collisions. 8 As a result, and in recognition that this information might become more available in the future, vehicle telemetry data consistent with a high risk for injury (e.g., change in velocity and principal direction of force) was added as a triage criterion. # EXPERT PANEL ON ADVANCED AUTOMATIC COLLISION NOTIFICATION AND TRIAGE OF THE INJURED PATIENT In follow-up to the need to explore further how AACN could improve triage, CDC selected and convened an expert panel (see Appendix B). The purpose of the panel was to develop a medical protocol for utilization of AACN data from crashes to better predict severity of injury and use this information to improve the ability to respond to crashes and appropriately triage crash victims. This panel included representation from the following disciplines: public safety answering points (911 call centers), EMS, emergency medicine, trauma surgery, engineering, public health, vehicle telematics providers, NHTSA, and the Health Resources and Services Administration's EMS for Children program.
The expert panel met three times from 2007 to 2008, with the second meeting serving as a subset of the entire panel to deliberate on available data. Key discussion points included:
• Crash characteristics that predicted a 20% or greater likelihood of having a serious injury were considered significant and warranted special recognition and action. Severe injury was defined as having an ISS of 15 or greater.
• If additional data was available from direct verbal contact with vehicle occupants, this should be used to refine or alter the prediction of vehicle crash telematic data. Specifically, knowing the number of occupants, age, gender, and level of consciousness would be important additional data elements in predicting severity of injury.
• More work needs to be done, but the available information strongly supports immediate
# RECOMMENDATIONS FROM THE EXPERT PANEL ON ADVANCED AUTOMATIC COLLISION NOTIFICATION AND TRIAGE OF THE INJURED PATIENT
• AACN providers should obtain specific occupant information that is known to alter or influence injury severity and to significantly influence response to injury, including age and gender. Further refinement of the best data to obtain will require further investigations and data analyses.
• Because AACN data have not been previously used in clinical decision-making, pilot studies should be implemented as soon as possible using the following protocol (see Appendix C):
1. In the event of a crash, the following electronic information will be transmitted by the vehicle to the AACN providers:
-Delta V
-Principal direction of force (PDOF)
-Seatbelt usage (with or without)
-Crash with multiple impacts
-Vehicle type
This information is received by the AACN provider and analyzed to identify those patients who, based upon the data alone, have a >20% risk of having a severe injury (defined as an ISS >15). If the analysis indicates that the risk of severe injury is <20%, then the AACN provider proceeds per standard protocol.
2. If the AACN data analysis indicates a >20% risk of severe injury, then the AACN provider directly contacts the vehicle occupant to obtain more information. During the communication with the occupant, the AACN provider will inquire about:
-Age (>55 years old have increased risk of severe injury)
-Injuries to vehicle occupants
-Number of patients
-Number of vehicles involved in the crash
This information may help refine the AACN data; in effect, moving the 20% value either up or down as the occupant information increases or decreases the likelihood that a severe injury has occurred. For example, if the occupant is able to communicate clearly that he or she is uninjured and <55 years of age, then the risk of severe injury is lessened. Similarly, if there is no (or inappropriate) voice response from the occupant, if the occupant is ≥55 years of age, or if he or she indicates an injury, then the risk of severe injury remains at least 20% (based upon the AACN data alone) and is potentially greater.
3. If the AACN provider determines that the occupant is at >20% risk of severe injury, then communication should be made with the relevant Public Safety Answering Point (PSAP) that AACN data obtained from the vehicle indicates that the occupant is at risk for a severe injury, and that the PSAP should dispatch resources as appropriate according to local protocol and consistent with the Field Triage Decision Scheme: The National Trauma Triage Protocol.
4. If the AACN data indicate that the risk of injury is <20% and the AACN provider subsequently obtains occupant information that raises concern for a severe injury (e.g., injuries, age), then this specific information can be communicated to the PSAP.
5. AACN providers will also communicate the following information to the PSAP.
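The pilot protocol above is essentially a two-stage decision rule: a telemetry-based risk screen, refined by direct voice contact with the occupant. The sketch below, in Python, shows one way its control flow could be encoded. It is a hypothetical illustration only; in particular, the constants in predicted_severe_injury_risk are placeholder assumptions standing in for a telematics provider's proprietary risk model, not values from this document:

```python
from dataclasses import dataclass
from typing import Optional

SEVERE_RISK_THRESHOLD = 0.20  # panel threshold: >=20% likelihood of severe injury (ISS > 15)

@dataclass
class CrashTelemetry:
    """Electronic data elements transmitted by the vehicle (protocol step 1)."""
    delta_v_kmh: float        # change in velocity ("delta V")
    pdof_degrees: float       # principal direction of force (PDOF)
    belted: bool              # seatbelt usage
    multiple_impacts: bool    # crash with multiple impacts
    vehicle_type: str

def predicted_severe_injury_risk(t: CrashTelemetry) -> float:
    """Placeholder risk model; the weights here are illustrative assumptions."""
    risk = 0.05
    if t.delta_v_kmh > 60:    # high-speed impacts are associated with major injury
        risk += 0.15
    if not t.belted:
        risk += 0.10
    if t.multiple_impacts:
        risk += 0.05
    return min(risk, 1.0)

def triage_decision(telemetry: CrashTelemetry,
                    voice_response_ok: bool = False,
                    occupant_age: Optional[int] = None,
                    occupant_reports_injury: Optional[bool] = None) -> str:
    """Apply protocol steps 1-3 to one vehicle's crash notification."""
    if predicted_severe_injury_risk(telemetry) < SEVERE_RISK_THRESHOLD:
        # Step 1: telemetry alone indicates <20% risk.
        return "proceed per standard protocol"
    # Step 2: risk >=20%, so the provider contacts the occupant directly.
    if (voice_response_ok
            and occupant_reports_injury is False
            and occupant_age is not None
            and occupant_age < 55):
        # A clear voice response from an uninjured occupant <55 lowers the concern.
        return "risk lowered by occupant report; standard protocol"
    # Step 3: no/inappropriate voice response, age >=55, or reported injury.
    return "notify PSAP: >=20% risk of severe injury (ISS > 15)"

if __name__ == "__main__":
    crash = CrashTelemetry(delta_v_kmh=72, pdof_degrees=0.0, belted=False,
                           multiple_impacts=True, vehicle_type="passenger car")
    print(triage_decision(crash, voice_response_ok=True, occupant_age=62))
```

Steps 4 and 5, in which refinements and supplementary information are passed on to the PSAP, would sit on top of this core rule rather than alter it.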
# BACkGROUND The optimal way to reduce the morbidity, mortality, and socioeconomic consequences of injuries is to prevent their occurrence. 1,2 When an injury does occur, however, emergency medical service (EMS) providers must ensure that patients receive prompt emergency care at the scene and are transported to an appropriate health care facility for further evaluation and treatment. Determining the facility to which an injured patient should be transported can have a profound impact on subsequent morbidity and mortality. Although basic emergency services are generally consistent across emergency departments, certain hospitals known as "trauma centers" have additional expertise and equipment for treating severely injured patients. Trauma centers are classified by state or local authorities depending on the scope of resources and services available, ranging from Level I, which provides the highest level of care, to Level IV. Not all injured patients can or should be transported to a Level I trauma center. Patients with less severe injuries might be served better by transport to the nearest emergency department. Transporting all injured patients to Level I trauma centers, when many do not require that high a level of resources and expertise, unnecessarily burdens those facilities and makes them less available for the most severely injured patients. Research has shown that the level of care an injured patient receives can also have a significant impact on health outcome. The National Study on the Costs and Outcomes of Trauma (NSCOT) evaluated the effect of trauma center care on mortality in moderately to severely injured patients and identified a 25% reduction in mortality for severely injured patients who received care at a Level I trauma center rather than at a nontrauma center. 3 # DEVELOPMENT OF THE FIELD TRIAGE DECISION SCHEME: THE NATIONAL TRAUMA TRIAGE PROTOCOL The Centers for Disease Control and Prevention (CDC) has taken an increasingly active role in the intersection between public health and acute injury care, including the publication of the Acute Injury Care Research Agenda: Guiding Research for the Future. 4 Building on these activities, CDC and the American College of Surgeons-Committee on Trauma (ACS-COT), with additional financial support from the National Highway Traffic Safety Administration (NHTSA), convened a series of meetings of the National Expert Panel on Field Triage to guide the 2006 revision of the Triage Decision Scheme. The expert panel was assembled to bring additional expertise to the revision process (e.g., EMS, emergency medicine, public health, the automotive industry, other federal agencies) in order to provide: a vigorous review of the available evidence; • assist with the dissemination of the revised scheme, and the rationale behind it, • to a larger public health and acute injury care community; emphasize the need for additional research in field triage; and • establish the foundation for future revisions. # • The major outcome of these meetings was the creation of the Field Triage Decision Scheme: The National Trauma Triage Protocol (Decision Scheme)(see Appendix A). 5 # VEHICLE TELEMATICS AND ADVANCED AUTOMATIC COLLISION NOTIFICATION During the National Expert Panel on Field Triage meetings, members discussed the potential for vehicle telematics to more accurately guide trauma triage decisions. Telematics is defined as the combination of telecommunications and computing. 
6 Vehicle telematics systems combine and integrate directly into the vehicle's electrical architecture, cellular communications technology, Global Positioning System (GPS) satellite location capability, and sophisticated voice recognition. 7 While vehicle telematics provide a wide array of services, Advanced Automatic Collision Notification (AACN) was the telematics service that was of particular interest to the National Expert Panel members. AACN is the successor to Automatic Crash Notification (ACN) and is found on a number of motor vehicles. (AACN is now installed in approximately 5 million vehicles in the United States and Canada.) AACN alerts emergency services that a vehicle crash has occurred and automatically summons assistance. 7 When a crash has occurred (as determined by various sensors, airbag deployment, or seatbelt pretensioners), the AACN system initiates an emergency wireless call to a telematics service provider (OnStar, ATX, etc.) to deliver the vehicle's GPS location and crash-related data, and opens a voice communications channel to the emergency call center. AACN improves the data sent from the ACN version by including crash severity data collected by in-vehicle sensors. # INCORPORATION OF "VEHICLE TELEMATICS CONSISTENT WITH HIGH RISk FOR INJURY" INTO THE DECISION SCHEME In earlier versions of the Decision Scheme, a number of vehicle crash characteristics were incorporated into the prehospital triage decision evaluation. These included, among others, high vehicle speed, vehicle deformity >20 inches, and intrusion >12 inches for unbelted occupants as mechanism of injury criteria. National Automotive Sampling System Crashworthiness Data System (NASS-CDS) data indicate that risk for injury, impact direction, and increasing crash severity are linked. 8 An analysis of 621 Australian motor vehicle crashes indicated that high-speed impacts (>60 km/hr [>35 mph]) were associated with major injury, defined as Injury Severity Score [ISS >15], ICU admission >24 hours requiring mechanical ventilation, urgent surgery, or death (OR = 1.5; CI: 1.1-2.2). 9 Previously, the usefulness of vehicle speed had been limited because of the challenges to EMS personnel in estimating impact speed accurately. New AACN technology installed in some automobiles can, however, identify vehicle location, measure change in velocity ("delta V"), and detect the crash's principal direction of force, airbag deployment, rollover, and the occurrence of multiple collisions. 8 As a result, and in recognition that this information might become more available in the future, vehicle telemetry data consistent with a high risk for injury (e.g., change in velocity and principal direction of force) was added as a triage criterion. # EXPERT PANEL ON ADVANCED AUTOMATIC COLLISION NOTIFICATION AND TRIAGE OF THE INJURED PATIENT In follow up to the need to explore further how AACN could improve triage, CDC selected and convened an expert panel (see Appendix B). The purpose of the panel was to develop a medical protocol for utilization of AACN data from crashes to better predict severity of injury and use this information to improve the ability to respond to crashes and appropriately triage crash victims. This panel included representation from the following disciplines: public safety answering points (911 call centers), EMS, emergency medicine, trauma surgery, engineering, public health, vehicle telematics providers, NHTSA, and the Health Resources and Services Administration's EMS for Children program. 
The expert panel met three times from 2007 to 2008, with the second meeting serving as a subset of the entire panel to deliberate on available data. Key discussion points included:
• Crash characteristics that predicted a 20% or greater likelihood of having a serious injury were considered significant and warranted special recognition and action.
• Severe injury was defined as having an ISS of 15 or greater.
• If additional data were available from direct verbal contact with vehicle occupants, they should be used to refine or alter the prediction based on vehicle crash telematic data. Specifically, knowing the number of occupants, age, gender, and level of consciousness would be important additional data elements in predicting severity of injury.
• More work needs to be done, but the available information strongly supports immediate pilot testing.

# RECOMMENDATIONS FROM THE EXPERT PANEL ON ADVANCED AUTOMATIC COLLISION NOTIFICATION AND TRIAGE OF THE INJURED PATIENT
• AACN providers should obtain specific occupant information that is known to alter or influence injury severity and to significantly influence response to injury, including age and gender. Further refinement of the best data to obtain will require further investigations and data analyses.
• Because AACN data have not been previously used in clinical decision-making, pilot studies should be implemented as soon as possible using the following protocol (see Appendix C):
1. In the event of a crash, the following electronic information will be transmitted by the vehicle to the AACN providers:
- Delta V
- Principal direction of force (PDOF)
- Seatbelt usage (with or without)
- Crash with multiple impacts
- Vehicle type
This information is received by the AACN provider and analyzed to identify those patients who, based upon the data alone, have a >20% risk of having a severe injury (defined as an ISS >15). If the analysis indicates that the risk of severe injury is <20%, then the AACN provider proceeds per standard protocol.
2. If the AACN data analysis indicates a >20% risk of severe injury, then the AACN provider directly contacts the vehicle occupant to obtain more information. During the communication with the occupant, the AACN provider will inquire about:
- Age (occupants ≥55 years old have increased risk of severe injury)
- Injuries to vehicle occupants
- Number of patients
- Number of vehicles involved in the crash
This information may help refine the AACN data; in effect, moving the 20% value either up or down as the occupant information increases or decreases the likelihood that a severe injury has occurred. For example, if the occupant is able to communicate clearly that he or she is uninjured and <55 years of age, then the risk of severe injury is lessened. Similarly, if there is no (or inappropriate) voice response from the occupant, if the occupant is aged 55 years or older, or if he or she indicates an injury, then the risk of severe injury remains at least 20% (based upon the AACN data alone) and is potentially greater.
3. If the AACN provider determines that the occupant is at >20% risk of severe injury, then communication should be made with the relevant Public Safety Answering Point (PSAP) that AACN data obtained from the vehicle indicate that the occupant is at risk for a severe injury, and that the PSAP should dispatch resources as appropriate according to local protocol and consistent with the Field Triage Decision Scheme: The National Trauma Triage Protocol.
4. If the AACN data indicate that the risk of injury is <20% and the AACN provider subsequently obtains occupant information that raises concern for a severe injury (e.g., injuries, age), then this specific information can be communicated to the PSAP.
5. AACN providers will also communicate the following information to the PSAP:
Read end to end, steps 1 through 5 form a small decision procedure; a minimal sketch of that flow follows.
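The sketch below encodes the protocol under stated assumptions: `estimate_severe_injury_risk` stands in for whatever statistical model the AACN provider would run on the transmitted crash data (the document does not specify one), and all field and function names are illustrative, not part of any published telematics interface.

```python
from dataclasses import dataclass
from typing import Callable, Optional

# Hypothetical record of the crash data named in step 1; this is an
# illustrative structure, not a published AACN message format.
@dataclass
class CrashData:
    delta_v_mph: float             # change in velocity ("delta V")
    pdof_degrees: int              # principal direction of force
    seatbelt_used: Optional[bool]  # None if the sensor reading is absent
    multiple_impacts: bool
    vehicle_type: str

SEVERE_RISK_THRESHOLD = 0.20  # 20% predicted probability of ISS > 15

def triage_call(crash: CrashData,
                estimate_severe_injury_risk: Callable[[CrashData], float],
                interview_occupant: Callable[[], Optional[dict]],
                notify_psap: Callable[[CrashData, bool], None]) -> None:
    """Sketch of the expert panel's pilot protocol (steps 1-5)."""
    risk = estimate_severe_injury_risk(crash)          # step 1
    if risk < SEVERE_RISK_THRESHOLD:
        # Step 4: proceed per standard protocol; occupant information
        # that later raises concern is still relayed to the PSAP.
        notify_psap(crash, False)
        return

    answers = interview_occupant()                     # step 2
    if answers is None:
        # No (or inappropriate) voice response keeps risk at >= 20%.
        notify_psap(crash, True)                       # step 3
        return

    age = answers.get("age")
    says_uninjured = answers.get("injured") is False
    if age is not None and age < 55 and says_uninjured:
        # A clear report of no injury from an occupant under 55
        # lowers the estimated risk below the threshold.
        notify_psap(crash, False)
    else:
        # Age >= 55 or a reported injury keeps risk at >= 20%.
        notify_psap(crash, True)                       # step 3
```

In a real pilot, the callables would be replaced by the provider's severity model and call-center workflow; the threshold and the qualitative adjustments simply mirror the text above.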
This schedule includes recommendations in effect as of January 1, 2015. Any dose not administered at the recommended age should be administered at a subsequent visit, when indicated and feasible. The use of a combination vaccine generally is preferred over separate injections of its equivalent component vaccines. Vaccination providers should consult the relevant Advisory Committee on Immunization Practices (ACIP) statement for detailed recommendations, available online at http://www.cdc.gov/vaccines/hcp/acip-recs/index.html. Clinically significant adverse events that follow vaccination should be reported to the Vaccine Adverse Event Reporting System (VAERS) online (http://www.vaers.hhs.gov) or by telephone (800-822-7967). Suspected cases of vaccine-preventable diseases should be reported to the state or local health department. Additional information, including precautions and contraindications for vaccination, is available from CDC online (http://www.cdc.gov/vaccines/recs/vac-admin/contraindications.htm) or by telephone (800-CDC-INFO [800-232-4636]). This schedule is approved by the Advisory Committee on Immunization Practices (http://www.cdc.gov/vaccines/acip), the American Academy of Pediatrics (http://www.aap.org), the American Academy of Family Physicians (http://www.aafp.org), and the American College of Obstetricians and Gynecologists (http://www.acog.org).

These recommendations must be read with the footnotes that follow. For those who fall behind or start late, provide catch-up vaccination at the earliest opportunity as indicated by the green bars in Figure 1. To determine minimum intervals between doses, see the catch-up schedule (Figure 2). School entry and adolescent vaccine age groups are shaded.

[Figure 1 legend: range of recommended ages for all children; range of recommended ages for catch-up immunization; range of recommended ages for certain high-risk groups; range of recommended ages during which catch-up is encouraged and for certain high-risk groups; not routinely recommended.]

FIGURE 2. Catch-up immunization schedule for persons aged 4 months through 18 years who start late or who are more than 1 month behind - United States, 2015. The figure provides catch-up schedules and minimum intervals between doses for children whose vaccinations have been delayed. A vaccine series does not need to be restarted, regardless of the time that has elapsed between doses. Use the section appropriate for the child's age. Always use this table in conjunction with Figure 1 and the footnotes that follow.

# NOTE: The above recommendations must be read along with the footnotes of this schedule.

Footnotes - Recommended immunization schedule for persons aged 0 through 18 years - United States, 2015. For further guidance on the use of the vaccines mentioned below, see: http://www.cdc.gov/vaccines/hcp/acip-recs/index.html. For vaccine recommendations for persons 19 years of age and older, see the Adult Immunization Schedule.
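One operational rule in the catch-up figure lends itself to a one-line check: a series is never restarted, so the number of doses still owed depends only on how many valid doses were received, not on the time elapsed. A minimal sketch (a hypothetical helper, not an official tool):

```python
def doses_remaining(series_length: int, valid_doses_received: int) -> int:
    """Catch-up principle from Figure 2: never restart a series,
    regardless of elapsed time; administer only the missing doses,
    at the minimum intervals appropriate for the child's age."""
    return max(series_length - valid_doses_received, 0)
```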
# Additional information
- For contraindications and precautions to use of a vaccine and for additional information regarding that vaccine, vaccination providers should consult the relevant ACIP statement available online at http://www.cdc.gov/vaccines/hcp/acip-recs/index.html.
- For purposes of calculating intervals between doses, 4 weeks = 28 days. Intervals of 4 months or greater are determined by calendar months.
- Vaccine doses administered 4 days or less before the minimum interval are considered valid. Doses of any vaccine administered ≥5 days earlier than the minimum interval or minimum age should not be counted as valid doses and should be repeated as age-appropriate. The repeat dose should be spaced after the invalid dose by the recommended minimum interval.

# Hepatitis B (HepB) vaccine. (Minimum age: birth)
Routine vaccination:
At birth:
- Administer monovalent HepB vaccine to all newborns before hospital discharge.
- For infants born to hepatitis B surface antigen (HBsAg)-positive mothers, administer HepB vaccine and 0.5 mL of hepatitis B immune globulin (HBIG) within 12 hours of birth. These infants should be tested for HBsAg and antibody to HBsAg (anti-HBs) 1 to 2 months after completion of the HepB series at age 9 through 18 months (preferably at the next well-child visit).
- If mother's HBsAg status is unknown, within 12 hours of birth administer HepB vaccine regardless of birth weight. For infants weighing less than 2,000 grams, administer HBIG in addition to HepB vaccine within 12 hours of birth. Determine mother's HBsAg status as soon as possible and, if mother is HBsAg-positive, also administer HBIG for infants weighing 2,000 grams or more as soon as possible, but no later than age 7 days.

Doses following the birth dose:
- The second dose should be administered at age 1 or 2 months. Monovalent HepB vaccine should be used for doses administered before age 6 weeks.
- Infants who did not receive a birth dose should receive 3 doses of a HepB-containing vaccine on a schedule of 0, 1 to 2 months, and 6 months starting as soon as feasible. See Figure 2.
- Administer the second dose 1 to 2 months after the first dose (minimum interval of 4 weeks); administer the third dose at least 8 weeks after the second dose AND at least 16 weeks after the first dose. The final (third or fourth) dose in the HepB vaccine series should be administered no earlier than age 24 weeks. (These timing rules are sketched in code after the rotavirus footnote below.)
- Administration of a total of 4 doses of HepB vaccine is permitted when a combination vaccine containing HepB is administered after the birth dose.

Catch-up vaccination:
- Unvaccinated persons should complete a 3-dose series.
- A 2-dose series (doses separated by at least 4 months) of adult formulation Recombivax HB is licensed for use in children aged 11 through 15 years.
- For other catch-up guidance, see Figure 2.

# Rotavirus (RV) vaccines. (Minimum age: 6 weeks for both RV1 [Rotarix] and RV5 [RotaTeq])
Routine vaccination:
Administer a series of RV vaccine to all infants as follows:
1. If Rotarix is used, administer a 2-dose series at 2 and 4 months of age.
2. If RotaTeq is used, administer a 3-dose series at ages 2, 4, and 6 months.
3. If any dose in the series was RotaTeq or vaccine product is unknown for any dose in the series, a total of 3 doses of RV vaccine should be administered.

Catch-up vaccination:
- The maximum age for the first dose in the series is 14 weeks, 6 days; vaccination should not be initiated for infants aged 15 weeks, 0 days or older.
- The maximum age for the final dose in the series is 8 months, 0 days.
- For other catch-up guidance, see Figure 2.
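The interval arithmetic above is mechanical, so a short sketch can make it concrete. This hypothetical checker applies the 4-day grace period and the HepB third-dose rules (at least 8 weeks after dose 2, at least 16 weeks after dose 1, and not before age 24 weeks); it illustrates the stated rules and is not validated clinical software.

```python
from datetime import date, timedelta

GRACE = timedelta(days=4)  # doses up to 4 days early still count as valid

def meets_minimum(given: date, earliest: date) -> bool:
    """Valid if given no more than 4 days before the minimum date;
    5 or more days early means the dose must be repeated."""
    return given >= earliest - GRACE

def hepb_dose3_valid(birth: date, dose1: date, dose2: date,
                     dose3: date) -> bool:
    """HepB third-dose rules from the footnote above (4 weeks = 28 days)."""
    return (meets_minimum(dose3, dose2 + timedelta(weeks=8))     # >= 8 wk after dose 2
            and meets_minimum(dose3, dose1 + timedelta(weeks=16))  # >= 16 wk after dose 1
            and meets_minimum(dose3, birth + timedelta(weeks=24)))  # not before age 24 wk

# Example: third dose at ~25 weeks of age, well spaced from doses 1 and 2.
assert hepb_dose3_valid(date(2015, 1, 1), date(2015, 1, 1),
                        date(2015, 3, 1), date(2015, 6, 24))
```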
# Diphtheria and tetanus toxoids and acellular pertussis (DTaP) vaccine (cont'd)
The fourth dose may be administered as early as age 12 months, provided at least 6 months have elapsed since the third dose. However, the fourth dose of DTaP need not be repeated if it was administered at least 4 months after the third dose of DTaP.

Catch-up vaccination:
- The fifth dose of DTaP vaccine is not necessary if the fourth dose was administered at age 4 years or older.
- For other catch-up guidance, see Figure 2.

# Tetanus and diphtheria toxoids and acellular pertussis (Tdap) vaccine. (Minimum age: 10 years for both Boostrix and Adacel)
Routine vaccination:
- Administer 1 dose of Tdap vaccine to all adolescents aged 11 through 12 years.
- Tdap may be administered regardless of the interval since the last tetanus and diphtheria toxoid-containing vaccine.
- If administered inadvertently to a child aged 7 through 10 years, the dose may count as part of the catch-up series. This dose may count as the adolescent Tdap dose, or the child can later receive a Tdap booster dose at age 11 through 12 years.
- If administered inadvertently to an adolescent aged 11 through 18 years, the dose should be counted as the adolescent Tdap booster.
- For other catch-up guidance, see Figure 2.

# Haemophilus influenzae type b (Hib) conjugate vaccine. (Minimum age: 6 weeks for PRP-T [ACTHIB, DTaP-IPV/Hib (Pentacel) and Hib-MenCY (MenHibrix)], PRP-OMP [PedvaxHIB or COMVAX], 12 months for PRP-T [Hiberix])
Routine vaccination:
- Administer a 2- or 3-dose Hib vaccine primary series and a booster dose (dose 3 or 4 depending on vaccine used in primary series) at age 12 through 15 months to complete a full Hib vaccine series.
- The primary series with ActHIB, MenHibrix, or Pentacel consists of 3 doses and should be administered at 2, 4, and 6 months of age. The primary series with PedvaxHib or COMVAX consists of 2 doses and should be administered at 2 and 4 months of age; a dose at age 6 months is not indicated. (These product-specific schedules are restated as a short lookup sketch below.)
- One booster dose (dose 3 or 4 depending on vaccine used in primary series) of any Hib vaccine should be administered at age 12 through 15 months. An exception is Hiberix vaccine. Hiberix should only be used for the booster (final) dose in children aged 12 months through 4 years who have received at least 1 prior dose of Hib-containing vaccine.
- For recommendations on the use of MenHibrix in patients at increased risk for meningococcal disease, please refer to the meningococcal vaccine footnotes and also to MMWR February 28, 2014 / 63(RR01);1-13, available at http://www.cdc.gov/mmwr/PDF/rr/rr6301.pdf.

# Haemophilus influenzae type b (Hib) conjugate vaccine (cont'd)
Catch-up vaccination:
- If dose 1 was administered at ages 12 through 14 months, administer a second (final) dose at least 8 weeks after dose 1, regardless of Hib vaccine used in the primary series.
- If both doses were PRP-OMP (PedvaxHIB or COMVAX), and were administered before the first birthday, the third (and final) dose should be administered at age 12 through 59 months and at least 8 weeks after the second dose.
- If the first dose was administered at age 7 through 11 months, administer the second dose at least 4 weeks later and a third (and final) dose at age 12 through 15 months or 8 weeks after second dose, whichever is later.
- If first dose is administered before the first birthday and second dose administered at younger than 15 months, a third (and final) dose should be given 8 weeks later.
- For unvaccinated children aged 15 months or older, administer only 1 dose.
- For other catch-up guidance, see Figure 2.
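Because the routine Hib series above differs by product, it can help to see the brand-to-schedule mapping written out as data. The sketch below simply restates the footnote; it is illustrative, not a dosing tool.

```python
# Routine Hib primary series by product, restated from the footnote:
# 3-dose products at ages 2, 4, and 6 months; 2-dose PRP-OMP products
# at 2 and 4 months. Every series then receives one booster at age
# 12 through 15 months. Hiberix is used only for that booster dose.
HIB_PRIMARY_SERIES_MONTHS = {
    "ActHIB":    (2, 4, 6),
    "MenHibrix": (2, 4, 6),
    "Pentacel":  (2, 4, 6),
    "PedvaxHIB": (2, 4),
    "COMVAX":    (2, 4),
}
BOOSTER_AGE_MONTHS = (12, 15)        # booster window for all series
BOOSTER_ONLY_PRODUCTS = {"Hiberix"}  # valid only as the booster (final) dose

def hib_total_doses(product: str) -> int:
    """Primary series length plus the single booster dose."""
    return len(HIB_PRIMARY_SERIES_MONTHS[product]) + 1
```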
For catch-up guidance related to MenHibrix, please see the meningococcal vaccine footnotes and also MMWR February 28, 2014 / 63(RR01);1-13, available at http://www.cdc.gov/mmwr/PDF/rr/rr6301.pdf.

# Vaccination of persons with high-risk conditions:
- Children aged 12 through 59 months who are at increased risk for Hib disease, including chemotherapy recipients and those with anatomic or functional asplenia (including sickle cell disease), human immunodeficiency virus (HIV) infection, immunoglobulin deficiency, or early component complement deficiency, who have received either no doses or only 1 dose of Hib vaccine before 12 months of age, should receive 2 additional doses of Hib vaccine 8 weeks apart; children who received 2 or more doses of Hib vaccine before 12 months of age should receive 1 additional dose.
- For patients younger than 5 years of age undergoing chemotherapy or radiation treatment who received a Hib vaccine dose(s) within 14 days of starting therapy or during therapy, repeat the dose(s) at least 3 months following therapy completion.
- Recipients of hematopoietic stem cell transplant (HSCT) should be revaccinated with a 3-dose regimen of Hib vaccine starting 6 to 12 months after successful transplant, regardless of vaccination history; doses should be administered at least 4 weeks apart.
- A single dose of any Hib-containing vaccine should be administered to unimmunized* children and adolescents 15 months of age and older undergoing an elective splenectomy; if possible, vaccine should be administered at least 14 days before procedure.
- Hib vaccine is not routinely recommended for patients 5 years or older. However, 1 dose of Hib vaccine should be administered to unimmunized* persons aged 5 years or older who have anatomic or functional asplenia (including sickle cell disease) and unvaccinated persons 5 through 18 years of age with human immunodeficiency virus (HIV) infection.
* Patients who have not received a primary series and booster dose or at least 1 dose of Hib vaccine after 14 months of age are considered unimmunized.

# Pneumococcal vaccines. (Minimum age: 6 weeks for PCV13, 2 years for PPSV23)
Routine vaccination with PCV13:
- Administer a 4-dose series of PCV13 vaccine at ages 2, 4, and 6 months and at age 12 through 15 months.
- For children aged 14 through 59 months who have received an age-appropriate series of 7-valent PCV (PCV7), administer a single supplemental dose of 13-valent PCV (PCV13).

Catch-up vaccination with PCV13:
- Administer 1 dose of PCV13 to all healthy children aged 24 through 59 months who are not completely vaccinated for their age.
- For other catch-up guidance, see Figure 2.

Vaccination of persons with high-risk conditions with PCV13 and PPSV23:
- All recommended PCV13 doses should be administered prior to PPSV23 vaccination if possible.
- For children 2 through 5 years of age with any of the following conditions: chronic heart disease (particularly cyanotic congenital heart disease and cardiac failure); chronic lung disease (including asthma if treated with high-dose oral corticosteroid therapy); diabetes mellitus; cerebrospinal fluid leak; cochlear implant; sickle cell disease and other hemoglobinopathies; anatomic or functional asplenia; HIV infection; chronic renal failure; nephrotic syndrome; diseases associated with treatment with immunosuppressive drugs or radiation therapy, including malignant neoplasms, leukemias, lymphomas, and Hodgkin's disease; solid organ transplantation; or congenital immunodeficiency:
1. Administer 1 dose of PCV13 if any incomplete schedule of 3 doses of PCV (PCV7 and/or PCV13) was received previously.
2. Administer 2 doses of PCV13 at least 8 weeks apart if unvaccinated or if any incomplete schedule of fewer than 3 doses of PCV (PCV7 and/or PCV13) was received previously.
3. Administer 1 supplemental dose of PCV13 if 4 doses of PCV7 or another age-appropriate complete PCV7 series was received previously.
4. The minimum interval between doses of PCV (PCV7 or PCV13) is 8 weeks.
5. For children with no history of PPSV23 vaccination, administer PPSV23 at least 8 weeks after the most recent dose of PCV13.

# Inactivated poliovirus vaccine (IPV). (Minimum age: 6 weeks)
Routine vaccination:
- Administer a 4-dose series of IPV at ages 2, 4, 6 through 18 months, and 4 through 6 years. The final dose in the series should be administered on or after the fourth birthday and at least 6 months after the previous dose.

Catch-up vaccination:
- In the first 6 months of life, minimum age and minimum intervals are only recommended if the person is at risk of imminent exposure to circulating poliovirus (i.e., travel to a polio-endemic region or during an outbreak).
- If 4 or more doses are administered before age 4 years, an additional dose should be administered at age 4 through 6 years and at least 6 months after the previous dose.
- A fourth dose is not necessary if the third dose was administered at age 4 years or older and at least 6 months after the previous dose.

For further guidance on the use of the vaccines mentioned below, see: http://www.cdc.gov/vaccines/hcp/acip-recs/index.html.

# Measles, mumps, and rubella (MMR) vaccine. (Minimum age: 12 months for routine vaccination)
Routine vaccination:
- Administer a 2-dose series of MMR vaccine at ages 12 through 15 months and 4 through 6 years. The second dose may be administered before age 4 years, provided at least 4 weeks have elapsed since the first dose.
- Administer 1 dose of MMR vaccine to infants aged 6 through 11 months before departure from the United States for international travel. These children should be revaccinated with 2 doses of MMR vaccine, the first at age 12 through 15 months (12 months if the child remains in an area where disease risk is high), and the second dose at least 4 weeks later.
- Administer 2 doses of MMR vaccine to children aged 12 months and older before departure from the United States for international travel. The first dose should be administered on or after age 12 months and the second dose at least 4 weeks later.

Catch-up vaccination:
- Ensure that all school-aged children and adolescents have had 2 doses of MMR vaccine; the minimum interval between the 2 doses is 4 weeks.

# Varicella (VAR) vaccine. (Minimum age: 12 months)
Routine vaccination:
- Administer a 2-dose series of VAR vaccine at ages 12 through 15 months and 4 through 6 years. The second dose may be administered before age 4 years, provided at least 3 months have elapsed since the first dose. If the second dose was administered at least 4 weeks after the first dose, it can be accepted as valid.

Catch-up vaccination:
- Ensure that all persons aged 7 through 18 years without evidence of immunity (see MMWR 2007 / 56 [No. RR-4], available at http://www.cdc.gov/mmwr/pdf/rr/rr5604.pdf) have 2 doses of varicella vaccine.
For children aged 7 through 12 years, the recommended minimum interval between doses is 3 months (if the second dose was administered at least 4 weeks after the first dose, it can be accepted as valid); for persons aged 13 years and older, the minimum interval between doses is 4 weeks.

# Hepatitis A (HepA) vaccine. (Minimum age: 12 months)
Routine vaccination:
- Initiate the 2-dose HepA vaccine series at 12 through 23 months; separate the 2 doses by 6 to 18 months.
- Children who have received 1 dose of HepA vaccine before age 24 months should receive a second dose 6 to 18 months after the first dose.
- For any person aged 2 years and older who has not already received the HepA vaccine series, 2 doses of HepA vaccine separated by 6 to 18 months may be administered if immunity against hepatitis A virus infection is desired.

Catch-up vaccination:
- The minimum interval between the two doses is 6 months.

Special populations:
- Administer 2 doses of HepA vaccine at least 6 months apart to previously unvaccinated persons who live in areas where vaccination programs target older children, or who are at increased risk for infection. This includes persons traveling to or working in countries that have high or intermediate endemicity of infection; men having sex with men; users of injection and non-injection illicit drugs; persons who work with HAV-infected primates or with HAV in a research laboratory; persons with clotting-factor disorders; persons with chronic liver disease; and persons who anticipate close personal contact (e.g., household or regular babysitting) with an international adoptee during the first 60 days after arrival in the United States from a country with high or intermediate endemicity. The first dose should be administered as soon as the adoption is planned, ideally 2 or more weeks before the arrival of the adoptee.

# Human papillomavirus (HPV) vaccines. (Minimum age: 9 years for HPV2 [Cervarix] and HPV4 [Gardasil])
Routine vaccination:
- Administer a 3-dose series of HPV vaccine on a schedule of 0, 1-2, and 6 months to all adolescents aged 11 through 12 years. Either HPV4 or HPV2 may be used for females, and only HPV4 may be used for males.
- The vaccine series may be started at age 9 years.
- Administer the second dose 1 to 2 months after the first dose (minimum interval of 4 weeks); administer the third dose 24 weeks after the first dose and 16 weeks after the second dose (minimum interval of 12 weeks).

Catch-up vaccination:
- Administer the vaccine series to females (either HPV2 or HPV4) and males (HPV4) at age 13 through 18 years if not previously vaccinated.
- Use recommended routine dosing intervals (see Routine vaccination above) for vaccine series catch-up.

# Meningococcal vaccines.
Catch-up vaccination:
- Administer Menactra or Menveo vaccine at age 13 through 18 years if not previously vaccinated.
- If the first dose is administered at age 13 through 15 years, a booster dose should be administered at age 16 through 18 years with a minimum interval of at least 8 weeks between doses.
- If the first dose is administered at age 16 years or older, a booster dose is not needed.
- For other catch-up guidance, see Figure 2.

Vaccination of persons with high-risk conditions and other persons at increased risk of disease:
- Children with anatomic or functional asplenia (including sickle cell disease):
1. Menveo
On October 27, 2010, the Advisory Committee on Immunization Practices (ACIP) approved updated recommendations for the use of quadrivalent (serogroups A, C, Y, and W-135) meningococcal conjugate vaccines (Menveo, Novartis; and Menactra, Sanofi Pasteur) in adolescents and persons at high risk for meningococcal disease. These recommendations supplement the previous ACIP recommendations for meningococcal vaccination (1,2). The Meningococcal Vaccines Work Group of ACIP reviewed available data on immunogenicity in high-risk groups, bactericidal antibody persistence after immunization, current epidemiology, vaccine effectiveness (VE), and cost-effectiveness of different strategies for vaccination of adolescents. The Work Group then presented policy options for consideration by the full ACIP. This report summarizes two new recommendations approved by ACIP: 1) routine vaccination of adolescents, preferably at age 11 or 12 years, with a booster dose at age 16 years and 2) a 2-dose primary series administered 2 months apart for persons aged 2 through 54 years with persistent complement component deficiency (e.g., C5--C9, properdin, factor H, or factor D) and functional or anatomic asplenia, and for adolescents with human immunodeficiency virus (HIV) infection. CDC guidance for vaccine providers regarding these updated recommendations also is included.

# Rationale for Adding a Booster Dose to the Adolescent Schedule
The goal of the 2005 ACIP meningococcal immunization recommendations was to protect persons aged 16 through 21 years, when meningococcal disease rates peak. At that time, vaccination was recommended at age 11 or 12 years rather than at age 14 or 15 years because 1) more persons have preventive-care visits at age 11 or 12 years, 2) adding this vaccine at the 11 or 12 year-old visit would strengthen the pre-adolescent vaccination platform, and 3) the vaccine was expected to protect adolescents through the entire period of increased risk. Meningococcal conjugate vaccines were licensed in 2005 based on immunogenicity (because a surrogate of protection had been defined) and safety data. After licensure, additional data on bactericidal antibody persistence, trends in meningococcal disease epidemiology in the United States, and VE have indicated many adolescents might not be protected for more than 5 years. Therefore, persons immunized at age 11 or 12 years might have decreased protective immunity by ages 16 through 21 years, when their risk for disease is greatest. Meningococcal disease incidence has decreased since 2000, and incidence of serogroups C and Y, which represent the majority of cases of vaccine-preventable meningococcal disease, is at a historic low. However, the peak in disease among persons aged 18 years (Figure) has persisted, even after routine vaccination was recommended in 2005. In the 2009 National Immunization Survey-Teen, 53.6% of adolescents aged 13 through 17 years had received a dose of meningococcal vaccine (3). From 2000--2004 to 2005--2009, the estimated annual number of cases of serogroups C and Y meningococcal disease decreased 74% among persons aged 11 through 14 years but only 27% among persons aged 15 through 18 years. Cases of meningococcal disease caused by serogroups C and Y among persons who were vaccinated with meningococcal conjugate vaccine have been reported. An early VE analysis that modeled expected cases of disease in vaccinated persons estimated a VE of 80%--85% up to 3 years after vaccination (4).
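The VE point estimates and confidence intervals quoted here and in the case-control results below are conventionally derived from the odds ratio; the report does not show the calculation, but the standard case-control estimator is VE = (1 - OR) x 100%, which also explains how an interval bound can fall below zero. A minimal sketch, with the formula assumed rather than stated in the source:

```python
def vaccine_effectiveness_pct(odds_ratio: float) -> float:
    """Conventional case-control estimator: VE = (1 - OR) x 100%.
    OR < 1 gives positive effectiveness; OR > 1 gives a negative VE,
    which is how a confidence bound like -72% can arise."""
    return (1.0 - odds_ratio) * 100.0

# e.g., an odds ratio of 0.22 corresponds to a VE of 78%.
```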
In 2010, CDC received 12 reports of serogroup C or Y meningococcal disease among persons who had received a meningococcal conjugate vaccine. The mean age of these persons was 18.2 years (range: 16 through 22 years). The mean time since vaccination was 3.25 years (range: 1.5--4.6 years). Five of these 12 persons had an underlying condition that might have increased their risk for meningococcal disease (CDC, unpublished data, 2010). A case-control study evaluating the VE of meningococcal conjugate vaccine was begun in January 2006 (ACIP meeting, October 2010). Because Menactra was the only licensed vaccine until February 2010, the preliminary results are estimates for Menactra only; no data are available regarding the effectiveness of Menveo. As of October 1, 2010, 108 case-patients and 158 controls were enrolled in the effectiveness study. The overall VE estimate in persons vaccinated 0--5 years earlier was 78.0% (95% confidence interval [CI] = 29%--93%). VE for persons vaccinated less than 1 year earlier was 95% (CI = 10%--100%), VE for persons vaccinated 1 year earlier was 91% (CI = 10%--101%), and VE for persons vaccinated 2 through 5 years earlier was 58% (CI = -72%--89%). Although the CIs around the point estimates are wide, the ACIP Work Group concluded that VE wanes.

The ACIP Work Group also concluded that serologic data are consistent with waning immunity. Three characteristics of conjugate vaccines are believed to be important for establishing long-term protection against a bacterial pathogen: memory response, herd immunity, and circulating antibody (5). Recent data from the United Kingdom indicate that although vaccination primes the immune system, the memory response after exposure might not be rapid enough to protect against meningococcal disease. After initial priming with a serogroup C meningococcal conjugate vaccine (MenC), a memory response after a booster dose was not measurable until 5--7 days later (6). The incubation period for meningococcal disease usually is less than 3 days. Although herd immunity has been an important component associated with long-term protection with MenC vaccine in the United Kingdom and other countries, immunization coverage has increased slowly in the United States, and to date no evidence of herd immunity has been observed (ACIP meeting, October 2010). Therefore, the Work Group concluded that circulating bactericidal antibody is critical for protection against meningococcal disease.

The Work Group took into consideration the proportion of subjects with bactericidal antibody levels above thresholds considered protective, depending on the assay used, evaluating antibody persistence in five studies (Table 1). Although each study tested a small number of vaccine recipients, the Work Group concluded that the studies found sufficient evidence to indicate that approximately 50% of persons vaccinated 5 years earlier had bactericidal antibody levels protective against meningococcal disease. Therefore, more than 50% of persons immunized at age 11 or 12 years might not be protected when they are at higher risk at ages 16 through 21 years. Two studies evaluated the response after a booster dose of Menactra at 3 and 5 years after the primary vaccination (7; ACIP meeting, June 2009). At both 3 and 5 years after the first dose, the booster dose elicited substantially higher geometric mean antibody titers (GMT), compared with the titers elicited by a primary dose.
Using a complement serum bactericidal activity (SBA) assay and baby rabbit complement (brSBA) as a measure of immune response, a booster dose administered 5 years after the first dose elicited a GMT for serogroup C of 23,613, compared with 9,045 among subjects administered a primary dose (ACIP meeting, October 2010). As expected with conjugate vaccines, the first dose primes the immune system to have a strong response to the booster dose. Local and systemic reactions to the booster were comparable to those in persons receiving a first dose. The duration of protective antibody after the booster dose is not known but is expected to last through age 21 years for booster doses administered at ages 16 through 18 years.

# Optimizing meningococcal vaccination
Despite the current low burden of meningococcal disease, the ACIP Work Group agreed that because of mounting evidence of waning immunity by 5 years postvaccination, vaccinating adolescents with a single dose at age 11 or 12 years is not the best strategy for protection through age 21 years. The Work Group considered two other options for optimizing protection: moving the dose from age 11 or 12 years to age 14 or 15 years, or vaccinating at age 11 or 12 years and providing a booster dose at age 16 years. Although a single dose at age 14 or 15 years likely would protect most adolescents through the higher-risk period at ages 16 through 21 years, the opportunities to administer vaccine at age 14 or 15 years might be more limited. Data indicate that as adolescents grow older, they are less likely to visit a health-care provider for preventive care (8). Adding a booster dose to the recommended schedule would provide more opportunities to increase vaccination coverage, while persons aged 11 through 13 years would continue to be protected. An economic analysis comparing the three adolescent vaccination strategies concluded that administering a booster dose has a cost per quality-adjusted life year similar to that of a single dose at age 11 years or age 15 years but is estimated to prevent twice the number of cases and deaths (CDC, unpublished data, 2010).

# Rationale for 2-Dose Primary Series for Persons with a Reduced Response to a Single Dose
Evidence supporting the need for a 2-dose primary meningococcal vaccine series for the small number of persons at increased risk for meningococcal disease was reviewed. Data indicated that SBA could be increased with 2 doses administered 2 months apart. For persons who are asplenic or have HIV infection, a 2-dose primary series improves the initial immune response to vaccination. A 2-dose primary series in patients with persistent complement component deficiency will help achieve the high levels of SBA activity needed to confer protection in the absence of effective opsonization. The complement pathway is important in preventing meningococcal disease, and Neisseria meningitidis is the primary bacterial pathogen affecting persons with late complement component deficiency (LCCD) or properdin deficiency. Although persons with LCCD are able to mount an overall antibody response equal to or greater than that of complement-sufficient persons after vaccination with quadrivalent meningococcal polysaccharide vaccine (MPSV4), antibody titers wane more rapidly in persons with complement component deficiency, and higher antibody levels are needed for other clearance mechanisms, such as opsonophagocytosis, to function (9,10). Asplenic persons are at increased risk for invasive infection caused by many encapsulated bacteria, including N. meningitidis.
Moreover, the mortality rate is 40%--70% among these persons when they become infected with N. meningitidis. Asplenic persons achieve significantly lower geometric mean SBA titers than healthy persons after vaccination with meningococcal C conjugate vaccine, with 20% not achieving brSBA titers ≥1:8. This proportion was reduced to 7% when a second dose of vaccine was administered to nonresponders 2 months later, suggesting a booster might be effective in achieving higher circulating antibody levels and improving immunologic memory (11). Patients with HIV infection likely are at increased risk for meningococcal disease, although not to the extent that they are at risk for invasive Streptococcus pneumoniae infection. The risk to persons with HIV infection also is not as great as to persons with complement component deficiency or asplenia. One study has investigated the response rates to a single dose of meningococcal conjugate vaccine among HIV-infected adolescents. Response to vaccination, measured by brSBA titers ≥1:128, was 86%, 55%, 73%, and 72% for serogroups A, C, Y, and W-135, respectively. Response rates were significantly lower among patients with lower CD4+ T-lymphocyte percentages and among those with viral loads above 10,000 copies/mL (12). The immunogenicity and safety of a 2-dose primary series has not been studied in older children and adults. However, Menactra and Menveo have been studied following administration as a 2-dose primary series in infants and young children. Infants vaccinated with a 2-dose primary series of Menactra at ages 9 months and 12 through 15 months achieved high antibody titers after the second dose. Administration of 2 doses of Menveo 2 months apart to children aged 2 through 5 years was associated with a rate of adverse events similar to that of a single dose (13).

# Recommendation for Routine Vaccination of Persons Aged 11 Through 18 Years
ACIP recommends routine vaccination of persons with quadrivalent meningococcal conjugate vaccine at age 11 or 12 years, with a booster dose at age 16 years. After a booster dose of meningococcal conjugate vaccine, antibody titers are higher than after the first dose and are expected to protect adolescents through the period of increased risk through age 21 years. For adolescents who receive the first dose at age 13 through 15 years, a one-time booster dose should be administered, preferably at age 16 through 18 years, before the peak in increased risk. Persons who receive their first dose of meningococcal conjugate vaccine at or after age 16 years do not need a booster dose. Routine vaccination of healthy persons who are not at increased risk for exposure to N. meningitidis is not recommended after age 21 years.

# Recommendation for Persons Aged 2 Through 54 Years with Reduced Immune Response
Data indicate that the immune response to a single dose of meningococcal conjugate vaccine is not sufficient in persons with certain medical conditions. Persons with persistent complement component deficiencies (e.g., C5--C9, properdin, factor H, or factor D) or asplenia should receive a 2-dose primary series administered 2 months apart and then receive a booster dose every 5 years. Adolescents aged 11 through 18 years with HIV infection should be routinely vaccinated with a 2-dose primary series. Other persons with HIV who are vaccinated should receive a 2-dose primary series administered 2 months apart. All other persons at increased risk for meningococcal disease (e.g., microbiologists or travelers to an epidemic or highly endemic country) should receive a single dose.
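The age-based rules above form a small decision table. The sketch below encodes them for illustration only (a hypothetical helper, not a clinical tool); the high-risk branch is reduced to the 2-dose primary series summarized above.

```python
def menacwy_plan(age_at_first_dose: int, high_risk: bool = False) -> str:
    """Encodes the ACIP recommendations summarized above."""
    if high_risk:
        # Persistent complement component deficiency, asplenia, or HIV
        # infection: 2-dose primary series, 2 months apart (complement
        # deficiency and asplenia also need a booster every 5 years).
        return "2-dose primary series administered 2 months apart"
    if age_at_first_dose <= 12:
        return "routine dose at age 11 or 12 years; booster dose at age 16"
    if age_at_first_dose <= 15:
        return "one-time booster, preferably at age 16 through 18 years"
    if age_at_first_dose <= 21:
        return "no booster dose needed"
    return "routine vaccination not recommended for healthy persons >21"
```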
# CDC Guidance for Transition to an Adolescent Booster Dose

Some schools, colleges, and universities have policies requiring vaccination against meningococcal disease as a condition of enrollment. For ease of program implementation, persons aged 21 years or younger should have documentation of receipt of a dose of meningococcal conjugate vaccine not more than 5 years before enrollment. If the primary dose was administered before the 16th birthday, a booster dose should be administered before enrollment in college; the booster dose can be administered anytime after the 16th birthday to ensure that it is provided. The minimum interval between doses of meningococcal conjugate vaccine is 8 weeks. No data are available on the interchangeability of vaccine products. Whenever feasible, the same brand of vaccine should be used for all doses of the vaccination series. If vaccination providers do not know, or do not have available, the type of vaccine product previously administered, any product should be used to continue or complete the series. Persons with complement component deficiency, asplenia, or HIV infection who have previously received a single dose of meningococcal conjugate vaccine should receive their booster dose at the earliest opportunity. These updated meningococcal conjugate vaccine recommendations from ACIP are summarized (Table 2). Additionally, a meningococcal conjugate vaccine information statement is available at http://www.cdc.gov/vaccines/pubs/vis/default.htm, and details regarding the routine meningococcal conjugate vaccination schedule are available at http://www.cdc.gov/vaccines/recs/schedules/default.htm#child. Adverse events after receipt of any vaccine should be reported to the Vaccine Adverse Event Reporting System at http://vaers.hhs.gov.
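For implementers, the enrollment guidance reduces to two checks: how recently the last dose was given, and whether a primary dose given before the 16th birthday has been boosted. A minimal sketch under those assumptions (the helper name and parameters are illustrative, not from the ACIP statement; the 8-week minimum interval is assumed to be enforced when each dose is given):

```python
def meets_enrollment_documentation(age_years: float,
                                   years_since_last_dose: float,
                                   age_at_primary_dose: float,
                                   boosted_after_16th_birthday: bool) -> bool:
    """Illustrative check of the documentation guidance above for persons
    aged 21 years or younger entering a school with a vaccination policy."""
    if age_years > 21:
        return True  # the enrollment guidance above applies through age 21
    if years_since_last_dose > 5:
        return False  # last dose must be no more than 5 years before enrollment
    if age_at_primary_dose < 16 and not boosted_after_16th_birthday:
        return False  # a pre-16 primary dose requires a booster before enrollment
    return True


# Example: primary dose at age 12, no booster, now enrolling at age 18.
print(meets_enrollment_documentation(18, 6, 12, False))  # False
```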
Treatment of latent tuberculosis infection (LTBI) is critical to the control and elimination of tuberculosis disease (TB) in the United States. In 2011, CDC recommended a short-course combination regimen of once-weekly isoniazid and rifapentine for 12 weeks (3HP) by directly observed therapy (DOT) for treatment of LTBI, with limitations for use in children aged <12 years and persons with human immunodeficiency virus (HIV) infection (1). CDC identified the use of 3HP in those populations, as well as self-administration of the 3HP regimen, as areas to address in updated recommendations. In 2017, a CDC Work Group conducted a systematic review and meta-analyses of the 3HP regimen using methods adapted from the Guide to Community Preventive Services. In total, 19 articles representing 15 unique studies were included in the meta-analysis, which determined that 3HP is as safe and effective as other recommended LTBI regimens and achieves substantially higher treatment completion rates. In July 2017, the Work Group presented the meta-analysis findings to a group of TB experts, and in December 2017, CDC solicited input from the Advisory Council for the Elimination of Tuberculosis (ACET) and members of the public for incorporation into the final recommendations. CDC continues to recommend 3HP for treatment of LTBI in adults and now recommends use of 3HP 1) in persons with LTBI aged 2-17 years; 2) in persons with LTBI who have HIV infection, including acquired immunodeficiency syndrome (AIDS), and are taking antiretroviral medications with acceptable drug-drug interactions with rifapentine; and 3) by DOT or self-administered therapy (SAT) in persons aged ≥2 years.

# Systematic Review

A CDC Work Group, including epidemiologists, health scientists, physicians from CDC's Division of Tuberculosis Elimination, and a CDC library specialist, was convened to conduct the systematic literature review using methods adapted from the Guide to Community Preventive Services (2,3). The library specialist used a systematic search strategy to identify and retrieve intervention studies on the use of 3HP to treat LTBI that were published from January 2006 through June 2017 and indexed in the MEDLINE, Embase, CINAHL, Cochrane Library, Scopus, and Clinicaltrials.gov databases. To identify missed studies, reference lists from included articles were reviewed, and CDC's TB experts were consulted. This review included English-language articles that met the following criteria: 1) the study design was randomized controlled trial, quasi-experimental, observational cohort, or other design with a concurrent comparison group; 2) the target population included, but was not restricted to, persons aged ≥12 years, children aged 2-11 years, or persons with HIV infection; and 3) outcomes reported were prevention of TB disease, treatment completion, adverse events while on 3HP, discontinuation as a result of adverse events while on 3HP, or death while on 3HP. Two reviewers from the CDC Work Group independently screened citations obtained from the search and retrieved full-text articles in the relevant literature to be synthesized. Using a standard data abstraction form, the reviewers abstracted data on intervention characteristics, outcomes of interest, demographics, benefits, harms, considerations for implementation, and evidence gaps. Each study was also assessed for threats to internal and external validity per Guide to Community Preventive Services standards (2,3).
Any disagreement between reviewers was resolved by consensus of the CDC Work Group members. The CDC Work Group reviewed 292 citations retrieved from the librarian's search. Of these, 30 full-text articles were ordered and screened for inclusion. No eligible studies including children aged <2 years were identified. In total, 19 articles representing 15 unique studies were included in the meta-analysis. Findings from the meta-analysis indicated that 3HP is as safe and effective as other recommended LTBI regimens and achieves significantly higher treatment completion rates. Complete results of the systematic review and meta-analysis have been published elsewhere (4). Overall, the majority of included studies were of greatest design suitability and good quality of execution, as defined by the Guide to Community Preventive Services (2,3). Issues related to poor reporting of appropriate analytic methods and possible selection bias were the most common limitations assigned to the body of evidence. Recently published randomized controlled trials that were heavily weighted in the meta-analyses, and drug interaction studies (5-9), are summarized as follows:

Study of 3HP in children. A large randomized clinical trial of 3HP administered by DOT, which included children aged 2-17 years, demonstrated that 3HP was as well tolerated and as effective as 9 months of daily isoniazid (9H) for preventing TB (5). The trial also reported that 3HP was safe and had higher treatment completion rates than 9H (5). Data on the safety and pharmacokinetics of rifapentine in children aged <2 years are not available.

Studies of 3HP in persons with HIV infection, including AIDS. In 2011, CDC recommended the 3HP regimen for treatment of LTBI in persons with HIV infection, including AIDS, who are otherwise healthy and who are not taking antiretroviral medications (1). Since that time, additional data confirm not only the effectiveness of 3HP in persons with HIV infection who are not taking antiretroviral therapy, but also demonstrate the absence of clinically significant drug interactions between once-weekly rifapentine and either efavirenz or raltegravir in persons with HIV infection who are treated with those antiretroviral medications (4,6-8).

Study of self-administered therapy. A randomized clinical trial demonstrating noninferior treatment completion and safety of 3HP-SAT compared with 3HP-DOT in persons aged ≥18 years in the United States provides the primary evidence on 3HP administration by SAT (9). The 3HP-SAT regimen has not been studied in randomized controlled trials in persons aged <18 years.

# Expert Consultation

In July 2017, CDC met with nine non-CDC subject matter experts in TB and LTBI diagnosis, treatment, prevention, surveillance, epidemiology, clinical research, pulmonology, pediatrics, HIV/AIDS, public health programs, and patient advocacy. CDC presented the systematic review results and proposed recommendations to the experts, who provided 1) individual perspectives on the review; 2) experience with implementation of the 3HP regimen in various settings and populations; and 3) individual viewpoints on the proposed updates. Subject matter experts from programs prescribing 3HP described benefits of this regimen, including increased acceptance and completion of treatment. Some experts reported that several health departments are currently using 3HP, with high treatment completion, in children as young as age 2 years. Some noted that the 2011 recommendation to administer 3HP by DOT limits use of the regimen.
In December 2017, CDC solicited input from ACET and members of the public for incorporation into the final recommendations. With regard to pediatric use, the 2011 recommendations limited use of the 3HP regimen for treatment of LTBI in children aged <12 years (1). New data on efficacy and safety of 3HP in children were determined to be sufficient to recommend the 3HP regimen for treatment of LTBI in children aged ≥2 years (4). Concerning patients with HIV infection, information about interactions between specific antimycobacterial agents, including rifamycins (e.g., rifampin, rifabutin, and rifapentine), and antiretroviral agents is available in the U.S. Department of Health and Human Services Guidelines for the Use of Antiretroviral Agents in HIV-1-Infected Adults and Adolescents. These frequently updated guidelines include a section addressing management of LTBI in persons with HIV coinfection and tables with information on drug interactions.* Use of concomitant LTBI treatment and antiretroviral agents should be guided by clinicians experienced in the management of both conditions. In 2011, CDC recommended use of the 3HP regimen by DOT (1). Treatment completion rates are highest when the regimen is administered by DOT (4). However, the burden and expense of DOT are greater than those for SAT (9). During the expert consultation, and again during review by ACET, some subject matter experts strongly recommended permitting use of SAT, when combined with clinical monitoring, in children aged ≥2 years. Based on this expert opinion, ACET formally recommended expansion of the option of parentally administered SAT to children. Some experts still prefer DOT for treating LTBI in children aged 2-5 years, in whom risk for TB progression and severe disease is higher than that in older children and adults. Health care providers should make joint decisions about SAT with each individual patient (and parent or legal guardian), considering program resources and the patient's age, medical history, social circumstances, and risk factors for progression to severe TB disease. Subject matter experts stressed the importance of educating providers and patients about 3HP.

# Recommendations

Based on evidence on effectiveness, safety, and treatment completion rates from the systematic review, and after consideration of viewpoints from TB subject matter experts and input from ACET and the public, CDC continues to recommend 3HP for treatment of LTBI in adults and now recommends use of 3HP 1) in persons with LTBI aged 2-17 years; 2) in persons with LTBI who have HIV infection, including AIDS, and are taking antiretroviral medications with acceptable drug-drug interactions with rifapentine; and 3) by DOT or SAT in persons aged ≥2 years. The health care provider should choose the mode of administration (DOT versus SAT) based on local practice, individual patient attributes and preferences, and other considerations, including risk for progression to severe forms of TB disease. Use of concomitant LTBI treatment and antiretroviral agents should be guided by clinicians experienced in the management of both conditions (Box 1).

* https://aidsinfo.nih.gov/guidelines/html/1/adult-and-adolescent-arv/367/overview.
# BOX 1. Updated recommendations for once-weekly isoniazid-rifapentine for 12 weeks (3HP) for the treatment of latent tuberculosis infection

CDC continues to recommend use of the short-course combination regimen of once-weekly isoniazid-rifapentine for 12 weeks (3HP) for treatment of latent tuberculosis infection (LTBI) in adults. With regard to age limits, HIV infection, and administration of the treatment, CDC now also recommends the following:
• use of 3HP in persons with LTBI aged 2-17 years;
• use of 3HP in persons with LTBI who have HIV infection, including AIDS, and are taking antiretroviral medications with acceptable drug-drug interactions with rifapentine; and
• use of 3HP by DOT or SAT in persons aged ≥2 years.

Patient monitoring and adverse events. Hepatic enzymes and other blood tests should be performed for certain patients before initiation of 3HP therapy (Box 2). Approximately 4% of all patients using 3HP experience flu-like or other systemic drug reactions, with fever, headache, dizziness, nausea, muscle and bone pain, rash, itching, red eyes, or other symptoms (4,10). Approximately 5% of persons discontinue 3HP because of adverse events, including systemic drug reactions (4,10); these reactions typically occur after the first 3-4 doses and begin approximately 4 hours after ingestion of the medication (10). Hypotension and syncope have been reported rarely (two cases per 1,000 persons treated) (4,10). If symptoms suggestive of a systemic drug reaction occur, patients should stop 3HP while the cause is determined. Symptoms usually resolve without treatment within 24 hours. Neutropenia and elevation of liver enzymes occur uncommonly (4,10). CDC recommends that health care providers educate patients to report adverse events. Patient use of symptom checklists might facilitate timely recognition and reporting. † Rifapentine is a rifamycin compound; like rifampin, it induces metabolism of many medications. CDC recommends monitoring of patients when 3HP is prescribed with interacting medications (e.g., methadone or warfarin).

# BOX 2. Guidance to health care providers during treatment of latent tuberculosis infection (LTBI) with a combination regimen of isoniazid and rifapentine in 12 once-weekly doses (3HP)

• Evaluate all patients for active tuberculosis disease both before and during treatment of LTBI.
• Inform the patient or parents or legal guardians about possible adverse effects and instruct them to seek medical attention when symptoms of possible adverse reaction first appear, particularly drug hypersensitivity reactions, rash, hypotension, or thrombocytopenia.
• Conduct monthly evaluations to assess treatment adherence and adverse effects, with repeated patient education regarding adverse effects at each visit.
• Order baseline hepatic chemistry blood tests (at least aspartate aminotransferase [AST]) for patients with the following specific conditions: human immunodeficiency virus infection, liver disorders, postpartum period (≤3 months after delivery), regular alcohol use, injection drug use, or use of medications with known possible interactions.
• Conduct blood tests at subsequent clinical encounters for patients whose baseline testing is abnormal and for others at risk for liver disease. Discontinue 3HP if a serum AST concentration is ≥5 times the upper limit of normal in the absence of symptoms or ≥3 times the upper limit of normal in the presence of symptoms.
• In case of a possible severe adverse reaction, discontinue 3HP and provide supportive medical care. Conservative management and continuation of 3HP under observation can be considered in the presence of mild to moderate adverse events, as determined by the health care provider.
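The AST thresholds in Box 2 amount to a simple stopping rule. The following Python sketch illustrates that rule only; the function name is invented here, and AST is assumed to be expressed as a multiple of the upper limit of normal.

```python
def discontinue_3hp(ast_multiple_of_uln: float, symptomatic: bool) -> bool:
    """Box 2 stopping rule: discontinue 3HP if AST is >=5x the upper limit
    of normal without symptoms, or >=3x the upper limit with symptoms."""
    threshold = 3.0 if symptomatic else 5.0
    return ast_multiple_of_uln >= threshold


# An AST of 4x the upper limit of normal is actionable only with symptoms.
assert discontinue_3hp(4.0, symptomatic=False) is False
assert discontinue_3hp(4.0, symptomatic=True) is True
```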
Rifapentine can reduce the effectiveness of hormonal contraceptives; therefore, women who use hormonal birth control should be advised to add, or switch to, a barrier method. Women should be advised to inform their health care provider if they decide to try to become pregnant or become pregnant during 3HP treatment. Because altered dosing might reduce effectiveness or safety, patients on 3HP SAT should be encouraged to record medication intake and report deviations from the prescribed regimen. Persons on 3HP regimens should be evaluated monthly (in person or by telephone) to assess adherence and adverse effects. Additional studies are needed to understand the pharmacokinetics, safety, and tolerance of 3HP in children aged <2 years; adherence and safety of 3HP-SAT in persons aged <18 years; and safety of 3HP during pregnancy (4).

Any LTBI treatment-associated adverse effect leading to hospital admission or death should be reported by health care providers to local or state health departments for inclusion in the National Surveillance for Severe Adverse Events Associated with Treatment for LTBI (e-mail: ltbidrugevents@cdc.gov). Serious drug side effects, product quality problems, and therapeutic failures should be reported to the Food and Drug Administration's MedWatch program (https://www.fda.gov/Safety/MedWatch/HowToReport/default.htm) or by telephoning 1-800-FDA-1088. Additional information regarding 3HP is available at https://www.cdc.gov/tb/publications/ltbi/ltbiresources.htm. Questions also can be directed to CDC's Division of Tuberculosis Elimination by e-mail ([email protected]) or by telephoning 800-CDC-INFO (800-232-4636).

# Conflict of Interest

No conflicts of interest were reported.

1 Division of Tuberculosis Elimination, National Center for HIV/AIDS, Viral Hepatitis, STD, and TB Prevention, CDC. Corresponding author: Andrey S. Borisov, [email protected], 404-639-8056.
Chorionic villus sampling (CVS) and amniocentesis are prenatal diagnostic procedures that are performed to detect fetal abnormalities. In 1991, concerns about the relative safety of these procedures arose after reports were published that described a possible association between CVS and birth defects in infants. Subsequent studies support the hypothesis that CVS can cause transverse limb deficiencies. Rates of these defects following CVS, estimated at 0.03%-0.10% (1/3,000-1/1,000), have generally exceeded background rates. Rates and severity of limb deficiencies are associated with the timing of CVS; most of the birth defects reported after procedures that were performed at ≥70 days' gestation were limited to the fingers or toes. The risk for either digital or limb deficiency after CVS is only one of several important factors that must be considered in making complex and personal decisions about prenatal testing. For example, CVS is generally done earlier in pregnancy than amniocentesis and is particularly advantageous for detecting certain genetic conditions. Another important factor is the risk for miscarriage, which is estimated at 0.5%-1.0% for CVS procedures and 0.25%-0.50% for amniocentesis procedures. Prospective parents considering the use of either CVS or amniocentesis should be counseled about the benefits and risks of these procedures. The counselor should also discuss both the mother's and father's risk(s) for transmitting genetic abnormalities to the fetus.

# INTRODUCTION

Chorionic villus sampling (CVS) and amniocentesis are prenatal diagnostic procedures used to detect certain fetal genetic abnormalities. Both procedures increase the risk for miscarriage (1). In addition, concern has been increasing among health-care providers and public health officials about the potential occurrence of birth defects resulting from CVS (2). This report describes CVS and amniocentesis, provides information on indications for their use, reviews studies about the safety of the procedures, compares the benefits and risks of the two procedures (focusing particularly on the risk for limb deficiency after CVS), and provides recommendations for counseling about these issues. A public meeting was convened on March 11, 1994, to discuss the results of studies of CVS-associated limb deficiencies and preliminary counseling recommendations that had been drafted at CDC (3). Participants included geneticists, obstetricians, pediatricians, epidemiologists, teratologists, dysmorphologists, and genetic counselors who had a particular interest in CVS studies or who represented professional organizations and government agencies. Participants provided diverse opinions about recommendations for counseling, both at the meeting and in subsequent written correspondence; input from participants has been incorporated into this document.

# USE OF CVS AND AMNIOCENTESIS

CVS uses either a catheter or a needle to biopsy placental cells, which are derived from the same fertilized egg as the fetus. During amniocentesis, a small sample of the fluid that surrounds the fetus is removed. This fluid contains cells that are shed primarily from the fetal skin, bladder, gastrointestinal tract, and amnion. Typically, CVS is done at 10-12 weeks' gestation, and amniocentesis is done at 15-18 weeks' gestation.
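The risk figures quoted above are easier to compare when expressed as expected events per 10,000 procedures, which is how counselors often frame them. The following is a minimal Python sketch of that conversion, using the ranges stated in this report; the helper name is invented here.

```python
def events_per_10000(low_pct, high_pct):
    """Convert a percentage risk range into expected events per 10,000 procedures."""
    return low_pct / 100 * 10_000, high_pct / 100 * 10_000


print("CVS miscarriage:", events_per_10000(0.5, 1.0))                    # (50.0, 100.0)
print("Amniocentesis miscarriage:", events_per_10000(0.25, 0.5))         # (25.0, 50.0)
print("CVS transverse limb deficiency:", events_per_10000(0.03, 0.10))   # (3.0, 10.0)
```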
In the United States, the current standard of care in obstetrical practice is to offer either CVS or amniocentesis to women who will be ≥35 years of age when they give birth, because these women are at increased risk for giving birth to infants with Down syndrome and certain other types of aneuploidy. Karyotyping of cells obtained by either amniocentesis or CVS is the standard and definitive means of diagnosing aneuploidy in fetuses. The risk that a woman will give birth to an infant with Down syndrome increases with age. For example, for women 35 years of age, the risk is 1 per 385 births (0.3%), whereas for women 45 years of age, the risk is 1 per 30 births (3%) (1). The background risk for major birth defects (with or without chromosomal abnormalities) for women of all ages is approximately 3%. Before widespread use of amniocentesis, several controlled studies were conducted to evaluate the safety of the procedure. The major finding from these studies was that amniocentesis increases the rate of miscarriage (i.e., spontaneous abortion) by approximately 0.5%. Subsequent to these studies, amniocentesis became an accepted standard of care in the 1970s. In 1990, more than 200,000 amniocentesis procedures were performed in the United States (4). In the 1960s and 1970s, exploratory studies revealed that the placenta (i.e., chorionic villi) could be biopsied through a catheter and that sufficient placental cells could be obtained to permit certain genetic analyses earlier in pregnancy than is possible with amniocentesis. In the United States, this procedure was initially evaluated in a controlled trial designed to determine the miscarriage rate (5). The fetal-loss rate was estimated to be 0.8% higher after CVS than after amniocentesis, although this difference was not statistically significant. Because that study was designed to determine miscarriage rates, it had limited statistical power to detect small increases in risks for individual birth defects. CVS had become widely used worldwide by the early 1980s. The World Health Organization (WHO) sponsors an International Registry of CVS procedures; data in the International Registry probably represent less than half of all procedures performed worldwide (6). More than 80,000 procedures were reported to the International Registry from 1983-1992 (6); approximately 200,000 procedures were registered from 1983-1995 (L. Jackson, personal communication). CVS is performed in hospitals, outpatient clinics, selected obstetricians' offices, and university settings; these facilities are often collectively referred to as prenatal diagnostic centers. Some investigators have reported that the availability of CVS increased the overall utilization of prenatal diagnostic procedures among women ≥35 years of age, suggesting that access to first-trimester testing may make prenatal chromosome analysis appealing to a larger number of women (7). Another group of obstetricians did not see an increase in overall utilization when CVS was introduced (8); the increase in CVS procedures was offset by a decrease in amniocentesis, suggesting that the effect of CVS availability on the utilization of prenatal diagnostic testing depends on local factors. In the United States, an estimated 40% of pregnant women ≥35 years of age underwent either amniocentesis or CVS in 1990 (9).
Although maternal age-related risk for fetal aneuploidy is the usual indication for CVS or amniocentesis, prospective mothers or fathers of any age might desire fetal testing when they are at risk for passing on certain mendelian (single-gene) conditions. In a randomized trial conducted in the United States, 19% of women who underwent CVS were <35 years of age (10). DNA-based diagnoses of mendelian conditions, such as cystic fibrosis, hemophilia, muscular dystrophy, and hemoglobinopathies, can be made by direct analysis of uncultured chorionic villus cells (a more efficient method than culturing amniocytes) (11). However, amniocentesis is particularly useful to prospective parents who have a family history of neural tube defects, because alpha-fetoprotein (AFP) testing can be done on amniotic fluid but cannot be done on CVS specimens. When testing for chromosomal abnormalities associated with advanced maternal age, CVS may be more acceptable than amniocentesis to some women because of the psychological and medical advantages provided by CVS through earlier diagnosis of abnormalities. Fetal movement is usually felt and uterine growth is visible at 17-19 weeks' gestation, the time when abnormalities are detected by amniocentesis; thus, deciding what action to take if an abnormality is detected at this time may be more difficult psychologically (12). Using CVS to diagnose chromosomal abnormalities during the first trimester allows a prospective parent to make this decision earlier than amniocentesis does. Maternal morbidity and mortality associated with induced abortion increase significantly with increasing gestational age; thus, the timing of diagnosis of chromosomal abnormalities is important. Results of studies of abortion complications conducted by CDC from 1970 through 1978 indicated that the risk for major abortion complications (e.g., prolonged fever, hemorrhage necessitating blood transfusion, and injury to pelvic organs) increases with advancing gestational age. For example, from 1971 through 1974, the major complication rate was 0.8% at 11-12 weeks' gestation, compared with 2.2% at 17-20 weeks' gestation (13). However, the risk for developing major complications from abortion at any gestational age decreased during the 1970s. More contemporary national morbidity data based on current abortion practices are not yet available. CDC surveillance data also indicate an increase in the risk for maternal death with increasing gestation. From 1972 through 1987, the risk for abortion-related death was 1.1 deaths per 100,000 abortions performed at 11-12 weeks' gestation, compared with 6.9 deaths per 100,000 abortions for procedures performed at 16-20 weeks' gestation (14). The lower risk associated with first-trimester abortions may be an important factor for prospective parents who are deciding between CVS and amniocentesis. Amniocentesis is usually performed at 15-18 weeks' gestation, but more amniocentesis procedures are now being performed at 11-14 weeks' gestation. "Early" amniocentesis (defined as <15 weeks' gestation) remains investigational because the safety of the procedure is currently being evaluated in controlled trials (15). Risk estimates for miscarriage caused by either CVS or midtrimester amniocentesis have been adjusted to account for spontaneous fetal losses that occur early in pregnancy and are not procedure-related.
Although one randomized trial indicated that the amniocentesis-related miscarriage rate may be as high as 1%, counselors usually cite risks for miscarriage from other amniocentesis studies ranging from 0.25%-0.50% (1/400-1/200) (1,15 ). Rates of miscarriage after CVS vary widely by the center at which CVS was performed (16 ). Adjusting for confounding factors such as gestational age, the CVS-related miscarriage rate is approximately 0.5%-1.0% (1/200-1/100) (1 ). Although uterine infection (i.e., chorioamnionitis) is one possible reason for miscarriage after either CVS or amniocentesis, infection has occurred rarely after either procedure. In one study, no episodes of septic shock were reported after 4,200 CVS procedures, although less severe infections may have been associated with 12 of the 89 observed fetal losses (5 ). Overall infection rates have been <0.1% after either CVS or amniocentesis (15 ). Cytogenetically ambiguous results caused by factors such as maternal cell contamination or culture-related mosaicism are reported more often after CVS than after amniocentesis (2 ). In these instances, follow-up amniocentesis might be required to clarify results, increasing both the total cost of testing and the risk for miscarriage. However, ambiguous CVS results also may indicate a condition (e.g., confined placental mosaicism) that has been associated with adverse outcomes for the fetus (11 ). Thus, in these situations, CVS may be more informative than amniocentesis alone. # LIMB DEFICIENCIES AMONG INFANTS WHOSE MOTHERS UNDERWENT CVS Certain congenital defects of the extremities, known as limb deficiencies or limbreduction defects, have been reported among infants whose mothers underwent CVS. This section addresses 1) the expected frequency and classification of these birth defects, 2) the physical features of reported infants in relation to the timing of associated CVS procedures, and 3) cohort and case-control studies that have been done to systematically examine whether CVS increases the risk for limb deficiencies. # Population-Based Rates and Classification of Limb Deficiencies Population-based studies indicate that the risk for all limb deficiencies is from 5-6 per 10,000 live births (17 ). Limb deficiencies usually are classified into distinct anatomic and pathogenetic categories. The most common subtypes are transverse terminal defects, which involve absence of distal structures with intact proximal segments, with the axis of deficiency perpendicular to the extremity. Approximately 50% of all limb deficiencies are transverse, and 50% of those defects are digital, involving the absence of parts of one or more fingers or toes. Transverse deficiencies occur as either isolated defects or with other major defects. The rare combination of transverse limb deficiencies with either absence or hypoplasia of the tongue and lower jaw-usually referred to as oromandibular-limb hypogenesis or hypoglossia/hypodactyly -occurs at a rate of approximately 1 per 200,000 births. Although the cause of many isolated limb deficiencies and multiple anomalies that include transverse deficiencies is unknown, researchers have hypothesized that these deficiencies are caused by vascular disruption either during the formation of embryonic limbs or in already-formed fetal limbs (17,18 ). # Limb Deficiencies Reported in Infants Exposed to CVS Reports of clusters of infants born with limb deficiencies after CVS were first published in 1991 (19 ). 
Three studies illustrate the spectrum of CVS-associated defects (19)(20)(21). Data from these studies suggest that the severity of the outcome is associated with the specific time of CVS exposure. Exposure at ≥70 days' gestation has been associated with more limited defects, isolated to the distal extremities, whereas earlier exposures have been associated with more proximal limb deficiencies and orofacial defects. For example, in a study involving 14 infants exposed to CVS at 63-79 days' gestation and examined by a single pediatrician, 13 had isolated transverse digital deficiencies (20 ). In another study in Oxford of five infants exposed to CVS at 56-66 days' gestation, four had transverse deficiencies with oromandibular hypogenesis (19 ). In a review of published worldwide data, associated defects of the tongue or lower jaw were reported for 19 of 75 cases of CVS-associated limb deficiencies (21 ). Of those 19 infants with oromandibular-limb hypogenesis, 17 were exposed to CVS before 68 days' gestation. In this review, 74% of infants exposed to CVS at ≥70 days' gestation had digital deficiencies without proximal involvement. # Cohorts of CVS-Exposed Pregnancies Cohort studies usually measure rates of a specified outcome in an exposed group compared with an unexposed group. Ideally, both groups should be selected randomly from the same study population. The three largest collaborative trials of CVS in Europe, Canada, and the United States were designed originally in this way; however, in these studies, the outcome of interest was fetal death. The report of the first U.S. collaborative trial included no mention of any structural defects; such outcomes were reported later (5 ). After the initial case reports in 1991, neonatal outcomes from the collaborative trials were analyzed more intensively (22 ). However, rather than comparing rates for limb defects in the CVS-exposed cohorts with those of amniocentesis-exposed cohorts from the same study population, the rates in the CVS groups were compared with population-based rates. Consequently, these comparisons must be interpreted with caution because population-based rates are derived differently (i.e., usually from birth-defect registries). CVS-associated risk for limb deficiencies could be underestimated by these comparisons if follow-up of pregnancies in the exposed cohort is incomplete. Other epidemiologic issues must also be considered when interpreting comparisons of crude rates. Unless a formal meta-analysis is performed, these comparisons neither account for heterogeneity between studies nor assign individual "weights" to studies. Comparisons of crude rates also do not adjust for potential confounding variables, such as maternal age. Methods of anatomic subclassification also vary between registries and can differ from methods applied to CVS-exposed cohorts. In addition, comparing overall rates of limb deficiency in groups exposed to CVS with groups unexposed to CVS might overlook an association with a specific phenotype, such as transverse deficiency. Published CVS cohort studies of >1,000 CVS procedures include data from 65 CVS centers (Table 1). These rates include studies that describe affected limbs in sufficient detail to exclude nontransverse defects. Rates calculated for the smaller cohorts (i.e., centers performing <3,500 procedures) are less stable, but the overall rate of nonsyndromic transverse limb deficiency from these centers was 7.4 per 10,000 procedures. 
This crude rate can be compared with rates of transverse deficiencies from Victoria (Australia) and Boston, Massachusetts (United States), where cases were classified to resemble the phenotype of CVS-exposed infants with limb deficiencies, including deficiencies of single digits (Table 2). The range of rates for these two populations (1.5-2.3 per 10,000 births) is representative of rates reported for other populations. The threefold to fivefold increase in the overall rate for the 65 centers compared with the rates for Victoria or Boston is statistically significant (chi-square: p<0.001) (17,32 ). Investigators participating in the International Registry also have combined birthdefect data from multiple CVS centers, including some of the 65 CVS centers (16,35 ). An abstract published in 1994 includes information about 138,000 procedures reported to the International Registry. The rate of transverse deficiencies in the reporting centers was 1.4 per 10,000 procedures, lower than most population-based rates; the distribution of limb-deficiency subtypes was similar to the results of a study of limb deficiencies in British Columbia. The variability in limb-deficiency rates could be related to three possible explanations: 1) Different methods of classification. The method of classification of limb deficiencies for the International Registry resulted in a smaller proportion of transverse deficiencies (compared with all limb deficiencies) than some population-based studies (17,32,36,37 ). The reason for this smaller proportion is that the definition of "transverse terminal deficiencies" is more restrictive and includes only defects that extend across the complete width of a limb and excludes terminal deficiencies of fewer than five digits. 2) Ascertainment of outcomes. Ascertainment of outcomes may be incomplete in CVS registries because deliveries can occur at a hospital remote from where the CVS was performed and might not be reported back to the CVS center. The effect of this incomplete ascertainment would be to underestimate risk for adverse outcomes. 3) Differences among centers in the performance of CVS. Investigators have compared rates of miscarriages and rates of limb deficiencies at individual facilities. This comparison is based on the assumption that the causes of both miscarriage and limb defects might be related to particular techniques of sampling by individual obstetricians. The association between high miscarriage and limb-deficiency rates in one U.S. CVS center was cited as potential evidence of the role of surgical inexperience (24). A cluster of limb deficiencies in another U.S. teaching hospital (five after 507 CVS procedures) was not associated with elevated miscarriage rates; chorionic villus sample sizes were larger at this hospital than at another hospital affiliated with the same university that reported no infants with limb defects (38 ). # Case-Control Studies Case-control approaches with a minimum of 100 case and 100 control patients have greater statistical power than cohort studies of 10,000 or fewer births to detect a fourfold increase in risk for transverse deficiencies (the degree of relative risk suggested by data from the 65 CVS centers) (36 ). Investigators participating in multicenter birth-defect studies have used this case-control approach both to measure the strength of the association between CVS and limb deficiency and to determine if a dose-response (or gradient) effect of risk exists. 
The latter effect would be indicated by an increased relative risk for limb deficiency after earlier procedures, suggested in case reports of CVS-associated limb deficiencies by the high frequency of early exposures to CVS. Three case-control studies have used infants with limb deficiencies registered in surveillance systems and control infants with other birth defects to examine and compare exposure rates to CVS (36,37,39 ). The odds ratios for CVS exposure (an estimate of the relative risk for limb deficiency after CVS) are summarized in Table 3. The U.S. Multistate Case-Control Study and the study of the Italian Multicentric Birth Defects Registry both indicated a significant association between CVS exposure and subtypes of transverse limb deficiencies (36,37 ). The EUROCAT study did not analyze risk for transverse limb deficiencies (39 ); the risk for all limb deficiencies (odds ratio =1.8, 95% confidence interval =0.7-5.0) was similar to that measured in the U.S. Multistate Case-Control Study for all limb deficiencies (OR=1.7, 95% CI=0.4-6.3) (36 ). Analysis of subtypes in the U.S. study indicated a sixfold increase in risk for transverse digital deficiencies (36 ). In the U.S. study, no association between limb deficiencies and amniocentesis was observed. In the study of the Italian Multicentric Birth Defects Registry, the association between CVS exposure and transverse limb deficiencies was stronger (Table 3) (37 ). # GESTATIONAL AGE AT CVS The lower risk observed in the United States may be related to the later mean gestational age of exposure. Increased risk was associated with decreased gestational age at the time of exposure (Table 4). The risk for transverse deficiencies was greatest at ≤9 weeks' gestation. An analysis of cohort studies regarding the timing of CVS indicated a similar gradient with a relative risk for transverse deficiencies of 6.2 at <10 weeks' and 2.4 at ≥10 weeks' gestation (40 ). Because of reports of high rates of severe limb deficiencies after CVS at 6-7 weeks' gestation, a WHO-sponsored committee recommended that CVS be performed at 9-12 weeks after the last menstrual period (16). # POSSIBLE MECHANISMS OF CVS-ASSOCIATED LIMB DEFICIENCY Several biological events have been proposed to explain the occurrence of limb deficiency after CVS, the variation in severity, and the risk associated with the timing of the procedure. These mechanisms, which include thromboembolization or fetal hypoperfusion through hypovolemia or vasoconstriction, are based on the assumption that the defects associated with CVS were caused by some form of vascular disruption. The limbs and mandible are susceptible to such disruption before 10 weeks' gestation (17 ); however, isolated transverse limb deficiencies related to fetal hypoperfusion have been reported at 11 weeks' gestation (18 ). The rich vascular supply of chorionic villi can potentially be disrupted by instrumentation. Data from one study of embryoscopic procedures demonstrated fetal hemorrhagic lesions of the extremities following placental trauma, which produced subchorionic hematomas (41 ). Placental hemorrhage following CVS could lead to substantial fetal hypovolemia with subsequent hypoperfusion of the extremities. Because animal models show that limb deficiencies have been produced by either vasoconstrictive agents or occlusion of uterine vessels, some researchers have hypothesized that CVS-associated defects might be caused by uteroplacental insufficiency (42 ). 
Although the period of highest embryonic susceptibility appears to be when CVS is performed before 9 weeks' gestation (i.e., early CVS), these mechanisms also can disrupt limb structures at later gestational ages. # ABSOLUTE RISK FOR LIMB DEFICIENCY Subtypes of limb deficiencies rarely occur in the population of infants not exposed to CVS. Thus, even a sixfold increase in risk for such types as digital defects (the finding of the U.S. Multistate Case-Control Study) is comparable to a small absolute risk (i.e., 3.46 cases per 10,000 CVS procedures ) (36 ). The upper 95% confidence limit for this absolute risk estimate is approximately 0.1%. A range of absolute risk from 1 per 3,000 to 1 per 1,000 CVS procedures (0.03%-0.10%) for all transverse deficiencies is consistent with the overall increase in risk reported by the 65 centers (Table 1). In cohort studies that reported the timing of the CVS, the absolute risk for transverse limb deficiencies was 0.20% at ≤9 weeks, 0.10% at 10 weeks, and 0.05% at ≥11 weeks (0.07% at ≥10 weeks of gestation) (40 ). The absolute risk for CVS-related birth defects is lower than the procedure-related risk for miscarriage that counselors usually quote to prospective parents (i.e., 0.5% to 1.0%) and also is lower than the risk for Down syndrome at age 35 (0.3%). Data from a decision analysis study supported the conclusion that, weighing a range of possible risks associated with prenatal testing, amniocentesis was preferred to CVS (43 ). This study was published in 1991 and did not consider risk for limb deficiency. Data indicate that publication of the initial case reports of limb deficiency decreased subsequent utilization of CVS (44,45 ). However, one study demonstrated that prospective parents who were provided with formal genetic counseling, including information about limb deficiencies and other risks and benefits, chose CVS at a rate similar to a group of prospective parents who were counseled before published reports of CVS-associated limb deficiencies (44 ). # RECOMMENDATIONS An analysis of all aspects of CVS and amniocentesis indicates that the occasional occurrence of CVS-related limb defects is only one of several factors that must be considered in counseling prospective parents about prenatal testing. Factors that can influence prospective parents' choices about prenatal testing include their risk for transmitting genetic abnormalities to the fetus and their perception of potential complications and benefits of both CVS and amniocentesis. Prospective parents who are considering the use of either procedure should be provided with current data for informed decision making. Individualized counseling should address the following: # Indications for procedures and limitations of prenatal testing - Counselors should discuss the prospective parents' degree of risk for transmitting genetic abnormalities based on factors such as maternal age, race, and family history. - Prospective parents should be made aware of both the limitations and usefulness of either CVS or amniocentesis in detecting abnormalities. # Potential serious complications from CVS and amniocentesis - Counselors should discuss the risk for miscarriage attributable to both procedures: the risk from amniocentesis at 15-18 weeks' gestation is approximately 0.25%-0.50% (1/400-1/200), and the miscarriage risk from CVS is approximately 0.5%-1.0% (1/200-1/100). - Current data indicate that the overall risk for transverse limb deficiency from CVS is 0.03%-0.10% (1/3,000-1/1,000). 
Current data indicate no increase in risk for limb deficiency after amniocentesis at 15-18 weeks' gestation. - The risk and severity of limb deficiency appear to be associated with the timing of CVS: the risk at <10 weeks' gestation (0.20%) is higher than the risk from CVS done at ≥10 weeks' gestation (0.07%). Most defects associated with CVS at ≥10 weeks' gestation have been limited to the digits. The following CDC staff members prepared this report: # Timing of procedures - The timing of obtaining results from either CVS or amniocentesis is relevant because of the increased risks for maternal morbidity and mortality associated with terminating pregnancy during the second trimester compared with the first trimester (13,14). - Many amniocentesis procedures are now done at 11-14 weeks' gestation; however, further controlled studies are necessary to fully assess the safety of early amniocentesis. # The Morbidity and Mortality Weekly Report (MMWR) Series is prepared by the Centers for Disease Control and Prevention (CDC) and is available on a paid subscription basis from the
Chorionic villus sampling (CVS) and amniocentesis are prenatal diagnostic procedures that are performed to detect fetal abnormalities. In 1991, concerns about the relative safety of these procedures arose after reports were published that described a possible association between CVS and birth defects in infants. Subsequent studies support the hypothesis that CVS can cause transverse limb deficiencies. Rates of these defects after CVS, estimated at 0.03%-0.10% (1/3,000-1/1,000), generally have been higher than background rates. Rates and severity of limb deficiencies are associated with the timing of CVS; most of the birth defects reported after procedures performed at ≥70 days' gestation were limited to the fingers or toes. The risk for digital or limb deficiency after CVS is only one of several important factors that must be considered in making complex and personal decisions about prenatal testing. For example, CVS is generally done earlier in pregnancy than amniocentesis and is particularly advantageous for detecting certain genetic conditions. Another important factor is the risk for miscarriage, which is approximately 0.5%-1.0% after CVS and 0.25%-0.50% after amniocentesis. Prospective parents considering the use of either CVS or amniocentesis should be counseled about the benefits and risks of these procedures. The counselor should also discuss both the mother's and the father's risk for transmitting genetic abnormalities to the fetus.

# INTRODUCTION
Chorionic villus sampling (CVS) and amniocentesis are prenatal diagnostic procedures used to detect certain fetal genetic abnormalities. Both procedures increase the risk for miscarriage (1). In addition, concern has been increasing among health-care providers and public health officials about the potential occurrence of birth defects resulting from CVS (2). This report describes CVS and amniocentesis, provides information on indications for their use, reviews studies about the safety of the procedures, compares the benefits and risks of the two procedures (focusing particularly on the risk for limb deficiency after CVS), and provides recommendations for counseling about these issues. A public meeting was convened on March 11, 1994, to discuss the results of studies of CVS-associated limb deficiencies and preliminary counseling recommendations that had been drafted at CDC (3). Participants included geneticists, obstetricians, pediatricians, epidemiologists, teratologists, dysmorphologists, and genetic counselors who had a particular interest in CVS studies or who represented professional organizations and government agencies. Participants provided diverse opinions about recommendations for counseling, both at the meeting and in subsequent written correspondence; input from participants has been incorporated into this document.

# USE OF CVS AND AMNIOCENTESIS
CVS uses either a catheter or a needle to biopsy placental cells (chorionic villi) that are derived from the same fertilized egg as the fetus. During amniocentesis, a small sample of the fluid that surrounds the fetus is removed. This fluid contains cells that are shed primarily from the fetal skin, bladder, gastrointestinal tract, and amnion. Typically, CVS is done at 10-12 weeks' gestation, and amniocentesis is done at 15-18 weeks' gestation.
In the United States, the current standard of care in obstetrical practice is to offer either CVS or amniocentesis to women who will be ≥35 years of age when they give birth, because these women are at increased risk for giving birth to infants with Down syndrome and certain other types of aneuploidy. Karyotyping of cells obtained by either amniocentesis or CVS is the standard and definitive means of diagnosing aneuploidy in fetuses. The risk that a woman will give birth to an infant with Down syndrome increases with age: for women 35 years of age, the risk is 1 per 385 births (0.3%), whereas for women 45 years of age, the risk is 1 per 30 births (3%) (1). The background risk for major birth defects (with or without chromosomal abnormalities) for women of all ages is approximately 3%.

Before amniocentesis came into widespread use, several controlled studies were conducted to evaluate the safety of the procedure. The major finding from these studies was that amniocentesis increases the rate of miscarriage (i.e., spontaneous abortion) by approximately 0.5%. After these studies, amniocentesis became an accepted standard of care in the 1970s. In 1990, more than 200,000 amniocentesis procedures were performed in the United States (4).

Exploratory studies conducted in the 1960s and 1970s demonstrated that the placenta (i.e., chorionic villi) could be biopsied through a catheter and that sufficient placental cells could be obtained to permit certain genetic analyses earlier in pregnancy than is possible with amniocentesis. In the United States, this procedure was initially evaluated in a controlled trial designed to determine the miscarriage rate (5). The fetal-loss rate was estimated to be 0.8% higher after CVS than after amniocentesis, although this difference was not statistically significant. Because that study was designed to determine miscarriage rates, it had limited statistical power to detect small increases in risks for individual birth defects. CVS had become widely used worldwide by the early 1980s. The World Health Organization (WHO) sponsors an International Registry of CVS procedures; data in the International Registry probably represent less than half of all procedures performed worldwide (6). More than 80,000 procedures were reported to the International Registry during 1983-1992 (6); approximately 200,000 procedures were registered during 1983-1995 (L. Jackson, personal communication). CVS is performed in hospitals, outpatient clinics, selected obstetricians' offices, and university settings; these facilities are often collectively referred to as prenatal diagnostic centers.

Some investigators have reported that the availability of CVS increased the overall utilization of prenatal diagnostic procedures among women ≥35 years of age, suggesting that access to first-trimester testing may make prenatal chromosome analysis appealing to a larger number of women (7). Another group of obstetricians observed no increase in overall utilization when CVS was introduced (8); in that setting, the increase in CVS procedures was offset by a decrease in amniocentesis, suggesting that the effect of CVS availability on the utilization of prenatal diagnostic testing depends on local factors. In the United States, an estimated 40% of pregnant women ≥35 years of age underwent either amniocentesis or CVS in 1990 (9).
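The 1-in-N risk figures cited above convert directly to percentages. The following minimal sketch (illustrative Python; the helper name is ours, not CDC's) reproduces the conversions and compares them with the approximately 3% background risk for major birth defects.

```python
# Sketch: convert the "1 per N births" Down syndrome risks cited above to
# percentages. Figures come from the text; the helper name is illustrative.

def one_in_n_to_percent(n: float) -> float:
    """Convert a '1 per n births' risk to a percentage."""
    return 100.0 / n

BACKGROUND_MAJOR_DEFECT_PCT = 3.0  # all ages, with or without chromosomal abnormalities

for age, n in [(35, 385), (45, 30)]:
    # 1/385 -> 0.3%; 1/30 -> 3.3% (the text rounds this to 3%)
    print(f"Maternal age {age}: Down syndrome risk 1/{n} = {one_in_n_to_percent(n):.1f}% "
          f"(background risk for any major defect: {BACKGROUND_MAJOR_DEFECT_PCT:.0f}%)")
```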
Although maternal age-related risk for fetal aneuploidy is the usual indication for CVS or amniocentesis, prospective mothers or fathers of any age might desire fetal testing when they are at risk for passing on certain mendelian (single-gene) conditions. In a randomized trial conducted in the United States, 19% of women who underwent CVS were <35 years of age (10). DNA-based diagnoses of mendelian conditions, such as cystic fibrosis, hemophilia, muscular dystrophy, and hemoglobinopathies, can be made by direct analysis of uncultured chorionic villus cells, a more efficient method than culturing amniocytes (11). However, amniocentesis is particularly useful to prospective parents who have a family history of neural tube defects, because alpha-fetoprotein (AFP) testing can be done on amniotic fluid but not on CVS specimens.

When testing for chromosomal abnormalities resulting from advanced maternal age, CVS may be more acceptable than amniocentesis to some women because of the psychological and medical advantages of earlier diagnosis of abnormalities. Fetal movement is usually felt and uterine growth is visible at 17-19 weeks' gestation, the time when abnormalities are detected by amniocentesis; thus, deciding what action to take if an abnormality is detected at this time may be more difficult psychologically (12). Using CVS to diagnose chromosomal abnormalities during the first trimester allows a prospective parent to make this decision earlier than amniocentesis does.

Maternal morbidity and mortality associated with induced abortion increase significantly with increasing gestational age; thus, the timing of diagnosis of chromosomal abnormalities is important. Results of studies of abortion complications conducted by CDC from 1970 through 1978 indicated that the risk for major abortion complications (e.g., prolonged fever, hemorrhage necessitating blood transfusion, and injury to pelvic organs) increases with advancing gestational age. For example, from 1971 through 1974, the major complication rate was 0.8% at 11-12 weeks' gestation, compared with 2.2% at 17-20 weeks' gestation (13). However, the risk for developing major complications from abortion at any gestational age decreased during the 1970s. More contemporary national morbidity data based on current abortion practices are not yet available. CDC surveillance data also indicate an increase in the risk for maternal death with increasing gestation. From 1972 through 1987, the risk for abortion-related death was 1.1 deaths per 100,000 abortions performed at 11-12 weeks' gestation, compared with 6.9 deaths per 100,000 abortions performed at 16-20 weeks' gestation (14). The lower risk associated with first-trimester abortion may be an important factor for prospective parents who are deciding between CVS and amniocentesis.

Amniocentesis is usually performed at 15-18 weeks' gestation, but more amniocentesis procedures are now being performed at 11-14 weeks' gestation. "Early" amniocentesis (defined as performed at <15 weeks' gestation) remains investigational, because the safety of the procedure is still being evaluated in controlled trials (15). Risk estimates for miscarriage caused by either CVS or midtrimester amniocentesis have been adjusted to account for spontaneous fetal losses that occur early in pregnancy and are not procedure-related.
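The gestational-age gradient in abortion risk described above can be summarized as simple rate ratios. A minimal sketch using only the figures quoted in the text:

```python
# Sketch: rate ratios implied by the CDC abortion-surveillance figures cited
# above. Values are taken directly from the text; no other data are assumed.

major_complication_pct = {"11-12 wk": 0.8, "17-20 wk": 2.2}   # 1971-1974
deaths_per_100k        = {"11-12 wk": 1.1, "16-20 wk": 6.9}   # 1972-1987

ratio_morbidity = major_complication_pct["17-20 wk"] / major_complication_pct["11-12 wk"]
ratio_mortality = deaths_per_100k["16-20 wk"] / deaths_per_100k["11-12 wk"]

print(f"Major complications, 17-20 vs 11-12 weeks: {ratio_morbidity:.1f}x")   # ~2.8x
print(f"Abortion-related mortality, 16-20 vs 11-12 weeks: {ratio_mortality:.1f}x")  # ~6.3x
```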
Although one randomized trial indicated that the amniocentesis-related miscarriage rate may be as high as 1%, counselors usually cite risks for miscarriage from other amniocentesis studies ranging from 0.25% to 0.50% (1/400-1/200) (1,15). Rates of miscarriage after CVS vary widely by the center at which CVS was performed (16). After adjustment for confounding factors such as gestational age, the CVS-related miscarriage rate is approximately 0.5%-1.0% (1/200-1/100) (1). Although uterine infection (i.e., chorioamnionitis) is one possible reason for miscarriage after either CVS or amniocentesis, infection has occurred rarely after either procedure. In one study, no episodes of septic shock were reported after 4,200 CVS procedures, although less severe infections may have been associated with 12 of the 89 observed fetal losses (5). Overall infection rates have been <0.1% after either CVS or amniocentesis (15).

Cytogenetically ambiguous results caused by factors such as maternal cell contamination or culture-related mosaicism are reported more often after CVS than after amniocentesis (2). In these instances, follow-up amniocentesis might be required to clarify results, increasing both the total cost of testing and the risk for miscarriage. However, ambiguous CVS results also may indicate a condition (e.g., confined placental mosaicism) that has been associated with adverse outcomes for the fetus (11). Thus, in these situations, CVS may be more informative than amniocentesis alone.

# LIMB DEFICIENCIES AMONG INFANTS WHOSE MOTHERS UNDERWENT CVS
Certain congenital defects of the extremities, known as limb deficiencies or limb-reduction defects, have been reported among infants whose mothers underwent CVS. This section addresses 1) the expected frequency and classification of these birth defects, 2) the physical features of reported infants in relation to the timing of the associated CVS procedures, and 3) cohort and case-control studies that have systematically examined whether CVS increases the risk for limb deficiencies.

# Population-Based Rates and Classification of Limb Deficiencies
Population-based studies indicate that the risk for all limb deficiencies is 5-6 per 10,000 live births (17). Limb deficiencies usually are classified into distinct anatomic and pathogenetic categories. The most common subtypes are transverse terminal defects, which involve absence of distal structures with intact proximal segments, the axis of deficiency being perpendicular to the extremity. Approximately 50% of all limb deficiencies are transverse, and 50% of those defects are digital, involving the absence of parts of one or more fingers or toes. Transverse deficiencies occur either as isolated defects or with other major defects. The rare combination of transverse limb deficiencies with absence or hypoplasia of the tongue and lower jaw, usually referred to as oromandibular-limb hypogenesis or hypoglossia/hypodactyly, occurs at a rate of approximately 1 per 200,000 births. Although the cause of many isolated limb deficiencies and of multiple anomalies that include transverse deficiencies is unknown, researchers have hypothesized that these deficiencies are caused by vascular disruption, either during the formation of embryonic limbs or in already-formed fetal limbs (17,18).

# Limb Deficiencies Reported in Infants Exposed to CVS
Reports of clusters of infants born with limb deficiencies after CVS were first published in 1991 (19).
Three studies illustrate the spectrum of CVS-associated defects (19-21). Data from these studies suggest that the severity of the outcome is associated with the timing of CVS exposure. Exposure at ≥70 days' gestation has been associated with more limited defects, isolated to the distal extremities, whereas earlier exposures have been associated with more proximal limb deficiencies and orofacial defects. For example, in a study involving 14 infants exposed to CVS at 63-79 days' gestation and examined by a single pediatrician, 13 had isolated transverse digital deficiencies (20). In another study, in Oxford, of five infants exposed to CVS at 56-66 days' gestation, four had transverse deficiencies with oromandibular hypogenesis (19). In a review of published worldwide data, associated defects of the tongue or lower jaw were reported for 19 of 75 cases of CVS-associated limb deficiencies (21). Of those 19 infants with oromandibular-limb hypogenesis, 17 were exposed to CVS before 68 days' gestation. In this review, 74% of infants exposed to CVS at ≥70 days' gestation had digital deficiencies without proximal involvement.

# Cohorts of CVS-Exposed Pregnancies
Cohort studies usually measure rates of a specified outcome in an exposed group compared with an unexposed group. Ideally, both groups should be selected randomly from the same study population. The three largest collaborative trials of CVS, in Europe, Canada, and the United States, were originally designed in this way; however, in these studies, the outcome of interest was fetal death. The report of the first U.S. collaborative trial included no mention of any structural defects; such outcomes were reported later (5). After the initial case reports in 1991, neonatal outcomes from the collaborative trials were analyzed more intensively (22). However, rather than comparing rates for limb defects in the CVS-exposed cohorts with those of amniocentesis-exposed cohorts from the same study population, investigators compared the rates in the CVS groups with population-based rates. Consequently, these comparisons must be interpreted with caution, because population-based rates are derived differently (i.e., usually from birth-defect registries). These comparisons could underestimate the CVS-associated risk for limb deficiencies if follow-up of pregnancies in the exposed cohort is incomplete. Other epidemiologic issues must also be considered when interpreting comparisons of crude rates. Unless a formal meta-analysis is performed, such comparisons neither account for heterogeneity between studies nor assign individual "weights" to studies. Comparisons of crude rates also do not adjust for potential confounding variables, such as maternal age. Methods of anatomic subclassification also vary between registries and can differ from the methods applied to CVS-exposed cohorts. In addition, comparing overall rates of limb deficiency in groups exposed to CVS with rates in unexposed groups might overlook an association with a specific phenotype, such as transverse deficiency.

Published CVS cohort studies of >1,000 CVS procedures include data from 65 CVS centers (Table 1). Rates were calculated from studies that described affected limbs in sufficient detail to exclude nontransverse defects. Rates calculated for the smaller cohorts (i.e., centers performing <3,500 procedures) are less stable, but the overall rate of nonsyndromic transverse limb deficiency across the 65 centers was 7.4 per 10,000 procedures.
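Because these defects are rare, the number of background (non-CVS-related) cases expected in a cohort follows directly from the population rates cited earlier. A sketch, under the assumption that exposed pregnancies face only the background rate:

```python
# Sketch: expected background cases in a cohort of CVS-exposed pregnancies,
# using the population rates cited above and assuming no procedure effect.

ALL_LIMB_RATE = 5.5 / 10_000        # midpoint of the 5-6 per 10,000 range
TRANSVERSE_FRACTION = 0.5           # ~50% of limb deficiencies are transverse
OROMANDIBULAR_RATE = 1 / 200_000    # oromandibular-limb hypogenesis

def expected_cases(rate: float, n: int) -> float:
    """Expected count under a Poisson model: rate x cohort size."""
    return rate * n

n = 80_000  # order of the procedures reported to the International Registry, 1983-1992
print(f"Expected limb deficiencies, any type:    {expected_cases(ALL_LIMB_RATE, n):.1f}")
print(f"Expected transverse deficiencies:        {expected_cases(ALL_LIMB_RATE * TRANSVERSE_FRACTION, n):.1f}")
print(f"Expected oromandibular-limb hypogenesis: {expected_cases(OROMANDIBULAR_RATE, n):.2f}")
# With <1 oromandibular-limb hypogenesis case expected in 80,000 births, even
# a handful of observed cases in small CVS series stands out against background.
```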
This crude rate can be compared with rates of transverse deficiencies from Victoria (Australia) and Boston, Massachusetts (United States), where cases were classified to resemble the phenotype of CVS-exposed infants with limb deficiencies, including deficiencies of single digits (Table 2). The range of rates for these two populations (1.5-2.3 per 10,000 births) is representative of rates reported for other populations. The threefold to fivefold increase in the overall rate for the 65 centers compared with the rates for Victoria or Boston is statistically significant (chi-square: p<0.001) (17,32).

Investigators participating in the International Registry also have combined birth-defect data from multiple CVS centers, including some of the 65 CVS centers (16,35). An abstract published in 1994 includes information about 138,000 procedures reported to the International Registry. The rate of transverse deficiencies in the reporting centers was 1.4 per 10,000 procedures, lower than most population-based rates; the distribution of limb-deficiency subtypes was similar to the results of a study of limb deficiencies in British Columbia. The variability in limb-deficiency rates could be related to three possible explanations:
1) Different methods of classification. The method of classification of limb deficiencies for the International Registry resulted in a smaller proportion of transverse deficiencies (compared with all limb deficiencies) than some population-based studies (17,32,36,37). The reason is that the definition of "transverse terminal deficiencies" is more restrictive: it includes only defects that extend across the complete width of a limb and excludes terminal deficiencies of fewer than five digits.
2) Ascertainment of outcomes. Ascertainment of outcomes may be incomplete in CVS registries, because deliveries can occur at a hospital remote from where the CVS was performed and might not be reported back to the CVS center. The effect of this incomplete ascertainment would be to underestimate the risk for adverse outcomes.
3) Differences among centers in the performance of CVS. Investigators have compared rates of miscarriage and rates of limb deficiencies at individual facilities. This comparison is based on the assumption that the causes of both miscarriage and limb defects might be related to the particular sampling techniques of individual obstetricians. The association between high miscarriage and limb-deficiency rates in one U.S. CVS center was cited as potential evidence of the role of surgical inexperience (24). A cluster of limb deficiencies in another U.S. teaching hospital (five after 507 CVS procedures) was not associated with elevated miscarriage rates; chorionic villus sample sizes were larger at this hospital than at another hospital, affiliated with the same university, that reported no infants with limb defects (38).

# Case-Control Studies
Case-control studies with a minimum of 100 case and 100 control patients have greater statistical power than cohort studies of 10,000 or fewer births to detect a fourfold increase in risk for transverse deficiencies (the degree of relative risk suggested by data from the 65 CVS centers) (36). Investigators participating in multicenter birth-defect studies have used this case-control approach both to measure the strength of the association between CVS and limb deficiency and to determine whether a dose-response (or gradient) effect of risk exists.
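The threefold to fivefold contrast described above can be reproduced with a standard two-rate comparison. In the sketch below, the event counts are hypothetical, chosen only to match the cited rates (7.4 vs roughly 2.0 per 10,000); the published studies report the actual counts.

```python
# Sketch: rate ratio with a large-sample 95% CI computed on the log scale.
# Counts below are HYPOTHETICAL, chosen only to reproduce the cited rates.
import math

def rate_ratio_ci(x1: int, n1: int, x2: int, n2: int, z: float = 1.96):
    """Rate ratio (x1/n1)/(x2/n2) with a Wald CI on the log scale."""
    rr = (x1 / n1) / (x2 / n2)
    se = math.sqrt(1 / x1 + 1 / x2)
    return rr, rr * math.exp(-z * se), rr * math.exp(z * se)

x_cvs, n_cvs = 74, 100_000    # hypothetical: 7.4 per 10,000 procedures
x_pop, n_pop = 100, 500_000   # hypothetical: 2.0 per 10,000 births

rr, lo, hi = rate_ratio_ci(x_cvs, n_cvs, x_pop, n_pop)
print(f"Rate ratio {rr:.1f} (95% CI {lo:.1f}-{hi:.1f})")  # ~3.7 (2.7-5.0)
```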
Such a gradient would be indicated by an increased relative risk for limb deficiency after earlier procedures, a pattern already suggested by the high frequency of early CVS exposures in published case reports of CVS-associated limb deficiencies. Three case-control studies have used infants with limb deficiencies registered in surveillance systems, and control infants with other birth defects, to examine and compare rates of exposure to CVS (36,37,39). The odds ratios for CVS exposure (an estimate of the relative risk for limb deficiency after CVS) are summarized in Table 3. The U.S. Multistate Case-Control Study and the study of the Italian Multicentric Birth Defects Registry both indicated a significant association between CVS exposure and subtypes of transverse limb deficiencies (36,37). The EUROCAT study did not analyze risk for transverse limb deficiencies (39); the risk for all limb deficiencies (odds ratio [OR]=1.8, 95% confidence interval [CI]=0.7-5.0) was similar to that measured in the U.S. Multistate Case-Control Study for all limb deficiencies (OR=1.7, 95% CI=0.4-6.3) (36). Analysis of subtypes in the U.S. study indicated a sixfold increase in risk for transverse digital deficiencies (36). In the U.S. study, no association between limb deficiencies and amniocentesis was observed. In the study of the Italian Multicentric Birth Defects Registry, the association between CVS exposure and transverse limb deficiencies was stronger (Table 3) (37).

# GESTATIONAL AGE AT CVS
The lower risk observed in the U.S. study may be related to the later mean gestational age at exposure. Increased risk was associated with decreased gestational age at the time of exposure (Table 4). The risk for transverse deficiencies was greatest at ≤9 weeks' gestation. An analysis of cohort studies regarding the timing of CVS indicated a similar gradient, with a relative risk for transverse deficiencies of 6.2 at <10 weeks' gestation and 2.4 at ≥10 weeks' gestation (40). Because of reports of high rates of severe limb deficiencies after CVS at 6-7 weeks' gestation, a WHO-sponsored committee recommended that CVS be performed at 9-12 weeks after the last menstrual period (16).

# POSSIBLE MECHANISMS OF CVS-ASSOCIATED LIMB DEFICIENCY
Several biological events have been proposed to explain the occurrence of limb deficiency after CVS, the variation in severity, and the risk associated with the timing of the procedure. These proposed mechanisms, which include thromboembolization and fetal hypoperfusion through hypovolemia or vasoconstriction, are based on the assumption that the defects associated with CVS were caused by some form of vascular disruption. The limbs and mandible are susceptible to such disruption before 10 weeks' gestation (17); however, isolated transverse limb deficiencies related to fetal hypoperfusion have been reported at 11 weeks' gestation (18). The rich vascular supply of the chorionic villi can potentially be disrupted by instrumentation. Data from one study of embryoscopic procedures demonstrated fetal hemorrhagic lesions of the extremities following placental trauma that produced subchorionic hematomas (41). Placental hemorrhage following CVS could lead to substantial fetal hypovolemia with subsequent hypoperfusion of the extremities. Because limb deficiencies have been produced in animal models by either vasoconstrictive agents or occlusion of uterine vessels, some researchers have hypothesized that CVS-associated defects might be caused by uteroplacental insufficiency (42).
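The odds ratios in Table 3 are computed from 2x2 tables of CVS exposure among case and control infants. Below is a sketch of that computation with the Woolf (log-scale) confidence interval; the cell counts are hypothetical placeholders, not the published data.

```python
# Sketch: odds ratio and Woolf 95% CI from a case-control 2x2 table.
# Cell counts are HYPOTHETICAL placeholders; substitute published counts.
import math

def odds_ratio_ci(a: int, b: int, c: int, d: int, z: float = 1.96):
    """a = exposed cases, b = unexposed cases,
    c = exposed controls, d = unexposed controls."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)  # Woolf's method
    return or_, or_ * math.exp(-z * se), or_ * math.exp(z * se)

# Hypothetical table yielding a sixfold association like the one described above:
or_, lo, hi = odds_ratio_ci(a=6, b=190, c=10, d=1900)
print(f"OR {or_:.1f} (95% CI {lo:.1f}-{hi:.1f})")  # OR 6.0 (2.2-16.7)
```

Note how wide the interval is even with a strong point estimate: when exposure and outcome are both rare, the small cells (a and c) dominate the variance, which is why the case-control studies needed at least 100 cases and 100 controls.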
Although the period of highest embryonic susceptibility appears to be when CVS is performed before 9 weeks' gestation (i.e., early CVS), these mechanisms also can disrupt limb structures at later gestational ages.

# ABSOLUTE RISK FOR LIMB DEFICIENCY
Subtypes of limb deficiencies occur rarely among infants not exposed to CVS. Thus, even a sixfold increase in risk for a subtype such as digital defects (the finding of the U.S. Multistate Case-Control Study) corresponds to a small absolute risk (i.e., 3.46 cases per 10,000 CVS procedures [0.03%]) (36). The upper 95% confidence limit for this absolute risk estimate is approximately 0.1%. A range of absolute risk from 1 per 3,000 to 1 per 1,000 CVS procedures (0.03%-0.10%) for all transverse deficiencies is consistent with the overall increase in risk reported by the 65 centers (Table 1). In cohort studies that reported the timing of CVS, the absolute risk for transverse limb deficiencies was 0.20% at ≤9 weeks, 0.10% at 10 weeks, and 0.05% at ≥11 weeks (0.07% at ≥10 weeks of gestation) (40). The absolute risk for CVS-related birth defects is lower than the procedure-related risk for miscarriage that counselors usually quote to prospective parents (i.e., 0.5%-1.0%) and also is lower than the risk for Down syndrome at maternal age 35 (0.3%). Data from a decision analysis study supported the conclusion that, weighing a range of possible risks associated with prenatal testing, amniocentesis was preferred to CVS (43). That study was published in 1991 and did not consider risk for limb deficiency. Data indicate that publication of the initial case reports of limb deficiency decreased subsequent utilization of CVS (44,45). However, one study demonstrated that prospective parents who were provided with formal genetic counseling, including information about limb deficiencies and other risks and benefits, chose CVS at a rate similar to that of prospective parents counseled before reports of CVS-associated limb deficiencies were published (44).

# RECOMMENDATIONS
An analysis of all aspects of CVS and amniocentesis indicates that the occasional occurrence of CVS-related limb defects is only one of several factors that must be considered in counseling prospective parents about prenatal testing. Factors that can influence prospective parents' choices about prenatal testing include their risk for transmitting genetic abnormalities to the fetus and their perception of the potential complications and benefits of both CVS and amniocentesis. Prospective parents who are considering the use of either procedure should be provided with current data for informed decision making. Individualized counseling should address the following:

# Indications for procedures and limitations of prenatal testing
• Counselors should discuss the prospective parents' degree of risk for transmitting genetic abnormalities, based on factors such as maternal age, race, and family history.
• Prospective parents should be made aware of both the limitations and the usefulness of either CVS or amniocentesis in detecting abnormalities.

# Potential serious complications from CVS and amniocentesis
• Counselors should discuss the risk for miscarriage attributable to both procedures: the risk from amniocentesis at 15-18 weeks' gestation is approximately 0.25%-0.50% (1/400-1/200), and the miscarriage risk from CVS is approximately 0.5%-1.0% (1/200-1/100).
• Current data indicate that the overall risk for transverse limb deficiency from CVS is 0.03%-0.10% (1/3,000-1/1,000).
• Current data indicate no increase in risk for limb deficiency after amniocentesis at 15-18 weeks' gestation.
• The risk for and severity of limb deficiency appear to be associated with the timing of CVS: the risk at <10 weeks' gestation (0.20%) is higher than the risk from CVS done at ≥10 weeks' gestation (0.07%). Most defects associated with CVS at ≥10 weeks' gestation have been limited to the digits (see the worked comparison below).

# Timing of procedures
• The timing of obtaining results from either CVS or amniocentesis is relevant because of the increased risks for maternal morbidity and mortality associated with terminating a pregnancy during the second trimester compared with the first trimester (13,14).
• Many amniocentesis procedures are now done at 11-14 weeks' gestation; however, further controlled studies are necessary to fully assess the safety of early amniocentesis.
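The counseling figures above can be laid out side by side. The sketch below simply tabulates the risks quoted in this report so the comparison is explicit; no values beyond those in the text are assumed.

```python
# Sketch: the counseling comparison, using only rates quoted in this report.

RISK_RANGES_PCT = {
    "Miscarriage after amniocentesis (15-18 wk)": (0.25, 0.50),
    "Miscarriage after CVS":                      (0.50, 1.00),
    "Transverse limb deficiency after CVS":       (0.03, 0.10),
    "Down syndrome at maternal age 35":           (0.30, 0.30),
}

for outcome, (lo, hi) in RISK_RANGES_PCT.items():
    rng = f"{lo:.2f}%" if lo == hi else f"{lo:.2f}%-{hi:.2f}%"
    print(f"{outcome:45s} {rng}")

# Timing gradient for CVS-related transverse deficiencies (cohort data, ref 40):
for timing, pct in [("<=9 wk", 0.20), ("10 wk", 0.10), (">=11 wk", 0.05)]:
    print(f"CVS at {timing:>7s}: absolute risk {pct:.2f}%")
```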
# Introduction
In the United States, epidemics of influenza typically occur during the winter months and were associated with an average of approximately 36,000 deaths per year during 1990-1999 (1). Influenza viruses cause disease among all age groups (2-4). Rates of infection are highest among children, but rates of serious illness and death are highest among persons aged >65 years, children aged <2 years, and persons of any age who have medical conditions that place them at increased risk for complications from influenza (2,5-7). Influenza vaccination is the primary method for preventing influenza and its severe complications. As indicated in this report from the Advisory Committee on Immunization Practices (ACIP), annual influenza vaccination is now recommended for the following groups (Box):
- persons at high risk for influenza-related complications and severe disease, including
  - children aged 6-59 months,
  - pregnant women,
  - persons aged >50 years, and
  - persons of any age with certain chronic medical conditions; and
- persons who live with or care for persons at high risk, including
  - household contacts who have frequent contact with persons at high risk and who can transmit influenza to them, and
  - health-care workers.

Vaccination might prevent hospitalization and death among persons at high risk and might also reduce influenza-related respiratory illnesses and physician visits among all age groups, prevent otitis media among children, and decrease work absenteeism among adults (8-18). Although influenza vaccination levels increased substantially during the 1990s, further improvements in vaccination coverage levels are needed, especially among persons aged <65 years at increased risk for influenza-related complications, among children aged 6-23 months, and among health-care workers. ACIP recommends using strategies to improve vaccination levels, including reminder/recall systems and standing orders programs (19-22). Although influenza vaccination remains the cornerstone of influenza control, information on antiviral medications also is presented in this report because these agents are an important adjunct to vaccine.

# Primary Changes and Updates in the Recommendations
The 2006 recommendations include six principal changes or updates:
- ACIP recommends that healthy children aged 24-59 months and their household contacts and out-of-home caregivers be vaccinated against influenza (see Target Groups for Vaccination). This change extends the recommendations for vaccination of children so that all children aged 6-59 months are now recommended for annual vaccination.
- Children aged 6 months-<9 years who are receiving influenza vaccine for the first time should receive 2 doses of vaccine. Those children who receive TIV should have a second dose of TIV ≥1 month after the initial dose, before the onset of influenza season, if possible. Those children aged 5-<9 years who receive LAIV should have a second dose of LAIV 6-10 weeks after the initial dose, before the influenza season, if possible. If a child aged 6 months-<9 years received influenza vaccine for the first time during a previous season but did not receive a second dose of vaccine within the same season, only 1 dose of vaccine should be administered this season (see Efficacy and Effectiveness of Inactivated Influenza Vaccine, Children; TIV Dosage; and LAIV Dosage and Administration).
- To ensure optimal use of available doses of influenza vaccine, projected to be approximately 100 million doses, health-care providers, those planning organized campaigns, and state and local public health agencies should 1) develop plans for expanding outreach and infrastructure to vaccinate more persons than during the previous year and 2) develop contingency plans for the timing and prioritization of administering influenza vaccine if the supply of vaccine is delayed and/or reduced because of the complexity of the production process (see Influenza Vaccine Supply and Timing of Annual Influenza Vaccination).
- ACIP emphasizes that influenza vaccine should continue to be offered throughout the influenza season, even after influenza activity has been documented in a community. In addition, ACIP encourages all community vaccinators and public health agencies to schedule clinics that serve target groups and to help extend the routine vaccination season by offering at least one vaccination clinic in December (see Influenza Vaccine Supply and Timing of Annual Influenza Vaccination).
- ACIP recommends that neither amantadine nor rimantadine be used for the treatment or chemoprophylaxis of influenza A in the United States because of recent data indicating widespread resistance of influenza A viruses to these medications (23,24). Until susceptibility to adamantanes has been re-established among circulating influenza A viruses, oseltamivir or zanamivir may be prescribed if antiviral treatment or chemoprophylaxis of influenza is indicated (see Recommendations for Using Antiviral Agents for Influenza).

# BOX. Persons for whom annual vaccination is recommended
- Children aged 6-59 months;
- Women who will be pregnant during the influenza season;
- Persons aged >50 years;
- Children and adolescents (aged 6 months-18 years) who are receiving long-term aspirin therapy and, therefore, might be at risk for experiencing Reye syndrome after influenza infection;
- Adults and children who have chronic disorders of the pulmonary or cardiovascular systems, including asthma (hypertension is not considered a high-risk condition);
- Adults and children who have required regular medical follow-up or hospitalization during the preceding year because of chronic metabolic diseases (including diabetes mellitus), renal dysfunction, hemoglobinopathies, or immunodeficiency (including immunodeficiency caused by medications or by human immunodeficiency virus);
- Adults and children who have any condition (e.g., cognitive dysfunction, spinal cord injuries, seizure disorders, or other neuromuscular disorders) that can compromise respiratory function or the handling of respiratory secretions, or that can increase the risk for aspiration;
- Residents of nursing homes and other chronic-care facilities that house persons of any age who have chronic medical conditions;
- Persons who live with or care for persons at high risk for influenza-related complications, including healthy household contacts and caregivers of children aged 0-59 months; and
- Health-care workers.

# Influenza and Its Burden
# Biology of Influenza
Influenza A and B are the two types of influenza viruses that cause epidemic human disease (25). Influenza A viruses are further categorized into subtypes on the basis of two surface antigens: hemagglutinin and neuraminidase. Influenza B viruses are not categorized into subtypes. Since 1977, influenza A (H1N1) viruses, influenza A (H3N2) viruses, and influenza B viruses have circulated globally.
In 2001, influenza A (H1N2) viruses, which probably emerged after genetic reassortment between human A (H1N1) and A (H3N2) viruses, began circulating widely. Both influenza A and B viruses are further separated into groups on the basis of antigenic characteristics. New influenza virus variants result from frequent antigenic change (i.e., antigenic drift) caused by point mutations that occur during viral replication. Influenza B viruses undergo antigenic drift less rapidly than influenza A viruses.

Immunity to the surface antigens, particularly the hemagglutinin, reduces the likelihood of infection and the severity of disease if infection occurs (26). Antibody against one influenza virus type or subtype confers limited or no protection against another type or subtype of influenza. Furthermore, antibody to one antigenic variant of influenza virus might not completely protect against a new antigenic variant of the same type or subtype (27). Frequent development of antigenic variants through antigenic drift is the virologic basis for seasonal epidemics and the reason one or more new strains are usually incorporated into each year's influenza vaccine. More dramatic antigenic changes, or shifts, occur less frequently and can result in the emergence of a novel influenza virus with the potential to cause a pandemic.

# Clinical Signs and Symptoms of Influenza
Influenza viruses are spread from person to person, primarily through respiratory droplet transmission (e.g., when an infected person coughs or sneezes in close proximity to an uninfected person) (25). The typical incubation period for influenza is 1-4 days, with an average of 2 days (28). Adults can be infectious from the day before symptoms begin through approximately 5 days after illness onset. Children can be infectious for >10 days after the onset of symptoms, and young children also can shed virus before illness onset. Severely immunocompromised persons can shed virus for weeks or months (29-32).

Uncomplicated influenza illness is characterized by the abrupt onset of constitutional and respiratory signs and symptoms (e.g., fever, myalgia, headache, malaise, nonproductive cough, sore throat, and rhinitis) (33). Among children, otitis media, nausea, and vomiting also are commonly reported with influenza illness (34-36). Uncomplicated influenza illness typically resolves after 3-7 days for the majority of persons, although cough and malaise can persist for >2 weeks. However, among certain persons, influenza can exacerbate underlying medical conditions (e.g., pulmonary or cardiac disease), lead to secondary bacterial pneumonia or primary influenza viral pneumonia, or occur as part of a coinfection with other viral or bacterial pathogens (37). Young children with influenza virus infection can have initial symptoms mimicking bacterial sepsis with high fevers (37,38), and febrile seizures have been reported in up to 20% of children hospitalized with influenza virus infection (35,39). Influenza virus infection also has been associated, uncommonly, with encephalopathy, transverse myelitis, myositis, myocarditis, pericarditis, and Reye syndrome (35,37,40,41). Respiratory illnesses caused by influenza viruses are difficult to distinguish from illnesses caused by other respiratory pathogens on the basis of signs and symptoms alone (see Role of Laboratory Diagnosis).
Reported sensitivities and specificities of clinical definitions of influenza infection that include fever and cough in studies primarily among adults have ranged from 63% to 78% and 55% to 71%, respectively, compared with viral culture (42,43). Sensitivity and predictive value of clinical definitions can vary, depending on the degree of co-circulation of other respiratory pathogens and the level of influenza activity (44). A study of older nonhospitalized patients determined that the presence of fever, cough, and acute onset had a positive predictive value of only 30% for influenza (45), whereas a study of hospitalized older patients with chronic cardiopulmonary disease determined that a combination of fever, cough, and illness of <7 days was 78% sensitive and 73% specific for influenza (46). A study of vaccinated older persons with chronic lung disease indicated that cough was not predictive of influenza virus infection, although having a fever or feverishness was 68% sensitive and 54% specific for influenza virus infection (47). These results highlight the challenges of identifying influenza illness in the absence of laboratory confirmation.

# Hospitalizations and Deaths from Influenza

The risks for complications, hospitalizations, and deaths from influenza are higher among persons aged >65 years, young children, and persons of any age with certain underlying health conditions (see Persons at Increased Risk for Complications) than among healthy older children and younger adults (1,6,8,48-56). Estimated rates of influenza-associated hospitalizations have varied substantially by age group in studies conducted during different influenza epidemics (Table 1). Among children, hospitalization rates are highest among those aged <24 months, whose rates are comparable with rates reported among persons aged >65 years (59,60) (Table 1). During seasonal influenza epidemics from 1979-80 through 2000-01, the estimated overall number of influenza-associated hospitalizations in the United States ranged from approximately 54,000 to 430,000/epidemic. An average of approximately 226,000 influenza-related excess hospitalizations occurred per year, and 63% of all hospitalizations occurred among persons aged >65 years (61). Since the 1968 influenza A (H3N2) virus pandemic, the number of influenza-associated hospitalizations has generally been greater during seasonal influenza epidemics caused by type A (H3N2) viruses than during seasons in which other influenza virus types predominate (62). Influenza-related deaths can result from pneumonia and from exacerbations of cardiopulmonary conditions and other chronic diseases. Deaths of adults aged >65 years account for >90% of deaths attributed to pneumonia and influenza (1,54). In one study, approximately 19,000 influenza-associated pulmonary and circulatory deaths per influenza season occurred during 1976-1990, compared with approximately 36,000 deaths during 1990-1999 (1). Estimated rates of influenza-associated pulmonary and circulatory deaths/100,000 persons were 0.4-0.6 among persons aged 0-49 years, 7.5 among persons aged 50-64 years, and 98.3 among persons aged >65 years. In the United States, the number of influenza-associated deaths has increased in part because the number of older persons is increasing, particularly persons aged >85 years (63). In addition, influenza seasons in which influenza A (H3N2) viruses predominate are associated with higher mortality (64); influenza A (H3N2) viruses predominated in 90% of influenza seasons during 1990-1999, compared with 57% of influenza seasons during 1976-1990 (1).
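These age-specific rates translate into approximate death counts, which serves as a rough consistency check. As an illustration only (the population figure is an assumption for illustration, not a value reported here), applying the rate for persons aged >65 years to an assumed U.S. population of roughly 35 million persons in that age group during the 1990s gives

$$\text{expected deaths}_{>65} \approx \frac{98.3}{100{,}000} \times 35{,}000{,}000 \approx 34{,}000 \text{ per season},$$

which is consistent with the approximately 36,000 total influenza-associated pulmonary and circulatory deaths per season estimated for 1990-1999, the large majority of which occurred among persons aged >65 years.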
Deaths from influenza are uncommon among children both with and without high-risk conditions, but they do occur (65,66). A study that modeled influenza-related deaths estimated that an average of 92 deaths (0.4 deaths per 100,000) occurred annually among children aged <5 years, compared with a rate of 98.3 deaths per 100,000 among persons aged >65 years (1). Of 153 laboratory-confirmed influenza-related pediatric deaths reported from 40 states during the 2003-04 influenza season, 96 (63%) were among children aged <5 years. Sixty-four (70%) of the 92 children aged 2-17 years with influenza who died had no underlying medical condition previously associated with an increased risk for influenza-related complications (67).

# Options for Controlling Influenza

In the United States, the primary option for reducing the effect of influenza is annual vaccination. Inactivated (i.e., killed virus) influenza vaccines and LAIV are licensed and available for use in the United States (see Recommendations for Using Inactivated and Live, Attenuated Influenza Vaccines). Vaccination coverage can be increased by administering vaccine to persons during hospitalizations or routine health-care visits, as well as at pharmacies, grocery stores, workplaces, or other locations in the community before the influenza season, making special visits to physicians' offices or clinics unnecessary. Achieving increased vaccination rates among persons living in closed settings (e.g., nursing homes and other chronic-care facilities) and among staff can reduce the risk for outbreaks (13), especially when vaccine and circulating strains are well-matched. Vaccination of health-care workers and other persons in close contact with persons at increased risk for severe influenza illness also can reduce transmission of influenza and subsequent influenza-related complications. Antiviral drugs used for chemoprophylaxis or treatment of influenza are adjuncts to vaccine (see Recommendations for Using Antiviral Agents for Influenza) but are not substitutes for annual vaccination.

# Influenza Vaccine Composition

The virus strains selected for the 2006-07 trivalent vaccines are representative of influenza viruses that are anticipated to circulate in the United States during the 2006-07 influenza season and have favorable growth properties in eggs. Because circulating influenza A (H1N2) viruses are reassortants of influenza A (H1N1) and A (H3N2) viruses, antibodies directed against the influenza A (H1N1) and influenza A (H3N2) vaccine strains should provide protection against the circulating influenza A (H1N2) viruses. Influenza viruses for both TIV and LAIV are initially grown in embryonated hens' eggs and, therefore, might contain limited amounts of residual egg protein. Thus, persons with a history of severe hypersensitivity, such as anaphylaxis, to eggs should not receive influenza vaccine. For the inactivated vaccines, the vaccine viruses are made noninfectious (i.e., inactivated or killed) (68). Only subvirion and purified surface antigen preparations of the inactivated vaccine are available. Manufacturing processes vary by manufacturer. Manufacturers might use different compounds to inactivate influenza viruses and add antibiotics to prevent bacterial contamination. Package inserts should be consulted for additional information.

# Comparison of LAIV with Inactivated Influenza Vaccine

Both inactivated influenza vaccine and LAIV are available. Although both types of vaccines are effective, the vaccines differ in several aspects (Table 2).
# Major Similarities

Both LAIV and inactivated influenza vaccines contain strains of influenza viruses that are antigenically equivalent to the annually recommended strains: one influenza A (H3N2) virus, one A (H1N1) virus, and one B virus. Each year, one or more virus strains might be changed on the basis of global surveillance for influenza viruses and the emergence and spread of new strains. Viruses for both vaccines are grown in eggs. Both vaccines are administered annually to provide optimal protection against influenza virus infection (Table 2).

# Major Differences

Inactivated influenza vaccine contains killed viruses and thus cannot produce signs or symptoms of influenza virus infection. In contrast, LAIV contains live, attenuated viruses and, therefore, has the potential to produce mild signs or symptoms related to influenza virus infection. LAIV is administered intranasally by sprayer, whereas inactivated influenza vaccine is administered intramuscularly by injection. LAIV is more expensive than inactivated influenza vaccine, although the price differential between inactivated vaccine and LAIV has decreased for the 2006-07 season. LAIV is approved only for use among healthy persons aged 5-49 years; inactivated influenza vaccine is approved for use among persons aged >6 months, including those who are healthy and those with chronic medical conditions (Table 2).

# Efficacy and Effectiveness of Inactivated Influenza Vaccine

The effectiveness of inactivated influenza vaccine depends primarily on the age and immunocompetence of the vaccine recipient, the degree of similarity between the viruses in the vaccine and those in circulation, and the outcome being measured. Vaccine efficacy and effectiveness studies might have various endpoints, including the prevention of medically attended acute respiratory illness (MAARI), prevention of culture-positive influenza virus illness, prevention of influenza or pneumonia-associated hospitalizations or deaths, seroconversion to vaccine serotypes, or prevention of seroconversion to circulating influenza virus subtypes. High postvaccination hemagglutination inhibition antibody titers develop in the majority of vaccinated children and young adults (69-71). These antibodies are protective against illness caused by strains that are antigenically similar to those strains of the same type or subtype included in the vaccine (70-73).

Children. Children aged >6 months usually acquire protective levels of anti-influenza antibody against specific influenza virus strains after influenza vaccination (69,70,74-79), although the antibody response among children at high risk for influenza-related complications might be lower than among healthy children (80,81). A 2-year randomized study of children aged 6-24 months determined that 89% of children seroconverted to all three vaccine strains during both years (82). During year 1, among 411 children, vaccine efficacy was 66% (95% confidence interval [CI] = 34%-82%) against culture-confirmed influenza (attack rates: 5.5% and 15.9% among vaccine and placebo groups, respectively). During year 2, among 375 children, vaccine efficacy was -7% (CI = -247%-67%; attack rates: 3.6% and 3.3% among vaccine and placebo groups, respectively); the second year exhibited lower attack rates overall and was considered a mild season. In both years of this study, the vaccine strains were well-matched to the circulating influenza virus strains.
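The efficacy figures above follow directly from the reported attack rates. Vaccine efficacy (VE) in such trials is the proportional reduction in the attack rate (AR) among vaccinees relative to placebo recipients; as a worked check against the year-1 numbers (the small difference from the published 66% reflects rounding of the attack rates, which are derived from the exact case counts):

$$\mathrm{VE} = \frac{\mathrm{AR}_{\text{placebo}} - \mathrm{AR}_{\text{vaccine}}}{\mathrm{AR}_{\text{placebo}}} = 1 - \frac{5.5\%}{15.9\%} \approx 65\%.$$

The same formula applied to year 2 ($1 - 3.6/3.3 \approx -9\%$) shows how a vaccine-group attack rate slightly above the placebo rate yields the small negative point estimate reported, with a wide confidence interval in a mild season.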
A randomized study among children aged 1-15 years also demonstrated that inactivated influenza vaccine was 77% and 91% effective against influenza respiratory illness during H3N2 and H1N1 years, respectively (71). One study documented a vaccine efficacy of 56% against influenza illness among healthy children aged 3-9 years (83), and another study determined vaccine efficacy against influenza type B infection and influenza type A infection of 22%-54% and 60%-78% among children with asthma aged 2-6 years and 7-14 years, respectively (84). Two studies have documented that TIV decreases the incidence of influenza-associated otitis media among young children by approximately 30% (16,17), whereas a third study determined that vaccination did not reduce the burden of acute otitis media (82).

Effectiveness of One Dose versus Two Doses of Influenza Vaccine Among Previously Unvaccinated Children Aged <9 Years. Vaccine effectiveness is lower among previously unvaccinated children aged <9 years if they have received only 1 dose of influenza vaccine, compared with children who have received 2 doses. A retrospective study among approximately 5,000 children aged 6-23 months conducted during a year with a suboptimal vaccine match indicated vaccine effectiveness of 49% against medically attended, clinically diagnosed pneumonia or influenza among children who had received 2 doses of influenza vaccine. No effectiveness was demonstrated among children who had received only 1 dose of influenza vaccine, illustrating the importance of administering 2 doses of vaccine to previously unvaccinated children aged <9 years (85). Similar results were observed in a case-control study of children aged 6-59 months with laboratory-confirmed influenza (86). A study assessing protective antibody responses after 1 and 2 doses of vaccine among vaccine-naive children aged 5-8 years also demonstrated the importance of compliance with the 2-dose recommendation (87). When the vaccine antigens do not change from one season to the next, priming with a single dose of vaccine in the spring, followed by a dose in the fall, might result in antibody responses similar to those of a 2-dose regimen administered in the fall (88,89).

Adults Aged <65 Years. When the vaccine and circulating viruses are antigenically similar, influenza vaccine typically prevents influenza illness among approximately 70%-90% of healthy adults aged <65 years (9,12,90,91). Vaccination of healthy adults also has resulted in decreased work absenteeism and decreased use of health-care resources, including use of antibiotics, when the vaccine and circulating viruses are well-matched (9-12,91,92). In a case-control study of adults aged 50-64 years with laboratory-confirmed influenza during the 2003-04 season, when the vaccine and circulating viruses were not well-matched, vaccine effectiveness was estimated to be 52% among healthy persons and 38% among those with one or more high-risk conditions (93).

Adults Aged >65 Years. An important benefit of the influenza vaccine is its ability to help prevent secondary complications and reduce the risk for influenza-related hospitalization and death among adults aged >65 years with and without high-risk medical conditions (e.g., heart disease and diabetes) (13-15,18,94,95). Older persons and persons with certain chronic diseases might have lower postvaccination antibody titers than healthy young adults and can remain susceptible to influenza virus infection and influenza-related upper respiratory tract illness (96-98).
A randomized trial among noninstitutionalized persons aged >60 years reported a vaccine efficacy of 58% against influenza respiratory illness but indicated that efficacy might be lower among those aged >70 years (99). However, among older persons not living in nursing homes or similar chronic-care facilities, influenza vaccine is 30%-70% effective in preventing hospitalization for pneumonia and influenza (15,100). Among older persons who reside in nursing homes, influenza vaccine is most effective in preventing severe illness, secondary complications, and deaths. In this population, the vaccine can be 50%-60% effective in preventing influenza-related hospitalization or pneumonia and 80% effective in preventing influenza-related death, although the effectiveness in preventing influenza illness often ranges from 30% to 40% (101-103).

# Efficacy and Effectiveness of LAIV

The immunogenicity of the approved LAIV has been assessed in multiple studies (104-110), which included approximately 100 children aged 5-17 years and approximately 300 adults aged 18-49 years. LAIV virus strains replicate primarily in nasopharyngeal epithelial cells. The protective mechanisms induced by vaccination with LAIV are not completely understood but appear to involve both serum and nasal secretory antibodies. No single laboratory measurement closely correlates with protective immunity induced by LAIV.

Healthy Children. A randomized, double-blind, placebo-controlled trial among 1,602 healthy children initially aged 15-71 months assessed the efficacy of trivalent LAIV against culture-confirmed influenza during two seasons (111,112). This trial included subsets of 238 children; children who continued in the study remained in the same study group. In season one, when vaccine and circulating virus strains were well-matched, efficacy was 93% for participants who received 2 doses of LAIV. In season two, when the A (H3N2) component was not well-matched between vaccine and circulating virus strains, efficacy was 86% overall. The vaccine was 92% efficacious in preventing culture-confirmed influenza during the two-season study. Other results included a 27% reduction in febrile otitis media and a 28% reduction in otitis media with concomitant antibiotic use. Receipt of LAIV also resulted in 21% fewer febrile illnesses. A review of LAIV effectiveness in children aged 18 months-18 years found effectiveness against MAARI of 18% but greater estimated efficacy levels: 92% against influenza A (H1N1) and 66% against an influenza B drift variant (113).

Healthy Adults. A randomized, double-blind, placebo-controlled trial among 4,561 healthy working adults aged 18-64 years assessed multiple endpoints, including reductions in self-reported respiratory tract illness without laboratory confirmation, absenteeism, health-care visits, and medication use during peak and total influenza outbreak periods (114). The study was conducted during the 1997-98 influenza season, when the vaccine and circulating A (H3N2) strains were not well-matched. During peak outbreak periods, no difference in febrile illnesses between LAIV and placebo recipients was observed. However, vaccination was associated with reductions in severe febrile illnesses of 19% and febrile upper respiratory tract illnesses of 24%. Vaccination also was associated with fewer days of illness, fewer days of work lost, fewer days with health-care-provider visits, and reduced use of prescription antibiotics and over-the-counter medications.
Among a subset of 3,637 healthy adults aged 18-49 years, LAIV recipients (n = 2,411) had 26% fewer febrile upper-respiratory illness episodes; 27% fewer lost work days as a result of febrile upper respiratory illness; and 18%-37% fewer days of health-care-provider visits caused by febrile illness, compared with placebo recipients (n = 1,226). Days of antibiotic use were reduced by 41%-45% in this age subset. A randomized, double-blind, placebo-controlled challenge study among 92 healthy adults (LAIV, n = 29; placebo, n = 31; inactivated influenza vaccine, n = 32) aged 18-41 years assessed the efficacy of both LAIV and inactivated vaccine (115). The overall efficacy of LAIV and inactivated influenza vaccine in preventing laboratory-documented influenza from all three influenza strains combined was 85% and 71%, respectively, on the basis of experimental challenge by viruses to which study participants were susceptible before vaccination. The difference in efficacy between the two vaccines was not statistically significant.

# Cost-Effectiveness of Influenza Vaccine

Influenza vaccination can reduce both health-care costs and productivity losses associated with influenza illness. Studies of influenza vaccination of persons aged >65 years conducted in the United States have reported substantial reductions in hospitalizations and deaths and overall societal cost savings (15,100,104). Studies of adults aged <65 years have indicated that vaccination can reduce both direct medical costs and indirect costs from work absenteeism (8,10-12,91,116). Reductions of 13%-44% in health-care-provider visits, 18%-45% in lost workdays, 18%-28% in days working with reduced effectiveness, and 25% in antibiotic use for influenza-associated illnesses have been reported (10,12,117,118). One cost-effectiveness analysis estimated a cost of approximately $60-$4,000/illness averted among healthy persons aged 18-64 years, depending on the cost of vaccination, the influenza attack rate, and vaccine effectiveness against influenza-like illness (ILI) (91). Another cost-benefit economic study estimated an average annual savings of $13.66/person vaccinated (119). In the second study, 78% of all costs prevented were costs from lost work productivity, whereas the first study did not include productivity losses from influenza illness. Economic studies specifically evaluating the cost-effectiveness of vaccinating persons aged 50-64 years are not available, and the number of studies that examine the economics of routinely vaccinating children with TIV or LAIV is limited (8,120-123). However, in a study of inactivated vaccine that included all age groups, cost utility (i.e., cost per year of healthy life gained) improved with increasing age and among those with chronic medical conditions (8). Among persons aged >65 years, vaccination resulted in a net savings per quality-adjusted life year (QALY) gained, whereas among younger age groups, vaccination resulted in costs of $23-$256/QALY. In addition to estimating the economic cost associated with influenza disease, studies have assessed the public's perception of preventing influenza morbidity. Less than half of respondents to a survey on public perception of the value of preventing influenza morbidity reported that they would trade any time from their own life to prevent a case of uncomplicated influenza in a hypothetical child (124).
When asked about their willingness to pay to prevent a hypothetical child from having an uncomplicated case of influenza, the median willingness-to-pay amount was $100 for a child aged 14 years and $175 for a child aged 1 year (124).

# Vaccination Coverage Levels

One of the national health objectives for 2010 is to achieve an influenza vaccination coverage level of 90% for persons aged >65 years (objective no. 14-29a) (125). Among persons aged >65 years, influenza vaccination levels increased from 33% in 1989 (126) to 66% in 1999 (127), surpassing the Healthy People 2000 objective of 60% (128). Vaccination coverage in this group reached the highest levels recorded (68%) during the 1999-00 influenza season. This estimate was made using the percentage of adults reporting influenza vaccination during the previous 12 months in the National Health Interview Survey (NHIS). The NHIS administered during the first and second quarters of each calendar year was used as a proxy measure of influenza vaccination coverage for the previous influenza season (127). Possible reasons for increases in influenza vaccination levels among persons aged >65 years include 1) greater acceptance of preventive medical services by practitioners; 2) increased delivery and administration of vaccine by health-care providers and sources other than physicians; 3) new information regarding influenza vaccine effectiveness, cost-effectiveness, and safety; and 4) initiation of Medicare reimbursement for influenza vaccination in 1993 (8,14,15,101,102,129,130). Since 1997, influenza vaccination levels have increased more slowly, with an average annual percentage increase of 4% from 1988-89 to 1996-97 versus 1% from 1996-97 to 1998-99. In 2000, a substantial delay in influenza vaccine availability and distribution, followed by a less severe delay in 2001, likely contributed to the lack of progress. However, the slowing of the increase in vaccination levels began before 2000 and is not fully understood. Estimated national influenza vaccine coverage in 2004 among persons aged >65 years and 50-64 years was 65% and 36%, respectively, based on 2004 NHIS data (Table 3). The estimated vaccination coverage among adults with high-risk conditions aged 18-49 years and 50-64 years was 26% and 46%, respectively, substantially lower than the Healthy People 2000 and 2010 objective of 60% (125,128). Continued annual monitoring is needed to determine the effects of vaccine supply delays and shortages, changes in influenza vaccination recommendations and target groups for vaccination, reimbursement rates for vaccine and vaccine administration, and other factors related to vaccination coverage among adults and children. New strategies to improve coverage will be needed to achieve the Healthy People 2010 objective (21,22). Reducing racial and ethnic health disparities, including disparities in vaccination coverage, is an overarching national goal (125). Although estimated influenza vaccination coverage for the 1999-00 season reached the highest levels recorded among older black, Hispanic, and white populations, vaccination levels among blacks and Hispanics continue to lag behind those among whites (127,131). Estimated vaccination coverage levels based on 2004 NHIS data among persons aged >65 years were 67% among non-Hispanic whites, 45% among non-Hispanic blacks, and 55% among Hispanics (CDC, unpublished data, 2006).
Among Medicare beneficiaries, unequal access to care might not be the only factor contributing to disparities in influenza vaccination levels; other key factors include whether patients actively seek vaccination and whether providers recommend vaccination (132,133). In 1997 and 1998, vaccination coverage estimates among nursing home residents were 64%-82% and 83%, respectively (134,135). The Healthy People 2010 goal is to achieve influenza vaccination of 90% among nursing home residents, an increase from the Healthy People 2000 goal of 80% (125,128). Reported vaccination levels are low among children at increased risk for influenza complications. One study conducted among patients in health maintenance organizations (HMOs) documented influenza vaccination percentages ranging from 9% to 10% among children with asthma (136). A 25% vaccination level was reported among children with severe to moderate asthma who attended an allergy and immunology clinic (137). However, a study conducted in a pediatric clinic demonstrated an increase in the vaccination percentage of children with asthma or reactive airways disease from 5% to 32% after implementation of a reminder/recall system (138). One study documented 79% vaccination coverage among children attending a cystic fibrosis treatment center (139). According to 2004 National Immunization Survey data, during the second year in which vaccination of children aged 6-23 months was encouraged, 18% of these children received one or more influenza vaccinations, and 8.4% of those previously unvaccinated received 2 doses (140). A rapid analysis of influenza vaccination coverage levels among members of an HMO in Northern California determined that in 2004-05, the first year of the recommendation for vaccination of children aged 6-23 months, their coverage level reached 57% (141). Data from the Behavioral Risk Factor Surveillance System (BRFSS) collected in February 2005 indicated a national estimate of 48% vaccination coverage for 1 or more doses among children aged 6-23 months and 35% coverage among children aged 2-17 years who had one or more high-risk medical conditions during the 2004-05 season (142). Increasing vaccination coverage among persons who have high-risk conditions and are aged <65 years, including children at high risk, is the highest priority for expanding influenza vaccine use. As has been observed among older adults, a physician recommendation for vaccination and the perception that having a child vaccinated "is a smart idea" were positively associated with the likelihood of vaccination of children aged 6-23 months (143). Annual vaccination is recommended for health-care workers. Nonetheless, NHIS 2004 survey data indicated a vaccination coverage level of only 42% among health-care workers (CDC, unpublished data, 2006). Vaccination of health-care workers has been associated with reduced work absenteeism (9) and fewer deaths among nursing home patients (144,145) and is a high priority for reducing the effect of influenza in health-care settings and for expanding influenza vaccine use (146,147). Limited information is available regarding use of influenza vaccine among pregnant women. Among women aged 18-44 years without diabetes responding to the 2001 BRFSS, those who were pregnant were less likely to report influenza vaccination during the previous 12 months (13.7%) than those who were not pregnant (16.8%); these differences were statistically significant (148).
Only 13% of pregnant women reported vaccination according to 2004 NHIS data, excluding pregnant women who reported diabetes, heart disease, lung disease, or other selected high-risk conditions (CDC, unpublished data, 2006) (Table 3). These data indicate low compliance with the ACIP recommendations for pregnant women. In a study of influenza vaccine acceptance by pregnant women, 71% of those who were offered the vaccine chose to be vaccinated (149). However, a 1999 survey of obstetricians and gynecologists determined that only 39% administered influenza vaccine to obstetric patients, although 86% agreed that pregnant women's risk for influenza-related morbidity and mortality increases during the last two trimesters (150). Data indicate that self-report of influenza vaccination among adults, compared with extraction from the medical record, is both a sensitive and specific source of information (151). Patient self-reports should be accepted as evidence of influenza vaccination in clinical practice (151). However, information on the validity of parents' reports of pediatric influenza vaccination is not yet available.

Footnotes to Table 3:
§ Persons categorized as being at high risk for influenza-related complications self-reported one or more of the following: 1) ever being told by a physician they had diabetes, emphysema, coronary heart disease, angina, heart attack, or other heart condition; 2) having a diagnosis of cancer during the previous 12 months (excluding nonmelanoma skin cancer) or ever being told by a physician they have lymphoma, leukemia, or blood cancer; 3) being told by a physician they have chronic bronchitis or weak or failing kidneys; or 4) reporting an asthma episode or attack during the preceding 12 months.
¶ Aged 18-44 years, pregnant at the time of the survey, and without high-risk conditions.
** Adults were classified as health-care workers if they were currently employed in a health-care occupation or in a health-care-industry setting, on the basis of standard occupation and industry categories recoded in groups by CDC's National Center for Health Statistics.
†† Interviewed adult in each household containing at least one of the following: a child aged <2 years, an adult aged >65 years, or any person aged 2-17 years at high risk (see § footnote). To obtain information on household composition and high-risk status of household members, the sampled adult, child, and person files from NHIS were merged. Interviewed adults who were health-care workers or who had high-risk conditions were excluded. Information could not be assessed regarding the high-risk status of other adults aged 18-64 years in the household; thus, certain adults aged 18-64 years who lived with an adult aged 18-64 years at high risk were not included in the analysis.

# Recommendations for Using Inactivated and Live, Attenuated Influenza Vaccines

The inactivated influenza vaccine and LAIV can be used to reduce the risk for influenza virus infection and its complications. TIV is Food and Drug Administration (FDA)-approved for persons aged >6 months, including those with high-risk conditions, whereas LAIV is approved only for use among healthy persons aged 5-49 years (see Inactivated Influenza Vaccine Recommendations; and Live, Attenuated Influenza Vaccine Recommendations).
# Target Groups for Vaccination

Annual influenza vaccination is recommended for the following groups:

# Persons at Increased Risk for Complications

Vaccination with inactivated influenza vaccine is recommended for the following persons who are at increased risk for severe complications from influenza:
- children aged 6-23 months;
- persons aged >65 years;
- children and adolescents (aged 6 months-18 years) who are receiving long-term aspirin therapy and, therefore, might be at risk for experiencing Reye syndrome after influenza virus infection;
- women who will be pregnant during the influenza season;
- adults and children who have chronic disorders of the pulmonary or cardiovascular systems, including asthma (hypertension is not considered a high-risk condition);
- adults and children who have required regular medical follow-up or hospitalization during the preceding year because of chronic metabolic diseases (including diabetes mellitus), renal dysfunction, hemoglobinopathies, or immunodeficiency (including immunodeficiency caused by medications or by human immunodeficiency virus);
- adults and children who have any condition (e.g., cognitive dysfunction, spinal cord injuries, seizure disorders, or other neuromuscular disorders) that can compromise respiratory function or the handling of respiratory secretions, or that can increase the risk for aspiration; and
- residents of nursing homes and other chronic-care facilities that house persons of any age who have chronic medical conditions.

# Healthy Young Children Aged 6-59 Months

Because children aged 6-23 months are at substantially increased risk for influenza-related hospitalizations and because children aged 24-59 months are at increased risk for influenza-related clinic and emergency department visits (152), ACIP recommends vaccination of children aged 6-59 months. The current LAIV and inactivated influenza vaccines are not approved by FDA for use among children aged <6 months, the pediatric group at greatest risk for influenza-related complications (58,153,154). Vaccination of their household contacts and out-of-home caregivers also is recommended because it might decrease the probability of influenza virus infection among these children. Studies indicate that rates of hospitalization are higher among young children than older children when influenza viruses are in circulation (57,59-62,155-157). The increased rates of hospitalization are comparable with rates for other groups considered at high risk for influenza-related complications. However, the interpretation of these findings has been confounded by co-circulation of respiratory syncytial virus, which causes serious respiratory viral illness among children and frequently circulates during the same time as influenza viruses (158-160). One study assessed rates of influenza-associated hospitalizations among the entire U.S. population during 1979-2001 and calculated an average rate of approximately 108 hospitalizations per 100,000 person-years in children aged <5 years (48). Two studies have attempted to separate the effects of respiratory syncytial viruses and influenza viruses on rates of hospitalization among children who do not have high-risk conditions (58,59). Both studies indicated that otherwise healthy children aged <2 years, and possibly children aged 2-4 years, are at increased risk for influenza-related hospitalization compared with older healthy children (Table 1). Among the Tennessee Medicaid population during 1973-1993, healthy children aged 6 months-2 years had rates of influenza-associated hospitalization comparable with or higher than rates among children aged 3-14 years with high-risk conditions (58,60). Another Tennessee study indicated a hospitalization rate per year of 3-4/1,000 healthy children aged <2 years for laboratory-confirmed influenza (36). The ability of providers to implement the recommendation to vaccinate all children aged 24-59 months during the 2006-07 season, the first year the recommendation is in place, might vary depending on vaccine supply (see Influenza Vaccine Supply and Timing of Annual Influenza Vaccination).

# Pregnant Women

Influenza-associated excess deaths among pregnant women were documented during the pandemics of 1918-19 and 1957-58 (51,161-163).
Case reports and limited studies also indicate that pregnancy can increase the risk for serious medical complications of influenza (164-169). One study of influenza vaccination of approximately 2,000 pregnant women demonstrated no adverse fetal effects associated with inactivated influenza vaccine (170); similar results were observed in a study of 252 pregnant women who received inactivated influenza vaccine within 6 months of delivery (171). No such data exist on the safety of LAIV when administered during pregnancy.

# Breastfeeding Mothers

TIV is safe for mothers who are breastfeeding and their infants. Because whether LAIV is excreted in human milk is unknown, and because of the possibility of shedding vaccine virus given the close proximity of a nursing mother and her infant, caution should be exercised if LAIV is administered to nursing mothers. Breastfeeding does not adversely affect the immune response and is not a contraindication for vaccination.

# Persons Aged 50-64 Years

Vaccination is recommended for persons aged 50-64 years because this group has an increased prevalence of persons with high-risk conditions. In 2002, approximately 43.6 million persons in the United States were aged 50-64 years, of whom 13.5 million (34%) had one or more high-risk medical conditions (172). Influenza vaccine has been recommended for this entire age group to increase the low vaccination levels among persons in this age group with high-risk conditions (see Persons at Increased Risk for Complications). Age-based strategies are more successful in increasing vaccine coverage than patient-selection strategies based on medical conditions. Persons aged 50-64 years without high-risk conditions also receive benefit from vaccination in the form of decreased rates of influenza illness, decreased work absenteeism, and decreased need for medical visits and medication, including antibiotics (9-12). Furthermore, 50 years is an age when other preventive services begin and when routine assessment of vaccination and other preventive services has been recommended (173,174).

# Health-Care Workers and Other Persons Who Can Transmit Influenza to Those at High Risk

Persons who are clinically or asymptomatically infected can transmit influenza virus to persons at high risk for complications from influenza. Decreasing transmission of influenza from caregivers and household contacts to persons at high risk might reduce influenza-related deaths among persons at high risk. In two studies, vaccination of health-care workers was associated with decreased deaths among nursing home patients (144,145), and hospital-based influenza outbreaks frequently occur where unvaccinated health-care workers are employed. Administration of LAIV has been demonstrated to reduce MAARI in contacts of vaccine recipients (175,176) and to reduce ILI-related economic and medical consequences (e.g., work days lost and number of health-care-provider visits). In addition to health-care workers, other groups that can transmit influenza to persons at high risk and that should be vaccinated include the following:
- employees of assisted living and other residences for persons in groups at high risk,
- persons who provide home care to persons in groups at high risk, and
- household contacts (including children) of persons in groups at high risk.
In addition, because children aged 0-23 months are at increased risk for influenza-related hospitalization (58-60), vaccination is recommended for their household contacts and out-of-home caregivers, particularly for contacts of children aged 0-5 months, because influenza vaccines have not been approved by FDA for use among children aged <6 months (see Healthy Young Children Aged 6-59 Months). Healthy persons aged 5-49 years in these groups who are not contacts of severely immunocompromised persons (see Live, Attenuated Influenza Vaccine Recommendations) can receive either LAIV or inactivated influenza vaccine. All other persons in this group should receive inactivated influenza vaccine. All health-care workers should be vaccinated against influenza annually (147,177,178). Facilities that employ health-care workers are strongly encouraged to provide vaccine to workers by using approaches that maximize vaccination levels. An improvement in vaccination coverage levels might help to protect health-care workers, their patients, and communities; improve prevention of influenza-associated disease and patient safety; and reduce disease burden. Influenza vaccination levels among health-care workers should be regularly measured and reported. Although vaccination levels for health-care workers are typically <40%, with moderate effort, organized campaigns can attain higher levels of vaccination among this population (146,179). In 2005, seven states had legislation requiring annual influenza vaccination of health-care workers or the signing of an informed declination (147), and 15 states had regulations regarding vaccination of health-care workers in long-term-care facilities (180). Physicians, nurses, and other workers in both hospital and outpatient-care settings, including medical emergency-response workers (e.g., paramedics and emergency medical technicians), should be vaccinated, as should employees of nursing homes and chronic-care facilities who have contact with patients or residents.

# Persons Infected with HIV

Limited information is available regarding the frequency and severity of influenza illness or the benefits of influenza vaccination among persons with HIV infection (181,182). However, a retrospective study of young and middle-aged women enrolled in Tennessee's Medicaid program determined that the risk for cardiopulmonary hospitalizations among women with HIV infection was higher during influenza seasons than during the peri-influenza periods. The risk for hospitalization was higher for HIV-infected women than for women with other well-recognized high-risk conditions, including chronic heart and lung diseases (183). Another study estimated that the risk for influenza-related death was 9.4-14.6/10,000 persons with acquired immunodeficiency syndrome (AIDS), compared with 0.09-0.10/10,000 among all persons aged 25-54 years and 6.4-7.0/10,000 among persons aged >65 years (184). Other reports indicate that influenza symptoms might be prolonged and the risk for complications from influenza increased for certain HIV-infected persons (185-187). Vaccination has been demonstrated to produce substantial antibody titers against influenza among vaccinated HIV-infected persons who have minimal AIDS-related symptoms and high CD4+ T-lymphocyte cell counts (188-191).
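The mortality estimates cited above imply a striking relative risk. Dividing the AIDS-associated death rate by the rate for all persons aged 25-54 years (a simple rate ratio computed from the figures already given, not an additional published estimate) yields

$$\text{rate ratio} = \frac{9.4\text{ to }14.6/10{,}000}{0.09\text{ to }0.10/10{,}000} \approx 100\text{ to }150,$$

that is, the estimated risk for influenza-related death among persons with AIDS was roughly 100-150 times that of the general population in the same age range and also exceeded the rate among persons aged >65 years.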
A limited, randomized, placebo-controlled trial determined that inactivated influenza vaccine was highly effective in preventing symptomatic, laboratory-confirmed influenza virus infection among HIV-infected persons with a mean of 400 CD4+ T-lymphocyte cells/mm3; however, a limited number of persons with CD4+ T-lymphocyte cell counts of <200 cells/mm3 were included. Influenza vaccination appears most effective among persons with >100 CD4+ cells and among those with <30,000 viral copies of HIV type-1/mL (187). Among persons who have advanced HIV disease and low CD4+ T-lymphocyte cell counts, inactivated influenza vaccine might not induce protective antibody titers (190,191); a second dose of vaccine does not improve the immune response in these persons (191,192). One case study determined that HIV RNA (ribonucleic acid) levels increased transiently in one HIV-infected person after influenza virus infection (193). Studies have demonstrated a transient (i.e., 2-4 week) increase in replication of HIV-1 in the plasma or peripheral blood mononuclear cells of HIV-infected persons after vaccine administration (190,194). Other studies using similar laboratory techniques have not documented a substantial increase in the replication of HIV (195-198). Deterioration of CD4+ T-lymphocyte cell counts or progression of HIV disease has not been demonstrated among HIV-infected persons after influenza vaccination compared with unvaccinated persons (191,199). Limited information is available concerning the effect of antiretroviral therapy on increases in HIV RNA levels after either natural influenza virus infection or influenza vaccination (181,200). Because influenza can result in serious illness and because vaccination with inactivated influenza vaccine might result in the production of protective antibody titers, vaccination might benefit HIV-infected persons, including HIV-infected pregnant women. Therefore, influenza vaccination is recommended.

# Travelers

The risk for exposure to influenza during travel depends on the time of year and destination. In the tropics, influenza can occur throughout the year. In the temperate regions of the Southern Hemisphere, the majority of influenza activity occurs during April-September. In temperate climate zones of the Northern and Southern Hemispheres, travelers also can be exposed to influenza during the summer, especially when traveling as part of large organized tourist groups (e.g., on cruise ships) that include persons from areas of the world where influenza viruses are circulating (201,202). Persons at high risk for complications of influenza who were not vaccinated with influenza vaccine during the preceding fall or winter should consider receiving influenza vaccine before travel if they plan to
- travel to the tropics,
- travel with organized tourist groups at any time of year, or
- travel to the Southern Hemisphere during April-September.

No information is available regarding the benefits of revaccinating persons before summer travel who were already vaccinated during the preceding fall. Persons at high risk who received the previous season's vaccine before travel should be revaccinated with the current vaccine the following fall or winter. Persons aged >50 years and persons at high risk should consult with their health-care provider before embarking on travel during the summer to discuss the symptoms and risks for influenza and other travel-related diseases.
# General Population

In addition to the groups for which annual influenza vaccination is recommended, vaccination providers should administer influenza vaccine to any person who wishes to reduce the likelihood of becoming ill with influenza or of transmitting influenza to others should they become infected (the vaccine can be administered to children aged >6 months), depending on vaccine availability (see Influenza Vaccine Supply and Timing of Annual Influenza Vaccination). A strategy of universal influenza vaccination is being assessed by ACIP. Persons who provide essential community services should be considered for vaccination to minimize disruption of essential activities during influenza outbreaks. Students or other persons in institutional settings (e.g., those who reside in dormitories) should be encouraged to receive vaccine to minimize the disruption of routine activities during epidemics (203).

# Inactivated Influenza Vaccine Recommendations

# TIV Dosage

Dosage recommendations vary according to age group (Table 4). Among previously unvaccinated children aged 6 months-<9 years, 2 doses administered >1 month apart are recommended for eliciting satisfactory antibody responses (85-88). If possible, the second dose should be administered before the onset of influenza season. If a child aged 6 months-<9 years receiving influenza vaccine for the first time does not receive a second dose of vaccine within the same season, only 1 dose of vaccine should be administered the following season. Two doses are not required at that time. ACIP does not recommend that a child receiving influenza vaccine for the first time be administered the first dose of vaccine in the spring as a priming dose for the following season (86,88). Among adults, studies have indicated limited or no improvement in antibody response when a second dose is administered during the same season (204-206). Even when the current influenza vaccine contains one or more antigens administered in previous years, annual vaccination with the current vaccine is necessary because immunity declines during the year after vaccination (207,208). Vaccine prepared for a previous influenza season should not be administered to provide protection for the current season (see Persons Who Should Not Be Vaccinated with Inactivated Influenza Vaccine).

# TIV Route

The intramuscular route is recommended for inactivated influenza vaccine. Adults and older children should be vaccinated in the deltoid muscle. A needle length >1 inch should be considered for these age groups because needles <1 inch might be of insufficient length to penetrate muscle tissue in certain adults and older children (209). Infants and young children should be vaccinated in the anterolateral aspect of the thigh (210). ACIP recommends a needle length of 7/8-1 inch for children aged <12 months for intramuscular vaccination into the anterolateral thigh. When injecting into the deltoid muscle among children with adequate deltoid muscle mass, a needle length of 7/8-1.25 inches is recommended (210).

# TIV Side Effects and Adverse Reactions

When educating patients regarding potential side effects, clinicians should emphasize that 1) inactivated influenza vaccine contains noninfectious killed viruses and cannot cause influenza, and 2) coincidental respiratory disease unrelated to influenza vaccination can occur after vaccination.
# TIV Local Reactions

In placebo-controlled studies among adults, the most frequent side effect of vaccination is soreness at the vaccination site (affecting 10%-64% of patients) that lasts <2 days (12,211-213). These local reactions typically are mild and rarely interfere with the person's ability to conduct usual daily activities. One blinded, randomized, cross-over study among 1,952 adults and children with asthma demonstrated that only body aches were reported more frequently after inactivated influenza vaccine (25.1%) than placebo injection (20.8%) (214). One study reported that 20%-28% of children with asthma aged 9 months-18 years experienced local pain and swelling (81), and another study reported that 23% of children aged 6 months-4 years with chronic heart or lung disease had local reactions (76). A different study reported no difference in local reactions among 53 children aged 6 months-6 years with high-risk medical conditions or among 305 healthy children aged 3-12 years in a placebo-controlled trial of inactivated influenza vaccine (77). In a study of 12 children aged 5-32 months, no substantial local or systemic reactions were noted (215). The interpretation of these findings should be made with caution given the small number of children studied.

# TIV Systemic Reactions

Fever, malaise, myalgia, and other systemic symptoms can occur after vaccination with inactivated vaccine and most often affect persons who have had no previous exposure to the influenza virus antigens in the vaccine (e.g., young children) (216,217). These reactions begin 6-12 hours after vaccination and can persist for 1-2 days. Placebo-controlled trials demonstrate that among older persons and healthy young adults, administration of split-virus influenza vaccine is not associated with higher rates of systemic symptoms (e.g., fever, malaise, myalgia, and headache) when compared with placebo injections (12,211-213). In a randomized cross-over study among both children and adults with asthma, no increase in asthma exacerbations was reported for either age group (214). An analysis of 215,600 children aged <18 years and 8,476 children aged 6-23 months enrolled in one of five HMOs reported no increase in biologically plausible medically attended events during the 2 weeks after inactivated influenza vaccination, compared with control periods 3-4 weeks before and after vaccination (218). In a study of 791 healthy children (71), postvaccination fever was noted among 11.5% of children aged 1-5 years, among 4.6% of children aged 6-10 years, and among 5.1% of children aged 11-15 years. Among children with high-risk medical conditions, one study of 52 children aged 6 months-4 years indicated that 27% had fever and 25% had irritability and insomnia (76); another study among 33 children aged 6-18 months indicated that one child had irritability and one had a fever and seizure after vaccination (219). No placebo comparison group was used in these studies. A published review of Vaccine Adverse Event Reporting System (VAERS) reports of TIV in children aged 6-23 months documented that the most frequently reported adverse events were fever, rash, injection-site reactions, and seizures. The majority of the small total number of reported seizures appeared to be febrile (220).
Because of the limitations of passive reporting systems, determining causality for specific types of adverse events, with the exception of injection-site reactions, is usually not possible using VAERS data alone. A population-based study of TIV safety in children aged 6-23 months who were vaccinated during 1993-1999 indicated no vaccine-associated adverse events that had a plausible relationship to vaccination (221). Health-care professionals should promptly report to VAERS all clinically significant adverse events after influenza vaccination, even if the health-care professional is not certain that the vaccine caused the event. The Institute of Medicine has specifically recommended reporting of potential neurologic complications (e.g., demyelinating disorders such as Guillain-Barré syndrome [GBS]), although no evidence exists of a causal relation between influenza vaccine and neurologic disorders in children. Immediate, presumably allergic, reactions (e.g., hives, angioedema, allergic asthma, and systemic anaphylaxis) rarely occur after influenza vaccination (222). These reactions probably result from hypersensitivity to certain vaccine components; the majority of reactions probably are caused by residual egg protein. Although current influenza vaccines contain only a limited quantity of egg protein, this protein can induce immediate hypersensitivity reactions among persons who have severe egg allergy. Persons who have had hives or swelling of the lips or tongue or who have experienced acute respiratory distress or collapse after eating eggs should consult a physician for appropriate evaluation to help determine if vaccine should be administered. Persons who have documented immunoglobulin E (IgE)-mediated hypersensitivity to eggs, including those who have had occupational asthma or other allergic responses to egg protein, might also be at increased risk for allergic reactions to influenza vaccine, and consultation with a physician should be considered (223-225). Persons with a history of severe hypersensitivity (e.g., anaphylaxis) to eggs should not receive influenza vaccine. Hypersensitivity reactions to any vaccine component can occur theoretically. Although exposure to vaccines containing thimerosal can lead to induction of hypersensitivity, the majority of patients do not have reactions to thimerosal when it is administered as a component of vaccines, even when patch or intradermal tests for thimerosal indicate hypersensitivity (226,227). When reported, hypersensitivity to thimerosal usually has consisted of local, delayed hypersensitivity reactions (226).

# GBS and TIV

The 1976 swine influenza vaccine was associated with an increased frequency of GBS (228,229). Among persons who received the swine influenza vaccine in 1976, the rate of GBS was <10 cases/1 million persons vaccinated; the risk was higher among persons aged >25 years than among persons aged <25 years (228). Evidence for a causal relation of GBS with subsequent vaccines prepared from other influenza viruses is unclear. Obtaining strong epidemiologic evidence for a possible limited increase in risk is difficult for a condition as rare as GBS, which has an estimated annual incidence of 10-20 cases/1 million adults (230). Investigations to date have not documented a substantial increase in GBS associated with influenza vaccines (other than the swine influenza vaccine in 1976) and suggest that, if influenza vaccine does pose a risk, it is probably slightly more than one additional case/1 million persons vaccinated.
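That order of magnitude can be checked with a simple attributable-risk calculation. The following is an illustrative sketch using figures in this report, assuming a 6-week postvaccination risk window and the relative risk (RR) of 1.7 reported for the 1992-94 seasons in the study described below; it is not an additional published estimate:

$$\text{baseline 6-week risk} \approx \frac{10\text{ to }20}{10^6 \text{ per year}} \times \frac{6}{52} \approx 1.2\text{ to }2.3 \text{ per } 10^6,$$

$$\text{excess risk} \approx (\mathrm{RR} - 1) \times \text{baseline} \approx 0.7 \times (1.2\text{ to }2.3 \text{ per } 10^6) \approx 1 \text{ per } 10^6,$$

consistent with the estimate of approximately one additional case of GBS per 1 million persons vaccinated.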
During three of four influenza seasons studied during 1977-1991, the overall relative risk estimates for GBS after influenza vaccination were slightly elevated but were not statistically significant in any of these studies (231-233). However, in a study of the 1992-93 and 1993-94 influenza seasons, the overall relative risk for GBS was 1.7 (CI = 1.0-2.8; p = 0.04) during the 6 weeks after vaccination, representing approximately 1 additional case of GBS/1 million persons vaccinated; the combined number of GBS cases peaked 2 weeks after vaccination (234). VAERS has documented decreased reporting of postinfluenza vaccine GBS across age groups, despite overall increased reporting of other, non-GBS conditions occurring after influenza vaccination (235). Cases of GBS after influenza infection have been reported, but no other epidemiologic studies have documented such an association (236,237). Substantial evidence exists that several infectious illnesses, most notably Campylobacter jejuni infection and upper respiratory tract infections, are associated with GBS (230,238-240). Even if GBS were a true side effect of vaccination in years other than 1976, the estimated risk for GBS of approximately 1 additional case/1 million persons vaccinated is substantially less than the risk for severe influenza, which can be prevented by vaccination among all age groups, especially persons aged >65 years and those who have medical indications for influenza vaccination (Table 1) (see Hospitalizations and Deaths from Influenza). The potential benefits of influenza vaccination in preventing serious illness, hospitalization, and death substantially outweigh the possible risks for experiencing vaccine-associated GBS. The average case fatality ratio for GBS is 6% and increases with age (230,241). No evidence indicates that the case fatality ratio for GBS differs between vaccinated persons and those not vaccinated. The incidence of GBS among the general population is low, but persons with a history of GBS have a substantially greater likelihood of subsequently experiencing GBS than persons without such a history (231,242). Thus, the likelihood of coincidentally experiencing GBS after influenza vaccination is expected to be greater among persons with a history of GBS than among persons with no history of this syndrome. Whether influenza vaccination specifically might increase the risk for recurrence of GBS is unknown. However, avoiding vaccinating persons who are not at high risk for severe influenza complications and who are known to have experienced GBS within 6 weeks after a previous influenza vaccination is prudent. As an alternative, physicians might consider using influenza antiviral chemoprophylaxis for these persons. Although data are limited, for the majority of persons who have a history of GBS and who are at high risk for severe complications from influenza, the established benefits of influenza vaccination justify yearly vaccination.

# Thimerosal and Inactivated Influenza Vaccine

Thimerosal, a mercury-containing compound, has been used as a preservative in vaccines since the 1930s and is used in multidose vials of inactivated influenza vaccine to reduce the likelihood of bacterial contamination (243). Many of the single-dose syringes and vials of TIV are thimerosal-free or contain only trace amounts of thimerosal (Table 4). No scientific evidence indicates that thimerosal in vaccines, including influenza vaccines, leads to serious adverse events in vaccine recipients (244).
However, in 1999, the U.S. Public Health Service and other organizations recommended that efforts be made to eliminate or reduce the thimerosal content in vaccines to decrease total mercury exposure, chiefly among infants (243-245). Since mid-2001, vaccines routinely recommended for infants in the United States have been manufactured either without or with only trace amounts of thimerosal, resulting in a substantial reduction in the total mercury exposure from vaccines for children (210). Vaccines containing trace amounts of thimerosal have <1 mcg mercury/dose. The risks for severe illness from influenza virus infection are elevated among both young children and pregnant women, and persons in both groups benefit from vaccination. In contrast, no scientifically conclusive evidence exists of harm from exposure to thimerosal preservative-containing vaccine. In fact, accumulating evidence supports the absence of any harm resulting from exposure to such vaccines (243,246-248). Therefore, the benefits of influenza vaccination outweigh the theoretical risk, if any, from thimerosal exposure through vaccination. Nonetheless, certain persons remain concerned regarding exposure to thimerosal. As of February 2006, six states had enacted legislation banning the administration of vaccines containing mercury; the provisions defining mercury content vary. These laws might present a barrier to vaccination until sufficient numbers of doses of influenza vaccines without thimerosal as a preservative or with only trace amounts are available. The U.S. vaccine supply for infants and pregnant women is in a period of transition; the availability of thimerosal-reduced or thimerosal-free vaccine intended for these groups is being expanded by manufacturers as a feasible means of reducing an infant's total exposure to mercury, because other environmental sources of exposure are more difficult or impossible to eliminate. Reductions in thimerosal in other vaccines have already been achieved and have resulted in substantially lowered cumulative exposure to thimerosal from vaccination among infants and children. For all of these reasons, persons for whom inactivated influenza vaccine is recommended may receive vaccine with or without thimerosal, depending on availability.

# Persons Who Should Not Be Vaccinated with Inactivated Influenza Vaccine

Inactivated influenza vaccine should not be administered to persons known to have anaphylactic hypersensitivity to eggs or to other components of the influenza vaccine without first consulting a physician (see Side Effects and Adverse Reactions). Chemoprophylactic use of antiviral agents is an option for preventing influenza among such persons. However, persons who have a history of anaphylactic hypersensitivity to vaccine components but who also are at high risk for complications from influenza can benefit from vaccine after appropriate allergy evaluation and desensitization. Information regarding vaccine components is located in package inserts from each manufacturer. Persons with moderate-to-severe acute febrile illness usually should not be vaccinated until their symptoms have abated. However, minor illnesses with or without fever do not contraindicate use of influenza vaccine, particularly among children with mild upper respiratory tract infection or allergic rhinitis.
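For illustration, the contraindications and precautions in this section, together with the GBS precaution above, can be outlined as simple triage logic. The following Python sketch uses hypothetical field and function names and is a summary of the text, not a clinical decision tool:

```python
from dataclasses import dataclass

@dataclass
class Patient:
    anaphylactic_egg_allergy: bool
    moderate_or_severe_acute_febrile_illness: bool
    gbs_within_6wk_of_prior_flu_vaccine: bool
    high_risk_for_complications: bool

def tiv_screening(p: Patient) -> str:
    """Illustrative triage of the TIV precautions described above."""
    if p.anaphylactic_egg_allergy:
        # Physician consultation first; antiviral chemoprophylaxis is an
        # option, or vaccination after allergy evaluation/desensitization
        # for persons at high risk for complications.
        return "consult physician before vaccinating"
    if p.gbs_within_6wk_of_prior_flu_vaccine and not p.high_risk_for_complications:
        # Prudent to avoid vaccination; consider antiviral chemoprophylaxis.
        return "avoid vaccination; consider chemoprophylaxis"
    if p.moderate_or_severe_acute_febrile_illness:
        # Minor illness with or without fever is not a contraindication.
        return "defer until symptoms abate"
    return "vaccinate"
```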
# TIV and Use of Influenza Antiviral Medications

Because TIV contains only influenza virus subunits and no live virus, no contraindication exists to the coadministration of TIV and influenza antivirals (see sections on Chemoprophylaxis; and Control of Influenza Outbreaks in Institutions).

# Live, Attenuated Influenza Vaccine Recommendations

# Using LAIV

LAIV is an option for vaccination of healthy, nonpregnant persons aged 5-49 years who want to avoid influenza and for those who might be in close contact with persons at high risk for severe complications, including health-care workers. During periods when inactivated vaccine is in short supply, use of LAIV is encouraged when feasible for eligible persons (including health-care workers) because use of LAIV by these persons might increase the availability of inactivated vaccine for persons in groups at high risk. Possible advantages of LAIV include its potential to induce a broad mucosal and systemic immune response, its ease of administration, and the acceptability of an intranasal rather than intramuscular route of administration.

# LAIV Dosage and Administration

LAIV is intended for intranasal administration only and should not be administered by the intramuscular, intradermal, or intravenous route. LAIV must be thawed before administration. This can be accomplished by holding an individual sprayer in the palm of the hand until thawed, with subsequent immediate administration. Alternatively, the vaccine can be thawed in a refrigerator and stored at 2ºC-8ºC for <60 hours before use. Vaccine should not be refrozen after thawing. LAIV is supplied in a prefilled single-use sprayer containing 0.5 mL of vaccine. Approximately 0.25 mL (i.e., half of the total sprayer contents) is sprayed into the first nostril while the recipient is in the upright position. An attached dose-divider clip is removed from the sprayer to administer the second half of the dose into the other nostril. If the vaccine recipient sneezes after administration, the dose should not be repeated. LAIV should be administered annually according to the following schedule:
- Children aged 5-<9 years previously unvaccinated at any time with either LAIV or inactivated influenza vaccine should receive 2 doses of LAIV separated by 6-10 weeks; if possible, the second dose of vaccine should be administered before the onset of influenza season.
- Children aged 5-<9 years previously vaccinated at any time with either LAIV or inactivated influenza vaccine should receive 1 dose of LAIV. They do not require a second dose.
- Persons aged 9-49 years should receive 1 dose of LAIV.
LAIV can be administered to persons with minor acute illnesses (e.g., diarrhea or mild upper respiratory tract infection with or without fever). However, if clinical judgment indicates that nasal congestion is present that might impede delivery of the vaccine to the nasopharyngeal mucosa, deferral of administration should be considered until resolution of the illness. Whether concurrent administration of LAIV with other vaccines affects the safety or efficacy of either LAIV or the simultaneously administered vaccine is unknown. In the absence of specific data indicating interference, following the ACIP general recommendations for immunization is prudent (210). Inactivated vaccines do not interfere with the immune response to other inactivated vaccines or to live vaccines. Inactivated or live vaccines can be administered simultaneously with LAIV. However, after administration of a live vaccine, at least 4 weeks should pass before another live vaccine is administered (see Persons Who Should Not Be Vaccinated with LAIV).
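The annual schedule above reduces to a small rule. A minimal Python sketch, assuming eligibility (healthy, nonpregnant, aged 5-49 years) has already been established; the laiv_doses helper is a hypothetical name:

```python
def laiv_doses(age_years: float, previously_vaccinated: bool) -> int:
    """Number of LAIV doses per the schedule above (illustrative only)."""
    if not 5 <= age_years <= 49:
        raise ValueError("LAIV is approved only for persons aged 5-49 years")
    if age_years < 9 and not previously_vaccinated:
        # Two doses separated by 6-10 weeks; the second dose should
        # ideally be administered before the onset of influenza season.
        return 2
    return 1
```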
# LAIV and Use of Influenza Antiviral Medications

The effect on safety and efficacy of LAIV coadministration with influenza antiviral medications has not been studied. However, because influenza antivirals reduce replication of influenza viruses, LAIV should not be administered until 48 hours after cessation of influenza antiviral therapy, and influenza antiviral medications should not be administered for 2 weeks after receipt of LAIV.

# LAIV Storage

LAIV must be stored at -15ºC or colder. A manufacturer-supplied freezer box was formerly required for storage of LAIV in a frost-free freezer; however, the freezer box is now optional, and LAIV may now be stored in frost-free freezers without using a freezer box. LAIV can be thawed in a refrigerator and stored at 2ºC-8ºC for <60 hours before use. It should not be refrozen after thawing because of decreased vaccine potency.

# Shedding, Transmission, and Stability of Vaccine Viruses

Available data indicate that both children and adults vaccinated with LAIV can shed vaccine viruses for >2 days after vaccination, although in lower titers than typically occur with shedding of wild-type influenza viruses. Shedding should not be equated with person-to-person transmission of vaccine viruses, although, in rare instances, shed vaccine viruses can be transmitted from vaccinees to nonvaccinated persons. One unpublished study in a child care setting assessed transmissibility of vaccine viruses from 98 vaccinated to 99 unvaccinated children, all aged 8-36 months. Eighty percent of vaccine recipients shed one or more virus strains, with a mean duration of 7.6 days (249). One influenza type B isolate, recovered from a placebo recipient, was confirmed to be vaccine-type virus. The type B isolate retained the cold-adapted, temperature-sensitive, attenuated phenotype, and it possessed the same genetic sequence as a virus shed from a vaccine recipient in the same children's play group. The placebo recipient from whom the influenza type B vaccine virus was isolated did not exhibit symptoms that were different from those experienced by vaccine recipients. The estimated probability of acquiring vaccine virus after close contact with a single LAIV recipient in this child care population was 0.58%-2.4%. One study assessing shedding of vaccine viruses in 20 healthy vaccinated adults aged 18-49 years demonstrated that the majority of shedding occurred within the first 3 days after vaccination, although one participant was noted to shed virus on day 7 after vaccine receipt. No study participants shed vaccine viruses >10 days after vaccination. Duration or type of symptoms associated with receipt of LAIV did not correlate with duration of shedding of vaccine viruses. Person-to-person transmission of vaccine viruses was not assessed in this study (250). Another study assessing shedding of vaccine viruses in 14 healthy adults aged 18-49 years indicated that 50% of these adults had viral antigen detected by direct immunofluorescence or rapid antigen tests within 7 days of vaccination. The majority of viral shedding was detected on day 2 or 3. Person-to-person transmission of vaccine viruses was not assessed in this study (251). In clinical trials, viruses shed by vaccine recipients have been phenotypically stable.
In one study, nasal and throat swab specimens were collected from 17 study participants for 2 weeks after vaccine receipt (252). Virus isolates were analyzed by multiple genetic techniques. All isolates retained the LAIV genotype after replication in the human host, and all retained the cold-adapted and temperature-sensitive phenotypes. A study conducted in a day care setting found that limited genetic change occurred in the LAIV strains after replication in the vaccine recipients (253).

# LAIV Side Effects and Adverse Reactions

Twenty prelicensure clinical trials assessed the safety of the approved LAIV. In these combined studies, approximately 28,000 doses of the vaccine were administered to approximately 20,000 persons. A subset of these trials comprised randomized, placebo-controlled studies in which an estimated 4,000 healthy children aged 5-17 years and 2,000 healthy adults aged 18-49 years were vaccinated. The incidence of adverse events possibly complicating influenza (e.g., pneumonia, bronchitis, bronchiolitis, or central nervous system events) was not statistically different among LAIV and placebo recipients aged 5-49 years. LAIV is made from attenuated viruses and does not cause influenza in vaccine recipients.

Children. In a subset of healthy children aged 60-71 months from one clinical trial (111,112), certain signs and symptoms were reported more often among LAIV recipients after the first dose (n = 214) than among placebo recipients (n = 95) (e.g., runny nose, 48.1% versus 44.2%; headache, 17.8% versus 11.6%; vomiting, 4.7% versus 3.2%; and myalgias, 6.1% versus 4.2%), but these differences were not statistically significant. In other trials, signs and symptoms reported after LAIV administration have included runny nose or nasal congestion (20%-75%), headache (2%-46%), fever (0%-26%), vomiting (3%-13%), abdominal pain (2%), and myalgias (0%-21%) (105,108,110,254-256). These symptoms were associated more often with the first dose and were self-limited. Data from a study of children aged 1-17 years indicated an increase in asthma or reactive airways disease in the subset aged 1-<5 years (257,258). Because of these data, LAIV is not approved for use among children aged <5 years. Another study was conducted among more than 11,000 children aged 18 months-18 years in which 18,780 doses of vaccine were administered over a 4-year period. This study did not observe an increase in asthma visits 0-15 days after vaccination for children aged 18 months-4 years compared with the prevaccination period; however, a significant increase in asthma events was observed 15-42 days after vaccination, but only in vaccine year 1 (259).

Adults. Among adults, runny nose or nasal congestion (28%-78%), headache (16%-44%), and sore throat (15%-27%) have been reported more often among vaccine recipients than among placebo recipients (114,260,261). In one clinical trial (114) among a subset of healthy adults aged 18-49 years, signs and symptoms reported more frequently among LAIV recipients (n = 2,548) than among placebo recipients (n = 1,290) within 7 days after each dose included cough (13.9% versus 10.8%), runny nose (44.5% versus 27.1%), sore throat (27.8% versus 17.1%), chills (8.6% versus 6.0%), and tiredness/weakness (25.7% versus 21.6%).

Safety Among Groups at High Risk from Influenza-Related Morbidity.
Until additional data are acquired and analyzed, persons at high risk for experiencing complications from influenza virus infection (e.g., immunocompromised patients; patients with asthma, cystic fibrosis, or chronic obstructive pulmonary disease; or persons aged >65 years) should not be vaccinated with LAIV. Protection from influenza among these groups should be accomplished using inactivated influenza vaccine.

Serious Adverse Events. Serious adverse events requiring medical attention among healthy children aged 5-17 years or healthy adults aged 18-49 years occurred at a rate of <1%. Surveillance will continue for adverse events that might not have been detected in previous studies. Reviews of reports to VAERS after vaccination of approximately 2,500,000 persons during the 2003-04 and 2004-05 influenza seasons did not reveal any substantial new safety concerns (262,263). Health-care professionals should promptly report all clinically significant adverse events after LAIV administration to VAERS, as recommended for inactivated influenza vaccine.

# Persons Who Should Not Be Vaccinated with LAIV

The following populations should not be vaccinated with LAIV:
- persons aged <5 years or those aged >49 years;
- persons with asthma, reactive airways disease, or other chronic disorders of the pulmonary or cardiovascular systems;
- persons with other underlying medical conditions, including metabolic diseases such as diabetes, renal dysfunction, and hemoglobinopathies;
- persons with known or suspected immunodeficiency diseases or who are receiving immunosuppressive therapies;
- children or adolescents receiving aspirin or other salicylates;
- persons with a history of GBS;
- pregnant women; or
- persons with a history of hypersensitivity, including anaphylaxis, to any of the components of LAIV or to eggs.

# Vaccination of Close Contacts of Persons at High Risk for Complications from Influenza

Close contacts of persons at high risk for complications from influenza should receive influenza vaccine to reduce transmission of wild-type influenza viruses to persons at high risk. Use of inactivated influenza vaccine is preferred for vaccinating household members, health-care workers, and others who have close contact with severely immunocompromised persons (e.g., patients with hematopoietic stem cell transplants) during those periods in which the immunocompromised person requires care in a protective environment. The rationale for not using LAIV among health-care workers caring for such patients is the theoretical risk that a live, attenuated vaccine virus could be transmitted to the severely immunocompromised person. If a health-care worker receives LAIV, that worker should refrain from contact with severely immunocompromised patients for 7 days after vaccine receipt. Hospital visitors who have received LAIV should refrain from contact with severely immunocompromised persons for 7 days after vaccination; however, such persons need not be excluded from visitation of patients who are not severely immunocompromised. ACIP has not indicated a preference for inactivated influenza vaccine use by health-care workers or other persons who have close contact with persons with lesser degrees of immunodeficiency (e.g., persons with diabetes, persons with asthma taking corticosteroids, or persons infected with HIV) or for inactivated influenza vaccine use by health-care workers or other healthy persons aged 5-49 years in close contact with all other groups at high risk.

# Personnel Who May Administer LAIV

Low-level introduction of vaccine viruses into the environment is likely unavoidable when administering LAIV. The risk for acquiring vaccine viruses from the environment is unknown but is likely limited. Severely immunocompromised persons should not administer LAIV. However, other persons at high risk for influenza complications may administer LAIV, including persons with underlying medical conditions placing them at high risk or who are likely to be at risk, such as pregnant women, persons with asthma, and persons aged >50 years.
# Recommended Vaccines for Different Age Groups

When vaccinating children aged 6 months-3 years, health-care providers should use inactivated influenza vaccine that has been approved by FDA for this age group. Inactivated influenza vaccine from sanofi pasteur (Fluzone) is approved for use among persons aged >6 months. Inactivated influenza vaccine from Novartis, formerly Chiron (Fluvirin), is labeled in the United States for use among persons aged >4 years because data to demonstrate efficacy among younger persons have not been provided to FDA, whereas inactivated influenza vaccine from GlaxoSmithKline (FLUARIX) is labeled for use among persons aged >18 years. LAIV from MedImmune (FluMist) is approved for use by healthy persons aged 5-49 years (Table 4).

# Influenza Vaccine Supply and Timing of Annual Influenza Vaccination

The annual supply of influenza vaccine and the timing of its distribution cannot be guaranteed in any year. Currently, influenza vaccine manufacturers are projecting that approximately 100 million doses of influenza vaccine will be available in the United States for the 2006-07 influenza season, approximately 16% more than the number of doses available for the 2005-06 season. An additional 15 million-20 million doses might be available if a new vaccine is licensed in 2006. (Information about the status of licensure of new vaccines is available at /news/vaccstatus.pdf.) However, influenza vaccine distribution delays or vaccine shortages remain possible, in part because of the inherent critical time constraints in manufacturing the vaccine, given the annual updating of the influenza vaccine strains. To ensure optimal use of available doses of influenza vaccine, health-care providers, those planning organized campaigns, and state and local public health agencies should 1) develop plans for expanding outreach and infrastructure to vaccinate more persons than last year and 2) develop contingency plans for the timing and prioritization of administering influenza vaccine if the supply of vaccine is delayed and/or reduced. CDC and other public health agencies will assess the vaccine supply on a continuing basis throughout the manufacturing period and will inform both providers and the general public if a substantial delay or an inadequate supply occurs. Because LAIV is approved for use in healthy persons aged 5-49 years, no recommendations exist for limiting the timing and prioritization of administering LAIV. Administration of LAIV is encouraged as soon as it is available and throughout the season. If the supply of inactivated influenza vaccine is adequate and a sufficient number of doses will be available beginning in September, vaccination efforts should be structured to ensure the vaccination of as many persons as possible over the course of several months. Even if vaccine distribution begins in September, distribution probably will not be completed until December or January; therefore, the following recommendations reflect this phased distribution during the months of October, November, and December, and possibly later. The prioritized (tiered) use of influenza vaccine during inactivated influenza vaccine shortages applies only to the use of inactivated vaccine and not to LAIV. When feasible, during shortages of inactivated influenza vaccine, LAIV should be used preferentially for all healthy persons aged 5-49 years (including health-care workers) to increase the availability of inactivated vaccine for groups at high risk.
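For illustration, the age indications listed at the beginning of this section can be expressed as a lookup. The following Python sketch encodes the labeled age ranges from the text; the data structure and function names are hypothetical, and LAIV additionally requires that recipients be healthy and nonpregnant, which is not modeled here:

```python
# Labeled age indications for the 2006-07 season, per the text above.
# Ranges are (minimum age in years, maximum age in years or None).
VACCINE_AGE_INDICATIONS = {
    "Fluzone (TIV, sanofi pasteur)":   (0.5, None),   # >6 months
    "Fluvirin (TIV, Novartis)":        (4.0, None),   # >4 years
    "FLUARIX (TIV, GlaxoSmithKline)":  (18.0, None),  # >18 years
    "FluMist (LAIV, MedImmune)":       (5.0, 49.0),   # healthy 5-49 years
}

def approved_products(age_years: float) -> list[str]:
    """Products whose labeled age range includes the given age."""
    return [name for name, (lo, hi) in VACCINE_AGE_INDICATIONS.items()
            if age_years >= lo and (hi is None or age_years <= hi)]

print(approved_products(2))  # a 2-year-old: Fluzone only
```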
The following section provides guidance regarding the timing of vaccination under two scenarios: 1) if the supply of inactivated influenza vaccine is adequate, and 2) if a reduced or delayed supply of inactivated vaccine occurs. Materials to assist providers are available at http://www.cdc.gov/flu/professionals/vaccination/index.htm (see also Travelers section).

# Vaccination Before October

To avoid missed opportunities for vaccination of persons at increased risk for serious complications and their household contacts (including out-of-home caregivers and household contacts of children aged 0-59 months), such persons should be offered vaccine beginning in September during routine health-care visits or during hospitalizations, if vaccine is available. However, in facilities housing older persons (e.g., nursing homes), vaccination before October typically should be avoided because antibody levels in such persons can begin to decline more rapidly after vaccination (264). If vaccine supplies are sufficient, vaccination of other persons also may begin before October. In addition, because children aged 6 months-<9 years who have not been previously vaccinated need 2 doses of vaccine, they should receive their first dose in September, if vaccine is available, so that both doses can be administered before the onset of influenza activity. For previously vaccinated children, only 1 dose is needed.

# Vaccination in October and November

The optimal time for vaccination efforts is usually during October-November. In October, vaccination in provider-based settings should start or continue for all patients, both high risk and healthy, and extend throughout November. Vaccination of children aged 6 months-<9 years who are receiving vaccine for the first time should also begin in October, if not done earlier, because those children need a booster dose 4-10 weeks after the initial dose, depending on whether they are receiving inactivated influenza vaccine or LAIV. If supplies of inactivated influenza vaccine are not adequate, ACIP recommends that vaccine providers focus their vaccination efforts in October primarily on persons aged >50 years, persons aged <50 years at increased risk for influenza-related complications (including children aged 6-59 months), household contacts of persons at high risk (including out-of-home caregivers and household contacts of children aged 0-59 months), and health-care workers (178). Efforts to vaccinate other persons who wish to decrease their risk for influenza virus infection should not begin until November; however, if such persons request vaccination in October, vaccination should not be deferred, unless vaccine supplies dictate otherwise.

# Vaccination in December and Later

When inactivated vaccine is delayed, a substantial proportion of doses often do not become available until December or later. Nevertheless, even when supply is not delayed or reduced, many persons who should receive influenza vaccine remain unvaccinated, as demonstrated by the relatively low vaccination coverage levels among persons in the defined priority groups (Table 3). Providers should routinely offer influenza vaccine throughout the influenza season, even after influenza activity has been documented in the community. In the United States, seasonal influenza activity can begin to increase as early as October or November, but influenza activity has not reached peak levels until late December-early March in the majority of recent seasons (Table 5).
[Table 5, showing the month of peak influenza activity for U.S. seasons during 1976-2006, appears here. The peak week of activity was defined as the week with the greatest percentage of respiratory specimens testing positive for influenza on the basis of a 3-week moving average; laboratory data were provided by U.S. World Health Organization Collaborating Centers (CDC, unpublished data, 1976-2006).]

Although the timing of influenza activity can vary by region, vaccine administered after November is likely to be beneficial in the majority of influenza seasons. Adults have peak antibody protection against influenza virus infection 2 weeks after vaccination (265,266).

# Timing of Organized Vaccination Campaigns

Persons and institutions planning substantial organized vaccination campaigns (e.g., health departments, occupational health clinics, and community vaccinators) should consider scheduling these events no earlier than mid-October because the availability of vaccine in any location cannot be ensured consistently in early fall. Scheduling campaigns after mid-October will minimize the need for cancellations because vaccine is unavailable. These vaccination clinics should be scheduled through November, with attention to settings that serve children aged 6-59 months, pregnant women, other persons aged >50 years, health-care workers, and household contacts and out-of-home caregivers of persons at high risk (including children aged 0-59 months), to the extent feasible. Planners are encouraged to schedule at least one vaccination clinic in December. During a vaccine shortage or delay, substantial proportions of inactivated influenza vaccine doses may not be released until November and December or later. Beginning in November, vaccination campaigns can be broadened to include healthy persons who wish to reduce their risk for influenza virus infection. ACIP recommends that organizers schedule these vaccination clinics throughout November and December. When the vaccine is substantially delayed, agencies should consider offering vaccination clinics into January, as long as vaccine supplies are available. Campaigns using LAIV are optimally conducted in October and November but can also extend into January.

# Strategies for Implementing Vaccination Recommendations in Health-Care Settings

Successful vaccination programs combine publicity and education for health-care workers and other potential vaccine recipients, a plan for identifying persons at high risk, use of reminder/recall systems, assessment of practice-level vaccination rates with feedback to staff, and efforts to remove administrative and financial barriers that prevent persons from receiving the vaccine, including use of standing orders programs (19,267). Since October 2005, the Centers for Medicare and Medicaid Services (CMS) has required nursing homes participating in the Medicare and Medicaid programs to offer all residents influenza and pneumococcal vaccines and to document the results. According to the requirements, each resident is to be vaccinated unless it is medically contraindicated or the resident or his/her legal representative refuses vaccination. This information is to be reported as part of the CMS Minimum Data Set, which tracks nursing home health parameters (268). The use of standing orders programs by long-term-care facilities (e.g., nursing homes and skilled nursing facilities), hospitals, and home health agencies might help to ensure the administration of recommended vaccinations for adults (269). Standing orders programs for both influenza and pneumococcal vaccination should be conducted under the supervision of a licensed practitioner according to a physician-approved facility or agency policy by health-care workers trained to screen patients for contraindications to vaccination, administer vaccine, and monitor for adverse events.
CMS has removed the physician signature requirement for the administration of influenza and pneumococcal vaccines to Medicare and Medicaid patients in hospitals, long-term-care facilities, and home health agencies (269). To the extent allowed by local and state law, these facilities and agencies may implement standing orders for influenza and pneumococcal vaccination of Medicare- and Medicaid-eligible patients. Other settings (e.g., outpatient facilities, managed care organizations, assisted living facilities, correctional facilities, pharmacies, and adult workplaces) are encouraged to introduce standing orders programs as well (20). In addition, physician reminders (e.g., flagging charts) and patient reminders are recognized strategies for increasing rates of influenza vaccination. Persons for whom influenza vaccine is recommended can be identified and vaccinated in the settings described in the following sections.

# Outpatient Facilities Providing Ongoing Care

Staff in facilities providing ongoing medical care (e.g., physicians' offices, public health clinics, employee health clinics, hemodialysis centers, hospital specialty-care clinics, and outpatient rehabilitation programs) should identify and label the medical records of patients who should receive vaccination. Vaccine should be offered during visits beginning in September (if vaccine is available) and throughout the influenza season. The offer of vaccination and its receipt or refusal should be documented in the medical record. Patients for whom vaccination is recommended and who do not have regularly scheduled visits during the fall should be reminded by mail, telephone, or other means of the need for vaccination.

# Outpatient Facilities Providing Episodic or Acute Care

Beginning each September, acute health-care facilities (e.g., emergency departments and walk-in clinics) should offer vaccinations to persons for whom vaccination is recommended or provide written information regarding why, where, and how to obtain the vaccine. This written information should be available in languages appropriate for the populations served by the facility.

# Nursing Homes and Other Residential Long-Term-Care Facilities

During October and November each year, vaccination should be routinely provided to all residents of chronic-care facilities with the concurrence of attending physicians. Consent for vaccination should be obtained from the resident or a family member at the time of admission to the facility or anytime afterward. Ideally, all residents should be vaccinated at one time, before influenza season. Residents admitted through March after completion of the vaccination program at the facility should be vaccinated at the time of admission.

# Acute-Care Hospitals

Persons of all ages (including children) with high-risk conditions and persons aged >50 years who are hospitalized at any time during September-March should be offered and strongly encouraged to receive influenza vaccine before they are discharged if they have not already received the vaccine during that season.
In one study, 39%-46% of adult patients hospitalized during the winter with influenza-related diagnoses had been hospitalized during the preceding fall (270). Thus, the hospital serves as a setting in which persons at increased risk for subsequent hospitalization can be identified and vaccinated. However, vaccination of persons at high risk during or after their hospitalizations often does not occur. In a study of hospitalized Medicare patients, only 31.6% were vaccinated before admission, 1.9% during admission, and 10.6% after admission (271). Using standing orders in hospitals increases vaccination rates among hospitalized persons (272).

# Visiting Nurses and Others Providing Home Care to Persons at High Risk

Beginning in September, nursing-care plans should identify patients for whom vaccination is recommended, and vaccine should be administered in the home, if necessary. Caregivers and other persons in the household (including children) should be referred for vaccination.

# Other Facilities Providing Services to Persons Aged >50 Years

Beginning in October, facilities such as assisted living housing, retirement communities, and recreation centers should offer unvaccinated residents and attendees vaccination on-site before the start of the influenza season. Staff education should emphasize the need for influenza vaccine.

# Health-Care Workers

Beginning in October each year, health-care facilities should offer influenza vaccinations to all workers, including night and weekend staff. Particular emphasis should be placed on providing vaccinations to persons who care for members of groups at high risk. Efforts should be made to educate health-care workers regarding the benefits of vaccination and the potential health consequences of influenza illness for their patients, themselves, and their family members. All health-care workers should be provided convenient access to influenza vaccine at the work site, free of charge, as part of employee health programs (146,177,179).

# Future Directions for Research and Recommendations Related to Influenza Vaccine

The relatively low effectiveness of influenza vaccine administered to older adults highlights the need for more immunogenic influenza vaccines for the elderly (273) and the need for additional research to understand potential biases in estimating the benefits of vaccination among older adults in reducing hospitalizations and deaths (274-276). Additional studies of the relative cost-effectiveness and cost utility of influenza vaccination among children and adults, especially those aged <65 years, are needed and should be designed to account for year-to-year variations in influenza attack rates, illness severity, hospitalization costs and rates, and vaccine effectiveness (277). Additional data also are needed to quantify the benefits of influenza vaccination of health-care workers in protecting their patients (278). Furthermore, larger consortia of networks are needed to assess rare events that occur after vaccination, including GBS. ACIP continues to review new vaccination strategies to protect against influenza, including the possibility of expanding routine influenza vaccination recommendations toward universal vaccination or other approaches that will help greatly reduce or prevent the transmission of influenza (279-282). In addition, as noted by the National Vaccine Advisory Committee, strengthening the U.S.
influenza vaccination system will require improving vaccine financing, increasing demand, and implementing systems to help better understand the burden of influenza in the United States (283). Strategies to evaluate the effect of vaccination recommendations remain critical.

# Recommendations for Using Antiviral Agents for Influenza

Although annual vaccination is the primary strategy for preventing complications of influenza virus infections, antiviral medications with activity against influenza viruses can be effective for the chemoprophylaxis and treatment of influenza. Four licensed influenza antiviral agents are available in the United States: amantadine, rimantadine, zanamivir, and oseltamivir. Influenza A virus resistance to amantadine and rimantadine can emerge rapidly during treatment. On the basis of antiviral testing conducted at CDC and in Canada indicating high levels of resistance (23,24,284), ACIP recommends that neither amantadine nor rimantadine be used for the treatment or chemoprophylaxis of influenza A in the United States until susceptibility to these antiviral medications has been re-established among circulating influenza A viruses. Oseltamivir or zanamivir can be prescribed if antiviral treatment of influenza is indicated. Oseltamivir is approved for treatment of persons aged >1 year, and zanamivir is approved for treatment of persons aged >7 years. Oseltamivir and zanamivir can be used for chemoprophylaxis of influenza; oseltamivir is licensed for use in persons aged >1 year, and zanamivir is licensed for use in persons aged >5 years.

# Antiviral Agents for Influenza

Zanamivir and oseltamivir are chemically related antiviral drugs known as neuraminidase inhibitors that have activity against both influenza A and B viruses. Both zanamivir and oseltamivir were approved in 1999 for treatment of uncomplicated influenza virus infections. In 2000, oseltamivir was approved for chemoprophylaxis of influenza among persons aged >13 years and was approved for chemoprophylaxis of children aged >1 year in 2005. In 2006, zanamivir was approved for chemoprophylaxis of children aged >5 years. The two drugs differ in pharmacokinetics, side effects, routes of administration, approved age groups, dosages, and costs. An overview of the indications, use, administration, and known primary side effects of these medications is presented in the following sections. Package inserts should be consulted for additional information. Detailed information regarding amantadine and rimantadine is available in the previous publication of the ACIP influenza recommendations (285).

# Role of Laboratory Diagnosis

Appropriate treatment of patients with respiratory illness depends on accurate and timely diagnosis. Influenza surveillance information and diagnostic testing can aid clinical judgment and help guide treatment decisions. For example, early diagnosis of influenza can reduce the inappropriate use of antibiotics and provide the option of using antiviral therapy. However, because certain bacterial infections can produce symptoms similar to those of influenza, bacterial infections should be considered and appropriately treated, if suspected. In addition, bacterial infections can occur as a complication of influenza. The accuracy of clinical diagnosis of influenza on the basis of symptoms alone is limited because symptoms from illness caused by other pathogens can overlap considerably with influenza (33,42,43).
Because testing all patients who might have influenza is not feasible, influenza surveillance by state and local health departments and CDC can provide information regarding the presence of influenza viruses in the community. Surveillance also can identify the predominant circulating types, influenza A subtypes, and strains of influenza viruses. Diagnostic tests available for influenza include viral culture, serology, rapid antigen testing, polymerase chain reaction (PCR), and immunofluorescence assays (28). The sensitivity and specificity of any test for influenza can vary by the laboratory that performs the test, the type of test used, the type of specimen tested, and the timing of specimen collection. Among respiratory specimens for viral isolation or rapid detection, nasopharyngeal specimens are typically more effective than throat swab specimens (286). As with any diagnostic test, results should be evaluated in the context of other clinical and epidemiologic information available to health-care providers. Commercial rapid diagnostic tests are available that can detect influenza viruses within 30 minutes (28,287). Certain tests are approved for use in any outpatient setting, whereas others must be used in a moderately complex clinical laboratory. These rapid tests differ in the types of influenza viruses they can detect and in whether they can distinguish between influenza types. Different tests can detect 1) only influenza A viruses; 2) both influenza A and B viruses, but not distinguish between the two types; or 3) both influenza A and B viruses and distinguish between the two. None of the rapid tests provide any information regarding influenza A subtypes. The types of specimens acceptable for use (i.e., throat, nasopharyngeal, or nasal; and aspirates, swabs, or washes) also vary by test. The specificity and, in particular, the sensitivity of rapid tests are lower than those of viral culture and vary by test (288,289). Because of the lower sensitivity of the rapid tests, physicians should consider confirming negative tests with viral culture or other means because of the possibility of false-negative rapid test results, especially during periods of peak community influenza activity. In contrast, false-positive rapid test results are less likely but can occur during periods of low influenza activity. Therefore, when interpreting results of a rapid influenza test, physicians should consider the positive and negative predictive values of the test in the context of the level of influenza activity in their community. Package inserts and the laboratory performing the test should be consulted for more details regarding use of rapid diagnostic tests. Additional information concerning diagnostic testing is available at /labdiagnosis.htm. Despite the availability of rapid diagnostic tests, collecting clinical specimens for viral culture is critical, because only culture isolates can provide specific information regarding circulating strains and subtypes of influenza viruses. This information is needed to compare current circulating influenza strains with vaccine strains, to guide decisions regarding influenza treatment and chemoprophylaxis, and to formulate vaccine for the coming year. Virus isolates also are needed to monitor the emergence of antiviral resistance and the emergence of novel influenza A subtypes that might pose a pandemic threat.
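The dependence of predictive values on community influenza activity follows from Bayes' rule and can be illustrated numerically. A short Python sketch, using hypothetical test characteristics (70% sensitivity, 95% specificity) rather than values for any specific product:

```python
def predictive_values(sensitivity: float, specificity: float, prevalence: float):
    """Positive and negative predictive values for a rapid test,
    given the pretest probability (community influenza activity)."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    true_neg = specificity * (1 - prevalence)
    false_neg = (1 - sensitivity) * prevalence
    ppv = true_pos / (true_pos + false_pos)
    npv = true_neg / (true_neg + false_neg)
    return ppv, npv

# Hypothetical test characteristics; actual values vary by test.
for prev in (0.02, 0.30):  # low versus peak influenza activity
    ppv, npv = predictive_values(0.70, 0.95, prev)
    print(f"prevalence {prev:.0%}: PPV {ppv:.0%}, NPV {npv:.0%}")

# At 2% prevalence, PPV is low (~22%) and false positives dominate;
# at 30% prevalence, NPV falls (~88%), which is why negative results
# merit confirmation by culture during peak activity.
```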
# Antiviral Drug-Resistant Strains of Influenza Virus

CDC recently reported that 193 (92%) of 209 influenza A (H3N2) viruses isolated from patients in 26 states demonstrated a change at amino acid 31 in the M2 gene that confers resistance to adamantanes (23,24). In addition, two of eight influenza A (H1N1) viruses tested were resistant (24). Canadian health authorities also have reported the same mutation in a comparable proportion of isolates recently tested (284). Before these findings, previous screenings of epidemic strains of influenza A viruses had identified few amantadine- and rimantadine-resistant viruses (290-292). Viral resistance to adamantanes can emerge rapidly during treatment because a single point mutation at amino acid position 26, 27, 30, 31, or 34 of the M2 protein can confer cross-resistance to both amantadine and rimantadine (293,294). Drug-resistant viruses can emerge in approximately one third of patients when either amantadine or rimantadine is used for therapy (293,295,296). During the course of amantadine or rimantadine therapy, resistant influenza strains can replace susceptible strains within 2-3 days of starting therapy (290,297). Resistant viruses have been isolated from persons who live at home or in an institution in which other residents are taking or have taken amantadine or rimantadine as therapy (298,299); however, the frequency with which resistant viruses are transmitted and their effect on efforts to control influenza are unknown. Persons who have influenza A virus infection and who are treated with either amantadine or rimantadine can shed susceptible viruses early in the course of treatment and later shed drug-resistant viruses, including after 5-7 days of therapy (295). Resistance to zanamivir and oseltamivir can be induced in influenza A and B viruses in vitro (300-307), but induction of resistance usually requires multiple passages in cell culture. By contrast, resistance to amantadine and rimantadine in vitro can be induced with fewer passages in cell culture (308,309). Development of viral resistance to zanamivir and oseltamivir during treatment has been identified but does not appear to be frequent (310-314). In one pediatric study, 5.5% of patients treated with oseltamivir had posttreatment isolates that were resistant to neuraminidase inhibitors. One small study of Japanese children treated with oseltamivir reported a high frequency of resistant viruses (315). However, no transmission of neuraminidase inhibitor-resistant viruses in humans has been documented to date. No isolates with reduced susceptibility to zanamivir have been reported from clinical trials, although the number of posttreatment isolates tested is limited (316), and the risk for emergence of zanamivir-resistant isolates cannot be quantified (317). Only one clinical isolate with reduced susceptibility to zanamivir, obtained from an immunocompromised child on prolonged therapy, has been reported (312). Available diagnostic tests are not optimal for detecting clinical resistance to the neuraminidase inhibitor antiviral drugs, and additional tests are being developed (316,318). Postmarketing surveillance for neuraminidase inhibitor-resistant influenza viruses is being conducted (319).
# Indications for Use of Antivirals When Susceptibility Exists

# Treatment

When administered within 2 days of illness onset to otherwise healthy adults, zanamivir and oseltamivir can reduce the duration of uncomplicated influenza A and B illness by approximately 1 day compared with placebo (91,320-334). More clinical data are available concerning the efficacy of zanamivir and oseltamivir for treatment of influenza A virus infection than for treatment of influenza B virus infection (324,335-344). However, in vitro data and studies of treatment among mice and ferrets (345-352), in addition to clinical studies, have documented that zanamivir and oseltamivir have activity against influenza B viruses (310,317,325,329,353,354). Data are limited regarding the effectiveness of the antiviral agents in preventing serious influenza-related complications (e.g., bacterial or viral pneumonia or exacerbation of chronic diseases). Evidence for the effectiveness of these antiviral drugs is principally based on studies of patients with uncomplicated influenza (355). Data are limited concerning the effectiveness of zanamivir and oseltamivir for treatment of influenza among persons at high risk for serious complications of influenza (31,321,322,324,325,330-338). Among influenza virus-infected participants in 10 clinical trials, the risk for pneumonia among those participants receiving oseltamivir was approximately 50% lower than among those persons receiving a placebo (339). A similar significant reduction was also found for hospital admissions; in the small subset of high-risk participants, a 50% reduction was observed, although this reduction was not statistically significant. Fewer studies of the efficacy of influenza antivirals have been conducted among pediatric populations (295,322,328,329). One study of oseltamivir treatment documented a decreased incidence of otitis media among children (323). Inadequate data exist regarding the safety and efficacy of any of the influenza antiviral drugs for use among children aged <1 year (289). Initiation of antiviral treatment within 2 days of illness onset is recommended. The recommended duration of treatment with either zanamivir or oseltamivir is 5 days.

# Chemoprophylaxis

Chemoprophylactic drugs are not a substitute for vaccination, although they are critical adjuncts in preventing and controlling influenza. In community studies of healthy adults, oseltamivir and zanamivir were similarly effective in preventing febrile, laboratory-confirmed influenza illness (efficacy: zanamivir, 84%; oseltamivir, 82%) (324,340,356). Both antiviral agents also have been reported to prevent influenza illness among persons administered chemoprophylaxis after a household member had influenza diagnosed (341,353,356). Experience with chemoprophylactic use of these agents in institutional settings or among patients with chronic medical conditions is limited in comparison with the adamantanes (310,337,338,342-344). One 6-week study of oseltamivir chemoprophylaxis among nursing home residents reported a 92% reduction in influenza illness (310,357). Use of zanamivir has not been reported to impair the immunologic response to influenza vaccine (317,358). Data are not available regarding the efficacy of any of the four antiviral agents in preventing influenza among severely immunocompromised persons.
When determining the timing and duration for administering influenza antiviral medications for chemoprophylaxis, factors related to cost, compliance, and potential side effects should be considered. To be maximally effective as chemoprophylaxis, the drug must be taken each day for the duration of influenza activity in the community.

Persons at High Risk Who Are Vaccinated After Influenza Activity Has Begun. Persons at high risk for complications of influenza still can be vaccinated after an outbreak of influenza has begun in a community. However, development of antibodies in adults after vaccination takes approximately 2 weeks (265,266). When influenza vaccine is administered while influenza viruses are circulating, chemoprophylaxis should be considered for persons at high risk during the time from vaccination until immunity has developed. Children aged <9 years who receive influenza vaccine for the first time can require 6 weeks of chemoprophylaxis (i.e., chemoprophylaxis for 4 weeks after the first dose of vaccine and an additional 2 weeks of chemoprophylaxis after the second dose).

Persons Who Provide Care to Those at High Risk. To reduce the spread of virus to persons at high risk during community or institutional outbreaks, chemoprophylaxis during peak influenza activity can be considered for unvaccinated persons who have frequent contact with persons at high risk. Persons with frequent contact include employees of hospitals, clinics, and chronic-care facilities; household members; visiting nurses; and volunteer workers. If an outbreak is caused by a strain of influenza that might not be covered by the vaccine, chemoprophylaxis should be considered for all such persons, regardless of their vaccination status.

Persons Who Have Immune Deficiencies. Chemoprophylaxis can be considered for persons at high risk who are expected to have an inadequate antibody response to influenza vaccine. This category includes persons infected with HIV, chiefly those with advanced HIV disease. No published data are available concerning possible efficacy of chemoprophylaxis among persons with HIV infection or interactions with other drugs used to manage HIV infection. Such patients should be monitored closely if chemoprophylaxis is administered.

Other Persons. Chemoprophylaxis throughout the influenza season or during peak influenza activity might be appropriate for persons at high risk who should not be vaccinated. Chemoprophylaxis also can be offered to persons who wish to avoid influenza illness. Health-care providers and patients should make this decision on an individual basis.
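The vaccination-to-immunity windows described above amount to a simple rule of thumb. A minimal Python sketch; the helper name is hypothetical, and the 6-week figure assumes the 2-dose pediatric schedule described earlier:

```python
def bridging_chemoprophylaxis_weeks(age_years: float,
                                    first_influenza_vaccination: bool) -> int:
    """Approximate duration (weeks) of antiviral chemoprophylaxis when
    vaccine is administered while influenza viruses are circulating,
    per the guidance above (illustrative only)."""
    if age_years < 9 and first_influenza_vaccination:
        # 4 weeks after the first dose plus 2 weeks after the second dose.
        return 6
    # Development of antibodies in adults takes approximately 2 weeks.
    return 2
```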
# Control of Influenza Outbreaks in Institutions

Using antiviral drugs for treatment and chemoprophylaxis of influenza is a key component of influenza outbreak control in institutions. In addition to antiviral medications, other outbreak-control measures include instituting droplet precautions and establishing cohorts of patients with confirmed or suspected influenza, reoffering influenza vaccinations to unvaccinated staff and patients, restricting staff movement between wards or buildings, and restricting contact between ill staff or visitors and patients (359-361) (see Additional Information Regarding Influenza Virus Infection Control Among Specific Populations). The majority of published reports concerning use of antiviral agents to control influenza outbreaks in institutions are based on studies of influenza A outbreaks among nursing home populations that received amantadine or rimantadine (335,362-366). Less information is available concerning use of neuraminidase inhibitors in influenza A or B institutional outbreaks (337,338,344,357,367). When confirmed or suspected outbreaks of influenza occur in institutions that house persons at high risk, chemoprophylaxis should be started as early as possible to reduce the spread of the virus. In these situations, having preapproved orders from physicians or plans to obtain orders for antiviral medications on short notice can substantially expedite administration of antiviral medications. When outbreaks occur in institutions, chemoprophylaxis should be administered to all residents, regardless of whether they received influenza vaccinations during the previous fall, and should continue for a minimum of 2 weeks. If surveillance indicates that new cases continue to occur, chemoprophylaxis should be continued until approximately 1 week after the end of the outbreak. The dosage for each resident should be determined individually. Chemoprophylaxis also can be offered to unvaccinated staff members who provide care to persons at high risk. Chemoprophylaxis should be considered for all employees, regardless of their vaccination status, if the outbreak is suspected to be caused by a strain of influenza virus that is not well-matched to the vaccine. In addition to nursing homes, chemoprophylaxis also can be considered for controlling influenza outbreaks in other closed or semiclosed settings (e.g., dormitories or other settings in which persons live in close proximity). To limit the potential transmission of drug-resistant virus during outbreaks in institutions, whether in chronic- or acute-care settings or other closed settings, measures should be taken to reduce contact as much as possible between persons taking antiviral drugs for treatment and other persons, including those taking chemoprophylaxis (see Antiviral Drug-Resistant Strains of Influenza Virus).

# Dosage

Dosage recommendations vary by age group and medical conditions (Table 6).

# Children

Zanamivir. Zanamivir is approved for treatment of influenza among children aged >7 years. The recommended dosage of zanamivir for treatment of influenza is two inhalations (one 5-mg blister per inhalation, for a total dose of 10 mg) twice daily (approximately 12 hours apart); the chemoprophylaxis dosage of zanamivir for children aged >5 years is 10 mg (two inhalations) once a day (317).

Oseltamivir. Oseltamivir is approved for treatment and chemoprophylaxis among persons aged >1 year. Recommended treatment and chemoprophylaxis dosages of oseltamivir for children vary by the weight of the child. The recommended treatment dosage of oseltamivir for children who weigh <15 kg is 30 mg twice a day; for those weighing >15-23 kg, 45 mg twice a day; for those weighing >23-40 kg, 60 mg twice a day; and for children weighing >40 kg, 75 mg twice a day (310). The recommended chemoprophylaxis dosage of oseltamivir for children weighing <15 kg is 30 mg once a day; for those weighing >15-23 kg, 45 mg once a day; for those weighing >23-40 kg, 60 mg once a day; and for those weighing >40 kg, 75 mg once a day.
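The weight bands translate directly into a lookup. A minimal Python sketch of the per-dose amounts just listed (treatment is the dose twice a day; chemoprophylaxis is the same dose once a day); the helper name is hypothetical, and Table 6 and the package insert remain the authoritative sources:

```python
def oseltamivir_pediatric_dose_mg(weight_kg: float) -> int:
    """Per-dose oseltamivir amount from the weight bands above.
    Illustrative only; oseltamivir is approved for persons aged >1 year."""
    if weight_kg <= 15:
        return 30
    elif weight_kg <= 23:
        return 45
    elif weight_kg <= 40:
        return 60
    else:
        return 75
```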
# Pharmacokinetics

# Zanamivir

In studies of healthy volunteers, approximately 7%-21% of the orally inhaled zanamivir dose reached the lungs, and 70%-87% was deposited in the oropharynx (317,372). Approximately 4%-17% of the total amount of orally inhaled zanamivir is systemically absorbed. Systemically absorbed zanamivir has a half-life of 2.5-5.1 hours and is excreted unchanged in the urine. Unabsorbed drug is excreted in the feces (317,370).

# Oseltamivir

Approximately 80% of orally administered oseltamivir is absorbed systemically (371). Absorbed oseltamivir is metabolized to oseltamivir carboxylate, the active neuraminidase inhibitor, primarily by hepatic esterases. Oseltamivir carboxylate has a half-life of 6-10 hours and is excreted in the urine by glomerular filtration and tubular secretion via the anionic pathway (310,373). Unmetabolized oseltamivir also is excreted in the urine by glomerular filtration and tubular secretion (325).

# Side Effects and Adverse Reactions

When considering use of influenza antiviral medications (i.e., choice of antiviral drug, dosage, and duration of therapy), clinicians must consider the patient's age, weight, and renal function (Table 6); presence of other medical conditions; indications for use (i.e., chemoprophylaxis or treatment); and the potential for interaction with other medications.

# Zanamivir

In a study of zanamivir treatment of ILI among persons with asthma or chronic obstructive pulmonary disease in which study medication was administered after use of a β2-agonist, 13% of patients receiving zanamivir and 14% of patients who received placebo (inhaled powdered lactose vehicle) experienced a >20% decline in forced expiratory volume in 1 second (FEV1) after treatment (317,330). However, in a phase I study of persons with mild or moderate asthma who did not have ILI, one of 13 patients experienced bronchospasm after administration of zanamivir (317). In addition, during postmarketing surveillance, cases of respiratory function deterioration after inhalation of zanamivir have been reported. Certain patients had underlying airway disease (e.g., asthma or chronic obstructive pulmonary disease). Because of the risk for serious adverse events and because efficacy has not been demonstrated among this population, zanamivir is not recommended for treatment of patients with underlying airway disease (317). If physicians decide to prescribe zanamivir to patients with underlying chronic respiratory disease after carefully considering potential risks and benefits, the drug should be used with caution under conditions of appropriate monitoring and supportive care, including the availability of short-acting bronchodilators (355). Patients with asthma or chronic obstructive pulmonary disease who use zanamivir are advised to 1) have a fast-acting inhaled bronchodilator available when inhaling zanamivir and 2) stop using zanamivir and contact their physician if they experience difficulty breathing (317). No definitive evidence is available regarding the safety or efficacy of zanamivir for persons with underlying respiratory or cardiac disease or for persons with complications of acute influenza (355). Allergic reactions, including oropharyngeal or facial edema, also have been reported during postmarketing surveillance (317,337). In clinical treatment studies of persons with uncomplicated influenza, the frequencies of adverse events were similar for persons receiving inhaled zanamivir and for those receiving placebo (i.e., inhaled lactose vehicle alone) (320-325,337). The most common adverse events reported by both groups were diarrhea; nausea; sinusitis; nasal signs and symptoms; bronchitis; cough; headache; dizziness; and ear, nose, and throat infections. Each of these symptoms was reported by <5% of persons in the clinical treatment studies combined (317).
# Side Effects and Adverse Reactions

When considering use of influenza antiviral medications (i.e., choice of antiviral drug, dosage, and duration of therapy), clinicians must consider the patient's age, weight, and renal function (Table 6); presence of other medical conditions; indications for use (i.e., chemoprophylaxis or treatment); and the potential for interaction with other medications.

Zanamivir. In a study of zanamivir treatment of ILI among persons with asthma or chronic obstructive pulmonary disease in which study medication was administered after use of a β2-agonist, 13% of patients receiving zanamivir and 14% of patients who received placebo (inhaled powdered lactose vehicle) experienced a >20% decline in forced expiratory volume in 1 second (FEV1) after treatment (317,330). However, in a phase I study of persons with mild or moderate asthma who did not have ILI, one of 13 patients experienced bronchospasm after administration of zanamivir (317). In addition, during postmarketing surveillance, cases of respiratory function deterioration after inhalation of zanamivir have been reported. Certain patients had underlying airway disease (e.g., asthma or chronic obstructive pulmonary disease). Because of the risk for serious adverse events and because efficacy has not been demonstrated among this population, zanamivir is not recommended for treatment for patients with underlying airway disease (317). If physicians decide to prescribe zanamivir to patients with underlying chronic respiratory disease after carefully considering potential risks and benefits, the drug should be used with caution under conditions of appropriate monitoring and supportive care, including the availability of short-acting bronchodilators (355). Patients with asthma or chronic obstructive pulmonary disease who use zanamivir are advised to 1) have a fast-acting inhaled bronchodilator available when inhaling zanamivir and 2) stop using zanamivir and contact their physician if they experience difficulty breathing (317). No definitive evidence is available regarding the safety or efficacy of zanamivir for persons with underlying respiratory or cardiac disease or for persons with complications of acute influenza (355). Allergic reactions, including oropharyngeal or facial edema, also have been reported during postmarketing surveillance (317,337). In clinical treatment studies of persons with uncomplicated influenza, the frequencies of adverse events were similar for persons receiving inhaled zanamivir and for those receiving placebo (i.e., inhaled lactose vehicle alone) (320-325,337). The most common adverse events reported by both groups were diarrhea; nausea; sinusitis; nasal signs and symptoms; bronchitis; cough; headache; dizziness; and ear, nose, and throat infections. Each of these symptoms was reported by <5% of persons in the clinical treatment studies combined (317).

Oseltamivir. Nausea and vomiting were reported more frequently among adults receiving oseltamivir for treatment (nausea without vomiting, approximately 10%; vomiting, approximately 9%) than among persons receiving placebo (nausea without vomiting, approximately 6%; vomiting, approximately 3%) (310,326,327,374). Among children treated with oseltamivir, 14% had vomiting, compared with 8.5% of placebo recipients. Overall, 1% of children discontinued the drug secondary to this side effect (329), whereas a limited number of adults enrolled in clinical treatment trials of oseltamivir discontinued treatment because of these symptoms (310). Similar types and rates of adverse events were reported in studies of oseltamivir chemoprophylaxis (310). Nausea and vomiting might be less severe if oseltamivir is taken with food (310,317).

# Use During Pregnancy

No clinical studies have been conducted regarding the safety or efficacy of zanamivir or oseltamivir for pregnant women. Because of the unknown effects of influenza antiviral drugs on pregnant women and their fetuses, these two drugs should be used during pregnancy only if the potential benefit justifies the potential risk to the embryo or fetus. Oseltamivir and zanamivir are both "Pregnancy Category C" medications (see manufacturers' package inserts) (317,375).
# Drug Interactions

Clinical data are limited regarding drug interactions with zanamivir. However, no known drug interactions have been reported, and no clinically important drug interactions have been predicted on the basis of in vitro data and data from studies using rats (310,373). Limited clinical data are available regarding drug interactions with oseltamivir. Because oseltamivir and oseltamivir carboxylate are excreted in the urine by glomerular filtration and tubular secretion via the anionic pathway, a potential exists for interaction with other agents excreted by this pathway. For example, coadministration of oseltamivir and probenecid resulted in reduced clearance of oseltamivir carboxylate by approximately 50% and a corresponding approximate twofold increase in the plasma levels of oseltamivir carboxylate (304,367). No published data are available concerning the safety or efficacy of using combinations of any of these influenza antiviral drugs. For more detailed information concerning potential drug interactions for any of these influenza antiviral drugs, package inserts should be consulted.

# Information Regarding the Vaccines for Children Program

The Vaccines for Children (VFC) program supplies vaccine to all states, territories, and the District of Columbia for use by participating providers. These vaccines are to be administered to eligible children at no cost for the vaccine to either the patient or the provider. All routine childhood vaccines recommended by ACIP are available through this program. The program saves parents and providers out-of-pocket expenses for vaccine purchases and provides cost savings to states through the CDC vaccine contracts. The program results in lower vaccine prices and ensures that all states pay the same contract prices. Detailed information regarding the VFC program is available from CDC.

# Sources of Information Regarding Influenza and Its Surveillance

Information regarding influenza surveillance, prevention, detection, and control is available at http://www.cdc.gov/flu/weekly/fluactivity.htm. Surveillance information is available through the CDC Voice Information System (influenza update) at 888-232-3228 or the CDC Fax Information Service at 888-232-3299. During October-May, surveillance information is updated weekly. In addition, periodic updates regarding influenza are published in the MMWR Weekly Report (http://www.cdc.gov/mmwr).
Additional information regarding influenza vaccine can be obtained by calling 800-CDC-INFO (800-232-4636). State and local health departments should be consulted concerning availability of influenza vaccine, access to vaccination programs, information related to state or local influenza activity, reporting of influenza outbreaks and influenza-related pediatric deaths, and advice concerning outbreak control.

# Reporting of Adverse Events Following Vaccination

Clinically significant adverse events that follow vaccination should be reported through VAERS at http://www.vaers.hhs.gov or by calling the 24-hour national toll-free hotline at 800-822-7967.

# Additional Information Regarding Influenza Virus Infection Control Among Specific Populations

Each year, ACIP provides general, annually updated information regarding control and prevention of influenza. Other reports related to controlling and preventing influenza among specific populations (e.g., immunocompromised persons, health-care workers, hospital patients, pregnant women, children, and travelers) also are available in separate ACIP and CDC publications.
# Introduction

In the United States, epidemics of influenza typically occur during the winter months and have been associated with an average of approximately 36,000 deaths per year during 1990-1999 (1). Influenza viruses cause disease among all age groups (2)(3)(4). Rates of infection are highest among children, but rates of serious illness and death are highest among persons aged >65 years, children aged <2 years, and persons of any age who have medical conditions that place them at increased risk for complications from influenza (2,5-7).

Influenza vaccination is the primary method for preventing influenza and its severe complications. As indicated in this report from the Advisory Committee on Immunization Practices (ACIP), annual influenza vaccination is now recommended for the following groups (Box):
• persons at high risk for influenza-related complications and severe disease, including
- children aged 6-59 months,
- pregnant women,
- persons aged >50 years,
- persons of any age with certain chronic medical conditions; and
• persons who live with or care for persons at high risk, including
- household contacts who have frequent contact with persons at high risk and who can transmit influenza to those persons at high risk and
- health-care workers.

Vaccination might prevent hospitalization and death among persons at high risk and might also reduce influenza-related respiratory illnesses and physician visits among all age groups, prevent otitis media among children, and decrease work absenteeism among adults (8)(9)(10)(11)(12)(13)(14)(15)(16)(17)(18). Although influenza vaccination levels increased substantially during the 1990s, further improvements in vaccination coverage levels are needed, especially among persons aged <65 years with known risk factors for influenza complications; among blacks and Hispanics aged >65 years; among children aged 6-23 months; and among health-care workers. ACIP recommends using strategies to improve vaccination levels, including using reminder/recall systems and standing orders programs (19)(20)(21)(22). Although influenza vaccination remains the cornerstone for the control of influenza, information on antiviral medications also is presented in this report because these agents are an important adjunct to vaccine.

# Primary Changes and Updates in the Recommendations

The 2006 recommendations include six principal changes or updates:
• ACIP recommends that healthy children aged 24-59 months and their household contacts and out-of-home caregivers be vaccinated against influenza (see Target Groups for Vaccination). This change extends the recommendations for vaccination of children so that all children aged 6-59 months receive annual vaccination.
• ACIP emphasizes that all children aged 6 months-<9 years who have not been previously vaccinated at any time with either live, attenuated influenza vaccine (LAIV) or trivalent inactivated influenza vaccine (TIV) should receive 2 doses of vaccine. Those children aged 6 months-<9 years who receive TIV should have a booster dose of TIV administered >1 month after the initial dose, before the onset of influenza season, if possible. Those children aged 5-<9 years who receive LAIV should have a second dose of LAIV 6-10 weeks after the initial dose, before the influenza season, if possible.
If a child aged 6 months-<9 years received influenza vaccine for the first time during a previous season but did not receive a second dose of vaccine within the same season, only 1 dose of vaccine should be administered this season (see Efficacy and Effectiveness of Inactivated Influenza Vaccine, Children; TIV Dosage; and LAIV Dosage and Administration).
• To ensure optimal use of available doses of influenza vaccine, projected to be approximately 100 million doses, health-care providers, those planning organized campaigns, and state and local public health agencies should 1) develop plans for expanding outreach and infrastructure to vaccinate more persons than during the previous year and 2) develop contingency plans for the timing and prioritization of administering influenza vaccine, if the supply of vaccine is delayed and/or reduced because of the complexity of the production process (see Influenza Vaccine Supply and Timing of Annual Influenza Vaccination).
• ACIP emphasizes that influenza vaccine should continue to be offered throughout the influenza season even after influenza activity has been documented in a community. In addition, ACIP encourages all community vaccinators and public health agencies to schedule clinics that serve target groups and to help extend the routine vaccination season by offering at least one vaccination clinic in December (see Influenza Vaccine Supply and Timing of Annual Influenza Vaccination).
• ACIP recommends that neither amantadine nor rimantadine be used for the treatment or chemoprophylaxis of influenza A in the United States because of recent data indicating widespread resistance of influenza virus to these medications (23,24). Until susceptibility to adamantanes has been re-established among circulating influenza A viruses, oseltamivir or zanamivir may be prescribed if antiviral treatment or chemoprophylaxis of influenza is indicated (see Recommendations for Using Antiviral Agents for Influenza).

# BOX. Persons for whom annual vaccination is recommended

• Children aged 6-59 months;
• Women who will be pregnant during the influenza season;
• Persons aged >50 years;
• Children and adolescents (aged 6 months-18 years) who are receiving long-term aspirin therapy and, therefore, might be at risk for experiencing Reye syndrome after influenza infection;
• Adults and children who have chronic disorders of the pulmonary or cardiovascular systems, including asthma (hypertension is not considered a high-risk condition);
• Adults and children who have required regular medical follow-up or hospitalization during the preceding year because of chronic metabolic diseases (including diabetes mellitus), renal dysfunction, hemoglobinopathies, or immunodeficiency (including immunodeficiency caused by medications or by human immunodeficiency virus);
• Adults and children who have any condition (e.g., cognitive dysfunction, spinal cord injuries, seizure disorders, or other neuromuscular disorders) that can compromise respiratory function or the handling of respiratory secretions, or that can increase the risk for aspiration;
• Residents of nursing homes and other chronic-care facilities that house persons of any age who have chronic medical conditions;
• Persons who live with or care for persons at high risk for influenza-related complications, including healthy household contacts and caregivers of children aged 0-59 months; and
• Health-care workers.
# Influenza and Its Burden

# Biology of Influenza

Influenza A and B are the two types of influenza viruses that cause epidemic human disease (25). Influenza A viruses are further categorized into subtypes on the basis of two surface antigens: hemagglutinin and neuraminidase. Influenza B viruses are not categorized into subtypes. Since 1977, influenza A (H1N1) viruses, influenza A (H3N2) viruses, and influenza B viruses have circulated globally. In 2001, influenza A (H1N2) viruses that probably emerged after genetic reassortment between human A (H1N1) and A (H3N2) viruses began circulating widely. Both influenza A and B viruses are further separated into groups on the basis of antigenic characteristics. New influenza virus variants result from frequent antigenic change (i.e., antigenic drift) caused by point mutations that occur during viral replication. Influenza B viruses undergo antigenic drift less rapidly than influenza A viruses.

Immunity to the surface antigens, particularly the hemagglutinin, reduces the likelihood of infection and severity of disease if infection occurs (26). Antibody against one influenza virus type or subtype confers limited or no protection against another type or subtype of influenza. Furthermore, antibody to one antigenic variant of influenza virus might not completely protect against a new antigenic variant of the same type or subtype (27). Frequent development of antigenic variants through antigenic drift is the virologic basis for seasonal epidemics and the reason for the usual incorporation of one or more new strains in each year's influenza vaccine. More dramatic antigenic changes, or shifts, occur less frequently and can result in the emergence of a novel influenza virus with the potential to cause a pandemic.

# Clinical Signs and Symptoms of Influenza

Influenza viruses are spread from person to person, primarily through respiratory droplet transmission (e.g., when an infected person coughs or sneezes in close proximity to an uninfected person) (25). The typical incubation period for influenza is 1-4 days, with an average of 2 days (28). Adults can be infectious from the day before symptoms begin through approximately 5 days after illness onset. Children can be infectious for >10 days after the onset of symptoms, and young children also can shed virus before their illness onset. Severely immunocompromised persons can shed virus for weeks or months (29)(30)(31)(32).

Uncomplicated influenza illness is characterized by the abrupt onset of constitutional and respiratory signs and symptoms (e.g., fever, myalgia, headache, malaise, nonproductive cough, sore throat, and rhinitis) (33). Among children, otitis media, nausea, and vomiting also are commonly reported with influenza illness (34)(35)(36). Uncomplicated influenza illness typically resolves after 3-7 days for the majority of persons, although cough and malaise can persist for >2 weeks. However, among certain persons, influenza can exacerbate underlying medical conditions (e.g., pulmonary or cardiac disease), lead to secondary bacterial pneumonia or primary influenza viral pneumonia, or occur as part of a coinfection with other viral or bacterial pathogens (37). Young children with influenza virus infection can have initial symptoms mimicking bacterial sepsis with high fevers (37,38), and febrile seizures have been reported in up to 20% of children hospitalized with influenza virus infection (35,39).
Influenza virus infection also has been uncommonly associated with encephalopathy, transverse myelitis, myositis, myocarditis, pericarditis, and Reye syndrome (35,37,40,41).

Respiratory illnesses caused by influenza viruses are difficult to distinguish from illnesses caused by other respiratory pathogens on the basis of signs and symptoms alone (see Role of Laboratory Diagnosis). Reported sensitivities and specificities of clinical definitions of influenza infection that include fever and cough in studies primarily among adults have ranged from 63% to 78% and 55% to 71%, respectively, compared with viral culture (42,43). Sensitivity and predictive value of clinical definitions can vary, depending on the degree of co-circulation of other respiratory pathogens and the level of influenza activity (44). A study of older nonhospitalized patients determined that the presence of fever, cough, and acute onset had a positive predictive value of only 30% for influenza (45), whereas a study of hospitalized older patients with chronic cardiopulmonary disease determined that a combination of fever, cough, and illness of <7 days was 78% sensitive and 73% specific for influenza (46). A study of vaccinated older persons with chronic lung disease indicated that cough was not predictive of influenza virus infection, although having a fever or feverishness was 68% sensitive and 54% specific for influenza virus infection (47). These results highlight the challenges of identifying influenza illness in the absence of laboratory confirmation.
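The dependence of predictive value on influenza activity noted above follows directly from Bayes' rule. The sketch below is our arithmetic, using sensitivity and specificity near the middle of the reported ranges and illustrative prevalence values that are not taken from the cited studies; it shows why the same clinical definition performs very differently at the start versus the peak of a season.

```python
def positive_predictive_value(sens: float, spec: float, prevalence: float) -> float:
    """PPV = true positives / all positives, computed via Bayes' rule."""
    true_pos = sens * prevalence
    false_pos = (1.0 - spec) * (1.0 - prevalence)
    return true_pos / (true_pos + false_pos)

# Sensitivity/specificity chosen from the reported ranges (63%-78%, 55%-71%);
# the prevalence values below are illustrative assumptions only.
for prevalence in (0.05, 0.30):
    ppv = positive_predictive_value(sens=0.70, spec=0.65, prevalence=prevalence)
    print(f"influenza prevalence {prevalence:.0%}: PPV = {ppv:.0%}")
# ~10% PPV at 5% prevalence vs. ~46% PPV at 30% prevalence.
```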
# Hospitalizations and Deaths from Influenza

The risks for complications, hospitalizations, and deaths from influenza are higher among persons aged >65 years, young children, and persons of any age with certain underlying health conditions (see Persons at Increased Risk for Complications) than among healthy older children and younger adults (1,6,8,48-56). Estimated rates of influenza-associated hospitalizations have varied substantially by age group in studies conducted during different influenza epidemics (Table 1). Among children aged <5 years, hospitalization rates have ranged from approximately 500/100,000 children for those with high-risk medical conditions to 100/100,000 children for those without high-risk medical conditions (57)(58)(59)(60). Hospitalization rates among children aged <24 months are comparable to rates reported among persons aged >65 years (59,60) (Table 1).

During seasonal influenza epidemics from 1979-80 through 2000-01, the estimated overall number of influenza-associated hospitalizations in the United States ranged from approximately 54,000 to 430,000/epidemic. An average of approximately 226,000 influenza-related excess hospitalizations occurred per year, and 63% of all hospitalizations occurred among persons aged >65 years (61). Since the 1968 influenza A (H3N2) virus pandemic, the number of influenza-associated hospitalizations has generally been greater during seasonal influenza epidemics caused by type A (H3N2) viruses than during seasons in which other influenza virus types predominate (62).

Influenza-related deaths can result from pneumonia and from exacerbations of cardiopulmonary conditions and other chronic diseases. Deaths of adults aged >65 years account for >90% of deaths attributed to pneumonia and influenza (1,54). In one study, approximately 19,000 influenza-associated pulmonary and circulatory deaths per influenza season occurred during 1976-1990, compared with approximately 36,000 deaths during 1990-1999 (1). Estimated rates of influenza-associated pulmonary and circulatory deaths/100,000 persons were 0.4-0.6 among persons aged 0-49 years, 7.5 among persons aged 50-64 years, and 98.3 among persons aged >65 years. In the United States, the number of influenza-associated deaths has increased in part because the number of older persons is increasing, particularly persons aged >85 years (63). In addition, influenza seasons in which influenza A (H3N2) viruses predominate are associated with higher mortality (64); influenza A (H3N2) viruses predominated in 90% of influenza seasons during 1990-1999, compared with 57% of influenza seasons during 1976-1990 (1).

Deaths from influenza are uncommon among children both with and without high-risk conditions, but do occur (65,66). A study that modeled influenza-related deaths estimated that an average of 92 deaths (0.4 deaths per 100,000) occurred among children aged <5 years annually during the 1990s, compared with 32,651 deaths (98.3 per 100,000) among adults aged >65 years (1). Of 153 laboratory-confirmed influenza-related pediatric deaths reported from 40 states during the 2003-04 influenza season, 96 (63%) were among children aged <5 years. Sixty-four (70%) of the 92 children aged 2-17 years with influenza who died had no underlying medical condition previously associated with an increased risk for influenza-related complications (67).

# Options for Controlling Influenza

In the United States, the primary option for reducing the effect of influenza is through annual vaccination. Inactivated (i.e., killed virus) influenza vaccines and LAIV are licensed and available for use in the United States (see Recommendations for Using Inactivated and Live, Attenuated Influenza Vaccines). Vaccination coverage can be increased by administering vaccine to persons during hospitalizations or routine health-care visits, as well as at pharmacies, grocery stores, workplaces, or other locations in the community before the influenza season, therefore making special visits to physicians' offices or clinics unnecessary. Achieving increased vaccination rates among persons living in closed settings (e.g., nursing homes and other chronic-care facilities) and among staff can reduce the risk for outbreaks (13), especially when vaccine and circulating strains are well-matched. Vaccination of health-care workers and other persons in close contact with persons at increased risk for severe influenza illness also can reduce transmission of influenza and subsequent influenza-related complications. Antiviral drugs used for chemoprophylaxis or treatment of influenza are adjuncts to vaccine (see Recommendations for Using Antiviral Agents for Influenza) but are not substitutes for annual vaccination.

# Influenza Vaccine Composition

Both the inactivated and live, attenuated influenza vaccines prepared for the 2006-07 season will include A/New Caledonia/20/99-like (H1N1), A/Wisconsin/67/2005-like (H3N2), and B/Malaysia/2506/2004-like antigens. These viruses will be used because they are representative of influenza viruses that are anticipated to circulate in the United States during the 2006-07 influenza season and have favorable growth properties in eggs. Because circulating influenza A (H1N2) viruses are reassortants of influenza A (H1N1) and A (H3N2) viruses, antibodies directed against influenza A (H1N1) and influenza A (H3N2) vaccine strains should provide protection against the circulating influenza A (H1N2) viruses. Influenza viruses for both TIV and LAIV are initially grown in embryonated hens eggs and therefore might contain limited amounts of residual egg protein. Persons with a history of severe hypersensitivity, such as anaphylaxis, to eggs should not receive influenza vaccine.
For the inactivated vaccines, the vaccine viruses are made noninfectious (i.e., inactivated or killed) (68). Only subvirion and purified surface antigen preparations of the inactivated vaccine are available. Manufacturing processes vary by manufacturer. Manufacturers might use different compounds to inactivate influenza viruses and add antibiotics to prevent bacterial contamination. Package inserts should be consulted for additional information.

# Comparison of LAIV with Inactivated Influenza Vaccine

Both inactivated influenza vaccine and LAIV are available. Although both types of vaccines are effective, the vaccines differ in several aspects (Table 2).

# Major Similarities

Both LAIV and inactivated influenza vaccines contain strains of influenza viruses that are antigenically equivalent to the annually recommended strains: one influenza A (H3N2) virus, one A (H1N1) virus, and one B virus. Each year, one or more virus strains might be changed on the basis of global surveillance for influenza viruses and the emergence and spread of new strains. Viruses for both vaccines are grown in eggs. Both vaccines are administered annually to provide optimal protection against influenza virus infection (Table 2).

# Major Differences

Inactivated influenza vaccine contains killed viruses and thus cannot produce signs or symptoms of influenza virus infection. In contrast, LAIV contains live, attenuated viruses and, therefore, has a potential to produce mild signs or symptoms related to influenza virus infection. LAIV is administered intranasally by sprayer, whereas inactivated influenza vaccine is administered intramuscularly by injection. LAIV is more expensive than inactivated influenza vaccine, although the price differential between inactivated vaccine and LAIV has decreased for the 2006-07 season. LAIV is approved only for use among healthy persons aged 5-49 years; inactivated influenza vaccine is approved for use among persons aged >6 months, including those who are healthy and those with chronic medical conditions (Table 2).

# Efficacy and Effectiveness of Inactivated Influenza Vaccine

The effectiveness of inactivated influenza vaccine depends primarily on the age and immunocompetence of the vaccine recipient, the degree of similarity between the viruses in the vaccine and those in circulation, and the outcome being measured. Vaccine efficacy and effectiveness studies might have various endpoints, including the prevention of medically attended acute respiratory illness (MAARI), prevention of culture-positive influenza virus illness, prevention of influenza or pneumonia-associated hospitalizations or deaths, seroconversion to vaccine serotypes, or prevention of seroconversion to circulating influenza virus subtypes. High postvaccination hemagglutination inhibition antibody titers develop in the majority of vaccinated children and young adults (69)(70)(71). These antibodies are protective against illness caused by strains that are antigenically similar to those strains of the same type or subtype included in the vaccine (70)(71)(72)(73).

Children. Children aged >6 months usually acquire protective levels of anti-influenza antibody against specific influenza virus strains after influenza vaccination (69,70,74-79), although the antibody response among children at high risk for influenza-related complications might be lower than among healthy children (80,81).
A 2-year randomized study of children aged 6-24 months determined that 89% of children seroconverted to all three vaccine strains during both years (82). During year 1, among 411 children, vaccine efficacy was 66% (95% confidence interval [CI] = 34%-82%) against culture-confirmed influenza (attack rates: 5.5% and 15.9% among vaccine and placebo groups, respectively). During year 2, among 375 children, vaccine efficacy was -7% (CI = -247%-67%; attack rates: 3.6% and 3.3% among vaccine and placebo groups, respectively); the second year exhibited lower attack rates overall and was considered a mild season. In both years of this study, the vaccine strains were well-matched to the circulating influenza virus strains. A randomized study among children aged 1-15 years also demonstrated that inactivated influenza vaccine was 77% and 91% effective against influenza respiratory illness during H3N2 and H1N1 years, respectively (71). One study documented a vaccine efficacy of 56% against influenza illness among healthy children aged 3-9 years (83), and another study determined vaccine efficacy against influenza type B infection and influenza type A infection of 22%-54% and 60%-78% among children with asthma aged 2-6 years and 7-14 years, respectively (84). Two studies have documented that TIV decreases the incidence of influenza-associated otitis media among young children by approximately 30% (16,17), whereas a third study determined that vaccination did not reduce the burden of acute otitis media (82).

Effectiveness of One Dose versus Two Doses of Influenza Vaccine Among Previously Unvaccinated Children Aged <9 Years. Vaccine effectiveness is lower among previously unvaccinated children aged <9 years if they have received only 1 dose of influenza vaccine, compared with children who have received 2 doses. A retrospective study among approximately 5,000 children aged 6-23 months conducted during a year with a suboptimal vaccine match indicated vaccine effectiveness of 49% against medically attended, clinically diagnosed pneumonia or influenza among children who had received 2 doses of influenza vaccine. No effectiveness was demonstrated among children who had received only 1 dose of influenza vaccine, illustrating the importance of administering 2 doses of vaccine to previously unvaccinated children aged <9 years (85). Similar results were observed in a case-control study of children aged 6-59 months with laboratory-confirmed influenza (86). A study assessing protective antibody responses after 1 and 2 doses of vaccine among vaccine-naive children aged 5-8 years also demonstrated the importance of compliance with the 2-dose recommendation (87). When the vaccine antigens do not change from one season to the next, priming with a single dose of vaccine in the spring, followed by a dose in the fall, might result in antibody responses similar to those of a 2-dose regimen administered in the fall (88,89).
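The efficacy point estimates quoted in these studies follow the standard definition of vaccine efficacy, VE = 1 - (attack rate among vaccinees)/(attack rate among placebo recipients). The following worked check is ours, not part of the cited studies; published estimates are typically adjusted and can therefore differ slightly from this raw calculation.

```python
def vaccine_efficacy(attack_rate_vaccine: float, attack_rate_placebo: float) -> float:
    """VE = 1 - (attack rate among vaccinees / attack rate among placebo)."""
    return 1.0 - (attack_rate_vaccine / attack_rate_placebo)

# Year 1 of the 2-year pediatric trial above: attack rates 5.5% vs. 15.9%.
print(f"{vaccine_efficacy(0.055, 0.159):.0%}")  # ~65%; the trial reported 66%
# Year 2: attack rates 3.6% vs. 3.3% yield a slightly negative point estimate.
print(f"{vaccine_efficacy(0.036, 0.033):.0%}")  # ~-9%; the trial reported -7%
```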
Adults Aged <65 Years. When the vaccine and circulating viruses are antigenically similar, influenza vaccine typically prevents influenza illness among approximately 70%-90% of healthy adults aged <65 years (9,12,90,91). Vaccination of healthy adults also has resulted in decreased work absenteeism and decreased use of health-care resources, including use of antibiotics, when the vaccine and circulating viruses are well-matched (9-12,91,92). In a case-control study of adults aged 50-64 years with laboratory-confirmed influenza during the 2003-04 season, when the vaccine and circulating viruses were not well-matched, vaccine effectiveness was estimated to be 52% among healthy persons and 38% among those with one or more high-risk conditions (93).

Adults Aged >65 Years. An important benefit of the influenza vaccine is its ability to help prevent secondary complications and reduce the risk for influenza-related hospitalization and death among adults aged >65 years with and without high-risk medical conditions (e.g., heart disease and diabetes) (13-15,18,94,95). Older persons and persons with certain chronic diseases might have lower postvaccination antibody titers than healthy young adults and can remain susceptible to influenza virus infection and influenza-related upper respiratory tract illness (96)(97)(98). A randomized trial among noninstitutionalized persons aged >60 years reported a vaccine efficacy of 58% against influenza respiratory illness but indicated that efficacy might be lower among those aged >70 years (99). However, among older persons not living in nursing homes or similar chronic-care facilities, influenza vaccine is 30%-70% effective in preventing hospitalization for pneumonia and influenza (15,100). Among older persons who reside in nursing homes, influenza vaccine is most effective in preventing severe illness, secondary complications, and deaths. In this population, the vaccine can be 50%-60% effective in preventing influenza-related hospitalization or pneumonia and 80% effective in preventing influenza-related death, although the effectiveness in preventing influenza illness often ranges from 30% to 40% (101)(102)(103).

# Efficacy and Effectiveness of LAIV

The immunogenicity of the approved LAIV has been assessed in multiple studies (104)(105)(106)(107)(108)(109)(110), which included approximately 100 children aged 5-17 years and approximately 300 adults aged 18-49 years. LAIV virus strains replicate primarily in nasopharyngeal epithelial cells. The protective mechanisms induced by vaccination with LAIV are not completely understood but appear to involve both serum and nasal secretory antibodies. No single laboratory measurement closely correlates with protective immunity induced by LAIV.

Healthy Children. A randomized, double-blind, placebo-controlled trial among 1,602 healthy children initially aged 15-71 months assessed the efficacy of trivalent LAIV against culture-confirmed influenza during two seasons (111,112). This trial included subsets of 238 children; children who continued in the study remained in the same study group. In season one, when vaccine and circulating virus strains were well-matched, efficacy was 93% for participants who received 2 doses of LAIV. In season two, when the A (H3N2) component was not well-matched between vaccine and circulating virus strains, efficacy was 86% overall. The vaccine was 92% efficacious in preventing culture-confirmed influenza during the two-season study. Other results included a 27% reduction in febrile otitis media and a 28% reduction in otitis media with concomitant antibiotic use. Receipt of LAIV also resulted in 21% fewer febrile illnesses. A review of LAIV effectiveness in children aged 18 months-18 years found effectiveness against MAARI of 18% but greater estimated efficacy levels: 92% against influenza A (H1N1) and 66% against an influenza B drift variant (113).
Healthy Adults. A randomized, double-blind, placebo-controlled trial among 4,561 healthy working adults aged 18-64 years assessed multiple endpoints, including reductions in self-reported respiratory tract illness without laboratory confirmation, absenteeism, health-care visits, and medication use during peak and total influenza outbreak periods (114). The study was conducted during the 1997-98 influenza season, when the vaccine and circulating A (H3N2) strains were not well-matched. During peak outbreak periods, no difference in febrile illnesses between LAIV and placebo recipients was observed. However, vaccination was associated with reductions in severe febrile illnesses of 19% and febrile upper respiratory tract illnesses of 24%. Vaccination also was associated with fewer days of illness, fewer days of work lost, fewer days with health-care-provider visits, and reduced use of prescription antibiotics and over-the-counter medications. Among a subset of 3,637 healthy adults aged 18-49 years, LAIV recipients (n = 2,411) had 26% fewer febrile upper-respiratory illness episodes; 27% fewer lost work days as a result of febrile upper respiratory illness; and 18%-37% fewer days of health-care-provider visits caused by febrile illness, compared with placebo recipients (n = 1,226). Days of antibiotic use were reduced by 41%-45% in this age subset.

A randomized, double-blind, placebo-controlled challenge study among 92 healthy adults (LAIV, n = 29; placebo, n = 31; inactivated influenza vaccine, n = 32) aged 18-41 years assessed the efficacy of both LAIV and inactivated vaccine (115). The overall efficacy of LAIV and inactivated influenza vaccine in preventing laboratory-documented influenza from all three influenza strains combined was 85% and 71%, respectively, on the basis of experimental challenge by viruses to which study participants were susceptible before vaccination. The difference in efficacy between the two vaccines was not statistically significant.

# Cost-Effectiveness of Influenza Vaccine

Influenza vaccination can reduce both health-care costs and productivity losses associated with influenza illness. Studies of influenza vaccination of persons aged >65 years conducted in the United States have reported substantial reductions in hospitalizations and deaths and overall societal cost savings (15,100,104). Studies of adults aged <65 years have indicated that vaccination can reduce both direct medical costs and indirect costs from work absenteeism (8,10-12,91,116). Reductions of 13%-44% in health-care-provider visits, 18%-45% in lost workdays, 18%-28% in days working with reduced effectiveness, and 25% in antibiotic use for influenza-associated illnesses have been reported (10,12,117,118). One cost-effectiveness analysis estimated a cost of approximately $60-$4,000/illness averted among healthy persons aged 18-64 years, depending on the cost of vaccination, the influenza attack rate, and vaccine effectiveness against influenza-like illness (ILI) (91). Another cost-benefit economic study estimated an average annual savings of $13.66/person vaccinated (119). In the second study, 78% of all costs prevented were costs from lost work productivity, whereas the first study did not include productivity losses from influenza illness.
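Results such as these rest on expected-value arithmetic of the general form: net savings per vaccinee = (attack rate × vaccine effectiveness × cost per averted case) - cost of vaccination. The sketch below uses purely illustrative inputs (our assumptions; none of these numbers come from the cited studies) to show the structure of such a calculation.

```python
def net_savings_per_vaccinee(attack_rate: float, effectiveness: float,
                             cost_per_case: float, cost_of_vaccination: float) -> float:
    """Expected savings per person vaccinated under a simple expected-value model."""
    cases_averted = attack_rate * effectiveness  # expected cases averted per vaccinee
    return cases_averted * cost_per_case - cost_of_vaccination

# Illustrative inputs (assumptions, not the cited studies' parameters):
# 10% attack rate, 75% effectiveness, $250 per case (care plus lost work),
# and $10 per vaccination.
print(f"${net_savings_per_vaccinee(0.10, 0.75, 250.0, 10.0):.2f} per vaccinee")
# -> $8.75; a positive value indicates net savings, comparable in spirit to
#    (though not derived from) the $13.66/person estimate cited above.
```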
Economic studies specifically evaluating the cost-effectiveness of vaccinating persons aged 50-64 years are not available, and the number of studies that examine the economics of routinely vaccinating children with TIV or LAIV is limited (8,120-123). However, in a study of inactivated vaccine that included all age groups, cost utility (i.e., cost per year of healthy life gained) improved with increasing age and among those with chronic medical conditions (8). Among persons aged >65 years, vaccination resulted in a net savings per quality-adjusted life year (QALY) gained, whereas among younger age groups, vaccination resulted in costs of $23-$256/QALY.

In addition to estimating the economic cost associated with influenza disease, studies have assessed the public's perception of preventing influenza morbidity. Less than half of respondents to a survey on public perception of the value of preventing influenza morbidity reported that they would trade any time from their own life to prevent a case of uncomplicated influenza in a hypothetical child (124). When asked about their willingness to pay to prevent a hypothetical child from having an uncomplicated case of influenza, the median willingness-to-pay amount was $100 for a child aged 14 years and $175 for a child aged 1 year (124).

# Vaccination Coverage Levels

One of the national health objectives for 2010 is to achieve an influenza vaccination coverage level of 90% for persons aged >65 years (objective no. 14-29a) (125). Among persons aged >65 years, influenza vaccination levels increased from 33% in 1989 (126) to 66% in 1999 (127), surpassing the Healthy People 2000 objective of 60% (128). Vaccination coverage in this group reached the highest levels recorded (68%) during the 1999-00 influenza season. This estimate was made using the percentage of adults reporting influenza vaccination during the previous 12 months in the National Health Interview Survey (NHIS); the NHIS administered during the first and second quarters of each calendar year was used as a proxy measure of influenza vaccination coverage for the previous influenza season (127). Possible reasons for increases in influenza vaccination levels among persons aged >65 years include 1) greater acceptance of preventive medical services by practitioners; 2) increased delivery and administration of vaccine by health-care providers and sources other than physicians; 3) new information regarding influenza vaccine effectiveness, cost-effectiveness, and safety; and 4) initiation of Medicare reimbursement for influenza vaccination in 1993 (8,14,15,101,102,129,130). Since 1997, influenza vaccination levels have increased more slowly, with an average annual percentage increase of 4% from 1988-89 to 1996-97 versus 1% from 1996-97 to 1998-99. In 2000, a substantial delay in influenza vaccine availability and distribution, followed by a less severe delay in 2001, likely contributed to the lack of progress. However, the slowing of the increase in vaccination levels began before 2000 and is not fully understood.

Estimated national influenza vaccine coverage in 2004 among persons aged >65 years and 50-64 years was 65% and 36%, respectively, based on 2004 NHIS data (Table 3). The estimated vaccination coverage among adults with high-risk conditions aged 18-49 years and 50-64 years was 26% and 46%, respectively, substantially lower than the Healthy People 2000 and 2010 objective of 60% (125,128). Continued annual monitoring is needed to determine the effects of vaccine supply delays and shortages, changes in influenza vaccination recommendations and target groups for vaccination, reimbursement rates for vaccine and vaccine administration, and other factors related to vaccination coverage among adults and children.
New strategies to improve coverage will be needed to achieve the Healthy People 2010 objective (21,22). Reducing racial and ethnic health disparities, including disparities in vaccination coverage, is an overarching national goal (125). Although estimated influenza vaccination coverage for the 1999-00 season reached the highest levels recorded among older black, Hispanic, and white populations, vaccination levels among blacks and Hispanics continue to lag behind those among whites (127,131). Estimated vaccination coverage levels based on 2004 NHIS data among persons aged >65 years were 67% among non-Hispanic whites, 45% among non-Hispanic blacks, and 55% among Hispanics (CDC, unpublished data, 2006). Among Medicare beneficiaries, unequal access to care might not be the only factor contributing to these disparities in influenza vaccination; other key factors include patients who actively seek vaccination and providers who recommend vaccination (132,133). In 1997 and 1998, vaccination coverage estimates among nursing home residents were 64%-82% and 83%, respectively (134,135). The Healthy People 2010 goal is to achieve influenza vaccination of 90% among nursing home residents, an increase from the Healthy People 2000 goal of 80% (125,128).

Reported vaccination levels are low among children at increased risk for influenza complications. One study conducted among patients in health maintenance organizations (HMOs) documented influenza vaccination percentages ranging from 9% to 10% among children with asthma (136). A 25% vaccination level was reported among children with moderate-to-severe asthma who attended an allergy and immunology clinic (137). However, a study conducted in a pediatric clinic demonstrated an increase in the vaccination percentage of children with asthma or reactive airways disease from 5% to 32% after implementation of a reminder/recall system (138). One study documented 79% vaccination coverage among children attending a cystic fibrosis treatment center (139). According to 2004 National Immunization Survey data, during the second year in which vaccination of children aged 6-23 months was encouraged, 18% of children in this age group received one or more influenza vaccinations, and 8.4% of previously unvaccinated children received 2 doses (140). A rapid analysis of influenza vaccination coverage levels among members of an HMO in Northern California determined that in 2004-05, the first year of the recommendation for vaccination of children aged 6-23 months, their coverage level reached 57% (141). Data from the Behavioral Risk Factor Surveillance System (BRFSS) collected in February 2005 indicated a national estimate of 48% vaccination coverage for 1 or more doses among children aged 6-23 months and 35% coverage among children aged 2-17 years who had one or more high-risk medical conditions during the 2004-05 season (142). Increasing vaccination coverage among persons who have high-risk conditions and are aged <65 years, including children at high risk, is the highest priority for expanding influenza vaccine use. As has been observed for older adults, a physician recommendation for vaccination and the perception that having a child vaccinated "is a smart idea" were positively associated with the likelihood of vaccination of children aged 6-23 months (143).

Annual vaccination is recommended for health-care workers. Nonetheless, NHIS 2004 survey data indicated a vaccination coverage level of only 42% among health-care workers (CDC, unpublished data, 2006).
Vaccination of health-care workers has been associated with reduced work absenteeism (9) and fewer deaths among nursing home patients (144,145) and is a high priority for reducing the effect of influenza in health-care settings and for expanding influenza vaccine use (146,147).

Limited information is available regarding use of influenza vaccine among pregnant women. Among women aged 18-44 years without diabetes responding to the 2001 BRFSS, those who were pregnant were less likely to report influenza vaccination during the previous 12 months (13.7%) than those women who were not pregnant (16.8%); these differences were statistically significant (148). Only 13% of pregnant women reported vaccination according to 2004 NHIS data, excluding pregnant women who reported diabetes, heart disease, lung disease, and other selected high-risk conditions (CDC, unpublished data, 2006) (Table 3). These data indicate low compliance with the ACIP recommendations for pregnant women. In a study of influenza vaccine acceptance by pregnant women, 71% who were offered the vaccine chose to be vaccinated (149). However, a 1999 survey of obstetricians and gynecologists determined that only 39% administered influenza vaccine to obstetric patients, although 86% agreed that pregnant women's risk for influenza-related morbidity and mortality increases during the last two trimesters (150).

Data indicate that self-report of influenza vaccination among adults, compared with extraction from the medical record, is both a sensitive and specific source of information (151). Patient self-reports should be accepted as evidence of influenza vaccination in clinical practice (151). However, information on the validity of parents' reports of pediatric influenza vaccination is not yet available.

Footnotes to Table 3:
§ Persons categorized as being at high risk for influenza-related complications self-reported one or more of the following: 1) ever being told by a physician they had diabetes, emphysema, coronary heart disease, angina, heart attack, or other heart condition; 2) having a diagnosis of cancer during the previous 12 months (excluding nonmelanoma skin cancer) or ever being told by a physician they have lymphoma, leukemia, or blood cancer; 3) being told by a physician they have chronic bronchitis or weak or failing kidneys; or 4) reporting an asthma episode or attack during the preceding 12 months.
¶ Aged 18-44 years, pregnant at the time of the survey, and without high-risk conditions.
** Adults were classified as health-care workers if they were currently employed in a health-care occupation or in a health-care-industry setting, on the basis of standard occupation and industry categories recoded in groups by CDC's National Center for Health Statistics.
†† Interviewed adult in each household containing at least one of the following: a child aged <2 years, an adult aged >65 years, or any person aged 2-17 years at high risk (see § footnote above). To obtain information on household composition and high-risk status of household members, the sampled adult, child, and person files from NHIS were merged. Interviewed adults who were health-care workers or who had high-risk conditions were excluded. Information could not be assessed regarding the high-risk status of other adults aged 18-64 years in the household; thus, certain adults aged 18-64 years who live with an adult aged 18-64 years at high risk were not included in the analysis.
# Recommendations for Using Inactivated and Live, Attenuated Influenza Vaccines

The inactivated influenza vaccine and LAIV can be used to reduce the risk for influenza virus infection and its complications. TIV is Food and Drug Administration (FDA)-approved for persons aged >6 months, including those with high-risk conditions, whereas LAIV is approved only for use among healthy persons aged 5-49 years (see Inactivated Influenza Vaccine Recommendations; and Live, Attenuated Influenza Vaccine Recommendations).

# Target Groups for Vaccination

Annual influenza vaccination is recommended for the following groups:

# Persons at Increased Risk for Complications

Vaccination with inactivated influenza vaccine is recommended for the following persons who are at increased risk for severe complications from influenza:
• children aged 6-23 months;
• children and adolescents (aged 6 months-18 years) who are receiving long-term aspirin therapy and, therefore, might be at risk for experiencing Reye syndrome after influenza virus infection;
• women who will be pregnant during the influenza season;
• adults and children who have chronic disorders of the pulmonary or cardiovascular systems, including asthma (hypertension is not considered a high-risk condition);
• adults and children who have required regular medical follow-up or hospitalization during the preceding year because of chronic metabolic diseases (including diabetes mellitus), renal dysfunction, hemoglobinopathies, or immunodeficiency (including immunodeficiency caused by medications or by human immunodeficiency virus);
• adults and children who have any condition (e.g., cognitive dysfunction, spinal cord injuries, seizure disorders, or other neuromuscular disorders) that can compromise respiratory function or the handling of respiratory secretions, or that can increase the risk for aspiration;
• residents of nursing homes and other chronic-care facilities that house persons of any age who have chronic medical conditions; and
• persons aged >50 years.

# Healthy Young Children Aged 6-59 Months

Because children aged 6-23 months are at substantially increased risk for influenza-related hospitalizations and because children aged 24-59 months are at increased risk for influenza-related clinic and emergency department visits (152), ACIP recommends vaccination of children aged 6-59 months. The current LAIV and inactivated influenza vaccines are not approved by FDA for use among children aged <6 months, the pediatric group at greatest risk for influenza-related complications (58,153,154). Vaccination of their household contacts and out-of-home caregivers also is recommended because it might decrease the probability of influenza virus infection among these children.

Studies indicate that rates of hospitalization are higher among young children than older children when influenza viruses are in circulation (57,59-62,155-157). The increased rates of hospitalization are comparable with rates for other groups considered at high risk for influenza-related complications. However, the interpretation of these findings has been confounded by cocirculation of respiratory syncytial virus, which causes serious respiratory viral illness among children and frequently circulates during the same time as influenza viruses (158)(159)(160). One study assessed rates of influenza-associated hospitalizations among the entire U.S. population during 1979-2001 and calculated an average rate of approximately 108 hospitalizations per 100,000 person-years in children aged <5 years (48). Two studies have attempted to separate the effects of respiratory syncytial viruses and influenza viruses on rates of hospitalization among children who do not have high-risk conditions (58,59). Both studies indicated that otherwise healthy children aged <2 years, and possibly children aged 2-4 years, are at increased risk for influenza-related hospitalization compared with older healthy children (Table 1). Among the Tennessee Medicaid population during 1973-1993, healthy children aged 6 months-2 years had rates of influenza-associated hospitalization comparable with or higher than rates among children aged 3-14 years with high-risk conditions (58,60).
Another Tennessee study indicated a hospitalization rate per year of 3-4/1,000 healthy children aged <2 years for laboratory-confirmed influenza (36). The ability of providers to implement the recommendation to vaccinate all children aged 24-59 months during the 2006-07 season, the first year the recommendation will be in place, might vary depending upon vaccine supply (see Influenza Vaccine Supply and Timing of Annual Influenza Vaccination; and http://www.cdc.gov/nip/news/shortages/default.htm).

# Pregnant Women

Influenza-associated excess deaths among pregnant women were documented during the pandemics of 1918-19 and 1957-58 (51,161-163). Case reports and limited studies also indicate that pregnancy can increase the risk for serious medical complications of influenza (164)(165)(166)(167)(168)(169). One study of influenza vaccination of approximately 2,000 pregnant women demonstrated no adverse fetal effects associated with inactivated influenza vaccine (170); similar results were observed in a study of 252 pregnant women who received inactivated influenza vaccine within 6 months of delivery (171). No such data exist on the safety of LAIV when administered during pregnancy.

# Breastfeeding Mothers

TIV is safe for mothers who are breastfeeding and their infants. Because excretion of LAIV in human milk is unknown and because of the possibility of shedding vaccine virus given the close proximity of a nursing mother and her infant, caution should be exercised if LAIV is administered to nursing mothers. Breastfeeding does not adversely affect the immune response and is not a contraindication for vaccination.

# Persons Aged 50-64 Years

Vaccination is recommended for persons aged 50-64 years because this group has an increased prevalence of persons with high-risk conditions. In 2002, approximately 43.6 million persons in the United States were aged 50-64 years, of whom 13.5 million (34%) had one or more high-risk medical conditions (172). Influenza vaccine has been recommended for this entire age group to increase the low vaccination levels among persons in this group who have high-risk conditions (see Persons at Increased Risk for Complications). Age-based strategies are more successful in increasing vaccine coverage than patient-selection strategies based on medical conditions. Persons aged 50-64 years without high-risk conditions also receive benefit from vaccination in the form of decreased rates of influenza illness, decreased work absenteeism, and decreased need for medical visits and medication, including antibiotics (9-12). Furthermore, 50 years is an age when other preventive services begin and when routine assessment of vaccination and other preventive services has been recommended (173,174).

# Health-Care Workers and Other Persons Who Can Transmit Influenza to Those at High Risk

Persons who are clinically or asymptomatically infected can transmit influenza virus to persons at high risk for complications from influenza. Decreasing transmission of influenza from caregivers and household contacts to persons at high risk might reduce influenza-related deaths among persons at high risk. In two studies, vaccination of health-care workers was associated with decreased deaths among nursing home patients (144,145), and hospital-based influenza outbreaks frequently occur where unvaccinated health-care workers are employed.
Administration of LAIV has been demonstrated to reduce MAARI in contacts of vaccine recipients (175,176) and to reduce ILI-related economic and medical consequences (such as work days lost and number of health-care provider visits). In addition to health-care workers, additional groups that can transmit influenza to persons at high risk and that should be vaccinated include the following:
• employees of assisted living and other residences for persons in groups at high risk,
• persons who provide home care to persons in groups at high risk, and
• household contacts (including children) of persons in groups at high risk.

In addition, because children aged 0-23 months are at increased risk for influenza-related hospitalization (58-60), vaccination is recommended for their household contacts and out-of-home caregivers, particularly for contacts of children aged 0-5 months, because influenza vaccines have not been approved by FDA for use among children aged <6 months (see Healthy Young Children Aged 6-59 Months). Healthy persons aged 5-49 years in these groups who are not contacts of severely immunocompromised persons (see Live, Attenuated Influenza Vaccine Recommendations) can receive either LAIV or inactivated influenza vaccine. All other persons in this group should receive inactivated influenza vaccine.

All health-care workers should be vaccinated against influenza annually (147,177,178). Facilities that employ health-care workers are strongly encouraged to provide vaccine to workers by using approaches that maximize vaccination levels. An improvement in vaccination coverage levels might help to protect health-care workers, their patients, and communities; improve prevention of influenza-associated disease and patient safety; and reduce disease burden. Influenza vaccination levels among health-care workers should be regularly measured and reported. Although vaccination levels for health-care workers are typically <40%, with moderate effort, organized campaigns can attain higher levels of vaccination among this population (146,179). In 2005, seven states had legislation requiring annual influenza vaccination of health-care workers or the signing of an informed declination (147), and 15 states had regulations regarding vaccination of health-care workers in long-term-care facilities (180). Physicians, nurses, and other workers in both hospital and outpatient-care settings, including medical emergency-response workers (e.g., paramedics and emergency medical technicians), should be vaccinated, as should employees of nursing homes and chronic-care facilities who have contact with patients or residents.

# Persons Infected with HIV

Limited information is available regarding the frequency and severity of influenza illness or the benefits of influenza vaccination among persons with HIV infection (181,182). However, a retrospective study of young and middle-aged women enrolled in Tennessee's Medicaid program determined that the risk for cardiopulmonary hospitalizations among women with HIV infection was higher during influenza seasons than during the peri-influenza periods. The risk for hospitalization was higher for HIV-infected women than for women with other well-recognized high-risk conditions, including chronic heart and lung diseases (183).
Another study estimated that the risk for influenza-related death was 9.4-14.6/10,000 persons with acquired immunodeficiency syndrome (AIDS), compared with 0.09-0.10/10,000 among all persons aged 25-54 years and 6.4-7.0/10,000 among persons aged ≥65 years (184). Other reports indicate that influenza symptoms might be prolonged and the risk for complications from influenza increased for certain HIV-infected persons (185-187). Vaccination has been demonstrated to produce substantial antibody titers against influenza among vaccinated HIV-infected persons who have minimal AIDS-related symptoms and high CD4+ T-lymphocyte cell counts (188-191). A limited, randomized, placebo-controlled trial determined that inactivated influenza vaccine was highly effective in preventing symptomatic, laboratory-confirmed influenza virus infection among HIV-infected persons with a mean of 400 CD4+ T-lymphocyte cells/mm³; a limited number of persons with CD4+ T-lymphocyte cell counts of <200 cells/mm³ were included in that study (192). A nonrandomized study among HIV-infected persons determined that influenza vaccination was most effective among persons with >100 CD4+ cells and among those with <30,000 viral copies of HIV type-1/mL (187). Among persons who have advanced HIV disease and low CD4+ T-lymphocyte cell counts, inactivated influenza vaccine might not induce protective antibody titers (190,191); a second dose of vaccine does not improve the immune response in these persons (191,192). One case study determined that HIV RNA (ribonucleic acid) levels increased transiently in one HIV-infected person after influenza virus infection (193). Studies have demonstrated a transient (i.e., 2-4 week) increase in replication of HIV-1 in the plasma or peripheral blood mononuclear cells of HIV-infected persons after vaccine administration (190,194). Other studies using similar laboratory techniques have not documented a substantial increase in the replication of HIV (195-198). Deterioration of CD4+ T-lymphocyte cell counts or progression of HIV disease has not been demonstrated among HIV-infected persons after influenza vaccination compared with unvaccinated persons (191,199). Limited information is available concerning the effect of antiretroviral therapy on increases in HIV RNA levels after either natural influenza virus infection or influenza vaccination (181,200). Because influenza can result in serious illness and because vaccination with inactivated influenza vaccine might result in the production of protective antibody titers, vaccination might benefit HIV-infected persons, including HIV-infected pregnant women. Therefore, influenza vaccination is recommended.

# Travelers

The risk for exposure to influenza during travel depends on the time of year and destination. In the tropics, influenza can occur throughout the year. In the temperate regions of the Southern Hemisphere, the majority of influenza activity occurs during April-September. In temperate climate zones of the Northern and Southern Hemispheres, travelers also can be exposed to influenza during the summer, especially when traveling as part of large organized tourist groups (e.g., on cruise ships) that include persons from areas of the world where influenza viruses are circulating (201,202).
Persons at high risk for complications of influenza who were not vaccinated with influenza vaccine during the preceding fall or winter should consider receiving influenza vaccine before travel if they plan to
• travel to the tropics,
• travel with organized tourist groups at any time of year, or
• travel to the Southern Hemisphere during April-September.
No information is available regarding the benefits of revaccinating persons before summer travel who were already vaccinated during the preceding fall. Persons at high risk who received the previous season's vaccine before travel should be revaccinated with the current vaccine the following fall or winter. Persons aged ≥50 years and persons at high risk should consult with their health-care provider before embarking on travel during the summer to discuss the symptoms and risks for influenza and other travel-related diseases.

# General Population

In addition to the groups for which annual influenza vaccination is recommended, vaccination providers should administer influenza vaccine to any person who wishes to reduce the likelihood of becoming ill with influenza or transmitting influenza to others should they become infected (the vaccine can be administered to children aged ≥6 months), depending on vaccine availability (see Influenza Vaccine Supply and Timing of Annual Influenza Vaccination). A strategy of universal influenza vaccination is being assessed by ACIP. Persons who provide essential community services should be considered for vaccination to minimize disruption of essential activities during influenza outbreaks. Students or other persons in institutional settings (e.g., those who reside in dormitories) should be encouraged to receive vaccine to minimize the disruption of routine activities during epidemics (203).

# Inactivated Influenza Vaccine Recommendations

# TIV Dosage

Dosage recommendations vary according to age group (Table 4). Among previously unvaccinated children aged 6 months-<9 years, 2 doses of inactivated vaccine administered ≥1 month apart are recommended for eliciting satisfactory antibody responses (85-88). If possible, the second dose should be administered before the onset of influenza season. If a child aged 6 months-<9 years receiving influenza vaccine for the first time does not receive a second dose of vaccine within the same season, only 1 dose of vaccine should be administered the following season; 2 doses are not required at that time. ACIP does not recommend that a child receiving influenza vaccine for the first time be administered the first dose of vaccine in the spring as a priming dose for the following season (86,88). Among adults, studies have indicated limited or no improvement in antibody response when a second dose is administered during the same season (204-206). Even when the current influenza vaccine contains one or more antigens administered in previous years, annual vaccination with the current vaccine is necessary because immunity declines during the year after vaccination (207,208). Vaccine prepared for a previous influenza season should not be administered to provide protection for the current season (see Persons Who Should Not Be Vaccinated with Inactivated Influenza Vaccine).

# TIV Route

The intramuscular route is recommended for inactivated influenza vaccine. Adults and older children should be vaccinated in the deltoid muscle.
A needle length of ≥1 inch should be considered for these age groups because needles <1 inch might be of insufficient length to penetrate muscle tissue in certain adults and older children (209). Infants and young children should be vaccinated in the anterolateral aspect of the thigh (210). ACIP recommends a needle length of 7/8-1 inch for children aged <12 months for intramuscular vaccination into the anterolateral thigh. When injecting into the deltoid muscle among children with adequate deltoid muscle mass, a needle length of 7/8-1.25 inches is recommended (210).

# TIV Side Effects and Adverse Reactions

When educating patients regarding potential side effects, clinicians should emphasize that 1) inactivated influenza vaccine contains noninfectious killed viruses and cannot cause influenza, and 2) coincidental respiratory disease unrelated to influenza vaccination can occur after vaccination.

# TIV Local Reactions

In placebo-controlled studies among adults, the most frequent side effect of vaccination is soreness at the vaccination site (affecting 10%-64% of patients) that lasts <2 days (12,211-213). These local reactions typically are mild and rarely interfere with the person's ability to conduct usual daily activities. One blinded, randomized, cross-over study among 1,952 adults and children with asthma demonstrated that only body aches were reported more frequently after inactivated influenza vaccine (25.1%) than after placebo injection (20.8%) (214). One study reported that 20%-28% of children with asthma aged 9 months-18 years experienced local pain and swelling (81), and another study reported that 23% of children aged 6 months-4 years with chronic heart or lung disease had local reactions (76). A different study reported no difference in local reactions among 53 children aged 6 months-6 years with high-risk medical conditions or among 305 healthy children aged 3-12 years in a placebo-controlled trial of inactivated influenza vaccine (77). In a study of 12 children aged 5-32 months, no substantial local or systemic reactions were noted (215). These findings should be interpreted with caution given the small number of children studied.

# TIV Systemic Reactions

Fever, malaise, myalgia, and other systemic symptoms can occur after vaccination with inactivated vaccine and most often affect persons who have had no previous exposure to the influenza virus antigens in the vaccine (e.g., young children) (216,217). These reactions begin 6-12 hours after vaccination and can persist for 1-2 days. Placebo-controlled trials demonstrate that among older persons and healthy young adults, administration of split-virus influenza vaccine is not associated with higher rates of systemic symptoms (e.g., fever, malaise, myalgia, and headache) when compared with placebo injections (12,211-213). In a randomized cross-over study among both children and adults with asthma, no increase in asthma exacerbations was reported for either age group (214). An analysis of 215,600 children aged <18 years and 8,476 children aged 6-23 months enrolled in one of five HMOs reported no increase in biologically plausible medically attended events during the 2 weeks after inactivated influenza vaccination, compared with control periods 3-4 weeks before and after vaccination (218).
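The HMO analysis just described used a risk-interval design: event rates in a window after vaccination are compared with rates in nearby control windows in the same children. The following minimal sketch illustrates the rate-ratio arithmetic behind such a comparison; the counts and window lengths are hypothetical placeholders, not data from the cited study.

```python
# Illustrative rate-ratio calculation for a risk-interval design.
# All counts below are hypothetical; they are not data from study (218).

def incidence_rate_ratio(risk_events, risk_person_days,
                         control_events, control_person_days):
    """Rate in the post-vaccination risk window divided by the rate
    in the control windows; a ratio near 1.0 suggests no elevation."""
    risk_rate = risk_events / risk_person_days
    control_rate = control_events / control_person_days
    return risk_rate / control_rate

# Hypothetical example: 10,000 vaccinated children, a 14-day risk window,
# and 28 days of control time per child.
irr = incidence_rate_ratio(120, 14 * 10_000, 250, 28 * 10_000)
print(f"Incidence rate ratio: {irr:.2f}")  # 0.96, i.e., no apparent increase
```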
In a study of 791 healthy children (71), postvaccination fever was noted among 11.5% of children aged 1-5 years, among 4.6% of children aged 6-10 years, and among 5.1% of children aged 11-15 years. Among children with high-risk medical conditions, one study of 52 children aged 6 months-4 years indicated that 27% had fever and 25% had irritability and insomnia (76); another study among 33 children aged 6-18 months indicated that one child had irritability and one had a fever and seizure after vaccination (219). No placebo comparison group was used in these studies. A published review of Vaccine Adverse Event Reporting System (VAERS) reports of TIV in children aged 6-23 months documented that the most frequently reported adverse events were fever, rash, injection-site reactions, and seizures; the majority of the small total number of reported seizures appeared to be febrile (220). Because of the limitations of passive reporting systems, determining causality for specific types of adverse events, with the exception of injection-site reactions, is usually not possible using VAERS data alone. A population-based study of TIV safety in children aged 6-23 months who were vaccinated during 1993-1999 identified no vaccine-associated adverse events that had a plausible relationship to vaccination (221). Health-care professionals should promptly report to VAERS all clinically significant adverse events after influenza vaccination, even if the health-care professional is not certain that the vaccine caused the event. The Institute of Medicine has specifically recommended reporting of potential neurologic complications (e.g., demyelinating disorders such as Guillain-Barré syndrome [GBS]), although no evidence exists of a causal relation between influenza vaccine and neurologic disorders in children.
Immediate, presumably allergic, reactions (e.g., hives, angioedema, allergic asthma, and systemic anaphylaxis) rarely occur after influenza vaccination (222). These reactions probably result from hypersensitivity to certain vaccine components; the majority of reactions probably are caused by residual egg protein. Although current influenza vaccines contain only a limited quantity of egg protein, this protein can induce immediate hypersensitivity reactions among persons who have severe egg allergy. Persons who have had hives or swelling of the lips or tongue or who have experienced acute respiratory distress or collapse after eating eggs should consult a physician for appropriate evaluation to help determine whether vaccine should be administered. Persons who have documented immunoglobulin E (IgE)-mediated hypersensitivity to eggs, including those who have had occupational asthma or other allergic responses to egg protein, might also be at increased risk for allergic reactions to influenza vaccine, and consultation with a physician should be considered (223-225). Persons with a history of severe hypersensitivity (e.g., anaphylaxis) to eggs should not receive influenza vaccine. Theoretically, hypersensitivity reactions to any vaccine component can occur. Although exposure to vaccines containing thimerosal can lead to induction of hypersensitivity, the majority of patients do not have reactions to thimerosal when it is administered as a component of vaccines, even when patch or intradermal tests for thimerosal indicate hypersensitivity (226,227). When reported, hypersensitivity to thimerosal usually has consisted of local, delayed hypersensitivity reactions (226).
# GBS and TIV

The 1976 swine influenza vaccine was associated with an increased frequency of GBS (228,229). Among persons who received the swine influenza vaccine in 1976, the rate of GBS was <10 cases/1 million persons vaccinated. The risk for influenza vaccine-associated GBS was higher among persons aged ≥25 years than among persons aged <25 years (228). Evidence for a causal relation of GBS with subsequent vaccines prepared from other influenza viruses is unclear. Obtaining strong epidemiologic evidence for a possible limited increase in risk is difficult for a condition as rare as GBS, which has an estimated annual incidence of 10-20 cases/1 million adults (230). Investigations to date have not documented a substantial increase in GBS associated with influenza vaccines (other than the swine influenza vaccine in 1976) and suggest that, if influenza vaccine does pose a risk, it is probably slightly more than one additional case/1 million persons vaccinated. During three of four influenza seasons studied during 1977-1991, the overall relative risk estimates for GBS after influenza vaccination were slightly elevated but were not statistically significant in any of these studies (231-233). However, in a study of the 1992-93 and 1993-94 influenza seasons, the overall relative risk for GBS was 1.7 (CI = 1.0-2.8; p = 0.04) during the 6 weeks after vaccination, representing approximately 1 additional case of GBS/1 million persons vaccinated; the combined number of GBS cases peaked 2 weeks after vaccination (234). VAERS has documented decreased reporting of postinfluenza vaccine GBS across age groups, despite overall increased reporting of other, non-GBS conditions occurring after influenza vaccination (235). Cases of GBS after influenza infection have been reported, but no other epidemiologic studies have documented such an association (236,237). Substantial evidence exists that several infectious illnesses, most notably Campylobacter jejuni infection and upper respiratory tract infections, are associated with GBS (230,238-240).
Even if GBS were a true side effect of vaccination in the years other than 1976, the estimated risk for GBS of approximately 1 additional case/1 million persons vaccinated is substantially less than the risk for severe influenza, which can be prevented by vaccination among all age groups, especially persons aged ≥65 years and those who have medical indications for influenza vaccination (Table 1) (see Hospitalizations and Deaths from Influenza). The potential benefits of influenza vaccination in preventing serious illness, hospitalization, and death substantially outweigh the possible risks for experiencing vaccine-associated GBS. The average case-fatality ratio for GBS is 6% and increases with age (230,241). No evidence indicates that the case-fatality ratio for GBS differs between vaccinated persons and those not vaccinated. The incidence of GBS among the general population is low, but persons with a history of GBS have a substantially greater likelihood of subsequently experiencing GBS than persons without such a history (231,242). Thus, the likelihood of coincidentally experiencing GBS after influenza vaccination is expected to be greater among persons with a history of GBS than among persons with no history of this syndrome. Whether influenza vaccination specifically might increase the risk for recurrence of GBS is unknown.
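As a rough consistency check of the figure of approximately 1 additional case/1 million persons vaccinated, the background incidence and relative risk cited above can be combined as follows; the assumption that GBS incidence is uniform across the year is ours, made only for illustration.

```latex
% Back-of-envelope check, assuming the background GBS incidence of
% 10--20 cases per 1 million adults per year occurs uniformly in time:
\[
\text{baseline risk in a 6-week window} \approx \tfrac{6}{52}\times(10\text{--}20)
  \approx 1.2\text{--}2.3 \text{ cases per 1 million persons}
\]
\[
\text{excess risk} \approx (\mathrm{RR}-1)\times\text{baseline}
  \approx (1.7-1)\times(1.2\text{--}2.3)
  \approx 0.8\text{--}1.6 \text{ cases per 1 million vaccinated}
\]
```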
However, as a precaution, it is prudent to avoid vaccinating persons who are not at high risk for severe influenza complications and who are known to have experienced GBS within 6 weeks after a previous influenza vaccination. As an alternative, physicians might consider using influenza antiviral chemoprophylaxis for these persons. Although data are limited, for the majority of persons who have a history of GBS and who are at high risk for severe complications from influenza, the established benefits of influenza vaccination justify yearly vaccination.

# Thimerosal and Inactivated Influenza Vaccine

Thimerosal, a mercury-containing compound, has been used as a preservative in vaccines since the 1930s and is used in multidose vials of inactivated influenza vaccine to reduce the likelihood of bacterial contamination (243). Many of the single-dose syringes and vials of TIV are thimerosal-free or contain only trace amounts of thimerosal (Table 4). No scientific evidence indicates that thimerosal in vaccines, including influenza vaccines, leads to serious adverse events in vaccine recipients (244). However, in 1999, the U.S. Public Health Service and other organizations recommended that efforts be made to eliminate or reduce the thimerosal content in vaccines to decrease total mercury exposure, chiefly among infants (243-245). Since mid-2001, vaccines routinely recommended for infants in the United States have been manufactured either without thimerosal or with only trace amounts, resulting in a substantial reduction in the total mercury exposure from vaccines for children (210). Vaccines containing trace amounts of thimerosal have <1 mcg mercury/dose.
The risks for severe illness from influenza virus infection are elevated among both young children and pregnant women, and persons in both groups benefit from vaccination. In contrast, no scientifically conclusive evidence exists of harm from exposure to thimerosal preservative-containing vaccine. In fact, accumulating evidence supports the absence of any harm resulting from exposure to such vaccines (243,246-248). Therefore, the benefits of influenza vaccination outweigh the theoretical risk, if any, from thimerosal exposure through vaccination. Nonetheless, certain persons remain concerned regarding exposure to thimerosal. As of February 2006, six states had enacted legislation banning the administration of vaccines containing mercury; the provisions defining mercury content vary. These laws might present a barrier to vaccination until sufficient numbers of doses of influenza vaccines without thimerosal as a preservative or with only trace amounts are available. The U.S. vaccine supply for infants and pregnant women is in a period of transition; manufacturers are expanding the availability of thimerosal-reduced or thimerosal-free vaccine intended for these groups as a feasible means of reducing an infant's total exposure to mercury, because other environmental sources of exposure are more difficult or impossible to eliminate. Reductions in thimerosal in other vaccines have already been achieved and have resulted in substantially lowered cumulative exposure to thimerosal from vaccination among infants and children. For all of these reasons, persons for whom inactivated influenza vaccine is recommended may receive vaccine with or without thimerosal, depending on availability.
# Persons Who Should Not Be Vaccinated with Inactivated Influenza Vaccine

Inactivated influenza vaccine should not be administered to persons known to have anaphylactic hypersensitivity to eggs or to other components of the influenza vaccine without first consulting a physician (see Side Effects and Adverse Reactions). Chemoprophylactic use of antiviral agents is an option for preventing influenza among such persons. However, persons who have a history of anaphylactic hypersensitivity to vaccine components but who also are at high risk for complications from influenza can benefit from vaccine after appropriate allergy evaluation and desensitization. Information regarding vaccine components is located in package inserts from each manufacturer. Persons with moderate-to-severe acute febrile illness usually should not be vaccinated until their symptoms have abated. However, minor illnesses with or without fever do not contraindicate use of influenza vaccine, particularly among children with mild upper respiratory tract infection or allergic rhinitis.

# TIV and Use of Influenza Antiviral Medications

Because TIV contains only influenza virus subunits and no live virus, no contraindication exists to the coadministration of TIV and influenza antivirals (see sections on Chemoprophylaxis; and Control of Influenza Outbreaks in Institutions).

# Live, Attenuated Influenza Vaccine Recommendations

# Using LAIV

LAIV is an option for vaccination of healthy, nonpregnant persons aged 5-49 years who want to avoid influenza, including those who might be in close contact with persons at high risk for severe complications, such as health-care workers. During periods when inactivated vaccine is in short supply, use of LAIV is encouraged when feasible for eligible persons (including health-care workers) because use of LAIV by these persons might increase the availability of inactivated vaccine for persons in groups at high risk. Possible advantages of LAIV include its potential to induce a broad mucosal and systemic immune response, its ease of administration, and the acceptability of an intranasal rather than intramuscular route of administration.

# LAIV Dosage and Administration

LAIV is intended for intranasal administration only and should not be administered by the intramuscular, intradermal, or intravenous route. LAIV must be thawed before administration. This can be accomplished by holding an individual sprayer in the palm of the hand until thawed, with subsequent immediate administration. Alternatively, the vaccine can be thawed in a refrigerator and stored at 2°C-8°C for <60 hours before use. Vaccine should not be refrozen after thawing. LAIV is supplied in a prefilled single-use sprayer containing 0.5 mL of vaccine. Approximately 0.25 mL (i.e., half of the total sprayer contents) is sprayed into the first nostril while the recipient is in the upright position. An attached dose-divider clip is removed from the sprayer to administer the second half of the dose into the other nostril. If the vaccine recipient sneezes after administration, the dose should not be repeated. LAIV should be administered annually according to the following schedule:
• Children aged 5-<9 years previously unvaccinated at any time with either LAIV or inactivated influenza vaccine should receive 2 doses* of LAIV separated by 6-10 weeks; if possible, the second dose of vaccine should be administered before the onset of influenza season.
• Children aged 5-<9 years previously vaccinated at any time with either LAIV or inactivated influenza vaccine should receive 1 dose of LAIV; they do not require a second dose.
• Persons aged 9-49 years should receive 1 dose of LAIV.
LAIV can be administered to persons with minor acute illnesses (e.g., diarrhea or mild upper respiratory tract infection with or without fever). However, if clinical judgment indicates that nasal congestion is present that might impede delivery of the vaccine to the nasopharyngeal mucosa, deferral of administration should be considered until the illness has resolved. Whether concurrent administration of LAIV with other vaccines affects the safety or efficacy of either LAIV or the simultaneously administered vaccine is unknown. In the absence of specific data indicating interference, following the ACIP general recommendations for immunization is prudent (210). Inactivated vaccines do not interfere with the immune response to other inactivated vaccines or to live vaccines. Inactivated or live vaccines can be administered simultaneously with LAIV. However, after administration of a live vaccine, at least 4 weeks should pass before another live vaccine is administered (see Persons Who Should Not Be Vaccinated with LAIV).

# LAIV and Use of Influenza Antiviral Medications

The effect of coadministration of influenza antiviral medications on the safety and efficacy of LAIV has not been studied. However, because influenza antivirals reduce replication of influenza viruses, LAIV should not be administered until 48 hours after cessation of influenza antiviral therapy, and influenza antiviral medications should not be administered for 2 weeks after receipt of LAIV.

# LAIV Storage

LAIV must be stored at -15°C or colder. A manufacturer-supplied freezer box was formerly required for storage of LAIV in a frost-free freezer; however, the freezer box is now optional, and LAIV may be stored in frost-free freezers without using a freezer box. LAIV can be thawed in a refrigerator and stored at 2°C-8°C for <60 hours before use. It should not be refrozen after thawing because of decreased vaccine potency.

# Shedding, Transmission, and Stability of Vaccine Viruses

Available data indicate that both children and adults vaccinated with LAIV can shed vaccine viruses for >2 days after vaccination, although in lower titers than typically occur with shedding of wild-type influenza viruses. Shedding should not be equated with person-to-person transmission of vaccine viruses, although, in rare instances, shed vaccine viruses can be transmitted from vaccinees to nonvaccinated persons. One unpublished study in a child care center setting assessed transmissibility of vaccine viruses from 98 vaccinated children to 99 unvaccinated children, all aged 8-36 months. Eighty percent of vaccine recipients shed one or more virus strains, with a mean duration of 7.6 days (249). One vaccine-type influenza type B isolate was recovered from a placebo recipient and was confirmed to be vaccine-type virus. The type B isolate retained the cold-adapted, temperature-sensitive, attenuated phenotype, and it possessed the same genetic sequence as a virus shed from a vaccine recipient in the same children's play group. The placebo recipient from whom the influenza type B vaccine virus was isolated did not exhibit symptoms that were different from those experienced by vaccine recipients.
The estimated probability of acquiring vaccine virus after close contact with a single LAIV recipient in this child care population was 0.58%-2.4%. One study assessing shedding of vaccine viruses in 20 healthy vaccinated adults aged 18-49 years demonstrated that the majority of shedding occurred within the first 3 days after vaccination, although one participant was noted to shed virus on day 7 after vaccine receipt. No study participants shed vaccine viruses >10 days after vaccination. Duration or type of symptoms associated with receipt of LAIV did not correlate with duration of shedding of vaccine viruses. Person-to-person transmission of vaccine viruses was not assessed in this study (250). Another study assessing shedding of vaccine viruses in 14 healthy adults aged 18-49 years indicated that 50% of these adults had viral antigen detected by direct immunofluorescence or rapid antigen tests within 7 days of vaccination. The majority of viral shedding was detected on day 2 or 3. Person-to-person transmission of vaccine viruses was not assessed in this study (251).
In clinical trials, viruses shed by vaccine recipients have been phenotypically stable. In one study, nasal and throat swab specimens were collected from 17 study participants for 2 weeks after vaccine receipt (252). Virus isolates were analyzed by multiple genetic techniques. All isolates retained the LAIV genotype after replication in the human host, and all retained the cold-adapted and temperature-sensitive phenotypes. A study conducted in a child care setting demonstrated that limited genetic change occurred in the LAIV strains after replication in the vaccine recipients (253).

# LAIV Side Effects and Adverse Reactions

Twenty prelicensure clinical trials assessed the safety of the approved LAIV. In these combined studies, approximately 28,000 doses of the vaccine were administered to approximately 20,000 persons. A subset of these trials comprised randomized, placebo-controlled studies in which an estimated 4,000 healthy children aged 5-17 years and 2,000 healthy adults aged 18-49 years were vaccinated. The incidence of adverse events possibly complicating influenza (e.g., pneumonia, bronchitis, bronchiolitis, or central nervous system events) did not differ statistically between LAIV and placebo recipients aged 5-49 years. LAIV is made from attenuated viruses and does not cause influenza in vaccine recipients.
Children. In a subset of healthy children aged 60-71 months from one clinical trial (111,112), certain signs and symptoms were reported more often among LAIV recipients after the first dose (n = 214) than among placebo recipients (n = 95) (e.g., runny nose, 48.1% versus 44.2%; headache, 17.8% versus 11.6%; vomiting, 4.7% versus 3.2%; and myalgias, 6.1% versus 4.2%), but these differences were not statistically significant. In other trials, signs and symptoms reported after LAIV administration have included runny nose or nasal congestion (20%-75%), headache (2%-46%), fever (0%-26%), vomiting (3%-13%), abdominal pain (2%), and myalgias (0%-21%) (105,108,110,254-256). These symptoms were associated more often with the first dose and were self-limited. Data from a study of children aged 1-17 years indicated an increase in asthma or reactive airways disease in the subset aged 1-<5 years (257,258). Because of these data, LAIV is not approved for use among children aged <5 years. Another study was conducted among more than 11,000 children aged 18 months-18 years in which 18,780 doses of vaccine were administered over a 4-year period.
This study did not observe an increase in asthma visits 0-15 days after vaccination for children aged 18 months-4 years compared with the prevaccination period; however, a significant increase in asthma events was observed 15-42 days after vaccination, but only in vaccine year 1 (259).
Adults. Among adults, runny nose or nasal congestion (28%-78%), headache (16%-44%), and sore throat (15%-27%) have been reported more often among vaccine recipients than among placebo recipients (114,260,261). In one clinical trial (114) among a subset of healthy adults aged 18-49 years, signs and symptoms reported more frequently among LAIV recipients (n = 2,548) than among placebo recipients (n = 1,290) within 7 days after each dose included cough (13.9% versus 10.8%), runny nose (44.5% versus 27.1%), sore throat (27.8% versus 17.1%), chills (8.6% versus 6.0%), and tiredness/weakness (25.7% versus 21.6%).
Safety Among Groups at High Risk for Influenza-Related Morbidity. Until additional data are acquired and analyzed, persons at high risk for experiencing complications from influenza virus infection (e.g., immunocompromised patients; patients with asthma, cystic fibrosis, or chronic obstructive pulmonary disease; or persons aged ≥65 years) should not be vaccinated with LAIV. Protection from influenza among these groups should be accomplished using inactivated influenza vaccine.
Serious Adverse Events. Serious adverse events requiring medical attention among healthy children aged 5-17 years or healthy adults aged 18-49 years occurred at a rate of <1%. Surveillance will continue for adverse events that might not have been detected in previous studies. Reviews of reports to VAERS after vaccination of approximately 2,500,000 persons during the 2003-04 and 2004-05 influenza seasons did not reveal any substantial new safety concerns (262,263). Health-care professionals should promptly report all clinically significant adverse events after LAIV administration to VAERS, as recommended for inactivated influenza vaccine.

# Persons Who Should Not Be Vaccinated with LAIV

The following populations should not be vaccinated with LAIV:
• persons aged <5 years or ≥50 years;
• persons with asthma, reactive airways disease, or other chronic disorders of the pulmonary or cardiovascular systems; persons with other underlying medical conditions, including metabolic diseases such as diabetes, renal dysfunction, and hemoglobinopathies; or persons with known or suspected immunodeficiency diseases or who are receiving immunosuppressive therapies;
• children or adolescents receiving aspirin or other salicylates (because of the association of Reye syndrome with wild-type influenza virus infection);
• persons with a history of GBS;
• pregnant women; or
• persons with a history of hypersensitivity, including anaphylaxis, to any of the components of LAIV or to eggs.

# Vaccination of Close Contacts of Persons at High Risk for Complications from Influenza

Close contacts of persons at high risk for complications from influenza should receive influenza vaccine to reduce transmission of wild-type influenza viruses to persons at high risk. Use of inactivated influenza vaccine is preferred for vaccinating household members, health-care workers, and others who have close contact with severely immunocompromised persons (e.g., patients with hematopoietic stem cell transplants) during those periods in which the immunocompromised person requires care in a protective environment. The rationale for not using LAIV among health-care workers caring for such patients is the theoretical risk that a live, attenuated vaccine virus could be transmitted to the severely immunocompromised person. If a health-care worker receives LAIV, that worker should refrain from contact with severely immunocompromised patients for 7 days after vaccine receipt. Hospital visitors who have received LAIV should refrain from contact with severely immunocompromised persons for 7 days after vaccination; however, such persons need not be excluded from visitation of patients who are not severely immunocompromised.
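The guidance above reduces to a simple decision rule for choosing a formulation for close contacts of persons at high risk. The sketch below is our paraphrase of that rule, not an official algorithm; the function name and inputs are illustrative.

```python
# Illustrative decision rule for vaccinating a close contact of a person
# at high risk for influenza complications (a paraphrase of the guidance
# above; not an official ACIP algorithm).

def vaccine_for_close_contact(age: int, healthy: bool, pregnant: bool,
                              contact_of_severely_immunocompromised: bool) -> str:
    # Inactivated vaccine is preferred for contacts of severely
    # immunocompromised persons cared for in a protective environment,
    # because of the theoretical risk of transmitting live vaccine virus.
    if contact_of_severely_immunocompromised:
        return "TIV preferred"
    # Healthy, nonpregnant persons aged 5-49 years may receive either vaccine.
    if healthy and not pregnant and 5 <= age <= 49:
        return "LAIV or TIV"
    # All other close contacts should receive inactivated vaccine.
    return "TIV"

print(vaccine_for_close_contact(30, True, False, True))   # TIV preferred
print(vaccine_for_close_contact(35, True, False, False))  # LAIV or TIV
```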
ACIP has not indicated a preference for inactivated influenza vaccine use by health-care workers or other persons who have close contact with persons with lesser degrees of immunodeficiency (e.g., persons with diabetes, persons with asthma taking corticosteroids, or persons infected with HIV) or for inactivated influenza vaccine use by health-care workers or other healthy persons aged 5-49 years in close contact with all other groups at high risk.

# Personnel Who May Administer LAIV

Low-level introduction of vaccine viruses into the environment is likely unavoidable when administering LAIV. The risk for acquiring vaccine viruses from the environment is unknown but likely to be limited. Severely immunocompromised persons should not administer LAIV. However, other persons at high risk for influenza complications may administer LAIV, including persons with underlying medical conditions that place them at high risk or who are likely to be at increased risk, such as pregnant women, persons with asthma, and persons aged ≥50 years.

# Recommended Vaccines for Different Age Groups

When vaccinating children aged 6 months-3 years, health-care providers should use inactivated influenza vaccine that has been approved by FDA for this age group. Inactivated influenza vaccine from sanofi pasteur (Fluzone) is approved for use among persons aged ≥6 months. Inactivated influenza vaccine from Novartis, formerly Chiron (Fluvirin), is labeled in the United States for use among persons aged ≥4 years because data to demonstrate efficacy among younger persons have not been provided to FDA, whereas inactivated influenza vaccine from GlaxoSmithKline (FLUARIX) is labeled for use among persons aged ≥18 years. LAIV from MedImmune (FluMist) is approved for use by healthy persons aged 5-49 years (Table 4).

# Influenza Vaccine Supply and Timing of Annual Influenza Vaccination

The annual supply of influenza vaccine and the timing of its distribution cannot be guaranteed in any year. Influenza vaccine manufacturers currently project that approximately 100 million doses of influenza vaccine will be available in the United States for the 2006-07 influenza season, approximately 16% more doses than were available for the 2005-06 season. An additional 15-20 million doses might be available if a new vaccine is licensed in 2006. (Information about the status of licensure of new vaccines is available at http://aapredbook.aappublications.org/news/vaccstatus.pdf.) However, influenza vaccine distribution delays or vaccine shortages remain possible, in part because of the inherent critical time constraints in manufacturing the vaccine given the annual updating of the influenza vaccine strains. To ensure optimal use of available doses of influenza vaccine, health-care providers, those planning organized campaigns, and state and local public health agencies should 1) develop plans for expanding outreach and infrastructure to vaccinate more persons than last year and 2) develop contingency plans for the timing and prioritization of administering influenza vaccine if the supply of vaccine is delayed and/or reduced. CDC and other public health agencies will assess the vaccine supply on a continuing basis throughout the manufacturing period and will inform both providers and the general public if a substantial delay or an inadequate supply occurs. Because LAIV is approved for use in healthy persons aged 5-49 years, no recommendations exist for limiting the timing and prioritization of administering LAIV.
Administration of LAIV is encouraged as soon as it is available and throughout the season. If the supply of inactivated influenza vaccine is adequate and a sufficient number of doses will be available beginning in September, vaccination efforts should be structured to ensure the vaccination of as many persons as possible over the course of several months. Even if vaccine distribution begins in September, distribution probably will not be completed until December or January; therefore, the following recommendations reflect this phased distribution during the months of October, November, and December, and possibly later. The prioritized (tiered) use of influenza vaccine during inactivated influenza vaccine shortages applies only to the use of inactivated vaccine and not to LAIV. When feasible, during shortages of inactivated influenza vaccine, LAIV should be used preferentially for all healthy persons aged 5-49 years (including health-care workers) to increase the availability of inactivated vaccine for groups at high risk. The following section provides guidance regarding the timing of vaccination under two scenarios: 1) if the supply of inactivated influenza vaccine is adequate, and 2) if a reduced or delayed supply of inactivated vaccine occurs. Materials to assist providers are available at http://www.cdc.gov/flu/professionals/vaccination/index.htm (see also Travelers section).

# Vaccination Before October

To avoid missed opportunities for vaccination of persons at increased risk for serious complications and their household contacts (including out-of-home caregivers and household contacts of children aged 0-59 months), such persons should be offered vaccine beginning in September during routine health-care visits or during hospitalizations, if vaccine is available. However, in facilities housing older persons (e.g., nursing homes), vaccination before October typically should be avoided because antibody levels in such persons can begin to decline more rapidly after vaccination (264). If vaccine supplies are sufficient, vaccination of other persons also may begin before October. In addition, because children aged 6 months-<9 years who have not been previously vaccinated need 2 doses of vaccine, they should receive their first dose in September, if vaccine is available, so that both doses can be administered before the onset of influenza activity. For previously vaccinated children, only 1 dose is needed.

# Vaccination in October and November

The optimal time for vaccination efforts is usually during October-November. In October, vaccination in provider-based settings should start or continue for all patients, both high risk and healthy, and extend throughout November. Vaccination of children aged 6 months-<9 years who are receiving vaccine for the first time should also begin in October, if not done earlier, because those children need a booster dose 4-10 weeks after the initial dose, depending upon whether they are receiving inactivated influenza vaccine or LAIV. If supplies of inactivated influenza vaccine are not adequate, ACIP recommends that vaccine providers focus their vaccination efforts in October primarily on persons aged ≥50 years, persons aged <50 years at increased risk for influenza-related complications (including children aged 6-59 months), household contacts of persons at high risk (including out-of-home caregivers and household contacts of children aged 0-59 months), and health-care workers (178).
Efforts to vaccinate other persons who wish to decrease their risk for influenza virus infection should not begin until November; however, if such persons request vaccination in October, vaccination should not be deferred, unless vaccine supplies dictate otherwise.

# Vaccination in December and Later

When inactivated vaccine is delayed, a substantial proportion of doses often do not become available until December or later. Nevertheless, even when supply is not delayed or reduced, many persons who should receive influenza vaccine remain unvaccinated, as demonstrated by the relatively low vaccination coverage levels among persons in the defined priority groups (Table 3). Providers should routinely offer influenza vaccine throughout the influenza season, even after influenza activity has been documented in the community. In the United States, seasonal influenza activity can begin to increase as early as October or November, but influenza activity has not reached peak levels until late December-early March in the majority of recent seasons (Table 5). Although the timing of influenza activity can vary by region, vaccine administered after November is likely to be beneficial in the majority of influenza seasons. Adults have peak antibody protection against influenza virus infection 2 weeks after vaccination (265,266).

# Timing of Organized Vaccination Campaigns

Persons and institutions planning substantial organized vaccination campaigns (e.g., health departments, occupational health clinics, and community vaccinators) should consider scheduling these events no earlier than mid-October because the availability of vaccine in any location cannot be ensured consistently in early fall. Scheduling campaigns after mid-October will minimize the need for cancellations because vaccine is unavailable. These vaccination clinics should be scheduled through November, with attention to settings that serve children aged 6-59 months, pregnant women, other persons aged <50 years at increased risk for influenza-related complications, persons aged ≥50 years, health-care workers, and household contacts and out-of-home caregivers of persons at high risk (including children aged 0-59 months) to the extent feasible. Planners are encouraged to schedule at least one vaccination clinic in December. During a vaccine shortage or delay, substantial proportions of inactivated influenza vaccine doses may not be released until November and December or later. Beginning in November, vaccination campaigns can be broadened to include healthy persons who wish to reduce their risk for influenza virus infection. ACIP recommends that organizers schedule these vaccination clinics throughout November and December. When the vaccine is substantially delayed, agencies should consider offering vaccination clinics into January as long as vaccine supplies are available. Campaigns using LAIV are optimally conducted in October and November but can also extend into January.

# Strategies for Implementing Vaccination Recommendations in Health-Care Settings

Successful vaccination programs combine publicity and education for health-care workers and other potential vaccine recipients, a plan for identifying persons at high risk, use of reminder/recall systems, assessment of practice-level vaccination rates with feedback to staff, and efforts to remove administrative and financial barriers that prevent persons from receiving the vaccine, including use of standing orders programs (19,267).
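Two of the strategies named above, reminder/recall and practice-level rate assessment with feedback, amount to simple queries over a practice's patient roster. The sketch below is a hypothetical illustration; the record layout and field names are invented for the example.

```python
# Hypothetical illustration of reminder/recall and coverage assessment
# over a practice roster; the Patient fields are invented for this example.

from dataclasses import dataclass

@dataclass
class Patient:
    name: str
    high_risk: bool               # flagged per the target groups above
    vaccinated_this_season: bool

def recall_list(roster):
    """High-risk patients who still need a mail or telephone reminder."""
    return [p.name for p in roster if p.high_risk and not p.vaccinated_this_season]

def high_risk_coverage(roster):
    """Practice-level vaccination rate among high-risk patients,
    for feedback to staff."""
    eligible = [p for p in roster if p.high_risk]
    return sum(p.vaccinated_this_season for p in eligible) / len(eligible)

roster = [Patient("A", True, True), Patient("B", True, False),
          Patient("C", False, False)]
print(recall_list(roster))                  # ['B']
print(f"{high_risk_coverage(roster):.0%}")  # 50%
```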
Since October 2005, the Centers for Medicare and Medicaid Services (CMS) has required nursing homes participating in the Medicare and Medicaid programs to offer all residents influenza and pneumococcal vaccines and to document the results. According to the requirements, each resident is to be vaccinated unless it is medically contraindicated or the resident or his/her legal representative refuses vaccination. This information is to be reported as part of the CMS Minimum Data Set, which tracks nursing home health parameters (268).
The use of standing orders programs by long-term-care facilities (e.g., nursing homes and skilled nursing facilities), hospitals, and home health agencies might help to ensure the administration of recommended vaccinations for adults (269). Standing orders programs for both influenza and pneumococcal vaccination should be conducted under the supervision of a licensed practitioner according to a physician-approved facility or agency policy by health-care workers trained to screen patients for contraindications to vaccination, administer vaccine, and monitor for adverse events.
[Table 5: No. (%) of seasons with peak influenza activity, by month: 1 (3), 4 (13), 6 (20), 13 (43), 4 (13), 1 (3), 1 (3). The peak week of activity was defined as the week with the greatest percentage of respiratory specimens testing positive for influenza on the basis of a 3-week moving average. Laboratory data were provided by U.S. World Health Organization Collaborating Centers (CDC, unpublished data, 1976-2006).]
CMS has removed the physician signature requirement for the administration of influenza and pneumococcal vaccines to Medicare and Medicaid patients in hospitals, long-term-care facilities, and home health agencies (269). To the extent allowed by local and state law, these facilities and agencies may implement standing orders for influenza and pneumococcal vaccination of Medicare- and Medicaid-eligible patients. Other settings (e.g., outpatient facilities, managed care organizations, assisted living facilities, correctional facilities, pharmacies, and adult workplaces) are encouraged to introduce standing orders programs as well (20). In addition, physician reminders (e.g., flagging charts) and patient reminders are recognized strategies for increasing rates of influenza vaccination. Persons for whom influenza vaccine is recommended can be identified and vaccinated in the settings described in the following sections.

# Outpatient Facilities Providing Ongoing Care

Staff in facilities providing ongoing medical care (e.g., physicians' offices, public health clinics, employee health clinics, hemodialysis centers, hospital specialty-care clinics, and outpatient rehabilitation programs) should identify and label the medical records of patients who should receive vaccination. Vaccine should be offered during visits beginning in September (if vaccine is available) and throughout the influenza season. The offer of vaccination and its receipt or refusal should be documented in the medical record. Patients for whom vaccination is recommended and who do not have regularly scheduled visits during the fall should be reminded by mail, telephone, or other means of the need for vaccination.
# Outpatient Facilities Providing Episodic or Acute Care

Beginning each September, acute health-care facilities (e.g., emergency departments and walk-in clinics) should offer vaccinations to persons for whom vaccination is recommended or provide written information regarding why, where, and how to obtain the vaccine. This written information should be available in languages appropriate for the populations served by the facility.

# Nursing Homes and Other Residential Long-Term-Care Facilities

During October and November each year, vaccination should be routinely provided to all residents of chronic-care facilities with the concurrence of attending physicians. Consent for vaccination should be obtained from the resident or a family member at the time of admission to the facility or anytime afterwards. Ideally, all residents should be vaccinated at one time, before influenza season. Residents admitted through March after completion of the vaccination program at the facility should be vaccinated at the time of admission.

# Acute-Care Hospitals

Persons of all ages (including children) with high-risk conditions and persons aged ≥50 years who are hospitalized at any time during September-March should be offered and strongly encouraged to receive influenza vaccine before they are discharged if they have not already received the vaccine during that season. In one study, 39%-46% of adult patients hospitalized during the winter with influenza-related diagnoses had been hospitalized during the preceding fall (270). Thus, the hospital serves as a setting in which persons at increased risk for subsequent hospitalization can be identified and vaccinated. However, vaccination of persons at high risk during or after their hospitalizations often is not done. In a study of hospitalized Medicare patients, only 31.6% were vaccinated before admission, 1.9% during admission, and 10.6% after admission (271). Using standing orders in hospitals increases vaccination rates among hospitalized persons (272).

# Visiting Nurses and Others Providing Home Care to Persons at High Risk

Beginning in September, nursing-care plans should identify patients for whom vaccination is recommended, and vaccine should be administered in the home, if necessary. Caregivers and other persons in the household (including children) should be referred for vaccination.

# Other Facilities Providing Services to Persons Aged ≥50 Years

Beginning in October, facilities such as assisted-living housing, retirement communities, and recreation centers should offer unvaccinated residents and attendees vaccination on-site before the start of the influenza season. Staff education should emphasize the need for influenza vaccine.

# Health-Care Workers

Beginning in October each year, health-care facilities should offer influenza vaccinations to all workers, including night and weekend staff. Particular emphasis should be placed on providing vaccinations to persons who care for members of groups at high risk. Efforts should be made to educate health-care workers regarding the benefits of vaccination and the potential health consequences of influenza illness for their patients, themselves, and their family members. All health-care workers should be provided convenient access to influenza vaccine at the work site, free of charge, as part of employee health programs (146,177,179).
# Future Directions for Research and Recommendations Related to Influenza Vaccine

The relatively low effectiveness of influenza vaccine administered to older adults highlights the need for more immunogenic influenza vaccines for the elderly (273) and the need for additional research to understand potential biases in estimating the benefits of vaccination among older adults in reducing hospitalizations and deaths (274-276). Additional studies of the relative cost-effectiveness and cost utility of influenza vaccination among children and adults, especially those aged <65 years, are needed and should be designed to account for year-to-year variations in influenza attack rates, illness severity, hospitalization costs and rates, and vaccine effectiveness (277). Additional data also are needed to quantify the benefits of influenza vaccination of health-care workers in protecting their patients (278). Furthermore, larger consortia of research networks are needed that can assess rare events occurring after vaccination, including GBS. ACIP continues to review new vaccination strategies to protect against influenza, including the possibility of expanding routine influenza vaccination recommendations toward universal vaccination or other approaches that will help greatly reduce or prevent the transmission of influenza (279-282). In addition, as noted by the National Vaccine Advisory Committee, strengthening the U.S. influenza vaccination system will require improving vaccine financing, increasing demand, and implementing systems to help better understand the burden of influenza in the United States (283). Strategies to evaluate the effect of vaccination recommendations remain critical.

# Recommendations for Using Antiviral Agents for Influenza

Although annual vaccination is the primary strategy for preventing complications of influenza virus infections, antiviral medications with activity against influenza viruses can be effective for the chemoprophylaxis and treatment of influenza. Four licensed influenza antiviral agents are available in the United States: amantadine, rimantadine, zanamivir, and oseltamivir. Influenza A virus resistance to amantadine and rimantadine can emerge rapidly during treatment. On the basis of antiviral testing results conducted at CDC and in Canada indicating high levels of resistance (23,24,284), ACIP recommends that neither amantadine nor rimantadine be used for the treatment or chemoprophylaxis of influenza A in the United States until susceptibility to these antiviral medications has been re-established among circulating influenza A viruses. Oseltamivir or zanamivir can be prescribed if antiviral treatment of influenza is indicated. Oseltamivir is approved for treatment of persons aged ≥1 year, and zanamivir is approved for treatment of persons aged ≥7 years. Oseltamivir and zanamivir can be used for chemoprophylaxis of influenza; oseltamivir is licensed for use in persons aged ≥1 year, and zanamivir is licensed for use in persons aged ≥5 years.

# Antiviral Agents for Influenza

Zanamivir and oseltamivir are chemically related antiviral drugs known as neuraminidase inhibitors that have activity against both influenza A and B viruses. Both zanamivir and oseltamivir were approved in 1999 for treatment of uncomplicated influenza virus infections. In 2000, oseltamivir was approved for chemoprophylaxis of influenza among persons aged ≥13 years and was approved for chemoprophylaxis of children aged ≥1 year in 2005.
In 2006, zanamivir was approved for chemoprophylaxis of children aged ≥5 years. The two drugs differ in pharmacokinetics, side effects, routes of administration, approved age groups, dosages, and costs. An overview of the indications, use, administration, and known primary side effects of these medications is presented in the following sections. Package inserts should be consulted for additional information. Detailed information regarding amantadine and rimantadine is available in the previous publication of the ACIP influenza recommendations (285).

# Role of Laboratory Diagnosis

Appropriate treatment of patients with respiratory illness depends on accurate and timely diagnosis. Influenza surveillance information and diagnostic testing can aid clinical judgment and help guide treatment decisions. For example, early diagnosis of influenza can reduce the inappropriate use of antibiotics and provide the option of using antiviral therapy. However, because certain bacterial infections can produce symptoms similar to those of influenza, bacterial infections should be considered and appropriately treated, if suspected. In addition, bacterial infections can occur as a complication of influenza. The accuracy of clinical diagnosis of influenza on the basis of symptoms alone is limited because symptoms of illness caused by other pathogens can overlap considerably with those of influenza (33,42,43). Because testing all patients who might have influenza is not feasible, influenza surveillance by state and local health departments and CDC can provide information regarding the presence of influenza viruses in the community. Surveillance also can identify the predominant circulating types, influenza A subtypes, and strains of influenza viruses.
Diagnostic tests available for influenza include viral culture, serology, rapid antigen testing, polymerase chain reaction (PCR), and immunofluorescence assays (28). The sensitivity and specificity of any test for influenza can vary by the laboratory that performs the test, the type of test used, the type of specimen tested, and the timing of specimen collection. Among respiratory specimens for viral isolation or rapid detection, nasopharyngeal specimens are typically more effective than throat swab specimens (286). As with any diagnostic test, results should be evaluated in the context of other clinical and epidemiologic information available to health-care providers.
Commercial rapid diagnostic tests are available that can detect influenza viruses within 30 minutes (28,287). Some tests are approved for use in any outpatient setting, whereas others must be used in a moderately complex clinical laboratory. These rapid tests differ in the types of influenza viruses they can detect and in whether they can distinguish between influenza types. Different tests can detect 1) only influenza A viruses; 2) both influenza A and B viruses, but not distinguish between the two types; or 3) both influenza A and B viruses and distinguish between the two. None of the rapid tests provide any information regarding influenza A subtypes. The types of specimens acceptable for use (i.e., throat, nasopharyngeal, or nasal; and aspirates, swabs, or washes) also vary by test. The specificity and, in particular, the sensitivity of rapid tests are lower than those of viral culture and vary by test (288,289).
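Because the predictive value of a rapid test depends on how much influenza is circulating, a worked example may be useful. The sensitivity, specificity, and prevalence figures below are illustrative assumptions, not characteristics of any specific product.

```python
# Worked example: how predictive values of a rapid influenza test shift
# with disease prevalence. Test characteristics here are assumed values.

def predictive_values(sensitivity, specificity, prevalence):
    """Return (positive predictive value, negative predictive value)."""
    tp = sensitivity * prevalence              # true positives
    fp = (1 - specificity) * (1 - prevalence)  # false positives
    fn = (1 - sensitivity) * prevalence        # false negatives
    tn = specificity * (1 - prevalence)        # true negatives
    return tp / (tp + fp), tn / (tn + fn)

# Assume 70% sensitivity and 95% specificity.
for prevalence in (0.02, 0.30):                # low vs. peak influenza activity
    ppv, npv = predictive_values(0.70, 0.95, prevalence)
    print(f"prevalence {prevalence:.0%}: PPV {ppv:.0%}, NPV {npv:.0%}")
# At 2% prevalence, PPV is only ~22% (false positives dominate); at 30%
# prevalence, NPV drops to ~88% (false negatives become more common).
```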
Because of the lower sensitivity of the rapid tests, physicians should consider confirming negative test results with viral culture or other means, especially during periods of peak community influenza activity, when false-negative rapid test results are most likely. In contrast, false-positive rapid test results are less likely but can occur during periods of low influenza activity. Therefore, when interpreting results of a rapid influenza test, physicians should consider the positive and negative predictive values of the test in the context of the level of influenza activity in their community. Package inserts and the laboratory performing the test should be consulted for more details regarding use of rapid diagnostic tests. Additional information concerning diagnostic testing is available at http://www.cdc.gov/flu/professionals/labdiagnosis.htm. Despite the availability of rapid diagnostic tests, collecting clinical specimens for viral culture is critical because only culture isolates can provide specific information regarding circulating strains and subtypes of influenza viruses. This information is needed to compare current circulating influenza strains with vaccine strains, to guide decisions regarding influenza treatment and chemoprophylaxis, and to formulate vaccine for the coming year. Virus isolates also are needed to monitor the emergence of antiviral resistance and the emergence of novel influenza A subtypes that might pose a pandemic threat.

# Antiviral Drug-Resistant Strains of Influenza Virus

CDC recently reported that 193 (92%) of 209 influenza A (H3N2) viruses isolated from patients in 26 states demonstrated a change at amino acid 31 of the M2 protein that confers resistance to adamantanes (23,24). In addition, two of eight influenza A (H1N1) viruses tested were resistant (24). Canadian health authorities also have reported the same mutation in a comparable proportion of isolates recently tested (284). Before these findings, screenings of epidemic strains of influenza A viruses had found few amantadine- and rimantadine-resistant viruses (290-292). Viral resistance to adamantanes can emerge rapidly during treatment because a single point mutation at amino acid position 26, 27, 30, 31, or 34 of the M2 protein can confer cross-resistance to both amantadine and rimantadine (293,294). Drug-resistant viruses can emerge in approximately one third of patients when either amantadine or rimantadine is used for therapy (293,295,296). During the course of amantadine or rimantadine therapy, resistant influenza strains can replace susceptible strains within 2-3 days of starting therapy (290,297). Resistant viruses have been isolated from persons who live at home or in an institution in which other residents are taking or have taken amantadine or rimantadine as therapy (298,299); however, the frequency with which resistant viruses are transmitted and their effect on efforts to control influenza are unknown. Persons who have influenza A virus infection and who are treated with either amantadine or rimantadine can shed susceptible viruses early in the course of treatment and later shed drug-resistant viruses, including after 5-7 days of therapy (295). Resistance to zanamivir and oseltamivir can be induced in influenza A and B viruses in vitro (300-307), but induction of resistance usually requires multiple passages in cell culture. By contrast, resistance to amantadine and rimantadine in vitro can be induced with fewer passages in cell culture (308,309).
Development of viral resistance to zanamivir and oseltamivir during treatment has been identified but does not appear to be frequent (310-314). In one pediatric study, 5.5% of patients treated with oseltamivir had posttreatment isolates that were resistant to neuraminidase inhibitors. One small study of Japanese children treated with oseltamivir reported a high frequency of resistant viruses (315). However, no transmission of neuraminidase inhibitor-resistant viruses in humans has been documented to date. No isolates with reduced susceptibility to zanamivir have been reported from clinical trials, although the number of posttreatment isolates tested is limited (316), and the risk for emergence of zanamivir-resistant isolates cannot be quantified (317). Only one clinical isolate with reduced susceptibility to zanamivir, obtained from an immunocompromised child on prolonged therapy, has been reported (312). Available diagnostic tests are not optimal for detecting clinical resistance to the neuraminidase inhibitor antiviral drugs, and additional tests are being developed (316,318). Postmarketing surveillance for neuraminidase inhibitor-resistant influenza viruses is being conducted (319).

# Indications for Use of Antivirals When Susceptibility Exists

# Treatment

When administered within 2 days of illness onset to otherwise healthy adults, zanamivir and oseltamivir can reduce the duration of uncomplicated influenza A and B illness by approximately 1 day compared with placebo (91,320-334). More clinical data are available concerning the efficacy of zanamivir and oseltamivir for treatment of influenza A virus infection than for treatment of influenza B virus infection (324,335-344). However, in vitro data and studies of treatment among mice and ferrets (345-352), in addition to clinical studies, have documented that zanamivir and oseltamivir have activity against influenza B viruses (310,317,325,329,353,354). Data are limited regarding the effectiveness of the antiviral agents in preventing serious influenza-related complications (e.g., bacterial or viral pneumonia or exacerbation of chronic diseases). Evidence for the effectiveness of these antiviral drugs is principally based on studies of patients with uncomplicated influenza (355). Data are limited concerning the effectiveness of zanamivir and oseltamivir for treatment of influenza among persons at high risk for serious complications of influenza (31,321,322,324,325,330-338). Among influenza virus-infected participants in 10 clinical trials, the risk for pneumonia among those participants receiving oseltamivir was approximately 50% lower than among those persons receiving a placebo (339). A significant reduction also was found for hospital admissions overall; in the small subset of high-risk participants, a 50% reduction in admissions was observed, although this reduction was not statistically significant. Fewer studies of the efficacy of influenza antivirals have been conducted among pediatric populations (295,322,328,329). One study of oseltamivir treatment documented a decreased incidence of otitis media among children (323). Inadequate data exist regarding the safety and efficacy of any of the influenza antiviral drugs for use among children aged <1 year (289). Initiation of antiviral treatment within 2 days of illness onset is recommended, and the recommended duration of treatment with either zanamivir or oseltamivir is 5 days.
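As a compact restatement of these rules, the following sketch is illustrative only: the function and its simple input/output structure are our construction, and it deliberately omits clinical judgment and contraindications (e.g., zanamivir in underlying airway disease, renal dose adjustment). It encodes only the 2-day initiation window, the approved minimum ages, and the 5-day course stated above.

```python
# Illustrative sketch (not part of the recommendations): start antivirals
# within 2 days of illness onset, treat for 5 days, and select an
# age-appropriate agent (oseltamivir for ages >=1 year; zanamivir for
# ages >=7 years). Contraindications are intentionally not modeled.
from datetime import date, timedelta

def treatment_options(onset: date, today: date, age_years: float) -> dict:
    """Return eligible agents and the course end date under the stated rules."""
    within_window = (today - onset) <= timedelta(days=2)
    agents = []
    if age_years >= 1:
        agents.append("oseltamivir")
    if age_years >= 7:
        agents.append("zanamivir")
    return {
        "start_treatment": within_window and bool(agents),
        "eligible_agents": agents,
        "course_end": today + timedelta(days=5),  # 5-day course
    }

print(treatment_options(date(2007, 1, 10), date(2007, 1, 11), age_years=4))
# A 4-year-old seen 1 day after onset qualifies for oseltamivir only.
```

In practice, agent selection also depends on the contraindications and dosing considerations discussed under Dosage and under Side Effects and Adverse Reactions below.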
# Chemoprophylaxis

Chemoprophylactic drugs are not a substitute for vaccination, although they are critical adjuncts in preventing and controlling influenza. In community studies of healthy adults, both oseltamivir and zanamivir are similarly effective in preventing febrile, laboratory-confirmed influenza illness (efficacy: zanamivir, 84%; oseltamivir, 82%) (324,340,356). Both antiviral agents also have been reported to prevent influenza illness among persons administered chemoprophylaxis after a household member had influenza diagnosed (341,353,356). Experience with chemoprophylactic use of these agents in institutional settings or among patients with chronic medical conditions is limited in comparison with the adamantanes (310,337,338,342-344). One 6-week study of oseltamivir chemoprophylaxis among nursing home residents reported a 92% reduction in influenza illness (310,357). Use of zanamivir has not been reported to impair the immunologic response to influenza vaccine (317,358). Data are not available regarding the efficacy of any of the four antiviral agents in preventing influenza among severely immunocompromised persons. When determining the timing and duration for administering influenza antiviral medications for chemoprophylaxis, factors related to cost, compliance, and potential side effects should be considered. To be maximally effective as chemoprophylaxis, the drug must be taken each day for the duration of influenza activity in the community.

Persons at High Risk Who Are Vaccinated After Influenza Activity Has Begun. Persons at high risk for complications of influenza still can be vaccinated after an outbreak of influenza has begun in a community. However, development of antibodies in adults after vaccination takes approximately 2 weeks (265,266). When influenza vaccine is administered while influenza viruses are circulating, chemoprophylaxis should be considered for persons at high risk during the time from vaccination until immunity has developed. Children aged <9 years who receive influenza vaccine for the first time can require 6 weeks of chemoprophylaxis (i.e., chemoprophylaxis for 4 weeks after the first dose of vaccine and an additional 2 weeks of chemoprophylaxis after the second dose); a scheduling sketch at the end of this section illustrates these windows.

Persons Who Provide Care to Those at High Risk. To reduce the spread of virus to persons at high risk during community or institutional outbreaks, chemoprophylaxis during peak influenza activity can be considered for unvaccinated persons who have frequent contact with persons at high risk. Persons with frequent contact include employees of hospitals, clinics, and chronic-care facilities; household members; visiting nurses; and volunteer workers. If an outbreak is caused by a strain of influenza that might not be covered by the vaccine, chemoprophylaxis should be considered for all such persons, regardless of their vaccination status.

Persons Who Have Immune Deficiencies. Chemoprophylaxis can be considered for persons at high risk who are expected to have an inadequate antibody response to influenza vaccine. This category includes persons infected with HIV, chiefly those with advanced HIV disease. No published data are available concerning possible efficacy of chemoprophylaxis among persons with HIV infection or interactions with other drugs used to manage HIV infection. Such patients should be monitored closely if chemoprophylaxis is administered.

Other Persons. Chemoprophylaxis throughout the influenza season or during peak influenza activity might be appropriate for persons at high risk who should not be vaccinated. Chemoprophylaxis also can be offered to persons who wish to avoid influenza illness. Health-care providers and patients should make this decision on an individual basis.
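The scheduling sketch referenced above is illustrative only: the function is our construction, and it assumes, as one plausible reading of the schedule, that a first-time vaccinee aged <9 years receives the second vaccine dose 4 weeks after the first, so that the 4-week and 2-week coverage windows run consecutively.

```python
# Illustrative scheduling sketch (assumptions flagged inline): the text
# above states that adults develop protective antibodies ~2 weeks after
# vaccination, and that first-time vaccinees aged <9 years can require
# chemoprophylaxis for 4 weeks after dose 1 plus 2 weeks after dose 2.
from datetime import date, timedelta

def chemoprophylaxis_end(vaccination: date, age_years: float,
                         first_vaccination: bool) -> date:
    """Approximate end date for chemoprophylaxis begun at vaccination."""
    if age_years < 9 and first_vaccination:
        # Assumption: dose 2 is given 4 weeks after dose 1, so coverage
        # runs 4 weeks (dose 1) + 2 weeks (dose 2) = 6 weeks total.
        return vaccination + timedelta(weeks=6)
    return vaccination + timedelta(weeks=2)  # ~2 weeks to develop antibodies

print(chemoprophylaxis_end(date(2006, 12, 1), age_years=5, first_vaccination=True))
# 2007-01-12, i.e., 6 weeks after the first dose
```

Under these assumptions, an adult vaccinated while influenza is circulating needs roughly 2 weeks of coverage, and a first-time pediatric vaccinee roughly 6 weeks.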
# Control of Influenza Outbreaks in Institutions

Using antiviral drugs for treatment and chemoprophylaxis of influenza is a key component of influenza outbreak control in institutions. In addition to antiviral medications, other outbreak-control measures include instituting droplet precautions and establishing cohorts of patients with confirmed or suspected influenza, reoffering influenza vaccinations to unvaccinated staff and patients, restricting staff movement between wards or buildings, and restricting contact between ill staff or visitors and patients (359-361) (see Additional Information Regarding Influenza Virus Infection Control Among Specific Populations). The majority of published reports concerning use of antiviral agents to control influenza outbreaks in institutions are based on studies of influenza A outbreaks among nursing home populations that received amantadine or rimantadine (335,362-366). Less information is available concerning use of neuraminidase inhibitors in influenza A or B institutional outbreaks (337,338,344,357,367). When confirmed or suspected outbreaks of influenza occur in institutions that house persons at high risk, chemoprophylaxis should be started as early as possible to reduce the spread of the virus. In these situations, having preapproved orders from physicians or plans to obtain orders for antiviral medications on short notice can substantially expedite administration of antiviral medications. When outbreaks occur in institutions, chemoprophylaxis should be administered to all residents, regardless of whether they received influenza vaccinations during the previous fall, and should continue for a minimum of 2 weeks. If surveillance indicates that new cases continue to occur, chemoprophylaxis should be continued until approximately 1 week after the end of the outbreak. The dosage for each resident should be determined individually. Chemoprophylaxis also can be offered to unvaccinated staff members who provide care to persons at high risk. Chemoprophylaxis should be considered for all employees, regardless of their vaccination status, if the outbreak is suspected to be caused by a strain of influenza virus that is not well-matched to the vaccine. In addition to nursing homes, chemoprophylaxis also can be considered for controlling influenza outbreaks in other closed or semiclosed settings (e.g., dormitories or other settings in which persons live in close proximity). To limit the potential transmission of drug-resistant virus during outbreaks in institutions, whether in chronic- or acute-care settings or other closed settings, measures should be taken to reduce contact as much as possible between persons taking antiviral drugs for treatment and other persons, including those taking chemoprophylaxis (see Antiviral Drug-Resistant Strains of Influenza Virus).

# Dosage

Dosage recommendations vary by age group and medical conditions (Table 6).

# Children

Zanamivir. Zanamivir is approved for treatment of influenza among children aged ≥7 years. The recommended dosage of zanamivir for treatment of influenza is two inhalations (one 5-mg blister per inhalation for a total dose of 10 mg) twice daily (approximately 12 hours apart); the chemoprophylaxis dosage of zanamivir for children aged ≥5 years is 10 mg (two inhalations) once a day (317).

Oseltamivir. Oseltamivir is approved for treatment and chemoprophylaxis among persons aged ≥1 year. Recommended treatment and chemoprophylaxis dosages of oseltamivir for children vary by the weight of the child. The recommended treatment dosage of oseltamivir for children weighing ≤15 kg is 30 mg twice a day; for children weighing >15-23 kg, 45 mg twice a day; for those weighing >23-40 kg, 60 mg twice a day; and for children weighing >40 kg, 75 mg twice a day (310). The recommended chemoprophylaxis dosage of oseltamivir for children weighing ≤15 kg is 30 mg once a day; for those weighing >15-23 kg, 45 mg once a day; for those weighing >23-40 kg, 60 mg once a day; and for those weighing >40 kg, 75 mg once a day.
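The weight-banded pediatric oseltamivir dosing just described lends itself to a simple lookup. The sketch below mirrors the text above for illustration only; the function is our construction, and the package insert and Table 6 remain the authoritative sources.

```python
# Illustrative sketch of the weight-banded pediatric oseltamivir dosing
# described above (treatment: twice daily; chemoprophylaxis: once daily).
# For children >=1 year; consult the package insert for actual dosing.

def oseltamivir_unit_dose_mg(weight_kg: float) -> int:
    """Single oseltamivir dose (mg) by weight band."""
    if weight_kg <= 15:
        return 30
    elif weight_kg <= 23:
        return 45
    elif weight_kg <= 40:
        return 60
    else:
        return 75

for w in (12, 20, 30, 50):
    dose = oseltamivir_unit_dose_mg(w)
    print(f"{w} kg: {dose} mg twice daily (treatment), "
          f"{dose} mg once daily (chemoprophylaxis)")
```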
# Pharmacokinetics

# Zanamivir

In studies of healthy volunteers, approximately 7%-21% of the orally inhaled zanamivir dose reached the lungs, and 70%-87% was deposited in the oropharynx (317,372). Approximately 4%-17% of the total amount of orally inhaled zanamivir is systemically absorbed. Systemically absorbed zanamivir has a half-life of 2.5-5.1 hours and is excreted unchanged in the urine. Unabsorbed drug is excreted in the feces (317,370).

# Oseltamivir

Approximately 80% of orally administered oseltamivir is absorbed systemically (371). Absorbed oseltamivir is metabolized to oseltamivir carboxylate, the active neuraminidase inhibitor, primarily by hepatic esterases. Oseltamivir carboxylate has a half-life of 6-10 hours and is excreted in the urine by glomerular filtration and tubular secretion via the anionic pathway (310,373). Unmetabolized oseltamivir also is excreted in the urine by glomerular filtration and tubular secretion (325).

# Side Effects and Adverse Reactions

When considering use of influenza antiviral medications (i.e., choice of antiviral drug, dosage, and duration of therapy), clinicians must consider the patient's age, weight, and renal function (Table 6); presence of other medical conditions; indications for use (i.e., chemoprophylaxis or treatment); and the potential for interaction with other medications.

# Zanamivir

In a study of zanamivir treatment of ILI among persons with asthma or chronic obstructive pulmonary disease in which study medication was administered after use of a β2-agonist, 13% of patients receiving zanamivir and 14% of patients who received placebo (inhaled powdered lactose vehicle) experienced a >20% decline in forced expiratory volume in 1 second (FEV1) after treatment (317,330). However, in a phase I study of persons with mild or moderate asthma who did not have ILI, one of 13 patients experienced bronchospasm after administration of zanamivir (317). In addition, during postmarketing surveillance, cases of respiratory function deterioration after inhalation of zanamivir have been reported. Certain patients had underlying airway disease (e.g., asthma or chronic obstructive pulmonary disease). Because of the risk for serious adverse events and because efficacy has not been demonstrated among this population, zanamivir is not recommended for treatment of patients with underlying airway disease (317).
If physicians decide to prescribe zanamivir to patients with underlying chronic respiratory disease after carefully considering potential risks and benefits, the drug should be used with caution under conditions of appropriate monitoring and supportive care, including the availability of short-acting bronchodilators (355). Patients with asthma or chronic obstructive pulmonary disease who use zanamivir are advised to 1) have a fast-acting inhaled bronchodilator available when inhaling zanamivir and 2) stop using zanamivir and contact their physician if they experience difficulty breathing (317). No definitive evidence is available regarding the safety or efficacy of zanamivir for persons with underlying respiratory or cardiac disease or for persons with complications of acute influenza (355). Allergic reactions, including oropharyngeal or facial edema, also have been reported during postmarketing surveillance (317,337). In clinical treatment studies of persons with uncomplicated influenza, the frequencies of adverse events were similar for persons receiving inhaled zanamivir and for those receiving placebo (i.e., inhaled lactose vehicle alone) (320-325,337). The most common adverse events reported by both groups were diarrhea; nausea; sinusitis; nasal signs and symptoms; bronchitis; cough; headache; dizziness; and ear, nose, and throat infections. Each of these symptoms was reported by <5% of persons in the clinical treatment studies combined (317).

# Oseltamivir

Nausea and vomiting were reported more frequently among adults receiving oseltamivir for treatment (nausea without vomiting, approximately 10%; vomiting, approximately 9%) than among persons receiving placebo (nausea without vomiting, approximately 6%; vomiting, approximately 3%) (310,326,327,374). Among children treated with oseltamivir, 14% had vomiting, compared with 8.5% of placebo recipients. Overall, 1% of children discontinued the drug because of this side effect (329), whereas a limited number of adults who were enrolled in clinical treatment trials of oseltamivir discontinued treatment because of these symptoms (310). Similar types and rates of adverse events were reported in studies of oseltamivir chemoprophylaxis (310). Nausea and vomiting might be less severe if oseltamivir is taken with food (310,317).

# Use During Pregnancy

No clinical studies have been conducted regarding the safety or efficacy of zanamivir or oseltamivir for pregnant women. Because of the unknown effects of influenza antiviral drugs on pregnant women and their fetuses, these two drugs should be used during pregnancy only if the potential benefit justifies the potential risk to the embryo or fetus. Oseltamivir and zanamivir are both "Pregnancy Category C" medications (see manufacturers' package inserts) (317,375).

# Persons Aged ≥65 Years

Zanamivir and Oseltamivir. No reduction in dosage is recommended on the basis of age alone.

# Persons with Impaired Renal Function

Zanamivir. Limited data are available regarding the safety and efficacy of zanamivir for patients with impaired renal function. Among patients with renal failure who were administered a single intravenous dose of zanamivir, decreases in renal clearance, increases in half-life, and increased systemic exposure to zanamivir were observed (317,368). However, a limited number of healthy volunteers who received high doses of zanamivir intravenously tolerated systemic levels of zanamivir that were substantially higher than those resulting from administration of zanamivir by oral inhalation at the recommended dose (369,370). On the basis of these considerations, the manufacturer recommends no dose adjustment for inhaled zanamivir for a 5-day course of treatment for patients with either mild-to-moderate or severe impairment in renal function (317).

Oseltamivir. Serum concentrations of oseltamivir carboxylate, the active metabolite of oseltamivir, increase with declining renal function (310,371). For patients with creatinine clearance of 10-30 mL/min (310), a reduction of the treatment dosage of oseltamivir to 75 mg once daily and of the chemoprophylaxis dosage to 75 mg every other day is recommended. No treatment or chemoprophylaxis dosing recommendations are available for patients undergoing routine renal dialysis treatment.
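The renal adjustment for oseltamivir reduces to a simple decision rule. The sketch below is illustrative only: the function is our construction, and the standard adult regimen it encodes (75 mg twice daily for treatment, 75 mg once daily for chemoprophylaxis) is the usual labeled adult dosage stated here as an assumption rather than quoted from this report; real prescribing should follow Table 6 and the package insert.

```python
# Illustrative sketch (assumptions flagged): the renal adjustment stated
# above -- for creatinine clearance 10-30 mL/min, reduce oseltamivir
# treatment to 75 mg once daily and chemoprophylaxis to 75 mg every
# other day. The unadjusted adult regimen (75 mg twice daily treatment,
# 75 mg once daily chemoprophylaxis) is assumed from the product label.

def oseltamivir_adult_regimen(crcl_ml_min: float, indication: str) -> str:
    """Return a regimen string for 'treatment' or 'chemoprophylaxis'."""
    if crcl_ml_min > 30:
        return {"treatment": "75 mg twice daily",
                "chemoprophylaxis": "75 mg once daily"}[indication]
    if 10 <= crcl_ml_min <= 30:
        return {"treatment": "75 mg once daily",
                "chemoprophylaxis": "75 mg every other day"}[indication]
    # No recommendation exists for patients on routine renal dialysis.
    return "no dosing recommendation available (consult a specialist)"

print(oseltamivir_adult_regimen(20, "treatment"))  # 75 mg once daily
```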
# Persons with Liver Disease

Zanamivir and Oseltamivir. Neither of these medications has been studied among persons with hepatic dysfunction.

# Persons with Seizure Disorders

Zanamivir and Oseltamivir. Seizure events have been reported during postmarketing use of zanamivir and oseltamivir, although no epidemiologic studies have reported any increased risk for seizures with either zanamivir or oseltamivir use.

# Route

Oseltamivir is administered orally in capsule or oral suspension form. Zanamivir is available as a dry powder that is self-administered via oral inhalation by using a plastic device included in the package with the medication. Patients will benefit from instruction and demonstration of correct use of this device. This information is based on data published by the Food and Drug Administration (FDA), which is available at http://www.fda.gov.

Footnotes to Table 6:
* Zanamivir is administered through oral inhalation by using a plastic device included in the medication package. Patients will benefit from instruction and demonstration of the correct use of the device. Zanamivir is not recommended for those persons with underlying airway disease.
† Not applicable.
§ A reduction in the dose of oseltamivir is recommended for persons with creatinine clearance <30 mL/min.
¶ The recommended treatment dosage of oseltamivir for children weighing ≤15 kg is 30 mg twice a day; for children weighing >15-23 kg, the dose is 45 mg twice a day; for children weighing >23-40 kg, the dose is 60 mg twice a day; and for children weighing >40 kg, the dose is 75 mg twice a day.
** The recommended chemoprophylaxis dosage of oseltamivir for children weighing ≤15 kg is 30 mg once a day; for children weighing >15-23 kg, the dose is 45 mg once a day; for children weighing >23-40 kg, the dose is 60 mg once a day; and for children weighing >40 kg, the dose is 75 mg once a day.

# Drug Interactions

Clinical data are limited regarding drug interactions with zanamivir. However, no known drug interactions have been reported, and no clinically critical drug interactions have been predicted on the basis of in vitro data and data from studies using rats (310,373). Limited clinical data are available regarding drug interactions with oseltamivir. Because oseltamivir and oseltamivir carboxylate are excreted in the urine by glomerular filtration and tubular secretion via the anionic pathway, a potential exists for interaction with other agents excreted by this pathway.
For example, coadministration of oseltamivir and probenecid resulted in reduced clearance of oseltamivir carboxylate by approximately 50% and a corresponding approximate twofold increase in the plasma levels of oseltamivir carboxylate (304,367). No published data are available concerning the safety or efficacy of using combinations of any of these influenza antiviral drugs. For more detailed information concerning potential drug interactions for any of these influenza antiviral drugs, package inserts should be consulted.

# Information Regarding the Vaccines for Children Program

The Vaccines for Children (VFC) program supplies vaccine to all states, territories, and the District of Columbia for use by participating providers. These vaccines are to be administered to eligible children without vaccine cost to the patient or the provider. All routine childhood vaccines recommended by ACIP are available through this program. The program saves parents and providers out-of-pocket expenses for vaccine purchases and provides cost savings to states through the CDC vaccine contracts. The program results in lower vaccine prices and assures that all states pay the same contract prices. Detailed information regarding the VFC program is available at http://www.cdc.gov/nip/vfc/default.htm.

# Sources of Information Regarding Influenza and Its Surveillance

Information regarding influenza surveillance, prevention, detection, and control is available at http://www.cdc.gov/flu/weekly/fluactivity.htm. Surveillance information is available through the CDC Voice Information System (influenza update) at 888-232-3228 or the CDC Fax Information Service at 888-232-3299. During October-May, surveillance information is updated weekly. In addition, periodic updates regarding influenza are published in the MMWR Weekly Report (http://www.cdc.gov/mmwr). Additional information regarding influenza vaccine can be obtained by calling 800-CDC-INFO (800-232-4636). State and local health departments should be consulted concerning availability of influenza vaccine, access to vaccination programs, information related to state or local influenza activity, reporting of influenza outbreaks and influenza-related pediatric deaths, and advice concerning outbreak control.

# Reporting of Adverse Events Following Vaccination

Clinically significant adverse events that follow vaccination should be reported through VAERS at http://vaers.hhs.gov or by calling the 24-hour national toll-free hotline at 800-822-7967.

# Additional Information Regarding Influenza Virus Infection Control Among Specific Populations

Each year, ACIP provides general, annually updated information regarding control and prevention of influenza. Other reports related to controlling and preventing influenza among specific populations (e.g., immunocompromised persons, health-care workers, hospital patients, pregnant women, children, and travelers) also are available in other published reports.
# Summary

To improve the implementation of these guidelines, a working group of the Professional Education Subcommittee of the NAEPP extracted key clinical activities that should be considered essential for quality asthma care in accordance with the EPR-2 guidelines and the EPR-Update 2002. The purpose was to develop a report that would help purchasers and planners of health care define the activities that are important to quality asthma care, particularly in reducing symptoms and preventing exacerbations, and subsequently reducing the overall national burden of illness and death from asthma. This report is intended to help employer health benefits managers and other health-care planners make decisions regarding delivery of health care for persons with asthma. Although this report is based on information directed to clinicians, it is not intended to substitute for recommended clinical practices for caring for persons with asthma, nor is it intended to replace the clinical decision-making required to meet individual patient needs. Readers are referred to the EPR-2 for the full asthma guidelines regarding diagnosis and management of asthma or to the abstracted Practical Guide (National Heart, Lung, and Blood Institute, National Asthma Education and Prevention Program. Practical guide for the diagnosis and management of asthma. Bethesda, MD: US Department of Health and Human Services, National Institutes of Health, 1997; publication no. 97-4053. Available at http://www.nhlbi.nih.gov/health/prof/lung/asthma/practgde.htm) and to the EPR-Update 2002. The 1997 EPR-2 guidelines and EPR-Update 2002 were derived from a consensus of leading asthma researchers from academic, clinical, federal, and voluntary institutions and based on scientific evidence supported by the literature. The 10 key activities highlighted here correspond to the four components of asthma management recommended as essential: assessment and monitoring, control of factors contributing to asthma severity, pharmacotherapy, and education for a partnership in care. The key clinical activities are not intended for acute or hospital management of patients with asthma but rather for the preventive aspects of managing asthma long term. This report was developed as a collaborative activity between CDC and the NAEPP.

# Background

The National Asthma Education and Prevention Program (NAEPP), administered and coordinated by the National Heart, Lung, and Blood Institute, began developing a consensus set of science-based guidelines for diagnosis and management of asthma shortly after its establishment in 1989. With wide participation of asthma specialists from academia, research, and clinical care, as well as representatives from voluntary health organizations and federal agencies, the first Expert Panel Report: Guidelines for the Diagnosis and Management of Asthma was produced in 1991 (1); a revision, the EPR-2, was published in 1997 (2), and an update to the EPR-2 was published in 2002 (3). The EPR-2 and EPR-Update 2002 comprise the prevailing science-based consensus concerning accurate information for health-care providers regarding asthma diagnosis and management. The EPR-2 has been disseminated nationwide and abstracted into a practical guide, a best practices pediatric guide, and a pocket guideline.
However, although these asthma care principles have been widely endorsed, they have not been adequately applied (4-8). This report is a companion to the NAEPP Expert Panel Reports. It identifies a core set of 10 key clinical activities essential for ensuring that health care delivered to patients with asthma emphasizes the prevention aspect of care and addresses the components of care recommended in the Expert Panel Reports. The action steps listed for each key clinical activity suggest specific ways to accomplish the respective activity. The process of developing a core set of key clinical activities involved a detailed review of the EPR-2 guidelines and input from persons researching their implementation (1-3,9-11). These activities were identified in the context of the currently proposed asthma-specific measures for managed care (12), as well as the national health goals of Healthy People 2010 (13) (Box 1) and the strategic plan of the U.S. Department of Health and Human Services (DHHS), Action Against Asthma (14). These reports complement each other to help health-care professionals, policymakers, patients, and the public work together to improve asthma care for the overall population and reduce asthma-related morbidity and mortality.

# Essential Components of Care and Associated Key Clinical Activities

The four essential components of asthma management (assessment and monitoring, controlling factors contributing to asthma severity, pharmacotherapy, and education for partnership in care) are distilled from the NAEPP EPR-2 Guidelines for the Diagnosis and Management of Asthma (Table 1). In addition, 10 key clinical activities are described and listed, each according to the essential component it represents. Action steps are suggested to help accomplish each of the clinical activities. The intent of this report is to help employer health benefits managers and health-care planners make decisions regarding delivery of quality health care for persons with asthma.

# Assessment and Monitoring

# Key Clinical Activity 1. Establish Asthma Diagnosis

After a person seeks medical care for symptoms that suggest asthma, the diagnosis of asthma should be clearly established and the baseline severity of the disease classified to help establish the recommended course of therapy. For symptomatic adults and children aged >5 years who can perform spirometry, asthma can be diagnosed after a medical history and physical examination documenting an episodic pattern of respiratory symptoms and from spirometry that indicates partially reversible airflow obstruction (≥12% increase and 200 mL in forced expiratory volume in 1 second [FEV1] after inhaling a short-acting bronchodilator or receiving a short [2-3 week] course of oral corticosteroids); a worked check of this criterion appears after Box 1. Alternative diagnoses of symptoms that suggest asthma, including conditions affecting the upper and lower airways (e.g., upper airway obstruction/foreign body, bronchitis, pneumonia/bronchiolitis, and chronic obstructive pulmonary disease), should be considered and excluded.

Box 1. Healthy People 2010 objectives for asthma:

Reduce
- asthma-related deaths, hospitalizations for asthma, hospital emergency room visits for asthma, and limitations among persons with asthma;
- the number of school days or workdays missed because of asthma.

Increase
- the proportion of persons with asthma who receive formal patient education, including information about community and self-help resources, as an essential part of the management of their condition;
- the proportion of persons with asthma who receive appropriate asthma care according to the NAEPP guidelines.

Establish in at least 25 states a surveillance system for tracking asthma deaths, illness, disability, the impact of occupational and environmental factors on asthma, access to medical care, and asthma management.
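The worked check referenced in Key Clinical Activity 1 follows. It is illustrative only: the function and its threshold encoding are our construction, based on the reversibility criterion quoted above (an increase in FEV1 of at least 12% and at least 200 mL after bronchodilator).

```python
# Illustrative sketch (not from the report): checking the spirometric
# reversibility criterion quoted in Key Clinical Activity 1 -- a >=12%
# AND >=200 mL (0.2 L) increase in FEV1 after bronchodilator.

def reversible_obstruction(fev1_pre_l: float, fev1_post_l: float) -> bool:
    """True if FEV1 improves by >=12% of baseline and >=0.2 L."""
    rise_l = fev1_post_l - fev1_pre_l
    return rise_l >= 0.2 and rise_l >= 0.12 * fev1_pre_l

print(reversible_obstruction(2.00, 2.30))  # True: +0.30 L is +15% and >200 mL
print(reversible_obstruction(2.00, 2.15))  # False: +0.15 L falls short of 200 mL
```

Both conditions must hold; a large percentage change on a very low baseline, or a large absolute change on a very high baseline, does not by itself satisfy the criterion.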
For the patient with a probable diagnosis of asthma after initial evaluation (i.e., symptomatic with normal spirometry and no alternative diagnosis), presumptive treatment may be necessary to reach a final diagnosis. Referral to a specialist (see Key Clinical Activity 4) may be necessary if the diagnosis is in doubt, other conditions are aggravating the asthma, or the contribution of occupational or environmental exposures needs to be confirmed. For infants and children aged <5 years, the diagnostic steps are the same except that spirometry, the most objective measure of lung function, is not feasible for this age group. Therefore, young children with asthma symptoms should be treated as having suspected asthma once alternative diagnoses are ruled out. Their medical histories and physical examinations should be expanded to look for factors associated with the development of chronic persistent asthma: more than three episodes of wheezing in the past year that lasted more than 1 day and affected sleep, AND parental history of asthma or physician-diagnosed atopic dermatitis, or two of the following: physician-diagnosed allergic rhinitis, wheezing apart from colds, or peripheral blood eosinophilia (15). Over time, the diagnosis may become apparent, or referral to a specialist may be necessary to perform additional testing to exclude other diagnoses. Many children aged <6 years who wheeze with respiratory tract infections respond well to asthma therapy (16) even though the diagnosis may be unconfirmed until persistence or recurrence of signs and symptoms is established. Approximately one third of children who wheeze with respiratory infections develop asthma that persists after age 6 years (17).

# Key Clinical Activity 2. Classify Severity of Asthma

Because asthma is characterized by varying signs and symptoms, for appropriate treatment and monitoring, the severity of such signs and symptoms must be classified at the initial and all subsequent visits. Initially, and before treatment has been optimized, clinical signs, symptoms, and peak flow monitoring or spirometry are used to classify severity (Table 2) (Box 2). After the patient's asthma is stable, severity is subsequently classified according to the level of medication required to maintain treatment goals (Table 3). Health-care providers should have the knowledge, equipment, and staff, or access to needed resources, to aid in classification and proper management of all patients with asthma.

# Key Clinical Activity 3. Schedule Routine Follow-Up Care

Patients with asthma experience varying symptoms and severity because of the nature of asthma, their exposure to environmental allergens or irritants, or insufficient adherence to their medication regimen. For these reasons, they require adjustments in therapy and regular follow-up visits. The first follow-up visit should be scheduled within the month after initial diagnosis. Routine visits thereafter should be scheduled every 1-6 months, depending on the severity of asthma and the patient's ability to maintain control of symptoms. Routine care includes clinical assessment of airway function over time. Spirometry is recommended at the initial assessment and at least every 1-2 years after treatment is initiated and the symptoms and peak expiratory flow have stabilized.
Spirometry as a monitoring measure may be performed more frequently, if indicated, on the basis of severity of symptoms and the disease's lack of response to treatment. At all follow-up visits, the physician reviews the patient's medication use and management plan, including self-monitoring records. Also at each visit, the physician should assess the patient's self-management skills, including correct technique for use of inhalers, spacers, and peak flow meters, as applicable. See Education for Partnership in Care (Key Clinical Activities 9 and 10) for more detailed discussion regarding the asthma management plan and patient self-management. All patients should have access to and be instructed in the use of devices needed to administer medication or monitor their asthma (e.g., inhalers, spacers, nebulizers, and peak flow meters [PFMs]). Several devices may be required to ensure optimal treatment. Patients who use inhaled corticosteroids delivered by metered-dose inhalers should use a spacer to increase consistency of the dose and to minimize the possibility of local side effects. Some patients cannot easily coordinate actuation and inhalation using a metered-dose inhaler; spacers enable easier and more effective administration of medication. Spacers with face masks and nebulizers are both available for young children. PFMs are recommended for patients with moderate or severe persistent asthma and those with a history of severe exacerbations to help them monitor their symptoms during daily management as well as their response to home treatment during an exacerbation.

# Key Clinical Activity 4. Assess for Referral to Specialty Care

Referral to an asthma specialist for consultation or comanagement is recommended in the following circumstances:
- A single life-threatening asthma exacerbation occurs.
- Treatment goals for the patient's asthma are not being met after 3 weeks to 6 months of treatment, or earlier if the physician concludes that the asthma is not responding to current therapy.
- Atypical signs and symptoms make asthma diagnosis unclear, or other conditions are complicating asthma or its diagnosis.
- The patient has a history suggesting that asthma is being provoked by occupational factors, an environmental inhalant, or an ingested substance.
- The initial diagnosis is severe, persistent asthma.
- Additional diagnostic testing is indicated.
- The patient is a child aged <3 years with moderate or severe persistent asthma.
- The patient is a candidate for immunotherapy.
- The patient or family requires additional education or guidance in managing asthma complications or therapy, following the treatment plan, or avoiding asthma triggers.
- The patient requires continuous oral corticosteroid therapy or high-dose inhaled corticosteroids, or has required more than two courses of oral corticosteroids in 1 year.

Specialty care for asthma may be provided by an allergist, pulmonologist, or other physician with expertise in asthma management. Patients undergoing specialty care may be comanaged by the referring physician or monitored by the referring physician in accordance with the specialty care physician's treatment regimen.

# Identifying and Controlling Factors Contributing to Asthma Severity

# Key Clinical Activity 5. Recommend Measures to Control Asthma Triggers

Environmental tobacco smoke (ETS) and house dust mite, cockroach, and cat and dog allergens can worsen asthma (or trigger asthma exacerbations) in sensitized and exposed persons (18).
Irritant or allergen sensitivity can be determined by the patient's exposure and symptom history and confirmed with skin or blood testing. Allergy testing for perennial indoor allergens is recommended for persons with persistent asthma who are taking daily medications. After sensitivity is determined, avoidance of the trigger is recommended, and allergen abatement might be indicated. Ways to reduce allergen and irritant exposure should be reviewed, and agreement should be sought with the patient to initiate measures. Examples of trigger avoidance include using dust mite-impermeable pillow and mattress covers, removing furry pets, and taking measures to eliminate cockroaches (Box 3). No patient with asthma should smoke or be exposed to ETS. Physicians should review smoking status at the initial visit and all subsequent visits and, if patients smoke or are regularly exposed to ETS, should encourage them to stop smoking and refer them to smoking-cessation resources. Exercise-induced bronchoconstriction describes the transient narrowing of airways associated with physical exertion in persons with asthma (17). Exercise-induced bronchoconstriction may be prevented with optimal long-term control of asthma. If the patient remains symptomatic during exercise, specific medications can be taken before exercise to prevent exercise-induced bronchoconstriction.

# Key Clinical Activity 6. Treat or Prevent All Comorbid Conditions

Allergic rhinitis, sinusitis, gastroesophageal reflux, and sensitivity to certain medicines, including aspirin, nonsteroidal antiinflammatory drugs (NSAIDs), and beta blockers, can exacerbate asthma symptoms. Health-care providers should evaluate their asthma patients for these conditions and inquire about their medications, especially when asthma symptoms persist or worsen despite medication adjustments. Health-care providers should provide annual influenza vaccinations to patients with persistent asthma to prevent a respiratory infection that can exacerbate asthma. However, patients who are clinically allergic to egg should not receive the vaccine.

# Pharmacotherapy

# Key Clinical Activity 7. Prescribe Medications According to Severity

Current evidence indicates that daily long-term control medications are necessary to prevent exacerbations and chronic symptoms for all patients with persistent asthma, whether the persistent asthma is mild, moderate, or severe. Inhaled corticosteroids are preferred because they are the most effective antiinflammatory medication available for treating the underlying inflammation characteristic of persistent asthma. For patients with mild persistent asthma, other long-term medications (cromolyn, leukotriene modifiers, nedocromil, and theophylline) are available but have not been demonstrated to be as effective as inhaled corticosteroids. Patients with moderate or severe disease usually require additional medication combined with inhaled corticosteroids for daily long-term control. All patients with asthma require a short-acting bronchodilator medication for managing acute symptoms or exacerbations when they occur; severe exacerbations require the addition of systemic (oral) corticosteroids to treat the increased inflammation present (see EPR-2 for additional information about managing exacerbations of asthma).

Box 3. Allergens: Reduce or eliminate exposure to allergens to which the patient is sensitive:
- Animal dander: Remove animals from the house or, at a minimum, keep animals out of the patient's bedroom; use a filter for the air ducts leading to the bedroom.
- House dust mites:
  - Essential: Encase mattress and pillow in an allergen-impermeable cover, or wash the pillow weekly as an alternative. Wash sheets and blankets from the patient's bed in hot water (>130°F) weekly.

The dosage and type of medications are crucial because asthma treatment is adjusted according to the level of asthma severity. Once therapy goals are achieved, a gradual reduction in treatment should be carefully undertaken to identify the minimum dose required to maintain control (Table 3). On the basis of the severity of asthma in an individual patient, various medication delivery and monitoring devices may be necessary (see Key Clinical Activity 3 for discussion of spacers/holding chambers, nebulizers, and peak flow and symptom monitoring).

# Key Clinical Activity 8. Monitor Use of β2-Agonist Drugs

Patients whose need for short-acting inhaled β2-agonists to control daytime and nighttime symptoms increases probably have inadequately controlled asthma. Although patients may need short-acting inhaled β2-agonists during upper respiratory viral infections and for exercise-induced bronchoconstriction, use of more than one canister of a short-acting β2-agonist drug per month is usually considered above expected use. At every patient visit, the health-care provider should review β2-agonist medication use, including the patient's understanding of the dosage instructions, inhaler technique, and reasons for increased use. For patients using more β2-agonist drugs than expected, daily long-term control therapy should be adjusted as needed, either by initiating it or by increasing the dose (Table 3).

# Education for Partnership in Care

# Key Clinical Activity 9. Develop a Written Asthma Management Plan

As part of the overall management of patients with asthma, the health-care provider, in consultation with the patient or the parent or guardian of a child with asthma, should develop a written plan as part of educating patients regarding self-management, especially for patients with moderate or severe persistent asthma and those with a history of severe exacerbation. The National Heart, Lung, and Blood Institute provides more specific advice on asthma management plans, emphasizing the provider/patient partnership (available at http://www.nhlbi.nih.gov/health/public/lung/asthma/asthma.htm#plan). Writing the management plan helps clarify expectations for treatment (Box 4) and provides patients with an easy reference for remembering how to manage their asthma. The action plan should include written instructions on recognizing symptoms and signs of worsening asthma; taking appropriate medicines (type, dose, and frequency); recognizing when to seek medical care; and monitoring response to medications. Symptom-based plans may be as effective as plans based on peak flow monitoring, although some patient preferences and circumstances (e.g., inability to recognize or report signs and symptoms of worsening asthma) may warrant a choice of peak flow monitoring. The management plan should be reviewed and adjusted, as needed, at every visit. For children, a copy of the plan should be given to each caregiver and the child's school.

# Key Clinical Activity 10. Provide Routine Education on Patient Self-Management

Asthma education is essential for successful management of the disease. Effective asthma education is developed in a patient-provider partnership and tailored to the individual patient's needs relative to cultural or ethnic beliefs and practices.
At a minimum, competent asthma education enlists and encourages family support, includes instructions on self-management skills (Box 5), and is integrated with routine ongoing care. It is provided, either as group or individual patient programs, to all patients and parents/guardians of children who have had a diagnosis of asthma. A patient's ability to take asthma medications is a necessary skill of self-management. Patients and parents/guardians of children with asthma need to know the rationale behind daily long-term and quick-relief medications, how to take medications correctly, and how to adjust the dosage if asthma symptoms occur. Instruction in, and verification of, the proper use of any medication delivery devices and PFMs (for patients with moderate or severe persistent asthma) should be provided at the initial and each subsequent visit. Instruction should include how to interpret peak flow results and take action according to the asthma management plan. To help patients avoid or control environmental factors that worsen their asthma, education should focus first on identifying simple measures. Once the patient notices improved asthma control, more complicated or extensive measures can be undertaken. Including members of the family in these discussions may be helpful because implementing avoidance and control measures often affects all family members. This is especially true with ETS education.

# Conclusion

The 10 key clinical activities for quality asthma care, derived from the NAEPP EPR-2 and EPR-Update 2002, offer guidance for health-care managers and planners in making decisions regarding the resources necessary to ensure quality health care for persons with asthma. Managers and planners using these guides should understand the resources needed for appropriate diagnosis and treatment of asthma and the corresponding clinical activities recommended to ensure proper care and control of asthma. Action steps listed for each activity suggest specific ways to accomplish the respective activities. Adoption of the key clinical activities will lead to more appropriate health care for persons with asthma.

All MMWR references are available on the Internet at http://www.cdc.gov/mmwr. Use the search function to find specific articles.

Use of trade names and commercial sources is for identification only and does not imply endorsement by the U.S. Department of Health and Human Services.
depar depar depar depar department of health and human ser tment of health and human ser tment of health and human ser tment of health and human ser tment of health and human services vices vices vices vices# Summary To improve the implementation of these guidelines, a working group of the Professional Education Subcommittee of the NAEPP extracted key clinical activities that should be considered as essential for quality asthma care in accordance with the EPR-2 guidelines and the EPR-Update 2002. The purpose was to develop a report that would help purchasers and planners of health care define the activities that are important to quality asthma care, particularly in reducing symptoms and preventing exacerbations, and subsequently reducing the overall national burden of illness and death from asthma. This report is intended to help employer health benefits managers and other health-care planners make decisions regarding delivery of health care for persons with asthma. Although this report is based on information directed to clinicians; it is not intended to substitute for recommended clinical practices for caring for persons with asthma, nor is it intended to replace the clinical decision-making required to meet individual patient needs. Readers are referred to the EPR-2 for the full asthma guidelines regarding diagnosis and management of asthma or to the abstracted Practical Guide (National Heart, Lung, and Blood Institute, National Asthma Education and Prevention Program. Practical guide for the diagnosis and management of asthma. Bethesda MD: US Department of Health and Human Services, National Institutes of Health, 1997; publication no. 97-4053. Available at http:// www.nhlbi.nih.gov/health/prof/lung/asthma/practgde.htm) and to the EPR-Update 2002. The 1997 EPR-2 guidelines and EPR-Update 2002 were derived from a consensus of leading asthma researchers from academic, clinical, federal, and voluntary institutions and based on scientific evidence supported by the literature. The 10 key activities highlighted here correspond to the four recommended-as-essential components of asthma management: assessment and monitoring, control of factors contributing to asthma severity, pharmacotherapy, and education for a partnership in care. The key clinical activities are not intended for acute or hospital management of patients with asthma but rather for the preventive aspects of managing asthma long term. This report was developed as a collaborative activity between CDC and the NAEPP. # Background The National Asthma Education and Prevention Program (NAEPP), administered and coordinated by the National Heart, Lung, and Blood Institute, began developing a consensus set of science-based guidelines for diagnosis and management of asthma shortly after its establishment in 1989. With wide participation of asthma specialists from academia, research, and clinical care, as well as representatives from voluntary health organizations and federal agencies, the first Expert Panel Report: Guidelines for the Diagnosis and Management of Asthma was produced in 1991 (1); a revision, the EPR-2, was published in 1997 (2), and an update to the EPR-2 was published in 2002 (3). The EPR-2 and EPR-Update 2002 comprise the prevailing science-based consensus concerning accurate information for health-care providers regarding asthma diagnosis and management. The EPR-2 has been disseminated nationwide and abstracted into a practical guide, a best practices pediatric guide, and a pocket guideline. 
However, although these asthma care principles have been widely endorsed, they have not been adequately applied (4)(5)(6)(7)(8). This report is a companion to the NAEPP Expert Panel Reports. It identifies a core set of 10 key clinical activities essential for ensuring that health care delivered to patients with asthma emphasizes the prevention aspect of care and addresses the components of care recommended in the Expert Panel Reports. The action steps listed for each key clinical activity suggest specific ways to accomplish the respective activity. The process of developing a core set of key clinical activities involved a detailed review of the EPR-2 guidelines and input from persons researching their implementation (1)(2)(3)(9)(10)(11). These activities were identified in the context of the currently proposed asthma-specific measures for managed care (12), as well as the national health goals of Healthy People 2010 (13) (Box 1) and the strategic plan of the U.S. Department of Health and Human Services (DHHS), Action Against Asthma (14). These reports complement each other to help health-care professionals, policymakers, patients, and the public work together to improve asthma care for the overall population and reduce asthma-related morbidity and mortality. # Essential Components of Care and Associated Key Clinical Activities The four essential components of asthma managementassessment and monitoring, controlling factors contributing to asthma severity, pharmacotherapy, and education for partnership in care-are distilled from the NAEPP EPR-2 Guidelines for the Diagnosis and Management of Asthma (Table 1). In addition, 10 key clinical activities are described and listed, each according to the essential component it represents. Action steps are suggested to help accomplish each of the clinical activities. The intent of this report is to help employer health benefits managers and health-care planners make decisions regarding delivery of quality health care for persons with asthma. # Assessment and Monitoring Key Clinical Activity 1. Establish Asthma Diagnosis After a person seeks medical care for symptoms that suggest asthma, the diagnosis of asthma should be clearly established and the baseline severity of the disease classified to help establish the recommended course of therapy. For symptomatic adults and children aged >5 years who can perform spirometry, asthma can be diagnosed after a medical history and physical examination documenting an episodic pattern of respiratory symptoms and from spirometry that indicates partially reversible airflow obstruction (>12% increase and 200 mL in forced expiratory volume in 1 second [FEV1] after inhaling a shortacting bronchodilator or receiving a short [2-3 week] course of oral corticosteroids). Alternative diagnoses of symptoms that suggest asthma, including conditions affecting the upper and lower airways (e.g., upper airway obstruction/foreign body, bronchitis, pneumonia/bronchiolitis, chronic obstructive pulmonary disease, # Reduce • asthma-related deaths, hospitalizations for asthma, hospital emergency room visits for asthma, and limitations among persons with asthma; • the number of school days or workdays missed because of asthma. 
# Increase
• the proportion of persons with asthma who receive formal patient education, including information about community and self-help resources, as an essential part of the management of their condition;
• the proportion of persons with asthma who receive appropriate asthma care according to the NAEPP guidelines.

Establish in at least 25 states a surveillance system for tracking asthma deaths, illness, disability, the impact of occupational and environmental factors on asthma, access to medical care, and asthma management.

For the patient with a probable diagnosis of asthma after initial evaluation (i.e., symptomatic with normal spirometry and no alternative diagnosis), presumptive treatment may be necessary to reach a final diagnosis. Referral to a specialist (see Key Clinical Activity 4) may be necessary if the diagnosis is in doubt, other conditions are aggravating the asthma, or the contribution of occupational or environmental exposures needs to be confirmed. For infants and children aged <5 years, the diagnostic steps are the same except that spirometry, the most objective measure of lung function, is not feasible for this age group. Therefore, young children with asthma symptoms should be treated as having suspected asthma once alternative diagnoses are ruled out. Their medical histories and physical examinations should be expanded to look for factors associated with the development of chronic persistent asthma: more than three episodes of wheezing in the past year that lasted more than 1 day and affected sleep, AND parental history of asthma or physician-diagnosed atopic dermatitis, or two of the following: physician-diagnosed allergic rhinitis, wheezing apart from colds, or peripheral blood eosinophilia (15). Over time, the diagnosis may become apparent, or referral to a specialist may be necessary to perform additional testing to exclude other diagnoses. Many children aged <6 years who wheeze with respiratory tract infections respond well to asthma therapy (16) even though the diagnosis may be unconfirmed until persistence or recurrence of signs and symptoms is established. Approximately one third of children who wheeze with respiratory infections develop asthma that persists after age 6 (17).

# Key Clinical Activity 2. Classify Severity of Asthma
Because asthma is characterized by varying signs and symptoms, the severity of such signs and symptoms must be classified at the initial visit and all subsequent visits to guide appropriate treatment and monitoring. Initially, and before treatment has been optimized, clinical signs, symptoms, and peak flow monitoring or spirometry are used to classify severity (Table 2) (Box 2). After the patient's asthma is stable, severity is subsequently classified according to the level of medication required to maintain treatment goals (Table 3). Health-care providers should have the knowledge, equipment, and staff, or access to the needed resources, to aid in classification and proper management of all patients with asthma.

# Key Clinical Activity 3. Schedule Routine Follow-Up Care
Patients with asthma experience varying symptoms and severity because of the nature of asthma, their exposure to environmental allergens or irritants, or insufficient adherence to their medication regimen. For these reasons, they require adjustments in therapy and regular follow-up visits. The first follow-up visit should be scheduled within the month after initial diagnosis.
Routine visits thereafter should be scheduled every 1-6 months, depending on the severity of asthma and the patient's ability to maintain control of symptoms. Routine care includes clinical assessment of airway function over time. Spirometry is recommended at the initial assessment and at least every 1-2 years after treatment is initiated and the symptoms and peak expiratory flow have stabilized. Spirometry as a monitoring measure may be performed more frequently, if indicated, on the basis of severity of symptoms and the disease's lack of response to treatment. At all follow-up visits, the physician reviews the patient's medication use and management plan, including self-monitoring records. Also at each visit, the physician should assess the patient's self-management skills, including correct technique for use of inhalers, spacers, and peak flow meters, as applicable. See Education for Partnership in Care (Key Clinical Activities 9 and 10) for more detailed discussion regarding the asthma management plan and patient self-management. All patients should have access to and be instructed in the use of devices needed to administer medication or monitor their asthma (e.g., inhalers, spacers, nebulizers, and peak flow meters [PFMs]). Several devices may be required to ensure optimal treatment. Patients who use inhaled corticosteroids delivered by metered-dose inhalers should use a spacer to increase consistency of the dose and to minimize the possibility of local side effects. Some patients cannot easily coordinate actuation and inhalation using a metered-dose inhaler; spacers enable easier and more effective administration of medication. Spacers with face masks and nebulizers are both available for young children. PFMs are recommended for patients with moderate or severe persistent asthma and those with a history of severe exacerbations to help them monitor their symptoms during daily management as well as their response to home treatment during an exacerbation.

# Key Clinical Activity 4. Assess for Referral to Specialty Care
Referral to an asthma specialist for consultation or comanagement is recommended in the following circumstances:
• A single life-threatening asthma exacerbation occurs.
• Treatment goals for the patient's asthma are not being met after 3 weeks to 6 months of treatment, or earlier if the physician concludes that the asthma is not responding to current therapy.
• Atypical signs and symptoms make the asthma diagnosis unclear, or other conditions are complicating asthma or its diagnosis.
• The patient has a history suggesting that asthma is being provoked by occupational factors, an environmental inhalant, or an ingested substance.
• The initial diagnosis is severe persistent asthma.
• Additional diagnostic testing is indicated.
• The patient is a child aged <3 years with moderate or severe persistent asthma.
• The patient is a candidate for immunotherapy.
• The patient or family requires additional education or guidance in managing asthma complications or therapy, following the treatment plan, or avoiding asthma triggers.
• The patient requires continuous oral corticosteroid therapy or high-dose inhaled corticosteroids, or has required more than two courses of oral corticosteroids in 1 year.

Specialty care for asthma may be provided by an allergist, pulmonologist, or other physician with expertise in asthma management.
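The referral circumstances above amount to an any-of checklist: if any criterion applies, referral for consultation or comanagement is indicated. As a rough illustration only, here is a minimal Python sketch; the record keys are hypothetical and not drawn from the guideline:

```python
# Hypothetical sketch: screen a patient record against the referral
# circumstances listed above. Keys are illustrative, not NAEPP-defined.
REFERRAL_REASONS = {
    "life_threatening_exacerbation": "A single life-threatening exacerbation occurred",
    "treatment_goals_unmet": "Goals unmet after 3 weeks to 6 months of treatment",
    "diagnosis_unclear_or_complicated": "Atypical signs/symptoms or complicating conditions",
    "occupational_or_environmental_trigger": "Suspected provoking exposure to confirm",
    "severe_persistent_at_diagnosis": "Initial diagnosis is severe persistent asthma",
    "age_under_3_moderate_or_severe": "Child aged <3 years with moderate/severe persistent asthma",
    "heavy_oral_corticosteroid_use": "Continuous oral or high-dose inhaled corticosteroids, or >2 oral courses in 1 year",
}

def referral_indications(record: dict) -> list[str]:
    """Return every referral reason flagged true in the record; any hit indicates referral."""
    return [reason for key, reason in REFERRAL_REASONS.items() if record.get(key)]

print(referral_indications({"diagnosis_unclear_or_complicated": True}))
```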
Patients undergoing specialty care may be comanaged by the referring physician or monitored by the referring physician in accordance with the specialty care physician's treatment regimen.

# Identifying and Controlling Factors Contributing to Asthma Severity

# Key Clinical Activity 5. Recommend Measures to Control Asthma Triggers
Environmental tobacco smoke (ETS) and house dust mite, cockroach, and cat and dog allergens can worsen asthma (or trigger asthma exacerbations) in sensitized and exposed persons (18). Irritant or allergen sensitivity can be determined from the patient's exposure and symptom history and confirmed with skin or blood testing. Allergy testing for perennial indoor allergens is recommended for persons with persistent asthma who are taking daily medications. After sensitivity is determined, avoidance of the trigger is recommended, and allergen abatement might be indicated. Ways to reduce allergen and irritant exposure should be reviewed, and agreement should be sought with the patient to initiate measures. Examples of trigger avoidance include using dust mite-impermeable pillow and mattress covers, removing furry pets, and taking measures to eliminate cockroaches (Box 3). No patient with asthma should smoke or be exposed to ETS. Physicians should review smoking status at the initial visit and all subsequent visits and, if patients smoke or are regularly exposed to ETS, should encourage them to stop smoking and refer them to cessation resources. Exercise-induced bronchoconstriction describes the transient narrowing of airways associated with physical exertion in persons with asthma (17). Exercise-induced bronchoconstriction may be prevented with optimal long-term control of asthma. If the patient remains symptomatic during exercise, specific medications can be taken before exercise to prevent exercise-induced bronchoconstriction.

# Key Clinical Activity 6. Treat or Prevent All Comorbid Conditions
Allergic rhinitis, sinusitis, gastroesophageal reflux, and sensitivity to certain medicines, including aspirin, nonsteroidal anti-inflammatory drugs (NSAIDs), and beta blockers, can exacerbate asthma symptoms. Health-care providers should evaluate their asthma patients for these conditions and inquire about their medications, especially when asthma symptoms persist or worsen despite medication adjustments. Health-care providers should provide annual influenza vaccinations to patients with persistent asthma to prevent a respiratory infection that can exacerbate asthma. However, patients who are clinically allergic to egg should not receive the vaccine.

# Pharmacotherapy

# Key Clinical Activity 7. Prescribe Medications According to Severity
Current evidence indicates that daily long-term control medications are necessary to prevent exacerbations and chronic symptoms for all patients with persistent asthma, whether the persistent asthma is mild, moderate, or severe. Inhaled corticosteroids are preferred because they are the most effective anti-inflammatory medication available for treating the underlying inflammation characteristic of persistent asthma. For patients with mild persistent asthma, other long-term medications (cromolyn, leukotriene modifiers, nedocromil, and theophylline) are available but have not been demonstrated to be as effective as inhaled corticosteroids. Patients with moderate or severe disease usually require additional medication combined with inhaled corticosteroids for daily long-term control.
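The stepwise logic of this activity reduces to a severity-to-controller mapping. The following minimal sketch paraphrases the text above; it is illustrative and does not reproduce the guideline's Table 3:

```python
# Simplified sketch of severity-based selection of daily long-term control
# therapy, per the text above: every level of persistent asthma warrants a
# daily controller, inhaled corticosteroids are preferred, and moderate or
# severe disease usually needs additional medication. The category strings
# are illustrative, not the guideline's classification tables.
def long_term_control(severity: str) -> str:
    if severity == "intermittent":
        return "No daily controller; short-acting bronchodilator as needed"
    if severity == "mild persistent":
        return ("Daily inhaled corticosteroid (preferred over cromolyn, "
                "leukotriene modifiers, nedocromil, and theophylline)")
    if severity in ("moderate persistent", "severe persistent"):
        return "Daily inhaled corticosteroid plus additional long-term control medication"
    raise ValueError(f"Unknown severity classification: {severity}")

print(long_term_control("moderate persistent"))
```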
All patients with asthma require a short-acting bronchodilator medication for managing acute symptoms or exacerbations when they occur; severe exacerbations require the addition of systemic (oral) corticosteroids to treat the increased inflammation present (see EPR-2 for additional information about managing exacerbations of asthma).

Allergens
Reduce or eliminate exposure to allergens to which the patient is sensitive:
• Animal dander: Remove animals from the house or, at a minimum, keep animals out of the patient's bedroom; use a filter for the air ducts leading to the bedroom.
• House dust mites:
-Essential: Encase mattress and pillow in an allergen-impermeable cover, or wash the pillow weekly as an alternative. Wash sheets and blankets from the patient's bed in hot water (>130°F) weekly.

The dosage and type of medications are crucial because asthma treatment is adjusted according to the level of asthma severity. Once therapy goals are achieved, a gradual reduction in treatment should be carefully undertaken to identify the minimum dose required to maintain control (Table 3). On the basis of the severity of asthma in an individual patient, various medication delivery and monitoring devices may be necessary (see Key Clinical Activity 3 for discussion of spacers/holding chambers, nebulizers, and peak flow and symptom monitoring).

# Key Clinical Activity 8. Monitor Use of β2-Agonist Drugs
Patients whose need for short-acting inhaled β2-agonists to control daytime and nighttime symptoms increases probably have inadequately controlled asthma. Although patients may need a short-acting inhaled β2-agonist during upper respiratory viral infections and for exercise-induced bronchoconstriction, use of more than one canister of a short-acting β2-agonist drug per month is usually considered above expected use. At every patient visit, the health-care provider should review β2-agonist medication use, including the patient's understanding of the dosage instructions, inhaler technique, and reasons for increased use. For patients using more β2-agonist drugs than expected, daily long-term control therapy should be initiated or increased as needed (Table 3).

# Education for Partnership in Care

# Key Clinical Activity 9. Develop a Written Asthma Management Plan
As part of the overall management of patients with asthma, the health-care provider, in consultation with the patient or the parent or guardian of a child with asthma, should develop a written plan as part of educating patients regarding self-management, especially for patients with moderate or severe persistent asthma and those with a history of severe exacerbation. The National Heart, Lung, and Blood Institute provides more specific advice on asthma management plans, emphasizing the provider/patient partnership (available at http://www.nhlbi.nih.gov/health/public/lung/asthma/asthma.htm#plan). Writing the management plan helps clarify expectations for treatment (Box 4) and provides patients with an easy reference for remembering how to manage their asthma. The action plan should include written instructions on recognizing symptoms and signs of worsening asthma; taking appropriate medicines (type, dose, and frequency); recognizing when to seek medical care; and monitoring response to medications.
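Action plans keyed to peak flow are typically organized around zones computed from the patient's personal best reading. A minimal sketch follows, assuming the commonly used 80% and 50% zone cutoffs, which the text above does not itself specify:

```python
# Sketch of peak-flow zone logic often used in written asthma action plans.
# The 80%/50% cutoffs are conventional zone boundaries assumed here; the
# specific instructions in a real plan come from the clinician and patient.
def peak_flow_zone(reading: float, personal_best: float) -> str:
    pct = reading / personal_best * 100
    if pct >= 80:
        return "GREEN: continue routine long-term control medications"
    if pct >= 50:
        return "YELLOW: take quick-relief medication per the plan and reassess"
    return "RED: take quick-relief medication and seek medical care now"

print(peak_flow_zone(310, 450))  # YELLOW (about 69% of personal best)
```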
Symptom-based plans may be as effective as plans based on peak flow monitoring, although some patient preferences and circumstances (e.g., inability to recognize or report signs and symptoms of worsening asthma) may warrant a choice of peak flow monitoring. The management plan should be reviewed and adjusted, as needed, at every visit. For children, a copy of the plan should be given to each caregiver and the child's school.

# Key Clinical Activity 10. Provide Routine Education on Patient Self-Management
Asthma education is essential for successful management of the disease. Effective asthma education is developed in a patient-provider partnership and tailored to the individual patient's needs relative to cultural or ethnic beliefs and practices. At a minimum, competent asthma education enlists and encourages family support, includes instruction in self-management skills (Box 5), and is integrated with routine ongoing care. It is provided, either as group or individual patient programs, to all patients and parents/guardians of children who have received a diagnosis of asthma. A patient's ability to take asthma medications is a necessary self-management skill. Patients and parents/guardians of children with asthma need to know the rationale behind daily long-term and quick-relief medications, how to take medications correctly, and how to adjust the dosage if asthma symptoms occur. Instructions on, and verification of, the proper use of any medication delivery devices and PFMs (for patients with moderate or severe persistent asthma) should be provided at the initial visit and each subsequent visit. Instruction should include how to interpret peak flow results and take action according to the asthma management plan. To help patients avoid or control environmental factors that worsen their asthma, education should focus first on identifying simple measures. Once the patient notices improved asthma control, more complicated or extensive measures can be undertaken. Including members of the family in these discussions may be helpful because implementing avoidance and control measures often affects all family members. This is especially true of ETS education.

# Conclusion
The 10 key clinical activities for quality asthma care, derived from the NAEPP EPR-2 and EPR-Update 2002, offer guidance for health-care managers and planners in making decisions regarding the resources necessary to ensure quality health care for persons with asthma. Managers and planners using these guides should understand the resources needed for appropriate diagnosis and treatment of asthma and the corresponding clinical activities recommended to ensure proper care and control of asthma. Action steps listed for each activity suggest specific ways to accomplish the respective activities. Adoption of the key clinical activities will lead to more appropriate health care for persons with asthma.
Interviewing case-patients about where and what they ate in the days or weeks before they got sick is a critical component of hypothesis generation during an outbreak investigation. Interviews can also identify high-risk case-patients who could spread their infections to others (e.g., food handlers, day care workers or attendees, healthcare workers). During interviews, case-patients can also receive information about risky exposures and how to protect themselves and others. The FoodCORE Model Practice: Initial Case-patient Interviewing is intended to describe the basic practices and characteristics of conducting comprehensive interviews for all enteric disease case-patients upon initial identification or first contact, not just those identified as part of a local cluster or multistate cluster. The activities described would be applicable for various pathogens but are focused on those that are likely transmitted via food. Depending on jurisdictional resources, attempts should be made to interview all identified case-patients with enteric disease to ascertain an exposure history. This model practice describes successful triage and routing of case reporting and the process of attempting interviews with case-patients, recommends categories and elements identified as essential to ascertain during an initial enteric disease interview, and provides a checklist to determine alignment of initial interview practices with the FoodCORE model practice.

# Timeliness, timing, and description of interview attempts:
Interviews should be attempted as soon as public health officials are notified of a case-patient. This may be before all subtyping is completed. Prompt interviewing is critical to improve exposure recall and increase the likelihood of identifying leftover products to collect for testing. Additionally, early interviewing provides an opportunity for prompt prevention education to limit additional transmission, especially if a case-patient is identified in a high-risk setting (e.g., food handling, childcare, or health care). At least three attempts should be made to contact a case-patient. Attempts should be made at different times of day, with at least one attempt during evening or weekend hours, if possible. During initial contact, interviewers can determine whether the interview would be better conducted at a different time, confirm case-patient contact information, and (resource dependent) arrange for the interview to be conducted with an interpreter or other means of translation service, if necessary. If a case-patient cannot be reached for interview via telephone, FoodCORE centers have used the following approaches to attempt to ascertain exposure history and/or conduct prevention education:
• Provide case-patients with call-back information, either a toll-free or direct line for reaching an interviewer.
• Provide a letter from the relevant public health agency explaining the reason for the attempted contact and providing both contact information and educational materials about the enteric pathogen and prevention. See Appendix C for sample letters.
• Some centers are working on a novel approach of providing online questionnaires for exposure ascertainment. This method is used as an alternative for case-patients who cannot be reached or are unwilling to complete an interview over the phone but would like to provide their exposure history online. Online systems for self-reported data must be secure and allow for confidential data submission.
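The attempt protocol above (at least three attempts, at different times of day, with at least one during evening or weekend hours) can be checked mechanically. A minimal sketch, with window boundaries chosen for illustration rather than specified by FoodCORE:

```python
# Sketch of the contact-attempt rule described above: >=3 attempts, varied
# times of day, at least one evening or weekend attempt. The morning/
# afternoon/evening hour boundaries are illustrative assumptions.
from datetime import datetime

def attempts_meet_model_practice(attempts: list[datetime]) -> bool:
    if len(attempts) < 3:
        return False
    windows = {("morning" if t.hour < 12 else "afternoon" if t.hour < 17 else "evening")
               for t in attempts}
    evening_or_weekend = any(t.hour >= 17 or t.weekday() >= 5 for t in attempts)
    return len(windows) >= 2 and evening_or_weekend

print(attempts_meet_model_practice([
    datetime(2024, 5, 6, 10), datetime(2024, 5, 7, 14), datetime(2024, 5, 8, 18),
]))  # True: three attempts across varied windows, one in the evening
```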
# Interview content:
FoodCORE initial case-patient interviews include elements from seven major categories. For all case-patients of enteric disease, data collected in categories 1-5 are needed to identify case-patients to whom public health officials can provide educational information to prevent additional illnesses and to identify any events or local trends that could indicate ongoing risk. FoodCORE center initial interviews include elements in categories 6 and 7 as part of a full exposure history. Depending on jurisdictional resources, interviews should collect sufficient detail to enable public health investigation in these categories. As resources allow, jurisdictions can evaluate including a detailed food exposure history as part of an initial interview. Other initiatives, including the Listeria Initiative, the STEC Standard Form, and the SNHGQ, have suggested food categories and elements to ascertain in a food history (e.g., Meat and Poultry; Fish and Seafood; Eggs, Dairy, and Cheese; Fruits and Vegetables; Frozen and Convenience Foods). For successful interviews, interviewers should be familiar with the questionnaire and jurisdictional policies for education and intervention so the case-patient interaction is efficient and comfortable. The content and structure of the initial interview should be understandable and sensitive to the personal nature of the questions. FoodCORE centers have implemented the following practices and considerations (see the sketch after the sample letter below):
• The order in which elements are asked can influence how responsive a case-patient may be
» More sensitive information should be collected after the case-patient is comfortable with the interviewer and the reason for being contacted
• If an interviewer determines that a case-patient is short on time, elements can be prioritized to ascertain the highest-priority elements earlier in the interview
» This could include risk of spreading infection to others or identifying additional case-patients for local clusters/events
• Interview elements can also be prioritized for pathogen-specific concerns to focus on the highest-priority elements for those specific interviews
» This should include identifying persons who are at risk of spreading infection to others
• Since interviews may be conducted before case-patients are linked to a cluster of illness, interviewers may explain to case-patients that they may be re-contacted for additional details about their illness to keep other people from getting sick
» Interviewers can verify contact information and preferred means of contact, the best times of day to reach a case-patient, and other preferences (as reasonable), such as preferred language

# Sample letter
By law, doctors and other healthcare providers must report diseases that may be spread to others so that we may investigate and provide education and assistance to you as needed. Our goal is to help you (or your child) get well, stay well, and keep others from becoming ill. You may be able to help us figure out what made you (or your child) sick. It is important that we speak with you as soon as possible. We have been unable to reach you by telephone and would appreciate it if you would return our call. If you are unable to contact us by phone, you may provide some of the information we need electronically at igotsick.health.utah.gov. If you prefer to provide information electronically, please enter the following link into your internet browser and follow the instructions: igotsick.health.utah.gov.
The information you choose to share is completely confidential and secure and will automatically be routed to the appropriate health department. Local health departments may follow up as needed to prevent future illness.
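As referenced above, interview elements can be prioritized when a case-patient has limited time. The sketch below is illustrative only; the section labels and per-section timing are assumptions, not FoodCORE-specified categories:

```python
# Illustrative sketch of time-limited element prioritization per the
# practices above: highest-priority items (risk of spreading infection,
# cluster/event identification, recontact details) come before the full
# exposure history. Labels and timings are assumptions.
PRIORITY_ORDER = [
    "transmission_risk_screening",   # e.g., food handler, childcare, healthcare work
    "local_cluster_or_event_links",
    "contact_info_and_recontact_preferences",
    "high_risk_exposures",
    "full_food_exposure_history",    # detailed history, as resources allow
]

def interview_order(available_minutes: int, minutes_per_section: int = 5) -> list[str]:
    """Return the sections to cover, highest priority first, within the time available."""
    n = max(1, available_minutes // minutes_per_section)
    return PRIORITY_ORDER[:n]

print(interview_order(12))  # the first two sections fit in 12 minutes at 5 min each
```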
# Executive Summary
The Guideline for Isolation Precautions: Preventing Transmission of Infectious Agents in Healthcare Settings 2007 updates and expands the Guideline for Isolation Precautions in Hospitals. The following developments led to revision of the 1996 guideline:

1. The transition of healthcare delivery from primarily acute care hospitals to other healthcare settings (e.g., home care, ambulatory care, free-standing specialty care sites, long-term care) created a need for recommendations that can be applied in all healthcare settings using common principles of infection control practice, yet can be modified to reflect setting-specific needs. Accordingly, the revised guideline addresses the spectrum of healthcare delivery settings. Furthermore, the term "nosocomial infections" is replaced by "healthcare-associated infections" (HAIs) to reflect the changing patterns in healthcare delivery and the difficulty of determining the geographic site of exposure to an infectious agent and/or acquisition of infection.

2. The emergence of new pathogens (e.g., SARS-CoV, the coronavirus associated with severe acute respiratory syndrome [SARS]; avian influenza in humans), renewed concern for evolving known pathogens (e.g., C. difficile, noroviruses, community-associated MRSA), development of new therapies (e.g., gene therapy), and increasing concern for the threat of bioweapons attacks established a need to address a broader scope of issues than in previous isolation guidelines.

3. The successful experience with Standard Precautions, first recommended in the 1996 guideline, has led to a reaffirmation of this approach as the foundation for preventing transmission of infectious agents in all healthcare settings. New additions to the recommendations for Standard Precautions are Respiratory Hygiene/Cough Etiquette and safe injection practices, including the use of a mask when performing certain high-risk, prolonged procedures involving spinal canal punctures (e.g., myelography, epidural anesthesia). The need for a recommendation for Respiratory Hygiene/Cough Etiquette grew out of observations during the SARS outbreaks, where failure to implement simple source control measures with patients, visitors, and healthcare personnel with respiratory symptoms may have contributed to SARS coronavirus (SARS-CoV) transmission. The recommended practices have a strong evidence base. The continued occurrence of outbreaks of hepatitis B and hepatitis C viruses in ambulatory settings indicated a need to reiterate safe injection practice recommendations as part of Standard Precautions. The addition of a mask for certain spinal injections grew from recent evidence of an associated risk for developing meningitis caused by respiratory flora.

4. The accumulated evidence that environmental controls decrease the risk of life-threatening fungal infections in the most severely immunocompromised patients (allogeneic hematopoietic stem-cell transplant patients) led to the update on the components of the Protective Environment (PE).

5. Evidence that organizational characteristics (e.g., nurse staffing levels and composition, establishment of a safety culture) influence healthcare personnel adherence to recommended infection control practices, and therefore are important factors in preventing transmission of infectious agents, led to a new emphasis and recommendations for administrative involvement in the development and support of infection control programs.
6. Continued increase in the incidence of HAIs caused by multidrug-resistant organisms (MDROs) in all healthcare settings and the expanded body of knowledge concerning prevention of transmission of MDROs created a need for more specific recommendations for surveillance and control of these pathogens that would be practical and effective in various types of healthcare settings.

This document is intended for use by infection control staff, healthcare epidemiologists, healthcare administrators, nurses, other healthcare providers, and persons responsible for developing, implementing, and evaluating infection control programs for healthcare settings across the continuum of care. The reader is referred to other guidelines and websites for more detailed information and for recommendations concerning specialized infection control problems.

# Parts I-III: Review of the Scientific Data Regarding Transmission of Infectious Agents in Healthcare Settings
Part I reviews the relevant scientific literature that supports the recommended prevention and control practices. As with the 1996 guideline, the modes and factors that influence transmission risks are described in detail. New to the section on transmission are discussions of bioaerosols and of how droplet and airborne transmission may contribute to infection transmission. This became a concern during the SARS outbreaks of 2003, when transmission associated with aerosol-generating procedures was observed. Also new is a definition of "epidemiologically important organisms" that was developed to assist in the identification of clusters of infections that require investigation (i.e., multidrug-resistant organisms, C. difficile). Several other pathogens that hold special infection control interest (i.e., norovirus, SARS, Category A bioterrorism agents, prions, monkeypox, and the hemorrhagic fever viruses) also are discussed, to present new information and infection control lessons learned from experience with these agents. This section of the guideline also presents information on infection risks associated with specific healthcare settings and patient populations.

Part II updates information on the basic principles of hand hygiene, barrier precautions, safe work practices, and isolation practices that were included in previous guidelines. However, new to this guideline is important information on healthcare system components that influence transmission risks, including those under the influence of healthcare administrators. An important administrative priority that is described is the need for appropriate infection control staffing to meet the ever-expanding role of infection control professionals in the modern, complex healthcare system. Evidence presented also demonstrates another administrative concern, the importance of nurse staffing levels, including numbers of appropriately trained nurses in ICUs, for preventing HAIs. The role of the clinical microbiology laboratory in supporting infection control is described to emphasize the need for this service in healthcare facilities. Other factors that influence transmission risks are discussed (i.e., healthcare worker adherence to recommended infection control practices, organizational safety culture or climate, and education and training). Discussed for the first time in an isolation guideline is surveillance of healthcare-associated infections.
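HAI surveillance typically reduces to simple incidence rates. As one worked illustration (the per-1,000-device-days denominator is a standard surveillance convention assumed here, not a formula given in this guideline excerpt):

```python
# Illustrative HAI surveillance calculation: infections per 1,000 device-days,
# a common denominator convention in HAI surveillance (assumed here).
def hai_rate_per_1000(infections: int, device_days: int) -> float:
    return infections / device_days * 1000

# e.g., 6 central line-associated bloodstream infections over 2,400 line-days
print(round(hai_rate_per_1000(6, 2400), 2))  # 2.5 per 1,000 central-line days
```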
The information presented will be useful to new infection control professionals as well as persons involved in designing or responding to state programs for public reporting of HAI rates.

Part III describes each of the categories of precautions developed by the Healthcare Infection Control Practices Advisory Committee (HICPAC) and the Centers for Disease Control and Prevention (CDC) and provides guidance for their application in various healthcare settings. The categories of Transmission-Based Precautions are unchanged from those in the 1996 guideline: Contact, Droplet, and Airborne. One important change is the recommendation to don the indicated personal protective equipment (gowns, gloves, mask) upon entry into the patient's room for patients who are on Contact and/or Droplet Precautions, since the nature of the interaction with the patient cannot be predicted with certainty and contaminated environmental surfaces are important sources for transmission of pathogens. In addition, the Protective Environment (PE) for allogeneic hematopoietic stem cell transplant patients, described in previous guidelines, has been updated.

# Tables, Appendices, and Other Information
Several tables summarize important information:
1. a summary of the evolution of this document;
2. guidance on using empiric isolation precautions according to a clinical syndrome;
3. a summary of infection control recommendations for Category A agents of bioterrorism;
4. components of Standard Precautions and recommendations for their application;
5. components of the Protective Environment; and
6. a glossary of definitions used in this guideline.

New in this guideline is a figure that shows a recommended sequence for donning and removing personal protective equipment used for isolation precautions to optimize safety and prevent self-contamination during removal.

# Appendix A: Type and Duration of Precautions Recommended for Selected Infections and Conditions
Appendix A consists of an updated alphabetical list of most infectious agents and clinical conditions for which isolation precautions are recommended. A preamble to the Appendix provides a rationale for recommending the use of one or more Transmission-Based Precautions, in addition to Standard Precautions, based on a review of the literature and evidence demonstrating a real or potential risk for person-to-person transmission in healthcare settings. The type and duration of recommended precautions are presented with additional comments concerning the use of adjunctive measures or other relevant considerations to prevent transmission of the specific agent. Relevant citations are included.

# Pre-Publication of the Guideline on Preventing Transmission of MDROs
New to this guideline is a comprehensive review and detailed recommendations for prevention of transmission of MDROs. This portion of the guideline was published electronically in October 2006 and updated in November 2006 (Siegel JD, Rhinehart E, Jackson M, Chiarello L, and HICPAC. Management of Multidrug-Resistant Organisms in Healthcare Settings, 2006) and is considered a part of the Guideline for Isolation Precautions. This section provides a detailed review of the complex topic of MDRO control in healthcare settings and is intended to provide a context for evaluation of MDROs at individual healthcare settings. A rationale and institutional requirements for developing an effective MDRO control program are summarized.
Although the focus of this guideline is on measures to prevent transmission of MDROs in healthcare settings, information concerning the judicious use of antimicrobial agents is presented, since such practices are intricately related to the size of the reservoir of MDROs, which in turn influences transmission (e.g., colonization pressure). Two tables summarize recommended prevention and control practices using the following seven categories of interventions to control MDROs: administrative measures, education of healthcare personnel, judicious antimicrobial use, surveillance, infection control precautions, environmental measures, and decolonization. Recommendations for each category apply to, and are adapted for, the various healthcare settings. With the increasing incidence and prevalence of MDROs, all healthcare facilities must prioritize effective control of MDRO transmission. Facilities should identify prevalent MDROs at the facility, implement control measures, assess the effectiveness of control programs, and demonstrate decreasing MDRO rates. A set of intensified MDRO prevention interventions is presented, to be added
1. if the incidence of transmission of a target MDRO is NOT decreasing despite implementation of basic MDRO infection control measures, and
2. when the first case(s) of an epidemiologically important MDRO is identified within a healthcare facility.

# Summary
This updated guideline responds to changes in healthcare delivery and addresses new concerns about infection control and the transmission of infectious agents to patients and healthcare workers in the United States. The primary objective of the guideline is to improve the safety of the nation's healthcare delivery system by reducing the rates of HAIs.

# Part I: Review of Scientific Data Regarding Transmission of Infectious Agents in Healthcare Settings

# I.A. Evolution of the 2007 Document
The Guideline for Isolation Precautions: Preventing Transmission of Infectious Agents in Healthcare Settings 2007 builds upon a series of isolation and infection prevention documents promulgated since 1970. These previous documents are summarized and referenced in Table 1 and in Part I of the 1996 Guideline for Isolation Precautions in Hospitals 1 .

Objectives and methods. The objectives of this guideline are to
1. provide infection control recommendations for all components of the healthcare delivery system, including hospitals, long-term care facilities, ambulatory care, home care, and hospice;
2. reaffirm Standard Precautions as the foundation for preventing transmission during patient care in all healthcare settings;
3. reaffirm the importance of implementing Transmission-Based Precautions based on the clinical presentation or syndrome and likely pathogens until the infectious etiology has been determined (Table 2); and
4. provide epidemiologically sound and, whenever possible, evidence-based recommendations.

This guideline is designed for use by individuals who are charged with administering infection control programs in hospitals and other healthcare settings. The information also will be useful for other healthcare personnel, healthcare administrators, and anyone needing information about infection control measures to prevent transmission of infectious agents. Commonly used abbreviations are provided on page 11, and terms used in the guideline are defined in the Glossary (page 137). MEDLINE and PubMed were used to search for relevant studies published in English, focusing on those published since 1996.
Much of the evidence cited for preventing transmission of infectious agents in healthcare settings is derived from studies that used "quasi-experimental designs," also referred to as nonrandomized, pre-post-intervention study designs 2 . Although these types of studies can provide valuable information regarding the effectiveness of various interventions, several factors decrease the certainty of attributing an improved outcome to a specific intervention. These include difficulties in controlling for important confounding variables; the use of multiple interventions during an outbreak; and results that are explained by the statistical principle of regression to the mean (e.g., improvement over time without any intervention) 3 . Observational studies remain relevant and have been used to evaluate infection control interventions 4,5 . The quality of studies, consistency of results, and correlation with results from randomized, controlled trials, when available, were considered during the literature review and the assignment of evidence-based categories (see Part IV: Recommendations) to the recommendations in this guideline. Several authors have summarized properties to consider when evaluating studies, whether for determining if the results should change practice or for designing new studies 2,6,7 .

Changes or clarifications in terminology. This guideline contains four changes in terminology from the 1996 guideline:
- The term nosocomial infection is retained to refer only to infections acquired in hospitals. The term healthcare-associated infection (HAI) is used to refer to infections associated with healthcare delivery in any setting (e.g., hospitals, long-term care facilities, ambulatory settings, home care). This term reflects the inability to determine with certainty where the pathogen is acquired, since patients may be colonized with or exposed to potential pathogens outside of the healthcare setting, before receiving health care, or may develop infections caused by those pathogens when exposed to the conditions associated with delivery of healthcare. Additionally, patients frequently move among the various settings within a healthcare system 8 .
- A new addition to the practice recommendations for Standard Precautions is Respiratory Hygiene/Cough Etiquette. While Standard Precautions generally apply to the recommended practices of healthcare personnel during patient care, Respiratory Hygiene/Cough Etiquette applies broadly to all persons who enter a healthcare setting, including healthcare personnel, patients, and visitors. These recommendations evolved from observations during the SARS epidemic that failure to implement basic source control measures with patients, visitors, and healthcare personnel with signs and symptoms of respiratory tract infection may have contributed to SARS coronavirus (SARS-CoV) transmission. This concept has been incorporated into CDC planning documents for SARS and pandemic influenza 9,10 .
- The term "Airborne Precautions" has been supplemented with the term "Airborne Infection Isolation Room (AIIR)" for consistency with the Guidelines for Environmental Infection Control in Healthcare Facilities 11 , the Guidelines for Preventing the Transmission of Mycobacterium tuberculosis in Health-Care Settings, 2005 12 , and the American Institute of Architects (AIA) guidelines for design and construction of hospitals, 2006 13 .
- A set of prevention measures termed the Protective Environment has been added to the precautions used to prevent HAIs.
These measures, which have been defined in other guidelines, consist of engineering and design interventions that decrease the risk of exposure to environmental fungi for severely immunocompromised allogeneic hematopoietic stem cell transplant (HSCT) patients during their highest-risk phase, usually the first 100 days post-transplant, or longer in the presence of graft-versus-host disease 11 . Recommendations for a Protective Environment apply only to acute care hospitals that provide care to HSCT patients.

Scope. This guideline, like its predecessors, focuses primarily on interactions between patients and healthcare providers. The Guidelines for the Prevention of MDRO Infection were published separately in November 2006 and are available online (Management of Multidrug-Resistant Organisms in Healthcare Settings). Several other HICPAC guidelines to prevent transmission of infectious agents associated with healthcare delivery are cited, e.g., the Guideline for Hand Hygiene, Guideline for Environmental Infection Control, Guideline for Prevention of Healthcare-Associated Pneumonia, and Guideline for Infection Control in Healthcare Personnel 11,14,16,17 . In combination, these provide comprehensive guidance on the primary infection control measures for ensuring a safe environment for patients and healthcare personnel. This guideline does not discuss in detail specialized infection control issues in defined populations that are addressed elsewhere (e.g., 12 ). An exception has been made by including abbreviated guidance for a Protective Environment used for allogeneic HSCT recipients, because components of the Protective Environment have been more completely defined since publication of the Guidelines for Preventing Opportunistic Infections Among HSCT Recipients in 2000 and the Guideline for Environmental Infection Control in Healthcare Facilities 11,15 .

# I.B. Rationale for Standard and Transmission-Based Precautions in healthcare settings
Transmission of infectious agents within a healthcare setting requires three elements: a source (or reservoir) of infectious agents, a susceptible host with a portal of entry receptive to the agent, and a mode of transmission for the agent. This section describes the interrelationship of these elements in the epidemiology of HAIs.

# I.B.1. Sources of infectious agents.
Infectious agents transmitted during healthcare derive primarily from human sources, but inanimate environmental sources also are implicated in transmission. Human reservoirs include patients, healthcare personnel 17,29-39 , and household members and other visitors. Such source individuals may have active infections, may be in the asymptomatic and/or incubation period of an infectious disease, or may be transiently or chronically colonized with pathogenic microorganisms, particularly in the respiratory and gastrointestinal tracts. The endogenous flora of patients (e.g., bacteria residing in the respiratory or gastrointestinal tract) also are a source of HAIs.

# I.B.2. Susceptible hosts.
Infection is the result of a complex interrelationship between a potential host and an infectious agent. Most of the factors that influence infection and the occurrence and severity of disease are related to the host. However, characteristics of the host-agent interaction as it relates to pathogenicity, virulence, and antigenicity are also important, as are the infectious dose, mechanisms of disease production, and route of exposure 55 .
There is a spectrum of possible outcomes following exposure to an infectious agent. Some persons exposed to pathogenic microorganisms never develop symptomatic disease, while others become severely ill and even die. Some individuals are prone to becoming transiently or permanently colonized but remain asymptomatic. Still others progress from colonization to symptomatic disease either immediately following exposure or after a period of asymptomatic colonization. The immune state at the time of exposure to an infectious agent, interaction between pathogens, and virulence factors intrinsic to the agent are important predictors of an individual's outcome. Host factors such as extremes of age and underlying disease (e.g., diabetes 56,57 ), human immunodeficiency virus/acquired immune deficiency syndrome 58,59 , malignancy, and transplants 18,60,61 can increase susceptibility to infection, as do a variety of medications that alter the normal flora (e.g., antimicrobial agents, gastric acid suppressants, corticosteroids, antirejection drugs, antineoplastic agents, and immunosuppressive drugs). Surgical procedures and radiation therapy impair defenses of the skin and other involved organ systems. Indwelling devices such as urinary catheters, endotracheal tubes, central venous and arterial catheters, and synthetic implants facilitate development of HAIs by allowing potential pathogens to bypass local defenses that would ordinarily impede their invasion and by providing surfaces for development of biofilms that may facilitate adherence of microorganisms and protect them from antimicrobial activity 65 . Some infections associated with invasive procedures result from transmission within the healthcare facility; others arise from the patient's endogenous flora. High-risk patient populations with noteworthy risk factors for infection are discussed further in Sections I.D, I.E, and I.F.

# I.B.3. Modes of transmission.
Several classes of pathogens can cause infection, including bacteria, viruses, fungi, parasites, and prions. The modes of transmission vary by type of organism, and some infectious agents may be transmitted by more than one route: some are transmitted primarily by direct or indirect contact (e.g., herpes simplex virus [HSV], respiratory syncytial virus, Staphylococcus aureus), others by the droplet route (e.g., influenza virus, B. pertussis) or airborne route (e.g., M. tuberculosis). Other infectious agents, such as bloodborne viruses (e.g., hepatitis B and C viruses and HIV), are transmitted rarely in healthcare settings, via percutaneous or mucous membrane exposure. Importantly, not all infectious agents are transmitted from person to person. These are distinguished in Appendix A. The three principal routes of transmission are summarized below.

# I.B.3.a. Contact transmission.
The most common mode of transmission, contact transmission is divided into two subgroups: direct contact and indirect contact.

# I.B.3.a.i. Direct contact transmission.
Direct transmission occurs when microorganisms are transferred from one infected person to another person without a contaminated intermediate object or person. Opportunities for direct contact transmission between patients and healthcare personnel have been summarized in the Guideline for Infection Control in Healthcare Personnel, 1998 17 , and include:
- blood or other blood-containing body fluids from a patient directly enter a caregiver's body through contact with a mucous membrane 66 or breaks (i.e., cuts, abrasions) in the skin 67 .
- mites from a scabies-infested patient are transferred to the skin of a caregiver during direct ungloved contact with the patient's skin 68,69 .
- a healthcare provider develops herpetic whitlow on a finger after contact with HSV when providing oral care to a patient without using gloves, or HSV is transmitted to a patient from a herpetic whitlow on an ungloved hand of a healthcare worker (HCW) 70,71 .

# I.B.3.a.ii. Indirect contact transmission.
Indirect transmission involves the transfer of an infectious agent through a contaminated intermediate object or person. In the absence of a point-source outbreak, it is difficult to determine how indirect transmission occurs. However, extensive evidence cited in the Guideline for Hand Hygiene in Health-Care Settings suggests that the contaminated hands of healthcare personnel are important contributors to indirect contact transmission 16 . Examples of opportunities for indirect contact transmission include:
- Hands of healthcare personnel may transmit pathogens after touching an infected or colonized body site on one patient or a contaminated inanimate object, if hand hygiene is not performed before touching another patient 72,73 .
- Patient-care devices (e.g., electronic thermometers, glucose monitoring devices) may transmit pathogens if devices contaminated with blood or body fluids are shared between patients without cleaning and disinfecting between patients 74-77 .
- Shared toys may become a vehicle for transmitting respiratory viruses (e.g., respiratory syncytial virus 24,78,79 ) or pathogenic bacteria (e.g., Pseudomonas aeruginosa 80 ) among pediatric patients.
- Instruments that are inadequately cleaned between patients before disinfection or sterilization (e.g., endoscopes or surgical instruments) or that have manufacturing defects that interfere with the effectiveness of reprocessing 86,87 may transmit bacterial and viral pathogens.
- Clothing, uniforms, laboratory coats, or isolation gowns used as personal protective equipment (PPE) may become contaminated with potential pathogens after care of a patient colonized or infected with an infectious agent (e.g., MRSA 88 , VRE 89 , and C. difficile 90 ). Although contaminated clothing has not been implicated directly in transmission, the potential exists for soiled garments to transfer infectious agents to successive patients.

# I.B.3.b. Droplet transmission.
Droplet transmission is, technically, a form of contact transmission, and some infectious agents transmitted by the droplet route also may be transmitted by the direct and indirect contact routes. However, in contrast to contact transmission, respiratory droplets carrying infectious pathogens transmit infection when they travel directly from the respiratory tract of the infectious individual to susceptible mucosal surfaces of the recipient, generally over short distances, necessitating facial protection. Respiratory droplets are generated when an infected person coughs, sneezes, or talks 91,92 or during procedures such as suctioning, endotracheal intubation, cough induction by chest physiotherapy 97 , and cardiopulmonary resuscitation 98,99 . Evidence for droplet transmission comes from epidemiological studies of disease outbreaks, experimental studies 104 , and information on aerosol dynamics 91,105 . Studies have shown that the nasal mucosa, conjunctivae, and, less frequently, the mouth are susceptible portals of entry for respiratory viruses 106 .
The maximum distance for droplet transmission is currently unresolved, although pathogens transmitted by the droplet route have not been transmitted through the air over long distances, in contrast to the airborne pathogens discussed below. Historically, the area of defined risk has been a distance of ≤3 feet around the patient, based on epidemiologic and simulated studies of selected infections 103,104. Using this distance for donning masks has been effective in preventing transmission of infectious agents via the droplet route. However, experimental studies with smallpox 107,108 and investigations during the global SARS outbreaks of 2003 101 suggest that droplets from patients with these two infections could reach persons located 6 feet or more from their source. It is likely that the distance droplets travel depends on the velocity and mechanism by which respiratory droplets are propelled from the source, the density of respiratory secretions, environmental factors such as temperature and humidity, and the ability of the pathogen to maintain infectivity over that distance 105. Thus, a distance of ≤3 feet around the patient is best viewed as an example of what is meant by "a short distance from a patient" and should not be used as the sole criterion for deciding when a mask should be donned to protect from droplet exposure. Based on these considerations, it may be prudent to don a mask when within 6 to 10 feet of the patient or upon entry into the patient's room, especially when exposure to emerging or highly virulent pathogens is likely. More studies are needed to improve understanding of droplet transmission under various circumstances.

Droplet size is another variable under discussion. Droplets traditionally have been defined as being >5 µm in size. Droplet nuclei, particles arising from desiccation of suspended droplets, have been associated with airborne transmission and defined as ≤5 µm in size 105, a reflection of the pathogenesis of pulmonary tuberculosis that is not generalizable to other organisms. Observations of particle dynamics have demonstrated that a range of droplet sizes, including those with diameters of 30 µm or greater, can remain suspended in the air 109.
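The dependence of settling behavior on particle diameter can be illustrated with Stokes' law for small spheres settling in still air. The following is a simplified, illustrative sketch rather than part of the cited studies; it assumes unit-density (1000 kg/m³) spherical particles and air viscosity μ ≈ 1.8 × 10⁻⁵ Pa·s, and it neglects evaporation:

$v_t = \dfrac{\rho g d^2}{18\mu}$

For a 100 µm droplet, $v_t \approx 0.3$ m/s, so it falls 2 m in roughly 7 seconds; for a 5 µm droplet nucleus, $v_t \approx 8 \times 10^{-4}$ m/s, corresponding to more than 40 minutes to fall the same height. Because typical indoor air currents move much faster than a few millimeters per second, particles in the droplet-nucleus size range can remain suspended indefinitely, whereas large droplets settle to surfaces within seconds over short distances.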
The behavior of droplets and droplet nuclei affects recommendations for preventing transmission. Whereas fine airborne particles containing pathogens that are able to remain infective may transmit infections over long distances, requiring an airborne infection isolation room (AIIR) to prevent dissemination within a facility, organisms transmitted by the droplet route do not remain infective over long distances and therefore do not require special air handling and ventilation. Examples of infectious agents that are transmitted via the droplet route include Bordetella pertussis 110, influenza virus 23, adenovirus 111, rhinovirus 104, Mycoplasma pneumoniae 112, SARS-associated coronavirus (SARS-CoV) 21,96,113, group A streptococcus 114, and Neisseria meningitidis 95,103,115. Although respiratory syncytial virus may be transmitted by the droplet route, direct contact with infected respiratory secretions is the most important determinant of transmission, and consistent adherence to Standard plus Contact Precautions prevents transmission in healthcare settings 24,116,117.

Rarely, pathogens that are not transmitted routinely by the droplet route are dispersed into the air over short distances. For example, although S. aureus is transmitted most frequently by the contact route, viral upper respiratory tract infection has been associated with increased dispersal of S. aureus from the nose into the air for a distance of 4 feet under both outbreak and experimental conditions; this is known as the "cloud baby" and "cloud adult" phenomenon.

# I.B.3.c. Airborne transmission.

Airborne transmission occurs by dissemination of either airborne droplet nuclei or small particles in the respirable size range containing infectious agents that remain infective over time and distance (e.g., spores of Aspergillus spp. and Mycobacterium tuberculosis). Microorganisms carried in this manner may be dispersed over long distances by air currents and may be inhaled by susceptible individuals who have not had face-to-face contact with (or been in the same room with) the infectious individual. Preventing the spread of pathogens that are transmitted by the airborne route requires the use of special air handling and ventilation systems (e.g., AIIRs) to contain and then safely remove the infectious agent 11,12. Infectious agents to which this applies include Mycobacterium tuberculosis, rubeola virus (measles) 122, and varicella-zoster virus (chickenpox) 123. In addition, published data suggest the possibility that variola virus (smallpox) may be transmitted over long distances through the air under unusual circumstances, and AIIRs are recommended for this agent as well; however, droplet and contact routes are the more frequent routes of transmission for smallpox 108,128,129. In addition to AIIRs, respiratory protection with a NIOSH-certified N95 or higher-level respirator is recommended for healthcare personnel entering the AIIR to prevent acquisition of airborne infectious agents such as M. tuberculosis 12.

For certain other respiratory infectious agents, such as influenza 130,131 and rhinovirus 104, and even some gastrointestinal viruses (e.g., norovirus 132 and rotavirus 133), there is some evidence that the pathogen may be transmitted via small-particle aerosols under natural and experimental conditions. Such transmission has occurred over distances longer than 3 feet but within a defined airspace (e.g., a patient room), suggesting that it is unlikely that these agents remain viable on air currents that travel long distances. AIIRs are not required routinely to prevent transmission of these agents. Additional issues concerning examples of small-particle aerosol transmission of agents that are most frequently transmitted by the droplet route are discussed below.

# I.B.3.d. Emerging issues concerning airborne transmission of infectious agents.

# I.B.3.d.i. Transmission from patients.

The emergence of SARS in 2002, the importation of monkeypox into the United States in 2003, and the emergence of avian influenza present challenges to the assignment of isolation categories because of conflicting information and uncertainty about possible routes of transmission. Although SARS-CoV is transmitted primarily by contact and/or droplet routes, airborne transmission over a limited distance (e.g., within a room) has been suggested, though not proven. The same is true of other infectious agents, such as influenza virus 130 and noroviruses 132,142,143.
Influenza viruses are transmitted primarily by close contact with respiratory droplets 23,102, and acquisition by healthcare personnel has been prevented by Droplet Precautions, even when positive-pressure rooms were used in one center 144. However, inhalational transmission could not be excluded in an outbreak of influenza among the passengers and crew of a single aircraft 130. Observations of a protective effect of UV lights in preventing influenza among patients with tuberculosis during the influenza pandemic of 1957-58 have been used to suggest airborne transmission 145,146. In contrast to the strict interpretation of an airborne route for transmission (i.e., long distances beyond the patient room environment), short-distance transmission by small-particle aerosols generated under specific circumstances (e.g., during endotracheal intubation) to persons in the immediate area near the patient has been demonstrated. Also, aerosolized particles <100 µm can remain suspended in air when room air current velocities exceed the terminal settling velocities of the particles 109. SARS-CoV transmission has been associated with endotracheal intubation, noninvasive positive-pressure ventilation, and cardiopulmonary resuscitation 93,94,96,98,141. Although the most frequent routes of transmission of noroviruses are contact and food- and waterborne routes, several reports suggest that noroviruses may be transmitted through aerosolization of infectious particles from vomitus or fecal material 142,143,147,148. It is hypothesized that the aerosolized particles are inhaled and subsequently swallowed.

Roy and Milton proposed a new classification for aerosol transmission when evaluating routes of SARS transmission 149:

1. obligate: under natural conditions, disease occurs following transmission of the agent only through inhalation of small-particle aerosols (e.g., tuberculosis);
2. preferential: natural infection results from transmission through multiple routes, but small-particle aerosols are the predominant route (e.g., measles, varicella); and
3. opportunistic: agents that naturally cause disease through other routes, but under special circumstances may be transmitted via fine-particle aerosols.

This conceptual framework can explain rare occurrences of airborne transmission of agents that are transmitted most frequently by other routes (e.g., smallpox, SARS, influenza, noroviruses). Concerns about unknown or possible routes of transmission of agents associated with severe disease and no known treatment often result in more extreme prevention strategies than may be necessary; therefore, recommended precautions could change as the epidemiology of an emerging infection is defined and controversial issues are resolved.

# I.B.3.d.ii. Transmission from the environment.

Some airborne infectious agents are derived from the environment and do not usually involve person-to-person transmission. For example, anthrax spores present in a finely milled powdered preparation can be aerosolized from contaminated environmental surfaces and inhaled into the respiratory tract 150,151. Spores of environmental fungi (e.g., Aspergillus spp.) are ubiquitous in the environment and may cause disease in immunocompromised patients who inhale aerosolized spores (e.g., via construction dust) 152,153. As a rule, neither of these organisms is subsequently transmitted from infected patients. However, there is one well-documented report of person-to-person transmission of Aspergillus sp.
in the ICU setting that was most likely due to the aerosolization of spores during wound debridement 154. A Protective Environment refers to isolation practices designed to decrease the risk of exposure to environmental fungal agents in allogeneic HSCT patients 11,14,15. Environmental sources of respiratory pathogens (e.g., Legionella) that are transmitted to humans through a common aerosol source represent a route distinct from direct patient-to-patient transmission.

# I.B.3.e. Other sources of infection.

Transmission of infection from sources other than infectious individuals includes transmission associated with common environmental sources or vehicles (e.g., contaminated food, water, or medications [e.g., intravenous fluids]). Although Aspergillus spp. have been recovered from hospital water systems 159, the role of water as a reservoir for infections in immunosuppressed patients remains uncertain. Vectorborne transmission of infectious agents from mosquitoes, flies, rats, and other vermin also can occur in healthcare settings. Prevention of vectorborne transmission is not addressed in this document.

# I.C. Infectious Agents of Special Infection Control Interest for Healthcare Settings

Several infectious agents with important infection control implications that either were not discussed extensively in previous isolation guidelines or have emerged recently are discussed below. These are epidemiologically important organisms (e.g., C. difficile), agents of bioterrorism, prions, SARS-CoV, monkeypox, noroviruses, and the hemorrhagic fever viruses. Experience with these agents has broadened the understanding of modes of transmission and effective preventive measures. These agents are included for purposes of information and, for some (i.e., SARS-CoV, monkeypox), because of the lessons that have been learned about preparedness planning and responding effectively to new infectious agents.

# I.C.1. Epidemiologically important organisms.

Any infectious agent transmitted in healthcare settings may, under defined conditions, become targeted for control because it is epidemiologically important. C. difficile is specifically discussed below because of wide recognition of its current importance in U.S. healthcare facilities. In determining what constitutes an "epidemiologically important organism," the following characteristics apply:

- A propensity for transmission within healthcare facilities, based on published reports and the occurrence of temporal or geographic clusters of ≥2 patients (e.g., C. difficile, norovirus, respiratory syncytial virus (RSV), influenza, rotavirus, Enterobacter spp., Serratia spp., group A streptococcus). A single case of healthcare-associated invasive disease caused by certain pathogens (e.g., group A streptococcus post-operatively 160, in burn units 161, or in a LTCF 162; Legionella sp. 14,163; Aspergillus sp. 164) is generally considered a trigger for investigation and enhanced control measures because of the risk of additional cases and the severity of illness associated with these infections.
- Antimicrobial resistance: resistance to first-line therapies (e.g., MRSA, VISA, VRSA, VRE, ESBL-producing organisms).
- Common and uncommon microorganisms with unusual patterns of resistance within a facility (e.g., the first isolate of Burkholderia cepacia complex or Ralstonia spp. in non-CF patients, or a quinolone-resistant strain of Pseudomonas aeruginosa in a facility).
- Difficult to treat because of innate or acquired resistance to multiple classes of antimicrobial agents (e.g., Stenotrophomonas maltophilia, Acinetobacter spp.).
- Association with serious clinical disease, increased morbidity and mortality (e.g., MRSA and MSSA, group A streptococcus).
- A newly discovered or reemerging pathogen.

# I.C.1.a. C. difficile.

C. difficile is a spore-forming, gram-positive anaerobic bacillus that was first isolated from stools of neonates in 1935 165 and identified as the most commonly identified causative agent of antibiotic-associated diarrhea and pseudomembranous colitis in 1977 166. This pathogen is a major cause of healthcare-associated diarrhea and has been responsible for many large outbreaks in healthcare settings that were extremely difficult to control. Important factors that contribute to healthcare-associated outbreaks include environmental contamination, persistence of spores for prolonged periods of time, resistance of spores to routinely used disinfectants and antiseptics, hand carriage by healthcare personnel to other patients, and exposure of patients to frequent courses of antimicrobial agents 167. Antimicrobials most frequently associated with increased risk of C. difficile include third-generation cephalosporins, clindamycin, vancomycin, and fluoroquinolones.

Since 2001, outbreaks and sporadic cases of C. difficile with increased morbidity and mortality have been observed in several U.S. states, Canada, England, and the Netherlands. The same strain of C. difficile has been implicated in these outbreaks 173. This strain, toxinotype III, North American PFGE type 1, PCR ribotype 027 (NAP1/027), has been found to hyperproduce toxin A (16-fold increase) and toxin B (23-fold increase) compared with isolates from 12 different pulsed-field gel electrophoresis (PFGE) types. A recent survey of U.S. infectious disease physicians found that 40% perceived recent increases in the incidence and severity of C. difficile disease 174. Standardization of testing methodology and surveillance definitions is needed for accurate comparisons of trends in rates among hospitals 175. It is hypothesized that the incidence of disease and apparent heightened transmissibility of this new strain may be due, at least in part, to the greater production of toxins A and B, increasing the severity of diarrhea and resulting in more environmental contamination. Considering the greater morbidity, mortality, length of stay, and costs associated with C. difficile disease in both acute care and long-term care facilities, control of this pathogen is now even more important than previously. Prevention of transmission focuses on syndromic application of Contact Precautions for patients with diarrhea, accurate identification of patients, environmental measures (e.g., rigorous cleaning of patient rooms), and consistent hand hygiene. Use of soap and water, rather than alcohol-based hand rubs, for mechanical removal of spores from hands, and use of a bleach-containing disinfectant (5,000 ppm) for environmental disinfection (see the illustrative dilution below), may be valuable when there is transmission in a healthcare facility. See Appendix A for specific recommendations.
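For orientation, the 5,000 ppm target can be related to a common 1:10 dilution of household bleach. This is an illustrative calculation only; it assumes a product containing approximately 5.25% sodium hypochlorite, a concentration that varies by product and is not specified in this guideline:

$5.25\% \approx 52{,}500\ \text{ppm}, \qquad \dfrac{52{,}500\ \text{ppm}}{10} \approx 5{,}250\ \text{ppm} \ge 5{,}000\ \text{ppm}$

That is, one part of such bleach in nine parts water yields a solution at or slightly above the stated 5,000 ppm level.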
# I.C.1.b. Multidrug-resistant organisms (MDROs).

In general, MDROs are defined as microorganisms, predominantly bacteria, that are resistant to one or more classes of antimicrobial agents 176. Although the names of certain MDROs suggest resistance to only one agent (e.g., methicillin-resistant Staphylococcus aureus [MRSA], vancomycin-resistant enterococci [VRE]), these pathogens are usually resistant to all but a few commercially available antimicrobial agents. This latter feature defines MDROs that are considered to be epidemiologically important and deserve special attention in healthcare facilities 177. Other MDROs of current concern include multidrug-resistant Streptococcus pneumoniae (MDRSP), which is resistant to penicillin and other broad-spectrum agents such as macrolides and fluoroquinolones; multidrug-resistant gram-negative bacilli (MDR-GNB), especially those producing extended-spectrum beta-lactamases (ESBLs); and strains of S. aureus that are intermediate or resistant to vancomycin (i.e., VISA and VRSA) 178-198.

MDROs are transmitted by the same routes as antimicrobial-susceptible infectious agents. Patient-to-patient transmission in healthcare settings, usually via the hands of HCWs, has been a major factor accounting for the increase in MDRO incidence and prevalence, especially for MRSA and VRE in acute care facilities. Preventing the emergence and transmission of these pathogens requires a comprehensive approach that includes administrative involvement and measures (e.g., nurse staffing, communication systems, performance improvement processes to ensure adherence to recommended infection control measures), education and training of medical and other healthcare personnel, judicious antibiotic use, comprehensive surveillance for targeted MDROs, application of infection control precautions during patient care, environmental measures (e.g., cleaning and disinfection of the patient care environment and equipment, dedicated single-patient use of noncritical equipment), and decolonization therapy when appropriate. The prevention and control of MDROs is a national priority, one that requires that all healthcare facilities and agencies assume responsibility and participate in community-wide control programs 176,177. A detailed discussion of this topic and recommendations for prevention, published in 2006, may be found in Management of Multidrug-Resistant Organisms in Healthcare Settings (2006).

# I.C.2. Agents of bioterrorism.

CDC has designated the agents that cause anthrax, smallpox, plague, tularemia, viral hemorrhagic fevers, and botulism as Category A (high priority) because these agents can be easily disseminated environmentally and/or transmitted from person to person; can cause high mortality and have the potential for major public health impact; might cause public panic and social disruption; and require special action for public health preparedness 202. General information relevant to infection control in healthcare settings for Category A agents of bioterrorism is summarized in Table 3. Consult the CDC bioterrorism website for additional, updated Category A agent information as well as information concerning Category B and C agents of bioterrorism. Category B and C agents are important but are not as readily disseminated and cause less morbidity and mortality than Category A agents. Healthcare facilities confront a different set of issues when dealing with a suspected bioterrorism event as compared with other communicable diseases.
An understanding of the epidemiology, modes of transmission, and clinical course of each disease, as well as carefully drafted plans that provide an approach and relevant websites and other resources for disease-specific guidance to healthcare, administrative, and support personnel, are essential for responding to and managing a bioterrorism event. Infection control issues to be addressed include:

1. identifying persons who may be exposed or infected;
2. preventing transmission among patients, healthcare personnel, and visitors;
3. providing treatment, chemoprophylaxis, or vaccine to potentially large numbers of people;
4. protecting the environment, including the logistical aspects of securing sufficient numbers of AIIRs or designating areas for patient cohorts when there is an insufficient number of AIIRs available;
5. providing adequate quantities of appropriate personal protective equipment; and
6. identifying appropriate staff to care for potentially infectious patients (e.g., vaccinated healthcare personnel for care of patients with smallpox).

The response is likely to differ for exposures resulting from an intentional release compared with naturally occurring disease because of the large number of persons that can be exposed at the same time and possible differences in pathogenicity. Disease-specific guidance is available for anthrax 203, smallpox, plague 207,208, botulinum toxin 209, tularemia 210, and hemorrhagic fever viruses 211,212.

# I.C.2.a. Pre-event administration of smallpox (vaccinia) vaccine to healthcare personnel.

Vaccination of personnel in preparation for a possible smallpox exposure has important infection control implications. These include the need for meticulous screening for vaccine contraindications in persons who are at increased risk for adverse vaccinia events; containment and monitoring of the vaccination site to prevent transmission in the healthcare setting and at home; and the management of patients with vaccinia-related adverse events 216,217. The pre-event U.S. smallpox vaccination program of 2003 is an example of the effectiveness of carefully developed recommendations for both screening potential vaccinees for contraindications and vaccination-site care and monitoring. Approximately 760,000 individuals were vaccinated in the Department of Defense and 40,000 in the civilian or public health populations from December 2002 to February 2005, including approximately 70,000 who worked in healthcare settings. There were no cases of eczema vaccinatum, progressive vaccinia, fetal vaccinia, or contact transfer of vaccinia in healthcare settings or in military workplaces 218,219. Outside the healthcare setting, there were 53 cases of contact transfer from military vaccinees to close personal contacts (e.g., bed partners or contacts during participation in sports such as wrestling 220). All contact transfers were from individuals who were not following recommendations to cover their vaccination sites. Vaccinia virus was confirmed by culture or PCR in 30 cases, and two of the confirmed cases resulted from tertiary transfer. All recipients, including one breast-fed infant, recovered without complication. Subsequent studies using viral culture and PCR techniques have confirmed the effectiveness of semipermeable dressings to contain vaccinia. This experience emphasizes the importance of ensuring that newly vaccinated healthcare personnel adhere to recommended vaccination-site care, especially if they are to care for high-risk patients.
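The figures cited above imply a low rate of contact transfer among vaccinees outside healthcare settings. As a rough, illustrative calculation (using the military denominator, since all reported transfers originated from military vaccinees):

$\dfrac{53}{760{,}000} \approx 7\ \text{transfers per }100{,}000\ \text{vaccinees}$

and transfers occurred essentially only when vaccination-site care recommendations were not followed.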
Recommendations for pre-event smallpox vaccination of healthcare personnel and vaccinia-related infection control recommendations are published in the MMWR 216,225, with updates posted on the CDC bioterrorism website 205.

# I.C.3. Prions.

Creutzfeldt-Jakob disease (CJD) is a rapidly progressive, degenerative, neurologic disorder of humans with an incidence in the United States of approximately 1 person/million population/year 226,227. Most cases are sporadic, but iatrogenic transmission has occurred (e.g., in recipients of contaminated human growth hormone 228,229, from implantation of contaminated human dura mater grafts 230, or from corneal transplants 231). Transmission has been linked to the use of contaminated neurosurgical instruments or stereotactic electroencephalogram electrodes 232-235. Prion diseases in animals include scrapie in sheep and goats, bovine spongiform encephalopathy (BSE, or "mad cow disease") in cattle, and chronic wasting disease in deer and elk 236. BSE, first recognized in the United Kingdom (UK) in 1986, was associated with a major epidemic among cattle that had consumed contaminated meat and bone meal.

The possible transmission of BSE to humans, causing variant CJD (vCJD), was first described in 1996 and subsequently found to be associated with consumption of BSE-contaminated cattle products, primarily in the United Kingdom. There is strong epidemiologic and laboratory evidence for a causal association between the causative agent of BSE and vCJD 237. Although most cases of vCJD have been reported from the UK, a few cases also have been reported from Europe, Japan, Canada, and the United States. Most vCJD cases worldwide lived in or visited the UK during the years of a large outbreak of BSE (1980-96) and may have consumed contaminated cattle products during that time. Although there has been no indigenously acquired vCJD in the United States, the sporadic occurrence of BSE in cattle in North America has heightened awareness of the possibility that such infections could occur and has led to increased surveillance activities. Updated information may be found on the CDC website for Creutzfeldt-Jakob Disease, Classic (CJD). The public health impact of prion diseases has been reviewed 238.

vCJD in humans has different clinical and pathologic characteristics from sporadic or classic CJD 239, including the following:

1. younger median age at death: 28 (range 16-48) vs. 68 years;
2. longer duration of illness: median 14 months vs. 4-6 months;
3. increased frequency of sensory symptoms and early psychiatric symptoms with delayed onset of frank neurologic signs; and
4. detection of prions in tonsillar and other lymphoid tissues from vCJD patients but not from sporadic CJD patients 240.

Similar to sporadic CJD, there have been no reported cases of direct human-to-human transmission of vCJD by casual or environmental contact, droplet, or airborne routes. Ongoing blood safety surveillance in the U.S. has not detected sporadic CJD transmission through blood transfusion. However, bloodborne transmission of vCJD is believed to have occurred in two UK patients 244,245.

# I.C.4. Severe acute respiratory syndrome (SARS).

SARS is a respiratory disease caused by SARS-CoV, a previously unrecognized member of the coronavirus family, that emerged in late 2002 247,248. The incubation period from exposure to the onset of symptoms is 2 to 7 days but can be as long as 10 days and uncommonly even longer 249. The illness is initially difficult to distinguish from other common respiratory infections.
Outbreaks in healthcare settings, with transmission to large numbers of healthcare personnel and patients, have been a striking feature of SARS; undiagnosed, infectious patients and visitors were important initiators of these outbreaks 21. The relative contribution of potential modes of transmission is not precisely known. There is ample evidence for droplet and contact transmission 96,101,113; however, opportunistic airborne transmission cannot be excluded 101,135-139,149,255. For example, exposure to aerosol-generating procedures (e.g., endotracheal intubation, suctioning) was associated with transmission of infection to large numbers of healthcare personnel outside of the United States 93,94,96,98,253. Therefore, aerosolization of small infectious particles generated during these and other similar procedures could be a risk factor for transmission to others within a multi-bed room or shared airspace. A review of the infection control literature generated from the SARS outbreaks of 2003 concluded that the greatest risk of transmission is to those who have close contact, are not properly trained in use of protective infection control procedures, or do not consistently use PPE, and that N95 or higher respirators may offer additional protection to those exposed to aerosol-generating procedures and high-risk activities 256,257. Organizational and individual factors that affected adherence to infection control practices for SARS also were identified 257.

Control of SARS requires a coordinated, dynamic response by multiple disciplines in a healthcare setting 9. Early detection of cases is accomplished by screening persons with symptoms of a respiratory infection for history of travel to areas experiencing community transmission or contact with SARS patients, followed by implementation of Respiratory Hygiene/Cough Etiquette (i.e., placing a mask over the patient's nose and mouth) and physical separation from other patients in common waiting areas. The precise combination of precautions to protect healthcare personnel has not been determined 259. In Hong Kong, the use of Droplet and Contact Precautions, which included use of a mask but not a respirator, was effective in protecting healthcare personnel 113. However, in Toronto, consistent use of an N95 respirator was slightly more protective than a mask 93. It is noteworthy that there was no transmission of SARS-CoV to public hospital workers in Vietnam despite inconsistent use of infection control measures, including use of PPE, which suggests that other factors (e.g., severity of disease, frequency of high-risk procedures or events, environmental features) may influence opportunities for transmission 260.

SARS-CoV also has been transmitted in the laboratory setting through breaches in recommended laboratory practices. Research laboratories where SARS-CoV was under investigation were the source of most cases reported after the first series of outbreaks in the winter and spring of 2003 261,262. Studies of the SARS outbreaks of 2003 and transmissions that occurred in the laboratory re-affirm the effectiveness of recommended infection control precautions and highlight the importance of consistent adherence to these measures. Lessons from the SARS outbreaks are useful for planning to respond to future public health crises, such as pandemic influenza and bioterrorism events.
Surveillance for cases among patients and healthcare personnel, ensuring availability of adequate supplies and staffing, and limiting access to healthcare facilities were important factors in the response to SARS that have been summarized 9.

# I.C.5. Monkeypox.

Monkeypox is a rare zoonotic viral disease, endemic in parts of Central and West Africa, that was first detected in the United States in June 2003 after infected rodents imported from West Africa transmitted the virus to pet prairie dogs, which in turn infected humans 263. This outbreak demonstrates the importance of recognition and prompt reporting of unusual disease presentations by clinicians to enable prompt identification of the etiology, and the potential of epizootic diseases to spread from animal reservoirs to humans through personal and occupational exposure 264. Limited data on transmission of monkeypox are available. Transmission from infected animals and humans is believed to occur primarily through direct contact with lesions and respiratory secretions; airborne transmission from animals to humans is unlikely but cannot be excluded, and may have occurred in veterinary practices (e.g., during administration of nebulized medications to ill prairie dogs 265). Among humans, four instances of monkeypox transmission within hospitals have been reported in Africa among children, usually related to sharing the same ward or bed 266,267. Additional recent literature documents transmission of Congo Basin monkeypox in a hospital compound for an extended number of generations 268. There has been no evidence of airborne or any other person-to-person transmission of monkeypox in the United States, and no new cases of monkeypox have been identified since the outbreak in June 2003 269. The outbreak strain is a clade of monkeypox distinct from the Congo Basin clade and may have different epidemiologic properties (including human-to-human transmission potential) from monkeypox strains of the Congo Basin 270; this awaits further study. Smallpox vaccine is 85% protective against Congo Basin monkeypox 271. Since there is an associated case-fatality rate of up to 10%, administration of smallpox vaccine within 4 days to individuals who have had direct exposure to patients or animals with monkeypox is a reasonable consideration 272. For the most current information, see the CDC monkeypox website.

# I.C.6. Noroviruses.

Noroviruses, formerly referred to as Norwalk-like viruses, are members of the Caliciviridae family. These agents are transmitted via contaminated food or water and from person to person, causing explosive outbreaks of gastrointestinal disease 273. Environmental contamination also has been documented as a contributing factor in ongoing transmission during outbreaks 274,275. Although noroviruses cannot be propagated in cell culture, detection of viral RNA by molecular diagnostic techniques has facilitated a greater appreciation of their role in outbreaks of gastrointestinal disease 276. Reported outbreaks in hospitals 132,142,277, nursing homes 275, cruise ships 284,285, hotels 143,147, schools 148, and large crowded shelters established for hurricane evacuees 286 demonstrate their highly contagious nature, the disruptive impact they have in healthcare facilities and the community, and the difficulty of controlling outbreaks in settings where people share common facilities and space. Of note, there is nearly a fivefold increase in the risk to patients in outbreaks where a patient is the index case compared with exposure of patients during outbreaks where a staff member is the index case 287. The average incubation period for gastroenteritis caused by noroviruses is 12-48 hours, and the clinical course lasts 12-60 hours 273. Illness is characterized by acute onset of nausea, vomiting, abdominal cramps, and/or diarrhea.
The disease is largely self-limited; rarely, death caused by severe dehydration can occur, particularly among the elderly with debilitating health conditions. The epidemiology of norovirus outbreaks shows that even though primary cases may result from exposure to fecally contaminated food or water, secondary and tertiary cases often result from person-to-person transmission that is facilitated by contamination of fomites 273,288 and dissemination of infectious particles, especially during the process of vomiting 132,142,143,147,148,273,279,280. Widespread, persistent, and inapparent contamination of the environment and fomites can make outbreaks extremely difficult to control 147,275,284. These clinical observations and the detection of norovirus RNA on horizontal surfaces 5 feet above the level that might be touched normally suggest that, under certain circumstances, aerosolized particles may travel distances beyond 3 feet 147. It is hypothesized that infectious particles may be aerosolized from vomitus, inhaled, and swallowed. In addition, individuals who are responsible for cleaning the environment may be at increased risk of infection. Development of disease and transmission may be facilitated by the low infectious dose (i.e., <100 viral particles) 289 and the resistance of these viruses to the usual cleaning and disinfection agents (i.e., they may survive exposure to ≤10 ppm chlorine). An alternative phenolic agent that was shown to be effective against feline calicivirus was used for environmental cleaning in one outbreak 275,293. There are insufficient data to determine the efficacy of alcohol-based hand rubs against noroviruses when the hands are not visibly soiled 294. Absence of disease in certain individuals during an outbreak may be explained by protection from infection conferred by the B histo-blood group antigen 295. Consultation on outbreaks of gastroenteritis is available through CDC's Division of Viral and Rickettsial Diseases 296.

# I.C.7. Hemorrhagic fever viruses (HFV).

The hemorrhagic fever viruses are a mixed group of viruses that cause serious disease with high fever, skin rash, bleeding diathesis, and, in some cases, high mortality; the disease caused is referred to as viral hemorrhagic fever (VHF). Among the more commonly known HFVs are Ebola and Marburg viruses (Filoviridae), Lassa virus (Arenaviridae), Crimean-Congo hemorrhagic fever and Rift Valley fever viruses (Bunyaviridae), and dengue and yellow fever viruses (Flaviviridae) 212,297. These viruses are transmitted to humans via contact with infected animals or via arthropod vectors. While none of these viruses is endemic in the United States, outbreaks in affected countries provide potential opportunities for importation by infected humans and animals. Furthermore, there are concerns that some of these agents could be used as bioweapons 212. Person-to-person transmission is documented for Ebola, Marburg, Lassa, and Crimean-Congo hemorrhagic fever viruses. In resource-limited healthcare settings, transmission of these agents to healthcare personnel, patients, and visitors has been described, and in some outbreaks has accounted for a large proportion of cases. Transmissions within households also have occurred among individuals who had direct contact with ill persons or their body fluids, but not among those who did not have such contact 301. Evidence concerning the transmission of HFVs has been summarized 212,302. Person-to-person transmission is associated primarily with direct blood and body fluid contact.
Percutaneous exposure to contaminated blood carries a particularly high risk for transmission and increased mortality 303,304. The finding of large numbers of Ebola viral particles in the skin and the lumina of sweat glands has raised concern that transmission could occur from direct contact with intact skin, though epidemiologic evidence to support this is lacking 305. Postmortem handling of infected bodies is an important risk for transmission 301,306,307. In rare situations, cases in which the mode of transmission was unexplained among individuals with no known direct contact have led to speculation that airborne transmission could have occurred 298. However, airborne transmission of naturally occurring HFVs in humans has not been seen. In one study of airplane passengers exposed to an in-flight index case of Lassa fever, there was no transmission to any passengers 308.

In the laboratory setting, animals have been infected experimentally with Marburg or Ebola viruses via direct inoculation of the nose, mouth, and/or conjunctiva 309,310 and by using mechanically generated virus-containing aerosols 311,312. Transmission of Ebola virus among laboratory primates in an animal facility has been described 313. Secondarily infected animals were in individual cages and separated by approximately 3 meters. Although the possibility of airborne transmission was suggested, the authors were not able to exclude droplet or indirect contact transmission in this incidental observation.

Guidance on infection control precautions for HFVs that are transmitted person-to-person has been published by CDC 1,211 and by the Johns Hopkins Center for Civilian Biodefense Strategies 212. The most recent recommendations at the time of publication of this document were posted on the CDC website on 5/19/05 314. Inconsistencies among the various recommendations have raised questions about the appropriate precautions to use in U.S. hospitals. In less developed countries, outbreaks of HFVs have been controlled with basic hygiene, barrier precautions, safe injection practices, and safe burial practices 299,306. The preponderance of evidence on HFV transmission indicates that Standard, Contact, and Droplet Precautions with eye protection are effective in protecting healthcare personnel and visitors who may attend an infected patient. Single gloves are adequate for routine patient care; double-gloving is advised during invasive procedures (e.g., surgery) that pose an increased risk for blood exposure. Routine eye protection (i.e., goggles or face shield) is particularly important. Fluid-resistant gowns should be worn for all patient contact. Airborne Precautions are not required for routine patient care; however, use of AIIRs is prudent when procedures that could generate infectious aerosols are performed (e.g., endotracheal intubation, bronchoscopy, suctioning, autopsy procedures involving oscillating saws). N95 or higher-level respirators may provide added protection for individuals in a room during aerosol-generating procedures (Table 3, Appendix A). When a patient with a syndrome consistent with hemorrhagic fever also has a history of travel to an endemic area, precautions are initiated upon presentation and then modified as more information is obtained (Table 2). Patients with hemorrhagic fever syndrome in the setting of a suspected bioweapon attack should be managed using Airborne Precautions, including AIIRs, since the epidemiology of a potentially weaponized hemorrhagic fever virus is unpredictable.
# I.D. Transmission Risks Associated with Specific Types of Healthcare Settings

Numerous factors influence differences in transmission risks among the various healthcare settings. These include population characteristics (e.g., increased susceptibility to infections, type and prevalence of indwelling devices), intensity of care, exposure to environmental sources, length of stay, and frequency of interaction among patients/residents and between patients/residents and HCWs. These factors, as well as organizational priorities, goals, and resources, influence how different healthcare settings adapt transmission prevention guidelines to meet their specific needs 315,316. Infection control management decisions are informed by data regarding institutional experience/epidemiology; trends in community and institutional HAIs; local, regional, and national epidemiology; and emerging infectious disease threats.

# I.D.1. Hospitals.

Infection transmission risks are present in all hospital settings. However, certain hospital settings and patient populations have unique conditions that predispose patients to infection and merit special mention. These are often sentinel sites for the emergence of new transmission risks that may be unique to that setting or present opportunities for transmission to other settings in the hospital.

# I.D.1.a. Intensive care units.

Intensive care unit (ICU) patients are at increased risk of HAI 318,319 because of underlying diseases and conditions, the invasive medical devices and technology used in their care (e.g., central venous catheters and other intravascular devices, mechanical ventilators, extracorporeal membrane oxygenation (ECMO), hemodialysis/hemofiltration, pacemakers, implantable left ventricular assist devices), the frequency of contact with healthcare personnel, prolonged length of stay, and prolonged exposure to antimicrobial agents. Furthermore, adverse patient outcomes in this setting are more severe and are associated with a higher mortality 332. Outbreaks associated with a variety of bacterial, fungal, and viral pathogens due to common-source and person-to-person transmission are frequent in adult and pediatric ICUs 31,333-338.

# I.D.1.b. Burn units.

Burn wounds can provide optimal conditions for colonization, infection, and transmission of pathogens; infection acquired by burn patients is a frequent cause of morbidity and mortality 320,339,340. In patients with a burn injury involving ≥30% of the total body surface area (TBSA), the risk of invasive burn wound infection is particularly high 341,342. Infections that occur in patients with burn injury involving <30% TBSA are usually associated with the use of invasive devices. Methicillin-susceptible Staphylococcus aureus, MRSA, enterococci (including VRE), gram-negative bacteria, and Candida are prevalent pathogens in burn infections 53,340, and outbreaks of these organisms have been reported. Shifts over time in the predominance of pathogens causing infections among burn patients often lead to changes in burn care practices 343. Burn wound infections caused by Aspergillus sp. or other environmental molds may result from exposure to supplies contaminated during construction 359 or to dust generated during construction or other environmental disruption 360.

Hydrotherapy equipment is an important environmental reservoir of gram-negative organisms. Its use for burn care is discouraged based on demonstrated associations between use of contaminated hydrotherapy equipment and infections. Burn wound infections and colonization, as well as bloodstream infections, caused by multidrug-resistant P.
aeruginosa 361, A. baumannii 362, and MRSA 352 have been associated with hydrotherapy; excision of burn wounds in operating rooms is preferred.

Advances in burn care, specifically early excision and grafting of the burn wound, use of topical antimicrobial agents, and institution of early enteral feeding, have led to decreased infectious complications. Other advances have included prophylactic antimicrobial usage, selective digestive decontamination (SDD), and use of antimicrobial-coated catheters (ACC), but few epidemiologic studies and no efficacy studies have been performed to show the relative benefit of these measures 357. There is no consensus on the most effective infection control practices to prevent transmission of infections to and from patients with serious burns (e.g., single-bed rooms 358, laminar flow 363, and high-efficiency particulate air filtration 360, or maintaining burn patients in a separate unit without exposure to patients or equipment from other units 364). There also is controversy regarding the need for and type of barrier precautions for routine care of burn patients. One retrospective study demonstrated the efficacy and cost-effectiveness of a simplified barrier isolation protocol for wound colonization, emphasizing handwashing and use of gloves, caps, masks, and plastic impermeable aprons (rather than isolation gowns) for direct patient contact 365. However, there have been no studies that define the most effective combination of infection control precautions for use in burn settings. Prospective studies in this area are needed.

# I.D.1.c. Pediatrics.

Studies of the epidemiology of HAIs in children have identified unique infection control issues in this population 63,64. Pediatric intensive care unit (PICU) patients and the lowest-birthweight babies in the high-risk nursery (HRN) monitored in the NNIS system have had high rates of central venous catheter-associated bloodstream infections 64,320. Additionally, there is a high prevalence of community-acquired infections among hospitalized infants and young children who have not yet become immune either by vaccination or by natural infection. The result is more patients and their sibling visitors with transmissible infections present in pediatric healthcare settings, especially during seasonal epidemics (e.g., pertussis 36,40,41; respiratory viral infections, including those caused by RSV 24, influenza viruses 373, parainfluenza virus 374, human metapneumovirus 375, and adenoviruses 376; rubeola 34; varicella 377; and rotavirus 38,378).

Close physical contact between healthcare personnel and infants and young children (e.g., cuddling, feeding, playing, changing soiled diapers, and cleaning copious uncontrolled respiratory secretions) provides abundant opportunities for transmission of infectious material. Practices and behaviors such as congregation of children in play areas where toys and bodily secretions are easily shared, and family members rooming-in with pediatric patients, can further increase the risk of transmission. Pathogenic bacteria have been recovered from toys used by hospitalized patients 379; contaminated bath toys were implicated in an outbreak of multidrug-resistant P. aeruginosa on a pediatric oncology unit 80.
In addition, several patient factors increase the likelihood that infection will result from exposure to pathogens in healthcare settings (e.g., immaturity of the neonatal immune system, lack of previous natural infection and resulting immunity, prevalence of patients with congenital or acquired immune deficiencies, congenital anatomic anomalies, and use of life-saving invasive devices in neonatal and pediatric intensive care units) 63. There are theoretical concerns that infection risk will increase in association with innovative practices used in the NICU for the purpose of improving developmental outcomes, such as co-bedding 380 and kangaroo care 381, which may increase the opportunity for skin-to-skin exposure of multiple-gestation infants to each other and to their mothers, respectively; however, infection risk may actually be reduced among infants receiving kangaroo care 382. Children who attend child care centers 383,384 and pediatric rehabilitation units 385 may increase the overall burden of antimicrobial resistance (e.g., by contributing to the reservoir of community-associated MRSA). Patients in chronic care facilities may have increased rates of colonization with resistant GNBs and may be sources of introduction of resistant organisms to acute care settings 50.

# I.D.2. Non-acute healthcare settings.

Healthcare is provided in various settings outside of hospitals, including long-term care facilities (LTCFs) (e.g., nursing homes), homes for the developmentally disabled, settings where behavioral health services are provided, rehabilitation centers, and hospices 392. In addition, healthcare may be provided in nonhealthcare settings such as workplaces with occupational health clinics, adult day care centers, assisted living facilities, homeless shelters, jails and prisons, school clinics, and infirmaries. Each of these settings has unique circumstances and population risks to consider when designing and implementing an infection control program. Several of the most common settings and their particular challenges are discussed below. While this Guideline does not address each setting, the principles and strategies provided may be adapted and applied as appropriate.

# I.D.2.a. Long-term care.

The designation LTCF applies to a diverse group of residential settings, ranging from institutions for the developmentally disabled to nursing homes for the elderly and pediatric chronic-care facilities. Nursing homes for the elderly predominate numerically and frequently represent long-term care as a group of facilities. Approximately 1.8 million Americans reside in the nation's 16,500 nursing homes 396. Estimates of HAI rates of 1.8 to 13.5 per 1000 resident-care days have been reported, with a range of 3 to 7 per 1000 resident-care days in the more rigorous studies; a rough conversion of these rates to a per-resident-year figure is sketched below.
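To put these incidence densities in perspective, an illustrative conversion (assuming, for simplicity, that a resident is present for a full 365-day year) gives, for the more rigorous estimates:

$\dfrac{3}{1000} \times 365 \approx 1.1 \qquad \text{and} \qquad \dfrac{7}{1000} \times 365 \approx 2.6$

that is, on the order of one to three HAIs per resident per year.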
The infrastructure described in the Department of Veterans Affairs nursing home care units is a promising example for the development of a nationwide HAI surveillance system for LTCFs 402. LTCFs are different from other healthcare settings in that elderly patients at increased risk for infection are brought together in one setting and remain in the facility for extended periods of time; for most residents, it is their home. An atmosphere of community is fostered, and residents share common eating and living areas and participate in various facility-sponsored activities 403,404. Since able residents interact freely with each other, controlling transmission of infection in this setting is challenging 405. Residents who are colonized or infected with certain microorganisms are, in some cases, restricted to their room. However, because of the psychosocial risks associated with such restriction, it has been recommended that psychosocial needs be balanced with infection control needs in the LTCF setting. Documented LTCF outbreaks have been caused by various viruses (e.g., influenza virus 35, rhinovirus 413, adenovirus 414, norovirus 275,278,279,281) and bacteria (e.g., group A streptococcus 162, B. pertussis 415, non-susceptible S. pneumoniae 197,198, other MDROs, and Clostridium difficile 416). These pathogens can lead to substantial morbidity and mortality and increased medical costs; prompt detection and implementation of effective control measures are required.

Risk factors for infection are prevalent among LTCF residents 395,417,418. Age-related declines in immunity may affect responses to immunizations for influenza and other infectious agents and increase susceptibility to tuberculosis. Immobility, incontinence, dysphagia, underlying chronic diseases, poor functional status, and age-related skin changes increase susceptibility to urinary, respiratory, cutaneous, and soft tissue infections, while malnutrition can impair wound healing. Medications (e.g., drugs that affect level of consciousness, immune function, gastric acid secretions, and normal flora, including antimicrobial therapy) and invasive devices (e.g., urinary catheters and feeding tubes) heighten susceptibility to infection and colonization in LTCF residents. Finally, limited functional status and total dependence on healthcare personnel for activities of daily living have been identified as independent risk factors for infection 401,417,427 and for colonization with MRSA 428,429 and ESBL-producing K. pneumoniae 430. Several position papers and review articles have been published that provide guidance on various aspects of infection control and antimicrobial resistance in LTCFs. The Centers for Medicare and Medicaid Services (CMS) have established regulations for the prevention of infection in LTCFs 437.

Because residents of LTCFs are hospitalized frequently, they can transfer pathogens between LTCFs and the healthcare facilities in which they receive care 8. This is also true for pediatric long-term care populations. Pediatric chronic-care facilities have been associated with importing extended-spectrum cephalosporin-resistant gram-negative bacilli into one PICU 50. Children from pediatric rehabilitation units may contribute to the reservoir of community-associated MRSA 385.

# I.D.2.b. Ambulatory care.

In the past decade, healthcare delivery in the United States has shifted from the acute, inpatient hospital to a variety of ambulatory and community-based settings, including the home. Ambulatory care is provided in hospital-based outpatient clinics, nonhospital-based clinics and physician offices, public health clinics, free-standing dialysis centers, ambulatory surgical centers, urgent care centers, and many others. In 2000, there were 83 million visits to hospital outpatient clinics and more than 823 million visits to physician offices 442; ambulatory care now accounts for most patient encounters with the healthcare system 443.
In these settings, adapting transmission prevention guidelines is challenging because patients remain in common areas for prolonged periods waiting to be seen by a healthcare provider or awaiting admission to the hospital, examination or treatment rooms are turned around quickly with limited cleaning, and infectious patients may not be recognized immediately. Furthermore, immunocompromised patients often receive chemotherapy in infusion rooms where they stay for extended periods of time along with other types of patients. There are few data on the risk of HAIs in ambulatory care settings, with the exception of hemodialysis centers 18,444,445. Transmission of infections in outpatient settings has been reviewed in three publications. Goodman and Solomon summarized 53 clusters of infections associated with the outpatient setting from 1961 to 1990 446. Overall, 29 clusters were associated with common-source transmission from contaminated solutions or equipment, 14 with person-to-person transmission from or involving healthcare personnel, and 10 with airborne or droplet transmission among patients and healthcare workers. Transmission of bloodborne pathogens (i.e., hepatitis B and C viruses and, rarely, HIV) in outbreaks, sometimes involving hundreds of patients, continues to occur in ambulatory settings. These outbreaks often are related to common-source exposures, usually a contaminated medical device, multi-dose vial, or intravenous solution 82. In all cases, transmission has been attributed to failure to adhere to fundamental infection control principles, including safe injection practices and aseptic technique. This subject has been reviewed, and recommended infection control and safe injection practices have been summarized 454.

Airborne transmission of M. tuberculosis and measles in ambulatory settings, most frequently emergency departments, has been reported 34,127,446,448. Measles virus was transmitted in physician offices and other outpatient settings during an era when immunization rates were low and measles outbreaks in the community were occurring regularly 34,122,458. Rubella has been transmitted in the outpatient obstetric setting 33; there are no published reports of varicella transmission in the outpatient setting. In the ophthalmology setting, adenovirus type 8 epidemic keratoconjunctivitis has been transmitted via incompletely disinfected ophthalmology equipment and/or from healthcare workers to patients, presumably by contaminated hands 17,446,448.

If transmission in outpatient settings is to be prevented, screening for potentially infectious symptomatic and asymptomatic individuals, especially those who may be at risk for transmitting airborne infectious agents (e.g., M. tuberculosis, varicella-zoster virus, rubeola), is necessary at the start of the initial patient encounter. Upon identification of a potentially infectious patient, implementation of prevention measures, including prompt separation of potentially infectious patients and implementation of appropriate control measures (e.g., Respiratory Hygiene/Cough Etiquette and Transmission-Based Precautions), can decrease transmission risks 9,12. Transmission of MRSA and VRE in outpatient settings has not been reported, but the association of CA-MRSA in healthcare personnel working in an outpatient HIV clinic with environmental CA-MRSA contamination in that clinic suggests the possibility of transmission in that setting 463.
Patient-to-patient transmission of Burkholderia species and Pseudomonas aeruginosa in outpatient clinics for adults and children with cystic fibrosis has been confirmed 464,465 .

# I.D.2.c. Home care.
Home care in the United States is delivered by more than 20,000 provider agencies that include home health agencies, hospices, durable medical equipment providers, home infusion therapy services, and personal care and support services providers. Home care is provided to patients of all ages with both acute and chronic conditions. The scope of services ranges from assistance with activities of daily living and physical and occupational therapy to the care of wounds, infusion therapy, and chronic ambulatory peritoneal dialysis (CAPD).

The incidence of infection in home care patients, other than infections associated with infusion therapy, is not well studied . However, data collection and calculation of infection rates have been accomplished for central venous catheter-associated bloodstream infections in patients receiving home infusion therapy and for the risk of blood contact through percutaneous or mucosal exposures, demonstrating that surveillance can be performed in this setting 475 . Draft definitions for home care-associated infections have been developed 476 .

Transmission risks during home care are presumed to be minimal. The main transmission risks to home care patients are from an infectious healthcare provider or contaminated equipment; providers also can be exposed to an infectious patient during home visits. Because home care involves patient care by a limited number of personnel in settings without multiple patients or shared equipment, the potential reservoir of pathogens is reduced. Infections of home care providers that could pose a risk to home care patients include infections transmitted by the airborne or droplet routes (e.g., chickenpox, tuberculosis, influenza), skin infestations (e.g., scabies 69 and lice), and infections (e.g., impetigo) transmitted by direct or indirect contact. There are no published data on indirect transmission of MDROs from one home care patient to another, although this is theoretically possible if contaminated equipment is transported from an infected or colonized patient and used on another patient. Of note, investigation of the first case of VISA in home care 186 and the first two reported cases of VRSA 178,180,181,183 found no evidence of transmission of VISA or VRSA to other home care recipients. Home health care also may contribute to antimicrobial resistance; a review of outpatient vancomycin use found that 39% of recipients did not receive the antibiotic according to recommended guidelines 477 .

Although most home care agencies implement policies and procedures to prevent transmission of organisms, the current approach is based on the adaptation of the 1996 Guideline for Isolation Precautions in Hospitals 1 as well as other professional guidance 478, 479 . This issue has been very challenging in the home care industry, and practice has been inconsistent and frequently not evidence-based. For example, many home health agencies continue to observe "nursing bag technique," a practice that prescribes the use of barriers between the nursing bag and environmental surfaces in the home 480 . While the home environment may not always appear clean, the use of barriers between two non-critical surfaces has been questioned 481,482 . Opportunities exist to conduct research in home care related to infection transmission risks 483 .

# I.D.2.d. Other sites of healthcare delivery.
Facilities that are not primarily healthcare settings but in which healthcare is delivered include clinics in correctional facilities and shelters. Both settings can have suboptimal features, such as crowded conditions and poor ventilation. Economically disadvantaged individuals who may have chronic illnesses and healthcare problems related to alcoholism, injection drug use, poor nutrition, and/or inadequate shelter often receive their primary healthcare at sites such as these 484 . Infectious diseases of special concern for transmission include tuberculosis, scabies, respiratory infections (e.g., N. meningitidis, S. pneumoniae), sexually transmitted and bloodborne diseases (e.g., HIV, HBV, HCV, syphilis, gonorrhea), hepatitis A virus (HAV), diarrheal agents such as norovirus, and foodborne diseases 286, . A high index of suspicion for tuberculosis and CA-MRSA in these populations is needed, as outbreaks in these settings or among the populations they serve have been reported . Patient encounters in these types of facilities provide an opportunity to deliver recommended immunizations and screen for M. tuberculosis infection, in addition to diagnosing and treating acute illnesses 498 . Recommended infection control measures in these non-traditional areas designated for healthcare delivery are the same as for other ambulatory care settings. Therefore, these settings must be equipped to observe Standard Precautions and, when indicated, Transmission-Based Precautions.

# I.E. Transmission Risks Associated with Special Patient Populations
As new treatments emerge for complex diseases, unique infection control challenges associated with special patient populations need to be addressed.

# I.E.1. Immunocompromised patients.
Patients who have congenital primary immune deficiencies or acquired immune deficiencies (e.g., treatment-induced) are at increased risk for numerous types of infections while receiving healthcare and may be located throughout the healthcare facility. The specific defects of the immune system determine the types of infections that are most likely to be acquired (e.g., viral infections are associated with T-cell defects, and fungal and bacterial infections occur in patients who are neutropenic). As a general group, immunocompromised patients can be cared for in the same environment as other patients; however, it is always advisable to minimize exposure to other patients with transmissible infections such as influenza and other respiratory viruses 499,500 . The use of more intense chemotherapy regimens for treatment of childhood leukemia may be associated with prolonged periods of neutropenia and suppression of other components of the immune system, extending the period of infection risk and raising the concern that additional precautions may be indicated for select groups 501,502 . With the application of newer and more intense immunosuppressive therapies for a variety of medical conditions (e.g., rheumatologic disease 503,504 , inflammatory bowel disease 505 ), immunosuppressed patients are likely to be more widely distributed throughout a healthcare facility rather than localized to single patient units (e.g., hematology-oncology). Guidelines for preventing infections in certain groups of immunocompromised patients have been published 15,506,507 . Published data provide evidence to support placing allogeneic HSCT patients in a Protective Environment 15,157,158 .
Also, three guidelines have been developed that address the special requirements of these immunocompromised patients, including use of antimicrobial prophylaxis and engineering controls to create a Protective Environment for the prevention of infections caused by Aspergillus spp. and other environmental fungi 11,14,15 . As more intense chemotherapy regimens associated with prolonged periods of neutropenia or graft-versus-host disease are implemented, the period of risk and duration of environmental protection may need to be prolonged beyond the traditional 100 days 508 .

# I.E.2. Cystic fibrosis patients.
Patients with cystic fibrosis (CF) require special consideration when developing infection control guidelines. Compared to other patients, CF patients require additional protection to prevent transmission from contaminated respiratory therapy equipment . Infectious agents such as Burkholderia cepacia complex and P. aeruginosa 464,465,514,515 have unique clinical and prognostic significance. In CF patients, B. cepacia infection has been associated with increased morbidity and mortality , while delayed acquisition of chronic P. aeruginosa infection may be associated with an improved long-term clinical outcome 519,520 .

Person-to-person transmission of B. cepacia complex has been demonstrated among children 517 and adults 521 with CF in healthcare settings 464,522 , during various social contacts 523 , most notably attendance at camps for patients with CF 524 , and among siblings with CF 525 . Successful infection control measures used to prevent transmission of respiratory secretions include segregation of CF patients from each other in ambulatory and hospital settings (including use of private rooms with separate showers), environmental decontamination of surfaces and equipment contaminated with respiratory secretions, elimination of group chest physiotherapy sessions, and disbanding of CF camps 97,526 . The Cystic Fibrosis Foundation published a consensus document with evidence-based recommendations for infection control practices for CF patients 20 .

# I.F. New Therapies Associated with Potentially Transmissible Infectious Agents

# I.F.1. Gene therapy.
Gene therapy has been attempted using a number of different viral vectors, including nonreplicating retroviruses, adenoviruses, adeno-associated viruses, and replication-competent strains of poxviruses. Unexpected adverse events have limited the expansion of gene therapy protocols. The infectious hazards of gene therapy are theoretical at this time but require meticulous surveillance because of the possible occurrence of in vivo recombination and the subsequent emergence of a transmissible, genetically altered pathogen. The greatest concern involves the use of replication-competent viruses, especially vaccinia. As of the time of publication, no reports have described transmission of a vector virus from a gene therapy recipient to another individual, but surveillance is ongoing. Recommendations for monitoring infection control issues throughout the course of gene therapy trials have been published .

# I.F.2. Infections transmitted through blood, organs and other tissues.
The potential hazard of transmitting infectious pathogens through biologic products is a small but ever-present risk, despite donor screening. Reported infections transmitted by transfusion or transplantation include West Nile virus infection 530 , cytomegalovirus infection 531 , Creutzfeldt-Jakob disease 230 , hepatitis C 532 , infections with Clostridium spp. 533
and group A streptococcus 534 , malaria 535 , babesiosis 536 , Chagas disease 537 , lymphocytic choriomeningitis 538 , and rabies 539,540 . Therefore, it is important to consider receipt of biologic products when evaluating patients for potential sources of infection.

# I.F.3. Xenotransplantation.
The transplantation of nonhuman cells, tissues, and organs into humans potentially exposes patients to zoonotic pathogens. Transmission of known zoonotic infections (e.g., trichinosis from porcine tissue) constitutes one concern, but another is the possibility that transplantation of nonhuman cells, tissues, or organs may transmit previously unknown zoonotic infections (xenozoonoses) to immunosuppressed human recipients. Potential infections that might accompany transplantation of porcine organs have been described 541 . Guidelines from the U.S. Public Health Service address many infectious diseases and infection control issues that surround the developing field of xenotransplantation 542 ; work in this area is ongoing.

# Part II: Fundamental Elements Needed to Prevent Transmission of Infectious Agents in Healthcare Settings

# II.A. Healthcare System Components that Influence the Effectiveness of Precautions to Prevent Transmission

# II.A.1. Administrative measures.
Healthcare organizations can demonstrate a commitment to preventing transmission of infectious agents by incorporating infection control into the objectives of the organization's patient and occupational safety programs . An infrastructure to guide, support, and monitor adherence to Standard and Transmission-Based Precautions 434,548,549 will facilitate fulfillment of the organization's mission and achievement of the Joint Commission on Accreditation of Healthcare Organizations' patient safety goal to decrease HAIs 550 . Policies and procedures that explain how Standard and Transmission-Based Precautions are applied, including systems used to identify and communicate information about patients with potentially transmissible infectious agents, are essential to ensure the success of these measures and may vary according to the characteristics of the organization.

A key administrative measure is provision of fiscal and human resources for maintaining infection control and occupational health programs that are responsive to emerging needs. Specific components include bedside nurse 551 and infection prevention and control professional (ICP) staffing levels 552 , inclusion of ICPs in facility construction and design decisions 11 , clinical microbiology laboratory support 553,554 , adequate supplies and equipment including facility ventilation systems 11 , adherence monitoring 555 , assessment and correction of system failures that contribute to transmission 556,557 , and provision of feedback to healthcare personnel and senior administrators 434,548,549,558 .

The positive influence of institutional leadership has been demonstrated repeatedly in studies of HCW adherence to recommended hand hygiene practices 176,177,434,548,549, . Healthcare administrator involvement in infection control processes can improve administrators' awareness of the rationale and resource requirements for following recommended infection control practices. Several administrative factors may affect the transmission of infectious agents in healthcare settings: institutional culture, individual worker behavior, and the work environment.
Each of these areas is suitable for performance improvement monitoring and incorporation into the organization's patient safety goals 543,544,546,565 .

# II.A.1.a. Scope of work and staffing needs for infection control professionals.
The effectiveness of infection surveillance and control programs in preventing nosocomial infections in United States hospitals was assessed by the CDC through the Study on the Efficacy of Nosocomial Infection Control (SENIC Project), conducted 1970-76 566 . In a representative sample of US general hospitals, those with a trained infection control physician or microbiologist involved in an infection control program, and at least one infection control nurse per 250 beds, were associated with a 32% lower rate of the four infections studied (CVC-associated bloodstream infections, ventilator-associated pneumonias, catheter-related urinary tract infections, and surgical site infections).

Since that landmark study was published, responsibilities of ICPs have expanded commensurate with the growing complexity of the healthcare system, the patient populations served, and the increasing numbers of medical procedures and devices used in all types of healthcare settings. The scope of work of ICPs was first assessed in 1982 by the Certification Board of Infection Control (CBIC), and has been reassessed every five years since that time 558, . The findings of these task analyses have been used to develop and update the Infection Control Certification Examination, offered for the first time in 1983. With each survey, it is apparent that the role of the ICP is growing in complexity and scope, beyond traditional infection control activities in acute care hospitals. Activities currently assigned to ICPs in response to emerging challenges include:

1. surveillance and infection prevention at facilities other than acute care hospitals (e.g., ambulatory clinics, day surgery centers, long-term care facilities, rehabilitation centers, home care);
2. oversight of employee health services related to infection prevention (e.g., assessment of risk and administration of recommended treatment following exposure to infectious agents, tuberculosis screening, influenza vaccination, respiratory protection fit testing, and administration of other vaccines as indicated, such as smallpox vaccine in 2003);
3. preparedness planning for annual influenza outbreaks, pandemic influenza, SARS, and bioweapons attacks;
4. adherence monitoring for selected infection control practices;
5. oversight of risk assessment and implementation of prevention measures associated with construction and renovation;
6. prevention of transmission of MDROs;
7. evaluation of new medical products that could be associated with increased infection risk (e.g., intravenous infusion materials);
8. communication with the public, facility staff, and state and local health departments concerning infection control-related issues; and
9. participation in local and multi-center research projects 434,549,552,558,573,574 .

None of the CBIC job analyses addressed specific staffing requirements for the identified tasks, although the surveys did include information about hours worked; the 2001 survey included the number of ICPs assigned to the responding facilities 558 .

There is agreement in the literature that 1 ICP per 250 acute care beds is no longer adequate to meet current infection control needs; a Delphi project that assessed staffing needs of infection control programs in the 21st century concluded that a ratio of 0.8 to 1.0 ICP per 100 occupied acute care beds is an appropriate level of staffing 552 . A survey of participants in the National Nosocomial Infections Surveillance (NNIS) system found that the average daily census per ICP was 115 316 . Results of other studies have been similar: 3 per 500 beds for large acute care hospitals, 1 per 150-250 beds in long-term care facilities, and 1.56 per 250 in small rural hospitals 573,575 . The foregoing demonstrates that infection control staffing can no longer be based on patient census alone, but rather must be determined by the scope of the program, characteristics of the patient population, complexity of the healthcare system, tools available to assist personnel to perform essential tasks (e.g., electronic tracking and laboratory support for surveillance), and unique or urgent needs of the institution and community 552 . Furthermore, appropriate training is required to optimize the quality of work performed 558, 572, 576 .
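To make the arithmetic behind these staffing benchmarks concrete, the following is a minimal sketch (not part of the guideline; the function names and the 400-bed example are illustrative) comparing the historical 1-per-250-beds benchmark with the Delphi project's 0.8-1.0 ICP per 100 occupied beds:

```python
# Illustrative sketch only: comparing the ICP staffing benchmarks cited above.
# Historical SENIC-era benchmark: 1 ICP per 250 acute care beds.
# Delphi project recommendation: 0.8-1.0 ICP per 100 occupied acute care beds.

def icp_fte_historical(acute_care_beds: float) -> float:
    """1 ICP per 250 acute care beds (historical benchmark)."""
    return acute_care_beds / 250.0

def icp_fte_delphi(occupied_beds: float) -> tuple[float, float]:
    """0.8-1.0 ICP per 100 occupied beds (Delphi recommendation)."""
    return 0.8 * occupied_beds / 100.0, 1.0 * occupied_beds / 100.0

beds = 400  # hypothetical facility size
low, high = icp_fte_delphi(beds)
print(f"1:250 rule:      {icp_fte_historical(beds):.1f} ICP FTEs")
print(f"Delphi guidance: {low:.1f}-{high:.1f} ICP FTEs")
```

For a hypothetical 400-bed facility, the two benchmarks differ by roughly a factor of two; as the text notes, actual staffing should also reflect program scope, patient population, system complexity, and available tools, not census alone.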
# II.A.1.a.i. Infection control nurse liaison.
Designating a bedside nurse on a patient care unit as an infection control liaison or "link nurse" is reported to be an effective adjunct to enhance infection control at the unit level . Such individuals receive training in basic infection control and have frequent communication with the ICPs, but maintain their primary role as bedside caregiver on their units. The infection control nurse liaison increases the awareness of infection control at the unit level. He or she is especially effective in implementation of new policies or control interventions because of the rapport with individuals on the unit, an understanding of unit-specific challenges, and the ability to promote strategies that are most likely to be successful in that unit. This position is an adjunct to, not a replacement for, fully trained ICPs. Furthermore, infection control liaison nurses should not be counted when considering ICP staffing.

# II.A.1.b. Bedside nurse staffing.
There is increasing evidence that the level of bedside nurse staffing influences the quality of patient care 583,584 . When nursing staff are adequate, it is more likely that infection control practices, including hand hygiene and Standard and Transmission-Based Precautions, will be given appropriate attention and applied correctly and consistently 552 . A national multicenter study reported strong and consistent inverse relationships between nurse staffing and five adverse outcomes in medical patients, two of which were HAIs: urinary tract infections and pneumonia 583 . The association of nursing staff shortages with increased rates of HAIs has been demonstrated in several outbreaks in hospitals and long-term care settings, and with increased transmission of hepatitis C virus in dialysis units 22,418,551, . In most cases, when staffing improved as part of a comprehensive control intervention, the outbreak ended or the HAI rate declined. In two studies 590,596 , the composition of the nursing staff ("pool" or "float" vs. regular staff nurses) influenced the rate of primary bloodstream infections, with an increased infection rate occurring when the proportion of regular nurses decreased and that of pool nurses increased.

# II.A.1.c. Clinical microbiology laboratory support.
The critical role of the clinical microbiology laboratory in infection control and healthcare epidemiology has been well described 553,554, and is supported by the Infectious Diseases Society of America policy statement on consolidation of clinical microbiology laboratories published in 2001 553 . The clinical microbiology laboratory contributes to preventing transmission of infectious diseases in healthcare settings by promptly detecting and reporting epidemiologically important organisms, identifying emerging patterns of antimicrobial resistance, and assisting in assessment of the effectiveness of recommended precautions to limit transmission during outbreaks 598 . Outbreaks of infections may be recognized first by laboratorians 162 .

Healthcare organizations need to ensure the availability of the recommended scope and quality of laboratory services, a sufficient number of appropriately trained laboratory staff members, and systems to promptly communicate epidemiologically important results to those who will take action (e.g., providers of clinical care, infection control staff, healthcare epidemiologists, and infectious disease consultants) 601 . As concerns about emerging pathogens and bioterrorism grow, the role of the clinical microbiology laboratory takes on even greater importance. For healthcare organizations that outsource microbiology laboratory services (e.g., ambulatory care, home care, LTCFs, smaller acute care hospitals), it is important to specify by contract the types of services (e.g., periodic institution-specific aggregate susceptibility reports) required to support infection control.

Several key functions of the clinical microbiology laboratory are relevant to this guideline:

- Antimicrobial susceptibility testing and interpretation in accordance with current guidelines developed by the National Committee for Clinical Laboratory Standards (NCCLS), known as the Clinical and Laboratory Standards Institute (CLSI) since 2005 602 , for the detection of emerging resistance patterns 603,604 , and for the preparation, analysis, and distribution of periodic cumulative antimicrobial susceptibility summary reports (a sketch of such aggregation follows this list) . While not required, clinical laboratories ideally should have access to rapid genotypic identification of bacteria and their antibiotic resistance genes 608 .
- Performance of surveillance cultures when appropriate (including retention of isolates for analysis) to assess patterns of infection transmission and effectiveness of infection control interventions at the facility or organization. Microbiologists assist in decisions concerning the indications for initiating and discontinuing active surveillance programs and optimize the use of laboratory resources.
- Molecular typing, on-site or outsourced, in order to investigate and control healthcare-associated outbreaks 609 .
- Application of rapid diagnostic tests to support clinical decisions involving patient treatment, room selection, and implementation of control measures, including barrier precautions and use of vaccine or chemoprophylaxis agents (e.g., influenza , B. pertussis 613 , RSV 614,615 , and enteroviruses 616 ). The microbiologist provides guidance to limit rapid testing to clinical situations in which rapid results influence patient management decisions, as well as providing oversight of point-of-care testing performed by non-laboratory healthcare workers 617 .
- Detection and rapid reporting of epidemiologically important organisms, including those that are reportable to public health agencies.
- Implementation of a quality control program that ensures testing services are appropriate for the population served, and stringently evaluated for sensitivity, specificity, applicability, and feasibility.
- Participation in a multidisciplinary team to develop and maintain an effective institutional program for the judicious use of antimicrobial agents 618,619 .
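As a concrete illustration of the cumulative susceptibility reporting described in the first item above, the following is a minimal sketch (hypothetical data model and field names; not from the guideline) that aggregates isolate-level results into percent-susceptible summaries, applying only the first-isolate-per-patient deduplication rule used in CLSI-style antibiograms:

```python
# Illustrative sketch: aggregating isolate-level susceptibility results into
# a cumulative summary. Real antibiograms follow CLSI guidance in full; only
# the first-isolate-per-patient deduplication rule is shown here.
from collections import defaultdict

isolates = [
    # (patient_id, organism, drug, result) -- "S" = susceptible; data invented
    ("p1", "E. coli", "ciprofloxacin", "S"),
    ("p1", "E. coli", "ciprofloxacin", "R"),  # repeat isolate, excluded below
    ("p2", "E. coli", "ciprofloxacin", "R"),
    ("p3", "S. aureus", "oxacillin", "S"),
]

first_isolates = {}
for pid, org, drug, result in isolates:
    first_isolates.setdefault((pid, org, drug), result)  # keep first only

counts = defaultdict(lambda: [0, 0])  # (org, drug) -> [n_susceptible, n_total]
for (pid, org, drug), result in first_isolates.items():
    counts[(org, drug)][1] += 1
    if result == "S":
        counts[(org, drug)][0] += 1

for (org, drug), (s, total) in sorted(counts.items()):
    print(f"{org} vs {drug}: {100.0 * s / total:.0f}% susceptible (n={total})")
```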
# II.A.2. Institutional safety culture and organizational characteristics.
Safety culture (or safety climate) refers to a work environment where a shared commitment to safety on the part of management and the workforce is understood and followed 557,620,621 . The authors of the Institute of Medicine report, To Err is Human 543 , acknowledge that causes of medical error are multifaceted but emphasize repeatedly the pivotal role of system failures and the benefits of a safety culture. A safety culture is created through:

1. the actions management takes to improve patient and worker safety;
2. worker participation in safety planning;
3. the availability of appropriate protective equipment;
4. influence of group norms regarding acceptable safety practices; and
5. the organization's socialization process for new personnel.

Safety and patient outcomes can be enhanced by improving or creating organizational characteristics within patient care units, as demonstrated by studies of surgical ICUs 622,623 . Each of these factors has a direct bearing on adherence to transmission prevention recommendations 257 . Measurement of an institutional culture of safety is useful for designing improvements in healthcare 624,625 . Several hospital-based studies have linked measures of safety culture with both employee adherence to safe practices and reduced exposures to blood and body fluids . One study of hand hygiene practices concluded that improved adherence requires integration of infection control into the organization's safety culture 561 . Several hospitals that are part of the Veterans Administration Healthcare System have taken specific steps toward improving the safety culture, including error reporting mechanisms, performing root cause analysis on identified problems, providing safety incentives, and employee education .

# II.A.3. Adherence of healthcare personnel to recommended guidelines.
Adherence to recommended infection control practices decreases transmission of infectious agents in healthcare settings 116,562, . However, several observational studies have shown limited adherence to recommended practices by healthcare personnel 559, . Observed adherence to universal precautions ranged from 43% to 89% 641,642,649,651,652 . However, the degree of adherence depended frequently on the practice that was assessed and, for glove use, the circumstances in which gloves were used. Appropriate glove use has ranged from a low of 15% 645 to a high of 82% 650 . However, 92% and 98% adherence with glove use have been reported during arterial blood gas collection and resuscitation, respectively, procedures in which there may be considerable blood contact 643,656 . Differences in observed adherence have been reported among occupational groups in the same healthcare facility 641 and between experienced and inexperienced professionals 645 . In surveys of healthcare personnel, self-reported adherence was generally higher than that reported in observational studies. Furthermore, where an observational component was included with a self-reported survey, self-perceived adherence was often greater than observed adherence 657 .
Among nurses and physicians, increasing years of experience is a negative predictor of adherence 645,651 . Education to improve adherence is the primary intervention that has been studied. While positive changes in knowledge and attitude have been demonstrated 640,658 , there often has been limited or no accompanying change in behavior 642,644 . Self-reported adherence is higher in groups that have received an educational intervention 630,659 . Educational interventions that incorporated videotaping and performance feedback were successful in improving adherence during the period of study; the long-term effect of these interventions is not known 654 . The use of videotape also served to identify system problems (e.g., communication and access to personal protective equipment) that otherwise may not have been recognized.

Use of engineering controls and facility design concepts for improving adherence is gaining interest. While introduction of automated sinks had a negative impact on consistent adherence to handwashing 660 , use of electronic monitoring and voice prompts to remind healthcare workers to perform hand hygiene, together with improved accessibility of hand hygiene products, increased adherence and contributed to a decrease in HAIs in one study 661 . More information is needed regarding how technology might improve adherence.

Improving adherence to infection control practices requires a multifaceted approach that incorporates continuous assessment of both the individual and the work environment 559, 561 . Using several behavioral theories, Kretzer and Larson concluded that a single intervention (e.g., a handwashing campaign or putting up new posters about transmission precautions) would likely be ineffective in improving healthcare personnel adherence 662 . Improvement requires that the organizational leadership make prevention an institutional priority and integrate infection control practices into the organization's safety culture 561 . A recent review of the literature concluded that variations in organizational factors (e.g., safety climate, policies and procedures, education and training) and individual factors (e.g., knowledge, perceptions of risk, past experience) were determinants of adherence to infection control guidelines for protection against SARS and other respiratory pathogens 257 .

# II.B. Surveillance for Healthcare-Associated Infections (HAIs)
Surveillance is an essential tool for case-finding of single patients or clusters of patients who are infected or colonized with epidemiologically important organisms (e.g., susceptible bacteria such as S. aureus, S. pyogenes, or Enterobacter-Klebsiella spp; MRSA, VRE, and other MDROs; C. difficile; RSV; influenza virus) for which Transmission-Based Precautions may be required. Surveillance is defined as the ongoing, systematic collection, analysis, interpretation, and dissemination of data regarding a health-related event for use in public health action to reduce morbidity and mortality and to improve health 663 . The work of Ignaz Semmelweis that described the role of person-to-person transmission in puerperal sepsis is the earliest example of the use of surveillance data to reduce transmission of infectious agents 664 . Surveillance of both process measures and the infection rates to which they are linked is important for evaluating the effectiveness of infection prevention efforts and identifying indications for change 555, .
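To make the rate computations used in such surveillance concrete, the following is a minimal sketch (hypothetical monthly counts, not from the guideline) of a device-associated infection rate with its device-day denominator, plus u-chart style control limits of the kind used for the statistical process control trend analysis listed among the essential elements below; the per-1,000-device-day convention and the 3-sigma Poisson limits are standard surveillance practice, assumed here:

```python
# Illustrative sketch: a device-associated infection rate with its device-day
# denominator, plus simple u-chart (3-sigma, Poisson) control limits for
# month-to-month trend analysis. All counts are invented.
import math

monthly = [  # (infections, central-line days)
    (3, 950), (1, 1010), (4, 980), (2, 940), (6, 1000), (2, 970),
]

total_inf = sum(i for i, _ in monthly)
total_days = sum(d for _, d in monthly)
u_bar = total_inf / total_days  # overall rate per device-day

for month, (inf, days) in enumerate(monthly, start=1):
    rate = 1000.0 * inf / days           # rate per 1,000 device-days
    sigma = math.sqrt(u_bar / days)      # Poisson SD of the monthly rate
    ucl = 1000.0 * (u_bar + 3 * sigma)   # upper control limit, same scale
    flag = " <-- exceeds UCL" if rate > ucl else ""
    print(f"month {month}: {rate:.2f}/1,000 device-days (UCL {ucl:.2f}){flag}")
```

A month whose rate exceeds its control limit would prompt the kind of systematic epidemiologic investigation of clusters described below.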
The Study on the Efficacy of Nosocomial Infection Control (SENIC) found that different combinations of infection control practices resulted in reduced rates of nosocomial surgical site infections, pneumonia, urinary tract infections, and bacteremia in acute care hospitals 566 ; however, surveillance was the only component essential for reducing all four types of HAIs. Although a similar study has not been conducted in other healthcare settings, a role for surveillance and the need for novel strategies have been described in LTCFs 398,434,669,670 and in home care .

The essential elements of a surveillance system are:

1. standardized definitions;
2. identification of patient populations at risk for infection;
3. statistical analysis (e.g., risk adjustment, calculation of rates using appropriate denominators, trend analysis using methods such as statistical process control charts); and
4. feedback of results to the primary caregivers .

Data gathered through surveillance of high-risk populations, device use, procedures, and/or facility locations (e.g., ICUs) are useful for detecting transmission trends . Identification of clusters of infections should be followed by a systematic epidemiologic investigation to determine commonalities in persons, places, and time, and to guide implementation of interventions and evaluation of the effectiveness of those interventions.

Targeted surveillance based on the highest-risk areas or patients has been preferred over facility-wide surveillance for the most effective use of resources 673,676 . However, surveillance for certain epidemiologically important organisms may need to be facility-wide. Surveillance methods will continue to evolve as healthcare delivery systems change 392,677 and user-friendly electronic tools become more widely available for electronic tracking and trend analysis 674,678,679 . Individuals with experience in healthcare epidemiology and infection control should be involved in selecting software packages for data aggregation and analysis to ensure that the need for efficient and accurate HAI surveillance will be met. Effective surveillance is increasingly important as legislation requiring public reporting of HAI rates is passed and states work to develop effective systems to support such legislation 680 .

# II.C. Education of HCWs, Patients, and Families
Education and training of healthcare personnel are a prerequisite for ensuring that policies and procedures for Standard and Transmission-Based Precautions are understood and practiced. Understanding the scientific rationale for the precautions will allow HCWs to apply procedures correctly, as well as safely modify precautions based on changing requirements, resources, or healthcare settings 14,655, . In one study, the likelihood of HCWs developing SARS was strongly associated with less than 2 hours of infection control training and lack of understanding of infection control procedures 689 . Education about the important role of vaccines (e.g., influenza, measles, varicella, pertussis, pneumococcal) in protecting healthcare personnel, their patients, and family members can help improve vaccination rates .
Education on the principles and practices for preventing transmission of infectious agents should begin during training in the health professions and be provided to anyone who has an opportunity for contact with patients or medical equipment (e.g., nursing and medical staff; therapists and technicians, including respiratory, physical, occupational, radiology, and cardiology personnel; phlebotomists; housekeeping and maintenance staff; and students). In healthcare facilities, education and training on Standard and Transmission-Based Precautions are typically provided at the time of orientation and should be repeated as necessary to maintain competency; updated education and training are necessary when policies and procedures are revised or when there is a special circumstance, such as an outbreak that requires modification of current practice or adoption of new recommendations. Education and training materials and methods appropriate to the HCW's level of responsibility, individual learning habits, and language needs can improve the learning experience 658, .

Education programs for healthcare personnel have been associated with sustained improvement in adherence to best practices and a related decrease in device-associated HAIs in teaching and non-teaching settings 639,703 . Several studies have shown that, in addition to targeted education to improve specific practices, periodic assessment and feedback of HCWs' knowledge of, and adherence to, recommended practices are necessary to achieve the desired changes and to identify continuing education needs 562, . Effectiveness of this approach for isolation practices has been demonstrated for control of RSV 116,684 .

Patients, family members, and visitors can be partners in preventing transmission of infections in healthcare settings 9,42, . Information about Standard Precautions, especially hand hygiene, Respiratory Hygiene/Cough Etiquette, vaccination (especially against influenza), and other routine infection prevention strategies may be incorporated into patient information materials that are provided upon admission to the healthcare facility. Additional information about Transmission-Based Precautions is best provided at the time they are initiated. Fact sheets, pamphlets, and other printed material may include information on the rationale for the additional precautions, risks to household members, room assignment for Transmission-Based Precautions purposes, explanation about the use of personal protective equipment by HCWs, and directions for use of such equipment by family members and visitors. Such information may be particularly helpful in the home environment, where household members often have primary responsibility for adherence to recommended infection control practices. Healthcare personnel must be available and prepared to explain this material and answer questions as needed.

# II.D. Hand Hygiene
Hand hygiene has been cited frequently as the single most important practice to reduce the transmission of infectious agents in healthcare settings 559,712,713 and is an essential element of Standard Precautions. The term "hand hygiene" includes both handwashing with either plain or antiseptic-containing soap and water, and use of alcohol-based products (gels, rinses, foams) that do not require the use of water.
In the absence of visible soiling of hands, approved alcohol-based products for hand disinfection are preferred over antimicrobial or plain soap and water because of their superior microbicidal activity, reduced drying of the skin, and convenience 559 . Improved hand hygiene practices have been associated with a sustained decrease in the incidence of MRSA and VRE infections, primarily in the ICU 561,562, . The scientific rationale, indications, methods, and products for hand hygiene are summarized in other publications 559,717 .

The effectiveness of hand hygiene can be reduced by the type and length of fingernails 559, 718, 719 . Individuals wearing artificial nails have been shown to harbor more pathogenic organisms, especially gram-negative bacilli and yeasts, on the nails and in the subungual area than those with native nails 720,721 . In 2002, CDC/HICPAC recommended (Category IA) that artificial fingernails and extenders not be worn by healthcare personnel who have contact with high-risk patients (e.g., those in ICUs, ORs) due to the association with outbreaks of gram-negative bacillus and candidal infections as confirmed by molecular typing of isolates 30,31,559, . The need to restrict the wearing of artificial fingernails by all healthcare personnel who provide direct patient care, or by healthcare personnel who have contact with other high-risk groups (e.g., oncology, cystic fibrosis patients), has not been studied, but has been recommended by some experts 20 . At this time, such decisions are at the discretion of an individual facility's infection control program. There is less evidence that jewelry affects the quality of hand hygiene. Although hand contamination with potential pathogens is increased with ring-wearing 559,726 , no studies have related this practice to HCW-to-patient transmission of pathogens.

# II.E. Personal Protective Equipment (PPE) for Healthcare Personnel
PPE refers to a variety of barriers and respirators used alone or in combination to protect mucous membranes, airways, skin, and clothing from contact with infectious agents. The selection of PPE is based on the nature of the patient interaction and/or the likely mode(s) of transmission. Guidance on the use of PPE is discussed in Part III. A suggested procedure for donning and removing PPE that will prevent skin or clothing contamination is presented in the Figure. Designated containers for used disposable or reusable PPE should be placed in a location that is convenient to the site of removal to facilitate disposal and containment of contaminated materials. Hand hygiene is always the final step after removing and disposing of PPE. The following sections highlight the primary uses and methods for selecting this equipment.

# II.E.1. Gloves.
Gloves are used to prevent contamination of healthcare personnel hands when:

1. anticipating direct contact with blood or body fluids, mucous membranes, non-intact skin, and other potentially infectious material;
2. having direct contact with patients who are colonized or infected with pathogens transmitted by the contact route (e.g., VRE, MRSA, RSV) 559,727,728 ; or
3. handling or touching visibly or potentially contaminated patient care equipment and environmental surfaces 72,73,559 .

Gloves can protect both patients and healthcare personnel from exposure to infectious material that may be carried on hands 73 .
The extent to which gloves will protect healthcare personnel from transmission of bloodborne pathogens (e.g., HIV, HBV, HCV) following a needlestick or other puncture that penetrates the glove barrier has not been determined. Although gloves may reduce the volume of blood on the external surface of a sharp by 46-86% 729 , the residual blood in the lumen of a hollow-bore needle would not be affected; therefore, the effect on transmission risk is unknown.

Gloves manufactured for healthcare purposes are subject to FDA evaluation and clearance 730 . Nonsterile disposable medical gloves made of a variety of materials (e.g., latex, vinyl, nitrile) are available for routine patient care 731 . The selection of glove type for non-surgical use is based on a number of factors, including the task that is to be performed, anticipated contact with chemicals and chemotherapeutic agents, latex sensitivity, sizing, and facility policies for creating a latex-free environment 17, . For contact with blood and body fluids during non-surgical patient care, a single pair of gloves generally provides adequate barrier protection 734 . However, there is considerable variability among gloves; both the quality of the manufacturing process and the type of material influence their barrier effectiveness 735 . While there is little difference in the barrier properties of unused intact gloves 736 , studies have shown repeatedly that vinyl gloves have higher failure rates than latex or nitrile gloves when tested under simulated and actual clinical conditions 731, . For this reason, either latex or nitrile gloves are preferable for clinical procedures that require manual dexterity and/or will involve more than brief patient contact. It may be necessary to stock gloves in several sizes. Heavier, reusable utility gloves are indicated for non-patient care activities, such as handling or cleaning contaminated equipment or surfaces 11,14,739 .

During patient care, transmission of infectious organisms can be reduced by adhering to the principles of working from "clean" to "dirty" and confining or limiting contamination to surfaces that are directly needed for patient care. It may be necessary to change gloves during the care of a single patient to prevent cross-contamination of body sites 559,740 . It also may be necessary to change gloves if the patient interaction involves touching portable computer keyboards or other mobile equipment that is transported from room to room. Discarding gloves between patients is necessary to prevent transmission of infectious material. Gloves must not be washed for subsequent reuse because microorganisms cannot be removed reliably from glove surfaces and continued glove integrity cannot be ensured. Furthermore, glove reuse has been associated with transmission of MRSA and gram-negative bacilli .

When gloves are worn in combination with other PPE, they are put on last. Gloves that fit snugly around the wrist are preferred for use with an isolation gown because they will cover the gown cuff and provide a more reliable continuous barrier for the arms, wrists, and hands. Gloves that are removed properly will prevent hand contamination (Figure). Hand hygiene following glove removal further ensures that the hands will not carry potentially infectious material that might have penetrated through unrecognized tears or that could contaminate the hands during glove removal 559,728,741 .

# II.E.2. Isolation gowns.
Isolation gowns are used as specified by Standard and Transmission-Based Precautions to protect the HCW's arms and exposed body areas and to prevent contamination of clothing with blood, body fluids, and other potentially infectious material 24,88,262, . The need for and type of isolation gown selected is based on the nature of the patient interaction, including the anticipated degree of contact with infectious material and the potential for blood and body fluid penetration of the barrier. The wearing of isolation gowns and other protective apparel is mandated by the OSHA Bloodborne Pathogens Standard 739 . Clinical and laboratory coats or jackets worn over personal clothing for comfort and/or purposes of identity are not considered PPE.

When applying Standard Precautions, an isolation gown is worn only if contact with blood or body fluid is anticipated. However, when Contact Precautions are used (i.e., to prevent transmission of an infectious agent that is not interrupted by Standard Precautions alone and that is associated with environmental contamination), donning of both gown and gloves upon room entry is indicated to address unintentional contact with contaminated environmental surfaces 54,72,73,88 . The routine donning of isolation gowns upon entry into an intensive care unit or other high-risk area does not prevent or influence potential colonization or infection of patients in those areas 365, .

Isolation gowns are always worn in combination with gloves, and with other PPE when indicated. Gowns are usually the first piece of PPE to be donned. Full coverage of the arms and body front, from neck to mid-thigh or below, will ensure that clothing and exposed upper body areas are protected. Several gown sizes should be available in a healthcare facility to ensure appropriate coverage for staff members. Isolation gowns should be removed before leaving the patient care area to prevent possible contamination of the environment outside the patient's room. Isolation gowns should be removed in a manner that prevents contamination of clothing or skin (Figure). The outer, "contaminated" side of the gown is turned inward and rolled into a bundle, and then discarded into a designated container for waste or linen to contain contamination.

# II.E.3. Face protection: masks, goggles, face shields.

# II.E.3.a. Masks.
Masks are used for three primary purposes in healthcare settings:

1. placed on healthcare personnel to protect them from contact with infectious material from patients (e.g., respiratory secretions and sprays of blood or body fluids), consistent with Standard Precautions and Droplet Precautions;
2. placed on healthcare personnel engaged in procedures requiring sterile technique, to protect patients from exposure to infectious agents carried in a healthcare worker's mouth or nose; and
3. placed on coughing patients to limit potential dissemination of infectious respiratory secretions from the patient to others (i.e., Respiratory Hygiene/Cough Etiquette).

Masks may be used in combination with goggles to protect the mouth, nose, and eyes, or a face shield may be used instead of a mask and goggles to provide more complete protection for the face, as discussed below. Masks should not be confused with particulate respirators, which are used to prevent inhalation of small particles that may contain infectious agents transmitted via the airborne route, as described below.
The mucous membranes of the mouth, nose, and eyes are susceptible portals of entry for infectious agents, as can be other skin surfaces if skin integrity is compromised (e.g., by acne, dermatitis) 66, . Therefore, use of PPE to protect these body sites is an important component of Standard Precautions. The protective effect of masks for exposed healthcare personnel has been demonstrated 93,113,755,756 . Procedures that generate splashes or sprays of blood, body fluids, secretions, or excretions (e.g., endotracheal suctioning, bronchoscopy, invasive vascular procedures) require either a face shield (disposable or reusable) or a mask and goggles 93-95, 96, 113, 115, 262, 739, 757 . The wearing of masks, eye protection, and face shields in specified circumstances when blood or body fluid exposures are likely to occur is mandated by the OSHA Bloodborne Pathogens Standard 739 . Appropriate PPE should be selected based on the anticipated level of exposure.

Two mask types are available for use in healthcare settings: surgical masks, which are cleared by the FDA and required to have fluid-resistant properties, and procedure or isolation masks 758 . No studies have been published that compare mask types to determine whether one mask type provides better protection than another. Since procedure/isolation masks are not regulated by the FDA, there may be more variability in quality and performance than with surgical masks. Masks come in various shapes (e.g., molded and non-molded), sizes, filtration efficiencies, and methods of attachment (e.g., ties, elastic, ear loops). Healthcare facilities may find that different types of masks are needed to meet individual healthcare personnel needs.

# II.E.3.b. Goggles, face shields.
Guidance on eye protection for infection control has been published 759 . The eye protection chosen for specific work situations (e.g., goggles or face shield) depends upon the circumstances of exposure, other PPE used, and personal vision needs. Personal eyeglasses and contact lenses are NOT considered adequate eye protection (NIOSH, Eye Protection for Infection Control). NIOSH states that eye protection must be comfortable, allow for sufficient peripheral vision, and be adjustable to ensure a secure fit. It may be necessary to provide several different types, styles, and sizes of protective equipment. Indirectly-vented goggles with a manufacturer's anti-fog coating may provide the most reliable practical eye protection from splashes, sprays, and respiratory droplets from multiple angles. Newer styles of goggles may provide better indirect airflow properties to reduce fogging, as well as better peripheral vision and more size options for fitting goggles to different workers. Many styles of goggles fit adequately over prescription glasses with minimal gaps. While effective as eye protection, goggles do not provide splash or spray protection to other parts of the face.

The role of goggles, in addition to a mask, in preventing exposure to infectious agents transmitted via respiratory droplets has been studied only for RSV. Reports published in the mid-1980s demonstrated that eye protection reduced occupational transmission of RSV 760,761 . Whether this was due to preventing hand-eye contact or respiratory droplet-eye contact has not been determined. However, subsequent studies demonstrated that RSV transmission is effectively prevented by adherence to Standard plus Contact Precautions and that, for this virus, routine use of goggles is not necessary 24,116,117,684,762 .
It is important to remind healthcare personnel that, even if Droplet Precautions are not recommended for a specific respiratory tract pathogen, protection for the eyes, nose, and mouth by using a mask and goggles, or a face shield alone, is necessary when a splash or spray of any respiratory secretions or other body fluids, as defined in Standard Precautions, is likely. Disposable or non-disposable face shields may be used as an alternative to goggles 759 . As compared with goggles, a face shield can provide protection to other facial areas in addition to the eyes. Face shields extending from chin to crown provide better face and eye protection from splashes and sprays; face shields that wrap around the sides may reduce splashes around the edge of the shield.

Removal of a face shield, goggles, and mask can be performed safely after gloves have been removed and hand hygiene performed. The ties, ear pieces, and/or headband used to secure the equipment to the head are considered "clean" and therefore safe to touch with bare hands. The front of a mask, goggles, and face shield are considered contaminated (Figure).

# II.E.4. Respiratory protection.
The subject of respiratory protection as it applies to preventing transmission of airborne infectious agents, including the need for and frequency of fit-testing, is under scientific review and was the subject of a CDC workshop in 2004 763 . Respiratory protection currently requires the use of a respirator with N95 or higher filtration to prevent inhalation of infectious particles. Information about respirators and respiratory protection programs is summarized in the Guideline for Preventing Transmission of Mycobacterium tuberculosis in Health-care Settings, 2005 (CDC. MMWR 2005;54:RR-17 12 ).

Respiratory protection is broadly regulated by OSHA under the general industry standard for respiratory protection (29 CFR 1910.134) 764 , which requires that U.S. employers in all employment settings implement a program to protect employees from inhalation of toxic materials. OSHA program components include medical clearance to wear a respirator; provision and use of appropriate respirators, including fit-tested NIOSH-certified N95 and higher particulate filtering respirators; education on respirator use; and periodic re-evaluation of the respiratory protection program. When selecting particulate respirators, models with inherently good fit characteristics (i.e., those expected to provide protection factors of 10 or more to 95% of wearers) are preferred and could theoretically obviate the need for fit testing 765,766 . Issues pertaining to respiratory protection remain the subject of ongoing debate. Information on various types of respirators may be found in published studies 765,767,768 . A user-seal check (formerly called a "fit check") should be performed by the wearer of a respirator each time a respirator is donned, to minimize air leakage around the facepiece 769 . The optimal frequency of fit-testing has not been determined; re-testing may be indicated if there is a change in facial features of the wearer, onset of a medical condition that would affect respiratory function in the wearer, or a change in the model or size of the initially assigned respirator 12 .

Respiratory protection was first recommended for preventing exposure of U.S. healthcare personnel to M. tuberculosis in 1989.
That recommendation has been maintained in two successive revisions of the Guidelines for Prevention of Transmission of Tuberculosis in Hospitals and other Healthcare Settings 12,126 . The incremental benefit of respirator use, in addition to administrative and engineering controls (i.e., AIIRs, early recognition of patients likely to have tuberculosis and prompt placement in an AIIR, and maintenance of a patient with suspected tuberculosis in an AIIR until no longer infectious), for preventing transmission of airborne infectious agents (e.g., M. tuberculosis) is undetermined. Although some studies have demonstrated effective prevention of M. tuberculosis transmission in hospitals where surgical masks, instead of respirators, were used in conjunction with other administrative and engineering controls 637,770,771 , CDC currently recommends N95 or higher-level respirators for personnel exposed to patients with suspected or confirmed tuberculosis. Currently this is also true for other diseases that could be transmitted through the airborne route, including SARS 262 and smallpox 108,129,772 , until inhalational transmission is better defined or healthcare-specific protective equipment more suitable for preventing infection is developed. Respirators are also currently recommended to be worn during the performance of aerosol-generating procedures (e.g., intubation, bronchoscopy, suctioning) on patients with SARS-CoV infection, avian influenza, and pandemic influenza (see Appendix A).

Although Airborne Precautions are recommended for preventing airborne transmission of measles and varicella-zoster viruses, there are no data upon which to base a recommendation for respiratory protection to protect susceptible personnel against these two infections; transmission of varicella-zoster virus has been prevented among pediatric patients using negative pressure isolation alone 773 . Whether respiratory protection (i.e., wearing a particulate respirator) would enhance protection from these viruses has not been studied. Since the majority of healthcare personnel have natural or acquired immunity to these viruses, only immune personnel generally care for patients with these infections . Although there is no evidence to suggest that masks are not adequate to protect healthcare personnel in these settings, for purposes of consistency and simplicity, or because of difficulties in ascertaining immunity, some facilities may require the use of respirators for entry into all AIIRs, regardless of the specific infectious agent.

Procedures for safe removal of respirators are provided (Figure). In some healthcare settings, particulate respirators used to provide care for patients with M. tuberculosis are reused by the same HCW. This is an acceptable practice provided the respirator is not damaged or soiled, the fit is not compromised by a change in shape, and the respirator has not been contaminated with blood or body fluids. There are no data on which to base a recommendation for the length of time a respirator may be reused.

# II.F. Safe Work Practices to Prevent HCW Exposure to Bloodborne Pathogens

# II.F.1. Prevention of needlesticks and other sharps-related injuries.
Injuries due to needles and other sharps have been associated with transmission of HBV, HCV, and HIV to healthcare personnel 778,779 . The prevention of sharps injuries has always been an essential element of Universal and now Standard Precautions 1, 780 .
These include measures to handle needles and other sharp devices in a manner that will prevent injury to the user and to others who may encounter the device during or after a procedure. These measures apply to routine patient care and do not address the prevention of sharps injuries and other blood exposures during surgical and other invasive procedures, which are addressed elsewhere. Since 1991, when OSHA first issued its Bloodborne Pathogens Standard to protect healthcare personnel from blood exposure, the focus of regulatory and legislative activity has been on implementing a hierarchy of control measures. This has included focusing attention on removing sharps hazards through the development and use of engineering controls. The federal Needlestick Safety and Prevention Act, signed into law in November 2000, authorized OSHA's revision of its Bloodborne Pathogens Standard to more explicitly require the use of safety-engineered sharp devices 786 . CDC has provided guidance on sharps injury prevention 787,788 , including for the design, implementation and evaluation of a comprehensive sharps injury prevention program 789 .

# II.F.2. Prevention of mucous membrane contact.

Exposure of mucous membranes of the eyes, nose and mouth to blood and body fluids has been associated with the transmission of bloodborne viruses and other infectious agents to healthcare personnel 66,752,754,779 . The prevention of mucous membrane exposures has always been an element of Universal and now Standard Precautions for routine patient care 1,753 and is subject to OSHA bloodborne pathogen regulations. Safe work practices, in addition to wearing PPE, are used to protect mucous membranes and non-intact skin from contact with potentially infectious material. These include keeping contaminated gloved and ungloved hands away from the mouth, nose, eyes, and face, and positioning patients to direct sprays and splatter away from the face of the caregiver. Careful placement of PPE before patient contact will help avoid the need to make PPE adjustments, and possible face or mucous membrane contamination, during use. In areas where the need for resuscitation is unpredictable, mouthpieces, pocket resuscitation masks with one-way valves, and other ventilation devices provide an alternative to mouth-to-mouth resuscitation, preventing exposure of the caregiver's nose and mouth to oral and respiratory fluids during the procedure.

# II.F.2.a. Precautions during aerosol-generating procedures.

The performance of procedures that can generate small-particle aerosols (aerosol-generating procedures), such as bronchoscopy, endotracheal intubation, and open suctioning of the respiratory tract, has been associated with transmission of infectious agents to healthcare personnel, including M. tuberculosis 790 , SARS-CoV 93,94,98 and N. meningitidis 95 . Protection of the eyes, nose and mouth, in addition to gown and gloves, is recommended during performance of these procedures in accordance with Standard Precautions. Use of a particulate respirator is recommended during aerosol-generating procedures when the aerosol is likely to contain M. tuberculosis, SARS-CoV, or avian or pandemic influenza viruses.

# II.G. Patient Placement

# II.G.1. Hospitals and long-term care settings.

Options for patient placement include single patient rooms, two patient rooms, and multi-bed wards. Of these, single patient rooms are preferred when there is a concern about transmission of an infectious agent.
Although some studies have failed to demonstrate the efficacy of single patient rooms to prevent HAIs 791 , other published studies, including one commissioned by the American Institute of Architects and the Facility Guidelines Institute, have documented a beneficial relationship between private rooms and reduction in infectious and noninfectious adverse patient outcomes 792,793 . The AIA notes that private rooms are the trend in hospital planning and design. However, most hospitals and long-term care facilities have multi-bed rooms and must consider many competing priorities when determining the appropriate room placement for patients (e.g., reason for admission; patient characteristics, such as age, gender, mental status; staffing needs; family requests; psychosocial factors; reimbursement concerns). In the absence of obvious infectious diseases that require specified airborne infection isolation rooms (e.g., tuberculosis, SARS, chickenpox), the risk of transmission of infectious agents is not always considered when making placement decisions. When there are only a limited number of single-patient rooms, it is prudent to prioritize them for those patients who have conditions that facilitate transmission of infectious material to other patients (e.g., draining wounds, stool incontinence, uncontained secretions) and for those who are at increased risk of acquisition and adverse outcomes resulting from HAI (e.g., immunosuppression, open wounds, indwelling catheters, anticipated prolonged length of stay, total dependence on HCWs for activities of daily living) 15,24,43,430,794,795 . Single-patient rooms are always indicated for patients placed on Airborne Precautions and in a Protective Environment and are preferred for patients who require Contact or Droplet Precautions 23,24,410,435,796,797 . During a suspected or proven outbreak caused by a pathogen whose reservoir is the gastrointestinal tract, use of single patient rooms with private bathrooms limits opportunities for transmission, especially when the colonized or infected patient has poor personal hygiene habits or fecal incontinence, or cannot be expected to assist in maintaining procedures that prevent transmission of microorganisms (e.g., infants, children, and patients with altered mental status or developmental delay). In the absence of continued transmission, it is not necessary to provide a private bathroom for patients colonized or infected with enteric pathogens as long as personal hygiene practices and Standard Precautions, especially hand hygiene and appropriate environmental cleaning, are maintained. Assignment of a dedicated commode to a patient, and cleaning and disinfecting fixtures and equipment that may have fecal contamination (e.g., bathrooms, commodes 798 , scales used for weighing diapers) and the adjacent surfaces with appropriate agents, may be especially important when a single-patient room cannot be used, since environmental contamination with intestinal tract pathogens is likely from both continent and incontinent patients 54,799 . Results of several studies to determine the benefit of a single-patient room to prevent transmission of Clostridium difficile are inconclusive 167 . Some studies have shown that being in the same room with a colonized or infected patient is not necessarily a risk factor for transmission 791 . However, for children, the risk of healthcare-associated diarrhea is increased with the increased number of patients per room 806 .
Thus, patient factors are important determinants of infection transmission risks, and the need for a single-patient room and/or private bathroom for any patient is best determined on a case-by-case basis. Cohorting is the practice of grouping together patients who are colonized or infected with the same organism to confine their care to one area and prevent contact with other patients. Cohorts are created based on clinical diagnosis, microbiologic confirmation when available, epidemiology, and mode of transmission of the infectious agent. It is generally preferred not to place severely immunosuppressed patients in rooms with other patients. Cohorting has been used extensively for managing outbreaks of MDROs, including MRSA 22,807 , VRE 638,808,809 , MDR-ESBLs 810 , Pseudomonas aeruginosa 29 , methicillin-susceptible Staphylococcus aureus 811 , RSV 812,813 , adenovirus keratoconjunctivitis 814 , rotavirus 815 , and SARS 816 . Modeling studies provide additional support for cohorting patients to control outbreaks. However, cohorting often is implemented only after routine infection control measures have failed to control an outbreak. Assigning or cohorting healthcare personnel to care only for patients infected or colonized with a single target pathogen limits further transmission of the target pathogen to uninfected patients 740,819 but is difficult to achieve in the face of current staffing shortages in hospitals 583 and residential healthcare sites. However, when continued transmission is occurring after implementing routine infection control measures and creating patient cohorts, cohorting of healthcare personnel may be beneficial. During the seasons when RSV, human metapneumovirus 823 , parainfluenza, influenza, other respiratory viruses 824 , and rotavirus are circulating in the community, cohorting based on the presenting clinical syndrome is often a priority in facilities that care for infants and young children 825 . For example, during the respiratory virus season, infants may be cohorted based solely on the clinical diagnosis of bronchiolitis due to the logistical difficulties and costs associated with requiring microbiologic confirmation prior to room placement, and the predominance of RSV during most of the season. However, when available, single patient rooms are always preferred, since a common clinical presentation (e.g., bronchiolitis) can be caused by more than one infectious agent 823,824,826 . Furthermore, the inability of infants and children to contain body fluids, and the close physical contact that occurs during their care, increase infection transmission risks for patients and personnel in this setting 24,795 .

# II.G.2. Ambulatory settings.

Patients actively infected with or incubating transmissible infectious diseases are seen frequently in ambulatory settings (e.g., outpatient clinics, physicians' offices, emergency departments) and potentially expose healthcare personnel and other patients, family members and visitors 21,34,127,135,142,827 . In response to the global outbreak of SARS in 2003 and in preparation for pandemic influenza, healthcare providers working in outpatient settings are urged to implement source containment measures (e.g., asking coughing patients to wear a surgical mask or cover their coughs with tissues) to prevent transmission of respiratory infections, beginning at the point of initial patient encounter 9,262,828 , as described below in section III.A.1.a.
Signs can be posted at the entrance to facilities or at the reception or registration desk requesting that the patient or individuals accompanying the patient promptly inform the receptionist if there are symptoms of a respiratory infection (e.g., cough, flu-like illness, increased production of respiratory secretions). The presence of diarrhea, skin rash, or known or suspected exposure to a transmissible disease (e.g., measles, pertussis, chickenpox, tuberculosis) also could be added. Placing potentially infectious patients without delay in an examination room limits the number of exposed individuals in, for example, the common waiting area. In waiting areas, maintaining a distance between symptomatic and non-symptomatic patients (e.g., >3 feet), in addition to source control measures, may limit exposures. However, infections transmitted via the airborne route (e.g., M. tuberculosis, measles, chickenpox) require additional precautions 12,125,829 . Patients suspected of having such an infection can wear a surgical mask for source containment, if tolerated, and should be placed in an examination room, preferably an AIIR, as soon as possible. If this is not possible, having the patient wear a mask and segregate him/herself from other patients in the waiting area will reduce opportunities to expose others. Since the person(s) accompanying the patient also may be infectious, application of the same infection control precautions may need to be extended to these persons if they are symptomatic 21,252,830 . For example, family members accompanying children admitted with suspected M. tuberculosis have been found to have unsuspected pulmonary tuberculosis with cavitary lesions, even when asymptomatic 42,831 . Patients with underlying conditions that increase their susceptibility to infection (e.g., those who are immunocompromised 43,44 or have cystic fibrosis 20 ) require special efforts to protect them from exposures to infected patients in common waiting areas. If such patients inform the receptionist of their infection risk upon arrival, appropriate steps can be taken to further protect them from infection. In some cystic fibrosis clinics, in order to avoid exposure to other patients who could be colonized with B. cepacia, patients have been given beepers upon registration so that they may leave the area and receive notification to return when an examination room becomes available 832 .

# II.G.3. Home care.

In home care, the patient placement concerns focus on protecting others in the home from exposure to an infectious household member. For individuals who are especially vulnerable to adverse outcomes associated with certain infections, it may be beneficial to either remove them from the home or segregate them within the home. Persons who are not part of the household may need to be prohibited from visiting during the period of infectivity. For example, if a patient with pulmonary tuberculosis is contagious and being cared for at home, very young children (<4 years of age) 833 and immunocompromised persons who have not yet been infected should be removed or excluded from the household. During the SARS outbreak of 2003, segregation of infected persons during the communicable phase of the illness was beneficial in preventing household transmission 249,834 .

# II.H. Transport of Patients

Several principles are used to guide transport of patients requiring Transmission-Based Precautions. In the inpatient and residential settings these include 1.
limiting transport of such patients to essential purposes, such as diagnostic and therapeutic procedures that cannot be performed in the patient's room; 2. when transport is necessary, using appropriate barriers on the patient (e.g., mask, gown, wrapping in sheets, or use of impervious dressings to cover the affected area(s) when infectious skin lesions or drainage are present), consistent with the route and risk of transmission; 3. notifying healthcare personnel in the receiving area of the impending arrival of the patient and of the precautions necessary to prevent transmission; and 4. for patients being transported outside the facility, informing the receiving facility and the medi-van or emergency vehicle personnel in advance about the type of Transmission-Based Precautions being used. For tuberculosis, additional precautions may be needed in a small shared air space such as in an ambulance 12 .

# II.I. Environmental Measures

Cleaning and disinfecting non-critical surfaces in patient-care areas are part of Standard Precautions. In general, these procedures do not need to be changed for patients on Transmission-Based Precautions. The cleaning and disinfection of all patient-care areas is important for frequently touched surfaces, especially those closest to the patient, that are most likely to be contaminated (e.g., bedrails, bedside tables, commodes, doorknobs, sinks, surfaces and equipment in close proximity to the patient) 11,72,73,835 . The frequency or intensity of cleaning may need to change based on the patient's level of hygiene and the degree of environmental contamination, and for certain infectious agents whose reservoir is the intestinal tract 54 . This may be especially true in LTCFs and pediatric facilities, where patients with stool and urine incontinence are encountered more frequently. Also, increased frequency of cleaning may be needed in a Protective Environment to minimize dust accumulation 11 . Special recommendations for cleaning and disinfecting environmental surfaces in dialysis centers have been published 18 . In all healthcare settings, administrative, staffing and scheduling activities should prioritize the proper cleaning and disinfection of surfaces that could be implicated in transmission. During a suspected or proven outbreak where an environmental reservoir is suspected, routine cleaning procedures should be reviewed, and the need for additional trained cleaning staff should be assessed. Adherence should be monitored and reinforced to ensure that consistent and correct cleaning is performed. EPA-registered disinfectants or detergents/disinfectants that best meet the overall needs of the healthcare facility for routine cleaning and disinfection should be selected 11,836 . In general, use of the existing facility detergent/disinfectant according to the manufacturer's recommendations for amount, dilution, and contact time is sufficient to remove pathogens from surfaces of rooms where colonized or infected individuals were housed. This includes those pathogens that are resistant to multiple classes of antimicrobial agents (e.g., C. difficile, VRE, MRSA, MDR-GNB 11,24,88,435,746,796,837 ). Most often, environmental reservoirs of pathogens during outbreaks are related to a failure to follow recommended procedures for cleaning and disinfection rather than to the specific cleaning and disinfectant agents used.
Certain pathogens (e.g., rotavirus, noroviruses, C. difficile) may be resistant to some routinely used hospital disinfectants 275,292 . The role of specific disinfectants in limiting transmission of rotavirus has been demonstrated experimentally 842 . Also, since C. difficile may display increased levels of spore production when exposed to non-chlorine-based cleaning agents, and the spores are more resistant than vegetative cells to commonly used surface disinfectants, some investigators have recommended the use of a 1:10 dilution of 5.25% sodium hypochlorite (household bleach) and water for routine environmental disinfection of rooms of patients with C. difficile when there is continued transmission 844,848 (a worked dilution calculation is sketched at the end of this section). In one study, the use of a hypochlorite solution was associated with a decrease in rates of C. difficile infections 847 . The need to change disinfectants based on the presence of these organisms can be determined in consultation with the infection control committee 11,847,848 . Detailed recommendations for disinfection and sterilization of surfaces and medical equipment that have been in contact with prion-containing tissue or high risk body fluids, and for cleaning of blood and body substance spills, are available in the Guidelines for Environmental Infection Control in Health-Care Facilities 11 and in the Guideline for Disinfection and Sterilization 848 .

# II.J. Patient Care Equipment and Instruments/Devices

Medical equipment and instruments/devices must be cleaned and maintained according to the manufacturers' instructions to prevent patient-to-patient transmission of infectious agents 86,87,325,849 . Cleaning to remove organic material must always precede high level disinfection and sterilization of critical and semi-critical instruments and devices, because residual proteinaceous material reduces the effectiveness of the disinfection and sterilization processes 836,848 . Noncritical equipment, such as commodes, intravenous pumps, and ventilators, must be thoroughly cleaned and disinfected before use on another patient. All such equipment and devices should be handled in a manner that will prevent HCW and environmental contact with potentially infectious material. It is important to include computers and personal digital assistants (PDAs) used in patient care in policies for cleaning and disinfection of non-critical items. The literature on contamination of computers with pathogens has been summarized 850 , and two reports have linked computer contamination to colonization and infections in patients 851,852 . Although keyboard covers and washable keyboards that can be easily disinfected are in use, the infection control benefit of those items and their optimal management have not been determined. In all healthcare settings, providing patients who are on Transmission-Based Precautions with dedicated noncritical medical equipment (e.g., stethoscope, blood pressure cuff, electronic thermometer) has been beneficial for preventing transmission 74,89,740,853,854 . When this is not possible, disinfection after use is recommended. Consult other guidelines for detailed guidance in developing specific protocols for cleaning and reprocessing medical equipment and patient care items in both routine and special circumstances 11,14,18,20,740,836,848 . In home care, it is preferable to remove visible blood or body fluids from durable medical equipment before it leaves the home. Equipment can be cleaned on-site using a detergent/disinfectant and, when possible, should be placed in a single plastic bag for transport to the reprocessing location 20,739 .
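As a quantitative aside to the 1:10 hypochlorite dilution recommended under Environmental Measures (II.I) above, the following is a minimal sketch of the arithmetic, assuming "1:10" denotes one part bleach in ten parts total solution:

\[
5.25\%\ \mathrm{NaOCl} \approx 52{,}500\ \mathrm{ppm\ available\ chlorine}; \qquad \frac{52{,}500\ \mathrm{ppm}}{10} \approx 5{,}250\ \mathrm{ppm}
\]

That is, one part household bleach diluted with nine parts water yields roughly 5,000-6,000 ppm available chlorine, a concentration reported to have sporicidal activity and therefore relevant to C. difficile contamination.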
# II.K. Textiles and Laundry

Soiled textiles, including bedding, towels, and patient or resident clothing, may be contaminated with pathogenic microorganisms. However, the risk of disease transmission is negligible if they are handled, transported, and laundered in a safe manner 11,855,856 . Key principles for handling soiled laundry are 1. not shaking the items or handling them in any way that may aerosolize infectious agents; 2. avoiding contact of one's body and personal clothing with the soiled items being handled; and 3. containing soiled items in a laundry bag or designated bin. When laundry chutes are used, they must be maintained to minimize dispersion of aerosols from contaminated items 11 . The methods for handling, transporting, and laundering soiled textiles are determined by organizational policy and any applicable regulations 739 ; guidance is provided in the Guidelines for Environmental Infection Control 11 . Rather than rigid rules and regulations, hygienic and common-sense storage and processing of clean textiles is recommended 11,857 . When laundering occurs outside of a healthcare facility, the clean items must be packaged or completely covered and placed in an enclosed space during transport to prevent contamination with outside air or construction dust that could contain infectious fungal spores that are a risk for immunocompromised patients 11 . Institutions are required to launder garments used as personal protective equipment and uniforms visibly soiled with blood or infective material 739 . There are few data to determine the safety of home laundering of HCW uniforms, but no increase in infection rates was observed in the one published study 858 , and no pathogens were recovered from home- or hospital-laundered scrubs in another study 859 . In the home, textiles and laundry from patients with potentially transmissible infectious pathogens do not require special handling or separate laundering, and may be washed with warm water and detergent 11,858,859 .

# II.L. Solid Waste

The management of solid waste emanating from the healthcare environment is subject to federal and state regulations for medical and non-medical waste 860,861 . No additional precautions are needed for non-medical solid waste that is being removed from rooms of patients on Transmission-Based Precautions. Solid waste may be contained in a single bag (as compared to using two bags) of sufficient strength 862 .

# II.M. Dishware and Eating Utensils

The combination of hot water and detergents used in dishwashers is sufficient to decontaminate dishware and eating utensils. Therefore, no special precautions are needed for dishware (e.g., dishes, glasses, cups) or eating utensils; reusable dishware and utensils may be used for patients requiring Transmission-Based Precautions. In the home and other communal settings, eating utensils and drinking vessels that are being used should not be shared, consistent with principles of good personal hygiene and for the purpose of preventing transmission of respiratory viruses, herpes simplex virus, and infectious agents that infect the gastrointestinal tract and are transmitted by the fecal/oral route (e.g., hepatitis A virus, noroviruses). If adequate resources for cleaning utensils and dishes are not available, disposable products may be used.

# II.N. Adjunctive Measures

Important adjunctive measures that are not considered primary components of programs to prevent transmission of infectious agents, but improve the effectiveness of such programs, include 1.
antimicrobial management programs; 2. postexposure chemoprophylaxis with antiviral or antibacterial agents; 3. vaccines used both for pre- and postexposure prevention; and 4. screening and restricting visitors with signs of transmissible infections. Detailed discussion of judicious use of antimicrobial agents is beyond the scope of this document; however, the topic is addressed in the Management of Multidrug-Resistant Organisms in Healthcare Settings 2006.

# II.N.1. Chemoprophylaxis.

Antimicrobial agents and topical antiseptics may be used to prevent infection and potential outbreaks of selected agents. Infections for which postexposure chemoprophylaxis is recommended under defined conditions include B. pertussis 17,863 , N. meningitidis 864 , B. anthracis after environmental exposure to aerosolizable material 865 , influenza virus 611 , HIV 866 , and group A streptococcus 160 . Orally administered antimicrobials may also be used under defined circumstances for MRSA decolonization of patients or healthcare personnel 867 . Another form of chemoprophylaxis is the use of topical antiseptic agents. For example, triple dye is used routinely on the umbilical cords of term newborns to reduce the risk of colonization, skin infections, and omphalitis caused by S. aureus, including MRSA, and group A streptococcus 868,869 . Extension of the use of triple dye to low birth weight infants in the NICU was one component of a program that controlled one longstanding MRSA outbreak 22 . Topical antiseptics are also used for decolonization of healthcare personnel or selected patients colonized with MRSA, using mupirocin, as discussed in the MDRO guideline 867,870-873 .

# II.N.2. Immunoprophylaxis.

Certain immunizations recommended for susceptible healthcare personnel have decreased the risk of infection and the potential for transmission in healthcare facilities 17,874 . The OSHA mandate that requires employers to offer hepatitis B vaccination to HCWs played a substantial role in the sharp decline in incidence of occupational HBV infection 778,875 . The use of varicella vaccine in healthcare personnel has decreased the need to place susceptible HCWs on administrative leave following exposure to patients with varicella 775 . Also, reports of healthcare-associated transmission of rubella in obstetrical clinics 33,876 and measles in acute care settings 34 demonstrate the importance of immunization of susceptible healthcare personnel against childhood diseases. Many states have requirements for HCW vaccination for measles and rubella in the absence of evidence of immunity. Annual influenza vaccine campaigns targeted to patients and healthcare personnel in LTCFs and acute-care settings have been instrumental in preventing or limiting institutional outbreaks, and increasing attention is being directed toward improving influenza vaccination rates in healthcare personnel 35,611,690,877-879 . Transmission of B. pertussis in healthcare facilities has been associated with large and costly outbreaks that include both healthcare personnel and patients 17,36,41,100,683,827,880,881 . HCWs who have close contact with infants with pertussis are at particularly high risk because of waning immunity and, until 2005, the absence of a vaccine that could be used in adults. However, two acellular pertussis vaccines were licensed in the United States in 2005, one for use in individuals aged 11-18 years and one for use in persons aged 10-64 years 882 .
Provisional ACIP recommendations at the time of publication of this document include vaccination of adolescents and adults, especially those with contact with infants <12 months of age and healthcare personnel with direct patient contact 883,884 . Immunization of children and adults will help prevent the introduction of vaccine-preventable diseases into healthcare settings. The recommended immunization schedule for children is published annually in the January issues of the Morbidity and Mortality Weekly Report, with interim updates as needed 885,886 . An adult immunization schedule also is available for healthy adults and those with special immunization needs due to high risk medical conditions 887 . Some vaccines are also used for postexposure prophylaxis of susceptible individuals, including varicella 888 , influenza 611 , hepatitis B 778 , and smallpox 225 vaccines 17,874 . In the future, administration of a newly developed S. aureus conjugate vaccine (still under investigation) to selected patients may provide a novel method of preventing healthcare-associated S. aureus, including MRSA, infections in high-risk groups (e.g., hemodialysis patients and candidates for selected surgical procedures) 889,890 . Immune globulin preparations also are used for postexposure prophylaxis of certain infectious agents under specified circumstances (e.g., varicella-zoster virus, hepatitis B virus, rabies, measles, and hepatitis A virus 17,833,874 ). The RSV monoclonal antibody preparation, palivizumab, may have contributed to controlling a nosocomial outbreak of RSV in one NICU, but there is insufficient evidence to support a routine recommendation for its use in this setting 891 .

# II.N.3. Management of visitors.

# II.N.3.a. Visitors as sources of infection.

Visitors have been identified as the source of several types of HAIs (e.g., pertussis 40,41 , M. tuberculosis 42,892 , influenza and other respiratory viruses 24,43,44,373 , and SARS 21 ). However, effective methods for visitor screening in healthcare settings have not been studied. Visitor screening is especially important during community outbreaks of infectious diseases and for high risk patient units. Sibling visits are often encouraged in birthing centers, postpartum rooms, pediatric inpatient units, ICUs, and residential settings for children; in hospital settings, a child visitor should visit only his or her own sibling. Screening of visiting siblings and other children before they are allowed into clinical areas is necessary to prevent the introduction of childhood illnesses and common respiratory infections. Screening may be passive, through the use of signs alerting family members and visitors with signs and symptoms of communicable diseases not to enter clinical areas. More active screening may include the completion of a screening tool or questionnaire that elicits information related to recent exposures or current symptoms. That information is reviewed by the facility staff, and the visitor is either permitted to visit or is excluded 833 . Family and household members visiting pediatric patients with pertussis and tuberculosis may need to be screened for a history of exposure as well as signs and symptoms of current infection. Potentially infectious visitors are excluded until they receive appropriate medical screening, diagnosis, or treatment.
If exclusion is not considered to be in the best interest of the patient or family (i.e., primary family members of critically or terminally ill patients), then the symptomatic visitor must wear a mask while in the healthcare facility and remain in the patient's room, avoiding exposure to others, especially in public waiting areas and the cafeteria. Visitor screening is used consistently on HSCT units 15,43 . However, considering the experience during the 2003 SARS outbreaks and the potential for pandemic influenza, developing effective visitor screening systems will be beneficial 9 . Education concerning Respiratory Hygiene/Cough Etiquette is a useful adjunct to visitor screening.

# II.N.3.b. Use of barrier precautions by visitors.

The use of gowns, gloves, or masks by visitors in healthcare settings has not been addressed specifically in the scientific literature. Some studies included the use of gowns and gloves by visitors in the control of MDROs but did not perform a separate analysis to determine whether their use by visitors had a measurable impact. Family members or visitors who are providing care or having very close patient contact (e.g., feeding, holding) may have contact with other patients and could contribute to transmission if barrier precautions are not used correctly. Specific recommendations may vary by facility or by unit and should be determined by the level of interaction.

# Part III: Precautions to Prevent Transmission of Infectious Agents

There are two tiers of HICPAC/CDC precautions to prevent transmission of infectious agents: Standard Precautions and Transmission-Based Precautions. Standard Precautions are intended to be applied to the care of all patients in all healthcare settings, regardless of the suspected or confirmed presence of an infectious agent. Implementation of Standard Precautions constitutes the primary strategy for the prevention of healthcare-associated transmission of infectious agents among patients and healthcare personnel. Transmission-Based Precautions are for patients who are known or suspected to be infected or colonized with infectious agents, including certain epidemiologically important pathogens, which require additional control measures to effectively prevent transmission. Since the infecting agent often is not known at the time of admission to a healthcare facility, Transmission-Based Precautions are used empirically, according to the clinical syndrome and the likely etiologic agents at the time, and then modified when the pathogen is identified or a transmissible infectious etiology is ruled out. Examples of this syndromic approach are presented in Table 2. The HICPAC/CDC Guidelines also include recommendations for creating a Protective Environment for allogeneic HSCT patients. See Tables 4 and 5 for summaries of the key elements of these sets of precautions.

# III.A. Standard Precautions

Standard Precautions combine the major features of Universal Precautions (UP) 780,896 and Body Substance Isolation (BSI) 640 and are based on the principle that all blood, body fluids, secretions, excretions except sweat, nonintact skin, and mucous membranes may contain transmissible infectious agents. Standard Precautions include a group of infection prevention practices that apply to all patients, regardless of suspected or confirmed infection status, in any setting in which healthcare is delivered (Table 4).
These include: hand hygiene; use of gloves, gown, mask, eye protection, or face shield, depending on the anticipated exposure; and safe injection practices. Also, equipment or items in the patient environment likely to have been contaminated with infectious body fluids must be handled in a manner to prevent transmission of infectious agents (e.g., wear gloves for direct contact, contain heavily soiled equipment, properly clean and disinfect or sterilize reusable equipment before use on another patient). The application of Standard Precautions during patient care is determined by the nature of the HCW-patient interaction and the extent of anticipated blood, body fluid, or pathogen exposure. For some interactions (e.g., performing venipuncture), only gloves may be needed; during other interactions (e.g., intubation), use of gloves, gown, and face shield or mask and goggles is necessary. Education and training on the principles and rationale for recommended practices are critical elements of Standard Precautions because they facilitate appropriate decision-making and promote adherence when HCWs are faced with new circumstances 655 . An example of the importance of the use of Standard Precautions is intubation, especially under emergency circumstances when infectious agents may not be suspected, but later are identified (e.g., SARS-CoV, N. meningitidis). The application of Standard Precautions is described below and summarized in Table 4. Guidance on donning and removing gloves, gowns and other PPE is presented in the Figure. Standard Precautions are also intended to protect patients by ensuring that healthcare personnel do not carry infectious agents to patients on their hands or via equipment used during patient care 21,254,897 .

# III.A.1.a. Respiratory Hygiene/Cough Etiquette.

The strategy proposed to contain respiratory secretions in patients and accompanying individuals at the initial point of encounter with a healthcare setting has been termed Respiratory Hygiene/Cough Etiquette 9,828 and is intended to be incorporated into infection control practices as a new component of Standard Precautions. The strategy is targeted at patients and accompanying family members and friends with undiagnosed transmissible respiratory infections, and applies to any person with signs of illness, including cough, congestion, rhinorrhea, or increased production of respiratory secretions, when entering a healthcare facility 40,41,43 . The term cough etiquette is derived from recommended source control measures for M. tuberculosis 12,126 . The elements of Respiratory Hygiene/Cough Etiquette include 1. education of healthcare facility staff, patients, and visitors; 2. posted signs, in language(s) appropriate to the population served, with instructions to patients and accompanying family members or friends; 3. source control measures (e.g., covering the mouth/nose with a tissue when coughing and prompt disposal of used tissues, using surgical masks on the coughing person when tolerated and appropriate); 4. hand hygiene after contact with respiratory secretions; and 5. spatial separation, ideally >3 feet, of persons with respiratory infections in common waiting areas when possible. Covering sneezes and coughs and placing masks on coughing patients are proven means of source containment that prevent infected persons from dispersing respiratory secretions into the air 107,145,898,899 . Masking may be difficult in some settings (e.g., pediatrics), in which case the emphasis by necessity may be on cough etiquette 900 . Physical proximity of <3 feet has been associated with an increased risk for transmission of infections via the droplet route
(e.g., N. meningitidis 103 and group A streptococcus 114 ), and therefore supports the practice of distancing infected persons from others who are not infected. The effectiveness of good hygiene practices, especially hand hygiene, in preventing transmission of viruses and reducing the incidence of respiratory infections both within and outside healthcare settings is summarized in several reviews 559,717,904 . These measures should be effective in decreasing the risk of transmission of pathogens contained in large respiratory droplets (e.g., influenza virus 23 , adenovirus 111 , B. pertussis 827 , and Mycoplasma pneumoniae 112 ). Although fever will be present in many respiratory infections, patients with pertussis and mild upper respiratory tract infections are often afebrile. Therefore, the absence of fever does not always exclude a respiratory infection. Patients who have asthma, allergic rhinitis, or chronic obstructive lung disease also may be coughing and sneezing. While these patients often are not infectious, cough etiquette measures are prudent. Healthcare personnel are advised to observe Droplet Precautions (i.e., wear a mask) and hand hygiene when examining and caring for patients with signs and symptoms of a respiratory infection. Healthcare personnel who have a respiratory infection are advised to avoid direct patient contact, especially with high risk patients. If this is not possible, then a mask should be worn while providing patient care.

# III.A.1.b. Safe injection practices.

The investigation of four large outbreaks of HBV and HCV among patients in ambulatory care facilities in the United States identified a need to define and reinforce safe injection practices 453 . The four outbreaks occurred in a private medical practice, a pain clinic, an endoscopy clinic, and a hematology/oncology clinic. The primary breaches in infection control practice that contributed to these outbreaks were 1. reinsertion of used needles into a multiple-dose vial or solution container (e.g., saline bag) and 2. use of a single needle/syringe to administer intravenous medication to multiple patients. In one of these outbreaks, preparation of medications in the same workspace where used needles/syringes were dismantled also may have been a contributing factor. These and other outbreaks of viral hepatitis could have been prevented by adherence to basic principles of aseptic technique for the preparation and administration of parenteral medications 453,454 . These include the use of a sterile, single-use, disposable needle and syringe for each injection given and prevention of contamination of injection equipment and medication. Whenever possible, use of single-dose vials is preferred over multiple-dose vials, especially when medications will be administered to multiple patients. Outbreaks related to unsafe injection practices indicate that some healthcare personnel are unaware of, do not understand, or do not adhere to basic principles of infection control and aseptic technique. A survey of U.S. healthcare workers who provide medication through injection found that 1% to 3% reused the same needle and/or syringe on multiple patients 905 . Among the deficiencies identified in recent outbreaks were a lack of oversight of personnel and failure to follow up on reported breaches in infection control practices in ambulatory settings.
Therefore, to ensure that all healthcare workers understand and adhere to recommended practices, principles of infection control and aseptic technique need to be reinforced in training programs and incorporated into institutional policies that are monitored for adherence 454 .

# III.A.1.c. Infection Control Practices for Special Lumbar Puncture Procedures.

In 2004, CDC investigated eight cases of post-myelography meningitis that were either reported to CDC or identified through a survey of the Emerging Infections Network of the Infectious Disease Society of America. Blood and/or cerebrospinal fluid of all eight cases yielded streptococcal species consistent with oropharyngeal flora, and there were changes in the CSF indices and clinical status indicative of bacterial meningitis. Equipment and products used during these procedures (e.g., contrast media) were excluded as probable sources of contamination. Procedural details available for seven cases determined that antiseptic skin preparations and sterile gloves had been used. However, none of the clinicians wore a face mask, giving rise to the speculation that droplet transmission of oropharyngeal flora was the most likely explanation for these infections. Bacterial meningitis following myelogram and other spinal procedures (e.g., lumbar puncture, spinal and epidural anesthesia, intrathecal chemotherapy) has been reported previously. As a result, the question of whether face masks should be worn to prevent droplet spread of oral flora during spinal procedures (e.g., myelogram, lumbar puncture, spinal anesthesia) has been debated 916,917 . Face masks are effective in limiting the dispersal of oropharyngeal droplets 918 and are recommended for the placement of central venous catheters 919 . In October 2005, the Healthcare Infection Control Practices Advisory Committee (HICPAC) reviewed the evidence and concluded that there is sufficient experience to warrant the additional protection of a face mask for the individual placing a catheter or injecting material into the spinal or epidural space.

# III.B. Transmission-Based Precautions

There are three categories of Transmission-Based Precautions: Contact Precautions, Droplet Precautions, and Airborne Precautions. Transmission-Based Precautions are used when the route(s) of transmission is (are) not completely interrupted using Standard Precautions alone. For some diseases that have multiple routes of transmission (e.g., SARS), more than one Transmission-Based Precautions category may be used. When used either singly or in combination, they are always used in addition to Standard Precautions. See Appendix A for recommended precautions for specific infections. When Transmission-Based Precautions are indicated, efforts must be made to counteract possible adverse effects on patients (i.e., anxiety, depression and other mood disturbances, perceptions of stigma 923 , reduced contact with clinical staff, and increases in preventable adverse events 565 ) in order to improve acceptance by the patients and adherence by HCWs.

# III.B.1. Contact precautions.

Contact Precautions are intended to prevent transmission of infectious agents, including epidemiologically important microorganisms, that are spread by direct or indirect contact with the patient or the patient's environment, as described in I.B.

# III.B.3. Airborne precautions.

Patients known or suspected to be infected with agents transmitted person-to-person by the airborne route are preferably placed in an airborne infection isolation room (AIIR) 12,13 . Some states require the availability of such rooms in hospitals, emergency departments, and nursing homes that care for patients with M. tuberculosis.
A respiratory protection program that includes education about use of respirators, fit-testing, and user seal checks is required in any facility with AIIRs. In settings where Airborne Precautions cannot be implemented due to limited engineering resources (e.g., physician offices), masking the patient, placing the patient in a private room (e.g., office examination room) with the door closed, and providing healthcare personnel with N95 or higher level respirators (or masks if respirators are not available) will reduce the likelihood of airborne transmission until the patient is either transferred to a facility with an AIIR or returned to the home environment, as deemed medically appropriate. Healthcare personnel caring for patients on Airborne Precautions wear a mask or respirator, depending on the disease-specific recommendations (Respiratory Protection II.E.4, Table 2, and Appendix A), that is donned prior to room entry. Whenever possible, non-immune HCWs should not care for patients with vaccine-preventable airborne diseases (e.g., measles, chickenpox, and smallpox).

# III.C. Syndromic and Empiric Applications of Transmission-Based Precautions

Diagnosis of many infections requires laboratory confirmation. Since laboratory tests, especially those that depend on culture techniques, often require two or more days for completion, Transmission-Based Precautions must be implemented while test results are pending, based on the clinical presentation and likely pathogens. Use of appropriate Transmission-Based Precautions at the time a patient develops symptoms or signs of transmissible infection, or arrives at a healthcare facility for care, reduces transmission opportunities. While it is not possible to identify prospectively all patients needing Transmission-Based Precautions, certain clinical syndromes and conditions carry a sufficiently high risk to warrant their use empirically while confirmatory tests are pending (Table 2). Infection control professionals are encouraged to modify or adapt this table according to local conditions.

# III.D. Discontinuation of Transmission-Based Precautions

Transmission-Based Precautions remain in effect for limited periods of time (i.e., while the risk for transmission of the infectious agent persists or for the duration of the illness) (Appendix A). For most infectious diseases, this duration reflects known patterns of persistence and shedding of infectious agents associated with the natural history of the infectious process and its treatment. For some diseases (e.g., pharyngeal or cutaneous diphtheria, RSV), Transmission-Based Precautions remain in effect until culture or antigen-detection test results document eradication of the pathogen and, for RSV, symptomatic disease is resolved. For other diseases (e.g., M. tuberculosis), state laws and regulations, and healthcare facility policies, may dictate the duration of precautions 12 . In immunocompromised patients, viral shedding can persist for prolonged periods of time (many weeks to months) and transmission to others may occur during that time; therefore, the duration of contact and/or droplet precautions may be prolonged for many weeks 500 . The duration of Contact Precautions for patients who are colonized or infected with MDROs remains undefined. MRSA is the only MDRO for which effective decolonization regimens are available 867 . However, carriers of MRSA who have negative nasal cultures after a course of systemic or topical therapy may resume shedding MRSA in the weeks that follow therapy 934,935 .
Although early guidelines for VRE suggested discontinuation of Contact Precautions after three stool cultures obtained at weekly intervals proved negative 740 , subsequent experience has indicated that such screening may fail to detect colonization that can persist for >1 year 27 . Likewise, available data indicate that colonization with VRE, MRSA 939 , and possibly MDR-GNB can persist for many months, especially in the presence of severe underlying disease, invasive devices, and recurrent courses of antimicrobial agents. It may be prudent to assume that MDRO carriers are colonized permanently and manage them accordingly. Alternatively, an interval free of hospitalizations, antimicrobial therapy, and invasive devices (e.g., 6 or 12 months) before reculturing patients to document clearance of carriage may be used. Determination of the best strategy awaits the results of additional studies. See the 2006 HICPAC/CDC MDRO guideline 927 for discussion of possible criteria to discontinue Contact Precautions for patients colonized or infected with MDROs.

# III.E. Application of Transmission-Based Precautions in Ambulatory and Home Care Settings

Although Transmission-Based Precautions generally apply in all healthcare settings, exceptions exist. For example, in home care, AIIRs are not available. Furthermore, family members already exposed to diseases such as varicella and tuberculosis would not use masks or respiratory protection, but visiting HCWs would need to use such protection. Similarly, management of patients colonized or infected with MDROs may necessitate Contact Precautions in acute care hospitals and in some LTCFs when there is continued transmission, but the risk of transmission in ambulatory care and home care has not been defined. Consistent use of Standard Precautions may suffice in these settings, but more information is needed.

# III.F. Protective Environment

A Protective Environment is designed for allogeneic HSCT patients to minimize fungal spore counts in the air and reduce the risk of invasive environmental fungal infections (see Table 5 for specifications) 11 . The need for such controls has been demonstrated in studies of Aspergillus outbreaks associated with construction 11,14,15,157,158 . As defined by the American Institute of Architects 13 and presented in detail in the Guideline for Environmental Infection Control 2003 11,861 , air quality for HSCT patients is improved through a combination of environmental controls that include 1. HEPA filtration of incoming air; 2. directed room air flow; 3. positive room air pressure relative to the corridor; 4. well-sealed rooms (including sealed walls, floors, ceilings, windows, electrical outlets) to prevent flow of air from the outside; 5. ventilation to provide >12 air changes per hour; 6. strategies to minimize dust (e.g., scrubbable surfaces rather than upholstery 940 and carpet 941 , and routinely cleaning crevices and sprinkler heads); and 7. prohibiting dried and fresh flowers and potted plants in the rooms of HSCT patients. The latter is based on molecular typing studies that have found indistinguishable strains of Aspergillus terreus in patients with hematologic malignancies and in potted plants in the vicinity of the patients. The desired quality of air may be achieved without incurring the inconvenience or expense of laminar airflow 15,157 .
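The ventilation specification in item 5 has a quantitative interpretation. The following is a minimal sketch, assuming the perfect-mixing dilution model used in the CDC environmental guideline, in which the time t (in minutes) for the airborne contaminant concentration to fall from C_0 to C at a given air change rate (ACH) is

\[
t = \frac{60}{\mathrm{ACH}} \, \ln\!\left(\frac{C_0}{C}\right)
\]

At 12 ACH, 99% removal of airborne spores (C_0/C = 100) takes about (60/12) x ln 100, or roughly 23 minutes; at 6 ACH, the same reduction takes about 46 minutes, and 99.9% removal takes about 69 minutes. The same relationship underlies the guidance elsewhere in this document that an examination room occupied by a patient with a suspected airborne infection remain vacant for roughly one hour before reuse.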
To prevent inhalation of fungal spores during periods when construction, renovation, or other dust-generating activities are ongoing in and around the healthcare facility, it has been advised that severely immunocompromised patients wear a high-efficiency respiratory-protection device (e.g., an N95 respirator) when they leave the Protective Environment 11,14,945 . The use of masks or respirators by HSCT patients when they are outside of the Protective Environment for prevention of environmental fungal infections in the absence of construction has not been evaluated. A Protective Environment does not include the use of barrier precautions beyond those indicated for Standard and Transmission-Based Precautions. No published reports support the benefit of placing solid organ transplant recipients or other immunocompromised patients in a Protective Environment.

# Part IV: Recommendations

These recommendations are designed to prevent transmission of infectious agents among patients and healthcare personnel in all settings where healthcare is delivered. As in other CDC/HICPAC guidelines, each recommendation is categorized on the basis of existing scientific data, theoretical rationale, applicability, and, when possible, economic impact. The CDC/HICPAC system for categorizing recommendations is as follows:

Category IA. Strongly recommended for implementation and strongly supported by well-designed experimental, clinical, or epidemiologic studies.

Category IB. Strongly recommended for implementation and supported by some experimental, clinical, or epidemiologic studies and a strong theoretical rationale.

Category IC. Required for implementation, as mandated by federal and/or state regulation or standard.

Category II. Suggested for implementation and supported by suggestive clinical or epidemiologic studies or a theoretical rationale.

No recommendation; unresolved issue. Practices for which insufficient evidence or no consensus regarding efficacy exists.

# I. Administrative Responsibilities

Healthcare organization administrators should ensure the implementation of recommendations in this section.

I.A. Incorporate preventing transmission of infectious agents into the objectives of the organization's patient and occupational safety programs 543-546, 561, 620, 626, 946, 566, 247, 687 . Category IB

III.E. Review periodically information on community or regional trends in the incidence and prevalence of epidemiologically-important organisms (e.g., influenza, RSV, pertussis, invasive group A streptococcal disease, MRSA, VRE) (including in other healthcare facilities) that may impact transmission of organisms within the facility 398, 687, 972-974 . Category II

# IV. Standard Precautions

Assume that every person is potentially infected or colonized with an organism that could be transmitted in the healthcare setting, and apply the following infection control practices during the delivery of health care 956 .

# IV.A. Hand Hygiene

# IV.E. Patient-care Equipment and Instruments/Devices

IV.E.1. Establish policies and procedures for containing, transporting, and handling patient-care equipment and instruments/devices that may be contaminated with blood or body fluids 18,739,975 . Category IB/IC

IV.E.2. Remove organic material from critical and semi-critical instruments/devices, using recommended cleaning agents, before high level disinfection and sterilization to enable effective disinfection and sterilization processes 836,991,992 . Category IA
IV.E.3. Wear PPE (e.g., gloves, gown), according to the level of anticipated contamination, when handling patient-care equipment and instruments/devices that are visibly soiled or may have been in contact with blood or body fluids 18,739,975 . Category IB/IC

# IV.F. Care of the Environment 11

IV.F.1. Establish policies and procedures for routine and targeted cleaning of environmental surfaces as indicated by the level of patient contact and degree of soiling 11 . Category II

IV.F.2. Clean and disinfect surfaces that are likely to be contaminated with pathogens, including those that are in close proximity to the patient (e.g., bed rails, over-bed tables) and frequently-touched surfaces in the patient care environment (e.g., door knobs, surfaces in and surrounding toilets in patients' rooms), on a more frequent schedule compared to that for other surfaces (e.g., horizontal surfaces in waiting rooms) 11,72,73,740,746,800,835,993-995 . Category IB

IV.F.3. Use EPA-registered disinfectants that have microbicidal (i.e., killing) activity against the pathogens most likely to contaminate the patient-care environment. Use in accordance with manufacturer's instructions 842-844,956,996 . Category IB/IC

IV.F.3.a. Review the efficacy of in-use disinfectants when evidence of continuing transmission of an infectious agent (e.g., rotavirus, C. difficile, norovirus) may indicate resistance to the in-use product, and change to a more effective disinfectant as indicated 275,842,847 . Category II

IV.F.4. In facilities that provide health care to pediatric patients or have waiting areas with child play toys (e.g., obstetric/gynecology offices and clinics), establish policies and procedures for cleaning and disinfecting toys at regular intervals 379,80 . Category IB

IV.F.4.a. Use the following principles in developing this policy and procedures: Category II
- Select play toys that can be easily cleaned and disinfected
- Do not permit use of stuffed furry toys if they will be shared
- Clean and disinfect large stationary toys (e.g., climbing equipment) at least weekly and whenever visibly soiled
- If toys are likely to be mouthed, rinse with water after disinfection; alternatively wash in a dishwasher
- When a toy requires cleaning and disinfection, do so immediately or store in a designated labeled container separate from toys that are clean and ready for use

IV.F.5. Include multi-use electronic equipment in policies and procedures for preventing contamination and for cleaning and disinfection, especially those items that are used by patients, those used during delivery of patient care, and mobile devices that are moved in and out of patient rooms frequently (e.g., daily) 850-852,997 . Category IB

IV.F.5.a. No recommendation for use of removable protective covers or washable keyboards. Unresolved issue

# IV.G. Textiles and Laundry

IV.G.1. Handle used textiles and fabrics with minimum agitation to avoid contamination of air, surfaces and persons 739,998,999 . Category IB/IC

IV.G.2. If laundry chutes are used, ensure that they are properly designed, maintained, and used in a manner to minimize dispersion of aerosols from contaminated laundry 11,13,1000,1001 . Category IB/IC

# IV.H. Safe Injection Practices
The following recommendations apply to the use of needles, cannulas that replace needles, and, where applicable, intravenous delivery systems 454.

IV.H.1. Use aseptic technique to avoid contamination of sterile injection equipment 1002, 1003. Category IA

IV.H.2. Do not administer medications from a syringe to multiple patients, even if the needle or cannula on the syringe is changed. Needles, cannulas, and syringes are sterile, single-use items; they should not be reused for another patient nor to access a medication or solution that might be used for a subsequent patient 453, 919, 1004, 1005. Category IA

IV.H.3. Use fluid infusion and administration sets (i.e., intravenous bags, tubing, and connectors) for one patient only, and dispose of them appropriately after use. Consider a syringe or needle/cannula contaminated once it has been used to enter or connect to a patient's intravenous infusion bag or administration set 453. Category IB

IV.H.4. Use single-dose vials for parenteral medications whenever possible 453. Category IA

IV.H.5. Do not administer medications from single-dose vials or ampules to multiple patients or combine leftover contents for later use 369, 453, 1005. Category IA

IV.H.6. If multidose vials must be used, both the needle or cannula and the syringe used to access the multidose vial must be sterile 453, 1002. Category IA

IV.H.7. Do not keep multidose vials in the immediate patient treatment area, and store them in accordance with the manufacturer's recommendations; discard if sterility is compromised or questionable 453, 1003. Category IA

IV.H.8. Do not use bags or bottles of intravenous solution as a common source of supply for multiple patients 453, 1006. Category IB

# IV.I. Infection control practices for special lumbar puncture procedures

Wear a surgical mask when placing a catheter or injecting material into the spinal canal or subdural space (i.e., during myelograms, lumbar puncture, and spinal or epidural anesthesia) 906.

# V.C. Droplet Precautions

Use Droplet Precautions as recommended in Appendix A for patients known or suspected to be infected with pathogens transmitted by respiratory droplets (i.e., large-particle droplets >5 µm in size) that are generated by a patient who is coughing, sneezing, or talking 14, 23, Steinberg, 1969.

If it is not possible to exhaust air from an AIIR directly to the outside, the air may be returned to the air-handling system or adjacent spaces if all air is directed through HEPA filters.

V.D.2.a.iii. Whenever an AIIR is in use for a patient on Airborne Precautions, monitor air pressure daily with visual indicators (e.g., smoke tubes, flutter strips), regardless of the presence of differential pressure sensing devices (e.g., manometers) 11, 12, 1023, 1024.

V.D.2.a.iv. Keep the AIIR door closed when not required for entry and exit.

V.D.2.b. When an AIIR is not available, transfer the patient to a facility that has an available AIIR 12. Category II

V.D.2.c. In the event of an outbreak or exposure involving large numbers of patients who require Airborne Precautions:

- Consult infection control professionals before patient placement to determine the safety of alternative rooms that do not meet engineering requirements for an AIIR.
- Place together (cohort) patients who are presumed to have the same infection (based on clinical presentation and diagnosis, when known) in areas of the facility that are away from other patients, especially patients who are at increased risk for infection (e.g., immunocompromised patients).
- Use temporary portable solutions (e.g., exhaust fan) to create a negative-pressure environment in the converted area of the facility. Discharge air directly to the outside, away from people and air intakes, or direct all the air through HEPA filters before it is introduced to other air spaces 12. Category II

V.D.2.d. In ambulatory settings:

V.D.2.d.i. Develop systems (e.g., triage, signage) to identify patients with known or suspected infections that require Airborne Precautions upon entry into ambulatory settings 9, 12, 34, 127, 134. Category IA

V.D.2.d.ii. Place the patient in an AIIR as soon as possible. If an AIIR is not available, place a surgical mask on the patient and place him/her in an examination room. Once the patient leaves, the room should remain vacant for the appropriate time, generally one hour, to allow for a full exchange of air 11, 12, 122 (see the worked example at the end of this section). Category IB/IC

V.D.2.d.iii. Instruct patients with a known or suspected airborne infection to wear a surgical mask and observe Respiratory Hygiene/Cough Etiquette. Once in an AIIR, the mask may be removed; the mask should remain on if the patient is not in an AIIR 12, 107, 145, 899. Category IB/IC

V.D.3. Personnel restrictions. Restrict susceptible healthcare personnel from entering the rooms of patients known or suspected to have measles (rubeola), varicella (chickenpox), disseminated zoster, or smallpox if other immune healthcare personnel are available 17, 775. Category IB

V.D.4.a. Wear a fit-tested, NIOSH-approved N95 or higher-level respirator for respiratory protection when entering the room or home of a patient when the following diseases are suspected or confirmed:

V.D.4.a.i. Infectious pulmonary or laryngeal tuberculosis, or when infectious tuberculosis skin lesions are present and procedures that would aerosolize viable organisms (e.g., irrigation, incision and drainage, whirlpool treatments) are performed 12, 1025, 1026. Category IB

V.D.4.a.ii. Smallpox (vaccinated and unvaccinated). Respiratory protection is recommended for all healthcare personnel, including those with a documented "take" after smallpox vaccination, due to the risk of a genetically engineered virus against which the vaccine may not provide protection, or of exposure to a very large viral load (e.g., from high-risk aerosol-generating procedures, immunocompromised patients, hemorrhagic or flat smallpox) 108, 129. Category II

V.D.4.b. Suspected measles, chickenpox or disseminated zoster. No recommendation is made regarding the use of PPE by healthcare personnel who are presumed to be immune to measles (rubeola) or varicella-zoster based on history of disease, vaccine, or serologic testing when caring for an individual with known or suspected measles, chickenpox or disseminated zoster, due to difficulties in establishing definite immunity 1027, 1028. Unresolved issue

V.D.4.c. Suspected measles, chickenpox or disseminated zoster. No recommendation is made regarding the type of personal protective equipment (i.e., surgical mask or respiratory protection with an N95 or higher respirator) to be worn by susceptible healthcare personnel who must have contact with patients with known or suspected measles, chickenpox or disseminated herpes zoster. Unresolved issue
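The guidance in V.D.2.d.ii that a vacated examination room remain empty for "generally one hour" can be related to the room's ventilation rate with the first-order clearance model used in the CDC environmental guideline 11. The following is a minimal worked example, assuming perfectly mixed room air and no continuing source; the specific air-change rates are illustrative:

```latex
% Time for a well-mixed room to dilute an airborne contaminant from an
% initial concentration C_0 down to C_t, at a given air-change rate (ACH):
\[
  t \;=\; \frac{60}{\mathrm{ACH}} \, \ln\!\left(\frac{C_0}{C_t}\right)
  \quad \text{minutes}
\]
% 99% removal   (C_0/C_t = 100,  ln 100  ~ 4.6):
%   6 ACH  -> t ~ (60/6)  x 4.6 ~ 46 min
%   12 ACH -> t ~ (60/12) x 4.6 ~ 23 min
% 99.9% removal (C_0/C_t = 1000, ln 1000 ~ 6.9):
%   6 ACH  -> t ~ 69 min
%   12 ACH -> t ~ 35 min
```

Under these assumptions, a typical examination room ventilated at about 6 ACH needs roughly an hour to reach 99.9% removal, consistent with the one-hour vacancy figure; rooms with higher air-change rates clear proportionally faster.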
VI.C.2. Lower dust levels by using smooth, nonporous surfaces and finishes that can be scrubbed, rather than textured material (e.g., upholstery). Wet-dust horizontal surfaces whenever dust is detected, and routinely clean crevices and sprinkler heads where dust may accumulate 940, 941. Category II

VI.C.3. Avoid carpeting in hallways and patient rooms in these areas 941. Category IB

VI.C.4. Prohibit dried and fresh flowers and potted plants. Category II

VI.D. Minimize the length of time that patients who require a Protective Environment are outside their rooms for diagnostic procedures and other activities 11, 158, 945. Category IB

VI.E. During periods of construction, to prevent inhalation of respirable particles that could contain infectious spores, provide respiratory protection (e.g., N95 respirator) to patients who are medically fit to tolerate a respirator when they are required to leave the Protective Environment 945, 158, 13. Category IB

VI.F.4.b. Use an anteroom to further support the appropriate air balance relative to the corridor and the Protective Environment; provide independent exhaust of contaminated air to the outside, or place a HEPA filter in the exhaust duct if the return air must be recirculated 13, 1041. Category IB

VI.F.4.c. If an anteroom is not available, place the patient in an AIIR and use portable, industrial-grade HEPA filters in the room to enhance filtration of spores 1042. Category II

# Preamble

The mode(s) and risk of transmission for each specific disease agent included in Appendix A were reviewed. Principal sources consulted for the development of disease-specific recommendations for Appendix A included infectious disease manuals and textbooks. The published literature was searched for evidence of person-to-person transmission in healthcare and non-healthcare settings, with a focus on reported outbreaks that would assist in developing recommendations for all settings where healthcare is delivered. Criteria used to assign Transmission-Based Precautions categories follow:

- A Transmission-Based Precautions category was assigned if there was strong evidence for person-to-person transmission via droplet, contact, or airborne routes in healthcare or non-healthcare settings and/or if patient factors (e.g., diapered infants, diarrhea, draining wounds) increased the risk of transmission.
- Transmission-Based Precautions category assignments reflect the predominant mode(s) of transmission.
- If there was no evidence for person-to-person transmission by droplet, contact, or airborne routes, Standard Precautions were assigned.
- If there was a low risk for person-to-person transmission and no evidence of healthcare-associated transmission, Standard Precautions were assigned.
- Standard Precautions were assigned for bloodborne pathogens (e.g., hepatitis B and C viruses, human immunodeficiency virus) as per CDC recommendations for Universal Precautions issued in 1988. Subsequent experience has confirmed the efficacy of Standard Precautions to prevent exposure to infected blood and body fluid.

Additional information relevant to use of precautions was added in the comments column to assist the caregiver in decision-making. Citations were added as needed to support a change in, or provide additional evidence for, recommendations for a specific disease and for new infectious agents (e.g., SARS-CoV, avian influenza) that have been added to Appendix A. The reader may refer to more detailed discussion concerning modes of transmission and emerging pathogens in the background text and, for MDRO control, in Appendix B (Management of Multidrug-Resistant Organisms in Healthcare Settings).
Gastroenteritis, Norovirus
Comments: Ensure consistent environmental cleaning and disinfection, with a focus on restrooms, even when apparently unsoiled. Hypochlorite solutions may be required when there is continued transmission. Alcohol is less active, but there is no evidence that alcohol antiseptic handrubs are not effective for hand decontamination. Cohorting of affected patients to separate airspaces and toilet facilities may help interrupt transmission during outbreaks.

# Type and Duration of Precautions Recommended for Selected Infections and Conditions 1

Infection/Condition: Gastroenteritis, Rotavirus
Type of Precaution: Contact + Standard
Duration of Precautions: Duration of illness
Comments: Ensure consistent environmental cleaning and disinfection and frequent removal of soiled diapers. Prolonged shedding may occur in both immunocompetent and immunocompromised children and the elderly.

# Botulism

Site(s) of Infection; Transmission Mode: Gastrointestinal tract: ingestion of toxin-containing food. Respiratory tract: inhalation of toxin-containing aerosol. Comment: toxin ingested, or potentially delivered by aerosol in bioterrorist incidents, causes disease. LD50 (lethal dose for 50% of experimental animals) for type A is 0.001 µg/mL/kg.
Incubation Period: 1-5 days.
Clinical Features: Ptosis, generalized weakness, dizziness, dry mouth and throat, blurred vision, diplopia, dysarthria, dysphonia, and dysphagia, followed by symmetrical descending paralysis and respiratory failure.
Diagnosis: Clinical diagnosis; identification of toxin in stool; serology, unless toxin-containing material is available for toxin neutralization bioassays.
Infectivity: Not transmitted from person to person. Exposure to toxin is necessary for disease.
Recommended Precautions: Standard Precautions.

# Ebola Hemorrhagic Fever

Ebola Virus Disease for Healthcare Workers: Updated recommendations for healthcare workers can be found at Ebola: U.S. Healthcare Workers and Settings.

# Table 3C. Ebola Hemorrhagic Fever

Site(s) of Infection; Transmission Mode: As a rule, infection develops after exposure of mucous membranes or the respiratory tract, or through broken skin or percutaneous injury.
Incubation Period: 2-19 days, usually 5-10 days.
Clinical Features: Febrile illness with malaise, myalgias, headache, vomiting, and diarrhea that is rapidly complicated by hypotension, shock, and hemorrhagic features. Massive hemorrhage in <50% of patients.
Diagnosis: Etiologic diagnosis can be made using reverse transcriptase-polymerase chain reaction, serologic detection of antibody and antigen, pathologic assessment with immunohistochemistry, and viral culture with EM confirmation of morphology.
Infectivity: Person-to-person transmission primarily occurs through unprotected contact with blood and body fluids; percutaneous injuries (e.g., needlestick) are associated with a high rate of transmission; transmission in healthcare settings has been reported but is prevented by use of barrier precautions.
Recommended Precautions: Hemorrhagic fever-specific barrier precautions. If disease is believed to be related to intentional release of a bioweapon, the epidemiology of transmission is unpredictable pending observation of disease transmission. Until the nature of the pathogen is understood and its transmission pattern confirmed, Standard, Contact, and Airborne Precautions should be used. Once the pathogen is characterized, if the epidemiology of transmission is consistent with natural disease, Droplet Precautions can be substituted for Airborne Precautions. Emphasize: 1. use of sharps safety devices and safe work practices; 2. hand hygiene; 3.
barrier protection against blood and body fluids upon entry into the room (single gloves and fluid-resistant or impermeable gown; face/eye protection with masks, goggles, or face shields); and 4. appropriate waste handling. Use N95 or higher respirators when performing aerosol-generating procedures. In settings where AIIRs are unavailable or large numbers of patients cannot be accommodated by existing AIIRs, observe Droplet Precautions (plus Standard Precautions and Contact Precautions) and segregate patients from those not suspected of VHF infection. Limit blood draws to those essential to care. See text for discussion and Appendix A for recommendations for naturally occurring VHFs.

# Plague

Pneumonic plague is not as contagious as is often thought. Historical accounts and contemporary evidence indicate that persons with plague usually transmit the infection only when the disease is in the end stage. These persons cough copious amounts of bloody sputum that contains many plague bacteria. Patients in the early stage of primary pneumonic plague (approximately the first 20-24 hours) apparently pose little risk. Antibiotic medication rapidly clears the sputum of plague bacilli, so that a patient generally is not infective within hours after initiation of effective antibiotic treatment. This means that in modern times many patients will never reach a stage where they pose a significant risk to others. Even in the end stage of disease, transmission occurs only after close contact. Simple protective measures, such as wearing masks, good hygiene, and avoiding close contact, have been effective in interrupting transmission during many pneumonic plague outbreaks. In the United States, the last known cases of person-to-person transmission of pneumonic plague occurred in 1925.

# Smallpox

Clinical Features: Fever, malaise, backache, headache, and often vomiting for 2-3 days; then a generalized papular or maculopapular rash (more on the face and extremities), which becomes vesicular (on day 4 or 5) and then pustular; lesions all in the same stage.
Diagnosis: Electron microscopy of vesicular fluid, or culture of vesicular fluid by a WHO-approved laboratory (CDC); detection by polymerase chain reaction available only in select LRN labs, CDC, and USAMRIID.
Infectivity: Secondary attack rates up to 50% in unvaccinated persons; infected persons may transmit disease from the time the rash appears until all lesions have crusted over (about 3 weeks); greatest infectivity during the first 10 days of rash.
Recommended Precautions: Combined use of Standard, Contact, and Airborne Precautions until all scabs have separated (3-4 weeks). Transmission by the airborne route is a rare event; Airborne Precautions are recommended when possible, but in the event of mass exposures, barrier precautions and containment within a designated area are most important 204, 212. Only immune HCWs to care for patients; post-exposure vaccine within 4 days. Vaccinia: HCWs cover the vaccination site with gauze and a semi-permeable dressing until the scab separates (≥21 days); observe hand hygiene. Adverse events with virus-containing lesions: Standard plus Contact Precautions until all lesions are crusted. Vaccinia adverse events with lesions containing infectious virus include inadvertent autoinoculation, ocular lesions (blepharitis, conjunctivitis), generalized vaccinia, progressive vaccinia, and eczema vaccinatum; bacterial superinfection also requires addition of Contact Precautions if exudates cannot be contained 216, 217.

# Table 3F. Tularemia
Site(s) of Infection; Transmission Mode: Respiratory tract: inhalation of aerosolized bacteria. Gastrointestinal tract: ingestion of food or drink contaminated with aerosolized bacteria. Comment: pneumonic or typhoidal disease is likely to occur after a bioterrorist event using aerosol delivery. Infective dose: 10-50 bacteria.
Incubation Period: 2 to 10 days, usually 3 to 5 days.
Clinical Features: Pneumonic: malaise, cough, sputum production, dyspnea. Typhoidal: fever, prostration, weight loss, and frequently an associated pneumonia.
Diagnosis: Diagnosis is usually made with serology on acute and convalescent serum specimens; the bacterium can be detected by polymerase chain reaction (LRN) or isolated from blood and other body fluids on cysteine-enriched media or by mouse inoculation.
Infectivity: Person-to-person spread is rare. Laboratory workers who encounter/handle cultures of this organism are at high risk for disease if exposed.
Recommended Precautions: Standard Precautions.

Protective Environment ventilation specifications:

- Monitor air pressure with visual indicators (e.g., flutter strips, smoke tubes) or a hand-held pressure gauge.
- Self-closing door on all room exits.
- Maintain back-up ventilation equipment (e.g., portable units for fans or filters) for emergency provision of ventilation requirements for PE areas, and take immediate steps to restore the fixed ventilation system.
- For patients who require both a PE and Airborne Infection Isolation, use an anteroom to ensure proper air-balance relationships and provide independent exhaust of contaminated air to the outside, or place a HEPA filter in the exhaust duct. If an anteroom is not available, place the patient in an AIIR and use portable, industrial-grade HEPA filters to enhance filtration of spores.

# IV. Surfaces

# Glossary

Cohorting. In the context of this guideline, this term applies to the practice of grouping patients infected or colonized with the same infectious agent together to confine their care to one area and prevent contact with susceptible patients (cohorting patients). During outbreaks, healthcare personnel may be assigned to a cohort of patients to further limit opportunities for transmission (cohorting staff).

Colonization. Proliferation of microorganisms on or within body sites without detectable host immune response, cellular damage, or clinical expression. The presence of a microorganism within a host may occur with varying duration, but may become a source of potential transmission. In many instances, colonization and carriage are synonymous.

Droplet nuclei. Microscopic particles <5 µm in size that are the residue of evaporated droplets and are produced when a person coughs, sneezes, shouts, or sings. These particles can remain suspended in the air for prolonged periods of time and can be carried on normal air currents in a room or beyond, to adjacent spaces or areas receiving exhaust air.

Hand hygiene. A general term that applies to any one of the following: 1. handwashing with plain (non-antimicrobial) soap and water; 2. antiseptic handwash (soap containing antiseptic agents and water); 3. antiseptic handrub (waterless antiseptic product, most often alcohol-based, rubbed on all surfaces of hands); or 4. surgical hand antisepsis (antiseptic handwash or antiseptic handrub performed preoperatively by surgical personnel to eliminate transient hand flora and reduce resident hand flora) 559.

Healthcare-associated infection (HAI).
An infection that develops in a patient who is cared for in any setting where healthcare is delivered (e.g., acute care hospital, chronic care facility, ambulatory clinic, dialysis center, surgicenter, home) and is related to receiving health care (i.e., was not incubating or present at the time healthcare was provided). In ambulatory and home settings, HAI would apply to any infection that is associated with a medical or surgical intervention. Since the geographic location of infection acquisition is often uncertain, the preferred term is considered to be healthcare-associated rather than healthcare-acquired.

Healthcare epidemiologist. A person whose primary training is medical (M.D., D.O.) and/or master's- or doctorate-level epidemiology and who has received advanced training in healthcare epidemiology. Typically these professionals direct or provide consultation to an infection control program in a hospital, long-term care facility (LTCF), or healthcare delivery system (also see infection control professional).

Healthcare personnel, healthcare worker (HCW). All paid and unpaid persons who work in a healthcare setting (e.g., any person who has professional or technical training in a healthcare-related field and provides patient care in a healthcare setting, or any person who provides services that support the delivery of healthcare, such as dietary, housekeeping, engineering, and maintenance personnel).

Hematopoietic stem cell transplantation (HSCT). Any transplantation of blood- or bone marrow-derived hematopoietic stem cells, regardless of donor type (e.g., allogeneic or autologous) or cell source (e.g., bone marrow, peripheral blood, or placental/umbilical cord blood); associated with periods of severe immunosuppression that vary with the source of the cells, the intensity of chemotherapy required, and the presence of graft-versus-host disease (MMWR 2000; 49: RR-10).

High-efficiency particulate air (HEPA) filter. An air filter that removes >99.97% of particles ≥0.3 µm (the most penetrating particle size) at a specified flow rate of air. HEPA filters may be integrated into the central air handling systems, installed at the point of use above the ceiling of a room, or used as portable units (MMWR 2003; 52: RR-10).

Home care. A wide range of medical, nursing, rehabilitation, hospice, and social services delivered to patients in their place of residence (e.g., private residence, senior living center, assisted living facility). Home health-care services include care provided by home health aides and skilled nurses, respiratory therapists, dieticians, physicians, chaplains, and volunteers; provision of durable medical equipment; home infusion therapy; and physical, speech, and occupational therapy.

Immunocompromised patients. Those patients whose immune mechanisms are deficient because of congenital or acquired immunologic disorders (e.g., human immunodeficiency virus infection, congenital immune deficiency syndromes), chronic diseases (e.g., diabetes mellitus, cancer, emphysema, or cardiac failure), ICU care, malnutrition, or immunosuppressive therapy of another disease process. The type of infections for which an immunocompromised patient has increased susceptibility is determined by the severity of immunosuppression and the specific component(s) of the immune system that is affected. Patients undergoing allogeneic HSCT and those with chronic graft-versus-host disease are considered the most vulnerable to HAIs.
Immunocompromised states also make it more difficult to diagnose certain infections (e.g., tuberculosis) and are associated with more severe clinical disease states than occur in persons with the same infection and a normal immune system.

Infection. The transmission of microorganisms into a host after evading or overcoming defense mechanisms, resulting in the organism's proliferation and invasion within host tissue(s). Host responses to infection may include clinical symptoms or may be subclinical, with manifestations of disease mediated by direct organism pathogenesis and/or a function of cell-mediated or antibody responses that result in the destruction of host tissues.

Infection control and prevention professional (ICP). A person whose primary training is in either nursing, medical technology, microbiology, or epidemiology and who has acquired specialized training in infection control. Responsibilities may include collection, analysis, and feedback of infection data and trends to healthcare providers; consultation on infection risk assessment, prevention, and control strategies; performance of education and training activities; implementation of evidence-based infection control practices or those mandated by regulatory and licensing agencies; application of epidemiologic principles to improve patient outcomes; participation in planning renovation and construction projects (e.g., to ensure appropriate containment of construction dust); evaluation of new products or procedures on patient outcomes; oversight of employee health services related to infection prevention; implementation of preparedness plans; communication within the healthcare setting, with local and state health departments, and with the community at large concerning infection control issues; and participation in research. Certification in infection control (CIC) is available through the Certification Board of Infection Control and Epidemiology.

Infection control and prevention program. A multidisciplinary program that includes a group of activities to ensure that recommended practices for the prevention of healthcare-associated infections are implemented and followed by HCWs, making the healthcare setting safe from infection for patients and healthcare personnel. The Joint Commission on Accreditation of Healthcare Organizations (JCAHO) requires the following five components of an infection control program for accreditation: 1. surveillance: monitoring patients and healthcare personnel for acquisition of infection and/or colonization; 2. investigation: identification and analysis of infection problems or undesirable trends; 3. prevention: implementation of measures to prevent transmission of infectious agents and to reduce risks for device- and procedure-related infections; 4. control: evaluation and management of outbreaks; and 5. reporting: provision of information to external agencies as required by state and federal law and regulation (The Joint Commission). The infection control program staff has the ultimate authority to determine infection control policies for a healthcare organization, with the approval of the organization's governing body.

Long-term care facilities (LTCFs). An array of residential and outpatient facilities designed to meet the bio-psychosocial needs of persons with sustained self-care deficits.
These include skilled nursing facilities, chronic disease hospitals, nursing homes, foster and group homes, institutions for the developmentally disabled, residential care facilities, assisted living facilities, retirement homes, adult day health care facilities, rehabilitation centers, and long-term psychiatric hospitals.

Mask. A term that applies collectively to items used to cover the nose and mouth; includes both procedure masks and surgical masks (see Procedure mask and Surgical mask).

Multidrug-resistant organisms (MDROs). In general, bacteria (excluding M. tuberculosis) that are resistant to one or more classes of antimicrobial agents and usually are resistant to all but one or two commercially available antimicrobial agents (e.g., MRSA, VRE, extended-spectrum beta-lactamase-producing or intrinsically resistant gram-negative bacilli) 176.

Nosocomial infection. Derived from two Greek words: "nosos" (disease) and "komeion" (to take care of). Refers to any infection that develops during, or as a result of, an admission to an acute care facility (hospital) and was not incubating at the time of admission.

Personal protective equipment (PPE). A variety of barriers used alone or in combination to protect mucous membranes, skin, and clothing from contact with infectious agents. PPE includes gloves, masks, respirators, goggles, face shields, and gowns.

Procedure mask. A covering for the nose and mouth that is intended for use in general patient care situations. These masks generally attach to the face with ear loops rather than ties or elastic. Unlike surgical masks, procedure masks are not regulated by the Food and Drug Administration.

Protective Environment. A specialized patient-care area, usually in a hospital, with a positive airflow relative to the corridor (i.e., air flows from the room to the outside adjacent space). The combination of high-efficiency particulate air (HEPA) filtration, high numbers (≥12) of air changes per hour (ACH), and minimal leakage of air into the room creates an environment that can safely accommodate patients with a severely compromised immune system (e.g., those who have received an allogeneic hematopoietic stem-cell transplant) and decrease the risk of exposure to spores produced by environmental fungi.

Respiratory Hygiene/Cough Etiquette. Includes, among other measures: 2. using tissues to contain respiratory secretions, with prompt disposal into a no-touch receptacle; 3. offering a surgical mask to persons who are coughing to decrease contamination of the surrounding environment; and 4. turning the head away from others and maintaining spatial separation, ideally >3 feet, when coughing. These measures are targeted to all patients with symptoms of respiratory infection and their accompanying family members or friends, beginning at the point of initial encounter with a healthcare setting (e.g., reception/triage in emergency departments, ambulatory clinics, healthcare provider offices) 126, 620.

Source Control. The process of containing an infectious agent either at the portal of exit from the body or within a confined space. The term is applied most frequently to containment of infectious agents transmitted by the respiratory route but could apply to other routes of transmission (e.g., a draining wound, vesicular or bullous skin lesions). Respiratory Hygiene/Cough Etiquette, which encourages individuals to "cover your cough" and/or wear a mask, is a source control measure.
The use of enclosing devices for local exhaust ventilation (e.g., booths for sputum induction or administration of aerosolized medication) is another example of source control.

Standard Precautions. A group of infection prevention practices that apply to all patients, regardless of suspected or confirmed diagnosis or presumed infection status. Standard Precautions is a combination and expansion of Universal Precautions 780 and Body Substance Isolation 1102. Standard Precautions is based on the principle that all blood, body fluids, secretions, excretions except sweat, nonintact skin, and mucous membranes may contain transmissible infectious agents. Standard Precautions includes hand hygiene and, depending on the anticipated exposure, use of gloves, gown, mask, eye protection, or face shield. Also, equipment or items in the patient environment likely to have been contaminated with infectious fluids must be handled in a manner to prevent transmission of infectious agents (e.g., wear gloves for handling, contain heavily soiled equipment, properly clean and disinfect or sterilize reusable equipment before use on another patient).

Surgical mask. A device worn over the mouth and nose by operating room personnel during surgical procedures to protect both surgical patients and operating room personnel from transfer of microorganisms and body fluids. Surgical masks also are used to protect healthcare personnel from contact with large infectious droplets (>5 µm in size). According to draft guidance issued by the Food and Drug Administration on May 15, 2003, surgical masks are evaluated using standardized testing procedures for fluid resistance, bacterial filtration efficiency, differential pressure (air exchange), and flammability in order to mitigate the risks to health associated with the use of surgical masks. These specifications apply to any masks that are labeled surgical, laser, isolation, or dental or medical procedure masks. Surgical masks do not protect against inhalation of small particles or droplet nuclei and should not be confused with particulate respirators that are recommended for protection against selected airborne infectious agents (e.g., Mycobacterium tuberculosis).
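The HEPA specification cited in the glossary above (removal of >99.97% of particles at the 0.3 µm most-penetrating size) can be restated as single-pass penetration. The short calculation below is illustrative only and is not part of the guideline; the two-stage series figure assumes independent, sequential passes through both filters:

```latex
% Single-pass penetration of a HEPA filter at the most penetrating size:
\[
  P \;=\; 1 - \eta \;=\; 1 - 0.9997 \;=\; 3\times10^{-4}
\]
% Two HEPA stages traversed in series (e.g., an in-duct filter followed by
% a portable unit) would in principle pass only
\[
  P_{\text{series}} \;=\; P^{2} \;=\; 9\times10^{-8}
\]
% of the challenge particles at that size. Capture efficiency is higher
% for particles both larger and smaller than 0.3 um.
```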
# Executive Summary

The Guideline for Isolation Precautions: Preventing Transmission of Infectious Agents in Healthcare Settings 2007 updates and expands the Guideline for Isolation Precautions in Hospitals. The following developments led to revision of the 1996 guideline:

1. The transition of healthcare delivery from primarily acute care hospitals to other healthcare settings (e.g., home care, ambulatory care, free-standing specialty care sites, long-term care) created a need for recommendations that can be applied in all healthcare settings using common principles of infection control practice, yet can be modified to reflect setting-specific needs. Accordingly, the revised guideline addresses the spectrum of healthcare delivery settings. Furthermore, the term "nosocomial infections" is replaced by "healthcare-associated infections" (HAIs) to reflect the changing patterns in healthcare delivery and difficulty in determining the geographic site of exposure to an infectious agent and/or acquisition of infection.

2. The emergence of new pathogens (e.g., SARS-CoV associated with the severe acute respiratory syndrome [SARS], avian influenza in humans), renewed concern for evolving known pathogens (e.g., C. difficile, noroviruses, community-associated MRSA [CA-MRSA]), development of new therapies (e.g., gene therapy), and increasing concern for the threat of bioweapons attacks established a need to address a broader scope of issues than in previous isolation guidelines.

3. The successful experience with Standard Precautions, first recommended in the 1996 guideline, has led to a reaffirmation of this approach as the foundation for preventing transmission of infectious agents in all healthcare settings. New additions to the recommendations for Standard Precautions are Respiratory Hygiene/Cough Etiquette and safe injection practices, including the use of a mask when performing certain high-risk, prolonged procedures involving spinal canal punctures (e.g., myelography, epidural anesthesia). The need for a recommendation for Respiratory Hygiene/Cough Etiquette grew out of observations during the SARS outbreaks, where failure to implement simple source control measures with patients, visitors, and healthcare personnel with respiratory symptoms may have contributed to SARS coronavirus (SARS-CoV) transmission. The recommended practices have a strong evidence base. The continued occurrence of outbreaks of hepatitis B and hepatitis C viruses in ambulatory settings indicated a need to re-iterate safe injection practice recommendations as part of Standard Precautions. The addition of a mask for certain spinal injections grew from recent evidence of an associated risk for developing meningitis caused by respiratory flora.

4. The accumulated evidence that environmental controls decrease the risk of life-threatening fungal infections in the most severely immunocompromised patients (allogeneic hematopoietic stem-cell transplant patients) led to the update on the components of the Protective Environment (PE).

5. Evidence that organizational characteristics (e.g., nurse staffing levels and composition, establishment of a safety culture) influence healthcare personnel adherence to recommended infection control practices, and therefore are important factors in preventing transmission of infectious agents, led to a new emphasis and recommendations for administrative involvement in the development and support of infection control programs.

6.
Continued increase in the incidence of HAIs caused by multidrug-resistant organisms (MDROs) in all healthcare settings and the expanded body of knowledge concerning prevention of transmission of MDROs created a need for more specific recommendations for surveillance and control of these pathogens that would be practical and effective in various types of healthcare settings.

This document is intended for use by infection control staff, healthcare epidemiologists, healthcare administrators, nurses, other healthcare providers, and persons responsible for developing, implementing, and evaluating infection control programs for healthcare settings across the continuum of care. The reader is referred to other guidelines and websites for more detailed information and for recommendations concerning specialized infection control problems.

# Parts I-III: Review of the Scientific Data Regarding Transmission of Infectious Agents in Healthcare Settings

Part I reviews the relevant scientific literature that supports the recommended prevention and control practices. As with the 1996 guideline, the modes and factors that influence transmission risks are described in detail. New to the section on transmission are discussions of bioaerosols and of how droplet and airborne transmission may contribute to infection transmission. This became a concern during the SARS outbreaks of 2003, when transmission associated with aerosol-generating procedures was observed. Also new is a definition of "epidemiologically important organisms" that was developed to assist in the identification of clusters of infections that require investigation (i.e., multidrug-resistant organisms, C. difficile). Several other pathogens that hold special infection control interest (i.e., norovirus, SARS, Category A bioterrorist agents, prions, monkeypox, and the hemorrhagic fever viruses) also are discussed to present new information and infection control lessons learned from experience with these agents. This section of the guideline also presents information on infection risks associated with specific healthcare settings and patient populations.

Part II updates information on the basic principles of hand hygiene, barrier precautions, safe work practices, and isolation practices that were included in previous guidelines. However, new to this guideline is important information on healthcare system components that influence transmission risks, including those under the influence of healthcare administrators. An important administrative priority that is described is the need for appropriate infection control staffing to meet the ever-expanding role of infection control professionals in the modern, complex healthcare system. Evidence presented also demonstrates another administrative concern, the importance of nurse staffing levels, including numbers of appropriately trained nurses in ICUs, for preventing HAIs. The role of the clinical microbiology laboratory in supporting infection control is described to emphasize the need for this service in healthcare facilities. Other factors that influence transmission risks are discussed (i.e., healthcare worker adherence to recommended infection control practices, organizational safety culture or climate, and education and training). Discussed for the first time in an isolation guideline is surveillance of healthcare-associated infections.
The information presented will be useful to new infection control professionals as well as persons involved in designing or responding to state programs for public reporting of HAI rates. Part III describes each of the categories of precautions developed by the Healthcare Infection Control Practices Advisory Committee (HICPAC) and the Centers for Disease Control and Prevention (CDC) and provides guidance for their application in various healthcare settings. The categories of Transmission-Based Precautions are unchanged from those in the 1996 guideline: Contact, Droplet, and Airborne. One important change is the recommendation to don the indicated personal protective equipment (gowns, gloves, mask) upon entry into the patient's room for patients who are on Contact and/or Droplet Precautions, since the nature of the interaction with the patient cannot be predicted with certainty and contaminated environmental surfaces are important sources for transmission of pathogens. In addition, the Protective Environment (PE) for allogeneic hematopoietic stem cell transplant patients, described in previous guidelines, has been updated.

# Tables, Appendices, and Other Information

There are several tables that summarize important information: 1. a summary of the evolution of this document; 2. guidance on using empiric isolation precautions according to a clinical syndrome; 3. a summary of infection control recommendations for category A agents of bioterrorism; 4. components of Standard Precautions and recommendations for their application; 5. components of the Protective Environment; and 6. a glossary of definitions used in this guideline. New in this guideline is a figure that shows a recommended sequence for donning and removing personal protective equipment used for isolation precautions to optimize safety and prevent self-contamination during removal.

# Appendix A: Type and Duration of Precautions Recommended for Selected Infections and Conditions

Appendix A consists of an updated alphabetical list of most infectious agents and clinical conditions for which isolation precautions are recommended. A preamble to the Appendix provides a rationale for recommending the use of one or more Transmission-Based Precautions, in addition to Standard Precautions, based on a review of the literature and evidence demonstrating a real or potential risk for person-to-person transmission in healthcare settings. The type and duration of recommended precautions are presented with additional comments concerning the use of adjunctive measures or other relevant considerations to prevent transmission of the specific agent. Relevant citations are included.

# Pre-Publication of the Guideline on Preventing Transmission of MDROs

New to this guideline is a comprehensive review and detailed recommendations for prevention of transmission of MDROs. This portion of the guideline was published electronically in October 2006 and updated in November 2006 (Siegel JD, Rhinehart E, Jackson M, Chiarello L, and HICPAC. Management of Multidrug-Resistant Organisms in Healthcare Settings (2006) (https://www.cdc.gov/infectioncontrol/guidelines/mdro/)), and is considered a part of the Guideline for Isolation Precautions. This section provides a detailed review of the complex topic of MDRO control in healthcare settings and is intended to provide a context for evaluation of MDROs in individual healthcare settings. A rationale and institutional requirements for developing an effective MDRO control program are summarized.
Although the focus of this guideline is on measures to prevent transmission of MDROs in healthcare settings, information concerning the judicious use of antimicrobial agents is presented, since such practices are intricately related to the size of the reservoir of MDROs, which in turn influences transmission (e.g., colonization pressure). There are two tables that summarize recommended prevention and control practices using the following seven categories of interventions to control MDROs: administrative measures, education of healthcare personnel, judicious antimicrobial use, surveillance, infection control precautions, environmental measures, and decolonization. Recommendations for each category apply to, and are adapted for, the various healthcare settings. With the increasing incidence and prevalence of MDROs, all healthcare facilities must prioritize effective control of MDRO transmission. Facilities should identify prevalent MDROs at the facility, implement control measures, assess the effectiveness of control programs, and demonstrate decreasing MDRO rates. A set of intensified MDRO prevention interventions is presented, to be added 1. if the incidence of transmission of a target MDRO is NOT decreasing despite implementation of basic MDRO infection control measures, and 2. when the first case(s) of an epidemiologically important MDRO is identified within a healthcare facility.

# Summary

This updated guideline responds to changes in healthcare delivery and addresses new concerns about transmission of infectious agents to patients and healthcare workers in the United States. The primary objective of the guideline is to improve the safety of the nation's healthcare delivery system by reducing the rates of HAIs.

# Part I: Review of Scientific Data Regarding Transmission of Infectious Agents in Healthcare Settings

# I.A. Evolution of the 2007 Document

The Guideline for Isolation Precautions: Preventing Transmission of Infectious Agents in Healthcare Settings 2007 builds upon a series of isolation and infection prevention documents promulgated since 1970. These previous documents are summarized and referenced in Table 1 and in Part I of the 1996 Guideline for Isolation Precautions in Hospitals 1.

Objectives and methods. The objectives of this guideline are to 1. provide infection control recommendations for all components of the healthcare delivery system, including hospitals, long-term care facilities, ambulatory care, home care, and hospice; 2. reaffirm Standard Precautions as the foundation for preventing transmission during patient care in all healthcare settings; 3. reaffirm the importance of implementing Transmission-Based Precautions based on the clinical presentation or syndrome and likely pathogens until the infectious etiology has been determined (Table 2); and 4. provide epidemiologically sound and, whenever possible, evidence-based recommendations.

This guideline is designed for use by individuals who are charged with administering infection control programs in hospitals and other healthcare settings. The information also will be useful for other healthcare personnel, healthcare administrators, and anyone needing information about infection control measures to prevent transmission of infectious agents. Commonly used abbreviations are provided on page 11, and terms used in the guideline are defined in the Glossary (page 137). MEDLINE and PubMed were used to search for relevant studies published in English, focusing on those published since 1996.
Much of the evidence cited for preventing transmission of infectious agents in healthcare settings is derived from studies that used "quasi-experimental designs," also referred to as nonrandomized, pre-post-intervention study designs 2. Although these types of studies can provide valuable information regarding the effectiveness of various interventions, several factors decrease the certainty of attributing improved outcome to a specific intervention. These include: difficulties in controlling for important confounding variables; the use of multiple interventions during an outbreak; and results that are explained by the statistical principle of regression to the mean (e.g., improvement over time without any intervention) 3. Observational studies remain relevant and have been used to evaluate infection control interventions 4, 5. The quality of studies, consistency of results, and correlation with results from randomized, controlled trials, when available, were considered during the literature review and assignment of evidence-based categories (see Part IV: Recommendations) to the recommendations in this guideline. Several authors have summarized properties to consider when evaluating studies for the purpose of determining if the results should change practice or in designing new studies 2, 6, 7.

# Changes or clarifications in terminology.

This guideline contains four changes in terminology from the 1996 guideline:

• The term nosocomial infection is retained to refer only to infections acquired in hospitals. The term healthcare-associated infection (HAI) is used to refer to infections associated with healthcare delivery in any setting (e.g., hospitals, long-term care facilities, ambulatory settings, home care). This term reflects the inability to determine with certainty where the pathogen is acquired, since patients may be colonized with or exposed to potential pathogens outside of the healthcare setting, before receiving health care, or may develop infections caused by those pathogens when exposed to the conditions associated with delivery of healthcare. Additionally, patients frequently move among the various settings within a healthcare system 8.

• A new addition to the practice recommendations for Standard Precautions is Respiratory Hygiene/Cough Etiquette. While Standard Precautions generally apply to the recommended practices of healthcare personnel during patient care, Respiratory Hygiene/Cough Etiquette applies broadly to all persons who enter a healthcare setting, including healthcare personnel, patients, and visitors. These recommendations evolved from observations during the SARS epidemic that failure to implement basic source control measures with patients, visitors, and healthcare personnel with signs and symptoms of respiratory tract infection may have contributed to SARS coronavirus (SARS-CoV) transmission. This concept has been incorporated into CDC planning documents for SARS and pandemic influenza 9, 10.

• The term "Airborne Precautions" has been supplemented with the term "Airborne Infection Isolation Room (AIIR)" for consistency with the Guidelines for Environmental Infection Control in Healthcare Facilities 11, the Guidelines for Preventing the Transmission of Mycobacterium tuberculosis in Health-Care Settings 2005 12, and the American Institute of Architects (AIA) guidelines for design and construction of hospitals, 2006 13.

• A set of prevention measures termed Protective Environment has been added to the precautions used to prevent HAIs.
These measures, which have been defined in other guidelines, consist of engineering and design interventions that decrease the risk of exposure to environmental fungi for severely immunocompromised allogeneic hematopoietic stem cell transplant (HSCT) patients during their highest-risk phase, usually the first 100 days post-transplant, or longer in the presence of graft-versus-host disease 11, 13-15. Recommendations for a Protective Environment apply only to acute care hospitals that provide care to HSCT patients.

Scope. This guideline, like its predecessors, focuses primarily on interactions between patients and healthcare providers. The Guidelines for the Prevention of MDRO Infection were published separately in November 2006 and are available online at Management of Multidrug-Resistant Organisms in Healthcare Settings (https://www.cdc.gov/infectioncontrol/guidelines/mdro/). Several other HICPAC guidelines to prevent transmission of infectious agents associated with healthcare delivery are cited, e.g., the Guideline for Hand Hygiene, Guideline for Environmental Infection Control, Guideline for Prevention of Healthcare-Associated Pneumonia, and Guideline for Infection Control in Healthcare Personnel 11, 14, 16, 17. In combination, these provide comprehensive guidance on the primary infection control measures for ensuring a safe environment for patients and healthcare personnel. This guideline does not discuss in detail specialized infection control issues in defined populations that are addressed elsewhere (e.g., 12, 18-20). An exception has been made by including abbreviated guidance for a Protective Environment used for allogeneic HSCT recipients, because components of the Protective Environment have been more completely defined since publication of the Guidelines for Preventing Opportunistic Infections Among HSCT Recipients in 2000 and the Guideline for Environmental Infection Control in Healthcare Facilities 11, 15.

# I.B. Rationale for Standard and Transmission-Based Precautions in healthcare settings

Transmission of infectious agents within a healthcare setting requires three elements: a source (or reservoir) of infectious agents, a susceptible host with a portal of entry receptive to the agent, and a mode of transmission for the agent. This section describes the interrelationship of these elements in the epidemiology of HAIs.

# I.B.1. Sources of infectious agents.

Infectious agents transmitted during healthcare derive primarily from human sources, but inanimate environmental sources also are implicated in transmission. Human reservoirs include patients 20-28, healthcare personnel 17, 29-39, and household members and other visitors 40-45. Such source individuals may have active infections, may be in the asymptomatic and/or incubation period of an infectious disease, or may be transiently or chronically colonized with pathogenic microorganisms, particularly in the respiratory and gastrointestinal tracts. The endogenous flora of patients (e.g., bacteria residing in the respiratory or gastrointestinal tract) also are a source of HAIs 46-54.

# I.B.2. Susceptible hosts.

Infection is the result of a complex interrelationship between a potential host and an infectious agent. Most of the factors that influence infection and the occurrence and severity of disease are related to the host.
However, characteristics of the host-agent interaction as it relates to pathogenicity, virulence, and antigenicity are also important, as are the infectious dose, mechanisms of disease production, and route of exposure 55. There is a spectrum of possible outcomes following exposure to an infectious agent. Some persons exposed to pathogenic microorganisms never develop symptomatic disease, while others become severely ill and even die. Some individuals are prone to becoming transiently or permanently colonized but remain asymptomatic. Still others progress from colonization to symptomatic disease either immediately following exposure or after a period of asymptomatic colonization. The immune state at the time of exposure to an infectious agent, interaction between pathogens, and virulence factors intrinsic to the agent are important predictors of an individual's outcome. Host factors such as extremes of age and underlying disease (e.g., diabetes 56, 57), human immunodeficiency virus/acquired immune deficiency syndrome [HIV/AIDS] 58, 59, malignancy, and transplants 18, 60, 61 can increase susceptibility to infection, as do a variety of medications that alter the normal flora (e.g., antimicrobial agents, gastric acid suppressants, corticosteroids, antirejection drugs, antineoplastic agents, and immunosuppressive drugs). Surgical procedures and radiation therapy impair defenses of the skin and other involved organ systems. Indwelling devices such as urinary catheters, endotracheal tubes, central venous and arterial catheters 62-64, and synthetic implants facilitate development of HAIs by allowing potential pathogens to bypass local defenses that would ordinarily impede their invasion and by providing surfaces for development of biofilms that may facilitate adherence of microorganisms and protect them from antimicrobial activity 65. Some infections associated with invasive procedures result from transmission within the healthcare facility; others arise from the patient's endogenous flora 46-50. High-risk patient populations with noteworthy risk factors for infection are discussed further in Sections I.D, I.E, and I.F.

# I.B.3. Modes of transmission.

Several classes of pathogens can cause infection, including bacteria, viruses, fungi, parasites, and prions. The modes of transmission vary by type of organism, and some infectious agents may be transmitted by more than one route: some are transmitted primarily by direct or indirect contact (e.g., Herpes simplex virus [HSV], respiratory syncytial virus, Staphylococcus aureus), others by the droplet (e.g., influenza virus, B. pertussis) or airborne routes (e.g., M. tuberculosis). Other infectious agents, such as bloodborne viruses (e.g., hepatitis B and C viruses [HBV, HCV] and HIV), are transmitted rarely in healthcare settings, via percutaneous or mucous membrane exposure. Importantly, not all infectious agents are transmitted from person to person. These are distinguished in Appendix A. The three principal routes of transmission are summarized below.

# I.B.3.a. Contact transmission.

The most common mode of transmission, contact transmission is divided into two subgroups: direct contact and indirect contact.

# I.B.3.a.i. Direct contact transmission.

Direct transmission occurs when microorganisms are transferred from one infected person to another person without a contaminated intermediate object or person.
Opportunities for direct contact transmission between patients and healthcare personnel have been summarized in the Guideline for Infection Control in Healthcare Personnel, 1998 17 and include:
• blood or other blood-containing body fluids from a patient directly enter a caregiver's body through contact with a mucous membrane 66 or breaks (i.e., cuts, abrasions) in the skin 67.
• mites from a scabies-infested patient are transferred to the skin of a caregiver while he/she is having direct ungloved contact with the patient's skin 68,69.
• a healthcare provider develops herpetic whitlow on a finger after contact with HSV when providing oral care to a patient without using gloves, or HSV is transmitted to a patient from a herpetic whitlow on an ungloved hand of a healthcare worker (HCW) 70,71.

# I.B.3.a.ii. Indirect contact transmission.

Indirect transmission involves the transfer of an infectious agent through a contaminated intermediate object or person. In the absence of a point-source outbreak, it is difficult to determine how indirect transmission occurs. However, extensive evidence cited in the Guideline for Hand Hygiene in Health-Care Settings suggests that the contaminated hands of healthcare personnel are important contributors to indirect contact transmission 16. Examples of opportunities for indirect contact transmission include:
• Hands of healthcare personnel may transmit pathogens after touching an infected or colonized body site on one patient or a contaminated inanimate object, if hand hygiene is not performed before touching another patient 72,73.
• Patient-care devices (e.g., electronic thermometers, glucose monitoring devices) may transmit pathogens if devices contaminated with blood or body fluids are shared between patients without cleaning and disinfecting between patients 74-77.
• Shared toys may become a vehicle for transmitting respiratory viruses (e.g., respiratory syncytial virus 24,78,79) or pathogenic bacteria (e.g., Pseudomonas aeruginosa 80) among pediatric patients.
• Instruments that are inadequately cleaned between patients before disinfection or sterilization (e.g., endoscopes or surgical instruments) 81-85 or that have manufacturing defects that interfere with the effectiveness of reprocessing 86,87 may transmit bacterial and viral pathogens.
• Clothing, uniforms, laboratory coats, or isolation gowns used as personal protective equipment (PPE) may become contaminated with potential pathogens after care of a patient colonized or infected with an infectious agent (e.g., MRSA 88, VRE 89, and C. difficile 90). Although contaminated clothing has not been implicated directly in transmission, the potential exists for soiled garments to transfer infectious agents to successive patients.

# I.B.3.b. Droplet transmission.

Droplet transmission is, technically, a form of contact transmission, and some infectious agents transmitted by the droplet route also may be transmitted by the direct and indirect contact routes. However, in contrast to contact transmission, respiratory droplets carrying infectious pathogens transmit infection when they travel directly from the respiratory tract of the infectious individual to susceptible mucosal surfaces of the recipient, generally over short distances, necessitating facial protection.
Respiratory droplets are generated when an infected person coughs, sneezes, or talks 91,92 or during procedures such as suctioning, endotracheal intubation 93-96, cough induction by chest physiotherapy 97, and cardiopulmonary resuscitation 98,99. Evidence for droplet transmission comes from epidemiological studies of disease outbreaks 100-103, experimental studies 104, and from information on aerosol dynamics 91,105. Studies have shown that the nasal mucosa, conjunctivae, and, less frequently, the mouth are susceptible portals of entry for respiratory viruses 106. The maximum distance for droplet transmission is currently unresolved, although pathogens transmitted by the droplet route have not been transmitted through the air over long distances, in contrast to the airborne pathogens discussed below. Historically, the area of defined risk has been a distance of ≤3 feet around the patient and is based on epidemiologic and simulated studies of selected infections 103,104. Using this distance for donning masks has been effective in preventing transmission of infectious agents via the droplet route. However, experimental studies with smallpox 107,108 and investigations during the global SARS outbreaks of 2003 101 suggest that droplets from patients with these two infections could reach persons located 6 feet or more from their source. It is likely that the distance droplets travel depends on the velocity and mechanism by which respiratory droplets are propelled from the source, the density of respiratory secretions, environmental factors such as temperature and humidity, and the ability of the pathogen to maintain infectivity over that distance 105. Thus, a distance of ≤3 feet around the patient is best viewed as an example of what is meant by "a short distance from a patient" and should not be used as the sole criterion for deciding when a mask should be donned to protect from droplet exposure. Based on these considerations, it may be prudent to don a mask when within 6 to 10 feet of the patient or upon entry into the patient's room, especially when exposure to emerging or highly virulent pathogens is likely. More studies are needed to improve understanding of droplet transmission under various circumstances.

Droplet size is another variable under discussion. Droplets traditionally have been defined as being >5 µm in size. Droplet nuclei, particles arising from desiccation of suspended droplets, have been associated with airborne transmission and defined as ≤5 µm in size 105, a reflection of the pathogenesis of pulmonary tuberculosis that is not generalizable to other organisms. Observations of particle dynamics have demonstrated that a range of droplet sizes, including those with diameters of 30 µm or greater, can remain suspended in the air 109. The behavior of droplets and droplet nuclei affects recommendations for preventing transmission. Whereas fine airborne particles containing pathogens that are able to remain infective may transmit infections over long distances, requiring AIIRs to prevent their dissemination within a facility, organisms transmitted by the droplet route do not remain infective over long distances and therefore do not require special air handling and ventilation.
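The size-dependent behavior described above follows from basic aerosol physics. The sketch below is illustrative only and is not part of the guideline: it assumes unit-density spheres in still room-temperature air and applies the Stokes terminal settling relation, v = ρgd²/(18µ), together with the well-mixed-room clearance relation, t = ln(C₀/C)/ACH, that underlies CDC environmental ventilation tables; the particle diameters and air-change rate chosen are assumptions for the example.

```python
import math

AIR_VISCOSITY = 1.81e-5    # dynamic viscosity of air at ~20 C (Pa*s)
PARTICLE_DENSITY = 1000.0  # unit-density sphere, approximately water (kg/m^3)
G = 9.81                   # gravitational acceleration (m/s^2)

def stokes_settling_velocity(diameter_um: float) -> float:
    """Terminal settling velocity (m/s) of a small sphere in still air.

    Valid in the Stokes regime (roughly d < 100 um); ignores slip
    correction, evaporation, and room air currents.
    """
    d = diameter_um * 1e-6  # micrometers -> meters
    return PARTICLE_DENSITY * G * d**2 / (18 * AIR_VISCOSITY)

def clearance_time_minutes(ach: float, removal_fraction: float = 0.99) -> float:
    """Minutes for room air to reach the given removal fraction of an
    airborne contaminant, assuming perfect mixing at `ach` air changes/hour."""
    return 60.0 * math.log(1.0 / (1.0 - removal_fraction)) / ach

for d in (1, 5, 30, 100):  # particle diameters in micrometers
    v = stokes_settling_velocity(d)
    # time for the particle to fall 2 m (about head height) in still air
    print(f"{d:>3} um: settles at {v * 1000:.3f} mm/s; falls 2 m in {2 / v / 60:.1f} min")

print(f"99% clearance at an assumed 12 ACH: {clearance_time_minutes(12):.0f} min")
```

Under these assumptions a 5 µm droplet nucleus takes roughly three-quarters of an hour to settle head height in perfectly still air, while a 100 µm droplet falls within seconds; intermediate sizes remain suspended whenever room air currents exceed their settling velocities, consistent with the observations cited above.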
Examples of infectious agents that are transmitted via the droplet route include Bordetella pertussis 110, influenza virus 23, adenovirus 111, rhinovirus 104, Mycoplasma pneumoniae 112, SARS-associated coronavirus (SARS-CoV) 21,96,113, group A streptococcus 114, and Neisseria meningitidis 95,103,115. Although respiratory syncytial virus may be transmitted by the droplet route, direct contact with infected respiratory secretions is the most important determinant of transmission, and consistent adherence to Standard plus Contact Precautions prevents transmission in healthcare settings 24,116,117. Rarely, pathogens that are not transmitted routinely by the droplet route are dispersed into the air over short distances. For example, although S. aureus is transmitted most frequently by the contact route, viral upper respiratory tract infection has been associated with increased dispersal of S. aureus from the nose into the air for a distance of 4 feet under both outbreak and experimental conditions; this is known as the "cloud baby" and "cloud adult" phenomenon 118-120.

# I.B.3.c. Airborne transmission.

Airborne transmission occurs by dissemination of either airborne droplet nuclei or small particles in the respirable size range containing infectious agents that remain infective over time and distance (e.g., spores of Aspergillus spp. and Mycobacterium tuberculosis). Microorganisms carried in this manner may be dispersed over long distances by air currents and may be inhaled by susceptible individuals who have not had face-to-face contact with (or been in the same room with) the infectious individual 121-124. Preventing the spread of pathogens that are transmitted by the airborne route requires the use of special air handling and ventilation systems (e.g., AIIRs) to contain and then safely remove the infectious agent 11,12. Infectious agents to which this applies include Mycobacterium tuberculosis 124-127, rubeola virus (measles) 122, and varicella-zoster virus (chickenpox) 123. In addition, published data suggest the possibility that variola virus (smallpox) may be transmitted over long distances through the air under unusual circumstances, and AIIRs are recommended for this agent as well; however, droplet and contact routes are the more frequent routes of transmission for smallpox 108,128,129. In addition to AIIRs, respiratory protection with a NIOSH-certified N95 or higher level respirator is recommended for healthcare personnel entering the AIIR to prevent acquisition of airborne infectious agents such as M. tuberculosis 12.

For certain other respiratory infectious agents, such as influenza 130,131 and rhinovirus 104, and even some gastrointestinal viruses (e.g., norovirus 132 and rotavirus 133), there is some evidence that the pathogen may be transmitted via small-particle aerosols under natural and experimental conditions. Such transmission has occurred over distances longer than 3 feet but within a defined airspace (e.g., patient room), suggesting that it is unlikely that these agents remain viable on air currents that travel long distances. AIIRs are not required routinely to prevent transmission of these agents. Additional issues concerning examples of small-particle aerosol transmission of agents that are most frequently transmitted by the droplet route are discussed below.

# I.B.3.d. Emerging issues concerning airborne transmission of infectious agents.

# I.B.3.d.i. Transmission from patients.
The emergence of SARS in 2002, the importation of monkeypox into the United States in 2003, and the emergence of avian influenza present challenges to the assignment of isolation categories because of conflicting information and uncertainty about possible routes of transmission. Although SARS-CoV is transmitted primarily by contact and/or droplet routes, airborne transmission over a limited distance (e.g., within a room) has been suggested, though not proven 134-141. This is true of other infectious agents as well, such as influenza virus 130 and noroviruses 132,142,143. Influenza viruses are transmitted primarily by close contact with respiratory droplets 23,102, and acquisition by healthcare personnel has been prevented by Droplet Precautions, even when positive pressure rooms were used in one center 144. However, inhalational transmission could not be excluded in an outbreak of influenza among the passengers and crew of a single aircraft 130. Observations of a protective effect of UV lights in preventing influenza among patients with tuberculosis during the influenza pandemic of 1957-58 have been used to suggest airborne transmission 145,146. In contrast to the strict interpretation of an airborne route for transmission (i.e., long distances beyond the patient room environment), short-distance transmission by small-particle aerosols generated under specific circumstances (e.g., during endotracheal intubation) to persons in the immediate area near the patient has been demonstrated. Also, aerosolized particles <100 µm can remain suspended in air when room air current velocities exceed the terminal settling velocities of the particles 109. SARS-CoV transmission has been associated with endotracheal intubation, noninvasive positive pressure ventilation, and cardiopulmonary resuscitation 93,94,96,98,141. Although the most frequent routes of transmission of noroviruses are contact and food and waterborne routes, several reports suggest that noroviruses may be transmitted through aerosolization of infectious particles from vomitus or fecal material 142,143,147,148. It is hypothesized that the aerosolized particles are inhaled and subsequently swallowed.

Roy and Milton proposed a new classification for aerosol transmission when evaluating routes of SARS transmission: 1. obligate: under natural conditions, disease occurs following transmission of the agent only through inhalation of small-particle aerosols (e.g., tuberculosis); 2. preferential: natural infection results from transmission through multiple routes, but small-particle aerosols are the predominant route (e.g., measles, varicella); and 3. opportunistic: agents that naturally cause disease through other routes, but under special circumstances may be transmitted via fine-particle aerosols 149. This conceptual framework can explain rare occurrences of airborne transmission of agents that are transmitted most frequently by other routes (e.g., smallpox, SARS, influenza, noroviruses). Concerns about unknown or possible routes of transmission of agents associated with severe disease and no known treatment often result in more extreme prevention strategies than may be necessary; therefore, recommended precautions could change as the epidemiology of an emerging infection is defined and controversial issues are resolved.

# I.B.3.d.ii. Transmission from the environment.

Some airborne infectious agents are derived from the environment and do not usually involve person-to-person transmission.
For example, anthrax spores present in a finely milled powdered preparation can be aerosolized from contaminated environmental surfaces and inhaled into the respiratory tract 150,151. Spores of environmental fungi (e.g., Aspergillus spp.) are ubiquitous in the environment and may cause disease in immunocompromised patients who inhale aerosolized spores (e.g., via construction dust) 152,153. As a rule, neither of these organisms is subsequently transmitted from infected patients. However, there is one well-documented report of person-to-person transmission of Aspergillus sp. in the ICU setting that was most likely due to the aerosolization of spores during wound debridement 154. A Protective Environment refers to isolation practices designed to decrease the risk of exposure to environmental fungal agents in allogeneic HSCT patients 11,14,15,155-158. Environmental sources of respiratory pathogens (e.g., Legionella) transmitted to humans through a common aerosol source are distinct from direct patient-to-patient transmission.

# I.B.3.e. Other sources of infection.

Transmission of infection from sources other than infectious individuals includes transmission associated with common environmental sources or vehicles (e.g., contaminated food, water, or medications such as intravenous fluids). Although Aspergillus spp. have been recovered from hospital water systems 159, the role of water as a reservoir for immunosuppressed patients remains uncertain. Vectorborne transmission of infectious agents from mosquitoes, flies, rats, and other vermin also can occur in healthcare settings. Prevention of vectorborne transmission is not addressed in this document.

# I.C. Infectious Agents of Special Infection Control Interest for Healthcare Settings

Several infectious agents with important infection control implications that either were not discussed extensively in previous isolation guidelines or have emerged recently are discussed below. These are epidemiologically important organisms (e.g., C. difficile), agents of bioterrorism, prions, SARS-CoV, monkeypox, noroviruses, and the hemorrhagic fever viruses. Experience with these agents has broadened the understanding of modes of transmission and effective preventive measures. These agents are included for purposes of information and, for some (i.e., SARS-CoV, monkeypox), because of the lessons that have been learned about preparedness planning and responding effectively to new infectious agents.

# I.C.1. Epidemiologically important organisms.

Any infectious agent transmitted in healthcare settings may, under defined conditions, become targeted for control because it is epidemiologically important. C. difficile is specifically discussed below because of wide recognition of its current importance in U.S. healthcare facilities. In determining what constitutes an "epidemiologically important organism," the following characteristics apply:
• A propensity for transmission within healthcare facilities, based on published reports and the occurrence of temporal or geographic clusters of >2 patients (e.g., C. difficile, norovirus, respiratory syncytial virus (RSV), influenza, rotavirus, Enterobacter spp., Serratia spp., group A streptococcus). A single case of healthcare-associated invasive disease caused by certain pathogens (e.g., group A streptococcus post-operatively 160, in burn units 161, or in a LTCF 162; Legionella sp. 14,163; Aspergillus sp.
164) is generally considered a trigger for investigation and enhanced control measures because of the risk of additional cases and severity of illness associated with these infections.
• Antimicrobial resistance, including: resistance to first-line therapies (e.g., MRSA, VISA, VRSA, VRE, ESBL-producing organisms); common and uncommon microorganisms with unusual patterns of resistance within a facility (e.g., the first isolate of Burkholderia cepacia complex or Ralstonia spp. in non-CF patients, or a quinolone-resistant strain of Pseudomonas aeruginosa in a facility); and organisms that are difficult to treat because of innate or acquired resistance to multiple classes of antimicrobial agents (e.g., Stenotrophomonas maltophilia, Acinetobacter spp.).
• Association with serious clinical disease, increased morbidity and mortality (e.g., MRSA and MSSA, group A streptococcus).
• A newly discovered or reemerging pathogen.

# I.C.1.a. C. difficile.

C. difficile is a spore-forming, gram-positive anaerobic bacillus that was first isolated from stools of neonates in 1935 165 and identified as the most commonly identified causative agent of antibiotic-associated diarrhea and pseudomembranous colitis in 1977 166. This pathogen is a major cause of healthcare-associated diarrhea and has been responsible for many large outbreaks in healthcare settings that were extremely difficult to control. Important factors that contribute to healthcare-associated outbreaks include environmental contamination, persistence of spores for prolonged periods of time, resistance of spores to routinely used disinfectants and antiseptics, hand carriage by healthcare personnel to other patients, and exposure of patients to frequent courses of antimicrobial agents 167. Antimicrobials most frequently associated with increased risk of C. difficile include third-generation cephalosporins, clindamycin, vancomycin, and fluoroquinolones.

Since 2001, outbreaks and sporadic cases of C. difficile with increased morbidity and mortality have been observed in several U.S. states, Canada, England, and the Netherlands 168-172. The same strain of C. difficile has been implicated in these outbreaks 173. This strain, toxinotype III, North American PFGE type 1, PCR ribotype 027 (NAP1/027), has been found to hyperproduce toxin A (16-fold increase) and toxin B (23-fold increase) compared with isolates from 12 different pulsed-field gel electrophoresis (PFGE) types. A recent survey of U.S. infectious disease physicians found that 40% perceived recent increases in the incidence and severity of C. difficile disease 174. Standardization of testing methodology and surveillance definitions is needed for accurate comparisons of trends in rates among hospitals 175. It is hypothesized that the incidence of disease and apparent heightened transmissibility of this new strain may be due, at least in part, to the greater production of toxins A and B, increasing the severity of diarrhea and resulting in more environmental contamination. Considering the greater morbidity, mortality, length of stay, and costs associated with C. difficile disease in both acute care and long-term care facilities, control of this pathogen is now even more important than previously. Prevention of transmission focuses on syndromic application of Contact Precautions for patients with diarrhea, accurate identification of patients, environmental measures (e.g., rigorous cleaning of patient rooms), and consistent hand hygiene.
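The environmental disinfection measure described next relies on simple dilution arithmetic. The sketch below is illustrative only and not part of the guideline: it assumes a household bleach stock of 5.25% sodium hypochlorite (actual product strengths vary, and facility practice should follow the product label and environmental guidance) and uses the standard conversion that 1% equals 10,000 ppm.

```python
def hypochlorite_ppm(percent_naocl: float,
                     parts_bleach: int = 1,
                     parts_water: int = 9) -> float:
    """Approximate available-chlorine concentration (ppm) of a bleach dilution.

    1% sodium hypochlorite = 10,000 ppm; a "1:10" dilution here means
    1 part bleach in 10 total parts of solution.
    """
    stock_ppm = percent_naocl * 10_000
    return stock_ppm * parts_bleach / (parts_bleach + parts_water)

# Assumed household bleach at 5.25% NaOCl, diluted 1:10:
print(f"{hypochlorite_ppm(5.25):.0f} ppm")  # ~5250 ppm, near the 5,000 ppm cited below
```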
Use of soap and water, rather than alcohol-based handrubs, for mechanical removal of spores from hands, and a bleach-containing disinfectant (5,000 ppm) for environmental disinfection, may be valuable when there is transmission in a healthcare facility. See Appendix A for specific recommendations.

# I.C.1.b. Multidrug-resistant organisms (MDROs).

In general, MDROs are defined as microorganisms, predominantly bacteria, that are resistant to one or more classes of antimicrobial agents 176. Although the names of certain MDROs suggest resistance to only one agent (e.g., methicillin-resistant Staphylococcus aureus [MRSA], vancomycin-resistant enterococcus [VRE]), these pathogens are usually resistant to all but a few commercially available antimicrobial agents. This latter feature defines MDROs that are considered to be epidemiologically important and deserve special attention in healthcare facilities 177. Other MDROs of current concern include multidrug-resistant Streptococcus pneumoniae (MDRSP), which is resistant to penicillin and other broad-spectrum agents such as macrolides and fluoroquinolones; multidrug-resistant gram-negative bacilli (MDR-GNB), especially those producing extended-spectrum beta-lactamases (ESBLs); and strains of S. aureus that are intermediate or resistant to vancomycin (i.e., VISA and VRSA) 178-198.

MDROs are transmitted by the same routes as antimicrobial-susceptible infectious agents. Patient-to-patient transmission in healthcare settings, usually via hands of HCWs, has been a major factor accounting for the increase in MDRO incidence and prevalence, especially for MRSA and VRE in acute care facilities 199-201. Preventing the emergence and transmission of these pathogens requires a comprehensive approach that includes administrative involvement and measures (e.g., nurse staffing, communication systems, performance improvement processes to ensure adherence to recommended infection control measures), education and training of medical and other healthcare personnel, judicious antibiotic use, comprehensive surveillance for targeted MDROs, application of infection control precautions during patient care, environmental measures (e.g., cleaning and disinfection of the patient care environment and equipment, dedicated single-patient use of non-critical equipment), and decolonization therapy when appropriate. The prevention and control of MDROs is a national priority, one that requires that all healthcare facilities and agencies assume responsibility and participate in community-wide control programs 176,177. A detailed discussion of this topic and recommendations for prevention were published in 2006 and may be found at Management of Multidrug-Resistant Organisms in Healthcare Settings (2006) (https://www.cdc.gov/infectioncontrol/guidelines/mdro/).

# I.C.2. Agents of bioterrorism.

CDC has designated the agents that cause anthrax, smallpox, plague, tularemia, viral hemorrhagic fevers, and botulism as Category A (high priority) because these agents can be easily disseminated environmentally and/or transmitted from person to person; can cause high mortality and have the potential for major public health impact; might cause public panic and social disruption; and require special action for public health preparedness 202. General information relevant to infection control in healthcare settings for Category A agents of bioterrorism is summarized in Table 3. Consult [This link is no longer active: www.bt.cdc.gov.
Similar information may be found at CDC Bioterrorism Agents/Diseases (https://emergency.cdc.gov/agent/agentlist.asp) Accessed May 2016.] for additional, updated Category A agent information, as well as information concerning Category B and C agents of bioterrorism and updates. Category B and C agents are important but are not as readily disseminated and cause less morbidity and mortality than Category A agents.

Healthcare facilities confront a different set of issues when dealing with a suspected bioterrorism event as compared with other communicable diseases. An understanding of the epidemiology, modes of transmission, and clinical course of each disease, as well as carefully drafted plans that provide an approach and relevant websites and other resources for disease-specific guidance to healthcare, administrative, and support personnel, are essential for responding to and managing a bioterrorism event. Infection control issues to be addressed include: 1. identifying persons who may be exposed or infected; 2. preventing transmission among patients, healthcare personnel, and visitors; 3. providing treatment, chemoprophylaxis, or vaccine to potentially large numbers of people; 4. protecting the environment, including the logistical aspects of securing sufficient numbers of AIIRs or designating areas for patient cohorts when there are an insufficient number of AIIRs available; 5. providing adequate quantities of appropriate personal protective equipment; and 6. identifying appropriate staff to care for potentially infectious patients (e.g., vaccinated healthcare personnel for care of patients with smallpox). The response is likely to differ for exposures resulting from an intentional release compared with naturally occurring disease because of the large number of persons that can be exposed at the same time and possible differences in pathogenicity. Disease-specific guidance is available for anthrax 203; smallpox 204-206; plague 207,208; botulinum toxin 209; tularemia 210; and hemorrhagic fever viruses 211,212.

# I.C.2.a. Pre-event administration of smallpox (vaccinia) vaccine to healthcare personnel.

Vaccination of personnel in preparation for a possible smallpox exposure has important infection control implications 213-215. These include the need for meticulous screening for vaccine contraindications in persons who are at increased risk for adverse vaccinia events; containment and monitoring of the vaccination site to prevent transmission in the healthcare setting and at home; and the management of patients with vaccinia-related adverse events 216,217. The pre-event U.S. smallpox vaccination program of 2003 is an example of the effectiveness of carefully developed recommendations for both screening potential vaccinees for contraindications and vaccination site care and monitoring. Approximately 760,000 individuals were vaccinated in the Department of Defense and 40,000 in the civilian or public health populations from December 2002 to February 2005, including approximately 70,000 who worked in healthcare settings. There were no cases of eczema vaccinatum, progressive vaccinia, fetal vaccinia, or contact transfer of vaccinia in healthcare settings or in military workplaces 218,219. Outside the healthcare setting, there were 53 cases of contact transfer from military vaccinees to close personal contacts (e.g., bed partners or contacts during participation in sports such as wrestling 220). All contact transfers were from individuals who were not following recommendations to cover their vaccination sites.
Vaccinia virus was confirmed by culture or PCR in 30 cases, and two of the confirmed cases resulted from tertiary transfer. All recipients, including one breast-fed infant, recovered without complication. Subsequent studies using viral culture and PCR techniques have confirmed the effectiveness of semipermeable dressings to contain vaccinia 221-224. This experience emphasizes the importance of ensuring that newly vaccinated healthcare personnel adhere to recommended vaccination-site care, especially if they are to care for high-risk patients. Recommendations for pre-event smallpox vaccination of healthcare personnel and vaccinia-related infection control recommendations are published in the MMWR 216,225, with updates posted on the CDC bioterrorism web site 205.

# I.C.3. Prions.

Creutzfeldt-Jakob disease (CJD) is a rapidly progressive, degenerative, neurologic disorder of humans with an incidence in the United States of approximately 1 person/million population/year 226,227. Iatrogenic transmission has occurred from use of contaminated human growth hormone derived from cadaveric pituitary tissue 228,229, from implantation of contaminated human dura mater grafts 230, or from corneal transplants 231. Transmission also has been linked to the use of contaminated neurosurgical instruments or stereotactic electroencephalogram electrodes 232-235. Prion diseases in animals include scrapie in sheep and goats, bovine spongiform encephalopathy (BSE, or "mad cow disease") in cattle, and chronic wasting disease in deer and elk 236. BSE, first recognized in the United Kingdom (UK) in 1986, was associated with a major epidemic among cattle that had consumed contaminated meat and bone meal.

The possible transmission of BSE to humans, causing variant CJD (vCJD), was first described in 1996 and subsequently found to be associated with consumption of BSE-contaminated cattle products, primarily in the United Kingdom. There is strong epidemiologic and laboratory evidence for a causal association between the causative agent of BSE and vCJD 237. Although most cases of vCJD have been reported from the UK, a few cases also have been reported from Europe, Japan, Canada, and the United States. Most vCJD cases worldwide lived in or visited the UK during the years of a large outbreak of BSE (1980-96) and may have consumed contaminated cattle products during that time (Creutzfeldt-Jakob Disease, Classic (CJD) (https://www.cdc.gov/prions/cjd/index.html) [Current version of this document may differ from original.]). Although there has been no indigenously acquired vCJD in the United States, the sporadic occurrence of BSE in cattle in North America has heightened awareness of the possibility that such infections could occur, and has led to increased surveillance activities. Updated information may be found on the following website: Creutzfeldt-Jakob Disease, Classic (CJD) (https://www.cdc.gov/prions/cjd/index.html) [Current version of this document may differ from original.]. The public health impact of prion diseases has been reviewed 238.

vCJD in humans has different clinical and pathologic characteristics from sporadic or classic CJD 239, including the following: 1. younger median age at death: 28 (range 16-48) vs. 68 years; 2. longer duration of illness: median 14 months vs. 4-6 months; 3. increased frequency of sensory symptoms and early psychiatric symptoms with delayed onset of frank neurologic signs; and 4. detection of prions in tonsillar and other lymphoid tissues from vCJD patients but not from sporadic CJD patients 240.
Similar to sporadic CJD, there have been no reported cases of direct human-to-human transmission of vCJD by casual or environmental contact, droplet, or airborne routes. Ongoing blood safety surveillance in the U.S. has not detected sporadic CJD transmission through blood transfusion 241-243. However, bloodborne transmission of vCJD is believed to have occurred in two UK patients 244,245.

# I.C.4. Severe acute respiratory syndrome (SARS).

SARS is a respiratory illness caused by a previously unrecognized coronavirus, SARS-CoV, that emerged from southern China late in 2002 and spread internationally in 2003 247,248. The incubation period from exposure to the onset of symptoms is 2 to 7 days but can be as long as 10 days and, uncommonly, even longer 249. The illness is initially difficult to distinguish from other common respiratory infections. Outbreaks in healthcare settings, with transmission to large numbers of healthcare personnel and patients, have been a striking feature of SARS; undiagnosed, infectious patients and visitors were important initiators of these outbreaks 21,252-254. The relative contribution of potential modes of transmission is not precisely known. There is ample evidence for droplet and contact transmission 96,101,113; however, opportunistic airborne transmission cannot be excluded 101,135-139,149,255. For example, exposure to aerosol-generating procedures (e.g., endotracheal intubation, suctioning) was associated with transmission of infection to large numbers of healthcare personnel outside of the United States 93,94,96,98,253. Therefore, aerosolization of small infectious particles generated during these and other similar procedures could be a risk factor for transmission to others within a multi-bed room or shared airspace. A review of the infection control literature generated from the SARS outbreaks of 2003 concluded that the greatest risk of transmission is to those who have close contact, are not properly trained in use of protective infection control procedures, or do not consistently use PPE, and that N95 or higher respirators may offer additional protection to those exposed to aerosol-generating procedures and high-risk activities 256,257. Organizational and individual factors that affected adherence to infection control practices for SARS also were identified 257.

Control of SARS requires a coordinated, dynamic response by multiple disciplines in a healthcare setting 9. Early detection of cases is accomplished by screening persons with symptoms of a respiratory infection for history of travel to areas experiencing community transmission or contact with SARS patients, followed by implementation of Respiratory Hygiene/Cough Etiquette (i.e., placing a mask over the patient's nose and mouth) and physical separation from other patients in common waiting areas. The precise combination of precautions to protect healthcare personnel has not been determined 259. In Hong Kong, the use of Droplet and Contact Precautions, which included use of a mask but not a respirator, was effective in protecting healthcare personnel 113. However, in Toronto, consistent use of an N95 respirator was slightly more protective than a mask 93. It is noteworthy that there was no transmission of SARS-CoV to public hospital workers in Vietnam despite inconsistent use of infection control measures, including use of PPE, which suggests that other factors (e.g., severity of disease, frequency of high-risk procedures or events, environmental features) may influence opportunities for transmission 260. SARS-CoV also has been transmitted in the laboratory setting through breaches in recommended laboratory practices.
Research laboratories where SARS-CoV was under investigation were the source of most cases reported after the first series of outbreaks in the winter and spring of 2003 261,262. Studies of the SARS outbreaks of 2003 and transmissions that occurred in the laboratory re-affirm the effectiveness of recommended infection control precautions and highlight the importance of consistent adherence to these measures. Lessons from the SARS outbreaks are useful for planning to respond to future public health crises, such as pandemic influenza and bioterrorism events. Surveillance for cases among patients and healthcare personnel, ensuring availability of adequate supplies and staffing, and limiting access to healthcare facilities were important factors in the response to SARS that have been summarized 9.

# I.C.5. Monkeypox.

Monkeypox was introduced into the United States in June 2003, when infected rodents imported from Africa transmitted the virus to pet prairie dogs, which in turn transmitted infection to humans 263. This outbreak demonstrates the importance of recognition and prompt reporting of unusual disease presentations by clinicians to enable prompt identification of the etiology, and the potential of epizootic diseases to spread from animal reservoirs to humans through personal and occupational exposure 264. Limited data on transmission of monkeypox are available. Transmission from infected animals and humans is believed to occur primarily through direct contact with lesions and respiratory secretions; airborne transmission from animals to humans is unlikely but cannot be excluded, and may have occurred in veterinary practices (e.g., during administration of nebulized medications to ill prairie dogs 265). Among humans, four instances of monkeypox transmission within hospitals have been reported in Africa among children, usually related to sharing the same ward or bed 266,267. Additional recent literature documents transmission of Congo Basin monkeypox in a hospital compound for an extended number of generations 268.

There has been no evidence of airborne or any other person-to-person transmission of monkeypox in the United States, and no new cases of monkeypox have been identified since the outbreak in June 2003 269. The outbreak strain is a clade of monkeypox distinct from the Congo Basin clade and may have different epidemiologic properties (including human-to-human transmission potential) from monkeypox strains of the Congo Basin 270; this awaits further study. Smallpox vaccine is 85% protective against Congo Basin monkeypox 271. Since there is an associated case fatality rate of ≤10%, administration of smallpox vaccine within 4 days to individuals who have had direct exposure to patients or animals with monkeypox is a reasonable consideration 272. For the most current information, see CDC Monkeypox (https://www.cdc.gov/poxvirus/monkeypox/index.html) [Current version of this document may differ from original.].

# I.C.6. Noroviruses.

Noroviruses, formerly referred to as Norwalk-like viruses, are members of the Caliciviridae family. These agents are transmitted via contaminated food or water and from person to person, causing explosive outbreaks of gastrointestinal disease 273. Environmental contamination also has been documented as a contributing factor in ongoing transmission during outbreaks 274,275. Although noroviruses cannot be propagated in cell culture, nucleic acid detection by molecular diagnostic techniques has facilitated a greater appreciation of their role in outbreaks of gastrointestinal disease 276.
Reported outbreaks in hospitals 132,142,277, nursing homes 275,278-283, cruise ships 284,285, hotels 143,147, schools 148, and large crowded shelters established for hurricane evacuees 286 demonstrate their highly contagious nature, the disruptive impact they have in healthcare facilities and the community, and the difficulty of controlling outbreaks in settings where people share common facilities and space. Of note, there is nearly a fivefold increase in the risk to patients in outbreaks where a patient is the index case compared with exposure of patients during outbreaks where a staff member is the index case 287.

The average incubation period for gastroenteritis caused by noroviruses is 12-48 hours, and the clinical course lasts 12-60 hours 273. Illness is characterized by acute onset of nausea, vomiting, abdominal cramps, and/or diarrhea. The disease is largely self-limited; rarely, death caused by severe dehydration can occur, particularly among the elderly with debilitating health conditions. The epidemiology of norovirus outbreaks shows that even though primary cases may result from exposure to fecally contaminated food or water, secondary and tertiary cases often result from person-to-person transmission that is facilitated by contamination of fomites 273,288 and dissemination of infectious particles, especially during the process of vomiting 132,142,143,147,148,273,279,280. Widespread, persistent, and inapparent contamination of the environment and fomites can make outbreaks extremely difficult to control 147,275,284. These clinical observations and the detection of norovirus RNA on horizontal surfaces 5 feet above the level that might be touched normally suggest that, under certain circumstances, aerosolized particles may travel distances beyond 3 feet 147. It is hypothesized that infectious particles may be aerosolized from vomitus, inhaled, and swallowed. In addition, individuals who are responsible for cleaning the environment may be at increased risk of infection. Development of disease and transmission may be facilitated by the low infectious dose (i.e., <100 viral particles) 289 and the resistance of these viruses to the usual cleaning and disinfection agents (i.e., they may survive ≤10 ppm chlorine) 290-292. An alternate phenolic agent that was shown to be effective against feline calicivirus was used for environmental cleaning in one outbreak 275,293. There are insufficient data to determine the efficacy of alcohol-based hand rubs against noroviruses when the hands are not visibly soiled 294. Absence of disease in certain individuals during an outbreak may be explained by protection from infection conferred by the B histo-blood group antigen 295. Consultation on outbreaks of gastroenteritis is available through CDC's Division of Viral and Rickettsial Diseases 296.

# I.C.7. Hemorrhagic fever viruses (HFV).

The hemorrhagic fever viruses are a mixed group of viruses that cause serious disease with high fever, skin rash, bleeding diathesis, and, in some cases, high mortality; the disease caused is referred to as viral hemorrhagic fever (VHF). Among the more commonly known HFVs are Ebola and Marburg viruses (Filoviridae), Lassa virus (Arenaviridae), Crimean-Congo hemorrhagic fever and Rift Valley fever viruses (Bunyaviridae), and dengue and yellow fever viruses (Flaviviridae) 212,297. These viruses are transmitted to humans via contact with infected animals or via arthropod vectors.
While none of these viruses is endemic in the United States, outbreaks in affected countries provide potential opportunities for importation by infected humans and animals. Furthermore, there are concerns that some of these agents could be used as bioweapons 212. Person-to-person transmission is documented for Ebola, Marburg, Lassa, and Crimean-Congo hemorrhagic fever viruses. In resource-limited healthcare settings, transmission of these agents to healthcare personnel, patients, and visitors has been described, and in some outbreaks has accounted for a large proportion of cases 298-300. Transmissions within households also have occurred among individuals who had direct contact with ill persons or their body fluids, but not among those who did not have such contact 301.

Evidence concerning the transmission of HFVs has been summarized 212,302. Person-to-person transmission is associated primarily with direct blood and body fluid contact. Percutaneous exposure to contaminated blood carries a particularly high risk for transmission and increased mortality 303,304. The finding of large numbers of Ebola viral particles in the skin and the lumina of sweat glands has raised concern that transmission could occur from direct contact with intact skin, though epidemiologic evidence to support this is lacking 305. Postmortem handling of infected bodies is an important risk for transmission 301,306,307. In rare situations, cases in which the mode of transmission was unexplained among individuals with no known direct contact have led to speculation that airborne transmission could have occurred 298. However, airborne transmission of naturally occurring HFVs in humans has not been seen. In one study of airplane passengers exposed to an in-flight index case of Lassa fever, there was no transmission to any passengers 308. In the laboratory setting, animals have been infected experimentally with Marburg or Ebola viruses via direct inoculation of the nose, mouth, and/or conjunctiva 309,310 and by using mechanically generated virus-containing aerosols 311,312. Transmission of Ebola virus among laboratory primates in an animal facility has been described 313. Secondarily infected animals were in individual cages and separated by approximately 3 meters. Although the possibility of airborne transmission was suggested, the authors were not able to exclude droplet or indirect contact transmission in this incidental observation.

Guidance on infection control precautions for HFVs that are transmitted person-to-person has been published by CDC 1,211 and by the Johns Hopkins Center for Civilian Biodefense Strategies 212. The most recent recommendations at the time of publication of this document were posted on the CDC website on 5/19/05 314. Inconsistencies among the various recommendations have raised questions about the appropriate precautions to use in U.S. hospitals. In less developed countries, outbreaks of HFVs have been controlled with basic hygiene, barrier precautions, safe injection practices, and safe burial practices 299,306. The preponderance of evidence on HFV transmission indicates that Standard, Contact, and Droplet Precautions with eye protection are effective in protecting healthcare personnel and visitors who may attend an infected patient. Single gloves are adequate for routine patient care; double-gloving is advised during invasive procedures (e.g., surgery) that pose an increased risk for blood exposure. Routine eye protection (i.e.,
goggles or face shield) is particularly important. Fluid-resistant gowns should be worn for all patient contact. Airborne Precautions are not required for routine patient care; however, use of AIIRs is prudent when procedures that could generate infectious aerosols are performed (e.g., endotracheal intubation, bronchoscopy, suctioning, autopsy procedures involving oscillating saws). N95 or higher level respirators may provide added protection for individuals in a room during aerosol-generating procedures (Table 3, Appendix A). When a patient with a syndrome consistent with hemorrhagic fever also has a history of travel to an endemic area, precautions are initiated upon presentation and then modified as more information is obtained (Table 2). Patients with hemorrhagic fever syndrome in the setting of a suspected bioweapon attack should be managed using Airborne Precautions, including AIIRs, since the epidemiology of a potentially weaponized hemorrhagic fever virus is unpredictable.

# I.D. Transmission Risks Associated with Specific Types of Healthcare Settings

Numerous factors influence differences in transmission risks among the various healthcare settings. These include the population characteristics (e.g., increased susceptibility to infections, type and prevalence of indwelling devices), intensity of care, exposure to environmental sources, length of stay, and frequency of interaction between patients/residents with each other and with HCWs. These factors, as well as organizational priorities, goals, and resources, influence how different healthcare settings adapt transmission prevention guidelines to meet their specific needs 315,316. Infection control management decisions are informed by data regarding institutional experience/epidemiology; trends in community and institutional HAIs; local, regional, and national epidemiology; and emerging infectious disease threats.

# I.D.1. Hospitals.

Infection transmission risks are present in all hospital settings. However, certain hospital settings and patient populations have unique conditions that predispose patients to infection and merit special mention. These are often sentinel sites for the emergence of new transmission risks that may be unique to that setting or present opportunities for transmission to other settings in the hospital.

# I.D.1.a. Intensive care units.

Intensive care units (ICUs) serve patients who are immunocompromised by disease state and/or by treatment modalities, as well as other critically ill patients. ICU patients are at increased risk for HAIs 318,319 because of underlying diseases and conditions, the invasive medical devices and technology used in their care (e.g., central venous catheters and other intravascular devices, mechanical ventilators, extracorporeal membrane oxygenation (ECMO), hemodialysis/-filtration, pacemakers, implantable left ventricular assist devices), the frequency of contact with healthcare personnel, prolonged length of stay, and prolonged exposure to antimicrobial agents 320-331. Furthermore, adverse patient outcomes in this setting are more severe and are associated with a higher mortality 332. Outbreaks associated with a variety of bacterial, fungal, and viral pathogens due to common-source and person-to-person transmissions are frequent in adult and pediatric ICUs 31,333-338.

# I.D.1.b. Burn units.

Burn wounds can provide optimal conditions for colonization, infection, and transmission of pathogens; infection acquired by burn patients is a frequent cause of morbidity and mortality 320,339,340. In patients with a burn injury involving ≥30% of the total body surface area (TBSA), the risk of invasive burn wound infection is particularly high 341,342.
Infections that occur in patients with burn injury involving <30% TBSA are usually associated with the use of invasive devices. Methicillin-susceptible Staphylococcus aureus, MRSA, enterococci, including VRE, gram-negative bacteria, and Candida are prevalent pathogens in burn infections 53,340,343-350, and outbreaks of these organisms have been reported 351-354. Shifts over time in the predominance of pathogens causing infections among burn patients often lead to changes in burn care practices 343,355-358. Burn wound infections caused by Aspergillus sp. or other environmental molds may result from exposure to supplies contaminated during construction 359 or to dust generated during construction or other environmental disruption 360.

Hydrotherapy equipment is an important environmental reservoir of gram-negative organisms. Its use for burn care is discouraged based on demonstrated associations between use of contaminated hydrotherapy equipment and infections. Burn wound infections and colonization, as well as bloodstream infections, caused by multidrug-resistant P. aeruginosa 361, A. baumannii 362, and MRSA 352 have been associated with hydrotherapy; excision of burn wounds in operating rooms is preferred. Advances in burn care, specifically early excision and grafting of the burn wound, use of topical antimicrobial agents, and institution of early enteral feeding, have led to decreased infectious complications. Other advances have included prophylactic antimicrobial usage, selective digestive decontamination (SDD), and use of antimicrobial-coated catheters (ACC), but few epidemiologic studies and no efficacy studies have been performed to show the relative benefit of these measures 357.

There is no consensus on the most effective infection control practices to prevent transmission of infections to and from patients with serious burns (e.g., single-bed rooms 358, laminar flow 363, and high-efficiency particulate air [HEPA] filtration 360, or maintaining burn patients in a separate unit without exposure to patients or equipment from other units 364). There also is controversy regarding the need for and type of barrier precautions for routine care of burn patients. One retrospective study demonstrated efficacy and cost effectiveness of a simplified barrier isolation protocol for wound colonization, emphasizing handwashing and use of gloves, caps, masks, and plastic impermeable aprons (rather than isolation gowns) for direct patient contact 365. However, there have been no studies that define the most effective combination of infection control precautions for use in burn settings. Prospective studies in this area are needed.

# I.D.1.c. Pediatrics.

Studies of the epidemiology of HAIs in children have identified unique infection control issues in this population 63,64,366-370. Pediatric intensive care unit (PICU) patients and the lowest-birthweight babies in the high-risk nursery (HRN) monitored in the NNIS system have had high rates of central venous catheter-associated bloodstream infections 64,320,369-372. Additionally, there is a high prevalence of community-acquired infections among hospitalized infants and young children who have not yet become immune either by vaccination or by natural infection.
The result is more patients and their sibling visitors with transmissible infections present in pediatric healthcare settings, especially during seasonal epidemics (e.g., pertussis 36,40,41; respiratory viral infections, including those caused by RSV 24, influenza viruses 373, parainfluenza virus 374, human metapneumovirus 375, and adenoviruses 376; rubeola [measles] 34; varicella [chickenpox] 377; and rotavirus 38,378). Close physical contact between healthcare personnel and infants and young children (e.g., cuddling, feeding, playing, changing soiled diapers, and cleaning copious uncontrolled respiratory secretions) provides abundant opportunities for transmission of infectious material. Practices and behaviors such as congregation of children in play areas where toys and bodily secretions are easily shared and family members rooming-in with pediatric patients can further increase the risk of transmission. Pathogenic bacteria have been recovered from toys used by hospitalized patients 379; contaminated bath toys were implicated in an outbreak of multidrug-resistant P. aeruginosa on a pediatric oncology unit 80.

In addition, several patient factors increase the likelihood that infection will result from exposure to pathogens in healthcare settings (e.g., immaturity of the neonatal immune system, lack of previous natural infection and resulting immunity, prevalence of patients with congenital or acquired immune deficiencies, congenital anatomic anomalies, and use of life-saving invasive devices in neonatal and pediatric intensive care units) 63. There are theoretical concerns that infection risk will increase in association with innovative practices used in the NICU for the purpose of improving developmental outcomes. Such factors include co-bedding 380 and kangaroo care 381, which may increase opportunity for skin-to-skin exposure of multiple-gestation infants to each other and to their mothers, respectively; although infection risk may actually be reduced among infants receiving kangaroo care 382. Children who attend child care centers 383,384 and pediatric rehabilitation units 385 may increase the overall burden of antimicrobial resistance (e.g., by contributing to the reservoir of community-associated MRSA [CA-MRSA]) 386-391. Patients in chronic care facilities may have increased rates of colonization with resistant GNBs and may be sources of introduction of resistant organisms to acute care settings 50.

# I.D.2. Non-acute healthcare settings.

Healthcare is provided in various settings outside of hospitals, including facilities such as long-term care facilities (LTCF) (e.g., nursing homes), homes for the developmentally disabled, settings where behavioral health services are provided, rehabilitation centers, and hospices 392. In addition, healthcare may be provided in non-healthcare settings such as workplaces with occupational health clinics, adult day care centers, assisted living facilities, homeless shelters, jails and prisons, school clinics, and infirmaries. Each of these settings has unique circumstances and population risks to consider when designing and implementing an infection control program. Several of the most common settings and their particular challenges are discussed below. While this Guideline does not address each setting, the principles and strategies provided may be adapted and applied as appropriate.

# I.D.2.a. Long-term care.
The designation LTCF applies to a diverse group of residential settings, ranging from institutions for the developmentally disabled to nursing homes for the elderly and pediatric chronic-care facilities 393-395. Nursing homes for the elderly predominate numerically and frequently represent long-term care as a group of facilities. Approximately 1.8 million Americans reside in the nation's 16,500 nursing homes 396. Estimates of HAI rates of 1.8 to 13.5 per 1000 resident-care days have been reported, with a range of 3 to 7 per 1000 resident-care days in the more rigorous studies 397-401 (a worked example of this rate calculation follows below). The infrastructure described in the Department of Veterans Affairs nursing home care units is a promising example for the development of a nationwide HAI surveillance system for LTCFs 402.

LTCFs are different from other healthcare settings in that elderly patients at increased risk for infection are brought together in one setting and remain in the facility for extended periods of time; for most residents, it is their home. An atmosphere of community is fostered, and residents share common eating and living areas and participate in various facility-sponsored activities 403,404. Since able residents interact freely with each other, controlling transmission of infection in this setting is challenging 405. Residents who are colonized or infected with certain microorganisms are, in some cases, restricted to their room. However, because of the psychosocial risks associated with such restriction, it has been recommended that psychosocial needs be balanced with infection control needs in the LTCF setting 406-409.

Documented LTCF outbreaks have been caused by various viruses (e.g., influenza virus 35,410-412, rhinovirus 413, adenovirus [conjunctivitis] 414, and norovirus 275,278,279,281) and bacteria (e.g., group A streptococcus 162, B. pertussis 415, non-susceptible S. pneumoniae 197,198, other MDROs, and Clostridium difficile 416). These pathogens can lead to substantial morbidity and mortality and increased medical costs; prompt detection and implementation of effective control measures are required.

Risk factors for infection are prevalent among LTCF residents 395,417,418. Age-related declines in immunity may affect responses to immunizations for influenza and other infectious agents and increase susceptibility to tuberculosis. Immobility, incontinence, dysphagia, underlying chronic diseases, poor functional status, and age-related skin changes increase susceptibility to urinary, respiratory, and cutaneous and soft tissue infections, while malnutrition can impair wound healing 419-423. Medications (e.g., drugs that affect level of consciousness, immune function, gastric acid secretions, and normal flora, including antimicrobial therapy) and invasive devices (e.g., urinary catheters and feeding tubes) heighten susceptibility to infection and colonization in LTCF residents 424-426. Finally, limited functional status and total dependence on healthcare personnel for activities of daily living have been identified as independent risk factors for infection 401,417,427 and for colonization with MRSA 428,429 and ESBL-producing K. pneumoniae 430. Several position papers and review articles have been published that provide guidance on various aspects of infection control and antimicrobial resistance in LTCFs 406-408,431-436.
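As a point of reference for the incidence estimates cited above, LTCF infection rates are expressed per 1,000 resident-care days: the number of infections divided by the total days of care delivered, scaled by 1,000. The sketch below is illustrative only and not part of the guideline; the facility census, observation period, and infection count are assumed values chosen for the example, not figures from the cited studies.

```python
def hai_rate_per_1000_resident_days(infections: int, resident_days: int) -> float:
    """Healthcare-associated infection rate per 1,000 resident-care days."""
    return infections / resident_days * 1000

# Hypothetical facility: 100 residents, all present for a 30-day month,
# with 15 HAIs identified during that month.
resident_days = 100 * 30  # 3,000 resident-care days
rate = hai_rate_per_1000_resident_days(15, resident_days)
print(f"{rate:.1f} infections per 1,000 resident-care days")  # 5.0
```

Under these assumed figures the facility's rate is 5.0 per 1,000 resident-care days, which would fall within the 3 to 7 range reported by the more rigorous studies cited above.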
The Centers for Medicare and Medicaid Services (CMS) has established regulations for the prevention of infection in LTCFs 437 . Because residents of LTCFs are hospitalized frequently, they can transfer pathogens between LTCFs and the healthcare facilities in which they receive care 8,[438][439][440][441] . This is also true for pediatric long-term care populations. Pediatric chronic care facilities have been associated with the importation of extended-spectrum cephalosporin-resistant gram-negative bacilli into one PICU 50 . Children from pediatric rehabilitation units may contribute to the reservoir of community-associated MRSA 385,[389][390][391] .
# I.D.2.b. Ambulatory care.
In the past decade, healthcare delivery in the United States has shifted from the acute, inpatient hospital to a variety of ambulatory and community-based settings, including the home. Ambulatory care is provided in hospital-based outpatient clinics, nonhospital-based clinics and physician offices, public health clinics, free-standing dialysis centers, ambulatory surgical centers, urgent care centers, and many other settings. In 2000, there were 83 million visits to hospital outpatient clinics and more than 823 million visits to physician offices 442 ; ambulatory care now accounts for most patient encounters with the healthcare system 443 . In these settings, adapting transmission prevention guidelines is challenging because patients remain in common areas for prolonged periods waiting to be seen by a healthcare provider or awaiting admission to the hospital, examination or treatment rooms are turned around quickly with limited cleaning, and infectious patients may not be recognized immediately. Furthermore, immunocompromised patients often receive chemotherapy in infusion rooms where they stay for extended periods of time along with other types of patients. There are few data on the risk of HAIs in ambulatory care settings, with the exception of hemodialysis centers 18,444,445 . Transmission of infections in outpatient settings has been reviewed in three publications [446][447][448] . Goodman and Solomon summarized 53 clusters of infections associated with the outpatient setting from 1961-1990 446 . Overall, 29 clusters were associated with common source transmission from contaminated solutions or equipment, 14 with person-to-person transmission from or involving healthcare personnel, and 10 with airborne or droplet transmission among patients and healthcare workers. Transmission of bloodborne pathogens (i.e., hepatitis B and C viruses and, rarely, HIV) in outbreaks, sometimes involving hundreds of patients, continues to occur in ambulatory settings. These outbreaks often are related to common source exposures, usually a contaminated medical device, multi-dose vial, or intravenous solution 82,[449][450][451][452][453] . In all cases, transmission has been attributed to failure to adhere to fundamental infection control principles, including safe injection practices and aseptic technique. This subject has been reviewed and recommended infection control and safe injection practices summarized 454 . Airborne transmission of M. tuberculosis and measles in ambulatory settings, most frequently emergency departments, has been reported 34,127,446,448,[455][456][457] . Measles virus was transmitted in physician offices and other outpatient settings during an era when immunization rates were low and measles outbreaks in the community were occurring regularly 34,122,458 .
Rubella has been transmitted in the outpatient obstetric setting 33 ; there are no published reports of varicella transmission in the outpatient setting. In the ophthalmology setting, adenovirus type 8 epidemic keratoconjunctivitis has been transmitted via incompletely disinfected ophthalmology equipment and/or from healthcare workers to patients, presumably by contaminated hands 17,446,448,[459][460][461][462] . If transmission in outpatient settings is to be prevented, screening for potentially infectious symptomatic and asymptomatic individuals, especially those who may be at risk for transmitting airborne infectious agents (e.g., M. tuberculosis, varicella-zoster virus, rubeola [measles]), is necessary at the start of the initial patient encounter. Upon identification of a potentially infectious patient, prompt separation from other patients and implementation of appropriate control measures (e.g., Respiratory Hygiene/Cough Etiquette and Transmission-Based Precautions) can decrease transmission risks 9,12 . Transmission of MRSA and VRE in outpatient settings has not been reported, but the association of CA-MRSA in healthcare personnel working in an outpatient HIV clinic with environmental CA-MRSA contamination in that clinic suggests the possibility of transmission in that setting 463 . Patient-to-patient transmission of Burkholderia species and Pseudomonas aeruginosa in outpatient clinics for adults and children with cystic fibrosis has been confirmed 464,465 .
# I.D.2.c. Home care.
Home care in the United States is delivered by more than 20,000 provider agencies, including home health agencies, hospices, durable medical equipment providers, home infusion therapy services, and personal care and support services providers. Home care is provided to patients of all ages with both acute and chronic conditions. The scope of services ranges from assistance with activities of daily living and physical and occupational therapy to the care of wounds, infusion therapy, and chronic ambulatory peritoneal dialysis (CAPD). The incidence of infection in home care patients, other than infections associated with infusion therapy, is not well studied [466][467][468][469][470][471] . However, data collection and calculation of infection rates have been accomplished for central venous catheter-associated bloodstream infections in patients receiving home infusion therapy [470][471][472][473][474] and for the risk of blood contact through percutaneous or mucosal exposures, demonstrating that surveillance can be performed in this setting 475 . Draft definitions for home care associated infections have been developed 476 . Transmission risks during home care are presumed to be minimal. The main transmission risks to home care patients are from an infectious healthcare provider or contaminated equipment; providers also can be exposed to an infectious patient during home visits. Since home care involves patient care by a limited number of personnel in settings without multiple patients or shared equipment, the potential reservoir of pathogens is reduced. Infections of home care providers that could pose a risk to home care patients include infections transmitted by the airborne or droplet routes (e.g., chickenpox, tuberculosis, influenza), skin infestations (e.g., scabies 69 and lice), and infections (e.g., impetigo) transmitted by direct or indirect contact.
There are no published data on indirect transmission of MDROs from one home care patient to another, although this is theoretically possible if contaminated equipment is transported from an infected or colonized patient and used on another patient. Of note, investigation of the first case of VISA in home care 186 and the first two reported cases of VRSA 178,180,181,183 found no evidence of transmission of VISA or VRSA to other home care recipients. Home health care also may contribute to antimicrobial resistance; a review of outpatient vancomycin use found that 39% of recipients did not receive the antibiotic according to recommended guidelines 477 . Although most home care agencies implement policies and procedures to prevent transmission of organisms, the current approach is based on the adaptation of the 1996 Guideline for Isolation Precautions in Hospitals 1 as well as other professional guidance 478,479 . This issue has been very challenging in the home care industry, and practice has been inconsistent and frequently not evidence-based. For example, many home health agencies continue to observe "nursing bag technique," a practice that prescribes the use of barriers between the nursing bag and environmental surfaces in the home 480 . While the home environment may not always appear clean, the use of barriers between two non-critical surfaces has been questioned 481,482 . Opportunities exist to conduct research in home care related to infection transmission risks 483 .
# I.D.2.d. Other sites of healthcare delivery.
Facilities that are not primarily healthcare settings but in which healthcare is delivered include clinics in correctional facilities and shelters. Both settings can have suboptimal features, such as crowded conditions and poor ventilation. Economically disadvantaged individuals who may have chronic illnesses and healthcare problems related to alcoholism, injection drug use, poor nutrition, and/or inadequate shelter often receive their primary healthcare at sites such as these 484 . Infectious diseases of special concern for transmission include tuberculosis, scabies, respiratory infections (e.g., N. meningitidis, S. pneumoniae), sexually transmitted and bloodborne diseases (e.g., HIV, HBV, HCV, syphilis, gonorrhea), hepatitis A virus (HAV), diarrheal agents such as norovirus, and foodborne diseases 286,[485][486][487][488] . A high index of suspicion for tuberculosis and CA-MRSA in these populations is needed, as outbreaks in these settings or among the populations they serve have been reported [489][490][491][492][493][494][495][496][497] . Patient encounters in these types of facilities provide an opportunity to deliver recommended immunizations and screen for M. tuberculosis infection in addition to diagnosing and treating acute illnesses 498 . Recommended infection control measures in these non-traditional areas designated for healthcare delivery are the same as for other ambulatory care settings. Therefore, these settings must be equipped to observe Standard Precautions and, when indicated, Transmission-Based Precautions.
# I.E. Transmission Risks Associated with Special Patient Populations
As new treatments emerge for complex diseases, unique infection control challenges associated with special patient populations need to be addressed.
# I.E.1. Immunocompromised patients.
Patients who have congenital primary immune deficiencies or acquired disease (e.g.,
treatment-induced immune deficiencies) are at increased risk for numerous types of infections while receiving healthcare and may be located throughout the healthcare facility. The specific defects of the immune system determine the types of infections that are most likely to be acquired (e.g., viral infections are associated with T-cell defects, and fungal and bacterial infections occur in patients who are neutropenic). As a general group, immunocompromised patients can be cared for in the same environment as other patients; however, it is always advisable to minimize exposure to other patients with transmissible infections such as influenza and other respiratory viruses 499,500 . The use of more intense chemotherapy regimens for treatment of childhood leukemia may be associated with prolonged periods of neutropenia and suppression of other components of the immune system, extending the period of infection risk and raising the concern that additional precautions may be indicated for select groups 501,502 . With the application of newer and more intense immunosuppressive therapies for a variety of medical conditions (e.g., rheumatologic disease 503,504 , inflammatory bowel disease 505 ), immunosuppressed patients are likely to be more widely distributed throughout a healthcare facility rather than localized to single patient units (e.g., hematology-oncology). Guidelines for preventing infections in certain groups of immunocompromised patients have been published 15,506,507 . Published data provide evidence to support placing allogeneic HSCT patients in a Protective Environment 15,157,158 . Also, three guidelines have been developed that address the special requirements of these immunocompromised patients, including use of antimicrobial prophylaxis and engineering controls to create a Protective Environment for the prevention of infections caused by Aspergillus spp. and other environmental fungi 11,14,15 . As more intense chemotherapy regimens associated with prolonged periods of neutropenia or graft-versus-host disease are implemented, the period of risk and duration of environmental protection may need to be prolonged beyond the traditional 100 days 508 .
# I.E.2. Cystic fibrosis patients.
Patients with cystic fibrosis (CF) require special consideration when developing infection control guidelines. Compared with other patients, CF patients require additional protection to prevent transmission from contaminated respiratory therapy equipment [509][510][511][512][513] . Infectious agents such as Burkholderia cepacia complex and P. aeruginosa 464,465,514,515 have unique clinical and prognostic significance. In CF patients, B. cepacia infection has been associated with increased morbidity and mortality [516][517][518] , while delayed acquisition of chronic P. aeruginosa infection may be associated with an improved long-term clinical outcome 519,520 . Person-to-person transmission of B. cepacia complex has been demonstrated among children 517 and adults 521 with CF in healthcare settings 464,522 , during various social contacts 523 , most notably attendance at camps for patients with CF 524 , and among siblings with CF 525 .
Successful infection control measures used to prevent transmission of respiratory secretions include segregation of CF patients from each other in ambulatory and hospital settings (including use of private rooms with separate showers), environmental decontamination of surfaces and equipment contaminated with respiratory secretions, elimination of group chest physiotherapy sessions, and disbanding of CF camps 97,526 . The Cystic Fibrosis Foundation published a consensus document with evidence-based recommendations for infection control practices for CF patients 20 .
# I.F. New Therapies Associated with Potentially Transmissible Infectious Agents
# I.F.1. Gene therapy.
Gene therapy has been attempted using a number of different viral vectors, including nonreplicating retroviruses, adenoviruses, adeno-associated viruses, and replication-competent strains of poxviruses. Unexpected adverse events have restricted the expansion of gene therapy protocols. The infectious hazards of gene therapy are theoretical at this time but require meticulous surveillance due to the possible occurrence of in vivo recombination and the subsequent emergence of a transmissible, genetically altered pathogen. Greatest concern attends the use of replication-competent viruses, especially vaccinia. As of the time of publication, no reports have described transmission of a vector virus from a gene therapy recipient to another individual, but surveillance is ongoing. Recommendations for monitoring infection control issues throughout the course of gene therapy trials have been published [527][528][529] .
# I.F.2. Infections transmitted through blood, organs and other tissues.
The potential hazard of transmitting infectious pathogens through biologic products is a small but ever-present risk, despite donor screening. Reported infections transmitted by transfusion or transplantation include West Nile virus infection 530 , cytomegalovirus infection 531 , Creutzfeldt-Jakob disease 230 , hepatitis C 532 , infections with Clostridium spp. 533 and group A streptococcus 534 , malaria 535 , babesiosis 536 , Chagas disease 537 , lymphocytic choriomeningitis 538 , and rabies 539,540 . Therefore, it is important to consider receipt of biologic products when evaluating patients for potential sources of infection.
# I.F.3. Xenotransplantation.
The transplantation of nonhuman cells, tissues, and organs into humans potentially exposes patients to zoonotic pathogens. Transmission of known zoonotic infections (e.g., trichinosis from porcine tissue) constitutes one concern, but also of concern is the possibility that transplantation of nonhuman cells, tissues, or organs may transmit previously unknown zoonotic infections (xenozoonoses) to immunosuppressed human recipients. Potential infections that might accompany transplantation of porcine organs have been described 541 . Guidelines from the U.S. Public Health Service address many infectious diseases and infection control issues that surround the developing field of xenotransplantation 542 ; work in this area is ongoing.
# Part II: Fundamental Elements Needed to Prevent Transmission of Infectious Agents in Healthcare Settings
# II.A. Healthcare System Components that Influence the Effectiveness of Precautions to Prevent Transmission
# II.A.1. Administrative measures.
Healthcare organizations can demonstrate a commitment to preventing transmission of infectious agents by incorporating infection control into the objectives of the organization's patient and occupational safety programs [543][544][545][546][547] . An infrastructure to guide, support, and monitor adherence to Standard and Transmission-Based Precautions 434,548,549 will facilitate fulfillment of the organization's mission and achievement of the Joint Commission on Accreditation of Healthcare Organizations' patient safety goal to decrease HAIs 550 . Policies and procedures that explain how Standard and Transmission-Based Precautions are applied, including systems used to identify and communicate information about patients with potentially transmissible infectious agents, are essential to ensure the success of these measures and may vary according to the characteristics of the organization. A key administrative measure is provision of fiscal and human resources for maintaining infection control and occupational health programs that are responsive to emerging needs. Specific components include bedside nurse 551 and infection prevention and control professional (ICP) staffing levels 552 , inclusion of ICPs in facility construction and design decisions 11 , clinical microbiology laboratory support 553,554 , adequate supplies and equipment including facility ventilation systems 11 , adherence monitoring 555 , assessment and correction of system failures that contribute to transmission 556,557 , and provision of feedback to healthcare personnel and senior administrators 434,548,549,558 . The positive influence of institutional leadership has been demonstrated repeatedly in studies of HCW adherence to recommended hand hygiene practices 176,177,434,548,549,[559][560][561][562][563][564] . Healthcare administrator involvement in infection control processes can improve administrators' awareness of the rationale and resource requirements for following recommended infection control practices. Several administrative factors may affect the transmission of infectious agents in healthcare settings: institutional culture, individual worker behavior, and the work environment. Each of these areas is suitable for performance improvement monitoring and incorporation into the organization's patient safety goals 543,544,546,565 .
# II.A.1.a. Scope of work and staffing needs for infection control professionals.
The effectiveness of infection surveillance and control programs in preventing nosocomial infections in United States hospitals was assessed by CDC through the Study on the Efficacy of Nosocomial Infection Control (SENIC Project), conducted 1970-76 566 . In a representative sample of US general hospitals, those with a trained infection control physician or microbiologist involved in an infection control program and at least one infection control nurse per 250 beds had a 32% lower rate of the four infections studied (CVC-associated bloodstream infections, ventilator-associated pneumonias, catheter-related urinary tract infections, and surgical site infections). Since that landmark study was published, the responsibilities of ICPs have expanded commensurate with the growing complexity of the healthcare system, the patient populations served, and the increasing numbers of medical procedures and devices used in all types of healthcare settings.
The scope of work of ICPs was first assessed in 1982 [567][568][569] by the Certification Board of Infection Control (CBIC) and has been reassessed every five years since that time 558,[570][571][572] . The findings of these task analyses have been used to develop and update the Infection Control Certification Examination, offered for the first time in 1983. With each survey, it is apparent that the role of the ICP is growing in complexity and scope, beyond traditional infection control activities in acute care hospitals. Activities currently assigned to ICPs in response to emerging challenges include:
1. surveillance and infection prevention at facilities other than acute care hospitals (e.g., ambulatory clinics, day surgery centers, long-term care facilities, rehabilitation centers, home care);
2. oversight of employee health services related to infection prevention (e.g., assessment of risk and administration of recommended treatment following exposure to infectious agents, tuberculosis screening, influenza vaccination, respiratory protection fit testing, and administration of other vaccines as indicated, such as smallpox vaccine in 2003);
3. preparedness planning for annual influenza outbreaks, pandemic influenza, SARS, and bioweapons attacks;
4. adherence monitoring for selected infection control practices;
5. oversight of risk assessment and implementation of prevention measures associated with construction and renovation;
6. prevention of transmission of MDROs;
7. evaluation of new medical products that could be associated with increased infection risk (e.g., intravenous infusion materials);
8. communication with the public, facility staff, and state and local health departments concerning infection control-related issues; and
9. participation in local and multi-center research projects 434,549,552,558,573,574 .
None of the CBIC job analyses addressed specific staffing requirements for the identified tasks, although the surveys did include information about hours worked; the 2001 survey included the number of ICPs assigned to the responding facilities 558 . There is agreement in the literature that 1 ICP per 250 acute care beds is no longer adequate to meet current infection control needs; a Delphi project that assessed staffing needs of infection control programs in the 21st century concluded that a ratio of 0.8 to 1.0 ICP per 100 occupied acute care beds is an appropriate level of staffing 552 (a worked comparison of these two ratios follows this paragraph). A survey of participants in the National Nosocomial Infections Surveillance (NNIS) system found the average daily census per ICP was 115 316 . Results of other studies have been similar: 3 per 500 beds for large acute care hospitals, 1 per 150-250 beds in long-term care facilities, and 1.56 per 250 in small rural hospitals 573,575 . The foregoing demonstrates that infection control staffing can no longer be based on patient census alone, but rather must be determined by the scope of the program, characteristics of the patient population, complexity of the healthcare system, tools available to assist personnel to perform essential tasks (e.g., electronic tracking and laboratory support for surveillance), and unique or urgent needs of the institution and community 552 . Furthermore, appropriate training is required to optimize the quality of work performed 558,572,576 .
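To make the arithmetic behind these ratios concrete, the following sketch compares the legacy 1-per-250-beds rule with the Delphi range of 0.8 to 1.0 ICP per 100 occupied beds. This is a minimal illustration only; the function name and the example hospital are assumptions for demonstration, and real staffing decisions depend on the programmatic factors listed above.

```python
def icp_staffing(licensed_beds: int, occupancy: float) -> dict:
    """Estimate ICP full-time equivalents two ways (illustrative only).

    legacy      -- SENIC-era rule: 1 ICP per 250 licensed beds
    delphi_low  -- 0.8 ICP per 100 occupied beds (Delphi project, ref 552)
    delphi_high -- 1.0 ICP per 100 occupied beds
    """
    occupied_beds = licensed_beds * occupancy
    return {
        "legacy": licensed_beds / 250,
        "delphi_low": 0.8 * occupied_beds / 100,
        "delphi_high": 1.0 * occupied_beds / 100,
    }

# Hypothetical 300-bed hospital running at 85% occupancy
estimate = icp_staffing(300, 0.85)
print(f"1-per-250 rule: {estimate['legacy']:.1f} FTEs")    # 1.2 FTEs
print(f"Delphi range:   {estimate['delphi_low']:.1f}-"
      f"{estimate['delphi_high']:.1f} FTEs")               # 2.0-2.5 FTEs
```

Under these assumptions the Delphi ratio roughly doubles the legacy estimate, consistent with the consensus that 1 ICP per 250 beds is no longer adequate.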
# II.A.1.a.i. Infection control nurse liaison.
Designating a bedside nurse on a patient care unit as an infection control liaison or "link nurse" is reported to be an effective adjunct to enhance infection control at the unit level [577][578][579][580][581][582] . Such individuals receive training in basic infection control and have frequent communication with the ICPs, but maintain their primary role as bedside caregiver on their units. The infection control nurse liaison increases awareness of infection control at the unit level. He or she is especially effective in implementing new policies or control interventions because of the rapport with individuals on the unit, an understanding of unit-specific challenges, and the ability to promote strategies that are most likely to be successful in that unit. This position is an adjunct to, not a replacement for, fully trained ICPs. Furthermore, infection control liaison nurses should not be counted when considering ICP staffing.
# II.A.1.b. Bedside nurse staffing.
There is increasing evidence that the level of bedside nurse staffing influences the quality of patient care 583,584 . If nursing staff levels are adequate, it is more likely that infection control practices, including hand hygiene and Standard and Transmission-Based Precautions, will be given appropriate attention and applied correctly and consistently 552 . A national multicenter study reported strong and consistent inverse relationships between nurse staffing and five adverse outcomes in medical patients, two of which were HAIs: urinary tract infections and pneumonia 583 . The association of nursing staff shortages with increased rates of HAIs has been demonstrated in several outbreaks in hospitals and long-term care settings, and with increased transmission of hepatitis C virus in dialysis units 22,418,551,[585][586][587][588][589][590][591][592][593][594][595][596][597] . In most cases, when staffing improved as part of a comprehensive control intervention, the outbreak ended or the HAI rate declined. In two studies 590,596 , the composition of the nursing staff ("pool" or "float" vs. regular staff nurses) influenced the rate of primary bloodstream infections, with an increased infection rate occurring when the proportion of regular nurses decreased and that of pool nurses increased.
# II.A.1.c. Clinical microbiology laboratory support.
The critical role of the clinical microbiology laboratory in infection control and healthcare epidemiology is well described 553,554,[598][599][600] and is supported by the Infectious Diseases Society of America policy statement on consolidation of clinical microbiology laboratories published in 2001 553 . The clinical microbiology laboratory contributes to preventing transmission of infectious diseases in healthcare settings by promptly detecting and reporting epidemiologically important organisms, identifying emerging patterns of antimicrobial resistance, and assisting in assessment of the effectiveness of recommended precautions to limit transmission during outbreaks 598 . Outbreaks of infections may be recognized first by laboratorians 162 . Healthcare organizations need to ensure the availability of the recommended scope and quality of laboratory services, a sufficient number of appropriately trained laboratory staff members, and systems to promptly communicate epidemiologically important results to those who will take action (e.g., providers of clinical care, infection control staff, healthcare epidemiologists, and infectious disease consultants) 601 .
As concerns about emerging pathogens and bioterrorism grow, the role of the clinical microbiology laboratory takes on even greater importance. For healthcare organizations that outsource microbiology laboratory services (e.g., ambulatory care, home care, LTCFs, smaller acute care hospitals), it is important to specify by contract the types of services (e.g., periodic institution-specific aggregate susceptibility reports) required to support infection control. Several key functions of the clinical microbiology laboratory are relevant to this guideline:
• Antimicrobial susceptibility testing and interpretation in accordance with current guidelines developed by the National Committee for Clinical Laboratory Standards (NCCLS), known as the Clinical and Laboratory Standards Institute (CLSI) since 2005 602 , for the detection of emerging resistance patterns 603,604 , and for the preparation, analysis, and distribution of periodic cumulative antimicrobial susceptibility summary reports [605][606][607] . While not required, clinical laboratories ideally should have access to rapid genotypic identification of bacteria and their antibiotic resistance genes 608 .
• Performance of surveillance cultures when appropriate (including retention of isolates for analysis) to assess patterns of infection transmission and effectiveness of infection control interventions at the facility or organization. Microbiologists assist in decisions concerning the indications for initiating and discontinuing active surveillance programs and optimize the use of laboratory resources.
• Molecular typing, on-site or outsourced, in order to investigate and control healthcare-associated outbreaks 609 .
• Application of rapid diagnostic tests to support clinical decisions involving patient treatment, room selection, and implementation of control measures, including barrier precautions and use of vaccine or chemoprophylaxis agents (e.g., influenza [610][611][612] , B. pertussis 613 , RSV 614,615 , and enteroviruses 616 ). The microbiologist provides guidance to limit rapid testing to clinical situations in which rapid results influence patient management decisions, as well as providing oversight of point-of-care testing performed by non-laboratory healthcare workers 617 .
• Detection and rapid reporting of epidemiologically important organisms, including those that are reportable to public health agencies.
• Implementation of a quality control program that ensures testing services are appropriate for the population served and stringently evaluated for sensitivity, specificity, applicability, and feasibility.
• Participation in a multidisciplinary team to develop and maintain an effective institutional program for the judicious use of antimicrobial agents 618,619 .
# II.A.2. Institutional safety culture and organizational characteristics.
Safety culture (or safety climate) refers to a work environment where a shared commitment to safety on the part of management and the workforce is understood and followed 557,620,621 . The authors of the Institute of Medicine report, To Err Is Human 543 , acknowledge that the causes of medical error are multifaceted but emphasize repeatedly the pivotal role of system failures and the benefits of a safety culture. A safety culture is created through 1) the actions management takes to improve patient and worker safety; 2) worker participation in safety planning; 3) the availability of appropriate protective equipment; 4) the influence of group norms regarding acceptable safety practices; and 5)
the organization's socialization process for new personnel. Safety and patient outcomes can be enhanced by improving or creating organizational characteristics within patient care units, as demonstrated by studies of surgical ICUs 622,623 . Each of these factors has a direct bearing on adherence to transmission prevention recommendations 257 . Measurement of an institutional culture of safety is useful for designing improvements in healthcare 624,625 . Several hospital-based studies have linked measures of safety culture with both employee adherence to safe practices and reduced exposures to blood and body fluids [626][627][628][629][630][631][632] . One study of hand hygiene practices concluded that improved adherence requires integration of infection control into the organization's safety culture 561 . Several hospitals that are part of the Veterans Administration Healthcare System have taken specific steps toward improving the safety culture, including implementing error-reporting mechanisms, performing root-cause analysis of identified problems, providing safety incentives, and educating employees [633][634][635] .
# II.A.3. Adherence of healthcare personnel to recommended guidelines.
Adherence to recommended infection control practices decreases transmission of infectious agents in healthcare settings 116,562,[636][637][638][639][640] . However, several observational studies have shown limited adherence to recommended practices by healthcare personnel 559,[640][641][642][643][644][645][646][647][648][649][650][651][652][653][654][655][656][657] . Observed adherence to universal precautions ranged from 43% to 89% 641,642,649,651,652 . However, the degree of adherence frequently depended on the practice that was assessed and, for glove use, the circumstances in which gloves were used. Appropriate glove use has ranged from a low of 15% 645 to a high of 82% 650 . However, 92% and 98% adherence with glove use have been reported during arterial blood gas collection and resuscitation, respectively, procedures in which there may be considerable blood contact 643,656 . Differences in observed adherence have been reported among occupational groups in the same healthcare facility 641 and between experienced and nonexperienced professionals 645 . In surveys of healthcare personnel, self-reported adherence was generally higher than that reported in observational studies. Furthermore, where an observational component was included with a self-reported survey, self-perceived adherence was often greater than observed adherence 657 . Among nurses and physicians, increasing years of experience are a negative predictor of adherence 645,651 . Education to improve adherence is the primary intervention that has been studied. While positive changes in knowledge and attitude have been demonstrated 640,658 , there often has been limited or no accompanying change in behavior 642,644 . Self-reported adherence is higher in groups that have received an educational intervention 630,659 . Educational interventions that incorporated videotaping and performance feedback were successful in improving adherence during the period of study; the long-term effect of these interventions is not known 654 . The use of videotape also served to identify system problems (e.g., communication and access to personal protective equipment) that otherwise may not have been recognized. Use of engineering controls and facility design concepts for improving adherence is gaining interest.
While introduction of automated sinks had a negative impact on consistent adherence to hand washing 660 , use of electronic monitoring and voice prompts to remind healthcare workers to perform hand hygiene, together with improved accessibility of hand hygiene products, increased adherence and contributed to a decrease in HAIs in one study 661 . More information is needed regarding how technology might improve adherence. Improving adherence to infection control practices requires a multifaceted approach that incorporates continuous assessment of both the individual and the work environment 559,561 . Using several behavioral theories, Kretzer and Larson concluded that a single intervention (e.g., a handwashing campaign or putting up new posters about transmission precautions) would likely be ineffective in improving healthcare personnel adherence 662 . Improvement requires that the organizational leadership make prevention an institutional priority and integrate infection control practices into the organization's safety culture 561 . A recent review of the literature concluded that variations in organizational factors (e.g., safety climate, policies and procedures, education and training) and individual factors (e.g., knowledge, perceptions of risk, past experience) were determinants of adherence to infection control guidelines for protection against SARS and other respiratory pathogens 257 .
# II.B. Surveillance for Healthcare-Associated Infections (HAIs)
Surveillance is an essential tool for case-finding of single patients or clusters of patients who are infected or colonized with epidemiologically important organisms (e.g., susceptible bacteria such as S. aureus, S. pyogenes [Group A streptococcus] or Enterobacter-Klebsiella spp.; MRSA, VRE, and other MDROs; C. difficile; RSV; influenza virus) for which transmission-based precautions may be required. Surveillance is defined as the ongoing, systematic collection, analysis, interpretation, and dissemination of data regarding a health-related event for use in public health action to reduce morbidity and mortality and to improve health 663 . The work of Ignaz Semmelweis describing the role of person-to-person transmission in puerperal sepsis is the earliest example of the use of surveillance data to reduce transmission of infectious agents 664 . Surveillance of both process measures and the infection rates to which they are linked is important for evaluating the effectiveness of infection prevention efforts and identifying indications for change 555,[665][666][667][668] . The Study on the Efficacy of Nosocomial Infection Control (SENIC) found that different combinations of infection control practices resulted in reduced rates of nosocomial surgical site infections, pneumonia, urinary tract infections, and bacteremia in acute care hospitals 566 ; however, surveillance was the only component essential for reducing all four types of HAIs. Although a similar study has not been conducted in other healthcare settings, a role for surveillance and the need for novel strategies have been described in LTCFs 398,434,669,670 and in home care [470][471][472][473] . The essential elements of a surveillance system are: 1) standardized definitions; 2) identification of patient populations at risk for infection; 3) statistical analysis (e.g., risk adjustment, calculation of rates using appropriate denominators, and trend analysis using methods such as statistical process control charts; see the sketch following this list); and 4) feedback of results to the primary caregivers [671][672][673][674][675][676] .
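As a minimal, purely illustrative sketch of elements 3 and 4 (calculating rates with appropriate denominators and examining trends with a statistical process control u-chart), the code below uses hypothetical monthly central line-associated bloodstream infection (CLABSI) counts; the function names and data are assumptions for demonstration, not drawn from any cited surveillance system.

```python
from math import sqrt

def rate_per_1000(infections: int, device_days: int) -> float:
    """Incidence density: infections per 1000 device-days."""
    return 1000 * infections / device_days

def u_chart(counts: list[int], device_days: list[int]):
    """Pooled rate and 3-sigma u-chart limits (per 1000 device-days)
    for monthly counts with varying denominators."""
    u_bar = sum(counts) / sum(device_days)   # pooled rate per device-day
    limits = []
    for n in device_days:
        half_width = 3 * sqrt(u_bar / n)     # Poisson-based 3-sigma width
        limits.append((1000 * max(0.0, u_bar - half_width),
                       1000 * (u_bar + half_width)))
    return 1000 * u_bar, limits

# Hypothetical ICU data: CLABSIs and central-line days for five months
counts = [3, 1, 2, 5, 2]
line_days = [620, 580, 650, 600, 640]
center, limits = u_chart(counts, line_days)
print(f"pooled rate: {center:.1f} per 1000 line-days")   # ~4.2
for month, (c, n, (lcl, ucl)) in enumerate(zip(counts, line_days, limits), 1):
    flag = "out of control" if not lcl <= rate_per_1000(c, n) <= ucl else "in control"
    print(f"month {month}: {rate_per_1000(c, n):.1f} ({flag})")
```

Under these assumed data, a monthly rate above the upper control limit would prompt the kind of systematic epidemiologic investigation described below.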
Data gathered through surveillance of high-risk populations, device use, procedures, and/or facility locations (e.g., ICUs) are useful for detecting transmission trends [671][672][673] . Identification of clusters of infections should be followed by a systematic epidemiologic investigation to determine commonalities in persons, places, and time and to guide implementation of interventions and evaluation of the effectiveness of those interventions. Targeted surveillance based on the highest-risk areas or patients has been preferred over facility-wide surveillance for the most effective use of resources 673,676 . However, surveillance for certain epidemiologically important organisms may need to be facility-wide. Surveillance methods will continue to evolve as healthcare delivery systems change 392,677 and user-friendly electronic tools become more widely available for electronic tracking and trend analysis 674,678,679 . Individuals with experience in healthcare epidemiology and infection control should be involved in selecting software packages for data aggregation and analysis to assure that the need for efficient and accurate HAI surveillance will be met. Effective surveillance is increasingly important as legislation requiring public reporting of HAI rates is passed and states work to develop effective systems to support such legislation 680 .
# II.C. Education of HCWs, Patients, and Families
Education and training of healthcare personnel are a prerequisite for ensuring that policies and procedures for Standard and Transmission-Based Precautions are understood and practiced. Understanding the scientific rationale for the precautions will allow HCWs to apply procedures correctly, as well as safely modify precautions based on changing requirements, resources, or healthcare settings 14,655,[681][682][683][684][685][686][687][688] . In one study, the likelihood of HCWs developing SARS was strongly associated with less than 2 hours of infection control training and lack of understanding of infection control procedures 689 . Education about the important role of vaccines (e.g., influenza, measles, varicella, pertussis, pneumococcal) in protecting healthcare personnel, their patients, and family members can help improve vaccination rates [690][691][692][693] . Education on the principles and practices for preventing transmission of infectious agents should begin during training in the health professions and be provided to anyone who has an opportunity for contact with patients or medical equipment (e.g., nursing and medical staff; therapists and technicians, including respiratory, physical, occupational, radiology, and cardiology personnel; phlebotomists; housekeeping and maintenance staff; and students). In healthcare facilities, education and training on Standard and Transmission-Based Precautions are typically provided at the time of orientation and should be repeated as necessary to maintain competency; updated education and training are necessary when policies and procedures are revised or when there is a special circumstance, such as an outbreak that requires modification of current practice or adoption of new recommendations. Education and training materials and methods appropriate to the HCW's level of responsibility, individual learning habits, and language needs can improve the learning experience 658,[694][695][696][697][698][699][700][701][702] .
Education programs for healthcare personnel have been associated with sustained improvement in adherence to best practices and a related decrease in device-associated HAIs in teaching and non-teaching settings 639,703 . Several studies have shown that, in addition to targeted education to improve specific practices, periodic assessment and feedback of HCWs' knowledge of, and adherence to, recommended practices are necessary to achieve the desired changes and to identify continuing education needs 562,[704][705][706][707][708] . Effectiveness of this approach for isolation practices has been demonstrated for control of RSV 116,684 . Patients, family members, and visitors can be partners in preventing transmission of infections in healthcare settings 9,42,[709][710][711] . Information about Standard Precautions, especially hand hygiene, Respiratory Hygiene/Cough Etiquette, vaccination (especially against influenza), and other routine infection prevention strategies may be incorporated into patient information materials that are provided upon admission to the healthcare facility. Additional information about Transmission-Based Precautions is best provided at the time they are initiated. Fact sheets, pamphlets, and other printed material may include information on the rationale for the additional precautions, risks to household members, room assignment for Transmission-Based Precautions purposes, explanation about the use of personal protective equipment by HCWs, and directions for use of such equipment by family members and visitors. Such information may be particularly helpful in the home environment, where household members often have primary responsibility for adherence to recommended infection control practices. Healthcare personnel must be available and prepared to explain this material and answer questions as needed.
# II.D. Hand Hygiene
Hand hygiene has been cited frequently as the single most important practice to reduce the transmission of infectious agents in healthcare settings 559,712,713 and is an essential element of Standard Precautions. The term "hand hygiene" includes both handwashing with either plain or antiseptic-containing soap and water, and use of alcohol-based products (gels, rinses, foams) that do not require the use of water. In the absence of visible soiling of hands, approved alcohol-based products for hand disinfection are preferred over antimicrobial or plain soap and water because of their superior microbicidal activity, reduced drying of the skin, and convenience 559 . Improved hand hygiene practices have been associated with a sustained decrease in the incidence of MRSA and VRE infections, primarily in the ICU 561,562,[714][715][716][717] . The scientific rationale, indications, methods, and products for hand hygiene are summarized in other publications 559,717 . The effectiveness of hand hygiene can be reduced by the type and length of fingernails 559,718,719 . Individuals wearing artificial nails have been shown to harbor more pathogenic organisms, especially gram-negative bacilli and yeasts, on the nails and in the subungual area than those with native nails 720,721 .
In 2002, CDC/HICPAC recommended (Category IA) that artificial fingernails and extenders not be worn by healthcare personnel who have contact with high-risk patients (e.g., those in ICUs or ORs) due to the association with outbreaks of gram-negative bacillus and candidal infections, as confirmed by molecular typing of isolates 30,31,559,[722][723][724][725] . The need to restrict the wearing of artificial fingernails by all healthcare personnel who provide direct patient care or by healthcare personnel who have contact with other high-risk groups (e.g., oncology or cystic fibrosis patients) has not been studied, but such restriction has been recommended by some experts 20 . At this time such decisions are at the discretion of an individual facility's infection control program. There is less evidence that jewelry affects the quality of hand hygiene. Although hand contamination with potential pathogens is increased with ring-wearing 559,726 , no studies have related this practice to HCW-to-patient transmission of pathogens.
# II.E. Personal Protective Equipment (PPE) for Healthcare Personnel
PPE refers to a variety of barriers and respirators used alone or in combination to protect mucous membranes, airways, skin, and clothing from contact with infectious agents. The selection of PPE is based on the nature of the patient interaction and/or the likely mode(s) of transmission. Guidance on the use of PPE is discussed in Part III. A suggested procedure for donning and removing PPE that will prevent skin or clothing contamination is presented in the Figure. Designated containers for used disposable or reusable PPE should be placed in a location that is convenient to the site of removal to facilitate disposal and containment of contaminated materials. Hand hygiene is always the final step after removing and disposing of PPE. The following sections highlight the primary uses and methods for selecting this equipment.
# II.E.1. Gloves.
Gloves are used to prevent contamination of healthcare personnel hands when 1) anticipating direct contact with blood or body fluids, mucous membranes, nonintact skin, and other potentially infectious material; 2) having direct contact with patients who are colonized or infected with pathogens transmitted by the contact route (e.g., VRE, MRSA, RSV) 559,727,728 ; or 3) handling or touching visibly or potentially contaminated patient care equipment and environmental surfaces 72,73,559 . Gloves can protect both patients and healthcare personnel from exposure to infectious material that may be carried on hands 73 . The extent to which gloves will protect healthcare personnel from transmission of bloodborne pathogens (e.g., HIV, HBV, HCV) following a needlestick or other puncture that penetrates the glove barrier has not been determined. Although gloves may reduce the volume of blood on the external surface of a sharp by 46-86% 729 , the residual blood in the lumen of a hollow-bore needle would not be affected; therefore, the effect on transmission risk is unknown. Gloves manufactured for healthcare purposes are subject to FDA evaluation and clearance 730 . Nonsterile disposable medical gloves made of a variety of materials (e.g., latex, vinyl, nitrile) are available for routine patient care 731 . The selection of glove type for non-surgical use is based on a number of factors, including the task that is to be performed, anticipated contact with chemicals and chemotherapeutic agents, latex sensitivity, sizing, and facility policies for creating a latex-free environment 17,[732][733][734] .
For contact with blood and body fluids during non-surgical patient care, a single pair of gloves generally provides adequate barrier protection 734 . However, there is considerable variability among gloves; both the quality of the manufacturing process and the type of material influence their barrier effectiveness 735 . While there is little difference in the barrier properties of unused intact gloves 736 , studies have shown repeatedly that vinyl gloves have higher failure rates than latex or nitrile gloves when tested under simulated and actual clinical conditions 731,[735][736][737][738] . For this reason, either latex or nitrile gloves are preferable for clinical procedures that require manual dexterity and/or will involve more than brief patient contact. It may be necessary to stock gloves in several sizes. Heavier, reusable utility gloves are indicated for non-patient care activities, such as handling or cleaning contaminated equipment or surfaces 11,14,739 . During patient care, transmission of infectious organisms can be reduced by adhering to the principles of working from "clean" to "dirty" and confining or limiting contamination to surfaces that are directly needed for patient care. It may be necessary to change gloves during the care of a single patient to prevent cross-contamination of body sites 559,740 . It also may be necessary to change gloves if the patient interaction involves touching portable computer keyboards or other mobile equipment that is transported from room to room. Discarding gloves between patients is necessary to prevent transmission of infectious material. Gloves must not be washed for subsequent reuse because microorganisms cannot be removed reliably from glove surfaces and continued glove integrity cannot be ensured. Furthermore, glove reuse has been associated with transmission of MRSA and gram-negative bacilli [741][742][743] . When gloves are worn in combination with other PPE, they are put on last. Gloves that fit snugly around the wrist are preferred for use with an isolation gown because they will cover the gown cuff and provide a more reliable continuous barrier for the arms, wrists, and hands. Gloves that are removed properly will prevent hand contamination (Figure). Hand hygiene following glove removal further ensures that the hands will not carry potentially infectious material that might have penetrated through unrecognized tears or that could contaminate the hands during glove removal 559,728,741 .
# II.E.2. Isolation gowns.
Isolation gowns are used as specified by Standard and Transmission-Based Precautions to protect the HCW's arms and exposed body areas and prevent contamination of clothing with blood, body fluids, and other potentially infectious material 24,88,262,[744][745][746] . The need for and type of isolation gown selected is based on the nature of the patient interaction, including the anticipated degree of contact with infectious material and potential for blood and body fluid penetration of the barrier. The wearing of isolation gowns and other protective apparel is mandated by the OSHA Bloodborne Pathogens Standard 739 . Clinical and laboratory coats or jackets worn over personal clothing for comfort and/or purposes of identity are not considered PPE. When applying Standard Precautions, an isolation gown is worn only if contact with blood or body fluid is anticipated.
However, when Contact Precautions are used (i.e., to prevent transmission of an infectious agent that is not interrupted by Standard Precautions alone and that is associated with environmental contamination), donning of both gown and gloves upon room entry is indicated to address unintentional contact with contaminated environmental surfaces 54,72,73,88 . The routine donning of isolation gowns upon entry into an intensive care unit or other high-risk area does not prevent or influence potential colonization or infection of patients in those areas 365,[747][748][749][750] . Isolation gowns are always worn in combination with gloves, and with other PPE when indicated. Gowns are usually the first piece of PPE to be donned. Full coverage of the arms and body front, from neck to the mid-thigh or below, will ensure that clothing and exposed upper body areas are protected. Several gown sizes should be available in a healthcare facility to ensure appropriate coverage for staff members. Isolation gowns should be removed before leaving the patient care area to prevent possible contamination of the environment outside the patient's room. Isolation gowns should be removed in a manner that prevents contamination of clothing or skin (Figure). The outer, "contaminated" side of the gown is turned inward and rolled into a bundle, and then discarded into a designated container for waste or linen to contain contamination.
# II.E.3. Face protection: masks, goggles, face shields.
# II.E.3.a. Masks.
Masks are used for three primary purposes in healthcare settings: 1) placed on healthcare personnel to protect them from contact with infectious material from patients (e.g., respiratory secretions and sprays of blood or body fluids), consistent with Standard Precautions and Droplet Precautions; 2) placed on healthcare personnel when engaged in procedures requiring sterile technique, to protect patients from exposure to infectious agents carried in a healthcare worker's mouth or nose; and 3) placed on coughing patients to limit potential dissemination of infectious respiratory secretions from the patient to others (i.e., Respiratory Hygiene/Cough Etiquette). Masks may be used in combination with goggles to protect the mouth, nose, and eyes, or a face shield may be used instead of a mask and goggles to provide more complete protection for the face, as discussed below. Masks should not be confused with particulate respirators, which are used to prevent inhalation of small particles that may contain infectious agents transmitted via the airborne route, as described below. The mucous membranes of the mouth, nose, and eyes are susceptible portals of entry for infectious agents, as are other skin surfaces if skin integrity is compromised (e.g., by acne or dermatitis) 66,[751][752][753][754] . Therefore, use of PPE to protect these body sites is an important component of Standard Precautions. The protective effect of masks for exposed healthcare personnel has been demonstrated 93,113,755,756 . Procedures that generate splashes or sprays of blood, body fluids, secretions, or excretions (e.g., endotracheal suctioning, bronchoscopy, invasive vascular procedures) require either a face shield (disposable or reusable) or mask and goggles 93-95,96,113,115,262,739,757 . The wearing of masks, eye protection, and face shields in specified circumstances when blood or body fluid exposures are likely to occur is mandated by the OSHA Bloodborne Pathogens Standard 739 .
Appropriate PPE should be selected based on the anticipated level of exposure. Two mask types are available for use in healthcare settings: surgical masks, which are cleared by the FDA and required to have fluid-resistant properties, and procedure or isolation masks 758 . No studies have been published that compare mask types to determine whether one mask type provides better protection than another. Since procedure/isolation masks are not regulated by the FDA, there may be more variability in quality and performance than with surgical masks. Masks come in various shapes (e.g., molded and non-molded), sizes, filtration efficiencies, and methods of attachment (e.g., ties, elastic, ear loops). Healthcare facilities may find that different types of masks are needed to meet individual healthcare personnel needs.
# II.E.3.b. Goggles, face shields.
Guidance on eye protection for infection control has been published 759 . The eye protection chosen for specific work situations (e.g., goggles or face shield) depends upon the circumstances of exposure, other PPE used, and personal vision needs. Personal eyeglasses and contact lenses are NOT considered adequate eye protection (NIOSH Eye Protection for Infection Control (https://www.cdc.gov/niosh/topics/eye/eye-infectious.html) [Current version of this document may differ from original.]). NIOSH states that eye protection must be comfortable, allow for sufficient peripheral vision, and be adjustable to ensure a secure fit. It may be necessary to provide several different types, styles, and sizes of protective equipment. Indirectly-vented goggles with a manufacturer's anti-fog coating may provide the most reliable practical eye protection from splashes, sprays, and respiratory droplets from multiple angles. Newer styles of goggles may provide better indirect airflow properties to reduce fogging, as well as better peripheral vision and more size options for fitting goggles to different workers. Many styles of goggles fit adequately over prescription glasses with minimal gaps. While effective as eye protection, goggles do not provide splash or spray protection to other parts of the face. The role of goggles, in addition to a mask, in preventing exposure to infectious agents transmitted via respiratory droplets has been studied only for RSV. Reports published in the mid-1980s demonstrated that eye protection reduced occupational transmission of RSV 760,761 . Whether this was due to preventing hand-eye contact or respiratory droplet-eye contact has not been determined. However, subsequent studies demonstrated that RSV transmission is effectively prevented by adherence to Standard plus Contact Precautions and that for this virus routine use of goggles is not necessary 24,116,117,684,762 . It is important to remind healthcare personnel that even if Droplet Precautions are not recommended for a specific respiratory tract pathogen, protection for the eyes, nose, and mouth by using a mask and goggles, or a face shield alone, is necessary when it is likely that there will be a splash or spray of any respiratory secretions or other body fluids as defined in Standard Precautions. Disposable or non-disposable face shields may be used as an alternative to goggles 759 . As compared with goggles, a face shield can provide protection to other facial areas in addition to the eyes.
Face shields extending from chin to crown provide better face and eye protection from splashes and sprays; face shields that wrap around the sides may reduce splashes around the edge of the shield. Removal of a face shield, goggles, and mask can be performed safely after gloves have been removed and hand hygiene performed. The ties, ear pieces, and/or headband used to secure the equipment to the head are considered "clean" and therefore safe to touch with bare hands. The front of a mask, goggles, and face shield are considered contaminated (Figure). # II.E.4. Respiratory protection. The subject of respiratory protection as it applies to preventing transmission of airborne infectious agents, including the need for and frequency of fit-testing, is under scientific review and was the subject of a CDC workshop in 2004 763 . Respiratory protection currently requires the use of a respirator with N95 or higher filtration to prevent inhalation of infectious particles. Information about respirators and respiratory protection programs is summarized in the Guideline for Preventing Transmission of Mycobacterium tuberculosis in Health-care Settings, 2005 (CDC.MMWR 2005; 54: RR-17 12 ). Respiratory protection is broadly regulated by OSHA under the general industry standard for respiratory protection (29 CFR 1910.134) 764 , which requires that U.S. employers in all employment settings implement a program to protect employees from inhalation of toxic materials. OSHA program components include medical clearance to wear a respirator; provision and use of appropriate respirators, including fit-tested NIOSH-certified N95 and higher particulate filtering respirators; education on respirator use; and periodic reevaluation of the respiratory protection program. When selecting particulate respirators, models with inherently good fit characteristics (i.e., those expected to provide protection factors of 10 or more to 95% of wearers) are preferred and could theoretically relieve the need for fit testing 765,766 . Issues pertaining to respiratory protection remain the subject of ongoing debate. Information on various types of respirators may be found at [This link is no longer active: www.cdc.gov/niosh/npptl/respirators/disp_part/particlist.html. Similar information may be found at NIOSH Respirators (https://www.cdc.gov/niosh/topics/respirators).] and in published studies 765,767,768 . A user-seal check (formerly called a "fit check") should be performed by the wearer of a respirator each time a respirator is donned, to minimize air leakage around the facepiece 769 . The optimal frequency of fit-testing has not been determined; re-testing may be indicated if there is a change in the facial features of the wearer, onset of a medical condition that would affect respiratory function in the wearer, or a change in the model or size of the initially assigned respirator 12 . Respiratory protection was first recommended for preventing exposure of U.S. healthcare personnel to M. tuberculosis in 1989. That recommendation has been maintained in two successive revisions of the Guidelines for Prevention of Transmission of Tuberculosis in Hospitals and other Healthcare Settings 12,126 .
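The fit-testing discussion above can be made concrete with the arithmetic OSHA uses for quantitative fit testing. The sketch below is illustrative only: the particle counts are hypothetical, and the rule shown (per-exercise fit factor computed as ambient particle concentration divided by in-mask concentration, an overall fit factor taken as the harmonic mean of the exercise values, and a pass level of 100 for half-facepiece respirators such as N95s) comes from 29 CFR 1910.134, not from a requirement stated in this guideline.

```python
# Illustrative sketch of quantitative fit-test arithmetic under
# OSHA 29 CFR 1910.134 (not part of this guideline). Readings are hypothetical.

def exercise_fit_factor(ambient: float, in_mask: float) -> float:
    """Fit factor for one test exercise: ambient / in-mask particle concentration."""
    return ambient / in_mask

def overall_fit_factor(factors: list[float]) -> float:
    """Overall fit factor: harmonic mean of the individual exercise fit factors."""
    return len(factors) / sum(1.0 / f for f in factors)

# Hypothetical (ambient, in-mask) particle counts for four exercises:
readings = [(5000, 20), (5000, 35), (5000, 25), (5000, 50)]
factors = [exercise_fit_factor(a, m) for a, m in readings]

PASS_LEVEL = 100  # OSHA pass level for half-facepiece respirators, incl. N95s
overall = overall_fit_factor(factors)
print(f"Overall fit factor {overall:.0f}: {'PASS' if overall >= PASS_LEVEL else 'FAIL'}")
```

Note that a respirator can pass individual exercises and still fail overall, because the harmonic mean is pulled down sharply by any exercise with substantial leakage.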
The incremental benefit from respirator use, in addition to administrative and engineering controls (i.e., AIIRs, early recognition of patients likely to have tuberculosis and prompt placement in an AIIR, and maintenance of a patient with suspected tuberculosis in an AIIR until no longer infectious), for preventing transmission of airborne infectious agents (e.g., M. tuberculosis) is undetermined. Although some studies have demonstrated effective prevention of M. tuberculosis transmission in hospitals where surgical masks, instead of respirators, were used in conjunction with other administrative and engineering controls 637,770,771 , CDC currently recommends N95 or higher level respirators for personnel exposed to patients with suspected or confirmed tuberculosis. Currently, this is also true for other diseases that could be transmitted through the airborne route, including SARS 262 and smallpox 108,129,772 , until inhalational transmission is better defined or healthcare-specific protective equipment more suitable for preventing infection is developed. Respirators are also currently recommended to be worn during the performance of aerosol-generating procedures (e.g., intubation, bronchoscopy, suctioning) on patients with SARS-CoV infection, avian influenza, and pandemic influenza (see Appendix A). Although Airborne Precautions are recommended for preventing airborne transmission of measles and varicella-zoster viruses, there are no data upon which to base a recommendation for respiratory protection to protect susceptible personnel against these two infections; transmission of varicella-zoster virus has been prevented among pediatric patients using negative pressure isolation alone 773 . Whether respiratory protection (i.e., wearing a particulate respirator) would enhance protection from these viruses has not been studied. Since the majority of healthcare personnel have natural or acquired immunity to these viruses, only immune personnel generally care for patients with these infections [774][775][776][777] . Although there is no evidence to suggest that masks are not adequate to protect healthcare personnel in these settings, for purposes of consistency and simplicity, or because of difficulties in ascertaining immunity, some facilities may require the use of respirators for entry into all AIIRs, regardless of the specific infectious agent. Procedures for safe removal of respirators are provided (Figure). In some healthcare settings, particulate respirators used to provide care for patients with M. tuberculosis are reused by the same HCW. This is an acceptable practice provided the respirator is not damaged or soiled, the fit is not compromised by a change in shape, and the respirator has not been contaminated with blood or body fluids. There are no data on which to base a recommendation for the length of time a respirator may be reused. # II.F. Safe Work Practices to Prevent Exposure to Bloodborne Pathogens # II.F.1. Prevention of needlesticks and other sharps-related injuries. Injuries due to needles and other sharps have been associated with transmission of HBV, HCV and HIV to healthcare personnel 778,779 . The prevention of sharps injuries has always been an essential element of Universal and now Standard Precautions 1, 780 . These include measures to handle needles and other sharp devices in a manner that will prevent injury to the user and to others who may encounter the device during or after a procedure.
These measures apply to routine patient care and do not address the prevention of sharps injuries and other blood exposures during surgical and other invasive procedures, which are addressed elsewhere [781][782][783][784][785] . Since 1991, when OSHA first issued its Bloodborne Pathogens Standard to protect healthcare personnel from blood exposure, the focus of regulatory and legislative activity has been on implementing a hierarchy of control measures. This has included focusing attention on removing sharps hazards through the development and use of engineering controls. The federal Needlestick Safety and Prevention Act, signed into law in November 2000, authorized OSHA's revision of its Bloodborne Pathogens Standard to more explicitly require the use of safety-engineered sharp devices 786 . CDC has provided guidance on sharps injury prevention 787,788 , including for the design, implementation and evaluation of a comprehensive sharps injury prevention program 789 . # II.F.2. Prevention of mucous membrane contact. Exposure of mucous membranes of the eyes, nose and mouth to blood and body fluids has been associated with the transmission of bloodborne viruses and other infectious agents to healthcare personnel 66, 752, 754, 779 . The prevention of mucous membrane exposures has always been an element of Universal and now Standard Precautions for routine patient care 1, 753 and is subject to OSHA bloodborne pathogen regulations. Safe work practices, in addition to wearing PPE, are used to protect mucous membranes and non-intact skin from contact with potentially infectious material. These include keeping gloved and ungloved hands that are contaminated from touching the mouth, nose, eyes, or face; and positioning patients to direct sprays and splatter away from the face of the caregiver. Careful placement of PPE before patient contact will help avoid the need to make PPE adjustments and possible face or mucous membrane contamination during use. In areas where the need for resuscitation is unpredictable, mouthpieces, pocket resuscitation masks with one-way valves, and other ventilation devices provide an alternative to mouth-to-mouth resuscitation, preventing exposure of the caregiver's nose and mouth to oral and respiratory fluids during the procedure. # II.F.2.a. Precautions during aerosol-generating procedures. The performance of procedures that can generate small particle aerosols (aerosol-generating procedures), such as bronchoscopy, endotracheal intubation, and open suctioning of the respiratory tract, has been associated with transmission of infectious agents to healthcare personnel, including M. tuberculosis 790 , SARS-CoV 93,94,98 and N. meningitidis 95 . Protection of the eyes, nose and mouth, in addition to gown and gloves, is recommended during performance of these procedures in accordance with Standard Precautions. Use of a particulate respirator is recommended during aerosol-generating procedures when the aerosol is likely to contain M. tuberculosis, SARS-CoV, or avian or pandemic influenza viruses. # II.G. Patient Placement # II.G.1. Hospitals and long-term care settings. Options for patient placement include single patient rooms, two patient rooms, and multi-bed wards. Of these, single patient rooms are preferred when there is a concern about transmission of an infectious agent.
Although some studies have failed to demonstrate the efficacy of single patient rooms to prevent HAIs 791 , other published studies, including one commissioned by the American Institute of Architects and the Facility Guidelines Institute, have documented a beneficial relationship between private rooms and reduction in infectious and noninfectious adverse patient outcomes 792,793 . The AIA notes that private rooms are the trend in hospital planning and design. However, most hospitals and long-term care facilities have multi-bed rooms and must consider many competing priorities when determining the appropriate room placement for patients (e.g., reason for admission; patient characteristics, such as age, gender, mental status; staffing needs; family requests; psychosocial factors; reimbursement concerns). In the absence of obvious infectious diseases that require specified airborne infection isolation rooms (e.g., tuberculosis, SARS, chickenpox), the risk of transmission of infectious agents is not always considered when making placement decisions. When there are only a limited number of single-patient rooms, it is prudent to prioritize them for those patients who have conditions that facilitate transmission of infectious material to other patients (e.g., draining wounds, stool incontinence, uncontained secretions) and for those who are at increased risk of acquisition and adverse outcomes resulting from HAI (e.g., immunosuppression, open wounds, indwelling catheters, anticipated prolonged length of stay, total dependence on HCWs for activities of daily living) 15,24,43,430,794,795 . Single-patient rooms are always indicated for patients placed on Airborne Precautions and in a Protective Environment and are preferred for patients who require Contact or Droplet Precautions 23,24,410,435,796,797 . During a suspected or proven outbreak caused by a pathogen whose reservoir is the gastrointestinal tract, use of single patient rooms with private bathrooms limits opportunities for transmission, especially when the colonized or infected patient has poor personal hygiene habits, fecal incontinence, or cannot be expected to assist in maintaining procedures that prevent transmission of microorganisms (e.g., infants, children, and patients with altered mental status or developmental delay). In the absence of continued transmission, it is not necessary to provide a private bathroom for patients colonized or infected with enteric pathogens as long as personal hygiene practices and Standard Precautions, especially hand hygiene and appropriate environmental cleaning, are maintained. Assignment of a dedicated commode to a patient, and cleaning and disinfecting fixtures and equipment that may have fecal contamination (e.g., bathrooms, commodes 798 , scales used for weighing diapers) and the adjacent surfaces with appropriate agents, may be especially important when a single-patient room cannot be used, since environmental contamination with intestinal tract pathogens is likely from both continent and incontinent patients 54,799 . Results of several studies to determine the benefit of a single-patient room to prevent transmission of Clostridium difficile are inconclusive 167,[800][801][802] . Some studies have shown that being in the same room with a colonized or infected patient is not necessarily a risk factor for transmission 791,[803][804][805] . However, for children, the risk of healthcare-associated diarrhea is increased with the increased number of patients per room 806 .
Thus, patient factors are important determinants of infection transmission risks, and the need for a single-patient room and/or private bathroom for any patient is best determined on a case-by-case basis. Cohorting is the practice of grouping together patients who are colonized or infected with the same organism to confine their care to one area and prevent contact with other patients. Cohorts are created based on clinical diagnosis, microbiologic confirmation when available, epidemiology, and mode of transmission of the infectious agent. It is generally preferred not to place severely immunosuppressed patients in rooms with other patients. Cohorting has been used extensively for managing outbreaks of MDROs including MRSA 22,807 , VRE 638,808,809 , MDR-ESBLs 810 ; Pseudomonas aeruginosa 29 ; methicillin-susceptible Staphylococcus aureus 811 ; RSV 812,813 ; adenovirus keratoconjunctivitis 814 ; rotavirus 815 ; and SARS 816 . Modeling studies provide additional support for cohorting patients to control outbreaks [817][818][819] . However, cohorting often is implemented only after routine infection control measures have failed to control an outbreak. Assigning or cohorting healthcare personnel to care only for patients infected or colonized with a single target pathogen limits further transmission of the target pathogen to uninfected patients 740,819 but is difficult to achieve in the face of current staffing shortages in hospitals 583 and residential healthcare sites [820][821][822] . However, when continued transmission is occurring after implementing routine infection control measures and creating patient cohorts, cohorting of healthcare personnel may be beneficial. During the seasons when RSV, human metapneumovirus 823 , parainfluenza, influenza, other respiratory viruses 824 , and rotavirus are circulating in the community, cohorting based on the presenting clinical syndrome is often a priority in facilities that care for infants and young children 825 . For example, during the respiratory virus season, infants may be cohorted based solely on the clinical diagnosis of bronchiolitis due to the logistical difficulties and costs associated with requiring microbiologic confirmation prior to room placement, and the predominance of RSV during most of the season. However, when available, single patient rooms are always preferred, since a common clinical presentation (e.g., bronchiolitis) can be caused by more than one infectious agent 823,824,826 . Furthermore, the inability of infants and children to contain body fluids, and the close physical contact that occurs during their care, increases infection transmission risks for patients and personnel in this setting 24,795 . # II.G.2. Ambulatory settings. Patients actively infected with or incubating transmissible infectious diseases are seen frequently in ambulatory settings (e.g., outpatient clinics, physicians' offices, emergency departments) and potentially expose healthcare personnel and other patients, family members and visitors 21,34,127,135,142,827 . In response to the global outbreak of SARS in 2003 and in preparation for pandemic influenza, healthcare providers working in outpatient settings are urged to implement source containment measures (e.g., asking coughing patients to wear a surgical mask or cover their coughs with tissues) to prevent transmission of respiratory infections, beginning at the point of initial patient encounter 9,262,828 , as described below in section III.A.1.a.
Signs can be posted at the entrance to facilities or at the reception or registration desk requesting that the patient or individuals accompanying the patient promptly inform the receptionist if there are symptoms of a respiratory infection (e.g., cough, flu-like illness, increased production of respiratory secretions). The presence of diarrhea, skin rash, or known or suspected exposure to a transmissible disease (e.g., measles, pertussis, chickenpox, tuberculosis) also could be added. Placement of potentially infectious patients without delay in an examination room limits the number of exposed individuals, e.g., in the common waiting area. In waiting areas, maintaining a distance between symptomatic and non-symptomatic patients (e.g., >3 feet), in addition to source control measures, may limit exposures. However, infections transmitted via the airborne route (e.g., M. tuberculosis, measles, chickenpox) require additional precautions 12,125,829 . Patients suspected of having such an infection can wear a surgical mask for source containment, if tolerated, and should be placed in an examination room, preferably an AIIR, as soon as possible. If this is not possible, having the patient wear a mask and segregate him/herself from other patients in the waiting area will reduce opportunities to expose others. Since the person(s) accompanying the patient also may be infectious, application of the same infection control precautions may need to be extended to these persons if they are symptomatic 21, 252, 830 . For example, family members accompanying children admitted with suspected M. tuberculosis have been found to have unsuspected pulmonary tuberculosis with cavitary lesions, even when asymptomatic 42,831 . Patients with underlying conditions that increase their susceptibility to infection (e.g., those who are immunocompromised 43,44 or have cystic fibrosis 20 ) require special efforts to protect them from exposures to infected patients in common waiting areas. If such patients inform the receptionist of their infection risk upon arrival, appropriate steps can be taken to further protect them from infection. In some cystic fibrosis clinics, in order to avoid exposure to other patients who could be colonized with B. cepacia, patients have been given beepers upon registration so that they may leave the area and receive notification to return when an examination room becomes available 832 . # II.G.3. Home care. In home care, the patient placement concerns focus on protecting others in the home from exposure to an infectious household member. For individuals who are especially vulnerable to adverse outcomes associated with certain infections, it may be beneficial to either remove them from the home or segregate them within the home. Persons who are not part of the household may need to be prohibited from visiting during the period of infectivity. For example, if a patient with pulmonary tuberculosis is contagious and being cared for at home, very young children (<4 years of age) 833 and immunocompromised persons who have not yet been infected should be removed or excluded from the household. During the SARS outbreak of 2003, segregation of infected persons during the communicable phase of the illness was beneficial in preventing household transmission 249,834 . # II.H. Transport of Patients Several principles are used to guide transport of patients requiring Transmission-Based Precautions. In the inpatient and residential settings these include 1.
limiting transport of such patients to essential purposes, such as diagnostic and therapeutic procedures that cannot be performed in the patient's room; 2. when transport is necessary, using appropriate barriers on the patient (e.g., mask, gown, wrapping in sheets or use of impervious dressings to cover the affected area(s) when infectious skin lesions or drainage are present), consistent with the route and risk of transmission; 3. notifying healthcare personnel in the receiving area of the impending arrival of the patient and of the precautions necessary to prevent transmission; and 4. for patients being transported outside the facility, informing the receiving facility and the medi-van or emergency vehicle personnel in advance about the type of Transmission-Based Precautions being used. For tuberculosis, additional precautions may be needed in a small shared air space such as in an ambulance 12 . # II.I. Environmental Measures Cleaning and disinfecting non-critical surfaces in patient-care areas are part of Standard Precautions. In general, these procedures do not need to be changed for patients on Transmission-Based Precautions. The cleaning and disinfection of all patient-care areas is important for frequently touched surfaces, especially those closest to the patient, that are most likely to be contaminated (e.g., bedrails, bedside tables, commodes, doorknobs, sinks, surfaces and equipment in close proximity to the patient) 11,72,73,835 . The frequency or intensity of cleaning may need to change based on the patient's level of hygiene and the degree of environmental contamination, and for certain infectious agents whose reservoir is the intestinal tract 54 . This may be especially true in LTCFs and pediatric facilities, where patients with stool and urine incontinence are encountered more frequently. Also, increased frequency of cleaning may be needed in a Protective Environment to minimize dust accumulation 11 . Special recommendations for cleaning and disinfecting environmental surfaces in dialysis centers have been published 18 . In all healthcare settings, administrative, staffing and scheduling activities should prioritize the proper cleaning and disinfection of surfaces that could be implicated in transmission. During a suspected or proven outbreak where an environmental reservoir is suspected, routine cleaning procedures should be reviewed, and the need for additional trained cleaning staff should be assessed. Adherence should be monitored and reinforced to ensure that consistent and correct cleaning is performed. EPA-registered disinfectants or detergents/disinfectants that best meet the overall needs of the healthcare facility for routine cleaning and disinfection should be selected 11,836 . In general, use of the existing facility detergent/disinfectant according to the manufacturer's recommendations for amount, dilution, and contact time is sufficient to remove pathogens from surfaces of rooms where colonized or infected individuals were housed. This includes those pathogens that are resistant to multiple classes of antimicrobial agents (e.g., C. difficile, VRE, MRSA, MDR-GNB 11,24,88,435,746,796,837 ). Most often, environmental reservoirs of pathogens during outbreaks are related to a failure to follow recommended procedures for cleaning and disinfection rather than the specific cleaning and disinfectant agents used [838][839][840][841] . Certain pathogens (e.g., rotavirus, noroviruses, C.
difficile) may be resistant to some routinely used hospital disinfectants 275,292,[842][843][844][845][846][847] . The role of specific disinfectants in limiting transmission of rotavirus has been demonstrated experimentally 842 . Also, since C. difficile may display increased levels of spore production when exposed to non-chlorine-based cleaning agents, and the spores are more resistant than vegetative cells to commonly used surface disinfectants, some investigators have recommended the use of a 1:10 dilution of 5.25% sodium hypochlorite (household bleach) and water for routine environmental disinfection of rooms of patients with C. difficile when there is continued transmission 844,848 . In one study, the use of a hypochlorite solution was associated with a decrease in rates of C. difficile infections 847 . The need to change disinfectants based on the presence of these organisms can be determined in consultation with the infection control committee 11,847,848 . Detailed recommendations for disinfection and sterilization of surfaces and medical equipment that have been in contact with prion-containing tissue or high-risk body fluids, and for cleaning of blood and body substance spills, are available in the Guidelines for Environmental Infection Control in Health-Care Facilities 11 and in the Guideline for Disinfection and Sterilization 848 .
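As a worked illustration of the 1:10 hypochlorite dilution discussed above (not text from the guideline): assuming a household bleach containing 5.25% sodium hypochlorite and reading "1:10" as one part bleach in ten total parts, the available chlorine works out to roughly 5,000 ppm.

```python
# Worked arithmetic for the 1:10 bleach dilution discussed above; the product
# strength and the 1-part-in-10-total reading of "1:10" are stated assumptions.

STOCK_NAOCL_PERCENT = 5.25                 # % sodium hypochlorite in household bleach
stock_ppm = STOCK_NAOCL_PERCENT * 10_000   # 1% (w/v) is approximately 10,000 ppm

parts_bleach, parts_water = 1, 9           # 1 part bleach + 9 parts water = 1:10
diluted_ppm = stock_ppm * parts_bleach / (parts_bleach + parts_water)

print(f"Stock: {stock_ppm:,.0f} ppm; diluted: {diluted_ppm:,.0f} ppm available chlorine")
# Stock: 52,500 ppm; diluted: 5,250 ppm available chlorine
```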
# II.J. Patient Care Equipment and Instruments/Devices Medical equipment and instruments/devices must be cleaned and maintained according to the manufacturers' instructions to prevent patient-to-patient transmission of infectious agents 86,87,325,849 . Cleaning to remove organic material must always precede high level disinfection and sterilization of critical and semi-critical instruments and devices, because residual proteinaceous material reduces the effectiveness of the disinfection and sterilization processes 836,848 . Noncritical equipment, such as commodes, intravenous pumps, and ventilators, must be thoroughly cleaned and disinfected before use on another patient. All such equipment and devices should be handled in a manner that will prevent HCW and environmental contact with potentially infectious material. It is important to include computers and personal digital assistants (PDAs) used in patient care in policies for cleaning and disinfection of non-critical items. The literature on contamination of computers with pathogens has been summarized 850 , and two reports have linked computer contamination to colonization and infections in patients 851,852 . Although keyboard covers and washable keyboards that can be easily disinfected are in use, the infection control benefit of those items and optimal management have not been determined. In all healthcare settings, providing patients who are on Transmission-Based Precautions with dedicated noncritical medical equipment (e.g., stethoscope, blood pressure cuff, electronic thermometer) has been beneficial for preventing transmission 74,89,740,853,854 . When this is not possible, disinfection after use is recommended. Consult other guidelines for detailed guidance in developing specific protocols for cleaning and reprocessing medical equipment and patient care items in both routine and special circumstances 11,14,18,20,740,836,848 . In home care, it is preferable to remove visible blood or body fluids from durable medical equipment before it leaves the home. Equipment can be cleaned on-site using a detergent/disinfectant and, when possible, should be placed in a single plastic bag for transport to the reprocessing location 20,739 . # II.K. Textiles and Laundry Soiled textiles, including bedding, towels, and patient or resident clothing, may be contaminated with pathogenic microorganisms. However, the risk of disease transmission is negligible if they are handled, transported, and laundered in a safe manner 11,855,856 . Key principles for handling soiled laundry are 1. not shaking the items or handling them in any way that may aerosolize infectious agents; 2. avoiding contact of one's body and personal clothing with the soiled items being handled; and 3. containing soiled items in a laundry bag or designated bin. When laundry chutes are used, they must be maintained to minimize dispersion of aerosols from contaminated items 11 . The methods for handling, transporting, and laundering soiled textiles are determined by organizational policy and any applicable regulations 739 ; guidance is provided in the Guidelines for Environmental Infection Control 11 . Rather than rigid rules and regulations, hygienic and common-sense storage and processing of clean textiles is recommended 11,857 . When laundering occurs outside of a healthcare facility, the clean items must be packaged or completely covered and placed in an enclosed space during transport to prevent contamination with outside air or construction dust that could contain infectious fungal spores that are a risk for immunocompromised patients 11 . Institutions are required to launder garments used as personal protective equipment and uniforms visibly soiled with blood or infective material 739 . There are few data to determine the safety of home laundering of HCW uniforms, but no increase in infection rates was observed in the one published study 858 , and no pathogens were recovered from home- or hospital-laundered scrubs in another study 859 . In the home, textiles and laundry from patients with potentially transmissible infectious pathogens do not require special handling or separate laundering, and may be washed with warm water and detergent 11,858,859 . # II.L. Solid Waste The management of solid waste emanating from the healthcare environment is subject to federal and state regulations for medical and non-medical waste 860,861 . No additional precautions are needed for non-medical solid waste that is being removed from rooms of patients on Transmission-Based Precautions. Solid waste may be contained in a single bag (as compared to using two bags) of sufficient strength 862 . # II.M. Dishware and Eating Utensils The combination of hot water and detergents used in dishwashers is sufficient to decontaminate dishware and eating utensils. Therefore, no special precautions are needed for dishware (e.g., dishes, glasses, cups) or eating utensils; reusable dishware and utensils may be used for patients requiring Transmission-Based Precautions. In the home and other communal settings, eating utensils and drinking vessels that are being used should not be shared, consistent with principles of good personal hygiene and for the purpose of preventing transmission of respiratory viruses, Herpes simplex virus, and infectious agents that infect the gastrointestinal tract and are transmitted by the fecal/oral route (e.g., hepatitis A virus, noroviruses). If adequate resources for cleaning utensils and dishes are not available, disposable products may be used.
# II.N. Adjunctive Measures Important adjunctive measures that are not considered primary components of programs to prevent transmission of infectious agents, but improve the effectiveness of such programs, include 1. antimicrobial management programs; 2. postexposure chemoprophylaxis with antiviral or antibacterial agents; 3. vaccines used both for pre- and postexposure prevention; and 4. screening and restricting visitors with signs of transmissible infections. Detailed discussion of judicious use of antimicrobial agents is beyond the scope of this document; however, the topic is addressed in the Management of Multidrug-Resistant Organisms in Healthcare Settings 2006 (https://www.cdc.gov/infectioncontrol/guidelines/mdro/). # II.N.1. Chemoprophylaxis. Antimicrobial agents and topical antiseptics may be used to prevent infection and potential outbreaks of selected agents. Infections for which postexposure chemoprophylaxis is recommended under defined conditions include B. pertussis 17,863 , N. meningitidis 864 , B. anthracis after environmental exposure to aerosolizable material 865 , influenza virus 611 , HIV 866 , and group A streptococcus 160 . Orally administered antimicrobials may also be used under defined circumstances for MRSA decolonization of patients or healthcare personnel 867 . Another form of chemoprophylaxis is the use of topical antiseptic agents. For example, triple dye is used routinely on the umbilical cords of term newborns to reduce the risk of colonization, skin infections, and omphalitis caused by S. aureus, including MRSA, and group A streptococcus 868,869 . Extension of the use of triple dye to low birth weight infants in the NICU was one component of a program that controlled one longstanding MRSA outbreak 22 . Topical antiseptics are also used for decolonization of healthcare personnel or selected patients colonized with MRSA, using mupirocin, as discussed in the MDRO guideline 867, 870-873 . # II.N.2. Immunoprophylaxis. Certain immunizations recommended for susceptible healthcare personnel have decreased the risk of infection and the potential for transmission in healthcare facilities 17,874 . The OSHA mandate that requires employers to offer hepatitis B vaccination to HCWs played a substantial role in the sharp decline in incidence of occupational HBV infection 778,875 . The use of varicella vaccine in healthcare personnel has decreased the need to place susceptible HCWs on administrative leave following exposure to patients with varicella 775 . Also, reports of healthcare-associated transmission of rubella in obstetrical clinics 33,876 and measles in acute care settings 34 demonstrate the importance of immunization of susceptible healthcare personnel against childhood diseases. Many states have requirements for HCW vaccination for measles and rubella in the absence of evidence of immunity. Annual influenza vaccine campaigns targeted to patients and healthcare personnel in LTCFs and acute-care settings have been instrumental in preventing or limiting institutional outbreaks, and increasing attention is being directed toward improving influenza vaccination rates in healthcare personnel 35,611,690,877,878,879 . Transmission of B. pertussis in healthcare facilities has been associated with large and costly outbreaks that include both healthcare personnel and patients 17,36,41,100,683,827,880,881 .
HCWs who have close contact with infants with pertussis are at particularly high risk because of waning immunity and, until 2005, the absence of a vaccine that could be used in adults. However, two acellular pertussis vaccines were licensed in the United States in 2005, one for use in individuals aged 11-18 years and one for use in those aged 10-64 years 882 . Provisional ACIP recommendations at the time of publication of this document include vaccination of adolescents and adults, especially those with contact with infants <12 months of age, and healthcare personnel with direct patient contact 883,884 . Immunization of children and adults will help prevent the introduction of vaccine-preventable diseases into healthcare settings. The recommended immunization schedule for children is published annually in the January issues of the Morbidity and Mortality Weekly Report, with interim updates as needed 885,886 . An adult immunization schedule also is available for healthy adults and those with special immunization needs due to high-risk medical conditions 887 . Some vaccines are also used for postexposure prophylaxis of susceptible individuals, including varicella 888 , influenza 611 , hepatitis B 778 , and smallpox 225 vaccines 17,874 . In the future, administration of a newly developed S. aureus conjugate vaccine (still under investigation) to selected patients may provide a novel method of preventing healthcare-associated S. aureus, including MRSA, infections in high-risk groups (e.g., hemodialysis patients and candidates for selected surgical procedures) 889,890 . Immune globulin preparations also are used for postexposure prophylaxis of certain infectious agents under specified circumstances (e.g., varicella-zoster virus [VZIG], hepatitis B virus [HBIG], rabies [RIG], measles and hepatitis A virus [IG]) 17,833,874 . The RSV monoclonal antibody preparation, palivizumab, may have contributed to controlling a nosocomial outbreak of RSV in one NICU, but there is insufficient evidence to support a routine recommendation for its use in this setting 891 . # II.N.3. Management of visitors. # II.N.3.a. Visitors as sources of infection. Visitors have been identified as the source of several types of HAIs (e.g., pertussis 40,41 , M. tuberculosis 42,892 , influenza and other respiratory viruses 24,43,44,373 , and SARS 21,[252][253][254] ). However, effective methods for visitor screening in healthcare settings have not been studied. Visitor screening is especially important during community outbreaks of infectious diseases and for high-risk patient units. Sibling visits are often encouraged in birthing centers, postpartum rooms, pediatric inpatient units, ICUs, and residential settings for children; in hospital settings, a child visitor should visit only his or her own sibling. Screening of visiting siblings and other children before they are allowed into clinical areas is necessary to prevent the introduction of childhood illnesses and common respiratory infections. Screening may be passive, through the use of signs that alert family members and visitors with signs and symptoms of communicable diseases not to enter clinical areas. More active screening may include the completion of a screening tool or questionnaire that elicits information related to recent exposures or current symptoms. That information is reviewed by the facility staff, and the visitor is either permitted to visit or is excluded 833 .
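The active screening just described could be implemented as a short questionnaire whose answers staff review before permitting a visit. The sketch below is hypothetical: the questions and the flag-anything-positive rule are illustrative examples, not a validated instrument from this guideline.

```python
# Hypothetical active visitor-screening questionnaire; the questions and the
# disposition rule are illustrative, not prescribed by the guideline.

SCREENING_QUESTIONS = {
    "fever":    "Have you had a fever in the past 24 hours?",
    "cough":    "Do you have a new or worsening cough?",
    "rash":     "Do you have an unexplained skin rash?",
    "exposure": "Have you recently been exposed to measles, pertussis, "
                "chickenpox, or tuberculosis?",
}

def screen_visitor(answers: dict[str, bool]) -> str:
    """Flag any 'yes' answer for review by facility staff, who then either
    permit the visit (possibly with a mask) or exclude the visitor."""
    flagged = [topic for topic, yes in answers.items() if yes]
    if not flagged:
        return "Permitted to visit"
    return "Refer to staff before visiting (flagged: " + ", ".join(flagged) + ")"

print(screen_visitor({"fever": False, "cough": True, "rash": False, "exposure": False}))
# Refer to staff before visiting (flagged: cough)
```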
Family and household members visiting pediatric patients with pertussis and tuberculosis may need to be screened for a history of exposure as well as signs and symptoms of current infection. Potentially infectious visitors are excluded until they receive appropriate medical screening, diagnosis, or treatment. If exclusion is not considered to be in the best interest of the patient or family (i.e., primary family members of critically or terminally ill patients), then the symptomatic visitor must wear a mask while in the healthcare facility and remain in the patient's room, avoiding exposure to others, especially in public waiting areas and the cafeteria. Visitor screening is used consistently on HSCT units 15,43 . However, considering the experience during the 2003 SARS outbreaks and the potential for pandemic influenza, developing effective visitor screening systems will be beneficial 9 . Education concerning Respiratory Hygiene/Cough Etiquette is a useful adjunct to visitor screening. # II.N.3.b. Use of barrier precautions by visitors. The use of gowns, gloves, or masks by visitors in healthcare settings has not been addressed specifically in the scientific literature. Some studies included the use of gowns and gloves by visitors in the control of MDROs, but did not perform a separate analysis to determine whether their use by visitors had a measurable impact [893][894][895] . Family members or visitors who are providing care or having very close patient contact (e.g., feeding, holding) may have contact with other patients and could contribute to transmission if barrier precautions are not used correctly. Specific recommendations may vary by facility or by unit and should be determined by the level of interaction. # Part III: Precautions to Prevent Transmission of Infectious Agents There are two tiers of HICPAC/CDC precautions to prevent transmission of infectious agents, Standard Precautions and Transmission-Based Precautions. Standard Precautions are intended to be applied to the care of all patients in all healthcare settings, regardless of the suspected or confirmed presence of an infectious agent. Implementation of Standard Precautions constitutes the primary strategy for the prevention of healthcare-associated transmission of infectious agents among patients and healthcare personnel. Transmission-Based Precautions are for patients who are known or suspected to be infected or colonized with infectious agents, including certain epidemiologically important pathogens, which require additional control measures to effectively prevent transmission. Since the infecting agent often is not known at the time of admission to a healthcare facility, Transmission-Based Precautions are used empirically, according to the clinical syndrome and the likely etiologic agents at the time, and then modified when the pathogen is identified or a transmissible infectious etiology is ruled out. Examples of this syndromic approach are presented in Table 2. The HICPAC/CDC Guidelines also include recommendations for creating a Protective Environment for allogeneic HSCT patients. See Tables 4 and 5 for summaries of the key elements of these sets of precautions. # III.A. Standard Precautions Standard Precautions combine the major features of Universal Precautions (UP) 780,896 and Body Substance Isolation (BSI) 640 and are based on the principle that all blood, body fluids, secretions, excretions except sweat, nonintact skin, and mucous membranes may contain transmissible infectious agents.
Standard Precautions include a group of infection prevention practices that apply to all patients, regardless of suspected or confirmed infection status, in any setting in which healthcare is delivered (Table 4). These include: hand hygiene; use of gloves, gown, mask, eye protection, or face shield, depending on the anticipated exposure; and safe injection practices. Also, equipment or items in the patient environment likely to have been contaminated with infectious body fluids must be handled in a manner to prevent transmission of infectious agents (e.g., wear gloves for direct contact, contain heavily soiled equipment, properly clean and disinfect or sterilize reusable equipment before use on another patient). The application of Standard Precautions during patient care is determined by the nature of the HCW-patient interaction and the extent of anticipated blood, body fluid, or pathogen exposure. For some interactions (e.g., performing venipuncture), only gloves may be needed; during other interactions (e.g., intubation), use of gloves, gown, and face shield or mask and goggles is necessary. Education and training on the principles and rationale for recommended practices are critical elements of Standard Precautions because they facilitate appropriate decision-making and promote adherence when HCWs are faced with new circumstances 655,[681][682][683][684][685][686] . An example of the importance of the use of Standard Precautions is intubation, especially under emergency circumstances when infectious agents may not be suspected, but later are identified (e.g., SARS-CoV, N. meningitidis). The application of Standard Precautions is described below and summarized in Table 4. Guidance on donning and removing gloves, gowns and other PPE is presented in the Figure. Standard Precautions are also intended to protect patients by ensuring that healthcare personnel do not carry infectious agents to patients on their hands or via equipment used during patient care. # III.A.1.a. Respiratory Hygiene/Cough Etiquette. The transmission of SARS-CoV in emergency departments by patients and their family members during the widespread SARS outbreaks in 2003 highlighted the need for vigilance and prompt implementation of infection control measures at the first point of encounter within a healthcare setting 21,254,897 . The strategy proposed has been termed Respiratory Hygiene/Cough Etiquette 9,828 and is intended to be incorporated into infection control practices as a new component of Standard Precautions. The strategy is targeted at patients and accompanying family members and friends with undiagnosed transmissible respiratory infections, and applies to any person with signs of illness including cough, congestion, rhinorrhea, or increased production of respiratory secretions when entering a healthcare facility 40,41,43 . The term cough etiquette is derived from recommended source control measures for M. tuberculosis 12,126 . The elements of Respiratory Hygiene/Cough Etiquette include 1. education of healthcare facility staff, patients, and visitors; 2. posted signs, in language(s) appropriate to the population served, with instructions to patients and accompanying family members or friends; 3. source control measures (e.g., covering the mouth/nose with a tissue when coughing and prompt disposal of used tissues, using surgical masks on the coughing person when tolerated and appropriate); 4. hand hygiene after contact with respiratory secretions; and 5. spatial separation, ideally >3 feet, of persons with respiratory infections in common waiting areas when possible. Covering sneezes and coughs and placing masks on coughing patients are proven means of source containment that prevent infected persons from dispersing respiratory secretions into the air 107,145,898,899 .
Masking may be difficult in some settings (e.g., pediatrics), in which case the emphasis by necessity may be on cough etiquette 900 . Physical proximity of <3 feet has been associated with an increased risk for transmission of infections via the droplet route (e.g., N. meningitidis 103 and group A streptococcus 114 ) and therefore supports the practice of distancing infected persons from others who are not infected. The effectiveness of good hygiene practices, especially hand hygiene, in preventing transmission of viruses and reducing the incidence of respiratory infections both within and outside healthcare settings [901][902][903] is summarized in several reviews 559,717,904 . These measures should be effective in decreasing the risk of transmission of pathogens contained in large respiratory droplets (e.g., influenza virus 23 , adenovirus 111 , B. pertussis 827 , and Mycoplasma pneumoniae 112 ). Although fever will be present in many respiratory infections, patients with pertussis and mild upper respiratory tract infections are often afebrile. Therefore, the absence of fever does not always exclude a respiratory infection. Patients who have asthma, allergic rhinitis, or chronic obstructive lung disease also may be coughing and sneezing. While these patients often are not infectious, cough etiquette measures are prudent. Healthcare personnel are advised to observe Droplet Precautions (i.e., wear a mask) and hand hygiene when examining and caring for patients with signs and symptoms of a respiratory infection. Healthcare personnel who have a respiratory infection are advised to avoid direct patient contact, especially with high-risk patients. If this is not possible, then a mask should be worn while providing patient care. # III.A.1.b. Safe injection practices. The investigation of four large outbreaks of HBV and HCV among patients in ambulatory care facilities in the United States identified a need to define and reinforce safe injection practices 453 . The four outbreaks occurred in a private medical practice, a pain clinic, an endoscopy clinic, and a hematology/oncology clinic. The primary breaches in infection control practice that contributed to these outbreaks were 1. reinsertion of used needles into a multiple-dose vial or solution container (e.g., saline bag) and 2. use of a single needle/syringe to administer intravenous medication to multiple patients. In one of these outbreaks, preparation of medications in the same workspace where used needle/syringes were dismantled also may have been a contributing factor. These and other outbreaks of viral hepatitis could have been prevented by adherence to basic principles of aseptic technique for the preparation and administration of parenteral medications 453,454 . These include the use of a sterile, single-use, disposable needle and syringe for each injection given and prevention of contamination of injection equipment and medication. Whenever possible, use of single-dose vials is preferred over multiple-dose vials, especially when medications will be administered to multiple patients. Outbreaks related to unsafe injection practices indicate that some healthcare personnel are unaware of, do not understand, or do not adhere to basic principles of infection control and aseptic technique. A survey of US healthcare workers who provide medication through injection found that 1% to 3% reused the same needle and/or syringe on multiple patients 905 .
Among the deficiencies identified in recent outbreaks were a lack of oversight of personnel and failure to follow up on reported breaches in infection control practices in ambulatory settings. Therefore, to ensure that all healthcare workers understand and adhere to recommended practices, principles of infection control and aseptic technique need to be reinforced in training programs and incorporated into institutional policies that are monitored for adherence 454 . # III.A.1.c. Infection Control Practices for Special Lumbar Puncture Procedures. In 2004, CDC investigated eight cases of post-myelography meningitis that either were reported to CDC or identified through a survey of the Emerging Infections Network of the Infectious Disease Society of America. Blood and/or cerebrospinal fluid of all eight cases yielded streptococcal species consistent with oropharyngeal flora, and there were changes in the CSF indices and clinical status indicative of bacterial meningitis. Equipment and products used during these procedures (e.g., contrast media) were excluded as probable sources of contamination. Procedural details available for seven cases determined that antiseptic skin preparations and sterile gloves had been used. However, none of the clinicians wore a face mask, giving rise to the speculation that droplet transmission of oropharyngeal flora was the most likely explanation for these infections. Bacterial meningitis following myelogram and other spinal procedures (e.g., lumbar puncture, spinal and epidural anesthesia, intrathecal chemotherapy) has been reported previously [906][907][908][909][910][911][912][913][914][915] . As a result, the question of whether face masks should be worn to prevent droplet spread of oral flora during spinal procedures (e.g., myelogram, lumbar puncture, spinal anesthesia) has been debated 916,917 . Face masks are effective in limiting the dispersal of oropharyngeal droplets 918 and are recommended for the placement of central venous catheters 919 . In October 2005, the Healthcare Infection Control Practices Advisory Committee (HICPAC) reviewed the evidence and concluded that there is sufficient experience to warrant the additional protection of a face mask for the individual placing a catheter or injecting material into the spinal or epidural space. # III.B. Transmission-Based Precautions There are three categories of Transmission-Based Precautions: Contact Precautions, Droplet Precautions, and Airborne Precautions. Transmission-Based Precautions are used when the route(s) of transmission is (are) not completely interrupted using Standard Precautions alone. For some diseases that have multiple routes of transmission (e.g., SARS), more than one Transmission-Based Precautions category may be used. When used either singly or in combination, they are always used in addition to Standard Precautions. See Appendix A for recommended precautions for specific infections. When Transmission-Based Precautions are indicated, efforts must be made to counteract possible adverse effects on patients (i.e., anxiety, depression and other mood disturbances [920][921][922] , perceptions of stigma 923 , reduced contact with clinical staff [924][925][926] , and increases in preventable adverse events 565 ) in order to improve acceptance by the patients and adherence by HCWs. # III.B.1. Contact precautions.
Contact Precautions are intended to prevent transmission of infectious agents, including epidemiologically important microorganisms, which are spread by direct or indirect contact with the patient or the patient's environment, as described in I.B. # III.B.3. Airborne precautions. The preferred placement for a patient who requires Airborne Precautions is an airborne infection isolation room (AIIR), a single-patient room with special air handling and ventilation capacity that meets national design standards 12,13 . Some states require the availability of such rooms in hospitals, emergency departments, and nursing homes that care for patients with M. tuberculosis. A respiratory protection program that includes education about use of respirators, fit-testing, and user seal checks is required in any facility with AIIRs. In settings where Airborne Precautions cannot be implemented due to limited engineering resources (e.g., physician offices), masking the patient, placing the patient in a private room (e.g., office examination room) with the door closed, and providing N95 or higher level respirators, or masks if respirators are not available, for healthcare personnel will reduce the likelihood of airborne transmission until the patient is either transferred to a facility with an AIIR or returned to the home environment, as deemed medically appropriate. Healthcare personnel caring for patients on Airborne Precautions wear a mask or respirator, depending on the disease-specific recommendations (Respiratory Protection II.E.4, Table 2, and Appendix A), that is donned prior to room entry. Whenever possible, non-immune HCWs should not care for patients with vaccine-preventable airborne diseases (e.g., measles, chickenpox, and smallpox). # III.C. Syndromic and Empiric Applications of Transmission-Based Precautions Diagnosis of many infections requires laboratory confirmation. Since laboratory tests, especially those that depend on culture techniques, often require two or more days for completion, Transmission-Based Precautions must be implemented while test results are pending, based on the clinical presentation and likely pathogens. Use of appropriate Transmission-Based Precautions at the time a patient develops symptoms or signs of transmissible infection, or arrives at a healthcare facility for care, reduces transmission opportunities. While it is not possible to identify prospectively all patients needing Transmission-Based Precautions, certain clinical syndromes and conditions carry a sufficiently high risk to warrant their use empirically while confirmatory tests are pending (Table 2). Infection control professionals are encouraged to modify or adapt this table according to local conditions. # III.D. Discontinuation of Transmission-Based Precautions Transmission-Based Precautions remain in effect for limited periods of time (i.e., while the risk for transmission of the infectious agent persists or for the duration of the illness) (Appendix A). For most infectious diseases, this duration reflects known patterns of persistence and shedding of infectious agents associated with the natural history of the infectious process and its treatment. For some diseases (e.g., pharyngeal or cutaneous diphtheria, RSV), Transmission-Based Precautions remain in effect until culture or antigen-detection test results document eradication of the pathogen and, for RSV, symptomatic disease is resolved. For other diseases (e.g., M. tuberculosis), state laws and regulations, and healthcare facility policies, may dictate the duration of precautions 12 .
In immunocompromised patients, viral shedding can persist for prolonged periods of time (many weeks to months), and transmission to others may occur during that time; therefore, the duration of Contact and/or Droplet Precautions may be prolonged for many weeks 500,[928][929][930][931][932][933] . The duration of Contact Precautions for patients who are colonized or infected with MDROs remains undefined. MRSA is the only MDRO for which effective decolonization regimens are available 867 . However, carriers of MRSA who have negative nasal cultures after a course of systemic or topical therapy may resume shedding MRSA in the weeks that follow therapy 934,935 . Although early guidelines for VRE suggested discontinuation of Contact Precautions after three stool cultures obtained at weekly intervals proved negative 740 , subsequent experiences have indicated that such screening may fail to detect colonization that can persist for >1 year 27,[936][937][938] . Likewise, available data indicate that colonization with VRE, MRSA 939 , and possibly MDR-GNB can persist for many months, especially in the presence of severe underlying disease, invasive devices, and recurrent courses of antimicrobial agents. It may be prudent to assume that MDRO carriers are colonized permanently and manage them accordingly. Alternatively, an interval free of hospitalizations, antimicrobial therapy, and invasive devices (e.g., 6 or 12 months) before reculturing patients to document clearance of carriage may be used. Determination of the best strategy awaits the results of additional studies. See the 2006 HICPAC/CDC MDRO guideline 927 for discussion of possible criteria to discontinue Contact Precautions for patients colonized or infected with MDROs. # III.E. Application of Transmission-Based Precautions in Ambulatory and Home Care Settings Although Transmission-Based Precautions generally apply in all healthcare settings, exceptions exist. For example, in home care, AIIRs are not available. Furthermore, family members already exposed to diseases such as varicella and tuberculosis would not use masks or respiratory protection, but visiting HCWs would need to use such protection. Similarly, management of patients colonized or infected with MDROs may necessitate Contact Precautions in acute care hospitals and in some LTCFs when there is continued transmission, but the risk of transmission in ambulatory care and home care has not been defined. Consistent use of Standard Precautions may suffice in these settings, but more information is needed. # III.F. Protective Environment A Protective Environment is designed for allogeneic HSCT patients to minimize fungal spore counts in the air and reduce the risk of invasive environmental fungal infections (see Table 5 for specifications) 11,[13][14][15] . The need for such controls has been demonstrated in studies of aspergillus outbreaks associated with construction 11,14,15,157,158 . As defined by the American Institute of Architects 13 and presented in detail in the Guideline for Environmental Infection Control 2003 11,861 , air quality for HSCT patients is improved through a combination of environmental controls that include 1. HEPA filtration of incoming air; 2. directed room air flow; 3. positive room air pressure relative to the corridor; 4. well-sealed rooms (including sealed walls, floors, ceilings, windows, electrical outlets) to prevent flow of air from the outside; 5. ventilation to provide >12 air changes per hour; 6.
strategies to minimize dust (e.g., scrubbable surfaces rather than upholstery 940 and carpet 941, and routinely cleaning crevices and sprinkler heads); and 7. prohibiting dried and fresh flowers and potted plants in the rooms of HSCT patients. The latter is based on molecular typing studies that have found indistinguishable strains of Aspergillus terreus in patients with hematologic malignancies and in potted plants in the vicinity of the patients 942-944. The desired quality of air may be achieved without incurring the inconvenience or expense of laminar airflow 15, 157. To prevent inhalation of fungal spores during periods when construction, renovation, or other dust-generating activities may be ongoing in and around the health-care facility, it has been advised that severely immunocompromised patients wear a high-efficiency respiratory-protection device (e.g., an N95 respirator) when they leave the Protective Environment 11, 14, 945. The use of masks or respirators by HSCT patients when they are outside of the Protective Environment for prevention of environmental fungal infections in the absence of construction has not been evaluated. A Protective Environment does not include the use of barrier precautions beyond those indicated for Standard and Transmission-Based Precautions. No published reports support the benefit of placing solid organ transplant recipients or other immunocompromised patients in a Protective Environment.
# Part IV: Recommendations
These recommendations are designed to prevent transmission of infectious agents among patients and healthcare personnel in all settings where healthcare is delivered. As in other CDC/HICPAC guidelines, each recommendation is categorized on the basis of existing scientific data, theoretical rationale, applicability, and, when possible, economic impact. The CDC/HICPAC system for categorizing recommendations is as follows:
Category IA: Strongly recommended for implementation and strongly supported by well-designed experimental, clinical, or epidemiologic studies.
Category IB: Strongly recommended for implementation and supported by some experimental, clinical, or epidemiologic studies and a strong theoretical rationale.
Category IC: Required for implementation, as mandated by federal and/or state regulation or standard.
Category II: Suggested for implementation and supported by suggestive clinical or epidemiologic studies or a theoretical rationale.
No recommendation; unresolved issue: Practices for which insufficient evidence or no consensus regarding efficacy exists.
# I. Administrative Responsibilities
Healthcare organization administrators should ensure the implementation of recommendations in this section.
I.A. Incorporate preventing transmission of infectious agents into the objectives of the organization's patient and occupational safety programs 543-546, 561, 620, 626, 946. Category IB
III.E. Review periodically information on community or regional trends in the incidence and prevalence of epidemiologically-important organisms (e.g., influenza, RSV, pertussis, invasive group A streptococcal disease, MRSA, VRE) (including in other healthcare facilities) that may impact transmission of organisms within the facility 398, 687, 972-974. Category II
# IV. Standard Precautions
Assume that every person is potentially infected or colonized with an organism that could be transmitted in the healthcare setting and apply the following infection control practices during the delivery of health care 956.
# IV.A. Hand Hygiene
# IV.E. Patient-Care Equipment and Instruments/Devices
IV.E.1. Establish policies and procedures for containing, transporting, and handling patient-care equipment and instruments/devices that may be contaminated with blood or body fluids 18, 739, 975. Category IB/IC
IV.E.2. Remove organic material from critical and semi-critical instruments/devices, using recommended cleaning agents, before high-level disinfection and sterilization to enable effective disinfection and sterilization processes 836, 991, 992. Category IA
IV.E.3. Wear PPE (e.g., gloves, gown), according to the level of anticipated contamination, when handling patient-care equipment and instruments/devices that are visibly soiled or may have been in contact with blood or body fluids 18, 739, 975. Category IB/IC
# IV.F. Care of the Environment 11
Edit [February 2017]: An * indicates recommendations that were renumbered for clarity. The renumbering does not constitute change to the intent of the recommendations.
IV.F.1. Establish policies and procedures for routine and targeted cleaning of environmental surfaces as indicated by the level of patient contact and degree of soiling 11. Category II
IV.F.2. Clean and disinfect surfaces that are likely to be contaminated with pathogens, including those that are in close proximity to the patient (e.g., bed rails, over-bed tables) and frequently-touched surfaces in the patient care environment (e.g., door knobs, surfaces in and surrounding toilets in patients' rooms) on a more frequent schedule compared to that for other surfaces (e.g., horizontal surfaces in waiting rooms) 11, 72, 73, 740, 746, 800, 835, 993-995. Category IB
IV.F.3. Use EPA-registered disinfectants that have microbiocidal (i.e., killing) activity against the pathogens most likely to contaminate the patient-care environment. Use in accordance with manufacturer's instructions 842-844, 956, 996. Category IB/IC
IV.F.3.a. Review the efficacy of in-use disinfectants when evidence of continuing transmission of an infectious agent (e.g., rotavirus, C. difficile, norovirus) may indicate resistance to the in-use product and change to a more effective disinfectant as indicated 275, 842, 847. Category II
Edit [February 2017]: An * indicates recommendations that were renumbered for clarity. The renumbering does not constitute change to the intent of the recommendations.
IV.F.4. In facilities that provide health care to pediatric patients or have waiting areas with child play toys (e.g., obstetric/gynecology offices and clinics), establish policies and procedures for cleaning and disinfecting toys at regular intervals 379, 80. Category IB
IV.F.4.a.* Use the following principles in developing this policy and procedures: Category II
• Select play toys that can be easily cleaned and disinfected
• Do not permit use of stuffed furry toys if they will be shared
• Clean and disinfect large stationary toys (e.g., climbing equipment) at least weekly and whenever visibly soiled
• If toys are likely to be mouthed, rinse with water after disinfection; alternatively wash in a dishwasher
• When a toy requires cleaning and disinfection, do so immediately or store in a designated labeled container separate from toys that are clean and ready for use
IV.F.5. Include multi-use electronic equipment in policies and procedures for preventing contamination and for cleaning and disinfection, especially those items that are used by patients, those used during delivery of patient care, and mobile devices that are moved in and out of patient rooms frequently (e.g., daily) 850-852, 997. Category IB
IV.F.5.a. No recommendation for use of removable protective covers or washable keyboards. Unresolved issue
# IV.G. Textiles and Laundry
IV.G.1. Handle used textiles and fabrics with minimum agitation to avoid contamination of air, surfaces and persons 739, 998, 999. Category IB/IC
IV.G.2. If laundry chutes are used, ensure that they are properly designed, maintained, and used in a manner to minimize dispersion of aerosols from contaminated laundry 11, 13, 1000, 1001. Category IB/IC
# IV.H. Safe Injection Practices
The following recommendations apply to the use of needles, cannulas that replace needles, and, where applicable, intravenous delivery systems 454.
IV.H.1. Use aseptic technique to avoid contamination of sterile injection equipment 1002, 1003. Category IA
IV.H.2. Do not administer medications from a syringe to multiple patients, even if the needle or cannula on the syringe is changed. Needles, cannulae and syringes are sterile, single-use items; they should not be reused for another patient nor to access a medication or solution that might be used for a subsequent patient 453, 919, 1004, 1005. Category IA
IV.H.3. Use fluid infusion and administration sets (i.e., intravenous bags, tubing and connectors) for one patient only and dispose appropriately after use. Consider a syringe or needle/cannula contaminated once it has been used to enter or connect to a patient's intravenous infusion bag or administration set 453. Category IB
IV.H.4. Use single-dose vials for parenteral medications whenever possible 453. Category IA
IV.H.5. Do not administer medications from single-dose vials or ampules to multiple patients or combine leftover contents for later use 369, 453, 1005. Category IA
IV.H.6. If multidose vials must be used, both the needle or cannula and syringe used to access the multidose vial must be sterile 453, 1002. Category IA
IV.H.7. Do not keep multidose vials in the immediate patient treatment area and store in accordance with the manufacturer's recommendations; discard if sterility is compromised or questionable 453, 1003. Category IA
IV.H.8. Do not use bags or bottles of intravenous solution as a common source of supply for multiple patients 453, 1006. Category IB
# IV.I. Infection Control Practices for Special Lumbar Puncture Procedures
Wear a surgical mask when placing a catheter or injecting material into the spinal canal or subdural space (i.e., during myelograms, lumbar puncture and spinal or epidural anesthesia) 906.
# V.C. Droplet Precautions
Use Droplet Precautions as recommended in Appendix A for patients known or suspected to be infected with pathogens transmitted by respiratory droplets (i.e., large-particle droplets >5 µm in size) that are generated by a patient who is coughing, sneezing or talking 14, 23 (Steinberg, 1969).
V.D.2.a.ii. If it is not possible to exhaust air from an AIIR directly to the outside, the air may be returned to the air-handling system or adjacent spaces if all air is directed through HEPA filters.
V.D.2.a.iii. Whenever an AIIR is in use for a patient on Airborne Precautions, monitor air pressure daily with visual indicators (e.g., smoke tubes, flutter strips), regardless of the presence of differential pressure sensing devices (e.g., manometers) 11, 12, 1023, 1024.
V.D.2.a.iv. Keep the AIIR door closed when not required for entry and exit.
V.D.2.b. When an AIIR is not available, transfer the patient to a facility that has an available AIIR 12. Category II
V.D.2.c. In the event of an outbreak or exposure involving large numbers of patients who require Airborne Precautions:
• Consult infection control professionals before patient placement to determine the safety of alternative rooms that do not meet engineering requirements for an AIIR.
• Place together (cohort) patients who are presumed to have the same infection (based on clinical presentation and diagnosis when known) in areas of the facility that are away from other patients, especially patients who are at increased risk for infection (e.g., immunocompromised patients).
• Use temporary portable solutions (e.g., exhaust fan) to create a negative pressure environment in the converted area of the facility. Discharge air directly to the outside, away from people and air intakes, or direct all the air through HEPA filters before it is introduced to other air spaces 12. Category II
V.D.2.d. In ambulatory settings:
V.D.2.d.i. Develop systems (e.g., triage, signage) to identify patients with known or suspected infections that require Airborne Precautions upon entry into ambulatory settings 9, 12, 34, 127, 134. Category IA
V.D.2.d.ii. Place the patient in an AIIR as soon as possible. If an AIIR is not available, place a surgical mask on the patient and place him/her in an examination room. Once the patient leaves, the room should remain vacant for the appropriate time, generally one hour, to allow for a full exchange of air 11, 12, 122 (see the air-clearance sketch following V.D.4.b below). Category IB/IC
V.D.2.d.iii. Instruct patients with a known or suspected airborne infection to wear a surgical mask and observe Respiratory Hygiene/Cough Etiquette. Once in an AIIR, the mask may be removed; the mask should remain on if the patient is not in an AIIR 12, 107, 145, 899. Category IB/IC
V.D.3. Personnel restrictions
Restrict susceptible healthcare personnel from entering the rooms of patients known or suspected to have measles (rubeola), varicella (chickenpox), disseminated zoster, or smallpox if other immune healthcare personnel are available 17, 775. Category IB
V.D.4.a. Wear a fit-tested NIOSH-approved N95 or higher level respirator for respiratory protection when entering the room or home of a patient when the following diseases are suspected or confirmed:
*V.D.4.a.i. Infectious pulmonary or laryngeal tuberculosis or when infectious tuberculosis skin lesions are present and procedures that would aerosolize viable organisms (e.g., irrigation, incision and drainage, whirlpool treatments) are performed 12, 1025, 1026. Category IB
*V.D.4.a.ii. Smallpox (vaccinated and unvaccinated). Respiratory protection is recommended for all healthcare personnel, including those with a documented "take" after smallpox vaccination, due to the risk of a genetically engineered virus against which the vaccine may not provide protection, or of exposure to a very large viral load (e.g., from high-risk aerosol-generating procedures, immunocompromised patients, hemorrhagic or flat smallpox) 108, 129. Category II
V.D.4.b. § Suspected measles, chickenpox or disseminated zoster. No recommendation is made regarding the use of PPE by healthcare personnel who are presumed to be immune to measles (rubeola) or varicella-zoster based on history of disease, vaccine, or serologic testing when caring for an individual with known or suspected measles, chickenpox or disseminated zoster, due to difficulties in establishing definite immunity 1027, 1028. Unresolved issue
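The one-hour vacancy guidance in V.D.2.d.ii follows from the exponential decay of airborne contaminants under dilution ventilation: assuming perfectly mixed room air, the time to remove a given fraction of contaminant is t = ln(1/(1 - fraction)) / ACH × 60 minutes. The Python sketch below illustrates that relationship; it is an illustrative calculation under the perfect-mixing assumption and does not replace the engineering guidance in references 11 and 12.

```python
import math

def clearance_minutes(ach: float, removal_fraction: float = 0.99) -> float:
    """Minutes for room air to reach the given removal fraction of an airborne
    contaminant, assuming perfect mixing and a constant air change rate:
    t = ln(1 / (1 - removal_fraction)) / ACH * 60."""
    if ach <= 0 or not 0 < removal_fraction < 1:
        raise ValueError("ACH must be positive and removal fraction in (0, 1)")
    return math.log(1.0 / (1.0 - removal_fraction)) / ach * 60.0

if __name__ == "__main__":
    # 6 ACH applies to existing AIIRs, 12 ACH to new construction or renovation.
    for ach in (6, 12):
        print(f"{ach:>2} ACH: 99% removal in {clearance_minutes(ach):4.1f} min, "
              f"99.9% in {clearance_minutes(ach, 0.999):4.1f} min")
```

At 6 ACH, 99% removal takes about 46 minutes, which is consistent with the "generally one hour" guidance above; at 12 ACH it takes about 23 minutes.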
V.D.4.c. § Suspected measles, chickenpox or disseminated zoster. No recommendation is made regarding the type of personal protective equipment (i.e., surgical mask or respiratory protection with a N95 or higher respirator) to be worn by susceptible healthcare personnel who must have contact with patients with known or suspected measles, chickenpox or disseminated herpes zoster. Unresolved issue
VI.C.2. Lower dust levels by using smooth, nonporous surfaces and finishes that can be scrubbed, rather than textured material (e.g., upholstery). Wet dust horizontal surfaces whenever dust is detected and routinely clean crevices and sprinkler heads where dust may accumulate 940, 941. Category II
VI.C.3. Avoid carpeting in hallways and patient rooms in these areas 941. Category IB
VI.C.4. Prohibit dried and fresh flowers and potted plants 942-944. Category II
VI.D. Minimize the length of time that patients who require a Protective Environment are outside their rooms for diagnostic procedures and other activities 11, 158, 945. Category IB
VI.E. During periods of construction, to prevent inhalation of respirable particles that could contain infectious spores, provide respiratory protection (e.g., N95 respirator) to patients who are medically fit to tolerate a respirator when they are required to leave the Protective Environment 13, 158, 945. Category IB
VI.F.4.b. Use an anteroom to further support the appropriate air-balance relative to the corridor and the Protective Environment; provide independent exhaust of contaminated air to the outside or place a HEPA filter in the exhaust duct if the return air must be recirculated 13, 1041. Category IB
VI.F.4.c. If an anteroom is not available, place the patient in an AIIR and use portable, industrial-grade HEPA filters in the room to enhance filtration of spores 1042. Category II
# Preamble
The mode(s) and risk of transmission for each specific disease agent included in Appendix A were reviewed. Principal sources consulted for the development of disease-specific recommendations for Appendix A included infectious disease manuals and textbooks [833, 1043, 1044]. The published literature was searched for evidence of person-to-person transmission in healthcare and non-healthcare settings, with a focus on reported outbreaks that would assist in developing recommendations for all settings where healthcare is delivered. Criteria used to assign Transmission-Based Precautions categories follow:
• A Transmission-Based Precautions category was assigned if there was strong evidence for person-to-person transmission via droplet, contact, or airborne routes in healthcare or non-healthcare settings and/or if patient factors (e.g., diapered infants, diarrhea, draining wounds) increased the risk of transmission
• Transmission-Based Precautions category assignments reflect the predominant mode(s) of transmission
• If there was no evidence for person-to-person transmission by droplet, contact or airborne routes, Standard Precautions were assigned
• If there was a low risk for person-to-person transmission and no evidence of healthcare-associated transmission, Standard Precautions were assigned
• Standard Precautions were assigned for bloodborne pathogens (e.g., hepatitis B and C viruses, human immunodeficiency virus) as per CDC recommendations for Universal Precautions issued in 1988 [780]. Subsequent experience has confirmed the efficacy of Standard Precautions to prevent exposure to infected blood and body fluid [778, 779, 866].
Additional information relevant to use of precautions was added in the comments column to assist the caregiver in decision-making. Citations were added as needed to support a change in or provide additional evidence for recommendations for a specific disease and for new infectious agents (e.g., SARS-CoV, avian influenza) that have been added to Appendix A. The reader may refer to more detailed discussion concerning modes of transmission and emerging pathogens in the background text and for MDRO control in Appendix B (Management of Multidrug-Resistant Organisms in Healthcare Settings (https://www.cdc.gov/infectioncontrol/guidelines/mdro/)).
# Type and Duration of Precautions Recommended for Selected Infections and Conditions 1
Gastroenteritis, norovirus. Comments: [142, 147, 148]; ensure consistent environmental cleaning and disinfection, with a focus on restrooms even when apparently unsoiled [273, 1064]. Hypochlorite solutions may be required when there is continued transmission [290-292]. Alcohol is less active, but there is no evidence that alcohol antiseptic handrubs are not effective for hand decontamination [294]. Cohorting of affected patients to separate airspaces and toilet facilities may help interrupt transmission during outbreaks.
Gastroenteritis, rotavirus. Type of Precaution: Contact + Standard. Duration: Duration of illness. Comments: Ensure consistent environmental cleaning and disinfection and frequent removal of soiled diapers. Prolonged shedding may occur in both immunocompetent and immunocompromised children and the elderly [932, 933].
# Table 3B. Botulism
| Characteristics | Infection Control Considerations |
|---|---|
| Site(s) of Infection; Transmission Mode | Gastrointestinal Tract: ingestion of toxin-containing food, and Respiratory Tract: inhalation of toxin-containing aerosol, cause disease. Comment: Toxin ingested or potentially delivered by aerosol in bioterrorist incidents. LD50 (lethal dose for 50% of experimental animals) for type A is 0.001 μg/ml/kg. |
| Incubation Period | 1-5 days |
| Clinical Features | Ptosis, generalized weakness, dizziness, dry mouth and throat, blurred vision, diplopia, dysarthria, dysphonia, and dysphagia followed by symmetrical descending paralysis and respiratory failure. |
| Diagnosis | Clinical diagnosis; identification of toxin in stool, serology unless toxin-containing material available for toxin neutralization bioassays. |
| Infectivity | Not transmitted from person to person. Exposure to toxin necessary for disease. |
| Recommended Precautions | Standard Precautions. |
# Ebola Hemorrhagic Fever
Ebola Virus Disease for Healthcare Workers: Updated recommendations for healthcare workers can be found at Ebola: U.S. Healthcare Workers and Settings (https://www.cdc.gov/vhf/ebola/healthcare-us/).
# Table 3C. Ebola Hemorrhagic Fever
| Characteristics | Infection Control Considerations |
|---|---|
| Site(s) of Infection; Transmission Mode | As a rule, infection develops after exposure of mucous membranes or the respiratory tract, or through broken skin or percutaneous injury. |
| Incubation Period | 2-19 days, usually 5-10 days |
| Clinical Features | Febrile illness with malaise, myalgias, headache, vomiting and diarrhea that is rapidly complicated by hypotension, shock, and hemorrhagic features. Massive hemorrhage in <50% of patients. |
| Diagnosis | Etiologic diagnosis can be made using reverse transcriptase-polymerase chain reaction (RT-PCR), serologic detection of antibody and antigen, pathologic assessment with immunohistochemistry, and viral culture with EM confirmation of morphology. |
| Infectivity | Person-to-person transmission primarily occurs through unprotected contact with blood and body fluids; percutaneous injuries (e.g., needlestick) are associated with a high rate of transmission; transmission in healthcare settings has been reported but is prevented by use of barrier precautions. |
| Recommended Precautions | Hemorrhagic fever-specific barrier precautions: If disease is believed to be related to intentional release of a bioweapon, the epidemiology of transmission is unpredictable pending observation of disease transmission. Until the nature of the pathogen is understood and its transmission pattern confirmed, Standard, Contact and Airborne Precautions should be used. Once the pathogen is characterized, if the epidemiology of transmission is consistent with natural disease, Droplet Precautions can be substituted for Airborne Precautions. Emphasize: 1. use of sharps safety devices and safe work practices; 2. hand hygiene; 3. barrier protection against blood and body fluids upon entry into the room (single gloves and fluid-resistant or impermeable gown, face/eye protection with masks, goggles or face shields); and 4. appropriate waste handling. Use N95 or higher respirators when performing aerosol-generating procedures. In settings where AIIRs are unavailable or the large numbers of patients cannot be accommodated by existing AIIRs, observe Droplet Precautions (plus Standard Precautions and Contact Precautions) and segregate patients from those not suspected of VHF infection. Limit blood draws to those essential to care. See text for discussion and Appendix A for recommendations for naturally occurring VHFs. |
# Plague
Pneumonic plague is not as contagious as is often thought. Historical accounts and contemporary evidence indicate that persons with plague usually transmit the infection only when the disease is in the end stage. These persons cough copious amounts of bloody sputum that contains many plague bacteria. Patients in the early stage of primary pneumonic plague (approximately the first 20-24 h) apparently pose little risk [1, 2]. Antibiotic medication rapidly clears the sputum of plague bacilli, so that a patient generally is not infective within hours after initiation of effective antibiotic treatment [3]. This means that in modern times many patients will never reach a stage where they pose a significant risk to others. Even in the end stage of disease, transmission only occurs after close contact. Simple protective measures, such as wearing masks, good hygiene, and avoiding close contact, have been effective in interrupting transmission during many pneumonic plague outbreaks [2]. In the United States, the last known cases of person-to-person transmission of pneumonic plague occurred in 1925 [2].
# Table 3E. Smallpox
| Characteristics | Infection Control Considerations |
|---|---|
| Clinical Features | Fever, malaise, backache, headache, and often vomiting for 2-3 days; then generalized papular or maculopapular rash (more on face and extremities), which becomes vesicular (on day 4 or 5) and then pustular; lesions all in same stage. |
| Diagnosis | Electron microscopy of vesicular fluid or culture of vesicular fluid by WHO-approved laboratory (CDC); detection by polymerase chain reaction available only in select LRN labs, CDC and USAMRIID. |
| Infectivity | Secondary attack rates up to 50% in unvaccinated persons; infected persons may transmit disease from the time the rash appears until all lesions have crusted over (about 3 weeks); greatest infectivity during first 10 days of rash. |
| Recommended Precautions | Combined use of Standard, Contact, and Airborne Precautions until all scabs have separated (3-4 weeks). Transmission by the airborne route is a rare event; Airborne Precautions are recommended when possible, but in the event of mass exposures, barrier precautions and containment within a designated area are most important. 204, 212 Only immune HCWs to care for patients; post-exposure vaccine within 4 days. Vaccinia: HCWs cover vaccination site with gauze and semi-permeable dressing until scab separates (≥21 days). Observe hand hygiene. Adverse events with virus-containing lesions: Standard plus Contact Precautions until all lesions crusted. Vaccinia adverse events with lesions containing infectious virus include inadvertent autoinoculation, ocular lesions (blepharitis, conjunctivitis), generalized vaccinia, progressive vaccinia, eczema vaccinatum; bacterial superinfection also requires addition of Contact Precautions if exudates cannot be contained. 216, 217 |
# Table 3F. Tularemia
| Characteristics | Infection Control Considerations |
|---|---|
| Site(s) of Infection; Transmission Mode | Respiratory Tract: inhalation of aerosolized bacteria. Gastrointestinal Tract: ingestion of food or drink contaminated with aerosolized bacteria. Comment: Pneumonic or typhoidal disease likely to occur after bioterrorist event using aerosol delivery. Infective dose 10-50 bacteria. |
| Incubation Period | 2 to 10 days, usually 3 to 5 days |
| Clinical Features | Pneumonic: malaise, cough, sputum production, dyspnea. Typhoidal: fever, prostration, weight loss and frequently an associated pneumonia. |
| Diagnosis | Diagnosis usually made with serology on acute and convalescent serum specimens; bacterium can be detected by polymerase chain reaction (LRN) or isolated from blood and other body fluids on cysteine-enriched media or mouse inoculation. |
| Infectivity | Person-to-person spread is rare. Laboratory workers who encounter/handle cultures of this organism are at high risk for disease if exposed. |
| Recommended Precautions | Standard Precautions. |
# Protective Environment Specifications (Table 5 excerpt)
• Monitor air pressure daily with visual indicators (e.g., flutter strips, smoke tubes) or a hand-held pressure gauge
• Self-closing door on all room exits
• Maintain back-up ventilation equipment (e.g., portable units for fans or filters) for emergency provision of ventilation requirements for PE areas and take immediate steps to restore the fixed ventilation system
• For patients who require both a PE and Airborne Infection Isolation, use an anteroom to ensure proper air balance relationships and provide independent exhaust of contaminated air to the outside, or place a HEPA filter in the exhaust duct. If an anteroom is not available, place the patient in an AIIR and use portable ventilation units and industrial-grade HEPA filters to enhance filtration of spores.
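The incubation periods in Tables 3B, 3C, and 3F above support a routine exposure-investigation calculation: given a case's symptom-onset date, the plausible exposure window is bounded by subtracting the maximum and minimum incubation periods. A minimal Python sketch follows; the onset date shown is invented for illustration.

```python
from datetime import date, timedelta

def exposure_window(onset: date, incubation_min_days: int,
                    incubation_max_days: int) -> tuple[date, date]:
    """Earliest and latest plausible exposure dates for a case with the given
    symptom-onset date, using an incubation-period range such as 1-5 days for
    botulism, 2-19 days for Ebola, or 2-10 days for tularemia (tables above)."""
    return (onset - timedelta(days=incubation_max_days),
            onset - timedelta(days=incubation_min_days))

# Hypothetical tularemia case with onset on 2007-06-15 (2- to 10-day incubation):
earliest, latest = exposure_window(date(2007, 6, 15), 2, 10)
print(earliest, latest)  # 2007-06-05 2007-06-13
```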
Cohorting. In the context of this guideline, this term applies to the practice of grouping patients infected or colonized with the same infectious agent together to confine their care to one area and prevent contact with susceptible patients (cohorting patients). During outbreaks, healthcare personnel may be assigned to a cohort of patients to further limit opportunities for transmission (cohorting staff).
Colonization. Proliferation of microorganisms on or within body sites without detectable host immune response, cellular damage, or clinical expression. The presence of a microorganism within a host may occur with varying duration, but may become a source of potential transmission. In many instances, colonization and carriage are synonymous.
Droplet nuclei. Microscopic particles < 5 µm in size that are the residue of evaporated droplets and are produced when a person coughs, sneezes, shouts, or sings. These particles can remain suspended in the air for prolonged periods of time and can be carried on normal air currents in a room or beyond, to adjacent spaces or areas receiving exhaust air.
# Hand hygiene. A general term that applies to any one of the following: 1. handwashing with plain (nonantimicrobial) soap and water; 2. antiseptic handwash (soap containing antiseptic agents and water); 3. antiseptic handrub (waterless antiseptic product, most often alcohol-based, rubbed on all surfaces of hands); or 4. surgical hand antisepsis (antiseptic handwash or antiseptic handrub performed preoperatively by surgical personnel to eliminate transient hand flora and reduce resident hand flora) 559.
# Healthcare-associated infection (HAI). An infection that develops in a patient who is cared for in any setting where healthcare is delivered (e.g., acute care hospital, chronic care facility, ambulatory clinic, dialysis center, surgicenter, home) and is related to receiving health care (i.e., was not incubating or present at the time healthcare was provided). In ambulatory and home settings, HAI would apply to any infection that is associated with a medical or surgical intervention. Since the geographic location of infection acquisition is often uncertain, the preferred term is considered to be healthcare-associated rather than healthcare-acquired.
Healthcare epidemiologist. A person whose primary training is medical (M.D., D.O.) and/or master's or doctorate-level epidemiology and who has received advanced training in healthcare epidemiology. Typically these professionals direct or provide consultation to an infection control program in a hospital, long term care facility (LTCF), or healthcare delivery system (also see infection control professional).
# Healthcare personnel, healthcare worker (HCW). All paid and unpaid persons who work in a healthcare setting (e.g., any person who has professional or technical training in a healthcare-related field and provides patient care in a healthcare setting, or any person who provides services that support the delivery of healthcare, such as dietary, housekeeping, engineering, and maintenance personnel).
# Hematopoietic stem cell transplantation (HSCT). Any transplantation of blood- or bone marrow-derived hematopoietic stem cells, regardless of donor type (e.g., allogeneic or autologous) or cell source (e.g., bone marrow, peripheral blood, or placental/umbilical cord blood); associated with periods of severe immunosuppression that vary with the source of the cells, the intensity of chemotherapy required, and the presence of graft versus host disease (MMWR 2000; 49: RR-10).
High-efficiency particulate air (HEPA) filter. An air filter that removes >99.97% of particles ≥0.3 µm (the most penetrating particle size) at a specified flow rate of air.
HEPA filters may be integrated into the central air handling systems, installed at the point of use above the ceiling of a room, or used as portable units (MMWR 2003; 52: RR-10).
Home care. A wide range of medical, nursing, rehabilitation, hospice and social services delivered to patients in their place of residence (e.g., private residence, senior living center, assisted living facility). Home health-care services include care provided by home health aides and skilled nurses, respiratory therapists, dieticians, physicians, chaplains, and volunteers; provision of durable medical equipment; home infusion therapy; and physical, speech, and occupational therapy.
# Immunocompromised patients. Those patients whose immune mechanisms are deficient because of congenital or acquired immunologic disorders (e.g., human immunodeficiency virus [HIV] infection, congenital immune deficiency syndromes), chronic diseases (e.g., diabetes mellitus, cancer, emphysema, or cardiac failure), ICU care, malnutrition, or immunosuppressive therapy of another disease process (e.g., radiation, cytotoxic chemotherapy, anti-graft-rejection medication, corticosteroids, monoclonal antibodies directed against a specific component of the immune system). The type of infections for which an immunocompromised patient has increased susceptibility is determined by the severity of immunosuppression and the specific component(s) of the immune system that is affected. Patients undergoing allogeneic HSCT and those with chronic graft versus host disease are considered the most vulnerable to HAIs. Immunocompromised states also make it more difficult to diagnose certain infections (e.g., tuberculosis) and are associated with more severe clinical disease states than occur in persons with the same infection and a normal immune system.
Infection. The transmission of microorganisms into a host after evading or overcoming defense mechanisms, resulting in the organism's proliferation and invasion within host tissue(s). Host responses to infection may include clinical symptoms or may be subclinical, with manifestations of disease mediated by direct organism pathogenesis and/or a function of cell-mediated or antibody responses that result in the destruction of host tissues.
# Infection control and prevention professional (ICP). A person whose primary training is in either nursing, medical technology, microbiology, or epidemiology and who has acquired specialized training in infection control. Responsibilities may include collection, analysis, and feedback of infection data and trends to healthcare providers; consultation on infection risk assessment, prevention and control strategies; performance of education and training activities; implementation of evidence-based infection control practices or those mandated by regulatory and licensing agencies; application of epidemiologic principles to improve patient outcomes; participation in planning renovation and construction projects (e.g., to ensure appropriate containment of construction dust); evaluation of new products or procedures on patient outcomes; oversight of employee health services related to infection prevention; implementation of preparedness plans; communication within the healthcare setting, with local and state health departments, and with the community at large concerning infection control issues; and participation in research. Certification in infection control (CIC) is available through the Certification Board of Infection Control and Epidemiology.
# Infection control and prevention program. A multidisciplinary program that includes a group of activities to ensure that recommended practices for the prevention of healthcare-associated infections are implemented and followed by HCWs, making the healthcare setting safe from infection for patients and healthcare personnel. The Joint Commission on Accreditation of Healthcare Organizations (JCAHO) requires the following five components of an infection control program for accreditation: 1. surveillance: monitoring patients and healthcare personnel for acquisition of infection and/or colonization; 2. investigation: identification and analysis of infection problems or undesirable trends; 3. prevention: implementation of measures to prevent transmission of infectious agents and to reduce risks for device- and procedure-related infections; 4. control: evaluation and management of outbreaks; and 5. reporting: provision of information to external agencies as required by state and federal law and regulation (The Joint Commission (https://www.jointcommission.org/) [Current version of this document may differ from original.]). The infection control program staff has the ultimate authority to determine infection control policies for a healthcare organization, with the approval of the organization's governing body.
Long-term care facilities (LTCFs). An array of residential and outpatient facilities designed to meet the bio-psychosocial needs of persons with sustained self-care deficits. These include skilled nursing facilities, chronic disease hospitals, nursing homes, foster and group homes, institutions for the developmentally disabled, residential care facilities, assisted living facilities, retirement homes, adult day health care facilities, rehabilitation centers, and long-term psychiatric hospitals.
# Mask. A term that applies collectively to items used to cover the nose and mouth; it includes both procedure masks and surgical masks (see below).
# Multidrug-resistant organisms (MDROs). In general, bacteria (excluding M. tuberculosis) that are resistant to one or more classes of antimicrobial agents and usually are resistant to all but one or two commercially available antimicrobial agents (e.g., MRSA, VRE, extended spectrum beta-lactamase [ESBL]-producing or intrinsically resistant gram-negative bacilli) 176.
# Nosocomial infection. Derived from two Greek words: "nosos" (disease) and "komeion" (to take care of). Refers to any infection that develops during or as a result of an admission to an acute care facility (hospital) and was not incubating at the time of admission.
# Personal protective equipment (PPE). A variety of barriers used alone or in combination to protect mucous membranes, skin, and clothing from contact with infectious agents. PPE includes gloves, masks, respirators, goggles, face shields, and gowns.
# Procedure Mask. A covering for the nose and mouth that is intended for use in general patient care situations. These masks generally attach to the face with ear loops rather than ties or elastic. Unlike surgical masks, procedure masks are not regulated by the Food and Drug Administration.
Protective Environment. A specialized patient-care area, usually in a hospital, with a positive air flow relative to the corridor (i.e., air flows from the room to the outside adjacent space).
The combination of high-efficiency particulate air (HEPA) filtration, high numbers (≥12) of air changes per hour (ACH), and minimal leakage of air into the room creates an environment that can safely accommodate patients with a severely compromised immune system (e.g., those who have received allogeneic hematopoietic stem-cell transplant [HSCT]) and decrease the risk of exposure to spores produced by environmental fungi. Other components include use of scrubbable surfaces instead of materials such as upholstery or carpeting, cleaning to prevent dust accumulation, and prohibition of fresh flowers or potted plants. (A sketch that checks room readings against these engineering parameters follows the Standard Precautions entry below.)
Source Control. The process of containing an infectious agent either at the portal of exit from the body or within a confined space. The term is applied most frequently to containment of infectious agents transmitted by the respiratory route but could apply to other routes of transmission (e.g., a draining wound, vesicular or bullous skin lesions). Respiratory Hygiene/Cough Etiquette that encourages individuals to "cover your cough" and/or wear a mask is a source control measure. The use of enclosing devices for local exhaust ventilation (e.g., booths for sputum induction or administration of aerosolized medication) is another example of source control.
# Standard Precautions. A group of infection prevention practices that apply to all patients, regardless of suspected or confirmed diagnosis or presumed infection status. Standard Precautions is a combination and expansion of Universal Precautions 780 and Body Substance Isolation 1102. Standard Precautions is based on the principle that all blood, body fluids, secretions, excretions except sweat, nonintact skin, and mucous membranes may contain transmissible infectious agents. Standard Precautions includes hand hygiene and, depending on the anticipated exposure, use of gloves, gown, mask, eye protection, or face shield. Also, equipment or items in the patient environment likely to have been contaminated with infectious fluids must be handled in a manner to prevent transmission of infectious agents (e.g., wear gloves for handling, contain heavily soiled equipment, properly clean and disinfect or sterilize reusable equipment before use on another patient).
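As flagged in the Protective Environment entry above, the following Python sketch checks a set of daily room readings against the engineering characteristics described there (HEPA-filtered supply air, ≥12 ACH, and positive pressure relative to the corridor). The RoomReading fields and the use of any positive pressure differential as the pass criterion are illustrative assumptions, not specifications drawn from this guideline.

```python
from dataclasses import dataclass

@dataclass
class RoomReading:
    """Hypothetical daily engineering readings for a Protective Environment room."""
    air_changes_per_hour: float
    pressure_differential_pa: float  # room minus corridor; positive means PE airflow
    hepa_filtered_supply: bool
    door_closed: bool

def pe_deviations(reading: RoomReading) -> list[str]:
    """Return deviations from the Protective Environment parameters described
    in the glossary entry above; an empty list means the reading passed."""
    problems = []
    if reading.air_changes_per_hour < 12:
        problems.append(f"only {reading.air_changes_per_hour} ACH; >=12 required")
    if reading.pressure_differential_pa <= 0:
        problems.append("room not positive relative to the corridor")
    if not reading.hepa_filtered_supply:
        problems.append("supply air is not HEPA filtered")
    if not reading.door_closed:
        problems.append("door open; pressure relationship not maintained")
    return problems

print(pe_deviations(RoomReading(12.5, 2.5, True, True)) or "within specification")
```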
# Surgical mask. A device worn over the mouth and nose by operating room personnel during surgical procedures to protect both surgical patients and operating room personnel from transfer of microorganisms and body fluids. Surgical masks also are used to protect healthcare personnel from contact with large infectious droplets (>5 μm in size). According to draft guidance issued by the Food and Drug Administration on May 15, 2003, surgical masks are evaluated using standardized testing procedures for fluid resistance, bacterial filtration efficiency, differential pressure (air exchange), and flammability in order to mitigate the risks to health associated with the use of surgical masks. These specifications apply to any masks that are labeled surgical, laser, isolation, or dental or medical procedure ([This link is no longer active: www.fda.gov/cdrh/ode/guidance/094.html#4. Similar information may be found at FDA: Masks and N95 Respirators (http://www.fda.gov/MedicalDevices/ProductsandMedicalProcedures/GeneralHospitalDevicesandSupplies/PersonalProtectiveEquipment/ucm055977.htm), accessed May 2016.]). Surgical masks do not protect against inhalation of small particles or droplet nuclei and should not be confused with particulate respirators that are recommended for protection against selected airborne infectious agents (e.g., Mycobacterium tuberculosis).
# Acknowledgement
The authors and HICPAC gratefully acknowledge Dr. Larry Strausbaugh for his many contributions and valued guidance in the preparation of this guideline.
# Abbreviations Used in the Guideline
# Appendix: Glossary
Airborne infection isolation room (AIIR). Formerly called a negative pressure isolation room, an AIIR is a single-occupancy patient-care room used to isolate persons with a suspected or confirmed airborne infectious disease. Environmental factors are controlled in AIIRs to minimize the transmission of infectious agents that are usually transmitted from person to person by droplet nuclei associated with coughing or aerosolization of contaminated fluids. AIIRs should provide negative pressure in the room (so that air flows under the door gap into the room), an air flow rate of 6-12 ACH (6 ACH for existing structures, 12 ACH for new construction or renovation), and direct exhaust of air from the room to the outside of the building or recirculation of air through a HEPA filter before returning to circulation (MMWR 2003; 52 [RR-10]; MMWR 1994; 43).
American Institute of Architects (AIA). A professional organization that develops standards for building ventilation. The "2001 Guidelines for Design and Construction of Hospital and Health Care Facilities", the development of which was supported by the AIA, Academy of Architecture for Health, Facilities Guideline Institute, with assistance from the U.S. Department of Health and Human Services and the National Institutes of Health, is the primary source of guidance for creating airborne infection isolation rooms (AIIRs) and protective environments (American Institute of Architects - Academy of Architecture for Health (https://network.aia.org/academyofarchitectureforhealth/home) [Current version of this document may differ from original.])
Ambulatory care settings. Facilities that provide health care to patients who do not remain overnight (e.g., hospital-based outpatient clinics, nonhospital-based clinics and physician offices, urgent care centers, surgicenters, free-standing dialysis centers, public health clinics, imaging centers, ambulatory behavioral health and substance abuse clinics, physical therapy and rehabilitation centers, and dental practices).
Bioaerosols. An airborne dispersion of particles containing whole or parts of biological entities, such as bacteria, viruses, dust mites, fungal hyphae, or fungal spores. Such aerosols usually consist of a mixture of mono-dispersed and aggregate cells, spores or viruses, carried by other materials, such as respiratory secretions and/or inert particles.
Infectious bioaerosols (i.e., those that contain biological agents capable of causing an infectious disease) can be generated from human sources (e.g., expulsion from the respiratory tract during coughing, sneezing, talking or singing; during suctioning or wound irrigation), wet environmental sources (e.g., HVAC and cooling-tower water with Legionella) or dry sources (e.g., construction dust with spores produced by Aspergillus spp.). Bioaerosols include large respiratory droplets and small droplet nuclei (Cole EC. AJIC 1998;26: 453-64).
Caregivers. All persons who are not employees of an organization, are not paid, and provide or assist in providing healthcare to a patient (e.g., family member, friend).
Quasi-experimental studies. Studies to evaluate interventions that do not use randomization as part of the study design. These studies are also referred to as nonrandomized, pre-post-intervention study designs. These studies aim to demonstrate causality between an intervention and an outcome but cannot achieve the level of confidence concerning attributable benefit obtained through a randomized, controlled trial. In hospitals and public health settings, randomized controlled trials often cannot be implemented due to ethical, practical and urgency reasons; therefore, quasi-experimental design studies are used commonly. However, even if an intervention appears to be effective statistically, the question can be raised as to the possibility of alternative explanations for the result. Such study designs are used when it is not logistically feasible or ethically possible to conduct a randomized, controlled trial (e.g., during outbreaks). Within the classification of quasi-experimental study designs, there is a hierarchy of design features that may contribute to validity of results (Harris et al. CID 2004:38: 1586).
# Residential care setting. A facility in which people live, minimal medical care is delivered, and the psychosocial needs of the residents are provided for.
# Respirator. A personal protective device worn by healthcare personnel over the nose and mouth to protect them from acquiring airborne infectious diseases due to inhalation of infectious airborne particles that are < 5 μm in size. These include infectious droplet nuclei from patients with M. tuberculosis, variola virus (smallpox), and SARS-CoV, and dust particles that contain infectious particles, such as spores of environmental fungi (e.g., Aspergillus sp.).
# Respiratory Hygiene/Cough Etiquette. A combination of measures designed to minimize the transmission of respiratory pathogens via droplet or airborne routes in healthcare settings. The components of respiratory hygiene/cough etiquette are 1. covering the mouth and nose during coughing and sneezing, 2. using tissues to contain respiratory secretions with prompt disposal into a no-touch receptacle, 3. offering a surgical mask to persons who are coughing to decrease contamination of the surrounding environment, and 4. turning the head away from others and maintaining spatial separation, ideally >3 feet, when coughing. These measures are targeted to all patients with symptoms of respiratory infection and their accompanying family members or friends beginning at the point of initial encounter with a healthcare setting (e.g., reception/triage in emergency departments, ambulatory clinics, healthcare provider offices) 126, 620.
The purpose of the memorandum is to exchange information and expertise in the area of occupational safety and health. One product of this agreement is the development of documents to provide the scientific basis for establishing recommended occupational exposure limits. These limits will be developed separately by the two countries according to their different national policies. This document on the health effects of occupational exposure to grain dust is the first product of that agreement. The document was written by Michael Brown (DSDTT/NIOSH) and was reviewed by the Criteria Group and by DSDTT/NIOSH.
# INTRODUCTION
Adverse health effects from exposure to grain dust were first reported in 1713, when Bernardino Ramazzini noted that grain dust irritated the throats, eyes, skin, and lungs of grain workers (Ramazzini 1713). Workers may be exposed to grain dust during farming operations or while working at grain elevators and flour, feed, and seed mills. Exposure to grain dust may also occur during the transportation of grain between these facilities. Present-day grain workers have been reported to exhibit both pulmonary effects (including cough, dyspnea, wheezing, asthma, chronic bronchitis, farmer's lung, chronic airway obstruction, mycotoxicosis, and allergic alveolitis) and nonpulmonary effects (including conjunctivitis, rhinitis, grain fever, and dermatitis). In the United States of America (U.S.A.), the National Institute for Occupational Safety and Health (NIOSH) has previously considered the explosive properties of grain dust and has recommended safe work practices and engineering controls (NIOSH 1983a). In Sweden, Lantbrukets Brandskyddskommitte has issued recommendations on how to prevent the spontaneous combustion and explosion of grain dust (SBK 1977). Data have also been reported on grain dust contaminants, which include fungal spores, bacteria, mites, and insect body parts (NIOSH 1986a). This paper addresses the adverse health effects of human exposure to grain dust. A thorough search of the literature has identified three groups of studies specifically dealing with this type of occupational exposure. These epidemiologic studies are discussed in detail in Section 5.7.
# CHARACTERISTICS AND COMPOSITION OF GRAIN DUST
Grain dust consists of 60% to 75% organic material and 25% to 40% inorganic material (Yoshida and Maybank 1980), including the following:
(1) Fragments of cereal grains (e.g., wheat, oats, barley, rye, and corn), oil seeds (e.g., rapeseed, linseed, and sunflower seed), and pulses (the edible seeds of legumes such as peas and soybeans) (Becklake 1980),
(2) Decomposition products of grains, seeds, and pulses (Chan-Yeung and Ashley 1978),
(3) Inorganic materials such as soil and traces of chemical elements (Yoshida and Maybank 1980),
(4) Microorganisms (NIOSH 1986a, 1986b),
(5) Insects, insect parts, and mites (Labour Canada 1981),
(6) Hairs, feathers, and excreta of rodents and birds (Becklake 1980),
(7) Fragments of plant matter (Becklake 1980),
(8) Chemicals such as fertilizers, pesticides, and herbicides (Federal Register 1980), and
(9) Other contaminants such as metal fragments, lubricating oils, or paint chips that may have accumulated with the grain during harvesting and subsequent processing or storage (Labour Canada 1981).
Airborne grain dust concentrations in storage and grain-handling facilities are affected by the design of ventilation systems (if present), the intensity and thoroughness of housekeeping efforts, the amounts of contaminants present, the humidity, the age of the grain, the extent to which the grain has been cleaned, and the degree to which the grain has been manipulated.
# 2.1 Microorganisms
The microbial flora associated with grain dust consist of a wide variety of fungi and bacteria. The predominant organisms isolated from grain dust depend on the time the dust is produced (during harvest or storage) and factors such as original growing conditions, season of the year, geographical location, water content of the grain, temperature, type of grain, and storage practices. Some organisms invade or contaminate the grain when it is developing in the fields, and other organisms are primarily associated with grain during storage (Lacey 1980; NIOSH 1986a).
Under field conditions, fungi such as Cladosporium, Alternaria, Fusarium, Diplodia, Chaetomium, Rhizopus, and Absidia are predominant, but bacteria such as Erwinia herbicola, Pseudomonas, Bacillus, and Streptomyces are also found (NIOSH 1986a). Though microorganisms from the field can still be isolated after the grain has been stored, the number of field organisms usually declines over time, and some are replaced by other fungi and bacteria (Lacey 1980). Aspergillus and Penicillium species are the predominant fungi associated with storage, but grain that has become moist or heated often contains Micropolyspora, Thermoactinomyces vulgaris, and yeasts such as Candida, Mucor pusillus, and Absidia (NIOSH 1986a). Appendix Table A-1 lists the predominant microorganisms identified in selected reports.
# 2.2 Pesticides
Grain may be fumigated with pesticides at a number of points between the field and storage facility to prevent spoilage or loss from pests. Fumigation frequently occurs during transportation by truck or freight car to large elevators. Grain may be fumigated again during transfer into elevators, during storage, and also during discharge into barges, ships, or rail cars. Grain in long-term storage is subjected to periodic preventive fumigation and treatment if pest infestation occurs. Pesticides that are currently banned for use with grain in the U.S.A. are ethylene dibromide (Federal Register 1984), carbon disulfide, ethylene dichloride, and carbon tetrachloride (Federal Register 1985). Pesticides that are currently being used are methyl bromide (EPA 1986) and aluminum or magnesium phosphide (EPA 1987). Malathion has an EPA label registration that permits it to be used for protecting grain from future infestation. Sampling has been conducted for carbon disulfide, ethylene dibromide, and carbon tetrachloride (McMahon 1971; NIOSH 1977, 1984a, 1985a, 1985b); chloroform and 1,2-dichloroethylene (NIOSH 1984a, 1985a); methyl bromide and ethylene dichloride (NIOSH 1977, 1985a); carbon disulfide (NIOSH 1977, 1985b); malathion (Palmgren and Lee 1984; NIOSH 1985c); 1,2-dibromoethane (Berck 1974; Panel on Fumigant Residues in Grain 1974; Heikes 1985); phosphine (NIOSH 1977, 1985c, 1987); ethylene dichloride (Berck 1974; NIOSH 1984a); carbon tetrachloride (NIOSH 1976a); trichloroethylene and chloroform (Panel on Fumigant Residues in Grain 1974); and diazinon (Palmgren and Lee 1984).
None of these pesticides are known to cause the major health effects identified with exposures to grain dust. NIOSH has recommended that some pesticides (e.g., carbon tetrachloride, trichloroethylene, and ethylene dibromide) be regarded as potential occupational carcinogens, and that efforts be made to substitute them or take appropriate control measures (NIOSH 1976b, 1978, 1983b).
# 2.3 Inorganic compounds
Wirtz and Olenchock (1984a) characterized the elemental composition of airborne grain dusts from six different grains and the settled dust from grain elevators. The data from this study demonstrated that differences exist between the elemental compositions of the airborne grain dusts generated by different grains and between the compositions of settled and airborne grain dusts. Concern exists about the silica content of grain dust. Silica has been reported to constitute 9.96% of dusts with a particle size <44 µm (Heatley et al. 1944). Stepanov et al. (1967) reported that up to 70% of the grain dust from the elevator and threshing floor of a grain elevator consisted of organic material and that the mineral fraction of the dust was composed of 8% to 18% free silica. When available, silica concentrations are reported in the epidemiologic studies in Section 5.7.
# ANALYTICAL METHODS
Because of the variety of contaminants occurring in grain dust, the analytical methods for their determination cannot be reviewed in this brief document. The reader is referred to standard textbooks on analysis of organic chemicals in the workplace. Whether or not the specific components of the dusts are measured, the exposure measurement should involve the collection of the total and respirable fractions. These can be collected using standard methods for total nuisance dust (NIOSH Method 0500) and respirable nuisance dust (NIOSH Method 0600) (NIOSH 1984b); a sketch of the gravimetric and time-weighted-average arithmetic underlying such methods follows Table I below. Several methods are available for sampling the microflora of both settled and airborne grain dust. These methods vary in their degree of precision, sophistication, and suitability (Lacey 1980; NIOSH 1986a) (Table I). Data on exposure to grain dust are listed by occupational group, process, or area monitored in Appendix Tables A-2 and A-3 for Canada and the U.S.A. In Sweden, the concentration of grain dust ranged from 4 to 53 mg/m³ in five grain pits (ASF 1985).
# EFFECTS IN HUMANS
This document deals with the adverse health effects of exposure to grain dust. The occurrence of these adverse health effects has been documented in the literature since 1713.
# 5.1 Uptake and deposition in human lungs
No information has been found regarding the uptake and deposition of grain dust in human lungs.
# 5.2 Acute effects
Acute exposure to grain dust has resulted in pruritic skin reactions, conjunctivitis, rhinitis, asthma, and grain fever (symptoms of the latter may include cough, chest tightness, wheezing, dyspnea, expectoration, fever, chills, flushed face, and pain in the joints and muscles). See also Section 5.7, Epidemiologic studies.
# 5.3 Chronic effects
Chronic exposure to grain dust has resulted in chronic bronchitis and hyperreactive airways. See also Section 5.7, Epidemiologic studies. Neuropsychiatric effects have been seen in grain elevator workers exposed to carbon disulfide (Peters et al. 1982).

Table I. Grain dust concentrations measured in grain elevators

| Location | No. of samples | Concentration (mg/m³) | Reference |
|---|---|---|---|
| Grain elevator | - | 2.5 to 19.9 | NIOSH (1977) |
| Grain elevator | - | 2.0 to 118.8 | NIOSH (1977) |
| Grain elevator | - | 0.2 to 6.6 | NIOSH (1977) |
| Grain elevator | - | 0.7 to 35.9 | NIOSH (1977) |
| Grain elevator | - | 0.28 to 9.5 | NIOSH (1985c) |
| Grain elevator | - | 0.34 to 30.0 | NIOSH (1985c) |
| Grain elevator | 13 | 0.96 to 9.48 | NIOSH (1986a) |

*Area dust concentrations. †Personal dust concentrations.
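As noted under Analytical Methods, the filter-based NIOSH methods report a concentration as collected mass divided by sampled air volume, and full-shift exposures such as those in Table I are commonly summarized as 8-hr time-weighted averages (TWAs). The Python sketch below shows that arithmetic; the function names and example numbers are illustrative assumptions, not values from the cited surveys.

```python
def concentration_mg_m3(filter_gain_mg: float, flow_l_min: float,
                        minutes: float) -> float:
    """Gravimetric dust concentration: collected filter mass divided by the
    sampled air volume (liters converted to cubic meters), the principle
    behind filter-based methods such as NIOSH 0500 and 0600."""
    volume_m3 = flow_l_min * minutes / 1000.0
    return filter_gain_mg / volume_m3

def twa_8hr_mg_m3(samples: list[tuple[float, float]]) -> float:
    """8-hr time-weighted average from (concentration in mg/m3, duration in
    minutes) pairs; unsampled time is treated as zero exposure."""
    return sum(conc * dur for conc, dur in samples) / 480.0

# Invented example: 1.2 mg collected at 2.0 L/min over 240 min -> 2.5 mg/m3,
# then an 8-hr TWA over three exposure periods:
c = concentration_mg_m3(1.2, 2.0, 240)
print(round(c, 2), round(twa_8hr_mg_m3([(c, 240), (6.0, 120), (0.5, 120)]), 2))
```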
6 NIOSH (1977) Grain elevator -0.7 to 35.9 NIOSH (1977) Grain elevator -0.28 to 9.5 NIOSH (1985c) Grain elevator -0.34 to 30.0 NIOSH (1985c) Grain elevator 13 0.96 to 9.48 NIOSH (1986a) *Area dust concentrations. "^PersonaI dust concentrations. # Critical effects No specific adverse health effect can be recognized as always occurring first in workers exposed to grain dust. The first effect could be manifested as irritation of the throat, eyes, skin, or lungs. # Dose response/dose effect Table II lists the reported human health effects at various grain dust exposure concentrations. # Case studies The following health effects have been reported in workers exposed to grain dust: silicosis in a worker who unloaded grain from rail cars (Heatley et al. 1944), pneumoconiosis in dock laborers who primarily handled grain (Dunner et al. 1946), farmer's lung in farmers who handled grain (Dickie and Rankin 1958;Patterson et al. 1974;Kotimaa et al. 1984), neuropsychiatrie disorders in grain storage workers (Peters et al. 1982), pulmonary mycotoxicosis (now referred to as organic dust toxic syndrome ) in farmers who handled moldy silage (Emanuel et al. 1975), and grain fever and severe dyspnea in grain elevator workers (Skoulas et al. 1964 A logistic regression analysis produced the odds ratios (i.e., the probability of having a symptom divided by the probability of not having the symptom) listed in Table IV. These ratios indicate that the odds of developing respiratory symptoms from grain dust exposure are independent of and usually greater than the odds of developing them from smoking. , (1983). 283 grain elevator workers and 192 city service workers (comparison population). The two groups were comparable with respect to age, sex, height, weight, and smoking habits. The workers were given a limited physical examination and were administered medical and occupational questionnaires to obtain information on the frequency of eye, nasal, and respiratory symptoms and their perceived level of dust exposure ("less than average," "above average," or "average"). Spirometrie measurements were obtained for FVC, F E V^ FEFgo^, and FEF^g^. Other tests included (1) oral temperature, (2) skin tests using pollen, mixed feathers, cat and rat epithelium, grass, and mixed grain dusts, and (3) venous blood samples for total leukocyte count. During the work shift, 209 total dust measurements (8 -hr TWA's) were made. Ninety-three percent of the grain dust concentrations were 3 3 below 10 mg/m , and 8 6% were below 5 mg/m , with a mean concentration of 3.3 mg/m^. Statistically significant differences (p<0.05) existed in the prevalence rates of the following symptoms among all grain workers and all comparison workers, respectively: cough, 48% and 32%; expectoration, 38% and 19%; wheezing, 13% and 9%; dyspnea, 12% and 6%; nasal stuffiness, 38% and 26%; and eye irritation, 13% and 6 %. In the comparison group, workers who smoked had a significantly greater prevalence (p<0.05) of cough, expectoration, and nasal symptoms than did nonsmokers. Grain elevator workers who reported that their perceived exposure to grain dust was "average" had a significantly greater occurrence (p<0.05) of cough than those who reported "less than average" exposures. Those who assessed their exposure as "greater than average" had a significantly greater prevalence (p<0.05) of dyspnea, wheezing, and eye symptoms than those reporting "average" or "less than average" exposures. 
An analysis of the relationship between dust concentrations and symptom prevalence was conducted for 209 grain workers (no explanation was given for the reduced number of workers). At total grain dust concentrations 3 >5 mg/m , grain workers had a significantly greater prevalence of cough (p<0.005), expectoration (p<0.0001), dyspnea (p<0.001), and eye irritation (p<0 .0 0 0 1 ) than did city service workers. An analysis of pulmonary function values for 241 grain workers and 191 comparison workers was conducted (no explanation was given for the reduced number of workers). Grain workers experienced a significant pre-shift to post-shift percentage change for FVC (p<0.05), ^5 0% (p<0.01), and FEF?5 % (p<0.001). A multiple regression analysis correcting for age, height, and smoking habits revealed that the grain dust concentration had a statistically significant (p<0.05) effect on the pre-shift to post-shift percentage changes in FVC, FEF^q^, and FEF^^. The significant effect noted for FVC limits the validity of the flow rate measurements, as the lung volume shifted the volume at which the flow measurements were made. The average respirable dust 3 concentration was 0.9+0.7 mg/m , and the average nonrespirable dust 3 concentration was 5.7+10.9 mg/m . Note that the standard deviation for the nonrespirable dust concentration is larger than the mean because the data are greatly skewed; the validity of the data is not affected, however. Pulmonary function analyses were based on results from 34 of the 47 grain elevator workers (the remaining 12 workers were either unable to report for the tests or had results that were rejected on technical grounds). The FVC decreased from Monday to Wednesday, and the Wednesday change was sustained on Friday (p<0.001). The These data were used to examine a dose-effect relationship for grain dust exposure. Results showed that 50% of the workers underwent a daily decrease of at least 923 ml/sec in (p=0.004) and of 310 ml/sec in FEF2 5% (p=0.08) for each 1 mg/m^ increase in the concentration of respi rable dust. 5.7.2. 3 Broder et al. (1983) In a report based on information collected in the 1977 survey (Broder et al. 1979), Broder et al. (1983 No results were reported for the chest radiographs. On the basis of skin tests results, two subsets of grain workers were established for further study: (1 ) 18 grain handlers with positive reactions to one or more grains and a comparison group of 18 grain workers with no reaction, and (2 ) 21 grain handlers with positive reactions to one or more fungal antigens and a comparison group of 21 grain workers with no reaction. Workers were matched for age, duration of employment, smoking status, and pack years of smoking. No significant differences were found for demographics, respiratory symptoms, or pulmonary function. On the basis of results of the inhalation challenge tests, three subsets of grain workers were established for further analysis: (1 ) 12 nonsmoking grain workers with chronic cough and 12 grain workers with no cough (internal comparison group), (2) 9 grain workers with a decrease in FEFgQ^ from Monday to Friday and 9 grain workers with no decrease in FEFgo^ (internal comparison group), and (3) 10 grain workers whose baseline FEV^ was 100% (internal comparison group). Insofar as possible, the internal comparison workers were matched with the other grain workers for age, duration of employment, and smoking history. 
No significant differences were found for demographic variables, respiratory symptoms, or pulmonary function. For the longitudinal analysis, 14 workers who were laid off and 14 who were not laid off were matched as closely as possible for age and smoking habits. The laid-off workers were examined in January and February before being rehired; they were subsequently examined between April and June. Steadily employed workers were examined at the same intervals. After they were rehired, workers who had been laid off reported an increase in cough (p<0.04), rhinitis (p<0.03), and sputum production (p<0.05), and a decrease in shortness of breath (p<0 .0 1 ) and respiratory illness (p<0 .0 2 ). Workers who were steadily employed had an increase in cough (p<0.04), rhinitis (p<0.03), and sputum production (p<0.05) during the same time period. Compared with these workers, those who had been laid off reported significantly less cough (p<0.05) and sputum production (p<0.01) in February, and less shortness of breath (p<0.05) after they were rehired. Workers who had been laid off showed improvement in FEV^ (p<0.001) and FEFgo^ (p<0.001) in February followed by a decline after they were rehired. For that same period, steadily employed workers had an increase in FEV.J (p<0.002) in February followed by a decline. An analysis covering the maximum consecutive period of reduced employment (January to March) included 15 workers who were laid off and 24 steadily employed workers. The subjects were not matched for age or smoking habits. The steadily employed workers had a significant increase in cough (p<0 .0 2 ) and sputum production (p<0 .0 0 1 ). Though low ambient temperature may have had an effect on the respiratory symptoms, adverse symptoms continued to increase as the temperature increased. The longitudinal changes in respiratory variables are partially reversible and are consistent with the effects of grain dust exposure. (Broder et al. 1985). The grain handlers who had left employment were significantly younger (p<0 .0 0 1 ), had a shorter duration of employment (p<0.001), and reported a greater prevalence of eye irritation (p<0.04), cough (p<0.04), and shortness of breath (p<0.003) than those grain workers who remained employed. The groups were compared using multiple logistic analyses of the symptoms and multiple regression analyses of pulmonary function measurements with adjustments for age, pack years of smoking history, and duration of employment. These adjustments increased the differences in prevalence rates for the aforementioned symptoms but they did not change the statistical significance. The adjustments also increased the significance of the reported sputum prevalence from p=0.04 to p<0 .0 2 . Civic workers who left the job were significantly younger (p<0 .0 0 1 ), had a shorter duration of employment (p<0 .0 0 2 ), had decreased pack years of smoking history (p<0 .0 0 1 ), included a lower percentage of current smokers (p<0 .0 2 ), had a lower prevalence of coughing (p<0 .0 2 ), and had a lower FVC (p<0.002), a higher F E V^F V C ratio (p<0.02), and a higher ^7 5% (p<0.005) than those civic workers who remained employed. When adjusted for age, pack years of smoking history, and duration of employment, the differences were no longer statistically significant. Data obtained in the 1977 survey (Broder et al. 
1979) for the grain and civic workers who remained employed and were reexamined were compared with data obtained from those workers who remained employed but did not report for the 1980 survey (Broder et al. 1985). The grain workers who declined participation in the 1980 survey were significantly older (p<0.007), had a longer duration of employment (p<0.009), and had a greater number of pack years of smoking history (p<0.006) when compared with those grain workers who were reexamined in the 1980 survey. The grain workers who declined participation in the 1980 survey had a higher prevalence of sputum production (p<0.01), lower FEF^q (p<0.009), and F E V^F V C (p<0.006) when compared with those grain workers who were reexamined in the 1980 survey. However, the significance for these pulmonary parameters declined when adjusted for number of pack years of smoking. The civic workers who declined to participate in the 1980 survey were significantly older (p<0.03) than those civic workers who were reexamined. There was a statistically significant greater prevalence of wheezing in nonsmoking civic workers compared with nonsmoking grain workers in 1977 (p<0.03) and in 1980 (p<0.001). In 1977, increased eye irritation (p<0.01) and a higher FEF^g^ (p<0 .0 2 ) was measured in grain workers who smoked compared with civic workers who smoked. For grain and civic workers whose smoking status remained constant from 1977-80, Table V shows demographic characteristics, symptom prevalence, and measures of pulmonary function. The changes in pulmonary function from 1977 to 1980 show (1) a significant increase in FEV^/FVC (p<0.04) among grain workers who were ex-smokers compared with civic workers who were ex-smokers, and (2 ) a decline in FEF7 5% (P<0 -0 0 2 ) among grain workers who were current smokers compared with civic workers who were current smokers. A multiple regression analysis of change in pulmonary function from 1977 to 1980 demonstrated an inverse relationship between age in 1980 and the level of pulmonary function in 1977. The longitudinal study by Broder (1980) showed that there was an increased prevalence of rhinitis, cough, and sputum production among rehired grain workers compared with continuously employed grain workers. However, the rehired workers showed a decrease in the prevalence of shortness of breath and respiratory illness compared with the continuously employed grain (1985) reported the results of special tests and/or analyses performed by Tabona et al. (1984) on subsets of these populations. These reports followed portions of the same populations of grain workers and controls for periods as long as 6 years (Table VI). For the first health survey, 1977. Candidates identified for study included 642 workers from five grain elevators, 206 civic workers, and 302 noncedar sawmill workers (1980). Because there were no female grain elevator workers, women were excluded from the two comparison populations. Nonwhite workers were eliminated the crystalline silica content of the dust ranged from 3.8% to 5.0%. The average total dust concentration in the sawmills was 1.9 mg/m (the range was not reported). There were no statistically significant differences between grain workers and civic workers with regard to the prevalences of eye, nasal, and chest symptoms when workers were matched for smoking habits. Grain workers and sawmill workers had statistically significant differences in the prevalences of these symptoms (Table VIII). 
Cough, sputum, wheezing, and breathlessness were more prevalent in smokers than in nonsmokers in all # groups. The predicted values for FEV1 and FVC (mean percentage) were significantly lower for nonsmoking or ex-smoking grain workers than for nonsmoking or ex-smoking civic or sawmill workers (p<0.05). The more non-atopic status of grain workers compared with civic workers suggests that a self selection (healthy worker effect) occurs. On IX). In addition to tests conducted in the primary study, other tests performed were (1 ) methacholine broncochallenge tests, with FEV1 and FVC measured before and after the tests, (2) serum precipitin responses to grain dust extracts, (3) bronchial inhalation challenge to grain dust and grain dust extract, with FEV^ and FVC measured before and after the tests, and (4) venous blood samples for determination of absolute eosinophil counts. On the basis of these test results, the 33 workers were divided into three groups. Twenty-two of these workers (Groups A and B) reported respiratory symptoms (cough, sputum production with or without wheezing, and breathlessness) and demonstrated decrements in lung function. Group A consisted of 6 workers who also had a +PC2 0 =Provocation concentration of methacholine to induce a 20% fal I in FEV-j. the workers in Group B demonstrated bronchial reactivity to methacholine at a level similar to that of patients with symptomatic occupational asthma. The remaining workers in Group B showed bronchial reactivity to methacholine challenge within the range seen in patients with chronic bronchitis. Groups A and B had evidence of airway obstruction, indicated by the decline in FEV1 , FEF25-75%' an(- FEV-|^VC (Table X). Group A had an absolute eosinophil count of 244/mm'- compared with 156/mm^ for Group B. None of the subjects reacted to skin tests or responded to grain dust extracts with serum precipitins. Although grain dust did not cause a bronchial reaction in all the workers, it cannot be excluded as the causative agent for the asthma and chronic bronchitis. Asthma is not a graded uniform response across exposed groups; rather it is an incident frequency response. Also, the results of exposure to the settled grain dust may not be indicative of those that may occur with fresh grain dust. 5.7.3.4 Tabona et al. (1984) In the fourth report of the series, Tabona et al. (1984) conducted a health survey in October and November 1981 using workers from the same worker populations studied in the first three reports (Chan-Yeung et al. 1980, 1981. The characteristics of 267 grain workers who had not changed their smoking habits over the 6 -year study period were compared with the characteristics reported in the first health survey (Chan-Yeung et al. 1980 5.7.3.5 Enarson et al. (1985) In the fifth report, Enarson et al. (1985) analyzed the respiratory functions of 81 of the grain workers described by Tabona et al . (1984). These workers were matched for age and smoking habits. Twenty-seven of the # . ANIMAL STUDIES The animal studies cited here identify the mechanism(s) of lung response to grain dust exposure. Mice exposed to grain dust concentrations ranging from 1200+500 to 5400+700 mg/m^ total dust for 2 to 28 days developed an increased pulmonary macrophage response (Armanious et al. 1982). Rabbits exposed to grain dust at 20 mg/m for 24 weeks, 5 days a week, developed inflammation of the alveoli and interstitium (Stepner et al. 1986). 
Exposure to mycotoxin produced noncytotoxic histamine release in rat peritoneal cells (Warren and Ho Iford-Stevens 1986) and inhibited several critical cellular functions in rat alveolar macrophages (Sorenson et al. 1986). In rats, intratracheal instillation of grain dust initiated an acute inflammatory reaction as evidenced by an influx of neutrophils into the airspaces and later into the lung interstitium (Keller et al . 1987). Rats do not respond to histamine, yet histamine is the major pharmacologic mediator of acute bronchoconstriction in man. The allergic anaphylactic response of guinea pigs is mediated by a subtype of IgG (heterocytrotropic IgG) which is unlike the human IgG mediated responses. Human allergy is IgE mediated (Bass et al. 1988). Limited conclusions have been drawn from the rodent animal modeling studies of human etiopathogenes. However, these studies support the proposed immunologic mechanism(s) presented in this document. # POSSIBLE MECHANISMS OF ACTION Grain dust is a complex mixture of organic and inorganic compounds. dust as a source of material for the RAST assay, and found that the prevalence of IgE antibodies to grain dust antigens was significantly higher than the prevalence of IgE antibodies to grain antigens. In Sweden, febrile reactions occurred in 19% of farmers exposed to unusually moldy grain (Malmberg et al. 1985). This variability of the agents in dust may be partially responsible for the variations in hypersensitivity noted in the reports cited above; or other causal mechanisms may be involved such as irritant effects or toxic reactions (e.g., activation of the alternative pathway of complement activation). Allergic reactions to grain mites and weevils have been described. Ingram et al. (1979) obtained responses to bronchial provocation after farmers inhaled storage mite extract. Warren et al . (1983) Olenchock et al. (1978Olenchock et al. ( , 1980aOlenchock et al. ( , 1980b and Olenchock and Major (1979) have suggested that grain dust induces activation of the alternative complement pathway in man and that the activity appears to vary depending on the type of grain dust exposure. Though a number of potential mechanisms or interactions have been activated the specific grain dust component that activates the alternative complement pathway is yet to be identified. Moldy hay has been shown to induce activation of the alternative pathway in human (Edwards 1976) and rabbit serum (Edwards et al. 1974). # Microbial Toxins Bacteria are ubiquitous in grain dust (Outkiewicz 1978;DeLucca et al.1984;Lacey 1980 exposure to a 1% weevil extract. (Olenchock et al 1978;Wirtz et al 1984b) as well as in settled dust. DeLucca and Palmgren (1987) reported a seasonal variation in endotoxin concentration in both respirable and settled grain dust. Endotoxin levels in settled dusts were lowest in January and highest in November. Respirable samples collected in the fall months (September and October) showed significantly higher levels than those collected on other sampling dates. Mycotoxins such as aflatoxin are commonly found in grain. They are considered to be carcinogenic as well as hepatotoxic (WHO 1979). Mycotoxins are products of secondary fungal metabolism; they may be located within the hyphae and spores and may be excreted into the grain (Palmgren and Lee 1986). Mycotoxins are usually associated with a toxic reaction that occurs as a result of food ingestion. 
Burg and Shotwell (1984) measured the aflatoxin level of corn dust during harvest at a farm in Georgia, U.S.A., where the level of contamination was known to be high. Airborne dust samples collected at the front of the combine had an average aflatoxin concentration of 3,850 ng/g, and the operator was exposed to aflatoxin concentrations up to 1,360 ng/g in the combine cab. Sorenson et al. (1981) analyzed samples of airborne grain dust for aflatoxin. Samples were collected from grain terminals in Duluth, Minnesota, Superior, Wisconsin, and from a corn dumping station in Georgia. No aflatoxin was detected in the samples from the Superior-Duluth area, but the airborne corn dust from Georgia contained 130 ppb aflatoxin B^. Dashek et al. (1983) later confirmed the absence of aflatoxin in grain from the Superior-Duluth area. # . RESEARCH NEEDS Research is needed in the following areas: (1) Establish the relationship between the recognized health effects of exposure to grain dust and the various constituents of the dust, including the possible effects of pesticides, fumigants, endotoxins, and mycotoxins. (2) Determine the relationship between the various acute and chronic clinical effects of grain dust exposure. (3) Identify the importance of the host factor in modifying the workers' response to grain dust exposure. (4) Determine the relationship between levels of grain dust exposure, smoking, and respiratory disorders. (5) Determine the role and mechanism of action of grain dust components. (7) Determine the feasibility and efficacy of using small amounts of mineral oil to reduce grain dust in elevators. # CONCLUSIONS From epidemiologic studies it is evident that grain dust is associated with respiratory symptoms (i.e., cough, dyspnea, rhinitis, wheezing, and chronic airway obstruction) and nonpulmonary disorders (i.e., conjunctivitis, grain fever, and dermatitis). Epidemiologic data indicate an excess risk of chronic obstructive and restrictive pulmonary disease in workers with chronic exposure to grain dust, although the mechanism(s) responsible for the lung abnormalities is not known. Worker exposure to grain dust has resulted in increased incidences of chronic bronchitis, asthma and chronic obstructive pulmonary disease. These adverse health effects are likely to result from repetitive acute inflammatory responses to grain dust exposure. Epidemiologic and animal studies support the hypothesis that immunologic mechanism(s) cause the adverse health effects seen in grain workers. Although thorough characterization of allergenicity (Type I ) is lacking, it can be concluded that potent immunologic responses evoke inflammatory-mediated responses in airway smooth muscle. A number of animal studies have been conducted to examine the mechanisms of response to grain dust, but the modeling approaches taken do not study the same physiologic parameter of response. For instance, all human studies evaluate forced expiratory flowrate (FEV, FVC, FEFgg 75^. yet the animal studies cited do not evaluate these parameters. If they did, it would be possible to compare animal and human responses almost directly. Because exposure to grain dust results in an immunologic response in some individuals, it is presently impossible to recommend an exposure concentration at which all workers would be protected from adverse health effects. However, reducing exposure to grain dust will decrease exposure to the agent(s) in grain dust that elicit the adverse health effects. # SUMMARY IN SWEDISH # ASF. 
Dammfria mottagningsgropar for spannmal (Dust-free pits for grain). Enarson DA, Vedal S, Chan-Yeung M. Rapid decline in FEV^ in grain handlers. Am Rev Respir Dis 132 (1985) No. PB87-117172 Washington, DC 1-257 (January 1987) 1-257. Farant J-P, Moore CF. Dust exposures in the Canadian grain industry. Am Ind Hyg Assoc J 39 (1978) 177-194. Federal Register. 45 FR (No. 33); 10732-10737, February 1980. U.S. Department of Labor, 29 CFR, Parts 1910, 1918, 1926, and 1928 Grain dust from an elevator and threshing floor was cul tured to determine the types of microorganisms present and the number of organisms per gram of dust. The total microbial count per gram of grain dust ranged from 168,000 to 544,000 in five samples. (Stepanov 1967) A 10-year study in Poland that measured the viable microorganisms in the air at a variety of worksites Five grain elevators along the M ississippi River near New Orleans were sampled over a 2-year period to determine thei r bacter ial samples were col breathing zones using personal a ir samplers and s te rile filte rs of 0.45-mm pore size. Twenty dust samples were also collected from various sections of the elevators where grain dusts were deposited. Settled grain dust had popula tions of bacteria that ranged from 1.9 to 53.4 x 10®/g (total plate counts). The range was 0.9 to 14.8 x 10®/g for gram-positive bacteria and 0.1 to 5.0 x 106/g for gram-negative bacteria. (DeLucca et a I . 1984) Airborne dust samples were taken from 21 farms of farmers who were undergoing a study to determine the occurrence of symptoms (feb rile and airway-obstructive) and their immune statu s. Twenty-four samples were collected from the breathing zone on a 0.1-m filte r while the subject was handling grain or hay. (Malmberg et a l. 1985)
The purpose of the memorandum is to exchange information and expertise in the area of occupational safety and health. One product of this agreement is the development of documents to provide the scientific basis for establishing recommended occupational exposure limits. These limits will be developed separately by the two countries according to their different national policies. This document on the health effects of occupational exposure to grain dust is the first product of that agreement. The document was written by Michael Brown (DSDTT/NIOSH) and was reviewed by the Criteria Group and by DSDTT/NIOSH.

# INTRODUCTION

Adverse health effects from exposure to grain dust were first reported in 1713, when Bernardino Ramazzini noted that grain dust irritated the throats, eyes, skin, and lungs of grain workers (Ramazzini 1713). Workers may be exposed to grain dust during farming operations or while working at grain elevators and flour, feed, and seed mills. Exposure to grain dust may also occur during the transportation of grain between these facilities. Present-day grain workers have been reported to exhibit both pulmonary effects (including cough, dyspnea, wheezing, asthma, chronic bronchitis, farmer's lung, chronic airway obstruction, mycotoxicosis, and allergic alveolitis) and nonpulmonary effects (including conjunctivitis, rhinitis, grain fever, and dermatitis).

In the United States of America (U.S.A.), the National Institute for Occupational Safety and Health (NIOSH) has previously considered the explosive properties of grain dust and has recommended safe work practices and engineering controls (NIOSH 1983a). In Sweden, Lantbrukets Brandskyddskommitte has issued recommendations on how to prevent the spontaneous combustion and explosion of grain dust (SBK 1977). Data have also been reported on grain dust contaminants, which include fungal spores, bacteria, mites, and insect body parts (NIOSH 1986a).

This paper addresses the adverse health effects of human exposure to grain dust. A thorough search of the literature has identified three groups of studies specifically dealing with this type of occupational exposure. These epidemiologic studies are discussed in detail in Section 5.7.

# CHARACTERISTICS AND COMPOSITION OF GRAIN DUST

Grain dust consists of 60% to 75% organic material and 25% to 40% inorganic material (Yoshida and Maybank 1980), including the following:

(1) Fragments of cereal grains (e.g., wheat, oats, barley, rye, and corn), oil seeds (e.g., rapeseed, linseed, and sunflower seed), and pulses (the edible seeds of legumes such as peas and soybeans) (Becklake 1980),
(2) Decomposition products of grains, seeds, and pulses (Chan-Yeung and Ashley 1978),
(3) Inorganic materials such as soil and traces of chemical elements (Yoshida and Maybank 1980),
(4) Microorganisms (NIOSH 1986a, 1986b),
(5) Insects, insect parts, and mites (Labour Canada 1981),
(6) Hairs, feathers, and excreta of rodents and birds (Becklake 1980),
(7) Fragments of plant matter (Becklake 1980),
(8) Chemicals such as fertilizers, pesticides, and herbicides (Federal Register 1980), and
(9) Other contaminants such as metal fragments, lubricating oils, or paint chips that may have accumulated with the grain during the harvesting and subsequent processing or storage (Labour Canada 1981).
Airborne grain dust concentrations in storage and grain-handling facilities are affected by the design of ventilation systems (if present), the intensity and thoroughness of housekeeping efforts, the amounts of contaminants present, the humidity, the age of the grain, the extent to which the grain has been cleaned, and the degree to which the grain has been manipulated.

# Microorganisms

The microbial flora associated with grain dust consist of a wide variety of fungi and bacteria. The predominant organisms isolated from grain dust depend on the time the dust is produced (during harvest or storage) and factors such as original growing conditions, season of the year, geographical location, water content of the grain, temperature, type of grain, and storage practices. Some organisms invade or contaminate the grain when it is developing in the fields, and other organisms are primarily associated with grain during storage (Lacey 1980; NIOSH 1986a).

Under field conditions, fungi such as Cladosporium, Alternaria, Fusarium, Diplodia, Chaetomium, Rhizopus, and Absidia are predominant, but bacteria such as Erwinia herbicola, Pseudomonas, Bacillus, and Streptomyces are also found (NIOSH 1986a). Though microorganisms from the field can still be isolated after the grain has been stored, the number of field organisms usually declines over time, and some are replaced by other fungi and bacteria (Lacey 1980). Aspergillus and Penicillium species are the predominant fungi associated with storage, but grain that has become moist or heated often contains Micropolyspora, Thermoactinomyces vulgaris, and yeasts such as Candida, Mucor pusillus, and Absidia (NIOSH 1986a). Appendix Table A-1 lists the predominant microorganisms identified in selected reports.

# Pesticides

Grain may be fumigated with pesticides at a number of points between the field and storage facility to prevent spoilage or loss from pests. Fumigation frequently occurs during transportation by truck or freight car to large elevators. Grain may be fumigated again during transfer into elevators, during storage, and also during discharge into barges, ships, or rail cars. Grain in long-term storage is subjected to periodic preventive fumigation and treatment if pest infestation occurs.

Pesticides that are currently banned for use with grain in the U.S.A. are ethylene dibromide (Federal Register 1984), carbon disulfide, ethylene dichloride, and carbon tetrachloride (Federal Register 1985). Pesticides that are currently being used are methyl bromide (EPA 1986) and aluminum or magnesium phosphide (EPA 1987). Malathion has an EPA label registration that permits it to be used for protecting grain from future infestation. Sampling has been conducted for carbon disulfide, ethylene dibromide, and carbon tetrachloride (McMahon 1971; NIOSH 1977, 1984a, 1985a, 1985b); chloroform and 1,2-dichloroethylene (NIOSH 1984a, 1985a); methyl bromide and ethylene dichloride (NIOSH 1977, 1985a); carbon disulfide (NIOSH 1977, 1985b); malathion (Palmgren and Lee 1984; NIOSH 1985c); 1,2-dibromoethane (Berck 1974; Panel on Fumigant Residues in Grain 1974; Heikes 1985); phosphine (NIOSH 1977, 1985c, 1987); ethylene dichloride (Berck 1974; NIOSH 1984a); carbon tetrachloride (NIOSH 1976a); trichloroethylene and chloroform (Panel on Fumigant Residues in Grain 1974); and diazinon (Palmgren and Lee 1984).
None of these pesticides is known to cause the major health effects identified with exposures to grain dust. NIOSH has recommended that some pesticides (e.g., carbon tetrachloride, trichloroethylene, and ethylene dibromide) be regarded as potential occupational carcinogens and that efforts be made to substitute other agents or to take appropriate control measures (NIOSH 1976b, 1978, 1983b).

# Inorganic compounds

Wirtz and Olenchock (1984a) characterized the elemental composition of airborne grain dusts from six different grains and the settled dust from grain elevators. The data from this study demonstrated that differences exist between the elemental compositions of the airborne grain dusts generated by different grains and between the compositions of settled and airborne grain dusts.

Concern exists about the silica content of grain dust. Silica has been reported to constitute 9.96% of dusts with a particle size <44 µm (Heatley et al. 1944). Stepanov et al. (1967) reported that up to 70% of the grain dust from the elevator and threshing floor of a grain elevator consisted of organic material and that the mineral fraction of the dust was composed of 8% to 18% free silica. When available, silica concentrations are reported in the epidemiologic studies in Section 5.7.

# ANALYTICAL METHODS

Because of the variety of contaminants occurring in grain dust, the analytical methods for their determination cannot be reviewed in this brief document. The reader is referred to standard textbooks on analysis of organic chemicals in the workplace. Whether or not the specific components of the dusts are measured, the exposure measurement should involve the collection of the total and respirable fractions. These can be collected using standard methods for total nuisance dust (NIOSH Method 0500) and respirable nuisance dust (NIOSH Method 0600) (NIOSH 1984b); the gravimetric arithmetic behind such methods is sketched following Table I below. Several methods are available for sampling the microflora of both settled and airborne grain dust. These methods vary in their degree of precision, sophistication, and suitability (Lacey 1980; NIOSH 1986a).

Reported grain dust concentrations are summarized in Table I. Data on exposure to grain dust are listed by occupational group, process, or area monitored in Appendix Tables A-2 and A-3 for Canada and the U.S.A. In Sweden, the concentration of grain dust ranged from 4 to 53 mg/m³ in five grain pits (ASF 1985).

# EFFECTS IN HUMANS

This document deals with the adverse health effects of exposure to grain dust. The occurrence of these adverse health effects has been documented in the literature since 1713.

# Uptake and deposition in human lungs

No information has been found regarding the uptake and deposition of grain dust in human lungs.

# Acute effects

Acute exposure to grain dust has resulted in pruritic skin reactions, conjunctivitis, rhinitis, asthma, and grain fever (symptoms of the latter may include cough, chest tightness, wheezing, dyspnea [shortness of breath], expectoration, fever, chills, flushed face, and pain in the joints and muscles). See also Section 5.7, Epidemiologic studies.

# Chronic effects

Chronic exposure to grain dust has resulted in chronic bronchitis and hyperreactive airways. See also Section 5.7, Epidemiologic studies. Neuropsychiatric effects have been seen in grain elevator workers exposed to carbon disulfide (Peters et al. 1982).

Table I. Reported grain dust concentrations

Location         n    Concentration (mg/m³)   Reference
Grain elevator   -    2.5 to 19.9             NIOSH (1977)
Grain elevator   -    2.0 to 118.8            NIOSH (1977)
Grain elevator   -    0.2 to 6.6              NIOSH (1977)
Grain elevator   -    0.7 to 35.9             NIOSH (1977)
Grain elevator   -    0.28 to 9.5             NIOSH (1985c)
Grain elevator   -    0.34 to 30.0            NIOSH (1985c)
Grain elevator   13   0.96 to 9.48            NIOSH (1986a)

*Area dust concentrations. †Personal dust concentrations.
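As a point of reference for the concentrations in Table I, the following is a minimal sketch of the gravimetric arithmetic underlying filter-based methods such as NIOSH 0500 and 0600: the concentration is the mass gained by the filter divided by the volume of air drawn through it. The filter masses, pump flow rate, and sampling time below are hypothetical values chosen for illustration, not data from any study cited here.

```python
# Minimal sketch: gravimetric dust concentration from a filter sample.
# All numeric values below are hypothetical, for illustration only.

def dust_concentration_mg_m3(pre_weight_mg, post_weight_mg,
                             flow_l_per_min, minutes):
    """Concentration = collected mass / sampled air volume."""
    collected_mass_mg = post_weight_mg - pre_weight_mg
    sampled_volume_m3 = flow_l_per_min * minutes / 1000.0  # 1 m^3 = 1000 L
    return collected_mass_mg / sampled_volume_m3

# Hypothetical full-shift personal sample: pump at 2.0 L/min for 8 hours.
conc = dust_concentration_mg_m3(pre_weight_mg=10.00, post_weight_mg=13.17,
                                flow_l_per_min=2.0, minutes=8 * 60)
print(f"8-hr TWA concentration: {conc:.1f} mg/m^3")  # -> 3.3 mg/m^3
```

When a shift is covered by several consecutive samples rather than one, the 8-hr TWA is the time-weighted average of the individual sample concentrations.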
# Critical effects

No specific adverse health effect can be recognized as always occurring first in workers exposed to grain dust. The first effect could be manifested as irritation of the throat, eyes, skin, or lungs.

# Dose response/dose effect

Table II lists the reported human health effects at various grain dust exposure concentrations.

# Case studies

The following health effects have been reported in workers exposed to grain dust: silicosis in a worker who unloaded grain from rail cars (Heatley et al. 1944), pneumoconiosis in dock laborers who primarily handled grain (Dunner et al. 1946), farmer's lung in farmers who handled grain (Dickie and Rankin 1958; Patterson et al. 1974; Kotimaa et al. 1984), neuropsychiatric disorders in grain storage workers (Peters et al. 1982), pulmonary mycotoxicosis (now referred to as organic dust toxic syndrome [doPico 1986]) in farmers who handled moldy silage (Emanuel et al. 1975), and grain fever and severe dyspnea in grain elevator workers (Skoulas et al. 1964).

A logistic regression analysis produced the odds ratios listed in Table IV (an odds ratio is the ratio of the odds of having a symptom in one group to the odds in another, where the odds are the probability of having the symptom divided by the probability of not having it; a worked example of this arithmetic appears below). These ratios indicate that the odds of developing respiratory symptoms from grain dust exposure are independent of and usually greater than the odds of developing them from smoking.

One survey (1983) compared 283 grain elevator workers with 192 city service workers (comparison population). The two groups were comparable with respect to age, sex, height, weight, and smoking habits. The workers were given a limited physical examination and were administered medical and occupational questionnaires to obtain information on the frequency of eye, nasal, and respiratory symptoms and their perceived level of dust exposure ("less than average," "above average," or "average"). Spirometric measurements were obtained for FVC, FEV1, FEF50%, and FEF75%. Other tests included (1) oral temperature, (2) skin tests using pollen, mixed feathers, cat and rat epithelium, grass, and mixed grain dusts, and (3) venous blood samples for total leukocyte count.

During the work shift, 209 total dust measurements (8-hr TWAs) were made. Ninety-three percent of the grain dust concentrations were below 10 mg/m³, and 86% were below 5 mg/m³, with a mean concentration of 3.3 mg/m³. Statistically significant differences (p<0.05) existed in the prevalence rates of the following symptoms among all grain workers and all comparison workers, respectively: cough, 48% and 32%; expectoration, 38% and 19%; wheezing, 13% and 9%; dyspnea, 12% and 6%; nasal stuffiness, 38% and 26%; and eye irritation, 13% and 6%. In the comparison group, workers who smoked had a significantly greater prevalence (p<0.05) of cough, expectoration, and nasal symptoms than did nonsmokers. Grain elevator workers who reported that their perceived exposure to grain dust was "average" had a significantly greater occurrence (p<0.05) of cough than those who reported "less than average" exposures. Those who assessed their exposure as "greater than average" had a significantly greater prevalence (p<0.05) of dyspnea, wheezing, and eye symptoms than those reporting "average" or "less than average" exposures.
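The odds-ratio arithmetic referenced above can be made concrete with the cough prevalences reported in this survey (48% among grain workers, 32% among comparison workers). The sketch below computes an unadjusted odds ratio; it is an illustration only, since the values in Table IV come from a logistic regression with covariate adjustment.

```python
# Unadjusted odds ratio from two reported prevalences.
# Prevalences (cough) are from the survey summarized above;
# a logistic regression would additionally adjust for covariates.

def odds(prevalence):
    """Odds = P(symptom) / P(no symptom)."""
    return prevalence / (1.0 - prevalence)

p_grain, p_comparison = 0.48, 0.32
odds_ratio = odds(p_grain) / odds(p_comparison)
print(f"odds (grain workers):      {odds(p_grain):.2f}")       # 0.92
print(f"odds (comparison workers): {odds(p_comparison):.2f}")  # 0.47
print(f"unadjusted odds ratio:     {odds_ratio:.2f}")          # ~1.96
```

An unadjusted ratio of about 2 means the odds of cough among grain workers were roughly double those among comparison workers.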
An analysis of the relationship between dust concentrations and symptom prevalence was conducted for 209 grain workers (no explanation was given for the reduced number of workers). At total grain dust concentrations >5 mg/m³, grain workers had a significantly greater prevalence of cough (p<0.005), expectoration (p<0.0001), dyspnea (p<0.001), and eye irritation (p<0.0001) than did city service workers.

An analysis of pulmonary function values for 241 grain workers and 191 comparison workers was conducted (no explanation was given for the reduced number of workers). Grain workers experienced a significant pre-shift to post-shift percentage change for FVC (p<0.05), FEF50% (p<0.01), and FEF75% (p<0.001). A multiple regression analysis correcting for age, height, and smoking habits revealed that the grain dust concentration had a statistically significant (p<0.05) effect on the pre-shift to post-shift percentage changes in FVC, FEF50%, and FEF75%. The significant effect noted for FVC limits the validity of the flow rate measurements, because the shift in lung volume changed the volume at which the flow measurements were made. The average respirable dust concentration was 0.9±0.7 mg/m³, and the average nonrespirable dust concentration was 5.7±10.9 mg/m³. Note that the standard deviation for the nonrespirable dust concentration is larger than the mean because the data are greatly skewed; the validity of the data is not affected, however.

Pulmonary function analyses were based on results from 34 of the 47 grain elevator workers (the remaining workers were either unable to report for the tests or had results that were rejected on technical grounds). The FVC decreased from Monday to Wednesday, and the Wednesday change was sustained on Friday (p<0.001). These data were used to examine a dose-effect relationship for grain dust exposure. Results showed that 50% of the workers underwent a daily decrease of at least 923 ml/sec (p=0.004) and of 310 ml/sec in FEF25% (p=0.08) for each 1 mg/m³ increase in the concentration of respirable dust; a sketch of this kind of slope estimate follows below.

5.7.2.3 Broder et al. (1983)

In a report based on information collected in the 1977 survey (Broder et al. 1979), Broder et al. (1983) analyzed subsets of the grain workers. No results were reported for the chest radiographs. On the basis of skin test results, two subsets of grain workers were established for further study: (1) 18 grain handlers with positive reactions to one or more grains and a comparison group of 18 grain workers with no reaction, and (2) 21 grain handlers with positive reactions to one or more fungal antigens and a comparison group of 21 grain workers with no reaction. Workers were matched for age, duration of employment, smoking status, and pack years of smoking. No significant differences were found for demographics, respiratory symptoms, or pulmonary function.

On the basis of results of the inhalation challenge tests, three subsets of grain workers were established for further analysis: (1) 12 nonsmoking grain workers with chronic cough and 12 grain workers with no cough (internal comparison group), (2) 9 grain workers with a decrease in FEF50% from Monday to Friday and 9 grain workers with no decrease in FEF50% (internal comparison group), and (3) 10 grain workers whose baseline FEV1 was <70% and 7 grain workers whose baseline was >100% (internal comparison group). Insofar as possible, the internal comparison workers were matched with the other grain workers for age, duration of employment, and smoking history. No significant differences were found for demographic variables, respiratory symptoms, or pulmonary function.
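The dose-effect figures quoted above for the 47-worker study (e.g., a decline of at least 923 ml/sec per 1 mg/m³ of respirable dust) are regression slopes. As a minimal sketch, assuming fabricated data points rather than data from any cited study, the code below estimates such a slope by ordinary least squares.

```python
# Minimal sketch: estimating a dose-effect slope by least squares.
# x = respirable dust concentration (mg/m^3); y = daily decline in a
# flow measure (ml/sec). The data below are invented for illustration.
import numpy as np

dust = np.array([0.3, 0.8, 1.1, 1.6, 2.2, 2.9])               # mg/m^3
decline = np.array([250., 700., 1050., 1400., 2100., 2600.])  # ml/sec

slope, intercept = np.polyfit(dust, decline, 1)
print(f"estimated decline per 1 mg/m^3: {slope:.0f} ml/sec")  # -> ~918
```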
For the longitudinal analysis, 14 workers who were laid off and 14 who were not laid off were matched as closely as possible for age and smoking habits. The laid-off workers were examined in January and February before being rehired; they were subsequently examined between April and June. Steadily employed workers were examined at the same intervals. After they were rehired, workers who had been laid off reported an increase in cough (p<0.04), rhinitis (p<0.03), and sputum production (p<0.05), and a decrease in shortness of breath (p<0.01) and respiratory illness (p<0.02). Workers who were steadily employed had an increase in cough (p<0.04), rhinitis (p<0.03), and sputum production (p<0.05) during the same time period. Compared with these workers, those who had been laid off reported significantly less cough (p<0.05) and sputum production (p<0.01) in February, and less shortness of breath (p<0.05) after they were rehired. Workers who had been laid off showed improvement in FEV1 (p<0.001) and FEF50% (p<0.001) in February followed by a decline after they were rehired. For that same period, steadily employed workers had an increase in FEV1 (p<0.002) in February followed by a decline.

An analysis covering the maximum consecutive period of reduced employment (January to March) included 15 workers who were laid off and 24 steadily employed workers. The subjects were not matched for age or smoking habits. The steadily employed workers had a significant increase in cough (p<0.02) and sputum production (p<0.001). Though low ambient temperature may have had an effect on the respiratory symptoms, adverse symptoms continued to increase as the temperature increased. The longitudinal changes in respiratory variables are partially reversible and are consistent with the effects of grain dust exposure.

In a follow-up survey (Broder et al. 1985), the grain handlers who had left employment were significantly younger (p<0.001), had a shorter duration of employment (p<0.001), and reported a greater prevalence of eye irritation (p<0.04), cough (p<0.04), and shortness of breath (p<0.003) than those grain workers who remained employed. The groups were compared using multiple logistic analyses of the symptoms and multiple regression analyses of pulmonary function measurements with adjustments for age, pack years of smoking history, and duration of employment (a sketch of such a covariate-adjusted comparison follows below). These adjustments increased the differences in prevalence rates for the aforementioned symptoms, but they did not change the statistical significance. The adjustments also increased the significance of the reported sputum prevalence from p=0.04 to p<0.02.

Civic workers who left the job were significantly younger (p<0.001), had a shorter duration of employment (p<0.002), had fewer pack years of smoking history (p<0.001), included a lower percentage of current smokers (p<0.02), had a lower prevalence of coughing (p<0.02), a lower FVC (p<0.002), a higher FEV1/FVC ratio (p<0.02), and a higher FEF75% (p<0.005) than those civic workers who remained employed. When adjusted for age, pack years of smoking history, and duration of employment, the differences were no longer statistically significant.
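The adjusted comparisons described above amount to regressing each outcome on group membership plus the covariates. The following is a minimal sketch of such an adjustment by ordinary least squares; the data are fabricated and the variable names are ours (the published analyses used multiple logistic and multiple linear regression).

```python
# Minimal sketch: covariate-adjusted group comparison via least squares.
# Model: FVC ~ group + age + pack_years + years_employed.
# All data below are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)
n = 40
group = rng.integers(0, 2, n)             # 1 = left employment, 0 = stayed
age = rng.uniform(20, 60, n)
pack_years = rng.uniform(0, 30, n)
years_employed = rng.uniform(1, 25, n)
fvc = (5.5 - 0.02 * age - 0.01 * pack_years + 0.1 * group
       + rng.normal(0, 0.2, n))           # liters

# Design matrix with an intercept column; beta[1] is the group effect
# after accounting for the covariates.
X = np.column_stack([np.ones(n), group, age, pack_years, years_employed])
beta, *_ = np.linalg.lstsq(X, fvc, rcond=None)
print(f"adjusted group difference in FVC: {beta[1]:+.2f} L")
```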
Data obtained in the 1977 survey (Broder et al. 1979) for the grain and civic workers who remained employed and were reexamined were compared with data obtained from those workers who remained employed but did not report for the 1980 survey (Broder et al. 1985). The grain workers who declined participation in the 1980 survey were significantly older (p<0.007), had a longer duration of employment (p<0.009), and had a greater number of pack years of smoking history (p<0.006) than those grain workers who were reexamined in the 1980 survey. The grain workers who declined participation also had a higher prevalence of sputum production (p<0.01), a lower FEF50% (p<0.009), and a lower FEV1/FVC (p<0.006) than those grain workers who were reexamined. However, the significance of these pulmonary parameters declined when adjusted for number of pack years of smoking. The civic workers who declined to participate in the 1980 survey were significantly older (p<0.03) than those civic workers who were reexamined.

There was a statistically significant greater prevalence of wheezing in nonsmoking civic workers compared with nonsmoking grain workers in 1977 (p<0.03) and in 1980 (p<0.001). In 1977, increased eye irritation (p<0.01) and a higher FEF75% (p<0.02) were measured in grain workers who smoked compared with civic workers who smoked. For grain and civic workers whose smoking status remained constant from 1977 to 1980, Table V shows demographic characteristics, symptom prevalence, and measures of pulmonary function. The changes in pulmonary function from 1977 to 1980 show (1) a significant increase in FEV1/FVC (p<0.04) among grain workers who were ex-smokers compared with civic workers who were ex-smokers, and (2) a decline in FEF75% (p<0.002) among grain workers who were current smokers compared with civic workers who were current smokers. A multiple regression analysis of change in pulmonary function from 1977 to 1980 demonstrated an inverse relationship between age in 1980 and the level of pulmonary function in 1977.

The longitudinal study by Broder (1980) showed that there was an increased prevalence of rhinitis, cough, and sputum production among rehired grain workers compared with continuously employed grain workers. However, the rehired workers showed a decrease in the prevalence of shortness of breath and respiratory illness compared with the continuously employed grain workers.

A subsequent report (1985) presented the results of special tests and/or analyses performed by Tabona et al. (1984) on subsets of these populations. These reports followed portions of the same populations of grain workers and controls for periods as long as 6 years (Table VI). Candidates identified for the first health survey, conducted in 1977, included 642 workers from five grain elevators, 206 civic workers, and 302 noncedar sawmill workers (Chan-Yeung et al. 1980). Because there were no female grain elevator workers, women were excluded from the two comparison populations. Nonwhite workers were also excluded. The crystalline silica content of the dust ranged from 3.8% to 5.0%, and the average total dust concentration in the sawmills was 1.9 mg/m³ (the range was not reported). There were no statistically significant differences between grain workers and civic workers with regard to the prevalences of eye, nasal, and chest symptoms when workers were matched for smoking habits. Grain workers and sawmill workers had statistically significant differences in the prevalences of these symptoms (Table VIII).
Cough, sputum, wheezing, and breathlessness were more prevalent in smokers than in nonsmokers in all groups. The predicted values for FEV1 and FVC (mean percentage) were significantly lower for nonsmoking or ex-smoking grain workers than for nonsmoking or ex-smoking civic or sawmill workers (p<0.05). The more non-atopic status of grain workers compared with civic workers suggests that a self-selection (healthy worker effect) occurs.

Thirty-three workers underwent additional testing (Table IX). In addition to tests conducted in the primary study, other tests performed were (1) methacholine bronchochallenge tests, with FEV1 and FVC measured before and after the tests, (2) serum precipitin responses to grain dust extracts, (3) bronchial inhalation challenge to grain dust and grain dust extract, with FEV1 and FVC measured before and after the tests, and (4) venous blood samples for determination of absolute eosinophil counts. On the basis of these test results, the 33 workers were divided into three groups. Twenty-two of these workers (Groups A and B) reported respiratory symptoms (cough, sputum production with or without wheezing, and breathlessness) and demonstrated decrements in lung function. Group A consisted of 6 workers who also had a positive PC20 (the provocation concentration of methacholine required to induce a 20% fall in FEV1). Some of the workers in Group B demonstrated bronchial reactivity to methacholine at a level similar to that of patients with symptomatic occupational asthma. The remaining workers in Group B showed bronchial reactivity to methacholine challenge within the range seen in patients with chronic bronchitis. Groups A and B had evidence of airway obstruction, indicated by the decline in FEV1, FEF25-75%, and FEV1/FVC (Table X). Group A had an absolute eosinophil count of 244/mm³ compared with 156/mm³ for Group B. None of the subjects reacted to skin tests or responded to grain dust extracts with serum precipitins. Although grain dust did not cause a bronchial reaction in all the workers, it cannot be excluded as the causative agent for the asthma and chronic bronchitis. Asthma is not a graded uniform response across exposed groups; rather, it is an incident frequency response. Also, the results of exposure to the settled grain dust may not be indicative of those that may occur with fresh grain dust.

5.7.3.4 Tabona et al. (1984)

In the fourth report of the series, Tabona et al. (1984) conducted a health survey in October and November 1981 using workers from the same worker populations studied in the first three reports (Chan-Yeung et al. 1980, 1981). The characteristics of 267 grain workers who had not changed their smoking habits over the 6-year study period were compared with the characteristics reported in the first health survey (Chan-Yeung et al. 1980).

5.7.3.5 Enarson et al. (1985)

In the fifth report, Enarson et al. (1985) analyzed the respiratory functions of 81 of the grain workers described by Tabona et al. (1984). These workers were matched for age and smoking habits. Twenty-seven of the

# ANIMAL STUDIES

The animal studies cited here identify the mechanism(s) of lung response to grain dust exposure. Mice exposed to grain dust concentrations ranging from 1200±500 to 5400±700 mg/m³ total dust for 2 to 28 days developed an increased pulmonary macrophage response (Armanious et al. 1982). Rabbits exposed to grain dust at 20 mg/m³ for 24 weeks, 5 days a week, developed inflammation of the alveoli and interstitium (Stepner et al. 1986).
Exposure to mycotoxin produced noncytotoxic histamine release in rat peritoneal cells (Warren and Holford-Stevens 1986) and inhibited several critical cellular functions in rat alveolar macrophages (Sorenson et al. 1986). In rats, intratracheal instillation of grain dust initiated an acute inflammatory reaction as evidenced by an influx of neutrophils into the airspaces and later into the lung interstitium (Keller et al. 1987). Rats do not respond to histamine, yet histamine is the major pharmacologic mediator of acute bronchoconstriction in man. The allergic anaphylactic response of guinea pigs is mediated by a subtype of IgG (heterocytotropic IgG), which is unlike the human IgG-mediated responses; human allergy is IgE mediated (Bass et al. 1988). Limited conclusions have been drawn from the rodent modeling studies of human etiopathogenesis. However, these studies support the proposed immunologic mechanism(s) presented in this document.

# POSSIBLE MECHANISMS OF ACTION

Grain dust is a complex mixture of organic and inorganic compounds. One study used grain dust as a source of material for the RAST assay and found that the prevalence of IgE antibodies to grain dust antigens was significantly higher than the prevalence of IgE antibodies to grain antigens. In Sweden, febrile reactions occurred in 19% of farmers exposed to unusually moldy grain (Malmberg et al. 1985). This variability of the agents in dust may be partially responsible for the variations in hypersensitivity noted in the reports cited above; alternatively, other causal mechanisms may be involved, such as irritant effects or toxic reactions (e.g., activation of the alternative complement pathway).

Allergic reactions to grain mites and weevils have been described. Ingram et al. (1979) obtained responses to bronchial provocation after farmers inhaled storage mite extract. Warren et al. (1983) reported responses after exposure to a 1% weevil extract. Olenchock et al. (1978, 1980a, 1980b) and Olenchock and Major (1979) have suggested that grain dust induces activation of the alternative complement pathway in man and that the activity appears to vary depending on the type of grain dust exposure. Though a number of potential mechanisms or interactions have been investigated (Wirtz et al. 1984b), the specific grain dust component that activates the alternative complement pathway has yet to be identified. Moldy hay has been shown to induce activation of the alternative pathway in human (Edwards 1976) and rabbit serum (Edwards et al. 1974).

# Microbial Toxins

Bacteria are ubiquitous in grain dust (Dutkiewicz 1978; DeLucca et al. 1984; Lacey 1980). Endotoxin has been detected in airborne grain dust (Olenchock et al. 1978; Wirtz et al. 1984b) as well as in settled dust. DeLucca and Palmgren (1987) reported a seasonal variation in endotoxin concentration in both respirable and settled grain dust. Endotoxin levels in settled dusts were lowest in January and highest in November. Respirable samples collected in the fall months (September and October) showed significantly higher levels than those collected on other sampling dates.

Mycotoxins such as aflatoxin are commonly found in grain. They are considered to be carcinogenic as well as hepatotoxic (WHO 1979). Mycotoxins are products of secondary fungal metabolism; they may be located within the hyphae and spores and may be excreted into the grain (Palmgren and Lee 1986). Mycotoxins are usually associated with a toxic reaction that occurs as a result of food ingestion.
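Mycotoxin burdens in dust are reported per gram of dust (ng/g), so estimating a worker's airborne exposure also requires the airborne dust concentration. The sketch below shows that conversion; the dust concentration used is a hypothetical value, while the 1,360 ng/g figure matches the combine-cab measurement cited in the next paragraph.

```python
# Minimal sketch: converting a mycotoxin content in dust (ng per gram
# of dust) into an airborne mycotoxin concentration (ng/m^3).
# The dust concentration is a hypothetical value for illustration.

aflatoxin_ng_per_g_dust = 1360.0   # ng aflatoxin per g dust (cited below)
dust_mg_per_m3 = 5.0               # hypothetical airborne dust level

dust_g_per_m3 = dust_mg_per_m3 / 1000.0
airborne_aflatoxin_ng_m3 = aflatoxin_ng_per_g_dust * dust_g_per_m3
print(f"airborne aflatoxin: {airborne_aflatoxin_ng_m3:.1f} ng/m^3")  # 6.8
```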
Burg and Shotwell (1984) measured the aflatoxin level of corn dust during harvest at a farm in Georgia, U.S.A., where the level of contamination was known to be high. Airborne dust samples collected at the front of the combine had an average aflatoxin concentration of 3,850 ng/g, and the operator was exposed to aflatoxin concentrations up to 1,360 ng/g in the combine cab. Sorenson et al. (1981) analyzed samples of airborne grain dust for aflatoxin. Samples were collected from grain terminals in Duluth, Minnesota, and Superior, Wisconsin, and from a corn dumping station in Georgia. No aflatoxin was detected in the samples from the Superior-Duluth area, but the airborne corn dust from Georgia contained 130 ppb aflatoxin B1. Dashek et al. (1983) later confirmed the absence of aflatoxin in grain from the Superior-Duluth area.

# RESEARCH NEEDS

Research is needed in the following areas:

(1) Establish the relationship between the recognized health effects of exposure to grain dust and the various constituents of the dust, including the possible effects of pesticides, fumigants, endotoxins, and mycotoxins.

(2) Determine the relationship between the various acute and chronic clinical effects of grain dust exposure.

(3) Identify the importance of the host factor in modifying the workers' response to grain dust exposure.

(4) Determine the relationship between levels of grain dust exposure, smoking, and respiratory disorders.

(5) Determine the role and mechanism of action of grain dust components.

(7) Determine the feasibility and efficacy of using small amounts of mineral oil to reduce grain dust in elevators.

# CONCLUSIONS

From epidemiologic studies it is evident that grain dust is associated with respiratory symptoms (i.e., cough, dyspnea, rhinitis, wheezing, and chronic airway obstruction) and nonpulmonary disorders (i.e., conjunctivitis, grain fever, and dermatitis). Epidemiologic data indicate an excess risk of chronic obstructive and restrictive pulmonary disease in workers with chronic exposure to grain dust, although the mechanism(s) responsible for the lung abnormalities is not known. Worker exposure to grain dust has resulted in increased incidences of chronic bronchitis, asthma, and chronic obstructive pulmonary disease. These adverse health effects are likely to result from repetitive acute inflammatory responses to grain dust exposure. Epidemiologic and animal studies support the hypothesis that immunologic mechanism(s) cause the adverse health effects seen in grain workers. Although thorough characterization of allergenicity (Type I [IgE-mediated]) is lacking, it can be concluded that potent immunologic responses evoke inflammatory-mediated responses in airway smooth muscle.

A number of animal studies have been conducted to examine the mechanisms of response to grain dust, but the modeling approaches taken do not study the same physiologic parameters of response. For instance, all human studies evaluate forced expiratory flowrates (FEV1, FVC, FEF25-75%), yet the animal studies cited do not evaluate these parameters. If they did, it would be possible to compare animal and human responses almost directly.

Because exposure to grain dust results in an immunologic response in some individuals, it is presently impossible to recommend an exposure concentration at which all workers would be protected from adverse health effects. However, reducing exposure to grain dust will decrease exposure to the agent(s) in grain dust that elicit the adverse health effects.

# SUMMARY IN SWEDISH
ASF. Dammfria mottagningsgropar for spannmal (Dust-free pits for grain).

Enarson DA, Vedal S, Chan-Yeung M. Rapid decline in FEV1 in grain handlers. Am Rev Respir Dis 132 (1985).

No. PB87-117172. Washington, DC (January 1987), 1-257.

Farant J-P, Moore CF. Dust exposures in the Canadian grain industry. Am Ind Hyg Assoc J 39 (1978) 177-194.

Federal Register. 45 FR (No. 33); 10732-10737, February 1980. U.S. Department of Labor, 29 CFR, Parts 1910, 1918, 1926, and 1928.

Grain dust from an elevator and threshing floor was cultured to determine the types of microorganisms present and the number of organisms per gram of dust. The total microbial count per gram of grain dust ranged from 168,000 to 544,000 in five samples (Stepanov 1967).

A 10-year study in Poland measured the viable microorganisms in the air at a variety of worksites.

Five grain elevators along the Mississippi River near New Orleans were sampled over a 2-year period to determine their bacterial populations. Samples were collected from workers' breathing zones using personal air samplers and sterile filters of 0.45-µm pore size. Twenty dust samples were also collected from various sections of the elevators where grain dusts were deposited. Settled grain dust had populations of bacteria that ranged from 1.9 to 53.4 x 10⁶/g (total plate counts). The range was 0.9 to 14.8 x 10⁶/g for gram-positive bacteria and 0.1 to 5.0 x 10⁶/g for gram-negative bacteria (DeLucca et al. 1984).

Airborne dust samples were taken from 21 farms whose farmers were undergoing a study to determine the occurrence of symptoms (febrile and airway-obstructive) and their immune status. Twenty-four samples were collected from the breathing zone on a 0.1-µm filter while the subject was handling grain or hay (Malmberg et al. 1985).
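The plate counts above are reported as organisms per gram of dust. For illustration only - the dilution schemes of the cited studies are not given here - the following sketch shows the standard dilution-plating arithmetic used to derive such counts; all numbers are hypothetical.

```python
def cfu_per_gram(colonies: int, dilution_factor: float, plated_ml: float,
                 sample_mass_g: float, diluent_ml: float) -> float:
    """Standard plate-count arithmetic: CFU/g of dust =
    (colonies / volume plated) * dilution factor * (diluent volume / mass)."""
    cfu_per_ml_diluted = colonies / plated_ml
    cfu_per_ml_stock = cfu_per_ml_diluted * dilution_factor
    return cfu_per_ml_stock * diluent_ml / sample_mass_g

# Hypothetical example: 1 g of settled dust suspended in 100 mL of diluent;
# plating 0.1 mL of a 1,000-fold dilution yields 53 colonies:
print(f"{cfu_per_gram(53, 1e3, 0.1, 1.0, 100.0):.2e} CFU/g")  # 5.30e+07
```

A count of 5.3 x 10⁷ CFU/g corresponds to the upper end of the total-plate-count range reported by DeLucca et al. (1984).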
Several developments have fueled the renewed interest in xenotransplantation - the use of live animal cells, tissues and organs in the treatment or mitigation of human disease. The world-wide, critical shortage of human organs available for transplantation and advances in genetic engineering and in the immunology and biology of organ/tissue rejection have renewed scientists' interest in investigating xenotransplantation as a potentially promising means to treat a wide range of human disorders. This situation is highlighted by the fact that in the United States alone, 13 patients die each day waiting to receive a life-saving transplant to replace a diseased vital organ. While animal organs are proposed as an investigational alternative to human organ transplantation, xenotransplantation is also being used in the effort to treat diseases for which human organ allotransplants are not traditional therapies (e.g., epilepsy, chronic intractable pain syndromes, insulin-dependent diabetes mellitus, and degenerative neurologic diseases such as Parkinson's disease and Huntington's disease). At present, the majority of clinical xenotransplantation procedures utilize avascular cells or tissues rather than solid organs, in large part due to the immunologic barriers that the human host presents to vascularized xenotransplantation products. However, with recent scientific advances, xenotransplantation is viewed by many researchers as having the potential for treating not only end-organ failure but also chronic debilitating diseases that affect major segments of the world population.

Although the potential benefits may be considerable, the use of xenotransplantation also presents a number of significant challenges. These include (1) the potential risk of transmission of infectious agents from source animals to patients, their close contacts, and the general public; (2) the complexities of informed consent; and (3) animal welfare issues.

On September 23, 1996, the Department of Health and Human Services (DHHS) published for public comment the Draft PHS Guideline on Infectious Disease Issues in Xenotransplantation to address the infectious disease concerns raised by xenotransplantation (61 Federal Register 49919). The Draft Guideline was jointly developed by five components within DHHS - the Centers for Disease Control and Prevention (CDC), Food and Drug Administration (FDA), Health Resources and Services Administration (HRSA), National Institutes of Health (NIH), all parts of the U.S. Public Health Service (PHS), plus the DHHS Office of the Assistant Secretary for Planning and Evaluation. This Draft Guideline discusses general principles for the prevention and control of infectious diseases that may be associated with xenotransplantation. Intended to minimize potential risks to public health, these general principles provide guidance on the development, design, and implementation of clinical protocols to sponsors of xenotransplantation clinical trials and local review bodies evaluating proposed xenotransplantation clinical protocols. The Draft Guideline emphasizes the need for appropriate clinical and scientific expertise on the xenotransplantation research team, adequate protocol review, thorough health surveillance plans, and comprehensive informed consent and education processes. In response to the Draft Guideline, the DHHS received over 140 written comments reflecting a broad spectrum of public opinion (Federal Register docket No. 96M-0311).
Comments were received from a variety of stakeholders, including representatives of academia; industry; patient, consumer, and animal welfare advocacy organizations; professional, scientific and medical societies; ethicists; researchers; other government agencies; and private citizens. In revising the Draft Guideline, careful consideration was given to recent scientific findings, each of the written comments, as well as to public comments received at several national, international, and DHHS-sponsored workshops. These meetings constituted critically important public forums for discussing the scientific, public health, and social issues attendant to xenotransplantation.

The DHHS sponsored two public workshops on xenotransplantation during 1997 and 1998. The first meeting, held in July 1997, focused on virology and documented evidence of cross-species infections. Titled "Cross-Species Infectivity and Pathogenesis," the meeting addressed current knowledge about the mechanisms and consequences of infectious agent transmission across species barriers. Discussions also focused on the possibility that an infectious agent might cross from an animal donor organ or tissue to human xenotransplantation product recipients. The conference also highlighted gaps in knowledge about the emergence of new infections in humans, especially as a result of xenotransplantation. The basic consensus of the meeting was that while there were examples of animal infectious agents crossing species barriers to infect, and even cause diseases in, humans, the actual likelihood of this in xenotransplantation product recipients cannot be ascertained at this time. Small, adequate, and well-controlled clinical trials designed to test the safety and efficacy of xenotransplantation were considered to be appropriate. One anticipated outcome of such trials would be to both minimize and better understand the risks of transmission of infectious agents. (The meeting summary can be accessed at: ).

In January 1998, a second DHHS workshop, titled "Developing U.S. Public Health Service Policy in Xenotransplantation," focused on the current and evolving U.S. public health policy in xenotransplantation. (The meeting transcripts can be accessed at ) Among other issues, the regulatory framework, a national xenotransplantation database, and a national advisory committee were discussed. During this workshop, several themes were raised repeatedly and echoed many of the written public comments on the Draft Guideline. First, there was a broad consensus that the Draft Guideline was important and should be implemented, albeit with some modifications. For example, it was expressed that there could be more public awareness and participation in the development of public health policies in the field of xenotransplantation. Second, there was strong support for the DHHS proposal to establish a national xenotransplantation advisory committee, not only to facilitate analysis and discussion of the scientific, medical, ethical, legal, and social issues raised by xenotransplantation, but also to review and make recommendations about proposed clinical trial protocols. There was broad support for proceeding cautiously with xenotransplantation trials; however, some participants held that a national moratorium on clinical trials in xenotransplantation might be advantageous until the national xenotransplantation advisory committee is established and operational.
While there is no definitive scientific evidence that xenotransplantation would promote cross-species infectious agent transmission leading to disease, there are data providing a reasonable basis for caution. Some members of the scientific and medical community and concerned citizens expressed the opinion that there is a perceived greater risk from the use of xenotransplantation products procured from nonhuman primates (as opposed to other species) because of potential public health risks and animal welfare concerns.

The January 1998 workshop also included presentations by representatives of the World Health Organization (WHO), the Organization for Economic Cooperation and Development (OECD), and several nations engaged in developing policies on xenotransplantation. These presentations placed the U.S. policy in global context and enhanced international dialogue on important public health safeguards. Because of the potential for the secondary transmission of infectious agents, the public health risks posed by xenotransplantation transcend national boundaries. International communication and cooperation in the development of public health policies are critical elements in successfully addressing the global safety and ethical challenges inherent in xenotransplantation. To this end, several countries, including Canada, France, Germany, the Netherlands, Spain, Sweden, the United Kingdom, and the United States, and several international organizations, such as the WHO, OECD, and the Council of Europe, are actively engaged in international workshops and consultations on xenotransplantation.

# MAJOR REVISIONS AND CLARIFICATIONS TO THE GUIDELINE

Major revisions and clarifications to the Draft Guideline are briefly summarized and discussed below. These revisions were prompted by public comments submitted to the Draft Guideline docket, concerns expressed at public workshops, evolving science, and developing international policies. PHS intends to address related issues that go beyond the scope of this Guideline in future guidance documents. In the future, the Guideline may be amended as needed to appropriately reflect the accrual of new knowledge about cross-species infectivity and pathogenesis, new insights into the potential risks associated with xenotransplantation, policies currently under development (e.g., the Secretary's Advisory Committee on Xenotransplantation and the National Xenotransplantation Database), and other evolving public health policies in this arena.

# Definition of Xenotransplantation and Xenotransplantation Product

The definition of "xenotransplantation" has been revised from that used in the Draft Guideline. For the purposes of this document and U.S. P.H.S. policy, xenotransplantation is now defined to include any procedure that involves the transplantation, implantation, or infusion into a human recipient of either (a) live cells, tissues, or organs from a nonhuman animal source or (b) human body fluids, cells, tissues or organs that have had ex vivo contact with live nonhuman animal cells, tissues, or organs. Furthermore, xenotransplantation products have been defined to include live cells, tissues or organs used in xenotransplantation. The term xenograft, used in previous PHS documents, will no longer be used to refer to all xenotransplantation products.

# Clinical Protocol Review and Oversight

A variety of opinions were expressed regarding the appropriate level of protocol review and oversight of clinical trials in the U.S.
For example, the American Society of Transplant Surgeons stated that the Draft Guideline represented an unnecessary intrusion of government regulation into the performance of transplant surgery. In contrast, some organizations with commercial interests in the development of xenotransplantation contended that an inappropriate share of the burden for oversight of clinical trials had been assigned to local review committees and that the responsibility for this oversight should reside at the national level with the FDA. Several academic veterinarians, a group of 44 virologists, and other concerned citizens asserted that strict regulations should accompany the Guideline and that the major responsibility for determining the suitability of any animals as sources of nonhuman animal live cells, tissues, or organs used in xenotransplantation must reside with the FDA.

The revised Guideline makes clear that, in addition to review by appropriate local review bodies (Institutional Review Boards, Institutional Animal Care and Use Committees, and Institutional Biosafety Committees), the FDA has regulatory oversight for xenotransplantation clinical trials conducted in the U.S. Xenotransplantation products (i.e., live cells, tissues, or organs from a nonhuman animal source or human body fluids, cells, tissues, or organs that have had ex vivo contact with live cells, tissues, or organs from nonhuman animal sources and are used for xenotransplantation) are considered to be biological products, or combination products that contain a biological component, subject to regulation by FDA under section 351 of the Public Health Service Act (42 U.S.C. 262) and under the Federal Food, Drug and Cosmetic Act (21 U.S.C. 321 et seq.). In accordance with the applicable statutory provisions, xenotransplantation products are subject to the FDA regulations governing clinical investigations and product approvals (e.g., the Investigational New Drug regulations in 21 CFR Part 312, and the regulations governing licensing of biological products in 21 CFR Part 601). Investigators should submit an application for FDA review before proceeding with xenotransplantation clinical trials. Sponsors are strongly encouraged to meet with FDA staff in the pre-submission phase. In addition to the guidances referred to below, the FDA is considering further regulations and/or guidances regarding, for example, the development of xenotransplantation protocols and the technical and clinical development of xenotransplantation products. Xenotransplantation clinical protocols may also be reviewed by the Secretary's Advisory Committee on Xenotransplantation. The scope and process for this review will be described in future publications.

# Responsibility for Design and Conduct of Clinical Protocols

The Draft Guideline originally proposed that clinical centers, source animal facilities, and individual investigators share the responsibilities for various aspects of the clinical trial protocol, including pre-xenotransplantation screening programs, patient informed consent procedures, record keeping, and post-xenotransplantation surveillance activities. The revised Guideline clarifies that primary responsibility for designing and monitoring the conduct of xenotransplantation clinical trials rests with the sponsor (as provided under, e.g., 21 CFR 312.23(a)(6)(d) and 312.50).
# Informed Consent and Patient Education

Virologists, infectious disease specialists, health care workers, and patient advocates emphasized the need for the sponsor to offer assistance to xenotransplantation product recipients in educating their close contacts about potential infectious disease risks and methods for reducing those risks. The Guideline has been revised to state that the sponsor should ensure that counseling regarding behavior modification and other issues associated with risk of infection is provided to the patient and made available to the patient's family and other close contacts prior to and at the time of consent, and that such counseling should continue to be available thereafter. The revised Guideline clarifies and strengthens the informed consent process for xenotransplantation product recipients and the education and counseling process for recipients and their close contacts, including associated health care professionals. It also emphasizes the need for xenotransplantation product recipients to comply with long-term or life-long surveillance regardless of the outcome of the clinical trial or the status of the graft or other xenotransplantation product.

# Deferral of Allograft and Blood Donors

The 1996 Draft Guideline recommended that xenotransplantation product recipients refrain from donating body fluids and/or parts for use in humans. Some infectious disease specialists and an infectious disease control practitioner organization suggested that this be strengthened to active deferral of xenotransplantation product recipients, and that consideration also be given to the deferral of close contacts of xenotransplantation product recipients. This issue was addressed by the FDA Xenotransplantation Subcommittee of the Biological Response Modifiers Advisory Committee (December 1997; for transcript: ). The committee recommended that xenotransplantation product recipients and their close contacts be counseled and actively deferred from donation of body fluids and other parts. A proposed FDA policy was later presented to FDA's Blood Products Advisory Committee for further discussion (March 1998; for transcript: ). Of note, at the time of both these advisory committee meetings the operative definition of xenotransplantation did not include, as it does now, the use of certain products involving limited ex vivo exposure to xenogeneic cell lines or tissues. FDA has published a draft guidance document ("Guidance for Industry: Precautionary Measures to Reduce the Possible Risk of Transmission of Zoonoses by Blood and Blood Products from Xenotransplantation Product Recipients and Their Contacts") for public comment, which was again discussed by the FDA Xenotransplantation Subcommittee of the Biological Response Modifiers Advisory Committee on January 13, 2000. FDA will further consult with its advisors to identify the range of xenotransplantation products for which recipients and/or their contacts should be recommended for deferral from blood donation. Additionally, the range of contacts who should be deferred from blood donation will be clarified after further public discussion. The Guideline has been revised to reflect comments made at the FDA advisory committee meetings.
# Xenotransplantation Product Sources

Strong opposition to the use of nonhuman primates as xenotransplantation product sources was voiced by many individuals and groups, including 44 virologists, scientific and medical organizations such as the American Society of Transplant Physicians, the American College of Cardiology, private citizens, and commercial sponsors of xenotransplantation clinical trials. The concerns focused on the ethics of using animals so closely related to humans, as well as the risk of transmission of infectious diseases from nonhuman primates to humans. Many recommended that the Guideline state that clinical xenotransplantation trials using xenotransplantation products for which nonhuman primates served as source animals should not occur until a closer examination of infectious disease risks can be adequately carried out.

Scientific findings since the publication of the Draft Guideline have also resulted in revisions. For example, the ability of simian foamy virus (SFV) to persistently infect human hosts has been further characterized, the persistence of microchimerism with anatomically dispersed baboon cells containing SFV, baboon cytomegalovirus (CMV), and baboon endogenous retrovirus (BaEV) in human recipients of baboon liver xenotransplantation products has been documented, and new viruses capable of infecting humans have been identified in pigs. The active expression of infectious porcine endogenous retrovirus from multiple porcine cell types, and the ability of porcine endogenous retrovirus variants A and B to infect human cell lines in vitro, have been demonstrated, giving scientific plausibility to concerns that this retrovirus from porcine xenotransplantation products may be able to infect recipients in vivo. Diagnostic tests for porcine endogenous retrovirus, BaEV, and other relevant infectious agents have been developed, and studies are currently underway to assess the presence or absence of infectious endogenous retroviruses and other relevant infectious agents in both porcine and baboon xenotransplantation products and in the recipients of these xenotransplantation products. The risk of endogenous retrovirus infection, however, is multi-factorial, and it is not known whether results from these studies will be predictive of the potential infectious risks associated with future xenotransplantation products. One factor that impacts porcine endogenous retrovirus infectivity is its sensitivity to inactivation and lysis by human sera, yet the virus becomes resistant to inactivation after a single passage through human cells. It is hypothesized that pre-xenotransplantation removal of naturally occurring xenoreactive antibodies from the recipient and other modifications intended to facilitate xenotransplantation product survival, such as the procurement of xenotransplantation products or nonhuman animal live cells, tissues or organs used in the manufacture of xenotransplantation products from certain transgenic pigs, may also modulate the infectivity of endogenous retroviruses for xenotransplantation product recipients.
As the science regarding porcine endogenous retroviruses summarized above began to emerge, the FDA placed all clinical trials using porcine xenotransplantation products on hold (October 16, 1997) pending development by sponsors of sensitive and specific assays for (1) preclinical detection of infectious porcine endogenous retrovirus in porcine xenotransplantation products, (2) post-xenotransplantation screening for porcine endogenous retrovirus and clinical follow-up of porcine xenotransplantation product recipients, and (3) the development of informed consent documents that indicate the potential clinical implications of the capacity of porcine endogenous retrovirus to infect human cells in vitro. These issues were discussed publicly by the FDA Xenotransplantation Subcommittee of the Biological Response Modifiers Advisory Committee (December 1997; for transcript: ).

In response to concerns articulated by scientists and other members of the public regarding the use of nonhuman primate xenotransplantation products, the FDA, after consultation with other DHHS agencies, has issued a "Guidance for Industry: Public Health Issues Posed by the Use of Nonhuman Primate Xenografts in Humans" containing the following conclusions:

- "...[under 21 CFR 312.42(b)(1)(iv)], any protocol submission that does not adequately address these risks is subject to clinical hold (i.e., the clinical trial may not proceed) due to insufficient information to assess the risks and/or due to unreasonable risk.
- (3) at the current time, FDA believes there is not sufficient information to assess the risks posed by nonhuman primate xenotransplantation. FDA believes that it will be necessary for there to be public discussion before these issues can be adequately addressed..."

While the document "Guidance for Industry: Public Health Issues Posed by the Use of Nonhuman Primate Xenografts in Humans" specifically addresses the issue of nonhuman primates as sources for xenotransplantation products, the DHHS recognizes that other animal species have been used and/or are proposed as sources of xenotransplantation products and that all species pose infectious disease risks. Accordingly, the principles for source animal screening and health surveillance described in the revised Guideline apply to all candidate source animals regardless of species. These principles will need to be reassessed as new data become available.

# Source Animal Screening and Qualification

Many groups and individuals expressed concern that the Draft Guideline did not set forth sufficiently stringent principles and criteria for source animal husbandry and screening, source animal facilities, and procurement and screening of xenotransplantation products. This view was expressed by virologists, veterinarians, infectious disease specialists, concerned citizens, commercial producers of laboratory animals, industrial sponsors of xenotransplantation trials, and a number of professional, scientific, medical, and advocacy organizations, such as the American Society of Transplant Surgeons, Doctors and Lawyers for Responsible Medicine, the American College of Cardiology, the Biotechnology Industry Organization (BIO - representing 670 biotech companies), and the Association for Professionals in Infection Control and Epidemiology. Others expressed concern that the stringency of the Draft Guideline imposed high economic burdens on producers of xenotransplantation product source animals and/or on sponsors of xenotransplantation clinical trials.
However, in order to reduce the potential public health risks posed by xenotransplantation, strict control of animal husbandry and health surveillance practices is needed during the course of development of this technology. The Guideline has been revised to clarify the animal husbandry and pre-xenotransplantation infectious disease screening that should be performed before an animal can become a qualified source of xenotransplantation products. The revised Guideline now emphasizes that risk minimization precautions appropriate to each xenotransplantation product protocol should be employed during all steps of production and that screening, quarantine, and surveillance protocols should be tailored to the specific clinical protocol, xenotransplantation product, source animal, and husbandry history. Breeding programs using cesarean derivation of animals should be used whenever possible. Source animals should be procured from closed herds or colonies raised in facilities that have appropriate barriers to effectively preclude the introduction or spread of infectious agents. These facilities should actively monitor the herds for infectious agents. The revised Guideline clarifies and strengthens the infectious disease screening and surveillance practices that should be in place before a clinical trial can begin.

# Specimen Archives and Medical Records

A number of infectious disease specialists, veterinarians, epidemiologists, industry sponsors of xenotransplantation trials, biotechnology companies, professional organizations such as the American Society of Transplant Physicians, and consumer advocates requested clarification regarding the collection and usage of, and access to, biological specimens obtained from both source animals and xenotransplantation product recipients. The revised Guideline clarifies the recommended types, volumes, and collection schedule for biological specimens from both source animals and xenotransplantation product recipients. It also clearly distinguishes between biological specimens archived for public health investigations and specimens archived for use by the sponsor in conducting surveillance of source animals and post-xenotransplantation laboratory surveillance of xenotransplantation product recipients. The revised Guideline also states that health records and biologic specimens should be maintained for 50 years, based on the latency periods of known human pathogenic persistent viruses and the precedents established by the US Occupational Safety and Health Administration with respect to record-keeping requirements.

# National Xenotransplantation Database

A number of infectious disease specialists, epidemiologists, transplant physicians, and a state health official emphasized the need for accurate and timely information on infectious disease surveillance and xenotransplantation protocols and their outcomes. They further supported the concept of a national xenotransplantation database as described in the Draft Guideline. The revised Guideline describes the development of a pilot national xenotransplantation database to identify and implement routine data collection methods, system design, data reporting, and general start-up and to assess routine operational issues associated with a fully functional national database.
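The Guideline does not prescribe a data model for the pilot database. Purely as an illustration of the kind of linked information it describes (clinical center, source animal facility, recipient surveillance), the following Python sketch defines a hypothetical record type; every field name is an assumption made for the example, not part of the Guideline.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import List, Optional

@dataclass
class XenoProcedureRecord:
    """Hypothetical record for a pilot national xenotransplantation database.

    Field names are illustrative assumptions; the Guideline only calls for
    routine data collection from clinical centers and source animal
    facilities, not for any particular schema."""
    protocol_id: str                  # sponsor's clinical protocol identifier
    clinical_center: str              # center performing the procedure
    source_animal_facility: str      # facility that supplied the source animal
    source_species: str               # e.g., "pig"
    product_type: str                 # e.g., "islet cells"
    procedure_date: date
    recipient_id: str                 # coded identifier, not a name
    specimens_archived: bool          # public-health specimen archive collected?
    infectious_disease_screens: List[str] = field(default_factory=list)
    adverse_events: List[str] = field(default_factory=list)
    last_surveillance_visit: Optional[date] = None  # long-term follow-up

# Example use: register a hypothetical procedure and a follow-up visit.
record = XenoProcedureRecord(
    protocol_id="XTX-001", clinical_center="Center A",
    source_animal_facility="Facility B", source_species="pig",
    product_type="islet cells", procedure_date=date(2001, 8, 24),
    recipient_id="R-0001", specimens_archived=True)
record.last_surveillance_visit = date(2002, 2, 24)
```

Linking each recipient record back to its source animal facility is what would allow the population-based public health investigations the database is intended to support.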
The revisions also discuss plans to expand this pilot into a national xenotransplantation database intended to compile data from all clinical centers conducting trials in xenotransplantation and all animal facilities providing source animals for xenotransplantation.

# Secretary's Advisory Committee on Xenotransplantation

Xenotransplantation research brings to the fore certain challenges in assessing the potential impact of science on society as a whole, including the role of the public in those assessments. The broad spectrum of public opinions expressed since the publication of the Draft Guideline indicates that there is neither uniform public endorsement nor rejection of xenotransplantation. The fields of research involved are rapidly moving ones, at the leading edge of medical science. Furthermore, in many instances the clinical trials are privately funded and the public may not even be aware of them. However, public awareness and understanding of xenotransplantation is vital because the potential infectious disease risks posed by xenotransplantation extend beyond the individual patient to the public at large. In addition to these safety issues, a variety of individuals and groups have identified and/or raised concerns about issues such as animal welfare, human rights, community interest and consent, social equity in access to novel biotechnologies, and allocation of human allografts versus xenotransplantation products. For all of these reasons, public discourse on xenotransplantation research is critical and necessary.

The revised Guideline acknowledges the complexity, importance, and relevance of these issues, but emphasizes that the scope of the Guideline is limited to infectious disease issues. The revised Guideline discusses the development of the Secretary's Advisory Committee on Xenotransplantation (SACX) as a mechanism for ensuring ongoing discussions of the scientific, medical, social, and ethical issues and the public health concerns raised by xenotransplantation, including ongoing and proposed protocols. The SACX will make recommendations to the Secretary on policy and procedures and, as needed, on changes to the Guideline.

This guidance document represents PHS's current thinking on certain infectious disease issues in xenotransplantation. It does not create or confer any rights for or on any person and does not operate to bind PHS or the public. This guidance is not intended to set forth an approach that addresses all of the potential health hazards related to infectious disease issues in xenotransplantation nor to establish the only way in which the public health hazards that are identified in this document may be addressed. The PHS acknowledges that not all of the recommendations set forth within this document may be fully relevant to all xenotransplantation products or xenotransplantation procedures. Sponsors of clinical xenotransplantation trials are advised to confer with relevant authorities (the FDA, other reviewing authorities, funding sources, etc.) in assessing the relevance and appropriate adaptation of the general guidance offered here to specific clinical applications.

# Definitions

This section defines terms as used in this guideline document.

1. Allograft - a graft consisting of live cells, tissues, and/or organs between individuals of the same species.
2. Closed herd or colony - herd or colony governed by Standard Operating Procedures that specify criteria restricting admission of new animals to assure that all introduced animals are at the same or a higher health standard compared to the residents of the herd or colony.

3. Commensal - an organism living on or within another, but not causing injury to the host.

4. Good Clinical Practices - A standard for the design, conduct, performance, monitoring, auditing, recording, analyses, and reporting of clinical trials that provides assurance that the data and reported results are credible and accurate, and that the rights, integrity, and confidentiality of trial subjects are protected.

5. Infection Control Program - a systematic activity within a hospital or health care center charged with responsibility for the control and prevention of infections within the hospital or center.

6. Infectious agents - viruses, bacteria (including the rickettsiae), fungi, parasites, or agents responsible for Transmissible Spongiform Encephalopathies (currently thought to be prions) capable of invading and multiplying within the body.

10. Investigator - an individual who actually conducts a clinical investigation (i.e., under whose immediate direction the drug is administered or dispensed to a subject). In the event an investigation is conducted by a team of individuals, the investigator is the responsible leader of the team (see 21 CFR 312.3(b)).

11. Nosocomial infection - an infection acquired in a hospital.

12. Occupational Health Service - an office within a hospital or health care center charged with responsibility for the protection of workers from health hazards to which they may be exposed in the course of their job duties.

13. Procurement - the process of obtaining or acquiring animals or biological specimens (such as cells, tissues, or organs) from an animal or human for medicinal, research, or archival purposes.

14. Recipient - a person who receives or who undergoes ex vivo exposure to a xenotransplantation product (as defined in xenotransplantation).

15. Secretary's Advisory Committee on Xenotransplantation (SACX) - the advisory committee appointed by the Secretary of Health and Human Services to consider the full range of issues raised by xenotransplantation (including ongoing and proposed protocols) and make recommendations to the Secretary on policy and procedures.

16. Source animal - an animal from which cells, tissues, and/or organs for xenotransplantation are obtained.

17. Source animal facility - facility that provides source animals for use in xenotransplantation.

18. Sponsor - a person who takes responsibility for and initiates a clinical investigation. The sponsor may be an individual or a pharmaceutical company, government agency, academic institution, private organization or other organization. The sponsor does not actually conduct the investigation unless the sponsor is a sponsor-investigator (see, e.g., 21 CFR 312.3(b)).

19. Transmissible spongiform encephalopathies (TSEs) - fatal, subacute, degenerative diseases of humans and animals with characteristic neuropathology (spongiform change and deposition of an abnormal form of a prion protein present in all mammalian brains). TSEs are experimentally transmissible by inoculation or ingestion of diseased tissue, especially central nervous system tissue. The prion protein (intimately associated with transmission and pathological progression) is hypothesized to be the agent of transmission. Alternatively, other unidentified co-factors or an as-yet unidentified viral agent may be necessary for transmission.
Creutzfeldt-Jakob disease (CJD) is the most common human TSE.

20. Xenogeneic infectious agents - infectious agents that become capable of infecting humans due to the unique facilitating circumstances of xenotransplantation; includes zoonotic infectious agents.

21. Xenotransplantation - for the purposes of this document, any procedure that involves the transplantation, implantation, or infusion into a human recipient of either (A.) live cells, tissues, or organs from a nonhuman animal source or (B.) human body fluids, cells, tissues or organs that have had ex vivo contact with live nonhuman animal cells, tissues, or organs.

22. Xenotransplantation Product(s) - live cells, tissues or organs used in xenotransplantation (defined above). Previous PHS documents have used the term "xenograft" to refer to all xenotransplantation products.

23. Xenotransplantation Product Recipient - a person who receives or who undergoes ex vivo exposure to a xenotransplantation product.

24. Zoonosis - A disease of animals that may be transmitted to humans under natural conditions (e.g., brucellosis, rabies).

# Background

The demand for human cells, tissues and organs for clinical transplantation continues to exceed the supply. The limited availability of human allografts, coupled with recent scientific and biotechnical advances, has prompted the renewed development of investigational therapeutic approaches that use xenotransplantation products in human recipients. The experience with human allografts, however, has shown that infectious agents can be transmitted through transplantation. HIV/AIDS, Creutzfeldt-Jakob Disease, rabies, and hepatitis B and C, for example, have been transmitted between humans via allotransplantation.

The use of live nonhuman cells, tissues and organs for xenotransplantation raises serious public health concerns about potential infection of xenotransplantation product recipients with both known and emerging infectious agents. Zoonoses are infectious diseases of animals transmitted to humans via exposure to or consumption of the source animal. It is well documented that contact between humans and nonhuman animals - such as that which occurs during husbandry, food production, or interactions with pets - can lead to zoonotic infections. Many infectious agents responsible for zoonoses (e.g., Toxoplasma species, Salmonella species, or Cercopithecine herpesvirus 1 [B virus] of monkeys) are well characterized and can be identified through available diagnostic tests. Infectious disease public health concerns about xenotransplantation focus not only on the transmission of these known zoonoses, but also on the transmission of infectious agents as yet unrecognized. The disruption of natural anatomical barriers and immunosuppression of the recipient increase the likelihood of interspecies transmission of xenogeneic infectious agents. An additional concern is that these xenogeneic infectious agents could be subsequently transmitted from the xenotransplantation product recipient to close contacts and then to other human beings.

An infectious agent may pose risk to the patients and/or public if it can infect, cause disease in, and transmit among humans, or if its ability to infect, cause disease in, or transmit among humans remains inadequately defined. Emerging infectious agents may not be readily identifiable with current techniques. This was the case with the several-year delay in identifying HIV-1 as the etiologic agent for AIDS.
Retroviruses and other persistent infections may be associated with acute disease with varying incubation periods, followed by periods of clinical latency prior to the onset of clinically evident malignancies or other diseases. As the HIV/AIDS pandemic demonstrates, persistent latent infections may result in person-to-person transmission for many years before clinical disease develops in the index case, thereby allowing an emerging infectious agent to become established in the susceptible population before it is recognized.

# Scope of the Document

This guideline addresses the public health issues related to xenotransplantation and recommends procedures for diminishing the risk of transmission of infectious agents to the recipient, health care workers, and the general public. While it is beyond the scope of this document to address the array of complex and important ethical issues raised by xenotransplantation, this guideline describes a mechanism for ensuring ongoing broad public discussion of ethical issues related to xenotransplantation (section 5.3). Other publications and reports of public discussions (section 6., references C.7.a., c., d., h., I.; D.1.b. & I.) have addressed issues such as animal welfare, human rights, and community interest.

This guideline reflects the status of the field of xenotransplantation and knowledge of the risk of xenogeneic infections at the time of publication. The general guidance in this document will be augmented by public discussion, new advances in scientific knowledge and clinical experience, and specific FDA guidance documents intended to facilitate the implementation of the principles set forth herein. HHS may ask the Secretary's Advisory Committee on Xenotransplantation (SACX) to review the Guideline on a periodic basis and recommend appropriate revisions to the Secretary (section 5.3).

# Objectives

The objective of this PHS guideline is to present measures that can be used to minimize the risk of human disease due to xenogeneic infectious agents, including both recognized zoonoses and non-zoonotic infectious agents that become capable of infecting humans due to the unique facilitating circumstances of xenotransplantation. In order to achieve this goal, this document:

- Outlines the composition and function of the xenotransplantation team to ensure that appropriate technical expertise can be applied (section 2.1).
- Addresses aspects of the clinical protocol, clinical center, and the informed consent and patient education processes with respect to public health concerns raised by the potential for infections associated with xenotransplantation (sections 2.2-2.5).
- Provides a framework for pre-transplantation animal source screening to minimize the potential for transmission of xenogeneic infectious agents from the xenotransplantation product to the human recipient (section 3, particularly sections 3.3.-3.6.).
- Provides a framework for post-xenotransplantation surveillance to monitor transmission of infectious agents, including newly identified xenogeneic agents, to the recipient as well as health care workers and other individuals in close contact with the recipient (section 4, particularly sections 4.1.1. and 4.2.3.).
- Provides a framework for hospital infection control practices to reduce the risk of nosocomial transmission of zoonotic and xenogeneic infectious agents (section 4.2.).
- Provides a framework for maintaining appropriate records, including human and veterinary health care records (sections 4.3.
and 3.7), standard operating procedures of facilities and centers (sections 3.2, 3.4), and occupational health service program records (section 4.3).
- Provides a framework for archiving biologic samples from the source animal and the xenotransplantation product recipient. These records and samples will be essential in the event that public health investigations are necessitated by infectious diseases and other adverse events arising from xenotransplantation that could affect the public health (sections 3.7, 4.1.2., and 5.2).
- Discusses the creation of a national database that will enable population-based public health surveillance and investigations (section 5.1).
- Discusses the creation of a Secretary's Advisory Committee on Xenotransplantation (SACX) that will consider the full range of complex and interrelated issues raised by xenotransplantation, including ongoing and proposed protocols (sections 2.3. and 5.3.).

# XENOTRANSPLANTATION PROTOCOL ISSUES

# Xenotransplantation Team

The development and implementation of xenotransplantation clinical research protocols require expertise in the infectious diseases of both human recipients and source animals. Consequently, in addition to health care professionals who have clinical experience with transplantation, the xenotransplantation team should include as active participants: (1) infectious disease physician(s) with expertise in zoonoses, transplantation, and epidemiology; (2) veterinarian(s) with expertise in the animal husbandry issues and infectious diseases relevant to the source animal; (3) specialist(s) in hospital epidemiology and infection control; and (4) experts in research and diagnostic microbiology laboratory methodologies. The sponsor should ensure that the appropriate expertise is available in the development and implementation of the clinical protocol, including the onsite follow up of the xenotransplantation product recipient.

# Clinical Xenotransplantation Site

Any sites performing xenotransplantation clinical procedures should have experience and expertise with, and facilities for, any comparable allotransplantation procedures. All xenotransplantation clinical centers should utilize CLIA '88 (Clinical Laboratory Improvement Amendments of 1988) accredited virology and microbiology laboratories.

2.2.1. The safe conduct of xenotransplantation clinical trials should include the active participation of laboratories with the ability to isolate and identify unusual and/or newly recognized pathogens of both human and animal origin. Each protocol will present unique diagnostic, surveillance, and research needs that require expertise and experience in the microbiology and infectious diseases of both animals and humans. The sponsor should ensure that persons and centers with appropriate experience and expertise are involved in the study development, clinical application, and follow up of each protocol, either on-site or through formal and documented off-site collaborations. Sponsors are responsible for ensuring reviews by local review bodies as appropriate (Institutional Review Boards [IRBs], Institutional Animal Care and Use Committees [IACUCs], and Institutional Biosafety Committees [IBCs]), the FDA, and the SACX (upon implementation by the Secretary, HHS). The scope and process for SACX review will be described in subsequent publications.
Institutional review of xenotransplantation clinical trial protocols should address: (1) the potential risks of infection for the recipient and contact populations (including health care providers, family members, friends, and the community at large); (2) the conditions of source animal husbandry (e.g., screening program, animal quarantine); and (3) issues related to human and veterinary infectious diseases (including virology, laboratory diagnostics, epidemiology, and risk assessment).

# Health Screening and Surveillance Plans

Clearly defined methodologies for pre-xenotransplantation screening for known infectious agents and post-xenotransplantation surveillance are essential parts of clinical xenotransplantation trials and should be clearly developed in all protocols. Pre-xenotransplantation screening includes screening of the source herd (sections 3.2.-3.4.), the source animal(s) (section 3.5.), and the nonhuman animal live cells, tissues or organs used in the manufacture of the xenotransplantation product or the product itself (section 3.6.). Post-xenotransplantation surveillance includes surveillance of the recipient(s) (section 4.1.), selected health care workers or other contacts (section 4.2.), and the surviving source animal(s) (section 3.6.). The screening methods used and the specific agents sought will differ depending on the procedure, cells, tissue, or organ used, the source animal, and the clinical indication for xenotransplantation. Details of these screening and surveillance plans, including a summary of the relevant aspects of the health maintenance and surveillance program of the herd and the medical history of the source animal(s) (section 3), and written protocols for hospital infection control practices regarding both xenotransplantation product recipients and health care workers (section 4.2.), should be described in the materials submitted for review by the SACX, the FDA, and the local review bodies.

# Informed Consent and Patient Education Processes

In the process of obtaining and documenting informed consent, the sponsor and investigators should comply with all applicable regulatory requirement(s). In addition, the sponsor should ensure that counseling regarding behavior modification and other issues associated with risk of infection is provided to the patient and made available to the patient's family and contacts prior to and at the time of consent. Such counseling should remain available on an ongoing basis thereafter.

The informed consent discussion, the informed consent document, and the written information provided to potential xenotransplantation product recipients should address, at a minimum, the following points relating to the potential risk associated with xenotransplantation:

2.5.1. The potential for infection with zoonotic agents known to be associated with the nonhuman source animal species.

2.5.2. The potential for transmission to the recipient of unknown xenogeneic infectious agents. The patient should be informed of the uncertainty regarding the risk of infection, whether such infections might result in disease, the nature of disease that might result, and the possibility that infections with these agents may not be recognized for an extended period of time.

2.5.3. The potential risk for transmission of xenogeneic infectious agents (and possible subsequent manifestation of disease) to the recipient's family or close contacts, especially sexual contacts.
The recipient should be informed that immunocompromised persons may be at increased risk of xenogeneic infections. The recipient should be counseled regarding behavioral modifications that diminish the likelihood of transmitting infectious agents and relevant infection control practices (sections 4.2.1.1., 4.2.1.2., 4.2.1.5., and 4.2.3.1.).

2.5.4. The informed consent process should include a documented procedure to inform the recipient of the responsibility to educate his/her close contacts regarding the possibility of xenogeneic infections from the source animal species and to offer the recipient assistance with this education process, if desired. Education of close contacts should address the uncertainty regarding the risks of xenogeneic infections, information about behaviors known to transmit infectious agents from human to human (e.g., unprotected sex, breast-feeding, intravenous drug use with shared needles, and other activities that involve potential exchange of blood or other body fluids), and methods to minimize the risk of transmission. Recipients should educate their close contacts about the importance of reporting any significant unexplained illness through their health care provider to the research coordinator at the institutions where the xenotransplantation was performed.

2.5.5. The potential need for isolation procedures during any hospitalization (including, to the extent possible, the estimated duration of such confinement and the specific symptoms/situation that would prompt such isolation), and any specialized precautions needed to minimize acquisition or transmission of infections following hospital discharge.

2.5.6. The potential need for specific precautions following hospital discharge to minimize the risk that livestock of the source animal species and the recipient of the xenotransplantation product will represent biohazards to each other. For example, if a recipient comes into contact with the animal species from which the xenotransplantation product was procured, the xenotransplantation product (and therefore the recipient) may have an increased risk from exposures to agents infectious for the xenotransplantation product source species. Conversely, the recipient may represent a biohazard to healthy livestock if the presence of the xenotransplantation product enables the recipient to serve as a vector for outbreaks of disease in source species livestock.

2.5.7. The importance of complying with long-term or life-long surveillance necessitating routine physical evaluations and the archiving of tissue and/or body fluid specimens for public health purposes even if the experiment fails and the xenotransplantation product is rejected or removed. The schedule for clinical and laboratory monitoring should be provided to the extent possible. The patient should be informed that any serious or unexplained illness in themselves or their contacts should be reported immediately to the clinical investigator or his/her designee.

2.5.8. The responsibility of the xenotransplantation product recipient to inform the investigator or his/her designee of any change in address or telephone number for the purpose of enabling long-term health surveillance.

2.5.9. The importance of a complete autopsy upon the death of the xenotransplantation product recipient, even if the xenotransplantation product was previously rejected or removed.
Advance discussion with the recipient and his/her family concerning the need to conduct an autopsy is also encouraged in order to ensure that the recipient's intent is known to all relevant parties.

2.5.10. The long-term need for access by the appropriate public health agencies to the recipient's medical records. To the extent permitted by applicable laws and/or regulations, the confidentiality of medical records should be maintained. The informed consent document should include a statement describing the extent, if any, to which confidentiality of records identifying the subject will be maintained (45 CFR 46.116 or 21 CFR 50.25(a)(5)).

2.5.11. As an interim precautionary measure, xenotransplantation product recipients and certain of their contacts should be deferred indefinitely from donation of Whole Blood, blood components, including Source Plasma and Source Leukocytes, tissues, breast milk, ova, sperm, or any other body parts for use in humans. (A schematic sketch of this deferral rule follows section 2.5.13.) Pending further clarification, contacts to be deferred from donations should include persons who have engaged repeatedly in activities that could result in intimate exchange of body fluids with a xenotransplantation product recipient. For example, such contacts may include sexual partners, household members who share razors or toothbrushes, and health care workers or laboratory personnel with repeated percutaneous, mucosal, or other direct exposures. These recommendations may be revised based on ongoing surveillance of xenotransplantation product recipients and their contacts to clarify the actual risk of acquiring xenogeneic infections, and the outcome of deliberations between FDA and its advisors. FDA has published a draft guidance document ("Guidance for Industry: Precautionary Measures to Reduce the Possible Risk of Transmission of Zoonoses by Blood and Blood Products from Xenotransplantation Product Recipients and Their Contacts") for public comment and will consult with its advisors to identify the range of xenotransplantation products for which recipients and/or certain of their contacts should be recommended for deferral from blood donation. Additionally, the range of contacts who should be deferred from blood donation will be clarified after further public discussion.

2.5.12. Xenotransplantation product recipients who may wish to consider reproduction in the future should be aware that a potential risk of transmission of xenogeneic infectious agents not only to their partner but also to their offspring during conception, embryonic/fetal development and/or breast-feeding cannot be excluded.

2.5.13. All centers where xenotransplantation procedures are performed should develop appropriate xenotransplantation procedure-specific educational materials to be used in educating and counseling both potential xenotransplantation product recipients and their contacts. These materials should describe the xenotransplantation procedure(s), and the known and potential risks of xenogeneic infections posed by the procedure(s), in appropriate language. Those activities that are considered to be associated with the greatest risk of transmission of infection to contacts should be described. Education programs should detail the circumstances under which the use of personal protective equipment (e.g., gloves, gowns, masks) or special infection control practices are recommended, and emphasize the importance of hand washing. The potential for transmission of these agents to the general public should be discussed.
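The interim deferral criteria of section 2.5.11 reduce, in schematic form, to a simple screening rule. The Python sketch below encodes that rule under simplifying assumptions; the field names are hypothetical, and the sketch does not capture the product-specific refinements still under FDA discussion.

```python
def defer_from_donation(is_xeno_product_recipient: bool,
                        repeated_intimate_fluid_contact: bool) -> bool:
    """Interim precaution sketched from section 2.5.11: recipients are
    deferred indefinitely from donating blood, blood components, tissues,
    breast milk, ova, sperm, or other body parts; close contacts are
    deferred when they have engaged repeatedly in activities that could
    result in intimate exchange of body fluids with a recipient (e.g.,
    sexual partners, household members who share razors or toothbrushes,
    health care workers with repeated percutaneous or mucosal exposures)."""
    return is_xeno_product_recipient or repeated_intimate_fluid_contact

# Hypothetical screening examples
print(defer_from_donation(True, False))    # recipient -> True (defer)
print(defer_from_donation(False, True))    # exposed close contact -> True
print(defer_from_donation(False, False))   # unexposed donor -> False
```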
# ANIMAL SOURCES FOR XENOTRANSPLANTATION
Recognized zoonotic infectious agents and other organisms present in animals, such as normal flora or commensals, may cause disease in humans when introduced by xenotransplantation, especially in immunocompromised patients. The risk of transmitting xenogeneic infectious agents is reduced by procuring source animals from herds or colonies that are screened and qualified as free of specific pathogenic infectious agents and that are maintained in an environment that reduces exposure to vectors of infectious agents. Precautions intended to reduce risk should be employed in all steps of production (e.g., during animal husbandry and during procurement and processing of the nonhuman animal live cells, tissues or organs used in the manufacture of xenotransplantation products) and should be appropriate to each xenotransplantation protocol. Before an animal species is used as a source of xenotransplantation product(s), sponsors should adequately address the public health issues raised. These issues are delineated in more detail below. Procedures should be developed to identify incidents that negatively affect the health of the herd. This information is relevant to the safety review of every xenotransplantation product application. Such information, as well as the procedures to collect the information, should be reported to FDA. Some experts consider that nonhuman primates pose a greater risk of transmitting infections to humans. The PHS recognizes the substantial concerns about this issue that have been raised within the scientific community and the general public. In its April 6, 1999 guidance document ("Guidance for Industry: Public Health Issues Posed by the Use of Nonhuman Primate Xenografts in Humans"), FDA stated that, in accordance with the clinical hold regulations [21 CFR 312.42(b)(1)(iv)], "any protocol submission that does not adequately address these risks is subject to clinical hold (i.e., the clinical trial may not proceed) due to insufficient information to assess the risks and/or due to unreasonable risk..."
# Animal Procurement Sources
All xenotransplantation products pose a risk of infection and disease to humans. Regardless of the species of the source animal, precautions appropriate to each xenotransplantation product protocol should be employed in all steps of production (animal husbandry, procurement, and processing of nonhuman animal live cells, tissues or organs) to minimize this risk. Source animal procurement and processing procedures should include, at minimum, the following precautions: 3.1.1. Cells, tissues, and organs intended for use in xenotransplantation should be procured only from animals that have been bred and reared in captivity and that have a documented, well characterized health history and lineage. 3.1.2. Source animals should be raised in facilities with adequate barriers, i.e., biosecurity, to prevent the introduction or spread of infectious agents. Animals should also be obtained from herds or colonies with restricted admission of new animals. Such closed herds or colonies should be free of infectious agents that are relevant to the animal species and that may pose risk to the patient and/or the public. An infectious agent may pose risk to the patient and/or the public if it can infect, cause disease in, and transmit among humans, or if its ability to infect, cause disease in, or transmit among humans remains inadequately defined. In this regard, persistent viral infections are of particular concern. Source animals should specifically be free of infection with any identifiable exogenous persistent virus.
Breeding programs utilizing caesarean derivation of animals reduce the risk of maternal-fetal transmission of infectious agents and should be used whenever possible. The prevalence of exposure to these agents should be documented through periodic surveillance of the herd or colony using serologic and other appropriate diagnostic methodologies. 3.1.3. Animals from minimally controlled environments such as closed corrals (captive free-ranging animals) should not be used as source animals for xenotransplantation. Such animals have a higher likelihood of harboring adventitious infectious agents from uncontrolled contact with arthropods and/or other animal vectors. 3.1.4. Wild-caught animals should not be used as source animals for xenotransplantation. 3.1.5. Animals or live animal cells, tissues, or organs obtained from abattoirs should not be used for xenotransplantation. Such animals are obtained from geographically divergent farms or markets and are more likely to carry infectious agents due to increased exposure to other animals and increased activation and shedding of infectious agents during the stress of slaughter. In addition, health histories of slaughterhouse animals are usually not available. 3.1.6. Imported animals or the first generation of offspring of imported animals should not be used as source animals for xenotransplantation unless the animals belong to a species or strain (including transgenic animals) not available for use in the United States and their use is scientifically warranted. In this case, the imported animals should be documented to have been bred and continuously maintained in a manner consistent with the principles in this document. The source animal facility, production process, and records are subject to inspection by the FDA (Federal Food, Drug, and Cosmetic Act). The U.S. Department of Agriculture (USDA), Animal and Plant Health Inspection Service (APHIS), Veterinary Services (VS) regulates the importation of all animals and animal-origin materials that could represent a disease risk to U.S. livestock and poultry (9 CFR Part 122). Importation or interstate transport of any animal and/or animal-origin material that may represent such a disease risk requires a USDA permit. In addition, plans for testing and quarantine of the imported animals, as well as for health maintenance and surveillance of the herd or colony into which imported animals are introduced, should be developed and conducted by a veterinarian who is specifically trained in, or otherwise has a solid background in, foreign animal diseases. 3.1.7. Source animals from species in which transmissible spongiform encephalopathies have been reported should be obtained from closed herds with documented absence of dementing illnesses and controlled food sources for at least two generations prior to the source animal (section 3.2.6.3.). Xenotransplantation products should not be obtained from source animals imported from any country or geographic region where transmissible spongiform encephalopathies are known to be present in the source species, or from which the USDA prohibits or restricts importation of ruminants or ruminant products due to concern about transmissible spongiform encephalopathies. 3.1.8. The CDC, Division of Quarantine, regulates the importation of certain animals, including nonhuman primates (NHP), because of their potential to cause serious outbreaks of communicable disease in humans (42 CFR Part 71).
Importers must register with CDC, certify that imported NHP will be used only for scientific, educational, and exhibition purposes, implement disease control measures, maintain records regarding each shipment, and report suspected zoonotic illness in animals or workers. Further, the importation and/or transfer of known or potential etiological agents, hosts, or vectors of human disease (including biological materials) may require a permit issued by CDC's Office of Health and Safety.
# Source Animal Facilities
Potential source animals should be housed in facilities built and operated taking into account the factors outlined in this section. 3.2.1. Source Animal Facilities (facilities providing source animals for xenotransplantation) should be designed and maintained with adequate barriers to prevent the introduction and spread of infectious agents. Entry and exit of animals and humans should be controlled to minimize environmental exposures and inadvertent exposure to transmissible infectious agents. Source Animal Facilities should not be located in geographic proximity to manufacturing or agricultural activities that could compromise the biosecurity of these facilities. 3.2.6. The Source Animal Facility standard operating procedures should thoroughly describe the following: (1) criteria for animal admission, including sourcing and entry procedures; (2) description of the disease monitoring program; (3) criteria for the isolation or elimination of diseased animals, including a diagnostic algorithm for ill and dead animals; (4) facility cleaning and disinfecting arrangements; (5) the source and delivery of feed, water, and supplies; (6) measures to exclude arthropods and other animals; (7) animal transportation; (8) dead animal disposition; (9) criteria for the health screening and surveillance of humans entering the facility; and (10) permanent individual animal identification. 3.2.6.1. Animal movement through the secured facility should be described in the standard operating procedures of the facility. All animals introduced into the source colony other than by birth should go through a well-defined quarantine and testing period (section 3.5.). With regard to the reproduction and raising of suitable replacement animals, the use of methods such as artificial insemination (AI), embryo transfer, medicated early weaning, cloning, or hysterotomy/hysterectomy and fostering may minimize further colonization with infectious agents. 3.2.6.2. During final screening and qualification of individual source animals and procurement of live cells, tissues or organs for use in xenotransplantation, the potential for transmission of an infectious agent should be minimized by established standard operating procedures. One method to accomplish this is a step-wise "batch" or "all-in/all-out" method of source animal movement through the facility rather than continuous replacement movement. With the "all-in/all-out" or "batch" method, a cohort of qualified animals is quarantined from the closed herd or colony while undergoing final screening qualification and xenogeneic biomaterial procurement. After the entire cohort of source animals is removed, the quarantine and xenogeneic biomaterial processing areas of the animal facility are cleaned and disinfected prior to the introduction of the next cohort of source animals. 3.2.6.3. The feed components, including any antibiotics or other medicinals or other additives, should be documented for a minimum of two generations prior to the source animal.
Pasteurized milk products may be included in feeds. The absence of other mammalian materials, including recycled or rendered materials, should be specifically documented. The absence of such materials is important for the prevention of transmissible spongiform encephalopathies and other infectious agents. Potentially extended periods of clinical latency, the severity of consequent disease, and the difficulty of current detection methods highlight the importance of eliminating risk factors associated with transmissible spongiform encephalopathies. 3.2.7. The sponsor should establish records linking each xenotransplantation product recipient with the relevant health history of the source animal, herd or colony, and the specific organ, tissue, or cell type included in the xenotransplantation product or used in the manufacture of the xenotransplantation product. The relevant records include information on the standard operating procedures of the animal procurement facility, the herd health surveillance, and the lifelong health history of the source animal(s) for the xenotransplantation product (sections 3.2.-3.7.). 3.2.7.1. The sponsor should maintain these record systems and an animal numbering or other system that allows easy, accurate, and rapid linkage between the information contained in these different record systems and the xenotransplantation product recipient for 50 years beyond the date of xenotransplantation. If record systems are maintained in a computer database, electronic backups should be kept in a secure office facility and backup on hard copy should be routinely performed. 3.2.7.2. In the event that the Source Animal Facility ceases to operate, the facility should either transfer all animal health records and specimens to the respective sponsors or notify the sponsors of the new archive site. If the sponsor ceases to exist, decisions on the disposition of the archived records and specimens should be made in consultation with the FDA. 3.2.8. All animal facilities should be subject to inspection by designated representatives of the clinical protocol sponsor and public health agencies. The sponsor is responsible for implementing and maintaining a routine facilities inspection program for quality control and quality assurance.
# Pre-xenotransplantation Screening for Known Infectious Agents
The following points discuss measures for appropriate screening of known infectious agents in the herd, the individual source animal, and the nonhuman animal live cells, tissues or organs used in xenotransplantation. The selection of assays for pre-transplant screening should be determined by the source of the nonhuman animal live cells, tissues or organs and the intended clinical application of the xenotransplantation product. General guidance on adventitious agent testing may be found in "Points to Consider in the Characterization of Cell Lines Used to Produce Biologicals" (FDA, CBER, 1993) and in the guidance document from the International Conference on Harmonisation, "Q5D Quality of Biotechnological/Biological Products: Derivation and Characterization of Cell Substrates Used for Production of Biotechnological/Biological Products". 3.3.1.
The design of preclinical studies intended to identify infectious agents in the xenotransplantation product and/or the nonhuman animal live cells, tissues or organs intended for use in the manufacture of xenotransplantation products should take into consideration the source animal species and the specific manner in which the xenotransplantation product will be used clinically. These studies should identify infectious agents and characterize their potential pathogenicity and tropism for human cells by appropriate in vivo and in vitro assays. Characterization of persistent viral infections and endogenous retroviruses present in source animal cells, tissues or organs is particularly important. The information from these studies is necessary for the identification and development of appropriate assays for xenotransplantation product screening programs. 3.3.2. Programs for screening and detection of known infectious agents in the herd or colony, the individual source animal, and the xenotransplantation product itself or the nonhuman animal live cells, tissues or organs used in the manufacture of xenotransplantation products should take into account the infectious agents associated with the source animals used, the stringency of the husbandry techniques employed, and the manner in which the xenotransplantation product will be used clinically. These programs should be updated periodically to reflect advances in the knowledge of infectious diseases. The sponsor should develop an adequate screening program in consultation with appropriate experts, including oversight and regulatory bodies. 3.3.3. Assays used for screening and detection of infectious agents should have well defined and documented sensitivity, specificity, and reproducibility in the setting in which they are employed. In addition to assays for specific infectious agents, the use of assays capable of detecting broad ranges of infectious agents is strongly encouraged. In vivo assays involving animal models may require different standards for evaluation. Assays under development may complement the screening process. 3.3.4. Samples from the xenotransplantation product itself or of the nonhuman animal live cells, tissues or organs used in the manufacture of the xenotransplantation product, whenever possible, or from an appropriate biologic proxy should be tested preclinically with co-cultivation assays. These assays should include a panel of appropriate indicator cells, which may include human peripheral blood mononuclear cells (PBMC), to facilitate amplification and detection of endogenous retroviruses and other xenogeneic viruses capable of producing infection in humans. Agents that may be latent are of particular concern, and their detection may be facilitated by using chemical and irradiation methods. 3.3.5. All xenotransplantation products should be screened by direct culture for bacteria, fungi, and mycoplasma (see, e.g., 21 CFR Parts 600-680). In addition, universal PCR probes for the presence of micro-organisms are available and should be considered to complement the screening of xenotransplantation products.
# Herd/Colony Health Maintenance and Surveillance
The principal elements recommended to qualify a herd or colony as a source of animals for use in xenotransplantation include: (1) a closed herd or colony of stock (optimally caesarean derived) raised in barrier facilities; and (2) adequate surveillance programs for infectious agents.
The standard operating procedures of the animal facility with regard to the herd or colony health maintenance and surveillance programs relevant to the specific xenotransplantation product usage should be documented and available to appropriate review bodies. Medical records for the herd or colony and the specific individual source animals should be maintained by the animal facility or the sponsor, as appropriate, for 50 years beyond the date of the xenotransplantation. 3.4.1. Herd or colony health measures that constitute standard veterinary care for the species (e.g., anti-parasitic measures) should be implemented and recorded at the animal facility. For example, aseptic techniques and sterile equipment should be used in all parenteral interventions, including vaccinations, phlebotomy, and biopsies. All incidents that may affect herd or colony health should be recorded (e.g., breaks in the environmental barriers of the secured facility, disease outbreaks, or sudden animal deaths). Vaccination and screening schedules should be described in detail and taken into account when interpreting serologic screening tests. Prevention of disease by protection from exposure is generally preferable to vaccination, since this preserves the ability of serologic screening to define herd exposures. In particular, the use of live vaccines is discouraged, but may be justified when dead or acellular vaccines are not available and barriers to exposure are inadequate to prevent the introduction of infectious agents into the herd or colony. 3.4.2. In addition to standard medical care, the herd/colony should be monitored for the introduction of infectious agents which may not be apparent clinically. The sponsor should describe the monitoring program, including the types and schedules of physical examinations and laboratory tests used in the detection of all infectious agents, and should document the results. 3.4.3. Routine testing of closed herds or colonies in the United States should concentrate on zoonoses known to exist in captive animals of the relevant species in North America. Since many important pathogens are not endemic to the United States or have been found only in wild-caught animals, testing of breeding stock and maintenance of a closed herd or colony reduce the need for extensive testing of individual source animals. The geographic location of a herd or colony is relevant to which pathogens are present, and how likely, in that herd or colony. The geographic origin of the founding stock of the colony, including the quarantine and screening procedures utilized when the closed colony was established, should be taken into consideration. Veterinarians familiar with the prevalence of different infectious agents in the geographic area of source animal origin and the location where the source animals are to be maintained should be consulted. 3.4.3.1. As part of the surveillance program, routine serum samples should be obtained from randomly selected animals representative of the herd or colony population. These samples should be tested for indicators of infectious agents relevant to the species and epidemiologic exposures. Additional directed serologic analysis, active culturing, or other diagnostic laboratory testing of individual animals should be performed in response to clinical indications. Infection in one animal in the herd justifies a larger clinical and epidemiologic evaluation of the rest of the herd or colony.
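The Guideline recommends random, representative serologic sampling but does not prescribe sample sizes or confidence levels. As a hedged illustration only, the standard veterinary-epidemiology "detection of disease" calculation below shows how a facility might estimate how many animals to sample; the prevalence and confidence figures are assumptions, and the formula presumes a large herd and a perfect test.

```python
import math

# Illustrative sample-size calculation for herd surveillance. The Guideline
# recommends random, representative sampling but does not prescribe sample
# sizes or confidence levels; the figures used here are assumptions.

def detection_sample_size(min_prevalence: float, confidence: float = 0.95) -> int:
    """Number of randomly sampled animals needed to detect at least one
    positive with the given confidence, if the agent is present at
    min_prevalence or higher (large-herd approximation, perfect test)."""
    return math.ceil(math.log(1.0 - confidence) / math.log(1.0 - min_prevalence))

# E.g., to be 95% confident of detecting an agent present in >= 2% of a large
# closed herd, roughly 149 animals would need to be sampled:
print(detection_sample_size(0.02))    # 149
# Rarer agents require much larger samples (or testing every animal):
print(detection_sample_size(0.005))   # 598
```

Smaller herds, imperfect assay sensitivity, or clustering of infection would all push the required sample size upward, which is one reason directed testing in response to clinical indications remains essential.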
Aliquots of serum samples collected during routine surveillance and specific disease investigations should be maintained for 50 years beyond the date of sample collection. The Source Animal Facility or the sponsor should maintain these specimens (either on- or off-site) for investigations of unexpected diseases that occur in the herd, colony, individual source animals, or animal facility staff. These herd health surveillance samples, which are not archived for PHS investigation purposes, should nonetheless be made available to the PHS if needed (section 3.7.). 3.4.3.2. Any animal deaths, including stillbirths or abortions, where the cause is either unknown or ambiguous should lead to a full necropsy and evaluation for infectious etiologies (including transmissible spongiform encephalopathies) by a trained veterinary pathologist. The results of these investigations should be documented. 3.4.4. Standard operating procedures that include maintenance of a subset of sentinel animals are encouraged. Monitoring of these animals will increase the probability of detecting subclinical, latent, or late-onset diseases such as transmissible spongiform encephalopathies.
# Individual Source Animal Screening and Qualification
The qualification of individual source animals should include documentation of breed and lineage, general health, and vaccination history, particularly the use of live and/or live attenuated vaccines (section 3.4.1.). The presence of pathogens that result in acute infections should be documented and controlled by clinical examination and treatment of individual source animals, by use of individual quarantine periods that extend beyond the incubation period of pathogens of concern, and by herd surveillance indicating the presence or absence of infection in the herd from which the individual source animal is selected. The use of any drugs or biologic agents for treatment should be documented. During quarantine and/or prior to procurement of live cells, tissues or organs for use in xenotransplantation, individual source animals should be screened for infectious agents relevant to the particular intended clinical use of the planned xenotransplantation product. The screening program should be guided by the surveillance and health history of the herd or colony. 3.5.1. In general, individual source animals should be quarantined for 3 weeks prior to procurement of live cells, tissues or organs for use in xenotransplantation. During the quarantine, acute illnesses due to infectious agents to which the animal may have been exposed shortly before removal from the herd or colony would be expected to become clinically apparent. It may be appropriate to modify the need for and duration of individual quarantine periods depending on the characterization and surveillance of the source animal herd or colony, the design of the facility in which the herd is bred and maintained, and the clinical urgency. When the quarantine period is shortened or eliminated, the justification should be documented and any potentially increased infectious risk should be addressed in the informed consent document. 3.5.1.1.
During the quarantine period, candidate source animals should be examined by a veterinarian and screened for the presence of infectious agents (bacteria, including rickettsiae when appropriate; parasites; fungi; and viruses) by appropriate serologies and cultures, serum clinical chemistries (including those specific to the function of the organ or tissue to be procured), complete blood count and peripheral blood smear, and fecal exam for parasites. Evaluation for viruses which may not be recognized zoonotic agents but which have been documented to infect either human or nonhuman primate cells in vivo or in vitro should be considered. Particular attention should be given to viruses with demonstrated capacity for recombination, complementation, or pseudotyping. Surveillance of a closed herd or colony (as described in section 3.4.3.) will minimize the additional screening necessary to qualify individual member animals. The nature, timing, and results of surveillance of the herd or colony from which the individual animal is procured should be considered in designing appropriate additional screening of individual animals. These tests should be performed as closely as possible to the date of xenotransplantation while ensuring the availability of results prior to clinical use. 3.5.1.2. Screening of a candidate source animal should be repeated prior to procurement of live cells, tissues or organs for use in xenotransplantation if a period greater than three months has elapsed since the initial screening and qualification were performed, or if the animal has been in contact with other non-quarantined animals between the quarantine period and the time of cell, tissue, or organ procurement. 3.5.1.3. Transportation of source animals may compromise the microbiologic protection ensured by the closed colony. Careful attention to conditions of transport can minimize disease exposures during shipping. Because microbiological isolation of the source animal during transit is critically important, source animals should be transported using a system that reliably ensures such isolation. Transported source animals should be quarantined for a minimum period of three weeks after transportation, during which time appropriate screening should be performed. The sponsor may propose a shorter quarantine period if appropriate justification (reflecting the level of containment and the duration of the transportation) is provided. When source animals are transported intact, the sponsor should consult the FDA about further details of appropriate transport, quarantine, and screening. If the animals are transported across state or federal boundaries, the USDA should be consulted. 3.5.1.4. For the reasons cited above, it is preferable, whenever feasible, to procure live cells, tissues or organs for use in xenotransplantation at the animal facility. Precautions employed during transport to ensure microbiological isolation of the procured xenotransplantation product or live cells, tissues or organs should be documented. 3.5.2. All procured cells, tissues and organs intended for use in xenotransplantation should be as free of infectious agents as possible. The use of source animals in which infectious agents, including latent viruses, have been identified should be avoided. However, the presence of an infectious agent in certain anatomic sites, for example the alimentary tract, should not preclude use of the source animal if the agent is documented to be absent in the xenotransplantation product. 3.5.3.
When feasible, a biopsy of the nonhuman animal live cells, tissues or organs intended for use in xenotransplantation, of the xenotransplantation product itself, or of other relevant tissue should be evaluated for the presence of infectious agents by appropriate assays and histopathology prior to xenotransplantation, and then archived (section 3.7.). 3.5.4. The sponsor should ensure that the linked records described in section 3.2.7. are available, when appropriate, for review by the local review bodies, the SACX, and the FDA. These records should include information on the results of the quarantine and screening of individual xenotransplantation source animals. In addition to the records kept at the Source Animal Facility, a summary of the individual source animal record should accompany the xenotransplantation product and be archived as part of the medical record of the xenotransplantation product recipient. 3.5.5. The Source Animal Facility should notify the sponsor in the event that an infectious agent is identified in the source animal or herd subsequent to procurement of live cells, tissues or organs for use in xenotransplantation (e.g., identification of delayed-onset transmissible spongiform encephalopathies in a sentinel animal). 3.5.6. The sponsor should ensure that the quarantine, screening, and qualification program is appropriately tailored to the specific source animal species, the animal husbandry history, the process for procuring the xenogeneic biomaterial and preparing the xenotransplantation product, and the clinical application. The sponsor should also ensure that the results of these procedures are reviewed and approved by persons with the appropriate expertise prior to the clinical application.
# Procurement and Screening of Nonhuman Animal Live Cells, Tissues or Organs Used for Xenotransplantation
3.6.1. Procurement and processing of cells, tissues and organs should be performed using documented aseptic conditions designed to minimize contamination. These procedures should be conducted in designated facilities, which may be subject to inspection by appropriate oversight and regulatory authorities. 3.6.2. Cells, tissues or organs intended for xenotransplantation that are maintained in culture prior to xenotransplantation should be periodically screened for maintenance of sterility, including screening for viruses and mycoplasma. The FDA publications titled "Guidance for Industry: Guidance for Human Somatic Cell Therapy and Gene Therapy" (1998), "Points to Consider in the Characterization of Cell Lines Used to Produce Biologicals" (1993), and "Points to Consider in the Manufacture and Testing of Therapeutic Products for Human Use Derived from Transgenic Animals" (1995) should be consulted for guidance. The sponsor should develop, implement, and stringently enforce the standard operating procedures for the procurement and screening processes. Procedures that may inactivate or remove pathogens without compromising the integrity and function of the xenotransplantation product should be employed. 3.6.3. All steps involved in the procuring, processing, and screening of live cells, tissues or organs or xenotransplantation products to the point of xenotransplantation should be rehearsed preclinically to ensure reproducible quality control. 3.6.4. If nonhuman animal live cells, tissues or organs for use in xenotransplantation are procured without euthanatizing the source animal, the designated PHS specimens should be archived (PHS specimens are discussed in section 3.7.1.)
and the animal's health should be monitored for life. When source animals die or are euthanatized, a complete necropsy with gross, histopathologic, and microbiological evaluation by a trained veterinary pathologist should follow, regardless of the time elapsed between xenogeneic biomaterial procurement and death. This evaluation should include examination for transmissible spongiform encephalopathies. The sponsor should maintain documentation of all necropsy results for 50 years beyond the date of necropsy as part of the animal health record (sections 3.2.7. and 3.4.). In the event that the necropsy reveals findings pertinent to the health of the xenotransplantation product recipient(s) (e.g., evidence of transmissible spongiform encephalopathies), the findings should be communicated to the FDA without delay (see, e.g., 21 CFR 312.32).
# Archives of Source Animal Medical Records and Specimens
Systematically archived source animal biologic samples, and record keeping that allows rapid and accurate linking of xenotransplantation product recipients to the individual source animal records and archived biologic specimens, are essential for public health investigation and containment of emergent xenogeneic infections. 3.7.1. Source animal biologic specimens designated for PHS use (as outlined below) should be banked at the time of xenogeneic biomaterial procurement. These specimens should remain in archival storage for 50 years beyond the date of the xenotransplantation to permit retrospective analyses if a public health need arises. Such archived specimens should be readily accessible to the PHS and remain linked to both source animal and recipient health records. At the time of procurement of nonhuman animal live cells, tissues or organs for use in xenotransplantation, plasma should be collected from the source animal and stored in sufficient quantity for subsequent serology and viral testing. In addition, the sponsor should recover and bank sufficient aliquots of cryopreserved leukocytes for subsequent isolation of nucleic acids and proteins, as well as aliquots for thawing viable cells for viral co-culture assays or other tissue culture assays. Ideally, at least ten 0.5 cc aliquots of citrated or EDTA-anticoagulated plasma should be banked. At least five aliquots of viable leukocytes (1 x 10^7 cells each) should be cryopreserved. It may also be appropriate to collect paraffin-embedded, formalin-fixed, and cryopreserved tissue samples from source animal organs relevant to the specific protocol at the time of xenogeneic biomaterial procurement. Additionally, cryopreserved tissue samples representative of major organ systems (e.g., spleen, liver, bone marrow, central nervous system, lung) should be collected from source animals at necropsy. The material submitted for review by FDA and, when appropriate, by the Secretary's Advisory Committee on Xenotransplantation (under development, see section 5.3.) should justify the types of tissues, cells, and plasma taken for storage and any smaller quantities of plasma and leukocytes collected. 3.7.2. The sponsor should maintain archives of designated PHS specimens (section 3.7.1.) and serum collected for herd surveillance for 50 years beyond the date of collection (section 3.4.3.1.), and animal health records for 50 years beyond the date of the animal's death (section 3.2.7.).
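The Guideline repeatedly calls for record systems that permit "easy, accurate, and rapid linkage" among specimen archives, source animal records, and recipient records over a 50-year horizon (sections 3.2.7.1. and 3.7.), without prescribing any particular implementation. The sketch below is one minimal way such a cross-referenced index might be structured; all class names, identifiers, and fields are hypothetical illustrations, not a required or endorsed format.

```python
from dataclasses import dataclass, field
from datetime import date, timedelta
from typing import Dict, List

RETENTION_YEARS = 50  # retention clock runs from collection (section 3.7.2.)

@dataclass
class ArchivedSpecimen:
    specimen_id: str
    specimen_type: str       # e.g., "EDTA plasma, 0.5 cc" or "viable leukocytes, 1e7"
    collected_on: date
    freezer_location: str    # split aliquots belong in separate freezers

    def retain_until(self) -> date:
        # Approximate calendar arithmetic; a real system would track exact dates.
        return self.collected_on + timedelta(days=365 * RETENTION_YEARS)

@dataclass
class SourceAnimalRecord:
    animal_id: str           # permanent individual identification (section 3.2.6.)
    herd_id: str
    health_history: List[str] = field(default_factory=list)
    specimens: List[ArchivedSpecimen] = field(default_factory=list)

@dataclass
class XenoProductRecord:
    product_id: str
    recipient_id: str        # links to the recipient's medical record
    source_animal_ids: List[str] = field(default_factory=list)

class LinkageIndex:
    """Cross-reference supporting rapid linkage (section 3.2.7.1.)."""
    def __init__(self) -> None:
        self.animals: Dict[str, SourceAnimalRecord] = {}
        self.products: Dict[str, XenoProductRecord] = {}
        self.by_recipient: Dict[str, List[str]] = {}  # recipient_id -> product_ids

    def register(self, product: XenoProductRecord,
                 animals: List[SourceAnimalRecord]) -> None:
        self.products[product.product_id] = product
        self.by_recipient.setdefault(product.recipient_id, []).append(product.product_id)
        for animal in animals:
            self.animals[animal.animal_id] = animal

    def specimens_for_recipient(self, recipient_id: str) -> List[ArchivedSpecimen]:
        # Walk recipient -> product(s) -> source animal(s) -> archived specimens.
        found: List[ArchivedSpecimen] = []
        for pid in self.by_recipient.get(recipient_id, []):
            for aid in self.products[pid].source_animal_ids:
                found.extend(self.animals[aid].specimens)
        return found
```

Whatever the implementation, the essential property is the one exercised by `specimens_for_recipient`: starting from a recipient identifier alone, an investigator can reach every linked source animal record and archived specimen without manual cross-matching.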
# Disposal of Animals and Animal By-products
The ultimate disposition of source and sentinel animals bred for xenotransplantation, especially animals of species ordinarily used to produce food, requires advance planning. Generally, source and sentinel animals should not be used as pets, breeding animals, sources of human food via milk or meat, or as ingredients of feed for other animals, because of their potential to enter the human or animal food chain. 3.8.1. There may be species-specific situations where animals from xenotransplant facilities can be considered safe for human food use or as feed ingredients when disposed of through rendering. FDA's Center for Veterinary Medicine (CVM) regulates animal feed ingredients and also establishes conditions for the release of animals to the USDA Food Safety Inspection Service for inspection as food for humans. Persons wishing to offer animals into the human food or animal feed supply, or who have food safety questions, should consult with CVM. Food safety issues will be referred to CVM. 3.8.2. Animals from biomedical facilities that have not been authorized for release by CVM into the human food or animal feed supply may be adulterated under the Federal Food, Drug and Cosmetic Act (21 U.S.C. 321 et seq.), unfit for food or feed, and potentially infectious. They should be disposed of in a manner consistent with infectious medical waste, in compliance with federal, state, and local requirements. It is critical that adequate diagnostic assays and methodologies for surveillance of known infectious agents from the source animal are available prior to initiating the clinical trial. The sensitivity, specificity, and reproducibility of these testing methods should be documented under conditions that simulate those employed at the time of, and following, the xenotransplantation procedure. As with pre-xenotransplantation screening, assays under development may complement the surveillance process (see section 3.3.3.). The laboratory surveillance should include methods to detect infectious agents known to establish persistent latent infections in the absence of clinical symptoms (e.g., herpesviruses, retroviruses, papillomaviruses) and that are known or suspected to have been present in the xenotransplantation product. When the xenogeneic viruses of concern have similar human counterparts (e.g., simian cytomegalovirus), assays to distinguish between the two should be used in the post-xenotransplantation laboratory surveillance. Depending upon the degree of immunosuppression in the recipient, serological assays may or may not be useful. Methods for analysis may include co-cultivation of cells coupled with appropriate detection assays.
# Xenotransplantation Product Recipients' Biologic Specimens Archived for Public Health Investigations (PHS Specimens)
Biological specimens obtained from the xenotransplantation product recipients and designated for public health investigations (as distinct from specimens collected for clinical evaluation or laboratory surveillance) should be archived for 50 years beyond the date of the xenotransplantation to allow retrospective investigation of xenogeneic infections. The type and quantity of specimens archived may vary with the clinical procedure and the age of the xenotransplantation product recipient.
In the application for FDA review, which may also be reviewed by the SACX, the sponsor should justify the amount and types of specimens to be designated for PHS use, including any differences from the recommendations described below. At selected time points, at least three to five 0.5 cc aliquots of citrated or EDTA-anticoagulated plasma should be recovered and archived. At least two aliquots of viable leukocytes (1 x 10^7 cells each) should be cryopreserved. Specimens from any xenotransplantation product that is removed (e.g., post-rejection or at the time of death) should be archived. The following schedule for archiving biological specimens is recommended: (1) Prior to the xenotransplantation procedure, two sets of samples should be collected and archived one month apart. If this is not feasible, then two sets should be collected and archived at times that are separated as much as possible. One set should be collected immediately prior to the xenotransplantation. (2) Additional sets should be archived in the immediate post-xenotransplantation period and at approximately one month and six months after xenotransplantation. (3) Collections should then be obtained annually for the first two years after xenotransplantation. (4) After that, specimens should be archived every five years for the remainder of the recipient's life. More frequent archiving may be indicated by the specific protocol or the recipient's medical course. 4.1.2.1. In the event of the recipient's death, snap-frozen samples stored at -70°C, paraffin-embedded tissue, and tissue suitable for electron microscopy should be collected at autopsy from the xenotransplantation product and from all major organs relevant to either the xenotransplantation or the clinical syndrome that resulted in the patient's death. These designated PHS specimens should be archived for 50 years beyond the date of collection. 4.1.2.2. The sponsor should maintain an accurate archive of the PHS specimens. In the absence of a central facility (section 5.2.), these specimens should be archived with the safeguards necessary to ensure long-term storage (e.g., a monitored storage freezer alarm system and specimen archiving in split portions in separate freezers) and an efficient system for the prompt retrieval and linkage of data to the medical records of recipients and source animals. The sponsor should maintain these archives and a record system that allows easy, accurate, and rapid linkage of information among the different record systems (i.e., the specimen archive, the recipient's medical records, and the records of the source animal) for 50 years beyond the date of xenotransplantation. If record systems are maintained in a computer database, electronic backups should be kept in a secure office facility and backup on hard copy should be routinely performed. 4.1.2.3. A clinical episode potentially representing a xenogeneic infection should prompt notification of the FDA, which will notify other federal and state health authorities as appropriate. Under these circumstances, the PHS may decide that an investigation involving the use of these archived biologic specimens is warranted to assess the public health significance of the infection.
# Infection Control
4.2.1.2. Additional infection control or isolation precautions (e.g., Airborne, Droplet, Contact) should be employed as indicated in the judgment of the hospital epidemiologist and the xenotransplantation team infectious disease specialist.
For example, appropriate isolation precautions for each hospitalized xenotransplantation product recipient will depend upon the type of xenotransplantation, the extent of immunosuppression, and the patient's symptoms. Isolation precautions should be continued until a diagnosis has been established or the patient's symptoms have resolved. The appropriateness of isolation precautions and other infection control measures should be reassessed when the diagnosis is established, when the patient's symptoms change, and at the time of readmission and discharge. Discharge instructions should include specific education on appropriate infection control practices following discharge, including any special precautions recommended for disposal of biologic products. The most restrictive level of isolation should be used when patients exhibit respiratory symptoms, because of the potential for airborne transmission of infectious agents. 4.2.1.3. Health care personnel, including xenotransplantation team members, should adhere to recommended procedures for handling and disinfection/sterilization of medical instruments and disposal of infectious waste. 4.2.1.4. Biosafety level 2 (BSL-2) standard and special practices, containment equipment, and facilities should be used for activities involving clinical specimens from xenotransplantation product recipients. Particular attention should be given to sharps management and bioaerosol containment. BSL-3 standard and special practices and containment equipment should be employed in a BSL-2 facility when propagating an unidentified infectious agent isolated from a xenotransplantation product recipient.
# Acute Infectious Episodes
Most acute viral infectious episodes among the general population are never etiologically identified. Xenotransplantation product recipients are at risk for these infections and for other infections common among immunosuppressed allograft recipients. When the source of an illness in a recipient remains unidentified despite standard diagnostic procedures, it may be appropriate to perform additional testing of body fluid and tissue samples. The infectious disease specialist, in consultation with the hospital epidemiologist, the veterinarian, the clinical microbiologist, and other members of the xenotransplantation team, should assess each clinical episode and make a considered judgment regarding the significance of the illness, the need for and type of diagnostic testing, and specific infection control precautions. Other experts on infectious diseases and public health may also need to be consulted. 4.2.2.1. In immunosuppressed xenotransplantation product recipients, assays of antibody response may not detect infections reliably. In such patients, culture systems, genomic detection methodologies, and other techniques may detect infections for which serologic testing is inadequate. Consequently, clinical centers where xenotransplantation is performed should have the capability to culture and to identify viral agents using in vitro and in vivo methodologies, either on site or through active and documented collaborations. Specimens should be handled to ensure viability and to maximize the probability of isolation and identification of fastidious agents.
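Both the pre-transplantation screening provisions (section 3.3.3.) and the surveillance provisions above ask sponsors to document assay sensitivity, specificity, and reproducibility without prescribing formulas or acceptance criteria. The sketch below applies the standard definitions to a hypothetical two-by-two validation panel; every count, threshold, and function name is illustrative only.

```python
# Illustrative only: standard performance measures for a qualitative assay,
# computed from a characterized reference panel. All counts are hypothetical.

def sensitivity(true_pos: int, false_neg: int) -> float:
    # Proportion of truly infected reference samples the assay detects.
    return true_pos / (true_pos + false_neg)

def specificity(true_neg: int, false_pos: int) -> float:
    # Proportion of truly uninfected reference samples the assay clears.
    return true_neg / (true_neg + false_pos)

def qualitative_concordance(runs):
    # Crude reproducibility measure: fraction of panel samples that give the
    # same qualitative result in every replicate run of the assay.
    per_sample_results = list(zip(*runs))
    concordant = sum(1 for results in per_sample_results if len(set(results)) == 1)
    return concordant / len(per_sample_results)

# Hypothetical validation of a screening assay:
print(f"sensitivity: {sensitivity(true_pos=92, false_neg=8):.2f}")   # 0.92
print(f"specificity: {specificity(true_neg=97, false_pos=3):.2f}")   # 0.97
runs = [[True, False, True, True]] * 3   # three identical replicate runs
print(f"concordance: {qualitative_concordance(runs):.2f}")           # 1.00
```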
Algorithms for the evaluation of unknown xenogeneic pathogens should be developed in consultation with appropriate experts, including persons with expertise in both medical and veterinary infectious diseases, in the laboratory identification of unknown infectious agents, and in the management of biosafety issues associated with such investigations. 4.2.2.2. Acute and convalescent sera obtained in association with acute unexplained illnesses should be archived when judged appropriate by the infectious disease physician and/or the hospital epidemiologist. This would permit retrospective study and perhaps the identification of an etiologic agent.
# Health Care Workers
The risk to health care workers who provide post-xenotransplantation care to xenotransplantation product recipients is undefined. However, health care workers, including laboratory personnel, who handle the animal tissues/organs prior to xenotransplantation will have a definable risk of infection not exceeding that of animal care, veterinary, or abattoir workers routinely exposed to the source animal species, provided equivalent biosafety standards are employed. The sponsor should ensure that a comprehensive Occupational Health Services program is available to educate workers regarding the risks associated with xenotransplantation and to monitor for possible infections in workers. The Occupational Health Services program should include: 4.2.3.1. Education of Health Care Workers. All centers where xenotransplantation procedures are performed should develop appropriate xenotransplantation procedure-specific educational materials for their staff. These materials should describe the xenotransplantation procedure(s), the known and potential risks of xenogeneic infections posed by the procedure(s), and the research or health care activities that may pose the greatest risk of infection or nosocomial transmission of zoonotic or other infectious agents. Education programs should detail the circumstances under which the use of Standard Precautions and other isolation precautions is recommended, including the use of personal protective equipment and handwashing before and after all patient contacts, even if gloves are worn. In addition, the potential for transmission of these agents to the general public should be discussed.
# Health Care Worker Surveillance
The sponsor and the Occupational Health Service in each clinical center should develop protocols for monitoring health care personnel. These protocols should describe methods for the storage and retrieval of personnel records and the collection of serologic specimens from workers. Baseline sera (i.e., sera collected prior to exposure to xenotransplantation products or recipients) should be collected from all personnel who provide direct care to xenotransplantation product recipients and from laboratory personnel who handle, or are likely to handle, animal cells, tissues, and organs or biologic specimens from xenotransplantation product recipients. Baseline sera can be compared to sera collected following occupational exposures; such baseline sera should be maintained for 50 years from the time of collection. The activities of the Occupational Health Service should be coordinated with the Infection Control Program to ensure appropriate surveillance of infections in personnel.
# Post-Exposure Evaluation and Management
Written protocols should be in place for the evaluation of health care workers who experience an exposure where there is a risk of transmission of an infectious agent, e.g., an accidental needle stick.
Health care workers, including laboratory personnel, should be instructed to report exposures immediately to the Occupational Health Services. The post-exposure protocol should describe the information to be recorded, including the date and nature of the exposure, the xenotransplantation procedure, recipient information, actions taken as a result of such exposures (e.g., counseling, post-exposure management, and follow-up), and the outcome of the event. This information should be archived in a health exposure log (section 4.3.) and maintained for at least 50 years from the time of the xenotransplantation, despite any change in employment of the health care worker or discontinuation of xenotransplantation procedures at that center. Health care and laboratory workers should be counseled to report and seek medical evaluation for unexplained clinical illnesses occurring after the exposure.
# Health Care Records
The sponsor should maintain a cross-referenced system that links the relevant records of the xenotransplantation product recipient, the xenotransplantation product, the source animal(s), the animal procurement center, and significant nosocomial exposures. These records should include: (1) documentation of each xenotransplantation procedure, (2) documentation of significant nosocomial health exposures, and (3) documentation of the infectious disease screening and surveillance records on both xenotransplantation product source animals and recipients. These records should be updated regularly and cross-referenced to allow rapid and easy linkage between the clinical records of the source animal(s) and the xenotransplantation product recipient. To the extent permitted by applicable laws and/or regulations, the confidentiality of all medical and research records pertaining to human recipients should be maintained (section 2.5.10.). 4.3.1. The documentation of each xenotransplantation procedure includes the date and type of the procedure, the principal investigator(s) (PI), the xenotransplantation product recipient, the xenotransplantation product(s), the individual source animal(s) and the procurement facilities for these animals, as well as the health care workers associated with each procedure. 4.3.2. The documentation of significant nosocomial health exposures includes the persons involved, the date and nature of each potentially significant nosocomial exposure (as defined in the written Infection Control/Occupational Health Service protocol), and the actions taken.
# PUBLIC HEALTH NEEDS
# National Xenotransplantation Database
A pilot project to demonstrate the feasibility of, and identify system requirements for, a National Xenotransplantation Database is currently underway. It is anticipated that this pilot would be expanded into a fully operational Database to collect data from all clinical centers conducting trials in xenotransplantation and all animal facilities providing animals or xenogeneic organs, tissues, or cells for clinical use. Such a database would enable: (a) the recognition of rates of occurrence and clustering of adverse health events, including events that may represent outcomes of xenogeneic infections; (b) accurate linkage of these events to exposures on a national level; (c) notification of individuals and clinical centers regarding epidemiologically significant adverse events associated with xenotransplantation; and (d) biological and clinical research assessments.
When such a Database becomes functional, the sponsor should ensure that information requested by the Database is provided in an accurate and timely manner. To the extent allowed by law, information derived from the Database would be available to the public with appropriate confidentiality protections for any proprietary or individually identifiable information.
# Biologic Specimen Archives
The sponsor should ensure that the designated PHS specimens from the source animals, xenotransplantation products, and xenotransplantation product recipients are archived (sections 3.7.1., 3.5.3., and 4.1.2.). The biologic specimens should be collected and archived under conditions that will ensure their suitability for subsequent public health purposes, including public health investigations (section 4.1.2.3.). The location and nature of archived specimens should be documented in the health care records, and this information should be linked to the National Xenotransplantation Database when the latter becomes functional. DHHS is considering options for a central biological archive, e.g., one maintained by a private sector organization under contract to DHHS. Designated PHS specimens would be deposited in such a repository.
# Secretary's Advisory Committee on Xenotransplantation (SACX)
The SACX is currently being implemented by DHHS. As currently envisioned, the SACX will consider the full range of complex issues raised by xenotransplantation, including ongoing and proposed protocols, and make recommendations to the Secretary on policy and procedures. The SACX will also provide a forum for public discussion of issues when appropriate. These activities will facilitate DHHS efforts to develop an integrated approach to addressing emerging public health issues in xenotransplantation. The structure and functions of the SACX, as well as procedures for SACX review of protocols and issues, will be described in subsequent publications. Inquiries about the status and function of, and access to, the SACX should be directed to the Office of Science Policy, Office of the Secretary, DHHS, or the Office of Biotechnology Activities (OBA), formerly known as the Office of Recombinant DNA Activities (ORDA), Office of the Director, NIH.
Several developments have fueled the renewed interest in xenotransplantation: the use of live animal cells, tissues, and organs in the treatment or mitigation of human disease. The worldwide critical shortage of human organs available for transplantation, together with advances in genetic engineering and in the immunology and biology of organ/tissue rejection, has renewed scientists' interest in investigating xenotransplantation as a potentially promising means to treat a wide range of human disorders. This situation is highlighted by the fact that, in the United States alone, 13 patients die each day waiting to receive a life-saving transplant to replace a diseased vital organ. While animal organs are proposed as an investigational alternative to human organ transplantation, xenotransplantation is also being used in the effort to treat diseases for which human organ allotransplants are not traditional therapies (e.g., epilepsy, chronic intractable pain syndromes, insulin-dependent diabetes mellitus, and degenerative neurologic diseases such as Parkinson's disease and Huntington's disease). At present, the majority of clinical xenotransplantation procedures utilize avascular cells or tissues rather than solid organs, in large part due to the immunologic barriers that the human host presents to vascularized xenotransplantation products. However, with recent scientific advances, xenotransplantation is viewed by many researchers as having the potential for treating not only end-organ failure but also chronic debilitating diseases that affect major segments of the world population. Although the potential benefits may be considerable, the use of xenotransplantation also presents a number of significant challenges. These include (1) the potential risk of transmission of infectious agents from source animals to patients, their close contacts, and the general public; (2) the complexities of informed consent; and (3) animal welfare issues. On September 23, 1996, the Department of Health and Human Services (DHHS) published for public comment the Draft PHS Guideline on Infectious Disease Issues in Xenotransplantation to address the infectious disease concerns raised by xenotransplantation (61 Federal Register 49919). The Draft Guideline was jointly developed by five components within DHHS: the Centers for Disease Control and Prevention (CDC), the Food and Drug Administration (FDA), the Health Resources and Services Administration (HRSA), and the National Institutes of Health (NIH), all parts of the U.S. Public Health Service (PHS), plus the DHHS Office of the Assistant Secretary for Planning and Evaluation. This Draft Guideline discusses general principles for the prevention and control of infectious diseases that may be associated with xenotransplantation. Intended to minimize potential risks to public health, these general principles provide guidance to sponsors of xenotransplantation clinical trials and to local review bodies evaluating proposed xenotransplantation clinical protocols on the development, design, and implementation of clinical protocols. The Draft Guideline emphasizes the need for appropriate clinical and scientific expertise on the xenotransplantation research team, adequate protocol review, thorough health surveillance plans, and comprehensive informed consent and education processes. In response to the Draft Guideline, the DHHS received over 140 written comments reflecting a broad spectrum of public opinion (Federal Register docket No. 96M-0311).
Comments were received from a variety of stakeholders, including representatives of academia; industry; patient, consumer, and animal welfare advocacy organizations; professional, scientific, and medical societies; ethicists; researchers; other government agencies; and private citizens. In revising the Draft Guideline, careful consideration was given to recent scientific findings, to each of the written comments, and to public comments received at several national, international, and DHHS-sponsored workshops. These meetings constituted critically important public forums for discussing the scientific, public health, and social issues attendant to xenotransplantation. The DHHS sponsored two public workshops on xenotransplantation during 1997 and 1998. The first meeting, held in July 1997, focused on virology and documented evidence of cross-species infections. Titled "Cross-Species Infectivity and Pathogenesis," the meeting addressed current knowledge about the mechanisms and consequences of infectious agent transmission across species barriers. Discussions also focused on the possibility that an infectious agent might cross from an animal donor organ or tissue to human xenotransplantation product recipients. The conference also highlighted gaps in knowledge about the emergence of new infections in humans, especially as a result of xenotransplantation. The basic consensus of the meeting was that, while there are examples of animal infectious agents crossing species barriers to infect, and even cause diseases in, humans, the actual likelihood of this occurring in xenotransplantation product recipients cannot be ascertained at this time. Small, adequate, and well-controlled clinical trials designed to test the safety and efficacy of xenotransplantation were considered to be appropriate. One anticipated outcome of such trials would be to both minimize and better understand the risks of transmission of infectious agents. (The meeting summary can be accessed at: <http://www.niaid.nih.gov/dait/cross-species/default.htm>.) In January 1998, a second DHHS workshop, titled "Developing U.S. Public Health Service Policy in Xenotransplantation," focused on the current and evolving U.S. public health policy in xenotransplantation. (The meeting transcripts can be accessed at: <http://www.fda.gov/ohrms/dockets/dockets/96m0311/96m0311.htm>.) Among other issues, the regulatory framework, a national xenotransplantation database, and a national advisory committee were discussed. During this workshop, several themes were raised repeatedly and echoed many of the written public comments on the Draft Guideline. First, there was a broad consensus that the Draft Guideline was important and should be implemented, albeit with some modifications. For example, participants expressed the view that there should be more public awareness of, and participation in, the development of public health policies in the field of xenotransplantation. Second, there was strong support for the DHHS proposal to establish a national xenotransplantation advisory committee, not only to facilitate analysis and discussion of the scientific, medical, ethical, legal, and social issues raised by xenotransplantation, but also to review and make recommendations about proposed clinical trial protocols.
There was broad support for proceeding cautiously with xenotransplantation trials; however, some participants held that a national moratorium on clinical trials in xenotransplantation might be advantageous until the national xenotransplantation advisory committee is established and operational. While there is no definitive scientific evidence that xenotransplantation would promote cross-species infectious agent transmission leading to disease, there are data providing a reasonable basis for caution [see revised guideline, section 6., references D.1.a., e., f., i., l., o., q., r., & s.]. Some members of the scientific and medical community and concerned citizens expressed the opinion that xenotransplantation products procured from nonhuman primates (as opposed to other species) pose greater potential public health risks and raise greater animal welfare concerns. The January 1998 workshop also included presentations by representatives of the World Health Organization (WHO), the Organization for Economic Cooperation and Development (OECD), and several nations engaged in developing policies on xenotransplantation. These presentations placed the U.S. policy in a global context and enhanced international dialogue on important public health safeguards. Because of the potential for the secondary transmission of infectious agents, the public health risks posed by xenotransplantation transcend national boundaries. International communication and cooperation in the development of public health policies are critical elements in successfully addressing the global safety and ethical challenges inherent in xenotransplantation. To this end, several countries, including Canada, France, Germany, the Netherlands, Spain, Sweden, the United Kingdom, and the United States, and several international organizations, such as the WHO, the OECD, and the Council of Europe, are actively engaged in international workshops and consultations on xenotransplantation [see revised guideline, section 6.C.7. for a partial bibliography of guidance documents and websites from national and international bodies].

# MAJOR REVISIONS AND CLARIFICATIONS TO THE GUIDELINE

Major revisions and clarifications to the Draft Guideline are briefly summarized and discussed below. These revisions were prompted by public comments submitted to the Draft Guideline docket, concerns expressed at public workshops, evolving science, and developing international policies. PHS intends to address related issues that go beyond the scope of this Guideline in future guidance documents. In the future, the Guideline may be amended as needed to appropriately reflect the accrual of new knowledge about cross-species infectivity and pathogenesis, new insights into the potential risks associated with xenotransplantation, policies currently under development (e.g., the Secretary's Advisory Committee on Xenotransplantation and the National Xenotransplantation Database), and other evolving public health policies in this arena.

# Definition of Xenotransplantation and Xenotransplantation Product

The definition of "xenotransplantation" has been revised from that used in the Draft Guideline. For the purposes of this document and U.S. PHS
policy, xenotransplantation is now defined to include any procedure that involves the transplantation, implantation, or infusion into a human recipient of either (a) live cells, tissues, or organs from a nonhuman animal source, or (b) human body fluids, cells, tissues, or organs that have had ex vivo contact with live nonhuman animal cells, tissues, or organs. Furthermore, xenotransplantation products have been defined to include live cells, tissues, or organs used in xenotransplantation. The term xenograft, used in previous PHS documents, will no longer be used to refer to all xenotransplantation products.

# Clinical Protocol Review and Oversight

A variety of opinions were expressed regarding the appropriate level of protocol review and oversight of clinical trials in the U.S. For example, the American Society of Transplant Surgeons stated that the Draft Guideline represented an unnecessary intrusion of government regulation into the performance of transplant surgery. In contrast, some organizations with commercial interests in the development of xenotransplantation contended that an inappropriate share of the burden for oversight of clinical trials had been assigned to local review committees and that the responsibility for this oversight should reside at the national level with the FDA. Several academic veterinarians, a group of 44 virologists, and other concerned citizens asserted that strict regulations should accompany the Guideline and that the major responsibility for determining the suitability of any animals as sources of nonhuman animal live cells, tissues, or organs used in xenotransplantation must reside with the FDA. The revised Guideline makes clear that, in addition to review by appropriate local review bodies (Institutional Review Boards, Institutional Animal Care and Use Committees, and Institutional Biosafety Committees), the FDA has regulatory oversight for xenotransplantation clinical trials conducted in the U.S. Xenotransplantation products (i.e., live cells, tissues, or organs from a nonhuman animal source, or human body fluids, cells, tissues, or organs that have had ex vivo contact with live cells, tissues, or organs from nonhuman animal sources and are used for xenotransplantation) are considered to be biological products, or combination products that contain a biological component, subject to regulation by FDA under section 351 of the Public Health Service Act (42 U.S.C. 262) and under the Federal Food, Drug, and Cosmetic Act (21 U.S.C. 321 et seq.). In accordance with the applicable statutory provisions, xenotransplantation products are subject to the FDA regulations governing clinical investigations and product approvals (e.g., the Investigational New Drug [IND] regulations in 21 CFR Part 312 and the regulations governing licensing of biological products in 21 CFR Part 601). Investigators should submit an application for FDA review before proceeding with xenotransplantation clinical trials. Sponsors are strongly encouraged to meet with FDA staff in the pre-submission phase. In addition to the guidances referred to below, the FDA is considering further regulations and/or guidances regarding, for example, the development of xenotransplantation protocols and the technical and clinical development of xenotransplantation products. Xenotransplantation clinical protocols may also be reviewed by the Secretary's Advisory Committee on Xenotransplantation. The scope and process for this review will be described in future publications.
[see revised guideline, sections 2.3, 5.3]

# Responsibility for Design and Conduct of Clinical Protocols

The Draft Guideline originally proposed that clinical centers, source animal facilities, and individual investigators share the responsibilities for various aspects of the clinical trial protocol, including pre-xenotransplantation screening programs, patient informed consent procedures, record keeping, and post-xenotransplantation surveillance activities. The revised Guideline clarifies that primary responsibility for designing and monitoring the conduct of xenotransplantation clinical trials rests with the sponsor (as provided under, e.g., 21 CFR 312.23(a)(6)(d) and 312.50).

# Informed Consent and Patient Education

Virologists, infectious disease specialists, health care workers, and patient advocates emphasized the need for the sponsor to offer assistance to xenotransplantation product recipients in educating their close contacts about potential infectious disease risks and methods for reducing those risks. The Guideline has been revised to state that the sponsor should ensure that counseling regarding behavior modification and other issues associated with risk of infection is provided to the patient and made available to the patient's family and other close contacts prior to and at the time of consent, and that such counseling should continue to be available thereafter. The revised Guideline clarifies and strengthens the informed consent process for xenotransplantation product recipients and the education and counseling process for recipients and their close contacts, including associated health care professionals. It also emphasizes the need for xenotransplantation product recipients to comply with long-term or life-long surveillance regardless of the outcome of the clinical trial or the status of the graft or other xenotransplantation product. [see revised guideline, sections 2.5.3, 2.5.4, 2.5.7.]

# Deferral of Allograft and Blood Donors

The 1996 Draft Guideline recommended that xenotransplantation product recipients refrain from donating body fluids and/or parts for use in humans. Some infectious disease specialists and an infectious disease control practitioner organization suggested that this recommendation be strengthened to active deferral of xenotransplantation product recipients, and that consideration also be given to the deferral of close contacts of xenotransplantation product recipients. This issue was addressed by the FDA Xenotransplantation Subcommittee of the Biological Response Modifiers Advisory Committee (December 1997; for transcript: <http://www.fda.gov/ohrms/dockets/ac/97/transcpt/3365t1.rtf>). The committee recommended that xenotransplantation product recipients and their close contacts be counseled and actively deferred from donation of body fluids and other parts. A proposed FDA policy was later presented to FDA's Blood Products Advisory Committee for further discussion (March 1998; for transcript: <http://www.fda.gov/ohrms/dockets/ac/98/transcpt/3391t2.rtf>). Of note, at the time of both these advisory committee meetings the operative definition of xenotransplantation did not include, as it does now, the use of certain products involving limited ex vivo exposure to xenogeneic cell lines or tissues.
FDA has published a draft guidance document ("Guidance for Industry: Precautionary Measures to Reduce the Possible Risk of Transmission of Zoonoses by Blood and Blood Products from Xenotransplantation Product Recipients and Their Contacts") for public comment, which was again discussed by the FDA Xenotransplantation Subcommittee of the Biological Response Modifiers Advisory Committee on January 13, 2000. FDA will further consult with its advisors to identify the range of xenotransplantation products for which recipients and/or their contacts should be recommended for deferral from blood donation. Additionally, the range of contacts who should be deferred from blood donation will be clarified after further public discussion. The Guideline has been revised to reflect comments made at the FDA advisory committee meetings [see revised guideline, section 2.5.11].

# Xenotransplantation Product Sources

Strong opposition to the use of nonhuman primates as xenotransplantation product sources was voiced by many individuals and groups, including 44 virologists; scientific and medical organizations such as the American Society of Transplant Physicians and the American College of Cardiology; private citizens; and commercial sponsors of xenotransplantation clinical trials. The concerns focused on the ethics of using animals so closely related to humans, as well as the risk of transmission of infectious diseases from nonhuman primates to humans. Many recommended that the Guideline state that clinical xenotransplantation trials using xenotransplantation products for which nonhuman primates served as source animals should not occur until a closer examination of infectious disease risks can be adequately carried out. Scientific findings since the publication of the Draft Guideline have also resulted in revisions. For example, the ability of simian foamy virus (SFV) to persistently infect human hosts has been further characterized [see revised guideline, section 6., references D.2.m. & D.4.d.]; the persistence of microchimerism, with anatomically dispersed baboon cells containing SFV, baboon cytomegalovirus (CMV), and baboon endogenous retrovirus (BaEV), in human recipients of baboon liver xenotransplantation products has been documented [see revised guideline, section 6., references D.3.a. & D.4.h.]; and new viruses capable of infecting humans have been identified in pigs [see revised guideline, section 6., references D.2.a., b., f., g., h., i., v., w., x., bb., cc., ee., & gg.]. The active expression of infectious porcine endogenous retrovirus from multiple porcine cell types and the ability of porcine endogenous retrovirus variants A and B to infect human cell lines in vitro have been demonstrated [see revised guideline, section 6., references D.1.q., s.; D.2.jj.; D.3.i.; D.4.a., e., f., m., s. & t.], giving scientific plausibility to concerns that this retrovirus from porcine xenotransplantation products may be able to infect recipients in vivo. Diagnostic tests for porcine endogenous retrovirus, BaEV, and other relevant infectious agents have been developed [see revised guideline, section 6., references D.4.a., b., d., g., h., l., n., p., q., t. & u.], and studies are currently underway to assess the presence or absence of infectious endogenous retroviruses and other relevant infectious agents in both porcine and baboon xenotransplantation products and in the recipients of these xenotransplantation products [see revised guideline, section 6., references D.3.a.; D.4.c., h., j., l.
& n.]. The risk of endogenous retrovirus infection, however, is multi-factorial, and it is not known whether results from these studies will be predictive of the potential infectious risks associated with future xenotransplantation products. One factor that affects porcine endogenous retrovirus infectivity is its sensitivity to inactivation and lysis by human sera, yet the virus becomes resistant to inactivation after a single passage through human cells [see revised guideline, section 6., references D.2.jj. & D.4.m.]. It is hypothesized that pre-xenotransplantation removal of naturally occurring xenoreactive antibodies from the recipient and other modifications intended to facilitate xenotransplantation product survival, such as the procurement of xenotransplantation products or nonhuman animal live cells, tissues, or organs used in the manufacture of xenotransplantation products from certain transgenic pigs, may also modulate the infectivity of endogenous retroviruses for xenotransplantation product recipients [see revised guideline, section 6., references D.1.d., o., q., s.; D.2.k., jj.; D.3.i.; D.4.e., k., m. & r.]. As the science regarding porcine endogenous retroviruses summarized above began to emerge, the FDA placed all clinical trials using porcine xenotransplantation products on hold (October 16, 1997) pending development by sponsors of (1) sensitive and specific assays for preclinical detection of infectious porcine endogenous retrovirus in porcine xenotransplantation products, (2) sensitive and specific assays for post-xenotransplantation screening for porcine endogenous retrovirus and clinical follow-up of porcine xenotransplantation product recipients, and (3) informed consent documents that indicate the potential clinical implications of the capacity of porcine endogenous retrovirus to infect human cells in vitro. These issues were discussed publicly by the FDA Xenotransplantation Subcommittee of the Biological Response Modifiers Advisory Committee (December 1997; for transcript: <http://www.fda.gov/ohrms/dockets/ac/97/transcpt/3365t1.rtf>). In response to concerns articulated by scientists and other members of the public regarding the use of nonhuman primate xenotransplantation products, the FDA, after consultation with other DHHS agencies, has issued a "Guidance for Industry: Public Health Issues Posed by the Use of Nonhuman Primate Xenografts in Humans" containing the following conclusions:

• "...[21 CFR 312.42(b)(1)(iv)], any protocol submission that does not adequately address these risks is subject to clinical hold (i.e., the clinical trial may not proceed) due to insufficient information to assess the risks and/or due to unreasonable risk.

• (3) at the current time, FDA believes there is not sufficient information to assess the risks posed by nonhuman primate xenotransplantation. FDA believes that it will be necessary for there to be public discussion before these issues can be adequately addressed..."

While the document "Guidance for Industry: Public Health Issues Posed by the Use of Nonhuman Primate Xenografts in Humans" specifically addresses the issue of nonhuman primates as sources for xenotransplantation products, the DHHS recognizes that other animal species have been used and/or are proposed as sources of xenotransplantation products and that all species pose infectious disease risks. Accordingly, the principles for source animal screening and health surveillance described in the revised Guideline apply to all candidate source animals, regardless of species.
These principles will need to be reassessed as new data become available.

# Source Animal Screening and Qualification

Many groups and individuals expressed concern that the Draft Guideline did not set forth sufficiently stringent principles and criteria for source animal husbandry and screening, source animal facilities, and the procurement and screening of xenotransplantation products. This view was expressed by virologists, veterinarians, infectious disease specialists, concerned citizens, commercial producers of laboratory animals, industrial sponsors of xenotransplantation trials, and a number of professional, scientific, medical, and advocacy organizations, such as the American Society of Transplant Surgeons, Doctors and Lawyers for Responsible Medicine, the American College of Cardiology, the Biotechnology Industry Organization (BIO, representing 670 biotech companies), and the Association for Professionals in Infection Control and Epidemiology. Others expressed concern that the stringency of the Draft Guideline imposed high economic burdens on producers of xenotransplantation product source animals and/or on sponsors of xenotransplantation clinical trials. However, in order to reduce the potential public health risks posed by xenotransplantation, strict control of animal husbandry and health surveillance practices is needed during the course of development of this technology. The Guideline has been revised to clarify the animal husbandry and pre-xenotransplantation infectious disease screening that should be performed before an animal can become a qualified source of xenotransplantation products. The revised Guideline now emphasizes that risk-minimization precautions appropriate to each xenotransplantation product protocol should be employed during all steps of production and that screening, quarantine, and surveillance protocols should be tailored to the specific clinical protocol, xenotransplantation product, source animal, and husbandry history. Breeding programs using cesarean derivation of animals should be used whenever possible. Source animals should be procured from closed herds or colonies raised in facilities that have appropriate barriers to effectively preclude the introduction or spread of infectious agents. These facilities should actively monitor the herds for infectious agents. The revised Guideline clarifies and strengthens the infectious disease screening and surveillance practices that should be in place before a clinical trial can begin.

# Specimen Archives and Medical Records

A number of infectious disease specialists, veterinarians, epidemiologists, industry sponsors of xenotransplantation trials, biotechnology companies, professional organizations such as the American Society of Transplant Physicians, and consumer advocates requested clarification regarding the collection and usage of, and access to, biological specimens obtained from both source animals and xenotransplantation product recipients. The revised Guideline clarifies the recommended types, volumes, and collection schedule for biological specimens from both source animals and xenotransplantation product recipients. It also clearly distinguishes between biological specimens archived for public health investigations [see revised guideline, sections 4.1.2. and 3.7.] and specimens archived for use by the sponsor in conducting surveillance of source animals and post-xenotransplantation laboratory surveillance of xenotransplantation product recipients.
The revised Guideline also states that health records and biologic specimens should be maintained for 50 years, based on the latency periods of known human pathogenic persistent viruses and the precedents established by the U.S. Occupational Safety and Health Administration with respect to record-keeping requirements.

# National Xenotransplantation Database

A number of infectious disease specialists, epidemiologists, transplant physicians, and a state health official emphasized the need for accurate and timely information on infectious disease surveillance and on xenotransplantation protocols and their outcomes. They further supported the concept of a national xenotransplantation database as described in the Draft Guideline. The revised Guideline describes the development of a pilot national xenotransplantation database to identify and implement routine data collection methods, system design, data reporting, and general start-up, and to assess routine operational issues associated with a fully functional national database. The revisions also discuss plans to expand this pilot into a national xenotransplantation database intended to compile data from all clinical centers conducting trials in xenotransplantation and all animal facilities providing source animals for xenotransplantation.

# Secretary's Advisory Committee on Xenotransplantation

Xenotransplantation research brings to the fore certain challenges in assessing the potential impact of science on society as a whole, including the role of the public in those assessments. The broad spectrum of public opinions expressed since the publication of the Draft Guideline indicates that there is neither uniform public endorsement nor rejection of xenotransplantation. The fields of research involved are rapidly moving ones, at the leading edge of medical science. Furthermore, in many instances the clinical trials are privately funded, and the public may not even be aware of them. However, public awareness and understanding of xenotransplantation is vital because the potential infectious disease risks posed by xenotransplantation extend beyond the individual patient to the public at large. In addition to these safety issues, a variety of individuals and groups have identified and/or raised concerns about issues such as animal welfare, human rights, community interest and consent, social equity in access to novel biotechnologies, and the allocation of human allografts versus xenotransplantation products. For all of these reasons, public discourse on xenotransplantation research is critical and necessary. The revised Guideline acknowledges the complexity, importance, and relevance of these issues, but emphasizes that the scope of the Guideline is limited to infectious disease issues. The revised Guideline discusses the development of the Secretary's Advisory Committee on Xenotransplantation (SACX) as a mechanism for ensuring ongoing discussions of the scientific, medical, social, and ethical issues and the public health concerns raised by xenotransplantation, including ongoing and proposed protocols. The SACX will make recommendations to the Secretary on policy and procedures and, as needed, on changes to the Guideline. This guidance document represents PHS's current thinking on certain infectious disease issues in xenotransplantation. It does not create or confer any rights for or on any person and does not operate to bind PHS or the public.
This guidance is not intended to set forth an approach that addresses all of the potential health hazards related to infectious disease issues in xenotransplantation, nor to establish the only way in which the public health hazards that are identified in this document may be addressed. The PHS acknowledges that not all of the recommendations set forth within this document may be fully relevant to all xenotransplantation products or xenotransplantation procedures. Sponsors of clinical xenotransplantation trials are advised to confer with relevant authorities (the FDA, other reviewing authorities, funding sources, etc.) in assessing the relevance and appropriate adaptation of the general guidance offered here to specific clinical applications.

# Definitions

This section defines terms as used in this guideline document.

1. Allograft - a graft consisting of live cells, tissues, and/or organs transplanted between individuals of the same species.

2. Closed herd or colony - a herd or colony governed by Standard Operating Procedures that specify criteria restricting admission of new animals, to assure that all introduced animals are at the same or a higher health standard compared with the residents of the herd or colony.

3. Commensal - an organism living on or within another, but not causing injury to the host.

4. Good Clinical Practices - a standard for the design, conduct, performance, monitoring, auditing, recording, analyses, and reporting of clinical trials that provides assurance that the data and reported results are credible and accurate, and that the rights, integrity, and confidentiality of trial subjects are protected.

5. Infection Control Program - a systematic activity within a hospital or health care center charged with responsibility for the control and prevention of infections within the hospital or center.

6. Infectious agents - viruses, bacteria (including the rickettsiae), fungi, parasites, or agents responsible for Transmissible Spongiform Encephalopathies (currently thought to be prions) capable of invading and multiplying within the body.

10. Investigator - an individual who actually conducts a clinical investigation (i.e., under whose immediate direction the drug [or investigational product] is administered or dispensed to a subject). In the event an investigation is conducted by a team of individuals, the investigator is the responsible leader of the team (see 21 CFR 312.3(b)).

11. Nosocomial infection - an infection acquired in a hospital.

12. Occupational Health Service - an office within a hospital or health care center charged with responsibility for the protection of workers from health hazards to which they may be exposed in the course of their job duties.

13. Procurement - the process of obtaining or acquiring animals or biological specimens (such as cells, tissues, or organs) from an animal or human for medicinal, research, or archival purposes.

14. Recipient - a person who receives or who undergoes ex vivo exposure to a xenotransplantation product (as defined in xenotransplantation).

15. Secretary's Advisory Committee on Xenotransplantation (SACX) - the advisory committee appointed by the Secretary of Health and Human Services to consider the full range of issues raised by xenotransplantation (including ongoing and proposed protocols) and make recommendations to the Secretary on policy and procedures.

16. Source animal - an animal from which cells, tissues, and/or organs for xenotransplantation are obtained.

17. Source animal facility - a facility that provides source animals for use in xenotransplantation.
18. Sponsor - a person who takes responsibility for and initiates a clinical investigation. The sponsor may be an individual, pharmaceutical company, government agency, academic institution, private organization, or other organization. The sponsor does not actually conduct the investigation unless the sponsor is a sponsor-investigator (see, e.g., 21 CFR 312.3(b)).

19. Transmissible spongiform encephalopathies (TSEs) - fatal, subacute, degenerative diseases of humans and animals with characteristic neuropathology (spongiform change and deposition of an abnormal form of a prion protein present in all mammalian brains). TSEs are experimentally transmissible by inoculation or ingestion of diseased tissue, especially central nervous system tissue. The prion protein (intimately associated with transmission and pathological progression) is hypothesized to be the agent of transmission. Alternatively, other unidentified co-factors or an as-yet unidentified viral agent may be necessary for transmission. Creutzfeldt-Jakob disease (CJD) is the most common human TSE.

20. Xenogeneic infectious agents - infectious agents that become capable of infecting humans due to the unique facilitating circumstances of xenotransplantation; includes zoonotic infectious agents.

21. Xenotransplantation - for the purposes of this document, any procedure that involves the transplantation, implantation, or infusion into a human recipient of either (a) live cells, tissues, or organs from a nonhuman animal source or (b) human body fluids, cells, tissues, or organs that have had ex vivo contact with live nonhuman animal cells, tissues, or organs.

22. Xenotransplantation Product(s) - live cells, tissues, or organs used in xenotransplantation (defined above). Previous PHS documents have used the term "xenograft" to refer to all xenotransplantation products.

23. Xenotransplantation Product Recipient - a person who receives or who undergoes ex vivo exposure to a xenotransplantation product.

24. Zoonosis - a disease of animals that may be transmitted to humans under natural conditions (e.g., brucellosis, rabies).

# Background

The demand for human cells, tissues, and organs for clinical transplantation continues to exceed the supply. The limited availability of human allografts, coupled with recent scientific and biotechnical advances, has prompted the renewed development of investigational therapeutic approaches that use xenotransplantation products in human recipients. The experience with human allografts, however, has shown that infectious agents can be transmitted through transplantation. HIV/AIDS, Creutzfeldt-Jakob disease, rabies, and hepatitis B and C, for example, have been transmitted between humans via allotransplantation. The use of live nonhuman cells, tissues, and organs for xenotransplantation raises serious public health concerns about potential infection of xenotransplantation product recipients with both known and emerging infectious agents. Zoonoses are infectious diseases of animals transmitted to humans via exposure to or consumption of the source animal. It is well documented that contact between humans and nonhuman animals, such as that which occurs during husbandry, food production, or interactions with pets, can lead to zoonotic infections. Many infectious agents responsible for zoonoses (e.g., Toxoplasma species, Salmonella species, or Cercopithecine herpesvirus 1 [B virus] of monkeys) are well characterized and can be identified through available diagnostic tests.
Infectious disease public health concerns about xenotransplantation focus not only on the transmission of these known zoonoses, but also on the transmission of infectious agents as yet unrecognized. The disruption of natural anatomical barriers and the immunosuppression of the recipient increase the likelihood of interspecies transmission of xenogeneic infectious agents. An additional concern is that these xenogeneic infectious agents could be subsequently transmitted from the xenotransplantation product recipient to close contacts and then to other human beings. An infectious agent may pose risk to the patient and/or the public if it can infect, cause disease in, and transmit among humans, or if its ability to infect, cause disease in, or transmit among humans remains inadequately defined. Emerging infectious agents may not be readily identifiable with current techniques. This was the case with the several-year delay in identifying HIV-1 as the etiologic agent of AIDS. Retroviruses and other persistent infections may be associated with acute disease with varying incubation periods, followed by periods of clinical latency prior to the onset of clinically evident malignancies or other diseases. As the HIV/AIDS pandemic demonstrates, persistent latent infections may result in person-to-person transmission for many years before clinical disease develops in the index case, thereby allowing an emerging infectious agent to become established in the susceptible population before it is recognized.

# Scope of the Document

This guideline addresses the public health issues related to xenotransplantation and recommends procedures for diminishing the risk of transmission of infectious agents to the recipient, health care workers, and the general public. While it is beyond the scope of this document to address the array of complex and important ethical issues raised by xenotransplantation, this guideline describes a mechanism for ensuring ongoing broad public discussion of ethical issues related to xenotransplantation (section 5.3). Other publications and reports of public discussions (section 6., references C.7.a., c., d., h., i.; D.1.b. & i.) have addressed issues such as animal welfare, human rights, and community interest. This guideline reflects the status of the field of xenotransplantation and knowledge of the risk of xenogeneic infections at the time of publication. The general guidance in this document will be augmented by public discussion, new advances in scientific knowledge and clinical experience, and specific FDA guidance documents intended to facilitate the implementation of the principles set forth herein. HHS may ask the Secretary's Advisory Committee on Xenotransplantation (SACX) to review the Guideline on a periodic basis and recommend appropriate revisions to the Secretary (section 5.3).

# Objectives

The objective of this PHS guideline is to present measures that can be used to minimize the risk of human disease due to xenogeneic infectious agents, including both recognized zoonoses and non-zoonotic infectious agents that become capable of infecting humans due to the unique facilitating circumstances of xenotransplantation. In order to achieve this goal, this document:

• Outlines the composition and function of the xenotransplantation team to ensure that appropriate technical expertise can be applied (section 2.1).
• Addresses aspects of the clinical protocol, the clinical center, and the informed consent and patient education processes with respect to public health concerns raised by the potential for infections associated with xenotransplantation (sections 2.2-2.5).

• Provides a framework for pre-transplantation animal source screening to minimize the potential for transmission of xenogeneic infectious agents from the xenotransplantation product to the human recipient (section 3, particularly sections 3.3-3.6).

• Provides a framework for post-xenotransplantation surveillance to monitor transmission of infectious agents, including newly identified xenogeneic agents, to the recipient as well as to health care workers and other individuals in close contact with the recipient (section 4, particularly sections 4.1.1. and 4.2.3.).

• Provides a framework for hospital infection control practices to reduce the risk of nosocomial transmission of zoonotic and xenogeneic infectious agents (section 4.2.).

• Provides a framework for maintaining appropriate records, including human and veterinary health care records (sections 4.3. and 3.7.), standard operating procedures of facilities and centers (sections 3.2, 3.4), and occupational health service program records (section 4.3.).

• Provides a framework for archiving biologic samples from the source animal and the xenotransplantation product recipient. These records and samples will be essential in the event that public health investigations are necessitated by infectious diseases and other adverse events arising from xenotransplantation that could affect the public health (sections 3.7, 4.1.2., and 5.2).

• Discusses the creation of a national database that will enable population-based public health surveillance and investigation(s) (section 5.1).

• Discusses the creation of a Secretary's Advisory Committee on Xenotransplantation (SACX) that will consider the full range of complex and interrelated issues raised by xenotransplantation, including ongoing and proposed protocols (sections 2.3. and 5.3.).

# XENOTRANSPLANTATION PROTOCOL ISSUES

# Xenotransplantation Team

The development and implementation of xenotransplantation clinical research protocols require expertise in the infectious diseases of both human recipients and source animals. Consequently, in addition to health care professionals who have clinical experience with transplantation, the xenotransplantation team should include as active participants: (1) infectious disease physician(s) with expertise in zoonoses, transplantation, and epidemiology; (2) veterinarian(s) with expertise in the animal husbandry issues and infectious diseases relevant to the source animal; (3) specialist(s) in hospital epidemiology and infection control; and (4) experts in research and diagnostic microbiology laboratory methodologies. The sponsor should ensure that the appropriate expertise is available in the development and implementation of the clinical protocol, including the on-site follow-up of the xenotransplantation product recipient.

# Clinical Xenotransplantation Site

Any site performing xenotransplantation clinical procedures should have experience and expertise with, and facilities for, any comparable allotransplantation procedures. All xenotransplantation clinical centers should utilize CLIA '88 (Clinical Laboratory Improvement Amendments of 1988) accredited virology and microbiology laboratories.
2.2.1. The safe conduct of xenotransplantation clinical trials should include the active participation of laboratories with the ability to isolate and identify unusual and/or newly recognized pathogens of both human and animal origin. Each protocol will present unique diagnostic, surveillance, and research needs that require expertise and experience in the microbiology and infectious diseases of both animals and humans. The sponsor should ensure that persons and centers with appropriate experience and expertise are involved in the study development, clinical application, and follow-up of each protocol, either on-site or through formal and documented off-site collaborations.

Sponsors are responsible for ensuring reviews by local review bodies as appropriate (Institutional Review Boards [IRBs], Institutional Animal Care and Use Committees [IACUCs], and Institutional Biosafety Committees [IBCs]), by the FDA, and by the SACX (upon implementation by the Secretary, HHS). The scope and process for SACX review will be described in subsequent publications. Institutional review of xenotransplantation clinical trial protocols should address: (1) the potential risks of infection for the recipient and contact populations (including health care providers, family members, friends, and the community at large); (2) the conditions of source animal husbandry (e.g., screening program, animal quarantine); and (3) issues related to human and veterinary infectious diseases (including virology, laboratory diagnostics, epidemiology, and risk assessment).

# Health Screening and Surveillance Plans

Clearly defined methodologies for pre-xenotransplantation screening for known infectious agents and for post-xenotransplantation surveillance are essential parts of clinical xenotransplantation trials and should be clearly developed in all protocols. Pre-xenotransplantation screening includes screening of the source herd (sections 3.2.-3.4.), the source animal(s) (section 3.5.), and the nonhuman animal live cells, tissues, or organs used in the manufacture of the xenotransplantation product, or the product itself (section 3.6.). Post-xenotransplantation surveillance includes surveillance of the recipient(s) (section 4.1.), selected health care workers or other contacts (section 4.2.), and the surviving source animal(s) (section 3.6.). The screening methods used and the specific agents sought will differ depending on the procedure, the cells, tissue, or organ used, the source animal, and the clinical indication for xenotransplantation. Details of these screening and surveillance plans, including a summary of the relevant aspects of the health maintenance and surveillance program of the herd and the medical history of the source animal(s) (section 3), and written protocols for hospital infection control practices regarding both xenotransplantation product recipients and health care workers (section 4.2.), should be described in the materials submitted for review by the SACX, the FDA, and the local review bodies.

# Informed Consent and Patient Education Processes

In the process of obtaining and documenting informed consent, the sponsor and investigators should comply with all applicable regulatory requirement(s). In addition, the sponsor should ensure that counseling regarding behavior modification and other issues associated with risk of infection is provided to the patient and made available to the patient's family and contacts prior to and at the time of consent. Such counseling should remain available on an ongoing basis thereafter.
The informed consent discussion, the informed consent document, and the written information provided to potential xenotransplantation product recipients should address, at a minimum, the following points relating to the potential risks associated with xenotransplantation:

2.5.1. The potential for infection with zoonotic agents known to be associated with the nonhuman source animal species.

2.5.2. The potential for transmission to the recipient of unknown xenogeneic infectious agents. The patient should be informed of the uncertainty regarding the risk of infection, whether such infections might result in disease, the nature of disease that might result, and the possibility that infections with these agents may not be recognized for an extended period of time.

2.5.3. The potential risk for transmission of xenogeneic infectious agents (and possible subsequent manifestation of disease) to the recipient's family or close contacts, especially sexual contacts. The recipient should be informed that immunocompromised persons may be at increased risk of xenogeneic infections. The recipient should be counseled regarding behavioral modifications that diminish the likelihood of transmitting infectious agents and relevant infection control practices (sections 4.2.1.1., 4.2.1.2., 4.2.1.5., and 4.2.3.1.).

2.5.4. The informed consent process should include a documented procedure to inform the recipient of the responsibility to educate his/her close contacts regarding the possibility of xenogeneic infections from the source animal species and to offer the recipient assistance with this education process, if desired. Education of close contacts should address the uncertainty regarding the risks of xenogeneic infections, information about behaviors known to transmit infectious agents from human to human (e.g., unprotected sex, breast-feeding, intravenous drug use with shared needles, and other activities that involve potential exchange of blood or other body fluids), and methods to minimize the risk of transmission. Recipients should educate their close contacts about the importance of reporting any significant unexplained illness through their health care provider to the research coordinator at the institution where the xenotransplantation was performed.

2.5.5. The potential need for isolation procedures during any hospitalization (including, to the extent possible, the estimated duration of such confinement and the specific symptoms/situations that would prompt such isolation), and any specialized precautions needed to minimize acquisition or transmission of infections following hospital discharge.

2.5.6. The potential need for specific precautions following hospital discharge to minimize the risk that livestock of the source animal species and the recipient of the xenotransplantation product will represent biohazards to each other. For example, if a recipient comes into contact with the animal species from which the xenotransplantation product was procured, the xenotransplantation product (and therefore the recipient) may be at increased risk from exposure to agents infectious for the xenotransplantation product source species. Conversely, the recipient may represent a biohazard to healthy livestock if the presence of the xenotransplantation product enables the recipient to serve as a vector for outbreaks of disease in source species livestock.
2.5.7. The importance of complying with long-term or life-long surveillance, necessitating routine physical evaluations and the archiving of tissue and/or body fluid specimens for public health purposes, even if the experiment fails and the xenotransplantation product is rejected or removed. The schedule for clinical and laboratory monitoring should be provided to the extent possible. The patient should be informed that any serious or unexplained illness in themselves or their contacts should be reported immediately to the clinical investigator or his/her designee.

2.5.8. The responsibility of the xenotransplantation product recipient to inform the investigator or his/her designee of any change in address or telephone number for the purpose of enabling long-term health surveillance.

2.5.9. The importance of a complete autopsy upon the death of the xenotransplantation product recipient, even if the xenotransplantation product was previously rejected or removed. Advance discussion with the recipient and his/her family concerning the need to conduct an autopsy is also encouraged in order to ensure that the recipient's intent is known to all relevant parties.

2.5.10. The long-term need for access by the appropriate public health agencies to the recipient's medical records. To the extent permitted by applicable laws and/or regulations, the confidentiality of medical records should be maintained. The informed consent document should include a statement describing the extent, if any, to which confidentiality of records identifying the subject will be maintained (45 CFR 46.116 or 21 CFR 50.25(a)(5)).

2.5.11. As an interim precautionary measure, xenotransplantation product recipients and certain of their contacts should be deferred indefinitely from donation of Whole Blood, blood components (including Source Plasma and Source Leukocytes), tissues, breast milk, ova, sperm, or any other body parts for use in humans. Pending further clarification, contacts to be deferred from donations should include persons who have engaged repeatedly in activities that could result in intimate exchange of body fluids with a xenotransplantation product recipient. For example, such contacts may include sexual partners, household members who share razors or toothbrushes, and health care workers or laboratory personnel with repeated percutaneous, mucosal, or other direct exposures. These recommendations may be revised based on ongoing surveillance of xenotransplantation product recipients and their contacts to clarify the actual risk of acquiring xenogeneic infections, and on the outcome of deliberations between FDA and its advisors. FDA has published a draft guidance document ("Guidance for Industry: Precautionary Measures to Reduce the Possible Risk of Transmission of Zoonoses by Blood and Blood Products from Xenotransplantation Product Recipients and Their Contacts") for public comment and will consult with its advisors to identify the range of xenotransplantation products for which recipients and/or certain of their contacts should be recommended for deferral from blood donation. Additionally, the range of contacts who should be deferred from blood donation will be clarified after further public discussion.
2.5.12. Xenotransplantation product recipients who may wish to consider reproduction in the future should be aware that a potential risk of transmission of xenogeneic infectious agents not only to their partner but also to their offspring during conception, embryonic/fetal development, and/or breast-feeding cannot be excluded.

2.5.13. All centers where xenotransplantation procedures are performed should develop appropriate xenotransplantation procedure-specific educational materials to be used in educating and counseling both potential xenotransplantation product recipients and their contacts. These materials should describe the xenotransplantation procedure(s) and the known and potential risks of xenogeneic infections posed by the procedure(s) in appropriate language. Those activities that are considered to be associated with the greatest risk of transmission of infection to contacts should be described. Education programs should detail the circumstances under which the use of personal protective equipment (e.g., gloves, gowns, masks) or special infection control practices is recommended, and should emphasize the importance of hand washing. The potential for transmission of these agents to the general public should be discussed.

# ANIMAL SOURCES FOR XENOTRANSPLANTATION

Recognized zoonotic infectious agents and other organisms present in animals, such as normal flora or commensals, may cause disease in humans when introduced by xenotransplantation, especially in immunocompromised patients. The risk of transmitting xenogeneic infectious agents is reduced by procuring source animals from herds or colonies that are screened and qualified as free of specific pathogenic infectious agents and that are maintained in an environment that reduces exposure to vectors of infectious agents. Precautions intended to reduce risk should be employed in all steps of production (e.g., during animal husbandry and the procurement and processing of nonhuman animal live cells, tissues, or organs used in the manufacture of xenotransplantation products) and should be appropriate to each xenotransplantation protocol. Before an animal species is used as a source of xenotransplantation product(s), sponsors should adequately address the public health issues raised. These issues are delineated in more detail below. Procedures should be developed to identify incidents that negatively affect the health of the herd. This information is relevant to the safety review of every xenotransplantation product application. Such information, as well as the procedures to collect the information, should be reported to FDA. Some experts consider that nonhuman primates pose a greater risk of transmitting infections to humans. The PHS recognized the substantial concerns about this issue that have been raised within the scientific community and the general public. In its April 6, 1999, "Guidance for Industry: Public Health Issues Posed by the Use of Nonhuman Primate Xenografts in Humans," the FDA stated that, under the Investigational New Drug regulations [21 CFR 312.42(b)(1)(iv)], "any protocol submission that does not adequately address these risks is subject to clinical hold (i.e., the clinical trial may not proceed) due to insufficient information to assess the risks and/or due to unreasonable risk..."

# Animal Procurement Sources

All xenotransplantation products pose a risk of infection and disease to humans. Regardless of the species of the source animal, precautions appropriate to each xenotransplantation product protocol should be employed in all steps of production (animal husbandry, procurement, and processing of nonhuman animal live cells, tissues, or organs) to minimize this risk.
Source animal procurement and processing procedures should include, at minimum, the following precautions:

3.1.1. Cells, tissues, and organs intended for use in xenotransplantation should be procured only from animals that have been bred and reared in captivity and that have a documented, well-characterized health history and lineage.

3.1.2. Source animals should be raised in facilities with adequate barriers, i.e., biosecurity, to prevent the introduction or spread of infectious agents. Animals should also be obtained from herds or colonies with restricted admission of new animals. Such closed herds or colonies should be free of infectious agents that are relevant to the animal species and that may pose risk to the patient and/or the public. An infectious agent may pose risk to patients and/or the public if it can infect, cause disease in, and transmit among humans, or if its ability to infect, cause disease in, or transmit among humans remains inadequately defined. In this regard, persistent viral infections are of particular concern. Source animals should specifically be free of infection with any identifiable exogenous persistent virus. Breeding programs utilizing caesarean derivation of animals reduce the risk of maternal-fetal transmission of infectious agents and should be used whenever possible. The prevalence of exposure to these agents should be documented through periodic surveillance of the herd or colony using serologic and other appropriate diagnostic methodologies.

3.1.3. Animals from minimally controlled environments, such as closed corrals (captive free-ranging animals), should not be used as source animals for xenotransplantation. Such animals have a higher likelihood of harboring adventitious infectious agents from uncontrolled contact with arthropods and/or other animal vectors.

3.1.4. Wild-caught animals should not be used as source animals for xenotransplantation.

3.1.5. Animals or live animal cells, tissues, or organs obtained from abattoirs should not be used for xenotransplantation. Such animals are obtained from geographically divergent farms or markets and are more likely to carry infectious agents due to increased exposure to other animals and increased activation and shedding of infectious agents during the stress of slaughter. In addition, health histories of slaughterhouse animals are usually not available.

3.1.6. Imported animals or the first generation of offspring of imported animals should not be used as source animals for xenotransplantation unless the animals belong to a species or strain (including transgenic animals) not available for use in the United States and their use is scientifically warranted. In this case, the imported animals should be documented to have been bred and continuously maintained in a manner consistent with the principles in this document. The source animal facility, production process, and records are subject to inspection by the FDA (Federal Food, Drug, and Cosmetic Act [21 U.S.C. 374]). The U.S. Department of Agriculture (USDA), Animal and Plant Health Inspection Service (APHIS), Veterinary Services (VS) regulates the importation of all animals and animal-origin materials that could represent a disease risk to U.S. livestock and poultry (9 CFR Part 122). Importation or interstate transport of any animal and/or animal-origin material that may represent such a disease risk requires a USDA permit.
In addition, plans for testing and quarantine of the imported animals, as well as health maintenance and surveillance of the herd or colony into which imported animals are introduced, should be conducted by a veterinarian who is either specifically trained in or who otherwise has a solid background in foreign animal diseases.

3.1.7. Source animals from species in which transmissible spongiform encephalopathies have been reported should be obtained from closed herds with documented absence of dementing illnesses and controlled food sources for at least 2 generations prior to the source animal (section 3.2.6.3). Xenotransplantation products should not be obtained from source animals imported from any country or geographic region where transmissible spongiform encephalopathies are known to be present in the source species, or from which the USDA prohibits or restricts importation of ruminants or ruminant products due to concern about transmissible spongiform encephalopathies.

3.1.8. The CDC, Division of Quarantine, regulates the importation of certain animals, including nonhuman primates (NHP), because of their potential to cause serious outbreaks of communicable disease in humans (42 CFR Part 71). Importers must register with CDC, certify that imported NHP will be used only for scientific, educational, and exhibition purposes, implement disease control measures, maintain records regarding each shipment, and report suspected zoonotic illness in animals or workers. Further, the importation and/or transfer of known or potential etiological agents, hosts, or vectors of human disease (including biological materials) may require a permit issued by CDC's Office of Health and Safety.

# Source Animal Facilities

Potential source animals should be housed in facilities built and operated taking into account the factors outlined in this section.

3.2.1. Source Animal Facilities (facilities providing source animals for xenotransplantation) should be designed and maintained with adequate barriers to prevent the introduction and spread of infectious agents. Entry and exit of animals and humans should be controlled to minimize environmental exposures/inadvertent exposure to transmissible infectious agents. Source Animal Facilities should not be located in geographic proximity to manufacturing or agricultural activities that could compromise the biosecurity of these facilities.

3.2.6. The Source Animal Facility standard operating procedures should thoroughly describe the following: (1) criteria for animal admission, including sourcing and entry procedures; (2) a description of the disease monitoring program; (3) criteria for the isolation or elimination of diseased animals, including a diagnostic algorithm for ill and dead animals; (4) facility cleaning and disinfecting arrangements; (5) the source and delivery of feed, water, and supplies; (6) measures to exclude arthropods and other animals; (7) animal transportation; (8) dead animal disposition; (9) criteria for the health screening and surveillance of humans entering the facility; and (10) permanent individual animal identification.

3.2.6.1. Animal movement through the secured facility should be described in the standard operating procedures of the facility. All animals introduced into the source colony other than by birth should go through a well-defined quarantine and testing period (section 3.5).
With regard to the reproduction and raising of suitable replacement animals, the use of methods such as artificial insemination (AI), embryo transfer, medicated early weaning, cloning, or hysterotomy/hysterectomy and fostering may minimize further colonization with infectious agents.

3.2.6.2. During final screening and qualification of individual source animals and procurement of live cells, tissues, or organs for use in xenotransplantation, the potential for transmission of an infectious agent should be minimized by established standard operating procedures. One method to accomplish this is a step-wise "batch" or "all-in/all-out" method of source animal movement through the facility rather than continuous replacement movement. With the "all-in/all-out" or "batch" method, a cohort of qualified animals is quarantined from the closed herd or colony while undergoing final screening qualification and xenogeneic biomaterial procurement. After the entire cohort of source animals is removed, the quarantine and xenogeneic biomaterial processing areas of the animal facility are cleaned and disinfected prior to the introduction of the next cohort of source animals.

3.2.6.3. The feed components, including any antibiotics, medicinals, or other additives, should be documented for a minimum of two generations prior to the source animal. Pasteurized milk products may be included in feeds. The absence of other mammalian materials, including recycled or rendered materials, should be specifically documented. The absence of such materials is important for the prevention of transmissible spongiform encephalopathies and other infectious agents. The potentially extended periods of clinical latency, the severity of consequent disease, and the difficulty of current detection methods highlight the importance of eliminating risk factors associated with transmissible spongiform encephalopathies.

3.2.7. The sponsor should establish records linking each xenotransplantation product recipient with the relevant health history of the source animal, herd or colony, and the specific organ, tissue, or cell type included in the xenotransplantation product or used in its manufacture. The relevant records include information on the standard operating procedures of the animal procurement facility, the herd health surveillance, and the lifelong health history of the source animal(s) for the xenotransplantation product (sections 3.2.-3.7.).

3.2.7.1. The sponsor should maintain these record systems and an animal numbering or other system that allows easy, accurate, and rapid linkage between the information contained in these different record systems and the xenotransplantation product recipient for 50 years beyond the date of xenotransplantation. If record systems are maintained in a computer database, electronic backups should be kept in a secure office facility and backup on hard copy should be routinely performed.

3.2.7.2. In the event that the Source Animal Facility ceases to operate, the facility should either transfer all animal health records and specimens to the respective sponsors or notify the sponsors of the new archive site. If the sponsor ceases to exist, decisions on the disposition of the archived records and specimens should be made in consultation with the FDA.

3.2.8. All animal facilities should be subject to inspection by designated representatives of the clinical protocol sponsor and public health agencies.
The sponsor is responsible for implementing and maintaining a routine facilities inspection program for quality control and quality assurance.

# Pre-xenotransplantation Screening for Known Infectious Agents

The following points discuss measures for appropriate screening of known infectious agents in the herd, the individual source animal, and the nonhuman animal live cells, tissues, or organs used in xenotransplantation. The selection of assays for pre-transplant screening should be determined by the source of the nonhuman animal live cells, tissues, or organs and the intended clinical application of the xenotransplantation product. General guidance on adventitious agent testing may be found in "Points to Consider for the Characterization of Cell Lines Used to Produce Biologicals" (FDA, CBER, 1993) and in the guidance document from the International Conference on Harmonisation, "Q5D Quality of Biotechnological/Biological Products: Derivation and Characterization of Cell Substrates Used for Production of Biotechnological/Biological Products."

3.3.1. The design of preclinical studies intended to identify infectious agents in the xenotransplantation product and/or the nonhuman animal live cells, tissues, or organs intended for use in the manufacture of xenotransplantation products should take into consideration the source animal species and the specific manner in which the xenotransplantation product will be used clinically. These studies should identify infectious agents and characterize their potential pathogenicity and tropism for human cells by appropriate in vivo and in vitro assays. Characterization of persistent viral infections and endogenous retroviruses present in source animal cells, tissues, or organs is particularly important. The information from these studies is necessary for the identification and development of appropriate assays for xenotransplantation product screening programs.

3.3.2. Programs for screening and detection of known infectious agents in the herd or colony, the individual source animal, and the xenotransplantation product itself or the nonhuman animal live cells, tissues, or organs used in its manufacture should take into account the infectious agents associated with the source animals used, the stringency of the husbandry techniques employed, and the manner in which the xenotransplantation product will be used clinically. These programs should be updated periodically to reflect advances in the knowledge of infectious diseases. The sponsor should develop an adequate screening program in consultation with appropriate experts, including oversight and regulatory bodies.

3.3.3. Assays used for screening and detection of infectious agents should have well defined and documented sensitivity, specificity, and reproducibility in the setting in which they are employed. In addition to assays for specific infectious agents, the use of assays capable of detecting broad ranges of infectious agents is strongly encouraged. In vivo assays involving animal models may require different standards for evaluation. Assays under development may complement the screening process.

3.3.4. Samples from the xenotransplantation product itself or from the nonhuman animal live cells, tissues, or organs used in the manufacture of the xenotransplantation product, whenever possible, or from an appropriate biologic proxy should be tested preclinically with co-cultivation assays.
These assays should include a panel of appropriate indicator cells, which may include human peripheral blood mononuclear cells (PBMC), to facilitate amplification and detection of endogenous retroviruses and other xenogeneic viruses capable of producing infection in humans. Agents that may be latent are of particular concern, and their detection may be facilitated by using chemical and irradiation methods.

3.3.5. All xenotransplantation products should be screened by direct culture for bacteria, fungi, and mycoplasma (see, e.g., 21 CFR Parts 600-680). In addition, universal PCR probes for the presence of micro-organisms are available and should be considered to complement the screening of xenotransplantation products.

# Herd/Colony Health Maintenance and Surveillance

The principal elements recommended to qualify a herd or colony as a source of animals for use in xenotransplantation include: (1) a closed herd or colony of stock (optimally caesarean derived) raised in barrier facilities; and (2) adequate surveillance programs for infectious agents. The standard operating procedures of the animal facility with regard to the herd or colony health maintenance and surveillance programs relevant to the specific xenotransplantation product usage should be documented and available to appropriate review bodies. Medical records for the herd or colony and the specific individual source animals should be maintained by the animal facility or the sponsor, as appropriate, for 50 years beyond the date of the xenotransplantation.

3.4.1. Herd or colony health measures that constitute standard veterinary care for the species (e.g., anti-parasitic measures) should be implemented and recorded at the animal facility. For example, aseptic techniques and sterile equipment should be used in all parenteral interventions, including vaccinations, phlebotomy, and biopsies. All incidents that may affect herd or colony health should be recorded (e.g., breaks in the environmental barriers of the secured facility, disease outbreaks, or sudden animal deaths). Vaccination and screening schedules should be described in detail and taken into account when interpreting serologic screening tests. Prevention of disease by protection from exposure is generally preferable to vaccination, since protection from exposure preserves the ability of serologic screening to define herd exposures. In particular, the use of live vaccines is discouraged, but may be justified when dead or acellular vaccines are not available and barriers to exposure are inadequate to prevent the introduction of infectious agents into the herd or colony.

3.4.2. In addition to standard medical care, the herd/colony should be monitored for the introduction of infectious agents that may not be apparent clinically. The sponsor should describe the monitoring program, including the types and schedules of physical examinations and laboratory tests used in the detection of all infectious agents, and document the results.

3.4.3. Routine testing of closed herds or colonies in the United States should concentrate on zoonoses known to exist in captive animals of the relevant species in North America. Since many important pathogens are not endemic to the United States or have been found only in wild-caught animals, testing of breeding stock and maintenance of a closed herd or colony reduces the need for extensive testing of individual source animals. Herd or colony geographic location is relevant to consideration of the presence and likelihood of pathogens in a given herd or colony.
The geographic origin of the founding stock of the colony, including the quarantine and screening procedures utilized when the closed colony was established, should be taken into consideration. Veterinarians familiar with the prevalence of different infectious agents in the geographic area of source animal origin and in the location where the source animals are to be maintained should be consulted.

3.4.3.1. As part of the surveillance program, routine serum samples should be obtained from randomly selected animals representative of the herd or colony population. These samples should be tested for indicators of infectious agents relevant to the species and epidemiologic exposures. Additional directed serologic analysis, active culturing, or other diagnostic laboratory testing of individual animals should be performed in response to clinical indications. Infection in one animal in the herd justifies a larger clinical and epidemiologic evaluation of the rest of the herd or colony. Aliquots of serum samples collected during routine surveillance and specific disease investigations should be maintained for 50 years beyond the date of sample collection. The Source Animal Facility or the sponsor should maintain these specimens (either on- or off-site) for investigations of unexpected diseases that occur in the herd, colony, individual source animals, or animal facility staff. These herd health surveillance samples, which are not archived for PHS investigation purposes, should nonetheless be made available to the PHS if needed (section 3.7.).

3.4.3.2. Any animal deaths, including stillbirths or abortions, for which the cause is either unknown or ambiguous should lead to full necropsy and evaluation for infectious etiologies (including transmissible spongiform encephalopathies) by a trained veterinary pathologist. Results of these investigations should be documented.

3.4.4. Standard operating procedures that include maintenance of a subset of sentinel animals are encouraged. Monitoring of these animals will increase the probability of detection of subclinical, latent, or late-onset diseases such as transmissible spongiform encephalopathies.

# Individual Source Animal Screening and Qualification

The qualification of individual source animals should include documentation of breed and lineage, general health, and vaccination history, particularly the use of live and/or live attenuated vaccines (section 3.4.1). The presence of pathogens that result in acute infections should be documented and controlled by clinical examination and treatment of individual source animals, by use of individual quarantine periods that extend beyond the incubation period of pathogens of concern, and by herd surveillance indicating the presence or absence of infection in the herd from which the individual source animal is selected. The use of any drugs or biologic agents for treatment should be documented. During quarantine and/or prior to procurement of live cells, tissues, or organs for use in xenotransplantation, individual source animals should be screened for infectious agents relevant to the particular intended clinical use of the planned xenotransplantation product. The screening program should be guided by the surveillance and health history of the herd or colony.

3.5.1. In general, individual source animals should be quarantined for 3 weeks prior to procurement of live cells, tissues, or organs for use in xenotransplantation.
During the quarantine, acute illnesses due to infectious agents to which the animal may have been exposed shortly before removal from the herd or colony would be expected to become clinically apparent. It may be appropriate to modify the need for and duration of individual quarantine periods depending on the characterization and surveillance of the source animal herd or colony, the design of the facility in which the herd is bred and maintained, and the clinical urgency. When the quarantine period is shortened or eliminated, the justification should be documented and any potentially increased infectious risk should be addressed in the informed consent document.

3.5.1.1. During the quarantine period, candidate source animals should be examined by a veterinarian and screened for the presence of infectious agents (bacteria, including rickettsiae when appropriate; parasites; fungi; and viruses) by appropriate serologies and cultures, serum clinical chemistries (including those specific to the function of the organ or tissue to be procured), complete blood count and peripheral blood smear, and fecal exam for parasites. Evaluation for viruses that may not be recognized zoonotic agents but that have been documented to infect either human or nonhuman primate cells in vivo or in vitro should be considered. Particular attention should be given to viruses with demonstrated capacity for recombination, complementation, or pseudotyping. Surveillance of a closed herd or colony (as described in section 3.4.3.) will minimize the additional screening necessary to qualify individual member animals. The nature, timing, and results of surveillance of the herd or colony from which the individual animal is procured should be considered in designing appropriate additional screening of individual animals. These tests should be performed as closely as possible to the date of xenotransplantation while ensuring availability of results prior to clinical use.

3.5.1.2. Screening of a candidate source animal should be repeated prior to procurement of live cells, tissues, or organs for use in xenotransplantation if more than three months have elapsed since the initial screening and qualification were performed, or if the animal has been in contact with other non-quarantined animals between the quarantine period and the time of cell, tissue, or organ procurement.

3.5.1.3. Transportation of source animals may compromise the microbiologic protection ensured by the closed colony, so source animals should be transported using a system that reliably ensures microbiological isolation; careful attention to conditions of transport can minimize disease exposures during shipping. Transported source animals should be quarantined for a minimum period of three weeks after transportation, during which time appropriate screening should be performed. The sponsor may propose a shorter quarantine period if appropriate justification (reflecting the level of containment and the duration of the transportation) is provided. When source animals are transported intact, the sponsor should consult the FDA about further details of appropriate transport, quarantine, and screening. If the animals are transported across state or federal boundaries, the USDA should be consulted.

3.5.1.4. For the reasons cited above, it is preferable, whenever feasible, to procure live cells, tissues, or organs for use in xenotransplantation at the animal facility.
Precautions employed during transport to ensure microbiological isolation of the procured xenotransplantation product or live cells, tissues, or organs should be documented.

3.5.2. All procured cells, tissues, and organs intended for use in xenotransplantation should be as free of infectious agents as possible. The use of source animals in which infectious agents, including latent viruses, have been identified should be avoided. However, the presence of an infectious agent in certain anatomic sites, for example the alimentary tract, should not preclude use of the source animal if the agent is documented to be absent in the xenotransplantation product.

3.5.3. When feasible, a biopsy of the nonhuman animal live cells, tissues, or organs intended for use in xenotransplantation, of the xenotransplantation product itself, or of other relevant tissue should be evaluated for the presence of infectious agents by appropriate assays and histopathology prior to xenotransplantation, and then archived (section 3.7).

3.5.4. The sponsor should ensure that the linked records described in section 3.2.7. are available for review, when appropriate, by the local review bodies, the SACX, and the FDA. These records should include information on the results of the quarantine and screening of individual xenotransplantation source animals. In addition to records kept at the Source Animal Facility, a summary of the individual source animal record should accompany the xenotransplantation product and be archived as part of the medical record of the xenotransplantation product recipient.

3.5.5. The Source Animal Facility should notify the sponsor in the event that an infectious agent is identified in the source animal or herd subsequent to procurement of live cells, tissues, or organs for use in xenotransplantation (e.g., identification of delayed-onset transmissible spongiform encephalopathies in a sentinel animal).

3.5.6. The sponsor should ensure that the quarantine, screening, and qualification program is appropriately tailored to the specific source animal species, the animal husbandry history, the process for procuring the xenogeneic biomaterial and preparing the xenotransplantation product, and the clinical application. The sponsor should also ensure that the results of these procedures are reviewed and approved by persons with the appropriate expertise prior to the clinical application.

# Procurement and Screening of Nonhuman Animal Live Cells, Tissues or Organs Used for Xenotransplantation

3.6.1. Procurement and processing of cells, tissues, and organs should be performed using documented aseptic conditions designed to minimize contamination. These procedures should be conducted in designated facilities, which may be subject to inspection by appropriate oversight and regulatory authorities.

3.6.2. Cells, tissues, or organs intended for xenotransplantation that are maintained in culture prior to xenotransplantation should be periodically screened for maintenance of sterility, including screening for viruses and mycoplasma. The FDA publications "Guidance for Industry: Guidance for Human Somatic Cell Therapy and Gene Therapy (1998)," "Points To Consider in the Characterization of Cell Lines Used to Produce Biologicals (1993)," and "Points to Consider in the Manufacture and Testing of Therapeutic Products for Human Use Derived from Transgenic Animals (1995)" should be consulted for guidance.
The sponsor should develop, implement, and stringently enforce the standard operating procedures for the procurement and screening processes. Procedures that may inactivate or remove pathogens without compromising the integrity and function of the xenotransplantation product should be employed.

3.6.3. All steps involved in the procuring, processing, and screening of live cells, tissues, or organs or xenotransplantation products, up to the point of xenotransplantation, should be rehearsed preclinically to ensure reproducible quality control.

3.6.4. If nonhuman animal live cells, tissues, or organs for use in xenotransplantation are procured without euthanatizing the source animal, the designated PHS specimens should be archived (PHS specimens are discussed in section 3.7.1.) and the animal's health should be monitored for life. When source animals die or are euthanatized, a complete necropsy with gross, histopathologic, and microbiological evaluation by a trained veterinary pathologist should follow, regardless of the time elapsed between xenogeneic biomaterial procurement and death. This evaluation should include assessment for transmissible spongiform encephalopathies. The sponsor should maintain documentation of all necropsy results for 50 years beyond the date of necropsy as part of the animal health record (sections 3.2.7. and 3.4.). In the event that the necropsy reveals findings pertinent to the health of the xenotransplantation product recipient(s) (e.g., evidence of transmissible spongiform encephalopathies), the findings should be communicated to the FDA without delay (see, e.g., 21 CFR 312.32).

# Archives of Source Animal Medical Records and Specimens

Systematically archived source animal biologic samples, together with record keeping that allows rapid and accurate linking of xenotransplantation product recipients to the individual source animal records and archived biologic specimens, are essential for public health investigation and containment of emergent xenogeneic infections.

3.7.1. Source animal biologic specimens designated for PHS use (as outlined below) should be banked at the time of xenogeneic biomaterial procurement. These specimens should remain in archival storage for 50 years beyond the date of the xenotransplantation to permit retrospective analyses if a public health need arises. Such archived specimens should be readily accessible to the PHS and remain linked to both source animal and recipient health records. At the time of procurement of nonhuman animal live cells, tissues, or organs for use in xenotransplantation, plasma should be collected from the source animal and stored in sufficient quantity for subsequent serology and viral testing. In addition, the sponsor should recover and bank sufficient aliquots of cryopreserved leukocytes for subsequent isolation of nucleic acids and proteins, as well as aliquots for thawing viable cells for viral co-culture assays or other tissue culture assays. Ideally, at least ten 0.5-cc aliquots of citrated or EDTA-anticoagulated plasma should be banked. At least five aliquots of viable (1 × 10⁷) leukocytes should be cryopreserved. It may also be appropriate to collect paraffin-embedded, formalin-fixed, and cryopreserved tissue samples from source animal organs relevant to the specific protocol at the time of xenogeneic biomaterial procurement. Additionally, cryopreserved tissue samples representative of major organ systems (e.g., spleen, liver, bone marrow, central nervous system, lung) should be collected from source animals at necropsy.
The material submitted for review by FDA and, when appropriate, by the Secretary's Advisory Committee on Xenotransplantation (under development; see section 5.3) should justify the types of tissues, cells, and plasma taken for storage and any smaller quantities of plasma and leukocytes collected.

3.7.2. The sponsor should maintain archives of designated PHS specimens (section 3.7.1.) and serum collected for herd surveillance for 50 years beyond the date of collection (section 3.4.3.1.), and animal health records for 50 years beyond the date of the animal's death (section 3.2.7.).

# Disposal of Animals and Animal By-products

The need for advance planning for the ultimate disposition of source and sentinel animals bred for xenotransplantation, especially animals of species ordinarily used to produce food, should be anticipated. Generally, source and sentinel animals should not be used as pets, breeding animals, sources of human food via milk or meat, or as ingredients of feed for other animals, because of their potential to enter the human or animal food chain.

3.8.1. There may be species-specific situations in which animals from xenotransplant facilities can be considered safe for human food use, or as feed ingredients when disposed of through rendering. FDA's Center for Veterinary Medicine (CVM) regulates animal feed ingredients and also establishes conditions for the release of animals to the USDA Food Safety Inspection Service for inspection as food for humans. Persons wishing to offer animals into the human food or animal feed supply, or who have food safety questions, should consult with CVM. Food safety issues will be referred to CVM.

3.8.2. Animals from biomedical facilities that have not been authorized for release by CVM into the human food or animal feed supply may be adulterated under the Federal Food, Drug, and Cosmetic Act (21 U.S.C. 321 et seq.), unfit for food or feed, and potentially infectious. They should be disposed of in a manner consistent with infectious medical waste, in compliance with federal, state, and local requirements.

It is critical that adequate diagnostic assays and methodologies for surveillance of known infectious agents from the source animal are available prior to initiating the clinical trial. The sensitivity, specificity, and reproducibility of these testing methods should be documented under conditions that simulate those employed at the time of, and following, the xenotransplantation procedure. As with pre-xenotransplantation screening, assays under development may complement the surveillance process (see section 3.3.3.). The laboratory surveillance should include methods to detect infectious agents known to establish persistent latent infections in the absence of clinical symptoms (e.g., herpesviruses, retroviruses, papillomaviruses) and that are known or suspected to have been present in the xenotransplantation product. When the xenogeneic viruses of concern have similar human counterparts (e.g., simian cytomegalovirus), assays to distinguish between the two should be used in the post-xenotransplantation laboratory surveillance. Depending upon the degree of immunosuppression in the recipient, serologic assays may or may not be useful. Methods for analysis may include co-cultivation of cells coupled with appropriate detection assays.
# Xenotransplantation Product Recipients' Biologic Specimens Archived for Public Health Investigations (PHS Specimens)

Biological specimens obtained from xenotransplantation product recipients and designated for public health investigations (as distinct from specimens collected for clinical evaluation or laboratory surveillance) should be archived for 50 years beyond the date of the xenotransplantation to allow retrospective investigation of xenogeneic infections. The type and quantity of specimens archived may vary with the clinical procedure and the age of the xenotransplantation product recipient. In the application for FDA review, which may also be reviewed by the SACX, the sponsor should justify the amount and types of specimens to be designated for PHS use, including any differences from the recommendations described below.

At selected time points, at least three to five 0.5-cc aliquots of citrated or EDTA-anticoagulated plasma should be recovered and archived. At least two aliquots of viable (1 × 10⁷) leukocytes should be cryopreserved. Specimens from any xenotransplantation product that is removed (e.g., post-rejection or at the time of death) should be archived. The following schedule for archiving biological specimens is recommended:

(1) Prior to the xenotransplantation procedure, two sets of samples should be collected and archived one month apart. If this is not feasible, two sets should be collected and archived at times that are separated as much as possible. One set should be collected immediately prior to the xenotransplantation.
(2) Additional sets should be archived in the immediate post-xenotransplantation period and at approximately one month and six months after xenotransplantation.
(3) Collection should then be obtained annually for the first two years after xenotransplantation.
(4) After that, specimens should be archived every five years for the remainder of the recipient's life. More frequent archiving may be indicated by the specific protocol or the recipient's medical course.

4.1.2.1. In the event of the recipient's death, snap-frozen samples stored at -70°C, paraffin-embedded tissue, and tissue suitable for electron microscopy should be collected at autopsy from the xenotransplantation product and all major organs relevant to either the xenotransplantation or the clinical syndrome that resulted in the patient's death. These designated PHS specimens should be archived for 50 years beyond the date of collection.

4.1.2.2. The sponsor should maintain an accurate archive of the PHS specimens. In the absence of a central facility (section 5.2), these specimens should be archived with the safeguards necessary to ensure long-term storage (e.g., a monitored storage freezer alarm system and specimen archiving in split portions in separate freezers) and an efficient system for the prompt retrieval and linkage of data to medical records of recipients and source animals. The sponsor should maintain these archives and a record system that allows easy, accurate, and rapid linkage of information among the different record systems (i.e., the specimen archive, the recipient's medical records, and the records of the source animal) for 50 years beyond the date of xenotransplantation. If record systems are maintained in a computer database, electronic backups should be kept in a secure office facility and backup on hard copy should be routinely performed.
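To make the recommended collection time points above (section 4.1.2) concrete, the following minimal sketch lays the schedule out as day offsets from the xenotransplantation date. It is illustrative only and not part of the guideline: the function name and the `followup_years` horizon are hypothetical, and month/year lengths are approximated rather than calendar-exact.

```python
# Sketch of the recommended PHS specimen-collection schedule (section 4.1.2),
# expressed as offsets relative to the xenotransplantation date.
# Months/years are approximated as 30/365 days for illustration.

from datetime import date, timedelta

def phs_specimen_schedule(xenotx_date: date, followup_years: int = 20) -> list[tuple[str, date]]:
    """Return labeled collection dates implied by the guideline's schedule."""
    d = xenotx_date
    schedule = [
        ("pre-transplant set 1 (~1 month prior)", d - timedelta(days=30)),
        ("pre-transplant set 2 (immediately prior)", d),
        ("immediate post-transplant", d),
        ("~1 month post-transplant", d + timedelta(days=30)),
        ("~6 months post-transplant", d + timedelta(days=182)),
        ("year 1 annual", d + timedelta(days=365)),
        ("year 2 annual", d + timedelta(days=730)),
    ]
    # Every 5 years thereafter, for the remainder of the recipient's life;
    # `followup_years` is a hypothetical horizon for this sketch.
    year = 7
    while year <= followup_years:
        schedule.append((f"year {year} (5-yearly)", d + timedelta(days=365 * year)))
        year += 5
    return schedule

for label, when in phs_specimen_schedule(date(2000, 6, 1)):
    print(f"{when.isoformat()}  {label}")
```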
4.1.2.3. A clinical episode potentially representing a xenogeneic infection should prompt notification of the FDA, which will notify other federal and state health authorities as appropriate. Under these circumstances, the PHS may decide that an investigation involving the use of these archived biologic specimens is warranted to assess the public health significance of the infection.

# Infection Control

4.2.1.2. Additional infection control or isolation precautions (e.g., Airborne, Droplet, Contact) should be employed as indicated in the judgment of the hospital epidemiologist and the xenotransplantation team infectious disease specialist. For example, appropriate isolation precautions for each hospitalized xenotransplantation product recipient will depend upon the type of xenotransplantation, the extent of immunosuppression, and patient symptoms. Isolation precautions should be continued until a diagnosis has been established or the patient's symptoms have resolved. The appropriateness of isolation precautions and other infection control measures should be reassessed when the diagnosis is established, when the patient's symptoms change, and at the time of readmission and discharge. Discharge instructions should include specific education on appropriate infection control practices following discharge, including any special precautions recommended for disposal of biologic products. The most restrictive level of isolation should be used when patients exhibit respiratory symptoms, because airborne transmission of infectious agents is of most concern.

4.2.1.3. Health care personnel, including xenotransplantation team members, should adhere to recommended procedures for handling and disinfection/sterilization of medical instruments and disposal of infectious waste.

4.2.1.4. Biosafety level 2 (BSL-2) standard and special practices, containment equipment, and facilities should be used for activities involving clinical specimens from xenotransplantation product recipients. Particular attention should be given to sharps management and bioaerosol containment. BSL-3 standard and special practices and containment equipment should be employed in a BSL-2 facility when propagating an unidentified infectious agent isolated from a xenotransplantation product recipient.

# Acute Infectious Episodes

Most acute viral infectious episodes among the general population are never etiologically identified. Xenotransplantation product recipients are at risk for these infections and for other infections common among immunosuppressed allograft recipients. When the source of an illness in a recipient remains unidentified despite standard diagnostic procedures, it may be appropriate to perform additional testing of body fluid and tissue samples. The infectious disease specialist, in consultation with the hospital epidemiologist, the veterinarian, the clinical microbiologist, and other members of the xenotransplantation team, should assess each clinical episode and make a considered judgment regarding the significance of the illness, the need for and type of diagnostic testing, and specific infection control precautions. Other experts on infectious diseases and public health may also need to be consulted.

4.2.2.1. In immunosuppressed xenotransplantation product recipients, assays of antibody response may not detect infections reliably. In such patients, culture systems, genomic detection methodologies, and other techniques may detect infections for which serologic testing is inadequate.
Consequently, clinical centers where xenotransplantation is performed should have the capability to culture and identify viral agents using in vitro and in vivo methodologies, either on site or through active and documented collaborations. Specimens should be handled to ensure viability and to maximize the probability of isolation and identification of fastidious agents. Algorithms for the evaluation of unknown xenogeneic pathogens should be developed in consultation with appropriate experts, including persons with expertise in medical and veterinary infectious diseases, in the laboratory identification of unknown infectious agents, and in the management of biosafety issues associated with such investigations.

4.2.2.2. Acute and convalescent sera obtained in association with acute unexplained illnesses should be archived when judged appropriate by the infectious disease physician and/or the hospital epidemiologist. This would permit retrospective study and perhaps the identification of an etiologic agent.

# Health Care Workers

The risk to health care workers who provide post-xenotransplantation care to xenotransplantation product recipients is undefined. However, health care workers, including laboratory personnel, who handle the animal tissues/organs prior to xenotransplantation will have a definable risk of infection not exceeding that of animal care, veterinary, or abattoir workers routinely exposed to the source animal species, provided equivalent biosafety standards are employed. The sponsor should ensure that a comprehensive Occupational Health Services program is available to educate workers regarding the risks associated with xenotransplantation and to monitor for possible infections in workers. The Occupational Health Services program should include:

4.2.3.1. Education of Health Care Workers. All centers where xenotransplantation procedures are performed should develop appropriate xenotransplantation procedure-specific educational materials for their staff. These materials should describe the xenotransplantation procedure(s), the known and potential risks of xenogeneic infections posed by the procedure(s), and the research or health care activities that may pose the greatest risk of infection or nosocomial transmission of zoonotic or other infectious agents. Education programs should detail the circumstances under which the use of Standard Precautions and other isolation precautions is recommended, including the use of personal protective equipment and handwashing before and after all patient contacts, even if gloves are worn. In addition, the potential for transmission of these agents to the general public should be discussed.

4.2.3.2. Health Care Worker Surveillance. The sponsor and the Occupational Health Service in each clinical center should develop protocols for monitoring health care personnel. These protocols should describe methods for the storage and retrieval of personnel records and the collection of serologic specimens from workers. Baseline sera (i.e., sera collected prior to exposure to xenotransplantation products or recipients) should be collected from all personnel who provide direct care to xenotransplantation product recipients and from laboratory personnel who handle, or are likely to handle, animal cells, tissues, and organs or biologic specimens from xenotransplantation product recipients. Baseline sera can be compared to sera collected following occupational exposures; such baseline sera should be maintained for 50 years from the time of collection.
The activities of the Occupational Health Service should be coordinated with the Infection Control Program to ensure appropriate surveillance of infections in personnel.

4.2.3.3. Post-Exposure Evaluation and Management. Written protocols should be in place for the evaluation of health care workers who experience an exposure that carries a risk of transmission of an infectious agent, e.g., an accidental needle stick. Health care workers, including laboratory personnel, should be instructed to report exposures immediately to the Occupational Health Service. The post-exposure protocol should describe the information to be recorded, including the date and nature of the exposure, the xenotransplantation procedure, recipient information, actions taken as a result of the exposure (e.g., counseling, post-exposure management, and follow-up), and the outcome of the event. This information should be archived in a health exposure log (section 4.3.) and maintained for at least 50 years from the time of the xenotransplantation, despite any change in employment of the health care worker or discontinuation of xenotransplantation procedures at that center. Health care and laboratory workers should be counseled to report and seek medical evaluation for unexplained clinical illnesses occurring after the exposure.

# Health Care Records

The sponsor should maintain a cross-referenced system that links the relevant records of the xenotransplantation product recipient, the xenotransplantation product, the source animal(s), the animal procurement center, and significant nosocomial exposures. These records should include: (1) documentation of each xenotransplantation procedure, (2) documentation of significant nosocomial health exposures, and (3) documentation of the infectious disease screening and surveillance records for both xenotransplantation product source animals and recipients. These records should be updated regularly and cross-referenced to allow rapid and easy linkage between the clinical records of the source animal(s) and the xenotransplantation product recipient. To the extent permitted by applicable laws and/or regulations, the confidentiality of all medical and research records pertaining to human recipients should be maintained (section 2.5.10.).

4.3.1. The documentation of each xenotransplantation procedure includes the date and type of the procedure, the principal investigator(s) (PI), the xenotransplantation product recipient, the xenotransplantation product(s), the individual source animal(s) and the procurement facilities for these animals, as well as the health care workers associated with each procedure.

4.3.2. The documentation of significant nosocomial health exposures includes the persons involved, the date and nature of each potentially significant nosocomial exposure (exposures defined in the written Infection Control/Occupational Health Service protocol), and the actions taken.

# PUBLIC HEALTH NEEDS

# National Xenotransplantation Database

A pilot project to demonstrate the feasibility of, and identify system requirements for, a National Xenotransplantation Database is currently underway. It is anticipated that this pilot would be expanded into a fully operational database to collect data from all clinical centers conducting trials in xenotransplantation and all animal facilities providing animals or xenogeneic organs, tissues, or cells for clinical use.
Such a database would enable: (a) the recognition of rates of occurrence and clustering of adverse health events, including events that may represent outcomes of xenogeneic infections; (b) accurate linkage of these events to exposures on a national level; (c) notification of individuals and clinical centers regarding epidemiologically significant adverse events associated with xenotransplantation; and (d) biological and clinical research assessments. When such a database becomes functional, the sponsor should ensure that information requested by the database is provided in an accurate and timely manner. To the extent allowed by law, information derived from the database would be available to the public with appropriate confidentiality protections for any proprietary or individually identifiable information.

# Biologic Specimen Archives

The sponsor should ensure that the designated PHS specimens from the source animals, xenotransplantation products, and xenotransplantation product recipients are archived (sections 3.7.1, 3.5.3, and 4.1.2.). The biologic specimens should be collected and archived under conditions that will ensure their suitability for subsequent public health purposes, including public health investigations (section 4.1.2.3.). The location and nature of archived specimens should be documented in the health care records, and this information should be linked to the National Xenotransplantation Database when the latter becomes functional. DHHS is considering options for a central biological archive, e.g., one maintained by a private sector organization under contract to DHHS. Designated PHS specimens would be deposited in such a repository.

# Secretary's Advisory Committee on Xenotransplantation (SACX)

The SACX is currently being implemented by DHHS. As currently envisioned, the SACX will consider the full range of complex issues raised by xenotransplantation, including ongoing and proposed protocols, and make recommendations to the Secretary on policy and procedures. The SACX will also provide a forum for public discussion of issues when appropriate. These activities will facilitate DHHS efforts to develop an integrated approach to addressing emerging public health issues in xenotransplantation. The structure and functions of the SACX, as well as procedures for SACX review of protocols and issues, will be described in subsequent publications. Inquiries about the status and function of, and access to, the SACX should be directed to the Office of Science Policy, Office of the Secretary, DHHS, or the Office of Biotechnology Activities (OBA), formerly known as the Office of Recombinant DNA Activities (ORDA), Office of the Director, NIH.
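Sections 3.2.7.1 and 4.3 require record systems that permit easy, accurate, and rapid linkage among source animals, xenotransplantation products, recipients, and nosocomial exposures. The sketch below is one hypothetical way such cross-references could be modeled; every class and field name is illustrative, and the guideline mandates only the linkages and retention periods, not any particular implementation.

```python
# Hypothetical record-linkage model for the cross-references described in
# sections 3.2.7.1 and 4.3. Names and structure are illustrative only.

from dataclasses import dataclass, field

@dataclass
class SourceAnimal:
    animal_id: str            # permanent individual identification (section 3.2.6)
    facility_id: str          # Source Animal Facility
    herd_id: str
    health_record_ref: str    # pointer to the lifelong health history archive

@dataclass
class XenoProduct:
    product_id: str
    source_animal_ids: list[str]  # a product may draw on more than one animal
    tissue_type: str              # organ, tissue, or cell type used

@dataclass
class Recipient:
    recipient_id: str
    product_ids: list[str]
    phs_specimen_refs: list[str] = field(default_factory=list)

# Keyed registries support the required traversals in both directions:
# recipient -> product -> animal(s), and animal -> product(s) -> recipient(s).
animals: dict[str, SourceAnimal] = {}
products: dict[str, XenoProduct] = {}
recipients: dict[str, Recipient] = {}

def recipients_exposed_to(animal_id: str) -> list[str]:
    """Given a post-procurement finding in a source animal, find recipients."""
    return [
        r.recipient_id
        for r in recipients.values()
        for pid in r.product_ids
        if animal_id in products[pid].source_animal_ids
    ]
```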
Invasive pneumococcal disease (IPD), caused by Streptococcus pneumoniae (pneumococcus), remains a leading cause of serious illness in children and adults worldwide (1). After routine infant immunization with a 7-valent pneumococcal conjugate vaccine (PCV7) began in 2000, IPD among children aged <5 years in the United States decreased by 76%; however, IPD from non-PCV7 serotypes, particularly 19A, has increased (2). In February 2010, the Advisory Committee on Immunization Practices (ACIP) issued recommendations for use of a newly licensed 13-valent pneumococcal conjugate vaccine (PCV13) (3). PCV13 contains the seven serotypes in PCV7 (4, 6B, 9V, 14, 18C, 19F, and 23F) and six additional serotypes (1, 3, 5, 6A, 7F, and 19A). To characterize the potentially vaccine-preventable IPD burden among children aged <5 years in the United States, CDC and investigators analyzed 2007 data from Active Bacterial Core surveillance (ABCs). This report summarizes the results of that analysis, which found that among 427 IPD cases with known serotype in children aged <5 years, 274 (64%) were caused by serotypes contained in PCV13. In 2007, an estimated 4,600 cases of IPD occurred in children in this age group in the United States, including approximately 2,900 cases caused by serotypes covered in PCV13 (versus 70 cases caused by PCV7 serotypes). PCV13 use has the potential to further reduce IPD in the United States. Post-licensure monitoring will help characterize the effectiveness of PCV13 in different populations and track the potential changes in disease burden caused by non-PCV13 serotypes.

ABCs, part of the Emerging Infections Program (EIP) Network, is a collaboration between CDC and 10 selected sites that conducts population- and laboratory-based active surveillance. During 2006 and 2007, IPD surveillance was conducted in Connecticut, Minnesota, and New Mexico, and in selected counties in California, Colorado, Georgia, Maryland, New York, Oregon, and Tennessee. In 2007, the total catchment population of children aged <5 years for these 10 sites was 2.1 million. A case of IPD was defined as isolation of S. pneumoniae from a normally sterile body site (primarily blood or cerebrospinal fluid) in a resident of an ABCs area. Pneumococcal isolates were serotyped at CDC and reference laboratories, and serotype information was analyzed by vaccine serotype group (Table 1). Age-, race-, and vaccine serotype-specific rates of IPD were calculated using observed IPD cases in the 2007 ABCs data as the numerator and U.S. Census Bureau projections of the 2007 population of ABCs sites as the denominator. To estimate the incidence and total number of IPD cases in the United States in 2007, rates were standardized to the entire U.S. population, adjusting for small differences between the age and race distributions of ABCs areas and those of the U.S. population. Investigators reviewed medical records to identify children aged 24-59 months with underlying medical conditions who are recommended by ACIP to receive the 23-valent pneumococcal polysaccharide vaccine (PPSV23) (1).
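The national estimates above rest on straightforward rate arithmetic: observed ABCs cases are converted to rates per 100,000, standardized to the U.S. population, and projected to national counts. The sketch below illustrates that arithmetic with invented stratum numbers; the actual analysis used census projections and finer age and race strata.

```python
# Illustration of direct standardization of surveillance rates.
# The stratum counts here are invented, NOT the report's data.

def rate_per_100k(cases: int, population: int) -> float:
    return cases / population * 1e5

def standardized_national_estimate(strata: list[dict]) -> tuple[float, float]:
    """Weight ABCs stratum rates by the U.S. population distribution."""
    us_total = sum(s["us_pop"] for s in strata)
    std_rate = sum(
        rate_per_100k(s["abcs_cases"], s["abcs_pop"]) * s["us_pop"] / us_total
        for s in strata
    )
    return std_rate, std_rate / 1e5 * us_total

strata = [
    {"abcs_cases": 280, "abcs_pop": 1_000_000, "us_pop": 10_000_000},
    {"abcs_cases": 213, "abcs_pop": 1_100_000, "us_pop": 11_000_000},
]
rate, n_cases = standardized_national_estimate(strata)
print(f"standardized rate: {rate:.1f} per 100,000; estimated cases: {n_cases:,.0f}")
# For scale: the report's 22 per 100,000, applied to the roughly 21 million
# U.S. children aged <5 years, yields approximately 4,600 cases.
```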
Characteristics of these high-risk children and of healthy children were compared by chi-square test; data from 2006 and 2007 were combined because of the small number of IPD cases in children with underlying medical conditions in this age group.

In 2007, a total of 493 children aged <5 years (<60 months) with IPD were identified in the ABCs population (Table 2), and information on the serotype of the pneumococcal isolate was available for 427 (87%) of those children. Among the 427, the group aged <12 months accounted for 36% of all cases, and the group aged 12-23 months accounted for 29%. Overall rates were highest in children aged <12 months and 12-23 months (40.5 and 31.2 cases per 100,000 population, respectively); among children aged 24-59 months, rates of all IPD decreased with each additional year of age. Information on race was available for 378 (89%) of the cases with serotype information. Among children aged <5 years, rates of overall IPD in black children (35.8 cases per 100,000) and children of other races (30.7 cases per 100,000) were approximately twofold and 1.7-fold higher, respectively, than the rate for white children (18.4 per 100,000).

Among the 427 IPD cases with known serotype in children aged <5 years, 274 (64%) were caused by serotypes contained in PCV13. Of these 274 cases, 260 (95%) were caused by three of the six additional serotypes (3, 7F, and 19A) that are not included in PCV7; overall, 180 (42%) of the 427 were caused by serotype 19A. Within each 1-year age group, the proportions of all IPD cases caused by serotypes covered by PCV13 were relatively similar, ranging from 59% to 71%. The proportions of all IPD cases caused by the 13 serotypes were comparable in black children (61%), children of other races (62%), and white children (67%).

Information on hospitalization and clinical outcome was available for 99% of serotyped IPD cases. Among 272 children with IPD caused by serotypes covered by PCV13 for whom hospitalization status, clinical presentation, and outcome were known, 168 (62%) were hospitalized and four (2%) died; 101 (37%) had bacteremia without a confirmed source, 24 (9%) had meningitis, and 115 (42%) had pneumonia with bacteremia.

Based on the 2007 rate of IPD in children aged <5 years (22 cases per 100,000), an estimated 4,600 cases of IPD occurred in this age group in the United States. Included among those cases were an estimated 70 cases caused by serotypes covered in PCV7 and 2,900 cases caused by serotypes covered in PCV13.

During 2006-2007, a total of 301 IPD cases with a known serotype occurred among children aged 24-59 months; 31 cases (10%) occurred in children at high risk who are recommended for vaccination with PPSV23 (1). Of these 31 cases, the 11 serotypes included in PPSV23 but not in PCV13 (Table 1) accounted for four cases (13%), serotypes covered in PCV13 accounted for 13 cases (42%), and the remaining 14 cases (45%) were caused by serotypes not covered in either vaccine. PCV13 serotypes accounted for a smaller proportion of cases among children with underlying medical conditions than among healthy children aged 24-59 months (42% versus 65%; p = 0.01).

TABLE 1. Pneumococcal serotypes included in PCV7, PCV13, and PPSV23*

| Serotype | PCV7 | PCV13 | PPSV23 |
|----------|------|-------|--------|
| 4        | X    | X     | X      |
| 6B       | X    | X     | X      |
| 9V       | X    | X     | X      |
| 14       | X    | X     | X      |
| 18C      | X    | X     | X      |
| 19F      | X    | X     | X      |
| 23F      | X    | X     | X      |
| 1        |      | X     | X      |
| 3        |      | X     | X      |
| 5        |      | X     | X      |
| 6A       |      | X     |        |
| 7F       |      | X     | X      |
| 19A      |      | X     | X      |
| 2        |      |       | X      |
| 8        |      |       | X      |
| 9N       |      |       | X      |
| 10A      |      |       | X      |
| 11A      |      |       | X      |
| 12F      |      |       | X      |
| 15B      |      |       | X      |
| 17F      |      |       | X      |
| 20       |      |       | X      |
| 22F      |      |       | X      |
| 33F      |      |       | X      |

* The 13-valent pneumococcal conjugate vaccine (PCV13) includes the seven serotypes in the 7-valent vaccine (PCV7) and six additional serotypes.
The 23-valent pneumococcal polysaccharide vaccine (PPSV23) includes 12 of the serotypes included in PCV13 (it does not include serotype 6A) and 11 additional serotypes.

Routine infant immunization with PCV7 since 2000 has decreased rates of IPD in young children markedly, but IPD from non-PCV7 serotypes, predominantly serotype 19A, has increased and partially offset these reductions (2,4). Overall, rates of IPD have remained stable at 22-25 cases per 100,000 since 2002 (2,4). Based on the findings in this report, the use of PCV13 in the routine immunization schedule has the potential to further reduce IPD caused by the six additional serotypes (1, 3, 5, 6A, 7F, or 19A) among children aged <5 years.

PPSV23 has been available for use in adults aged ≥65 years and persons aged ≥2 years with certain underlying medical conditions since 1983 (1). In this analysis, approximately 42% of IPD cases among children aged 24-59 months with underlying medical conditions were caused by serotypes covered in PCV13; an additional 13% of cases were caused by serotypes not covered in PCV13 but included in PPSV23. The role for PPSV23 in high-risk children might become clearer when more data are available on disease burden and serotype distribution after routine use of PCV13.

Based on available safety, immunogenicity, and disease burden data, ACIP also recommends that a single supplemental PCV13 dose be given to healthy children aged 14-59 months, and to children with underlying medical conditions up to age 71 months, who have already completed a schedule of PCV7 (3). In one study, a single dose of PCV13 in children aged ≥12 months who had received 3 previous doses of PCV7 induced an antibody response comparable to that of the 3-dose infant PCV13 series, and the safety profile of this supplemental dose was comparable to that after a fourth dose of PCV13 (5). Although rates of IPD are relatively low in these older children, ACIP also considered the emergence of multidrug-resistant serotype 19A strains causing meningitis and other severe invasive infections (6,7) and the substantial burden of noninvasive pneumococcal disease as additional factors in making the recommendation. Cost-effectiveness evaluations suggest that supplemental PCV13 vaccination is comparable in cost-effectiveness to other accepted interventions (CDC, unpublished data, 2009).

After PCV7 was introduced, rates of IPD caused by the seven serotypes covered in the vaccine also decreased substantially among unvaccinated children and adults. This indirect (or herd) effect resulted from reduced nasopharyngeal carriage of pneumococcus in vaccinated children and reduced transmission from children to unvaccinated children and adults (8). Immunization of children with PCV13 also is anticipated to have herd effects among adults. For example, as of 2007, serotype 19A had emerged as the most common cause of IPD in all age groups after PCV7 introduction (CDC, unpublished data, 2009). Colonization and disease caused by serotype 19A have an epidemiologic pattern similar to that of the PCV7 serotypes, so some degree of herd effect in the population might be expected. In contrast, some of the other new serotypes in PCV13 might have different epidemiologic characteristics (9). In particular, serotypes 1 and 5 are rarely found in the nasopharynx, so the potential herd effect of PCV13 vaccination on disease caused by these serotypes is uncertain. In the United States, however, serotypes 1 and 5 are relatively uncommon causes of IPD.
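The comparison of serotype coverage between high-risk and healthy children reported earlier (42% versus 65%; p = 0.01) is a standard chi-square test on a 2×2 table. The sketch below reconstructs approximate cell counts from the reported totals (31 high-risk and roughly 270 healthy serotyped cases), so its p-value only approximates the published one.

```python
# Approximate reconstruction of the reported two-proportion comparison.
# Cell counts are derived from the stated totals, not taken from the report.

from scipy.stats import chi2_contingency

table = [
    [13, 31 - 13],     # high-risk: PCV13-serotype cases vs. other serotypes
    [176, 270 - 176],  # healthy: ~65% of ~270 serotyped cases
]
# correction=False gives the uncorrected Pearson chi-square statistic.
chi2, p, dof, expected = chi2_contingency(table, correction=False)
print(f"chi-square = {chi2:.2f}, dof = {dof}, p = {p:.3f}")  # p near 0.01
```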
Although rates of pneumonia hospitalizations among children aged <2 years decreased after PCV7 introduction (10), the potential effects of PCV13 on noninvasive disease, such as nonbacteremic pneumonia and otitis media, are difficult to evaluate because of the lack of standard case definitions, sensitive and specific diagnostic methods, and routine surveillance for these conditions. Information on these noninvasive pneumococcal diseases is not available in the ABCs dataset. Because PCV13 was licensed on the basis of immunogenicity studies rather than clinical efficacy trials, post-licensure monitoring is important to characterize the effectiveness of PCV13 in different populations and to track changes in disease burden caused by non-PCV13 serotypes.

# Licensure of a 13-Valent Pneumococcal Conjugate Vaccine (PCV13) and Recommendations for Use Among Children - Advisory Committee on Immunization Practices (ACIP), 2010

On February 24, 2010, a 13-valent pneumococcal conjugate vaccine (PCV13) was licensed by the Food and Drug Administration (FDA) for prevention of invasive pneumococcal disease (IPD) caused by the 13 pneumococcal serotypes covered by the vaccine and for prevention of otitis media caused by serotypes in the 7-valent pneumococcal conjugate vaccine formulation (PCV7). PCV13 is approved for use among children aged 6 weeks-71 months and succeeds PCV7, which was licensed by FDA in 2000. The Pneumococcal Vaccines Work Group of the Advisory Committee on Immunization Practices (ACIP) reviewed available data on the immunogenicity, safety, and cost-effectiveness of PCV13, and on estimates of the vaccine-preventable pneumococcal disease burden. The work group then presented policy options for consideration by the full ACIP. This report summarizes the recommendations approved by ACIP on February 24, 2010, for 1) routine vaccination of all children aged 2-59 months with PCV13, 2) vaccination with PCV13 of children aged 60-71 months with underlying medical conditions that increase their risk for pneumococcal disease or complications, and 3) PCV13 vaccination of children who previously received 1 or more doses of PCV7 (1). CDC guidance for vaccination providers regarding the transition from the PCV7 to the PCV13 immunization program also is included.

# Prevnar 13 Licensure

Vaccine formulation. PCV13 contains polysaccharides of the capsular antigens of Streptococcus pneumoniae serotypes 1, 3, 4, 5, 6A, 6B, 7F, 9V, 14, 18C, 19A, 19F, and 23F, individually conjugated to a nontoxic diphtheria CRM197 (CRM, cross-reactive material) carrier protein. A 0.5-mL PCV13 dose contains approximately 2 μg of polysaccharide from each of 12 serotypes and approximately 4 μg of polysaccharide from serotype 6B; the total concentration of CRM197 is approximately 34 μg. The vaccine contains 0.125 mg of aluminum as aluminum phosphate adjuvant and no thimerosal preservative. PCV13 is administered intramuscularly and is available in single-dose, prefilled syringes that do not contain latex (2).

Immunogenicity profile. The immunogenicity of PCV13 was evaluated in a randomized, double-blind, active-controlled trial in which 663 U.S. infants received at least 1 dose of PCV13 or PCV7 (3). To compare PCV13 antibody responses with those for PCV7, criteria for noninferior immunogenicity after 3 and 4 doses of PCV13 (pneumococcal immunoglobulin G antibody concentrations measured by enzyme immunoassay) were defined for the seven serotypes common to PCV7 and PCV13 (4, 6B, 9V, 14, 18C, 19F, and 23F) and for the six additional serotypes in PCV13 (serotypes 1, 3, 5, 6A, 7F, and 19A). Functional antibody responses were measured by opsonophagocytosis assay (OPA) in a subset of the study population.
Evaluation of these immunologic parameters indicated that PCV13 induced levels of antibodies that were comparable to those induced by PCV7 and shown to be protective against IPD (3). Among infants receiving the 3-dose primary series, responses to three PCV13 serotypes (the shared serotypes 6B and 9V, and new serotype 3) did not meet the prespecified, primary endpoint criterion (percentage of subjects achieving an IgG seroresponse of ≥0.35 μg/ mL 1 month after the third dose); however, detectable OPA antibodies to each of these three serotypes indicated the presence of functional antibodies (3). The percentages of subjects with an OPA titer ≥1:8 were similar for the seven common serotypes among PCV13 recipients (range: 90%-100%) and PCV7 recipients (range: 93%-100%); the proportion of PCV13 recipients with an OPA titer ≥1:8 was >90% for all of the 13 serotypes (3). After the fourth dose, the IgG geometric mean concentrations (GMCs) were comparable for 12 of the 13 serotypes; the noninferiority criterion was not met for serotype 3. However, measurable OPA titers were present for all serotypes after the fourth dose; the percentage of PCV13 recipients with an OPA titer ≥1:8 ranged from 97% to 100% for the 13 serotypes and was 98% for serotype 3 (3). # Licensure of a 13-Valent Pneumococcal Conjugate Vaccine (PCV13) and Recommendations for Use Among Children -Advisory Committee on Immunization Practices (ACIP), 2010 A schedule of 3 doses of PCV7 followed by 1 dose of PCV13 resulted in somewhat lower IgG GMCs for the six additional serotypes compared with a 4-dose PCV13 series. However, the OPA responses after the fourth dose were comparable for the two groups, and the clinical relevance of these lower antibody responses is not known. The single dose of PCV13 among children aged ≥12 months who had received 3 doses of PCV7 elicited IgG immune responses to the six additional serotypes that were comparable to those after a 3-dose infant PCV13 series (3). Safety profile. The safety of PCV13 was assessed in 13 clinical trials in which 4,729 healthy infants and toddlers were administered at least 1 dose of PCV13 and 2,760 children received at least 1 dose of PCV7, concomitantly with other routine pediatric vaccines. The most commonly reported (more than 20% of subjects) solicited adverse reactions that occurred within 7 days after each dose of PCV13 were injection-site reactions, fever, decreased appetite, irritability, and increased or decreased sleep (2). The incidence and severity of solicited local reactions at the injection site (pain/tenderness, erythema, and induration/ swelling) and solicited systemic reactions (irritability, drowsiness/increased sleep, decreased appetite, fever, and restless sleep/decreased sleep) were similar in the PCV13 and PCV7 groups. These data suggest that the safety profiles of PCV13 and PCV7 are comparable (2); CDC will conduct postlicensure monitoring for adverse events, and the manufacturer will conduct a Phase IV study. Supportive data for safety outcomes were provided by a catch-up study among 354 children aged 7-71 months who received at least 1 dose of PCV13. In addition, an open label study was conducted among 284 healthy U.S. children aged 15-59 months who had previously received 3 or 4 doses of PCV7 (2). Among these children, the frequency and severity of solicited local reactions and systemic adverse reactions after 1 dose of PCV13 were comparable to those among children receiving their fourth dose of PCV13 (2). 
# Indications and Guidance for Use ACIP recommends PCV13 for all children aged 2-59 months. ACIP also recommends PCV13 for children aged 60-71 months with underlying medical conditions that increase their risk for pneumococcal disease or complications (Table 1). No previous PCV7/PCV13 vaccination. The ACIP recommendation for routine vaccination with PCV13 and the immunization schedules for infants and toddlers through age 59 months who have not received any previous PCV7 or PCV13 doses are the same as those previously published for PCV7 (4,5). PCV13 is recommended as a 4-dose series at ages 2, 4, 6, and 12-15 months. Infants receiving their first dose at age ≤6 months should receive 3 doses of PCV13 at intervals of approximately 8 weeks (the minimum interval is 4 weeks). The fourth dose is recommended at age 12-15 months, and at least 8 weeks after the third dose (Table 2). Children aged 7-59 months who have not been vaccinated with PCV7 or PCV13 previously should receive 1 to 3 doses of PCV13, depending on their age at the time when vaccination begins and whether underlying medical conditions are present (Table 2). Children aged 24-71 months with chronic medical conditions that increase their risk for pneumococcal disease should receive 2 doses of PCV13. Interruption of the vaccination schedule does not require reinstitution of the entire series or the addition of extra doses. Incomplete PCV7/ PCV13 vaccination. Infants and children who have received 1 or more doses of PCV7 should complete the immunization series with PCV13 (Table 3). Children aged 12-23 months who have received 3 doses of PCV7 before age 12 months are recommended to receive 1 dose of PCV13, given at least 8 weeks after the last dose of PCV7. No additional PCV13 doses are recommended for children aged 12-23 months who received 2 or 3 doses of PCV7 before age 12 months and at least 1 dose of PCV13 at age ≥12 months. Similar to the previous ACIP recommendation for use of PCV7 (6), 1 dose of PCV13 is recommended for all healthy children aged 24-59 months with any incomplete PCV schedule (PCV7 or PCV13). For children aged 24-71 months with underlying medical conditions who have received any incomplete schedule of <3 doses of PCV (PCV7 or PCV13) before age 24 months, 2 doses of PCV13 are recommended. For children with underlying medical conditions who have received 3 doses of PCV (PCV7 or PCV13) a single dose of PCV13 is recommended through age 71 months. The minimum interval between doses is 8 weeks. 1), a single supplemental PCV13 dose is recommended through age 71 months Complete PCV7 vaccination. A single supplemental dose of PCV13 is recommended for all children aged 14-59 months who have received 4 doses of PCV7 or another age-appropriate, complete PCV7 schedule (Table 3). For children who have underlying medical conditions, a single supplemental PCV13 dose is recommended through age 71 months. This includes children who have previously received the 23-valent pneumococcal polysaccharide vaccine (PPSV23). PCV13 should be given at least 8 weeks after the last dose of PCV7 or PPSV23. In addition, a single dose of PCV13 may be administered to children aged 6-18 years who are at increased risk for IPD because of sickle cell disease, human immunodeficiency virus (HIV) infection or other immunocompromising condition, cochlear implant, or cerebrospinal fluid leaks, regardless of whether they have previously received PCV7 or PPSV23. Routine use of PCV13 is not recommended for healthy children aged ≥5 years. 
# Precautions and Contraindications Before administering PCV13, vaccination providers should consult the package insert for precautions, warnings, and contraindications. Vaccination with PCV13 is contraindicated among persons known to have severe allergic reaction (e.g., anaphylaxis) to any component of PCV13 or PCV7 or to any diphtheria toxoid-containing vaccine (2). # Transition from PCV7 to PCV13 When PCV13 is available in the vaccination provider's office, unvaccinated children and children incompletely vaccinated with PCV7 should complete the immunization series with PCV13. If the only pneumococcal conjugate vaccine available in a provider's office is PCV7, that vaccine should be provided to children and infants who are due for vaccination; these children should complete their series with PCV13 at subsequent visits. Children for whom the supplemental PCV13 dose is recommended should receive it at their next medical visit, at least 8 weeks after the last dose of PCV7. ) other insurance: some other source of health insurance, such as a self-directed plan or student health insurance. ¶ A key element of the health-care legislation in Massachusetts was creation of the Commonwealth Health Insurance Connector, the agency responsible for connecting residents to either Commonwealth Care, a subsidized program for certain adults who have not been offered employer-sponsored insurance, or Commonwealth Choice, an unsubsidized offering of six private health plans available through the Health Connector to individuals, families, and certain employers. data for adults aged 18-34 years also were analyzed separately to more closely examine this traditionally underinsured age group. The statistical significance (p<0.05) of differences between health indicators in the pre-law and post-law periods was estimated using the Wald chi-square test. Variability of point estimates of weighted proportions was indicated by 95% confidence intervals. The percentage of respondents who reported having health insurance rose 5.5%, from 91.3% in the pre-law period to 96.3% in the post-law period (Table 1). Among major subpopulations, the largest increases were observed among Hispanics (14.2%), persons with less than a high school diploma (12.0%), and persons with annual household incomes <$25,000 (11.9%). Nonetheless, in the post-law period, these same three subpopulations continued to have the lowest percentages of health insurance coverage: 89.0% for Hispanics, 88.6% for persons with less than a high school diploma, and 89.0% for persons with annual household incomes <$25,000. By 2008, approximately 8% of publicly insured Massachusetts residents were obtaining their health insurance through the new public Commonwealth Care program. The percentage of insured residents with public health insurance (including those aged 18-64 years who were eligible for Medicare) increased 29.7%, from 14.8% in the pre-law period to 19.2% in the post-law period (Table 1). The percentage of insured residents with private insurance decreased 3.2%, from 80.8% to 78.2%, and the percentage of insured residents with other types of insurance (e.g., a self-directed plan or student health insurance) decreased 40.9%, from 4.4% to 2.6% (Table 1). The overall percentage of respondents who reported having a personal health-care provider increased significantly, from 86.1% in the pre-law period to 87.7% in the post-law period (Table 2). 
The largest reported increases occurred among Hispanic respondents who answered the survey in Spanish (30.3% increase) and among Hispanics overall (17.0% increase). The percentage of respondents who reported having a routine checkup within the past year also increased significantly, from 71.9% in the pre-law period to 74.1% in the post-law period (Table 3). The largest reported increases occurred among Hispanic respondents who answered the survey in Spanish (19.9% increase) and among Hispanics overall (14.1% increase). The percentage of men reporting a routine checkup increased 5.1%, from 66.4% to 69.8%, but the percentage of women reporting a routine checkup did not change significantly. The percentage of respondents with chronic conditions who reported having a personal health-care provider or having had an annual checkup also did not change significantly after enactment of the health-care coverage law. # Editorial Note The results of this analysis indicate that the estimated percentage of Massachusetts residents covered by health insurance increased significantly after passage of health-care coverage legislation. A wider comparison, between 2005 BRFSS state survey results and 2008 results, indicated that health insurance coverage increased from 89% to 97% among all state residents (including children and adults aged ≥65 years); the increase included an estimated 300,000 newly insured persons aged 18-64 years (3). After implementation of the health-care coverage law, the proportion of respondents who said they lacked health insurance was approximately cut in half, and 8% of publicly insured respondents were obtaining health insurance through the state's new Commonwealth Care program. The effects observed likely are attributable to the new law; although, because of limitations inherent in such studies, a causal link cannot be proven. Increases in health insurance coverage can result from multiple factors, such as a higher employment rate, reduction in health insurance premiums, or expansion of existing public health insurance programs. During 1996-1999, Massachusetts observed an increase in the percentage of persons with health insurance (3) after the state expanded Medicaid eligibility; as a result, an additional 124,000 Massachusetts residents obtained insurance coverage (4). In this analysis, the observed increases in the percentage of insured among traditionally underserved subpopulations (e.g., Hispanics, persons with less than a high school diploma, and persons with annual household incomes 1 year when asked, "For how long have your activities been limited because of your major impairment, health problem, or disability?" † † † Responded "yes" to the question, "Have you ever been told by a doctor that you have diabetes?" § § § Responded "yes" to both of these questions: "Have you ever been told by a doctor, nurse, or other health professional that you had asthma?" and "Do you still have asthma?" ¶ ¶ ¶ Defined as coverage through an employer, someone else's employer, or a plan purchased by the person covered. Defined as coverage through Medicare, Medicaid, MassHealth, CommonHealth MassHealth, health maintenance organizations offered through Neighborhood Health Plan, Fallon Community Health Plan, BMC HealthNet or Network Health, Commonwealth Care, the military, CHAMPUS, TriCare, Veterans Administration (VA), CHAMP-VA, Indian Health Service, or the Alaska Native Health Service. 
In this analysis, the observed increases in the percentage of insured among traditionally underserved subpopulations (e.g., Hispanics, persons with less than a high school diploma, and persons with annual household incomes <$25,000) serve to strengthen the hypothesis that the increases in insurance coverage are attributable to the health-care coverage law, because implementation of heavily subsidized health insurance programs likely would affect these subpopulations first. Data from similar surveys in Massachusetts support this same hypothesis (5-7). For example, reports from the Massachusetts Division of Health Care Finance and Policy, which focused specifically on insurance status, found that from fall 2006 to fall 2008, the number of uninsured working-age adults was reduced by nearly 70%. Most of the gains in insurance coverage were concentrated among lower-income adults (7). In contrast, according to U.S. Census data, from 2007 to 2008, the overall proportion of U.S. adults with health insurance declined (8). The largest increases in insurance coverage were among Hispanic respondents overall and Hispanic respondents who answered the survey in Spanish; the results showed an 18.4% increase for persons responding in Spanish and a 14.2% increase for Hispanics overall. Traditionally, a larger proportion of Hispanics in Massachusetts have lacked access to health care, compared with other racial/ethnic populations (9,10). However, despite these increases, Hispanics continued to have the lowest health insurance coverage and the lowest percentage of persons with a personal health-care provider of any subpopulation. The percentage of younger adults, whites, blacks, and persons with chronic diseases who reported having a personal health-care provider did not change significantly. One reason might be that more time is needed for the effects of improved health-care access to be realized in these groups. Another reason might be that health-care providers are not equally accessible for certain groups or in certain areas of the state.
§ § § Responded "yes" to both of these questions: "Have you ever been told by a doctor, nurse, or other health professional that you had asthma?" and "Do you still have asthma?" of the state. Although the cost of a doctor visit might also be a factor, 2008 BRFSS data have shown that only 6% of all respondents reported that they were unable to visit a doctor during the past year because of cost, compared with 8% in 2006 (9). In addition to an increase in the percentage of persons with health insurance, the findings in this analysis indicate changes in the proportion of plans that were private, public, or other (e.g., a self-directed plan or student health insurance) in Massachusetts. Those proportions changed from 80.8% private, 14.8% public, and 4.4% other before the law was enacted to 78.2%, 19.2%, and 2.6%, respectively. These changes were similar to U.S. Census data, which found that the proportion of adults with private health insurance declined from 2007 to 2008, while the proportion of publicly insured adults increased (8). In addition to the limitations on establishing causality, the findings in this report are subject to at least three other limitations. First, BRFSS only samples households with landline telephones. Minorities, persons with lower socioeconomic status, and younger adults typically have lower landline telephone coverage and might be underrepresented in this report. However, poststratification weighting might correct some bias resulting from lack of landline telephones. Second, depending on when the survey was administered, some responses might pertain to health-care activities (e.g., having a personal-care provider in the past year) that actually occurred during the 12-month transition period. Finally, BRFSS data are based on self-report and might be subject to error (e.g., underreporting of chronic conditions). The findings in this report and others (10) can help local health departments in areas with large underserved populations assess local public health needs, enhance cultural competency, engage hospitals in community primary-care efforts, and address the availability of health-care providers. MDPH is targeting outreach more precisely to increase health insurance enrollment and health-care access among state residents. Afghanistan, Pakistan, India, and Nigeria are the four remaining countries where indigenous wild poliovirus (WPV) transmission has never been interrupted (1). This report updates previous reports (1,2) and describes polio eradication activities in Afghanistan and Pakistan during January-December 2009 and proposed activities in 2010 to address challenges. During 2009, both countries continued to conduct coordinated supplemental immunization activities (SIAs) and used multiple strategies to reach previously unreached children. These strategies included 1) use of short interval additional dose (SIAD) SIAs to administer a dose of oral poliovirus vaccine (OPV) within 1-2 weeks after a prior dose during negotiated periods of security; 2) systematic engagement of local leaders; 3) negotiations with conflict parties; and 4) increased engagement of nongovernmental organizations delivering basic health services. However, security problems continued to limit access by vaccination teams to large numbers of children. In Afghanistan, poliovirus transmission during 2009 predominantly occurred in 12 highrisk districts in the conflict-affected South Region; 38 WPV cases were confirmed in 2009, compared with 31 in 2008. 
In Pakistan, 89 WPV cases were confirmed in 2009, compared with 118 in 2008, but transmission persisted both in security-compromised areas and in accessible areas, where managerial and operational problems continued to affect immunization coverage. Continued efforts to enhance safe access of vaccination teams in insecure areas will be required for further progress toward interruption of WPV transmission in Afghanistan and Pakistan. In addition, substantial improvements in subnational accountability and oversight are needed to improve immunization activities in Pakistan.
# Immunization Activities
Reported routine vaccination coverage of infants with 3 OPV doses (OPV3) in 2009 was 85% nationally in Afghanistan and 81% in Pakistan. During 2009, large-scale house-to-house SIAs targeting children aged <5 years were conducted in both countries. In Afghanistan, where >370,000 children aged <5 years were estimated to live in areas inaccessible to vaccination teams, the percentage of targeted children living in inaccessible areas decreased from >20% during SIAs conducted in January and March to 5% during the July and September SIAs. In Pakistan, the percentage of children aged <5 years who were living in SIA-inaccessible areas of NWFP increased from 10% in January to 20% in May and July, and then decreased to <5% in October and November. However, in FATA itself, the estimated percentage of children aged <5 years living in inaccessible areas increased from 15% in January to 30% by November.
# AFP Surveillance
In 2009, AFP surveillance performance indicators remained high in both countries, including in areas with ongoing WPV transmission. The annual national nonpolio AFP rate (per 100,000 population aged <15 years) was 8.5 in Afghanistan (range among the eight regions: 6.7-12.0) and 6.1 in Pakistan (range among the six provinces/territories: 2.9-9.2). The percentage of nonpolio AFP cases for which adequate specimens were collected was 93% in Afghanistan (range: 81%-97%) and 90% in Pakistan (range: 83%-96%) (Table 2).
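For reference, the two AFP surveillance performance indicators cited above are simple ratios. A minimal sketch, with placeholder counts rather than the actual 2009 surveillance data (the adequate-specimen definition in the comment is the standard WHO criterion, an assumption not stated in this report):

```python
# Minimal sketch of the two AFP surveillance performance indicators
# described above. The case counts and population figure below are
# illustrative placeholders, not the actual 2009 surveillance data.

def nonpolio_afp_rate(nonpolio_afp_cases: int, pop_under15: int) -> float:
    """Annual nonpolio AFP rate per 100,000 population aged <15 years."""
    return 100_000 * nonpolio_afp_cases / pop_under15

def adequate_specimen_pct(cases_with_adequate_specimens: int,
                          nonpolio_afp_cases: int) -> float:
    """Percentage of nonpolio AFP cases with adequate stool specimens
    (per WHO: two specimens collected >=24 hours apart, within 14 days
    of paralysis onset)."""
    return 100 * cases_with_adequate_specimens / nonpolio_afp_cases

# Hypothetical country: 1,200 nonpolio AFP cases, 14.1 million children <15.
rate = nonpolio_afp_rate(1_200, 14_100_000)   # ~8.5 per 100,000
pct = adequate_specimen_pct(1_116, 1_200)     # 93%
print(f"nonpolio AFP rate: {rate:.1f} per 100,000; adequate specimens: {pct:.0f}%")
```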
# Editorial Note
During 2009, the total number of WPV cases reported in Afghanistan and Pakistan did not substantially change compared with 2008, and both WPV1 and WPV3 serotypes continued to circulate in both countries. In Afghanistan, WPV transmission during the past 5 years has remained largely restricted to 12 insecure districts in the South Region. Since 2008, multiple strategies have been implemented to immunize children in these districts. As a result, the proportion of children in the South Region who are not vaccinated in a given SIA was reduced to 5% of the overall target population toward the end of 2009. Meanwhile, Afghanistan has been able to keep most of the country free of endemic WPV transmission despite extensive population movements for economic, social/cultural, and security reasons. In Pakistan, circulation of both WPV serotypes persists in both transmission zones, with WPV repeatedly occurring primarily in nine districts during the past 5 years. In the northern zone, WPV transmission continues because of limited access to children during SIAs in insecure areas of NWFP and FATA. Large-scale population movements from NWFP and FATA have caused renewed WPV transmission in polio-free areas. Access to districts in NWFP improved during the last quarter of 2009, but access into FATA deteriorated. In the southern zone, WPV circulation continued mainly because of weak routine vaccination programs and managerial and operational gaps during SIAs, compounded by large-scale population movements from insecure areas in southern Afghanistan. Substantial improvements in subnational accountability and oversight are needed to improve the quality of immunization activities in Pakistan. During 2010, planning, resources, and immunization activities need to focus on the small number of persistently affected districts in Afghanistan and Pakistan. In insecure areas, negotiations with community leaders need to be enhanced, and efforts are needed to achieve agreement of all parties to the conflict regarding Days of Tranquility during SIAs to ensure access to the target population of children aged <5 years and the safety of vaccination teams. Negotiations with conflict parties have been and will be supported by the International Federation of Red Cross and Red Crescent Societies. Coordination of both SIAs and AFP surveillance between the two countries also needs to be strengthened further to interrupt transmission from cross-border movements. In addition, specific mechanisms need to be established to hold provincial, district, and local administrative leaders accountable for program performance.
# Licensure of a Meningococcal Conjugate Vaccine (Menveo) and Guidance for Use -Advisory Committee on Immunization Practices (ACIP), 2010
MenACWY-CRM consists of two components: 1) 10 μg of lyophilized meningococcal serogroup A capsular polysaccharide conjugated to CRM197 (MenA) and 2) 5 μg each of capsular polysaccharide of serogroups C, Y, and W135 conjugated to CRM197 in 0.5 mL of phosphate-buffered saline, which is used to reconstitute the lyophilized MenA component before injection (1). The reconstituted vaccine should be used immediately but may be held at or below 77°F (25°C) for up to 8 hours. MenACWY-CRM is administered as an intramuscular injection, preferably into the deltoid region (1). The capsular polysaccharide serogroups included in MenACWY-CRM are the same as those contained in Sanofi Pasteur's MCV4 (Menactra). In study participants aged 11-18 years, noninferiority of MenACWY-CRM to MCV4 was demonstrated for all four serogroups using the primary endpoint, hSBA seroresponse (serum bactericidal assay using human complement). The proportions of subjects with hSBA seroresponse were statistically significantly higher for serogroups A, W, and Y in the MenACWY-CRM group, compared with the MCV4 group. The clinical relevance of higher postvaccination immune responses is not known (1). Safety and reactogenicity profiles were comparable to those observed with MCV4 (1).
# Guidance for Use of MenACWY-CRM
MenACWY-CRM is licensed by the FDA as a single dose in persons aged 11-55 years (1). ACIP recommends quadrivalent meningococcal conjugate vaccine for all persons aged 11-18 years and for persons aged 2-55 years who are at increased risk for meningococcal disease. Persons at increased risk for meningococcal disease include 1) college freshmen living in dormitories, 2) microbiologists who are exposed routinely to isolates of Neisseria meningitidis, 3) military recruits, and 4) persons who travel to or reside in countries where meningococcal disease is hyperendemic or epidemic. Severe allergic reaction (e.g., anaphylaxis) after a previous dose of Menveo, any component of this vaccine, or any other CRM197-, diphtheria toxoid-, or meningococcal-containing vaccine is a contraindication to administration of Menveo. Details regarding the recommended meningococcal vaccination schedule are available at / recs/schedules/default.htm#child. Adverse events after receipt of any vaccine should be reported to the Vaccine Adverse Event Reporting System at http://vaers.hhs.gov.
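The hSBA seroresponse comparison described above is, at bottom, a noninferiority test on a difference of proportions. A minimal sketch assuming a conventional -10 percentage-point margin (the margin actually used in the licensure trial is not stated in this report), with invented counts:

```python
# Sketch of the comparison behind the hSBA seroresponse endpoint:
# noninferiority of MenACWY-CRM to MCV4 assessed on the difference in
# the proportions of subjects achieving a seroresponse. Counts here are
# illustrative, not the licensure trial data.
from math import sqrt

def seroresponse_diff_ci(x1, n1, x2, n2, z=1.96):
    """Wald 95% CI for p1 - p2 (new vaccine minus comparator)."""
    p1, p2 = x1 / n1, x2 / n2
    se = sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    d = p1 - p2
    return d, (d - z * se, d + z * se)

# Hypothetical serogroup: 750/1000 responders vs 700/1000 on the comparator.
d, (lo, hi) = seroresponse_diff_ci(750, 1000, 700, 1000)
# Noninferiority at an assumed -10 percentage-point margin holds if lo > -0.10.
print(f"difference = {d:.3f}, 95% CI ({lo:.3f}, {hi:.3f}); noninferior: {lo > -0.10}")
```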
[Weekly notifiable disease surveillance tables omitted here: the flattened state-by-state counts are not reconstructible. Recoverable table notes: since April 26, 2009, a total of 278 influenza-associated pediatric deaths associated with 2009 influenza A (H1N1) virus infection had been reported; since August 30, 2009, a total of 265 influenza-associated pediatric deaths occurring during the 2009-10 influenza season had been reported; 133 influenza-associated pediatric deaths occurring during the 2008-09 season had been reported. CDC discontinued reporting of individual confirmed and probable cases of 2009 pandemic influenza A (H1N1) virus infection on July 24, 2009, and reports aggregate hospitalizations and deaths weekly on the CDC H1N1 influenza website.]
Invasive pneumococcal disease (IPD), caused by Streptococcus pneumoniae (pneumococcus), remains a leading cause of serious illness in children and adults worldwide (1). After routine infant immunization with a 7-valent pneumococcal conjugate vaccine (PCV7) began in 2000, IPD among children aged <5 years in the United States decreased by 76%; however, IPD from non-PCV7 serotypes, particularly 19A, has increased (2). In February 2010, the Advisory Committee on Immunization Practices (ACIP) issued recommendations for use of a newly licensed 13-valent pneumococcal conjugate vaccine (PCV13) (3). PCV13 contains the seven serotypes in PCV7 (4, 6B, 9V, 14, 18C, 19F, and 23F) and six additional serotypes (1, 3, 5, 6A, 7F, and 19A). To characterize the potentially vaccine-preventable IPD burden among children aged <5 years in the United States, CDC and investigators analyzed 2007 data from Active Bacterial Core surveillance (ABCs). This report summarizes the results of that analysis, which found that among 427 IPD cases with known serotype in children aged <5 years, 274 (64%) were caused by serotypes contained in PCV13. In 2007, an estimated 4,600 cases of IPD occurred in children in this age group in the United States, including approximately 2,900 cases caused by serotypes covered in PCV13 (versus 70 cases caused by PCV7 serotypes). PCV13 use has the potential to further reduce IPD in the United States. Post-licensure monitoring will help characterize the effectiveness of PCV13 in different populations and track the potential changes in disease burden caused by non-PCV13 serotypes. ABCs of the Emerging Infections Program (EIP) Network is a collaboration between CDC and 10 selected sites. ABCs conducts population- and laboratory-based active surveillance. During 2006 and 2007, IPD surveillance was conducted in Connecticut, Minnesota, and New Mexico, and selected counties in California, Colorado, Georgia, Maryland, New York, Oregon, and Tennessee. In 2007, the total catchment population of children aged <5 years for these 10 sites was 2.1 million. A case of IPD was defined as isolation of S. pneumoniae from a normally sterile body site (primarily blood or cerebrospinal fluid) in a resident of an ABCs area. Pneumococcal isolates were serotyped at CDC and reference laboratories. Serotype information was analyzed by vaccine serotype group (Table 1). Age-, race-, and vaccine serotype-specific rates of IPD were calculated using observed IPD cases in the 2007 ABCs data as the numerator and U.S. Census Bureau projections of the 2007 population of ABCs sites as the denominator. To estimate the incidence and total number of IPD cases in the United States in 2007, rates were standardized to the entire U.S. population, adjusting for small differences between the age and race distributions of ABCs areas and the U.S. population. Investigators reviewed medical records to identify children aged 24-59 months with underlying medical conditions who are recommended by ACIP to receive the 23-valent pneumococcal polysaccharide vaccine (PPSV23) (1).
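The rate calculation and direct standardization described above amount to simple weighted arithmetic. A minimal sketch, in which the stratum counts, populations, and U.S. population shares are invented for illustration rather than taken from the ABCs or Census data:

```python
# Sketch of the rate calculations described above: observed ABCs cases
# over the ABCs catchment population, then direct standardization of
# stratum-specific rates to the U.S. population distribution.

def rate_per_100k(cases: int, population: int) -> float:
    return 100_000 * cases / population

# (ABCs cases, ABCs population, U.S. population share) per age stratum;
# all three columns are illustrative placeholders.
strata = [
    (155, 420_000, 0.20),    # age <12 months
    (125, 400_000, 0.20),    # age 12-23 months
    (147, 1_280_000, 0.60),  # age 24-59 months
]

# Direct standardization: weight each stratum-specific rate by that
# stratum's share of the U.S. population, then sum.
standardized = sum(rate_per_100k(c, p) * w for c, p, w in strata)
print(f"standardized IPD rate: {standardized:.1f} per 100,000 children <5 years")
```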
Characteristics of these high-risk children and healthy children were compared by chi-square test; data from 2006 and 2007 were summed because of the small number of IPD cases with underlying medical conditions among persons in this age group. In 2007, a total of 493 children aged <5 years (<60 months) with IPD were identified in the ABCs population (Table 2), and information on the serotype of the pneumococcal isolate was available for 427 (87%) of those children. Among the 427, the group aged <12 months accounted for 36% of all cases, and the group aged 12-23 months accounted for 29%. Overall rates were highest in children aged <12 months and 12-23 months (40.5 and 31.2 cases per 100,000 population, respectively); among children aged 24-59 months, rates of all IPD decreased with each additional year of age. Information on race was available for 378 (89%) of the cases for which serotype information was available. Among children aged <5 years, rates of overall IPD in black children (35.8 cases per 100,000) and children of other races (30.7 cases per 100,000) were approximately twofold and 1.7-fold higher, respectively, than rates for white children (18.4 cases per 100,000). Among the 427 IPD cases with known serotype in children aged <5 years, 274 (64%) were caused by serotypes contained in PCV13. Of these 274 cases, 260 (95%) were caused by three of the six additional serotypes (3, 7F, and 19A) that are not included in PCV7; overall, 180 (42%) of the 427 were caused by serotype 19A. Within each 1-year age group, the proportions of all IPD cases caused by serotypes covered by PCV13 were relatively similar, ranging from 59% to 71%. The proportions of all IPD cases caused by the 13 serotypes were comparable in black children (61%), children of other races (62%), and white children (67%). Information on hospitalization and clinical outcome was available for 99% of serotyped IPD cases. Among 272 children with IPD caused by serotypes covered by PCV13 for whom hospitalization status, clinical presentation, and outcome were known, 168 (62%) were hospitalized, and four (2%) died; 101 (37%) had bacteremia without confirmed source, 24 (9%) had meningitis, and 115 (42%) had pneumonia with bacteremia. Based on the 2007 rate of IPD in children aged <5 years (22 cases per 100,000), an estimated 4,600 cases of IPD occurred in this age group in the United States. Included among those cases were an estimated 70 cases caused by serotypes covered in PCV7 and 2,900 cases caused by serotypes covered in PCV13. During 2006-2007, a total of 301 IPD cases with a known serotype occurred among children aged 24-59 months; 31 cases (10%) occurred in children at high risk who are recommended for vaccination with PPSV23 (1). Of these 31 cases, the 11 serotypes included in PPSV23 but not in PCV13 (Table 1) accounted for four cases (13%), serotypes covered in PCV13 accounted for 13 cases (42%), and the remaining 14 cases (45%) were caused by serotypes not covered in either vaccine. PCV13 serotypes accounted for a smaller proportion of cases among children with underlying medical conditions than among healthy children aged 24-59 months (42% [13 of 31] versus 65% [175 of 270]; p = 0.01). Table 1 summarizes the serotype coverage of the three vaccines: serotypes 4, 6B, 9V, 14, 18C, 19F, and 23F are included in PCV7, PCV13, and PPSV23; serotypes 1, 3, 5, 7F, and 19A are included in PCV13 and PPSV23; serotype 6A is included in PCV13 only; and serotypes 2, 8, 9N, 10A, 11A, 12F, 15B, 17F, 20, 22F, and 33F are included in PPSV23 only. Thus, PCV13 includes the seven serotypes in PCV7 and six additional serotypes, and PPSV23 includes 12 of the serotypes in PCV13 (it does not include serotype 6A) and 11 additional serotypes.
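Two of the figures above follow from short computations. The sketch below reproduces them, with the published rate and case counts plugged in; the U.S. population denominator is an assumed round figure for children aged <5 years in 2007, not a number given in the report:

```python
# 1) National burden: the 22 per 100,000 rate applied to roughly
#    20.9 million U.S. children aged <5 years (assumption) gives
#    approximately the 4,600-case estimate quoted above.
us_children_under5 = 20_900_000  # assumed denominator
est_cases = 22 / 100_000 * us_children_under5
print(f"estimated IPD cases: {est_cases:,.0f}")  # ~4,600

# 2) Proportion of cases caused by PCV13 serotypes: 13/31 high-risk
#    children versus 175/270 healthy children, compared by chi-square
#    test (uncorrected, matching the reported p = 0.01).
from scipy.stats import chi2_contingency

table = [[13, 31 - 13], [175, 270 - 175]]
chi2, p, dof, _ = chi2_contingency(table, correction=False)
print(f"chi-square = {chi2:.2f}, p = {p:.3f}")  # p ~ 0.01
```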
# Editorial Note
Routine infant immunization with PCV7 since 2000 has decreased rates of IPD in young children markedly, but IPD from non-PCV7 serotypes, predominantly serotype 19A, has increased and partially offset these reductions (2,4). Overall, rates of IPD have remained stable at 22-25 cases per 100,000 since 2002 (2,4). Based on the findings in this report, the use of PCV13 in the routine immunization schedule has the potential to further reduce IPD caused by the six additional serotypes (1, 3, 5, 6A, 7F, and 19A) among children aged <5 years. PPSV23 has been available for use in adults aged ≥65 years and persons aged ≥2 years with certain underlying medical conditions since 1983 (1). In this analysis, approximately 42% of IPD cases among children aged 24-59 months with underlying medical conditions were caused by serotypes covered in PCV13; an additional 13% of cases were caused by serotypes not covered in PCV13 but included in PPSV23. The role for PPSV23 in high-risk children might become clearer when more data are available on disease burden and serotype distribution after routine use of PCV13. Based on available safety, immunogenicity, and disease burden data, ACIP also recommends that a single supplemental PCV13 dose be given to healthy children aged 14-59 months and to children with underlying medical conditions up to age 71 months who already have completed a schedule of PCV7 (3). In one study, a single dose of PCV13 in children aged ≥12 months who had received 3 previous doses of PCV7 induced an antibody response comparable to that after the 3-dose infant PCV13 series, and the safety profile of this supplemental dose was comparable to that after a fourth dose of PCV13 (5). Although rates of IPD are relatively low in these older children, ACIP also considered the emergence of multidrug-resistant serotype 19A strains causing meningitis and other severe invasive infections (6,7) and the substantial burden of noninvasive pneumococcal disease as additional factors in making the recommendation. Cost-effectiveness evaluations suggest that supplemental PCV13 vaccination is comparable in cost-effectiveness to other accepted interventions (CDC, unpublished data, 2009). After PCV7 was introduced, rates of IPD caused by the seven serotypes covered in the vaccine also decreased substantially among unvaccinated children and adults. This indirect (or herd) effect resulted from reduced nasopharyngeal carriage of pneumococcus in vaccinated children and reduced transmission from children to unvaccinated children and adults (8). Immunization of children with PCV13 also is anticipated to have herd effects among adults. For example, as of 2007, serotype 19A had emerged as the most common cause of IPD in all age groups after PCV7 introduction (CDC, unpublished data, 2009). Colonization and disease caused by serotype 19A have an epidemiologic pattern similar to that of the PCV7 serotypes, and some degree of herd effect in the population might be expected. In contrast, some of the other new serotypes in PCV13 might have different epidemiologic characteristics (9). In particular, serotypes 1 and 5 are rarely found in the nasopharynx, so the potential herd effects of PCV13 vaccination on disease caused by these serotypes are uncertain. In the United States, however, serotypes 1 and 5 are relatively uncommon causes of IPD.
Although rates of pneumonia hospitalizations decreased after PCV7 introduction among children aged <2 years (10), the potential effects of PCV13 on noninvasive disease, such as nonbacteremic pneumonia and otitis media, are difficult to evaluate because of the lack of standard case definitions, sensitive and specific diagnostic methods, and routine surveillance for these conditions. Information on these noninvasive pneumococcal diseases is not available in the ABCs dataset. Because PCV13 was licensed on the basis of immunogenicity studies rather than clinical efficacy trials, post-licensure monitoring is important to characterize the effectiveness of PCV13 in different populations and to track changes in the disease burden caused by non-PCV13 serotypes.
# Licensure of a 13-Valent Pneumococcal Conjugate Vaccine (PCV13) and Recommendations for Use Among Children -Advisory Committee on Immunization Practices (ACIP), 2010
On February 24, 2010, a 13-valent pneumococcal conjugate vaccine (PCV13 [Prevnar 13, Wyeth Pharmaceuticals Inc., a subsidiary of Pfizer Inc.]) was licensed by the Food and Drug Administration (FDA) for prevention of invasive pneumococcal disease (IPD) caused by the 13 pneumococcal serotypes covered by the vaccine and for prevention of otitis media caused by serotypes in the 7-valent pneumococcal conjugate vaccine formulation (PCV7 [Prevnar, Wyeth]). PCV13 is approved for use among children aged 6 weeks-71 months and succeeds PCV7, which was licensed by FDA in 2000. The Pneumococcal Vaccines Work Group of the Advisory Committee on Immunization Practices (ACIP) reviewed available data on the immunogenicity, safety, and cost-effectiveness of PCV13, and on estimates of the vaccine-preventable pneumococcal disease burden. The work group then presented policy options for consideration by the full ACIP. This report summarizes recommendations approved by ACIP on February 24, 2010, for 1) routine vaccination of all children aged 2-59 months with PCV13, 2) vaccination with PCV13 of children aged 60-71 months with underlying medical conditions that increase their risk for pneumococcal disease or complications, and 3) PCV13 vaccination of children who previously received 1 or more doses of PCV7 (1). CDC guidance for vaccination providers regarding the transition from PCV7 to the PCV13 immunization program also is included.
# Prevnar 13 Licensure
Vaccine formulation. PCV13 contains polysaccharides of the capsular antigens of Streptococcus pneumoniae serotypes 1, 3, 4, 5, 6A, 6B, 7F, 9V, 14, 18C, 19A, 19F, and 23F, individually conjugated to a nontoxic diphtheria CRM197 (CRM, cross-reactive material) carrier protein. A 0.5-mL PCV13 dose contains approximately 2 μg of polysaccharide from each of 12 serotypes and approximately 4 μg of polysaccharide from serotype 6B; the total concentration of CRM197 is approximately 34 μg. The vaccine contains 0.125 mg of aluminum as aluminum phosphate adjuvant and no thimerosal preservative. PCV13 is administered intramuscularly and is available in single-dose, prefilled syringes that do not contain latex (2). Immunogenicity profile. The immunogenicity of PCV13 was evaluated in a randomized, double-blind, active-controlled trial in which 663 U.S. infants received at least 1 dose of PCV13 or PCV7 (3). To compare PCV13 antibody responses with those for PCV7, criteria for noninferior immunogenicity after 3 and 4 doses of PCV13 (pneumococcal immunoglobulin G [IgG] antibody concentrations measured by enzyme immunoassay) were defined for the seven serotypes common to PCV7 and PCV13 (4, 6B, 9V, 14, 18C, 19F, and 23F) and for the six additional serotypes in PCV13 (serotypes 1, 3, 5, 6A, 7F, and 19A). Functional antibody responses were measured by opsonophagocytosis assay (OPA) in a subset of the study population.
Evaluation of these immunologic parameters indicated that PCV13 induced levels of antibodies that were comparable to those induced by PCV7 and shown to be protective against IPD (3). Among infants receiving the 3-dose primary series, responses to three PCV13 serotypes (the shared serotypes 6B and 9V, and new serotype 3) did not meet the prespecified primary endpoint criterion (percentage of subjects achieving an IgG seroresponse of ≥0.35 μg/mL 1 month after the third dose); however, detectable OPA antibodies to each of these three serotypes indicated the presence of functional antibodies (3). The percentages of subjects with an OPA titer ≥1:8 were similar for the seven common serotypes among PCV13 recipients (range: 90%-100%) and PCV7 recipients (range: 93%-100%); the proportion of PCV13 recipients with an OPA titer ≥1:8 was >90% for all of the 13 serotypes (3). After the fourth dose, the IgG geometric mean concentrations (GMCs) were comparable for 12 of the 13 serotypes; the noninferiority criterion was not met for serotype 3. However, measurable OPA titers were present for all serotypes after the fourth dose; the percentage of PCV13 recipients with an OPA titer ≥1:8 ranged from 97% to 100% for the 13 serotypes and was 98% for serotype 3 (3). A schedule of 3 doses of PCV7 followed by 1 dose of PCV13 resulted in somewhat lower IgG GMCs for the six additional serotypes compared with a 4-dose PCV13 series. However, the OPA responses after the fourth dose were comparable for the two groups, and the clinical relevance of these lower antibody responses is not known. A single dose of PCV13 among children aged ≥12 months who had received 3 doses of PCV7 elicited IgG immune responses to the six additional serotypes that were comparable to those after a 3-dose infant PCV13 series (3). Safety profile. The safety of PCV13 was assessed in 13 clinical trials in which 4,729 healthy infants and toddlers were administered at least 1 dose of PCV13 and 2,760 children received at least 1 dose of PCV7, concomitantly with other routine pediatric vaccines. The most commonly reported (in more than 20% of subjects) solicited adverse reactions within 7 days after each dose of PCV13 were injection-site reactions, fever, decreased appetite, irritability, and increased or decreased sleep (2). The incidence and severity of solicited local reactions at the injection site (pain/tenderness, erythema, and induration/swelling) and solicited systemic reactions (irritability, drowsiness/increased sleep, decreased appetite, fever, and restless sleep/decreased sleep) were similar in the PCV13 and PCV7 groups. These data suggest that the safety profiles of PCV13 and PCV7 are comparable (2); CDC will conduct postlicensure monitoring for adverse events, and the manufacturer will conduct a Phase IV study. Supportive data for safety outcomes were provided by a catch-up study among 354 children aged 7-71 months who received at least 1 dose of PCV13. In addition, an open-label study was conducted among 284 healthy U.S. children aged 15-59 months who had previously received 3 or 4 doses of PCV7 (2). Among these children, the frequency and severity of solicited local reactions and systemic adverse reactions after 1 dose of PCV13 were comparable to those among children receiving their fourth dose of PCV13 (2).
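The immunogenicity endpoints above reduce to two small computations: the geometric mean concentration (GMC) of IgG and the percentage of subjects at or above the 0.35 μg/mL seroresponse threshold. A minimal sketch with invented concentrations, not trial data:

```python
# Sketch of the two immunogenicity summaries used above. The IgG
# concentrations below are illustrative placeholders.
import math

def gmc(concentrations):
    """Geometric mean concentration: exp of the mean log concentration."""
    return math.exp(sum(math.log(c) for c in concentrations) / len(concentrations))

def pct_seroresponse(concentrations, threshold=0.35):
    """Percentage of subjects with IgG >= threshold (ug/mL)."""
    return 100 * sum(c >= threshold for c in concentrations) / len(concentrations)

igg = [0.21, 0.48, 1.10, 2.40, 0.95, 0.33, 5.10, 0.77]  # ug/mL, illustrative
print(f"GMC = {gmc(igg):.2f} ug/mL; seroresponse = {pct_seroresponse(igg):.0f}%")
```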
# Indications and Guidance for Use
ACIP recommends PCV13 for all children aged 2-59 months. ACIP also recommends PCV13 for children aged 60-71 months with underlying medical conditions that increase their risk for pneumococcal disease or complications (Table 1). No previous PCV7/PCV13 vaccination. The ACIP recommendation for routine vaccination with PCV13 and the immunization schedules for infants and toddlers through age 59 months who have not received any previous PCV7 or PCV13 doses are the same as those previously published for PCV7 (4,5). PCV13 is recommended as a 4-dose series at ages 2, 4, 6, and 12-15 months. Infants receiving their first dose at age ≤6 months should receive 3 doses of PCV13 at intervals of approximately 8 weeks (the minimum interval is 4 weeks). The fourth dose is recommended at age 12-15 months, at least 8 weeks after the third dose (Table 2). Children aged 7-59 months who have not been vaccinated with PCV7 or PCV13 previously should receive 1 to 3 doses of PCV13, depending on their age when vaccination begins and whether underlying medical conditions are present (Table 2). Children aged 24-71 months with chronic medical conditions that increase their risk for pneumococcal disease should receive 2 doses of PCV13. Interruption of the vaccination schedule does not require reinstitution of the entire series or the addition of extra doses. Incomplete PCV7/PCV13 vaccination. Infants and children who have received 1 or more doses of PCV7 should complete the immunization series with PCV13 (Table 3). Children aged 12-23 months who received 3 doses of PCV7 before age 12 months are recommended to receive 1 dose of PCV13, given at least 8 weeks after the last dose of PCV7. No additional PCV13 doses are recommended for children aged 12-23 months who received 2 or 3 doses of PCV7 before age 12 months and at least 1 dose of PCV13 at age ≥12 months. Similar to the previous ACIP recommendation for use of PCV7 (6), 1 dose of PCV13 is recommended for all healthy children aged 24-59 months with any incomplete PCV schedule (PCV7 or PCV13). For children aged 24-71 months with underlying medical conditions who have received any incomplete schedule of <3 doses of PCV (PCV7 or PCV13) before age 24 months, 2 doses of PCV13 are recommended. For children with underlying medical conditions who have received 3 doses of PCV (PCV7 or PCV13), a single dose of PCV13 is recommended through age 71 months. The minimum interval between doses is 8 weeks. Complete PCV7 vaccination. A single supplemental dose of PCV13 is recommended for all children aged 14-59 months who have received 4 doses of PCV7 or another age-appropriate, complete PCV7 schedule (Table 3). For children who have underlying medical conditions, a single supplemental PCV13 dose is recommended through age 71 months. This includes children who have previously received the 23-valent pneumococcal polysaccharide vaccine (PPSV23). PCV13 should be given at least 8 weeks after the last dose of PCV7 or PPSV23. In addition, a single dose of PCV13 may be administered to children aged 6-18 years who are at increased risk for IPD because of sickle cell disease, human immunodeficiency virus (HIV) infection or another immunocompromising condition, cochlear implant, or cerebrospinal fluid leak, regardless of whether they have previously received PCV7 or PPSV23. Routine use of PCV13 is not recommended for healthy children aged ≥5 years.
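A compact way to read the rules above is as a function from age and risk status to the number of PCV13 doses needed by a child with no previous PCV doses. This is a sketch of that logic only; the 7-11 month and 12-23 month dose counts are an assumption drawn from the standard catch-up schedule (Table 2), which is not reproduced in this text:

```python
# Sketch of the catch-up logic for children with no previous PCV7/PCV13
# doses, per the recommendations above. Ages are in months.

def pcv13_doses_needed(age_months: int, high_risk: bool) -> int:
    """Number of PCV13 doses for a child with no previous PCV doses."""
    if age_months < 7:
        return 4                      # 3-dose primary series + booster at 12-15 months
    if age_months < 12:
        return 3                      # assumed: 2 doses + booster (Table 2)
    if age_months < 24:
        return 2                      # assumed: 2 doses (Table 2)
    if age_months < 60:
        return 2 if high_risk else 1  # healthy children 24-59 months: 1 dose
    if age_months < 72 and high_risk:
        return 2                      # 60-71 months with underlying conditions
    return 0                          # routine PCV13 not recommended at >=5 years

assert pcv13_doses_needed(4, False) == 4
assert pcv13_doses_needed(30, True) == 2
assert pcv13_doses_needed(66, False) == 0
```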
# Precautions and Contraindications
Before administering PCV13, vaccination providers should consult the package insert for precautions, warnings, and contraindications. Vaccination with PCV13 is contraindicated among persons known to have severe allergic reaction (e.g., anaphylaxis) to any component of PCV13 or PCV7 or to any diphtheria toxoid-containing vaccine (2).
# Transition from PCV7 to PCV13
When PCV13 is available in the vaccination provider's office, unvaccinated children and children incompletely vaccinated with PCV7 should complete the immunization series with PCV13. If the only pneumococcal conjugate vaccine available in a provider's office is PCV7, that vaccine should be provided to infants and children who are due for vaccination; these children should complete their series with PCV13 at subsequent visits. Children for whom the supplemental PCV13 dose is recommended should receive it at their next medical visit, at least 8 weeks after the last dose of PCV7. Other insurance was defined as some other source of health insurance, such as a self-directed plan or student health insurance. ¶ A key element of the health-care legislation in Massachusetts was creation of the Commonwealth Health Insurance Connector, the agency responsible for connecting residents to either Commonwealth Care, a subsidized program for certain adults who have not been offered employer-sponsored insurance, or Commonwealth Choice, an unsubsidized offering of six private health plans available through the Health Connector to individuals, families, and certain employers. Data for adults aged 18-34 years also were analyzed separately to more closely examine this traditionally underinsured age group. The statistical significance (p<0.05) of differences between health indicators in the pre-law and post-law periods was estimated using the Wald chi-square test. Variability of point estimates of weighted** proportions was indicated by 95% confidence intervals.
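As a minimal sketch of the arithmetic behind the comparisons that follow (the sample sizes below are invented, and the actual analysis applied BRFSS survey weights, which this ignores), the reported relative changes and a simple Wald chi-square statistic for the difference of two proportions can be computed like this:

```python
def relative_change(pre: float, post: float) -> float:
    """Relative percent change, the form in which increases are reported in the text."""
    return 100.0 * (post - pre) / pre

def wald_chi_square(p1: float, n1: int, p2: float, n2: int) -> float:
    """Wald chi-square statistic (1 df) for the difference of two proportions."""
    se_squared = p1 * (1.0 - p1) / n1 + p2 * (1.0 - p2) / n2
    return (p2 - p1) ** 2 / se_squared

# 91.3% insured pre-law vs. 96.3% post-law is reported as a 5.5% increase.
print(round(relative_change(91.3, 96.3), 1))            # 5.5
# Invented sample sizes; compare the statistic to the 1-df critical value 3.84.
print(round(wald_chi_square(0.913, 11000, 0.963, 13000), 1))
```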
The percentage of respondents who reported having health insurance rose 5.5%, from 91.3% in the pre-law period to 96.3% in the post-law period (Table 1). Among major subpopulations, the largest increases were observed among Hispanics (14.2%), persons with less than a high school diploma (12.0%), and persons with annual household incomes <$25,000 (11.9%). Nonetheless, in the post-law period, these same three subpopulations continued to have the lowest percentages of health insurance coverage: 89.0% for Hispanics, 88.6% for persons with less than a high school diploma, and 89.0% for persons with annual household incomes <$25,000. By 2008, approximately 8% of publicly insured Massachusetts residents were obtaining their health insurance through the new public Commonwealth Care program. The percentage of insured residents with public health insurance (including those aged 18-64 years who were eligible for Medicare) increased 29.7%, from 14.8% in the pre-law period to 19.2% in the post-law period (Table 1). The percentage of insured residents with private insurance decreased 3.2%, from 80.8% to 78.2%, and the percentage of insured residents with other types of insurance (e.g., a self-directed plan or student health insurance) decreased 40.9%, from 4.4% to 2.6% (Table 1). The overall percentage of respondents who reported having a personal health-care provider increased significantly, from 86.1% in the pre-law period to 87.7% in the post-law period (Table 2). The largest reported increases occurred among Hispanic respondents who answered the survey in Spanish (30.3% increase) and among Hispanics overall (17.0% increase). The percentage of respondents who reported having a routine checkup within the past year also increased significantly, from 71.9% in the pre-law period to 74.1% in the post-law period (Table 3). The largest reported increases occurred among Hispanic respondents who answered the survey in Spanish (19.9% increase) and among Hispanics overall (14.1% increase). The percentage of men reporting a routine checkup increased 5.1%, from 66.4% to 69.8%, but the percentage of women reporting a routine checkup did not change significantly. The percentage of respondents with chronic conditions who reported having a personal health-care provider or having had an annual checkup also did not change significantly after enactment of the health-care coverage law.
# Editorial Note
The results of this analysis indicate that the estimated percentage of Massachusetts residents covered by health insurance increased significantly after passage of health-care coverage legislation. A wider comparison, between 2005 BRFSS state survey results and 2008 results, indicated that health insurance coverage increased from 89% to 97% among all state residents (including children and adults aged ≥65 years); the increase included an estimated 300,000 newly insured persons aged 18-64 years (3). After implementation of the health-care coverage law, the proportion of respondents who said they lacked health insurance was approximately halved, and 8% of publicly insured respondents were obtaining health insurance through the state's new Commonwealth Care program. The effects observed likely are attributable to the new law, although, because of limitations inherent in such studies, a causal link cannot be proven. Increases in health insurance coverage can result from multiple factors, such as a higher employment rate, reduction in health insurance premiums, or expansion of existing public health insurance programs. During 1996-1999, Massachusetts observed an increase in the percentage of persons with health insurance (3) after the state expanded Medicaid eligibility; as a result, an additional 124,000 Massachusetts residents obtained insurance coverage (4). In this analysis, the observed increases in the percentage of insured among traditionally underserved subpopulations (e.g., Hispanics, persons with less than a high school diploma, and persons with annual household incomes <$25,000) serve to strengthen the hypothesis that the increases in insurance coverage are
attributable to the health-care coverage law, because implementation of heavily subsidized health insurance programs likely would affect these subpopulations first. Data from similar surveys in Massachusetts support this same hypothesis (5-7). For example, reports from the Massachusetts Division of Health Care Finance and Policy, which focused specifically on insurance status, found that from fall 2006 to fall 2008, the number of uninsured working-age adults was reduced by nearly 70%. Most of the gains in insurance coverage were concentrated among lower-income adults (7). In contrast, according to U.S. Census data, from 2007 to 2008, the overall proportion of U.S. adults with health insurance declined (8). The largest increases in insurance coverage were among Hispanic respondents overall and Hispanic respondents who answered the survey in Spanish. Traditionally, a larger proportion of Hispanics in Massachusetts have lacked access to health care, compared with other racial/ethnic populations (9,10). The results showed an 18.4% increase for persons responding in Spanish and a 14.2% increase for Hispanics overall. However, despite these increases, Hispanics continued to have the lowest health insurance coverage and the lowest percentage of persons with a personal health-care provider of any subpopulation. The percentage of younger adults, whites, blacks, and persons with chronic diseases who reported having a personal health-care provider did not change significantly. One reason might be that more time is needed for the effects of improved health-care access to be realized in these groups. Another reason might be that health-care providers are not equally accessible for certain groups or in certain areas
of the state. Although the cost of a doctor visit might also be a factor, 2008 BRFSS data have shown that only 6% of all respondents reported that they were unable to visit a doctor during the past year because of cost, compared with 8% in 2006 (9). In addition to an increase in the percentage of persons with health insurance, the findings in this analysis indicate changes in the proportion of plans that were private, public, or other (e.g., a self-directed plan or student health insurance) in Massachusetts. Those proportions changed from 80.8% private, 14.8% public, and 4.4% other before the law was enacted to 78.2%, 19.2%, and 2.6%, respectively. These changes were similar to U.S. Census data, which found that the proportion of adults with private health insurance declined from 2007 to 2008, while the proportion of publicly insured adults increased (8). In addition to the limitations on establishing causality, the findings in this report are subject to at least three other limitations. First, BRFSS samples only households with landline telephones. Minorities, persons with lower socioeconomic status, and younger adults typically have lower landline telephone coverage and might be underrepresented in this report. However, poststratification weighting might correct some of the bias resulting from lack of landline telephones. Second, depending on when the survey was administered, some responses might pertain to health-care activities (e.g., having a personal health-care provider in the past year) that actually occurred during the 12-month transition period. Finally, BRFSS data are based on self-report and might be subject to error (e.g., underreporting of chronic conditions). The findings in this report and others (10) can help local health departments in areas with large underserved populations assess local public health needs, enhance cultural competency, engage hospitals in community primary-care efforts, and address the availability of health-care providers. MDPH is targeting outreach more precisely to increase health insurance enrollment and health-care access among state residents.
** Data were weighted to the total Massachusetts population. The BRFSS weighting methodology is available at http://www.cdc.gov/brfss/technical_infodata/surveydata/2008/overview_08.rtf.
†† p values were calculated using the Wald chi-square test of the difference between each period.
§§ General Educational Development certificate.
¶¶ Responded "fair" or "poor" to the question, "Would you say that in general your health is excellent, very good, good, fair, or poor?"
*** Responded "yes" to the question, "A disability can be physical, mental, emotional, or communication-related. Would you describe yourself as having a disability of any kind?" plus indicated >1 year when asked, "For how long have your activities been limited because of your major impairment, health problem, or disability?"
††† Responded "yes" to the question, "Have you ever been told by a doctor that you have diabetes?"
§§§ Responded "yes" to both of these questions: "Have you ever been told by a doctor, nurse, or other health professional that you had asthma?" and "Do you still have asthma?"
¶¶¶ Defined as coverage through an employer, someone else's employer, or a plan purchased by the person covered.
**** Defined as coverage through Medicare, Medicaid, MassHealth, CommonHealth MassHealth, health maintenance organizations offered through Neighborhood Health Plan, Fallon Community Health Plan, BMC HealthNet or Network Health, Commonwealth Care, the military, CHAMPUS, TriCare, Veterans Administration (VA), CHAMP-VA, Indian Health Service, or the Alaska Native Health Service.
†††† Defined as some other source of health insurance such as a self-directed plan or student health insurance.
Afghanistan, Pakistan, India, and Nigeria are the four remaining countries where indigenous wild poliovirus (WPV) transmission has never been interrupted (1). This report updates previous reports (1,2) and describes polio eradication activities in Afghanistan and Pakistan during January-December 2009 and proposed activities in 2010 to address challenges. During 2009, both countries continued to conduct coordinated supplemental immunization activities (SIAs) and used multiple strategies to reach previously unreached children.
These strategies included 1) use of short-interval additional dose (SIAD) SIAs to administer a dose of oral poliovirus vaccine (OPV) within 1-2 weeks after a prior dose during negotiated periods of security; 2) systematic engagement of local leaders; 3) negotiations with conflict parties; and 4) increased engagement of nongovernmental organizations delivering basic health services. However, security problems continued to limit access by vaccination teams to large numbers of children. In Afghanistan, poliovirus transmission during 2009 predominantly occurred in 12 high-risk districts in the conflict-affected South Region; 38 WPV cases were confirmed in 2009, compared with 31 in 2008. In Pakistan, 89 WPV cases were confirmed in 2009, compared with 118 in 2008, but transmission persisted both in security-compromised areas and in accessible areas, where managerial and operational problems continued to affect immunization coverage. Continued efforts to enhance safe access of vaccination teams in insecure areas will be required for further progress toward interruption of WPV transmission in Afghanistan and Pakistan. In addition, substantial improvements in subnational accountability and oversight are needed to improve immunization activities in Pakistan.
# Immunization Activities
Reported routine vaccination coverage of infants with 3 OPV doses (OPV3) in 2009 was 85% nationally in Afghanistan and 81% in Pakistan. During 2009, large-scale house-to-house SIAs † targeting children aged <5 years, using different formulations of OPV depending on the epidemiologic situation, continued in both countries (Table 1). OPV formulations included trivalent (tOPV), monovalent type 1 (mOPV1) and type 3 (mOPV3), or bivalent types 1 and 3 (bOPV). § Afghanistan conducted six national immunization days (NIDs); four subnational immunization days (SNIDs) in the East, Southeast, and South regions along the border with Pakistan, three of which targeted nearly 50% of the national population of children aged <5 years; and four smaller-scale SIAD ¶ SIAs after a preceding larger SIA, targeting children in conflict-affected areas of the South Region. Pakistan conducted six NIDs; four SNIDs in the main WPV transmission areas of NWFP/FATA, southern Punjab, and Sindh (including Karachi city), targeting 40%-50% of the national population aged <5 years; and four SIAD SIAs in conflict-affected areas of NWFP/FATA. These included two SIAD SIAs in Swat Valley, targeting >370,000 children aged <5 years, conducted after 1 year of civil conflict that had prevented any polio vaccination. In 2009, as in past years, certain vaccination campaigns were unable to reach children aged <5 years living in areas inaccessible** because of security problems. During 2009, the estimated percentage of children aged <5 years who were living in inaccessible areas in the South Region of Afghanistan ranged from >20% during SIAs conducted in January and March to 5% during the July and September SIAs. In Pakistan, the percentage of children aged <5 years who were living in SIA-inaccessible areas of NWFP increased from 10% in January to 20% in May and July, and then decreased to <5% in October and November. However, in FATA itself, the estimated percentage of children aged <5 years living in inaccessible areas increased from 15% in January to 30% by November.
# Progress Toward Poliomyelitis Eradication
# AFP Surveillance
In 2009, AFP surveillance performance indicators remained high in both countries, including in areas with ongoing WPV transmission.
The annual national nonpolio AFP rate (per 100,000 population aged <15 years) was 8.5 in Afghanistan (range among the eight regions: 6.7-12.0) and 6.1 in Pakistan (range among the six provinces/territories: 2.9-9.2). The percentage of nonpolio AFP cases for which adequate specimens were collected was 93% in Afghanistan (range: 81%-97%) and 90% in Pakistan (range: 83%-96%) (Table 2).
# Editorial Note
During 2009, the total number of WPV cases reported in Afghanistan and Pakistan did not substantially change compared with 2008, and both WPV1 and WPV3 serotypes continued to circulate in both countries. In Afghanistan, WPV transmission during the past 5 years has remained largely restricted to 12 insecure districts in the South Region. Since 2008, multiple strategies have been implemented to immunize children in these districts. As a result, the proportion of children in the South Region who are not vaccinated in a given SIA was reduced to 5% of the overall target population toward the end of 2009. Meanwhile, Afghanistan has been able to keep most of the country free of endemic WPV transmission despite extensive population movements due to economic, social/cultural, and security reasons. In Pakistan, circulation of both WPV serotypes persists in both transmission zones, with WPV repeatedly occurring primarily in nine districts during the past 5 years. In the northern zone, WPV transmission continues because of limited access to children during SIAs in insecure areas of NWFP and FATA. Large-scale population movements from NWFP and FATA have caused renewed WPV transmission in polio-free areas. Access to districts in NWFP improved during the last quarter of 2009, but access into FATA deteriorated. In the southern zone, WPV circulation continued mainly because of weak routine vaccination programs and managerial and operational gaps during SIAs, compounded by large-scale population movements from insecure areas in southern Afghanistan. Substantial improvements in subnational accountability and oversight are needed to improve the quality of immunization activities in Pakistan. During 2010, planning, resources, and immunization activities need to focus on the small number of persistently affected districts in Afghanistan and Pakistan. In insecure areas, negotiations with community leaders need to be enhanced, and efforts are needed to achieve agreement of all parties to conflict regarding Days of Tranquility during SIAs to ensure access to the target population of children aged <5 years and the safety of vaccination teams. Negotiations with conflict parties have been and will continue to be supported by the International Federation of Red Cross and Red Crescent Societies. Coordination of both SIAs and AFP surveillance between the two countries also needs to be strengthened further to interrupt transmission resulting from cross-border movements. In addition, specific mechanisms need to be established to hold provincial, district, and local administrative leaders accountable for program performance. MenACWY-CRM consists of two components: 1) 10 μg of lyophilized meningococcal serogroup A capsular polysaccharide conjugated to CRM197 (MenA) and 2) 5 μg each of capsular polysaccharide of serogroups C, Y, and W-135 conjugated to CRM197 in 0.5 mL of phosphate-buffered saline, which is used to reconstitute the lyophilized MenA component before injection (1). The reconstituted vaccine should be used immediately but may be held at or below 77°F (25°C) for up to 8 hours.
MenACWY-CRM is administered as an intramuscular injection, preferably into the deltoid region (1). The capsular polysaccharide serogroups included in MenACWY-CRM are the same as those contained in Sanofi Pasteur's MCV4 (Menactra). In study participants aged 11-18 years, noninferiority of MenACWY-CRM to MCV4 was demonstrated for all four serogroups using the primary endpoint, hSBA seroresponse (serum bactericidal assay using human complement). The proportions of subjects with hSBA seroresponse were statistically higher for serogroups A, W, and Y in the MenACWY-CRM group, compared with the MCV4 group. The clinical relevance of higher postvaccination immune responses is not known (1). Safety and reactogenicity profiles were comparable to those observed with MCV4 (1).
# Guidance for Use of MenACWY-CRM
MenACWY-CRM is licensed by the FDA as a single dose in persons aged 11-55 years (1). ACIP recommends quadrivalent meningococcal conjugate vaccine for all persons aged 11-18 years and for persons aged 2-55 years who are at increased risk for meningococcal disease. Persons at increased risk for meningococcal disease include 1) college freshmen living in dormitories, 2) microbiologists who are exposed routinely to isolates of Neisseria meningitidis, 3) military recruits, and 4) persons who travel to or reside in countries where meningococcal disease is hyperendemic or epidemic. Severe allergic reaction (e.g., anaphylaxis) after a previous dose of Menveo, any component of this vaccine, or any other CRM197-, diphtheria toxoid-, or meningococcal-containing vaccine is a contraindication to administration of Menveo. Details regarding the recommended meningococcal vaccination schedule are available at http://www.cdc.gov/vaccines/recs/schedules/default.htm#child. Adverse events after receipt of any vaccine should be reported to the Vaccine Adverse Event Reporting System at http://vaers.hhs.gov.
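A noninferiority conclusion of the kind described above for the hSBA seroresponse endpoint is typically reached by checking that the lower confidence bound for the difference in seroresponse proportions clears a prespecified margin. The sketch below is illustrative only: the proportions, sample sizes, and the -10 percentage-point margin are our assumptions, not the licensure study's values.

```python
import math

def noninferior(p_new: float, n_new: int, p_ref: float, n_ref: int,
                margin: float = -0.10, z: float = 1.96) -> bool:
    """True if the lower 95% CI bound for (p_new - p_ref) exceeds the margin."""
    diff = p_new - p_ref
    se = math.sqrt(p_new * (1.0 - p_new) / n_new + p_ref * (1.0 - p_ref) / n_ref)
    return diff - z * se > margin

# Invented example: 75% vs. 73% seroresponse, 1,000 subjects per arm.
print(noninferior(0.75, 1000, 0.73, 1000))  # True: lower bound ~ -0.02 > -0.10
```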
[Weekly notifiable-disease and mortality surveillance tables omitted: the tabular data was garbled beyond recovery in the source. Recoverable table notes: deaths were reported by place of occurrence and week of death certificate filing, excluding fetal deaths. Since April 26, 2009, a total of 278 influenza-associated pediatric deaths associated with 2009 influenza A (H1N1) virus infection had been reported; 265 influenza-associated pediatric deaths occurring during the 2009-10 influenza season and 133 occurring during the 2008-09 season had been reported. CDC discontinued reporting of individual confirmed and probable cases of 2009 pandemic influenza A (H1N1) virus infection on July 24, 2009, and reported aggregate hospitalizations and deaths weekly at http://www.cdc.gov/h1n1flu; in addition, three cases of novel influenza A virus infection, unrelated to the 2009 pandemic influenza A (H1N1) virus, were reported to CDC during 2009.]
On January 22, 1974, representatives from the B. F. Goodrich Chemical Company informed NIOSH that the deaths of several employees of their Louisville, Kentucky, plant might have been related to occupational exposures. An immediate industrial hygiene walk-through survey of the facility was conducted by NIOSH and resulted in developing and transmitting to affected companies recommendations for precautionary monitoring and control procedures for polymerization processes involving vinyl chloride. On February 1, 1974, NIOSH/CDC conducted a briefing for other Federal agencies with health research responsibilities at which it was disclosed that four employees of the plant in question had died of angiosarcoma of the liver. Because of the extremely low incidence of this disease, estimated to be on the order of 20 to 30 deaths per year in the United States, the history of four cases in a five-year period in one plant was considered of great importance. It was concluded at the briefing that a new occupational cancer had been discovered: angiosarcoma of the liver. It was further concluded that this disease was associated with the manufacture of polyvinyl chloride and that vinyl chloride was the prime etiological candidate in producing the disease. NIOSH, with the assistance of expert consultants from both industry and organized labor, began development of a recommended occupational health standard. These and other activities were discussed in more detail at the OSHA Informal Fact-Finding Hearing on Possible Hazards of Vinyl Chloride Manufacture and Use on February 15, 1974. It was also during this Hearing that Professor Cesare Maltoni of Bologna, Italy, presented the preliminary results of his research, which showed induction of angiosarcoma of the liver and other organs, as well as the production of other cancers, in rats exposed to vinyl chloride. The results of these studies identify vinyl chloride as a carcinogen and further confirm its role in inducing the cancers observed in the B. F. Goodrich workers . . . Although vinyl chloride must be considered as a carcinogenic agent, the immediate problem appeared to be concentrated in polymerization facilities. Consequently, the attached NIOSH recommended standard only applies to such operations. This is not to say, however, that appropriate standards should not be developed for other exposures to the basic chemical. NIOSH is implementing further evaluation of the data, coupled with field observations, to determine exposure potentials in pre- and post-polymerization operations . . .
Consultants from industry who worked with NIOSH proposed that the recommended standard contain the concept of an allowable "working level" for vinyl chloride gas in the atmosphere, which they identified as a time-weighted average of 50 ppm. They recommended that where workers were exposed to concentrations in excess of this level they should wear air-supplied respirators. This concept of an allowable "working level" might seem justifiable in that Professor Maltoni found no liver tumors at 50 ppm, but there is the possibility that tumors might have been produced if a larger number of animals had been exposed at that concentration. Based on theoretical considerations, there is probably no threshold for carcinogenesis, although it is possible that with very low concentrations the latency period might be extended beyond the life expectancy.
In view of these considerations and NIOSH's inability to describe a safe exposure level as required in section 20(a)(3) of the Occupational Safety and Health Act, the concept of a threshold limit for vinyl chloride gas in the atmosphere was rejected. Consequently, the recommendations are such that where any employee is exposed to measurable concentrations of vinyl chloride, as determined by the recommended sampling and analytical method, he shall wear an air-supplied respirator. This recommendation is based on some preliminary information that the standard chemical cartridge respirators are inefficient in protecting against vinyl chloride. NIOSH is implementing a study to evaluate the degree of protection afforded by different types of respirators using vinyl chloride as the test gas. As information becomes available, it will be forwarded to OSHA as recommendations for alternative respirator usage. The employer is also required to develop a Control Plan to reduce airborne concentrations of vinyl chloride to levels not detectable by the recommended method . . .
# NIOSH Recommended Standard
RECOMMENDED OCCUPATIONAL HEALTH STANDARD FOR THE MANUFACTURE OF SYNTHETIC POLYMER FROM VINYL CHLORIDE
# SCOPE AND APPLICATION
This standard regulates the manufacture of synthetic polymer from vinyl chloride (chloroethene, Chemical Abstracts Registry No. 75-01-4), in order to protect the health and safety of workers. Vinyl chloride, also known as vinyl chloride monomer (VCM), chloroethylene, and chloroethene, is a colorless, sweet-smelling gas at ordinary temperature and pressure and has a boiling point and melting point at one atmosphere of -13.8°C and -153.71°C, respectively. Its chemical formula is CH2=CHCl and it has a molecular weight of 62.50. Although noncorrosive at normal atmospheric temperatures, in contact with water and at elevated temperatures it accelerates the corrosion of iron and steel. Of considerable concern is the fact that vinyl chloride is easily ignited and has a lower and upper explosive limit of 3.6% and 26.4%, respectively.
# DEFINITIONS
For the purpose of this standard:
a. "Assistant Secretary" means the Assistant Secretary for Occupational Safety and Health, U.S. Department of Labor, or any person directed by him.
b. "Director" means the Director, National Institute for Occupational Safety and Health, or any person directed by him or the Secretary of Health, Education, and Welfare to act for the Director.
c. "Authorized employee" means an employee whose duties require him to be in the regulated area and who has been specifically assigned by the employer.
d. "Detectable levels" means the determination that airborne concentrations of vinyl chloride are in excess of the limit of sensitivity of the sampling and analytical method recommended by the Director.
e. "Clean change room" means a room where employees put on clean clothing; a clean change room shall be contiguous to and have an entry from a shower room, when the shower room facilities are otherwise required in this standard.
f. "Closed container" is any container which is used to prevent the physical contact of employees with material containing vinyl chloride monomer.
g. "Closed system" means an operation involving vinyl chloride where containment prevents the release of vinyl chloride into regulated areas, nonregulated areas, or the external environment.
h. "Contaminated" refers to detectable levels of vinyl chloride monomer.
"Decontamination" means the inactivation of vinyl chloride to less than detectable levels or its safe disposal. j. "D isposal" means the safe removal of vinyl chloride from the work environment, k. "Emergency" means an unforeseen circum stance or set of circumstances, such as a rup tured transfer line, resulting in the release of vinyl chloride sufficient to produce acute symptoms among workers exposed or having contact with the vinyl chloride. I. "External environment" means any environ ment external to regulated and nonregulated areas. m. "Regulated area" means an area where entry to and exit from a vinyl chloride work place is restricted and controlled, n. "Nonregulated area" means any area under the control of the employer where entry and exit is neither restricted nor controlled, o. "Protective clothing" means clothing de signed to protect an employee against contact with or exposure to vinyl chloride, p. "Waste resin" means any resin or other vinyl chloride reaction product which has been removed from vessels during clean-up opera tions, or which has been collected as a result of in-plant housekeeping operations. # REQ U IREM EN TS FOR REGULATED AREAS A regulated area shall be established where synthetic resins containing vinyl chloride are N1D5H Recommended Standard manufactured. These regulated areas shall in clude but are not limited to vinyl chloride loading or unloading operations, storage, and transfer facilities; synthetic resin polymeriza tion processes and operations; and resin han dling, compounding, packaging and storage areas. Access shall be restricted to authorized employees only. All such regulated areas shall be controlled in accordance with the following requirements. # a. Routine Operations (i) Initial concentration of vinyl chloride in all regulated areas shall be determined by per forming air measurements at strategic sam pling points under normal operating conditions. These initial sampling points must be selected by a professional industrial hygienist and will serve as monitoring locations for future en vironmental measurements. The sam pling pat tern shall be adequate to represent the en vironment of the controlled area. (ii) Where detectable levels of vinyl chlo ride are measured, a Control Plan to reduce such levels shall be developed and imple mented. The Plan shall consist not only of establishing goals for reducing vinyl chloride levels by designing and introducing engineer ing and process controls, but shall also identify plans for developing additional (healthful) work practices. Target dates shall be established for all goals and the Plan must be updated at least on an annual basis. Copies of the Control Plan shall be posted in all regulated areas and be provided to all authorized employees. (in) There shall be periodic tests for proc ess or equipment leaks and for emission of vinyl chloride which may result from work practices. The frequency of these tests shall be such as to insure the integrity of equipment, adherence to proper work practices, and to de termine achievement of the goals of the Con trol Plan. Tests shall be performed at each sampling point at least daily or more frequently if concentrations of vinyl chloride are in excess of those established in the Plan. When such levels are exceeded, additional samples to identify sources of contamination shall be taken. Results of all such tests shall be made available to authorized employees in such a manner as to evaluate achievement of goals contained in the Control Plan. 
(iv) Until exposures to vinyl chloride are reduced below detectable levels, employees entering any regulated area shall be provided with and required to wear and use a full-face, supplied-air respirator of the continuous flow or pressure demand type in accordance with §1910.134.
(v) In operations involving loading or unloading vinyl chloride monomer from tank cars, trucks, barges, or other conveyance equipment, each transfer line and each vapor-equalizing line shall be equipped with vent connections permitting, at the completion of the transfer, pressure to be vented and the hose purged with an inert gas in such a manner as to preclude any employee exposure. Specific and detailed transfer procedures shall be developed and provided to involved employees in written form.
(vi) Employees shall be provided with and required to wear clean, full-body protective clothing (smocks, coveralls, or long-sleeved shirt and pants) and gloves prior to entering the regulated area.
(vii) Prior to each exit from a regulated area, employees shall be required to remove and leave protective clothing and equipment at the point of exit and, at the last exit of the day, to place used clothing and equipment in impervious containers at the point of exit for purposes of decontamination or disposal. The contents of such impervious containers shall be identified as required under paragraph e(2)(i) of this standard.
# b. Reactor and Vessel Entry
(i) A reactor and vessel entry procedure shall be developed and provided to involved employees in written form. Employees shall be familiarized with the procedure and shall be trained and rehearsed in the techniques provided for in the procedure. Emphasis shall not only be placed on concern for potential exposure to vinyl chloride but shall also include appropriate precautions for entry into confined spaces.
(ii) Techniques shall be developed and applied to minimize to the maximal practicable extent employee exposure to vinyl chloride when opening any closed vessel. Examples of effective methods are the application of heat or suction to the vessel prior to opening, or use of sufficient exhaust ventilation around the vessel. Where operations such as cleaning or maintenance conducted inside an open vessel could result in the liberation of vinyl chloride, suitable procedures such as exhaust ventilation shall be developed and implemented to insure that vinyl chloride is not released into the general work environment.
(iii) Exhaust air shall not be discharged to regulated areas, nonregulated areas, or the external environment unless decontaminated.
(iv) All piping to and from the vessel shall be blanked off or otherwise isolated prior to entry.
(v) Employees entering the reactor or vessel shall be provided with and required to wear and use a full-face, supplied-air respirator of the continuous flow or pressure demand type in accordance with §1910.134.
(vi) Employees entering the reactor or vessel where levels of vinyl chloride are monitored and are found not to exceed ambient levels external to the vessel shall be provided with and required to wear clean, full-body protective clothing (coveralls or long-sleeved shirt and pants), gloves, footwear or foot coverings, and head covering. Where vessel levels are in excess of ambient levels, employees shall instead be provided with and required to wear impervious clothing to prevent skin contact with vinyl chloride or other materials containing vinyl chloride.
(vii) After each exit from the reactor or vessel, employees shall be required to remove and leave protective clothing and equipment at a designated point in the regulated area, and at the end of each work shift to place used clothing and equipment in impervious but vented containers for the purpose of decontamination or disposal. The contents of such impervious containers shall be identified as required under paragraph e(2)(i) of this standard.
(viii) Employees engaged in reactor cleaning or other operations involving vessel entry shall shower at the end of the work shift.
# c. Maintenance and Decontamination Activities
(i) Emphasis shall be placed upon immediate clean-up of spills, periodic inspection, prompt repair of equipment and leaks, and proper handling, storage, and disposal or decontamination of materials to prevent airborne contamination and accidental skin contact with vinyl chloride. Because vinyl chloride is a gas at normal temperatures, waste materials, equipment, and other sources of the monomer in closed containers shall not be placed in areas of excessive temperature or sunlight, since build-up of internal pressure may result in rupture of the container, fire, or explosion.
(ii) Waste resins or other materials contaminated with vinyl chloride shall be placed in closed containers identified as required under paragraphs e(2)(i) or (ii) of this standard.
(iii) Appropriate procedures shall be developed and implemented for the decontamination and/or disposal of all such waste material.
(iv) In clean-up of leaks or spills, maintenance or repair operations on contaminated systems or equipment, or any operation involving work where direct contact with vinyl chloride monomer could result, each authorized employee involved in such operations shall be provided with and required to wear clean, impervious garments, including gloves, boots, and continuous air-supplied hoods in accordance with §1910.134; be decontaminated before removing the protective garments and hood; and be required to shower upon removing the protective garments and hood.
# d. General Regulated Area Requirements
1. Employee identification.
A daily roster of employees entering regulated areas shall be established and maintained. The rosters or a summary of the rosters shall be retained for a minimum period of 20 years by the employer or successors thereto. The rosters and/or summaries shall be provided upon request to authorized representatives of the Assistant Secretary and the Director. In the event that the employer ceases business without a successor, rosters shall be forwarded by registered mail to the Director.
2. Emergencies.
In an emergency, immediate measures including but not limited to the requirements of subdivisions (i), (ii), (iii), (iv), and (v) of this subparagraph shall be implemented.
(i) The potentially affected area shall be evacuated as soon as the existence of the emergency has been determined.
(ii) Hazardous conditions created by the emergency shall be eliminated and the potentially affected area shall be decontaminated prior to the resumption of normal operations.
(iii) Special medical surveillance by a physician shall be instituted within 24 hours for employees present in the potentially affected area at the time of the emergency. A report of the medical surveillance and any treatment shall be included in the incident report, in accordance with paragraph (g)(3) of this standard.
(iv) Where an employee has a known contact with liquid vinyl chloride, such employee shall be required to shower as soon as possible, unless contraindicated by physical injuries.
(v) An incident report on the emergency shall be made as provided in paragraph (g)(3) of this standard.
3. Hygiene facilities and practices.
(i) Storage or consumption of food, storage or use of containers of beverages, storage or application of cosmetics, smoking, storage of smoking materials, tobacco products or other products for chewing, or the chewing of such products, are prohibited in regulated areas.
(ii) Where employees wear protective clothing and equipment, clean change rooms shall be provided, in accordance with §1910.141(e)(3).
(iii) Where employees are required by this standard to wash, washing facilities shall be provided in accordance with §1910.141(d)(1) and (2)(ii) through (vii).
(iv) Where employees are required by this standard to shower, shower facilities shall be provided in accordance with §1910.141(d)(3).
4. Contamination control.
(i) Regulated areas, except for outdoor systems, shall be maintained under negative pressure with respect to nonregulated areas. Local exhaust ventilation may be used to satisfy this requirement. Clean tempered makeup air shall replace air removed. Exhaust air shall not be discharged to regulated areas, nonregulated areas, or the external environment unless decontaminated.
(ii) Any equipment, material, or other item taken into or removed from a regulated area shall be done so in a manner that does not cause contamination in nonregulated areas or the external environment.
(iii) Decontamination procedures shall be established and implemented to remove vinyl chloride from the surfaces of materials, equipment, and the decontamination facility.
5. Container contents identification.
(i) Containers of waste or other materials contaminated with vinyl chloride shall be labelled as follows:
VINYL CHLORIDE CONTAMINATED MATERIAL
CANCER SUSPECT AGENT
DISPOSE OF OR DECONTAMINATE USING APPROVED PROCEDURES
(ii) Containers of synthetic polymers made from vinyl chloride shall be labelled as follows:
SYNTHETIC VINYL CHLORIDE POLYMER
VINYL CHLORIDE IS A CANCER SUSPECT AGENT
POLYMER CONTAINS ___ % BY WEIGHT UNREACTED VINYL CHLORIDE
# h. Medical Surveillance
The following recommendations are directed primarily at medical screening to detect liver disease and/or hepatic tumor.
They should be considered in the context of routine health screening for any general employee health problems, including non-hepatic health conditions potentially related to vinyl chloride monomer exposure. Routine health screening should include, at time of initial employment, the recording of past medical history and the performance both of a general physical examination and certain basic laboratory procedures (e.g., complete blood count, urinalysis, chest x-ray); provisions should also be made for routine periodic health followup examinations. Employees covered by the following specific recommendations shall encompass all persons engaged in vinyl chloride monomer production and polymerization, including personnel peripherally involved, such as in clerical and management assignments. The recommendations shall be applied both as a pre-employment requirement and as part of periodic health followup. Screening priority should be given to current employees with prolonged and close potential exposure to vinyl chloride monomer, whether in present or past work settings.
(i) At time of initial employment, or upon institution of screening, a physical examination shall be performed with specific attention to detecting enlargement of liver or spleen by abdominal palpation.
(ii) At time of initial employment, or upon institution of screening, and annually thereafter, a medical history check-list shall be completed by the employee. This list shall include questions concerning: (a) alcohol intake; (b) past history of hepatitis; (c) past exposure to potential hepatotoxic agents, including drugs and chemicals; (d) past history of blood transfusions; and (e) past history of hospitalizations. The completed medical check-list shall be reviewed by a physician and should be acted upon as medically indicated for each individual employee.
(iii) At time of initial employment, or upon institution of screening, a serum specimen shall be obtained for screening with respect to the following five biochemical determinations of liver function: (e) gamma glutamyl transpeptidase (GGTP). Additional tests that may optionally be considered for use in screening include lactic dehydrogenase (LDH), serum protein determinations, serum protein electrophoresis, and platelet count. Laboratory analysis shall be performed in laboratories accredited by the College of American Pathologists or licensed in accordance with the provisions of the Clinical Laboratories Improvement Act of 1967.
(iv) If results of laboratory screening are normal, screening shall be repeated on an annual basis. If the person being screened has been employed directly in vinyl chloride monomer production or polymerization for 10 years or longer, screening shall be repeated every six (6) months.
(v) If one or more liver function tests are abnormal, serum testing shall be repeated as soon as possible, preferably within two (2) to four (4) weeks. If no abnormalities are present upon rescreening, testing should be repeated in three (3) months.
(vi) If abnormalities persist on rescreening, the employee shall be removed from contact with vinyl chloride monomer operations and an individualized medical workup shall be instituted. Suggested as initial steps in medical workup are a complete physical examination and various special procedures such as hepatitis B antigen determination and liver scanning.
If liver function abnormalities are determined to be unrelated to liver disease (e.g., elevated alkaline phosphatase in a young, physically active man, or elevated bilirubin in Gilbert's syndrome) or to be transient (e.g., due to recent hepatitis or recent alcohol intake), the employee may be permitted to return to vinyl chloride-related employment, subject to individual medical evaluation.
(vii) In view of the preliminary results of animal toxicology studies, it is recommended that no woman who is pregnant or who expects to become pregnant should be employed directly in vinyl chloride monomer operations.
(e) The purpose for and significance of emergency practices and procedures;
(f) The employee's specific role under normal operating or emergency conditions;
(g) Specific information to aid the employee in recognition and evaluation of conditions and situations which may result in the release of vinyl chloride monomer;
(h) The purpose for and application of specific first aid procedures and practices;
(i) A review of this standard at the employee's first training and indoctrination program, and annually thereafter.
(ii) Specific emergency procedures shall be prescribed and posted, and employees shall be familiarized with their terms and rehearsed in their application.
(iii) All materials relating to the program shall be provided upon request to authorized representatives of the Assistant Secretary and the Director.
# f. Environmental Monitoring and Recordkeeping
(i) Environmental concentrations of vinyl chloride shall be determined through the use of methods for sampling and analysis recommended by the Director or by methods of at least equal sensitivity.
(ii) Employees or their representatives shall be provided with the opportunity to observe environmental monitoring activities and shall have access to the results.
(iii) Complete and accurate records of all environmental measurements shall be maintained for at least 20 years by the employer or successors thereto and shall be provided upon request to authorized representatives of the Assistant Secretary or the Director.
1. The following shall be reported in writing to the appropriate OSHA Area Director:
(i) A brief description and in-plant location of the area(s) regulated and the address of each regulated area.
(ii) The number of employees in each regulated area during normal operations, including maintenance activity.
(iii) A copy of the Control Plan as developed under paragraph 3(a)(ii).
2. Environmental Measurements.
On a semi-annual basis, the results of measurements taken at strategic sampling points, presented in such a manner as to identify achievement of goals established in the Control Plan, shall be reported in writing to the appropriate OSHA Area Director.
3. Incidents.
Incidents which result in the release of vinyl chloride monomer into any area where employees may be potentially exposed shall be reported in accordance with this subparagraph.
(i) A report of the occurrence of the incident and the facts obtainable at that time, including a report on any medical treatment of affected employees, shall be made within 24 hours to the appropriate OSHA Area Director.
(ii) A written report shall be filed with the appropriate OSHA Area Director within 15 calendar days thereafter and shall include:
(a) A specification of the amount of material released, the amount of time involved, and an explanation of the procedure used in determining this figure;
(b) A description of the area involved, and the extent of known and possible employee exposure and area contamination;
(c) A report of any medical treatment of affected employees, and any medical surveillance program implemented; and
(d) An analysis of the circumstances of the incident, and measures taken or to be taken, with specific completion dates, to avoid further similar releases.
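The screening schedule in section h above is essentially a small decision procedure. The sketch below is illustrative only (the function, its arguments, and the numeric return convention are ours; the standard's text, not this sketch, governs actual practice):

```python
from typing import Optional

def next_screen_months(abnormal: bool,
                       abnormal_on_rescreen: Optional[bool],
                       years_in_monomer_work: float) -> Optional[float]:
    """Months until the next serum screen, or None when the employee must be
    removed from vinyl chloride monomer work pending an individualized workup."""
    if not abnormal:
        # Normal results: annual screening, or every 6 months after 10 years
        # of direct monomer production or polymerization employment.
        return 6.0 if years_in_monomer_work >= 10 else 12.0
    if abnormal_on_rescreen is None:
        return 0.5   # repeat serum testing as soon as possible (two to four weeks)
    if abnormal_on_rescreen:
        return None  # persistent abnormality: removal from exposure and medical workup
    return 3.0       # abnormality cleared on rescreen: repeat testing in three months
```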
On January 22, 1974, representatives from the B. F. Goodrich Chemical Company informed NIOSH that the deaths of several employees of their Louisville, Kentucky, plant might have been related to occupational exposures. An immediate industrial hygiene walk-through survey of the facility was conducted by NIOSH and resulted in developing and transmitting to affected companies recommendations for precautionary monitoring and control procedures for polymerization processes involving vinyl chloride. On February 1, 1974, NIOSH/CDC conducted a briefing for other Federal agencies with health research responsibilities at which it was disclosed that four employees of the plant in question had died of angiosarcoma of the liver. Because of the extremely low incidence of this disease, estimated to be on the order of 20 to 30 deaths per year in the United States, the history of four cases in a five-year period in one plant was considered of great importance. It was concluded at the briefing that a new occupational cancer had been discovered: angiosarcoma of the liver. It was further concluded that this disease was associated with the manufacture of polyvinyl chloride and that vinyl chloride was the prime etiological candidate in producing the disease. NIOSH, with the assistance of expert consultants from both industry and organized labor, began development of a recommended occupational health standard. These and other activities were discussed in more detail at the OSHA Informal Fact-Finding Hearing on Possible Hazards of Vinyl Chloride Manufacture and Use on February 15, 1974. It was also during this Hearing that Professor Cesare Maltoni of Bologna, Italy, presented the preliminary results of his research which showed induction of angiosarcoma of the liver and other organs, as well as the production of other cancers in rats exposed to vinyl chloride. The results of these studies identify vinyl chloride as a carcinogen and further confirm its role in inducing the cancers observed in the B. F. Goodrich workers . . . Although vinyl chloride must be considered as a carcinogenic agent, the immediate problem appeared to be concentrated in polymerization facilities. Consequently, the attached NIOSH recommended standard only applies to such operations. This is not to say, however, that appropriate standards should not be developed for other exposures to the basic chemical. NIOSH is implementing further evaluation of the data, coupled with field observations, to determine exposure potentials in pre- and post-polymerization operations . . .
Consultants from industry who worked with NIOSH proposed that the recommended standard contain the concept of an allowable "working level" for vinyl chloride gas in the atmosphere, which they identified as a time-weighted average of 50 ppm. They recommended that where workers were exposed to concentrations in excess of this level they should wear air-supplied respirators. This concept of an allowable "working level" might seem justifiable in that Professor Maltoni found no liver tumors at 50 ppm, but there is the possibility that tumors might have been produced if a larger number of animals had been exposed at that concentration. Based on theoretical considerations, there is probably no threshold for carcinogenesis, although it is possible that with very low concentrations the latency period might be extended beyond the life expectancy.
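To put the 50 ppm "working level" discussed above in perspective, a vapor concentration in ppm can be converted to mg/m3 using the molecular weight of vinyl chloride (62.50, as given in the standard below) and the molar volume of an ideal gas. A minimal sketch, assuming 25°C and 1 atm (molar volume 24.45 L/mol); this is standard industrial hygiene arithmetic, not part of the recommended standard itself:

```python
MOLECULAR_WEIGHT_VCM = 62.50  # g/mol, vinyl chloride monomer
MOLAR_VOLUME = 24.45          # L/mol for an ideal gas at 25 C and 1 atm

def ppm_to_mg_per_m3(ppm: float) -> float:
    """Convert a vapor concentration from ppm (v/v) to mg/m3."""
    return ppm * MOLECULAR_WEIGHT_VCM / MOLAR_VOLUME

def mg_per_m3_to_ppm(mg_per_m3: float) -> float:
    """Convert a vapor concentration from mg/m3 to ppm (v/v)."""
    return mg_per_m3 * MOLAR_VOLUME / MOLECULAR_WEIGHT_VCM

# The proposed 50 ppm working level corresponds to roughly 128 mg/m3.
print(round(ppm_to_mg_per_m3(50.0), 1))  # ~127.8
```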
In view of these considerations and NIOSH's inability to describe a safe exposure level as required in section 20(a)(3) of the Occupational Safety and Health Act, the concept of a threshold limit for vinyl chloride gas in the atmosphere was rejected. Consequently, the recommendations are such that where any employee is exposed to measurable concentrations of vinyl chloride, as determined by the recommended sampling and analytical method, he shall wear an air-supplied respirator. This recommendation is based on some preliminary information that the standard chemical cartridge respirators are inefficient in protecting against vinyl chloride. NIOSH is implementing a study to evaluate the degree of protection afforded by different types of respirators using vinyl chloride as the test gas. As information becomes available, it will be forwarded to OSHA as recommendations for alternative respirator usage. The employer is also required to develop a Control Plan to reduce airborne concentrations of vinyl chloride to levels not detectable by the recommended method . . .
# RECOMMENDED OCCUPATIONAL HEALTH STANDARD FOR THE MANUFACTURE OF SYNTHETIC POLYMER FROM VINYL CHLORIDE
# SCOPE AND APPLICATION
This standard regulates the manufacture of synthetic polymer from vinyl chloride (chloroethene, Chemical Abstracts Registry No. 75014), in order to protect the health and safety of workers. Vinyl chloride, also known as vinyl chloride monomer (VCM), chloroethylene, and chloroethene, is a colorless, sweet-smelling gas at ordinary temperature and pressure and has a boiling and melting point at one atmosphere of -13.8°C and -153.71°C, respectively. Its chemical formula is CH2=CHCl and it has a molecular weight of 62.50. Although noncorrosive at normal atmospheric temperatures, in contact with water and at elevated temperatures it accelerates the corrosion of iron and steel. Of considerable concern is the fact that vinyl chloride is easily ignited and has a lower and upper explosive limit of 3.6% and 26.4%, respectively.
# DEFINITIONS
For the purpose of this standard:
a. "Assistant Secretary" means the Assistant Secretary for Occupational Safety and Health, U.S. Department of Labor, or any person directed by him.
b. "Director" means the Director, National Institute for Occupational Safety and Health, or any person directed by him or the Secretary of Health, Education, and Welfare to act for the Director.
c. "Authorized employee" means an employee whose duties require him to be in the regulated area and who has been specifically assigned by the employer.
d. "Detectable levels" means the determination that airborne concentrations of vinyl chloride are in excess of the limit of sensitivity of the sampling and analytical method recommended by the Director.
e. "Clean change room" means a room where employees put on clean clothing; a clean change room shall be contiguous to and have an entry from a shower room, when the shower room facilities are otherwise required in this standard.
f. "Closed container" is any container which is used to prevent the physical contact of employees with material containing vinyl chloride monomer.
g. "Closed system" means an operation involving vinyl chloride where containment prevents the release of vinyl chloride into regulated areas, nonregulated areas, or the external environment.
h. "Contaminated" refers to detectable levels of vinyl chloride monomer.
i.
"Decontamination" means the inactivation of vinyl chloride to less than detectable levels or its safe disposal. j. "D isposal" means the safe removal of vinyl chloride from the work environment, k. "Emergency" means an unforeseen circum stance or set of circumstances, such as a rup tured transfer line, resulting in the release of vinyl chloride sufficient to produce acute symptoms among workers exposed or having contact with the vinyl chloride. I. "External environment" means any environ ment external to regulated and nonregulated areas. m. "Regulated area" means an area where entry to and exit from a vinyl chloride work place is restricted and controlled, n. "Nonregulated area" means any area under the control of the employer where entry and exit is neither restricted nor controlled, o. "Protective clothing" means clothing de signed to protect an employee against contact with or exposure to vinyl chloride, p. "Waste resin" means any resin or other vinyl chloride reaction product which has been removed from vessels during clean-up opera tions, or which has been collected as a result of in-plant housekeeping operations. # REQ U IREM EN TS FOR REGULATED AREAS A regulated area shall be established where synthetic resins containing vinyl chloride are N1D5H Recommended Standard manufactured. These regulated areas shall in clude but are not limited to vinyl chloride loading or unloading operations, storage, and transfer facilities; synthetic resin polymeriza tion processes and operations; and resin han dling, compounding, packaging and storage areas. Access shall be restricted to authorized employees only. All such regulated areas shall be controlled in accordance with the following requirements. # a. Routine Operations (i) Initial concentration of vinyl chloride in all regulated areas shall be determined by per forming air measurements at strategic sam pling points under normal operating conditions. These initial sampling points must be selected by a professional industrial hygienist and will serve as monitoring locations for future en vironmental measurements. The sam pling pat tern shall be adequate to represent the en vironment of the controlled area. (ii) Where detectable levels of vinyl chlo ride are measured, a Control Plan to reduce such levels shall be developed and imple mented. The Plan shall consist not only of establishing goals for reducing vinyl chloride levels by designing and introducing engineer ing and process controls, but shall also identify plans for developing additional (healthful) work practices. Target dates shall be established for all goals and the Plan must be updated at least on an annual basis. Copies of the Control Plan shall be posted in all regulated areas and be provided to all authorized employees. (in) There shall be periodic tests for proc ess or equipment leaks and for emission of vinyl chloride which may result from work practices. The frequency of these tests shall be such as to insure the integrity of equipment, adherence to proper work practices, and to de termine achievement of the goals of the Con trol Plan. Tests shall be performed at each sampling point at least daily or more frequently if concentrations of vinyl chloride are in excess of those established in the Plan. When such levels are exceeded, additional samples to identify sources of contamination shall be taken. Results of all such tests shall be made available to authorized employees in such a manner as to evaluate achievement of goals contained in the Control Plan. 
(iv) Until exposures to vinyl chloride are reduced below detectable levels, employees entering any regulated area shall be provided with and required to wear and use a full-face, supplied-air respirator, of the continuous flow or pressure demand type, in accordance with §1910.134.
(v) In operations involving loading or unloading vinyl chloride monomer from tank cars, trucks, barges, or other conveyance equipment, each transfer line and each vapor-equalizing line shall be equipped with vent connections permitting, at the completion of the transfer, pressure to be vented and the hose purged with an inert gas in such a manner as to preclude any employee exposure. Specific and detailed transfer procedures shall be developed and provided to involved employees in written form.
(vi) Employees shall be provided with and required to wear clean, full-body protective clothing (smocks, coveralls, or long-sleeved shirt and pants), and gloves prior to entering the regulated area.
(vii) Prior to each exit from a regulated area, employees shall be required to remove and leave protective clothing and equipment at the point of exit and, at the last exit of the day, to place used clothing and equipment in impervious containers at the point of exit for purposes of decontamination or disposal. The contents of such impervious containers shall be identified as required under paragraph e(2)(i) of this standard.
# b. Reactor and Vessel Entry
(i) A reactor and vessel entry procedure shall be developed and provided to involved employees in written form. Employees shall be familiarized with the procedure and shall be trained and rehearsed in the techniques provided for in the procedure. Emphasis shall not only be placed on concern for potential exposure to vinyl chloride but shall also include appropriate precautions for entry into confined spaces.
(ii) Techniques shall be developed and applied to minimize to the maximal practicable extent employee exposure to vinyl chloride when opening any closed vessel. Examples of effective methods are the application of heat or suction to the vessel prior to opening, or use of sufficient exhaust ventilation around the vessel. Where operations such as cleaning or maintenance conducted inside an open vessel could result in the liberation of vinyl chloride, suitable procedures such as exhaust ventilation shall be developed and implemented to insure that vinyl chloride is not released into the general work environment.
(iii) Exhaust air shall not be discharged to regulated areas, nonregulated areas, or the external environment unless decontaminated.
(iv) All piping to and from the vessel shall be blanked off or otherwise isolated prior to entry.
(v) Employees entering the reactor or vessel shall be provided with and required to wear and use a full-face, supplied-air respirator, of the continuous flow or pressure demand type, in accordance with §1910.134.
(vi) Employees entering the reactor or vessel where levels of vinyl chloride are monitored and are found to not exceed ambient levels external to the vessel shall be provided with and required to wear clean, full-body protective clothing (coveralls or long-sleeved shirt and pants), gloves, footwear or foot coverings, and head covering. Where vessel levels are in excess of ambient levels, employees shall instead be provided with and required to wear impervious clothing to prevent skin contact with vinyl chloride or other materials containing vinyl chloride.
(vii) After each exit from the reactor or vessel, employees shall be required to remove and leave protective clothing and equipment at a designated point in the regulated area, and at the end of each work shift to place used clothing and equipment in impervious but vented containers for the purpose of decontamination or disposal. The contents of such impervious containers shall be identified as required under paragraph e(2)(i) of this standard.
(viii) Employees engaged in reactor cleaning or other operations involving vessel entry shall shower at the end of the work shift.
# c. Maintenance and Decontamination Activities
(i) Emphasis shall be placed upon immediate clean-up of spills, periodic inspection, prompt repair of equipment and leaks, and proper handling, storage, and disposal or decontamination of materials to prevent airborne contamination and accidental skin contact with vinyl chloride. Because vinyl chloride is a gas at normal temperatures, waste materials, equipment, and other sources of the monomer in closed containers shall not be placed in areas of excessive temperature or sunlight, since build-up of internal pressure may result in rupture of the container, fire, or explosion.
(ii) Waste resins or other materials contaminated with vinyl chloride shall be placed in closed containers identified as required under paragraphs e(2)(i) or (ii) of this standard.
(iii) Appropriate procedures shall be developed and implemented for the decontamination and/or disposal of all such waste material.
(iv) In clean-up of leaks or spills, maintenance or repair operations on contaminated systems or equipment, or any operation involving work where direct contact with vinyl chloride monomer could result, each authorized employee involved in such operations shall be provided with and required to wear clean, impervious garments, including gloves, boots, and continuous air-supplied hoods in accordance with §1910.134; be decontaminated before removing the protective garments and hood; and be required to shower upon removing the protective garments and hood.
# d. General Regulated Area Requirements
1. Employee identification. A daily roster of employees entering regulated areas shall be established and maintained. The rosters or a summary of the roster shall be retained for a minimum period of 20 years by the employer or successors thereto. The rosters and/or summaries shall be provided upon request to authorized representatives of the Assistant Secretary and the Director. In the event that the employer ceases business without a successor, rosters shall be forwarded by registered mail to the Director.
2. Emergencies. In an emergency, immediate measures including but not limited to the requirements of subdivisions (i), (ii), (iii), (iv), and (v) of this subparagraph shall be implemented.
(i) The potentially affected area shall be evacuated as soon as the existence of the emergency has been determined.
(ii) Hazardous conditions created by the emergency shall be eliminated and the potentially affected area shall be decontaminated prior to the resumption of normal operations.
(iii) Special medical surveillance by a physician shall be instituted within 24 hours for employees present in the potentially affected area at the time of the emergency. A report of the medical surveillance and any treatment shall be included in the incident report, in accordance with paragraph (g)(3) of this standard.
(iv) Where an employee has a known contact with liquid vinyl chloride, such employee shall be required to shower as soon as possible, unless contraindicated by physical injuries.
(v) An incident report on the emergency shall be reported as provided in paragraph (g)(3) of this standard.
3. Hygiene facilities and practices.
(i) Storage or consumption of food, storage or use of containers of beverages, storage or application of cosmetics, smoking, storage of smoking materials, tobacco products or other products for chewing, or the chewing of such products, are prohibited in regulated areas.
(ii) Where employees wear protective clothing and equipment, clean change rooms shall be provided, in accordance with §1910.141(e)(3).
(iii) Where employees are required by this standard to wash, washing facilities shall be provided in accordance with §1910.141(d)(1) and (2)(ii) through (vii).
(iv) Where employees are required by this standard to shower, shower facilities shall be provided in accordance with §1910.141(d)(3).
4. Contamination control.
(i) Regulated areas, except for outdoor systems, shall be maintained under negative pressure with respect to nonregulated areas. Local exhaust ventilation may be used to satisfy this requirement. Clean tempered makeup air shall replace air removed. Exhaust air shall not be discharged to regulated areas, nonregulated areas, or the external environment unless decontaminated.
(ii) Any equipment, material, or other item taken into or removed from a regulated area shall be done so in a manner that does not cause contamination in nonregulated areas or the external environment.
(iii) Decontamination procedures shall be established and implemented to remove vinyl chloride from the surfaces of materials, equipment, and the decontamination facility.
5. Container contents identification.
(i) Containers of waste or other materials contaminated with vinyl chloride shall be labelled as follows:
VINYL CHLORIDE CONTAMINATED MATERIAL
CANCER SUSPECT AGENT
DISPOSE OF OR DECONTAMINATE USING APPROVED PROCEDURES
(ii) Containers of synthetic polymers made from vinyl chloride shall be labelled as follows:
SYNTHETIC VINYL CHLORIDE POLYMER
VINYL CHLORIDE IS A CANCER SUSPECT AGENT
POLYMER CONTAINS __% BY WEIGHT UNREACTED VINYL CHLORIDE
(i) A report of the occurrence of the incident and the facts obtainable at that time, including a report on any medical treatment of affected employees, shall be made within 24 hours to the appropriate OSHA Area Director.
(ii) A written report shall be filed with the appropriate OSHA Area Director within 15 calendar days thereafter and shall include: (a) A specification of the amount of material released, the amount of time involved, and an explanation of the procedure used in determining this figure; (b) A description of the area involved, and the extent of known and possible employee exposure and area contamination; (c) A report of any medical treatment of affected employees, and any medical surveillance program implemented; and (d) An analysis of the circumstances of the incident, and measures taken or to be taken, with specific completion dates, to avoid further similar releases.
# h. Medical Surveillance
The following recommendations are directed primarily at medical screening to detect liver disease and/or hepatic tumor.
Morbidity and Mortality Weekly Report
Department of Health and Human Services
# Introduction
Epidemics of influenza typically occur during the winter months in temperate regions and have been responsible for an average of approximately 36,000 deaths/year in the United States during 1990-1999 (1). Influenza viruses also can cause pandemics, during which rates of illness and death from influenza-related complications can increase worldwide. Influenza viruses cause disease among all age groups (2-4). Rates of infection are highest among children, but rates of serious illness and death are highest among persons aged >65 years, children aged <2 years, and persons of any age who have medical conditions that place them at increased risk for complications from influenza (2,5-7).
Influenza vaccination is the primary method for preventing influenza and its severe complications. In this report from the Advisory Committee on Immunization Practices (ACIP), the primary target groups recommended for annual vaccination are 1) persons at increased risk for influenza-related complications (i.e., those aged >65 years, children aged 6-23 months, pregnant women, and persons of any age with certain chronic medical conditions); 2) persons aged 50-64 years, because this group has an elevated prevalence of certain chronic medical conditions; and 3) persons who live with or care for persons at high risk (e.g., health-care workers and household contacts who have frequent contact with persons at high risk and who can transmit influenza to those persons at high risk). Vaccination is associated with reductions in influenza-related respiratory illness and physician visits among all age groups, hospitalization and death among persons at high risk, otitis media among children, and work absenteeism among adults (8-18). Although influenza vaccination levels increased substantially during the 1990s, further improvements in vaccine coverage levels are needed, chiefly among persons aged <65 years at increased risk, among children aged 6-23 months, and among health-care workers. ACIP recommends using strategies to improve vaccination levels, including using reminder/recall systems and standing orders programs (19-21). Although influenza vaccination remains the cornerstone for the control and treatment of influenza, information on antiviral medications is also presented because these agents are an adjunct to vaccine.
# Primary Changes and Updates in the Recommendations
The 2005 recommendations include five principal changes or updates:
- ACIP recommends that persons with any condition (e.g., cognitive dysfunction, spinal cord injuries, seizure disorders, or other neuromuscular disorders) that can compromise respiratory function or the handling of respiratory secretions or that can increase the risk for aspiration be vaccinated against influenza (see Target Groups for Vaccination).
- ACIP emphasizes that all health-care workers should be vaccinated against influenza annually, and that facilities that employ health-care workers be strongly encouraged to provide vaccine to workers by using approaches that maximize immunization rates.
- Use of both available vaccines (inactivated and live, attenuated influenza vaccine [LAIV]) is encouraged for eligible persons every influenza season, especially persons in recommended target groups.
During periods when inactivated vaccine is in short supply, use of LAIV is especially encouraged when feasible for eligible persons (including health-care workers) because use of LAIV by these persons might considerably increase availability of inactivated vaccine for persons in groups at high risk.
- CDC and other agencies will assess the vaccine supply throughout the manufacturing period and will make recommendations preceding the 2005-06 influenza season regarding the need for tiered timing of vaccination of different risk groups. In addition, CDC will publish ACIP recommendations regarding inactivated vaccine subprioritization (tiering) at a later date in MMWR.
# Influenza and Its Burden
# Biology of Influenza
Influenza A and B are the two types of influenza viruses that cause epidemic human disease (22). Influenza A viruses are further categorized into subtypes on the basis of two surface antigens: hemagglutinin and neuraminidase. Influenza B viruses are not categorized into subtypes. Since 1977, influenza A (H1N1) viruses, influenza A (H3N2) viruses, and influenza B viruses have been in global circulation. In 2001, influenza A (H1N2) viruses that probably emerged after genetic reassortment between human A (H3N2) and A (H1N1) viruses began circulating widely. Both influenza A and B viruses are further separated into groups on the basis of antigenic characteristics. New influenza virus variants result from frequent antigenic change (i.e., antigenic drift) caused by point mutations that occur during viral replication. Influenza B viruses undergo antigenic drift less rapidly than influenza A viruses.
Immunity to the surface antigens, particularly the hemagglutinin, reduces the likelihood of infection and severity of disease if infection occurs (23). Antibody against one influenza virus type or subtype confers limited or no protection against another type or subtype of influenza. Furthermore, antibody to one antigenic variant of influenza virus might not completely protect against a new antigenic variant of the same type or subtype (24). Frequent development of antigenic variants through antigenic drift is the virologic basis for seasonal epidemics and the reason for the usual incorporation of one or more new strains in each year's influenza vaccine.
# Clinical Signs and Symptoms of Influenza
Influenza viruses are spread from person to person primarily through the coughing and sneezing of infected persons (22). The typical incubation period for influenza is 1-4 days, with an average of 2 days (25). Adults can be infectious from the day before symptoms begin through approximately 5 days after illness onset. Children can be infectious for >10 days, and young children can shed virus for several days before their illness onset. Severely immunocompromised persons can shed virus for weeks or months (26-29).
Uncomplicated influenza illness is characterized by the abrupt onset of constitutional and respiratory signs and symptoms (e.g., fever, myalgia, headache, malaise, nonproductive cough, sore throat, and rhinitis) (30). Among children, otitis media, nausea, and vomiting are also commonly reported with influenza illness (31-33). Respiratory illness caused by influenza is difficult to distinguish from illness caused by other respiratory pathogens on the basis of symptoms alone (see Role of Laboratory Diagnosis).
Reported sensitivities and specificities of clinical definitions for influenza-like illness (ILI) in studies primarily among adults that include fever and cough have ranged from 63% to 78% and 55% to 71%, respectively, compared with viral culture (34,35). Sensitivity and predictive value of clinical definitions can vary, depending on the degree of co-circulation of other respiratory pathogens and the level of influenza activity (36). A study among older nonhospitalized patients determined that symptoms of fever, cough, and acute onset had a positive predictive value of 30% for influenza (37), whereas a study of hospitalized older patients with chronic cardiopulmonary disease determined that a combination of fever, cough, and illness of <7 days was 78% sensitive and 73% specific for influenza (38). However, a study among vaccinated older persons with chronic lung disease reported that cough was not predictive of influenza infection, although having a fever or feverishness was 68% sensitive and 54% specific for influenza infection (39).
Influenza illness typically resolves after 3-7 days for the majority of persons, although cough and malaise can persist for >2 weeks. Among certain persons, influenza can exacerbate underlying medical conditions (e.g., pulmonary or cardiac disease), lead to secondary bacterial pneumonia or primary influenza viral pneumonia, or occur as part of a coinfection with other viral or bacterial pathogens (40). Young children with influenza infection can have initial symptoms mimicking bacterial sepsis with high fevers (41,42), and <20% of children hospitalized with influenza can have febrile seizures (32,43). Influenza infection has also been associated with encephalopathy, transverse myelitis, Reye syndrome, myositis, myocarditis, and pericarditis (32,40,44,45).
# Hospitalizations and Deaths from Influenza
The risks for complications, hospitalizations, and deaths from influenza are higher among persons aged >65 years, young children, and persons of any age with certain underlying health conditions (see Persons at Increased Risk for Complications) than among healthy older children and younger adults (1,6,8,46-52). Estimated rates of influenza-associated hospitalizations have varied substantially by age group in studies conducted during different influenza epidemics (Table 1). Among children aged 0-4 years, hospitalization rates have ranged from approximately 500/100,000 children for those with high-risk medical conditions to 100/100,000 children for those without high-risk medical conditions (53-56). Within the 0-4 year age group, hospitalization rates are highest among children aged 0-1 years and are comparable to rates reported among persons aged >65 years (55,56) (Table 1). During influenza epidemics from 1979-80 through 2000-01, the estimated overall number of influenza-associated hospitalizations in the United States ranged from approximately 54,000 to 430,000/epidemic. An average of approximately 226,000 influenza-related excess hospitalizations occurred per year, with 63% of all hospitalizations occurring among persons aged >65 years (57). Since the 1968 influenza A (H3N2) virus pandemic, the greatest numbers of influenza-associated hospitalizations have occurred during epidemics caused by type A (H3N2) viruses (58). Influenza-related deaths can result from pneumonia and from exacerbations of cardiopulmonary conditions and other chronic diseases.
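The predictive-value findings cited at the start of this discussion reflect a general relationship: the positive predictive value of an ILI case definition depends not only on its sensitivity and specificity but also on how much influenza is circulating among the patients evaluated. A minimal sketch of that relationship follows; the sensitivity, specificity, and prevalence values are illustrative assumptions, not figures from the cited studies.

```python
def positive_predictive_value(sensitivity: float,
                              specificity: float,
                              prevalence: float) -> float:
    """PPV of a case definition given disease prevalence among those tested."""
    true_positives = sensitivity * prevalence
    false_positives = (1.0 - specificity) * (1.0 - prevalence)
    return true_positives / (true_positives + false_positives)

# A definition that is 70% sensitive and 65% specific performs very
# differently at low versus high influenza activity:
for prevalence in (0.05, 0.20, 0.40):
    ppv = positive_predictive_value(0.70, 0.65, prevalence)
    print(f"prevalence {prevalence:.0%}: PPV {ppv:.0%}")
# prevalence 5%: PPV 10%; 20%: PPV 33%; 40%: PPV 57%
```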
Deaths of older adults account for >90% of deaths attributed to pneumonia and influenza (1,52). In one study of influenza epidemics, approximately 19,000 influenza-associated pulmonary and circulatory deaths per influenza season occurred during 1976-1990, compared with approximately 36,000 deaths during 1990-1999 (1). Estimated rates of influenza-associated pulmonary and circulatory deaths/100,000 persons were 0.4-0.6 among persons aged 0-49 years, 7.5 among persons aged 50-64 years, and 98.3 among persons aged >65 years. In the United States, the number of influenza-associated deaths might be increasing in part because the number of older persons is increasing (59). In addition, influenza seasons in which influenza A (H3N2) viruses predominate are associated with higher mortality (60); influenza A (H3N2) viruses predominated in 90% of influenza seasons during 1990-1999, compared with 57% of seasons during 1976-1990 (1).
Deaths from influenza are uncommon among both children with and without high-risk conditions, but do occur (61,62). A study that modeled influenza-related deaths estimated that an average of 92 deaths (0.4 deaths per 100,000) occurred among children aged <5 years, compared with 98.3 deaths per 100,000 among persons aged >65 years (1). Reports of 153 laboratory-confirmed influenza-related pediatric deaths from 40 states during the 2003-04 influenza season indicated that 61 (40%) were aged <2 years and, of 92 children aged 2-17 years, 64 (70%) did not have an underlying medical condition traditionally considered to place a person at risk for influenza-related complications (CDC, National Center for Infectious Diseases, unpublished data, 2005). Further information is needed regarding the risk for severe influenza complications and optimal strategies for minimizing severe disease and death among children.
# Options for Controlling Influenza
In the United States, the primary option for reducing the effect of influenza is immunoprophylaxis with vaccine. Inactivated (i.e., killed virus) influenza vaccine and live, attenuated influenza vaccine are available for use in the United States (see Recommendations for Using Inactivated and Live, Attenuated Influenza Vaccines).
[Table 1 notes: hospitalization outcomes were limited to those with pneumonia or influenza listed as the first discharge diagnosis (Simonsen) or anywhere among the discharge diagnoses (Barker); sources: Simonsen L, Fukuda K, Schonberger LB, Cox NJ. Impact of influenza epidemics on hospitalizations. J Infect Dis 2000;181:831-7, and Thompson et al.; estimates combine persons at high risk and not at high risk, with low estimates from influenza A (H1N1)- or B-predominant seasons and high estimates from A (H3N2)-predominant seasons.]
# Influenza Vaccine Composition
Both the inactivated and live, attenuated vaccines prepared for the 2005-06 season will include A/California/7/2004 (H3N2)-like, A/New Caledonia/20/99 (H1N1)-like, and B/Shanghai/361/2002-like antigens. For the A/California/7/2004 (H3N2)-like antigen, manufacturers may use the antigenically equivalent A/New York/55/2004 (H3N2) virus, and for the B/Shanghai/361/2002-like antigen, manufacturers may use the antigenically equivalent B/Jilin/20/2003 virus or B/Jiangsu/10/2003 virus.
These viruses will be used because of their growth properties and because they are representative of influenza viruses likely to circulate in the United States during the 2005-06 influenza season. Because circulating influenza A (H1N2) viruses are a reassortant of influenza A (H1N1) and (H3N2) viruses, antibody directed against influenza A (H1N1) and influenza (H3N2) vaccine strains provides protection against circulating influenza A (H1N2) viruses. Influenza viruses for both the inactivated and live, attenuated influenza vaccines are initially grown in embryonated hens' eggs. Thus, both vaccines might contain limited amounts of residual egg protein. For the inactivated vaccine, the vaccine viruses are made noninfectious (i.e., inactivated or killed) (63). Subvirion and purified surface antigen preparations of the inactivated vaccine are available. Manufacturing processes differ by manufacturer. Manufacturers might use different compounds to inactivate influenza viruses and add antibiotics to prevent bacterial contamination. Package inserts should be consulted for additional information.
# Thimerosal
Thimerosal, a mercury-containing compound, has been used as a preservative in vaccines since the 1930s (64) and is used in multi-dose vials of inactivated influenza vaccine to reduce the likelihood of bacterial contamination. Although no scientific evidence indicates that thimerosal in vaccines leads to serious adverse events in vaccine recipients, in 1999, the U.S. Public Health Service and other organizations recommended that efforts be made to eliminate or reduce the thimerosal content in vaccines to decrease total mercury exposure, chiefly among infants (64-66). Since mid-2001, vaccines routinely recommended for infants in the United States have been manufactured either without or with only trace amounts of thimerosal to provide a substantial reduction in the total mercury exposure from vaccines for children (67). Vaccines containing trace amounts of thimerosal have <1 mcg mercury/dose.
Influenza Vaccines and Thimerosal. LAIV does not contain thimerosal. Thimerosal preservative-containing inactivated influenza vaccines, distributed in multi-dose containers in the United States, contain 25 mcg of mercury/0.5-mL dose (64,65). Inactivated influenza virus vaccines distributed in the United States will also be available in 2005 in a thimerosal-free formulation in both 0.25-mL and 0.5-mL single-dose syringes and in a preservative-free formulation (which contains trace amounts of thimerosal) in 0.25-mL-dose syringes. Influenza vaccine is part of the routine childhood immunization schedule. Sanofi Pasteur, Inc. (formerly Aventis Pasteur, Inc.) produces FluZone®, an inactivated influenza vaccine approved by the Food and Drug Administration (FDA) for persons aged >6 months. FluZone that is available in multi-dose vials contains thimerosal as a preservative. Thimerosal-free FluZone packaged as 0.25-mL unit-dose syringes is available for use among persons aged 6-35 months. Thimerosal-free FluZone packaged as 0.5-mL unit-dose syringes is available for use among persons aged >3 years. Fluvirin®, produced by Chiron, is an inactivated influenza vaccine available in a preservative-free formulation, is packaged as 0.5-mL single-dose syringes, and is licensed for use in persons aged >4 years. The preservative-free Fluvirin vaccine contains trace amounts of thimerosal.
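As a point of arithmetic, the mercury content of a thimerosal preservative-containing dose scales linearly with dose volume at the concentration stated above (25 mcg of mercury per 0.5-mL dose). A small illustrative sketch:

```python
MERCURY_MCG_PER_ML = 25.0 / 0.5  # 50 mcg of mercury per mL, from the text above

def mercury_mcg_per_dose(dose_ml: float) -> float:
    """Mercury content of a thimerosal preservative-containing dose."""
    return MERCURY_MCG_PER_ML * dose_ml

print(mercury_mcg_per_dose(0.5))   # 25.0 mcg: 0.5-mL dose from a multi-dose vial
print(mercury_mcg_per_dose(0.25))  # 12.5 mcg: 0.25-mL pediatric dose
```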
The total amount of inactivated influenza vaccine available without thimerosal as a preservative will be increased as manufacturing capabilities are expanded. The risks for severe illness from influenza infection are elevated among both young children and pregnant women, and both groups benefit from vaccination by preventing illness and death from influenza. In contrast, no scientifically conclusive evidence exists of harm from exposure to thimerosal preservative-containing vaccine, whereas evidence is accumulating that no harm results from exposure to such vaccines (64,68). Therefore, the benefits of influenza vaccination outweigh the theoretical risk, if any, for thimerosal exposure through vaccination. Nonetheless, certain persons remain concerned regarding exposure to thimerosal. The U.S. vaccine supply for infants and pregnant women is in a period of transition during which the availability of thimerosal-reduced or thimerosal-free vaccine intended for these groups is being expanded by manufacturers as a feasible means of reducing an infant's total exposure to mercury, because other environmental sources of exposure are more difficult or impossible to eliminate. Reductions in thimerosal in other vaccines have already been achieved and have resulted in substantially lowered cumulative exposure to thimerosal from vaccination among infants and children. For all of these reasons, persons recommended to receive inactivated influenza vaccine may receive either vaccine preparation, depending on availability.
# Efficacy and Effectiveness of Inactivated Influenza Vaccine
The effectiveness of inactivated influenza vaccine depends primarily on the age and immunocompetence of the vaccine recipient and the degree of similarity between the viruses in the vaccine and those in circulation. The majority of vaccinated children and young adults develop high postvaccination hemagglutination inhibition antibody titers (69-71). These antibody titers are protective against illness caused by strains that are antigenically similar to those strains of the same type or subtype included in the vaccine (70-73).
Adults Aged <65 Years. When the vaccine and circulating viruses are antigenically similar, influenza vaccine prevents influenza illness among approximately 70%-90% of healthy adults aged <65 years (9,12,74,75). Vaccination of healthy adults also has resulted in decreased work absenteeism and decreased use of health-care resources, including use of antibiotics, when the vaccine and circulating viruses are well-matched (9-12,75,76). In a case-control study of adults aged 50-64 years with laboratory-confirmed influenza during the 2003-04 season, when the vaccine and circulating viruses were not well matched, vaccine effectiveness was estimated to be 52% among healthy persons and 38% among those with one or more high-risk conditions (77).
Children. Children aged >6 months can develop protective levels of anti-influenza antibody against specific influenza virus strains after influenza vaccination (69,70,78-81), although the antibody response among children at high risk for influenza-related complications might be lower than among healthy children (82,83). In a randomized study among children aged 1-15 years, inactivated influenza vaccine was 77%-91% effective against influenza respiratory illness and was 44%-49%, 74%-76%, and 70%-81% effective against influenza seroconversion among children aged 1-5, 6-10, and 11-15 years, respectively (71).
One study (84) reported a vaccine efficacy of 56% against influenza illness among healthy children aged 3-9 years, and another study (85) determined vaccine efficacy of 22%-54% and 60%-78% among children with asthma aged 2-6 years and 7-14 years, respectively. A 2-year randomized study of children aged 6-24 months determined that >89% of children seroconverted to all three vaccine strains during both years (86). During year 1, among 411 children, vaccine efficacy was 66% (95% confidence interval [CI] = 34%-82%) against culture-confirmed influenza (attack rates: 5.5% and 15.9% among vaccine and placebo groups, respectively). During year 2, among 375 children, vaccine efficacy was -7% (95% CI = -247%-67%; attack rates: 3.6% and 3.3% among vaccine and placebo groups, respectively; the second year exhibited lower attack rates overall and was considered a mild season). However, no overall reduction in otitis media was reported. Other studies report that trivalent inactivated influenza vaccine decreases the incidence of influenza-associated otitis media among young children by approximately 30% (16,17). A retrospective study among approximately 5,000 children aged 6-23 months conducted during a year with a suboptimal vaccine match indicated vaccine effectiveness of 49% against medically attended, clinically diagnosed pneumonia or influenza (International Classification of Diseases, Ninth Revision codes 480-487) among children who had received 2 doses of influenza vaccine. No effectiveness was demonstrated among children who had received only 1 dose of influenza vaccine, illustrating the importance of administering 2 doses of vaccine to previously unvaccinated children aged <9 years (87).
Adults Aged >65 Years. Older persons and persons with certain chronic diseases might develop lower postvaccination antibody titers than healthy young adults and thus can remain susceptible to influenza infection and influenza-related upper respiratory tract illness (88-90). A randomized trial among noninstitutionalized persons aged >60 years reported a vaccine efficacy of 58% against influenza respiratory illness, but indicated that efficacy might be lower among those aged >70 years (91). The vaccine can also be effective in preventing secondary complications and reducing the risk for influenza-related hospitalization and death among adults aged >65 years with and without high-risk medical conditions (e.g., heart disease and diabetes) (13-15,18,92). Among elderly persons not living in nursing homes or similar chronic-care facilities, influenza vaccine is 30%-70% effective in preventing hospitalization for pneumonia and influenza (15,93). Among older persons who do reside in nursing homes, influenza vaccine is most effective in preventing severe illness, secondary complications, and deaths. Among this population, the vaccine can be 50%-60% effective in preventing influenza-related hospitalization or pneumonia and 80% effective in preventing influenza-related death, although the effectiveness in preventing influenza illness often ranges from 30% to 40% (94-96).
# Efficacy and Effectiveness of LAIV
Healthy Children. A randomized, double-blind, placebo-controlled trial among 1,602 healthy children initially aged 15-71 months assessed the efficacy of trivalent LAIV against culture-confirmed influenza during two seasons (97,98). This trial included subsets of 238 children; children who continued in the study remained in the same study group.
In season one, when vaccine and circulating virus strains were well-matched, efficacy was 93% for all participants, regardless of age, among persons receiving 2 doses of LAIV. Efficacy was 87% in the 60-71-month subset for those who received 2 doses, and was 91% in the subset for those who received 1 or 2 doses. In season two, when the A (H3N2) component was not well-matched between vaccine and circulating virus strains, efficacy was 86% overall and 87% among those aged 60-84 months. The vaccine was 92% efficacious in preventing culture-confirmed influenza during the two-season study. Other results included a 27% reduction in febrile otitis media and a 28% reduction in otitis media with concomitant antibiotic use. Receipt of LAIV also resulted in a 30% lower incidence of febrile otitis media and 21% fewer febrile illnesses. Another study assessing LAIV effectiveness in children aged 18 months-18 years indicated effectiveness against medically attended acute respiratory illness (MAARI) of 18%. However, applying a validation sample of surveillance cultures with MAARI demonstrated efficacy of 92% against influenza A (H1N1) and 66% against an influenza B drift variant (99).
Healthy Adults. A randomized, double-blind, placebo-controlled trial among 4,561 healthy working adults aged 18-64 years assessed multiple endpoints, including reductions in illness, absenteeism, health-care visits, and medication use during peak and total influenza outbreak periods (100). The study was conducted during the 1997-98 influenza season, when the vaccine and circulating A (H3N2) strains were not well-matched. The study did not include testing of viruses by a laboratory. During peak outbreak periods, no difference was identified between LAIV and placebo recipients experiencing any febrile episodes. However, vaccination was associated with reductions in severe febrile illnesses of 19% and febrile upper respiratory tract illnesses of 24%. Vaccination also was associated with fewer days of illness, fewer days of work lost, fewer days with health-care-provider visits, and reduced use of prescription antibiotics and over-the-counter medications. Among the subset of 3,637 healthy adults aged 18-49 years, LAIV recipients (n = 2,411) had 26% fewer febrile upper-respiratory illness episodes; 27% fewer lost work days as a result of febrile upper respiratory illness; and 18%-37% fewer days of health-care provider visits caused by febrile illness, compared with placebo recipients (n = 1,226). Days of antibiotic use were reduced by 41%-45% in this age subset. Another randomized, double-blind, placebo-controlled challenge study among 92 healthy adults (LAIV, n = 29; placebo, n = 31; inactivated influenza vaccine, n = 32) aged 18-41 years assessed the efficacy of both LAIV and inactivated vaccine (101). The overall efficacy of LAIV and inactivated influenza vaccine in preventing laboratory-documented influenza from all three influenza strains combined was 85% and 71%, respectively, on the basis of experimental challenge by viruses to which study participants were susceptible before vaccination. The difference in efficacy between the two vaccines was not statistically significant.
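For reference, the efficacy percentages quoted in this section are derived from attack rates in the vaccinated and control groups as VE = (1 - AR_vaccinated/AR_control) x 100. A minimal sketch using the attack rates from the pediatric inactivated-vaccine trial cited earlier; small discrepancies from the published estimates reflect rounding and model adjustments in the original analyses:

```python
def vaccine_efficacy(attack_rate_vaccinated: float,
                     attack_rate_control: float) -> float:
    """Vaccine efficacy (%) from attack rates in each study arm."""
    return (1.0 - attack_rate_vaccinated / attack_rate_control) * 100.0

# Year 1 (attack rates 5.5% vaccine vs. 15.9% placebo): ~65%, reported as 66%.
print(round(vaccine_efficacy(5.5, 15.9)))
# Year 2, a mild season (3.6% vs. 3.3%): ~ -9%, reported as -7%.
print(round(vaccine_efficacy(3.6, 3.3)))
```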
# Cost-Effectiveness of Influenza Vaccine
Influenza vaccination can reduce both health-care costs and productivity losses associated with influenza illness. Economic studies of influenza vaccination of persons aged >65 years conducted in the United States have reported overall societal cost savings and substantial reductions in hospitalization and death (15,93,102). Among persons aged >65 years, vaccination resulted in a net savings per quality-adjusted life year (QALY) gained, and vaccination resulted in costs of $23-$256/QALY among younger age groups. Additional studies of the relative cost-effectiveness and cost utility of influenza vaccination among children and among adults aged <65 years are needed and should be designed to account for year-to-year variations in influenza attack rates, illness severity, and vaccine efficacy when evaluating the long-term costs and benefits of annual vaccination.
# Vaccination Coverage Levels
One of the national health objectives for 2010 is to achieve vaccination coverage for 90% of persons aged >65 years (objective no. 14-29a) (111). Among persons aged >65 years, influenza vaccination levels increased from 33% in 1989 (112) to 66% in 1999 (113), surpassing the Healthy People 2000 objective of 60% (114). Vaccine coverage in this group reached the highest levels recorded (68%) during the 1999-00 influenza season, using the percentage of adults reporting influenza vaccination during the previous 12 months who participated in the National Health Interview Survey (NHIS) during the first and second quarters of each calendar year as a proxy measure of influenza vaccine coverage for the previous influenza season (113).
[Table 2 notes: high-risk status was based on self-report of conditions such as diabetes, heart or lung disease, cancer, chronic bronchitis, weak or failing kidneys, or a recent asthma episode; pregnancy refers to women aged 18-44 years pregnant at the time of the survey and without high-risk conditions; health-care workers were classified by current health-care occupation or industry; household contacts were interviewed adults living with a young child, a person aged >65 years, or a person aged 2-17 years at high risk, excluding adults who were themselves health-care workers or at high risk.]
Possible reasons for the increase in influenza vaccination levels among persons aged >65 years through the 1999-00 influenza season include 1) greater acceptance of preventive medical services by practitioners; 2) increased delivery and administration of vaccine by health-care providers and sources other than physicians; 3) new information regarding influenza vaccine effectiveness, cost-effectiveness, and safety; and 4) initiation of Medicare reimbursement for influenza vaccination in 1993 (8,14,15,94,95,115,116). Vaccine coverage increased more rapidly through the mid-1990s than during subsequent seasons (average annual percentage increase of 4% from 1988-89 to 1996-97 versus 1% from 1996-97 to 1999-00) and has remained relatively stable since 2000. Estimated national influenza vaccine coverage in 2003 among persons aged >65 years and 50-64 years was 66% and 37%, respectively, based on 2003 NHIS data (Table 2). The estimated vaccination coverage among adults with high-risk conditions aged 18-49 years and 50-64 years was 24% and 46%, respectively, substantially lower than the Healthy People 2000 and 2010 objective of 60% (111,114). Continued annual monitoring is needed to determine the effects of vaccine supply delays and shortages, changes in influenza vaccination recommendations and target groups for vaccination, reimbursement rates for vaccine and vaccine administration, and other factors related to vaccination coverage among adults and children. New strategies to improve coverage will be needed to achieve the Healthy People 2010 objective (21). Reducing racial and ethnic health disparities, including disparities in vaccination coverage, is an overarching national goal (111). Although estimated influenza vaccination coverage for the 1999-00 season reached the highest levels recorded among older black, Hispanic, and white populations, vaccination levels among blacks and Hispanics continue to lag behind those among whites (113,117). Estimated vaccination coverage levels based on 2003 NHIS data among persons aged >65 years were 69% among non-Hispanic whites, 48% among non-Hispanic blacks, and 45% among Hispanics (CDC, National Immunization Program, unpublished data, 2005). Additional strategies are needed to achieve the Healthy People 2010 objectives among all racial and ethnic groups. In 1997 and 1998, vaccination coverage estimates among nursing home residents were 64%-82% and 83%, respectively (118,119). The Healthy People 2010 goal is to achieve influenza vaccination of 90% among nursing home residents, an increase from the Healthy People 2000 goal of 80% (111,114). Reported vaccination levels are low among children at increased risk for influenza complications. One study conducted among patients in health maintenance organizations reported influenza vaccination percentages ranging from 9% to 10% among children with asthma (120). A 25% vaccination level was reported among children with moderate to severe asthma who attended an allergy and immunology clinic (121). However, a study conducted in a pediatric clinic demonstrated an increase in the vaccination percentage of children with asthma or reactive airways disease from 5% to 32% after implementing a reminder/recall system (122).
One study reported 79% vaccination coverage among children attending a cystic fibrosis treatment center (123). Vaccination of health-care workers has been associated with reduced work absenteeism (9) and fewer deaths among nursing home patients (125,126) and is a high priority for reducing the impact of influenza in health-care settings and for expanding influenza vaccine use (127,128). Limited information is available regarding use of influenza vaccine among pregnant women. Among women aged 18-44 years without diabetes responding to the 2001 BRFSS, those who were pregnant were less likely to report influenza vaccination during the previous 12 months (13.7%) than those not pregnant (16.8%) (122,129). Only 13% of pregnant women reported vaccination according to 2003 NHIS data, excluding pregnant women who reported diabetes, heart disease, lung disease, and other selected high-risk conditions (CDC, National Immunization Program, unpublished data, 2004) (Table 2). These data indicate low compliance with the ACIP recommendations for pregnant women. In a study of influenza vaccine acceptance by pregnant women, 71% who were offered the vaccine chose to be vaccinated (130). However, a 1999 survey of obstetricians and gynecologists determined that only 39% administered influenza vaccine to obstetric patients, although 86% agreed that pregnant women's risk for influenza-related morbidity and mortality increases during the last two trimesters (131). Recent data indicate that self-report of influenza vaccination among adults, compared with extraction from the medical record, is both sensitive and specific (132). Patient self-reports should be accepted as evidence of influenza vaccination in clinical practice (132). However, information on the validity of parents' reports of pediatric influenza vaccination is not yet available. # Recommendations for Using Inactivated and Live, Attenuated Influenza Vaccines Both the inactivated influenza vaccine and LAIV can be used to reduce the risk for influenza. LAIV is approved for use among healthy persons aged 5-49 years. Inactivated influenza vaccine is approved for persons aged >6 months, including those with high-risk conditions (see following sections on inactivated influenza vaccine and live, attenuated influenza vaccine). # Target Groups for Vaccination # Persons at Increased Risk for Complications Vaccination with inactivated influenza vaccine is recommended for the following persons who are at increased risk for complications from influenza: - persons aged >65 years; - residents of nursing homes and other chronic-care facilities that house persons of any age who have chronic medical conditions; - adults and children who have chronic disorders of the pulmonary or cardiovascular systems, including asthma; - adults and children who have required regular medical follow-up or hospitalization during the preceding year because of chronic metabolic diseases (including diabetes mellitus), renal dysfunction, hemoglobinopathies, or immunosuppression (including immunosuppression caused by medications or by HIV); - children and adolescents (aged 6 months-18 years) who are receiving long-term aspirin therapy and therefore might be at risk for experiencing Reye syndrome after influenza infection; - women who will be pregnant during the influenza season; and - children aged 6-23 months. # Persons Aged 50-64 Years Vaccination is recommended for persons aged 50-64 years because this group has an increased prevalence of persons with high-risk conditions. In 2002, approximately 43.6 million persons in the United States were aged 50-64 years, of whom 13.5 million (34%) had one or more high-risk medical conditions (133). Influenza vaccine has been recommended for this entire age group to increase the low vaccination rates among persons in this age group with high-risk conditions (see preceding section). Age-based strategies are more successful in increasing vaccine coverage than patient-selection strategies based on medical conditions. Persons aged 50-64 years without high-risk conditions also receive benefit from vaccination in the form of decreased rates of influenza illness, decreased work absenteeism, and decreased need for medical visits and medication, including antibiotics (9-12).
Furthermore, 50 years is an age when other preventive services begin and when routine assessment of vaccination and other preventive services has been recommended (134,135). # Persons Who Can Transmit Influenza to Those at High Risk Persons who are clinically or subclinically infected can transmit influenza virus to persons at high risk for complications from influenza. Decreasing transmission of influenza from caregivers and household contacts to persons at high risk might reduce influenza-related deaths among persons at high risk. Evidence from two studies indicates that vaccination of health-care workers is associated with decreased deaths among nursing home patients (125,126), and hospital-based influenza outbreaks frequently occur where unvaccinated health-care workers are employed. Administration of LAIV has been demonstrated to reduce MAARI in contacts of vaccine recipients (136) and to reduce the economic and medical consequences of influenza-like illness (ILI), such as work days lost and number of health-care provider visits. In addition to health-care workers, additional groups that can transmit influenza to high-risk persons and that should be vaccinated include - employees of assisted living and other residences for persons in groups at high risk; - persons who provide home care to persons in groups at high risk; and - household contacts (including children) of persons in groups at high risk. In addition, because children aged 0-23 months are at increased risk for influenza-related hospitalization (54-56), vaccination is recommended for their household contacts and out-of-home caregivers, particularly for contacts of children aged 0-5 months, because influenza vaccines have not been approved by FDA for use among children aged <6 months (see Healthy Young Children). Healthy persons aged 5-49 years in these groups who are not contacts of severely immunosuppressed persons (see Live, Attenuated Influenza Vaccine Recommendations) can receive either LAIV or inactivated influenza vaccine, as illustrated in the sketch below. All other persons in this group should receive inactivated influenza vaccine. # Health-Care Workers All health-care workers should be vaccinated against influenza annually (128). Facilities that employ health-care workers are strongly encouraged to provide vaccine to workers by using approaches that maximize vaccination rates. This will protect health-care workers, their patients, and communities; improve prevention of influenza-associated disease and patient safety; and reduce disease burden. Influenza vaccination rates among health-care workers should be regularly measured and reported. Although vaccination rates for health-care workers are typically <40%, organized campaigns can attain higher vaccination rates among this population with moderate effort (127,137). Currently, seven states have legislation requiring annual influenza vaccination of health-care workers or the signing of an informed declination (128), and 15 states have regulations regarding vaccination of health-care workers in long-term-care facilities (138). Physicians, nurses, and other workers in both hospital and outpatient-care settings, including medical emergency-response workers (e.g., paramedics and emergency medical technicians), should be vaccinated, as should employees of nursing home and chronic-care facilities who have contact with patients or residents.
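The vaccine-selection rule for contacts and health-care workers described above reduces to a short decision procedure. The following is a minimal sketch in Python; the function name and input fields are illustrative, while the age cutoffs and the severely-immunosuppressed-contact restriction are taken from the recommendations above:

```python
def eligible_vaccines(age_years, healthy, contact_of_severely_immunosuppressed):
    """Vaccine options for a household contact, caregiver, or health-care
    worker, per the contact rules above (simplified; contraindications
    beyond these inputs are ignored)."""
    options = []
    if age_years >= 0.5:  # inactivated vaccine: approved for persons aged >6 months
        options.append("inactivated")
    # LAIV: healthy persons aged 5-49 years who are not in close contact
    # with severely immunosuppressed persons in protective environments
    if healthy and 5 <= age_years <= 49 and not contact_of_severely_immunosuppressed:
        options.append("LAIV")
    return options

# A healthy 30-year-old home caregiver may receive either vaccine:
print(eligible_vaccines(30, True, False))  # ['inactivated', 'LAIV']
# A nurse caring for stem cell transplant patients should receive inactivated vaccine:
print(eligible_vaccines(30, True, True))   # ['inactivated']
```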
# Additional Information Regarding Vaccination of Specific Populations Pregnant Women Influenza-associated excess deaths among pregnant women were documented during the pandemics of 1918-19 and 1957-58 (49,139-141). Case reports and limited studies also indicate that pregnancy can increase the risk for serious medical complications of influenza (142-146). An increased risk might result from 1) increases in heart rate, stroke volume, and oxygen consumption; 2) decreases in lung capacity; and 3) changes in immunologic function during pregnancy. A study of the effect of influenza during 17 interpandemic influenza seasons demonstrated that the relative risk for hospitalization for selected cardiorespiratory conditions among pregnant women enrolled in Medicaid increased from 1.4 during weeks 14-20 of gestation to 4.7 during weeks 37-42, in comparison with women who were 1-6 months postpartum (147). Women in their third trimester of pregnancy were hospitalized at a rate (i.e., 250/100,000 pregnant women) comparable with that of nonpregnant women who had high-risk medical conditions. Researchers estimate that an average of 1-2 hospitalizations can be prevented for every 1,000 pregnant women vaccinated (147). Because of the increased risk for influenza-related complications, women who will be pregnant during the influenza season should be vaccinated. Vaccination can occur in any trimester. One study of influenza vaccination of approximately 2,000 pregnant women demonstrated no adverse fetal effects associated with influenza vaccine (148). # Healthy Young Children Studies indicate that rates of hospitalization are higher among young children than older children when influenza viruses are in circulation (53,55,56,149,150). The increased rates of hospitalization are comparable with rates for other groups considered at high risk for influenza-related complications. However, the interpretation of these findings has been confounded by co-circulation of respiratory syncytial viruses, which are a cause of serious respiratory viral illness among children and which frequently circulate during the same time as influenza viruses (151-153). One study assessed rates of influenza-associated hospitalizations among the entire U.S. population during 1979-2001 and calculated an average rate of approximately 108 hospitalizations per 100,000 person-years in children aged <5 years (46). Two recent studies have attempted to separate the effects of respiratory syncytial viruses and influenza viruses on rates of hospitalization among children who do not have high-risk conditions (54,55). Both studies reported that otherwise healthy children aged <2 years, and possibly children aged 2-4 years, are at increased risk for influenza-related hospitalization compared with older healthy children (Table 1). Among the Tennessee Medicaid population during 1973-1993, healthy children aged 6 months-<3 years had rates of influenza-associated hospitalization comparable with or higher than rates among children aged 3-14 years with high-risk conditions (54,56). Another Tennessee study reported a hospitalization rate per year of 3-4/1,000 healthy children aged <2 years for laboratory-confirmed influenza (33); these rates can be compared directly after conversion to a common denominator, as in the sketch below. Because children aged 6-23 months are at substantially increased risk for influenza-related hospitalizations, ACIP recommends vaccination of all children in this age group (154). ACIP continues to recommend influenza vaccination of persons aged >6 months who have high-risk medical conditions.
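Because the hospitalization figures above are reported in different units (per 100,000 person-years versus per 1,000 children per year), normalizing to a common denominator makes the comparison explicit. A small illustrative calculation in Python, using only the rates cited above (variable names are ours):

```python
# Average rate among all children aged <5 years, 1979-2001 (46):
rate_under5 = 108        # hospitalizations per 100,000 person-years

# Rate among healthy children aged <2 years, laboratory-confirmed influenza (33):
rate_under2 = (3, 4)     # hospitalizations per 1,000 children per year

# Convert 3-4/1,000 to the per-100,000 scale: multiply by 100.
low, high = (r * 100 for r in rate_under2)
print(f"Healthy children <2 years: {low}-{high}/100,000 per year, versus an "
      f"average of {rate_under5}/100,000 person-years for all children <5 years.")
```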
The current inactivated influenza vaccine is not approved by FDA for use among children aged <6 months, the pediatric group at greatest risk for influenza-related complications (54). Vaccinating their household contacts and out-of-home caregivers might decrease the probability of influenza infection among these children. # Persons Infected with HIV Limited information is available regarding the frequency and severity of influenza illness or the benefits of influenza vaccination among persons with HIV infection (156,157). However, a retrospective study of young and middle-aged women enrolled in Tennessee's Medicaid program determined that the attributable risk for cardiopulmonary hospitalizations among women with HIV infection was higher during influenza seasons than during the peri-influenza periods. The risk for hospitalization was higher for HIV-infected women than for women with other well-recognized high-risk conditions, including chronic heart and lung diseases (158). Another study estimated that the risk for influenza-related death was 9.4-14.6/10,000 persons with acquired immunodeficiency syndrome (AIDS) compared with 0.09-0.10/10,000 among all persons aged 25-54 years and 6.4-7.0/10,000 among persons aged >65 years (159). Other reports indicate that influenza symptoms might be prolonged and the risk for complications from influenza increased for certain HIV-infected persons (160-162). Inactivated influenza vaccination has been demonstrated to produce substantial antibody titers against influenza among vaccinated HIV-infected persons who have minimal AIDS-related symptoms and high CD4+ T-lymphocyte cell counts (163-166). A limited, randomized, placebo-controlled trial determined that inactivated influenza vaccine was highly effective in preventing symptomatic, laboratory-confirmed influenza infection among HIV-infected persons with a mean of 400 CD4+ T-lymphocyte cells/mm3; vaccination was most effective among persons with >100 CD4+ cells and among those with <30,000 viral copies of HIV type-1/mL (162). Among persons who have advanced HIV disease and low CD4+ T-lymphocyte cell counts, inactivated influenza vaccine might not induce protective antibody titers (165,166); a second dose of vaccine does not improve the immune response in these persons (166,167). One study determined that HIV RNA (ribonucleic acid) levels increased transiently in one HIV-infected person after influenza infection (168). Studies have demonstrated a transient (i.e., 2-4 week) increase in replication of HIV-1 in the plasma or peripheral blood mononuclear cells of HIV-infected persons after vaccine administration (165,169). Other studies using similar laboratory techniques have not documented a substantial increase in the replication of HIV (170-173). Deterioration of CD4+ T-lymphocyte cell counts or progression of HIV disease have not been demonstrated among HIV-infected persons after influenza vaccination compared with unvaccinated persons (166,174). Limited information is available concerning the effect of antiretroviral therapy on increases in HIV RNA levels after either natural influenza infection or influenza vaccination (156,175). Because influenza can result in serious illness and because vaccination with inactivated influenza vaccine can result in the production of protective antibody titers, vaccination will benefit HIV-infected persons, including HIV-infected pregnant women.
# Breastfeeding Mothers Influenza vaccine is safe for mothers who are breastfeeding and their infants. Breastfeeding does not adversely affect the immune response and is not a contraindication for vaccination. # Travelers The risk for exposure to influenza during travel depends on the time of year and destination. In the tropics, influenza can occur throughout the year. In the temperate regions of the Southern Hemisphere, the majority of influenza activity occurs during April-September. In temperate climate zones of the Northern and Southern Hemispheres, travelers also can be exposed to influenza during the summer, especially when traveling as part of large organized tourist groups (e.g., on cruise ships) that include persons from areas of the world where influenza viruses are circulating (176,177). Persons at high risk for complications of influenza who were not vaccinated with influenza vaccine during the preceding fall or winter should consider receiving influenza vaccine before travel if they plan to - travel to the tropics, - travel with organized tourist groups at any time of year, or - travel to the Southern Hemisphere during April-September. No information is available regarding the benefits of revaccinating persons before summer travel who were already vaccinated in the preceding fall. Persons at high risk who receive the previous season's vaccine before travel should be revaccinated with the current vaccine the following fall or winter. Persons aged >50 years and persons at high risk should consult with their physicians before embarking on travel during the summer to discuss the symptoms and risks for influenza and the advisability of carrying antiviral medications for either prophylaxis or treatment of influenza. # General Population In addition to the groups for which annual influenza vaccination is recommended, physicians should administer influenza vaccine to any person who wishes to reduce the likelihood of becoming ill with influenza or transmitting influenza to others should they become infected (the vaccine can be administered to children aged >6 months), depending on vaccine availability (see Influenza Vaccine Supply). Persons who provide essential community services should be considered for vaccination to minimize disruption of essential activities during influenza outbreaks. Students or other persons in institutional settings (e.g., those who reside in dormitories) should be encouraged to receive vaccine to minimize the disruption of routine activities during epidemics. # Comparison of LAIV with Inactivated Influenza Vaccine Both inactivated influenza vaccine and LAIV are available to reduce the risk for influenza infection and illness. However, the vaccines also differ in key ways (Table 3). # Major Similarities LAIV and inactivated influenza vaccine contain strains of influenza viruses that are antigenically equivalent to the annually recommended strains: one influenza A (H3N2) virus, one A (H1N1) virus, and one B virus. Each year, one or more virus strains might be changed on the basis of global surveillance for influenza viruses and the emergence and spread of new strains. Viruses for both vaccines are grown in eggs. Both vaccines are administered annually to provide optimal protection against influenza infection (Table 3). # Major Differences Inactivated influenza vaccine contains killed viruses, whereas LAIV contains live, attenuated viruses still capable of replication. 
LAIV is administered intranasally by sprayer, whereas inactivated influenza vaccine is administered intramuscularly by injection. LAIV is more expensive than inactivated influenza vaccine, although the price differential between inactivated vaccine and LAIV has decreased for the 2005-06 season. LAIV is approved for use among healthy persons aged 5-49 years; inactivated influenza vaccine is approved for use among persons aged >6 months, including those who are healthy and those with chronic medical conditions (Table 3). # Inactivated Influenza Vaccine Recommendations Persons Who Should Not Be Vaccinated with Inactivated Influenza Vaccine Inactivated influenza vaccine should not be administered to persons known to have anaphylactic hypersensitivity to eggs or to other components of the influenza vaccine without first consulting a physician (see Side Effects and Adverse Reactions). Prophylactic use of antiviral agents is an option for preventing influenza among such persons. However, persons who have a history of anaphylactic hypersensitivity to vaccine components but who are also at high risk for complications from influenza can benefit from vaccine after appropriate allergy evaluation and desensitization. Information regarding vaccine components is located in package inserts from each manufacturer. Persons with acute febrile illness usually should not be vaccinated until their symptoms have abated. However, minor illnesses with or without fever do not contraindicate use of influenza vaccine, particularly among children with mild upper-respiratory-tract infection or allergic rhinitis. Immunogenicity and side effects of split- and whole-virus vaccines are similar among adults when vaccines are administered at the recommended dosage. § For adults and older children, the recommended site of vaccination is the deltoid muscle. The preferred site for infants and young children is the anterolateral aspect of the thigh. ¶ Two doses administered at least 1 month apart are recommended for children aged <9 years who are receiving influenza vaccine for the first time. # Dosage Dosage recommendations vary according to age group (Table 4). Among previously unvaccinated children aged <9 years, 2 doses administered at least 1 month apart are recommended for satisfactory antibody responses. If possible, the second dose should be administered before December. If a child aged <9 years receiving vaccine for the first time does not receive a second dose of vaccine within the same season, only 1 dose of vaccine should be administered the following season. Two doses are not required at that time. Among adults, studies have indicated limited or no improvement in antibody response when a second dose is administered during the same season (178-180). Even when the current influenza vaccine contains one or more antigens administered in previous years, annual vaccination with the current vaccine is necessary because immunity declines during the year after vaccination (181,182). Vaccine prepared for a previous influenza season should not be administered to provide protection for the current season. Because of lack of vaccine efficacy data, ACIP does not recommend that a child receiving influenza vaccine for the first time be given the first dose of vaccine in the spring, followed by the second dose in the autumn of the same year. # Route The intramuscular route is recommended for influenza vaccine. Adults and older children should be vaccinated in the deltoid muscle.
A needle length >1 inch can be considered for these age groups because needles <1 inch might be of insufficient length to penetrate muscle tissue in certain adults and older children (183). Infants and young children should be vaccinated in the anterolateral aspect of the thigh (67). ACIP recommends a needle length of 7/8-1 inch for children aged <12 months for intramuscular vaccination into the anterolateral thigh. When injecting into the deltoid muscle among children with adequate deltoid muscle mass, a needle length of 7/8-1.25 inches is recommended (67). # Side Effects and Adverse Reactions When educating patients regarding potential side effects, clinicians should emphasize that 1) inactivated influenza vaccine contains noninfectious killed viruses and cannot cause influenza; and 2) coincidental respiratory disease unrelated to influenza vaccination can occur after vaccination. # Local Reactions In placebo-controlled studies among adults, the most frequent side effect of vaccination is soreness at the vaccination site (affecting 10%-64% of patients) that lasts <2 days (12,184-186). These local reactions typically are mild and rarely interfere with the person's ability to conduct usual daily activities. One blinded, randomized, cross-over study among 1,952 adults and children with asthma demonstrated that only body aches were reported more frequently after inactivated influenza vaccine (25.1%) than placebo injection (20.8%) (187). One study (83) reported local pain and swelling among 20%-28% of children with asthma aged 9 months-18 years, and another study (80) reported local reactions among 23% of children aged 6 months-4 years with chronic heart or lung disease. A different study (81) reported no difference in local reactions among 53 children aged 6 months-6 years with high-risk medical conditions or among 305 healthy children aged 3-12 years in a placebo-controlled trial of inactivated influenza vaccine. In a study of 12 children aged 5-32 months, no substantial local or systemic reactions were noted (188). # Systemic Reactions Fever, malaise, myalgia, and other systemic symptoms can occur after vaccination with inactivated vaccine and most often affect persons who have had no previous exposure to the influenza virus antigens in the vaccine (e.g., young children) (189,190). These reactions begin 6-12 hours after vaccination and can persist for 1-2 days. Recent placebo-controlled trials demonstrate that among older persons and healthy young adults, administration of split-virus influenza vaccine is not associated with higher rates of systemic symptoms (e.g., fever, malaise, myalgia, and headache) when compared with placebo injections (12,184-186). Less information from published studies is available for children, compared with adults. However, in a randomized cross-over study among both children and adults with asthma, no increase in asthma exacerbations was reported for either age group (187). An analysis of 215,600 children aged <18 years and 8,476 children aged 6-23 months enrolled in one of five health maintenance organizations reported no increase in biologically plausible medically attended events during the 2 weeks after inactivated influenza vaccination, compared with control periods 3-4 weeks before and after vaccination (191). In a study of 791 healthy children (71), postvaccination fever was noted among 11.5% of children aged 1-5 years, 4.6% of children aged 6-10 years, and 5.1% of children aged 11-15 years.
Among children with high-risk medical conditions, one study of 52 children aged 6 months-4 years reported fever among 27% and irritability and insomnia among 25% (80); and a study among 33 children aged 6-18 months reported that one child had irritability and one had a fever and seizure after vaccination (192). No placebo comparison was made in these studies. However, in pediatric trials of A/New Jersey/76 swine influenza vaccine, no difference was reported between placebo and split-virus vaccine groups in febrile reactions after injection, although the vaccine was associated with mild local tenderness or erythema (81). Limited data regarding potential adverse events after influenza vaccination are available from the Vaccine Adverse Event Reporting System (VAERS). During January 1, 1991-June 30, 2004, VAERS received 1,895 reports of adverse events among children aged <18 years, including 479 reports of adverse events among children aged 6-23 months. The number of influenza vaccine doses received by children during this entire period is unknown (CDC, unpublished data, 2005). A recently published review of VAERS reports of trivalent inactivated influenza vaccine (TIV) in children aged 6-23 months documented that the most frequently reported adverse events were fever, rash, injection-site reactions, and seizures. The majority of the small total number of reported seizures appeared to be febrile (193). Because of the limitations of passive reporting systems, determining causality for specific types of adverse events, with the exception of injection-site reactions, is usually not possible by using VAERS data alone. A population-based study of TIV safety in children aged 6-23 months indicated no vaccine-associated adverse events that had a plausible relationship to vaccination (194). Health-care professionals should promptly report to VAERS all clinically significant adverse events after influenza vaccination of children, even if the health-care professional is not certain that the vaccine caused the event. The Institute of Medicine has specifically recommended reporting of potential neurologic complications (e.g., demyelinating disorders such as Guillain-Barré syndrome [GBS]), although no evidence exists of a causal relationship between influenza vaccine and neurologic disorders in children. Immediate, presumably allergic, reactions (e.g., hives, angioedema, allergic asthma, and systemic anaphylaxis) rarely occur after influenza vaccination (195). These reactions probably result from hypersensitivity to certain vaccine components; the majority of reactions probably are caused by residual egg protein. Although current influenza vaccines contain only a limited quantity of egg protein, this protein can induce immediate hypersensitivity reactions among persons who have severe egg allergy. Persons who have had hives or swelling of the lips or tongue, or who have experienced acute respiratory distress or collapse after eating eggs, should consult a physician for appropriate evaluation to help determine whether vaccine should be administered. Persons who have documented immunoglobulin E (IgE)-mediated hypersensitivity to eggs, including those who have had occupational asthma or other allergic responses to egg protein, might also be at increased risk for allergic reactions to influenza vaccine, and consultation with a physician should be considered. Protocols have been published for safely administering influenza vaccine to persons with egg allergies (196-198).
Hypersensitivity reactions to any vaccine component can occur. Although exposure to vaccines containing thimerosal can lead to induction of hypersensitivity, the majority of patients do not have reactions to thimerosal when it is administered as a component of vaccines, even when patch or intradermal tests for thimerosal indicate hypersensitivity (199,200). When reported, hypersensitivity to thimerosal usually has consisted of local, delayed hypersensitivity reactions (199). # Guillain-Barré Syndrome The 1976 swine influenza vaccine was associated with an increased frequency of GBS (201,202). Among persons who received the swine influenza vaccine in 1976, the rate of GBS was approximately 10 cases/1 million persons vaccinated; the risk was higher among persons aged >25 years than among persons aged <25 years (201). Evidence for a causal relation of GBS with subsequent vaccines prepared from other influenza viruses is unclear. Obtaining strong epidemiologic evidence for a possible limited increase in risk is difficult for such a rare condition as GBS, which has an annual incidence of 10-20 cases/1 million adults (203). More definitive data probably will require using other methodologies (e.g., laboratory studies of the pathophysiology of GBS). During three of four influenza seasons studied during 1977-1991, the overall relative risk estimates for GBS after influenza vaccination were slightly elevated but were not statistically significant in any of these studies (204-206). However, in a study of the 1992-93 and 1993-94 seasons, the overall relative risk for GBS was 1.7 (95% CI = 1.0-2.8; p = 0.04) during the 6 weeks after vaccination, representing approximately 1 additional case of GBS/1 million persons vaccinated. The combined number of GBS cases peaked 2 weeks after vaccination (207). Thus, investigations to date have not documented a substantial increase in GBS associated with influenza vaccines (other than the swine influenza vaccine in 1976) and indicate that, if influenza vaccine does pose a risk, it is probably slightly more than one additional case/1 million persons vaccinated. Recent data from VAERS have documented decreased reporting of GBS after influenza vaccination across age groups, despite overall increased reporting for influenza vaccine (208). Cases of GBS after influenza infection have been reported, but no epidemiologic studies have documented such an association (209,210). Substantial evidence exists that multiple infectious illnesses, most notably Campylobacter jejuni infection and upper respiratory tract infections, are associated with GBS (203,211-213). Even if GBS were a true side effect of vaccination in the years after 1976, the estimated risk for GBS of approximately 1 additional case/1 million persons vaccinated is substantially less than the risk for severe influenza, which can be prevented by vaccination among all age groups, especially persons aged >65 years and those who have medical indications for influenza vaccination (Table 1) (see Hospitalizations and Deaths from Influenza). The potential benefits of influenza vaccination in preventing serious illness, hospitalization, and death substantially outweigh the possible risks for experiencing vaccine-associated GBS. The average case fatality ratio for GBS is 6% and increases with age (203,214). No evidence indicates that the case fatality ratio for GBS differs among vaccinated persons and those not vaccinated. The incidence of GBS among the general population is low, but persons with a history of GBS have a substantially greater likelihood of subsequently experiencing GBS than persons without such a history (204,215).
Thus, the likelihood of coincidentally experiencing GBS after influenza vaccination is expected to be greater among persons with a history of GBS than among persons with no history of this syndrome. Whether influenza vaccination specifically might increase the risk for recurrence of GBS is unknown; therefore, avoiding vaccinating persons who are not at high risk for severe influenza complications and who are known to have experienced GBS within 6 weeks after a previous influenza vaccination is prudent. As an alternative, physicians might consider using influenza antiviral chemoprophylaxis for these persons. Although data are limited, for the majority of persons who have a history of GBS and who are at high risk for severe complications from influenza, the established benefits of influenza vaccination justify yearly vaccination. # Live, Attenuated Influenza Vaccine Recommendations Background Description and Action Mechanisms. LAIVs have been in development since the 1960s in the United States, where they have been evaluated as mono-, bi-, and trivalent formulations (216-218). The LAIV licensed for use in the United States beginning in 2003 is produced by MedImmune, Inc. (Gaithersburg, Maryland) and marketed under the name FluMist™. It is a live, trivalent, intranasally administered vaccine that is - attenuated, producing mild or no signs or symptoms related to influenza virus infection; - temperature-sensitive, a property that limits the replication of the vaccine viruses at 38°C-39°C and thus restricts LAIV viruses from replicating efficiently in human lower airways; and - cold-adapted, replicating efficiently at 25°C, a temperature that is permissive for replication of LAIV viruses but restrictive for replication of many wild-type viruses. In animal studies, LAIV viruses replicate in the mucosa of the nasopharynx, inducing protective immunity against viruses included in the vaccine, but replicate inefficiently in the lower airways or lungs. The first step in developing an LAIV was the derivation of two stably attenuated master donor viruses (MDV), one for type A and one for type B influenza viruses. The two MDVs each acquired the cold-adapted, temperature-sensitive, attenuated phenotypes through serial passage in viral culture conducted at progressively lower temperatures. The vaccine viruses in LAIV are reassortant viruses containing genes from these MDVs that confer attenuation, temperature sensitivity, and cold adaptation and genes from the recommended contemporary wild-type influenza viruses, encoding the surface antigens hemagglutinin (HA) and neuraminidase (NA). Thus, MDVs provide the stably attenuated vehicles for presenting influenza HA and NA antigens, to which the protective antibody response is directed, to the immune system. The reassortant vaccine viruses are grown in embryonated hens' eggs. After the vaccine is formulated and inserted into individual sprayers for nasal administration, the vaccine must be stored at -15°C or colder. The immunogenicity of the approved LAIV has been assessed in multiple studies (102,219-224), which included approximately 100 children aged 5-17 years and approximately 300 adults aged 18-49 years. LAIV virus strains replicate primarily in nasopharyngeal epithelial cells. The protective mechanisms induced by vaccination with LAIV are not completely understood but appear to involve both serum and nasal secretory antibodies.
No single laboratory measurement closely correlates with protective immunity induced by LAIV. Shedding and Transmission of Vaccine Viruses. Available data indicate that both children and adults vaccinated with LAIV can shed vaccine viruses for >2 days after vaccination, although in lower titers than typically occur with shedding of wild-type influenza viruses. Shedding should not be equated with person-to-person transmission of vaccine viruses, although, in rare instances, shed vaccine viruses can be transmitted from vaccinees to nonvaccinated persons. One unpublished study in a child care center setting assessed transmissibility of vaccine viruses from 98 vaccinated to 99 unvaccinated subjects, all aged 8-36 months. Eighty percent of vaccine recipients shed one or more virus strains, with a mean of 7.6 days' duration (225). One influenza type B isolate was recovered from a placebo recipient and was confirmed to be vaccine-type virus. The type B isolate retained the cold-adapted, temperature-sensitive, attenuated phenotype, and it possessed the same genetic sequence as a virus shed from a vaccine recipient in the same children's play group. The placebo recipient from whom the influenza type B vaccine virus was isolated did not exhibit symptoms that were different from those experienced by vaccine recipients. The estimated probability of acquiring vaccine virus after close contact with a single LAIV recipient in this child care population was 0.58%-2.4%. One study assessing shedding of vaccine viruses in 20 healthy vaccinated adults aged 18-49 years demonstrated that the majority of shedding occurred within the first 3 days after vaccination, although one subject was noted to shed virus on day 7 after vaccine receipt. No subject shed vaccine viruses >10 days after vaccination. Duration or type of symptoms associated with receipt of LAIV did not correlate with duration of shedding of vaccine viruses. Person-to-person transmission of vaccine viruses was not assessed in this study (226). Another study assessing shedding of vaccine viruses in 14 healthy adults aged 18-49 years indicated that 50% of these adults had viral antigen detected by direct immunofluorescence or rapid antigen tests within 7 days of vaccination. Most viral shedding was detected on day 2 or 3. Person-to-person transmission of vaccine viruses was not assessed in this study (227). Stability of Vaccine Viruses. In clinical trials, viruses shed by vaccine recipients have been phenotypically stable. In one study, nasal and throat swab specimens were collected from 17 study participants for 2 weeks after vaccine receipt (228). Virus isolates were analyzed by multiple genetic techniques. All isolates retained the LAIV genotype after replication in the human host, and all retained the cold-adapted and temperature-sensitive phenotypes. # Using LAIV LAIV is an option for vaccination of healthy persons aged 5-49 years, including health-care workers and other persons in close contact with groups at high risk and those wanting to avoid influenza. During periods when inactivated vaccine is in short supply, use of LAIV is encouraged when feasible for eligible persons (including health-care workers) because use of LAIV by these persons might increase availability of inactivated vaccine for persons in groups at high risk; a sketch of this triage logic follows.
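As a rough illustration of the shortage guidance above, the triage can be expressed as a small helper. This is a sketch under simplifying assumptions: the function and its inputs are hypothetical, and full LAIV eligibility involves more criteria than shown.

```python
def vaccine_during_shortage(age_years, healthy):
    """During a shortage of inactivated vaccine, steer eligible healthy
    persons aged 5-49 years (including health-care workers) toward LAIV,
    conserving inactivated vaccine for groups at high risk."""
    if healthy and 5 <= age_years <= 49:
        return "LAIV"
    if age_years >= 0.5:   # inactivated vaccine is approved for persons aged >6 months
        return "inactivated"
    return None            # no influenza vaccine is approved for children aged <6 months

print(vaccine_during_shortage(35, healthy=True))    # LAIV
print(vaccine_during_shortage(70, healthy=False))   # inactivated
```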
Possible advantages of LAIV include its potential to induce a broad mucosal and systemic immune response, its ease of administration, and the acceptability of an intranasal rather than intramuscular route of administration. # Persons Who Should Not Be Vaccinated with LAIV The following populations should not be vaccinated with LAIV: - persons aged <5 years or >49 years; - persons with asthma, reactive airways disease, or other chronic disorders of the pulmonary or cardiovascular systems; persons with other underlying medical conditions, including metabolic diseases such as diabetes, renal dysfunction, and hemoglobinopathies; or persons with known or suspected immunodeficiency diseases or who are receiving immunosuppressive therapies; - children or adolescents receiving aspirin or other salicylates (because of the association of Reye syndrome with wild-type influenza infection); - persons with a history of GBS; - pregnant women; and - persons with a history of hypersensitivity, including anaphylaxis, to any of the components of LAIV or to eggs. # Close Contacts of Persons at High Risk for Complications from Influenza Close contacts of persons at high risk for complications from influenza should receive influenza vaccine to reduce transmission of wild-type influenza viruses to persons at high risk. ACIP has not indicated a preference for inactivated influenza vaccine use by health-care workers or other persons who have close contact with persons with lesser degrees of immunosuppression (e.g., persons with diabetes, persons with asthma taking corticosteroids, or persons infected with HIV) or for inactivated influenza vaccine use by health-care workers or other healthy persons aged 5-49 years in close contact with all other groups at high risk. Use of inactivated influenza vaccine is preferred for vaccinating household members, health-care workers, and others who have close contact with severely immunosuppressed persons (e.g., patients with hematopoietic stem cell transplants) during those periods in which the immunosuppressed person requires care in a protective environment. The rationale for not using LAIV among health-care workers caring for such patients is the theoretical risk that a live, attenuated vaccine virus could be transmitted to the severely immunosuppressed person. If a health-care worker receives LAIV, that worker should refrain from contact with severely immunosuppressed patients for 7 days after vaccine receipt. Hospital visitors who have received LAIV should refrain from contact with severely immunosuppressed persons for 7 days after vaccination; however, such persons need not be excluded from visitation of patients who are not severely immunosuppressed. # Personnel Who May Administer LAIV Low-level introduction of vaccine viruses into the environment is likely unavoidable when administering LAIV. The risk for acquiring vaccine viruses from the environment is unknown but likely to be limited. Severely immunosuppressed persons should not administer LAIV. However, other persons at high risk for influenza complications may administer LAIV, including persons with underlying medical conditions placing them at high risk and persons who are likely to be at risk, such as pregnant women, persons with asthma, and persons aged >50 years. # LAIV Dosage and Administration LAIV is intended for intranasal administration only and should not be administered by the intramuscular, intradermal, or intravenous route. LAIV must be thawed before administration. This can be accomplished by holding an individual sprayer in the palm of the hand until thawed, with subsequent immediate administration. Alternatively, the vaccine can be thawed in a refrigerator and stored at 2°C-8°C for <60 hours before use (see LAIV Storage). Vaccine should not be refrozen after thawing. LAIV is supplied in a prefilled single-use sprayer containing 0.5 mL of vaccine. Approximately 0.25 mL (i.e., half of the total sprayer contents) is sprayed into the first nostril while the recipient is in the upright position. An attached dose-divider clip is removed from the sprayer to administer the second half of the dose into the other nostril. If the vaccine recipient sneezes after administration, the dose should not be repeated.
LAIV should be administered annually according to the following schedule: - Children aged 5-8 years previously unvaccinated at any time with either LAIV or inactivated influenza vaccine should receive 2 doses of LAIV separated by 6-10 weeks. - Children aged 5-8 years previously vaccinated at any time with either LAIV or inactivated influenza vaccine should receive 1 dose of LAIV. They do not require a second dose. - Persons aged 9-49 years should receive 1 dose of LAIV. LAIV can be administered to persons with minor acute illnesses (e.g., diarrhea or mild upper respiratory tract infection with or without fever). However, if clinical judgment indicates that nasal congestion is present that might impede delivery of the vaccine to the nasopharyngeal mucosa, deferral of administration should be considered until resolution of the illness. Whether concurrent administration of LAIV with other vaccines affects the safety or efficacy of either LAIV or the simultaneously administered vaccine is unknown. In the absence of specific data indicating interference, following the ACIP general recommendations for immunization is prudent (67). Inactivated vaccines do not interfere with the immune response to other inactivated vaccines or to live vaccines. An inactivated vaccine can be administered either simultaneously or at any time before or after LAIV. Two live vaccines not administered on the same day should be administered >4 weeks apart when possible. # LAIV and Use of Influenza Antiviral Medications The effect on safety and efficacy of LAIV coadministration with influenza antiviral medications has not been studied. However, because influenza antivirals reduce replication of influenza viruses, LAIV should not be administered until 48 hours after cessation of influenza antiviral therapy, and influenza antiviral medications should not be administered for 2 weeks after receipt of LAIV. # LAIV Storage LAIV must be stored at -15°C or colder. A manufacturer-supplied freezer box was formerly required for storage of LAIV in a frost-free freezer; however, the freezer box is now optional, and LAIV may now be stored in frost-free freezers without using a freezer box. LAIV can be thawed in a refrigerator and stored at 2°C-8°C for <60 hours before use. It should not be refrozen after thawing. # Side Effects and Adverse Reactions Twenty prelicensure clinical trials assessed the safety of the approved LAIV. In these combined studies, approximately 28,000 doses of the vaccine were administered to approximately 20,000 subjects. A subset of these trials comprised randomized, placebo-controlled studies in which an estimated 4,000 healthy children aged 5-17 years and 2,000 healthy adults aged 18-49 years were vaccinated. The incidence of adverse events possibly complicating influenza (e.g., pneumonia, bronchitis, bronchiolitis, or central nervous system events) was not statistically different among LAIV and placebo recipients aged 5-49 years. LAIV is made from attenuated viruses and does not cause influenza in vaccine recipients. Children. In a subset of healthy children aged 60-71 months from one clinical trial (97,98), certain signs and symptoms were reported more often among LAIV recipients after the first dose (n = 214) than placebo recipients (n = 95) (e.g., runny nose, 48.1% versus 44.2%; headache, 17.8% versus 11.6%; vomiting, 4.7% versus 3.2%; and myalgias, 6.1% versus 4.2%), but these differences were not statistically significant.
In other trials, signs and symptoms reported after LAIV administration have included runny nose or nasal congestion (20%-75%), headache (2%-46%), fever (0%-26%), vomiting (3%-13%), abdominal pain (2%), and myalgias (0%-21%) (219,222,224,229-231). These symptoms were associated more often with the first dose and were self-limited. Unpublished data from a study including subjects aged 1-17 years indicated an increase in asthma or reactive airways disease in the subset aged 12-59 months. Because of this finding, LAIV is not approved for use among children aged <60 months. Adults. Among adults, runny nose or nasal congestion (28%-78%), headache (16%-44%), and sore throat (15%-27%) have been reported more often among vaccine recipients than placebo recipients (100,232,233). In one clinical trial (100) among a subset of healthy adults aged 18-49 years, signs and symptoms reported more frequently among LAIV recipients (n = 2,548) than placebo recipients (n = 1,290) within 7 days after each dose included cough (13.9% versus 10.8%); runny nose (44.5% versus 27.1%); sore throat (27.8% versus 17.1%); chills (8.6% versus 6.0%); and tiredness/weakness (25.7% versus 21.6%). Safety Among Groups at High Risk from Influenza-Related Morbidity. Until additional data are acquired and analyzed, persons at high risk for experiencing complications from influenza infection (e.g., immunocompromised patients; patients with asthma, cystic fibrosis, or chronic obstructive pulmonary disease; or persons aged >65 years) should not be vaccinated with LAIV. Protection from influenza among these groups should be accomplished by using inactivated influenza vaccine. Serious Adverse Events. Serious adverse events among healthy children aged 5-17 years or healthy adults aged 18-49 years occurred at a rate of <1%. Surveillance should continue for adverse events that might not have been detected in previous studies. A preliminary review of reports to VAERS after distribution of approximately 800,000 doses during the 2003-04 influenza season did not reveal any substantial new safety concerns (234). Health-care professionals should promptly report all clinically significant adverse events after LAIV administration to VAERS, as recommended for inactivated influenza vaccine. # Recommended Vaccines for Different Age Groups When vaccinating children aged 6 months-3 years, health-care providers should use inactivated influenza vaccine that has been approved by FDA for this age group. Inactivated influenza vaccine from Sanofi Pasteur, Inc. (FluZone split-virus) is approved for use among persons aged >6 months. Inactivated influenza vaccine from Chiron (Fluvirin) is labeled in the United States for use among persons aged >4 years because data to demonstrate efficacy among younger persons have not been provided to FDA. Live, attenuated influenza vaccine from MedImmune (FluMist) is approved for use by healthy persons aged 5-49 years (Table 5). # Timing of Annual Influenza Vaccination The annual supply of influenza vaccine and the timing of its distribution cannot be guaranteed in any year. Information regarding the supply of 2005-06 vaccine might not be available until late summer or early fall 2005.
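The product-specific age approvals above (see also Table 5) amount to a simple lookup. A minimal sketch covering only the three products named in the text; the dictionary and helper are illustrative, not an official reference:

```python
# Approved age ranges in years (lower bound, upper bound or None), per the text above.
APPROVED_AGES = {
    "FluZone (inactivated, split-virus)": (0.5, None),  # persons aged >6 months
    "Fluvirin (inactivated)": (4.0, None),              # persons aged >4 years
    "FluMist (LAIV)": (5.0, 49.0),                      # healthy persons aged 5-49 years
}

def products_for_age(age_years):
    """Products whose approved age range covers this age. Age is only one
    criterion; FluMist is additionally limited to healthy persons."""
    return [name for name, (lo, hi) in APPROVED_AGES.items()
            if age_years >= lo and (hi is None or age_years <= hi)]

print(products_for_age(1.5))   # ['FluZone (inactivated, split-virus)']
print(products_for_age(30))    # all three products (for healthy adults)
```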
To allow vaccine providers to plan for the upcoming vaccination season, taking into account the yearly possibility of vaccine delays or shortages and the need to ensure vaccination of persons at high risk and their contacts, ACIP recommends that inactivated influenza vaccine campaigns conducted in October focus primarily on persons at increased risk for influenza complications and their contacts, including health-care workers. Campaigns conducted in November and later should continue to vaccinate persons at high risk and their contacts but should also vaccinate other persons who wish to decrease their risk for influenza infection. Vaccination for all groups should continue into December and beyond. CDC and other public health agencies will assess the vaccine supply on a continuing basis throughout the manufacturing period and will make recommendations preceding the 2005-06 influenza season regarding the need for tiered timing of inactivated influenza vaccination of different risk groups. Because LAIV is approved for use in healthy persons aged 5-49 years, its use has not been subject to tiered timing. # Vaccination Before October To avoid missed opportunities for vaccination of persons at high risk for serious complications, such persons should be offered vaccine beginning in September during routine health-care visits or during hospitalizations, if vaccine is available. In facilities housing older persons (e.g., nursing homes), vaccination before October typically should be avoided because antibody levels in such persons can begin to decline within a limited time after vaccination (235). In addition, children aged <9 years who have not been previously vaccinated and who need 2 doses before the start of the influenza season can receive their first dose in September so that both doses of the most up-to-date vaccine can be administered before the onset of influenza activity. For previously unvaccinated children, 2 doses are needed to provide optimal protection against influenza. # Vaccination in October and November The optimal time to vaccinate is usually during October-November. ACIP recommends that vaccine providers focus their vaccination efforts in October and earlier primarily on persons aged >50 years, persons aged <50 years at increased risk for influenza-related complications (including children aged 6-23 months), household contacts of persons at high risk (including out-of-home caregivers and household contacts of children aged 0-23 months), and health-care workers. Vaccination of children aged <9 years who are receiving vaccine for the first time should also begin in October or earlier because those children need a booster dose 1 month after the initial dose. Efforts to vaccinate other persons who wish to decrease their risk for influenza infection should begin in November; however, if such persons request vaccination in October, vaccination should not be deferred, unless vaccine supplies dictate otherwise. Materials to assist providers in prioritizing early vaccine are available at /professionals/vaccination/index.htm (see also Travelers in this report). # Timing of Organized Vaccination Campaigns Persons and institutions planning substantial organized vaccination campaigns should consider scheduling these events after mid-October because the availability of vaccine in any location cannot be ensured consistently in early fall. Scheduling campaigns after mid-October will minimize the need for cancellations because vaccine is unavailable.
Campaigns conducted before November using inactivated vaccine should focus efforts on vaccination of persons aged >50 years, persons aged <50 years at increased risk for influenza-related complications (including children aged 6-23 months and pregnant women), health-care workers, and household contacts of persons at high risk (including children aged 0-23 months) to the extent feasible. Campaigns using LAIV are also optimally conducted in October and November. # Vaccination in December and Later After November, many persons who should or want to receive influenza vaccine remain unvaccinated. In addition, substantial amounts of vaccine are often left over at the end of the influenza season. To improve vaccine coverage, influenza vaccine should continue to be offered in December and throughout the influenza season as long as vaccine supplies are available, even after influenza activity has been documented in the community. In the United States, seasonal influenza activity can begin to increase as early as October or November, but influenza activity has not reached peak levels in the majority of recent seasons until late December-early March (Table 6). Therefore, although the timing of influenza activity can vary by region, vaccine administered after November is likely to be beneficial in the majority of influenza seasons. Adults develop peak antibody protection against influenza infection 2 weeks after vaccination (236,237). # Strategies for Implementing Vaccination Recommendations in Health-Care Settings Successful vaccination programs combine publicity and education for health-care workers and other potential vaccine recipients, a plan for identifying persons at high risk, use of reminder/recall systems, assessment of practice-level vaccination rates with feedback to staff, and efforts to remove administrative and financial barriers that prevent persons from receiving the vaccine, including use of standing orders programs (19,238). Using standing orders programs is recommended for long-term-care facilities (e.g., nursing homes and skilled nursing facilities), hospitals, and home health agencies to ensure the administration of recommended vaccinations for adults (239). Standing orders programs for both influenza and pneumococcal vaccination should be conducted under the supervision of a licensed practitioner according to a physician-approved facility or agency policy by health-care workers trained to screen patients for contraindications to vaccination, administer vaccine, and monitor for adverse events. The Centers for Medicare and Medicaid Services (CMS) has removed the physician signature requirement for the administration of influenza and pneumococcal vaccines to Medicare and Medicaid patients in hospitals, long-term-care facilities, and home health agencies (239). To the extent allowed by local and state law, these facilities and agencies may implement standing orders for influenza and pneumococcal vaccination of Medicare- and Medicaid-eligible patients. Other settings (e.g., outpatient facilities, managed care organizations, assisted living facilities, correctional facilities, pharmacies, and adult workplaces) are encouraged to introduce standing orders programs as well (20). In addition, physician reminders (e.g., flagging charts) and patient reminders are recommended strategies for increasing rates of influenza vaccination; a simple recall-list sketch follows. Persons for whom influenza vaccine is recommended can be identified and vaccinated in the settings described in the following sections.
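The reminder/recall and rate-assessment strategies above map naturally onto a patient registry query. A hedged sketch, assuming a hypothetical registry with invented field names; the recommendation rule is deliberately simplified to age >50 years or a high-risk condition:

```python
from datetime import date

# Hypothetical registry records; field names are invented for illustration.
patients = [
    {"id": 1, "age": 72, "high_risk": False, "last_flu_shot": date(2004, 10, 12)},
    {"id": 2, "age": 45, "high_risk": True,  "last_flu_shot": None},
    {"id": 3, "age": 30, "high_risk": False, "last_flu_shot": None},
]

SEASON_START = date(2005, 9, 1)

def needs_reminder(p):
    """Flag recommended patients with no vaccination recorded this season."""
    recommended = p["age"] >= 50 or p["high_risk"]
    vaccinated = p["last_flu_shot"] is not None and p["last_flu_shot"] >= SEASON_START
    return recommended and not vaccinated

recall_list = [p["id"] for p in patients if needs_reminder(p)]
recommended = [p for p in patients if p["age"] >= 50 or p["high_risk"]]
coverage = 1 - len(recall_list) / len(recommended)
print(recall_list)                                  # [1, 2]
print(f"practice-level coverage: {coverage:.0%}")   # feedback to staff
```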
# Outpatient Facilities Providing Ongoing Care Staff in facilities providing ongoing medical care (e.g., physicians' offices, public health clinics, employee health clinics, hemodialysis centers, hospital specialty-care clinics, and outpatient rehabilitation programs) should identify and label the medical records of patients who should receive vaccination. Vaccine should be offered during visits beginning in September and throughout the influenza season. The offer of vaccination and its receipt or refusal should be documented in the medical record. Patients for whom vaccination is recommended and who do not have regularly scheduled visits during the fall should be reminded by mail, telephone, or other means of the need for vaccination. # Outpatient Facilities Providing Episodic or Acute Care Beginning each September, acute health-care facilities (e.g., emergency departments and walk-in clinics) should offer vaccinations to persons for whom vaccination is recommended or provide written information regarding why, where, and how to obtain the vaccine. This written information should be available in languages appropriate for the populations served by the facility. # Nursing Homes and Other Residential Long-Term-Care Facilities During October and November each year, vaccination should be routinely provided to all residents of chronic-care facilities with the concurrence of attending physicians. Consent for vaccination should be obtained from the resident or a family member at the time of admission to the facility or anytime afterwards. All residents should be vaccinated at one time, preceding the influenza season. Residents admitted through March after completion of the vaccination program at the facility should be vaccinated at the time of admission. # Acute-Care Hospitals Persons of all ages (including children) with high-risk conditions and persons aged >50 years who are hospitalized at any time during September-March should be offered and strongly encouraged to receive influenza vaccine before they are discharged. In one study, 39%-46% of adult patients hospitalized during the winter with influenza-related diagnoses had been hospitalized during the preceding autumn (240). Thus, the hospital serves as a setting in which persons at increased risk for subsequent hospitalization can be identified and vaccinated. However, vaccination of persons at high risk during or after their hospitalizations is often not done. In a study of hospitalized Medicare patients, only 31.6% were vaccinated before admission, 1.9% during admission, and 10.6% after admission (241). Using standing orders in hospitals increases vaccination rates among hospitalized persons (242). # Visiting Nurses and Others Providing Home Care to Persons at High Risk Beginning in September, nursing-care plans should identify patients for whom vaccination is recommended, and vaccine should be administered in the home, if necessary. Caregivers and other persons in the household (including children) should be referred for vaccination. # Other Facilities Providing Services to Persons Aged >50 Years Beginning in October, such facilities as assisted living housing, retirement communities, and recreation centers should offer unvaccinated residents and attendees vaccination on-site before the influenza season. Staff education should emphasize the need for influenza vaccine. # Health-Care Workers Beginning in October each year, health-care facilities should offer influenza vaccinations to all workers, including night and weekend staff.
Particular emphasis should be placed on providing vaccinations to persons who care for members of groups at high risk. Efforts should be made to educate health-care workers regarding the benefits of vaccination and the potential health consequences of influenza illness for themselves, their family members, and their patients. All health-care workers should be provided convenient access to influenza vaccine at the work site, free of charge, as part of employee health programs (127,137). # Influenza Vaccine Supply Influenza vaccine distribution delays or vaccine supply shortages have occurred in the United States in three of the last five influenza seasons. Influenza vaccine delivery delays or vaccine shortages remain possible, in part because of the inherent time constraints of manufacturing the vaccine, given the annual updating of the influenza vaccine strains. Steps being taken to accommodate possible future delays or vaccine shortages include identification and implementation of ways to expand the influenza vaccine supply and improvement of targeted delivery of vaccine to groups at high risk when delays or shortages are expected. # Influenza Vaccine Use During Shortages of Inactivated Vaccine ACIP will publish additional guidance regarding the prioritized (tiered) use of inactivated influenza vaccine to be implemented only during periods when there is a shortage of influenza vaccine. Otherwise, when vaccine is in adequate supply, every effort should be made to promote and use influenza vaccine for all regularly targeted groups and for other persons who wish to reduce their risk for influenza illness. The prioritized (tiered) use of influenza vaccine during inactivated influenza vaccine shortages applies only to use of inactivated vaccine and not to LAIV. When feasible, during shortages of inactivated influenza vaccine, LAIV should be used preferentially for all healthy persons aged 5-49 years (including health-care workers) to increase the availability of inactivated vaccine for groups at high risk. # Future Directions for Influenza Vaccine Recommendations ACIP plans to review new vaccination strategies for improving prevention and control of influenza, including the possibility of expanding recommendations for use of influenza vaccines. In addition, strategies for regularly monitoring vaccine effectiveness will be reviewed. # Recommendations for Using Antiviral Agents for Influenza Antiviral drugs for influenza are an adjunct to influenza vaccine for controlling and preventing influenza. However, these agents are not a substitute for vaccination. Four licensed influenza antiviral agents are available in the United States: amantadine, rimantadine, zanamivir, and oseltamivir. Amantadine and rimantadine are chemically related antiviral drugs known as adamantanes with activity against influenza A viruses but not influenza B viruses. Amantadine was approved in 1966 for chemoprophylaxis of influenza A (H2N2) infection and was later approved in 1976 for treatment and chemoprophylaxis of influenza type A virus infections among adults and children aged >1 year. Rimantadine was approved in 1993 for treatment and chemoprophylaxis of influenza A infection among adults and prophylaxis among children. Although rimantadine is approved only for chemoprophylaxis of influenza A infection among children, rimantadine treatment for influenza A among children can be beneficial (243).
Zanamivir and oseltamivir are chemically related antiviral drugs known as neuraminidase inhibitors that have activity against both influenza A and B viruses. Both zanamivir and oseltamivir were approved in 1999 for treating uncomplicated influenza infections. Zanamivir is approved for treating persons aged >7 years, and oseltamivir is approved for treatment of persons aged >1 year. In 2000, oseltamivir was approved for chemoprophylaxis of influenza among persons aged >13 years. The four drugs differ in pharmacokinetics, side effects, routes of administration, approved age groups, dosages, and costs. An overview of the indications, use, administration, and known primary side effects of these medications is presented in the following sections. Information contained in this report might not represent FDA approval or approved labeling for the antiviral agents described. Package inserts should be consulted for additional information. # Role of Laboratory Diagnosis Appropriate treatment of patients with respiratory illness depends on accurate and timely diagnosis. Early diagnosis of influenza can reduce the inappropriate use of antibiotics and provide the option of using antiviral therapy. However, because certain bacterial infections can produce symptoms similar to influenza, bacterial infections should be considered and appropriately treated, if suspected. In addition, bacterial infections can occur as a complication of influenza. Influenza surveillance information and diagnostic testing can aid clinical judgment and help guide treatment decisions. The accuracy of clinical diagnosis of influenza on the basis of symptoms alone is limited because symptoms from illness caused by other pathogens can overlap considerably with influenza (30,34,35). Influenza surveillance by state and local health departments and CDC can provide information regarding the presence of influenza viruses in the community. Surveillance can also identify the predominant circulating types, influenza A subtypes, and strains of influenza. Diagnostic tests available for influenza include viral culture, serology, rapid antigen testing, polymerase chain reaction (PCR), and immunofluorescence assays (25). Sensitivity and specificity of any test for influenza might vary by the laboratory that performs the test, the type of test used, and the type of specimen tested. Among respiratory specimens for viral isolation or rapid detection, nasopharyngeal specimens are typically more effective than throat swab specimens (244). As with any diagnostic test, results should be evaluated in the context of other clinical and epidemiologic information available to health-care providers. Commercial rapid diagnostic tests are available that can detect influenza viruses within 30 minutes (25,245). Some tests are approved for use in any outpatient setting, whereas others must be used in a moderately complex clinical laboratory. These rapid tests differ in the types of influenza viruses they can detect and whether they can distinguish between influenza types. Different tests can detect 1) only influenza A viruses; 2) both influenza A and B viruses, but not distinguish between the two types; or 3) both influenza A and B and distinguish between the two. None of the tests provide any information about influenza A subtypes. The types of specimens acceptable for use (i.e., throat, nasopharyngeal, or nasal aspirates, swabs, or washes) also vary by test.
The specificity and, in particular, the sensitivity of rapid tests are lower than for viral culture and vary by test (246,247). Because of the lower sensitivity of the rapid tests and the resulting possibility of false-negative results, physicians should consider confirming negative tests with viral culture or other means, especially during periods of peak community influenza activity. In contrast, false-positive rapid test results are less likely but can occur during periods of low influenza activity. Therefore, when interpreting results of a rapid influenza test, physicians should consider the positive and negative predictive values of the test in the context of the level of influenza activity in their community. Package inserts and the laboratory performing the test should be consulted for more details regarding use of rapid diagnostic tests. Additional information concerning diagnostic testing is available from CDC. Despite the availability of rapid diagnostic tests, collecting clinical specimens for viral culture is critical, because only culture isolates can provide specific information regarding circulating strains and subtypes of influenza viruses. This information is needed to compare current circulating influenza strains with vaccine strains, to guide decisions regarding influenza treatment and chemoprophylaxis, and to formulate vaccine for the coming year. Virus isolates also are needed to monitor the emergence of antiviral resistance and the emergence of novel influenza A subtypes that might pose a pandemic threat.
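The dependence of predictive values on community influenza activity follows directly from Bayes' rule. The short Python sketch below makes this concrete; the sensitivity, specificity, and prevalence figures are illustrative assumptions, not values from any particular test's package insert.

```python
def predictive_values(sensitivity: float, specificity: float, prevalence: float):
    """Positive and negative predictive values via Bayes' rule."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    false_neg = (1 - sensitivity) * prevalence
    true_neg = specificity * (1 - prevalence)
    ppv = true_pos / (true_pos + false_pos)
    npv = true_neg / (true_neg + false_neg)
    return ppv, npv

# Illustrative performance figures (assumed, not from any package insert):
sens, spec = 0.70, 0.95

# Peak influenza activity: 40% of tested patients truly have influenza.
# PPV is high (~0.90) but NPV falls (~0.83), so negatives merit culture confirmation.
print(predictive_values(sens, spec, prevalence=0.40))

# Low influenza activity: 2% of tested patients truly have influenza.
# NPV is high (~0.99) but PPV falls (~0.22), so false positives dominate.
print(predictive_values(sens, spec, prevalence=0.02))
```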
# Indications for Use Treatment When administered within 2 days of illness onset to otherwise healthy adults, amantadine and rimantadine can reduce the duration of uncomplicated influenza A illness, and zanamivir and oseltamivir can reduce the duration of uncomplicated influenza A and B illness by approximately 1 day, compared with placebo (75,248-265). More clinical data are available concerning the efficacy of zanamivir and oseltamivir for treatment of influenza A infection than for treatment of influenza B infection (253,266-281). However, in vitro data and studies of treatment among mice and ferrets (282-289), in addition to clinical studies, have documented that zanamivir and oseltamivir have activity against influenza B viruses (254,258-260,290,291). Data are limited regarding the effectiveness of the four antiviral agents in preventing serious influenza-related complications (e.g., bacterial or viral pneumonia or exacerbation of chronic diseases). Evidence for the effectiveness of these four antiviral drugs is principally based on studies of patients with uncomplicated influenza (292). Data are limited and inconclusive concerning the effectiveness of amantadine, rimantadine, zanamivir, and oseltamivir for treatment of influenza among persons at high risk for serious complications of influenza (28,248,250,251,253,254,261,266-270). One study assessing oseltamivir treatment, primarily among adults, reported a reduction in complications necessitating antibiotic therapy compared with placebo (271). Fewer studies of the efficacy of influenza antivirals have been conducted among pediatric populations (248,251,257,258,267,272,273). One study of oseltamivir treatment documented a decreased incidence of otitis media among children (258). Inadequate data exist regarding the safety and efficacy of any of the influenza antiviral drugs for use among children aged <1 year (247). To reduce the emergence of antiviral drug-resistant viruses, amantadine or rimantadine therapy for persons with influenza A illness should be discontinued as soon as clinically warranted, typically after 3-5 days of treatment or within 24-48 hours after the disappearance of signs and symptoms. The recommended duration of treatment with either zanamivir or oseltamivir is 5 days. # Chemoprophylaxis Chemoprophylactic drugs are not a substitute for vaccination, although they are critical adjuncts in preventing and controlling influenza. Both amantadine and rimantadine are indicated for chemoprophylaxis of influenza A infection, but not influenza B. Both drugs are approximately 60%-90% effective in preventing illness from influenza A infection (75,248,267). When used as prophylaxis, these antiviral agents can prevent illness while permitting subclinical infection and development of protective antibody against circulating influenza viruses. Therefore, certain persons who take these drugs will develop protective immune responses to circulating influenza viruses. Amantadine and rimantadine do not interfere with the antibody response to the vaccine (248). Both drugs have been studied extensively among nursing home populations as a component of influenza outbreak-control programs, which can limit the spread of influenza within chronic-care institutions (248,266,274-276). Among the neuraminidase inhibitor antivirals zanamivir and oseltamivir, only oseltamivir has been approved for prophylaxis, but community studies of healthy adults indicate that both drugs are similarly effective in preventing febrile, laboratory-confirmed influenza illness (efficacy: zanamivir, 84%; oseltamivir, 82%) (253,277,293). Both antiviral agents have also been reported to prevent influenza illness among persons administered chemoprophylaxis after a household member had influenza diagnosed (278,290,293). Experience with prophylactic use of these agents in institutional settings or among patients with chronic medical conditions is limited in comparison with the adamantanes (260,269,270,279-281). One 6-week study of oseltamivir prophylaxis among nursing home residents reported a 92% reduction in influenza illness (260,294). Use of zanamivir has not been reported to impair the immunologic response to influenza vaccine (259,295). Data are not available regarding the efficacy of any of the four antiviral agents in preventing influenza among severely immunocompromised persons. When determining the timing and duration for administering influenza antiviral medications for prophylaxis, factors related to cost, compliance, and potential side effects should be considered. To be maximally effective as prophylaxis, the drug must be taken each day for the duration of influenza activity in the community. However, one study of amantadine or rimantadine prophylaxis reported that, to be most cost-effective, the drugs should be taken only during the period of peak influenza activity in a community (296). Persons at High Risk Who Are Vaccinated After Influenza Activity Has Begun. Persons at high risk for complications of influenza still can be vaccinated after an outbreak of influenza has begun in a community. However, development of antibodies in adults after vaccination takes approximately 2 weeks (236,237).
When influenza vaccine is administered while influenza viruses are circulating, chemoprophylaxis should be considered for persons at high risk during the time from vaccination until immunity has developed. Children aged <9 years who receive influenza vaccine for the first time can require 6 weeks of prophylaxis (i.e., prophylaxis for 4 weeks after the first dose of vaccine and an additional 2 weeks of prophylaxis after the second dose). Persons Who Provide Care to Those at High Risk. To reduce the spread of virus to persons at high risk during community or institutional outbreaks, chemoprophylaxis during peak influenza activity can be considered for unvaccinated persons who have frequent contact with persons at high risk. Persons with frequent contact include employees of hospitals, clinics, and chronic-care facilities, household members, visiting nurses, and volunteer workers. If an outbreak is caused by a variant strain of influenza that might not be controlled by the vaccine, chemoprophylaxis should be considered for all such persons, regardless of their vaccination status. Persons Who Have Immune Deficiencies. Chemoprophylaxis can be considered for persons at high risk who are expected to have an inadequate antibody response to influenza vaccine. This category includes persons infected with HIV, chiefly those with advanced HIV disease. No published data are available concerning possible efficacy of chemoprophylaxis among persons with HIV infection or interactions with other drugs used to manage HIV infection. Such patients should be monitored closely if chemoprophylaxis is administered. Other Persons. Chemoprophylaxis throughout the influenza season or during peak influenza activity might be appropriate for persons at high risk who should not be vaccinated. Chemoprophylaxis can also be offered to persons who wish to avoid influenza illness. Health-care providers and patients should make this decision on an individual basis.
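The post-vaccination prophylaxis windows described above reduce to simple date arithmetic. The following Python sketch is a minimal illustration, assuming the second pediatric dose is given 4 weeks after the first; it is not a clinical tool.

```python
from datetime import date, timedelta

def prophylaxis_end(first_dose: date, first_time_child_under_9: bool) -> date:
    """Approximate end of chemoprophylaxis started at vaccination.

    Adults: ~2 weeks, until postvaccination antibody develops.
    Children aged <9 years vaccinated for the first time: ~6 weeks
    (4 weeks after dose 1 plus 2 weeks after the second dose,
    assuming the second dose is given 4 weeks after the first).
    """
    weeks = 6 if first_time_child_under_9 else 2
    return first_dose + timedelta(weeks=weeks)

# An adult at high risk vaccinated on December 1 while influenza circulates:
print(prophylaxis_end(date(2005, 12, 1), first_time_child_under_9=False))  # 2005-12-15
# A previously unvaccinated 4-year-old starting the 2-dose series the same day:
print(prophylaxis_end(date(2005, 12, 1), first_time_child_under_9=True))   # 2006-01-12
```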
# Control of Influenza Outbreaks in Institutions Using antiviral drugs for treatment and prophylaxis of influenza is a key component of influenza outbreak control in institutions. In addition to antiviral medications, other outbreak-control measures include instituting droplet precautions and establishing cohorts of patients with confirmed or suspected influenza, re-offering influenza vaccinations to unvaccinated staff and patients, restricting staff movement between wards or buildings, and restricting contact between ill staff or visitors and patients (297-299) (for additional information regarding outbreak control in specific settings, see Additional Information Regarding Influenza Infection Control Among Specific Populations). The majority of published reports concerning use of antiviral agents to control influenza outbreaks in institutions are based on studies of influenza A outbreaks among nursing home populations where amantadine or rimantadine were used (248,266,274-276,296). Less information is available concerning use of neuraminidase inhibitors in influenza A or B institutional outbreaks (269,270,281,294,300). When confirmed or suspected outbreaks of influenza occur in institutions that house persons at high risk, chemoprophylaxis should be started as early as possible to reduce the spread of the virus. In these situations, having preapproved orders from physicians or plans to obtain orders for antiviral medications on short notice can substantially expedite administration of antiviral medications. When outbreaks occur in institutions, chemoprophylaxis should be administered to all residents, regardless of whether they received influenza vaccinations during the previous fall, and should continue for a minimum of 2 weeks. If surveillance indicates that new cases continue to occur, chemoprophylaxis should be continued until approximately 1 week after the end of the outbreak. The dosage for each resident should be determined individually. Chemoprophylaxis also can be offered to unvaccinated staff who provide care to persons at high risk. Prophylaxis should be considered for all employees, regardless of their vaccination status, if the outbreak is caused by a variant strain of influenza that is not well-matched by the vaccine. In addition to nursing homes, chemoprophylaxis also can be considered for controlling influenza outbreaks in other closed or semiclosed settings (e.g., dormitories or other settings in which persons live in close proximity). For example, chemoprophylaxis with rimantadine has been used successfully to control an influenza A outbreak aboard a large cruise ship (177). To limit the potential transmission of drug-resistant virus during outbreaks in institutions, whether in chronic-care or acute-care settings or other closed settings, measures should be taken to reduce contact as much as possible between persons taking antiviral drugs for treatment and other persons, including those taking chemoprophylaxis (see Antiviral Drug-Resistant Strains of Influenza). # Dosage Dosage recommendations vary by age group and medical conditions (Table 7). # Children Amantadine. The approved dosage of amantadine for children aged >10 years is 200 mg/day (100 mg twice a day); however, for children weighing <40 kg, prescribing 5 mg/kg body weight/day, regardless of age, is advisable (268). Rimantadine. Rimantadine is approved for prophylaxis among children aged >1 year and for treatment and prophylaxis among adults. Although rimantadine is approved only for prophylaxis of infection among children, certain specialists in the management of influenza consider it appropriate for treatment among children (243). The approved dosage of rimantadine for children aged >10 years is 200 mg/day (100 mg twice a day); however, for children weighing <40 kg, prescribing 5 mg/kg body weight/day, regardless of age, is recommended (301). Zanamivir. Zanamivir is approved for treatment among children aged >7 years. The recommended dosage of zanamivir for treatment of influenza is two inhalations (one 5-mg blister per inhalation for a total dose of 10 mg) twice daily (approximately 12 hours apart) (259). Oseltamivir. Oseltamivir is approved for treatment among persons aged >1 year and for chemoprophylaxis among persons aged >13 years. Recommended treatment dosages for children vary by the weight of the child: for children who weigh <15 kg, the dosage is 30 mg twice a day; for children who weigh >15-23 kg, the dosage is 45 mg twice a day; for those who weigh >23-40 kg, the dosage is 60 mg twice a day; and for children who weigh >40 kg, the dosage is 75 mg twice a day (243). The treatment dosage for persons aged >13 years is 75 mg twice daily. For persons aged >13 years, the recommended dose for prophylaxis is 75 mg once a day (260).
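As an illustration of the weight and age bands just described, the following Python sketch encodes the treatment-dose logic. It is a minimal sketch for illustration only; it omits the renal, hepatic, and age-related adjustments discussed below and is not a prescribing tool.

```python
def oseltamivir_treatment_dose_mg(age_years: float, weight_kg: float) -> int:
    """Oseltamivir treatment dose in mg, given twice a day.

    Encodes the weight bands above; approved for treatment at age >=1 year.
    Renal dose reduction (see Persons with Impaired Renal Function) is omitted.
    """
    if age_years < 1:
        raise ValueError("oseltamivir treatment is approved for persons aged >=1 year")
    if age_years >= 13 or weight_kg > 40:
        return 75
    if weight_kg > 23:
        return 60
    if weight_kg > 15:
        return 45
    return 30  # children weighing <=15 kg

def adamantane_daily_dose_mg(age_years: float, weight_kg: float) -> float:
    """Total daily amantadine or rimantadine treatment dose in mg.

    Children weighing <40 kg: 5 mg/kg body weight/day regardless of age;
    otherwise 200 mg/day (100 mg twice a day); persons aged >=65 years
    should receive no more than 100 mg/day.
    """
    if age_years >= 65:
        return 100
    if weight_kg < 40:
        return 5 * weight_kg
    return 200

print(oseltamivir_treatment_dose_mg(4, 18))  # 45 (mg twice a day)
print(adamantane_daily_dose_mg(6, 22))       # 110 (mg/day, i.e., 5 mg/kg x 22 kg)
```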
# Persons Aged >65 Years Amantadine. The daily dosage of amantadine for persons aged >65 years should not exceed 100 mg for prophylaxis or treatment, because renal function declines with increasing age. For certain older persons, the dose should be further reduced. Rimantadine. Among older persons, the incidence and severity of central nervous system (CNS) side effects are substantially lower among those taking rimantadine at a dosage of 100 mg/day than among those taking amantadine at dosages adjusted for estimated renal clearance (302). However, chronically ill older persons have had a higher incidence of CNS and gastrointestinal symptoms and serum concentrations 2-4 times higher than among healthy, younger persons when rimantadine has been administered at a dosage of 200 mg/day (248). For prophylaxis among persons aged >65 years, the recommended dosage is 100 mg/day. For treatment of older persons in the community, a reduction in dosage to 100 mg/day should be considered if they experience side effects when taking a dosage of 200 mg/day. For treatment of older nursing home residents, the dosage of rimantadine should be reduced to 100 mg/day (301). Zanamivir and Oseltamivir. No reduction in dosage is recommended on the basis of age alone. # Persons with Impaired Renal Function Amantadine. A reduction in dosage is recommended for patients with creatinine clearance <50 mL/min. Guidelines for amantadine dosage on the basis of creatinine clearance are located in the package insert. Because recommended dosages on the basis of creatinine clearance might provide only an approximation of the optimal dose for a given patient, such persons should be observed carefully for adverse reactions. If necessary, further reduction in the dose or discontinuation of the drug might be indicated because of side effects. Hemodialysis contributes minimally to amantadine clearance (303,304). Rimantadine. A reduction in dosage to 100 mg/day is recommended for persons with creatinine clearance <10 mL/min. Because of the potential for accumulation of rimantadine and its metabolites, patients with any degree of renal insufficiency, including older persons, should be monitored for adverse effects, and either the dosage should be reduced or the drug should be discontinued, if necessary. Hemodialysis contributes minimally to drug clearance (305). Zanamivir. Limited data are available regarding the safety and efficacy of zanamivir for patients with impaired renal function. Among patients with renal failure who were administered a single intravenous dose of zanamivir, decreases in renal clearance, increases in half-life, and increased systemic exposure to zanamivir were observed (259,306). However, a limited number of healthy volunteers who were administered high doses of intravenous zanamivir tolerated systemic levels of zanamivir that were substantially higher than those resulting from administration of zanamivir by oral inhalation at the recommended dose (307,308).
On the basis of these considerations, the manufacturer recommends no dose adjustment for inhaled zanamivir for a 5-day course of treatment for patients with either mild-to-moderate or severe impairment in renal function (259). Oseltamivir. Serum concentrations of oseltamivir carboxylate (GS4071), the active metabolite of oseltamivir, increase with declining renal function (260,309). For patients with creatinine clearance of 10-30 mL/min (260), a reduction of the treatment dosage of oseltamivir to 75 mg once daily and of the prophylaxis dosage to 75 mg every other day is recommended. No treatment or prophylaxis dosing recommendations are available for patients undergoing routine renal dialysis treatment. # Persons with Liver Disease Amantadine. No increase in adverse reactions to amantadine has been observed among persons with liver disease. Rare instances of reversible elevation of liver enzymes among patients receiving amantadine have been reported, although a specific relation between the drug and such changes has not been established (310). Rimantadine. A reduction in dosage to 100 mg/day is recommended for persons with severe hepatic dysfunction. Zanamivir and Oseltamivir. Neither of these medications has been studied among persons with hepatic dysfunction. # Persons with Seizure Disorders Amantadine. An increased incidence of seizures has been reported among patients with a history of seizure disorders who have received amantadine (311). Patients with seizure disorders should be observed closely for possible increased seizure activity when taking amantadine. Rimantadine. Seizures (or seizure-like activity) have been reported among persons with a history of seizures who were not receiving anticonvulsant medication while taking rimantadine (312). The extent to which rimantadine might increase the incidence of seizures among persons with seizure disorders has not been adequately evaluated. Zanamivir and Oseltamivir. Seizure events have been reported during postmarketing use of zanamivir and oseltamivir, although no epidemiologic studies have reported any increased risk for seizures with either zanamivir or oseltamivir use. # Route Amantadine, rimantadine, and oseltamivir are administered orally. Amantadine and rimantadine are available in tablet or syrup form, and oseltamivir is available in capsule or oral suspension form. Zanamivir is available as a dry powder that is self-administered via oral inhalation by using a plastic device included in the package with the medication. Patients will benefit from instruction and demonstration of correct use of this device. # Pharmacokinetics Amantadine Approximately 90% of amantadine is excreted unchanged in the urine by glomerular filtration and tubular secretion (274,313-316). Thus, renal clearance of amantadine is reduced substantially among persons with renal insufficiency, and dosages might need to be decreased (see Dosage) (Table 7). # Rimantadine Approximately 75% of rimantadine is metabolized by the liver (267). The safety and pharmacokinetics of rimantadine among persons with liver disease have been evaluated only after single-dose administration (267,317). In a study of persons with chronic liver disease (the majority with stabilized cirrhosis), no alterations in liver function were observed after a single dose. However, for persons with severe liver dysfunction, the apparent clearance of rimantadine was 50% lower than that reported for persons without liver disease (302).
Rimantadine and its metabolites are excreted by the kidneys. The safety and pharmacokinetics of rimantadine among patients with renal insufficiency have been evaluated only after single-dose administration (267,305). Further studies are needed to determine multiple-dose pharmacokinetics and the most appropriate dosages for patients with renal insufficiency. In a single-dose study of patients with anuric renal failure, the apparent clearance of rimantadine was approximately 40% lower, and the elimination half-life was approximately 1.6-fold greater, than that among healthy persons of the same age (305). Hemodialysis did not contribute to drug clearance. In studies of persons with less severe renal disease, drug clearance was also reduced, and plasma concentrations were higher than those among control patients without renal disease who were the same weight, age, and sex (301,317). # Zanamivir In studies of healthy volunteers, approximately 7%-21% of the orally inhaled zanamivir dose reached the lungs, and 70%-87% was deposited in the oropharynx (259,318). Approximately 4%-17% of the total amount of orally inhaled zanamivir is systemically absorbed. Systemically absorbed zanamivir has a half-life of 2.5-5.1 hours and is excreted unchanged in the urine. Unabsorbed drug is excreted in the feces (259,308). # Oseltamivir Approximately 80% of orally administered oseltamivir is absorbed systemically (309). Absorbed oseltamivir is metabolized to oseltamivir carboxylate, the active neuraminidase inhibitor, primarily by hepatic esterases. Oseltamivir carboxylate has a half-life of 6-10 hours and is excreted in the urine by glomerular filtration and tubular secretion via the anionic pathway (260,319). Unmetabolized oseltamivir also is excreted in the urine by glomerular filtration and tubular secretion (320). # Side Effects and Adverse Reactions When considering use of influenza antiviral medications (i.e., choice of antiviral drug, dosage, and duration of therapy), clinicians must consider the patient's age, weight, and renal function (Table 7); presence of other medical conditions; indications for use (i.e., prophylaxis or therapy); and the potential for interaction with other medications. # Amantadine and Rimantadine Both amantadine and rimantadine can cause CNS and gastrointestinal side effects when administered to young, healthy adults at equivalent dosages of 200 mg/day. However, the incidence of CNS side effects (e.g., nervousness, anxiety, insomnia, difficulty concentrating, and lightheadedness) is higher among persons taking amantadine than among those taking rimantadine (320). In a 6-week study of prophylaxis among healthy adults, approximately 6% of participants taking rimantadine at a dosage of 200 mg/day experienced one or more CNS symptoms, compared with approximately 13% of those taking the same dosage of amantadine and 4% of those taking placebo (320). A study of older persons also demonstrated fewer CNS side effects associated with rimantadine compared with amantadine (302). Gastrointestinal side effects (e.g., nausea and anorexia) occur among approximately 1%-3% of persons taking either drug, compared with 1% of persons receiving the placebo (320). Side effects associated with amantadine and rimantadine are usually mild and cease soon after discontinuing the drug. Side effects can diminish or disappear after the first week, despite continued drug ingestion.
However, serious side effects have been observed (e.g., marked behavioral changes, delirium, hallucinations, agitation, and seizures) (303,311). These more severe side effects have been associated with high plasma drug concentrations and have been observed most often among persons who have renal insufficiency, seizure disorders, or certain psychiatric disorders and among older persons who have been taking amantadine as prophylaxis at a dosage of 200 mg/day (274). Clinical observations and studies have indicated that lowering the dosage of amantadine among these persons reduces the incidence and severity of such side effects (Table 7). In acute overdosage of amantadine, CNS, renal, respiratory, and cardiac toxicity, including arrhythmias, have been reported (303). Because rimantadine has been marketed for a shorter period than amantadine, its safety among certain patient populations (e.g., chronically ill and older persons) has been evaluated less frequently. Because amantadine has anticholinergic effects and might cause mydriasis, it should not be used among patients with untreated angle-closure glaucoma (303). # Zanamivir In a study of zanamivir treatment of ILI among persons with asthma or chronic obstructive pulmonary disease in which study medication was administered after use of a β2-agonist, 13% of patients receiving zanamivir and 14% of patients receiving placebo (inhaled powdered lactose vehicle) experienced a >20% decline in forced expiratory volume in 1 second (FEV1) after treatment (259,261). However, in a phase-I study of persons with mild or moderate asthma who did not have ILI, one of 13 patients experienced bronchospasm after administration of zanamivir (259). In addition, during postmarketing surveillance, cases of respiratory function deterioration after inhalation of zanamivir have been reported. Certain patients had underlying airway disease (e.g., asthma or chronic obstructive pulmonary disease). Because of the risk for serious adverse events and because efficacy has not been demonstrated among this population, zanamivir is not recommended for treatment of patients with underlying airway disease (259). If physicians decide to prescribe zanamivir to patients with underlying chronic respiratory disease after carefully considering potential risks and benefits, the drug should be used with caution under conditions of appropriate monitoring and supportive care, including the availability of short-acting bronchodilators (292). Patients with asthma or chronic obstructive pulmonary disease who use zanamivir are advised to 1) have a fast-acting inhaled bronchodilator available when inhaling zanamivir and 2) stop using zanamivir and contact their physician if they experience difficulty breathing (259). No definitive evidence is available regarding the safety or efficacy of zanamivir for persons with underlying respiratory or cardiac disease or for persons with complications of acute influenza (292). Allergic reactions, including oropharyngeal or facial edema, have also been reported during postmarketing surveillance (259,269). In clinical treatment studies of persons with uncomplicated influenza, the frequencies of adverse events were similar for persons receiving inhaled zanamivir and for those receiving placebo (i.e., inhaled lactose vehicle alone) (249-254,269). The most common adverse events reported by both groups were diarrhea; nausea; sinusitis; nasal signs and symptoms; bronchitis; cough; headache; dizziness; and ear, nose, and throat infections.
Each of these symptoms was reported by <5% of persons in the clinical treatment studies combined (259). # Oseltamivir Nausea and vomiting were reported more frequently among adults receiving oseltamivir for treatment (nausea without vomiting, approximately 10%; vomiting, approximately 9%) than among persons receiving placebo (nausea without vomiting, approximately 6%; vomiting, approximately 3%) (255,256,260,321). Among children treated with oseltamivir, 14.3% had vomiting, compared with 8.5% of placebo recipients. Overall, 1% of children discontinued the drug because of this side effect (258), whereas a limited number of adults enrolled in clinical treatment trials of oseltamivir discontinued treatment because of these symptoms (260). Similar types and rates of adverse events were reported in studies of oseltamivir prophylaxis (260). Nausea and vomiting might be less severe if oseltamivir is taken with food (260,321). # Use During Pregnancy No clinical studies have been conducted regarding the safety or efficacy of amantadine, rimantadine, zanamivir, or oseltamivir for pregnant women; only two cases of amantadine use for severe influenza illness during the third trimester have been reported (144,145). However, both amantadine and rimantadine have been demonstrated in animal studies to be teratogenic and embryotoxic when administered at substantially high doses (301,303). Because of the unknown effects of influenza antiviral drugs on pregnant women and their fetuses, these four drugs should be used during pregnancy only if the potential benefit justifies the potential risk to the embryo or fetus (see manufacturers' package inserts) (259,260,301,303). # Drug Interactions Careful observation is advised when amantadine is administered concurrently with drugs that affect the CNS, including CNS stimulants. Concomitant administration of antihistamines or anticholinergic drugs can increase the incidence of adverse CNS reactions (248). No clinically substantial interactions between rimantadine and other drugs have been identified. Clinical data are limited regarding drug interactions with zanamivir. However, no known drug interactions have been reported, and no clinically critical drug interactions have been predicted on the basis of in vitro data and data from studies using rats (259,322). Limited clinical data are available regarding drug interactions with oseltamivir. Because oseltamivir and oseltamivir carboxylate are excreted in the urine by glomerular filtration and tubular secretion via the anionic pathway, a potential exists for interaction with other agents excreted by this pathway. For example, coadministration of oseltamivir and probenecid resulted in reduced clearance of oseltamivir carboxylate by approximately 50% and a corresponding approximate twofold increase in the plasma levels of oseltamivir carboxylate (260,319). No published data are available concerning the safety or efficacy of using combinations of any of these four influenza antiviral drugs. For more detailed information concerning potential drug interactions for any of these influenza antiviral drugs, package inserts should be consulted. # Antiviral Drug-Resistant Strains of Influenza Amantadine-resistant viruses are cross-resistant to rimantadine and vice versa (323). Drug-resistant viruses can appear in approximately one third of patients when either amantadine or rimantadine is used for therapy (273,323,324).
During the course of amantadine or rimantadine therapy, resistant influenza strains can replace susceptible strains within 2-3 days of starting therapy (325,326). Resistant viruses have been isolated from persons who live at home or in an institution where other residents are taking or have recently taken amantadine or rimantadine as therapy (327,328); however, the frequency with which resistant viruses are transmitted and their effect on efforts to control influenza are unknown. Amantadine- and rimantadine-resistant viruses are not more virulent or transmissible than susceptible viruses (329). The screening of epidemic strains of influenza A has rarely detected amantadine- and rimantadine-resistant viruses (325,330,331). Persons who have influenza A infection and who are treated with either amantadine or rimantadine can shed susceptible viruses early in the course of treatment and later shed drug-resistant viruses, including after 5-7 days of therapy (273). Such persons can benefit from therapy even when resistant viruses emerge. Resistance to zanamivir and oseltamivir can be induced in influenza A and B viruses in vitro (332-339), but induction of resistance usually requires multiple passages in cell culture. By contrast, resistance to amantadine and rimantadine in vitro can be induced with fewer passages in cell culture (340,341). Development of viral resistance to zanamivir and oseltamivir during treatment has been identified but does not appear to be frequent (260,342-345). In one pediatric study, 5.5% of patients treated with oseltamivir had posttreatment isolates that were resistant to neuraminidase inhibitors. One limited study of Japanese children treated with oseltamivir reported a high frequency of resistant viruses (346). However, no transmission of neuraminidase inhibitor-resistant viruses in humans has been documented to date. No isolates with reduced susceptibility to zanamivir have been reported from clinical trials, although the number of posttreatment isolates tested is limited (347), and the risk for emergence of zanamivir-resistant isolates cannot be quantified (259). Only one clinical isolate with reduced susceptibility to zanamivir, obtained from an immunocompromised child on prolonged therapy, has been reported (343). Available diagnostic tests are not optimal for detecting clinical resistance to the neuraminidase inhibitor antiviral drugs, and additional tests are being developed (347,348). Postmarketing surveillance for neuraminidase inhibitor-resistant influenza viruses is being conducted (349). # Sources of Information Regarding Influenza and Its Surveillance Information regarding influenza surveillance, prevention, detection, and control is available at http://www.cdc.gov/flu/weekly/fluactivity.htm. Surveillance information is available through the CDC Voice Information System (influenza update) at 888-232-3228 or the CDC Fax Information Service at 888-232-3299. During October-May, surveillance information is updated at least every other week. In addition, periodic updates regarding influenza are published in the MMWR Weekly Report (http://www.cdc.gov/mmwr). Additional information regarding influenza vaccine can be obtained by calling 800-CDC-INFO (800-232-4636). State and local health departments should be consulted concerning availability of influenza vaccine, access to vaccination programs, information related to state or local influenza activity, and for reporting influenza outbreaks and receiving advice concerning outbreak control.
# Additional Information Regarding Influenza Infection Control Among Specific Populations Each year, ACIP provides general, annually updated information regarding control and prevention of influenza. Other reports related to controlling and preventing influenza among specific populations (e.g., immunocompromised persons, health-care workers, hospitals, and travelers) are available in other CDC publications.
# Introduction Epidemics of influenza typically occur during the winter months in temperate regions and have been responsible for an average of approximately 36,000 deaths/year in the United States during 1990-1999 (1). Influenza viruses also can cause pandemics, during which rates of illness and death from influenza-related complications can increase worldwide. Influenza viruses cause disease among all age groups (2-4). Rates of infection are highest among children, but rates of serious illness and death are highest among persons aged >65 years, children aged <2 years, and persons of any age who have medical conditions that place them at increased risk for complications from influenza (2,5-7). Influenza vaccination is the primary method for preventing influenza and its severe complications. In this report from the Advisory Committee on Immunization Practices (ACIP), the primary target groups recommended for annual vaccination are 1) persons at increased risk for influenza-related complications (i.e., those aged >65 years, children aged 6-23 months, pregnant women, and persons of any age with certain chronic medical conditions); 2) persons aged 50-64 years, because this group has an elevated prevalence of certain chronic medical conditions; and 3) persons who live with or care for persons at high risk (e.g., health-care workers and household contacts who have frequent contact with persons at high risk and who can transmit influenza to those persons at high risk). Vaccination is associated with reductions in influenza-related respiratory illness and physician visits among all age groups, hospitalization and death among persons at high risk, otitis media among children, and work absenteeism among adults (8-18). Although influenza vaccination levels increased substantially during the 1990s, further improvements in vaccine coverage levels are needed, chiefly among persons aged <65 years who are at increased risk for influenza-related complications among all racial and ethnic groups, among blacks and Hispanics aged >65 years, among children aged 6-23 months, and among health-care workers. ACIP recommends using strategies to improve vaccination levels, including using reminder/recall systems and standing orders programs (19-21). Although influenza vaccination remains the cornerstone for the control and treatment of influenza, information on antiviral medications is also presented because these agents are an adjunct to vaccine. # Primary Changes and Updates in the Recommendations The 2005 recommendations include five principal changes or updates: • ACIP recommends that persons with any condition (e.g., cognitive dysfunction, spinal cord injuries, seizure disorders, or other neuromuscular disorders) that can compromise respiratory function or the handling of respiratory secretions or that can increase the risk for aspiration be vaccinated against influenza (see Target Groups for Vaccination). • ACIP emphasizes that all health-care workers should be vaccinated against influenza annually and that facilities that employ health-care workers be strongly encouraged to provide vaccine to workers by using approaches that maximize immunization rates.
• Use of both available vaccines (inactivated and LAIV) is encouraged for eligible persons every influenza season, especially persons in recommended target groups. During periods when inactivated vaccine is in short supply, use of LAIV is especially encouraged when feasible for eligible persons (including health-care workers) because use of LAIV by these persons might considerably increase the availability of inactivated vaccine for persons in groups at high risk. • CDC and other agencies will assess the vaccine supply throughout the manufacturing period and will make recommendations preceding the 2005-06 influenza season regarding the need for tiered timing of vaccination of different risk groups. In addition, CDC will publish ACIP recommendations regarding inactivated vaccine subprioritization (tiering) at a later date in MMWR. # Influenza and Its Burden # Biology of Influenza Influenza A and B are the two types of influenza viruses that cause epidemic human disease (22). Influenza A viruses are further categorized into subtypes on the basis of two surface antigens: hemagglutinin and neuraminidase. Influenza B viruses are not categorized into subtypes. Since 1977, influenza A (H1N1) viruses, influenza A (H3N2) viruses, and influenza B viruses have been in global circulation. In 2001, influenza A (H1N2) viruses that probably emerged after genetic reassortment between human A (H3N2) and A (H1N1) viruses began circulating widely. Both influenza A and B viruses are further separated into groups on the basis of antigenic characteristics. New influenza virus variants result from frequent antigenic change (i.e., antigenic drift) resulting from point mutations that occur during viral replication. Influenza B viruses undergo antigenic drift less rapidly than influenza A viruses. Immunity to the surface antigens, particularly the hemagglutinin, reduces the likelihood of infection and the severity of disease if infection occurs (23). Antibody against one influenza virus type or subtype confers limited or no protection against another type or subtype of influenza. Furthermore, antibody to one antigenic variant of influenza virus might not completely protect against a new antigenic variant of the same type or subtype (24). Frequent development of antigenic variants through antigenic drift is the virologic basis for seasonal epidemics and the reason for the usual incorporation of one or more new strains in each year's influenza vaccine. # Clinical Signs and Symptoms of Influenza Influenza viruses are spread from person to person primarily through the coughing and sneezing of infected persons (22). The typical incubation period for influenza is 1-4 days, with an average of 2 days (25). Adults can be infectious from the day before symptoms begin through approximately 5 days after illness onset. Children can be infectious for >10 days, and young children can shed virus for several days before their illness onset. Severely immunocompromised persons can shed virus for weeks or months (26-29). Uncomplicated influenza illness is characterized by the abrupt onset of constitutional and respiratory signs and symptoms (e.g., fever, myalgia, headache, malaise, nonproductive cough, sore throat, and rhinitis) (30). Among children, otitis media, nausea, and vomiting are also commonly reported with influenza illness (31-33). Respiratory illness caused by influenza is difficult to distinguish from illness caused by other respiratory pathogens on the basis of symptoms alone (see Role of Laboratory Diagnosis).
Reported sensitivities and specificities of clinical definitions for influenza-like illness (ILI) in studies primarily among adults that include fever and cough have ranged from 63% to 78% and 55% to 71%, respectively, compared with viral culture (34,35). Sensitivity and predictive value of clinical definitions can vary, depending on the degree of co-circulation of other respiratory pathogens and the level of influenza activity (36). A study among older nonhospitalized patients determined that symptoms of fever, cough, and acute onset had a positive predictive value of 30% for influenza (37), whereas a study of hospitalized older patients with chronic cardiopulmonary disease determined that a combination of fever, cough, and illness of <7 days was 78% sensitive and 73% specific for influenza (38). However, a study among vaccinated older persons with chronic lung disease reported that cough was not predictive of influenza infection, although having a fever or feverishness was 68% sensitive and 54% specific for influenza infection (39). Influenza illness typically resolves after 3-7 days for the majority of persons, although cough and malaise can persist for >2 weeks. Among certain persons, influenza can exacerbate underlying medical conditions (e.g., pulmonary or cardiac disease), lead to secondary bacterial pneumonia or primary influenza viral pneumonia, or occur as part of a coinfection with other viral or bacterial pathogens (40). Young children with influenza infection can have initial symptoms mimicking bacterial sepsis with high fevers (41,42), and <20% of children hospitalized with influenza can have febrile seizures (32,43). Influenza infection has also been associated with encephalopathy, transverse myelitis, Reye syndrome, myositis, myocarditis, and pericarditis (32,40,44,45). # Hospitalizations and Deaths from Influenza The risks for complications, hospitalizations, and deaths from influenza are higher among persons aged >65 years, young children, and persons of any age with certain underlying health conditions (see Persons at Increased Risk for Complications) than among healthy older children and younger adults (1,6,8,46-52). Estimated rates of influenza-associated hospitalizations have varied substantially by age group in studies conducted during different influenza epidemics (Table 1). Among children aged 0-4 years, hospitalization rates have ranged from approximately 500/100,000 children for those with high-risk medical conditions to 100/100,000 children for those without high-risk medical conditions (53-56). Within the 0-4 year age group, hospitalization rates are highest among children aged 0-1 years and are comparable to rates reported among persons aged >65 years (55,56) (Table 1). During influenza epidemics from 1979-80 through 2000-01, the estimated overall number of influenza-associated hospitalizations in the United States ranged from approximately 54,000 to 430,000/epidemic. An average of approximately 226,000 influenza-related excess hospitalizations occurred per year, with 63% of all hospitalizations occurring among persons aged >65 years (57). Since the 1968 influenza A (H3N2) virus pandemic, the greatest numbers of influenza-associated hospitalizations have occurred during epidemics caused by type A (H3N2) viruses (58). Influenza-related deaths can result from pneumonia and from exacerbations of cardiopulmonary conditions and other chronic diseases.
Deaths of older adults account for >90% of deaths attributed to pneumonia and influenza (1,52). In one study of influenza epidemics, approximately 19,000 influenza-associated pulmonary and circulatory deaths per influenza season occurred during 1976-1990, compared with approximately 36,000 deaths during 1990-1999 (1). Estimated rates of influenza-associated pulmonary and circulatory deaths/100,000 persons were 0.4-0.6 among persons aged 0-49 years, 7.5 among persons aged 50-64 years, and 98.3 among persons aged >65 years. In the United States, the number of influenza-associated deaths might be increasing in part because the number of older persons is increasing (59). In addition, influenza seasons in which influenza A (H3N2) viruses predominate are associated with higher mortality (60); influenza A (H3N2) viruses predominated in 90% of influenza seasons during 1990-1999, compared with 57% of seasons during 1976-1990 (1). Deaths from influenza are uncommon among children both with and without high-risk conditions, but do occur (61,62). A study that modeled influenza-related deaths estimated that an average of 92 deaths (0.4 deaths per 100,000) occurred among children aged <5 years annually during the 1990s, compared with 32,651 deaths (98.3 per 100,000) among adults aged >65 years (1). Reports of 153 laboratory-confirmed influenza-related pediatric deaths from 40 states during the 2003-04 influenza season indicated that 61 (40%) were aged <2 years and, of 92 children aged 2-17 years, 64 (70%) did not have an underlying medical condition traditionally considered to place a person at risk for influenza-related complications (CDC, National Center for Infectious Diseases, unpublished data, 2005). Further information is needed regarding the risk for severe influenza complications and optimal strategies for minimizing severe disease and death among children.
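The death counts and rates quoted above are linked by the size of each age group's population. The following Python sketch makes the arithmetic explicit; the implied population denominators are back-calculated from the study's own figures and are not reported values.

```python
def rate_per_100k(deaths: float, population: float) -> float:
    """Convert a death count to a rate per 100,000 population."""
    return deaths / population * 100_000

def implied_population(deaths: float, rate: float) -> float:
    """Back-calculate the population denominator from a count and a rate per 100,000."""
    return deaths / rate * 100_000

# 92 deaths at 0.4/100,000 among children aged <5 years:
print(f"{implied_population(92, 0.4):,.0f}")       # ~23,000,000 children
# 32,651 deaths at 98.3/100,000 among adults aged >65 years:
print(f"{implied_population(32_651, 98.3):,.0f}")  # ~33,216,000 adults
# Round trip: the count and implied denominator reproduce the quoted rate.
print(round(rate_per_100k(92, 23_000_000), 1))     # 0.4
```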
These viruses will be used because of their growth properties and because they are representative of influenza viruses likely to circulate in the United States during the 2005-06 influenza season. Because circulating influenza A (H1N2) viruses are a reassortant of influenza A (H1N1) and (H3N2) viruses, antibody directed against influenza A (H1N1) and influenza (H3N2) vaccine strains provides protection against circulating influenza A (H1N2) viruses. Influenza viruses for both the inactivated and live attenuated influenza vaccines are initially grown in embryonated hens eggs. Thus, both vaccines might contain limited amounts of residual egg protein. For the inactivated vaccine, the vaccine viruses are made noninfectious (i.e., inactivated or killed) (63). Subvirion and purified surface antigen preparations of the inactivated vaccine are available. Manufacturing processes differ by manufacturer. Manufacturers might use different compounds to inactivate influenza viruses and add antibiotics to prevent bacterial contamination. Package inserts should be consulted for additional information. # Thimerosal Thimerosal, a mercury-containing compound, has been used as a preservative in vaccines (64) since the 1930s and is used in multi-dose vials of inactivated influenza vaccine to reduce the likelihood of bacterial contamination. Although no scientific evidence indicates that thimerosal in vaccines leads to serious adverse events in vaccine recipients, in 1999, the U.S. Public Health Service and other organizations recommended that efforts be made to eliminate or reduce the thimerosal content in vaccines to decrease total mercury exposure, chiefly among infants (64)(65)(66). Since mid-2001, vaccines routinely recommended for infants in the United States have been manufactured either without or with only trace amounts of thimerosal to provide a substantial reduction in the total mercury exposure from vaccines for children (67). Vaccines containing trace amounts of thimerosal have <1 mcg mercury/dose. Influenza Vaccines and Thimerosal. LAIV does not contain thimerosal. Thimerosal preservative-containing inactivated influenza vaccines, distributed in multi-dose containers in the United States, contain 25 mcg of mercury/0.5-mL dose (64,65). Inactivated influenza virus vaccines distributed in the United States will also be available in 2005 in a thimerosal-free formulation in both 0.25 mL and 0.5-mL single-dose syringes and a preservative-free formulation (which contains trace amounts of thimerosal) in 0.25-mL-dose syringes. Influenza vaccine is part of the routine childhood immunization schedule. Sanofi Pasteur, Inc. (formerly Aventis Pas-teur, Inc.) produces FluZone ® , which is an inactivated influenza vaccine approved by the Food and Drug Administration (FDA) for persons aged >6 months. FluZone that is available in multi-dose vials contains thimerosal as a preservative. Thimerosal-free FluZone packaged as 0.25-mL unit dose syringes is available for use among persons aged 6-35 months. Thimerosal-free FluZone packaged as 0.5 mL unit dose syringes is available for use among persons aged >3 years. Fluvirin ® , produced by Chiron, is an inactivated influenza vaccine available in a preservative-free formulation, is packaged as 0.5-mL single-dose syringes, and is licensed for use in persons aged >4 years. The preservative-free Fluvirin vaccine contains trace amounts of thimerosal. 
The total amount of inactivated influenza vaccine available without thimerosal as a preservative will be increased as manufacturing capabilities are expanded. The risks for severe illness from influenza infection are elevated among both young children and pregnant women, and both groups benefit from vaccination by preventing illness and death from influenza. In contrast, no scientifically conclusive evidence exists of harm from exposure to thimerosal preservative-containing vaccine, whereas evidence is accumulating of lack of any harm resulting from exposure to such vaccines (64,68). Therefore, the benefits of influenza vaccination outweigh the theoretical risk, if any, for thimerosal exposure through vaccination. Nonetheless, certain persons remain concerned regarding exposure to thimerosal. The U.S. vaccine supply for infants and pregnant women is in a period of transition during which the availability of thimerosal-reduced or thimerosal-free vaccine intended for these groups is being expanded by manufacturers as a feasible means of reducing an infant's total exposure to mercury because other environmental sources of exposure are more difficult or impossible to eliminate. Reductions in thimerosal in other vaccines have been achieved already and have resulted in substantially lowered cumulative exposure to thimerosal from vaccination among infants and children. For all of these reasons, persons recommended to receive inactivated influenza vaccine may receive either vaccine preparation, depending on availability. # Efficacy and Effectiveness of Inactivated Influenza Vaccine The effectiveness of inactivated influenza vaccine depends primarily on the age and immunocompetence of the vaccine recipient and the degree of similarity between the viruses in the vaccine and those in circulation. The majority of vaccinated children and young adults develop high postvaccination hemagglutination inhibition antibody titers (69)(70)(71). These antibody titers are protective against illness caused by strains that are antigenically similar to those strains of the same type or subtype included in the vaccine (70)(71)(72)(73). Adults Aged <65 Years. When the vaccine and circulating viruses are antigenically similar, influenza vaccine prevents influenza illness among approximately 70%-90% of healthy adults aged <65 years (9,12,74,75). Vaccination of healthy adults also has resulted in decreased work absenteeism and decreased use of health-care resources, including use of antibiotics, when the vaccine and circulating viruses are wellmatched (9)(10)(11)(12)75,76). In a case-control study of adults aged 50-64 years with laboratory-confirmed influenza during the 2003-04 season when the vaccine and circulating viruses were not well matched, vaccine effectiveness was estimated to be 52% among healthy persons and 38% among those with one or more high-risk conditions (77). Children. Children aged 6 months can develop protective levels of anti-influenza antibody against specific influenza virus strains after influenza vaccination (69,70,(78)(79)(80)(81), although the antibody response among children at high risk for influenza-related complications might be lower than among healthy children (82,83). In a randomized study among children aged 1-15 years, inactivated influenza vaccine was 77%-91% effective against influenza respiratory illness and was 44%-49%, 74%-76%, and 70%-81% effective against influenza seroconversion among children aged 1-5, 6-10, and 11-15 years, respectively (71). 
One study (84) reported a vaccine efficacy of 56% against influenza illness among healthy children aged 3-9 years, and another study (85) determined vaccine efficacy of 22%-54% and 60%-78% among children with asthma aged 2-6 years and 7-14 years, respectively. A 2-year randomized study of children aged 6-24 months determined that >89% of children seroconverted to all three vaccine strains during both years (86). During year 1, among 411 children, vaccine efficacy was 66% (95% confidence interval [CI] = 34%-82%) against culture-confirmed influenza (attack rates: 5.5% and 15.9% among vaccine and placebo groups, respectively). During year 2, among 375 children, vaccine efficacy was -7% (95% CI = -247%-67%; attack rates: 3.6% and 3.3% among vaccine and placebo groups, respectively; the second year exhibited lower attack rates overall and was considered a mild season). However, no overall reduction in otitis media was reported. Other studies report that trivalent inactivated influenza vaccine decreases the incidence of influenza-associated otitis media among young children by approximately 30% (16,17). A retrospective study among approximately 5,000 children aged 6-23 months conducted during a year with a suboptimal vaccine match indicated vaccine effectiveness of 49% against medically attended, clinically diagnosed pneumonia or influenza (International Classification of Diseases, Ninth Revision [ICD-9] codes 480-487) among children who had received 2 doses of influenza vaccine. No effectiveness was demonstrated among children who had received only 1 dose of influenza vaccine, illustrating the importance of administering 2 doses of vaccine to previously unvaccinated children aged <9 years (87). Adults Aged >65 Years. Older persons and persons with certain chronic diseases might develop lower postvaccination antibody titers than healthy young adults and thus can remain susceptible to influenza infection and influenza-related upper respiratory tract illness (88)(89)(90). A randomized trial among noninstitutionalized persons aged >60 years reported a vaccine efficacy of 58% against influenza respiratory illness, but indicated that efficacy might be lower among those aged >70 years (91). The vaccine can also be effective in preventing secondary complications and reducing the risk for influenzarelated hospitalization and death among adults aged >65 years with and without high-risk medical conditions (e.g., heart disease and diabetes) (13)(14)(15)18,92). Among elderly persons not living in nursing homes or similar chronic-care facilities, influenza vaccine is 30%-70% effective in preventing hospitalization for pneumonia and influenza (15,93). Among older persons who do reside in nursing homes, influenza vaccine is most effective in preventing severe illness, secondary complications, and deaths. Among this population, the vaccine can be 50%-60% effective in preventing influenza-related hospitalization or pneumonia and 80% effective in preventing influenza-related death, although the effectiveness in preventing influenza illness often ranges from 30% to 40% (94)(95)(96). # Efficacy and Effectiveness of LAIV Healthy Children. A randomized, double-blind, placebocontrolled trial among 1,602 healthy children initially aged 15-71 months assessed the efficacy of trivalent LAIV against culture-confirmed influenza during two seasons (97,98). This trial included subsets of 238 Children who continued in the study remained in the same study group. 
In season one, when vaccine and circulating virus strains were well-matched, efficacy was 93% for all participants, regardless of age, among persons receiving 2 doses of LAIV. Efficacy was 87% in the 60-71-month subset for those who received 2 doses, and was 91% in the subset for those who received 1 or 2 doses. In season two, when the A (H3N2) component was not well-matched between vaccine and circulating virus strains, efficacy was 86% overall and 87% among those aged 60-84 months. The vaccine was 92% effi-cacious in preventing culture-confirmed influenza during the two-season study. Other results included a 27% reduction in febrile otitis media and a 28% reduction in otitis media with concomitant antibiotic use. Receipt of LAIV also resulted in a 30% lower incidence of febrile otitis media and 21% fewer febrile illnesses. Another study assessing LAIV effectiveness in children aged 18 months-18 years indicated effectiveness against medically attended acute respiratory illness (MAARI) of 18%. However, applying a validation sample of surveillance cultures with MAARI demonstrated efficacy of 92% against influenza A (H1N1) and 66% against an influenza B drift variant (99). Healthy Adults. A randomized, double-blind, placebocontrolled trial among 4,561 healthy working adults aged 18-64 years assessed multiple endpoints, including reductions in illness, absenteeism, health-care visits, and medication use during peak and total influenza outbreak periods (100). The study was conducted during the 1997-98 influenza season, when the vaccine and circulating A (H3N2) strains were not wellmatched. The study did not include testing of viruses by a laboratory. During peak outbreak periods, no difference was identified between LAIV and placebo recipients experiencing any febrile episodes. However, vaccination was associated with reductions in severe febrile illnesses of 19% and febrile upper respiratory tract illnesses of 24%. Vaccination also was associated with fewer days of illness, fewer days of work lost, fewer days with health-care-provider visits, and reduced use of prescription antibiotics and over-the-counter medications. Among the subset of 3,637 healthy adults aged 18-49 years, LAIV recipients (n = 2,411) had 26% fewer febrile upperrespiratory illness episodes; 27% fewer lost work days as a result of febrile upper respiratory illness; and 18%-37% fewer days of health-care provider visits caused by febrile illness, compared with placebo recipients (n = 1,226). Days of antibiotic use were reduced by 41%-45% in this age subset. Another randomized, double-blind, placebo-controlled challenge study among 92 healthy adults (LAIV, n = 29; placebo, n = 31; inactivated influenza vaccine, n = 32) aged 18-41 years assessed the efficacy of both LAIV and inactivated vaccine (101). The overall efficacy of LAIV and inactivated influenza vaccine in preventing laboratory-documented influenza from all three influenza strains combined was 85% and 71%, respectively, on the basis of experimental challenge by viruses to which study participants were susceptible before vaccination. The difference in efficacy between the two vaccines was not statistically significant. # Cost-Effectiveness of Influenza Vaccine Influenza vaccination can reduce both health-care costs and productivity losses associated with influenza illness. 
Economic studies of influenza vaccination of persons aged >65 years conducted in the United States have reported overall societal cost savings and substantial reductions in hospitalization and death (15,93,102). Studies of adults aged <65 years have reported that vaccination can reduce both direct medical costs and indirect costs from work absenteeism (8,(10)(11)(12)75,103). Reductions of 13%-44% in health-care-provider visits, 18%-45% in lost workdays, 18%-28% in days working with reduced effectiveness, and 25% in antibiotic use for influenza-associated illnesses have been reported (10,12,104,105). One cost-effectiveness analysis estimated a cost of approximately $60-$4,000/illness averted among healthy persons aged 18-64 years, depending on the cost of vaccination, the influenza attack rate, and vaccine effectiveness against ILI (75). Another cost-benefit economic study estimated an average annual savings of $13.66/person vaccinated (106). In the second study, 78% of all costs prevented were costs from lost work productivity, whereas the first study did not include productivity losses from influenza illness. Economic studies specifically evaluating the cost-effectiveness of vaccinating persons aged 50-64 years are not available, and the number of studies that examine the economics of routinely vaccinating children with inactivated or live, attenuated vaccine are limited (8,(107)(108)(109)(110). However, in a study of inactivated vaccine that included all age groups, cost utility (i.e., cost per year of healthy life gained) improved with increasing age and among those with chronic medical conditions (8). Among persons aged >65 years, vaccination resulted in a net savings per quality-adjusted life year (QALY) gained and resulted in costs of $23-$256/QALY among younger age groups. Additional studies of the relative cost-effectiveness and cost utility of influenza vaccination among children and among adults aged <65 years are needed and should be designed to account for year-to-year variations in influenza attack rates, illness severity, and vaccine efficacy when evaluating the longterm costs and benefits of annual vaccination. # Vaccination Coverage Levels One of the national health objectives for 2010 is to achieve vaccination coverage for 90% of persons aged >65 years (objective no. 14-29a) (111). Among persons aged >65 years, influenza vaccination levels increased from 33% in 1989 (112) to 66% in 1999 (113), surpassing the Healthy People 2000 objective of 60% (114). Vaccine coverage in this group reached the highest levels recorded (68%) during the 1999-00 influenza season, using the percentage of adults reporting influenza vaccination during the previous 12 months who participated in the National Health Interview Survey (NHIS) during the first and second quarters of each calendar year as a proxy measure of influenza vaccine coverage for the previous § Persons categorized as being at high risk for influenza-related complications self-reported one or more of the following: 1) ever being told by a physician they had diabetes, emphysema, coronary heart disease, angina, heart attack, or other heart condition; 2) having a diagnosis of cancer during the previous 12 months (excluding nonmelanoma skin cancer) or ever being told by a physician they have lymphoma, leukemia, or blood cancer during the previous 12 months; 3) being told by a physician they have chronic bronchitis or weak or failing kidneys; or 4) reporting an asthma episode or attack during the preceding 12 months. 
¶ Aged 18-44 years, pregnant at the time of the survey and without high-risk conditions. ** Adults were classified as health-care workers if they were currently employed in a health-care occupation or in a health-care-industry setting, on the basis of standard occupation and industry categories recoded in groups by CDC's National Center for Health Statistics. † † Interviewed adult in each household containing at least one of the following: a child aged <2 years, an adult aged >65 years, or any person aged 2-17 years at high risk (see previous footnote § ). To obtain information on household composition and high-risk status of household members, the sampled adult, child, and person files from NHIS were merged. Interviewed adults who were health-care workers or who had high-risk conditions were excluded. Information could not be assessed regarding high-risk status of other adults aged 18-64 years in the household, thus, certain adults 18-64 years who live with an adult aged 18-64 years at high risk were not included in the analysis. influenza season (113). Possible reasons for the increase in influenza vaccination levels among persons aged >65 years through the 1999-00 influenza season include 1) greater acceptance of preventive medical services by practitioners; 2) increased delivery and administration of vaccine by healthcare providers and sources other than physicians; 3) new information regarding influenza vaccine effectiveness, cost-effectiveness, and safety; and 4) initiation of Medicare reimbursement for influenza vaccination in 1993 (8,14,15,94,95,115,116). Vaccine coverage increased more rapidly through the mid-1990s than during subsequent seasons (average annual percentage increase of 4% from 1988-89 to 1996-97 versus 1% from 1996-97 to 1999-00) and has remained relatively stable since 2000. Estimated national influenza vaccine coverage in 2003 among persons aged >65 years and 50-64 years was 66% and 37%, respectively, based on 2003 NHIS data (Table 2). The estimated vaccination coverage among adults with high-risk conditions aged 18-49 years and 50-64 years was 24% and 46%, respectively, substantially lower than the Healthy People 2000 and 2010 objective of 60% (111,114). Continued annual monitoring is needed to determine the effects of vaccine supply delays and shortages, changes in influenza vaccination recommendations and target groups for vaccination, reimbursement rates for vaccine and vaccine administration, and other factors related to vaccination coverage among adults and children. New strategies to improve coverage will be needed to achieve the Healthy People 2010 objective (21). Reducing racial and ethnic health disparities, including disparities in vaccination coverage, is an overarching national goal (111). Although estimated influenza vaccination coverage for the 1999-00 season reached the highest levels recorded among older black, Hispanic, and white populations, vaccination levels among blacks and Hispanics continue to lag behind those among whites (113,117). Estimated vaccination coverage levels based on 2003 NHIS data among persons aged >65 years were 69% among non-Hispanic whites, 48% among non-Hispanic blacks, and 45% among Hispanics (CDC, National Immunization Program, unpublished data, 2005). Additional strate-gies are needed to achieve the Healthy People 2010 objectives among all racial and ethnic groups. In 1997 and 1998, vaccination coverage estimates among nursing home residents were 64%-82% and 83%, respectively (118,119). 
The Healthy People 2010 goal is to achieve influenza vaccination of 90% among nursing home residents, an increase from the Healthy People 2000 goal of 80% (111,114). Reported vaccination levels are low among children at increased risk for influenza complications. One study conducted among patients in health maintenance organizations reported influenza vaccination percentages ranging from 9% to 10% among children with asthma (120). A 25% vaccination level was reported among children with severe to moderate asthma who attended an allergy and immunology clinic (121). However, a study conducted in a pediatric clinic demonstrated an increase in the vaccination percentage of children with asthma or reactive airways disease from 5% to 32% after implementing a reminder/recall system (122). One study reported 79% vaccination coverage among children attending a cystic fibrosis treatment center (123) Vaccination of health-care workers has been associated with reduced work absenteeism (9) and fewer deaths among nursing home patients (125,126) and is a high priority for reducing the impact of influenza in health-care settings and for expanding influenza vaccine use (127,128). . Limited information is available regarding use of influenza vaccine among pregnant women. Among women aged 18-44 years without diabetes responding to the 2001 BRFSS, those who were pregnant were less likely to report influenza vaccination during the previous 12 months (13.7%) than those not pregnant (16.8%) (122,129). Only 13% of pregnant women reported vaccination according to 2003 NHIS data, excluding pregnant women who reported diabetes, heart disease, lung disease, and other selected high-risk conditions (CDC, National Immunization Program, unpublished data, 2004) (Table 2). These data indicate low compliance with the ACIP recommendations for pregnant women. In a study of influenza vaccine acceptance by pregnant women, 71% who were offered the vaccine chose to be vaccinated (130). However, a 1999 survey of obstetricians and gynecologists determined that only 39% administered influenza vaccine to obstetric patients, although 86% agreed that pregnant women's risk for influenza-related morbidity and mortality increases during the last two trimesters (131). Recent data indicate that self-report of influenza vaccination among adults, compared with extraction from the medical record, is both sensitive and specific (132). Patient self-reports should be accepted as evidence of influenza vaccination in clinical practice (132). However, information on the validity of parents' reports of pediatric influenza vaccination is not yet available. # Recommendations for Using Inactivated and Live, Attenuated Influenza Vaccines Both the inactivated influenza vaccine and LAIV can be used to reduce the risk for influenza. LAIV is approved for use among healthy persons aged 5-49 years. Inactivated influenza vaccine is approved for persons aged >6 months, including those with high-risk conditions (see following sections on inactivated influenza vaccine and live, attenuated influenza vaccine). # Target Groups for Vaccination # Persons at Increased Risk for Complications Vaccination with inactivated influenza vaccine is recommended for the following persons who are at increased risk for complications from influenza: • # Persons Aged 50-64 Years Vaccination is recommended for persons aged 50-64 years because this group has an increased prevalence of persons with high-risk conditions. 
In 2002, approximately 43.6 million persons in the United States were aged 50-64 years, of whom 13.5 million (34%) had one or more high-risk medical conditions (133). Influenza vaccine has been recommended for this entire age group to increase the low vaccination rates among persons in this age group with high-risk conditions (see preceding section). Age-based strategies are more successful in increasing vaccine coverage than patient-selection strategies based on medical conditions. Persons aged 50-64 years without high-risk conditions also receive benefit from vaccination in the form of decreased rates of influenza illness, decreased work absenteeism, and decreased need for medical visits and medication, including antibiotics (9-12). Furthermore, 50 years is an age when other preventive services begin and when routine assessment of vaccination and other preventive services has been recommended (134,135). # Persons Who Can Transmit Influenza to Those at High Risk Persons who are clinically or subclinically infected can transmit influenza virus to persons at high risk for complications from influenza. Decreasing transmission of influenza from caregivers and household contacts to persons at high risk might reduce influenza-related deaths among persons at high risk. Evidence from two studies indicates that vaccination of healthcare workers is associated with decreased deaths among nursing home patients (125,126), and hospital-based influenza outbreaks frequently occur where unvaccinated health-care workers are employed. Administration of LAIV has been dem-onstrated to reduce MAARI in contacts of vaccine recipients (136), and to reduce ILI-related economic and medical consequences (such as work days lost and number of health-care provider visits). In addition to health-care workers, additional groups that can transmit influenza to high-risk persons and that should be vaccinated include • employees of assisted living and other residences for persons in groups at high risk; • persons who provide home care to persons in groups at high risk; and • household contacts (including children) of persons in groups at high risk. In addition, because children aged 0-23 months are at increased risk for influenza-related hospitalization (54)(55)(56), vaccination is recommended for their household contacts and out-of-home caregivers, particularly for contacts of children aged 0-5 months, because influenza vaccines have not been approved by FDA for use among children aged <6 months (see Healthy Young Children). Healthy persons aged 5-49 years in these groups who are not contacts of severely immunosuppressed persons (see Live, Attenuated Influenza Vaccine Recommendations) can receive either LAIV or inactivated influenza vaccine. All other persons in this group should receive inactivated influenza vaccine. # Health-Care Workers All health-care workers should be vaccinated against influenza annually (128). Facilities that employ health-care workers are strongly encouraged to provide vaccine to workers by using approaches that maximize vaccination rates. This will protect health-care workers, their patients, and communities, and will improve prevention of influenza-associated disease, patient safety, and will reduce disease burden. Influenza vaccination rates among health-care workers should be regularly measured and reported. Although vaccination rates for healthcare workers are typically <40%, with moderate effort, organized campaigns can attain higher rates of vaccination among this population (127,137). 
Currently, seven states have legislation requiring annual influenza vaccination of health-care workers or the signing of an informed declination (128), and 15 states have regulations regarding vaccination of health-care workers in long-term-care facilities (138). Physicians, nurses, and other workers in both hospital and outpatient-care settings, including medical emergency-response workers (e.g., paramedics and emergency medical technicians), should be vaccinated, as should employees of nursing home and chroniccare facilities who have contact with patients or residents. # Additional Information Regarding Vaccination of Specific Populations Pregnant Women Influenza-associated excess deaths among pregnant women were documented during the pandemics of 1918-19 and 1957-58 (49,(139)(140)(141). Case reports and limited studies also indicate that pregnancy can increase the risk for serious medical complications of influenza (142)(143)(144)(145)(146). An increased risk might result from 1) increases in heart rate, stroke volume, and oxygen consumption; 2) decreases in lung capacity; and 3) changes in immunologic function during pregnancy. A study of the effect of influenza during 17 interpandemic influenza seasons demonstrated that the relative risk for hospitalization for selected cardiorespiratory conditions among pregnant women enrolled in Medicaid increased from 1.4 during weeks 14-20 of gestation to 4.7 during weeks 37-42, in comparison with women who were 1-6 months postpartum (147). Women in their third trimester of pregnancy were hospitalized at a rate (i.e., 250/100,000 pregnant women) comparable with that of nonpregnant women who had high-risk medical conditions. Researchers estimate that an average of 1-2 hospitalizations can be prevented for every 1,000 pregnant women vaccinated (147). Because of the increased risk for influenza-related complications, women who will be pregnant during the influenza season should be vaccinated. Vaccination can occur in any trimester. One study of influenza vaccination of approximately 2,000 pregnant women demonstrated no adverse fetal effects associated with influenza vaccine (148). # Healthy Young Children Studies indicate that rates of hospitalization are higher among young children than older children when influenza viruses are in circulation (53,55,56,149,150). The increased rates of hospitalization are comparable with rates for other groups considered at high risk for influenza-related complications. However, the interpretation of these findings has been confounded by co-circulation of respiratory syncytial viruses, which are a cause of serious respiratory viral illness among children and which frequently circulate during the same time as influenza viruses (151)(152)(153). One study assessed rates of influenza-associated hospitalizations among the entire U.S. population during 1979-2001 and calculated an average rate of approximately 108 hospitalizations per 100,000 personyears in children aged <5 years (46). Two recent studies have attempted to separate the effects of respiratory syncytial viruses and influenza viruses on rates of hospitalization among children who do not have high-risk conditions (54,55). Both studies reported that otherwise healthy children aged <2 years, and possibly children aged 2-4 years, are at increased risk for influenza-related hospitalization compared with older healthy children (Table 1). 
Among the Tennessee Medicaid population during 1973-1993, healthy children aged 6 months-<3 years had rates of influenza-associated hospitalization comparable with or higher than rates among children aged 3-14 years with high-risk conditions (54,56). Another Tennessee study reported a hospitalization rate per year of 3-4/1,000 healthy children aged <2 years for laboratory-confirmed influenza (33). Because children aged 6-23 months are at substantially increased risk for influenza-related hospitalizations, ACIP recommends vaccination of all children in this age group (154). ACIP continues to recommend influenza vaccination of persons aged >6 months who have high-risk medical conditions. The current inactivated influenza vaccine is not approved by FDA for use among children aged <6 months, the pediatric group at greatest risk for influenza-related complications (54). Vaccinating their household contacts and out-of-home caregivers might decrease the probability of influenza infection among these children. Beginning # Persons Infected with HIV Limited information is available regarding the frequency and severity of influenza illness or the benefits of influenza vaccination among persons with HIV infection (156,157). However, a retrospective study of young and middle-aged women enrolled in Tennessee's Medicaid program determined that the attributable risk for cardiopulmonary hospitalizations among women with HIV infection was higher during influenza seasons than during the peri-influenza periods. The risk for hospitalization was higher for HIV-infected women than for women with other well-recognized high-risk conditions, including chronic heart and lung diseases (158). Another study estimated that the risk for influenza-related death was 9.4-14.6/10,000 persons with acquired immunodeficiency syndrome (AIDS) compared with 0.09-0.10/10,000 among all persons aged 25-54 years and 6.4-7.0/10,000 among persons aged >65 years (159). Other reports indicate that influenza symptoms might be prolonged and the risk for complications from influenza increased for certain HIVinfected persons (160)(161)(162). Inactivated influenza vaccination has been demonstrated to produce substantial antibody titers against influenza among vaccinated HIV-infected persons who have minimal AIDSrelated symptoms and high CD4+ T-lymphocyte cell counts (163)(164)(165)(166). A limited, randomized, placebo-controlled trial determined that inactivated influenza vaccine was highly effective in preventing symptomatic, laboratory-confirmed influenza infection among HIV-infected persons with a mean of 400 CD4+ T-lymphocyte cells/mm3; a limited number of persons with CD4+ T-lymphocyte cell counts of <200 were included in that study (167). A nonrandomized study among HIV-infected persons determined that influenza vaccination was most effective among persons with >100 CD4+ cells and among those with <30,000 viral copies of HIV type-1/mL (162). Among persons who have advanced HIV disease and low CD4+ T-lymphocyte cell counts, inactivated influenza vaccine might not induce protective antibody titers (165,166); a second dose of vaccine does not improve the immune response in these persons (166,167). One study determined that HIV RNA (ribonucleic acid) levels increased transiently in one HIV-infected person after influenza infection (168). Studies have demonstrated a transient (i.e., 2-4 week) increase in replication of HIV-1 in the plasma or peripheral blood mononuclear cells of HIV-infected persons after vaccine administration (165,169). 
Other studies using similar laboratory techniques have not documented a substantial increase in the replication of HIV (170)(171)(172)(173). Deterioration of CD4+ T-lymphocyte cell counts or progression of HIV disease have not been demonstrated among HIVinfected persons after influenza vaccination compared with unvaccinated persons (166,174). Limited information is available concerning the effect of antiretroviral therapy on increases in HIV RNA levels after either natural influenza infection or influenza vaccination (156,175). Because influenza can result in serious illness and because vaccination with inactivated influenza vaccine can result in the production of protective antibody titers, vaccination will benefit HIV-infected persons, including HIV-infected pregnant women. # Breastfeeding Mothers Influenza vaccine is safe for mothers who are breastfeeding and their infants. Breastfeeding does not adversely affect the immune response and is not a contraindication for vaccination. # Travelers The risk for exposure to influenza during travel depends on the time of year and destination. In the tropics, influenza can occur throughout the year. In the temperate regions of the Southern Hemisphere, the majority of influenza activity occurs during April-September. In temperate climate zones of the Northern and Southern Hemispheres, travelers also can be exposed to influenza during the summer, especially when traveling as part of large organized tourist groups (e.g., on cruise ships) that include persons from areas of the world where influenza viruses are circulating (176,177). Persons at high risk for complications of influenza who were not vaccinated with influenza vaccine during the preceding fall or winter should consider receiving influenza vaccine before travel if they plan to • travel to the tropics, • travel with organized tourist groups at any time of year, or • travel to the Southern Hemisphere during April-September. No information is available regarding the benefits of revaccinating persons before summer travel who were already vaccinated in the preceding fall. Persons at high risk who receive the previous season's vaccine before travel should be revaccinated with the current vaccine the following fall or winter. Persons aged >50 years and persons at high risk should consult with their physicians before embarking on travel during the summer to discuss the symptoms and risks for influenza and the advisability of carrying antiviral medications for either prophylaxis or treatment of influenza. # General Population In addition to the groups for which annual influenza vaccination is recommended, physicians should administer influenza vaccine to any person who wishes to reduce the likelihood of becoming ill with influenza or transmitting influenza to others should they become infected (the vaccine can be administered to children aged >6 months), depending on vaccine availability (see Influenza Vaccine Supply). Persons who provide essential community services should be considered for vaccination to minimize disruption of essential activities during influenza outbreaks. Students or other persons in institutional settings (e.g., those who reside in dormitories) should be encouraged to receive vaccine to minimize the disruption of routine activities during epidemics. # Comparison of LAIV with Inactivated Influenza Vaccine Both inactivated influenza vaccine and LAIV are available to reduce the risk for influenza infection and illness. However, the vaccines also differ in key ways (Table 3). 
# Major Similarities LAIV and inactivated influenza vaccine contain strains of influenza viruses that are antigenically equivalent to the annually recommended strains: one influenza A (H3N2) virus, one A (H1N1) virus, and one B virus. Each year, one or more virus strains might be changed on the basis of global surveillance for influenza viruses and the emergence and spread of new strains. Viruses for both vaccines are grown in eggs. Both vaccines are administered annually to provide optimal protection against influenza infection (Table 3). # Major Differences Inactivated influenza vaccine contains killed viruses, whereas LAIV contains live, attenuated viruses still capable of replication. LAIV is administered intranasally by sprayer, whereas inactivated influenza vaccine is administered intramuscularly by injection. LAIV is more expensive than inactivated influenza vaccine, although the price differential between inactivated vaccine and LAIV has decreased for the 2005-06 season. LAIV is approved for use among healthy persons aged 5-49 years; inactivated influenza vaccine is approved for use among persons aged >6 months, including those who are healthy and those with chronic medical conditions (Table 3). # Inactivated Influenza Vaccine Recommendations Persons Who Should Not Be Vaccinated with Inactivated Influenza Vaccine Inactivated influenza vaccine should not be administered to persons known to have anaphylactic hypersensitivity to eggs or to other components of the influenza vaccine without first consulting a physician (see Side Effects and Adverse Reactions). Prophylactic use of antiviral agents is an option for preventing influenza among such persons. However, persons who have a history of anaphylactic hypersensitivity to vaccine components but who are also at high risk for complications from influenza can benefit from vaccine after appropriate allergy evaluation and desensitization. Information regarding vaccine components is located in package inserts from each manufacturer. Persons with acute febrile illness usually should not be vaccinated until their symptoms have abated. However, minor illnesses with or without fever do not contraindicate use of influenza vaccine, particularly among children with mild upper-respiratory-tract infection or allergic rhinitis. Immunogenicity and side effects of split-and whole-virus vaccines are similar among adults when vaccines are administered at the recommended dosage. § For adults and older children, the recommended site of vaccination is the deltoid muscle. The preferred site for infants and young children is the anterolateral aspect of the thigh. ¶ Two doses administered at least 1 month apart are recommended for children aged <9 years who are receiving influenza vaccine for the first time. # Dosage Dosage recommendations vary according to age group (Table 4). Among previously unvaccinated children aged <9 years, 2 doses administered >1 month apart are recommended for satisfactory antibody responses. If possible, the second dose should be administered before December. If a child aged <9 years receiving vaccine for the first time does not receive a second dose of vaccine within the same season, only 1 dose of vaccine should be administered the following season. Two doses are not required at that time. Among adults, studies have indicated limited or no improvement in antibody response when a second dose is administered during the same season (178)(179)(180). 
Even when the current influenza vaccine contains one or more antigens administered in previous years, annual vaccination with the current vaccine is necessary because immunity declines during the year after vaccination (181,182). Vaccine prepared for a previous influenza season should not be administered to provide protection for the current season. Because of lack of vaccine efficacy data, ACIP does not recommend that a child receiving influenza vaccine for the first time be given the first dose of vaccine in the spring, followed by the second dose in the autumn of the same year. # Route The intramuscular route is recommended for influenza vaccine. Adults and older children should be vaccinated in the deltoid muscle. A needle length >1 inch can be considered for these age groups because needles <1 inch might be of insufficient length to penetrate muscle tissue in certain adults and older children (183). Infants and young children should be vaccinated in the anterolateral aspect of the thigh (67). ACIP recommends a needle length of 7/8-1 inch for children aged <12 months for intramuscular vaccination into the anterolateral thigh. When injecting into the deltoid muscle among children with adequate deltoid muscle mass, a needle length of 7/8-1.25 inches is recommended (67). # Side Effects and Adverse Reactions When educating patients regarding potential side effects, clinicians should emphasize that 1) inactivated influenza vaccine contains noninfectious killed viruses and cannot cause influenza; and 2) coincidental respiratory disease unrelated to influenza vaccination can occur after vaccination. # Local Reactions In placebo-controlled studies among adults, the most frequent side effect of vaccination is soreness at the vaccination site (affecting 10%-64% of patients) that lasts <2 days (12,(184)(185)(186). These local reactions typically are mild and rarely interfere with the person's ability to conduct usual daily activities. One blinded, randomized, cross-over study among 1,952 adults and children with asthma, demonstrated that only body aches were reported more frequently after inactivated influenza vaccine (25.1%) than placebo-injection (20.8%) (187). One study (83) reported 20%-28% of children with asthma aged 9 months-18 years with local pain and swelling, and another study (80) reported 23% of children aged 6 months-4 years with chronic heart or lung disease had local reactions. A different study (81) reported no difference in local reactions among 53 children aged 6 months-6 years with high-risk medical conditions or among 305 healthy children aged 3-12 years in a placebo-controlled trial of inactivated influenza vaccine. In a study of 12 children aged 5-32 months, no substantial local or systemic reactions were noted (188). # Systemic Reactions Fever, malaise, myalgia, and other systemic symptoms can occur after vaccination with inactivated vaccine and most often affect persons who have had no previous exposure to the influenza virus antigens in the vaccine (e.g., young children) (189,190). These reactions begin 6-12 hours after vaccination and can persist for 1-2 days. Recent placebo-controlled trials demonstrate that among older persons and healthy young adults, administration of split-virus influenza vaccine is not associated with higher rates of systemic symptoms (e.g., fever, malaise, myalgia, and headache) when compared with placebo injections (12,(184)(185)(186). Less information from published studies is available for children, compared with adults. 
However, in a randomized crossover study among both children and adults with asthma, no increase in asthma exacerbations was reported for either age group (187). An analysis of 215,600 children aged <18 years and 8,476 children aged 6-23 months enrolled in one of five health maintenance organizations reported no increase in biologically plausible medically attended events during the 2 weeks after inactivated influenza vaccination, compared with control periods 3-4 weeks before and after vaccination (191). In a study of 791 healthy children (71), postvaccination fever was noted among 11.5% of children aged 1-5 years, 4.6% among children aged 6-10 years, and 5.1% among children aged 11-15 years. Among children with high-risk medical conditions, one study of 52 children aged 6 months-4 years reported fever among 27% and irritability and insomnia among 25% (80); and a study among 33 children aged 6-18 months reported that one child had irritability and one had a fever and seizure after vaccination (192). No placebo comparison was made in these studies. However, in pediatric trials of A/New Jersey/76 swine influenza vaccine, no difference was reported between placebo and split-virus vaccine groups in febrile reactions after injection, although the vaccine was associated with mild local tenderness or erythema (81). Limited data regarding potential adverse events after influenza vaccination are available from the Vaccine Adverse Event Reporting System (VAERS). During January 1, 1991-June 30, 2004, VAERS received 1,895 reports of adverse events among children aged <18 years, including 479 reports of adverse events among children aged 6-23 months. The number of influenza vaccine doses received by children during this entire period is unknown (CDC, unpublished data, 2005). A recently published review of VAERS reports of trivalent inactivated influenza vaccine (TIV) in children aged 6-23 months documented that the most frequently reported adverse events were fever, rash, injection-site reactions, and seizures. The majority of the small total number of reported seizures appeared to be febrile (193). Because of the limitations of passive reporting systems, determining causality for specific types of adverse events, with the exception of injection-site reactions, is usually not possible by using VAERS data alone. A population-based study of TIV safety in children aged 6-23 months indicated no vaccine associated adverse events that had a plausible relationship to vaccination (194). Health-care professionals should promptly report to VAERS all clinically significant adverse events after influenza vaccination of children, even if the health-care professional is not certain that the vaccine caused the event. The Institute of Medicine has specifically recommended reporting of potential neurologic complications (e.g., demyelinating disorders such as Guillain-Barré syndrome [GBS]), although no evidence exists of a causal relationship between influenza vaccine and neurologic disorders in children. Immediate -presumably allergic -reactions (e.g., hives, angioedema, allergic asthma, and systemic anaphylaxis) rarely occur after influenza vaccination (195). These reactions probably result from hypersensitivity to certain vaccine components; the majority of reactions probably are caused by residual egg protein. Although current influenza vaccines contain only a limited quantity of egg protein, this protein can induce immediate hypersensitivity reactions among persons who have severe egg allergy. 
Persons who have had hives or swelling of the lips or tongue, or who have experienced acute respiratory distress or collapse after eating eggs should consult a physician for appropriate evaluation to help determine if vaccine should be administered. Persons who have documented immunoglobulin E (IgE)-mediated hypersensitivity to eggs, including those who have had occupational asthma or other allergic responses to egg protein, might also be at increased risk for allergic reactions to influenza vaccine, and consultation with a physician should be considered. Protocols have been published for safely administering influenza vaccine to persons with egg allergies (196)(197)(198). Hypersensitivity reactions to any vaccine component can occur. Although exposure to vaccines containing thimerosal can lead to induction of hypersensitivity, the majority of patients do not have reactions to thimerosal when it is administered as a component of vaccines, even when patch or intradermal tests for thimerosal indicate hypersensitivity (199,200). When reported, hypersensitivity to thimerosal usually has consisted of local, delayed hypersensitivity reactions (199). # Guillain-Barré Syndrome The 1976 swine influenza vaccine was associated with an increased frequency of GBS (201,202). Among persons who received the swine influenza vaccine in 1976, the rate of GBS was <10 cases/1 million persons vaccinated. The risk for influenza vaccine-associated GBS is higher among persons aged >25 years than persons <25 years (201). Evidence for a causal relation of GBS with subsequent vaccines prepared from other influenza viruses is unclear. Obtaining strong epidemiologic evidence for a possible limited increase in risk is difficult for such a rare condition as GBS, which has an annual incidence of 10-20 cases/1 million adults (203). More definitive data probably will require using other methodologies (e.g., laboratory studies of the pathophysiology of GBS). During three of four influenza seasons studied during 1977-1991, the overall relative risk estimates for GBS after influenza vaccination were slightly elevated but were not statistically significant in any of these studies (204)(205)(206). However, in a study of the 1992-93 and 1993-94 seasons, the overall relative risk for GBS was 1.7 (95% CI = 1.0-2.8; p = 0.04) during the 6 weeks after vaccination, representing approximately 1 additional case of GBS/1 million persons vaccinated. The combined number of GBS cases peaked 2 weeks after vaccination (207). Thus, investigations to date have not documented a substantial increase in GBS associated with influenza vaccines (other than the swine influenza vaccine in 1976), and that, if influenza vaccine does pose a risk, it is probably slightly more than one additional case/1 million persons vaccinated. Recent data from VAERS has documented decreased reporting of post influenza vaccine GBS across age groups, despite overall increased reporting for influenza vaccine (208). Cases of GBS after influenza infection have been reported, but no epidemiologic studies have documented such an association (209,210). Substantial evidence exists that multiple infectious illnesses, most notably Campylobacter jejuni, and upper respiratory tract infections are associated with GBS (203,(211)(212)(213). 
Even if GBS were a true side effect of vaccination in the years after 1976, the estimated risk for GBS of approximately 1 additional case/1 million persons vaccinated is substantially less than the risk for severe influenza, which can be prevented by vaccination among all age groups, especially persons aged >65 years and those who have medical indications for influenza vaccination (Table 1) (see Hospitalizations and Deaths from Influenza). The potential benefits of influenza vaccination in preventing serious illness, hospitalization, and death substantially outweigh the possible risks for experiencing vaccine-associated GBS. The average case fatality ratio for GBS is 6% and increases with age (203,214). No evidence indicates that the case fatality ratio for GBS differs among vaccinated persons and those not vaccinated. The incidence of GBS among the general population is low, but persons with a history of GBS have a substantially greater likelihood of subsequently experiencing GBS than persons without such a history (204,215). Thus, the likelihood of coincidentally experiencing GBS after influenza vaccination is expected to be greater among persons with a history of GBS than among persons with no history of this syndrome. Whether influenza vaccination specifically might increase the risk for recurrence of GBS is unknown; therefore, avoiding vaccinating persons who are not at high risk for severe influenza complications and who are known to have experienced GBS within 6 weeks after a previous influenza vaccination is prudent. As an alternative, physicians might consider using influenza an-tiviral chemoprophylaxis for these persons. Although data are limited, for the majority of persons who have a history of GBS and who are at high risk for severe complications from influenza, the established benefits of influenza vaccination justify yearly vaccination. # Live, Attenuated Influenza Vaccine Recommendations Background Description and Action Mechanisms. LAIVs have been in development since the 1960s in the United States, where they have been evaluated as mono-, bi-, and trivalent formulations (216)(217)(218). The LAIV licensed for use in the United States beginning in 2003 is produced by MedImmune, Inc. (Gaithersburg, Maryland; http://www.medimmune.com) and marketed under the name FluMist™. It is a live, trivalent, intranasally administered vaccine that is • attenuated, producing mild or no signs or symptoms related to influenza virus infection; • temperature-sensitive, a property that limits the replication of the vaccine viruses at 38 º C-39 º C, and thus restricts LAIV viruses from replicating efficiently in human lower airways; and • cold-adapted, replicating efficiently at 25 º C, a temperature that is permissive for replication of LAIV viruses, but restrictive for replication of different wild-type viruses. In animal studies, LAIV viruses replicate in the mucosa of the nasopharynx, inducing protective immunity against viruses included in the vaccine, but replicate inefficiently in the lower airways or lungs. The first step in developing an LAIV was the derivation of two stably attenuated master donor viruses (MDV), one for type A and one for type B influenza viruses. The two MDVs each acquired the cold-adapted, temperature-sensitive, attenuated phenotypes through serial passage in viral culture conducted at progressively lower temperatures. 
The vaccine viruses in LAIV are reassortant viruses containing genes from these MDVs that confer attenuation, temperature sensitivity, and cold adaptation and genes from the recommended contemporary wild-type influenza viruses, encoding the surface antigens hemagglutinin (HA) and neuraminidase (NA). Thus, MDVs provide the stably attenuated vehicles for presenting influenza HA and NA antigens, to which the protective antibody response is directed, to the immune system. The reassortant vaccine viruses are grown in embryonated hens eggs. After the vaccine is formulated and inserted into individual sprayers for nasal administration, the vaccine must be stored at -15 º C or colder. The immunogenicity of the approved LAIV has been assessed in multiple studies (102,(219)(220)(221)(222)(223)(224), which included approximately 100 children aged 5-17 years, and approximately 300 adults aged 18-49 years. LAIV virus strains replicate primarily in nasopharyngeal epithelial cells. The protective mechanisms induced by vaccination with LAIV are not completely understood but appear to involve both serum and nasal secretory antibodies. No single laboratory measurement closely correlates with protective immunity induced by LAIV. Shedding and Transmission of Vaccine Viruses. Available data indicate that both children and adults vaccinated with LAIV can shed vaccine viruses for >2 days after vaccination, although in lower titers than typically occur with shedding of wild-type influenza viruses. Shedding should not be equated with person-to-person transmission of vaccine viruses, although, in rare instances, shed vaccine viruses can be transmitted from vaccinees to nonvaccinated persons. One unpublished study in a child care center settingassessed transmissibility of vaccine viruses from 98 vaccinated to 99 unvaccinated subjects, all aged 8-36 months. Eighty percent of vaccine recipients shed one or more virus strains, with a mean of 7.6 days' duration (225). One vaccine type influenza type B isolate was recovered from a placebo recipient and was confirmed to be vaccine-type virus. The type B isolate retained the cold-adapted, temperature-sensitive, attenuated phenotype, and it possessed the same genetic sequence as a virus shed from a vaccine recipient in the same children's play group. The placebo recipient from whom the influenza type B vaccine virus was isolated did not exhibit symptoms that were different from those experienced by vaccine recipients. The estimated probability of acquiring vaccine virus after close contact with a single LAIV recipient in this child care population was 0.58%-2.4%. One study assessing shedding of vaccine viruses in 20 healthy vaccinated adults aged 18-49 years demonstrated that the majority of shedding occurred within the first 3 days after vaccination, although one subject was noted to shed virus on day 7 after vaccine receipt. No subject shed vaccine viruses >10 days after vaccination. Duration or type of symptoms associated with receipt of LAIV did not correlate with duration of shedding of vaccine viruses. Person-to-person transmission of vaccine viruses was not assessed in this study (226). Another study assessing shedding of vaccine viruses in 14 healthy adults aged 18-49 years indicated that 50% of these adults had viral antigen detected by direct immunofluorescence or rapid antigen tests within 7 days of vaccination. Most viral shedding was detected on day 2 or 3. Person-to-person transmission of vaccine viruses was not assessed in this study (227). 
# Stability of Vaccine Viruses
In clinical trials, viruses shed by vaccine recipients have been phenotypically stable. In one study, nasal and throat swab specimens were collected from 17 study participants for 2 weeks after vaccine receipt (228). Virus isolates were analyzed by multiple genetic techniques. All isolates retained the LAIV genotype after replication in the human host, and all retained the cold-adapted and temperature-sensitive phenotypes.
# Using LAIV
LAIV is an option for vaccination of healthy persons aged 5-49 years, including health-care workers and other persons in close contact with groups at high risk and those wanting to avoid influenza. During periods when inactivated vaccine is in short supply, use of LAIV is encouraged when feasible for eligible persons (including health-care workers) because use of LAIV by these persons might increase availability of inactivated vaccine for persons in groups at high risk. Possible advantages of LAIV include its potential to induce a broad mucosal and systemic immune response, its ease of administration, and the acceptability of an intranasal rather than intramuscular route of administration.
# Persons Who Should Not Be Vaccinated with LAIV
The following populations should not be vaccinated with LAIV:
• persons aged <5 years or >49 years;
• persons with asthma, reactive airways disease, or other chronic disorders of the pulmonary or cardiovascular systems; persons with other underlying medical conditions, including metabolic diseases such as diabetes, renal dysfunction, and hemoglobinopathies; or persons with known or suspected immunodeficiency diseases or who are receiving immunosuppressive therapies;
• children or adolescents receiving aspirin or other salicylates (because of the association of Reye syndrome with wild-type influenza virus infection);
• persons with a history of GBS;
• pregnant women; and
• persons with a history of hypersensitivity, including anaphylaxis, to any of the components of LAIV or to eggs.
# Close Contacts of Persons at High Risk for Complications from Influenza
Close contacts of persons at high risk for complications from influenza should receive influenza vaccine to reduce transmission of wild-type influenza viruses to persons at high risk. ACIP has not indicated a preference for inactivated influenza vaccine use by health-care workers or other persons who have close contact with persons with lesser degrees of immunosuppression (e.g., persons with diabetes, persons with asthma taking corticosteroids, or persons infected with HIV) or for inactivated influenza vaccine use by health-care workers or other healthy persons aged 5-49 years in close contact with all other groups at high risk. Use of inactivated influenza vaccine is preferred for vaccinating household members, health-care workers, and others who have close contact with severely immunosuppressed persons (e.g., patients with hematopoietic stem cell transplants) during those periods in which the immunosuppressed person requires care in a protective environment. The rationale for not using LAIV among health-care workers caring for such patients is the theoretical risk that a live, attenuated vaccine virus could be transmitted to the severely immunosuppressed person. If a health-care worker receives LAIV, that worker should refrain from contact with severely immunosuppressed patients for 7 days after vaccine receipt. Hospital visitors who have received LAIV should refrain from contact with severely immunosuppressed persons for 7 days after vaccination; however, such persons need not be excluded from visitation of patients who are not severely immunosuppressed.
# Personnel Who May Administer LAIV
Low-level introduction of vaccine viruses into the environment is likely unavoidable when administering LAIV. The risk for acquiring vaccine viruses from the environment is unknown but likely to be limited. Severely immunosuppressed persons should not administer LAIV. However, other persons at high risk for influenza complications may administer LAIV. These include persons with underlying medical conditions placing them at high risk or who are likely to be at risk, including pregnant women, persons with asthma, and persons aged >50 years.
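Before turning to dosage, the eligibility rules above can be restated as simple screening logic. The sketch below is a schematic illustration only, with hypothetical function and field names; it compresses the contraindication list into a few boolean flags and is not a substitute for the full criteria.

```python
def laiv_eligible(age_years: float,
                  healthy: bool,
                  pregnant: bool = False,
                  gbs_history: bool = False,
                  egg_hypersensitivity: bool = False,
                  on_salicylates: bool = False) -> bool:
    """Rough LAIV screen per the criteria above (hypothetical illustration).

    The `healthy` flag stands in for the detailed checks on chronic
    pulmonary, cardiovascular, metabolic, and renal conditions and on
    immunosuppression.
    """
    if not 5 <= age_years <= 49:  # licensed only for healthy persons aged 5-49 years
        return False
    if not healthy or pregnant or gbs_history:
        return False
    if egg_hypersensitivity or on_salicylates:
        return False
    return True

# A healthy 30-year-old qualifies; a 67-year-old does not and should
# receive inactivated vaccine instead.
assert laiv_eligible(30, healthy=True)
assert not laiv_eligible(67, healthy=True)
```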
# LAIV Dosage and Administration
LAIV is intended for intranasal administration only and should not be administered by the intramuscular, intradermal, or intravenous route. LAIV must be thawed before administration. This can be accomplished by holding an individual sprayer in the palm of the hand until thawed, with subsequent immediate administration. Alternatively, the vaccine can be thawed in a refrigerator and stored at 2°C-8°C for <24 hours before use. Vaccine should not be refrozen after thawing. LAIV is supplied in a prefilled single-use sprayer containing 0.5 mL of vaccine. Approximately 0.25 mL (i.e., half of the total sprayer contents) is sprayed into the first nostril while the recipient is in the upright position. An attached dose-divider clip is removed from the sprayer to administer the second half of the dose into the other nostril. If the vaccine recipient sneezes after administration, the dose should not be repeated. LAIV should be administered annually according to the following schedule:
• Children aged 5-8 years previously unvaccinated at any time with either LAIV or inactivated influenza vaccine should receive 2 doses of LAIV separated by 6-10 weeks.
• Children aged 5-8 years previously vaccinated at any time with either LAIV or inactivated influenza vaccine should receive 1 dose of LAIV. They do not require a second dose.
• Persons aged 9-49 years should receive 1 dose of LAIV.
LAIV can be administered to persons with minor acute illnesses (e.g., diarrhea or mild upper respiratory tract infection with or without fever). However, if clinical judgment indicates that nasal congestion is present that might impede delivery of the vaccine to the nasopharyngeal mucosa, deferral of administration should be considered until resolution of the illness. Whether concurrent administration of LAIV with other vaccines affects the safety or efficacy of either LAIV or the simultaneously administered vaccine is unknown. In the absence of specific data indicating interference, following the ACIP general recommendations for immunization is prudent (67). Inactivated vaccines do not interfere with the immune response to other inactivated vaccines or to live vaccines. An inactivated vaccine can be administered either simultaneously or at any time before or after LAIV. Two live vaccines not administered on the same day should be administered >4 weeks apart when possible.
# LAIV and Use of Influenza Antiviral Medications
The effect on safety and efficacy of LAIV coadministration with influenza antiviral medications has not been studied. However, because influenza antivirals reduce replication of influenza viruses, LAIV should not be administered until 48 hours after cessation of influenza antiviral therapy, and influenza antiviral medications should not be administered for 2 weeks after receipt of LAIV.
# LAIV Storage
LAIV must be stored at -15°C or colder. A manufacturer-supplied freezer box was formerly required for storage of LAIV in a frost-free freezer; however, the freezer box is now optional, and LAIV may now be stored in frost-free freezers without using a freezer box. LAIV can be thawed in a refrigerator and stored at 2°C-8°C for <60 hours before use. It should not be refrozen after thawing.
# Side Effects and Adverse Reactions
Twenty prelicensure clinical trials assessed the safety of the approved LAIV. In these combined studies, approximately 28,000 doses of the vaccine were administered to approximately 20,000 subjects.
A subset of these trials consisted of randomized, placebo-controlled studies in which an estimated 4,000 healthy children aged 5-17 years and 2,000 healthy adults aged 18-49 years were vaccinated. The incidence of adverse events possibly complicating influenza (e.g., pneumonia, bronchitis, bronchiolitis, or central nervous system events) was not statistically different between LAIV and placebo recipients aged 5-49 years. LAIV is made from attenuated viruses and does not cause influenza in vaccine recipients.
Children. In a subset of healthy children aged 60-71 months from one clinical trial (97,98), certain signs and symptoms were reported more often among LAIV recipients after the first dose (n = 214) than placebo recipients (n = 95) (e.g., runny nose, 48.1% versus 44.2%; headache, 17.8% versus 11.6%; vomiting, 4.7% versus 3.2%; and myalgias, 6.1% versus 4.2%), but these differences were not statistically significant. In other trials, signs and symptoms reported after LAIV administration have included runny nose or nasal congestion (20%-75%), headache (2%-46%), fever (0%-26%), vomiting (3%-13%), abdominal pain (2%), and myalgias (0%-21%) (219,222,224,229-231). These symptoms were associated more often with the first dose and were self-limited. Unpublished data from a study including subjects aged 1-17 years indicated an increase in asthma or reactive airways disease in the subset aged 12-59 months. Because of this, LAIV is not approved for use among children aged <60 months.
Adults. Among adults, runny nose or nasal congestion (28%-78%), headache (16%-44%), and sore throat (15%-27%) have been reported more often among vaccine recipients than placebo recipients (100,232,233). In one clinical trial (100) among a subset of healthy adults aged 18-49 years, signs and symptoms reported more frequently among LAIV recipients (n = 2,548) than placebo recipients (n = 1,290) within 7 days after each dose included cough (13.9% versus 10.8%); runny nose (44.5% versus 27.1%); sore throat (27.8% versus 17.1%); chills (8.6% versus 6.0%); and tiredness/weakness (25.7% versus 21.6%).
Safety Among Groups at High Risk from Influenza-Related Morbidity. Until additional data are acquired and analyzed, persons at high risk for experiencing complications from influenza infection (e.g., immunocompromised patients; patients with asthma, cystic fibrosis, or chronic obstructive pulmonary disease; or persons aged >65 years) should not be vaccinated with LAIV. Protection from influenza among these groups should be accomplished by using inactivated influenza vaccine.
Serious Adverse Events. Serious adverse events among healthy children aged 5-17 years or healthy adults aged 18-49 years occurred at a rate of <1%. Surveillance should continue for adverse events that might not have been detected in previous studies. A preliminary review of reports to VAERS after distribution of approximately 800,000 doses during the 2003-04 influenza season did not reveal any substantial new safety concerns (234). Health-care professionals should promptly report all clinically significant adverse events after LAIV administration to VAERS, as recommended for inactivated influenza vaccine.
# Recommended Vaccines for Different Age Groups
When vaccinating children aged 6 months-3 years, health-care providers should use inactivated influenza vaccine that has been approved by FDA for this age group. Inactivated influenza vaccine from Sanofi Pasteur, Inc. (Fluzone split-virus), is approved for use among persons aged >6 months.
Inactivated influenza vaccine from Chiron (Fluvirin) is labeled in the United States for use among persons aged >4 years because data to demonstrate efficacy among younger persons have not been provided to FDA. Live, attenuated influenza vaccine from MedImmune (FluMist) is approved for use by healthy persons aged 5-49 years (Table 5).
# Timing of Annual Influenza Vaccination
The annual supply of influenza vaccine and the timing of its distribution cannot be guaranteed in any year. Information regarding the supply of 2005-06 vaccine might not be available until late summer or early fall 2005. To allow vaccine providers to plan for the upcoming vaccination season, taking into account the yearly possibility of vaccine delays or shortages and the need to ensure vaccination of persons at high risk and their contacts, ACIP recommends that inactivated influenza vaccine campaigns conducted in October focus primarily on persons at increased risk for influenza complications and their contacts, including health-care workers. Campaigns conducted in November and later should continue to vaccinate persons at high risk and their contacts, but also vaccinate other persons who wish to decrease their risk for influenza infection. Vaccination for all groups should continue into December and beyond. CDC and other public health agencies will assess the vaccine supply on a continuing basis throughout the manufacturing period and will make recommendations preceding the 2005-06 influenza season regarding the need for tiered timing of inactivated influenza vaccination of different risk groups. Because LAIV is approved for use in healthy persons aged 5-49 years, its use has not been subject to tiered timing.
# Vaccination Before October
To avoid missed opportunities for vaccination of persons at high risk for serious complications, such persons should be offered vaccine beginning in September during routine health-care visits or during hospitalizations, if vaccine is available. In facilities housing older persons (e.g., nursing homes), vaccination before October typically should be avoided because antibody levels in such persons can begin to decline within a limited time after vaccination (235). In addition, children aged <9 years who have not been previously vaccinated and who need 2 doses before the start of the influenza season can receive their first dose in September so that both doses of the most up-to-date vaccine can be administered before the onset of influenza activity. For previously unvaccinated children, 2 doses are needed to provide optimal protection against influenza.
# Vaccination in October and November
The optimal time to vaccinate is usually during October-November. ACIP recommends that vaccine providers focus their vaccination efforts in October and earlier primarily on persons aged >50 years, persons aged <50 years at increased risk for influenza-related complications (including children aged 6-23 months), household contacts of persons at high risk (including out-of-home caregivers and household contacts of children aged 0-23 months), and health-care workers. Vaccination of children aged <9 years who are receiving vaccine for the first time should also begin in October or earlier because those persons need a booster dose 1 month after the initial dose. Efforts to vaccinate other persons who wish to decrease their risk for influenza infection should begin in November; however, if such persons request vaccination in October, vaccination should not be deferred, unless vaccine supplies dictate otherwise.
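The month-by-month emphasis described above reduces to simple decision logic. The sketch below restates it schematically for a normal (non-shortage) supply year; the group labels are shorthand for the categories named in the text, not an official tiering scheme.

```python
def inactivated_vaccine_focus(month: str) -> str:
    """Schematic restatement of the timing guidance above (normal supply)."""
    priority_groups = ("persons aged >50 years; persons aged <50 years at high "
                       "risk, including children aged 6-23 months; household "
                       "contacts of persons at high risk; health-care workers; "
                       "and children aged <9 years receiving their first dose")
    if month in ("September", "October"):
        return f"Focus primarily on: {priority_groups}"
    if month == "November":
        return f"Continue: {priority_groups}; also begin vaccinating all others who wish it"
    # December and later: keep offering vaccine while supplies last, even
    # after influenza activity has been documented in the community.
    return "Continue vaccinating all groups while vaccine remains available"

print(inactivated_vaccine_focus("October"))
```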
Materials to assist providers in prioritizing early vaccine are available at http://www.cdc.gov/flu/professionals/vaccination/index.htm (see also Travelers in this report).
# Timing of Organized Vaccination Campaigns
Persons and institutions planning substantial organized vaccination campaigns should consider scheduling these events after mid-October because the availability of vaccine in any location cannot be ensured consistently in early fall. Scheduling campaigns after mid-October will minimize the need for cancellations because vaccine is unavailable. Campaigns conducted before November using inactivated vaccine should focus efforts on vaccination of persons aged >50 years, persons aged <50 years at increased risk for influenza-related complications (including children aged 6-23 months and pregnant women), health-care workers, and household contacts of persons at high risk (including children aged 0-23 months) to the extent feasible. Campaigns using LAIV are also optimally conducted in October and November.
# Vaccination in December and Later
After November, many persons who should or want to receive influenza vaccine remain unvaccinated. In addition, substantial amounts of vaccine are often left over at the end of the influenza season. To improve vaccine coverage, influenza vaccine should continue to be offered in December and throughout the influenza season as long as vaccine supplies are available, even after influenza activity has been documented in the community. In the United States, seasonal influenza activity can begin to increase as early as October or November, but influenza activity has not reached peak levels in the majority of recent seasons until late December-early March (Table 6). Therefore, although the timing of influenza activity can vary by region, vaccine administered after November is likely to be beneficial in the majority of influenza seasons. Adults develop peak antibody protection against influenza infection 2 weeks after vaccination (236,237).
# Strategies for Implementing Vaccination Recommendations in Health-Care Settings
Successful vaccination programs combine publicity and education for health-care workers and other potential vaccine recipients, a plan for identifying persons at high risk, use of reminder/recall systems, assessment of practice-level vaccination rates with feedback to staff, and efforts to remove administrative and financial barriers that prevent persons from receiving the vaccine, including use of standing orders programs (19,238). Using standing orders programs is recommended for long-term-care facilities (e.g., nursing homes and skilled nursing facilities), hospitals, and home health agencies to ensure the administration of recommended vaccinations for adults (239). Standing orders programs for both influenza and pneumococcal vaccination should be conducted under the supervision of a licensed practitioner according to a physician-approved facility or agency policy by health-care workers trained to screen patients for contraindications to vaccination, administer vaccine, and monitor for adverse events. The Centers for Medicare and Medicaid Services (CMS) has removed the physician signature requirement for the administration of influenza and pneumococcal vaccines to Medicare and Medicaid patients in hospitals, long-term-care facilities, and home health agencies (239).
To the extent allowed by local and state law, these facilities and agencies may implement standing orders for influenza and pneumococcal vaccination of Medicare- and Medicaid-eligible patients. Other settings (e.g., outpatient facilities, managed care organizations, assisted living facilities, correctional facilities, pharmacies, and adult workplaces) are encouraged to introduce standing orders programs as well (20). In addition, physician reminders (e.g., flagging charts) and patient reminders are recommended strategies for increasing rates of influenza vaccination. Persons for whom influenza vaccine is recommended can be identified and vaccinated in the settings described in the following sections.
# Outpatient Facilities Providing Ongoing Care
Staff in facilities providing ongoing medical care (e.g., physicians' offices, public health clinics, employee health clinics, hemodialysis centers, hospital specialty-care clinics, and outpatient rehabilitation programs) should identify and label the medical records of patients who should receive vaccination. Vaccine should be offered during visits beginning in September and throughout the influenza season. The offer of vaccination and its receipt or refusal should be documented in the medical record. Patients for whom vaccination is recommended and who do not have regularly scheduled visits during the fall should be reminded by mail, telephone, or other means of the need for vaccination.
# Outpatient Facilities Providing Episodic or Acute Care
Beginning each September, acute health-care facilities (e.g., emergency departments and walk-in clinics) should offer vaccinations to persons for whom vaccination is recommended or provide written information regarding why, where, and how to obtain the vaccine. This written information should be available in languages appropriate for the populations served by the facility.
# Nursing Homes and Other Residential Long-Term-Care Facilities
During October and November each year, vaccination should be routinely provided to all residents of chronic-care facilities with the concurrence of attending physicians. Consent for vaccination should be obtained from the resident or a family member at the time of admission to the facility or anytime afterwards. All residents should be vaccinated at one time, preceding the influenza season. Residents admitted through March after completion of the vaccination program at the facility should be vaccinated at the time of admission.
# Acute-Care Hospitals
Persons of all ages (including children) with high-risk conditions and persons aged >50 years who are hospitalized at any time during September-March should be offered and strongly encouraged to receive influenza vaccine before they are discharged. In one study, 39%-46% of adult patients hospitalized during the winter with influenza-related diagnoses had been hospitalized during the preceding autumn (240). Thus, the hospital serves as a setting in which persons at increased risk for subsequent hospitalization can be identified and vaccinated. However, vaccination of persons at high risk during or after their hospitalizations is often not done. In a study of hospitalized Medicare patients, only 31.6% were vaccinated before admission, 1.9% during admission, and 10.6% after admission (241). Using standing orders in hospitals increases vaccination rates among hospitalized persons (242).
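The pre-discharge opportunity described above lends itself to a simple flagging rule. The following sketch is an illustrative restatement with hypothetical field names; the high-risk-condition check is deliberately left abstract, and the age cutoff follows this report's ">50 years" convention for denoting 50 years and older.

```python
from datetime import date

def flag_for_predischarge_vaccination(birth_date: date,
                                      discharge_date: date,
                                      high_risk_condition: bool,
                                      vaccinated_this_season: bool) -> bool:
    """Flag a hospitalized patient for influenza vaccination before discharge
    during the September-March window described above (illustration only)."""
    if vaccinated_this_season:
        return False
    in_window = discharge_date.month >= 9 or discharge_date.month <= 3
    age_years = (discharge_date - birth_date).days // 365
    return in_window and (age_years >= 50 or high_risk_condition)

# Example: an unvaccinated 72-year-old discharged in January is flagged.
print(flag_for_predischarge_vaccination(date(1933, 5, 1), date(2006, 1, 10),
                                        high_risk_condition=False,
                                        vaccinated_this_season=False))
```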
# Visiting Nurses and Others Providing Home Care to Persons at High Risk
Beginning in September, nursing-care plans should identify patients for whom vaccination is recommended, and vaccine should be administered in the home, if necessary. Caregivers and other persons in the household (including children) should be referred for vaccination.
# Other Facilities Providing Services to Persons Aged >50 Years
Beginning in October, such facilities as assisted living housing, retirement communities, and recreation centers should offer unvaccinated residents and attendees vaccination on-site before the influenza season. Staff education should emphasize the need for influenza vaccine.
# Health-Care Workers
Beginning in October each year, health-care facilities should offer influenza vaccinations to all workers, including night and weekend staff. Particular emphasis should be placed on providing vaccinations to persons who care for members of groups at high risk. Efforts should be made to educate health-care workers regarding the benefits of vaccination and the potential health consequences of influenza illness for themselves, their family members, and their patients. All health-care workers should be provided convenient access to influenza vaccine at the work site, free of charge, as part of employee health programs (127,137).
# Influenza Vaccine Supply
Influenza vaccine distribution delays or vaccine supply shortages have occurred in the United States in three of the last five influenza seasons. Influenza vaccine delivery delays or vaccine shortages remain possible, in part because of the time constraints inherent in manufacturing the vaccine, given the annual updating of the influenza vaccine strains. Steps being taken to accommodate possible future delays or vaccine shortages include identification and implementation of ways to expand the influenza vaccine supply and improvement of targeted delivery of vaccine to groups at high risk when delays or shortages are expected.
# Influenza Vaccine Use During Shortages of Inactivated Vaccine
ACIP will publish additional guidance regarding the prioritized (tiered) use of inactivated influenza vaccine to be implemented only during periods when there is a shortage of influenza vaccine. Otherwise, when vaccine is in adequate supply, every effort should be made to promote and use influenza vaccine for all regularly targeted groups and for other persons who wish to reduce their risk for influenza illness. The prioritized (tiered) use of influenza vaccine during inactivated influenza vaccine shortages applies only to use of inactivated vaccine and not to LAIV. When feasible, during shortages of inactivated influenza vaccine, LAIV should be used preferentially for all healthy persons aged 5-49 years (including health-care workers) to increase the availability of inactivated vaccine for groups at high risk.
# Future Directions for Influenza Vaccine Recommendations
ACIP plans to review new vaccination strategies for improving prevention and control of influenza, including the possibility of expanding recommendations for use of influenza vaccines. In addition, strategies for regularly monitoring vaccine effectiveness will be reviewed.
# Recommendations for Using Antiviral Agents for Influenza
Antiviral drugs for influenza are an adjunct to influenza vaccine for controlling and preventing influenza. However, these agents are not a substitute for vaccination.
Four licensed influenza antiviral agents are available in the United States: amantadine, rimantadine, zanamivir, and oseltamivir. Amantadine and rimantadine are chemically related antiviral drugs known as adamantanes with activity against influenza A viruses but not influenza B viruses. Amantadine was approved in 1966 for chemoprophylaxis of influenza A (H2N2) infection and was later approved in 1976 for treatment and chemoprophylaxis of influenza type A virus infections among adults and children aged >1 year. Rimantadine was approved in 1993 for treatment and chemoprophylaxis of influenza A infection among adults and prophylaxis among children. Although rimantadine is approved only for chemoprophylaxis of influenza A infection among children, rimantadine treatment for influenza A among children can be beneficial (243). Zanamivir and oseltamivir are chemically related antiviral drugs known as neuraminidase inhibitors that have activity against both influenza A and B viruses. Both zanamivir and oseltamivir were approved in 1999 for treating uncomplicated influenza infections. Zanamivir is approved for treating persons aged >7 years, and oseltamivir is approved for treatment of persons aged >1 year. In 2000, oseltamivir was approved for chemoprophylaxis of influenza among persons aged >13 years. The four drugs differ in pharmacokinetics, side effects, routes of administration, approved age groups, dosages, and costs. An overview of the indications, use, administration, and known primary side effects of these medications is presented in the following sections. Information contained in this report might not represent FDA approval or approved labeling for the antiviral agents described. Package inserts should be consulted for additional information.
# Role of Laboratory Diagnosis
Appropriate treatment of patients with respiratory illness depends on accurate and timely diagnosis. Early diagnosis of influenza can reduce the inappropriate use of antibiotics and provide the option of using antiviral therapy. However, because certain bacterial infections can produce symptoms similar to influenza, bacterial infections should be considered and appropriately treated, if suspected. In addition, bacterial infections can occur as a complication of influenza. Influenza surveillance information and diagnostic testing can aid clinical judgment and help guide treatment decisions. The accuracy of clinical diagnosis of influenza on the basis of symptoms alone is limited because symptoms from illness caused by other pathogens can overlap considerably with influenza (30,34,35). Influenza surveillance by state and local health departments and CDC can provide information regarding the presence of influenza viruses in the community. Surveillance can also identify the predominant circulating types, influenza A subtypes, and strains of influenza. Diagnostic tests available for influenza include viral culture, serology, rapid antigen testing, polymerase chain reaction (PCR), and immunofluorescence assays (25). Sensitivity and specificity of any test for influenza might vary by the laboratory that performs the test, the type of test used, and the type of specimen tested. Among respiratory specimens for viral isolation or rapid detection, nasopharyngeal specimens are typically more effective than throat swab specimens (244). As with any diagnostic test, results should be evaluated in the context of other clinical and epidemiologic information available to health-care providers.
Commercial rapid diagnostic tests are available that can detect influenza viruses within 30 minutes (25,245). Some tests are approved for use in any outpatient setting, whereas others must be used in a moderately complex clinical laboratory. These rapid tests differ in the types of influenza viruses they can detect and whether they can distinguish between influenza types. Different tests can detect 1) only influenza A viruses; 2) both influenza A and B viruses, but not distinguish between the two types; or 3) both influenza A and B viruses and distinguish between the two. None of the tests provide any information about influenza A subtypes. The types of specimens acceptable for use (i.e., throat, nasopharyngeal, or nasal aspirates, swabs, or washes) also vary by test. The specificity and, in particular, the sensitivity of rapid tests are lower than for viral culture and vary by test (246,247). Because of the lower sensitivity of the rapid tests and the resulting possibility of false-negative results, physicians should consider confirming negative rapid test results with viral culture or other means, especially during periods of peak community influenza activity. In contrast, false-positive rapid test results are less likely but can occur during periods of low influenza activity. Therefore, when interpreting results of a rapid influenza test, physicians should consider the positive and negative predictive values of the test in the context of the level of influenza activity in their community. Package inserts and the laboratory performing the test should be consulted for more details regarding use of rapid diagnostic tests. Additional information concerning diagnostic testing is available at http://www.cdc.gov/flu/professionals/labdiagnosis.htm.
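The dependence of predictive values on community influenza activity can be made concrete with a worked example. In the sketch below, the sensitivity, specificity, and prevalence figures are illustrative assumptions, not values from this report; they show why a positive result is less trustworthy when little influenza is circulating and why negative results merit confirmation at the peak of the season.

```python
# Positive and negative predictive values (PPV, NPV) of a rapid influenza
# test at two levels of community activity. All numbers are illustrative
# assumptions, not figures from this report.
def predictive_values(sens: float, spec: float, prevalence: float):
    ppv = sens * prevalence / (sens * prevalence + (1 - spec) * (1 - prevalence))
    npv = spec * (1 - prevalence) / (spec * (1 - prevalence) + (1 - sens) * prevalence)
    return ppv, npv

for prev in (0.02, 0.30):  # assumed low activity vs. peak season
    ppv, npv = predictive_values(sens=0.70, spec=0.95, prevalence=prev)
    print(f"prevalence {prev:.0%}: PPV {ppv:.0%}, NPV {npv:.0%}")
# With these assumptions, PPV is ~22% at 2% prevalence but ~86% at 30%,
# while NPV falls from ~99% to ~88%, which is why negative results merit
# culture confirmation during periods of peak influenza activity.
```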
Despite the availability of rapid diagnostic tests, collecting clinical specimens for viral culture is critical, because only culture isolates can provide specific information regarding circulating strains and subtypes of influenza viruses. This information is needed to compare current circulating influenza strains with vaccine strains, to guide decisions regarding influenza treatment and chemoprophylaxis, and to formulate vaccine for the coming year. Virus isolates also are needed to monitor the emergence of antiviral resistance and the emergence of novel influenza A subtypes that might pose a pandemic threat.
# Indications for Use
# Treatment
When administered within 2 days of illness onset to otherwise healthy adults, amantadine and rimantadine can reduce the duration of uncomplicated influenza A illness, and zanamivir and oseltamivir can reduce the duration of uncomplicated influenza A and B illness, by approximately 1 day compared with placebo (75,248-265). More clinical data are available concerning the efficacy of zanamivir and oseltamivir for treatment of influenza A infection than for treatment of influenza B infection (253,266-281). However, in vitro data and studies of treatment among mice and ferrets (282-289), in addition to clinical studies, have documented that zanamivir and oseltamivir have activity against influenza B viruses (254,258-260,290,291). Data are limited regarding the effectiveness of the four antiviral agents in preventing serious influenza-related complications (e.g., bacterial or viral pneumonia or exacerbation of chronic diseases). Evidence for the effectiveness of these four antiviral drugs is principally based on studies of patients with uncomplicated influenza (292). Data are limited and inconclusive concerning the effectiveness of amantadine, rimantadine, zanamivir, and oseltamivir for treatment of influenza among persons at high risk for serious complications of influenza (28,248,250,251,253,254,261,266-270). One study assessing oseltamivir treatment primarily among adults reported a reduction in complications necessitating antibiotic therapy, compared with placebo (271). Fewer studies of the efficacy of influenza antivirals have been conducted among pediatric populations (248,251,257,258,267,272,273). One study of oseltamivir treatment documented a decreased incidence of otitis media among children (258). Inadequate data exist regarding the safety and efficacy of any of the influenza antiviral drugs for use among children aged <1 year (247). To reduce the emergence of antiviral drug-resistant viruses, amantadine or rimantadine therapy for persons with influenza A illness should be discontinued as soon as clinically warranted, typically after 3-5 days of treatment or within 24-48 hours after the disappearance of signs and symptoms. The recommended duration of treatment with either zanamivir or oseltamivir is 5 days.
# Chemoprophylaxis
Chemoprophylactic drugs are not a substitute for vaccination, although they are critical adjuncts in preventing and controlling influenza. Both amantadine and rimantadine are indicated for chemoprophylaxis of influenza A infection, but not influenza B. Both drugs are approximately 60%-90% effective in preventing illness from influenza A infection (75,248,267). When used as prophylaxis, these antiviral agents can prevent illness while permitting subclinical infection and development of protective antibody against circulating influenza viruses. Therefore, certain persons who take these drugs will develop protective immune responses to circulating influenza viruses. Amantadine and rimantadine do not interfere with the antibody response to the vaccine (248). Both drugs have been studied extensively among nursing home populations as a component of influenza outbreak-control programs, which can limit the spread of influenza within chronic-care institutions (248,266,274-276). Among the neuraminidase inhibitor antivirals zanamivir and oseltamivir, only oseltamivir has been approved for prophylaxis, but community studies of healthy adults indicate that both drugs are similarly effective in preventing febrile, laboratory-confirmed influenza illness (efficacy: zanamivir, 84%; oseltamivir, 82%) (253,277,293). Both antiviral agents have also been reported to prevent influenza illness among persons administered chemoprophylaxis after a household member had influenza diagnosed (278,290,293). Experience with prophylactic use of these agents in institutional settings or among patients with chronic medical conditions is limited in comparison with the adamantanes (260,269,270,279-281). One 6-week study of oseltamivir prophylaxis among nursing home residents reported a 92% reduction in influenza illness (260,294). Use of zanamivir has not been reported to impair the immunologic response to influenza vaccine (259,295). Data are not available regarding the efficacy of any of the four antiviral agents in preventing influenza among severely immunocompromised persons.
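The efficacy estimates above translate directly into expected cases averted, as the following worked sketch shows. The cohort size and attack rate are illustrative assumptions; the efficacy values are the community-study estimates quoted in the text.

```python
# Expected influenza cases with and without chemoprophylaxis.
cohort = 1000        # assumed number of persons offered prophylaxis
attack_rate = 0.10   # assumed attack rate without prophylaxis (illustrative)

for drug, efficacy in (("zanamivir", 0.84), ("oseltamivir", 0.82)):
    baseline = cohort * attack_rate
    with_prophylaxis = baseline * (1 - efficacy)
    print(f"{drug}: {baseline:.0f} expected cases -> {with_prophylaxis:.0f} "
          f"with prophylaxis ({baseline - with_prophylaxis:.0f} averted)")
```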
When determining the timing and duration for administering influenza antiviral medications for prophylaxis, factors related to cost, compliance, and potential side effects should be considered. To be maximally effective as prophylaxis, the drug must be taken each day for the duration of influenza activity in the community. However, one study of amantadine or rimantadine prophylaxis reported that, to be most cost-effective, the drugs should be taken only during the period of peak influenza activity in a community (296).
Persons at High Risk Who Are Vaccinated After Influenza Activity Has Begun. Persons at high risk for complications of influenza still can be vaccinated after an outbreak of influenza has begun in a community. However, development of antibodies in adults after vaccination takes approximately 2 weeks (236,237). When influenza vaccine is administered while influenza viruses are circulating, chemoprophylaxis should be considered for persons at high risk during the time from vaccination until immunity has developed. Children aged <9 years who receive influenza vaccine for the first time can require 6 weeks of prophylaxis (i.e., prophylaxis for 4 weeks after the first dose of vaccine and an additional 2 weeks of prophylaxis after the second dose).
Persons Who Provide Care to Those at High Risk. To reduce the spread of virus to persons at high risk during community or institutional outbreaks, chemoprophylaxis during peak influenza activity can be considered for unvaccinated persons who have frequent contact with persons at high risk. Persons with frequent contact include employees of hospitals, clinics, and chronic-care facilities, household members, visiting nurses, and volunteer workers. If an outbreak is caused by a variant strain of influenza that might not be controlled by the vaccine, chemoprophylaxis should be considered for all such persons, regardless of their vaccination status.
Persons Who Have Immune Deficiencies. Chemoprophylaxis can be considered for persons at high risk who are expected to have an inadequate antibody response to influenza vaccine. This category includes persons infected with HIV, chiefly those with advanced HIV disease. No published data are available concerning possible efficacy of chemoprophylaxis among persons with HIV infection or interactions with other drugs used to manage HIV infection. Such patients should be monitored closely if chemoprophylaxis is administered.
Other Persons. Chemoprophylaxis throughout the influenza season or during peak influenza activity might be appropriate for persons at high risk who should not be vaccinated. Chemoprophylaxis can also be offered to persons who wish to avoid influenza illness. Health-care providers and patients should make this decision on an individual basis.
# Control of Influenza Outbreaks in Institutions
Using antiviral drugs for treatment and prophylaxis of influenza is a key component of influenza outbreak control in institutions. In addition to antiviral medications, other outbreak-control measures include instituting droplet precautions and establishing cohorts of patients with confirmed or suspected influenza, re-offering influenza vaccinations to unvaccinated staff and patients, restricting staff movement between wards or buildings, and restricting contact between ill staff or visitors and patients (297-299) (for additional information regarding outbreak control in specific settings, see Additional Information Regarding Influenza Infection Control Among Specific Populations).
The majority of published reports concerning use of antiviral agents to control influenza outbreaks in institutions are based on studies of influenza A outbreaks among nursing home populations in which amantadine or rimantadine was used (248,266,274-276,296). Less information is available concerning use of neuraminidase inhibitors in influenza A or B institutional outbreaks (269,270,281,294,300). When confirmed or suspected outbreaks of influenza occur in institutions that house persons at high risk, chemoprophylaxis should be started as early as possible to reduce the spread of the virus. In these situations, having preapproved orders from physicians or plans to obtain orders for antiviral medications on short notice can substantially expedite administration of antiviral medications. When outbreaks occur in institutions, chemoprophylaxis should be administered to all residents, regardless of whether they received influenza vaccinations during the previous fall, and should continue for a minimum of 2 weeks. If surveillance indicates that new cases continue to occur, chemoprophylaxis should be continued until approximately 1 week after the end of the outbreak. The dosage for each resident should be determined individually. Chemoprophylaxis also can be offered to unvaccinated staff who provide care to persons at high risk. Prophylaxis should be considered for all employees, regardless of their vaccination status, if the outbreak is caused by a variant strain of influenza that is not well-matched by the vaccine. In addition to nursing homes, chemoprophylaxis also can be considered for controlling influenza outbreaks in other closed or semiclosed settings (e.g., dormitories or other settings in which persons live in close proximity). For example, chemoprophylaxis with rimantadine has been used successfully to control an influenza A outbreak aboard a large cruise ship (177). To limit the potential transmission of drug-resistant virus during outbreaks in institutions, whether in chronic- or acute-care settings or other closed settings, measures should be taken to reduce contact as much as possible between persons taking antiviral drugs for treatment and other persons, including those taking chemoprophylaxis (see Antiviral Drug-Resistant Strains of Influenza).
# Dosage
Dosage recommendations vary by age group and medical conditions (Table 7).
# Children
Amantadine. Use of amantadine among children aged <1 year has not been adequately evaluated. The FDA-approved dosage for children aged 1-9 years for treatment and prophylaxis is 4.4-8.8 mg/kg body weight/day, not to exceed 150 mg/day. Although further studies are needed to determine the optimal dosage for children aged 1-9 years, physicians should consider prescribing only 5 mg/kg body weight/day (not to exceed 150 mg/day) to reduce the risk for toxicity. The approved dosage for children aged >10 years is 200 mg/day (100 mg twice a day); however, for children weighing <40 kg, prescribing 5 mg/kg body weight/day, regardless of age, is advisable (268).
Rimantadine. Rimantadine is approved for prophylaxis among children aged >1 year and for treatment and prophylaxis among adults. Although rimantadine is approved only for prophylaxis of infection among children, certain specialists in the management of influenza consider it appropriate for treatment among children (243). Use of rimantadine among children aged <1 year has not been adequately evaluated.
Rimantadine should be administered in 1 or 2 divided doses at a dosage of 5 mg/kg body weight/day, not to exceed 150 mg/day, for children aged 1-9 years. The approved dosage for children aged >10 years is 200 mg/day (100 mg twice a day); however, for children weighing <40 kg, prescribing 5 mg/kg body weight/day, regardless of age, is recommended (301).
Zanamivir. Zanamivir is approved for treatment among children aged >7 years. The recommended dosage of zanamivir for treatment of influenza is two inhalations (one 5-mg blister per inhalation for a total dose of 10 mg) twice daily (approximately 12 hours apart) (259).
Oseltamivir. Oseltamivir is approved for treatment among persons aged >1 year and for chemoprophylaxis among persons aged >13 years. Recommended treatment dosages for children vary by the weight of the child: the dosage recommendation for children who weigh <15 kg is 30 mg twice a day; for children weighing >15-23 kg, the dosage is 45 mg twice a day; for those weighing >23-40 kg, the dosage is 60 mg twice a day; and for children weighing >40 kg, the dosage is 75 mg twice a day (243). The treatment dosage for persons aged >13 years is 75 mg twice daily. For persons aged >13 years, the recommended dose for prophylaxis is 75 mg once a day (260).
# Persons Aged >65 Years
Amantadine. The daily dosage of amantadine for persons aged >65 years should not exceed 100 mg for prophylaxis or treatment, because renal function declines with increasing age. For certain older persons, the dose should be further reduced.
Rimantadine. Among older persons, the incidence and severity of central nervous system (CNS) side effects are substantially lower among those taking rimantadine at a dosage of 100 mg/day than among those taking amantadine at dosages adjusted for estimated renal clearance (302). However, chronically ill older persons have had a higher incidence of CNS and gastrointestinal symptoms and serum concentrations 2-4 times higher than among healthy, younger persons when rimantadine has been administered at a dosage of 200 mg/day (248). For prophylaxis among persons aged >65 years, the recommended dosage is 100 mg/day. For treatment of older persons in the community, a reduction in dosage to 100 mg/day should be considered if they experience side effects when taking a dosage of 200 mg/day. For treatment of older nursing home residents, the dosage of rimantadine should be reduced to 100 mg/day (301).
Zanamivir and Oseltamivir. No reduction in dosage is recommended on the basis of age alone.
# Persons with Impaired Renal Function
Amantadine. A reduction in dosage is recommended for patients with creatinine clearance <50 mL/min.
Guidelines for amantadine dosage on the basis of creatinine clearance are located in the package insert. Because recommended dosages on the basis of creatinine clearance might provide only an approximation of the optimal dose for a given patient, such persons should be observed carefully for adverse reactions. If necessary, further reduction in the dose or discontinuation of the drug might be indicated because of side effects. Hemodialysis contributes minimally to amantadine clearance (303,304).
Rimantadine. A reduction in dosage to 100 mg/day is recommended for persons with creatinine clearance <10 mL/min. Because of the potential for accumulation of rimantadine and its metabolites, patients with any degree of renal insufficiency, including older persons, should be monitored for adverse effects, and either the dosage should be reduced or the drug should be discontinued, if necessary. Hemodialysis contributes minimally to drug clearance (305).
Zanamivir. Limited data are available regarding the safety and efficacy of zanamivir for patients with impaired renal function. Among patients with renal failure who were administered a single intravenous dose of zanamivir, decreases in renal clearance, increases in half-life, and increased systemic exposure to zanamivir were observed (259,306). However, a limited number of healthy volunteers who were administered high doses of intravenous zanamivir tolerated systemic levels of zanamivir that were substantially higher than those resulting from administration of zanamivir by oral inhalation at the recommended dose (307,308). On the basis of these considerations, the manufacturer recommends no dose adjustment for inhaled zanamivir for a 5-day course of treatment for patients with either mild-to-moderate or severe impairment in renal function (259).
Oseltamivir. Serum concentrations of oseltamivir carboxylate (GS4071), the active metabolite of oseltamivir, increase with declining renal function (260,309). For patients with creatinine clearance of 10-30 mL/min (260), a reduction of the treatment dosage to 75 mg once daily and of the prophylaxis dosage to 75 mg every other day is recommended. No treatment or prophylaxis dosing recommendations are available for patients undergoing routine renal dialysis treatment.
# Persons with Liver Disease
Amantadine. No increase in adverse reactions to amantadine has been observed among persons with liver disease. Rare instances of reversible elevation of liver enzymes among patients receiving amantadine have been reported, although a specific relation between the drug and such changes has not been established (310).
Rimantadine. A reduction in dosage to 100 mg/day is recommended for persons with severe hepatic dysfunction.
Zanamivir and Oseltamivir. Neither of these medications has been studied among persons with hepatic dysfunction.
# Persons with Seizure Disorders
Amantadine. An increased incidence of seizures has been reported among patients with a history of seizure disorders who have received amantadine (311). Patients with seizure disorders should be observed closely for possible increased seizure activity when taking amantadine.
Rimantadine. Seizures (or seizure-like activity) have been reported among persons with a history of seizures who were not receiving anticonvulsant medication while taking rimantadine (312). The extent to which rimantadine might increase the incidence of seizures among persons with seizure disorders has not been adequately evaluated.
Zanamivir and Oseltamivir.
Seizure events have been reported during postmarketing use of zanamivir and oseltamivir, although no epidemiologic studies have reported any increased risk for seizures with either zanamivir or oseltamivir use.
# Route
Amantadine, rimantadine, and oseltamivir are administered orally. Amantadine and rimantadine are available in tablet or syrup form, and oseltamivir is available in capsule or oral suspension form. Zanamivir is available as a dry powder that is self-administered via oral inhalation by using a plastic device included in the package with the medication. Patients will benefit from instruction and demonstration of correct use of this device.
# Pharmacokinetics
# Amantadine
Approximately 90% of amantadine is excreted unchanged in the urine by glomerular filtration and tubular secretion (274,313-316). Thus, renal clearance of amantadine is reduced substantially among persons with renal insufficiency, and dosages might need to be decreased (see Dosage) (Table 7).
# Rimantadine
Approximately 75% of rimantadine is metabolized by the liver (267). The safety and pharmacokinetics of rimantadine among persons with liver disease have been evaluated only after single-dose administration (267,317). In a study of persons with chronic liver disease (the majority with stabilized cirrhosis), no alterations in liver function were observed after a single dose. However, for persons with severe liver dysfunction, the apparent clearance of rimantadine was 50% lower than that reported for persons without liver disease (302). Rimantadine and its metabolites are excreted by the kidneys. The safety and pharmacokinetics of rimantadine among patients with renal insufficiency have been evaluated only after single-dose administration (267,305). Further studies are needed to determine multiple-dose pharmacokinetics and the most appropriate dosages for patients with renal insufficiency. In a single-dose study of patients with anuric renal failure, the apparent clearance of rimantadine was approximately 40% lower, and the elimination half-life was approximately 1.6-fold greater, than among healthy persons of the same age (305). Hemodialysis did not contribute to drug clearance. In studies of persons with less severe renal disease, drug clearance was also reduced, and plasma concentrations were higher than those among control patients without renal disease who were the same weight, age, and sex (301,317).
# Zanamivir
In studies of healthy volunteers, approximately 7%-21% of the orally inhaled zanamivir dose reached the lungs, and 70%-87% was deposited in the oropharynx (259,318). Approximately 4%-17% of the total amount of orally inhaled zanamivir is systemically absorbed. Systemically absorbed zanamivir has a half-life of 2.5-5.1 hours and is excreted unchanged in the urine. Unabsorbed drug is excreted in the feces (259,308).
# Oseltamivir
Approximately 80% of orally administered oseltamivir is absorbed systemically (309). Absorbed oseltamivir is metabolized to oseltamivir carboxylate, the active neuraminidase inhibitor, primarily by hepatic esterases. Oseltamivir carboxylate has a half-life of 6-10 hours and is excreted in the urine by glomerular filtration and tubular secretion via the anionic pathway (260,319). Unmetabolized oseltamivir also is excreted in the urine by glomerular filtration and tubular secretion (320).
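Because these pharmacokinetic differences drive the renal adjustments described under Dosage, a compact restatement may be useful. The sketch below maps creatinine clearance (CrCl) to the adjustments quoted earlier in this section; it is a summary illustration, not dosing software, and the package inserts remain the authoritative source.

```python
def renal_adjustment(drug: str, crcl_ml_min: float) -> str:
    """Summary of the CrCl thresholds quoted above (illustration only)."""
    if drug == "amantadine":
        # Reduced dosing is recommended below 50 mL/min; see package insert.
        if crcl_ml_min < 50:
            return "reduce dosage per package insert; observe for adverse reactions"
        return "standard dosage"
    if drug == "rimantadine":
        # 100 mg/day below 10 mL/min; monitor with any renal insufficiency.
        if crcl_ml_min < 10:
            return "reduce to 100 mg/day; monitor for adverse effects"
        return "standard dosage; monitor if any renal insufficiency"
    if drug == "zanamivir":
        return "no dose adjustment for a 5-day inhaled treatment course"
    if drug == "oseltamivir":
        if 10 <= crcl_ml_min <= 30:
            return "treatment 75 mg once daily; prophylaxis 75 mg every other day"
        if crcl_ml_min < 10:
            # The text gives no recommendation for routine dialysis patients.
            return "no dosing recommendation available"
        return "standard dosage"
    raise ValueError(f"unknown drug: {drug}")

print(renal_adjustment("oseltamivir", 25.0))
```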
# Side Effects and Adverse Reactions
When considering use of influenza antiviral medications (i.e., choice of antiviral drug, dosage, and duration of therapy), clinicians must consider the patient's age, weight, and renal function (Table 7); presence of other medical conditions; indications for use (i.e., prophylaxis or therapy); and the potential for interaction with other medications.
# Amantadine and Rimantadine
Both amantadine and rimantadine can cause CNS and gastrointestinal side effects when administered to young, healthy adults at equivalent dosages of 200 mg/day. However, the incidence of CNS side effects (e.g., nervousness, anxiety, insomnia, difficulty concentrating, and lightheadedness) is higher among persons taking amantadine than among those taking rimantadine (320). In a 6-week study of prophylaxis among healthy adults, approximately 6% of participants taking rimantadine at a dosage of 200 mg/day experienced one or more CNS symptoms, compared with approximately 13% of those taking the same dosage of amantadine and 4% of those taking placebo (320). A study of older persons also demonstrated fewer CNS side effects associated with rimantadine compared with amantadine (302). Gastrointestinal side effects (e.g., nausea and anorexia) occur among approximately 1%-3% of persons taking either drug, compared with 1% of persons receiving the placebo (320). Side effects associated with amantadine and rimantadine are usually mild and cease soon after discontinuing the drug. Side effects can diminish or disappear after the first week, despite continued drug ingestion. However, serious side effects have been observed (e.g., marked behavioral changes, delirium, hallucinations, agitation, and seizures) (303,311). These more severe side effects have been associated with high plasma drug concentrations and have been observed most often among persons who have renal insufficiency, seizure disorders, or certain psychiatric disorders and among older persons who have been taking amantadine as prophylaxis at a dosage of 200 mg/day (274). Clinical observations and studies have indicated that lowering the dosage of amantadine among these persons reduces the incidence and severity of such side effects (Table 7). In acute overdosage of amantadine, CNS, renal, respiratory, and cardiac toxicity, including arrhythmias, have been reported (303). Because rimantadine has been marketed for a shorter period than amantadine, its safety among certain patient populations (e.g., chronically ill and older persons) has been evaluated less frequently. Because amantadine has anticholinergic effects and might cause mydriasis, it should not be used among patients with untreated angle-closure glaucoma (303).
# Zanamivir
In a study of zanamivir treatment of ILI among persons with asthma or chronic obstructive pulmonary disease in which study medication was administered after use of a β2-agonist, 13% of patients receiving zanamivir and 14% of patients who received placebo (inhaled powdered lactose vehicle) experienced a >20% decline in forced expiratory volume in 1 second (FEV1) after treatment (259,261). However, in a phase-I study of persons with mild or moderate asthma who did not have ILI, one of 13 patients experienced bronchospasm after administration of zanamivir (259). In addition, during postmarketing surveillance, cases of respiratory function deterioration after inhalation of zanamivir have been reported. Certain patients had underlying airway disease (e.g., asthma or chronic obstructive pulmonary disease).
Because of the risk for serious adverse events and because efficacy has not been demonstrated among this population, zanamivir is not recommended for treatment for patients with underlying airway disease (259). If physicians decide to prescribe zanamivir to patients with underlying chronic respiratory disease after carefully considering potential risks and benefits, the drug should be used with caution under conditions of appropriate monitoring and supportive care, including the availability of short-acting bronchodilators (292). Patients with asthma or chronic obstructive pulmonary disease who use zanamivir are advised to 1) have a fast-acting inhaled bronchodilator available when inhaling zanamivir and 2) stop using zanamivir and contact their physician if they experience difficulty breathing (259). No definitive evidence is available regarding the safety or efficacy of zanamivir for persons with underlying respiratory or cardiac disease or for persons with complications of acute influenza (292). Allergic reactions, including oropharyngeal or facial edema, have also been reported during postmarketing surveillance (259,269). In clinical treatment studies of persons with uncomplicated influenza, the frequencies of adverse events were similar for persons receiving inhaled zanamivir and for those receiving placebo (i.e., inhaled lactose vehicle alone) (249-254,269). The most common adverse events reported by both groups were diarrhea; nausea; sinusitis; nasal signs and symptoms; bronchitis; cough; headache; dizziness; and ear, nose, and throat infections. Each of these symptoms was reported by <5% of persons in the clinical treatment studies combined (259).
# Oseltamivir
Nausea and vomiting were reported more frequently among adults receiving oseltamivir for treatment (nausea without vomiting, approximately 10%; vomiting, approximately 9%) than among persons receiving placebo (nausea without vomiting, approximately 6%; vomiting, approximately 3%) (255,256,260,321). Among children treated with oseltamivir, 14.3% had vomiting, compared with 8.5% of placebo recipients. Overall, 1% of children discontinued the drug secondary to this side effect (258), whereas a limited number of adults enrolled in clinical treatment trials of oseltamivir discontinued treatment because of these symptoms (260). Similar types and rates of adverse events were reported in studies of oseltamivir prophylaxis (260). Nausea and vomiting might be less severe if oseltamivir is taken with food (260,321).
# Use During Pregnancy
No clinical studies have been conducted regarding the safety or efficacy of amantadine, rimantadine, zanamivir, or oseltamivir for pregnant women; only two cases of amantadine use for severe influenza illness during the third trimester have been reported (144,145). However, both amantadine and rimantadine have been demonstrated in animal studies to be teratogenic and embryotoxic when administered at substantially high doses (301,303). Because of the unknown effects of influenza antiviral drugs on pregnant women and their fetuses, these four drugs should be used during pregnancy only if the potential benefit justifies the potential risk to the embryo or fetus (see manufacturers' package inserts) (259,260,301,303).
# Drug Interactions
Careful observation is advised when amantadine is administered concurrently with drugs that affect the CNS, including CNS stimulants.
Concomitant administration of antihistamines or anticholinergic drugs can increase the incidence of adverse CNS reactions (248). No clinically substantial interactions between rimantadine and other drugs have been identified.

Clinical data are limited regarding drug interactions with zanamivir. However, no known drug interactions have been reported, and no clinically critical drug interactions have been predicted on the basis of in vitro data and data from studies using rats (259,322).

Limited clinical data are available regarding drug interactions with oseltamivir. Because oseltamivir and oseltamivir carboxylate are excreted in the urine by glomerular filtration and tubular secretion via the anionic pathway, a potential exists for interaction with other agents excreted by this pathway. For example, coadministration of oseltamivir and probenecid reduced clearance of oseltamivir carboxylate by approximately 50%, with a corresponding approximate twofold increase in plasma levels of oseltamivir carboxylate (260,319).

No published data are available concerning the safety or efficacy of using combinations of any of these four influenza antiviral drugs. For more detailed information concerning potential drug interactions for any of these influenza antiviral drugs, package inserts should be consulted.

# Antiviral Drug-Resistant Strains of Influenza

Amantadine-resistant viruses are cross-resistant to rimantadine and vice versa (323). Drug-resistant viruses can appear in approximately one third of patients when either amantadine or rimantadine is used for therapy (273,323,324). During the course of amantadine or rimantadine therapy, resistant influenza strains can replace susceptible strains within 2-3 days of starting therapy (325,326). Resistant viruses have been isolated from persons who live at home or in an institution where other residents are taking or have recently taken amantadine or rimantadine as therapy (327,328); however, the frequency with which resistant viruses are transmitted and their effect on efforts to control influenza are unknown. Amantadine- and rimantadine-resistant viruses are not more virulent or transmissible than susceptible viruses (329). The screening of epidemic strains of influenza A has rarely detected amantadine- and rimantadine-resistant viruses (325,330,331). Persons who have influenza A infection and who are treated with either amantadine or rimantadine can shed susceptible viruses early in the course of treatment and later shed drug-resistant viruses, including after 5-7 days of therapy (273). Such persons can benefit from therapy even when resistant viruses emerge.

Resistance to zanamivir and oseltamivir can be induced in influenza A and B viruses in vitro (332-339), but induction of resistance usually requires multiple passages in cell culture. By contrast, resistance to amantadine and rimantadine in vitro can be induced with fewer passages in cell culture (340,341). Development of viral resistance to zanamivir and oseltamivir during treatment has been identified but does not appear to be frequent (260,342-345). In one pediatric study, 5.5% of patients treated with oseltamivir had posttreatment isolates that were resistant to neuraminidase inhibitors. One limited study of Japanese children treated with oseltamivir reported a high frequency of resistant viruses (346). However, no transmission of neuraminidase inhibitor-resistant viruses in humans has been documented to date.
No isolates with reduced susceptibility to zanamivir have been reported from clinical trials, although the number of posttreatment isolates tested is limited (347), and the risk for emergence of zanamivir-resistant isolates cannot be quantified (259). Only one clinical isolate with reduced susceptibility to zanamivir, obtained from an immunocompromised child on prolonged therapy, has been reported (343). Available diagnostic tests are not optimal for detecting clinical resistance to the neuraminidase inhibitor antiviral drugs, and additional tests are being developed (347,348). Postmarketing surveillance for neuraminidase inhibitor-resistant influenza viruses is being conducted (349).

# Sources of Information Regarding Influenza and Its Surveillance

Information regarding influenza surveillance, prevention, detection, and control is available at http://www.cdc.gov/flu/weekly/fluactivity.htm. Surveillance information is available through the CDC Voice Information System (influenza update) at 888-232-3228 or the CDC Fax Information Service at 888-232-3299. During October-May, surveillance information is updated at least every other week. In addition, periodic updates regarding influenza are published in the MMWR Weekly Report (http://www.cdc.gov/mmwr). Additional information regarding influenza vaccine can be obtained by calling 800-CDC-INFO (800-232-4636). State and local health departments should be consulted concerning the availability of influenza vaccine, access to vaccination programs, and state or local influenza activity, and for reporting influenza outbreaks and receiving advice concerning outbreak control.

# Additional Information Regarding Influenza Infection Control Among Specific Populations

Each year, ACIP provides updated general information regarding control and prevention of influenza. Other reports related to controlling and preventing influenza among specific populations (e.g., immunocompromised persons, health-care workers, hospitals, and travelers) are also available in the following publications:
• CDC.
Technology and youth seem destined for each other. They are both young, fast paced, and ever changing. In the last 20 years there has been an explosion in new technology. This new technology has been eagerly embraced by young people and has led to expanding knowledge, social networks, and vocabulary that includes instant messaging ("IMing"), blogging, and text messaging.

# Electronic Aggression

Electronic aggression is any type of harassment or bullying that occurs through e-mail, a chat room, instant messaging, a website (including blogs), or text messaging.

New technology has many potential benefits for youth. With the help of new technology, young people can interact with others across the United States and throughout the world on a regular basis. Social networking sites like Facebook and MySpace also allow youth to develop new relationships with others, some of whom they have never even met in person. New technology also provides opportunities to make rewarding social connections for those youth who have difficulty developing friendships in traditional social settings or because of limited contact with same-aged peers. In addition, regular Internet access allows teens and pre-teens to quickly increase their knowledge on a wide variety of topics.

However, the recent explosion in technology does not come without possible risks. Youth can use electronic media to embarrass, harass, or threaten their peers.

# Examples of Electronic Aggression

- Disclosing someone else's personal information in a public area (e.g., website) in order to cause embarrassment.
- Posting rumors or lies about someone in a public area (e.g., discussion board).
- Distributing embarrassing pictures of someone by posting them in a public area (e.g., website) or sending them via e-mail.
- Assuming another person's electronic identity to post or send messages about others with the intent of causing the other person harm.
- Sending mean, embarrassing, or threatening text messages, instant messages, or e-mails.

Explore the Internet. Visit the websites your child frequents, and assess the pros and cons. Remember, most websites and on-line activities are beneficial. They help young people learn new information, interact with others, and connect with people who have similar interests.

Talk to your child. Parents and caregivers often ask children where they are going and who they are going with when they leave the house. You should ask these same questions when your child goes on the Internet. Because children are reluctant to disclose victimization for fear of having their Internet and cellular phone privileges revoked, develop solutions to prevent or address victimization that do not punish the child.

Talk with other parents and caregivers. Talk to other parents and caregivers about how they have discussed technology use with their children. Ask about the rules they have developed and how they stay informed about their child's technology use.

Connect with the school. Parents and caregivers are encouraged to work with their child's school and school district to develop a class for parents and caregivers that educates them about school policies on electronic aggression, recent incidents in the community involving electronic aggression, and resources available to parents and caregivers who have concerns. Work with the school and other partners to develop a collaborative approach to preventing electronic aggression.

Educate yourself. Stay informed about the new devices and websites your child is using.
Technology changes rapidly, and many developers offer information to keep people aware of advances. Continually talk with your child about "where they are going" and explore the technology yourself. Technology is not going away, and forbidding young people to access electronic media may not be a good long-term solution. Together, parents and children can come up with ways to maximize the benefits of technology and decrease its risks.
# Introduction

Section 201 of the Public Health Security and Bioterrorism Preparedness and Response Act of 2002 (P.L. 107-188) (the Act) requires the Secretary of the Department of Health and Human Services (HHS) to, by regulation, provide for "the appropriate safeguard and security requirements for persons possessing, using, or transferring a listed agent or toxin commensurate with the risk such agent or toxin poses to public health and safety (including the risk of use in domestic or international terrorism)." 1 Section 212 of the Act requires the Secretary of the Department of Agriculture (USDA) to, by regulation, provide for "the appropriate safeguard and security requirements for persons possessing, using, or transferring a listed agent or toxin commensurate with the risk such agent or toxin pose to animal and plant health, and animal and plant products (including the risk of use in domestic or international terrorism)." 2

The select agent regulations require a registered entity to develop and implement a written security plan that is (1) sufficient to safeguard the select agent or toxin against unauthorized access, theft, loss, or release; and (2) designed according to a site-specific risk assessment, providing graded protection (see section 11 (Security) of the select agent regulations). 3 The purpose of this document is to assist the entity in developing and implementing its site-specific security plan.

This document is organized by security function/system, with the regulatory citation at the end of each section heading. Additional Tier 1 BSAT-specific requirements are addressed at the end of each section. As used in this document, the word "must" means a regulatory requirement. The use of the word "should" or "consider" indicates a suggested method to meet that requirement based on generally recognized security "best practices." We recognize that implementation is performance based and that an entity may find other ways to meet the regulatory requirement. This document addresses the amendments to the select agent regulations with regard to security, with one exception: entities with Tier 1 BSAT have additional pre-access suitability and ongoing suitability assessment requirements, which are addressed in the "Guidance for Suitability Assessments" available at www.selectagents.gov.

# Key considerations in developing an effective security system:

- It results from collaboration between scientific and security personnel.
- It is built upon well-documented operational processes. Security should reinforce existing processes; in order to do that, existing processes must be defined.
- It accounts for and secures all biological select agents or toxins from creation or acquisition to destruction.
- It complements other plans, such as safety, disaster recovery, and continuity of operations.
- It does not violate any laws, including the Americans with Disabilities Act, OSHA safety standards, and local building and fire codes.
- Personnel are trained so every person understands his or her responsibilities.
- It accounts for the primary and secondary impacts of a threat's action, including impacts beyond the entity.
- It requires reporting of all suspected security incidents and suspicious activities.
- It is reviewed at least annually and updated whenever conditions change.
- It is based on a site-specific risk assessment.
# Considerations when performing a risk assessment and developing a site-specific security plan (Section 11(b))

Entities should consider forming a team of entity subject matter experts (SMEs), supporting SMEs, and stakeholders. The team should include entity professionals who are experts on the potential consequences of a theft, loss, or release of a select agent or toxin and on the daily operations of the entity. Entities are also encouraged to include organizational and governmental members.

Entity personnel should provide:
- Standard Operating Procedures (SOPs), policies, and other organizational controls which can reinforce or be affected by security measures
- Public health consequences of the select agent and toxin
- Operational requirements
- Value of the select agent or toxin work to the organization
- Knowledge of current security systems

Facility and support personnel should provide:
- Facility-wide security measures
- Personnel hiring practices (background checks, reference checks, education verifications)
- Planned upgrades to the facility
- Constraints which affect security (fire code, ordinances, federal laws)

Local, state, and federal law enforcement and security personnel may be able to provide:
- Known threats to the entity
- Assistance with identifying vulnerabilities
- Assistance with designing or vetting the mitigating factors
- Economic and psychological impacts of the select agents or toxins

Once the team is formed, members should be consulted on a regular basis, including during plan development. The team should meet annually as part of the security plan review.

# Entities Registered for Tier 1 BSAT must have the following additional requirements (Section 11(f)(2))

Entities must describe procedures for how the entity's Responsible Official (RO) will coordinate their efforts with the entity's safety and security professionals to ensure security of Tier 1 select agents and toxins and share, as appropriate, relevant information which may affect the security plan.

# Specific considerations with the Site-Specific Risk Assessment and Graded Security (Section 11(b))

The cornerstone of a good security plan is a site-specific risk assessment. It forms the logical basis for the physical and personnel security measures employed to achieve graded security. It should indicate what risks have been identified, and of those, which have been mitigated and which residual risks are acceptable to the entity. It does not necessarily have to account for accidental hazards addressed in a safety plan. "Risk" comes from the interaction of threats/hazards (T), vulnerabilities (V), and consequence (C). There are many ways to capture these interactions, including qualitative, quantitative, and probabilistic approaches; any assessment which captures and relates these interactions is sufficient. Tools are available to assist entities. Below is a discussion of threat, vulnerability, and consequence.

1. Understanding and Assessing Threats. A threat is a person or organization whose actions may cause the theft or release of a select agent or toxin. The threat can be an insider with approved access or an outsider. The threat may target the agent directly (theft), may cause damage to the entity as the result of their action (e.g., animal rights terrorists damaging containment), may act on their own, or may collude with others. Threats can be captured as a "probability of attack." Threats are generally determined in three different ways.
Useful resources when determining threats include:
a. Entities are encouraged to reach out to law enforcement and other experts to determine threats.
b. An expert or group of experts models "threats" in general (Design Basis Threat or DBT). This is most common in federal and state facilities; however, the capability may be present in large entities.
c. Historical data, including statistics on past local events (crimes), terrorist events worldwide, social sciences research into terrorists' behavior, official accounts, and/or terrorists' own writings about motivation and intent.

2. Understanding and Assessing Natural Hazards. For a tool to help determine whether you are in a risk area for natural hazards, see the Incident Response Guide. As with threats, entities should assess the impacts of the hazard on the select agent or toxin as well as on the entity as a whole.

3. Understanding and Assessing Vulnerabilities. Vulnerability is the relative susceptibility of select agents or toxins to a threat or natural hazard. A vulnerability is a weakness through which a threat capability can be applied, resulting in the theft or release of the agent, or through which a natural hazard can impact the select agent or toxin. Vulnerabilities are often captured as the "probability of effectiveness" (PE) of a particular system. There are many ways to determine vulnerability, ranging from "discussion" to "mathematical simulation" depending on the information available. Below are some best practices in conducting vulnerability assessment:

# Communicate the risk:

After the risk assessment is completed, key entity leadership (such as the Principal Investigator (PI), RO, Alternate Responsible Official (ARO), Security Staff, Institutional Biosafety Committee, and Laboratory Management) should determine if the current risk level is acceptable. If the risk level is deemed unacceptable, then the entity is obligated to develop a means to mitigate the risk. Some common risk mitigation measures are given below. It should be noted that any activity involving a select agent or toxin will involve some level of unmitigated risk. The only way to eliminate risk completely would be to not undertake this work.

# Manage the risk: Mitigation measures

If the risk is not acceptable, the entity has multiple paths to mitigate the risks. The entity can:
- Employ additional security measures.
- Change the work with the select agent or toxin.
- Decrease the quantity of toxin on hand, possessing only the amounts necessary for the work.
- Change how the select agent or toxin is stored (e.g., not lyophilized).
- When the toxin is a by-product of a larger process, immediately autoclave the agent or destroy the toxin.
- Document any risks which have not been mitigated and why.

# Document and Update the risk assessment

The entity should document the risk assessment and review it at least yearly or as the threat changes.

# Security Program Development and Management (Section 11)

A security program implements risk management goals. Security program management consists of the plans, policies, people, processes/procedures, and performance assessments that support the security system.

# Plans (Section 11(a) and (b))

Entities are required to develop and implement a written site-specific security plan. A security plan is a documented systematic design for implementing security goals. It is a blueprint for how an entity secures its select agents and toxins. It establishes the performance goals for the system and metrics for performance.
As stated in the select agent regulations (section 11(b)), "The security plan must be designed according to a site-specific risk assessment and must provide graded protection in accordance with the risk of the select agent or toxin, given its intended use." Graded protection is the result of mitigating the hazards (threat and natural) and the vulnerabilities based on the consequences of a select agent or toxin in its current form. Plans also include agreements or arrangements with extra-entity organizations such as local law enforcement.

Reviews, Evaluating Effectiveness: The select agent regulations require that the security plan be reviewed on an annual basis. A security plan should also be reviewed after any security incident, as well as after drills and exercises, and revised as needed (Section 11(h)).

# Entities Registered for Tier 1 BSAT should have the following enhancements:

An effective security program for Tier 1 BSAT should include roles and responsibilities for security management, and possibly the designation of a Security Officer to manage the entity's security. It should also discuss who manages security control measures. This may include:
- How the entity manages access controls (This management may include keys, card keys, access logs, biometrics, and other access control measures for each of the security barriers in the security plan. This may be accomplished by directly controlling the measures or by interacting with a service provider (e.g., a guard company).)
- Designating personnel to manage the entity's security systems, including intrusion detection
- How the intrusion detection alarm code is managed (who has it, when it is changed)
- How the entity tests and manages the configuration of the system
- How the entity responds to an access control or intrusion detection failure (e.g., alarm)
- How the entity screens visitors
- A documented security awareness training program for, at a minimum, all employees listed on the entity's approved registration that includes regular insider threat awareness briefings on how to identify and report suspicious behaviors that occur inside the laboratory or storage area

# Policies

Entities should consider establishing specific policies which support their plan. Security policies, for purposes of this document, are documented strategies, principles, or rules which the entity follows to manage its security risks. They are a clear means of establishing behavioral expectations. They cover the spectrum from directives to standard operating procedures. As part of security program management, the entity should consider formally documenting security policies covering all operational controls (see p. 16: Inventory Control Measures).
- Background checks and other personnel security measures (If practical, these policies should be vetted through the entity's legal and human resources departments. Additional guidance on suitability assessment can be found at www.selectagents.gov.)

# People

Entities should consider who will implement the plan. People are the core of any security system. The security program should define an individual's roles and responsibilities in the system and solicit their input for improvements. An entity should be aware of, and collaborate with, the personnel responsible for and/or impacting security.
This may include:
- Facility key control and/or access control personnel
- Alarm companies
- Campus security personnel
- Security personnel who observe video
- Local law enforcement or other response forces

# Entities Registered for Tier 1 BSAT must have the following enhancements:

Entities registered to possess Tier 1 BSAT must include in their security plan a component requiring all professionals involved in BSAT safety and security at the entity to share relevant information with the RO in order to coordinate their efforts (Section 11(f)(2)). Ideally, the entity's RO, safety, and security professionals should meet on a regular or defined basis. This may be annually in conjunction with the security plan review, after a security incident, when there is a significant entity change that affects security, or in response to a threat.

# Processes and Procedures

Processes and procedures are how people implement the leadership's plan. They are more than standard operating procedures (SOPs) and policies; they are how well the SOPs are implemented, followed, and supervised.

# Physical Security (Sections 11(c)(1) and 11(d)(3))

The security plan must describe how the select agent or toxin is physically secured against unauthorized access. The security plan is performance based and should complement the Incident Response Plan and Biosafety Plan. An effective physical security plan deters, detects, delays, and responds to threats identified by the site-specific risk assessment, creating sufficient time between detection of an attack and the point when the attack would become successful for the response force to arrive. It should include:
- Security barriers which both deter intrusion and deny access (except by approved personnel) to the areas containing select agents and toxins (e.g., perimeter fences, walls, locked doors, security windows, and trained personnel (e.g., security guards, trained laboratorians, or escorts))
- Safety measures and other environmental factors which increase security (e.g., an access or locking system which denies access to the select agent or toxin (i.e., mechanical locks, card key access systems, or biometrics) and tamper-evident devices for select agents and toxins held in long-term storage)
- A balanced approach so that all access points, including windows and emergency exits, are secured at the same level
- A procedure or process to keep the number of nuisance alarms to a minimum

The physical security system must control access to the select agents and toxins. An individual will be deemed to have access at any point in time if the individual has possession of a select agent or toxin (e.g., ability to carry, possess, use, transfer, or manipulate) or the ability to gain possession of a select agent or toxin (section 10(b) of the select agent regulations). Based on the site-specific risk assessment, the entity should control access to the facility and to registered areas within the facility, limiting access to the select agents and toxins to only those individuals with access approvals from the HHS Secretary or Administrator.

# The entity must:

Create an Access Control System (Sections 11(c)(2) and 11(d)(1)): Create a system which limits access to select agents and toxins to those approved by the HHS Secretary or APHIS Administrator.
This should:
- Include provisions to limit unescorted/unrestricted access to the registered areas to those who have been approved by the HHS Secretary or Administrator to have access to select agents and toxins
- Include provisions for the safeguarding of animals and plants exposed to or infected with select agents
- Regularly review and update access logs
- Be modified when access requirements change, including changes in personnel's access requirements during personnel changes
- Remain flexible enough so non-approved personnel can be escorted if needed
- Appendix III addresses access control measures and security risk assessment (SRA) requirements in detail

# Access considerations for autoclaves

Individuals loading an autoclave with select agents or toxins must be approved for access by the HHS Secretary or Administrator and, for a Tier 1 BSAT, must have undergone pre-access suitability assessment and be subject to ongoing suitability assessment. They should check the autoclave to ensure it is loaded properly and comes up to temperature and pressure within guidelines. Once that is complete, the person does not have to remain for the full cycle. Individuals unloading the autoclave may not require additional personnel security measures. At the end of the cycle, the person removing material from the autoclave should verify that the cycle completed within normal parameters. If it has, the person removing the material does not require an SRA or additional suitability assessment. However, if the run did not complete within normal parameters, the personnel security requirements remain: the RO should be notified and the material removed by an individual approved for access by the HHS Secretary or Administrator who, for a Tier 1 BSAT, has undergone pre-access suitability assessment and is subject to ongoing assessment.

# Provide provisions for escorts (Section 11(d)(2)):

The security plan must contain provisions which allow non-approved persons access only when escorted by an approved person. The escort must be dedicated (no other duties during the time he or she is serving as an escort) to observing the escorted person and must understand what to observe for (e.g., accessing select agents and toxins). Non-approved persons are not allowed to put "hands on" or have access to an agent, even if escorted by an approved person.

Record Access (Section 17(a)(4)): An access log must be maintained to record information for all entries, including the name of the individual, the name of the escort (if applicable), and the date and time of entry. If an electronic log is utilized, the database controlling access must be capable of maintaining three years of access information. Entry records must be safeguarded to prevent alteration and be retained for 3 years (Section 17(c) of the select agent regulations).

# Control Access of Support Personnel for Maintenance, Cleaning, and Repair (Section 11(c)(3)):

The security plan must state how cleaning, maintenance, and repairs will be accomplished in areas where select agents and toxins are possessed or used.
In allowing maintenance, cleaning, or repair personnel (whether in-house or contract services) into a registered area, an entity should: 1) use only approved individuals; 2) provide an approved individual as an escort to the non-approved individual; 3) if the non-approved individual will not be escorted, install additional security measures (e.g., an additional lock and key, cipher lock, or tamper alarms interfaced with the facility intrusion detection system) to prohibit access to select agents and toxins by the non-approved individual; or 4) remove the select agent or toxin to a different area that is appropriately registered. Section 17 (Records) of the select agent regulations requires that access logs be in place to record the name and date/time of entry into the registered area, including the name of an escort, if applicable.

# Prevent Sharing Access Credentials (Section 11(d)(6)):

The security plan must state that any person accessing select agents and toxins will not share their unique means of access (such as key cards and passwords) with any other person. This should include how the entity prevents:
- "Piggybacking" or "tailgating" on another approved person's access card
- Sharing of a key card, password, or badge
It should also require personnel to challenge any individual who tailgates or piggybacks through a secured access entry point.

Reporting and Removing Unauthorized or Suspicious Persons (Sections 11(d)(4) and 11(d)(6)): An "unauthorized person" is one who is not approved to have access to select agents and toxins or is not authorized by the entity to be in a particular area or be involved in particular conduct. A "suspicious person" is any individual who has no valid reason to be in or around the areas where select agents and toxins are possessed or used. The security plan must describe the process for identifying, challenging, and removing unauthorized and suspicious persons. It must also require follow-up actions such as reporting the information to the RO, the RO providing an incident report to entity security personnel, and possibly contacting local law enforcement agencies and the Federal Select Agent Program as appropriate. Unauthorized and suspicious persons attempting to gain entry into registered areas without proper credentials should be identified, challenged, and removed immediately, and the RO notified. It is important for an entity to train laboratory personnel on what to do when a suspected unauthorized person attempts to access registered areas. The entity should consider:
- Integrating an access control measure (e.g., card key) into an alarm system which notifies a responder when an unauthorized person attempts to gain access (similar to an IDS, but not involving an actual break-in)
- Having a badge system which clearly identifies who does and does not have access to select agents and toxins, and training on how to remove unauthorized personnel (e.g., procedures for notification of security personnel and/or local law enforcement)

# Address the Loss and Compromise of Access Credentials (Section 11(d)(7)):

The security plan must include the reporting mechanisms for loss or compromise of keys and access cards and how they will be replaced. To be effective, this requires prompt and immediate attention to ensure there is no compromise of security.
The entity must:
- Require immediate notification if a key or access card is lost
- Evaluate whether locking mechanisms need to be replaced if keys are lost
- Describe the procedure for deactivating access for a lost or stolen access card and how entry logs will be checked
- Describe, in cases where a badge system is used, the means of disseminating information concerning the loss of a badge so that all personnel know the badge may have been compromised

Address Procedures for Personnel Changes (Section 11(c)(5)): The security plan must describe the procedures for changing access after personnel changes in order to prevent access by personnel who previously had access to select agents and toxins. This can include:
- Deactivating card key access
- Deactivating email, network, and local machine computer accounts which provide access to information
- Surrendering key cards and badges
- Surrendering keys when people leave or change duties

Separate storage or laboratories that contain select agents and toxins from public areas of a building (Section 11(d)(8)): The storage areas or laboratories that contain select agents and toxins must not be publicly accessible. Public areas are, as the name suggests, places where the general public may congregate or transit in the vicinity of registered spaces.

# Entities Registered for Tier 1 BSAT must have the following enhancements:

Three Barriers (Tier 1 only)

For entities registered for Tier 1 BSAT, the select agent regulations require three barriers (Section 11(f)(4)(iv)). A barrier is a physical structure that is designed to prevent access by unauthorized persons. Cameras, security lighting, and IDS are not considered security barriers because, while they may monitor access, they cannot, by themselves, prevent access. These security barriers must be identified on the entity's registration and discussed in the security plan (Sections 5A and 6A of APHIS/CDC Form 1). Each security barrier must add to the delay in reaching the secured areas where select agents and toxins are used or stored. Most security barriers, in and of themselves, provide additional delay against forced entry; entities should also consider the delay barriers provide against covert entry (e.g., faked badges). All access points, including emergency exits, must be secured. This means that if there is a card key lock on the main door, the emergency exit should be secured to prevent ingress (for example, by having no outside handle). One of the security barriers must be monitored in such a way as to detect circumvention of established entry control measures under all conditions. This may include video cameras, monitoring access control logs from a card key reader, or tamper-evident tape on containers used for long-term storage. The final security barrier must limit access to the select agents and toxins to personnel approved for access by the HHS Secretary or Administrator. Also, per Section 11(f)(4)(i), the entity must ensure access to the Tier 1 BSAT is limited to those who have undergone the entity's pre-access suitability assessment and are subject to ongoing assessment. Access records can be used to show that only approved personnel have accessed the final barrier (and the name of the escort if required). See Appendix IV for additional barrier scenarios.

# Intrusion Detection System (Tier 1 only)

All areas that reasonably afford access to the registered suite/room must be protected by an intrusion detection system (IDS) unless the registered area is physically occupied.
An IDS is a system consisting of sensor devices which trigger an alarm when a security breach occurs, notifying a response force (e.g., local police or a security guard force) that has the capability to respond to the alarm and stop a threat. See Appendix V for a detailed discussion of different types of IDS systems. Personnel monitoring the IDS must be capable of evaluating and interpreting the alarm and alerting the designated security response force or law enforcement. This may be personnel employed by the entity (an alarm or security operations center), a contracted alarm company, local law enforcement, or a military police unit. Depending on the system, it may also be dedicated entity personnel. If the entity contracts monitoring of its IDS to a service provider and local law enforcement respond, the entity should coordinate with local law enforcement to help them understand the importance of the information from the service provider. For example, due to the volume of false alarms, local law enforcement may not treat the alarm as a serious matter. Entities that possess select agents and toxins are encouraged to discuss the consequences of theft of a select agent or toxin with local law enforcement so law enforcement can appreciate the seriousness of the threat. From an entity's viewpoint, it is important that local law enforcement understand that an alarm at an entity housing select agents and toxins should not be regarded as a "typical" property crime.

# Response Time (Tier 1 only)

The entity must determine the response time and describe the provisions or procedures for the response force in the security plan. Response time is the elapsed time, measured under typical conditions, from the time the response force is notified to the time the response force arrives at the entity. A response force is a force capable of interrupting a threat. It may be trained laboratory personnel, unarmed guards, armed guards, and/or local law enforcement, though law enforcement is preferable. A reasonable target for response time is 15 minutes. This is based on Department of Defense adopted standards for protecting high-consequence assets.

Entities have two options to meet the requirement. The first is to determine that the response time for the response force is less than 15 minutes. This can be achieved in multiple ways. For example, an entity can:
- Enter into a formal agreement with local law enforcement
- Discuss response times with local law enforcement
- Work with a dedicated guard force, if one exists (generally, a dedicated guard force will meet this requirement)
- Conduct an exercise with local responders

The second is to calculate the delay time provided by entity security barriers and compare it to the expected response time of the response force. This is a matter of getting the typical response times from the responding personnel and comparing them to the delay times determined through scenarios. If the delay times are greater than the response time under typical conditions, the standard is met. Because the delay time is threat dependent, entities are strongly encouraged to coordinate with local law enforcement and/or federal partners for assistance with threat capabilities. Local law enforcement, especially in areas where the response time is challenging, will often assist the entity in determining how long the barriers will delay an adversary.
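To make the second option concrete, the comparison reduces to simple arithmetic, as in the minimal Python sketch below. The barrier names, delay estimates, and response time are hypothetical placeholders; actual figures must come from the entity's site-specific risk assessment and from the responding force.

```python
# Hypothetical sketch of the delay-versus-response-time comparison.
# All barrier names and delay estimates are illustrative only; real
# values come from scenario analysis and coordination with the
# response force (e.g., local law enforcement).

barrier_delays_minutes = {
    "perimeter door (card key)": 3.0,   # example estimate
    "registered suite door": 5.0,       # example estimate
    "locked storage freezer": 9.0,      # example estimate
}

typical_response_time_minutes = 15.0    # reported by the response force

total_delay = sum(barrier_delays_minutes.values())

# The standard is met when cumulative barrier delay exceeds the
# response time under typical conditions.
if total_delay > typical_response_time_minutes:
    print(f"Total delay ({total_delay} min) exceeds response time: standard met")
else:
    print(f"Total delay ({total_delay} min) does not exceed response time: "
          "add delay or reduce response time")
```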
Though not required, entities should consider the effect of natural hazards and other factors when addressing response times. For example, an earthquake may trigger the alarm but may also impair local law enforcement's capability to respond to alarms. A mandatory evacuation for a hurricane may prevent the response force from arriving. A tornado may both cut the power to the IDS, triggering an alarm, and block the roads. Entities are strongly encouraged to address these matters in their incident response plans.

Key points: If local law enforcement is the response force, it is critical that they understand an IDS alarm at a Federal Select Agent Program entity warrants a high-priority response. Some police do not respond to property crimes; others respond only as time permits. You want your entity to receive a higher-priority response.

# Access control power failures and personnel changes (Tier 1 only)

For powered access control systems, the entity must describe procedures to ensure that security is maintained in the event of a failure of the access control system due to power disruption. This may include locks "failing secure" (locked), personnel/guard forces, backup generators, or other similar features. For example, if power is lost and the door locks (even if it can be opened only from the inside), then the requirement is met. If power is lost and the door unlocks (it can be opened from the outside), then it does not meet "fail secure." Depending on access controls, the entity may also want to consider changing alarm codes when personnel change. Former employees with the capability to deactivate the IDS may pose a vulnerability.

# Personnel Security (Section 10)

Personnel security measures are key to mitigating the "insider" risk. A good personnel security program reinforces honesty and integrity while identifying those who may pose unacceptable risks. In some cases, an entity's personnel security is already in place but outside the control of the RO. This is acceptable and often a much more effective means of conducting personnel security; the RO, however, must be aware of these procedures. The security plan should describe what personnel security measures are in place. These measures should assess workers based on their level of responsibility within the entity. Personnel security measures should be based on the insider threats identified in the site-specific security risk assessment.

# Personnel Suitability Assessments (Tier 1 BSAT only)

Personnel with access to Tier 1 BSAT have additional pre-access suitability and ongoing assessment requirements. See the "Guidance for Suitability Assessments" on www.selectagents.gov for additional information.

# Security Risk Assessment (Section 10)

An SRA is the method used by the Federal Bureau of Investigation to identify whether an individual is within any of the prohibited or restrictive categories specified in section 10 of the select agent regulations.

# Additional Personnel Security Considerations

Based on the site-specific risk assessment, the entity may determine risks that are best managed through additional personnel security measures. These are at the discretion of the entity and subject to many local, state, and federal laws. Entities should also keep in mind that parts of a typical background check may already be in place for non-security reasons. For example, verification of a person's educational background and contacting a person's references may be accomplished as part of the hiring process.
The registered entity may also consider continuing evaluation. Some approaches may include:
- Self-reporting (an environment which encourages honesty and self-help)
- Peer reporting (an environment where team members help each other)
- Supervisor observation (to make sure people perform in an approved manner)
- Periodic rescreening, including the SRA and other background screens

Suspicious activity or behavior to consider includes:
- Personnel who deliberately or routinely violate security or safety procedures
- Personnel who threaten or support those who threaten to do harm to other people
- Personnel who do not properly account for material
- Personnel who work after hours without authorization or apparent reason
- Personnel displaying nervous or evasive behavior when accounting for material

This goes beyond laboratorians and includes support staff with access to select agents and toxins.

# Operational Security (Section 11(d)(3))

Effective operational security builds on existing operational procedures while mitigating threats identified in the site-specific risk assessment. Operational security consists of controls and procedural measures which are put in place or modified to control access to select agents and toxins; they prevent the unauthorized access, theft, or loss of a select agent or toxin. The entity may choose to consider:
- Limiting duty hours/after-hour operations
- Training personnel on their responsibilities for securing select agents and toxins
- Badging procedures, including displaying badges at all times unless in personal protective equipment
- Removing signs that indicate where the select agents and toxins are stored (unless required by law, regulation, or as part of a biosafety program)
- Having two personnel present while working with select agents and toxins, or review of security video by laboratorians or other trained personnel
- Screening of visitors, packages, vehicles, etc., as part of allowing them access
- Requiring that individuals refrain from sharing information about the entity's security

Entities should assess lack of adherence to operational security measures because this is a key alert to potential insider threat activity.

# Work hours (Tier 1 BSAT only):

Entities registered to possess Tier 1 BSAT are required to have procedures that limit access to laboratory and storage facilities outside of normal business hours to only those specifically approved by the RO or designee(s). This requirement is not intended to limit work, only to make the entity aware of the work that is occurring. If the RO is aware and access is limited (card-key limitations, key control), this is sufficient to meet the regulatory requirement. If any person on the registration can access Tier 1 BSAT after hours on their own, it is not. These procedures must also be addressed in the security plan.

# Screening (Tier 1 BSAT only):

Entities registered to possess Tier 1 BSAT are required to have procedures for screening visitors, their property, and vehicles at the entry and exit points to the areas registered for Tier 1 BSAT, or at other designated points of entry to the building, facility, or compound, based on the entity's site-specific risk assessment. Screening consists of confirming a person's need to visit and his or her identity prior to allowing access to areas registered for Tier 1 BSAT. The method and level of detail are determined by the site-specific risk assessment. Screening can be done by any trained person at any point prior to accessing the area registered for Tier 1 BSAT.
Entities are also required to document in the security plan their procedures for visitor and property screening. These procedures should be based on the site-specific security risk assessment along with applicable laws.

# Inventory Control Measures (Sections 11(c)(1) and (2) and 17)

Effective inventory control measures for select agents and toxins in long-term storage can be a very effective way to deter and detect a variety of insider threats. How inventory audits are conducted and how the inventory is maintained must be described in the entity's security plan, and inventory records must meet the requirements of section 17 of the select agent regulations. The security requirement includes:
- A current accounting of any animals or plants intentionally or accidentally exposed to, or infected with, a select agent
- An accurate and current inventory for each select agent or toxin held in long-term storage
- Labeling and identifying select agents and toxins in the entity inventory in a way that leaves no question that the entity's inventory is accurately reflected in the inventory records
- Accounting for select agents and toxins from acquisition to destruction
- Accounting for select agents and toxins as they are withdrawn from long-term storage and returned to storage

An inventory audit is an examination of a portion of the inventory or collection sufficient to verify that inventory controls are effective. This guidance applies only to audits required by section 11, not to inventories conducted in accordance with section 17; for further information on long-term storage inventories, see the "Guidance on the Inventory of Select Agents and Toxins" at www.selectagents.gov.

Entities must conduct complete inventory audits of affected select agents and toxins in long-term storage when any of the following occur:
- Upon the physical relocation of a collection or inventory containing select agents and toxins. This includes moving a collection or inventory into a new facility or into a new storage location within the same facility;
- Upon the departure or arrival of a principal investigator, for select agents or toxins under the control of that principal investigator; or
- In the event of a theft or loss of a select agent or toxin, for all select agents and toxins under the control of the principal investigator that suffered the theft or loss.

Entities have discretion on how they conduct these audits. The depth of an audit should depend on the circumstances. Entities should consider the following five items when conducting audits:
1) The timing of the inventory audit. See recommended objectives below.
2) The circumstances that require the inventory audit. For example, an "emergency" movement to another location (freezer malfunction) may result in a focus on counting full racks and confirmation of a targeted, smaller number of vials. In the case of a shipment to a new building or campus where there is sufficient time to plan, entities are encouraged to inventory more thoroughly.
3) The criteria used to determine which samples are audited. In the case of a large inventory, the entity may choose to focus on the most recently manipulated samples. In the case of a small inventory, the entity may choose to focus on the entire inventory.
4) Any additional storage measures. If the material is stored in tamper-evident systems, the entity may choose to count the sealed containers instead of the individual vials within those containers.
5) The size of the collection being audited and the manner in which it is stored. Inventories which are intermixed with other samples may require a "vial by vial" audit.

Records of the audit must be kept in accordance with section 73.17(c). The changes to the inventory must be recorded in accordance with section 73.17(a) as well.

Inventory control measures may include tamper-evident storage containers. They are especially useful when the material is in storage without being accessed for a long period of time (e.g., archival collections, or while laboratories are under renovation). The material should be inventoried initially and then sealed in the container; the entity can then check the seal for evidence of tampering instead of inventorying the entire container (an illustrative sketch follows this section). Note: as a practical matter, an entity will either have to retain the record of the initial inventory past the minimum 3-year select agent regulatory requirement, or the entity will have to re-inventory the select agents and toxins if the record is destroyed on a 3-year cycle. If the inventory record of what is inside the container is lost or destroyed, the container must be re-inventoried.

# Examples of tamper evident material:

Tamper Evident Tape (useful on boxes and storage devices). Tamper Evident Seal (useful on freezers and other locking storage devices).

See Appendix VI for an example of a select agent inventory form and Appendix VII for an example of a select toxin inventory form.

# Understanding and Complying with Security Procedures/Training (Sections 15, 11(f) and 11(d)(7))

Re-training is required when the entity significantly amends its security plan. This includes processes which change how people gain access, along with other significant changes to security. Significant changes may be the result of new technology (new badge reader, new IDS, new inventory control system) or new operational processes (new inventory control method, new work hour standard). There must also be a means to verify that the employee understood the training. This may be a test or practical exercise. In the case of documentation, this can be a "read and understood" statement. Entities must keep records of this training and the means used to verify understanding per Section 17 of the select agent regulations.

# Inspection of Suspicious Packages (Section 11(d)(4))

A suspicious package is any package or item that enters or leaves registered areas that does not appear to be consistent with what is expected during normal daily operations. The entity should consider the following indicators of suspicious packages:
- Misspelled words
- Addressed to a title only or an incorrect title
- Badly taped or sealed
- Lopsided or uneven
- Oily stains, discolorations, or crystallization on the wrapper
- Excessive tape or string
- Protruding wires
- A return address that does not exist or does not make sense

Inspection of Packages: The security plan must describe how the entity will inspect packages based on the site-specific risk assessment. The entity should inspect all packages and items before they are brought into or removed from areas where select agents and toxins are used or stored (registered laboratory, etc.). Suspicious packages should be inspected visually or with noninvasive techniques before they are brought into, or removed from, the area where select agents and toxins are stored or used. Guidelines for recognizing suspicious packages are provided in Appendix VIII.
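As a rough illustration of the tamper-evident container approach described under Inventory Control Measures above, the sketch below ties an initial vial-level inventory to a numbered seal so that an audit checks the seal rather than recounting every vial. The record fields and identifiers are hypothetical; entities must still meet the record-keeping requirements of section 17.

```python
# Hypothetical sketch of a tamper-evident container record. Field names
# and identifiers are illustrative only, not a prescribed format.
from dataclasses import dataclass, field
from datetime import date


@dataclass
class SealedContainer:
    container_id: str
    seal_id: str                 # serial number on the tamper-evident seal
    sealed_on: date
    initial_inventory: list = field(default_factory=list)  # vial identifiers

    def audit(self, observed_seal_id: str, seal_intact: bool) -> str:
        """Audit by checking the seal instead of recounting each vial."""
        if seal_intact and observed_seal_id == self.seal_id:
            return "Seal intact; initial inventory record stands."
        # A broken or mismatched seal means the container must be
        # re-inventoried vial by vial and the RO notified.
        return "Seal check failed; full re-inventory required and RO notified."


box = SealedContainer("FREEZER-3-BOX-12", "SEAL-004417", date(2012, 1, 9),
                      initial_inventory=["vial-001", "vial-002", "vial-003"])
print(box.audit("SEAL-004417", seal_intact=True))
```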
# Reporting Requirements (Sections 11(c)(8) and 11(d)(7))
The security plan must indicate that the following incidents must be reported to the RO:
- Any loss or compromise of keys, passwords, or combinations
- Any suspicious persons or activities
- Any loss or theft of a select agent or toxin
- Any release of a select agent or toxin
- Any sign that inventory or use records for select agents and toxins have been altered or otherwise compromised
The security plan must describe procedures for how the RO will be informed of suspicious activity that may be criminal in nature and related to the entity, its personnel, or its select agents and toxins, and for how the entity will notify the appropriate federal, state, or local law enforcement agencies of such activity. Discussions during the security portion of the risk assessment should identify who can best respond to the circumstances. Suspicious activity of a criminal nature includes:
- Activity identified in the site-specific security risk assessment
- Insider:
o Attempts to create additional inventory that is not authorized or required
o Attempts to "cover up" and not report inventory discrepancies
o Attempts to remove inventory without authorization
o Attempts by "restricted" persons to intentionally access registered areas containing a select agent or toxin
- Outsider:
o Indirect threats against the entity received by e-mail, letter, telephone, or website postings
o Unauthorized attempts to purchase or transfer a select agent or toxin
o Attempts to coerce entity personnel into a criminal act
o Intimidation of entity personnel based on their scientific work (e.g., eco-terrorism)
o Requests for access to laboratories with no apparent legitimate purpose, or for purposes that do not seem legitimate
o Unauthorized attempts to probe or gain access to proprietary information systems, particularly access control systems (e.g., attempts by unauthorized individuals to gain physical or electronic access to systems)
o Theft of identification documents, identification cards, key cards, or other items required to access registered areas
o Persons representing themselves as government personnel (federal, state, or local) who attempt to gain access to the facility or obtain sensitive information but cannot or will not present appropriate identification
o Use of fraudulent documents or identification to request access
# Shipping (Section 11(c)(10))
The security plan must contain provisions and policies for the shipping, receiving, and storage of select agents and toxins. This includes procedures for receiving, monitoring, and shipping all select agents and toxins. Shipments of select agents and toxins between entities must be authorized by the Federal Select Agent Program, coordinated through an APHIS/CDC Form 2, and tracked so the receiving entity knows when the shipment will arrive. The package must be packaged and received by a person approved for access to select agents or toxins. With respect to outbound shipments, the individual who packaged the BSAT for shipment must have an SRA approval. Once the select agent or toxin is packaged in accordance with DOT regulations and cannot be identified as a select agent or toxin, it can be handed off to a non-SRA-approved person for shipping. With respect to inbound shipments, the package containing select agents and toxins is not considered "received" by the entity until the intended recipient takes possession of the package.
The intended recipient must have an SRA and, if the agent is Tier 1, must have gone through the entity's pre-access suitability assessment and be subject to the entity's ongoing assessment. Once received, the package should immediately be secured in registered space. Ideally, it is taken directly to the receiving laboratory; however, the package may be stored temporarily in other registered space. Shipping and receiving areas must be registered if select agent or toxin packages are identified or accessed there. For example:
- If packaging or un-packaging of a select agent or toxin is performed in these areas
- If the entity plans to temporarily store identified select agent or toxin packages in these areas
If select agent or toxin packages are not identified or accessed, the shipping and receiving area may not need to be registered. The entity must also have a written contingency plan for the receipt and security of unexpected shipments. An "unexpected shipment" is a shipment of a select agent that the entity had neither requested nor coordinated, and therefore was not expecting. Upon realizing that a shipment containing select agents and toxins has arrived, the entity's contingency plan must enable approved personnel to gain control of the shipment without delay and secure it in a registered area.
# Intra-entity Transfers (Section 11(d)(5))
Intra-entity transfer: a physical transfer of select agents or toxins between two SRA-approved PIs at the same registered entity (e.g., from one PI to another). For example, a PI removes a select agent or toxin from long-term storage and gives it to another PI. Entities that conduct intra-entity transfers must describe in their security plan how these transfers will take place, including chain-of-custody documents and provisions for safeguarding the select agents and toxins against theft, loss, or release. An example of an intra-entity transfer form can be found in Appendix IX. Transfers accompanied by a chain of custody ensure that select agents and toxins will not be left unattended. If the entity does not conduct intra-entity transfers, the security plan need not cover them. Entities with Tier 1 BSAT must apply the following enhancement to transfers (see Appendix IX for an example form): personnel transferring Tier 1 BSAT must ensure that the individuals receiving the Tier 1 BSAT are approved by the HHS Secretary or Administrator, have gone through the entity's pre-access suitability assessment, and are subject to the entity's ongoing assessment.
# Appendices
The appendices contain information, suggested diagrams, and examples that an entity may consider in the development and implementation of a security plan. The entity is not required to use, and is not limited to, the information provided in the appendices.
[Diagram: outsider threat task sequence, showing key information denied to the threat and mitigation measures in green/double lines]
Insider Threat (someone with authorized access who has the intent and capability to steal or impact a select agent or toxin): Unlike with the outsider, the entity can take steps to prevent certain tasks. Entities should focus on how one could get a select agent or toxin out of the registered areas without authorization and create controls or measures that prevent it.
Example "Relative Risk Score": This method assesses risk by numerically scoring threats and vulnerabilities. Threat and vulnerability are combined into one score (one independent variable) and compared to consequence, which is treated as a second independent variable.
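A minimal sketch of such a scoring scheme follows. The 1-to-5 scales, the multiplicative combination, and the example scenarios are illustrative assumptions, not values prescribed by the regulations or this guidance.

```python
from dataclasses import dataclass

@dataclass
class Scenario:
    """One threat scenario scored during the site-specific risk assessment."""
    name: str
    threat: int         # likelihood/capability of the threat, scored 1 (low) to 5 (high)
    vulnerability: int  # susceptibility of current safeguards, scored 1 (low) to 5 (high)
    consequence: int    # impact of theft, loss, or release, scored 1 (low) to 5 (high)

    def combined_tv(self) -> int:
        # Threat and vulnerability combined into a single score (one independent variable).
        return self.threat * self.vulnerability

    def relative_risk(self) -> int:
        # Combined T*V related to consequence, here by simple multiplication.
        return self.combined_tv() * self.consequence

scenarios = [
    Scenario("Insider removes vial from long-term storage", threat=3, vulnerability=4, consequence=5),
    Scenario("Outsider forces entry after hours", threat=2, vulnerability=2, consequence=5),
]

# Rank scenarios so mitigation effort targets the highest relative risk first.
for s in sorted(scenarios, key=Scenario.relative_risk, reverse=True):
    print(f"{s.name}: T*V={s.combined_tv()}, C={s.consequence}, relative risk={s.relative_risk()}")
```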
[Diagram: threat tasks without mitigation versus with mitigation and prevention measures (shown in green/double lines), and the resulting risk]
# Appendix III: Comparison of Access Control Devices and Systems Used to Control Access to Select Agents and Toxins
For each lock type, the physical security requirements are listed first, followed by any additional SRA requirements.
# Mechanical Key
Physical security requirements:
- All keys must be tracked in a log.
- Change locks if a key is lost or compromised.
- All keys must be returned when people quit or are terminated.
- The entity must log access and retain the log for 3 years.
- If the key is secured in a key box, the key box key must meet the requirements above.
Additional SRA requirements:
- All personnel with access to the key must have SRAs. If the key is in a key box, all personnel with access to the key box key must have an SRA.
- If there is no IDS, the following people must also have SRAs: all personnel with access to a master key; all personnel with access to a facility or building grand master; and entity locksmiths, if they have or can make the key and the key can be traced to the door.
# Cipher Key/Combination Lock
Physical security requirements:
- The entity must change the code or lock when personnel quit or are terminated. Changes must be reflected in a log.
- The entity must change the code or lock in the event of compromise.
- The entity must log access to registered areas and retain access records for 3 years.
Additional SRA requirements:
- All personnel with the code/combination, or with access to the code/combination, must have SRAs.
- If there is no IDS, all personnel who can change the code must also have SRAs.
# Card Key
Physical security requirements:
- The entity must maintain electronic or physical logs of access to registered areas for 3 years.
- The log must be capable of being printed.
- The access control network must meet the information security requirements.
Additional SRA requirements:
- All personnel with a card key that can open the door (including facility-wide keys) must have SRAs.
# Card Key + PIN
Physical security requirements:
- The entity must maintain electronic logs of access for 3 years.
- The access control network must meet the information security requirements.
Additional SRA requirements:
- No additional requirement.
# Biometrics
Physical security requirements:
- The entity must maintain electronic logs of access for 3 years.
- The access control network must meet the information security requirements.
Additional SRA requirements:
- No additional requirement.
# Multiple Kinds of Access Control (e.g., Card Key and Mechanical Lock on the Same Door)
Physical security requirements:
- All the requirements for each type of access control system, when or if used.
Additional SRA requirements:
- All the SRA requirements for both systems, unless use of the access control device triggers the IDS (use of a mechanical key in a card-key door will often trigger a 'forced door' alarm, the same alarm as if someone broke the door down).
# Remote Opening (e.g., Someone 'Buzzes' a Person In)
Physical security requirements:
- Maintain electronic logs of access for 3 years.
- The access control network must meet the information security requirements.
# Security Requirements for Shared Areas
Security requirements for shared areas depend on access to the select agents and toxins. All personnel who have access to the Tier 1 BSAT must have gone through the entity's pre-access suitability assessment and be enrolled in the ongoing assessment program. Beyond that, the requirements depend on the parameters of the work or action being done. Below are six common scenarios and the corresponding personnel and physical security requirements.
1) Actively working with Tier 1 BSAT and select agents and toxins simultaneously in the same contiguous registered area. An entity is conducting research or diagnostic work using Tier 1 BSAT along with other work in a single registered suite.
# Personnel Requirements
All people with access to the agent within the suite/shared area must have gone through the entity's pre-access suitability assessment and be subject to ongoing assessment.
# Physical Security Requirements
The suite/shared area meets all the physical security requirements for Tier 1. The final barrier is usually the door to the suite.
# Verification Check
Access logs for the suite must reflect only entity personnel who have gone through the entity's pre-access suitability assessment and ongoing assessment.
2) Storage only within a registered space. A Tier 1 storage location or freezer inside a registered laboratory, or a shared freezer inside a registered laboratory.
# Personnel Requirements
All people with access to the locking storage device (e.g., freezer) must have gone through the entity's pre-access suitability assessment and ongoing assessment. People with access to the room but not to the Tier 1 BSAT do not need to be in the entity's pre-access suitability and ongoing assessment program.
# Physical Security Requirements
The storage location meets the security requirements for Tier 1. The final barrier is the locking mechanism of the storage device.
# Verification Check
Access logs for the storage device that contains Tier 1 BSAT (e.g., freezer) must reflect only entity personnel who have gone through the entity's pre-access suitability assessment and ongoing assessment.
3) Working with Tier 1 BSAT and select agents and toxins inside the same contiguous registered space, separated by time. This entails working with Tier 1 BSAT only during certain well-defined times and conditions. The entity restricts access during times when Tier 1 BSAT is outside of locked storage units such as a locked freezer or a locked incubator.
# Personnel Requirements
All people with access to the suite/shared area when Tier 1 BSAT is present must have gone through the entity's pre-access suitability assessment and ongoing assessment.
# Physical Security Requirements
The suite/shared area meets all the physical security requirements for Tier 1 BSAT. Depending on how the work is organized, the final barrier may be the door to the suite or to any freezers/devices containing Tier 1 BSAT during work with select agents and toxins.
# Verification Check
Access logs for the room, and written procedures in the security plan that detail how access is restricted when Tier 1 BSAT is outside of locked storage.
4) Shared autoclave. Using an autoclave for both Tier 1 BSAT and other select agents and toxins.
# Personnel Requirements
For select agents and toxins, individuals operating the autoclave must be SRA approved. For Tier 1 BSAT, individuals operating the autoclave must be SRA approved, have gone through the entity's pre-access suitability assessment, and be subject to ongoing assessment.
# Physical Security Requirements
For select agents and toxins, the agent or toxin must be secured by SRA-approved persons until the autoclave reaches the desired operational parameters. For Tier 1 BSAT, the agent or toxin must be secured by an individual who is SRA approved, has gone through the entity's pre-access suitability assessment, and is subject to ongoing assessment; that person must remain until the autoclave reaches the desired operational parameters. It is recommended that autoclave cycles be scheduled so materials can be autoclaved without delay.
# Verification Check
Autoclave records.
5) Animals exposed to a Tier 1 BSAT. An animal that is experimentally infected with or exposed to a Tier 1 BSAT must be secured as a Tier 1 BSAT itself until such time as it is demonstrated to be free of that select agent.
# Personnel Requirements
Individuals who exposed the animal must be SRA approved, have gone through the entity's pre-access suitability assessment, and be subject to ongoing assessment. Personnel who handle or care for the animal must likewise be SRA approved, have gone through the entity's pre-access suitability assessment, and be subject to ongoing assessment.
# Physical Security Requirements
The area where the animal is handled and housed meets all the physical security requirements for Tier 1 agents. The final barrier is usually the door to the suite.
# Verification Check
Access logs.
6) Animals inoculated with a select toxin.
# Personnel Requirements
For select toxins, persons inoculating the animals must be SRA approved. For a Tier 1 select toxin, individuals who inoculated the animal must be SRA approved, have gone through the entity's pre-access suitability assessment, and be subject to ongoing assessment.
# Physical Security Requirements
For all select toxins, the room used for inoculation must be registered. Animals inoculated with select toxins are not subject to additional security requirements; however, any equipment used for inoculation of animals (e.g., syringe, aerosol chamber) must be managed as a select toxin until decontaminated. For a Tier 1 select toxin, the registered room must meet Tier 1 security requirements (e.g., three barriers, IDS).
# Verification Check
Inventory logs, with respect to the toxin only.
# Appendix XI: Access and Barrier Scenarios
Regulatory Requirements
- Access: § 73.10(b): To access any select agent or toxin (e.g., the ability to carry, use, or manipulate it, or the ability to gain possession of it), a person must be approved by the HHS Secretary or Administrator after an SRA (i.e., have an SRA approval).
- Records: § 73.17(a)(5): The entity must retain records of all entries into areas containing select agents or toxins, including the name of the individual, the name of the escort (if applicable), and the date and time of entry.
Areas containing Tier 1 agents or toxins also have the following requirements:
- Access: § 73.11(f)(4)(i): To access any Tier 1 select agent, a person must meet the § 73.10(b) requirement, have had an entity-conducted pre-access suitability assessment, and be subject to the entity's procedures for ongoing suitability assessment (i.e., the suitability program).
- Barriers: § 73.11(f)(4)(iv): A minimum of three security barriers is required. Only the final barrier must limit access to the select agent or toxin to personnel with an approved SRA. A "security barrier" is a physical structure that is designed to prevent entry by unauthorized persons. The entity must have procedures to control access through security barriers.
# Scenarios (Tier 1 Barriers and Access Controls):
# Scenarios (Non-Tier 1 Barriers and Access Controls):
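The Tier 1 barrier rules above lend themselves to a simple consistency check. The sketch below is a hypothetical illustration (the `Barrier` structure and `check_tier1_barriers` helper are not part of any official tooling) of the three rules: at least three barriers, at least one monitored barrier, and a final barrier limited to SRA-approved personnel.

```python
from dataclasses import dataclass

@dataclass
class Barrier:
    """A physical structure designed to prevent entry by unauthorized persons."""
    name: str
    monitored: bool          # can circumvention of entry controls be detected?
    sra_approved_only: bool  # does this barrier limit access to SRA-approved personnel?

def check_tier1_barriers(barriers: list) -> list:
    """Return a list of problems with a Tier 1 barrier configuration.

    `barriers` is ordered from outermost barrier to final barrier.
    """
    problems = []
    if len(barriers) < 3:
        problems.append("Tier 1 requires a minimum of three security barriers.")
    if not any(b.monitored for b in barriers):
        problems.append("At least one barrier must be monitored to detect circumvention.")
    if barriers and not barriers[-1].sra_approved_only:
        problems.append("The final barrier must limit access to SRA-approved personnel.")
    return problems

# Example: building entrance, registered suite door, and a locked freezer as the final barrier.
config = [
    Barrier("building entrance", monitored=False, sra_approved_only=False),
    Barrier("registered suite door", monitored=True, sra_approved_only=False),
    Barrier("locked storage freezer", monitored=False, sra_approved_only=True),
]
assert check_tier1_barriers(config) == []  # this configuration satisfies all three rules
```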
# Introduction
Section 201 of the Public Health Security and Bioterrorism Preparedness and Response Act of 2002 (P.L. 107-188) (the Act) requires the Secretary of the Department of Health and Human Services (HHS) to, by regulation, provide for "the appropriate safeguard and security requirements for persons possessing, using, or transferring a listed agent or toxin commensurate with the risk such agent or toxin poses to public health and safety (including the risk of use in domestic or international terrorism)." 1 Section 212 of the Act requires the Secretary of the Department of Agriculture (USDA) to, by regulation, provide for "the appropriate safeguard and security requirements for persons possessing, using, or transferring a listed agent or toxin commensurate with the risk such agent or toxin poses to animal and plant health, and animal and plant products (including the risk of use in domestic or international terrorism)." 2
The select agent regulations require a registered entity to develop and implement a written security plan that is (1) sufficient to safeguard the select agent or toxin against unauthorized access, theft, loss, or release; and (2) designed according to a site-specific risk assessment, providing graded protection (see section 11 (Security) of the select agent regulations). 3 The purpose of this document is to assist the entity in developing and implementing its site-specific security plan. This document is organized by security function/system, with the regulatory citation at the end of each section heading; any additional Tier 1 BSAT-specific requirements are addressed at the end of the relevant section. As used in this document, the word "must" means a regulatory requirement. The word "should" or "consider" indicates a suggested method to meet that requirement based on generally recognized security "best practices." We recognize that implementation is performance based and that an entity may find other ways to meet the regulatory requirement. This document addresses the amendments to the select agent regulations with regard to security, with one exception: entities with Tier 1 BSAT have additional pre-access suitability and ongoing suitability assessment requirements, which are addressed in the "Guidance for Suitability Assessments" available at www.selectagents.gov.
# Key considerations in developing an effective security system:
• It results from collaboration between scientific and security personnel.
• It is built upon well-documented operational processes. Security should reinforce existing processes; in order to do that, existing processes must be defined.
• It accounts for and secures all biological select agents or toxins from creation or acquisition to destruction.
• It complements other plans, such as safety, disaster recovery, and continuity of operations.
• It does not violate any laws, including the Americans with Disabilities Act, OSHA safety standards, and local building and fire codes.
• Personnel are trained so every person understands his or her responsibilities.
• It accounts for the primary and secondary impacts of a threat's action, including impacts beyond the entity.
• It requires reporting of all suspected security incidents and suspicious activities.
• It is reviewed at least annually and updated whenever conditions change.
• It is based on a site-specific risk assessment.
# Considerations when performing a risk assessment and developing a site-specific security plan (Section 11(b))
Entities should consider forming a team of entity subject matter experts (SMEs), supporting SMEs, and stakeholders. The team should include entity professionals who are experts on the potential consequences of a theft, loss, or release of a select agent or toxin and on the daily operations of the entity. Entities are encouraged to include organizational and governmental members as well.
Entity personnel should provide:
• Standard operating procedures (SOPs), policies, and other organizational controls that can reinforce or be affected by security measures
• The public health consequences of the select agent or toxin
• Operational requirements
• The value of the select agent or toxin work to the organization
• Knowledge of current security systems
Facility and support personnel should provide:
• Facility-wide security measures
• Personnel hiring practices (background checks, reference checks, education verifications)
• Planned upgrades to the facility
• Constraints that affect security (fire codes, ordinances, federal laws)
Local, state, and federal law enforcement and security personnel may be able to provide:
• Known threats to the entity
• Assistance with identifying vulnerabilities
• Assistance with designing or vetting the mitigating factors
• Economic and psychological impacts of the select agents or toxins
Once the team is formed, members should be consulted on a regular basis, including during plan development. The team should meet annually as part of the security plan review.
# Entities Registered for Tier 1 BSAT must meet the following additional requirement (Section 11(f)(2))
Entities must describe procedures for how the entity's Responsible Official (RO) will coordinate with the entity's safety and security professionals to ensure the security of Tier 1 select agents and toxins and share, as appropriate, relevant information that may affect the security plan.
# Specific considerations for the Site-Specific Risk Assessment and Graded Security (Section 11(b))
The cornerstone of a good security plan is a site-specific risk assessment. It forms the logical basis for the physical and personnel security measures employed to achieve graded security. It should indicate what risks have been identified and, of those, which have been mitigated and which residual risks are acceptable to the entity. It does not necessarily have to account for accidental hazards already addressed in a safety plan. "Risk" comes from the interaction of threats/hazards (T), vulnerabilities (V), and consequence (C). There are many ways to capture these interactions, including qualitative, quantitative, and probabilistic approaches; any assessment that captures and relates these interactions is sufficient. Tools are available to assist entities at http://www.selectagents.gov/. Below is a discussion of threat, vulnerability, and consequence.
1. Understanding and Assessing Threats. A threat is a person or organization whose actions may cause the theft or release of a select agent or toxin. The threat can be an insider with approved access or an outsider. The threat may target the agent directly (theft), may cause damage to the entity as a result of their action (e.g., animal-rights terrorists damaging containment), may act on their own, or may collude with others. Threats can be captured as a 'probability of attack.' Threats are generally determined in three different ways.
Useful resources when determining threats include:
a. Law enforcement and other experts; entities are encouraged to reach out to them to determine threats.
b. An expert or group of experts who model 'threats' in general (a Design Basis Threat, or DBT). This is most common in federal and state facilities; however, the capability may be present in large entities.
c. Historical data, including statistics on past local events (crimes), terrorist events worldwide, social sciences research into terrorists' behavior, official accounts, and/or terrorists' own writings about motivation and intent.
2. Understanding and Assessing Natural Hazards. For a tool to help determine whether you are in a risk area for natural hazards, see the Incident Response Guide at http://www.selectagents.gov. As with threats, entities should assess the impacts of the hazard on the select agent or toxin as well as on the entity as a whole.
3. Understanding and Assessing Vulnerabilities. Vulnerability is the relative susceptibility of select agents or toxins to a threat or natural hazard: a weakness through which a threat capability can be applied to cause the theft or release of the agent, or through which a natural hazard can impact the select agent or toxin. Vulnerabilities are often captured as the "probability of effectiveness" (PE) of a particular system. There are many ways to determine vulnerability, ranging from discussion to mathematical simulation, depending on the information available.
# Communicate the risk:
After the risk assessment is completed, key entity leadership (such as the Principal Investigator (PI), RO, Alternate Responsible Official (ARO), security staff, Institutional Biosafety Committee, and laboratory management) should determine whether the current risk level is acceptable. If the risk level is deemed unacceptable, the entity is obligated to develop a means to mitigate the risk. Some common risk mitigation measures are given below. It should be noted that any activity involving a select agent or toxin will involve some level of unmitigated risk; the only way to eliminate risk completely would be to not undertake the work.
# Manage the risk: Mitigation measures
If the risk is not acceptable, the entity has multiple paths to mitigate it. The entity can:
• Employ additional security measures.
• Change the work with the select agent or toxin.
• Decrease the quantity of toxin on hand, possessing only the amounts necessary for the work.
• Change how the select agent or toxin is stored (e.g., not lyophilized).
• When the toxin is a by-product of a larger process, immediately autoclave the agent or destroy the toxin.
• Document any risks that have not been mitigated, and why.
# Document and Update the risk assessment
The entity should document the risk assessment and review it at least yearly or as the threat changes.
# Security Program Development and Management (Section 11)
A security program implements risk management goals. Security program management consists of the plans, policies, people, processes/procedures, and performance assessments that support the security system.
# Plans (Section 11(a) and (b))
Entities are required to develop and implement a written site-specific security plan. A security plan is a documented, systematic design for implementing security goals: a blueprint for how an entity secures its select agents and toxins. It establishes the performance goals for the system and metrics for performance.
As stated in the select agent regulations (section 11(b)), "The security plan must be designed according to a site-specific risk assessment and must provide graded protection in accordance with the risk of the select agent or toxin, given its intended use." Graded protection is the result of mitigating the hazards (threat and natural) and the vulnerabilities based on the consequences of a select agent or toxin in its current form. Plans also include agreements or arrangements with extra-entity organizations, such as local law enforcement.
Reviews, Evaluating Effectiveness: The select agent regulations require that the security plan be reviewed on an annual basis. A security plan should also be reviewed after any security incident, as well as after drills and exercises, and revised as needed (Section 11(h)).
# Entities Registered for Tier 1 BSAT should have the following enhancements:
An effective security program for Tier 1 BSAT should include roles and responsibilities for security management, and possibly the designation of a Security Officer to manage the entity's security. It should also discuss who manages security control measures. This may include:
• How the entity manages access controls (this management may include keys, card keys, access logs, biometrics, and other access control measures for each of the security barriers in the security plan, and may be accomplished directly or by interacting with a service provider (e.g., a guard company))
• Designating personnel to manage the entity's security systems, including intrusion detection
• How the intrusion detection alarm code is managed (who has it, when it is changed)
• How the entity tests and manages the configuration of the system
• How the entity responds to an access control or intrusion detection failure (e.g., an alarm)
• How the entity screens visitors
• A documented security awareness training program for, at a minimum, all employees listed on the entity's approved registration, including regular insider-threat awareness briefings on how to identify and report suspicious behaviors that occur inside the laboratory or storage area
# Policies
Entities should consider establishing specific policies that support their plan. Security policies, for purposes of this document, are documented strategies, principles, or rules that the entity follows to manage its security risks. They are a clear means of establishing behavioral expectations and cover the spectrum from directives to standard operating procedures. As part of security program management, the entity should consider formally documenting security policies covering:
• All operational controls (see Inventory Control Measures above)
• Background checks and other personnel security measures (if practical, these policies should be vetted through the entity's legal and human resources departments; additional guidance on suitability assessment can be found at www.selectagents.gov)
# People
Entities should consider who will implement the plan. People are the core of any security system. The security program should define individuals' roles and responsibilities in the system and solicit their input for improvements. An entity should be aware of, and collaborate with, the personnel responsible for and/or impacting security.
This may include:
• Facility key control and/or access control personnel
• Alarm companies
• Campus security personnel
• Security personnel who observe video
• Local law enforcement or other response forces
# Entities Registered for Tier 1 BSAT must have the following enhancements:
Entities registered to possess Tier 1 BSAT must include in their security plan a component requiring all professionals involved in BSAT safety and security at the entity to share relevant information with the RO in order to coordinate their efforts (Section 11(f)(2)). Ideally, the entity's RO, safety, and security professionals should meet on a regular or defined basis. This may be annually in conjunction with the security plan review, after a security incident, when there is a significant entity change that affects security, or in response to a threat.
# Processes and Procedures
Processes and procedures are how people implement the leadership's plan. They are more than standard operating procedures (SOPs) and policies; they are how well the SOPs are implemented, followed, and supervised.
# Physical Security (Section 11(c)(1) and 11(d)(3))
The security plan must describe how the select agent or toxin is physically secured against unauthorized access. The security plan is performance based and should complement the incident response plan and biosafety plan. An effective physical security plan deters, detects, delays, and responds to the threats identified by the site-specific risk assessment, creating sufficient time between the detection of an attack and the point at which the attack would succeed for the response force to arrive. It should include:
• Security barriers that both deter intrusion and deny access (except by approved personnel) to the areas containing select agents and toxins (e.g., perimeter fences, walls, locked doors, and security windows, as well as trained persons such as security guards, trained laboratorians, or escorts)
• Safety measures and other environmental factors that increase security (e.g., an access or locking system that denies access to the select agent or toxin (i.e., mechanical locks, card-key access systems, or biometrics) and tamper-evident devices for select agents and toxins held in long-term storage)
• A balanced approach so that all access points, including windows and emergency exits, are secured at the same level
• A procedure or process to keep the number of nuisance alarms to a minimum
The physical security system must control access to the select agents and toxins. An individual is deemed to have access at any point in time if the individual has possession of a select agent or toxin (e.g., the ability to carry, possess, use, transfer, or manipulate it) or the ability to gain possession of a select agent or toxin (section 10(b) of the select agent regulations). Based on the site-specific risk assessment, the entity should control access to the facility and to areas within the facility beyond the registered areas, limiting access to the select agents and toxins to only those individuals with access approvals from the HHS Secretary or Administrator.
# The entity must:
Create an Access Control System (Section 11(c)(2) and 11(d)(1)): Create a system that limits access to select agents and toxins to those approved by the HHS Secretary or APHIS Administrator.
This should:
• Include provisions to limit unescorted/unrestricted access to the registered areas to those who have been approved by the HHS Secretary or Administrator to have access to select agents and toxins
• Include provisions for the safeguarding of animals and plants exposed to or infected with select agents
• Regularly review and update access logs
• Be responsive to changes in personnel's access requirements, including during personnel changes
• Remain flexible enough that non-approved personnel can be escorted if needed
• Appendix III addresses access control measures and security risk assessment (SRA) requirements in detail
# Access considerations for autoclaves
Individuals loading an autoclave with select agents or toxins must be approved for access by the HHS Secretary or Administrator and, for Tier 1 BSAT, must have undergone the entity's pre-access suitability assessment and be subject to ongoing suitability assessment. They should check the autoclave to ensure it is loaded properly and comes up to temperature and pressure within guidelines. Once that is complete, the person does not have to remain for the full cycle. Individuals unloading the autoclave may not require additional personnel security measures: at the end of the cycle, the person removing material from the autoclave should verify that the cycle completed within normal parameters. If it did, the person removing the material requires neither an SRA nor additional suitability measures. However, if the run did not complete within normal parameters, the personnel security requirements remain: the RO should be notified, and the material should be removed by an individual approved for access by the HHS Secretary or Administrator who, for Tier 1 BSAT, has undergone the entity's pre-access suitability assessment and is subject to ongoing assessment.
# Provide provisions for escorts (Section 11(d)(2)):
The security plan must contain provisions that allow non-approved persons access only when escorted by an approved person. The escort must be dedicated (no other duties while serving as an escort) to observing the escorted person and must understand what to watch for (e.g., attempts to access select agents and toxins). Non-approved persons are not allowed to put "hands on" or have access to an agent, even if escorted by an approved person.
Record Access (Section 17(a)(4)): An access log must be maintained that records information for all entries, including the name of the individual, the name of the escort (if applicable), and the date and time of entry. If an electronic log is used, the database controlling access must be capable of maintaining three years of access information. Entry records must be safeguarded to prevent alteration and be retained for 3 years (Section 17(c) of the select agent regulations).
# Control Access of Support Personnel for Maintenance, Cleaning, and Repair (Section 11(c)(3)):
The security plan must state how cleaning, maintenance, and repairs will be accomplished in areas where select agents and toxins are possessed or used.
In allowing maintenance, cleaning, or repair personnel (whether in-house or contract services) into a registered area, an entity should: 1) use only approved individuals; 2) provide an approved individual as an escort for the non-approved individual; 3) if the non-approved individual will not be escorted, install additional security measures (e.g., an additional lock and key, a cipher lock, or tamper alarms interfaced with the facility intrusion detection system) to prohibit access to select agents and toxins by the non-approved individual; or 4) remove the select agent or toxin to a different area that is appropriately registered. Section 17 (Records) of the select agent regulations requires that access logs be in place to record the name and date/time of entry into the registered area, including the name of an escort, if applicable.
# Prevent Sharing Access Credentials (Section 11(d)(6)):
The security plan must state that any person accessing select agents and toxins will not share their unique means of access (such as key cards and passwords) with any other person. This should include how the entity:
• Prevents "piggybacking" or "tailgating" on another approved person's access card
• Prevents sharing of a key card, password, or badge
• Challenges all individuals who tailgate or piggyback through a secured access entry point
# Reporting and Removing Unauthorized or Suspicious Persons (Section 11(d)(4) and 11(d)(6)):
An "unauthorized person" is one who is not approved to have access to select agents and toxins or is not authorized by the entity to be in a particular area or be involved in particular conduct. A "suspicious person" is any individual who has no valid reason to be in or around the areas where select agents and toxins are possessed or used. The security plan must describe the process for identifying, challenging, and removing unauthorized and suspicious persons. It must also require follow-up actions, such as reporting the information to the RO, the RO providing an incident report to entity security personnel, and possibly contacting local law enforcement agencies and the Federal Select Agent Program, as appropriate. Unauthorized and suspicious persons attempting to gain entry into registered areas without proper credentials should be identified, challenged, and removed immediately, and the RO notified. It is important for an entity to train laboratory personnel on what to do when a suspected unauthorized person attempts to access registered areas. The entity should consider:
• Integrating an access control measure (e.g., card key) into an alarm system that notifies a responder when an unauthorized person attempts to gain access (similar to an IDS, but not involving an actual break-in)
• Having a badge system that clearly identifies who does and does not have access to select agents and toxins, along with training on how to remove unauthorized personnel (e.g., procedures for notifying security personnel and/or local law enforcement)
# Address the Loss and Compromise of Access Credentials (Section 11(d)(7)):
The security plan must include the reporting mechanisms for the loss or compromise of keys and access cards and describe how they will be replaced. To be effective, this requires prompt and immediate attention to ensure there is no compromise of security.
The entity must:
• Require immediate notification if a key or access card is lost
• Evaluate whether locking mechanisms need to be replaced if keys are lost
• Describe the procedure for deactivating access for a lost or stolen access card and how entry logs will be checked
• Describe, in cases where a badge system is used, the means of disseminating information concerning the loss of a badge so that all personnel know the badge may have been compromised
Address Procedures for Personnel Changes (Section 11(c)(5)): The security plan must describe the procedures for changing access after personnel changes in order to prevent access by personnel who previously had access to select agents and toxins. This can include:
• Deactivating card-key access
• Deactivating email, network, and local machine computer accounts that provide access to information
• Surrendering key cards and badges
• Surrendering keys when people leave or change duties
Separate storage areas or laboratories that contain select agents and toxins from public areas of a building (Section 11(d)(8)): The storage areas or laboratories that contain select agents and toxins must not be publicly accessible. Public areas are, as the name suggests, places where the general public may congregate or transit in the vicinity of registered spaces.
# Entities Registered for Tier 1 BSAT must have the following enhancements:
Three Barriers (Tier 1 only) [Section 11(f)(4)(iv)]
For entities registered for Tier 1 BSAT, the select agent regulations require three barriers (section 11(f)(4)(iv)). A barrier is a physical structure that is designed to prevent access by unauthorized persons. Cameras, security lighting, and IDS are not considered security barriers because, while they may monitor access, they cannot, by themselves, prevent access. These security barriers must be identified on the entity's registration and discussed in the security plan (Sections 5A and 6A of APHIS/CDC Form 1). Each security barrier must add to the delay in reaching the secured areas where select agents and toxins are used or stored. Most security barriers, in and of themselves, do provide additional delay against forced entry; entities should also consider the delay they provide against covert entry (e.g., faked badges). All access points, including emergency exits, must be secured. This means that if there is a card-key lock on the main door, the emergency exit should be secured to prevent ingress, for example, by having no outside handle. One of the security barriers must be monitored in such a way as to detect circumvention of established entry control measures under all conditions. This may include video cameras, monitoring access control logs from a card-key reader, or tamper-evident tape on containers used for long-term storage. The final security barrier must limit access to the select agents and toxins to personnel approved for access by the HHS Secretary or Administrator. Also, per section 11(f)(4)(i), the entity must ensure access to the Tier 1 BSAT is limited to those who have undergone the entity's pre-access suitability assessment and are subject to ongoing assessment. Access records can be used to show that only approved personnel have accessed the final barrier (and the name of the escort, if required); a brief sketch of such a check follows. See Appendix IV for additional barrier scenarios.
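As one illustration of how access records can demonstrate that only approved (or properly escorted) personnel reached the final barrier, consider the minimal sketch below. The record layout and the `unapproved_entries` helper are hypothetical; section 17 rules (3-year retention, records safeguarded against alteration) apply regardless of the format an entity actually uses.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass(frozen=True)  # immutable records help safeguard entries against alteration
class EntryRecord:
    """One entry through the final barrier, per section 17 record requirements."""
    individual: str
    entered_at: datetime
    escort: Optional[str] = None  # required when the individual is not approved

def unapproved_entries(records, approved_personnel):
    """Flag final-barrier entries by anyone who is neither approved nor escorted
    by an approved person; an empty result supports the entity's verification check."""
    return [
        r for r in records
        if r.individual not in approved_personnel
        and (r.escort is None or r.escort not in approved_personnel)
    ]
```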
# Intrusion Detection System (Tier 1 only) [Section 11(f)(4)(v)]
All areas that reasonably afford access to the registered suite/room must be protected by an intrusion detection system (IDS) unless the registered area is physically occupied. An IDS consists of sensor devices that trigger an alarm when a security breach occurs, notifying a response force (e.g., local police or a security guard force) that has the capability to respond to the alarm and stop a threat. See Appendix V for a detailed discussion of different types of IDS.
Personnel monitoring the IDS must be capable of evaluating and interpreting the alarm and alerting the designated security response force or law enforcement. This may be personnel employed by the entity (an alarm or security operations center), a contracted alarm company, local law enforcement, or a military police unit. Depending on the system, it may also be dedicated entity personnel. If the entity contracts monitoring of its IDS to a service provider and local law enforcement responds, the entity should coordinate with local law enforcement to help them understand the importance of the information from the service provider; due to the volume of false alarms, local law enforcement may otherwise not treat the alarm as a serious matter. Entities that possess select agents and toxins are encouraged to discuss the consequences of theft of a select agent or toxin with local law enforcement so law enforcement can appreciate the seriousness of the threat. From an entity's viewpoint, it is important that local law enforcement understand that an alarm at an entity housing select agents and toxins should not be regarded as a "typical" property crime.
# Response Time (Tier 1 only) [Section 11(f)(4)(viii)]
The entity must determine the response time and describe the provisions or procedures for the response force in the security plan. Response time is the elapsed time, measured under typical conditions, from the time the response force is notified to the time the response force arrives at the entity. A response force is a force capable of interrupting a threat. It may be trained laboratory personnel, unarmed guards, armed guards, and/or local law enforcement, though law enforcement is preferable. A reasonable target for response time is 15 minutes, based on Department of Defense-adopted standards for protecting high-consequence assets. Entities have two options to meet the requirement.
The first is to determine that the response time for the response force is less than 15 minutes. This can be achieved in multiple ways. For example, an entity can:
• Enter into a formal agreement with local law enforcement
• Discuss expected response times with local law enforcement
• Work with a dedicated guard force, if one exists (a dedicated guard force will generally meet this requirement)
• Conduct an exercise with local responders
The second is to calculate the delay time provided by the entity's security barriers and compare it to the expected response time of the response force (a simple sketch of this comparison follows below). This is a matter of obtaining typical response times from the responding personnel and comparing them to the delay times determined through scenarios. If the delay times are greater than the response time under typical conditions, the standard is met. Because the delay time is threat dependent, entities are strongly encouraged to coordinate with local law enforcement and/or federal partners for assistance with threat capabilities.
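Here is a brief sketch of the delay-versus-response comparison just described. The barrier delay values are placeholders; an entity would determine real values through scenarios, often with help from law enforcement or federal partners.

```python
def meets_standard(barrier_delays_min, response_time_min):
    """Compare cumulative barrier delay to the response force's typical response time.

    Each element of `barrier_delays_min` is the delay (in minutes) that one
    security barrier adds against the assessed threat; the standard is met when
    the total delay exceeds the typical response time.
    """
    total_delay = sum(barrier_delays_min)
    return total_delay > response_time_min

# Placeholder values: three barriers delaying 4, 7, and 6 minutes against the
# assessed threat, versus a 15-minute typical law-enforcement response time.
print(meets_standard([4, 7, 6], 15))  # True: 17 minutes of delay > 15 minutes
```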
Local law enforcement, especially in areas where the response time is challenging, will often assist the entity in determining how long the barriers will delay an adversary. Though not required, entities should consider the effect of natural hazards and other factors when addressing response times. For example, an earthquake may trigger the alarm but may also impact local law enforcement's capability to respond to alarms. A mandatory evacuation for a hurricane may prevent the response force from arriving. A tornado may cut the power to the IDS, triggering an alarm, while also blocking the roads. Entities are strongly encouraged to address these matters in their incident response plans.
Key point: If local law enforcement is the response force, it is critical to ensure that an IDS alarm at a Federal Select Agent Program registered entity receives a high-priority response from local authorities. Some police do not respond to property crimes; others respond only as they get time. You want your entity to receive a 'higher' priority response.
# Access control power failures and personnel changes (Tier 1 only) [Section 11(f)(4)(vii)]
For powered access control systems, the entity must describe procedures to ensure that security is maintained in the event of a failure of the access control system due to power disruption. This may include locks that "fail secure" (locked), personnel/guard forces, backup generators, or other similar features. For example, if power is lost and the door locks (even if it can be opened only from the inside), the requirement is met. If power is lost and the door unlocks (it can be opened from the outside), it does not meet "fail secure." Depending on its access controls, the entity may also want to consider changing alarm codes when personnel change; former employees with the capability to deactivate the IDS may pose a vulnerability.
# Personnel Security (Section 10)
Personnel security measures are key to mitigating the 'insider' risk. A good personnel security program reinforces honesty and integrity while identifying those who may pose unacceptable risks. In some cases, an entity's personnel security is already in place but outside the control of the RO. This is acceptable, and oftentimes a much more effective means of conducting personnel security; the RO, however, must be aware of these procedures. The security plan should describe what personnel security measures are in place. These measures should assess workers based on their level of responsibility within the entity and should be based on the insider threats identified in the site-specific security risk assessment.
# Personnel Suitability Assessments (Tier 1 BSAT only) [Section 11(f)(1) and 11(f)(3)]
Personnel with access to Tier 1 BSAT are subject to additional pre-access suitability and ongoing assessment requirements. See the "Guidance for Suitability Assessments" at www.selectagents.gov for additional information.
# Security Risk Assessment (Section 10)
An SRA is the method used by the Federal Bureau of Investigation to identify whether an individual falls within any of the prohibited or restrictive categories specified in section 10 of the select agent regulations.
# Additional Personnel Security Considerations
Based on the site-specific risk assessment, the entity may determine that some risks are best managed through additional personnel security measures. These are at the discretion of the entity and subject to many local, state, and federal laws.
Also, entities should keep in mind that parts of a typical background check may already be in place for non-security reasons. For example, verification of a person's educational background and contacting a person's references may be accomplished as part of the hiring process. The registered entity may also consider continuing evaluation. Some approaches may include:
• Self-reporting (an environment that encourages honesty and self-help)
• Peer reporting (an environment where team members help each other)
• Supervisor observation (to make sure people perform in an approved manner)
• Periodic rescreening, including the SRA and other background screens
Suspicious activity or behavior to consider includes:
• Personnel who deliberately or routinely violate security or safety procedures
• Personnel who threaten, or support those who threaten, to do harm to other people
• Personnel who do not properly account for material
• Personnel who work after hours without authorization or apparent reason
• Personnel displaying nervous or evasive behavior when accounting for material
This goes beyond laboratorians and includes support staff with access to select agents and toxins.
# Operational Security (Section 11(d)(3))
Effective operational security builds on existing operational procedures but mitigates threats based on the site-specific risk assessment. Operational security consists of controls and procedural measures that are put in place, or modified, to control access to select agents and toxins; they prevent the unauthorized access, theft, or loss of a select agent or toxin. The entity may choose to consider:
• Limiting duty hours/after-hours operations
• Training personnel on their responsibilities for securing select agents and toxins
• Badging procedures, including displaying badges at all times unless in personal protective equipment
• Removing signs that indicate where the select agents and toxins are stored (unless required by law, regulation, or as part of a biosafety program)
• Having two personnel present while working with select agents and toxins, or review of security video by laboratorians or other trained personnel
• Screening visitors, packages, vehicles, etc. as part of allowing them access
• Requiring that individuals refrain from sharing information about the entity's security
Entities should assess lack of adherence to operational security measures, because nonadherence is a key alert to potential insider threat activity.
# Work hours (Tier 1 BSAT only) [Section 11(f)(4)(ii)]:
Entities registered to possess Tier 1 BSAT are required to have procedures that limit access to laboratory and storage facilities outside of normal business hours to those specifically approved by the RO or designee(s). This requirement is not intended to limit work, only to make the entity aware of the work that is occurring. If the RO is aware and access is limited (e.g., card-key limitations, key control), the regulatory requirement is met. If any person on the registration can access Tier 1 BSAT after hours on their own, it is not. These procedures must also be addressed in the security plan.
# Screening (Tier 1 BSAT only) [Section 11(f)(4)(iii)]:
Entities registered to possess Tier 1 BSAT are required to have procedures for screening visitors, their property, and vehicles at the entry and exit points to the areas registered for Tier 1 BSAT, or at other designated points of entry to the building, facility, or compound, based on the entity's site-specific risk assessment.
Screening consists of confirming a person's need to visit and his/her identity prior to allowing access to areas registered for Tier 1 BSAT. The method and level of detail are determined by the site-specific risk assessment. Screening can be done by any trained person at any point prior to accessing the area registered for Tier 1 BSAT.
Records of the audit must be kept in accordance with Section 73.17(c). Changes to the inventory must be recorded in accordance with Section 73.17(a) as well.

Inventory control measures may include tamper-evident storage containers. These are especially useful when the material is in storage without being accessed for a long period of time (archival collections, or while laboratories are under renovation). The material should be inventoried initially and then sealed in the container. The entity can then check the seal for evidence of tampering instead of inventorying the entire container. Note: as a practical matter, an entity will either have to retain the initial inventory record past the minimum 3-year select agent regulatory requirement, or the entity will have to re-inventory the select agents and toxins if the record is destroyed on a 3-year cycle. If the inventory record of what is inside the container is lost or destroyed, the container must be re-inventoried.

Examples of tamper-evident material:
• Tamper-evident tape (useful on boxes and storage devices)
• Tamper-evident seals (useful on freezers and other locking storage devices)

See Appendix VI for an example of a select agent inventory form and Appendix VII for an example of a select toxin inventory form.

Re-training is required when the entity significantly amends its security plan. This includes processes which change how people gain access, along with other significant changes to security. Significant changes may be the result of new technology (new badge reader, new IDS, new inventory control system) or new operational processes (new inventory control method, new work hour standard). There must also be a means to verify that the employee understood the training. This may be a test or practical exercise. In the case of documentation, this can be a "read and understood" statement. Entities must keep records of this training and the means used to verify understanding per Section 17 of the select agent regulations.

# Inspection of Suspicious Packages (Section 11(d)(4))

A suspicious package is any package or item that enters or leaves registered areas that does not appear to be consistent with what is expected during normal daily operations. The entity should consider the following indicators of suspicious packages:
• Misspelled words
• Addressed to a title only or an incorrect title
• Badly taped or sealed
• Lopsided or uneven
• Oily stains, discolorations, or crystallization on the wrapper
• Excessive tape or string
• Protruding wires
• Return address does not exist or does not make sense

Inspection of packages: The security plan must describe how the entity will inspect packages based on the site-specific risk assessment. The entity should inspect all packages and items before they are brought into or removed from areas where select agents and toxins are used or stored (registered laboratory, etc.).
Suspicious packages should be inspected visually or with noninvasive techniques before they are brought into, or removed from, the area where select agents and toxins are stored or used. Guidelines for recognizing suspicious packages are provided in Appendix VIII.

# Reporting requirements (Sections 11(c)(8) and 11(d)(7))

The security plan must indicate that the following incidents must be reported to the RO:
• Any loss or compromise of keys, passwords, and combinations
• Any suspicious persons or activities
• Any loss or theft of a select agent or toxin
• Any release of a select agent or toxin
• Any sign that inventory or use records for select agents and toxins have been altered or otherwise compromised

The security plan must describe procedures for how the RO will be informed of suspicious activity that may be criminal in nature and related to the entity, its personnel, or its select agents and toxins, and describe procedures for how the entity will notify the appropriate federal, state, or local law enforcement agencies of such activity. Discussions during the security portion of the risk assessment should identify who can best respond to the circumstances. Suspicious activity of a criminal nature includes:
• Those identified in the site-specific security risk assessment
• Insider:
o Attempts to create additional inventory not authorized or required
o Attempts to "cover up" and not report inventory discrepancies
o Attempts to remove inventory without authorization
o Attempts by "restricted" persons to intentionally access registered areas containing a select agent or toxin
• Outsider:
o Indirect threats against the entity received by e-mail, letter, telephone, or website postings
o Unauthorized attempts to purchase or transfer a select agent or toxin
o Attempts to coerce entity personnel into a criminal act
o Intimidation of entity personnel based on their scientific work (eco-terrorism)
o Requests for access to laboratories for no apparent legitimate purpose, or for purposes that do not seem legitimate
o Unauthorized attempts to probe or gain access to proprietary information systems, particularly access control systems (e.g., attempts by unauthorized individuals to gain physical or electronic access to systems)
o Theft of identification documents, identification cards, key cards, or other items required to access registered areas
o Persons representing themselves as government personnel (federal, state, local) who attempt to gain access to the facility or obtain sensitive information but cannot or will not present appropriate identification
o Use of fraudulent documents or identification to request access

# Shipping (Section 11(c)(10))

The security plan must contain provisions and policies for shipping, receiving, and storage of select agents and toxins. This includes procedures for receiving, monitoring, and shipping of all select agents and toxins. Shipments containing select agents and toxins between entities must be authorized by the Federal Select Agent Program, coordinated through an APHIS/CDC Form 2, and tracked so the receiving entity knows when the shipment will arrive. The package must be packed and received by persons approved for access to select agents or toxins. With respect to outbound shipments, the individual who packaged the BSAT for shipment must have an SRA approval. Once the select agent or toxin is packaged in accordance with DOT regulations and cannot be identified as a select agent or toxin, it can be passed off to a non-SRA-approved person for shipping.
With respect to inbound shipments, the package containing select agents and toxins is not considered "received" by the entity until the intended recipient takes possession of the package. The intended recipient must have an SRA and, if the agent is Tier 1, must have gone through the entity's pre-access suitability assessment and be subject to the entity's ongoing assessment. Upon receipt by the intended recipient, the materials should immediately be secured in registered space. Ideally, they are taken to the receiving laboratory. However, the package may be stored in other registered space temporarily. Shipping and receiving areas must be registered if the select agent or toxin packages are identified or accessed. For example:
• If packaging or un-packaging of a select agent or toxin is performed in these areas
• If the entity plans to temporarily store identified select agents or toxins in these areas

If select agent or toxin packages are not identified or accessed, the shipping and receiving area may not need to be registered.

The entity must also have a written contingency plan for the receipt and security of unexpected shipments. An "unexpected shipment" occurs when an entity receives a shipment of a select agent that it had neither requested nor coordinated for, and therefore was not expecting. Upon realizing that a shipment has arrived which contains select agents and toxins, the entity must have a contingency plan to have approved personnel gain control of the shipment without delay and secure it in a registered area.

# Intra-entity Transfers (Section 11(d)(5))

Intra-entity transfer: a physical transfer of select agents or toxins that takes place between two SRA-approved PIs at the same registered entity (e.g., from one PI to another). Entities that conduct intra-entity transfers must have in their security plan a description of how these transfers will take place, including chain-of-custody documents and provisions for safeguarding the select agents and toxins against theft, loss, or release. An example: a PI removes a select agent or toxin from long-term storage and gives it to another PI. An example of an intra-entity transfer form can be found in Appendix IX. Transfers accompanied by a chain-of-custody document ensure that select agents and toxins will not be left unattended. If intra-entity transfers are not conducted in the entity, this is not required to be covered in the security plan.

Entities with Tier 1 BSAT must have the following enhancements to transfers (see Appendix IX for an example form): personnel transferring Tier 1 BSAT must ensure that individuals receiving the Tier 1 BSAT are approved by the HHS Secretary or Administrator, have gone through the entity's pre-access suitability assessment, and are subject to the entity's ongoing assessment.

# Appendices

The appendices contain information, suggested diagrams, and examples that an entity may consider in development and implementation of a security plan. The entity is not required to use, or limited to, the information provided in the appendices.

[Diagram: outsider threat tasks; "deny threat key information"; mitigations shown in green/double lines]

Insider Threat (someone with authorized access with intent and capability to steal or impact a select agent or toxin): Unlike the outsider, the entity can take steps to prevent certain tasks. Entities should focus on how one could get a select agent or toxin out of the registered areas without authorization and create controls or measures which prevent it.

Example "Relative Risk Score": This method assesses risk by numerically scoring threats and vulnerabilities. Threat and vulnerability are combined into one score (one independent variable) and compared to consequence, which is treated as an independent variable.
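Read literally, the method reduces to simple arithmetic. A minimal sketch, assuming 1-5 scoring scales and multiplication as the combination rule (the guide does not prescribe either):

```python
def relative_risk_score(threat: int, vulnerability: int, consequence: int) -> dict:
    """Combine threat and vulnerability into a single likelihood score and keep
    consequence as the other independent variable. The 1-5 scales and the
    multiplication rule are assumptions, not prescribed by the regulations."""
    likelihood = threat * vulnerability
    return {"likelihood": likelihood,
            "consequence": consequence,
            "relative_risk": likelihood * consequence}

# A mitigation that lowers vulnerability from 4 to 2 halves the relative risk:
print(relative_risk_score(threat=3, vulnerability=4, consequence=5))
print(relative_risk_score(threat=3, vulnerability=2, consequence=5))
```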
[Diagram: threat tasks without mitigation; mitigation and prevention shown in green/double lines; resulting risk]

# Appendix III: Comparison of Access Control Devices and Systems which are used to Control Access to Select Agents and Toxins

Mechanical Key
Physical security requirements:
• All keys must be tracked in a log.
• Change locks if a key is lost or compromised.
• All keys must be returned when people quit or are terminated.
• The entity must log access and retain the log for 3 years.
• If the key is secured in a key box, the key box key must meet the requirements above.
Additional SRA requirements:
• All personnel with access to the key must have SRAs. If in a key box, all personnel with access to the key box key must have an SRA.
• If there is no IDS, the following people must also have SRAs: all personnel with access to a master key; all personnel with access to a facility or building grand master; and entity locksmiths if they have or can make the key and the key can be traced to the door.

Cipher Key/Combination Lock
Physical security requirements:
• The entity must change the code or lock when personnel quit or are terminated. Changes must be reflected in a log.
• The entity must change the code or lock in the event of compromise.
• The entity must log access to registered areas and retain access records for 3 years.
Additional SRA requirements:
• All personnel with the code/combination, or access to the code/combination, must have SRAs.
• If there is no IDS, all personnel who can change the code must also have SRAs.

Card Key
Physical security requirements:
• The entity must maintain electronic or physical logs of access to registered areas for 3 years.
• The log must be capable of being printed.
• The access control network must meet the information security requirements.
Additional SRA requirements:
• All personnel with a card key which can open the door (including facility-wide keys) must have SRAs.

Card Key + PIN
Physical security requirements:
• The entity must maintain electronic logs of access for 3 years.
• The access control network must meet the information security requirements.
Additional SRA requirements: none.

Biometrics
Physical security requirements:
• The entity must maintain electronic logs of access for 3 years.
• The access control network must meet the information security requirements.
Additional SRA requirements: none.

Multiple kinds of access control (i.e., card key and mechanical lock on the same door)
• All the requirements for each type of access control system, when or if used.
• All the SRA requirements for both systems, unless use of the secondary access control device triggers the IDS (use of a mechanical key in a card-key door will often trigger a "forced door" alarm, the same alarm as if someone broke the door down).

Remote opening (e.g., someone "buzzes" a person in)
• Maintain electronic logs of access for 3 years.
• The access control network must meet the information security requirements.

Security requirements for shared areas depend on access to the select agents and toxins. All personnel who have access to the Tier 1 BSAT must have gone through the entity's pre-access suitability and be enrolled in the ongoing assessment program. Beyond that, it depends on the parameters of the work or action being done. Below are six common scenarios and the corresponding personnel and physical security requirements.

1) Actively working with Tier 1 BSAT and select agents and toxins simultaneously in the same contiguous registered area. An entity is conducting research or diagnostic work using Tier 1 BSAT along with other work in a single registered suite.
# Personnel Requirements
All people with access to the agent within the suite/shared area must have gone through the entity's pre-access suitability and be subject to ongoing assessment.
# Physical Security Requirements
The suite/shared area must meet all the physical security requirements for Tier 1. The final barrier is usually the door to the suite.
# Verification Check
Access logs to the suite must reflect only entity personnel who have gone through the entity's pre-access suitability and ongoing assessment.

2) Storage only within a registered space. A Tier 1 storage location or freezer inside a registered laboratory, or a shared freezer inside a registered laboratory.
# Personnel Requirements
All people with access to the locking storage device (i.e., freezer) must have gone through the entity's pre-access suitability and ongoing assessment. People with access to the room but not to the Tier 1 BSAT do not need to be in the entity's pre-access suitability and ongoing assessment program.
# Physical Security Requirements
The storage location must meet the security requirements for Tier 1. The final barrier is the locking mechanism of the storage device.
# Verification Check
Access logs to the storage device that contains Tier 1 BSAT (i.e., freezer) must reflect only entity personnel who have gone through the entity's pre-access suitability and ongoing assessment.

3) Working with Tier 1 BSAT and select agents and toxins inside the same contiguous registered space, separated by time. This entails working with Tier 1 BSAT only during certain well-defined times and conditions. The entity restricts access during times when Tier 1 BSAT is outside of locked storage units such as a locked freezer or a locked incubator.
# Personnel Requirements
All people with access to the suite/shared area when Tier 1 BSAT is present must have gone through the entity's pre-access suitability and ongoing assessment.
# Physical Security Requirements
The suite/shared area must meet all the physical security requirements for Tier 1 BSAT. Depending on how the work is organized, the final barrier may be the door to the suite or to any freezers/devices containing Tier 1 BSAT during work with select agents and toxins.
# Verification Check
Access logs to the room, and written procedures in the security plan that detail how access is restricted when Tier 1 BSAT is outside of locked storage.

4) Shared autoclave. Using an autoclave for both Tier 1 BSAT and select agents and toxins.
# Personnel Requirements
For select agents and toxins, individuals operating the autoclave must be SRA approved. For Tier 1 BSAT, individuals operating the autoclave must be SRA approved and have gone through the entity's pre-access suitability and be subject to ongoing assessment.
# Physical Security Requirements
For select agents and toxins, the agent or toxin must be secured by SRA-approved persons until the autoclave reaches the desired operational parameters. For Tier 1 BSAT, the agent or toxin must be secured by an individual who is SRA approved, has gone through the entity's pre-access suitability, and is subject to ongoing assessment; that person must remain until the autoclave reaches the desired operational parameters. It is recommended that autoclave cycles be scheduled so materials can be autoclaved without delay.
# Verification Check
Autoclave records.

5) Animals exposed to a Tier 1 BSAT. An animal that is experimentally infected with or exposed to a Tier 1 BSAT must be secured as a Tier 1 BSAT itself until such time as it is demonstrated to be free of that select agent.
# Personnel Requirements
Individuals who exposed the animal must be SRA approved and have gone through the entity's pre-access suitability and be subject to ongoing assessment. Personnel who handle or care for the animal must be SRA approved and have gone through the entity's pre-access suitability and be subject to ongoing assessment.
# Physical Security Requirements
The area where the animal is handled and housed must meet all the physical security requirements for Tier 1 agents. The final barrier is usually the door to the suite.
# Verification Check
Access logs.

6) Animals inoculated with a select toxin.
# Personnel Requirements
For select toxins, persons inoculating the animals must be SRA approved. For a Tier 1 select toxin, individuals who inoculated the animal must be SRA approved and have gone through the entity's pre-access suitability and be subject to ongoing assessment.
# Physical Security Requirements
For all select toxins, the room used for inoculation must be registered. Animals inoculated with select toxins are not subject to additional security requirements. However, any equipment used for inoculation of animals (syringe, aerosol chamber) must be managed as a select toxin until decontaminated. For a Tier 1 select toxin, the registered room must meet Tier 1 security requirements (e.g., 3 barriers, IDS).
# Verification Check
Inventory logs, with respect to toxin only.

# Appendix XI: Access and Barrier Scenarios

Regulatory Requirements
- Access: § 73.10(b): To access any select agent or toxin (e.g., the ability to carry, use, or manipulate, or the ability to gain possession of a select agent or toxin), a person must be approved by the HHS Secretary or Administrator after an SRA (i.e., have an SRA approval).
- Records: § 73.17(a)(5): The entity must retain records about all entries into areas containing select agents or toxins, including the name of the individual, name of the escort (if applicable), and date and time of entry.

Areas containing Tier 1 agents or toxins also have the following requirements:
- Access: § 73.11(f)(4)(i): To access any Tier 1 select agent, a person must meet the § 73.10(b) requirement, have had an entity-conducted pre-access suitability assessment, and be subject to the entity's procedures for ongoing suitability assessment (i.e., Suitability Program).
- Barriers: § 73.11(f)(4)(iv): A minimum of three security barriers, where only the final barrier must limit access to the select agent or toxin to personnel with an approved SRA. A "security barrier" is a physical structure that is designed to prevent entry by unauthorized persons. The entity must have procedures to control access through security barriers.

Scenarios (Tier 1 Barriers and Access Controls):
Scenarios (Non-Tier 1 Barriers and Access Controls):
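The § 73.17(a)(5) record elements above (individual, escort if applicable, date and time of entry) map naturally onto a simple append-only log. A minimal sketch; the CSV layout and file name are assumptions, and a real system must also meet the 3-year retention and information security requirements:

```python
import csv
from datetime import datetime

FIELDS = ["individual", "escort", "entry_time"]  # elements named in section 73.17(a)(5)

def record_entry(log_path: str, individual: str, escort: str = "") -> None:
    """Append one entry record for a registered area; retain the file for 3 years."""
    with open(log_path, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if f.tell() == 0:                        # new file: write the header row
            writer.writeheader()
        writer.writerow({"individual": individual, "escort": escort,
                         "entry_time": datetime.now().isoformat(timespec="seconds")})

record_entry("area_access_log.csv", "J. Smith")
record_entry("area_access_log.csv", "Visitor A. Jones", escort="J. Smith")
```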
Participants who responded "yes" to either question were classified as having a disability. To assess self-rated health status, participants were asked, "Would you say that in general your health is excellent, very good, good, fair, or poor?" The following racial/ethnic categories were included in this analysis: white, black, Hispanic, Asian, Native Hawaiian or Other Pacific Islander, and AI/AN.
† Data from 2004, 2005, and 2006 were aggregated to provide sufficient power to analyze low-count racial/ethnic populations. Prevalence estimates were weighted and age adjusted to the 2000 U.S. standard population. Weighted population estimates were determined by taking the final weights for each year during 2004-2006 and dividing by three.
§ Data were weighted to compensate for unequal probabilities of selection, to adjust for nonresponse and telephone noncoverage, to ensure that results were consistent with population data, and to make population estimates.

Self-rated health status has been found to be an independent predictor of morbidity and mortality (1), and racial/ethnic disparities in self-rated health status persist among the U.S. adult population (2). Black and Hispanic adults are more likely to report their general health status as fair or poor compared with white adults (2). In addition, the prevalence of disability has been shown to be higher among blacks and American Indians/Alaska Natives (AI/ANs) (3). To estimate differences in self-rated health status by race/ethnicity and disability, CDC analyzed data from the 2004-2006 Behavioral Risk Factor Surveillance System (BRFSS) surveys. This report summarizes the results of that analysis, which indicated that the prevalence of disability among U.S. adults ranged from 11.6% among Asians to 29.9% among AI/ANs. Within each racial/ethnic population, adults with a disability were more likely to report fair or poor health than adults without a disability, with differences ranging from 16.8 percentage points among Asians to 37.9 percentage points among AI/ANs. Efforts to reduce racial/ethnic health disparities should explicitly include strategies to improve the health and well-being of persons with disabilities within each racial/ethnic population.

BRFSS is a state-based, random-digit-dialed telephone survey of the noninstitutionalized, U.S. civilian population aged ≥18 years. During 2004-2006, approximately 1 million persons from all 50 states, the District of Columbia, Puerto Rico, and the U.S. Virgin Islands participated in the BRFSS survey.* Consistent with the definition of disability from Healthy People 2010 (4), respondents were asked, "Are you limited in any way in any activities because of physical, mental, or emotional problems?" and "Do you now have any health problem that requires you to use special equipment, such as a cane, a wheelchair, a special bed, or a special telephone?" Prevalence estimates and standard errors were obtained using statistical software to account for the complex sampling design. Chi-square tests were used to compare self-rated health status between racial/ethnic populations and by disability status. Council of American Survey Research Organizations (CASRO) median response rates ¶ for the 2004-2006 BRFSS surveys were 52.7% (2004), 51.1% (2005), and 51.4% (2006). The median cooperation rates for each year were 74.3% (2004), 75.1% (2005), and 74.5% (2006). During 2004-2006, an estimated 19.9% of the total U.S. population aged ≥18 years (i.e., an average of 43 million persons) had a disability.
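The multiyear weighting described in the footnotes above (each year's final weight divided by three) reduces to a simple rescaling before computing weighted percentages. A minimal sketch with a hypothetical record layout; age adjustment to the 2000 U.S. standard population is omitted here:

```python
def combine_survey_years(records, n_years=3):
    """Divide each year's final weight by the number of combined years so the
    pooled file represents an average single year."""
    return [{**r, "weight": r["weight"] / n_years} for r in records]

def weighted_percent(records, flag):
    """Weighted percentage of respondents with the given indicator set."""
    total = sum(r["weight"] for r in records)
    return 100.0 * sum(r["weight"] for r in records if r[flag]) / total

pooled = combine_survey_years([
    {"weight": 1500.0, "disability": True},    # 2004 respondent
    {"weight": 900.0, "disability": False},    # 2005 respondent
    {"weight": 1200.0, "disability": False},   # 2006 respondent
])
print(f"crude weighted prevalence: {weighted_percent(pooled, 'disability'):.1f}%")
```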
The prevalence of disability was highest among AI/ANs (29.9%) and lowest among Asians (11.6%) (Table 1). Nearly 84% of the total U.S. adult population reported having good or better health, but substantial variation was observed in self-rated health status across racial/ethnic populations. Nearly 60% of white, Asian, and Native Hawaiian or Other Pacific Islander respondents (59.3%, 55.8%, and 55.4%, respectively) rated their health as very good or excellent, whereas 44.4% of black respondents reported their health to be very good or excellent. White and Asian adults had similar rates of self-rated fair or poor health (12.9% and 10.4%, respectively), whereas fair or poor health was reported more frequently among other minority populations: 21.1% among blacks, 14.8% among Native Hawaiian or Other Pacific Islanders, and 24.5% among AI/ANs. Hispanic adults rated their health status approximately equally across the three health status categories: very good or excellent (33.6%), good (35.4%), and fair or poor (31.1%). Overall, adults with a disability were less likely to report excellent or very good health (27.2% versus 60.2%; p<0.01) and more likely to report fair or poor health (40.3% versus 9.9%; p<0.01), compared with adults without disability (Table 2). White adults without a disability had the highest proportion of respondents who rated their health as very good or excellent (66.9%), whereas 49.9% of black respondents without a disability reported very good or excellent health. Reports of fair or poor health among adults with a disability were most common among Hispanics and AI/ANs (55.2% and 50.5%, respectively) and least common among Asians (24.9%).

Efforts to improve the health of racial/ethnic populations should also address the needs of adults with disabilities. Such efforts must ensure that persons with disabilities have accessible, available, and appropriate health-care and wellness promotion services (5).

# HIV Prevalence Estimates -United States, 2006

Accurate and timely data on the number of persons in the United States living with human immunodeficiency virus (HIV) infection (HIV prevalence) are needed to guide planning for disease prevention, program evaluation, and resource allocation. However, overall HIV prevalence cannot be measured directly because a proportion of persons infected with HIV have neither been diagnosed nor reported to local surveillance programs. In addition, national HIV prevalence data are incomplete because local reporting systems for confidential, name-based HIV reporting have been fully implemented only since April 2008. With the advent of highly active antiretroviral therapies that delay the progression of HIV to acquired immunodeficiency syndrome (AIDS), and of AIDS to death (1), and changes in the AIDS case definition to include an immunologic diagnosis (2), earlier back-calculation methods from the 1990s for estimating HIV prevalence based on the number of reported AIDS cases are no longer reliable. With 80% of states reporting name-based HIV diagnoses as of January 2006, an extended back-calculation method now can be used to estimate HIV prevalence more accurately. Based on this method, CDC now estimates that 1.1 million adults and adolescents (prevalence rate: 447.8 per 100,000 population) were living with diagnosed or undiagnosed HIV infection in the United States at the end of 2006. The majority of those living with HIV were nonwhite (65.4%), and nearly half (48.1%) were men who have sex with men (MSM).
The HIV prevalence rates for blacks (1,715.1 per 100,000) and Hispanics (585.3 per 100,000) were, respectively, 7.6 and 2.6 times the rate for whites (224.3 per 100,000).

An extended back-calculation method has been described in detail and was used recently to calculate the incidence of HIV infection in the United States (3). The method was used in this analysis to estimate HIV prevalence based on the number of HIV diagnoses by calendar year and disease severity (i.e., whether the person received an AIDS diagnosis in the same calendar year as the HIV diagnosis). HIV prevalence at the end of 2006 for the 50 states and District of Columbia was estimated using information from the national HIV/AIDS Reporting System for persons aged ≥13 years who were diagnosed with HIV during 2006 and reported to CDC by the end of June 2007. Forty states provided data on both HIV and AIDS diagnoses, whereas 10 states (California, Delaware, Hawaii, Illinois, Maryland, Massachusetts, Montana, Oregon, Rhode Island, and Vermont) and the District of Columbia provided data only for AIDS diagnoses. For the areas without name-based HIV data, statistical procedures and AIDS data were used to estimate HIV cases, based on the ratio of HIV to AIDS in states with integrated surveillance systems (4). The number of undiagnosed HIV infections was calculated by subtracting diagnosed AIDS prevalence and diagnosed HIV prevalence from the estimated overall HIV prevalence. Using an established method, data also were adjusted for reporting delays and redistribution of risk factors among persons initially reported without sufficient information to be classified into an HIV transmission category (5). HIV prevalence rates per 100,000 population were calculated for various demographic characteristics; population denominators for rate calculations were based on official postcensus estimates for 2006 (6).

Among the estimated number of persons living with HIV at the end of 2006, 46.1% (1,715.1 per 100,000 population) were black, 34.6% (224.3 per 100,000) were white, 17.5% (585.3 per 100,000) were Hispanic, 1.4% (129.6 per 100,000) were Asian/Pacific Islander, and 0.4% (231.4 per 100,000) were American Indian/Alaska Native (Table). Males accounted for 74.8% of prevalent HIV cases (685.7 per 100,000). The greatest percentage of cases was attributed to male-to-male sexual contact, accounting for 48.1% overall (and 64.3% among men). High-risk heterosexual contact, defined as heterosexual contact with a person known to have, or to be at high risk for, HIV infection (e.g., an injection drug user), accounted for 27.6% of prevalent cases overall (12.6% of cases among men and 72.4% of cases among women). Injection drug use (IDU) accounted for 18.5% of total cases (15.9% of cases among men and 26.3% of cases among women). The remainder of cases were attributed to men who reported both male-to-male sexual contact and IDU (5.0%) or whose transmission category was classified as other (0.8%; including hemophilia, blood transfusion, perinatal exposure, and risk factors not reported or not identified). Overall, an estimated 232,700 (21.0%) persons living with HIV infection had not been diagnosed as of the end of 2006.

The HIV prevalence rate for black men (2,388.2 per 100,000 population; 95% confidence interval [CI] = 2,197.9-2,578.4) was six times the rate for white men (394.6 per 100,000; CI = 363.3-425.9) (Figure), and the rate for Hispanic men (883.4 per 100,000; CI = 784.9-982.4) was more than twice the rate for white men. The HIV prevalence rate for black women (1,122.4 per 100,000; CI = 1,002.2-1,242.5) was nearly 18 times the rate for white women (62.7 per 100,000; CI = 54.7-70.7), and the rate for Hispanic women (263.0 per 100,000; CI = 231.6-294.4) was more than four times the rate for white women. The HIV prevalence rate for black women was greater than the rate for all other groups, except for black men.
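The rate and undiagnosed-fraction arithmetic behind these figures is straightforward. A worked sketch; the point estimate of 1,106,400 is an assumption consistent with the rounded "1.1 million" and the published rate:

```python
def rate_per_100k(cases: float, population: float) -> float:
    """Prevalence rate per 100,000 population."""
    return 100_000 * cases / population

prevalent = 1_106_400      # assumed point estimate behind the rounded "1.1 million"
undiagnosed = 232_700      # overall estimate minus diagnosed HIV and AIDS prevalence
population = 100_000 * prevalent / 447.8   # denominator implied by the published rate

print(f"prevalence rate: {rate_per_100k(prevalent, population):.1f} per 100,000")
print(f"undiagnosed fraction: {undiagnosed / prevalent:.1%}")   # ~21.0%
```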
Reported by: ML Campsmith, DDS, P Rhodes, PhD, HI Hall, PhD, T Green, PhD, Div of HIV/AIDS Prevention, National Center for HIV/AIDS, Viral Hepatitis, STD, and TB Prevention, CDC.

Editorial Note: Reduced mortality resulting from the use of highly active antiretroviral therapies is a major factor contributing to the number of persons in the United States living with HIV disease (1). Additionally, more than 56,000 new HIV infections are estimated to occur annually (3). The estimate of HIV prevalence in this report is similar to an estimate for 2003 (1,039,000-1,185,000) that used the same extended back-calculation method (4). However, because of improvements in national HIV surveillance data since 2003, the two estimates cannot be compared directly. The 2006 estimate is based on a data set that 1) includes HIV diagnoses from 10 states that were not reporting in 2003 and 2) has been refined by an improved ability to identify and remove duplicate HIV case data that reflect reports by more than one state. Using the refined data set, CDC now estimates the HIV prevalence for 2003 to have been 994,000, suggesting that HIV prevalence increased from 2003 to 2006.

An estimated 3.7% of men have ever had anal sex with another male, and the proportion of men who had a male sexual partner in the past 12 months was 2.9% (7).

The findings in this report are subject to at least three limitations. First, reported HIV data used in the extended back-calculation method represent only a portion of persons in the United States who were diagnosed with HIV infection; several high-morbidity areas, including California, Illinois, Maryland, and the District of Columbia, did not contribute HIV data. Availability of reported HIV data from these areas will increase accuracy of future prevalence estimates. Second, not all persons who are infected with HIV have been diagnosed and reported to the public health surveillance system, and data must be estimated for undiagnosed persons. Finally, the data have been adjusted statistically to account for delays in reporting new cases and deaths, and cases reported without risk factor information have been redistributed among other transmission categories (5). These adjustments were based on risk redistribution assumptions from the mid-1990s that might no longer be valid, which could result in over- or underadjustment of the data.

Previous studies have indicated that persons generally reduce their sexual risk behaviors (e.g., decrease the number of sex partners and reduce unprotected intercourse through increased condom use) after being diagnosed with HIV (8). Thus, increasing the percentage of HIV-infected persons who are diagnosed and linked with effective care and prevention services has the potential to reduce new HIV infections over time. To help achieve that, CDC has focused resources on increasing testing for HIV, particularly among populations that are disproportionately affected by HIV infection. Recent CDC activities have included publication of revised recommendations for HIV testing in health-care settings (9) and creation of a new program, the Heightened National Response to the HIV/AIDS Crisis in the African American Community (10).
In 2007, as part of the President's Domestic HIV Initiative, CDC allocated funds to expand routine HIV testing, primarily among blacks. In addition to testing, expanding the number and reach of effective HIV prevention services for at-risk populations, including blacks, Hispanics, and MSM of all races, can contribute to reducing the disproportionate numbers of infections in these groups. Culturally appropriate opportunities for HIV testing, diagnosis, and access to early treatment and prevention services to reduce further HIV transmission are key to reducing new infections and ultimately decreasing HIV prevalence in the United States.

[FIGURE. Estimated human immunodeficiency virus (HIV) prevalence rates (figure not reproduced)]

# Rabies in a Dog Imported from Iraq -New Jersey, June 2008

Rabies vaccination and stray dog control have led to successful control of canine rabies in the United States. The number of rabid dogs reported decreased from approximately 5,000 in 1950 to 79 in 2006, when the canine rabies virus variant associated with dog-to-dog rabies transmission was declared eliminated in the United States (1). On June 18, 2008, a mixed-breed dog, recently shipped from Iraq into the United States, was confirmed to have rabies by the Public Health and Environmental Laboratories of the New Jersey Department of Health and Senior Services. A total of 24 additional animals in the shipment, all potentially exposed to the rabid dog, were distributed to 16 states. This report summarizes the epidemiologic investigation by the New Jersey Department of Health and Senior Services, Bergen County Department of Health, and CDC, and the ensuing public health response. These findings underscore the need for vigilance regarding rabies (and other zoonotic diseases) during animal importation to prevent the possible reintroduction and sustained transmission of canine rabies in U.S. dog populations.

# Case Report

On June 5, 2008, a shipment of 24 dogs and two cats arrived in the United States from Iraq as part of an international animal rescue operation. The goal of the operation was to reunite servicemen returning to the United States with animals they had adopted in Iraq. Upon arrival at Newark Liberty International Airport, the animals received physical examinations from volunteer licensed veterinarians. One cat became ill with neurologic signs during transport and was euthanized on arrival. The cat was tested for rabies and was negative. The remaining 24 dogs and one cat were housed for several days at the airport before distribution to their final U.S. destinations. On June 8, one of the 24 dogs, a mixed-breed aged 11 months (dog A), became ill and was taken to a veterinarian the next day. The dog was hospitalized with fever, diarrhea, wobbly gait, agitation, and crying. The dog's condition deteriorated, progressing to lateral recumbency with periods of agitation. On June 11, the dog was euthanized. Specimens were shipped to the Public Health and Environmental Laboratories for rabies testing, but delivery of the specimens was delayed. On June 18, the specimens were tested, and rabies was diagnosed. Specimens also were submitted to CDC, where rabies was confirmed on June 26 and typed as a rabies virus variant associated with dogs in the Middle East.

# Public Health Investigation

The potentially infectious period for a dog, cat, or ferret with rabies can begin as many as 10 days before the onset of clinical signs and continue throughout the clinical course until death (2).
To identify potential rabies exposure to humans or other animals while dog A was in Iraq, during transport, or at the airport shelter, an investigation was initiated by the New Jersey Department of Health and Senior Services and the Bergen County Department of Health, with participation from CDC. The dog was reportedly in the possession of a U.S. soldier in Baghdad for approximately 7 months before shipment to the United States. The dog had been kept in an indoor-outdoor run on a military base and had not been vaccinated for rabies; the owner reported no signs of illness in the dog or potential exposure to other rabid animals during the 7 months. The owner also reported no potential exposures to other persons or animals during the 2 days of potential infectivity before the dog was transferred to the animal rescue operation for shipment on May 31.

Upon arrival in the United States, none of the 24 dogs were accompanied by the valid rabies vaccination certificates required for admission by CDC animal importation regulations.* For dogs aged ≥3 months, a rabies vaccination must be administered at least 30 days before the date of arrival at a U.S. port. Five of the 24 dogs (not including dog A) reportedly had received a previous rabies vaccination; however, none of the information required for a valid rabies vaccination certificate was available, including vaccine manufacturer, lot numbers, or a certifying veterinarian signature. Twenty-one of the animals in the shipment, including dog A, had received a primary rabies vaccination in Iraq during May 28-31, immediately before being shipped to New Jersey. Because none of the dogs met rabies vaccination requirements for importation, in accordance with the importation regulation, a confinement agreement was issued by CDC, stating where the animals would be held for at least 30 days after vaccination. During shipment and upon arrival in New Jersey, all the animals were housed in separate crates; however, interviews with persons present during the animals' arrival and stay in Newark identified potential periods during which dogs, including dog A, were allowed to intermingle.

On June 10, 1 day before dog A was euthanized and 8 days before rabies was diagnosed, the remaining 23 dogs and one cat were shipped to destinations in 16 states. † Because none of the surviving animals had a verifiable history of vaccination at least 30 days before their potential exposure to dog A, CDC recommended immediate vaccination and a 6-month quarantine for all of them (2). State health departments in the 16 states were advised of the recommendations. During the public health investigation, 28 persons were evaluated for potential rabies exposure; 13 were identified with potential exposure because of direct contact with possibly infectious saliva (3) and were recommended to initiate rabies postexposure prophylaxis (PEP). All 23 dogs and one cat were located by state and local health authorities within 2 weeks of the rabies diagnosis. No clinical signs consistent with rabies were reported in the animals during 20 days of follow-up. All 24 animals continue to be monitored during the 6-month quarantine period.

Editorial Note: Rabies virus infection results in a fatal encephalomyelitis in humans and other mammals. Globally, the most common sources of human rabies are geographically distinct rabies virus variants maintained predominantly through dog-to-dog transmission (i.e., canine rabies), but sometimes with spillover § into other species.
In the United States, occasional spillover into dogs of rabies virus variants associated with wildlife has occurred. However, since 2004, no rabies case attributable to an indigenously acquired canine rabies virus variant has been reported (1). Canine rabies virus variants most commonly are imported via unvaccinated dogs from areas where rabies is enzootic, such as Asia, Africa, the Middle East, and parts of Latin America, where canine variants are responsible for most of the 55,000 human rabies deaths estimated worldwide each year (4). In May 2004, an unvaccinated puppy was flown from Puerto Rico to Massachusetts as part of an animal rescue program. The day after arrival, the puppy exhibited neurologic signs, was euthanized, and was subsequently confirmed to have rabies. Six persons were recommended to receive PEP because of potential exposure. In June 2004, an unvaccinated puppy adopted by a U.S. resident in Thailand was confirmed to have rabies by the California Department of Public Health. Of 40 persons interviewed for potential rabies exposure, 12 received PEP. In March 2007, a puppy adopted by a U.S. veterinarian while volunteering in India was confirmed to have rabies by the Alaska Department of Health and Social Services. The puppy was flown in cargo to Seattle, Washington, then adopted by another veterinarian in Juneau, Alaska, where it was flown 7 days after arrival. Of 20 persons interviewed for potential rabies exposure, eight received PEP (5,6). In all three cases, the rabies virus variant was typed as a variant circulating in dogs and terrestrial wildlife in the animal's country of origin (i.e., mongoose and canine rabies virus variants enzootic in Puerto Rico, Thailand, and India, respectively).

This report reiterates the need for education of the public regarding rabies incidence in other countries and preventing rabies exposure. While traveling in areas that are endemic for rabies, travelers should not pet stray animals. In addition, travelers should not adopt stray animals without acquiring a veterinarian's health assessment and ensuring proper animal vaccination for importation. Travelers also should consider their potential for rabies exposure from animals, understand proper wound management, and promptly report animal bites to health-care providers (7). Health information for travelers is available at cdc.gov/travel/contentyellowbook.aspx.

CDC administers federal importation regulations for dogs. These regulations allow admittance of unvaccinated dogs aged <3 months. Dogs aged ≥3 months that have not been vaccinated for rabies also must be confined until vaccinated and for 30 days after vaccination. Upon arrival in the United States, importers should declare animals to federal authorities and comply with those requirements for confinement of unvaccinated puppies. CDC's regulations were created in the early 1950s to guide persons importing dogs or cats as their personal pets. However, recent trends in dog importations have shown an increase in the numbers of animals being imported for commercial pet trade (8). CDC is working to update current regulations and better address the importation of dogs. In July 2007, the U.S. Department of Health and Human Services posted an advance notice of proposed rulemaking to begin the process of revising CDC's animal importation regulations, including those that apply to dogs and other companion animals. ¶ U.S.
animal importation regulations, rabies vaccination requirements for dogs, wildlife rabies surveillance and vaccination programs, and prophylaxis for human exposures all contribute to public health protection from rabies. Continued vigilance and partnership between federal and state agencies, as well as health professionals and pet importers, are vital to decrease the risk for reemergence of canine rabies virus in the United States.

¶ Available at .

The individual antigens (diphtheria, tetanus, and pertussis toxoids, filamentous hemagglutinin, pertactin, and poliovirus types 1, 2, and 3) contained in combined DTaP-IPV are identical to the antigens contained in GlaxoSmithKline's DTaP (Infanrix) and DTaP-Hepatitis B-IPV (Pediarix) and have been described previously (3). DTaP-IPV contains no preservatives. DTaP-IPV is administered as an intramuscular injection, preferably into the deltoid region. Two clinical trials conducted in U.S. children aged 4-6 years showed that combined DTaP-IPV and separately administered DTaP and IPV vaccines had comparable safety and reactogenicity profiles, with or without a co-administered second dose of measles, mumps, and rubella (MMR) vaccine (3,4). The immunogenicity of all antigens was similar between the treatment groups, with or without a co-administered second dose of MMR vaccine. In comparative studies, the frequency of solicited local and systemic adverse events and of serious adverse events after administration of DTaP-IPV/Hib was similar to that observed following separately administered DTaP, IPV, and Hib component vaccines (2,3). The immunologic responses after the third dose or the fourth dose of DTaP-IPV/Hib generally were comparable to those following separately administered component vaccines, and have been published (2,3). Immune responses following the first and second doses were not measured.

# Licensure of a Diphtheria and Tetanus Toxoids and Acellular Pertussis Adsorbed and Inactivated Poliovirus and Haemophilus b Conjugate Vaccine (DTaP-IPV/Hib)

# Indications and Guidance for Use

DTaP-IPV/Hib is licensed for use in children aged 6 weeks through 4 years. DTaP-IPV/Hib is indicated for use in infants and children at ages 2, 4, 6, and 15-18 months (1). DTaP-IPV/Hib is not licensed for use in children aged ≥5 years, and is not indicated for the booster dose at age 4-6 years (2). However, DTaP-IPV/Hib that is inadvertently administered to children aged ≥5 years should be counted as a valid dose. For prevention of diphtheria, tetanus, and pertussis, all children are recommended to receive 4 doses of DTaP, at ages 2, 4, 6, and 15-18 months, and a booster dose at age 4-6 years. DTaP-IPV (Kinrix) is indicated for the booster doses at age 4-6 years in children who received DTaP (Infanrix) and/or DTaP-Hepatitis B-IPV (Pediarix) as the first 3 doses and DTaP (Infanrix) as the fourth dose (1,2). This vaccine should not be administered to children aged ≥7 years; however, if DTaP-IPV (Kinrix) is inadvertently administered for an earlier dose of the DTaP and/or IPV series, the dose should be counted as valid and does not need to be repeated provided minimum interval requirements have been met (5). Data are limited on the safety and immunogenicity of interchanging DTaP vaccines from different manufacturers (6). ACIP recommends that, whenever feasible, the same manufacturer's DTaP vaccines should be used for each dose in the series; however, vaccination should not be deferred because the type of DTaP previously administered is unavailable or unknown (6). Although an 8-week interval between doses is preferred, if an accelerated schedule is needed, a minimum interval of 4 weeks should occur between the first and second doses, and the third dose should not be administered before age 14 weeks (4). The fourth dose of DTaP-IPV/Hib may be administered as early as 12 months of age if the clinician feels an opportunity to vaccinate may be missed later and if 6 months has elapsed since the third dose of DTaP-IPV/Hib (1).
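These interval rules can be expressed as simple date checks. A minimal sketch of the accelerated-schedule constraints just described; approximating months in weeks is an implementation assumption, not ACIP language:

```python
from datetime import date, timedelta

WEEK = timedelta(weeks=1)

def accelerated_schedule_ok(birth: date, d1: date, d2: date, d3: date, d4: date) -> bool:
    """Apply the constraints above: >=4 weeks between doses 1 and 2; dose 3 at
    age >=14 weeks; dose 4 at age >=12 months and >=6 months after dose 3.
    Months are approximated in weeks here (12 months ~ 52, 6 months ~ 26)."""
    return (d2 - d1 >= 4 * WEEK
            and d3 - birth >= 14 * WEEK
            and d4 - birth >= 52 * WEEK
            and d4 - d3 >= 26 * WEEK)

birth = date(2008, 1, 1)
print(accelerated_schedule_ok(birth, date(2008, 3, 1), date(2008, 4, 1),
                              date(2008, 5, 1), date(2009, 1, 15)))   # True
```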
Data are limited on the safety and immunogenicity of interchanging DTaP vaccines from different manufacturers (2). ACIP recommends that, whenever feasible, the same manufacturer's DTaP product should be used for the pertussis series; however, vaccination should not be deferred if the specific DTaP vaccine brand previously administered is unavailable or unknown (2).

For prevention of poliomyelitis, all children are recommended to receive 4 doses of IPV, at ages 2, 4, 6-18 months, and 4-6 years. DTaP-IPV/Hib may be used for 1 or more doses of the IPV series, including in children who have received 1 or more doses of another licensed IPV vaccine and who also are scheduled to receive DTaP and Hib vaccination. When an accelerated or catch-up schedule is needed, IPV doses may be administered at 4-week intervals, and the fourth dose may be counted as valid if administered as early as age 18 weeks when the proper spacing of prior doses is maintained (1). Therefore, DTaP-IPV/Hib (Pentacel) doses administered at 2, 4, 6, and 12-18 months would provide 4 valid doses of IPV under these circumstances.

The recommended vaccination schedule for Hib-TT vaccines (e.g., Pentacel) consists of a 3-dose primary series at ages 2, 4, and 6 months, and a booster dose at age 12-15 months (1). Intervals between doses of the primary series as short as 1 month are acceptable but not optimal. Minimum intervals for the booster dose vary by age at first vaccination and have been published (5). DTaP-IPV/Hib may be administered at 12 months and counted as a valid Hib-TT dose if the minimum intervals are followed; however, the safety and efficacy of DTaP-IPV/Hib in this circumstance have not been evaluated. DTaP-IPV/Hib may be administered at separate injection sites with other vaccines administered at age 12-18 months, such as hepatitis A, hepatitis B, pneumococcal conjugate, measles, mumps, and rubella (MMR), and varicella vaccines (2).

# Special Considerations

Certain American Indian/Alaska Native (AI/AN) children are at increased risk for Hib disease, particularly in the first 6 months of life (6). Furthermore, the immunologic response to different Hib conjugate vaccine preparations can vary. Compared with other Hib conjugate vaccines (e.g., Hib-TT), administration of polyribosylribitol phosphate-meningococcal outer membrane protein (PRP-OMP)-containing Hib vaccine preparations leads to a more rapid seroconversion to protective antibody concentrations within the first 6 months of life. Although for subsequent doses PRP-OMP and other Hib conjugate vaccines appear to have equal efficacy, failure to use PRP-OMP vaccines for the first dose has been associated with excess cases of Hib disease in AI/AN infants living in communities where Hib transmission is ongoing and exposure to colonized persons is likely (6,7). In addition, stocking of both PRP-OMP and other Hib conjugate vaccine preparations in the same clinic might lead to inadvertent administration of another vaccine for the first Hib dose.
For this reason, clinics that serve predominantly AI/AN children might elect to stock and use only PRP-OMP-containing Hib vaccines (6).

Different lot numbers for the different components of DTaP-IPV/Hib are included on the DTaP-IPV vial and on the Hib powder vial. Providers should record lot numbers separately for the DTaP-IPV and Hib components.

# Notice to Readers

# Get Smart About Antibiotics Week -October 6-10, 2008

October 6-10 is Get Smart About Antibiotics Week. The theme of this observance is "The power to prevent resistance is in your hands." Inappropriate use of antibiotics to treat upper respiratory infections (URIs) can result in unnecessary risk for adverse events and contribute to the likelihood of antibiotic resistance. Adverse events related to antibiotics (usually allergies or drug intolerance) resulted in an estimated 142,500 emergency department visits annually in the United States during 2004-2006 (1). In addition, inappropriate and excessive antimicrobial use can increase a community's risk for antibiotic-resistant bacterial infections that might lead to severe or prolonged illness, hospitalization, and sometimes death. Educating clinicians and the public regarding appropriate use of antibiotics might help reduce adverse drug events, including antibiotic resistance. As part of Get Smart About Antibiotics Week, health-care providers are urged to take the following actions to help reduce antibiotic resistance and other adverse drug events:
- Know when antibiotics are indicated, and avoid prescribing antibiotics for URIs such as pharyngitis, bronchitis, sinusitis, and the common cold, which are primarily caused by viruses.
- Instead of prescribing antibiotics for URIs, identify and validate patient concerns and recommend symptomatic therapy.

Additional information about Get Smart About Antibiotics Week is available at .

The course includes a review of the fundamentals of descriptive epidemiology and biostatistics, measures of association, normal and binomial distributions, confounding, statistical tests, stratification, logistic regression models, and computer programs used in epidemiology. The prerequisite is an introductory course in epidemiology, such as Epidemiology in Action or the International Course in Applied Epidemiology. Tuition will be charged. The application deadline is December 15, 2008, or until all slots have been filled.

† Children were considered fully vaccinated if they had 1) received no doses of influenza vaccine before September 1 and received 2 doses from September 1 through the date of interview or January 31 (whichever was earlier), or 2) received 1 or more doses of influenza vaccine before September 1 and received 1 or more doses during September-December.
* Depression was measured using the Patient Health Questionnaire (PHQ-9), a nine-item screening instrument that asks questions about the frequency of symptoms of depression during the preceding 2 weeks. Response categories "not at all," "several days," "more than half the days," and "nearly every day" were given a score ranging from 0 to 3. Depression was defined as a total score of 10 or higher on the PHQ-9. This cut point has been well validated and is commonly used in clinical studies that measure depression with the PHQ-9.
† Poverty status was defined using the poverty income ratio (PIR), an index calculated by dividing the family income by a poverty threshold that is based on the size of the family. A PIR of less than 1 was used as the cut point for below the poverty level.
§ 95% confidence interval.
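The PHQ-9 scoring rule described in the footnotes above reduces to summing nine 0-3 item scores and applying the cut point of 10. A minimal sketch:

```python
SCORES = {"not at all": 0, "several days": 1,
          "more than half the days": 2, "nearly every day": 3}

def phq9_depressed(responses: list[str]) -> bool:
    """Sum the nine 0-3 item scores; depression is a total of 10 or higher."""
    if len(responses) != 9:
        raise ValueError("the PHQ-9 has exactly nine items")
    return sum(SCORES[r] for r in responses) >= 10

example = ["several days"] * 4 + ["more than half the days"] * 3 + ["not at all"] * 2
print(phq9_depressed(example))   # 4*1 + 3*2 + 2*0 = 10 -> True
```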
The HIV prevalence rate for black women (1,122.4 per 100,000; CI = 1,002.2-1,242.5) was nearly 18 times the rate for white women (62.7 per 100,000; CI = 54.7-70.7), and the rate for Hispanic women (263.0 per 100,000; CI = 231.6-294.4) was more than four times the rate for white women. The HIV prevalence rate for black women was greater than the rate for all other groups, except for black men. Reported by: ML Campsmith, DDS, P Rhodes, PhD, HI Hall, PhD, T Green, PhD, Div of HIV/AIDS Prevention, National Center for HIV/ AIDS, Viral Hepatitis, STD, and TB Prevention, CDC. Editorial Note: Reduced mortality resulting from the use of highly active antiretroviral therapies is a major factor contributing to the number of persons in the United States living with HIV disease (1). Additionally, more than 56,000 new HIV infections are estimated to occur annually (3). The estimate of HIV prevalence in this report is similar to an estimate for 2003 (1,039,000-1,185,000) that used the same extended back-calculation method (4). However, because of improvements in national HIV surveillance data since 2003, the two estimates cannot be compared directly. The 2006 estimate is based on a data set that 1) includes HIV diagnoses from 10 states that were not reporting in 2003 and 2) has been refined by an improved ability to identify and remove duplicate HIV case data that reflect reports by more than one state. Using the refined data set, CDC now estimates the HIV prevalence for 2003 to have been 994,000, suggesting that HIV 3.7% ever have had anal sex with another male, and the proportion of men who had a male sexual partner in the past 12 months was 2.9% (7). The findings in this report are subject to at least three limitations. First, reported HIV data used in the extended back-calculation method represent only a portion of persons in the United States who were diagnosed with HIV infection; several high-morbidity areas, including California, Illinois, Maryland, and the District of Columbia, did not contribute HIV data. Availability of reported HIV data from these areas will increase accuracy of future prevalence estimates. Second, not all persons who are infected with HIV have been diagnosed and reported to the public health surveillance system, and data must be estimated for undiagnosed persons. Finally, the data have been adjusted statistically to account for delays in reporting new cases and deaths, and cases reported without risk factor information have been redistributed among other transmission categories (5). These adjustments were based on risk redistribution assumptions from the mid-1990s that might no longer be valid, which could result in over-or underadjustment of the data. Previous studies have indicated that persons generally reduce their sexual risk behaviors (e.g., decrease the number of sex partners and reduce unprotected intercourse through increased condom use) after being diagnosed with HIV (8). Thus, increasing the percentage of HIV-infected persons who are diagnosed and linked with effective care and prevention services has the potential to reduce new HIV infections over time. To help achieve that, CDC has focused resources on increasing testing for HIV, particularly among populations that are disproportionately affected by HIV infection. Recent CDC activities have included publication of revised recommendations for HIV testing in health-care settings (9) and creation of a new program, the Heightened National Response to the HIV/AIDS Crisis in the African American Community (10). 
In 2007, as part of the President's Domestic HIV Initiative, CDC allocated funds to expand routine HIV testing, primarily among blacks. In addition to testing, expanding the number and reach of effective HIV prevention services for at-risk populations, including blacks, Hispanics, and MSM of all races, can contribute to reducing the disproportionate numbers of infections in these groups. Culturally appropriate opportunities for HIV testing, diagnosis, and access to early treatment and prevention services to reduce further HIV transmission are key to reducing new infections and ultimately decreasing HIV prevalence in the United States. # FIGURE. Estimated human immunodeficiency virus (HIV # Rabies in a Dog Imported from Iraq -New Jersey, June 2008 Rabies vaccination and stray dog control have led to successful control of canine rabies in the United States. The number of rabid dogs reported decreased from approximately 5,000 in 1950 to 79 in 2006, when the canine rabies virus variant associated with dog-to-dog rabies transmission was declared eliminated in the United States (1). On June 18, 2008, a mixed-breed dog, recently shipped from Iraq into the United States, was confirmed to have rabies by the Public Health and Environmental Laboratories of the New Jersey Department of Health and Senior Services. A total of 24 additional animals in the shipment, all potentially exposed to the rabid dog, were distributed to 16 states. This report summarizes the epidemiologic investigation by the New Jersey Department of Health and Senior Services, Bergen County Department of Health, and CDC, and the ensuing public health response. These findings underscore the need for vigilance regarding rabies (and other zoonotic diseases) during animal importation to prevent the possible reintroduction and sustained transmission of canine rabies in U.S. dog populations. # Case Report On June 5, 2008, a shipment of 24 dogs and two cats arrived in the United States from Iraq as part of an international animal rescue operation. The goal of the operation was to reunite servicemen returning to the United States with animals they had adopted in Iraq. Upon arrival at Newark Liberty International Airport, the animals received physical examinations from volunteer licensed veterinarians. One cat became ill with neurologic signs during transport and was euthanized on arrival. The cat was tested for rabies and was negative. The remaining 24 dogs and one cat were housed for several days at the airport before distribution to their final U.S. destinations. On June 8, one of the 24 dogs, a mixed-breed aged 11 months (dog A), became ill and was taken to a veterinarian the next day. The dog was hospitalized with fever, diarrhea, wobbly gait, agitation, and crying. The dog's condition deteriorated, progressing to lateral recumbency with periods of agitation. On June 11, the dog was euthanized. Specimens were shipped to the Public Health and Environmental Laboratories for rabies testing, but delivery of the specimens was delayed. On June 18, the specimens were tested, and rabies was diagnosed. Specimens also were submitted to CDC, where rabies was confirmed on June 26 and typed as a rabies virus variant associated with dogs in the Middle East. # Public Health Investigation The potentially infectious period for a dog, cat, or ferret with rabies can begin as many as 10 days before the onset of clinical signs and continue throughout the clinical course until death (2). 
To identify potential rabies exposure to humans or other animals while dog A was in Iraq, during transport, or at the airport shelter, an investigation was initiated by the New Jersey Department of Health and Senior Services and the Bergen County Department of Health, with participation from CDC. The dog was reportedly in the possession of a U.S. soldier in Baghdad for approximately 7 months before shipment to the United States. The dog had been kept in an indoor-outdoor run on a military base and had not been vaccinated for rabies; the owner reported no signs of illness in the dog or potential exposure to other rabid animals during the 7 months. The owner also reported no potential exposures to other persons or animals during the 2 days of potential infectivity before the dog was transferred to the animal rescue operation for shipment on May 31. Upon arrival in the United States, none of the 24 dogs were accompanied by the valid rabies vaccination certificates required for admission by CDC animal importation regulations.* For dogs aged >3 months, a rabies vaccination must be administered at least 30 days before the date of arrival at a U.S. port. Five of the 24 dogs (not including dog A) reportedly had received a previous rabies vaccination; however, none of the information required for a valid rabies vaccination certificate was available, including vaccine manufacturer, lot numbers, or a certifying veterinarian signature. Twenty-one of the animals in the shipment, including dog A, had received a primary rabies vaccination in Iraq during May 28-31, immediately before being shipped to New Jersey. Because none of the dogs met rabies vaccination requirements for importation, in accordance with the importation regulation, a confinement agreement was issued by CDC, stating where the animals would be held for at least 30 days after vaccination. During shipment and upon arrival in New Jersey, all the animals were housed in separate crates; however, interviews with persons present during the animals' arrival and stay in Newark identified potential periods during which dogs, including dog A, were allowed to intermingle. On June 10, 1 day before dog A was euthanized and 8 days before rabies was diagnosed, the remaining 23 dogs and one cat were shipped to destinations in 16 states. † Because none of the surviving animals had a verifiable history of vaccination at least 30 days before their potential exposure to dog A, CDC recommended immediate vaccination and a 6-month quarantine for all of them (2). State health departments in the 16 states were advised of the recommendations. During the public health investigation, 28 persons were evaluated for potential rabies exposure; 13 were identified with potential exposure because of direct contact with possibly infectious saliva (3) and were recommended to initiate rabies postexposure prophylaxis (PEP). All 23 dogs and one cat were located by state and local health authorities within 2 weeks of the rabies diagnosis. No clinical signs consistent with rabies were reported in the animals during 20 days of follow-up. All 24 animals continue to be monitored during the 6-month quarantine period. Editorial Note: Rabies virus infection results in a fatal encephalomyelitis in humans and other mammals. Globally, the most common sources of human rabies are geographically distinct rabies virus variants maintained predominantly through dogto-dog transmission (i.e., canine rabies), but sometimes with spillover § into other species. 
In the United States, occasional spillover into dogs of rabies virus variants associated with wildlife has occurred. However, since 2004, no rabies case attributable to an indigenously acquired canine rabies virus variant has been reported (1). Canine rabies virus variants most commonly are imported via unvaccinated dogs from areas where rabies is enzootic, such as Asia, Africa, the Middle East, and parts of Latin America, where canine variants are responsible for most of the 55,000 human rabies deaths estimated worldwide each year (4). In May 2004, an unvaccinated puppy was flown from Puerto Rico to Massachusetts as part of an animal rescue program. The day after arrival, the puppy exhibited neurologic signs, was euthanized, and was subsequently confirmed to have rabies. Six persons were recommended to receive PEP because of potential exposure. In June 2004, an unvaccinated puppy adopted by a U.S. resident in Thailand was confirmed to have rabies by the California Department of Public Health. Of 40 persons interviewed for potential rabies exposure, 12 received PEP. In March 2007, a puppy adopted by a U.S. veterinarian while volunteering in India was confirmed to have rabies by the Alaska Department of Health and Social Services. The puppy was flown in cargo to Seattle, Washington, then adopted by another veterinarian in Juneau, Alaska, where it was flown 7 days after arrival. Of 20 persons interviewed for potential rabies exposure, eight received PEP (5,6). In all three cases, the rabies virus variant was typed as a variant circulating in dogs and terrestrial wildlife in the animal's country of origin (i.e., mongoose and canine rabies virus variants enzootic in Puerto Rico, Thailand, and India, respectively). This report reiterates the need for education of the public regarding rabies incidence in other countries and preventing rabies exposure. While traveling in areas that are endemic for rabies, travelers should not pet stray animals. In addition, travelers should not adopt stray animals without acquiring a veterinarian's health assessment and ensuring proper animal vaccination for importation. Travelers also should consider their potential for rabies exposure from animals, understand proper wound management, and promptly report animal bites to health-care providers (7). Health information for travelers is available at http://wwwn. cdc.gov/travel/contentyellowbook.aspx. CDC administers federal importation regulations for dogs. These regulations allow admittance of unvaccinated dogs aged <3 months, provided the importer signs an agreement to vaccinate the dog at age 3 months and confine the animal for 30 days after the vaccination. Dogs aged >3 months that have not been vaccinated for rabies also must be confined until vaccinated and for 3 months after vaccination. Upon arrival in the United States, importers should declare animals to federal authorities and comply with those requirements for confinement of unvaccinated puppies. CDC's regulations were created in the early 1950s to guide persons importing dogs or cats as their personal pets. However, recent trends in dog importations have shown an increase in the numbers of animals being imported for commercial pet trade (8). CDC is working to update current regulations and better address the importation of dogs. In July 2007, the U.S. 
Department of Health and Human Services posted an advance notice of proposed rulemaking to begin the process of revising CDC's animal importation regulations, including those that apply to dogs and other companion animals. ¶ U.S. animal importation regulations, rabies vaccination requirements for dogs, wildlife rabies surveillance and vaccination programs, and prophylaxis for human exposures all contribute to public health protection from rabies. Continued vigilance and partnership between federal and state agencies, as well as health professionals and pet importers, are vital to decrease the risk for reemergence of canine rabies virus in the United States. ¶ Available at http://www.cdc.gov/ncidod/dq/anprm/index.htm. The individual antigens (diphtheria, tetanus, and pertussis toxoids, filamentous hemagglutinin, pertactin, and poliovirus types 1, 2, and 3) contained in combined DTaP-IPV are identical to the antigens contained in GlaxoSmithKline's DTaP (Infanrix) and DTaP-Hepatitis B-IPV (Pediarix) and have been described previously (3). DTaP-IPV contains no preservatives. DTaP-IPV is administered as an intramuscular injection, preferably into the deltoid region. Two clinical trials conducted in U.S. children aged 4-6 years showed that combined DTaP-IPV and separately administered DTaP and IPV vaccines had comparable safety and reactogenicity profiles, with or without a co-administered second dose of measles, mumps, and rubella (MMR) vaccine (3,4). The immunogenicity of all antigens was similar between the treatment groups, with or without a co-administered second dose of MMR vaccine. In comparative studies, the frequency of solicited local and systemic adverse events and of serious adverse events after administration of DTaP-IPV/Hib was similar to that observed following separately administered DTaP, IPV, and Hib component vaccines (2,3). The immunologic responses after the third dose or the fourth dose of DTaP-IPV-Hib generally were comparable to those following separately administered component vaccines, and have been published (2,3). Immune responses following the first and second doses were not measured. # Licensure of a Diphtheria and Tetanus Toxoids and Acellular Pertussis # Indications and Guidance for Use # Indications and Guidance for Use DTaP-IPV/Hib is licensed for use in children aged 6 weeks through 4 years. DTaP-IPV/Hib is indicated for use in infants and children at ages 2, 4, 6, and 15-18 months (1). DTaP-IPV/ Hib is not licensed for use in children aged >5 years, and is not indicated for the booster dose at age 4-6 years (2). However, DTaP-IPV/Hib that is inadvertently administered to children aged >5 years should be counted as a valid dose. For prevention of diphtheria, tetanus, and pertussis, all children are recommended to receive 4 doses of DTaP, at ages 2, 4, 6, and 15-18 months, and a booster dose at age 4-6 years. who received DTaP (Infanrix) and/or DTaP-Hepatitis B-IPV (Pediarix) as the first 3 doses and DTaP (Infanrix) as the fourth dose (1,2). This vaccine should not be administered to children aged <4 years or >7 years; however, if DTaP-IPV (Kinrix) is inadvertently administered for an earlier dose of the DTaP and/or IPV series, the dose should be counted as valid and does not need to be repeated provided minimum interval requirements have been met (5). Data are limited on the safety and immunogenicity of interchanging DTaP vaccines from different manufacturers (6). 
ACIP recommends that, whenever feasible, the same manufacturer's DTaP vaccines should be used for each dose in the series; however, vaccination should not be deferred because the type of DTaP previously administered is unavailable or unknown (6). Although an 8-week interval between doses is preferred, if an accelerated schedule is needed, a minimum interval of 4 weeks should occur between the first and second doses, and the third dose should not be administered before age 14 weeks (4). The fourth dose of DTaP-IPV/Hib may be administered as early as 12 months of age if the clinician feels an opportunity to vaccinate may be missed later and if 6 months has elapsed since the third dose of DTaP-IPV/Hib (1). Data are limited on the safety and immunogenicity of interchanging DTaP vaccines from different manufacturers (2). ACIP recommends that, whenever feasible, the same manufacturer's DTaP product should be used for the pertussis series; however, that vaccination should not be deferred if the specific DTaP vaccine brand previously administered is unavailable or unknown (2). # Licensure of a Diphtheria and Tetanus Toxoids and For prevention of poliomyelitis, all children are recommended to receive 4 doses of IPV, at ages 2, 4, 6-18 months, and 4-6 years. DTaP-IPV/Hib may be used for 1 or more doses of the IPV series, including in children who have received 1 or more doses of another licensed IPV vaccine and who also are scheduled to receive DTaP and Hib vaccination. When an accelerated or catch-up schedule is needed, IPV doses may be administered at 4-week intervals and the fourth dose counted as valid if administered as early as age 18 weeks when the proper spacing of prior doses is maintained (1). Therefore, DTaP-IPV/Hib (Pentacel) doses administered at 2, 4, 6, and 12-18 months would provide 4 valid doses of IPV under these circumstances. The recommended vaccination schedule for Hib-TT vaccines (e.g., Pentacel) consists of a 3-dose primary series at ages 2, 4, and 6 months, and a booster dose at age 12-15 months (1). Intervals between doses of the primary series as short as 1 month are acceptable but not optimal. Minimum intervals for the booster dose vary by age at first vaccination and have been published (5). DTaP-IPV/Hib may be administered at 12 months and counted as a valid Hib-TT dose if the minimum intervals are followed; however, the safety and efficacy of DTaP-IPV/Hib in this circumstance have not been evaluated. DTaP-IPV/Hib may be administered at separate injection sites with other vaccines administered at age 12-18 months, such as hepatitis A, hepatitis B, pneumococcal conjugate, measles, mumps, and rubella (MMR), and varicella vaccines (2). # Special Considerations Certain American Indian/Alaska Native (AI/AN) children are at increased risk for Hib disease, particularly in the first 6 months of life (6). Furthermore, the immunologic response to different Hib conjugate vaccine preparations can vary. Compared with other Hib conjugate vaccines (e.g., Hib-TT), administration of polyribosylribitol phosphate-meningococcal outer membrane protein (PRP-OMP)-containing Hib vaccine preparations leads to a more rapid seroconversion to protective antibody concentrations within the first 6 months of life. 
Although for subsequent doses, PRP-OMP and other Hib conjugate vaccines appear to have equal efficacy, failure to use PRP-OMP vaccines for the first dose has been associated with excess cases of Hib disease in AI/AN infants living in communities where Hib transmission is ongoing and exposure to colonized persons is likely (6,7). In addition, stocking of both PRP-OMP and other Hib conjugate vaccine preparations in the same clinic might lead to inadvertent administration of another vaccine for the first Hib dose. For this reason, clinics that serve predominantly AI/AN children might elect to stock and use only PRP-OMP-containing Hib vaccines (6). Different lot numbers for the different components of DTaP-IPV/Hib are included on the DTaP-IPV vial and on the Hib powder vial. Providers should record lot numbers separately for the DTaP-IPV and Hib components. # Notice to Readers # Get Smart About Antibiotics Week -October 6-10, 2008 October 6-10 is Get Smart About Antibiotics Week. The theme of this observance is "The power to prevent resistance is in your hands." Inappropriate use of antibiotics to treat upper respiratory infections (URIs) can result in unnecessary risk for adverse events and contribute to the likelihood of antibiotic resistance. Adverse events related to antibiotics (usually aller- gies or drug intolerance) resulted in an estimated 142,500 emergency department visits annually in the United States during 2004-2006 (1). In addition, inappropriate and excessive antimicrobial use can increase a community's risk for antibiotic-resistant bacterial infections that might lead to severe or prolonged illness, hospitalization, and sometimes death. Educating clinicians and the public regarding appropriate use of antibiotics might help reduce adverse drug events, including antibiotic resistance. As part of Get Smart About Antibiotics Week, health-care providers are urged to take the following actions to help reduce antibiotic resistance and other adverse drug events: • Know when antibiotics are indicated, and avoid prescribing antibiotics for URIs such as pharyngitis, bronchitis, sinusitis, and the common cold, which are primarily caused by viruses. • Instead of prescribing antibiotics for URIs, identify and validate patient concerns and recommend symptomatic therapy. # Additional information about Get Smart About Antibiotics Week is available at http://www.cdc.gov/getsmart. The course includes a review of the fundamentals of descriptive epidemiology and biostatistics, measures of association, normal and binomial distributions, confounding, statistical tests, stratification, logistic regression models, and computer programs used in epidemiology. The prerequisite is an introductory course in epidemiology, such as Epidemiology in Action or the International Course in Applied Epidemiology. Tuition will be charged. The application deadline is December 15, 2008, or until all slots have been filled. † Children were considered fully vaccinated if they had 1) received no doses of influenza vaccine before September 1 and received 2 doses from September 1 through the date of interview or January 31 (whichever was earlier), or 2) received 1 or more doses of influenza vaccine before September 1 and received 1 or more doses during September-December. * Depression was measured using the Patient Health Questionnaire (PHQ-9), a nine-item screening instrument that asks questions about the frequency of symptoms of depression during the preceding 2 weeks. 
Response categories "not at all," "several days," "more than half the days," and "nearly every day" were given a score ranging from 0 to 3. Depression was defined as a total score of 10 or higher on the PHQ-9. This cut point has been well validated and is commonly used in clinical studies that measure depression with the PHQ-9. † Poverty status was defined using the poverty income ratio (PIR), an index calculated by dividing the family income by a poverty threshold that is based on the size of the family. A PIR of less than 1 was used as the cut point for below the poverty level. § 95% confidence interval. # Poverty status Percentage # Acknowledgments The findings in this report are based, in part, on contributions by J Bateman, A Plummer, MD, H Gunness, Newark Quarantine Station, S Shapiro, MHA, New York City Quarantine Station, Div of Global Migration and Quarantine, National Center for Preparedness, Detection, and Control of Infectious Diseases; T Cieslak, MD, Coordinating Office for Terrorism Preparedness and Emergency Response; and P Yager and I Kuzmin, MD, Div of Viral and Rickettsial Diseases, National Center for Zoonotic, Vector-Borne, and Enteric Diseases, CDC.
None
None
b73ce531e4a2381d4f9c3063cdd9179ad2fe5815
cdc
None
This publication offers a comprehensive collection of 70 "building blocks," which are primary prevention strategies that merit consideration by state and local governments and others in position to reduce exposure to hazards in housing and thereby help meet the Healthy People 2010 goal of eliminating childhood lead poisoning. Exemplary strategies span a broad spectrum which includes targeting high-risk properties; widely instituting safe work practices; building community capacity to check for hazards and work safety; delivering hazard assessment, control and prevention services; motivating action; screening high-risk housing; expanding financial resources; strengthening enforcement; raising public awareness and support; and establishing valuable partnerships. A strategy has been considered for inclusion as a building block if it is sensitive to the economics of affordable housing, consistent with the principles of public health, holds the potential for broad-scale impact, stands a reasonable possibility of implementation, and offers promise for reducing lead and other environmental health hazards in high-risk housing. The summary of each building block is coupled with an illustration of how the strategy has been implemented and contact information for at least one individual who is knowledgeable about this activity. The purpose of disseminating Building Blocks for Primary Prevention: Protecting Children from Lead-Based Paint Hazards is to allow programs and policymakers easy access to information about innovative and promising strategies that span the spectrum of primary prevention, from which they may select one or several to pursue based on their jurisdiction's needs and political and economic realities.# INTRODUCTION # Context and Background Exposure to lead continues to poison young children in the United States. Estimates based on data from 1999 and 2000 indicate that about 2.2% of children aged 1-5 years (about 434,000 children) have blood lead level (BLL) elevations at or above 10 micrograms per deciliter (≥10 µg/dL). Healthy People 2010 (Objective 8-11) i calls for the eradication of lead poisoning as a public health problem by the year 2010 through the elimination of elevated blood lead levels in children. Over the past decade, research has greatly expanded understanding of the sources and pathways of lead exposure in the residential environment and the effectiveness of a range of strategies to make housing safe. While children can be exposed to lead from a variety of other sources and pathways, the most significant cause of exposure is the presence of lead-based paint hazards in their homes, such as lead in non-intact paint, interior settled dust, exterior soil and dust, and hazards created by improperly conducted renovation work. Focus on the presence of lead-based paint and its lead content has given way to recognition of the importance of the condition of painted surfaces in older homes and the dangers of lead-contaminated dust. Chronic ingestion of settled lead dust on floors, windowsills, and other surfaces is now recognized as the foremost pathway of young children's exposure to lead in the home environment, and dust lead levels are recognized as the strongest predictor of risk. 
Based on the recommendations of an interagency working group tasked with planning to achieve the 2010 lead elimination goal, the Federal Strategy for Eliminating Childhood Lead Poisoning ii emphasizes the essential need to require action before children are poisoned-by making the US housing stock lead-safe. The latest national survey of lead hazards in US housing makes clear the magnitude and complexity of this challenge: more than one-quarter of all US housing units pose "significant lead hazards." iii Yet the impact of most lead poisoning prevention programs is limited to the fraction of properties that are occupied by a child with an elevated blood lead level (less than two percent of hazardous units nationwide). While the need continues to improve blood lead screening and case management services, achieving the national 2010 goal of eliminating lead poisoning as a public health problem requires significantly increasing the impact of primary prevention strategies to make high-risk housing lead-safe. The Centers for Disease Control and Prevention (CDC) has a longstanding responsibility and commitment to protecting children from lead poisoning. Since the early 1970s, CDC has made grants to help state and local health department lead poisoning prevention programs screen children at risk for lead poisoning or elevated blood lead levels (EBL), perform environmental investigations to determine the source children's exposure, and provide follow-up case management and educational services. At the national level, CDC works closely with other federal agencies committed to lead poisoning prevention, notably the US Department of Housing and Urban Development's Office of Healthy Homes and Lead Hazard Control (HUD OHHLHC), the US Environmental Protection Agency (EPA), and, within the US Department of Health and Human Services, the Centers for Medicare and Medicaid Services (CMS) and the Office of Community Services (OCS). The Lead Poisoning Prevention Branch of CDC is fulfilling its commitment to the 2010 lead elimination goal through its grant program's requirement that jurisdictions develop and implement a strategic plan for elimination that includes primary prevention, partnering, and program evaluation. Through this Building Blocks publication, the Branch now offers grantees and others access to a compendium of promising primary prevention approaches to reduce exposure to lead paint hazards. # Building Blocks for Primary Prevention: Protecting Children from Lead-Based Paint Hazards State and local childhood lead poisoning prevention programs (CLPPPs) universally acknowledge the importance of primary prevention and are beginning to address it in their strategic plans and funding applications. However, many programs' primary prevention efforts are confined to parent education about hygiene, nutrition, and housekeeping, despite research that makes clear the limitations of these interventions for families whose homes pose significant lead hazards. Inability to institute durable primary prevention is caused in part by the pressure to focus resources and attention on secondary prevention by identifying and managing individual cases of elevated childhood blood lead level (BLL). Indeed, in communities where follow-up on actual poisonings is limited to educating family members about lead hazards and behavioral change (because public resources are not available to control identified lead hazards and halt further exposure), meaningful primary prevention can seem like an extremely remote target. 
Programs facing these circumstances need ideas for sharing responsibility within the jurisdiction to stop repeat offenders, expand access to lead-safe housing, and ultimately arrest the cycle of inferior housing that continually produces new poisonings. While no city or state with a significant stock of leaded housing has successfully assembled all of the elements needed to make primary prevention a reality across the jurisdiction, state and local lead poisoning prevention programs across the country and their partners in other agencies and the private sector have implemented a multitude of innovative and successful primary prevention strategies over past years. Workshops and conferences periodically feature model programs, but the prospect of replicating an entire program with multiple components and elements can be daunting to the CLPPP seeking to evolve beyond screening and case management. Difficulty in achieving program transformation to primary prevention is only compounded within an overwhelmed public agency that is surrounded by a change-resistant or risk-averse political environment. Since most successful primary prevention programs consist of multiple elements, specific strategies can be considered individually or in combination. The multitude of innovative strategies to identify, control, and prevent lead hazards in housing before a child is poisoned that are currently being implemented across the country has never been systematically documented or described in a way that makes information about their design and implementation readily accessible. Programs and their jurisdictions need this information at the "building block" level in order to decide which strategies to pursue based on local needs and conditions. This document identifies and describes individual building blocks across the spectrum of primary prevention strategies in order to create access to knowledge about tangible and realistic opportunities for progress and program evolution in identifying, controlling, and preventing lead poisoning and other housing-related health hazards. # Scope and Limitations The research for Building Blocks for Primary Prevention: Protecting Children from Lead-Based Paint Hazards was guided by the descriptions of primary prevention in CDC's 1997 screening guidelines and 2002 case management guidelines, which emphasize eliminating and controlling toxic exposures at the source. While primary prevention necessarily encompasses activities that address all sources of exposure to lead, this publication is focused on strategies for preventing and controlling lead hazards in housing, the foremost cause of poisoning. # Building Blocks for Primary Prevention: Protecting Children from Lead-Based Paint Hazards The heart of the challenge to public health agencies is leveraging action to make privately owned housing leadsafe. Many CLPPPs are increasingly viewing leveraging action to address lead hazards in housing as a part of their leadership role. While public health program directors and staff are clearly the primary audience for Building Blocks, some strategies entail fostering change in other organizations and systems to advance prevention in high-risk housing. The summary of each building block is coupled with an illustration of how the strategy has been implemented and contact information for at least one individual who is knowledgeable about this activity. Building Blocks has some inherent limitations that deserve note. 
The information listed in illustrations (partners, resources, constraints) is not comprehensive but rather a citation of specific and strong examples of building blocks. Results of efforts to replicate a given building block will vary depending on individual state and local laws, maturity of partnerships, political will, and the existence and strength of community-based partners. The applicability of a building block selected for implementation will depend on the maturity and capacity of the jurisdiction and its CLPPP. Inclusion of building blocks in this document does not assure that they have been evaluated for their outcome or transferability. # Organization of Building Blocks The description of each strategy reflects the template (Appendix A) that has shaped the research and compilation of Building Blocks. The generalized information includes the title, brief summary, potential applications and benefits (including scope of impact), and critical elements such as staffing patterns, other resource needs, institutional capacity, cost and timing considerations, and indication of feasibility of implementation. At least one real-world illustration amplifies building block descriptions by documenting the scope and particulars of the example in a given jurisdiction or target area; the staffing and other resources utilized; magnitude of its impact; factors essential to implementation; limitations encountered; estimated potential for replication; and specific contact information and references for additional information. The illustrations offer strong examples of how each strategy has been recently implemented but do not provide an inclusive or exhaustive review of all efforts to ever plumb the benefits of the given strategy. This document displays building blocks grouped by the category that best fits their essential contribution: - An alphabetical index of of the building blocks follows the Introduction. The internet edition of Building Blocks for Primary Prevention: Protecting Children from Lead-Based Paint Hazards will be available in Summer 2005 through the website of the Lead Poisoning Prevention Branch, www.cdc.gov/nceh/ lead. Through this site, it will be possible to easily select sections of Building Blocks for online review and search by keyword, location of illustration, category, key actor or partner, and similar criteria. # ILLUSTRATION #2 OF STRATEGY IN PRACTICE In 2002, the New York Public Interest Research Group (NYPIRG) used data from the health department to issue a report about childhood lead poisoning disparities in New York City. This study was conducted in conjunction with a campaign to pass the new lead poisoning prevention law in New York City that was enacted in February 2002. NYPIRG was aware that while the number of children poisoned by lead in New York had been declining for years, there appeared to be stubborn pockets of poisoning throughout the city, particularly in low-income neighborhoods. NYPIRG conducted a small area analysis of the data-they first analyzed the data by census block and then aggregated it by ZIP code. The analysis confirmed that there were indeed concentrated pockets of childhood lead poisoning in New York, many of which were located in low-income areas with tracts of substandard housing. 
In order to convince City Council members that the existing lead poisoning prevention policy was not working for all of the city's children, NYPIRG decided they needed to illustrate the extent of the disparities in New York # POTENTIAL OBSTACLES/BARRIERS In some areas, landlords may continue to be resistant to change or cooperative working relationships with government regulators and/or community-based organizations, despite persistent efforts to engage them. In other instances, landlords may deem necessary efforts "too expensive," setting up adversarial relationships this strategy is supposed to avoid. Other resource requirements: Appropriate mechanisms for delivery and dissemination of desired educational messages are needed, but the mechanisms can vary dramatically depending on the design of the educational initiative. Typical educational methods include brochures, fact sheets, and web sites. The considerable range of materials already developed on lead safety obviates the need to develop materials, although modifications should be made to incorporate local referral resources. Programs can augment traditional materials with more attention-getting vehicles, such as diaper bags and other promotional items. Any materials used must be accessible and understandable to those who live in high-risk areas, where language barriers and reading levels can present a challenge. Programs will also need data and surveillance information. # Building Blocks for Primary Prevention: Protecting Children from Lead-Based Paint Hazards Building Awareness and Public Support EXPAND LEAD SAFETY EDUCATION TO EXPECTANT AND NEW PARENTS Institutional capacity required: Health education initiatives rarely require special authorization. # Cost considerations: Adding a new subject to an existing education program is more cost-efficient than implementing free-standing education focused only on lead safety. Printing materials will cost nominal amounts per parent. Timing issues: Can be implemented anytime Feasibility of Implementation: Variable. Feasibility depends on the availability of people to manage the effort and resources to support it. # POTENTIAL OBSTACLES/BARRIERS One potential obstacle is reaching agreement on a specific strategy deemed most effective for the circumstances, as there are so many possible combinations of messages, messengers, delivery mechanisms, and possible target audiences. In addition, it can be uncomfortable to make lead-safety recommendations to parents in communities where resources do not exist to assist families in repairing lead hazards. A barrier to the effectiveness of education on lead safety is the fact that expectant and new parents may already be overwhelmed with other recent messages on multiple weighty issues and have many other concerns and priorities. Discussion of possible lead exposure in utero may help parents to focus attention on the immediacy of lead safety. Programs may also encounter unexpected challenges in developing partnerships with seemingly natural partners. For example, one program reported difficulty in convincing obstetricians to participate in such an educational campaign. # ADDITIONAL RESOURCES N/A # ILLUSTRATION #1 OF STRATEGY IN PRACTICE MA CLPPP conducted a project to educate pregnant women about lead hazards and encourage them to adopt preventive behaviors, and to educate doctors and staff members for community health centers and agencies in the target communities. The project's three core activities were: 1. 
Development and distribution of bilingual prenatal lead awareness kits packaged in large attractive diaper bags. The kit included educational fact sheets and brochures, promotional items, a community resource card, an evaluation card, and a voucher for free lead-safety training for a family member. The pre-existing educational materials were provided as bilingual documents, in English and one other language-Spanish, Khmer, or Vietnamese. On a limited basis, materials were also distributed in Russian, Chinese, and Portuguese. 2. Recruitment of community health centers and agencies that serve pregnant women in the target communities (e.g., WIC, Head Start, etc.) to educate their staffs and clients and distribute information kits; and, 3. Sponsorship of Grand Rounds training (offering CEUs) for physicians and other medical and program staff in the four communities. This project was supported by a nine-month CDC supplemental grant of $100,000. Staffing utilized: Staffing was routinely about 1.25 FTE, but spiked during busy periods associated with trainings and implementation (e.g., about 5 FTEs for a few days). # Other resources utilized: N/A Factors essential to implementation: Staff felt that success was dependent on the availability of a full-time project coordinator and-for effective materials distribution and training recruitment channels-on the network of existing contacts in the community. MA CLPPP was able to use existing MOUs with some partners, which expedited administrative processes. Limitations/challenges/problems encountered: Major challenges were in the areas of deadlines and evaluation. Various administrative factors meant that the program had about seven months to hire staff and complete the project, operating within the constraints of state governmental systems for purchasing. Due to time and realities, the program was limited to a self-reporting evaluation. Logistical constraints, including an interpretation of HIPAA requirements, prevented an evaluation approach involving tracking individual women's names. Magnitude of Impact/Potential Impact: Approximately 3,500 diaper bags/information kits were distributed in 4 months, with many distributed in high-risk areas; 29 agencies signed Memoranda of Understanding (MOU) and partnered in the project; 138 self-reported evaluation cards were returned from kits; and Grand Rounds attendees gave high evaluation marks. # Potential for replication: Moderate # Contacts for Specific Information Xanthi Scrimgeour Health Education Coordinator 413-586-7525 x1122 or 1-800-445-1255 [email protected] Paul Hunter Director, MA CLPPP 617-624-5585 [email protected] # References for additional information 1. An August 2003 report called "CDC Supplemental Prenatal Grant: Overview and Evaluation" describes the project and its results in detail. The report includes a review conducted with the New England Lead Coordinating Council of similar prenatal lead education activities that had been undertaken in other states. # ILLUSTRATION #2 OF STRATEGY IN PRACTICE As part of a nine-month project focused on increasing testing rates for lead among pregnant women in Alameda and Fresno counties and prompting early intervention, CA CLPPP developed, disseminated, and field-tested educational materials for at-risk pregnant women. To this end, brochures were developed urging women to get tested, and explaining how lead gets into the body, how it can affect a baby, and how to create a lead-safe environment. 
County-specific phone numbers were provided so that women could easily seek medical care and information regarding lead and pregnancy. After completion, 25,000 packets of culturally-appropriate outreach Pediatricians who are knowledgeable about lead safety and healthy homes can provide better health care for children at high risk of toxic exposures, advocate for relevant solutions, and suggest primary prevention tools for parents. A recent study revealed that many pediatricians want to better understand lead exposure and other environmental history components in patients' backgrounds, yet fewer than one in five has any formal training in making inquiries on lead and other chemical exposures. Medical schools and residency review committees can work to train pediatricians in environmental history-taking to identify possible lead exposures in the home and to help prevent exposure. Integrating such specialized training into required medical education is the easiest method, as most medical schools already require some level of lead poisoning prevention education during pediatric clinical rotations. Some state medical societies may get involved in mandating primary prevention education requirements, and residency review committees will also be involved. Requiring a rotation at a children's hospital, community center, or local health department can give pediatric students even more first-hand experience with childhood lead poisoning and add extra incentives for them to take steps toward primary prevention in their future practices. Additional course offerings on primary prevention and environmental history-taking could also be included in physicians' required Continuing Medical Education (CME). # Building Blocks for Primary Prevention: Protecting Children from Lead-Based Paint Hazards Building Awareness and Public Support INTEGRATE LEAD POISONING PREVENTION EDUCATION INTO PHYSICIAN EDUCATION CURRICULA Institutional capacity required: Curricula complete with primary prevention and environmental history-taking education is the main institutional requirement for this strategy. # Cost considerations: No additional costs would be incurred if this training on environmental history-taking and lead safety is integrated into existing education and training systems for pediatricians. # Timing issues: None Feasibility of Implementation: Very high. Because some lead poisoning prevention is already built into most medical school curricula as all students go through their pediatric rotations, putting more emphasis on primary prevention and environmental history-taking would require only modest adjustments to curricula, with little or no conflict with other course priorities. # POTENTIAL OBSTACLES/BARRIERS None identified. # ADDITIONAL RESOURCES # ORGANIZE "TOXIC TOURS" FOR POLICY MAKERS DESCRIPTION OF THE STRATEGY A first-hand look at unhealthy housing conditions can be provided to public officials by organizing a community tour that allows them to visit homes with hazards (and if possible, some that have been repaired) and talk with residents and advocates about the problems and policy solutions. Experience has shown that policy makers can be moved significantly by the personal experience of seeing hazardous conditions first-hand and having face-to face interaction with families directly affected. First-year medical students can also benefit from a "toxic tour." 
Community-based organizations in Los Angeles, New Orleans, and Providence have successfully used this strategy to educate and motivate local health and housing officials. This strategy parallels "Child Watch" tours that child advocacy groups have historically conducted to sensitize and challenge elected officials and journalists regarding a variety of problems that children face. # POTENTIAL OBSTACLES/BARRIERS It can be quite challenging to convince public officials, especially elected officials, and the media to participate in the tour. Another challenge is ensuring sensitivity to the families who agree to open their homes to the tour. It is a fine line between showcasing the problem and potential solutions vs. unintentionally allowing voyeurism at the expense of low-income families. # ADDITIONAL RESOURCES N/A # ILLUSTRATION OF STRATEGY IN PRACTICE The Los Angeles Healthy Homes Collaborative has found that providing public officials with tours of hazardous housing conditions deepens the understanding and motivation to enforce standards and improve services to affected families. The Healthy Homes Collaborative is a diverse coalition of community-based and advocacy organizations committed to eliminating environmental health threats to children and increasing health access. The Collaborative enlists families whose homes will be visited, persuades agency and legislative staff to attend, and arranges the itinerary, transportation, and food for the attendees. An opportunity for the group to eat together is important to all allow attendees to exchange impressions, information, and ideas. # Jurisdiction or Target # Other resources utilized: N/A Factors essential to implementation: Because the Collaborative has worked consistently with tenants and earned their trust, the CBO leaders are able to persuade tenants to participate. # Limitations/challenges/problems encountered: The logistics of the tour can be challenging. The size of Los Angeles meant that public officials had to commit almost a full day, with the result that elected officials sent their aides instead of seeing the housing conditions first hand. There may be last-minute conflicts that affect a family's ability to be home or disruptions in the tour schedule, so it is advisable to line up 'extra' families who are willing to participate. Another challenge is facilitating the discussion and reactions of participants with diverse political views and perspectives as everyone is in "close quarters" for the tour. Magnitude of Impact/Potential Impact: Agency staff felt a new sense of urgency from seeing the desperate living conditions of many families and from seeing how hazards persist in units where violations had been cited. Staff of the lead poisoning prevention program have since been more responsive when alerted to families in need by the Collaborative; communication and working relationships between the CBOs and the agencies have improved. # ILLUSTRATION OF STRATEGY IN PRACTICE In September 2003, Indianapolis' mayor unveiled a "Top 10" list of city property owners who have been serial code violators. The property owners on the initial list held title to 310 properties throughout the city. The mayor's list, which is updated as needed, serves several purposes. It helps to distribute information on problem landlords, assisting tenants in avoiding structures that may contain dangerous code violations and health hazards while exposing slumlords to the local community. 
It also helps the city to hold property owners accountable and provides a tool for community leaders seeking to put pressure on property owners to remedy code violations and maintain their properties. The list is readily available through the city's website, and it has also been publicized by The Indianapolis Star newspaper.

# Institutional capacity required: N/A

Cost considerations: Extremely low cost.

Timing issues: Sustained cultivation of, or access to, the media is critical; one-time or occasional efforts will be much less effective.

# Feasibility of Implementation: Very high

# POTENTIAL OBSTACLES/BARRIERS
A potential barrier is the reluctance of families to participate because they fear retaliation from the landlord or other consequences. The willingness of families to cooperate with reporters is essential because the personal toll of lead poisoning helps capture public attention and build the political will to change the status quo.

# ADDITIONAL RESOURCES
N/A

# ILLUSTRATION #1 OF STRATEGY IN PRACTICE
In May 2001, the Providence Journal ran a six-part series on lead poisoning that helped set the stage for new legislation and regulations in the state. A photojournalist at the Journal, John Freidah, initiated the idea, but it could not have happened without the commitment of the newspaper, the hard work of the Childhood Lead Action Project, the time and effort of many government officials who educated and provided information to reporter Peter Lord, and the willingness of many families to open their lives to the journalists and the public.

Staffing utilized: At the Journal, reporter Peter Lord and photojournalist John Freidah invested more than six months preparing the series (Freidah had begun taking photographs at the lead clinic many months earlier, in between other assignments, in order to bring the idea vividly to the paper's editors). Top editors at the paper worked with them on designing the series to most effectively convey the breadth and impact of lead poisoning in the state. At the Childhood Lead Action Project, a community organizer working with the parents of lead-poisoned children helped the parents overcome their fear of participating in the series. Officials at the Rhode Island Department of Health and the Rhode Island Housing and Mortgage Finance Corporation devoted many meetings to helping Peter Lord understand and accurately convey the issues.

# Jurisdiction or Target Area: Rhode Island

# Other resources utilized: The state health department forged a partnership with the Providence Journal to make public the information about properties with lead hazards. The department had generated a list of houses where children had been poisoned but did not have the technical capacity to publish it online. By providing these data to the newspaper, along with yearly updates, the department has met the public need for information despite its technical limitations.

# ILLUSTRATION #2 OF STRATEGY IN PRACTICE
The Detroit Free Press conducted an in-depth investigation of lead poisoning in Detroit and Michigan that began appearing January 21, 2003. The paper followed the five-day investigative series with continuing coverage throughout the year. The reporting, together with the persistent work of state advocates for children's environmental health, resulted in lead poisoning becoming a top priority of the Governor and the state legislature. In addition, the series prompted the US EPA to order the removal of lead-contaminated soil in a Detroit neighborhood near a former lead smelter.
# Jurisdiction or Target Area: Michigan

Primary Actor: Detroit Free Press

# Secondary Actor(s): Get The Lead Out Coalition

Staffing utilized: The Free Press devoted a multi-talented team of reporters to covering this issue over many months. The reporters worked with and wrote about families affected by lead poisoning, investigated and identified systemic shortcomings in the city's and state's programs, hired experts to test the soil near industrial sites, and researched how local efforts compared with other cities and states around the country. When state advocates, including the Get The Lead Out coalition in Grand Rapids, became aware of the investigation, they provided information to the reporters about the extent of the problem outside Detroit and the need for statewide leadership on lead poisoning prevention. Both the newspaper and the advocates worked to sustain the impact of the initial investigation through continued coverage of the problem and of the policy initiatives of the Governor and the state legislature.

# Other resources utilized: The Free Press conducted an analysis of the State Department of Community Health data on elevated blood lead levels to identify the "hot spot" neighborhoods (those with the most lead-poisoned children) and questioned why the state had not been using the data to target prevention and hazard control efforts. The analysis revealed that the single worst hot spot was in Grand Rapids, a finding that changed the political dynamic of the issue within the state by capturing the attention of legislators in the western part of the state, educating them on the scope of the problem statewide, and motivating them to support new legislation. Because the paper had to sue the state to release the data, this article appeared in July, putting the issue back on the front burner six months after the original series.

# Factors essential to implementation: The critical factors are a large-circulation newspaper with a commitment to investigative reporting, strong advocates who can use the heightened public attention and concern to build political will for policy change, and continuing coverage and advocacy to keep elected officials focused on and accountable for making the necessary changes.

# Limitations/challenges/problems encountered: The main policy-making challenge in Michigan was overcoming the east/west split in the state. The Free Press broadened its focus beyond Detroit and the eastern part of the state at the urging of state advocates. Most legislators had considered lead poisoning a Detroit problem until the Free Press analysis of the data documented the extent of the problem in Grand Rapids and other western areas.

Magnitude of Impact/Potential Impact: While the Governor had campaigned as a champion of children's environmental health, it is clear that the investigative series, combined with advocates' work to take maximal advantage of the publicity, persuaded the Governor to submit a much stronger action plan much more quickly than would otherwise have been the case. It is likely the legislature would have reacted far more coolly to the Governor's proposals without the series.
(The bills currently working their way through the Michigan House and Senate would establish a Childhood Lead Poisoning Prevention and Control Commission, impose penalties on landlords who knowingly rent units with lead hazards, provide tax credits for lead hazard control, create a lead-safe housing registry, increase pressure on Medicaid plans to screen enrolled children for lead poisoning, and require labs to report blood screening results electronically.)

# ADD LEAD SAFETY TRAINING TO WEATHERIZATION PROGRAMS AND PRACTICES

# DESCRIPTION OF THE STRATEGY
Integrating lead safety into the ongoing work of weatherization program contractors has multiple benefits: reducing energy costs, improving the indoor climate, reducing lead hazards in the homes treated by the weatherization program, improving the safety of weatherization crew workers and their families, and protecting the safety of residents. Lead poisoning prevention programs can provide training and incentives such as free or discounted HEPA vacuums and personal protective equipment. Options include developing a hybrid training curriculum, adding lead-safe work practices to standards or specifications, expanding monitoring and inspections to address lead safety concerns, offering complete lead-safe work practices (LSWP) training within the weatherization training program, subsidizing risk assessor training, and providing an XRF analyzer for each local weatherization program.

# BENEFITS
The Department of Energy requires its state-level grantees to ensure that weatherization crews complete lead safety training if they will be working on homes built before 1978. This federal requirement prevents any confusion about the need for lead safety training and ensures that all weatherization workers who operate in older homes will understand the consequences of repair and energy measures that may cut, sand, or pry lead-based paint, and how to avoid creating lead dust and paint hazards through proper containment, control during the work, and cleanup. It is most efficient to have the state weatherization agency condition disbursement of federal weatherization funds on fulfilling the training requirement; the state can also incorporate state-specific standards or other information into the standard training.

Immediate/Direct Results: There will be increased awareness of lead safety among thousands of laborers and contractors.

Public Health Benefits: Crews will be significantly less likely to create hazards such as lead-contaminated dust, lead-contaminated soil, or deteriorated lead paint during weatherization work that disturbs lead-based paint.

Other Indirect/Collateral Benefits: Lead safety capacity is built in the wider community of individuals and community action agencies that may also conduct repair and renovation work using HUD funds or other resources. Transferable lead safety skills will make laborers who work in weatherization careful about paint chips and dust when performing other types of work in older homes in the future. Weatherization program staff gain awareness of potential health risks associated with lead hazards and other housing condition problems. Finally, the initiative helps build capacity among contractors and awareness of lead-safe work practices that will likely transfer to other, non-weatherization jobs in older homes likely to have lead-based paint.

# SCOPE OF POTENTIAL IMPACT
Nationally, the risk of lead dust hazards will be reduced as weatherization crews treat pre-1978 homes.
More than 100,000 homes are treated by weatherization programs each year, and a significant majority were built before 1978.

Statewide-Such training can be pursued at the state, county, or local level. It is most efficient to have the state condition disbursement of federal weatherization funds on fulfilling the training requirement.
Regional (e.g., multi-county)-Many community action (CAP) agencies cover multi-county regions.
City- or County-Wide
Neighborhood/Community
Specific (Targeted) Population-Very low-income households

# ILLUSTRATION OF THE STRATEGY IN PRACTICE
This program brings subsidized lead safety training to all agencies administering weatherization funds, provides training to workers, ensures at least one individual is trained and licensed as a lead risk assessor, and provides at least one XRF analyzer to each agency performing weatherization work. The program has developed specific policies and procedures to address lead that are more extensive than the federal requirements. Each weatherization program's risk assessor tests the lead content of the paint likely to be disturbed by the weatherization project in all pre-1978 homes. Using an XRF analyzer takes the guesswork out of the job: the crew knows whether there is lead paint and does not have to presume it exists (a sketch of this decision rule follows this illustration). The state pays expert consultants to work with each risk assessor on using the equipment and procedures properly in order to guarantee consistent performance.

Jurisdiction or Target Area: Indiana

Primary Actor: Division of Family and Children, Housing and Community Service in the Department of Family and Social Services Agency

# Secondary Actor(s): N/A

Staffing utilized: The state weatherization program director helped launch and develop the program with support from one key staffer and an independent consultant. It quickly became a relatively small aspect of the staff person's job as details fell into place. A consultant developed the policies and procedures, and the Environmental Management Institute, an accredited lead trainer, was contracted to support the risk assessors.

Other resources utilized: Administrative funding from the Section 8 program was used to purchase XRF devices.

# Factors essential to implementation: The key factor for success is the commitment of state weatherization staff who care about the lead issue and are willing to make it a priority.

Limitations/challenges/problems encountered: One challenge was convincing key senior managers at the state level of the necessity of creating systems to incorporate lead safety into weatherization, and of the value of approving a centralized procurement of XRF analyzers to substantially reduce their price. Obtaining commitments from local community action agency directors to have their weatherization crews complete the training was another challenge; it was overcome by the state's upfront provision of the needed resources.

Magnitude of Impact/Potential Impact: Weatherization work is performed using lead-safe work practices in all units that have lead-based paint. XRF testing has allowed the CAP agencies to focus dust containment and cleanup efforts where surfaces test positive for lead. Information developed from the lead testing is now available to future tenants and buyers under the federal lead hazard disclosure requirement.

# Potential for replication: High. Lead safety training for weatherization crews can be replicated in any state in which it is a priority for key managers at the state level.
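The XRF triage step described in this illustration is, at bottom, a simple threshold rule. The following minimal sketch shows that rule, assuming the federal definition of lead-based paint (an XRF reading of 1.0 mg/cm² or more); the function and the sample readings are illustrative, not part of Indiana's actual tooling.

```python
# Illustrative sketch: triage XRF paint readings against the federal
# lead-based paint definition of 1.0 mg/cm^2 (HUD/EPA). The component
# names and readings below are hypothetical examples.

LEAD_BASED_PAINT_THRESHOLD = 1.0  # mg/cm^2, federal definition

def needs_lead_safe_practices(xrf_readings_mg_cm2):
    """Return the components whose paint tests at or above the threshold.

    Any surface listed here should be disturbed only with lead-safe
    work practices, containment, and post-work cleanup verification.
    """
    return {component: reading
            for component, reading in xrf_readings_mg_cm2.items()
            if reading >= LEAD_BASED_PAINT_THRESHOLD}

# Example: readings taken on surfaces a weatherization job will disturb.
readings = {"window sill": 2.4, "door jamb": 0.3, "exterior trim": 1.1}
print(needs_lead_safe_practices(readings))
# -> {'window sill': 2.4, 'exterior trim': 1.1}
```

Surfaces that test below the threshold can be worked on normally, which is exactly the cost saving the illustration attributes to XRF testing.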
# ASSESS AND ADDRESS MULTIPLE HAZARDS SIMULTANEOUSLY

# DESCRIPTION OF THE STRATEGY
Programs that address health hazards beyond lead can efficiently and effectively equip families to reduce health hazards in the home environment. Community-based organizations train community members to assess houses for hazards and leverage the results through both individual and systems advocacy. Such programs also work to build support for prevention by educating tenants and residents about hazards, available remedies for obtaining safe repairs, legal rights, and leadership skills. Programs focusing on direct services train volunteers and community health workers to conduct home audits, create a personalized Home Action Plan that emphasizes low- or no-cost solutions, and provide tools to assist in making needed changes, such as sealed mattress covers, cleaning kits, and HEPA vacuum loaners.

# BENEFITS
Immediate/Direct Results: Home health hazards will be identified. Rental property owners, their tenants, and owner-occupants will learn low-cost solutions for hazards and become aware of home health hazards.

Public Health Benefits: Property-specific written reports and photographs inform the landlord about hazards. With respect to lead, the landlord then must correct the problem or disclose this information to prospective tenants. Property-specific data transforms the federal lead hazard disclosure rule's right to know into a powerful catalyst for action to improve conditions in substandard rental properties. Aggregate data can motivate local lawmakers to address the prevalence of hazards through property maintenance codes, health and housing codes, and other policy changes.

Other Indirect/Collateral Benefits: The strategy fosters community building and community organizing. Advocates can use aggregate data to generate media interest in the plight of low-income families who seek healthy housing.

# POTENTIAL OBSTACLES/BARRIERS
If home assessments reveal health hazards in rental housing, steps must be taken to ensure that tenants are protected from retaliatory evictions and other illegal landlord actions. Advocates must also guard against responsibility being inappropriately shifted from landlord to tenant. Advocates must press code inspectors to inspect, cite, and enforce the code.

# ADDITIONAL RESOURCES
N/A

# ILLUSTRATION #1 OF STRATEGY IN PRACTICE
The American Lung Association of Washington established the Master Home Environmentalist (MHE) program in 1992 and has trained more than 1,400 volunteers. Trainers include environmental scientists, psychologists, social workers, academics, and medical professionals. Volunteers complete 35 hours of training and 35 hours of community service. Trained MHE volunteers use a Home Environmental Assessment List (HEAL™) to help identify health hazards in the home, including lead, dust, and mold. The volunteer then works with the resident to develop an action plan to create a healthier home environment, emphasizing inexpensive or no-cost solutions.

Jurisdiction or Target Area: King County, Washington; expanding to additional counties.

# Primary Actor: American Lung Association of Washington

# Secondary Actor(s): N/A

Staffing utilized: At a minimum, one full-time staff person is needed to recruit volunteers and steering committee members, schedule trainings, etc.

Other resources utilized: A computer and an LCD projector are helpful for presentations and trainings.
# Factors essential to implementation: Developing key partners to serve as trainers and on the steering committee is essential. The expertise of the key partners depends on the geographical area and the types of classes needed; for example, pesticide experts must be recruited where pesticides are a main issue.

Limitations/challenges/problems encountered: Funding and volunteer retention were both challenges.

Magnitude of Impact/Potential Impact: From 1994 to 1999, MHE volunteers reported completing more than 4,500 hours of community service and 1,400 home assessments. A 1997 study revealed that 86% of households visited by MHE volunteers improved their home environment.

# Potential for replication: Moderate. MHE sells its trademarked and licensed program. Purchasers receive the Implementation Guide, training manual, facilitator's guide, HEAL paperwork, and a database to track volunteers and residents, as well as all components created by other licensees. The cost is $2,500 for an American Lung Association affiliate; the cost is higher for non-affiliates.

# Contact for Specific Information
Aileen Gagney
Environmental Health Program Manager
206-441-5100
[email protected]

# BROADCAST LEAD SAFETY TRAINING WIDELY

Timing issues: This strategy could be implemented at any time.

# Feasibility of Implementation: High. In states or regions of states where widespread telecommunication delivery systems exist, this strategy should be feasible, though some limitations may exist.

# POTENTIAL OBSTACLES/BARRIERS
There should be few, if any, barriers to implementation of this strategy. However, hands-on portions of the training will be nearly impossible to conduct using this strategy, which could limit the value of this particular type of lead-safe work practices course.

# ADDITIONAL RESOURCES
N/A

# ILLUSTRATION OF STRATEGY IN PRACTICE
The Iowa Department of Public Health Bureau of Lead Poisoning Prevention (the Bureau) partnered with the Iowa Department of Economic Development, Iowa's housing agency, public housing authorities, and entitlement cities to produce and deliver a lead-safe work practices training curriculum that could be broadcast to workers who were widely scattered throughout rural and urban Iowa.

Limitations/challenges/problems encountered: The main challenge to implementation was coordination of training materials, Bureau staff, and the training schedule. However, Bureau staff noted that this became easier as time passed.

Magnitude of Impact/Potential Impact: 1,020 landlords and contractors were trained via the ICN (Iowa Communications Network).

# Potential for replication: High. This strategy can be easily replicated in any state or multi-county region with a fiber-optic or other telecommunications presentation and delivery system.

# ENSURE THAT DO-IT-YOURSELF REHABBERS ARE TRAINED

# DESCRIPTION OF THE STRATEGY
Housing agencies that provide funds for housing rehab can require that property owners be prepared to deal effectively with existing conditions as well as problems that emerge as they work. Rehabbers of older housing especially need to know how to work safely around lead-based paint and how to safely and thoroughly repair lead-based paint hazards. Rehabbers also need to be aware of the U.S.
Environmental Protection Agency's Pre-Renovation Education Program (section 406(b)), which requires property owners to notify all occupants of pre-1978 housing units of any rehab work that will disturb more than two square feet of a painted surface.

# BENEFITS
Immediate/Direct Results: Training do-it-yourself rehabbers will make it more likely that lead-safe work practices will be used. This will bring under the lead-safe work practices umbrella a category of properties that has been missed by other, more formal training of professional rehabilitation contractors.

Public Health Benefits: Properties that would not otherwise have had the benefit of lead-safe work practices can now be rehabbed safely. This will reduce or eliminate the creation of lead hazards and encourage the repair of existing hazards, which will decrease children's exposure.

Other Indirect/Collateral Benefits: When included as part of a larger housing or development program, this strategy can also help reduce urban blight, reduce other health hazards in older structures, and assist in comprehensive community development and/or revitalization.

# SCOPE OF POTENTIAL IMPACT
Statewide
Regional (e.g., multi-county)
City- or County-Wide
Local Community
Specific (Targeted) Population

# PRIMARY ACTORS
Housing Agency

# KEY PARTNERS
Property Owners

# CRITICAL ELEMENTS
Staff requirements: This strategy will require staff time to conduct trainings, up to 1.5 FTE if training is provided directly by the funding agency. The agency could also contract with outside trainers.

Other resource requirements: Lead-safe work practices training materials will be necessary for this strategy.

# Institutional capacity required: Training instructors should be well versed in lead-safe work practices.

Cost considerations: This strategy should be cost-effective in preventing health problems.

Timing issues: This strategy would require some short-duration outreach, but it can be implemented at any time. It is also important to implement this strategy in a sustainable way so as not to limit its effectiveness.

# Feasibility of Implementation: Very high in almost all jurisdictions.

# POTENTIAL OBSTACLES/BARRIERS
Few, if any, barriers should exist for this strategy.

# ADDITIONAL RESOURCES

# Secondary Actor(s): N/A

Staffing utilized: NJ Citizen Action tries to include guest speakers, such as attorneys, pediatricians, or parents of lead-poisoned children, to avoid the monotony of a single speaker and to provide expert information.

Other resources utilized: Each trainee receives a large binder of relevant reference materials, including transparencies for use with overhead projectors and a script on lead poisoning designed to make it easier for attendees to become trainers themselves. The training includes lunch.

Factors essential to implementation: NJ Citizen Action staff believe that the factor most essential to the continued success of the training is its quality, as attendance would surely fall off if agency managers and CBOs did not perceive value in dedicating a full day of a new staff member's time. Keeping logistics routine enables program staff to focus on recruiting and providing quality training and outreach rather than on, for example, catering or parking arrangements.

Limitations/challenges/problems encountered: None listed.

# Magnitude of Impact/Potential Impact:

# POTENTIAL OBSTACLES/BARRIERS
The obstacles to delivering lead-safe work practices training to non-English-speaking immigrant communities are many and varied.
Skilled, culturally competent trainers are needed to teach the Spanish version of the HUD/EPA course Lead Safety for Remodeling, Repair, and Painting. The course has not yet been translated into the languages needed by non-Spanish-speaking immigrants who may also be working as day laborers and at risk for lead hazard exposure. Since many immigrants may not have been able to attend school in their country of origin long enough to equip them to sit through a long, classroom-style training, delivery needs to be paced or staged and include hands-on practice. The main obstacle to getting day laborers to use lead-safe work practices is that the relationship between day laborers and their employers is not conducive to the workers changing work methods on the basis of law and safety. Simply providing a training opportunity for workers is not enough; employers must be motivated or required to comply with lead safety requirements.

Factors essential to implementation: In Los Angeles, healthy homes advocates have found that several steps are needed to successfully deliver the Spanish-language version of the lead-safe work practices course to day laborers. First, most day laborers are unable to give up a full 8-hour workday to attend training for which they are not compensated, so advocates have divided the training into several evening sessions. Second, trainers emphasize the self-protective benefits of lead safety, since linking the issue to worker protection has been an important step in engaging day laborers in lead safety and in preventing hazards in the first place. Third, using written materials as little as possible and relying on face-to-face explanations and hands-on use of equipment has been the most successful teaching method.

# ADDITIONAL RESOURCES

# Limitations/challenges/problems encountered: The main barrier to getting day laborers to use lead-safe work practices is the employer. Given the scarcity of work for a growing population of workers, day laborers are unwilling to raise concerns over lead safety with their employers, if they are even aware of the issue. Community and union organizers need to support workers in protecting themselves and their families by "blowing the whistle" on employers who circumvent lead safety laws in order to get work done as quickly and cheaply as possible, without regard to the health and safety of either the family whose home is being painted or repaired or the family of the worker.

# Magnitude of actual impact: The project trained 200 workers in one year. Since a typical day laborer works on an average of 50-100 homes annually, this training has added lead-safe work practices to thousands of projects (200 workers at 50-100 homes each is roughly 10,000-20,000 home projects per year).

Potential for replication: Moderate. Modified replications of this project have been or are being carried out in several other immigrant communities in the United States, including the Mission District in San Francisco, CA. Every replication of this project will vary due to local political, population, and socioeconomic differences. This project can be replicated most successfully where lead-safe work practices are required during painting and remodeling activities.

# Contacts for Specific Information

# EXPAND WEATHERIZATION AND REHAB PROGRAMS TO ADDRESS LEAD SAFETY

# CRITICAL ELEMENTS
Staff requirements: The resources required are somewhat related to the scale of the project. In a city or county, an existing weatherization program could partner with a lead hazard control program and, with a percentage of one full-time employee's time, structure a project to target weatherization units for lead hazard control.
If a broader effort is envisioned to add a lead hazard control element to statewide weatherization or rehab programs, a more substantial commitment of staff resources and funding would be needed.

Other resource requirements: The weatherization program would need equipment and trained personnel to conduct lead inspections and/or risk assessments (to confirm the presence of lead-based paint) and to perform dust clearance testing. It would also need access to trained and qualified contractors to perform the work.

Institutional capacity required: Typically it is not necessary to change laws or regulations to integrate the delivery of weatherization and lead hazard control services. The main capacity issue is the availability of qualified personnel to perform the hazard assessments (or measure the lead content of paint), complete the work following lead safety standards, and conduct clearance tests. Weatherization and rehab programs can have their existing crews perform lead hazard reduction provided the crews are properly qualified. For abatement projects, workers and contractors must be trained and certified. Except in a few states, training in lead-safe work practices is sufficient qualification for most non-abatement projects. Certified lead inspectors, risk assessors, and (in some states) sampling technicians can perform the dust clearance testing after the work is completed.

# Cost considerations: Leveraging other programs' work in homes to tackle lead hazards is generally cost-effective. Weatherization and rehab programs already bear the costs of identifying housing units appropriate for their programs, and their eligibility criteria are consistent with risk indicators for lead hazards (homes built before 1950; low-income families). Expanded programs can therefore offer lead hazard control, building upon existing efforts to enroll units and fix other problems, many of which may also be contributing to lead hazards, such as plumbing leaks, holes in the exterior walls or roof, and poor insulation resulting in condensation and water damage. In some cases, it may be appropriate to purchase XRF machines to help identify homes where lead hazard control is not needed.

# Timing issues: If funding is available to support the lead hazard control interventions, approximately 6-12 months is needed to launch such a program. If funding is not reliable, then a more substantial commitment of staff resources and time may be needed to structure the appropriate partnerships.

# Feasibility of Implementation: Moderate. Implementation can hinge on the availability of dedicated funds to support the added lead work. Weatherization and publicly supported rehab programs generally have production targets that create disincentives to increasing the costs in individual units. There may also be restrictions on spending the programs' funds for actions not directly related to the program's mission (e.g., non-energy repairs are ineligible for weatherization funding, except that in some states up to 10% can be spent on "health and safety" repairs). A key to implementation, therefore, is locating funds that can be used for lead work and securing a commitment from the state and local housing and weatherization program managers that this supplemental work is a valuable complement to their central mission.
# POTENTIAL OBSTACLES/BARRIERS
Lack of support from key energy or housing agency staff and/or the local or state lead program can hinder efforts. Funding for the added lead work must be secured.

# ADDITIONAL RESOURCES
N/A

# ILLUSTRATION OF STRATEGY IN PRACTICE
The Department of Public Health supplements weatherization activities in pre-1978 housing with a child under age six to include targeted lead hazard control activities. Window wells are capped, and a thorough cleaning of windowsills and floors is completed using a wet wash and HEPA vacuum. Pre- and post-intervention dust samples are collected to document the decline in lead-contaminated dust and to verify that the unit meets dust clearance standards. Clearance testing is performed by certified lead risk assessors. Funding for the lead supplement to the weatherization is provided by a HUD-funded lead hazard reduction grant.

Factors essential to implementation: Funding to underwrite the lead treatments, such as support from the state or regional weatherization program, the HUD lead hazard control grant program, or other sources, is key. Another potential source of funding could be state energy consortiums that are funded by electric utilities, which can help justify window treatments. The weatherization program must also have policies to ensure appropriate lead training and work practices and to ensure that clearance testing occurs.

# Limitations/challenges/problems encountered: The most difficult aspect of the program is managing the logistics of the various components. A second challenge is the inherent difficulty of proving that such prevention-based action works; the program did not have funds to collect data to document this. Finally, the majority of homes treated by a typical weatherization program are not occupied by a family with a young child.

# Magnitude of Impact/Potential Impact: The program targets neighborhoods and homes with key risk factors for lead hazards (older homes built before 1950; low-income families that are eligible for weatherization services). To date the program has completed the lead treatments in 61 units, all of which housed at least one child under age six. Slightly more than half of the units were owner-occupied. Health department staff believe that this initiative has helped prevent elevated blood lead levels in units where work occurred; none of the homes were occupied by children with elevated blood lead levels when the work took place. Contractors performing weatherization now understand how to complete lead dust removal during final cleanup and recognize the importance of controlling lead-contaminated dust. These changes in cleaning behavior extend to jobs in older homes even when there is no lead specification.

# BENEFITS
Immediate/Direct Results: Property owners will be provided additional opportunities to be trained in lead-safe work practices.

Public Health Benefits: As more property owners are trained in lead-safe work practices, the creation or exacerbation of lead-based paint hazards will decrease, lowering the risk of childhood exposure to lead hazards.

Other Indirect/Collateral Benefits: If required for property owners cited for code violations, this strategy can provide a useful alternative enforcement mechanism. Instead of levying fines that may never be paid, an enforcement agency can put primary prevention tools in the hands of those who need them most.

# SCOPE OF POTENTIAL IMPACT

# CRITICAL ELEMENTS
Staff requirements: This strategy is not staff-intensive.
An experienced trainer will be needed one or two days a month.

Other resource requirements: Lead-safe work practices training materials will be necessary.

# Institutional capacity required: Where applicable, instructors certified or accredited in lead-safe work practices training will be required.

Cost considerations: This strategy will incur modest costs to run an ongoing training program. Costs to property owners will be minimal or nonexistent.

# POTENTIAL OBSTACLES/BARRIERS
Tying lead-safe work practices training to code enforcement may prove to be a challenge in some jurisdictions, as enforcement agencies may prefer to rely exclusively on fines. There may also be a lack of interest in ongoing training programs on the part of property owners, or a lack of time to attend such training.

# ADDITIONAL RESOURCES

# Potential for replication: The potential for replication is high.

# Contact for Specific Information

# PROVIDE TECHNICAL ASSISTANCE TO PROPERTY OWNERS

# DESCRIPTION OF THE STRATEGY
Cities, states, and community-based organizations can provide technical assistance to property owners who are seeking to meet lead-safe housing and lead hazard control requirements or to voluntarily implement lead safety measures. As a complement to relevant training in lead-safe work practices and regulatory requirements, individualized technical assistance can accelerate the pace at which property owners and contractors retool the way they work. Depending on the scope of the strategy, technical assistance can be made accessible through a hotline, one-on-one visits at a "one-stop" center, or a user-friendly interactive website. In the process, the agencies involved, which may normally be seen as antagonistic toward property owners, will learn about the issues and needs facing the owners of high-risk housing and be able to address gaps in the owners' understanding of lead safety. The property owners, in turn, will be better positioned to assume their role as partners in the effort to prevent childhood lead poisoning.

# BENEFITS

# SCOPE OF POTENTIAL IMPACT
Statewide
City- or County-Wide
Regional

# PRIMARY ACTORS
Health Department
Housing Agency

# KEY PARTNERS
Property Owners
Homeowners
Community-based Organizations

# CRITICAL ELEMENTS
Staff requirements: Staff requirements will vary substantially with the scope of the strategy. Some small projects may require only 1-2 FTE, while larger projects may need a staff of 6 or more FTE to properly engage property owners on an individualized basis.

# Other resource requirements: N/A

Institutional capacity required: This strategy requires staff knowledgeable in lead safety and in state and federal lead standards. Staff members with the ability to relate to and work cooperatively with property owners are also essential.

# Cost considerations: Costs to administer this strategy will depend on its scope but could be significant given the individualized nature of the strategy. The largest costs to the agency or organization implementing the strategy will be staff salaries.

# Timing issues: There are no distinct timing issues with this strategy.

# POTENTIAL OBSTACLES/BARRIERS
The main potential obstacle to implementing this strategy will be a lack of funding for project staff. States and localities with tight budgets will need to look for creative funding mechanisms to implement their specific technical assistance projects.
# ADDITIONAL RESOURCES
N/A

# ILLUSTRATION OF STRATEGY IN PRACTICE
Rhode Island's Lead Hazard Mitigation law requires the state to provide technical assistance to property owners who seek to comply with the law's requirement that lead hazards be repaired using lead-safe work practices and clearance testing. The Housing Resources Commission is charged with crafting and implementing a technical assistance plan. When the plan is fully implemented, the state will provide property owners with access to technical service centers. These centers will be "one-stop shops" where owners will be able to gain hands-on experience with lead hazard mitigation, work one-on-one with technical assistance staff, and find resources on how to obtain financial assistance for making their properties lead-safe. It is anticipated that the state will have these centers in place by July 1, 2005.

# Jurisdiction or Target Area: Rhode Island

# Primary Actor: Housing Resources Commission

# Secondary Actor(s): N/A

Staffing utilized: To fully implement the program, the Housing Resources Commission estimates it will need 6 FTE.

# Other resources utilized: N/A

Factors essential to implementation: Factors essential to implementation include the ability to secure funding for technical assistance staff, as well as property owners' use of the technical assistance resources.

Limitations/challenges/problems encountered: Without funding, the Housing Resources Commission will not be able to retain sufficient technical assistance staff.

# Magnitude of Impact/Potential Impact: The technical assistance plan has not yet been implemented. However, the state expects many property owners to use its technical assistance resources as they work to meet the requirements of Rhode Island's lead hazard mitigation law.

# Potential for replication: Given the availability of professionals able to provide technical assistance on lead-safe work practices and lead hazard control, this strategy is fairly easy to replicate.

# Contact for Specific Information
Simon Kue
Rhode Island Housing Resources Commission
401-450-1349
www.HRC.ri.gov

# TRAIN AND EMPLOY LOW-INCOME COMMUNITY RESIDENTS IN HAZARD CONTROL

# DESCRIPTION OF THE STRATEGY
Environmental health services can be provided to communities through programs that train and employ low-income community residents, including parents of lead-poisoned children and parents of children at high risk. The services provided can consist of low-cost hazard control, cleaning, peer education, and the provision of products that reduce environmental health hazards. Health departments, housing agencies, and community-based organizations can work independently or collaboratively to provide these trainings.

# BENEFITS
Immediate/Direct Results: Where demand for services exists, low-income residents will have the opportunity to obtain steady, meaningful employment and will be trained to recognize and control lead hazards in their own homes. These residents will likely relate well to other low-income families in their area, supporting hazard control and peer education efforts. Hazard control and other services will also be available to low-income property owners and tenants who may not have had access to such services in the past.
Public Health Benefits: Providing low-cost services like hazard control can increase the number of hazards reduced in a particular community, leading to greater primary prevention of childhood lead poisoning.

Other Indirect/Collateral Benefits: This strategy can help reduce unemployment in a local area and can be one useful tool for combating poverty. Low-income residents will also gain work skills that they may not have been able to obtain elsewhere.

# SCOPE OF POTENTIAL IMPACT
City- or County-Wide
Neighborhood/Community

# PRIMARY ACTORS
Health Department
Housing Agency
Community-based Organizations

# KEY PARTNERS
Property Owners
Tenants
Parents
Community Members

# CRITICAL ELEMENTS
Staff requirements: These will vary with the scope of the strategy. If the project will involve employing low-income workers, new staff will be required.

Other resource requirements: Trainers, educational materials, and environmental health products may be required, depending on the scope of the strategy.

Institutional capacity required: Some projects undertaken by local government agencies may need prior city council or county board approval.

Cost considerations: Overall costs will depend on the scope of the strategy. Projects that involve employing community residents will have higher costs than those that only provide training or environmental health products.

Timing issues: This strategy will require some planning and good organization; projects involving the employment of community residents will also require some lead time for the hiring process.

# Feasibility of Implementation: High. This strategy should be feasible to implement.

# POTENTIAL OBSTACLES/BARRIERS
In some areas, projects that involve employing community residents may find a lack of potential employees. Other challenges could include employee retention problems.

# ADDITIONAL RESOURCES
N/A

# ILLUSTRATION OF STRATEGY IN PRACTICE
Healthy Homes Services is a program of The Way Home (TWH), a non-profit tenant rights and social services agency in Manchester, New Hampshire. The program trains and employs low-income community residents, including parents of lead-poisoned children and parents of children at high risk, to provide environmental health services to their communities. These services, which include low-cost hazard control, peer education, and the provision of products that reduce environmental health hazards, have proven to be one way to advance primary prevention efforts.

Other resources utilized: The Healthy Homes Services project was initially launched under a small grant from the U.S. Environmental Protection Agency. Currently, the project operates as a sub-grantee to the City of Manchester under a substantial HUD Lead Hazard Control grant. Staff knowledgeable in lead safety, peer education, and training low-income residents have also been utilized by the project.

# Jurisdiction or Target Area: Manchester, New Hampshire

# Factors essential to implementation: A dedicated source of funding and interest from landlords and residents have been essential to implementing this strategy.

Limitations/challenges/problems encountered: Reliance on grants and low-interest loans has been a challenge for the project. The project is now seeking to become sustainable through small contracts and by offering more services at low cost, which will allow it to operate without heavy reliance on grants.

Magnitude of Impact/Potential Impact: As of the beginning of 2004, the project had trained a total of 31 low-income residents: seven in peer education and 24 in low-cost lead hazard control.
# Potential for replication: Moderate. Initially, any project similar to Healthy Homes Services will rely on grant funding. With a commitment to use initial grant money to seek out other, sustainable funding, this strategy has good potential for replication.

# Contact for Specific Information
Mary Sliney
Executive Director, The Way Home
603-627-5403

# References for additional information
N/A

Building Blocks for Primary Prevention: Protecting Children from Lead-Based Paint Hazards

# COLLABORATIONS, PARTNERSHIPS, AND INCENTIVES
COLLABORATE FOR LEAD SAFETY IN CHILD CARE HOMES
CREATE INCENTIVES TO INTEGRATE LEAD SAFETY INTO HOUSING REHABILITATION
INTRODUCE INCENTIVES FOR LEAD SAFETY INTO CHILD CARE PROGRAMS
LEND OUT LEAD SAFETY EQUIPMENT
SHARE RISK ASSESSMENT AND LEAD SAMPLING SERVICES
TEACH CODE INSPECTORS ABOUT LEAD SAFETY THROUGH JOINT VISITS

# COLLABORATE FOR LEAD SAFETY IN CHILD CARE HOMES

# DESCRIPTION OF THE STRATEGY
Protecting children in child care settings is an essential complement to preventing exposure in the home environment. Collaborations between local child care providers, associations that represent their interests, and local housing or health agencies can ensure that child care homes (i.e., child care based in the private home of the caregiver) are renovated to provide a lead-safe and healthy environment in which children thrive.

# BENEFITS
Immediate/Direct Results: Participating child care homes will have facilities that are lead-safe and healthier for children.

Public Health Benefits: If a child care home has lead hazards, many children may become lead poisoned. Since the children served may not otherwise be considered at high risk for lead poisoning, they may not be identified under targeted screening programs. Reducing lead hazards in these facilities will benefit all of the children who use the child care homes' services. Awareness of healthy homes measures in the properties where their children spend time may prompt parents to consider similar measures in their own homes. If child care homes are similar to homes generally, they are almost twice as likely to have lead hazards as a licensed, non-home-based child care center.

Other Indirect/Collateral Benefits: The owners of child care homes may be reluctant to address lead hazards given competing priorities. If competitors are marketing their lead-safe and healthy status, owners may be more likely to address the issue. They may also be more willing to accept a lead-safe mandate if industry leaders have a model for success to allay fears.

# SCOPE OF POTENTIAL IMPACT
There are 14,200 licensed non-home-based child care centers nationally. There may be as many as ten times more child care homes that are not licensed.
City- or County-Wide
Neighborhood/Community
Specific (Targeted) Population-The approach may be expanded to non-home-based child care centers, but HUD funding may be more limited.

# PRIMARY ACTORS
Housing Agency
Health Department

# KEY PARTNERS
Child Care Providers
Community-based Organizations
Property Owners
Parents

# CRITICAL ELEMENTS
Staff requirements: One FTE will be needed to establish and manage the program and conduct outreach to the proprietors of child care homes. The amount of time depends on the number of providers to be solicited to participate in the program. A small community might not need a full-time person, while a large area with many unlicensed child care homes may need more staff.
Other resource requirements: A strong association representing the child care providers, especially the home-based providers, is extremely helpful. The association can market the opportunity and provide the basic education needed to have willing and able participants.

Institutional capacity required: Qualified personnel are needed to check for hazards and make repairs. In many states, a lead sampling technician can check for deteriorated paint and lead dust hazards. If the home is pre-1950 or lead hazards are suspected, a risk assessment should be performed to check for hazards and determine what is needed to make the property lead-safe. It would be beneficial if the sampling technician or risk assessor were familiar with asthma triggers and methods to reduce those sources, since asthma is a significant concern for most child care providers. Contractors or workers trained in lead hazard control will be needed to perform the work. After the work is completed, the property must pass a clearance test (performed by a sampling technician in the 23 states where they are clearly authorized, or by a risk assessor).

Cost considerations: Grant or loan money will probably be needed to fund the lead hazard control. The agency that administers the locality's funds from HUD's Community Development Block Grant program is a possible source of funding. Funds awarded by HUD's HOME, Healthy Homes, LEAP, and Lead Hazard Control Programs can also be used for child care homes if the household is income-eligible.

# Timing issues: The program can begin quickly once funds are secured. While a collaborative team of stakeholders could be formed after funds are available, the team may have to be established earlier in order to demonstrate the capacity to implement the program and secure funding. Child care providers are busiest, and therefore unavailable, during August and September, when children return to school and establish enrollment. Work may be scheduled during provider vacations to reduce relocation costs.

# Feasibility of Implementation: Moderate. The program is feasible if grant or loan funding is available. Expanding beyond lead hazards to address asthma triggers such as mold, cockroaches, and dust mites may increase acceptance by meeting a growing concern of families and caregivers. The program therefore needs to be able to deal with the most relevant mix of addressable environmental hazards. Relocation of the provider's family and the child care business to a temporary location may need to be addressed.

# POTENTIAL OBSTACLES/BARRIERS
If communities do not have a broader program to educate potential clients to consider lead safety and healthy homes issues in the selection of a child care home, the broader impact from competition will be lost.

Limitations/challenges/problems encountered: Funding must be available for the work.

Magnitude of Impact/Potential Impact: 125 home-based child care homes were made lead-safe; 70 had asthma triggers reduced.

# Potential for replication: Very high.

# Contact for Specific Information
Ed Petsche
HEEL Coordinator
612-349-0563
[email protected]

# References for additional information
1.
Greater Minneapolis Day-Care Association, www.gmdca.org

# ILLUSTRATION #2 OF STRATEGY IN PRACTICE
This program is designed to improve the quality of 25 home-based child care facilities serving more than 150 children in Rochester and Syracuse by making them lead-safe and addressing other healthy homes and general safety issues. To avoid disruption to services and protect the children who attend the homes, the program provides temporary relocation to an alternative location while renovations are underway. The program educates providers and the parents using their services on lead poisoning and on daily maintenance techniques to reduce lead hazards and other environmental hazards. It is designed to serve as a model for other communities.

Potential for replication: Very high. The project is producing an implementation guide and document templates so it can be replicated in other communities. Funding must be available for the lead hazard control work.

# Jurisdiction or Target Area: Rochester and Syracuse, New York

# Contacts for Specific Information

# CREATE INCENTIVES TO INTEGRATE LEAD SAFETY INTO HOUSING REHABILITATION

# SCOPE OF POTENTIAL IMPACT
City- or County-Wide-Can apply to the jurisdiction of a county or city health department
Neighborhood/Community-Can target specific neighborhoods

# PRIMARY ACTORS
Health Department
Housing Agency

# KEY PARTNERS
Community-based Organizations
Contractors

# CRITICAL ELEMENTS
Staff requirements: The staffing needs are minimal.

Other resource requirements: If lead inspections are provided, XRF analyzers will be needed.

# Institutional capacity required: State regulatory agencies must take a flexible, practical approach to incorporating lead hazard control into rehab projects. Requiring abatement certification exceeds HUD requirements and may unnecessarily increase the cost of smaller rehab projects. Communities need a combination of certified workers and workers trained in lead-safe work practices to perform large- and small-scale rehab projects, respectively.

Cost considerations: Rehab costs will increase incrementally as a result of contractors using safe work practices and performing minor additional activities such as extra set-up and clean-up steps. Ideally, another funding source should be made available to help pay the additional costs of lead hazard control that are above and beyond the scope of the rehabilitation work. Alternatively, health departments might charge fees to the housing agency for performing lead inspections or risk assessments, as well as for training contractors and workers, to support the continuation of skilled staff.

# Timing issues: The health department and the housing agency need to agree on the timing of the risk assessment, the housing inspection, and the development of specifications. There are many different approaches, which basically fall into two camps. The first is to develop the rehab scope of work and then give it to the risk assessor; the risk assessment will then identify additional required work and the specific work practices that should be followed during the rehab. The second is to conduct the risk assessment first, so that all lead hazard control work can be incorporated into the rehab scope of work from the outset. Either approach can be effective, but there must be agreement on protocols before launching a collaborative venture.

# Feasibility of Implementation: High. This primary prevention strategy can be implemented in communities where there is an interest and a will to make it work.
Effective strategic planning for preventing childhood lead poisoning can foster such resource sharing between housing rehab programs and CLPPPs (childhood lead poisoning prevention programs).

# POTENTIAL OBSTACLES/BARRIERS
Individual state regulations and requirements can be a barrier. State policy must be flexible while maintaining consistency on basic principles. In addition, a state or local requirement for lead hazard insurance can present an obstacle to getting contractors to perform rehab work.

# ADDITIONAL RESOURCES
1. www.hud.gov/offices/lead/leadsaferule/index.cfm

# ILLUSTRATION OF STRATEGY IN PRACTICE
The Health Department performs a range of services for community development corporations conducting housing rehab using federal funds. It performs risk assessments, specifies the scope of work for lead hazard control (which is incorporated into the rehab specifications), conducts post-work clearance examinations, trains workers in lead-safe work practices (LSWP), and provides up to $2,000 per unit in matching funds for rehab projects. This broad spectrum of services constitutes a powerful incentive for the public and private rehabilitation industry to address lead hazards using lead-safe work practices.

# Jurisdiction or Target Area:

# INTRODUCE INCENTIVES FOR LEAD SAFETY INTO CHILD CARE PROGRAMS

Timing issues: It will take 6-12 months to establish the program, build support for it, and develop materials. Full implementation usually will take an additional year. Child care facilities are busiest, and therefore unavailable, during August and September, when children are returning to school or enrolling.

# Feasibility of Implementation: Moderate. The program is feasible, but to reach the child care facilities most in need of support and oversight, it should ideally be part of a larger effort to improve health conditions within all child care facilities. In addition, the program is difficult to implement if it is narrowly focused on lead hazards. Most child care facilities view asthma triggers such as mold, cockroaches, and dust mites as a bigger concern. Therefore, services that address a broad mix of environmental hazards will be more readily accepted.

# POTENTIAL OBSTACLES/BARRIERS
Lead hazards are most effectively addressed in the context of a broader evaluation of environmental hazards, especially mold and moisture, pests and pesticides, and carbon monoxide. Communities need to be prepared to identify potential resources to address lead hazards and to provide technical assistance to help facilities obtain and use those resources.

# ADDITIONAL RESOURCES

# ACCESS ELECTRIC UTILITY PUBLIC BENEFIT FUNDS

# DESCRIPTION OF THE STRATEGY
Over the past decade, the deregulation of the electric utility industry has prompted the creation of public benefit funds in many states. The funds, which total more than $1.5 billion, support a wide range of activities related to energy conservation, including home improvements that produce energy savings such as increased insulation, air infiltration reduction, and sometimes window replacement. Policies are set by the state energy-related agency through negotiations with the key funding source(s). Because such funds often target lower-income households, who suffer disproportionately from lead hazards, the synergies are intriguing. The challenge is that these funds often focus strictly on energy, and efforts to broaden the eligible activities could be perceived as diluting the programs' purpose. As a result, few public benefit programs currently factor lead hazard reduction considerations into their program design.
# BENEFITS
Immediate/Direct Results: Reducing lead hazards in the course of energy conservation improvement projects will directly reduce lead exposure in high-risk properties, preventing lead poisoning and controlling lead hazards in the home.

Public Health Benefits: Repairing lead hazards through window replacement or repair, and correcting moisture problems, can reduce lead exposure and help to alleviate or prevent mold, which can exacerbate respiratory problems.

Other Indirect/Collateral Benefits: These actions can improve overall building quality, durability, comfort, and energy efficiency, reducing utility bills.

# SCOPE OF POTENTIAL IMPACT
Statewide-Requires statewide commitment to make public benefit funds available for health and safety issues.

# PRIMARY ACTORS
State Energy Agency

# KEY PARTNERS
Housing Agency
Utilities
Community-based Organizations

# CRITICAL ELEMENTS
Staff requirements: Short-term staff resources are required to develop the expansion of eligible activities to include lead and other health concerns. Staffing to clarify the actions that are eligible for public benefit funding, and the appropriate protocols and procedures, will need to be integrated into existing structures for administering public benefit funds.

Other resource requirements: Weatherization program crews may need additional training to check for and repair lead hazards.

Institutional capacity required: It is essential that the entity determining the scope and eligible activities for public benefit funds take steps to clarify that lead hazard control activities are eligible for funding. This may require a change in state statutory authority, or a policy change by the state commission or entity that oversees the program and/or the utility that provides the funds through the fee attached to utility bills.

Cost considerations: Once a decision is made to expand the use of public benefit funds, the only implementation cost is the minimal staffing needed to oversee the lead hazard repair services.

Timing issues: It may take over a year to gain policy agreement to expand the scope of activities funded by public benefit funds to support lead hazard assessment and repair. Once such actions are eligible, an additional six months to one year is likely needed to get the program up and running.

Feasibility of Implementation: Moderate. Making public benefit funds available for lead hazard control work may require substantial advocacy and planning efforts.

# POTENTIAL OBSTACLES/BARRIERS
A key potential obstacle may be opposition by energy conservation advocates concerned that expanding the scope of activities eligible for public benefit funding will reduce the number of homes receiving energy improvements.

# ADDITIONAL RESOURCES
N/A

# ILLUSTRATION OF THE STRATEGY IN PRACTICE
The New York State Energy Research and Development Authority (NYSERDA) administers programs funded by the System Benefits Charge (SBC) on the electricity transmitted and distributed by the state's investor-owned utilities. Originally, the fees for New York Energy $mart were intended to cover energy-related upgrades.
Beginning in 2001, the program was expanded to address health concerns. This expansion required a policy decision at the state level to structure the supplemental health and safety component. The program covers up to 50% of the costs associated with energy-efficiency and indoor air quality improvements, to a maximum of $5,000 for a single-family unit or $10,000 for a 2-4 family building. Low-interest loans are available for qualified applicants to cover the balance of the cost of the work. Total funds equal roughly $150 million per year for residential, commercial, and industrial properties combined; $40 million of that is invested in residential properties, both market-rate and lower-income.

Staffing utilized: Once agreement is reached on expanding eligible activities to include health and safety concerns, the staff resources required to develop protocols and procedures to administer the program are nominal: overall, NYSERDA has one to two FTEs working on the New York Energy $mart program.

Other resources utilized: NYSERDA contracted with a separate organization, Conservation Services Group (CSG), to administer the program, audit energy and health results, process financial incentives, and manage loans. The Building Performance Institute (BPI) was engaged to develop work practice standards. A third entity with significant experience in adult education trains contractors (who are required to obtain training and certification to participate in the program). Contractors pay roughly $1,000 for each level of certification; NYSERDA reimburses the contractor for 75% of the training and certification fees.

Factors essential to implementation: A willing utility company and statewide administrator of public benefit funds, and motivated contractors who will pay for training/certification and absorb the cost of lost work time to acquire credentials.

Limitations/challenges/problems encountered: Quality assurance is difficult. It is challenging to verify that the work occurred and to determine whether it was performed consistent with program standards. CSG performs limited field inspections (roughly 15% overall), and BPI conducts sporadic field monitoring.

Magnitude of Impact/Potential Impact: In the initial three years of operation, approximately 4,000 homes received energy upgrades. An unknown number also received health upgrades to address combustion gases and possibly lead hazards. The average expenditure per unit is $7,500.

Potential for replication: Uncertain. NYSERDA is willing to share its experiences with other states (e.g., effective advertising strategies, standards, training curriculum, and program design materials). Key ingredients include creating consumer demand for energy, comfort, and health upgrades (where consumers are willing to pay for upgrades); varying marketing to target specific market segments; and developing standards to ensure that consistent, quality work is performed. Realistically, once funding is agreed upon, it takes about two years to get the system working well.

# BENEFITS
Immediate/Direct Results: Funds are available to increase the supply of lead-safe housing through the rehabilitation of substandard older properties (resulting in the elimination of lead hazards) and the construction of new housing.
Public Health Benefits: Creating decent affordable housing will prevent exposure to lead hazards, especially if priority is given to projects serving very low-income families that are at high risk for childhood lead poisoning.

Other Indirect/Collateral Benefits: Access to decent housing that is affordable to very low-income households and special needs populations enhances the learning and earning potential of the family and improves its ability to pay for health care, child care, and other necessities.

# SCOPE OF POTENTIAL IMPACT
Statewide
Regional (e.g., multi-county)
City- or County-Wide

# PRIMARY ACTOR
Housing Agency

# KEY PARTNERS
Property Taxation Agency
Community-based Organizations

# CRITICAL ELEMENTS
Staff requirements: This will vary depending on the size of the fund and the nature of the activities to be carried out.

Other resource requirements: The heart of a housing trust fund is a reliable, dedicated source of funds. A certainty of funds helps maintain an ongoing program that can sustain a production pipeline and maximize other opportunities.

Institutional capacity required: A housing trust fund needs to be administered by an agency or organization that is familiar with soliciting, reviewing, awarding, and monitoring grants and/or loans for affordable housing programs. The administering entity also needs to work closely with community-based organizations that understand and can articulate the needs and capacity of the community.

Cost considerations: The size of housing trust funds ranges from less than $100,000 per year to many millions of dollars per year, depending on the size of the jurisdiction and other factors.

# POTENTIAL OBSTACLES/BARRIERS
The greatest difficulty in getting started is mobilizing the support to convince a legislative body that such a fund is needed and that an ongoing allocation of funds is required to make the fund operate effectively. This requires establishing clear-cut goals and objectives, creating a coalition of organizations and individuals to support creation of the fund, and identifying alternative funding sources.

Feasibility of Implementation: Moderate. Political will, bolstered by the support of some residential property owners, is key to the implementation of a special real estate assessment mechanism, since a vote by a legislative body and/or the electorate will be required to implement a new property-based tax or assessment. It should be possible to reach agreement to pursue the strategy in numerous jurisdictions, especially where there is concern about the problem of lead poisoning, but action is more likely on approaches that require no new funds, staff, or equipment.

# POTENTIAL OBSTACLES/BARRIERS
Lack of political will is the most significant potential barrier to establishing a special real estate assessment mechanism. In jurisdictions where a referendum is required for revenue-generating measures, the absence of widespread popular support may be a severe obstacle.

# ADDITIONAL RESOURCES

# DEPLOY ENFORCEMENT ORDERS AND GRANT INCENTIVES IN TANDEM

DESCRIPTION OF THE STRATEGY
Public financing for the abatement of lead hazards, combined with code enforcement, provides both the carrot and the stick needed to produce lead-safe housing in high-risk neighborhoods. Landlords are most likely to be "persuaded" to address lead hazards if financial assistance is available to offset at least part of the cost of repairing or replacing windows and correcting other lead hazards.
The threat of being taken to court can be a powerful incentive to landlords. Combining code enforcement with financial incentives is especially useful for motivating action by landlords who own multiple high-risk properties and have limited resources.

# BENEFITS
Immediate/Direct Results: Rental units are rendered safe from lead hazards. Depending on the scope of the program, results can range from a few units to an entire neighborhood or neighborhoods.

Public Health Benefits: Reduction in childhood lead poisoning. In addition, there is a heightened awareness of childhood lead poisoning and how to prevent it.

Other Indirect/Collateral Benefits: An improved standard of habitability: for example, windows that were dilapidated or painted shut become operable.

# SCOPE OF POTENTIAL IMPACT

# CRITICAL ELEMENTS
Staff requirements: Staff requirements will vary depending on the scope of the program and the extent to which activities are performed in-house or contracted out. Post-work clearance must be performed by a trained staff member or contractor who is certified.

Other resource requirements: A good database is critical to efficient operation. Access to standard professional specifications for the scope of work, as well as uniform and reliable cost estimates for typical work, will help build support from owners and landlords to participate in the program. In addition, for abatement programs, there must be a sufficient supply of trained and qualified contractors.

Institutional capacity required: There must be clear legal authority to require owners to correct lead hazards, or at least to repair deteriorated paint. In addition, there must be an institutional structure that can enforce code violations. This may be a municipal court or an administrative hearing process that has enforcement authority.

Cost considerations: There must be a funding source to support lead hazard control as well as the code enforcement effort.

Timing issues: None; the program can be initiated at any time.

Feasibility of Implementation: Moderate. Can be implemented wherever the basic requirements are in place: political support, a staff commitment to working with landlords, legal authority, an operating code enforcement program, a skilled workforce, and financial aid to owners.

# POTENTIAL OBSTACLES/BARRIERS
There must be political support to undertake the program and political will to use code enforcement authority against recalcitrant landlords if they don't agree to repair their properties. Landlords can be expected to appeal to political leadership to forego or curtail the program. Most difficult of all is getting landlords to buy into the program. First, it is often difficult to find out who the actual owner of a specific property is. Second, it is even more difficult to get owners to come to a public meeting or agree to a private meeting. Third, if the owner does attend, he or she needs to be convinced to participate in the program and to pay for at least a portion of the repairs. The absence of a strategy for dealing with landlords, including absentee landlords, can constitute a considerable barrier to success.

Limitations/problems encountered: Finding the owners/landlords was often very difficult. Getting them to "apply" for a risk assessment was even harder.
City staff did mailings to owners and held group and individual meetings. A combination of hard selling and the threat of code enforcement was needed to win over reluctant landlords, especially since the amount available for window repairs was limited to approximately $2,000 per unit.

Magnitude of Impact/Potential Impact: Lead hazard control was completed in 800 units in one year. The owners of approximately 50 units were taken through the court system.

# ESTABLISH A REVOLVING FUND TO STRETCH DOLLARS

DESCRIPTION OF THE STRATEGY
Instead of making outright grants of funds for lead hazard control projects, jurisdictions can make loans through a revolving loan fund that recycles the original pool of funds. Loans can be amortized, or repayment can be deferred until the property is sold or refinanced. Financing terms can be indexed according to income so that the lowest-income homeowners can access funds at very low or even no interest. Permanent revolving funds can be capitalized using an appropriation from the jurisdiction's general fund, a federal grant, or the designated receipts of a specific tax or fee, so that such monies are continuously reserved for a specified purpose.

# CRITICAL ELEMENTS
Staff requirements: One full-time person in the assigned agency is needed to set up the program. This includes developing program procedures and guidelines, preparing marketing materials, and establishing agreements with program partners such as health departments, lenders, and community-based organizations. A key policy issue is whether the agency administering the loan fund will manage the program or rely on other partners (banks, community-based nonprofit organizations, etc.) to perform outreach, prepare and process applications, and service the loans. If all work is done in-house, the staffing requirements will vary substantially depending on the volume of loans and the way the program is designed.

Other resource requirements: Program brochures, manuals, and operating procedures for lending programs.

Institutional capacity required: There needs to be institutional capacity to make and/or service real estate loans, including knowledge of the industry and experience with lending programs. Ideally, the program is managed by a housing agency that has a track record with other financing mechanisms. Legislative authority and appropriations must create and fund the program; authority to use repayments for additional loans distinguishes revolving funds from one-time appropriations. Capacity to assist owners with applications is important and can be provided by community-based nonprofit organizations, city/county/state staff, or lenders.

Cost considerations: A revolving fund must initially be capitalized by designated funds. Annual or regular additions to the fund may be critical to create stability, meet growing demand, and assure continuation of the fund. Fees to process and service the loans and the cost of outreach need to be factored into the plan.

Timing issues: A revolving loan fund can be established at any time.

Feasibility of Implementation: Excellent if the institutional capacity is in place.

# POTENTIAL OBSTACLES/BARRIERS
Making loans to low-income owners and/or investors in property rented to low-income families is often not a profitable venture.
Financial institutions are experienced and efficient in processing loans; however, a bank will probably need an incentive such as a fee unless it is willing to do the work to maintain or improve its Community Reinvestment Act (CRA) rating. Capacity must also be established to get referrals or generate applications, assist owners with the application process, qualify applicants, and prepare complete applications for whoever processes the application. A state or local agency may be well served by relying on community-based organizations (CBOs) to reach the owners of the highest-risk properties and encourage them to apply.

# ADDITIONAL RESOURCES
N/A

# ILLUSTRATION OF STRATEGY IN PRACTICE
MassHousing provides financing to owners of one- to four-unit properties to abate lead-based paint. Low-income owner-occupants are eligible for loans with 0 percent interest for which payment is not due until the property's sale or refinancing. Nonprofit organizations and "investor-owners" (landlords who do not live in a unit in the property) that own properties being rented to income-eligible households are eligible for loans that must be repaid in the near term. The maximum loan amounts are: single-family-$20,000; 2-family-$25,000; 3-family-$30,000; and 4-family-$35,000.

Limitations/problems encountered: Massachusetts found that bond funds didn't work well; a State appropriation was much better. The borrower needs to be current on the mortgage, taxes, and utilities.

Magnitude of Impact/Potential Impact: Approximately 200-300 loans are made each year. Nearly $50 million has been loaned since 1997.

Potential for replication: Very high. There must be a nonprofit network or other means to assist owners in applying and, if the MassHousing model is followed, a bank willing to participate.

# IMPOSE FEES ON REAL ESTATE TRANSACTIONS AND RELATED PROFESSIONAL LICENSES

DESCRIPTION OF THE STRATEGY
Imposing fees on real estate transactions and on the licenses of professionals who provide housing, risk management, or lead services can help subsidize the cost of managing public sector systems for lead poisoning prevention. Fees levied on real estate transactions (e.g., for the recording of deeds or tax stamps) and on licenses for professions and disciplines such as real estate agents, insurance agents, lead inspectors, risk assessors, lead abatement contractors, and others can be a source of dedicated income for lead poisoning prevention programs. These types of strategies would be more common at the state level, but enactment and implementation are possible at the county and municipal levels as well.

# BENEFITS
Immediate/Direct Results: New or increased recurring funding sources are immediately created as the fee structure goes into effect. These funds can be used to support primary prevention programs that may have little, if any, funding from other sources.

Public Health Benefits: Funds from these fees or surcharges can be used to fund existing childhood lead poisoning prevention programs as well as initiatives that complement efforts already in existence.

Other Indirect/Collateral Benefits: By targeting home purchases and individuals directly involved with buying, selling, and insuring housing, such fees could raise awareness among homebuyers, real estate agents, mortgage lenders, insurers, and others about lead poisoning prevention issues. Such a strategy may also reinforce the responsibility of buyers, agents, and others to ensure that the home involved in a specific transaction is lead-safe.
# SCOPE OF POTENTIAL IMPACT

# CRITICAL ELEMENTS
Staff requirements: Minor to moderate administrative activities would be required at the start-up of a strategy that imposes a fee on real estate transactions or a surcharge on certain professional licenses. It is unlikely that any new staff would be required to implement such fees or surcharges.

Other resource requirements: In the case of professional license surcharges, the agency(ies) that oversee the licensing may need to be involved in administering the strategy.

Institutional capacity required: New statutory or code sections, naming the specific real estate transactions or professional licenses involved, must be added, and the agency(ies) involved may need to promulgate new regulations to implement the strategy.

Cost considerations: Small start-up costs to the government agencies responsible for the collection of the new fees or surcharges can be expected, but no further costs should exist beyond this transition period.

Timing issues: None

Feasibility of Implementation: Moderate. Political will, bolstered by the support of individuals, community and statewide organizations, and those who have been adversely affected by lead poisoning, will be key to implementing this strategy. Partnering with professionals in the real estate or insurance industries who are concerned about or aware of lead hazards in homes would also be useful.

# POTENTIAL OBSTACLES/BARRIERS
The lack of political will to impose a new fee or surcharge is the major obstacle that could block the realization of this strategy. Once put in place, a license surcharge strategy may have to overcome overburdened licensing agencies or inefficient transfer of revenues. A minimal fee on real estate transactions is not likely to face significant implementation barriers.

# ADDITIONAL RESOURCES
N/A

# ILLUSTRATION OF STRATEGY IN PRACTICE
In Massachusetts, surcharges are imposed on the annual fees of a variety of professional licenses. These surcharges are imposed on individuals licensed as real estate brokers and property and casualty insurance agents, as well as on mortgage brokers, mortgage lenders, small loan agencies, and individuals licensed to perform lead inspections. The surcharge is generally $25 per license or renewal, though mortgage brokers, lenders, and small loan agencies pay a $100 surcharge.

Staffing utilized: 0.5 FTE.

Other resources utilized: N/A

Factors essential to implementation: The cooperation of professionals impacted by this strategy is key, as is the level of priority each licensing board places on collecting the surcharges. Two additional factors are essential to the continuing implementation of this strategy: 1. the continued existence of the statutory authority contained in the Acts of 1993; and 2. the timely distribution of the surcharges to the Trust Account for disbursement to DPH.

Limitations/challenges/problems encountered: There have been no major challenges in implementing the surcharge strategy. However, in 2000 the Office of the State Auditor found that some professions were not being billed for the surcharge in a timely manner; the responsible agency subsequently corrected the problem, and in 2002 the Legislature mandated that the surcharge be collected in direct connection with the license renewal process.
Magnitude of Impact/Potential Impact: The surcharge strategy raises about $1.25 million per year for DPH/CLPPP. This gives the department and the program much-needed funds to engage in a variety of childhood lead poisoning prevention activities.

Potential for replication: Moderate. In states with sufficient political will, fee-based funding strategies, such as surcharges on specific professional licenses, could be readily replicated.

# IMPOSE TAXES OR FEES ON POLLUTERS

DESCRIPTION OF THE STRATEGY
States and local jurisdictions that have appropriate authority can impose taxes and/or fees on corporations judged responsible for contributing to an environmental health or pollution problem, including companies that have contributed to lead hazards in housing. The revenue from these taxes and fees can be used to fund primary prevention programs.

# BENEFITS
Immediate/Direct Results: New or increased funding sources are immediately created as the tax or fee structure goes into effect. These funds can be channeled into a variety of primary prevention efforts that may not have other sources of sustained funding.

Public Health Benefits: Funds from these taxes or fees can be used either to fund existing public health programs related to the prevention of childhood lead poisoning or to expand primary prevention efforts to reach a greater number of people.

Other Indirect/Collateral Benefits: By targeting corporations responsible for the production of goods that caused indoor lead contamination, a tax or fee on these polluters helps shift the burden away from taxpayers and individuals who are impacted by environmental health hazards.

# SCOPE OF POTENTIAL IMPACT

# CRITICAL ELEMENTS
Staff requirements: Minor to moderate administrative activities would be required at the start-up of a new tax or fee on polluters. It is unlikely that any new staffing would be required.

Other resource requirements: The taxing agency must have all information pertaining to the specific corporations or class of corporations upon which the new tax or fee is to be imposed.

Institutional capacity required: A new statutory or code section, naming the specific corporation or class of corporations to be targeted with the new tax or fee, must be added. In many states, taxing authority of this nature must be approved at the state level before being implemented by local jurisdictions.

Cost considerations: Small start-up costs to the government agency responsible for the collection of the new tax or fee can be expected, but no further costs should exist beyond this transition period. Costs to targeted corporations will likely be passed on to retailers and, in turn, to consumers.

Timing issues: Because new statutory authority may be needed to implement taxes or fees on polluters, the implementation of the strategy will depend on legislative calendars, committee hearing schedules, and local government meeting schedules. In some areas, taxes and fees carry "sunset" provisions, meaning they must be reviewed and reauthorized on a periodic basis. In other areas, tax or fee structure changes such as these are permanent.
Feasibility of Implementation: Moderate. Political will, bolstered by the support of consumers, community and statewide organizations, and those who have been adversely affected, is key to the success of efforts to impose taxes and fees on polluters. Some may be opposed to imposing taxes or fees on selected corporations because that might be seen as "bad for the general business climate" in the state or local area. Others may support shifting the burden of financially supporting lead poisoning prevention programs to those directly responsible for indoor environmental health problems caused by lead.

# POTENTIAL OBSTACLES/BARRIERS
The lack of political will to impose a new tax or fee, especially on powerful corporate interests, is the major obstacle that could block the realization of this strategy. Many states and local areas have very strong business lobbies, and targeted corporations themselves may be willing to spend large amounts of time and money to defeat a strategy that would increase their overall costs.

# ADDITIONAL RESOURCES
N/A

# ILLUSTRATION OF STRATEGY IN PRACTICE
The Childhood Lead Poisoning Prevention Act was passed in California in 1991. The state act provides funds for county programs to identify children at risk for lead poisoning and to work toward reducing children's exposure to lead hazards. The program implemented under the act is funded by a fee imposed on corporations that previously, or currently, bear responsibility for the production and sale of lead or products containing lead. This fee also extends to corporations and businesses that are responsible for other sources of lead or that have contributed, or continue to significantly contribute, to environmental lead contamination (indoor and outdoor). The fee is assessed on each individual corporation based on two criteria:
- The corporation's past and present responsibility for environmental lead contamination
- The corporation's "market share" responsibility for environmental lead contamination

The two criteria allow greater fees to be imposed on those corporations that hold a greater responsibility for environmental lead contamination.

Magnitude of Impact/Potential Impact: Annually, $12 million is collected for county programs in California via this fee. The impact is significant in that the fees are the main source of funds for the state's Childhood Lead Poisoning Prevention Program and the county-based programs.

# Jurisdiction or Target Area: California

Potential for replication: Moderate. In states and localities where the political will is present, fee-based funding strategies such as this could be readily replicated with a minimal use of resources.

# LEVERAGE COMMUNITY REINVESTMENT ACT FOR LEAD SAFETY AND HEALTHY HOMES

DESCRIPTION OF THE STRATEGY
The federal Community Reinvestment Act (CRA) was enacted in 1977 to challenge widespread discrimination in mortgage lending and to encourage banks to help meet the credit needs of all segments of their communities, including low- and moderate-income neighborhoods. Banks may provide loans, grants, technical assistance, or services to support community development activities that serve low- and moderate-income communities, including assistance with the cost of abating or controlling lead hazards in housing.
Banks report relevant activity to their respective federal regulatory agencies, which monitor the banks and issue public ratings of the banks' CRA activities in three areas: lending, investment, and services. These ratings affect whether regulators will permit banks to merge or expand. Many banks are therefore open to partnerships with local agencies or community-based organizations that are seeking financing or other support for specific proposals.

# CRITICAL ELEMENTS
Staff requirements: Once a financial institution decides to increase lending or make qualified investments in underserved communities, it must develop appropriate agreements, policy guidelines, and operating procedures. Some banks execute contractual agreements with a nonprofit organization to administer the activity and assist in developing loan processing procedures and establishing program eligibility criteria. These activities may require a significant investment of time and effort, depending on the experience level of, and extent of initiative on the part of, the lender and the nonprofit partner.

Other resource requirements: It is easiest for the bank to partner with an experienced nonprofit agency operating an effective program that needs added funding or assistance to expand into new areas or activities.

Institutional capacity required: The federal legal authority for CRA activity is already established (12 U.S.C. 2901).

Cost considerations: The modest cost of operating community lending activities, as an in-house service and/or under contract with a nonprofit partner, counts as a CRA service.

Timing issues: Can be implemented at any time.

Feasibility of Implementation: Moderate where there is a bank that is interested in undertaking CRA activity involving prevention of childhood lead poisoning.

# POTENTIAL OBSTACLES/BARRIERS
The typical problem is finding or designing an activity or program that meets the bank's criteria for lending or investment and also provides substantive assistance. For instance, if the CRA activity is lending to abate or control lead hazards, the bank must be willing to discount its loans or make other changes to its lending guidelines to make the loans attractive to borrowers. A reduction of 1% in the interest rate will generally not be sufficient (see the worked comparison below). Many potential borrowers will not have good credit ratings. A modest relaxation of credit standards may qualify a few borrowers who otherwise could not borrow at conventional lending rates. However, it will probably be necessary to supplement lending at below-market rates with grants for a portion of the cost.

# ADDITIONAL RESOURCES

Other resources utilized: N/A

Factors essential to implementation: The primary factor is a bank willing to participate, i.e., to absorb the closing costs in exchange for CRA credits. Second, there must be willing landlords who see the remediation of lead hazards as being in their interest.

Limitations/problems encountered: The biggest problem is getting landlords to participate. They must become part of the solution.
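To make the rate-discount point above concrete, the following minimal sketch compares monthly payments under the standard fixed-rate amortization formula. All loan figures are hypothetical and chosen only for illustration; they are not drawn from any specific bank program.

```python
# Hypothetical figures illustrating why a 1-percentage-point rate discount
# alone barely changes the monthly payment on a small lead hazard control
# loan, and why grants are usually needed as a supplement.

def monthly_payment(principal: float, annual_rate: float, years: int) -> float:
    """Standard fixed-rate amortization formula."""
    r = annual_rate / 12          # monthly interest rate
    n = years * 12                # number of monthly payments
    return principal * r / (1 - (1 + r) ** -n)

conventional = monthly_payment(10_000, 0.07, 10)  # roughly $116/month
discounted = monthly_payment(10_000, 0.06, 10)    # roughly $111/month
print(f"Savings from a 1% discount: ${conventional - discounted:.2f}/month")
```

On these assumed terms, the discount saves only about $5 per month, which is unlikely on its own to draw owners of high-risk properties into the program.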
Magnitude of Impact/Potential Impact: Borrowing by owners of rental property is made feasible by two factors: the cost of borrowing is reduced when the bank waives closing costs, and the cost of rehab is partially offset by a grant from the County of up to $6,250. Approximately 40-50 units will be completed with funds from a HUD Lead Hazard Control grant and the First Place Bank subsidy.

Potential for replication: This can be replicated in communities with a bank willing to make concessions in its lending in return for CRA credits.

# MAKE THE MOST OF FINES AND PENALTIES

DESCRIPTION OF THE STRATEGY
Typically, fines and penalties that government agencies collect revert to the treasury's general fund. Designating a special fund, revolving or otherwise, offers a mechanism for such receipts to be reserved for a special purpose such as lead hazard control or promoting lead-safe work practices. Rather than having all taxpayers fund code enforcement and other services directed at problems caused by a few, jurisdictions can charge violators fees to cover the costs of inspection, investigation, and enforcement of building codes and other ordinances related to lead hazards and poisoning prevention.

# BENEFITS
Immediate/Direct Results: This strategy allows municipalities, counties, and even states to fund important prevention programs and enforcement actions that may not receive adequate money from the general fund. The revenue generated from designated fines and penalties can then be used to support more code enforcement, training sessions on lead-safe work practices, lead hazard control, and other primary prevention measures.

Public Health Benefits: Presumably, with significant penalties in place for violating statutes or codes related to lead hazards, property owners will be apt to follow the rules, thus engaging in lead-safe work practices, reducing lead hazards in homes, and ultimately lowering the chance that children will be exposed to lead. The primary prevention measures funded by fines and penalties from those property owners who don't play by the rules will also reduce children's exposure to lead hazards.

Other Indirect/Collateral Benefits: As word of the possibility of fines or penalties spreads throughout the affected area (i.e., the city, county, or state), property owners, especially landlords, will become more aware of the issues surrounding lead poisoning in children, lead-safe work practices, and reducing and controlling lead hazards in homes.

# SCOPE OF POTENTIAL IMPACT

# CRITICAL ELEMENTS
Staff requirements: Since this income is generated by existing housing, building, and/or health code enforcement programs, no additional staff should be necessary.

Other resource requirements: Implementation would require systems for tracking data on fines and penalties generated from codes and statutes governing lead hazards and lead-safe work practices (or other building or housing provisions).

Institutional capacity required: Statutory and regulatory authority is needed to create the specialized fund and designate the use of the fines and penalties. Code inspection and enforcement staff may need to be trained on how to recognize lead hazards and unsafe work practices if the jurisdiction has not previously focused on these areas.
Cost considerations: In almost all instances, no new enforcement agency or staff will be needed. One possible problem is that the cost of compliance, as well as fines and penalties, may be passed on to tenants; to mitigate this situation, consideration should be given to using some proceeds for targeted rent subsidies as needed.

Timing issues: Enforcement, the collection of fines and penalties, and the flow of money from the specialized fund will depend on the workload of the authorized agency.

Feasibility of Implementation: High. Local governments are always looking for revenue generators. Since this approach enables a program to be self-funded, holds bad actors accountable, and doesn't require a general tax increase, it should be quite feasible to implement.

# POTENTIAL OBSTACLES/BARRIERS
Landlord associations will resist enactment and enforcement of code provisions for fines and penalties. If building inspection, housing, and health departments are overburdened, implementation and enforcement may be lackluster, resulting in a low rate of penalty collection.

# ADDITIONAL RESOURCES

Other resources utilized: N/A

Factors essential to implementation: The main factor essential to implementation of this strategy in San Francisco is the cooperation between two sections of the Department of Building Inspection: the Lead Abatement Section and the Administration and Finance Section. Code enforcement and penalty collection officials rely on the City Tax Collector to collect delinquent penalties and on department hearing officers when a penalty is appealed. Underlying all of this is the code authority already in place.

Limitations/challenges/problems encountered: None listed.

Magnitude of Impact/Potential Impact: On average, the Building Inspection Fund receives between $1.2 and $1.7 million each year from fines and penalties. A portion of these funds is used to support the enforcement of lead-safe work practices standards for exterior surfaces. This is the major source of funding for this enforcement work; no general revenue is used for the enforcement of the work practices standards.

Potential for replication: The potential for replication of this approach to the fines and penalties strategy is high. It is relatively easy to administer, and it is easily integrated into existing code enforcement and penalty structures.

# OFFER AN INCOME TAX CREDIT FOR ABATEMENT

DESCRIPTION OF THE STRATEGY
One approach to funding lead hazard control outside of general fund appropriations is for governments to offer a credit on federal, state, or local income taxes to property owners who expend funds for eligible activities. A tax credit's advantage of extremely low administrative costs is balanced against the problem that tax credits are difficult to target and frequently provide little or no benefit for very low-income properties that generate little or no tax liability.

# BENEFITS
Immediate/Direct Results: A tax credit is a way to finance the cost of lead hazard control, subject to cost limitations and other operational requirements such as the use of certified contractors. Although income tax credits provide a dollar-for-dollar reduction in tax liability at the end of a tax year, up to the maximum amount specified, the funds are not available to pay for the cost of abatement at the time the work is performed.
Public Health Benefits: Additional lead-safe houses will expose fewer children to lead hazards.

Other Indirect/Collateral Benefits: Ease of administration.

# SCOPE OF POTENTIAL IMPACT
Statewide
City- or County-Wide

# PRIMARY ACTORS
Property Taxation Agency

# KEY PARTNERS
Property Owners
Contractors

# CRITICAL ELEMENTS
Staff requirements: The staffing requirements to implement a tax credit for abatement are minimal. There is the initial need to draft regulations to incorporate the credit into a state's tax code, as well as explanatory materials to foster general public understanding. Once the tax credit is implemented, it becomes just another line item on a tax return.

Other resource requirements: There must be a mechanism for verifying that a property for which a credit is claimed has lead hazards and that the work has addressed the lead hazards (through either abatement or interim controls), leaving the property or work area lead-safe at the end of the job. Jurisdictions should specify that the hazard control and hazard determination/clearance work be performed by appropriately trained or certified personnel. There must be a sufficient supply of qualified personnel available to property owners.

Institutional capacity required: A tax structure and administrative agency that can fairly administer a tax credit as part of an income tax system.

Cost considerations: It is very difficult to estimate the financial impact of a tax credit for lead paint abatement on a jurisdiction's tax revenue, since it is impossible to predict how many taxpayers will take advantage of this opportunity.

Timing issues: None.

Feasibility of Implementation: Extremely easy to implement, but difficult to enact when state and local revenues are constrained.

# POTENTIAL OBSTACLES/BARRIERS
Successfully amending a complex tax code may be daunting for people and organizations normally dedicated to health and housing issues. The tax credit must be large enough to create an incentive for property owners to spend their own money for lead hazard control or abatement. At the same time, the cost must be reasonable enough that it can be borne by the state.

# ADDITIONAL RESOURCES
1. Federal income tax credit legislation for lead abatement is now pending (S. 1228).

# ILLUSTRATION OF STRATEGY IN PRACTICE
The Commonwealth of Massachusetts' "deleading" income tax credit, which has been in place since 1994, offers a model for other states interested in helping residents pay for the cost of abating lead hazards. The owner of a residential property can claim a tax credit equal to the lesser of the cost of deleading or $1,500 for the containment or abatement of lead hazards, including the replacement of window units. A tax credit equal to the lesser of one-half of the cost of deleading or $500 is available to offset the cost of bringing the property into interim compliance, using interim control measures, pending full compliance. Several steps are necessary to claim the credit: the property must be inspected by a licensed inspector; the property is then "deleaded" by a MA-licensed contractor; a licensed inspector issues a letter of compliance or a letter of interim control; and the owner files a copy of the inspector's letter with the owner's income tax return. The tax credit is a dollar-for-dollar offset for the actual amount spent against taxes owed. Any unused portion of the credit may be carried forward from the year in which the credit was first claimed to any of the next seven years. Some activities may be undertaken by a "qualified unlicensed individual" pursuant to State regulations.
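For concreteness, the credit arithmetic just described can be sketched as follows. This is an illustrative reading of the rules summarized above, not an official calculator; the function names and example figures are hypothetical, and the Massachusetts Department of Revenue's rules govern actual claims.

```python
# Illustrative sketch of the Massachusetts deleading credit arithmetic
# summarized above; not an official calculator.

def deleading_credit(cost, interim=False):
    """Lesser of the deleading cost (half the cost for interim control)
    and the applicable cap: $1,500 for full compliance, $500 for interim."""
    return min(cost / 2, 500.0) if interim else min(cost, 1500.0)

def apply_credit(tax_owed, credit):
    """Dollar-for-dollar offset against taxes owed; any unused portion
    may be carried forward (for up to seven years under the statute)."""
    used = min(tax_owed, credit)
    return tax_owed - used, credit - used  # (remaining tax, carryforward)

# Example: a $4,000 full deleading job earns the $1,500 cap, not $4,000.
credit = deleading_credit(4_000)                      # -> 1500.0
remaining_tax, carryover = apply_credit(900, credit)  # -> (0.0, 600.0)
```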
# Jurisdiction or Target Area: Massachusetts

Other resources utilized: N/A

Factors essential to implementation: State legislation is the essential prerequisite. There is no staff dedicated to implementing or enforcing this particular element of the tax code.

Limitations/problems encountered: The relatively modest size of the tax credit ($1,500) limits its usefulness as an incentive to undertake lead hazard abatement activities. Since most deleading projects in Massachusetts cost several thousand dollars, the tax credit is seldom the deciding factor in financing the cost of lead hazard remediation.

Magnitude of Impact/Potential Impact: The dollar impact on the State is not known.

Potential for replication: This can be implemented in any jurisdiction with an income tax and a legislature willing to create a tax credit for the purpose of abating lead hazards. The potential small loss of revenue will be weighed against other competing budgetary interests.

# Contact for Specific Information
Massachusetts Department of Revenue 617-887-MDOR

# PROVIDE LOCAL PROPERTY TAX CREDITS

DESCRIPTION OF THE STRATEGY
Local governments could offer a credit on, or forgiveness of, property taxes to property owners who make expenditures on specified activities such as window replacement, lead hazard control, or correction of other health and safety hazards, just as credits are currently provided in some locales for marginal properties that are substantially improved. A property tax credit could be very narrowly targeted, for example to a single high-risk neighborhood within designated boundaries. While property tax revenue reductions ultimately have the same budgetary impact as expenditure increases, some jurisdictions may find that tax credits are more palatable than increasing local agency budgets. No specific lead paint or health-related property tax credit exists anywhere today, although a bill (HB1039) to create a property tax credit for lead hazard reduction for residential and child care properties was introduced in Maryland in 2004.

# BENEFITS
Immediate/Direct Results: Improvements related to lead safety and other health considerations are far more likely to be made by owners who can recoup some of their costs. Property tax credits provide more direct and immediate benefits to owners of low-income properties than income tax deductions or credits. Making the credit dependent upon independent verification of the work provides an opportunity to build in quality controls.

Public Health Benefits: Lead hazard reduction directly reduces lead exposure to children.

Other Indirect/Collateral Benefits: Improvements will generally increase property values and the durability of housing, restore vacant buildings, provide employment opportunities, and increase community pride.

# SCOPE OF POTENTIAL IMPACT

# CRITICAL ELEMENTS
Staff requirements: Approximately 1.0 FTE might be needed to process applications and approve repairs for a program in a major jurisdiction.
Other administrative aspects of this program probably could be carried out within the existing staffing and budget of any state or local revenue or property tax agency.

Other resource requirements: Some new administrative processes and forms would be needed.

Institutional capacity required: Changes in state, county, and local tax law may be necessary, depending on the jurisdiction.

Cost considerations: Losses in property tax revenue would be the main cost to a jurisdiction. There would be marginal administrative costs in ensuring that required repairs are actually done and in auditing claims to avoid fraudulent use of the credit. Long-term enhancement of property value may partially offset the lost tax revenue as lead safety becomes valued in the marketplace. Collateral improvements to property condition (e.g., increased energy efficiency, new windows, plumbing and roof repairs) will increase property value and durability, which in turn should increase future property tax revenues and help arrest blight.

Timing issues: No seasonal or cyclical considerations, other than the fact that policies that reduce tax revenue are more likely to succeed in times of budget surpluses. The timeline to implement depends on the legislative process.

Feasibility of Implementation: High. Many jurisdictions provide property tax credits for a wide range of purposes that are deemed socially beneficial (such as credits for low-income, elderly, disabled, or blind occupants; properties used for charitable or educational purposes; historic preservation; substantial property improvements; and even brownfield clean-up). A credit to promote lead hazard control or other health-related housing improvements would seem to have comparable political appeal.

# POTENTIAL OBSTACLES/BARRIERS
Fraudulent use of the credit could occur without proper quality control measures and independent verification of repairs.

# ADDITIONAL RESOURCES
N/A

# ILLUSTRATION OF STRATEGY IN PRACTICE
Since state-enabling legislation was enacted in 1996, Baltimore has offered a property tax incentive program for owners who complete substantial rehabilitation (greater than 25% of the initial "assessed full market value" of the property) of landmark-designated properties and properties located in one of the city's historic districts. The program is not designed, nor has it ever been used, specifically for lead hazard control or other health-related repairs, although such hazards are often corrected incidentally. The assessed tax of the renovated or rehabilitated property remains for 10 years at the same level as before the start of renovation. The credit covers 100% of the tax assessment increase due to the improvements made (see the sketch below) and is fully transferable to a new owner for the remaining life of the credit, provided the property is certified by CHAP.

Staffing utilized: 1.5 FTE at CHAP run the program. No new staff has been added at the state tax agency or city finance agency, but a tax agency staff person estimates that each agency currently uses as much as 1 FTE to do additional data input and record keeping to support this program.

Other resources utilized: N/A.
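The following minimal sketch shows the freeze-and-credit arithmetic described in the illustration above. The assessment values and tax rate are hypothetical and chosen only to make the mechanism concrete; they are not Baltimore figures.

```python
# Hypothetical illustration of the Baltimore-style assessment freeze:
# for 10 years the owner is taxed on the pre-rehab assessment, so the
# annual credit equals the tax rate times the assessment increase.

def annual_credit(pre_assessment, post_assessment, tax_rate):
    """Credit = 100% of the tax attributable to the assessment increase."""
    return (post_assessment - pre_assessment) * tax_rate

# e.g., rehab raises the assessment from $80,000 to $140,000 at a 2% rate:
print(annual_credit(80_000, 140_000, 0.02))  # -> 1200.0 per year, for 10 years
```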
Factors essential to implementation: Implementation requires cooperation among CHAP (accepts, reviews, and approves applications and projects), the city department of housing and community development (construction permits), the state department of taxation (calculates the value of the tax credit), and the city department of finance (issues property tax bills).

# POTENTIAL OBSTACLES/BARRIERS
One potential obstacle is opposition to new fees if the dedicated fund is established based on fees and fines. Therefore, the fees must be kept to the minimum needed to establish and maintain the fund, and the basis and justification for the new fund must be clear and convincing, based on facts about housing and health conditions. Second, property owners are likely to object to new or enhanced housing inspections. Public education and outreach must convince decision-makers that (1) inspections are crucial to relieving documented housing conditions that threaten the health and safety of the occupants; and (2) a more professional code enforcement program featuring registration of rental properties, scheduled inspections timed so that property owners can anticipate them, and consistent enforcement processes will provide greater predictability and objectivity, as well as accountability for compliance. Third, the goals of decent housing condition and lead safety must take precedence over zealousness to garner revenues from penalties (to hire more staff to collect more fines, etc.). Orders to comply without financial penalty should be vigorously pursued, since in many cases the owner's limited resources would be better spent on correcting violations than on paying fines. Fines must be set high enough to motivate property owners to cooperate with enforcement staff as well as to invest preemptively in their properties.

# ADDITIONAL RESOURCES
N/A

# ILLUSTRATION #1 OF STRATEGY IN PRACTICE
The City of Los Angeles has adopted a housing ordinance that requires every residential rental property with two or more units to be inspected on a scheduled basis, currently once every five years. The housing habitability inspection, paid for by a fee of $27.24 per unit per year, covers compliance with codes for fire and life safety, building, electrical, plumbing, heat and ventilation, health, and lead hazards.

Other resources utilized: N/A

Factors essential to implementation: The essential components are a professional code enforcement agency, a good database and tracking system, effective outreach and education of property owners and contractors, and consistency of treatment.

Limitations/problems encountered: The program needs to account for owners' desire and need to recoup their investment, and for ways to help contractors understand potential liability.

Magnitude of Impact/Potential Impact: Approximately 180,000 units are inspected each year. In a pilot program in one-third of the council districts, inspectors who have been trained in lead safety are citing landlords for visible lead hazards and requiring that all work in pre-1979 buildings that disturbs paint be performed using lead-safe work practices. City inspectors make referrals to the county lead program to document violations. They also refer buildings where children are at risk to community organizations, which deploy staff to educate tenants to identify and complain of unsafe work practices and to have their children screened.
Potential for replication: Moderate. Cities and counties that have a housing code could adopt a systematic enforcement program using fees or appropriations dedicated to code enforcement. While many codes do not specifically cover deteriorated paint, there are generally other habitability standards that can be cited. Code inspectors need to be retrained to look at habitability issues, not just building or structural conditions.

Secondary Actor(s): N/A

Staffing utilized: No information provided.

Other resources utilized: N/A

Factors essential to implementation: Registration is the key to success. Once a property is registered, it is possible to contact the owner or the owner's representative. The owner knows the property will be inspected and that it must be maintained. Regular inspections are essential to maintaining properties in a safe condition.

Limitations/problems encountered: It is very difficult to keep up with the constant turnover in ownership of small apartment buildings. The vast majority of properties subject to registration and inspection are three-unit properties. Small property owners are investing for appreciation rather than long-term ownership, and pay little attention to maintenance.

Magnitude of Impact/Potential Impact: 150,000 to 180,000 units are inspected annually.

Potential for replication: Moderate. Other states and localities seeking to replicate New Jersey's approach would need to find a basis for widespread registration that reflects their individual needs and past history. New Jersey's laws grew out of a need to regulate tenements in the early 1900s and have gradually evolved since then. Also, the State has a relationship with its municipalities that is uncommon in other states.

# Contact for Specific Information
Michael Motich Supervisory Code Administrator 609-633-6225

Other resource requirements: While the direct outcome of a disclosure law will be providing families with the information they need to make informed housing choices, the reality is that many families living in high-risk housing do not have real housing options. Therefore, the indirect goal of the disclosure requirement is to motivate property owners to go beyond merely providing information about lead hazards and to take steps to address them. Resources to assist landlords of low-income properties and owner-occupants, including free training in lead-safe work practices, grants, and low- or no-interest loans, would help accomplish this goal.
# LEAD SAFETY AND HEALTHY HOMES STANDARDS
ADOPT STATE AND LOCAL LEAD HAZARD DISCLOSURE LAWS
CERTIFY LEAD SAMPLING TECHNICIANS
ENSURE LEAD SAFETY IN LICENSED CHILD CARE PROGRAMS
ESTABLISH A LEAD-SAFE HOUSING REGISTRY
MAKE LEAD HAZARDS A VIOLATION OF THE HOUSING OR HEALTH CODE
NOTIFY ALL RESIDENTS IN A BUILDING FOUND TO CONTAIN LEAD HAZARDS
PROTECT OCCUPANTS DURING HAZARD REMEDIATION AND RENOVATION WORK
REQUIRE RENTAL PROPERTY OWNERS TO INFORM TENANTS HOW TO REPORT DETERIORATING PAINT
REQUIRE SAFE WORK PRACTICES DURING REMODELING, REPAIR, AND PAINTING
TRAIN PAINTERS, REMODELERS, AND MAINTENANCE STAFF IN LEAD-SAFE WORK PRACTICES

# Institutional capacity required: The lead agency, presumably the health or housing department, must have the capacity to enforce the law in order for this strategy to make a meaningful difference. In addition, judges and prosecutors must be educated about lead poisoning and the goals of the law so that settlements and judgments go beyond collecting fines and actually require owners to take measures that will protect tenants.

Cost considerations: This is a low-cost way to motivate owners to invest in lead hazard control. The cost of enforcement can be offset by fees and fines.

# Timing issues: The timeline to enact and initially implement a disclosure requirement can be quite long (18 months to 2 years or more), depending on the political climate and the calendar of the city council or state legislature.

Feasibility of Implementation: High, though successful implementation of this strategy depends on legislative approval, which is difficult to predict. Building the support of community-based organizations, tenants, and others concerned about lead poisoning, such as pediatricians, will help achieve passage.

# POTENTIAL OBSTACLES/BARRIERS
Landlords, property management companies, and real estate agents may oppose this legislation. In addition, other agencies that need to be involved, such as inspection, code, building, and judicial agencies, may be resistant to taking on what they view as new responsibilities.

Staffing utilized: It is estimated that approximately 1 FTE was needed over 12-18 months to conduct background research on other state and local lead laws; draft the legislation; build political support for the new law (including meetings with the Mayor and housing officials); and see it through the legislative process. The director of the CLPPP and a lawyer in the Law Department were the two primary staff who worked on the legislation. The staffing pattern to implement this local disclosure law has not been determined.

# Other resources utilized: The program conducted research to review other state and local lead poisoning prevention laws and to survey HUD grantees to learn how they handle properties. The program used the Wisconsin law as a model.

# Factors essential to implementation: Political support for the law was essential to its passage. In this case, the Mayor's support of the ordinance assured the support of the Housing Director and Housing Commissioner, both of whom are effective and influential.

# Limitations/challenges/problems encountered: The actual implementation and enforcement of the law will depend on having adequate resources and effective communication and coordination between the health and housing departments.

Magnitude of Impact/Potential Impact: There are 178,000 homes in Cleveland built before 1978 that will be affected by the disclosure law.
Of those, it is estimated that 120,000 (roughly two-thirds) contain lead hazards and would be candidates for enforcement.

# Contacts for Specific Information

# CERTIFY LEAD SAMPLING TECHNICIANS

# CRITICAL ELEMENTS
Staff requirements: States that already have EPA-authorized certification programs in place for lead abatement workers and supervisors, risk assessors, lead inspectors, and project designers will be able to add approval of certifications for lead sampling technicians (LSTs) with a nominal amount of staffing.

# Other resource requirements: Training materials, testing material samples for practice in lead sampling.

# Institutional capacity required: In some states, the laws establishing EPA-authorized certification programs may need to be amended to accommodate LST certification. Elsewhere, agencies administering EPA-authorized programs will only need to promulgate regulations detailing LST certification requirements. Accredited training providers may need to develop LST training programs, but states should be able to approve their plan to use the EPA model course.

Cost considerations: Once an LST certification program is underway, the increased availability of lead dust testing will bring the cost of that service down significantly.

# Timing issues: Assuming the statutory authority to certify LSTs is in place, as much as a year may be required to adopt regulations. Individuals typically are certified as LSTs for one or two years and can easily extend their certification through renewal.

# Feasibility of Implementation: High. Certifying LSTs as a free-standing discipline is readily achievable in states with EPA-authorized certification programs. In states not currently authorized by EPA to administer certification, sampling technicians who are certified by other states can perform non-abatement clearance in accordance with HUD regulations.

# POTENTIAL OBSTACLES/BARRIERS
Confusion or perceived competitive interests may interfere with consideration of certifying LSTs. The benefits of diversifying and expanding capacity need to be communicated.

# ADDITIONAL RESOURCES
N/A

# ILLUSTRATION OF STRATEGY IN PRACTICE
In 2001, the State of Vermont promulgated regulations that permit the licensing of lead sampling technicians, completing a process that took roughly one year. Candidates are required to attend a five-hour training course designed to give them hands-on experience in conducting visual assessments and taking dust wipe samples, as well as in sample submission, interpretation of lab results, and other skills. To obtain a license, candidates must pass an exam at the end of the course. Lead sampling technician licenses must be renewed annually. For technicians working for private firms, the license fee is $150 per year; however, public employees and employees of nonprofit organizations that are not working commercially can have the fee waived. Licenses or certifications obtained in other states can also be used in Vermont. Lead sampling technicians in Vermont are allowed to perform a well-defined set of duties: they can conduct clearance testing following interim controls, renovations, remodeling, and ongoing maintenance. Sampling technicians cannot perform clearance testing after an abatement project or conduct random dust wipe sampling in multifamily properties.

# Jurisdiction or Target Area: Vermont

# ENSURE LEAD SAFETY IN LICENSED CHILD CARE PROGRAMS

DESCRIPTION OF THE STRATEGY
Protecting children in child care settings is an essential complement to preventing exposure in the home environment.
Requiring property owners of child care programs to certify annually that the program used essential maintenance practices during the previous year will prevent the occurrence of lead hazards. This certification is required for issuance or renewal of the program's child care license and must be filed with the program's insurance carrier.

# BENEFITS
Immediate/Direct Results: Once implemented, the program should result in direct and substantial reductions in lead hazards and deteriorated paint at child care programs covered by the regulations.

Public Health Benefits: If a child care facility has lead hazards, children served by the program are at risk of becoming lead poisoned. In addition, since these children may not be considered at high risk for lead poisoning, they may not be identified under the targeted screening programs used in many states. Reducing lead hazards in child care programs will benefit all of the children who use the program's services.

Other Indirect/Collateral Benefits: The program should raise awareness of lead hazards among the families that use the child care program as the program cleans up problems or announces that it has potential problems under control.

# SCOPE OF POTENTIAL IMPACT
There are 100,000 licensed child care programs nationally serving children under six years old. According to the First National Environmental Health Survey of Child Care Programs, 14 percent of licensed child care programs in the United States have significant lead-based paint hazards, primarily deteriorated lead-based paint; 470,000 children attend these programs. For programs in buildings built before 1960, the rate is 26 percent. For programs where the majority of the children are African American, the rate is 30 percent.

# CRITICAL ELEMENTS
Staff requirements: The program will take at least one year to develop, including adoption of laws or rules, development of coalitions, and production of outreach materials. It will take approximately 0.2 FTE to prepare the program, educate child care providers, and manage the program for the first two to three years. In addition, inspectors normally conducting the program inspections will need to be trained to address the issue and integrate compliance monitoring for lead into their workload. The additional inspection burden should be minimal, about 15 minutes per site visit.

Other resource requirements: It may help improve compliance if some inspectors are trained and certified or licensed to conduct a clearance examination. If inspectors will be expected to take dust wipe samples, they will need about $50 per facility for lab analysis of samples. While not essential, statutory authority such as Vermont's will make it easier. The programs will also need access to contractors and maintenance workers trained in lead-safe work practices to ensure that essential maintenance practices for lead safety are properly done. Some inspection staff may need to be qualified as lead sampling technicians to enhance compliance.

# Cost considerations: Child care programs often operate on a tight budget. Many programs lack the resources to remedy lead hazards. Unless resources are provided, programs may be confronted with closure. The programs most at risk for lead hazards are the ones most likely to need the resources.

Timing issues: It will take approximately one year to establish the program and build support for it. Full implementation will usually take two more years.
Child care programs are busiest, and therefore unavailable, during August and September, when children return to school and enrollment adjusts to the changes.

# Feasibility of Implementation: Moderate. The program is feasible to implement in jurisdictions with lead hazard control resources to devote to child care facilities, but it will take the support of agency and political leadership since additional requirements will be imposed on child care programs.

# POTENTIAL OBSTACLES/BARRIERS
Management support is essential to overcoming programs' concerns and to managing staff reluctance to expand their inspection role and respond to program concerns. Communities need to be prepared to identify potential resources to address lead hazards and provide technical assistance to help programs obtain and use those resources.

# ADDITIONAL RESOURCES

# ILLUSTRATION OF STRATEGY IN PRACTICE
In 1996, Vermont adopted a law requiring all licensed child care programs, including those operated as "Family Day Care Homes," to perform annual Essential Maintenance Practices (EMPs). The programs certify that they have completed essential maintenance practices to reduce lead hazards during the previous year.

# POTENTIAL OBSTACLES/BARRIERS
Some landlords and realtors may object to the use of public resources to benefit owners of lead-safe properties and to disadvantage property owners not on the list (or who would have to undergo costly renovations and repairs to qualify). However, public health concerns should outweigh these arguments.

Other resources utilized: HUD Lead Hazard Control funds were used for registry start-up.

Factors essential to implementation: Factors essential to implementation of the lead-safe housing registry in Montgomery County included the availability of HUD grant funds and the cooperation of the cities of Kettering and Dayton, as well as assistance from a variety of other parties, including the Sunrise Center, the CityWide Development Corporation, and the Center for Healthy Communities.

Limitations/challenges/problems encountered: There were no significant limitations or challenges encountered in establishing the Montgomery County Lead-Safe Housing Registry.

Magnitude of Impact/Potential Impact: The Montgomery County registry currently lists 102 owner-occupied housing units and 56 rental units as lead-safe.

# Potential for replication: Cities and counties that receive state or federal grant funds can easily establish a lead-safe housing registry as part of their overall programs. Eleven counties in California and the City of Long Beach have established similar registries.

# ILLUSTRATION #2 OF STRATEGY IN PRACTICE
The Lead Safe Housing Registry in Maryland is a product of the Coalition to End Childhood Lead Poisoning, a nonprofit organization headquartered in Baltimore. The registry is a statewide list of currently available rental properties that, according to the records of the Maryland Department of the Environment, are in compliance with state and federal lead safety standards. Properties on the registry are designated as having undergone a full risk reduction or as being lead-safe or lead-free. The list is unique in that it shows only properties currently available for rent, along with the type of unit (apartment, townhouse, etc.), some of the amenities, the amount of the security deposit, the total rent per month, and whether the unit is eligible for subsidy under HUD's Section 8 Housing Choice Voucher program.
In addition, some of the properties listed on the housing registry are considered affordable housing.

# Jurisdiction or Target Area: Maryland

# Primary Actor: Coalition to End Childhood Lead Poisoning

Secondary Actors: N/A

Staffing utilized: The Coalition did a lot of preliminary groundwork in establishing Maryland's lead-safe housing registry. 1-2 FTEs were needed temporarily for this process. Maintaining and updating the list on a biweekly basis requires 0.25 FTE.

# Other resources utilized: N/A

Factors essential to implementation: The initial and ongoing cooperation of the state of Maryland has been essential to the Coalition in implementing this strategy.

# Limitations/challenges/problems encountered: The Coalition has encountered several challenges in making its registry as complete as possible. The State of Maryland is prohibited by law from making public an inventory of properties owned by a specific landlord or leasing company for any purpose, so the Coalition must list units on an individual basis. Also, although the Coalition attempts to provide a large selection of lead-safe affordable housing, these properties are in short supply. Developing the housing registry for the less urban sections of Maryland (i.e., outside the Baltimore metro area and the Washington, DC suburbs) is another challenge for the Coalition.

Magnitude of Impact/Potential Impact: The impact of this registry is statewide. Properties from every county can be included on the registry. Because the registry can include an unlimited number of affordable housing units, it can have positive impacts on low-income families in Maryland.

# ILLUSTRATION #3 OF STRATEGY IN PRACTICE
The Wisconsin Lead-Free/Lead-Safe Registry is a listing of houses, apartments, day care facilities, and other buildings that meet the state's lead-free or lead-safe property standard. The lead-free standard is met when a property does not contain lead-based paint. A lead-safe property is one that does not contain lead hazards, such as peeling, chipping, or flaking lead-based paint. A property owner may apply to be added to the registry by obtaining a lead-free or lead-safe certificate, which is issued following an inspection by a certified lead inspector or risk assessor. Lead-safe certificates are valid for a set period of time as determined by the Wisconsin Department of Health and Family Services (DHFS); lead-free certificates do not expire. The Lead-Free/Lead-Safe Registry is posted online; the .pdf file is updated whenever a significant number of properties have been added. The registry is organized by county and lists the address of the property, whether the property is lead-free or lead-safe, and contact information for the property owner or the owner's representative. DHFS is working to make the information available in an interactive format through the Wisconsin Asbestos Lead Database Online (WALDO). While no definite timeline has been set, ultimately the database will be located at /.

# Jurisdiction or Target Area: Wisconsin

# Other resources utilized: N/A

Factors essential to implementation: The policy requiring DHFS access to addresses of lead-safe/lead-free properties and the availability of these addresses to the public has been crucial to the success and timeliness of the registry.

Limitations/challenges/problems encountered: Publishing the WALDO database online has been the main challenge for DHFS. While an online version currently exists to collect input from property owners, it is too complicated and cumbersome for display and interactive use by website visitors.
DHFS is currently seeking funding to make the database more user-friendly and fully available to the public.

# MAKE LEAD HAZARDS A VIOLATION OF THE HOUSING OR HEALTH CODE

DESCRIPTION OF THE STRATEGY
In order to provide the clearest legal basis for code officials to confront lead hazards, local and state codes should state explicitly that deteriorated lead-based paint and dangerous levels of lead in dust and bare soil constitute violations of the housing or health code. Specifically referencing lead hazards in the housing or health code will alert enforcement officials and property owners alike that such hazards constitute code violations and must be corrected. The code can explicitly incorporate EPA's national standards for dangerous levels of lead in paint, dust, and soil.

# BENEFITS
Immediate/Direct Results: Enforcement officials have the authority to mandate repair or abatement and cite property owners who fail to comply.

Public Health Benefits: Children are protected from exposure because hazards are addressed on a pre-emptive basis.

Other Indirect/Collateral Benefits: With the prospect of enforcement and fines, some property owners may be motivated to repair their property before problems occur.

# SCOPE OF POTENTIAL IMPACT

# CRITICAL ELEMENTS
Staff requirements: Since adding lead hazards supplements existing code enforcement programs' authority, no additional staffing would be needed.

# Other resource requirements: N/A

Institutional capacity required: The initial requirement is local or state legislation that names deteriorated lead-based paint and dangerous levels of lead in dust and bare soil as code violations. Implementation requires training for code staff in the identification of lead hazards and certification to become lead sampling technicians, lead-based paint inspectors, or risk assessors.

Cost considerations: None identified.

# Timing issues: None

Feasibility of Implementation: High. Adding lead hazards to the housing code is not difficult to implement.

# POTENTIAL OBSTACLES/BARRIERS
The strategy has limited usefulness if local jurisdictions do not have the budget or staff to investigate and enforce violations.

# ADDITIONAL RESOURCES

# ILLUSTRATION OF STRATEGY IN PRACTICE
The Town of Manchester's Property Maintenance Code requires that interior and exterior lead-based paint either be maintained in a condition free from peeling, chipping, and flaking or be removed or covered in an appropriate manner. Cases involving lead-based paint violations are referred to the health and building departments to pursue compliance with state and federal regulations. If a child under six years of age resides in a property with deteriorated, flaking, or loose paint conditions, dust wipe samples are collected. If lab analysis results reveal lead hazards, repairs are ordered and the property owner is referred to the Lead Abatement Project, which may provide financial support to complete the repairs. Participants in the program are required to obtain lead-safe work practices training.

# Jurisdiction or Target Area: Manchester, CT

Primary Actor: Department of Health, Lead Abatement Project, in conjunction with the town's Code Enforcement Unit.

# Secondary Actor(s): N/A

Staffing utilized: Only 0.2 FTE is available to address Property Maintenance Code complaints.
One full-time Property Maintenance Inspector, with support staff, would be needed to proactively address lead hazards in a town the size of Manchester.

# Other resources utilized: N/A

Factors essential to implementation: Strong partnership with a childhood lead poisoning prevention program and enough staff to implement the property maintenance code.

Limitations/challenges/problems encountered: Generally, Code Department personnel focus primarily on new construction and only react to property maintenance complaints, so there is a need for ongoing education and advocacy about lead hazards in older properties. Nonetheless, a partnership between the building inspectors and the Lead Abatement Project has made a difference.

Magnitude of Impact/Potential Impact: By making lead hazards part of the Property Maintenance Code, Manchester has institutionalized the importance of recognizing and addressing them. This is an essential step in eradicating lead poisoning, particularly in an area where 93 percent of the housing is at risk for lead hazards.

# Potential for replication: High. The housing code provision is not difficult to implement, but to reach its full potential impact, the jurisdiction should have sufficient resources for code inspection and enforcement.

# Contacts for Specific Information

# NOTIFY ALL RESIDENTS IN A BUILDING FOUND TO CONTAIN LEAD HAZARDS

DESCRIPTION OF THE STRATEGY
The presence of lead hazards in one unit of a multi-family building is a strong indication that other units in the building also contain hazards. Through statutes or code, hazard assessment staff can be given the authority to notify, or rental property owners can be required to notify, all building residents of any evaluation, inspection, other hazard determination, hazard reduction activities, or clearance testing performed in the building. By putting all occupants on notice when hazards are identified, residents can take steps to protect their children from lead poisoning.

# BENEFITS
Immediate/Direct Results: Occupants will become aware of existing lead hazards and may be motivated to seek an assessment or corrective action in their own unit. As a result, other hazards in the same building will be identified and remediated before more children are poisoned.

Public Health Benefits: Expand awareness and education of lead hazards among residents.

Other Indirect/Collateral Benefits: Notification of all tenants provides an opportunity for community building among residents.

# SCOPE OF POTENTIAL IMPACT
Statewide; City- or County-Wide

# PRIMARY ACTORS
Code or Housing Inspection Agency

# KEY PARTNERS
Tenants

# CRITICAL ELEMENTS
Staff requirements: Approximately 0.05 FTE.

# Other resource requirements: N/A

Institutional capacity required: Hazard assessment personnel must have the authority to notify all residents when a unit located in the same structure is cited for lead hazards.

# Cost considerations: None listed

# Timing issues: None

Feasibility of Implementation: Very high. This is a simple, low-cost education and outreach tool.

# POTENTIAL OBSTACLES/BARRIERS
Non-English-speaking tenants need notice and educational materials in the appropriate language.

# ILLUSTRATION OF STRATEGY IN PRACTICE
San Francisco's Health Code gives inspectors the authority to notify all residents of a building where an investigation documents lead hazards in any unit in that building. When an inspection reveals lead hazards, the environmental health inspector gives the property owner a report highlighting the hazard and where it is located and instructs the property owner to copy and distribute the notice to all tenants in the building.
In order to ensure all tenants actually receive the notice, the department also distributes copies, along with lead hazard educational materials. The materials are available in Chinese, Spanish, and English.

# ADDITIONAL RESOURCES

# Jurisdiction or Target Area: San Francisco, CA

# Secondary Actor(s): N/A

Staffing utilized: 0.05 FTE to prepare the photocopies, assemble the literature, and distribute it to tenants.

Other resources utilized: A photocopier and educational literature.

# Factors essential to implementation: The Health Code gives inspectors the authority to notify all tenants.

Limitations/challenges/problems encountered: The challenge involves distributing the flyers and ensuring that materials are provided in the appropriate language for the tenants.

# Magnitude of Impact/Potential Impact: The Children's Environmental Health Promotion Program has not yet collected data on this strategy's impact, but inspectors consider it a strategy complementary to their overall environmental health promotion. Distributing notice to all tenants builds awareness and reinforces the seriousness of San Francisco's lead problem.

Potential for replication: Very high. This strategy is easily replicated with very little cost incurred.

# Contact for Specific Information

# PROTECT OCCUPANTS DURING HAZARD REMEDIATION AND RENOVATION WORK

DESCRIPTION OF THE STRATEGY
Generally, occupants of homes that contain lead-based paint should be temporarily relocated to lead-safe housing before the start of lead hazard control work, or renovation or remodeling work that disturbs more than a small area of lead-based paint, and they should not return until the work is completed and the work site has been vacuum cleaned and wet washed and passed clearance. Relocation is not necessary if work area containment is practiced and either only a few square feet of paint will be disturbed or the work can be completed in a few days while occupants stay out of the work area. Temporary relocation can be carried out most efficiently and costs minimized by (a) ensuring that paint-disturbing work is completed as quickly as possible; (b) advising occupants fully in writing of the necessity of not returning until the dwelling has been thoroughly cleaned; and (c) arranging in advance for the protection and security of occupants' belongings and for the transportation needs of schoolchildren.

Cost considerations: … incidental costs. Where private sector accommodations must be used, relocation costs can be minimized if the agency can establish a public-private partnership with hotels or motels to set aside low-cost rooms for temporary relocation. In addition, it is conceivable that a temporary relocation requirement will result in rental property owners passing the cost to tenants in the form of higher rent.

# Timing issues: N/A

Feasibility of Implementation: High. Encouraging temporary relocation to homes of the occupant's friends or relatives may be one practical way of minimizing costs and ensuring successful implementation. Otherwise, feasibility will depend upon the availability of funds to implement a program.

# POTENTIAL OBSTACLES/BARRIERS
It may be very difficult to impose temporary relocation requirements on landlords without the availability of some type of cost-sharing.
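The relocation rule just described is essentially a small decision procedure, sketched below in Python. The function name, parameters, and numeric thresholds (5 square feet, 5 days) are illustrative assumptions chosen to echo the general rule above and the HUD policy in the illustration that follows; they are not regulatory values.

```python
def relocation_required(
    paint_area_sq_ft: float,
    work_days: int,
    containment_in_place: bool,
    occupants_out_of_work_area: bool,
    exterior_only: bool = False,
) -> bool:
    """Sketch of the occupant-relocation rule described above.

    Relocation can be waived when work-area containment is practiced and
    either only a few square feet of paint will be disturbed or the job can
    be finished within a few days with occupants kept out of the work area.
    Exterior-only work is also waived, following the HUD illustration below.
    Thresholds here are illustrative assumptions, not regulatory values.
    """
    if exterior_only:
        return False
    if not containment_in_place:
        return True
    small_job = paint_area_sq_ft <= 5
    short_job = work_days <= 5 and occupants_out_of_work_area
    return not (small_job or short_job)


# An interior repaint disturbing 40 sq ft over two weeks triggers relocation:
print(relocation_required(paint_area_sq_ft=40, work_days=14,
                          containment_in_place=True,
                          occupants_out_of_work_area=True))  # True
```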
# ADDITIONAL RESOURCES
N/A

# ILLUSTRATION OF STRATEGY IN PRACTICE
This strategy is HUD's temporary relocation policy for federally assisted housing rehabilitation and renovation work. The policy provides for temporary relocation of residents to lead-safe housing during the work period, but it does not require relocation if certain requirements are met. The property owner does not have to relocate the unit's occupants if only a small area of paint will be disturbed; if the work can be completed in one 8-hour work day, or within five calendar days provided that occupants are kept out of the work area, warning signs are placed in each room where work is occurring, and the area is thoroughly cleaned after work is completed; or if only outside work is involved.

Other resources utilized: Some agencies allow temporary relocation costs as an eligible expense for housing rehab programs.

# Jurisdiction or Target Area: Federally assisted housing nationwide

Factors essential to implementation: Coordination and cooperation among occupants, property owners, and contractors involved in the rehabilitation and renovation work. It also requires ongoing inspection and enforcement by HUD and local housing agencies.

Limitations/challenges/problems encountered: HUD exempted elderly homeowners from relocation requirements since local agencies reported that this population did not want to be relocated and was considered at low risk.

# REQUIRE RENTAL PROPERTY OWNERS TO INFORM TENANTS HOW TO REPORT DETERIORATING PAINT

Other resource requirements: Additional staff dedicated to ensuring landlord compliance probably could be funded mostly or entirely from penalties assessed against non-complying landlords.

Institutional capacity required: State or local legislation would need to be enacted to create the notice requirement and enforcement authority to ensure compliance.

Cost considerations: This requirement seems cost-effective no matter how passively or aggressively it is enforced. Without enforcement, some compliance will occur at virtually no cost. Additional resources spent on landlord outreach and education and/or enforcement should increase compliance substantially. Penalties against non-compliant landlords would increase in proportion to resources spent on enforcement and would cover, or at least offset, the costs of enforcement.

# Timing issues: No seasonal or cyclical considerations. The timeline to implement depends on the legislative process.

# Feasibility of Implementation: Moderate. The existence of this policy in two states and throughout most federally assisted housing demonstrates its feasibility.

# POTENTIAL OBSTACLES/BARRIERS
The impact of this policy is directly related to the degree to which it is promoted and enforced among landlords. Some resources would have to be committed initially in order to demonstrate the cost effectiveness of promotion and enforcement. This is most likely to happen if policy makers are shown or convinced that enforcement efforts can pay for themselves.

# ADDITIONAL RESOURCES
N/A

# ILLUSTRATION OF STRATEGY IN PRACTICE
In 1996, the Vermont Legislature enacted the Essential Maintenance Practices law, which includes a requirement that owners of pre-1978 rental housing and child care facilities "post, in a prominent place … a notice to occupants emphasizing the importance of promptly reporting deteriorated paint to the owner or to the owner's agent. The notice shall include the name, address, and telephone number of the owner or the owner's agent."
The law also requires owners of pre-1978 rental housing to annually submit an Affidavit of Performance attesting to compliance with this and the other requirements to the Vermont Department of Health (VDH).

Limitations/challenges/problems encountered: Although VDH is charged with implementing the law and keeping records, no money is appropriated for these activities or for enforcement. VDH does not have the resources to conduct quality control on affidavits to verify they are correctly completed or to physically check dwellings to confirm compliance. Currently, VDH attempts to resolve complaints informally but does not use its statutory power to penalize non-compliant landlords. VDH has never issued a health order to address violations of the law, even after an infraction caused a child to be lead poisoned. Failure to prosecute even the most egregious cases means there are essentially no negative consequences to ignoring the law.

Magnitude of Impact/Potential Impact: Approximately 30-40 percent of Vermont's pre-1978 rental housing units have affidavits on file at VDH that claim compliance. According to officials, the lack of any comprehensive listing of rental properties hinders the agency's ability to get a precise picture of compliance. However, Vermont is poised to have a larger impact in the future, and other jurisdictions with more staff availability could be even more effective.

# REQUIRE SAFE WORK PRACTICES DURING REMODELING, REPAIR, AND PAINTING

DESCRIPTION OF THE STRATEGY
Banning unsafe work practices and requiring basic safeguards for remodeling and paint repair work are key to preventing childhood lead poisoning in older housing. Banning unsafe methods of removing paint will sharply reduce the amount of lead-contaminated dust that would otherwise be generated. The unsafe methods that should be prohibited include: dry sanding or scraping; open flame burning; operating a heat gun above 1,100°F; machine sanding without a HEPA attachment; and stripping in poorly ventilated areas using volatile strippers on surfaces containing lead-based paint. Requiring precautions such as work area containment and careful post-work cleaning will prevent the dispersal of any lead-contaminated dust that might be generated. When coupled with occupant protection activities, adherence to lead-safe work practices for routine remodeling and repair work can help prevent children's exposure to lead dust hazards.

# BENEFITS
Immediate/Direct Results: Homes that are being remodeled, repaired, or repainted are less likely to pose lead dust hazards if contractors refrain from unsafe work methods that generate lead dust and follow basic precautions while performing work that disturbs paint in older homes.

Public Health Benefits: Following lead-safe work practices will materially reduce risks to children living in older homes that are undergoing repairs or renovation. In many areas, such as New England, up to 20% of lead poisoning cases can be attributed to unsafe remodeling or renovation activities.

Other Indirect/Collateral Benefits: A requirement for using lead-safe work practices would also reduce exposure of workers, and potentially their children, to dangerous levels of lead dust.

# SCOPE OF POTENTIAL IMPACT
A requirement for using lead-safe work practices could be implemented statewide or at the county or city level. Similar requirements already apply to all remodeling, rehab, and paint repair projects in HUD-assisted housing and properties rehabilitated using HUD funds.
Cost Considerations: Because some banned work practices, such as machine sanding, reduce labor time in surface preparation, painting contractors and their clients would bear marginally increased costs.

Timing Issues: Developing and implementing systems to train remodeling contractors, painters, and maintenance workers will take time.

Feasibility of Implementation: Moderate. Training to build lead safety capacity can start before requirements are in place. Health department leadership will accelerate acceptance and enactment of lead-safe work practices requirements. Substantial support from community and advocacy organizations will help. Property owner and contractor associations should be asked to participate in developing the statute, ordinance, or code amendment to offset their likely opposition. Compliance will grow over time, because most contractors are law-abiding or interested in avoiding legal liability and are responsive to consumer awareness and demand for lead safety. Success is more likely in areas with a relatively high incidence of lead poisoning and broad public awareness.

# POTENTIAL OBSTACLES/BARRIERS
The main obstacles are likely to be the opposition of property owners and contractors to enactment of requirements for lead-safe work practices.

# ADDITIONAL RESOURCES
N/A

# ILLUSTRATION OF STRATEGY IN PRACTICE
In 2001, the City of New Orleans enacted an ordinance that prohibits unsafe work practices during work on metal structures and buildings built before 1978. It requires that contaminated debris be contained within barriers and that visible paint chips be cleaned up after completion of work. It also requires that tenants, neighbors, workers, and government agencies be notified that work on interior and exterior painted surfaces will take place and forbids retaliatory evictions. Enforcement is mostly by complaint and is more effective for work on exteriors that are evident to neighbors. The city is authorized to issue notices of violation, to require remediation of any lead-based paint hazards generated by unsafe work, and to require a risk assessment before resumption of work.

Factors essential to implementation: A coalition of physicians and community advocates worked with the city's administration to develop the ordinance, which was passed in the wake of a survey finding that 25 percent of children screened at city-operated clinics had elevated blood lead levels. City departments and advocates must ensure wide publicity and education so that tenants and neighbors will report violations and so that violations will be vigorously pursued.

# Jurisdiction or Target Area: New Orleans, LA

# Limitations/challenges/problems encountered: The ordinance only applies to lead-based paint, which allows painters and owners to submit an unleaded paint chip to circumvent all requirements.

# CAPITALIZE ON HOME NURSING VISITS TO TARGET PREVENTION SERVICES

DESCRIPTION OF THE STRATEGY
Visiting nurse programs offer a unique opportunity to efficiently reach pregnant women and new mothers in high-risk communities. Traditional home nursing visits can be enhanced to visually assess hazards, collect dust samples, inform occupants and rental property owners of hazards, demonstrate specialized cleaning methods for lead dust, and discuss lead poisoning risks.
Further, and most critically, the nurses can provide referrals to available lead hazard control and other resources.

# BENEFITS
Immediate/Direct Results: Education and referrals provided during home nursing visits directly benefit the families served. The physical presence of nurses in the homes allows them to identify conditions that may warrant emergency interventions, even outside the context of formal policies for such action.

Public Health Benefits: Lead safety improvements triggered by nurse visits benefit siblings and future occupants. In addition, home nurse visits provide a mechanism for targeting available lead hazard assessment and/or control services to high-risk families who can benefit immediately. Over time, cumulative efforts will help improve the lead safety of the housing stock.

Other Indirect/Collateral Benefits: Word-of-mouth among new mothers may reinforce efforts to raise awareness among families in the community. Nurse referrals may help generate referrals to other community programs, such as weatherization or lead hazard control, thereby reducing marketing efforts for such programs.

# SCOPE OF POTENTIAL IMPACT
City- or County-Wide

# PRIMARY ACTORS
Health Department

# KEY PARTNERS

# CRITICAL ELEMENTS
Staff requirements: Staffing needs depend upon whether the lead activities are provided as an adjunct to home nurse visits that are already being made or are an entirely new service, and on the scope of services provided.

Other resource requirements: If nurses will be collecting dust samples or demonstrating lead-safe cleaning techniques, they will need the appropriate tools (such as wipes for dust sampling, a HEPA vacuum, etc.) and protocols, along with any necessary training.

Institutional capacity required: Management support is the most likely element required for continued support of a staff-intensive effort.

# Cost considerations: To maintain the visiting nurse program's existing coverage of its target population caseload, the incremental cost of adding lead safety to the visiting nurse protocol will need to be reimbursed so that the staff can be expanded accordingly.

Timing issues: Can be implemented at any time.

# Feasibility of Implementation: High. As demonstrated by the adoption of this strategy in multiple jurisdictions, the strategy is feasible in many variations.

# POTENTIAL OBSTACLES/BARRIERS
If programs seek to add lead hazard education to previously planned nursing visits, they may run the risk of overwhelming both parents and the nurses by providing too much information at one time, ultimately making the sessions less effective. Nurses and families may be frustrated by the lack of meaningful lead hazard control resources available in the community. Pregnant women may resist interventions that could lead to uncomfortable relationships with landlords or even evictions; the program should develop a contingency strategy for landlord retaliation (with assistance from a legal aid agency and/or the code enforcement agency) and explain it to patients.

# ADDITIONAL RESOURCES
N/A

# ILLUSTRATION #1 OF STRATEGY IN PRACTICE
With the support of a $100,000 CDC primary prevention grant, WI CLPPP developed and pilot-tested a nurse home-visitation program in two high-risk Wisconsin communities (Racine and Sheboygan). Although the one-time CDC grant has ended, the program has since evolved into a different but sustainable format, with 34 programs run by local health departments throughout the state.
The initial pilot program, which targeted low-income, primarily Medicaid-eligible, pregnant women through the Prenatal Care Coordination Program (PNCC), provided prenatal lead education and referrals, environmental assessments, and feedback to property owners. Fourteen PNCC nurses were trained as lead sampling technicians and equipped with HEPA vacuums. During initial prenatal home visits, a nurse provided information about childhood lead poisoning and potential lead hazards in the home environment. The nurse also conducted a visual assessment of the home, collected pre- and post-cleaning lead dust samples from floors and windows, demonstrated lead dust reduction measures, and provided cleaning supplies to the parent. During a second visit four to six weeks later, nurses reinforced messages and collected lead dust samples from the same locations to assess the effectiveness of measures that were taken to reduce lead dust. Pre- and post-cleaning results were provided to clients. Finally, property owners were informed in writing of the results of the dust sampling, given a copy of the HUD Lead Paint Safety Field Guide, and encouraged to repair deteriorated painted surfaces. Families and property owners were also encouraged to enroll their properties in the HUD Lead Hazard Reduction Program if appropriate. After CDC funding ended, Wisconsin revised the program to reach a broader target audience by adding an additional lead-specific home visit to existing prenatal and newborn visitation programs. Local health departments (LHDs) now choose the level of intensity of services provided during their lead visits, selecting various combinations of three options: lead education, environmental assessments, and feedback to property owners. Due to capacity and resource constraints, not all LHDs are doing dust sampling; some are using LeadCheck swabs. The state is continuing to support local efforts, currently devoting about 15 percent of two FTEs to support the program and budgeting about $35,000 per year. LHDs estimate their costs at about $3,000 per year. To help build capacity and support dust sampling, Wisconsin is training an additional 23 nurses as lead sampling technicians and plans to offer free dust sample analysis through the state lab.

# Jurisdiction or Target Area: Wisconsin

# Other resources utilized: N/A

Factors essential to implementation: The willingness of the visiting nurse program to train nurses as LSTs, and the availability of lab resources for dust wipe analysis, were essential to implementing this strategy.

Limitations/challenges/problems encountered: Initial concerns about creating conflict by contacting property owners proved to be mostly unfounded. In fact, the program has been so popular that nurses in one locality fought successfully to protect it from threatened budget cuts.

Magnitude of Impact/Potential Impact: During the nine-month pilot, approximately 100 families received services. WI CLPPP estimates that some level of service has been provided to about 500 families statewide this year, and expects the number to reach 1,200 by the end of the year. The program has been well received statewide by nurses and families.

Potential for replication: High. Widely replicable in jurisdictions with visiting nurse programs.

# ILLUSTRATION #2 OF STRATEGY IN PRACTICE
Through partnership with the Weatherization program, the KYBLS project aims to literally get a "foot in the door" by delivering an energy-efficient, lead-safe environment at no cost and with minimal involvement of the property owner.
Weatherization program policies do not require the permission of the property owner to perform an initial energy assessment. The property owner's signature is only needed in order to perform energy conservation treatments (which involve no cost to the property owner and no threat of legal penalty). CLPPP sought to use these facets of the Weatherization Program to initiate contact with property owners by offering an opportunity to improve their properties for free, easing the concerns of women who feared conflict with, or eviction by, their landlord, and following up regarding the need to repair lead hazards.

# Contacts for Specific Information

Following the birth of each participant's child, the program uses RI health department (KIDSNET) and CLPPP databases to identify and track the child up to the first blood lead level screen, to serve as another method of evaluation for the project. The educational impacts of the KYBLS project continue to be assessed through an analysis of both the pre- and post-educational surveys administered to participants at the initial and final stages of their enrollment, as well as the pre- and post-dust wipe samples taken by the FOP workers at the homes of KYBLS enrollees. RI CLPPP has budgeted $100,000 per year for KYBLS.

# Jurisdiction or Target Area: Rhode Island

# Other resources utilized: N/A

Factors essential to implementation: The essential factor is the cooperation between the health department and the weatherization agency.

Limitations/challenges/problems encountered: Unfortunately, implementation has proven to be very challenging, since a waiting list for weatherization services delays access to the property improvements to be provided at no cost, and property owners are not very enthusiastic about financing lead hazard controls via the loans that are offered. Many of the referred women refused services, could not be located, or did not meet eligibility requirements. Because the program began as a small pilot program, to date only a handful of properties have been successfully enrolled into lead hazard control or weatherization programs.

# CONNECT MEDICAID DATA AND STATEWIDE SURVEILLANCE DATABASES

DESCRIPTION OF THE STRATEGY
By linking the state's Medicaid data electronically with statewide lead poisoning surveillance databases, states can determine testing rates for children served by Medicaid and identify children in the Medicaid caseload who have not been screened. Analysis of these data can also be used to identify and target neighborhoods in which numerous Medicaid children have been poisoned in order to direct prevention resources to areas at highest risk. Since 1998, CDC has been encouraging states to make these connections by requiring lead poisoning prevention grantees to have a system for ongoing identification of Medicaid-eligible children in the surveillance system, preferably by performing automated data linkages or matches between surveillance and Medicaid enrollment data sets.

# BENEFITS
Immediate/Direct Results: Data sharing improves the ability of the lead program and state Medicaid agency to systematically target prioritized primary prevention in high-risk neighborhoods.

Public Health Benefits: Data sharing combines information sources and sheds light on high-risk Medicaid populations, which can then be targeted for primary prevention.
Combining information sources also permits agencies to focus elevated blood lead (EBL) screening efforts in neighborhoods where screening is required but not happening and to better monitor both case-identification rates and the actual delivery of lead screening services, including the performance of individual Medicaid managed care plans and medical practices. It can also permit agencies to track follow-up care provided by local health departments and justify Medicaid reimbursement for such services.

Other Indirect/Collateral Benefits: In addition to links between lead surveillance and Medicaid enrollment data, linkages can be established with other systems, such as geographic information system (GIS) coding and other programs' enrollment data. When illuminated by GIS, the matched data can yield clear and persuasive maps presenting risk, screening, or case-identification data for specific neighborhoods. The lead program can share analyses of the resultant combined data (suppressing identifying information) with housing agencies to facilitate targeting of resources for housing rehab and up-front lead hazard control to highest-risk blocks and block groups, or combine it with WIC enrollment data to discover risk relationships that can improve targeting strategies for primary or secondary prevention initiatives.

Institutional capacity required: Information system managers for both agencies must understand the project goals and have top-down support for a joint project. Interagency agreements and even legislation supportive of sharing the data may be required.

# SCOPE OF POTENTIAL IMPACT

# Cost considerations: Net costs depend on the status of the existing data systems. The cost of data matching is an allowable state administrative cost under Medicaid and therefore partially reimbursable by CMS.

Timing issues: Can be implemented whenever systems and support are in place.

# Feasibility of Implementation

# POTENTIAL OBSTACLES/BARRIERS
Poor-quality underlying data is a prohibitive barrier to a successful data sharing project, as any data linkages are only as good as the information being compared. A common obstacle arises if the necessary data sets are housed in different agencies or in different locations, or split between agencies with poor working relationships. There is ample evidence that such obstacles can be overcome in any state by enlisting senior management's early support for the project. Privacy and confidentiality issues make all public agencies anxious about sharing data, and such concerns have been heightened by perceived new requirements associated with the Health Insurance Portability and Accountability Act's Standards for Privacy of Individually Identifiable Health Information (commonly known as the HIPAA Privacy Rule). It is imperative that proponents of data sharing be familiar with the facts about HIPAA as well as applicable state or local privacy policies. Some states have laws that require sharing of such data.

# ADDITIONAL RESOURCES
A number of resources are available to assist states in taking this step.

# ILLUSTRATION OF STRATEGY IN PRACTICE
The Chicago CLPPP regularly matches its lead surveillance database with Medicaid eligibility and billing data (two separate Medicaid databases) to track and analyze screening and EBL rates. Each quarter, CLPPP performs a data match and then translates the results into a standardized report for the Medicaid agency; a minimal sketch of such a match appears below.
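To make the mechanics concrete, here is a minimal, hypothetical sketch of a deterministic surveillance-to-Medicaid match in Python. The record fields and the match key (normalized name plus date of birth) are assumptions for illustration only; production linkages like Chicago's use their agencies' own schemas and typically add probabilistic matching and manual review.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ChildRecord:
    first: str
    last: str
    dob: str  # ISO date, e.g. "2001-04-30"

def match_key(rec: ChildRecord) -> tuple:
    # Deterministic key: normalized name + date of birth (illustrative only).
    return (rec.first.strip().lower(), rec.last.strip().lower(), rec.dob)

def quarterly_match(medicaid_enrollees, surveillance_tests):
    """Return (screening rate, list of untested enrollees for outreach)."""
    tested = {match_key(r) for r in surveillance_tests}
    untested = [r for r in medicaid_enrollees if match_key(r) not in tested]
    rate = (1 - len(untested) / len(medicaid_enrollees)) if medicaid_enrollees else 0.0
    return rate, untested

# Toy extracts standing in for the two agencies' data sets.
enrollees = [ChildRecord("Ana", "Lopez", "2001-04-30"),
             ChildRecord("Sam", "Lee", "2000-11-02")]
tests = [ChildRecord("ana", "LOPEZ ", "2001-04-30")]

rate, untested = quarterly_match(enrollees, tests)
print(f"screening rate: {rate:.0%}")                            # 50%
print("outreach list:", [(r.first, r.last) for r in untested])  # Sam Lee
```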
The report contains aggregate information on the number of Medicaid-enrolled children tested by age, race, address, and other factors of interest, as well as information on blood lead levels by various criteria. It also includes a list of untested children, which is used for direct outreach by CLPPP to encourage testing. In addition, since the data are geo-coded by address, the CLPPP uses the data to validate high-risk areas for HUD funds and other prevention efforts and to generate "good visuals" (i.e., mapped data) that help in mobilizing partnerships for prevention in problem areas.

# Jurisdiction or Target Area: Chicago, IL

# Other resources utilized: N/A

Factors essential to implementation: The city health department successfully negotiated its data-sharing agreement directly with the state Medicaid agency, the Illinois Department of Public Aid. Program staff believe that the key ingredient for a successful project is the existence of a clean blood lead surveillance database, which can be a challenge given that it is constructed by the CLPPP from information provided by the providers who collected the samples and the laboratories that analyzed them. In contrast, Medicaid data tends to be relatively clean. Chicago modeled its approach after two states' successful efforts (NC and WI). Chicago CLPPP staff sought advice and technical assistance from peers in those states, who were gracious about sharing their data matching protocols.

Limitations/problems/challenges: Completing development of the agreement between the agencies was regarded as the biggest challenge to the program.

# CONSOLIDATE AND ANALYZE DATA TO HIGHLIGHT LEAD POISONING "HOT SPOTS"

… with respect to the identity of children in lead surveillance databases and, in some circumstances, address information.

# Cost considerations: There are out-of-pocket costs associated with programming, acquiring data, cleaning databases (if necessary), and licensing software; however, specific costs will vary depending on a number of factors. Purchase of GIS software is now an authorized expenditure for CDC CLPP grant funds.

Timing issues: Can be implemented at any time.

# Feasibility of Implementation: Moderate. A number of programs have already deployed GIS technology successfully to analyze data related to lead poisoning prevention, providing useful models and resources for support, advice, and practical tools.

# POTENTIAL OBSTACLES/BARRIERS
Although "mapping" feels approachable, the concept of Geographic Information Systems (GIS) can be intimidating for those unfamiliar with the software. Programs need to set clear goals for their analyses and decide if a simple and focused approach will accomplish their goals (e.g., the Wisconsin illustration) or if something more complex and comprehensive is appropriate (e.g., the NCHH illustration). A potential barrier that can require considerable staff time to remedy is the quality of the databases. Generally speaking, tax assessor databases are relatively clean, due to their importance to local government finances, while lead surveillance data tends to require cleaning to render it intelligible for GIS programs. It is also possible that very small-scale mapping (e.g., at the block level) of EBL data could trigger privacy concerns, so agencies must have clear policies in place to comply with prevailing privacy requirements.
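As a concrete illustration of the kind of simple, focused GIS analysis mentioned above, the hypothetical Python sketch below bands geocoded areas by their share of pre-1950 housing and tallies EBL cases per band, echoing the three-view approach of the Wisconsin maps described in the illustration that follows. All names and data values are invented.

```python
from collections import defaultdict

def housing_age_band(pct_pre1950: float) -> str:
    # Banding thresholds mirror the Wisconsin maps described below:
    # low (0-30 percent pre-1950 housing), medium (30-60), high (60-100).
    if pct_pre1950 < 30:
        return "low"
    if pct_pre1950 < 60:
        return "medium"
    return "high"

# Toy tract records: (tract id, % pre-1950 housing from the Census,
# EBL cases from the geocoded surveillance extract). Values are invented.
tracts = [("101", 72.0, 14), ("102", 45.0, 5), ("103", 12.0, 1)]

cases_by_band = defaultdict(int)
for tract_id, pct_old_housing, ebl_cases in tracts:
    cases_by_band[housing_age_band(pct_old_housing)] += ebl_cases

print(dict(cases_by_band))  # {'high': 14, 'medium': 5, 'low': 1}
# In a full GIS workflow, these bands become map layers used to target
# lead hazard control and CDBG resources to the highest-risk areas.
```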
# ADDITIONAL RESOURCES

# ILLUSTRATION #1 OF STRATEGY IN PRACTICE
WI CLPPP used GIS software to produce maps demonstrating visually the association between childhood lead poisoning and age of housing. Specifically, Wisconsin developed a series of maps showing both the geographic location of residences of children with elevated blood lead levels (data from the state's blood lead surveillance system) and age of housing (data from the US Census).

# Jurisdiction or Target Area: Wisconsin

# Other resources utilized: N/A

Factors essential to implementation: The program partially credits the success of the maps to careful planning and testing to ensure that the messages were clear for the desired audiences. To this end, WI CLPPP pilot-tested the materials with nurses, health educators, and sanitarians from local health departments' lead program staff. The state is currently exploring the feasibility of offering online access to GIS mapping, through which the public would be able to customize maps of lead data in combination with various other types of data.

Limitations/challenges/problems encountered: None mentioned.

Magnitude of Impact/Potential Impact: Wisconsin created maps with three views for all of its 72 counties and the larger cities, enabling them to show separately the areas with low (0-30 percent pre-1950 housing), medium (30-60 percent), and high (60-100 percent) densities of older housing. The resulting maps were powerful communication tools showing strongly that those with lead poisoning are predominantly found in the areas with the highest proportion of pre-1950 housing. Although the maps were relatively straightforward in the sense that they only mapped one familiar risk factor (age of housing), feedback on the maps has been universally positive, with continuing requests for customization. At least one jurisdiction used them to target properties for HUD lead hazard control funding, and the city-level maps positioned cities for collaboration and communication about the use of Community Development Block Grant (CDBG) funds for prevention. Even private managed care organizations have requested detailed maps for their service areas, as they reportedly do not have GIS capability themselves. An unsolicited feedback letter from an insurance company requesting additional maps commented that they "were a real visual learning experience for our head Pediatrician and, interestingly, … his nurse …"

… training and any applicable credentials to provide lead education services, hazard determination, and lead hazard control services.

Cost considerations: Such intensive service delivery can be expensive: $300-$400 for a risk assessment, $500-$20,000 for lead hazard control, and $150 for a clearance examination. Federal Medicaid policies allow reimbursement for environmental investigation and case management services. However, at present some state Medicaid programs have not begun reimbursing for these services despite explicit federal encouragement to do so. Lead hazard control can be funded by a variety of sources.

# Timing issues: The program could be implemented whenever resources, referral mechanisms, regulatory compliance measures, and service providers have been secured.

# Feasibility of Implementation: Moderate.
This is an ambitious program requiring strong community commitment, significant resources, and ongoing collaboration among disparate entities. Beginning with a pilot program is one means to establish relationships and test systems.

# POTENTIAL OBSTACLES/BARRIERS

This project requires considerable coordination and cooperation among various agencies and entities, including health, housing, and the state Medicaid agency. It would likely be difficult to acquire the resources and support for this type of project where adequate environmental responses are not available to families whose children have even higher blood lead elevations (above 20 µg/dL).

# ADDITIONAL RESOURCES

# Factors essential to implementation: The tenacity of the Get the Lead Out Coalition in promoting the project and the willingness of the project partners to work collaboratively were essential.

Limitations/challenges/problems encountered: This initiative is limited to properties occupied by Medicaid families with young children or pregnant women, and targeted to eleven Connecticut cities with large numbers of Medicaid-enrolled children but limited or no funding for lead hazard control.

# Magnitude of Impact/Potential Impact:

# POTENTIAL OBSTACLES/BARRIERS

The experience of the Massachusetts Department of Public Health (DPH) illustrates some difficulties associated with this strategy. While its regulations authorize the health department to conduct building-wide investigations, the agency does not routinely do so because of concern that generating multiple orders for correction of hazards will divert the owner's resources from addressing the unit that has poisoned a child. As an alternative, DPH notifies all tenants in a building about an inspection, advising that lead was found in one unit, that their unit is likely to contain lead, and that they should talk to the owner or call the state or local board of health if they want an inspection. Upon tenant request, DPH conducts an investigation. Additionally, DPH will investigate other units in those jurisdictions where local financing agencies in Massachusetts will assist owners with abatement of all units in the building at once. Massachusetts is revisiting its strategy for multi-unit buildings as it develops its CDC-required strategic plan.

Limitations/challenges/problems encountered: Securing the resources to pay for environmental investigations has been the biggest challenge. Although not a common problem, inspections occasionally create conflict between landlords and tenants. Although Maine law specifies that a household cannot be evicted due to a child's lead poisoning, Maine is considering strengthening tenant protections against landlord retaliation.

# ADDITIONAL RESOURCES

Magnitude of Impact/Potential Impact: Program staff estimate that 250 multi-family buildings have been investigated after the identification of a lead-poisoned child in one of the units since the 1999 law was enacted.

Potential for replication: High.

# SCREEN HOMES DURING CODE INSPECTION

# DESCRIPTION OF THE STRATEGY

Code inspections prompted by complaints about housing problems such as a roof leak, roaches, or no heat, as well as routine periodic inspections of rental housing units, provide opportunities to screen for lead hazards and peeling paint in the homes of young children. Code enforcement staff can be specially trained to conduct limited checks for lead hazards as a means to trigger additional action.
If the inspection identifies a lead hazard through visual assessment, spot testing, dust testing, or paint testing, the inspector can order the property owner to undertake lead hazard control or lead-safe repair work to bring the unit into compliance with any applicable standards.

# BENEFITS

Immediate/Direct Results: The number of homes checked for lead hazards can be greatly increased at comparatively low cost by integrating basic lead safety checks into other code inspection visits. By checking homes for lead hazards and requiring corrective action consistent with applicable standards, code enforcement programs help reduce the risk of childhood lead poisoning.

Public Health Benefits: In most jurisdictions, the lead poisoning prevention program sends environmental investigators only to the homes of children who have already been lead poisoned. In contrast, code inspectors have the opportunity to enter many homes, and they can identify hazards before a child is exposed and an elevated blood lead level develops. Their code enforcement authority can be used to routinely require lead safety in the highest-risk older properties: those that are subject to tenant complaints about poor maintenance or other health conditions.

Other Indirect/Collateral Benefits: This approach leverages limited public inspection resources to trigger lead hazard assessment and control. Strong enforcement broadens the impact of the code enforcement program, prompting property owners to undertake voluntary measures in other properties and perform preventive maintenance on all rental housing.

# SCOPE OF POTENTIAL IMPACT

# CRITICAL ELEMENTS

Staff requirements: During the start-up phase of a statewide program, approximately 0.75 FTE is needed to establish procedures and build trained capacity. For an established program, 0.3 FTE is needed for program oversight and one FTE for technical assistance to local agencies (code agency and health department). For local agencies, each individual who conducts housing inspections should be trained to perform lead determinations. Depending on the sampling method used, checking for lead hazards will add 10-30 minutes to the typical housing inspection. Staff requirements to meet the workload will depend on the type of determination, which in turn depends on the standard to be enforced (e.g., no peeling or otherwise non-intact paint in any housing, no lead dust hazards). Depending on the extent of lead hazards, the standard to be met, the type of enforcement action, and the local or state court rules, additional time is also required for enforcement steps, including court appearances. Inspectors represent themselves at hearings before clerk magistrates or judges, as they already do for other code enforcement cases. Involvement of public agency attorneys is generally needed only for cases that progress to the criminal complaint stage.

Other resource requirements: Supplies or equipment to check paint; supplies and lab services to check dust.

Institutional capacity required: This strategy requires legal authority to compel compliance with an explicit standard (e.g., no peeling or otherwise non-intact paint in any housing, no plumbing leaks, no lead dust hazards), such as statutory lead safety requirements for housing or a locally adopted property maintenance code. The statutory authority should cover licensing or certification of code inspectors or categorically exempt trained inspectors employed by public agencies from licensing or certification requirements.
Implementation needs include training curricula and, as needed, training providers approved by an accreditation program to teach the curriculum. A continuing partnership between agencies that regulate lead-based paint activities and those that enforce codes will help ensure effective implementation.

Cost considerations: The program will be more effective if resources are available to assist low-income property owners with the cost of lead hazard control and to provide favorable financing terms to others.

Timing issues: Can be implemented whenever infrastructure is in place.

# Feasibility of Implementation: Moderate. In jurisdictions that have a code enforcement apparatus and enforceable lead safety standards, this can be implemented wherever political will is sufficient to support enforcement. Enforcement can include requiring interim lead hazard controls rather than full lead hazard abatement, so as to decrease the costs of compliance when code violations are found.

# POTENTIAL OBSTACLES/BARRIERS

Lack of legal authority and absence of enforcement standards can be insurmountable barriers. Enforcement is not limited to a single brief inspection; it may involve multiple court appearances by the inspector to trigger and complete the legal process of enforcement. Two potential obstacles are the unwillingness of the city/county attorney to prosecute cases and the difficulty agencies may face in maintaining a presence in court throughout the entire enforcement process. State health departments could loan lawyers to prosecute and/or provide technical assistance to local agencies on request.

# ADDITIONAL RESOURCES

N/A

# ILLUSTRATION OF STRATEGY IN PRACTICE

Code enforcement staff who respond to complaints about housing problems such as a roof leak, roaches, or no heat, and who conduct routine periodic inspections of rental housing units, are specially trained to conduct limited checks for lead hazards as a means to trigger additional action. If the inspection identifies a lead hazard through visual assessment, spot testing, dust testing, or paint testing, the inspector orders the rental property owner to undertake lead hazard control or lead-safe repair work to bring the unit into compliance with any applicable standards.

# Jurisdiction or Target

# ILLUSTRATION OF THE STRATEGY IN PRACTICE

Title 6 of the Philadelphia Code and Regulations gives the Department of Public Health (DPH) the authority to issue correction orders to owners (or their agents) of housing units found to have lead-based paint hazards. If an owner does not comply with the order, the City files a case in Philadelphia's special "lead court." The city may seek a range of remedies, including the use of City funds to abate the hazard and recovery of those costs from the owner. If the property owner fails to reimburse the city, the court may place a lien on the subject property for the amount of abatement costs and other related expenses. This process has been a powerful motivator for property owners, who are now more likely to correct lead hazards proactively, or at least to comply with orders before the case gets to court.

Staffing utilized: There are no staff dedicated to implementing this strategy. When abatement is needed, crews from the city's lead hazard control program can be assigned, and the labor cost is included in the amount billed to the property owner or added to the lien.

# Jurisdiction or Target

# Other resources utilized: The Department of Health provides justification for the cases, including lead inspection checklists and laboratory records on EBLs.
The Law Department also has access to a list of property owners who have requested assistance.

# Factors essential to implementation: The combination of a dedicated lead court, consistent enforcement, and outreach to landlords to make sure they understand that they must comply or face prosecution enables the City to avoid actually having to use this remedy.

Limitations/challenges/problems encountered: The City is unlikely to recover the costs of lead hazard control because homes with deferred maintenance and serious hazards, which often already have tax liabilities or other liens attached, sell for as little as five to ten thousand dollars. As a result, this measure is used only when owners qualify for no other programs. Also, determining the identity of the property owner is sometimes challenging and takes a considerable amount of time.

Magnitude of Impact/Potential Impact: The provision has not been used because the Law Department has been able to use other means to resolve the 1,700 cases that it has filed with the Court.

# CONDUCT PERIODIC HOUSING CODE INSPECTIONS

# DESCRIPTION OF THE STRATEGY

Code enforcement systems that operate solely in response to tenant complaints, although the prevailing norm nationwide, are highly ineffective and have limited impact. This approach fosters the decline of rental housing conditions, since tenants may not know how to register complaints or may be reluctant to complain out of fear of retaliation by the landlord. In contrast to sole reliance on complaint-based approaches, proactive, periodic inspection programs can advance primary prevention more meaningfully. Both New Jersey and Los Angeles have committed to inspecting multi-family rental properties every three to five years. Such preemptive code inspections also can be more narrowly targeted to high-risk neighborhoods, as the City of Milwaukee is doing.

# BENEFITS

Immediate/Direct Results: Problems such as lead hazards are routinely identified by an inspection, documented, and brought to the attention of the rental property owner. Code officials can ensure that when housing code violations are corrected, the work is done in a lead-safe manner.

Public Health Benefits: A periodic rental housing inspection program helps to ensure that multi-family rental housing units comply with basic health and safety standards. Periodic inspections foster proactive maintenance because property owners cannot expect to remain "outside the system." By promoting routine preventive maintenance on a widespread basis and improving the quality of the rental housing stock, periodic inspection programs can help to prevent lead hazards, even in rental housing units that would be missed under a complaint-based inspection program.

Other Indirect/Collateral Benefits: Periodic inspection programs, when coupled with an effective enforcement regimen, can generate fees sufficient to offset the cost of the program. Regular inspections help to maintain the quality of the rental housing stock over the long term in a cost-effective manner.
# SCOPE OF POTENTIAL IMPACT

Statewide (impact depends upon the scope of the housing code inspection program)
City- or County-Wide
Neighborhood/Community

# PRIMARY ACTORS

Code or Building Inspection Agency

# KEY PARTNERS

Health Department
Local Prosecutors
Community-based Organizations
Property Owners
Tenants

# CRITICAL ELEMENTS

Staff requirements: When moving from a complaint-based inspection program to a periodic system, additional inspectors may initially be required because a periodic inspection program also must accommodate complaints. Under an effective periodic inspection program, the number of complaint-based inspections will decrease over time. In New Jersey, for example, periodic inspections have been mandated for over thirty years, and few inspections currently are undertaken in response to complaints. Under the state's periodic inspection program, approximately 115 inspectors conduct approximately 162,000 dwelling-unit inspections annually and re-inspect about 127,000 of those units.

Staffing utilized: More than 57 inspectors devote their time solely to proactive inspections. An additional 22 inspectors respond to tenant complaints. Some of the inspectors who deal with complaints also assist with re-inspections of units found to be out of compliance during the scheduled inspections.

# Other resources utilized: N/A

Factors essential to implementation: Successful implementation of a periodic inspection program requires, first and foremost, adequate staff to carry out inspections. Inspectors must be well trained, not only to identify code violations, but also to deal effectively with tenants. In Los Angeles, advocates have conducted trainings for inspectors to help them deal with cultural and/or language issues that may arise with tenants. Another key to the success of the SCEP program is that a loan program has been put in place to help small landlords make repairs. Finally, effective enforcement is critical to the success of a periodic code inspection program. While the city experienced some initial problems with cases stalling in the courts, hearing officers are increasingly successful at moving cases forward. In addition to a commitment to enforcement on the part of agency staff, adequate prosecutorial resources must be dedicated to enforcement.

Limitations/challenges/problems encountered: Initially, obtaining funds for the program was a challenge. However, the City increased the monthly inspection fee that rental property owners pay.

Magnitude of Impact/Potential Impact: The current schedule for inspections allows for rental units to be inspected every five years. Each year, about 150,000 units are inspected; the city hopes to increase that figure to 180,000.

Potential for replication: Very high. These programs are readily replicated.

# Contact for Specific Information: Greg

# ENABLE TENANTS AND COMMUNITY-BASED ORGANIZATIONS TO TAKE ACTION TO ADDRESS SUBSTANDARD HOUSING CONDITIONS

# DESCRIPTION OF THE STRATEGY

In many cases, tenants lack the ability to address substandard housing conditions or are reluctant to exercise their rights out of concern that the landlord will retaliate. Empowering tenants to take action when housing conditions are inadequate and enabling neighborhood organizations to act on tenants' behalf can significantly enhance the efforts of code enforcement officials.
One effective strategy is to legally enable tenants or their advocates to request a code inspection and to empower them to pursue enforcement actions themselves in court. This approach also helps circumvent the common problem of inadequate resources for enforcement.

# BENEFITS

Immediate/Direct Results: Tenants in substandard properties obtain legal standing to initiate code inspections, enforcement, and remediation actions without fear of landlord retaliation. If receiverships (court appointment of third-party administrators to manage properties and oversee repairs) or rent escrow arrangements are permitted, rents can be used directly to fund repairs.

Public Health Benefits: High-risk housing is targeted for repairs that reduce health hazards. Code agency and/or court oversight can ensure that repairs are done safely, following accepted protocols, and without hazards being left behind.

Other Indirect/Collateral Benefits: Tenants gain power in relation to landlords, which could result in landlords community-wide becoming more responsive and proactive regarding maintenance and repairs.

# SCOPE OF POTENTIAL IMPACT

# CRITICAL ELEMENTS

Staff requirements: Enacting a new state or municipal law to give tenants in substandard properties legal standing to initiate code inspections and enforcement, and to take enforcement actions themselves, can be a major undertaking. An experienced organizer could, in several months, organize a campaign capable of enacting such a law. One or more FTEs (organizers, attorneys) could staff a project to assist tenants with using the process in just one community.

Other resource requirements: Research would be needed on existing laws, as well as on the degree and extent of substandard housing conditions in the jurisdiction and the specific shortcomings of the existing code enforcement system.

Institutional capacity required: An organization undertaking such a campaign would need the capacity to organize and lobby, experienced staff, and relationships with allies among tenants' rights, affordable housing, public interest, legal, and other community organizations.

# Cost considerations: The positive impact on housing affordability and condition is potentially great. These benefits will far exceed the cost of a campaign to secure enabling legislation. Creating and funding an organization or agency to provide ongoing assistance to tenants bringing enforcement cases should be considered, as this will greatly improve the impact of the law and the quality of outcomes.

# Timing issues: None

Feasibility of Implementation: Variable. Any organization undertaking such an effort should understand that, because of the complex and unpredictable nature of the legislative process, the degree of difficulty may be greater than expected and success is not guaranteed. Pilot programs with limited scope may be a useful first step, giving advocates time and resources to prove that the strategy is effective in a target area.

# POTENTIAL OBSTACLES/BARRIERS

This strategy requires enacting new legislation, working to ensure that it is effectively implemented, and ongoing work to assist tenants with using the process. Thus, this strategy is a major undertaking and could fail if sufficient resources and energy are not available to overcome inertia and political opposition.
# ADDITIONAL RESOURCES

N/A

# ILLUSTRATION OF STRATEGY IN PRACTICE

Minnesota's landlord-tenant law, Chapter 504B, gives tenants, a municipality, or a neighborhood housing-related organization legal standing to bring a court action against a landlord who fails to correct deficiencies at the property within a reasonable time. Project 504, a non-profit neighborhood organization, has brought more than ten such cases in the past three years, leading to broad remedies for tenants, including in some cases the appointment of a third-party administrator to manage and operate the landlord's property. Project 504's court action also established a precedent that significant unabated lead hazards in a property constitute an emergency, causing the court to issue orders to the landlord to correct the hazards immediately.

Factors essential to implementation: A strong partnership with other affordable housing and tenant advocacy organizations. Code enforcement officials who recognize that the strategy's success will reduce enforcement time spent on problematic properties. Strong and ongoing relationships with tenants, including any identified tenant leaders who will advance the strategy. Solid knowledge of landlord-tenant law, or partnership with a pro bono or legal services attorney who can provide legal analysis and support. Relationships with proactive landlords who recognize the need to address substandard housing in their jurisdiction are also helpful.

# Jurisdiction or Target

# ILLUSTRATION #2 OF STRATEGY IN PRACTICE

An ordinance passed in 2002 gave code inspectors the authority to pursue charges against property owners who do not treat lead-based paint hazards in their buildings. The City of Kankakee's Community Development Agency's Lead Poisoning Prevention Program (LPPP) trained code inspectors to visually assess and identify lead hazards. When code inspectors discover an existing or potential lead hazard, they can refer the property owner to the LPPP. Through its HUD Healthy Homes grant, the program helps property owners make repairs before a hazard develops, as well as remediate or abate existing hazards. Property owners who do not follow up on the voluntary referral are cited.

Other resources utilized: No information provided.

# Jurisdiction or Target

# Factors essential to implementation: A good relationship with the code enforcement division is key. Outreach to property owners through the local landlord association allowed the program to address concerns up front and to educate property owners about lead hazards before inspections. HUD Healthy Homes and CDBG monies provide the funding to assist property owners with lead-safe work practices and lead hazard remediation.

Limitations/challenges/problems encountered: Occasionally a property owner does not recognize the seriousness of the problem, but the LPPP grants typically alleviate any objections. Nominal outreach was needed to bring the Code Enforcement Division and the Health Department together, since each agency has its own focus.

Magnitude of Impact/Potential Impact: LPPP receives an average of 20-25 referrals per month. In its first two-year grant cycle, it assisted 300 properties and will complete another 240 this cycle. As a result of interactions with code inspectors, other property owners voluntarily contact LPPP for lead hazard information and assistance. Outreach coordinators have also seen a shift in community awareness, from landlord association meetings to WIC outreach.

# Potential for replication: Moderate.
This strategy can be replicated wherever there is a property maintenance code that the city is willing to enforce.

Approximately 25% of the owners contacted via letters have attended dinners, and GHC is starting to receive responses from previously uncooperative owners.

# Contacts for Specific Information

Jenny

Jurisdiction or Target Area: Greensboro, North Carolina

Primary Actor: Greensboro Housing Coalition (GHC)

Secondary Actor(s): N/A

Staffing utilized: 0.25-0.5 FTE for four mailings and eight dinners, including researching and preparing the mailings and planning for and convening the dinners.

Other resources utilized: Data sources include information collected during GHC's neighborhood outreach and hazard assessment activities (as a subcontractor to CEHRC and the city's lead hazard control program), the city's housing code enforcement database, and the tax assessor's database. GHC uses a laptop and digital projector for presentations. A transitional housing program with a large community room donates space for the meetings. Food is purchased and prepared by GHC staff.

# Factors essential to implementation: A key factor is good working partnerships with the range of agencies that need to communicate with landlords about their responsibilities under federal, state, and local laws and about the resources available to help them make their properties lead-safe.

Limitations/challenges/problems encountered: The majority of landlords have not yet responded to the letters. Also, it has been difficult to identify and locate some owners. It is difficult to determine owners' follow-up activities for two reasons: the city's lead hazard control program is overwhelmed and cannot always provide accurate and up-to-date information on the applications received, and the owners tend not to readily provide information on what steps, if any, they have taken to control lead hazards.

Magnitude of Impact/Potential Impact: Many more owners of high-risk housing are following the disclosure law. At least 25 properties have been enrolled in the city's lead hazard control grant program.

# Potential for replication: Moderate

# Contact for Specific Information

Beth McKee-Huger
Executive Director
336-691-9521
[email protected]

# References for additional information

N/A

# ILLUSTRATION #4 OF STRATEGY IN PRACTICE

Project 504 built an extensive database of owners of pre-1950 properties in high-risk neighborhoods and is using it to mail out at least 1,000 boilerplate and 250 registered letters. Owners receiving boilerplate letters first receive a postcard to alert them to the coming letter. The postcard serves two purposes: getting the property owners' attention and culling bad addresses from the database before the more expensive letters are sent. Both the postcard and the letters refer owners to Project 504's new website designed to provide resources to property owners, www.nomorelead.org. When rental property owners contact Project 504 for more information, they are sent a letter with an accompanying stamped postcard that the owner can send directly to the county for more information on how to enroll in the lead hazard control (LHC) grant program. The county saves these cards
This publication offers a comprehensive collection of 70 "building blocks": primary prevention strategies that merit consideration by state and local governments and others in a position to reduce exposure to hazards in housing and thereby help meet the Healthy People 2010 goal of eliminating childhood lead poisoning. Exemplary strategies span a broad spectrum that includes targeting high-risk properties; widely instituting safe work practices; building community capacity to check for hazards and work safely; delivering hazard assessment, control, and prevention services; motivating action; screening high-risk housing; expanding financial resources; strengthening enforcement; raising public awareness and support; and establishing valuable partnerships. A strategy has been considered for inclusion as a building block if it is sensitive to the economics of affordable housing, consistent with the principles of public health, holds the potential for broad-scale impact, stands a reasonable possibility of implementation, and offers promise for reducing lead and other environmental health hazards in high-risk housing. The summary of each building block is coupled with an illustration of how the strategy has been implemented and contact information for at least one individual who is knowledgeable about this activity. The purpose of disseminating Building Blocks for Primary Prevention: Protecting Children from Lead-Based Paint Hazards is to give programs and policymakers easy access to information about innovative and promising strategies that span the spectrum of primary prevention, from which they may select one or several to pursue based on their jurisdiction's needs and political and economic realities.

# INTRODUCTION

# Context and Background

Exposure to lead continues to poison young children in the United States. Estimates based on data from 1999 and 2000 indicate that about 2.2% of children aged 1-5 years (about 434,000 children) have blood lead level (BLL) elevations at or above 10 micrograms per deciliter (≥10 µg/dL). Healthy People 2010 (Objective 8-11) i calls for the eradication of lead poisoning as a public health problem by the year 2010 through the elimination of elevated blood lead levels in children. Over the past decade, research has greatly expanded understanding of the sources and pathways of lead exposure in the residential environment and the effectiveness of a range of strategies to make housing safe. While children can be exposed to lead from a variety of other sources and pathways, the most significant cause of exposure is the presence of lead-based paint hazards in their homes, such as lead in non-intact paint, interior settled dust, exterior soil and dust, and hazards created by improperly conducted renovation work. Focus on the presence of lead-based paint and its lead content has given way to recognition of the importance of the condition of painted surfaces in older homes and the dangers of lead-contaminated dust. Chronic ingestion of settled lead dust on floors, windowsills, and other surfaces is now recognized as the foremost pathway of young children's exposure to lead in the home environment, and dust lead levels are recognized as the strongest predictor of risk.
Based on the recommendations of an interagency working group tasked with planning to achieve the 2010 lead elimination goal, the Federal Strategy for Eliminating Childhood Lead Poisoning ii emphasizes the essential need to require action before children are poisoned: by making the US housing stock lead-safe. The latest national survey of lead hazards in US housing makes clear the magnitude and complexity of this challenge: more than one-quarter of all US housing units pose "significant lead hazards." iii Yet the impact of most lead poisoning prevention programs is limited to the fraction of properties that are occupied by a child with an elevated blood lead level (less than two percent of hazardous units nationwide). While the need to improve blood lead screening and case management services continues, achieving the national 2010 goal of eliminating lead poisoning as a public health problem requires significantly increasing the impact of primary prevention strategies to make high-risk housing lead-safe.

The Centers for Disease Control and Prevention (CDC) has a longstanding responsibility and commitment to protecting children from lead poisoning. Since the early 1970s, CDC has made grants to help state and local health department lead poisoning prevention programs screen children at risk for lead poisoning or elevated blood lead levels (EBL), perform environmental investigations to determine the source of children's exposure, and provide follow-up case management and educational services. At the national level, CDC works closely with other federal agencies committed to lead poisoning prevention, notably the US Department of Housing and Urban Development's Office of Healthy Homes and Lead Hazard Control (HUD OHHLHC), the US Environmental Protection Agency (EPA), and, within the US Department of Health and Human Services, the Centers for Medicare and Medicaid Services (CMS) and the Office of Community Services (OCS). The Lead Poisoning Prevention Branch of CDC is fulfilling its commitment to the 2010 lead elimination goal through its grant program's requirement that jurisdictions develop and implement a strategic plan for elimination that includes primary prevention, partnering, and program evaluation. Through this Building Blocks publication, the Branch now offers grantees and others access to a compendium of promising primary prevention approaches to reduce exposure to lead paint hazards.

# Building Blocks for Primary Prevention: Protecting Children from Lead-Based Paint Hazards

State and local childhood lead poisoning prevention programs (CLPPPs) universally acknowledge the importance of primary prevention and are beginning to address it in their strategic plans and funding applications. However, many programs' primary prevention efforts are confined to parent education about hygiene, nutrition, and housekeeping, despite research that makes clear the limitations of these interventions for families whose homes pose significant lead hazards. The inability to institute durable primary prevention is caused in part by the pressure to focus resources and attention on secondary prevention: identifying and managing individual cases of elevated childhood BLLs. Indeed, in communities where follow-up on actual poisonings is limited to educating family members about lead hazards and behavioral change (because public resources are not available to control identified lead hazards and halt further exposure), meaningful primary prevention can seem like an extremely remote target.
Programs facing these circumstances need ideas for sharing responsibility within the jurisdiction to stop repeat offenders, expand access to lead-safe housing, and ultimately arrest the cycle of inferior housing that continually produces new poisonings. While no city or state with a significant stock of leaded housing has successfully assembled all of the elements needed to make primary prevention a reality across the jurisdiction, state and local lead poisoning prevention programs across the country and their partners in other agencies and the private sector have implemented a multitude of innovative and successful primary prevention strategies in recent years. Workshops and conferences periodically feature model programs, but the prospect of replicating an entire program with multiple components and elements can be daunting to the CLPPP seeking to evolve beyond screening and case management. The difficulty of achieving program transformation to primary prevention is only compounded within an overwhelmed public agency that is surrounded by a change-resistant or risk-averse political environment. Since most successful primary prevention programs consist of multiple elements, specific strategies can be considered individually or in combination. The many innovative strategies currently being implemented across the country to identify, control, and prevent lead hazards in housing before a child is poisoned have never been systematically documented or described in a way that makes information about their design and implementation readily accessible. Programs and their jurisdictions need this information at the "building block" level in order to decide which strategies to pursue based on local needs and conditions. This document identifies and describes individual building blocks across the spectrum of primary prevention strategies in order to create access to knowledge about tangible and realistic opportunities for progress and program evolution in identifying, controlling, and preventing lead poisoning and other housing-related health hazards.

# Scope and Limitations

The research for Building Blocks for Primary Prevention: Protecting Children from Lead-Based Paint Hazards was guided by the descriptions of primary prevention in CDC's 1997 screening guidelines and 2002 case management guidelines, which emphasize eliminating and controlling toxic exposures at the source. While primary prevention necessarily encompasses activities that address all sources of exposure to lead, this publication is focused on strategies for preventing and controlling lead hazards in housing, the foremost cause of poisoning.

The heart of the challenge to public health agencies is leveraging action to make privately owned housing lead-safe. Many CLPPPs increasingly view leveraging action to address lead hazards in housing as part of their leadership role. While public health program directors and staff are clearly the primary audience for Building Blocks, some strategies entail fostering change in other organizations and systems to advance prevention in high-risk housing. The summary of each building block is coupled with an illustration of how the strategy has been implemented and contact information for at least one individual who is knowledgeable about this activity. Building Blocks has some inherent limitations that deserve note.
The information listed in illustrations (partners, resources, constraints) is not comprehensive but rather cites specific and strong examples of the building blocks. Results of efforts to replicate a given building block will vary depending on individual state and local laws, the maturity of partnerships, political will, and the existence and strength of community-based partners. The applicability of a building block selected for implementation will depend on the maturity and capacity of the jurisdiction and its CLPPP. Inclusion of building blocks in this document does not mean that they have been evaluated for their outcomes or transferability.

# Organization of Building Blocks

The description of each strategy reflects the template (Appendix A) that has shaped the research and compilation of Building Blocks. The generalized information includes the title, brief summary, potential applications and benefits (including scope of impact), and critical elements such as staffing patterns, other resource needs, institutional capacity, cost and timing considerations, and an indication of feasibility of implementation. At least one real-world illustration amplifies the building block descriptions by documenting the scope and particulars of the example in a given jurisdiction or target area; the staffing and other resources utilized; the magnitude of its impact; factors essential to implementation; limitations encountered; estimated potential for replication; and specific contact information and references for additional information. The illustrations offer strong examples of how each strategy has recently been implemented but do not provide an inclusive or exhaustive review of all efforts that have drawn on the given strategy. This document displays building blocks grouped by the category that best fits their essential contribution. An alphabetical index of the building blocks follows the Introduction. The internet edition of Building Blocks for Primary Prevention: Protecting Children from Lead-Based Paint Hazards will be available in Summer 2005 through the website of the Lead Poisoning Prevention Branch, www.cdc.gov/nceh/lead. Through this site, it will be possible to easily select sections of Building Blocks for online review and to search by keyword, location of illustration, category, key actor or partner, and similar criteria.

# ILLUSTRATION #2 OF STRATEGY IN PRACTICE

In 2002, the New York Public Interest Research Group (NYPIRG) used data from the health department to issue a report about childhood lead poisoning disparities in New York City. This study was conducted in conjunction with a campaign to pass the new lead poisoning prevention law in New York City that was enacted in February 2002. NYPIRG was aware that while the number of children poisoned by lead in New York had been declining for years, there appeared to be stubborn pockets of poisoning throughout the city, particularly in low-income neighborhoods. NYPIRG conducted a small-area analysis of the data: it first analyzed the data by census block and then aggregated the results by ZIP code (a minimal sketch of such an aggregation appears below). The analysis confirmed that there were indeed concentrated pockets of childhood lead poisoning in New York, many of which were located in low-income areas with tracts of substandard housing.
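The aggregation step of such a small-area analysis is straightforward to express. The Python sketch below is illustrative only and does not reproduce NYPIRG's actual methodology: it assumes a hypothetical per-child extract (bll_tests_by_child.csv) with hypothetical zip and bll_ug_dl fields, and it ranks ZIP codes by the share of tested children with BLLs at or above 10 µg/dL, the elevation threshold cited elsewhere in this document.

```python
import csv
from collections import Counter

# Hypothetical per-child extract: one row per tested child, with the
# child's ZIP code and blood lead level in micrograms per deciliter.
tested_counts, ebl_counts = Counter(), Counter()

with open("bll_tests_by_child.csv", newline="") as f:
    for row in csv.DictReader(f):
        zip_code = row["zip"]
        tested_counts[zip_code] += 1
        if float(row["bll_ug_dl"]) >= 10.0:  # elevated BLL threshold (>=10 ug/dL)
            ebl_counts[zip_code] += 1

# Rank ZIP codes by the share of tested children with elevated BLLs,
# surfacing the concentrated "pockets" the analysis confirmed.
rates = {z: ebl_counts[z] / tested_counts[z] for z in tested_counts}
for z, rate in sorted(rates.items(), key=lambda kv: kv[1], reverse=True)[:10]:
    print(f"{z}: {rate:.1%} elevated ({ebl_counts[z]}/{tested_counts[z]} tested)")
```

Rates based on small counts are unstable, which is one reason analyses like this one are typically computed at a fine level (census block) and then aggregated to a coarser unit (ZIP code) before publication; aggregation also reduces the privacy concerns noted earlier for block-level mapping.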
In order to convince City Council members that the existing lead poisoning prevention policy was not working for all of the city's children, NYPIRG decided it needed to illustrate the extent of the disparities in New York.

# POTENTIAL OBSTACLES/BARRIERS

In some areas, landlords may continue to be resistant to change or to cooperative working relationships with government regulators and/or community-based organizations, despite persistent efforts to engage them. In other instances, landlords may deem necessary efforts "too expensive," setting up the adversarial relationships this strategy is supposed to avoid.

# EXPAND LEAD SAFETY EDUCATION TO EXPECTANT AND NEW PARENTS

Other resource requirements: Appropriate mechanisms for delivery and dissemination of the desired educational messages are needed, but the mechanisms can vary dramatically depending on the design of the educational initiative. Typical educational methods include brochures, fact sheets, and websites. The considerable range of materials already developed on lead safety obviates the need to develop new materials, although modifications should be made to incorporate local referral resources. Programs can augment traditional materials with more attention-getting vehicles, such as diaper bags and other promotional items. Any materials used must be accessible and understandable to those who live in high-risk areas, where language barriers and reading levels can present a challenge. Programs will also need data and surveillance information.

Institutional capacity required: Health education initiatives rarely require special authorization.

# Cost considerations: Adding a new subject to an existing education program is more cost-efficient than implementing free-standing education focused only on lead safety. Printing materials will cost nominal amounts per parent.

Timing issues: Can be implemented at any time.

Feasibility of Implementation: Variable. Feasibility depends on the availability of people to manage the effort and resources to support it.

# POTENTIAL OBSTACLES/BARRIERS

One potential obstacle is reaching agreement on a specific strategy deemed most effective for the circumstances, as there are many possible combinations of messages, messengers, delivery mechanisms, and target audiences. In addition, it can be uncomfortable to make lead-safety recommendations to parents in communities where resources do not exist to assist families in repairing lead hazards. A barrier to the effectiveness of education on lead safety is that expectant and new parents may already be overwhelmed with messages on multiple weighty issues and have many other concerns and priorities. Discussion of possible lead exposure in utero may help parents to focus attention on the immediacy of lead safety. Programs may also encounter unexpected challenges in developing partnerships with seemingly natural partners. For example, one program reported difficulty in convincing obstetricians to participate in such an educational campaign.

# ADDITIONAL RESOURCES

N/A

# ILLUSTRATION #1 OF STRATEGY IN PRACTICE

MA CLPPP conducted a project to educate pregnant women about lead hazards and encourage them to adopt preventive behaviors, and to educate doctors and staff members of community health centers and agencies in the target communities. The project's three core activities were:
1. Development and distribution of bilingual prenatal lead awareness kits packaged in large, attractive diaper bags. The kit included educational fact sheets and brochures, promotional items, a community resource card, an evaluation card, and a voucher for free lead-safety training for a family member. The pre-existing educational materials were provided as bilingual documents, in English and one other language (Spanish, Khmer, or Vietnamese). On a limited basis, materials were also distributed in Russian, Chinese, and Portuguese.

2. Recruitment of community health centers and agencies that serve pregnant women in the target communities (e.g., WIC, Head Start) to educate their staffs and clients and distribute information kits.

3. Sponsorship of Grand Rounds training (offering CEUs) for physicians and other medical and program staff in the four communities.

This project was supported by a nine-month CDC supplemental grant of $100,000.

Staffing utilized: Staffing was routinely about 1.25 FTE but spiked during busy periods associated with trainings and implementation (e.g., about 5 FTEs for a few days).

# Other resources utilized: N/A

Factors essential to implementation: Staff felt that success depended on the availability of a full-time project coordinator and, for effective materials distribution and training recruitment, on the network of existing contacts in the community. MA CLPPP was able to use existing MOUs with some partners, which expedited administrative processes.

Limitations/challenges/problems encountered: Major challenges were in the areas of deadlines and evaluation. Various administrative factors meant that the program had about seven months to hire staff and complete the project, operating within the constraints of state governmental systems for purchasing. Due to time constraints and practical realities, the program was limited to a self-reporting evaluation. Logistical constraints, including an interpretation of HIPAA requirements, prevented an evaluation approach involving tracking individual women's names.

Magnitude of Impact/Potential Impact: Approximately 3,500 diaper bags/information kits were distributed in four months, with many distributed in high-risk areas; 29 agencies signed Memoranda of Understanding (MOU) and partnered in the project; 138 self-reported evaluation cards were returned from kits; and Grand Rounds attendees gave high evaluation marks.

# Potential for replication: Moderate

# Contacts for Specific Information

Xanthi Scrimgeour
Health Education Coordinator
413-586-7525 x1122 or 1-800-445-1255
[email protected]

Paul Hunter
Director, MA CLPPP
617-624-5585
[email protected]

# References for additional information

1. An August 2003 report called "CDC Supplemental Prenatal Grant: Overview and Evaluation" describes the project and its results in detail. The report includes a review, conducted with the New England Lead Coordinating Council, of similar prenatal lead education activities that had been undertaken in other states.

# ILLUSTRATION #2 OF STRATEGY IN PRACTICE

As part of a nine-month project focused on increasing testing rates for lead among pregnant women in Alameda and Fresno counties and prompting early intervention, CA CLPPP developed, disseminated, and field-tested educational materials for at-risk pregnant women. To this end, brochures were developed urging women to get tested and explaining how lead gets into the body, how it can affect a baby, and how to create a lead-safe environment.
County-specific phone numbers were provided so that women could easily seek medical care and information regarding lead and pregnancy. After completion, 25,000 packets of culturally-appropriate outreach

# INTEGRATE LEAD POISONING PREVENTION EDUCATION INTO PHYSICIAN EDUCATION CURRICULA

# DESCRIPTION OF THE STRATEGY

Pediatricians who are knowledgeable about lead safety and healthy homes can provide better health care for children at high risk of toxic exposures, advocate for relevant solutions, and suggest primary prevention tools for parents. A recent study revealed that many pediatricians want to better understand lead exposure and other environmental history components in patients' backgrounds, yet fewer than one in five has any formal training in making inquiries about lead and other chemical exposures. Medical schools and residency review committees can work to train pediatricians in environmental history-taking to identify possible lead exposures in the home and to help prevent exposure. Integrating such specialized training into required medical education is the easiest method, as most medical schools already require some level of lead poisoning prevention education during pediatric clinical rotations. Some state medical societies may get involved in mandating primary prevention education requirements, and residency review committees will also be involved. Requiring a rotation at a children's hospital, community center, or local health department can give pediatric students even more first-hand experience with childhood lead poisoning and add extra incentives for them to take steps toward primary prevention in their future practices. Additional course offerings on primary prevention and environmental history-taking could also be included in physicians' required Continuing Medical Education (CME).

Institutional capacity required: Curricula that include primary prevention and environmental history-taking education are the main institutional requirement for this strategy.

# Cost considerations: No additional costs would be incurred if this training on environmental history-taking and lead safety is integrated into existing education and training systems for pediatricians.

# Timing issues: None

Feasibility of Implementation: Very high. Because some lead poisoning prevention is already built into most medical school curricula as all students go through their pediatric rotations, putting more emphasis on primary prevention and environmental history-taking would require only modest adjustments to curricula, with little or no conflict with other course priorities.

# POTENTIAL OBSTACLES/BARRIERS

None identified.

# ADDITIONAL RESOURCES

# ORGANIZE "TOXIC TOURS" FOR POLICY MAKERS

# DESCRIPTION OF THE STRATEGY

A first-hand look at unhealthy housing conditions can be provided to public officials by organizing a community tour that allows them to visit homes with hazards (and, if possible, some that have been repaired) and talk with residents and advocates about the problems and policy solutions. Experience has shown that policy makers can be moved significantly by the personal experience of seeing hazardous conditions first-hand and having face-to-face interaction with families directly affected. First-year medical students can also benefit from a "toxic tour."
Community-based organizations in Los Angeles, New Orleans, and Providence have successfully used this strategy to educate and motivate local health and housing officials. This strategy parallels the "Child Watch" tours that child advocacy groups have historically conducted to sensitize and challenge elected officials and journalists regarding a variety of problems that children face.

# POTENTIAL OBSTACLES/BARRIERS

It can be quite challenging to convince public officials, especially elected officials, and the media to participate in the tour. Another challenge is ensuring sensitivity to the families who agree to open their homes to the tour. There is a fine line between showcasing the problem and potential solutions and unintentionally allowing voyeurism at the expense of low-income families.

# ADDITIONAL RESOURCES

N/A

# ILLUSTRATION OF STRATEGY IN PRACTICE

The Los Angeles Healthy Homes Collaborative has found that providing public officials with tours of hazardous housing conditions deepens their understanding and motivation to enforce standards and improve services to affected families. The Healthy Homes Collaborative is a diverse coalition of community-based and advocacy organizations committed to eliminating environmental health threats to children and increasing health access. The Collaborative enlists families whose homes will be visited, persuades agency and legislative staff to attend, and arranges the itinerary, transportation, and food for the attendees. An opportunity for the group to eat together is important to allow attendees to exchange impressions, information, and ideas.

# Jurisdiction or Target

# Other resources utilized: N/A

Factors essential to implementation: Because the Collaborative has worked consistently with tenants and earned their trust, the CBO leaders are able to persuade tenants to participate.

# Limitations/challenges/problems encountered: The logistics of the tour can be challenging. The size of Los Angeles meant that public officials had to commit almost a full day, with the result that elected officials sent their aides instead of seeing the housing conditions first-hand. There may be last-minute conflicts that affect a family's ability to be home, or disruptions in the tour schedule, so it is advisable to line up "extra" families who are willing to participate. Another challenge is facilitating the discussion and reactions of participants with diverse political views and perspectives, as everyone is in "close quarters" for the tour.

Magnitude of Impact/Potential Impact: Agency staff felt a new sense of urgency from seeing the desperate living conditions of many families and from seeing how hazards persist in units where violations had been cited. Staff of the lead poisoning prevention program have since been more responsive when alerted to families in need by the Collaborative; communication and working relationships between the CBOs and the agencies have improved.

# ILLUSTRATION OF STRATEGY IN PRACTICE

In September 2003, Indianapolis' mayor unveiled a "Top 10" list of city property owners who have been serial code violators. The property owners on the initial list held title to 310 properties throughout the city. The mayor's list, which is updated as needed, serves several purposes. It helps to distribute information on problem landlords, assisting tenants in avoiding structures that may contain dangerous code violations and health hazards while exposing slumlords to the local community.
It also helps the city to hold property owners accountable and provides a tool for community leaders seeking to put pressure on property owners to remedy code violations and maintain their properties. The list is readily available through the city's website, and it has also been publicized by The Indianapolis Star newspaper.

# Institutional capacity required: N/A

Cost considerations: Extremely low cost.

Timing issues: Sustained cultivation of, or accessibility to, the media is critical; one-time or occasional efforts will be much less effective.

# Feasibility of Implementation: Very high

# POTENTIAL OBSTACLES/BARRIERS

A potential barrier is the reluctance of families to participate because they fear retaliation from the landlord or other consequences. The willingness of families to cooperate with reporters is essential because the personal toll of lead poisoning helps capture public attention and build political will to change the status quo.

# ADDITIONAL RESOURCES

N/A

# ILLUSTRATION #1 OF STRATEGY IN PRACTICE

In May 2001, the Providence Journal ran a six-part series on lead poisoning that helped set the stage for new legislation and regulations in the state. John Freidah, a photojournalist at the Journal, initiated the idea, but it could not have happened without the commitment of the newspaper, the hard work of the Childhood Lead Action Project, the time and effort of many government officials who educated and provided information to reporter Peter Lord, and the willingness of many families to open their lives to the journalists and the public.

Staffing utilized: At the Journal, reporter Peter Lord and photojournalist John Freidah invested more than six months preparing the series (Freidah had begun taking photographs at the lead clinic many months earlier, between other assignments, in order to bring the idea vividly to the paper's editors). Top editors at the paper worked with them on "designing" the series to most effectively convey the breadth and impact of lead poisoning in the state. At the Childhood Lead Action Project, a community organizer working with the parents of lead-poisoned children helped the parents overcome their fear of participating in the series. Officials at the Rhode Island Department of Health and the Rhode Island Housing and Mortgage Finance Corporation devoted many meetings to helping Peter Lord understand and accurately convey the issues.

# Jurisdiction or Target

# Other resources utilized: The state health department forged a partnership with the Providence Journal to make public information about properties that have lead hazards. The department had generated a list of houses where children had been poisoned but did not have the technical capacity to publish it online. By providing these data to the newspaper, with yearly updates, the department has met the public need for information despite its technical limitations.

# ILLUSTRATION #2 OF STRATEGY IN PRACTICE

The Detroit Free Press conducted an in-depth investigation of lead poisoning in Detroit and Michigan that began appearing January 21, 2003. The paper followed the five-day investigative series with continuing coverage throughout the year. The reporting, together with the persistent work of state advocates for children's environmental health, resulted in lead poisoning becoming a top priority of the Governor and the state legislature. In addition, the series prompted the US EPA to order the removal of lead-contaminated soil in a Detroit neighborhood near a former lead smelter.
Jurisdiction or Target Area: Michigan
Primary Actor: Detroit Free Press
Secondary Actor(s): Get The Lead Out Coalition
Staffing utilized: The Free Press devoted a multi-talented team of reporters to covering this issue over many months. The reporters worked with and wrote about families affected by lead poisoning, investigated and identified systemic shortcomings in the city's and state's programs, hired experts to test the soil near industrial sites, and researched how local efforts compared with those of other cities and states around the country. When state advocates, including the Get The Lead Out Coalition in Grand Rapids, became aware of the investigation, they provided information to the reporters about the extent of the problem outside Detroit and the need for statewide leadership on lead poisoning prevention. Both the newspaper and the advocates worked to sustain the impact of the initial investigation through continued coverage of the problem and of the policy initiatives of the Governor and the state legislature.
Other resources utilized: The Free Press analyzed State Department of Community Health data on elevated blood lead levels to identify the "hot spot" neighborhoods-those with the most lead-poisoned children-and questioned why the state had not been using the data to target prevention and hazard control efforts. The analysis revealed that the single worst hot spot was in Grand Rapids-a finding that changed the political dynamic of the issue within the state by capturing the attention of legislators in the western part of the state, educating them on the scope of the problem statewide, and motivating them to support new legislation. Because the paper had to sue the state to release the data, this article appeared in July, putting the issue back on the front burner six months after the original series.
Factors essential to implementation: The critical factors are a large-circulation newspaper with a commitment to investigative reporting, strong advocates who can use the heightened public attention and concern to build political will for policy change, and continuing coverage and advocacy to keep elected officials focused on and accountable for making necessary changes.
Limitations/challenges/problems encountered: The main policy-making challenge in Michigan was overcoming the east/west split in the state. The Free Press broadened its focus beyond Detroit and the eastern part of the state at the urging of state advocates. Most legislators had considered lead poisoning a Detroit problem until the Free Press analysis of the data documented the extent of the problem in Grand Rapids and other western areas.
Magnitude of Impact/Potential Impact: Although the Governor had campaigned as a champion of children's environmental health, it is clear that the investigative series, combined with advocates' work to take maximal advantage of the publicity, persuaded the Governor to submit a much stronger action plan much more quickly than would otherwise have been the case. The legislature would likely have reacted far more coolly to the Governor's proposals without the series.
(The bills currently working their way through the Michigan House and Senate would establish a Childhood Lead Poisoning Prevention and Control Commission, impose penalties on landlords who knowingly rent units with lead hazards, provide tax credits for lead hazard control, create a lead-safe housing registry, increase pressure on Medicaid plans to screen enrolled children for lead poisoning, and require labs to report blood screening results electronically.)
# ADD LEAD SAFETY TRAINING TO WEATHERIZATION PROGRAMS AND PRACTICES
# DESCRIPTION OF THE STRATEGY
Integrating lead safety into the ongoing work of weatherization program contractors has multiple benefits: reducing energy costs, improving the indoor climate, reducing lead hazards in the homes treated by the weatherization program, improving the safety of weatherization crew workers and their families, and protecting the safety of residents. Lead poisoning prevention programs can provide training and incentives such as free or discounted HEPA vacuums and personal protective equipment. Options include developing a hybrid training curriculum, adding lead-safe work practices to standards or specifications, expanding monitoring and inspections to address lead safety concerns, offering complete lead-safe work practices (LSWP) training within the weatherization training program, subsidizing risk assessor training, and providing an XRF analyzer for each local weatherization program.
# BENEFITS
The Department of Energy requires its state-level grantees to ensure that weatherization crews complete lead safety training if they will be working on homes built before 1978. This federal requirement prevents any confusion surrounding the need for lead safety training and ensures that all weatherization workers who operate in older homes will understand the consequences of repair and energy measures that may cut, sand, or pry lead-based paint, and how to avoid creating lead dust and paint hazards through proper containment, control during the work, and cleanup. It is most efficient to have the state weatherization agency condition disbursement of federal weatherization funds on fulfilling the training requirement; the state can also incorporate any state-specific standards or other information into the standard training.
Immediate/Direct Results: There will be increased awareness of lead safety among thousands of laborers and contractors.
Public Health Benefits: Crews will be significantly less likely to create hazards such as lead dust, lead-contaminated soil, or deteriorated lead paint during weatherization work that disturbs lead-based paint.
Other Indirect/Collateral Benefits: Lead safety capacity is built in the wider community of individuals and community action agencies that may also conduct repair and renovation work using HUD funds or other resources. Transferable lead safety skills will make laborers who work in weatherization careful about paint chips and dust when performing other types of work in older homes in the future. Weatherization program staff gain awareness of potential health risks associated with lead hazards and other housing condition problems. Finally, the initiative builds capacity among contractors and awareness of lead-safe work practices that will likely carry over to non-weatherization jobs in older homes likely to contain lead-based paint.
# SCOPE OF POTENTIAL IMPACT
Nationally, the risk of lead dust hazards will be reduced as weatherization crews treat pre-1978 homes.
More than 100,000 homes are treated by weatherization each year, and a significant majority were built before 1978.
Statewide-Such training can be pursued at the state, county, or local level; conditioning disbursement of federal weatherization funds on fulfilling the training requirement is most efficient.
Regional (e.g. multi-county)-Many CAP agencies cover multi-county service areas.
City- or County-Wide
Neighborhood/Community
Specific (Targeted) Population-Very low-income households
# ILLUSTRATION OF THE STRATEGY IN PRACTICE
This program brings subsidized lead safety training to all agencies administering weatherization funds, provides training to workers, ensures that at least one individual is trained and licensed as a lead risk assessor, and provides at least one XRF analyzer to each agency performing weatherization work. The program has developed specific policies and procedures to address lead that are more extensive than the federal requirements. Each weatherization program's risk assessor tests the lead content of the paint likely to be disturbed by the weatherization project in all pre-1978 homes. Using an XRF takes the guesswork out of the job: the crew knows whether there is lead paint and does not have to presume it exists. The state pays expert consultants to work with each risk assessor on using the equipment and procedures properly in order to guarantee consistent performance.
Jurisdiction or Target Area: Indiana
Primary Actor: Division of Family and Children, Housing and Community Service in the Department of Family and Social Services Agency
Secondary Actor(s): N/A
Staffing utilized: The state weatherization program director helped launch and develop the program with support from one key staffer and an independent consultant. It quickly became a relatively small aspect of the staff person's job as details fell into place. A consultant developed the policies and procedures, and the Environmental Management Institute, an accredited lead trainer, was contracted to support the risk assessors.
Other resources utilized: Administrative funding from the Section 8 program was used to purchase XRF devices.
Factors essential to implementation: The key factor for success is the commitment of state weatherization staff who care about the lead issue and are willing to make it a priority.
Limitations/challenges/problems encountered: One challenge was convincing key senior managers at the state level of the need to create systems that incorporate lead safety into weatherization and to approve centralized procurement of XRFs, which substantially reduced their price. Obtaining commitments from local community action agency directors to have their weatherization crews complete the training was another challenge, overcome by the state's upfront provision of needed resources.
Magnitude of Impact/Potential Impact: Weatherization work is performed using lead-safe work practices in all units that have lead-based paint. XRF testing has allowed the CAP agencies to focus dust containment and cleanup efforts on jobs where surfaces test positive for lead. Information developed from the lead testing is now available to future tenants and buyers under the federal lead hazard disclosure requirement.
Potential for replication: High. Lead safety training for weatherization crews can be replicated in any state where it is a priority for key managers at the state level.
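The go/no-go decision the Indiana illustration describes is simple enough to sketch in a few lines of code. The following is a minimal, hypothetical illustration only, not part of any program described here; it assumes the common federal definition of lead-based paint (an XRF reading of at least 1.0 mg/cm²), and the function name and example values are invented.

```python
# Hypothetical sketch of the XRF-based decision described above -- not an
# official protocol. Assumes the common federal definition of lead-based
# paint (XRF reading >= 1.0 mg/cm2); confirm the threshold that applies
# in your jurisdiction before relying on it.
from typing import Optional

LEAD_BASED_PAINT_XRF_THRESHOLD = 1.0  # mg/cm2 (assumed federal definition)

def requires_lead_safe_practices(year_built: int,
                                 xrf_reading: Optional[float]) -> bool:
    """Decide whether disturbing a painted component calls for lead-safe
    work practices (containment, dust control, HEPA cleanup).

    xrf_reading is the XRF result for that component in mg/cm2, or None
    if the paint was not tested.
    """
    if year_built >= 1978:
        return False  # residential use of lead-based paint was banned in 1978
    if xrf_reading is None:
        return True   # untested pre-1978 paint must be presumed lead-based
    return xrf_reading >= LEAD_BASED_PAINT_XRF_THRESHOLD

# Example: tested trim in a 1950 home reads 0.3 mg/cm2, so full containment
# can be skipped; untested siding in the same home cannot.
print(requires_lead_safe_practices(1950, 0.3))   # False
print(requires_lead_safe_practices(1950, None))  # True
```

This captures why the Indiana illustration says testing "takes the guesswork out of the job": without a reading, the conservative presumption applies to every painted surface in a pre-1978 home.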
# ASSESS AND ADDRESS MULTIPLE HAZARDS SIMULTANEOUSLY
# DESCRIPTION OF THE STRATEGY
Programs that address health hazards beyond lead can efficiently and effectively equip families to reduce health hazards in the home environment. Community-based organizations train community members to assess houses for hazards and leverage the results through both individual and systems advocacy. Such programs also work to build support for prevention by educating tenants/residents about hazards, available remedies for obtaining safe repairs, legal rights, and leadership skills. Programs focusing on direct services train volunteers and community health workers to conduct home audits, create a personalized Home Action Plan that emphasizes low- or no-cost solutions, and provide tools to assist in making needed changes, such as sealed mattress covers, cleaning kits, and HEPA vacuum loaners.
# BENEFITS
Immediate/Direct Results: Home health hazards will be identified. Rental property owners, their tenants, and owner-occupants will become aware of home health hazards and learn low-cost solutions to address them.
Public Health Benefits: Property-specific written reports and photographs inform the landlord about hazards. With respect to lead, the landlord then must correct the problem or disclose this information to prospective tenants. Property-specific data transforms the federal lead hazard disclosure rule's right to know into a powerful catalyst for action to improve conditions in substandard rental properties. Aggregate data can motivate local lawmakers to address the prevalence of hazards through property maintenance codes, health and housing codes, and other policy changes.
Other Indirect/Collateral Benefits: Fosters community building and community organizing. Advocates can use aggregate data to generate media interest in the plight of low-income families who seek healthy housing.
# POTENTIAL OBSTACLES/BARRIERS
If home assessments reveal health hazards in rental housing, steps must be taken to ensure that tenants are protected from retaliatory evictions and other illegal landlord actions. Advocates must also guard against responsibility being inappropriately shifted from landlord to tenant. Advocates must press code inspectors to inspect, cite, and enforce the code.
# ADDITIONAL RESOURCES
N/A
# ILLUSTRATION #1 OF STRATEGY IN PRACTICE
The American Lung Association of Washington established the Master Home Environmentalist (MHE) program in 1992 and has trained more than 1,400 volunteers. Trainers include environmental scientists, psychologists, social workers, academics, and medical professionals. Volunteers complete 35 hours of training and 35 hours of community service. Trained MHE volunteers use a Home Environmental Assessment List (HEAL™) to help identify health hazards in the home, including lead, dust, and mold. The volunteer then works with the resident to develop an action plan to create a healthier home environment, emphasizing inexpensive or no-cost solutions.
Jurisdiction or Target Area: King County, Washington; expanding to additional counties.
Primary Actor: American Lung Association of Washington
Secondary Actor(s): N/A
Staffing utilized: At a minimum, one full-time staff person is needed to recruit volunteers and steering committee members, schedule trainings, etc.
Other resources utilized: A computer and an LCD projector are helpful for presentations and trainings.
Factors essential to implementation: Developing key partners to serve as trainers and on the steering committee is essential. The expertise needed from key partners depends on the geographical area and the type of classes offered; for example, pesticide experts must be recruited where pesticides are a main issue.
Limitations/challenges/problems encountered: Funding and volunteer retention were both challenges.
Magnitude of Impact/Potential Impact: From 1994 to 1999, MHE volunteers reported completing more than 4,500 hours of community service and 1,400 home assessments. A 1997 study revealed that 86% of households visited by MHE volunteers improved their home environment.
Potential for replication: Moderate. MHE sells its trademarked and licensed program. Purchasers receive the Implementation Guide, training manual, facilitator's guide, HEAL paperwork, and a database to track volunteers and residents, as well as components created by other licensees. The cost is $2,500 for an American Lung Association affiliate; the cost is higher for non-affiliates.
# Contact for Specific Information
Aileen Gagney
Environmental Health Program Manager
206-441-5100
[email protected]
# BROADCAST LEAD SAFETY TRAINING WIDELY
Timing issues: This strategy could be implemented at any time.
Feasibility of Implementation: High. In states or regions where widespread telecommunication delivery systems exist, this strategy should be feasible, though some limitations may exist.
# POTENTIAL OBSTACLES/BARRIERS
There should be few, if any, barriers to implementation of this strategy. However, hands-on portions of the training will be nearly impossible to conduct remotely, which could limit the value of this particular type of lead-safe work practices course.
# ADDITIONAL RESOURCES
N/A
# ILLUSTRATION OF STRATEGY IN PRACTICE
The Iowa Department of Public Health Bureau of Lead Poisoning Prevention (the Bureau) partnered with the Iowa Department of Economic Development, Iowa's housing agency, public housing authorities, and entitlement cities to produce and deliver a lead-safe work practices training curriculum that could be broadcast to workers widely scattered throughout rural and urban Iowa.
Limitations/challenges/problems encountered: The main challenge to implementation was coordination of training materials, Bureau staff, and the training schedule. However, Bureau staff noted that this became easier as time passed.
Magnitude of Impact/Potential Impact: 1,020 landlords and contractors were trained via the ICN (Iowa Communications Network).
Potential for replication: High. This strategy can be easily replicated in any state or multi-county region with a fiber optic or other telecommunications presentation and delivery system.
# ENSURE THAT DO-IT-YOURSELF REHABBERS ARE TRAINED
# DESCRIPTION OF THE STRATEGY
Housing agencies that provide funds for housing rehab can require that property owners be prepared to deal effectively with existing conditions as well as problems that emerge as they work. Rehabbers of older housing especially need to know how to work safely around lead-based paint and how to safely and thoroughly repair lead-based paint hazards. Rehabbers also need to be aware of the U.S.
Environmental Protection Agency's Pre-Renovation Education Program (406b), which requires property owners to notify all occupants of pre-1978 housing units about any rehab work that will disturb more than two square feet of a painted surface.
# BENEFITS
Immediate/Direct Results: Training do-it-yourself rehabbers will make it more likely that lead-safe work practices will be used. This will bring under the lead-safe work practices umbrella a category of properties that has been missed by other, more formal training of professional rehabilitation contractors.
Public Health Benefits: Properties that would not otherwise have had the benefit of lead-safe work practices can now be rehabbed safely. This will reduce or eliminate the creation of lead hazards and encourage the repair of existing hazards, which will decrease children's exposure.
Other Indirect/Collateral Benefits: When included as part of a larger housing or development program, this strategy can also help reduce urban blight, reduce other health hazards in older structures, and assist in comprehensive community development and/or revitalization.
# SCOPE OF POTENTIAL IMPACT
Statewide
Regional (e.g. multi-county)
City- or County-Wide
Local Community
Specific (Targeted) Population
# PRIMARY ACTORS
Housing Agency
# KEY PARTNERS
Property Owners
# CRITICAL ELEMENTS
Staff requirements: This strategy will require staff time to conduct trainings; up to 1.5 FTE could be required if training is provided directly by the funding agency. The agency could also contract with outside trainers.
Other resource requirements: Lead-safe work practices training materials will be necessary for this strategy.
Institutional capacity required: Training instructors should be well versed in lead-safe work practices.
Cost considerations: This strategy should be cost-effective in preventing health problems.
Timing issues: This strategy would require some short-duration outreach, but it can be implemented at any time. It is also important to implement this strategy in a sustainable way so as not to limit its effectiveness.
Feasibility of Implementation: Very high in almost all jurisdictions.
# POTENTIAL OBSTACLES/BARRIERS
Few, if any, barriers should exist for this strategy.
# ADDITIONAL RESOURCES
# ILLUSTRATION OF STRATEGY IN PRACTICE
Secondary Actor(s): N/A
Staffing utilized: NJ Citizen Action tries to include guest speakers, such as attorneys, pediatricians, or parents of lead-poisoned children, to avoid the monotony of a single speaker and to provide expert information.
Other resources utilized: Each trainee receives a large binder of relevant reference materials, including transparencies for use with overhead projectors and a script on lead poisoning designed to make it easier for attendees to become trainers. The training includes lunch.
Factors essential to implementation: NJ Citizen Action staff believe that the most essential factor in the training's continued success is its quality, as attendance would surely fall off if agency managers and CBOs did not perceive value in dedicating a full day of a new staff member's time. Keeping logistics routine enables program staff to focus on recruiting and providing quality training and outreach rather than on, for example, catering or parking arrangements.
Limitations/challenges/problems encountered: None listed.
# POTENTIAL OBSTACLES/BARRIERS
The obstacles to delivering lead-safe work practices training to non-English-speaking immigrant communities are many and varied.
Skilled, culturally competent trainers are needed to teach the Spanish version of the HUD EPA course Lead Safety for Remodeling, Repair and Painting. The course has not yet been translated into the languages needed by non-Spanish-speaking immigrants who may also be working as day laborers and at risk for lead hazard exposure. Because many immigrants may not have been able to attend school in their country of origin long enough to equip them to sit through a long, classroom-style training, delivery needs to be paced or staged and include hands-on practice. The main obstacle to getting day laborers to use lead-safe work practices is that the relationship between day laborers and their employers is not conducive to workers changing their work methods on the basis of law and safety. Simply providing a training opportunity for workers is not enough-employers must be motivated or required to comply with lead safety requirements.
Factors essential to implementation: In Los Angeles, healthy homes advocates have found that several steps are needed to successfully deliver the Spanish-language version of the lead-safe work practices course to day laborers. First, because most day laborers are unable to give up a full 8-hour workday to attend training for which they are not compensated, advocates have divided the training into several evening sessions. Second, trainers emphasize the self-protective benefits of lead safety, since linking the issue to worker protection has been an important step in engaging day laborers in lead safety and in preventing hazards in the first place. Third, using written materials as little as possible and relying on face-to-face explanations and hands-on use of equipment has been the most successful teaching method.
# ADDITIONAL RESOURCES
Limitations/challenges/problems encountered: The main barrier to getting day laborers to use lead-safe work practices is the employer. Given the scarcity of work for a growing population of workers, day laborers are unwilling to raise concerns about lead safety with their employers, if they are even aware of the issue. Community and union organizers need to support workers in protecting themselves and their families by "blowing the whistle" on employers who circumvent lead safety laws in order to get work done as quickly and cheaply as possible, without regard to the health and safety of either the family whose home is being painted or repaired or the family of the worker.
Magnitude of actual impact: The project trained 200 workers in one year. Since a typical day laborer works on an average of 50-100 homes annually, this training has added lead-safe work practices to thousands of projects.
Potential for replication: Moderate. Modified replications of this project have been or are being carried out in several other immigrant communities in the United States, including the Mission District in San Francisco, CA. Every replication of this project will vary due to local political, population, and socioeconomic differences. This project can most successfully be replicated where lead-safe work practices are required during painting and remodeling activities.
# Contacts for Specific Information
# EXPAND WEATHERIZATION AND REHAB PROGRAMS TO ADDRESS LEAD SAFETY
# CRITICAL ELEMENTS
Staff requirements: The resources required are somewhat related to the scale of the project. In a city or county, an existing weatherization program could partner with a lead hazard control program and, with a percentage of one full-time employee's time, structure a project to target weatherization units for lead hazard control.
If a broader effort is envisioned to add a lead hazard control element to statewide weatherization or rehab programs, a more substantial commitment of staff resources and funding would be needed.
Other resource requirements: The weatherization program would need equipment and trained personnel to conduct lead inspections and/or risk assessments (to confirm the presence of lead-based paint) and to perform dust clearance testing. It would also need access to trained and qualified contractors to perform the work.
Institutional capacity required: Typically it is not necessary to change laws or regulations to integrate the delivery of weatherization and lead hazard control services. The main capacity issue is the availability of qualified personnel to perform the hazard assessments (or measure lead content in paint), complete the work following lead safety standards, and conduct clearance tests. Weatherization and rehab programs can have their existing crews perform lead hazard reduction provided the crews are properly qualified. For abatement projects, workers and contractors must be trained and certified. Except in a few states, training in lead-safe work practices is sufficient qualification for most non-abatement projects. Certified lead inspectors, risk assessors, and-in some states-sampling technicians can perform the dust clearance testing after the work is completed.
Cost considerations: Leveraging other programs' work in homes to tackle lead hazards is generally cost-effective. Weatherization and rehab programs already bear the costs of identifying housing units appropriate for their programs, and their eligibility criteria are consistent with risk indicators for lead hazards (homes built before 1950; low-income families). Expanded programs can therefore offer lead hazard control by building upon existing efforts to enroll units and fix other problems, many of which may also contribute to lead hazards, such as plumbing leaks, holes in the exterior walls or roof, and poor insulation resulting in condensation and water damage. In some cases, it may be appropriate to purchase XRF machines to help identify homes where lead hazard control is not needed.
Timing issues: If funding is available to support the lead hazard control interventions, approximately 6-12 months is needed to launch such a program. If funding is not reliable, then a more substantial commitment of staff resources and time may be needed to structure the appropriate partnerships.
Feasibility of Implementation: Moderate. Implementation can hinge on the availability of dedicated funds to support the added lead work. Weatherization and publicly supported rehab programs generally have production targets that create disincentives to increasing the costs in individual units. There may also be restrictions on spending a program's funds for actions not directly related to the program's mission (e.g., non-energy repairs are ineligible for weatherization funding, except that in some states up to 10% can be spent on "health and safety" repairs). A key to implementation therefore is locating funds that can be used for lead work and securing a commitment from the state and local housing and weatherization program managers that this supplemental work is a valuable complement to their central mission.
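The dust clearance testing mentioned above is, at its core, a comparison of wipe-sample results against surface-specific thresholds. The sketch below is a hypothetical illustration only, not a substitute for a certified risk assessor's clearance examination; it assumes the federal dust-lead clearance levels in effect around the time this document was prepared (40 µg/ft² for floors, 250 µg/ft² for interior window sills, and 400 µg/ft² for window troughs), and the sample values shown are invented.

```python
# Hypothetical sketch of a dust clearance check -- not an official protocol
# and not a substitute for a certified risk assessor's report. Thresholds
# are the federal clearance levels in effect circa 2004 (in ug/ft2);
# verify the values currently in force before relying on them.

CLEARANCE_LEVELS = {
    "floor": 40.0,           # ug/ft2
    "window sill": 250.0,    # ug/ft2
    "window trough": 400.0,  # ug/ft2
}

def passes_clearance(samples):
    """samples: list of (surface_type, dust_lead_loading) pairs, with the
    loading in ug/ft2. A unit passes only if every sample is below the
    clearance level for its surface type."""
    failures = [(surface, loading) for surface, loading in samples
                if loading >= CLEARANCE_LEVELS[surface]]
    return len(failures) == 0, failures

# Invented post-work wipe samples from one unit:
unit_samples = [
    ("floor", 12.0),
    ("floor", 55.0),         # fails: at or above the 40 ug/ft2 floor level
    ("window sill", 180.0),
]
passed, failed = passes_clearance(unit_samples)
print(passed)  # False
print(failed)  # [('floor', 55.0)]
```

In practice the sampling plan (how many wipes, which rooms and surfaces) is set by regulation and by the risk assessor; the point here is only that failing any single surface means the unit has not cleared.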
# POTENTIAL OBSTACLES/BARRIERS
Lack of support from key energy or housing agency staff and/or the local or state lead program can hinder efforts. Funding for the added lead work must be secured.
# ADDITIONAL RESOURCES
N/A
# ILLUSTRATION OF STRATEGY IN PRACTICE
The Department of Public Health supplements weatherization activities in pre-1978 housing occupied by a child under age six with targeted lead hazard control activities. Window wells are capped, and a thorough cleaning of windowsills and floors is completed using a wet wash and HEPA vacuum. Pre- and post-intervention dust samples are collected to document the decline in lead-contaminated dust and to verify that the unit meets dust clearance standards. Clearance testing is performed by certified lead risk assessors. Funding for the lead supplement to the weatherization is provided by a HUD-funded lead hazard reduction grant.
Factors essential to implementation: Funding to underwrite the lead treatments, such as support from the state or regional weatherization program, the HUD lead hazard control grant program, or other sources, is key. State energy consortiums funded by electric utilities could be another source of funding to help justify window treatments. The weatherization program must also have policies to ensure appropriate lead training and work practices and to ensure that clearance testing occurs.
Limitations/challenges/problems encountered: The most difficult aspect of the program is managing the logistics of the various components. A second challenge is the inherent difficulty of proving that such prevention-based action works; the program did not have funds to collect data to document this. Finally, the majority of homes treated by a typical weatherization program are not occupied by a family with a young child.
Magnitude of Impact/Potential Impact: The program targets neighborhoods and homes with key risk factors for lead hazards (older homes built before 1950; low-income families eligible for weatherization services). To date the program has completed lead treatments in 61 units, all of which housed at least one child under age 6. Slightly more than half of the units were owner-occupied. Health department staff believe that this initiative has helped prevent elevated blood lead levels in units where work occurred; none of the homes were occupied by children with elevated blood lead levels when the work occurred. Contractors performing weatherization now understand how to complete lead dust removal during final cleanup and recognize the importance of controlling lead-contaminated dust. These changes in cleaning behavior extend to jobs in older homes even when there is no lead specification.
# BENEFITS
Immediate/Direct Results: Property owners will be provided additional opportunities to be trained in lead-safe work practices.
Public Health Benefits: As more property owners are trained in lead-safe work practices, creation or exacerbation of lead-based paint hazards will decrease, lowering the risk of childhood exposure to lead hazards.
Other Indirect/Collateral Benefits: If required for property owners cited for code violations, this strategy can provide a useful alternative enforcement mechanism. Instead of levying fines that may never be paid, an enforcement agency can put primary prevention tools in the hands of those who need them most.
# SCOPE OF POTENTIAL IMPACT
# CRITICAL ELEMENTS
Staff requirements: This strategy is not staff-intensive.
An experienced trainer will be needed one or two days a month.
Other resource requirements: Lead-safe work practices training materials will be necessary.
Institutional capacity required: Where applicable, instructors certified or accredited in lead-safe work practices training will be required.
Cost considerations: Running an ongoing training program will incur modest costs. Costs to property owners will be minimal or nonexistent.
# POTENTIAL OBSTACLES/BARRIERS
Tying lead-safe work practices training to code enforcement may prove to be a challenge in some jurisdictions, as enforcement agencies may prefer to rely exclusively on fines. Property owners may also lack interest in ongoing training programs, or lack the time to attend such training.
# ADDITIONAL RESOURCES
Potential for replication: The potential for replication is high.
# Contact for Specific Information
# PROVIDE TECHNICAL ASSISTANCE TO PROPERTY OWNERS
# DESCRIPTION OF THE STRATEGY
Cities, states, and community-based organizations can provide technical assistance to property owners who are seeking to meet lead-safe housing and lead hazard control requirements or to voluntarily implement lead safety measures. As a complement to relevant training in lead-safe work practices and regulatory requirements, individualized technical assistance can accelerate the pace at which property owners and contractors retool the way they work. Depending on the scope of the strategy, technical assistance can be made accessible through a hotline, one-on-one visits at a "one-stop" center, or a user-friendly interactive website. In the process, the agencies involved, which may normally be seen as antagonistic toward property owners, will learn about the issues and needs facing the owners of high-risk housing and be able to address gaps in owners' understanding of lead safety. The property owners, in turn, will be better positioned to successfully assume their role as partners in the effort to prevent childhood lead poisoning.
# BENEFITS
# SCOPE OF POTENTIAL IMPACT
Statewide
City- or County-Wide
Regional
# PRIMARY ACTORS
Health Department
Housing Agency
# KEY PARTNERS
Property Owners
Homeowners
Community-based Organizations
# CRITICAL ELEMENTS
Staff requirements: Staff requirements will vary substantially with the scope of the strategy. Some small projects may require only 1-2 FTE, while larger projects may need a staff of 6 or more FTE to properly engage property owners on an individualized basis.
Other resource requirements: N/A
Institutional capacity required: This strategy requires staff knowledgeable in lead safety and in state and federal lead standards. Staff members with the ability to relate to and work cooperatively with property owners are also essential.
Cost considerations: Costs to administer this strategy will depend on its scope but could be significant due to the individualized nature of the work. The largest costs to the agency or organization implementing the strategy will come from staff salaries.
Timing issues: There are no distinct timing issues with this strategy.
# POTENTIAL OBSTACLES/BARRIERS
The main potential obstacle to implementing this strategy will be a lack of funding for project staff. States and localities with tight budgets will need to look for creative funding mechanisms to implement their specific technical assistance projects.
# ADDITIONAL RESOURCES
N/A
# ILLUSTRATION OF STRATEGY IN PRACTICE
Rhode Island's Lead Hazard Mitigation law requires the state to provide technical assistance to property owners who seek to comply with the law's requirement that lead hazards be repaired using lead-safe work practices and clearance. The Housing Resources Commission is charged with crafting and implementing a technical assistance plan. When the plan is fully implemented, the state will provide property owners with access to technical service centers. These centers will be "one-stop shops" where owners will be able to gain hands-on experience with lead hazard mitigation, work one-on-one with technical assistance staff, and find resources on how to obtain financial assistance for making their properties lead-safe. It is anticipated that the state will have these centers in place by July 1, 2005.
Jurisdiction or Target Area: Rhode Island
Primary Actor: Housing Resources Commission
Secondary Actor(s): N/A
Staffing utilized: To fully implement the program, the Housing Resources Commission estimates it will need 6 FTE.
Other resources utilized: N/A
Factors essential to implementation: Essential factors include the ability to secure funding for technical assistance staff, as well as property owners' use of the technical assistance resources.
Limitations/challenges/problems encountered: Without funding, the Housing Resources Commission will not be able to retain sufficient technical assistance staff.
Magnitude of Impact/Potential Impact: The technical assistance plan has not yet been implemented. However, the state expects many property owners to use its technical assistance resources as they work to meet the requirements of Rhode Island's lead hazard mitigation law.
Potential for replication: Given the availability of professionals able to provide technical assistance regarding lead-safe work practices and lead hazard control, this strategy is fairly easy to replicate.
# Contact for Specific Information
Simon Kue
Rhode Island Housing Resources Commission
401-450-1349
www.HRC.ri.gov
# TRAIN AND EMPLOY LOW-INCOME COMMUNITY RESIDENTS IN HAZARD CONTROL
# DESCRIPTION OF THE STRATEGY
Environmental health services can be provided to communities through programs that train and employ low-income community residents, including parents of lead-poisoned children and parents of children at high risk. The services provided can consist of low-cost hazard control, cleaning, peer education, and the provision of products that reduce environmental health hazards. Health departments, housing agencies, and community-based organizations can work independently or collaboratively to provide these trainings.
# BENEFITS
Immediate/Direct Results: Where demand for services exists, low-income residents will have the opportunity to obtain steady, meaningful employment and will be trained to recognize and control lead hazards in their own homes. These residents will likely relate well to other low-income families in their area, supporting hazard control and peer education efforts. Hazard control and other services will also be available to low-income property owners and tenants who may not have had access to such services in the past.
Public Health Benefits: Providing low-cost services like hazard control can increase the number of hazards reduced in a particular community, leading to greater primary prevention of childhood lead poisoning.
Other Indirect/Collateral Benefits: This strategy can help reduce unemployment in a local area and can be one useful tool for combating poverty. Low-income residents will also gain work skills that they may not have been able to obtain elsewhere.
# SCOPE OF POTENTIAL IMPACT
City- or County-Wide
Neighborhood/Community
# PRIMARY ACTORS
Health Department
Housing Agency
Community-based Organizations
# KEY PARTNERS
Property Owners
Tenants
Parents
Community Members
# CRITICAL ELEMENTS
Staff requirements: Staffing will vary with the scope of the strategy. If the project involves employing low-income workers, new staff will be required.
Other resource requirements: Trainers, educational materials, and environmental health products may be required, depending on the scope of the strategy.
Institutional capacity required: Some projects undertaken by local government agencies may need prior city council/county board approval.
Cost considerations: Overall costs will depend on the scope of the strategy. Projects that involve employing community residents will have higher costs than those that only provide training or environmental health products.
Timing issues: This strategy will require some planning and good organization; projects involving the employment of community residents will also require some lead time for the hiring process.
Feasibility of Implementation: High. This strategy should be feasible to implement.
# POTENTIAL OBSTACLES/BARRIERS
In some areas, projects that involve employing community residents may find a lack of potential employees. Other challenges for this type of project could include employee retention problems.
# ADDITIONAL RESOURCES
N/A
# ILLUSTRATION OF STRATEGY IN PRACTICE
Healthy Homes Services is a program of The Way Home (TWH), a non-profit tenant rights and social services agency in Manchester, New Hampshire. The program trains and employs low-income community residents, including parents of lead-poisoned children and parents of children at high risk, to provide environmental health services to their communities. These services, which include low-cost hazard control, peer education, and the provision of products that reduce environmental health hazards, have proven to be one way to advance primary prevention efforts.
Other resources utilized: The Healthy Homes Services project was initially launched under a small grant from the U.S. Environmental Protection Agency. Currently, the project operates as a sub-grantee to the City of Manchester under a substantial HUD Lead Hazard Control grant. Staff knowledgeable in lead safety, peer education, and training low-income residents have also been utilized by the project.
Jurisdiction or Target Area: Manchester, New Hampshire
Factors essential to implementation: A dedicated source of funding and interest from landlords and residents have been essential to implementing this strategy.
Limitations/challenges/problems encountered: Reliance on grants and low-interest loans has been a challenge for the project. The project is now seeking to become sustainable through small contracts and by offering more services at low cost, which will allow it to operate without heavy reliance on grants.
Magnitude of Impact/Potential Impact: As of the beginning of 2004, the project had trained a total of 31 low-income residents, seven in peer education and 24 in low-cost lead hazard control.
Potential for replication: Moderate. Initially, any project similar to Healthy Homes Services will rely on grant funding. With a commitment to use initial grant money to seek out other sustainable funding, this strategy has a high potential for replication.
# Contact for Specific Information
Mary Sliney
Executive Director, The Way Home
603-627-5403
# References for additional information
N/A
# COLLABORATIONS, PARTNERSHIPS, AND INCENTIVES
COLLABORATE FOR LEAD SAFETY IN CHILD CARE HOMES
CREATE INCENTIVES TO INTEGRATE LEAD SAFETY INTO HOUSING REHABILITATION
INTRODUCE INCENTIVES FOR LEAD SAFETY INTO CHILD CARE PROGRAMS
LEND OUT LEAD SAFETY EQUIPMENT
SHARE RISK ASSESSMENT AND LEAD SAMPLING SERVICES
TEACH CODE INSPECTORS ABOUT LEAD SAFETY THROUGH JOINT VISITS
# COLLABORATE FOR LEAD SAFETY IN CHILD CARE HOMES
# DESCRIPTION OF THE STRATEGY
Protecting children in child care settings is an essential complement to preventing exposure in the home environment. Collaborations between local child care providers, associations that represent their interests, and local housing or health agencies can ensure that child care homes (i.e., child care based in the private home of the caregiver) are renovated to provide a lead-safe and healthy environment in which children thrive.
# BENEFITS
Immediate/Direct Results: Participating child care homes will have facilities that are lead-safe and healthier for children.
Public Health Benefits: If a child care home has lead hazards, many children may become lead-poisoned. Because the children served may not otherwise be considered at high risk for lead poisoning, they may not be identified under targeted screening programs. Reducing lead hazards in these facilities will benefit all of the children who use the child care homes' services. Awareness of healthy homes approaches in the properties where their children spend time may prompt parents to consider similar measures in their own homes. If child care homes resemble the overall housing stock, they are almost twice as likely to have lead hazards as licensed, non-home-based child care centers.
Other Indirect/Collateral Benefits: The owners of child care homes may be reluctant to address lead hazards given competing priorities. If competitors are marketing their lead-safe and healthy status, they may be more likely to address the issue. They may also be more willing to accept a lead-safe mandate if industry leaders have a model for success to allay fears.
# SCOPE OF POTENTIAL IMPACT
There are 14,200 licensed, non-home-based child care centers nationally. There may be as many as ten times more child care homes that are not licensed.
City- or County-Wide
Neighborhood/Community
Specific (Targeted) Population-The approach may be expanded to non-home-based child care centers, but HUD funding may be more limited.
# PRIMARY ACTORS
Housing Agency
Health Department
# KEY PARTNERS
Child Care Providers
Community-based Organizations
Property Owners
Parents
# CRITICAL ELEMENTS
Staff requirements: One FTE will be needed to establish and manage the program and conduct outreach to the proprietors of child care homes. The amount of time depends on the number of providers to be solicited to participate in the program. A small community might not need a full-time person, while a large area with many unlicensed child care homes may need more staff.
Other resource requirements: A strong association representing the child care providers, especially the home-based providers, is extremely helpful. The association can market the opportunity and provide the basic education needed to have willing and able participants.
Institutional capacity required: Qualified personnel are needed to check for hazards and make repairs. In many states, a lead sampling technician can check for deteriorated paint and lead dust hazards. If the home is pre-1950 or lead hazards are suspected, a risk assessment should be performed to check for hazards and determine what is needed to make the property lead-safe. It would be beneficial if the sampling technician or risk assessor were familiar with asthma triggers and methods to reduce those sources, since asthma is a significant concern for most child care providers. Contractors or workers trained to perform lead hazard control will be needed to perform the work. After work is completed, the property must pass a clearance test (performed by a sampling technician in the 23 states where they are clearly authorized, or by a risk assessor).
Cost considerations: Grant or loan money will probably be needed to fund the lead hazard control. The agency that administers the locality's funds from HUD's Community Development Block Grant program is a possible source of funding. Funds awarded by HUD's HOME, Healthy Homes, LEAP, and Lead Hazard Control Programs can also be used for child care homes if the household is income-eligible.
Timing issues: The program can begin quickly once funds are secured. While a collaborative team of stakeholders could be formed after funds are available, the team may have to be established earlier in order to demonstrate capacity to implement the program and secure funding. Child care providers are busiest, and therefore unavailable, during August and September, when children return to school and enrollment is established. Work may be scheduled during provider vacations to reduce relocation costs.
Feasibility of Implementation: Moderate. The program is feasible if grant or loan funding is available. Expanding beyond lead hazards to address asthma triggers such as mold, cockroaches, and dust mites may increase acceptance by responding to a growing concern of families and caregivers. The program therefore needs to be able to deal with the most relevant mix of addressable environmental hazards. Relocation of the provider's family and the child care business to a temporary location may need to be addressed.
# POTENTIAL OBSTACLES/BARRIERS
If communities do not have a broader program educating potential clients to consider lead safety and healthy homes issues in the selection of a child care home, the broader impact from competition will be lost.
Limitations/challenges/problems encountered: Funding must be available for the work.
# ADDITIONAL RESOURCES
Magnitude of Impact/Potential Impact: 125 home-based child care homes were made lead-safe, and 70 had reduced asthma triggers.
Potential for replication: Very high.
# Contact for Specific Information
Ed Petsche
HEEL Coordinator
612-349-0563
[email protected]
# References for additional information
1. Greater Minneapolis Day-Care Association: www.gmdca.org
# ILLUSTRATION #2 OF STRATEGY IN PRACTICE
The program is designed to improve the quality of 25 home-based child care providers serving more than 150 children in Rochester and Syracuse by making their homes lead-safe and addressing other healthy homes and general safety issues. To avoid disruption to services and protect the children who attend the homes, the program provides temporary relocation to an alternative location while renovations are underway. The program educates providers and the parents using their services about lead poisoning and daily maintenance techniques to reduce lead hazards and other environmental hazards. It is designed to serve as a model for other communities.
Potential for replication: Very high. The project is producing an implementation guide and document templates so it can be replicated in other communities. Funding must be available for the lead hazard control work.
Jurisdiction or Target Area: Rochester and Syracuse, New York
# Contacts for Specific Information
# CREATE INCENTIVES TO INTEGRATE LEAD SAFETY INTO HOUSING REHABILITATION
# SCOPE OF POTENTIAL IMPACT
City- or County-Wide-Can apply to the jurisdiction of a county or city health department
Neighborhood/Community-Can target specific neighborhoods
# PRIMARY ACTORS
Health Department
Housing Agency
# KEY PARTNERS
Community-based Organizations
Contractors
# CRITICAL ELEMENTS
Staff requirements: The staffing needs are minimal.
Other resource requirements: If lead inspections are provided, XRF analyzers will be needed.
Institutional capacity required: State regulatory agencies must take a flexible, practical approach to incorporating lead hazard control into rehab projects. Requiring abatement certification exceeds HUD requirements and may unnecessarily increase the cost of smaller rehab projects. Communities need a combination of certified workers and workers trained in lead-safe work practices to perform large- and small-scale rehab projects, respectively.
Cost considerations: Rehab costs will increase incrementally as a result of contractors using safe work practices and performing minor additional activities such as extra set-up and clean-up steps. Ideally, another funding source should be made available to help pay the additional costs of lead hazard control that are above and beyond the scope of the rehabilitation work. Alternatively, health departments might charge fees to the housing agency for performing lead inspections or risk assessments, as well as for training contractors and workers, to support the continuation of skilled staff.
Timing issues: The health department and the housing agency need to agree on the timing of the risk assessment, the housing inspection, and the development of specifications. The many different approaches basically fall into two camps. The first is to develop the rehab scope of work and then give it to the risk assessor; the risk assessment will then suggest additional work and the specific work practices that should be followed during the rehab. The second is to conduct the risk assessment first so that all lead hazard control work can be incorporated into the rehab scope of work from the outset. Either approach can be effective, but there must be agreement on protocols before launching a collaborative venture.
Feasibility of Implementation: High. This primary prevention strategy can be implemented in communities where there is an interest and a will to make it work.
Effective strategic planning for preventing childhood lead poisoning can foster such resource sharing between housing rehab programs and CLPPPs.
# POTENTIAL OBSTACLES/BARRIERS
Individual state regulations and requirements can be a barrier. State policy must be flexible while maintaining consistency on basic principles. In addition, a state or local requirement for lead hazard insurance can present an obstacle to getting contractors to perform rehab work.
# ADDITIONAL RESOURCES
1. www.hud.gov/offices/lead/leadsaferule/index.cfm
# ILLUSTRATION OF STRATEGY IN PRACTICE
The Health Department performs a range of services for community development corporations conducting housing rehab using federal funds. It performs risk assessments, specifies the scope of work for lead hazard control (which is incorporated into the rehab specifications), conducts post-work clearance examinations, trains workers in lead-safe work practices (LSWP), and provides up to $2,000 per unit in matching funds for rehab projects. This broad spectrum of services constitutes a powerful incentive for the public and private rehabilitation industry to address lead hazards using lead-safe work practices.
Timing issues: It will take 6-12 months to establish the program, build support for it, and develop materials. Full implementation usually will take an additional year. Child care facilities are busiest, and therefore unavailable, during August and September, when children are returning to school or enrolling.
# Jurisdiction or Target
Feasibility of Implementation: Moderate. The program is feasible, but to reach the child care facilities most in need of support and oversight, it should ideally be part of a larger effort to improve health conditions within all child care facilities. In addition, the program is difficult to implement if narrowly focused on lead hazards. Most child care facilities view asthma triggers such as mold, cockroaches, and dust mites as a bigger concern. Therefore, services that address a broad mix of environmental hazards will be more readily accepted.
# POTENTIAL OBSTACLES/BARRIERS
Lead hazards are most effectively addressed in the context of a broader evaluation of environmental hazards, especially mold and moisture; pests and pesticides; and carbon monoxide. Communities need to be prepared to identify potential resources to address lead hazards and provide technical assistance to help facilities obtain and use those resources.
# ADDITIONAL RESOURCES
# ACCESS ELECTRIC UTILITY PUBLIC BENEFIT FUNDS
# DESCRIPTION OF THE STRATEGY
Over the past decade, the deregulation of the electric utility industry has prompted the creation of public benefit funds in many states. The funds, which total more than $1.5 billion, support a wide range of activities related to energy conservation, including home improvements that produce energy savings, such as increased insulation, air infiltration reduction, and sometimes window replacement. Policies are set by the state energy-related agency through negotiations with the key funding source(s). Because such funds often target lower-income households, who suffer disproportionately from lead hazards, the synergies are intriguing. The challenge is that these funds often focus strictly on energy, and efforts to broaden the eligible activities could be perceived as diluting the programs' purpose. As a result, few utility benefit programs currently factor lead hazard reduction considerations into program design.
# BENEFITS
Immediate/Direct Results: Reducing lead hazards in the course of projects to make energy conservation improvements will have a direct impact on reducing lead exposure in high-risk properties. Such action can prevent lead poisoning and control lead hazards in the home.
Public Health Benefits: Repairing lead hazards through window replacement or repair, together with correction of moisture problems, can reduce lead exposure and help to alleviate or prevent mold, which can exacerbate respiratory problems.
Other Indirect/Collateral Benefits: Controlling lead hazards in this way can improve overall building quality, durability, and energy efficiency (e.g., lower utility bills), making the building more durable and comfortable.
# SCOPE OF POTENTIAL IMPACT
Statewide-Requires a statewide commitment to make public benefit funds available for health and safety issues.
# PRIMARY ACTORS
State Energy Agency
Housing Agency
# KEY PARTNERS
Utilities
Community-based Organizations
# CRITICAL ELEMENTS
Staff requirements: Short-term staff resources are required to develop the expansion of eligible activities to include lead and other health concerns. Staffing to clarify the actions that are eligible for public utility funding, along with appropriate protocols and procedures, will need to be integrated into existing structures for the administration of public benefit funds.
Other resource requirements: Weatherization program crews may need additional training to check for and repair lead hazards.
Institutional capacity required: It is essential that the entity determining the scope and eligible activities for public benefit funds take steps to clarify that lead hazard control activities are eligible for funding. This may require a change in state statutory authority or a policy change by the state commission or entity that oversees the program and/or the utility that provides the funds through the fee attached to utility bills.
Cost considerations: Once a decision is made to expand the use of the public benefit funds, the only cost of implementing the expansion is minimal staff to oversee the lead hazard repair services.
Timing issues: It may take over a year to gain policy agreement to expand the scope of activities funded by the public benefit funds to support lead hazard assessment and repair. If such actions are already eligible, then an additional six months to one year is likely needed to get the program up and running.
Feasibility of Implementation: Moderate. Making public benefit funds available for lead hazard control work may require substantial advocacy and planning efforts.
# POTENTIAL OBSTACLES/BARRIERS
A key potential obstacle may be opposition from energy conservation advocates concerned that expanding the scope of activities eligible for public benefit funds will reduce the number of homes receiving energy improvements.
# ADDITIONAL RESOURCES
N/A
# ILLUSTRATION OF THE STRATEGY IN PRACTICE
The New York State Energy Research and Development Authority (NYSERDA) administers programs funded by the System Benefits Charge (SBC) on the electricity transmitted and distributed by the state's investor-owned utilities. Originally, the fees for New York Energy $mart were intended to cover energy-related upgrades.
Beginning in 2001, the program was expanded to address health concerns. This expansion required a policy decision at the state level to structure the supplemental health and safety component. The program covers up to 50% of the costs associated with energy-efficiency and indoor air quality improvements, to a maximum of $5,000 for a single-family unit or $10,000 for a 2-4 family building. Low-interest loans are available to qualified applicants to cover the balance of the cost of the work. Total funds equal roughly $150 million per year for residential, commercial, and industrial properties combined; $40 million is invested in residential properties, covering both market-rate and lower-income housing.

# Jurisdiction or Target

Staffing utilized: Once agreement is reached on expansion of eligible activities to include health and safety concerns, the staff resources required to develop protocols and procedures to administer the program are nominal: overall, NYSERDA has one to two FTEs working on the New York Energy $mart program.

Other resources utilized: NYSERDA contracted with a separate organization, Conservation Services Group (CSG), to administer the program, audit energy and health results, process financial incentives, and manage loans. The Building Performance Institute (BPI) was engaged to develop work practice standards. A third entity with significant experience in adult education trains contractors (who are required to obtain training and certification to participate in the program). Contractors pay roughly $1,000 for each level of certification. NYSERDA reimburses the contractor for 75% of the training and certification fees.

Factors essential to implementation: A willing utility company and statewide administrator of public benefit funds, and motivated contractors who will pay for training/certification and absorb the cost of lost work time to acquire credentials.

Limitations/challenges/problems encountered: Quality assurance is difficult. It is challenging to verify that the work occurred and to determine whether it was performed consistent with program standards. CSG performs limited field inspections (roughly 15% overall) and BPI conducts sporadic field monitoring.

Magnitude of Impact/Potential Impact: In the initial three years of operation, approximately 4,000 homes received energy upgrades. An unknown number also received health upgrades to address combustion gases and possibly lead hazards. The average expenditure per unit is $7,500.

Potential for replication: Uncertain. NYSERDA is willing to share its experiences with other states (e.g., effective advertising strategies, standards, training curriculum, and program design materials). Key ingredients include creating consumer demand for energy, comfort, and health upgrades (where consumers are willing to pay for upgrades), varying marketing to target specific market segments, and developing standards to ensure consistent, quality work. A realistic start-up period, once funding is agreed upon, is about two years to get the system working well.
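To make the cost-share arithmetic concrete, here is a minimal Python sketch of the 50% subsidy and per-building caps described above; the function name and return structure are illustrative assumptions, not NYSERDA program code.

```python
# Illustrative sketch (not NYSERDA's actual rules engine): applies the
# New York Energy $mart cost share under the caps described above.

def energy_smart_cost_share(project_cost: float, units: int) -> dict:
    """Estimate the program subsidy: 50% of cost, capped at $5,000
    for a single-family unit or $10,000 for a 2-4 family building."""
    if units == 1:
        cap = 5_000
    elif 2 <= units <= 4:
        cap = 10_000
    else:
        raise ValueError("Program covers 1-4 family buildings only")
    subsidy = min(0.5 * project_cost, cap)
    # The balance may be financed through the program's low-interest loans.
    return {"subsidy": subsidy, "owner_balance": project_cost - subsidy}

# Example: a $14,000 window-replacement job on a 2-family building
# yields a $7,000 (50%) subsidy, under the $10,000 cap.
print(energy_smart_cost_share(14_000, units=2))
```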
# Contacts for Specific Information

# ESTABLISH A HOUSING TRUST FUND

# BENEFITS
Immediate/Direct Results: Funds are available to increase the supply of lead-safe housing through the rehabilitation of substandard older properties (resulting in the elimination of lead hazards) and the construction of new housing.

Public Health Benefits: Creating decent affordable housing will prevent exposure to lead hazards, especially if priority is given to projects serving very low-income families that are at high risk for childhood lead poisoning.

Other Indirect/Collateral Benefits: Access to decent housing that is affordable to very low-income households and special needs populations enhances the learning and earning potential of the family and improves its ability to pay for health care, child care, and other necessities.

# SCOPE OF POTENTIAL IMPACT
Statewide
Regional (e.g., multi-county)
City- or County-Wide

# PRIMARY ACTOR
Housing Agency

# KEY PARTNERS
Property Taxation Agency
Community-based Organizations

# CRITICAL ELEMENTS
Staff requirements: This will vary depending on the size of the fund and the nature of the activities to be carried out.

Other resource requirements: The heart of a housing trust fund is a reliable, dedicated source of funds. A certainty of funds helps maintain an ongoing program that can sustain a production pipeline and maximize other opportunities.

Institutional capacity required: A housing trust fund needs to be administered by an agency or organization that is familiar with soliciting, reviewing, awarding, and monitoring grants and/or loans for affordable housing programs. The administering entity also needs to work closely with community-based organizations that understand and can articulate the needs and capacity of the community.

Cost considerations: The size of housing trust funds ranges from less than $100,000 per year to many millions of dollars per year, depending on the size of the jurisdiction and other factors.

# POTENTIAL OBSTACLES/BARRIERS
The greatest difficulty in getting started is mobilizing the support to convince a legislative body that such a fund is needed and that an ongoing allocation of funds is required to make the fund operate effectively. This requires establishing clear-cut goals and objectives, creating a coalition of organizations and individuals to support creation of the fund, and identifying alternative funding sources.

Feasibility of Implementation: Moderate. Political will, bolstered by the support of some residential property owners, is key to the implementation of a special real estate assessment mechanism, since a vote by a legislative body and/or the electorate will be required to implement a new tax or other property-based assessment. It should be possible to reach agreement to pursue the strategy in numerous jurisdictions, especially where there is concern about the problem of lead poisoning, but action is more likely on approaches that require no new funds, staff, or equipment.

# POTENTIAL OBSTACLES/BARRIERS
Lack of political will is the most significant potential barrier to establishing a special real estate assessment mechanism. In jurisdictions where a referendum is required for revenue-generating measures, the absence of widespread popular support may be a severe obstacle.

# ADDITIONAL RESOURCES

# DEPLOY ENFORCEMENT ORDERS AND GRANT INCENTIVES IN TANDEM
DESCRIPTION OF THE STRATEGY
Public financing for the abatement of lead hazards, combined with code enforcement, provides both the carrot and the stick needed to produce lead-safe housing in high-risk neighborhoods. Landlords are most likely to be "persuaded" to address lead hazards if financial assistance is available to offset at least part of the cost of repairing or replacing windows and addressing other lead hazards.
The threat of being taken to court can be a powerful incentive to landlords. Combining code enforcement with financial incentives is especially useful for motivating action by landlords who own multiple high-risk properties and have limited resources.

# BENEFITS
Immediate/Direct Results: Rental units are rendered safe from lead hazards. Depending on the scope of the program, results can range from a few units to an entire neighborhood or neighborhoods.

Public Health Benefits: Reduction in childhood lead poisoning. In addition, there is heightened awareness of childhood lead poisoning and how to prevent it.

Other Indirect/Collateral Benefits: An improved standard of habitability: for example, windows that were dilapidated or painted shut become operable.

# SCOPE OF POTENTIAL IMPACT

# CRITICAL ELEMENTS
Staff requirements: Staff requirements will vary depending on the scope of the program and the extent to which activities are performed in-house or contracted out. Post-work clearance must be performed by a trained staff member or a certified contractor.

Other resource requirements: A good database is critical to efficient operation. Access to standard professional specifications for the scope of work, as well as uniform and reliable cost estimates for typical work, will help build support from owners and landlords to participate in the program. In addition, for abatement programs, there must be a sufficient supply of trained and qualified contractors.

Institutional capacity required: There must be clear legal authority to require owners to correct lead hazards, or at least to repair deteriorated paint. In addition, there must be an institutional structure that can enforce code violations. This may be a municipal court or an administrative hearing process with enforcement authority.

Cost considerations: There must be a funding source to support lead hazard control as well as the code enforcement effort.

Timing issues: None; the program can be initiated at any time.

Feasibility of Implementation: Moderate. The strategy can be implemented wherever the basic requirements are in place: political support, a staff commitment to working with landlords, legal authority, an operating code enforcement program, a skilled workforce, and financial aid to owners.

# POTENTIAL OBSTACLES/BARRIERS
There must be political support to undertake the program and political will to use code enforcement authority against recalcitrant landlords who do not agree to repair their properties. Landlords can be expected to appeal to political leadership to forgo or curtail the program. Most difficult of all is getting landlords to buy in to the program. First, it is often difficult to find out who the actual owner of a specific property is. Second, it is even more difficult to get owners to come to a public meeting or agree to a private meeting. Third, if the owner does attend, he or she needs to be convinced to participate in the program and to pay for at least a portion of the repairs. The absence of a strategy for dealing with landlords, including absentee landlords, can constitute a considerable barrier to success.

Limitations/problems encountered: Finding the owners/landlords was often very difficult. Getting them to "apply" for a risk assessment was even harder.
City staff did mailings to owners and held group and individual meetings. The combination of hard selling in tandem with the threat of code enforcement was needed to win over reluctant landlords, especially since the amount available for window repairs was limited to approximately $2,000 per unit.

# ADDITIONAL RESOURCES

Magnitude of Impact/Potential Impact: Lead hazard control was completed in 800 units in one year. The owners of approximately 50 units were taken through the court system.

# ESTABLISH A REVOLVING FUND TO STRETCH DOLLARS
DESCRIPTION OF THE STRATEGY
Instead of making outright grants for lead hazard control projects, jurisdictions can make loans through a revolving loan fund that recycles the original pool of funds. Loans can be amortized, or repayment can be deferred until the property is sold or refinanced. Financing terms can be indexed to income so that the lowest-income homeowners can access funds at very low or even no interest. Permanent revolving funds can be capitalized with an appropriation from the jurisdiction's general fund, a federal grant, or the receipts from a designated tax or fee, so that such monies are continuously reserved for a specified purpose.

# BENEFITS

# CRITICAL ELEMENTS
Staff requirements: One full-time person in the assigned agency is needed to set up the program. This includes developing program procedures and guidelines, preparing marketing materials, and establishing agreements with program partners such as health departments, lenders, and community-based organizations. A key policy issue is whether the agency administering the loan fund will manage the program itself or rely on other partners (banks, community-based nonprofit organizations, etc.) to perform outreach, prepare and process applications, and service the loans. If all work is done in-house, the staffing requirements will vary substantially depending on the volume of loans and the way the program is designed.

Other resource requirements: Program brochures, manuals, and operating procedures for lending programs.

Institutional capacity required: There needs to be institutional capacity to make and/or service real estate loans, including knowledge of the industry and experience with lending programs. Ideally, the program is managed by a housing agency that has a track record with other financing mechanisms. Legislative authority and appropriations must create and fund the program; the authority to use repayments for additional loans is what distinguishes revolving funds from one-time appropriations. Capacity to assist owners with applications is important and can be provided by community-based nonprofit organizations, city/county/state staff, or lenders.

Cost considerations: A revolving fund must initially be capitalized with designated funds. Annual or regular additions to the fund may be critical to create stability, meet growing demand, and assure continuation of the fund. Fees to process and service the loans and the cost of outreach need to be factored into the plan.

Timing issues: A revolving loan fund can be established at any time.

Feasibility of Implementation: Excellent, if the institutional capacity is in place.

# POTENTIAL OBSTACLES/BARRIERS
Making loans to low-income owners and/or investors in property rented to low-income families is often not a profitable venture.
Financial institutions are experienced and efficient in processing loans; however, a bank will probably need an incentive, such as a fee, unless it is willing to do the work to maintain or improve its Community Reinvestment Act (CRA) rating. Capacity must also be established to generate referrals or applications, assist owners with the application process, qualify applicants, and prepare complete applications for whoever processes them. A state or local agency may be well served by relying on community-based organizations (CBOs) to reach the owners of the highest-risk properties and encourage them to apply.

# ADDITIONAL RESOURCES
N/A

# ILLUSTRATION OF STRATEGY IN PRACTICE
MassHousing provides financing to owners of one- to four-unit properties to abate lead-based paint. Low-income owner-occupants are eligible for loans at 0 percent interest for which payment is not due until the property's sale or refinancing. Nonprofit organizations and "investor-owners" (landlords who do not live in a unit in the property) that rent to income-eligible households are eligible for loans that must be repaid in the near term. The maximum loan amounts are: single-family, $20,000; 2-family, $25,000; 3-family, $30,000; and 4-family, $35,000 (see the sketch below).

Limitations/problems encountered: Massachusetts found that bond funds did not work well; a state appropriation was much better. The borrower needs to be current on the mortgage, taxes, and utilities.

Magnitude of Impact/Potential Impact: Approximately 200-300 loans are made each year. Nearly $50 million has been loaned since 1997.

Potential for replication: Very high. There must be a nonprofit network or other means to assist owners in applying and, if the MassHousing model is followed, a bank willing to participate.
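The tier schedule and repayment terms can be summarized in a short sketch. The helper below is hypothetical; it simply encodes the published maximums and the deferred-repayment distinction and assumes nothing about MassHousing's actual underwriting.

```python
# A minimal sketch, not MassHousing's loan software: the maximum loan
# follows the published tier schedule ($20,000 for a single-family
# property plus $5,000 per additional unit, up to 4 units).

MAX_LOAN_BY_UNITS = {1: 20_000, 2: 25_000, 3: 30_000, 4: 35_000}

def max_abatement_loan(units: int, owner_occupant: bool) -> dict:
    """Return the loan cap and illustrative terms for a property."""
    cap = MAX_LOAN_BY_UNITS[units]  # KeyError outside the 1-4 unit program
    if owner_occupant:
        # Low-income owner-occupants: 0% interest, repayment deferred
        # until the property is sold or refinanced.
        terms = "0% interest, deferred until sale or refinancing"
    else:
        # Nonprofits and investor-owners repay in the near term.
        terms = "near-term repayment"
    return {"max_loan": cap, "terms": terms}

print(max_abatement_loan(3, owner_occupant=True))
# {'max_loan': 30000, 'terms': '0% interest, deferred until sale or refinancing'}
```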
# IMPOSE FEES ON REAL ESTATE TRANSACTIONS AND RELATED PROFESSIONAL LICENSES
DESCRIPTION OF THE STRATEGY
Imposing fees on real estate transactions and on the licenses of professionals who procure housing, risk management, or lead services can help subsidize the cost of managing public sector systems for lead poisoning prevention. Fees levied on real estate transactions (e.g., for the recording of deeds or tax stamps) and on licenses for professions and disciplines such as real estate agents, insurance agents, lead inspectors, risk assessors, lead abatement contractors, and others can be a source of dedicated income for lead poisoning prevention programs. These strategies are more common at the state level, but enactment and implementation are possible at the county and municipal levels as well.

# BENEFITS
Immediate/Direct Results: New or increased recurring funding sources are created as soon as the fee structure goes into effect. These funds can be used to support primary prevention programs that may have little, if any, funding from other sources.

Public Health Benefits: Funds from these fees or surcharges can be used to fund existing childhood lead poisoning prevention programs as well as initiatives that complement efforts already in existence.

Other Indirect/Collateral Benefits: By targeting home purchases and individuals directly involved with buying, selling, and insuring housing, such fees can raise awareness among homebuyers, real estate agents, mortgage lenders, insurers, and others about lead poisoning prevention issues. Such a strategy may also reinforce the responsibility of buyers, agents, and others to ensure that the home involved in a specific transaction is lead-safe.

# SCOPE OF POTENTIAL IMPACT

# CRITICAL ELEMENTS
Staff requirements: Minor to moderate administrative activity would be required at the start-up of a strategy that imposes a fee on real estate transactions or a surcharge on certain professional licenses. It is unlikely that any new staff would be required to implement such fees or surcharges.

Other resource requirements: In the case of professional license surcharges, the agency or agencies that oversee licensing may need to be involved in administering the strategy.

Institutional capacity required: New statutory or code sections, naming the specific real estate transactions or professional licenses involved, must be added, and the agencies involved may need to promulgate new regulations to implement the strategy.

Cost considerations: Small start-up costs to the government agencies responsible for collecting the new fees or surcharges can be expected, but no further costs should exist beyond this transition period.

Timing issues: None.

Feasibility of Implementation: Moderate. Political will, bolstered by the support of individuals, community and statewide organizations, and those who have been adversely affected by lead poisoning, will be key to implementing this strategy. Partnering with real estate or insurance professionals who are concerned about or aware of lead hazards in homes would also be useful.

# POTENTIAL OBSTACLES/BARRIERS
The lack of political will to impose a new fee or surcharge is the major obstacle that could block this strategy. Once in place, a license surcharge strategy may have to overcome overburdened licensing agencies or inefficient transfer of revenues. A minimal fee on real estate transactions is not likely to face significant implementation barriers.

# ADDITIONAL RESOURCES
N/A

# ILLUSTRATION OF STRATEGY IN PRACTICE
In Massachusetts, surcharges are imposed on the annual fees of a variety of professional licenses. These surcharges apply to individuals licensed as real estate brokers and property and casualty insurance agents, as well as to mortgage brokers, mortgage lenders, small loan agencies, and individuals licensed to perform lead inspections. The surcharge is generally $25 per license or renewal, though mortgage brokers, lenders, and small loan agencies pay a $100 surcharge.

Staffing utilized: 0.5 FTE.

Other resources utilized: N/A

Factors essential to implementation: The cooperation of the professionals affected by this strategy is key, as is the priority each licensing board places on collecting the surcharges. Two additional factors are essential to the continuing implementation of this strategy:
1. The continued existence of the statutory authority contained in the Acts of 1993; and
2. The timely distribution of the surcharges to the Trust Account for disbursement to DPH.

Limitations/challenges/problems encountered: There have been no major challenges in implementing the surcharge strategy. However, the Office of the State Auditor found in 2000 that some professions were not being billed for the surcharge in a timely manner; the responsible agency subsequently corrected the problem, and in 2002 the Legislature mandated that the surcharge be collected in direct connection with the license renewal process.
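Because the surcharge is a flat per-license amount, projecting annual revenue is simple multiplication. The sketch below illustrates the arithmetic; the license volumes are invented for illustration and are not Massachusetts data.

```python
# Hedged sketch of the surcharge arithmetic described above; the
# renewal counts below are hypothetical, not state licensing figures.

SURCHARGE = {
    "real_estate_broker": 25,
    "property_casualty_insurance_agent": 25,
    "lead_inspector": 25,
    "mortgage_broker": 100,
    "mortgage_lender": 100,
    "small_loan_agency": 100,
}

# Hypothetical annual license/renewal volumes by profession.
renewals = {
    "real_estate_broker": 30_000,
    "property_casualty_insurance_agent": 12_000,
    "lead_inspector": 500,
    "mortgage_broker": 1_500,
    "mortgage_lender": 800,
    "small_loan_agency": 200,
}

annual_revenue = sum(SURCHARGE[p] * n for p, n in renewals.items())
print(f"Estimated trust account deposits: ${annual_revenue:,}/year")
# -> Estimated trust account deposits: $1,312,500/year (illustrative)
```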
Magnitude of Impact/Potential Impact: The surcharge strategy raises about $1.25 million per year for DPH/CLPPP. This gives the department and the program much-needed funds to engage in a variety of childhood lead poisoning prevention activities.

Potential for replication: Moderate. In states with sufficient political will, fee-based funding strategies, such as surcharges on specific professional licenses, could be readily replicated.

# Contacts for Specific Information

# IMPOSE TAXES OR FEES ON POLLUTERS
DESCRIPTION OF THE STRATEGY
States and local jurisdictions with appropriate authority can impose taxes and/or fees on corporations judged responsible for contributing to an environmental health or pollution problem, including companies that have contributed to the existence of lead hazards in housing. The revenue from these taxes and fees can be used to fund primary prevention programs.

# BENEFITS
Immediate/Direct Results: New or increased funding sources are created as soon as the tax or fee structure goes into effect. These funds can be channeled into a variety of primary prevention efforts that may not have other sources of sustained funding.

Public Health Benefits: Funds from these taxes or fees can be used to support existing public health programs related to the prevention of childhood lead poisoning, or to expand primary prevention efforts to reach a greater number of people.

Other Indirect/Collateral Benefits: By targeting corporations responsible for producing the goods that caused indoor lead contamination, a tax or fee on polluters helps shift the burden away from taxpayers and the individuals affected by environmental health hazards.

# SCOPE OF POTENTIAL IMPACT

# CRITICAL ELEMENTS
Staff requirements: Minor to moderate administrative activity would be required at the start-up of a new tax or fee on polluters. It is unlikely that any new staffing would be required.

Other resource requirements: The taxing agency must have all information pertaining to the specific corporations or class of corporations upon which the new tax or fee is to be imposed.

Institutional capacity required: A new statutory or code section, naming the specific corporation or class of corporations to be targeted, must be added. In many states, taxing authority of this nature must be approved at the state level before being implemented by local jurisdictions.

Cost considerations: Small start-up costs to the government agency responsible for collecting the new tax or fee can be expected, but no further costs should exist beyond this transition period. Costs to targeted corporations will likely be passed on to retailers and, in turn, to consumers.

Timing issues: Because new statutory authority may be needed to implement taxes or fees on polluters, implementation will depend on legislative calendars, committee hearing schedules, and local government meeting schedules. In some areas, taxes and fees carry "sunset" provisions, meaning they must be reviewed and reauthorized periodically; in others, such changes to the tax or fee structure are permanent.
Feasibility of Implementation: Moderate. Political will, bolstered by the support of consumers, community and statewide organizations, and those who have been adversely affected, is key to the success of efforts to impose taxes and fees on polluters. Some may oppose imposing taxes or fees on selected corporations because doing so might be seen as "bad for the general business climate" in the state or local area. Others may support shifting the burden of financially supporting lead poisoning prevention programs to those directly responsible for indoor environmental health problems caused by lead.

# POTENTIAL OBSTACLES/BARRIERS
The lack of political will to impose a new tax or fee, especially on powerful corporate interests, is the major obstacle that could block this strategy. Many states and local areas have very strong business lobbies, and targeted corporations themselves may be willing to spend large amounts of time and money to defeat a strategy that would increase their overall costs.

# ADDITIONAL RESOURCES
N/A

# ILLUSTRATION OF STRATEGY IN PRACTICE
The Childhood Lead Poisoning Prevention Act was passed in California in 1991. The act provides funds for county programs to identify children at risk for lead poisoning and to work toward reducing children's exposure to lead hazards. The program implemented under the act is funded by a fee imposed on corporations that previously bore, or currently bear, responsibility for the production and sale of lead or products containing lead. The fee also extends to corporations and businesses that are responsible for other sources of lead or that have contributed, or continue to contribute significantly, to environmental lead contamination (indoor and outdoor). The fee is assessed on each corporation based on two criteria:
• The corporation's past and present responsibility for environmental lead contamination
• The corporation's "market share" responsibility for environmental lead contamination
The two criteria allow greater fees to be imposed on those corporations that hold greater responsibility for environmental lead contamination (a worked sketch of this kind of apportionment follows below).

Magnitude of Impact/Potential Impact: Annually, $12 million is collected for county programs in California via this fee. The impact is significant in that the fees are the main source of funds for the state's Childhood Lead Poisoning Prevention Program and the county-based programs.

# Jurisdiction or Target Area: California

Potential for replication: Moderate. In states and localities where the political will is present, fee-based funding strategies such as this could be readily replicated with a minimal use of resources.
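The act's exact assessment formula is not reproduced here, so the following Python sketch only illustrates the general idea: apportioning a revenue target across payers in proportion to a blended responsibility score built from the two criteria. All company names, scores, and the equal weighting are assumptions.

```python
# Purely illustrative apportionment, not the statutory formula.

TARGET_REVENUE = 12_000_000  # annual program funding to be collected

# (historical responsibility score, market-share responsibility score);
# both the payers and their scores are invented for this sketch.
payers = {
    "PigmentCo": (0.50, 0.30),
    "FuelCorp":  (0.30, 0.45),
    "SolderInc": (0.20, 0.25),
}

def blended_score(hist: float, share: float, w: float = 0.5) -> float:
    # Equal weighting of the two criteria is an assumption.
    return w * hist + (1 - w) * share

scores = {name: blended_score(h, s) for name, (h, s) in payers.items()}
total = sum(scores.values())
fees = {name: TARGET_REVENUE * sc / total for name, sc in scores.items()}
for name, fee in fees.items():
    # Greater blended responsibility -> greater share of the fee.
    print(f"{name}: ${fee:,.0f}")
```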
# Contacts for Specific Information

# LEVERAGE COMMUNITY REINVESTMENT ACT FOR LEAD SAFETY AND HEALTHY HOMES
DESCRIPTION OF THE STRATEGY
The federal Community Reinvestment Act (CRA) was enacted in 1977 to challenge widespread discrimination in mortgage lending and to encourage banks to help meet the credit needs of all segments of their communities, including low- and moderate-income neighborhoods. Banks may provide loans, grants, technical assistance, or services to support community development activities that serve low- and moderate-income communities, including assistance with the cost of abating or controlling lead hazards in housing. Banks report relevant activity to their respective federal regulatory agencies, which monitor the banks and issue public ratings of the banks' CRA activities in three areas: lending, investment, and services. These ratings affect whether regulators will permit banks to merge or expand. Many banks are therefore open to partnerships with local agencies or community-based organizations that are seeking financing or other support for specific proposals.

# BENEFITS

# CRITICAL ELEMENTS
Staff requirements: Once a financial institution decides to increase lending or make qualified investments in underserved communities, it must develop appropriate agreements, policy guidelines, and operating procedures. Some banks execute contractual agreements with a nonprofit organization to administer the activity and assist in developing loan processing procedures and program eligibility criteria. These activities may require a significant investment of time and effort, depending on the experience and initiative of the lender and the nonprofit partner.

Other resource requirements: It is easiest for the bank to partner with an experienced nonprofit agency operating an effective program that needs added funding or assistance to expand into new areas or activities.

Institutional capacity required: The federal legal authority for CRA activity is already established (12 U.S.C. 2901).

Cost considerations: The modest cost of operating community lending activities, as an in-house service and/or under contract with a nonprofit partner, counts as a CRA service.

Timing issues: Can be implemented at any time.

Feasibility of Implementation: Moderate, where there is a bank interested in undertaking CRA activity involving prevention of childhood lead poisoning.

# POTENTIAL OBSTACLES/BARRIERS
The typical problem is finding or designing an activity or program that meets the bank's criteria for lending or investment and also provides substantive assistance. For instance, if the CRA activity is lending to abate or control lead hazards, the bank must be willing to discount its loans or make other changes to its lending guidelines to make the loans attractive to borrowers. A reduction of 1% in the interest rate will generally not be sufficient. Many potential borrowers will not have good credit ratings. A modest relaxation of credit standards may qualify a few borrowers who otherwise could not borrow at conventional lending rates. However, it will probably be necessary to supplement lending at below-market rates with grants for a portion of the cost.

# ADDITIONAL RESOURCES

Other resources utilized: N/A

Factors essential to implementation: The primary factor is a bank willing to participate, i.e., to absorb the closing costs in exchange for CRA credit. Second, there must be willing landlords who see the remediation of lead hazards as in their interest.

Limitations/problems encountered: The biggest problem is getting landlords to participate. They must become part of the solution.
Magnitude of Impact/Potential Impact: Borrowing by owners of rental property is made feasible by two factors: the cost of borrowing is reduced when the bank waives closing costs, and the cost of rehab is partially offset by a grant from the County of up to $6,250. Approximately 40-50 units will be completed with funds from a HUD Lead Hazard Control grant and the First Place Bank subsidy.

Potential for replication: This can be replicated in communities with a bank willing to make concessions in its lending in return for CRA credit.

# Contacts for Specific Information

# MAKE THE MOST OF FINES AND PENALTIES
DESCRIPTION OF THE STRATEGY
Typically, fines and penalties that government agencies collect revert to the treasury's general fund. Designating a special fund, revolving or otherwise, offers a mechanism for reserving such receipts for a special purpose such as lead hazard control or the promotion of lead-safe work practices. Rather than having all taxpayers fund code enforcement and other services directed toward a problem caused by a few, jurisdictions can charge violators fees to cover the costs of inspection, investigation, and enforcement of building codes and other ordinances related to lead hazards and poisoning prevention.

# BENEFITS
Immediate/Direct Results: This strategy allows municipalities, counties, and even states to fund important prevention programs and enforcement actions that may not receive adequate money from the general fund. The revenue generated from designated fines and penalties can then be used to support more code enforcement, training sessions on lead-safe work practices, lead hazard control, and other primary prevention measures.

Public Health Benefits: Presumably, with significant penalties in place for violating statutes or codes related to lead hazards, property owners will be more apt to follow the rules, engaging in lead-safe work practices, reducing lead hazards in homes, and ultimately lowering the chance that children will be exposed to lead. The primary prevention measures funded by fines and penalties collected from property owners who do not play by the rules will also reduce children's exposure to lead hazards.

Other Indirect/Collateral Benefits: As word of possible fines or penalties spreads throughout the affected area (i.e., the city, county, or state), property owners, especially landlords, will become more aware of the issues surrounding lead poisoning in children, lead-safe work practices, and reducing and controlling lead hazards in homes.

# SCOPE OF POTENTIAL IMPACT

# CRITICAL ELEMENTS
Staff requirements: Since this income is generated by existing housing, building, and/or health code enforcement programs, no additional staff should be necessary.

Other resource requirements: Implementation would require systems for tracking data on fines and penalties generated under the codes and statutes governing lead hazards and lead-safe work practices (or other building or housing provisions).

Institutional capacity required: Statutory and regulatory authority is needed to create the specialized fund and designate the use of the fines and penalties. Code inspection and enforcement staff may need to be trained to recognize lead hazards and unsafe work practices if the jurisdiction has not previously focused on these areas.
Cost considerations: In almost all instances, no new enforcement agency or staff will be needed. One possible problem is that the cost of compliance, as well as fines and penalties, may be passed on to tenants; to mitigate this, consideration should be given to using some proceeds for targeted rent subsidies as needed.

Timing issues: Enforcement, the collection of fines and penalties, and the flow of money from the specialized fund will depend on the workload of the authorized agency.

Feasibility of Implementation: High. Local governments are always looking for revenue generators. Since this approach enables a program to be self-funded, holds bad actors accountable, and does not require a general tax increase, it should be quite feasible to implement.

# POTENTIAL OBSTACLES/BARRIERS
Landlord associations will resist enactment and enforcement of code provisions for fines and penalties. If building inspection, housing, and health departments are overburdened, implementation and enforcement may be lackluster, resulting in a low rate of collection of penalties.

# ADDITIONAL RESOURCES

Other resources utilized: N/A

Factors essential to implementation: The main factor essential to implementing this strategy in San Francisco is cooperation between two sections of the Department of Building Inspection: the Lead Abatement Section and the Administration and Finance Section. Code enforcement and penalty collection officials rely on the City Tax Collector to collect delinquent penalties and on department hearing officers when a penalty is appealed. Underlying all of this is the code authority already in place.

Limitations/challenges/problems encountered: None listed.

Magnitude of Impact/Potential Impact: On average, the Building Inspection Fund receives between $1.2 and $1.7 million each year from fines and penalties. A portion of these funds is used to support the enforcement of lead-safe work practices standards for exterior surfaces. This is the major source of funding for this enforcement work; no general revenue is used to enforce the work practices standards.

Potential for replication: The potential for replicating this approach to the fines and penalties strategy is high. It is relatively easy to administer and easily integrated into existing code enforcement and penalty structures.

# OFFER AN INCOME TAX CREDIT FOR ABATEMENT
DESCRIPTION OF THE STRATEGY
One approach to funding lead hazard control outside of general fund appropriations is for governments to offer a credit on federal, state, or local income taxes to property owners who spend funds on eligible activities. A tax credit's advantage of extremely low administrative cost is balanced against the problem that tax credits are difficult to target and frequently provide little or no benefit to very low-income property owners, who generate little or no tax liability.

# BENEFITS
Immediate/Direct Results: A tax credit is a way to finance the cost of lead hazard control, subject to cost limitations and other operational requirements such as the use of certified contractors. Although income tax credits provide a dollar-for-dollar reduction in tax liability at the end of a tax year, up to the maximum amount specified, the funds are not available to pay for the cost of abatement at the time the work is performed.
Public Health Benefits: Additional lead-safe houses will expose fewer children to lead hazards.

Other Indirect/Collateral Benefits: Ease of administration.

# SCOPE OF POTENTIAL IMPACT
Statewide
City- or County-Wide

# PRIMARY ACTORS
Property Taxation Agency

# KEY PARTNERS
Property Owners
Contractors

# CRITICAL ELEMENTS
Staff requirements: The staffing requirements to implement a tax credit for abatement are minimal. There is an initial need to draft regulations incorporating the credit into the state's tax code, as well as explanatory materials to foster public understanding. Once implemented, the tax credit becomes just another line item on a tax return.

Other resource requirements: There must be a mechanism for verifying that a property for which a credit is claimed has lead hazards and that the work has addressed them (through either abatement or interim controls), leaving the property or work area lead-safe at the end of the job. Jurisdictions should specify that the hazard control and hazard determination/clearance work be performed by appropriately trained or certified personnel, and a sufficient supply of qualified personnel must be available to property owners.

Institutional capacity required: A tax structure and an administrative agency that can fairly administer a tax credit as part of an income tax system.

Cost considerations: It is very difficult to estimate the financial impact of a lead paint abatement tax credit on a jurisdiction's tax revenue, since it is impossible to predict how many taxpayers will take advantage of the opportunity.

Timing issues: None.

Feasibility of Implementation: Extremely easy to implement, but difficult to enact when state and local revenues are constrained.

# POTENTIAL OBSTACLES/BARRIERS
Successfully amending a complex tax code may be daunting for people and organizations normally dedicated to health and housing issues. The tax credit must be large enough to create an incentive for property owners to spend their own money on lead hazard control or abatement. At the same time, the cost must be reasonable enough to be borne by the state.

# ADDITIONAL RESOURCES
1. Federal income tax credit legislation for lead abatement is now pending (S. 1228).

# ILLUSTRATION OF STRATEGY IN PRACTICE
The Commonwealth of Massachusetts' "deleading" income tax credit, in place since 1994, offers a model for other states interested in helping residents pay for the cost of abating lead hazards. The owner of a residential property can claim a tax credit equal to the lesser of the cost of deleading or $1,500 for the containment or abatement of lead hazards, including the replacement of window units. A tax credit equal to the lesser of one-half of the cost of deleading or $500 is available to offset the cost of bringing the property into interim compliance, using interim control measures, pending full compliance. Several steps are necessary to claim the credit: the property must be inspected by a licensed inspector; the property is then "deleaded" by a Massachusetts-licensed contractor; a licensed inspector issues a letter of compliance or a letter of interim control; and the owner files a copy of the inspector's letter with the owner's income tax return. The tax credit is a dollar-for-dollar offset of the actual amount spent against taxes owed. Any unused portion of the credit may be carried forward from the year in which the credit was first claimed to any of the next seven years. Some activities may be undertaken by a "qualified unlicensed individual" pursuant to state regulations. The sketch below works through the credit arithmetic.
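A short worked sketch of the credit arithmetic follows, assuming hypothetical helper functions rather than any Department of Revenue software.

```python
# Worked sketch of the Massachusetts deleading credit rules cited above.

def deleading_credit(cost: float, interim: bool = False) -> float:
    """Lesser of cost or $1,500 for full deleading; lesser of half the
    cost or $500 for interim control."""
    return min(0.5 * cost, 500) if interim else min(cost, 1_500)

def apply_credit(credit: float, tax_owed_by_year: list) -> list:
    """Offset taxes dollar-for-dollar, carrying any unused credit
    forward (the statute allows up to seven carry-forward years)."""
    remaining = credit
    net = []
    for owed in tax_owed_by_year[:8]:  # claim year + up to 7 more years
        used = min(remaining, owed)
        net.append(owed - used)
        remaining -= used
    return net

# A $6,000 full deleading job earns the $1,500 maximum credit; an owner
# owing $900/year in state tax uses it over two tax years.
credit = deleading_credit(6_000)              # -> 1500
print(apply_credit(credit, [900, 900, 900]))  # -> [0, 300, 900]
```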
# Jurisdiction or Target Area: Massachusetts

Other resources utilized: N/A

Factors essential to implementation: State legislation is the essential prerequisite. No staff is dedicated to implementing or enforcing this particular element of the tax code.

Limitations/problems encountered: The relatively modest size of the tax credit ($1,500) limits its usefulness as an incentive to undertake lead hazard abatement. Since most deleading projects in Massachusetts cost several thousand dollars, the tax credit is seldom the deciding factor in financing the cost of lead hazard remediation.

Magnitude of Impact/Potential Impact: The dollar impact on the State is not known.

Potential for replication: This can be implemented in any jurisdiction with an income tax and a legislature willing to create a tax credit for the purpose of abating lead hazards. The potential small loss of revenue will be weighed against other competing budgetary interests.

# Contact for Specific Information
Massachusetts Department of Revenue
617-887-MDOR

# PROVIDE LOCAL PROPERTY TAX CREDITS
DESCRIPTION OF THE STRATEGY
Local governments could offer a credit on, or forgiveness of, property taxes to owners who spend money on specified activities such as window replacement, lead hazard control, or correction of other health and safety hazards, just as credits are currently provided in some locales for marginal properties that are substantially improved. A property tax credit could be very narrowly targeted, for example to a single high-risk neighborhood within designated boundaries. While property tax revenue reductions ultimately have the same budgetary impact as expenditure increases, some jurisdictions may find tax credits more palatable than increases in local agency budgets. No specific lead paint or health-related property tax credit exists anywhere today, although a bill (HB1039) to create a property tax credit for lead hazard reduction in residential and child care properties was introduced in Maryland in 2004.

# BENEFITS
Immediate/Direct Results: Improvements related to lead safety and other health considerations are far more likely to be made by owners who can recoup some of their costs. Property tax credits provide more direct and immediate benefits to owners of low-income properties than income tax deductions or credits. Making the credit dependent upon independent verification of the work provides an opportunity to build in quality controls.

Public Health Benefits: Lead hazard reduction directly reduces children's lead exposure.

Other Indirect/Collateral Benefits: Improvements will generally increase property values and the durability of housing, restore vacant buildings, provide employment opportunities, and increase community pride.

# SCOPE OF POTENTIAL IMPACT

# CRITICAL ELEMENTS
Staff requirements: Approximately 1.0 FTE might be needed to process applications and approve repairs for a program in a major jurisdiction.
Other administrative aspects of the program could probably be carried out within the existing staffing and budget of any state or local revenue or property tax agency.

Other resource requirements: Some new administrative processes and forms would be needed.

Institutional capacity required: Changes in state, county, and local tax law may be necessary, depending on the jurisdiction.

Cost considerations: Losses in property tax revenue would be the main cost to a jurisdiction. There would be marginal administrative costs in ensuring that required repairs are actually done and in auditing claims to avoid fraudulent use of the credit. Long-term enhancement of property values may partially offset the lost tax revenue as lead safety becomes valued in the marketplace. Collateral improvements to property condition (e.g., increased energy efficiency, new windows, plumbing and roof repairs) will increase property value and durability, which in turn should increase future property tax revenues and help arrest blight.

Timing issues: No seasonal or cyclical considerations, other than the fact that policies that reduce tax revenue are more likely to succeed in times of budget surplus. The timeline to implement depends on the legislative process.

Feasibility of Implementation: High. Many jurisdictions provide property tax credits for a wide range of purposes deemed socially beneficial (such as credits for low-income, elderly, disabled, or blind occupants; properties used for charitable or educational purposes; historic preservation; substantial property improvements; and even brownfield clean-up). A credit to promote lead hazard control or other health-related housing improvements would seem to have comparable political appeal.

# POTENTIAL OBSTACLES/BARRIERS
Fraudulent use of the credit could occur without proper quality control measures and independent verification of repairs.

# ADDITIONAL RESOURCES
N/A

# ILLUSTRATION OF STRATEGY IN PRACTICE
Since state enabling legislation was enacted in 1996, Baltimore has offered a property tax incentive program to owners who complete substantial rehabilitation (greater than 25% of the initial "assessed full market value" of the property) of landmark-designated properties and properties located in one of the city's historic districts. The program is not designed for lead hazard control or other health-related repairs specifically, nor has it ever been used that way, although such hazards are often corrected incidentally. The assessed tax of the renovated or rehabilitated property remains at its pre-renovation level for 10 years. The credit is for 100% of the tax assessment increase attributable to the improvements and is fully transferable to a new owner for the remaining life of the credit, provided the property is certified by CHAP.

Staffing utilized: 1.5 FTE at CHAP run the program. No new staff has been added at the state tax agency or the city finance agency, but a tax agency staff person estimates that each agency currently uses as much as 1 FTE to do the additional data input and record keeping needed to support this program.

# Jurisdiction or Target

Other resources utilized: N/A
Factors essential to implementation: Implementation requires cooperation among CHAP (which accepts, reviews, and approves applications and projects), the city department of housing and community development (construction permits), the state department of taxation (which calculates the value of the tax credit), and the city department of finance (which issues property tax bills).

# SECURE DEDICATED FUNDING FOR CODE ENFORCEMENT

# POTENTIAL OBSTACLES/BARRIERS
One potential obstacle is opposition to new fees if the dedicated fund is established based on fees and fines. Therefore, the fees must be kept to the minimum needed to establish and maintain the fund, and the basis and justification for the new fund must be clear and convincing, based on facts about housing and health conditions. Second, property owners are likely to object to new or enhanced housing inspections. Public education and outreach must convince decision-makers that (1) inspections are crucial to relieving documented housing conditions that threaten the health and safety of occupants; and (2) a more professional code enforcement program, featuring registration of rental properties, scheduled inspections timed so that property owners can anticipate them, and consistent enforcement processes, will provide greater predictability and objectivity as well as accountability for compliance. Third, the goals of decent housing conditions and lead safety must take precedence over zeal to garner revenues from penalties (to hire more staff to collect more fines, and so on). Orders to comply without financial penalty should be vigorously pursued, since in many cases the limited resources of the owner would be better spent on correcting violations than on paying fines. Fines must be set high enough to motivate property owners to cooperate with enforcement staff and to invest preemptively in their properties.

# ADDITIONAL RESOURCES
N/A

# ILLUSTRATION #1 OF STRATEGY IN PRACTICE
The City of Los Angeles has adopted a housing ordinance that requires every residential rental property with two or more units to be inspected on a scheduled basis, currently once every five years. The housing habitability inspection, paid for by a fee of $27.24 per unit per year, covers compliance with codes for fire and life safety, building, electrical, plumbing, heat and ventilation, health, and lead hazards.

Other resources utilized: N/A

Factors essential to implementation: The essential components are a professional code enforcement agency, a good database and tracking system, effective outreach and education of property owners and contractors, and consistency of treatment.

Limitations/problems encountered: The program needs to account for the fact that owners want and need to recoup their investment, and for ways to help contractors understand their potential liability.

Magnitude of Impact/Potential Impact: Approximately 180,000 units are inspected each year. In a pilot program covering one-third of the council districts, inspectors trained in lead safety are citing landlords for visible lead hazards and requiring that all paint-disturbing work in pre-1979 buildings be performed using lead-safe work practices. City inspectors make referrals to the county lead program to document violations. They also refer buildings where children are at risk to community organizations, which deploy staff to educate tenants to identify and complain of unsafe work practices and to have their children screened.
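A back-of-the-envelope sketch of what those Los Angeles figures imply is shown below; the total covered stock is inferred from the five-year cycle and is not a reported program statistic.

```python
# Rough arithmetic on the figures cited above; the covered-stock total
# is an inference (five annual inspection cohorts), not program data.

FEE_PER_UNIT_PER_YEAR = 27.24
UNITS_INSPECTED_PER_YEAR = 180_000
CYCLE_YEARS = 5

# With a five-year cycle, the covered stock is roughly five annual cohorts.
covered_units = UNITS_INSPECTED_PER_YEAR * CYCLE_YEARS
annual_fee_revenue = covered_units * FEE_PER_UNIT_PER_YEAR

print(f"Covered rental units (approx.): {covered_units:,}")
print(f"Annual fee revenue (approx.): ${annual_fee_revenue:,.0f}")
```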
Potential for replication: Moderate. Cities and counties that have a housing code could adopt a systematic enforcement program using fees or appropriations dedicated to code enforcement. While many codes do not specifically cover deteriorated paint, there are generally other habitability standards that can be cited. Code inspectors need to be retrained to look at habitability issues, not just building or structural conditions.

# Contact for Specific Information

# ILLUSTRATION #2 OF STRATEGY IN PRACTICE

Secondary Actor(s): N/A

Staffing utilized: No information provided.

Other resources utilized: N/A

Factors essential to implementation: Registration is the key to success. Once a property is registered, it is possible to contact the owner or the owner's representative. The owner knows the property will be inspected and that it must be maintained. Regular inspections are essential to keeping properties in a safe condition.

Limitations/problems encountered: It is very difficult to keep up with the constant turnover in ownership of small apartment buildings. The vast majority of properties subject to registration and inspection are three-unit properties. Small property owners are investing for appreciation, not long-term ownership, and pay little attention to maintenance.

Magnitude of Impact/Potential Impact: 150,000 to 180,000 units are inspected annually.

Potential for replication: Moderate. Other states and localities seeking to replicate New Jersey's approach would need to find a basis for widespread registration that reflects their individual needs and history. New Jersey's laws grew out of a need to regulate tenements in the early 1900s and have gradually evolved since then. Also, the State has a relationship with its municipalities that is not common in most states.

# Contact for Specific Information
Michael Motich
Supervisory Code Administrator
609-633-6225
# LEAD SAFETY AND HEALTHY HOMES STANDARDS
ADOPT STATE AND LOCAL LEAD HAZARD DISCLOSURE LAWS
CERTIFY LEAD SAMPLING TECHNICIANS
ENSURE LEAD SAFETY IN LICENSED CHILD CARE PROGRAMS
ESTABLISH A LEAD-SAFE HOUSING REGISTRY
MAKE LEAD HAZARDS A VIOLATION OF THE HOUSING OR HEALTH CODE
NOTIFY ALL RESIDENTS IN A BUILDING FOUND TO CONTAIN LEAD HAZARDS
PROTECT OCCUPANTS DURING HAZARD REMEDIATION AND RENOVATION WORK
REQUIRE RENTAL PROPERTY OWNERS TO INFORM TENANTS HOW TO REPORT DETERIORATING PAINT
REQUIRE SAFE WORK PRACTICES DURING REMODELING, REPAIR, AND PAINTING
TRAIN PAINTERS, REMODELERS, AND MAINTENANCE STAFF IN LEAD-SAFE WORK PRACTICES

# ADOPT STATE AND LOCAL LEAD HAZARD DISCLOSURE LAWS

Other resource requirements: While the direct outcome of a disclosure law will be to provide families with the information they need to make informed housing choices, the reality is that many families living in high-risk housing do not have real housing options. Therefore, the indirect goal of the disclosure requirement is to motivate property owners to go beyond merely providing information about lead hazards and take steps to address them. Resources to assist landlords of low-income properties and owner-occupants, including free training in lead-safe work practices and grants and low- or no-interest loans, would help accomplish this goal.

Institutional capacity required: The lead agency, presumably the health or housing department, must have the capacity to enforce the law in order for this strategy to make a meaningful difference. In addition, judges and prosecutors must be educated about lead poisoning and the goals of the law so that settlements and judgments go beyond collecting fines and actually require owners to take measures that will protect tenants.

Cost considerations: This is a low-cost way to motivate owners to invest in lead hazard control. The cost of enforcement can be offset by fees and fines.

Timing issues: The timeline to enact and initially implement a disclosure requirement can be quite long (18 months to 2 years or more), depending on the political climate and the calendar of the city council or state legislature.

Feasibility of Implementation: High, though successful implementation depends on legislative approval, which is difficult to predict. Building the support of community-based organizations, tenants, and others concerned about lead poisoning, such as pediatricians, will help achieve passage.

# POTENTIAL OBSTACLES/BARRIERS
Landlords, property management companies, and real estate agents may oppose this legislation. In addition, other agencies that need to be involved, such as inspection, code, building, and judicial agencies, may be resistant to taking on what they view as new responsibilities.

Staffing utilized: It is estimated that approximately 1 FTE was needed over 12-18 months to conduct background research on other state and local lead laws; draft the legislation; build political support for the new law (including meetings with the Mayor and housing officials); and see it through the legislative process. The director of the CLPPP and a lawyer in the Law Department were the two primary staff members who worked on the legislation. The staffing pattern to implement the local disclosure law has not been determined.

Other resources utilized: The program conducted research to review other state and local lead poisoning prevention laws and surveyed HUD grantees to learn how they handle properties. The program used the Wisconsin law as a model.

Factors essential to implementation: Political support for the law was essential to its passage. In this case, the Mayor's support of the ordinance assured the support of the Housing Director and Housing Commissioner, both of whom are effective and influential.

Limitations/challenges/problems encountered: The actual implementation and enforcement of the law will depend on having adequate resources and effective communication and coordination between the health and housing departments.

Magnitude of Impact/Potential Impact: There are 178,000 homes in Cleveland built before 1978 that will be affected by the disclosure law.
Of those, it is estimated that 120,000 (60%) contain lead hazards and would be candidates for enforcement.

# Contacts for Specific Information

# CRITICAL ELEMENTS
Staff requirements: States that already have EPA-authorized certification programs in place for lead abatement workers and supervisors, risk assessors, lead inspectors, and project designers will be able to add approval of certifications for LSTs with a nominal amount of staffing.

# Other resource requirements: Training materials and testing material samples for practice in lead sampling.

# Institutional capacity required: In some states, the laws establishing EPA-authorized certification programs may need to be amended to accommodate LST certification. Elsewhere, agencies administering EPA-authorized programs will only need to promulgate regulations detailing LST certification requirements. Accredited training providers may need to develop LST training programs, but states should be able to approve their plan to use the EPA model course.
Cost considerations: Once an LST certification program is underway, the increased availability of lead dust testing will bring the cost of that service down significantly.

# Timing issues: Assuming the statutory authority to certify LSTs is in place, as much as a year may be required to adopt regulations. Individuals typically are certified as LSTs for one or two years and can easily extend their certification through renewal.

# Feasibility of Implementation: High. Certifying LSTs as a free-standing discipline is readily achievable in states with EPA-authorized certification programs. In states not currently authorized by EPA to administer certification, sampling technicians who are certified by other states can perform non-abatement clearance in accordance with HUD regulations.

# POTENTIAL OBSTACLES/BARRIERS
Confusion or perceived competitive interests may interfere with consideration of certifying LSTs. The benefits of diversifying and expanding capacity need to be communicated.

# ADDITIONAL RESOURCES N/A

# ILLUSTRATION OF STRATEGY IN PRACTICE
In 2001, the State of Vermont promulgated regulations that permit the licensing of lead sampling technicians, completing a process that took roughly one year. Candidates are required to attend a five-hour training course designed to give them hands-on experience in conducting visual assessments and taking dust wipe samples, as well as sample submission, lab results interpretation, and other skills. To obtain a license, candidates must pass an exam at the end of the course. Lead sampling technician licenses must be renewed annually. For technicians working for private firms, the license fee is $150 per year; however, public employees and employees of nonprofit organizations that are not working commercially can have the fee waived. Licenses or certifications obtained in other states can also be used in Vermont. Lead sampling technicians in Vermont are allowed to perform a well-defined set of duties: they can conduct clearance testing following interim controls, renovations, remodeling, and ongoing maintenance. Sampling technicians cannot perform clearance testing after an abatement project or conduct random dust wipe sampling in multifamily properties.

# Jurisdiction or Target Area: Vermont

# ENSURE LEAD SAFETY IN LICENSED CHILD CARE PROGRAMS
DESCRIPTION OF THE STRATEGY
Protecting children in child care settings is an essential complement to preventing exposure in the home environment.
Requiring property owners of child care programs to certify annually that the program used essential maintenance practices during the previous year will prevent the occurrence of lead hazards. This certification is required for issuance or renewal of the program's child care license and must be filed with the program's insurance carrier.

# BENEFITS
Immediate/Direct Results: Once implemented, the program should result in direct and substantial reductions in lead hazards and deteriorated paint at child care programs covered by the regulations.
Public Health Benefits: If a child care facility has lead hazards, children served by the program are likely to become lead poisoned. In addition, since these children may not be considered at high risk for lead poisoning, they may not be identified under the targeted screening programs used in many states. Reducing lead hazards in child care programs will benefit all of the children who use the program's services.
Other Indirect/Collateral Benefits: The program should raise awareness of lead hazards among the families that use the child care program as the program corrects problems or demonstrates that potential problems are under control.

# SCOPE OF POTENTIAL IMPACT
There are 100,000 licensed child care programs nationally serving children under six years old. According to the First National Environmental Health Survey of Child Care Programs, 14 percent of licensed child care programs in the United States have significant lead-based paint hazards, primarily deteriorated lead-based paint. 470,000 children attend these programs. For programs in buildings built before 1960, the rate is 26 percent. For programs where the majority of the children are African American, the rate is 30 percent.

# CRITICAL ELEMENTS
Staff requirements: The program will take at least one year to develop, including adoption of laws or rules, development of coalitions, and production of outreach materials. It will take approximately 0.2 FTE to prepare the program, educate child care providers, and manage the program for the first two to three years. In addition, inspectors normally conducting the program inspections will need to be trained to address the issue and integrate compliance monitoring for lead into their workload. The additional inspection burden should be minimal, about 15 minutes per site visit.
Other resource requirements: It may help improve compliance if some inspectors are trained and certified or licensed to conduct a clearance examination. If inspectors will be expected to take dust wipe samples, they will need about $50 per facility for lab analysis of samples. While not essential, statutory authority such as Vermont's will make it easier. The programs will also need access to contractors and maintenance workers trained in lead-safe work practices to ensure that essential maintenance practices for lead safety are properly done. Some inspection staff may need to be qualified as lead sampling technicians to enhance compliance.

# Cost considerations: Child care programs often operate on a tight budget. Many programs lack the resources to remedy lead hazards. Unless resources are provided, programs may be confronted with closure. The programs most at risk for lead hazards are the ones most likely to need the resources.
Timing issues: It will take approximately one year to establish the program and build support for it. Full implementation will usually take two more years.
Child care programs are busiest, and therefore unavailable, during August and September, when children return to school and enrollment adjusts to the changes.

# Feasibility of Implementation: Moderate. The program is feasible to implement in jurisdictions with lead hazard control resources to devote to child care facilities, but it will take the support of agency and political leadership since additional requirements will be imposed on child care programs.

# POTENTIAL OBSTACLES/BARRIERS
Management support is essential to overcoming program concerns and managing staff reluctance to expand their inspection role and respond to program concerns. Communities need to be prepared to identify potential resources to address lead hazards and provide technical assistance to help programs obtain and use those resources.

# ADDITIONAL RESOURCES

# ILLUSTRATION OF STRATEGY IN PRACTICE
In 1996, Vermont adopted a law requiring all licensed child care programs, including those operated as "Family Day Care Homes," to perform annual Essential Maintenance Practices (EMPs). The programs certify that they have completed essential maintenance practices to reduce lead hazards during the previous year.

# POTENTIAL OBSTACLES/BARRIERS
Some landlords and realtors may object to the use of public resources to benefit owners of lead-safe properties, to the disadvantage of property owners not on the list (or who would have to undergo costly renovations and repairs to qualify). However, public health concerns should outweigh these arguments.
Other resources utilized: HUD Lead Hazard Control funds were used for registry start-up.

# ADDITIONAL RESOURCES
Factors essential to implementation: Factors essential to implementation of the lead-safe housing registry in Montgomery County included the availability of HUD grant funds and the cooperation of the cities of Kettering and Dayton, as well as assistance from a variety of other parties, including the Sunrise Center, the CityWide Development Corporation, and the Center for Healthy Communities.
Limitations/challenges/problems encountered: There were no significant limitations or challenges encountered in establishing the Montgomery County Lead-Safe Housing Registry.
Magnitude of Impact/Potential Impact: The Montgomery County registry currently lists 102 owner-occupied housing units and 56 rental units as lead-safe.

# Potential for replication: Cities and counties that receive state or federal grant funds can easily establish a lead-safe housing registry as part of their overall programs. Eleven counties in California and the City of Long Beach have established similar registries.

# ILLUSTRATION #2 OF STRATEGY IN PRACTICE
The Lead Safe Housing Registry in Maryland is a product of the Coalition to End Childhood Lead Poisoning, a nonprofit organization headquartered in Baltimore. The registry is a statewide list of currently available rental properties that, according to the records of the Maryland Department of the Environment, are in compliance with state and federal lead safety standards. Properties on the registry are designated as having undergone a full risk reduction or as being lead-safe or lead-free. The list is unique in that it shows only properties currently available for rent, along with the type of unit (apartment, townhouse, etc.), some of the amenities, the amount of the security deposit, the total rent per month, and whether the unit is eligible for subsidy under HUD's Section 8 Housing Choice Voucher program.
In addition, some of the properties listed on the housing registry are considered affordable housing.

# Jurisdiction or Target Area: Maryland

# Primary Actor: Coalition to End Childhood Lead Poisoning
Secondary Actors: N/A
Staffing utilized: The Coalition did extensive preliminary groundwork in establishing Maryland's lead-safe housing registry; 1-2 FTE were temporarily needed for this process. Maintaining and updating the list on a biweekly basis requires 0.25 FTE.

# Other resources utilized: N/A
Factors essential to implementation: The initial and ongoing cooperation of the state of Maryland has been essential to the Coalition in implementing this strategy.

# Limitations/challenges/problems encountered: The Coalition has encountered several challenges in making its registry as complete as possible. The State of Maryland is prohibited by law from making public an inventory of properties owned by a specific landlord or leasing company for any purpose, so the Coalition must list units on an individual basis. Also, although the Coalition attempts to provide a large selection of lead-safe affordable housing, these properties are in short supply. Developing the housing registry for the less urban sections of Maryland (i.e., outside the Baltimore metro area and the Washington, DC suburbs) is another challenge for the Coalition.
Magnitude of Impact/Potential Impact: The impact of this registry is statewide. Properties from every county can be included on the registry. Because the registry can include a limitless number of affordable housing units, it can have positive impacts on low-income families in Maryland.

# ILLUSTRATION #3 OF STRATEGY IN PRACTICE
The Wisconsin Lead-Free/Lead-Safe Registry is a listing of houses, apartments, day care facilities, and other buildings that meet the state's lead-free or lead-safe property standard. The lead-free standard is met when a property does not contain lead-based paint. A lead-safe property is one that does not contain lead hazards, such as peeling, chipping, or flaking lead-based paint. A property owner may apply to be added to the registry by obtaining a lead-free or lead-safe certificate, which is issued following an inspection by a certified lead inspector or risk assessor. Lead-safe certificates are valid for a set period of time as determined by DHFS; lead-free certificates do not expire. The Lead-Free/Lead-Safe Registry is posted online; the .pdf file is updated whenever a significant number of properties have been added. The registry is organized by county and lists the address of the property, whether the property is lead-free or lead-safe, and contact information for the property owner or the owner's representative. DHFS is working to make the information available in an interactive format through the Wisconsin Asbestos Lead Database Online (WALDO). While no definite timeline has been set, ultimately the database will be located at http://dhfs.wisconsin.gov/waldo/.

# Jurisdiction or Target Area: Wisconsin

# Other resources utilized: N/A
Factors essential to implementation: The policy requiring DHFS access to addresses of lead-safe/lead-free properties and the availability of these addresses to the public has been crucial to the success and timeliness of the registry.
Limitations/challenges/problems encountered: Publishing the WALDO database online has been the main challenge for DHFS. While an online version currently exists to collect input from property owners, it is too complicated and cumbersome for display to, and interactive use by, website visitors.
DHFS is currently seeking funding to make the database more user-friendly and fully available to the public.

# Building Blocks for Primary Prevention: Protecting Children from Lead-Based Paint Hazards
Lead Safety and Healthy Homes Standards

# MAKE LEAD HAZARDS A VIOLATION OF THE HOUSING OR HEALTH CODE
DESCRIPTION OF THE STRATEGY
In order to provide the clearest legal basis for code officials to confront lead hazards, local and state codes should state explicitly that deteriorated lead-based paint and dangerous levels of lead in dust and bare soil constitute violations of the housing or health code. Specifically referencing lead hazards in the housing or health code will alert enforcement officials and property owners alike that such hazards constitute code violations and must be corrected. The code can explicitly incorporate EPA's national standards for dangerous levels of lead in paint, dust, and soil, giving state and local jurisdictions a ready reference.

# BENEFITS
Immediate/Direct Results: Enforcement officials have the authority to mandate repair or abatement and cite property owners who fail to comply.
Public Health Benefits: Children are protected from exposure because hazards are addressed on a pre-emptive basis.
Other Indirect/Collateral Benefits: With the prospect of enforcement and fines, some property owners may be motivated to repair their property before problems occur.

# SCOPE OF POTENTIAL IMPACT

# CRITICAL ELEMENTS
Staff requirements: Since adding lead hazards supplements existing code enforcement programs' authority, no additional staffing would be needed.

# Other resource requirements: N/A
Institutional capacity required: The initial requirement is local or state legislation that names deteriorated lead-based paint and dangerous levels of lead in dust and bare soil as code violations. Implementation requires training for code staff in the identification of lead hazards and certification to become lead sampling technicians, lead-based paint inspectors, or risk assessors.
Cost considerations: None identified.

# Timing issues: None
Feasibility of Implementation: High. Adding lead hazards to the housing code is not difficult to implement.

# POTENTIAL OBSTACLES/BARRIERS
The strategy has limited usefulness if local jurisdictions do not have the budget or staff to investigate and enforce violations.

# ADDITIONAL RESOURCES

# ILLUSTRATION OF STRATEGY IN PRACTICE
The Town of Manchester's Property Maintenance Code requires that interior and exterior lead-based paint either be maintained in a condition free from peeling, chipping, and flaking or be removed or covered in an appropriate manner. Cases involving lead-based paint violations are referred to the health and building departments to pursue compliance with state and federal regulations. If a child under six years of age resides in a property with deteriorated, flaking, or loose paint conditions, dust wipe samples are collected. If lab analysis results reveal lead hazards, repairs are ordered and the property owner is referred to the Lead Abatement Project, which may provide financial support to complete the repairs. Participants in the program are required to obtain lead-safe work practices training.

# Jurisdiction or Target Area: Manchester, CT
Primary Actor: Department of Health, Lead Abatement Project, in conjunction with the city's Code Enforcement Unit.

# Secondary Actor(s): N/A
Staffing utilized: Only 0.2 FTE is available to address Property Maintenance Code complaints.
One full-time Property Maintenance Inspector, with support staff, would be needed to proactively address lead hazards in a town the size of Manchester.

# Other resources utilized: N/A
Factors essential to implementation: Strong partnership with a childhood lead poisoning prevention program and enough staff to implement the property maintenance code.
Limitations/challenges/problems encountered: Generally, Code Department personnel focus primarily on new construction and only react to property maintenance complaints, so there is a need for ongoing education and advocacy about lead hazards in older properties. Nonetheless, a partnership between the building inspectors and the Lead Abatement Project has made a difference.
Magnitude of Impact/Potential Impact: By making lead hazards part of the Property Maintenance Code, Manchester has institutionalized the importance of recognizing and addressing them. This is an essential step in eradicating lead poisoning, particularly in an area where 93 percent of the housing is at risk for lead hazards.

# Potential for replication: High. The housing code provision is not difficult to implement, but to reach its full potential impact, the jurisdiction should have sufficient resources for code inspection and enforcement.

# Contacts for Specific Information

# NOTIFY ALL RESIDENTS IN A BUILDING FOUND TO CONTAIN LEAD HAZARDS
DESCRIPTION OF THE STRATEGY
The presence of lead hazards in one unit of a multi-family building is a strong indication that other units in the building also contain hazards. Through statutes or code, hazard assessment staff can be given the authority to notify, or rental property owners can be required to notify, all building residents of any evaluation, inspection, other hazard determination, hazard reduction activities, or clearance testing performed in the building. Putting all occupants on notice when hazards are identified enables residents to take steps to protect their children from lead poisoning.

# BENEFITS
Immediate/Direct Results: Occupants will become aware of existing lead hazards and may be motivated to seek an assessment or corrective action in their own unit. As a result, other hazards in the same building will be identified and remediated before more children are poisoned.
Public Health Benefits: Expanded awareness and education about lead hazards among residents.
Other Indirect/Collateral Benefits: Notification of all tenants provides an opportunity for community building among residents.

# SCOPE OF POTENTIAL IMPACT
Statewide
City- or County-Wide

# PRIMARY ACTORS
Code or Housing Inspection Agency

# KEY PARTNERS
Tenants

# CRITICAL ELEMENTS
Staff requirements: Approximately 0.05 FTE.

# Other resource requirements: N/A
Institutional capacity required: Hazard assessment personnel must have the authority to notify all residents when a unit located in the same structure is cited for lead hazards.

# Cost considerations: None listed

# Timing issues: None
Feasibility of Implementation: Very high. This is a simple, low-cost education and outreach tool.

# POTENTIAL OBSTACLES/BARRIERS
Non-English-speaking tenants need notice and educational materials in the appropriate language.

# ILLUSTRATION OF STRATEGY IN PRACTICE
San Francisco's Health Code gives the health department the authority to notify all residents of a building where an investigation documents lead hazards in any unit in that building. When an inspection reveals lead hazards, the environmental health inspector gives the property owner a report highlighting the hazard and where it is located and instructs the property owner to copy and distribute the notice to all tenants in the building.
In order to ensure all tenants actually receive the notice, the department also distributes copies, along with lead hazard educational materials. The materials are available in Chinese, Spanish, and English.

# ADDITIONAL RESOURCES

# Jurisdiction or Target Area: San Francisco, CA

# Secondary Actor(s): N/A
Staffing utilized: 0.05 FTE to prepare the photocopies, assemble the literature, and distribute it to tenants.
Other resources utilized: A photocopier and educational literature.

# Factors essential to implementation: The Health Code gives inspectors the authority to notify all tenants.
Limitations/challenges/problems encountered: The challenge involves distributing the flyers and ensuring that materials are provided in the appropriate language for the tenants.

# Magnitude of Impact/Potential Impact: The Children's Environmental Health Promotion Program has not yet collected data on this strategy's impact, but inspectors consider it complementary to their overall environmental health promotion. Distributing notice to all tenants is another way to build awareness, and it reinforces the seriousness of San Francisco's lead problem.
Potential for replication: Very high. This strategy is easily replicated with very little cost incurred.

# Contact for Specific Information

# PROTECT OCCUPANTS DURING HAZARD REMEDIATION AND RENOVATION WORK
DESCRIPTION OF THE STRATEGY
Generally, occupants of homes that contain lead-based paint should be temporarily relocated to lead-safe housing before the start of lead hazard control work, or renovation or remodeling work that disturbs more than a small area of lead-based paint, and they should not return until the work is completed and the work site has been vacuum cleaned, wet washed, and passed clearance (a sketch of an automated clearance check appears below). Relocation is not necessary if work area containment is practiced and either only a few square feet of paint will be disturbed or the work can be completed in a few days while occupants stay out of the work area. Temporary relocation can be carried out most efficiently and costs minimized by (a) ensuring that paint-disturbing work is completed as quickly as possible; (b) advising occupants fully, in writing, of the necessity of not returning until the dwelling has been thoroughly cleaned; and (c) arranging in advance for the protection and security of occupants' belongings and for the transportation needs of schoolchildren.

# Cost considerations: Where private sector accommodations must be used, relocation costs can be minimized if the agency can establish a public-private partnership with hotels or motels to set aside low-cost rooms for temporary relocation. In addition, it is conceivable that a temporary relocation requirement will result in rental property owners passing the cost to tenants in the form of higher rent.

# Timing issues: N/A
Feasibility of Implementation: High. Encouraging temporary relocation to homes of the occupant's friends or relatives may be one practical way of minimizing costs and ensuring successful implementation. Otherwise, feasibility will depend upon the availability of funds to implement a program.

# POTENTIAL OBSTACLES/BARRIERS
It may be very difficult to impose temporary relocation requirements on landlords without the availability of some type of cost-sharing.
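Where a program wants to make the clearance trigger described above concrete, comparing dust-wipe lab results against clearance limits is straightforward to automate. The following minimal sketch (in Python) assumes the federal clearance limits in effect when this document was prepared (40 µg/ft² for floors, 250 µg/ft² for interior window sills, and 400 µg/ft² for window troughs); the function and data layout are illustrative only, and current regulatory values should be verified before any operational use.

```python
# Illustrative clearance check. The limits below reflect the federal clearance
# standards in effect when this document was prepared; verify current
# regulatory values before relying on them.
CLEARANCE_LIMITS_UG_FT2 = {
    "floor": 40.0,
    "window_sill": 250.0,
    "window_trough": 400.0,
}

def passes_clearance(samples):
    """samples: iterable of (location, surface, dust_lead_ug_ft2) lab results.

    Returns (passed, failures); a unit passes only if every sampled surface
    is below the applicable limit.
    """
    failures = [(loc, surface, value) for loc, surface, value in samples
                if value >= CLEARANCE_LIMITS_UG_FT2.get(surface, float("inf"))]
    return not failures, failures

# Example: two wipes from a unit after hazard control work.
ok, failures = passes_clearance([("kitchen", "floor", 12.0),
                                 ("bedroom", "window_sill", 310.0)])
print("Passed clearance." if ok else f"Re-clean and re-test: {failures}")
```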
# ADDITIONAL RESOURCES N/A

# ILLUSTRATION OF STRATEGY IN PRACTICE
This strategy is HUD's temporary relocation policy for federally assisted housing rehabilitation and renovation work. The policy provides for temporary relocation of residents to lead-safe housing during the work period, but it does not require relocation if certain conditions are met. The property owner does not have to relocate the unit's occupants if only a small area of paint will be disturbed; if the work can be completed in one 8-hour workday or within five calendar days while occupants are kept out of the work area, warning signs are placed in each room where work is occurring, and the area is thoroughly cleaned after work is completed; or if only outside work is involved.
Other resources utilized: Some agencies allow temporary relocation costs as an eligible expense for housing rehab programs.

# Jurisdiction or Target
Factors essential to implementation: Coordination and cooperation among occupants, property owners, and contractors involved in the rehabilitation and renovation work. It also requires ongoing inspection and enforcement by HUD and local housing agencies.
Limitations/challenges/problems encountered: HUD exempted elderly homeowners from relocation requirements since local agencies reported that this population did not want to be relocated and was considered at low risk.
Other resource requirements: Additional staff dedicated to ensuring landlord compliance probably could be funded mostly or entirely from penalties assessed against non-complying landlords.

# Magnitude of Impact
Institutional capacity required: State or local legislation would need to be enacted to create the notice requirement and the enforcement authority to ensure compliance.
Cost considerations: This requirement seems cost-effective no matter how passively or aggressively it is enforced. Without enforcement, some compliance will occur at virtually no cost. Additional resources spent on landlord outreach and education and/or enforcement should increase compliance substantially. Penalties collected from non-compliant landlords would increase in proportion to the resources spent on enforcement, covering or at least offsetting enforcement costs.

# Timing issues: No seasonal or cyclical considerations. The timeline to implement depends on the legislative process.

# Feasibility of Implementation: Moderate. The existence of this policy in two states and throughout most federally assisted housing demonstrates its feasibility.

# POTENTIAL OBSTACLES/BARRIERS
The impact of this policy is directly related to the degree to which it is promoted and enforced among landlords. Some resources would have to be committed initially in order to demonstrate the cost effectiveness of promotion and enforcement. This is most likely to happen if policy makers are shown or convinced that enforcement efforts can pay for themselves.

# ADDITIONAL RESOURCES N/A

# ILLUSTRATION OF STRATEGY IN PRACTICE
In 1996, the Vermont Legislature enacted the Essential Maintenance Practices law, which includes a requirement that owners of pre-1978 rental housing and child care facilities "post, in a prominent place … a notice to occupants emphasizing the importance of promptly reporting deteriorated paint to the owner or to the owner's agent. The notice shall include the name, address, and telephone number of the owner or the owner's agent."
The law also requires owners of pre-1978 rental housing to annually submit an Affidavit of Performance to the Vermont Department of Health attesting to compliance with this and the other requirements.
Limitations/challenges/problems encountered: Although VDH is charged with implementing the law and keeping records, no money is appropriated for these activities or for enforcement. VDH does not have the resources to conduct quality control on affidavits to verify they are correctly completed or to physically check dwellings to confirm compliance. Currently, VDH attempts to resolve complaints informally but does not use its statutory power to penalize non-compliant landlords. VDH has never issued a health order to address violations of the law, even after the infraction caused a child to be lead poisoned. Failure to prosecute even the most egregious cases means there are essentially no negative consequences to ignoring the law.

# Jurisdiction or Target Area: Vermont
Magnitude of Impact/Potential Impact: Approximately 30-40 percent of Vermont's pre-1978 rental housing units have affidavits on file at VDH that claim compliance. According to officials, the lack of any comprehensive listing of rental properties hinders the agency's ability to get a precise picture of compliance. However, Vermont is poised to have a larger impact in the future, and other jurisdictions with more staff availability could be even more effective.

# REQUIRE SAFE WORK PRACTICES DURING REMODELING, REPAIR, AND PAINTING
DESCRIPTION OF THE STRATEGY
Banning unsafe work practices and requiring basic safeguards for remodeling and paint repair work are key to preventing childhood lead poisoning in older housing. Banning unsafe methods of removing paint will sharply reduce the amount of lead-contaminated dust that would otherwise be generated. The unsafe methods that should be prohibited include: dry sanding or scraping; open flame burning; operating a heat gun above 1,100 degrees Fahrenheit; machine sanding without a HEPA attachment; and stripping in poorly ventilated areas using volatile strippers on surfaces containing lead-based paint (a small checklist sketch follows this strategy's illustration). Requiring precautions such as work area containment and careful post-work cleaning will prevent the dispersal of any lead-contaminated dust that might be generated. When coupled with occupant protection activities, adherence to lead-safe work practices for routine remodeling and repair work can help prevent children's exposure to lead dust hazards.

# BENEFITS
Immediate/Direct Results: Homes that are being remodeled, repaired, or repainted are less likely to pose lead dust hazards if contractors refrain from unsafe work methods that generate lead dust and follow basic precautions while performing work that disturbs paint in older homes.
Public Health Benefits: Following lead-safe work practices will materially reduce risks to children living in older homes that are undergoing repairs or renovation. In many areas, such as New England, up to 20 percent of lead poisoning cases can be attributed to unsafe remodeling or renovation activities.
Other Indirect/Collateral Benefits: A requirement for using lead-safe work practices would also reduce exposure of workers, and potentially their children, to dangerous levels of lead dust.

# SCOPE OF POTENTIAL IMPACT
A requirement for using lead-safe work practices could be implemented statewide or at the county or city level. Similar requirements already apply to all remodeling, rehab, and paint repair projects in HUD-assisted housing and properties rehabilitated using HUD funds.
Cost considerations: Because some banned work practices, such as machine sanding, reduce labor time in surface preparation, painting contractors and their clients would bear marginally increased costs.
Timing issues: Developing and implementing systems to train remodeling contractors, painters, and maintenance workers will take time.

# PRIMARY ACTORS

# KEY PARTNERS

Feasibility of Implementation: Moderate. Training to build lead safety capacity can start before requirements are in place. Health department leadership will accelerate acceptance and enactment of lead-safe work practices requirements. Substantial support from community and advocacy organizations will help. Property owner and contractor associations should be asked to participate in developing the statute, ordinance, or code amendment to offset their likely opposition. Compliance will grow over time, because most contractors are law-abiding or interested in avoiding legal liability and are responsive to consumer awareness and demand for lead safety. Success is more likely in areas with a relatively high incidence of lead poisoning and broad public awareness.

# POTENTIAL OBSTACLES/BARRIERS
The main obstacles are likely to be the opposition of property owners and contractors to enactment of requirements for lead-safe work practices.

# ADDITIONAL RESOURCES N/A

# ILLUSTRATION OF STRATEGY IN PRACTICE
In 2001, the City of New Orleans enacted an ordinance that prohibits unsafe work practices during work on metal structures and buildings built before 1978. It requires that contaminated debris be contained within barriers and that visible paint chips be cleaned up after completion of work. It also requires that tenants, neighbors, workers, and government agencies be notified that work on interior and exterior painted surfaces will take place, and it forbids retaliatory evictions. Enforcement is mostly by complaint and is more effective for work on exteriors that is evident to neighbors. The city is authorized to issue notices of violation, to require remediation of any lead-based paint hazards generated by unsafe work, and to require a risk assessment before resumption of work.
Factors essential to implementation: A coalition of physicians and community advocates worked with the city's administration to develop the ordinance, which was passed in the wake of a survey finding that 25 percent of children screened at city-operated clinics had elevated blood lead levels. City departments and advocates must ensure wide publicity and education so that tenants and neighbors will report violations and so that violations will be vigorously pursued.

# Jurisdiction or Target Area: New Orleans, LA

# Limitations/challenges/problems encountered: The ordinance only applies to lead-based paint, which allows painters and owners to submit an unleaded paint chip to circumvent all requirements.
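As a rough illustration of how the banned-methods list in this strategy's description could be operationalized, for example in a permit review or inspection checklist tool, the sketch below flags prohibited practices in a proposed work plan. The function and the exact strings are hypothetical assumptions, not part of the New Orleans ordinance or any other jurisdiction's actual system.

```python
# Hypothetical checklist sketch based on the unsafe methods listed in this
# strategy's description; names and structure are illustrative only.
BANNED_PRACTICES = {
    "dry sanding or scraping",
    "open flame burning",
    "heat gun above 1,100 degrees F",
    "machine sanding without a HEPA attachment",
    "volatile strippers in a poorly ventilated area",
}

def review_work_plan(proposed_methods):
    """Return any proposed paint-disturbing methods that are prohibited."""
    return sorted(method for method in proposed_methods
                  if method in BANNED_PRACTICES)

# Example: a contractor's proposed surface-preparation methods.
violations = review_work_plan({
    "wet scraping",
    "machine sanding without a HEPA attachment",
})
if violations:
    print("Prohibited methods in work plan:", "; ".join(violations))
```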
Further, and most critically, the nurses can provide referrals to available lead hazard control resources and other resources.

# BENEFITS
Immediate/Direct Results: Education and referrals provided during home nursing visits directly benefit the families served. The physical presence of nurses in the homes provides a mechanism for nurses to identify conditions that may warrant emergency interventions, even outside the context of formal policies for such action.
Public Health Benefits: Lead safety improvements triggered by nurse visits benefit siblings and future occupants. In addition, home nurse visits provide a mechanism for targeting available lead hazard assessment and/or control services to high-risk families who can benefit immediately. Over time, cumulative efforts will help improve the lead safety of the housing stock.
Other Indirect/Collateral Benefits: Word-of-mouth among new mothers may reinforce efforts to raise awareness among families in the community. Nurse visits may also generate referrals to other community programs, such as weatherization or lead hazard control, thereby reducing marketing efforts for such programs.

# SCOPE OF POTENTIAL IMPACT
City- or County-Wide

# PRIMARY ACTORS
Health Department

# KEY PARTNERS

# CRITICAL ELEMENTS
Staff requirements: Staffing needs depend upon whether the lead activities are provided as an adjunct to home nurse visits that are already being made or are an entirely new service, and on the scope of services provided.
Other resource requirements: If nurses will be collecting dust samples or demonstrating lead-safe cleaning techniques, they will need the appropriate tools (such as wipes for dust sampling, a HEPA vacuum, etc.) and protocols, along with any necessary training.
Institutional capacity required: Management support is the most likely element required for continued support of a staff-intensive effort.

# Cost considerations: To maintain the visiting nurse program's existing coverage of its target caseload, the incremental cost of adding lead safety to the visiting nurse protocol will need to be reimbursed so that the staff can be expanded accordingly.
Timing issues: Can be implemented at any time.

# Feasibility of Implementation: High. As demonstrated by the adoption of this strategy in multiple jurisdictions, the strategy is feasible in multiple variations. If programs seek to add lead hazard education to previously planned nursing visits, they may run the risk of overwhelming both parents and nurses by providing too much information at one time, ultimately making the sessions less effective. Nurses and families may be frustrated by the lack of meaningful lead hazard control resources available in the community. Pregnant women may resist interventions that could lead to uncomfortable relationships with landlords or even evictions; the program should develop a contingency strategy for landlord retaliation (with assistance from a legal aid agency and/or the code enforcement agency) and explain it to patients.

# ADDITIONAL RESOURCES N/A

# ILLUSTRATION #1 OF STRATEGY IN PRACTICE
With the support of a $100,000 CDC primary prevention grant, WI CLPPP developed and pilot-tested a nursing home visitation program in two high-risk Wisconsin communities (Racine and Sheboygan). Although the one-time CDC grant has ended, the program has since evolved into a different but sustainable format, with 34 programs run by local health departments throughout the state.
The initial pilot program, which targeted low-income, primarily Medicaid-eligible, pregnant women through the Prenatal Care Coordination Program (PNCC), provided prenatal lead education and referrals, environmental assessments, and feedback to property owners. Fourteen PNCC nurses were trained as Lead Sampling Technicians and equipped with HEPA vacuums. During initial prenatal home visits, a nurse provided information about childhood lead poisoning and potential lead hazards in the home environment. The nurse also conducted a visual assessment of the home, collected pre-cleaning lead dust samples from floors and windows, demonstrated lead dust reduction measures, and provided cleaning supplies to the parent. During a second visit four to six weeks later, nurses reinforced messages and collected lead dust samples from the same locations to assess the effectiveness of measures that were taken to reduce lead dust. Pre- and post-cleaning results were provided to clients. Finally, property owners were informed in writing of the results of the dust sampling, given a copy of the HUD Lead-Paint Safety Field Guide, and encouraged to repair deteriorated painted surfaces. Families and property owners were also encouraged to enroll their properties in the HUD Lead Hazard Reduction Program if appropriate. After CDC funding ended, Wisconsin revised the program to reach a broader target audience by adding an additional lead-specific home visit to existing prenatal and newborn visitation programs. Local health departments (LHDs) now choose the level of intensity of services provided during their lead visits, selecting various combinations of three options: lead education, environmental assessments, and feedback to property owners. Due to capacity and resource constraints, not all LHDs are doing dust sampling; some are using LeadCheck swabs. The state is continuing to support local efforts, currently devoting about 15 percent of two FTEs to support the program and budgeting about $35,000 per year. LHDs estimate their costs at about $3,000 per year. To help build capacity and support dust sampling, Wisconsin is training an additional 23 nurses as lead sampling technicians and plans to offer free dust sample analysis through the state lab.

# Jurisdiction or Target Area: Wisconsin

# Other resources utilized: N/A
Factors essential to implementation: The willingness of the visiting nurse program to train nurses as LSTs, and lab resources for dust wipe analysis, were essential to implementing this strategy.
Limitations/challenges/problems encountered: Initial concerns about creating conflict by contacting property owners proved to be mostly unfounded. In fact, the program has been so popular that nurses in one locality fought successfully to protect it from threatened budget cuts.
Magnitude of Impact/Potential Impact: During the nine-month pilot, approximately 100 families received services. WI CLPPP estimates that some level of service has been provided to about 500 families statewide this year and expects the number to reach 1,200 by the end of the year. The program has been well received statewide by nurses and families.
Potential for replication: High. Widely replicable in jurisdictions with visiting nurse programs.

# ILLUSTRATION #2 OF STRATEGY IN PRACTICE
Through partnership with the Weatherization program, the KYBLS project aims to literally get a "foot in the door" by delivering an energy-efficient, lead-safe environment at no cost and with minimal involvement of the property owner.
Weatherization program policies do not require the permission of the property owner to perform an initial energy assessment. The property owner's signature is only needed in order to perform energy conservation treatments (which involve no cost to the property owner and no threat of legal penalty). CLPPP sought to use these facets of the Weatherization Program to initiate contact with property owners by offering an opportunity to improve their properties for free, easing the concerns of women who feared conflict with or eviction by their landlords, and following up regarding the need to repair lead hazards.

# Contacts for Specific Information
Following the birth of each participant's child, the program uses RI health department (KIDSNET) and CLPPP databases to identify and track the child up to the first blood lead level screen, to serve as another method of evaluation for the project. The educational impacts of the KYBLS project continue to be assessed through an analysis of both the pre- and post-educational surveys administered to participants at the initial and final stages of their enrollment, as well as the pre- and post-dust wipe samples taken by the FOP workers at the homes of KYBLS enrollees. RI CLPPP has budgeted $100,000 per year for KYBLS.

# Jurisdiction or Target Area: Rhode Island

# Other resources utilized: N/A
Factors essential to implementation: The essential factor is the cooperation between the health department and the weatherization agency.
Limitations/challenges/problems encountered: Unfortunately, implementation has proven to be very challenging, since a waiting list for weatherization services delays access to the property improvements to be provided at no cost, and property owners are not very enthusiastic about financing lead hazard controls via the loans that are offered. Many of the referred women refused services, could not be located, or did not meet eligibility requirements. Because the program began as a small pilot, to date only a handful of properties have been successfully enrolled into lead hazard control or weatherization programs.

# Magnitude of Impact

# CONNECT MEDICAID DATA AND STATEWIDE SURVEILLANCE DATABASES
DESCRIPTION OF THE STRATEGY
By linking the state's Medicaid data electronically with statewide lead poisoning surveillance databases, states can determine testing rates for children served by Medicaid and identify children in the Medicaid caseload who have not been screened. Analysis of these data can also be used to identify and target neighborhoods in which numerous Medicaid children have been poisoned in order to direct prevention resources to areas at highest risk. Since 1998, CDC has been encouraging states to make these connections by requiring lead poisoning prevention grantees to have a system for ongoing identification of Medicaid-eligible children in the surveillance system, preferably by performing automated data linkages or matches between surveillance and Medicaid enrollment data sets.

# BENEFITS
Immediate/Direct Results: Data sharing improves the ability of the lead program and state Medicaid agency to systematically target prioritized primary prevention in high-risk neighborhoods.
Public Health Benefits: Data sharing combines information sources, shedding enormous light on "high risk" Medicaid populations who can be targeted for primary prevention.
Combining information sources also permits agencies to focus EBL screening efforts in neighborhoods where screening is required but not happening and to better monitor both case-identification rates and the actual delivery of lead screening services, including the performance of individual Medicaid managed care plans and medical practices. It can also permit agencies to track follow-up care provided by local health departments and justify Medicaid reimbursement for such services.
Other Indirect/Collateral Benefits: In addition to links between lead surveillance and Medicaid enrollment data, linkages can be established with other systems, such as geographic information system (GIS) coding and other programs' enrollment data. When illuminated by GIS, the matched data can yield clear and persuasive maps presenting risk, screening, or case-identification data for specific neighborhoods. The lead program can share analyses of the resultant combined data (suppressing identifying information) with housing agencies to facilitate targeting of resources for housing rehab and up-front lead hazard control to the highest-risk blocks and block groups, or combine it with WIC enrollment data to discover risk relationships that can improve targeting strategies for primary or secondary prevention initiatives.
Institutional capacity required: Information system managers for both agencies must understand the project goals and have top-down support for a joint project. Interagency agreements and even legislation supportive of sharing the data may be required.

# SCOPE OF POTENTIAL IMPACT

# Cost considerations: Net costs depend on the status of the existing data systems. The cost of data matching is an allowable state administrative cost under Medicaid and therefore partially reimbursable by CMS.
Timing issues: Can be implemented whenever systems and support are in place.

# Feasibility of Implementation

# POTENTIAL OBSTACLES/BARRIERS
Poor-quality underlying data is a prohibitive barrier to a successful data sharing project, as any data linkages are only as good as the information being compared. A common obstacle arises if the necessary data sets are housed in different agencies or in different locations, or split between agencies with poor working relationships. There is ample evidence that such obstacles can be overcome in any state by enlisting senior management's early support for the project. Privacy and confidentiality issues make all public agencies anxious about sharing data, and such concerns have been heightened by perceived new requirements associated with the Health Insurance Portability and Accountability Act's Standards for Privacy of Individually Identifiable Health Information (commonly known as the HIPAA Privacy Rule). It is imperative that proponents of data sharing be familiar with the facts about HIPAA as well as applicable state or local privacy policies. Some states have laws that require sharing of such data.

# ADDITIONAL RESOURCES
A number of resources are available to assist states in taking this step.

# ILLUSTRATION OF STRATEGY IN PRACTICE
The Chicago CLPPP regularly matches its lead surveillance database with Medicaid eligibility and billing data (two separate Medicaid databases) to track and analyze screening and EBL rates. Each quarter, CLPPP performs a data match and then translates the results into a standardized report for the Medicaid agency.
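A minimal sketch of what such a periodic match might look like follows, assuming both extracts have been cleaned and deduplicated to one row per child and share a reliable identifier; in practice, probabilistic matching on name and date of birth is often required, and the file and column names here are hypothetical rather than Chicago's actual schema.

```python
import pandas as pd

# Hypothetical extracts: one row per Medicaid-enrolled child, and one row
# per blood lead test reported to the surveillance system.
medicaid = pd.read_csv("medicaid_enrolled.csv")  # columns: child_id, dob, zip
tests = pd.read_csv("blood_lead_tests.csv")      # columns: child_id, test_date, bll

# Left join flags each enrolled child as matched ("both") or unmatched
# ("left_only") against the surveillance data.
matched = medicaid.merge(tests[["child_id"]].drop_duplicates(),
                         on="child_id", how="left", indicator=True)

# Enrolled children with no blood lead test on record, for outreach lists:
untested = matched[matched["_merge"] == "left_only"]

# Aggregate screening rate for the standardized quarterly report:
screening_rate = 1 - len(untested) / len(medicaid)
print(f"{len(untested)} untested children; screening rate {screening_rate:.1%}")
```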
The report contains aggregate information on the number of Medicaid-enrolled children tested by age, race, address, and other factors of interest, as well as information on blood lead levels by various criteria. It also includes a list of untested children, which is used for direct outreach by CLPPP to encourage testing. In addition, since the data are geo-coded by address, the CLPPP program uses the data to validate high-risk areas for HUD funds and other prevention efforts and to generate "good visuals" (i.e., mapped data) that help in mobilizing partnerships for prevention in problem areas.

# Jurisdiction or Target

# Other resources utilized: N/A
Factors essential to implementation: The city health department successfully negotiated its data-sharing agreement directly with the state Medicaid agency, the Illinois Department of Public Aid. Program staff believe that the key ingredient for a successful project is the existence of a clean blood lead surveillance database, which can be a challenge given that it is constructed by the CLPPP from information provided by the providers who collected the samples and the laboratories that analyzed them. In contrast, Medicaid data tend to be relatively clean. Chicago modeled its approach after two states' successful efforts (NC and WI). Chicago CLPPP staff sought advice and technical assistance from peers in those states, who were gracious about sharing their data matching protocols.
Limitations/problems/challenges: Completing development of the agreement between the agencies was regarded as the biggest challenge to the program.
Privacy restrictions may also apply with respect to the identity of children in lead surveillance databases and, in some circumstances, address information.

# Cost considerations: There are out-of-pocket costs associated with programming, acquiring data, cleaning databases (if necessary), and licensing software; however, specific costs will vary depending on a number of factors. Purchase of GIS software is now an authorized expenditure for CDC CLPP grant funds.
Timing issues: Can be implemented at any time.

# Feasibility of Implementation: Moderate. A number of programs have already deployed GIS technology successfully to analyze data related to lead poisoning prevention, providing useful models and resources for support, advice, and practical tools.

# POTENTIAL OBSTACLES/BARRIERS
Although "mapping" feels approachable, the concept of Geographic Information Systems (GIS) can be intimidating for those unfamiliar with the software. Programs need to set clear goals for their analyses and decide whether a simple, focused analysis will accomplish their goals (e.g., the Wisconsin illustration) or whether something more complex and comprehensive is appropriate (e.g., the NCHH illustration). A potential barrier that can require considerable staff time to remedy is the quality of the databases. Generally speaking, tax assessor databases are relatively clean, due to their importance to local government finances, while lead surveillance data tends to require cleaning to render it intelligible for GIS programs. It is also possible that very small-scale mapping (e.g., at the block level) of EBL data could trigger privacy concerns, so agencies must have clear policies in place to comply with prevailing privacy requirements.
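To make the hot-spot analysis and the privacy caution above concrete, the sketch below joins hypothetical tract-level surveillance counts to Census age-of-housing data, suppresses small cells before results are shared, and ranks candidate hot spots. All file names, column names, and cutoffs are illustrative assumptions, not any program's actual criteria.

```python
import pandas as pd

# Hypothetical extracts: tract-level counts from a cleaned blood lead
# surveillance system, and pre-1950 housing shares from Census data.
ebl = pd.read_csv("ebl_by_tract.csv")        # columns: tract, tested, ebl_cases
housing = pd.read_csv("tract_housing.csv")   # columns: tract, pct_pre1950

df = ebl.merge(housing, on="tract")
df["ebl_rate"] = df["ebl_cases"] / df["tested"]

# Suppress small cells before sharing results outside the agency, per the
# privacy concerns noted above (the n < 5 cutoff is a common convention,
# not a universal rule).
shareable = df[df["ebl_cases"] >= 5].copy()

# Flag candidate "hot spots": mostly pre-1950 housing and a high EBL rate
# (the 60 percent and 5 percent thresholds here are illustrative).
hot_spots = shareable[(shareable["pct_pre1950"] >= 60)
                      & (shareable["ebl_rate"] >= 0.05)]
print(hot_spots.sort_values("ebl_rate", ascending=False).to_string(index=False))
```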
# ADDITIONAL RESOURCES

# Building Blocks for Primary Prevention: Protecting Children from Lead-Based Paint Hazards
Targeting High-Risk Housing

CONSOLIDATE AND ANALYZE DATA TO HIGHLIGHT LEAD POISONING "HOT SPOTS"

# ILLUSTRATION #1 OF STRATEGY IN PRACTICE
WI CLPPP used GIS software to produce maps visually demonstrating the association between childhood lead poisoning and age of housing. Specifically, Wisconsin developed a series of maps showing both the geographic location of residences of children with elevated blood lead levels (data from the state's blood lead surveillance system) and age of housing (data from the US Census).

# Jurisdiction or Target Area: Wisconsin

# Other resources utilized: N/A
Factors essential to implementation: The program partially credits the success of the maps to careful planning and testing to ensure that the messages were clear for the desired audiences. To this end, WI CLPPP pilot-tested the materials with nurses, health educators, and sanitarians from local health departments' lead program staff. The state is currently exploring the feasibility of offering online access to GIS mapping, through which the public would be able to customize maps of lead data in combination with various other types of data.
Limitations/challenges/problems encountered: None mentioned.
Magnitude of Impact/Potential Impact: Wisconsin created maps with three views for all of its 72 counties and the larger cities, enabling them to show separately the areas with low (0-30 percent pre-1950 housing), medium (30-60 percent), and high (60-100 percent) densities of older housing. The resulting maps were powerful communication tools, showing strongly that children with lead poisoning are predominantly found in the areas with the highest proportion of pre-1950 housing. Although the maps were relatively straightforward in the sense that they only mapped one familiar risk factor (age of housing), feedback on the maps has been universally positive, with continuing requests for customization. At least one jurisdiction used them to target properties for HUD lead hazard control funding, and the city-level maps positioned cities for collaboration and communication about the use of Community Development Block Grant (CDBG) funds for prevention. Even private managed care organizations have requested detailed maps for their service areas, as they reportedly do not have GIS capability themselves. An unsolicited feedback letter from an insurance company requesting additional maps commented that they "were a real visual learning experience for our head Pediatrician and, interestingly, … his nurse [said] …"

Staff requirements: … and training and any applicable credentials to provide lead education services, hazard determination, and lead hazard control services.
Cost considerations: Such intensive service delivery can be expensive: $300-400 for a risk assessment, $500-20,000 for lead hazard control, $150 for a clearance examination. Federal Medicaid policies allow reimbursement for environmental investigation and case management services. However, at present some state Medicaid programs have not begun reimbursing for these services despite explicit federal encouragement to do so. Lead hazard control can be funded by a variety of sources.

# Timing issues: The program could be implemented whenever resources, referral mechanisms, regulatory compliance measures, and service providers have been secured.

# Feasibility of Implementation: Moderate.
This is an ambitious program requiring strong community commitment, significant resources, and ongoing collaboration among disparate entities. Beginning with a pilot program is one means to establish relationships and test systems.

# POTENTIAL OBSTACLES/BARRIERS
This project requires considerable coordination and cooperation among various agencies and entities, including health, housing, and the state Medicaid agency. It would likely be difficult to acquire the resources and support for this type of project where adequate environmental responses are not available to families whose children have even higher blood lead elevations (above 20 µg/dL).

# ADDITIONAL RESOURCES

# Factors essential to implementation: The tenacity of the Get the Lead Out Coalition in promoting the project and the willingness of the project partners to work collaboratively were essential.
Limitations/challenges/problems encountered: This initiative is limited to properties occupied by Medicaid families with young children or pregnant women, and it is targeted to eleven Connecticut cities with large numbers of Medicaid-enrolled children but limited or no funding for lead hazard control.

# Magnitude of

# POTENTIAL OBSTACLES/BARRIERS
The experience of the Massachusetts Department of Public Health (DPH) is illustrative of some difficulties associated with this strategy. While its regulations authorize the health department to conduct building-wide investigations, the agency does not routinely do so because of concern that generating multiple orders for correction of hazards will divert the owner's resources from addressing the unit that has poisoned a child. As an alternative, DPH notifies all tenants in a building about an inspection, advising that lead was found in one unit, that their unit is also likely to contain lead, and that they should talk to the owner or call the state or local board of health if they want an inspection. Upon tenant request, DPH does an investigation. Additionally, DPH will investigate other units in those jurisdictions where local financing agencies will assist owners with abatement of all units in the building at once. Massachusetts is revisiting its strategy for multi-unit buildings as it develops its CDC-required strategic plan.
Limitations/challenges/problems encountered: Securing the resources to pay for environmental investigations has been the biggest challenge. Although not a common problem, inspections occasionally create conflict between landlords and tenants. Although Maine law specifies that a household cannot be evicted due to a child's lead poisoning, Maine is considering strengthening tenant protections against landlord retaliation.

# ADDITIONAL RESOURCES
Magnitude of Impact/Potential Impact: Program staff estimate that approximately 250 multi-family buildings have been investigated following the identification of a lead-poisoned child in one of the units since the 1999 law was enacted.
Potential for replication: High.

# SCREEN HOMES DURING CODE INSPECTION
DESCRIPTION OF THE STRATEGY
Code inspections prompted by complaints about housing problems such as a roof leak, roaches, or no heat, as well as routine periodic inspections of rental housing units, provide opportunities to screen for lead hazards and peeling paint in the homes of young children. Code enforcement staff can be specially trained to conduct limited checks for lead hazards as a means to trigger additional action.
If the inspection identifies a lead hazard through visual assessment, spot testing, dust testing, or paint testing, the inspector can order the property owner to undertake lead hazard control or lead-safe repair work to bring the unit into compliance with any applicable standards.

# BENEFITS
Immediate/Direct Results: The number of homes checked for lead hazards can be greatly increased at comparatively low cost by integrating basic lead safety checks into other code inspection visits. By checking homes for lead hazards and requiring corrective action consistent with applicable standards, code enforcement programs help reduce the risk of childhood lead poisoning.

Public Health Benefits: In most jurisdictions, the lead poisoning prevention program sends environmental investigators only to the homes of children who have already been lead poisoned. In contrast, code inspectors have the opportunity to enter many homes, and they can identify hazards before a child is exposed and an elevated blood lead level develops. Their code enforcement authority can be used to routinely intervene to require lead safety in the highest-risk older properties: those that are subject to tenant complaints about poor maintenance or other health conditions.

Other Indirect/Collateral Benefits: This approach leverages limited public inspection resources to trigger lead hazard assessment and control. Strong enforcement broadens the impact of the code enforcement program, prompting property owners to undertake voluntary measures in other properties and perform preventive maintenance on all rental housing.

# SCOPE OF POTENTIAL IMPACT

# CRITICAL ELEMENTS
Staff requirements: During the start-up phase of a statewide program, approximately 0.75 FTE is needed to establish procedures and build trained capacity. For an established program, 0.3 FTE is needed for program oversight and one FTE for technical assistance to local agencies (code agency and health department). For local agencies, each individual who conducts housing inspections should be trained to perform lead determinations. Depending on the sampling method used, checking for lead hazards will add 10-30 minutes to the typical housing inspection. Staff requirements to meet the workload will depend on the type of determination, which in turn depends on the applicable standard: no peeling or otherwise non-intact paint in any housing, no lead dust hazards, etc. Depending on the extent of lead hazards, the standard to be met, the type of enforcement action, and the local or state court rules, additional time is also required for enforcement steps, including court appearances. Inspectors represent themselves at hearings before clerk magistrates or judges, as they already do for other code enforcement cases. Involvement of public agency attorneys is generally needed only for cases that progress to the criminal complaint stage.

Other resource requirements: Supplies or equipment to check paint; supplies and lab services to check dust.

# Institutional capacity required: This strategy requires authority to compel compliance with an explicit standard (e.g., no peeling or otherwise non-intact paint in any housing, no plumbing leaks, no lead dust hazards), such as statutory lead safety requirements for housing or a locally adopted property maintenance code. The statutory authority should cover licensing or certification of code inspectors or categorically exempt trained inspectors employed by public agencies from licensing or certification requirements.
Implementation needs include training curricula and, as needed, training providers approved by an accreditation program to teach the curriculum. Continuing partnership between agencies that regulate lead-based paint activities and those that enforce codes will ensure effective implementation.

Cost considerations: The program will be more effective if resources are available to assist low-income property owners with the cost of lead hazard control and to provide favorable financing terms to others.

Timing issues: Can be implemented whenever infrastructure is in place.

# Feasibility of Implementation: Moderate. In jurisdictions that have a code enforcement apparatus and enforceable lead safety standards, this can be implemented wherever political will is sufficient to support enforcement. Enforcement can include requiring interim lead hazard controls rather than full lead hazard abatement, so as to decrease costs of compliance when code violations are found.

# POTENTIAL OBSTACLES/BARRIERS
Lack of legal authority and absence of enforcement standards can be insurmountable barriers. Enforcement begins with a single brief inspection but may involve multiple court appearances by the inspector to trigger and complete the legal process of enforcement. Two potential obstacles are the unwillingness of the city/county attorney to prosecute cases and the difficulty agencies may face in maintaining a presence in court throughout the entire enforcement process. State health departments could loan lawyers to prosecute and/or provide technical assistance to local agencies on request.

# ADDITIONAL RESOURCES N/A

# ILLUSTRATION OF STRATEGY IN PRACTICE
Code enforcement staff who respond to complaints about housing problems such as a roof leak, roaches, or no heat, as well as those who conduct routine periodic inspections of rental housing units, are specially trained to conduct limited checks for lead hazards as a means to trigger additional action. If the inspection identifies a lead hazard through visual assessment, spot testing, dust testing, or paint testing, the inspector orders the rental property owner to undertake lead hazard control or lead-safe repair work to bring the unit into compliance with any applicable standards.

# Jurisdiction or Target Area:

# ILLUSTRATION OF THE STRATEGY IN PRACTICE
Title 6 of the Philadelphia Code and Regulations gives the Department of Public Health (DPH) the authority to issue correction orders to owners (or their agents) of housing units found to have lead-based paint hazards. If an owner does not comply with the order, the City files a case in Philadelphia's special "lead court." The city may seek a range of remedies, including the use of City funds to abate the hazard and recovery of those costs from the owner. If the property owner fails to reimburse the city, the court may place a lien on the subject property for the amount of abatement costs and other related expenses. This process has been a powerful motivator for property owners, who are now more likely to proactively correct lead hazards, or at least to comply with orders before the case gets to court.

Staffing utilized: There are no staff dedicated to implementing this strategy. When abatement is needed, crews from the city's lead hazard control program can be assigned, and the labor cost is included in the amount billed to the property owner or added to the lien.

# Jurisdiction or Target Area:

# Other resources utilized: The Department of Health provides justification for the cases, including lead inspection checklists and laboratory records on EBLs.
The Law Department also has access to a list of property owners who have requested assistance.

# Factors essential to implementation: The combination of a dedicated lead court, consistent enforcement, and outreach to landlords to make sure they understand that they must comply or face prosecution has largely enabled the City to avoid having to use this strategy.

Limitations/challenges/problems encountered: The City is unlikely to recover the costs of lead hazard control because homes with deferred maintenance and serious hazards, which often already have tax liabilities or other liens attached, sell for as little as five to ten thousand dollars. As a result, this measure is used only when owners qualify for no other programs. Also, determining the identity of the property owner is sometimes challenging and takes a considerable amount of time.

Magnitude of Impact/Potential Impact: The provision has not been used because the Law Department has been able to use other means to resolve the 1,700 cases that it has filed with the Court.

# CONDUCT PERIODIC HOUSING CODE INSPECTIONS

# DESCRIPTION OF THE STRATEGY
Code enforcement systems that operate solely in response to tenant complaints, although the prevailing norm nationwide, are largely ineffective and have limited impact. This approach fosters the decline of rental housing conditions, since tenants may not know how to register complaints or may be reluctant to complain out of fear of retaliation by the landlord. In contrast to sole reliance on complaint-based approaches, proactive, periodic inspection programs can advance primary prevention more meaningfully. Both New Jersey and Los Angeles have committed to inspecting multi-family rental properties every three to five years. Such preemptive code inspections also can be more narrowly targeted to high-risk neighborhoods, as the City of Milwaukee is doing.

# BENEFITS
Immediate/Direct Results: Problems such as lead hazards are routinely identified by an inspection, documented, and brought to the attention of the rental property owner. Code officials can ensure that when housing code violations are corrected, the work is done in a lead-safe manner.

Public Health Benefits: A periodic rental housing inspection program helps to ensure that multi-family rental housing units comply with basic health and safety standards. Periodic inspections foster proactive maintenance because property owners cannot expect to remain "outside the system." By promoting routine preventive maintenance on a widespread basis and improving the quality of the rental housing stock, periodic inspection programs can help to prevent lead hazards, even in rental housing units that would be missed under a complaint-based inspection program.

Other Indirect/Collateral Benefits: Periodic inspection programs, when coupled with an effective enforcement regimen, can generate fees sufficient to offset the cost of the program. Regular inspections help to maintain the quality of the rental housing stock over the long term in a cost-effective manner.
# SCOPE OF POTENTIAL IMPACT
Statewide-Impact depends upon the scope of the housing code inspection program
City- or County-Wide
Neighborhood/Community

# PRIMARY ACTORS KEY PARTNERS
Code or Building Inspection Agency
Health Department
Local Prosecutors
Community-based Organizations
Property Owners
Tenants

# CRITICAL ELEMENTS
Staff requirements: When moving from a complaint-based inspection program to a periodic system, additional inspectors may initially be required because a periodic inspection program must also accommodate complaints. Under an effective periodic inspection program, the number of complaint-based inspections will decrease over time. In New Jersey, for example, periodic inspections have been mandated for over thirty years, and few inspections are now undertaken in response to complaints. Under the state's periodic inspection program, approximately 115 inspectors conduct approximately 162,000 inspections of dwelling units annually and re-inspect about 127,000 of those units.

Staffing utilized: More than 57 inspectors devote their time solely to proactive inspections. An additional 22 inspectors respond to tenant complaints. Some of the inspectors who deal with complaints also assist with re-inspections of units found to be out of compliance during the scheduled inspections.

# Other resources utilized: N/A

Factors essential to implementation: Successful implementation of a periodic inspection program requires, first and foremost, adequate staff to carry out inspections. Inspectors must be well trained, not only to identify code violations but also to deal effectively with tenants. In Los Angeles, advocates have conducted trainings for inspectors to help them deal with cultural and/or language issues that may arise with tenants. Another key to the success of the SCEP (Systematic Code Enforcement Program) is that a loan program has been put in place to help small landlords make repairs. Finally, effective enforcement is critical to the success of a periodic code inspection program. While the city experienced some initial problems with cases stalling in the courts, hearing officers are increasingly successful at moving cases forward. In addition to a commitment to enforcement on the part of agency staff, adequate prosecutorial resources must be dedicated to enforcement.

Limitations/challenges/problems encountered: Initially, obtaining funds for the program was a challenge. However, the City increased the monthly inspection fee that rental property owners pay.

Magnitude of Impact/Potential Impact: The current schedule allows for rental units to be inspected every five years. Each year, about 150,000 units are inspected. However, the city hopes to increase that figure to 180,000.

Potential for replication: Very high. These programs are readily replicated.

# Contact for Specific Information: Greg

# ENABLE TENANTS AND COMMUNITY-BASED ORGANIZATIONS TO TAKE ACTION TO ADDRESS SUBSTANDARD HOUSING CONDITIONS

# DESCRIPTION OF THE STRATEGY
In many cases, tenants lack the ability to address substandard housing conditions or are reluctant to exercise their rights out of concern that the landlord will retaliate. Empowering tenants to take action when housing conditions are inadequate and enabling neighborhood organizations to act on tenants' behalf can significantly enhance the efforts of code enforcement officials.
One effective strategy is to legally enable tenants or their advocates to request a code inspection and empower them to pursue enforcement actions themselves in court. This approach also helps circumvent the common problem of inadequate resources for enforcement.

# BENEFITS
Immediate/Direct Results: Tenants in substandard properties obtain legal standing to initiate code inspections, enforcement, and remediation actions without fear of landlord retaliation. If receiverships (court appointment of third-party administrators to manage properties and oversee repairs) or rent escrow arrangements are permitted, rents can be used directly to fund repairs.

Public Health Benefits: High-risk housing is targeted for repairs that reduce health hazards. Code agency and/or court oversight can ensure that repairs are done safely, follow accepted protocols, and leave no hazards behind.

Other Indirect/Collateral Benefits: Tenants gain power in relation to landlords, which could result in landlords community-wide becoming more responsive and proactive regarding maintenance and repairs.

# SCOPE OF POTENTIAL IMPACT

# CRITICAL ELEMENTS
Staff requirements: An experienced organizer could, in several months, organize a campaign capable of enacting such a law. One or more FTEs (organizers, attorneys) could staff a project to assist tenants with using the process in just one community. Enacting a new state or municipal law that gives tenants in substandard properties legal standing to initiate code inspections/enforcement and to take enforcement actions themselves can be a major undertaking.

Other resource requirements: Research would be needed on existing laws, as well as on the degree and extent of substandard housing conditions in the jurisdiction and specific shortcomings of the existing code enforcement system.

Institutional capacity required: An organization undertaking such a campaign would need the capacity to organize and lobby, experienced staff, and relationships with allies among tenants' rights, affordable housing, public interest, legal, and other community organizations.

# Cost considerations: The positive impact on housing affordability and condition is potentially great. These benefits will far exceed the cost of a campaign to secure enabling legislation. Creating and funding an organization or agency to provide ongoing assistance to tenants bringing enforcement cases should be considered, as this will greatly improve the impact of the law and the quality of outcomes.

# Timing issues: None

Feasibility of Implementation: Variable. Any organization undertaking such an effort should understand that, because of the complex and unpredictable nature of the legislative process, the degree of difficulty may be greater than expected and success is not guaranteed. Pilot programs with limited scope may be a useful first step, giving advocates time and resources to prove that the strategy is effective in a target area.

# POTENTIAL OBSTACLES/BARRIERS
This strategy requires enacting new legislation, working to ensure that it is effectively implemented, and ongoing work to assist tenants with using the process. Thus, this strategy is a major undertaking and could fail if sufficient resources and energy are not available to overcome inertia and political opposition.
# ADDITIONAL RESOURCES N/A

# ILLUSTRATION OF STRATEGY IN PRACTICE
Minnesota's landlord-tenant law, Chapter 504B, gives tenants, municipalities, and neighborhood housing-related organizations legal standing to bring a court action against a landlord who fails to correct deficiencies at the property within a reasonable time. Project 504, a non-profit neighborhood organization, has brought more than ten such cases in the past three years, leading to broad remedies for tenants, including in some cases the appointment of a third-party administrator to manage and operate the landlord's property. Project 504's court action also established precedent that significant unabated lead hazards in a property constitute an emergency, prompting the court to order the landlord to correct the hazards immediately.

Factors essential to implementation: Strong partnership with other affordable housing and tenant advocacy organizations. Code enforcement officials who recognize that the strategy's success will reduce enforcement time spent on problematic properties. Strong and ongoing relationships with tenants, including any identified tenant leaders who will advance the strategy. Solid knowledge of landlord-tenant law, or partnership with a pro bono or legal services attorney who can provide legal analysis and support. Relationships with proactive landlords who recognize the need to address substandard housing in their jurisdiction are also helpful.

# Jurisdiction or Target Area:

# ILLUSTRATION #2 OF STRATEGY IN PRACTICE
An ordinance passed in 2002 gave code inspectors the authority to pursue charges against property owners who do not treat lead-based paint hazards in their buildings. The City of Kankakee's Community Development Agency's Lead Poisoning Prevention Program (LPPP) trained code inspectors to visually assess and identify lead hazards. When code inspectors discover an existing or potential lead hazard, they can refer the property owner to the LPPP. Through its HUD Healthy Homes grant, the program helps property owners make repairs before a hazard develops, as well as remediate or abate existing hazards. Property owners who do not follow up on the voluntary referral are cited.

Other resources utilized: No information provided.

# Jurisdiction or Target Area:

# Factors essential to implementation: A good relationship with the code enforcement division is key. Outreach to property owners through the local landlord association allowed the program to address concerns up front and to educate property owners about lead hazards before inspections. HUD Healthy Homes and CDBG monies provide the funding to assist property owners with lead-safe work practices and lead hazard remediation.

Limitations/challenges/problems encountered: Occasionally a property owner does not recognize the seriousness of the problem, but the LPPP grants typically alleviate any objections. Nominal outreach was needed to bring the Code Enforcement Division and the Health Department together, since each agency has its own focus.

Magnitude of Impact/Potential Impact: LPPP receives an average of 20-25 referrals per month. In its first two-year grant cycle, it assisted 300 properties and will complete another 240 this cycle. As a result of interactions with code inspectors, other property owners voluntarily contact LPPP for lead hazard information and assistance. Outreach coordinators have also seen a shift in community awareness, from landlord association meetings to WIC outreach.

# Potential for replication: Moderate.
This strategy can be replicated wherever there is a property maintenance code that the city is willing to enforce. Approximately 25% of the owners contacted via letters have attended dinners, and GHC is starting to receive responses from previously uncooperative owners.

# Contacts for Specific Information
Jenny

Jurisdiction or Target Area: Greensboro, North Carolina
Primary Actor: Greensboro Housing Coalition (GHC)
Secondary Actor(s): N/A

Staffing utilized: 0.25-0.5 FTEs for four mailings and eight dinners, including researching and preparing the mailings and planning for and convening the dinners.

Other resources utilized: Data sources include information collected during GHC's neighborhood outreach and hazard assessment activities (as a subcontractor to CEHRC and the city's lead hazard control program), the city's housing code enforcement database, and the tax assessor's database. GHC uses a laptop and digital projector for presentations. A transitional housing program with a large community room donates space for the meetings. Food is purchased and prepared by GHC staff.

# Factors essential to implementation: A key factor is good working partnerships with the range of agencies that need to communicate with landlords about their responsibilities under federal, state, and local laws, and about resources available to help them make their properties lead-safe.

Limitations/challenges/problems encountered: The majority of landlords have not yet responded to the letters. Also, it has been difficult to identify and locate some owners. It is difficult to determine owners' follow-up activities for two reasons: the city's lead hazard control program is overwhelmed and cannot always provide accurate and up-to-date information on the applications received, and the owners tend not to readily provide information on what steps, if any, they have taken to control lead hazards.

Magnitude of Impact/Potential Impact: Many more owners of high-risk housing are following the disclosure law. At least 25 properties have been enrolled in the city's lead hazard control grant program.

# Potential for replication: Moderate

# Contact for Specific Information
Beth McKee-Huger
Executive Director
336-691-9521
[email protected]

# References for additional information N/A

# ILLUSTRATION #4 OF STRATEGY IN PRACTICE
Project 504 built an extensive database of owners of pre-1950 properties in high-risk neighborhoods and is using it to mail out at least 1,000 boilerplate and 250 registered letters. Owners receiving boilerplate letters first receive a postcard to alert them to the coming letter. The postcard serves two purposes: getting the property owners' attention and culling bad addresses from the database before the more expensive letters are sent. Both the postcard and the letters refer owners to Project 504's new website designed to provide resources to property owners, www.nomorelead.org. When rental property owners contact Project 504 for more information, they are sent a letter with an accompanying stamped postcard that the owner can send directly to the county for more information on how to enroll in the lead hazard control (LHC) grant program. The county saves these cards …

# ACKNOWLEDGEMENTS
Members of the Advisory Committee on Lead Poisoning Prevention and countless individuals working in local and state programs contributed many valuable ideas, feedback, and real-world illustrations.
The team that researched and wrote Building Blocks under CDC's contract with the Alliance for Healthy Homes, headed by Project Director Jane Malone, included Nick Farr, Laura Fudala, Brian Gumm, Carol Kawecki, Jane Malone, Betsy Marzahn-Ramos, Gordon McKay, Tom Neltner, Anne Phelps, Eileen Quinn, Maria Rapuano, Don Ryan, Ralph Scott, Ellen Tohn, Anne Wengrovitz, and Anne Ziebarth.

# Lead Safety and Healthy Homes Standards

# ESTABLISH A LEAD-SAFE HOUSING REGISTRY
Potential for replication: In states where statutes or regulations allow or require lead-safe housing certification, a community-based or statewide nonprofit organization with sufficient resources could easily replicate this housing registry, though it may be more challenging to establish statewide registries in more rural states than in those with large metro areas.

Potential for replication: Very high in states where statutes or regulations require lead-safe and/or lead-free housing certification.

# Contact for Specific Information

# Contacts for Specific Information
Gail

# REQUIRE RENTAL PROPERTY OWNERS TO INFORM TENANTS HOW TO REPORT DETERIORATING PAINT

# DESCRIPTION OF THE STRATEGY
Requiring property owners to provide information on lead hazards to tenants and to inform tenants how to report deteriorating paint can increase tenant awareness of the risk of lead hazards and assist them if paint deterioration problems develop. Notices can be delivered or mailed to tenants or posted in the building to inform occupants of basic lead hazard control measures, ask them to report deteriorated paint, and provide them with the information necessary to report conditions of concern. This strategy is effective only to the extent that property owners promptly and safely repair deteriorated paint and its causes. This type of notice to tenants is required in Vermont and Rhode Island, and in housing subject to HUD's lead-safe housing rule: public housing, housing subsidized by a variety of HUD assistance programs (including the Section 8 Housing Choice Voucher program), and properties that HUD is selling.

# BENEFITS
Immediate/Direct Results: This strategy can increase tenant awareness of the risk of lead hazards and increase the likelihood that property owners are made aware when paint deterioration develops, which, in turn, increases the likelihood of corrective action.

Public Health Benefits: Lead exposure is reduced if deteriorated paint is repaired more promptly and in a lead-safe manner.

Other Indirect/Collateral Benefits: Property owners and occupants will become more aware of the hazards associated with deteriorated lead paint and will pay more attention to paint condition. Code enforcement personnel may also pay more attention to deteriorated paint.

# SCOPE OF POTENTIAL IMPACT
City- or County-Wide
Specific (Targeted) Population-Notice requirement could be restricted to a higher-risk set of properties, such as pre-1950 rental housing

# PRIMARY ACTORS KEY PARTNERS
Health Department
Code or Building Inspection Agency
Housing Agency
Local Prosecutors
Community-based Organizations
Property Owners
Tenants
Community Members

# CRITICAL ELEMENTS
Staff requirements: The requirement to notify could be considered self-enforcing, but governmental enforcement efforts can greatly improve awareness and compliance. Very minimal staffing within a health or code enforcement agency (0.2-0.5 FTE) could create a basic education and outreach program to increase landlord awareness of the requirement.
High-profile enforcement actions against egregious violations and/or spot-checking properties for compliance would be a reasonable starting point for additional enforcement efforts. Staffing levels for enforcement could be increased further, up to the point where additional staff produces diminishing returns. To the extent that increased reporting of deteriorated paint to landlords does not prompt safe repairs by landlords, additional hazard inspection and enforcement may be needed.

# TRAIN PAINTERS, REMODELERS, AND MAINTENANCE STAFF IN LEAD-SAFE WORK PRACTICES

# DESCRIPTION OF THE STRATEGY
Research makes clear that routine work disturbing painted surfaces can create lead dust hazards. "Basic training" in lead-safe work practices is now readily available to painters, remodelers, and maintenance staff, teaching the work practices needed to control, contain, and clean up any lead dust generated by their work. A new HUD/EPA 5½-hour "basic training" course includes valuable hands-on exercises and can be easily taught in most localities.

# BENEFITS
Immediate/Direct Results: Attendees will have the knowledge and skills necessary to use lead-safe work practices immediately. These practices will reduce lead hazards during renovation and maintenance work.

Public Health Benefits: As the lead-safe work practices learned by attendees are used on maintenance and renovation projects, fewer children will be exposed to lead-based paint hazards in their homes.

Other Indirect/Collateral Benefits: Lead-safe work practices will become far more widespread as more professionals are trained to teach the class. This will help to avoid the creation of lead-based paint hazards and will help reduce hazards that already exist. Lead-safe work practices require, among other things, extensive dust control during work and thorough cleaning once a job is completed. This can significantly reduce dust levels and other respiratory irritants in remodeled homes and apartments.

# SCOPE OF POTENTIAL IMPACT

# CRITICAL ELEMENTS
Staff requirements: This strategy will require staff time to conduct trainings and follow up with attendees. A good trainer will need two days to become familiar with the curriculum.

Other resource requirements: Lead-safe work practices materials will need to be copied for each attendee. In addition, hands-on supplies will be needed.

# Institutional capacity required: An experienced trainer is needed to teach the class. Statutes, regulations, and/or municipal codes should ideally include standards for training requirements for painters and remodelers.

Cost considerations: Costs will mostly involve staff time and training materials. These expenses should be low to moderate.

Timing issues: This strategy would require some short-duration outreach.

# POTENTIAL OBSTACLES/BARRIERS
There should be few, if any, obstacles to impede implementation of this strategy.

# ADDITIONAL RESOURCES N/A

# ILLUSTRATION OF STRATEGY IN PRACTICE
In 1991, the State of Rhode Island passed a comprehensive lead law designed to protect children from lead poisoning. Included as part of that law is a requirement that all painters, remodelers, and others who work with lead-based paint, or who seek to control existing lead hazards, obtain training in lead-safe work practices. The Department refers all applicants for certification in lead-safe work practices to a list of training providers. It also provides the general public with resource information on lead-safe work practices.
This information is available through the Department's Family Health Information Line.

# Jurisdiction or Target Area:

# Other resources utilized: N/A

Factors essential to implementation: A law that required certification and training and a Department of Health committed to its implementation were key. A strong enforcement mechanism against those who do not use lead-safe work practices has been helpful.

Limitations/challenges/problems encountered: No significant problems or challenges were encountered in implementing this strategy, though many painters and remodelers continue to claim they are not aware of the training requirements.

Magnitude of Impact/Potential Impact: Department of Health statistics show that 293 painters and remodelers have been trained under the lead-safe work practices requirements, and use of lead-safe work practices among target groups has increased since 1991.

# ILLUSTRATION #2 OF STRATEGY IN PRACTICE
In March 2002, the U.S. Department of Housing and Urban Development (HUD) contracted with the National Center for Healthy Housing (NCHH) to develop an interactive, web-based lead database that utilizes "real time" information and mapping capabilities to display housing and blood lead information related to lead hazards. NCHH has partnered with Abt Associates in Cambridge, Massachusetts, to do the technical development of the database system and website. As part of HUD's strategy to eliminate childhood lead poisoning by 2010, this system is a demonstration project for other jurisdictions interested in using their local data in a similar way. The website provides visitors with a number of choices of data sources and presentation modes. For example, visitors can view maps of EBL data by neighborhood, or lead or code violation data for a specific address, and plot lead risk factors by zip code, census tract, or other geographic areas. The system is designed to allow users to interface seamlessly with multiple local databases to target at-risk properties for services and education as well as enforcement activities. The system could also assist individual renters and buyers in identifying lead-safe or at-risk housing.

Jurisdiction or Target Area: Nationwide; pilot-tested in Baltimore, Boston, and Chicago
Primary Actor: National Center for Healthy Housing, with Abt Associates

# Secondary Actor(s): N/A

Staffing utilized: The pilot project has required an average of 1 FTE at each site, plus 4 FTEs at the national level, to develop the prototypes for these unique systems.

# Other resources utilized: The pilot project was funded through a $3.5 million grant from HUD. Current funding will support the site through 2004. The project partners are currently seeking federal, state, and private funds for ongoing support of the system. Development costs for local jurisdictions to replicate and maintain the approach are expected to be well below the costs of initial development. Costs to replicate the system will be affected by local characteristics, including: the quality and "cleanliness" of the existing blood lead surveillance, housing, and other relevant data sets; the willingness and ability of local governments to share property-level information; and the technical capacity of the jurisdiction with respect to GIS readiness (e.g., availability of "shape" files).
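The core data handling behind such a system can be sketched briefly. The example below is a minimal, hypothetical illustration in Python (pandas), not the NCHH/Abt implementation: it rolls address-level blood lead records up to block groups before display, applies the small-count suppression threshold described in the pilot's privacy discussion that follows, and classifies areas into the low/medium/high pre-1950 housing tiers used in the Wisconsin maps described earlier. All column names and the data layout are assumptions.

```python
# Minimal sketch, not the NCHH/Abt system: aggregate child-level blood lead
# records to block groups and suppress small counts before mapping.
# Column names (child_id, block_group, ebl, pct_pre1950) are hypothetical.
import pandas as pd

MIN_CHILDREN = 25  # suppression threshold noted in the pilot's privacy rule


def housing_tier(pct_pre1950: float) -> str:
    """Wisconsin-style density tiers: low (0-30%), medium (30-60%), high (60-100%)."""
    if pct_pre1950 < 30:
        return "low"
    if pct_pre1950 < 60:
        return "medium"
    return "high"


def aggregate_for_display(records: pd.DataFrame) -> pd.DataFrame:
    """Collapse per-child records into block-group totals, hiding small cells."""
    grouped = records.groupby("block_group").agg(
        children_tested=("child_id", "nunique"),
        ebl_count=("ebl", "sum"),              # ebl = 1 if the child had an EBL
        pct_pre1950=("pct_pre1950", "first"),  # one value per block group
    )
    grouped["ebl_count"] = grouped["ebl_count"].astype(float)
    # Privacy rule: do not display EBL data where fewer than 25 children were tested
    grouped.loc[grouped["children_tested"] < MIN_CHILDREN, "ebl_count"] = float("nan")
    grouped["tier"] = grouped["pct_pre1950"].map(housing_tier)
    return grouped.reset_index()
```

The design choice worth noting is that suppression happens in the aggregation layer, before any map or table is rendered, so no downstream view can accidentally expose address-level health data.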
Factors essential to implementation: NCHH and Abt Associates consulted with stakeholders in each of the three cities to determine what data should be included in the database; how those data will be collected, cleaned, and incorporated in the database system; and how they will be maintained over time. Stakeholders included local and state health and housing agencies, community groups and advocates, health care providers, nonprofit housing organizations, and others. As a result, six key features were selected: "real time" health and housing data; one location for health and housing data; housing data at the address level; health data at a higher level of aggregation; mapping capabilities; and links to other websites containing useful information.

Limitations/problems/challenges: Overcoming the concerns of property owners, the local real estate industry, and the health department over privacy issues required problem solving. As a consequence, all blood lead data is aggregated at the block group level and is not viewable for specific addresses. If fewer than 25 children are included, the blood lead data is not displayed at all.

Magnitude of Impact/Potential Impact: The project has been piloted in Baltimore, Boston, and Chicago. Comprehensive, "real time," address-specific data is a distinguishing feature. Address-level information will help localities focus their efforts and target resources toward the areas of greatest need.

Potential for Replication: Moderate. The conceptual and technical development supported by HUD's pilot project can greatly facilitate local replication if the resources to sustain and accurately update the information systems are in place at the local level. Cost estimates for replicating the approach are forthcoming.

# Contacts for Specific Information

# PERFORM BUILDING-WIDE HAZARD ASSESSMENTS IN MULTI-UNIT BUILDINGS FOLLOWING IDENTIFICATION OF LEAD HAZARDS IN ONE TROUBLED UNIT

# DESCRIPTION OF THE STRATEGY
If lead hazards are identified in one unit in a multi-family building (through an EBL investigation or other means), there is a significant likelihood that similar hazards are present in other units in the building due to common painting and maintenance histories. Undertaking building-wide hazard assessments in multi-unit buildings (complemented by building-wide blood lead screening of other young children who are occupants, especially if an elevated blood lead level has already been detected) is a useful strategy for targeting high-risk units. Agencies could extend this approach to screen all properties owned or managed by the same person or entity, especially "problem landlords."

# BENEFITS
Immediate/Direct Results: Environmental assessments triggered in this manner can benefit other children who reside in the same property, either before they are exposed to lead or by identifying current but previously unrecognized lead hazards earlier than they would otherwise have been found.

Public Health Benefits: This strategy efficiently targets limited public health inspection resources to properties and families that are predictably at higher risk.

Other Indirect/Collateral Benefits: Consistent application of this strategy raises awareness of lead hazards and reinforces messages about their relationship to housing.
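The targeting rule behind this strategy is simple enough to express in code. The sketch below is purely hypothetical: it assumes a pandas table of rental units with invented column names (address, building_id, owner_id) and flags the sibling units in the same building, and optionally every property under the same owner, once hazards are confirmed in one unit.

```python
# Hypothetical sketch of the targeting rule described above: once one unit has
# confirmed lead hazards, flag other units in the same building (and optionally
# all properties under the same owner) for building-wide assessment.
import pandas as pd


def units_to_assess(units: pd.DataFrame, hazard_address: str,
                    include_same_owner: bool = False) -> pd.DataFrame:
    """Return units sharing a building (or owner) with the unit where hazards were found."""
    hazard_unit = units.loc[units["address"] == hazard_address].iloc[0]
    selected = units["building_id"] == hazard_unit["building_id"]
    if include_same_owner:
        # Extend screening to every property owned or managed by the same entity
        selected |= units["owner_id"] == hazard_unit["owner_id"]
    # Exclude the unit that has already been investigated
    return units.loc[selected & (units["address"] != hazard_address)]
```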
# SCOPE OF POTENTIAL IMPACT

# CRITICAL ELEMENTS
Staff requirements: An agency must provide sufficient staff to oversee property referrals and handle necessary administrative responsibilities. Hazard assessment capacity could be acquired by hiring staff or by contracting with private service providers.

# Other resource requirements: N/A

Institutional capacity required: Authorization to enter properties and share data as needed.

Cost considerations: Systematically screening other units in the same building where one has hazards can be a cost-effective primary prevention strategy.

Staffing utilized: During the start-up phase, approximately 0.75 FTE was needed to establish procedures and build trained capacity. The established statewide program needs 0.3 FTE for program oversight and 1 FTE for technical assistance to local agencies (code agency and health department). Adding this function to the responsibilities of local housing inspectors slightly reduces the number of inspections they can complete. The inspector must acquire a special license for code enforcement lead determination inspectors; prerequisites for the license include employment by a code enforcement agency or local board of health, completion of the CLPPP training course, passing the licensing exam, and completion of a field apprenticeship.

# Other resources utilized: N/A

Factors essential to implementation: Program staff in MA found the following factors to be especially critical to implementation:
1. The underlying statutory structure authorizing standards, licensing, and enforcement;
2. Implementation plans developed in collaborative fashion through an advisory committee (Governor's Advisory Committee) with local health departments, to ensure that procedures would be workable for them;
3. An inexpensive lead screening technique (sodium sulfide testing) that makes the program accessible to all localities in the state; and
4. Continuous support of the enforcement structure, beginning with advance notice to courts that enforcement cases would begin appearing.

Limitations/problems encountered: Program staff found that the most significant limitation was the lack of resources for lead hazard control. Despite longstanding Lead Law requirements, enhanced enforcement when the lead determination program was launched made it seem like a new requirement to property owners. Program staff feel it was important to be able to offer resources to property owners to help finance lead work, since the principal defense offered by property owners is that they do not have the money to abate. Consequently, the state has made available several different resources, including a state deleading tax credit, the "Get the Lead Out" revolving loan fund (originally funded through state appropriation but now primarily from repayments of prior loans), CDBG funds, and now a lower-cost, moderate-risk do-it-yourself option for property owners. This last option involves training property owners in the use of lead-safe work practices to allow them to control moderate-risk lead hazards without the need for certified contractors.

Magnitude of Impact/Potential Impact: Since most lead determinations and subsequent enforcement actions occur at the local level and a statewide database is still under development, the state lacks complete knowledge of the impact to date. The state provides lead determination services when local capacity is lacking or on special request (e.g., when a local government is the property owner).
In FY03, seven state inspectors performed 150 lead determinations upon parental request, resulting in 85 homes undergoing lead hazard control.

# ABATE LEAD HAZARDS AND RECOVER COSTS WHEN OWNERS FAIL TO ACT

# DESCRIPTION OF THE STRATEGY
Strong enforcement powers and sufficient resources to compel compliance are essential to any effective lead poisoning prevention program. To ensure that lead hazards cited by violation orders are controlled when property owners fail to act, enforcement officials can be authorized to abate hazards using agency or contractor crews and to recoup the costs, along with any unpaid penalties, by placing a lien on the property.

# BENEFITS

# CRITICAL ELEMENTS
Staff requirements: Limited to writing lead hazard control work specifications, ensuring acceptable completion including clearance, and administrative communications to collect costs or impose a lien.

Other resource requirements: Access to qualified crews, contracted or in-house, to perform the lead hazard control.

Institutional capacity required: Agencies will need statutory authority to enter the premises and do the work, as well as to place a lien on the property. In addition, the agency will need the capacity to perform independent clearance testing.

# Cost considerations: Working capital or other financing is needed to pay for the repair work pending recovery of costs when the property is sold or refinanced.

Timing issues: None.

# Feasibility of Implementation: Moderate. Political will is needed to supersede owners' rights, to allow the city or its agents authority to enter the property and perform lead hazard control, and to impose liens. The strategy is best used within a continuum of approaches that include voluntary compliance and financing mechanisms.

# POTENTIAL OBSTACLES/BARRIERS
May require relocation of occupants; these costs would be included in the owner's indebtedness to the city.

# ADDITIONAL RESOURCES

# ATTACH PROPERTY-SPECIFIC LEAD HAZARD INFORMATION TO PROPERTY DEEDS

# DESCRIPTION OF THE STRATEGY
The federal lead hazard disclosure law requires property owners to communicate the presence of known lead-based paint and lead hazards to the prospective buyer or tenant when selling or renting a property built before 1978. However, disclosure requirements are not consistently implemented or understood. As a result, buyers may purchase properties without knowledge of existing and identified lead hazards. To ensure that buyers are informed of these lead hazards, copies of lead hazard violations, repair orders, and clearance reports could be attached to the property deed and made available for review through the title search. Attaching property-specific lead hazard information to the deed would also offer the potential to monitor future disclosure of the identified hazards to prospective and renewal tenants in the property.

# BENEFITS
Immediate/Direct Results: Documented hazards alert prospective purchasers and tenants to hazards. If prospective purchasers are unwilling to purchase properties with existing lead hazards and bear the cost of repair, property owners may be motivated to remediate hazards.

Public Health Benefits: Properties that have already been identified as sources of potential or actual poisoning receive attention. As a result, future occupants with young children will be living in a lead-safe environment.

Other Indirect/Collateral Benefits: Attaching property-specific lead hazard information to the deed allows the public agency to track and document owner knowledge of lead hazards.
This public record could be used to verify compliance with disclosure requirements and bolster enforcement, thereby ensuring that a greater number of new or renewal rental tenants receive disclosure.

# SCOPE OF POTENTIAL IMPACT
Statewide

# PRIMARY ACTORS
Code or Building Inspection Agency
Housing Agency
Property Taxation Agency

# KEY PARTNERS
Registry of Deeds

# CRITICAL ELEMENTS
Staff requirements: No additional full-time staff is required to successfully enact this strategy. Implementation requires only the cooperation of existing Registry of Deeds staff.

# Other resource requirements: N/A

Institutional capacity required: The federal disclosure law requires property owners to divulge known information about lead-based paint and lead hazards. The Registry of Deeds must be given the authority to attach lead hazard violations, repair orders, and clearance reports to individual property deeds. Statutory authority to issue code violations on lead hazards (and/or standards for issuing orders to reduce identified lead hazards) increases the impact of the disclosure law.

Cost considerations: Attachment to title involves the cost of processing the appropriate paperwork. In the illustration below, the average cost is $24 per order (based on one charge for the first page and a lower charge for each additional page); the charge varies with the length of the order.

# POTENTIAL OBSTACLES/BARRIERS
The Registry of Deeds must have the statutory authority to attach orders to the deed, so jurisdictions interested in implementing this strategy should check current regulations and advocate for the appropriate changes. Another consideration is ensuring that confidential information related to any associated blood lead tests taken during the identification of the hazards is handled appropriately.

# ADDITIONAL RESOURCES N/A

# ILLUSTRATION OF STRATEGY IN PRACTICE
If a lead investigation reveals an existing or potential lead exposure hazard, the State issues an Order of Lead Hazard Reduction (order) on the property, which requires that all lead exposure hazards be corrected. Under federal law, property owners are required to disclose the presence of lead hazards when selling or renting a property, but owners do not always comply. To ensure that buyers are aware of an outstanding order, the New Hampshire Childhood Lead Poisoning Prevention Program (NHCLPPP) sends all orders to the Registry of Deeds, which then attaches the order to the property deed. In the event that the owner attempts to sell the property without proper disclosure, the purchasers will discover the order during the routine title search.

Limitations/challenges/problems encountered: This is an indirect strategy that strengthens the federal disclosure law and helps ensure that buyers are aware of unabated lead hazards.

Magnitude of Impact/Potential Impact: NHCLPPP expects the attachment to increase the number of properties coming into compliance over time, as owners will be motivated to remediate hazards in order to sell the property.

Potential for replication: Very high. NHCLPPP has found this to be a simple, low-cost strategy to implement.

# POTENTIAL OBSTACLES/BARRIERS
There are two potential obstacles to this strategy's success. First, if case law is desired in the compilation, search tools are needed to locate past cases on lead poisoning prevention. Second, staff or interns may lack the time to maintain and update the compilation.
Other resources utilized: Appendix information came from various state and federal agencies, and the Corporation Counsel's office reviewed the bench book for accuracy and completeness.

# ADDITIONAL RESOURCES

# Factors essential to implementation: It was essential to find up-to-date information and to present that information verbatim. The information had to be summarized in an objective, non-judgmental manner to help judges apply the law independent of subjective information and anecdotes, and to allow judges and attorneys to be as efficient as possible by consistently consulting the same edition of lead poisoning prevention and safety laws.

Limitations/challenges/problems encountered: There were no significant limitations or challenges in producing the bench book for Cook County. However, as Loyola tried to develop bench books for other counties in Illinois, it found it difficult to: a) understand the court procedures in some of the other counties; b) identify who could provide information on the procedures and the information included in the Cook County bench book appendix; and c) identify the appropriate people to discuss distribution of the bench books.

Magnitude of Impact/Potential Impact: Unknown.

Other resource requirements: As additional code inspectors are employed, they must be provided essential equipment and technical support, including vehicles and computers.

# Potential for Replication: High.

# Contacts for Specific Information

# Institutional capacity required: Statutory authority is required to give housing code inspectors the authority to enter rental housing to conduct regular inspections. Statutes should specify the universe of units to be inspected (e.g., rental housing in buildings with two or more units); how frequently inspections are to be conducted; what type of notice is required for each party (owner and tenant); funding sources for inspections (e.g., any fees imposed upon rental property owners to cover inspection costs); and enforcement provisions, including penalties for non-compliance. Newly hired inspectors will need to be qualified to conduct inspections, issue notices of violation, and commence enforcement actions. Experienced inspectors will require continuing education to ensure that they are aware of any new standards or technological advances.

# Cost considerations: Even if the costs of periodic inspections are passed along directly to tenants, these programs need not have an adverse effect on affordable housing. When Los Angeles adopted its Systematic Code Enforcement Program in 1998, the city hired 67 new housing inspectors. The program was initially funded by a $1.00 per-unit monthly fee, which has since been increased to $2.27. New Jersey uses a sliding scale to determine the per-unit inspection fee imposed upon owners, depending on the number of units inspected. The maximum per-unit fee is $43 every five years.

# Timing issues: Hiring and training additional inspectors may take several months. In addition, landlords will need to be made aware of the new requirements, receive guidance on building improvement requirements, and incorporate periodic inspection language into leases. Tenants will also need education on the new requirements and procedures.

# Feasibility of Implementation: High. These programs are feasible and effective, assuming they are adequately funded and enforced.
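A rough back-of-envelope calculation shows how the fee levels cited in the cost considerations above can sustain an inspection program. The per-unit fees below come from the text; the unit count and the loaded cost per inspector are purely illustrative assumptions, not program data.

```python
# Back-of-envelope check on the cost considerations above. Fee levels come from
# the text; the unit count and loaded cost per inspector are illustrative
# assumptions, not program figures.
LA_FEE_PER_UNIT_MONTH = 2.27   # Los Angeles SCEP fee, per unit per month
NJ_MAX_FEE_PER_CYCLE = 43.00   # New Jersey maximum per-unit fee per five-year cycle

la_annual_per_unit = LA_FEE_PER_UNIT_MONTH * 12   # about $27.24 per unit per year
nj_annual_per_unit = NJ_MAX_FEE_PER_CYCLE / 5     # about $8.60 per unit per year

UNITS = 150_000                # hypothetical registered rental units
COST_PER_INSPECTOR = 90_000    # hypothetical loaded annual cost per inspector

la_revenue = UNITS * la_annual_per_unit
print(f"LA-style fees: ${la_revenue:,.0f}/yr, "
      f"funding roughly {la_revenue / COST_PER_INSPECTOR:.0f} inspectors")
```

Under these assumptions, a $2.27 monthly fee on 150,000 units yields on the order of $4 million per year, enough to fund roughly 45 inspectors, which is consistent with the claim that fee revenue can offset program costs.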
# POTENTIAL OBSTACLES/BARRIERS
Perhaps the greatest obstacle facing periodic inspection programs is generating the political will necessary to put the programs in place.

# ADDITIONAL RESOURCES

# ILLUSTRATION #1 OF STRATEGY IN PRACTICE
Under Los Angeles' Systematic Code Enforcement Program (SCEP), adopted in 1998, every residential rental property with two or more units must be inspected on a regular basis (currently, units are inspected at least once every five years). The program was funded initially by a $1.00 per unit per month fee paid by property owners, which, under the law, can be passed on to tenants. Low-income tenants strongly supported the passage of the program, including the monthly fee, which has since been increased to $2.27. Los Angeles is in the process of incorporating lead hazard screening into its periodic inspections, including requiring lead-safe work practices when repairs are undertaken. To complement the SCEP, a loan program has been created to provide funds to small apartment owners to help them finance repairs.

Other resources utilized: A computerized tracking system is required to track compliance and enforcement.

# Factors essential to implementation:
1. Inspection fees and penalties for non-compliance must be sufficient to cover the costs of the inspection program.
2. A streamlined enforcement process minimizes the resources needed to ensure compliance:
a. If an owner fails to contest a violation within 15 days of receiving a citation, the owner is deemed to admit to the violation.
b. If the owner fails to remedy the violation in a timely manner, BHI (the state Bureau of Housing Inspection) imposes a penalty and sets a deadline for compliance.
c. Owners who fail to comply and pay the penalty are pursued in court, where they are barred from contesting the violation.
d. Once BHI obtains a judgment, it can impose a lien on the owner's assets, both personal and corporate.

Limitations/challenges/problems encountered: One limitation of New Jersey's program is that it does not address buildings with fewer than three dwelling units. Efforts have been underway to include those buildings in the periodic inspection program but have not succeeded to date.

Magnitude of Impact/Potential Impact: New Jersey's periodic inspection program inspects approximately 162,000 dwelling units per year and, over time, achieves compliance in 95% of cases.

Potential for replication: Very high. These programs are readily replicated and highly effective.

# Contact for Specific Information

# POTENTIAL OBSTACLES/BARRIERS
The housing code enforcement agency may be reluctant to delegate authority to, or share it with, CLPPP staff out of concern that the staff will not follow the procedures properly. Staff may also be concerned about overwhelming the legal system needed to complete the enforcement process. If housing code enforcement has been lax in the past, suddenly adding lead hazard enforcement will be controversial with property owners.

# ADDITIONAL RESOURCES N/A

# ILLUSTRATION OF STRATEGY IN PRACTICE
When the city and the county consolidated operations in the mid-1970s, the health department and the county hospital became part of a quasi-governmental corporation. The county delegated responsibility for housing code enforcement to the health department and subsequently established an Environmental Court to prosecute housing code violations and related issues.
The CLPPP issues citations for houses where it finds deteriorated paint or lead hazards, often as a result of an environmental investigation of a lead-poisoned child. Between January 2000 and July 2003, the CLPPP issued more than 200 citations and pursued those cases in the Environmental Court to get the hazards resolved.

# Jurisdiction or Target Area:

# Secondary Actor(s): N/A

Staffing utilized: Six FTE staff trained and licensed as risk assessors conduct the inspections, manage the enforcement process, and re-inspect some properties as needed.

Other resources utilized: The close cooperation of the inspectors with other CLPPP staff enhances the overall effectiveness of the program. The Environmental Court streamlines the process and ensures that the hazards are addressed.

# Factors essential to implementation: The agency responsible for housing code enforcement must be willing to cooperate with the CLPPP staff and have a streamlined process to enforce code citations.

Limitations/challenges/problems encountered: Staff must follow specific procedures and be prepared for delays, as property owners must be notified and prodded with orders and fines to address the problem.

Magnitude of Impact/Potential Impact: Between January 2000 and July 2003, the CLPPP issued more than 200 citations and managed those citations in the Environmental Court to get the hazards resolved.

# Potential for replication: Very high

# Contact for Specific Information

# CREATE A SPECIAL LEAD COURT

# DESCRIPTION OF THE STRATEGY
One obstacle to effective enforcement of lead safety in housing is the lack of enforcement capacity. Establishing a special lead court to focus the applicable court system's consistent attention on cases involving violations of lead hazard repair orders can reduce a backlog and enable the assigned judge(s) to become familiar with repeat violators and treat them accordingly. Philadelphia and Chicago have significantly accelerated the processing of cases and reduced the backlog of properties pending corrective action by dedicating court resources to these cases.

# BENEFITS

# CRITICAL ELEMENTS
Staff requirements: The extent of staffing (lawyers and paralegals) must meet the needs of the enforcement caseload.

# Other resource requirements: N/A

Institutional capacity required: A statute that allows the city to hold property owners responsible for code violations is necessary to give the health department the authority to pursue landlords. The court system must determine how to implement the lead court by reworking existing courtroom and judge assignments.

# Cost considerations: No additional cost if the court is funded to fulfill its mandates.

# Timing issues: None

Feasibility of Implementation: High. This strategy could be successfully replicated wherever a statute grants jurisdiction to the inspection and enforcement authority and where cooperation exists among the court, the city prosecutor's office, and the inspection office. Other useful factors include lead hazard control funds to assist landlords with the repairs.

# POTENTIAL OBSTACLES/BARRIERS
A big challenge is identifying the actual property owner. A rental property registration system identifying owners' names and contact information would overcome this problem.

# ADDITIONAL RESOURCES N/A

# ILLUSTRATION OF STRATEGY IN PRACTICE
Three days a week, a special Lead Court convenes within Philadelphia's Court of Common Pleas to hear complaints regarding outstanding lead hazard orders; each session hears an average of 20 cases.
City attorneys set forth all possible remedies available to the city under the various codes, including fines; relocation of tenants at the owner's expense; and, if the owner fails to abate, abatement by the city with authorization to recover costs or place a lien on the property. The potential for court action acts as a great motivator: in the majority of cases, owners have begun the work prior to the hearing, and typically the court responds by ordering that the property owner complete the work by a specified deadline. In instances where the city has had to do the abatement work, the court works out payment plans with property owners to recoup the cost.
Potential for replication: High. As long as the key public agencies are willing to make it happen, this could be replicated anywhere.
# Jurisdiction or Target
# ENABLE TENANTS AND COMMUNITY-BASED ORGANIZATIONS TO TAKE ACTION TO ADDRESS SUBSTANDARD HOUSING CONDITIONS
Limitations/challenges/problems encountered: Language barriers should be expected and budgeted for in non-English-speaking communities. Some code enforcers may view this strategy as infringing on their traditional role and turf. Strong initial negative reaction from some landlords should be expected, possibly followed by retaliation against the project or some tenants upon implementation.
# CRITICAL ELEMENTS
Staff requirements: Beyond one-day training, there will be minimal impact on inspection staff to incorporate deteriorated paint into the inspection protocol. Additional staffing may be needed in the short term for repeat inspections to ensure compliance with repair orders.
# Other resource requirements: N/A
Institutional capacity required: The jurisdiction's health, housing, or property maintenance code must provide the investigating department with enforcement authority.
Cost considerations: Jurisdictions with funding assistance for property owners may be more successful.
# Timing issues: N/A
Feasibility of Implementation: Moderate. Feasible where there is an applicable code and a will to enforce it.
# POTENTIAL OBSTACLES/BARRIERS
Landlords may resist making repairs, citing cost concerns; the city must be prepared to enforce code requirements against recalcitrant owners and landlords.
Other resources utilized: No information provided.
# Factors essential to implementation:
Dubuque is a historic city, and there is considerable pride in its appearance. There are also numerous incentives to make repairs, including the city's comprehensive rehabilitation program that offers financial assistance, its Operation Paint Brush (which helped many lower-income homeowners paint their houses), and a Lead Hazard Control grant, which has assisted with the repair of many properties.
Limitations/problems encountered: Many landlords were initially unresponsive to the city's interest in paint, arguing that it was not a safety issue. Resistance dissipated as it became apparent that the city would not be deterred regarding deteriorated paint.
Magnitude of Impact/Potential Impact: Approximately 1,500 rental properties are inspected each year. The majority are cited for paint violations, and the deteriorated paint is repaired.
Potential for replication: Moderate. Can be replicated wherever there is a property maintenance code that the city is willing to enforce.
# Contact for Specific Information
# INFORM RENTAL PROPERTY OWNERS OF FEDERAL LEAD HAZARD DISCLOSURE REQUIREMENTS
DESCRIPTION OF THE STRATEGY
The federal lead hazard disclosure law requires property owners to provide information on lead poisoning and known property-specific data to tenants upon lease or lease renewal. Many rental property owners are unaware of the law and/or are out of compliance. Community-based organizations (CBOs) or governmental agencies can mail boilerplate letters to rental property owners with information on federal lead hazard disclosure requirements, applicable local laws, and available resources, such as free lead-safe work practices training and lead hazard control grant funds. Agencies and organizations with access to property-specific information related to lead-based paint and hazards can send registered letters that put owners on notice about specific hazards and remind them that this information must be provided to tenants. Other ways to reach out to landlords include seminars, one-on-one meetings, and notices included in water or other bills. Sending complementary mailings to tenants, especially those in units with known lead hazards, puts increased pressure on landlords and further protects tenants by informing them of their rights and providing them with information their landlord may be withholding.
# BENEFITS
Immediate/Direct Results: Providing information about landlord responsibilities under the federal lead hazard disclosure law will increase owner compliance and motivate some owners to take measures to control hazards, especially if resources are available to help them (e.g., training in lead-safe work practices and lead hazard control grants and loans).
Public Health Benefits: Tenants will receive information they need to make informed housing choices and protect their families from lead.
Other Indirect/Collateral Benefits: Reaching out to landlords will help agencies and organizations identify cooperative owners who will benefit from assistance and put them in a position to monitor unresponsive landlords. Also, demand for lead-safe work practices training and enrollment in lead hazard control programs will increase; these programs traditionally have experienced difficulties attracting owners of high-risk properties.
# SCOPE OF POTENTIAL IMPACT
# CRITICAL ELEMENTS
Staff requirements: 0.5-1.25 FTEs for mailings and providing follow-up assistance, depending on ease of access to data and the amount of follow-up assistance provided.
Other resource requirements: Access to a variety of databases and types of information to identify owners of pre-1978 housing in high-risk communities and properties with known lead hazards, such as tax assessor records, elevated blood lead (EBL) data, housing code violations, GIS data, and inspection and risk assessment results.
Cost considerations: Mailings are a relatively low-cost way to reach out to landlords. However, time and resources must be allocated to provide follow-up assistance, along with a plan for follow-up contact with unresponsive owners.
# Timing issues: The entire process, from the research through the mailing stage, can be implemented in less than six months. Follow-up could take a year or more.
# Feasibility of Implementation: High. Fairly easy to implement by any jurisdiction.
# POTENTIAL OBSTACLES/BARRIERS
Identifying and locating the owners of high-risk rental properties is often difficult. Identifying properties with known lead hazards is challenging, particularly for CBOs.
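Where the underlying records are available electronically, the cross-referencing itself is mechanical. The sketch below shows one rough way an agency might join tax assessor records with code violation data to build an owner mailing list; it is a minimal illustration only, and the file names, column names, and CSV format are assumptions invented for the example, not a description of any jurisdiction's actual systems.

```python
# Hypothetical sketch: build a mailing list of owners of pre-1978 rental
# properties that also have open paint-related code violations.
# File and column names below are invented for illustration.
import csv

def index_by(path: str, key: str) -> dict:
    """Read a CSV file and index its rows by the given key column."""
    with open(path, newline="") as f:
        return {row[key]: row for row in csv.DictReader(f)}

# Assumed inputs: assessor.csv (parcel_id, year_built, owner_name,
# owner_address) and violations.csv (parcel_id, violation_type).
assessor = index_by("assessor.csv", "parcel_id")
violations = index_by("violations.csv", "parcel_id")

with open("mailing_list.csv", "w", newline="") as out:
    writer = csv.writer(out)
    writer.writerow(["parcel_id", "owner_name", "owner_address"])
    for parcel_id, record in assessor.items():
        built_before_1978 = int(record["year_built"]) < 1978
        if built_before_1978 and parcel_id in violations:
            writer.writerow([parcel_id, record["owner_name"],
                             record["owner_address"]])
```

In practice, the hardest step is usually not the join but reconciling address and parcel identifiers across agency files, which is where GIS data can help.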
# ILLUSTRATION #1 OF STRATEGY IN PRACTICE
In the City of Cleveland, all water bills, by law, must be sent to property owners (not tenants). With support from a national project funded by a HUD Operation LEAP grant, the Cleveland Department of Public Health has developed a flyer on the lead hazard disclosure law to be included in water bill mailings. It is estimated that 450,000 flyers will be mailed. A website and hotline have been established to respond to owners who want assistance or need more information.
Magnitude of Impact/Potential Impact: The owners of 450,000 properties serviced by the Cleveland Water Authority will be informed of their responsibilities under the federal lead hazard disclosure law and the Fair Housing Act. It is not possible to predict how many will take additional steps, such as visiting the website, calling the hotline, or acting to control lead hazards.
Other resources utilized: Data on code violations for chipping and peeling paint over the last five years were obtained through a "request for data" to the code enforcement agency. Properties with documented hazards were identified through Project 504's CEHRC project. Local GIS data were also used to identify owners.
# Jurisdiction or Target
# Potential for replication: Very high
# Contact for Specific Information
# Factors essential to implementation:
The website is relatively inexpensive and takes enormous pressure off Project 504 staff by minimizing the number of calls from landlords and the number of hard copies of documents sent out by mail. Having resources (e.g., lead-safe work practices training and the LHC program) to offer to property owners is key to owner responsiveness and to building Project 504's credibility with this audience.
Limitations/challenges/problems encountered: Identifying properties with known hazards.
Magnitude of Impact/Potential Impact: In response to the first batch of letters, sent to 257 owners, a dozen owners of multiple properties have contacted the county's lead hazard control program.
# Potential for replication: High
# Contact for Specific Information: Greg Luce 612-521-8888 [email protected]
# PRECLUDE OWNERS FROM RENTING UNITS THAT HAVE BEEN CITED FOR HAZARDS
DESCRIPTION OF THE STRATEGY
Prohibiting owners from renting dwellings that have been cited for lead hazards provides a strong incentive for owners to address the hazards. A jurisdiction can issue an order to vacate (or even cite the unit as unfit for human occupancy) and declare those units uninhabitable. Jurisdictions that require rental licenses or certificates of occupancy can revoke them for cited units to achieve the same end. A prohibition of occupancy must be coupled with measures to protect tenants from eviction, offer relocation assistance when absolutely necessary, and safeguard against possible loss of affordable housing due to gentrification.
# BENEFITS
Immediate/Direct Results: With the potential loss of rental income, property owners will be motivated to remediate hazards.
Public Health Benefits: Hazards will be removed and fewer children will be poisoned. Rental housing will meet minimum standards, resulting in healthier and safer housing.
Other Indirect/Collateral Benefits: More public awareness regarding lead hazards, lead poisoning, and lead poisoning prevention.
# SCOPE OF POTENTIAL IMPACT
City- or County-Wide Neighborhood/Community
# PRIMARY ACTORS
Building or Code Inspection Agency; Housing Agency
# KEY PARTNERS
Federal Agencies
# CRITICAL ELEMENTS
Staff requirements: Additional staffing may be needed to process paperwork such as notices and placards, re-inspect units, and, after violations are cured, approve properties for reoccupancy.
Institutional capacity required: Local code must stipulate that a certificate of occupancy is required for all rental property and is contingent on compliance with minimum property maintenance standards.
Timing issues: Officials must be realistic about the start-up time needed for initial inspections and certifications.
# Feasibility of Implementation: It is helpful for the local jurisdiction to have funding for grants and low-cost loans available to help owners make repairs. Also, partnerships with community organizations to provide outreach and educational materials for property owners can help landlords come into compliance before inspections, thereby minimizing tenant displacement.
# POTENTIAL OBSTACLES/BARRIERS
Without funding for grants and loans, and in the absence of local community group partnerships to provide education and outreach to property owners, code enforcement officials may issue a great number of citations initially, which may increase tensions with property owners. Also, whatever database is used to identify rental units must be current in order to reach as many properties as possible.
# ILLUSTRATION OF STRATEGY IN PRACTICE
As of January 2004, Greensboro requires all residential property owners to acquire and maintain a valid Rental Unit Certificate of Occupancy (CO) before the property can be rented or leased. Units that meet the International Property Maintenance Code (IPMC) minimum standards on the first inspection (or re-inspection within 45 days) get a free five-year CO. Units that do not meet the standards within 45 days must be vacated until they are brought into compliance; at this point, owners must pay $250 for the CO. With additional complaints, the costs rise. Once the unit has a CO, if a complaint inspection results in a violation, the unit must be vacated. Owners must then make repairs within 45 days and pay $500 to restore the CO. The next verified complaint also results in a revoked CO and a vacated unit, but restoring the CO will cost $500 plus $25 for each day that the unit is out of compliance.
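Read as a schedule, Greensboro's escalation is a simple function of the unit's enforcement history. The sketch below encodes the progression described above; the function name, its inputs, and the stage numbering are invented for illustration and are not part of the ordinance.

```python
# Hypothetical sketch of the CO fee escalation described above.
# Stage numbering and names are invented for illustration.

def co_fee(stage: int, days_out_of_compliance: int = 0) -> int:
    """Dollar fee to obtain or restore a Rental Unit CO.

    stage 0: unit passed the first inspection or re-inspection -> free CO
    stage 1: unit failed the initial 45-day window -> $250 for the CO
    stage 2: first verified complaint after the CO -> $500 to restore it
    stage 3+: each later verified complaint -> $500 plus $25 per day
              the unit remains out of compliance
    """
    if stage == 0:
        return 0
    if stage == 1:
        return 250
    if stage == 2:
        return 500
    return 500 + 25 * days_out_of_compliance

# Example: a unit at its third enforcement stage, 10 days out of compliance.
print(co_fee(3, days_out_of_compliance=10))  # 750
```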
The IPMC lists peeling and deteriorating paint as a violation. When an inspector cites a property for paint violations, the owner is referred to the housing and community development program to apply for a lead hazard control grant. A local non-profit, the Greensboro Housing Coalition (GHC), also provides outreach to landlords to educate them about minimum standards and help them find solutions for rental property problems. GHC is also prepared to assist displaced tenants.
Other resources utilized: The Greensboro code enforcement program used this opportunity to revamp software and purchase new notepad computers for mobile operations.
# Jurisdiction or Target
# Factors essential to implementation:
A requirement that all rental property receive a certificate of occupancy that is dependent on property maintenance code compliance.
Limitations/challenges/problems encountered: Greensboro's existing rental unit database was not current, so a new database had to be developed. The number of requests by property owners for inspections, in lieu of waiting until the inspectors reached their area of the city, exceeded the city's expectations.
Magnitude of Impact/Potential Impact: Greensboro implemented the Rental Unit Certificate of Occupancy (RUCO) program in January 2004, so the actual impact cannot yet be measured. However, prior to RUCO, there were no negative consequences for landlords who did not respond promptly to repair orders. With a real tracking system in place with very clear consequences (i.e., loss of rental income), local code enforcers believe there will be a dramatic and permanent change in Greensboro rental housing.
# Potential for replication: Once the ordinance was passed, there were no significant additional burdens to overcome, making this a simple yet effective strategy to replicate.
# REPORT PROBLEM RENTAL PROPERTY OWNERS TO HUD AND EPA FOR DISCLOSURE ENFORCEMENT
DESCRIPTION OF THE STRATEGY
Federal law requires owners of most pre-1978 rental properties to disclose information about lead hazards to tenants at the time of lease or lease renewal. The law provides significant penalties for violations and authorizes enforcement by HUD, EPA, and DOJ. Using "results-oriented" enforcement, federal agencies have investigated cases referred by local agencies and others and generated $14,000,000 in lead safety investments by landlords in 150,000 housing units. Health departments and community-based organizations can facilitate enforcement locally by identifying owners of poorly maintained buildings who fail to comply with disclosure requirements and reporting them to EPA, HUD, or U.S. attorneys. Health departments can strengthen federal enforcement by providing information on documented poisonings and lead hazards in non-compliant properties.
# BENEFITS
Immediate/Direct Results: Landlords who have violated the federal lead hazard disclosure law are encouraged to evaluate, control, and prevent lead hazards in multiple units in exchange for reduced fines.
Public Health Benefits: Tenants living in units where hazards have been controlled or prevented are less likely to be exposed to lead hazards. Future tenants will receive information they need to make informed housing choices and protect their families from lead hazards. Owners forced to follow the disclosure law will be motivated to address lead hazards to avoid having to disclose them.
Other Indirect/Collateral Benefits: This is a good way to target problem landlords, particularly owners of properties responsible for repeat poisonings. If federal agencies pursue results-oriented enforcement, working with federal authorities to bring enforcement actions against property owners may persuade landlords to address lead hazards in all units they own or manage, and it may also yield funding for education, outreach, screening, and other prevention activities through Community Health Improvement Projects (CHIPs) and Supplemental Environmental Projects (SEPs). A few large and well-publicized enforcement cases will also get the attention of other property owners and, ideally, motivate them to comply with the disclosure law and address lead hazards in their properties.
# SCOPE OF POTENTIAL IMPACT
# Other resource requirements: N/A
Institutional capacity required: Address-specific information about lead hazards is necessary. Access to EBL data, tax assessor's records, and data on housing code and other violations is helpful. Local or state lead laws are not required for the implementation of this strategy.
Cost considerations: This is a very cost-effective strategy; a relatively small investment of time and resources can reap tens of thousands of dollars in property owner investments in lead safety and other prevention projects. There is no evidence that results-oriented enforcement of the disclosure law has adversely affected housing affordability.
Timing issues: Typically, it can take more than one year for federal agencies to complete enforcement action, from the investigation through the settlement stage; some cases may take even longer. It can take another two or more years for defendants to complete the work agreed to in the settlements.
# Feasibility of Implementation: Very high. This is an easy strategy for local and state entities to implement, because federal agencies conduct the investigation and enforcement work once cases are referred.
# POTENTIAL OBSTACLES/BARRIERS
Some tenants may be reluctant to report non-compliance or provide documentation for fear of landlord retaliation. It is important to communicate these fears when reporting cases to federal agencies for enforcement so that steps can be taken to protect tenants and safeguard their rights. Also, follow-up monitoring is needed to ensure that landlords implement settlement agreements properly. Federal agencies have the ability to collect penalties if agreements are not honored.
Magnitude of Impact/Potential Impact: In the Chicago cases, the four companies agreed to conduct lead hazard control in a total of 8,642 units at an estimated cost of $6 million. To date, 477 units have been made lead safe and more than $750,000 has been spent on testing and abatement. In addition, the settlements included $77,000 for blood lead screening and $100,000 for abatement of 10 housing units owned by low-income property owners.
Potential for replication: Low. Unless there is a strong local law and resources to proactively inspect properties, it would be difficult for health departments to fully replicate the Chicago experience, but many elements are worth replicating.
# Contacts for Specific Information
Anne Evens, Director, CLPPP, 312-746-7820, [email protected]
Tara Jordan, National Center for Healthy Housing, 410-992-0712, [email protected]
# References for additional information
N/A
# ILLUSTRATION #2 OF STRATEGY IN PRACTICE
Through health department data, the tax assessor's database, and a number of other data sources, Brown University students working for CLAP were able to determine that 887 properties owned by 204 owners had poisoned 2,644 children. CLAP profiled these owners, documented disclosure violations in their properties, and provided this information to the Attorney General's office, which in turn prioritized the cases and forwarded them to EPA Region 1. CLAP continues to document and report disclosure violations as tenants provide tips.
Other resources utilized: Brown University students did much of the initial research. Through a contract between Brown University and the Rhode Island Department of Health, the students had access to health department EBL data. The information they compiled was sent directly to the federal agencies.
# Jurisdiction or Target
Factors essential to implementation: Important components are a good working relationship with the Attorney General's office, as well as networking and building relationships with various agencies to gain access to records.
Limitations/challenges/problems encountered: EPA Region 1 is now poised to act on the cases referred by CLAP, two years after CLAP supplied the documentation.
Magnitude of Impact/Potential Impact: So far, one case has been prosecuted, resulting in abatement of 12 units plus a contribution of $3,000 to community-based organizations working on lead poisoning prevention. In addition, the owner was fined $16,000 and was required to make five presentations about lead-based paint hazards: three to tenants and two to landlords.
# Potential for replication:
# REQUIRE AGENCIES TO DISSEMINATE LEAD POISONING PREVENTION INFORMATION
DESCRIPTION OF THE STRATEGY
Requiring governmental agencies that have regular contact with homeowners, landlords, tenants, and parents to disseminate lead poisoning prevention information to their constituents is an effective way to advance primary prevention. Agencies can enclose information on lead poisoning prevention when they mail items such as property tax statements and water and utility bills or when they provide such documents as birth certificates and building permits. This is an effective, low-cost method that can use existing systems and leverage limited funding while distributing lead poisoning prevention information to thousands of people.
# BENEFITS
Immediate/Direct Results: Lead poisoning prevention information disseminated by public agencies instantly reaches thousands of people who receive property tax bills and pay water and other utility bills.
Public Health Benefits: Especially when tied to building permits, this strategy can alert homeowners and rental property owners to the hazards that could be created by disturbing or removing lead-based paint, as well as educate these groups about lead-safe work practices; both measures can protect public health. This strategy can also alert parents to potential lead hazards and the steps needed to protect their children from those hazards.
Other Indirect/Collateral Benefits: This effort can raise awareness about the extent of lead hazards in a community and potentially generate interest in lead hazard control strategies.
# SCOPE OF POTENTIAL IMPACT
# CRITICAL ELEMENTS
Staff requirements: No new staff should be required; a small percentage of an existing FTE would be needed to produce and distribute the information materials to the participating agencies.
Other resource requirements: The information materials to be enclosed with mailings or document distribution.
# Institutional capacity required: This strategy may require statutory or code authority. It also requires a knowledgeable staff member to compile the information materials and to ensure that all agencies have all required materials for dissemination.
Cost considerations: The cost of producing the materials (writing, editing, graphics, and reproduction) and the incremental cost of collating the document(s) into the other material the disseminating agency was already distributing.
Timing issues: Once underlying statutory or code authority is in place, implementation of this strategy should be very quick.
# Feasibility of Implementation: This strategy should be very easy to implement at any level.
# POTENTIAL OBSTACLES/BARRIERS
A potential challenge is whether property owners and others who receive the information with water or utility bills pay attention to the material they receive.
# ADDITIONAL RESOURCES
N/A
# ILLUSTRATION OF STRATEGY IN PRACTICE
As part of a larger childhood lead poisoning prevention program passed in 1991, the City and County of San Francisco directed the Department of Public Health and its CLPP to put together a collection of information materials that were then disseminated to various audiences by a number of different city and county agencies. This dissemination was required by law (city health code) and involved the public health department, the Tax Collector, the city's water utility, the San Francisco Unified School District, and others. Two very successful portions of this policy expired in 2003. The first was a requirement to include lead hazard notices in each county tax bill. These notices included an official lead hazard Informational Bulletin, as well as a Pre-1978 Hazard Notice, both prepared by the Department of Public Health. The second portion required that all rental property owners with a unit or units constructed before 1978 distribute the official Hazard Notice to all tenants; the property owners had to retain affidavits as proof of distribution. Other requirements still in effect include the following: the birth records office must provide bilingual lead poisoning prevention information with every birth certificate issued; the San Francisco Unified School District, the departments of Social Services and Recreation & Parks, Head Start providers, and libraries disseminate the Informational Bulletin; and city-funded child care and health care facilities are required to distribute information on lead poisoning prevention.
# Jurisdiction or Target Area: City and County of San Francisco
# Factors essential to implementation:
The main factor essential to the implementation of this strategy was the cooperation demonstrated by all departments involved in the information dissemination process.
Magnitude of Impact/Potential Impact: This strategy has reached thousands of families each year since its implementation. Staff at the CLPP estimate that when all portions of the strategy were in effect, lead poisoning prevention information was reaching between 70,000 and 100,000 households every year.
# Limitations/challenges/problems: None
# Potential for replication: The San Francisco CLPP strongly recommends this strategy to other local governments as an easy-to-implement, effective way to increase knowledge of childhood lead poisoning prevention.
# Contact for Specific Information
# REQUIRE AN INSPECTION FOR LEAD-BASED PAINT HAZARDS AT TENANT TURNOVER
DESCRIPTION OF THE STRATEGY
Tenant turnover presents an excellent opportunity for deteriorated paint and other potential lead hazards to be identified and corrected, because the safety and convenience of occupants are not an issue in a vacant unit. Rental property owners can be required to assess and control any lead hazards after the departing tenant leaves but before the new tenant occupies the unit.
# BENEFITS
Immediate/Direct Results: Regular maintenance to correct or prevent lead-based paint hazards reduces the risk of child exposure to lead.
Public Health Benefits: Triggering corrective action by landlords at the time of vacancy institutionalizes lead safety and primary prevention.
Other Indirect/Collateral Benefits: The overall quality of rental housing is improved. Property owners who perform turnover treatments may avoid the high cost of lead abatement that can be required if a child is poisoned, and they may benefit from increased liability protection and lower insurance premiums.
# SCOPE OF POTENTIAL IMPACT
# KEY PARTNERS
Tenants
# CRITICAL ELEMENTS
Staff requirements: The number of staff needed depends on the size of the jurisdiction and the number of rental units.
Other resource requirements: A database of rental housing must exist or be created. Some enforcement presence is needed to monitor and enforce compliance.
Institutional capacity required: A statute, ordinance, or code that requires assessment and control at the time of or prior to tenant turnover; statutory authority to enforce such requirements; and enough code inspectors to implement it.
Cost considerations: Costs associated with creating and maintaining a rental property database; salary and other costs related to monitoring and enforcement.
# Feasibility of Implementation: Variable. For jurisdictions with an existing rental property database and an active code enforcement program, this strategy has the potential to generate profound change with little cost. Jurisdictions with weak code enforcement programs will find this strategy difficult to implement.
# POTENTIAL OBSTACLES/BARRIERS
Since it is impossible for code inspectors to know when rental property is turning over, this strategy's success depends on substantial voluntary property owner compliance as well as an educated renter population. Also, turnover treatment may be difficult in tight rental markets where new tenants need to occupy units quickly, such as on the first of the month because the leases on their previous homes expired the day before.
# ILLUSTRATION OF STRATEGY IN PRACTICE
As part of statutorily mandated Essential Maintenance Practices (EMPs), Vermont law requires owners of pre-1978 rental housing to perform visual, on-site inspections of each rental unit at tenant turnover. The EMPs require, among other things, the inspection of paint condition, including interior and exterior surfaces and fixtures, and the completion of any needed repair. The essential maintenance work must be completed according to safe work practices. Owners must sign a notarized affidavit stating that EMPs have been completed and file it with their insurance carrier and the Vermont Department of Health. In Burlington, VT (the only jurisdiction currently enforcing this law), code enforcement officers conduct workshops to educate landlords about the requirements and how to meet them. The training appeals to the property owners' business sense and self-interest by emphasizing that compliance with the regular property maintenance requirements placates insurance companies, protects the property, and shields property owners from liability. Burlington also sends periodic mailings with detailed materials on lead paint regulations and includes the Department of Health form.
# Jurisdiction or Target Area: Vermont
# Other resources utilized: N/A
Factors essential to implementation: An existing database of rental property or the means to create one; educational outreach materials for property owners; and sufficient staffing to conduct outreach to property owners and property maintenance companies, monitor compliance, and enforce the requirements.
Limitations/challenges/problems encountered: Local jurisdictions other than Burlington do not have code enforcement programs or rental property databases in place and have been unable to enforce the law thus far. Also, the insurance industry's failure to support the law (although its part was voluntary) precludes the incentive that the EMPs' proponents envisioned for property owners to conduct an inspection and obtain a notarized affidavit.
Magnitude of Impact/Potential Impact: There is potential for tremendous impact; regular property maintenance and yearly inspections may be the best method of primary prevention. Burlington Code Enforcement finds that most property owners already inspect at tenant turnover, so educating them about the lead paint requirements is often all that is necessary for compliance.
# Potential for replication: High in a location with a rental property database and sufficient staffing to do landlord outreach and monitoring.
# REQUIRE RENTAL PROPERTY REGISTRATION/LICENSING
DESCRIPTION OF THE STRATEGY
Universal registration or licensing of multi-unit residential buildings with state or local code enforcement authorities helps to ensure that minimum property maintenance standards are met by landlords, particularly absentee landlords. As part of the registration/licensing obligation, owners can be required to provide contact information for themselves, as well as for any agents managing the property, and to designate an agent to receive legal notices in the locality where the property is situated.
# BENEFITS
Immediate/Direct Results: Rental registration and licensing programs ensure that persons with responsibility and authority to maintain buildings can be readily located and served with legal notices. Successful delivery of such notices ensures that non-compliant property owners have received official notification of a code violation. This expedites compliance or enforcement action to achieve compliance with applicable housing quality and maintenance standards. Tenants also benefit from being able to readily locate those responsible for maintaining their homes to inform them about potential housing-related health hazards such as peeling paint or leaks.
Public Health Benefits: Prompt and consistent enforcement of housing codes improves the likelihood of effective maintenance of rental housing, reducing the risk of lead hazards.
Other Indirect/Collateral Benefits: Other systems, such as CLPP programs, housing authorities, and public safety agencies, can use address-based data about rental housing that identifies property owners to fulfill their missions.
# SCOPE OF POTENTIAL IMPACT
# CRITICAL ELEMENTS
Staff requirements: Staff requirements will vary depending on whether the program is adopted at a state or local level. Once a system is in place, nominal staff is required to update records and enforce orders.
Other resource requirements: A computerized information system and methods for disseminating, retrieving, and reviewing registrations.
# Institutional capacity required: Statutory authority is required in order to compel property owners to register their properties. Staff training requirements are minimal.
Cost considerations: Rental registration programs are not costly. These programs can be supported by a registration fee to minimize the impact on the code enforcement program's resources and can generate income to support proactive inspections or enforcement.
Timing issues: An initial phase-in period will be necessary, with a deadline after which owners who have not registered their properties are considered in violation of the requirements. Re-registration should be required, at minimum, when properties change ownership or an owner's or agent's contact information changes.
Feasibility of Implementation: Moderate. These programs can be easily implemented once statutory authority is in place. The key to their effectiveness is adequate staffing in health and/or code inspection agencies to ensure enforcement.
# POTENTIAL OBSTACLES/BARRIERS
Rental registration and licensing programs are in place in a number of jurisdictions. However, in order to be effective, they must be coupled with effective enforcement and meaningful consequences for non-compliance. In New Jersey, for example, many courts will not allow an eviction case to proceed if the property is not registered. In addition, the state can file a docketed judgment for $200 per building if an owner fails to register a property.
# ADDITIONAL RESOURCES
# ILLUSTRATION OF STRATEGY IN PRACTICE
Owners of buildings containing three or more rental units must submit a certificate of registration to BHI as well as a $10 fee per building. Owners must provide personal contact information and designate an agent, residing in the county where the property is located, who is authorized to accept notices from tenants and to receive service of process. Corporate owners must be registered to do business in New Jersey and must identify the corporation's registered agent and corporate officers. If the property is owned by a partnership, the names of all general partners must be disclosed. Mortgage holders also must be identified. Owners must provide the name and address of any managing agent, superintendent, janitor, or other person responsible for the maintenance of the property and must designate someone who can authorize expenditures for emergency repairs. New owners are required to submit a registration form within 20 days of acquiring a property.
# Jurisdiction or Target Area: New Jersey
Primary Actor: Bureau of Housing Inspection (BHI), which is part of the Division of Codes and Standards in New Jersey's Department of Community Affairs (DCA).
# Secondary Actor(s): N/A
Staffing utilized: 1-2 FTEs are required to update records and enforce orders.
Other resources utilized: A computer database, registration forms, and orders.
Factors essential to implementation: Strong enforcement provisions are essential to the effectiveness of the program. If an owner fails to register a property, DCA notifies the owner of the violation and orders the owner to register within 30 days. If the owner still neglects to comply, the Department imposes a penalty of $200 per violation and certifies the debt to the superior court. The clerk of the court immediately dockets a judgment against the owner.
Implementation is also enhanced by tying rental registration requirements to the state's construction code: owners may not obtain a certificate of occupancy for newly constructed rental housing without first procuring a certificate of registration. Finally, owners may not evict tenants from buildings that are not registered.
# UTILIZE EARLY WARNING SYSTEMS FOR DETERIORATING PROPERTIES
DESCRIPTION OF THE STRATEGY
An easy-to-use online tool that integrates public data about deteriorating and potentially deteriorating properties is an innovative means of leveraging code enforcement and targeting high-risk housing. Such a website provides a searchable database of information that enables the identification of properties in danger of decline, drawing on indicators such as code complaints, contract nuisance abatements (city-sponsored repairs to address public safety hazards), tax delinquencies, and liens for unpaid utility bills. City agencies, county agencies, and community groups can use search results to identify properties in trouble; acquire properties headed for abandonment before they deteriorate; determine whether landlords are complying with obligations; and learn about the overall condition of various neighborhoods.
# BENEFITS
Immediate/Direct Results: Coordinating underutilized public information delivers a comprehensive picture of what actually is happening in a particular block or neighborhood. Using a system that identifies properties likely to decline, public agencies can notify property owners about potential lead problems and require or encourage them to make repairs before problems become serious hazards and children are poisoned.
Public Health Benefits: Such a system has multiple applied uses that can prevent childhood lead poisoning: community development corporations and other groups can identify property owners in trouble and offer proactive services, or acquire properties before they deteriorate; residents can determine whether their landlords are complying with their obligations and learn about patterns in their neighborhoods; and code enforcement agencies can use the information to target troubled neighborhoods, problem owners, high-risk properties, and neighborhoods in decline.
Other Indirect/Collateral Benefits: Comprehensive, integrated public information, including mapping capability for visual representation of the data, provides powerful data that can be used to advance policy change. Free training and public access to the collected information bridges the digital divide and encourages the development of technical and community-organizing skills among community residents.
# SCOPE OF POTENTIAL IMPACT
# CRITICAL ELEMENTS
Staff requirements: Staff are needed initially to develop the interface between multiple data sources and to provide training and outreach to engage potential users. Continuing staff capacity is needed to update and maintain the information system.
Other resource requirements: The managing entity requires website management and development capabilities, computers for staff use, and a system for managing the data supplied by government agencies and other entities.
Institutional capacity required: Authority to share data is needed.
Cost considerations: Costs will vary considerably depending on whether an existing information system can be expanded to manage the co-location of multiple data sets or an entirely new system is needed to receive available data sets, and on whether any key sources lack automated capacity or compatible data formats.
A simple system may cost between $50,000 and $100,000.
Timing issues: Between 6 and 18 months is required for planning and initial implementation.
# Feasibility of Implementation: Variable. This strategy could be successfully replicated by jurisdictions that have relevant data in electronic files. There must be an interested party willing and authorized to tackle the integration and centralization of the data.
# POTENTIAL OBSTACLES/BARRIERS
Ambivalence and/or outright resistance from city and county agencies may be encountered. The resistance may stem from turf-protection issues, perceived additional workload, and/or reluctance to make certain information publicly accessible. Another key consideration is ensuring that the central organizing entity has the necessary capacity, knowledge, authority, and credibility to be able to engage all of the necessary players and sustain the project once it is in place.
# ADDITIONAL RESOURCES
N/A
# ILLUSTRATION OF STRATEGY IN PRACTICE
Neighborhood Knowledge Los Angeles (NKLA) is dedicated to preventing Los Angeles housing and neighborhood conditions from deteriorating. It provides a searchable online tool, highlighted on its website, that tracks multiple data sets (including code complaints, building permits, contract nuisance abatements, tax delinquencies, and utility liens) gathered from city agencies. Tenants, community advocacy groups, and others may search the site (in English or Spanish) by zip code, census tract, council district, or address, or by specific user-selected criteria, such as properties with pending code complaint cases. Any of the site's data sets may be viewed area-wide on easy-to-read maps. This mapping function allows users to spot patterns of tax delinquencies, code complaints, or other problems indicating pockets of potential neighborhood decay. Such comprehensive and illustrative information helps neighborhood residents, community organizations, and policymakers mobilize support for community improvement.
Other resources utilized: NKLA used computer labs and local community technology centers to train users at the neighborhood level.
# Jurisdiction or Target
Factors essential to implementation: The project's association with UCLA helped considerably as NKLA sought out relationships with local government agencies that possessed the needed data sets. These relationships were crucial to the project's implementation and success. NKLA has received funding from city and federal governmental authorities as well as private foundations and corporations.
Limitations/challenges/problems encountered: Initially, most of the needed data sets were spread throughout the City of Los Angeles on individual computers or inaccessible mainframe systems. Convincing the individuals involved to share the data was challenging. Keeping pace with ever-changing technology is an ongoing challenge.
Magnitude of Impact/Potential Impact: With the various data sets integrated, the scope of property deterioration was documented clearly, and city staff, politicians, and advocates were better able to grasp the need for regularized code inspection. In October 2000, NKLA received about 250,000 hits each month, with approximately 100 different individual users on the site each day.
# Potential for replication: Moderate. The information central to this strategy already exists in various government agencies, which makes this a promising strategy for replication. NKLA contains "A Political and Technical How-To Kit" for those seeking to replicate the site in other locales. The kit offers advice on overcoming the political as well as technical challenges confronted by the site's creators.
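As a rough sketch of the kind of data integration such a system performs, the example below merges several indicator data sets keyed by address and flags properties with multiple warning signs. The data, field names, and two-indicator rule are invented for illustration and are far simpler than a production system like NKLA's.

```python
# Hypothetical sketch: flag properties at risk of decline by combining
# indicator data sets keyed by address. All data and the scoring rule
# are invented for illustration.
from collections import defaultdict

# Each source maps an address to a value drawn from a separate agency file.
code_complaints = {"12 Elm St": 3, "9 Oak Ave": 1}
tax_delinquencies = {"12 Elm St": True}
utility_liens = {"44 Pine Rd": 2, "12 Elm St": 1}

indicators = defaultdict(dict)
for name, source in [("complaints", code_complaints),
                     ("tax_delinquent", tax_delinquencies),
                     ("utility_liens", utility_liens)]:
    for address, value in source.items():
        indicators[address][name] = value

# Simple illustrative rule: two or more distinct warning indicators.
at_risk = sorted(addr for addr, signs in indicators.items() if len(signs) >= 2)
print(at_risk)  # ['12 Elm St']
```

In a real deployment, the merge key would be a standardized parcel identifier rather than a raw street address, and the risk rule would be tuned with local code enforcement staff.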
# Contact for Specific Information
# APPENDIX E GLOSSARY OF TERMS
Abatement-Any set of measures designed to permanently eliminate lead-based paint or lead-based paint hazards. Abatement includes: (1) the removal of lead-based paint and dust-lead hazards, the permanent enclosure or encapsulation of lead-based paint, the replacement of components or fixtures painted with lead-based paint, and the removal or permanent covering of soil-lead hazards; and (2) all preparation, cleanup, disposal, and post-abatement clearance testing activities associated with such measures.
Clearance examination-An activity conducted following lead-based paint hazard reduction activities to determine that the hazard reduction activities are complete and that no soil-lead hazards or settled dust-lead hazards exist in the dwelling unit or worksite. The clearance process includes a visual assessment and the collection and analysis of environmental samples.
Containment-The physical measures taken to ensure that dust and debris created or released during lead-based paint hazard reduction are not spread, blown, or tracked from inside to outside of the worksite.
Deteriorated paint-Any interior or exterior paint or other coating that is peeling, chipping, chalking, or cracking, or any paint or coating located on an interior or exterior surface or fixture that is otherwise damaged or separated from the surface to which it was applied.
Dry sanding-Sanding without moisture; includes both hand and machine sanding.
Elevated blood lead level-The Centers for Disease Control and Prevention has established 10 micrograms per deciliter (µg/dL) or greater of lead in whole blood as an elevated blood lead level for children under age six.
Encapsulation-The application of a covering or coating that acts as a barrier between lead-based paint and the environment and that relies for its durability on adhesion between the encapsulant and the painted surface, and on the integrity of the existing bonds between paint layers and between the paint and the surface to which it was applied.
Environmental Health Perspectives (EHP)-A journal of the National Institute of Environmental Health Sciences (NIEHS) that presents peer-reviewed articles focused on the impacts of the environment on human health and often includes articles on childhood lead poisoning. EHP is an open-access journal online at http://ehp.niehs.nih.gov/.
Feasibility of implementation-This section of the Building Blocks template estimates the ease with which a particular building block can be implemented. This section uses a feasibility scale that runs from low to variable to moderate to high to very high.
Federal Lead Hazard Disclosure law-A federal statute, administered by HUD and EPA, that requires owners of pre-1978 housing to disclose lead hazards to prospective tenants or buyers.
Friction surface-An interior or exterior surface that is subject to abrasion or friction, including, but not limited to, certain window, floor, and stair surfaces.
HEPA vacuum-A vacuum cleaner with an included high-efficiency particulate air (HEPA) filter through which contaminated air flows. A HEPA filter is one that captures at least 99.97 percent of airborne particles of at least 0.3 micrometers in diameter.
Housing Choice Voucher Program (Section 8)-A HUD-administered assistance program that helps low-income families secure housing they may otherwise be unable to afford.
Impact surface-An interior or exterior surface that is subject to damage by repeated sudden force, such as certain parts of doorframes.
Interim controls-A set of measures designed to temporarily reduce human exposure or likely exposure to lead-based paint hazards. Interim controls include, but are not limited to, repairs, painting, temporary containment, specialized cleaning, clearance, ongoing lead-based paint maintenance activities, and the establishment and operation of management and resident education programs.
Key Partners-Those agencies, organizations, and individuals who work with or should be included in a given building block strategy. They are not the main parties responsible for implementation of a given building block.
Lead-based paint-Paint or other surface coatings that contain lead equal to or in excess of 1.0 milligram per square centimeter or 0.5 percent by weight.
Lead-based paint hazard-Any condition that causes exposure to lead from lead-contaminated dust, lead-contaminated soil, or lead-contaminated paint that is deteriorated or present in accessible surfaces, friction surfaces, or impact surfaces that would result in adverse human health effects as established by the CDC or another appropriate federal agency.
Lead-based paint inspection-A surface-by-surface investigation to determine the presence of lead-based paint and the provision of a report explaining the results of the investigation.
Lead-free housing-Target housing that has been found to be free of paint or other surface coatings that contain lead-based paint.
Lead-safe work practices (LSWP)-A collection of "best practices" techniques, methods, and processes that minimize the amount of dust and debris created during remodeling, renovation, rehabilitation, or repair of pre-1978 housing. Lead-safe work practices help prevent the creation or exacerbation of lead-based paint hazards.
Lead Hazard Control Grant program-A HUD-administered program that awards grants to cities and states to facilitate the control of lead hazards, mainly in targeted low-income housing.
Lead hazard evaluation-A risk assessment, a lead hazard screen, a lead-based paint inspection, paint testing, or a combination of these to determine the presence of lead-based paint hazards or lead-based paint in a residential building.
Paint stabilization-Repairing any physical defect in the substrate of a painted surface that is causing paint deterioration, removing loose paint and other material from the surface to be treated, and applying a new protective coating or paint.
Paint testing-The process of determining, by a certified lead inspector or risk assessor, the presence or the absence of lead-based paint on deteriorated paint surfaces or painted surfaces to be disturbed or replaced.
Painted surface to be disturbed-A paint surface that is to be scraped, sanded, cut, penetrated, or otherwise affected by rehabilitation work in a manner that could potentially create a lead-based paint hazard by generating dust, fumes, or paint chips.
Potential for replication-This section of the Building Blocks template describes the ease with which jurisdictions may be able to implement a specific strategy described in a building block illustration. Such potential for replication is estimated using a standardized scale. The scale runs from low to moderate to high to very high.
Primary Actors-The main parties responsible for implementation of a given building block strategy. These can include public health departments, housing agencies, code enforcement agencies, and community-based organizations, among others.
Public health department-A state, tribal, county, or municipal public health department, or the Indian Health Service.
Rehabilitation-The improvement of an existing structure through alterations, incidental additions, or enhancements. Rehabilitation includes repairs necessary to correct the results of deferred maintenance, the replacement of principal fixtures and components, improvements to increase the efficient use of energy, and the installation of security devices.
The Occupational Safety and Health Act of 1970 emphasizes the need for standards to protect the health and provide for the safety of workers occupationally exposed to an ever-increasing number of potential hazards. The National Institute for Occupational Safety and Health (NIOSH) evaluates all available research data and criteria and recommends standards for occupational exposure. The Secretary of Labor will weigh these recommendations along with other considerations, such as feasibility and means of implementation, in promulgating regulatory standards. NIOSH will periodically review the recommended standards to ensure continuing protection of workers and will make successive reports as new research and epidemiologic studies are completed and as sampling and analytical methods are developed.
The contributions to this document on diisocyanates by NIOSH staff, other Federal agencies or departments, the review consultants, the reviewers selected by the Society for Occupational and Environmental Health and the American Medical Association, and Robert B. O'Connor, M.D., NIOSH consultant in occupational medicine, are gratefully acknowledged.
The views and conclusions expressed in this document, together with the recommendations for a standard, are those of NIOSH. They are not necessarily those of the consultants, the reviewers selected by professional societies, or other Federal agencies. However, all comments, whether or not incorporated, have been sent with the criteria document to the Occupational Safety and Health Administration for consideration in setting the standard. The review consultants and the Federal agencies which received the document for review appear on pages v and vi.
NIOSH recommends that employee exposure to diisocyanates in the workplace be controlled by adherence to the following sections. The standard is designed to protect the health and provide for the safety of employees for up to a 10-hour workshift, 40-hour workweek, over a working lifetime. Compliance with all sections of the recommended standard should prevent adverse effects of diisocyanates on the health of unsensitized workers and provide for their safety. Sufficient technology exists to permit compliance with the recommended standard. Although NIOSH considers the workplace environmental limits to be safe levels based on current information, the employer should regard them as the upper boundaries of exposure and make every effort to keep the exposure as low as possible. The recommended standard will be reviewed and revised as necessary.
Diisocyanates irritate the respiratory tract and can act as respiratory sensitizers, producing asthma-like symptoms in sensitized individuals with exposure at very low concentrations. Exposure to diisocyanates may also result in chronic impairment of pulmonary function.
NIOSH published criteria for a recommended standard for toluene diisocyanate (TDI) in 1973. The present recommended standard is expanded to include all diisocyanates, but not their polymerized forms. It includes most of the provisions recommended in the TDI document but differs where appropriate to reflect newer information or special provisions for other diisocyanates.
Most of the information currently available on the effects of exposure to diisocyanates concerns TDI and, to a lesser extent, diphenylmethane diisocyanate (MDI). In addition to TDI and MDI, occupational exposure limits are recommended for other diisocyanates that have had widespread industrial application: hexamethylene diisocyanate (HDI), naphthalene diisocyanate (NDI), isophorone diisocyanate (IPDI), and dicyclohexylmethane diisocyanate (hydrogenated MDI).
"Occupational exposure to diisocyanates" is defined as exposure to airborne diisocyanates at concentrations above one-half the recommended time-weighted average (TWA) occupational exposure limit or above the recommended ceiling limit.
Exposure to diisocyanates shall be controlled so that no employee is exposed at concentrations greater than the limits specified below. These limits, expressed in µg/cu m, are equivalent to a vapor concentration of 5 ppb as a TWA concentration for up to a 10-hour workshift, 40-hour workweek, and 20 ppb as a ceiling concentration for any 10-minute sampling period. The µg equivalents for selected diisocyanates follow from these vapor concentrations. If other diisocyanates are used, employers should observe environmental limits equivalent to a ceiling concentration of 20 ppb and a TWA concentration of 5 ppb.
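The µg/cu m figures are related to the ppb limits by the standard ideal-gas conversion: the concentration in µg/cu m equals the ppb value multiplied by the compound's molecular weight and divided by the molar volume of air, about 24.45 L/mol at 25 C and 1 atm. The sketch below illustrates the arithmetic only; the molecular weights shown are approximate values assumed here for the example and are not taken from this document, and published limits are typically rounded.

```python
# Illustrative ppb-to-ug/cu m conversion for diisocyanate vapor limits.
# Assumptions (not taken from this document): ideal-gas molar volume of
# 24.45 L/mol at 25 degrees C and 1 atm; approximate molecular weights.

MOLAR_VOLUME_L_PER_MOL = 24.45

APPROXIMATE_MW_G_PER_MOL = {  # assumed for illustration, g/mol
    "TDI": 174.2,
    "MDI": 250.3,
    "HDI": 168.2,
}

def ppb_to_ug_per_cu_m(ppb: float, molecular_weight: float) -> float:
    """Convert a vapor concentration in ppb to ug/cu m."""
    return ppb * molecular_weight / MOLAR_VOLUME_L_PER_MOL

for name, mw in APPROXIMATE_MW_G_PER_MOL.items():
    twa = ppb_to_ug_per_cu_m(5, mw)       # 5-ppb TWA limit
    ceiling = ppb_to_ug_per_cu_m(20, mw)  # 20-ppb ceiling limit
    print(f"{name}: TWA ~{twa:.0f} ug/cu m, ceiling ~{ceiling:.0f} ug/cu m")
```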
(b) Sampling and Analysis

Environmental samples shall be collected and analyzed by the methods described in Appendix I or by any other method at least equivalent in accuracy, precision, and sensitivity.

Section 2 - Medical

Medical surveillance shall be made available as outlined below to all workers exposed to diisocyanates in the workplace.

(a) Preplacement examinations shall include at least:

(1) Comprehensive medical and work histories, with special emphasis directed to evidence of preexisting respiratory conditions such as asthma. A smoking history should also be compiled.

(2) Physical examination giving particular attention to the respiratory tract.

(3) Specific clinical tests including a 14- x 17-inch posteroanterior chest roentgenogram and baseline measurements of forced vital capacity (FVC) and forced expiratory volume at 1 second (FEV 1).

(4) A judgment of the worker's ability to use negative and positive pressure respirators.

(b) Periodic examinations shall be made available at least annually, as determined by the responsible physician, and shall include:

(1) Interim medical and work histories.

(2) Physical examination giving particular attention to the respiratory tract and including measurements of FVC and FEV 1.

(c) During examinations, applicants or employees found to have medical conditions that could be directly or indirectly aggravated by exposure to diisocyanates, eg, respiratory allergy, chronic upper or lower respiratory irritation, chronic obstructive pulmonary disease, or evidence of sensitization to diisocyanates, shall be counseled on their increased risk from working with these substances. Chronic bronchitis, emphysema, disabling pneumoconiosis, or cardiopulmonary disease with significantly impaired ventilatory capacity similarly suggests an increased risk from exposure to diisocyanates. If a history of allergy other than respiratory allergy is elicited, applicants should be counseled that they may be at increased risk of adverse health effects from exposure to diisocyanates. Employees shall also be advised that exposure to diisocyanates may result in delayed effects, such as coughing or difficulty in breathing during the night.

(d) Pertinent medical records shall be maintained. Records of environmental exposures applicable to an employee shall be included in the employee's medical records. Such records shall be kept for at least 30 years after the last occupational exposure to diisocyanates. These records shall be made available to the designated medical representatives of the Secretary of Health, Education, and Welfare, of the Secretary of Labor, of the employer, and of the employee or former employee.

Section 3 - Labeling and Posting

(a) Warning signs shall be printed both in English and in the predominant language of non-English-reading workers. Workers unable to read labels and posted signs shall be instructed concerning hazardous areas and shall be orally informed of the instructions printed on labels and signs.

In any area where there is a likelihood of emergency situations arising, signs required by Section 3(c) shall be supplemented with signs giving emergency and first-aid instructions and procedures, the location of first-aid supplies and emergency equipment, and the locations of emergency showers and eyewash fountains.

Section 4 - Personal Protective Equipment and Clothing

The employer shall use engineering controls where needed to keep the concentration of airborne diisocyanates at or below the limits specified in Section 1(a). The employer shall also provide employees with protective clothing and equipment of materials resistant to penetration by diisocyanates, such as rubber or polyvinyl chloride, when necessary to prevent skin and eye contact with diisocyanates. Protective equipment suitable for emergency use shall be located at clearly identified stations outside the work area.

(a) Eye Protection

The employer shall provide face shields (20-cm minimum) with goggles and shall ensure that employees wear the protective equipment during any operation in which splashes of liquid diisocyanates are likely to occur. Protective devices for the eyes and face shall be selected, used, and maintained in accordance with 29 CFR 1910.133.

(b) Skin Protection

The employer shall provide appropriate protective clothing and equipment that are resistant to penetration by diisocyanates, including gloves, aprons, suits, and boots, and shall ensure that employees wear these when needed to prevent skin contact with liquid diisocyanates. Workers within 10 feet of spraying operations, or at greater distance when there is a greater drift of spray, shall be protected with impervious clothing, gloves, and footwear in addition to required respiratory protection. Rubber shoes or rubbers over leather shoes shall be worn whenever there is a possibility that liquid diisocyanates may be present on floors.
(c) Respiratory Protection

(1) To determine the type of respirator to be used, the employer shall measure the concentrations of airborne diisocyanates in the workplace initially and thereafter whenever control, process, operation, worksite, or climatic changes occur that are likely to increase the concentration of airborne diisocyanates.

(2) The employer shall provide respirators in accordance with Table 1-1 and shall ensure that the employees use them properly when respirators are required. The respiratory protective devices provided in conformance with Table 1-1 shall be those approved by NIOSH and the Mine Safety and Health Administration as specified in 30 CFR 11.

*Use of supplied-air suits may be necessary to prevent skin contact during exposure at high concentrations of airborne diisocyanates.

(3) Respirators specified for use at higher concentrations of airborne diisocyanates may be used in atmospheres with lower concentrations.

(4) The employer shall ensure that employees are properly instructed and drilled at least annually in the use of respirators assigned to them and on how to test for leakage, proper fit, and proper operation.

(5) The employer shall establish and conduct a program of cleaning, sanitizing, inspecting, maintaining, repairing, and storing respirators to ensure that employees are provided with clean respirators that are in good operating condition.

(6) Respirators shall be easily accessible, and employees shall be informed of their location.

Section 5 - Informing Employees of Hazards

(a) All current and prospective employees working where occupational exposure to diisocyanates may occur shall be informed orally and in writing of the hazards, relevant signs and symptoms of exposure, appropriate emergency procedures, and proper conditions and precautions concerning safe use and handling of diisocyanates. The instructional program shall include a description of the general nature of the environmental and medical surveillance procedures and of the advantages to the employee of participating in these surveillance procedures. Employees exposed to diisocyanates should be warned that symptoms of exposure to diisocyanates, such as nocturnal dyspnea, may occur several hours after the end of the workshift. They should also be advised that improper home use of polyurethane products containing unpolymerized diisocyanates, such as foam kits and varnishes, may increase their risk of work-related health problems. Employees shall be instructed on their responsibilities for following work practices and sanitation procedures to help protect the health and provide for the safety of themselves and of fellow employees.

(b) The employer shall institute a continuing education program, conducted at least annually by persons qualified by experience or training, to ensure that all employees have current knowledge of job hazards, proper maintenance and cleanup methods, and proper respirator use. As a minimum, instruction shall include the information prescribed in paragraph 5(c) below. This information shall be readily available to all employees involved in the manufacture, use, transport, or storage of diisocyanates and shall be posted in prominent positions within the workplace.
(c) Required information shall be recorded on the "Material Safety Data Sheet" shown in Appendix II or on a similar form approved by the Occupational Safety and Health Administration, US Department of Labor.

Section 6 - Work Practices

(a) Control of Airborne Diisocyanates

(1) Engineering controls, such as process enclosure or local exhaust ventilation, shall be used when needed to keep exposure to diisocyanates at or below the recommended environmental limit. Ventilation systems, if used, shall be designed to prevent accumulation or recirculation of diisocyanates in the workplace environment and to effectively remove diisocyanates from the breathing zone of employees. Exhaust ventilation systems discharging to outside air must conform to applicable local, state, and Federal air pollution regulations.

(2) Ventilation systems shall be regularly maintained and cleaned to ensure effectiveness, which shall be verified by semiannual airflow measurements. A log showing design airflow and the results of semiannual inspections shall be kept.

(3) Before maintenance work is undertaken, sources of diisocyanates shall be shut off and isolated. The need for and use of respiratory protective equipment shall be determined as outlined in Section 4.

(b) Confined Spaces

In confined areas where work is performed routinely, such as spray booths, exposure to diisocyanates shall be kept at or below the recommended limits by the use of engineering controls as described in Section 6(a). When nonroutine operations such as cleaning and maintenance must be performed in confined spaces not equipped with such engineering controls, the following requirements shall apply.

(1) Entry into confined spaces, eg, tanks, pits, or process vessels, that may contain diisocyanates shall be controlled by a permit system (see the sketch following this list). Permits shall be signed by an authorized representative of the employer, certifying that preparation of the confined space, precautionary measures, and personal protective equipment are adequate and that prescribed procedures will be followed. Each work permit shall also be signed by the employee entering the confined space.

(2) Confined spaces that have contained diisocyanates shall be isolated by shutting off and sealing sources of diisocyanates.

(3) The confined space shall be cleaned with a solvent, flushed, washed with water, purged with air, and thoroughly ventilated. It shall then be inspected and tested for oxygen deficiency, diisocyanates, and the presence of combustible gases and other suspected contaminants before being entered and reinspected periodically at 1-hour intervals while occupied.

(4) Each employee entering the confined space shall be equipped with a self-contained breathing apparatus as specified in Section 4, a harness, and a lifeline. At least one other employee equipped for entry with the same type of protective equipment shall be stationed outside to monitor the operation. At least one additional person shall be available to assist in an emergency.
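To make the permit requirement in item (1) concrete, here is a minimal sketch of a confined-space entry permit record; the field and function names are illustrative assumptions, not terms prescribed by the standard:

```python
# A sketch of one Section 6(b) entry permit; field names are illustrative.
from dataclasses import dataclass, field

@dataclass
class ConfinedSpaceEntryPermit:
    space_id: str                 # tank, pit, or process vessel
    authorized_by: str            # signature of the employer's representative
    entrant: str                  # signature of the employee entering
    sources_isolated: bool        # diisocyanate sources shut off and sealed
    cleaned_and_ventilated: bool  # solvent-cleaned, flushed, purged, ventilated
    oxygen_tested: bool           # no oxygen deficiency found
    diisocyanates_tested: bool    # airborne diisocyanates within limits
    combustibles_tested: bool     # combustible gases, other contaminants checked
    hourly_reinspections: list = field(default_factory=list)  # while occupied

    def entry_permitted(self) -> bool:
        # Entry requires every preparation and test item to be satisfied
        # and both signatures to be on the permit.
        checks = [self.sources_isolated, self.cleaned_and_ventilated,
                  self.oxygen_tested, self.diisocyanates_tested,
                  self.combustibles_tested]
        return all(checks) and bool(self.authorized_by) and bool(self.entrant)
```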
(c) Storage

(1) Diisocyanates should be stored in closed containers and should be protected from heat and direct sunlight. They should not be stored near bases, primary or secondary amines, acids, or alcohols, since these chemicals may react violently with diisocyanates.

(2) Diisocyanate containers should be kept closed to prevent water from entering the containers, since water and diisocyanates react to produce a water-insoluble urea and carbon dioxide, which can generate enough pressure to rupture the containers. All containers of diisocyanates should be periodically inspected for signs of increased pressure within the containers and to ensure that the integrity of containers and seals is maintained. Leaking containers should be removed to the outdoors or to an isolated, well-ventilated area before the contents are transferred to other suitable containers, and leaks of diisocyanates should be cleaned up immediately.

(d) Control of Spills and Leaks

(1) Adequate facilities for handling spills of diisocyanates shall be provided and shall include suitable floor drainage and readily accessible hoses, mops, buckets, absorbent or decontaminating materials, and protective equipment and clothing.

(2) All spills or leaks of diisocyanates shall be given prompt attention by trained personnel, and all unessential personnel shall be evacuated from the area during cleanup.

(3) Waste material contaminated with diisocyanates shall be disposed of in a manner not hazardous to employees. Disposal methods must conform to applicable local, state, and Federal regulations and shall not constitute a hazard to the surrounding population or environment. Spills of diisocyanates shall not be allowed to enter public sewers or drains in amounts that could cause explosion or fire hazards.

(e) Emergency Procedures

Emergency plans and procedures shall be developed for all work areas where there is a potential for exposure to diisocyanates. The measures shall include those specified below and any others considered appropriate for a specific operation or process. Employees shall be trained to implement the plans and procedures effectively.

(1) Prearranged plans shall be instituted for obtaining emergency medical care and for the transportation of injured workers. A sufficient number of employees shall be trained in first aid so that assistance is available immediately when necessary.

(2) Employees who have significant skin contact with diisocyanates should wash with water or shower to remove the compound from the skin and should then wash the affected areas with alcohol. Contaminated clothing shall be removed and discarded or cleaned before reuse.

(3) In the event of a fire involving diisocyanates, all unessential personnel shall be evacuated from the area. The types of extinguishing media that should be used in fighting diisocyanate-supported fires are dry chemical powder, carbon dioxide, or foam. Water should be used only if large quantities are available. Firefighters should be cautioned of the possibility of exposure to other hazardous chemicals, such as hydrogen cyanide, phosgene, and carbon monoxide.

(4) After the fire has been extinguished, the area shall be inspected by properly protected personnel and shall be decontaminated to remove any suspected diisocyanate residues before unprotected workers are permitted to enter the area.
(f) Laundering

(1) Before being laundered, contaminated clothes shall be placed in a decontaminating solution of water containing 10% ammonia in a container that is impervious to diisocyanates.

(g) Laboratory Activities

When diisocyanates are used in laboratory activities, the following provisions, in addition to other sections, shall be followed.

(1) Mechanical pipetting aids shall be used for all pipetting procedures.

(2) Experiments, procedures, and equipment that could produce aerosols or vapors of diisocyanates shall be confined to laboratory-type hoods, glove boxes, or other similar control apparatus. Exposure chambers and associated generation apparatus shall be separately ventilated.

(3) Surfaces on which diisocyanates are handled shall be impervious to absorption or penetration by these compounds.

(4) Laboratory vacuum systems, hoods, and exposure chambers shall be exhaust-ventilated in a manner consistent with Federal and local air pollution regulations.

(5) Airflow in the laboratory shall be established in a pattern flowing from the least to the most contaminated area. Contaminated exhaust air shall not be recirculated or discharged to other work areas.

Section 7 - Sanitation

(a) Preparing, storing, dispensing (including vending machines), and consuming food and smoking shall be prohibited in work areas where occupational exposure to diisocyanates may occur.

(b) Employees who handle diisocyanates or equipment contaminated with diisocyanates shall be advised to wash their hands thoroughly with soap or mild detergent and water before using toilet facilities, eating, or smoking.

(c) Plant facilities shall be maintained in a sanitary manner in accordance with sanitation requirements listed in 29 CFR 1910.141.

(d) The employer shall provide appropriate changing and shower rooms as required in 29 CFR 1910.141(d,e).

Section 8 - Monitoring and Recordkeeping Requirements

(a) Industrial Hygiene Surveys

Surveys shall be repeated at least annually and as soon as practicable after any change likely to result in increased concentrations of airborne diisocyanates.

(b) Personal Monitoring

If it has been determined that there is occupational exposure to diisocyanates, the employer shall fulfill the following requirements.

A program of personal monitoring shall be instituted to identify and measure, or permit calculation of, the exposure of each employee occupationally exposed to diisocyanates. Personal monitoring may be supplemented by source and area monitoring. In all personal monitoring, samples representative of the exposure in the breathing zone of the employee shall be collected. For each determination of the diisocyanate concentration, a sufficient number of samples shall be taken to characterize employee exposure. Variations in the employee's work schedule, location, or duties and changes in production schedules shall be considered in deciding when samples are to be collected.

Samples from each operation in each work area and each shift shall be taken at least once every 6 months or as otherwise indicated by a professional industrial hygienist. If monitoring shows that an employee is exposed to diisocyanates at concentrations above the environmental limits recommended in Section 1(a), additional monitoring shall be promptly initiated. If this confirms that exposure is excessive, control measures shall be initiated as soon as possible to reduce the concentration of diisocyanates in the employee's environment to less than or equal to the limits recommended in Section 1(a). The affected employee shall be notified of the excessive exposure and of the control measures being implemented. Monitoring of the employee's exposure shall be conducted at least every 30 days and shall continue until two consecutive determinations, at least 1 week apart, indicate that the employee's exposure no longer exceeds the recommended environmental limits. At that point, semiannual monitoring may be resumed.
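A minimal sketch of the follow-up logic in the preceding paragraph, assuming personal samples are recorded as (concentration in ppb, duration in hours) pairs and follow-up determinations as (date, within-limits) pairs; the limit constants and all names here are written out only for illustration:

```python
# Section 8(b) follow-up monitoring logic; names and structure are illustrative.
from datetime import date

TWA_LIMIT_PPB = 5       # recommended TWA limit, Section 1(a)
CEILING_LIMIT_PPB = 20  # recommended 10-minute ceiling limit, Section 1(a)

def twa_ppb(samples: list[tuple[float, float]]) -> float:
    """Time-weighted average of (concentration_ppb, duration_hr) samples."""
    total_hr = sum(hr for _, hr in samples)
    return sum(ppb * hr for ppb, hr in samples) / total_hr

def exceeds_limits(samples, ceiling_peaks_ppb) -> bool:
    """True if the TWA or any 10-minute ceiling sample is above its limit."""
    return (twa_ppb(samples) > TWA_LIMIT_PPB
            or any(p > CEILING_LIMIT_PPB for p in ceiling_peaks_ppb))

def may_resume_semiannual(determinations: list[tuple[date, bool]]) -> bool:
    """Two consecutive determinations, at least 1 week apart, within limits."""
    for (d1, ok1), (d2, ok2) in zip(determinations, determinations[1:]):
        if ok1 and ok2 and (d2 - d1).days >= 7:
            return True
    return False
```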
(c) Recordkeeping

Environmental monitoring records and other pertinent records shall be kept for at least 30 years after the last occupational exposure to diisocyanates. The records shall include the dates and times of measurement, duties and job locations within the worksite, sampling and analytical methods used, the number, duration, and results of samples taken, concentrations of diisocyanates in air estimated from these samples, the type of personal protection in use at the time of sampling, and identification of the exposed employee. Employees shall be able to obtain information on their own environmental exposures. Environmental monitoring records shall be made available to designated representatives of the Secretary of Labor, the Secretary of Health, Education, and Welfare, and the employee or former employee.

Pertinent medical records shall be retained by the employer for 30 years after the last occupational exposure to diisocyanates. Records of environmental exposures applicable to an employee should be included in these records. These medical records shall be made available to the designated representatives of the Secretary of Labor, of the Secretary of Health, Education, and Welfare, of the employer, and of the employee or former employee.
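As an illustration of subsection (c), one environmental monitoring record might carry the required fields as sketched below; the field names and the retention helper are assumptions for the sketch, not prescribed formats:

```python
# One Section 8(c) environmental monitoring record; field names illustrative.
from dataclasses import dataclass
from datetime import date, datetime

@dataclass
class MonitoringRecord:
    employee_id: str             # identification of the exposed employee
    measured_on: datetime        # date and time of measurement
    duties_and_location: str     # duties and job locations within the worksite
    methods: str                 # sampling and analytical methods used
    sample_results_ppb: list     # number, duration, and results of samples taken
    est_conc_ug_per_cu_m: float  # air concentration estimated from the samples
    protection_in_use: str       # personal protection worn at time of sampling

def earliest_disposal_date(last_exposure: date) -> date:
    """Records must be kept at least 30 years after the last exposure."""
    # Naive year arithmetic; a Feb 29 last-exposure date would need handling.
    return last_exposure.replace(year=last_exposure.year + 30)
```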
II. INTRODUCTION

This report presents the criteria and the recommended standard based thereon that were prepared to meet the need for preventing impairment of health arising from occupational exposure to diisocyanates. The criteria document fulfills the responsibility of the Secretary of Health, Education, and Welfare under Section 20(a)(3) of the Occupational Safety and Health Act of 1970 to "develop criteria dealing with toxic materials and harmful physical agents and substances which will describe...exposure levels at which no employee will suffer impaired health or functional capacities or diminished life expectancy as a result of his work experience."

After reviewing data and consulting with others, NIOSH formalized a system for the development of criteria on which standards can be established to protect the health and to provide for the safety of employees exposed to hazardous chemical and physical agents. Criteria for a recommended standard should enable management and labor to develop better engineering controls resulting in more healthful work environments, and simply complying with the recommended standard should not be regarded as the final goal.

Occupational exposure to some of the diisocyanates has produced respiratory illness in workers. In addition to irritating the upper and lower respiratory tract, diisocyanates can cause sensitization, and sensitized individuals may develop asthma upon exposure to diisocyanates in very small amounts. Chronic impairment of pulmonary function has been reported in some workers exposed to diisocyanates. Diisocyanates are also irritating to the skin and eyes.

Further research is needed in a number of areas relevant to controlling occupational exposure to diisocyanates. The possibilities of carcinogenic, mutagenic, teratogenic, and reproductive effects from diisocyanates have not been adequately investigated. Studies in which effects on individuals are correlated with their actual exposures are also needed. Screening tests should be developed to permit early recognition of adverse respiratory effects resulting from sensitization to the diisocyanates. Animal experiments should be performed to determine how concentration and length of exposure affect the development of sensitization. Improved engineering controls should be developed to protect workers in certain diisocyanate applications, such as spraying.

III. BIOLOGIC EFFECTS OF EXPOSURE

The diisocyanates are chemical compounds in which two isocyanate groups, -NCO, are attached to carbon atoms of an organic radical. The chemical and physical properties of various diisocyanates are listed in Table XI-1. Synonyms for these compounds are listed in Table XI-2.

Many diisocyanates exhibit high chemical reactivity. In the presence of water they react exothermically to produce an unstable carbamic acid that rapidly dissociates to form a primary amine and carbon dioxide. The primary amine can react further with excess isocyanate to form a urea derivative. Isocyanates also react vigorously with organic compounds containing reactive hydrogens, especially where the hydrogen atom is attached to oxygen, nitrogen, or sulfur. In biologic macromolecules, these groups occur abundantly, and the isocyanates will therefore react and combine with a variety of sites on these molecules. Polyfunctional isocyanates, such as the diisocyanates, can act as crosslinking agents with biologic macromolecules.

Extent of Exposure

The most common method of synthesis of the diisocyanates is the reaction of primary amines with phosgene. In this process, a primary aliphatic or aromatic amine, dissolved in a solvent such as xylene, monochlorobenzene, or dichlorobenzene, is mixed with phosgene dissolved in the same solvent and allowed to react for several hours at temperatures of about 200 C. More phosgene is added during the process, and the final reaction mixture is fractionated to recover the isocyanate product, as well as hydrochloric acid, unreacted phosgene, the solvent for recycling, and the distillation residue for incineration.

Diisocyanates are used to produce polyurethane foams, coatings, elastomers, and spandex fibers. Toluene diisocyanate (TDI), which is commercially available as standard mixtures of the 2,4- and 2,6-isomers, is generally used in producing flexible polyurethane foams.
Methylene diphenyl diisocyanate (MDI), especially in partially polymerized forms, is used more frequently in rigid foams. A substantial amount of MDI (40-50% of the amount produced) is used in the manufacture of polyurethane systems, such as formulated packages of isocyanates, polyols, fluorocarbon blowing agents, fire retardants, surfactants, and catalysts. TDI and pure MDI, or special liquid MDI products, are used to make elastomers, which are used in manufacturing printing rolls, liners for mine chutes and grain elevator chutes, coated fabrics, shoe soles, and automobile bumpers. MDI is also used in the foundry industry as part of a binding system for casting molds.

The total consumption of MDI and partially polymerized MDI in 1975 was about 300 million pounds. TDI consumption totaled about 400 million pounds, and only a few million pounds of other diisocyanates were used in 1975.

Workers with potential occupational exposure to diisocyanates include adhesive workers, insulation workers, diisocyanate resin workers, lacquer workers, organic chemical synthesizers, paint sprayers, polyurethane makers, rubber workers, shipbuilders, textile processors, and wire-coating workers. A NIOSH survey conducted in 1972-1974 estimated that 50,000-100,000 employees in the United States were potentially exposed to diisocyanates. This number does not include occasional users of isocyanate preparations such as polyurethane varnish and may therefore underestimate the number of workers exposed.

Historical Reports

Toluene diisocyanate and hexamethylene diisocyanate (HDI) were the most widely used diisocyanates in the early stages of the industry, according to Williamson and Munn. Consequently, the earliest reports of hazards from exposure to isocyanates usually involved these compounds. Both of these compounds are among the more volatile diisocyanates, and respiratory and other health problems associated with these compounds prompted the development of less volatile diisocyanates and derivatives, as well as improved handling techniques. As a result, although many new diisocyanate products have been used in industrial applications more recently, the number of reports of toxic effects from exposure to diisocyanates has decreased.

In Germany in 1941, Gross and Hellrung, according to Friebel and Luchtrath, investigated the toxicity of TDI in animal experiments. They exposed dogs, cats, rabbits, and guinea pigs to a commercial TDI preparation at 14-1,400 ppm and reported that, at the lower concentrations studied, irritation of the respiratory tract occurred, and, at the higher concentrations, bronchitis, pneumonia, and pulmonary edema resulted.

According to Brugsch and Elkins, toxic effects of TDI had been observed in German workers handling the substance in war-related industries during World War II. However, the first published account of TDI toxicity in humans was a 1951 report by Fuchs and Valade. They described nine cases of progressive bronchial irritation in French workers exposed to TDI. On continued exposure, seven of the affected workers developed an asthma-like condition, which the authors suggested was allergic.
In 1953, Reinl reported a human fatality attributed to organoisocyanate exposure. This was 1 of 17 cases of respiratory illness in German workers exposed to TDI or other isocyanates. Thirteen of these illnesses were severe. Two workers developed pulmonary edema, which in one case was fatal, terminating in cor pulmonale. In the same year, in Sweden, Swensson et al described three cases of respiratory illness in painters who used lacquer containing isocyanates. Two of these workers had spirometric pulmonary function measurements suggestive of emphysema.

In the 1950's many similar cases of isocyanate toxicity were reported in Europe, including another fatality, and in the United States. These occurred in workers exposed to TDI in manufacturing polyurethane foam or using TDI- or polyisocyanate-based lacquers and glues. As many as 99 cases of respiratory illness were reported from a single US plant manufacturing polyurethane foam. A 1962 review by Elkins et al reported a total of 222 cases of respiratory illness attributed to TDI exposure in the literature through 1960.

Goldblatt and Goldblatt, in a 1956 report, described the case of a chemist exposed to the vapor of heated 1,5-naphthalene diisocyanate (NDI). The chemist developed a severe cough that recurred each time he returned to the laboratory. Gerritsen suggested in 1955 that an asthmatic condition in workers exposed to HDI was the result of an allergic mechanism.

Most of the early reports of respiratory illness in workers exposed to diisocyanates described bronchial asthma or chronic bronchitis, often considered by the authors to involve evidence of sensitization. However, some respiratory illnesses were attributed to direct irritation from TDI, usually as a result of acute accidental exposures.

Friebel and Luchtrath, in 1955, attempted to demonstrate sensitization to TDI in guinea pigs. They were not able to produce allergic asthmatic responses in animals exposed to TDI aerosol at 120 ppm or TDI vapor at 50-80 ppm. Effects on the animals' lungs were attributed to primary toxic action by TDI. Zapp, in 1957, also reported only direct effects on the respiratory tract in rats, guinea pigs, dogs, and rabbits exposed to TDI at 1.5 ppm for about 80 exposures of 6 hours each.

Since 1960, additional cases of occupational illness attributed to exposure to diisocyanates have been reported, but these have been less frequent and less severe as recognition of the hazard has increased. In 1973, NIOSH published criteria for a recommended standard for occupational exposure to TDI. The studies of TDI toxicity on which NIOSH based its 1973 recommendations are discussed in the following sections, as is information on TDI that has appeared in the literature more recently and data on other diisocyanates. Most studies on TDI in this chapter that are dated prior to 1973 were discussed in the earlier document.

Effects on Humans

Much of the investigation of the biologic effects of diisocyanates has been directed toward determining the extent and nature of sensitization to these compounds.
In this document, sensitivity to diisocyanates denotes the tendency of some individuals to have a respiratory response when they are exposed at concentrations much lower than those that irritate the respiratory tract in most people. Sensitization may develop gradually or suddenly after exposure to diisocyanates. The usual response is an asthmatic reaction, characterized by wheezing, dyspnea, and bronchial constriction. Use of these terms is not intended to implicate any particular mechanism as the cause of the reaction. The terms "allergy" and "allergic," on the other hand, are reserved for conditions in which an immunologic response is implied.

Workers occupationally exposed to the diisocyanates in various industries have developed adverse respiratory effects; reports of skin disease and evidence suggesting systemic toxicity from such exposures have been far less numerous. Most of the affected workers have been exposed in manufacturing diisocyanates, in using these compounds to manufacture polyurethane products such as foam, and in painting or spraying polyurethane varnishes and paints. These activities also involve possible exposure to other potentially harmful chemicals, including chlorobenzene, phosgene, styrene, and amines, and little is known of how such mixed exposures may affect the toxicity of the diisocyanates.

Most of the data available on exposure to diisocyanates are on TDI. Several reports on MDI and a smaller number on other diisocyanates, including HDI and NDI, have also been published. In the following subsections, information on the biologic effects of TDI is discussed first, followed by data on MDI and other diisocyanates.

(a) Respiratory Effects

The odor threshold for TDI estimated by Zapp in 1957 was 400 ppb (2.8 mg/cu m) in 12 of 24 men tested. Five years later, Henschler et al estimated an odor threshold of 50 ppb (360 µg/cu m), using the analytical method of Erlicher and Pilz, which they found was more accurate and sensitive than the Ranta method used by Zapp. Eye irritation was experienced by three of six volunteers exposed at this concentration for 10 minutes and five of six exposed for 15 minutes; one also had nasal irritation. At 100 ppb (700 µg/cu m), two of six complained of throat irritation, and exposure at 500 ppb (3,600 µg/cu m) produced eye, nose, and throat irritation in all volunteers.

Pulmonary function testing has been used in many studies of workers exposed to TDI and other diisocyanates to evaluate changes in lung function. In 1964, seven furniture plant employees who sprayed, dipped, or painted with polyurethane varnish developed acute respiratory symptoms 0.5 hour to 3 weeks after their first known exposure. Three measurements made after some improvements in ventilation showed TDI at 80, 100, and 120 ppb (570-850 µg/cu m). All seven men coughed and had difficulty in breathing, and four had blood-stained sputum. Five of the seven were tested for forced vital capacity (FVC) and forced expiratory volume at 1 second (FEV 1) 2-3.5 months after exposure and had higher values than they did shortly after exposure. The responses to a questionnaire given 22 months after exposure suggested to the author that four of six who responded had become sensitized to TDI.
In 1965, Williamson described six workers exposed to TDI in a polyurethane foam plant who developed symptoms that were considered suggestive of sensitization. Four of these, from a workforce of 99, had become sensitized during 18 months of exposure to TDI at concentrations usually below 20 ppb, apparently determined by area monitoring. The author believed that sensitization had resulted from exposure at higher concentrations caused by spills. Immediately after one spill, TDI at 200 ppb (1.4 mg/cu m) was found, but the concentration was less than 5 ppb (35 µg/cu m) 10 minutes later. All six affected workers had asthma or bronchitis with decreased ventilatory capacity (FVC and FEV 1) during these incidents. Some of the subjects were also occasionally exposed to MDI, but always at concentrations below 20 ppb (204 µg/cu m).

In 1976, Charles et al described a case of pneumonitis and three cases of chronic respiratory disease in workers exposed to TDI or HDI. Pneumonitis was diagnosed in a 50-year-old nonsmoker who had experienced difficulty in breathing, weight loss, and fever for 6 weeks at the time of examination. Prior to his 5 years of working in the polyurethane foam industry, he had been a coal miner for 11 years. Chest X-rays showed alveolar filling lesions in both lungs. Two months later, pulmonary function tests showed reduced FEV 1, vital capacity, total lung capacity, and residual volume of the lungs. A bronchogram showed peripheral cystic bronchiectasis in the right upper lobe. All immunologic tests for antibodies against TDI were negative. Microscopic examination of biopsy samples of lung tissue showed variations from normal architecture ranging from diffuse interstitial disease to acute inflammation and end-stage interstitial fibrosis. The authors stated that the areas of filled alveoli resembled desquamative interstitial pneumonitis and that the whole picture resembled that of a pulmonary hypersensitivity response to an inhaled allergen rather than coal-miners' pneumoconiosis, but they did not demonstrate that this was related to TDI exposure.

Two other workers developed severe dyspnea after exposure to spills of TDI. Two to three years after exposure, both workers had moderate obstruction of the airways, as indicated by FEV 1 measurements below predicted values; one also had a decreased vital capacity, and the other, although his vital capacity was normal, still experienced severe nonwheezing dyspnea after minimal exertion.

Similar symptoms occurred in a 61-year-old man who had been a paint sprayer for 43 years with no previous history of respiratory illness; he developed wheezing, dyspnea, and sweating within hours when he first used a polyurethane paint containing HDI. Whenever he was reexposed to the paint by casual contact with fumes from other spraying operations, he again developed symptoms. Testing 1 year after exposure showed moderate obstruction of airways, and he complained of nonwheezing dyspnea after exertion.

Pepys et al tested four patients with occupational asthma for TDI sensitivity by simulating occupational exposures to a two-stage polyurethane varnish with TDI activator. The subjects applied varnish with the TDI activator and, on a separate day, without the activator, to a surface in a small cubicle.
When the activator was used, TDI concentrations in the air reached a maximum of almost 2 ppb (14 µg/cu m), as measured by the colorimetric method of Meddle et al. No TDI was detected when the varnish alone was applied. All the subjects were essentially asymptomatic at the time of testing, and their FVC and FEV 1 values were not more than 10% below predicted values. None showed positive responses in skin tests with common allergens, and none had a family or personal history of allergies.

A 26-year-old boatbuilder, who had been using a two-stage polyurethane varnish system for 8 years, had cough and dyspnea at night. The attacks gradually became more severe and occurred earlier in the day, appearing only when the two-stage varnish was used. Another man, 46 years old, had worked for 8 years as a maintenance engineer whose duties included maintenance of a polyurethane foam machine. He had chest tightness and shortness of breath, which disappeared within 20 minutes after he left work. His symptoms also became progressively worse, developing into severe asthma. Neither man had any known exposure to spills of TDI. In challenge testing, both reacted to the varnish only when the TDI activator was added. The boatbuilder developed a late asthmatic reaction that appeared at 1-2 hours and reached a maximum at 3-4 hours, while the maintenance engineer showed an immediate reaction.

The other two subjects were women who worked in a television factory department where coated wires were soldered. The wire, coated with cured polyurethane and polyvinyl butyral, was dipped into a resin flux containing dimethylamine hydrochloride and then into multicore solder at 460 C. One woman, 44 years old, developed a chronic cough and, after 6 years, wheezing. The second woman, a 54-year-old supervisor in the same department, developed a productive cough, wheezing, and breathlessness, developing into severe bronchitis that kept her away from work for 5 weeks. Her symptoms recurred within 1 week of her return, and this pattern was repeated each time she attempted to return to work. Both women reacted to the varnish with TDI activator. The first woman had an immediate reaction that was resolved in 2 hours but was followed by a late reaction at 3-4 hours. The second woman had only a late reaction at 3-4 hours. She was also tested with various components used in the soldering operation. This test produced a severe asthmatic reaction starting 30-60 minutes after exposure ended and continuing for 6 weeks before her FEV 1 returned to pretest levels. Blood tests on these four patients showed no eosinophilia, but sputum collected from the 54-year-old woman contained eosinophils. This suggested to the authors a reagin-mediated reaction.

The sensitized individuals tested by Pepys and colleagues had adverse reactions to TDI after exposures as brief as 10 minutes at reported concentrations of about 2 ppb (14 µg/cu m). The authors emphasized that none of these sensitized individuals had a known history of heavy TDI exposure, such as exposure to spills. Thus, it appears that exposure to massive amounts of TDI, as from a spill, may not be necessary to produce sensitization.
The same technique of challenge exposure to polyurethane varnish was used by Carroll and coworkers to test four employees who worked in an office adjacent to a factory that used TDI. Three clerical workers and a security guard, among 47 workers in the office block, had histories of asthma-like symptoms, and in two cases these were clearly alleviated during periods away from work. It was discovered that the air inlet for the office building was located only 23 feet from the ventilation outflow of the factory that used TDI, but actual air concentrations in the offices were not determined. The polyurethane varnish mixture used in the challenge testing was one-third TDI, and the authors stated that the atmospheric concentration of TDI created by painting it on a surface in the test chamber was about 1 ppb (7 µg/cu m), but the method of determination was not described. Three of the patients reacted to TDI, one after a 15-minute exposure, one after 30 minutes, and one only after a 60-minute exposure. Exposures to the varnish without the TDI activator produced no reactions. The authors also mentioned that one additional office worker had asthmatic symptoms that were relieved by removal from the environment. This suggests that 4 of the 47 workers were sensitized to TDI, a sensitization rate of about 9%.

To evaluate the specificity of TDI sensitivity, O'Brien et al tested the responses of TDI workers to TDI, histamine, and exercise. The 63 men studied had been referred for investigation of possible work-related respiratory symptoms. A subject was considered sensitive to TDI if his fall in FEV 1 after exposure was 15% more than on the control day. TDI concentrations in the cubicle were measured by a continuous monitor; in 23 cases, breathing-zone sampling was also performed, and the results of the two measurements were found to be closely correlated (r = 0.95).

Fifty-two of the workers were also tested by inhaling an aerosol of histamine acid phosphate at graded concentrations up to 32 mg/ml for 30-second periods. A 20% fall in FEV 1 was considered evidence of bronchial hyperreactivity. Forty-six subjects participated in exercise testing, consisting of free running sufficient to increase the heart rate to 140 beats/minute. A fall in FEV 1 of more than 9% was regarded as indicative of an asthmatic reaction. All subjects were prick tested with 23 common respiratory allergens, and those who reacted to 1 or more were considered atopic.

Thirty-seven of the 63 workers were sensitive to TDI as indicated by respiratory responses to challenge testing, which included 2 immediate, 17 late, and 18 dual reactions. Nine of these workers reacted to TDI at concentrations of less than 1 ppb (7 µg/cu m). When challenged with histamine, 17 [...]. However, in the subgroup of sensitive workers who responded to TDI at less than 1 ppb, there were significantly more reactions to both histamine (P<0.005) and exercise (P<0.01) than in those who reacted only at higher TDI concentrations; this extremely sensitive group also had a significantly higher incidence of exercise-induced asthma than the group that did not react to TDI (P<0.025). Age, atopic status, and history of rhinitis were similar in the TDI-sensitive and nonsensitive groups, but there was a higher incidence of asthma prior to work with TDI and of a family history of allergy in the nonsensitive group. There were no significant differences between smokers, exsmokers, and nonsmokers on the TDI, histamine, or exercise tests.
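The decision rules in this study reduce to threshold comparisons on the percentage fall in FEV 1. A minimal sketch, assuming the "15% more than on the control day" criterion means a difference of 15 percentage points (the study's exact arithmetic is not spelled out here) and with all function names invented for illustration:

```python
# Threshold rules from the O'Brien et al protocol as summarized above.
def percent_fall(baseline_fev1: float, post_fev1: float) -> float:
    """Percentage fall in FEV 1 from its baseline value."""
    return 100.0 * (baseline_fev1 - post_fev1) / baseline_fev1

def tdi_sensitive(fall_tdi_day: float, fall_control_day: float) -> bool:
    # Sensitive if the fall after TDI exposure exceeds the control-day
    # fall by more than 15 percentage points (interpretation assumed).
    return fall_tdi_day - fall_control_day > 15.0

def histamine_hyperreactive(fall: float) -> bool:
    return fall >= 20.0  # 20% fall taken as bronchial hyperreactivity

def exercise_asthmatic(fall: float) -> bool:
    return fall > 9.0    # >9% fall after free running to 140 beats/minute
```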
In a further study, O'Brien et al investigated cross-reactivity to TDI, MDI, and HDI in 24 diisocyanate workers referred for investigation of respiratory symptoms. All 24 men had been exposed to TDI, 14 to MDI, and 6 to HDI; 5 of the latter group had been exposed to all 3 diisocyanates.

All subjects were challenged with TDI by the procedure previously described and with MDI over the same range of concentrations (<1-20 ppb) by heating the material in a closed cubicle. Nine, including the six with previous exposure to HDI, were also challenged by painting with an HDI varnish, but air concentrations of HDI were not measured. All subjects were tested by histamine inhalation.

Sixteen of the subjects were sensitive to TDI, and eight of these also reacted to MDI, including four who had no known previous exposure to MDI. Three of the nine workers tested with HDI showed positive responses; all three had also reacted to both TDI and MDI, and two of them had no previous exposure to HDI. Histamine inhalation produced a positive reaction in five of the eight subjects who did not respond to challenge with diisocyanates, one of the eight who reacted only to TDI, and six of the eight who reacted to both TDI and MDI (including all three who also reacted to HDI). The authors reported that subjects who reacted to more than one diisocyanate had a greater degree of histamine reactivity and reacted to TDI at lower concentrations than did those who reacted only to TDI.

Among workers referred for respiratory symptoms, TDI-sensitive individuals were no more likely to have nonspecific asthmatic responses to histamine or exercise than were those who did not react to TDI. However, those who showed extreme sensitivity to TDI, reacting at concentrations of less than 1 ppb, did have an increased incidence of nonspecific asthmatic responses, suggesting to the authors that specific sensitivity to TDI might be exacerbated by irritative or pharmacologic hyperreactivity of the airways. The existence of such a dual mechanism in individuals extremely sensitive to diisocyanates was supported by the results of cross-challenging with TDI, MDI, and HDI. The authors considered that immunologic cross-reactivity between these three compounds was unlikely because of their structural differences. They concluded that their results were consistent with the existence of a specific mechanism of TDI sensitivity coupled, in extremely sensitive individuals, with a pharmacologic mechanism that also caused increased reactivity to other diisocyanates.

Occupational exposure to MDI has produced respiratory effects similar to those reported from TDI exposure. Longley, in 1964, described an incident in which 12 men who worked 60-120 feet from an MDI foam-spraying operation developed symptoms, including asthmatic breathing, retrosternal soreness, constriction of the chest, cough, retrobulbar pain, depression, headache, nasal discharge, and insomnia.
All 12 workers developed symptoms within several hours after exposure to the mist. The workers actually spraying the MDI foam, who wore full protective clothing and air-supplied respirators, were unaffected. Munn concluded that MDI was a potential respiratory irritant and that in rare instances it could cause sensitization.

In 1972, Lob described a reaction to MDI in a 50-year-old worker in a polyurethane factory who had no history of allergies, bronchitis, or asthma. He intermittently experienced malaise accompanied by fever, nausea, and coughing, usually at the end of the day. After one such attack, a thorough examination showed that he had a slightly decreased vital capacity. After another attack, he had an increased white blood cell count (WBC) of 12,650/cu mm. To determine the factors producing these symptoms, Lob exposed the worker for 3-4 minutes in a simulated operation where plastic belts were welded by heat. The author stated that MDI was detected in the whitish fumes given off during the welding process, but the concentration was not given; no TDI was detected. The worker's body temperature increased to 39 C within 4-5 hours after he was exposed, and he had nausea and a severe cough. Vital capacity decreased slightly, WBC increased, and he had congested conjunctiva, increased pulse, and decreased blood pressure. All these signs were normal the next day. Lob concluded that the onset, severity, persistence, and recurrence of the symptoms were suggestive of an allergic reaction to MDI.

In 1971, Lapp described the effects of brief exposures to TDI and MDI on three men. One was a 38-year-old worker in a chemical plant who had worked with diisocyanates for 13 years. The other two were 25- and 23-year-old medical officers with no previous exposure to diisocyanates. Each subject slowly inhaled TDI from a sniff bottle. After at least 1 day without exposure, each subject was challenged with MDI in the same manner. Pulmonary function of each subject was determined before and after each exposure.

Fifteen minutes after TDI inhalation, the values for FVC, FEV 1, and forced expiratory flow between 25 and 75% of the FVC (FEF 25-75) in the worker who had previously been exposed to diisocyanates were 3.16, 2.79, and 3.62 liters, respectively, compared with corresponding preexposure values of 4.03, 3.68, and 5.92 liters. Airway resistance increased to 123% of the preexposure value at 15 minutes and 166% at 30 minutes. The effects of MDI were reversed following administration of a bronchodilator. Approximately 4-6 hours later, the man again experienced chest tightness and wheezing, and his temperature increased to 100 F. All these symptoms had disappeared by the next morning. The other two subjects showed no loss of pulmonary function after exposure to MDI. Minor changes in their airway resistance and thoracic volume were probably due to chance, according to the author, though he noted that these might have been caused by irritation.

Lapp concluded that the changes observed in the previously exposed individual, who was exposed to TDI and MDI at levels that did not cause such reactions in the other subjects, confirmed his respiratory sensitivity to these compounds.
Since the isocyanates to which this worker had previously been exposed were not identified, this study does not provide evidence on the potential of diisocyanates to produce cross-sensitization.

(b) Immunologic Effects

The studies discussed in the preceding section indicate that some people are sensitive to diisocyanates, reacting to these substances in quantities much smaller than those that produce direct irritation of the lungs in most individuals. The mechanism of sensitization to the diisocyanates has been investigated in immunologic and pharmacodynamic studies on exposed workers.

Allergic responses that result from circulating antibodies can be either immediate or late, or a combination of the two. Immediate responses occur within minutes of exposure to the antigen, and late reactions appear a few hours after exposure. Immediate reactions to some substances are associated with atopy, an innate tendency to develop allergies, which may be related to high serum concentrations of reagin-type immunoglobulin (IgE). Atopy is often judged to be present if two or more skin tests with common inhalant allergens such as pollen and animal dander are positive.

Molecules with molecular weights of less than 10,000 are rarely antigenic; thus, immunologic activity of the diisocyanates probably results from a reaction with a hapten complex formed from a diisocyanate and a naturally occurring antigenic substance such as protein or polysaccharide. Because isocyanates react with hydroxyl, amino, sulfhydryl, or similar groups, it is likely that hapten complexes may be formed. Most investigators who have studied the immune response to diisocyanates have attempted to duplicate this hapten complex by conjugating the isocyanate with a protein such as egg albumin or human serum albumin for use as an antigen in immunologic tests, using a modification of the method described in 1964 by Scheel et al. [...]

Four of these five sensitized workers, but only 1 of 21 unsensitized workers, had a history of allergies before working with diisocyanates. These four sensitized subjects also had clearly positive lymphocyte transformation tests, although all had been without exposure to diisocyanates for at least 6 months before testing. Neither unexposed nor unsensitized exposed workers gave positive responses in this test. Passive cutaneous anaphylaxis (PCA), Prausnitz-Kuestner (P-K), leukocyte histamine release, passive hemagglutination, and gel diffusion precipitin tests were negative in all subjects and did not identify the sensitized workers. Subsequent studies by Nava et al and Butcher et al have not confirmed the diagnostic value of the lymphocyte transformation test for diisocyanate sensitivity.

Bruckner and coworkers noted that all five sensitized workers had been exposed to diisocyanates at concentrations above 20 ppb. They also pointed out that the development of sensitization in these workers occurred only after 2 months to 5 years of repetitive exposures, concluding that overt clinical sensitization might be avoided if workers who showed increasingly severe signs of respiratory irritation were removed from further exposure to diisocyanates.
In 1970, Taylor attempted to detect circulating antibodies to TDI in 55 workers with symptoms suggestive of TDI sensitivity. Their sera were compared with those from 40 unexposed textile workers for antibodies by tests for complement fixation, PCA, and red-cell-linked antiglobulin. None of the control sera, but 23 of the test sera, gave positive results in one or more tests. Five were positive in more than one test, but only one in all three tests. Six sera taken within a few months of an unusually high exposure to TDI that produced severe symptoms all showed positive test results. There was no correlation between positive antibody tests and eosinophilia as determined from blood or sputum samples. The author suggested that the lack of correlation between the tests indicated that they detect antibodies of slightly different specificity or of different immunoglobulin classes.

In 1975, Nava et al described immunologic research on 182 clinical patients, all but one of whom had respiratory symptoms, who had been exposed to diisocyanates in the workplace. Ninety-six of these patients reacted positively to intradermal testing with TDI-protein conjugates. Thirty-seven of the 96 workers with positive tests and 6 of 86 with negative ones were atopic. However, in the 45 patients with immediate reactions, 60% were atopic. The authors concluded that atopy was not a factor in TDI sensitization, since most patients with positive reactions to TDI in intradermal tests were not atopic. However, their data suggest that atopy may be a predisposing factor.

Nava and associates also performed pulmonary function testing on 45 of these patients who were exposed to TDI at 100-130 µg/cu m (14-18 ppb) in challenge tests. Thirty-five patients showed decreased pulmonary function after TDI challenge. An immediate or dual response occurred in 25, whereas 10 had only a late response; in contrast, over half the positive reactions in intradermal tests consisted of a late response only. Tests with acetylcholine on 18 subjects showed that those who were hyperreactive to this bronchoconstrictor tended to have immediate bronchial reactions to TDI. This suggests that a pharmacologic mechanism, as well as an immunologic one, is involved in diisocyanate sensitivity.

In 1975, Porter et al published a retrospective study of sensitization in workers in a TDI manufacturing plant that had been in operation since 1956. The workforce exposed to TDI numbered about 200, remaining fairly constant throughout the study period, and the turnover during the 17 years of the study was about 100 workers. The investigators examined medical records of the workers to determine the relationship of clinical problems to TDI concentrations in the plant. Immunologic and lung function testing were performed on some workers.

From 1956 to 1974, 30 of 300 workers at risk in the plant were judged on the basis of medical examinations to have become sensitized to TDI. At least six workers were hypersensitive to TDI on first exposure, reacting at concentrations below 5 ppb (35 µg/cu m); in other workers, sensitization developed as late as 14 years after initial exposure.
The authors noted that, as individuals became more sensitized, they responded more quickly to TDI exposures and recovered more slowly after removal from exposure. Table III-1 shows the number of new cases of sensitization diagnosed each year in relation to the average air concentration of TDI. The data indicate that a dose-response relationship for sensitization may exist. It is also clear, however, that the incidence of sensitization decreased with time during the years before 1970, when there were no significant changes in average TDI concentrations. The authors attributed this not only to improved control of peak concentrations and increased employee understanding of TDI effects, but also to possible "hardening" of exposed workers.

It appears more likely that most potentially sensitizable workers became sensitized during their earlier years at the higher exposures. The authors noted that sensitized workers were relocated out of the TDI handling area to other parts of the plant. These considerations preclude the assumption that 20 ppb can be regarded as a no-effect level for sensitization on the basis of this study. The low turnover rate implies that only an average of 5-6 workers each year were newly exposed to TDI, so that most of the workers exposed during the last 3 years of the study already had several years of exposure at higher average concentrations. Some workers became sensitized after 14 years of exposure, indicating that some of the sensitivity cases reported in about 1970 developed when the average TDI concentrations were about 60 ppb. It is unclear whether the authors used a weighting procedure in calculating the average concentrations or, perhaps, averaged all results. For this reason, it is impossible to conclude what the TWA concentrations were in the plant, and thus impossible to ascertain a concentration sufficiently low to prevent sensitization.

Porter et al. also presented case studies and results of immunologic and pulmonary function testing for 32 of the workers in this plant; some of these workers had signs of respiratory illness, according to medical diagnosis, while others were asymptomatic. Sera from these workers were tested for the presence of antibodies with the P-K test in monkeys and the PCA test in guinea pigs, by using a test antigen of TDI conjugated with serum protein from the same animal species; this is the only immunologic study found on TDI that used antigens made with homologous proteins in these tests. The P-K test was intended to identify IgE antibodies, which the authors expected to be associated with hypersensitivity reactions, and the PCA was to identify circulating IgG antibodies, which they assumed to confer immunologic protection. The workers' sera were similarly tested with a common pollen antigen.

In the cases described, there was no correlation between the presence of IgE or IgG antibodies against TDI and either clinical symptoms, lung function, or reactivity to pollen antigens.
For example, apparent sensitivity to TDI accompanied by loss of lung function was reported in a worker who had a positive PCA test with TDI antigen but not with pollen antigen and in one who gave a positive P-K test with pollen but showed no antibodies against TDI; another sensitized worker had positive PCA to both TDI and pollen but refused pulmonary function testing. Two workers with sensitization reactions to TDI had positive results with TDI antigen in the P-K test, but no loss of lung function. Workers who showed no signs of sensitization to TDI included some who gave negative results in all immunologic tests and others with positive reactions to TDI in the PCA, P-K test, or both.

These results do not support the authors' hypothesis regarding the roles of IgE and IgG antibodies in TDI sensitization. The authors attributed the loss of lung function, which was apparently independent of the presence of antibodies, to bronchoconstriction; the reactions of these individuals occurred almost immediately upon exposure and were relieved by treatment with a bronchodilator. In contrast, clinical reactions in the two workers with positive immunologic tests but no loss of lung function had developed gradually after several years of exposure. The findings of this study indicate that an immunologic mechanism may be involved in diisocyanate sensitivity but that sensitivity in some individuals may also result from a nonimmunologic mechanism.

As part of a long-term study of workers exposed to TDI, described in detail in Epidemiologic Studies, Butcher and associates reported in 1976 the results of immunologic and inhalation challenge studies of 167 employees who worked in a factory producing TDI. Before TDI production began and at 6 and 18 months afterward, employees were prick-tested to determine their reactivity to a conjugate of TDI with human serum albumin (HSA) and to HSA alone. They were also prick-tested with 15 common inhalant allergens. Using blood taken from the workers at the same test intervals, the investigators determined eosinophil counts and immunoglobulin levels. To identify TDI-specific antibodies, sera were tested with the TDI-HSA antigen by radioimmunoassay tests, the PCA test on guinea pigs, and the P-K test.

Workers were subdivided into groups with constant, intermittent, or no exposure. On initial testing, four workers had positive skin reactions to both TDI-HSA and HSA alone; however, during the third plant visit 6 months later, three individuals reacted positively to TDI-HSA but not to HSA. The authors did not indicate the exposure groups of these persons, but they noted that none of the three showed clinical respiratory responses to TDI.

Both before and after TDI production began, PCA, P-K, and radioimmunoassay tests for TDI antibodies were negative in all subjects. Eosinophil counts did not differ in exposed and unexposed groups. Immunoglobulin levels were similar in all three exposure groups. Six months after production began, both IgG and IgE had increased significantly over preexposure values; however, this increase was apparent in all groups, and the IgE increase was greatest in the unexposed group. The authors therefore concluded that the increase was not related to TDI exposure but probably reflected seasonal variation.
However, the authors noted that the number of individuals tested was too small to indicate that antibody titer was proportional to exposure.

In a 1973 NIOSH health hazard evaluation, Vandervort and Lucas investigated immunologic responses of 90 workers exposed to MDI at average concentrations of up to 11 ppb (110 µg/cu m) in a plant manufacturing fibrous glass tanks. PCA, P-K, and agglutination tests were carried out with a "specially prepared isocyanate antigen," not otherwise characterized. Of 12 men with positive P-K tests, 2 showed respiratory responses to MDI, and 1 had decreased pulmonary function; pulmonary function testing was recommended for 2 others to evaluate their status. The other seven showed no evidence of adverse reactions to MDI, and the authors considered them "hardened" to its effects. Forty workers who gave positive results only in the PCA or agglutination tests were also asymptomatic. It is possible, as the authors suggested, that certain workers giving positive tests for antibodies were immunologically "hardened" to the effects of MDI and that the circulating IgG antibodies that might be indicated by positive PCA tests were involved in conferring such immunologic protection. However, the inadequate characterization of the antigen used in testing makes it difficult to determine the validity of these results.

To evaluate whether the difficulty in detecting diisocyanate antibodies might be due to nonavailability of exposed hapten groups in the antigen, Karol et al., in a 1978 study, used a conjugate of p-tolyl isocyanate with human serum albumin (TMI-HSA) as a test antigen. Because it contained only one isocyanate group on each molecule, this monoisocyanate would not cross-link the protein component of the antigen, increasing the probability that the tolyl portion of the molecule would be sterically exposed. The authors tested 23 employees of a large TDI production facility, 4 of whom were considered sensitized to TDI. Three of these had had a sensitivity response, either a bronchial or a skin reaction, within 1 year before the study; the fourth had avoided exposure to TDI for at least 2 years. The remaining 19 workers were considered unsensitized because they showed no adverse effects when exposed to TDI; in some cases, this judgment was confirmed by negative results in challenge tests with TDI at 20 ppb (140 µg/cu m).

A radioimmunoassay for IgE bound to TMI-HSA showed that the 19 unsensitized workers had antibody titers similar to those of 10 blood-bank donors. However, the sensitized group showed a significantly elevated titer of anti-tolyl antibodies (P<0.01). The three workers who had had TDI reactions within the last year had antibody titers higher than any of the unsensitized or control individuals. Serum binding to the antigen was inhibited in the presence of nonisocyanate tolyl compounds, suggesting to the authors that the antibodies were tolyl-specific. There was no correlation between the tolyl-specific IgE antibodies and the levels of total IgE in the sera.

Incubating lymphocytes from sensitive and nonsensitive workers with TDI in the absence of isoproterenol did not affect cyclic AMP levels.
There was a dose-dependent inhibition of isoproterenol-stimulated cyclic AMP levels in lymphocytes from both sensitive and nonsensitive subjects; there was no significant difference in the ability of cells from these two groups to exhibit cyclic AMP stimulation.

In the mecholyl challenge studies, 6 of the 10 clinically sensitive subjects showed a drop in FEV1 of more than 20% within 1.5 minutes after a single inhalation of mecholyl. Only 1 of the 10 nonsensitized subjects gave such a response, and this occurred 5 minutes after inhaling mecholyl four times.

This study indicates that TDI is not a histamine releaser per se but that it does suppress stimulation of the beta-adrenergic system by isoproterenol. These results agree with those of a similar study by Van Ert and Battigelli on the effects of TDI on histamine release in vitro. Butcher et al. concluded that their findings suggested that TDI may act as a beta-receptor blocking agent. This would produce increased reactivity to agents capable of causing bronchoconstriction, such as mecholyl. In a followup reported in two 1978 abstracts, blood testing after challenge exposures to TDI showed that histamine levels increased after a bronchial reaction, while complement components were not affected. In this study, all TDI reactors reacted positively to mecholyl challenge, and the authors noted that kinetic studies had revealed a strong indication that cells from TDI reactors respond differently than those of nonreactors to the beta-adrenergic agonists isoproterenol and prostaglandin E, and to TDI added alone. These studies suggest that a pharmacologic mechanism is involved in respiratory sensitivity to TDI, but there is no indication whether mecholyl hyperreactivity is a preexisting factor or a result of TDI exposure.

# (c) Skin Effects

Some diisocyanates have been described as skin irritants, but there are few reports in the literature of skin effects from these compounds. Munn has noted that, in several years of study, he has seen only two mild cases of skin irritation from diisocyanates and no cases of skin sensitization. Bruckner et al. reported that 6 of 44 workers in a chemical plant experienced skin irritation attributed to exposure to unspecified diisocyanates. These reactions consisted of erythema only on areas of skin that were in actual contact with the diisocyanates. One worker who often had diisocyanates on his hands noted that his skin had become hard and smooth, so that he had difficulty in turning pages.

Possible skin sensitization to TDI was described in two of the studies discussed in the previous section. Nava et al. reported that a worker with eczematous dermatitis was 1 of 3 workers who reacted positively to TDI in a patch test, out of 182 workers tested. Karol et al. found tolyl-specific IgE antibodies in two workers who displayed immediate skin reactions when exposed to TDI, apparently without a bronchial response. These skin reactions were extensive and not confined to areas where TDI had contacted the skin.

Rothe, in 1976, described 20 cases of occupational skin disease in workers exposed to polyurethanes. Clinical examinations, observation of the course of the disease, reexposure tests, and skin tests were carried out.
Standard and special tests using a variety of isocyanates, amines, and additives were used to determine the specific sensitivity of the workers.

Rothe found 12 cases of contact eczema characterized by follicular papules in workers exposed to MDI or partially polymerized MDI. Ten of these constituted more than half the total number of workers who had come into contact with a polyurethane sealing compound at one plant, and two were from another plant. Several inspections showed that there was very close contact between the workers' skin and the sealing compound. Work clothes were often soaked with resin. The workers had positive skin test reactions to the isocyanate component of the sealing compound. Twenty-five unexposed persons with eczema had negative results. Five of seven workers with MDI allergies exhibited typical eczema reactions to diaminodiphenylmethane (MDA). Only one of these had had previous contact with MDA, which was not used at the plant.

Skin disease disappeared in all four persons sensitized to IPDI after exposure was stopped. Three of the investigators tested themselves with undiluted IPDI, and no reactions occurred within 4 days. However, two of the three investigators developed follicular papules 10 days after testing. Sensitization in these investigators was confirmed in a later test with a 1% IPDI solution, which produced no reactions in six nonexposed subjects.

The other four patients with skin disease included two cases of eczema from TDI exposure, one case with exposure mainly to TDI but also to MDI, and one case of eczema probably related to exposure to triphenylmethane triisocyanate. In all 20 cases there was a pattern of brief exposure to the isocyanate, often caused by spills, with subsequent development of eczema. In most cases, sensitization was confirmed by skin-testing with a dilute solution of the isocyanate suspected to be the agent.

# (d) Other Effects

Although most reports of diisocyanate toxicity have described effects on the respiratory tract or skin, some have noted other effects. These have included eye irritation, psychologic symptoms and CNS effects, and hematologic changes. Most of these effects have occurred following mixed exposures to diisocyanates and other chemicals, and such effects cannot be clearly ascribed to the diisocyanate exposures.

Several studies have suggested that TDI, especially at very high exposure levels, may cause neurologic or CNS effects. In the first published report of occupational illness from TDI exposure, Fuchs and Valade noted that insomnia was often the first complaint of affected workers, preceding any respiratory symptoms. They also mentioned that three patients had a decrease of the knee-jerk and Achilles reflexes. In one patient, who completely lacked these reflexes, the condition persisted for 2 months after he stopped working with TDI and then abruptly returned to normal. In the absence of other signs of exposure-related nervous disorders, the authors did not specifically implicate TDI as the cause of this condition.

A 1964 USSR study investigated the effects of TDI on electrical activity in the human cerebral cortex.
No experimental details were reported, but TDI was said to affect electroencephalographic (EEG) rhythms at a threshold concentration of 100 µg/cu m (14 ppb). This study was not included in the 1973 criteria document on TDI. Little can be made of these results in the absence of any information on experimental methods, but the implication of CNS effects at such a low concentration suggests that such effects should be more carefully evaluated.

In 1965, a Canadian report indicated that 12 of 24 maintenance workers developed respiratory symptoms after they had cleaned pipes and vessels contaminated with TDI. In addition, four of the workers developed psychologic problems, including anxiety neuroses, psychosomatic complaints, depression, and even paranoid tendencies. A year after exposure, they had not returned to work; some still complained of cough and difficulty in breathing, although their pulmonary function tests were normal. This report suggests the possibility that TDI produces CNS effects; cleaning processes, however, involve the use of solvents to which these CNS effects might be attributed. This report did not detail the procedures or solvents used in cleaning the TDI-contaminated vessels.

Burton, reviewing Ontario workmen's compensation claims in 1972, mentioned an incident of TDI exposure in a rubber plant. One of three women employees who developed chronic obstructive lung disease after an acute exposure to TDI also had a "psychogenic problem," not otherwise described. Nevertheless, it is not unequivocal that the effects reported were caused by TDI.

In a 1962 report, Filatova et al. described the effects of mixed exposures to TDI, chlorobenzene, phosgene, toluene diamine, and HDI on 63 men and 17 women who had manufactured diisocyanates for 1-2 years. These effects included irritation of the eyes, nose, and skin, coughing, difficulty in breathing, headaches, insomnia, weakness, tremors, reflex changes, and chest and abdominal pain. Hematologic tests showed decreases in eosinophils and neutrophils, and some workers had slightly enlarged livers with no functional impairment. The authors concluded that the substances produced during diisocyanate production were toxic, but they could not attribute the symptoms to TDI alone, since other compounds that were present could have produced similar effects.

In a related report on workers whose exposure was principally to HDI, only the HDI concentrations were said to be in excess of the MAC. Thirty-two workers complained of headaches, 36 of increased perspiration, 20 of aches in the area of the heart and under the right ribs, 13 of dream disturbances, 12 of difficulty in breathing, 19 of general weakness, and 6 of coughing. All workers reported that HDI vapor irritated their eyes and upper respiratory tract. Nineteen workers, who had worked in the plant for 7-13 years, had developed slightly enlarged livers that were painful upon palpation. Duodenal sampling and blood bilirubin and cholesterol analyses revealed no hepatic lesions. Most of the 55 workers examined for liver abnormalities showed hypocholesteremia, indicating to the authors an early stage of disturbance of liver function. Most workers also showed abnormalities in blood proteins and serum cholinesterase activity.
Approximately 50% of the examined workers had developed chronic subatrophic pharyngitis without any pathologic changes in the lungs. Effects on the cardiovascular system were seen in 47 workers, 27-40 years old, more than half of whom had sinus arrhythmia, bradycardia, extrasystole, and slowing of endoatrial conductivity indicative of toxic myocardiodystrophy. Some workers had tremors of the fingers and eyelids and increased muscular excitability. Filatova et al. concluded that the adverse effects on workers' health were produced by a mixture of toxic compounds whose main component was HDI.

No other reports of hepatotoxicity or cardiovascular effects in diisocyanate workers have been found. It should be noted that chlorobenzene is a hepatotoxin that has reportedly caused hepatic necrosis in animals at high doses and produced an increase in liver weight in rats inhaling 1,150 mg/cu m for 6 months.

# Epidemiologic Studies

Studies of worker populations exposed to TDI have related environmental exposure levels to the incidence and severity of respiratory symptoms, changes in pulmonary function, and immunologic reactivity. Investigations of workers exposed to MDI and HDI have generally provided less useful data because they involved mixed exposures to several other toxic chemicals.

In 1957, Hama et al. reported that 12 workers exposed to isocyanates (TDI) at 30-70 ppb (210-500 µg/cu m) for 1 week in an automobile plant had mild to severe respiratory symptoms including cold symptoms, continuous coughing, sore throat, dyspnea, fatigue, and nocturnal sweating. No symptoms had developed during the previous month when isocyanate concentrations were below 10 ppb (70 µg/cu m), and when concentrations were subsequently reduced to the 10-30 ppb range (70-210 µg/cu m), no further complaints occurred in over 3 months. A written communication from Hama (June 1973) confirmed that the isocyanate was TDI and indicated that exposure concentration measurements were based on breathing-zone samples analyzed by the Ranta method. This method is unable to distinguish between TDI and the TDI urea formed in the presence of water. Thus, the concentrations of TDI in the area were probably less than the reported values.

A detailed 2.5-year study by Walworth and Virchow of a polyurethane foam plant was published in 1959. TDI concentrations ranged as high as 300 ppb (2,200 µg/cu m), but monthly averages were generally below 150 ppb (1,100 µg/cu m). Eighty-three cases of respiratory illness that required medical attention were attributed to TDI exposure; most of them occurred after 3-4 weeks of exposure. The total number of workers at risk was not reported. The authors noted that there was little correlation between measured TDI concentrations and the appearance of respiratory symptoms. They attributed this largely to short exposures at high concentrations not reflected in the measurements of average exposures. They added that once workers experienced adverse effects from TDI they could not tolerate even minute exposures.

In 1964, toxic effects from TDI in workers in three New Zealand plants were reported. At one plant, where usual TDI concentrations ranged from 3 to 120 ppb (20-850 µg/cu m), three cases of respiratory sensitization occurred in 1 year.
In two of these workers, symptoms first appeared after 2-3 hours of pouring TDI inside a refrigerated van, where unusually high concentrations were likely. The third worker, whose symptoms developed gradually, could work 50-60 feet away from the foaming operation, where TDI concentrations were about 5 ppb (35 µg/cu m), but he had a respiratory reaction when he worked within the foaming area. In a similar plant, where TDI concentrations were usually below 20 ppb (140 µg/cu m), there were two cases of mild cold symptoms and one case of possible sensitization, all associated with a foaming operation in which concentrations reached 100 ppb (700 µg/cu m). This plant also reported one case of a severe asthmatic attack and collapse in a worker exposed at a very high concentration. He subsequently returned to work with no evidence of sensitization. In the third plant, two workers exposed to TDI at 18 ppb (130 µg/cu m) wearing canister-type masks experienced very mild cold symptoms at the end of the day when a double run was carried out. The total workforce at risk in these plants was not reported.

In 1962, Elkins et al. described experiences with TDI in 15 Massachusetts plants over a 5-year period. They evaluated the cases of respiratory illness occurring in each of the plants and made environmental measurements, apparently from area samples. Most of the samples were analyzed by the Marcali method. The Ranta method was used for some of the early measurements and found to be less accurate, but the authors did not indicate which measurements were made by this method. Other methods used in a few plants reportedly gave results comparable to the Marcali method. The findings of Elkins and coworkers, as adapted by NIOSH to present what were considered to be relevant dose-response data, were summarized in the 1973 TDI criteria document and are shown in Table III-2. This table omits data from plants where environmental levels were not determined or where the authors considered that these measurements were not representative of exposure. The numbers given for workers at risk are probably somewhat higher than the actual numbers exposed to TDI, which could not be determined from the paper.

Elkins et al. reported established cases as well as 73 questionable cases of respiratory illness associated with TDI exposure. Concentrations higher than 20 ppb (140 µg/cu m) were measured in only three plants. From the data in Table III-2, it can be seen that cases of respiratory illness were associated with all exposure concentrations above 10 ppb (70 µg/cu m), but there were no cases at 7 ppb (50 µg/cu m) or lower. At 9 ppb there were no established cases but one questionable one; there were several established cases at 8 ppb. The authors concluded that the environmental limit for TDI should be considerably less than 100 ppb (700 µg/cu m), and they suggested that 10 ppb (70 µg/cu m) was "not an unreasonable limit."

Eight men who had a positive reaction to histamine had a larger daily decrease in FEV1 than did nonreactors (0.310 vs 0.115 liter). Smoking status was not significantly related to the changes in FEV1. It is therefore difficult to evaluate the significance of the changes reported.
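Because the discussion above moves freely between ppb and µg/cu m, a worked conversion may help. This is standard vapor arithmetic, assuming 25°C and 760 mmHg (molar volume 24.45 liters); it is not a calculation given in the studies themselves:

$$
C_{\mu g/cu\ m} \;=\; C_{ppb} \times \frac{MW}{24.45}
$$

For TDI (molecular weight approximately 174.2) the factor is about 7.1, so 20 ppb corresponds to about 142 µg/cu m, matching the rounded 140 µg/cu m quoted in the text; for MDI (molecular weight approximately 250.3) the factor is about 10.2, and for HDI (molecular weight approximately 168.2) about 6.9.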
The following year, Williamson reported the results of pulmonary function testing over a 14-month period on 15 workers in an operation where TDI was separated from a solvent by distillation. Frequent environmental measurements were made, and these never showed TDI concentrations above 20 ppb (140 µg/cu m), but average concentrations were not given. One major spill occurred during the study, causing concentrations high enough to permit detection of odor, from which the author inferred that the concentration was at least 200 ppb, and the room was immediately cleared. All the workers tested were free of respiratory symptoms. In four series of measurements of FVC and FEV1, the only significant change was a fall in FEV1 at the time of the second measurement (P<0.01), and subsequent tests showed no significant change from baseline FEV1 values. There was little difference between Monday and Friday values; daily changes were not measured.

In the first part of a two-part study, Adams found no significant difference in respiratory symptoms between 76 currently employed men exposed to TDI and controls. Nine of 76 men in the control group had wheezing, compared with only 1 of 76 men exposed to TDI.

In the second part of the study, Adams examined men who had been removed from the TDI plants because of respiratory symptoms such as mild to severe bronchospasm and dyspnea. About 15% of the men employed in the TDI plants were removed in their 1st year because they developed respiratory symptoms. In the 2nd year of employment, only 3.5% of the remaining workers developed respiratory symptoms, and the rate gradually dropped to less than 2%/year after the 5th year, totaling about 20% of the original workforce over the 9 years of the study. Information on symptoms in 46 men removed from the plant, who had not been exposed to TDI for 2-11 years, was collected annually by respiratory questionnaire and compared with responses from 46 age-matched workers not exposed to TDI. These results were correlated with the results of pulmonary function tests. The data were analyzed for statistical significance by chi-square test.

Data from 46 controls and 46 men previously exposed to TDI showed no differences in their smoking habits. However, 17 of the 46 workers previously exposed to TDI developed breathlessness after exertion, significantly more than the 5 men in the control group with this symptom (P<0.01). Wheezing occurred in 17 workers but only in 7 controls (P<0.05). These findings indicated that respiratory symptoms persisted in some subjects after exposure to TDI had ceased.

Pulmonary function data for 61 men who had had no contact with TDI for 2-11 years showed that their average FVC and FEV1 values were slightly lower than control values after adjustment for age and height. Eleven of the 20 workers who had been removed from the plants because of sensitization to TDI and whose preemployment lung function records were available were asymptomatic after 3-8 years without exposure, and 12 of these 20 had FEV1 and FVC values unchanged from their preemployment levels. Six had FEV1 and FVC values between 90 and 100% of their preemployment levels, and two had values of 80-90%.
Those who had reduced pulmonary function complained of dyspnea on exertion, nocturnal dyspnea, and tightness in the chest. Adams concluded that exposure to TDI at about 20 ppb (140 µg/cu m) for 5 years did not increase respiratory symptoms or affect the lung function of workers who were not sensitized to the compound. However, sensitized workers, even when no longer exposed to TDI, had more respiratory symptoms than did unexposed controls, suggesting that effects of TDI are, to some extent, irreversible.

In a longitudinal study of workers in a polyurethane foam plant, reported by Peters and coworkers, area samples for environmental measurements, apparently collected at 6-month intervals, were analyzed by the Marcali method. The initial study, made during December 1966, included 38 workers, 7 of them women, with an average age of 36.3 years (range 18-62 years), employed an average of 104.6 weeks (2-624 weeks). Environmental measurements taken during this period showed TDI concentrations ranging from 0.1 to 3.0 ppb (0.7-21 µg/cu m). Pulmonary function measurements on 34 workers showed a mean daily decrease in FEV1 of 0.19 liter (P<0.001). Significant daily decreases were also noted in FVC (P<0.001), PFR (P<0.05), FR50% (P<0.01), and FR25% (P<0.05). From Monday morning to Friday morning, the mean FEV1, FR50%, and FR25% all showed significant decreases (P<0.001). Responses for smokers and nonsmokers were similar, but workers with respiratory symptoms had a significantly greater decrease in FEV1 than those without symptoms (P<0.05). The authors noted that there appeared to be no relationship between pulmonary function changes and amount of exposure, which they judged from the distance between work stations and sources of TDI.

At the 6-month followup, 28 of the 34 workers were still employed, and 6 new workers were added to the study group. Environmental concentrations at that time ranged from undetectable to a high of 12.0 ppb (85 µg/cu m) in the TDI pouring area. Monday preshift and postshift measurements of pulmonary function showed significant decreases (P<0.02) in both FVC and FEV1; Tuesday morning tests showed essentially complete recovery in FVC, but FEV1 values were still significantly lower than on the previous morning.

When pulmonary function test results were compared with those from tests done 6 months earlier, significant decreases were found in FEV1, the ratio FEV1/FVC, and FR values at 75, 50, 25, and 10% of vital capacity. The authors noted that there was a high correlation (r=0.72) between the 1-day and 6-month decreases in FEV1. The only other variable significantly correlated with pulmonary function test results was lifetime smoking history, and when this factor was held constant, the 6-month changes in FEV1 were still significantly correlated with diurnal changes (r=0.60).

The 12-month followup, made in December 1967, showed a much lower diurnal decrease in FEV1, 0.05 liter. In the 25 workers still available from the original 34, the decrease in FEV1 over 1 year was still significant, but the entire decrease was accounted for by changes during the first 6 months. The authors noted that TDI concentrations measured at this time were very low; the maximum concentration detected was only 1.5 ppb (11 µg/cu m).

Initial measurements showed a dose-related diurnal decrease in FEV1 in the three exposure groups.
At the 2-year followup, only 63 members of the original workforce were still employed. Examination of records showed that 40 of those no longer employed had resigned voluntarily and that these workers had shown a diurnal decrease in FEV1 of 0.126 liter at the earlier testing, compared with 0.096 liter in those who were still employed. While this difference was not significant, the authors noted that it reflected a trend for self-selection based on health among TDI workers.

In general, work assignments had been stable over the 2 years, with workers averaging 20 months at a work station; workers were therefore assigned to exposure groups on the basis of their usual work station. Since 5 workers had variable exposures and could not be assigned to any group, final testing was performed on 57 workers; 20 of these were in each of the high- and low-exposure groups, and 17 were in the medium-exposure group. The incidence of coughing and phlegm production increased with higher exposure; 15% of the 57-person study group had symptoms suggestive of chronic bronchitis, but these were not related to exposure level. The 2-year decrease in FEV1 averaged 0.102 liter (SD = 0.204 liter) in the exposed workers; the groups with low, medium, and high exposure had respective decreases of 0.012, 0.085, and 0.205 liter (SD = 0.204, 0.177, and 0.185 liter). The authors noted that the decrease in the high-exposure group was "clearly excessive," while that in the low-exposure group was "clearly within normal limits." The authors' analysis of variance showed the difference in 2-year decrement in FEV1 in the three groups to be significant at P<0.01. Age, length of employment, and smoking habits did not differ significantly in the three groups. Since several factors that affect lung size, including sex, height, and race, differed among the groups, the authors standardized for lung size by dividing the 2-year decrease by the initial FEV1 measurement; this standardized figure still showed a significant difference between exposure groups.

There was no significant difference in mean FEV1 between workers exposed for less than 3 years and those exposed 10-25 years, although the mean for smokers was significantly different from that for nonsmokers. Laboratory tests indicated there were no alterations of peripheral blood values, hematopoietic system, or kidney function.

Weill et al. and Butcher et al. have reported on the first 5 years of a longitudinal study of respiratory symptoms, pulmonary function, and immune responses in workers at a TDI-manufacturing plant. The study was initiated in April 1973, before TDI production began at the plant, and is planned to extend through 1978.

In the 1978 annual report on this study, Weill et al. noted that only 88 of the original 166 workers were still participating. To offset attrition, workers had been added during the first 3 years of the study, so that some data were available on a total of 277 workers. The original exposure groups were no longer considered valid because of workers transferring from one exposure category to another. Personal monitoring data collected since 1975 were therefore used to estimate cumulative exposures in ppm-months for each worker. Mean TWA exposures were calculated for each of six job categories, ranging from 2 to 6 ppb (14-40 µg/cu m). TDI concentrations for jobs assigned to the control group were found to be below the limit of detectability of the method (reported as 1.5 ppb) more than 99% of the time, and the authors assigned these jobs a mean TWA concentration of 0 ppb. For each worker, time spent in each job category was multiplied by the mean TWA concentration for that job, and the results were summed to determine cumulative exposure.
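The cumulative-exposure bookkeeping just described lends itself to a short illustration. The sketch below is a minimal rendering of that calculation, not the investigators' code; the job-category names and TWA means are hypothetical, chosen only to fall within the 2-6 ppb range quoted above.

```python
# Minimal sketch of the cumulative-exposure estimate described above.
# Job categories and their mean TWA values (ppb) are hypothetical;
# control jobs, below the 1.5-ppb detection limit >99% of the time,
# are assigned 0 ppb as in the text.
MEAN_TWA_PPB = {
    "drum_filling": 6.0,
    "reactor_area": 4.0,
    "maintenance": 3.0,
    "laboratory": 2.0,
    "control": 0.0,
}

def cumulative_exposure_ppm_months(job_history):
    """job_history: list of (job_category, months) pairs for one worker.
    Each job's mean TWA (ppb) is weighted by months worked and summed;
    the total is returned in ppm-months (1 ppm = 1,000 ppb)."""
    ppb_months = sum(MEAN_TWA_PPB[job] * months for job, months in job_history)
    return ppb_months / 1000.0

# Hypothetical worker: 24 months of drum filling, then 36 months in the lab.
print(cumulative_exposure_ppm_months([("drum_filling", 24), ("laboratory", 36)]))
# (6.0*24 + 2.0*36) / 1000 = 0.216 ppm-months
```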
Lung function test results were statistically correlated with these cumulative exposures in cross-sectional and longitudinal analyses. When results were analyzed by smoking and atopy categories, most of the increase in bronchitis was accounted for by nonatopic smokers in the exposed group. There was no significant difference between continuously and intermittently exposed groups, but correlations with cumulative exposures were not made. This is the only study available on TDI workers that provides preexposure data for all workers. In addition, because of the use of continuous personal monitoring, it provides realistic information on actual exposures.

In a 1973 NIOSH health hazard evaluation, Vandervort and Shama investigated respiratory symptoms and acute lung function changes in workers exposed to TDI at low concentrations at a plant making polyurethane foam ice chests and picnic jugs. During a preliminary visit, air samples were collected and analyzed for TDI by the modified Marcali method of Grim and Linch. A questionnaire to identify histories of respiratory symptoms was administered to all 290 employees of the plant, about 200 of them exposed to TDI. The authors did not indicate the total number of workers with respiratory symptoms or describe their exposures to TDI. Twenty-nine of the 200 exposed workers were selected for further study; 13 of these were experiencing respiratory symptoms, as indicated in responses to the questionnaire, and 16 were asymptomatic. These workers were subdivided into moderate- and low-exposure groups on the basis of environmental measurements made at the time of the initial visit. The four workers making up the symptomatic low-exposure group were among 14 sensitized workers in the plant who had been transferred away from the immediate area of the foaming operation because of intolerance to TDI. Seven unexposed control employees, matched to the study group for age, sex, and smoking habits, were selected as controls.

Two weeks after the initial visit, the investigators performed preshift and postshift pulmonary function testing on the exposed and control workers selected for the study. The TWA exposure concentration of each employee was determined for the shift from breathing-zone samples. Short questionnaires were administered before and after the monitored shift and again the next morning to determine whether the employees were experiencing symptoms.

Results of pulmonary function testing showed no significant difference between morning and evening testing except in the symptomatic low-exposure group of four sensitized workers who had been transferred out of the foaming area; this group also showed significantly greater decreases in FVC and FEV1 than did the controls.
The individual with the greatest decrease, who had never smoked, was exposed at a concentration of only 0.2 µg/cu m and thus was highly sensitive to TDI.

In the asymptomatic groups with both moderate and low exposure, all but two of the workers reported mild irritation of the mucous membranes, and three had respiratory symptoms such as coughing or chest tightness. All 13 workers in the symptomatic groups reported coughing, chest tightness, wheezing, or shortness of breath. There was a considerable increase over preshift findings in the number of symptoms reported at the end of the shift in both the moderate- and low-exposure symptomatic groups and some increase in the asymptomatic groups. However, the investigators noted that it could not be assumed that workers had become sensitized at these low levels. Nine of the 13 symptomatic employees had been exposed to spills of TDI in the past, and 8 of these 9 developed symptoms at the time of the spill, so sensitization may have developed as a result of these exposures. The authors could not determine whether this was the first occasion on which they showed symptoms.

Other studies of populations of workers exposed to MDI have provided no quantitative information on exposure concentrations, but they do indicate that there is a relationship between adverse effects and exposure levels or duration of exposure. For example, in 1971, Tanser et al. examined the effects of MDI exposure on 57 employees in a factory producing rigid polyurethane foam moldings. Fourteen of the 57 workers reported that any contact with MDI vapor produced effects ranging from a sore throat and wheezing to severe asthma and tightness in the chest. Spirometric analysis showed that 8 of the 57 employees had an FVC of less than 90% of the predicted value or an FEV1/FVC ratio below 75%; only 2 of these 8 reported symptoms of sensitivity to MDI. The authors reported that most of the symptoms appeared to be those of direct irritation and not of an allergic reaction. However, four workers who had contact with MDI were diagnosed as having possible hypersensitivity; three of these had severe asthma, and the fourth developed fever, headaches, aching limbs, and cough following exposure.
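Tanser's screening criteria reduce to two simple ratios. The following is a minimal sketch of how such a flag might be computed; the thresholds are the ones named above, while the function name and the worker's measurements are invented for illustration.

```python
# Minimal sketch of the spirometric criteria quoted above:
# abnormal if FVC < 90% of predicted or if FEV1/FVC < 75%.
def abnormal_spirometry(fvc_l, fev1_l, predicted_fvc_l):
    pct_predicted_fvc = 100.0 * fvc_l / predicted_fvc_l
    fev1_fvc_ratio = 100.0 * fev1_l / fvc_l
    return pct_predicted_fvc < 90.0 or fev1_fvc_ratio < 75.0

# Hypothetical worker: FVC 3.8 L against a predicted 4.6 L, FEV1 3.1 L.
# FVC is 82.6% of predicted (< 90%), so the record is flagged even though
# the FEV1/FVC ratio (81.6%) is acceptable.
print(abnormal_spirometry(3.8, 3.1, 4.6))  # True
```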
The 1976 studies of Saia et al. and Fabbri et al. explored the relationship between exposure to MDI and chronic nonspecific lung disease in workers in an Italian refrigerator factory. The total exposed workforce of 180 comprised 94 furnace workers (who removed polyurethane molds from the furnace and were estimated to have the highest exposures), 32 injectors, and 54 assembly-line workers. The groups were similar in average age and length of employment.

Responses to a questionnaire indicated that 85 of the workers in the plant had respiratory symptoms. The prevalence of these symptoms was least in the workers with the shortest exposure and greatest in those exposed more than 8 years; the average age in all three groups was 37-38 years. Pulmonary function studies showed that about half of the 180 workers had vital capacity and FEV1 measurements below 90% of predicted values, and 15-20% had values below 80% of predicted. The 85 workers with respiratory symptoms had pulmonary function measurements significantly lower than the average for the 180 employees. These measurements decreased with length of exposure even when adjusted for smoking.

Results were also analyzed by job function in 160 workers who had no history of previous occupational exposure to respiratory irritants. Furnace workers had significantly lower pulmonary function values and a greater prevalence of respiratory symptoms than workers in other jobs. Exposure data were not reported and control groups were not used in these studies, severely limiting their usefulness. The authors did not reveal the source of data on predicted pulmonary function values, so it is impossible to determine the relevance of these data to the worker population studied.

Only one study, a 1975 NIOSH health hazard evaluation by Hervin and Thoburn, has been found on workers exposed to HDI. These workers, 18 spray painters in an airplane repair facility, were exposed to HDI at up to 300 µg/cu m (40 ppb); they were also exposed to trimeric biuret compounds of HDI at up to 3,800 µg/cu m and to a variety of organic solvents at concentrations above the Federal standards. Pulmonary function measurements in spray painters and the decrements in these measurements over the workshift did not differ significantly from values in 40 controls who worked during shifts when spray painting was never performed. All the spray painters, who wore respirators but no eye protective devices, complained of eye irritation while painting, and about half complained of nose and throat irritation, cough, and chest discomfort. The authors mentioned that the respirator program was deficient in many respects. This report suggests that HDI produces symptoms similar to those from TDI and MDI. However, it does not provide any indication of the concentrations of HDI that produce irritation, since there was simultaneous exposure to organic solvents and to trimeric HDI at relatively high concentrations.

# Animal Toxicity

All the diisocyanates that have been studied caused irritation when applied directly to the skin of rabbits or instilled into their eyes. Their potentials as skin and eye irritants, determined from these studies, are summarized in Table XI.

Microscopic examinations of tissue sections showed tracheitis and bronchitis with sloughing of the superficial epithelium in animals exposed to TDI at 2 ppm and killed by the 4th day after exposure. Lungs of animals killed 7 or more days after exposure did not differ significantly from those of controls, suggesting that the effects were reversible. In animals exposed at 5 or 10 ppm, damage was more severe and long-lasting. There were areas of coagulation necrosis of the superficial epithelium surrounded by inflammatory cells, and at points of deep ulceration, connective tissue had developed. Bronchopneumonia developed in all species except mice. Since the animals were exposed only once, this lung damage was the result of irritation rather than an allergic reaction.

In another 1962 study, Henschler et al. exposed rats and guinea pigs to TDI repeatedly at concentrations of 0.1-10 ppm (0.7-70 mg/cu m).
In rats, three 4-hour exposures at 10 ppm were lethal for all animals; four exposures at 5 ppm or 10 exposures at 1 ppm were lethal for most rats. At 0.5 ppm, adult rats could withstand 24 exposures, but this exposure regimen killed about half the young rats exposed. Most deaths were due to severe peribronchitis and bronchial pneumonia. In surviving animals, lung changes were reversible within several months. Rats exposed at 0.1 ppm for 40 exposures had no changes in the lungs that were attributable to TDI exposure, but they did gain less weight than controls. In guinea pigs, these authors were unable to find any evidence of sensitization to TDI after 48 exposures at 0.5 ppm, which was lethal to most of the animals. These results were qualitatively similar to those reported by Zapp 5 years earlier.

In 1965, Niewenhuis et al. described the effects on animals of repeated exposure to TDI at a low concentration. They exposed rats, rabbits, and guinea pigs to TDI at 0.1 ppm (0.7 mg/cu m), 6 hours/day, for either 38 consecutive days or 5 days/week for 58 exposures. Chamber concentrations were measured by the Marcali method.

Lung damage in these animals generally increased in severity for several days after exposure ended. A rabbit examined immediately after exposure had essentially normal lungs, but animals killed 3-10 days later had bronchopneumonia, bronchitis, perivasculitis, and lung abscesses. A rabbit killed after 20 days had only chronic bronchitis. Rats killed immediately had less inflammation than those killed later, but fibrous tissue had proliferated in the walls of the bronchioles in several rats. At 3-24 days after exposure, inflammation was marked, and animals had bronchopneumonia, extensive fibrous tissue proliferation, and polypoid hyperplasia of the epithelium. All control rats had bronchiectasis, which the authors attributed to chronic murine pneumonia. In guinea pigs, there were localized accumulations of lymphocytes, macrophages, and plasma cells throughout the lungs and varying degrees of pneumonitis and bronchopneumonia. No abnormalities of the heart, liver, kidneys, lymph nodes, or spleen were found in any of the animals.

In another study, rats exposed at the highest concentration gained significantly less weight than those at the lowest concentration (P<0.05). No significant differences between exposure groups were found in blood composition, liver function, urinalysis, or kidney function, and no damage to any organ was observed in macroscopic examinations. However, there was an increased lung-to-body weight ratio in the high-exposure group. Animals exposed at 1,370 µg/cu m had significantly lower liver and spleen weights than those exposed at 250 µg/cu m. The author did not suggest an interpretation of these differences.

Results of their 2-hour LC50 studies in mice showed that HDI was 2.3 times as toxic as CHI. The threshold concentration for influence on the CNS in mice was 1 mg/cu m for HDI and 10 mg/cu m for CHI, although the threshold concentrations for respiratory irritation were similar: 2.9 mg/cu m for HDI and 4.5 mg/cu m for CHI. Exposure to CHI caused only a nonsignificant decrease in weight gain. According to the authors, adding chlorine to the molecule of an organic compound would be expected to increase the toxicity of the compound.
Yet the results of this series of experiments showed that HDI was substantially more toxic than CHI. The fact that the lethality of HDI was dose dependent may indicate that the compound is absorbed systemically, while the effects of CHI appear to result only from local irritation of the respiratory tract.

Kondratyev and Mustayev demonstrated skin-sensitizing effects of HDI in experimental animals in 1974. Guinea pigs were sensitized by application of HDI in 50% solution in acetone to the skin for 2 days in a row. An initial irritant effect in the form of hyperemia, edema, and itching was observed at the sites of application. After 21 days, the degree of sensitization was determined by applying HDI in various concentrations to previously unexposed skin. A specific allergic reaction was seen in most animals at concentrations as much as 40 times lower than the previously determined threshold for skin irritation (a 50% solution). The epicutaneous sensitization observed was also accompanied by changes in the blood-serum protein fractions. This study suggests that skin contact with HDI in the workplace could lead to allergic dermatitis.

Kimmerle found that IPDI produced moderate skin sensitization in guinea pigs. His experimental methods were not described but were said to follow the recommendations of the Food and Drug Administration. IPDI, administered intradermally, produced a larger area of swelling on reinjection than it had in an earlier injection in all 15 guinea pigs tested.

Animal experimentation has been used in several studies to investigate the mechanism of sensitization to diisocyanates. In 1964, Scheel et al. investigated the immunologic aspects of TDI sensitization. The authors produced TDI antigens by conjugating TDI with egg albumin; they attempted to characterize the antigen, and their method of preparation, with modifications, became the standard for many subsequent immunologic studies on TDI. TDI-specific antibodies were demonstrated in rabbits exposed to TDI by inhalation at 100 ppb (700 µg/cu m) 6 days/week for 2-4 weeks. When a purified protein derivative of the tubercle bacillus was injected during TDI inhalation, a skin sensitivity response to TDI could also be demonstrated; animals so treated reacted to 0.001 mg of TDI applied to the skin, while unsensitized animals reacted only to 0.2 mg. When the proportion of TDI in the antigen was increased, the antigenicity of the protein was masked so that it would not react with antibodies to egg albumin. This demonstrated that the circulating antibodies contained a reacting group specific for the TDI hapten.

Thompson and Scheel, in 1968, investigated the effects of TDI on rats pretreated with alloxan to suppress anaphylaxis or with insulin and pertussis vaccine to enhance the responses. Rats were exposed to TDI at 1 ppm (7 mg/cu m) for 10 hours. Although the authors found that pretreatment altered the effects of TDI exposure on the lungs in the predicted direction, they concluded that the mechanism of lung damage was not immunologic. This interpretation was based on their inability to elicit a reaction to cutaneous or intravenous challenge and the fact that reexposure to TDI produced less response than the original exposure.
In addition, microscopic findings indicated that the lung effects produced were consistent with chemical damage rather than an immunologic process and that they occurred primarily in the first few days after exposure.

In 1970, Stevens and Palmer studied sensitization in guinea pigs and rhesus monkeys exposed to TDI at 0.01-5 ppm for three 6-hour periods. Three weeks later, these animals and previously unexposed animals were exposed to TDI at 20 ppb (140 µg/cu m). Breathing patterns of the animals were measured by plethysmography to detect changes indicative of respiratory sensitivity. Patch tests showed skin sensitization to TDI, but serologic tests for sensitization were negative. Guinea pigs preexposed to TDI at 0.5 ppm did not show measurable respiratory changes, suggesting that a threshold for sensitization existed between 0.5 and 2.0 ppm. There was no evidence of sensitization in monkeys after reexposure, and there were no serologic changes indicative of sensitization. The authors concluded that exposure to large amounts of TDI may produce sensitivity to TDI at lower concentrations, but that this sensitivity might not involve an allergic mechanism. However, they noted that the difficulty in preparing a suitable antigenic system made it impossible to determine whether an immunologic mechanism was involved.

Karol et al., in 1978, were able to demonstrate the production of serum antibodies specific for the tolyl portion of an isocyanate molecule. They exposed guinea pigs by inhalation to a conjugate of the monofunctional p-tolyl isocyanate with egg albumin (EA). This antigen induced a respiratory response in the animals beginning about the 8th day of exposure, and serum antibodies were detectable by gel diffusion and immunoelectrophoresis by the 14th day. The authors concluded that the antibodies were hapten-specific, since p-tolyl isocyanate that was bound to another protein carrier, such as bovine serum albumin, elicited both respiratory reactions and serum antibody responses in animals previously sensitized to the isocyanate-EA antigen. In addition, sensitivity to the EA carrier in the conjugate was not produced, suggesting to the authors that the conjugate contained sufficient isocyanate molecules to effectively shield antigenic determinants in the protein molecule. In a subsequent study, which has been described in Effects on Humans, Karol and her colleagues used this antigen to demonstrate IgE antibodies in the sera of workers who had sensitivity reactions to TDI.

In one mutagenicity study, several diisocyanates were tested on Salmonella typhimurium strains TA1535, TA1537, TA1538, TA98, and TA100, with and without a mammalian liver microsome activating system. MDI was mutagenic in strains TA98 and TA100 in the presence of the liver activating system. The other diisocyanates tested did not show mutagenic activity. Details of the experimental procedure and quantitative results were not provided.

# Correlation of Exposure and Effect

In the early days of the industry, a large proportion of the workforce exposed to TDI developed respiratory illnesses. The adverse effects of TDI on the lungs may result from direct irritation caused by exposure at relatively high concentrations.
An experiment on volunteers showed that one of six subjects experienced irritation of the nose and throat during a 10-minute exposure at 710 µg/cu m and that all experienced it at 3,600 µg/cu m; however, these subjects did not report chest symptoms. In an automobile plant, all 12 workers exposed to TDI developed severe respiratory symptoms when TDI concentrations were 30-70 ppb (210-500 µg/cu m). Their symptoms disappeared when concentrations remained below 30 ppb.

Acute and chronic respiratory effects caused by exposure to TDI have been reported, but the results of such studies have been inconsistent. Results appear to differ substantially depending on the type of operation or process in which TDI exposure occurs. In a spraying operation, where TDI concentrations reached 6,400 µg/cu m, Gandevia found a significant decrease in FEV 1 during the course of a workday in 20 exposed men. This decrease was not fully reversed overnight or on the weekend, and the cumulative decrease over 3 weeks was also significant. In a TDI distilling operation where concentrations were generally less than 140 µg/cu m, Williamson [17] found no significant changes, compared with preexposure baseline values, in the pulmonary function of 21 men over 14 months. He also reported little difference between Monday and Friday values.

In another type of exposure situation, a polyurethane foam plant, Peters et al found significant daily, weekly, and cumulative decreases over a 2-year period in workers exposed at concentrations below about 100 µg/cu m. This study did not include preexposure measurements, but the mean annual decrement in FEV 1 of 0.11 liter/year was considerably higher than those the authors found in the literature for normal working and general populations, which ranged from 0.025 to 0.047 liter/year.

A long-term study of workers in a TDI-manufacturing plant, conducted by Weill et al and Butcher et al, showed no significant exposure-related changes in lung function. TDI concentrations in the plant generally ranged from 14 to 50 µg/cu m during the 5.5-year study. The entire study population, which included controls from elsewhere in the chemical factory not exposed to TDI, had excessive declines in some pulmonary function measurements compared with predicted values, but there was no difference between groups with constant, intermittent, and no exposure, and decreases were not correlated with cumulative exposures. These findings are questionable on the basis that the control group may have been so affected by exposure to other chemicals that the study was insensitive to possible effects of TDI exposure. The disagreement of these findings with those obtained in polyurethane foam plants may also reflect differences in exposure to other chemicals and failure to detect occasional excursions to much higher exposure concentrations than those usually prevailing. Both TDI manufacturing and polyurethane foam production involve mixed exposures, but in the latter process, exposures to other chemicals are likely to be much more closely correlated with exposures to TDI. Thus, the apparent dose-response relationship to TDI exposure in the polyurethane foam plants may more reliably reflect the effects of TDI itself.

Attempts to determine the concentrations of TDI necessary to produce sensitization have not been fruitful.
Porter et al reported that there were no new cases of sensitization in a TDI plant in 2 years when average TDI concentrations were below 140 µg/cu m; during the previous 16 years of operation, when TDI concentrations had averaged 350-420 µg/cu m, from one to four cases of sensitization had been diagnosed each year, the number gradually decreasing with increasing length of operation. Superficially, these data suggest an average TDI exposure, 140 µg/cu m, below which sensitization does not occur. However, examination of the data reveals that, even during the years when the average TDI concentration remained constant at 420 µg/cu m (1956-1969), there was a general decline in the number of cases of sensitization, suggesting that potentially sensitive individuals may have become sensitized and left the workforce during their early years of employment. Thus, these findings do not rule out the possibility that sensitization might develop in newly hired workers exposed at less than 140 µg/cu m for longer periods of time.

Several authors have noted that workers often become sensitized during brief exposures at high concentrations resulting from spills, leaks, or spraying. However, sensitivity to TDI has been observed in workers with no known exposure to spills or spraying operations. A NIOSH health hazard survey of a plant making polyurethane foam found respiratory symptoms in workers at a foaming operation where no TDI concentrations of more than 35 µg/cu m were measured. However, 9 of 13 workers who had been transferred away from the foaming operation because of severe symptoms were known to have been exposed previously to spills of TDI. In another NIOSH survey, none of the nine employees of a polyurethane foam plant where TDI concentrations averaged less than 7 µg/cu m and did not exceed 16 µg/cu m had respiratory symptoms, indicating that sensitization may be rare or nonexistent at such low concentrations.

Other studies suggest that the rate of sensitization may be somewhat higher. Four of 47 workers (9%) in an office that received exhaust air from a nearby TDI plant became sensitized; in 3 of these, sensitization was confirmed by bronchial responses in challenge tests, and the 4th improved when he was removed from exposure. Porter et al reported that 30 of the 300 workers (10%) in a TDI plant were diagnosed as sensitive to TDI during 17 years of operation. Adams found that 15% of the workforce in one plant left during their 1st year of employment because of effects on their health; 1-3.5% left for the same reason during subsequent years, for a total of about 20%. A similar rate was suggested in a study by Bruckner et al, in which 5 of 26 workers exposed to unspecified isocyanates were considered sensitized because they had asthmatic reactions at low concentrations.

Some reports have suggested that sensitization to TDI is related to a personal history of allergy or to atopy, as indicated by reactivity to prick tests with common inhalant allergens. However, most investigators report that there is no pattern of allergies or atopy in sensitized workers. Several investigators have attempted to demonstrate an immunologic mechanism for TDI sensitivity.
In 1964, Scheel et al demonstrated circulating antibodies and positive skin reactions in guinea pigs sensitized to TDI by inhalation, but later workers were unable to confirm these results in guinea pigs, rats, and monkeys. In humans, immunologic testing has indicated the existence of both reagin-type antibodies and circulating IgG antibodies in some workers exposed to TDI. However, these test results have generally correlated poorly with symptoms suggestive of TDI sensitivity or with respiratory responses to challenges with TDI at low concentrations. Since the TDI molecule has been thought to be too small to be antigenic in itself, a central problem in immunologic testing has been the development of an appropriate test antigen (a conjugate of TDI with a carrier protein). A recent study by Karol et al, using a test antigen of p-tolyl (mono)isocyanate, demonstrated the presence of tolyl-specific antibodies in the sera of three of four TDI workers who had sensitivity reactions to TDI; the fourth worker had not been exposed to TDI for 5 years. This study showed that an immunologic mechanism may be involved in TDI sensitization.

Studies by Butcher et al and Van Ert and Battigelli have suggested that a pharmacologic mechanism is also involved in respiratory sensitivity to TDI. These investigators showed that TDI inhibited the isoproterenol-stimulated cyclic AMP levels in human lymphocytes. The effect was greater in lymphocytes from individuals who were sensitive to TDI.

Far less information exists on exposure to the other diisocyanates, but their effects appear to be generally similar to those of TDI. Thirty-four of 35 workers, only 6 of whom were exposed to MDI at concentrations above 150 µg/cu m, experienced irritation of the eyes, nose, and throat, and half of them had bronchial symptoms. Workers exposed to MDI at 50-110 µg/cu m did not have a significant decrease in FEV 1 during a workshift in which foaming was carried out, but 3 of 29 workers had respiratory symptoms. Workers exposed to MDI at unknown concentrations in an Italian refrigerator factory had reduced vital capacity and FEV 1, and 85 of 180 workers had respiratory symptoms. This study indicated that the effects were dose-related, since furnace workers, who were exposed to MDI at the highest concentrations, had significantly lower pulmonary function values and a greater prevalence of respiratory symptoms than workers elsewhere in the plant. The incidence of respiratory symptoms also increased with years of employment at the plant.

These authors considered immunologic cross-sensitivity unlikely because of the differences in structure between the compounds. The subjects who cross-reacted to other diisocyanates tended to react to extremely low concentrations of TDI (less than 1 ppb) and to be hyperreactive to histamine. The authors suggested that extreme sensitivity to TDI might be the result of both an immunologic mechanism and a nonspecific pharmacologic or irritative mechanism, with the latter mechanism accounting for the cross-reactions to MDI and HDI.
This is compatible with the reports of Butcher et al that TDI may block the beta-adrenergic system and with the suggestive evidence obtained by Porter et al that TDI sensitivity does not necessarily require the presence of anti-TDI antibodies. However, the possibility of immunologic cross-sensitivity between diisocyanates remains to be tested with a specific antigen system like that of Karol et al and is at present only speculative.

In the absence of other data on mutagenicity, this single study is insufficient evidence that diisocyanates are likely to be mutagenic in humans.

During the startup procedures, the mean weekly TDI concentrations for the synthesis, finishing, and drumming areas were 5.6, 17. Both monitoring methods were used simultaneously during some period each day for 22 months. When 8-hour TWA values were analyzed, no positive correlation could be found between personal and area sampling. Area monitoring, then, did not seem to accurately reflect actual individual exposures.

Hervin and Thoburn reported that concentrations of airborne TDI were below the TLV of 20 ppb (140 µg/cu m) in an aircraft overhauling facility where the painters sprayed aircraft with polyurethane paints. Area and personal samples were analyzed by the thin-layer chromatographic method of Keller et al. The concentrations of airborne TDI near the aircraft fin, near the wing, on the floor where mixing was done, and on the floor midway between the two bays were <30, <20-30, 20, and <20 µg/cu m (<4, <3-4, 3, and <3 ppb), respectively. The corresponding concentrations of HDI were 40-100, <30-60, <20-300, and <20 µg/cu m (6-15, <4-9, <3-45, and <3 ppb). Thirteen personal samples taken at various operations contained TDI at concentrations of 40 µg/cu m or less. Corresponding HDI concentrations ranged from less than 30 to 300 µg/cu m.

In an operation where polyurethane foam lines were used to make automobile seat cushions, Butler and Taylor found that much of the airborne MDI was associated with particulate matter. Fitzpatrick et al stated that most of the MDI detected in the air was carried by this particulate matter in the reactive form. The reaction of MDI with other components of the foam was complete within about a minute, or 60 feet downstream from the spraying operation. Dharmarajan and Weill found that approximately 90% of the MDI present in air during a foam spray operation was blocked by passage through glass fiber or Teflon filters (0.5 µm pore diameter). The percentage blocked was independent of MDI concentration in the range 20-550 µg/cu m, a finding that supports the contention that most airborne MDI is present in particulate form. Furthermore, they found that approximately 50% of the particles were less than 10 µm in diameter. Assuming that all MDI in the air is in the form of aerosols and assuming an equal collection efficiency for simultaneous sampling of particulates on a filter and MDI in an absorber, the authors found that MDI constituted 3.03-20.34% of the mass of the airborne dust collected on a filter.

No measurable MDI concentrations were found in 10 samples from the mixing operation and 12 from the molding operation; both operations are conducted at room temperature.
Ten samples from the torching or oven-drying operation also showed no MDI, and 17 mold-pouring samples showed less than 7 ppb (70 µg/cu m); these operations involve elevated temperatures, and the author attributed the low concentrations to the brevity of the operations, which did not permit significant vapor concentrations to be generated.

The primary objective of engineering controls for operations using diisocyanates must be to reduce the concentrations of airborne diisocyanates so that they are at or below the recommended environmental limits. Process equipment should be designed so that the system is totally enclosed and operates, if possible, under negative gage pressure. When it is necessary to open a vessel or when leaks or spills are likely, local exhaust ventilation systems should be provided. Unless other means can be used to control the concentrations of diisocyanates, the source should be fitted with a local exhaust ventilation system. If a process is too large for this type of enclosure, dilution ventilation may be necessary.

Numerous polyurethane products exist, and the polyurethane may sometimes be formed under circumstances that are not readily adaptable to conventional exhaust ventilation procedures, eg, application of polyurethane foam to storage tanks to prevent corrosion. Some operations, such as spraying, mixing, foaming, injecting, flushing, pouring in place, and painting, can occur either in fixed locations or in the field. Workers engaged in these operations may require additional protection, such as positive-pressure supplied-air respirators and additional protective clothing. Although many types of diisocyanates are used in urethane foam systems, many of these systems contain polymeric isocyanates, which usually have lower vapor pressures. Ventilation systems should be designed in accordance with applicable recommendations, such as those in Industrial Ventilation--Manual of Recommended Practice, the 1976 edition or a later edition.

The concentration of diisocyanates in the workplace may also be decreased by substituting a compound with a lower vapor pressure. For example, where formulation considerations permit, MDI might be substituted for TDI. In spraying and certain foaming operations where the diisocyanate is present in aerosol form, this substitution may not be an effective means of controlling exposure.

When ventilation requirements for any diisocyanate work area are determined, and it is established that an exhaust ventilation system is necessary, care must be taken in the placement of intake and exhaust vents. Carroll et al described respiratory sensitization from TDI in office workers as a result of TDI-contaminated air being drawn into the ventilation system of an office building from the exhaust vents of a neighboring factory using TDI. This report emphasizes the importance of determining that intake air for the ventilation system is not drawn from areas in which other diisocyanates are handled, and that exhaust vents be positioned to avoid exposure of other persons to the diisocyanate-contaminated air.

For less chemically reactive diisocyanates such as HDI, hydrolysis would be expected to occur at considerably slower rates. The authors concluded that increased humidity reduces the concentration of atmospheric TDI, but not to a degree that would prove useful as a routine control measure in the workplace.
A decrease in apparent TDI concentrations due to humidity has also been observed by other investigators.

# Sampling and Analysis

Volatilized amines may also be present in workroom air where diisocyanates are being manufactured or used. Toluene diamine, a synthetic precursor to TDI and a possible hydrolysis product, is known to interfere positively in the Marcali determination of TDI. Other primary aromatic amines would be expected to be positive interferences with any method in which diisocyanates are determined as secondary reaction products of their amine derivatives.

Meddle and Wood developed a method for detecting aromatic isocyanates in air in the presence of primary aromatic amines. Individual air samples were bubbled through two different absorber solutions. A solution of dimethylformamide (DMF) and 1,6-diaminohexane (DH) was used to trap the primary aromatic amine and inactivate the isocyanate in one air sample. The second air sample was drawn through a solution of DMF, DH, and hydrochloric acid that traps the primary aromatic amine and hydrolyzes the isocyanate to its corresponding amine. The samples were then diazotized and coupled, and the amount of amine or isocyanate present was determined spectrophotometrically using standard calibration curves. The color in the DMF-DH absorbent solution is produced by the primary amine alone, and that in the DMF-DH-hydrochloric acid absorbent solution is produced by both the amine and the isocyanate. The amount of isocyanate present in the air sampled was determined by subtracting the former value from the latter.

Tertiary amines, such as triethylenediamine (TEDA), which are often used as catalysts in urethane polymerizations, have been shown to reduce the apparent concentration of airborne TDI. Smith and Henderson were the first to report the negative interference of TEDA vapors in the determination of gaseous TDI by the Marcali and Reilly tape methods. The fraction of apparent TDI loss, when analyzed by both methods, ranged from 49 to 88%. These results led the authors to question whether the tape and Marcali values were underestimating actual TDI exposure levels in polyurethane foaming operations. The reported degree of negative interference, however, appears independent of the TEDA concentrations. In addition, the ratios of TEDA to TDI examined, 17-262, are 135-2,100 times those that might be expected during actual foaming or spraying operations.

In a later attempt to elucidate the mechanism of tertiary amine interference, Holland and Rooney compared values obtained for TDI in mixed TDI-TEDA atmospheres by three analytical techniques: midget impinger sampling and analysis by the Marcali method, continuous-tape monitoring, and direct air-injection gas chromatography. All methods gave similar values for TDI, and each showed similarly decreased values for the same TDI concentration in the presence of TEDA. In contrast to the previous report, this study demonstrated that the reduction in measurable TDI exhibited some dependence on atmospheric amine concentration. A summary of the data shows that, at TEDA-to-TDI ratios of 9.6-25, about 90% of the input TDI could be measured; at a TEDA-to-TDI ratio of 105, only 21-25% could be measured.
Six other catalysts used in polyurethane manufacturing were said to give similar results. The effect of TEDA on measurable TDI was significantly reduced when glass components of the experimental apparatus were siliconized to decrease surface adsorption. The authors commented that gas chromatography did not detect any stable reaction intermediates, including toluene diamine, in the mixed gas stream. The only reaction product found proved to be the TDI urea that had formed as a white powder on the surface of the mixing system. The authors concluded that all three analytical methods gave accurate measurements of atmospheric TDI in the presence or absence of tertiary amine catalysts and that the observed negative interference reflected an actual reduction in TDI concentration. Furthermore, the mechanism by which this reduction occurred may have depended on surface effects, relative humidity, and constituent concentration and residence time.

The above reports suggest that the presence of tertiary amine vapors may catalyze the hydrolysis of airborne TDI to its urea. The reaction appears to be facilitated by adsorption of one or more of the reactants to a surface. Whereas gas chromatography of the mixed gases failed to isolate any stable reaction intermediate, the existence of short-lived, potentially toxic, reactive complexes in minute amounts cannot be discounted. Both studies used concentrations of TDI that may be representative of actual working environments, 18-400 ppb; however, since the relative concentrations of amine used far exceeded realistic levels, the relevance of these results to the workplace situation remains questionable.

In the Marcali method, airborne TDI is collected in an acid absorber and hydrolyzed to the corresponding amine. The amine is diazotized and coupled with 1-naphthyl ethylenediamine to produce a reddish-blue color. The intensity of this color is measured spectrophotometrically at 550 nm to provide an indication of the amount of TDI present. Marcali reported that the method was capable of detecting 10 ppb (70 µg/cu m) of toluene-2,4-diisocyanate. He also determined that recovery of total TDI was apparently reduced when 35% toluene-2,6-diisocyanate was present. A similar reduction was reported by Meddle et al for TDI mixtures containing 20 or 40% of the 2,6-isomer. To increase the accuracy of measuring mixtures of TDI isomers, both Marcali and Meddle et al recommended that standard curves be constructed with the appropriate isomer ratios. A portable field kit employing stable color standards could easily detect TDI at 50 ppb (360 µg/cu m) and could be modified to detect TDI at 20 ppb (140 µg/cu m). The method does not detect the TDI urea, 3,3'-diisocyanato-4,4'-dimethylcarbanilide, a hydrolysis product that is formed on reaction of TDI with water.

The Ranta method, as described by Zapp and Marcali, can measure both TDI and TDI urea with equal efficiency and cannot distinguish between them. The compounds are collected by bubbling the air sample through a reagent solution of aqueous sodium nitrite, ethylene glycol monoethyl ether (Cellosolve), and boric acid. The intensity of the resulting orange-yellow color, measured at 450 nm, is proportional to the concentration of either compound.
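Both the Marcali and Ranta procedures ultimately read an absorbance against a standard calibration curve and relate the result to the volume of air sampled. The sketch below illustrates that arithmetic in Python; the calibration points, the sample absorbance, and the sampled air volume are hypothetical values for illustration, not figures from either method.

```python
# Minimal sketch: converting a colorimetric reading to an airborne
# concentration, as in Marcali- or Ranta-type determinations.
# All numeric values below are hypothetical.

def fit_line(points):
    """Least-squares slope and intercept for (absorbance, mass) pairs."""
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    intercept = (sy - slope * sx) / n
    return slope, intercept

# Hypothetical standards: absorbance at 550 nm vs. micrograms of TDI
standards = [(0.05, 1.0), (0.10, 2.1), (0.20, 4.0), (0.40, 8.2)]
slope, intercept = fit_line(standards)

absorbance = 0.16      # hypothetical sample reading
air_volume_l = 20.0    # liters of air drawn through the impinger
mass_ug = slope * absorbance + intercept
conc = mass_ug / (air_volume_l / 1000.0)   # micrograms per cubic meter
print(f"TDI: {conc:.0f} ug/cu m")
```

This is why, as noted above, standard curves must be rebuilt for each isomer ratio: the slope of the calibration line changes when the 2,6-isomer depresses color recovery.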
On the basis of field and laboratory evaluations of these two methods, Skonieczny concluded that the Marcali method was more suitable for field determination of peak concentrations and for detecting small amounts of TDI. Because the Ranta method requires a sampling time of 10-30 minutes to collect sufficient amounts of TDI under usual working conditions, Skonieczny noted that it might not detect momentarily high concentrations.

The sensitivity of the method was said to be improved by increasing the length of the light path in the spectrophotometric cells. If amounts of TDI greater than 70 ppb must be measured, the final reagent solution can be diluted with absorber solution or a smaller air sample can be taken. Although MDI is detected by this method, the time required for complete color development under the prescribed conditions is 1-2 hours, compared with 5 minutes for TDI. It is possible that TDI can be determined in the presence of MDI if the absorbance of the test solution is measured within 10 minutes after adding the coupling agent.

Various field test kits using the principles of the Marcali or Ranta method have been developed. These kits have simplified and standardized test procedures for on-site measurement. Grim and Linch described a field kit for determining concentrations of TDI. Air was sampled through the standard midget impinger using a self-powered, constant-rate air aspirator. The sensitivity of the field kit employing the Marcali method was improved to allow detection of TDI at 10 ppb (70 µg/cu m) by collecting a larger volume of air and by reducing the volume of the reagent used. By increasing the coupling reagent concentration and adding sodium carbonate to the absorbing medium, the field kit could be used for determinations of airborne MDI. The Ranta method was modified to allow measurement of TDI urea and TDI at concentrations as low as 10 ppb by increasing the volume of sample collected, reducing the volume of reagent used, and increasing the length of the light path in the colorimeter to 100 mm. To make the Ranta method suitable for field use, color standards that can be used with a portable visual comparator were developed and included in the kit.

Belisle described a field kit suitable for measuring TDI in air. Air was sampled at a rate of 0.1 cu ft/minute through an acidified absorber solution in a modified midget impinger containing in situ-generated glutaconic aldehyde and cation-exchange resin. This process converts TDI to its corresponding amine, which reacts with glutaconic aldehyde to form an orange-red product. To measure the concentration of TDI, the orange-red color that appeared on the surface of the resin beads was matched against a set of color standards. Results reportedly agreed closely with those obtained by the Marcali method. A major advantage of this method is that the color develops while the air is being sampled. At a concentration of 10 ppb (70 µg/cu m) of TDI, sampling and analysis can be completed in 5 minutes. The method is capable of measuring TDI at 5 ppb (35 µg/cu m) in 0.5 cubic foot of air, and it may be modified to measure other aromatic isocyanates or aromatic amines by constructing appropriate calibration data.
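The field-kit sensitivities quoted above follow directly from the sampling arithmetic: airborne concentration times sampled volume gives the mass of diisocyanate delivered to the color reaction. A minimal sketch, using the flow rate and detection figures quoted for the Belisle kit; the helper function and its name are ours:

```python
# Sketch of the sampling arithmetic behind field-kit sensitivities.
# Figures follow the Belisle kit description above (0.1 cu ft/minute,
# 5 ppb TDI in 0.5 cubic foot); the code itself is illustrative.

CU_FT_TO_CU_M = 0.0283168

def collected_mass_ug(conc_ug_per_cu_m, flow_cu_ft_per_min, minutes):
    """Mass of diisocyanate delivered to the absorber, in micrograms."""
    volume_cu_m = flow_cu_ft_per_min * minutes * CU_FT_TO_CU_M
    return conc_ug_per_cu_m * volume_cu_m

# 5 ppb TDI is roughly 35 ug/cu m (molecular weight 174)
mass = collected_mass_ug(35.0, flow_cu_ft_per_min=0.1, minutes=5)
print(f"{mass:.2f} ug TDI collected")   # about 0.5 ug in 0.5 cu ft
```

The sub-microgram mass collected in 5 minutes illustrates why the color reaction must be sensitive for such short sampling times to work.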
Reilly developed a field method for determining MDI in air. The sample was drawn through an acid absorber medium in which MDI was collected and hydrolyzed to the corresponding amine. The amine was diazotized and coupled with 3-hydroxy-2-naphthanilide to form a pinkish-orange azo compound. This was extracted into chloroform and compared visually with inorganic color standard solutions. The method was capable of measuring MDI at 10-40 ppb (100-400 µg/cu m) with a 5-liter sample of air. The equipment required is portable, and a complete determination can be accomplished in 12-15 minutes.

Meddle et al extended the Grim and Linch adaptation of the Marcali method to a general field-test procedure capable of detecting other aromatic diisocyanates in air. Procedures were described that established maximum sampling and analytical conditions for TDI, MDI, NDI, dianisidine diisocyanate, and a polymeric form of MDI, polymethylene polyphenyl isocyanate. The authors noted that, with the exception of TDI, attempts to generate dynamic vapor atmospheres by bubbling dry nitrogen through liquefied diisocyanates proved unsuccessful. Airborne concentrations produced by this method diminished rapidly, indicating that these diisocyanates, where present in air, would be in aerosol form. Similar observations were made by Reilly in his work with MDI. Analysis of all tested diisocyanates was subsequently performed on generated aerosol atmospheres.

A consequence of the aerosol nature of these test atmospheres was the adoption of a sintered dome bubbler for air sampling. To ensure complete recovery, aerosol particles trapped in the sintered dome during sampling were allowed time to dissolve in the absorber solution before coupling reagent was added. A 10-minute digestion time was judged sufficient under these conditions. Impinger sampling at the same flow rate of 1 liter/minute was only 53% as efficient as sampling with the sintered dome bubbler. For use in the field, permanent color standards for MDI, NDI, TDI, and polymethylene polyphenyl isocyanate are available for concentrations of 10-40 ppb.

The aerosol nature of MDI in air was further supported by the results of a recent study by Dharmarajan and Weill. They found that MDI vapor generated by heating the diisocyanate to 110 C in a small enclosed room did not behave as a gas but rather as an aerosol. They compared the amount of MDI collected in absorbers according to the standard NIOSH-recommended method with and without prefilters and found that 98% of the airborne MDI was collected on a Teflon filter backed with a cellulose pad and 87% was collected on the filter backed with a stainless-steel pad. Since the collection efficiency of the absorber for MDI aerosol was unknown in this study, it is likely that the actual amount of MDI in samples was higher than the amount detected. The percentage of the sample consisting of MDI aerosol could also have been underestimated. The authors also pointed out that since the major portion of MDI in air is present as an aerosol, concentrations of this compound should be reported in mg/cu m rather than ppm.

(c) Tape Methods

Reilly developed a test-paper method to measure the concentration of TDI in air.
A 5-liter air sample was drawn through a chemically treated filter paper at a rate of 1 liter/minute. After sampling was completed, stain was allowed to develop on the test paper for 15 minutes. The intensity of the stain was then compared with a set of color standards for TDI concentrations of 10, 20, 40, 60, and 100 ppb (70-700 µg/cu m). The method is specific for the aromatic diisocyanates, with no response obtained from the diamine derivatives. Several preparations of test paper, exposed to TDI at known concentrations, showed a variation in color development of about 20%. Qualitative tests indicate that the method may be adapted to detect MDI and NDI, and it is reported to require less analytical skill to perform than other methods.

In evaluating impinger sampling, Miller and Mueller discovered that two midget impingers in series were necessary to trap airborne TDI adequately. Although the 1973 NIOSH criteria document stated that a "single bubbler absorbs 95% of the diisocyanate if the concentration is below 2 ppm," they determined that, at TDI concentrations ranging from 1 to 76 ppb (7-532 µg/cu m), the collection efficiency of the first bubbler was approximately 83%.

The Dunlap/ICI monitor has also been used to measure MDI under laboratory and field conditions. Development of tape color intensity in response to MDI, either as a vapor generated in toluene solution or when spotted in known concentrations directly on the tape, was approximately 75% of the maximum when read at 15 minutes, and the reaction was complete in 4 hours. Maximum color development with TDI, on the other hand, is complete at 15 minutes. Where MDI may be found as a constituent of a reactive particulate, the accuracy of the monitor may be further reduced. In an initial test of the applicability of the Dunlap/ICI area monitor in MDI systems, results obtained by the monitor in foam and paint spraying operations were 36% and 35%, respectively, of the values obtained by the comparison spectrophotometric method. After applying the manufacturer's recommended correction factor, the authors found that the values obtained by the tape monitor at readings ranging from 1 to 5 ppb (10-50 µg/cu m) were in good agreement with those obtained by the spectrophotometric method.

Dharmarajan and Weill assessed the performance of the TDI continuous tape monitor for analyzing both heat-generated and foam-spray-generated MDI aerosols. Eight-hour TWA concentrations of MDI in ppb determined by the monitors were compared with those obtained by the standard NIOSH-recommended method. In the range of 5-8 ppb as determined by the NIOSH method, the tape monitors consistently gave readings indicating concentrations two to three times higher. The authors explained this difference by pointing out that, whereas the filter tape medium of the monitors could be expected to collect 99.9% of the MDI aerosol, an impinger flow rate of 1 liter/minute would select against certain particle size populations. The absorber collection efficiency, however, was not determined in this study. Having emphasized the necessity for expressing MDI concentrations in units of mg/cu m, the authors devised a procedure for calibrating the TDI tape monitor for use in MDI aerosols.
Color intensity developed over a fixed sampling period could be correlated with the known concentration of MDI in mg/cu m. However, the authors did not test the validity of this calibration method in the workplace. In view of these findings, it is important that calibration curves for continuous tape monitors used to detect MDI be constructed to simulate as closely as possible the actual conditions under which the monitor will be used.

The phenomenon described by a number of investigators, that MDI is rarely, if ever, found as a gas at ambient temperatures, has been shown to apply to NDI and dianisidine diisocyanate and can be considered a general property of other diisocyanates that are solids or viscous liquids at room temperature.

In a gas-chromatographic method, TDI was collected in serially connected impingers containing toluene. The sampling rate was limited to 1 liter/minute because at higher sampling rates the toluene in the impinger would bubble over into the next impinger in the series and because excessive evaporation of toluene could occur. The authors suggested that 0.1 ml should be the maximum allowable volume for evaporation for one impinger. This rate of evaporation would introduce an additional 1% error into the cumulative error for the resultant concentration. The total collection efficiency of this system was 98%, with the first impinger being 90% efficient.

The analytical system consisted of a Barber-Colman Series 5000 Selectra System gas chromatograph equipped with a tritium-source electron-capture detector. A 4-foot Pyrex U-tube column (1/4-inch inner diameter) was packed with Chromosorb G (60/80 mesh) solid support coated with a mixture of Epon 1001 and Apiezon L. Oxygen-free nitrogen was used as the carrier gas at an optimum flow rate of 100 ml/minute at an inlet pressure of 15 psig. Operating temperatures were 150 C for the column and injection port and 170 C for the detector bath. The liquid sample size for injection was 5 µl. Calibration curves were prepared by sequentially diluting TDI with chromatographic-grade toluene. The calibration curves were then checked against a primary standard (TDI permeation tube). This paired system of sampling and analysis could accurately analyze TDI in a 10-liter sample of dry air at 1.4 ppb (10 µg/cu m). When the system was tested using air that had not been previously dried, readings were lower than expected. After appropriate corrections were made for the effects of humidity, the authors obtained an overall efficiency of 97%.

A thin-layer chromatographic (TLC) method was developed by Keller et al for isolation and quantitative determination of various isocyanate compounds. The method is based on the reaction of isocyanates with N-4-nitrobenzyl-N-n-propylamine (nitro reagent) to form the corresponding ureas. The concentrations of the ureas are then determined. The method is said to be capable of isolating and measuring diisocyanate monomers in the presence of partially polymerized reaction products. Concentrations of both monomer and free isocyanate-containing polymer can be determined.

Air in the working environment is sampled with two impingers containing a solution of the nitro reagent. The solutions in the impingers are then combined and evaporated to dryness, and the remaining residue is dissolved in benzene.
The benzene solution is chromatographically analyzed on thin-layer silica gel plates, and the ureas are visualized by reducing the nitro groups to amines and by diazotizing the amines with nitrous fumes. After evaporation of the nitrous fumes, the thin-layer chromatographic plates are sprayed with N-1-naphthyl ethylenediamine, and quantitative determinations are made by visual comparison of the samples with standards. Scanning densitometry can be used for more accurate determinations. The lower limit for the determinations was found to be 80 µg/cu m for MDI, HDI, and TDI. The method, however, is time consuming and requires skilled attention to detail during the reduction and coupling steps.

Chromatographic separation of the ureas was accomplished on a pellicular silica HPLC column. As with the TLC method, both aliphatic and aromatic isocyanates can be analyzed. The HPLC method, however, extended the detection limits to 5, 5, and 10 µg/cu m for TDI, MDI, and HDI, respectively. The values were based on a 20-liter air sample and a 90-µl injection volume. The method cannot be used to analyze atmospheres that can oxidize or reduce the nitro reagent used in the impingers during sampling.

The report cited the following reasons for failure: deterioration of silica gel columns caused by excess nitro reagent in samples; oxidation of the nitro reagent after sampling; and shipping and exposure problems posed by the use of toluene in the collection medium. The time allotted for the study precluded an attempt to resolve these problems. Two of these difficulties have already been addressed by previous investigators. To eliminate excess nitro reagent in samples, Vogt et al added p-tolyl isocyanate to the absorbing solution after the collected diisocyanates were allowed time (about 1 hour) to react. The resulting monourea derivative was observed to run well ahead of the diureas. Alternatively, column life was preserved from the effects of excess nitro reagent by daily flushing of the column with solvent. The remaining difficulty, however, appeared to be a result of insufficient purification of the synthesized nitro reagent and of the grade of toluene used. Nitro reagent is now commercially available. When the nitro reagent is dissolved in chromatographic-grade toluene, absorber solution impurities should be minimal. Compounds that will interfere with this procedure are those that absorb in the ultraviolet range and also have the same column retention time as the diisocyanate being investigated.

Toluene remains the solvent of choice for this method. Federal regulations (49 CFR 172) allow air transportation of toluene in quantities of up to 1 quart/package. Where personal sampling procedures may expose workers to toluene vapor for extended periods, air sampler outlets could be fitted with an appropriate scrubber.

After reviewing the currently available analytical methods, NIOSH recommends the HPLC procedure described above. This method is described in detail in Appendix I. Although the method may necessitate some initial experimentation before routine measurements can be made, it is the most sensitive method. Because quantities of diisocyanates in the nanogram range may be determined, relatively short sampling times are allowed.
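The HPLC detection limits and the "nanogram range" remark above are mutually consistent, as a back-of-the-envelope calculation shows. In the sketch below, the detection limits in µg/cu m and the 20-liter sample volume are taken from the text; the arithmetic and variable names are ours:

```python
# Sketch relating the HPLC concentration detection limits to the mass
# of diisocyanate actually determined in a 20-liter air sample.

sample_volume_cu_m = 20.0 / 1000.0   # 20 liters expressed in cubic meters

detection_limits = {"TDI": 5.0, "MDI": 5.0, "HDI": 10.0}  # ug/cu m

for compound, limit in detection_limits.items():
    mass_ng = limit * sample_volume_cu_m * 1000.0   # ug -> ng
    print(f"{compound}: {limit} ug/cu m -> {mass_ng:.0f} ng in the sample")
# TDI and MDI: 100 ng; HDI: 200 ng -- quantities in the nanogram range,
# which is why short sampling times suffice.
```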
Analysis of the primary reaction product of diisocyanate and absorber ensures direct measurement of available isocyanate functional groups and precludes interference by other diisocyanate reaction products. The relative chemical stability of the nitro-ureas allows for possible elapsed time between air sampling and subsequent analysis. In addition, the procedure is capable of separating and identifying mixtures of diisocyanate monomers as well as mixtures of monomer and partially polymerized products. The procedure has not been tested with diisocyanates other than TDI, MDI, and HDI. It is reasonable to assume, however, that a method which can effectively separate the isomers of TDI, as does the HPLC method, can be used successfully for measuring other diisocyanates. Where possible interference from reducing or oxidizing atmospheres may be encountered, alternate sampling and analytical methods should be calibrated against the HPLC procedure.

Numerous studies have shown that absorber collection efficiency may vary dramatically. It is therefore recommended that two serially connected impingers be used for air sampling until a reproducible collection efficiency is established for any given operation, after which a single impinger may be used for routine monitoring. The recommended flow rate of 2 liters/minute for 10 minutes represents a compromise for efficient aerosol and vapor sampling.

Diisocyanates are also skin irritants and sensitizers; however, effects on the skin from these compounds do not appear to have been a major problem in industry. Eye contact with liquid TDI and TDI vapor has produced irritation and watering of the eyes, and it is likely that direct eye contact with other diisocyanates would produce similar effects.

Diisocyanates encompassing a wide range of molecular weights and physical properties (see Table XI-1) are available for use in industry. The potentials of these compounds to irritate the respiratory tract, mucous membranes, eyes, and skin vary depending on the particular diisocyanate being considered. The potential for skin irritation and eye injury is generally higher for the lower molecular weight diisocyanates, and the severity of these irritant responses is reduced with increasing molecular weight.

The potential respiratory hazards encountered during the use of diisocyanates in the workplace are related to their vapor pressures. The lower molecular weight diisocyanates tend to be more readily volatilized into the workplace atmosphere than the higher molecular weight diisocyanates. Figure XI-1 presents graphically the decrease in vapor pressure with increasing molecular weight for several diisocyanates. The lower molecular weight diisocyanates, such as HDI and TDI, when handled without special precautions, can release amounts of vapor sufficient to be extremely irritating to the respiratory tract of workers. Higher molecular weight compounds such as NDI, IPDI, and MDI present a lesser vapor hazard when handled in well-ventilated areas at normal room temperatures, ie, less than approximately 40 C (104 F) for MDI.
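The relationship asserted here between vapor pressure and inhalation hazard can be made concrete by estimating the saturated vapor concentration over the liquid, ie, the maximum vapor concentration a compound can reach at equilibrium. The sketch below uses illustrative order-of-magnitude vapor pressures for TDI and MDI (they are assumptions, not values read from Figure XI-1) and the 24.45-liter molar volume of an ideal gas at 25 C:

```python
# Sketch of why vapor pressure governs the inhalation hazard: the
# saturated vapor concentration over the liquid. Vapor pressures below
# are illustrative order-of-magnitude assumptions.

def saturated_ppm(vapor_pressure_mmhg, total_pressure_mmhg=760.0):
    """Equilibrium vapor concentration in ppm (v/v)."""
    return vapor_pressure_mmhg / total_pressure_mmhg * 1e6

def ppm_to_mg_per_cu_m(ppm, mol_weight, molar_volume_l=24.45):
    """Convert ppm (v/v) to mg/cu m at 25 C and 1 atm."""
    return ppm * mol_weight / molar_volume_l

for name, mol_weight, vp_mmhg in [("TDI", 174.2, 2.5e-2), ("MDI", 250.3, 5e-6)]:
    ppm = saturated_ppm(vp_mmhg)
    mg = ppm_to_mg_per_cu_m(ppm, mol_weight)
    print(f"{name}: ~{ppm:.3g} ppm saturated (~{mg:.3g} mg/cu m)")
```

Under these assumed vapor pressures, TDI can saturate at concentrations several orders of magnitude above a 20-ppb ceiling, while room-temperature MDI vapor tops out in the low-ppb range; this is consistent with the document's emphasis on aerosols and heating as the principal MDI hazards.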
Although the vapor pressures of the higher molecular weight diisocyanates are relatively low, they may generate vapor concentrations sufficient to cause respiratory and mucous membrane irritation if they are handled in poorly ventilated areas. Air exhaust hoods may be necessary under such conditions. High molecular weight diisocyanates like MDI may also present significant vapor hazards when heated or used in exothermic production processes.

The physical state of the diisocyanate being handled will also affect the potential hazards encountered during its use. MDI and NDI, which are normally solid materials at room temperature, present less vapor inhalation or skin contact hazard as a result of splashes or spills than the lower molecular weight liquid diisocyanates do. However, workers should be aware of the possibility of respiratory and mucous membrane irritation from the dusts of such compounds, and of contamination of their clothing with the powdered diisocyanates. Operations involving the use of such compounds, such as weighing, should be performed with equipment incorporating a barrier between the worker and the diisocyanate. Local exhaust ventilation may also be necessary. Clothing contaminated with solid diisocyanate should be decontaminated and laundered as soon as possible to prevent further exposure of the worker and to avoid contamination of other work areas. Spills involving these compounds should also be decontaminated and cleaned up as soon as possible.

The processes and operations in which diisocyanates are used will affect the severity of the hazard. Industrial processes involving evaporation from large surface areas or spraying operations may result in a greater potential vapor hazard than operations involving pouring-in-place or frothing techniques. Special hazards may arise in spraying operations since the diisocyanate-containing aerosol cloud may drift to areas beyond the immediate spraying area.

The rate of reaction with water is very slow for TDI and MDI at temperatures below 50 C. As the temperature increases, the reaction between these compounds and water becomes more vigorous. TDI and MDI will also react with bases, such as sodium hydroxide, ammonia, primary and secondary amines, acids, and alcohols. This reaction may be violent, producing enough heat to increase the evolution of diisocyanate vapor and the generation of carbon dioxide. These reactions, like the reactions with water, may lead to dangerously increased pressure in closed containers. Thus, containers of diisocyanates should be kept closed as much as possible to prevent water, atmospheric moisture, or other reactive compounds from entering and vapors or solids from escaping.

Diisocyanates should be transported or stored in sealed, intact containers. A "sealed container" is one that has been closed and kept closed to the extent that there is no release of diisocyanates. An "intact container" is one that has not deteriorated or been damaged to the extent that diisocyanates are released.
Diisocyanates in sealed, intact containers should pose no threat of exposure to employees; therefore, it is not necessary to comply with required monitoring and medical surveillance requirements in operations involving such containers. If, however, containers are opened or broken so that diisocyanates may be released, then all provisions of the recommended standard should apply.

Indoor storage areas should be dry, fireproofed (automatic sprinkler systems should be considered), and well ventilated; temperature extremes in these areas should be avoided. The storage area should have a firm floor made of some nonabsorbent material. If diisocyanates are accidentally frozen during storage or while in transit, they may be thawed by storage in a warm area. Extreme caution must be used if heat is applied, and a flame or similar localized heat source should never be used.

Diisocyanates are transported in drums, tank trucks, or tank cars. Containers should be properly labeled, and shippers should be aware of precautions to be taken for transporting, loading, and unloading the particular container and type of diisocyanate being transported. Emergency measures to be taken in the event of an accident or some type of damage to the container or tank en route should also be worked out in advance by the shipper and the supplier or producer. The Hazardous Materials Regulations as promulgated by the US Department of Transportation in 49 CFR Subchapter C should be adhered to where applicable.

Where bulk quantities of diisocyanates are handled, adequate ventilation should be provided and respiratory protective equipment should be readily available. Workers should wear chemical safety goggles when handling solid diisocyanates and chemical safety goggles with face shields when using liquid diisocyanates. Local exhaust ventilation should be employed when opening containers of diisocyanates. Local exhaust ventilation should also be used when performing laboratory operations involving diisocyanates.

Since the flash points of most diisocyanates are high, the compounds are not flammable under normal circumstances, although they can burn if they are heated sufficiently. Because of the wide range of physical and chemical properties of diisocyanates (see Table XI-1), it is important to be aware of the potential fire hazard that may be associated with a particular diisocyanate in the industrial setting in which it is used or stored. Any diisocyanate involved in a fire may produce high concentrations of toxic vapors, and only trained and properly equipped personnel should be involved in firefighting. All nonessential personnel should be evacuated from the area during a fire. Suitable extinguishing media for fighting diisocyanate-supported fires are dry chemical powder, carbon dioxide, or foam. Water should be used only if large quantities are available, since the reaction between water and a hot diisocyanate may be vigorous. After the fire is out, the area should be inspected by properly protected personnel, and any suspected residues should be decontaminated before other workers are permitted to return to the area.
If splashes or contact with aerosols of diisocyanates are likely to occur, employees should wear rubber or polyvinyl chloride gloves and aprons and rubber boots; appropriate respiratory equipment, as described in Table 1-1, should be readily available. Where supplied-air respirators are used, the air supply must come from a source not contaminated with diisocyanates. For all workers near spray gun operations (within approximately 10 feet), an air-supplied hood, impervious gloves (rubber or polyvinyl chloride), tightly buttoned coveralls, and rubber galoshes or boots are needed. Workers without this equipment should not be permitted close enough to spraying operations performed outdoors to be exposed to diisocyanate vapors or particulates. A minimum of 50 feet is recommended. For indoor spraying operations, the safe distance for unprotected workers will depend on the type and efficiency of the ventilation provided.

When leaks or spills of diisocyanates occur, only properly trained and equipped personnel wearing appropriate protective clothing should be permitted to remain in the area. If major spills occur, air-supplied masks or self-contained breathing apparatus as described in Table 1-1 should be used. Any existing regulations pertaining to the discharge of such materials into sewer lines should be strictly adhered to. Since spills of diisocyanates may freeze during cold weather, water and ammonia will merely coat the solid material with insoluble urea, stopping further reactions. In cold weather, cleanup should therefore be performed with a mixture of equal parts of isopropyl alcohol and ethylene glycol. A supply of this mixture should be on hand and ready for immediate use in cold weather.

Because of the potential hazards of exposure to diisocyanates, the importance of good housekeeping should also be emphasized. Spills should be cleaned up promptly, and all equipment used in the exposure areas, such as buckets, weighing containers, and funnels, should be decontaminated and cleaned immediately after use. Smoking and the carrying of smoking supplies should be prohibited in areas where exposure to diisocyanates may occur, as should preparing, storing, dispensing (including vending machines), and consuming food and beverages. Employees exposed to diisocyanates should be encouraged to wash their hands before eating, drinking, or smoking, and before and after using toilet facilities. The US Department of Labor regulations concerning general sanitation in the workplace as specified in 29 CFR 1910.141 should be adhered to.

Employees should be instructed on the health hazards of diisocyanates and the precautions to be followed in handling them. They should be trained to report promptly to their supervisors all leaks, suspected failures of equipment, exposures to diisocyanates, or symptoms of exposure. The locations of safety showers and eyewash fountains should be clearly marked, and appropriate warning signs and labels should be prominently displayed in exposure areas and on containers of diisocyanates. Emergency exits should be provided and be accessible at all times. All emergency shower, eyewash, protective, and firefighting equipment should be checked periodically to ensure its serviceability.
It was also recommended that this limit not be exceeded because respiratory irritation occurred in animals exposed to TDI at concentrations of 1-2 ppm. The same author noted that TDI and similar diisocyanates are strong irritants to the skin, eyes, and gastrointestinal and respiratory tracts and that they may cause asthma-like symptoms in workers. In 1962, the ACGIH reduced the TLV for TDI to 0.02 ppm (0.14 mg/cu m). Isocyanates in general were reported to irritate the skin, eyes, and respiratory tract and to cause respiratory sensitization when sufficient vapor concentrations were present even for a short time. Konzen et al observed an immunologic response in workers exposed to MDI at approximately 1.3 ppm-minute but not in workers exposed at 0.9 ppm-minute. Bruckner et al noted that workers might become sensitized to isocyanates when exposed at concentrations above 0.02 ppm. According to the 1971 documentation, available data indicated that MDI was similar to TDI in its irritant and sensitizing properties, suggesting that a similar ceiling value of 0.02 ppm (0.2 mg/cu m) was warranted.

In the United States, occupational exposure standards for diisocyanates have been established only for TDI and MDI. According to the International Labour Office, occupational exposure limits for TDI, MDI, and several other diisocyanates have been set by foreign countries. These limits are summarized in Tables VI-1 and VI-2. Current Federal standards (29 CFR 1910.1000) set ceiling limits of 0.02 ppm for both TDI and MDI.

The ceiling limit recommended in the 1973 NIOSH criteria document was based in part on a report of workers who had no symptoms at concentrations below 10 ppb but developed respiratory illness within 1 week when concentrations rose to 30-70 ppb; when concentrations were reduced to 10-30 ppb, there were no further complaints. The recommended ceiling was intended to protect against irritative effects of TDI in nonsensitized workers, but evidence was insufficient to determine whether it would also protect against sensitization. The document noted that no evidence was available to point to a concentration of TDI that would be safe for workers already sensitized to TDI. The present document reexamines the earlier recommendations for a TDI standard, taking into account more recent information that has become available since 1973. No threshold concentration for such reactions has been identified, but there is evidence that, for some individuals, it may be unmeasurably low; thus, it is not possible at this time to establish a level below which sensitized workers will not experience adverse respiratory effects from exposure to diisocyanates.

# Basis for the Recommended Standard

Several studies have shown that 5-20% of the workforce may become sensitized to diisocyanates. There is evidence, however, that the incidence of sensitization can be reduced by controlling exposures. The data of Elkins et al on 15 TDI plants showed that all plants where average exposures exceeded 70 µg/cu m had workers with TDI-related respiratory illness, but no such illnesses were reported in plants where average exposures were 50 µg/cu m or lower. On the other hand, Weill reported instances of sensitization developing in a plant where average TDI exposures ranged from 14 to 50 µg/cu m. These studies did not report the frequency or magnitude of excursions above these averages, so more precise estimates of exposure concentrations cannot be determined.
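The caveat about unreported excursions matters because a time-weighted average computed from interval samples can look acceptable even when a ceiling is briefly exceeded. A minimal sketch with hypothetical interval data (the 140 µg/cu m threshold approximates the 20-ppb TDI ceiling):

```python
# Sketch of how a brief excursion can hide inside an acceptable TWA.
# The interval measurements are hypothetical.

samples = [  # (duration in hours, concentration in ug/cu m)
    (3.0, 20.0),
    (0.5, 400.0),   # brief excursion, eg during a spill
    (4.5, 15.0),
]

total_hours = sum(h for h, _ in samples)
twa = sum(h * c for h, c in samples) / total_hours
excursions = [c for _, c in samples if c > 140.0]  # ~20 ppb TDI ceiling

print(f"8-hour TWA: {twa:.0f} ug/cu m over {total_hours} h")
print(f"Ceiling excursions: {excursions}")
# TWA is about 41 ug/cu m -- near the 'safe' plant averages reported by
# Elkins et al -- even though the ceiling was exceeded during the shift.
```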
Exposure to diisocyanates may also cause chronic respiratory effects measurable as long-term decrement in pulmonary function, especially FEV 1, in excess of that expected from aging. It is not clear from existing data whether this change occurs only in sensitized workers or whether it may also affect workers who show no clinical symptoms from diisocyanate exposure. The findings of Porter et al indicate that some workers who are sensitive to TDI and who have anti-TDI antibodies may exhibit normal pulmonary function, while others with clinical symptoms of sensitization but negative results on immunologic tests may have severely impaired pulmonary function. In the study conducted by Wegman et al, in which workers exposed at concentrations above 20 µg/cu m had a significantly greater decrease in FEV 1 than those exposed at lower concentrations, the workforce studied included both sensitized and unsensitized individuals. In the plant studied by Weill and colleagues, where workers who showed symptoms of clinical sensitization were not included in the study population, the investigators found no significant effects on lung function related to TDI exposures at average concentrations of 14-50 µg/cu m. These findings indicate that the TWA limit of 5 ppb (35 µg/cu m) for TDI recommended by NIOSH in 1973, which was based on the findings of Elkins et al, provides adequate protection against chronic effects of TDI on pulmonary function of workers who are not sensitized.

Environmental data for the other diisocyanates are insufficient to establish a safe exposure limit. The diisocyanates commonly used in industry are respiratory irritants. In a plant where area samples showed MDI concentrations of 10-150 µg/cu m and breathing zone concentrations for 6 sprayers were 120-270 µg/cu m, 34 of 35 workers had eye, nose, or throat irritation, and nearly half had wheezing, shortness of breath, and chest tightness. In another plant, 3 of 29 workers exposed to MDI at 50-110 µg/cu m had respiratory symptoms. Nine of 18 workers exposed to HDI at less than 300 µg/cu m and to an HDI trimer at less than 3,800 µg/cu m had irritation of the upper respiratory tract, cough, or chest tightness [100].

Like TDI, other diisocyanates are likely to be reactive with biologic macromolecules, such as proteins, and thus are potential sensitizers. Some authors have reported that MDI produces respiratory sensitization, since affected workers gave positive results in immunologic tests. The assumption of a common mechanism of action suggests that structurally similar diisocyanates might produce cross-sensitization; there is one report of positive tests for antibodies against MDI in workers sensitized to TDI, but the validity of these results is questionable because of the lack of characterization of the test antigen used. A study of skin sensitization suggested that workers exposed to TDI and MDI were cross-sensitized to IPDI. In another study, some workers with previous exposure only to TDI had bronchial reactions to MDI and HDI as well. These workers were significantly more hyperreactive to histamine than workers who reacted to TDI only, suggesting that cross-reactivity might involve a nonspecific pharmacologic mechanism.
The predicted reactivity of the diisocyanates with biologic macromolecules is the probable basis for their immunologic effects and perhaps for the respiratory symptoms and effects on pulmonary function produced at low concentrations. It is therefore reasonable to expect that other diisocyanates react similarly to TDI on a molar basis. In the absence of data indicating that any of the diisocyanates is substantially less toxic than TDI, the exposure concentrations recommended in 1973 should be extended to all diisocyanates. It is recommended that exposure to the diisocyanates be controlled so that no employee is exposed at concentrations in excess of the following environmental limits, equivalent, in the case of volatile diisocyanates that occur as vapors, to a TWA limit of 5 ppb for a 10-hour workshift, 40-hour workweek, and a ceiling limit of 20 ppb for a 10-minute sampling period:

[Table of TWA and ceiling limits for the individual diisocyanates]

The gravimetric equivalents of these ppb limits follow from the molecular weight of each compound, as shown in the sketch below.

The recommended method also permits separation of different diisocyanates in the same sample. It has not been tested with diisocyanates other than TDI, MDI, and HDI, but, with appropriate modifications and solvent systems, it should be capable of detecting these compounds in the same range.

The recommended method for sampling airborne diisocyanates consists of drawing air at a rate of 2 liters/minute for 10 minutes through two serially connected all-glass midget impingers, each containing 15 ml of absorbing solution. The use of two impingers in series is necessary until a reproducible collection efficiency is established for any given operation, since absorber collection efficiency is highly variable. The diisocyanates react with the absorbing solution to form specific urea derivatives which are then injected into the high speed liquid chromatograph for analysis. The method is described in detail in Appendix I.

To prevent development of serious respiratory symptoms, a medical surveillance program should provide for the timely detection of adverse health effects that develop from exposure to diisocyanates. Employees with a history of respiratory illness, especially asthma, and those exposed to other respiratory irritants, eg, smokers, may be at increased risk of developing adverse health effects from exposure to diisocyanates, and they should be counseled on this risk. All employees should be monitored so that work-related symptoms or loss of pulmonary function can be detected early. Employees who develop symptoms of TDI sensitization should be counseled on the risks of continuing to work with TDI.

Each employee should receive a thorough preplacement medical examination, which includes a history of exposure to diisocyanates, a smoking history, and a history of respiratory illnesses. Each employee should receive pulmonary function tests, including FEV 1 and FVC, and a chest X-ray before beginning work in any plant manufacturing or using diisocyanates. For employees with occupational exposure to diisocyanates, physical examinations should be repeated at least annually.
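The conversion between the ppb limits and their µg/cu m equivalents is a minimal sketch away, assuming the conventional molar volume of 24.45 liters/mole at 25 C and 760 mmHg; the function name is illustrative, and the molecular weights are standard values for the three compounds.

```python
# Convert the recommended vapor limits (ppb) to gravimetric equivalents
# (ug/cu m) using molecular weight and the molar volume of an ideal gas
# at 25 C and 760 mmHg.

MOLAR_VOLUME_L = 24.45  # liters/mole at 25 C, 1 atm

MOLECULAR_WEIGHTS = {  # g/mole
    "TDI": 174.16,
    "MDI": 250.26,
    "HDI": 168.20,
}

def ppb_to_ug_per_cu_m(ppb, mol_weight):
    # ppm x MW / 24.45 gives mg/cu m, so ppb x MW / 24.45 gives ug/cu m.
    return ppb * mol_weight / MOLAR_VOLUME_L

for name, mw in MOLECULAR_WEIGHTS.items():
    twa = ppb_to_ug_per_cu_m(5, mw)       # 5-ppb TWA limit
    ceiling = ppb_to_ug_per_cu_m(20, mw)  # 20-ppb ceiling limit
    print(f"{name}: TWA {twa:.0f} ug/cu m, ceiling {ceiling:.0f} ug/cu m")
```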
Because of seasonal and diurnal variations in pulmonary function, the periodic examination of an individual employee should be performed at about the same time each year and at the same time of day, so that changes in respiratory function can be evaluated. Chest X-rays are not required at periodic examinations; they should be repeated at the discretion of the examining physician. Records of medical examinations should be kept for at least 30 years after the employee's last exposure to diisocyanates.

The previous NIOSH criteria document on TDI recommended a leukocyte count with differential as part of the preplacement medical examination and suggested periodic eosinophil counts. However, there is no evidence at this time that blood findings are a significant indicator of adverse effects of TDI exposure. This change in the recommendations for the medical examination is consistent with the failure to find eosinophilia in TDI-sensitized individuals in recent studies.

Where liquid diisocyanates may be present on floors, protective shoe coverings should be worn. If the potential exists for splashes or contact with aerosols of diisocyanates, employees should be provided with face shields (20-cm minimum) with goggles, rubber or polyvinyl chloride gloves and aprons, rubber boots, and appropriate respiratory equipment as described in Table 1-1. Because diisocyanates have poor warning properties, the use of chemical cartridge respirators or gas masks is not recommended. At present, air-purifying respirators with an end-of-service-life indicator are not available for the diisocyanates. Demand-type (negative pressure) supplied-air respirators are not recommended because of the possibility of facepiece leakage. Workers within 10 feet of a unit spraying material containing diisocyanates should wear full-body protective clothing and appropriate respiratory protective devices.

All protective clothing that becomes contaminated with diisocyanates should be replaced or thoroughly decontaminated in a solution of 8% ammonia and 2% liquid detergent in water and cleaned before reuse. Lockers should be provided for workers so that work and street clothes can be stored separately. The employer should arrange for laundering of all work clothes and ensure that laundry workers are aware of the hazards of exposure and appropriate safety precautions.

# (e) Informing Employees of Hazards

At the beginning of employment, all employees should be informed of the hazards from exposure to diisocyanates. Brochures and pamphlets may be effective aids in informing employees of hazards. In addition, signs warning of the danger of exposure to diisocyanates should be posted in any area where occupational exposure to the diisocyanates is likely. Access to areas of potential high exposure should be restricted to employees equipped with appropriate protective gear. A continuing education program, which includes training in the use of protective equipment such as respirators and information about the value of the periodic medical examinations, should be available to the employees. Employees exposed to diisocyanates should be informed that symptoms of exposure, such as nocturnal dyspnea, may occur several hours after the end of the workshift.
Because of the possibility of sensitization to the diisocyanates, employees should be warned that the improper home use of polyurethane products, such as foam kits and varnishes, that contain diisocyanates may increase their risk of developing work-related health problems. Employees should be instructed in their own responsibility for following work practices and sanitation procedures to help protect the health and provide for the safety of themselves and their fellow employees.

Ventilation systems should be designed to prevent the accumulation or recirculation of diisocyanates in the workplace environment and to effectively remove diisocyanate vapors or aerosols from the breathing zone of the employees. The ventilation systems should be periodically checked, including airflow measurements, to ensure that the systems are working properly. Exhaust ventilation systems discharging to outside air must conform to applicable local, state, and Federal air pollution regulations and must not constitute a hazard to employees or to the general public.

In addition, it is recommended that employees who work in areas that use diisocyanates wash their hands thoroughly before eating or using toilet facilities. Smoking should not be permitted in areas where diisocyanates are stored or used because of the possibility that smoking materials may become contaminated with diisocyanates.

# (g) Monitoring and Recordkeeping Requirements

In addition to a program of regular personal monitoring of air concentrations, continuous area monitoring is strongly recommended where aromatic diisocyanates such as TDI and MDI are present. This will permit engineering controls to be modified or improved to keep the concentrations of diisocyanates at or below the recommended limits. Monitoring should also be performed whenever there is a change in the process or materials used that could increase the exposure of employees.

Employers should monitor exposure of employees to diisocyanates by taking a sufficient number of breathing-zone samples to adequately characterize exposures for every operation. In determining the sampling strategy for a particular worker, the process and the job description of the worker should be considered, and those process cycles in which potential exposure is greatest should be given prime consideration.

Records should be kept for all sampling operations and should include the type of personal protective devices in use, if any, and the sampling and analytical methods used. Employees or their designated representatives should have access to records of their own environmental exposures. These records should be kept at least 30 years after the employee's last exposure to diisocyanates.

# VII. RESEARCH NEEDS

Information needed to develop a standard for occupational exposure to the diisocyanates is incomplete in many respects. It has been necessary to recommend a standard for the diisocyanates based on similarities to TDI and MDI, since adequate information is not available on other diisocyanates to demonstrate that they differ appreciably in their toxicity.

Relationships between exposure and sensitization could be studied in guinea pigs exposed to TDI, using the p-tolyl isocyanate antigen to test for the induction of tolyl-specific antibodies.
Sera from these animals should not be pooled, so that the standard deviation can be determined. Changes in respiratory response in exposed animals should also be evaluated to determine their correlation with the appearance of IgE or IgG antibodies. As a corollary experiment, animals should be exposed at the same total dose administered over varying time periods to simulate the effects of excursions while retaining the same 8-hour TWA exposure; eg, groups of animals might be exposed at 5 ppb for 8 hours, 160 ppb for 15 minutes, and 2,400 ppb for 1 minute, each regimen delivering the same cumulative dose of 2,400 ppb-minutes.

To improve protection of exposed workers, it is particularly desirable to determine whether there are intrinsic, predictable differences between sensitizable and nonsensitizable individuals. The studies of Butcher et al, showing that persons sensitized to TDI tend to be hyperreactive to bronchoconstrictors such as mecholyl, appear to offer promise in this regard. It is necessary to determine whether this hyperreactivity is a result of exposure or a preexisting factor that may indicate a predisposition to become sensitized to diisocyanates. Methods of identifying sensitized individuals before overt chronic symptoms develop should also be explored. The value of measurements of eosinophilia and cyclic AMP and of pulmonary function studies and immunologic testing as diagnostic tools should be carefully evaluated, since existing reports of their usefulness are contradictory. The p-tolyl isocyanate test antigen developed by Karol and her colleagues appears to be a particularly useful diagnostic tool for TDI sensitization, and analogous antigens should be developed for investigating sensitization to other diisocyanates.

Because the diisocyanates may be highly reactive biologically, it is important that their potential to cause carcinogenic and mutagenic effects be investigated. Mutagenicity screening in microbial tests should be carried out, using a test protocol that will decrease the likelihood of hydrolysis to possibly mutagenic amine intermediates. Diisocyanates, especially those that are positive in mutagenicity tests, should also be tested for carcinogenicity in animal experiments. Studies of absorption, distribution, metabolism, and excretion of diisocyanates are also needed to elucidate the mechanism of their action.

The consequences of exposure to the aerosols produced in many diisocyanate applications, such as spraying, should also be investigated. It has been assumed that reactive diisocyanates in aerosol form produce the same biologic consequences as diisocyanate vapors at equivalent concentrations. This assumption should be experimentally verified. Similarly, most applications of diisocyanates involve simultaneous exposure to other toxic chemicals, and inadequate information is available on the role of these chemicals in producing observed health effects in diisocyanate workers and their possible additive or synergistic nature.

Reliable, sensitive continuous monitoring methods should be developed for all the diisocyanates. The paper-tape method developed by Reilly is a valuable method for continuous monitoring of the aromatic diisocyanates, particularly TDI.
Comparable methods are needed for other diisocyanates to protect workers from dangerous excursions and to provide better information relating health effects to actual exposures.

The following items of information that are applicable to a specific product or material shall be provided in the appropriate block of the Material Safety Data Sheet (MSDS).

# (a) Section I. Product Identification

The product designation is inserted in the block in the upper left corner of the first page to facilitate filing and retrieval. Print in upper case letters as large as possible. It should be printed to read upright with the sheet turned sideways. The product designation is that name or code designation which appears on the label, or by which the product is sold or known by employees. The relative numerical hazard ratings and key statements are those determined by the rules in Chapter V, Part B, of the NIOSH publication, An Identification System for Occupationally Hazardous Materials. The company identification may be printed in the upper right corner if desired.

# (b) Section II. Hazardous Ingredients

Chemical substances should be listed according to their complete name derived from a recognized system of nomenclature. Where possible, avoid using common names and general class names such as "aromatic amine," "safety solvent," or "aliphatic hydrocarbon" when the specific name is known.

The "%" may be the approximate percentage by weight or volume (indicate basis) that each hazardous ingredient of the mixture bears to the whole mixture. This may be indicated as a range or maximum amount, ie, "10-40% vol" or "10% max wt" to avoid disclosure of trade secrets.

Toxic hazard data shall be stated in terms of concentration, mode of exposure or test, and animal used, eg, "100 ppm LC50-rat," "25 mg/kg LD50-skin-rabbit," "75 ppm LC-man," or "permissible exposure from 29 CFR 1910.1000," or, if not available, from other sources of publications such as the American Conference of Governmental Industrial Hygienists or the American National Standards Institute Inc. Flashpoint, shock sensitivity, or similar descriptive data may be used to indicate flammability, reactivity, or similar hazardous properties of the material.

# (c) Section III. Physical Data

The data in Section III should be for the total mixture and should include the boiling point and melting point in degrees Fahrenheit (Celsius in parentheses); vapor pressure, in conventional millimeters of mercury (mmHg); vapor density of gas or vapor (air = 1); solubility in water, in parts/hundred parts of water by weight; specific gravity (water = 1); percent volatiles (indicated if by weight or volume) at 70 F (21.1 C); evaporation rate for liquids or sublimable solids, relative to butyl acetate; and appearance and odor. These data are useful for the control of toxic substances. Boiling point, vapor density, percent volatiles, vapor pressure, and evaporation are useful for designing proper ventilation equipment. This information is also useful for design and deployment of adequate fire and spill containment equipment. The appearance and odor may facilitate identification of substances stored in improperly marked containers, or when spilled.
# (d) Section IV. Fire and Explosion Data

Section IV should contain complete fire and explosion data for the product, including flashpoint and autoignition temperature in degrees Fahrenheit (Celsius in parentheses); flammable limits, in percent by volume in air; suitable extinguishing media or materials; special firefighting procedures; and unusual fire and explosion hazard information. If the product presents no fire hazard, insert "NO FIRE HAZARD" on the line labeled "Extinguishing Media."

# (e) Section V. Health Hazard Information

The "Health Hazard Data" should be a combined estimate of the hazard of the total product. This can be expressed as a TWA concentration, as a permissible exposure, or by some other indication of an acceptable standard. Other data are acceptable, such as lowest LD50 if multiple components are involved.

Under "Routes of Exposure," comments in each category should reflect the potential hazard from absorption by the route in question. Comments should indicate the severity of the effect and the basis for the statement if possible. The basis might be animal studies, analogy with similar products, or human experiences. Comments such as "yes" or "possible" are not helpful. Typical comments might be:

Skin Contact-single short contact, no adverse effects likely; prolonged or repeated contact, possibly mild irritation.

Eye Contact-some pain and mild transient irritation; no corneal scarring.

"Emergency and First Aid Procedures" should be written in lay language and should primarily represent first-aid treatment that could be provided by paramedical personnel or individuals trained in first aid.

Information in the "Notes to Physician" section should include any special medical information which would be of assistance to an attending physician, including required or recommended preplacement and periodic medical examinations, diagnostic procedures, and medical management of overexposed employees.

# (f) Section VI. Reactivity Data

It is particularly important to highlight instability or incompatibility to common substances or circumstances, such as water, direct sunlight, steel or copper piping, acids, alkalies, etc. "Hazardous Decomposition Products" shall include those products released under fire conditions. It must also include dangerous products produced by aging, such as peroxides in the case of some ethers. Where applicable, shelf life should also be indicated.

# (g) Section VII. Spill or Leak Procedures

Detailed procedures for cleanup and disposal should be listed with emphasis on precautions to be taken to protect employees assigned to cleanup detail. Specific neutralizing chemicals or procedures should be described in detail. Disposal methods should be explicit, including proper labeling of containers holding residues and ultimate disposal methods such as "sanitary landfill" or "incineration." Warnings such as "Comply with local, state, and Federal antipollution ordinances" are proper but not sufficient. Specific procedures shall be identified.

# (h) Section VIII. Special Protection Information

Section VIII requires specific information. Statements such as "Yes," "No," or "If necessary" are not informative. Ventilation requirements should be specific as to type and preferred methods.
Respirators shall be specified as to type and NIOSH or US Mine Safety and Health Administration approval class, ie, "Supplied air," "Organic vapor canister," etc. Protective equipment must be specified as to type and materials of construction.

# (i) Section IX. Special Precautions

"Precautionary Statements" shall consist of the label statements selected for use on the container or placard. Additional information on any aspect of safety or health not covered in other sections should be inserted in Section IX. The lower block can contain references to published guides or in-house procedures for handling and storage. Department of Transportation markings and classifications and other freight, handling, or storage requirements and environmental controls can be noted.

# IX. APPENDIX I

SAMPLING AND ANALYTICAL METHOD FOR DIISOCYANATES IN AIR

The following method for sampling and analysis of airborne diisocyanates is adapted from NIOSH Method No. MR 240 (Classification E). A Class E method is defined by NIOSH as "Proposed: A new, unproven, or suggested method not previously used by industrial hygiene analysts but which gives promise of being suitable for the determination of a given substance."

# Principle of the Method

A known volume of air is drawn at a flowrate of 2 liters/minute through two midget gas impingers, connected in series, containing the nitro reagent absorber solution. The recommended airflow is 2 liters/minute, rather than 1 liter/minute as indicated in Method No. MR 240, to ensure collection of particulate diisocyanates. At the time of analysis, the two impinger solutions are initially kept separate to allow determination of collection efficiency. The impinger solutions (separate or combined) are carefully rotary evaporated to dryness. The residue is then dissolved in 1.0 ml of methylene chloride, CH2Cl2, containing an internal standard. An aliquot of the solution is injected into a liquid chromatograph. The area of the resulting peak is determined and compared to areas obtained by injecting standard urea solutions of known concentration.

# Range and Sensitivity

The upper limit of the range of the method depends on the concentration of the nitro reagent in the midget gas impingers. For a 10-liter air sample, the limit of diisocyanates that can be absorbed is 0.0015 millimole/cu m using 15 ml of 0.2 mM nitro reagent solution.

The minimum detectable limit is 2 ng/injection for TDI and MDI and 5 ng/injection for HDI. The advisable injection volume is 50 µl. Hence, for a 20-liter sample, the useful range is 2-300 µg/cu m of total diisocyanates. If a particular atmosphere is suspected of containing a large amount of isocyanate, a smaller sampling volume should be taken.

# Interferences

Any compound that reacts with nitro reagent and has the same retention time as the analyte is an interference. Retention time alone cannot be considered proof of chemical identity. When the possibility of interference exists, chromatographic conditions (eg, modes of gradient, concentration of mobile phases, packings) have to be changed to circumvent the problem.

# Precision and Accuracy

Precision and accuracy for the total analytical and sampling method have not been determined.
However, the analytical method has been shown to have relative standard deviations within experimental error for peak areas and retention times of 2.8-16.5% and 0.6-4.1%, respectively, depending on the concentration of the analytes.

# Advantages and Disadvantages of the Method

The sampling device is portable. The analytical method is specific for isocyanates, and interferences are minimal. Simultaneous analysis of five substances can be carried out routinely.

The nitro reagent must be prepared at rather frequent intervals. It is recommended that it not be stored for more than 3 weeks, and it should be kept in darkness. Commercially available nitro reagent may prove more stable. The ureas formed in solution are generally stable up to 7 days. Degradation or polymerization may occur after this period. Excess nitro reagent should be removed with p-tolyl isocyanate or by some other means before the solution is injected into the liquid chromatograph to maintain longer column life and precision.

# Apparatus

(a) An approved and calibrated personal sampling pump whose flowrate can be determined within ±5% at the recommended flowrate.

Preparation of Nitro Reagent Solution: A typical procedure for the routine preparation of the nitro reagent solution is as follows. About 120 mg (0.52 millimole) of the commercially available hydrochloride of nitro reagent is dissolved in 25 ml of distilled water. Thirteen milliliters of 1 N NaOH is added to precipitate the free amine. The free amine is extracted with 50 ml of toluene. The toluene layer is dried over anhydrous CaSO4 (Drierite, WA Hammond Drierite Co, Xenia, Ohio), and the resulting solution is diluted to 250 ml to prepare a 2 mM solution and is stored in the refrigerator. The nitro reagent solution is further diluted 10-fold with toluene before it is added to the impinger collecting tubes. Prepared nitro reagent should be examined periodically by HPLC for the appearance of additional peaks that indicate reagent degradation.

Liquid chromatograph operating conditions:

(1) Flowrate: 2.0 ml/minute.
(2) Gradient elution: 10% B/A to 100% B in 10 minutes (B = 9.1% isopropanol/CH2Cl2; A = 100% CH2Cl2).
(3) Detector: uv at 254 nm.
(4) Temperature: ordinary room.
(5) Recorder chart speed: 0.5 inch/minute.

The peak area is measured by peak height times peak width at half the height or by an electronic integrator such as a computing integrator. Preliminary results are read from a standard curve prepared as discussed below.

# Calibration and Standards

(a) A series of standards varying in concentration over the range of interest are prepared. Calibration curves are established prior to sample analysis each day. When an internal standard is used, the analyte concentration is plotted against the area ratio of the analyte to that of the internal standard.

(1) The following weights of the isocyanates are dissolved in 4.0-ml portions of CH2Cl2: 2.12 mg of MDI; 29.60 mg of TDI (ie, 19.30 mg of 2,4-TDI and 10.30 mg of 2,6-TDI); and 21.14 mg of HDI. Then 755 µl of the MDI solution, 83.1 µl of the TDI solution, and 75.5 µl of the HDI solution are mixed, and 1.017 ml of CH2Cl2 is added to make a total of 2.00 ml (approximately 200 ng/µl of each). Then 1.0 ml of nitro reagent (2.06 mg/ml in hexane) is added to 1.0 ml of the isocyanate mixture.
The total isocyanate/nitro reagent mole ratio in this solution is 1:1. The reaction mixture is stored overnight. Dilutions are made from this solution. The solvent is evaporated in a rotary evaporator and the residue redissolved in 1 ml of CH2Cl2. These solutions are used to establish the calibration curves, linear dynamic range, and minimum detectable amount on the 25-cm Partisil 10 column.

# Calculations
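The body of the Calculations section is not reproduced above. A minimal sketch of the computation such impinger methods conventionally use follows, under stated assumptions: the mass (in ng) of each impinger extract is read from the calibration curve, the two masses are summed and blank-corrected, and the result is divided by the sampled air volume, with 1 ng/liter numerically equal to 1 µg/cu m. The function names and the blank-correction step are illustrative rather than taken from the published method; the closing lines only reproduce arithmetic implied by figures stated earlier (2 ng/injection, 50-µl injections from a 1.0-ml extract, and a 20-liter sample).

```python
# Sketch of the airborne-concentration arithmetic implied by the sampling
# method above (names and blank handling are illustrative assumptions).

def air_volume_liters(flow_l_per_min, minutes):
    """Sampled air volume; 2.0 liters/minute for 10 minutes gives 20 liters."""
    return flow_l_per_min * minutes

def first_impinger_efficiency(mass_first_ng, mass_second_ng):
    """Fraction collected by the first of the two serial impingers."""
    return mass_first_ng / (mass_first_ng + mass_second_ng)

def concentration_ug_per_cu_m(mass_first_ng, mass_second_ng, blank_ng, liters):
    """Blank-corrected mass from both impingers over the air volume.

    Mass in ng divided by volume in liters is numerically equal to ug/cu m.
    """
    return (mass_first_ng + mass_second_ng - blank_ng) / liters

# Detection-limit check: 2 ng/injection (TDI or MDI), injecting 50 ul of a
# 1,000-ul extract, corresponds to 2 x (1000/50) = 40 ng collected; over a
# 20-liter sample that is 2 ug/cu m, the lower end of the stated useful range.
volume = air_volume_liters(2.0, 10.0)   # 20 liters
minimum_mass = 2.0 * (1000.0 / 50.0)    # 40 ng in the whole extract
print(concentration_ug_per_cu_m(minimum_mass, 0.0, 0.0, volume))  # 2.0
```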
The Occupational Safety and Health Act of 1970 emphasizes the need for standards to protect the health and provide for the safety of workers occupationally exposed to an ever-increasing number of potential hazards. The National Institute for Occupational Safety and Health (NIOSH) evaluates all available research data and criteria and recommends standards for occupational exposure. The Secretary of Labor will weigh these recommendations along with other considerations, such as feasibility and means of implementation, in promulgating regulatory standards.

NIOSH will periodically review the recommended standards to ensure continuing protection of workers and will make successive reports as new research and epidemiologic studies are completed and as sampling and analytical methods are developed.

The contributions to this document on diisocyanates by NIOSH staff, other Federal agencies or departments, the review consultants, the reviewers selected by the Society for Occupational and Environmental Health and the American Medical Association, and Robert B. O'Connor, M.D., NIOSH consultant in occupational medicine, are gratefully acknowledged.

The views and conclusions expressed in this document, together with the recommendations for a standard, are those of NIOSH. They are not necessarily those of the consultants, the reviewers selected by professional societies, or other Federal agencies. However, all comments, whether or not incorporated, have been sent with the criteria document to the Occupational Safety and Health Administration for consideration in setting the standard. The review consultants and the Federal agencies which received the document for review appear on pages v and vi.

NIOSH recommends that employee exposure to diisocyanates in the workplace be controlled by adherence to the following sections. The standard is designed to protect the health and provide for the safety of employees for up to a 10-hour workshift, 40-hour workweek, over a working lifetime. Compliance with all sections of the recommended standard should prevent adverse effects of diisocyanates on the health of unsensitized workers and provide for their safety. Sufficient technology exists to permit compliance with the recommended standard. Although NIOSH considers the workplace environmental limits to be safe levels based on current information, the employer should regard them as the upper boundaries of exposure and make every effort to keep the exposure as low as possible. The recommended standard will be reviewed and revised as necessary.

Diisocyanates irritate the respiratory tract and can act as respiratory sensitizers, producing asthma-like symptoms in sensitized individuals with exposure at very low concentrations. Exposure to diisocyanates may also result in chronic impairment of pulmonary function.

NIOSH published criteria for a recommended standard for toluene diisocyanate (TDI) in 1973. The present recommended standard is expanded to include all diisocyanates, but not their polymerized forms. It includes most of the provisions recommended in the TDI document but differs where appropriate to reflect newer information or special provisions for other diisocyanates.
Most of the information currently available on effects of exposure to diisocyanates concerns TDI and, to a lesser extent, diphenylmethane diisocyanate (MDI). In addition to TDI and MDI, occupational exposure limits are recommended for other diisocyanates that have had widespread industrial application: hexamethylene diisocyanate (HDI), naphthalene diisocyanate (NDI), isophorone diisocyanate (IPDI), and dicyclohexylmethane diisocyanate (hydrogenated MDI).

"Occupational exposure to diisocyanates" is defined as exposure to airborne diisocyanates at concentrations above one-half the recommended time-weighted average (TWA) occupational exposure limit or above the recommended ceiling limit.

# Section 1 -Environmental (Workplace Air)

(a) Exposure to diisocyanates shall be controlled so that no employee is exposed at concentrations greater than the limits specified below. These limits, expressed in µg/cu m, are equivalent to a vapor concentration of 5 ppb as a TWA concentration for up to a 10-hour workshift, 40-hour workweek, and 20 ppb as a ceiling concentration for any 10-minute sampling period. The µg equivalents for selected diisocyanates are as follows:

[Table of TWA and ceiling limits in µg/cu m for the individual diisocyanates]

If other diisocyanates are used, employers should observe environmental limits equivalent to a ceiling concentration of 20 ppb and a TWA concentration of 5 ppb.

# (b) Sampling and Analysis

Environmental samples shall be collected and analyzed by the methods described in Appendix I or by any other method at least equivalent in accuracy, precision, and sensitivity.

Section 2 -Medical

Medical surveillance shall be made available as outlined below to all workers exposed to diisocyanates in the workplace.

# (a) Preplacement examinations shall include at least:

(1) Comprehensive medical and work histories, with special emphasis directed to evidence of preexisting respiratory conditions such as asthma. A smoking history should also be compiled.

(2) Physical examination giving particular attention to the respiratory tract.

(3) Specific clinical tests including a 14- x 17-inch posteroanterior chest roentgenogram and baseline measurements of forced vital capacity (FVC) and forced expiratory volume at 1 second (FEV 1).

(4) A judgment of the worker's ability to use negative and positive pressure respirators.

# (b) Periodic examinations shall be made available at least annually, as determined by the responsible physician, and shall include:

(1) Interim medical and work histories.

(2) Physical examination giving particular attention to the respiratory tract and including measurements of FVC and FEV 1.

# (c) During examinations, applicants or employees found to have medical conditions that could be directly or indirectly aggravated by exposure to diisocyanates, eg, respiratory allergy, chronic upper or lower respiratory irritation, chronic obstructive pulmonary disease, or evidence of sensitization to diisocyanates, shall be counseled on their increased risk from working with these substances. Chronic bronchitis, emphysema, disabling pneumoconiosis, or cardiopulmonary disease with significantly impaired ventilatory capacity similarly suggest an increased risk from exposure to diisocyanates.
If a history of allergy other than respiratory allergy is elicited, applicants should be counseled that they may be at increased risk of adverse health effects from exposure to diisocyanates. Employees shall also be advised that exposure to diisocyanates may result in delayed effects, such as coughing or difficulty in breathing during the night.

# (d) Pertinent medical records shall be maintained. Records of environmental exposures applicable to an employee shall be included in the employee's medical records. Such records shall be kept for at least 30 years after the last occupational exposure to diisocyanates. These records shall be made available to the designated medical representatives of the Secretary of Health, Education, and Welfare, of the Secretary of Labor, of the employer, and of the employee or former employee.

Section 3 -Labeling and Posting

(a) Warning signs shall be printed both in English and in the predominant language of non-English-reading workers. Workers unable to read labels and posted signs shall be instructed concerning hazardous areas and shall be orally informed of the instructions printed on labels and signs.

In any area where there is a likelihood of emergency situations arising, signs required by Section 3(c) shall be supplemented with signs giving emergency and first-aid instructions and procedures, the location of first-aid supplies and emergency equipment, and the locations of emergency showers and eyewash fountains.

# Section 4 -Personal Protective Equipment and Clothing

The employer shall use engineering controls where needed to keep the concentration of airborne diisocyanates at or below the limits specified in Section 1(a). The employer shall also provide employees with protective clothing and equipment of materials resistant to penetration by diisocyanates, such as rubber or polyvinyl chloride, when necessary to prevent skin and eye contact with diisocyanates. Protective equipment suitable for emergency use shall be located at clearly identified stations outside the work area.

(a) Eye Protection

The employer shall provide face shields (20-cm minimum) with goggles and shall ensure that employees wear the protective equipment during any operation in which splashes of liquid diisocyanates are likely to occur. Protective devices for the eyes and face shall be selected, used, and maintained in accordance with 29 CFR 1910.133.

(b) Skin Protection

(1) The employer shall provide appropriate protective clothing and equipment that are resistant to penetration by diisocyanates, including gloves, aprons, suits, and boots, and shall ensure that employees wear these when needed to prevent skin contact with liquid diisocyanates. Workers within 10 feet of spraying operations, or at greater distance when there is a greater drift of spray, shall be protected with impervious clothing, gloves, and footwear in addition to required respiratory protection. Rubber shoes or rubbers over leather shoes shall be worn whenever there is a possibility that liquid diisocyanates may be present on floors.
(c) Respiratory Protection

(1) To determine the type of respirator to be used, the employer shall measure the concentrations of airborne diisocyanates in the workplace initially and thereafter whenever control, process, operation, worksite, or climatic changes occur that are likely to increase the concentration of airborne diisocyanates.

(2) The employer shall provide respirators in accordance with Table 1-1 and shall ensure that the employees use them properly when respirators are required. The respiratory protective devices provided in conformance with Table 1-1 shall be those approved by NIOSH and the Mine Safety and Health Administration as specified in 30 CFR 11.

*Use of supplied-air suits may be necessary to prevent skin contact during exposure at high concentrations of airborne diisocyanates.

(3) Respirators specified for use at higher concentrations of airborne diisocyanates may be used in atmospheres with lower concentrations.

(4) The employer shall ensure that employees are properly instructed and drilled at least annually in the use of respirators assigned to them and on how to test for leakage, proper fit, and proper operation.

(5) The employer shall establish and conduct a program of cleaning, sanitizing, inspecting, maintaining, repairing, and storing respirators to ensure that employees are provided with clean respirators that are in good operating condition.

(6) Respirators shall be easily accessible, and employees shall be informed of their location.

# Section 5 -Informing Employees of Hazards

(a) All current and prospective employees working where occupational exposure to diisocyanates may occur shall be informed orally and in writing of the hazards, relevant signs and symptoms of exposure, appropriate emergency procedures, and proper conditions and precautions concerning safe use and handling of diisocyanates. The instructional program shall include a description of the general nature of the environmental and medical surveillance procedures and of the advantages to the employee of participating in these surveillance procedures. Employees exposed to diisocyanates should be warned that symptoms of exposure to diisocyanates, such as nocturnal dyspnea, may occur several hours after the end of the workshift. They should also be advised that improper home use of polyurethane products containing unpolymerized diisocyanates, such as foam kits and varnishes, may increase their risk of work-related health problems. Employees shall be instructed on their responsibilities for following work practices and sanitation procedures to help protect the health and provide for the safety of themselves and of fellow employees.

# (b) The employer shall institute a continuing education program, conducted at least annually by persons qualified by experience or training, to ensure that all employees have current knowledge of job hazards, proper maintenance and cleanup methods, and proper respirator use. As a minimum, instruction shall include the information prescribed in paragraph 5(c) below. This information shall be readily available to all employees involved in the manufacture, use, transport, or storage of diisocyanates and shall be posted in prominent positions within the workplace.
(c) Required information shall be recorded on the "Material Safety Data Sheet" shown in Appendix II or on a similar form approved by the Occupational Safety and Health Administration, US Department of Labor.

# Section 6 -Work Practices

(a) Control of Airborne Diisocyanates

(1) Engineering controls, such as process enclosure or local exhaust ventilation, shall be used when needed to keep exposure to diisocyanates at or below the recommended environmental limit. Ventilation systems, if used, shall be designed to prevent accumulation or recirculation of diisocyanates in the workplace environment and to effectively remove diisocyanates from the breathing zone of employees.

(2) Exhaust ventilation systems discharging to outside air must conform to applicable local, state, and Federal air pollution regulations.

(3) Ventilation systems shall be regularly maintained and cleaned to ensure effectiveness, which shall be verified by semiannual airflow measurements. A log showing design airflow and the results of semiannual inspections shall be kept.

(4) Before maintenance work is undertaken, sources of diisocyanates shall be shut off and isolated. The need for and use of respiratory protective equipment shall be determined as outlined in Section 4.

# (b) Confined Spaces

In confined areas where work is performed routinely, such as spray booths, exposure to diisocyanates shall be kept at or below the recommended limits by the use of engineering controls as described in Section 6(a). When nonroutine operations such as cleaning and maintenance must be performed in confined spaces not equipped with such engineering controls, the following requirements shall apply.

(1) Entry into confined spaces, eg, tanks, pits, or process vessels, that may contain diisocyanates shall be controlled by a permit system. Permits shall be signed by an authorized representative of the employer, certifying that preparation of the confined space, precautionary measures, and personal protective equipment are adequate and that prescribed procedures will be followed. Each work permit shall also be signed by the employee entering the confined space.

(2) Confined spaces that have contained diisocyanates shall be isolated by shutting off and sealing sources of diisocyanates.

(3) The confined space shall be cleaned with a solvent, flushed, washed with water, purged with air, and thoroughly ventilated. It shall then be inspected and tested for oxygen deficiency, diisocyanates, and the presence of combustible gases and other suspected contaminants before being entered and reinspected periodically at 1-hour intervals while occupied.

(4) Each employee entering the confined space shall be equipped with a self-contained breathing apparatus as specified in Section 4, a harness, and a lifeline. At least one other employee equipped for entry with the same type of protective equipment shall be stationed outside to monitor the operation. At least one additional person shall be available to assist in an emergency.

# (c) Storage

(1) Diisocyanates should be stored in closed containers and should be protected from heat and direct sunlight. They should not be stored near bases, primary or secondary amines, acids, or alcohols, since these chemicals may react violently with diisocyanates.
(2) Diisocyanate containers should be kept closed to prevent water from entering the containers, since water and diisocyanates react to produce a water-insoluble urea and carbon dioxide, which can generate enough pressure to rupture the containers. All containers of diisocyanates should be periodically inspected for signs of increased pressure within the containers and to ensure that the integrity of containers and seals is maintained. Leaking containers should be removed to the outdoors or to an isolated, well-ventilated area before the contents are transferred to other suitable containers, and leaks of diisocyanates should be cleaned up immediately.

# (d) Control of Spills and Leaks

(1) Adequate facilities for handling spills of diisocyanates shall be provided and shall include suitable floor drainage and readily accessible hoses, mops, buckets, absorbent or decontaminating materials, and protective equipment and clothing.

(2) All spills or leaks of diisocyanates shall be given prompt attention by trained personnel, and all unessential personnel shall be evacuated from the area during cleanup.

(3) Waste material contaminated with diisocyanates shall be disposed of in a manner not hazardous to employees. Disposal methods must conform to applicable local, state, and Federal regulations and shall not constitute a hazard to the surrounding population or environment. Spills of diisocyanates shall not be allowed to enter public sewers or drains in amounts that could cause explosion or fire hazards.

# (e) Emergency Procedures

Emergency plans and procedures shall be developed for all work areas where there is a potential for exposure to diisocyanates. The measures shall include those specified below and any others considered appropriate for a specific operation or process. Employees shall be trained to implement the plans and procedures effectively.

(1) Prearranged plans shall be instituted for obtaining emergency medical care and for the transportation of injured workers. A sufficient number of employees shall be trained in first aid so that assistance is available immediately when necessary.

(2) Employees who have significant skin contact with diisocyanates should wash with water or shower to remove the compound from the skin and should then wash the affected areas with alcohol. Contaminated clothing shall be removed and discarded or cleaned before reuse.

(3) In the event of a fire involving diisocyanates, all unessential personnel shall be evacuated from the area. The types of extinguishing media that should be used in fighting diisocyanate-supported fires are dry chemical powder, carbon dioxide, or foam. Water should be used only if large quantities are available. Firefighters should be cautioned of the possibility of exposure to other hazardous chemicals, such as hydrogen cyanide, phosgene, and carbon monoxide.

(4) After the fire has been extinguished, the area shall be inspected by properly protected personnel and shall be decontaminated to remove any suspected diisocyanate residues before unprotected workers are permitted to enter the area.
# (f) Laundering

(1) Before being laundered, contaminated clothes shall be placed in a decontaminating solution of water containing 10% ammonia in a container that is impervious to diisocyanates.

# (g) Laboratory Activities

When diisocyanates are used in laboratory activities, the following provisions, in addition to other sections, shall be followed.

(1) Mechanical pipetting aids shall be used for all pipetting procedures.

(2) Experiments, procedures, and equipment that could produce aerosols or vapors of diisocyanates shall be confined to laboratory-type hoods, glove boxes, or other similar control apparatus. Exposure chambers and associated generation apparatus shall be separately ventilated.

(3) Surfaces on which diisocyanates are handled shall be impervious to absorption or penetration by these compounds.

(4) Laboratory vacuum systems, hoods, and exposure chambers shall be exhaust-ventilated in a manner consistent with Federal and local air pollution regulations.

(5) Airflow in the laboratory shall be established in a pattern flowing from the least to the most contaminated area. Contaminated exhaust air shall not be recirculated or discharged to other work areas.

# Section 7 -Sanitation

(a) Preparing, storing, dispensing (including vending machines), and consuming food and smoking shall be prohibited in work areas where occupational exposure to diisocyanates may occur.

# (b) Employees who handle diisocyanates or equipment contaminated with diisocyanates shall be advised to wash their hands thoroughly with soap or mild detergent and water before using toilet facilities, eating, or smoking.

(c) Plant facilities shall be maintained in a sanitary manner in accordance with sanitation requirements listed in 29 CFR 1910.141.

(d) The employer shall provide appropriate changing and shower rooms as required in 29 CFR 1910.141(d,e).

# Section 8 -Monitoring and Recordkeeping Requirements

# (a) Industrial Hygiene Surveys

Surveys shall be repeated at least annually and as soon as practicable after any change likely to result in increased concentrations of airborne diisocyanates.

# (b) Personal Monitoring

If it has been determined that there is occupational exposure to diisocyanates, the employer shall fulfill the following requirements:

(1) A program of personal monitoring shall be instituted to identify and measure, or permit calculation of, the exposure of each employee occupationally exposed to diisocyanates. Personal monitoring may be supplemented by source and area monitoring.

(2) In all personal monitoring, samples representative of the exposure in the breathing zone of the employee shall be collected.

(3) For each determination of the diisocyanate concentration, a sufficient number of samples shall be taken to characterize employee exposure. Variations in the employee's work schedule, location, or duties and changes in production schedules shall be considered in deciding when samples are to be collected.

(4) Samples from each operation in each work area and each shift shall be taken at least once every 6 months or as otherwise indicated by a professional industrial hygienist. If monitoring shows that an employee is exposed to diisocyanates at concentrations above the environmental limits recommended in Section 1(a), additional monitoring shall be promptly initiated.
If this confirms that exposure is excessive, control measures shall be initiated as soon as possible to reduce the concentration of diisocyanates in the employee's environment to less than or equal to the limits recommended in Section 1(a). The affected employee shall be notified of the excessive exposure and of the control measures being implemented. Monitoring of the employee's exposure shall be conducted at least every 30 days and shall continue until two consecutive determinations, at least 1 week apart, indicate that the employee's exposure no longer exceeds the recommended environmental limits. At that point, semiannual monitoring may be resumed.

# (c) Recordkeeping

Environmental monitoring records and other pertinent records shall be kept for at least 30 years after the last occupational exposure to diisocyanates. The records shall include the dates and times of measurement, duties and job locations within the worksite, sampling and analytical methods used, the number, duration, and results of samples taken, concentrations of diisocyanates in air estimated from these samples, the type of personal protection in use at the time of sampling, and identification of the exposed employee. Employees shall be able to obtain information on their own environmental exposures. Environmental monitoring records shall be made available to designated representatives of the Secretary of Labor, the Secretary of Health, Education, and Welfare, and the employee or former employee.

Pertinent medical records shall be retained by the employer for 30 years after the last occupational exposure to diisocyanates. Records of environmental exposures applicable to an employee should be included in the employee's medical records. These medical records shall be made available to the designated representatives of the Secretary of Labor, of the Secretary of Health, Education, and Welfare, of the employer, and of the employee or former employee.

# II. INTRODUCTION

This report presents the criteria and the recommended standard based thereon that were prepared to meet the need for preventing impairment of health arising from occupational exposure to diisocyanates. The criteria document fulfills the responsibility of the Secretary of Health, Education, and Welfare under Section 20(a)(3) of the Occupational Safety and Health Act of 1970 to "develop criteria dealing with toxic materials and harmful physical agents and substances which will describe...exposure levels at which no employee will suffer impaired health or functional capacities or diminished life expectancy as a result of his work experience."

After reviewing data and consulting with others, NIOSH formalized a system for the development of criteria on which standards can be established to protect the health and to provide for the safety of employees exposed to hazardous chemical and physical agents. Criteria for a recommended standard should enable management and labor to develop better engineering controls resulting in more healthful work environments, and simply complying with the recommended standard should not be regarded as the final goal.
# II. INTRODUCTION

This report presents the criteria and the recommended standard based thereon that were prepared to meet the need for preventing impairment of health arising from occupational exposure to diisocyanates. The criteria document fulfills the responsibility of the Secretary of Health, Education, and Welfare under Section 20(a)(3) of the Occupational Safety and Health Act of 1970 to "develop criteria dealing with toxic materials and harmful physical agents and substances which will describe...exposure levels at which no employee will suffer impaired health or functional capacities or diminished life expectancy as a result of his work experience."

After reviewing data and consulting with others, NIOSH formalized a system for the development of criteria on which standards can be established to protect the health and to provide for the safety of employees exposed to hazardous chemical and physical agents. Criteria for a recommended standard should enable management and labor to develop better engineering controls resulting in more healthful work environments, and simply complying with the recommended standard should not be regarded as the final goal.

Occupational exposure to some of the diisocyanates has produced respiratory illness in workers. In addition to irritating the upper and lower respiratory tract, diisocyanates can cause sensitization, and sensitized individuals may develop asthma upon exposure to diisocyanates in very small amounts. Chronic impairment of pulmonary function has been reported in some workers exposed to diisocyanates. Diisocyanates are also irritating to the skin and eyes.

Further research is needed in a number of areas relevant to controlling occupational exposure to diisocyanates. The possibilities of carcinogenic, mutagenic, teratogenic, and reproductive effects from diisocyanates have not been adequately investigated. Studies in which effects on individuals are correlated with their actual exposures are also needed. Screening tests should be developed to permit early recognition of adverse respiratory effects resulting from sensitization to the diisocyanates. Animal experiments should be performed to determine how concentration and length of exposure affect the development of sensitization. Improved engineering controls should be developed to protect workers in certain diisocyanate applications, such as spraying.

# III. BIOLOGIC EFFECTS OF EXPOSURE

The diisocyanates are chemical compounds in which two isocyanate groups, -NCO, are attached to carbon atoms of an organic radical. The chemical and physical properties of various diisocyanates are listed in Table XI-1 [1-10]. Synonyms for these compounds are listed in Table XI-2.

Many diisocyanates exhibit high chemical reactivity [11]. In the presence of water they react exothermically to produce an unstable carbamic acid that rapidly dissociates to form a primary amine and carbon dioxide. The primary amine can react further with excess isocyanate to form a urea derivative.

Isocyanates also react vigorously with organic compounds containing reactive hydrogens, especially where the hydrogen atom is attached to oxygen, nitrogen, or sulfur [11]. In biologic macromolecules, these groups occur abundantly, and the isocyanates will therefore react and combine with a variety of sites on these molecules. Polyfunctional isocyanates, such as the diisocyanates, can act as crosslinking agents with biologic macromolecules.

# Extent of Exposure

The most common method of synthesis of the diisocyanates is the reaction of primary amines with phosgene [12]. In this process, a primary aliphatic or aromatic amine, dissolved in a solvent such as xylene, monochlorobenzene, or dichlorobenzene, is mixed with phosgene dissolved in the same solvent and allowed to react for several hours at temperatures of about 200 C. More phosgene is added during the process, and the final reaction mixture is fractionated to recover the isocyanate product, as well as hydrochloric acid, unreacted phosgene, the solvent for recycling, and the distillation residue for incineration.

Diisocyanates are used to produce polyurethane foams, coatings, elastomers, and spandex fibers. Toluene diisocyanate (TDI), which is commercially available as standard mixtures of the 2,4- and 2,6-isomers, is generally used in producing flexible polyurethane foams.
Methylene diphenyl diisocyanate (MDI), especially in partially polymerized forms, is used more frequently in rigid foams. A substantial amount of MDI (40-50% of the amount produced) is used in the manufacture of polyurethane systems, such as formulated packages of isocyanates, polyols, fluorocarbon blowing agents, fire retardants, surfactants, and catalysts. TDI and pure MDI, or special liquid MDI products, are used to make elastomers, which are used in manufacturing printing rolls, liners for mine chutes and grain elevator chutes, coated fabrics, shoe soles, and automobile bumpers [12,13]. MDI is also used in the foundry industry as part of a binding system for casting molds [14]. The total consumption of MDI and partially polymerized MDI in 1975 was about 300 million pounds. TDI consumption totaled about 400 million pounds, and only a few million pounds of other diisocyanates were used in 1975 [15].

Workers with potential occupational exposure to diisocyanates include adhesive workers, insulation workers, diisocyanate resin workers, lacquer workers, organic chemical synthesizers, paint sprayers, polyurethane makers, rubber workers, shipbuilders, textile processors, and wire-coating workers [16].

A NIOSH survey conducted in 1972-1974 estimated that 50,000-100,000 employees in the United States were potentially exposed to diisocyanates. This number does not include occasional users of isocyanate preparations such as polyurethane varnish and may therefore underestimate the number of workers exposed.

# Historical Reports

Toluene diisocyanate and hexamethylene diisocyanate (HDI) were the most widely used diisocyanates in the early stages of the industry, according to Williamson [17] and Munn [18]. Consequently, the earliest reports of hazards from exposure to isocyanates usually involved these compounds. Both of these compounds are among the more volatile diisocyanates, and respiratory and other health problems associated with these compounds prompted the development of less volatile diisocyanates and derivatives, as well as improved handling techniques. As a result, although many new diisocyanate products have been used in industrial applications more recently, the number of reports of toxic effects from exposure to diisocyanates has decreased.

In Germany in 1941, Gross and Hellrung, according to Friebel and Luchtrath [19], investigated the toxicity of TDI in animal experiments. They exposed dogs, cats, rabbits, and guinea pigs to a commercial TDI preparation at 14-1,400 ppm and reported that, at the lower concentrations studied, irritation of the respiratory tract occurred, and, at the higher concentrations, bronchitis, pneumonia, and pulmonary edema resulted.

According to Brugsch and Elkins [20], toxic effects of TDI had been observed in German workers handling the substance in war-related industries during World War II. However, the first published account of TDI toxicity in humans was a 1951 report by Fuchs and Valade [21]. They described nine cases of progressive bronchial irritation in French workers exposed to TDI.
On continued exposure, seven of the affected workers developed an asthma-like condition, which the authors suggested was allergic.

In 1953, Reinl [22] reported a human fatality attributed to organoisocyanate exposure. This was 1 of 17 cases of respiratory illness in German workers exposed to TDI or other isocyanates. Thirteen of these illnesses were severe. Two workers developed pulmonary edema, which in one case was fatal, terminating in cor pulmonale. In the same year, in Sweden, Swensson et al [23] described three cases of respiratory illness in painters who used lacquer containing isocyanates. Two of these workers had spirometric pulmonary function measurements suggestive of emphysema.

In the 1950's many similar cases of isocyanate toxicity were reported in Europe [24-26], including another fatality [27], and in the United States [25,28-31]. These occurred in workers exposed to TDI in manufacturing polyurethane foam or using TDI- or polyisocyanate-based lacquers and glues. As many as 99 cases of respiratory illness were reported from a single US plant manufacturing polyurethane foam [31]. A 1962 review by Elkins et al [32] reported a total of 222 cases of respiratory illness attributed to TDI exposure in the literature through 1960.

Goldblatt and Goldblatt [33], in a 1956 report, described a case of a chemist exposed to the vapor of heated 1,5-naphthalene diisocyanate (NDI). The chemist developed a severe cough that recurred each time he returned to the laboratory. Gerritsen [34] suggested in 1955 that an asthmatic condition in workers exposed to HDI was the result of an allergic mechanism.

Most of the early reports of respiratory illness in workers exposed to diisocyanates described bronchial asthma or chronic bronchitis, often considered by the authors to involve evidence of sensitization [23-25,28]. However, some respiratory illnesses were attributed to direct irritation from TDI, usually as a result of acute accidental exposures [28-30,35].

Friebel and Luchtrath [19], in 1955, attempted to demonstrate sensitization to TDI in guinea pigs. They were not able to produce allergic asthmatic responses in animals exposed to TDI aerosol at 120 ppm or TDI vapor at 50-80 ppm. Effects on the animals' lungs were attributed to primary toxic action by TDI. Zapp [36], in 1957, also reported only direct effects on the respiratory tract in rats, guinea pigs, dogs, and rabbits exposed to TDI at 1.5 ppm for about 80 exposures of 6 hours each.

Since 1960, additional cases of occupational illness attributed to exposure to diisocyanates have been reported, but these have been less frequent and less severe as recognition of the hazard has increased. In 1973, NIOSH published criteria for a recommended standard for occupational exposure to TDI [37]. The studies of TDI toxicity on which NIOSH based its 1973 recommendations are discussed in the following sections, as is information on TDI that has appeared in the literature more recently and data on other diisocyanates. Most studies on TDI in this chapter that are dated prior to 1973 were discussed in the earlier document.
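The studies reviewed in the remainder of this chapter quote airborne concentrations both as parts per billion (ppb) and as mass per unit volume (µg/cu m or mg/cu m). The two are related through the molecular weight of the compound and the molar volume of air (about 24.45 liters/mole at 25 C and 1 atmosphere). The sketch below shows the standard conversion; the molecular weights used are assumed round values for illustration, not figures taken from Table XI-1.

```python
# Conversion between ppb and ug/cu m at 25 C and 760 mmHg:
#   C(ug/cu m) = ppb * MW / 24.45
# Molecular weights below are assumed values for illustration.

MOLAR_VOLUME_L = 24.45  # liters/mole of air at 25 C, 1 atm

MW = {
    "TDI": 174.2,  # toluene diisocyanate
    "MDI": 250.3,  # methylene diphenyl diisocyanate
}

def ppb_to_ugm3(ppb, compound):
    return ppb * MW[compound] / MOLAR_VOLUME_L

def ugm3_to_ppb(ugm3, compound):
    return ugm3 * MOLAR_VOLUME_L / MW[compound]

# Examples consistent with values quoted in the text:
# ppb_to_ugm3(20, "TDI") -> ~142 ug/cu m  ("20 ppb (140 ug/cu m)")
# ppb_to_ugm3(20, "MDI") -> ~205 ug/cu m  ("20 ppb (204 ug/cu m)")
```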
# Effects on Humans

Much of the investigation of the biologic effects of diisocyanates has been directed toward determining the extent and nature of sensitization to these compounds. In this document, sensitivity to diisocyanates denotes the tendency of some individuals to have a respiratory response when they are exposed at concentrations much lower than those that irritate the respiratory tract in most people. Sensitization may develop gradually or suddenly after exposure to diisocyanates. The usual response is an asthmatic reaction, characterized by wheezing, dyspnea, and bronchial constriction. Use of these terms is not intended to implicate any particular mechanism as the cause of the reaction. The terms "allergy" and "allergic," on the other hand, are reserved for conditions in which an immunologic response is implied.

Workers occupationally exposed to the diisocyanates in various industries have developed adverse respiratory effects; reports of skin disease and evidence suggesting systemic toxicity from such exposures have been far less numerous. Most of the affected workers have been exposed in manufacturing diisocyanates, in using these compounds to manufacture polyurethane products such as foam, and in painting or spraying polyurethane varnishes and paints. These activities also involve possible exposure to other potentially harmful chemicals, including chlorobenzene, phosgene, styrene, and amines, and little is known of how such mixed exposures may affect the toxicity of the diisocyanates.

Most of the data available on exposure to diisocyanates are on TDI. Several reports on MDI and a smaller number on other diisocyanates, including HDI and NDI, have also been published. In the following subsections, information on the biologic effects of TDI is discussed first, followed by data on MDI and other diisocyanates.

# (a) Respiratory Effects

The odor threshold for TDI estimated by Zapp [36] in 1957 was 400 ppb (2.8 mg/cu m) in 12 of 24 men tested. Five years later, Henschler et al [38] estimated an odor threshold of 50 ppb (360 µg/cu m), using the analytical method of Erlicher and Pilz [39], which they found was more accurate and sensitive than the Ranta method used by Zapp. Eye irritation was experienced by three of six volunteers exposed at this concentration for 10 minutes and five of six exposed for 15 minutes; one also had nasal irritation [38]. At 100 ppb (700 µg/cu m), two of six complained of throat irritation, and exposure at 500 ppb (3,600 µg/cu m) produced eye, nose, and throat irritation in all volunteers.

Pulmonary function testing has been used in many studies of workers exposed to TDI and other diisocyanates to evaluate changes in lung function. In 1964, seven furniture plant employees who sprayed, dipped, or painted with polyurethane varnish developed acute respiratory symptoms 0.5 hour to 3 weeks after their first known exposure [40]. Three measurements made after some improvements in ventilation showed TDI at 80, 100, and 120 ppb (570-850 µg/cu m). All seven men coughed and had difficulty in breathing, and four had blood-stained sputum.
Five of the seven were tested for forced vital capacity (FVC) and forced expiratory volume at 1 second (FEV 1) 2-3.5 months after exposure and had higher values than they did shortly after exposure. The responses to a questionnaire given 22 months after exposure suggested to the author that four of the six who responded had become sensitized to TDI.

In 1965, Williamson [41] described six workers exposed to TDI in a polyurethane foam plant who developed symptoms that were considered suggestive of sensitization. Four of these, from a workforce of 99, had become sensitized during 18 months of exposure to TDI at concentrations usually below 20 ppb, apparently determined by area monitoring. The author believed that sensitization had resulted from exposure at higher concentrations caused by spills. Immediately after one spill, TDI at 200 ppb (1.4 mg/cu m) was found, but the concentration was less than 5 ppb (35 µg/cu m) 10 minutes later. All six affected workers had asthma or bronchitis with decreased ventilatory capacity (FVC and FEV 1) during these incidents. Some of the subjects were also occasionally exposed to MDI, but always at concentrations below 20 ppb (204 µg/cu m).

In 1976, Charles et al [42] described a case of pneumonitis and three cases of chronic respiratory disease in workers exposed to TDI or HDI. Pneumonitis was diagnosed in a 50-year-old nonsmoker who had experienced difficulty in breathing, weight loss, and fever for 6 weeks at the time of examination. Prior to his 5 years of working in the polyurethane foam industry, he had been a coal miner for 11 years. Chest X-rays showed alveolar filling lesions in both lungs. Two months later, pulmonary function tests showed reduced FEV 1, vital capacity, total lung capacity, and residual volume of the lungs. A bronchogram showed peripheral cystic bronchiectasis in the right upper lobe. All immunologic tests for antibodies against TDI were negative. Microscopic examination of biopsy samples of lung tissue showed variations from normal architecture ranging from diffuse interstitial disease to acute inflammation and end-stage interstitial fibrosis. The authors stated that the areas of filled alveoli resembled desquamative interstitial pneumonitis and that the whole picture resembled that of a pulmonary hypersensitivity response to an inhaled allergen rather than coal-miners' pneumoconiosis, but they did not demonstrate that this was related to TDI exposure.

Two other workers developed severe dyspnea after exposure to spills of TDI [42]. Two to three years after exposure, both workers had moderate obstruction of the airways, as indicated by FEV 1 measurements below predicted values; one also had a decreased vital capacity, and the other, although his vital capacity was normal, still experienced severe nonwheezing dyspnea after minimal exertion.

Similar symptoms occurred in a 61-year-old man who had been a paint sprayer for 43 years with no previous history of respiratory illness; he developed wheezing, dyspnea, and sweating within hours when he first used a polyurethane paint containing HDI [42]. Whenever he was reexposed to the paint by casual contact with fumes from other spraying operations, he again developed symptoms.
Testing 1 year after exposure showed moderate obstruction of the airways, and he complained of nonwheezing dyspnea after exertion.

Pepys et al [43] tested four patients with occupational asthma for TDI sensitivity by simulating occupational exposures to a two-stage polyurethane varnish with TDI activator. The subjects applied varnish with the TDI activator and, on a separate day, without the activator, to a surface in a small cubicle. When the activator was used, TDI concentrations in the air reached a maximum of almost 2 ppb (14 µg/cu m), as measured by the colorimetric method of Meddle et al [44]. No TDI was detected when the varnish alone was applied. All the subjects were essentially asymptomatic at the time of testing, and their FVC and FEV 1 values were not more than 10% below predicted values. None showed positive responses in skin tests with common allergens, and none had a family or personal history of allergies.

A 26-year-old boatbuilder, who had been using a two-stage polyurethane varnish system for 8 years, had cough and dyspnea at night [43]. The attacks gradually became more severe and occurred earlier in the day, appearing only when the two-stage varnish was used. Another man, 46 years old, had worked for 8 years as a maintenance engineer whose duties included maintenance of a polyurethane foam machine. He had chest tightness and shortness of breath, which disappeared within 20 minutes after he left work. His symptoms also became progressively worse, developing into severe asthma. Neither man had any known exposure to spills of TDI. In challenge testing, both reacted to the varnish only when the TDI activator was added. The boatbuilder developed a late asthmatic reaction that appeared at 1-2 hours and reached a maximum at 3-4 hours, while the maintenance engineer showed an immediate reaction.

The other two subjects were women who worked in a television factory department where coated wires were soldered [43]. The wire, coated with cured polyurethane and polyvinyl butyral, was dipped into a resin flux containing dimethylamine hydrochloride and then into multicore solder at 460 C. One woman, 44 years old, developed a chronic cough and, after 6 years, wheezing. The second woman, a 54-year-old supervisor in the same department, developed a productive cough, wheezing, and breathlessness, developing into severe bronchitis that kept her away from work for 5 weeks. Her symptoms recurred within 1 week of her return, and this pattern was repeated each time she attempted to return to work. Both women reacted to the varnish with TDI activator. The first woman had an immediate reaction that was resolved in 2 hours but was followed by a late reaction at 3-4 hours. The second woman had only a late reaction at 3-4 hours. She was also tested with various components used in the soldering operation. This test produced a severe asthmatic reaction starting 30-60 minutes after exposure ended and continuing for 6 weeks before her FEV 1 returned to pretest levels. Blood tests on these four patients showed no eosinophilia, but sputum collected from the 54-year-old woman contained eosinophils. This suggested to the authors a reagin-mediated reaction.
The sensitized individuals tested by Pepys and colleagues [43] had adverse reactions to TDI after exposures as brief as 10 minutes at reported concentrations of about 2 ppb (14 µg/cu m). The authors emphasized that none of these sensitized individuals had a known history of heavy TDI exposure, such as exposure to spills. Thus, it appears that exposure to massive amounts of TDI, as from a spill, may not be necessary to produce sensitization.

The same technique of challenge exposure to polyurethane varnish was used by Carroll and coworkers [45] to test four employees who worked in an office adjacent to a factory that used TDI. Three clerical workers and a security guard, among 47 workers in the office block, had histories of asthma-like symptoms, and in two cases these were clearly alleviated during periods away from work. It was discovered that the air inlet for the office building was located only 23 feet from the ventilation outflow of the factory that used TDI, but actual air concentrations in the offices were not determined.

The polyurethane varnish mixture used in the challenge testing was one-third TDI, and the authors [45] stated that the atmospheric concentration of TDI created by painting it on a surface in the test chamber was about 1 ppb (7 µg/cu m), but the method of determination was not described. Three of the patients reacted to TDI, one after a 15-minute exposure, one after 30 minutes, and one only after a 60-minute exposure. Exposures to the varnish without the TDI activator produced no reactions.

The authors [45] also mentioned that one additional office worker had asthmatic symptoms that were relieved by removal from the environment. This suggests that 4 of the 47 workers were sensitized to TDI, a sensitization rate of about 9%.

To evaluate the specificity of TDI sensitivity, O'Brien et al [46] tested the responses of TDI workers to TDI, histamine, and exercise. The 63 men studied had been referred for investigation of possible work-related respiratory symptoms. A subject was considered sensitive to TDI if his fall in FEV 1 after exposure was 15% more than on the control day. TDI concentrations in the cubicle were measured by a continuous monitor; in 23 cases, breathing-zone sampling was also performed, and the results of the two measurements were found to be closely correlated (r = 0.95).

Fifty-two of the workers were also tested by inhaling an aerosol of histamine acid phosphate at graded concentrations up to 32 mg/ml for 30-second periods [46]. A 20% fall in FEV 1 was considered evidence of bronchial hyperreactivity. Forty-six subjects participated in exercise testing, consisting of free running sufficient to increase the heart rate to 140 beats/minute. A fall in FEV 1 of more than 9% was regarded as indicative of an asthmatic reaction. All subjects were prick-tested with 23 common respiratory allergens, and those who reacted to 1 or more were considered atopic.
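All three criteria in this protocol are percent falls in FEV 1 relative to a baseline. A minimal sketch of how such challenge results might be classified is given below, using the thresholds quoted above; the function and its names are illustrative and are not part of the study's protocol.

```python
# Illustrative only: classify challenge outcomes by percent fall in
# FEV1, using the thresholds quoted from O'Brien et al [46].

def percent_fall(baseline_l, post_l):
    """Percent fall in FEV1 from baseline (both in liters)."""
    return 100.0 * (baseline_l - post_l) / baseline_l

def positive_response(test, control_fall_pct, challenge_fall_pct):
    if test == "TDI":
        # sensitive if the fall exceeds the control day's fall by
        # more than 15 points (one reading of "15% more")
        return challenge_fall_pct - control_fall_pct > 15.0
    if test == "histamine":
        return challenge_fall_pct > 20.0   # bronchial hyperreactivity
    if test == "exercise":
        return challenge_fall_pct > 9.0    # asthmatic reaction
    raise ValueError(test)
```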
Thirty-seven of the 63 workers were sensitive to TDI as indicated by respiratory responses to challenge testing, which included 2 immediate, 17 late, and 18 dual reactions [46]. Nine of these workers reacted to TDI at concentrations of less than 1 ppb (7 µg/cu m). When challenged with histamine, 17 ... However, in the subgroup of sensitive workers who responded to TDI at less than 1 ppb, there were significantly more reactions to both histamine (P<0.005) and exercise (P<0.01) than in those who reacted only at higher TDI concentrations; this extremely sensitive group also had a significantly higher incidence of exercise-induced asthma than the group that did not react to TDI (P<0.025). Age, atopic status, and history of rhinitis were similar in the TDI-sensitive and nonsensitive groups, but there was a higher incidence of asthma prior to work with TDI and of a family history of allergy in the nonsensitive group. There were no significant differences between smokers, exsmokers, and nonsmokers on the TDI, histamine, or exercise tests.

In a further study, O'Brien et al [47] investigated cross-reactivity to TDI, MDI, and HDI in 24 diisocyanate workers referred for investigation of respiratory symptoms. All 24 men had been exposed to TDI, 14 to MDI, and 6 to HDI; 5 of the latter group had been exposed to all 3 diisocyanates.

All subjects were challenged with TDI by the procedure previously described and with MDI over the same range of concentrations (<1-20 ppb) by heating the material in a closed cubicle [47]. Nine, including the six with previous exposure to HDI, were also challenged by painting with an HDI varnish, but air concentrations of HDI were not measured. All subjects were tested by histamine inhalation.

Sixteen of the subjects were sensitive to TDI, and eight of these also reacted to MDI, including four who had no known previous exposure to MDI [47]. Three of the nine workers tested with HDI showed positive responses; all three had also reacted to both TDI and MDI, and two of them had no previous exposure to HDI. Histamine inhalation produced a positive reaction in five of the eight subjects who did not respond to challenge with diisocyanates, one of the eight who reacted only to TDI, and six of the eight who reacted to both TDI and MDI (including all three who also reacted to HDI). The authors reported that subjects who reacted to more than one diisocyanate had a greater degree of histamine reactivity and reacted to TDI at lower concentrations than did those who reacted only to TDI.

Among workers referred for respiratory symptoms, TDI-sensitive individuals were no more likely to have nonspecific asthmatic responses to histamine or exercise than were those who did not react to TDI. However, those who showed extreme sensitivity to TDI, reacting at concentrations of less than 1 ppb, did have an increased incidence of nonspecific asthmatic responses, suggesting to the authors that specific sensitivity to TDI might be exacerbated by irritative or pharmacologic hyperreactivity of the airways. The existence of such a dual mechanism in individuals extremely sensitive to diisocyanates was supported by the results of cross-challenging with TDI, MDI, and HDI [47]. The authors considered that immunologic cross-reactivity between these three compounds was unlikely because of their structural differences.
They concluded that their results were consistent with the existence of a specific mechanism of TDI sensitivity coupled, in extremely sensitive individuals, with a pharmacologic mechanism that also caused increased reactivity to other diisocyanates.

Occupational exposure to MDI has produced respiratory effects similar to those reported from TDI exposure. Longley [48], in 1964, described an incident in which 12 men who worked 60-120 feet from an MDI foam-spraying operation developed symptoms, including asthmatic breathing, retrosternal soreness, constriction of the chest, cough, retrobulbar pain, depression, headache, nasal discharge, and insomnia. All 12 workers developed symptoms within several hours after exposure to the mist. The workers actually spraying the MDI foam, who wore full protective clothing and air-supplied respirators, were unaffected.

Munn concluded that MDI was a potential respiratory irritant and that in rare instances it could cause sensitization.

In 1972, Lob [49] described a reaction to MDI in a 50-year-old worker in a polyurethane factory who had no history of allergies, bronchitis, or asthma. He intermittently experienced malaise accompanied by fever, nausea, and coughing, usually at the end of the day. After one such attack, a thorough examination showed that he had a slightly decreased vital capacity. After another attack, he had an increased white blood cell count (WBC) of 12,650/cu mm.

To determine the factors producing these symptoms, Lob [49] exposed the worker for 3-4 minutes in a simulated operation where plastic belts were welded by heat. The author stated that MDI was detected in the whitish fumes given off during the welding process, but the concentration was not given; no TDI was detected. The worker's body temperature increased to 39 C within 4-5 hours after he was exposed, and he had nausea and a severe cough. Vital capacity decreased slightly, WBC increased, and he had congested conjunctiva, increased pulse, and decreased blood pressure. All these signs were normal the next day. Lob concluded that the onset, severity, persistence, and recurrence of the symptoms were suggestive of an allergic reaction to MDI.

In 1971, Lapp [50] described the effects of brief exposures to TDI and MDI on three men. One was a 38-year-old worker in a chemical plant who had worked with diisocyanates for 13 years. The other two were 25- and 23-year-old medical officers with no previous exposure to diisocyanates. Each subject slowly inhaled TDI from a sniff bottle. After at least 1 day without exposure, each subject was challenged with MDI in the same manner. Pulmonary function of each subject was determined before and after each exposure.

Fifteen minutes after TDI inhalation, the values for FVC, FEV 1, and forced expiratory flow between 25 and 75% of the FVC (FEF 25-75) in the worker who had previously been exposed to diisocyanates were 3.16, 2.79, and 3.62 liters, respectively, compared with corresponding preexposure values of 4.03, 3.68, and 5.92 liters. Airway resistance increased to 123% of the preexposure value at 15 minutes and 166% at 30 minutes. The effects of MDI were reversed following administration of a bronchodilator.
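Expressed as percent falls from the preexposure baseline, the spirometric changes quoted above are substantial; the following lines merely restate that arithmetic, using the values given in the text.

```python
# Percent falls from baseline for the previously exposed worker,
# computed from the values quoted above (all in liters).
pre  = {"FVC": 4.03, "FEV1": 3.68, "FEF25-75": 5.92}
post = {"FVC": 3.16, "FEV1": 2.79, "FEF25-75": 3.62}

for test in pre:
    fall = 100.0 * (pre[test] - post[test]) / pre[test]
    print(f"{test}: {fall:.0f}% fall")
# -> FVC: 22% fall, FEV1: 24% fall, FEF25-75: 39% fall
```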
Approximately 4-6 hours later, the man again experienced chest tightness and wheezing, and his temperature increased to 100 F. All these symptoms had disappeared by the next morning. The other two subjects showed no loss of pulmonary function after exposure to MDI. Minor changes in their airway resistance and thoracic volume were probably due to chance, according to the author, though he noted that these might have been caused by irritation.

Lapp [50] concluded that the changes observed in the previously exposed individual, who was exposed to TDI and MDI at levels that did not cause such reactions in the other subjects, confirmed his respiratory sensitivity to these compounds. Since the isocyanates to which this worker had previously been exposed were not identified, this study does not provide evidence on the potential of diisocyanates to produce cross-sensitization.

# (b) Immunologic Effects

The studies discussed in the preceding section indicate that some people are sensitive to diisocyanates, reacting to these substances in quantities much smaller than those that produce direct irritation of the lungs in most individuals. The mechanism of sensitization to the diisocyanates has been investigated in immunologic and pharmacodynamic studies on exposed workers.

Allergic responses that result from circulating antibodies can be either immediate or late, or a combination of the two. Immediate responses occur within minutes of exposure to the antigen, and late reactions appear a few hours after exposure. Immediate reactions to some substances are associated with atopy, an innate tendency to develop allergies, which may be related to high serum concentrations of reagin-type immunoglobulin (IgE). Atopy is often judged to be present if two or more skin tests with common inhalant allergens such as pollen and animal dander are positive.

Molecules with molecular weights of less than 10,000 are rarely antigenic; thus, immunologic activity of the diisocyanates probably results from a reaction with a hapten complex formed from a diisocyanate and a naturally occurring antigenic substance such as protein or polysaccharide. Because isocyanates react with hydroxyl, amino, sulfhydryl, or similar groups, it is likely that hapten complexes may be formed. Most investigators who have studied the immune response to diisocyanates have attempted to duplicate this hapten complex by conjugating the isocyanate with a protein such as egg albumin or human serum albumin for use as an antigen in immunologic tests, using a modification of the method described in 1964 by Scheel et al ...

Four of these five, but only 1 of 21 unsensitized workers, had a history of allergies before working with diisocyanates. These four sensitized subjects also had clearly positive lymphocyte transformation tests, although all had been without exposure to diisocyanates for at least 6 months before testing. Neither unexposed nor unsensitized exposed workers gave positive responses in this test.
Passive cutaneous anaphylaxis (PCA), Prausnitz-Kuestner (P-K), leukocyte histamine release, passive hemagglutination, and gel diffusion precipitin tests were negative in all subjects and did not identify the sensitized workers. Subsequent studies by Nava et al [53] and Butcher et al [54] have not confirmed the diagnostic value of the lymphocyte transformation test for diisocyanate sensitivity.

Bruckner and coworkers [52] noted that all five sensitized workers had been exposed to diisocyanates at concentrations above 20 ppb. They also pointed out that the development of sensitization in these workers occurred only after 2 months to 5 years of repetitive exposures, concluding that overt clinical sensitization might be avoided if workers who showed increasingly severe signs of respiratory irritation were removed from further exposure to diisocyanates.

In 1970, Taylor [55] attempted to detect circulating antibodies to TDI in 55 workers with symptoms suggestive of TDI sensitivity. Their sera were compared with those from 40 unexposed textile workers for antibodies by tests for complement fixation, PCA, and red-cell-linked antiglobulin. None of the control sera, but 23 of the test sera, gave positive results in one or more tests. Five were positive in more than one test, but only one in all three tests. Six sera taken within a few months of an unusually high exposure to TDI that produced severe symptoms all showed positive test results. There was no correlation between positive antibody tests and eosinophilia as determined from blood or sputum samples. The author suggested that the lack of correlation between the tests indicated that they detect antibodies of slightly different specificity or of different immunoglobulin classes.

In 1975, Nava et al [53] described immunologic research on 182 clinical patients, all but one of whom had respiratory symptoms, who had been exposed to diisocyanates in the workplace. Ninety-six of these patients reacted positively to intradermal testing with TDI-protein conjugates. Thirty-seven of the 96 workers with positive tests and 6 of 86 with negative ones were atopic. However, of the 45 patients with immediate reactions, 60% were atopic. The authors concluded that atopy was not a factor in TDI sensitization, since most patients with positive reactions to TDI in intradermal tests were not atopic. However, their data suggest that atopy may be a predisposing factor.

Nava and associates [53] also performed pulmonary function testing on 45 of these patients, who were exposed to TDI at 100-130 µg/cu m (14-18 ppb) in challenge tests. Thirty-five patients showed decreased pulmonary function after TDI challenge. An immediate or dual response occurred in 25, whereas 10 had only a late response; in contrast, over half the positive reactions in intradermal tests consisted of a late response only. Tests with acetylcholine on 18 subjects showed that those who were hyperreactive to this bronchoconstrictor tended to have immediate bronchial reactions to TDI. This suggests that a pharmacologic mechanism, as well as an immunologic one, is involved in diisocyanate sensitivity.
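The suggestion that atopy may be predisposing can be seen by comparing the atopy rates in the two skin-test groups. The sketch below makes that comparison with a simple two-proportion test; this calculation is ours, for illustration, and was not an analysis performed in the study.

```python
import math

# Illustrative two-proportion comparison of the atopy rates reported
# by Nava et al [53]; this calculation is ours, not the study's.
atopic_pos, n_pos = 37, 96   # atopic among positive intradermal tests
atopic_neg, n_neg = 6, 86    # atopic among negative intradermal tests

p1, p2 = atopic_pos / n_pos, atopic_neg / n_neg
p = (atopic_pos + atopic_neg) / (n_pos + n_neg)   # pooled proportion
z = (p1 - p2) / math.sqrt(p * (1 - p) * (1 / n_pos + 1 / n_neg))

print(f"atopy: {p1:.0%} vs {p2:.0%}, z = {z:.1f}")
# -> atopy: 39% vs 7%, z ~ 5.0, consistent with atopy acting as a
#    predisposing factor rather than being irrelevant
```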
In 1975, Porter et al [56] published a retrospective study of sensitization in workers in a TDI manufacturing plant that had been in operation since 1956. The workforce exposed to TDI numbered about 200, remaining fairly constant throughout the study period, and the turnover during the 17 years of the study was about 100 workers. The investigators examined medical records of the workers to determine the relationship of clinical problems to TDI concentrations in the plant. Immunologic and lung function testing were performed on some workers.

From 1956 to 1974, 30 of 300 workers at risk in the plant were judged on the basis of medical examinations to have become sensitized to TDI [56]. At least six workers were hypersensitive to TDI on first exposure, reacting at concentrations below 5 ppb (35 µg/cu m); in other workers, sensitization developed as late as 14 years after initial exposure. The authors noted that, as individuals became more sensitized, they responded more quickly to TDI exposures and recovered more slowly after removal from exposure. Table III-1 shows the number of new cases of sensitization diagnosed each year in relation to the average air concentration of TDI. The data indicate that a dose-response relationship for sensitization may exist. It is also clear, however, that the incidence of sensitization decreased with time during the years before 1970, when there were no significant changes in average TDI concentrations. The authors attributed this not only to improved control of peak concentrations and increased employee understanding of TDI effects, but also to possible "hardening" of exposed workers.

It appears more likely that most potentially sensitizable workers became sensitized during their earlier years at the higher exposures. The authors [56] noted that sensitized workers were relocated out of the TDI handling area to other parts of the plant. These considerations preclude the assumption that 20 ppb can be regarded as a no-effect level for sensitization on the basis of this study. The low turnover rate implies that only an average of 5-6 workers each year were newly exposed to TDI, so that most of the workers exposed during the last 3 years of the study already had several years of exposure at higher average concentrations. Some workers became sensitized after 14 years of exposure, indicating that some of the sensitivity cases reported in about 1970 developed when the average TDI concentrations were about 60 ppb. It is unclear whether the authors used a weighting procedure in calculating the average concentrations or perhaps averaged all results. For this reason it is impossible to conclude what the TWA concentrations were in the plant, and thus impossible to ascertain a concentration sufficiently low to prevent sensitization.
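The distinction matters because an unweighted mean of spot samples and a time-weighted average (TWA) can differ substantially when brief high-concentration tasks are sampled alongside long stretches of routine work. The sketch below contrasts the two calculations; the sample values are invented for illustration.

```python
# Illustrative contrast between an unweighted mean of spot samples
# and a time-weighted average (TWA); the numbers are invented.
samples = [
    # (concentration in ppb, hours the sample represents)
    (5.0, 6.0),    # long stretch of routine operation
    (120.0, 0.5),  # brief high-concentration task
    (10.0, 1.5),
]

unweighted = sum(c for c, _ in samples) / len(samples)
twa = sum(c * h for c, h in samples) / sum(h for _, h in samples)

print(f"unweighted mean = {unweighted:.1f} ppb, 8-hour TWA = {twa:.1f} ppb")
# -> unweighted mean = 45.0 ppb, 8-hour TWA = 13.1 ppb: averaging
#    all results without weighting can badly misstate the exposure.
```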
Porter et al [56] also presented case studies and results of immunologic and pulmonary function testing for 32 of the workers in this plant; some of these workers had signs of respiratory illness, according to medical diagnosis, while others were asymptomatic. Sera from these workers were tested for the presence of antibodies [56] with the P-K test in monkeys and the PCA test in guinea pigs, using a test antigen of TDI conjugated with serum protein from the same animal species; this is the only immunologic study found on TDI that used antigens made with homologous proteins in these tests. The P-K test was intended to identify IgE antibodies, which the authors expected to be associated with hypersensitivity reactions, and the PCA was to identify circulating IgG antibodies, which they assumed to confer immunologic protection. The workers' sera were similarly tested with a common pollen antigen.

In the cases described, there was no correlation between the presence of IgE or IgG antibodies against TDI and either clinical symptoms, lung function, or reactivity to pollen antigens. For example, apparent sensitivity to TDI accompanied by loss of lung function was reported in a worker who had a positive PCA test with TDI antigen but not with pollen antigen and in one who gave a positive P-K test with pollen but showed no antibodies against TDI; another sensitized worker had positive PCA to both TDI and pollen but refused pulmonary function testing. Two workers with sensitization reactions to TDI had positive results with TDI antigen in the P-K test, but no loss of lung function. Workers who showed no signs of sensitization to TDI included some who gave negative results in all immunologic tests and others with positive reactions to TDI in the PCA, P-K test, or both.

These results do not support the authors' hypothesis regarding the roles of IgE and IgG antibodies in TDI sensitization. The authors attributed the loss of lung function, which was apparently independent of the presence of antibodies, to bronchoconstriction; the reactions of these individuals occurred almost immediately upon exposure and were relieved by treatment with a bronchodilator. In contrast, clinical reactions in the two workers with positive immunologic tests but no loss of lung function had developed gradually after several years of exposure. The findings of this study indicate that an immunologic mechanism may be involved in diisocyanate sensitivity but that sensitivity in some individuals may also result from a nonimmunologic mechanism.

As part of a long-term study of workers exposed to TDI, described in detail in Epidemiologic Studies, Butcher and associates [54] reported in 1976 the results of immunologic and inhalation challenge studies of 167 employees who worked in a factory producing TDI. Before TDI production began and at 6 and 18 months afterward, employees were prick-tested to determine their reactivity to a conjugate of TDI with human serum albumin (HSA) and to HSA alone. They were also prick-tested with 15 common inhalant allergens. Using blood taken from the workers at the same test intervals, the investigators determined eosinophil counts and immunoglobulin levels. To identify TDI-specific antibodies, sera were tested with the TDI-HSA antigen by radioimmunoassay tests, the PCA test on guinea pigs, and ... Workers were subdivided into groups with constant, intermittent, or no exposure [54].
On initial testing, four workers had positive skin reactions to both TDI-HSA and HSA alone; however, during the third plant visit 6 months later, three individuals reacted positively to TDI-HSA but not to HSA. The authors did not indicate the exposure groups of these persons, but they noted that none of the three showed clinical respiratory responses to TDI.

Both before and after TDI production began, PCA, P-K, and radioimmunoassay tests for TDI antibodies were negative in all subjects [54]. Eosinophil counts did not differ in exposed and unexposed groups. Immunoglobulin levels were similar in all three exposure groups. Six months after production began, both IgG and IgE had increased significantly over preexposure values; however, this increase was apparent in all groups, and the IgE increase was greatest in the unexposed group. The authors therefore concluded that the increase was not related to TDI exposure but probably reflected seasonal variation. However, the authors noted that the number of individuals tested was too small to indicate that antibody titer was proportional to exposure.

In a 1973 NIOSH health hazard evaluation, Vandervort and Lucas [61] investigated immunologic responses of 90 workers exposed to MDI at average concentrations of up to 11 ppb (110 µg/cu m) in a plant manufacturing fibrous glass tanks. PCA, P-K, and agglutination tests were carried out with a "specially prepared isocyanate antigen," not otherwise characterized. Of 12 men with positive P-K tests, 2 showed respiratory responses to MDI, and 1 had decreased pulmonary function; pulmonary function testing was recommended for 2 others to evaluate their status. The other seven showed no evidence of adverse reactions to MDI, and the authors considered them "hardened" to its effects. Forty workers who gave positive results only in the PCA or agglutination tests were also asymptomatic. It is possible, as the authors suggested, that certain workers giving positive tests for antibodies were immunologically "hardened" to the effects of MDI and that the circulating IgG antibodies that might be indicated by positive PCA tests were involved in conferring such immunologic protection. However, the inadequate characterization of the antigen used in testing makes it difficult to determine the validity of these results.

To evaluate whether the difficulty in detecting diisocyanate antibodies might be due to nonavailability of exposed hapten groups in the antigen, Karol et al [62], in a 1978 study, used a conjugate of p-tolyl isocyanate with human serum albumin (TMI-HSA) as a test antigen. Because it contained only one isocyanate group on each molecule, this monoisocyanate would not cross-link the protein component of the antigen, increasing the probability that the tolyl portion of the molecule would be sterically exposed. The authors tested 23 employees of a large TDI production facility, 4 of whom were considered sensitized to TDI. Three of these had had a sensitivity response, either bronchial or skin reaction, within 1 year before the study; the fourth had avoided exposure to TDI for at least 2 years.
The remaining 19 workers were considered unsensitized because they showed no adverse effects when exposed to TDI; in some cases, this judgment was confirmed by negative results in challenge tests with TDI at 20 ppb (140 µg/cu m).

A radioimmunoassay for IgE bound to TMI-HSA showed that the 19 unsensitized workers had antibody titers similar to those of 10 blood-bank donors [62]. However, the sensitized group showed a significantly elevated titer of anti-tolyl antibodies (P<0.01). The three workers who had had TDI reactions within the last year had antibody titers higher than any of the unsensitized or control individuals. Serum binding to the antigen was inhibited in the presence of nonisocyanate tolyl compounds, suggesting to the authors that the antibodies were tolyl-specific. There was no correlation between the tolyl-specific IgE antibodies and the levels of total IgE in the sera.

Incubating lymphocytes from sensitive and nonsensitive workers with TDI in the absence of isoproterenol did not affect cyclic AMP levels. There was a dose-dependent inhibition of isoproterenol-stimulated cyclic AMP levels in lymphocytes from both sensitive and nonsensitive subjects; there was no significant difference in the ability of cells from these two groups to exhibit cyclic AMP stimulation.

In the mecholyl challenge studies, 6 of the 10 clinically sensitive subjects showed a drop in FEV 1 of more than 20% within 1.5 minutes after a single inhalation of mecholyl [63]. Only 1 of the 10 nonsensitized subjects gave such a response, and this occurred 5 minutes after inhaling mecholyl four times.

This study [63] indicates that TDI is not a histamine releaser per se but that it does suppress stimulation of the beta-adrenergic system by isoproterenol. These results agree with those of a similar study by Van Ert and Battigelli [64] on the effects of TDI on histamine release in vitro. Butcher et al [63] concluded that their findings suggested that TDI may act as a beta-receptor blocking agent. This would produce increased reactivity to agents capable of causing bronchoconstriction, such as mecholyl. In a followup reported in two 1978 abstracts [65,66], blood testing after challenge exposures to TDI showed that histamine levels increased after a bronchial reaction, while complement components were not affected. In this study, all TDI reactors reacted positively to mecholyl challenge, and the authors [65] noted that kinetic studies had revealed a strong indication that cells from TDI reactors respond differently than those of nonreactors to the beta-adrenergic agonists isoproterenol and prostaglandin E, and to TDI added alone.

These studies [63-66] suggest that a pharmacologic mechanism is involved in respiratory sensitivity to TDI, but there is no indication whether mecholyl hyperreactivity is a preexisting factor or a result of TDI exposure.

# (c) Skin Effects

Some diisocyanates have been described as skin irritants [67,68], but there are few reports in the literature of skin effects from these compounds. Munn [35] has noted that, in several years of study, he has seen only two mild cases of skin irritation from diisocyanates and no cases of skin sensitization.
Bruckner et al [52] reported that 6 of 44 workers in a chemical plant experienced skin irritation attributed to exposure to unspecified diisocyanates. These reactions consisted of erythema only on areas of skin that were in actual contact with the diisocyanates. One worker who often had diisocyanates on his hands noted that his skin had become hard and smooth, so that he had difficulty in turning pages.

Possible skin sensitization to TDI was described in two of the studies discussed in the previous section. Nava et al [53] reported that a worker with eczematous dermatitis was 1 of 3 workers who reacted positively to TDI in a patch test, out of 182 workers tested. Karol et al [62] found tolyl-specific IgE antibodies in two workers who displayed immediate skin reactions when exposed to TDI, apparently without a bronchial response. These skin reactions were extensive and not confined to areas where TDI had contacted the skin.

Rothe [69], in 1976, described 20 cases of occupational skin disease in workers exposed to polyurethanes. Clinical examinations, observation of the course of the disease, reexposure tests, and skin tests were carried out. Standard and special tests using a variety of isocyanates, amines, and additives were used to determine the specific sensitivity of the workers.

Rothe [69] found 12 cases of contact eczema characterized by follicular papules in workers exposed to MDI or partially polymerized MDI. Ten of these constituted more than half the total number of workers who had come into contact with a polyurethane sealing compound at one plant, and two were from another plant. Several inspections showed that there was very close contact between the workers' skin and the sealing compound. Work clothes were often soaked with resin. The workers had positive skin test reactions to the isocyanate component of the sealing compound. Twenty-five unexposed persons with eczema had negative results. Five of seven workers with MDI allergies exhibited typical eczema reactions to diaminodiphenyl methane (MDA). Only one of these had had previous contact with MDA, which was not used at the plant.

Skin disease disappeared in all four persons sensitized to IPDI (isophorone diisocyanate) after exposure was stopped. Three of the investigators tested themselves with undiluted IPDI, and no reactions occurred within 4 days [69]. However, two of the three investigators developed follicular papules 10 days after testing. Sensitization in these investigators was confirmed in a later test with a 1% IPDI solution, which produced no reactions in six nonexposed subjects.

The other four patients with skin disease included two cases of eczema from TDI exposure, one case with exposure mainly to TDI but also to MDI, and one case of eczema probably related to exposure to triphenylmethane triisocyanate [69]. In all 20 cases there was a pattern of brief exposure to the isocyanate, often caused by spills, with subsequent development of eczema. In most cases, sensitization was confirmed by skin-testing with a dilute solution of the isocyanate suspected to be the agent.

# (d) Other Effects

Although most reports of diisocyanate toxicity have described effects on the respiratory tract or skin, some have noted other effects. These have included eye irritation, psychologic symptoms and CNS effects, and hematologic changes.
Most of these effects have occurred following mixed exposures to diisocyanates and other chemicals, and such effects cannot be clearly ascribed to the diisocyanate exposures.

Several studies have suggested that TDI, especially at very high exposure levels, may cause neurologic or CNS effects. In the first published report of occupational illness from TDI exposure, Fuchs and Valade [21] noted that insomnia was often the first complaint of affected workers, preceding any respiratory symptoms. They also mentioned that three patients had a decrease of the knee-jerk and Achilles reflexes. In one patient, who completely lacked these reflexes, the condition persisted for 2 months after he stopped working with TDI and then abruptly returned to normal. In the absence of other signs of exposure-related nervous disorders, the authors did not specifically implicate TDI as the cause of this condition.

A 1964 USSR study [70] investigated the effects of TDI on electrical activity in the human cerebral cortex. No experimental details were reported, but TDI was said to affect electroencephalographic (EEG) rhythms at a threshold concentration of 100 µg/cu m (14 ppb). This study was not included in the 1973 criteria document on TDI [37]. Little can be made of these results in the absence of any information on experimental methods, but the implication of CNS effects at such a low concentration suggests that such effects should be more carefully evaluated.

In 1965, a Canadian report [71] indicated that 12 of 24 maintenance workers developed respiratory symptoms after they had cleaned pipes and vessels contaminated with TDI. In addition, four of the workers developed psychologic problems, including anxiety neuroses, psychosomatic complaints, depression, and even paranoid tendencies. A year after exposure, they had not returned to work; some still complained of cough and difficulty in breathing, although their pulmonary function tests were normal. This report suggests the possibility that TDI produces CNS effects; cleaning processes, however, involve the use of solvents to which these CNS effects might be attributed. This report did not detail the procedures or solvents used in cleaning the TDI-contaminated vessels.

Burton [72], reviewing Ontario workmen's compensation claims in 1972, mentioned an incident of TDI exposure in a rubber plant. One of three women employees who developed chronic obstructive lung disease after an acute exposure to TDI also had a "psychogenic problem," not otherwise described. Nevertheless, it is not unequivocal that the effects reported were caused by TDI.

In a 1962 report, Filatova et al [75] described the effects of mixed exposures to TDI, chlorobenzene, phosgene, toluene diamine, and HDI on 63 men and 17 women who had manufactured diisocyanates for 1-2 years. These effects included irritation of the eyes, nose, and skin, coughing, difficulty in breathing, headaches, insomnia, weakness, tremors, reflex changes, and chest and abdominal pain. Hematologic tests showed decreases in eosinophils and neutrophils, and some workers had slightly enlarged livers with no functional impairment.
The authors concluded that the substances produced during diisocyanate production were toxic, but they could not attribute the symptoms to TDI alone, since other compounds that were present could have produced similar effects. Only the HDI concentrations were said to be in excess of the MAC.

In another study, Filatova et al [76] reported on workers exposed mainly to HDI. Thirty-two workers complained of headaches, 36 of increased perspiration, 20 of aches in the area of the heart and under the right ribs, 13 of dream disturbances, 12 of difficulty in breathing, 19 of general weakness, and 6 of coughing [76]. All workers reported that HDI vapor irritated their eyes and upper respiratory tract. Nineteen workers, who had worked in the plant for 7-13 years, had developed slightly enlarged livers that were painful upon palpation. Duodenal sampling and blood bilirubin and cholesterol analyses revealed no hepatic lesions. Most of the 55 workers examined for liver abnormalities showed hypocholesteremia, indicating to the authors an early stage of disturbance of liver function. Most workers also showed abnormalities in blood proteins and serum cholinesterase activity.

Approximately 50% of the examined workers had developed chronic subatrophic pharyngitis without any pathologic changes in the lungs [76]. Effects on the cardiovascular system were seen in 47 workers, 27-40 years old, more than half of whom had sinus arrhythmia, bradycardia, extrasystole, and slowing of endoatrial conductivity indicative of toxic myocardiodystrophy. Some workers had tremors of the fingers and eyelids and increased muscular excitability.

Filatova et al [76] concluded that the adverse effects on workers' health were produced by a mixture of toxic compounds whose main component was HDI. No other reports of hepatotoxicity or cardiovascular effects in diisocyanate workers have been found. It should be noted that chlorobenzene is a hepatotoxin that has reportedly caused hepatic necrosis in animals at high doses [77] and produced an increase in liver weight in rats inhaling 1,150 mg/cu m for 6 months [78].

# Epidemiologic Studies

Studies of worker populations exposed to TDI have related environmental exposure levels to the incidence and severity of respiratory symptoms, changes in pulmonary function, and immunologic reactivity. Investigations of workers exposed to MDI and HDI have generally provided less useful data because they involved mixed exposures to several other toxic chemicals.

In 1957, Hama et al [79] reported that 12 workers exposed to isocyanates (TDI) at 30-70 ppb (210-500 µg/cu m) for 1 week in an automobile plant had mild to severe respiratory symptoms including cold symptoms, continuous coughing, sore throat, dyspnea, fatigue, and nocturnal sweating. No symptoms had developed during the previous month when isocyanate concentrations were below 10 ppb (70 µg/cu m), and when concentrations were subsequently reduced to the 10-30 ppb range (70-210 µg/cu m), no further complaints occurred in over 3 months. A written communication from Hama (June 1973) confirmed that the isocyanate was TDI and indicated that exposure concentration measurements were based on breathing-zone samples analyzed by the Ranta method. This method is unable to distinguish between TDI and the TDI urea formed in the presence of water. Thus, the concentrations of TDI in the area were probably less than the reported values.
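The paired ppb and µg/cu m figures quoted throughout this chapter follow from the standard vapor conversion at 25 C and 760 mmHg, where one mole of gas occupies 24.45 liters. A minimal sketch in Python, assuming a molecular weight of 174.2 g/mole for TDI (the document's figures are rounded):

    # Interconvert ppb and ug/cu m for a vapor at 25 C and 760 mmHg.
    MOLAR_VOLUME_L = 24.45   # liters/mole of ideal gas at 25 C, 760 mmHg
    MW_TDI = 174.2           # g/mole, toluene diisocyanate

    def ppb_to_ug_per_cu_m(ppb, mol_wt):
        return ppb * mol_wt / MOLAR_VOLUME_L

    def ug_per_cu_m_to_ppb(ug, mol_wt):
        return ug * MOLAR_VOLUME_L / mol_wt

    # 10 ppb of TDI is about 71 ug/cu m, matching the rounded
    # 70 ug/cu m cited above.
    print(round(ppb_to_ug_per_cu_m(10, MW_TDI)))

The same arithmetic with the molecular weights of MDI and HDI reproduces, to within rounding, the other paired values quoted in this chapter.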
A detailed 2.5-year study by Walworth and Virchow [31] of a polyurethane foam plant was published in 1959. TDI concentrations ranged as high as 300 ppb (2,200 µg/cu m), but monthly averages were generally below 150 ppb (1,100 µg/cu m). Eighty-three cases of respiratory illness that required medical attention were attributed to TDI exposure; most of them occurred after 3-4 weeks of exposure. The total number of workers at risk was not reported. The authors noted that there was little correlation between measured TDI concentrations and the appearance of respiratory symptoms. They attributed this largely to short exposures at high concentrations not reflected in the measurements of average exposures. They added that once workers experienced adverse effects from TDI they could not tolerate even minute exposures.

In 1964, toxic effects from TDI in workers in three New Zealand plants were reported [80]. At one plant, where usual TDI concentrations ranged from 3 to 120 ppb (20-850 µg/cu m), three cases of respiratory sensitization occurred in 1 year. In two of these workers, symptoms first appeared after 2-3 hours of pouring TDI inside a refrigerated van, where unusually high concentrations were likely. The third worker, whose symptoms developed gradually, could work 50-60 feet away from the foaming operation, where TDI concentrations were about 5 ppb (35 µg/cu m), but he had a respiratory reaction when he worked within the foaming area. In a similar plant, where TDI concentrations were usually below 20 ppb (140 µg/cu m), there were two cases of mild cold symptoms and one case of possible sensitization, all associated with a foaming operation in which concentrations reached 100 ppb (700 µg/cu m). This plant also reported one case of a severe asthmatic attack and collapse in a worker exposed at a very high concentration. He subsequently returned to work with no evidence of sensitization. In the third plant, two workers exposed to TDI at 18 ppb (130 µg/cu m) wearing canister-type masks experienced very mild cold symptoms at the end of the day when a double run was carried out. The total workforce at risk in these plants was not reported.

In 1962, Elkins et al [32] described experiences with TDI in 15 Massachusetts plants over a 5-year period. They evaluated the cases of respiratory illness occurring in each of the plants and made environmental measurements, apparently from area samples. Most of the samples were analyzed by the Marcali method. The Ranta method was used for some of the early measurements and found to be less accurate, but the authors did not indicate which measurements were made by this method. Other methods used in a few plants reportedly gave results comparable to the Marcali method. The findings of Elkins and coworkers, as adapted by NIOSH to present what were considered to be relevant dose-response data, were summarized in the 1973 TDI criteria document [37], and are shown in Table III-2. This table omits data from plants where environmental levels were not determined or where the authors considered that these measurements were not representative of exposure.
The numbers given for workers at risk are probably somewhat higher than the actual numbers exposed to TDI, which could not be determined from the paper. Elkins et al [32] found a number of established cases and 73 questionable cases of respiratory illness associated with TDI exposure. Concentrations higher than 20 ppb (140 µg/cu m) were measured in only three plants. From the data in Table III-2, it can be seen that cases of respiratory illness were associated with all exposure concentrations above 10 ppb (70 µg/cu m), but there were no cases at 7 ppb (50 µg/cu m) or lower. At 9 ppb there were no established cases but one questionable one; there were several established cases at 8 ppb. The authors concluded that the environmental limit for TDI should be considerably less than 100 ppb (700 µg/cu m), and they suggested that 10 ppb (70 µg/cu m) was "not an unreasonable limit."

Eight men who had a positive reaction to histamine had a larger daily decrease in FEV 1 than did nonreactors (0.310 vs 0.115 liter). Smoking status was not significantly related to the changes in FEV 1. It is therefore difficult to evaluate the significance of the changes reported.

The following year, Williamson [17] reported the results of pulmonary function testing over a 14-month period on 15 workers in an operation where TDI was separated from a solvent by distillation. Frequent environmental measurements were made, and these never showed TDI concentrations above 20 ppb (140 µg/cu m), but average concentrations were not given. One major spill occurred during the study, causing concentrations high enough to permit detection of odor, from which the author inferred that the concentration was at least 200 ppb, and the room was immediately cleared. All the workers tested were free of respiratory symptoms [17]. In four series of measurements of FVC and FEV 1, the only significant change was a fall in FEV 1 at the time of the second measurement (P<0.01), and subsequent tests showed no significant change from baseline FEV 1 values. There was little difference between Monday and Friday values; daily changes were not measured.

No significant difference in respiratory symptoms was found between 76 currently employed men exposed to TDI and controls. Nine of 76 men in the control group had wheezing, compared with only 1 of 76 men exposed to TDI. In the second part of the study, Adams [83] examined men who had been removed from the TDI plants because of respiratory symptoms such as mild to severe bronchospasm and dyspnea. About 15% of the men employed in the TDI plant were removed from the plants in their 1st year because they developed respiratory symptoms. In the 2nd year of employment, only 3.5% of the remaining workers developed respiratory symptoms, and the rate gradually dropped to less than 2%/year after the 5th year, totaling about 20% of the original workforce over the 9 years of the study. Information on symptoms in 46 men removed from the plant, who had not been exposed to TDI for 2-11 years, was collected annually by respiratory questionnaire and compared with responses from 46 age-matched workers not exposed to TDI. These results were correlated with the results of pulmonary function tests. The data were analyzed for statistical significance by chi-square test.
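The chi-square analysis mentioned here is a standard 2x2 contingency comparison. A minimal sketch (using SciPy, which is of course not what the authors used) checks it against the breathlessness and wheezing counts reported in the next paragraph:

    from scipy.stats import chi2_contingency

    # Rows: exposed (n=46), controls (n=46); columns: with symptom,
    # without symptom. Counts are those given in the following paragraph.
    tables = {
        "breathlessness": [[17, 29], [5, 41]],
        "wheezing":       [[17, 29], [7, 39]],
    }

    for symptom, table in tables.items():
        chi2, p, dof, expected = chi2_contingency(table)
        print(f"{symptom}: chi2 = {chi2:.2f}, p = {p:.3f}")
    # Yields p < 0.01 for breathlessness and p < 0.05 for wheezing,
    # consistent with the significance levels reported below.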
Data from 46 controls and 46 men previously exposed to TDI showed no differences in their smoking habits [83]. However, 17 of the 46 workers previously exposed to TDI developed breathlessness after exertion, significantly more than the 5 men in the control group with this symptom (P<0.01). Wheezing occurred in 17 workers but only in 7 controls (P<0.05). These findings indicated that respiratory symptoms persisted in some subjects after exposure to TDI had ceased.

Pulmonary function data for 61 men who had had no contact with TDI for 2-11 years showed that their average FVC and FEV 1 values were slightly lower than control values after adjustment for age and height [83]. Eleven of the 20 workers who had been removed from the plants because of sensitization to TDI and whose preemployment lung function records were available were asymptomatic after 3-8 years without exposure, and 12 of these 20 had FEV 1 and FVC values unchanged from their preemployment levels. Six had FEV 1 and FVC values between 90 and 100% of their preemployment levels, and two had values of 80-90%. Those who had reduced pulmonary function complained of dyspnea on exertion, nocturnal dyspnea, and tightness in the chest.

Adams [83] concluded that exposure to TDI at about 20 ppb (140 µg/cu m) for 5 years did not increase respiratory symptoms or affect the lung function of workers who were not sensitized to the compound. However, sensitized workers, even when no longer exposed to TDI, had more respiratory symptoms than did unexposed controls, suggesting that effects of TDI are, to some extent, irreversible.

In a longitudinal study of workers at a polyurethane foam plant, Peters et al [84-87] analyzed area samples, apparently collected at 6-month intervals, by the Marcali method for environmental measurements. The initial study [84], made during December 1966, included 38 workers, 7 of them women, with an average age of 36.3 years (range 18-62 years), employed an average of 104.6 weeks (2-624 weeks). Environmental measurements taken during this period showed TDI concentrations ranging from 0.1 to 3.0 ppb (0.7-21 µg/cu m). Pulmonary function measurements on 34 workers showed a mean daily decrease in FEV 1 of 0.19 liter (P<0.001). Significant daily decreases were also noted in FVC (P<0.001), PFR (P<0.05), FR50% (P<0.01), and FR25% (P<0.05). From Monday morning to Friday morning, the mean FEV 1, FR50%, and FR25% all showed significant decreases (P<0.001). Responses for smokers and nonsmokers were similar, but workers with respiratory symptoms had a significantly greater decrease in FEV 1 than those without symptoms (P<0.05). The authors noted that there appeared to be no relationship between pulmonary function changes and amount of exposure, which they judged from the distance between work stations and sources of TDI.

At the 6-month followup [85], 28 of the 34 workers were still employed, and 6 new workers were added to the study group. Environmental concentrations at that time ranged from undetectable to a high of 12.0 ppb (85 µg/cu m) in the TDI pouring area. Monday preshift and postshift measurements of pulmonary function showed significant decreases (P<0.02) in both FVC and FEV 1; Tuesday morning tests showed essentially complete recovery in FVC, but FEV 1 values were still significantly lower than on the previous morning.
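The diurnal decrements reported in these studies amount to paired comparisons of preshift and postshift measurements on the same workers. A minimal sketch of such a test, with invented FEV 1 values (the raw measurements were not published):

    import numpy as np
    from scipy.stats import ttest_rel

    # Hypothetical preshift and postshift FEV 1 values (liters) for a
    # small group of workers; only the form of the analysis is real.
    preshift  = np.array([3.8, 3.2, 4.1, 2.9, 3.5, 3.0])
    postshift = np.array([3.6, 3.0, 3.9, 2.8, 3.3, 2.8])

    t, p = ttest_rel(preshift, postshift)
    diurnal = np.mean(preshift - postshift)
    print(f"mean diurnal decrease = {diurnal:.2f} liter, p = {p:.4f}")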
When pulmonary function test results were compared with those from tests done 6 months earlier, significant decreases were found in FEV 1, the ratio FEV 1/FVC, and FR values at 75, 50, 25, and 10% of vital capacity [86]. The authors noted that there was a high correlation (r=0.72) between the 1-day and 6-month decreases in FEV 1. The only other variable significantly correlated with pulmonary function test results was lifetime smoking history, and when this factor was held constant, the 6-month changes in FEV 1 were still significantly correlated with diurnal changes (r=0.60).

The 12-month followup, made in December 1967, showed a much lower diurnal decrease in FEV 1, 0.05 liter [87]. In the 25 workers still available from the original 34, the decrease in FEV 1 over 1 year was still significant, but the entire decrease was accounted for by changes during the first 6 months. The authors noted that TDI concentrations measured at this time were very low; the maximum concentration detected was only 1.5 ppb (11 µg/cu m).

Initial measurements showed a dose-related diurnal decrease in FEV 1 in the three groups [89]. At the 2-year followup [90], only 63 members of the original workforce were still employed. Examination of records showed that 40 of those no longer employed had resigned voluntarily and that these workers had shown a diurnal decrease in FEV 1 of 0.126 liter at the earlier testing, compared with 0.096 liter in those who were still employed. While this difference was not significant, the authors noted that it reflected a trend for self-selection based on health among TDI workers.

In general, work assignments had been stable over the 2 years, with workers averaging 20 months at a work station; workers were therefore assigned to exposure groups on the basis of their usual work station [90]. Since 5 workers had variable exposures and could not be assigned to any group, final testing was performed on 57 workers; 20 of these were in each of the high and low exposure groups, and 17 were in the medium exposure group. The incidence of coughing and phlegm production increased with higher exposure; 15% of the 57-person study group had symptoms suggestive of chronic bronchitis, but these were not related to exposure level. The 2-year decrease in FEV 1 averaged 0.102 liter (SD = 0.204 liter) in the exposed workers; the groups with low, medium, and high exposure had respective decreases of 0.012, 0.085, and 0.205 liter (SD = 0.204, 0.177, and 0.185 liter). The authors noted that the decrease in the high-exposure group was "clearly excessive," while that in the low-exposure group was "clearly within normal limits." The authors' analysis of variance showed the difference in 2-year decrement in FEV 1 in the three groups to be significant at P<0.01. Age, length of employment, and smoking habits did not differ significantly in the three groups. Since several factors that affect lung size, including sex, height, and race, differed among the groups, the authors standardized for lung size by dividing the 2-year decrease by the initial FEV 1 measurement; this standardized figure still showed a significant difference between exposure groups.
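The standardization step just described is a simple division of each group's 2-year FEV 1 decrement by the initial FEV 1. A minimal sketch, using the group-mean decreases quoted above but hypothetical initial FEV 1 values:

    # Standardize 2-year FEV 1 decrements for lung size, as described
    # above, by dividing each decrement by the initial FEV 1.
    # The decreases are from the text; the initial values are hypothetical.
    groups = [
        {"group": "low",    "initial": 4.2, "decrease": 0.012},
        {"group": "medium", "initial": 3.6, "decrease": 0.085},
        {"group": "high",   "initial": 3.9, "decrease": 0.205},
    ]

    for g in groups:
        standardized = g["decrease"] / g["initial"]
        print(f'{g["group"]:>6}: {standardized:.4f} of initial FEV 1 lost')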
There was no significant difference in mean FEV 1 between workers exposed for less than 3 years and those exposed 10-25 years, although the mean for smokers was significantly different from that for nonsmokers. Laboratory tests indicated there were no alterations of peripheral blood values, the hematopoietic system, or kidney function.

Weill et al [57,59,92] and Butcher et al [54,58,63] have reported on the first 5 years of a longitudinal study of respiratory symptoms, pulmonary function, and immune responses in workers at a TDI-manufacturing plant. The study was initiated in April 1973, before TDI production began at the plant, and is planned to extend through 1978.

In the 1978 annual report on this study, Weill et al [59] noted that only 88 of the original 166 workers were still participating. To offset attrition, workers had been added during the first 3 years of the study, so that some data were available on a total of 277 workers. The original exposure groups were no longer considered valid because of workers transferring from one exposure category to another. Personal monitoring data collected since 1975 were therefore used to estimate cumulative exposures in ppm-months for each worker. Mean TWA exposures were calculated for each of six job categories, ranging from 2 to 6 ppb (14-40 µg/cu m). TDI concentrations for jobs assigned to the control group were found to be below the limit of detectability of the method (reported as 1.5 ppb) more than 99% of the time, and the author assigned these jobs a mean TWA concentration of 0 ppb. For each worker, time spent in each job category was multiplied by the mean TWA concentration for that job, and the results were summed to determine cumulative exposures.
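The cumulative exposure metric just described reduces to a sum of products. A minimal sketch, in which the job names and sample work history are hypothetical while the 0-6 ppb range of mean TWA values follows the text:

    # Cumulative TDI exposure in ppm-months: months in each job category
    # times that category's mean TWA concentration, summed over the
    # worker's history. Job names and the sample history are hypothetical.
    MEAN_TWA_PPM = {
        "synthesis": 0.006,   # 6 ppb
        "finishing": 0.004,   # 4 ppb
        "control":   0.0,     # below the 1.5-ppb detection limit
    }

    def cumulative_exposure(history):
        """history: list of (job_category, months) pairs for one worker."""
        return sum(months * MEAN_TWA_PPM[job] for job, months in history)

    worker = [("finishing", 18), ("synthesis", 24), ("control", 6)]
    print(f"{cumulative_exposure(worker):.3f} ppm-months")   # 0.216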
Lung function test results were statistically correlated with these cumulative exposures in cross-sectional and longitudinal analyses [ ]. When results were analyzed by smoking and atopy categories, most of the increase in bronchitis was accounted for by nonatopic smokers in the exposed group. There was no significant difference between continuously and intermittently exposed groups, but correlations with cumulative exposures were not made.

This study [54,57-59,63,92] is the only study available on TDI workers that provides preexposure data for all workers. In addition, because of the use of continuous personal monitoring, it provides realistic information on actual exposures.

In a 1973 NIOSH health hazard evaluation, Vandervort and Shama [93] investigated respiratory symptoms and acute lung function changes in workers exposed to TDI at low concentrations at a plant making polyurethane foam ice chests and picnic jugs. During a preliminary visit, air samples were collected and analyzed for TDI by the modified Marcali method of Grim and Linch [94]. A questionnaire to identify histories of respiratory symptoms was administered to all 290 employees of the plant, about 200 of them exposed to TDI. The authors did not indicate the total number of workers with respiratory symptoms or describe their exposures to TDI. Twenty-nine of the 200 exposed workers were selected for further study; 13 of these were experiencing respiratory symptoms, as indicated in responses to the questionnaire, and 16 were asymptomatic.

These workers were subdivided into moderate and low exposure groups on the basis of environmental measurements made at the time of the initial visit. The four workers making up the symptomatic low-exposure group were among 14 sensitized workers in the plant who had been transferred away from the immediate area of the foaming operation because of intolerance to TDI. Seven unexposed control employees, matched to the study group for age, sex, and smoking habits, were selected as controls.

Two weeks after the initial visit, the investigators [93] performed preshift and postshift pulmonary function testing on the exposed and control workers selected for the study. The TWA exposure concentration of each employee was determined for the shift from breathing-zone samples. Short questionnaires were administered before and after the monitored shift and again the next morning to determine whether the employees were experiencing symptoms.

Results of pulmonary function testing showed no significant difference between morning and evening testing except in the symptomatic low-exposure group of four sensitized workers who had been transferred out of the foaming area; this group also showed significantly greater decreases in FVC and FEV 1 than did the controls [93]. The individual with the greatest decrease, who had never smoked, was exposed at a concentration of only 0.2 µg/cu m and thus was highly sensitive to TDI.

In the asymptomatic groups with both moderate and low exposure, all but two of the workers reported mild irritation of the mucous membranes, and three had respiratory symptoms such as coughing or chest tightness [93]. All 13 workers in the symptomatic groups reported coughing, chest tightness, wheezing, or shortness of breath. There was a considerable increase over preshift findings in the number of symptoms reported at the end of the shift in both the moderate- and low-exposure symptomatic groups and some increase in the asymptomatic groups. However, the investigators noted that it could not be assumed that workers had become sensitized at these low levels. Nine of the 13 symptomatic employees had been exposed to spills of TDI in the past, and 8 of these 9 developed symptoms at the time of the spill, so sensitization may have developed as a result of these exposures. The authors could not determine whether this was the first occasion on which they showed symptoms.

Other studies of populations of workers exposed to MDI have provided no quantitative information on exposure concentrations, but they do indicate that there is a relationship between adverse effects and exposure levels or duration of exposure. For example, in 1971, Tanser et al [97] examined the effects of MDI exposure on 57 employees in a factory producing rigid polyurethane foam moldings. Fourteen of the 57 workers reported that any contact with MDI vapor produced effects ranging from a sore throat and wheezing to severe asthma and tightness in the chest. Spirometric analysis showed that 8 of the 57 employees had an FVC of less than 90% of the predicted value or an FEV 1/FVC ratio below 75%; only 2 of these 8 reported symptoms of sensitivity to MDI.

The authors [97] reported that most of the symptoms appeared to be those of direct irritation and not of an allergic reaction.
However, four workers who had contact with MDI were diagnosed as having possible hypersensitivity; three of these had severe asthma, and the fourth developed fever, headaches, aching limbs, and cough following exposure.

The 1976 studies of Saia et al [98] and Fabbri et al [99] explored the relationship between exposure to MDI and chronic nonspecific lung disease in workers in an Italian refrigerator factory. The total exposed workforce of 180 comprised 94 furnace workers (who removed polyurethane molds from the furnace and were estimated to have the highest exposures), 32 injectors, and 54 assembly line workers. The groups were similar in average age and length of employment.

Responses to a questionnaire indicated that 85 of the workers in the plant had respiratory symptoms [98]. The prevalence of these symptoms was least in workers with the shortest exposure and greatest in those exposed more than 8 years; the average age in all three groups was 37-38 years. Pulmonary function studies showed that about half of the 180 workers had vital capacity and FEV 1 measurements below 90% of predicted values, and 15-20% had values below 80% of predicted [99]. The 85 workers with respiratory symptoms had pulmonary function measurements significantly lower than the average for the 180 employees. These measurements decreased with length of exposure even when adjusted for smoking.

Results were also analyzed by job function in 160 workers who had no history of previous occupational exposure to respiratory irritants [98,99]. Furnace workers had significantly lower pulmonary function values and a greater prevalence of respiratory symptoms than workers in other jobs.

Exposure data were not reported and control groups were not used in these studies [98,99], severely limiting their usefulness. The authors did not reveal the source of data on predicted pulmonary function values, so it is impossible to determine the relevance of these data to the worker population studied.

Only one study, a 1975 NIOSH health hazard evaluation by Hervin and Thoburn [100], has been found on workers exposed to HDI. These workers, 18 spray painters in an airplane repair facility, were exposed to HDI at up to 300 µg/cu m (40 ppb); they were also exposed to trimeric biuret compounds of HDI at up to 3,800 µg/cu m and to a variety of organic solvents at concentrations above the Federal standards. Pulmonary function measurements in spray painters and the decrements in these measurements over the workshift did not differ significantly from values in 40 controls who worked during shifts when spray painting was never performed. All the spray painters, who wore respirators but no eye protective devices, complained of eye irritation while painting, and about half complained of nose and throat irritation, cough, and chest discomfort. The authors mentioned that the respirator program was deficient in many respects. This report suggests that HDI produces symptoms similar to those from TDI and MDI. However, it does not provide any indication of the concentrations of HDI that produce irritation, since there was simultaneous exposure to organic solvents and to trimeric HDI at relatively high concentrations.
# Animal Toxicity

All the diisocyanates that have been studied caused irritation when applied directly to the skin of rabbits or instilled into their eyes. Their potentials as skin and eye irritants, determined from these studies, are summarized in Table XI.

Microscopic examinations of tissue sections showed tracheitis and bronchitis with sloughing of the superficial epithelium in animals exposed to TDI at 2 ppm and killed by the 4th day after exposure. Lungs of animals killed 7 or more days after exposure did not differ significantly from those of controls, suggesting that the effects were reversible. In animals exposed at 5 or 10 ppm, damage was more severe and long-lasting. There were areas of coagulation necrosis of the superficial epithelium surrounded by inflammatory cells, and at points of deep ulceration, connective tissue had developed. Bronchopneumonia developed in all species except mice. Since the animals were exposed only once, this lung damage was the result of irritation rather than an allergic reaction.

In another 1962 study, Henschler et al [38] exposed rats and guinea pigs to TDI repeatedly at concentrations of 0.1-10 ppm (0.7-70 mg/cu m). In rats, three 4-hour exposures at 10 ppm were lethal for all animals; four exposures at 5 ppm or 10 exposures at 1 ppm were lethal for most rats. At 0.5 ppm, adult rats could withstand 24 exposures, but this exposure regimen killed about half the young rats exposed. Most deaths were due to severe peribronchitis and bronchial pneumonia. In surviving animals, lung changes were reversible within several months. Rats exposed at 0.1 ppm for 40 exposures had no changes in the lungs that were attributable to TDI exposure, but they did gain less weight than controls. In guinea pigs, these authors were unable to find any evidence of sensitization to TDI after 48 exposures at 0.5 ppm, which was lethal to most of the animals. These results were qualitatively similar to those reported by Zapp [36] 5 years earlier, but Henschler et al ...

In 1965, Niewenhuis et al [106] described the effects on animals of repeated exposure to TDI at a low concentration. They exposed rats, rabbits, and guinea pigs to TDI at 0.1 ppm (0.7 mg/cu m), 6 hours/day for either 38 consecutive days or 5 days/week for 58 exposures. Chamber concentrations were measured by the Marcali method.

Lung damage in these animals generally increased in severity for several days after exposure ended [106]. A rabbit examined immediately after exposure had essentially normal lungs, but animals killed 3-10 days later had bronchopneumonia, bronchitis, perivasculitis, and lung abscesses. A rabbit killed after 20 days had only chronic bronchitis. Rats killed immediately had less inflammation than those killed later, but fibrous tissue had proliferated in the walls of the bronchioles in several rats. At 3-24 days after exposure, inflammation was marked, and animals had bronchopneumonia, extensive fibrous tissue proliferation, and polypoid hyperplasia of the epithelium. All control rats had bronchiectasis, which the authors attributed to chronic murine pneumonia. In guinea pigs, there were localized accumulations of lymphocytes, macrophages, and plasma cells throughout the lungs and varying degrees of pneumonitis and bronchopneumonia.
No abnormalities of the heart, liver, kidneys, lymph nodes, or spleen were found in any of the animals.

Rats exposed at the highest concentration gained significantly less weight than those at the lowest concentration (P<0.05). No significant differences between exposure groups were found in blood composition, liver function, urinalysis, or kidney function, and no damage to any organ was observed in macroscopic examinations. However, there was an increased lung-to-body weight ratio in the high-exposure group. Animals exposed at 1,370 µg/cu m had significantly lower liver and spleen weights than those exposed at 250 µg/cu m. The author did not suggest an interpretation of these differences.

Results of their 2-hour LC50 studies in mice showed that HDI was 2.3 times as toxic as CHI. The threshold concentration for influence on the CNS in mice was 1 mg/cu m for HDI and 10 mg/cu m for CHI, although the threshold concentrations for respiratory irritation were similar: 2.9 mg/cu m for HDI and 4.5 mg/cu m for CHI. Exposure to CHI caused only a nonsignificant decrease in weight gain. According to the authors [5], adding chlorine to the molecule of an organic compound would be expected to increase the toxicity of the compound. Yet the results of this series of experiments showed that HDI was substantially more toxic than CHI. The fact that the lethality of HDI was dose dependent may indicate that the compound is absorbed systemically, while the effects of CHI appear to result only from local irritation of the respiratory tract.

Kondratyev and Mustayev [107] demonstrated skin-sensitizing effects of HDI in experimental animals in 1974. Guinea pigs were sensitized by application of HDI in 50% solution in acetone to the skin for 2 days in a row. An initial irritant effect in the form of hyperemia, edema, and itching was observed at the sites of application. After 21 days, the degree of sensitization was determined by applying HDI in various concentrations to previously unexposed skin. A specific allergic reaction was seen in most animals at concentrations as much as 40 times less than the previously determined threshold dose of 50% for skin irritation. The epicutaneous sensitization observed was also accompanied by changes in the blood-serum protein fractions. This study suggests that skin contact with HDI in the workplace could lead to allergic dermatitis.

Kimmerle [104] found that IPDI produced moderate skin sensitization in guinea pigs. His experimental methods were not described but were said to follow the recommendations of the Food and Drug Administration. IPDI, administered intradermally, produced a larger area of swelling on reinjection than it had in an earlier injection in all 15 guinea pigs tested.

Animal experimentation has been used in several studies to investigate the mechanism of sensitization to diisocyanates. In 1964, Scheel et al [51] investigated the immunologic aspects of TDI sensitization. The authors produced TDI antigens by conjugating TDI with egg albumin; they attempted to characterize the antigen, and their method of preparation, with modifications, became the standard for many subsequent immunologic studies on TDI.
TDI-specific antibodies were demonstrated in rabbits exposed to TDI by inhalation at 100 ppb (700 µg/cu m) 6 days/week for 2-4 weeks. When a purified protein derivative of the tubercle bacillus was injected during TDI inhalation, a skin sensitivity response to TDI could also be demonstrated; animals so treated reacted to 0.001 mg of TDI applied to the skin, while unsensitized animals reacted only to 0.2 mg. When the proportion of TDI in the antigen was increased, the antigenicity of the protein was masked so that it would not react with antibodies to egg albumin. This demonstrated that the circulating antibodies contained a reacting group specific for the TDI hapten.

Thompson and Scheel [108], in 1968, investigated the effects of TDI on rats pretreated with alloxan to suppress anaphylaxis or with insulin and pertussis vaccine to enhance the responses. Rats were exposed to TDI at 1 ppm (7 mg/cu m) for 10 hours. Although the authors found that pretreatment altered the effects of TDI exposure on the lungs in the predicted direction, they concluded that the mechanism of lung damage was not immunologic. This interpretation was based on their inability to elicit a reaction to cutaneous or intravenous challenge and the fact that reexposure to TDI produced less response than the original exposure. In addition, microscopic findings indicated that the lung effects produced were consistent with chemical damage rather than an immunologic process and that they occurred primarily in the first few days after exposure.

In 1970, Stevens and Palmer [109] studied sensitization in guinea pigs and rhesus monkeys exposed to TDI at 0.01-5 ppm for three 6-hour periods. Three weeks later, these animals and previously unexposed animals were exposed to TDI at 20 ppb (140 µg/cu m). Breathing patterns of the animals were measured by plethysmography to detect changes indicative of respiratory sensitivity. Patch tests showed skin sensitization to TDI, but serologic tests for sensitization were negative. Guinea pigs preexposed to TDI at 0.5 ppm did not show measurable respiratory changes, suggesting that a threshold for sensitization existed between 0.5 and 2.0 ppm. There was no evidence of sensitization in monkeys after reexposure, and there were no serologic changes indicative of sensitization. The authors concluded that exposure to large amounts of TDI may produce sensitivity to TDI in lower concentrations, but that this sensitivity might not involve an allergic mechanism. However, they noted that the difficulty in preparing a suitable antigenic system made it impossible to determine whether an immunologic mechanism was involved.

Karol et al [110], in 1978, were able to demonstrate the production of serum antibodies specific for the tolyl portion of an isocyanate molecule. They exposed guinea pigs by inhalation to a conjugate of the monofunctional p-tolyl isocyanate with egg albumin (EA). This antigen induced a respiratory response in the animals beginning about the 8th day of exposure, and serum antibodies were detectable by gel diffusion and immunoelectrophoresis by the 14th day.
The authors concluded that the antibodies were hapten-specific, since p-tolyl isocyanate that was bound to another protein carrier, such as bovine serum albumin, elicited both respiratory reactions and serum antibody responses in animals previously sensitized to the isocyanate-EA antigen. In addition, sensitivity to the EA carrier in the conjugate was not produced, suggesting to the authors that the conjugate contained sufficient isocyanate molecules to effectively shield antigenic determinants in the protein molecule. In a subsequent study, which has been described in Effects on Humans, Karol and her colleagues [62] used this antigen to demonstrate IgE antibodies in the sera of workers who had sensitivity reactions to TDI.

The compounds were tested on Salmonella typhimurium strains TA1535, TA1537, TA1538, TA98, and TA100, with and without a mammalian liver microsome activating system. MDI was mutagenic in strains TA98 and TA100 in the presence of the liver activating system. The other diisocyanates tested did not show mutagenic activity. Details of the experimental procedure and quantitative results were not provided.

# Correlation of Exposure and Effect

In the early days of the industry, a large proportion of the workforce exposed to TDI developed respiratory illnesses. The adverse effects of TDI on the lungs may result from direct irritation caused by exposure at relatively high concentrations. An experiment on volunteers [38] showed that one of six subjects experienced irritation of the nose and throat during a 10-minute exposure at 710 µg/cu m and that all experienced it at 3,600 µg/cu m; however, these subjects did not report chest symptoms. In an automobile plant, all 12 workers exposed to TDI developed severe respiratory symptoms when TDI concentrations were 30-70 ppb (210-500 µg/cu m) [79]. Their symptoms disappeared when concentrations remained below 30 ppb.

Acute and chronic respiratory effects caused by exposure to TDI have been reported, but the results of such studies have been inconsistent. Results appear to differ substantially depending on the type of operation or process in which TDI exposure occurs. In a spraying operation, where TDI concentrations reached 6,400 µg/cu m, Gandevia [81] found a significant decrease in FEV 1 during the course of a workday in 20 exposed men. This decrease was not fully reversed overnight or on the weekend, and the cumulative decrease over 3 weeks was also significant. In a TDI distilling operation where concentrations were generally less than 140 µg/cu m, Williamson [17] found no significant changes, compared with preexposure baseline values, in the pulmonary function of 21 men over 14 months. He also reported little difference between Monday and Friday values.

In another type of exposure situation, a polyurethane foam plant, Peters et al [84-87] found significant daily, weekly, and cumulative decreases over a 2-year period in workers exposed at concentrations below about 100 µg/cu m. This study did not include preexposure measurements, but the mean annual decrement in FEV 1 of 0.11 liter/year was considerably higher than those the authors found in the literature for normal working and general populations, which ranged from 0.025 to 0.047 liter/year.
A long-term study of workers in a TDI-manufacturing plant, conducted by Weill et al [57,59,92] and Butcher et al [54,58,62], showed no significant exposure-related changes in lung function. TDI concentrations in the plant generally ranged from 14 to 50 µg/cu m during the 5.5-year study. The entire study population, which included controls from elsewhere in the chemical factory not exposed to TDI, had excessive declines in some pulmonary function measurements compared with predicted values, but there was no difference between groups with constant, intermittent, and no exposure, and decreases were not correlated with cumulative exposures. These findings are questionable on the basis that the control group may have been so affected by exposure to other chemicals that the study was insensitive to possible effects of TDI exposure.

The disagreement of these findings with those obtained in polyurethane foam plants [87,90] may also reflect differences in exposure to other chemicals and failure to detect occasional excursions to much higher exposure concentrations than those usually prevailing. Both TDI manufacturing and polyurethane foam production involve mixed exposures, but in the latter process, exposures to other chemicals are likely to be much more closely correlated with exposures to TDI. Thus, the apparent dose-response relationship to TDI exposure in the polyurethane foam plant ...

Attempts to determine the concentrations of TDI necessary to produce sensitization have not been fruitful. Porter et al [56] reported that there were no new cases of sensitization in a TDI plant in 2 years when average TDI concentrations were below 140 µg/cu m; during the previous 16 years of operation, when TDI concentrations had averaged 350-420 µg/cu m, from one to four cases of sensitization had been diagnosed each year, the number gradually decreasing with increasing length of operation. Superficially, these data suggest an average TDI exposure, 140 µg/cu m, below which sensitization does not occur. However, examination of the data reveals that, even during the years when the average TDI concentration remained constant at 420 µg/cu m (1956-1969), there was a general decline in the number of cases of sensitization, suggesting that potentially sensitive individuals may have become sensitized and left the workforce during their early years of employment. Thus, these findings do not rule out the possibility that sensitization might develop in newly hired workers exposed at less than 140 µg/cu m for longer periods of time.

Several authors have noted that workers often become sensitized during brief exposures at high concentrations resulting from spills, leaks, or spraying [31,41,42,52]. However, sensitivity to TDI has been observed in workers with no known exposure to spills or spraying operations [43,45]. A NIOSH health hazard survey of a plant making polyurethane foam found respiratory symptoms in workers at a foaming operation where no TDI concentrations of more than 35 µg/cu m were measured [93]. However, 9 of 13 workers who had been transferred away from the foaming operation because of severe symptoms were known to have been exposed previously to spills of TDI.
In another NIOSH survey, none of the nine employees of a polyurethane foam plant where TDI concentrations averaged less than 7 µg/cu m and did not exceed 16 µg/cu m had respiratory symptoms [95], indicating that sensitization may be rare or nonexistent at such low concentrations.

Other studies suggest that the rate of sensitization may be somewhat higher. Four of 47 workers (9%) in an office that received exhaust air from a nearby TDI plant became sensitized; in 3 of these, sensitization was confirmed by bronchial responses in challenge tests, and the 4th improved when he was removed from exposure [45]. Porter et al [56] reported that 30 of the 300 workers (10%) in a TDI plant were diagnosed as sensitive to TDI during 17 years of operation. Adams [83] found that 15% of the workforce in one plant left during their 1st year of employment because of effects on their health; 1-3.5% left for the same reason during subsequent years, for a total of about 20%. A similar rate was suggested in a study by Bruckner et al [52], in which 5 of 26 workers exposed to unspecified isocyanates were considered sensitized because they had asthmatic reactions at low concentrations.

Some reports have suggested that sensitization to TDI is related to a personal history of allergy [52] or to atopy, as indicated by reactivity to prick tests with common inhalant allergens [53]. However, most investigators report that there is no pattern of allergies or atopy in sensitized workers [43,46,49,54,56].

Several investigators have attempted to demonstrate an immunologic mechanism for TDI sensitivity. In 1964, Scheel et al [51] demonstrated circulating antibodies and positive skin reactions in guinea pigs sensitized to TDI by inhalation, but later workers were unable to confirm these results in guinea pigs, rats, and monkeys [108,109]. In humans, immunologic testing has indicated the existence of both reagin-type antibodies and circulating IgG antibodies in some workers exposed to TDI [53-57]. However, these test results have generally correlated poorly with symptoms suggestive of TDI sensitivity or with respiratory responses to challenges with TDI at low concentrations. Since the TDI molecule has been thought to be too small to be antigenic in itself, a central problem in immunologic testing has been the development of an appropriate test antigen (a conjugate of TDI with a carrier protein). A recent study by Karol et al [62], using a test antigen of p-tolyl (mono)isocyanate, demonstrated the presence of tolyl-specific antibodies in the sera of three of four TDI workers who had sensitivity reactions to TDI; the fourth worker had not been exposed to TDI for 5 years. This study showed that an immunologic mechanism may be involved in TDI sensitization.

Studies by Butcher et al [63,66] and Van Ert and Battigelli [64] have suggested that a pharmacologic mechanism is also involved in respiratory sensitivity to TDI. These investigators showed that TDI inhibited the isoproterenol-stimulated cyclic AMP levels in human lymphocytes. The effect was greater in lymphocytes from individuals who were sensitive to TDI.

Far less information exists on exposure to the other diisocyanates, but their effects appear to be generally similar to those of TDI.
Thirty-four of 35 workers, only 6 of whom were exposed to MDI at concentrations above 150 µg/cu m, experienced irritation of the eyes, nose, and throat, and half of them had bronchial symptoms [96]. Workers exposed to MDI at 50-110 µg/cu m did not have a significant decrease in FEV 1 during a workshift in which foaming was carried out, but 3 of 29 workers had respiratory symptoms [61]. Workers exposed to MDI at unknown concentrations in an Italian refrigerator factory had reduced vital capacity and FEV 1, and 85 of 180 workers had respiratory symptoms [98,99]. This study indicated that the effects were dose-related, since furnace workers, who were exposed to MDI at the highest concentrations, had significantly lower pulmonary function values and a greater prevalence of respiratory symptoms than workers elsewhere in the plant. The incidence of respiratory symptoms also increased with years of employment at the plant.

These authors considered immunologic cross-sensitivity unlikely because of the differences in structure between the compounds. The subjects who cross-reacted to other diisocyanates tended to react to extremely low concentrations of TDI (less than 1 ppb) and to be hyperreactive to histamine. The authors suggested that extreme sensitivity to TDI might be the result of both an immunologic mechanism and a nonspecific pharmacologic or irritative mechanism, with the latter mechanism accounting for the cross-reactions to MDI and HDI. This is compatible with the reports of Butcher et al [63,66] that TDI may block the beta-adrenergic system and with the suggestive evidence obtained by Porter et al [56] that TDI sensitivity does not necessarily require the presence of anti-TDI antibodies. However, the possibility of immunologic cross-sensitivity between diisocyanates remains to be tested with a specific antigen system like that of Karol et al [62,110] and is at present only speculative.

In the absence of other data on mutagenicity, this single study is insufficient evidence that diisocyanates are likely to be mutagenic in humans.

During the startup procedures, the mean weekly TDI concentrations for the synthesis, finishing, and drumming areas were 5.6, 17. ...

Both monitoring methods were used simultaneously during some period each day for 22 months. When 8-hour TWA values were analyzed, no positive correlation could be found between personal and area sampling. Area monitoring, then, did not seem to accurately reflect actual individual exposures.

Hervin and Thoburn [100] reported that concentrations of airborne TDI were below the TLV of 20 ppb (140 µg/cu m) in an aircraft overhauling facility where the painters sprayed aircraft with polyurethane paints. Area and personal samples were analyzed by the thin-layer chromatographic method of Keller et al [113]. The concentrations of airborne TDI near the aircraft fin, near the wing, on the floor where mixing was done, and on the floor midway between the two bays were <30, <20-30, 20, and <20 µg/cu m (<4, <3-4, 3, and <3 ppb), respectively. The corresponding concentrations of HDI were 40-100, <30-60, <20-300, and <20 µg/cu m (6-15, <4-9, <3-45, and <3 ppb). Thirteen personal samples taken at various operations contained TDI at concentrations of 40 µg/cu m or less.
Corresponding HDI concentrations ranged from less than 30 to 300 µg/cu m.

In an operation where polyurethane foam lines were used to make automobile seat cushions, Butler and Taylor ...

Fitzpatrick et al [115] stated that most of the MDI detected in the air was carried by this particulate matter in the reactive form. The reaction of MDI with other components of the foam was complete within about a minute, or 60 feet downstream from the spraying operation.

Dharmarajan and Weill [116] found that approximately 90% of the MDI present in air during a foam spray operation was blocked by passage through glass fiber or Teflon filters (0.5 µm pore diameter). The percentage blocked was independent of MDI concentration in the range 20-550 µg/cu m, a finding that supports the contention that most airborne MDI is present in particulate form. Furthermore, they found that approximately 50% of the particles were less than 10 µm in diameter. Assuming that all MDI in the air is in the form of aerosols, and assuming an equal collection efficiency for simultaneous sampling of particulates on a filter and MDI in an absorber, the authors found that MDI constituted 3.03-20.34% of the mass of the airborne dust collected on a filter.

No measurable MDI concentrations were found in 10 samples from the mixing operation and 12 from the molding operation; both operations are conducted at room temperature. Ten samples from the torching or oven-drying operation also showed no MDI, and 17 mold-pouring samples showed less than 7 ppb (70 µg/cu m); these operations involve elevated temperatures, and the author attributed the low concentrations to the brevity of the operations, which did not permit significant vapor concentrations to be generated.

The primary objective of engineering controls for operations using diisocyanates must be to reduce the concentrations of airborne diisocyanates so that they are at or below the recommended environmental limits. Process equipment should be designed so that the system is totally enclosed and operates, if possible, under negative gage pressure [9]. When it is necessary to open a vessel or when leaks or spills are likely, local exhaust ventilation systems should be provided. Unless other means can be used to control the concentrations of diisocyanates, the source should be fitted with a local exhaust ventilation system [9]. If a process is too large for this type of enclosure, dilution ventilation may be necessary.

Numerous polyurethane products exist, and the polyurethane may sometimes be formed under circumstances that are not readily adaptable to conventional exhaust ventilation procedures, eg, application of polyurethane foam to storage tanks to prevent corrosion. Some operations, such as spraying, mixing, foaming, injecting, flushing, pouring in place, and painting, can occur either in fixed locations or in the field. Workers engaged in these operations may require additional protection, such as positive pressure supplied-air respirators [37] and additional protective clothing. Although many types of diisocyanates are used in urethane foam systems, many of these systems contain polymeric isocyanates, which usually have lower vapor pressures ... [118] and Industrial Ventilation-Manual of Recommended Practice, the 1976 edition [119] or a later edition.
The concentration of diisocyanates in the workplace may also be decreased by substituting a compound with a lower vapor pressure. For example, where formulation considerations permit, MDI might be substituted for TDI [120]. In spraying and certain foaming operations where the diisocyanate is present in aerosol form, this substitution may not be an effective means of controlling exposure.

When ventilation requirements for any diisocyanate work area are determined, and it is established that an exhaust ventilation system is necessary, care must be taken in the placement of intake and exhaust vents [121]. Carroll et al [45] described respiratory sensitization from TDI in office workers as a result of TDI-contaminated air being drawn into the ventilation system of an office building from the exhaust vents of a neighboring factory using TDI. This report emphasizes the importance of determining that intake air for the ventilation system is not drawn from areas in which other diisocyanates are handled, and that exhaust vents be positioned to avoid exposure of other persons to the diisocyanate-contaminated air.

For less chemically reactive diisocyanates such as HDI [123], hydrolysis would be expected to occur at considerably slower rates. The authors [122] concluded that increased humidity reduces the concentration of atmospheric TDI, but not to a degree that would prove useful as a routine control measure in the workplace. A decrease in apparent TDI concentrations due to humidity has also been observed by other investigators [44,111,124,125].

# Sampling and Analysis

Volatilized amines may also be present in workroom air where diisocyanates are being manufactured or used. Toluene diamine, a synthetic precursor of TDI and a possible hydrolysis product, is known to interfere positively in the Marcali determination of TDI [124,126,127]. Other primary aromatic amines would be expected to be positive interferences with any method in which diisocyanates are determined as secondary reaction products of their amine derivatives.

Meddle and Wood [126] developed a method for detecting aromatic isocyanates in air in the presence of primary aromatic amines. Individual air samples were bubbled through two different absorber solutions. A solution of dimethylformamide (DMF) and 1,6-diaminohexane (DH) was used to trap the primary aromatic amine and inactivate the isocyanate in one air sample. The second air sample was drawn through a solution of DMF, DH, and hydrochloric acid that traps the primary aromatic amine and hydrolyzes the isocyanate to its corresponding amine. The samples were then diazotized and coupled, and the amount of amine or isocyanate present was determined spectrophotometrically using standard calibration curves. The color in the DMF-DH absorbent solution is produced by the primary amine alone, and that in the DMF-DH-hydrochloric acid absorbent solution is produced by both the amine and the isocyanate. The amount of isocyanate present in the air sampled was determined by subtracting the former value from the latter.
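The two-absorber arithmetic reduces to converting each absorbance to a concentration through its calibration curve and then taking a difference. A minimal sketch, with a hypothetical linear calibration and hypothetical readings:

    # Meddle-Wood two-absorber determination, as described above.
    # Absorber 1 (DMF + DH) responds to the primary aromatic amine only;
    # absorber 2 (DMF + DH + HCl) responds to amine plus hydrolyzed
    # isocyanate. Calibration slope and absorbances are hypothetical.
    SLOPE = 0.012   # absorbance units per ug/cu m, from a standard curve

    def to_concentration(absorbance):
        return absorbance / SLOPE

    amine_only       = to_concentration(0.18)   # absorber 1
    amine_plus_isocy = to_concentration(0.42)   # absorber 2

    isocyanate = amine_plus_isocy - amine_only
    print(f"isocyanate: {isocyanate:.0f} ug/cu m")   # 20 ug/cu m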
Tertiary amines, such as triethylenediamine (TEDA), which are often used as catalysts in urethane polymerizations, have been shown to reduce the apparent concentration of airborne TDI [124,127]. Smith and Henderson [127] were the first to report the negative interference of TEDA vapors in the determination of gaseous TDI by the Marcali and Reilly tape methods. The fraction of apparent TDI loss, when analyzed by both methods, ranged from 49 to 88%. These results led the authors to question whether the tape and Marcali values were underestimating actual TDI exposure levels in polyurethane foaming operations. The reported degree of negative interference, however, appears independent of the TEDA concentrations. In addition, the ratios of TEDA to TDI examined, 17-262 [127], are 135-2,100 times those that might be expected during actual foaming or spraying operations [128,129].

In a later attempt to elucidate the mechanism of tertiary amine interference, Holland and Rooney [124] compared values obtained for TDI in mixed TDI-TEDA atmospheres by three analytical techniques: midget impinger sampling and analysis by the Marcali method, continuous-tape monitoring, and direct air-injection gas chromatography. All methods gave similar values for TDI, and each showed similarly decreased values for the same TDI concentration in the presence of TEDA [124]. In contrast to the previous report [127], this study demonstrated that the reduction in measurable TDI exhibited some dependence on atmospheric amine concentration. A summary of the data shows that, at TEDA-to-TDI ratios of 9.6-25, about 90% of the input TDI could be measured; at a TEDA-to-TDI ratio of 105, only 21-25% could be measured. Six other catalysts used in polyurethane manufacturing were said to give similar results. The effect of TEDA on measurable TDI was significantly reduced when glass components of the experimental apparatus were siliconized to decrease surface adsorption. The authors commented that gas chromatography did not detect any stable reaction intermediates, including toluene diamine, in the mixed gas stream. The only reaction product found proved to be the TDI urea that had formed as a white powder on the surface of the mixing system. The authors [124] concluded that all three analytical methods gave accurate measurements of atmospheric TDI in the presence or absence of tertiary amine catalysts and that the observed negative interference reflected an actual reduction in TDI concentration. Furthermore, the mechanism by which this reduction occurred may have depended on surface effects, relative humidity, and constituent concentration and residence time.

The above reports [124,127] suggest that the presence of tertiary amine vapors may catalyze the hydrolysis of airborne TDI to its urea. The reaction appears to be facilitated by adsorption of one or more of the reactants to a surface [124]. Whereas gas chromatography of the mixed gases failed to isolate any stable reaction intermediate, the existence of short-lived, potentially toxic, reactive complexes in minute amounts cannot be discounted.
Both studies [124,127] used concentrations of TDI that may be representative of actual working environments, 18-400 ppb; however, since the relative concentrations of amine used far exceeded realistic levels, the relevance of these results to the workplace situation remains questionable.

[...] The amine is diazotized and coupled with 1-naphthyl ethylenediamine to produce a reddish-blue color. The intensity of this color is measured spectrophotometrically at 550 nm to provide an indication of the amount of TDI present. Marcali [111] reported that the method was capable of detecting 10 ppb (70 µg/cu m) of toluene-2,4-diisocyanate. He also determined that recovery of total TDI was apparently reduced when 35% toluene-2,6-diisocyanate was present. A similar reduction was reported by Meddle et al [44] for TDI mixtures containing 20 or 40% of the 2,6-isomer. To increase the accuracy of measuring mixtures of TDI isomers, both Marcali [111] and Meddle et al [44] recommended that standard curves be constructed with the appropriate isomer ratios. A portable field kit employing stable color standards could easily detect TDI at 50 ppb (360 µg/cu m) and could be modified to detect TDI at 20 ppb (140 µg/cu m). The method does not detect the TDI urea, 3,3'-diisocyanato-4,4'-dimethylcarbanilide, a hydrolysis product that is formed on reaction of TDI with water.

The Ranta method, as described by Zapp [36] and Marcali [111], can measure both TDI and TDI urea with equal efficiency and cannot distinguish between them. The compounds are collected by bubbling the air sample through a reagent solution of aqueous sodium nitrite, ethylene glycol monoethyl ether (Cellosolve), and boric acid. The intensity of the resulting orange-yellow color, measured at 450 nm, is proportional to the concentration of either compound.

On the basis of field and laboratory evaluations of these two methods, Skonieczny [131] concluded that the Marcali method was more suitable for field determination of peak concentrations and for detecting small amounts of TDI. Because the Ranta method requires a sampling time of 10-30 minutes to collect sufficient amounts of TDI under usual working conditions, Skonieczny noted that it might not detect momentarily high concentrations. [...] The sensitivity of the method was said to be improved by increasing the length of the light path in the spectrophotometric cells. If amounts of TDI greater than 70 ppb must be measured, the final reagent solution can be diluted with absorber solution or a smaller air sample can be taken. Although MDI is detected by this method, the time required for complete color development under the prescribed conditions is 1-2 hours, compared with 5 minutes for TDI [94]. It is possible that TDI can be determined in the presence of MDI if the absorbance of the test solution is measured within 10 minutes after adding the coupling agent.

Various field test kits using the principles of the Marcali or Ranta method have been developed. These kits have simplified and standardized test procedures for on-site measurement.
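The ppb figures quoted throughout this section are paired with µg/cu m equivalents; the pairing follows from the ideal-gas molar volume, about 24.45 liters/mole at 25 C and 760 mmHg. The sketch below is an illustrative consistency check, not part of any cited method; the molecular weights are approximate values supplied here, not taken from the document.

```python
# Convert vapor concentrations in ppb to ug/cu m, assuming ideal-gas
# behavior at 25 C and 760 mmHg (molar volume ~24.45 L/mol).
MOLAR_VOLUME_L_PER_MOL = 24.45

# Approximate molecular weights in g/mol (assumed for illustration).
MW = {"TDI": 174.2, "MDI": 250.3, "HDI": 168.2}

def ppb_to_ug_per_cu_m(ppb, mw):
    # 1 ppm = mw / 24.45 mg/cu m, hence 1 ppb = mw / 24.45 ug/cu m.
    return ppb * mw / MOLAR_VOLUME_L_PER_MOL

print(round(ppb_to_ug_per_cu_m(10, MW["TDI"])))  # ~71; quoted as 70
print(round(ppb_to_ug_per_cu_m(50, MW["TDI"])))  # ~356; quoted as 360
print(round(ppb_to_ug_per_cu_m(10, MW["MDI"])))  # ~102; quoted as 100
```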
Grim and Linch [94] described a field kit for determining concentrations of TDI. Air was sampled through the standard midget impinger using a self-powered, constant-rate air aspirator. The sensitivity of the field kit employing the Marcali method was improved to allow detection of TDI at 10 ppb (70 µg/cu m) by collecting a larger volume of air and by reducing the volume of the reagent used. By increasing the coupling reagent concentration and adding sodium carbonate to the absorbing medium, the field kit could be used for determinations of airborne MDI. The Ranta method was modified to allow measurement of TDI urea and TDI at concentrations as low as 10 ppb by increasing the volume of sample collected, reducing the volume of reagent used, and increasing the length of the light path in the colorimeter to 100 mm. To make the Ranta method suitable for field use, color standards that can be used with a portable visual comparator were developed and included in the kit.

Belisle [133,134] described a field kit suitable for measuring TDI in air. Air was sampled at a rate of 0.1 cu ft/minute through an acidified absorber solution in a modified midget impinger containing in situ-generated glutaconic aldehyde and cation-exchange resin. This process converts TDI to its corresponding amine, which reacts with glutaconic aldehyde to form an orange-red product. To measure the concentration of TDI, the orange-red color that appeared on the surface of the resin beads was matched against a set of color standards. Results reportedly agreed closely with those obtained by the Marcali method. A major advantage of this method is that the color develops while the air is being sampled. At a concentration of 10 ppb (70 µg/cu m) of TDI, sampling and analysis can be completed in 5 minutes. The method is capable of measuring TDI at 5 ppb (35 µg/cu m) in 0.5 cubic foot of air, and it may be modified to measure other aromatic isocyanates or aromatic amines by constructing appropriate calibration data.

Reilly [135] developed a field method for determining MDI in air. The sample was drawn through an acid absorber medium in which MDI was collected and hydrolyzed to the corresponding amine. The amine was diazotized and coupled with 3-hydroxy-2-naphthanilide to form a pinkish-orange azo compound. This was extracted into chloroform and compared visually with inorganic color standard solutions. The method was capable of measuring MDI at 10-40 ppb (100-400 µg/cu m) with a 5-liter sample of air. The equipment required is portable, and a complete determination can be accomplished in 12-15 minutes.

Meddle et al [44] extended the Grim and Linch adaptation [94] of the Marcali method to a general field-test procedure capable of detecting other aromatic diisocyanates in air. Procedures were described that established maximum sampling and analytical conditions for TDI, MDI, NDI, dianisidine diisocyanate, and a polymeric form of MDI, polymethylene polyphenyl isocyanate.
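A useful way to compare these kits is the absolute mass of diisocyanate each must detect, which is simply the quoted concentration multiplied by the sampled air volume. The sketch below is a back-of-the-envelope check using the figures quoted above; only the unit conversions are assumed.

```python
# Absolute mass of diisocyanate collected at a given airborne
# concentration and sample volume (1 cu m = 1,000 liters;
# 1 cu ft = 28.32 liters).
LITERS_PER_CU_FT = 28.32

def collected_mass_ug(conc_ug_per_cu_m, volume_liters):
    return conc_ug_per_cu_m * volume_liters / 1000.0

# Belisle kit [133,134]: 5 ppb TDI (35 ug/cu m) in 0.5 cu ft of air.
print(round(collected_mass_ug(35, 0.5 * LITERS_PER_CU_FT), 2))  # ~0.5 ug

# Reilly kit [135]: 10-40 ppb MDI (100-400 ug/cu m), 5-liter sample.
print(collected_mass_ug(100, 5), collected_mass_ug(400, 5))  # 0.5, 2.0 ug
```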
Meddle et al [44] noted that, with the exception of TDI, attempts to generate dynamic vapor atmospheres by bubbling dry nitrogen through liquefied diisocyanates proved unsuccessful. Airborne concentrations produced by this method diminished rapidly, indicating that these diisocyanates, where present in air, would be in aerosol form. Similar observations were made by Reilly [135] in his work with MDI. Analysis of all tested diisocyanates was subsequently performed on generated aerosol atmospheres [44]. A consequence of the aerosol nature of these test atmospheres was the adoption of a sintered dome bubbler for air sampling. To ensure complete recovery, aerosol particles trapped in the sintered dome during sampling were allowed time to dissolve in the absorber solution before coupling reagent was added. A 10-minute digestion time was judged sufficient under these conditions. Impinger sampling at the same flowrate of 1 liter/minute was only 53% as efficient as sampling with the sintered dome bubbler. For use in the field, permanent color standards for MDI, NDI, TDI, and polymethylene polyphenyl isocyanate are available for concentrations of 10-40 ppb.

The aerosol nature of MDI in air was further supported by the results of a recent study by Dharmarajan and Weill [116]. They found that MDI vapor generated by heating the diisocyanate to 110 C in a small enclosed room did not behave as a gas but rather as an aerosol. They compared the amount of MDI collected in absorbers according to the standard NIOSH-recommended method [130] with and without prefilters and found that 98% of the airborne MDI was collected on a Teflon filter backed with a cellulose pad and 87% was collected on the filter backed with a stainless-steel pad. Since the collection efficiency of the absorber for MDI aerosol was unknown in this study, it is likely that the actual amount of MDI in samples was higher than the amount detected. The percentage of the sample consisting of MDI aerosol could also have been underestimated. The authors also pointed out that, since the major portion of MDI in air is present as an aerosol, concentrations of this compound should be reported in mg/cu m rather than ppm.

# (c) Tape Methods

Reilly [136] developed a test-paper method to measure the concentration of TDI in air. A 5-liter air sample was drawn through a chemically treated filter paper at a rate of 1 liter/minute. After sampling was completed, stain was allowed to develop on the test paper for 15 minutes. The intensity of the stain was then compared with a set of color standards for TDI concentrations of 10, 20, 40, 60, and 100 ppb (70-700 µg/cu m). The method is specific for the aromatic diisocyanates, with no response obtained from the diamine derivatives. Several preparations of test paper, exposed to TDI at known concentrations, showed a variation in color development of about 20%. Qualitative tests indicate that the method may be adapted to detect MDI and NDI, and it is reported to require less analytical skill to perform than other methods.

[...] the authors discovered that two midget impingers in series were necessary to trap airborne TDI adequately. Although the 1973 NIOSH criteria document [37] stated that a "single bubbler absorbs 95% of the diisocyanate if the concentration is below 2 ppm," Miller and Mueller determined that, at TDI concentrations ranging from 1 to 76 ppb (7-532 µg/cu m), the collection efficiency of the first bubbler was approximately 83%. The Dunlap/ICI monitor has also been used to measure MDI under laboratory [116,138] and field [116,137] conditions.
Development of tape color intensity in response to MDI, either as a vapor generated in toluene solution [138] or when spotted in known concentrations directly on the tape [116], was approximately 75% of the maximum when read at 15 minutes, and the reaction was complete in 4 hours. Maximum color development with TDI, on the other hand, is complete at 15 minutes. Where MDI may be found as a constituent of a reactive particulate, the accuracy of the monitor may be further reduced. In an initial test of the applicability of the Dunlap/ICI area monitor in MDI systems, results obtained by the monitor in foam and paint spraying operations were 36% and 35%, respectively, of those obtained by the spectrophotometric method. After applying the manufacturer's recommended correction factor, the authors found that the values obtained by the tape monitor at readings ranging from 1 to 5 ppb (10-50 µg/cu m) were in good agreement with those obtained by the spectrophotometric method.

Dharmarajan and Weill [116] assessed the performance of the TDI continuous tape monitor for analyzing both heat-generated and foam-spray-generated MDI aerosols. Eight-hour TWA concentrations of MDI in ppb determined by the monitors were compared with those obtained by the standard NIOSH-recommended method [130]. In the range of 5-8 ppb as determined by the NIOSH method, the tape monitors consistently gave readings indicating concentrations two to three times higher. The authors [116] explained this difference by pointing out that, whereas the filter tape medium of the monitors could be expected to collect 99.9% of the MDI aerosol, an impinger flowrate of 1 liter/minute would select against certain particle size populations. The absorber collection efficiency, however, was not determined in this study. Having emphasized the necessity for expressing MDI concentrations in units of mg/cu m, the authors devised a procedure for calibrating the TDI tape monitor for use in MDI aerosols. [...] Color intensity developed during this time could be correlated with the known concentration of MDI in mg/cu m. However, the authors [116] did not test the validity of this calibration method in the workplace. In view of these findings [116,138], it is important that calibration curves for continuous tape monitors used to detect MDI be constructed to simulate as closely as possible the actual conditions under which the monitor will be used.

The phenomenon described by a number of investigators [60,115,116,135,138], that MDI is rarely, if ever, found as a gas at ambient temperatures, has been shown to apply to NDI and dianisidine diisocyanate [44] and can be considered a general property of other diisocyanates that are solids or viscous liquids at room temperature.

[...] The sampling rate was limited to 1 liter/minute because, at higher sampling rates, the toluene in the impinger would bubble over into the next impinger in the series and excessive evaporation of toluene could occur. The authors suggested that 0.1 ml should be the maximum allowable volume of evaporation for one impinger; this rate of evaporation would introduce an additional 1% error into the cumulative error for the resultant concentration. The total collection efficiency of this system was 98%, with the first impinger being 90% efficient.
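These two figures are mutually consistent if each impinger removes a fixed fraction of whatever reaches it: a 90% first stage and 98% overall imply that the second impinger collects about 80% of the carryover. The sketch below is a simple consistency check, not part of the cited work.

```python
# Serially connected impingers: if stage i removes fraction e_i of
# what reaches it, the overall efficiency is 1 - (1 - e1)(1 - e2).

def overall_efficiency(e1, e2):
    return 1 - (1 - e1) * (1 - e2)

def implied_second_stage(e_total, e1):
    # Solve e_total = 1 - (1 - e1)(1 - e2) for e2.
    return 1 - (1 - e_total) / (1 - e1)

print(round(implied_second_stage(0.98, 0.90), 2))  # -> 0.8
print(round(overall_efficiency(0.90, 0.80), 2))    # -> 0.98
# With the 83% first-bubbler efficiency reported by Miller and
# Mueller, the same second stage would give only ~97% overall:
print(round(overall_efficiency(0.83, 0.80), 3))    # -> 0.966
```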
The analytical system consisted of a Barber-Colman Series 5000 Selectra System gas chromatograph equipped with a tritium-source electron-capture detector [125]. A 4-foot Pyrex U-tube column (1/4-inch inner diameter) was packed with Chromosorb G (60/80 mesh) solid support coated with a mixture of Epon 1001 and Apiezon L. Oxygen-free nitrogen was used as the carrier gas at an optimum flowrate of 100 ml/minute at an inlet pressure of 15 psig. Operating temperatures were 150 C for the column and injection port and 170 C for the detector bath. The liquid sample size for injection was 5 µl [125]. Calibration curves were prepared by sequentially diluting TDI with chromatographic-grade toluene [125]. The calibration curves were then checked against a primary standard (a TDI permeation tube). This paired system of sampling and analysis could accurately analyze TDI in a 10-liter sample of dry air at 1.4 ppb (10 µg/cu m). When the system was tested using air that had not been previously dried, readings were lower than expected. After appropriate corrections were made for the effects of humidity [122], the authors [125] obtained an overall efficiency of 97%.

A thin-layer chromatographic (TLC) method was developed by Keller et al [113] for isolation and quantitative determination of various isocyanate compounds. The method is based on the reaction of isocyanates with N-4-nitrobenzyl-N-n-propylamine (nitro reagent) to form the corresponding ureas. The concentrations of the ureas are then determined. The method is said to be capable of isolating and measuring diisocyanate monomers in the presence of partially polymerized reaction products. Concentrations of both monomer and free isocyanate-containing polymer can be determined. Air in the working environment is sampled with two impingers containing a solution of the nitro reagent [113]. The solutions in the impingers are then combined and evaporated to dryness, and the remaining residue is dissolved in benzene. The benzene solution is chromatographically analyzed on thin-layer silica-gel plates, and the ureas are visualized by reducing the nitro groups to amines and by diazotizing the amines with nitrous fumes. After evaporation of the nitrous fumes, the thin-layer chromatographic plates are sprayed with N-1-naphthyl ethylenediamine, and quantitative determinations are made by visual comparison of the samples with standards. Scanning densitometry can be used for more accurate determinations. The lower limit for the determinations was found to be 80 µg/cu m for MDI, HDI, and TDI. The method, however, is time consuming and requires skilled attention to detail during the reduction and coupling steps.

The nitro reagent approach was later adapted to high-pressure liquid chromatography (HPLC) [123,139]. Chromatographic separation of the ureas was accomplished on a pellicular silica HPLC column. As with the TLC method [113], both aliphatic and aromatic isocyanates can be analyzed. The HPLC method, however, extended the detection limits to 5, 5, and 10 ng of TDI, MDI, and HDI, respectively. The values were based on a 20-liter air sample and a 90-µl injection volume. The method cannot be used to analyze atmospheres that can oxidize or reduce the nitro reagent used in the impingers during sampling.
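A per-sample detection limit translates into a minimum detectable airborne concentration only relative to the volume of air sampled. The sketch below makes that conversion for the figures just quoted; it is an illustrative check, not part of the cited method.

```python
# Minimum detectable airborne concentration implied by a per-sample
# detection limit and a given sample volume (20 liters here,
# corresponding to 2 liters/minute for 10 minutes).

def min_detectable_ug_per_cu_m(limit_ng, sample_volume_l):
    return (limit_ng / 1000.0) / (sample_volume_l / 1000.0)

for compound, limit_ng in (("TDI", 5), ("MDI", 5), ("HDI", 10)):
    print(compound, min_detectable_ug_per_cu_m(limit_ng, 20), "ug/cu m")
# -> TDI 0.25, MDI 0.25, HDI 0.5 ug/cu m, well below the TLC
#    method's 80 ug/cu m lower limit.
```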
The report [140] cited the following reasons for failure: deterioration of silica gel columns caused by excess nitro reagent in samples; oxidation of the nitro reagent after sampling; and shipping and exposure problems posed by the use of toluene in the collection medium. The time allotted for the study precluded an attempt to resolve these problems. Two of these difficulties have already been addressed by previous investigators [123,139]. To eliminate excess nitro reagent in samples, Vogt et al [123] added p-tolyl isocyanate to the absorbing solution after the collected diisocyanates were allowed time (about 1 hour) to react. The resulting monourea derivative was observed to run well ahead of the diureas. Alternatively, column life was preserved from the effects of excess nitro reagent by daily flushing of the column with solvent. The oxidation of the nitro reagent, however, appeared to be a result of insufficient purification of the synthesized nitro reagent and of the grade of toluene used. Nitro reagent is now commercially available. When the nitro reagent is dissolved in chromatographic-grade toluene, absorber solution impurities should be minimal. Compounds that will interfere with this procedure are those that absorb in the ultraviolet range and also have the same column retention time as the diisocyanate being investigated.

Toluene remains the solvent of choice for this method [123,139,140]. Federal regulations (49 CFR 172) allow air transportation of toluene in quantities of up to 1 quart/package. Where personal sampling procedures may expose workers to toluene vapor for extended periods, air sampler outlets could be fitted with an appropriate scrubber.

After reviewing the currently available analytical methods, NIOSH recommends the HPLC procedure described above [123,139]. This method is described in detail in Appendix I. Although the method may necessitate some initial experimentation before routine measurements can be made, it is the most sensitive method. Because quantities of diisocyanates in the nanogram range may be determined, relatively short sampling times are allowed. Analysis of the primary reaction product of diisocyanate and absorber ensures direct measurement of available isocyanate functional groups and precludes interference by other diisocyanate reaction products. The relative chemical stability of the nitro-ureas [123,139] allows for possible elapsed time between air sampling and subsequent analysis. In addition, the procedure is capable of separating and identifying mixtures of diisocyanate monomers as well as mixtures of monomer and partially polymerized products [123,139]. The procedure has not been tested with diisocyanates other than TDI, MDI, and HDI. It is reasonable to assume, however, that a method which can effectively separate the isomers of TDI, as the HPLC method does [123,139], can be used successfully for measuring other diisocyanates. Where possible interference from reducing or oxidizing atmospheres may be encountered, alternate sampling and analytical methods should be calibrated against the HPLC procedure. Numerous studies have shown that absorber collection efficiency may vary dramatically [44,111,116,125,137].
It is therefore recommended that two serially connected impingers be used for air sampling until a reproducible collection efficiency is established for any given operation, after which a single impinger may be used for routine monitoring. The recommended flowrate of 2 liters/minute for 10 minutes represents a compromise for efficient aerosol and vapor sampling [115,116].

Diisocyanates are also skin irritants and sensitizers [105,141]; however, effects on the skin from these compounds do not appear to have been a major problem in industry [35]. Eye contact with liquid TDI and TDI vapor has produced irritation and watering of the eyes [2,9,120], and it is likely that direct eye contact with other diisocyanates would produce similar effects.

Diisocyanates encompassing a wide range of molecular weights and physical properties (see Table XI-1) are available for use in industry. The potentials of these compounds to irritate the respiratory tract, mucous membranes, eyes, and skin vary depending on the particular diisocyanate being considered. The potential for skin irritation and eye injury is generally higher for the lower molecular weight diisocyanates, and the severity of these irritant responses is reduced with increasing molecular weight [1,2,142,143].

The potential respiratory hazards encountered during the use of diisocyanates in the workplace are related to their vapor pressures [1,2,142]. The lower molecular weight diisocyanates tend to be more readily volatilized into the workplace atmosphere than the higher molecular weight diisocyanates [1,2]. Figure XI-1 presents graphically the decrease in vapor pressure with increasing molecular weight for several diisocyanates. The lower molecular weight diisocyanates, such as HDI and TDI, when handled without special precautions, can release amounts of vapor sufficient to be extremely irritating to the respiratory tract of workers [2,142]. Higher molecular weight compounds such as NDI, IPDI, and MDI present a lesser vapor hazard when handled in well-ventilated areas at normal room temperatures, ie, less than approximately 40 C (104 F) for MDI [1,2,6,104,121,142]. Although the vapor pressures of the higher molecular weight diisocyanates are relatively low, they may generate vapor concentrations sufficient to cause respiratory and mucous membrane irritation if they are handled in poorly ventilated areas. Air exhaust hoods may be necessary under such conditions [121]. High molecular weight diisocyanates like MDI may also present significant vapor hazards when heated or used in exothermic production processes [1,2,6,8].

The physical state of the diisocyanate being handled will also affect the potential hazards encountered during its use. MDI and NDI, which are normally solid materials at room temperature, present less vapor inhalation or skin contact hazard as a result of splashes or spills than the lower molecular weight liquid diisocyanates do [1,2]. However, workers should be aware of the possibility of respiratory and mucous membrane irritation from the dusts of such compounds, and of contamination of their clothing with the powdered diisocyanates.
Operations involving the use of such compounds, such as weighing, should be performed with equipment incorporating a barrier between the worker and the diisocyanate. Local exhaust ventilation may also be necessary. Clothing contaminated with solid diisocyanate should be decontaminated and laundered as soon as possible to prevent further exposure of the worker and to avoid contamination of other work areas. Spills involving these compounds should also be decontaminated and cleaned up as soon as possible.

The processes and operations in which diisocyanates are used will affect the severity of the hazard. Industrial processes involving evaporation from large surface areas or spraying operations may result in a greater potential vapor hazard than operations involving pouring-in-place or frothing techniques [6,8,104,121]. Special hazards may arise in spraying operations, since the diisocyanate-containing aerosol cloud may drift to areas beyond the immediate spraying area.

Diisocyanates react with water, liberating carbon dioxide; the rate of reaction is very slow for TDI and MDI at temperatures below 50 C. As the temperature increases, the reaction between these compounds and water becomes more vigorous. TDI and MDI will also react with bases, such as sodium hydroxide, ammonia, and primary and secondary amines, and with acids and alcohols. This reaction may be violent, producing enough heat to increase the evolution of diisocyanate vapor and the generation of carbon dioxide. These reactions, like the reactions with water, may lead to dangerously increased pressure in closed containers [144]. Thus, containers of diisocyanates should be kept closed as much as possible to prevent water, atmospheric moisture, or other reactive compounds from entering and vapors or solids from escaping.

Diisocyanates should be transported or stored in sealed, intact containers. A "sealed container" is one that has been closed and kept closed to the extent that there is no release of diisocyanates. An "intact container" is one that has not deteriorated or been damaged to the extent that diisocyanates are released. Diisocyanates in sealed, intact containers should pose no threat of exposure to employees; therefore, it is not necessary to comply with required monitoring and medical surveillance requirements in operations involving such containers. If, however, containers are opened or broken so that diisocyanates may be released, then all provisions of the recommended standard should apply.

Indoor storage areas should be dry, fireproofed (automatic sprinkler systems should be considered), and well ventilated; temperature extremes in these areas should be avoided [9,121]. The storage area should have a firm floor made of some nonabsorbent material [6]. If diisocyanates are accidentally frozen during storage or while in transit, they may be thawed by storage in a warm area [121]. Extreme caution must be used if heat is applied, and a flame or similar localized heat source should never be used.

Diisocyanates are transported in drums, tank trucks, or tank cars. Containers should be properly labeled, and shippers should be aware of the precautions to be taken for transporting, loading, and unloading the particular container and type of diisocyanate being transported.
Emergency measures to be taken in the event of an accident or some type of damage to the container or tank en route should also be worked out in advance by the shipper and the supplier or producer [9]. The Hazardous Materials Regulations as promulgated by the US Department of Transportation in 49 CFR Subchapter C should be adhered to where applicable.

Where bulk quantities of diisocyanates are handled, adequate ventilation should be provided and respiratory protective equipment should be readily available. Workers should wear chemical safety goggles when handling solid diisocyanates and chemical safety goggles with face shields when using liquid diisocyanates. Local exhaust ventilation should be employed when opening containers of diisocyanates [9]. Local exhaust ventilation should also be used when performing laboratory operations involving diisocyanates.

Since the flashpoints of most diisocyanates are high, the compounds are not flammable under normal circumstances, although they can burn if they are heated sufficiently. Because of the wide range of physical and chemical properties of diisocyanates (see Table XI-1), it is important to be aware of the potential fire hazard that may be associated with a particular diisocyanate in the industrial setting in which it is used or stored. Any diisocyanate involved in a fire may produce high concentrations of toxic vapors, and only trained and properly equipped personnel should be involved in firefighting. All nonessential personnel should be evacuated from the area during a fire. Suitable extinguishing media for fighting diisocyanate-supported fires are dry chemical powder, carbon dioxide, or foam. Water should be used only if large quantities are available, since the reaction between water and a hot diisocyanate may be vigorous. After the fire is out, the area should be inspected by properly protected personnel, and any suspected residues should be decontaminated before other workers are permitted to return to the area.

If splashes or contact with aerosols of diisocyanates are likely to occur, employees should wear rubber or polyvinyl chloride gloves and aprons and rubber boots; appropriate respiratory equipment, as described in Table 1-1, should be readily available. Where supplied-air respirators are used, the air supply must come from a source not contaminated with diisocyanates [145]. For all workers near spray gun operations (within approximately 10 feet), an air-supplied hood, impervious gloves (rubber or polyvinyl chloride), tightly buttoned coveralls, and rubber galoshes or boots are needed [146]. Workers without this equipment should not be permitted close enough to spraying operations performed outdoors to be exposed to diisocyanate vapors or particulates; a minimum distance of 50 feet is recommended. For indoor spraying operations, the safe distance for unprotected workers will depend on the type and efficiency of the ventilation provided [60,147].

When leaks or spills of diisocyanates occur, only properly trained and equipped personnel wearing appropriate protective clothing should be permitted to remain in the area [9].
If major spills occur, air-supplied masks or self-contained breathing apparatus as described in Table 1-1 should be used. [...] Any existing regulations pertaining to the discharge of such materials into sewer lines should be strictly adhered to. Since spills of diisocyanates may freeze during cold weather, water and ammonia will merely coat the solid material with insoluble urea, stopping further reactions. In cold weather, cleanup should therefore be performed with a mixture of equal parts of isopropyl alcohol and ethylene glycol [9,68]. A supply of this mixture should be on hand and ready for immediate use in cold weather.

Because of the potential hazards of exposure to diisocyanates, the importance of good housekeeping should also be emphasized. Spills should be cleaned up promptly, and all equipment used in the exposure areas, such as buckets, weighing containers, and funnels, should be decontaminated and cleaned immediately after use. Smoking and the carrying of smoking supplies should be prohibited in areas where exposure to diisocyanates may occur, as should preparing, storing, dispensing (including vending machines), and consuming food and beverages. Employees exposed to diisocyanates should be encouraged to wash their hands before eating, drinking, or smoking, and before and after using toilet facilities. The US Department of Labor regulations concerning general sanitation in the workplace, as specified in 29 CFR 1910.141, should be adhered to.

Employees should be instructed on the health hazards of diisocyanates and the precautions to be followed in handling them. They should be trained to report promptly to their supervisors all leaks, suspected failures of equipment, exposures to diisocyanates, or symptoms of exposure. The location of safety showers and eyewash fountains should be clearly marked, and appropriate warning signs and labels should be prominently displayed in exposure areas and on containers of diisocyanates. Emergency exits should be provided and be accessible at all times. All emergency shower, eyewash, protective, and firefighting equipment should be checked periodically to ensure its serviceability.

[...] Zapp [36] also recommended that this limit not be exceeded because respiratory irritation occurred in animals exposed to TDI at concentrations of 1-2 ppm. He noted that TDI and similar diisocyanates are strong irritants to the skin, eyes, and gastrointestinal and respiratory tracts and that they may cause asthma-like symptoms in workers. In 1962, the ACGIH reduced the TLV for TDI to 0.02 ppm (0.14 mg/cu m). [...] Isocyanates in general were reported to irritate the skin, eyes, and respiratory tract and to cause respiratory sensitization when sufficient vapor concentrations were present even for a short time [120]. Konzen et al [60] observed an immunologic response in workers exposed to MDI at approximately 1.3 ppm-minute but not in workers exposed at 0.9 ppm-minute. Bruckner et al [52] noted that workers might become sensitized to isocyanates when exposed at concentrations above 0.02 ppm. According to the 1971 documentation [154], available data indicated that MDI was similar to TDI in its irritant and sensitizing properties, suggesting that a similar ceiling value of 0.02 ppm (0.2 mg/cu m) was warranted.
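The ppm-minute values cited for Konzen et al are cumulative concentration-time products. The sketch below shows the computation for a hypothetical exposure profile; the numbers are illustrative and are not data from the cited study.

```python
# Cumulative concentration-time dose in ppm-minutes, the metric used
# in the Konzen et al [60] observations. The profile is hypothetical.

def ppm_minutes(profile):
    """profile: iterable of (concentration_ppm, duration_minutes)."""
    return sum(conc * minutes for conc, minutes in profile)

# eg, 0.02 ppm for 30 minutes followed by 0.014 ppm for 50 minutes:
print(round(ppm_minutes([(0.02, 30), (0.014, 50)]), 2))  # -> 1.3 ppm-minute
```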
In the United States, occupational exposure standards for diisocyanates have been established only for TDI and MDI. According to the International Labour Office [156], occupational exposure limits for TDI, MDI, and several other diisocyanates have been set by foreign countries. These limits are summarized in Tables VI-1 and VI-2. Current Federal standards (29 CFR 1910.1000) [...], who reported that workers had no symptoms at concentrations below 10 ppb but developed respiratory illness within 1 week when concentrations rose to 30-70 ppb; when concentrations were reduced to 10-30 ppb, there were no further complaints. The recommended ceiling was intended to protect against irritative effects of TDI in nonsensitized workers, but evidence was insufficient to determine whether it would also protect against sensitization. The document noted that no evidence was available to point to a concentration of TDI that would be safe for workers already sensitized to TDI. The present document reexamines the earlier recommendations for a TDI standard, taking into account more recent information that has become available since 1973.

[...] No threshold concentration for such reactions has been identified, but there is evidence that, for some individuals, it may be unmeasurably low [31,52,56]; thus, it is not possible at this time to establish a level below which sensitized workers will not experience adverse respiratory effects from exposure to diisocyanates.

# Basis for the Recommended Standard

Several studies have shown that 5-20% of the workforce may become sensitized to diisocyanates [32,52,56,83]. There is evidence, however, that the incidence of sensitization can be reduced by controlling exposures. The data of Elkins et al [32] on 15 TDI plants showed that all plants where average exposures exceeded 70 µg/cu m had workers with TDI-related respiratory illness, but no such illnesses were reported in plants where average exposures were 50 µg/cu m or lower. On the other hand, Weill [57] reported instances of sensitization developing in a plant where average TDI exposures ranged from 14 to 50 µg/cu m. These studies did not report the frequency or magnitude of excursions above these averages, so more precise estimates of exposure concentrations cannot be determined.

Exposure to diisocyanates may also cause chronic respiratory effects measurable as a long-term decrement in pulmonary function, especially FEV 1, in excess of that expected from aging. It is not clear from existing data whether this change occurs only in sensitized workers or whether it may also affect workers who show no clinical symptoms from diisocyanate exposure. The findings of Porter et al [56] indicate that some workers who are sensitive to TDI and who have anti-TDI antibodies may exhibit normal pulmonary function, while others with clinical symptoms of sensitization but negative results on immunologic tests may have severely impaired pulmonary function. In the study conducted by Wegman et al [90], in which workers exposed at concentrations above 20 µg/cu m had a significantly greater decrease in FEV 1 than those exposed at lower concentrations, the workforce studied included both sensitized and unsensitized individuals.
In the plant studied by Weill and colleagues [57-59], where workers who showed symptoms of clinical sensitization were not included in the study population, the investigators found no significant effects on lung function related to TDI exposures at average concentrations of 14-50 µg/cu m. These findings indicate that the TWA limit of 5 ppb (35 µg/cu m) for TDI recommended by NIOSH in 1973 [37], which was based on the findings of Elkins et al [32], provides adequate protection against chronic effects of TDI on the pulmonary function of workers who are not sensitized.

Environmental data for the other diisocyanates are insufficient to establish a safe exposure limit. The diisocyanates commonly used in industry are respiratory irritants [2]. In a plant where area samples showed MDI concentrations of 10-150 µg/cu m and breathing zone concentrations for 6 sprayers were 120-270 µg/cu m, 34 of 35 workers had eye, nose, or throat irritation, and nearly half had wheezing, shortness of breath, and chest tightness [96]. In another plant, 3 of 29 workers exposed to MDI at 50-110 µg/cu m had respiratory symptoms [61]. Nine of 18 workers exposed to HDI at less than 300 µg/cu m and to an HDI trimer at less than 3,800 µg/cu m had irritation of the upper respiratory tract, cough, or chest tightness [100].

Like TDI, other diisocyanates are likely to be reactive with biologic macromolecules, such as proteins, and thus are potential sensitizers. Some authors [60,61] have reported that MDI produces respiratory sensitization, since affected workers gave positive results in immunologic tests. The assumption of a common mechanism of action suggests that structurally similar diisocyanates might produce cross-sensitization; there is one report [53] of positive tests for antibodies against MDI in workers sensitized to TDI, but the validity of these results is questionable because of the lack of characterization of the test antigen used. A study of skin sensitization [69] suggested that workers exposed to TDI and MDI were cross-sensitized to IPDI. In another study [47], some workers with previous exposure only to TDI had bronchial reactions to MDI and HDI as well. These workers were significantly more hyperreactive to histamine than workers who reacted to TDI only, suggesting that cross-reactivity might involve a nonspecific pharmacologic mechanism.

The predicted reactivity of the diisocyanates with biologic macromolecules is the probable basis for their immunologic effects and perhaps for the respiratory symptoms and effects on pulmonary function produced at low concentrations. It is therefore reasonable to expect that other diisocyanates react similarly to TDI on a molar basis. In the absence of data indicating that any of the diisocyanates is substantially less toxic than TDI, the exposure concentrations recommended in 1973 should be extended to all diisocyanates.
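Because a ppb limit is a molar limit for vapors, extending the TDI recommendation to other diisocyanates on a molar basis yields mass limits that scale with molecular weight. The sketch below illustrates that mapping; the molecular weights and the 24.45 L/mol molar volume are assumptions supplied here for illustration, not values taken from the document.

```python
# Map a 5-ppb molar limit onto equivalent mass concentrations,
# assuming ideal-gas behavior at 25 C (molar volume ~24.45 L/mol).
MOLAR_VOLUME_L_PER_MOL = 24.45

# Approximate molecular weights in g/mol (assumed for illustration).
MW = {"TDI": 174.2, "HDI": 168.2, "MDI": 250.3}

def mass_limit_ug_per_cu_m(ppb, mw):
    return ppb * mw / MOLAR_VOLUME_L_PER_MOL

for name, mw in MW.items():
    print(name, round(mass_limit_ug_per_cu_m(5, mw)), "ug/cu m TWA")
# -> TDI ~36 (quoted as 35 in the text), HDI ~34, MDI ~51
```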
It is recommended that exposure to the diisocyanates be controlled so that no employee is exposed at concentrations in excess of the following environmental limits, equivalent, in the case of volatile diisocyanates that occur as vapors, to a TWA limit of 5 ppb for a 10-hour workshift, 40-hour workweek, and a ceiling limit of 20 ppb for a 10-minute sampling period:

[Table of compound-specific TWA and ceiling limits not reproduced in this copy.]

[...] The recommended method also permits separation of different diisocyanates in the same sample. It has not been tested with diisocyanates other than TDI, MDI, and HDI, but, with appropriate modifications and solvent systems, it should be capable of detecting these compounds in the same range. The recommended method for sampling airborne diisocyanates consists of drawing air at a rate of 2 liters/minute for 10 minutes through two serially connected all-glass midget impingers, each containing 15 ml of absorbing solution. The use of two impingers in series is necessary until a reproducible collection efficiency is established for any given operation, since absorber collection efficiency is highly variable [44,111,116,125,137]. The diisocyanates react with the absorbing solution to form specific urea derivatives, which are then injected into the high-speed liquid chromatograph for analysis. The method is described in detail in Appendix I.

To prevent the development of serious respiratory symptoms, a medical surveillance program should provide for the timely detection of adverse health effects that develop from exposure to diisocyanates. Employees with a history of respiratory illness, especially asthma, and those exposed to other respiratory irritants, eg, smokers, may be at increased risk of developing adverse health effects from exposure to diisocyanates, and they should be counseled on this risk. All employees should be monitored so that work-related symptoms or loss of pulmonary function can be detected early. Employees who develop symptoms of TDI sensitization should be counseled on the risks of continuing to work with TDI.

Each employee should receive a thorough preplacement medical examination, which includes a history of exposure to diisocyanates, a smoking history, and a history of respiratory illnesses. Each employee should receive pulmonary function tests, including FEV 1 and FVC, and a chest X-ray before beginning work in any plant manufacturing or using diisocyanates. For employees with occupational exposure to diisocyanates, physical examinations should be repeated at least annually. Because of seasonal and diurnal variations in pulmonary function, the periodic examination of an individual employee should be performed at about the same time each year and at the same time of day, so that changes in respiratory function can be evaluated. Chest X-rays are not required at periodic examinations and should be repeated at the discretion of the examining physician. Records of medical examinations should be kept for at least 30 years after the employee's last exposure to diisocyanates.

The previous NIOSH criteria document on TDI [37] recommended a leukocyte count with differential as part of the preplacement medical examination and suggested periodic eosinophil counts.
However, there is no evidence at this time that blood findings are a significant indicator of adverse effects of TDI exposure. This change in the recommendations for the medical examination is consistent with the failure to find eosinophilia in TDI-sensitized individuals in recent studies [43,54,58].

[...] Where liquid diisocyanates may be present on floors, protective shoe coverings should be worn. If the potential exists for splashes or contact with aerosols of diisocyanates, employees should be provided with face shields (20-cm minimum) with goggles, rubber or polyvinyl chloride gloves and aprons, rubber boots, and appropriate respiratory equipment as described in Table 1-1. Because diisocyanates have poor warning properties [36,39], the use of chemical cartridge respirators or gas masks is not recommended. At present, air-purifying respirators with an end-of-service-life indicator are not available for the diisocyanates. Demand-type (negative pressure) supplied-air respirators are not recommended because of the possibility of facepiece leakage. Workers within 10 feet of a unit spraying material containing diisocyanates should wear full-body protective clothing and appropriate respiratory protective devices [60,147].

All protective clothing that becomes contaminated with diisocyanates should be replaced, or thoroughly decontaminated in a solution of 8% ammonia and 2% liquid detergent in water and cleaned, before reuse. Lockers should be provided for workers so that work and street clothes can be stored separately. The employer should arrange for laundering of all work clothes and ensure that laundry workers are aware of the hazards of exposure and the appropriate safety precautions.

# (e) Informing Employees of Hazards

At the beginning of employment, all employees should be informed of the hazards from exposure to diisocyanates. Brochures and pamphlets may be effective aids in informing employees of hazards. In addition, signs warning of the danger of exposure to diisocyanates should be posted in any area where occupational exposure to the diisocyanates is likely. Access to areas of potential high exposure should be restricted to employees equipped with appropriate protective gear. A continuing education program, which includes training in the use of protective equipment such as respirators and information about the value of the periodic medical examinations, should be available to the employees. Employees exposed to diisocyanates should be informed that symptoms of exposure, such as nocturnal dyspnea, may occur several hours after the end of the workshift. Because of the possibility of sensitization to the diisocyanates, employees should be warned that the improper home use of polyurethane products, such as foam kits and varnishes, that contain diisocyanates may increase their risk of developing work-related health problems. Employees should be instructed in their own responsibility for following work practices and sanitation procedures to help protect the health and provide for the safety of themselves and their fellow employees.

[...] These systems should be designed to prevent the accumulation or recirculation of diisocyanates in the workplace environment and to effectively remove diisocyanate vapors or aerosols from the breathing zone of the employees.
The ventilation systems should be checked periodically, including airflow measurements, to ensure that the systems are working properly. Exhaust ventilation systems discharging to outside air must conform to applicable local, state, and Federal air pollution regulations and must not constitute a hazard to employees or to the general public. In addition, it is recommended that employees who work in areas where diisocyanates are used wash their hands thoroughly before eating or using toilet facilities. Smoking should not be permitted in areas where diisocyanates are stored or used, because of the possibility that smoking materials may become contaminated with diisocyanates.

# (g) Monitoring and Recordkeeping Requirements

In addition to a program of regular personal monitoring of air concentrations, continuous area monitoring is strongly recommended where aromatic diisocyanates such as TDI and MDI are present. This will permit engineering controls to be modified or improved to keep the concentrations of diisocyanates at or below the recommended limits. Monitoring should also be performed whenever there is a change in the process or materials used that could increase the exposure of employees. Employers should monitor the exposure of employees to diisocyanates by taking a sufficient number of breathing-zone samples to adequately characterize exposures for every operation. In determining the sampling strategy for a particular worker, the process and the job description of the worker should be considered, and those process cycles in which potential exposure is greatest should be given prime consideration.

Records should be kept for all sampling operations and should include the type of personal protective devices in use, if any, and the sampling and analytical methods used. Employees or their designated representatives should have access to records of their own environmental exposures. These records should be kept at least 30 years after the employee's last exposure to diisocyanates.

# VII. RESEARCH NEEDS

Information needed to develop a standard for occupational exposure to the diisocyanates is incomplete in many respects. It has been necessary to recommend a standard for the diisocyanates based on similarities to TDI and MDI, since adequate information is not available on other diisocyanates to demonstrate that they differ appreciably in their toxicity.

[...] These relationships could be studied in guinea pigs exposed to TDI, using the p-tolyl isocyanate antigen [110] to test for the induction of tolyl-specific antibodies. Sera from these animals should not be pooled, so that the standard deviation can be determined. Changes in respiratory response in exposed animals should also be evaluated to determine their correlation with the appearance of IgE or IgG antibodies. As a corollary experiment, animals should be exposed at the same total dose administered over varying time periods to simulate the effects of excursions while retaining the same 8-hour TWA exposure; eg, groups of animals might be exposed at 5 ppb for 8 hours, 160 ppb for 15 minutes, and 2,400 ppb for 1 minute, as the check below illustrates.
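Each of the suggested regimens delivers the same concentration-time product, which is what makes them comparable excursions around a common average; the check below simply verifies that equivalence and is not an experimental protocol.

```python
# Verify that the suggested excursion regimens deliver equal
# concentration-time products (ppb-minutes).
regimens = {
    "5 ppb for 8 hours":      (5, 8 * 60),
    "160 ppb for 15 minutes": (160, 15),
    "2,400 ppb for 1 minute": (2400, 1),
}
for label, (ppb, minutes) in regimens.items():
    print(label, "->", ppb * minutes, "ppb-minutes")  # 2,400 each
```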
To improve the protection of exposed workers, it is particularly desirable to determine whether there are intrinsic, predictable differences between sensitizable and nonsensitizable individuals. The studies of Butcher et al [63,65], showing that persons sensitized to TDI tend to be hyperreactive to bronchoconstrictors such as mecholyl, appear to offer promise in this regard. It is necessary to determine whether this hyperreactivity is a result of exposure or a preexisting factor that may indicate a predisposition to become sensitized to diisocyanates. Methods of identifying sensitized individuals before overt chronic symptoms develop should also be explored. The value of measurements of eosinophilia and cyclic AMP, and of pulmonary function studies and immunologic testing, as diagnostic tools should be carefully evaluated, since existing reports of their usefulness are contradictory. The p-tolyl isocyanate test antigen developed by Karol and her colleagues [62] appears to be a particularly useful diagnostic tool for TDI sensitization, and analogous antigens should be developed for investigating sensitization to other diisocyanates.

Because the diisocyanates may be highly reactive biologically, it is important that their potential to cause carcinogenic and mutagenic effects be investigated. Mutagenicity screening in microbial tests should be carried out, using a test protocol that will decrease the likelihood of hydrolysis to possibly mutagenic amine intermediates. Diisocyanates, especially those that are positive in mutagenicity tests, should also be tested for carcinogenicity in animal experiments. Studies of the absorption, distribution, metabolism, and excretion of diisocyanates are also needed to elucidate the mechanism of their action.

The consequences of exposure to the aerosols produced in many diisocyanate applications, such as spraying, should also be investigated. It has been assumed that reactive diisocyanates in aerosol form produce the same biologic consequences as diisocyanate vapors at equivalent concentrations. This assumption should be experimentally verified. Similarly, most applications of diisocyanates involve simultaneous exposure to other toxic chemicals, and inadequate information is available on the role of these chemicals in producing observed health effects in diisocyanate workers and on their possible additive or synergistic nature.

Reliable, sensitive continuous monitoring methods should be developed for all the diisocyanates. The paper-tape method developed by Reilly [136] is a valuable method for continuous monitoring of the aromatic diisocyanates, particularly TDI. Comparable methods are needed for other diisocyanates to protect workers from dangerous excursions and to provide better information relating health effects to actual exposures.

[...] The following items of information that are applicable to a specific product or material shall be provided in the appropriate block of the Material Safety Data Sheet (MSDS). The product designation is inserted in the block in the upper left corner of the first page to facilitate filing and retrieval. Print in upper case letters as large as possible. It should be printed to read upright with the sheet turned sideways. The product designation is that name or code designation which appears on the label, or by which the product is sold or known by employees.
The relative numerical hazard ratings and key statements are those determined by the rules in Chapter V, Part B, of the NIOSH publication, An Identification System for Occupationally Hazardous Materials. The company identification may be printed in the upper right corner if desired.

# (b) Section II. Hazardous Ingredients

Chemical substances should be listed according to their complete name derived from a recognized system of nomenclature. Where possible, avoid using common names and general class names such as "aromatic amine," "safety solvent," or "aliphatic hydrocarbon" when the specific name is known. The "%" may be the approximate percentage by weight or volume (indicate basis) that each hazardous ingredient of the mixture bears to the whole mixture. This may be indicated as a range or maximum amount, ie, "10-40% vol" or "10% max wt," to avoid disclosure of trade secrets. Toxic hazard data shall be stated in terms of concentration, mode of exposure or test, and animal used, eg, "100 ppm LC50-rat," "25 mg/kg LD50-skin-rabbit," "75 ppm LC-man," or "permissible exposure from 29 CFR 1910.1000," or, if not available, from other sources or publications such as the American Conference of Governmental Industrial Hygienists or the American National Standards Institute Inc. Flashpoint, shock sensitivity, or similar descriptive data may be used to indicate flammability, reactivity, or similar hazardous properties of the material.

# (c) Section III. Physical Data

The data in Section III should be for the total mixture and should include the boiling point and melting point in degrees Fahrenheit (Celsius in parentheses); vapor pressure, in conventional millimeters of mercury (mmHg); vapor density of gas or vapor (air = 1); solubility in water, in parts/hundred parts of water by weight; specific gravity (water = 1); percent volatiles (indicate if by weight or volume) at 70 F (21.1 C); evaporation rate for liquids or sublimable solids, relative to butyl acetate; and appearance and odor. These data are useful for the control of toxic substances. Boiling point, vapor density, percent volatiles, vapor pressure, and evaporation rate are useful for designing proper ventilation equipment. This information is also useful for design and deployment of adequate fire and spill containment equipment. The appearance and odor may facilitate identification of substances stored in improperly marked containers, or when spilled.

# (d) Section IV. Fire and Explosion Data

Section IV should contain complete fire and explosion data for the product, including flashpoint and autoignition temperature in degrees Fahrenheit (Celsius in parentheses); flammable limits, in percent by volume in air; suitable extinguishing media or materials; special firefighting procedures; and unusual fire and explosion hazard information. If the product presents no fire hazard, insert "NO FIRE HAZARD" on the line labeled "Extinguishing Media."

# (e) Section V. Health Hazard Information

The "Health Hazard Data" should be a combined estimate of the hazard of the total product. This can be expressed as a TWA concentration, as a permissible exposure, or by some other indication of an acceptable standard. Other data are acceptable, such as lowest LD50 if multiple components are involved.
Under "R o u tes of Exposure," co m m en ts in eac h category should reflect the p o te n tia l h azard from ab so rp tio n by th e ro u te in q u estio n . C o m m en ts should indicate the severity of the effect and the basis for the statem ent if possible. The 124 basis might be animal studies, analogy with similar products, or human experiences. C om m ents such as "yes" or "possible" are not helpful. Typical com ments might be: Skin C ontact-single short contact, no adverse effects likely; prolonged or repeated contact, possibly mild irritation. Eye C ontact-some pain and mild transient irritation; no corneal scarring. "E m ergency and F irs t Aid Procedures" should be w ritten in lay language and should primarily represent first-aid treatm ent that could be provided by paramedical personnel or individuals trained in first aid. In fo rm atio n in th e "N o tes to Physician" se c tio n should include any special m e d ic a l in fo rm a tio n w hich would be of a ssista n c e to an a tte n d in g physician, including required or recommended preplacement and periodic medical examinations, diagnostic procedures, and medical management of overexposed employees. It is p a rtic u la rly im p o rta n t to h ig h lig h t in s ta b ility or in c o m p atib ility to com m on su b stan ces or c irc u m s ta n c e s, such as w ater, direct sunlight, s te e l or co p p er piping, acid s, alk alies, etc. "Hazardous Decomposition P roducts" shall include those products released under fire conditions. It must also include dangerous p ro d u cts p roduced by aging, such as peroxides in the case of some ethers. Where applicable, shelf life should also be indicated. # (g) Section VII. Spill or Leak Procedures D etailed procedures for cleanup and disposal should be listed with emphasis on p reca u tio n s to be tak en to protect employees assigned to cleanup detail. Specific n e u tra liz in g c h em ica ls or p ro ced u res should be d escrib ed in d e ta il. D isposal m ethods should be ex p licit including proper labeling of containers holding residues and ultim ate disposal methods such as "sanitary landfill" or "incineration." Warnings such as "Comply with local, state, and Federal antipollution ordinances" are proper but not sufficient. Specific procedures shall be identified. # (h) Section VIII. Special Protection Information S ection VIII requires specific information. Statem ents such as "Yes," "No," or "If necessary " a re not informative. Ventilation requirements should be specific as to ty p e and p re fe rre d m eth o d s. R e sp ira to rs shall be sp e c ifie d as to type and NIOSH or US Mine S afety and H ealth Administration approval class, ie, "Supplied air," "O rganic vapor canister," etc. Protective equipment must be specified as to type and materials of construction. # (i) Section IX. Special Precautions " P re c a u tio n a ry S ta te m e n ts " shall consist of the label statem ents selected for use on the container or placard. Additional information on any aspect of safety or h e a lth not co v ered in o th e r sections should be inserted in Section IX. The lower block can contain references to published guides or in-house procedures for handling and sto rag e. D e p a rtm e n t of Transportation markings and classifications and other freight, handling, or storage requirements and environmental controls can be noted. # IX. 
# IX. APPENDIX I. SAMPLING AND ANALYTICAL METHOD FOR DIISOCYANATES IN AIR

The following method for sampling and analysis of airborne diisocyanates is adapted from NIOSH Method No. MR 240 (Classification E) [123]. A Class E method is defined by NIOSH as "Proposed: A new, unproven, or suggested method not previously used by industrial hygiene analysts but which gives promise of being suitable for the determination of a given substance."

# Principle of the Method

A known volume of air is drawn at a flowrate of 2 liters/minute through two midget gas impingers, connected in series, containing the nitro reagent absorber solution. The recommended airflow is 2 liters/minute, rather than 1 liter/minute as indicated in Method No. MR 240 [20], to ensure collection of particulate diisocyanates. At the time of analysis, the two impinger solutions are initially kept separate to allow determination of collection efficiency. The impinger solutions (separate or combined) are carefully rotary evaporated to dryness. The residue is then dissolved in 1.0 ml of methylene chloride, CH2Cl2, containing an internal standard. An aliquot of the solution is injected into a liquid chromatograph. The area of the resulting peak is determined and compared to areas obtained by injecting standard urea solutions of known concentration.

# Range and Sensitivity

The upper limit of the range of the method depends on the concentration of the nitro reagent in the midget gas impingers. For a 10-liter air sample, the limit of diisocyanates that can be absorbed is 0.0015 millimole/cu m using 15 ml of 0.2 mM nitro reagent solution. The minimum detectable limit is 2 ng/injection for TDI and MDI and 5 ng/injection for HDI. The advisable injection volume is 50 µl. Hence, for a 20-liter sample, the useful range is 2-300 µg/cu m of total diisocyanates. If a particular atmosphere is suspected of containing a large amount of isocyanate, a smaller sampling volume should be taken.
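The lower end of that working range follows directly from the stated per-injection detection limit, the injection and residue volumes, and the air volume sampled. A sketch of the arithmetic, for illustration only (the helper function is not part of the method):

```python
# Illustrative check of the method's working range: converts an HPLC
# per-injection detection limit into a minimum detectable air concentration.
# Input values are taken from the text; the function itself is hypothetical.

def min_air_conc_ug_per_cu_m(ng_per_injection, injection_ul,
                             residue_volume_ml, air_sample_liters):
    """Minimum detectable airborne concentration in ug/cu m."""
    # Mass in the whole residue, given what one injection can detect:
    mass_ng = ng_per_injection * (residue_volume_ml * 1000.0 / injection_ul)
    return (mass_ng / 1000.0) / (air_sample_liters / 1000.0)  # ug per cu m

# 2 ng/injection (TDI, MDI), 50-ul injection, 1.0-ml residue, 20-liter sample:
print(min_air_conc_ug_per_cu_m(2, 50, 1.0, 20))  # -> 2.0 ug/cu m, the lower
                                                 #    end of the 2-300 range
```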
# Interferences

Any compound that reacts with nitro reagent and has the same retention time as the analyte is an interference. Retention time alone cannot be considered proof of chemical identity. When the possibility of interference exists, chromatographic conditions (eg, modes of gradient, concentration of mobile phases, packings) have to be changed to circumvent the problem.

# Precision and Accuracy

Precision and accuracy for the total analytical and sampling method have not been determined. However, the analytical method has been shown to have relative standard deviations within experimental error for peak areas and retention times of 2.8-16.5% and 0.6-4.1%, respectively, depending on the concentration of the analytes.

# Advantages and Disadvantages of the Method

The sampling device is portable. The analytical method is specific for isocyanates, and interferences are minimal. Simultaneous analysis of five substances can be carried out routinely. The nitro reagent must be prepared at rather frequent intervals. It is recommended that it not be stored for more than 3 weeks, and it should be kept in darkness. Commercially available nitro reagent may prove more stable. The ureas formed in solution are generally stable up to 7 days. Degradation or polymerization may occur after this period. Excess nitro reagent should be removed with p-tolyl isocyanate or by some other means before the solution is injected into the liquid chromatograph to maintain longer column life and precision.

# Apparatus

(a) An approved and calibrated personal sampling pump whose flowrate can be determined within ±5% at the recommended flowrate.

Preparation of Nitro Reagent Solution: A typical procedure for the routine preparation of the nitro reagent solution is as follows. About 120 mg (0.52 millimole) of the commercially available hydrochloride of nitro reagent is dissolved in 25 ml of distilled water. Thirteen milliliters of 1 N NaOH is added to precipitate the free amine. The free amine is extracted with 50 ml of toluene. The toluene layer is dried over anhydrous CaSO4 (Drierite, WA Hammond Drierite Co, Xenia, Ohio), and the resulting solution is diluted to 250 ml to prepare a 2 mM solution and is stored in the refrigerator. The nitro reagent solution is further diluted 10-fold with toluene before it is added to the impinger collecting tubes. Prepared nitro reagent should be examined periodically by HPLC for the appearance of additional peaks that indicate reagent degradation.

Liquid chromatograph operating conditions:
(1) Flowrate: 2.0 ml/minute.
(2) Gradient elution: 10% B/A to 100% B in 10 minutes (B = 9.1% isopropanol/CH2Cl2; A = 100% CH2Cl2).
(3) Detector: uv at 254 nm.
(4) Temperature: ordinary room.
(5) Recorder chart speed: 0.5 inch/minute.

The peak area is measured by peak height times peak width at half the height or by an electronic integrator such as a computing integrator. Preliminary results are read from a standard curve prepared as discussed below.

# Calibration and Standards

(a) A series of standards varying in concentration over the range of interest are prepared. Calibration curves are established prior to sample analysis each day. When an internal standard is used, the analyte concentration is plotted against the area ratio of the analyte to that of the internal standard.

(1) The following weights of the isocyanates are dissolved in 4.0-ml portions of CH2Cl2: 2.12 mg of MDI; 29.60 mg of TDI (ie, 19.30 mg of 2,4-TDI and 10.30 mg of 2,6-TDI); 21.14 mg of HDI.

(2) Then 755 µl of the MDI solution, 83.1 µl of the TDI solution, and 75.5 µl of the HDI solution are mixed, and 1.017 ml of CH2Cl2 is added to make a total of 2.00 ml (200 ng/liter of each). Then 1.0 ml of nitro reagent (2.06 mg/ml in hexane) is added to 1.0 ml of the isocyanate mixture. The total isocyanate/nitro reagent mole ratio in this solution is 1:1. The reaction mixture is stored overnight. Dilutions are made from this solution. The solvent is evaporated in a rotary evaporator and the residue redissolved in 1 ml of CH2Cl2. These solutions are used to establish the calibration curves, linear dynamic range, and minimum detectable amount on the 25-cm Partisil 10 column.

# Calculations
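As an illustration of the computation the calibration procedure implies, the sketch below fits the internal-standard calibration line (area ratio against known concentration) and reads an unknown back from it. The data values are invented for illustration, not measured results.

```python
# Illustrative internal-standard calibration: regress the analyte/internal-
# standard peak-area ratio on known standard concentrations, then invert
# the fitted line for unknowns. Data values are invented.

import numpy as np

conc_ug_per_ml = np.array([5.0, 10.0, 25.0, 50.0, 100.0])  # standard solutions
area_ratio = np.array([0.11, 0.21, 0.52, 1.05, 2.08])      # analyte / internal std

slope, intercept = np.polyfit(conc_ug_per_ml, area_ratio, 1)  # linear calibration

def conc_from_ratio(ratio):
    """Invert the calibration line to estimate an unknown's concentration."""
    return (ratio - intercept) / slope

print(conc_from_ratio(0.80))  # estimated concentration for an area ratio of 0.80
```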
For adults and adolescents (i.e., persons aged >13 years), the human immunodeficiency virus (HIV) infection classification system and the surveillance case definitions for HIV infection and acquired immunodeficiency syndrome (AIDS) have been revised and combined into a single case definition for HIV infection (1-3). In addition, the HIV infection case definition for children aged <13 years and the AIDS case definition for children aged 18 months to <13 years have been revised (1,3,4). No changes have been made to the HIV infection classification system (4), the 24 AIDS-defining conditions (1,4) for children aged <13 years, or the AIDS case definition for children aged <18 months. These case definitions are intended for public health surveillance only and not as a guide for clinical diagnosis. Public health surveillance data are used primarily for monitoring the HIV epidemic and for planning on a population level, not for making clinical decisions for individual patients. CDC and the Council of State and Territorial Epidemiologists recommend that all states and territories conduct case surveillance of HIV infection and AIDS using the 2008 surveillance case definitions, effective immediately.

# Methods

CDC collaborated with the Council of State and Territorial Epidemiologists (CSTE) to develop the revisions in this report. CDC obtained additional input through consultations regarding the pediatric case definitions (April 2005) and the adult and adolescent case definition (August 2005 and June 2006) and through peer review by health-care professionals, in compliance with the Office of Management and Budget requirements for the dissemination of influential scientific information.

For adults and adolescents (aged >13 years), the case definitions for HIV infection and AIDS have been revised into a single case definition for HIV infection that includes AIDS and incorporates the HIV infection classification system. Laboratory-confirmed evidence of HIV infection is now required to meet the surveillance case definition for HIV infection, including stage 3 HIV infection (AIDS). Diagnostic confirmation of an AIDS-defining condition alone (Appendix A), without laboratory-confirmed evidence of HIV infection, is no longer sufficient to classify an adult or adolescent as HIV infected for surveillance purposes.

# Introduction

Since the beginning of the human immunodeficiency virus (HIV) epidemic, case definitions for HIV infection and acquired immunodeficiency syndrome (AIDS) have undergone several revisions to respond to diagnostic and therapeutic advances and to improve standardization and comparability of surveillance data regarding persons at all stages of HIV disease. HIV testing is now widely available, and diagnostic testing has continued to improve; these changes are reflected in the 2008 revised case definition for HIV infection, which now requires laboratory-confirmed evidence of HIV infection to meet the case definition among adults, adolescents, and children aged 18 months to <13 years. Historically, the case definition for AIDS included adults and adolescents without laboratory-confirmed evidence of HIV infection if other clinical criteria were met.
In 1993, the existing case definition for AIDS (1) was expanded to include 1) all HIV-infected persons with a CD4+ T-lymphocyte count of <200 cells/µL or a CD4+ T-lymphocyte percentage of total lymphocytes of <14 and 2) three additional clinical conditions (pulmonary tuberculosis, recurrent pneumonia, and invasive cervical cancer), in addition to retaining the 23 clinical conditions in the previous AIDS case definition (2). Despite these changes, the case definition for AIDS continued to include a subset of adults and adolescents without laboratory-confirmed evidence of HIV infection whose illness still met the surveillance case definition for AIDS. Illness in a person who did not have any other known cause of immunodeficiency met the surveillance case definition for AIDS if the illness met any of the following three criteria: 1) no laboratory testing performed or inconclusive laboratory evidence of HIV infection but a definitive diagnosis of a condition included in a subset of AIDS-defining conditions, 2) negative laboratory results for HIV infection but a definitive diagnosis of Pneumocystis jirovecii pneumonia, or 3) negative laboratory results for HIV infection but a definitive diagnosis of a condition included in a subset of AIDS-defining conditions and a CD4+ T-lymphocyte count of <400 cells/µL. Because of improvements in diagnostic capabilities and treatment, including increased use of new HIV-testing technologies, CDC collaborated with CSTE to recommend in 2005 an interim change in the AIDS case definition, which required laboratory confirmation of HIV infection. This recommended change required laboratory-confirmed evidence of HIV infection in addition to a CD4+ T-lymphocyte count of <200 cells/µL, a CD4+ T-lymphocyte percentage of total lymphocytes of <14, or diagnosis of an AIDS-defining condition (5). This CDC/CSTE interim recommendation has been incorporated into the 2008 HIV infection case definition, which includes AIDS (stage 3). In 1993, the revised classification system for HIV infection and the expanded AIDS surveillance case definition for adults and adolescents were based on three clinical categories (i.e., A, B, and C) and three ranges of CD4+ T-lymphocyte counts (i.e., >500 cells/µL, 200-499 cells/µL, and <200 cells/µL) or the concordant CD4+ T-lymphocyte percentages (2). Clinical category A comprised asymptomatic acute or primary HIV infection or persistent generalized lymphadenopathy. Clinical category B comprised symptomatic conditions in an HIVinfected adult or adolescent that were not included in clinical categories A or C but were attributed to a cell-mediated immunity defect or for which the clinical course or management was complicated by HIV infection. Clinical category C comprised the 26 AIDS-defining conditions. In the context of treatment and diagnostic improvements since 1993, clinical categories A and B pose particular difficulties because they include many conditions that are not discrete diseases, are not necessarily indicators of immunodeficiency, poorly match current treatment guidelines, and are not integrated into routine surveillance practices. The classification system of the 2008 case definition for HIV infection, which includes AIDS, has been simplified, with less emphasis on clinical conditions by elimination of clinical categories A and B while retaining the 26 AIDS-defining conditions in clinical category C (1,2). The role of CD4+ T-lymphocyte counts and percentages also has been clarified. 
The 2008 case definition highlights the central role of the CD4+ T-lymphocyte counts and percentages, which are objective measures of immunosuppression that are routinely used in the care of HIV-infected persons and are available to surveillance programs. The three CD4+ T-lymphocyte count categories have been renamed for HIV infection, increasing in severity from stage 1 through stage 3 (AIDS); an unknown stage also is included. For surveillance purposes, HIV disease progression is classified from less to more severe; once cases are classified into a surveillance severity stage, they cannot be reclassified into a less severe stage.

# Children Aged <18 Months

The 1999 surveillance guidelines recommended four categories of HIV infection for children aged <18 months: definitively HIV infected, presumptively HIV infected, definitively uninfected with HIV, and presumptively uninfected with HIV (3). Because of improved accuracy and the widespread availability of viral detection and antibody tests to diagnose HIV infection, changes have been made in the surveillance case definition of presumptively uninfected with HIV for children aged <18 months at the time of diagnosis (1,3,4). Thus, compared with infants categorized using the previous surveillance case definition, fewer HIV-exposed infants who have a very low probability of infection will be categorized as having indeterminate infections (3). No major revisions have been made to the remaining three categories for children aged <18 months, and no changes have been made to the AIDS surveillance case definition for children in this age group (1,3,4). Because of the greater uncertainty associated with diagnostic testing for HIV in this population (i.e., because maternal antibodies from the HIV-infected mother might exist in the infant after birth, possibly affecting HIV diagnostic testing of the infant that occurs soon after birth), children in this age group whose illness meets clinical criteria for the AIDS case definition but does not meet laboratory criteria for definitive or presumptive HIV infection are still categorized as HIV infected when the mother has laboratory-confirmed HIV infection.

# Children Aged 18 Months to <13 Years

For children aged 18 months to <13 years, laboratory-confirmed evidence of HIV infection is now required to meet the surveillance case definition for HIV infection and AIDS. Diagnostic confirmation of an AIDS-defining condition alone, without laboratory-confirmed evidence of HIV infection, is no longer sufficient to classify a child as HIV infected for surveillance purposes (1,3,4). No changes have been made to the 24 AIDS-defining conditions (1,4) or the HIV infection classification system for children aged <13 years (4).

# Surveillance Case Definition for HIV Infection Among Adults and Adolescents

The 2008 HIV infection case definition for adults and adolescents (aged >13 years) replaces the HIV infection and AIDS case definitions and the HIV infection classification system (1-3,5). The case definition is intended for public health surveillance only and not as a guide for clinical diagnosis. The definition applies to all HIV variants (e.g., HIV-1 or HIV-2) and excludes confirmation of HIV infection through diagnosis of AIDS-defining conditions alone. For surveillance purposes, a reportable case of HIV infection among adults and adolescents aged >13 years is categorized by increasing severity as stage 1, stage 2, or stage 3 (AIDS) or as stage unknown (Table).
# Criteria for HIV Infection

Laboratory

# Case Classification

A confirmed case meets the laboratory criteria for diagnosis of HIV infection and one of the four HIV infection stages (stage 1, stage 2, stage 3, or stage unknown) (Table). Although cases with no information on CD4+ T-lymphocyte count or percentage and no information on AIDS-defining conditions can be classified as stage unknown, every effort should be made to report CD4+ T-lymphocyte counts or percentages and the presence of AIDS-defining conditions at the time of diagnosis. Additional CD4+ T-lymphocyte counts or percentages and any identified AIDS-defining conditions can be reported as recommended (6).

# HIV Infection, Stage 1

• No AIDS-defining condition and either CD4+ T-lymphocyte count of >500 cells/µL or CD4+ T-lymphocyte percentage of total lymphocytes of >29.

# HIV Infection, Stage 2

• No AIDS-defining condition and either CD4+ T-lymphocyte count of 200-499 cells/µL or CD4+ T-lymphocyte percentage of total lymphocytes of 14-28.

* Rapid tests are EIAs that do not have to be repeated but require a confirmatory test if reactive. Most conventional EIAs require a repeatedly reactive EIA that is confirmed by a positive result with a supplemental test for HIV antibody. Standard laboratory testing procedures should always be followed.
† For HIV screening, HIV virologic (non-antibody) tests should not be used in lieu of approved HIV antibody screening tests. A negative result (i.e., undetectable or nonreactive) from an HIV virologic test (e.g., viral RNA nucleic acid test) does not rule out the diagnosis of HIV infection.
§ Qualified medical-care providers might differ by jurisdiction and might include physicians, nurse practitioners, physician assistants, or nurse midwives.
¶ An original or copy of the laboratory report is preferred; however, in the rare instance the laboratory report is not available, a description of the laboratory report results by a physician or qualified medical-care provider documented in the medical record is acceptable for surveillance purposes. Every effort should be made to obtain a copy of the laboratory report for documentation in the medical record.

# HIV Infection, Stage Unknown

• No information available on CD4+ T-lymphocyte count or percentage and no information available on AIDS-defining conditions. (Every effort should be made to report CD4+ T-lymphocyte counts or percentages and the presence of AIDS-defining conditions at the time of diagnosis.)

# Discussion

To meet the surveillance case definition for HIV infection among adults and adolescents, laboratory-confirmed evidence of HIV infection is required. The lowest CD4+ T-lymphocyte count (or concordant CD4+ T-lymphocyte percentage of total lymphocytes) or the presence of AIDS-defining conditions is used to determine the stage of infection. If the CD4+ T-lymphocyte count and the CD4+ T-lymphocyte percentage are both available but do not correspond to the same severity stage, select the more severe stage. For surveillance purposes, disease progression is from less to more severe; once cases are classified in a more severe surveillance stage, they cannot be reclassified into a less severe surveillance stage.
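A minimal sketch of the staging logic just described, for illustration only (it is not a reporting tool): it assumes the printed ">500 cells/µL" and ">29" boundaries mean at least those values, consistent with the 200-499 and 14-28 bands for stage 2, and it treats a documented AIDS-defining condition as stage 3 regardless of CD4+ values.

```python
# Illustrative sketch of the adult/adolescent surveillance staging rules
# described above. Not a clinical or case-reporting tool.

def hiv_stage(cd4_count=None, cd4_percent=None, aids_defining_condition=False):
    """Return surveillance stage 1, 2, 3 (AIDS), or 'unknown'."""
    if aids_defining_condition:
        return 3
    stages = []
    if cd4_count is not None:
        stages.append(1 if cd4_count >= 500 else 2 if cd4_count >= 200 else 3)
    if cd4_percent is not None:
        stages.append(1 if cd4_percent >= 29 else 2 if cd4_percent >= 14 else 3)
    # If count and percentage disagree, the more severe stage is selected;
    # with no information at all, the case is reported as stage unknown.
    return max(stages) if stages else "unknown"

print(hiv_stage(cd4_count=450, cd4_percent=30))  # -> 2 (count is more severe)
print(hiv_stage(aids_defining_condition=True))   # -> 3 (AIDS)
print(hiv_stage())                               # -> 'unknown'
```

Because surveillance staging is one-way, a reporting system would retain the most severe stage ever assigned to a case rather than restaging downward when later CD4+ values improve.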
A diagnosis of acute HIV infection indicates documented evidence of detectable HIV RNA or DNA or of p24 antigen in plasma or serum in the presence of a documented negative or indeterminate result from an HIV antibody test. These laboratory tests should be conducted on the same specimen or on specimens obtained on the same day. Acute HIV infection occurs approximately during the time from viral acquisition until seroconversion (i.e., the development of measurable levels of HIV-specific antibodies). During this period, early immune responses to the virus produce distinctive characteristics; 40% to 80% of patients develop clinical symptoms of a nonspecific viral illness (e.g., fever, fatigue, or rash) typically lasting 1-2 weeks (7-12). Acute HIV infection often is not detected because the date of HIV acquisition is unknown, no specific clinical signs are present, no single laboratory marker is present, and the diagnostic window is small. High viral loads typically are associated with acute HIV infection, potentially increasing the risk for transmission. CD4+ T-lymphocyte counts have decreased in certain patients with acute HIV infection, especially during the months immediately following viral acquisition (7,11,12). However, the viral load and CD4+ T-lymphocyte count usually stabilize once equilibrium is reached between HIV and the immune response (i.e., the viral set point). The changing CD4+ T-lymphocyte counts associated with acute HIV infection might have implications when using these counts to stage HIV infection for surveillance purposes; for example, persons might experience a particularly low, but temporary, CD4+ T-lymphocyte count and be categorized as having a more severe stage of HIV infection than they actually have after reaching the viral set point.

# Surveillance Case Definition for HIV Infection Among Children Aged <18 Months

The 2008 case definition of HIV infection among children aged <18 months replaces the definition published in 1999 (3) and applies to all variants of HIV (e.g., HIV-1 or HIV-2). The 2008 definition is intended for public health surveillance only and not as a guide for clinical diagnosis. The 2008 definition takes into account newly available testing technologies. Laboratory criteria for children aged <18 months at the time of diagnosis include revisions to one category: presumptively uninfected with HIV. No substantial changes have been made to the remaining three categories (definitively HIV infected, presumptively HIV infected, and definitively uninfected with HIV), and no changes have been made to the conditions listed under the AIDS criteria in the 1987 pediatric surveillance case definition for AIDS for children aged <18 months (1,3,13). Because diagnostic laboratory testing for HIV infection among children aged <18 months might be unreliable, children in this age group with perinatal HIV exposure whose illness meets the AIDS case definition on the basis of clinical criteria are considered presumptively HIV infected when the mother has laboratory-confirmed HIV infection. The definitive or presumptive exclusion of HIV infection for surveillance purposes does not mean that clinical HIV infection can be ruled out. For the purposes of calculating the exact timing of tests (e.g., when a specimen was obtained for laboratory testing) based on the surveillance case definition, 1 month corresponds to 30 days.

# Criteria for Definitive or Presumptive HIV Infection

A child aged <18 months is categorized for surveillance purposes as definitively or presumptively HIV infected if born to an HIV-infected mother and if the laboratory criterion or at least one of the other criteria is met.
# Laboratory Criterion for Definitive HIV Infection

A child aged <18 months is categorized for surveillance purposes as definitively HIV infected if born to an HIV-infected mother and the following laboratory criterion is met.

• Positive results on two separate specimens (not including cord blood) from one or more of the following HIV virologic (non-antibody) tests:
  - HIV nucleic acid (DNA or RNA) detection**
  - HIV p24 antigen test, including neutralization assay, for a child aged >1 month
  - HIV isolation (viral culture)

# Laboratory Criterion for Presumptive HIV Infection

A child aged <18 months is categorized for surveillance purposes as presumptively HIV infected if 1) born to an HIV-infected mother, 2) the criterion for definitively HIV infected is not met, and 3) the following laboratory criterion is met.

• Positive results on only one specimen (not including cord blood) from one of the above HIV virologic tests and no subsequent negative results from HIV virologic or HIV antibody tests.
• When test results regarding HIV infection status are not available, documentation of a condition that meets the criteria in the 1987 pediatric surveillance case definition for AIDS (1) (Appendix A).

# Criteria for Uninfected with HIV, Definitive or Presumptive

A child aged <18 months born to an HIV-infected mother is categorized for surveillance purposes as either definitively or presumptively uninfected with HIV if 1) the criteria for definitive or presumptive HIV infection are not met and 2) at least one of the laboratory criteria or other criteria are met.

# Laboratory Criteria for Uninfected with HIV, Definitive

A child aged <18 months born to an HIV-infected mother is categorized for surveillance purposes as definitively uninfected with HIV if 1) the criteria for definitive or presumptive HIV infection are not met and 2) at least one of the laboratory criteria or other criteria are met.

# Laboratory Criteria for Uninfected with HIV, Presumptive

A child aged <18 months born to an HIV-infected mother is categorized for surveillance purposes as presumptively uninfected with HIV if 1) the criteria for definitively uninfected with HIV are not met and 2) at least one of the laboratory criteria are met.

• Two negative HIV RNA or DNA virologic tests, from separate specimens, one of which was obtained at age >4 weeks.§§

# Criteria for Indeterminate HIV Infection

A child aged <18 months born to an HIV-infected mother is categorized as having perinatal exposure with an indeterminate HIV infection status if the criteria for infected with HIV and uninfected with HIV are not met.

# Discussion

The exclusion of HIV infection (definitive or presumptive) for surveillance purposes does not mean that clinical HIV infection can be ruled out. These categories are used for surveillance classification purposes and should not be used to guide clinical practice. A child with perinatal HIV exposure should continue to be monitored clinically according to nationally accepted treatment and care guidelines (17)(18)(19) to 1) monitor for potential complications of exposure to antiretroviral medications during the perinatal period and 2) confirm the absence of HIV infection with repeat clinical and laboratory evaluations. No changes have been made to the existing classification system for HIV infection among children aged <18 months (4). To classify HIV-infected children in this age group, use the 1994 revised classification system for HIV infection among children aged <13 years (4).

†† Suspected cases of HIV infection among children aged <18 months who are born to a documented HIV-uninfected mother should be assessed on a case-by-case basis by the appropriate health care and public health specialists.
§§ If specimens for both negative RNA or DNA virologic tests are obtained at age >4 weeks, specimens should be obtained on separate days.
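The precedence among these pediatric categories can be summarized as a simple ordered check, sketched below for illustration only; the boolean flags stand in for the detailed laboratory criteria, several of which are abbreviated above.

```python
# Illustrative precedence of surveillance categories for a child aged
# <18 months born to an HIV-infected mother. The flags are stand-ins for
# the full laboratory criteria; this is not a case-classification tool.

def pediatric_category(definitive_infected=False, presumptive_infected=False,
                       definitive_uninfected=False, presumptive_uninfected=False):
    """Return the surveillance category, checked in order of certainty."""
    if definitive_infected:
        return "definitively HIV infected"
    if presumptive_infected:
        return "presumptively HIV infected"
    if definitive_uninfected:
        return "definitively uninfected with HIV"
    if presumptive_uninfected:
        return "presumptively uninfected with HIV"
    # Neither the infected nor the uninfected criteria are met:
    return "perinatal exposure, indeterminate HIV infection status"

print(pediatric_category(presumptive_infected=True))
print(pediatric_category())  # -> indeterminate status
```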
To classify HIV-infected children in this age group, use the 1994 revised classification system for HIV infection among children aged <13 years (4).

# Surveillance Case Definitions for HIV Infection and AIDS Among Children Aged 18 Months to <13 Years

These 2008 surveillance case definitions of HIV infection and AIDS supersede those published in 1987 (1) and 1999 (3) and apply to all variants of HIV (e.g., HIV-1 or HIV-2). They are intended for public health surveillance only and are not a guide for clinical diagnosis. The 2008 laboratory criteria for reportable HIV infection among persons aged 18 months to <13 years exclude confirmation of HIV infection through the diagnosis of AIDS-defining conditions alone. Laboratory-confirmed evidence of HIV infection is now required for all reported cases of HIV infection among children aged 18 months to <13 years (20).

# Criteria for HIV Infection

Children aged 18 months to <13 years are categorized as HIV infected for surveillance purposes if at least one of the laboratory criteria or the other criterion is met.

# Criteria for AIDS

Children aged 18 months to <13 years are categorized for surveillance purposes as having AIDS if the criteria for HIV infection are met and at least one of the AIDS-defining conditions has been documented (Appendix A). The 2008 surveillance case definition for AIDS retains the 24 clinical conditions in the AIDS surveillance case definition published in 1987 (1) and revised in 1994 (4) for children aged <13 years (Appendix A). Because the 2008 definition requires that all AIDS diagnoses have laboratory-confirmed evidence of HIV infection, the presence of any AIDS-defining condition listed in Appendix A indicates a surveillance diagnosis of AIDS. Guidance on the diagnosis of these diseases in the context of all nationally notifiable diseases is available at http://www.cdc.gov/epo/dphsi/casedef/case_definitions.htm.

# Discussion

To meet the surveillance case definition for HIV infection, laboratory confirmation of HIV infection is now required for children aged 18 months to <13 years. To meet the surveillance case definition for AIDS, laboratory-confirmed evidence of HIV infection is now required in addition to the presence of one or more AIDS-defining conditions. These revisions will increase the specificity of the HIV infection and AIDS surveillance case definitions by excluding patients without laboratory-confirmed evidence of HIV infection, reinforcing the public health message that HIV infection is the cause of AIDS. Improved specificity will provide more accurate data regarding the number of HIV infection cases, which can be used to refine public health policies and determine appropriate use of HIV resources. No changes have been made to the existing classification system for HIV infection among children aged 18 months to <13 years (4). To classify HIV-infected children in this age group, refer to the 1994 revised classification system for HIV infection among children aged <13 years (4).

# Revised WHO Definitions

# Surveillance Case Definitions

WHO recommends reporting cases of HIV infection as HIV infection or advanced HIV disease (AHD), including AIDS. All cases of HIV infection, AHD, and AIDS require a confirmed diagnosis of HIV infection based on laboratory testing, using the appropriate national testing algorithm (1). The revised WHO surveillance case definitions include the following: HIV infection (stages 1 and 2), AHD (stage 3), and AIDS (stage 4) (1).
# Clinical Staging and Immunologic Criteria

Four clinical stages have been established for persons with confirmed HIV infection. These stages cover the full spectrum of HIV infection and coincide with WHO clinical treatment recommendations: 1) no symptoms, 2) mild symptoms, 3) advanced symptoms, and 4) severe symptoms (1). The revised staging systems include presumptive clinical diagnoses that can be made in the absence of laboratory tests and definitive clinical criteria that require confirmatory laboratory tests. The clinical stage provides useful information when HIV infection is first diagnosed, when a person begins receiving care for HIV infection, for tracking patients in treatment programs, and for guiding decisions on when to initiate cotrimoxazole prophylaxis and antiretroviral therapy (ART). Age-specific immunologic criteria for the disease classification are also presented. For children aged <5 years, the CD4+ T-lymphocyte percentage of total lymphocytes, rather than the absolute CD4+ T-lymphocyte count, should be used because the absolute count varies more than the percentage within an individual child in this age group. Immunologic and clinical criteria should be documented (when available) to describe a case of HIV infection.

# Comparison of the WHO and CDC Definitions

Both the WHO and CDC surveillance case definitions for HIV infection now require laboratory confirmation of HIV infection. Differences between the WHO and CDC definitions and staging systems include the clinical staging categories and the age-specific immunologic criteria used for classification.
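Because the WHO reporting category follows directly from a confirmed case's clinical stage, the mapping can be expressed as a small lookup. The sketch below assumes laboratory confirmation has already been established; the function name is illustrative, not part of the WHO text.

```python
# Minimal sketch mapping a confirmed case's WHO clinical stage (1-4) to the
# revised WHO reporting categories described above.
def who_reporting_category(clinical_stage: int) -> str:
    if clinical_stage in (1, 2):
        return "HIV infection"
    if clinical_stage == 3:
        return "advanced HIV disease (AHD)"
    if clinical_stage == 4:
        return "AIDS"
    raise ValueError("WHO clinical stage must be 1-4")

print(who_reporting_category(3))   # advanced HIV disease (AHD)
```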
Mention of the name of any company or product does not constitute endorsement by the National Institute for Occupational Safety and Health. This document is in the public domain and may be freely copied or reprinted. Copies of this and other NIOSH documents are available from Publications Dissemination, DSDTT.

# FOREWORD

The purpose of the Occupational Safety and Health Act of 1970 (Public Law 91-596) is to assure safe and healthful working conditions for every working person and to preserve our human resources. The Act authorizes the National Institute for Occupational Safety and Health (NIOSH) to develop and recommend occupational safety and health standards and to develop criteria that will ensure that no worker will suffer diminished health, functional capacity, or life expectancy as a result of his or her work experience.

Because limited data are available from studies in humans, NIOSH bases its recommended exposure limits (RELs) for EGME, EGEE, and their acetates on data from studies in animals. The data were adjusted to allow for uncertainties in the extrapolation from animals to humans. NIOSH recommends that worker exposure to EGME and EGMEA in the workplace be limited to 0.1 part per million parts of air (0.1 ppm) (0.3 mg EGME/m³ and 0.5 mg EGMEA/m³) and that worker exposure to EGEE and EGEEA be limited to 0.5 ppm (1.8 mg EGEE/m³ and 2.7 mg EGEEA/m³). Through criteria documents, NIOSH communicates recommended standards to regulatory agencies, including the Occupational Safety and Health Administration (OSHA).

# SECTION 1.2 EXPOSURE MONITORING

Exposure monitoring shall be conducted as specified in Sections 1.2.1 and 1.2.2. Results of all exposure monitoring shall be recorded and maintained as specified in Section 1.9.

# Industrial Hygiene Surveys

In work areas where airborne exposures to EGME, EGEE, or their acetates may occur, the employer shall conduct initial industrial hygiene surveys to determine the magnitude of exposure by using personal sampling techniques for an entire workshift. The employer shall keep records of these surveys. If the employer concludes that exposure concentrations for all glycol ethers are less than one-half the REL, the records must show the basis for this conclusion. Surveys shall be repeated at least annually and whenever any process change is likely to increase concentrations of airborne EGEE, EGME, EGEEA, and EGMEA. The employer shall also look for, evaluate, and record the potential for dermal exposure.

# Personal Monitoring

If workers are exposed to any glycol ether at or above one-half the REL, a program of personal monitoring shall be instituted to identify and to measure or calculate the exposure of each worker (see Section 8.8). Source and area monitoring may be a useful supplement to personal monitoring. In all personal monitoring, samples representative of the TWA exposures to airborne glycol ethers shall be collected in the breathing zone of the worker. Procedures for sampling and analysis shall be in accordance with Section 1.1.2. For each determination of an occupational exposure concentration, a sufficient number of samples (as determined in Leidel et al. [1977]) shall be collected to characterize each worker's exposure during each workshift. Although not all workers must be monitored, a sufficient number of samples must be collected to characterize the exposure of all workers. Variations in work and production schedules as well as worker locations and job functions shall be considered when determining sampling locations, times, and frequencies.
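A full-shift set of personal samples is reduced to a single 8-hr TWA before it is compared with the REL or the action level (one-half the REL); the result of that comparison drives the monitoring frequencies described next. The sketch below shows the arithmetic under the usual assumption of consecutive samples covering the shift; the concentrations are hypothetical and the function name is ours, not anything defined in this document.

```python
# Illustrative 8-hr TWA calculation from consecutive personal samples,
# compared against the REL and the action level (one-half the REL).
def twa_8hr(samples):
    """samples: list of (concentration_ppm, duration_hr) covering the shift."""
    total_ppm_hr = sum(c * t for c, t in samples)
    return total_ppm_hr / 8.0

REL_EGME_PPM = 0.1
samples = [(0.08, 3.0), (0.15, 2.0), (0.05, 3.0)]   # hypothetical EGME results
twa = twa_8hr(samples)                               # 0.086 ppm
print(f"TWA = {twa:.3f} ppm; action level exceeded: {twa >= REL_EGME_PPM / 2}")
```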
If a worker is exposed to EGME, EGEE, or their acetates at concentrations below the REL but at or above one-half the REL, the exposure of that worker shall be monitored at least once every 6 months or more frequently, as indicated by a professional industrial hygienist. If a worker is exposed to one of these glycol ethers at concentrations exceeding the REL, the worker must wear a respirator until adequate engineering controls and/or work practices are instituted. Controls shall then be initiated, the worker shall be notified of the exposure and of the control measures being implemented, and the worker's exposure shall be evaluated at least once a week. Such monitoring shall continue until two consecutive determinations at least 1 week apart indicate that the worker's exposure no longer exceeds the REL. At that time, semiannual monitoring can be resumed; if concentrations of the glycol ethers are less than one-half the REL after two consecutive semiannual surveys, sampling can be conducted annually. All episodes of skin contact shall be reported to a supervisor. These reports and the results of any investigation or corrective action are to be retained with other records.

# Biological Monitoring

Measurement of two glycol ether metabolites, ethoxyacetic acid (EAA, the metabolite of EGEE and EGEEA) and methoxyacetic acid (MAA, the metabolite of EGME and EGMEA), may help characterize occupational exposure to EGEE, EGME, and their acetates when the potential exists for airborne concentrations at or above one-half the REL, or for dermal contact from accidental exposure or a breakdown of chemical protective clothing (see Section 5). Guidelines for biological monitoring are presented in Appendix G.

# SECTION 1.3 MEDICAL MONITORING

The employer shall provide the following information to the physician performing or responsible for the medical monitoring program:
- The requirements of the applicable standard
- An estimate of the worker's potential exposure to glycol ethers, including any available results from workplace sampling
- A description of the worker's duties as they relate to exposure
- A description of any protective equipment the worker may be required to use

# General

- The employer shall institute a medical monitoring program for all workers who are exposed to airborne concentrations of EGEE, EGME, or their acetates at or above one-half the REL, or who have the potential for dermal exposure.
- If a worker has had a dermal exposure, the employer shall provide this information to the physician responsible for or performing the medical monitoring program.
- The employer shall ensure that all medical examinations and procedures are performed by or under the direction of a licensed physician.
- The employer shall provide the required medical monitoring at a reasonable time and place without loss of pay or other cost to the workers.
- The employer shall institute a biological monitoring program for all workers who are exposed to airborne concentrations of EGME, EGEE, or their acetates at or above one-half the REL, or who have the potential for dermal exposure. Guidelines for biological monitoring are presented in Appendix G.
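The same trigger, an airborne exposure at or above one-half the REL or the potential for dermal exposure, governs entry into both the medical and biological monitoring programs described above. A minimal sketch of that enrollment rule, with illustrative names, follows.

```python
# Hedged sketch of the enrollment rule stated above: workers enter the
# medical and biological monitoring programs if their airborne exposure is
# at or above one-half the REL or if they may have dermal exposure.
def requires_medical_monitoring(twa_ppm: float, rel_ppm: float,
                                potential_dermal_exposure: bool) -> bool:
    return twa_ppm >= rel_ppm / 2 or potential_dermal_exposure

print(requires_medical_monitoring(0.04, 0.1, False))  # False
print(requires_medical_monitoring(0.04, 0.1, True))   # True (dermal potential)
```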
# Preplacement Medical Examinations

Preplacement medical examinations shall include at least the following:
- A comprehensive medical, work, and reproductive history that emphasizes identification of existing medical conditions (e.g., those affecting the reproductive, hematopoietic, and central nervous systems, skin, liver, and kidneys) and previous occupational exposure to chemical or physical agents
- A medical examination giving special attention to the reproductive, hematopoietic, and central nervous systems, skin, liver, and kidneys
- Routine urinary monitoring for MAA and EAA before job placement, which may be a useful adjunct to environmental monitoring because it indicates both airborne and dermal exposures
- A judgment of the worker's ability to use positive- and negative-pressure respirators

# Periodic Medical Examinations

Periodic medical examinations shall be provided at least annually to all workers occupationally exposed to airborne concentrations of EGME, EGEE, and their acetates at or above one-half the REL, and to workers with the potential for dermal exposure. These examinations shall include at least the following:
- An update of medical and work histories
- A medical examination and tests as outlined above

# Medical Consultation

Workers who have a dermal exposure or who are exposed to concentrations of EGME, EGEE, or their acetates above the REL should be given the opportunity to consult with a physician regarding possible adverse health effects, including reproductive and developmental effects. OSHA Form 200 shall be modified to include any reports of dermal exposure.

# SECTION 1.4 LABELING AND POSTING

All labels and warning signs shall be printed both in English and in the predominant language of workers who do not read English. Workers unable to read the labels and warning signs shall be informed verbally regarding the instructions printed on labels and signs in the hazardous work areas of the plant.

# Labeling

Containers of EGME, EGMEA, EGEE, or EGEEA used or stored in the workplace shall carry a permanently attached label that is readily visible. The label shall identify the glycol ether and give information regarding its effects on human health and emergency procedures (see Figure 1-1).

# Posting

Signs bearing information about the health effects of EGME, EGMEA, EGEE, or EGEEA shall be posted in readily visible positions in work areas and at entrances to work areas or building enclosures where the potential exists for exposures at or above the REL or where skin exposures may occur (see Figure 1-2). If respirators and personal protective clothing are needed during the manufacturing or handling of these glycol ethers or during the installation or implementation of required engineering controls, the following statement shall be added in large letters to the signs required in this section: "Respirators and protective clothing are required in this area." In any area where emergency situations may arise, the required signs shall be supplemented with emergency first-aid procedures and the locations of emergency showers and eyewash fountains.

# SECTION 1.5 PROTECTIVE CLOTHING AND EQUIPMENT

Engineering controls and good work practices shall be used to keep the airborne concentrations of EGME, EGEE, and their acetates below the REL and to prevent skin and eye contact. When protective clothing and equipment are needed, they shall be provided by the employer at no cost to the worker.
# Eye and Face Protection

The employer shall provide chemical splash-proof safety goggles or face shields (20-cm minimum) worn over goggles and shall ensure that workers wear the protective equipment during any operation in which splashes of these glycol ethers are likely to occur. Devices for eye and face protection shall be selected, used, and maintained in accordance with 29 CFR 1910.133 and 30 CFR 56.15004 and 57.15004.

# Skin Protection

- Workers at risk of dermal exposure to these glycol ethers shall be provided with appropriate protective clothing such as gloves and disposable clothing. Information presented in Section 8.6.1 provides guidance in the selection of appropriate materials for gloves and clothing.
- Clothing contaminated with these glycol ethers shall be cleaned before reuse. Anyone who handles contaminated clothing or is responsible for its cleaning shall be informed of the hazards of these glycol ethers and the proper precautions for their safe handling and use.
- The employer shall ensure that all personal protective clothing and equipment is inspected regularly and maintained in a clean and satisfactory working condition.
- Protective clothing or gloves shall be evaluated on a routine basis to ensure that they are in good condition and no breakthrough has occurred.

Figure 1-1 (example warning label for EGME): "WARNING! Exposure may be harmful to the reproductive system and blood. CAUTION! Combustible. Harmful if ingested, inhaled, or absorbed through skin. Irritating to skin, eyes, nose, throat, mouth, and lungs."

# Respiratory Protection

Engineering controls and good work practices shall be used to control respiratory exposure to airborne contaminants. The use of respirators is the least desirable method of controlling worker exposures and should not be used as the primary control method during routine operations. However, NIOSH recognizes that respirators may be required to provide protection in certain situations such as implementation of engineering controls, short-duration maintenance procedures, and emergencies. Respirator selection guides for protection against EGEE, EGME, and their acetates are presented in Tables 1-1 through 1-3.

- Respirators shall be provided by the employer when such equipment is necessary to protect the health of the worker. The worker shall use the provided respiratory protection in accordance with instructions and training received.
- The employer shall ensure that respirators are properly fitted and that workers are instructed at least annually in the proper use and testing for leakage of respirators assigned to them.
- Workers should not be assigned to tasks requiring the use of respirators unless it has been determined that they are physically able to perform the work and use the equipment. The respirator user's medical status should be reviewed at least annually or more frequently as recommended by the physician responsible for the physical examination. See Appendix H for additional information about the medical aspects of wearing respirators.
- The employer shall be responsible for establishing and maintaining a respiratory protection program as follows:
1. Written standard operating procedures governing selection and use of respirators shall be established.
2. The worker shall be instructed and trained in the proper use of respirators and their limitations. Where practicable, the respirators should be assigned to individual workers for their exclusive use.
5. Respirators shall be stored in a convenient, clean, and sanitary location.
6. Respirators used routinely shall be inspected during cleaning. Worn or deteriorated parts shall be replaced. Respirators for emergency use (e.g., self-contained devices) shall be thoroughly inspected at least once a month and after each use.
7. The respiratory protection program shall be regularly evaluated by the employer to determine its continued effectiveness.

The employer shall notify the worker when exposure exceeds the REL in the work area to which he or she is assigned.

# Training

Workers shall also be instructed about their responsibilities for following proper work practices and sanitation procedures to help protect their health and provide for their safety and that of their fellow workers. All workers in areas where exposure to EGME, EGEE, or their acetates may occur during spills or emergencies shall be trained in proper emergency and evacuation procedures.

# File of Written Hazard Communication

Required information shall be recorded on the material safety data sheet (see example in Appendix D) or on a similar OSHA form that describes the relevant toxic, physical, and chemical properties of the glycol ethers and mixtures of glycol ethers that are used or otherwise handled in the workplace. This information shall be kept on file and shall be readily available to workers for examination and copying.

# SECTION 1.7 ENGINEERING CONTROLS AND WORK PRACTICES

# Engineering Controls

Engineering controls shall be used as needed to maintain exposure to airborne glycol ethers within the limits prescribed in Chapter 1.

# Local Exhaust Ventilation

Local exhaust ventilation may be effective when used alone or in combination with process enclosure. When a local exhaust ventilation system is used, it shall be designed and operated to prevent accumulation or recirculation of airborne glycol ethers in the workplace. Exhaust ventilation systems discharging to outside air shall conform with applicable local, State, and Federal air pollution regulations and shall not constitute a hazard to workers or to the general population. Before maintenance work on control equipment begins, the generation of airborne glycol ethers shall be eliminated to the greatest extent feasible.

# Maintaining Design Airflow

Enclosures, exhaust hoods, and ductwork shall be kept in good repair to maintain designed airflows. Measurements such as capture velocity, duct velocity, or static pressure shall be made at least semiannually, and preferably monthly, to demonstrate the effectiveness (quantitatively, the ability of the control system to maintain exposures below the REL) of the mechanical ventilation system. NIOSH recommends the use of continuous airflow indicators such as water or oil manometers marked to indicate acceptable airflow. A record shall be kept showing design airflow and the results of all airflow measurements. Measurements of the effectiveness of the system in controlling exposures shall also be made as soon as possible after any change in production, process, or control devices that may increase airborne concentrations of EGME, EGEE, and their acetates.

# Forced-Draft Ventilation

Forced-draft ventilation systems shall be equipped with remote manual controls and should be designed to shut off automatically in the event of a fire.

# General Work Practices

- Operating instructions for all equipment shall be developed and posted where EGME, EGEE, and their acetates are handled or used.
- Transportation, use, and disposal of these glycol ethers shall comply with all applicable local, State, and Federal regulations.
- These glycol ethers shall be stored in tightly closed containers and in well-ventilated areas.
- Containers shall be moved only with the proper equipment and shall be secured to prevent loss of control or dropping during transport.
- Storage facilities shall be designed to prevent contamination of workroom air and to contain spills completely within surrounding dikes.
- Ventilation switches and emergency respiratory equipment shall be located outside storage areas in readily accessible locations.
- Process valves and pumps shall be readily accessible and shall not be located in pits or congested areas.
- Glycol ether containers and systems shall be handled and opened with care. Approved protective clothing and equipment as specified in Section 1.5 shall be worn by workers who open, connect, and disconnect glycol ether containers and systems. Adequate ventilation shall be provided to minimize exposures of such workers to airborne glycol ethers.
- Glycol ether storage equipment and systems shall be inspected daily for signs of leakage. All equipment, including valves, fittings, and connections, shall be checked for leaks immediately after glycol ethers are introduced therein.
- When a leak is found, it shall be repaired promptly. Work shall resume normally only after necessary repair or replacement has been completed and the area has been well ventilated.

# Confined or Enclosed Spaces

- A permit system shall be used to control entry into confined or enclosed spaces holding containers of glycol ethers (e.g., tanks, pits, tank cars, and process vessels) where egress is limited. Permits shall be signed by an authorized representative of the employer and shall certify that preparation of the confined space, precautionary measures, and personal protective equipment are adequate and that precautions have been taken to ensure that prescribed procedures will be followed.
- Confined spaces that hold containers of EGME, EGEE, and their acetates shall be thoroughly ventilated, inspected, and tested for oxygen deficiency and for airborne concentrations of these glycol ethers. Every effort shall be made to prevent the inadvertent release of hazardous amounts of these glycol ethers into confined spaces in which work is in progress. Glycol ether supply lines shall be disconnected or blocked off before such work begins.
- No worker shall enter a confined space holding containers of glycol ethers without an entry large enough to admit a worker wearing a safety harness, lifeline, and appropriate personal protective equipment as specified in Section 1.5.
- Confined spaces shall be ventilated while work is in progress to keep the concentration of glycol ethers below the RELs, to keep the concentration of other contaminants below toxic or dangerous levels, and to prevent oxygen deficiency.
- If the concentrations of these glycol ethers in the confined space exceed the RELs, respiratory protective equipment is required for entry.
- Anyone entering a confined space shall be observed from the outside by another properly trained and protected worker. An additional supplied-air or self-contained breathing apparatus with safety harness and lifeline shall be located outside the confined space for emergency use. The person entering the confined space shall maintain continuous communication with the standby worker.
# Emergency Procedures

Emergency plans and procedures shall be developed for all work areas where there is a potential for exposure to EGME, EGEE, and their acetates. These plans and procedures shall include those specified below and any others considered appropriate for a specific operation or process. Workers shall be instructed in the effective implementation of these plans and procedures.

- The following steps shall be taken in the event of a leak or spill of these glycol ethers:
  - All nonessential personnel shall be evacuated from the leak or spill area.
  - Persons not wearing the appropriate protective equipment and clothing shall be restricted from the leak or spill area until cleanup has been completed.
  - All ignition sources shall be removed.
  - The area where the leak or spill occurs shall be adequately ventilated to prevent the accumulation of vapor.
  - EGME, EGEE, EGMEA, and EGEEA shall be contained and absorbed with vermiculite, sand, paper towels, or equivalent materials.
  - Small quantities of absorbed material shall be placed under a fume hood, and sufficient time shall be allowed for the liquid to evaporate and for the vapors to clear the ductwork in the hood.
  - Large quantities of absorbed material shall be burned in a suitable combustion chamber.
  - Absorbed materials shall be disposed of as hazardous waste.
  - The spill area shall be cleaned with water.
- Only personnel trained in the emergency procedures and protected against the attendant glycol ether hazards shall clean up spills and control and repair leaks.
- Personnel entering the spill or leak area shall be furnished with appropriate personal protective clothing and equipment. Other personnel shall be prohibited from entering the area.
- Safety showers, eyewash fountains, and washroom facilities shall be provided, maintained in working condition, and made readily accessible to workers in all areas where skin or eye contact with EGME, EGEE, EGMEA, or EGEEA is likely. If one of these glycol ethers is splashed or spilled on a worker, contaminated clothing shall be removed promptly, and the skin shall be washed thoroughly with soap and water. Eyes splashed by these glycol ethers shall be irrigated immediately with a copious flow of water for 15 min. If irritation persists, the worker should seek medical attention.

# Storage

EGME, EGEE, and their acetates shall be stored in cool, well-ventilated areas and kept away from acids, bases, and oxidizing agents.

# Waste Disposal

All waste material shall be securely packaged in double bags, labeled, and incinerated. The incinerator residue shall be disposed of in a manner consistent with Federal (EPA), State, and local regulations, or it shall be disposed of in a licensed hazardous waste landfill.

# SANITATION AND HYGIENE

# Food, Cosmetics, and Tobacco

The following shall be prohibited in areas where EGME, EGEE, EGMEA, or EGEEA is produced or used: the storage, preparation, dispensing, or consumption of food or beverages; the storage or application of cosmetics; and the storage or use of all tobacco products.

# Handwashing

The employer shall provide handwashing facilities and encourage workers to use them before eating, smoking, using the toilet, or leaving the worksite.

# Laundering

- Protective clothing, equipment, and tools shall be cleaned periodically.
- The employer shall provide for the cleaning, laundering, or disposal of contaminated work clothing and equipment.
- Any person who cleans or launders this work clothing or equipment must be informed by the employer that it may be contaminated with EGME, EGEE, EGMEA, or EGEEA, chemicals that may cause adverse reproductive, developmental, hematologic (blood), and central nervous system (CNS) effects.

# Cleanup of Work Area

The work area shall be cleaned at the end of each shift (or more frequently if needed) using vacuum pickup. Collected wastes shall be placed in sealed containers with labels that indicate the contents. Cleanup and disposal shall be conducted in a manner that prevents worker contact with wastes and complies with all applicable Federal, State, and local regulations.

# Showering and Changing Facilities

Workers shall be provided with quick-drench shower facilities and with facilities for showering and changing clothes at the end of each workshift.

# RECORDKEEPING

# Exposure Monitoring

The employer shall establish and maintain an accurate record of all exposure measurements required in Section 1.2. These records shall include the name of the worker being monitored, social security number, duties performed and job locations, dates and times of measurements, sampling and analytical methods used, type of personal protection used (if any), and the number, duration, and results of samples taken.

# Medical Monitoring

The employer shall establish and maintain an accurate record for each worker subject to the medical monitoring specified in Section 1.3. Pertinent medical records (i.e., the physician's written statement, the results of medical examinations and tests, medical complaints, reports of skin exposure, etc.) shall be retained in the medical files of all workers exposed to airborne concentrations of EGME, EGEE, EGMEA, or EGEEA in the workplace at or above one-half the REL. Copies of applicable environmental monitoring data shall also be included in the worker's medical file.

# Record Retention

In accordance with the requirements of 29 CFR 1910.20(d) (Preservation of Records), the employer shall retain the records described in Sections 1.2, 1.3, and 1.6 for at least the following periods:
- 30 years for exposure monitoring records, and
- the duration of employment plus 30 years for medical surveillance records.

# Availability of Records

- In accordance with 29 CFR 1910.20 (Access to Employee Exposure and Medical Records), the employer shall allow examination and copying of exposure monitoring records by the subject worker, the former worker, or anyone having the specific written consent of the subject or former worker.
- Any medical records required by this recommended standard shall be provided upon request for examination and copying to the subject worker, the former worker, or anyone having the specific written consent of the subject or former worker.

# Transfer of Records

If the employer ceases to do business and no successor is available to receive and retain the records for the prescribed period, the employer shall notify NIOSH.

NIOSH has formalized a system for developing criteria on which to base standards for ensuring the health and safety of workers exposed to hazardous chemical and physical agents. Simple compliance with these standards is not the only goal. The criteria and recommended standards are intended to help management and labor develop better engineering controls and more healthful work practices.
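The two retention periods above reduce to simple date arithmetic: add 30 years to the record date for exposure records, or to the end of employment for medical records. The sketch below illustrates this with hypothetical dates; the function names are ours.

```python
# Illustrative minimum-retention dates under the two rules in Section 1.9:
# exposure records, 30 years; medical records, employment duration + 30 years.
from datetime import date

def exposure_record_retention_until(record_date: date) -> date:
    return record_date.replace(year=record_date.year + 30)

def medical_record_retention_until(employment_end: date) -> date:
    return employment_end.replace(year=employment_end.year + 30)

print(exposure_record_retention_until(date(1991, 6, 1)))    # 2021-06-01
print(medical_record_retention_until(date(1995, 3, 15)))    # 2025-03-15
```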
Recommended standards for EGEE, EGME, EGEEA, and EGMEA apply only to workplace exposures arising from the processing, manufacturing, handling, and use of these glycol ethers. The recommendations are not designed for the population at large, and any extrapolation beyond the occupational environment may not be warranted. The recommended standards are intended to protect workers from the acute and chronic effects of EGEE, EGME, EGEEA, and EGMEA. Exposure concentrations are measurable by techniques that are valid, reproducible, and available to industry and government agencies.

# SCOPE

# PRODUCTION METHODS AND USES

The ethylene glycol monoalkyl ethers EGME and EGEE are usually produced by a reaction of ethylene oxide with methyl or ethyl alcohol, but they may also be made by the direct alkylation of ethylene glycol with an alkylating agent such as dimethyl or diethyl sulfate. Temperature, pressure, reactant molar ratios, and catalysts are selected to give the desired product mix. Ethylene glycol monoalkyl ethers are not formed as pure compounds but must be separated from the diethers of diethylene glycol, triethylene glycol, and the higher glycols. Ethylene glycol monoalkyl ether acetates are prepared by esterifying the particular glycol ether with acetic acid, acetic acid anhydride, or acetic acid chloride. Glycol ethers and their acetates are widely distributed and have been used commercially for more than 50 years.

# PROCESS AND WORKER JOB DESCRIPTIONS

The usefulness of glycol ethers and their acetate derivatives can be attributed to their physical properties, particularly their miscibility or high solubility in water and other organic solvents and their low vapor pressures. These properties allow them to serve a number of functions in a variety of products. The following information was obtained during surveys conducted in various industries to determine occupational exposures to glycol ethers.

# Paints and Coatings

Although frequently comprising less than 10% of the final product, glycol ethers serve a variety of important functions in paints and coatings. One function is as a solvent to keep the various paint components in solution. Latex coatings contain glycol ethers or their acetate derivatives to enhance the coalescing properties of the product when applied. By slowing the evaporation rate, glycol ethers reduce moisture condensation or "blush." They also improve the penetration and bonding qualities of paints and coatings. Specialty products, such as aircraft or electrostatic paints, may contain 18% to 35% glycol ethers.

The manufacture of paints and coatings is a batch process. Components are added manually or through a closed piping system to the mixing tank. Because glycol ethers generally constitute a small percentage of the total formulation, they are often added manually. After mixing, the product is packaged according to customer specifications. A variety of industries use paints and coatings, but as previously noted, these products usually contain a small percentage of glycol ethers (i.e., less than 10%). Lacquer containing less than 1% glycol ethers is used to coat wood products. However, electrostatic paints used in a spraying process for metal parts may contain 20% to 30% glycol ethers. Glycol ethers are also used in the manufacture of coated fabrics. These fabrics pass through a dip pan to pick up the coating and then rise through a ventilated drying tower.
# Cleaners and Cleaning Solvents

Cleaning agents that contain glycol ethers include spot removers, carburetor cleaners, metal cleaners, and glass cleaners. In these products, glycol ethers function as surface-active agents, enhancing the penetration of the product, clarifying the appearance, and, in glass cleaners, increasing the viscosity. The percentage of glycol ethers in these products is less than 5%.

# NUMBER OF WORKERS POTENTIALLY EXPOSED

Based on information from the National Occupational Exposure Survey (NOES), the number of workers potentially exposed to glycol ethers in the workplace during the period 1981 to 1983 has been estimated.

Both children underwent surgery in the following years. The perineal and the penile hypospadias were corrected, chordee was removed in both children, and the undescended testes were moved to the scrotum. The older child was treated with chorionic gonadotrophin, which led to normal-sized testes. The authors stated that the risk of isolated hypospadias was between 1 in 300 and 1 in 1,800, while the risk for a boy whose brother has hypospadias was 1 in 24. They indicated that both the family history and medical examinations showed no overt risks other than the pronounced exposure of the mother to EGMEA during fetal development. The authors concluded that the hypospadias were actually caused by exposure to EGMEA.

There is only one case report that describes the effects of EGEE [Fucik 1969].

No statistically significant differences were found in hematology test results, hormone levels, or sperm counts between the exposed workers and the controls. The investigators suggested that testicular size may have been reduced; however, the decrease in size approached but did not reach statistical significance (significance level P < 0.05) in either length (P = 0.19) or width (P = 0.08).

# Market and Moody

# Boiano

NIOSH conducted a health hazard evaluation in 1983 to evaluate worker exposure to two solvent cleaners, an image remover, and the paint remover used in a silk screening process. The silk screener using the image remover was monitored for exposure to EGEEA and cyclohexanone, while the worker using the paint remover was monitored for a variety of organic solvents, one of which was EGEEA. Although the workers were primarily exposed by inhalation, they may have also been exposed by skin absorption because personal protective clothing was not always worn. The workers complained of headaches, lethargy, sinus problems, nausea, and heartburn. When they were away from work, their symptoms improved. The silk screener using image remover had TWA exposures to EGEEA ranging from 1.3 to 3.3 ppm, with short-term excursions to 3.8 ppm. The silk screener using paint remover had TWA exposures to EGEEA ranging from 0.5 to 3.9 ppm, with a short-term excursion to 4.0 ppm. Measured airborne exposures thus were below occupational standards, but absorption through the skin may have contributed to the workers' overall exposure.

# Gunter

In 1985, NIOSH conducted a health hazard evaluation of production areas at a plant that manufactured solid-state electronic circuits [Gunter 1985].

# Ratcliffe et al.

NIOSH conducted an evaluation for possible adverse effects on testicular function in male workers potentially exposed to EGEE during the preparation of ceramic shells used to cast metal parts.

# Welch et al., Sparer et al.
The effect of combined EGME and EGEE exposure on the reproductive potential of men who worked in a large shipbuilding facility was recently studied. This site was selected for study because of a previous health hazard evaluation.

ADH has a higher affinity for ethanol than for the glycol ethers; this finding confirmed the hypothesis that EGEEA is first converted to EGEE by esterases [Groeseneken et al. 1987a]. A detailed description of the preceding studies may be found in Appendix B.

# EFFECTS ON ANIMALS

Although kidney and liver damage and hematologic, CNS, reproductive, and teratogenic effects have been observed in experimental animals exposed to glycol ethers and their acetates, the type and severity of the response induced by each glycol ether are not identical. Therefore, each glycol ether and its corresponding acetate will be discussed separately following Section 4.3.1.

# Acute Toxicity

Many experiments investigating the acute toxicity of glycol ethers in animals have been performed. These investigations led to the establishment of a lethal concentration or lethal dose for 50% of the exposed animals (LC50 or LD50) in a variety of species by a variety of routes (inhalation, oral, dermal, injection). A summary of the available data by animal species is presented in Table 4-1.

# Oral Administration

The toxicity of glycol ethers has been studied more extensively by oral administration than by any other route.

In one inhalation study, concentrations ranged from 500 to 6,000 ppm and were administered over a period of 1 to 24 hr. Guinea pigs exposed to 6,000 ppm for 24 hr exhibited inactivity, weakness, and dyspnea, and died by the end of the exposure; 3,000 ppm for 24 hr caused death within 24 hr following exposure; and exposure to 6,000 ppm for 10 hr, or to 3,000 ppm or 1,000 ppm for 18 hr, caused death 1 to 8 days following exposure. Exposure to 6,000 ppm for 1 hr, 3,000 ppm for 4 hr, and 500 ppm for 14 hr caused no apparent harm. Gross pathological examinations of animals that died during and up to two days after exposure revealed congestion and edema of the lungs, distended and hemorrhagic stomachs, and congested kidneys.

Ten male and ten female rats and two male and two female rabbits were exposed to 2,000 ppm EGEEA for 4 hr. Only in rabbits was there a slight and transient hemoglobinuria or hematuria; no gross pathological lesions were noted in either species.

# Dermal Exposure

A modified Draize "sleeve" technique was used to study the acute dermal toxicity of EGEEA in rabbits. Death generally occurred 24 to 48 hr after the application of 10,500 mg EGEEA/kg. Although hemoglobinuria and/or hematuria were observed, there was little variation in Hb concentration and the number of RBCs (less than 15% to 20%) in blood; however, there was a considerable decrease in the number of WBCs (50% to 70%). In surviving animals, the WBC counts gradually returned to normal. Necropsy revealed bloody kidneys and blood in the bladder. When survivors were examined after the 2-week observation period, no gross lesions were noted. In another study, EGEEA caused an increase in WBC levels.

# Cytotoxicity

The in vitro cytotoxicities of EGME, EGEE, and their corresponding alkoxyacetic acids (MAA and EAA) were studied using CHO cells.
CHO cells were seeded into culture flasks, and after 4 to 5 hr the test material was added to the medium. After 16 hr the medium was renewed, and the cells were allowed to grow in colonies for 6 to 7 days prior to counting.

# RECOGNITION OF THE HAZARD

Each employer who manufactures, transports, packages, stores, or uses EGME, EGEE, or their acetates in any capacity should determine the potential for occupational exposure of any worker at or above the action level (one-half the REL).

# ENVIRONMENTAL SAMPLING

- The charcoal in the sampling tube is transferred to a small, stoppered sample container, and the analyte is desorbed. EGME, EGEE, EGMEA, and EGEEA may be desorbed from the charcoal with methylene chloride and 5% (v/v) methanol.
- An aliquot of the desorbed sample is injected into a gas chromatograph with a flame ionization detector.
- The area of the resulting peak is determined and compared with areas obtained from the injection of standards. The detailed analytical method is described in Appendix A.
- Periodic medical examinations: the aforementioned medical examinations should be performed annually for all workers occupationally exposed to EGEE, EGME, or their acetates at or above the action levels, and for all who have the potential for significant skin exposure.

# BIOLOGICAL MONITORING

Biological monitoring may be a useful adjunct to environmental monitoring in assessing worker exposure to EGME, EGEE, and their acetates. Biological monitoring reflects the influence of workload and percutaneous absorption.

# Justification for Biological Monitoring

Human experimental inhalation studies have demonstrated the uptake of EGEE, EGEEA, and EGME. Studies that included different workloads in the experimental design demonstrated a linear relationship between the workload and the uptake of each glycol ether; a linear relationship was also found between the exposure concentration and uptake. Nakaaki et al. demonstrated that 10 times more EGME was absorbed through the forearm than acetone or methanol.

# Selection of Monitoring Medium

A variety of biological monitoring media can be used to assess uptake (e.g., expired air, blood, urine). Urinary EAA and MAA [Groeseneken et al. 1986b, 1987a, 1989a] offer several advantages:
- Expected concentrations for these metabolites at the proposed RELs can be measured by the recommended analytical method (see Appendix F).
- The acid metabolites are associated with the reproductive and hematologic toxicity of EGEE, EGME, and their acetates, and they may reflect the concentration of the "active agent" at the target sites.
- The half-lives of the acid metabolites in urine are suitable for exposure monitoring and can reflect integrated exposures over a workweek [Groeseneken et al. 1988, 1989a]. The half-life for MAA is 77 hr; for EAA it is 42 to 48 hr.
- Collection of urine samples is a noninvasive procedure.

Full-shift personal breathing zone exposures to EGEE ranged from 3 to 14.5 ppm for workers in the investing areas. Ratcliffe et al. reported that recoveries of EGEE in three quality control samples were as low as 69%, indicating that the measured airborne concentrations could have been underestimated.

# Limitations of Biological Monitoring

Urine samples were collected as voided during a 24-hr period from three EGEE-exposed workers and two controls (unexposed workers). In the June survey, area samples averaged 2.4 to 14.9 ppm.
Personal breathing zone samples averaged 8.1 ppm for grabber operators, 4.5 ppm for shell processors, and 5.0 ppm for investment room supervisors. In this case, spot urine samples were collected at random over a 7-day period. A wide range of EAA concentrations was noted in workers using EGEE-containing paints; this was probably caused by variation in work assignments, work areas, work practices, and the use of personal protective equipment. The author concluded that there appeared to be a relationship between urinary EAA excretion and the use of paints containing EGEE.

The health hazard evaluation demonstrated that the potential existed for exposure of painters to EGEE and EGME. Because of the complexity of the work environment and the variable use of personal protective equipment, no dose-response relationship could be developed. However, at the exposure concentrations measured, painters who used paints with EGEE did excrete more EAA in the urine than painters who did not use EGEE-containing paints. Sparer et al. and Welch et al. examined some of the same workers from the health hazard evaluation. (These studies are discussed in detail in Section 4.1.) The authors concluded that exposure to EGEE and EGME lowered sperm counts in the painters. In addition, they concluded that when smoking was controlled, the painters had an increased odds ratio for a lower sperm count per ejaculate. However, it would be inappropriate to conclude that the EGEE and EGME exposure concentrations presented in the health hazard evaluation were representative of the chronic exposure of the shipyard workers who participated in the semen study. In addition, it cannot be concluded that the marginally (but not statistically significantly) lowered sperm counts were caused by the exposure concentrations measured in the health hazard evaluation.

# EGME

No studies have evaluated the relationship between EGME exposure in the workplace and urinary MAA concentration. Results of the study by Groeseneken et al. have provided the following information regarding MAA excretion in urine:
- The relatively long urinary elimination half-life of MAA (77 hr) suggests that MAA would be expected to accumulate during the workweek. If biological monitoring were done, urine specimens collected at the end of the week, or possibly prior to the first shift of the week, would be appropriate.
- Sixty percent of the urine specimens from this study contained MAA at concentrations below 2 mg/liter. If 4-hr exposures are extrapolated to 8-hr exposures based on linear kinetics, then subjects exposed to 5.1 ppm at rest would be expected to have 60% of their urine samples with MAA concentrations below 4 mg/liter. If exposures were 1/10 of those from this study (i.e., 0.5 ppm), then 60% of the urine specimens would be expected to have less than 0.4 mg/liter of MAA. The limit of quantitation for MAA was reported to be 0.1 mg/liter. Higher concentrations of MAA would be expected with exercise.

Although dermal absorption was not studied, dermal uptake of EGME is a potential route of exposure. Dugard et al. demonstrated in vitro absorption of EGME through human abdominal skin. Nakaaki et al. also demonstrated dermal penetration of EGME through the forearm of human volunteers. Johanson suggested that dermal uptake of EGME is possibly the major route of exposure. Thus, in spite of the lack of quantitative relationships between EGME exposure and MAA excretion in urine, measurement of MAA in urine is warranted.
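The accumulation argument above follows from first-order elimination kinetics: with exposures repeated every 24 hr, a metabolite with half-life t1/2 accumulates at steady state by a factor of roughly 1/(1 - exp(-k*tau)), where k = ln 2 / t1/2 and tau is the dosing interval. This is a standard pharmacokinetic relation rather than a formula given in the criteria document; the sketch below applies it to the MAA and EAA half-lives quoted above.

```python
# First-order kinetics sketch showing why the 77-hr MAA half-life implies
# accumulation across a workweek of daily exposures (weekend washout ignored).
import math

def accumulation_factor(half_life_hr: float, dosing_interval_hr: float = 24.0) -> float:
    k = math.log(2) / half_life_hr                  # elimination rate constant
    return 1.0 / (1.0 - math.exp(-k * dosing_interval_hr))

print(f"MAA (t1/2 = 77 hr): R = {accumulation_factor(77):.1f}")   # ~5.1
print(f"EAA (t1/2 = 48 hr): R = {accumulation_factor(48):.1f}")   # ~3.4
```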
The potential for extensive skin absorption and the potential buildup of the active urinary metabolite MAA during the workweek are reasons to measure MAA in urine as an exposure index. In addition, measurement of MAA in urine may be useful as an indicator of the potential for adverse reproductive effects.

# Methods for Analyzing Urinary EAA and MAA

A variety of methods have been developed for the analysis of EAA and MAA in human urine. Gas chromatographic procedures are based on fluoroanhydride derivatization following extraction of the acid as a tetrabutylammonium ion pair. Groeseneken et al. developed a method that combined the best attributes of the two basic existing methods. Detailed descriptions of the above methods are presented in Appendix H.

# Summary

EGEE, EGME, and their acetates are metabolized to their respective alkoxyacetic acid metabolites, EAA and MAA, which are excreted in the urine. EAA and MAA have produced the reproductive and hematotoxic effects noted for glycol ethers. These glycol ethers can also be absorbed through the skin; in fact, the major route of exposure to EGME and EGEE may be through the skin. Thus, monitoring of these acids may serve not only as a measure of exposure or uptake but also as a measure of potential adverse health effects. The alkoxyacetic acid metabolites may be analyzed by a variety of sensitive and specific methods. The recently developed method of Groeseneken et al. has sufficient sensitivity to monitor excretion of these metabolites at the recommended RELs.

Results from human laboratory inhalation exposure studies indicated that EAA in urine could be used to monitor uptake of EGEE and EGEEA. Investigations of occupational exposure also revealed a correlation between urinary EAA excretion and repeated daily inhalation exposure of workers to a mixture of EGEE and EGEEA. Data showed the accumulation of EAA following repeated daily exposures to EGEE and EGEEA. The estimated elimination half-life of EAA was 48 hr. Two other worksite investigations of occupational exposure to EGEE demonstrated the utility of EAA in urine for assessing uptake of EGEE regardless of the route of exposure [Ratcliffe et al. 1986; Clapp et al. 1987; Lowry 1987; McManus et al. 1989; Sparer et al. 1988].

Experimental studies were conducted in which humans were exposed to EGME. Results of these studies indicated that measurement of urinary MAA could be used to assess uptake of EGME. The concentration of MAA peaked several hours after exposure ended, and MAA was eliminated with a half-life of 77 hr. Examination of the elimination kinetics showed that MAA would accumulate following repeated daily exposures and could also accumulate over extended exposure periods. Insufficient information is available at present to construct a dose-response plot that would provide statistically sound guidelines for the concentration of alkoxyacetic acid metabolites in urine corresponding to a given airborne exposure to glycol ethers. Table 5-11 presents a summary of the laboratory and occupational dose-response data.

In 1971, OSHA adopted the current Federal standards for worker exposure to EGME, EGMEA, EGEE, and EGEEA, which are based on the American Conference of Governmental Industrial Hygienists (ACGIH) 1968 Threshold Limit Values (TLVs®). These TLVs® were based on the hematotoxic and neurotoxic effects and exposure concentrations reported in the early case reports of human health effects.
The OSHA PELs include a "skin" notation, indicating the potential for skin absorption of toxic amounts of these glycol ethers. The OSHA PELs for occupational exposure to the glycol ethers, as TWAs for an 8-hr workshift, are as follows: 25 ppm (80 mg/m³) for EGME, 25 ppm (120 mg/m³) for EGMEA, 200 ppm (740 mg/m³) for EGEE, and 100 ppm (540 mg/m³) for EGEEA [29 CFR 1910.1000]. OSHA is considering a revision of these PELs.

NIOSH has previously recommended that EGME and EGEE be regarded in the workplace as having the potential to cause adverse reproductive effects in male and female workers and embryotoxic effects, including teratogenesis, in the offspring of exposed pregnant females, and that occupational exposure to them be reduced to the lowest extent possible. These recommendations were based on the results of animal studies that demonstrated dose-related embryotoxicity and other reproductive effects in several species of animals exposed by different routes of administration.

In 1946, ACGIH established maximum allowable concentrations (m.a.c.s) of 100 ppm for EGME, EGMEA, and EGEEA, and 200 ppm for EGEE. In 1947, the m.a.c.s for EGME and EGMEA were lowered to 25 ppm because of the Greenburg et al. study in which neurologic and hematologic changes were observed in men exposed to EGME at concentrations estimated to be as low as 25 ppm. The m.a.c. for EGMEA was lowered because the toxic effects it causes were likely to be similar to those caused by EGME, given EGMEA's metabolism to EGME and then to the active metabolite [ACGIH 1962, 1984]. Although the values remained unchanged, the term "threshold limit value" was substituted for m.a.c. in 1948. In 1968, the notation "skin" (indicating the potential for skin absorption of toxic amounts of a compound) was added to the TLVs® for EGME, EGEE, EGMEA, and EGEEA.

In 1971, ACGIH lowered the TLV® for EGEE from 200 to 100 ppm to prevent irritation of the nose and eyes. In 1981, ACGIH adopted TLVs® of 50 ppm for EGEE and EGEEA, each with a short-term exposure limit (STEL) of 100 ppm; in 1987-88, the STELs were deleted. The TLVs® were lowered because of adverse hematologic effects observed in laboratory animals. Changes in rat erythrocyte fragility were produced by 125 ppm EGEE but not by 62 ppm. ACGIH deemed it prudent to maintain chemical exposures below levels found to cause blood changes in experimental animals. Because the TLV® of 100 ppm for EGEEA was based on analogy with EGEE, it was logical to establish a similar TLV® of 50 ppm for its acetate.

Reports of adverse testicular effects in experimental animals treated with EGME, EGEE, and their acetates [Nagano et al. 1979] led ACGIH to lower the TLVs® for these compounds. The 5-ppm TLV® for EGME, EGMEA, EGEE, and EGEEA as an 8-hr TWA was adopted in 1984, and the "skin" notation was retained.

The principal health effects documented in humans exposed to EGEE, EGME, and their acetates involve the blood and hematopoietic system, the central nervous system, the liver, and the kidneys. These effects include headache, drowsiness, dizziness, forgetfulness, personality change, loss of appetite, tremors, hearing loss, slurred speech, hematuria, hemoglobinuria, anemia, and leukopenia. Only limited direct evidence indicates that exposure to EGEE, EGME, or their acetates causes adverse reproductive effects in humans. However, experimental studies in animals provide strong evidence of adverse reproductive and developmental effects related to these exposures.
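The paired ppm and mg/m³ values quoted for the PELs follow from the standard conversion mg/m³ = ppm x MW / 24.45 at 25 °C and 1 atm. The short check below reproduces them to within rounding; the molecular weights come from general chemistry references, not from this document.

```python
# Cross-check of the ppm-to-mg/m3 conversions quoted for the OSHA PELs,
# using mg/m3 = ppm * MW / 24.45 (molar volume of an ideal gas at 25 C, 1 atm).
MW = {"EGME": 76.09, "EGMEA": 118.13, "EGEE": 90.12, "EGEEA": 132.16}

def ppm_to_mg_m3(ppm: float, mw: float) -> float:
    return ppm * mw / 24.45

for name, pel_ppm in [("EGME", 25), ("EGMEA", 25), ("EGEE", 200), ("EGEEA", 100)]:
    print(f"{name}: {pel_ppm} ppm ~= {ppm_to_mg_m3(pel_ppm, MW[name]):.0f} mg/m3")
# Output: ~78, ~121, ~737, and ~541 mg/m3, matching the rounded PEL values.
```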
Summaries of the developmental and reproductive toxicity of EGEE, EGEEA, and EGME are presented in Tables 7-1 through 7-3. Because humans and the animal species studied metabolize these glycol ethers in the same way, the animal data are considered to be highly predictive of the hazard for humans.

# CORRELATION OF EXPOSURE AND EFFECTS

# EGEE

# Studies in Humans

No epidemiologic studies describe the effects of EGEE in humans, and only one case report exists. A 44-year-old woman who mistakenly drank 40 ml of EGEE (Section 4.1.1) experienced chest pains and vertigo and lost consciousness shortly after the ingestion. Upon hospitalization, the following signs and symptoms were observed: restlessness, cyanosis, tachycardia, swelling of the lungs, tonoclonic spasms, and breath smelling of acetone. The urine tested positive for protein, acetone, and RBCs; the liver became enlarged, and jaundice developed. After 44 days, the woman's condition improved, but insomnia, fatigue, and paresthesia of the extremities persisted for 1 year.

Several cases of anemia were reported in shipyard workers exposed to EGEE and EGME, and all cases were suspected to have been caused by the exposure. A detailed description of this study is provided in Chapter 4.

Few data are available on the reproductive effects of EGEE in humans. Ratcliffe et al. concluded that EGEE may have affected semen quality by lowering the sperm counts of male workers exposed to this chemical during the preparation of ceramic shells for casting. The potential also existed for skin absorption. In addition, the shipyard painters had been exposed to EGME, lead, and epichlorohydrin, all of which have been reported to affect semen quality. Airborne concentrations of lead were well below those known to depress sperm count. Most blood lead concentrations were below 20 µg%, with the highest single concentration being 40 µg%. Epichlorohydrin was not detected in the air sampling during the study.

# Studies in Animals

Studies in animals have provided evidence of adverse reproductive and developmental effects from EGEE exposure (see Appendix B). The LOAELs and the NOAELs of the following studies were used in determining the REL for EGEE. Testicular atrophy occurred in mice given oral doses of EGEE (1,000 mg/kg of body weight per day or more) for 5 days/wk during a 5-wk period; the NOAEL noted in this study was 500 mg EGEE/kg per day. Decreased testis weight, spermatocyte depletion and degeneration, and microscopic testicular lesions were observed in rats treated with 500 or 1,000 mg EGEE/kg per day for 11 days; no effects were observed at 250 mg/kg. Decreased sperm counts, abnormal sperm morphology, and decreased epididymal weights were found in rats given oral doses of 936, 1,872, or 2,808 mg EGEE/kg per day for 5 days; a no-effect level was not included in this study. Stenger et al. treated male rats orally with 46.5, 93, or 186 mg EGEE/kg per day for 13 wk; microscopic testicular lesions were found only at doses of 186 mg EGEE/kg per day.
Rats and rabbits of both sexes were exposed to 0, 25, 100, or 400 ppm EGEE for 6 hr/day, 5 days/wk over a 13-wk period. At the highest exposure (400 ppm EGEE), reduced testicular weights and microscopic testicular lesions were observed in rabbits, and reduced pituitary weights were observed in male rats. Reduced body weights were observed in male and female rabbits at 25 and 400 ppm EGEE, and reduced spleen weights were found in nonpregnant female rats at 100 and 400 ppm EGEE.

Studies have demonstrated adverse effects on the dam and the developing fetus. Stenger et al. treated rats orally with 11.5, 23, 46.5, 93, 186, or 372 mg EGEE/kg per day on g.d. 1 through 21. Decreased fetal body weights and skeletal defects were demonstrated at 93, 186, and 372 mg/kg per day. No effects were noted at 11.5, 23, or 46.5 mg/kg per day. In rabbits exposed to EGEE for 7 hr/day on g.d. 1 through 18, maternal toxicity and embryolethality were observed at 615 ppm, and embryolethality (22%), skeletal variations, renal and cardiovascular defects, and decreased maternal food consumption were observed at 160 ppm.

No effects were apparent on fertility or pregnancy outcome when female rats were exposed to 150 or 650 ppm EGEE for 7 hr/day, 5 days/wk during the 3 wk before breeding. Toxic signs were noted in female rats exposed at 650 ppm, but none were observed at 150 ppm. However, when pregnant rats were exposed to 765 ppm EGEE for 7 hr/day through g.d. 19, 100% intrauterine death occurred. Similar exposure at 200 ppm EGEE significantly increased fetal cardiovascular and skeletal defects. These effects on development were not influenced by exposures to filtered air or EGEE before pregnancy.

In rats exposed 6 hr/day to 250 ppm EGEE on g.d. 6 through 15, investigators observed increased postimplantation loss, retarded fetal growth, decreased ossification, and increased skeletal variations; they found no effects on fetuses at 10 or 50 ppm EGEE. Fetal skeletal variations were found in rabbits exposed 6 hr/day to 175 ppm EGEE on g.d. 6 through 18; no effects were found in fetuses at 10 or 50 ppm EGEE. Exposure of pregnant rats to 100 ppm EGEE on g.d. 14 through 20 extended gestation (by 0.7 day), and exposure to 100 ppm EGEE on g.d. 7 through 13 or 14 through 20 caused altered behavioral responses and altered brain neurochemical concentrations in offspring.

Effects on the fetus were also demonstrated in a dermal application study of EGEE. Four daily doses of 0.25 or 0.50 ml EGEE were applied to rats on g.d. 7 through 16. The higher dose resulted in decreased maternal body weight gain, ataxia, and 100% fetolethality; the lower dose produced fetotoxicity, 75% fetolethality, and malformations.

# Basis for Selecting the No Observable Adverse Effect Level (NOAEL)

Acute toxicity data for EGEE (Table 4-2) indicate that CNS and kidney effects occurred at higher EGEE concentrations than adverse reproductive and developmental effects. Smyth et al. reported narcosis, digestive tract irritation, and kidney damage in guinea pigs and rats exposed to 1,400 or 3,000 mg EGEE/kg. Dyspnea, damaged lungs, and toxic effects on WBCs were reported in mice exposed to 1,130 to 6,000 ppm EGEE, and the LC50 was 1,820 ppm EGEE. Adverse effects on the blood and hematopoietic system also occurred at higher EGEE concentrations than adverse reproductive and developmental effects. Data in Table 4-9 indicate that EGEE adversely affects the blood and hematopoietic system at concentrations of 125 to 2,000 ppm.
These effects include decreased Hb, Hct, RBC counts, and WBC counts, and increased osmotic fragility of erythrocytes. Limited human data correlate adverse reproductive effects with EGEE exposure. Table 7-1 presents the reproductive and developmental effects resulting from exposure to EGEE. In rabbits, the LOAEL for male reproductive effects was 400 ppm; this concentration caused decreased testis weight and microscopic testicular lesions, but 100 ppm and 25 ppm had no effect on the male reproductive system. In the male rat, the LOAEL (500 mg/kg) caused decreased testis weight and microscopic testicular lesions; the NOAEL was 250 mg/kg. Adverse developmental effects (behavioral and neurochemical alterations) were observed in rats exposed at 100 ppm EGEE in a study that did not demonstrate a NOAEL for these effects. The NOAEL for structural malformations in rats and rabbits was 50 ppm EGEE. Carpenter et al. had previously established a 62-ppm NOAEL for osmotic fragility.

Adverse developmental effects occur at lower EGEE concentrations than reproductive, hematotoxic, CNS, and kidney effects. Thus, limiting exposures to control adverse developmental effects will also control reproductive, hematotoxic, CNS, and kidney effects. The LOAELs and NOAELs in Table 7-1 indicate that 50 ppm is the highest NOAEL in rats that is also lower than the lowest LOAEL in rats. Because of the lack of human data and because the rat is the species most sensitive to EGEE, it is reasonable to use the rat NOAEL to extrapolate an equivalent dose for humans. NIOSH therefore deems it appropriate to use 50 ppm as the NOAEL for EGEE and to use the body weights of rats for calculating their daily NOAEL and extrapolating an equivalent dose for humans.

# EGEEA

No information is available about the toxic effects of EGEEA in humans.

# Studies in Animals

In mice administered EGEEA orally 5 days/wk for 5 wk, testicular atrophy occurred at 1,000, 2,000, and 4,000 mg/kg per day, and depletion and degeneration of spermatocytes occurred at 4,000 mg/kg per day. When doses were expressed as mmoles/kg per day, the dose-response relationships of EGEE and EGEEA were equivalent. No effects appeared at 500 mg EGEEA/kg per day. Testicular atrophy and spermatocyte depletion developed in rats fed 726 mg EGEEA/kg per day for 11 days.

Nelson et al. examined the effects of EGEEA on rat embryo-fetal development by exposing pregnant rats to 130, 390, or 600 ppm EGEEA for 7 hr/day on g.d. 7 through 15. The highest concentration (600 ppm) caused 100% fetolethality. A 56% increase in resorptions occurred at 390 ppm EGEEA, and fetal weights were significantly reduced at 130 and 390 ppm EGEEA. Visceral malformations of the heart and umbilicus occurred in fetuses at 390 ppm, and one fetus from dams exposed to 130 ppm EGEEA had a heart defect. In another study, rabbits were exposed to 25, 100, or 400 ppm EGEEA on g.d. 6 through 18. Adverse effects on the fetus included decreased fetal body weights and retarded ossification at 100 ppm EGEEA, and vertebral column malformations at 400 ppm EGEEA. Decreased maternal body weight gain and food consumption and increased resorptions also occurred at 400 ppm EGEEA. No adverse maternal effects developed at 25 or 100 ppm EGEEA, and no adverse effects on the fetus appeared at 25 ppm EGEEA.

These studies in animals provide ample evidence of adverse reproductive and developmental effects from EGEEA exposure. The following studies, including the LOAEL and the NOAEL of each, were used in determining the REL for EGEEA (the NOAEL-selection rule they rely on is sketched below).
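The selection rule used for EGEE above, and for EGEEA and EGME below — take the highest NOAEL that still falls below the lowest LOAEL for the most sensitive species — can be expressed in a few lines. This is an illustrative sketch only; the dose values in the example are placeholders patterned on the EGEE discussion, not data taken from Table 7-1.

```python
def select_noael(noaels, loaels):
    """Return the highest NOAEL that lies below the lowest LOAEL,
    or None if no NOAEL satisfies the criterion. Inputs are exposure
    levels in consistent units (e.g., ppm) pooled across studies."""
    lowest_loael = min(loaels)
    candidates = [n for n in noaels if n < lowest_loael]
    return max(candidates) if candidates else None

# Placeholder values: NOAELs of 10, 25, and 50 ppm reported across
# studies and a lowest LOAEL of 100 ppm -> 50 ppm is selected.
print(select_noael(noaels=[10, 25, 50], loaels=[100, 250]))  # -> 50
```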
Tyl et al. found evidence of maternal toxicity and fetotoxicity in rabbits exposed by inhalation to 100, 200, and 300 ppm EGEEA for 6 hr/day on g.d. 6 through 18. A 100% incidence of malformations occurred at 300 ppm EGEEA, and external, visceral, and skeletal malformations increased significantly at 200 ppm EGEEA. No effects were observed on dams or fetuses at 50 ppm EGEEA.

Tyl et al. found evidence of maternal toxicity (i.e., decreased body weight gain and food consumption and increased liver weight) in rats exposed by inhalation to 100, 200, and 300 ppm EGEEA for 6 hr/day on g.d. 6 through 15. Fetotoxicity was also found at 100, 200, and 300 ppm EGEEA, with an increased incidence of visceral, skeletal, and external malformations at 200 and 300 ppm EGEEA. Dams and fetuses showed no effects at 50 ppm EGEEA. Dermal treatment of pregnant rats on g.d. 7 through 16 with 1.4 ml EGEEA/day caused decreased maternal body weights and adverse developmental effects in offspring, including visceral malformations and skeletal variations.

# Basis for Selecting the No Observable Adverse Effect Level (NOAEL)

Reports in the literature indicate that EGEEA exerts adverse hematologic effects in experimental animals at 62 to 4,000 ppm. These effects include hemolysis, reduced RBC and WBC counts, and a reduction in Hb, Hct, and MCV. Acute toxicity data for EGEEA (Table 4-2) indicate that CNS and kidney effects occur at higher EGEEA concentrations than adverse reproductive and developmental effects. Smyth et al. reported narcosis and damaged kidneys in guinea pigs and rats treated with 1,910 or 5,100 mg EGEEA/kg. Hemoglobinuria, hematuria, and renal lesions were reported in rats treated with 2,900 to 3,900 mg EGEEA/kg, and transient hemoglobinuria and/or hematuria were reported in rabbits exposed to 2,000 ppm EGEEA for 4 hr.

Adverse reproductive and developmental effects generally occur at lower concentrations than hematotoxic, CNS, and kidney effects. Thus, limiting exposures to prevent adverse reproductive and developmental effects will also prevent hematotoxic, CNS, and kidney effects. Table 7-2 presents reproductive and developmental effects resulting from exposure to EGEEA. These data include the LOAELs for mice (1,000 mg/kg), rats (130 ppm), and rabbits (100 ppm). In the study by Tyl et al., 50 ppm EGEEA caused no effects in rabbits. The LOAELs and NOAELs presented in Table 7-2 indicate that 50 ppm is the highest NOAEL in rabbits that is also lower than the lowest LOAEL in rabbits. Because human data are lacking and because the rabbit is the animal species most sensitive to EGEEA, it is reasonable to use the rabbit NOAEL to extrapolate an equivalent dose for humans. NIOSH therefore deems it appropriate to use 50 ppm as the NOAEL for EGEEA and to use the body weights of the rabbits studied by Tyl et al. for calculating their daily NOAEL and extrapolating an equivalent dose for humans.

# EGME

# Studies in Humans

As reported in Chapter 4 (Section 4.3), adverse CNS effects (headache, forgetfulness, fatigue, personality change, nausea, and neurologic abnormalities) and hematotoxic effects (anemia and lymphopenia) were observed in workers exposed to EGME-containing solvents in shirt factories. Greenburg et al. studied workers fusing shirt collars at the same factory as Parsons and Parsons and observed similar effects (i.e., anemia, neurologic abnormalities, drowsiness, and fatigue).
Industrial hygiene measurements taken after the report of adverse health effects in workers indicated that the airborne concentration of EGME was about 25 ppm with windows open and 75 ppm with windows partially closed. Greenburg et al. stated that worker exposures to EGME had been higher than the measured concentrations because improvements had been made to exhaust and ventilation systems after the report of adverse health effects in workers.

Severe anemia, major encephalopathy, and bone marrow depression were observed in workers exposed to EGME dermally and by inhalation in the printing and microfilm industries. In one study, EGME was used as a cleaning agent and as a solvent, but the workers seldom wore gloves. No means were available to measure possible dermal absorption. Workers were exposed to 60 to 3,960 ppm EGME during the various cleaning operations, but after airborne EGME concentrations were reduced to the order of 20 ppm EGME, no further ill effects were noted. No mention was made of preventing skin exposure.

Nitter-Hauge reported general weakness, disorientation, nausea, and vomiting in two men who had each ingested about 0.1 liter of pure EGME, believing it to be ethyl alcohol. Upon admittance to the hospital, the men were suffering from cerebral confusion, pronounced hyperventilation, and profound metabolic acidosis. After i.v. treatment with sodium bicarbonate and ethyl alcohol, both patients made an uneventful recovery over a 4-wk period.

Limited evidence shows the adverse effects of EGME on the male reproductive system. Data suggest that testicle size may have been reduced in male workers with potential exposure to EGME (see Section 4.1.2.2). Lowered sperm counts were also noted in shipyard painters exposed to EGME and EGEE; airborne EGME ranged from nondetectable concentrations to 5.6 ppm. Details of this study, which are presented in Chapter 4, indicated that lead and epichlorohydrin (also present in the work environment) had no effect on semen quality. When hematologic parameters were studied in the same group of shipyard painters, several cases of anemia were reported. Exposure to EGME and EGEE was suspected as the cause of the hematologic disorders, but no dose-response relationship was established.

# Studies in Animals

Chapter 4 summarizes experimental studies demonstrating reproductive and developmental toxicity resulting from EGME exposure (see Appendix B for the complete studies). Doses of 62.5, 125, 250, 500, 1,000, or 2,000 mg EGME/kg per day were administered to mice 5 days/wk for 5 wk. Testicular atrophy was found at 250, 500, 1,000, and 2,000 mg EGME/kg per day, but not at lower doses. In a study to determine the temporal development and the site of the testicular lesion, rats were treated orally with 50, 100, 250, or 500 mg EGME/kg per day for up to 11 days. Testis weights were significantly reduced after 2 days at 500 mg/kg per day and after 7 days at 250 mg/kg per day. The lesion appeared localized in the primary spermatocyte 24 hr after a single dose of 100 mg/kg. Partial depletion and degeneration of spermatids and spermatocytes were also observed in rats treated with 100 mg EGME/kg per day for 11 days. No effects were noted over the 11-day treatment period at 50 mg EGME/kg per day. Treatment of rats with 50 mg EGME/kg per day for 5 days in another study caused a reduction in epididymal sperm counts and the appearance of abnormal sperm morphology at wk 4, followed by recovery at wk 8.
Adverse reproductive effects noted in male rats exposed to >100 ppm EGME by inhalation for 6 hr/day, 5 days/wk during 10-day to 13-wk periods included testicular atrophy, microscopic testicular lesions, and decreased testis weights; at 100 ppm, male rats showed no effects. Miller et al. observed testicular effects in rabbits exposed to 100 or 300 ppm EGME and slight microscopic changes in testicular tissue in 1 of 5 rabbits exposed to 30 ppm EGME. These investigators considered 30 ppm to be the NOAEL in rabbits.

The effects of EGME on rat reproductive performance were studied by exposing males or females to 30, 100, or 300 ppm EGME for 6 hr/day, 5 days/wk for 13 wk before mating with unexposed animals. At 300 ppm, EGME completely suppressed male fertility for 2 wk after exposure; fertility was partially restored 13 to 19 wk after exposure ended. No effects were observed on female reproductive performance at any concentration of EGME or on male reproductive performance at 30 or 100 ppm. No neonatal effects were found in this study at any EGME concentration.

Nagano et al. administered doses of 31.25, 62.5, 125, 250, 500, or 1,000 mg EGME/kg per day to mice on g.d. 7 through 14. Skeletal variations consisting of bifurcated and split cervical vertebrae were observed at the lowest dose, and increased malformations (spina bifida occulta) occurred at 62.5 mg EGME/kg per day.

Heart function was also evaluated in rat fetuses from dams treated orally on g.d. 7 through 13 with 25 or 50 mg EGME/kg per day. At 25 mg/kg per day, EGME caused a significant increase in the number of fetuses with abnormal QRS wave complexes, and at 50 mg/kg per day, it caused an increase in cardiovascular defects. Oral treatment of nonhuman primates with 36 mg EGME/kg during gestation resulted in one embryo that was missing a digit on each forelimb. Three of thirteen pregnancies (23%) at the 12-mg/kg dose ended in embryonic death.

Rats and rabbits were exposed by inhalation to 3, 10, or 50 ppm EGME for 6 hr/day on g.d. 6 through 15 (rats) or 6 through 18 (rabbits). Maternal toxicity (decreased body weight) in dams of both species was noted at 50 ppm EGME. A significant increase in the resorption rate was also noted in pregnant rabbits exposed to 50 ppm EGME. Significant increases in the incidence of two minor skeletal variations (i.e., lumbar spurs and delayed ossification) indicated slight fetotoxicity in rat fetuses from dams exposed to 50 ppm EGME. Rabbit fetuses from dams exposed to 50 ppm EGME exhibited a significant increase in the incidence of malformations of all organ systems and a significant decrease in mean body weight. No effects were noted in either species for dams and fetuses at 3 or 10 ppm EGME. Hanley et al. found minimally decreased body weight gains in mice exposed to 50 ppm EGME for 6 hr/day on g.d. 6 through 15. Examination of fetuses from dams exposed to 50 ppm EGME revealed statistically significant increases in the incidence of extra lumbar ribs and of unilateral testicular hypoplasia. No adverse effects were noted in dams or fetuses at 10 ppm EGME.

In another study, pregnant rats were exposed to 100 or 300 ppm EGME for 6 hr/day on g.d. 6 through 17, and males were exposed to 100 or 300 ppm EGME for 6 hr/day during a 10-day period. At 100 ppm, EGME increased gestation time and decreased the number of pups and live pups. At 300 ppm, EGME decreased maternal body weight and produced 100% fetolethality.
Male rats showed testicular effects after 10 exposures to 300 ppm EGME, but not after exposures to 100 ppm. In a dominant lethal study, male rats were exposed by inhalation to 25 or 500 ppm EGME for 6 hr/day over 5 days. Rats exposed to 500 ppm showed decreased fertility during wk 3 through 8, and rats exposed to 25 ppm EGME showed no adverse effects on fertility. Nelson et al. exposed male rats to 25 ppm EGME for 7 hr/day, 7 days/wk during a 6-wk period. These rats were then mated with untreated females that were allowed to deliver and rear their young. In the same study, pregnant females were exposed to EGME for 7 hr/day on g.d. 7 through 13 or 14 through 20 and allowed to deliver and rear their young. Significant differences in avoidance conditioning were observed in offspring of dams exposed on g.d. 7 through 13, but not in offspring of dams exposed on g.d. 14 through 20. Brain neurochemical deviations were noted in offspring from the paternally exposed group and in offspring from both maternally exposed groups.

In a dermal exposure study, female rats were exposed to solutions of 3%, 10%, 30%, or 100% EGME (10 ml/kg) in physiological saline. Reduced litter sizes were observed at the 10% concentration, 100% fetolethality occurred at the 30% concentration, and 100% maternal death was observed at the 100% concentration. A single dermal application of 500, 1,000, or 2,000 mg EGME/kg on g.d. 12 caused statistically significant increases (P<0.05) in external, visceral, and skeletal malformations. In the same study, dermal exposure of female rats to EGME (1,000 mg/kg on g.d. 12 or 2,000 mg/kg on g.d. 10 and 12) caused a statistically significant decrease in fetal body weights (P<0.05). The investigators determined 250 mg EGME/kg to be the NOAEL for adverse developmental effects.

# Basis for Selection of No Observable Adverse Effect Level (NOAEL)

Adverse CNS effects (encephalopathy) and hematotoxic effects (bone marrow depression, anemia, and leukopenia) were observed in workers exposed to EGME. However, there is limited evidence of an adverse effect on the male reproductive system as a result of EGME exposure. Acute toxicity data for EGME (Table 4-2) indicate that CNS, liver, and kidney effects occur at higher EGME concentrations than adverse reproductive and developmental effects. Wiley et al. reported tissue damage to the kidneys and liver in dogs and rabbits exposed to 2,130 mg EGME/kg. Narcosis and lung and kidney damage were reported in rats (3,250 to 3,400 mg/kg), rabbits (890 mg/kg), and guinea pigs (950 mg/kg), and digestive tract irritation and damaged kidneys were reported in rats and guinea pigs exposed to 246 and 950 mg EGME/kg, respectively. Adverse effects on the blood and hematopoietic system also occurred at higher EGME concentrations than adverse reproductive or developmental effects. Data in Table 4-9 indicate that 32 to 2,000 ppm EGME adversely affects the blood and hematopoietic system; these effects include increased osmotic fragility and decreased Hb, Hct, RBC counts, and WBC counts.

Table 7-3 presents the reproductive and developmental effects caused by exposure to EGME. In rats, the LOAEL of 50 mg EGME/kg per day caused decreased sperm counts and abnormal sperm morphology in two separate studies that did not demonstrate a NOAEL. In rabbits, the LOAEL (100 ppm EGME) caused microscopic testicular lesions and decreased testis weights, and the NOAEL was 30 ppm EGME.
In mice, the LOAEL (250 mg/kg per day) caused testicular atrophy, and the NOAEL was 125 mg/kg per day. Behavioral defects and neurochemical deviations were observed in the offspring of rats exposed to 25 ppm EGME. Retarded fetal ossification was observed in the offspring of mice treated with 31.25 mg EGME/kg per day (LOAEL). Adverse developmental effects were observed in the offspring of rats, rabbits, and mice exposed to a LOAEL of 50 ppm EGME; the NOAEL for these species was 10 ppm EGME. In the same study, the NOAEL for maternal effects in these species was 10 ppm EGME. Feuston et al. observed an increase (P<0.05) in external, visceral, and skeletal malformations in the fetuses of rats exposed to single dermal applications of 500, 1,000, or 2,000 mg EGME/kg on g.d. 12. The authors determined 250 mg EGME/kg to be the NOAEL for adverse developmental effects in this study.

Adverse developmental effects occur at lower EGME concentrations than reproductive, hematotoxic, CNS, liver, and kidney effects. Thus, limiting exposure to control adverse developmental effects will also control reproductive, hematotoxic, CNS, liver, and kidney effects. The data that demonstrate reproductive and developmental toxicity and the LOAELs and NOAELs presented in Table 7-3 indicate that in several species (rats, rabbits, and mice), 10 ppm is the highest NOAEL that is also lower than the lowest LOAEL. Because of the lack of human data, it is reasonable to use the NOAEL of 10 ppm to extrapolate an equivalent dose for humans.

# EGMEA

Few data are available on the toxicity of EGMEA. Bolt and Golka reported the occurrence of hypospadias at birth in two boys whose mother had been exposed to EGMEA during her pregnancies. The authors concluded that the hypospadias were caused by exposure to EGMEA. Testicular atrophy was observed in mice exposed orally for 5 days/wk during a 5-wk period to 500, 1,000, or 2,000 mg EGMEA/kg per day; no reproductive effects were noted at 62.5, 125, or 250 mg EGMEA/kg per day.

# BASIS FOR RECOMMENDED STANDARDS FOR EGEE, EGME, AND THEIR ACETATES

# Data Available from Studies in Humans and Animals

Toxic effects of human exposure to EGEE and EGME include personality change, memory loss, drowsiness, blurred vision, hearing loss, anemia, and leukopenia. However, data are limited on possible adverse reproductive and developmental effects of worker exposure to EGEE, EGME, and EGMEA, and no human data are available on EGEEA exposure. Cook et al. suggested that testicle size in males may have been reduced because of EGME exposure. Other investigators concluded that exposure to EGEE and EGME caused functional impairment in males by lowering sperm counts. The occurrence of hypospadias in two boys at birth was attributed to the mother's exposure to EGMEA during her pregnancies. Ballew and Hattis performed a quantitative risk analysis under contract to NIOSH to determine the risk of developmental effects in the offspring of pregnant women exposed to EGEE and EGME.

Although data for humans are limited, ample evidence from studies in animals indicates that EGEE, EGME, and their acetates adversely affect reproduction and development. In the absence of sufficient human data, NIOSH deems it appropriate to base the RELs for EGEE, EGME, and their acetates on animal data. The following procedure was therefore used to calculate equivalent human doses from animal data.
# Procedure for Calculating Equivalent Human Doses from Animal Data

No mechanistic models exist to describe the relationship of reproductive and developmental toxicity to exposure; only empirical models are available for use in a quantitative risk assessment (QRA). Because a threshold is assumed to exist for reproductive and developmental toxicity, a QRA model is inappropriate, since such models assume a no-threshold effect. Therefore, the following method was used to determine the RELs for EGEE, EGME, and their acetates.

Both humans and animals were assumed to retain 100% of inhaled EGEE, EGME, or their acetates. The retained dose for animals exposed at the NOAEL was calculated as follows by using the inhalation rate and the average body weights of the animals (see Table 7-5):

Retained dose for animals (mg/kg per day) = [NOAEL (mg/m3) x inhalation rate (m3/day) x 0.25 day] / animal body weight (kg)    (1)

That dose was converted to an equivalent exposure for humans by assuming a 70-kg body weight and an inhalation rate of 10 m3 in an 8-hr workday:

Equivalent exposure for humans (mg/m3) = [retained dose for animals (mg/kg per day) x 70 kg] / (10 m3/day)    (2)

(A table adapted from Ballew and Hattis presented the concentrations of EGEE or EGME associated with the indicated effects — including death in the first year after birth — under a best-estimate assumption of the degree of interindividual variability in susceptibility for the quantal developmental effects (log probit slope of 2) and under a more pessimistic assumption (log probit slope of 1).)

To allow for potential interspecies variability, an uncertainty factor of 10 was applied to the equivalent exposure for humans. An additional uncertainty factor of 10 was then applied to allow for potential intraspecies variability. The resulting concentration was converted to parts per million:

REL (ppm) = [equivalent exposure for humans (mg/m3) / 100] x [24.45 / mol wt of particular glycol ether]    (3)

# REL for EGEE and EGEEA

Although only limited data in humans show adverse reproductive or developmental effects from exposure to EGEE and EGEEA, studies in animals provide ample evidence of such effects [Doe 1984a; Foster et al. 1984; Nelson et al. 1984b; Tyl et al. 1988]. These animal data provide the basis for determining the RELs for worker exposure to EGEE and EGEEA and for instituting controls to reduce worker exposure. On the basis of the calculations presented in Equations 4 through 12, NIOSH recommends that occupational exposure to EGEE and EGEEA be limited to 0.5 ppm as a TWA for up to a 10-hr workshift during a 40-hr workweek. Because both EGEE and EGEEA can be absorbed percutaneously, dermal contact is prohibited. The data in Table 7-5 were used in Equations 4 through 12 to calculate the human equivalents to the daily animal NOAELs for EGEE and EGEEA.

# REL for EGME and EGMEA

Case reports and clinical studies demonstrated adverse CNS and hematotoxic effects in workers exposed to EGME, but data demonstrating adverse reproductive and developmental effects in the offspring of EGME-exposed workers are limited. Bolt and Golka reported hypospadias at birth in two boys whose mother was exposed to EGMEA during her pregnancies. Sufficient evidence in animal studies indicates that EGME exerts adverse reproductive and developmental effects [Nagano et al. 1981; Foster et al. 1983; McGregor et al. 1983; Miller et al. 1983a; Rao et al. 1983; Hanley et al. 1984a; Nelson et al. 1984a; Chapin et al. 1985a; Chapin et al. 1985b; Toraason et al. 1985; Scott et al. 1989; Wickramaratne 1986].
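As an arithmetic illustration of Equations 1 through 3 above, the sketch below recomputes a REL from an animal NOAEL. The function mirrors the equations; the example animal inputs (body weight and inhalation rate) are hypothetical placeholders, since Table 7-5 is not reproduced here, so the printed value only approximates the recommended REL.

```python
def rel_ppm(noael_mg_m3, inhal_rate_m3_day, body_wt_kg, mol_wt):
    """Animal NOAEL -> human-equivalent REL (ppm), per Equations 1-3.

    Assumes 100% retention of the inhaled dose, a 6-hr/day exposure
    (0.25 day), a 70-kg worker inhaling 10 m3 per 8-hr workday, and a
    combined 10 x 10 uncertainty factor for interspecies and
    intraspecies variability, as described in the text."""
    retained = noael_mg_m3 * inhal_rate_m3_day * 0.25 / body_wt_kg   # Eq. 1
    human_mg_m3 = retained * 70.0 / 10.0                             # Eq. 2
    return human_mg_m3 / 100.0 * 24.45 / mol_wt                      # Eq. 3

# Hypothetical inputs: the 10-ppm NOAEL for EGME (mol wt ~76.1) is
# ~31 mg/m3; assumed rat body weight 0.35 kg, inhalation 0.29 m3/day.
print(round(rel_ppm(31.0, 0.29, 0.35, 76.1), 2))  # ~0.14 ppm
```

With these made-up animal parameters the result (~0.14 ppm) lands near, but not exactly at, the 0.1-ppm REL derived in the text from the actual Table 7-5 inputs.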
EGMEA was also shown to have such effects in a study in which this glycol ether caused testicular atrophy in mice. Data from these animal studies warrant concern that EGME and EGMEA are capable of inducing similar adverse effects in exposed workers. Based on information presented in Table 7-3, a 10-ppm NOAEL was determined for EGME in rats, rabbits, and mice. Any effects that EGMEA might cause would be likely to occur through the initial conversion of EGMEA to EGME (see Section 4.2). Therefore, it is reasonable to propose the same REL for both compounds. An equivalent human dose was determined for EGME using the information presented in the study by Hanley et al. On the basis of the calculations presented in Equations 13 through 21, NIOSH recommends that occupational exposure to EGME and EGMEA be limited to 0.1 ppm as a TWA for up to a 10-hr workday during a 40-hr workweek. Because EGME and EGMEA can be absorbed percutaneously, dermal contact is prohibited. The data in Table 7-5 were used in Equations 13 through 21 to calculate the human equivalents to the daily animal NOAELs for EGME and EGMEA.

The hazard communication program is to be written and made available to workers and their designated representatives. Chemical manufacturers, importers, and distributors are required to ensure that containers of hazardous chemicals leaving their workplaces are labeled, tagged, or marked to show the identity of the chemical, appropriate hazard warnings, and the name and address of the manufacturer or other responsible party. Employers must ensure that labels on incoming containers of hazardous chemicals are not removed or defaced unless they are immediately replaced with other labels containing the required information. Each container in the workplace must be prominently labeled, tagged, or marked to show the identity of any hazardous chemical it contains and the hazard warnings appropriate for worker protection.

If a work area has a number of stationary containers that have similar contents and hazards, the employer may post hazard signs or placards rather than label each container. Employers may use various types of standard operating procedures, process sheets, batch tickets, or other written materials as substitutes for individual container labels on stationary process equipment. However, these written materials must contain the same information that is required on the labels and must be readily accessible to workers in the work areas. Pipes or piping systems are exempted altogether from the OSHA labeling requirements, although NIOSH recommends that filler ports and outlets be labeled. In addition, NIOSH recommends that a system be set up to ensure that pipes containing hazardous materials are identified to avoid accidental cutting and discharge of their contents. Employers are not required to label portable containers holding hazardous chemicals that have been transferred from labeled containers and that are intended only for the immediate use of the worker who performs the transfer. According to the OSHA definition of "immediate use," the container must be under the control of the worker performing the transfer and must be used only during the workshift in which the chemicals are transferred.

The OSHA Hazard Communication standard requires chemical manufacturers and importers to develop an MSDS for each hazardous chemical they produce or import.
Employers in the manufacturing sector (which includes paint and allied coating products) are required to obtain or develop an MSDS for each hazardous chemical used in the workplace. The MSDS is required to provide information such as the chemical and common names for the hazardous chemical. For hazardous chemical mixtures, the MSDS must list each hazardous component that constitutes 1% or more of the mixture. NIOSH suggests that any potential occupational carcinogen be listed. Ingredients present in concentrations of less than 1% must also be listed if there is evidence that the PEL may be exceeded or that the ingredients could present a health hazard in those concentrations.

Workers should also be trained in methods for detecting the presence or release of hazardous chemicals (e.g., monitoring conducted by the employer, continuous monitoring devices, and the visual appearance or odor of hazardous chemicals when released). Training should include information about measures workers can take to protect themselves from exposure to hazardous chemicals (e.g., the use of appropriate work practices, emergency procedures, and personal protective equipment).

# WORK PRACTICES

# Worker Isolation

If feasible, workers should be isolated from direct contact with the work environment by the use of automated equipment operated from a closed control booth or room. The control room should be maintained at a positive pressure so that air flows out of rather than into the room. However, when workers must perform process checks, adjustments, maintenance, or other related operations in work areas where EGME, EGEE, or their acetates are present, personal protective clothing and equipment may be necessary, depending on exposure concentrations and the potential for dermal contact.

# Storage and Handling

Containers of EGME, EGEE, or their acetates should be stored in a cool, dry, well-ventilated location away from any area containing a fire hazard. Outside or detached storage is preferred. These glycol ethers should be isolated from materials with which they are incompatible; contact with strong oxidizing agents may cause fires and explosions. Containers of solvents, including those that contain EGME, EGEE, or their acetates, should be tightly covered at all times except when material is transferred. Working amounts of these solvents should be stored in containers that (1) hold no more than 5 gal, (2) have spring-closing lids and spout covers, and (3) are designed to safely relieve internal pressure in case of fire. Because small amounts of residue may remain and present a fire hazard, containers that have held solvents should be thoroughly cleaned with steam and then drained and dried before reuse. Fittings should not be struck with tools or other hard objects that may cause sparks. Special spark-resistant tools of nonferrous materials should be used where flammable gases, highly volatile liquids, or other explosive substances are used or stored. In addition, all sources of ignition such as smoking and open heaters should be prohibited except in specified areas. Fire hazards around tank trucks and cars can be reduced by keeping motors turned off during loading or unloading operations. Specific OSHA requirements for the storage and handling of flammable and combustible liquids are given in 29 CFR 1910.106.

# Sanitation and Hygiene

The preparation, storage, or consumption of food should not be permitted in areas where there is exposure to EGME, EGEE, or their acetates.
The employer should make handwashing facilities available and encourage the workers to use them before eating, smoking, using the toilet, or leaving the worksite. Tools and protective clothing and equipment should be cleaned as needed to maintain sanitary conditions. Toxic wastes should be collected and disposed of in a manner that is not hazardous to workers or the environment. Vacuum pickup or wet mopping should be used to clean the work area at the end of each workshift or more frequently if needed to maintain good housekeeping practices. Collected wastes should be placed in sealed containers that are labeled as to their contents. Cleanup and disposal should be conducted in a manner that enables workers to avoid contact with the waste. Tobacco products should not be smoked, chewed, or carried uncovered in work areas. Workers should be provided with and advised to use facilities for showering and changing clothes at the end of each workshift. Work areas should be kept free of flammable debris. Flammable work materials (rags, solvents, etc.) should be stored in approved safety cans.

# Spills and Waste Disposal

Procedures for decontamination and waste disposal should be established for materials or equipment contaminated with EGME, EGEE, or their acetates. The following procedures are recommended in the event of a spill of these glycol ethers:

- Exclude persons not wearing protective clothing and equipment from areas of spills or leaks until cleanup has been completed.
- Remove all ignition sources.
- Ventilate the area of a spill or leak.
- Absorb small spills on paper towels. Allow the vapors to evaporate in a suitable place such as a fume hood, allowing sufficient time for them to clear the hood ductwork. Burn the paper towels in a suitable location away from combustible materials.
- Absorb large quantities with sand or other noncombustible absorbent material and atomize the contaminated material in a suitable combustion chamber.

# LABELING AND POSTING

In accordance with 29 CFR 1910.1200 (Hazard Communication), workers must be informed of chemical exposure hazards, of their potential adverse health effects, and of methods to protect themselves. Labels and signs also provide an initial warning to other workers who may not normally work near processes involving hazardous chemicals such as EGME, EGEE, or their acetates. Depending on the process, warning signs should state a need to wear eye protection or a respirator, or they may be used to limit entry to an area without protective equipment. For transient nonproduction work, it may be necessary to display warning signs at the worksite to inform other workers of the potential hazards. All labels and warning signs should be printed in both English and the predominant language of workers who do not read English. Workers who cannot read labels or posted signs should be identified so that they may receive information about hazardous areas and be informed of the instructions printed on labels and signs.

# EMERGENCIES

The employer should formulate a set of written procedures covering fire, explosion, asphyxiation, and any other foreseeable emergency that may arise during the use of materials that may contain EGME, EGEE, or their acetates. All potentially affected workers should receive training in evacuation procedures to be used in the event of fire or explosion.
All workers who are using materials containing these glycol ethers should be thoroughly trained in proper work practices that reduce the potential for starting fires and causing explosions.

# ENGINEERING CONTROLS

Engineering controls should be the principal method for minimizing exposure to airborne EGME, EGEE, or their acetates in the workplace. To achieve and maintain reduced airborne concentrations of these glycol ethers, adequate engineering controls are necessary (e.g., properly constructed and maintained closed-system operations and ventilation). Control technology applicable to spray painting is discussed in a NIOSH document. Airborne concentrations of these glycol ethers can be most effectively controlled at the source of contamination by enclosure of the operation and use of local exhaust ventilation. Enclosures, exhaust hoods, and ductwork should be kept in good repair so that designed airflows are maintained. Measurements of variables such as capture velocity, duct velocity, or static pressure should be made at least semiannually, and preferably monthly, to demonstrate the effectiveness of the mechanical ventilation system. The use of continuous airflow indicators (such as water or oil manometers marked to indicate acceptable airflow) is recommended. The effectiveness of the system should also be evaluated as soon as possible after any change in production, process, or control that may result in an increase in airborne contaminants. It is essential that any scheme for exhausting air from a work area also provide a positive means of bringing in at least an equal volume of air from the outside, conditioning it, and evenly distributing it throughout the exhausted area. The ventilation system should be designed and operated to prevent the accumulation or recirculation of airborne contaminants in the workplace.

To evaluate the use of personal protective equipment materials with EGMEA or EGEEA, users should consult the best available performance data and manufacturers' recommendations. Significant differences have been demonstrated in the chemical resistance of generically similar PPE materials (e.g., butyl) produced by different manufacturers. In addition, the chemical resistance of a mixture may be significantly different from that of any of its neat components. Users should therefore test the candidate material with the chemicals to be used. The worker should be trained in the proper use and care of the chemical protective clothing. After this clothing is in routine use, it should be examined along with the workplace to ensure that nothing has occurred to invalidate the effectiveness of these materials. The NIOSH publication A Guide for Evaluating the Performance of Chemical Protective Clothing may be helpful.

Safety showers and eye wash stations should be located close to operations that involve EGME, EGEE, or their acetates. Splash-proof chemical safety goggles or face shields (20 to 30 cm minimum) should be worn during any operation in which a solvent, caustic, or other toxic substance may be splashed into the eyes. In addition to the possible need for wearing protective outer apparel (e.g., aprons, encapsulating suits), workers should wear work uniforms, coveralls, or similar full-body coverings that are laundered each day. Employers should provide lockers or other closed areas to store work and street clothing separately. Employers should collect work clothing at the end of each workshift and provide for its laundering.
Laundry personnel should be informed about the potential hazards of handling contaminated clothing and instructed about measures to minimize their health risk. Employers should ensure that protective clothing is inspected and maintained to preserve its effectiveness. Clothing should be kept reasonably free of oil or grease. Workers and persons responsible for worker health and safety should be informed that protective clothing may interfere with the body's heat dissipation, especially during hot weather or in hot industries or work situations (e.g., confined spaces). Additional monitoring is required to prevent heat-related illness when protective clothing is worn under these conditions.

# Respiratory Protection

Engineering controls should be the primary method used to control exposure to airborne contaminants. Respiratory protection should be used by workers only in the following circumstances:

- During the development, installation, or testing of required engineering controls

The actual respirator selection should be made by a qualified individual, taking into account specific use conditions, including the interaction of contaminants with the filter medium, space restrictions caused by the work location, and the use of any required face and eye protective devices. Respirator selection tables are presented in Chapter 1.

# CHEMICAL SUBSTITUTION

The substitution of less hazardous materials can be an important measure for reducing worker exposure to hazardous materials.

# EXPOSURE MONITORING

An occupational health program designed to protect workers from adverse effects caused by exposure to EGME, EGEE, or their acetates should include the means for thoroughly identifying all potential hazards. Routine environmental sampling as an indicator of worker exposure is an important part of this program, as it provides a means of assessing the effectiveness of work practices, engineering controls, personal protective clothing and equipment, etc. Prior knowledge of the presence of certain types of interfering compounds in the sampled environment will greatly help the analyst in the selection of the appropriate analytical conditions for sample analysis. This list of compounds can be compiled from the material safety data sheets for the compounds that are used in or around the process where the sampling will take place.

Initial and routine worker exposure surveys should be made by competent industrial hygiene and engineering personnel. These surveys are necessary to characterize worker exposures and to ensure that controls already in place are operational and effective. Each worker's exposure should be estimated, whether or not it is measured by a personal sampler. Therefore, the sampling strategy should allow reasonable estimates of each worker's exposure. The NIOSH publication Occupational Exposure Sampling Strategy Manual may be helpful in developing efficient programs to monitor worker exposure.

In work areas where airborne exposures to EGME, EGEE, or their acetates may occur, an initial survey should be done to determine the extent of worker exposure. In general, TWA exposures should be determined by collecting samples over a full shift. Measurements to determine worker exposure should be taken so that the average 8-hr exposure is based on a single 8-hr sample or on two 4-hr samples. Several short-term interval samples (up to 30 min) may also be used to determine the average exposure concentration.
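Whichever sampling pattern is used, the full-shift average reduces to a time-weighted sum of concentration x duration over the shift. The following is a minimal sketch with hypothetical concentrations (the numbers are placeholders, not measured data):

```python
def twa_ppm(samples, shift_min=480):
    """8-hr TWA from (concentration_ppm, duration_min) interval samples.
    Unsampled time is treated as zero exposure, which understates the
    TWA if exposure actually continued during that time."""
    return sum(conc * dur for conc, dur in samples) / shift_min

# Two 4-hr samples:
print(twa_ppm([(0.6, 240), (0.3, 240)]))            # -> 0.45 ppm
# Several short-term samples plus exposure-free time:
print(twa_ppm([(2.0, 30), (0.4, 30), (0.0, 420)]))  # -> 0.15 ppm
```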
When the potential for exposure to these glycol ethers is periodic, short-term samples may be needed to replace or supplement full-shift sampling. Personal sampling (i.e., samples collected in the worker's breathing zone) is preferred over area sampling. If personal sampling is not feasible, area sampling can be substituted only if the results can be used to approximate worker exposure. Sampling should be used to identify the sources of emissions so that effective engineering controls or work practices can be instituted.

If a worker is found to be exposed to EGME, EGEE, or their acetates at concentrations below the REL but at or above one-half the REL, the exposure of that worker should be monitored at least once every 6 months or as otherwise indicated by a professional industrial hygienist. When the work environment contains concentrations exceeding the respective RELs for these glycol ethers, workers must wear respirators for protection until adequate engineering controls or work practices are instituted; exposure monitoring is recommended at 1-wk intervals. Such monitoring should continue until consecutive determinations at least 1 wk apart indicate that the workers' exposure no longer exceeds the REL. When workers' exposures are greater than one-half the REL but less than the REL, sampling should be conducted after 6 months; if the concentrations of these glycol ethers are lower than one-half the REL after two consecutive semiannual surveys, sampling can then be conducted annually. Exposure monitoring should be conducted whenever changes in production, process, controls, work practices, or weather conditions may result in a change in exposure conditions.

# MEDICAL MONITORING

# General Requirements

Workers exposed to EGME, EGEE, or their acetates are at risk of suffering adverse health effects. Medical monitoring as described below should be made available to all workers. The employer should provide the following information to the physician responsible for the medical monitoring program:

- Any requirements of the applicable OSHA standard or NIOSH recommended standard
- Identification of and extent of exposure to physical and chemical agents that may be encountered by the worker
- Any available workplace sampling results that characterize exposures for job categories previously and currently held by the worker
- A description of any protective devices or equipment the worker may be required to use
- The frequency and nature of any reported illness or injury of a worker
- The results of any monitoring of urinary MAA or EAA for any worker exposed to unknown concentrations of EGME or EGMEA during a spill or emergency (see Appendix G)

# Medical Examinations

The objectives of a medical monitoring program are to augment the primary preventive measures, which include industrial hygiene monitoring of the workplace, the implementation of engineering controls, and the use of proper work practices and personal protective equipment. Medical monitoring data may also be used for epidemiologic analysis within large plants and on an industrywide basis; they should be compared with exposure data from industrial hygiene monitoring.

Medical examinations are conducted before job placement and periodically thereafter. The preplacement medical examination allows the physician to assess the applicant's functional capacity and inform him or her of how it relates to the physical demands and risks of the job.
Furthermore, such an examination provides baseline medical data that can be compared with subsequent health changes. The preplacement examination should also provide information about prior occupational exposures. Periodic medical examinations after job placement are intended to detect work-related changes in health at an early stage.

The following factors should be considered during the preplacement medical examination and any periodic medical examinations of the worker: (a) exposure to chemical and physical agents that may produce interdependent or interactive adverse effects on the worker's health (including exacerbation of pre-existing health problems and nonoccupational risk factors such as tobacco use), and (b) potentially hazardous characteristics of the worksite (e.g., confined spaces, heat, and proximity to hazards such as explosive atmospheres and toxic chemicals). The type of information that should be gathered is discussed in the following subsections.

# Preplacement medical examination

# Medical history

The medical history should contain information about occupational history, including the number of years worked in each job. Special attention should be given to any history of occupational exposure to hazardous chemical and physical agents.

# Clinical examination

The preplacement clinical examination should determine the fitness of the worker to perform the intended job assignment. Appropriate pulmonary and musculoskeletal evaluation should be done for workers whose jobs may require extremes of physical exertion or stamina (e.g., heavy lifting), especially those who must wear personal respiratory protection. Because the standard 12-lead electrocardiogram is of little practical value in monitoring for asymptomatic cardiovascular disease, it is not recommended. More valuable diagnostic information is provided by physician interviews of workers that elicit reports of the occurrence and work-relatedness of angina, breathlessness, and other symptoms of chest illnesses.

Special attention should also be given to workers who require the use of eyeglasses. These workers must be able to wear simultaneously any equipment needed for respiratory protection, eye protection, and visual acuity, and they must be able to maintain its concurrent use during work activities. The worker's duties may be performed near unrelated operations that generate potentially harmful exposures (e.g., asbestos or cleaning or degreasing solvents). The physician must be aware of these potential exposures to evaluate possible hazards to the individual worker.

# Periodic medical examination

A periodic medical examination should be conducted annually or more frequently, depending on age, health status at the time of a prior examination, and reported signs or symptoms associated with exposure to EGME, EGEE, or their acetates. The physician should note any trends in health changes revealed by epidemiologic analyses of examination results. The occurrence of an occupationally related disease or other work-related adverse health effects should prompt an immediate evaluation of industrial hygiene control measures and an assessment of the workplace to determine the presence of a previously unrecognized hazard. The physician's interview with the worker is an essential part of a periodic medical examination.
The interview gives the physician the opportunity to learn of (1) changes in the work setting (e.g., confined spaces) and (2) potentially hazardous workplace exposures that are in the vicinity of the worker but are not related to the worker's job activities. During the periodic medical examination, the physician should re-examine organ systems at risk to note changes from the previous examination.

# BIOLOGICAL MONITORING

Urinary concentrations of the metabolites of EGME, EGEE, and their acetates may be useful biological indicators of worker exposure to these glycol ethers. Biological monitoring accounts not only for environmental concentrations and actual respiratory uptake, but also for absorption through the skin. Information about biological monitoring appears in Section 5.4 of this document, and guidelines for biological monitoring are given in Appendix G. Biological monitoring is suggested when the potential exists for (1) airborne exposure to EGME, EGEE, or their acetates at or above their respective RELs, or (2) skin contact as a result of accidental exposure or breakdown of chemical protective clothing (see Section 8.6.1). Monitoring of urinary MAA or EAA (see Appendix G) should be made available to any worker exposed to unknown concentrations of EGME, EGEE, or their acetates during a spill or other emergency. In the absence of skin exposure, a urinary MAA concentration of 0.8 mg/g creatinine or an EAA concentration of 5 mg/g creatinine approximates the concentration that would result from exposure at the REL for EGME (0.1 ppm) or EGEE (0.5 ppm) during an 8-hr workshift. If a worker's urinary MAA or EAA concentration suggests exposure to EGME, EGEE, or their acetates above their respective RELs, an effort should be made to ascertain the cause (e.g., failure of engineering controls, poor work practices, or nonoccupational exposures).

# RECORDKEEPING

Medical records as well as exposure and biological monitoring results must be maintained for workers as specified in Section 1.9 of this document. Such records must be kept for at least 30 years after termination of employment. Copies of environmental exposure records for each worker must be included with the medical records. These records must be made available to past or present workers or to anyone having the specific written consent of a worker, as specified in Section 1.9.4 of this document.

# RESEARCH NEEDS

The following research is needed to further reduce the risk of adverse developmental and reproductive effects from occupational exposure to EGME, EGEE, or their acetates:

- Investigations should be conducted in the workplace to relate glycol ether exposure to concentrations of metabolites in urine and to toxic effects such as reduction in testis size, semen quality, etc.
- Additional studies are needed to define more accurately the human reproductive hazards posed by EGME, EGEE, and their acetates.
- Evaluations of exposed populations are needed to correlate dermal absorption of EGME, EGEE, and their acetates with concentrations of metabolites in urine.
- Additional data should be collected to quantify airborne and dermal exposures to EGME, EGEE, and their acetates under actual conditions of use in the workplace.
- Other glycol ethers should be evaluated to identify any that have effects similar to those of EGME, EGEE, and their acetates (see Appendix E for a list of glycol ethers).
- Physiologically based pharmacokinetic models for EGME, EGEE, and their acetates need to be developed and validated in both human beings and the animal species in which NOAELs were determined.
- Methods are needed for quantitative monitoring of dermal exposure.
- Epidemiologic studies are needed to determine the effects of occupational exposure to EGME, EGEE, and their acetates.
- The method of Groeseneken et al. should be validated.

# APPENDIX A

# METHODS FOR SAMPLING AND ANALYSIS OF EGME, EGEE, EGMEA, AND EGEEA IN AIR*

# A.1 General Requirements for Sampling

Air samples are collected that represent the air a worker breathes while performing each job or specific operation. It is advisable to maintain records of the date, time, rate, duration, volume, and location of sampling.

# A.2 Collection and Shipping of Samples

1. Immediately before sampling, break the ends of the sampling tube to provide an opening at least one-half the internal diameter of the tube (2 mm).
2. Attach the sampling tube to the sampling pump with flexible tubing. The smaller section of charcoal is used as a backup and should be positioned nearest the sampling pump.
3. The charcoal tube should be placed in a vertical position during sampling to minimize channeling through the charcoal.
4. Air being sampled should not be passed through any hose or tubing before entering the charcoal tube.
5. The flow rate of sampling should be known with an accuracy of at least ±5%. Calibrate each sampling pump with a representative charcoal tube in line.
9. Capped charcoal tubes should be packed tightly and padded before they are shipped to minimize tube breakage during shipping.
10. A sample of the bulk material should be submitted to the laboratory in a glass container with a Teflon-lined cap. This sample should not be transported in the same container as the charcoal tubes.

A number of changes were made to Method 53 to accommodate the lower target concentrations. (1) The recommended air volume for TWA samples was increased from 10 liters to 48 liters. This allows for lower detection limits and increases the TWA sampling time to a more convenient 480 min (8 hr) when sampling at 0.1 liter/min. (2) A capillary GC column was substituted for a packed column to attain higher resolution. This was especially helpful in achieving better separation of 2ME and methylene chloride, a major component of the desorption solvent. (3) It was found that the desorption efficiency from wet charcoal was significantly lower for 2ME, and to a lesser extent for 2EE, at these lower concentrations. This problem was overcome by adding about 125 mg of anhydrous magnesium sulfate to each desorption vial to remove the desorbed water. Because charcoal will always collect some water from sampled air, all 2ME and 2EE air samples must be treated in this manner. Utilizing these three major modifications of Method 53, a successful evaluation was performed for these glycol ethers at the lower target concentrations. Also, a minor modification was made in the determination of desorption efficiencies. Aqueous instead of methanolic stock solutions were used to determine the desorption efficiencies for 2MEA and 2EEA. It was found that at these lower levels, when stock methanolic solutions are spiked on dry Lot 120 charcoal, part of the 2MEA and 2EEA reacts with the methanol to form methyl acetate and 2ME and 2EE, respectively. The reaction, which is analogous to hydrolysis, is called transesterification (alcoholysis) and is catalyzed by acid or base.
The surface of dry Lot 120 charcoal is basic, and the reaction was verified to occur by quantitatively determining methyl acetate and the corresponding alcohol (2ME for 2MEA samples, 2EE for 2EEA samples) from spiked samples. Transesterification was not observed when methanolic stock solutions were spiked onto wet charcoal. Therefore, transesterification is not expected to occur for samples collected from workplace air containing methanol as well as 2MEA or 2EEA, because workplace atmospheres are seldom completely dry. Because of the number of modifications and the extensive amount of data generated in this evaluation, the findings are presented as a separate method instead of a revision of Method 53. This method supersedes Method 53, although Method 53 is still valid at the higher analyte concentrations. Although hydrolysis of 2MEA and 2EEA does not appear to be a problem at lower concentrations, as a precautionary measure, the special requirement that 2MEA and 2EEA samples should be refrigerated upon receipt by the laboratory was retained from Method 53.

1.1.2 Toxic effects (This section is for information only and should not be taken as the basis of OSHA policy.) As reported in the Documentation of Threshold Limit Values (Refs. 5.2 to 5.5), all four analytes were investigated by Nagano et al. (Ref. 5.6).

# Detection limit of the analytical procedure

The detection limits of the analytical procedure are 0.10, 0.04, 0.04, and 0.03 ng per injection (1.0-µL injection with a 10:1 split) for 2ME, 2MEA, 2EE, and 2EEA, respectively. These are the amounts of each analyte that will give peaks with heights approximately 5 times the height of the baseline noise (Section 4.1).

# Detection limit of the overall procedure

The detection limits of the overall procedure are 1.0, 0.40, 0.37, and 0.31 µg per sample for 2ME, 2MEA, 2EE, and 2EEA, respectively. These are the amounts of each analyte spiked on the sampling device that allow recovery of amounts of each analyte equivalent to the detection limits of the analytical procedure. These detection limits correspond to air concentrations of 6.7 ppb (21 µg/m3), 1.7 ppb (8.4 µg/m3), 2.1 ppb (7.8 µg/m3), and 1.2 ppb (6.5 µg/m3) for 2ME, 2MEA, 2EE, and 2EEA, respectively (Section 4.2).

# Reliable quantitation limit

The reliable quantitation limits are the same as the detection limits of the overall procedure because the desorption efficiencies are essentially 100% at these levels. These are the smallest amounts of each analyte that can be quantitated within the requirements of recoveries of at least 75% and precisions (±1.96 SD) of ±25% or better (Section 4.3). The reliable quantitation limits and detection limits reported in the method are based upon optimization of the GC for the smallest possible amounts of each analyte. When the target concentration of an analyte is exceptionally higher than these limits, they may not be attainable at the routine operating parameters unless the instrument parameters are optimized.

# Instrument response to the analyte

The instrument response over the concentration range of 0.5 to 2 times the target concentration is linear for all four analytes (Section 4.4).

# Recovery

The recovery of 2ME, 2MEA, 2EE, and 2EEA from samples used in a 15-day storage test remained above 84%, 87%, 84%, and 85%, respectively, when the samples were stored at ambient temperatures. The recovery of analyte from the collection medium after storage must be 75% or greater (Section 4.5, from regression lines shown in Figures 4.5.1.2, 4.5.2.2, 4.5.3.2, and 4.5.4.2).
1.2.6 Precision (analytical procedure)

The pooled coefficients of variation obtained from replicate determinations of analytical standards at 0.5, 1, and 2 times the target concentrations are 0.022, 0.004, 0.002, and 0.002 for 2ME, 2MEA, 2EE, and 2EEA, respectively (Section 4.6).

# Precision (overall procedure)

The precisions at the 95% confidence level for the ambient temperature 15-day storage tests are ±11.7%, ±11.1%, ±12.3%, and ±11.2% for 2ME, 2MEA, 2EE, and 2EEA, respectively. These include an additional ±5% for sampling error. The overall procedure must provide results at the target concentration that are ±25% or better at the 95% confidence level (Section 4.7).

# Reproducibility

Six samples for each analyte collected from controlled test atmospheres and a draft copy of this procedure were given to a chemist unassociated with this evaluation. The samples were analyzed after 12 days of refrigerated storage. No individual sample result deviated from its theoretical value by more than the precision reported in Section 1.2.7 (Section 4.8).

# Advantages

1.3.1 Charcoal tubes provide a convenient method for sampling. The analysis is rapid, sensitive, and precise.

# Disadvantage

It may not be possible to analyze co-collected solvents using this method. Most of the other common solvents that are collected on charcoal are analyzed after desorption with carbon disulfide.

3.1.3 An electronic integrator or some other suitable means of measuring peak areas or heights. A Hewlett-Packard 18652A A/D converter interfaced to a Hewlett-Packard 3357 Lab Automation Data System was used in this evaluation.
3.1.4 Two-milliliter vials with Teflon-lined caps.
3.1.5 A dispenser capable of delivering 1.0 mL to prepare standards and samples. If a dispenser is not available, a 1.0-mL volumetric pipet may be used.
3.1.6 Syringes of various sizes for preparation of standards.
3.1.7 Volumetric flasks and pipets to dilute the pure analytes in preparation of standards.
3.2.5 A suitable internal standard, reagent grade. "Quant Grade" 3-methyl-3-pentanol from Polyscience Corporation was used in this evaluation.
3.2.6 The desorption solvent, consisting of methylene chloride/methanol, 95/5 (v/v), containing an internal standard at a concentration of 20 µL/liter.
3.2.7 GC grade nitrogen, air, and hydrogen.

# Reliable quantitation limit

The reliable quantitation limits were determined by analyzing charcoal tubes spiked with loadings equivalent to the detection limits of the analytical procedure.

# Precision (analytical procedure)

The precision of the analytical procedure for each analyte is the pooled coefficient of variation determined from replicate injections of standards. The precision of the analytical procedure for each analyte is given in Tables 4.6.1 to 4.6.4. These tables are based on the data presented in Section 4.4.

# 4.10 Stability of desorbed samples

The stability of desorbed samples was checked by reanalyzing the target concentration samples from Section 4.9 one day later using fresh standards. The sample vials were resealed with new septa after the original analyses and were allowed to stand at room temperature until reanalyzed. The results are given in the accompanying tables.

# Range, Stability, and Interference

This method has been validated for sampling air concentrations of the stated glycol ethers and their acetates from 2 to 25 ppm by volume in air. The method can be used for higher concentrations; however, the higher range has not been validated by Union Carbide.
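For orientation, the arithmetic connecting tube loadings, the sampled air volume, and the detection limits quoted above can be sketched in a few lines. The sketch below assumes 25°C and 1 atm (molar volume 24.45 L/mol); the function names are illustrative and are not part of the OSHA method.

```python
# Sketch of the air-concentration arithmetic behind the detection limits
# quoted above. Assumes 25 deg C and 1 atm (molar volume 24.45 L/mol);
# function names are illustrative, not part of the OSHA method.

MOLAR_VOLUME = 24.45  # L/mol at 25 deg C and 1 atm

MW = {               # molecular weights, g/mol (standard values)
    "2ME": 76.09,    # EGME
    "2MEA": 118.13,  # EGMEA
    "2EE": 90.12,    # EGEE
    "2EEA": 132.16,  # EGEEA
}

def ug_per_sample_to_mg_m3(mass_ug, air_volume_liters):
    """Convert analyte mass on the tube to an air concentration."""
    return mass_ug / air_volume_liters  # ug/L is numerically mg/m3

def mg_m3_to_ppb(conc_mg_m3, analyte):
    """Convert a mass concentration to parts per billion by volume."""
    return 1000.0 * conc_mg_m3 * MOLAR_VOLUME / MW[analyte]

# The 1.0-ug overall detection limit for 2ME on the recommended 48-liter
# TWA sample (480 min at 0.1 liter/min):
c = ug_per_sample_to_mg_m3(1.0, 48.0)
print(round(c * 1000), "ug/m3")                 # -> 21, as quoted above
print(round(mg_m3_to_ppb(c, "2ME"), 1), "ppb")  # -> 6.7, as quoted above
```

The same two functions reproduce the 1.7-, 2.1-, and 1.2-ppb figures quoted above for 2MEA, 2EE, and 2EEA.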
Because some of the compounds may become hydrolyzed when sampled in high-humidity atmospheres, the analysis of the charcoal tube samples must be completed within 24 hours of sampling. However, in the case of CELLOSOLVE solvent, samples can be stored for up to 14 days in a refrigerator but should be analyzed within 90 minutes after desorption. In the case of passive dosimeters, the sample may be refrigerated for up to five days prior to analysis without any significant loss. The presence of other glycol ether vapors with similar molecular weights and vapor pressures may result in interference. In addition to the 3M #3500 Organic Vapor Monitoring Badge, passive dosimeter badges from other manufacturers, such as DuPont's Pro-Tek® Organic Badge, may also be used for monitoring glycol ethers. Please contact the manufacturer for information concerning the suitability of their monitors for specific glycol ethers or glycol ether acetates.

# Instrument Parameters

# Chromatograph

Some firms are also known to provide analytical services for these monitors for specific chemicals, including glycol ethers. This may be useful for some of the smaller locations that do not have air sampling equipment and/or on-site analytical capabilities of their own. Further information about the NIOSH methods or passive monitors may be obtained from the NIOSH regional office or the equipment manufacturer.

In the EGME-treated mice, the mean testes weights were significantly lower than those of the control group at wk 2 to 5 and appeared to increase again towards the end of the study (statistics not given); there was also a tendency towards a dose-response relationship in the incidence of abnormal sperm (statistics not given). In the rat dominant lethal study, the total implant numbers among females mated at wk 5 to EGME-dosed rats were reduced in a dose-dependent manner (P<0.001). Although there was a rise in the preimplantation loss rate, there was no statistically significant evidence for the

# B.2.1.3 Inhalation

In inhalation studies, pregnant New Zealand white rabbits were exposed 7 hr/day during gestation. This study indicated that exposure of rabbits to EGEEA during organogenesis resulted in maternal toxicity at 100 to 300 ppm. Signs of this included significantly decreased weight gain and reduced gravid uterine weight (P<0.001) and elevated absolute liver weight (P<0.05). In rats, significantly (P<0.001) reduced weight gain and reduced food consumption were noted at 200 and 300 ppm EGEEA; significantly elevated relative liver weights were noted at 100, 200, and 300 ppm EGEEA (no statistics given). In rabbits, an increased incidence of totally resorbed litters at 200 ppm (P<0.05) and 300 ppm (P<0.001), an increase in nonviable fetuses at 300 ppm (P<0.05), and a decrease in viable fetuses per litter at 200 ppm and 300 ppm EGEEA (P<0.05) were observed. Fetotoxicity (reduced ossification) was observed at 100, 200, and 300 ppm EGEEA. The incidence of external, visceral, and skeletal malformations was increased at 200 ppm and 300 ppm (P<0.05). In rats, embryo/fetotoxicity was observed at 100, 200, and 300 ppm EGEEA. Observations included increased nonviable implantations/litter at 300 ppm (P<0.05), reduced fetal body weight/litter at 200 and 300 ppm (P<0.05), and an increased incidence (P<0.05) of external variations at 300 ppm and of visceral and skeletal variations at 100, 200, and 300 ppm.
There was no evidence of maternal, embryonic, or fetal toxicity (including teratogenicity) at 50 ppm EGEEA in either species. Tyl et al. [1988] concluded that 50 ppm EGEEA was the no-observable-effect level.

# B.2.1.4 Dermal Exposure

The effects of dermal exposure to EGEE on pregnant Sprague-Dawley rats have been investigated by Hardin et al.

# B.2.2 EGME and EGMEA

# B.2.2.1 Oral Administration

# B.2.2.2 Inhalation

# B.2.2.3 Dermal Exposure

The teratogenic potential of dermal exposure to EGME was estimated using the Chernoff and Kavlock in vivo assay. Groups of 10 pregnant Alpk/AP (Wistar-derived) rats were exposed to 3%, 10%, 30%, or 100% EGME in physiological saline at 10 mL/kg. The test compound was applied for 6 hr (occluded exposure) on g.d. 6 through 17. Rats were then allowed to deliver litters normally and rear their litters until day 5 postpartum. The application of 100% EGME was lethal to all dams, and 30% was lethal to all developing fetuses. At 10% EGME, the litter size was reduced by 26%, as was survival at day 5 (neither statistically significant). No adverse effects were seen in the 3% group. The results were evaluated using "rules" generated from the Chernoff and Kavlock in vivo teratology screen assay. The authors concluded that the data demonstrated a clear dose-response and that a 10% solution of EGME is likely to be a rat teratogen.

Exposure of New Zealand white rabbits and Fischer 344 rats to 0, 50, 100, 200, or 300 ppm EGEEA 6 hr/day on g.d. 6 through 18 (rabbits) and 6 through 15 (rats) resulted in adverse hematologic parameters in both species. In rabbits, there was evidence of enlarged erythrocytes (elevated MCV) at 300 ppm EGEEA (P<0.01) and significant dose-related decreases in the number of platelets at 100 (P<0.05), 200 (P<0.01), and 300 ppm EGEEA (P<0.001). In rats, the WBC count was significantly increased (P<0.001) at 200 and 300 ppm EGEEA. Statistically significant reductions in rat RBC count (P<0.01), Hb level (P<0.01), and Hct and RBC volume (P<0.05) were seen at the three highest exposures (100, 200, and 300 ppm EGEEA). Platelet counts were also significantly reduced at 200 ppm (P<0.001) and 300 ppm EGEEA (P<0.01) in the rat.

# B.3.1.3 Dermal Exposure

In a study by Truhaut et al., rabbits were exposed to a single 24-hr dermal application (10,500 mg/kg) of EGEEA; death followed between 1 and 4 days after application. The reduction in RBC count did not exceed 15% to 20%, and blood Hb levels showed little variation; however, a 50% to 70% decrease in WBC count was noted. In a study by Carpenter et al., hemolytic effects in the rat erythrocyte were demonstrated by a single 4-hr inhalation exposure of six female rats to EGME or EGMEA. The lowest concentrations causing significant osmotic fragility were 2,000 ppm EGME and 32 ppm EGMEA. In a study by Miller et al., Fischer 344 rats and B6C3F1 mice were exposed by inhalation to 0, 100, 300, or 1,000 ppm EGME 6 hr/day for a total of 9 days in an 11-day period. WBC counts of both rats and mice exposed to 1,000 ppm EGME were statistically lower than those of the controls (P<0.05). MCV, RBC counts, and Hb levels of male and female rats and male mice in the 1,000-ppm EGME exposure group were also statistically depressed (P<0.05). At 300 ppm EGME, similar but less severe effects were seen in rats; statistically lower (P<0.05) WBC counts in both sexes and Hb and RBC counts in females were noted.
Hematologic parameters in mice exposed at 300 ppm EGME were stated to be similarly affected, but data were not presented. Histopathology was performed on rats only, revealing reduced bone marrow cellularity, lymphoid depletion in the cortex of the thymus, and reduced numbers of lymphoid cells in the spleen and in the mesenteric lymph nodes in the 1,000-ppm EGME group. Both myeloid and erythroid elements of the bone marrow were markedly reduced in all rats exposed to 1,000 ppm EGME, and megakaryocytes were present in decreased numbers and were smaller than those of controls. The entire thymic cortical lymphoid population was depleted, with less dramatic reductions in the lymph nodes and spleen. Lymphoid organ toxicity persisted at 300 ppm EGME, but to a much lesser extent. In addition, serum total protein, albumin (males only), and globulin values in the 1,000-ppm EGME exposure group (rats) were significantly reduced (P<0.05).

This study showed that in man, EGEE is rapidly absorbed through the lungs. About 64% of the inhaled vapor was retained at rest, and retention increased as physical exercise was performed during exposure. The absorbed dose was apparently proportional to the inhaled concentration, and a linear relation was observed between uptake rate and exposure concentration. The rate of uptake increased when physical exercise was performed during exposure. The rate of uptake was higher as exposure concentration, or pulmonary ventilation rate, or both increased. Individual uptake rates seemed dependent only on transport mechanisms (pulmonary ventilation, or cardiac output, or both) and not on anthropometric data or body fat content. Respiratory elimination of unchanged EGEE accounted for about 0.4% of the total body uptake and occurred rapidly after cessation of exposure, followed by a much slower decrease. This slow decrease indicated that two pharmacological compartments were involved.

Groeseneken et al. also studied the urinary excretion of EAA in the 10 male volunteers who inhaled various concentrations of EGEE for 4 hr both at rest and during physical exercise in the previously described study. The subjects, divided into two groups of five, were either exposed at rest to concentrations of 2.7, 5.4, or 10.8 ppm EGEE (10, 20, or 40 mg/m3, respectively) or were exposed to 5.4 ppm EGEE at rest and during physical exercise (see Table B-2). Preshift EAA concentrations were still detectable; on a number of days, the preshift EAA concentrations were even higher than the immediate postshift values on the preceding and same day. For this reason, and because of the more constant exposure profiles, an estimation of the maximum EAA levels after prolonged daily exposure was made on the basis of the results of the first exposure period. A linear correlation (r = 0.92) was found between average exposure to EGEE and EGEEA over the 5 exposure days and EAA excretion at the end of the 5-day workweek. An EAA concentration of 150 ± 35 mg/g creatinine was found to correspond with repeated 5-day full-shift exposures to 5 ppm of EGEE or 5 ppm of EGEEA.
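Taken at face value, the linear relation above supports a rough proportional estimator for end-of-week EAA. The sketch below is an illustration only: it assumes strict proportionality through the origin, which the single reported calibration point (150 ± 35 mg/g creatinine at 5 ppm, r = 0.92) does not by itself establish, and the function name is invented.

```python
# Rough sketch: end-of-week urinary EAA from average full-shift exposure,
# assuming strict proportionality through the origin (an assumption;
# only the single 5-ppm calibration point above is from the study).

EAA_AT_5_PPM = 150.0  # mg EAA/g creatinine after a 5-day week at 5 ppm

def end_of_week_eaa(avg_ppm):
    """Estimate end-of-workweek urinary EAA (mg/g creatinine)."""
    return EAA_AT_5_PPM * avg_ppm / 5.0

print(end_of_week_eaa(0.5))  # ~15 mg/g creatinine at the 0.5-ppm REL
```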
Groeseneken et al. compared urinary EAA excretion in man and the rat after experimental exposure to EGEE. The human data were drawn from the previously mentioned inhalation study in which five subjects had been exposed to 2.7, 5.4, or 10.8 ppm EGEE. Urine samples collected at short intervals were pooled into 12-hr groups for comparison with rat data. Groups of five male Wistar rats were treated by oral intubation with 0.5, 1, 5, 10, 50, or 100 mg EGEE/kg. Rat urine samples were collected before EGEE exposure and then at 12-hr intervals up to 60 hr after dosing. The maximal excretion rate of EAA in human and rat urine was found within 12 hr after exposure or dosing. Afterwards, the decline of urinary EAA was much slower in man than in the rat. In man, the half-life of EAA was on average 42.0 ± 4.7 hr, a longer half-life than reported in the original study (21 to 24 hr). The authors attribute the longer half-life to the use of 12-hr pooled urine specimens in this study rather than specimens collected at 1- to 2-hr intervals in the previous study. In the previous study, half-lives were calculated from the peak excretion time (8 hr after the start of exposure). Examination of the excretion curves from the previous paper showed that elimination between 8 and 12 hr was more rapid than at longer time intervals, leading to a shorter calculated elimination half-life. The authors concluded that the longer elimination half-life was more consistent with half-lives seen in occupational exposure, which were as high as 48 hr. The recovery of EAA in human urine after 48 hr averaged 23%. Based on the half-life of EAA elimination of 42 hr, the authors estimated total recovery of EAA as 30% to 35% of the absorbed dose. In the rat, the half-life of EAA was 7.20 ± 1.54 hr. On average, 27.6% ± 6.1% of urinary EAA in rats was present as the glycine conjugate, with the extent of conjugation being independent of the dose. The extent of conjugation demonstrated a diurnal variation; the lowest extent of conjugation was found during the night. EAA glycine conjugates were absent in human urine. In man, the recovery of EAA was higher than in the rat for equivalent low doses of EGEE (0.5 and 1 mg/kg). When urinary excretion data for the lower dose range were normalized for body weight in both species, rats excreted EAA at a higher rate than did man for equivalent doses. The authors concluded that, although nonlinear kinetics had been observed in some animal studies at high doses, the elimination kinetics seen at low doses in this study were not dose-dependent in either rats or humans.

Groeseneken et al. studied the pulmonary absorption and elimination of EGEEA in 10 male subjects under various conditions of exposure and physical workload; the mean elimination half-life of EAA in these subjects was 23.6 ± 1.8 hr. However, 3 hr after the first peak EAA excretion, a second maximum excretion of EAA was observed; this second peak was especially pronounced after exposure during physical exercise. Redistribution of EGEEA, or EAA, or both from a peripheral to a central compartment could explain this phenomenon. Urinary EAA excretion was dependent on the EGEEA uptake rate as a consequence of higher exposure (P<0.001), and on the uptake rate of EGEEA at constant exposure as a consequence of physical workload (P<0.001). On average, 22.2% ± 0.9% of the absorbed EGEEA was metabolized and excreted in the urine as EAA within 42 hr. The total excretion of EAA in 42 hr was related both to total uptake from increasing concentrations of EGEEA (P<0.001) and to total uptake, at constant exposure, with increasing workload (P<0.001). Total EAA excretion was correlated to EGEEA concentration, uptake rate, and transport mechanisms (pulmonary ventilation, oxygen consumption, respiratory rate, etc.). In addition, EAA excretion was correlated to body fat (r = 0.40, P<0.001). Groeseneken et al. concluded that the metabolism of EGEEA proceeded through EGEE via esterases and then continued through the same excretion pathway as EGEE. Indeed, the kinetics of EAA excretion after exposure to EGEEA were very similar to those found after exposure to EGEE.
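The species difference in half-life is easier to read through a one-compartment, first-order model. The sketch below is an approximation for illustration only (the biphasic EGEEA excretion described above already departs from it), using just the half-lives reported in these studies.

```python
# Sketch: one-compartment, first-order elimination of EAA. An
# approximation for illustration; the second EAA excretion peak noted
# above shows that the real kinetics are more complex.
import math

def fraction_eliminated(t_hr, half_life_hr):
    """Fraction of ultimately excreted EAA cleared by time t."""
    return 1.0 - math.exp(-math.log(2.0) * t_hr / half_life_hr)

# Human EAA (t1/2 ~ 42 hr) versus rat EAA (t1/2 ~ 7.2 hr) at 48 hr:
print(round(fraction_eliminated(48, 42.0), 2))  # ~0.55 in man
print(round(fraction_eliminated(48, 7.2), 2))   # ~0.99 in the rat
```

On this reading, roughly half of the ultimately excreted EAA appears within 48 hr in man, consistent in direction with the reported 23% recovery at 48 hr against an estimated total recovery of 30% to 35%.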
Workers from a shipyard painting operation who applied paint containing EGEE were evaluated for EGEE exposure. Work conditions and practices varied considerably between brush and spray painters. Some workers were in confined spaces below deck, whereas others were in the open. The study was conducted in the winter, and the temperatures varied greatly depending on the painters' work areas. Information on work practices such as the number of hours spent painting, the type of paint used, the work area locations, and the use of personal protective equipment was gathered from questionnaires. Environmental breathing zone samples were collected for each worker for 3 days. Urine samples were collected every day for 1 wk, at the beginning and end of each workday, and EAA levels were measured. A wide range of EAA levels was noted in workers using EGEE-containing paints; this was probably due to variation in work assignments, work areas, and use of personal protective equipment. This study has not gone through extensive evaluation to determine the importance of the many variables on the levels of EAA in urine. The author could only conclude that there appeared to be a relationship between urinary EAA excretion and the use of paints containing EGEE.

Clapp et al. investigated EGEE exposure of workers engaged in casting precision metal parts. The 8-hr TWAs of EGEE ranged from nondetectable to 23.8 ppm. EGEE was not detected in any of the blood samples from the EGEE-exposed workers, but exposed workers were found to have measurable levels of EAA in urine (163 mg/g creatinine). EAA was not detected in the urine of unexposed control subjects.

# B.4.2 EGME and EGMEA

EGME has been shown to be a possible substrate for alcohol dehydrogenase (ADH), and thus oxidation of EGME via ADH and aldehyde dehydrogenase to methoxyacetic acid (MAA) is a potential route of metabolism. A second major urinary metabolite was identified as methoxyacetylglycine (approximately 20% of radioactivity). Analysis of radioactivity in the plasma demonstrated rapid disappearance (t1/2 = 0.56 hr) of EGME between 0 and 4 hr after dosing, with a corresponding appearance of MAA. Radioactivity clearance from plasma (t1/2) was estimated to be 19.7 hr. In the rats pretreated with pyrazole, the metabolism of EGME to MAA was inhibited. Analysis of radioactivity in plasma showed a slower disappearance of EGME (t1/2 = 42.6 ± 5.6 hr) and slower radioactivity clearance from plasma (t1/2 = 51.0 ± 7.8 hr) than in the controls. The percentage of the dose found in urine after 24 hr (9.8% ± 2.4%) and 48 hr (7.9% ± 2.2%) showed urinary excretion not to be the major route of elimination. MAA was not a major urinary component, and methoxyacetylglycine was not found in the urine from these rats. Pretreatment with the aldehyde dehydrogenase inhibitor disulfiram had no significant effect on plasma or urinary metabolic profiles.

Yonemoto et al. used an in vitro culture system to determine the effects of EGME and MAA on the development of post-implantation rat embryos. On g.d. 9, conceptuses were removed from the dams (Wistar-Porton strain) and placed in pairs in bottles that contained 3 mL of heat-inactivated male rat serum and 1 mL of test compound.
Ten to 15 conceptuses per group were cultured for 48 hr in 381 mg EGME or in 90, 180, 270, or 450 mg MAA and then examined under a stereoscopic dissecting microscope. EGME had no significant effects on embryonic growth and development when compared with the controls. MAA, however, produced statistically significant reductions in morphological development, crown-rump length, head length, number of somites, and yolk sac diameter when compared with those of the controls (P<0.001). These effects demonstrated a dose response: all were seen at 450 mg MAA; all but yolk sac diameter were affected at 270 mg; and only head length and morphological development were affected at 180 mg. No significant effects were seen at 90 mg MAA. The predominant abnormalities seen in affected conceptuses were irregular fusion of the neural tube (wavy or open neural suture line) and irregular segmentation of the somites. The authors concluded that the data demonstrated that MAA or its metabolites are the proximal toxins in vivo and that, at the organogenesis stage, the rat fetus in vitro lacks alcohol dehydrogenase activity.

The in vitro culture system used by Yonemoto et al. was used by Rawlings et al. to study the mechanism of teratogenicity of EGME. Conceptuses were explanted from pregnant Wistar-Porton rats at embryonic age 9.5 days and cultured for 48 hr with 2 or 5 mM MAA. At the end of the culture period, crown-rump length, head length, and yolk sac diameter were measured, and the degree of differentiation and development was evaluated by a morphological scoring system. MAA at the 5 mM concentration had an adverse effect on fetal development. MAA-exposed embryos had statistically significant reductions (P<0.01) in morphological score, crown-rump length, head length, and yolk sac diameter compared with those of the controls. MAA also produced statistically significant reductions (P<0.05) in the protein content of the embryo. No statistically significant reductions in growth parameters were seen at the 2 mM level. However, irregularity of the neural suture line was seen in 100% of the MAA-exposed embryos. Other abnormalities observed in the MAA-exposed groups included abnormal otic and somite development, turning failure, open cranial folds, and abnormal yolk sac.

As has been demonstrated, the induction of paw malformations following in utero as well as in vitro exposure to EGME appears to depend on the oxidation of EGME to MAA. Sleet et al. investigated the relationship between the induction of paw malformations and the disposition of EGME in the maternal and embryonal compartments. Pregnant CD-1 mice were dosed by gavage on g.d. 11 with either EGME (1.3 to 6.6 mmol/kg, 100 to 500 mg/kg, or 5.2 µl/g) or MAA (1.1 to 7.7 mmol/kg, 100 to 693 mg/kg, or 4.9 µl/g) and were sacrificed on g.d. 18. Fetuses were delivered by laparotomy and weighed before external examination for paw defects. The embryotoxic potencies of EGME and MAA were determined by comparing the dose-dependent incidence of digit anomalies. EGME and MAA were equipotent in causing digit malformations. The ADH inhibitor 4-methylpyrazole administered orally 1 hr before EGME reduced the incidence of malformations 60% to 100%, depending on the dosing regimen. Oxidation of EGME to MAA was nearly complete after 1 hr, when approximately 90% of the (14C) in the maternal compartment and conceptus coeluted with authentic (14C)-MAA on HPLC. Embryonic (14C)-MAA levels were 1.2 times those in plasma 1 hr and 6 hr after dosing; by 6 hr, however, concentrations in the embryo had declined to approximately 50% of 1-hr values. Dams treated i.v. with (14C) MAA had higher (14C) blood levels than did dams treated orally, but the offspring of the former had fewer digit malformations. The authors concluded that peak and steady-state plasma levels of MAA, as well as embryonic MAA levels, do not appear to determine the embryotoxic outcome, whereas further metabolism of MAA does.
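Because the Sleet et al. comparison depends on equimolar dosing, it is worth confirming that the mg/kg ranges quoted above are molar equivalents. A short check (the molecular weights are standard values, not taken from the paper):

```python
# Check that the EGME and MAA gavage doses quoted above are equimolar.
# Molecular weights are standard values (g/mol), not from the paper.

MW_EGME = 76.09  # 2-methoxyethanol
MW_MAA = 90.08   # methoxyacetic acid

for mmol_per_kg in (1.3, 6.6):
    print(f"EGME {mmol_per_kg} mmol/kg = {mmol_per_kg * MW_EGME:.0f} mg/kg")
for mmol_per_kg in (1.1, 7.7):
    print(f"MAA {mmol_per_kg} mmol/kg = {mmol_per_kg * MW_MAA:.0f} mg/kg")
# EGME: ~99 and ~502 mg/kg (quoted above as 100 to 500 mg/kg)
# MAA:  ~99 and ~694 mg/kg (quoted above as 100 to 693 mg/kg)
```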
EGME uptake and urinary MAA excretion were examined in seven male subjects exposed at rest to 5.1 ppm EGME (16 mg/m3) by mask for four 50-min periods. There was a short 10-min break at the end of each 50-min period to allow for urine collection. Urine samples were collected immediately before the beginning of the experiment and at hourly intervals until the fourth hour after exposure. Collections were taken until the morning of the fifth day after the exposure period (four 2-hr collections, one 8-hr collection, and eight 12-hr collections). Urinary MAA was then measured by the method of Groeseneken et al. Retention of EGME was 76% during the 4-hr exposure period. The uptake rate showed no significant variation because of constant pulmonary ventilation and a fixed exposure concentration. On average, 19.4 ± 2.1 mg EGME was inhaled during the 4-hr exposure period. MAA was detected in the urine during exposure and up to 120 hr after exposure. The elimination half-life averaged 77.1 ± 9.5 hr. On average, 54.9% ± 4.5% of inhaled EGME was excreted within 120 hr of the start of exposure; half of this amount was excreted within 48 hr. By extrapolation, the total amount of MAA was estimated at 85.5% ± 4.9% of inhaled EGME.
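The inhaled mass reported in this study can be reproduced from the exposure concentration alone. The sketch below assumes a resting minute ventilation of about 5 L/min; that figure is an assumption chosen for illustration (the paper reports the inhaled mass directly, not the ventilation).

```python
# Sketch: inhaled and retained EGME dose for the chamber study above.
# The 5-L/min resting minute ventilation is an assumed value used for
# illustration; the study reports the inhaled mass directly.

conc_mg_m3 = 16.0                            # 5.1 ppm EGME
ventilation_m3_per_hr = 5.0 * 60.0 / 1000.0  # 5 L/min = 0.3 m3/hr
duration_hr = 4.0
retention = 0.76                             # reported retention at rest

inhaled_mg = conc_mg_m3 * ventilation_m3_per_hr * duration_hr
retained_mg = inhaled_mg * retention
print(f"inhaled ~{inhaled_mg:.1f} mg, retained ~{retained_mg:.1f} mg")
# -> inhaled ~19.2 mg (reported: 19.4 +/- 2.1 mg); retained ~14.6 mg
```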
…as "aromatic amine," "safety solvent," or "aliphatic hydrocarbon" when the specific name is known. The "%" may be the approximate percentage by weight or volume (indicate basis) that each hazardous ingredient of the mixture bears to the whole mixture. This may be indicated as a range or maximum amount (i.e., 10% to 40% vol. or 10% max. wt.) to avoid disclosure of trade secrets.

# D.3 SECTION III. PHYSICAL DATA

The data in Section III should be for the total mixture. Include the boiling point and melting point in degrees Fahrenheit (Celsius in parentheses); vapor pressure, in conventional millimeters of mercury (mm Hg); vapor density of gas or vapor (air = 1); solubility in water, in parts/hundred parts of water by weight; specific gravity (water = 1); percent volatiles (indicate if by weight or volume) at 70°F (21.1°C); evaporation rate for liquids or sublimable solids, relative to butyl acetate; and appearance and odor. These data are useful for the control of toxic substances. Boiling point, vapor density, percent volatiles, vapor pressure, and evaporation rate are useful for designing proper ventilation equipment. This information is also useful for designing and deploying adequate fire and spill containment equipment. The appearance and odor may facilitate identification of substances stored in improperly marked containers or spilled substances.

# D.4 SECTION IV. FIRE AND EXPLOSION DATA

Section IV should contain complete fire and explosion data for the product. Include flashpoint and autoignition temperature in degrees Fahrenheit (Celsius in parentheses), flammable limits in percent by volume in air, suitable extinguishing media or materials, special fire-fighting procedures, and unusual fire and explosion hazard information. If the product presents no fire hazard, insert "NO FIRE HAZARD" on the line labeled "Extinguishing Media."

# D.5 SECTION V. HEALTH HAZARD INFORMATION

For the "Health Hazard Data" line, use a combined estimate of the hazard of the total product. This can be expressed as a TWA concentration, as a permissible exposure, or by some other indication of an acceptable standard. Other data are acceptable, such as the lowest LD50 if multiple components are involved.

…(3) gas chromatography analysis using flame ionization detection (FID). Urine (1 ml) was adjusted to pH 2 with HCl and extracted three times with methylene chloride. Phase transfer catalysis (a combination of ion-pair extraction and fluoroanhydride derivatization) was done by adding alkaline tetrabutylammonium hydrogen sulfate and PFBB to the methylene chloride extract. The mixture was rotated for 2 hr. Gas chromatography was employed to analyze 5 µl of the methylene chloride layer (bottom layer) using FID and a 6 ft × 1/4 in (4-mm id) glass column (packed with 1.95% QF-1 and 1.5% OV-17 on 80/100 mesh Supelcoport). Detection limits for MAA and EAA were 11.4 and 5.0 mg/liter of urine, respectively. Average recoveries (and relative standard deviations) were 78% (0.17) for MAA and 91% (0.14) for EAA.

Groeseneken et al. developed a method for determining MAA and EAA in urine based on lyophilization of urine samples followed by derivatization with diazomethane. After adjustment of urine specimens to pH 8 to 8.5 with KOH, 1 ml of urine and 50 µg of 2-furoic acid (FA) (internal standard) were added, and the sample was lyophilized. The dry residue was redissolved in 1 ml methylene chloride with added HCl and derivatized with diazomethane in methylene chloride. Gas chromatographic analysis using FID was performed on a fused silica capillary column (CP WAX 57 CB, 25 m × 0.33 mm id) with a split ratio of 10:1. The detection limits of MAA and EAA were 0.15 and 0.07 mg/liter of urine, respectively. Mean recoveries of MAA, EAA, and FA added to "blank" urine samples were 31.4%, 62.5%, and 58.4%, respectively; the recoveries of MAA and EAA were well correlated with those of the internal standard. Day-to-day variability for MAA and EAA was 6.0% and 6.4%, respectively; the corresponding within-day variability was 6.2% and 8.9%.

Smallwood et al. developed and validated a method for analysis of EAA in urine. Two ml of urine, along with potassium carbonate, tetrabutylammonium hydrogen sulfate, methylene chloride, and PFBB, were added to a screw-top culture tube. After 2 hr of mixing on a rotator at 60 rpm, the tube was heated for 20 min in a 50°C water bath. Additional mixing at room temperature, removal of the upper aqueous layer, and washing of the lower methylene chloride layer with distilled water removed unreacted reagents. The methylene chloride extract was dried with anhydrous sodium sulfate and loaded into an autosampler vial. Automated gas chromatographic analysis using FID was conducted with the use of a 6 ft × 4 mm id glass column packed with 4% SE-30 and 6% OV-210 on 100/120 mesh Chromosorb WHP. Standards were prepared in pooled urine. The analytical range for EAA was 5 to 100 mg/liter of urine; the limit of detection was 4 mg/liter; and the limit of quantitation was 7 mg/liter. Within-day variation was 0.5% to 1.8%, and day-to-day variation was 3.0% to 4.7%. Sample stability was confirmed for at least 8 months when specimens were stored at -20°C. The authors stated that the method could also be used for MAA and butoxyacetic acid (BAA) in urine. Preliminary data were presented in the paper indicating that the technique has the potential for assessing EGEE exposure in shipyard painters who use paints containing EGEE.
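All of these GC-FID methods quantitate against a calibration curve prepared from spiked standards. The sketch below fits a least-squares line over the 5- to 100-mg/liter analytical range of the Smallwood et al. method; the peak-area values are invented for illustration, and the internal-standard ratio correction used in practice is omitted.

```python
# Sketch: least-squares calibration for GC-FID quantitation of EAA.
# Standard concentrations mirror the 5- to 100-mg/liter range quoted
# above; the peak areas are hypothetical, and the internal-standard
# correction used by the published methods is omitted.
import numpy as np

std_conc = np.array([5.0, 25.0, 50.0, 75.0, 100.0])     # mg/liter
std_area = np.array([10.2, 51.5, 101.0, 153.8, 204.1])  # hypothetical

slope, intercept = np.polyfit(std_conc, std_area, 1)

def area_to_conc(area):
    """Invert the calibration line to get mg EAA per liter of urine."""
    return (area - intercept) / slope

print(round(area_to_conc(82.0), 1))  # -> ~40 mg/liter for this sample
```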
Groeseneken et al. observed that MAA appeared in the chromatograms of control subjects not exposed to EGME. Further investigation revealed that the diazomethane procedure was producing MAA by reacting with the hydroxyl group of naturally occurring glycolic acid. Groeseneken et al. further evaluated the existing methods for determining alkoxyacetic acids and concluded that the phase transfer catalysis procedures had the required specificity, without the production of artifacts, but lacked sufficient sensitivity to detect these metabolites at low occupational exposure concentrations. On the other hand, the methods utilizing diazomethane derivatization had the required sensitivity but lacked the specificity. Therefore, Groeseneken et al. developed an improved method that combined the best attributes of the two basic existing methods.

The procedure developed by Groeseneken et al. was described as follows. Urine was adjusted to pH 7; 1-ml aliquots were placed in small vials with 3-chloropropionic acid (internal standard) and lyophilized overnight. The dry residue was redissolved in methanol containing PFBB, and the vials were capped. The vials were heated at 90°C for 3 hr. After cooling, sample cleanup was done by adding distilled water and extracting the pentafluorobenzyl esters (PFB-esters) with methylene chloride. The methylene chloride extract was analyzed by gas chromatography using FID. A fused silica capillary column was used (CP Sil 5, 25 m × 0.32 mm id, 0.21 µm film thickness) with a split ratio of 5:1. Temperature programming was employed. All PFB-esters showed baseline resolution; retention times of 6.53 min (MAA), 7.77 min (EAA), and 8.59 min (internal standard) were observed. A typical gas chromatographic run, including cool-down and equilibration times, required about 30 min. Optimization studies were done for reagent concentrations as well as for urinary pH and reaction time. After correction for the partial solubility of methylene chloride in the 50:50 methanol:urine phase, recoveries of alkoxyacetic acids from urine averaged 95.0% (MAA), 94.8% (EAA), and 95.1% (BAA). The yield of the derivatization reaction averaged 99.5% (MAA) and 101.8% (EAA). Standard curves were set up for urine and were linear over the range of 0.1 to 200 mg/liter. The limit of detection, at a signal-to-noise ratio of 5, for the two acids was 0.03 mg/liter. Precision of the method, calculated from triplicate injections of 40 urine samples, averaged 3.5% (RSD), ranging from 1.1% at 25 mg/liter to 20% at 0.1 mg/liter.

On the basis of the Groeseneken et al. methods [1986a, 1989b]:

1. The presence of EAA in urine specimens (collected as specified) above a concentration of approximately 5 mg EAA/g creatinine is evidence for a single EGEE and/or EGEEA inhalation exposure corresponding to an 8-hr exposure to 0.5 ppm EGEE and/or EGEEA at 60 W of exercise. This value was extrapolated from 4-hr experimental exposures at 5 ppm at a 60-W workload to an 8-hr exposure at 0.5 ppm at a 60-W workload [Groeseneken et al. 1986c, 1987b].
NIOSH has not validated this method.

# G.2 Justification for Recommendations

Biological monitoring for glycol ether exposure is recommended, even though no validated guidelines can be provided as to the relationship between airborne exposure to glycol ethers and the alkoxyacetic acid urinary metabolites. The alkoxyacetic acid metabolites (EAA and MAA) are not only an index of exposure to or uptake of EGEE or EGME by the worker, but they are also an index of potential adverse health effects from these glycol ethers. Dermal absorption may be a major route of exposure to EGME and EGEE and their respective acetates. The potential exists for absorption of glycol ether vapors through wet skin. The influence of workload is significant for inhalation exposure: doubling the workload results in twice the uptake of the glycol ethers.

The potential for adverse effects, particularly decreased cardiac output, from the positive-pressure feature of some respirators has been reported. However, several recent studies suggest that this is not a practical concern, at least not in healthy individuals. Theoretically, the increased fluctuations in thoracic pressure caused by breathing with a respirator might constitute an increased risk to subjects with a history of spontaneous pneumothorax. Few data are available in this area. While an individual is using a negative-pressure respirator with relatively high resistance during very heavy exercise, the usual maximal-peak negative oral pressure during inhalation is about 15 to 17 cm of water. Similarly, the usual maximal-peak positive oral pressure during exhalation is about 15 to 17 cm of water, which might occur with a respirator in a positive-pressure mode, again during very heavy exercise. By comparison, maximal positive pressures such as those during a vigorous cough can generate 200 cm of water pressure. The normal maximal negative pleural pressure at full inspiration is -40 cm of water, and normal subjects can generate -80 to -160 cm of negative water pressure. Thus vigorous exercise with a respirator does alter pleural pressures, but the risk of barotrauma is substantially less with exercise than with coughing.

In some asthmatics, an asthmatic attack may be exacerbated or induced by a variety of factors including exercise, cold air, and stress, all of which may be associated with wearing a respirator. Although most asthmatics who are able to control their condition should not have problems with respirators, a physician's judgment and a field trial may be needed in selected cases.

# H.1.2 Cardiac Effects

The added work of breathing from respirators is small and could not be detected in several studies. A typical respirator might double the work of breathing (from 3% to 6% of the total oxygen consumption), but this is probably not of clinical significance [Gee et al. 1968].

# H.1.5 Psychologic Effects

This important topic is discussed in recent reviews by Morgan. However, some individuals are likely to remain psychologically unfit for wearing respirators.

# H.1.6 Local Irritation Effects

Allergic skin reactions may occur occasionally from wearing a respirator, and skin occlusion may cause irritation or exacerbation of preexisting conditions such as pseudofolliculitis barbae. Facial discomfort from the pressure of the mask may occur, particularly when the fit is unsatisfactory.
# H.1.7 Miscellaneous Health Effects

In addition to the health effects (described above) associated with wearing respirators, specific groups of respirator wearers may be affected by the following:

(1) Corneal irritation or abrasion. Corneal irritation or abrasion might occur with the exposure. This would, of course, be a problem primarily with quarter- and half-face masks, especially with particulate exposures. However, exposures could occur with full-face respirators because of leaks or inadvisable removal of the respirator for any reason. Although corneal irritation or abrasion might also occur without contact lenses, their presence is known to substantially increase this risk.

(2) Loss or misplacement of a contact lens. The loss or misplacement of a contact lens by an individual wearing a respirator might prompt the wearer to remove the respirator, thereby resulting in exposure to the hazard as well as to the potential problems noted above.

(3) Eye irritation from respirator airflow. The constant airflow of some respirators, such as powered air-purifying respirators (PAPRs) or continuous-flow air-line respirators, might irritate the eyes of a contact lens wearer.

# H.2 SUGGESTED MEDICAL EVALUATION AND CRITERIA FOR RESPIRATOR USE

The following NIOSH recommendations allow latitude for the physician in determining a medical evaluation for a specific situation. More specific guidelines may become available as knowledge increases regarding human stresses from the complex interactions of worker health status, respirator usage, and job tasks. Although some of the following recommendations should be part of any medical evaluation of workers who wear respirators, others are applicable for specific situations.

- A physician should determine fitness to wear a respirator by considering the worker's health, the type of respirator, and the conditions of respirator use.

The recommendation above leaves the final decision of an individual's fitness to wear a respirator to the person who is best qualified to evaluate the multiple clinical and other variables. Much of the clinical and other data could be gathered by other personnel. It should be emphasized that the clinical examination alone is only one part of the fitness determination. Collaboration with foremen, industrial hygienists, and others may often be needed to better assess the work conditions and other factors that affect an individual's fitness to wear a respirator.

- A medical history and at least a limited physical examination are recommended.

The medical history and physical examination should emphasize the evaluation of the cardiopulmonary system and should elicit any history of respirator use. The history is an important tool in medical diagnosis and can be used to detect most problems that might require further evaluation. Objectives of the physical examination should be to confirm the clinical impression based on the history and to detect important medical conditions (such as hypertension) that may be essentially asymptomatic.

- Although chest X-ray and/or spirometry may be medically indicated in some fitness determinations, these should not be routinely performed.

In most cases, the hazardous situations requiring the wearing of respirators will also mandate periodic chest X-rays and/or spirometry for exposed workers. When such information is available, it should be used in the determination of fitness to wear respirators. Data from routine chest X-rays and spirometry are not recommended solely for determining whether a respirator should be worn.
In most cases, with an essentially normal clinical examination (history and physical), these data are unlikely to influence the respirator fitness determination; additionally, the X-ray would be an unnecessary source of radiation exposure to the worker. Chest X-rays in general do not accurately reflect a person's cardiopulmonary physiologic status, and limited studies suggest that mild to moderate impairment detected by spirometry would not preclude the wearing of respirators in most cases. Thus it is recommended that chest X-rays and/or spirometry be done only when clinically indicated.

- The recommended periodicity of medical fitness determinations varies according to several factors but could be as infrequent as every 5 years.

Federal or other applicable regulations shall be followed regarding the frequency of respirator fitness determinations. This trial period should also be used to evaluate the ability and tolerance of the worker to wear the respirator. This trial period need not be associated with respirator fit testing and should not compromise the effectiveness of the vital fit-testing procedure.

- Examining physicians should realize that the main stress of heavy exercise while using a respirator is usually on the cardiovascular system and that heavy respirators (e.g., SCBA) can substantially increase this stress.

Accordingly, physicians may want to consider exercise stress tests with electrocardiographic monitoring when heavy respirators are used, when cardiovascular risk factors are present, or when extremely stressful conditions are expected. Physicians may also wish to consider the following conditions in selecting or permitting the use of respirators:

- History of spontaneous pneumothorax

The authors concluded that although the low dose of EGME was designed to be a no-effect level, there were slight, previously noted effects [Chapin et al. 1985a].

*This method was reprinted from OSHA. Further information on this subject or the Union Carbide method may be obtained from Union Carbide Corporation, Saw Mill River Road, Route 100 C, Tarrytown, NY 10591, (914) 789-223.

# B.1.1.2 Oral Administration

Twenty male albino rats were fed 1.45% EGEE in their basic diet for 2 yr. Upon histological examination, testicular enlargement and edema and tubular atrophy were observed in two-thirds of the animals that had received EGEE. These changes were not seen in untreated controls. The testicular lesions were more often bilateral than unilateral and consisted of marked interstitial edema. Oral administration of 46.5 or 93 mg EGEE/kg per day for 13 weeks to male dogs (three per group) had no adverse effect. On the other hand, oral administration of 186 mg EGEE/kg per day for 13 wk caused degenerative changes in the testes of all treated dogs.

*References for Appendix B can be found beginning on page 262.

# B.3.2 EGME and EGMEA

# B.3.2.1 Oral Administration

Oral administration of EGME and EGMEA has induced hematologic changes in laboratory animals. A statistically significant decrease (P<0.01) in WBC counts was found following oral administration of 500 mg EGME/kg or 1,000 mg EGMEA/kg to male JCL-ICR mice 5 times/wk for 5 wk. Statistically significant decreases were also observed in RBC and Hb values for the 1,000 mg EGME/kg group (P<0.01) and in Hb values for the 2,000 mg EGMEA/kg group (P<0.01). Treatment of female JCL-ICR mice with 1,000 mg EGME/kg on g.d. 7 through 14 also significantly decreased the leukocyte counts (P<0.01).
Grant et al. investigated the effects of subchronic oral exposure to EGME on the hematopoietic system of rats and the reversibility of such effects. Groups of 24 male F344 rats were orally dosed with EGME at 0, 100, or 500 mg/kg per day for 4 consecutive days. Six animals from each group were then sacrificed 1, 4, 8, and 22 days after the last treatment. Rats in the high-dose group displayed severely hemorrhagic femoral bone marrow with major loss of the normal nucleated tissue and damage to sinus endothelial cells on day 1. The normal architecture of the marrow was restored by day 4 post-treatment. Treatment with 500 mg EGME/kg for 4 days abolished extramedullary hemopoiesis (EMH) in the spleen; partial recovery was seen on day 4, followed by marked improvement on day 8 and a return to control values by day 22. The high-dose group also showed mild anemia characterized by reductions of the Hct and Hb values at day 4 (P<0.05 and P<0.001, respectively) and of the RBC, Hct, and Hb values at day 8 (P<0.05, P<0.05, and P<0.01, respectively). Leukocyte counts (neutrophils and lymphocytes) were significantly reduced in this group on day 1 (P<0.001) and did not return to control values by the end of the recovery period. Low-dose (100 mg EGME/kg per day) rats also had reduced leukocyte counts on day 1 (P<0.05). The authors concluded that the major hematological effect of EGME was leukopenia characterized by reductions in lymphocytes and neutrophils. Changes in the circulating blood together with reduced splenic EMH and bone marrow toxicity suggested an inhibitory action on erythropoiesis.

The toxicity of MAA will be discussed next to evaluate the importance of metabolism as a detoxification or bioactivation mechanism for EGME. In a study by Miller et al., groups of five male F344 rats were given daily doses of 0, 30, 100, or 300 mg MAA/kg per day orally on 8 days out of 10 and were then sacrificed 24 hr after the final dose. Rats given the high dose had significantly lower body weights on the fifth day and again when recorded on the tenth day (P<0.05).

The role of metabolism in EGME-induced testicular toxicity was investigated by Moss et al. using groups of nine male Sprague-Dawley rats pretreated i.p. with 400 mg pyrazole/kg, an alcohol dehydrogenase (ADH) inhibitor, or pretreated with 300 mg disulfiram/kg, an aldehyde dehydrogenase inhibitor. One hr after pretreatment with pyrazole or 24 hr after pretreatment with disulfiram, animals were injected i.p. with 250 mg of labeled EGME/kg. Controls with no pretreatment also received 250 mg (14C) EGME/kg i.p. Urinary excretion was the major route of elimination of EGME metabolites after 24 hr (40.4% ± 3.6% of dose) and 48 hr (14.8% ± 0.6% of dose) in controls. High-performance liquid chromatography (HPLC) analysis identified MAA as the major urinary metabolite at 0 to 24 hr (63% of the radioactivity) and at 24 to 48 hr (50% of the radioactivity).

# B.3.2.2 Inhalation

# APPENDIX C

# OCCUPATIONAL EXPOSURES TO THE GLYCOL ETHERS BY WORKSITE OR PROCESS

# APPENDIX D

# MATERIAL SAFETY DATA SHEET

The following sections describe the information that must be supplied for each product or material in the appropriate blocks of the Material Safety Data Sheet (MSDS).

# D.2 SECTION II. HAZARDOUS INGREDIENTS

The "materials" listed in Section II shall be those substances that are part of the hazardous product covered by the MSDS and that individually meet any of the criteria defining a hazardous material.
Thus, one component of a multicomponent product might be listed because of its toxicity, another component because of its flammability, and a third component for both its toxicity and its reactivity. Note that an MSDS for a single-component product must have the name of the material repeated in this section to avoid giving the impression that there are no hazardous ingredients.

# H.1 BACKGROUND INFORMATION

Brief descriptions of the health effects associated with wearing respirators are summarized below. More detailed analyses of the data are available in recent reviews by James and Raven et al.

# H.1.1 Pulmonary Effects

In general, the added inspiratory and expiratory resistances and dead space of most respirators cause an increase in tidal volume and a decrease in respiratory rate and ventilation (including a small decrease in alveolar ventilation). These respirator effects have usually been small both among healthy individuals and, in limited studies, among individuals with impaired lung function. This generalization is applicable to most respirators when resistances (particularly expiratory resistance) are low. Although most studies report minimal physiologic effects during submaximal exercise, the resistances commonly lead to reduced endurance and reduced maximal exercise performance.

Raven et al.: The physiological responses of mild pulmonary impaired subjects while using a "demand" respirator during rest and work. Am Ind Hyg Assoc J 42(4):247-257.
Mention of the name of any company or product does not constitute endorsement by the National Institute for Occupational Safety and Health. This document is in the public domain and may be freely copied or reprinted. Copies of this and other NIOSH documents are available from Publications Dissemination, DSDTT.

# FOREWORD

The purpose of the Occupational Safety and Health Act of 1970 (Public Law 91-596) is to assure safe and healthful working conditions for every working person and to preserve our human resources. The Act authorizes the National Institute for Occupational Safety and Health (NIOSH) to develop and recommend occupational safety and health standards and to develop criteria that will ensure that no worker will suffer diminished health, functional capacity, or life expectancy as a result of his or her work experience.

Because limited data are available from studies in humans, NIOSH bases its recommended exposure limits (RELs) for EGME, EGEE, and their acetates on data from studies in animals. The data were adjusted to allow for uncertainties in the extrapolation from animals to humans. NIOSH recommends that worker exposure to EGME and EGMEA in the workplace be limited to 0.1 part per million parts of air (0.1 ppm) (0.3 mg EGME/m³ and 0.5 mg EGMEA/m³).

Through criteria documents, NIOSH communicates recommended standards to regulatory agencies, including the Occupational Safety and Health Administration (OSHA) and the Mine Safety and Health Administration (MSHA).

# SECTION 1.2 EXPOSURE MONITORING

Exposure monitoring shall be conducted as specified in Sections 1.2.1 and 1.2.2. Results of all exposure monitoring shall be recorded and maintained as specified in Section 1.9.

# Industrial Hygiene Surveys

In work areas where airborne exposures to EGME, EGEE, or their acetates may occur, the employer shall conduct initial industrial hygiene surveys to determine the magnitude of exposure by using personal sampling techniques for an entire workshift. The employer shall keep records of these surveys. If the employer concludes that exposure concentrations for all glycol ethers are less than one-half the REL, the records must show the basis for this conclusion. Surveys shall be repeated at least annually and whenever any process change is likely to increase concentrations of airborne EGEE, EGME, EGEEA, and EGMEA. The employer shall also look for, evaluate, and record the potential for dermal exposure.

# Personal Monitoring

If workers are exposed to any glycol ether at or above one-half the REL, a program of personal monitoring shall be instituted to identify and to measure or calculate the exposure of each worker (see Section 8.8). Source and area monitoring may be a useful supplement to personal monitoring. In all personal monitoring, samples representative of the TWA exposures to airborne glycol ethers shall be collected in the breathing zone of the worker. Procedures for sampling and analysis shall be in accordance with Section 1.1.2. For each determination of an occupational exposure concentration, a sufficient number of samples (as determined in Leidel et al. [1977]) shall be collected to characterize each worker's exposure during each workshift. Although not all workers must be monitored, a sufficient number of samples must be collected to characterize the exposure of all workers. Variations in work and production schedules as well as worker locations and job functions shall be considered when determining sampling locations, times, and frequencies.
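As a minimal sketch of how a full-shift TWA is assembled from consecutive breathing-zone samples, consider the following. The sample values are hypothetical, and the calculation is simply the standard time-weighted average; it is not a procedure prescribed by this document.

```python
# Illustrative sketch: computing a full-shift time-weighted average (TWA)
# from consecutive personal samples. Concentrations and durations below are
# hypothetical values, not data from this document.

def full_shift_twa(samples, shift_minutes=480):
    """samples: list of (concentration_ppm, duration_minutes) tuples
    covering the workshift. Returns the TWA in ppm over the shift."""
    total_exposure = sum(c * t for c, t in samples)
    return total_exposure / shift_minutes

# Example: four consecutive charcoal-tube samples spanning an 8-hr shift.
samples = [(0.04, 120), (0.09, 120), (0.06, 120), (0.03, 120)]
twa = full_shift_twa(samples)
print(f"8-hr TWA: {twa:.3f} ppm")  # compare against the REL and the action level (REL/2)
```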
If a worker is exposed to EGME, EGEE, or their acetates at concentrations below the REL but at or above one-half the REL, the exposure of that worker shall be monitored at least once every 6 months, or more frequently as indicated by a professional industrial hygienist. If a worker is exposed to one of these glycol ethers at concentrations exceeding the REL, the worker must wear a respirator until adequate engineering controls and/or work practices are instituted. Controls shall then be initiated, the worker shall be notified of the exposure and of the control measures being implemented, and the worker's exposure shall be evaluated at least once a week. Such monitoring shall continue until two consecutive determinations at least 1 week apart indicate that the worker's exposure no longer exceeds the REL. At that time, semiannual monitoring can be resumed; if concentrations of the glycol ethers are less than one-half the REL after two consecutive semiannual surveys, sampling can be conducted annually. All episodes of skin contact shall be reported to a supervisor. These reports and the results of any investigation or corrective action are to be retained with other records.

# Biological Monitoring

Measurement of two glycol ether metabolites, ethoxyacetic acid (EAA, the metabolite of EGEE and EGEEA) and methoxyacetic acid (MAA, the metabolite of EGME and EGMEA), may help characterize occupational exposure to EGEE, EGME, and their acetates when the potential exists for airborne concentrations at or above one-half the REL, or for dermal contact from accidental exposure or a breakdown of chemical protective clothing (see Section 5). Guidelines for biological monitoring are presented in Appendix G.

# SECTION 1.3 MEDICAL MONITORING

The employer shall provide the following information to the physician performing or responsible for the medical monitoring program:
• The requirements of the applicable standard
• An estimate of the worker's potential exposure to glycol ethers, including any available results from workplace sampling
• A description of the worker's duties as they relate to exposure
• A description of any protective equipment the worker may be required to use

# General

• The employer shall institute a medical monitoring program for all workers who are exposed to airborne concentrations of EGEE, EGME, or their acetates at or above one-half the REL, or who have the potential for dermal exposure.
• If a worker has had a dermal exposure, the employer shall provide this information to the physician responsible for or performing the medical monitoring program.
• The employer shall ensure that all medical examinations and procedures are performed by or under the direction of a licensed physician.
• The employer shall provide the required medical monitoring at a reasonable time and place without loss of pay or other cost to the workers.
• The employer shall institute a biological monitoring program for all workers who are exposed to airborne concentrations of EGME, EGEE, or their acetates at or above one-half the REL, or who have the potential for dermal exposure. Guidelines for biological monitoring are presented in Appendix G.
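The exposure-monitoring schedule in Section 1.2 above reduces to a small decision rule, sketched below. The function name and return labels are illustrative only; they are not part of the recommended standard.

```python
# Minimal sketch of the monitoring-frequency logic described in Section 1.2.
# Thresholds follow the text (action level = one-half the REL).

def monitoring_frequency(twa_ppm, rel_ppm):
    action_level = rel_ppm / 2.0
    if twa_ppm > rel_ppm:
        # Respirator required; evaluate weekly until two consecutive
        # determinations >= 1 week apart no longer exceed the REL.
        return "weekly (controls + respirator until 2 consecutive results <= REL)"
    elif twa_ppm >= action_level:
        return "at least every 6 months"
    else:
        # After two consecutive semiannual surveys below the action level,
        # sampling can drop back to annual.
        return "annual (after 2 consecutive semiannual surveys < action level)"

print(monitoring_frequency(twa_ppm=0.06, rel_ppm=0.1))  # e.g., against the 0.1-ppm EGME REL
```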
# Preplacement Medical Examinations

Preplacement medical examinations shall include at least the following:
• A comprehensive medical, work, and reproductive history that emphasizes identification of existing medical conditions (e.g., those affecting the reproductive, hematopoietic, and central nervous systems, skin, liver, and kidneys) and previous occupational exposure to chemical or physical agents
• A medical examination giving special attention to the reproductive, hematopoietic, and central nervous systems, skin, liver, and kidneys
• Routine urinary monitoring for MAA and EAA before job placement, which may be a useful adjunct to environmental monitoring because it indicates both airborne and dermal exposures
• A judgment of the worker's ability to use positive- and negative-pressure respirators

# Periodic Medical Examinations

Periodic medical examinations shall be provided at least annually to all workers occupationally exposed to airborne concentrations of EGME, EGEE, and their acetates at or above one-half the REL, and to workers with the potential for dermal exposure. These examinations shall include at least the following:
• An update of medical and work histories
• A medical examination and tests as outlined above

# Medical Consultation

Workers who have a dermal exposure or who are exposed to concentrations of EGME, EGEE, or their acetates above the REL should be given the opportunity to consult with a physician regarding possible adverse health effects, including reproductive and developmental effects. OSHA Form 200 shall be modified to include any reports of dermal exposure.

# SECTION 1.4 LABELING AND POSTING

All labels and warning signs shall be printed both in English and the predominant language of workers who do not read English. Workers unable to read the labels and warning signs shall be informed verbally regarding the instructions printed on labels and signs in the hazardous work areas of the plant.

# Labeling

Containers of EGME, EGMEA, EGEE, or EGEEA used or stored in the workplace shall carry a permanently attached label that is readily visible. The label shall identify the glycol ether and give information regarding its effects on human health and emergency procedures (see Figure 1-1).

# Posting

Signs bearing information about the health effects of EGME, EGMEA, EGEE, or EGEEA shall be posted in readily visible positions in work areas and at entrances to work areas or building enclosures where the potential exists for exposures at or above the REL or where skin exposures may occur (see Figure 1-2). If respirators and personal protective clothing are needed during the manufacturing or handling of these glycol ethers or during the installation or implementation of required engineering controls, the following statement shall be added in large letters to the signs required in this section:

Respirators and protective clothing are required in this area.

In any area where emergency situations may arise, the required signs shall be supplemented with emergency first-aid procedures and the locations of emergency showers and eyewash fountains.

# SECTION 1.5 PROTECTIVE CLOTHING AND EQUIPMENT

Engineering controls and good work practices shall be used to keep the airborne concentrations of EGME, EGEE, and their acetates below the REL and to prevent skin and eye contact. When protective clothing and equipment are needed, they shall be provided by the employer at no cost to the worker.
# Eye and Face Protection

The employer shall provide chemical splash-proof safety goggles or face shields (20-cm minimum) with goggles and shall ensure that workers wear the protective equipment during any operation in which splashes of these glycol ethers are likely to occur. Devices for eye and face protection shall be selected, used, and maintained in accordance with 29 CFR 1910.133 and 30 CFR 56.15004 and 57.15004.

# Skin Protection

• Workers at risk of dermal exposure to these glycol ethers shall be provided with appropriate protective clothing such as gloves and disposable clothing. Information presented in Section 8.6.1 provides guidance in the selection of appropriate materials for gloves and clothing.
• Clothing contaminated with these glycol ethers shall be cleaned before reuse. Anyone who handles contaminated clothing or is responsible for its cleaning shall be informed of the hazards of these glycol ethers and the proper precautions for their safe handling and use.

[Figure 1-1. Example container label for EGME: WARNING! Exposure may be harmful to the reproductive system and blood. CAUTION! Combustible. Harmful if ingested, inhaled, or absorbed through skin. Irritating to skin, eyes, nose, throat, mouth, and lungs.]

• The employer shall ensure that all personal protective clothing and equipment is inspected regularly and maintained in a clean and satisfactory working condition.
• Protective clothing or gloves shall be evaluated on a routine basis to ensure that they are in good condition and no breakthrough has occurred.

# Respiratory Protection

Engineering controls and good work practices shall be used to control respiratory exposure to airborne contaminants. The use of respirators is the least desirable method of controlling worker exposures and should not be used as the primary control method during routine operations. However, NIOSH recognizes that respirators may be required to provide protection in certain situations such as implementation of engineering controls, short-duration maintenance procedures, and emergencies. Respirator selection guides for protection against EGEE, EGME, and their acetates are presented in Tables 1-1 through 1-3.

• Respirators shall be provided by the employer when such equipment is necessary to protect the health of the worker. The worker shall use the provided respiratory protection in accordance with instructions and training received.
• The employer shall ensure that respirators are properly fitted and that workers are instructed at least annually in the proper use and testing for leakage of respirators assigned to them.
• Workers should not be assigned to tasks requiring the use of respirators unless it has been determined that they are physically able to perform the work and use the equipment. The respirator user's medical status should be reviewed at least annually or more frequently as recommended by the physician responsible for the physical examination. See Appendix H for additional information about the medical aspects of wearing respirators.
• The employer shall be responsible for establishing and maintaining a respiratory protection program as follows:
1. Written standard operating procedures governing selection and use of respirators shall be established.
2. The worker shall be instructed and trained in the proper use of respirators and their limitations.
3. Where practicable, the respirators should be assigned to individual workers for their exclusive use.
5. Respirators shall be stored in a convenient, clean, and sanitary location.
6. Respirators used routinely shall be inspected during cleaning. Worn or deteriorated parts shall be replaced. Respirators for emergency use (e.g., self-contained devices) shall be thoroughly inspected at least once a month and after each use.
7. The respiratory protection program shall be regularly evaluated by the employer to determine its continued effectiveness.

The employer shall notify the worker when exposure exceeds the REL in the work area to which he or she is assigned.

# Training

Workers shall also be instructed about their responsibilities for following proper work practices and sanitation procedures to help protect their health and provide for their safety and that of their fellow workers. All workers in areas where exposure to EGME, EGEE, or their acetates may occur during spills or emergencies shall be trained in proper emergency and evacuation procedures.

# File of Written Hazard Communication

Required information shall be recorded on the material safety data sheet (see example in Appendix D) or on a similar OSHA form that describes the relevant toxic, physical, and chemical properties of the glycol ethers and mixtures of glycol ethers that are used or otherwise handled in the workplace. This information shall be kept on file and shall be readily available to workers for examination and copying.

# SECTION 1.7 ENGINEERING CONTROLS AND WORK PRACTICES

# Engineering Controls

Engineering controls shall be used as needed to maintain exposure to airborne glycol ethers within the limits prescribed in Chapter 1.

# Local Exhaust Ventilation

Local exhaust ventilation may be effective when used alone or in combination with process enclosure. When a local exhaust ventilation system is used, it shall be designed and operated to prevent accumulation or recirculation of airborne glycol ethers in the workplace. Exhaust ventilation systems discharging to outside air shall conform with applicable local, State, and Federal air pollution regulations and shall not constitute a hazard to workers or to the general population. Before maintenance work on control equipment begins, the generation of airborne glycol ethers shall be eliminated to the greatest extent feasible.

# Maintaining Design Airflow

Enclosures, exhaust hoods, and ductwork shall be kept in good repair to maintain designed airflows. Measurements such as capture velocity, duct velocity, or static pressure shall be made at least semiannually, and preferably monthly, to demonstrate the effectiveness (quantitatively, the ability of the control system to maintain exposures below the REL) of the mechanical ventilation system. NIOSH recommends the use of continuous airflow indicators such as water or oil manometers marked to indicate acceptable airflow. A record shall be kept showing design airflow and the results of all airflow measurements. Measurements of the effectiveness of the system in controlling exposures shall also be made as soon as possible after any change in production, process, or control devices that may increase airborne concentrations of EGME, EGEE, and their acetates.

# Forced-draft Ventilation

Forced-draft ventilation systems shall be equipped with remote manual controls and should be designed to shut off automatically in the event of a fire.

# General Work Practices

• Operating instructions for all equipment shall be developed and posted where EGME, EGEE, and their acetates are handled or used.
• Transportation, use, and disposal of these glycol ethers shall comply with all applicable local, State, and Federal regulations.
• These glycol ethers shall be stored in tightly closed containers and in well-ventilated areas.
• Containers shall be moved only with the proper equipment and shall be secured to prevent loss of control or dropping during transport.
• Storage facilities shall be designed to prevent contamination of workroom air and to contain spills completely within surrounding dikes.
• Ventilation switches and emergency respiratory equipment shall be located outside storage areas in readily accessible locations.
• Process valves and pumps shall be readily accessible and shall not be located in pits or congested areas.
• Glycol ether containers and systems shall be handled and opened with care. Approved protective clothing and equipment as specified in Section 1.5 shall be worn by workers who open, connect, and disconnect glycol ether containers and systems. Adequate ventilation shall be provided to minimize exposures of such workers to airborne glycol ethers.
• Glycol ether storage equipment and systems shall be inspected daily for signs of leakage. All equipment, including valves, fittings, and connections, shall be checked for leaks immediately after glycol ethers are introduced therein.
• When a leak is found, it shall be repaired promptly. Work shall resume normally only after necessary repair or replacement has been completed and the area has been well ventilated.

# Confined or Enclosed Spaces

• A permit system shall be used to control entry into confined or enclosed spaces holding containers of glycol ethers (e.g., tanks, pits, tank cars, and process vessels) where egress is limited. Permits shall be signed by an authorized representative of the employer and shall certify that preparation of the confined space, precautionary measures, and personal protective equipment are adequate and that precautions have been taken to ensure that prescribed procedures will be followed.
• Confined spaces that hold containers of EGME, EGEE, and their acetates shall be thoroughly ventilated, inspected, and tested for oxygen deficiency and for airborne concentrations of these glycol ethers. Every effort shall be made to prevent the inadvertent release of hazardous amounts of these glycol ethers into confined spaces in which work is in progress. Glycol ether supply lines shall be disconnected or blocked off before such work begins.
• No worker shall enter a confined space holding containers of glycol ethers without an entry large enough to admit a worker wearing a safety harness, lifeline, and appropriate personal protective equipment as specified in Section 1.5.
• Confined spaces shall be ventilated while work is in progress to keep the concentration of glycol ethers below the RELs, to keep the concentration of other contaminants below toxic or dangerous levels, and to prevent oxygen deficiency.
• If the concentrations of these glycol ethers in the confined space exceed the RELs, respiratory protective equipment is required for entry.
• Anyone entering a confined space shall be observed from the outside by another properly trained and protected worker. An additional supplied-air or self-contained breathing apparatus with safety harness and lifeline shall be located outside the confined space for emergency use. The person entering the confined space shall maintain continuous communication with the standby worker.
# Emergency Procedures

Emergency plans and procedures shall be developed for all work areas where there is a potential for exposure to EGME, EGEE, and their acetates. These plans and procedures shall include those specified below and any others considered appropriate for a specific operation or process. Workers shall be instructed in the effective implementation of these plans and procedures.

• The following steps shall be taken in the event of a leak or spill of these glycol ethers:
-All nonessential personnel shall be evacuated from the leak or spill area.
-Persons not wearing the appropriate protective equipment and clothing shall be restricted from the leak or spill area until cleanup has been completed.
-All ignition sources shall be removed.
-The area where the leak or spill occurs shall be adequately ventilated to prevent the accumulation of vapor.
-EGME, EGEE, EGMEA, and EGEEA shall be contained and absorbed with vermiculite, sand, paper towels, or equivalent materials.
-Small quantities of absorbed material shall be placed under a fume hood, and sufficient time shall be allowed for the liquid to evaporate and for the vapors to clear the ductwork in the hood.
-Large quantities of absorbed material shall be burned in a suitable combustion chamber.
-Absorbed materials shall be disposed of as hazardous waste.
-The spill area shall be cleaned with water.
• Only personnel trained in the emergency procedures and protected against the attendant glycol ether hazards shall clean up spills and control and repair leaks.
• Personnel entering the spill or leak area shall be furnished with appropriate personal protective clothing and equipment. Other personnel shall be prohibited from entering the area.
• Safety showers, eyewash fountains, and washroom facilities shall be provided, maintained in working condition, and made readily accessible to workers in all areas where skin or eye contact with EGME, EGEE, EGMEA, or EGEEA is likely. If one of these glycol ethers is splashed or spilled on a worker, contaminated clothing shall be removed promptly, and the skin shall be washed thoroughly with soap and water. Eyes splashed by these glycol ethers shall be irrigated immediately with a copious flow of water for 15 min. If irritation persists, the worker should seek medical attention.

# Storage

EGME, EGEE, and their acetates shall be stored in cool, well-ventilated areas and kept away from acids, bases, and oxidizing agents.

# Waste Disposal

All waste material shall be securely packaged in double bags, labeled, and incinerated. The incinerator residue shall be disposed of in a manner consistent with Federal (EPA), State, and local regulations, or it shall be disposed of in a licensed hazardous waste landfill.

# SANITATION AND HYGIENE

# Food, Cosmetics, and Tobacco

The following shall be prohibited in areas where EGME, EGEE, EGMEA, or EGEEA is produced or used: the storage, preparation, dispensing, or consumption of food or beverages; the storage or application of cosmetics; and the storage or use of all tobacco products.

# Handwashing

The employer shall provide handwashing facilities and encourage workers to use them before eating, smoking, using the toilet, or leaving the worksite.

# Laundering

• Protective clothing, equipment, and tools shall be cleaned periodically.
• The employer shall provide for the cleaning, laundering, or disposal of contaminated work clothing and equipment.
• Any person who cleans or launders this work clothing or equipment must be informed by the employer that it may be contaminated with EGME, EGEE, EGMEA, or EGEEA, chemicals that may cause adverse reproductive, developmental, hematologic (blood), and central nervous system (CNS) effects.

# Cleanup of Work Area

The work area shall be cleaned at the end of each shift (or more frequently if needed) using vacuum pickup. Collected wastes shall be placed in sealed containers with labels that indicate the contents. Cleanup and disposal shall be conducted in a manner that prevents worker contact with wastes and complies with all applicable Federal, State, and local regulations.

# Showering and Changing Facilities

Workers shall be provided with quick-drench shower facilities and with facilities for showering and changing clothes at the end of each workshift.

# RECORDKEEPING

# Exposure Monitoring

The employer shall establish and maintain an accurate record of all exposure measurements required in Section 1.2. These records shall include the name of the worker being monitored, social security number, duties performed and job locations, dates and times of measurements, sampling and analytical methods used, type of personal protection used (if any), and number, duration, and results of samples taken.

# Medical Monitoring

The employer shall establish and maintain an accurate record for each worker subject to the medical monitoring specified in Section 1.3. Pertinent medical records (i.e., the physician's written statement, the results of medical examinations and tests, medical complaints, reports of skin exposure, etc.) shall be retained in the medical files of all workers subject to airborne concentrations of EGME, EGEE, EGMEA, or EGEEA in the workplace at or above one-half the REL. Copies of applicable environmental monitoring data shall also be included in the worker's medical file.

# Record Retention

In accordance with the requirements of 29 CFR 1910.20(d) (Preservation of Records), the employer shall retain the records described in Sections 1.2, 1.3, and 1.6 for at least the following periods:
• 30 years for exposure monitoring records, and
• the duration of employment plus 30 years for medical surveillance records.

# Availability of Records

• In accordance with 29 CFR 1910.20 (Access to Employee Exposure and Medical Records), the employer shall allow examination and copying of exposure monitoring records by the subject worker, the former worker, or anyone having the specific written consent of the subject or former worker.
• Any medical records required by this recommended standard shall be provided upon request for examination and copying to the subject worker, the former worker, or anyone having the specific written consent of the subject or former worker.

# Transfer of Records

If the employer ceases to do business and no successor is available to receive and retain the records for the prescribed period, the employer shall notify NIOSH.

NIOSH has formalized a system for developing criteria on which to base standards for ensuring the health and safety of workers exposed to hazardous chemical and physical agents. Simple compliance with these standards is not the only goal. The criteria and recommended standards are intended to help management and labor develop better engineering controls and more healthful work practices.
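The recordkeeping provisions above enumerate the fields an exposure record must carry and the retention periods that apply. The following sketch illustrates one way those requirements could be represented; the field names and function are illustrative assumptions, not a format prescribed by this document.

```python
# Illustrative sketch of an exposure-monitoring record and the retention
# periods above: exposure records kept 30 years; medical records kept for
# the duration of employment plus 30 years. Field names are assumptions.

from dataclasses import dataclass
from typing import Optional

@dataclass
class ExposureRecord:
    worker_name: str
    duties_and_location: str
    sample_date: str          # date and time of the measurement
    method: str               # sampling and analytical method used
    personal_protection: str  # type of personal protection used, if any
    result_ppm: float         # result of the sample taken

def retention_expiry_year(record_year: int, employment_end_year: Optional[int],
                          medical: bool) -> int:
    """Last year through which the record must be retained."""
    if medical and employment_end_year is not None:
        return employment_end_year + 30
    return record_year + 30
```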
Recommended standards for EGEE, EGME, EGEEA, and EGMEA apply only to workplace exposures arising from the processing, manufacturing, handling, and use of these glycol ethers. The recommendations are not designed for the population at large, and any extrapolation beyond the occupational environment may not be warranted. The recommended standards are intended to protect workers from the acute and chronic effects of EGEE, EGME, EGEEA, and EGMEA. Exposure concentrations are measurable by techniques that are valid, reproducible, and available to industry and government agencies.

# SCOPE

# PRODUCTION METHODS AND USES

The ethylene glycol monoalkyl ethers EGME and EGEE are usually produced by a reaction of ethylene oxide with methyl or ethyl alcohol, but they may also be made by the direct alkylation of ethylene glycol with an alkylating agent such as dimethyl or diethyl sulfate [Rowe and Wolf 1982]. Temperature, pressure, reactant molar ratios, and catalysts are selected to give the desired product mix. Ethylene glycol monoalkyl ethers are not formed as pure compounds, but must be separated from the diethers of diethylene glycol, triethylene glycol, and the higher glycols. Ethylene glycol monoalkyl ether acetates are prepared by esterifying the particular glycol ether with acetic acid, acetic acid anhydride, or acetic acid chloride. Glycol ethers and their acetates are widely distributed and have been used commercially for more than 50 years.

# PROCESS AND WORKER JOB DESCRIPTIONS

The usefulness of glycol ethers and their acetate derivatives can be attributed to their physical properties, particularly their miscibility or high solubility in water and other organic solvents, and their low vapor pressures. These properties allow them to serve a number of functions in a variety of products. The following information was obtained during surveys conducted in various industries to determine occupational exposures to glycol ethers [Cal OSHA 1983; Meridian Research, Inc. 1987].

# Paints and Coatings

Although frequently comprising less than 10% of the final product, glycol ethers serve a variety of important functions in paints and coatings. One function is as a solvent to keep the various paint components in solution. Latex coatings contain glycol ethers or their acetate derivatives to enhance the coalescing properties of the product when applied. By slowing the evaporation rate, glycol ethers reduce moisture condensation or "blush." They also improve the penetration and bonding qualities of paints and coatings. Specialty products, such as aircraft or electrostatic paints, may contain 18% to 35% glycol ethers [Cal OSHA 1983].

The manufacture of paints and coatings is a batch process. Components are added manually or through a closed piping system to the mixing tank. Because glycol ethers generally constitute a small percentage of the total formulation, they are often added manually. After mixing, the product is packaged according to customer specifications. A variety of industries use paints and coatings, but as previously noted, these products usually contain a small percentage of glycol ethers (i.e., less than 10%). Lacquer containing less than 1% glycol ethers is used to coat wood products. However, electrostatic paints used in a spraying process for metal parts may contain 20% to 30% glycol ethers. Glycol ethers are also used in the manufacture of coated fabrics.
These fabrics pass through a dip pan to pick up the coating and then rise through a ventilated drying tower [Cal OSHA 1983].

# Cleaners and Cleaning Solvents

Cleaning agents that contain glycol ethers are spot removers, carburetor cleaners, metal cleaners, and glass cleaners. In these products, glycol ethers function as surface-active agents, enhancing the penetration of the product, clarifying the appearance, and, in glass cleaners, increasing the viscosity. The percentage of glycol ethers in these products is less than 5% [Cal OSHA 1983].

# NUMBER OF WORKERS POTENTIALLY EXPOSED

Based on information from the National Occupational Exposure Survey (NOES) [NIOSH 1983c], the estimated number of workers potentially exposed to glycol ethers in the workplace during the period 1981 to 1983 is as shown in the accompanying table.

Both children underwent surgery in the following years. The perineal and the penile hypospadias were corrected, chordee was removed in both children, and the undescended testes were moved to the scrotum. The older child was treated with chorionic gonadotrophin, which led to normal-sized testes. The authors stated that the risk of isolated hypospadia was between 1 in 300 and 1 in 1,800, while the risk for a boy whose brother has hypospadia was 1 in 24. They indicated that both the family history and medical examinations showed no overt risks other than the pronounced exposure of the mother to EGMEA during fetal development. The authors concluded that the hypospadias were actually caused by exposure to EGMEA.

There is only one case report that describes the effects of EGEE [Fucik 1969].

No statistically significant differences were found in hematology test results, hormone levels, or sperm counts between the exposed workers and the controls. The investigators suggested that testicular size may have been reduced; however, the decrease in size approached but did not reach statistical significance (P<0.05) in either length (P=0.19) or width (P=0.08).

# Market and Moody [1982]

# Boiano [1983]

NIOSH conducted a health hazard evaluation in 1983 to evaluate worker exposure to two solvent cleaners, an image remover, and the paint remover used in a silk screening process [Boiano 1983]. The silk screener using the image remover was monitored for exposure to EGEEA and cyclohexanone, while the worker using the paint remover was monitored for a variety of organic solvents, one of which was EGEEA. Although the workers were primarily exposed by inhalation, they may have also been exposed by skin absorption because personal protective clothing was not always worn. The workers complained of headaches, lethargy, sinus problems, nausea, and heartburn. When they were away from work, their symptoms improved. The silk screener using image remover had TWA exposures to EGEEA ranging from 1.3 to 3.3 ppm, with short-term excursions to 3.8 ppm. The silk screener using paint remover had TWA exposures to EGEEA ranging from 0.5 to 3.9 ppm, with a short-term excursion to 4.0 ppm. Measured airborne exposures thus were below occupational standards, but absorption through the skin may have contributed to the workers' overall exposure [Boiano 1983].

# 4.1.2.5 Gunter [1985]

In 1985, NIOSH conducted a health hazard evaluation of production areas at a plant that manufactured solid-state electronic circuits [Gunter 1985].

# 4.1.2.6 Ratcliffe et al.
[1986] NIOSH conducted an evaluation for possible adverse effects on testicular function in male workers potentially exposed to EGEE during the preparation of ceramic shells used to cast metal parts.

# Welch et al. [1988], Sparer et al. [1988], and

The effect of combined EGME and EGEE exposure on the reproductive potential of men who worked in a large shipbuilding facility was recently studied [Welch et al. 1988]. This site was selected for study because of a previous health hazard evaluation [Love and Donohue 1983].

[1985] demonstrated that ADH has a higher affinity for ethanol than for the glycol ethers [Sleet et al. 1988]. This finding confirmed the hypothesis that EGEEA is first converted to EGEE by esterases [Groeseneken et al. 1987a]. A detailed description of the preceding studies may be found in Appendix B.

# EFFECTS ON ANIMALS

Although kidney and liver damage, hematologic, CNS, reproductive, and teratogenic effects have been observed in experimental animals exposed to glycol ethers and their acetates, the type and severity of the response induced by each glycol ether are not identical. Therefore, each glycol ether and its corresponding acetate will be discussed separately following Section 4.3.1.

# Acute Toxicity

Many experiments investigating the acute toxicity of glycol ethers in animals have been performed. These investigations led to the establishment of a lethal concentration or lethal dose for 50% of the exposed animals (LC50 or LD50) in a variety of species by a variety of routes (inhalation, oral, dermal, injection). A summary of the available data by animal species is presented in Table 4-1.

# Oral Administration

The toxicity of glycol ethers has been studied more extensively by oral administration than by any other route.

Concentrations ranged from 500 to 6,000 ppm and were administered over a period of 1 to 24 hr. Guinea pigs exposed to 6,000 ppm for 24 hr exhibited inactivity, weakness, and dyspnea, and died by the end of the exposure; 3,000 ppm for 24 hr caused death within 24 hr following exposure; and exposure to 6,000 ppm for 10 hr, or to 3,000 ppm or 1,000 ppm for 18 hr, caused death 1 to 8 days following exposure. Exposure to 6,000 ppm for 1 hr, 3,000 ppm for 4 hr, and 500 ppm for 14 hr caused no apparent harm. Gross pathological examinations of animals that died during and up to two days after exposure revealed congestion and edema of the lungs, distended and hemorrhagic stomachs, and congested kidneys.

Ten male and ten female rats and two male and two female rabbits were exposed to 2,000 ppm EGEEA for 4 hr [Truhaut et al. 1979]. Only in rabbits was there a slight and transient hemoglobinuria or hematuria; no gross pathological lesions were noted in either species.

# Dermal Exposure

A modified Draize "sleeve" technique was used to study the acute dermal toxicity of EGEEA in rabbits [Truhaut et al. 1979]. Death generally occurred 24 to 48 hr after the application of 10,500 mg EGEEA/kg. Although hemoglobinuria and/or hematuria were observed, there was little variation in Hb concentration and the number of RBCs (less than 15% to 20%) in blood; however, there was a considerable decrease in the number of WBCs (50% to 70%). In surviving animals, the WBC counts gradually returned to normal. Necropsy revealed bloody kidneys and blood in the bladder. When survivors were examined after the 2-week observation period, no gross lesions were noted [Werner et al. 1943b].
In another study [Tyl et al. 1988], EGEEA caused an increase in WBC levels.

# EGME and EGMEA

# EGME, EGEE, and Their Acetates

# Mutagenicity

# In Vitro Toxicity

# EGME, EGEE, and Their Acetates

# Cytotoxicity

The in vitro cytotoxicities of EGME, EGEE, and their corresponding alkoxyacetic acids (MAA and EAA) were studied using CHO cells. CHO cells were seeded into culture flasks, and after 4 to 5 hr the test material was added to the medium. After 16 hr the medium was renewed, and the cells were allowed to grow in colonies for 6 to 7 days prior to counting.

# RECOGNITION OF THE HAZARD

Each employer who manufactures, transports, packages, stores, or uses EGME, EGEE, or their acetates in any capacity should determine the potential for occupational exposure of any worker at or above the action level (one-half the REL).

# ENVIRONMENTAL SAMPLING

• The charcoal in the sampling tube is transferred to a small, stoppered sample container, and the analyte is desorbed. EGME, EGEE, EGMEA, and EGEEA may be desorbed from the charcoal with methylene chloride and 5% (v/v) methanol.
• An aliquot of the desorbed sample is injected into a gas chromatograph with a flame ionization detector.
• The area of the resulting peak is determined and compared with areas obtained from the injection of standards. The detailed analytical method is described in Appendix A.
• Periodic medical examinations. The aforementioned medical examinations should be performed annually for all workers occupationally exposed to EGEE, EGME, or their acetates at or above the action levels, and for all who have the potential for significant skin exposure.

# BIOLOGICAL MONITORING

Biological monitoring may be a useful adjunct to environmental monitoring in assessing worker exposure to EGME, EGEE, and their acetates. Biological monitoring includes the influence of workload and percutaneous absorption.

# Justification for Biological Monitoring

Human experimental inhalation studies have demonstrated the uptake of EGEE [Groeseneken et al. 1986b], EGEEA [Groeseneken et al. 1987a], and EGME [Groeseneken et al. 1989a]. Studies that included different workloads in the experimental design [Groeseneken et al. 1986b, 1987a] demonstrated a linear relationship between the workload and uptake of each glycol ether; a linear relationship was also found for the exposure concentration and uptake. Nakaaki et al. [1980] demonstrated that 10 times more EGME was absorbed through the forearm than acetone or methanol.

# Selection of Monitoring Medium

A variety of biological monitoring media can be used to assess uptake (e.g., expired air, blood, urine). On the basis of the work of Groeseneken et al. [1986b, 1987a, 1989a], the urinary acid metabolites are the preferred monitoring medium:
• Expected concentrations for these metabolites at the proposed RELs can be measured by the recommended analytical method (see Appendix F).
• The acid metabolites are associated with the reproductive and hematologic toxicity of EGEE, EGME, and their acetates, and may reflect the concentration of the "active agent" at the target sites.
• The half-lives of the acid metabolites in urine are suitable for exposure monitoring and can reflect integrated exposures over a workweek [Groeseneken et al. 1988, 1989a]. The half-life for MAA is 77 hr, and for EAA it is 42 to 48 hr.
• Collection of urine samples is a noninvasive procedure.
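The half-lives quoted above are the reason a weekly integration is possible: a metabolite eliminated with a 77-hr half-life decays little between daily shifts and therefore builds up. The following is a minimal one-compartment sketch of that behavior; only the half-lives come from the text, and the daily "dose" of 1.0 is an arbitrary unit.

```python
# Minimal one-compartment sketch of workweek accumulation under first-order
# elimination. Only the half-lives (77 hr for MAA, 42 hr for EAA) are from
# the text; the unit daily dose is an illustrative assumption.

import math

def residual_after_week(half_life_hr, daily_dose=1.0, shifts=5, interval_hr=24.0):
    k = math.log(2) / half_life_hr                 # first-order elimination constant
    body_burden = 0.0
    for _ in range(shifts):
        body_burden += daily_dose                  # uptake during one workshift
        body_burden *= math.exp(-k * interval_hr)  # decay until the next shift
    return body_burden

for name, t_half in [("MAA", 77.0), ("EAA", 42.0)]:
    print(f"{name}: relative burden after 5 shifts = {residual_after_week(t_half):.2f}")
# MAA (~2.7x a single dose) accumulates noticeably more than EAA (~1.8x),
# consistent with sampling MAA at the end of the workweek.
```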
Full-shift personal breathing zone exposures to EGEE ranged from 3 to 14.5 ppm for workers in the investing areas. Ratcliffe et al. [1986] reported that recoveries of EGEE in three quality control samples were as low as 69%, indicating that the measured airborne concentrations could have been underestimated.

# Limitations of Biological Monitoring

Urine samples were collected as voided during a 24-hr period from three EGEE-exposed workers and two controls (unexposed workers). In the June survey, area samples averaged 2.4 to 14.9 ppm. Personal breathing zone samples averaged 8.1 ppm for grabber operators, 4.5 ppm for shell processors, and 5.0 ppm for investment room supervisors. In this case, spot urine samples were collected at random over a 7-day period.

A wide range of EAA concentrations was noted in workers using EGEE-containing paints; this was probably caused by variation in work assignments, work areas, work practices, and in the use of personal protective equipment. The author concluded that there appeared to be a relationship between urinary EAA excretion and the use of paints containing EGEE [Lowry 1987]. The health hazard evaluation demonstrated that the potential existed for exposure of painters to EGEE and EGME. Because of the complexity of the work environment and the variable use of personal protective equipment, no dose-response relationship could be developed. However, at the exposure concentrations measured, painters who used paints with EGEE did excrete more EAA in the urine than painters who did not use EGEE-containing paints.

Sparer et al. [1988] examined some of the same workers from the health hazard evaluation. (These studies are discussed in detail in Section 4.1.) The authors concluded that exposure to EGEE and EGME lowered sperm counts in the painters. In addition, they concluded that when smoking was controlled, the painters had an increased odds ratio for a lower sperm count per ejaculate. However, it would be inappropriate to conclude that the EGEE and EGME exposure concentrations presented in the health hazard evaluation [McManus et al. 1989] were representative of the chronic exposure of shipyard workers who participated in the semen study. In addition, it cannot be concluded that marginally (but not statistically significantly) lowered sperm counts were caused by exposure concentrations measured in the health hazard evaluation [McManus et al. 1989].

# EGME

No studies have evaluated the relationship between EGME exposure in the workplace and urinary MAA concentration. Results of the study by Groeseneken et al. [1989a] have provided the following information regarding MAA excretion in urine.

• The relatively long urinary elimination half-life of MAA (77 hr) suggests that MAA would be expected to accumulate during the workweek. If biological monitoring were done, urine specimens collected at the end of the week, or possibly prior to the first shift of the week, would be appropriate.
• Sixty percent of the urine specimens from this study contained MAA at concentrations below 2 mg/liter. If 4-hr exposures are extrapolated to 8-hr exposures, based on linear kinetics, then subjects exposed to 5.1 ppm at rest would be expected to have 60% of their urine samples with MAA concentrations below 4 mg/liter. If exposures were 1/10 of those from this study (i.e., 0.5 ppm), then 60% of the urine specimens would be expected to have less than 0.4 mg/liter of MAA [Groeseneken et al. 1989a].
The limit of quantitation for MAA was reported to be 0.1 mg/liter [Groeseneken et al. 1989b]. Higher concentrations of MAA would be expected with exercise.

Although dermal absorption was not studied, dermal uptake of EGME is a potential route of exposure. Dugard et al. [1984] demonstrated in vitro absorption of EGME through human abdominal skin. Nakaaki et al. [1980] also demonstrated dermal penetration of EGME through the forearm of human volunteers. Johanson [1988] suggested that dermal uptake of EGME is possibly the major route of exposure. Thus, in spite of the lack of quantitative relationships between EGME exposure and MAA excretion in urine, measurement of MAA in urine is warranted. The potential for extensive skin absorption and the potential buildup of the active urinary metabolite MAA during the workweek are reasons to measure MAA in urine as an exposure index. In addition, measurement of MAA in urine may be useful as an indicator of the potential for adverse reproductive effects.

# Methods for Analyzing Urinary EAA and MAA

A variety of methods have been developed for the analysis of EAA and MAA in human urine. Gas chromatographic procedures are based on either fluoranhydride derivatization following the extraction of the acid tetrabutylammonium ion-pair [Smallwood et al. 1984, 1988] or diazomethane derivatization following lyophilization of the urine [Groeseneken et al. 1986a]. Groeseneken et al. [1989b] developed a method that combined the best attributes of the two basic existing methods. Detailed descriptions of the above methods are presented in Appendix H.

# Summary

EGEE, EGME, and their acetates are metabolized to their respective alkoxyacetic acid metabolites, EAA and MAA, which are excreted in the urine. EAA and MAA have produced the reproductive and hematotoxic effects noted for glycol ethers. These glycol ethers can also be absorbed through the skin. In fact, the major route of exposure to EGME and EGEE may be through the skin [Johanson 1988]. Thus, monitoring of these acids may serve not only as a measure of exposure or uptake, but also as a measure of potential adverse health effects. The alkoxyacetic acid metabolites may be analyzed by a variety of sensitive and specific methods. The recently developed method of Groeseneken et al. [1989b] has sufficient sensitivity to monitor excretion of these metabolites at the recommended RELs.

Results from human laboratory inhalation exposure studies indicated that EAA in urine could be used to monitor uptake of EGEE and EGEEA [Groeseneken et al. 1986c, 1987b]. The total amount of urinary EAA was related to EGEE and EGEEA uptake and was influenced by pulmonary ventilation and the concentration of EGEE and EGEEA in inspired air. EAA excretion in urine peaked about 4 hr after cessation of exposure, and EAA was eliminated in the urine with a half-life of 42 hr [Groeseneken et al. 1988]. Investigations of occupational exposure also revealed a correlation between urinary EAA excretion and repeated daily inhalation exposure of workers to a mixture of EGEE and EGEEA [Veulemans et al. 1987a]. Data showed the accumulation of EAA following repeated daily exposures to EGEE and EGEEA. The estimated elimination half-life of EAA was 48 hr. Two other worksite investigations of occupational exposure to EGEE demonstrated the utility of EAA in urine to assess uptake of EGEE regardless of the route of exposure [Ratcliffe et al. 1986; Clapp et al. 1987; Lowry 1987; McManus et al. 1989; Sparer et al. 1988].
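The linear-kinetics extrapolation described above for urinary MAA (scaling the study's 4-hr, 5.1-ppm reference point to other durations and concentrations) is simple proportional arithmetic. A small sketch reproducing it, using only the values quoted in the text:

```python
# Sketch of the linear scaling used above: urinary MAA is assumed
# proportional to exposure concentration and duration. Reference point
# from the text: 5.1 ppm for 4 hr -> 60% of specimens below 2 mg/liter.

def scaled_maa(ref_maa_mg_l, ref_ppm, ref_hr, new_ppm, new_hr):
    """Scale a reference urinary MAA concentration linearly in C and t."""
    return ref_maa_mg_l * (new_ppm / ref_ppm) * (new_hr / ref_hr)

ref = (2.0, 5.1, 4.0)  # mg/liter percentile, ppm, hours (from the study)
print(scaled_maa(*ref, new_ppm=5.1, new_hr=8.0))   # 4.0 mg/liter for an 8-hr exposure
print(scaled_maa(*ref, new_ppm=0.51, new_hr=8.0))  # 0.4 mg/liter at 1/10 the exposure
```

Both outputs match the 4 mg/liter and 0.4 mg/liter figures quoted from Groeseneken et al. [1989a].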
Experimental studies were conducted in which humans were exposed to EGME. Results of these studies indicated that measurement of urinary MAA could be used to assess uptake of EGME. The concentration of MAA peaked several hr after exposure ended, and MAA was eliminated with a half-life of 77 hr. Examination of the elimination kinetics showed that MAA would accumulate following repeated daily exposures and could also accumulate over extended exposure periods.

Insufficient information is available at present to construct a dose-response plot that would provide statistically sound guidelines for the concentration of alkoxyacetic acid metabolites in urine that would correspond to an airborne exposure to glycol ethers. Table 5-11 presents a summary of the laboratory and occupational dose-response data.

In 1971, OSHA adopted the current Federal standards for worker exposure to EGME, EGMEA, EGEE, and EGEEA, which are based on the American Conference of Governmental Industrial Hygienists (ACGIH) 1968 Threshold Limit Values (TLVs®). These TLVs® were based on the hematotoxic and neurotoxic effects and exposure concentrations reported in the early case reports of human health effects [Donley 1936; Parsons and Parsons 1938; Greenburg et al. 1938]. The OSHA PELs include a "skin" notation, indicating the potential for skin absorption of toxic amounts of these glycol ethers. The OSHA PELs for occupational exposure to the glycol ethers are as follows: 25 ppm (80 mg/m³) for EGME, 25 ppm (120 mg/m³) for EGMEA, 200 ppm (740 mg/m³) for EGEE, and 100 ppm (540 mg/m³) for EGEEA, as TWAs for an 8-hr workshift [29 CFR 1910.1000]. OSHA is considering a revision of these PELs.

NIOSH has previously recommended that EGME and EGEE be regarded in the workplace as having the potential to cause adverse reproductive effects in male and female workers and embryotoxic effects, including teratogenesis, in the offspring of the exposed pregnant female, and that occupational exposure to them should be reduced to the lowest extent possible [NIOSH 1983a]. These recommendations were based on the results of animal studies that demonstrated dose-related embryotoxicity and other reproductive effects in several species of animals exposed by different routes of administration [Stenger et al. 1971; Nagano et al. 1979; Nagano et al. 1981; Andrew et al. 1981; Miller et al. 1981, 1983a; Nelson et al. 1981, 1984b; Hardin et al. 1982; McGregor et al. 1983; Rao et al. 1983; Hanley et al. 1984a].

In 1946, ACGIH established maximum allowable concentrations (m.a.c.s) of 100 ppm for EGME, EGMEA, and EGEEA, and 200 ppm for EGEE [ACGIH 1984]. In 1947, the m.a.c.s for EGME and EGMEA were lowered to 25 ppm because of the Greenburg et al. [1938] study in which neurologic and hematologic changes were observed in men exposed to EGME at concentrations estimated to be as low as 25 ppm. The m.a.c. for EGMEA was lowered because the toxic effects caused by it were likely to be similar to those caused by EGME as a result of EGMEA's metabolism to EGME and then to the active metabolite [ACGIH 1962, 1984]. Although the values remained unchanged, the term "threshold limit value" was substituted for m.a.c. in 1948. In 1968, the notation "skin" (indicating the potential for skin absorption of toxic amounts of a compound) was added to the TLVs® for EGME, EGEE, EGMEA, and EGEEA. In 1971, ACGIH lowered the TLV® for EGEE from 200 to 100 ppm to prevent irritation of the nose and eyes [ACGIH 1980].
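The paired ppm and mg/m³ values quoted above follow the usual vapor conversion, mg/m³ = ppm × molecular weight / 24.45, using the molar volume of an ideal gas at 25 °C and 1 atm. The sketch below reproduces the PEL pairs; the molecular weights are standard handbook values, not values taken from this document.

```python
# Sketch of the ppm -> mg/m3 conversion behind the paired PEL values above.
# Molecular weights (g/mol) are standard handbook values (an assumption,
# not data from this document).

MOLECULAR_WEIGHT = {"EGME": 76.09, "EGMEA": 118.13, "EGEE": 90.12, "EGEEA": 132.16}

def ppm_to_mg_m3(ppm, compound):
    # 24.45 L/mol is the molar volume of an ideal gas at 25 deg C and 1 atm.
    return ppm * MOLECULAR_WEIGHT[compound] / 24.45

for compound, pel_ppm in [("EGME", 25), ("EGMEA", 25), ("EGEE", 200), ("EGEEA", 100)]:
    print(f"{compound}: {pel_ppm} ppm ~ {ppm_to_mg_m3(pel_ppm, compound):.0f} mg/m3")
# Prints approximately 78, 121, 737, and 541 mg/m3, matching the rounded
# PEL values of 80, 120, 740, and 540 mg/m3 quoted in the text.
```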
In 1981, ACGIH adopted TLVs® of 50 ppm for EGEE and EGEEA, each with a short-term exposure limit (STEL) of 100 ppm; in 1987-88, the STELs were deleted [ACGIH 1991]. The TLVs® were lowered because of adverse hematologic effects observed in laboratory animals [Carpenter et al. 1956]. Changes in rat erythrocyte fragility were produced by 125 ppm EGEE but not by 62 ppm. ACGIH deemed it prudent to maintain chemical exposures below levels found to cause blood changes in experimental animals. Because the TLV® of 100 ppm for EGEEA was based on analogy with EGEE, it was logical to establish a similar TLV® of 50 ppm for its acetate [ACGIH 1980]. Reports of adverse testicular effects in experimental animals treated with EGME, EGEE, and their acetates [Nagano et al. 1979] led ACGIH to lower the TLVs® for these compounds. The 5-ppm TLV® for EGME, EGMEA, EGEE, and EGEEA as an 8-hr TWA was adopted in 1984, and the "skin" notation was retained.

The principal health effects documented in humans exposed to EGEE, EGME, and their acetates involve the blood, central nervous and hematopoietic systems, liver, and kidneys. These effects include headache, drowsiness, dizziness, forgetfulness, personality change, loss of appetite, tremors, hearing loss, slurred speech, hematuria, hemoglobinuria, anemia, and leukopenia. Only limited direct evidence indicates that exposure to EGEE, EGME, or their acetates causes adverse reproductive effects in humans. However, experimental studies in animals provide strong evidence of adverse reproductive and developmental effects related to these exposures. Summaries of the developmental and reproductive toxicity of EGEE, EGEEA, and EGME are presented in Tables 7-1 through 7-3. Because humans and the animal species studied metabolize these glycol ethers in the same way, the animal data are considered to be highly predictive of the hazard for humans.

# CORRELATION OF EXPOSURE AND EFFECTS

# EGEE

# Studies in Humans

No epidemiologic studies describe the effects of EGEE in humans, and only one case report exists. A 44-year-old woman who mistakenly drank 40 ml of EGEE (Section 4.1.1) experienced chest pains and vertigo, and lost consciousness shortly after the ingestion [Fucik 1969]. Upon hospitalization, the following signs and symptoms were observed: restlessness, cyanosis, tachycardia, swelling of the lungs, tonoclonic spasms, and breath smelling of acetone. The urine tested positive for protein, acetone, and RBCs; the liver became enlarged, and jaundice developed. After 44 days, the woman's condition improved, but insomnia, fatigue, and paresthesia of the extremities persisted for 1 year.

Several cases of anemia were reported in shipyard workers exposed to EGEE and EGME, and all cases were suspected to have been caused by the exposure. A detailed description of this study is provided in Chapter 4.

Few data are available on the reproductive effects of EGEE in humans. Ratcliffe et al.
[1986] concluded that EGEE may have affected the semen quality by lowering the sperm counts of male workers exposed to this chemical during the preparation of ceramic shells for casting. The potential also existed for skin absorption. In addition, the shipyard painters had been exposed to EGME, lead, and epichlorohydrin, all of which have been reported to affect semen quality. Airborne concentrations of lead were well below those known to depress sperm count. Most blood lead concentrations were below 20 µg%, with the highest single concentration being 40 µg%. Epichlorohydrin was not detected in the air sampling during the study [Sparer et al. 1988].

# Studies in Animals

Studies in animals have provided evidence of adverse reproductive and developmental effects from EGEE exposure (see Appendix B). The LOAELs and the NOAELs of the following studies were used in determining the REL for EGEE. Testicular atrophy occurred in mice given oral doses of EGEE (1,000 mg/kg of body weight per day or more) for 5 days/wk during a 5-wk period. The NOAEL noted in this study was 500 mg EGEE/kg per day. Decreased testis weight, spermatocyte depletion and degeneration, and microscopic testicular lesions were observed in rats treated with 500 or 1,000 mg EGEE/kg per day for 11 days [Foster et al. 1983; Creasy and Foster 1984]; no effects were observed at 250 mg/kg. Decreased sperm counts, abnormal sperm morphology, and decreased epididymal weights were found in rats given oral doses of 936, 1,872, or 2,808 mg EGEE/kg per day for 5 days [Oudiz et al. 1984]. A no-effect level was not included in this study. Stenger et al. [1971] treated male rats orally with 46.5, 93, or 186 mg EGEE/kg per day for 13 wk. Microscopic testicular lesions were found only at doses of 186 mg EGEE/kg per day.

Rats and rabbits of both sexes were exposed to 0, 25, 100, or 400 ppm EGEE for 6 hr/day, 5 days/wk over a 13-wk period [Terrill and Daly 1983a,b; Barbee et al. 1984]. At the highest exposure (400 ppm EGEE), reduced testicular weights and microscopic testicular lesions were observed in rabbits, and reduced pituitary weights were observed in male rats. Reduced body weights were observed in male and female rabbits at 25 and 400 ppm EGEE, and reduced spleen weights were found in nonpregnant female rats at 100 and 400 ppm EGEE.

Studies have demonstrated adverse effects on the dam and the developing fetus. Stenger et al. [1971] treated rats orally with 11.5, 23, 46.5, 93, 186, or 372 mg EGEE/kg per day on g.d. 1 through 21. Decreased fetal body weights and skeletal defects were demonstrated at 93, 186, and 372 mg/kg per day. No effects were noted at 11.5, 23, or 46.5 mg/kg per day. In rabbits exposed to EGEE for 7 hr/day on g.d. 1 through 18, maternal toxicity and embryolethality were observed at 615 ppm, and embryolethality (22%), skeletal variations, renal and cardiovascular defects, and decreased maternal food consumption were observed at 160 ppm [Andrew et al. 1981; Hardin et al. 1981].
No effects were apparent on fertility or pregnancy outcome when female rats were exposed to 150 or 650 ppm EGEE for 7 hr/day, 5 days/wk during the 3 wk before breeding. Toxic signs were noted in female rats exposed at 650 ppm, but none were observed at 150 ppm. However, when pregnant rats were exposed to 765 ppm EGEE for 7 hr/day through g.d. 19, 100% intrauterine death occurred. Similar exposure at 200 ppm EGEE significantly increased fetal cardiovascular and skeletal defects. These effects on development were not influenced by exposures to filtered air or EGEE before pregnancy [Andrew et al. 1981; Hardin et al. 1981].

In rats exposed 6 hr/day to 250 ppm EGEE on g.d. 6 through 15, investigators observed increased postimplantation loss, retarded fetal growth, decreased ossification, and increased skeletal variations; they found no effects on fetuses at 10 or 50 ppm EGEE [Doe 1984a]. Fetal skeletal variations were found in rabbits exposed 6 hr/day to 175 ppm EGEE on g.d. 6 through 18; no effects were found in fetuses at 10 or 50 ppm EGEE [Doe 1984a]. Exposure of pregnant rats to 100 ppm EGEE on g.d. 14 through 20 caused extended gestation (0.7 day), and exposure to 100 ppm EGEE on g.d. 7 through 13 or 14 through 20 caused altered behavioral responses and altered brain neurochemical concentrations in offspring [Nelson et al. 1981].

Effects on the fetus were also demonstrated in a dermal application study of EGEE [Hardin et al. 1982]. Four daily doses of 0.25 or 0.50 ml EGEE were applied to rats on g.d. 7 through 16. The higher dose resulted in decreased maternal body weight gain, ataxia, and 100% fetolethality; the lower dose produced fetotoxicity, 75% fetolethality, and malformations.

# Basis for Selecting the No Observable Adverse Effect Level (NOAEL)

Acute toxicity data for EGEE (Table 4-2) indicate that CNS and kidney effects occurred at higher EGEE concentrations than adverse reproductive and developmental effects. Smyth et al. [1941] reported narcosis, digestive tract irritation, and kidney damage in guinea pigs and rats exposed to 1,400 or 3,000 mg EGEE/kg. Dyspnea, damaged lungs, and toxic effects on WBCs were reported in mice exposed to 1,130 to 6,000 ppm EGEE [Werner et al. 1943c], and the LC50 was 1,820 ppm EGEE.

Adverse effects on the blood and hematopoietic system also occurred at higher EGEE concentrations than adverse reproductive and developmental effects. Data in Table 4-9 indicate that EGEE adversely affects the blood and hematopoietic system at concentrations of 125 to 2,000 ppm. These effects include decreased Hb, Hct, RBCs, and WBCs, and increased osmotic fragility of erythrocytes [Werner et al. 1943a,b; Stenger et al. 1971; Carpenter et al. 1956; Terrill and Daly 1983a,b; Barbee et al. 1984; Doe 1984a]. Limited human data correlate adverse reproductive effects with EGEE exposure [Ratcliffe et al. 1986].

Table 7-1 presents the reproductive and developmental effects resulting from exposure to EGEE. In rabbits, the LOAEL for male reproductive effects was 400 ppm [Barbee et al. 1984; Terrill and Daly 1983a]. This concentration caused decreased testis weight and microscopic testicular lesions, but 100 ppm and 25 ppm had no effect on the male reproductive system. In the male rat, the LOAEL (500 mg/kg) caused decreased testis weight and microscopic testicular lesions [Foster et al. 1983; Creasy and Foster 1984]; the NOAEL was 250 mg/kg.
Adverse developmental effects (behavioral and neurochemical alterations) were observed in rats exposed at 100 ppm EGEE in a study that did not demonstrate a NOAEL for these effects [Nelson et al. 1981]. The NOAEL for structural malformations in rats and rabbits was 50 ppm EGEE [Doe 1984a]. Carpenter et al. [1956] had previously established a 62-ppm NOAEL for osmotic fragility.

Adverse developmental effects occur at lower EGEE concentrations than reproductive, hematotoxic, CNS, and kidney effects. Thus, limiting exposures to control adverse developmental effects will also control reproductive, hematotoxic, CNS, and kidney effects. The LOAELs and NOAELs in Table 7-1 indicate that 50 ppm is the highest NOAEL [Doe 1984a] in rats that is also lower than the lowest LOAEL in rats [Nelson et al. 1981]. Because of the lack of human data and because the rat is the species most sensitive to EGEE, it is reasonable to use the rat NOAEL to extrapolate an equivalent dose for humans. NIOSH therefore deems it appropriate to use 50 ppm as the NOAEL for EGEE and to use the body weights of rats [Doe 1984a] for calculating their daily NOAEL and extrapolating an equivalent dose for humans.

# EGEEA

No information is available about the toxic effects of EGEEA in humans.

# Studies in Animals

In mice administered EGEEA orally 5 days/wk for 5 wk, testicular atrophy occurred at 1,000, 2,000, and 4,000 mg/kg per day, and depletion and degeneration of spermatocytes occurred at 4,000 mg/kg per day. When doses were expressed as mmoles/kg per day, the dose-response relationships of EGEE and EGEEA were equivalent. No effects appeared at 500 mg EGEEA/kg per day. Testicular atrophy and spermatocyte depletion developed in rats fed 726 mg EGEEA/kg per day for 11 days [Foster et al. 1984].

Nelson et al. [1984b] examined the effects of EGEEA on rat embryo-fetal development by exposing pregnant rats to 130, 390, or 600 ppm EGEEA for 7 hr/day on g.d. 7 through 15. The highest concentration (600 ppm) caused 100% fetolethality. A 56% increase in resorptions occurred at 390 ppm EGEEA, and fetal weights were significantly reduced at 130 and 390 ppm EGEEA. Visceral malformations of the heart and umbilicus occurred in fetuses at 390 ppm, and one fetus from dams exposed to 130 ppm EGEEA had a heart defect.

In another study, rabbits were exposed to 25, 100, or 400 ppm EGEEA on g.d. 6 through 18 [Doe 1984a]. Adverse effects on the fetus included decreased fetal body weights and retarded ossification at 100 ppm EGEEA, and vertebral column malformations at 400 ppm EGEEA. Decreased maternal body weight gain and food consumption, and increased resorptions also occurred at 400 ppm EGEEA. No adverse maternal effects developed at 25 or 100 ppm EGEEA, and no adverse effects on the fetus appeared at 25 ppm EGEEA.

These studies in animals provide ample evidence of adverse reproductive and developmental effects from EGEEA exposure. The following studies, including the LOAEL and the NOAEL of each, were used in determining the REL for EGEEA. Tyl et al. [1988] found evidence of maternal toxicity and fetotoxicity in rabbits exposed by inhalation to 100, 200, and 300 ppm EGEEA for 6 hr/day on g.d. 6 through 18. A 100% incidence of malformations occurred at 300 ppm EGEEA, and external, visceral, and skeletal malformations increased significantly at 200 ppm EGEEA. No effects were observed on dams or fetuses at 50 ppm EGEEA. Tyl et al.
[1988] found evidence of maternal toxicity (i.e., decreased body weight gain and food consumption, and increased liver weight) in rats exposed by inhalation to 100, 200, and 300 ppm EGEEA for 6 hr/day on g.d. 6 through 15. Fetotoxicity was also found at 100, 200, and 300 ppm EGEEA, with an increased incidence of visceral, skeletal, and external malformations at 200 and 300 ppm EGEEA. Dams and fetuses showed no effects at 50 ppm EGEEA. Dermal treatment of pregnant rats on g.d. 7 through 16 with 1.4 ml EGEEA/day caused decreased maternal body weights and adverse developmental effects in offspring, including visceral malformations and skeletal variations.

# Basis for Selecting the No Observable Adverse Effect Level (NOAEL)

Reports in the literature indicate that EGEEA exerts adverse hematologic effects in experimental animals at 62 to 4,000 ppm [von Oettingen and Jirouch 1931; Carpenter et al. 1956; Doe 1984a; Tyl et al. 1988; Truhaut et al. 1979]. These effects include hemolysis, reduced RBC and WBC counts, and a reduction in Hb, Hct, and MCV. Acute toxicity data for EGEEA (Table 4-2) indicate that CNS and kidney effects occur at higher EGEEA concentrations than adverse reproductive and developmental effects. Smyth et al. [1941] reported narcosis and damaged kidneys in guinea pigs and rats treated with 1,910 or 5,100 mg EGEEA/kg. Hemoglobinuria, hematuria, and renal lesions were reported in rats treated with 2,900 to 3,900 mg EGEEA/kg [Truhaut et al. 1979], and transient hemoglobinuria and/or hematuria were reported in rabbits exposed to 2,000 ppm EGEEA for 4 hr.

Adverse reproductive and developmental effects generally occur at lower concentrations than hematotoxic, CNS, and kidney effects. Thus, limiting exposures to prevent adverse reproductive and developmental effects will also prevent hematotoxic, CNS, and kidney effects. Table 7-2 presents reproductive and developmental effects resulting from exposure to EGEEA. These data include the LOAELs for mice (1,000 mg/kg), rats (130 ppm), and rabbits (100 ppm). In the study by Tyl et al. [1988], 50 ppm EGEEA caused no effects in rabbits. The LOAELs and NOAELs presented in Table 7-2 indicate that 50 ppm is the highest NOAEL in rabbits that is also lower than the lowest LOAEL in rabbits [Tyl et al. 1988]. Because human data are lacking and because the rabbit is the animal species most sensitive to EGEEA, it is reasonable to use the rabbit NOAEL to extrapolate an equivalent dose for humans. NIOSH therefore deems it appropriate to use 50 ppm as the NOAEL for EGEEA and to use the body weights of rabbits studied by Tyl et al. [1988] for calculating their daily NOAEL and extrapolating an equivalent dose for humans.

# EGME

# Studies in Humans

As reported in Chapter 4 (Section 4.3), adverse CNS effects (headache, forgetfulness, fatigue, personality change, nausea, and neurologic abnormalities) and hematotoxic effects (anemia and lymphopenia) were observed in workers exposed to EGME-containing solvents in shirt factories [Donley 1936; Parsons and Parsons 1938]. Greenburg et al. [1938] studied workers fusing shirt collars at the same factory as Parsons and Parsons [1938] and observed similar effects (i.e., anemia, neurologic abnormalities, drowsiness, and fatigue). Industrial hygiene measurements taken after the report of adverse health effects in workers indicated that the airborne concentration of EGME was about 25 ppm with windows open and 75 ppm with windows partially closed. Greenburg et al.
[1938] stated that worker exposures to EGME had been higher than the measured concentrations because improvements had been made to exhaust and ventilation systems after the report of adverse health effects in workers.

Severe anemia [Zavon 1963; Cohen 1984], major encephalopathy, and bone marrow depression [Ohi and Wegman 1978; Cohen 1984] were observed in workers exposed to EGME dermally and by inhalation in the printing and microfilm industries. In one study [Zavon 1963], EGME was used as a cleaning agent and as a solvent, but the workers seldom wore gloves. No means were available to measure possible dermal absorption. Workers were exposed to 60 to 3,960 ppm EGME during the various cleaning operations, but after airborne EGME concentrations were reduced to the order of 20 ppm, no further ill effects were noted. No mention was made about preventing skin exposure.

Nitter-Hauge [1970] reported general weakness, disorientation, nausea, and vomiting in two men who had each ingested about 0.1 liter of pure EGME, believing it to be ethyl alcohol. Upon admittance to the hospital, the men were suffering from cerebral confusion, pronounced hyperventilation, and profound metabolic acidosis. After i.v. treatment with sodium bicarbonate and ethyl alcohol, both patients made an uneventful recovery over a 4-wk period.

Limited evidence shows the adverse effects of EGME on the male reproductive system. Data suggest that testicle size may have been reduced in male workers with potential exposure to EGME (see Section 4.1.2.2) [Cook et al. 1982]. Lowered sperm counts were noted in shipyard painters exposed to EGME and EGEE; airborne EGME ranged from nondetectable concentrations to 5.6 ppm. Details of this study, which are presented in Chapter 4, indicated that lead and epichlorohydrin (also present in the work environment) had no effect on semen quality. When hematologic parameters were studied in the same group of shipyard painters, several cases of anemia were reported. Exposure to EGME and EGEE was suspected as the cause of the hematologic disorders, but no dose-response relationship was established.

# Studies in Animals

Chapter 4 summarizes experimental studies demonstrating reproductive and developmental toxicity resulting from EGME exposure (see Appendix B for the complete studies). Doses of 62.5, 125, 250, 500, 1,000, or 2,000 mg EGME/kg per day were administered to mice 5 days/wk for 5 wk. Testicular atrophy was found at 250, 500, 1,000, and 2,000 mg EGME/kg per day, but not at lower doses. In a study to determine temporal development and the site of the testicular lesion, rats were treated orally with 50, 100, 250, or 500 mg EGME/kg per day for up to 11 days [Foster et al. 1983]. Testis weights were significantly reduced after 2 days at 500 mg/kg per day and after 7 days at 250 mg/kg per day. The lesion appeared localized in the primary spermatocyte 24 hr after a single dose of 100 mg/kg. Partial depletion and degeneration of spermatids and spermatocytes were also observed in rats treated with 100 mg EGME/kg per day for 11 days. No effects were noted over the 11-day treatment period at 50 mg EGME/kg per day. Treatment of rats with 50 mg EGME/kg per day for 5 days in another study caused a reduction in epididymal sperm counts [Chapin et al. 1985a] and the appearance of abnormal sperm morphology at wk 4, followed by recovery at wk 8 [Chapin et al. 1985b].
Adverse reproductive effects were noted in male rats exposed to >100 ppm EGME by inhalation for 6 hr/day, 5 days/wk during a 10-day to 13-wk period [Miller et al. 1983a; Rao et al. 1983]. At 300 ppm, rats showed decreased male fertility [Rao et al. 1983], testicular atrophy, microscopic testicular lesions, and decreased testis weights [Miller et al. 1983a]; at 100 ppm, male rats showed no effects. Miller et al. [1983a] observed testicular effects in rabbits exposed to 100 or 300 ppm EGME and slight microscopic changes in testicular tissue in 1 of 5 rabbits exposed to 30 ppm EGME. These investigators considered 30 ppm to be the NOAEL in rabbits.

The effects of EGME on rat reproductive performance were studied by exposing males or females to 30, 100, or 300 ppm EGME for 6 hr/day, 5 days/wk for 13 wk before mating with unexposed animals [Rao et al. 1983]. At 300 ppm, EGME completely suppressed male fertility for 2 wk after exposure; fertility was partially restored 13 to 19 wk after exposure ended. No effects were observed on female reproductive performance at any concentration of EGME, or on male reproductive performance at 30 or 100 ppm. No neonatal effects were found in this study at any EGME concentration.

Nagano et al. [1981] administered doses of 31.25, 62.5, 125, 250, 500, or 1,000 mg EGME/kg per day to rats on g.d. 7 through 14. Skeletal variations consisting of bifurcated and split cervical vertebrae were observed at the lowest dose, and increased malformations (spina bifida occulta) occurred at 62.5 mg EGME/kg per day.

Heart function was also evaluated in rat fetuses from dams treated orally on g.d. 7 through 13 with 25 or 50 mg EGME/kg per day [Toraason et al. 1985]. At 25 mg/kg per day, EGME caused a significant increase in the number of fetuses with abnormal QRS wave complexes; and at 50 mg/kg per day, it caused an increase in cardiovascular defects. Oral treatment of nonhuman primates with 36 mg EGME/kg during gestation resulted in one embryo that was missing a digit on each forelimb [Scott et al. 1989]. Three of thirteen pregnancies (23%) at the 12-mg/kg dose ended in embryonic death.

Rats and rabbits were exposed by inhalation to 3, 10, or 50 ppm EGME for 6 hr/day on g.d. 6 through 15 (rats) or 6 through 18 (rabbits) [Hanley et al. 1984a]. Maternal toxicity (decreased body weight) in dams of both species was noted at 50 ppm EGME. A significant increase in the resorption rate was also noted in pregnant rabbits exposed to 50 ppm EGME. Significant increases in the incidence of two minor skeletal variations (i.e., lumbar spurs and delayed ossification) indicated slight fetotoxicity in rat fetuses from dams exposed to 50 ppm EGME. Rabbit fetuses from dams exposed to 50 ppm EGME exhibited a significant increase in the incidence of malformations of all organ systems and a significant decrease in the mean body weight. No effects were noted in either species for dams and fetuses at 3 or 10 ppm EGME.

Hanley et al. [1984a] found minimally decreased body weight gains in mice exposed to 50 ppm EGME 6 hr/day on g.d. 6 through 15. Examination of fetuses from dams exposed to 50 ppm EGME revealed statistically significant increases in the incidence of extra lumbar ribs and of unilateral testicular hypoplasia. No adverse effects were noted in dams or fetuses at 10 ppm EGME.

In another study, pregnant rats were exposed to 100 or 300 ppm EGME for 6 hr/day on g.d. 6 through 17, and males were exposed to 100 or 300 ppm EGME for 6 hr/day during a 10-day period.
At 100 ppm, EGME increased gestation time and decreased the number of pups and live pups. At 300 ppm, EGME decreased maternal body weight and produced 100% fetolethality. Male rats showed testicular effects after 10 exposures to 300 ppm, but not after exposures to 100 ppm EGME.

In a dominant lethal study, male rats were exposed by inhalation to 25 or 500 ppm EGME for 6 hr/day over 5 days [McGregor et al. 1983]. Rats exposed to 500 ppm showed decreased fertility during wk 3 through 8, and rats exposed to 25 ppm EGME showed no adverse effects on fertility.

Nelson et al. [1984a] exposed male rats to 25 ppm EGME for 7 hr/day, 7 days/wk during a 6-wk period. These rats were then mated with untreated females that were allowed to deliver and rear their young. In the same study, pregnant females were exposed to EGME for 7 hr/day on g.d. 7 through 13 or 14 through 20 and allowed to deliver and rear their young. Significant differences in avoidance conditioning were observed in offspring of dams exposed on g.d. 7 through 13, but not in offspring of dams exposed on g.d. 14 through 20. Brain neurochemical deviations were noted in offspring from the paternally exposed group and in offspring from both maternally exposed groups.

In a dermal exposure study, female rats were exposed to solutions of 3%, 10%, 30%, or 100% EGME (10 ml/kg) in physiological saline [Wickramaratne 1986]. Reduced litter sizes were observed at the 10% concentration, 100% fetolethality occurred at the 30% concentration, and 100% maternal death was observed at the 100% concentration. A single dermal application of 500, 1,000, or 2,000 mg EGME/kg on g.d. 12 caused statistically significant increases (P<0.05) in external, visceral, and skeletal malformations [Feuston et al. 1990]. In the same study, dermal exposure of female rats to EGME (1,000 mg/kg on g.d. 12 or 2,000 mg/kg on g.d. 10 and 12) caused a statistically significant decrease in fetal body weights (P<0.05). The investigators determined 250 mg EGME/kg to be the NOAEL for adverse developmental effects.

# Basis for Selecting the No Observable Adverse Effect Level (NOAEL)

Adverse CNS effects (encephalopathy) and hematotoxic effects (bone marrow depression, anemia, and leukopenia) were observed in workers exposed to EGME [Donley 1936; Parsons and Parsons 1938; Greenburg et al. 1938; Zavon 1963; Ohi and Wegman 1978; Cohen 1984]. However, there is limited evidence of an adverse effect on the male reproductive system as a result of EGME exposure.

Acute toxicity data for EGME (Table 4-2) indicate that CNS, liver, and kidney effects occur at higher EGME concentrations than adverse reproductive and developmental effects. Wiley et al. [1938] reported tissue damage to the kidneys and liver in dogs and rabbits exposed to 2,130 mg EGME/kg. Narcosis and lung and kidney damage were reported in rats (3,250 to 3,400 mg/kg), rabbits (890 mg/kg), and guinea pigs (950 mg/kg) [Carpenter et al. 1956], and digestive tract irritation and damaged kidneys were reported in rats and guinea pigs exposed to 246 and 950 mg EGME/kg, respectively. Adverse effects on the blood and hematopoietic system also occurred at higher EGME concentrations than adverse reproductive or developmental effects. Data in Table 4-9 indicate that 32 to 2,000 ppm EGME adversely affects the blood and hematopoietic system. These effects include increased osmotic fragility and decreased Hb, Hct, and RBC and WBC counts [Carpenter et al. 1956; Grant et al. 1985; Werner et al. 1943a,b; Miller et al. 1981; Miller et al. 1983a].
Table 7-3 presents the reproductive and developmental effects caused by exposure to EGME. In rats, the LOAEL of 50 mg EGME/kg per day caused decreased sperm counts and abnormal sperm morphology in two separate studies that did not demonstrate a NOAEL [Chapin et al. 1985a,b]. In rabbits, the LOAEL (100 ppm EGME) caused microscopic testicular lesions and decreased testis weights, and the NOAEL was 30 ppm EGME [Miller et al. 1983a]. In mice, the LOAEL (250 mg/kg per day) caused testicular atrophy, and the NOAEL was 125 mg/kg per day. Behavioral defects and neurochemical deviations were observed in the offspring of rats exposed to 25 ppm EGME [Nelson et al. 1984a]. Retarded fetal ossification was observed in the offspring of mice treated with 31.25 mg EGME/kg per day (LOAEL) [Nagano et al. 1981]. Adverse developmental effects were observed in the offspring of rats, rabbits, and mice exposed to a LOAEL of 50 ppm EGME [Hanley et al. 1984a]; the NOAEL for these species was 10 ppm EGME. In the same study, the NOAEL for maternal effects in these species was 10 ppm EGME. Feuston et al. [1990] observed an increase (P<0.05) in external, visceral, and skeletal malformations in the fetuses of rats exposed to single dermal applications of 500, 1,000, or 2,000 mg EGME/kg on g.d. 12. The authors determined 250 mg EGME/kg to be the NOAEL for adverse developmental effects in this study.

Adverse developmental effects occur at lower EGME concentrations than reproductive, hematotoxic, CNS, liver, and kidney effects. Thus, limiting exposure to control adverse developmental effects will also control reproductive, hematotoxic, CNS, liver, and kidney effects. The data that demonstrate reproductive and developmental toxicity, and the LOAELs and NOAELs presented in Table 7-3, indicate that in several species (rats, rabbits, and mice), 10 ppm is the highest NOAEL that is also lower than the lowest LOAEL [Hanley et al. 1984a]. Because of the lack of human data, it is reasonable to use the NOAEL of 10 ppm [Hanley et al. 1984a] to extrapolate an equivalent dose for humans.

# EGMEA

Few data are available on the toxicity of EGMEA. Bolt and Golka [1990] reported the occurrence of hypospadias at birth in two boys whose mother had been exposed to EGMEA during her pregnancies. The authors concluded that the hypospadias were caused by exposure to EGMEA. Testicular atrophy was observed in mice exposed orally for 5 days/wk during a 5-wk period to 500, 1,000, or 2,000 mg EGMEA/kg per day; no reproductive effects were noted at 62.5, 125, or 250 mg EGMEA/kg per day.

# BASIS FOR RECOMMENDED STANDARDS FOR EGEE, EGME, AND THEIR ACETATES

# Data Available from Studies in Humans and Animals

Toxic effects of human exposure to EGEE and EGME include personality change, memory loss, drowsiness, blurred vision, hearing loss, anemia, and leukopenia. However, data are limited on possible adverse reproductive and developmental effects of worker exposure to EGEE, EGME, and EGMEA, and no human data are available on EGEEA exposure. Cook et al. [1982] suggested that testicle size in males may have been reduced because of EGME exposure. Exposure to EGEE and EGME was concluded to cause functional impairment in males by lowering sperm counts. The occurrence of hypospadias in two boys at birth was attributed to the mother's exposure to EGMEA during her pregnancies [Bolt and Golka 1990].
Ballew and Hattis [1989] performed a quantitative risk analysis under contract to NIOSH to determine the risk of developmental effects in the offspring of pregnant women exposed to EGEE and EGME. Although data for humans are limited, ample evidence from studies in animals indicates that EGEE, EGME, and their acetates adversely affect reproduction and development. In the absence of sufficient human data, NIOSH deems it appropriate to base the RELs for EGEE, EGME, and their acetates on animal data. The following procedure was therefore used to calculate equivalent human doses from animal data.

# Procedure for Calculating Equivalent Human Doses from Animal Data

No mechanistic models exist to describe the relationship of reproductive and developmental toxicity to exposure; only empirical models are available to use in a quantitative risk assessment (QRA). Because a threshold is assumed to exist for reproductive and developmental toxicity, a QRA model is inappropriate since these models assume a no-threshold effect. Therefore, the following method was used to determine the RELs for EGEE, EGME, and their acetates.

Both humans and animals were assumed to retain 100% of inhaled EGEE, EGME, or their acetates. The retained dose for animals exposed at the NOAEL was calculated as follows by using the inhalation rate and the average body weights of the animals (see Table 7-5):

Retained dose for animals (mg/kg per day) = [NOAEL (mg/m³) × inhalation rate (m³/day) × 0.25 day] / animal body weight (kg)   (1)

That dose was converted to an equivalent exposure for humans by assuming a 70-kg body weight and an inhalation rate of 10 m³ in an 8-hr workday [45 Fed. Reg. 79318 (1980); EPA 1987]:

Equivalent exposure for humans (mg/m³) = [retained dose for animals (mg/kg per day) × 70 kg] / 10 m³/day   (2)

To allow for potential interspecies variability, an uncertainty factor of 10 was applied to the equivalent exposure for humans. An additional uncertainty factor of 10 was then applied to allow for potential intraspecies variability. The resulting concentration was converted to parts per million:

Concentration (ppm) = [equivalent exposure for humans (mg/m³) / 100] × [24.45 / mol wt of particular glycol ether]   (3)

# REL for EGEE and EGEEA

Although limited data in humans have shown adverse reproductive or developmental effects from exposure to EGEE [Ratcliffe et al. 1986], sufficient data have demonstrated these effects in animals exposed to EGEE [Stenger et al. 1971; Andrew et al. 1981; Hardin et al. 1981, 1982; Nelson et al. 1981; Terrill and Daly 1983a; Foster et al. 1983; Doe 1984a; Barbee et al. 1984; Oudiz et al. 1984] and EGEEA [Doe 1984a; Foster et al. 1984; Nelson et al. 1984b; Tyl et al. 1988]. These animal data provide the basis for determining the RELs for worker exposure to EGEE and EGEEA and for instituting controls to reduce worker exposure.
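The arithmetic of Equations 1 through 3 can be collected into a short script; Equations 4 through 12 and 13 through 21 (below) are instances of this template for the individual compounds. The sketch below is illustrative only: the rat body weight and inhalation rate are hypothetical stand-ins for the Table 7-5 values (not reproduced in this excerpt), and the function names are ours.

```python
# A minimal sketch of Equations 1-3, assuming 100% retention of the inhaled
# glycol ether. Animal body weight and inhalation rate are placeholders.

MOLAR_VOLUME = 24.45  # liters/mole of gas at 25 degrees C and 760 mmHg

def ppm_to_mg_m3(ppm, mol_wt):
    return ppm * mol_wt / MOLAR_VOLUME

def mg_m3_to_ppm(mg_m3, mol_wt):
    return mg_m3 * MOLAR_VOLUME / mol_wt

def rel_ppm(noael_ppm, mol_wt, animal_bw_kg, animal_inh_m3_per_day,
            exposure_fraction=0.25,   # 6 hr/day exposure = 0.25 day
            human_bw_kg=70.0,         # assumed human body weight
            human_inh_m3=10.0,        # air inhaled in an 8-hr workday
            uncertainty=100.0):       # 10 (interspecies) x 10 (intraspecies)
    noael_mg_m3 = ppm_to_mg_m3(noael_ppm, mol_wt)
    # Equation 1: dose retained by the animal at the NOAEL (mg/kg per day)
    retained = noael_mg_m3 * animal_inh_m3_per_day * exposure_fraction / animal_bw_kg
    # Equation 2: equivalent airborne exposure for a 70-kg worker (mg/m3)
    human_mg_m3 = retained * human_bw_kg / human_inh_m3
    # Equation 3: apply the uncertainty factors and convert to ppm
    return mg_m3_to_ppm(human_mg_m3 / uncertainty, mol_wt)

# EGEE (mol wt 90.12), 50-ppm NOAEL; the animal parameters are illustrative.
print(round(rel_ppm(50.0, 90.12, animal_bw_kg=0.35,
                    animal_inh_m3_per_day=0.20), 2))  # -> 0.5
```

With the illustrative rat parameters chosen here, the script reproduces the published 0.5-ppm REL for EGEE; substituting the 10-ppm EGME NOAEL, the EGME molecular weight (76.1), and the corresponding species data from Table 7-5 yields the 0.1-ppm REL cited below.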
On the basis of the calculations presented in Equations 4 through 12, NIOSH recommends that occupational exposure to EGEE and EGEEA be limited to 0.5 ppm as a TWA for up to a 10-hr workshift during a 40-hr workweek. Because both EGEE and EGEEA can be absorbed percutaneously [Dugard et al. 1984], dermal contact is prohibited. The data in Table 7-5 were used in Equations 4 through 12 to calculate the human equivalents to the daily animal NOAELs for EGEE and EGEEA.

# REL for EGME and EGMEA

Case reports and clinical studies demonstrated adverse CNS and hematotoxic effects on workers exposed to EGME [Donley 1936; Parsons and Parsons 1938; Greenburg et al. 1938; Zavon 1963; Ohi and Wegman 1978; Cohen 1984], but data demonstrating adverse reproductive and developmental effects in offspring of EGME-exposed workers are limited [Welch et al. 1988]. Bolt and Golka [1990] reported hypospadias at birth in two boys whose mother was exposed to EGMEA during her pregnancies. Sufficient evidence in animal studies indicates that EGME exerts adverse reproductive and developmental effects [Nagano et al. 1981; Foster et al. 1983; McGregor et al. 1983; Miller et al. 1983a; Rao et al. 1983; Hanley et al. 1984a; Nelson et al. 1984a; Chapin et al. 1985a,b; Toraason et al. 1985; Scott et al. 1989; Wickramaratne 1986]. EGMEA was also shown to have such effects in a study that found this glycol ether caused testicular atrophy in mice. Data from these animal studies warrant concern that EGME and EGMEA are capable of inducing similar adverse effects in exposed workers.

Based on information presented in Table 7-3, a 10-ppm NOAEL was determined for EGME in rats, rabbits, and mice [Hanley et al. 1984a]. Any effects that EGMEA might cause would be likely to occur through the initial conversion of EGMEA to EGME (see Section 4.2). Therefore, it is reasonable to propose the same REL for both compounds. An equivalent human dose was determined for EGME using the information presented in the study by Hanley et al. [1984a]. On the basis of the calculations presented in Equations 13 through 21, NIOSH recommends that occupational exposure to EGME and EGMEA be limited to 0.1 ppm as a TWA for up to a 10-hr workday during a 40-hr workweek. Because EGME and EGMEA can be absorbed percutaneously [Dugard et al. 1984], dermal contact is prohibited. The data in Table 7-5 were used in Equations 13 through 21 to calculate the human equivalents to the daily animal NOAELs for EGME and EGMEA.

The hazard communication program is to be written and made available to workers and their designated representatives. Chemical manufacturers, importers, and distributors are required to ensure that containers of hazardous chemicals leaving their workplaces are labeled, tagged, or marked to show the identity of the chemical, appropriate hazard warnings, and the name and address of the manufacturer or other responsible party. Employers must ensure that labels on incoming containers of hazardous chemicals are not removed or defaced unless they are immediately replaced with other labels containing the required information. Each container in the workplace must be prominently labeled, tagged, or marked to show the identity of any hazardous chemical it contains and the hazard warnings appropriate for worker protection. If a work area has a number of stationary containers that have similar contents and hazards, the employer may post hazard signs or placards rather than label each container.
Employers may use various types of standard operating procedures, process sheets, batch tickets, or other written materials as substitutes for individual container labels on stationary process equipment. However, these written materials must contain the same information that is required on the labels and must be readily accessible to workers in the work areas. Pipes or piping systems are exempted altogether from the OSHA labeling requirements, although NIOSH recommends that filler ports and outlets be labeled. In addition, NIOSH recommends that a system be set up to ensure that pipes containing hazardous materials are identified to avoid accidental cutting and discharge of their contents. Employers are not required to label portable containers holding hazardous chemicals that have been transferred from labeled containers and that are intended only for the immediate use of the worker who performs the transfer. According to the OSHA definition of "immediate use," the container must be under the control of the worker performing the transfer and must be used only during the workshift in which the chemicals are transferred.

The OSHA Hazard Communication standard requires chemical manufacturers and importers to develop an MSDS for each hazardous chemical they produce or import. Employers in the manufacturing sector (which includes paint and allied coating products) are required to obtain or develop an MSDS for each hazardous chemical used in the workplace. The MSDS is required to provide information such as the chemical and common names for the hazardous chemical. For hazardous chemical mixtures, the MSDS must list each hazardous component that constitutes 1% or more of the mixture. NIOSH suggests that any potential occupational carcinogen be listed. Ingredients present in concentrations of less than 1% must also be listed if there is evidence that the PEL may be exceeded or that the ingredients could present a health hazard in those concentrations.

Workers should also be trained in methods for detecting the presence or release of hazardous chemicals (e.g., monitoring conducted by the employer, continuous monitoring devices, and the visual appearance or odor of hazardous chemicals when released). Training should include information about measures workers can take to protect themselves from exposure to hazardous chemicals (e.g., the use of appropriate work practices, emergency procedures, and personal protective equipment).

# WORK PRACTICES

# Worker Isolation

If feasible, workers should be isolated from direct contact with the work environment by the use of automated equipment operated from a closed control booth or room. The control room should be maintained at a positive pressure so that air flows out of rather than into the room. However, when workers must perform process checks, adjustments, maintenance, or other related operations in work areas where EGME, EGEE, or their acetates are present, personal protective clothing and equipment may be necessary, depending on exposure concentrations and the potential for dermal contact.

# Storage and Handling

Containers of EGME, EGEE, or their acetates should be stored in a cool, dry, well-ventilated location away from any area containing a fire hazard. Outside or detached storage is preferred. These glycol ethers should be isolated from materials with which they are incompatible; contact with strong oxidizing agents may cause fires and explosions.
Containers of solvents, including those that contain EGME, EGEE, or their acetates, should be tightly covered at all times except when material is transferred. Working amounts of these solvents should be stored in containers that (1) hold no more than 5 gal, (2) have spring-closing lids and spout covers, and (3) are designed to safely relieve internal pressure in case of fire. Because small amounts of residue may remain and present a fire hazard, containers that have held solvents should be thoroughly cleaned with steam and then drained and dried before reuse. Fittings should not be struck with tools or other hard objects that may cause sparks. Special spark-resistant tools of nonferrous materials should be used where flammable gases, highly volatile liquids, or other explosive substances are used or stored [NSC 1980]. In addition, all sources of ignition such as smoking and open heaters should be prohibited except in specified areas. Fire hazards around tank trucks and cars can be reduced by keeping motors turned off during loading or unloading operations. Specific OSHA requirements for the storage and handling of flammable and combustible liquids are given in 29 CFR 1910.106.

# Sanitation and Hygiene

The preparation, storage, or consumption of food should not be permitted in areas where there is exposure to EGME, EGEE, or their acetates. The employer should make handwashing facilities available and encourage the workers to use them before eating, smoking, using the toilet, or leaving the worksite. Tools and protective clothing and equipment should be cleaned as needed to maintain sanitary conditions. Toxic wastes should be collected and disposed of in a manner that is not hazardous to workers or the environment. Vacuum pickup or wet mopping should be used to clean the work area at the end of each workshift or more frequently if needed to maintain good housekeeping practices. Collected wastes should be placed in sealed containers that are labeled as to their contents. Cleanup and disposal should be conducted in a manner that enables workers to avoid contact with the waste. Tobacco products should not be smoked, chewed, or carried uncovered in work areas. Workers should be provided with and advised to use facilities for showering and changing clothes at the end of each workshift. Work areas should be kept free of flammable debris. Flammable work materials (rags, solvents, etc.) should be stored in approved safety cans.

# Spills and Waste Disposal

Procedures for decontamination and waste disposal should be established for materials or equipment contaminated with EGME, EGEE, or their acetates. The following procedures are recommended in the event of a spill of these glycol ethers [NIOSH 1981; DOT 1984; Canadian Center for Occupational Safety and Health 1988]:

• Exclude persons not wearing protective clothing and equipment from areas of spills or leaks until cleanup has been completed.

• Remove all ignition sources.

• Ventilate the area of a spill or leak.

• Absorb small spills on paper towels. Allow the vapors to evaporate in a suitable place such as a fume hood, allowing sufficient time for them to clear the hood ductwork. Burn the paper towels in a suitable location away from combustible materials.

• Absorb large quantities with sand or other noncombustible absorbent material and atomize the contaminated material in a suitable combustion chamber.
# LABELING AND POSTING

In accordance with 29 CFR 1910.1200 (Hazard Communication), workers must be informed of chemical exposure hazards, of their potential adverse health effects, and of methods to protect themselves. Labels and signs also provide an initial warning to other workers who may not normally work near processes involving hazardous chemicals such as EGME, EGEE, or their acetates. Depending on the process, warning signs should state a need to wear eye protection or a respirator, or they may be used to limit entry to an area without protective equipment. For transient nonproduction work, it may be necessary to display warning signs at the worksite to inform other workers of the potential hazards. All labels and warning signs should be printed in both English and the predominant language of workers who do not read English. Workers who cannot read labels or posted signs should be identified so that they may receive information about hazardous areas and be informed of the instructions printed on labels and signs.

# EMERGENCIES

The employer should formulate a set of written procedures covering fire, explosion, asphyxiation, and any other foreseeable emergency that may arise during the use of materials that may contain EGME, EGEE, or their acetates. All potentially affected workers should receive training in evacuation procedures to be used in the event of fire or explosion. All workers who are using materials containing these glycol ethers should be thoroughly trained in proper work practices that reduce the potential for starting fires and causing explosions.

# ENGINEERING CONTROLS

Engineering controls should be the principal method for minimizing exposure to airborne EGME, EGEE, or their acetates in the workplace. To achieve and maintain reduced airborne concentrations of these glycol ethers, adequate engineering controls are necessary (e.g., properly constructed and maintained closed-system operations and ventilation). Control technology applicable to spray painting is discussed in a NIOSH document [O'Brien and Hurley 1981].

Airborne concentrations of these glycol ethers can be most effectively controlled at the source of contamination by enclosure of the operation and use of local exhaust ventilation. Enclosures, exhaust hoods, and ductwork should be kept in good repair so that designed airflows are maintained. Measurements of variables such as capture velocity, duct velocity, or static pressure should be made at least semiannually, and preferably monthly, to demonstrate the effectiveness of the mechanical ventilation system. The use of continuous airflow indicators (such as water or oil manometers marked to indicate acceptable airflow) is recommended. The effectiveness of the system should also be evaluated as soon as possible after any change in production, process, or control that may result in an increase in airborne contaminants. It is essential that any scheme for exhausting air from a work area also provide a positive means of bringing in at least an equal volume of air from the outside, conditioning it, and evenly distributing it throughout the exhausted area. The ventilation system should be designed and operated to prevent the accumulation or recirculation of airborne contaminants in the workplace.

To evaluate the use of these materials with EGMEA or EGEEA, users should consult the best available performance data and manufacturer's recommendations.
Significant differences have been demonstrated in the chemical resistance of generically similar PPE materials (e.g., butyl) produced by different manufacturers [Mickelsen and Hall 1987]. In addition, the chemical resistance of a mixture may be significantly different from that of any of its neat components [Mickelsen et al. 1986]. Users should therefore test the candidate material with the chemicals to be used. The worker should be trained in the proper use and care of the chemical protective clothing. After this clothing is in routine use, it should be examined along with the workplace to ensure that nothing has occurred to invalidate the effectiveness of these materials. The NIOSH publication A Guide for Evaluating the Performance of Chemical Protective Clothing [Roder 1990] may be helpful.

Safety showers and eyewash stations should be located close to operations that involve EGME, EGEE, or their acetates. Splash-proof chemical safety goggles or face shields (20 to 30 cm minimum) should be worn during any operation in which a solvent, caustic, or other toxic substance may be splashed into the eyes.

In addition to the possible need for wearing protective outer apparel (e.g., aprons, encapsulating suits), workers should wear work uniforms, coveralls, or similar full-body coverings that are laundered each day. Employers should provide lockers or other closed areas to store work and street clothing separately. Employers should collect work clothing at the end of each workshift and provide for its laundering. Laundry personnel should be informed about the potential hazards of handling contaminated clothing and instructed about measures to minimize their health risk. Employers should ensure that protective clothing is inspected and maintained to preserve its effectiveness. Clothing should be kept reasonably free of oil or grease.

Workers and persons responsible for worker health and safety should be informed that protective clothing may interfere with the body's heat dissipation, especially during hot weather or in hot industries or work situations (e.g., confined spaces). Additional monitoring is required to prevent heat-related illness when protective clothing is worn under these conditions.

# Respiratory Protection

Engineering controls should be the primary method used to control exposure to airborne contaminants. Respiratory protection should be used by workers only in the following circumstances:

• During the development, installation, or testing of required engineering controls

The actual respirator selection should be made by a qualified individual, taking into account specific use conditions, including the interaction of contaminants with the filter medium, space restrictions caused by the work location, and the use of any required face and eye protective devices. Respirator selection tables are presented in Chapter 1.

# CHEMICAL SUBSTITUTION

The substitution of less hazardous materials can be an important measure for reducing worker exposure to hazardous materials.

# EXPOSURE MONITORING

An occupational health program designed to protect workers from adverse effects caused by exposure to EGME, EGEE, or their acetates should include the means for thoroughly identifying all potential hazards. Routine environmental sampling as an indicator of worker exposure is an important part of this program because it provides a means of assessing the effectiveness of work practices, engineering controls, and personal protective clothing and equipment.
Prior knowledge of the presence of certain types of interfering compounds in the sampled environment will greatly help the analyst in the selection of the appropriate analytical conditions for sample analysis. This list of compounds can be compiled from the material safety data sheets for the compounds that are used in or around the process where the sampling will take place.

Initial and routine worker exposure surveys should be made by competent industrial hygiene and engineering personnel. These surveys are necessary to characterize worker exposures and to ensure that controls already in place are operational and effective. Each worker's exposure should be estimated, whether or not it is measured by a personal sampler. Therefore, the sampling strategy should allow reasonable estimates of each worker's exposure. The NIOSH publication Occupational Exposure Sampling Strategy Manual may be helpful in developing efficient programs to monitor worker exposure [Leidel et al. 1977].

In work areas where airborne exposures to EGME, EGEE, or their acetates may occur, an initial survey should be done to determine the extent of worker exposure. In general, TWA exposures should be determined by collecting samples over a full shift. Measurements to determine worker exposure should be taken so that the average 8-hr exposure is based on a single 8-hr sample or on two 4-hr samples. Several short-term interval samples (up to 30 min) may also be used to determine the average exposure concentration. When the potential for exposure to these glycol ethers is periodic, short-term samples may be needed to replace or supplement full-shift sampling. Personal sampling (i.e., samples collected in the worker's breathing zone) is preferred over area sampling. If personal sampling is not feasible, area sampling can be substituted only if the results can be used to approximate worker exposure. Sampling should be used to identify the sources of emissions so that effective engineering controls or work practices can be instituted.

If a worker is found to be exposed to EGME, EGEE, or their acetates at concentrations below the REL but at or above one-half the REL, the exposure of that worker should be monitored at least once every 6 months or as otherwise indicated by a professional industrial hygienist. When the work environment contains concentrations exceeding the respective RELs for these glycol ethers, workers must wear respirators for protection until adequate engineering controls or work practices are instituted; exposure monitoring is recommended at 1-wk intervals. Such monitoring should continue until consecutive determinations at least 1 wk apart indicate that the workers' exposure no longer exceeds the REL. When workers' exposures are greater than one-half the REL but less than the REL, sampling should be conducted after 6 months; if the concentrations of these glycol ethers are lower than one-half the REL after two consecutive biannual surveys, sampling can then be conducted annually. Exposure monitoring should be conducted whenever changes in production, process, controls, work practices, or weather conditions may result in a change in exposure conditions.
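The sampling and follow-up rules just described can be condensed into a short decision aid: compute a full-shift TWA from interval samples, then choose the resampling interval by comparing the result with the REL. The sketch below is illustrative; the function names and schedule strings are ours, not NIOSH's.

```python
# A minimal sketch of the TWA calculation and follow-up monitoring logic
# described above. Schedule strings paraphrase the text.

def twa_ppm(samples):
    """samples: list of (concentration_ppm, duration_min) covering the shift."""
    total_time = sum(t for _, t in samples)
    return sum(c * t for c, t in samples) / total_time

def followup_interval(measured_ppm, rel_ppm):
    if measured_ppm > rel_ppm:
        # Above the REL: respirators until controls are adequate; resample
        # weekly until consecutive surveys fall below the REL.
        return "weekly (respirators required)"
    if measured_ppm >= 0.5 * rel_ppm:
        # At or above one-half the REL: resample at least every 6 months.
        return "at least every 6 months"
    # Below one-half the REL on two consecutive biannual surveys: annual.
    return "annually"

shift = [(0.6, 240), (0.2, 240)]  # two 4-hr samples, ppm (EGEE example)
avg = twa_ppm(shift)              # -> 0.4 ppm
print(avg, followup_interval(avg, rel_ppm=0.5))  # -> "at least every 6 months"
```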
# MEDICAL MONITORING

# General Requirements

Workers exposed to EGME, EGEE, or their acetates are at risk of suffering adverse health effects. Medical monitoring as described below should be made available to all workers. The employer should provide the following information to the physician responsible for the medical monitoring program:

• Any requirements of the applicable OSHA standard or NIOSH recommended standard

• Identification of and extent of exposure to physical and chemical agents that may be encountered by the worker

• Any available workplace sampling results that characterize exposures for job categories previously and currently held by the worker

• A description of any protective devices or equipment the worker may be required to use

• The frequency and nature of any reported illness or injury of a worker

• The results of any monitoring of urinary MAA or EAA for any worker exposed to unknown concentrations of EGME or EGMEA during a spill or emergency (see Appendix G)

# Medical Examinations

The objectives of a medical monitoring program are to augment the primary preventive measures, which include industrial hygiene monitoring of the workplace, the implementation of engineering controls, and the use of proper work practices and personal protective equipment. Medical monitoring data may also be used for epidemiologic analysis within large plants and on an industrywide basis; they should be compared with exposure data from industrial hygiene monitoring.

Medical examinations are conducted before job placement and periodically thereafter. The preplacement medical examination allows the physician to assess the applicant's functional capacity and inform him or her of how it relates to the physical demands and risks of the job. Furthermore, such an examination provides baseline medical data that can be compared with subsequent health changes. The preplacement examination should also provide information about prior occupational exposures. Periodic medical examinations after job placement are intended to detect work-related changes in health at an early stage.

The following factors should be considered during the preplacement medical examination and any periodic medical examinations of the worker: (a) exposure to chemical and physical agents that may produce interdependent or interactive adverse effects on the worker's health (including exacerbation of pre-existing health problems and nonoccupational risk factors such as tobacco use), and (b) potentially hazardous characteristics of the worksite (e.g., confined spaces, heat, and proximity to hazards such as explosive atmospheres and toxic chemicals). The type of information that should be gathered is discussed in the following subsections.

# Preplacement medical examination

# Medical history

The medical history should contain information about occupational history, including the number of years worked in each job. Special attention should be given to any history of occupational exposure to hazardous chemical and physical agents [Guidotti et al. 1983].

# Clinical examination

The preplacement clinical examination should determine the fitness of the worker to perform the intended job assignment. Appropriate pulmonary and musculoskeletal evaluation should be done for workers whose jobs may require extremes of physical exertion or stamina (e.g., heavy lifting), especially those who must wear personal respiratory protection. Because the standard 12-lead electrocardiogram is of little practical value in monitoring for asymptomatic cardiovascular disease, it is not recommended.
More valuable diagnostic information is provided by physician interviews of workers that elicit reports of the occurrence and work-relatedness of angina, breathlessness, and other symptoms of chest illnesses. Special attention should also be given to workers who require the use of eyeglasses. These workers must be able to wear simultaneously any equipment needed for respiratory protection, eye protection, and visual acuity, and they must be able to maintain their concurrent use during work activities. The worker's duties may be performed near unrelated operations that generate potentially harmful exposures (e.g., asbestos or cleaning or degreasing solvents). The physician must be aware of these potential exposures to evaluate possible hazards to the individual worker.

# Periodic medical examination

A periodic medical examination should be conducted annually or more frequently, depending on age, health status at the time of a prior examination, and reported signs or symptoms associated with exposure to EGME, EGEE, or their acetates. The physician should note any trends in health changes revealed by epidemiologic analyses of examination results. The occurrence of an occupationally related disease or other work-related adverse health effects should prompt an immediate evaluation of industrial hygiene control measures and an assessment of the workplace to determine the presence of a previously unrecognized hazard.

The physician's interview with the worker is an essential part of a periodic medical examination. The interview gives the physician the opportunity to learn of (1) changes in the work setting (e.g., confined spaces), and (2) potentially hazardous workplace exposures that are in the vicinity of the worker but are not related to the worker's job activities. During the periodic medical examination, the physician should re-examine organ systems at risk to note changes from the previous examination.

# BIOLOGICAL MONITORING

Urinary concentrations of the metabolites of EGME, EGEE, and their acetates may be useful biological indicators of worker exposure to these glycol ethers. Biological monitoring accounts not only for environmental concentrations and actual respiratory uptake, but also for absorption through the skin. Information about biological monitoring appears in Section 5.4 of this document, and guidelines for biological monitoring are given in Appendix G. Biological monitoring is suggested when the potential exists for (1) airborne exposure to EGME, EGEE, or their acetates at or above their respective RELs, or (2) skin contact as a result of accidental exposure or breakdown of chemical protective clothing (see Section 8.6.1). Monitoring of urinary MAA or EAA (see Appendix G) should be made available to any worker exposed to unknown concentrations of EGME, EGEE, or their acetates during a spill or other emergency.

In the absence of skin exposure, a urinary MAA concentration of 0.8 mg/g creatinine or an EAA concentration of 5 mg/g creatinine approximates the concentration that would result from exposure at the REL for EGME (0.1 ppm) or EGEE (0.5 ppm) during an 8-hr workshift. If a worker's urinary MAA or EAA suggests exposure to EGME, EGEE, or their acetates above their respective RELs, an effort should be made to ascertain the cause (e.g., failure of engineering controls, poor work practices, or nonoccupational exposures).
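As an illustration of how these guideline values might be applied in a screening program, the sketch below flags urinary metabolite results that suggest exposure above the corresponding REL. The threshold table and function names are ours, not NIOSH's, and the comparison assumes no concurrent skin exposure, as stated above.

```python
# Guideline urinary metabolite concentrations (mg/g creatinine) that
# approximate a full-shift exposure at the REL, per the text above.
# Assumes no skin exposure; results above these values warrant a search
# for the cause (controls, work practices, or nonoccupational sources).
GUIDELINE_MG_PER_G_CREATININE = {
    "MAA": 0.8,  # methoxyacetic acid, marker for EGME/EGMEA (REL 0.1 ppm)
    "EAA": 5.0,  # ethoxyacetic acid, marker for EGEE/EGEEA (REL 0.5 ppm)
}

def flag_result(metabolite: str, result_mg_per_g: float) -> bool:
    """Return True if the result suggests exposure above the REL."""
    return result_mg_per_g > GUIDELINE_MG_PER_G_CREATININE[metabolite]

print(flag_result("EAA", 6.2))  # True -> investigate the cause
print(flag_result("MAA", 0.3))  # False
```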
# RECORDKEEPING

Medical records as well as exposure and biological monitoring results must be maintained for workers as specified in Section 1.9 of this document. Such records must be kept for at least 30 years after termination of employment. Copies of environmental exposure records for each worker must be included with the medical records. These records must be made available to the past or present workers or to anyone having the specific written consent of a worker, as specified in Section 1.9.4 of this document.

# RESEARCH NEEDS

The following research is needed to further reduce the risk of adverse developmental and reproductive effects from occupational exposure to EGME, EGEE, or their acetates:

• Investigations should be conducted in the workplace to relate glycol ether exposure to concentrations of metabolites in urine and to toxic effects such as reduction in testis size and semen quality.

• Additional studies are needed to define more accurately the human reproductive hazards posed by EGME, EGEE, and their acetates.

• Evaluations of exposed populations are needed to correlate dermal absorption of EGME, EGEE, and their acetates with concentrations of metabolites in urine.

• Additional data should be collected to quantify airborne and dermal exposures to EGME, EGEE, and their acetates under actual conditions of use in the workplace.

• Other glycol ethers should be evaluated to identify any that have effects similar to those of EGME, EGEE, and their acetates (see Appendix E for a list of glycol ethers).

• Physiologically based pharmacokinetic models for EGME, EGEE, and their acetates need to be developed and validated in both human beings and the animal species in which NOAELs were determined.

• Methods are needed for quantitative monitoring of dermal exposure.

• Epidemiologic studies are needed to determine the effects of occupational exposure to EGME, EGEE, and their acetates.

• The method of Groeseneken et al. [1989b] should be validated.

# APPENDIX A

# METHODS FOR SAMPLING AND ANALYSIS OF EGME, EGEE, EGMEA, AND EGEEA IN AIR

# A.1 General Requirements for Sampling

Air samples are collected that represent the air a worker breathes while performing each job or specific operation. It is advisable to maintain records of the date, time, rate, duration, volume, and location of sampling.

# A.2 Collection and Shipping of Samples

1. Immediately before sampling, break the ends of the sampling tube to provide an opening at least one-half the internal diameter of the tube (2 mm).

2. Attach the sampling tube to the sampling pump with flexible tubing. The smaller section of charcoal is used as a backup and should be positioned nearest the sampling pump.

3. The charcoal tube should be placed in a vertical position during sampling to minimize channeling through the charcoal.

4. Air being sampled should not be passed through any hose or tubing before entering the charcoal tube.

5. The flow rate of sampling should be known with an accuracy of at least ±5%. Calibrate each sampling pump with a representative charcoal tube in line.

9. Capped charcoal tubes should be packed tightly and padded before they are shipped to minimize tube breakage during shipping.

10. A sample of the bulk material should be submitted to the laboratory in a glass container with a Teflon-lined cap. This sample should not be transported in the same container as the charcoal tubes.

A number of changes were made to Method 53 to accommodate the lower target concentrations.
(1) The recommended air volume for TWA samples was increased from 10 liters to 48 liters. This allows for lower detection limits and increases the TWA sampling time to a more convenient 480 min (8 hr) when sampling at 0.1 liter/min.

(2) A capillary GC column was substituted for a packed column to attain higher resolution. This was especially helpful in achieving better separation of 2ME and methylene chloride, a major component of the desorption solvent.

(3) It was found that the desorption efficiency from wet charcoal was significantly lower for 2ME, and to a lesser extent for 2EE, at these lower concentrations. This problem was overcome by adding about 125 mg of anhydrous magnesium sulfate to each desorption vial to remove the desorbed water. Because charcoal will always collect some water from sampled air, all 2ME and 2EE air samples must be treated in this manner.

Utilizing these three major modifications of Method 53, a successful evaluation was performed for these glycol ethers at the lower target concentrations. Also, a minor modification was made in the determination of desorption efficiencies. Aqueous instead of methanolic stock solutions were used to determine the desorption efficiencies for 2MEA and 2EEA. It was found that at these lower levels, when stock methanolic solutions are spiked on dry Lot 120 charcoal, part of the 2MEA and 2EEA react with the methanol to form methyl acetate and 2ME and 2EE, respectively. The reaction, which is analogous to hydrolysis, is called transesterification (alcoholysis) and is catalyzed by acid or base. The surface of dry Lot 120 charcoal is basic, and the reaction was verified to occur by quantitatively determining methyl acetate and the corresponding alcohol (2ME for 2MEA samples, 2EE for 2EEA samples) from spiked samples. Transesterification was not observed when methanolic stock solutions were spiked onto wet charcoal. Therefore, transesterification is not expected to occur for samples collected from workplace air containing methanol as well as 2MEA or 2EEA because workplace atmospheres are seldom completely dry.

Because of the number of modifications and the extensive amount of data generated in this evaluation, the findings are presented as a separate method instead of a revision of Method 53. This method supersedes Method 53, although Method 53 is still valid at the higher analyte concentrations. Although hydrolysis of 2MEA and 2EEA does not appear to be a problem at lower concentrations, as a precautionary measure, the special requirement that 2MEA and 2EEA samples should be refrigerated upon receipt by the laboratory was retained from Method 53.

1.1.2 Toxic effects (This section is for information only and should not be taken as the basis of OSHA policy.)

As reported in the Documentation of Threshold Limit Values (Refs. 5.2 to 5.5), all four analytes were investigated by Nagano et al. (Ref. 5.6).

# Detection limit of the analytical procedure

The detection limits of the analytical procedure are 0.10, 0.04, 0.04, and 0.03 ng per injection (1.0-µL injection with a 10:1 split) for 2ME, 2MEA, 2EE, and 2EEA, respectively. These are the amounts of each analyte that will give peaks with heights approximately 5 times the height of baseline noise (Section 4.1).

# Detection limit of the overall procedure

The detection limits of the overall procedure are 1.0, 0.40, 0.37, and 0.31 µg per sample for 2ME, 2MEA, 2EE, and 2EEA, respectively.
These are the amounts of each analyte spiked on the sampling device that allow recovery of amounts of each analyte equivalent to the detection limits of the analytical procedure. These detection limits correspond to air concentrations of 6.7 ppb (21 µg/m³), 1.7 ppb (8.4 µg/m³), 2.1 ppb (7.8 µg/m³), and 1.2 ppb (6.5 µg/m³) for 2ME, 2MEA, 2EE, and 2EEA, respectively (Section 4.2).

# Reliable quantitation limit

The reliable quantitation limits are the same as the detection limits of the overall procedure because the desorption efficiencies are essentially 100% at these levels. These are the smallest amounts of each analyte that can be quantitated within the requirements of recoveries of at least 75% and precisions (±1.96 SD) of ±25% or better (Section 4.3).

The reliable quantitation limits and detection limits reported in the method are based upon optimization of the GC for the smallest possible amounts of each analyte. When the target concentration of an analyte is exceptionally higher than these limits, they may not be attainable at the routine operating parameters unless the instrument parameters are optimized.

# Instrument response to the analyte

The instrument response over the concentration ranges of 0.5 to 2 times the target concentrations is linear for all four analytes (Section 4.4).

# Recovery

The recovery of 2ME, 2MEA, 2EE, and 2EEA from samples used in a 15-day storage test remained above 84, 87, 84, and 85%, respectively, when the samples were stored at ambient temperatures. The recovery of analyte from the collection medium after storage must be 75% or greater (Section 4.5, from regression lines shown in Figures 4.5.1.2, 4.5.2.2, 4.5.3.2, and 4.5.4.2).

1.2.6 Precision (analytical procedure)

The pooled coefficients of variation obtained from replicate determinations of analytical standards at 0.5, 1, and 2 times the target concentrations are 0.022, 0.004, 0.002, and 0.002 for 2ME, 2MEA, 2EE, and 2EEA, respectively (Section 4.6).

# Precision (overall procedure)

The precisions at the 95% confidence level for the ambient temperature 15-day storage tests are ±11.7, ±11.1, ±12.3, and ±11.2% for 2ME, 2MEA, 2EE, and 2EEA, respectively. These include an additional ±5% for sampling error. The overall procedure must provide results at the target concentration that are ±25% or better at the 95% confidence level (Section 4.7).

# Reproducibility

Six samples for each analyte collected from controlled test atmospheres and a draft copy of this procedure were given to a chemist unassociated with this evaluation. The samples were analyzed after 12 days of refrigerated storage. No individual sample result deviated from its theoretical value by more than the precision reported in Section 1.2.7 (Section 4.8).

# Advantages

1.3.1 Charcoal tubes provide a convenient method for sampling.

1.3.2 The analysis is rapid, sensitive, and precise.

# Disadvantage

It may not be possible to analyze co-collected solvents using this method. Most of the other common solvents that are collected on charcoal are analyzed after desorption with carbon disulfide.

3.1.3 An electronic integrator or some other suitable means of measuring peak areas or heights. A Hewlett-Packard 18652A A/D converter interfaced to a Hewlett-Packard 3357 Lab Automation Data System was used in this evaluation.

3.1.4 Two-milliliter vials with Teflon-lined caps.

3.1.5 A dispenser capable of delivering 1.0 mL to prepare standards and samples. If a dispenser is not available, a 1.0-mL volumetric pipet may be used.
3.1.6 Syringes of various sizes for preparation of standards.

3.1.7 Volumetric flasks and pipets to dilute the pure analytes in preparation of standards.

3.2.5 A suitable internal standard, reagent grade. "Quant Grade" 3-methyl-3-pentanol from Polyscience Corporation was used in this evaluation.

3.2.6 The desorption solvent consists of methylene chloride/methanol, 95/5 (v/v), containing an internal standard at a concentration of 20 µL/liter.

3.2.7 GC grade nitrogen, air, and hydrogen.

# Reliable quantitation limit

The reliable quantitation limits were determined by analyzing charcoal tubes spiked with loadings equivalent to the detection limits of the analytical procedure.

# Precision (analytical procedure)

The precision of the analytical procedure for each analyte is the pooled coefficient of variation determined from replicate injections of standards. The precision of the analytical procedure for each analyte is given in Tables 4.6.1 to 4.6.4. These tables are based on the data presented in Section 4.4.

# 4.10 Stability of desorbed samples

The stability of desorbed samples was checked by reanalyzing the target concentration samples from Section 4.9 one day later using fresh standards. The sample vials were resealed with new septa after the original analyses and were allowed to stand at room temperature until reanalyzed.

# Range, Stability and Interference

This method has been validated for sampling air concentrations of the stated glycol ethers and their acetates from 2 to 25 ppm by volume in air. The method can be used for higher concentrations; however, the higher range has not been validated by Union Carbide. Because some of the compounds may become hydrolyzed when sampled in high-humidity atmospheres, the analysis of the charcoal tube samples must be completed within 24 hours of the sampling. However, in the case of CELLOSOLVE solvent, samples can be stored for up to 14 days in a refrigerator but should be analyzed within 90 minutes after desorption. In the case of passive dosimeters, the sample may be refrigerated for up to five days prior to analysis without any significant loss.

The presence of other glycol ether vapors with similar molecular weights and vapor pressures may result in interference. In addition to the 3M #3500 Organic Vapor Monitoring Badge, passive dosimeter badges from other manufacturers, such as Dupont's Pro-Tek® Organic Badge, may also be used for monitoring glycol ethers. Please contact the manufacturer for information concerning the suitability of their monitors for specific glycol ethers or glycol ether acetates.

# Instrument Parameters

# Chromatograph

Some firms are also known to provide analytical services for these monitors for specific chemicals, including glycol ethers. This may be useful for some of the smaller locations that do not have air sampling equipment and/or on-site analytical capabilities of their own. Further information about the NIOSH methods or passive monitors may be obtained from the NIOSH regional office or the equipment manufacturer.

In the EGME-treated mice, the mean testes weights were significantly lower than those of the control group at wk 2 to 5 and appeared to increase again towards the end of the study (statistics not given); there was also a tendency towards a dose-response relationship in the incidence of abnormal sperm (statistics not given).
In the rat dominant lethal study, the total implant numbers among females mated at wk 5 to EGME-dosed rats were reduced in a dose-dependent manner (P < 0.001). Although there was a rise in the preimplantation loss rate, there was no statistically significant evidence for the induction of dominant lethal effects.

# B.2.1.3 Inhalation

In inhalation studies [Andrew et al. 1981; Hardin et al. 1981], pregnant New Zealand white rabbits were exposed 7 hr/day during gestation.

This study indicated that exposure of rabbits to EGEEA during organogenesis resulted in maternal toxicity at 100 to 300 ppm. Signs of this included significantly decreased weight gain and reduced gravid uterine weight (P < 0.001) and elevated absolute liver weight (P < 0.05). In rats, significantly (P < 0.001) reduced weight gain and reduced food consumption were noted at 200 and 300 ppm EGEEA; significantly elevated relative liver weights were noted at 100, 200, and 300 ppm EGEEA (no statistics given). In rabbits, an increased incidence of totally resorbed litters at 200 ppm (P < 0.05) and 300 ppm (P < 0.001), an increase in nonviable fetuses at 300 ppm (P < 0.05), and a decrease in viable fetuses per litter at 200 and 300 ppm EGEEA (P < 0.05) were observed. Fetotoxicity (reduced ossification) was observed at 100, 200, and 300 ppm EGEEA. The incidence of external, visceral, and skeletal malformations was increased at 200 ppm and 300 ppm (P < 0.05). In rats, embryo/fetotoxicity was observed at 100, 200, and 300 ppm EGEEA. Observations included increased nonviable implantations/litter at 300 ppm (P < 0.05), reduced fetal body weight/litter at 200 and 300 ppm (P < 0.05), and increased incidence (P < 0.05) of external variations at 300 ppm and of visceral and skeletal variations at 100, 200, and 300 ppm. There was no evidence of maternal, embryonic, or fetal toxicity (including teratogenicity) at 50 ppm EGEEA in either species. Tyl et al. [1988] concluded that 50 ppm EGEEA was the no-observable-effect level.

# B.2.1.4 Dermal Exposure

The effects of dermal exposure to EGEE on pregnant Sprague-Dawley rats have been investigated by Hardin et al. [1982].

# B.2.2 EGME and EGMEA

# B.2.2.1 Oral Administration

# B.2.2.2 Inhalation

# B.2.2.3 Dermal Exposure

The teratogenic potential of dermal exposure to EGME was estimated using the Chernoff and Kavlock in vivo assay. Groups of 10 pregnant Alpk/AP (Wistar-derived) rats were exposed to 3%, 10%, 30%, or 100% EGME in physiological saline at 10 mL/kg [Wickramaratne 1986]. The test compound was applied for 6 hr (occluded exposure) on g.d. 6 through 17. Rats were then allowed to deliver litters normally and rear their litters until day 5 postpartum. The application of 100% EGME was lethal to all dams, and 30% was lethal to all developing fetuses. At 10% EGME, the litter size was reduced by 26%, as was survival at day 5 (neither statistically significant). No adverse effects were seen in the 3% group. The results were evaluated using "rules" generated from the Chernoff and Kavlock in vivo teratology screen assay. The authors concluded that the data demonstrated a clear dose-response and that a 10% solution of EGME is likely to be a rat teratogen [Wickramaratne 1986].

Exposure of New Zealand white rabbits and Fischer 344 rats to 0, 50, 100, 200, or 300 ppm EGEEA 6 hr/day on g.d. 6 through 18 (rabbits) and 6 through 15 (rats) resulted in adverse hematologic parameters in both species [Tyl et al. 1988].
In rabbits, there was evidence of enlarged erythrocytes (elevated MCV) at 300 ppm EGEEA (P < 0.01) and significant dose-related decreases in the number of platelets at 100 (P < 0.05), 200 (P < 0.01), and 300 ppm EGEEA (P < 0.001). In rats, the WBC count was significantly increased (P < 0.001) at 200 and 300 ppm EGEEA. Statistically significant reductions in rat RBC count (P < 0.01), Hb level (P < 0.01), and Hct and RBC volume (P < 0.05) were seen at the three highest exposures (100, 200, and 300 ppm EGEEA). Platelet counts were also significantly reduced at 200 ppm (P < 0.001) and 300 ppm EGEEA (P < 0.01) in the rat.

# B.3.1.3 Dermal Exposure

In a study by Truhaut et al. [1979], rabbits were exposed to a single 24-hr dermal application (10,500 mg/kg) of EGEEA; death followed between 1 and 4 days after application. The reduction in RBC count did not exceed 15% to 20%, and blood Hb levels showed little variation; however, a 50% to 70% decrease in WBC count was noted.

In a study by Carpenter et al. [1956], hemolytic effects on the rat erythrocyte were demonstrated by a single 4-hr inhalation exposure of six female rats to EGME or EGMEA. The lowest concentrations causing significant osmotic fragility were 2,000 ppm EGME and 32 ppm EGMEA.

In a study by Miller et al. [1981], Fischer 344 rats and B6C3F1 mice were exposed by inhalation to 0, 100, 300, or 1,000 ppm EGME 6 hr/day for a total of 9 days in an 11-day period. WBC counts of both rats and mice exposed to 1,000 ppm EGME were statistically lower than those of the controls (P < 0.05). MCV, RBC counts, and Hb levels of male and female rats and male mice in the 1,000-ppm EGME-exposure group were also statistically depressed (P < 0.05). At 300 ppm EGME, similar but less severe effects were seen in rats; statistically lower (P < 0.05) WBC counts in both sexes and Hb and RBC counts in females were noted. Hematologic parameters in mice exposed at 300 ppm EGME were stated to be similarly affected, but data were not presented. Histopathology was performed on rats only, revealing reduced bone marrow cellularity, lymphoid depletion in the cortex of the thymus, and reduced numbers of lymphoid cells in the spleen and in the mesenteric lymph nodes in the 1,000-ppm EGME group. Both myeloid and erythroid elements of the bone marrow were markedly reduced in all rats exposed to 1,000 ppm EGME, and megakaryocytes were present in decreased numbers and were smaller than those of controls. The entire thymic cortical lymphoid population was depleted, with less dramatic reductions in the lymph nodes and spleen. Lymphoid organ toxicity persisted at 300 ppm EGME, but to a much lesser extent. In addition, serum total protein, albumin (males only), and globulin values in the 1,000-ppm EGME-exposure group (rats) were significantly reduced (P < 0.05).

This study showed that in man, EGEE is rapidly absorbed through the lungs. About 64% of the inhaled vapor was retained at rest, and retention increased as physical exercise was performed during exposure. The absorbed dose was apparently proportional to the inhaled concentration, and a linear relation was observed between uptake rate and exposure concentration. The rate of uptake increased when physical exercise was performed during exposure. The rate of uptake was higher as exposure concentration, or pulmonary ventilation rate, or both increased.
Individual uptake rates seemed dependent only on transport mechanisms (pulmonary ventilation, or cardiac output, or both) and not on anthropometric data or body fat content. Respiratory elimination of unchanged EGEE accounted for ≤0.4% of the total body uptake and occurred rapidly after cessation of exposure, followed by a much slower decrease. This slow decrease indicated that two pharmacological compartments were involved.

Groeseneken et al. [1986c] also studied the urinary excretion of EAA in the 10 male volunteers who inhaled various concentrations of EGEE for 4 hr both at rest and during physical exercise in the previously described study [Groeseneken et al. 1986b]. The subjects, divided into two groups of five, were either exposed at rest to concentrations of 2.7, 5.4, or 10.8 ppm EGEE (10, 20, or 40 mg/m³, respectively) or were exposed to 5.4 ppm EGEE at rest and during physical exercise (see Table B-2).

Preshift EAA concentrations were still detectable. On a number of days, the preshift EAA concentrations were even higher than the immediate postshift values on the preceding and same day. For this reason, and because of the more constant exposure profiles, an estimation of the maximum EAA levels after prolonged daily exposure was made on the basis of the results of the first exposure period. A linear correlation (r = 0.92) was found between average exposure to EGEE and EGEEA over the 5 exposure days and EAA excretion at the end of the 5-day workweek. An EAA estimate of 150 ± 35 mg/g creatinine was found to correspond with repeated 5-day full-shift exposures to 5 ppm of EGEE or 5 ppm of EGEEA.

Groeseneken et al. [1988] compared urinary EAA excretion in man and the rat after experimental exposure to EGEE. The human data were drawn from the previously mentioned inhalation study [Groeseneken et al. 1986c] in which five subjects had been exposed to 2.7 ppm, 5.4 ppm, or 10.8 ppm EGEE. Urine samples collected at short intervals were pooled into 12-hr groups for comparison with rat data. Groups of five male Wistar rats were treated by oral intubation with 0.5, 1, 5, 10, 50, or 100 mg EGEE/kg. Rat urine samples were collected before EGEE exposure and then at 12-hr intervals up to 60 hr after the dosing. The maximal excretion rate of EAA in human and rat urine was found within 12 hr after exposure or dosing. Afterwards, the decline of urinary EAA was much slower in man than in the rat. In man, the half-life of EAA was on average 42.0 ± 4.7 hr, a longer half-life than reported in the original study (21 to 24 hr) [Groeseneken et al. 1986c]. The authors attributed the longer half-life to the use of 12-hr pooled urine specimens in this study [Groeseneken et al. 1988] rather than to specimens collected at 1- to 2-hr intervals in the previous study [Groeseneken et al. 1986c]. In the previous study, half-lives were calculated from the peak excretion time (8 hr after the start of exposure). Examination of the excretion curves from the previous paper showed that elimination between 8 and 12 hr was more rapid than at longer time intervals, leading to a shorter calculated elimination half-life [Groeseneken et al. 1986c]. The authors concluded that the longer elimination half-life was more consistent with half-lives seen in occupational exposure, which were as high as 48 hr [Groeseneken et al. 1988; Veulemans et al. 1987]. The recovery of EAA in human urine after 48 hr averaged 23%.
Based on the half-life of EAA elimination of 42 hr, the authors estimated total recovery of EAA as 30% to 35% of the absorbed dose. In the rat, the half-life of EAA was 7.20 ± 1.54 hr. On the average, 27.6% ± 6.1% of urinary EAA in rats was present as the glycine conjugate, with the extent of conjugation being independent of the dose. The extent of conjugation demonstrated a diurnal variation; the lowest extent of conjugation was found during the night. EAA glycine conjugates were absent in human urine. In man, the recovery of EAA was higher than in the rat for equivalent low doses of EGEE (0.5 and 1 mg/kg). When urinary excretion data for the lower dose range were normalized for body weight in both species, rats excreted EAA at a higher rate than did man for equivalent doses. The authors concluded that, although nonlinear kinetics had been observed in some animal studies at high doses, the elimination kinetics seen at low doses in this study were not dose-dependent in either rats or humans [Groeseneken et al. 1988].

Groeseneken et al. [1987a] studied the pulmonary absorption and elimination of EGEEA in 10 male subjects under various conditions of exposure and physical workload. The elimination half-life of EAA was 23.6 ± 1.8 hr. However, 3 hr after the first peak EAA excretion, a second maximum excretion of EAA was observed; this second peak was especially pronounced after exposure during physical exercise. Redistribution of EGEEA, or EAA, or both from a peripheral to a central compartment could explain this phenomenon. Urinary EAA excretion was dependent on the EGEEA uptake rate, as a consequence of higher exposure (P<0.001), and on the uptake rate of EGEEA at constant exposure, as a consequence of physical workload (P<0.001). On average, 22.2% ± 0.9% of the absorbed EGEEA was metabolized and excreted in the urine as EAA within 42 hr. The total excretion of EAA in 42 hr was related both to total uptake from increasing concentrations of EGEEA (P<0.001) and to total uptake, at constant exposure, with increasing workload (P<0.001). Total EAA excretion was correlated to EGEEA concentration, uptake rate, and transport mechanisms (pulmonary ventilation, oxygen consumption, respiratory rate, etc.). In addition, EAA excretion was correlated to body fat (r=0.40, P<0.001). Groeseneken et al. [1987b] concluded that the metabolism of EGEEA proceeded through EGEE via esterases and then continued through the same excretion pathway as EGEE. Indeed, the kinetics of EAA excretion after exposure to EGEEA were very similar to those found after exposure to EGEE [Groeseneken et al. 1987a].

Workers from a shipyard painting operation who applied paint containing EGEE were evaluated for EGEE exposure [Lowry 1987]. Work conditions and practices varied considerably between brush and spray painters. Some workers were in confined spaces below deck, whereas others were in the open. The study was conducted in the winter, and the temperatures varied greatly depending on the painters' work areas. Information on work practices such as the number of hours spent painting, the type of paint used, the work area locations, and the use of personal protective equipment was gathered from questionnaires. Environmental breathing zone samples were collected for each worker for 3 days. Urine samples were collected every day for 1 wk, at the beginning and end of each workday, and EAA levels were measured.
A wide range of EAA levels was noted in workers using EGEE-containing paints; this was probably due to variation in work assignments, work areas, and use of personal protective equipment. This study has not gone through extensive evaluation to determine the importance of the many variables on the levels of EAA in urine. The author could only conclude that there appeared to be a relationship between urinary EAA excretion and the use of paints containing EGEE [Lowry 1987].

Clapp et al. [1987] investigated EGEE exposure of workers engaged in casting precision metal parts. The 8-hr TWAs of EGEE ranged from nondetectable to 23.8 ppm. EGEE was not detected in any of the blood samples from the EGEE-exposed workers, but exposed workers were found to have measurable levels of EAA in urine (163 mg/g creatinine). EAA was not detected in the urine of unexposed control subjects.

# B.4.2 EGME and EGMEA

EGME has been shown to be a possible substrate for alcohol dehydrogenase (ADH) [Tsai 1968; Blair and Vallee 1966], and thus oxidation of EGME via ADH and aldehyde dehydrogenase to methoxyacetic acid (MAA) is a potential route of metabolism [Miller et al. 1982]. The second major urinary metabolite was identified as methoxyacetylglycine (approximately 20% of radioactivity). Analysis of radioactivity in the plasma demonstrated rapid disappearance (t1/2 = 0.56 hr) of EGME between 0 and 4 hr after dosing, with a corresponding appearance of MAA. Radioactivity clearance from plasma (t1/2) was estimated to be 19.7 hr. In the rats pretreated with pyrazole, the metabolism of EGME to MAA was inhibited. Analysis of radioactivity in plasma showed a slower disappearance of EGME (t1/2 = 42.6 ± 5.6 hr) and slower radioactivity clearance from plasma (t1/2 = 51.0 ± 7.8 hr) than in the controls. The percentage of the dose found in urine after 24 hr (9.8% ± 2.4%) and 48 hr (7.9% ± 2.2%) showed urinary excretion not to be the major route of elimination. MAA was not a major urinary component, and methoxyacetylglycine was not found in urine from these rats. Pretreatment with the aldehyde dehydrogenase inhibitor disulfiram had no significant effect on plasma or urinary metabolic profiles.

Yonemoto et al. [1984] used an in vitro culture system [New 1978] to determine the effects of EGME and MAA on the development of post-implantation rat embryos. On g.d. 9, conceptuses were removed from the dams (Wistar-Porton strain) and placed in pairs in bottles that contained 3 mL of heat-inactivated male rat serum and 1 mL of test compound. Ten to 15 conceptuses per group were cultured for 48 hr in 381 mg EGME or in 90, 180, 270, or 450 mg MAA and then examined under a stereoscopic dissecting microscope. EGME had no significant effects on embryonic growth and development when compared with those of the controls. MAA, however, produced statistically significant reductions in morphological development, crown-rump length, head length, number of somites, and yolk sac diameter when compared with those of the controls (P<0.001). These effects demonstrated a dose response: all were seen at 450 mg MAA; all but yolk sac diameters were affected at 270 mg; and only head length and morphological development were affected at 180 mg. No significant effects were seen at 90 mg MAA. The predominant abnormalities seen in affected conceptuses were irregular fusion of the neural tube (wavy or open neural suture line) and irregular segmentation of the somites.
The authors [Yonemoto et al. 1984] concluded that the data demonstrated that MAA or its metabolites are the proximal toxins in vivo and that, at the organogenesis stage, the rat fetus in vitro lacks alcohol dehydrogenase activity.

The in vitro culture system used by Yonemoto et al. [1984] was also used by Rawlings et al. [1985] to study the mechanism of teratogenicity of EGME. Conceptuses were explanted from pregnant Wistar-Porton rats at embryonic age 9.5 days and cultured for 48 hr with 2 or 5 mM MAA. At the end of the culture period, crown-rump length, head length, and yolk sac diameter were measured, and the degree of differentiation and development was evaluated by a morphological scoring system. MAA at the 5 mM concentration had an adverse effect on fetal development. MAA-exposed embryos had statistically significant reductions (P<0.01) in morphological score, crown-rump length, head length, and yolk sac diameter compared with those of the controls. MAA also produced statistically significant reductions (P<0.05) in the protein content of the embryo. No statistically significant reductions in growth parameters were seen at the 2 mM level. However, irregularity of the neural suture line was seen in 100% of the MAA-exposed embryos. Other abnormalities observed in the MAA-exposed groups included abnormal otic and somite development, turning failure, open cranial folds, and abnormal yolk sacs [Rawlings et al. 1985].

As has been demonstrated, the induction of paw malformations following in utero [Brown et al. 1984; Ritter et al. 1985] as well as in vitro [Yonemoto et al. 1984] exposure to EGME appears to depend on the oxidation of EGME to MAA. Sleet et al. [1988] investigated the relationship between the induction of paw malformations and the disposition of EGME in the maternal and embryonal compartments. Pregnant CD-1 mice were dosed by gavage on g.d. 11 with either EGME (1.3 to 6.6 mmol/kg, 100 to 500 mg/kg, or 5.2 µl/g) or MAA (1.1 to 7.7 mmol/kg, 100 to 693 mg/kg, or 4.9 µl/g) and were sacrificed on g.d. 18. Fetuses were delivered by laparotomy and weighed before external examination for paw defects. The embryotoxic potencies of EGME and MAA were determined by comparing the dose-dependent incidence of digit anomalies. EGME and MAA were equipotent in causing digit malformations. The ADH inhibitor 4-methylpyrazole, administered orally 1 hr before EGME, reduced the incidence of malformations 60% to 100%, depending on the dosing regimen. Oxidation of EGME to MAA was nearly complete after 1 hr, when approximately 90% of (14C) in the maternal compartment and conceptus coeluted with authentic (14C)-MAA on HPLC. Embryonic (14C)-MAA levels were 1.2 times those in plasma 1 hr and 6 hr after dosing; by 6 hr, however, concentrations in the embryo had declined to approximately 50% of 1-hr values. Dams treated i.v. with (14C) MAA had higher (14C) blood levels than did dams treated orally, but the offspring of the former had fewer digit malformations. The authors concluded that peak and steady-state plasma levels of MAA, as well as embryonic MAA levels, do not appear to determine the embryotoxic outcome, whereas further metabolism of MAA does [Sleet et al. 1988].

EGME uptake and urinary MAA excretion were examined in seven male subjects exposed at rest to 5.1 ppm EGME (16 mg/m³) by mask for four 50-min periods [Groeseneken et al. 1989a]. There was a short 10-min break at the end of each 50-min period to allow for urine collection.
Urine samples were collected immediately before the beginning of the experiment and at hourly intervals until the fourth hour after exposure. Collections were taken until the morning of the fifth day after the exposure period (four 2-hr collections, one 8-hr collection, and eight 12-hr collections). Urinary MAA was then measured by the method of Groeseneken et al. [1989b]. Retention of EGME was 76% during the 4-hr exposure period. The uptake rate showed no significant variation because of constant pulmonary ventilation and a fixed exposure concentration. On average, 19.4 ± 2.1 mg EGME was inhaled during the 4-hr exposure period. MAA was detected in the urine during and up to 120 hr after exposure. The elimination half-life averaged 77.1 ± 9.5 hr. On average, 54.9% ± 4.5% of inhaled EGME was excreted within 120 hr of the start of exposure; half of this amount was excreted within 48 hr. By extrapolation, the total amount of MAA was estimated at 85.5% ± 4.9% of inhaled EGME.

General class names such as "aromatic amine," "safety solvent," or "aliphatic hydrocarbon" should not be used when the specific name is known. The "%" may be the approximate percentage by weight or volume (indicate basis) that each hazardous ingredient of the mixture bears to the whole mixture. This may be indicated as a range or maximum amount (i.e., 10% to 40% vol. or 10% max. wt.) to avoid disclosure of trade secrets.

# D.3 SECTION III. PHYSICAL DATA

The data in Section III should be for the total mixture. Include the boiling point and melting point in degrees Fahrenheit (Celsius in parentheses); vapor pressure, in conventional millimeters of mercury (mm Hg); vapor density of gas or vapor (air = 1); solubility in water, in parts/hundred parts of water by weight; specific gravity (water = 1); percent volatiles (indicate if by weight or volume) at 70°F (21.1°C); evaporation rate for liquids or sublimable solids, relative to butyl acetate; and appearance and odor. These data are useful for the control of toxic substances. Boiling point, vapor density, percent volatiles, vapor pressure, and evaporation rate are useful for designing proper ventilation equipment. This information is also useful for designing and deploying adequate fire and spill containment equipment. The appearance and odor may facilitate identification of substances stored in improperly marked containers or of spilled substances.

# D.4 SECTION IV. FIRE AND EXPLOSION DATA

Section IV should contain complete fire and explosion data for the product. Include flashpoint and autoignition temperature in degrees Fahrenheit (Celsius in parentheses), flammable limits in percent by volume in air, suitable extinguishing media or materials, special fire-fighting procedures, and unusual fire and explosion hazard information. If the product presents no fire hazard, insert "NO FIRE HAZARD" on the line labeled "Extinguishing Media."

# D.5 SECTION V. HEALTH HAZARD INFORMATION

For the "Health Hazard Data" line, use a combined estimate of the hazard of the total product. This can be expressed as a TWA concentration, as a permissible exposure, or by some other indication of an acceptable standard. Other data are acceptable, such as the lowest LD50 if multiple components are involved.

(3) gas chromatography analysis using flame ionization detection (FID). Urine (1 ml) was adjusted to pH 2 with HCl and extracted three times with methylene chloride.
Phase transfer catalysis (a combination of ion-pair extraction and fluoroanhydride derivatization) was done by adding alkaline tetrabutylammonium hydrogen sulfate and PFBB to the methylene chloride extract. The mixture was rotated for 2 hr. Gas chromatography was employed to analyze 5 µl of the methylene chloride layer (bottom layer) using FID and a 6 ft x 1/4 in (4-mm id) glass column (packed with 1.95% QF-1 and 1.5% OV-17 on 80/100 mesh Supelcoport). Detection limits for MAA and EAA were 11.4 and 5.0 mg/liter of urine, respectively. Average recoveries (and relative standard deviations) were 78% (0.17) for MAA and 91% (0.14) for EAA.

Groeseneken et al. [1986a] developed a method for determining MAA and EAA in urine based on lyophilization of urine samples followed by derivatization with diazomethane. After adjustment of urine specimens to pH 8 to 8.5 with KOH, 1 ml of urine and 50 µg of 2-furoic acid (FA) (internal standard) were added, and the sample was lyophilized. The dry residue was redissolved in 1 ml methylene chloride with added HCl and derivatized with diazomethane in methylene chloride. Gas chromatographic analysis using FID was performed on a fused silica capillary column (CP WAX 57 CB, 25 m x 0.33 mm id) with a split ratio of 10:1. The detection limits of MAA and EAA were 0.15 and 0.07 mg/liter of urine, respectively. Mean recoveries of MAA, EAA, and FA added to "blank" urine samples were 31.4%, 62.5%, and 58.4%, respectively; the recoveries of MAA and EAA were well correlated with those of the internal standard. Day-to-day variability for MAA and EAA was 6.0% and 6.4%, respectively; the corresponding within-day variability was 6.2% and 8.9%.

Smallwood et al. [1988] developed and validated a method for analysis of EAA in urine. Two ml of urine, along with potassium carbonate, tetrabutylammonium hydrogen sulfate, methylene chloride, and PFBB, were added to a screw-top culture tube. After 2 hr of mixing on a rotator at 60 rpm, the tube was heated for 20 min in a 50°C water bath. Additional mixing at room temperature, removal of the upper aqueous layer, and washing of the lower methylene chloride layers with distilled water removed unreacted reagents. The methylene chloride extract was dried with anhydrous sodium sulfate and loaded into an autosampler vial. Automated gas chromatographic analysis using FID was conducted with the use of a 6 ft x 4 mm id glass column packed with 4% SE-30 and 6% OV-210 on 100/120 mesh Chromosorb WHP. Standards were prepared in pooled urine. The analytical range for EAA was 5 to 100 mg/liter of urine; the limit of detection was 4 mg/liter; and the limit of quantitation was 7 mg/liter. Within-day variation was 0.5% to 1.8%, and day-to-day variation was 3.0% to 4.7%. Sample stability was confirmed for at least 8 months when specimens were stored at -20°C. The authors stated that the method could also be used for MAA and butoxyacetic acid (BAA) in urine. Preliminary data presented in the paper indicated that the technique has the potential for assessing EGEE exposure in shipyard painters who use paints containing EGEE.

Groeseneken et al. [1989b] observed that MAA appeared in the chromatogram of control subjects not exposed to EGME. Further investigation revealed that the diazomethane procedure was producing MAA by reacting with the hydroxyl group of naturally occurring glycolic acid.
Groeseneken et al. [1989b] further evaluated the existing methods for determining alkoxyacetic acids and concluded that the phase transfer catalysis procedures had the required specificity, without the production of artifacts, but lacked sufficient sensitivity to detect these metabolites at low occupational exposure concentrations. On the other hand, the methods utilizing diazomethane derivatization had the required sensitivity but lacked the specificity. Therefore, Groeseneken et al. [1989b] developed an improved method that combined the best attributes of the two basic existing methods.

The procedure developed by Groeseneken et al. [1989b] was described as follows. Urine was adjusted to pH 7; 1-ml aliquots were placed in small vials with 3-chloropropionic acid (internal standard) and lyophilized overnight. The dry residue was redissolved in methanol containing PFBB, and the vials were capped. The vials were heated at 90°C for 3 hr. After cooling, sample cleanup was done by adding distilled water and extracting the pentafluorobenzyl esters (PFB-esters) with methylene chloride. The methylene chloride extract was analyzed by gas chromatography using FID. A fused silica capillary column was used (CP Sil 5, 25 m x 0.32 mm id, 0.21 µm film thickness) with a split ratio of 5:1. Temperature programming was employed. All PFB-esters showed baseline resolution; retention times of 6.53 min (MAA), 7.77 min (EAA), and 8.59 min (internal standard) were observed. A typical gas chromatographic run, including cool-down and equilibration times, required about 30 min. Optimization studies were done for reagent concentrations as well as for urinary pH and reaction time. After correction for the partial solubility of methylene chloride in the 50:50 methanol:urine phase, recoveries of alkoxyacetic acids from urine averaged 95.0% (MAA), 94.8% (EAA), and 95.1% (BAA). The yield for the derivatization reaction averaged 99.5% (MAA) and 101.8% (EAA). Standard curves were set up for urine and were linear over the range of 0.1 to 200 mg/liter. The limit of detection, at a signal-to-noise ratio of 5, for the two acids was 0.03 mg/liter. Precision of the method, calculated from triplicate injections of 40 urine samples, averaged 3.5% (RSD), ranging from 1.1% at 25 mg/liter to 20% at 0.1 mg/liter. NIOSH has not validated the Groeseneken et al. methods [1986a, 1989b].

1. The presence of EAA in urine specimens (collected as specified) above a concentration of approximately 5 mg EAA/g creatinine is evidence for a single EGEE and/or EGEEA inhalation exposure corresponding to an 8-hr exposure to 0.5 ppm EGEE and/or EGEEA at 60 W of exercise. This value was extrapolated from 4-hr experimental exposures at 5 ppm at a 60-W workload to an 8-hr exposure at 0.5 ppm at a 60-W workload [Groeseneken et al. 1986c, 1987b].

# G.2 Justification For Recommendations

Biological monitoring for glycol ether exposure is recommended, even though no validated guidelines can be provided as to the relationship between airborne exposure to glycol ethers and the alkoxyacetic acid urinary metabolites. The alkoxyacetic acid metabolites (EAA and MAA) are not only an index of exposure or uptake of EGEE or EGME by the worker, but they are also an index of potential adverse health effects from these glycol ethers. Dermal absorption may be a major route of exposure to EGME and EGEE and their respective acetates. The potential exists for absorption of glycol ether vapors through wet skin.
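The relationship between a measured urinary excretion curve and the extrapolated total recovery reported in the studies cited above can be illustrated with a one-compartment, first-order elimination model. The Python sketch below is a minimal illustration under that assumption only; the published estimates were derived from the measured excretion curves, so this idealization merely approximates them, and the function name is illustrative.

```python
import math

def extrapolate_total_recovery(observed_pct: float, t_obs_hr: float,
                               half_life_hr: float) -> float:
    """Extrapolate total urinary recovery (% of inhaled dose) from the amount
    observed up to t_obs_hr, assuming one-compartment first-order elimination:
    E(t) = E_total * (1 - exp(-k * t)), with k = ln(2) / half-life."""
    k = math.log(2.0) / half_life_hr
    return observed_pct / (1.0 - math.exp(-k * t_obs_hr))

# EGME/MAA example from the data above (Groeseneken et al. 1989a):
# 54.9% of inhaled EGME excreted as MAA within 120 hr; half-life 77.1 hr.
total = extrapolate_total_recovery(54.9, 120.0, 77.1)
print(f"extrapolated total recovery: {total:.1f}%")  # ~83%, near the reported 85.5%
```

That the simple model lands within a few percent of the published 85.5% figure suggests why such extrapolations are plausible for slowly eliminated metabolites, while the gap also shows why the authors worked from the actual excretion curves rather than an idealized single compartment.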
The influence of workload is significant for inhalation exposure. Doubling the workload results in twice the uptake of the glycol ethers.

The potential for adverse effects, particularly decreased cardiac output, from the positive pressure feature of some respirators has been reported [Meyer et al. 1975]. However, several recent studies suggest that this is not a practical concern, at least not in healthy individuals [Bjurstedt et al. 1979; Arborelius et al. 1983; Dahlback and Balldin 1984]. Theoretically, the increased fluctuations in thoracic pressure caused by breathing with a respirator might constitute an increased risk to subjects with a history of spontaneous pneumothorax. Few data are available in this area. While an individual is using a negative-pressure respirator with relatively high resistance during very heavy exercise, the usual maximal-peak negative oral pressure during inhalation is about 15 to 17 cm of water [Dahlback and Balldin 1984]. Similarly, the usual maximal-peak positive oral pressure during exhalation is about 15 to 17 cm of water, which might occur with a respirator in a positive-pressure mode, again during very heavy exercise [Dahlback and Balldin 1984]. By comparison, maximal positive pressures such as those during a vigorous cough can generate 200 cm of water pressure [Black and Hyatt 1969]. The normal maximal negative pleural pressure at full inspiration is -40 cm of water [Bates et al. 1971], and normal subjects can generate -80 to -160 cm of negative water pressure [Black and Hyatt 1969]. Thus vigorous exercise with a respirator does alter pleural pressures, but the risk of barotrauma is substantially less with exercise than with coughing.

In some asthmatics, an asthmatic attack may be exacerbated or induced by a variety of factors including exercise, cold air, and stress, all of which may be associated with wearing a respirator. Although most asthmatics who are able to control their condition should not have problems with respirators, a physician's judgment and a field trial may be needed in selected cases.

# H.1.2 Cardiac Effects

The added work of breathing from respirators is small and could not be detected in several studies [Gee et al. 1968; Hodous et al. 1983]. A typical respirator might double the work of breathing (from 3% to 6% of the total oxygen consumption), but this is probably not of clinical significance [Gee et al. 1968].

# H.1.5 Psychologic Effects

This important topic is discussed in recent reviews by Morgan [Morgan 1983a, 1983b]. There is little doubt that virtually everyone suffers some discomfort when wearing a respirator. The large variability and the subjective nature of the psychophysiologic aspects of wearing a respirator, however, make studies and specific recommendations difficult. Fit testing obviously serves an important additional function by providing a trial to determine if the wearer can psychologically tolerate the respirator. The great majority of workers can tolerate respirators, and experience in wearing them aids in this tolerance [Morgan 1983b]. However, some individuals are likely to remain psychologically unfit for wearing respirators.

# H.1.6 Local Irritation Effects

Allergic skin reactions may occur occasionally from wearing a respirator, and skin occlusion may cause irritation or exacerbation of preexisting conditions such as pseudofolliculitis barbae. Facial discomfort from the pressure of the mask may occur, particularly when the fit is unsatisfactory.
# H.1.7 Miscellaneous Health Effects

In addition to the health effects (described above) associated with wearing respirators, specific groups of respirator wearers may be affected by the following:

(1) Corneal irritation or abrasion. Corneal irritation or abrasion might occur with the exposure. This would, of course, be a problem primarily with quarter- and half-face masks, especially with particulate exposures. However, exposures could occur with full-face respirators because of leaks or inadvisable removal of the respirator for any reason. Although corneal irritation or abrasion might also occur without contact lenses, their presence is known to substantially increase this risk.

(2) Loss or misplacement of a contact lens. The loss or misplacement of a contact lens by an individual wearing a respirator might prompt the wearer to remove the respirator, thereby resulting in exposure to the hazard as well as to the potential problems noted above.

(3) Eye irritation from respirator airflow. The constant airflow of some respirators, such as powered air-purifying respirators (PAPRs) or continuous-flow air-line respirators, might irritate the eyes of a contact lens wearer.

# H.2 SUGGESTED MEDICAL EVALUATION AND CRITERIA FOR RESPIRATOR USE

The following NIOSH recommendations allow latitude for the physician in determining a medical evaluation for a specific situation. More specific guidelines may become available as knowledge increases regarding human stresses from the complex interactions of worker health status, respirator usage, and job tasks. Although some of the following recommendations should be part of any medical evaluation of workers who wear respirators, others are applicable for specific situations.

• A physician should determine fitness to wear a respirator by considering the worker's health, the type of respirator, and the conditions of respirator use.

The recommendation above leaves the final decision of an individual's fitness to wear a respirator to the person who is best qualified to evaluate the multiple clinical and other variables. Much of the clinical and other data could be gathered by other personnel. It should be emphasized that the clinical examination alone is only one part of the fitness determination. Collaboration with foremen, industrial hygienists, and others may often be needed to better assess the work conditions and other factors that affect an individual's fitness to wear a respirator.

• A medical history and at least a limited physical examination are recommended.

The medical history and physical examination should emphasize the evaluation of the cardiopulmonary system and should elicit any history of respirator use. The history is an important tool in medical diagnosis and can be used to detect most problems that might require further evaluation. Objectives of the physical examination should be to confirm the clinical impression based on the history and to detect important medical conditions (such as hypertension) that may be essentially asymptomatic.

• Although chest X-ray and/or spirometry may be medically indicated in some fitness determinations, these should not be routinely performed.

In most cases, the hazardous situations requiring the wearing of respirators will also mandate periodic chest X-rays and/or spirometry for exposed workers. When such information is available, it should be used in the determination of fitness to wear respirators. Data from routine chest X-rays and spirometry are not recommended solely for determining if a respirator should be worn.
In most cases, with an essentially normal clinical examination (history and physical), these data are unlikely to influence the respirator fitness determination; additionally, the X-ray would be an unnecessary source of radiation exposure to the worker. Chest X-rays in general do not accurately reflect a person's cardiopulmonary physiologic status, and limited studies suggest that mild to moderate impairment detected by spirometry would not preclude the wearing of respirators in most cases. Thus it is recommended that chest X-rays and/or spirometry be done only when clinically indicated.

• The recommended periodicity of medical fitness determinations varies according to several factors but could be as infrequent as every 5 years.

Federal or other applicable regulations shall be followed regarding the frequency of respirator fitness determinations. A trial period should also be used to evaluate the ability and tolerance of the worker to wear the respirator [Harber 1984]. This trial period need not be associated with respirator fit testing and should not compromise the effectiveness of the vital fit-testing procedure.

• Examining physicians should realize that the main stress of heavy exercise while using a respirator is usually on the cardiovascular system and that heavy respirators (e.g., SCBA) can substantially increase this stress. Accordingly, physicians may want to consider exercise stress tests with electrocardiographic monitoring when heavy respirators are used, when cardiovascular risk factors are present, or when extremely stressful conditions are expected.

Physicians may also wish to consider the following conditions in selecting or permitting the use of respirators:

• History of spontaneous pneumothorax

The authors [Chapin et al. 1985b] concluded that although the low dose of EGME was designed to be a no-effect level, there were slight effects, as previously noted [Chapin et al. 1985a].

*This method was reprinted from OSHA [1990]. Further information on this subject or the Union Carbide method may be obtained from Union Carbide Corporation, Saw Mill River Road, Route 100C, Tarrytown, NY 10591, (914) 789-223.

# B.1.1.2 Oral Administration

Twenty male albino rats were fed 1.45% EGEE in their basic diet for 2 yr [Morris et al. 1942]. Upon histological examination, testicular enlargement and edema and tubular atrophy were observed in two-thirds of the animals that had received EGEE. These changes were not seen in untreated controls. The testicular lesions were more often bilateral than unilateral and consisted of marked interstitial edema. Oral administration of 46.5 or 93 mg EGEE/kg per day for 13 weeks to male dogs (three per group) had no adverse effect [Stenger et al. 1971]. On the other hand, oral administration of 186 mg EGEE/kg per day for 13 wk caused degenerative changes in the testes of all three dogs.

*References for Appendix B can be found beginning on page 262.

# B.3.2 EGME and EGMEA

# B.3.2.1 Oral Administration

Oral administration of EGME and EGMEA has induced hematologic changes in laboratory animals. A statistically significant decrease (P < 0.01) in WBC counts was found following oral administration of 500 mg EGME/kg or 1,000 mg EGMEA/kg to male JCL-ICR mice 5 times/wk for 5 wk. Statistically significant decreases were also observed in RBC and Hb values for the 1,000 mg EGME/kg group (P < 0.01) and in Hb values for the 2,000 mg EGMEA/kg group (P < 0.01). Treatment of female JCL-ICR mice with 1,000 mg EGME/kg on g.d.
7 through 14 also significantly decreased the leukocyte counts (P < 0.01) [Nagano et al. 1981].

Grant et al. [1985] investigated the effects of subchronic oral exposure to EGME on the hematopoietic system of rats and the reversibility of such effects. Groups of 24 male F344 rats were orally dosed with EGME at 0, 100, or 500 mg/kg per day for 4 consecutive days. Six animals from each group were then sacrificed 1, 4, 8, and 22 days after the last treatment. Rats in the high-dose group displayed severely hemorrhagic femoral bone marrow with major loss of the normal nucleated tissue and damage of sinus endothelial cells on day 1. The normal architecture of the marrow was restored by day 4 post-treatment. Treatment with 500 mg EGME/kg for 4 days abolished extramedullary hemopoiesis (EMH) in the spleen; partial recovery was seen on day 4, followed by marked improvement on day 8, and a return to control values by day 22. The high-dose group also showed mild anemia characterized by reductions of the Hct and Hb values at day 4 (P < 0.05 and P < 0.001, respectively), and of RBC, Hct, and Hb values at day 8 (P < 0.05, P < 0.05, and P < 0.01, respectively). Leukocyte counts (neutrophils and lymphocytes) were significantly reduced in this group on day 1 (P < 0.001) and did not return to control values by the end of the recovery period. Low-dose (100 mg EGME/kg per day) rats also had reduced leukocyte counts on day 1 (P < 0.05). The authors [Grant et al. 1985] concluded that the major hematological effect of EGME was leukopenia characterized by reductions in lymphocytes and neutrophils. Changes in the circulating blood together with reduced splenic EMH and bone marrow toxicity suggested an inhibitory action on erythropoiesis.

The toxicity of MAA will be discussed next to evaluate the importance of metabolism as a detoxification or bioactivation mechanism for EGME. In a study by Miller et al. [1982], groups of five male F344 rats were given daily doses of 0, 30, 100, or 300 mg MAA/kg per day orally on 8 days out of 10 and were then sacrificed 24 hr after the final dose. Rats given the high dose had significantly lower body weights on the fifth day and again when recorded on the tenth day (P<0.05).

The role of metabolism in EGME-induced testicular toxicity was investigated by Moss et al. [1985] using groups of nine male Sprague-Dawley rats pretreated i.p. with 400 mg pyrazole/kg, an alcohol dehydrogenase (ADH) inhibitor, or pretreated with 300 mg disulfiram/kg, an aldehyde dehydrogenase inhibitor. One hr after pretreatment with pyrazole or 24 hr after pretreatment with disulfiram, animals were injected i.p. with 250 mg of labeled EGME/kg. Controls with no pretreatment also received 250 mg (14C) EGME/kg i.p. Urinary excretion was the major route of elimination of EGME metabolites after 24 hr (40.4% ± 3.6% of dose) and 48 hr (14.8% ± 0.6% of dose) in controls. High performance liquid chromatography (HPLC) analysis identified MAA as the major urinary metabolite at 0 to 24 hr (63% of the radioactivity) and at 24 to 48 hr (50% of the radioactivity).

# B.3.2.2 Inhalation

# APPENDIX C

OCCUPATIONAL EXPOSURES TO THE GLYCOL ETHERS BY WORKSITE OR PROCESS

# APPENDIX D

MATERIAL SAFETY DATA SHEET

The following sections describe the information that must be supplied for each product or material in the appropriate blocks of the Material Safety Data Sheet (MSDS).
# D.2 SECTION II. HAZARDOUS INGREDIENTS

The "materials" listed in Section II shall be those substances that are part of the hazardous product covered by the MSDS and that individually meet any of the criteria defining a hazardous material. Thus, one component of a multicomponent product might be listed because of its toxicity, another component because of its flammability, and a third component for both its toxicity and its reactivity. Note that an MSDS for a single-component product must have the name of the material repeated in this section to avoid giving the impression that there are no hazardous ingredients.

# H.1 BACKGROUND INFORMATION

Brief descriptions of the health effects associated with wearing respirators are summarized below. More detailed analyses of the data are available in recent reviews by James [1977] and Raven et al. [1979].

# H.1.1 Pulmonary Effects

In general, the added inspiratory and expiratory resistances and dead space of most respirators cause an increase in tidal volume and a decrease in respiratory rate and ventilation (including a small decrease in alveolar ventilation). These respirator effects have usually been small both among healthy individuals and, in limited studies, among individuals with impaired lung function [Gee et al. 1968; Altose et al. 1977; Raven et al. 1981; Hodous et al. 1983; Hodous et al. 1986]. This generalization is applicable to most respirators when resistances (particularly expiratory resistance) are low [Bentley et al. 1973; Love et al. 1977]. Although most studies report minimal physiologic effects during submaximal exercise, the resistances commonly lead to reduced endurance and reduced maximal exercise performance [Craig et al. 1970; Raven et al. 1977]. In most situations, a worker who is physically able to do an assigned job without a respirator will not be at increased risk when performing the same job while wearing a respirator.

Raven PB, Jackson AW, Page K, et al. [1981]. The physiological responses of mild pulmonary impaired subjects while using a "demand" respirator during rest and work. Am Ind Hyg Assoc J 42(4):247-257.
Table of Contents (continued)

IX. APPENDIX I - Sampling Procedure for Collection of Tetrachloroethylene
X. APPENDIX II - Analytical Procedure for Determination of Tetrachloroethylene
XI. REVIEW CONSULTANTS

# I. RECOMMENDATIONS FOR A TETRACHLOROETHYLENE STANDARD

# RESPIRATOR SELECTION GUIDE FOR PROTECTION AGAINST TETRACHLOROETHYLENE

500 ppm or less:
(1) A chemical cartridge respirator with a full facepiece and organic vapor cartridge(s)
(2) A gas mask with a chin-style or a front- or back-mounted organic vapor canister
(3) A supplied-air respirator with a full facepiece, helmet, or hood
(4) A self-contained breathing apparatus with a full facepiece

Greater than 500 ppm, or entry into and escape from unknown concentrations:
(1) A self-contained breathing apparatus with a full facepiece operated in pressure-demand or other positive pressure mode
(2) A combination respirator which includes a Type C supplied-air respirator with a full facepiece operated in pressure-demand or other positive pressure or continuous-flow mode and an auxiliary self-contained breathing apparatus operated in pressure-demand or other positive pressure mode

Firefighting:
A self-contained breathing apparatus with a full facepiece operated in pressure-demand or other positive pressure mode

Escape:
(1) A gas mask providing protection against organic vapors
(2) An escape self-contained breathing apparatus

(D) Two cases of hepato-nephritis due to occupational exposure to tetrachloroethylene were reported in the French literature in 1955 and 1964 [58,59]. In both cases, the subjects were using tetrachloroethylene for degreasing metal parts. The atmospheric concentrations of the solvent were not measured, but contact with the solvent was direct in one case and within a few meters in the other. Vallud et al [58] reported the post mortem examination of one man after his death but gave no clinical description. Post mortem showed an average size liver, with "localized degenerations," and enlarged, yellowish kidneys with congestion in the pyramidal zone. Dumortier et al [59] concluded that the case of hepato-nephritis they had diagnosed was due to occupational exposure to tetrachloroethylene. The patient experienced vomiting, jaundice, and anuresis within 1 week after he started using tetrachloroethylene to clean metal. There was no evidence of previous infectious disease or medication that would have been predisposing. However, the 34-year-old man had a history of alcoholism. Two years prior to the occupational exposure, hepatomegaly had been diagnosed. On admission to the hospital, the liver was found to be

He became unconscious and was found 30 minutes later lying on the floor. Clinical examination revealed erythema and blistering over 30% of his body. In 5 days, the erythema subsided and the blistering disappeared. Some dryness and irritation of the skin reportedly occurred thereafter. Skin effects due to chronic tetrachloroethylene exposure have been reported in other studies [46,63]. Munzer and Heder [63] reported a case of eczema as a direct effect of exposure of a man in a drycleaning plant. Gold [46] reported that a drycleaner had severe neurologic disturbances and had dry, scaly forearms and hands. [41] The body burden of exposure to tetrachloroethylene was indicated by the distribution in expired air, blood, and urine. Breath analyses showed that tetrachloroethylene is excreted in the breath for long periods after exposure.
At 21 hours after a 3-hour exposure at 100 ppm, mean breath concentrations of tetrachloroethylene for days 1 through 5 were 1.00, 2.63,

There also was a reduction of electrical conductivity by 24-40% after 5 months of exposure. Microscopic investigation revealed swollen and vacuolized protoplasm in some cells, but these occurred only sporadically, most cells being normal. One to two months after exposure, the EEG and electric conductivity were only slightly or not at all different from controls. The 5-month exposure at 1.5 ppm resulted only in a slightly higher impedance of the cerebral tissue than was found with the controls. [74]

# (b) Effects on Liver and Kidneys

Carpenter [37] attempted to discover the highest concentration of tetrachloroethylene vapors that would not anesthetize rats exposed for 8 hours. Rats exposed at 31,000 ppm (3.1%) died after a few minutes of exposure, and after 30 to 60 minutes when the exposure was at 19,000 ppm (1.9%). Some rats survived exposure at 19,000 ppm, but their livers showed congestion and granular swelling. Similar liver effects were found after exposure at 9,000 ppm. In addition, there was marked granular swelling of the kidneys. No deaths were reported after a single exposure at 9,000, 4,500, or 2,750 ppm (0.9, 0.45, 0.275%). Post mortem examinations of rats exposed to those concentrations showed a slight increase in the prominence of liver and kidney markings. Carpenter [37] graded the fatty degeneration observed as follows: (1) fatty degeneration involving a thin cell layer, up to two to three cells in width, usually at the periphery of the lobules; (2) the same as in (1), but involving a layer of three to five cells in width. The investigators reported that, starting from the fifth month of exposure, the mortality of male rats exposed at 600 ppm was significantly higher than that of controls. The cause of death was not stated. There was no difference in mortality rates in male rats exposed at 300 ppm or female rats exposed at 300 or 600 ppm. The investigators reported that spontaneous tumors appeared with comparable frequency in both exposed and nonexposed animals after 29 months. The National Cancer Institute is currently conducting a study of the carcinogenic potential of tetrachloroethylene. The results of this study will be evaluated when they become available.

Yllner [87] determined the urinary metabolites of 14C-labeled tetrachloroethylene in five female mice exposed for 2 hours to vapor in doses of 1.3 mg/g body weight. In 4 days, 90% of the absorbed tetrachloroethylene was excreted or metabolized: 70% in expired air, 20% in urine, and less than 0.5% in feces. Using chromatographic, radiographic, and isotope dilution methods and, in part, Jondorf's method, the following metabolites were identified in the urine according to the percentage of total urinary activity: trichloroacetic acid, 52%; oxalic acid, 11%; and traces of dichloroacetic acid.

Daniel [88] studied the partition of 36Cl-labeled tetrachloroethylene in the urine, feces, and expired air of rats. Wistar rats were administered 1.75 µCi or 13 µCi of the labeled tetrachloroethylene by stomach tube. The half-life of expiration of tetrachloroethylene was found to be 8 hours, and 97.9% of the radioactivity was found in the expired air 48 hours after administration of the labeled tetrachloroethylene. After 18 days, 1.6-2.1% of the radioactivity was found in the urine. No radioactivity was found in the feces. Trichloroacetic acid and inorganic chloride were the only metabolites detected in the urine. On the addition of silver nitrate to the urine, 25% of the total urinary chlorine was precipitated as chloride. The remaining urinary radioactivity was accounted for by trichloroacetic acid. Oxalic acid was not found. [88]

The toxicity of trichloroacetic acid, a metabolite of tetrachloroethylene, has been the subject of some reports in the literature [89,90]. Woodard et al [89] reported the LD50 to be 3.32 g/kg for rats and 4.97 g/kg for mice. The consequences of chronic excretion of trichloroacetic acid were considered by Frant and Westendorp [90]. Trichloroacetic acid is a strong organic acid which may be neutralized in the body by sodium or potassium. Whether this constant elimination of fixed alkali will result in acidosis or diminution of the carbon dioxide-combining power of the blood has not been studied.

Daniel [88] also exposed seven male and seven female rats at 1,000 ppm tetrachloroethylene to determine the effect on liver lipid content. The exposures were for three successive periods of 6 hours each. In the exposed female rats, 8.0 ± 1.5 mg lipid/100 mg dry liver weight was found compared to 10.7 ± 2.2 in controls, but this was not considered significant. In the exposed males, 11.3 mg lipid/100 mg dry liver weight was found compared with 11.2 for the controls.

Van Dyke and Wineman [92] found that little chloride was liberated from tetrachloroethylene in vitro by their dechlorinating enzyme system. The enzyme system was located in hepatic microsomes and required NADPH, oxygen, and a factor present in the 105,000 x g supernatant. It was inducible by phenobarbital or benzpyrene, but not by methylcholanthrene.

Cornish and Adefuin [93] studied the effect of administering ethanol to rats 16-18 hours before exposing them to different chlorinated hydrocarbons. Two groups of rats, 6 each, were treated with chlorinated hydrocarbons, and one of these groups was also pretreated with alcohol. Tetrachloroethylene exposure concentrations of 4,000 ppm for 6 hours, 5,000 ppm for 4 hours, 10,000 ppm for 2 hours, and 15,000 ppm for 2 hours were used.

Behavioral tests revealed that subjects exposed at 150 ppm showed impaired coordination after 7.5 hours of exposure. In an earlier experiment, Stewart et al [40] found that, in a group of 15 male subjects exposed for one 7-hour period at a mean concentration of 101 ppm, 60% complained of mild eye, nose, and throat irritation during the first 2 hours which subsided by the end of the experiment; 40% of the subjects felt slightly sleepy, and 25% felt lightheadedness, developed mild frontal headaches, or had some difficulty speaking.

Of the 10 cases of tetrachloroethylene intoxication reported by Lob [43] in 1957, there was one case of liver involvement. Liver cell necrosis was part of the pathologic diagnosis from an autopsy of a 33-year-old man who died from exposure to tetrachloroethylene [56]. The man had been working in a drycleaning establishment for 4 months. The investigators made two measurements of tetrachloroethylene and reported concentrations of 50 and 250 ppm. The acute exposures occurred when tetrachloroethylene periodically spilled out of the drycleaning machine and fell on hot pipes, where it vaporized. Exposure also occurred when machines were being filled with tetrachloroethylene. Bilirubin and thymol turbidity determinations were significantly affected in the 113 exposed workers studied by Franke and Eggeling [72] when compared with 43 unexposed workers.
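Daniel's two figures are mutually consistent under simple first-order kinetics: an 8-hour expiratory half-life means 48 hours is six half-lives, leaving about 1/64 of the dose unexpired. The sketch below (an illustrative one-compartment model, not part of the original analysis) shows the arithmetic.

```python
# A minimal sketch: first-order elimination with an 8-hour half-life, checking
# Daniel's report that ~97.9% of the dose appeared in expired air by 48 hours
# (48 h = 6 half-lives, so about 1/64 of the dose remains in the body).

def fraction_eliminated(hours, half_life_hours):
    """Fraction of a dose eliminated after `hours` under first-order kinetics."""
    return 1.0 - 0.5 ** (hours / half_life_hours)

print(f"{fraction_eliminated(48, 8):.1%}")  # 98.4%, close to the observed 97.9%
```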
Elevated SGOT was found in eight of nine firemen 12 days after a 3-minute exposure at an unknown concentration of tetrachloroethylene [55]. An enlarged liver and spleen were found in one man upon examination. At the time of exposure, all nine men became "woozy" for a few minutes.

Rowe et al [38] found no effects in the livers of monkeys, rabbits, and rats after repeated 7-hour exposures at concentrations up to 400 ppm. However, effects on the liver were evident in guinea pigs. After 132 7-hour exposures at 100 ppm, liver weights of female guinea pigs were increased; slight to moderate fatty infiltration of the liver was noted after exposure at 200 ppm and was more pronounced after exposure at 400 ppm; and slight cirrhosis was observed in guinea pigs exposed 7 hours/day for 169 days at 400 ppm.

# (c) Carcinogenicity, Mutagenicity, Teratogenicity

The potential of tetrachloroethylene to produce carcinogenic, mutagenic, or teratogenic effects has not been studied conclusively. The effects of tetrachloroethylene in pregnant animals and their offspring were reported in two studies [37,84]. Carpenter [37] reported that the fertility of rats increased slightly after exposure at 230 or 470 ppm for up to 7 months. Schwetz et al [84] found that mice and rats exposed at 300 ppm tetrachloroethylene for 7 hours/day on days 6 through 15 of gestation showed an increase in the relative maternal liver weights of mice, a decrease in maternal body weights of rats, and an increase in the incidence of fetal resorptions in rats. In fetal mice, there were decreased body weights and increased incidences of subcutaneous edema, delayed ossification of skull bones, and split sternebrae. Most of the effects found in the study by Schwetz et al [84] represent fetal and maternal toxicity. Only the delayed ossification of skull bones and split sternebrae could be considered as possible teratogenic effects. These occurred in mice but not in rats. The number of animals used in this study was too few to establish the significance of these findings, since the incidences of the abnormalities in large samples of these strains of rats and mice are not known. Whether these effects would occur in other species, including humans, has not been determined. This is an important requirement of further research.

# (d) Metabolism

The metabolic pathways of tetrachloroethylene are still questions for investigation. Trichloroacetic acid and trichloroethanol have been found in the urine of humans and animals.

Peak concentrations near 1,000 ppm were found in some plants [98]. Kerr [98] reported that the greatest single factor influencing tetrachloroethylene vapor concentrations in the drycleaning operations studied was ventilation. In one of the drycleaning plants, he found that a mean concentration of 100 ppm was accompanied by an air movement of 125 to 205 feet/minute, while a 270 ppm mean concentration was observed when no air movement was recorded.

Depending on the type of detector tube, the air can be drawn directly through the tube and compared with a calibration chart, or the air may be drawn into a pyrolyzer accessory prior to the detection tube [111]. In either case, the analysis is not specific for tetrachloroethylene, since liberated halogen ions produce the stain and any halogen or halogenated

The ACGIH Committee on Threshold Limits recommended, and the ACGIH adopted, a threshold limit value (TLV) of 100 ppm for tetrachloroethylene in 1947 [122,123]. No basis for this change was given. The 1947 TLV for tetrachloroethylene was restored to 200 ppm in 1953 [124]. The basis given for the change was information received from conference members that was not elaborated upon. A preface to future tables of threshold limits proposed by the TLV Committee and adopted by the ACGIH in 1953 defined the values as the "maximum average atmospheric concentration of contaminants to which workers may be exposed for an 8-hour working day without injury to health." [124] The preface was modified in 1958.

Where tetrachloroethylene was used, severe neurological disorders were observed. In the case of one man working for 4 years in a poorly ventilated environment, loss of memory, blindness in the left eye, pronounced dermographism, and vestibular dysfunction were observed upon examination. These disorders persisted even after the exposure had been discontinued. Mild central nervous disorders were found in another man 2 months after he began using tetrachloroethylene in his work. However, 6 months prior to his work with tetrachloroethylene, he had degreased metal parts with trichloroethylene. He worked with tetrachloroethylene for 9 months, enduring the symptoms until they progressed to include numbness of the fingers, difficulty walking, trembling, exaggerated dermographism, and general weakness. Slight but positive reactions for urinary urobilinogen were also found. [43]

Gold [46] reported that a man who worked for 3 years in a drycleaning operation presented symptoms of increasing fatigue, dizziness, muscle cramps, memory difficulties, and restlessness. Neurological examination revealed an absence of olfactory sensitivity, deviation of the septum, and conjunctivitis, all on the right side. Most signs and symptoms persisted during the follow-up year after exposure had ceased. Peripheral neuropathies resulting from tetrachloroethylene exposures have also been reported (E Baginsky, written communication, March 1975).

The failure to find irreversible neurologic changes in the two studies of German drycleaning plants, where concentrations ranged to 400 ppm, was consistent with studies by Rowe et al [38] of four animal species. Rats, rabbits, monkeys, and guinea pigs were exposed at up to 400 ppm for 7 hours/day, 5 days/week, for up to 6 months. No neurologic changes were observed.

There is little information on the effects of chronic exposures to tetrachloroethylene at 0-200 ppm. Studies of acute exposures show effects consistent with the action of a central nervous system depressant. EEG tracings of a subclinical central nervous system response to repeated 7.5-hour daily exposures of humans at 100 ppm have been reported [41]. There is also evidence that these changes were maintained for at least 3 days after four 5-day weeks of 7.5-hours/day exposures at average concentrations of 20-150 ppm [41]. Data were not presented or discussed for the results of EEG tracings during the week of exposure at 20 ppm of tetrachloroethylene [41]. The EEG tracings obtained with 100 ppm exposures indicated that preliminary signs of narcosis were present in most subjects exposed at 100 ppm for 7.5 hours/day [41]. The alteration of EEG tracings (increased delta wave activity) and subjective responses such as headaches, reported by Stewart et al [40,41] after exposures at 100 ppm for 7.5 hours/day, were the only effects found at this concentration.
While these alterations may be a preliminary subclinical response to tetrachloroethylene exposure, their appearance can result from various other factors unrelated to exposure. Stewart et al [41] reported no EEG alterations for subjects exposed at 20 ppm. The data on human exposures at less than 100 ppm are sparse.

This limit should prevent neurologic effects as well as eye and respiratory tract irritation. No evidence of liver damage at or near the recommended limit has been reported. Human exposures at concentrations between 100 and 300 ppm have resulted in neurologic effects such as difficulty maintaining a normal Romberg test and impaired motor coordination on the Flanagan test. To protect against temporary neurologic dysfunction, it is recommended that ceiling exposures be limited to 100 ppm as determined by sampling periods of 15 minutes.

It is recognized that many workers handle small amounts of tetrachloroethylene or work in situations where, regardless of the amounts used, there is only negligible contact with the substance. Under these conditions, it should not be necessary to comply with all of the provisions of this recommended standard. However, concern for worker health requires that protective measures be instituted below the enforceable limit to ensure that exposures stay below that limit. Therefore, environmental monitoring and recordkeeping are recommended for those work situations which involve exposure above one-half the recommended limit, to delineate work areas that do not require the expenditure of health resources for control of inhalation hazards. One-half the environmental limit has been chosen on the basis of professional judgment rather than on quantitative data that delineate nonhazardous areas from those in which a hazard definitely exists.

Tetrachloroethylene has caused abnormalities in fetal animals [84]. These included an increased incidence of subcutaneous edema, delayed ossification of skull bones, and split sternebrae in fetal mice. There was also an increased incidence of fetal resorptions in rats. In both species, exposures occurred on days 6 through 15 of gestation for 7 hours a day at 300 ppm. The significance of these findings and their applicability to human workers is still in question. However, since the effects observed in the animals studied reflect in part maternal effects that the recommended environmental limits are expected to prevent, it is not recommended that additional considerations for exposure be given to females of childbearing age. Further investigations are warranted and are discussed in Chapter VII. Additional considerations for protecting women of childbearing age will be recommended if indicated by the results of additional research.

Other information which may affect the recommended environmental limit includes the results of the National Cancer Institute study of carcinogenesis which is currently in progress. When this information is made available, it will be evaluated and the recommended standard will be revised if necessary.

# (b) Medical Monitoring

It is recommended that all workers be given preplacement and annual medical examinations. The preplacement examinations may identify workers who might be susceptible, due to predisposing conditions, to exposure to tetrachloroethylene below the recommended environmental limit. Examinations given prior to employment will provide data for evaluating workers after various lengths of exposure.
The annual medical examination has been recommended to supplement the environmental monitoring, which is not continuous. These examinations should provide additional evaluation of the effectiveness of the recommended environmental limit. The medical examinations should be general, with emphasis on the hepatic and nervous systems, which have been reported to be the most affected systems [38,43,55]. Since information on the effects of tetrachloroethylene on pregnant women and offspring is inconclusive, it is recommended that physicians report any spontaneous abortions or fetal abnormalities that might result from exposure to tetrachloroethylene. The annual schedule for the examinations will provide the opportunity for early detection of effects on the health of the workers.

# VI. WORK PRACTICES

The manufacture of tetrachloroethylene requires large amounts of chlorine [1,9]. Information on safe handling of chlorine is available [135]. Other chlorohydrocarbons may be used as starting materials or may be formed as coproducts. Caution must be taken to avoid exposure to these substances as well as any hydrocarbons which may be used as starting materials. Further information concerning specific work practices for tetrachloroethylene can be found in the Manufacturing Chemists Association Safety Data Sheet SD-24 and in various other publications [6,136,138].

Tetrachloroethylene has been found to cause effects on the skin. Prolonged or repeated exposure of skin to tetrachloroethylene should be avoided to prevent cutaneous effects. Due to the toxicity of tetrachloroethylene, processes in which it is used in large quantities should be carried out in closed systems. Well-designed hoods and ventilation systems should be used to maintain exposures at or below concentrations specified by this standard. Further protective measures include the use of personal protective equipment and clothing and purging of equipment prior to and during servicing and maintenance.

Tetrachloroethylene is a component of some insecticidal fumigants, and conventional work practice guidelines are inappropriate to protect agricultural workers from the hazards of exposure. For these uses, fumigants containing tetrachloroethylene must be used in a manner consistent with their labeling requirements. These usually specify allowable time limits before a fumigated area or space may be reentered, and safe practices for the application of the particular pesticide. Consideration must be given to the wearing of personal protective equipment, including a long-sleeved shirt, long-legged pants (or suitable coveralls), a hat, shoes, socks, and solvent-resistant gloves. Specific requirements of worker protection standards for agricultural pesticides may be found in 40 CFR 170. Where a fumigant is applied to a crop in confined storage, hazardous

One factor that affects the overall performance of demand-type (negative pressure) respirators is the variability of the face seal. Facepiece leakage is the major limitation of half-mask and quarter-mask facepieces operated with a negative pressure. For purposes of uniform regulations covering the many face sizes and shapes of the US population, NIOSH recommends that half-mask or quarter-mask facepieces operated with a negative pressure not be used for protection above 10 times the TWA limit, although the majority of wearers can obtain protection in atmospheres of higher tetrachloroethylene concentrations. On the same basis, NIOSH recommends that the full facepiece, operated with negative pressure, may be used up to 50 times the TWA limit.
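The arithmetic behind these facepiece limits is simply maximum use concentration = assigned protection factor x TWA limit. A minimal sketch follows; the TWA value in it is a placeholder, since the numeric TWA limit is specified elsewhere in this document, while the 10x and 50x protection factors are those given above.

```python
# A minimal sketch (the TWA value below is a placeholder, not the limit
# specified by this document): maximum use concentration for a
# negative-pressure facepiece = assigned protection factor * TWA limit.

twa_limit_ppm = 50  # placeholder; substitute the recommended TWA limit

protection_factors = {
    "half- or quarter-mask, negative pressure": 10,
    "full facepiece, negative pressure": 50,
}

for facepiece, factor in protection_factors.items():
    print(f"{facepiece}: usable up to {factor * twa_limit_ppm} ppm")
```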
# VII. RESEARCH NEEDS

There is a need for data concerning the long-range effects of chronic exposures to tetrachloroethylene at or below the recommended environmental limit. Of particular concern is the area of behavioral and neurological effects. A preliminary study showed that certain behavioral and neurological tests will reflect tetrachloroethylene exposure [50]. While the results of the study are inconclusive due to the small number of subjects and differences between exposed and unexposed groups, the approach is valid and should be studied further. Additional research could be conclusive if the number of subjects is increased and the experiment is more rigorously designed.

The sampling record should include:
(1) The date and time of sample collection.
(2) Sampling duration.
(3) Total sample volume.

(2) The smaller section of charcoal is used as a reserve and should be positioned nearest the sampling pump.
(3) The charcoal tube should be placed in a vertical position during sampling.
(4) Tubing may be used to connect the back of the tube to the pump, but air being sampled should not be passed through any hose or tubing before entering the charcoal tube.
(5) The sample can be taken at flowrates of 25-1,000

Chemical substances should be listed according to their complete name derived from a recognized system of nomenclature. Where possible, avoid using common names and general class names such as "aromatic amine," "safety solvent," or "aliphatic hydrocarbon" when the specific name is known. The "%" may be the approximate percentage by weight or volume (indicate basis) which each hazardous ingredient of the mixture bears to the whole mixture. This may be indicated as a range or maximum amount, i.e., "10-40% vol" or "10% max wt," to avoid disclosure of trade secrets. Toxic hazard data shall be stated in terms of concentration, mode of exposure or test, and animal used, i.e., "100 ppm LC50 rat," "25 mg/kg LD50."

Gas chromatographic operating conditions:
(1) 40 ml/min (70 psig) helium carrier gas flow.
(2) 65 ml/min (24 psig) hydrogen gas flow to detector.
(3) 500 ml/min (50 psig) airflow to detector.

[MSDS form fields: specific gravity (H2O = 1); vapor pressure; vapor density (air = 1); solubility in H2O, % by wt; % volatiles by vol; evaporation rate (butyl acetate = 1); appearance and odor; human data]
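To tie the sampling and analytical parameters above together, the sketch below applies the standard industrial-hygiene conversion for vapors at 25 degrees C and 1 atm, mg/m3 = ppm x MW / 24.45, and estimates the mass of tetrachloroethylene collected on a charcoal tube. The pump flowrate and sampling duration are illustrative assumptions, not values prescribed by this appendix.

```python
# A minimal sketch (illustrative sampling values, not prescribed here):
# estimate the tetrachloroethylene mass loaded onto a charcoal tube.
MW_PCE = 165.83       # g/mol, tetrachloroethylene (C2Cl4)
MOLAR_VOLUME = 24.45  # L/mol for an ideal gas at 25 degrees C and 1 atm

def ppm_to_mg_per_m3(ppm, molecular_weight):
    """Standard conversion for vapors at 25 degrees C and 1 atm."""
    return ppm * molecular_weight / MOLAR_VOLUME

conc = ppm_to_mg_per_m3(100, MW_PCE)         # ~678 mg/m3 at the 100 ppm ceiling
flow_l_per_min, minutes = 0.2, 50            # assumed: 200 ml/min for 50 minutes
volume_m3 = flow_l_per_min * minutes / 1000  # 0.01 m3 (10 liters) sampled
print(f"{conc:.0f} mg/m3 -> {conc * volume_m3:.1f} mg collected")  # ~6.8 mg
```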
# Executive Summary

The Guideline for Disinfection and Sterilization in Healthcare Facilities, 2008, presents evidence-based recommendations on the preferred methods for cleaning, disinfection, and sterilization of patient-care medical devices and for cleaning and disinfecting the healthcare environment. This document supersedes the relevant sections contained in the 1985 Centers for Disease Control (CDC) Guideline for Handwashing and Environmental Control. 1 Because maximum effectiveness from disinfection and sterilization results from first cleaning and removing organic and inorganic materials, this document also reviews cleaning methods. The chemical disinfectants discussed for patient-care equipment include alcohols, glutaraldehyde, formaldehyde, hydrogen peroxide, iodophors, ortho-phthalaldehyde, peracetic acid, phenolics, quaternary ammonium compounds, and chlorine. The choice of disinfectant, concentration, and exposure time is based on the risk for infection associated with use of the equipment and other factors discussed in this guideline. The sterilization methods discussed include steam sterilization, ethylene oxide (ETO), hydrogen peroxide gas plasma, and liquid peracetic acid. When properly used, these cleaning, disinfection, and sterilization processes can reduce the risk for infection associated with use of invasive and noninvasive medical and surgical devices. However, for these processes to be effective, health-care workers should adhere strictly to the cleaning, disinfection, and sterilization recommendations in this document and to instructions on product labels.

In addition to updated recommendations, new topics addressed in this guideline include:
1. inactivation of antibiotic-resistant bacteria, bioterrorist agents, emerging pathogens, and bloodborne pathogens;
2. toxicologic, environmental, and occupational concerns associated with disinfection and sterilization practices;
3. disinfection of patient-care equipment used in ambulatory settings and home care;
4. new sterilization processes, such as hydrogen peroxide gas plasma and liquid peracetic acid; and
5. disinfection of complex medical instruments (e.g., endoscopes).

# Introduction

In the United States, approximately 46.5 million surgical procedures and even more invasive medical procedures (including approximately 5 million gastrointestinal endoscopies) are performed each year. 2 Each procedure involves contact by a medical device or surgical instrument with a patient's sterile tissue or mucous membranes. A major risk of all such procedures is the introduction of pathogens that can lead to infection. Failure to properly disinfect or sterilize equipment carries not only risk associated with breach of host barriers but also risk for person-to-person transmission (e.g., hepatitis B virus) and transmission of environmental pathogens (e.g., Pseudomonas aeruginosa).

Disinfection and sterilization are essential for ensuring that medical and surgical instruments do not transmit infectious pathogens to patients. Because sterilization of all patient-care items is not necessary, health-care policies must identify, primarily on the basis of the items' intended use, whether cleaning, disinfection, or sterilization is indicated. Multiple studies in many countries have documented lack of compliance with established guidelines for disinfection and sterilization. Failure to comply with scientifically-based guidelines has led to numerous outbreaks.
This guideline presents a pragmatic approach to the judicious selection and proper use of disinfection and sterilization processes; the approach is based on well-designed studies assessing the efficacy (through laboratory investigations) and effectiveness (through clinical studies) of disinfection and sterilization procedures.

# Methods

This guideline resulted from a review of all MEDLINE articles in English listed under the MeSH headings of disinfection or sterilization (focusing on health-care equipment and supplies) from January 1980 through August 2006. References listed in these articles also were reviewed. Selected articles published before 1980 were reviewed and, if still relevant, included in the guideline. The three major peer-reviewed journals in infection control (American Journal of Infection Control, Infection Control and Hospital Epidemiology, and Journal of Hospital Infection) were searched for relevant articles published from January 1990 through August 2006. Abstracts presented at the annual meetings of the Society for Healthcare Epidemiology of America and the Association for Professionals in Infection Control and Epidemiology, Inc. during 1997-2006 also were reviewed; however, abstracts were not used to support the recommendations.

# Definition of Terms

Sterilization describes a process that destroys or eliminates all forms of microbial life and is carried out in health-care facilities by physical or chemical methods. Steam under pressure, dry heat, ETO gas, hydrogen peroxide gas plasma, and liquid chemicals are the principal sterilizing agents used in health-care facilities. Sterilization is intended to convey an absolute meaning; unfortunately, however, some health professionals and the technical and commercial literature refer to "disinfection" as "sterilization" and to items as "partially sterile." When chemicals are used to destroy all forms of microbiologic life, they can be called chemical sterilants. These same germicides used for shorter exposure periods also can be part of the disinfection process (i.e., high-level disinfection).

Disinfection describes a process that eliminates many or all pathogenic microorganisms, except bacterial spores, on inanimate objects (Tables 1 and 2). In health-care settings, objects usually are disinfected by liquid chemicals or wet pasteurization. Each of the various factors that affect the efficacy of disinfection can nullify or limit the efficacy of the process. Factors that affect the efficacy of both disinfection and sterilization include prior cleaning of the object; organic and inorganic load present; type and level of microbial contamination; concentration of and exposure time to the germicide; physical nature of the object (e.g., crevices, hinges, and lumens); presence of biofilms; temperature and pH of the disinfection process; and, in some cases, relative humidity of the sterilization process (e.g., ethylene oxide).

Unlike sterilization, disinfection is not sporicidal. A few disinfectants will kill spores with prolonged exposure times (3-12 hours); these are called chemical sterilants. At similar concentrations but with shorter exposure periods (e.g., 20 minutes for 2% glutaraldehyde), these same disinfectants will kill all microorganisms except large numbers of bacterial spores; they are called high-level disinfectants. Low-level disinfectants can kill most vegetative bacteria, some fungi, and some viruses in a practical period of time (≤10 minutes).
Intermediate-level disinfectants might be cidal for mycobacteria, vegetative bacteria, most viruses, and most fungi but do not necessarily kill bacterial spores. Germicides differ markedly, primarily in their antimicrobial spectrum and rapidity of action.

Cleaning is the removal of visible soil (e.g., organic and inorganic material) from objects and surfaces and normally is accomplished manually or mechanically using water with detergents or enzymatic products. Thorough cleaning is essential before high-level disinfection and sterilization because inorganic and organic materials that remain on the surfaces of instruments interfere with the effectiveness of these processes. Decontamination removes pathogenic microorganisms from objects so they are safe to handle, use, or discard.

Terms with the suffix cide or cidal for killing action also are commonly used. For example, a germicide is an agent that can kill microorganisms, particularly pathogenic organisms ("germs"). The term germicide includes both antiseptics and disinfectants. Antiseptics are germicides applied to living tissue and skin; disinfectants are antimicrobials applied only to inanimate objects. In general, antiseptics are used only on the skin and not for surface disinfection, and disinfectants are not used for skin antisepsis because they can injure skin and other tissues. Virucide, fungicide, bactericide, sporicide, and tuberculocide can kill the type of microorganism identified by the prefix. For example, a bactericide is an agent that kills bacteria.

# A Rational Approach to Disinfection and Sterilization

More than 30 years ago, Earle H. Spaulding devised a rational approach to disinfection and sterilization of patient-care items and equipment. 14 This classification scheme is so clear and logical that it has been retained, refined, and successfully used by infection control professionals and others when planning methods for disinfection or sterilization. 1,13,15,17,19,20 Spaulding believed the nature of disinfection could be understood readily if instruments and items for patient care were categorized as critical, semicritical, and noncritical according to the degree of risk for infection involved in use of the items. The CDC Guideline for Handwashing and Hospital Environmental Control 21, Guidelines for the Prevention of Transmission of Human Immunodeficiency Virus (HIV) and Hepatitis B Virus (HBV) to Health-Care and Public-Safety Workers 22, and Guideline for Environmental Infection Control in Health-Care Facilities 23 employ this terminology.

# Critical Items

Critical items confer a high risk for infection if they are contaminated with any microorganism. Thus, objects that enter sterile tissue or the vascular system must be sterile because any microbial contamination could transmit disease. This category includes surgical instruments, cardiac and urinary catheters, implants, and ultrasound probes used in sterile body cavities. Most of the items in this category should be purchased as sterile or be sterilized with steam if possible. Heat-sensitive objects can be treated with EtO or hydrogen peroxide gas plasma or, if other methods are unsuitable, with liquid chemical sterilants. Germicides categorized as chemical sterilants include ≥2.4% glutaraldehyde-based formulations, 0.95% glutaraldehyde with 1.64% phenol/phenate, 7.5% stabilized hydrogen peroxide, 7.35% hydrogen peroxide with 0.23% peracetic acid, 0.2% peracetic acid, and 0.08% peracetic acid with 1.0% hydrogen peroxide.
Liquid chemical sterilants reliably produce sterility only if cleaning precedes treatment and if proper guidelines are followed regarding concentration, contact time, temperature, and pH.

# Semicritical Items

Semicritical items contact mucous membranes or nonintact skin. This category includes respiratory therapy and anesthesia equipment, some endoscopes, laryngoscope blades 24, esophageal manometry probes, cystoscopes 25, anorectal manometry catheters, and diaphragm fitting rings. These medical devices should be free from all microorganisms; however, small numbers of bacterial spores are permissible. Intact mucous membranes, such as those of the lungs and the gastrointestinal tract, generally are resistant to infection by common bacterial spores but susceptible to other organisms, such as bacteria, mycobacteria, and viruses. Semicritical items minimally require high-level disinfection using chemical disinfectants. Glutaraldehyde, hydrogen peroxide, ortho-phthalaldehyde, and peracetic acid with hydrogen peroxide are cleared by the Food and Drug Administration (FDA) and are dependable high-level disinfectants provided the factors influencing germicidal procedures are met (Table 1). When a disinfectant is selected for use with certain patient-care items, the chemical compatibility after extended use with the items to be disinfected also must be considered. High-level disinfection traditionally is defined as complete elimination of all microorganisms in or on an instrument, except for small numbers of bacterial spores. The FDA definition of high-level disinfection is a sterilant used for a shorter contact time to achieve a 6-log10 kill of an appropriate Mycobacterium species. Cleaning followed by high-level disinfection should eliminate enough pathogens to prevent transmission of infection. 26,27

Laparoscopes and arthroscopes entering sterile tissue ideally should be sterilized between patients. However, in the United States, this equipment sometimes undergoes only high-level disinfection between patients. As with flexible endoscopes, these devices can be difficult to clean and high-level disinfect or sterilize because of intricate device design (e.g., long narrow lumens, hinges). Meticulous cleaning must precede any high-level disinfection or sterilization process. Although sterilization is preferred, no reports have been published of outbreaks resulting from high-level disinfection of these scopes when they are properly cleaned and high-level disinfected. Newer models of these instruments can withstand steam sterilization, which for critical items would be preferable to high-level disinfection.

Rinsing endoscopes and flushing channels with sterile water, filtered water, or tap water will prevent adverse effects associated with disinfectant retained in the endoscope (e.g., disinfectant-induced colitis). Items can be rinsed and flushed using sterile water after high-level disinfection to prevent contamination with organisms in tap water, such as nontuberculous mycobacteria, 10,31,32 Legionella, or gram-negative bacilli such as Pseudomonas. 1,17 Alternatively, a tap-water or filtered-water (0.2-µm filter) rinse should be followed by an alcohol rinse and forced-air drying. 28 Forced-air drying markedly reduces bacterial contamination of stored endoscopes, most likely by removing the wet environment favorable for bacterial growth. 39
After rinsing, items should be dried and stored (e.g., packaged) in a manner that protects them from recontamination. Some items that may come in contact with nonintact skin for a brief period of time (i.e., hydrotherapy tanks, bedside rails) are usually considered noncritical surfaces and are disinfected with intermediate-level disinfectants (i.e., phenolic, iodophor, alcohol, chlorine) 23. Because hydrotherapy tanks have been associated with spread of infection, some facilities have chosen to disinfect them with recommended levels of chlorine 23,41.

In the past, high-level disinfection was recommended for mouthpieces and spirometry tubing (e.g., with glutaraldehyde), but cleaning the interior surfaces of the spirometers was considered unnecessary. 42 This recommendation was based on a study showing that although mouthpieces and spirometry tubing become contaminated with microorganisms, the surfaces inside the spirometers showed no bacterial contamination. Filters have been used to prevent contamination of this equipment distal to the filter; such filters and the proximal mouthpiece are changed between patients.

# Noncritical Items

Noncritical items are those that come in contact with intact skin but not mucous membranes. Intact skin acts as an effective barrier to most microorganisms; therefore, the sterility of items coming in contact with intact skin is "not critical." In this guideline, noncritical items are divided into noncritical patient-care items and noncritical environmental surfaces 43,44. Examples of noncritical patient-care items are bedpans, blood pressure cuffs, crutches, and computers 45. In contrast to critical and some semicritical items, most noncritical reusable items may be decontaminated where they are used and do not need to be transported to a central processing area. Virtually no risk has been documented for transmission of infectious agents to patients through noncritical items 37 when they are used as noncritical items and do not contact nonintact skin and/or mucous membranes.

Table 1 lists several low-level disinfectants that may be used for noncritical items. Most Environmental Protection Agency (EPA)-registered disinfectants have a 10-minute label claim. However, multiple investigators have demonstrated the effectiveness of these disinfectants against vegetative bacteria (e.g., Listeria, Escherichia coli, Salmonella, vancomycin-resistant Enterococci, methicillin-resistant Staphylococcus aureus), yeasts (e.g., Candida), mycobacteria (e.g., Mycobacterium tuberculosis), and viruses (e.g., poliovirus) at exposure times of 30-60 seconds. Federal law requires all applicable label instructions on EPA-registered products to be followed (e.g., use-dilution, shelf life, storage, material compatibility, safe use, and disposal). If the user selects exposure conditions (e.g., exposure time) that differ from those on the EPA-registered product's label, the user assumes liability for any injuries resulting from off-label use and is potentially subject to enforcement action under the Federal Insecticide, Fungicide, and Rodenticide Act (FIFRA) 65.

Noncritical environmental surfaces include bed rails, some food utensils, bedside tables, patient furniture, and floors. Noncritical environmental surfaces frequently touched by hand (e.g., bedside tables, bed rails) potentially could contribute to secondary transmission by contaminating hands of health-care workers or by contacting medical equipment that subsequently contacts patients 13,46-48,51,66,67.
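The decision logic of the Spaulding scheme summarized in the preceding subsections lends itself to a simple lookup. The sketch below is illustrative only: the categories, contact sites, minimum processing levels, and examples come from this guideline, while the data structure and function are hypothetical conveniences.

```python
# Illustrative encoding of the Spaulding classification described above.
# The categories and examples are from this guideline; the structure and
# function are hypothetical, not part of any cited standard.

SPAULDING = {
    "critical": {
        "contact": "sterile tissue or the vascular system",
        "minimum_processing": "sterilization",
        "examples": ["surgical instruments", "cardiac catheters", "implants"],
    },
    "semicritical": {
        "contact": "mucous membranes or nonintact skin",
        "minimum_processing": "high-level disinfection",
        "examples": ["flexible endoscopes", "laryngoscope blades",
                     "anesthesia equipment"],
    },
    "noncritical": {
        "contact": "intact skin only",
        "minimum_processing": "low-level disinfection",
        "examples": ["blood pressure cuffs", "bedpans", "crutches"],
    },
}

def minimum_processing(category: str) -> str:
    """Return the minimum reprocessing level for a Spaulding category."""
    return SPAULDING[category]["minimum_processing"]

print(minimum_processing("semicritical"))  # -> high-level disinfection
```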
Mops and reusable cleaning cloths are regularly used to achieve low-level disinfection on environmental surfaces. However, they often are not adequately cleaned and disinfected, and if the water-disinfectant mixture is not changed regularly (e.g., after every three to four rooms, at no longer than 60-minute intervals), the mopping procedure actually can spread heavy microbial contamination throughout the health-care facility 68. In one study, standard laundering provided acceptable decontamination of heavily contaminated mopheads, but chemical disinfection with a phenolic was less effective. 68 Frequent laundering of mops (e.g., daily), therefore, is recommended. Single-use disposable towels impregnated with a disinfectant also can be used for low-level disinfection when spot-cleaning of noncritical surfaces is needed 45.

# Changes in Disinfection and Sterilization Since 1981

The Table in the CDC Guideline for Environmental Control prepared in 1981 as a guide to the appropriate selection and use of disinfectants has undergone several important changes (Table 1). 15 First, formaldehyde-alcohol has been deleted as a recommended chemical sterilant or high-level disinfectant because it is irritating and toxic and not commonly used. Second, several new chemical sterilants have been added, including hydrogen peroxide, peracetic acid 58,69,70, and peracetic acid and hydrogen peroxide in combination. Third, 3% phenolics and iodophors have been deleted as high-level disinfectants because of their unproven efficacy against bacterial spores, M. tuberculosis, and/or some fungi. 55,71 Fourth, isopropyl alcohol and ethyl alcohol have been excluded as high-level disinfectants 15 because of their inability to inactivate bacterial spores and because of the inability of isopropyl alcohol to inactivate hydrophilic viruses (i.e., poliovirus, coxsackievirus). 72 Fifth, a 1:16 dilution of 2.0% glutaraldehyde-7.05% phenol-1.20% sodium phenate (which contained 0.125% glutaraldehyde, 0.440% phenol, and 0.075% sodium phenate when diluted) has been deleted as a high-level disinfectant because this product was removed from the marketplace in December 1991 because of a lack of bactericidal activity in the presence of organic matter; a lack of fungicidal, tuberculocidal, and sporicidal activity; and reduced virucidal activity. 49,55,56,71 Sixth, the exposure time required to achieve high-level disinfection has been changed from 10-30 minutes to 12 minutes or more, depending on the FDA-cleared label claim and the scientific literature. 27,55,69,76 A glutaraldehyde and an ortho-phthalaldehyde have an FDA-cleared label claim of 5 minutes when used at 35°C and 25°C, respectively, in an automated endoscope reprocessor with FDA-cleared capability to maintain the solution at the appropriate temperature. 85

In addition, many new subjects have been added to the guideline. These include inactivation of emerging pathogens, bioterrorist agents, and bloodborne pathogens; toxicologic, environmental, and occupational concerns associated with disinfection and sterilization practices; disinfection of patient-care equipment used in ambulatory and home care; inactivation of antibiotic-resistant bacteria; new sterilization processes, such as hydrogen peroxide gas plasma and liquid peracetic acid; and disinfection of complex medical instruments (e.g., endoscopes).

# Disinfection of Healthcare Equipment

# Concerns about Implementing the Spaulding Scheme

One problem with implementing the aforementioned scheme is oversimplification.
For example, the scheme does not consider problems with reprocessing of complicated medical equipment that often is heat-sensitive or problems of inactivating certain types of infectious agents (e.g., prions, such as the Creutzfeldt-Jakob disease agent). Thus, in some situations, choosing a method of disinfection remains difficult, even after consideration of the categories of risk to patients. This is true particularly for a few medical devices (e.g., arthroscopes, laparoscopes) in the critical category because of controversy about whether they should be sterilized or high-level disinfected. 28,86 Heat-stable scopes (e.g., many rigid scopes) should be steam sterilized. Some of these items cannot be steam sterilized because they are heat-sensitive; additionally, sterilization using ethylene oxide (EtO) can be too time-consuming for routine use between patients (new technologies, such as hydrogen peroxide gas plasma and the peracetic acid reprocessor, provide faster cycle times). However, evidence that sterilization of these items improves patient care by reducing the infection risk is lacking 29. Many newer models of these instruments can withstand steam sterilization, which for critical items is the preferred method.

Another problem with implementing the Spaulding scheme is processing of an instrument in the semicritical category (e.g., endoscope) that would be used in conjunction with a critical instrument that contacts sterile body tissues. For example, is an endoscope used for upper gastrointestinal tract investigation still a semicritical item when used with sterile biopsy forceps or in a patient who is bleeding heavily from esophageal varices? Provided that high-level disinfection is achieved, and all microorganisms except bacterial spores have been removed from the endoscope, the device should not represent an infection risk and should remain in the semicritical category. Infection with spore-forming bacteria has not been reported from appropriately high-level disinfected endoscopes.

An additional problem with implementation of the Spaulding system is that the optimal contact time for high-level disinfection has not been defined or varies among professional organizations, resulting in different strategies for disinfecting different types of semicritical items (e.g., endoscopes, applanation tonometers, endocavitary transducers, cryosurgical instruments, and diaphragm fitting rings). Until simpler, effective alternatives are identified for device disinfection in clinical settings, following this guideline, other CDC guidelines 1,22,95,96, and FDA-cleared instructions for the liquid chemical sterilants/high-level disinfectants would be prudent.

# Reprocessing of Endoscopes

Physicians use endoscopes to diagnose and treat numerous medical disorders. Even though endoscopes represent a valuable diagnostic and therapeutic tool in modern medicine and the incidence of infection associated with their use reportedly is very low (about 1 in 1.8 million procedures) 97, more healthcare-associated outbreaks have been linked to contaminated endoscopes than to any other medical device 6-8,12,98. To prevent the spread of health-care-associated infections, all heat-sensitive endoscopes (e.g., gastrointestinal endoscopes, bronchoscopes, nasopharyngoscopes) must be properly cleaned and, at a minimum, subjected to high-level disinfection after each use.
High-level disinfection can be expected to destroy all microorganisms, although when high numbers of bacterial spores are present, a few spores might survive. Because of the types of body cavities they enter, flexible endoscopes acquire high levels of microbial contamination (bioburden) during each use 99. For example, the bioburden found on flexible gastrointestinal endoscopes after use has ranged from 10⁵ colony-forming units (CFU)/mL to 10¹⁰ CFU/mL, with the highest levels found in the suction channels. The average load on bronchoscopes before cleaning was 6.4 × 10⁴ CFU/mL. Cleaning reduces the level of microbial contamination by 4-6 log10 83,103. Using human immunodeficiency virus (HIV)-contaminated endoscopes, several investigators have shown that cleaning completely eliminates the microbial contamination on the scopes 104,105. Similarly, other investigators found that EtO sterilization or soaking in 2% glutaraldehyde for 20 minutes was effective only when the device first was properly cleaned 106.

FDA maintains a list of cleared liquid chemical sterilants and high-level disinfectants that can be used to reprocess heat-sensitive medical devices, such as flexible endoscopes. At this time, the FDA-cleared and marketed formulations include: ≥2.4% glutaraldehyde, 0.55% ortho-phthalaldehyde (OPA), 0.95% glutaraldehyde with 1.64% phenol/phenate, 7.35% hydrogen peroxide with 0.23% peracetic acid, 1.0% hydrogen peroxide with 0.08% peracetic acid, and 7.5% hydrogen peroxide 85. These products have excellent antimicrobial activity; however, some oxidizing chemicals (e.g., 7.5% hydrogen peroxide, and 1.0% hydrogen peroxide with 0.08% peracetic acid) reportedly have caused cosmetic and functional damage to endoscopes 69. Users should check with device manufacturers for information about germicide compatibility with their device. If the germicide is FDA-cleared, then it is safe when used according to label directions; however, professionals should review the scientific literature for newly available data regarding human safety or materials compatibility. EtO sterilization of flexible endoscopes is infrequent because it requires a lengthy processing and aeration time (e.g., 12 hours) and is a potential hazard to staff and patients. The two products most commonly used for reprocessing endoscopes in the United States are glutaraldehyde and an automated, liquid chemical sterilization process that uses peracetic acid 107. The American Society for Gastrointestinal Endoscopy (ASGE) recommends glutaraldehyde solutions that do not contain surfactants because the soapy residues of surfactants are difficult to remove during rinsing 108. ortho-Phthalaldehyde has begun to replace glutaraldehyde in many health-care facilities because it has several potential advantages over glutaraldehyde: it is not known to irritate the eyes and nasal passages, does not require activation or exposure monitoring, and has a 12-minute high-level disinfection claim in the United States 69. Disinfectants that are not FDA-cleared and that should not be used for reprocessing endoscopes include iodophors, chlorine solutions, alcohols, quaternary ammonium compounds, and phenolics. These solutions might still be in use outside the United States, but their use should be strongly discouraged because of lack of proven efficacy against all microorganisms or materials incompatibility.

FDA clearance of the contact conditions listed on germicide labeling is based on the manufacturer's test results.
Manufacturers test the product under worst-case conditions for germicide formulation (i.e., minimum recommended concentration of the active ingredient) and include organic soil. Typically, manufacturers use 5% serum as the organic soil and hard water as examples of organic and inorganic challenges. The soil represents the organic loading to which the device is exposed during actual use and that would remain on the device in the absence of cleaning. This method ensures that the contact conditions completely eliminate the test mycobacteria (e.g., 10⁵ to 10⁶ Mycobacterium tuberculosis in organic soil and dried on a scope) if inoculated in the most difficult areas for the disinfectant to penetrate and contact in the absence of cleaning, and thus provides a margin of safety 109. For example, 2.4% glutaraldehyde requires a 45-minute immersion at 25°C to achieve high-level disinfection (i.e., 100% kill of M. tuberculosis). FDA itself does not conduct testing but relies solely on the disinfectant manufacturer's data.

Data suggest that M. tuberculosis levels can be reduced by at least 8 log10 with cleaning (4 log10) 83,101,102,110, followed by chemical disinfection for 20 minutes at 20°C (4 to 6 log10) 83,93,111,112. On the basis of these data, APIC 113, the Society of Gastroenterology Nurses and Associates (SGNA) 38,114,115, the ASGE 108, the American College of Chest Physicians 12, and a multi-society guideline 116 recommend alternative contact conditions with 2% glutaraldehyde to achieve high-level disinfection (e.g., that equipment be immersed in 2% glutaraldehyde at 20°C for at least 20 minutes for high-level disinfection). Federal regulations are to follow the FDA-cleared label claim for high-level disinfectants. The FDA-cleared labels for high-level disinfection with >2% glutaraldehyde at 25°C range from 20-90 minutes, depending upon the product, based on three-tier testing that includes AOAC sporicidal tests, simulated-use testing with mycobacteria, and in-use testing. The studies supporting the efficacy of >2% glutaraldehyde for 20 minutes at 20°C assume adequate cleaning prior to disinfection, whereas the FDA-cleared label claim incorporates an added margin of safety to accommodate possible lapses in cleaning practices. Facilities that have chosen to apply the 20-minute duration at 20°C have done so based on the IA recommendation in the July 2003 SHEA position paper, "Multi-society Guideline for Reprocessing Flexible Gastrointestinal Endoscopes" 19,57,83,94,108,111. See also the Multisociety guideline on reprocessing flexible gastrointestinal endoscopes: 2011.

Flexible endoscopes are particularly difficult to disinfect 122 and easy to damage because of their intricate design and delicate materials. 123 Meticulous cleaning must precede any sterilization or high-level disinfection of these instruments. Failure to perform good cleaning can result in sterilization or disinfection failure, and outbreaks of infection can occur. Several studies have demonstrated the importance of cleaning in experimental studies with the duck hepatitis B virus (HBV) 106,124, HIV 125, and Helicobacter pylori. 126

An examination of health-care-associated infections related only to endoscopes through July 1992 found 281 infections transmitted by gastrointestinal endoscopy and 96 transmitted by bronchoscopy. The clinical spectrum ranged from asymptomatic colonization to death.
Salmonella species and Pseudomonas aeruginosa repeatedly were identified as causative agents of infections transmitted by gastrointestinal endoscopy, and M. tuberculosis, atypical mycobacteria, and P. aeruginosa were the most common causes of infections transmitted by bronchoscopy 12. Major reasons for transmission were inadequate cleaning, improper selection of a disinfecting agent, failure to follow recommended cleaning and disinfection procedures 6,8,37,98, and flaws in endoscope design 127,128 or automated endoscope reprocessors. 7,98 Failure to follow established guidelines has continued to result in infections associated with gastrointestinal endoscopes 8 and bronchoscopes 7,12. Potential device-associated problems should be reported to the FDA Center for Devices and Radiological Health. One multistate investigation found that 23.9% of the bacterial cultures from the internal channels of 71 gastrointestinal endoscopes grew ≥100,000 colonies of bacteria after completion of all disinfection and sterilization procedures and before use on the next patient (nine of 25 facilities were using a product that has been removed from the marketplace, a product that is not FDA-cleared as a high-level disinfectant, or no disinfecting agent at all) 129. The incidence of postendoscopic procedure infections from an improperly processed endoscope has not been rigorously assessed.

Automated endoscope reprocessors (AERs) offer several advantages over manual reprocessing: they automate and standardize several important reprocessing steps, reduce the likelihood that an essential reprocessing step will be skipped, and reduce personnel exposure to high-level disinfectants or chemical sterilants. Failure of AERs has been linked to outbreaks of infections 133 or colonization 7,134, and the AER water filtration system might not be able to reliably provide "sterile" or bacteria-free rinse water 135,136. Establishment of correct connectors between the AER and the device is critical to ensure complete flow of disinfectants and rinse water 7,137. In addition, some endoscopes, such as the duodenoscopes used for endoscopic retrograde cholangiopancreatography, contain features (e.g., elevator-wire channel) that require a flushing pressure that is not achieved by most AERs and must be reprocessed manually using a 2- to 5-mL syringe, until new duodenoscopes equipped with a wider elevator channel that AERs can reliably reprocess become available 132.

Outbreaks involving removable endoscope parts 138,139, such as suction valves, and endoscopic accessories designed to be inserted through flexible endoscopes, such as biopsy forceps, emphasize the importance of cleaning to remove all foreign matter before high-level disinfection or sterilization. 140 Some types of valves are now available as single-use, disposable products (e.g., bronchoscope valves) or steam-sterilizable products (e.g., gastrointestinal endoscope valves).

AERs need further development and redesign 7,141, as do endoscopes 123,142, so that they do not represent a potential source of infectious agents. Endoscopes employing disposable components (e.g., protective barrier devices or sheaths) might provide an alternative to conventional liquid chemical high-level disinfection/sterilization 143,144. Another new technology is a swallowable camera-in-a-capsule that travels through the digestive tract and transmits color pictures of the small intestine to a receiver worn outside the body. This capsule currently does not replace colonoscopies.
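The margin-of-safety arithmetic behind these reprocessing recommendations can be made concrete with the figures cited earlier in this section (a worst-case bioburden of 10¹⁰ CFU/mL, cleaning removing 4-6 log10, and high-level disinfection adding another 4-6 log10). The sketch below is a worked illustration of that arithmetic, not a calculation that appears in the guideline; the conservative 4-log10 values are assumptions taken from the low end of the cited ranges.

```python
import math

def residual_log10(initial_cfu: float, *reductions: float) -> float:
    """Residual contamination, in log10 CFU, after sequential log10
    reductions (a negative result means fewer than one organism remains)."""
    return math.log10(initial_cfu) - sum(reductions)

# Worst-case bioburden reported for flexible GI endoscopes after use.
bioburden = 1e10  # CFU/mL

cleaning = 4.0      # cleaning alone removes 4-6 log10 (low end assumed)
disinfection = 4.0  # 20 min in 2% glutaraldehyde at 20 C adds 4-6 log10

print(f"Residual load: 10^{residual_log10(bioburden, cleaning, disinfection):.0f} CFU/mL")
# -> 10^2 CFU/mL in this deliberately worst-case arithmetic. A more typical
# starting bioburden of 10^5 CFU/mL would be driven far below one viable
# organism per mL by the same two steps.
```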
Published recommendations for cleaning and disinfecting endoscopic equipment should be strictly followed 12,38,108. Unfortunately, audits have shown that personnel do not consistently adhere to guidelines on reprocessing, and outbreaks of infection continue to occur. To ensure reprocessing personnel are properly trained, each person who reprocesses endoscopic instruments should receive initial and annual competency testing 38,155.

In general, endoscope disinfection or sterilization with a liquid chemical sterilant involves five steps after leak testing (the sequence is also sketched as a checklist at the end of this passage):
1. Clean: mechanically clean internal and external surfaces, including brushing internal channels and flushing each internal channel with water and a detergent or enzymatic cleaner (leak testing is recommended for endoscopes before immersion).
2. Disinfect: immerse the endoscope in high-level disinfectant (or chemical sterilant) and perfuse disinfectant into all accessible channels, such as the suction/biopsy channel and air/water channel (perfusion eliminates air pockets and ensures contact of the germicide with the internal channels), and expose for the time recommended for the specific product.
3. Rinse: rinse the endoscope and all channels with sterile water, filtered water (commonly used with AERs), or tap water (i.e., high-quality potable water that meets federal clean water standards at the point of use).
4. Dry: rinse the insertion tube and inner channels with alcohol, and dry with forced air after disinfection and before storage.
5. Store: store the endoscope in a way that prevents recontamination and promotes drying (e.g., hung vertically).

Drying the endoscope (steps 3 and 4) is essential to greatly reduce the chance of recontamination of the endoscope by microorganisms that can be present in the rinse water 116,156. One study demonstrated that reprocessed endoscopes (i.e., air/water channel, suction/biopsy channel) generally were negative (100% after 24 hours; 90% after 7 days) for bacterial growth when stored by hanging vertically in a ventilated cabinet 157. Other investigators found all endoscopes were bacteria-free immediately after high-level disinfection, and only four of 135 scopes were positive during the subsequent 5-day assessment (skin bacteria cultured from endoscope surfaces). All flush-through samples remained sterile 158. Because tap water can contain low levels of microorganisms 159, some researchers have suggested that only sterile water (which can be prohibitively expensive) 160 or AER filtered water be used. The suggestion to use only sterile water or filtered water is not consistent with published guidelines that allow tap water with an alcohol rinse and forced-air drying 38,108,113 or with the scientific literature. 39,93 In addition, no evidence of disease transmission has been found when a tap water rinse is followed by an alcohol rinse and forced-air drying. AERs produce filtered water by passage through a bacterial filter (e.g., 0.2 µm). Filtered rinse water was identified as a source of bacterial contamination in a study that cultured the accessory and suction channels of endoscopes and the internal chambers of AERs during 1996-2001 and reported that 8.7% of samples collected during 1996-1998 had bacterial growth, with 54% being Pseudomonas species. After a system of hot-water flushing of the piping (60°C for 60 minutes daily) was introduced, the frequency of positive cultures fell to approximately 2%, with only rare isolation of >10 CFU/mL 161.
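Because skipping or reordering any of these steps is a recognized failure mode, the sequence lends itself to a simple ordered checklist. The sketch below is a hypothetical illustration of that idea; the step names paraphrase the list above and are not drawn from any reprocessing software or cited standard.

```python
# Hypothetical ordered checklist for the reprocessing sequence above
# (leak testing shown as the preliminary step the guideline requires).

REPROCESSING_STEPS = [
    "leak test",   # before immersion
    "clean",       # brush and flush all channels with detergent/enzymatic cleaner
    "disinfect",   # immerse and perfuse all channels for the labeled time
    "rinse",       # sterile, filtered, or tap water through all channels
    "dry",         # alcohol flush followed by forced air
    "store",       # hang vertically, protected from recontamination
]

def next_step(completed: list[str]) -> str | None:
    """Return the next required step, enforcing the order above."""
    for step in REPROCESSING_STEPS:
        if step not in completed:
            return step
    return None  # endoscope fully reprocessed

print(next_step(["leak test", "clean"]))  # -> disinfect
```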
In addition to the endoscope reprocessing steps, a protocol should be developed that ensures the user knows whether an endoscope has been appropriately cleaned and disinfected (e.g., by using a room or cabinet for processed endoscopes only) or has not been reprocessed. When users leave endoscopes on movable carts, confusion can result about whether the endoscope has been processed. Although one guideline recommended that endoscopes (e.g., duodenoscopes) be reprocessed immediately before use 147, other guidelines do not require this activity 38,108,115, and, except for the Association of periOperative Registered Nurses (AORN), professional organizations do not recommend that reprocessing be repeated as long as the original processing is done correctly. As part of a quality assurance program, health-care facility personnel can consider random bacterial surveillance cultures of processed endoscopes to ensure high-level disinfection or sterilization 7. Reprocessed endoscopes should be free of microbial pathogens except for small numbers of relatively avirulent microbes that represent exogenous environmental contamination (e.g., coagulase-negative Staphylococcus, Bacillus species, diphtheroids). Although recommendations exist for the final rinse water used during endoscope reprocessing to be microbiologically cultured at least monthly 165, a microbiologic standard has not been set, and the value of routine endoscope cultures has not been shown 166. In addition, neither the routine culture of reprocessed endoscopes nor that of the final rinse water has been validated by correlating viable counts on an endoscope with infection after an endoscopic procedure. If reprocessed endoscopes were cultured, sampling the endoscope would assess water quality and other important steps (e.g., disinfectant effectiveness, exposure time, cleaning) in the reprocessing procedure. A number of methods for sampling endoscopes and water have been described 23,157,161,163,167,168. Novel approaches (e.g., detection of adenosine triphosphate) to evaluate the effectiveness of endoscope cleaning 169,170 or endoscope reprocessing 171 also have been evaluated, but no method has been established as a standard for assessing the outcome of endoscope reprocessing.

The carrying case used to transport clean and reprocessed endoscopes outside the health-care environment should not be used to store an endoscope or to transport the instrument within the health-care facility. A contaminated endoscope should never be placed in the carrying case because the case can also become contaminated. When the endoscope is removed from the case, properly reprocessed, and put back in the case, the case could recontaminate the endoscope. A contaminated carrying case should be discarded (Olympus America, June 2002, written communication).

Infection-control professionals should ensure that institutional policies are consistent with national guidelines and conduct infection-control rounds periodically (e.g., at least annually) in areas where endoscopes are reprocessed to ensure policy compliance. Breaches in policy should be documented and corrective action instituted. In incidents in which endoscopes were not exposed to a high-level disinfection process, patients exposed to potentially contaminated endoscopes have been assessed for possible acquisition of HIV, HBV, and hepatitis C virus (HCV). A 14-step method for managing a failure incident associated with high-level disinfection or sterilization has been described.
The possible transmission of bloodborne and other infectious agents highlights the importance of rigorous infection control 172,173.

# Laparoscopes and Arthroscopes

Although high-level disinfection appears to be the minimum standard for processing laparoscopes and arthroscopes between patients 28,86,174,175, this practice continues to be debated 89,90,176. However, neither side in the high-level disinfection versus sterilization debate has sufficient data on which to base its conclusions. Proponents of high-level disinfection refer to membership surveys 29 or institutional experiences 87 involving more than 117,000 and 10,000 laparoscopic procedures, respectively, that cite a low risk for infection (<0.3%) when high-level disinfection is used for gynecologic laparoscopic equipment. Only one infection in the membership survey was linked to spores. In addition, growth of common skin microorganisms (e.g., Staphylococcus epidermidis, diphtheroids) has been documented from the umbilical area even after skin preparation with povidone-iodine and ethyl alcohol. Similar organisms were recovered in some instances from the pelvic serosal surfaces or from the laparoscopic telescopes, suggesting that the microorganisms probably were carried from the skin into the peritoneal cavity 177,178. Proponents of sterilization focus on the possibility of transmitting infection by spore-forming organisms. Researchers have proposed several reasons why sterility was not necessary for all laparoscopic equipment: only a limited number of organisms (usually ≤10) are introduced into the peritoneal cavity during laparoscopy; minimal damage is done to inner abdominal structures, with little devitalized tissue; the peritoneal cavity tolerates small numbers of spore-forming bacteria; equipment is simple to clean and disinfect; surgical sterility is relative; the natural bioburden on rigid lumened devices is low 179; and no evidence exists that high-level disinfection instead of sterilization increases the risk for infection 87,89,90.

With the advent of laparoscopic cholecystectomy, concern about high-level disinfection is justifiable because the degree of tissue damage and bacterial contamination is greater than with laparoscopic procedures in gynecology. Failure to completely disassemble, clean, and high-level disinfect laparoscope parts has led to infections in patients 180. Data from one study suggested that disassembly, cleaning, and proper reassembly of laparoscopic equipment used in gynecologic procedures before steam sterilization presents no risk for infection 181.

As with laparoscopes and other equipment that enter sterile body sites, arthroscopes ideally should be sterilized before use. Older studies demonstrated that these instruments were commonly (57%) only high-level disinfected in the United States 28,86. A later survey (with a response rate of only 5%) reported that high-level disinfection was used in 31% of health-care facilities and a sterilization process in the remainder 30. High-level disinfection rather than sterilization presumably has been used because the incidence of infection is low and the few infections identified probably are unrelated to the use of high-level disinfection rather than sterilization. A retrospective study of 12,505 arthroscopic procedures found an infection rate of 0.04% (five infections) when arthroscopes were soaked in 2% glutaraldehyde for 15-20 minutes. Four infections were caused by S. aureus; the fifth was an anaerobic streptococcal infection 88.
Because these organisms are very susceptible to high-level disinfectants, such as 2% glutaraldehyde, the infections most likely originated from the patient's skin. Two cases of Clostridium perfringens arthritis have been reported in which the arthroscope was disinfected with glutaraldehyde for an exposure time that is not effective against spores 182,183.

Although only limited data are available, the evidence does not demonstrate that high-level disinfection of arthroscopes and laparoscopes poses an infection risk to the patient. For example, a prospective study that compared the reprocessing of arthroscopes and laparoscopes (per 1,000 procedures) with EtO sterilization to high-level disinfection with glutaraldehyde found no statistically significant difference in infection risk between the two methods (i.e., EtO, 7.5/1,000 procedures; glutaraldehyde, 2.5/1,000 procedures) 89. Although the debate over high-level disinfection versus sterilization of laparoscopes and arthroscopes will go unsettled until well-designed, randomized clinical trials are published, this guideline should be followed 1,17. That is, laparoscopes, arthroscopes, and other scopes that enter normally sterile tissue should be sterilized before each use; if this is not feasible, they should receive at least high-level disinfection.

# Tonometers, Cervical Diaphragm Fitting Rings, Cryosurgical Instruments, and Endocavitary Probes

Disinfection strategies vary widely for other semicritical items (e.g., applanation tonometers, rectal/vaginal probes, cryosurgical instruments, and diaphragm fitting rings). FDA requests that device manufacturers include at least one validated cleaning and disinfection/sterilization protocol in the labeling for their devices. As with all medications and devices, users should be familiar with the label instructions. One study revealed that no uniform technique was in use for disinfection of applanation tonometers, with disinfectant contact times varying from <15 seconds to 20 minutes 28. In view of the potential for transmission of viruses (e.g., herpes simplex virus [HSV], adenovirus 8, or HIV) 184 by tonometer tips, CDC recommended that the tonometer tips be wiped clean and disinfected for 5-10 minutes with either 3% hydrogen peroxide, 5000 ppm chlorine, 70% ethyl alcohol, or 70% isopropyl alcohol 95. However, more recent data suggest that 3% hydrogen peroxide and 70% isopropyl alcohol are not effective against the adenoviruses capable of causing epidemic keratoconjunctivitis and similar viruses and should not be used for disinfecting applanation tonometers 49,185,186. Structural damage to Schiotz tonometers has been observed with a 1:10 dilution of sodium hypochlorite (5,000 ppm chlorine) and with 3% hydrogen peroxide 187. After disinfection, the tonometer should be thoroughly rinsed in tap water and air dried before use. Although these disinfectants and exposure times should kill pathogens that can infect the eyes, no studies directly support this 188,189. The guidelines of the American Academy of Ophthalmology for preventing infections in ophthalmology focus on only one potential pathogen: HIV. 190 Because a short and simple decontamination procedure is desirable in the clinical setting, swabbing the tonometer tip with a 70% isopropyl alcohol wipe sometimes is practiced. 189 Preliminary reports suggest that wiping the tonometer tip with an alcohol swab and then allowing the alcohol to evaporate might be effective in eliminating HSV, HIV, and adenovirus 189,191,192.
However, because these studies involved only a few replicates and were conducted in a controlled laboratory setting, further studies are needed before this technique can be recommended. In addition, two reports have found that disinfection of pneumotonometer tips between uses with a 70% isopropyl alcohol wipe contributed to outbreaks of epidemic keratoconjunctivitis caused by adenovirus type 8 193,194.

Limited studies have evaluated disinfection techniques for other items that contact mucous membranes, such as diaphragm fitting rings, cryosurgical probes, transesophageal echocardiography probes 195, flexible cystoscopes 196, and vaginal/rectal probes used in sonographic scanning. Lettau, Bond, and McDougal of CDC supported the recommendation of a diaphragm fitting ring manufacturer that involved using a soap-and-water wash followed by a 15-minute immersion in 70% alcohol 96. This disinfection method should be adequate to inactivate HIV, HBV, and HSV even though alcohols are not classified as high-level disinfectants because their activity against picornaviruses is somewhat limited 72. No data are available regarding inactivation of human papillomavirus (HPV) by alcohol or other disinfectants because in vitro replication of complete virions has not been achieved. Thus, even though alcohol for 15 minutes should kill pathogens of relevance in gynecology, no clinical studies directly support this practice.

Vaginal probes are used in sonographic scanning. A vaginal probe and all endocavitary probes without a probe cover are semicritical devices because they have direct contact with mucous membranes (e.g., vagina, rectum, pharynx). Although use of the probe cover could be considered as changing the category, this guideline proposes use of a new condom/probe cover for the probe for each patient; and because condoms/probe covers can fail 195, the probe also should be high-level disinfected. The relevance of this recommendation is reinforced by the finding that sterile transvaginal ultrasound probe covers have a very high rate of perforations even before use (0%, 25%, and 65% perforations from three suppliers). 199 One study found a very high rate of perforations in used endovaginal probe covers from two suppliers after oocyte retrieval use (75% and 81%) 199; other studies demonstrated a lower rate of perforations after use of condoms (2.0% and 0.9%) 197,200. Condoms have been found superior to commercially available probe covers for covering the ultrasound probe (1.7% leakage for condoms versus 8.3% leakage for probe covers) 201. These studies underscore the need for routine probe disinfection between examinations. Although most ultrasound manufacturers recommend use of 2% glutaraldehyde for high-level disinfection of contaminated transvaginal transducers, the use of this agent has been questioned 202 because it might shorten the life of the transducer and might have toxic effects on the gametes and embryos 203. An alternative procedure for disinfecting the vaginal transducer involves the mechanical removal of the gel from the transducer, cleaning the transducer in soap and water, wiping the transducer with 70% alcohol or soaking it for 2 minutes in 500 ppm chlorine, and rinsing with tap water and air drying 204. The effectiveness of this and other methods 200 has not been validated in either rigorous laboratory experiments or in clinical use.
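One way to see why the guideline still calls for high-level disinfection underneath a cover is to compound the per-use failure rates quoted above over many examinations. The arithmetic below is only an illustration built on the cited perforation/leakage rates; the probability model (independent uses) is an assumption and the calculation does not appear in the guideline.

```python
# Compounding the per-use cover failure rates cited above. The rates come
# from the referenced studies; treating uses as independent events is an
# illustrative assumption, not guideline text.

def prob_at_least_one_breach(per_use_failure: float, n_uses: int) -> float:
    """Probability that at least one of n independent uses has a breach."""
    return 1.0 - (1.0 - per_use_failure) ** n_uses

for label, rate in [("condom, 0.9% per use", 0.009),
                    ("condom, 2.0% per use", 0.020),
                    ("probe cover, 8.3% per use", 0.083)]:
    p = prob_at_least_one_breach(rate, 100)
    print(f"{label}: P(at least one breach in 100 uses) = {p:.1%}")
# Even the best-performing cover leaves a substantial chance of direct
# mucous membrane contact over a busy clinic's caseload, which is why the
# probe itself is also high-level disinfected.
```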
High-level disinfection with a product (e.g., hydrogen peroxide) that is not toxic to staff, patients, probes, and retrieved cells should be used until the effectiveness of alternative procedures against microbes of importance at the cavitary site is demonstrated by well-designed experimental scientific studies. Other probes, such as rectal, cryosurgical, and transesophageal probes or devices, also should be high-level disinfected between patients.

Ultrasound probes used during surgical procedures also can contact sterile body sites. These probes can be covered with a sterile sheath to reduce the level of contamination on the probe and reduce the risk for infection. However, because the sheath does not completely protect the probe, the probes should be sterilized between each patient use, as with other critical items. If this is not possible, at a minimum the probe should be high-level disinfected and covered with a sterile probe cover.

Some cryosurgical probes are not fully immersible. During reprocessing, the tip of the probe should be immersed in a high-level disinfectant for the appropriate time; any other portion of the probe that could have mucous membrane contact can be disinfected by immersion or by wrapping with a cloth soaked in a high-level disinfectant to allow the recommended contact time. After disinfection, the probe should be rinsed with tap water and dried before use. Health-care facilities that use nonimmersible probes should replace them as soon as possible with fully immersible probes.

As with other high-level disinfection procedures, proper cleaning of probes is necessary to ensure the success of the subsequent disinfection 205. One study demonstrated that vegetative bacteria inoculated onto vaginal ultrasound probes decreased when the probes were cleaned with a towel 206. No information is available about either the level of contamination of such probes by potential viral pathogens such as HBV and HPV or their removal by cleaning (such as with a towel). Because these pathogens might be present in vaginal and rectal secretions and contaminate probes during use, high-level disinfection of the probes after such use is recommended.

# Dental Instruments

Scientific articles and increased publicity about the potential for transmitting infectious agents in dentistry have focused attention on dental instruments as possible agents for pathogen transmission 207,208. The American Dental Association recommends that surgical and other instruments that normally penetrate soft tissue or bone (e.g., extraction forceps, scalpel blades, bone chisels, periodontal scalers, and surgical burs) be classified as critical devices that should be sterilized after each use or discarded. Instruments not intended to penetrate oral soft tissues or bone (e.g., amalgam condensers and air/water syringes) but that could contact oral tissues are classified as semicritical, but sterilization after each use is recommended if the instruments are heat-tolerant 43,209. If a semicritical item is heat-sensitive, it should, at a minimum, be processed with high-level disinfection 43,210. Handpieces can be contaminated internally with patient material and should be heat sterilized after each patient. Handpieces that cannot be heat sterilized should not be used. 211 Methods of sterilization that can be used for critical or semicritical dental instruments and materials that are heat-stable include steam under pressure (autoclave), chemical (formaldehyde) vapor, and dry heat (e.g., 320°F for 2 hours).
Dental professionals most commonly use the steam sterilizer 212. All three sterilization procedures can damage some dental instruments, including steam-sterilized handpieces 213. Heat-tolerant alternatives are available for most clinical dental applications and are preferred 43.

CDC has divided noncritical surfaces in dental offices into clinical contact and housekeeping surfaces 43. Clinical contact surfaces are surfaces that might be touched frequently with gloved hands during patient care or that might become contaminated with blood or other potentially infectious material and subsequently contact instruments, hands, gloves, or devices (e.g., light handles, switches, dental X-ray equipment, chair-side computers). Barrier protective coverings (e.g., clear plastic wraps) can be used for these surfaces, particularly those that are difficult to clean (e.g., light handles, chair switches). The coverings should be changed when visibly soiled or damaged and routinely (e.g., between patients). Protected surfaces should be disinfected at the end of each day or if contamination is evident. If not barrier-protected, these surfaces should be disinfected between patients with an intermediate-level disinfectant (i.e., EPA-registered hospital disinfectant with a tuberculocidal claim) or a low-level disinfectant (i.e., EPA-registered hospital disinfectant with an HBV and HIV label claim) 43,214,215.

Most housekeeping surfaces need to be cleaned only with a detergent and water or an EPA-registered hospital disinfectant, depending on the nature of the surface and the type and degree of contamination. When housekeeping surfaces are visibly contaminated by blood or body substances, however, prompt removal and surface disinfection is a sound infection control practice and is required by the Occupational Safety and Health Administration (OSHA) 43,214.

Several studies have demonstrated variability among dental practices in trying to meet these recommendations 216,217. For example, 68% of respondents believed they were sterilizing their instruments but did not use appropriate chemical sterilants or exposure times, and 49% of respondents did not challenge autoclaves with biological indicators 216. Other investigators using biologic indicators have found a high proportion (15%-65%) of positive spore tests after assessing the efficacy of sterilizers used in dental offices. In one study of Minnesota dental offices, operator error, rather than mechanical malfunction 218, caused 87% of sterilization failures. Common factors in the improper use of sterilizers include chamber overload, low temperature setting, inadequate exposure time, failure to preheat the sterilizer, and interruption of the cycle.

Mail-return sterilization monitoring services use spore strips to test sterilizers in dental clinics, but the delay caused by mailing to the test laboratory could potentially cause false-negative results. Studies revealed, however, that the post-sterilization time and temperature after a 7-day delay had no influence on the test results 219. Delays (7 days at 27°C and 37°C, 3-day mail delay) did not cause any predictable pattern of inaccurate spore tests 220.

# Disinfection of HBV-, HCV-, HIV-, or TB-Contaminated Devices

The CDC recommendation for high-level disinfection of HBV-, HCV-, HIV-, or TB-contaminated devices is appropriate because experiments have demonstrated the effectiveness of high-level disinfectants to inactivate these and other pathogens that might contaminate semicritical devices 61,62,73,81,105,121,125.
Nonetheless, some health-care facilities have modified their disinfection procedures when endoscopes are used with a patient known or suspected to be infected with HBV, HIV, or M. tuberculosis 28,239. This is inconsistent with the concept of Standard Precautions, which presumes all patients are potentially infected with bloodborne pathogens 228. Several studies have highlighted the inability to distinguish HBV- or HIV-infected patients from noninfected patients on clinical grounds. In addition, mycobacterial infection is unlikely to be clinically apparent in many patients. In most instances, hospitals that altered their disinfection procedure used EtO sterilization on the endoscopic instruments because they believed this practice reduced the risk for infection 28,239. EtO is not routinely used for endoscope sterilization because of the lengthy processing time. Endoscopes and other semicritical devices should be managed the same way regardless of whether the patient is known to be infected with HBV, HCV, HIV, or M. tuberculosis.

An evaluation of a manual disinfection procedure to eliminate HCV from experimentally contaminated endoscopes provided some evidence that cleaning and 2% glutaraldehyde for 20 minutes should prevent transmission 236. A study that used experimentally contaminated hysteroscopes detected HCV by polymerase chain reaction (PCR) in one (3%) of 34 samples after cleaning with a detergent, but no samples were positive after treatment with a 2% glutaraldehyde solution for 20 minutes 120. Another study demonstrated complete elimination of HCV (as detected by PCR) from endoscopes used on chronically infected patients after cleaning and disinfection for 3-5 minutes in glutaraldehyde 118. Similarly, PCR was used to demonstrate complete elimination of HCV after standard disinfection of experimentally contaminated endoscopes 236, and endoscopes used on HCV-antibody-positive patients had no detectable HCV RNA after high-level disinfection 243. The inhibitory activity of a phenolic and a chlorine compound on HCV showed that the phenolic inhibited the binding and replication of HCV, but the chlorine was ineffective, probably because of its low concentration and its neutralization in the presence of organic matter 244.

# Disinfection in the Hemodialysis Unit

Hemodialysis systems include hemodialysis machines, water supply, water-treatment systems, and distribution systems. During hemodialysis, patients have acquired bloodborne viruses and pathogenic bacteria. Cleaning and disinfection are important components of infection control in a hemodialysis center. EPA and FDA regulate disinfectants used to reprocess hemodialyzers, hemodialysis machines, and water-treatment systems. Noncritical surfaces (e.g., dialysis bed or chair, countertops, external surfaces of dialysis machines, and equipment) should be disinfected with an EPA-registered disinfectant unless the item is visibly contaminated with blood; in that case, a tuberculocidal agent (or a disinfectant with specific label claims for HBV and HIV) or a 1:100 dilution of a hypochlorite solution (500-600 ppm free chlorine) should be used 246,248. This procedure accomplishes two goals: it removes soil on a regular basis and maintains an environment that is consistent with good patient care. Hemodialyzers are disinfected with peracetic acid, formaldehyde, glutaraldehyde, heat pasteurization with citric acid, and chlorine-containing compounds 249.
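The hypochlorite concentrations cited here and elsewhere in this guideline follow from simple dilution arithmetic. The sketch below assumes household bleach at 5.25%-6.15% sodium hypochlorite, a commonly cited range that is an assumption for illustration and not a value fixed by this guideline.

```python
# Dilution arithmetic behind the hypochlorite concentrations cited above.
# The 5.25%-6.15% stock strengths are assumed typical household bleach
# values, not figures mandated by this guideline.

def diluted_ppm(stock_percent: float, dilution_factor: int) -> float:
    """Convert a stock strength in percent to ppm, then dilute 1:N."""
    return stock_percent * 10_000 / dilution_factor  # 1% = 10,000 ppm

print(diluted_ppm(5.25, 100))  # 1:100 dilution -> 525 ppm free chlorine
print(diluted_ppm(6.15, 100))  # -> 615 ppm (the ~500-600 ppm range above)
print(diluted_ppm(5.25, 10))   # 1:10 dilution -> 5,250 ppm, cited for blood spills
```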
Hemodialysis systems usually are disinfected by chlorine-based disinfectants (e.g., sodium hypochlorite), aqueous formaldehyde, heat pasteurization, ozone, or peracetic acid 250,251. All products must be used according to the manufacturers' recommendations. Some dialysis systems use hot-water disinfection to control microbial contamination. At its high point, 82% of U.S. chronic hemodialysis centers were reprocessing (i.e., reusing) dialyzers for the same patient using high-level disinfection 249. However, one of the large dialysis organizations has decided to phase out reuse, and by 2002 the percentage of dialysis facilities reprocessing hemodialyzers had decreased to 63% 252. The two disinfectants most commonly used to reprocess dialyzers were peracetic acid and formaldehyde; 72% of facilities used peracetic acid and 20% used formaldehyde to disinfect hemodialyzers. Another 4% of the facilities used either glutaraldehyde or heat pasteurization in combination with citric acid 252. Infection-control recommendations in the hemodialysis setting, including disinfection and sterilization and the use of dedicated machines for hepatitis B surface antigen (HBsAg)-positive patients, are detailed in two reviews 245,246. The Association for the Advancement of Medical Instrumentation (AAMI) has published recommendations for the reuse of hemodialyzers 253.

# Inactivation of Clostridium difficile

The source of health-care-associated acquisition of Clostridium difficile in nonepidemic settings has not been determined. The environment and carriage on the hands of health-care personnel have been considered possible sources of infection 66,254. Carpeted rooms occupied by a patient with C. difficile were more heavily contaminated with C. difficile than were noncarpeted rooms 255. Because C. difficile spore production can increase when the organism is exposed to nonchlorine-based cleaning agents, and because the spores are more resistant than vegetative cells to commonly used surface disinfectants 256, some investigators have recommended use of dilute solutions of hypochlorite (1,600 ppm available chlorine) for routine environmental disinfection of rooms of patients with C. difficile-associated diarrhea or colitis 257, to reduce the incidence of C. difficile diarrhea 258, or in units with high C. difficile rates. 259 Stool samples of patients with symptomatic C. difficile colitis contain spores of the organism, as demonstrated by ethanol treatment of the stool to reduce the overgrowth of fecal flora when isolating C. difficile in the laboratory 260,261. C. difficile-associated diarrhea rates were shown to have decreased markedly in a bone-marrow transplant unit (from 8.6 to 3.3 cases per 1,000 patient-days) during a period of bleach disinfection (1:10 dilution) of environmental surfaces compared with cleaning with a quaternary ammonium compound. Because no EPA-registered products exist that are specific for inactivating C. difficile spores, use of diluted hypochlorite should be considered in units with high C. difficile rates. Acidified bleach and regular bleach (5,000 ppm chlorine) can inactivate 10⁶ C. difficile spores in ≤10 minutes 262. However, studies have shown that asymptomatic patients constitute an important reservoir within the health-care facility and that person-to-person transmission is the principal means of transmission between patients.
Thus, combined use of hand washing, barrier precautions, and meticulous environmental cleaning with an EPA-registered disinfectant (e.g., germicidal detergent) should effectively prevent spread of the organism 263 . Contaminated medical devices, such as colonoscopes and thermometers, can be vehicles for transmission of C. difficile spores 264 . For this reason, investigators have studied commonly used disinfectants and exposure times to assess whether current practices can place patients at risk. Data demonstrate that 2% glutaraldehyde 79 and peracetic acid 267,268 reliably kill C. difficile spores using exposure times of 5-20 minutes. ortho-Phthalaldehyde and ≥0.2% peracetic acid (WA Rutala, personal communication, April 2006) also can inactivate ≥10⁴ C. difficile spores in 10-12 minutes at 20°C 268 . Sodium dichloroisocyanurate at a concentration of 1,000 ppm available chlorine achieved lower log10 reduction factors against C. difficile spores at 10 minutes (0.7-1.5) than did 0.26% peracetic acid (2.7-6.0) 268 .

# OSHA Bloodborne Pathogen Standard

In December 1991, OSHA promulgated a standard entitled "Occupational Exposure to Bloodborne Pathogens" to eliminate or minimize occupational exposure to bloodborne pathogens 214 . One component of this requirement is that all equipment and environmental and working surfaces be cleaned and decontaminated with an appropriate disinfectant after contact with blood or other potentially infectious materials. Even though the OSHA standard does not specify the type of disinfectant or procedure, the original OSHA compliance document 269 suggested that a germicide must be tuberculocidal to kill HBV. To follow the OSHA compliance document, a tuberculocidal disinfectant (e.g., a phenolic or a chlorine compound) would be needed to clean a blood spill. However, in February 1997, OSHA amended its policy and stated that EPA-registered disinfectants labeled as effective against HIV and HBV would be considered appropriate disinfectants ". . . provided such surfaces have not become contaminated with agent(s) or volumes of or concentrations of agent(s) for which higher level disinfection is recommended." When bloodborne pathogens other than HBV or HIV are of concern, OSHA continues to require use of EPA-registered tuberculocidal disinfectants or hypochlorite solution (diluted 1:10 or 1:100 with water) 215,228 . Studies demonstrate that, in the presence of large blood spills, a 1:10 final dilution of EPA-registered hypochlorite solution should be used initially to inactivate bloodborne viruses 63,235 and to minimize the risk for infection to health-care personnel from percutaneous injury during cleanup.

# Emerging Pathogens (Cryptosporidium, Helicobacter pylori, Escherichia coli O157:H7, Rotavirus, Human Papillomavirus, Norovirus, Severe Acute Respiratory Syndrome Coronavirus)

Emerging pathogens are of growing concern to the general public and infection-control professionals. Relevant pathogens include Cryptosporidium parvum, Helicobacter pylori, E. coli O157:H7, HIV, HCV, rotavirus, norovirus, severe acute respiratory syndrome (SARS) coronavirus, multidrug-resistant M. tuberculosis, and nontuberculous mycobacteria (e.g., M. chelonae). The susceptibility of each of these pathogens to chemical disinfectants and sterilants has been studied. With the exceptions discussed below, all of these emerging pathogens are susceptible to currently available chemical disinfectants and sterilants 270 .
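Disinfectant efficacy in this section and those that follow is frequently reported as a log10 reduction factor. As a reading aid, the following minimal sketch shows the arithmetic relating raw counts and percent kill to log10 reductions; the example numbers are illustrative and are not drawn from the cited studies.

```python
import math

# Log10 reduction factor (LRF): log10(count before) - log10(count after).
def log10_reduction(n_before: float, n_after: float) -> float:
    return math.log10(n_before) - math.log10(n_after)

# Reducing 10^6 organisms to 10^2 survivors is a 4-log10 reduction:
print(log10_reduction(1e6, 1e2))  # 4.0

# Percent kill maps to LRF the same way: 99.9% kill leaves 0.1% survivors,
# i.e., a 3-log10 reduction, while 80% kill is only ~0.7 log10.
print(log10_reduction(100, 0.1))           # 3.0
print(round(log10_reduction(100, 20), 1))  # 0.7
```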
Cryptosporidium is resistant to chlorine at the concentrations used in potable water. C. parvum is not completely inactivated by most disinfectants used in healthcare, including ethyl alcohol 271 , glutaraldehyde 271,272 , 5.25% hypochlorite 271 , peracetic acid 271 , ortho-phthalaldehyde 271 , phenol 271,272 , povidone-iodine 271,272 , and quaternary ammonium compounds 271 . The only chemical disinfectants and sterilants able to inactivate greater than 3 log10 of C. parvum were 6% and 7.5% hydrogen peroxide 271 . Sterilization methods, including steam 271 , EtO 271,273 , and hydrogen peroxide gas plasma 271 , will fully inactivate C. parvum. Although most disinfectants are ineffective against C. parvum, current cleaning and disinfection practices appear satisfactory to prevent healthcare-associated transmission. For example, endoscopes are unlikely to be an important vehicle for transmitting C. parvum because the results of bacterial studies indicate that mechanical cleaning will remove approximately 10⁴ organisms, and drying results in rapid loss of C. parvum viability (e.g., 2.9 log10 decrease at 30 minutes and 3.8 log10 decrease at 60 minutes) 271 .

Chlorine at ~1 ppm has been found capable of eliminating approximately 4 log10 of E. coli O157:H7 within 1 minute in a suspension test 64 . Electrolyzed oxidizing water at 23°C was effective in 10 minutes in producing a 5-log10 decrease in E. coli O157:H7 inoculated onto kitchen cutting boards 274 . The following disinfectants eliminated >5 log10 of E. coli O157:H7 within 30 seconds: a quaternary ammonium compound, a phenolic, a hypochlorite (1:10 dilution of 5.25% bleach), and ethanol 53 . Disinfectants, including chlorine compounds, can reduce E. coli O157:H7 experimentally inoculated onto alfalfa seeds or sprouts 275,276 or beef carcass surfaces 277 .

Data are limited on the susceptibility of H. pylori to disinfectants. Using a suspension test, one study assessed the effectiveness of a variety of disinfectants against nine strains of H. pylori 60 . Ethanol (80%) and glutaraldehyde (0.5%) killed all strains within 15 seconds; chlorhexidine gluconate (0.05%, 1.0%), benzalkonium chloride (0.025%, 0.1%), alkyldiaminoethylglycine hydrochloride (0.1%), povidone-iodine (0.1%), and sodium hypochlorite (150 ppm) killed all strains within 30 seconds. Both ethanol (80%) and glutaraldehyde (0.5%) retained similar bactericidal activity in the presence of organic matter; the other disinfectants showed reduced bactericidal activity. In particular, the bactericidal activity of povidone-iodine (0.1%) and sodium hypochlorite (150 ppm) markedly decreased in the presence of dried yeast solution, with killing times increased to 5-10 minutes and 5-30 minutes, respectively. Immersing biopsy forceps in formalin before obtaining a specimen does not affect the ability to culture H. pylori from the biopsy specimen 278 . The following methods are ineffective for eliminating H. pylori from endoscopes: cleaning with soap and water 119,279 , immersion in 70% ethanol for 3 minutes 280 , instillation of 70% ethanol 126 , instillation of 30 mL of 83% methanol 279 , and instillation of 0.2% Hyamine solution 281 . The differing results with regard to the efficacy of ethyl alcohol against Helicobacter are unexplained. Cleaning followed by use of 2% alkaline glutaraldehyde (or automated peracetic acid) has been demonstrated by culture to be effective in eliminating H. pylori 119,279,282 .
Epidemiologic investigations of patients who had undergone endoscopy with endoscopes mechanically washed and disinfected with 2.0%-2.3% glutaraldehyde have revealed no evidence of person-to-person transmission of H. pylori 126,283 . Disinfection of experimentally contaminated endoscopes using 2% glutaraldehyde (10-minute, 20-minute, and 45-minute exposure times) or the peracetic acid system (with and without active peracetic acid) has been demonstrated to be effective in eliminating H. pylori 119 . H. pylori DNA has been detected by PCR in fluid flushed from endoscope channels after cleaning and disinfection with 2% glutaraldehyde 284 . The clinical significance of this finding is unclear. In vitro experiments have demonstrated a >3.5-log10 reduction in H. pylori after exposure to 0.5 mg/L of free chlorine for 80 seconds 285 .

An outbreak of healthcare-associated rotavirus gastroenteritis on a pediatric unit has been reported 286 . Person-to-person spread through the hands of health-care workers was proposed as the mechanism of transmission. Prolonged survival of rotavirus on environmental surfaces (90 minutes to >10 days at room temperature) and hands (>4 hours) has been demonstrated. Rotavirus suspended in feces can survive longer 287,288 . Vectors have included hands, fomites, air, water, and food 288,289 . Products with demonstrated efficacy (>3 log10 reduction in virus) against rotavirus within 1 minute include 95% ethanol, 70% isopropanol, some phenolics, 2% glutaraldehyde, 0.35% peracetic acid, and some quaternary ammonium compounds 59 . In a human challenge study, a disinfectant spray (0.1% ortho-phenylphenol and 79% ethanol), sodium hypochlorite (800 ppm free chlorine), and a phenol-based product (14.7% phenol diluted 1:256 in tapwater), when sprayed onto contaminated stainless steel disks, were effective in interrupting transfer of a human rotavirus from stainless steel disks to the fingerpads of volunteers after an exposure time of 3-10 minutes. A quaternary ammonium product (7.05% quaternary ammonium compound diluted 1:128 in tapwater) and tapwater allowed transfer of virus 52 .

No data exist on the inactivation of human papillomavirus (HPV) by alcohol or other disinfectants because in vitro replication of complete virions has not been achieved. Similarly, little is known about inactivation of noroviruses (members of the family Caliciviridae and important causes of gastroenteritis in humans) because they cannot be grown in tissue culture. Improper disinfection of environmental surfaces contaminated by feces or vomitus of infected patients is believed to play a role in the spread of noroviruses in some settings. Prolonged survival of a norovirus surrogate (i.e., feline calicivirus [FCV], a closely related cultivable virus) has been demonstrated (e.g., at room temperature, FCV in a dried state survived for 21-28 days) 297 . Inactivation studies with FCV have shown the effectiveness of chlorine, glutaraldehyde, and iodine-based products, whereas a quaternary ammonium compound, detergent, and ethanol failed to inactivate the virus completely 297 . An evaluation of the effectiveness of several disinfectants against feline calicivirus found that bleach diluted to 1,000 ppm of available chlorine reduced the infectivity of FCV by 4.5 logs in 1 minute.
Other effective disinfectants (log10 reduction factor of >4 in virus) included accelerated hydrogen peroxide, 5,000 ppm (3 min); chlorine dioxide, 1,000 ppm chlorine (1 min); a mixture of four quaternary ammonium compounds, 2,470 ppm (10 min); 79% ethanol with 0.1% quaternary ammonium compound (3 min); and 75% ethanol (10 min) 298 . A quaternary ammonium compound exhibited activity against feline calicivirus suspensions dried on hard-surface carriers in 10 minutes 299 . Seventy percent ethanol and 70% 1-propanol reduced FCV by 3-4 log10 in 30 seconds 300 .

CDC announced that a previously unrecognized human virus from the coronavirus family was the leading hypothesized cause of the syndrome described as SARS 301 . Two coronaviruses that are known to infect humans cause one third of common colds and can cause gastroenteritis. The virucidal efficacy of chemical germicides against coronavirus has been investigated. A study of disinfectants against coronavirus 229E found several that were effective after a 1-minute contact time; these included sodium hypochlorite (at a free chlorine concentration of 1,000 ppm and 5,000 ppm), 70% ethyl alcohol, and povidone-iodine (1% iodine) 186 . In another study, 70% ethanol, 50% isopropanol, 0.05% benzalkonium chloride, 50 ppm iodine in iodophor, 0.23% sodium chlorite, 1% cresol soap, and 0.7% formaldehyde inactivated >3 logs of two animal coronaviruses (mouse hepatitis virus and canine coronavirus) after a 10-minute exposure time 302 . The activity of povidone-iodine has been demonstrated against human coronaviruses 229E and OC43 303 . A study also showed complete inactivation of the SARS coronavirus by 70% ethanol and povidone-iodine with an exposure time of 1 minute and by 2.5% glutaraldehyde with an exposure time of 5 minutes 304 . Because the SARS coronavirus is stable in feces and urine at room temperature for at least 1-2 days, surfaces might be a source of contamination leading to infection with the SARS coronavirus and should be disinfected. Until more precise information is available, environments in which SARS patients are housed should be considered heavily contaminated, and rooms and equipment should be thoroughly disinfected daily and after the patient is discharged. EPA-registered disinfectants or a 1:100 dilution of household bleach in water should be used for surface disinfection and for disinfection of noncritical patient-care equipment. High-level disinfection and sterilization of semicritical and critical medical devices, respectively, do not need to be altered for patients with known or suspected SARS.

Free-living amoebae can be pathogenic and can harbor agents of pneumonia such as Legionella pneumophila. Limited studies have shown that 2% glutaraldehyde and peracetic acid do not completely inactivate Acanthamoeba polyphaga in the 20-minute exposure time used for high-level disinfection. If amoebae are found to contaminate instruments and facilitate infection, longer immersion times or other disinfectants may need to be considered 305 .

# Inactivation of Bioterrorist Agents

Publications have highlighted concerns about the potential for biological terrorism 306,307 . CDC has categorized several agents as "high priority" because they can be easily disseminated or transmitted from person to person, cause high mortality, and are likely to cause public panic and social disruption 308 .
These agents include Bacillus anthracis (the cause of anthrax), Yersinia pestis (plague), variola major (smallpox), Clostridium botulinum toxin (botulism), Francisella tularensis (tularemia), filoviruses (Ebola hemorrhagic fever, Marburg hemorrhagic fever), and arenaviruses (Lassa, Junin) and related viruses 308 .

A few comments can be made regarding the role of sterilization and disinfection of potential agents of bioterrorism 309 . First, the susceptibility of these agents to germicides in vitro is similar to that of other related pathogens. For example, variola is similar to vaccinia 72,310,311 , and B. anthracis is similar to B. atrophaeus (formerly B. subtilis) 312,313 . B. subtilis spores, for instance, proved as resistant as, if not more resistant than, B. anthracis spores (>6 log10 reduction of B. anthracis spores in 5 minutes with acidified bleach) 313 . Thus, one can extrapolate from the larger database available on the susceptibility of genetically similar organisms 314 . Second, many of the potential bioterrorist agents are stable enough in the environment that contaminated environmental surfaces or fomites could lead to transmission of agents such as B. anthracis, F. tularensis, variola major, C. botulinum toxin, and C. burnetii 315 . Third, data suggest that current disinfection and sterilization practices are appropriate for managing patient-care equipment and environmental surfaces when potentially contaminated patients are evaluated and/or admitted in a health-care facility after exposure to a bioterrorist agent. For example, sodium hypochlorite can be used for surface disinfection (see ). In instances where the health-care facility is the site of a bioterrorist attack, environmental decontamination might require special decontamination procedures (e.g., chlorine dioxide gas for B. anthracis spores). Because no antimicrobial products are registered for decontamination of biologic agents after a bioterrorist attack, EPA has granted a crisis exemption for each product (see ). Of only theoretical concern is the possibility that a bioterrorist agent could be engineered to be less susceptible to disinfection and sterilization processes 309 .

# Toxicological, Environmental and Occupational Concerns

Health hazards associated with the use of germicides in healthcare vary from mucous membrane irritation to death, with the latter involving accidental injection by mentally disturbed patients 316 . Although their degrees of toxicity vary, all disinfectants should be used with the proper safety precautions 321 and only for the intended purpose. Key factors associated with assessing the health risk of a chemical exposure include the duration, intensity (i.e., how much chemical is involved), and route (e.g., skin, mucous membranes, and inhalation) of exposure. Toxicity can be acute or chronic. Acute toxicity usually results from an accidental spill of a chemical substance. Exposure is sudden and often produces an emergency situation. Chronic toxicity results from repeated exposure to low levels of the chemical over a prolonged period. Employers are responsible for informing workers about the chemical hazards in the workplace and for implementing control measures. The OSHA Hazard Communication Standard (29 CFR 1910.1200, 1915.99, 1917.28, 1918.90, 1926.59, and 1928.21) requires manufacturers and importers of hazardous chemicals to develop Material Safety Data Sheets (MSDS) for each chemical or mixture of chemicals. Employers must have these data sheets readily available to employees who work with the products to which they could be exposed.

Exposure limits have been published for many chemicals used in health care to help provide a safe environment and, as relevant, are discussed in each section of this guideline. Only the exposure limits published by OSHA carry the legal force of regulations. OSHA publishes a limit as a time-weighted average (TWA), that is, the average concentration for a normal 8-hour work day and a 40-hour work week to which nearly all workers can be repeatedly exposed without adverse health effects. For example, the permissible exposure limit (PEL) for EtO is 1.0 ppm as an 8-hour TWA. The CDC National Institute for Occupational Safety and Health (NIOSH) develops recommended exposure limits (RELs). RELs are occupational exposure limits recommended by NIOSH as being protective of worker health and safety over a working lifetime. This limit is frequently expressed as a TWA exposure for up to 10 hours per day during a 40-hour work week. These exposure limits are designed for inhalation exposures. Irritant and allergic effects can occur below the exposure limits, and skin contact can result in dermal effects or systemic absorption without inhalation. The American Conference of Governmental Industrial Hygienists (ACGIH) also provides guidelines on exposure limits 322 . Information about workplace exposures and methods to reduce them (e.g., work practices, engineering controls, PPE) is available on the OSHA and NIOSH websites.
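To make the TWA arithmetic concrete, here is a minimal sketch using hypothetical exposure data; the interval durations and concentrations below are invented for illustration and are not measurements from any cited source.

```python
# Illustrative sketch: computing an 8-hour time-weighted average (TWA) and
# comparing it with the OSHA PEL for EtO (1.0 ppm, 8-hour TWA).
# Intervals are (duration_hours, concentration_ppm); hypothetical data.

ETO_PEL_TWA_PPM = 1.0

def eight_hour_twa(intervals):
    """TWA = sum(conc_i * t_i) / 8; unexposed time counts as 0 ppm."""
    exposed = sum(conc * hours for hours, conc in intervals)
    return exposed / 8.0

# Hypothetical shift: 0.5 h at 4 ppm near the sterilizer, 7.5 h at 0.1 ppm.
shift = [(0.5, 4.0), (7.5, 0.1)]
twa = eight_hour_twa(shift)
print(f"8-h TWA = {twa:.2f} ppm; exceeds PEL: {twa > ETO_PEL_TWA_PPM}")
# -> 8-h TWA = 0.34 ppm, below the 1.0 ppm PEL
```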
Some states have excluded or limited concentrations of certain chemical germicides (e.g., glutaraldehyde, formaldehyde, and some phenols) from disposal through the sewer system. These rules are intended to minimize environmental harm. If health-care facilities exceed the maximum allowable concentration of a chemical (e.g., ≥5.0 mg/L), they have three options. First, they can switch to alternative products; for example, they can change from glutaraldehyde to another disinfectant for high-level disinfection or from phenolics to quaternary ammonium compounds for low-level disinfection. Second, the health-care facility can collect the disinfectant and dispose of it as a hazardous chemical. Third, the facility can use a commercially available small-scale treatment method (e.g., neutralize glutaraldehyde with glycine).

Safe disposal of regulated chemicals is important throughout the medical community. For disposal of large volumes of spent solutions, users might decide to neutralize the microbicidal activity before disposal (e.g., glutaraldehyde). Solutions can be neutralized by reaction with chemicals such as sodium bisulfite 323,324 or glycine 325 .

European authors have suggested that instruments and ventilation therapy equipment should be disinfected by heat rather than by chemicals. The concerns with chemical disinfection include toxic side effects for the patient caused by chemical residues on the instrument or object, occupational exposure to toxic chemicals, and recontamination caused by rinsing the disinfectant off with microbially contaminated tap water 326 .

# Disinfection in Ambulatory Care, Home Care, and the Home

With the advent of managed healthcare, increasing numbers of patients are now being cared for in ambulatory-care and home settings. Many patients in these settings might have communicable diseases, immunocompromising conditions, or invasive devices.
Therefore, adequate disinfection in these settings is necessary to provide a safe patient environment. Because the ambulatory-care setting (i.e., outpatient facility) provides the same risk for infection as the hospital, the Spaulding classification scheme described in this guideline should be followed (Table 1) 17 .

The home environment should be much safer than hospitals or ambulatory care. Epidemics should not be a problem, and cross-infection should be rare. The healthcare provider is responsible for providing the responsible family member with information about infection-control procedures to follow in the home, including hand hygiene, proper cleaning and disinfection of equipment, and safe storage of cleaned and disinfected devices. Among the products recommended for home disinfection of reusable objects are bleach, alcohol, and hydrogen peroxide. APIC recommends that reusable objects (e.g., tracheostomy tubes) that touch mucous membranes be disinfected by immersion in 70% isopropyl alcohol for 5 minutes or in 3% hydrogen peroxide for 30 minutes. Additionally, a 1:50 dilution of 5.25%-6.15% sodium hypochlorite (household bleach) for 5 minutes should be effective. Noncritical items (e.g., blood pressure cuffs, crutches) can be cleaned with a detergent. Blood spills should be handled according to OSHA regulations as previously described (see section on OSHA Bloodborne Pathogen Standard). In general, sterilization of critical items is not practical in homes but theoretically could be accomplished by chemical sterilants or boiling. Single-use disposable items can be used, or reusable items can be sterilized in a hospital 330,331 .

Some environmental groups advocate "environmentally safe" products as alternatives to commercial germicides in the home-care setting. These alternatives (e.g., ammonia, baking soda, vinegar, Borax, liquid detergent) are not registered with EPA and should not be used for disinfecting because they are ineffective against S. aureus. Borax, baking soda, and detergents also are ineffective against Salmonella Typhi and E. coli; however, undiluted vinegar and ammonia are effective against S. Typhi and E. coli 53,332,333 . Common commercial disinfectants designed for home use also are effective against selected antibiotic-resistant bacteria 53 .

Public concerns have been raised that the use of antimicrobials in the home can promote development of antibiotic-resistant bacteria 334,335 . This issue is unresolved and needs to be considered further through scientific and clinical investigations. The public health benefits of using disinfectants in the home are unknown. However, some facts are known: many sites in the home kitchen and bathroom are microbially contaminated 336 , use of hypochlorites markedly reduces bacteria 337 , and good standards of hygiene (e.g., food hygiene, hand hygiene) can help reduce infections in the home 338,339 . In addition, laboratory studies indicate that many commercially prepared household disinfectants are effective against common pathogens 53 and can interrupt surface-to-human transmission of pathogens 48 . The "targeted hygiene concept"-which means identifying situations and areas (e.g., food-preparation surfaces and bathrooms) where risk exists for transmission of pathogens-may be a reasonable way to identify when disinfection might be appropriate 340 .
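Applying the same dilution arithmetic sketched earlier (taking 1% sodium hypochlorite as approximately 10,000 ppm, an approximation), the APIC-recommended 1:50 home dilution works out to roughly 1,050-1,230 ppm available chlorine:

```python
# Approximate available chlorine for a 1:50 dilution of household bleach
# (illustrative arithmetic only; 1% NaOCl is taken as ~10,000 ppm).
for stock_percent in (5.25, 6.15):
    ppm = stock_percent * 10_000 / 50
    print(f"{stock_percent}% bleach at 1:50 -> ~{ppm:.0f} ppm")
```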
# Susceptibility of Antibiotic-Resistant Bacteria to Disinfectants

As with antibiotics, reduced susceptibility (or acquired "resistance") of bacteria to disinfectants can arise by either chromosomal gene mutation or acquisition of genetic material in the form of plasmids or transposons 338,341-346 . When changes occur in bacterial susceptibility that render an antibiotic ineffective against an infection previously treatable by that antibiotic, the bacteria are referred to as "resistant." In contrast, reduced susceptibility to disinfectants does not correlate with failure of the disinfectant because the concentrations used in disinfection still greatly exceed the cidal level. Thus, the word "resistance" when applied to these changes is incorrect, and the preferred term is "reduced susceptibility" or "increased tolerance" 344,347 . No data are available showing that antibiotic-resistant bacteria are less sensitive to liquid chemical germicides than antibiotic-sensitive bacteria at currently used germicide contact conditions and concentrations.

MRSA and vancomycin-resistant Enterococcus (VRE) are important health-care-associated agents. Some antiseptics and disinfectants have been known for years to be, on the basis of MICs, somewhat less inhibitory to S. aureus strains that contain a plasmid-borne gene encoding resistance to the antibiotic gentamicin 344 . For example, gentamicin resistance has been shown to also encode reduced susceptibility to propamidine, quaternary ammonium compounds, and ethidium bromide 348 , and MRSA strains have been found to be less susceptible than methicillin-sensitive S. aureus (MSSA) strains to chlorhexidine, propamidine, and the quaternary ammonium compound cetrimide 349 . In other studies, MRSA and MSSA strains have been equally sensitive to phenols and chlorhexidine, but MRSA strains were slightly more tolerant to quaternary ammonium compounds 350 . Two gene families (qacCD and qacAB) are involved in providing protection against agents that are components of disinfectant formulations, such as quaternary ammonium compounds. Staphylococci have been proposed to evade destruction because the protein specified by the qacA determinant is a cytoplasmic-membrane-associated protein involved in an efflux system that actively reduces intracellular accumulation of toxicants, such as quaternary ammonium compounds, before they reach their intracellular targets 351 . Other studies demonstrated that plasmid-mediated formaldehyde tolerance is transferable from Serratia marcescens to E. coli 352 and that plasmid-mediated quaternary ammonium tolerance is transferable from S. aureus to E. coli 353 . Tolerance to mercury and silver also is plasmid-borne 341 . Because the concentrations of disinfectants used in practice are much higher than the MICs observed, even for the more tolerant strains, the clinical relevance of these observations is questionable.

Several studies have found antibiotic-resistant hospital strains of common healthcare-associated pathogens (i.e., Enterococcus, P. aeruginosa, Klebsiella pneumoniae, E. coli, S. aureus, and S. epidermidis) to be as susceptible to disinfectants as antibiotic-sensitive strains 53 . The susceptibility of glycopeptide-intermediate S. aureus was similar to that of vancomycin-susceptible MRSA 357 . On the basis of these data, routine disinfection and housekeeping protocols do not need to be altered because of antibiotic resistance, provided the disinfection method is effective 358,359 .
A study that evaluated the efficacy of selected cleaning methods (e.g., QUAT-sprayed cloth and QUAT-immersed cloth) for eliminating VRE found that currently used disinfection processes most likely are highly effective in eliminating VRE. However, surface disinfection must involve contact with all contaminated surfaces 358 . A new method using an invisible fluorescent marker to objectively evaluate the thoroughness of cleaning activities in patient rooms might lead to improvement in cleaning of all objects and surfaces but needs further evaluation 360 .

Lastly, does the use of antiseptics or disinfectants facilitate the development of disinfectant-tolerant organisms? Evidence and reviews indicate that enhanced tolerance to disinfectants can develop in response to disinfectant exposure 334,335,346,347,361 . However, the level of tolerance is not important in clinical terms because it is low and unlikely to compromise the effectiveness of disinfectants, which are used at much higher concentrations 347,362 . The issue of whether low-level tolerance to germicides selects for antibiotic-resistant strains is unsettled but might depend on the mechanism by which tolerance is attained. For example, changes in the permeability barrier or efflux mechanisms might affect susceptibility to both antibiotics and germicides, but specific changes to a target site might not. Some researchers have suggested that use of disinfectants or antiseptics (e.g., triclosan) could facilitate development of antibiotic-resistant microorganisms 334,335,363 . Although evidence in laboratory studies indicates low-level resistance to triclosan, the concentrations of triclosan in these studies were low (generally <1 μg/mL) and dissimilar from the much higher levels used in antimicrobial products (2,000-20,000 μg/mL) 364,365 . Thus, researchers can create laboratory-derived mutants that demonstrate reduced susceptibility to antiseptics or disinfectants. In some experiments, such bacteria have demonstrated reduced susceptibility to certain antibiotics 335 . There is no evidence that using antiseptics or disinfectants selects for antibiotic-resistant organisms in nature or that such mutants survive in nature 366 . In addition, the action of antibiotics and the action of disinfectants differ fundamentally. Antibiotics are selectively toxic and generally have a single target site in bacteria, thereby inhibiting a specific biosynthetic process. Germicides generally are considered nonspecific antimicrobials because of a multiplicity of toxic-effect mechanisms or target sites and are broader spectrum in the types of microorganisms against which they are effective 344,347 .

The rotational use of disinfectants in some environments (e.g., pharmacy production units) has been recommended and practiced in an attempt to prevent development of resistant microbes 367,368 . There have been only rare case reports that appropriately used disinfectants have resulted in a clinical problem arising from the selection or development of nonsusceptible microorganisms 369 .

# Surface Disinfection

The effective use of disinfectants is part of a multibarrier strategy to prevent health-care-associated infections. Surfaces are considered noncritical items because they contact intact skin. Use of noncritical items or contact with noncritical surfaces carries little risk of causing an infection in patients or staff. Thus, the routine use of germicidal chemicals to disinfect hospital floors and other noncritical items is controversial.
A 1991 study expanded the Spaulding scheme by dividing the noncritical environmental surfaces into housekeeping surfaces and medical equipment surfaces 376 . The classes of disinfectants used on housekeeping and medical equipment surfaces can be similar. However, the frequency of decontamination can vary (see Recommendations). Medical equipment surfaces (e.g., blood pressure cuffs, stethoscopes, hemodialysis machines, and X-ray machines) can become contaminated with infectious agents and contribute to the spread of health-care-associated infections 248,375 . For this reason, noncritical medical equipment surfaces should be disinfected with an EPA-registered low- or intermediate-level disinfectant. Use of a disinfectant will provide antimicrobial activity that is likely to be achieved with minimal additional cost or work. Environmental surfaces (e.g., bedside tables) also could potentially contribute to cross-transmission by contamination of health-care personnel from hand contact with contaminated surfaces, medical equipment, or patients 50,375,377 . A paper reviews the epidemiologic and microbiologic data (Table 3) regarding the use of disinfectants on noncritical surfaces 378 .

Of the seven reasons to use a disinfectant on noncritical surfaces, five are particularly noteworthy and support the use of a germicidal detergent. First, hospital floors become contaminated with microorganisms from settling airborne bacteria; by contact with shoes, wheels, and other objects; and occasionally by spills. The removal of microbes is a component in controlling health-care-associated infections. In an investigation of the cleaning of hospital floors, the use of soap and water (80% reduction) was less effective in reducing the numbers of bacteria than was a phenolic disinfectant (94%-99.9% reduction) 379 . However, a few hours after floor disinfection, the bacterial count was nearly back to the pretreatment level. Second, detergents become contaminated and result in seeding the patient's environment with bacteria. Investigators have shown that mop water becomes increasingly dirty during cleaning and becomes contaminated if soap and water rather than a disinfectant is used. For example, in one study, bacterial contamination in soap and water without a disinfectant increased from 10 CFU/mL to 34,000 CFU/mL after cleaning a ward, whereas contamination in a disinfectant solution did not change (20 CFU/mL) 380 . Contamination of surfaces close to the patient that are frequently touched by the patient or staff (e.g., bed rails) could result in patient exposures 381 . In one study in which detergents were used on floors and patient-room furniture, increased bacterial contamination of the patients' environmental surfaces was found after cleaning (average increase = 103.6 CFU/24 cm²) 382 . In addition, a P. aeruginosa outbreak was reported in a hematology-oncology unit associated with contamination of the surface-cleaning equipment when nongermicidal cleaning solutions instead of disinfectants were used to decontaminate the patients' environment 383 , and another study demonstrated the role of environmental cleaning in controlling an outbreak of Acinetobacter baumannii 384 . Studies also have shown that, in situations where the cleaning procedure failed to eliminate contamination from the surface and the cloth is used to wipe another surface, the contamination is transferred to that surface and to the hands of the person holding the cloth 381,385 .
Third, the CDC Isolation Guideline recommends that noncritical equipment contaminated with blood, body fluids, secretions, or excretions be cleaned and disinfected after use. The same guideline recommends that, in addition to cleaning, disinfection of bedside equipment and environmental surfaces (e.g., bedrails, bedside tables, carts, commodes, doorknobs, and faucet handles) is indicated for certain pathogens (e.g., enterococci) that can survive in the inanimate environment for prolonged periods 386 . Fourth, OSHA requires that surfaces contaminated with blood and other potentially infectious materials (e.g., amniotic and pleural fluids) be disinfected. Fifth, using a single product throughout the facility can simplify both training and appropriate practice.

Reasons also exist for using a detergent alone on floors because noncritical surfaces contribute minimally to endemic health-care-associated infections 387 , and no differences have been found in healthcare-associated infection rates when floors are cleaned with detergent rather than disinfectant 382,388,389 . However, these studies have been small and of short duration and suffer from low statistical power because the outcome-healthcare-associated infections-is of low frequency. The low rate of infections makes the efficacy of an intervention statistically difficult to demonstrate. Because housekeeping surfaces are associated with the lowest risk for disease transmission, some researchers have suggested that either detergents or a disinfectant/detergent could be used 376 . No data exist that show reduced healthcare-associated infection rates with use of surface disinfection of floors, but some data demonstrate reduced microbial load associated with the use of disinfectants. Given this information; other information showing that environmental surfaces (e.g., bedside tables, bed rails) close to the patient and in outpatient settings 390 can be contaminated with epidemiologically important microbes (such as VRE and MRSA) 47 ; and data showing that these organisms survive on various hospital surfaces 395,396 ; some researchers have suggested that such surfaces should be disinfected on a regular schedule 378 . Spot decontamination of fabrics that remain in hospital or clinic rooms while patients move in and out (e.g., privacy curtains) also should be considered. One study demonstrated the effectiveness of spraying the fabric with 3% hydrogen peroxide 397 . Future studies should evaluate the level of contamination on noncritical environmental surfaces as a function of high and low hand contact and whether some surfaces (e.g., bed rails) near the patient with high contact frequencies require more frequent disinfection.

Regardless of whether a detergent or disinfectant is used on surfaces in a health-care facility, surfaces should be cleaned routinely and when dirty or soiled to provide an aesthetically pleasing environment and to prevent potentially contaminated objects from serving as a source for health-care-associated infections 398 . The value of designing surfaces (e.g., hexyl-polyvinylpyridine) that kill bacteria on contact 399 or have sustained antimicrobial activity 400 should be further evaluated.

Several investigators have recognized heavy microbial contamination of wet mops and cleaning cloths and the potential for spread of such contamination 68,401 . They have shown that wiping hard surfaces with contaminated cloths can contaminate hands, equipment, and other surfaces 68,402 .
Data have been published that can be used to formulate effective policies for decontamination and maintenance of reusable cleaning cloths. For example, heat was the most reliable treatment of cleaning cloths: detergent washing followed by drying at 80°C for 2 hours eliminated contamination. However, the dry heating process might be a fire hazard if the mop head contains petroleum-based products or if lint builds up within the equipment or vent hose (American Health Care Association, personal communication, March 2003). Alternatively, immersing the cloth in hypochlorite (4,000 ppm) for 2 minutes produced no detectable surviving organisms in 10 of 13 cloths 403 . If reusable cleaning cloths or mops are used, they should be decontaminated regularly to prevent surface contamination during cleaning and the subsequent transfer of organisms from these surfaces to patients or equipment by the hands of health-care workers.

Some hospitals have begun using a new mopping technique involving microfiber materials to clean floors. Microfibers are densely constructed polyester and polyamide (nylon) fibers that are approximately 1/16 the thickness of a human hair. The positively charged microfibers attract dust (which has a negative charge) and are more absorbent than a conventional cotton-loop mop. Microfiber materials also can be wetted with disinfectants, such as quaternary ammonium compounds. In one study, the microfiber system tested demonstrated superior microbial removal compared with conventional string mops when used with a detergent cleaner (94% vs. 68%). The use of a disinfectant did not improve the microbial elimination demonstrated by the microfiber system (95% vs. 94%). However, use of a disinfectant significantly improved microbial removal when a conventional string mop was used (95% vs. 68%) (WA Rutala, unpublished data, August 2006). The microfiber system also prevents the possibility of transferring microbes from room to room because a new microfiber pad is used in each room.

An important issue concerning the use of disinfectants for noncritical surfaces in health-care settings is that the contact time specified on the label of the product is often too long to be followed in practice. The labels of most products registered by EPA for use against HBV, HIV, or M. tuberculosis specify a contact time of 10 minutes. Such a long contact time is not practical for disinfection of environmental surfaces in a health-care setting because most health-care facilities apply a disinfectant and allow it to dry (~1 minute). Multiple scientific papers have demonstrated significant microbial reduction with contact times of 30 to 60 seconds. In addition, EPA will approve a shortened contact time for any product for which the manufacturer submits confirmatory efficacy data. Currently, some EPA-registered disinfectants have contact times of one to three minutes. By law, users must follow all applicable label instructions for EPA-registered products. Ideally, product users should consider and use products that have the shortened contact time. However, disinfectant manufacturers also need to obtain EPA approval for shortened contact times so these products will be used correctly and effectively in the health-care environment.

# Air Disinfection

Disinfectant spray-fog techniques for antimicrobial control in hospital rooms have been used.
This technique of spraying disinfectants is an unsatisfactory method of decontaminating air and surfaces and is not recommended for general infection control in routine patient-care areas 386 . Disinfectant fogging is rarely, if ever, used in U.S. healthcare facilities for air and surface disinfection in patient-care areas. Methods (e.g., filtration, ultraviolet germicidal irradiation, chlorine dioxide) to reduce air contamination in the healthcare setting are discussed in another guideline 23 .

# Microbial Contamination of Disinfectants

Contaminated disinfectants and antiseptics have been occasional vehicles of health-care-associated infections and pseudoepidemics for more than 50 years. Published reports describing contaminated disinfectants and antiseptic solutions leading to health-care-associated infections have been summarized 404 . Since this summary, additional reports have been published. An examination of reports of disinfectants contaminated with microorganisms revealed noteworthy observations. Perhaps most importantly, high-level disinfectants/liquid chemical sterilants have not been associated with outbreaks due to intrinsic or extrinsic contamination. Members of the genus Pseudomonas (e.g., P. aeruginosa) are the most frequent isolates from contaminated disinfectants, having been recovered from 80% of contaminated products. Their ability to remain viable or grow in use-dilutions of disinfectants is unparalleled. This survival advantage for Pseudomonas results presumably from their nutritional versatility, their unique outer membrane that constitutes an effective barrier to the passage of germicides, and/or efflux systems 409 . Although the concentrated solutions of the disinfectants have not been demonstrated to be contaminated at the point of manufacture, an undiluted phenolic can become contaminated by a Pseudomonas sp. during use 410 . In most of the reports that describe illness associated with contaminated disinfectants, the product was used to disinfect patient-care equipment, such as cystoscopes, cardiac catheters, and thermometers. Germicides used as disinfectants that were reported to have been contaminated include chlorhexidine, quaternary ammonium compounds, phenolics, and pine oil.

The following control measures should be instituted to reduce the frequency of bacterial growth in disinfectants and the threat of serious healthcare-associated infections from the use of such contaminated products 404 . First, some disinfectants should not be diluted; those that are diluted must be prepared correctly to achieve the manufacturers' recommended use-dilution. Second, infection-control professionals must learn from the literature what inappropriate activities result in extrinsic contamination (i.e., at the point of use) of germicides and train users to prevent recurrence. Common sources of extrinsic contamination of germicides in the reviewed literature are the water used to make working dilutions, contaminated containers, and general contamination of the hospital areas where the germicides are prepared and/or used. Third, stock solutions of germicides must be stored as indicated on the product label. EPA verifies manufacturers' efficacy claims against microorganisms. These measures should provide assurance that products meeting the EPA registration requirements can achieve a certain level of antimicrobial activity when used as directed.
# Factors Affecting the Efficacy of Disinfection and Sterilization

The activity of germicides against microorganisms depends on a number of factors, some of which are intrinsic qualities of the organism and others of which relate to the chemical and external physical environment. Awareness of these factors should lead to better use of disinfection and sterilization processes; they are briefly reviewed below. More extensive consideration of these and other factors is available elsewhere 13,14,16, .

# Number and Location of Microorganisms

All other conditions remaining constant, the larger the number of microbes, the more time a germicide needs to destroy all of them. Spaulding illustrated this relation when he employed identical test conditions and demonstrated that it took 30 minutes to kill 10 B. atrophaeus (formerly Bacillus subtilis) spores but 3 hours to kill 100,000 B. atrophaeus spores. This reinforces the need for scrupulous cleaning of medical instruments before disinfection and sterilization. Reducing the number of microorganisms that must be inactivated, through meticulous cleaning, increases the margin of safety when the germicide is used according to the labeling and shortens the exposure time required to kill the entire microbial load. Researchers also have shown that aggregated or clumped cells are more difficult to inactivate than monodispersed cells 414 .

The location of microorganisms also must be considered when factors affecting the efficacy of germicides are assessed. Medical instruments with multiple pieces must be disassembled, and equipment such as endoscopes that have crevices, joints, and channels are more difficult to disinfect than flat-surface equipment because penetration of the disinfectant to all parts of the equipment is more difficult. Only surfaces that directly contact the germicide will be disinfected, so there must be no air pockets and the equipment must be completely immersed for the entire exposure period. Manufacturers should be encouraged to produce equipment engineered for ease of cleaning and disinfection.

# Innate Resistance of Microorganisms

Microorganisms vary greatly in their resistance to chemical germicides and sterilization processes (Figure 1) 342 . Intrinsic resistance mechanisms in microorganisms to disinfectants vary. For example, spores are resistant to disinfectants because the spore coat and cortex act as a barrier, mycobacteria have a waxy cell wall that prevents disinfectant entry, and gram-negative bacteria possess an outer membrane that acts as a barrier to the uptake of disinfectants 341, . Implicit in all disinfection strategies is the consideration that the most resistant microbial subpopulation controls the sterilization or disinfection time. That is, to destroy the most resistant types of microorganisms (i.e., bacterial spores), the user needs to employ exposure times and a concentration of germicide sufficient to achieve complete destruction. Except for prions, bacterial spores possess the highest innate resistance to chemical germicides, followed by coccidia (e.g., Cryptosporidium), mycobacteria (e.g., M. tuberculosis), nonlipid or small viruses (e.g., poliovirus and coxsackievirus), fungi (e.g., Aspergillus and Candida), vegetative bacteria (e.g., Staphylococcus and Pseudomonas), and lipid or medium-size viruses (e.g., herpes and HIV). The germicidal resistance exhibited by gram-positive and gram-negative bacteria is similar, with some exceptions (e.g., P. aeruginosa, which shows greater resistance to some disinfectants) 369,415,416 . P. aeruginosa also is significantly more resistant to a variety of disinfectants in its "naturally occurring" state than are cells subcultured on laboratory media 415,417 . Rickettsiae, chlamydiae, and mycoplasmas cannot be placed in this scale of relative resistance because information about the efficacy of germicides against these agents is limited 418 . Because these microorganisms contain lipid and are similar in structure and composition to other bacteria, they can be predicted to be inactivated by the same germicides that destroy lipid viruses and vegetative bacteria. A known exception to this supposition is Coxiella burnetii, which has demonstrated resistance to disinfectants 419 .

# Concentration and Potency of Disinfectants

With other variables constant, and with one exception (iodophors), the more concentrated the disinfectant, the greater its efficacy and the shorter the time necessary to achieve microbial kill. Generally not recognized, however, is that not all disinfectants are similarly affected by concentration adjustments. For example, quaternary ammonium compounds and phenol have concentration exponents of 1 and 6, respectively; thus, halving the concentration of a quaternary ammonium compound requires doubling its disinfecting time, but halving the concentration of a phenol solution requires a 64-fold (i.e., 2⁶) increase in its disinfecting time 365,413,420 . Considering the length of the disinfection time, which depends on the potency of the germicide, also is important. This was illustrated by Spaulding, who demonstrated using the mucin-loop test that 70% isopropyl alcohol destroyed 10⁴ M. tuberculosis in 5 minutes, whereas a simultaneous test with a 3% phenolic required 2-3 hours to achieve the same level of microbial kill 14 .
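The two quantitative points above (kill time scales with the log of the initial bioburden, and with concentration through the concentration exponent) can be made concrete with a minimal sketch. The simplified first-order model, the hypothetical D-value, and the example times below are illustrative assumptions, not values from the cited studies.

```python
import math

# Minimal sketch of two rules of thumb; illustrative only.
# (1) Simplified first-order kill kinetics: time scales with the log10 of the
#     initial bioburden (D = minutes for a 1-log10 reduction; hypothetical).
# (2) Concentration exponent n: t2 = t1 * (c1 / c2) ** n, so halving a
#     phenolic (n = 6) multiplies the required time by 2**6 = 64, while
#     halving a quaternary ammonium compound (n = 1) merely doubles it.

def kill_time(d_value_min: float, n0: float, n_survivors: float = 1.0) -> float:
    """Minutes to reduce n0 organisms to n_survivors, given a D-value."""
    return d_value_min * (math.log10(n0) - math.log10(n_survivors))

def adjusted_time(t1_min: float, c1: float, c2: float, conc_exponent: float) -> float:
    """Exposure time at concentration c2, given time t1 at concentration c1."""
    return t1_min * (c1 / c2) ** conc_exponent

print(kill_time(d_value_min=30, n0=1e5))                    # 5 logs -> 150 min
print(adjusted_time(10, c1=1.0, c2=0.5, conc_exponent=6))   # 10 min -> 640 min
print(adjusted_time(10, c1=1.0, c2=0.5, conc_exponent=1))   # 10 min -> 20 min
```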
# Physical and Chemical Factors

Several physical and chemical factors also influence disinfectant procedures: temperature, pH, relative humidity, and water hardness. For example, the activity of most disinfectants increases as the temperature increases, but some exceptions exist. Furthermore, too great an increase in temperature causes the disinfectant to degrade, weakening its germicidal activity, and thus might produce a potential health hazard. An increase in pH improves the antimicrobial activity of some disinfectants (e.g., glutaraldehyde, quaternary ammonium compounds) but decreases the antimicrobial activity of others (e.g., phenols, hypochlorites, and iodine). The pH influences the antimicrobial activity by altering the disinfectant molecule or the cell surface 413 . Relative humidity is the single most important factor influencing the activity of gaseous disinfectants/sterilants, such as EtO, chlorine dioxide, and formaldehyde. Water hardness (i.e., a high concentration of divalent cations) reduces the rate of kill of certain disinfectants because divalent cations (e.g., magnesium, calcium) in the hard water interact with the disinfectant to form insoluble precipitates 13,421 .

# Organic and Inorganic Matter

Organic matter in the form of serum, blood, pus, or fecal or lubricant material can interfere with the antimicrobial activity of disinfectants in at least two ways. Most commonly, interference occurs by a chemical reaction between the germicide and the organic matter, resulting in a complex that is less germicidal or nongermicidal and leaving less of the active germicide available for attacking microorganisms.
Chlorine and iodine disinfectants, in particular, are prone to such interaction. Alternatively, organic material can protect microorganisms from attack by acting as a physical barrier 422,423 . The effects of inorganic contaminants on the sterilization process were studied during the 1950s and 1960s 424,425 . These and other studies show that the protection of microorganisms by inorganic contaminants against all sterilization processes results from occlusion in salt crystals 426,427 . This further emphasizes the importance of meticulous cleaning of medical devices before any sterilization or disinfection procedure because both organic and inorganic soils are easily removed by washing 426 .

# Duration of Exposure

Items must be exposed to the germicide for the appropriate minimum contact time. Multiple investigators have demonstrated the effectiveness of low-level disinfectants against vegetative bacteria (e.g., Listeria, E. coli, Salmonella, VRE, MRSA), yeasts (e.g., Candida), mycobacteria (e.g., M. tuberculosis), and viruses (e.g., poliovirus) at exposure times of 30-60 seconds. By law, all applicable label instructions on EPA-registered products must be followed. If the user selects exposure conditions that differ from those on the EPA-registered product label, the user assumes liability for any injuries resulting from off-label use and is potentially subject to enforcement action under the Federal Insecticide, Fungicide, and Rodenticide Act (FIFRA).

All lumens and channels of endoscopic instruments must contact the disinfectant. Air pockets interfere with the disinfection process, and items that float on the disinfectant will not be disinfected. The disinfectant must be introduced reliably into the internal channels of the device. The exact times for disinfecting medical items are somewhat elusive because of the effect of the aforementioned factors on disinfection efficacy. Certain contact times have proved reliable (Table 1), but, in general, longer contact times are more effective than shorter contact times.

# Biofilms

Microorganisms may be protected from disinfectants by production of thick masses of cells 428 and extracellular materials, or biofilms. Biofilms are microbial communities that are tightly attached to surfaces and cannot be easily removed. Once these masses form, microbes within them can be resistant to disinfectants by multiple mechanisms, including the physical characteristics of older biofilms, genotypic variation of the bacteria, microbial production of neutralizing enzymes, and physiologic gradients within the biofilm (e.g., pH). Bacteria within biofilms are up to 1,000 times more resistant to antimicrobials than are the same bacteria in suspension 436 . Although new decontamination methods 437 are being investigated for removing biofilms, chlorine and monochloramines can effectively inactivate biofilm bacteria 431,438 . Investigators have hypothesized that the glycocalyx-like cellular masses on the interior walls of polyvinyl chloride pipe would protect embedded organisms from some disinfectants and be a reservoir for continuous contamination 429,430,439 . Biofilms have been found in whirlpools 440 , dental unit waterlines 441 , and numerous medical devices (e.g., contact lenses, pacemakers, hemodialysis systems, urinary catheters, central venous catheters, endoscopes) 434,436,438,442 . Their presence can have serious implications for immunocompromised patients and patients who have indwelling medical devices.
Some enzymes 436,443,444 and detergents 436 can degrade biofilms or reduce the numbers of viable bacteria within a biofilm, but no products are EPA-registered or FDA-cleared for this purpose.

# Cleaning

Cleaning is the removal of foreign material (e.g., soil and organic material) from objects and is normally accomplished using water with detergents or enzymatic products. Thorough cleaning is required before high-level disinfection and sterilization because inorganic and organic materials that remain on the surfaces of instruments interfere with the effectiveness of these processes. Also, if soiled materials dry or bake onto the instruments, the removal process becomes more difficult and the disinfection or sterilization process less effective or ineffective. Surgical instruments should be presoaked or rinsed to prevent drying of blood and to soften or remove blood from the instruments.

Cleaning is done manually in use areas without mechanical units (e.g., ultrasonic cleaners or washer-disinfectors) or for fragile or difficult-to-clean instruments. With manual cleaning, the two essential components are friction and fluidics. Friction (e.g., rubbing/scrubbing the soiled area with a brush) is an old and dependable method. Fluidics (i.e., fluids under pressure) is used to remove soil and debris from internal channels after brushing and when the design does not allow passage of a brush through a channel 445 . When a washer-disinfector is used, care should be taken in loading instruments: hinged instruments should be opened fully to allow adequate contact with the detergent solution, stacking of instruments in washers should be avoided, and instruments should be disassembled as much as possible.

The most common types of mechanical or automatic cleaners are ultrasonic cleaners, washer-decontaminators, washer-disinfectors, and washer-sterilizers. Ultrasonic cleaning removes soil by cavitation and implosion, in which waves of acoustic energy are propagated in aqueous solutions to disrupt the bonds that hold particulate matter to surfaces. Bacterial contamination can be present in used ultrasonic cleaning solutions (and other used detergent solutions) because these solutions generally do not make antibacterial label claims 446 . Even though ultrasound alone does not significantly inactivate bacteria, sonication can act synergistically to increase the cidal efficacy of a disinfectant 447 . Users of ultrasonic cleaners should be aware that the cleaning fluid could result in endotoxin contamination of surgical instruments, which could cause severe inflammatory reactions 448 . Washer-sterilizers are modified steam sterilizers that clean by filling the chamber with water and detergent through which steam passes to provide agitation. Instruments are subsequently rinsed and subjected to a short steam-sterilization cycle. Another washer-sterilizer employs rotating spray arms for a wash cycle followed by a steam-sterilization cycle at 285°F 449,450 . Washer-decontaminators/disinfectors act like a dishwasher that uses a combination of water circulation and detergents to remove soil. These units sometimes have a cycle that subjects the instruments to a heat process (e.g., 93°C for 10 minutes) 451 . Washer-disinfectors are generally computer-controlled units for cleaning, disinfecting, and drying solid and hollow surgical and medical equipment. In one study, cleaning (measured as a 5-6 log10 reduction) was achieved on surfaces that had adequate contact with the water flow in the machine 452 .
Detailed information about cleaning and preparing supplies for terminal sterilization is provided by professional organizations 453,454 and books 455. Studies have shown that manual and mechanical cleaning of endoscopes achieves approximately a 4-log10 reduction of contaminating organisms 83,104,456,457. Thus, cleaning alone effectively reduces the number of microorganisms on contaminated equipment. In a quantitative analysis of residual protein contamination of reprocessed surgical instruments, median levels of residual protein contamination per instrument for five trays were 267, 260, 163, 456, and 756 µg 458. In another study, the median amount of protein from reprocessed surgical instruments from different hospitals ranged from 8 µg to 91 µg 459. When manual methods were compared with automated methods for cleaning reusable accessory devices used for minimally invasive surgical procedures, the automated method was more efficient for cleaning biopsy forceps and ported and nonported laparoscopic devices and achieved a >99% reduction in soil parameters (i.e., protein, carbohydrate, hemoglobin) in the ported and nonported laparoscopic devices 460,461.

For instrument cleaning, a neutral or near-neutral pH detergent solution commonly is used because such solutions generally provide the best material compatibility profile and good soil removal. Enzymes, usually proteases, sometimes are added to neutral pH solutions to assist in removing organic material. Enzymes in these formulations attack proteins that make up a large portion of common soil (e.g., blood, pus). Cleaning solutions also can contain lipases (enzymes active on fats) and amylases (enzymes active on starches). Enzymatic cleaners are not disinfectants, and proteinaceous enzymes can be inactivated by germicides. As with all chemicals, enzymes must be rinsed from the equipment or adverse reactions (e.g., fever, residual amounts of high-level disinfectants, proteinaceous residue) could result 462,463. Enzyme solutions should be used in accordance with the manufacturer's instructions, which include proper dilution of the enzymatic detergent and contact with equipment for the amount of time specified on the label 463. Detergent enzymes can result in asthma or other allergic effects in users. Neutral pH detergent solutions that contain enzymes are compatible with metals and other materials used in medical instruments and are the best choice for cleaning delicate medical instruments, especially flexible endoscopes 457.

Alkaline-based cleaning agents are used for processing medical devices because they efficiently dissolve protein and fat residues 464; however, they can be corrosive 457. Some data demonstrate that enzymatic cleaners are more effective than neutral detergents 465,466 in removing microorganisms from surfaces, but two more recent studies found no difference in cleaning efficiency between enzymatic and alkaline-based cleaners 443,464. Another study found no significant difference between enzymatic and non-enzymatic cleaners in terms of microbial cleaning efficacy 467. A new non-enzyme, hydrogen peroxide-based formulation (not FDA-cleared) was as effective as enzymatic cleaners in removing protein, blood, carbohydrate, and endotoxin from surface test carriers 468. In addition, this product effected a 5-log10 reduction in microbial loads with a 3-minute exposure at room temperature 468.
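Because cleaning and disinfection efficacy throughout this section is expressed as log10 reductions, the short calculation below shows how that unit relates to percentage reduction. This is an illustrative sketch only; the starting and ending counts are hypothetical, not values from the cited studies.

```python
import math

def log10_reduction(initial_count: float, final_count: float) -> float:
    """Log10 reduction = log10(N0 / N), the unit used throughout this section."""
    return math.log10(initial_count / final_count)

def percent_reduction(log_reduction: float) -> float:
    """Equivalent percentage of organisms removed for a given log10 reduction."""
    return (1 - 10 ** -log_reduction) * 100

# Hypothetical example: cleaning reduces 10^6 contaminating organisms to 10^2.
print(log10_reduction(1e6, 1e2))   # 4.0 -> a "4-log10 reduction"
print(percent_reduction(4))        # 99.99 -> 99.99% of organisms removed
print(percent_reduction(2))        # 99.0  -> a 2-log10 reduction equals a 99% reduction
```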
Although the effectiveness of high-level disinfection and sterilization mandates effective cleaning, no "real-time" tests exist that can be employed in a clinical setting to verify cleaning. If such tests were commercially available, they could be used to ensure an adequate level of cleaning. The only way to ensure adequate cleaning is to conduct a reprocessing verification test (e.g., microbiologic sampling), but this is not routinely recommended 473. Validation of the cleaning processes in a laboratory-testing program is possible by microorganism detection, chemical detection for organic contaminants, radionuclide tagging, and chemical detection for specific ions 426,471. During the past few years, data have been published describing the use of an artificial soil, protein, endotoxin, X-ray contrast medium, or blood to verify the manual or automated cleaning process 169,452, and adenosine triphosphate bioluminescence and microbiologic sampling to evaluate the effectiveness of environmental surface cleaning 170,479. At a minimum, all instruments should be individually inspected and be visibly clean.

# Disinfection

Many disinfectants are used alone or in combinations (e.g., hydrogen peroxide and peracetic acid) in the health-care setting. These include alcohols, chlorine and chlorine compounds, formaldehyde, glutaraldehyde, ortho-phthalaldehyde, hydrogen peroxide, iodophors, peracetic acid, phenolics, and quaternary ammonium compounds. Commercial formulations based on these chemicals are considered unique products and must be registered with EPA or cleared by FDA. In most instances, a given product is designed for a specific purpose and is to be used in a certain manner. Therefore, users should read labels carefully to ensure the correct product is selected for the intended use and applied efficiently. Disinfectants are not interchangeable, and incorrect concentrations and inappropriate disinfectants can result in excessive costs.

Because occupational diseases among cleaning personnel have been associated with use of several disinfectants (e.g., formaldehyde, glutaraldehyde, and chlorine), precautions (e.g., gloves and proper ventilation) should be used to minimize exposure 318,480,481. Asthma and reactive airway disease can occur in sensitized persons exposed to any airborne chemical, including germicides. Clinically important asthma can occur at levels below ceiling levels regulated by OSHA or recommended by NIOSH. The preferred method of control is elimination of the chemical (through engineering controls or substitution) or relocation of the worker. The following overview of the performance characteristics of each disinfectant provides users with sufficient information to select an appropriate disinfectant for any item and use it in the most efficient way.

# Chemical Disinfectants

# Alcohol

Overview. In the healthcare setting, "alcohol" refers to two water-soluble chemical compounds, ethyl alcohol and isopropyl alcohol, that have generally underrated germicidal characteristics 482. FDA has not cleared any liquid chemical sterilant or high-level disinfectant with alcohol as the main active ingredient. These alcohols are rapidly bactericidal rather than bacteriostatic against vegetative forms of bacteria; they also are tuberculocidal, fungicidal, and virucidal but do not destroy bacterial spores. Their cidal activity drops sharply when diluted below 50% concentration, and the optimum bactericidal concentration is 60%-90% solutions in water (volume/volume) 483,484.

# Mode of Action.
The most feasible explanation for the antimicrobial action of alcohol is denaturation of proteins. This mechanism is supported by the observation that absolute ethyl alcohol, a dehydrating agent, is less bactericidal than mixtures of alcohol and water because proteins are denatured more quickly in the presence of water 484,485. Protein denaturation also is consistent with observations that alcohol destroys the dehydrogenases of Escherichia coli 486, that ethyl alcohol increases the lag phase of Enterobacter aerogenes 487, and that the lag phase effect could be reversed by adding certain amino acids. The bacteriostatic action was believed to be caused by inhibition of the production of metabolites essential for rapid cell division.

Microbicidal Activity. Methyl alcohol (methanol) has the weakest bactericidal action of the alcohols and thus seldom is used in healthcare 488. The bactericidal activity of various concentrations of ethyl alcohol (ethanol) was examined against a variety of microorganisms in exposure periods ranging from 10 seconds to 1 hour 483. Pseudomonas aeruginosa was killed in 10 seconds by all concentrations of ethanol from 30% to 100% (v/v), and Serratia marcescens, E. coli, and Salmonella typhosa were killed in 10 seconds by all concentrations of ethanol from 40% to 100%. The gram-positive organisms Staphylococcus aureus and Streptococcus pyogenes were slightly more resistant, being killed in 10 seconds by ethyl alcohol concentrations of 60%-95%. Isopropyl alcohol (isopropanol) was slightly more bactericidal than ethyl alcohol for E. coli and S. aureus 489.

Ethyl alcohol, at concentrations of 60%-80%, is a potent virucidal agent inactivating all of the lipophilic viruses (e.g., herpes, vaccinia, and influenza virus) and many hydrophilic viruses (e.g., adenovirus, enterovirus, rhinovirus, and rotaviruses but not hepatitis A virus (HAV) 58 or poliovirus) 49. Isopropyl alcohol is not active against the nonlipid enteroviruses but is fully active against the lipid viruses 72. Studies also have demonstrated the ability of ethyl and isopropyl alcohol to inactivate the hepatitis B virus (HBV) 224,225 and the herpes virus 490, and of ethyl alcohol to inactivate human immunodeficiency virus (HIV) 227, rotavirus, echovirus, and astrovirus 491.

In tests of the effect of ethyl alcohol against M. tuberculosis, 95% ethanol killed the tubercle bacilli in sputum or water suspension within 15 seconds 492. In 1964, Spaulding stated that alcohols were the germicide of choice for tuberculocidal activity, and they should be the standard by which all other tuberculocides are compared. For example, he compared the tuberculocidal activity of iodophor (450 ppm), a substituted phenol (3%), and isopropanol (70%/volume) using the mucin-loop test (10^6 M. tuberculosis per loop) and determined the contact times needed for complete destruction were 120-180 minutes, 45-60 minutes, and 5 minutes, respectively. The mucin-loop test is a severe test developed to produce long survival times. Thus, these figures should not be extrapolated to the exposure times needed when these germicides are used on medical or surgical material 482.

Ethyl alcohol (70%) was the most effective concentration for killing the tissue phase of Cryptococcus neoformans, Blastomyces dermatitidis, Coccidioides immitis, and Histoplasma capsulatum and the culture phases of the latter three organisms aerosolized onto various surfaces.
The culture phase was more resistant to the action of ethyl alcohol and required about 20 minutes to disinfect the contaminated surface, compared with <1 minute for the tissue phase 493,494. Isopropyl alcohol (20%) is effective in killing the cysts of Acanthamoeba culbertsoni 560, as are chlorhexidine, hydrogen peroxide, and thimerosal 496.

Uses. Alcohols are not recommended for sterilizing medical and surgical materials, principally because they lack sporicidal action and they cannot penetrate protein-rich materials. Fatal postoperative wound infections with Clostridium have occurred when alcohols were used to sterilize surgical instruments contaminated with bacterial spores 497. Alcohols have been used effectively to disinfect oral and rectal thermometers 498,499, hospital pagers 500, scissors 501, and stethoscopes 502. Alcohols have been used to disinfect fiberoptic endoscopes 503,504, but failures of this disinfectant have led to infection 280,505. Alcohol towelettes have been used for years to disinfect small surfaces such as rubber stoppers of multiple-dose medication vials or vaccine bottles. Furthermore, alcohol occasionally is used to disinfect external surfaces of equipment (e.g., stethoscopes, ventilators, manual ventilation bags) 506, CPR manikins 507, ultrasound instruments 508, or medication preparation areas. Two studies demonstrated the effectiveness of 70% isopropyl alcohol to disinfect reusable transducer heads in a controlled environment 509,510. In contrast, three bloodstream infection outbreaks have been described when alcohol was used to disinfect transducer heads in an intensive-care setting 511.

The documented shortcomings of alcohols on equipment are that they damage the shellac mountings of lensed instruments, tend to swell and harden rubber and certain plastic tubing after prolonged and repeated use, bleach rubber and plastic tiles 482, and damage tonometer tips (by deterioration of the glue) after the equivalent of 1 working year of routine use 512. Tonometer biprisms soaked in alcohol for 4 days developed rough front surfaces that potentially could cause corneal damage; this appeared to be caused by weakening of the cementing substances used to fabricate the biprisms 513. Corneal opacification has been reported when tonometer tips were swabbed with alcohol immediately before measurement of intraocular pressure 514. Alcohols are flammable and consequently must be stored in a cool, well-ventilated area. They also evaporate rapidly, making extended exposure time difficult to achieve unless the items are immersed.

# Chlorine and Chlorine Compounds

Overview. Hypochlorites, the most widely used of the chlorine disinfectants, are available as liquid (e.g., sodium hypochlorite) or solid (e.g., calcium hypochlorite). The most prevalent chlorine products in the United States are aqueous solutions of 5.25%-6.15% sodium hypochlorite (see glossary), usually called household bleach. They have a broad spectrum of antimicrobial activity, do not leave toxic residues, are unaffected by water hardness, are inexpensive and fast acting 328, remove dried or fixed organisms and biofilms from surfaces 465, and have a low incidence of serious toxicity. Sodium hypochlorite at the concentration used in household bleach (5.25%-6.15%) can produce ocular irritation or oropharyngeal, esophageal, and gastric burns 318.
Other disadvantages of hypochlorites include corrosiveness to metals in high concentrations (>500 ppm), inactivation by organic matter, discoloring or "bleaching" of fabrics, release of toxic chlorine gas when mixed with ammonia or acid (e.g., household cleaning agents), and relative stability 327. The microbicidal activity of chlorine is attributed largely to undissociated hypochlorous acid (HOCl). The dissociation of HOCl to the less microbicidal form (hypochlorite ion, OCl⁻) depends on pH. The disinfecting efficacy of chlorine decreases with an increase in pH, which parallels the conversion of undissociated HOCl to OCl⁻ 329,526. A potential hazard is production of the carcinogen bis(chloromethyl) ether when hypochlorite solutions contact formaldehyde 527 and the production of the animal carcinogen trihalomethane when hot water is hyperchlorinated 528. After reviewing environmental fate and ecologic data, EPA has determined the currently registered uses of hypochlorites will not result in unreasonable adverse effects to the environment 529.

Alternative compounds that release chlorine and are used in the health-care setting include demand-release chlorine dioxide, sodium dichloroisocyanurate, and chloramine-T. The advantage of these compounds over the hypochlorites is that they retain chlorine longer and so exert a more prolonged bactericidal effect. Sodium dichloroisocyanurate tablets are stable, and for two reasons, the microbicidal activity of solutions prepared from sodium dichloroisocyanurate tablets might be greater than that of sodium hypochlorite solutions containing the same total available chlorine. First, with sodium dichloroisocyanurate, only 50% of the total available chlorine is free (HOCl and OCl⁻), whereas the remainder is combined (monochloroisocyanurate or dichloroisocyanurate), and as free available chlorine is used up, the latter is released to restore the equilibrium. Second, solutions of sodium dichloroisocyanurate are acidic, whereas sodium hypochlorite solutions are alkaline, and the more microbicidal type of chlorine (HOCl) is believed to predominate.

Chlorine dioxide-based disinfectants are prepared fresh as required by mixing the two components (base solution and the activator solution). In vitro suspension tests showed that solutions containing about 140 ppm chlorine dioxide achieved a reduction factor exceeding 10^6 of S. aureus in 1 minute and of Bacillus atrophaeus spores in 2.5 minutes in the presence of 3 g/L bovine albumin. The potential for damaging equipment requires consideration because long-term use can damage the outer plastic coat of the insertion tube 534. In another study, chlorine dioxide solutions at either 600 ppm or 30 ppm killed Mycobacterium avium-intracellulare within 60 seconds after contact, but contamination by organic material significantly affected the microbicidal properties 535.

The microbicidal activity of a new disinfectant, "superoxidized water," has been examined. The concept of electrolyzing saline to create a disinfectant or antiseptic is appealing because the basic materials of saline and electricity are inexpensive and the end product (i.e., water) does not damage the environment. The main products of this water are hypochlorous acid (e.g., at a concentration of about 144 mg/L) and chlorine. As with any germicide, the antimicrobial activity of superoxidized water is strongly affected by the concentration of the active ingredient (available free chlorine) 536.
One manufacturer generates the disinfectant at the point of use by passing a saline solution over coated titanium electrodes at 9 amps. The product generated has a pH of 5.0-6.5 and an oxidation-reduction (redox) potential of >950 mV. Although superoxidized water is intended to be generated fresh at the point of use, when tested under clean conditions the disinfectant was effective within 5 minutes when 48 hours old 537. Unfortunately, the equipment required to produce the product can be expensive because parameters such as pH, current, and redox potential must be closely monitored. The solution is nontoxic to biologic tissues. Although the United Kingdom manufacturer claims the solution is noncorrosive and nondamaging to endoscopes and processing equipment, one flexible endoscope manufacturer (Olympus Key-Med, United Kingdom) has voided the warranty on the endoscopes if superoxidized water is used to disinfect them 538. As with any germicide formulation, the user should check with the device manufacturer for compatibility with the germicide. Additional studies are needed to determine whether this solution could be used as an alternative to other disinfectants or antiseptics for hand washing, skin antisepsis, room cleaning, or equipment disinfection (e.g., endoscopes, dialyzers) 400,539,540. In October 2002, the FDA cleared superoxidized water as a high-level disinfectant (FDA, personal communication, September 18, 2002).

# Mode of Action.

The exact mechanism by which free chlorine destroys microorganisms has not been elucidated. Inactivation by chlorine can result from a number of factors: oxidation of sulfhydryl enzymes and amino acids; ring chlorination of amino acids; loss of intracellular contents; decreased uptake of nutrients; inhibition of protein synthesis; decreased oxygen uptake; oxidation of respiratory components; decreased adenosine triphosphate production; breaks in DNA; and depressed DNA synthesis 329,347. The actual microbicidal mechanism of chlorine might involve a combination of these factors or the effect of chlorine on critical sites 347.

Microbicidal Activity. Low concentrations of free available chlorine (e.g., HOCl, OCl⁻, and elemental chlorine, Cl2) have a biocidal effect on mycoplasma (25 ppm) and vegetative bacteria (<5 ppm) in seconds in the absence of an organic load 329,418. Higher concentrations (1,000 ppm) of chlorine are required to kill M. tuberculosis using the Association of Official Analytical Chemists (AOAC) tuberculocidal test 73. A concentration of 100 ppm will kill ≥99.9% of B. atrophaeus spores within 5 minutes 541,542 and destroy mycotic agents in <1 hour 329. Acidified bleach and regular bleach (5,000 ppm chlorine) can inactivate 10^6 Clostridium difficile spores in ≤10 minutes 262. One study reported that 25 different viruses were inactivated in 10 minutes with 200 ppm available chlorine 72. Several studies have demonstrated the effectiveness of diluted sodium hypochlorite and other disinfectants to inactivate HIV 61. Chlorine (500 ppm) showed inhibition of Candida after 30 seconds of exposure 54. In experiments using the AOAC Use-Dilution Method, 100 ppm of free chlorine killed 10^6-10^7 S. aureus, Salmonella choleraesuis, and P. aeruginosa in <10 minutes 327. Because household bleach contains 5.25%-6.15% sodium hypochlorite, or 52,500-61,500 ppm available chlorine, a 1:1,000 dilution provides about 53-62 ppm available chlorine, and a 1:10 dilution of household bleach provides about 5,250-6,150 ppm.
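The dilution arithmetic above, together with the pH dependence of the HOCl/OCl⁻ equilibrium described earlier in this section, can be made concrete with a short calculation. This is an illustrative sketch only: the pKa of hypochlorous acid (about 7.5 at room temperature) is a standard literature value rather than a figure from this guideline, and product label instructions always govern actual preparation.

```python
def available_chlorine_ppm(stock_percent: float, dilution_factor: int) -> float:
    """ppm of available chlorine after diluting a hypochlorite stock.

    1% sodium hypochlorite corresponds to roughly 10,000 ppm available chlorine.
    """
    return stock_percent * 10_000 / dilution_factor

# Household bleach at 5.25%-6.15% sodium hypochlorite:
for stock in (5.25, 6.15):
    for factor in (10, 100, 1000):
        ppm = available_chlorine_ppm(stock, factor)
        print(f"1:{factor} dilution of {stock}% bleach: {ppm:,.0f} ppm available chlorine")

def fraction_hocl(ph: float, pka: float = 7.5) -> float:
    """Fraction of free chlorine present as the more microbicidal HOCl
    (Henderson-Hasselbalch; pKa ~7.5 is an assumed literature value)."""
    return 1 / (1 + 10 ** (ph - pka))

for ph in (6, 7, 8, 9):
    print(f"pH {ph}: {fraction_hocl(ph):.0%} of free chlorine is undissociated HOCl")
```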
Data are available for chlorine dioxide that support manufacturers' bactericidal, fungicidal, sporicidal, tuberculocidal, and virucidal label claims. A chlorine dioxide generator has been shown effective for decontaminating flexible endoscopes 534, but chlorine dioxide is not currently FDA-cleared for use as a high-level disinfectant 85. Chlorine dioxide can be produced by mixing solutions, such as a solution of chlorine with a solution of sodium chlorite 329. In 1986, a chlorine dioxide product was voluntarily removed from the market when its use caused leakage of cellulose-based dialyzer membranes, which allowed bacteria to migrate from the dialysis fluid side of the dialyzer to the blood side 547. Sodium dichloroisocyanurate at 2,500 ppm available chlorine is effective against bacteria in the presence of up to 20% plasma, compared with 10% plasma for sodium hypochlorite at 2,500 ppm 548.

"Superoxidized water" has been tested against bacteria, mycobacteria, viruses, fungi, and spores 537,539,549. Freshly generated superoxidized water is rapidly effective (<2 minutes) in achieving a 5-log10 reduction of pathogenic microorganisms (i.e., M. tuberculosis, M. chelonae, poliovirus, HIV, multidrug-resistant S. aureus, E. coli, Candida albicans, Enterococcus faecalis, P. aeruginosa) in the absence of organic loading. However, the biocidal activity of this disinfectant decreased substantially in the presence of organic material (e.g., 5% horse serum) 537,549,550. No bacteria or viruses were detected on artificially contaminated endoscopes after a 5-minute exposure to superoxidized water 551, and HBV DNA was not detected from any endoscope experimentally contaminated with HBV-positive mixed sera after a disinfectant exposure time of 7 minutes 552.

Uses. Hypochlorites are widely used in healthcare facilities in a variety of settings 328. Inorganic chlorine solution is used for disinfecting tonometer heads 188 and for spot-disinfection of countertops and floors. A 1:10-1:100 dilution of 5.25%-6.15% sodium hypochlorite (i.e., household bleach) 22,228,553,554 or an EPA-registered tuberculocidal disinfectant has been recommended for decontaminating blood spills. For small spills of blood (i.e., drops of blood) on noncritical surfaces, the area can be disinfected with a 1:100 dilution of 5.25%-6.15% sodium hypochlorite or an EPA-registered tuberculocidal disinfectant. Because hypochlorites and other germicides are substantially inactivated in the presence of blood 63,548,555,556, large spills of blood require that the surface be cleaned before an EPA-registered disinfectant or a 1:10 (final concentration) solution of household bleach is applied 557. If a sharps injury is possible, the surface initially should be decontaminated 69,318, then cleaned and disinfected (1:10 final concentration) 63. Extreme care always should be taken to prevent percutaneous injury.

At least 500 ppm available chlorine for 10 minutes is recommended for decontaminating CPR training manikins 558. Full-strength bleach has been recommended for self-disinfection of needles and syringes used for illicit-drug injection when needle-exchange programs are not available. The difference in the recommended concentrations of bleach reflects the difficulty of cleaning the interior of needles and syringes and the use of needles and syringes for parenteral injection 559. Clinicians should not alter their use of chlorine on environmental surfaces on the basis of testing methodologies that do not simulate actual disinfection practices 560,561.
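The blood-spill recommendations above reduce to a small decision rule. The sketch below is illustrative only (the function name and its inputs are hypothetical); it is not a substitute for the cited recommendations or for product label instructions.

```python
def spill_disinfection(spill_size: str, surface_cleaned: bool) -> str:
    """Hypothetical helper restating the blood-spill recommendations above;
    follow the product label and facility policy in practice."""
    if spill_size == "small":   # drops of blood on a noncritical surface
        return "1:100 dilution of 5.25%-6.15% sodium hypochlorite (~525-615 ppm)"
    if not surface_cleaned:     # germicides are substantially inactivated by blood
        return "clean the surface first, then disinfect"
    return "1:10 (final concentration) dilution of household bleach (~5,250-6,150 ppm)"

print(spill_disinfection("small", surface_cleaned=False))
print(spill_disinfection("large", surface_cleaned=True))
```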
Other uses in healthcare include as an irrigating agent in endodontic treatment 562 and as a disinfectant for manikins, laundry, dental appliances, hydrotherapy tanks 23,41, regulated medical waste before disposal 328, and the water distribution system in hemodialysis centers and hemodialysis machines 563.

Chlorine long has been used as the disinfectant in water treatment. Hyperchlorination of a Legionella-contaminated hospital water system 23 resulted in a dramatic decrease (from 30% to 1.5%) in the isolation of L. pneumophila from water outlets and a cessation of healthcare-associated Legionnaires' disease in an affected unit 528,564. Water disinfection with monochloramine by municipal water-treatment plants substantially reduced the risk for healthcare-associated Legionnaires' disease 565,566. Chlorine dioxide also has been used to control Legionella in a hospital water supply 567. Chloramine T 568 and hypochlorites 41 have been used to disinfect hydrotherapy equipment.

Hypochlorite solutions in tap water at a pH >8 stored at room temperature (23°C) in closed, opaque plastic containers can lose up to 40%-50% of their free available chlorine level over 1 month. Thus, if a user wished to have a solution containing 500 ppm of available chlorine at day 30, he or she should prepare a solution containing 1,000 ppm of chlorine at time 0. Sodium hypochlorite solution does not decompose after 30 days when stored in a closed brown bottle 327.

The use of powders, composed of a mixture of a chlorine-releasing agent with highly absorbent resin, for disinfecting spills of body fluids has been evaluated by laboratory tests and hospital ward trials. The inclusion of acrylic resin particles in formulations markedly increases the volume of fluid that can be soaked up because the resin can absorb 200-300 times its own weight of fluid, depending on the fluid consistency. When experimental formulations containing 1%, 5%, and 10% available chlorine were evaluated by a standardized surface test, those containing 10% demonstrated bactericidal activity. One problem with chlorine-releasing granules is that they can generate chlorine fumes when applied to urine 569.

# Formaldehyde

Overview. Formaldehyde is used as a disinfectant and sterilant in both its liquid and gaseous states. Liquid formaldehyde will be considered briefly in this section, and the gaseous form is reviewed elsewhere 570. Formaldehyde is sold and used principally as a water-based solution called formalin, which is 37% formaldehyde by weight. The aqueous solution is a bactericide, tuberculocide, fungicide, virucide, and sporicide 72,82. OSHA indicated that formaldehyde should be handled in the workplace as a potential carcinogen and set an employee exposure standard for formaldehyde that limits an 8-hour time-weighted average exposure concentration of 0.75 ppm 574,575. The standard includes a second permissible exposure limit in the form of a short-term exposure limit (STEL) of 2 ppm that is the maximum exposure allowed during a 15-minute period 576. Ingestion of formaldehyde can be fatal, and long-term exposure to low levels in the air or on the skin can cause asthma-like respiratory problems and skin irritation, such as dermatitis and itching. For these reasons, employees should have limited direct contact with formaldehyde, and these considerations limit its role in sterilization and disinfection processes.
Key provisions of the OSHA standard that protects workers from exposure to formaldehyde appear in Title 29 of the Code of Federal Regulations (CFR) Part 1910.1048 (and equivalent regulations in states with OSHA-approved state plans) 577.

# Mode of Action.

Formaldehyde inactivates microorganisms by alkylating the amino and sulfhydryl groups of proteins and ring nitrogen atoms of purine bases 376.

Microbicidal Activity. Varying concentrations of aqueous formaldehyde solutions destroy a wide range of microorganisms. Inactivation of poliovirus in 10 minutes required an 8% concentration of formalin, but all other viruses tested were inactivated with 2% formalin 72. Four percent formaldehyde is a tuberculocidal agent, inactivating 10^4 M. tuberculosis in 2 minutes 82, and 2.5% formaldehyde inactivated about 10^7 Salmonella Typhi in 10 minutes in the presence of organic matter 572. The sporicidal action of formaldehyde was slower than that of glutaraldehyde in comparative tests with 4% aqueous formaldehyde and 2% glutaraldehyde against the spores of B. anthracis 82. The formaldehyde solution required 2 hours of contact to achieve an inactivation factor of 10^4, whereas glutaraldehyde required only 15 minutes.

# Uses.

Although formaldehyde-alcohol is a chemical sterilant and formaldehyde is a high-level disinfectant, the health-care uses of formaldehyde are limited by its irritating fumes and its pungent odor even at very low levels (<1 ppm). For these reasons and others, such as its role as a suspected human carcinogen linked to nasal cancer and lung cancer 578, this germicide is excluded from Table 1. When it is used, direct exposure to employees generally is limited; however, excessive exposures to formaldehyde have been documented for employees of renal transplant units 574,579 and students in a gross anatomy laboratory 580.

Formaldehyde is used in the health-care setting to prepare viral vaccines (e.g., poliovirus and influenza); as an embalming agent; to preserve anatomic specimens; and, historically, to sterilize surgical instruments, especially when mixed with ethanol. A 1997 survey found that formaldehyde was used for reprocessing hemodialyzers by 34% of U.S. hemodialysis centers, a 60% decrease from 1983 249,581. If used at room temperature, a concentration of 4% with a minimum exposure of 24 hours is required to disinfect disposable hemodialyzers reused on the same patient 582,583. Aqueous formaldehyde solutions (1%-2%) also have been used to disinfect the internal fluid pathways of dialysis machines 583. To minimize a potential health hazard to dialysis patients, the dialysis equipment must be thoroughly rinsed and tested for residual formaldehyde before use. Paraformaldehyde, a solid polymer of formaldehyde, can be vaporized by heat for the gaseous decontamination of laminar flow biologic safety cabinets when maintenance work or filter changes require access to the sealed portion of the cabinet.

# Glutaraldehyde

Overview. Glutaraldehyde is a saturated dialdehyde that has gained wide acceptance as a high-level disinfectant and chemical sterilant 107. Aqueous solutions of glutaraldehyde are acidic and generally in this state are not sporicidal. Only when the solution is "activated" (made alkaline) by use of alkalinating agents to pH 7.5-8.5 does the solution become sporicidal. Once activated, these solutions have a shelf life of minimally 14 days because of the polymerization of the glutaraldehyde molecules at alkaline pH levels.
This polymerization blocks the active sites (aldehyde groups) of the glutaraldehyde molecules that are responsible for its biocidal activity. Novel glutaraldehyde formulations (e.g., glutaraldehyde-phenol-sodium phenate, potentiated acid glutaraldehyde, stabilized alkaline glutaraldehyde) produced in the past 30 years have overcome the problem of rapid loss of activity (e.g., use-life 28-30 days) while generally maintaining excellent microbicidal activity. However, antimicrobial activity depends not only on age but also on use conditions, such as dilution and organic stress. Manufacturers' literature for these preparations suggests the neutral or alkaline glutaraldehydes possess microbicidal and anticorrosion properties superior to those of acid glutaraldehydes, and a few published reports substantiate these claims 542,589,590. However, two studies found no difference in the microbicidal activity of alkaline and acid glutaraldehydes 73,591. The use of glutaraldehyde-based solutions in health-care facilities is widespread because of their advantages, including excellent biocidal properties; activity in the presence of organic matter (20% bovine serum); and noncorrosive action to endoscopic equipment, thermometers, rubber, or plastic equipment (Tables 4 and 5).

# Mode of Action.

The biocidal activity of glutaraldehyde results from its alkylation of sulfhydryl, hydroxyl, carboxyl, and amino groups of microorganisms, which alters RNA, DNA, and protein synthesis. The mechanism of action of glutaraldehydes is reviewed extensively elsewhere 592,593.

Microbicidal Activity. The in vitro inactivation of microorganisms by glutaraldehydes has been extensively investigated and reviewed 592,593. Several investigators showed that ≥2% aqueous solutions of glutaraldehyde, buffered to pH 7.5-8.5 with sodium bicarbonate, effectively killed vegetative bacteria in <2 minutes; M. tuberculosis, fungi, and viruses in <10 minutes; and spores of Bacillus and Clostridium species in 3 hours 542. Spores of C. difficile are more rapidly killed by 2% glutaraldehyde than are spores of other species of Clostridium and Bacillus 79,265,266. Microorganisms with substantial resistance to glutaraldehyde have been reported, including some mycobacteria (M. chelonae, Mycobacterium avium-intracellulare, M. xenopi), Methylobacterium mesophilicum 602, Trichosporon, fungal ascospores (e.g., Microascus cinereus, Chaetomium globosum), and Cryptosporidium 271,603. M. chelonae persisted in a 0.2% glutaraldehyde solution used to store porcine prosthetic heart valves 604.

Two percent alkaline glutaraldehyde solution inactivated 10^5 M. tuberculosis cells on the surface of penicylinders within 5 minutes at 18°C 589. However, subsequent studies 82 questioned the mycobactericidal prowess of glutaraldehydes. Two percent alkaline glutaraldehyde has slow action (20 to >30 minutes) against M. tuberculosis and compares unfavorably with alcohols, formaldehydes, iodine, and phenol 82. Suspensions of M. avium, M. intracellulare, and M. gordonae were more resistant to inactivation by a 2% alkaline glutaraldehyde (estimated time to complete inactivation: ~60 minutes) than were virulent M. tuberculosis (estimated time to complete inactivation: ~25 minutes) 605. The rate of kill was directly proportional to the temperature, and a standardized suspension of M. tuberculosis could not be sterilized within 10 minutes 84.
An FDA-cleared chemical sterilant containing 2.5% glutaraldehyde uses increased temperature (35°C) to reduce the time required to achieve high-level disinfection (5 minutes) 85,606, but its use is limited to automatic endoscope reprocessors equipped with a heater. In another study employing membrane filters for measurement of mycobactericidal activity of 2% alkaline glutaraldehyde, complete inactivation was achieved within 20 minutes at 20°C when the test inoculum was 10^6 M. tuberculosis per membrane 81. Several investigators 55,57,73,76,80,81,84,605 have demonstrated that glutaraldehyde solutions inactivate 2.4 to >5.0 log10 of M. tuberculosis in 10 minutes (including multidrug-resistant M. tuberculosis) and 4.0-6.4 log10 of M. tuberculosis in 20 minutes. On the basis of these data and other studies, 20 minutes at room temperature is considered the minimum exposure time needed to reliably kill mycobacteria and other vegetative bacteria with ≥2% glutaraldehyde 17,19,27,57,83,94,108,111,607.

Glutaraldehyde is commonly diluted during use, and studies showed a glutaraldehyde concentration decline after a few days of use in an automatic endoscope washer 608,609. The decline occurs because instruments are not thoroughly dried and water is carried in with the instrument, which increases the solution's volume and dilutes its effective concentration 610. This emphasizes the need to ensure that semicritical equipment is disinfected with an acceptable concentration of glutaraldehyde. Data suggest that 1.0%-1.5% glutaraldehyde is the minimum effective concentration for >2% glutaraldehyde solutions when used as a high-level disinfectant 76,589,590,609.

Chemical test strips or liquid chemical monitors 610,611 are available for determining whether an effective concentration of glutaraldehyde is present despite repeated use and dilution. The frequency of testing should be based on how frequently the solutions are used (e.g., used daily, test daily; used weekly, test before use; used 30 times per day, test each 10th use), but the strips should not be used to extend the use life beyond the expiration date. Data suggest the chemicals in the test strip deteriorate with time 612, and a manufacturer's expiration date should be placed on the bottles. The bottle of test strips should be dated when opened and used for the period of time indicated on the bottle (e.g., 120 days). The results of test strip monitoring should be documented. The glutaraldehyde test kits have been preliminarily evaluated for accuracy and range 612, but their reliability has been questioned 613. To ensure the presence of the minimum effective concentration of the high-level disinfectant, manufacturers of some chemical test strips recommend the use of quality-control procedures to ensure the strips perform properly. If the manufacturer of the chemical test strip recommends a quality-control procedure, users should comply with the manufacturer's recommendations. The concentration should be considered unacceptable or unsafe when the test (i.e., the indicator not changing color) shows a dilution below the product's minimum effective concentration (MEC), generally ≤1.0%-1.5% glutaraldehyde.
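The test-strip monitoring practice described above can be summarized as a small decision helper. This is a minimal illustrative sketch: the function names are hypothetical, the 1.5% MEC is an example value for a hypothetical >2% glutaraldehyde product, and actual practice must follow the specific product label and the strip manufacturer's quality-control procedure.

```python
from datetime import date, timedelta

MEC_PERCENT = 1.5  # example MEC for a hypothetical >2% glutaraldehyde product

def solution_is_usable(measured_percent: float, expiration: date, today: date) -> bool:
    """Unusable if diluted below the MEC or past the use-life date, whichever
    comes first; test strips must never extend the expiration date."""
    return measured_percent >= MEC_PERCENT and today <= expiration

def strip_bottle_discard_date(opened: date, use_period_days: int = 120) -> date:
    """When an opened bottle of test strips itself expires (e.g., 120 days per label)."""
    return opened + timedelta(days=use_period_days)

# Hypothetical example: solution activated March 1 with a 14-day use life.
expires = date(2024, 3, 1) + timedelta(days=14)
print(solution_is_usable(1.7, expires, date(2024, 3, 11)))   # True
print(solution_is_usable(1.2, expires, date(2024, 3, 11)))   # False: below the MEC
print(strip_bottle_discard_date(date(2024, 3, 1)))           # 2024-06-29
```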
A 2.0% glutaraldehyde-7.05% phenol-1.20% sodium phenate product that contained 0.125% glutaraldehyde-0.44% phenol-0.075% sodium phenate when diluted 1:16 is not recommended as a high-level disinfectant because it lacks bactericidal activity in the presence of organic matter and lacks tuberculocidal, fungicidal, virucidal, and sporicidal activity 49,55,56,71,614. In December 1991, EPA issued an order to stop the sale of all batches of this product because of efficacy data showing the product is not effective against spores and possibly other microorganisms or inanimate objects as claimed on the label 615. FDA has cleared a glutaraldehyde-phenol/phenate concentrate as a high-level disinfectant that contains 1.12% glutaraldehyde with 1.93% phenol/phenate at its use concentration. Other FDA-cleared glutaraldehyde sterilants that contain 2.4%-3.4% glutaraldehyde are used undiluted 606.

Uses. Glutaraldehyde is used most commonly as a high-level disinfectant for medical equipment such as endoscopes 69,107,504, spirometry tubing, dialyzers 616, transducers, anesthesia and respiratory therapy equipment 617, hemodialysis proportioning and dialysate delivery systems 249,618, and reused laparoscopic disposable plastic trocars 619. Glutaraldehyde is noncorrosive to metal and does not damage lensed instruments, rubber, or plastics. Glutaraldehyde should not be used for cleaning noncritical surfaces because it is too toxic and expensive.

Colitis believed to be caused by glutaraldehyde exposure from residual disinfecting solution in endoscope solution channels has been reported and is preventable by careful endoscope rinsing 318. One study found that residual glutaraldehyde levels were higher and more variable after manual disinfection (<0.2 mg/L to 159.5 mg/L) than after automatic disinfection (0.2-6.3 mg/L) 631. Similarly, keratopathy and corneal decompensation were caused by ophthalmic instruments that were inadequately rinsed after soaking in 2% glutaraldehyde 632,633.

Healthcare personnel can be exposed to elevated levels of glutaraldehyde vapor when equipment is processed in poorly ventilated rooms, when spills occur, when glutaraldehyde solutions are activated or changed 634, or when open immersion baths are used. Acute or chronic exposure can result in skin irritation or dermatitis, mucous membrane irritation (eye, nose, mouth), or pulmonary symptoms 318. Epistaxis, allergic contact dermatitis, asthma, and rhinitis also have been reported in healthcare workers exposed to glutaraldehyde 636.

Glutaraldehyde exposure should be monitored to ensure a safe work environment. Testing can be done by four techniques: a silica gel tube/gas chromatography with a flame ionization detector, a dinitrophenylhydrazine (DNPH)-impregnated filter cassette/high-performance liquid chromatography (HPLC) with an ultraviolet (UV) detector, a passive badge/HPLC, or a handheld glutaraldehyde air monitor 648. The silica gel tube and the DNPH-impregnated cassette are suitable for monitoring the 0.05 ppm ceiling limit. The passive badge, with a 0.02 ppm limit of detection, is considered marginal at the American Conference of Governmental Industrial Hygienists (ACGIH) ceiling level. The ceiling level is considered too close to the glutaraldehyde meter's 0.03 ppm limit of detection to provide confidence in the readings 648. ACGIH does not require a specific monitoring schedule for glutaraldehyde; however, a monitoring schedule is needed to ensure the level is less than the ceiling limit.
For example, monitoring should be done initially to determine glutaraldehyde levels, after procedural or equipment changes, and in response to worker complaints 649. In the absence of an OSHA permissible exposure limit, if the glutaraldehyde level is higher than the ACGIH ceiling limit of 0.05 ppm, corrective action and repeat monitoring would be prudent 649.

Engineering and work-practice controls that can be used to resolve these problems include ducted exhaust hoods, air systems that provide 7-15 air exchanges per hour, ductless fume hoods with absorbents for the glutaraldehyde vapor, tight-fitting lids on immersion baths, personal protection (e.g., nitrile or butyl rubber gloves but not natural latex gloves, goggles) to minimize skin or mucous membrane contact, and automated endoscope processors 7,650. If engineering controls fail to maintain levels below the ceiling limit, institutions can consider the use of respirators (e.g., a half-face respirator with organic vapor cartridge 640 or a type "C" supplied air respirator with a full facepiece operated in a positive pressure mode) 651. In general, engineering controls are preferred over work-practice and administrative controls because they do not require active participation by the health-care worker. Even though enforcement of the OSHA ceiling limit was suspended in 1993 by the U.S. Court of Appeals 577, limiting employee exposure to 0.05 ppm (according to ACGIH) is prudent because, at this level, glutaraldehyde can irritate the eyes, throat, and nose 318,577,639,652. If glutaraldehyde disposal through the sanitary sewer system is restricted, sodium bisulfite can be used to neutralize the glutaraldehyde and make it safe for disposal.

# Hydrogen Peroxide

Overview. The literature contains several accounts of the properties, germicidal effectiveness, and potential uses for stabilized hydrogen peroxide in the health-care setting. Published reports ascribe good germicidal activity to hydrogen peroxide and attest to its bactericidal, virucidal, sporicidal, and fungicidal properties (Tables 4 and 5). The FDA website lists cleared liquid chemical sterilants and high-level disinfectants containing hydrogen peroxide and their cleared contact conditions.

# Mode of Action.

Hydrogen peroxide works by producing destructive hydroxyl free radicals that can attack membrane lipids, DNA, and other essential cell components. Catalase, produced by aerobic organisms and facultative anaerobes that possess cytochrome systems, can protect cells from metabolically produced hydrogen peroxide by degrading hydrogen peroxide to water and oxygen. This defense is overwhelmed by the concentrations used for disinfection 653,654.

Microbicidal Activity. Hydrogen peroxide is active against a wide range of microorganisms, including bacteria, yeasts, fungi, viruses, and spores 78,654. A 0.5% accelerated hydrogen peroxide demonstrated bactericidal and virucidal activity in 1 minute and mycobactericidal and fungicidal activity in 5 minutes 656. Bactericidal effectiveness and stability of hydrogen peroxide in urine have been demonstrated against a variety of health-care-associated pathogens; organisms with high cellular catalase activity (e.g., S. aureus, S. marcescens, and Proteus mirabilis) required 30-60 minutes of exposure to 0.6% hydrogen peroxide for a 10^8 reduction in cell counts, whereas organisms with lower catalase activity (e.g., E. coli, Streptococcus species, and Pseudomonas species) required only 15 minutes' exposure 657.
In an investigation of 3%, 10%, and 15% hydrogen peroxide for reducing spacecraft bacterial populations, a complete kill of 10^6 spores (i.e., Bacillus species) occurred with a 10% concentration and a 60-minute exposure time. A 3% concentration for 150 minutes killed 10^6 spores in six of seven exposure trials 658. A 10% hydrogen peroxide solution resulted in a 10^3 decrease in B. atrophaeus spores, and a ≥10^5 decrease when tested against 13 other pathogens in 30 minutes at 20°C 659,660. A 3.0% hydrogen peroxide solution was ineffective against VRE after 3- and 10-minute exposure times 661 and caused only a 2-log10 reduction in the number of Acanthamoeba cysts in approximately 2 hours 662.

A 7% stabilized hydrogen peroxide proved to be sporicidal (6 hours of exposure), mycobactericidal (20 minutes), and fungicidal (5 minutes) at full strength, and virucidal (5 minutes) and bactericidal (3 minutes) at a 1:16 dilution when a quantitative carrier test was used 655. The 7% solution of hydrogen peroxide, tested after 14 days of stress (in the form of germ-loaded carriers and respiratory therapy equipment), was sporicidal (>7 log10 reduction in 6 hours), mycobactericidal (>6.5 log10 reduction in 25 minutes), fungicidal (>5 log10 reduction in 20 minutes), bactericidal (>6 log10 reduction in 5 minutes), and virucidal (5 log10 reduction in 5 minutes) 663. Synergistic sporicidal effects were observed when spores were exposed to a combination of hydrogen peroxide (5.9%-23.6%) and peracetic acid 664. Other studies demonstrated the antiviral activity of hydrogen peroxide against rhinovirus 665. The time required for inactivating three serotypes of rhinovirus using a 3% hydrogen peroxide solution was 6-8 minutes; this time increased with decreasing concentrations (18-20 minutes at 1.5%, 50-60 minutes at 0.75%).

Concentrations of hydrogen peroxide from 6% to 25% show promise as chemical sterilants. The product marketed as a sterilant is a premixed, ready-to-use chemical that contains 7.5% hydrogen peroxide and 0.85% phosphoric acid (to maintain a low pH) 69. The mycobactericidal activity of 7.5% hydrogen peroxide has been corroborated in a study showing the inactivation of >10^5 multidrug-resistant M. tuberculosis after a 10-minute exposure 666. Thirty minutes were required for >99.9% inactivation of poliovirus and HAV 667. Three percent and 6% hydrogen peroxide were unable to inactivate HAV in 1 minute in a carrier test 58. When the effectiveness of 7.5% hydrogen peroxide at 10 minutes was compared with 2% alkaline glutaraldehyde at 20 minutes in manual disinfection of endoscopes, no significant difference in germicidal activity was observed 668. No complaints were received from the nursing or medical staff regarding odor or toxicity. In one study, 6% hydrogen peroxide (unused product was 7.5%) was more effective in the high-level disinfection of flexible endoscopes than was the 2% glutaraldehyde solution 456.

A new, rapid-acting 13.4% hydrogen peroxide formulation (that is not yet FDA-cleared) has demonstrated sporicidal, mycobactericidal, fungicidal, and virucidal efficacy. Manufacturer data demonstrate that this solution sterilizes in 30 minutes and provides high-level disinfection in 5 minutes 669. This product has not been used long enough to evaluate material compatibility with endoscopes and other semicritical devices, and further assessment by instrument manufacturers is needed.
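Kill-time data such as those above are often summarized as a D-value, the time required for a 1-log10 (90%) reduction under fixed conditions. The sketch below is illustrative only and assumes simple log-linear inactivation, which real survivor curves only approximate; it derives a D-value from one reported observation and extrapolates to other reduction levels.

```python
def d_value(exposure_minutes: float, log10_kill: float) -> float:
    """D-value: minutes per 1-log10 (90%) reduction, assuming log-linear kill."""
    return exposure_minutes / log10_kill

def time_for_reduction(d: float, target_log10: float) -> float:
    """Minutes needed for a target log10 reduction under the same conditions."""
    return d * target_log10

# From the text: 10% hydrogen peroxide killed 10^6 spores in 60 minutes (a 6-log10 kill).
d = d_value(60, 6)                  # 10 minutes per log10
print(time_for_reduction(d, 3))     # ~30 minutes estimated for a 3-log10 reduction
```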
Under normal conditions, hydrogen peroxide is extremely stable when properly stored (e.g., in dark containers). The decomposition or loss of potency in small containers is less than 2% per year at ambient temperatures 670.

Uses. Commercially available 3% hydrogen peroxide is a stable and effective disinfectant when used on inanimate surfaces. It has been used in concentrations from 3% to 6% for disinfecting soft contact lenses (e.g., 3% for 2-3 hours) 653,671,672, tonometer biprisms 513, ventilators 673, fabrics 397, and endoscopes 456. Hydrogen peroxide was effective in spot-disinfecting fabrics in patients' rooms 397. Corneal damage from a hydrogen peroxide-soaked tonometer tip that was not properly rinsed has been reported 674. Hydrogen peroxide also has been instilled into urinary drainage bags in an attempt to eliminate the bag as a source of bladder bacteriuria and environmental contamination 675. Although the instillation of hydrogen peroxide into the bag reduced microbial contamination of the bag, this procedure did not reduce the incidence of catheter-associated bacteriuria 675.

A chemical irritation resembling pseudomembranous colitis caused by either 3% hydrogen peroxide or a 2% glutaraldehyde has been reported 621. An epidemic of pseudomembrane-like enteritis and colitis in seven patients in a gastrointestinal endoscopy unit also has been associated with inadequate rinsing of 3% hydrogen peroxide from the endoscope 676. As with other chemical sterilants, dilution of the hydrogen peroxide must be monitored by regularly testing the minimum effective concentration (i.e., 7.5%-6.0%). Compatibility testing by Olympus America of the 7.5% hydrogen peroxide found both cosmetic changes (e.g., discoloration of black anodized metal finishes) 69 and functional changes with the tested endoscopes (Olympus, written communication, October 15, 1999).

# Iodophors

Overview. Iodine solutions or tinctures long have been used by health professionals, primarily as antiseptics on skin or tissue. Iodophors, on the other hand, have been used both as antiseptics and disinfectants. FDA has not cleared any liquid chemical sterilant or high-level disinfectant with iodophors as the main active ingredient. An iodophor is a combination of iodine and a solubilizing agent or carrier; the resulting complex provides a sustained-release reservoir of iodine and releases small amounts of free iodine in aqueous solution. The best-known and most widely used iodophor is povidone-iodine, a compound of polyvinylpyrrolidone with iodine. This product and other iodophors retain the germicidal efficacy of iodine but, unlike iodine, generally are nonstaining and relatively free of toxicity and irritancy 677,678.

Several reports that documented intrinsic microbial contamination of antiseptic formulations of povidone-iodine and poloxamer-iodine caused a reappraisal of the chemistry and use of iodophors 682. "Free" iodine (I2) contributes to the bactericidal activity of iodophors, and dilutions of iodophors demonstrate more rapid bactericidal action than does a full-strength povidone-iodine solution. The reason for the observation that dilution increases bactericidal activity is unclear, but dilution of povidone-iodine might weaken the iodine linkage to the carrier polymer, with an accompanying increase of free iodine in solution 680. Therefore, iodophors must be diluted according to the manufacturers' directions to achieve antimicrobial activity.

# Mode of Action.
Iodine can penetrate the cell wall of microorganisms quickly, and the lethal effects are believed to result from disruption of protein and nucleic acid structure and synthesis.

Microbicidal Activity. Published reports on the in vitro antimicrobial efficacy of iodophors demonstrate that they are bactericidal, mycobactericidal, and virucidal but can require prolonged contact times to kill certain fungi and bacterial spores 14,71-73,290,683-686. Three brands of povidone-iodine solution have demonstrated more rapid kill (seconds to minutes) of S. aureus and M. chelonae at a 1:100 dilution than did the stock solution 683. The virucidal activity of 75-150 ppm available iodine was demonstrated against seven viruses 72. Other investigators have questioned the efficacy of iodophors against poliovirus in the presence of organic matter 685 and rotavirus SA-11 in distilled or tap water 290. Manufacturers' data demonstrate that commercial iodophors are not sporicidal, but they are tuberculocidal, fungicidal, virucidal, and bactericidal at their recommended use-dilution.

Uses. Besides their use as an antiseptic, iodophors have been used for disinfecting blood culture bottles and medical equipment, such as hydrotherapy tanks, thermometers, and endoscopes. Antiseptic iodophors are not suitable for use as hard-surface disinfectants because of concentration differences. Iodophors formulated as antiseptics contain less free iodine than do those formulated as disinfectants 376. Iodine or iodine-based antiseptics should not be used on silicone catheters because they can adversely affect the silicone tubing 687.

# Ortho-phthalaldehyde (OPA)

Overview. Ortho-phthalaldehyde is a high-level disinfectant that received FDA clearance in October 1999. It contains 0.55% 1,2-benzenedicarboxaldehyde (OPA). OPA solution is a clear, pale-blue liquid with a pH of 7.5 (Tables 4 and 5).

# Mode of Action.

Preliminary studies on the mode of action of OPA suggest that both OPA and glutaraldehyde interact with amino acids, proteins, and microorganisms. However, OPA is a less potent cross-linking agent. This is compensated for by the lipophilic aromatic nature of OPA, which is likely to assist its uptake through the outer layers of mycobacteria and gram-negative bacteria. OPA appears to kill spores by blocking the spore germination process 691.

Microbicidal Activity. Studies have demonstrated excellent microbicidal activity in vitro 69,100,271,400. For example, OPA has superior mycobactericidal activity (5-log10 reduction in 5 minutes) to glutaraldehyde. The mean time required to produce a 6-log10 reduction of M. bovis using 0.21% OPA was 6 minutes, compared with 32 minutes using 1.5% glutaraldehyde 693. OPA showed good activity against the mycobacteria tested, including the glutaraldehyde-resistant strains, but 0.5% OPA was not sporicidal with 270 minutes of exposure. Increasing the pH from its unadjusted level (about 6.5) to pH 8 improved the sporicidal activity of OPA 694. The level of biocidal activity was directly related to the temperature: a greater than 5-log10 reduction of B. atrophaeus spores was observed in 3 hours at 35°C, compared with 24 hours at 20°C. Also, with an exposure time ≤5 minutes, biocidal activity decreased with increasing serum concentration. However, efficacy did not differ when the exposure time was ≥10 minutes 697. In addition, OPA is effective (>5-log10 reduction) against a wide range of microorganisms, including glutaraldehyde-resistant mycobacteria and B. atrophaeus spores 694.
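The mycobactericidal comparison above can be restated as approximate inactivation rates. The arithmetic below is an illustrative sketch only, again assuming roughly log-linear kill.

```python
# From the text: a 6-log10 reduction of M. bovis took 6 minutes with 0.21% OPA
# but 32 minutes with 1.5% glutaraldehyde.
opa_rate = 6 / 6     # ~1.0 log10 per minute
glut_rate = 6 / 32   # ~0.19 log10 per minute
print(f"OPA kills roughly {opa_rate / glut_rate:.1f}x faster against M. bovis")  # ~5.3x
```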
The influence of laboratory adaptation of test strains, such as P. aeruginosa, to 0.55% OPA has been evaluated. Resistant and multiresistant strains increased substantially in susceptibility to OPA after laboratory adaptation (log10 reduction factors increased by 0.54 and 0.91 for resistant and multiresistant strains, respectively) 704. Other studies have found that naturally occurring cells of P. aeruginosa were more resistant to a variety of disinfectants than were subcultured cells 705.

Uses. OPA has several potential advantages over glutaraldehyde. It has excellent stability over a wide pH range (pH 3-9), is not a known irritant to the eyes and nasal passages 706, does not require exposure monitoring, has a barely perceptible odor, and requires no activation. OPA, like glutaraldehyde, has excellent material compatibility. A potential disadvantage of OPA is that it stains proteins gray (including unprotected skin) and thus must be handled with caution 69. However, skin staining would indicate improper handling that requires additional training and/or personal protective equipment (e.g., gloves, eye and mouth protection, and fluid-resistant gowns). OPA residues remaining on inadequately water-rinsed transesophageal echo probes can stain the patient's mouth 707. Meticulous cleaning, use of the correct OPA exposure time (e.g., 12 minutes), and copious rinsing of the probe with water should eliminate this problem. The results of one study provided a basis for a recommendation that rinsing of instruments disinfected with OPA will require at least 250 mL of water per channel to reduce the chemical residue to a level that will not compromise patient or staff safety (<1 ppm) 708. Personal protective equipment should be worn when contaminated instruments, equipment, and chemicals are handled 400. In addition, equipment must be thoroughly rinsed to prevent discoloration of a patient's skin or mucous membrane.

In April 2004, the manufacturer of OPA disseminated information to users about patients who reportedly experienced an anaphylaxis-like reaction after cystoscopy where the scope had been reprocessed using OPA. Of approximately 1 million urologic procedures performed using instruments reprocessed using OPA, 24 cases (17 in the United States, 6 in Japan, 1 in the United Kingdom) of anaphylaxis-like reactions have been reported after repeated cystoscopy (typically after four to nine treatments). Preventive measures include removal of OPA residues by thorough rinsing and not using OPA for reprocessing urologic instrumentation used to treat patients with a history of bladder cancer (Nevine Erian, personal communication, June 4, 2004; Product Notification, Advanced Sterilization Products, April 23, 2004) 709.

A few OPA clinical studies are available. In a clinical-use study, OPA exposure of 100 endoscopes for 5 minutes resulted in a >5-log10 reduction in bacterial load. Furthermore, OPA was effective over a 14-day use cycle 100. Manufacturer data show that OPA will last longer in an automatic endoscope reprocessor before reaching its MEC limit (MEC after 82 cycles) than will glutaraldehyde (MEC after 40 cycles) 400. High-pressure liquid chromatography confirmed that OPA levels are maintained above 0.3% for at least 50 cycles 706,710. OPA must be disposed of in accordance with local and state regulations. If OPA disposal through the sanitary sewer system is restricted, glycine (25 grams/gallon) can be used to neutralize the OPA and make it safe for disposal.
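The glycine neutralization rate quoted above is given per gallon; a short conversion puts it in metric units. The helper function below is illustrative only, and local disposal regulations govern actual practice.

```python
GRAMS_PER_GALLON = 25      # glycine neutralization rate quoted above
LITERS_PER_GALLON = 3.785  # U.S. gallon

def glycine_grams(opa_liters: float) -> float:
    """Grams of glycine to neutralize a given volume of OPA solution before disposal."""
    return opa_liters * GRAMS_PER_GALLON / LITERS_PER_GALLON

print(f"{GRAMS_PER_GALLON / LITERS_PER_GALLON:.1f} g glycine per liter")  # ~6.6 g/L
print(f"{glycine_grams(3.785):.0f} g for a 1-gallon (3.785 L) basin")     # 25 g
```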
The high-level disinfectant label claims for OPA solution at 20°C vary worldwide (e.g., 5 minutes in Europe, Asia, and Latin America; 10 minutes in Canada and Australia; and 12 minutes in the United States). These label claims differ worldwide because of differences in test methodology and requirements for licensure. In an automated endoscope reprocessor with an FDA-cleared capability to maintain solution temperatures at 25°C, the contact time for OPA is 5 minutes.

# Peracetic Acid

Overview. Peracetic, or peroxyacetic, acid is characterized by rapid action against all microorganisms. Special advantages of peracetic acid are that its decomposition products (i.e., acetic acid, water, oxygen, hydrogen peroxide) are harmless, it enhances removal of organic material 711, and it leaves no residue. It remains effective in the presence of organic matter and is sporicidal even at low temperatures (Tables 4 and 5). Peracetic acid can corrode copper, brass, bronze, plain steel, and galvanized iron, but these effects can be reduced by additives and pH modifications. It is considered unstable, particularly when diluted; for example, a 1% solution loses half its strength through hydrolysis in 6 days, whereas 40% peracetic acid loses 1%-2% of its active ingredients per month 654.

# Mode of Action. Little is known about the mechanism of action of peracetic acid, but it is believed to function similarly to other oxidizing agents; that is, it denatures proteins, disrupts cell wall permeability, and oxidizes sulfhydryl and sulfur bonds in proteins, enzymes, and other metabolites 654.

Microbicidal Activity. Peracetic acid will inactivate gram-positive and gram-negative bacteria, fungi, and yeasts in ≤5 minutes at <100 ppm; in the presence of organic matter, 200-500 ppm is required. Peracetic acid was effective (log10 reduction factor >5) against all test strains of mycobacteria (M. tuberculosis, M. avium-intracellulare, M. chelonae, and M. fortuitum) within 20-30 minutes in the presence or absence of an organic load 607,712. With bacterial spores, 500-10,000 ppm (0.05%-1%) inactivates spores in 15 seconds to 30 minutes using a spore suspension test 654,659.

Uses. An automated machine using peracetic acid to chemically sterilize medical (e.g., endoscopes, arthroscopes), surgical, and dental instruments is used in the United States. As previously noted, dental handpieces should be steam sterilized. The sterilant, 35% peracetic acid, is diluted to 0.2% with filtered water at 50°C. Simulated-use trials have demonstrated excellent microbicidal activity 111, and three clinical trials have demonstrated both excellent microbial killing and no clinical failures leading to infection 90,723,724. The high efficacy of the system was demonstrated in a comparison with ethylene oxide: only the peracetic acid system completely killed 6 log10 of M. chelonae, E. faecalis, and B. atrophaeus spores with both an organic and inorganic challenge 722. An investigation that compared the costs, performance, and maintenance of urologic endoscopic equipment processed by high-level disinfection (with glutaraldehyde) with those of the peracetic acid system reported no clinical differences between the two systems. However, the use of this system led to higher costs than high-level disinfection, including costs for processing ($6.11 vs. $0.45 per cycle), purchasing and training ($24,845 vs. $16), installation ($5,800 vs. $0), and endoscope repairs ($6,037 vs. $445) 90.
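The cost comparison above combines fixed costs (purchase, training, installation, repairs) with a per-cycle processing cost, so the gap depends on annual reprocessing volume. A minimal sketch of that arithmetic, using the figures cited above with a hypothetical cycle volume of our choosing:

```python
def annual_cost(per_cycle: float, cycles_per_year: int, fixed: float) -> float:
    """First-year total: fixed costs plus per-cycle processing cost."""
    return fixed + per_cycle * cycles_per_year

cycles = 2000  # hypothetical annual reprocessing volume, not from the study
peracetic = annual_cost(6.11, cycles, 24845 + 5800 + 6037)
glut_hld  = annual_cost(0.45, cycles, 16 + 0 + 445)
print(f"peracetic acid system: ${peracetic:,.0f}")  # $48,902
print(f"glutaraldehyde HLD:    ${glut_hld:,.0f}")   # $1,361
```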
Furthermore, three clusters of infection associated with the peracetic acid automated endoscope reprocessor were linked to inadequately processed bronchoscopes when inappropriate channel connectors were used with the system 725. These clusters highlight the importance of training, proper model-specific endoscope connector systems, and quality-control procedures to ensure compliance with endoscope manufacturer recommendations and professional organization guidelines. An alternative high-level disinfectant available in the United Kingdom contains 0.35% peracetic acid. Although this product is rapidly effective against a broad range of microorganisms 466,726,727, it tarnishes the metal of endoscopes and is unstable, resulting in only a 24-hour use life 727.

# Peracetic Acid and Hydrogen Peroxide

Overview. Two chemical sterilants are available that contain peracetic acid plus hydrogen peroxide (i.e., 0.08% peracetic acid plus 1.0% hydrogen peroxide, and 0.23% peracetic acid plus 7.35% hydrogen peroxide) (Tables 4 and 5).

Microbicidal Activity. The bactericidal properties of peracetic acid and hydrogen peroxide have been demonstrated 728. Manufacturer data demonstrated that this combination of peracetic acid and hydrogen peroxide inactivated all microorganisms except bacterial spores within 20 minutes. The 0.08% peracetic acid plus 1.0% hydrogen peroxide product effectively inactivated glutaraldehyde-resistant mycobacteria 729.

Uses. The combination of peracetic acid and hydrogen peroxide has been used for disinfecting hemodialyzers 730. The percentage of dialysis centers using a peracetic acid-hydrogen peroxide-based disinfectant for reprocessing dialyzers increased from 5% in 1983 to 56% in 1997 249. Olympus America does not endorse use of 0.08% peracetic acid plus 1.0% hydrogen peroxide on any Olympus endoscope because of cosmetic and functional damage and will not assume liability for chemical damage resulting from use of this product (Olympus America, personal communication, April 15, 1998). This product is not currently available. FDA has cleared a newer chemical sterilant with 0.23% peracetic acid and 7.35% hydrogen peroxide (Tables 4 and 5). After testing the 7.35% hydrogen peroxide and 0.23% peracetic acid product, Olympus America concluded it was not compatible with the company's flexible gastrointestinal endoscopes; this conclusion was based on immersion studies in which the test insertion tubes failed because of swelling and loosening of the black polymer layer of the tube (Olympus America, personal communication, September 13, 2000).

# Phenolics

Overview. Phenol has occupied a prominent place in the field of hospital disinfection since its initial use as a germicide by Lister in his pioneering work on antiseptic surgery. In the past 30 years, however, work has concentrated on the numerous phenol derivatives (phenolics) and their antimicrobial properties. Phenol derivatives originate when a functional group (e.g., alkyl, phenyl, benzyl, halogen) replaces one of the hydrogen atoms on the aromatic ring. Two phenol derivatives commonly found as constituents of hospital disinfectants are ortho-phenylphenol and ortho-benzyl-para-chlorophenol. The antimicrobial properties of these compounds and many other phenol derivatives are much improved over those of the parent chemical. Phenolics are absorbed by porous materials, and the residual disinfectant can irritate tissue.
In 1970, depigmentation of the skin was reported to be caused by phenolic germicidal detergents containing para-tertiary butylphenol and para-tertiary amylphenol 731.

# Mode of Action. In high concentrations, phenol acts as a gross protoplasmic poison, penetrating and disrupting the cell wall and precipitating the cell proteins. Low concentrations of phenol and higher-molecular-weight phenol derivatives cause bacterial death by inactivation of essential enzyme systems and leakage of essential metabolites from the cell wall 732.

Microbicidal Activity. Published reports on the antimicrobial efficacy of commonly used phenolics showed they were bactericidal, fungicidal, virucidal, and tuberculocidal 14,61,71,73,227,416,573. One study demonstrated little or no virucidal effect of a phenolic against coxsackie B4, echovirus 11, and poliovirus 1 736. Similarly, 12% ortho-phenylphenol failed to inactivate any of the three hydrophilic viruses after a 10-minute exposure time, although 5% phenol was lethal for these viruses 72. A 0.5% dilution of a phenolic (2.8% ortho-phenylphenol and 2.7% ortho-benzyl-para-chlorophenol) inactivated HIV 227, and a 2% solution of a phenolic (15% ortho-phenylphenol and 6.3% para-tertiary-amylphenol) inactivated all but one of 11 fungi tested 71. Manufacturers' data using the standardized AOAC methods demonstrate that commercial phenolics are not sporicidal but are tuberculocidal, fungicidal, virucidal, and bactericidal at their recommended use-dilution. Attempts to substantiate the bactericidal label claims of phenolics using the AOAC Use-Dilution Method occasionally have failed 416,737. However, results from these same studies have varied dramatically among laboratories testing identical products.

Uses. Many phenolic germicides are EPA-registered as disinfectants for use on environmental surfaces (e.g., bedside tables, bedrails, and laboratory surfaces) and noncritical medical devices. Phenolics are not FDA-cleared as high-level disinfectants for use with semicritical items but could be used to preclean or decontaminate critical and semicritical devices before terminal sterilization or high-level disinfection. The use of phenolics in nurseries has been questioned because of hyperbilirubinemia in infants placed in bassinets where phenolic detergents were used 739. In addition, bilirubin levels were reported to increase in phenolic-exposed infants, compared with nonphenolic-exposed infants, when the phenolic was prepared according to the manufacturer's recommended dilution 740. If phenolics are used to clean nursery floors, they must be diluted as recommended on the product label. Phenolics (and other disinfectants) should not be used to clean infant bassinets and incubators while occupied. If phenolics are used to terminally clean infant bassinets and incubators, the surfaces should be rinsed thoroughly with water and dried before reuse 17.

# Quaternary Ammonium Compounds

Overview. The quaternary ammonium compounds are widely used as disinfectants. Health-care-associated infections have been reported from contaminated quaternary ammonium compounds used to disinfect patient-care supplies or equipment, such as cystoscopes or cardiac catheters 741,742. The quaternaries are good cleaning agents, but high water hardness 743 and materials such as cotton and gauze pads can make them less microbicidal, because hard water produces insoluble precipitates and cotton and gauze pads absorb the active ingredients.
One study showed a significant decline (~40%-50% lower at 1 hour) in the concentration of quaternaries released when cotton rags or cellulose-based wipers were used in the open-bucket system, compared with the nonwoven spunlace wipers in the closed-bucket system 744. As with several other disinfectants (e.g., phenolics, iodophors), gram-negative bacteria can survive or grow in them 404.

Chemically, the quaternaries are organically substituted ammonium compounds in which the nitrogen atom has a valence of 5, four of the substituent radicals (R1-R4) are alkyl or heterocyclic radicals of a given size or chain length, and the fifth (X-) is a halide, sulfate, or similar radical 745. Each compound exhibits its own antimicrobial characteristics, hence the search for one compound with outstanding antimicrobial properties. Some of the chemical names of quaternary ammonium compounds used in healthcare are alkyl dimethyl benzyl ammonium chloride, alkyl didecyl dimethyl ammonium chloride, and dialkyl dimethyl ammonium chloride. The newer quaternary ammonium compounds (i.e., fourth generation), referred to as twin-chain or dialkyl quaternaries (e.g., didecyl dimethyl ammonium bromide and dioctyl dimethyl ammonium bromide), purportedly remain active in hard water and are tolerant of anionic residues 746. A few case reports have documented occupational asthma as a result of exposure to benzalkonium chloride 747.

# Mode of Action. The bactericidal action of the quaternaries has been attributed to the inactivation of energy-producing enzymes, denaturation of essential cell proteins, and disruption of the cell membrane 746. Evidence exists that supports these and other possibilities 745,748.

Microbicidal Activity. Results from manufacturers' data sheets and from published scientific literature indicate that the quaternaries sold as hospital disinfectants are generally fungicidal, bactericidal, and virucidal against lipophilic (enveloped) viruses; they are not sporicidal and generally not tuberculocidal or virucidal against hydrophilic (nonenveloped) viruses 14, 54-56, 58, 59, 61, 71, 73, 186, 297, 748, 749. The poor mycobactericidal activities of quaternary ammonium compounds have been demonstrated 55,73. Quaternary ammonium compounds (as well as 70% isopropyl alcohol, a phenolic, and a chlorine-containing wipe) effectively (>95%) remove and/or inactivate contaminants (i.e., multidrug-resistant S. aureus, vancomycin-resistant Enterococcus, P. aeruginosa) from computer keyboards with a 5-second application time. No functional damage or cosmetic changes occurred to the computer keyboards after 300 applications of the disinfectants 45. Attempts to reproduce the manufacturers' bactericidal and tuberculocidal claims using the AOAC tests with a limited number of quaternary ammonium compounds occasionally have failed 73,416,737. However, test results have varied extensively among laboratories testing identical products 416,737.

Uses. The quaternaries commonly are used in ordinary environmental sanitation of noncritical surfaces, such as floors, furniture, and walls. EPA-registered quaternary ammonium compounds are appropriate to use for disinfecting medical equipment that contacts intact skin (e.g., blood pressure cuffs).

# Miscellaneous Inactivating Agents

Other Germicides

Several compounds have antimicrobial activity but for various reasons have not been incorporated into the armamentarium of health-care disinfectants.
These include mercurials, sodium hydroxide, β-propiolactone, chlorhexidine gluconate, cetrimide-chlorhexidine, glycols (triethylene and propylene), and the Tego disinfectants. Two authoritative references examine these agents in detail 16,412. A peroxygen-containing formulation had marked bactericidal action when used as a 1% weight/volume solution and virucidal activity at 3% 49, but it did not have mycobactericidal activity at concentrations of 2.3% and 4% and exposure times ranging from 30 to 120 minutes 750. It also required 20 hours to kill B. atrophaeus spores 751. A powder-based peroxygen compound for disinfecting contaminated spills was strongly and rapidly bactericidal 752.

In preliminary studies, nanoemulsions (composed of detergents and lipids in water) showed activity against vegetative bacteria, enveloped viruses, and Candida; this product represents a potential topical biocidal agent. New disinfectants that require further evaluation include glucoprotamin 756, tertiary amines 703, and a light-activated antimicrobial coating 757. Several other disinfection technologies might have potential applications in the healthcare setting 758.

# Metals as Microbicides

Comprehensive reviews of antisepsis 759, disinfection 421, and anti-infective chemotherapy 760 barely mention the antimicrobial activity of heavy metals 761,762. Nevertheless, the anti-infective activity of some heavy metals has been known since antiquity. Heavy metals such as silver have been used for prophylaxis of conjunctivitis of the newborn, topical therapy for burn wounds, and bonding to indwelling catheters, and the use of heavy metals as antiseptics or disinfectants is again being explored 763. Inactivation of bacteria on stainless steel surfaces by zeolite ceramic coatings containing silver and zinc ions has also been demonstrated 764,765. Metals such as silver, iron, and copper could be used for environmental control, disinfection of water, or reusable medical devices, or incorporated into medical devices (e.g., intravascular catheters) 400. A comparative evaluation of six disinfectant formulations for residual antimicrobial activity demonstrated that only the silver disinfectant showed significant residual activity against S. aureus and P. aeruginosa 763. Preliminary data suggest metals are effective against a wide variety of microorganisms. Clinical uses of other heavy metals include copper-8-quinolinolate as a fungicide against Aspergillus, copper-silver ionization for Legionella disinfection, and organic mercurials as an antiseptic (e.g., mercurochrome) and preservative/disinfectant (e.g., thimerosal) in pharmaceuticals and cosmetics 762.

# Ultraviolet Radiation (UV)

The wavelength of UV radiation ranges from 328 nm to 210 nm (3280 Å to 2100 Å). Its maximum bactericidal effect occurs at 240-280 nm. Mercury vapor lamps emit more than 90% of their radiation at 253.7 nm, which is near the maximum microbicidal activity 775. Inactivation of microorganisms results from destruction of nucleic acid through induction of thymine dimers. UV radiation has been employed in the disinfection of drinking water 776, air 775, titanium implants 777, and contact lenses 778. Bacteria and viruses are more easily killed by UV light than are bacterial spores 775.
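Germicidal UV effect depends on the delivered dose, the product of irradiance and exposure time, and irradiance falls off with distance from the lamp (one reason distance and dirty tubes affect efficacy, as noted below). A minimal sketch under a point-source, inverse-square assumption; all numbers are illustrative, not from any cited study:

```python
def uv_dose_mj_per_cm2(irradiance_mw_cm2: float, seconds: float) -> float:
    """UV dose = irradiance x time (mJ/cm^2 = mW/cm^2 x s)."""
    return irradiance_mw_cm2 * seconds

def irradiance_at(distance_m: float, irradiance_at_1m: float) -> float:
    """Inverse-square estimate of irradiance at a new distance, given the
    value measured at 1 meter (idealized point-source assumption)."""
    return irradiance_at_1m / distance_m ** 2

# Example: 0.2 mW/cm^2 measured at 1 m; a target 2 m away, 60-s exposure
print(uv_dose_mj_per_cm2(irradiance_at(2.0, 0.2), 60))  # 3.0 mJ/cm^2
```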
UV radiation has several potential applications, but unfortunately its germicidal effectiveness and use are influenced by organic matter; wavelength; type of suspension; temperature; type of microorganism; and UV intensity, which is affected by distance and dirty tubes 779. The application of UV radiation in the health-care environment (i.e., operating rooms, isolation rooms, and biologic safety cabinets) is limited to destruction of airborne organisms or inactivation of microorganisms on surfaces. The effect of UV radiation on postoperative wound infections was investigated in a double-blind, randomized study in five university medical centers. After following 14,854 patients over a 2-year period, the investigators reported the overall wound infection rate was unaffected by UV radiation, although postoperative infection in the "refined clean" surgical procedures decreased significantly (from 3.8% to 2.9%) 780. No data support the use of UV lamps in isolation rooms, and this practice has caused at least one epidemic of UV-induced skin erythema and keratoconjunctivitis in hospital patients and visitors 781.

# Pasteurization

Pasteurization is not a sterilization process; its purpose is to destroy all pathogenic microorganisms. However, pasteurization does not destroy bacterial spores. The time-temperature relation for hot-water pasteurization is generally ~70°C (158°F) for 30 minutes. The water temperature and time should be monitored as part of a quality-assurance program 782. Pasteurization of respiratory therapy 783,784 and anesthesia equipment 785 is a recognized alternative to chemical disinfection. The efficacy of this process has been tested using an inoculum that the authors believed might simulate contamination by an infected patient. Use of a large inoculum (10^7) of P. aeruginosa or Acinetobacter calcoaceticus in sets of respiratory tubing before processing demonstrated that machine-assisted chemical processing was more efficient than machine-assisted pasteurization, with disinfection failure rates of 6% and 83%, respectively 783. Other investigators found hot-water disinfection to be effective (inactivation factor >5 log10) against multiple bacteria, including multidrug-resistant bacteria, for disinfecting reusable anesthesia or respiratory therapy equipment.

# Flushing- and Washer-Disinfectors

Flushing- and washer-disinfectors are automated and closed equipment that clean and disinfect objects from bedpans and washbowls to surgical instruments and anesthesia tubes. Items such as bedpans and urinals can be cleaned and disinfected in flushing-disinfectors. They have a short cycle of a few minutes. They clean by flushing with warm water, possibly with a detergent, and then disinfect by flushing the items with hot water or with steam. Because this machine empties, cleans, and disinfects, manual cleaning is eliminated, fewer disposable items are needed, and fewer chemical germicides are used. A microbiologic evaluation of one washer/disinfector demonstrated complete inactivation of suspensions of E. faecalis or poliovirus 787. Other studies have shown that strains of Enterococcus faecium can survive the British Standard for heat disinfection of bedpans (80°C for 1 minute). The significance of this finding with reference to the potential for enterococci to survive and disseminate in the health-care environment is debatable. These machines are available and used in many European countries.

Surgical instruments and anesthesia equipment are more difficult to clean.
They are run in washer-disinfectors on a longer cycle of approximately 20-30 minutes with a detergent. These machines also disinfect by hot water at approximately 90°C 791.

# The Regulatory Framework for Disinfectants and Sterilants

Before using the guidance provided in this document, health-care workers should be aware of the federal laws and regulations that govern the sale, distribution, and use of disinfectants and sterilants. In particular, health-care workers need to know what requirements pertain to them when they apply these products. They should also understand the relative roles of EPA, FDA, and CDC so the context for the guidance provided in this document is clear.

# EPA and FDA

In the United States, chemical germicides formulated as sanitizers, disinfectants, or sterilants are regulated in interstate commerce by the Antimicrobials Division, Office of Pesticides Program, EPA, under the authority of the Federal Insecticide, Fungicide, and Rodenticide Act (FIFRA) of 1947, as amended 792. Under FIFRA, any substance or mixture of substances intended to prevent, destroy, repel, or mitigate any pest (including microorganisms but excluding those in or on living humans or animals) must be registered before sale or distribution. To obtain a registration, a manufacturer must submit specific data about the safety and effectiveness of each product. For example, EPA requires manufacturers of sanitizers, disinfectants, or chemical sterilants to test formulations by using accepted methods for microbiocidal activity, stability, and toxicity to animals and humans. The manufacturers submit these data to EPA along with proposed labeling. If EPA concludes the product can be used without causing "unreasonable adverse effects," then the product and its labeling are registered, and the manufacturer can sell and distribute the product in the United States.

FIFRA also requires users of products to follow the labeling directions on each product explicitly. The following standard statement appears on all labels under the "Directions for Use" heading: "It is a violation of federal law to use this product in a manner inconsistent with its labeling." This statement means a health-care worker must follow the safety precautions and use directions on the labeling of each registered product. Failure to follow the specified use-dilution, contact time, method of application, or any other condition of use is considered a misuse of the product and potentially subject to enforcement action under FIFRA.

In general, EPA regulates disinfectants and sterilants used on environmental surfaces, and not those used on critical or semicritical medical devices; the latter are regulated by FDA. In June 1993, FDA and EPA issued a "Memorandum of Understanding" that divided responsibility for review and surveillance of chemical germicides between the two agencies. Under the agreement, FDA regulates liquid chemical sterilants used on critical and semicritical devices, and EPA regulates disinfectants used on noncritical surfaces and gaseous sterilants 793. In 1996, Congress passed the Food Quality Protection Act (FQPA). This act amended FIFRA in regard to several types of products regulated by both EPA and FDA. One provision of FQPA removed regulation of liquid chemical sterilants used on critical and semicritical medical devices from EPA's jurisdiction, and it now rests solely with FDA 792,794. EPA continues to register nonmedical chemical sterilants.
FDA and EPA have considered the impact of FQPA, and in January 2000, FDA published its final guidance document on product submissions and labeling. Antiseptics are considered antimicrobial drugs used on living tissue and thus are regulated by FDA under the Food, Drug, and Cosmetic Act. FDA regulates liquid chemical sterilants and high-level disinfectants intended to process critical and semicritical devices. FDA has published recommendations on the types of test methods that manufacturers should submit to FDA for 510[k] clearance of such agents.

# CDC

At CDC, the mission of the Coordinating Center for Infectious Diseases is to guide the public on how to prevent and respond to infectious diseases in both health-care settings and at home. With respect to disinfectants and sterilants, part of CDC's role is to inform the public (in this case health-care personnel) of current scientific evidence pertaining to these products, to comment about their safety and efficacy, and to recommend which chemicals might be most appropriate or effective for specific microorganisms and settings.

# Test Methods

The methods EPA has used for registration are standardized by AOAC International; however, a survey of the scientific literature reveals a number of problems with these tests, reported during 1987-1990 58,76,80,428,736,737, that cause them to be neither accurate nor reproducible 416,737. As part of their regulatory authority, EPA and FDA support development and validation of methods for assessing disinfection claims. For example, EPA has supported the work of Dr. Syed Sattar and coworkers, who have developed a two-tier quantitative carrier test to assess sporicidal, mycobactericidal, bactericidal, fungicidal, virucidal, and protozoacidal activity of chemical germicides 701,803. EPA is accepting label claims against hepatitis B virus (HBV) using a surrogate organism, the duck HBV, to quantify disinfectant activity 124,804. EPA also is accepting labeling claims against hepatitis C virus using the bovine viral diarrhea virus as a surrogate.

For nearly 30 years, EPA also performed intramural preregistration and postregistration efficacy testing of some chemical disinfectants in its own laboratories. In 1982, this testing was stopped, reportedly for budgetary reasons. At that time, manufacturers did not need to have microbiologic activity claims verified by EPA or an independent testing laboratory when registering a disinfectant or chemical sterilant 805. This occurred when the frequency of contaminated germicides and infections secondary to their use had increased 404. Investigations demonstrating that interlaboratory reproducibility of test results was poor and that manufacturers' label claims were not verifiable 416,737, together with symposia sponsored by the American Society for Microbiology 800, heightened awareness of these problems and reconfirmed the need to improve the AOAC methods and reinstate a microbiologic activity verification program. A General Accounting Office report entitled Disinfectants: EPA Lacks Assurance They Work 806 seemed to provide the necessary impetus for EPA to initiate corrective measures, including cooperative agreements to improve the AOAC methods and independent verification testing for all products labeled as sporicidal and disinfectants labeled as tuberculocidal. For example, of 26 sterilant products tested by EPA, 15 were canceled because of product failure.
A list of products registered with EPA and labeled for use as sterilants or tuberculocides or against HIV and/or HBV is available through EPA's website. Organizations (e.g., the Organization for Economic Cooperation and Development) are working to standardize requirements for germicide testing and registration.

# Neutralization of Germicides

One of the difficulties associated with evaluating the bactericidal activity of disinfectants is prevention of bacteriostasis caused by disinfectant residues carried over into the subculture media. Likewise, small amounts of disinfectants on environmental surfaces can make an accurate bacterial count difficult to obtain when sampling the health-care environment as part of an epidemiologic or research investigation. One way these problems can be overcome is by employing neutralizers that inactivate residual disinfectants. Two commonly used neutralizing media for chemical disinfectants are Letheen Media and D/E Neutralizing Media. The former contains lecithin to neutralize quaternaries and polysorbate 80 (Tween 80) to neutralize phenolics, hexachlorophene, formalin, and, with lecithin, ethanol. The D/E Neutralizing Media will neutralize a broad spectrum of antiseptic and disinfectant chemicals, including quaternary ammonium compounds, phenols, iodine and chlorine compounds, mercurials, formaldehyde, and glutaraldehyde 810. A review of neutralizers used in germicide testing has been published 808.

# Sterilization

Most medical and surgical devices used in healthcare facilities are made of materials that are heat stable and therefore undergo heat, primarily steam, sterilization. However, since 1950, there has been an increase in medical devices and instruments made of materials (e.g., plastics) that require low-temperature sterilization. Ethylene oxide gas has been used since the 1950s for heat- and moisture-sensitive medical devices. Within the past 15 years, a number of new, low-temperature sterilization systems (e.g., hydrogen peroxide gas plasma, peracetic acid immersion, ozone) have been developed and are being used to sterilize medical devices. This section reviews sterilization technologies used in healthcare and makes recommendations for their optimum performance in the processing of medical devices 1,18.

Sterilization destroys all microorganisms on the surface of an article or in a fluid to prevent disease transmission associated with the use of that item. While the use of inadequately sterilized critical items represents a high risk of transmitting pathogens, documented transmission of pathogens associated with an inadequately sterilized critical item is exceedingly rare 821,822. This is likely due to the wide margin of safety associated with the sterilization processes used in healthcare facilities. The concept of what constitutes "sterile" is measured as a probability of sterility for each item to be sterilized. This probability is commonly referred to as the sterility assurance level (SAL) of the product and is defined as the probability of a single viable microorganism occurring on a product after sterilization. SAL is normally expressed as 10^-n. For example, if the probability of a spore surviving were one in one million, the SAL would be 10^-6 823,824. In short, a SAL is an estimate of the lethality of the entire sterilization process and is a conservative calculation.
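The SAL follows from assuming log-linear inactivation kinetics: using the D-value concept defined in the steam sterilization section below (the time at a given temperature to reduce the surviving population by 90%), the expected log10 survivor count drops by one for each D-value of exposure. A minimal sketch of that relationship (the function is ours, for illustration):

```python
def log10_survivors(log10_bioburden: float, exposure_min: float,
                    d_value_min: float) -> float:
    """Expected log10 survivors under log-linear inactivation:
    log10 N(t) = log10 N0 - t/D."""
    return log10_bioburden - exposure_min / d_value_min

# Example: 10^6 spores with a hypothetical D-value of 2 minutes; 24 minutes
# of exposure (a "12-D" process) drives expected survivors to 10^-6 -- the
# SAL of 10^-6 discussed above.
print(log10_survivors(6.0, 24.0, 2.0))  # -6.0
```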
Dual SALs (e.g., a 10^-3 SAL for blood culture tubes and drainage bags; a 10^-6 SAL for scalpels and implants) have been used in the United States for many years, and the choice of a 10^-6 SAL was strictly arbitrary and not associated with any adverse outcomes (e.g., patient infections) 823.

Medical devices that have contact with sterile body tissues or fluids are considered critical items. These items should be sterile when used because any microbial contamination could result in disease transmission. Such items include surgical instruments, biopsy forceps, and implanted medical devices. If these items are heat resistant, the recommended sterilization process is steam sterilization, because it has the largest margin of safety due to its reliability, consistency, and lethality. However, reprocessing heat- and moisture-sensitive items requires use of a low-temperature sterilization technology (e.g., ethylene oxide, hydrogen peroxide gas plasma, peracetic acid) 825. A summary of the advantages and disadvantages of commonly used sterilization technologies is presented in Table 6.

# Steam Sterilization

Overview. Of all the methods available for sterilization, moist heat in the form of saturated steam under pressure is the most widely used and the most dependable. Steam sterilization is nontoxic, inexpensive 826, rapidly microbicidal and sporicidal, and rapidly heats and penetrates fabrics (Table 6) 827. Like all sterilization processes, steam sterilization has some deleterious effects on some materials, including corrosion and combustion of lubricants associated with dental handpieces 212; reduction in ability to transmit light associated with laryngoscopes 828; and increased hardening time (5.6-fold) with plaster cast 829.

The basic principle of steam sterilization, as accomplished in an autoclave, is to expose each item to direct steam contact at the required temperature and pressure for the specified time. Thus, there are four parameters of steam sterilization: steam, pressure, temperature, and time. The ideal steam for sterilization is dry saturated steam and entrained water (dryness fraction ≥97%) 813,819. Pressure serves as a means to obtain the high temperatures necessary to quickly kill microorganisms. Specific temperatures must be obtained to ensure the microbicidal activity. The two common steam-sterilizing temperatures are 121°C (250°F) and 132°C (270°F). These temperatures (and other high temperatures) 830 must be maintained for a minimal time to kill microorganisms. Recognized minimum exposure periods for sterilization of wrapped healthcare supplies are 30 minutes at 121°C (250°F) in a gravity displacement sterilizer or 4 minutes at 132°C (270°F) in a prevacuum sterilizer (Table 7). At constant temperatures, sterilization times vary depending on the type of item (e.g., metal versus rubber, plastic, items with lumens), whether the item is wrapped or unwrapped, and the sterilizer type.

The two basic types of steam sterilizers (autoclaves) are the gravity displacement autoclave and the high-speed prevacuum sterilizer. In the former, steam is admitted at the top or the sides of the sterilizing chamber and, because the steam is lighter than air, forces air out the bottom of the chamber through the drain vent. The gravity displacement autoclaves are primarily used to process laboratory media, water, pharmaceutical products, regulated medical waste, and nonporous articles whose surfaces have direct steam contact.
For gravity displacement sterilizers, the penetration time into porous items is prolonged because of incomplete air elimination. This point is illustrated by the decontamination of 10 lbs of microbiological waste, which requires at least 45 minutes at 121°C because the entrapped air remaining in a load of waste greatly retards steam permeation and heating efficiency 831,832. The high-speed prevacuum sterilizers are similar to the gravity displacement sterilizers except they are fitted with a vacuum pump (or ejector) to ensure air removal from the sterilizing chamber and load before the steam is admitted. The advantage of using a vacuum pump is that there is nearly instantaneous steam penetration even into porous loads. The Bowie-Dick test is used to detect air leaks and inadequate air removal and consists of folded 100% cotton surgical towels that are clean and preconditioned. A commercially available Bowie-Dick-type test sheet should be placed in the center of the pack. The test pack should be placed horizontally in the front, bottom section of the sterilizer rack, near the door and over the drain, in an otherwise empty chamber and run at 134°C for 3.5 minutes 813,819. The test is used each day the vacuum-type steam sterilizer is used, before the first processed load. Air that is not removed from the chamber will interfere with steam contact. Smaller disposable test packs (or process challenge devices) have been devised to replace the stack of folded surgical towels for testing the efficacy of the vacuum system in a prevacuum sterilizer 833. These devices are "designed to simulate product to be sterilized and to constitute a defined challenge to the sterilization process" 819,834. They should be representative of the load and simulate the greatest challenge to the load 835. Sterilizer vacuum performance is acceptable if the sheet inside the test pack shows a uniform color change. Entrapped air will cause a spot to appear on the test sheet, due to the inability of the steam to reach the chemical indicator. If the sterilizer fails the Bowie-Dick test, do not use the sterilizer until it is inspected by the sterilizer maintenance personnel and passes the Bowie-Dick test 813,819,836.

Another design in steam sterilization is a steam flush-pressure pulsing process, which removes air rapidly by repeatedly alternating a steam flush and a pressure pulse above atmospheric pressure. Air is rapidly removed from the load as with the prevacuum sterilizer, but air leaks do not affect this process because the steam in the sterilizing chamber is always above atmospheric pressure. Typical sterilization temperatures and times are 132°C to 135°C with 3 to 4 minutes exposure time for porous loads and instruments 827,837.

Like other sterilization systems, the steam cycle is monitored by mechanical, chemical, and biological monitors. Steam sterilizers usually are monitored using a printout (or graphically) by measuring temperature, the time at the temperature, and pressure. Typically, chemical indicators are affixed to the outside of and incorporated into the pack to monitor the temperature or time and temperature. The effectiveness of steam sterilization is monitored with a biological indicator containing spores of Geobacillus stearothermophilus (formerly Bacillus stearothermophilus). Positive spore test results are a relatively rare event 838 and can be attributed to operator error, inadequate steam delivery 839, or equipment malfunction.
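The three monitor types above (mechanical, chemical, biological) can be combined into a simple load-release decision. The following is a hypothetical sketch of that logic only; the record fields and function are ours, and it is no substitute for facility policy or the cited standards:

```python
from dataclasses import dataclass

@dataclass
class CycleRecord:
    """Hypothetical per-load record covering the three monitor types."""
    temp_ok: bool        # printout reached set temperature for the full time
    chemical_pass: bool  # chemical indicators showed the expected change
    bi_negative: bool    # G. stearothermophilus biological indicator negative

def release_load(rec: CycleRecord) -> bool:
    """Release only if every monitor passed; otherwise hold and investigate
    (operator error, steam delivery, or equipment malfunction)."""
    return rec.temp_ok and rec.chemical_pass and rec.bi_negative

print(release_load(CycleRecord(True, True, True)))   # True
print(release_load(CycleRecord(True, True, False)))  # False -> investigate
```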
Portable (table-top) steam sterilizers are used in outpatient, dental, and rural clinics 840. These sterilizers are designed for small instruments, such as hypodermic syringes and needles and dental instruments. The ability of the sterilizer to reach the physical parameters necessary to achieve sterilization should be monitored by mechanical, chemical, and biological indicators.

Microbicidal Activity. The oldest and most recognized agent for inactivation of microorganisms is heat. D-values (time to reduce the surviving population by 90%, or 1 log10) allow a direct comparison of the heat resistance of microorganisms. Because a D-value can be determined at various temperatures, a subscript is used to designate the exposure temperature (i.e., D121C). D121C-values for Geobacillus stearothermophilus used to monitor the steam sterilization process range from 1 to 2 minutes. Heat-resistant nonspore-forming bacteria, yeasts, and fungi have such low D121C values that they cannot be experimentally measured 841.

# Mode of Action. Moist heat destroys microorganisms by the irreversible coagulation and denaturation of enzymes and structural proteins. In support of this fact, it has been found that the presence of moisture significantly affects the coagulation temperature of proteins and the temperature at which microorganisms are destroyed.

Uses. Steam sterilization should be used whenever possible on all critical and semicritical items that are heat and moisture resistant (e.g., steam-sterilizable respiratory therapy and anesthesia equipment), even when not essential to prevent pathogen transmission. Steam sterilizers also are used in healthcare facilities to decontaminate microbiological waste and sharps containers 831,832,842, but additional exposure time is required in the gravity displacement sterilizer for these items.

# Flash Sterilization

Overview. "Flash" steam sterilization was originally defined by Underwood and Perkins as sterilization of an unwrapped object at 132°C for 3 minutes at 27-28 lbs of pressure in a gravity displacement sterilizer 843. Currently, the time required for flash sterilization depends on the type of sterilizer and the type of item (i.e., porous vs. non-porous items) (see Table 8). Although the wrapped method of sterilization is preferred for the reasons listed below, correctly performed flash sterilization is an effective process for the sterilization of critical medical devices 844,845. Flash sterilization is a modification of conventional steam sterilization (either gravity, prevacuum, or steam-flush pressure-pulse) in which the flashed item is placed in an open tray or in a specially designed, covered, rigid container to allow for rapid penetration of steam. Historically, it has not been recommended as a routine sterilization method because of the lack of timely biological indicators to monitor performance, the absence of protective packaging following sterilization, the possibility for contamination of processed items during transportation to the operating rooms, and the minimal sterilization cycle parameters (i.e., time, temperature, pressure).
To address some of these concerns, many healthcare facilities have done the following: placed equipment for flash sterilization in close proximity to operating rooms to facilitate aseptic delivery to the point of use (usually the sterile field in an ongoing surgical procedure); extended the exposure time to ensure lethality comparable to sterilized wrapped items (e.g., 4 minutes at 132°C) 846,847; used biological indicators that provide results in 1 hour for flash-sterilized items 846,847; and used protective packaging that permits steam penetration 812, 817-819, 845, 848. Further, some rigid, reusable sterilization container systems have been designed and validated by the container manufacturer for use with flash cycles. When sterile items are open to air, they will eventually become contaminated; thus, the longer a sterile item is exposed to air, the greater the number of microorganisms that will settle on it. Sterilization cycle parameters for flash sterilization are shown in Table 8.

A few adverse events have been associated with flash sterilization. When evaluating an increased incidence of neurosurgical infections, investigators noted that surgical instruments were flash sterilized between cases, and 2 of 3 craniotomy infections involved plate implants that were flash sterilized 849. A report of two patients who received burns during surgery from instruments that had been flash sterilized reinforced the need to develop policies and educate staff to prevent the use of instruments hot enough to cause clinical burns 850. Staff should use precautions to prevent burns from potentially hot instruments (e.g., transport the tray using heat-protective gloves). Patient burns can be prevented by either air-cooling the instruments or immersing them in sterile liquid (e.g., saline).

Uses. Flash sterilization is considered acceptable for processing cleaned patient-care items that cannot be packaged, sterilized, and stored before use. It also is used when there is insufficient time to sterilize an item by the preferred package method. Flash sterilization should not be used for reasons of convenience, as an alternative to purchasing additional instrument sets, or to save time 817. Because of the potential for serious infections, flash sterilization is not recommended for implantable devices (i.e., devices placed into a surgically or naturally formed cavity of the human body); however, flash sterilization may be unavoidable for some devices (e.g., orthopedic screws, plates). If flash sterilization of an implantable device is unavoidable, recordkeeping (i.e., load identification, patient's name/hospital identifier, and biological indicator result) is essential for epidemiological tracking (e.g., of surgical site infections, tracing results of biological indicators to patients who received the item to document sterility) and for an assessment of the reliability of the sterilization process (e.g., evaluation of biological monitoring records and sterilization maintenance records noting preventive maintenance and repairs with dates).

# Low-Temperature Sterilization Technologies

Ethylene oxide (ETO) has been widely used as a low-temperature sterilant since the 1950s. It has been the most commonly used process for sterilizing temperature- and moisture-sensitive medical devices and supplies in healthcare institutions in the United States. Two types of ETO sterilizers are available: mixed gas and 100% ETO.
Until 1995, ethylene oxide sterilizers combined ETO with a chlorofluorocarbon (CFC) stabilizing agent, most commonly in a ratio of 12% ETO mixed with 88% CFC (referred to as 12/88 ETO). For several reasons, healthcare personnel have been exploring the use of new low-temperature sterilization technologies 825,851. First, CFCs were phased out in December 1995 under provisions of the Clean Air Act 852. CFCs were classified as a Class I substance under the Clean Air Act because of scientific evidence linking them to destruction of the earth's ozone layer. Second, some states (e.g., California, New York, Michigan) require the use of ETO abatement technology to reduce the amount of ETO being released into ambient air by 90% to 99.9%, depending on the state. Third, OSHA regulates the acceptable vapor levels of ETO (i.e., 1 ppm averaged over 8 hours) due to concerns that ETO exposure represents an occupational hazard 318. These constraints have led to the development of alternative technologies for low-temperature sterilization in the healthcare setting.

Alternative technologies to ETO with chlorofluorocarbon that are currently available and cleared by the FDA for medical equipment include 100% ETO; ETO with a different stabilizing gas, such as carbon dioxide or hydrochlorofluorocarbons (HCFC); immersion in peracetic acid; hydrogen peroxide gas plasma; and ozone. Technologies under development for use in healthcare facilities, but not cleared by the FDA, include vaporized hydrogen peroxide, vapor-phase peracetic acid, gaseous chlorine dioxide, ionizing radiation, and pulsed light 400,758,853. However, there is no guarantee that these new sterilization technologies will receive FDA clearance for use in healthcare facilities. These new technologies should be compared against the characteristics of an ideal low-temperature (<60°C) sterilant (Table 9) 851. While it is apparent that all technologies will have limitations (Table 9), understanding the limitations imposed by restrictive device designs (e.g., long, narrow lumens) is critical for proper application of new sterilization technology 854. For example, the development of increasingly small and complex endoscopes presents a difficult challenge for current sterilization processes, because microorganisms must be in direct contact with the sterilant for inactivation to occur. Several peer-reviewed scientific publications have data demonstrating concerns about the efficacy of several of the low-temperature sterilization processes (i.e., gas plasma, vaporized hydrogen peroxide, ETO, peracetic acid), particularly when the test organisms are challenged in the presence of serum and salt and a narrow-lumen vehicle 469,721,825,855,856. Factors shown to affect the efficacy of sterilization are shown in Table 10.

# Ethylene Oxide "Gas" Sterilization

Overview. ETO is a colorless gas that is flammable and explosive. The four essential parameters (operational ranges) are: gas concentration (450 to 1200 mg/l); temperature (37 to 63°C); relative humidity (40 to 80%) (water molecules carry ETO to reactive sites); and exposure time (1 to 6 hours). These parameters influence the effectiveness of ETO sterilization 814,857,858. Within certain limitations, an increase in gas concentration and temperature may shorten the time necessary for achieving sterilization.
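The four operational ranges above lend themselves to a simple parameter check. A minimal sketch of that validation (the ranges are from the text; the data structure and function are ours, and actual set points must come from the sterilizer manufacturer's instructions, not this illustration):

```python
# Operational ranges for ETO cycles as stated above (not device-specific
# set points -- always follow the sterilizer manufacturer's instructions).
ETO_RANGES = {
    "gas_mg_per_l": (450, 1200),
    "temp_c": (37, 63),
    "rel_humidity_pct": (40, 80),
    "exposure_hr": (1, 6),
}

def out_of_range(cycle: dict) -> list:
    """Return the parameters of a proposed cycle falling outside the ranges."""
    return [name for name, (lo, hi) in ETO_RANGES.items()
            if not lo <= cycle[name] <= hi]

cycle = {"gas_mg_per_l": 600, "temp_c": 55,
         "rel_humidity_pct": 35, "exposure_hr": 4}
print(out_of_range(cycle))  # ['rel_humidity_pct'] -- humidity too low
```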
The main disadvantages associated with ETO are the lengthy cycle time, the cost, and its potential hazards to patients and staff; the main advantage is that it can sterilize heat- or moisture-sensitive medical equipment without deleterious effects on the material used in the medical devices (Table 6). Acute exposure to ETO may result in irritation (e.g., to skin, eyes, gastrointestinal or respiratory tracts) and central nervous system depression. Chronic inhalation has been linked to the formation of cataracts, cognitive impairment, neurologic dysfunction, and disabling polyneuropathies 860,861. Occupational exposure in healthcare facilities has been linked to hematologic changes 867 and an increased risk of spontaneous abortions and various cancers 318. ETO should be considered a known human carcinogen 871.

The basic ETO sterilization cycle consists of five stages (i.e., preconditioning and humidification, gas introduction, exposure, evacuation, and air washes) and takes approximately 2 1/2 hours excluding aeration time. Mechanical aeration for 8 to 12 hours at 50 to 60°C allows desorption of the toxic ETO residual contained in exposed absorbent materials. Most modern ETO sterilizers combine sterilization and aeration in the same chamber as a continuous process. These ETO models minimize potential ETO exposure during door opening and load transfer to the aerator. Ambient room aeration also will achieve desorption of the toxic ETO but requires 7 days at 20°C. There are no federal regulations for ETO sterilizer emission; however, many states have promulgated emission-control regulations 814.

The use of ETO evolved when few alternatives existed for sterilizing heat- and moisture-sensitive medical devices; however, favorable properties (Table 6) account for its continued widespread use 872. Two ETO gas mixtures are available to replace ETO-chlorofluorocarbon (CFC) mixtures for large-capacity, tank-supplied sterilizers. The ETO-carbon dioxide (CO2) mixture consists of 8.5% ETO and 91.5% CO2. This mixture is less expensive than ETO-hydrochlorofluorocarbons (HCFC), but a disadvantage is the need for pressure vessels rated for steam sterilization, because higher pressures (28 psi gauge) are required. The other mixture, which is a drop-in CFC replacement, is ETO mixed with HCFC. HCFCs are approximately 50-fold less damaging to the earth's ozone layer than are CFCs. The EPA will begin regulation of HCFC in the year 2015 and will terminate production in the year 2030. Two companies provide ETO-HCFC mixtures as drop-in replacements for CFC-12; one mixture consists of 8.6% ETO and 91.4% HCFC, and the other is composed of 10% ETO and 90% HCFC 872. An alternative to the pressurized mixed-gas ETO systems is 100% ETO. The 100% ETO sterilizers using unit-dose cartridges eliminate the need for external tanks.

ETO is absorbed by many materials. For this reason, following sterilization the item must undergo aeration to remove residual ETO. Guidelines have been promulgated regarding allowable ETO limits for devices; these limits depend on how the device is used, how often, and for how long, in order to pose minimal risk to patients in normal product use 814.

ETO toxicity has been established in a variety of animals. Exposure to ETO can cause eye pain, sore throat, difficulty breathing, and blurred vision. Exposure can also cause dizziness, nausea, headache, convulsions, blisters, and vomiting and coughing 873. In a variety of in vitro and animal studies, ETO has been demonstrated to be carcinogenic.
ETO has been linked to spontaneous abortion, genetic damage, nerve damage, peripheral paralysis, muscle weakness, and impaired thinking and memory 873 . Occupational exposure in healthcare facilities has been linked to an increased risk of spontaneous abortions and various cancers 318 . Injuries (e.g., tissue burns) to patients have been associated with ETO residues in implants used in surgical procedures 874 . Residual ETO in capillary flow dialysis membranes has been shown to be neurotoxic in vitro 875 . OSHA has established a PEL of 1 ppm airborne ETO in the workplace, expressed as a TWA for an 8-hour work shift in a 40-hour work week. The "action level" for ETO is 0.5 ppm, expressed as an 8-hour TWA, and the short-term excursion limit is 5 ppm, expressed as a 15-minute TWA 814 . For details of the requirements in OSHA's ETO standard for occupational exposures, see Title 29 of the Code of Federal Regulations (CFR) Part 1910.1047 873 . Several personnel monitoring methods (e.g., charcoal tubes and passive sampling devices) are in use 814 . OSHA has established a PEL of 5 ppm for ethylene chlorohydrin (a toxic by-product of ETO) in the workplace 876 . Additional information regarding use of ETO in health care facilities is available from NIOSH. # Mode of Action. The microbicidal activity of ETO is considered to be the result of alkylation of protein, DNA, and RNA. Alkylation, or the replacement of a hydrogen atom with an alkyl group, within cells prevents normal cellular metabolism and replication 877 . Microbicidal Activity. The excellent microbicidal activity of ETO has been demonstrated in several studies 469,721,722,856,878,879 and summarized in published reports 877 . ETO inactivates all microorganisms although bacterial spores (especially B. atrophaeus) are more resistant than other microorganisms. For this reason B. atrophaeus is the recommended biological indicator. Like all sterilization processes, the effectiveness of ETO sterilization can be altered by lumen length, lumen diameter, inorganic salts, and organic materials 469,721,722,855,856,879 . For example, although ETO is not used commonly for reprocessing endoscopes 28 , several studies have shown failure of ETO in inactivating contaminating spores in endoscope channels 855 or lumen test units 469,721,879 and residual ETO levels averaging 66.2 ppm even after the standard degassing time 456 . Failure of ETO also has been observed when dental handpieces were contaminated with Streptococcus mutans and exposed to ETO 880 . It is recommended that dental handpieces be steam sterilized. Uses. ETO is used in healthcare facilities to sterilize critical items (and sometimes semicritical items) that are moisture or heat sensitive and cannot be sterilized by steam sterilization. # Hydrogen Peroxide Gas Plasma Overview. New sterilization technology based on plasma was patented in 1987 and marketed in the United States in 1993. Gas plasmas have been referred to as the fourth state of matter (i.e., liquids, solids, gases, and gas plasmas). Gas plasmas are generated in an enclosed chamber under deep vacuum using radio frequency or microwave energy to excite the gas molecules and produce charged particles, many of which are in the form of free radicals. A free radical is an atom with an unpaired electron and is a highly reactive species. 
The proposed mechanism of action of this device is the production of free radicals within a plasma field that are capable of interacting with essential cell components (e.g., enzymes, nucleic acids) and thereby disrupting the metabolism of microorganisms. The type of seed gas used and the depth of the vacuum are two important variables that can determine the effectiveness of this process.

In the late 1980s the first hydrogen peroxide gas plasma system for sterilization of medical and surgical devices was field-tested. According to the manufacturer, the sterilization chamber is evacuated and hydrogen peroxide solution is injected from a cassette and vaporized in the sterilization chamber to a concentration of 6 mg/l. The hydrogen peroxide vapor diffuses through the chamber (50 minutes), exposes all surfaces of the load to the sterilant, and initiates the inactivation of microorganisms. An electrical field created by a radio frequency is applied to the chamber to create a gas plasma. Microbicidal free radicals (e.g., hydroxyl and hydroperoxyl) are generated in the plasma. The excess gas is removed, and in the final stage (i.e., vent) of the process the sterilization chamber is returned to atmospheric pressure by introduction of high-efficiency filtered air. The by-products of the cycle (e.g., water vapor, oxygen) are nontoxic and eliminate the need for aeration. Thus, the sterilized materials can be handled safely, either for immediate use or storage. The process operates in the range of 37-44°C and has a cycle time of 75 minutes. If any moisture is present on the objects, the vacuum will not be achieved and the cycle aborts 856.

A newer version of the unit improves sterilizer efficacy by using two cycles, each with a hydrogen peroxide diffusion stage and a plasma stage per sterilization cycle. This revision, which is achieved by a software modification, reduces total processing time from 73 to 52 minutes. The manufacturer believes that the enhanced activity obtained with this system is due in part to the pressure changes that occur during the injection and diffusion phases of the process and to the fact that the process consists of two equal and consecutive half cycles, each with a separate injection of hydrogen peroxide 856,884,885. This system and a smaller version 400,882 have received FDA 510[k] clearance with limited application for sterilization of medical devices (Table 6). The biological indicator used with this system is Bacillus atrophaeus spores 851. The newest version of the unit, which employs a new vaporization system that removes most of the water from the hydrogen peroxide, has a cycle time of 28-38 minutes (see manufacturer's literature for device dimension restrictions).

Penetration of hydrogen peroxide vapor into long or narrow lumens has been addressed outside the United States by the use of a diffusion enhancer. This is a small, breakable glass ampoule of concentrated hydrogen peroxide (50%) with an elastic connector that is inserted into the device lumen and crushed immediately before sterilization 470,885. The diffusion enhancer has been shown to sterilize bronchoscopes contaminated with Mycobacterium tuberculosis 886. At the present time, the diffusion enhancer is not FDA cleared.
Another gas plasma system, which differed from the above in several important ways, including its use of peracetic acid-acetic acid-hydrogen peroxide vapor, was removed from the marketplace because of reports of corneal destruction in patients whose ophthalmic surgery instruments had been processed in the sterilizer 887,888. In this investigation, exposure of potentially wet ophthalmologic surgical instruments with small bores and brass components to the plasma gas led to degradation of the brass to copper and zinc 888,889. The experimenters showed that when rabbit eyes were exposed to the rinsates of the gas plasma-sterilized instruments, corneal decompensation was documented. This toxicity is highly unlikely with the hydrogen peroxide gas plasma process, since a toxic, soluble form of copper would not form (LA Feldman, written communication, April 1998).

# Mode of Action.
This process inactivates microorganisms primarily by the combined use of hydrogen peroxide gas and the generation of free radicals (hydroxyl and hydroperoxyl free radicals) during the plasma phase of the cycle.

Microbicidal Activity. This process has the ability to inactivate a broad range of microorganisms, including resistant bacterial spores. Studies have been conducted against vegetative bacteria (including mycobacteria), yeasts, fungi, viruses, and bacterial spores 469,721,856. Like all sterilization processes, the effectiveness can be altered by lumen length, lumen diameter, inorganic salts, and organic materials 469,721,855,856,890,891,893.

Uses. Materials and devices that cannot tolerate high temperatures and humidity, such as some plastics, electrical devices, and corrosion-susceptible metal alloys, can be sterilized by hydrogen peroxide gas plasma. This method has been compatible with most (>95%) medical devices and materials tested 884,894,895.

# Peracetic Acid Sterilization
Overview. Peracetic acid is a highly biocidal oxidizer that maintains its efficacy in the presence of organic soil. Peracetic acid removes surface contaminants (primarily protein) on endoscopic tubing 711,717. An automated machine using peracetic acid to chemically sterilize medical, surgical, and dental instruments (e.g., endoscopes, arthroscopes) was introduced in 1988. This microprocessor-controlled, low-temperature sterilization method is commonly used in the United States 107. The sterilant, 35% peracetic acid, and an anticorrosive agent are supplied in a single-dose container. The container is punctured at the time of use, immediately prior to closing the lid and initiating the cycle. The concentrated peracetic acid is diluted to 0.2% with filtered water (0.2 µm) at a temperature of approximately 50°C. The diluted peracetic acid is circulated within the chamber of the machine and pumped through the channels of the endoscope for 12 minutes, decontaminating exterior surfaces, lumens, and accessories. Interchangeable trays are available to permit the processing of up to three rigid endoscopes or one flexible endoscope. Connectors are available for most types of flexible endoscopes for the irrigation of all channels by directed flow. Rigid endoscopes are placed within a lidded container, and the sterilant fills the lumens either by immersion in the circulating sterilant or by use of channel connectors to direct flow into the lumen(s) (see below for the importance of channel connectors). The peracetic acid is discarded via the sewer, and the instrument is rinsed four times with filtered water.
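The use-dilution step is ordinary C1V1 = C2V2 arithmetic: diluting 35% peracetic acid to 0.2% is a 175-fold dilution. The sketch below shows the calculation; the concentrate volume used is hypothetical, since container volumes vary by product.

```python
# C1*V1 = C2*V2 dilution check for a peracetic acid processor (illustrative).
c1 = 35.0     # % peracetic acid in the single-dose concentrate
c2 = 0.2      # % target use-dilution
v1_ml = 40.0  # hypothetical concentrate volume (mL); actual volumes vary by product

dilution_factor = c1 / c2        # 175-fold
v2_ml = v1_ml * dilution_factor  # total diluted volume
water_ml = v2_ml - v1_ml         # filtered water added
print(f"{dilution_factor:.0f}-fold dilution -> {v2_ml/1000:.1f} L total "
      f"({water_ml/1000:.2f} L filtered water)")
```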
Concern has been raised that filtered water may be inadequate to maintain sterility 896. Limited data have shown that low-level bacterial contamination may follow the use of filtered water in an AER, but no data have been published on AERs using the peracetic acid system 161. Clean filtered air is passed through the chamber of the machine and endoscope channels to remove excess water 719. As with any sterilization process, the system can only sterilize surfaces that can be contacted by the sterilant. For example, bronchoscopy-related infections occurred when bronchoscopes were processed using the wrong connector 155,725. Investigation of these incidents revealed that bronchoscopes were inadequately reprocessed when inappropriate channel connectors were used and when there were inconsistencies between the reprocessing instructions provided by the manufacturer of the bronchoscope and the manufacturer of the automatic endoscope reprocessor 155. The importance of channel connectors to achieve sterilization was also shown for rigid-lumen devices 137,856. The manufacturers suggest the use of biological monitors (G. stearothermophilus spore strips) both at the time of installation and routinely, to ensure effectiveness of the process. The manufacturer's clip must be used to hold the strip in the designated spot in the machine, as a broader clamp will not allow the sterilant to reach the spores trapped under it 897. One investigator reported a 3% failure rate when the appropriate clips were used to hold the spore strip within the machine 718. The use of biological monitors designed for steam sterilization or ETO to monitor a liquid chemical sterilizer has been questioned for several reasons, including spore wash-off from the filter-paper strips, which may make monitoring less valid. The processor is equipped with a conductivity probe that will automatically abort the cycle if the buffer system is not detected in a fresh container of the peracetic acid solution. A chemical monitoring strip that detects that the active ingredient is >1500 ppm is available for routine use as an additional process control.

# Mode of Action.
Only limited information is available regarding the mechanism of action of peracetic acid, but it is thought to function like other oxidizing agents; i.e., it denatures proteins, disrupts cell wall permeability, and oxidizes sulfhydryl and sulfur bonds in proteins, enzymes, and other metabolites 654,726.

Microbicidal Activity. Peracetic acid will inactivate gram-positive and gram-negative bacteria, fungi, and yeasts in <5 minutes at <100 ppm. In the presence of organic matter, 200-500 ppm is required. For viruses, the dosage range is wide (12-2250 ppm), with poliovirus in yeast extract inactivated in 15 minutes with 1500 to 2250 ppm. Bacterial spores in suspension are inactivated in 15 seconds to 30 minutes with 500 to 10,000 ppm (0.05 to 1%) 654. Simulated-use trials have demonstrated microbicidal activity 111, and three clinical trials have demonstrated both microbial killing and no clinical failures leading to infection 90,723,724. Alfa and coworkers, who compared the peracetic acid system with ETO, demonstrated the high efficacy of the system. Only the peracetic acid system was able to completely kill 6-log10 of Mycobacterium chelonae, Enterococcus faecalis, and B. atrophaeus spores with both an organic and inorganic challenge 722. Like other sterilization processes, the efficacy of the process can be diminished by soil challenges 902 and test conditions 856.
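Efficacy results in this section are reported as log10 reductions, so a brief worked example may clarify the scale: a 6-log10 reduction means the surviving population is 10^6-fold smaller than the inoculum (e.g., 10^6 CFU reduced to about 1 CFU). The counts below are illustrative only.

```python
import math

# Log10 reduction = log10(initial count) - log10(surviving count).
initial_cfu = 1_000_000  # 10^6 CFU inoculum, the benchmark used in this section
surviving_cfu = 1        # complete kill down to the detection limit

log_reduction = math.log10(initial_cfu) - math.log10(surviving_cfu)
print(f"{log_reduction:.1f}-log10 reduction")  # 6.0-log10

# By contrast, 10^3 survivors from the same inoculum is only a 3-log10 reduction:
print(f"{math.log10(initial_cfu) - math.log10(1000):.1f}-log10 reduction")
```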
Uses. This automated machine is used to chemically sterilize medical (e.g., GI endoscopes) and surgical (e.g., flexible endoscopes) instruments in the United States. Lumened endoscopes must be connected to an appropriate channel connector to ensure that the sterilant has direct contact with the contaminated lumen 137,856,903. Olympus America has not listed this system as a compatible product for use in reprocessing Olympus bronchoscopes and gastrointestinal endoscopes (Olympus America, January 30, 2002, written communication).

# Microbicidal Activity of Low-Temperature Sterilization Technologies
Sterilization processes used in the United States must be cleared by FDA, and FDA requires that sterilizer microbicidal performance be tested under simulated-use conditions 904. FDA requires that the test article be inoculated with 10^6 colony-forming units of the most resistant test organism and prepared with organic and inorganic test loads as would occur after actual use. FDA requires manufacturers to use organic soil (e.g., 5% fetal calf serum), dried onto the device with the inoculum, to represent soil remaining on the device following marginal cleaning. However, 5% fetal calf serum as a measure of marginal cleaning has not been validated by measurements of the protein load on devices following use and the level of protein removal by various cleaning methods. The inocula must be placed in various locations of the test articles, including those least favorable to penetration and contact with the sterilant (e.g., lumens). Cleaning before sterilization is not allowed in the demonstration of sterilization efficacy 904. Several studies have evaluated the relative microbicidal efficacy of these low-temperature sterilization technologies (Table 11). These studies have tested the activity of a sterilization process against specific microorganisms 892,905,906, evaluated the microbicidal activity of a single technology 711,719,724,855,879,882-884,890,891,907, or evaluated the comparative effectiveness of several sterilization technologies 271,426,469,721,722,856,908,909. Several test methodologies use stainless steel or porcelain carriers that are inoculated with a test organism. Commonly used test organisms include vegetative bacteria, mycobacteria, and spores of Bacillus species. The available data demonstrate that low-temperature sterilization technologies are able to provide a 6-log10 reduction of microbes when inoculated onto carriers in the absence of salt and serum. However, tests can be constructed such that all of the available sterilization technologies are unable to reliably achieve complete inactivation of a microbial load 425,426,469,721,856,909. For example, almost all of the sterilization processes will fail to reliably inactivate the microbial load in the presence of salt and serum 469,721,909. The effects of salt and serum on the sterilization process were studied initially in the 1950s and 1960s 424,910. These studies showed that a high concentration of crystalline-type materials and a low protein content provided greater protection to spores than did serum with a high protein content 426. A study by Doyle and Ernst demonstrated that protection of spores by crystalline material applies not only to low-temperature sterilization technology but also to steam and dry heat 425.
These studies showed that occlusion of Bacillus atrophaeus spores in calcium carbonate crystals dramatically increased the time required for inactivation: from 10 seconds to 150 minutes for steam (121°C), from 3.5 hours to 50 hours for dry heat (121°C), and from 30 seconds to >2 weeks for ETO (54°C). Investigators have corroborated and extended these findings 469,470,721,855,908,909. While soils containing both organic and inorganic materials impair microbial killing, soils that contain a high inorganic salt-to-protein ratio favor crystal formation and impair sterilization by occlusion of organisms 425,426,881. Alfa and colleagues demonstrated a 6-log10 reduction of the microbial inoculum on porcelain penicylinders using a variety of vegetative and spore-forming organisms (Table 11) 469. However, if the bacterial inoculum was in tissue-culture medium supplemented with 10% serum, only the ETO 12/88 and ETO-HCFC sterilization mixtures could sterilize 95% to 97% of the penicylinder carriers. The plasma and 100% ETO sterilizers demonstrated significantly reduced activity (Table 11). For all sterilizers evaluated using penicylinder carriers (i.e., ETO 12/88, 100% ETO, hydrogen peroxide gas plasma), there was a 3- to 6-log10 reduction of inoculated bacteria even in the presence of serum and salt. For each sterilizer evaluated, the ability to inactivate microorganisms in the presence of salt and serum was reduced even further when the inoculum was placed in a narrow-lumen test object (3 mm diameter by 125 cm long). Although there was a 2- to 4-log10 reduction in microbial kill, less than 50% of the lumen test objects were sterile when processed using any of the sterilization methods evaluated except the peracetic acid immersion system (Table 11) 721. Complete killing (or removal) of 6-log10 of Enterococcus faecalis, Mycobacterium chelonei, and Bacillus atrophaeus spores in the presence of salt and serum and lumen test objects was observed only for the peracetic acid immersion system. With respect to the results of Alfa and coworkers 469, Jacobs showed that the use of tissue-culture media created a technique-induced sterilization failure 426. Jacobs et al. showed that microorganisms mixed with tissue-culture media, used as a surrogate body fluid, formed physical crystals that protected the microorganisms used as a challenge. If the carriers were exposed to nonflowing water for 60 seconds, the salts dissolved and the protective effect disappeared. Since any device would be exposed to water for a short period of time during the washing procedure, these protective effects would have little clinical relevance 426. Narrow lumens provide a challenge to some low-temperature sterilization processes. For example, Rutala and colleagues showed that, as lumen size decreased, increased failures occurred with some low-temperature sterilization technologies. However, some low-temperature processes, such as ETO-HCFC and the hydrogen peroxide gas plasma process, remained effective even when challenged by a lumen as small as 1 mm in the absence of salt and serum 856. The importance of allowing the sterilant to come into contact with the inoculated carrier is demonstrated by comparing the results of two investigators who studied the peracetic acid immersion system. Alfa and coworkers demonstrated excellent activity of the peracetic acid immersion system against three test organisms using a narrow-lumen device.
In these experiments, the lumen test object was connected to channel irrigators, which ensured that the sterilant had direct contact with the contaminated carriers 722. This effectiveness was achieved through a combination of organism wash-off and killing of the test organisms by the peracetic acid sterilant 722. The data reported by Rutala et al. demonstrated failure of the peracetic acid immersion system to eliminate Geobacillus stearothermophilus spores from a carrier placed in a lumen test object. In these experiments, the lumen test unit was not connected to channel irrigators. The authors attributed the failure of the peracetic acid immersion system to eliminate the high levels of spores from the center of the test unit to the inability of the peracetic acid to diffuse into the center of 40-cm long, 3-mm diameter tubes. This may be caused by an air lock or air bubbles formed in the lumen, impeding the flow of the sterilant through the long and narrow lumen and limiting complete access to the Bacillus spores 137,856. Experiments using a channel connector specifically designed for 1-, 2-, and 3-mm lumen test units with the peracetic acid immersion system were completely effective in eliminating an inoculum of 10^6 Geobacillus stearothermophilus spores 7. The restricted diffusion environment that exists in the test conditions would not exist with flexible scopes processed in the peracetic acid immersion system, because the scopes are connected to channel irrigators to ensure that the sterilant has direct contact with contaminated surfaces. Alfa and associates attributed the efficacy of the peracetic acid immersion system to the ability of the liquid chemical process to dissolve salts and to remove protein and bacteria through the flushing action of the fluid 722.

# Bioburden of Surgical Devices
In general, used medical devices are contaminated with a relatively low bioburden of organisms 179,911,912. Nystrom evaluated medical instruments used in general surgical, gynecological, orthopedic, and ear-nose-throat operations and found that 62% of the instruments were contaminated with 10^2 organisms 911. Other investigators have published similar findings 179,912. For example, after a standard cleaning procedure, 72% of 50 surgical instruments contained 3 × 10^2 organisms 912. In another study of rigid-lumen medical devices, the bioburden on both the inner and outer surfaces of the lumen ranged from 10^1 to 10^4 organisms per device. After cleaning, 83% of the devices had a bioburden ≤10^2 organisms 179. In all of these studies, the contaminating microflora consisted mainly of vegetative bacteria, usually of low pathogenicity (e.g., coagulase-negative Staphylococcus) 179,911,912. An evaluation of the microbial load on used critical medical devices such as spinal anesthesia needles and angiographic catheters and sheaths demonstrated that mesophilic microorganisms were detected at levels of 10^1 to 10^2 in only two of five needles. The bioburden on used angiographic catheters and sheath introducers exceeded 10^3 CFUs on 14% (3 of 21) and 21% (6 of 28), respectively 907.

# Effect of Cleaning on Sterilization Efficacy
The effect of salt and serum on the efficacy of low-temperature sterilization technologies has raised concern regarding the margin of safety of these technologies. Experiments have shown that salts have the greatest impact on protecting microorganisms from killing 426,469. However, other studies have suggested that these concerns may not be clinically relevant.
One study evaluated the relative rates of removal of inorganic salts, organic soil, and microorganisms from medical devices to better understand the dynamics of the cleaning process 426. These tests were conducted by inoculating Alfa soil (tissue-culture media and 10% fetal bovine serum) 469 containing 10^6 G. stearothermophilus spores onto the surface of a stainless-steel scalpel blade. After drying for 30 minutes at 35°C followed by 30 minutes at room temperature, the samples were placed in water at room temperature. The blades were removed at specified times, and the concentrations of total protein and chloride ion were measured. The results showed that soaking in deionized water produced >95% release of chloride ion from the NaCl solution within 20 seconds, from Alfa soil within 30 seconds, and from fetal bovine serum within 120 seconds. Thus, contact with water for short periods, even in the presence of protein, rapidly leads to dissolution of salt crystals and complete inactivation of spores by a low-temperature sterilization process (Table 10). Based on these experimental data, cleaning procedures would eliminate the detrimental effect of high salt content on a low-temperature sterilization process. These articles 426,469,721 assessing low-temperature sterilization technology reinforce the importance of meticulous cleaning before sterilization. These data support the critical need for healthcare facilities to develop rigid protocols for cleaning contaminated objects before sterilization 472. Sterilization of instruments and medical devices is compromised if the process is not preceded by meticulous cleaning. The cleaning of any narrow-lumen medical device used in patient care presents a major challenge to reprocessing areas. While attention has been focused on flexible endoscopes, cleaning issues related to other narrow-lumen medical devices such as sphincterotomes have been investigated 913. This study compared manual cleaning with automated cleaning using a narrow-lumen cleaner and found that only retro-flushing with the narrow-lumen cleaner provided adequate cleaning of the three channels. If reprocessing was delayed for more than 24 hours, retro-flush cleaning was no longer effective, and ETO sterilization failure was detected when devices were held for 7 days 913. In another study involving simulated-use cleaning of laparoscopic devices, Alfa found that, at a minimum, retro-flushing should be used during cleaning of non-ported laparoscopic devices 914.

# Other Sterilization Methods
Ionizing Radiation. Sterilization by ionizing radiation, primarily by cobalt 60 gamma rays or electron accelerators, is a low-temperature sterilization method that has been used for a number of medical products (e.g., tissue for transplantation, pharmaceuticals, medical devices). There are no FDA-cleared ionizing radiation sterilization processes for use in healthcare facilities. Because of high sterilization costs, this method is an unfavorable alternative to ETO and plasma sterilization in healthcare facilities but is suitable for large-scale sterilization. Some deleterious effects of gamma radiation on patient-care equipment include induced oxidation in polyethylene 915 and delamination and cracking in polyethylene knee bearings 916. Several reviews 917,918 dealing with the sources, effects, and application of ionizing radiation may be consulted for more detail.
# Dry-Heat Sterilizers.
This method should be used only for materials that might be damaged by moist heat or that are impenetrable to moist heat (e.g., powders, petroleum products, sharp instruments). The advantages of dry heat include the following: it is nontoxic and does not harm the environment; a dry-heat cabinet is easy to install and has relatively low operating costs; it penetrates materials; and it is noncorrosive for metal and sharp instruments. The disadvantages of dry heat are that the slow rate of heat penetration and microbial killing make it a time-consuming method and that the high temperatures are not suitable for most materials 919. The most common time-temperature relationships for sterilization with hot-air sterilizers are 170°C (340°F) for 60 minutes, 160°C (320°F) for 120 minutes, and 150°C (300°F) for 150 minutes. B. atrophaeus spores should be used to monitor the sterilization process for dry heat because they are more resistant to dry heat than are G. stearothermophilus spores. The primary lethal process is considered to be oxidation of cell constituents. There are two types of dry-heat sterilizers: the static-air type and the forced-air type. The static-air type is referred to as the oven-type sterilizer, because heating coils in the bottom of the unit cause the hot air to rise inside the chamber via gravity convection. This type of dry-heat sterilizer is much slower in heating, requires a longer time to reach sterilizing temperature, and is less uniform in temperature control throughout the chamber than is the forced-air type. The forced-air or mechanical convection sterilizer is equipped with a motor-driven blower that circulates heated air throughout the chamber at high velocity, permitting a more rapid transfer of energy from the air to the instruments 920.

Liquid Chemicals. Several FDA-cleared liquid chemical sterilants include indications for sterilization of medical devices (Tables 4 and 5) 69. The indicated contact times range from 3 hours to 12 hours. However, except for a few of the products, the contact time is based only on the conditions required to pass the AOAC Sporicidal Test as a sterilant and not on simulated-use testing with devices. These solutions are commonly used as high-level disinfectants when a shorter processing time is required. Generally, chemical liquid sterilants cannot be monitored with a biological indicator to verify sterility 899,900. The survival kinetics for thermal sterilization methods, such as steam and dry heat, have been studied and characterized extensively, whereas the kinetics for sterilization with liquid sterilants are less well understood 921. The information available in the literature suggests that sterilization processes based on liquid chemical sterilants, in general, may not convey the same sterility assurance level as sterilization achieved by thermal or physical methods 823. The data indicate that the survival curves for liquid chemical sterilants may not exhibit log-linear kinetics, and the shape of the survivor curve may vary depending on the formulation, chemical nature, and stability of the liquid chemical sterilant. In addition, the design of the AOAC Sporicidal Test does not provide quantification of the microbial challenge. Therefore, sterilization with a liquid chemical sterilant may not convey the same sterility assurance as other sterilization methods.
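For context, the survival kinetics referred to here are conventionally summarized with the decimal reduction time (D-value) under the log-linear model that, as noted above, liquid chemical sterilants may not follow. A standard textbook formulation (not specific to any product discussed in this guideline) is:

```latex
% Log-linear survival model conventionally used for thermal sterilization,
% where N_0 is the initial spore population and D is the exposure time
% required for a 1-log10 (90%) reduction at a stated temperature:
N(t) = N_0 \cdot 10^{-t/D},
\qquad
t_{\mathrm{SAL}=10^{-6}} = D\left(\log_{10} N_0 + 6\right)
```

Under this model, reducing a 10^6-spore inoculum to a sterility assurance level of 10^-6 corresponds to a 12-log10 (12D) process. When the survivor curve is not log-linear, no single D-value characterizes the process, which is one reason quantitative sterility assurance is harder to establish for liquid chemical sterilants.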
One of the differences between thermal and liquid chemical processes for sterilization of devices is the accessibility of microorganisms to the sterilant. Heat can penetrate barriers, such as biofilm, tissue, and blood, to attain organism kill, whereas liquids cannot adequately penetrate these barriers. In addition, the viscosity of some liquid chemical sterilants impedes their access to organisms in the narrow lumens and mated surfaces of devices 922. Another limitation to sterilization of devices with liquid chemical germicides is the post-processing environment of the device. Devices cannot be wrapped or adequately contained during processing in a liquid chemical sterilant to maintain sterility following processing and during storage. Furthermore, devices may require rinsing following exposure to the liquid chemical sterilant with water that typically is not sterile. Therefore, because of the inherent limitations of liquid chemical sterilants, their use should be restricted to reprocessing critical devices that are heat-sensitive and incompatible with other sterilization methods. Several published studies compare the sporicidal effect of liquid chemical germicides against spores of Bacillus and Clostridium 78,659,660,715.

Performic Acid. Performic acid is a fast-acting sporicide that was incorporated into an automated endoscope reprocessing system 400. Systems using performic acid are not currently FDA cleared.

Filtration. Although filtration is not a lethality-based process and is not an FDA-cleared sterilization method, this technology is used to remove bacteria from thermolabile pharmaceutical fluids that cannot be purified by any other means. To remove bacteria, the membrane pore size (e.g., 0.22 µm) must be smaller than the bacteria and uniform throughout 923. Some investigators have appropriately questioned whether the removal of microorganisms by filtration really is a sterilization method, because slight bacterial passage through filters, viral passage through filters, and transfer of the sterile filtrate into the final container under aseptic conditions all entail a risk of contamination 924.

Microwave. Microwaves are used in medicine for disinfection of soft contact lenses, dental instruments, dentures, milk, and urinary catheters for intermittent self-catheterization. However, microwaves must only be used with products that are compatible (e.g., do not melt) 931. Microwaves are radio-frequency waves, usually used at a frequency of 2450 MHz. The microwaves produce friction of water molecules in an alternating electrical field. The intermolecular friction derived from the vibrations generates heat; some authors believe that the effect of microwaves depends on the heat produced, while others postulate a nonthermal lethal effect. The initial reports showed microwaves to be an effective microbicide. The microwaves produced by a "home-type" microwave oven (2.45 GHz) completely inactivate bacterial cultures, mycobacteria, viruses, and G. stearothermophilus spores within 60 seconds to 5 minutes, depending on the challenge organism 933. Another study confirmed these results but also found that higher-power microwaves in the presence of water may be needed for sterilization 932. Complete destruction of Mycobacterium bovis was obtained with 4 minutes of microwave exposure (600W, 2450 MHz) 937. The effectiveness of microwave ovens for different sterilization and disinfection purposes should be tested and demonstrated, because test conditions (e.g., presence of water, microwave power) affect the results. Sterilization of metal instruments can be accomplished but requires certain precautions 926.
Of concern is that home-type microwave ovens may not distribute microwave energy evenly over the entire dry device (there may be hot and cold spots on solid medical devices); hence, there may be areas that are not sterilized or disinfected. The use of microwave ovens to disinfect intermittent-use catheters also has been suggested. Researchers found that test bacteria (e.g., E. coli, Klebsiella pneumoniae, Candida albicans) were eliminated from red rubber catheters within 5 minutes 931. Microwaves used for sterilization of medical devices have not been FDA cleared.

Glass Bead "Sterilizer". Glass bead "sterilization" uses small glass beads (1.2-1.5 mm diameter) and high temperature (217°C-232°C) for brief exposure times (e.g., 45 seconds) to inactivate microorganisms. These devices have been used for several years in the dental profession. FDA believes there is a risk of infection with this device because of potential failure to sterilize dental instruments, and their use should be discontinued until the device has received FDA clearance.

# Vaporized Hydrogen Peroxide (VHP).
Hydrogen peroxide solutions have been used as chemical sterilants for many years. However, VHP was not developed for the sterilization of medical equipment until the mid-1980s. One method for delivering VHP to the reaction site uses a deep vacuum to pull liquid hydrogen peroxide (30-35% concentration) from a disposable cartridge through a heated vaporizer and then, following vaporization, into the sterilization chamber. A second approach to VHP delivery is the flow-through approach, in which the VHP is carried into the sterilization chamber by a carrier gas such as air using either a slight negative pressure (vacuum) or slight positive pressure. Applications of this technology include vacuum systems for industrial sterilization of medical devices and atmospheric systems for decontaminating large and small areas 853. VHP offers several appealing features, including rapid cycle time (e.g., 30-45 minutes); low temperature; environmentally safe by-products (H2O, oxygen); good material compatibility; and ease of operation, installation, and monitoring. VHP also has limitations: cellulose cannot be processed, nylon becomes brittle, and VHP penetration capabilities are less than those of ETO. VHP has not been cleared by FDA for sterilization of medical devices in healthcare facilities. The feasibility of using vapor-phase hydrogen peroxide as a surface decontaminant and sterilizer was evaluated in a centrifuge decontamination application. In this study, vapor-phase hydrogen peroxide was shown to possess significant sporicidal activity 941. In preliminary studies, hydrogen peroxide vapor decontamination has been found to be a highly effective method of eradicating MRSA, Serratia marcescens, Clostridium botulinum spores, and Clostridium difficile from rooms, furniture, surfaces, and/or equipment; however, further investigation of this method to demonstrate both safety and effectiveness in reducing infection rates is required.

Ozone. Ozone has been used for years as a drinking water disinfectant. Ozone is produced when O2 is energized and split into two monatomic oxygen molecules. The monatomic oxygen molecules then collide with O2 molecules to form ozone, which is O3. Thus, ozone consists of O2 with a loosely bonded third oxygen atom that is readily available to attach to, and oxidize, other molecules.
This additional oxygen atom makes ozone a powerful oxidant that destroys microorganisms, but ozone is highly unstable (i.e., half-life of 22 minutes at room temperature). A new sterilization process that uses ozone as the sterilant was cleared by FDA in August 2003 for processing reusable medical devices. The sterilizer creates its own sterilant internally from USP-grade oxygen, steam-quality water, and electricity; the sterilant is converted back to oxygen and water vapor at the end of the cycle by passing through a catalyst before being exhausted into the room. The duration of the sterilization cycle is about 4 hours 15 minutes, and it occurs at 30-35°C. Microbial efficacy has been demonstrated by achieving a SAL of 10^-6 with a variety of microorganisms, including the most resistant microorganism, Geobacillus stearothermophilus. The ozone process is compatible with a wide range of commonly used materials, including stainless steel, titanium, anodized aluminum, ceramic, glass, silica, PVC, Teflon, silicone, polypropylene, polyethylene, and acrylic. In addition, rigid lumen devices of the following diameters and lengths can be processed: internal diameter (ID) >2 mm, length ≤25 cm; ID >3 mm, length ≤47 cm; and ID >4 mm, length ≤60 cm. The process should be safe for the operator because there is no handling of the sterilant, no toxic emissions, and no residue to aerate, and the low operating temperature means there is no danger of an accidental burn. The cycle is monitored with a self-contained biological indicator and a chemical indicator. The sterilization chamber is small, about 4 ft^3 (written communication, S Dufresne, July 2004). A gaseous ozone generator was investigated for decontamination of rooms used to house patients colonized with MRSA. The results demonstrated that the device tested would be inadequate for decontamination of a hospital room 946.

Formaldehyde Steam. Low-temperature steam with formaldehyde is used as a low-temperature sterilization method in many countries, particularly in Scandinavia, Germany, and the United Kingdom. The process involves the use of formalin, which is vaporized into a formaldehyde gas that is admitted into the sterilization chamber. A formaldehyde concentration of 8-16 mg/l is generated at an operating temperature of 70-75°C. The sterilization cycle consists of a series of stages: an initial vacuum to remove air from the chamber and load; steam admission to the chamber, with the vacuum pump running, to purge the chamber of air and to heat the load; a series of pulses of formaldehyde gas; and, finally, steam. Formaldehyde is removed from the sterilizer and load by repeated alternate evacuations and flushing with steam and air. This system has some advantages; e.g., the cycle time for formaldehyde gas is faster than that for ETO, and the cost per cycle is relatively low. However, ETO is more penetrating and operates at lower temperatures than do steam/formaldehyde sterilizers. Low-temperature steam formaldehyde sterilization has been found effective against vegetative bacteria, mycobacteria, B. atrophaeus and G. stearothermophilus spores, and Candida albicans. Formaldehyde vapor cabinets also may be used in healthcare facilities to sterilize heat-sensitive medical equipment 950. Commonly, there is no circulation of formaldehyde and no temperature or humidity control. The release of gas from paraformaldehyde tablets (placed on the lower tray) is slow and produces a low partial pressure of gas.
The microbicidal quality of this procedure is unknown 951. Reliable sterilization with formaldehyde is achieved when it is performed with a high concentration of gas, at a temperature between 60°C and 80°C, and with a relative humidity of 75% to 100%. Studies indicate that formaldehyde is a mutagen and a potential human carcinogen, and OSHA regulates formaldehyde. The permissible exposure limit for formaldehyde in work areas is 0.75 ppm, measured as an 8-hour TWA. The OSHA standard includes a 2 ppm STEL (i.e., the maximum exposure allowed during a 15-minute period). As with the ETO standard, the formaldehyde standard requires that the employer conduct initial monitoring to identify employees who are exposed to formaldehyde at or above the action level or STEL. If this exposure level is maintained, employers may discontinue exposure monitoring until there is a change that could affect exposure levels or an employee reports formaldehyde-related signs and symptoms 269,578. The formaldehyde steam sterilization system has not been FDA cleared for use in healthcare facilities.

Gaseous Chlorine Dioxide. A gaseous chlorine dioxide system for sterilization of healthcare products was developed in the late 1980s 853,952,953. Chlorine dioxide is not mutagenic or carcinogenic in humans. As the chlorine dioxide concentration increases, the time required to achieve sterilization becomes progressively shorter. For example, only 30 minutes were required at 40 mg/l to sterilize 10^6 B. atrophaeus spores at 30°C to 32°C 954. Currently, no gaseous chlorine dioxide system is FDA cleared.

Vaporized Peracetic Acid. The sporicidal activity of peracetic acid vapor at 20%, 40%, 60%, and 80% relative humidity and 25°C was determined using Bacillus atrophaeus spores on paper and glass surfaces. Appreciable activity occurred within 10 minutes of exposure to 1 mg of peracetic acid per liter at 40% or higher relative humidity 955. No vaporized peracetic acid system is FDA cleared.

Infrared Radiation. An infrared radiation prototype sterilizer was investigated and found to destroy B. atrophaeus spores. Possible advantages of infrared technology include short cycle time, low energy consumption, no cycle residuals, and no toxicologic or environmental effects. This may provide an alternative technology for sterilization of selected heat-resistant instruments, but there are no FDA-cleared systems for use in healthcare facilities 956. The other sterilization technologies mentioned above may be used for sterilization of critical medical items if cleared by FDA and, ideally, if the microbicidal effectiveness of the technology has been published in the scientific literature. The selection and use of disinfectants, chemical sterilants, and sterilization processes in the healthcare field is dynamic, and products may become available that were not in existence when this guideline was written. As newer disinfectants and sterilization processes become available, persons or committees responsible for selecting disinfectants and sterilization processes should be guided by products cleared by FDA and EPA as well as by information in the scientific literature.
# Sterilizing Practices
Overview. The delivery of sterile products for use in patient care depends not only on the effectiveness of the sterilization process but also on the unit design; the decontamination, disassembly, and packaging of the device; the loading of the sterilizer; monitoring; sterilant quality and quantity; the appropriateness of the cycle for the load contents; and other aspects of device reprocessing. Healthcare personnel should perform most cleaning, disinfecting, and sterilizing of patient-care supplies in a central processing department in order to more easily control quality. The aim of central processing is the orderly processing of medical and surgical instruments to protect patients from infections while minimizing risks to staff and preserving the value of the items being reprocessed 957. Healthcare facilities should promote the same level of efficiency and safety in the preparation of supplies in other areas (e.g., operating room, respiratory therapy) as is practiced in central processing. Ensuring consistency of sterilization practices requires a comprehensive program that ensures operator competence and proper methods of cleaning and wrapping instruments, loading the sterilizer, operating the sterilizer, and monitoring the entire process. Furthermore, care must be consistent from an infection prevention standpoint in all patient-care settings, such as hospital and outpatient facilities.

# Sterilization Cycle Verification.
A sterilization process should be verified before it is put into use in healthcare settings. All steam, ETO, and other low-temperature sterilizers are tested with biological and chemical indicators upon installation, when the sterilizer is relocated or redesigned, after major repair, and after a sterilization failure has occurred, to ensure they are functioning before being placed into routine use. Three consecutive empty steam cycles are run with a biological and chemical indicator in an appropriate test package or tray. Each type of steam cycle used for sterilization (e.g., vacuum-assisted, gravity) is tested separately. In a prevacuum steam sterilizer, three consecutive empty cycles are also run with a Bowie-Dick test. The sterilizer is not put back into use until all biological indicators are negative and the chemical indicators show a correct end-point response 811-814,819,958. Biological and chemical indicator testing is also done for ongoing quality assurance testing of representative samples of actual products being sterilized and for product testing when major changes are made in packaging, wraps, or load configuration. Biological and chemical indicators are placed in products, which are processed in a full load. When three consecutive cycles show negative biological indicators and chemical indicators with a correct end-point response, the change can be put into routine use 958. Items processed during the three evaluation cycles should be quarantined until the test results are negative.

Physical Facilities. The central processing area(s) ideally should be divided into at least three areas: decontamination, packaging, and sterilization and storage. Physical barriers should separate the decontamination area from the other sections to contain contamination on used items. In the decontamination area, reusable contaminated supplies (and possibly disposable items that are reused) are received, sorted, and decontaminated. The recommended airflow pattern should contain contaminants within the decontamination area and minimize the flow of contaminants to the clean areas.
The American Institute of Architects 959 recommends negative pressure and no fewer than six air exchanges per hour in the decontamination area (AAMI recommends 10 air changes per hour) and 10 air changes per hour with positive pressure in the sterilizer equipment room. The packaging area is for inspecting, assembling, and packaging clean, but not sterile, material. The sterile storage area should be a limited-access area with controlled temperature (may be as high as 75°F) and relative humidity (30-60% in all work areas except sterile storage, where the relative humidity should not exceed 70%) 819. The floors and walls should be constructed of materials capable of withstanding the chemical agents used for cleaning or disinfecting. Ceilings and wall surfaces should be constructed of non-shedding materials. Physical arrangements of processing areas are presented schematically in four references 811,819,920,957.

Cleaning. As repeatedly mentioned, items must be cleaned using water with detergents or enzymatic cleaners 465,466,468 before processing. Cleaning reduces the bioburden and removes foreign material (i.e., organic residue and inorganic salts) that interferes with the sterilization process by acting as a barrier to the sterilization agent 179,426,457,911,912. Surgical instruments are generally presoaked or prerinsed to prevent drying of blood and tissue. Precleaning in patient-care areas may be needed for items that are heavily soiled with feces, sputum, blood, or other material. Items sent to central processing without removal of gross soil may be difficult to clean because of dried secretions and excretions. Cleaning and decontamination should be done as soon as possible after items have been used. Several types of mechanical cleaning machines (e.g., utensil washer-sanitizer, ultrasonic cleaner, washer-sterilizer, dishwasher, washer-disinfector) may facilitate cleaning and decontamination of most items. This equipment often is automated and may increase productivity, improve cleaning effectiveness, and decrease worker exposure to blood and body fluids. Delicate and intricate objects and heat- or moisture-sensitive articles may require careful cleaning by hand. All used items sent to the central processing area should be considered contaminated (unless decontaminated in the area of origin), handled with gloves (forceps or tongs are sometimes needed to avoid exposure to sharps), and decontaminated by one of the aforementioned methods to render them safer to handle. Items composed of more than one removable part should be disassembled. Care should be taken to ensure that all parts are kept together, so that reassembly can be accomplished efficiently 811. Investigators have described the degree of cleanliness by visual and microscopic examination. One study found 91% of instruments to be clean visually, but, when the instruments were examined microscopically, 84% had residual debris. Sites that contained residual debris included junctions between insulating sheaths and activating mechanisms of laparoscopic instruments and the articulations and grooves of forceps. More research is needed to understand the clinical significance of these findings 960 and how to ensure proper cleaning. Personnel working in the decontamination area should wear household-cleaning-type rubber or plastic gloves when handling or cleaning contaminated instruments and devices.
Face masks, eye protection (e.g., goggles or full-length faceshields), and appropriate gowns should be worn when exposure to blood and contaminated fluids may occur (e.g., when manually cleaning contaminated devices) 961. Contaminated instruments are a source of microorganisms that could inoculate personnel through nonintact skin on the hands or through contact with the mucous membranes of the eyes, nose, or mouth 214,811,813. Reusable sharps that have been in contact with blood present a special hazard. Employees must not reach with their gloved hands into trays or containers that hold these sharps to retrieve them 214. Rather, employees should use engineering controls (e.g., forceps) to retrieve these devices.

Packaging. Once items are cleaned, dried, and inspected, those requiring sterilization must be wrapped or placed in rigid containers and should be arranged in instrument trays/baskets according to the guidelines provided by AAMI and other professional organizations 454,811-814,819,836,962. These guidelines state that hinged instruments should be opened; items with removable parts should be disassembled unless the device manufacturer or researchers provide specific instructions or test data to the contrary 181; complex instruments should be prepared and sterilized according to the device manufacturer's instructions and test data; devices with concave surfaces should be positioned to facilitate drainage of water; heavy items should be positioned so as not to damage delicate items; and the weight of the instrument set should be based on the design and density of the instruments and the distribution of metal mass 811,962. While there is no longer a specified sterilization weight limit for surgical sets, heavy metal mass is a cause of wet packs (i.e., moisture inside the case and tray after completion of the sterilization cycle) 963. Other parameters that may influence drying are the density of the wraps and the design of the set 964. There are several choices of methods to maintain the sterility of surgical instruments, including rigid containers; peel-open pouches (e.g., self-sealed or heat-sealed plastic and paper pouches); roll stock or reels (i.e., paper-plastic combinations of tubing designed to allow the user to cut and seal the ends to form a pouch) 454; and sterilization wraps (woven and nonwoven). Healthcare facilities may use all of these packaging options. The packaging material must allow penetration of the sterilant, provide protection against contact contamination during handling, provide an effective barrier to microbial penetration, and maintain the sterility of the processed item after sterilization 965. An ideal sterilization wrap would successfully address barrier effectiveness, penetrability (i.e., allows sterilant to penetrate), aeration (e.g., allows ETO to dissipate), ease of use, drapeability, flexibility, puncture resistance, tear strength, toxicity, odor, waste disposal, linting, cost, and transparency 966. Packaging that is unacceptable for use with ETO (e.g., foil, polyvinylchloride, and polyvinylidene chloride) 814 or with hydrogen peroxide gas plasma (e.g., linens and paper) should not be used to wrap medical items for those processes. In central processing, double wrapping can be done sequentially or nonsequentially (i.e., simultaneous wrapping). Wrapping should be done in such a manner as to avoid tenting and gapping. The sequential wrap uses two sheets of the standard sterilization wrap, one wrapped after the other. This procedure creates a package within a package.
The nonsequential process uses two sheets wrapped at the same time, so that the wrapping needs to be performed only once. This method also provides multiple layers of protection of surgical instruments from contamination while saving time. Multiple layers are still common practice because of the rigors of handling within the facility, even though the barrier efficacy of a single sheet of wrap has improved over the years 966. Written and illustrated procedures for preparation of items to be packaged should be readily available and used by personnel when packaging procedures are performed 454.

# Loading.
All items to be sterilized should be arranged so that all surfaces will be directly exposed to the sterilizing agent. Thus, loading procedures must allow for free circulation of steam (or another sterilant) around each item. Historically, it was recommended that muslin fabric packs not exceed maximal dimensions, weight, and density of 12 inches wide × 12 inches high × 20 inches long, 12 lbs, and 7.2 lbs per cubic foot, respectively (a worked check of the density figure appears below). Because of the variety of textiles and metal/plastic containers on the market, the textile and metal/plastic container manufacturers and the sterilizer manufacturers should be consulted for instructions on pack preparation and density parameters 819. There are several important basic principles for loading a sterilizer: allow for proper sterilant circulation; place perforated trays so that the tray is parallel to the shelf; place nonperforated containers (e.g., basins) on their edge; place small items loosely in wire baskets; and place peel packs on edge in perforated or mesh-bottom racks or baskets 454,811,836.
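As referenced above, the historical 7.2 lbs per cubic foot density figure follows directly from the 12-lb weight limit and the volume of the maximal 12 × 12 × 20 inch pack; the short check below confirms the arithmetic.

```python
# Density check for the historical muslin pack limit (12" x 12" x 20", 12 lbs).
width_in, height_in, length_in = 12.0, 12.0, 20.0
weight_lb = 12.0

volume_ft3 = (width_in * height_in * length_in) / 12.0**3  # 2880 in^3 = 1.667 ft^3
density = weight_lb / volume_ft3
print(f"{density:.1f} lbs per cubic foot")  # 7.2, matching the historical limit
```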
Storage. Studies in the early 1970s suggested that wrapped surgical trays remained sterile for varying periods depending on the type of material used to wrap the trays. Safe storage times for sterile packs vary with the porosity of the wrapper and the storage conditions (e.g., open versus closed cabinets). Heat-sealed plastic peel-down pouches and wrapped packs sealed in 3-mil (3/1000 inch) polyethylene overwrap have been reported to remain sterile for as long as 9 months after sterilization. The 3-mil polyethylene is applied after sterilization to extend the shelf life of infrequently used items 967. Supplies wrapped in double-thickness muslin comprising four layers, or equivalent, remain sterile for at least 30 days. Any item that has been sterilized should not be used after the expiration date has been exceeded or if the sterilized package is wet, torn, or punctured. Although some hospitals continue to date every sterilized product and use the time-related shelf-life practice, many hospitals have switched to an event-related shelf-life practice. This latter practice recognizes that the product should remain sterile until some event causes the item to become contaminated (e.g., a tear in the packaging, the packaging becomes wet, the seal is broken) 968. Event-related factors that contribute to the contamination of a product include bioburden (i.e., the amount of contamination in the environment), air movement, traffic, location, humidity, insects, vermin, flooding, storage area space, open/closed shelving, temperature, and the properties of the wrap material 966,969. There are data that support the event-related shelf-life practice. One study examined the effect of time on the sterile integrity of paper envelopes, peel pouches, and nylon sleeves. The most important finding was the absence of a trend toward an increased rate of contamination over time for any pack when placed in covered storage 971. Another study evaluated the effectiveness of event-related outdating by microbiologically testing sterilized items; during the 2-year study period, all of the items tested were sterile 972. Thus, contamination of a sterile item is event-related, and the probability of contamination increases with increased handling 973. Following the sterilization process, medical and surgical devices must be handled using aseptic technique in order to prevent contamination. Sterile supplies should be stored far enough from the floor (8 to 10 inches), the ceiling (5 inches, unless near a sprinkler head), and the outside walls (2 inches) to allow for adequate air circulation, ease of cleaning, and compliance with local fire codes (e.g., supplies must be at least 18 inches from sprinkler heads). Medical and surgical supplies should not be stored under sinks or in other locations where they can become wet. Sterile items that become wet are considered contaminated, because moisture brings with it microorganisms from the air and surfaces. Closed or covered cabinets are ideal, but open shelving may be used for storage. Any package that has fallen or been dropped on the floor must be inspected for damage to the packaging and contents (if the items are breakable). If the package is heat-sealed in impervious plastic and the seal is still intact, the package should be considered not contaminated. If undamaged, items packaged in plastic need not be reprocessed.

Monitoring. The sterilization procedure should be monitored routinely by using a combination of mechanical, chemical, and biological indicators to evaluate the sterilizing conditions and, indirectly, the microbiologic status of the processed items. The mechanical monitors for steam sterilization include the daily assessment of cycle time and temperature by examining the temperature record chart (or computer printout) and an assessment of pressure via the pressure gauge. The mechanical monitors for ETO include time, temperature, and pressure recorders that provide data via computer printouts, gauges, and/or displays 814. Generally, two essential elements for ETO sterilization (i.e., the gas concentration and humidity) cannot be monitored in healthcare ETO sterilizers. Chemical indicators are convenient, are inexpensive, and indicate that the item has been exposed to the sterilization process. In one study, chemical indicators were more likely than biological indicators to inaccurately indicate sterilization at marginal sterilization times (e.g., 2 minutes) 847. Chemical indicators should be used in conjunction with biological indicators but, based on current studies, should not replace them, because they may indicate sterilization at marginal sterilization times and because only a biological indicator consisting of resistant spores can measure the microbial killing power of the sterilization process 847,974. Chemical indicators are affixed to the outside of each pack to show that the package has been processed through a sterilization cycle, but these indicators do not prove sterilization has been achieved. Preferably, a chemical indicator also should be placed on the inside of each pack to verify sterilant penetration.
Chemical indicators usually are either heat- or chemical-sensitive inks that change color when one or more sterilization parameters (e.g., for steam: time, temperature, and/or saturated steam; for ETO: time, temperature, relative humidity, and/or ETO concentration) are present. Chemical indicators have been grouped into five classes based on their ability to monitor one or multiple sterilization parameters 813,819. If the internal and/or external indicator suggests inadequate processing, the item should not be used 815. An air-removal test (Bowie-Dick test) must be performed daily in an empty dynamic-air-removal sterilizer (e.g., a prevacuum steam sterilizer) to ensure air removal.

Biological indicators are recognized by most authorities as being closest to the ideal monitors of the sterilization process 974,975 because they measure the sterilization process directly by using the most resistant microorganisms (i.e., Bacillus spores), and not merely by testing the physical and chemical conditions necessary for sterilization. Since the Bacillus spores used in biological indicators are more resistant and present in greater numbers than are the common microbial contaminants found on patient-care equipment, the demonstration that the biological indicator has been inactivated strongly implies that other potential pathogens in the load have been killed 844. An ideal biological monitor of the sterilization process should be easy to use, be inexpensive, not be subject to exogenous contamination, provide positive results as soon as possible after the cycle so that corrective action may be accomplished, and provide positive results only when the sterilization parameters (e.g., for steam: time, temperature, and/or saturated steam; for ETO: time, temperature, relative humidity, and/or ETO concentration) are inadequate to kill microbial contaminants 847. Biological indicators are the only process indicators that directly monitor the lethality of a given sterilization process. Spores used to monitor a sterilization process have demonstrated resistance to the sterilizing agent and are more resistant than the bioburden found on medical devices 179,911,912. B. atrophaeus spores (10^6) are used to monitor ETO and dry heat, and G. stearothermophilus spores (10^5) are used to monitor steam sterilization, hydrogen peroxide gas plasma, and liquid peracetic acid sterilizers. G. stearothermophilus is incubated at 55-60°C, and B. atrophaeus is incubated at 35-37°C. Steam and low-temperature sterilizers (e.g., hydrogen peroxide gas plasma, peracetic acid) should be monitored at least weekly with the appropriate commercial preparation of spores. If a sterilizer is used frequently (e.g., several loads per day), daily use of biological indicators allows earlier discovery of equipment malfunctions or procedural errors and thus minimizes the extent of patient surveillance and product recall needed in the event of a positive biological indicator 811. Each load should be monitored if it contains implantable objects. If feasible, implantable items should not be used until the results of spore tests are known to be negative. Originally, spore-strip biological indicators required up to 7 days of incubation to detect viable spores from marginal cycles (i.e., when few spores remained viable). The next generation of biological indicator was self-contained in plastic vials containing a spore-coated paper strip and a growth medium in a crushable glass ampoule.
Originally, spore-strip biological indicators required up to 7 days of incubation to detect viable spores from marginal cycles (i.e., when few spores remained viable). The next generation of biological indicator was self-contained in plastic vials containing a spore-coated paper strip and a growth medium in a crushable glass ampoule. This indicator had a maximum incubation of 48 hours, but significant failures could be detected in ≤24 hours. A rapid-readout biological indicator that detects the presence of enzymes of G. stearothermophilus by reading a fluorescent product produced by the enzymatic breakdown of a nonfluorescent substrate has been marketed for more than 10 years. Studies demonstrate that the sensitivity of rapid-readout tests for steam sterilization (1 hour for 132°C gravity sterilizers, 3 hours for 121°C gravity and 132°C vacuum sterilizers) parallels that of the conventional sterilization-specific biological indicators 846,847,976,977 and the fluorescent rapid-readout results reliably predict 24- and 48-hour and 7-day growth 978 . The rapid-readout biological indicator is a dual indicator system, as it also detects acid metabolites produced during growth of the G. stearothermophilus spores. This system is different from the indicator system consisting of an enzyme system of bacterial origin without spores. Independent comparative data using suboptimal sterilization cycles (e.g., reduced time or temperature) with the enzyme-based indicator system have not been published 979 . A new rapid-readout ETO biological indicator has been designed for rapid and reliable monitoring of ETO sterilization processes. The indicator has been cleared by the FDA for use in the United States 400 . The rapid-readout ETO biological indicator detects the presence of B. atrophaeus by detecting a fluorescent signal indicating the activity of an enzyme present within the B. atrophaeus organism, beta-glucosidase. The fluorescence indicates the presence of an active spore-associated enzyme and a sterilization process failure. This indicator also detects acid metabolites produced during growth of the B. atrophaeus spore. Per manufacturer's data, the enzyme always was detected whenever viable spores were present. This was expected because the enzyme is relatively ETO resistant and is inactivated at a slightly longer exposure time than the spore. The rapid-readout ETO biological indicator can be used to monitor 100% ETO and ETO-HCFC mixture sterilization cycles. It has not been tested in ETO-CO2 mixture sterilization cycles. The standard biological indicator used for monitoring full-cycle steam sterilizers does not provide reliable monitoring of flash sterilizers 980 . Biological indicators specifically designed for monitoring flash sterilization are now available, and studies comparing them have been published 846,847,981 . Since sterilization failure can occur (about 1% for steam) 982 , a procedure to follow in the event of positive spore tests with steam sterilization has been provided by CDC and the Association of periOperative Registered Nurses (AORN). The 1981 CDC recommendation is that "objects, other than implantable objects, do not need to be recalled because of a single positive spore test unless the steam sterilizer or the sterilization procedure is defective." The rationale for this recommendation is that single positive spore tests in sterilizers occur sporadically. They may occur for reasons such as slight variation in the resistance of the spores 983 , improper use of the sterilizer, and laboratory contamination during culture (uncommon with self-contained spore tests).
If the mechanical (e.g., time, temperature, pressure in the steam sterilizer) and chemical (internal and/or external) indicators suggest that the sterilizer was functioning properly, a single positive spore test probably does not indicate sterilizer malfunction, but the spore test should be repeated immediately 983 . If the spore tests remain positive, use of the sterilizer should be discontinued until it is serviced 1 . Similarly, AORN states that a single positive spore test does not necessarily indicate a sterilizer failure. If the test is positive, the sterilizer should immediately be rechallenged for proper use and function. Items, other than implantable ones, do not necessarily need to be recalled unless a sterilizer malfunction is found. If a sterilizer malfunction is discovered, the items must be considered nonsterile, and the items from the suspect load(s) should be recalled, insofar as possible, and reprocessed 839,984 . A more conservative approach also has been recommended 813 in which any positive spore test is assumed to represent sterilizer malfunction and requires that all materials processed in that sterilizer, dating from the sterilization cycle having the last negative biologic indicator to the next cycle showing satisfactory biologic indicator challenge results, must be considered nonsterile and retrieved, if possible, and reprocessed. This more conservative approach should be used for sterilization methods other than steam (e.g., ETO, hydrogen peroxide gas plasma). However, no action is necessary if there is strong evidence for the biological indicator being defective 983 or if the growth medium contained a Bacillus contaminant 985 . If patient-care items were used before retrieval, the infection control professional should assess the risk of infection in collaboration with central processing, surgical services, and risk management staff. The factors that should be considered include the chemical indicator result (e.g., a nonreactive chemical indicator may indicate that the temperature was not achieved); the results of other biological indicators that followed the positive biological indicator (e.g., positive on Tuesday, negative on Wednesday); the parameters of the sterilizer associated with the positive biological indicator (e.g., reduced time at correct temperature); the time-temperature chart (or printout); and the microbial load associated with decontaminated surgical instruments (e.g., 85% of decontaminated surgical instruments have less than 100 CFU). The margin of safety in steam sterilization is sufficiently large that there is minimal infection risk associated with items in a load that show spore growth, especially if the item was properly cleaned and the temperature was achieved (e.g., as shown by an acceptable chemical indicator or temperature chart). There are no published studies that document disease transmission via a nonretrieved surgical instrument following a sterilization cycle with a positive biological indicator.
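The two recall policies above differ mainly in their trigger. As a rough illustration (not a substitute for the cited recommendations), the contrast can be expressed as a decision function; all function and argument names below are invented for this sketch.

```python
# Rough sketch contrasting the two recall policies described above.
# Names are invented for illustration; this is not a clinical tool.

def recall_action(repeat_test_positive: bool, malfunction_found: bool,
                  conservative: bool) -> str:
    """Suggest a response to a single positive biological indicator (BI)."""
    if conservative:
        # Any positive BI is treated as sterilizer malfunction: retrieve and
        # reprocess everything since the last negative BI. This stricter
        # policy is recommended for non-steam methods (e.g., ETO, hydrogen
        # peroxide gas plasma).
        return "recall all loads back to the last negative BI and reprocess"
    # CDC/AORN approach for steam: repeat the spore test immediately and
    # recall only if the repeat is positive or a malfunction is found
    # (implantable items are handled more strictly in either case).
    if repeat_test_positive or malfunction_found:
        return "remove sterilizer from service; recall and reprocess suspect loads"
    return "no recall of non-implantable items; continue routine monitoring"

print(recall_action(repeat_test_positive=False, malfunction_found=False,
                    conservative=False))
```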
False-positive biological indicators may occur from improper testing or faulty indicators. The latter may occur from improper storage, processing, product contamination, material failure, or variation in resistance of spores. Gram stain and subculture of a positive biological indicator may determine if a contaminant has created a false-positive result 839,986 . However, in one incident, the broth used as growth medium contained a contaminant, B. coagulans, which resulted in broth turbidity at 55°C 985 . Testing of paired biological indicators from different manufacturers can assist in assessing a product defect 839 . False-positive biological indicators due to extrinsic contamination when using self-contained biological indicators should be uncommon. A biological indicator should not be considered a false positive until a thorough analysis of the entire sterilization process shows this to be likely. The size and composition of the biological indicator test pack should be standardized to create a significant challenge to air removal and sterilant penetration and to obtain interpretable results. There is a standard 16-towel pack recommended by AAMI for steam sterilization 813,819,987 consisting of 16 clean, preconditioned, reusable huck or absorbent surgical towels, each of which is approximately 16 inches by 26 inches. Each towel is folded lengthwise into thirds and then folded widthwise in the middle. One or more biological indicators are placed between the eighth and ninth towels in the approximate geometric center of the pack. When the towels are folded and placed one on top of another to form a stack (approximately 6 inches high), the pack should weigh approximately 3 pounds and should have a density of approximately 11.3 pounds per cubic foot 813 . This test pack has not gained universal use as a standard pack that simulates the actual in-use conditions of steam sterilizers. Commercially available disposable test packs that have been shown to be equivalent to the AAMI 16-towel test pack also may be used. The test pack should be placed flat in an otherwise fully loaded sterilizer chamber, in the area least favorable to sterilization (i.e., the area representing the greatest challenge to the biological indicator). This area is normally in the front, bottom section of the sterilizer, near the drain 811,813 . A control biological indicator from the lot used for testing should be left unexposed to the sterilant and then incubated to verify the presterilization viability of the test spores and proper incubation. The most conservative approach would be to use a control for each run; however, less frequent use may be adequate (e.g., weekly). There also is a routine test pack for ETO in which a biological indicator is placed in a plastic syringe with plunger, then placed in the folds of a clean surgical towel, and wrapped. Alternatively, commercially available disposable test packs that have been shown to be equivalent to the AAMI test pack may be used. The test pack is placed in the center of the sterilizer load 814 . Sterilization records (mechanical, chemical, and biological) should be retained for a time period in compliance with standards (e.g., the Joint Commission for the Accreditation of Healthcare Facilities requests 3 years) and state and federal regulations. In Europe, biological monitors are not used routinely to monitor the sterilization process. Instead, release of sterilized items is based on monitoring the physical conditions of the sterilization process, a practice termed "parametric release." Parametric release requires that there is a defined quality system in place at the facility performing the sterilization and that the sterilization process be validated for the items being sterilized. At present in Europe, parametric release is accepted for steam, dry heat, and ionizing radiation processes, as the physical conditions are understood and can be monitored directly 988 . For example, with steam sterilizers the load could be monitored with probes that would yield data on temperature, time, and humidity at representative locations in the chamber and compared to the specifications developed during the validation process.
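The AAMI pack figures quoted above (about 3 pounds at about 11.3 pounds per cubic foot) can be sanity-checked with simple arithmetic. The short sketch below does so; the folded-towel footprint is a rough assumption of this sketch, not a specification from the guideline.

```python
# Sanity check of the AAMI 16-towel test pack figures quoted above.
# The folded-towel footprint used here is an approximation.

weight_lb = 3.0
density_lb_per_ft3 = 11.3

implied_volume_ft3 = weight_lb / density_lb_per_ft3   # ~0.27 ft^3
implied_volume_in3 = implied_volume_ft3 * 12**3       # ~459 in^3

# A 16 x 26 inch towel folded lengthwise into thirds and then in half
# gives a footprint of roughly 8 x 8.7 inches; a 6-inch stack is then:
stack_volume_in3 = 8 * (26 / 3) * 6                   # ~416 in^3

print(f"implied volume {implied_volume_in3:.0f} in^3 vs. "
      f"stack estimate {stack_volume_in3:.0f} in^3")  # same order of magnitude
```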
Periodic infection control rounds to areas using sterilizers to standardize the sterilizer's use may identify correctable variances in operator competence; documentation of sterilization records, including chemical and biological indicator test results; sterilizer maintenance and wrapping; and load numbering of packs. These rounds also may identify improvement activities to ensure that operators are adhering to established standards 989 .

# Reuse of Single-Use Medical Devices

The reuse of single-use medical devices began in the late 1970s. Before this time most devices were considered reusable. Reuse of single-use devices increased as a cost-saving measure. Approximately 20 to 30% of U.S. hospitals reported that they reuse at least one type of single-use device. Reuse of single-use devices involves regulatory, ethical, medical, legal, and economic issues and has been extremely controversial for more than two decades 990 . The U.S. public has expressed increasing concern regarding the risk of infection and injury when reusing medical devices intended and labeled for single use. Although some investigators have demonstrated that it is safe to reuse disposable medical devices such as cardiac electrode catheters, additional studies are needed to define the risks 994 and document the benefits. In August 2000, FDA released a guidance document on single-use devices reprocessed by third parties or hospitals 995 . In this guidance document, FDA states that hospitals or third-party reprocessors will be considered "manufacturers" and regulated in the same manner. A reused single-use device will have to comply with the same regulatory requirements that applied to the device when it was originally manufactured. This document presents FDA's intent to enforce premarket submission requirements within 6 months (February 2001) for class III devices (e.g., cardiovascular intra-aortic balloon pump, transluminal coronary angioplasty catheter); 12 months (August 2001) for class II devices (e.g., blood pressure cuff, bronchoscope biopsy forceps); and 18 months (February 2002) for class I devices (e.g., disposable medical scissors, ophthalmic knife). FDA uses two types of premarket requirements for nonexempt class I and II devices: a 510(k) submission, which may have to show that the device is as safe and effective as the same device when new, and a premarket approval application. The 510(k) submission must provide scientific evidence that the device is safe and effective for its intended use. FDA allowed hospitals a year to comply with the nonpremarket requirements (registration and listing, reporting adverse events associated with medical devices, quality system regulations, and proper labeling). The options for hospitals are to stop reprocessing single-use devices, comply with the rule, or outsource to a third-party reprocessor. The FDA guidance document does not apply to permanently implantable pacemakers, hemodialyzers, opened but unused single-use devices, or healthcare settings other than acute-care hospitals. The reuse of single-use medical devices continues to be an evolving area of regulation. For this reason, healthcare workers should refer to FDA for the latest guidance 996 .
# Conclusion

When properly used, disinfection and sterilization can ensure the safe use of invasive and noninvasive medical devices. However, current disinfection and sterilization guidelines must be strictly followed.

# Recommendations for Disinfection and Sterilization in Healthcare Facilities

# A. Rationale

The ultimate goal of the Recommendations for Disinfection and Sterilization in Health-Care Facilities, 2008, is to reduce rates of health-care-associated infections through appropriate use of both disinfection and sterilization. Each recommendation is categorized according to scientific evidence, theoretical rationale, applicability, and federal regulations. Examples are included in some recommendations to aid the reader; however, these examples are not intended to define the only method of implementing the recommendation. The CDC system for categorizing recommendations is defined in the following (Rankings) section.

# B. Rankings

Category IA. Strongly recommended for implementation and strongly supported by well-designed experimental, clinical, or epidemiologic studies.
Category IB. Strongly recommended for implementation and supported by some experimental, clinical, or epidemiologic studies, and by a strong theoretical rationale.
Category IC. Required by state or federal regulations. Because of state differences, readers should not assume that the absence of an IC recommendation implies the absence of state regulations.
Category II. Suggested for implementation and supported by suggestive clinical or epidemiologic studies or by a theoretical rationale.
No recommendation. Unresolved issue. These include practices for which insufficient evidence or no consensus exists regarding efficacy.

# Cleaning of Patient-Care Devices

a. In hospitals, perform most cleaning, disinfection, and sterilization of patient-care devices in a central processing department in order to more easily control quality. Category II. 454,836,959
b. Meticulously clean patient-care items with water and detergent, or with water and enzymatic cleaners, before high-level disinfection or sterilization procedures. Category IB. 6, 83, 101, 104-106, 124, 179, 424-426, 436, 465, 471, 911-913, 1004
i. Remove visible organic residue (e.g., residue of blood and tissue) and inorganic salts with cleaning. Use cleaning agents that are capable of removing visible organic and inorganic residues. Category IB. 424-426, 466, 468, 469, 471, 908, 910
ii. Clean medical devices as soon as practical after use (e.g., at the point of use) because soiled materials become dried onto the instruments. Dried or baked materials on the instrument make the removal process more difficult and the disinfection or sterilization process less effective or ineffective. Category IB. 55,56,59,291,465,1005,1006
c. Perform either manual cleaning (i.e., using friction) or mechanical cleaning (e.g., with ultrasonic cleaners, washer-disinfectors, washer-sterilizers). Category IB. 426,456,471,999
d. If using an automatic washer/disinfector, ensure that the unit is used in accordance with the manufacturer's recommendations. Category IB. 7,133,155,725
e. Ensure that the detergents or enzymatic cleaners selected are compatible with the metals and other materials used in medical instruments. Ensure that the rinse step is adequate for removing cleaning residues to levels that will not interfere with subsequent disinfection/sterilization processes. Category II. 836,1004
f. Inspect equipment surfaces for breaks in integrity that would impair either cleaning or disinfection/sterilization. Discard or repair equipment that no longer functions as intended or cannot be properly cleaned and disinfected or sterilized. Category II. 888

# Indications for Sterilization, High-Level Disinfection, and Low-Level Disinfection

a. Before use on each patient, sterilize critical medical and surgical devices and instruments that enter normally sterile tissue or the vascular system or through which a sterile body fluid flows (e.g., blood). See recommendation 7g for exceptions. Category IA. 179,497,821,822,907,911,912
b. Provide, at a minimum, high-level disinfection for semicritical patient-care equipment (e.g., gastrointestinal endoscopes, endotracheal tubes, anesthesia breathing circuits, and respiratory therapy equipment) that touches either mucous membranes or nonintact skin. Category IA. 6-8, 17, 20

# Selection and Use of Low-Level Disinfectants for Noncritical Patient-Care Devices

a. Process noncritical patient-care devices using a disinfectant and the concentration of germicide listed in Table 1. Category IB. 17, 46-48, 50-52, 67, 68, 378, 382, 401
b. Disinfect noncritical medical devices (e.g., blood pressure cuff) with an EPA-registered hospital disinfectant, using the label's safety precautions and use directions. Most EPA-registered hospital disinfectants have a label contact time of 10 minutes. However, multiple scientific studies have demonstrated the efficacy of hospital disinfectants against pathogens with a contact time of at least 1 minute. By law, all applicable label instructions on EPA-registered products must be followed. If the user selects exposure conditions that differ from those on the EPA-registered product label, the user assumes liability for any injuries resulting from off-label use and is potentially subject to enforcement action under FIFRA. Category IB. 17, 47, 48, 50, 51, 53-57, 59, 60, 62-64, 355, 378, 382
c. Ensure that, at a minimum, noncritical patient-care devices are disinfected when visibly soiled and on a regular basis (such as after use on each patient, or once daily, or once weekly). Category II. 378,380,1008
d. If dedicated, disposable devices are not available, disinfect noncritical patient-care equipment after using it on a patient who is on contact precautions and before using this equipment on another patient. Category IB. 47,67,391,1009

# Cleaning and Disinfecting Environmental Surfaces in Healthcare Facilities

a. Clean housekeeping surfaces (e.g., floors, tabletops) on a regular basis, when spills occur, and when these surfaces are visibly soiled. Category II. 23,378,380,382,1008,1010
b. Disinfect (or clean) environmental surfaces on a regular basis (e.g., daily, three times per week) and when surfaces are visibly soiled. Category II. 378,380,402,1008
c. Follow manufacturers' instructions for proper use of disinfecting (or detergent) products, such as recommended use-dilution, material compatibility, storage, shelf-life, and safe use and disposal. Category II. 327,365,404
d. Clean walls, blinds, and window curtains in patient-care areas when these surfaces are visibly contaminated or soiled. Category II. 1011
e. Prepare disinfecting (or detergent) solutions as needed and replace these with fresh solution frequently (e.g., replace floor mopping solution every three patient rooms, change no less often than at 60-minute intervals), according to the facility's policy. Category IB. 68,379
f. Decontaminate mop heads and cleaning cloths regularly to prevent contamination (e.g., launder and dry at least daily). Category II. 68,402,403
g. Use a one-step process and an EPA-registered hospital disinfectant designed for housekeeping purposes in patient-care areas where 1. uncertainty exists about the nature of the soil on the surfaces (e.g., blood or body fluid contamination versus routine dust or dirt); or 2. uncertainty exists about the presence of multidrug-resistant organisms on such surfaces. See 5n for recommendations on cleaning and disinfecting blood-contaminated surfaces. Category II. 23,47,48,51,214,378,379,382,416,1012
h. Detergent and water are adequate for cleaning surfaces in nonpatient-care areas (e.g., administrative offices). Category II. 23
i. Do not use high-level disinfectants/liquid chemical sterilants for disinfection of noncritical surfaces. Category IB. 23,69,318
j. Wet-dust horizontal surfaces regularly (e.g., daily, three times per week) using clean cloths moistened with an EPA-registered hospital disinfectant (or detergent). Prepare the disinfectant (or detergent) as recommended by the manufacturer. Category II. 68,378,380,402,403,1008
k. Disinfect noncritical surfaces with an EPA-registered hospital disinfectant according to the label's safety precautions and use directions. Most EPA-registered hospital disinfectants have a label contact time of 10 minutes. However, many scientific studies have demonstrated the efficacy of hospital disinfectants against pathogens with a contact time of at least 1 minute. By law, the user must follow all applicable label instructions on EPA-registered products. If the user selects exposure conditions that differ from those on the EPA-registered product label, the user assumes liability for any injuries resulting from off-label use and is potentially subject to enforcement action under FIFRA. Category II, IC. 17, 47, 48, 50, 51, 53-57, 59, 60, 62-64, 355, 378, 382
l. Do not use disinfectants to clean infant bassinets and incubators while these items are occupied. If disinfectants (e.g., phenolics) are used for the terminal cleaning of infant bassinets and incubators, thoroughly rinse the surfaces of these items with water and dry them before these items are reused. Category IB. 17,739,740
m. Promptly clean and decontaminate spills of blood and other potentially infectious materials. Discard blood-contaminated items in compliance with federal regulations. Category IB, IC. 214
n. For site decontamination of spills of blood or other potentially infectious materials (OPIM), implement the following procedures. Use protective gloves and other PPE (e.g., when sharps are involved, use forceps to pick up sharps, and discard these items in a puncture-resistant container) appropriate for this task. Disinfect areas contaminated with blood spills using an EPA-registered tuberculocidal agent, a registered germicide on the EPA Lists D and E (i.e., products with specific label claims for HIV or HBV), or freshly diluted hypochlorite solution. Category II, IC.
1. If sodium hypochlorite solutions are selected, use a 1:100 dilution (e.g., a 1:100 dilution of a 5.25-6.15% sodium hypochlorite provides 525-615 ppm available chlorine) to decontaminate nonporous surfaces after a small spill (e.g., <10 mL) of blood or OPIM. If a spill involves large amounts (e.g., >10 mL) of blood or OPIM, or involves a culture spill in the laboratory, use a 1:10 dilution for the first application of hypochlorite solution before cleaning in order to reduce the risk of infection during the cleaning process in the event of a sharp injury. Follow this decontamination process with a terminal disinfection, using a 1:100 dilution of sodium hypochlorite. Category IB, IC. 63,215,557
o. If the spill contains large amounts of blood or body fluids, clean the visible matter with disposable absorbent material, and discard the contaminated materials in appropriate, labeled containment. Category II, IC. 44,214
p. Use protective gloves and other PPE appropriate for this task. Category II, IC. 44,214
q. In units with high rates of endemic Clostridium difficile infection or in an outbreak setting, use dilute solutions of 5.25%-6.15% sodium hypochlorite (e.g., 1:10 dilution of household bleach) for routine environmental disinfection. Currently, no products are EPA-registered specifically for inactivating C. difficile spores. Category II.
r. If chlorine solution is not prepared fresh daily, it can be stored at room temperature for up to 30 days in a capped, opaque plastic bottle, with a 50% reduction in chlorine concentration after 30 days of storage (e.g., 1000 ppm chlorine at day 0 decreases to 500 ppm chlorine by day 30). Category IB. 327,1014
s. An EPA-registered sodium hypochlorite product is preferred, but if such products are not available, generic versions of sodium hypochlorite solutions (e.g., household chlorine bleach) can be used. Category II. 44

6. Disinfectant Fogging
a. Do not perform disinfectant fogging for routine purposes in patient-care areas. Category II. 23,228

Environmental Fogging Clarification Statement: CDC and HICPAC have recommendations in the 2008 Guideline for Disinfection and Sterilization in Healthcare Facilities that state that CDC does not support disinfectant fogging. These recommendations refer to the spraying or fogging of chemicals (e.g., formaldehyde, phenol-based agents, or quaternary ammonium compounds) as a way to decontaminate environmental surfaces or disinfect the air in patient rooms. The recommendation against fogging was based on studies in the 1970s that reported a lack of microbicidal efficacy (e.g., use of quaternary ammonium compounds in mist applications) but also adverse effects on healthcare workers and others in facilities where these methods were utilized. Furthermore, some of these chemicals are not EPA-registered for use in fogging-type applications. These recommendations do not apply to newer technologies involving fogging for room decontamination (e.g., ozone mists, vaporized hydrogen peroxide) that have become available since the 2008 recommendations were made. These newer technologies were assessed by CDC and HICPAC in the 2011 Guideline for the Prevention and Control of Norovirus Gastroenteritis Outbreaks in Healthcare Settings, which makes the recommendation: "More research is required to clarify the effectiveness and reliability of fogging, UV irradiation, and ozone mists to reduce norovirus environmental contamination. (No recommendation/unresolved issue)" The 2008 recommendations still apply; however, CDC does not yet make a recommendation regarding these newer technologies. This issue will be revisited as additional evidence becomes available.

# High-Level Disinfection of Endoscopes
a. To detect damaged endoscopes, test each flexible endoscope for leaks as part of each reprocessing cycle. Remove from clinical use any instrument that fails the leak test, and repair this instrument. Category II. 113,115,116
b. Immediately after use, meticulously clean the endoscope with an enzymatic cleaner that is compatible with the endoscope. Cleaning is necessary before both automated and manual disinfection. Category IA. 83, 101, 104-106, 113, 115, 116, 124, 126, 456, 465, 466, 471, 1015
c. Disconnect and disassemble endoscopic components (e.g., suction valves) as completely as possible and completely immerse all components in the enzymatic cleaner. Steam sterilize these components if they are heat stable. Category IB. 115,116,139,465,466
d. Flush and brush all accessible channels to remove all organic (e.g., blood, tissue) and other residue. Clean the external surfaces and accessories of the devices by using a soft cloth or sponge or brushes. Continue brushing until no debris appears on the brush. Category IA. 6,17,108,113,115,116,137,145,147,725,856,903
e. Use cleaning brushes appropriate for the size of the endoscope channel or port (e.g., bristles should contact surfaces). Cleaning items (e.g., brushes, cloth) should be disposable or, if they are not disposable, they should be thoroughly cleaned and either high-level disinfected or sterilized after each use. Category II. 113,115,116,1016
f. Discard enzymatic cleaners (or detergents) after each use because they are not microbicidal and, therefore, will not retard microbial growth. Category IB. 38,113,115,116,466
g. Process endoscopes (e.g., arthroscopes, cystoscopes, laparoscopes) that pass through normally sterile tissues using a sterilization procedure before each use; if this is not feasible, provide at least high-level disinfection. High-level disinfection of arthroscopes, laparoscopes, and cystoscopes should be followed by a sterile water rinse. Category IB. 1,17,31,32,35,89,90,113,554
h. Phase out endoscopes that are critical items (e.g., arthroscopes, laparoscopes) but cannot be steam sterilized. Replace these endoscopes with steam-sterilizable instruments when feasible. Category II.
i. Mechanically clean reusable accessories inserted into endoscopes (e.g., biopsy forceps or other cutting instruments) that break the mucosal barrier (e.g., ultrasonically clean biopsy forceps) and then sterilize these items between each patient. Category IA. 1,6,8,17,108,113,115,116,138,145,147,153,278
j. Use ultrasonic cleaning of reusable endoscopic accessories to remove soil and organic material from hard-to-clean areas. Category II. 116,145,148
k. Process endoscopes and accessories that contact mucous membranes as semicritical items, and use at least high-level disinfection after use on each patient. Category IA. 1, 6, 8, 17, 108, 113, 115, 116, 129, 138, 145-148, 152-154, 278
l. Use an FDA-cleared sterilant or high-level disinfectant for sterilization or high-level disinfection (Table 1). Category IA. 1, 6-8, 17, 85, 108, 113, 115, 116, 147
m. After cleaning, use formulations containing glutaraldehyde, glutaraldehyde with phenol/phenate, ortho-phthalaldehyde, hydrogen peroxide, or both hydrogen peroxide and peracetic acid to achieve high-level disinfection, followed by rinsing and drying (see Table 1 for recommended concentrations). Category IB. 1, 6-8, 17, 38, 85, 108, 113, 145-148
n. Extend exposure times beyond the minimum effective time for disinfecting semicritical patient-care equipment cautiously and conservatively, because extended exposure to a high-level disinfectant is more likely to damage delicate and intricate instruments such as flexible endoscopes. The exposure times vary among the Food and Drug Administration (FDA)-cleared high-level disinfectants (Table 2). Category IB. 17,69,73,76,78,83
o. Federal regulations require following the FDA-cleared label claim for high-level disinfectants.
The FDA-cleared labels for high-level disinfection with >2% glutaraldehyde at 25°C range from 20-90 minutes, depending upon the product, based on three tiers of testing, which include AOAC sporicidal tests, simulated-use testing with mycobacteria, and in-use testing. Category IC.
p. Several scientific studies and professional organizations support the efficacy of >2% glutaraldehyde for 20 minutes at 20°C; that efficacy assumes adequate cleaning prior to disinfection, whereas the FDA-cleared label claim incorporates an added margin of safety to accommodate possible lapses in cleaning practices. Facilities that have chosen to apply the 20-minute duration at 20°C have done so based on the IA recommendation in the July 2003 SHEA position paper, "Multi-society Guideline for Reprocessing Flexible Gastrointestinal Endoscopes." 12, 17, 19, 26, 27, 49, 55, 57, 58, 60, 73, 76, 79-81, 83-85, 93, 94, 104-106, 110, 111, 115-121, 124, 125, 233, 235, 236, 243, 265, 266, 609
Update: Multisociety guideline on reprocessing flexible gastrointestinal endoscopes, 2011.
q. When using FDA-cleared high-level disinfectants, use manufacturers' recommended exposure conditions. Certain products may require a shorter exposure time (e.g., 0.55% ortho-phthalaldehyde for 12 minutes at 20°C, 7.35% hydrogen peroxide plus 0.23% peracetic acid for 15 minutes at 20°C) than glutaraldehyde at room temperature because of their rapid inactivation of mycobacteria, or a reduced exposure time because of increased mycobactericidal activity at elevated temperature (e.g., 2.5% glutaraldehyde for 5 minutes at 35°C). Category IB. 83,100,689,693,694,700
r. Select a disinfectant or chemical sterilant that is compatible with the device that is being reprocessed. Avoid using reprocessing chemicals on an endoscope if the endoscope manufacturer warns against using these chemicals because of functional damage (with or without cosmetic damage). Category IB. 69,113,116
s. Completely immerse the endoscope in the high-level disinfectant, and ensure all channels are perfused. As soon as is feasible, phase out nonimmersible endoscopes. Category IB. 108, 113-116, 137, 725, 856, 882
t. After high-level disinfection, rinse endoscopes and flush channels with sterile water, filtered water, or tapwater to prevent adverse effects on patients associated with disinfectant retained in the endoscope (e.g., disinfectant-induced colitis). Follow this water rinse with a rinse with 70%-90% ethyl or isopropyl alcohol. Category IB. 17, 31-35, 38, 39, 108, 113, 115, 116, 134, 145-148, 620-622, 624-630, 1017
u. After flushing all channels with alcohol, purge the channels using forced air to reduce the likelihood of contamination of the endoscope by waterborne pathogens and to facilitate drying. Category IB. 39,113,115,116,145,147
v. Hang endoscopes in a vertical position to facilitate drying. Category II. 17,108,113,115,116,145,815
w. Store endoscopes in a manner that will protect them from damage or contamination. Category II. 17, 108,113,115,116,145
x. Sterilize or high-level disinfect both the water bottle used to provide intraprocedural flush solution and its connecting tube at least once daily. After sterilizing or high-level disinfecting the water bottle, fill it with sterile water. Category IB. 10, 31-35, 113, 116, 1017
y. Maintain a log for each procedure and record the following: patient's name and medical record number (if available), procedure, date, endoscopist, system used to reprocess the endoscope (if more than one system could be used in the reprocessing area), and serial number or other identifier of the endoscope used. Category II. 108,113,115,116
z. Design facilities where endoscopes are used and disinfected to provide a safe environment for healthcare workers and patients. Use air-exchange equipment (e.g., the ventilation system, out-exhaust ducts) to minimize exposure of all persons to potentially toxic vapors (e.g., glutaraldehyde vapor). Do not exceed the allowable limits of the vapor concentration of the chemical sterilant or high-level disinfectant (e.g., those of ACGIH and OSHA). Category IB, IC. 116,145,318,322,577,652
aa. Routinely test the liquid sterilant/high-level disinfectant to ensure minimal effective concentration of the active ingredient. Check the solution each day of use (or more frequently) using the appropriate chemical indicator (e.g., glutaraldehyde chemical indicator to test the minimal effective concentration of glutaraldehyde) and document the results of this testing. Discard the solution if the chemical indicator shows the concentration is less than the minimum effective concentration. Do not use the liquid sterilant/high-level disinfectant beyond the reuse-life recommended by the manufacturer (e.g., 14 days for ortho-phthalaldehyde). Category IA. 76,108,113,115,116,608,609
ab. Provide personnel assigned to reprocess endoscopes with device-specific reprocessing instructions to ensure proper cleaning and high-level disinfection or sterilization. Require competency testing on a regular basis (e.g., beginning of employment, annually) of all personnel who reprocess endoscopes. Category IA. 6-8, 108, 113, 115, 116, 145, 148, 155
ac. Educate all personnel who use chemicals about the possible biologic, chemical, and environmental hazards of performing procedures that require disinfectants. Category IB, IC. 116,997,998,1018,1019
ad. Make PPE (e.g., gloves, gowns, eyewear, face masks or shields, respiratory protection devices) available, and use these items appropriately to protect workers from exposure to both chemicals and microorganisms (e.g., HBV). Category IB, IC. 115,116,214,961,997,998,1020,1021
ae. If using an automated endoscope reprocessor (AER), place the endoscope in the reprocessor and attach all channel connectors according to the AER manufacturer's instructions to ensure exposure of all internal surfaces to the high-level disinfectant/chemical sterilant. Category IB. 7,8,115,116,155,725,903
af. If using an AER, ensure the endoscope can be effectively reprocessed in the AER. Also, ensure any required manual cleaning/disinfecting steps are performed (e.g., the elevator wire channel of duodenoscopes might not be effectively disinfected by most AERs). Category IB. 7,8,115,116,155,725
ag. Review the FDA advisories and the scientific literature for reports of deficiencies that can lead to infection, because design flaws and improper operation and practices have compromised the effectiveness of AERs. Category II. 7,98,133,134,155,725
ah. Develop protocols to ensure that users can readily identify an endoscope that has been properly processed and is ready for patient use. Category II.
ai. Do not use the carrying case designed to transport clean and reprocessed endoscopes outside of the healthcare environment to store an endoscope or to transport the instrument within the healthcare environment. Category II.
aj. No recommendation is made about routinely performing microbiologic testing of either endoscopes or rinse water for quality assurance purposes. Unresolved issue. 116,164
ak. If environmental microbiologic testing is conducted, use standard microbiologic techniques. Category II. 23,116,157,161,167
al. If a cluster of endoscopy-related infections occurs, investigate potential routes of transmission (e.g., person-to-person, common source) and reservoirs. Category IA. 8,1022
am. Report outbreaks of endoscope-related infections to persons responsible for institutional infection control and risk management and to FDA. Category IB. 6,7,113,116,1023
1. Notify the local and the state health departments, CDC, and the manufacturer(s). Category II.
an. No recommendation is made regarding reprocessing an endoscope again immediately before use if that endoscope has been processed after use according to the recommendations in this guideline. Unresolved issue. 157
ao. Compare the reprocessing instructions provided by both the endoscope's and the AER's manufacturers and resolve any conflicting recommendations. Category IB. 116,155

# Management of Equipment and Surfaces in Dentistry
a. Dental instruments that penetrate soft tissue or bone (e.g., extraction forceps, scalpel blades, bone chisels, periodontal scalers, and surgical burs) are classified as critical and should be sterilized after each use or discarded. In addition, after each use, sterilize dental instruments that are not intended to penetrate oral soft tissue or bone (e.g., amalgam condensers, air-water syringes) but that might contact oral tissues and are heat-tolerant, although classified as semicritical. Clean and, at a minimum, high-level disinfect heat-sensitive semicritical items. Category IA. 43
b. Noncritical clinical contact surfaces, such as uncovered operatory surfaces (e.g., countertops, switches, light handles), should be barrier-protected or disinfected between patients with an intermediate-level disinfectant (i.e., EPA-registered hospital disinfectant with a tuberculocidal claim) or low-level disinfectant (i.e., EPA-registered hospital disinfectant with an HIV and HBV claim). Category IB. 43
c. Barrier protective coverings can be used for noncritical clinical contact surfaces that are touched frequently with gloved hands during the delivery of patient care, that are likely to become contaminated with blood or body substances, or that are difficult to clean. Change these coverings when they are visibly soiled, when they become damaged, and on a routine basis (e.g., between patients). Disinfect protected surfaces at the end of the day or if visibly soiled. Category II. 43,210
b. When probe covers are available, use a probe cover or condom to reduce the level of microbial contamination. Category II. Do not use a lower category of disinfection or cease to follow the appropriate disinfectant recommendations when using probe covers, because these sheaths and condoms can fail. Category IB.
c. After high-level disinfection, rinse all items. Use sterile water, filtered water, or tapwater followed by an alcohol rinse for semicritical equipment that will have contact with mucous membranes of the upper respiratory tract (e.g., nose, pharynx, esophagus). Category II. 10,1017
d. There is no recommendation to use sterile or filtered water rather than tapwater for rinsing semicritical equipment that contacts the mucous membranes of the rectum (e.g., rectal probes, anoscope) or vagina (e.g., vaginal probes). Unresolved issue. 11
e. Wipe tonometer tips clean and then disinfect them by immersing for 5-10 minutes in either 5000 ppm chlorine or 70% ethyl alcohol. Neither of these listed disinfectant products is an FDA-cleared high-level disinfectant. Category II. 49,95,185,188,293

11. Disinfection by Healthcare Personnel in Ambulatory Care and Home Care
a. Follow the same classification scheme described above (i.e., critical devices require sterilization, semicritical devices require high-level disinfection, and noncritical equipment requires low-level disinfection) in the ambulatory-care (outpatient medical/surgical facilities) setting, because the risk for infection in this setting is similar to that in the hospital setting (see Table 1). Category IB. 6-8, 17, 330
b. When performing care in the home, clean and disinfect reusable objects that touch mucous membranes (e.g., tracheostomy tubes) by immersing these objects in a 1:50 dilution of 5.25%-6.15% sodium hypochlorite (household bleach) (3 minutes), 70% isopropyl alcohol (5 minutes), or 3% hydrogen peroxide (30 minutes), because the home environment is, in most instances, safer than either hospital or ambulatory-care settings because person-to-person transmission is less likely. Category II. 327,328,330,331
c. Clean noncritical items that would not be shared between patients (e.g., crutches, blood pressure cuffs).
e. Completely aerate surgical and medical items that have been sterilized in the EtO sterilizer (e.g., polyvinylchloride tubing requires 12 hours at 50°C, 8 hours at 60°C) before using these items in patient care. Category IB. 814
f. The peracetic acid immersion system can be used to sterilize heat-sensitive immersible medical and surgical items. Category IB. 90
g. Critical items that have been sterilized by the peracetic acid immersion process must be used immediately (i.e., items are not completely protected from contamination, making long-term storage unacceptable). Category II. 817,825
h. Dry-heat sterilization (e.g., 340°F for 60 minutes) can be used to sterilize items (e.g., powders, oils) that can sustain high temperatures. Category IB. 815,827
i. Comply with the sterilizer manufacturer's instructions regarding the sterilizer cycle parameters (e.g., time, temperature, concentration). Category IB. 155,725,819
j. Because narrow-lumen devices provide a challenge to all low-temperature sterilization technologies, and because direct contact is necessary for the sterilant to be effective, ensure that the sterilant has direct contact with contaminated surfaces (e.g., scopes processed in peracetic acid must be connected to channel irrigators). Category IB.
d. The shelf life of a packaged sterile item depends on the quality of the wrapper, the storage conditions, the conditions during transport, the amount of handling, and other events (e.g., moisture) that compromise the integrity of the package. If event-related storage of sterile items is used, then packaged sterile items can be used indefinitely unless the packaging is compromised (see f and g below). Category IB. 816,819,836,968,973,1030,1031
e. Evaluate packages before use for loss of integrity (e.g., torn, wet, punctured). The pack can be used unless the integrity of the packaging is compromised. Category II. 819,968
f. If the integrity of the packaging is compromised (e.g., torn, wet, or punctured), repack and reprocess the pack before use. Category II. 819,1032
g. If time-related storage of sterile items is used, label the pack at the time of sterilization with an expiration date. Once this date expires, reprocess the pack. Category II. 819,968

19. Quality Control
a. Provide comprehensive and intensive training for all staff assigned to reprocess semicritical and critical medical/surgical instruments to ensure they understand the importance of reprocessing these instruments. To achieve and maintain competency, train each member of the staff who reprocesses semicritical and/or critical instruments as follows: 1. provide hands-on training according to the institutional policy for reprocessing critical and semicritical devices; 2. supervise all work until competency is documented for each reprocessing task; 3. conduct competency testing at the beginning of employment and regularly thereafter (e.g., annually); and 4. review the written reprocessing instructions regularly to ensure they comply with the scientific literature and the manufacturers' instructions. Category IB. 6-8, 108, 114, 129, 155, 725, 813, 819
b. Compare the reprocessing instructions (e.g., for the appropriate use of endoscope connectors, the capping/noncapping of specific lumens) provided by the instrument manufacturer and the sterilizer manufacturer, and resolve any conflicting recommendations by communicating with both manufacturers. Category IB.

# Glossary
Action level: concentration of a regulated substance (e.g., ethylene oxide, formaldehyde) within the employee breathing zone, above which OSHA requirements apply.
Activation of a sterilant: process of mixing the contents of a chemical sterilant that comes in two containers (a small vial with the activator solution and a container of the chemical). Keeping the two chemicals separate until use extends the shelf life of the chemicals.
Aeration: method by which ethylene oxide (EtO) is removed from EtO-sterilized items by warm air circulation in an enclosed cabinet specifically designed for this purpose.
Antimicrobial agent: any agent that kills or suppresses the growth of microorganisms.
Antiseptic: substance that prevents or arrests the growth or action of microorganisms by inhibiting their activity or by destroying them. The term is used especially for preparations applied topically to living tissue.
Asepsis: prevention of contact with microorganisms.
Autoclave: device that sterilizes instruments or other objects using steam under pressure. The length of time required for sterilization depends on temperature, vacuum, and pressure.
Bacterial count: method of estimating the number of bacteria per unit sample. The term also refers to the estimated number of bacteria per unit sample, usually expressed as the number of colony-forming units.
Bactericide: agent that kills bacteria.
Bioburden: number and types of viable microorganisms with which an item is contaminated; also called bioload or microbial load.
Biofilm: accumulated mass of bacteria and extracellular material that is tightly adhered to a surface and cannot be easily removed.
Biologic indicator: device for monitoring the sterilization process. The device consists of a standardized, viable population of microorganisms (usually bacterial spores) known to be resistant to the sterilization process being monitored. Biologic indicators are intended to demonstrate whether conditions were adequate to achieve sterilization.
A negative biologic indicator does not prove that all items in the load are sterile or that they were all exposed to adequate sterilization conditions.
Bleach: Household bleach (which includes 5.25% or 6.00%-6.15% sodium hypochlorite, depending on the manufacturer) is usually diluted in water at 1:10 or 1:100. Approximate dilutions are 1.5 cups of bleach in a gallon of water for a 1:10 dilution (~6,000 ppm) and 0.25 cup of bleach in a gallon of water for a 1:100 dilution (~600 ppm). Sodium hypochlorite products that make pesticidal claims, such as sanitization or disinfection, must be registered by EPA and be labeled with an EPA Registration Number.
Bowie-Dick test: diagnostic test of a sterilizer's ability to remove air from the chamber of a prevacuum steam sterilizer. The air-removal or Bowie-Dick test is not a test for sterilization.
Ceiling limit: concentration of an airborne chemical contaminant that should not be exceeded during any part of the workday. If instantaneous monitoring is not feasible, the ceiling must be assessed as a 15-minute time-weighted average exposure.
Centigrade or Celsius: a temperature scale (0°C = freezing point of water; 100°C = boiling point of water at sea level). Equivalents mentioned in the guideline are as follows: 20°C = 68°F; 25°C = 77°F; 121°C = 250°F; 132°C = 270°F; 134°C = 273°F. For other temperatures the formula is: F° = (C° × 9/5) + 32 or C° = (F° - 32) × 5/9.
Central processing or Central service department: the department within a health-care facility that processes, issues, and controls professional supplies and equipment, both sterile and nonsterile, for some or all patient-care areas of the facility.
Challenge test pack: pack used in installation, qualification, and ongoing quality assurance testing of health-care facility sterilizers.
Contact time: time a disinfectant is in direct contact with the surface or item to be disinfected. For surface disinfection, this period is framed by the application to the surface until complete drying has occurred.
Container system, rigid container: sterilization containment device designed to hold medical devices for sterilization, storage, transportation, and aseptic presentation of contents.
Contaminated: state of having actual or potential contact with microorganisms. As used in health care, the term generally refers to the presence of microorganisms that could produce disease or infection.
Control, positive: biologic indicator, from the same lot as a test biologic indicator, that is left unexposed to the sterilization cycle and then incubated to verify the viability of the test biologic indicator.
Cleaning: removal, usually with detergent and water or enzyme cleaner and water, of adherent visible soil, blood, protein substances, microorganisms, and other debris from the surfaces, crevices, serrations, joints, and lumens of instruments, devices, and equipment by a manual or mechanical process that prepares the items for safe handling and/or further decontamination.
Culture: growth of microorganisms in or on a nutrient medium; to grow microorganisms in or on such a medium.
Culture medium: substance or preparation used to grow and cultivate microorganisms.
Cup: 8 fluid ounces.
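The dilution arithmetic in the Bleach entry above, together with the 30-day storage loss noted in the recommendations earlier in this guideline, reduces to two simple calculations. The sketch below illustrates both; the function and variable names are illustrative only.

```python
# Minimal sketch of the bleach arithmetic described above.

def available_chlorine_ppm(stock_pct: float, dilution: int) -> float:
    """Available chlorine (ppm) for a 1:dilution preparation of stock bleach."""
    return stock_pct / 100 * 1_000_000 / dilution

print(available_chlorine_ppm(6.0, 10))    # 1:10 dilution  -> 6000.0 ppm
print(available_chlorine_ppm(5.25, 100))  # 1:100 dilution -> 525.0 ppm

# Stored chlorine solution loses about half its strength over 30 days at
# room temperature (e.g., 1000 ppm on day 0 is ~500 ppm by day 30):
print(1000 * 0.5)                         # -> 500.0 ppm
```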
Decontamination: according to OSHA, "the use of physical or chemical means to remove, inactivate, or destroy bloodborne pathogens on a surface or item to the point where they are no longer capable of transmitting infectious particles and the surface or item is rendered safe for handling, use, or disposal" [29 CFR 1910.1030]. In health-care facilities, the term generally refers to all pathogenic organisms.
Decontamination area: area of a health-care facility designated for collection, retention, and cleaning of soiled and/or contaminated items.
Detergent: cleaning agent that makes no antimicrobial claims on the label. Detergents comprise a hydrophilic component and a lipophilic component and can be divided into four types: anionic, cationic, amphoteric, and non-ionic detergents.
Disinfectant: usually a chemical agent (but sometimes a physical agent) that destroys disease-causing pathogens or other harmful microorganisms but might not kill bacterial spores. It refers to substances applied to inanimate objects. EPA groups disinfectants by product label claims of "limited," "general," or "hospital" disinfection.
Disinfection: thermal or chemical destruction of pathogenic and other types of microorganisms. Disinfection is less lethal than sterilization because it destroys most recognized pathogenic microorganisms but not necessarily all microbial forms (e.g., bacterial spores).
D value: time or radiation dose required to inactivate 90% of a population of the test microorganism under stated exposure conditions.
Endoscope: an instrument that allows examination and treatment of the interior of body canals and hollow organs.
Enzyme cleaner: a solution used before disinfecting instruments to improve removal of organic material (e.g., proteases to assist in removing protein).
EPA Registration Number or EPA Reg. No.: a hyphenated, two- or three-part number assigned by EPA to identify each germicidal product registered within the United States. The first number is the company identification number, the second is the specific product number, and the third (when present) is the company identification number for a supplemental registrant.
Exposure time: period in a sterilization process during which items are exposed to the sterilant at the specified sterilization parameters. For example, in a steam sterilization process, exposure time is the period during which items are exposed to saturated steam at the specified temperature.
Flash sterilization: process designed for the steam sterilization of unwrapped patient-care items for immediate use (or placed in a specially designed, covered, rigid container to allow for rapid penetration of steam).
Fungicide: agent that destroys fungi (including yeasts) and/or fungal spores pathogenic to humans or other animals in the inanimate environment.
General disinfectant: EPA-registered disinfectant labeled for use against both gram-negative and gram-positive bacteria. Efficacy is demonstrated against both Salmonella choleraesuis and Staphylococcus aureus. Also called a broad-spectrum disinfectant.
Germicide: agent that destroys microorganisms, especially pathogenic organisms.
Microorganisms: animals or plants of microscopic size. As used in health care, generally refers to bacteria, fungi, viruses, and bacterial spores.
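The D value defined above implies a simple survival model: each D of exposure reduces the surviving population tenfold, so the expected survivors are N0 × 10^(-t/D). The sketch below works one example; the D value and exposure time are invented purely for illustration.

```python
# Worked example of the D-value relationship: survivors = N0 * 10**(-t/D).
# The D value and exposure time below are hypothetical.

def survivors(n0: float, t: float, d_value: float) -> float:
    """Expected surviving population after exposure time t (same units as D)."""
    return n0 * 10 ** (-t / d_value)

# A 10^6-spore biological indicator with an assumed D = 1.5 minutes:
# 12 minutes gives 8 log10 reductions, i.e., 10^6 -> 10^-2 expected
# survivors (a 1-in-100 chance that a spore survives on that carrier).
print(survivors(1e6, t=12, d_value=1.5))  # -> ~0.01
```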
Minimum effective concentration (MEC): the minimum concentration of a liquid chemical germicide needed to achieve the claimed microbicidal activity, as determined by dose-response testing. Sometimes used interchangeably with minimum recommended concentration.
Muslin: loosely woven (by convention, 140 threads per square inch), 100% cotton cloth. Formerly used as a wrap for sterile packs or a surgical drape. Fabric wraps used currently consist of a cotton-polyester blend.
Mycobacteria: bacteria with a thick, waxy coat that makes them more resistant to chemical germicides than other types of vegetative bacteria.
Nonlipid viruses: generally considered more resistant to inactivation than lipid viruses. Also called nonenveloped or hydrophilic viruses.
One-step disinfection process: simultaneous cleaning and disinfection of a noncritical surface or item.
Pasteurization: process developed by Louis Pasteur of heating milk, wine, or other liquids to 65-77°C (or the equivalent) for approximately 30 minutes to kill or markedly reduce the number of pathogenic and spoilage organisms other than bacterial spores.
Parametric release: declaration that a product is sterile on the basis of physical and/or chemical process data rather than on sample testing or biologic indicator results.
Penicylinder: carriers inoculated with the test bacteria for in vitro tests of germicides. Penicylinders can be constructed of stainless steel, porcelain, glass, or other materials and are approximately 8 x 10 mm in diameter.
Prions: transmissible pathogenic agents that cause a variety of neurodegenerative diseases of humans and animals, including scrapie in sheep and goats, bovine spongiform encephalopathy in cattle, and Creutzfeldt-Jakob disease in humans. They are unlike any other infectious pathogens because they are composed of an abnormal conformational isoform of a normal cellular protein, the prion protein (PrP). Prions are extremely resistant to inactivation by sterilization processes and disinfecting agents.
Process challenge device (PCD): item designed to simulate the product to be sterilized and to constitute a defined challenge to the sterilization process, used to assess the effective performance of the process. A PCD is a challenge test pack or test tray that contains a biologic indicator, a Class 5 integrating indicator, or an enzyme-only indicator.
QUAT: abbreviation for quaternary ammonium compound, a surface-active, water-soluble disinfecting substance that has four carbon atoms linked to a nitrogen atom through covalent bonds.
Recommended exposure limit (REL): occupational exposure limit recommended by NIOSH as being protective of worker health and safety over a working lifetime. Frequently expressed as a 40-hour time-weighted-average exposure for up to 10 hours per day during a 40-hour work week.
Reprocess: method to ensure proper disinfection or sterilization; can include cleaning, inspection, wrapping, sterilizing, and storing.
Sanitizer: agent that reduces the number of bacterial contaminants to safe levels as judged by public health requirements. Commonly used with substances applied to inanimate objects. According to the protocol for the official sanitizer test, a sanitizer is a chemical that kills 99.999% of the specific test bacteria in 30 seconds under the conditions of the test.
Shelf life: length of time an undiluted or use-dilution of a product can remain active and effective. Also refers to the length of time a sterilized product (e.g., sterile instrument set) is expected to remain sterile.
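Percent-kill claims such as the sanitizer test's 99.999% in 30 seconds translate directly into log10 reductions: 99.999% corresponds to a 5-log10 reduction, and 90% to a single D value. A small sketch of the conversion, with illustrative names:

```python
# Convert a percent-kill claim into the implied log10 reduction.
import math

def log10_reduction(percent_kill: float) -> float:
    """log10 reduction implied by a percent-kill claim (requires percent_kill < 100)."""
    surviving_fraction = 1 - percent_kill / 100
    return -math.log10(surviving_fraction)

print(log10_reduction(99.999))  # -> ~5.0 (sanitizer claim above)
print(log10_reduction(90.0))    # -> ~1.0 (one D value)
```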
Spaulding classification: strategy for reprocessing contaminated medical devices. The system classifies a medical device as critical, semicritical, or noncritical on the basis of risk to patient safety from contamination on a device. The system also established three levels of germicidal activity (sterilization, high-level disinfection, and low-level disinfection) corresponding to the three classes of medical devices (critical, semicritical, and noncritical).
Spore: relatively water-poor round or elliptical resting cell consisting of condensed cytoplasm and nucleus surrounded by an impervious cell wall or coat. Spores are relatively resistant to disinfectant and sterilant activity and drying conditions (specifically in the genera Bacillus and Clostridium).
Spore strip: paper strip impregnated with a known population of spores that meets the definition of biological indicators.
Steam quality: steam characteristic reflecting the dryness fraction (weight of dry steam in a mixture of dry saturated steam and entrained water) and the level of noncondensable gas (air or other gas that will not condense under the conditions of temperature and pressure used during the sterilization process). The dryness fraction (i.e., the proportion of completely dry steam in the steam being considered) should not fall below 97%.
Steam sterilization: sterilization process that uses saturated steam under pressure, for a specified exposure time and at a specified temperature, as the sterilizing agent.
Steam sterilization, dynamic air removal type: one of two types of sterilization cycles in which air is removed from the chamber and the load by a series of pressure and vacuum excursions (prevacuum cycle) or by a series of steam flushes and pressure pulses above atmospheric pressure (steam-flush-pressure-pulse cycle).
Sterile or Sterility: state of being free from all living microorganisms. In practice, usually described as a probability function, e.g., as the probability of a microorganism surviving sterilization being one in one million.
Sterility assurance level (SAL): probability of a viable microorganism being present on a product unit after sterilization. Usually expressed as 10^-6; a SAL of 10^-6 means a ≤1 in 1 million chance that a single viable microorganism is present on a sterilized item. A SAL of 10^-6 generally is accepted as appropriate for items intended to contact compromised tissue (i.e., tissue that has lost the integrity of the natural body barriers). The sterilizer manufacturer is responsible for ensuring the sterilizer can achieve the desired SAL. The user is responsible for monitoring the performance of the sterilizer to ensure it is operating in conformance to the manufacturer's recommendations.
Sterilization: validated process used to render a product free of all forms of viable microorganisms. In a sterilization process, the presence of microorganisms on any individual item can be expressed in terms of probability. Although this probability can be reduced to a very low number, it can never be reduced to zero.
Sterilization area: area of a health-care facility designed to house sterilization equipment, such as steam, ethylene oxide, hydrogen peroxide gas plasma, or ozone sterilizers.
Sterilizer: apparatus used to sterilize medical devices, equipment, or supplies by direct exposure to the sterilizing agent.
Sterilizer, prevacuum type: type of steam sterilizer that depends on one or more pressure and vacuum excursions at the beginning of the cycle to remove air. This method of operation results in shorter cycle times for wrapped items because of the rapid removal of air from the chamber and the load by the vacuum system and because of the usually higher operating temperature (132-135°C [270-275°F]; 141-144°C [285-291°F]). This type of sterilizer generally provides for shorter exposure time and accelerated drying of fabric loads by pulling a further vacuum at the end of the sterilizing cycle.
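The D value and sterility assurance level entries above combine into a standard survivor-curve calculation. The numbers in the worked example below (initial bioburden N0 and the D value itself) are assumed for illustration only; they are not values from this guideline:

```latex
% Survivor curve implied by the D-value definition (one D = 90% inactivation):
\[ N(t) = N_0 \cdot 10^{-t/D} \]
% Exposure time needed to reach a target SAL:
\[ t = D \left( \log_{10} N_0 - \log_{10} \mathrm{SAL} \right) \]
% With assumed values N_0 = 10^6, D = 1.5 min, and target SAL = 10^-6:
\[ t = 1.5 \times \bigl( 6 - (-6) \bigr) = 18 \text{ min} \]
% i.e., a 12-log10 reduction, the margin often described as an "overkill" cycle.
```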
Sterilizer, steam-flush pressure-pulse type: type of sterilizer in which a repeated sequence consisting of a steam flush and a pressure pulse removes air from the sterilizing chamber and processed materials using steam at above atmospheric pressure (no vacuum is required). Like a prevacuum sterilizer, a steam-flush pressure-pulse sterilizer rapidly removes air from the sterilizing chamber and wrapped items; however, the system is not susceptible to air leaks because air is removed with the sterilizing chamber pressure at above atmospheric pressure. Typical operating temperatures are 121-123°C (250-254°F), 132-135°C (270-275°F), and 141-144°C (285-291°F).
Surfactant: agent that reduces the surface tension of water or the tension at the interface between water and another liquid; a wetting agent found in many sterilants and disinfectants.
Tabletop steam sterilizer: a compact gravity-displacement steam sterilizer that has a chamber volume of not more than 2 cubic feet and that generates its own steam when distilled or deionized water is added.
Time-weighted average (TWA): an average of all the concentrations of a chemical to which a worker has been exposed during a specific sampling time, reported as an average over the sampling time. For example, the permissible exposure limit for ethylene oxide is 1 ppm as an 8-hour TWA. Exposures above the 1-ppm limit are permitted if they are compensated for by equal or longer exposures below the limit during the 8-hour workday, as long as they do not exceed the ceiling limit; short-term exposure limit; or, in the case of ethylene oxide, excursion limit of 5 ppm averaged over a 15-minute sampling period.
Tuberculocide: an EPA-classified hospital disinfectant that also kills Mycobacterium tuberculosis (tubercle bacilli). EPA has registered approximately 200 tuberculocides. Such agents also are called mycobactericides.
Use-life: the length of time a diluted product can remain active and effective. The stability of the chemical and the storage conditions (e.g., temperature and presence of air, light, organic matter, or metals) determine the use-life of antimicrobial products.
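As an illustration of the TWA entry above, the following is a minimal sketch of an 8-hour TWA calculation for ethylene oxide. The sampling intervals and concentrations are assumed for illustration; they are not measurements from this guideline:

```python
# Minimal sketch of an 8-hour TWA calculation for ethylene oxide (EtO).
# Sample values below are assumed, not measured data.

def twa_8h(samples):
    """samples: list of (concentration_ppm, duration_hours) covering the shift."""
    total_hours = sum(hours for _, hours in samples)
    exposure = sum(ppm * hours for ppm, hours in samples)
    return exposure / total_hours  # average over the sampling time

shift = [(2.0, 1.0), (0.5, 3.0), (0.8, 4.0)]  # 8 hours total
twa = twa_8h(shift)
print(f"8-hour TWA: {twa:.2f} ppm")       # 0.84 ppm
print("Within 1-ppm PEL:", twa <= 1.0)    # peaks above 1 ppm are allowed if
                                          # offset by lower exposures in-shift
# Coarse stand-in for the 5-ppm, 15-minute excursion limit (a real check
# would average every 15-minute window, not whole sampling intervals):
print("Excursions <= 5 ppm:", all(ppm <= 5.0 for ppm, _ in shift))
```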
# Tables and Figure

[Table bodies and the figure are not reproduced here; the surviving table footnotes are retained below. One surviving row fragment: hydrogen peroxide (7.35%) and 0.23% peracetic acid; hydrogen peroxide 1% and peracetic acid 0.08% (will corrode metal instruments); 3-8 h.]

Footnotes to the tables:
- Modified from Rutala and Simmons. 15,17,18,421
- The selection and use of disinfectants in the healthcare field is dynamic, and products may become available that are not in existence when this guideline was written. As newer disinfectants become available, persons or committees responsible for selecting disinfectants and sterilization processes should be guided by products cleared by the FDA and the EPA as well as information in the scientific literature.
- See text for discussion of hydrotherapy.
- The longer the exposure to a disinfectant, the more likely it is that all microorganisms will be eliminated. Follow the FDA-cleared high-level disinfection claim. Ten-minute exposure is not adequate to disinfect many objects, especially those that are difficult to clean because they have narrow channels or other areas that can harbor organic material and bacteria. Twenty-minute exposure at 20°C is the minimum time needed to reliably kill M. tuberculosis and nontuberculous mycobacteria with a 2% glutaraldehyde. Some high-level disinfectants have a reduced exposure time (e.g., ortho-phthalaldehyde at 12 minutes at 20°C) because of their rapid activity against mycobacteria, or a reduced exposure time due to increased mycobactericidal activity at elevated temperature (e.g., 2.5% glutaraldehyde at 5 minutes at 35°C, 0.55% OPA at 5 minutes at 25°C in an automated endoscope reprocessor).
- Tubing must be completely filled for high-level disinfection and liquid chemical sterilization; care must be taken to avoid entrapment of air bubbles during immersion.
- Material compatibility should be investigated when appropriate.
- A concentration of 1000 ppm available chlorine should be considered where cultures or concentrated preparations of microorganisms have spilled (5.25% to 6.15% household bleach diluted 1:50 provides >1000 ppm available chlorine). This solution may corrode some surfaces.
- Pasteurization (washer-disinfector) of respiratory therapy or anesthesia equipment is a recognized alternative to high-level disinfection. Some data challenge the efficacy of some pasteurization units.
- Thermostability should be investigated when appropriate.
- Do not mix rectal and oral thermometers at any stage of handling or processing.
- By law, all applicable label instructions on EPA-registered products must be followed. If the user selects exposure conditions that differ from those on the EPA-registered product's label, the user assumes liability for any injuries resulting from off-label use and is potentially subject to enforcement action under FIFRA.
- Modified from Russell and Favero. 13,344
- All products are effective in the presence of organic soil, are relatively easy to use, and have a broad spectrum of antimicrobial activity (bacteria, fungi, viruses, bacterial spores, and mycobacteria). The above characteristics are documented in the literature; contact the manufacturer of the instrument and sterilant for additional information. All products listed are FDA-cleared as chemical sterilants except OPA, which is an FDA-cleared high-level disinfectant.

Characteristics listed for an ideal low-temperature sterilization process:
- Rapid activity: ability to quickly achieve sterilization
- Strong penetrability: ability to penetrate common medical-device packaging materials and penetrate into the interior of device lumens
- Material compatibility: produces only negligible changes in the appearance or the function of processed items and packaging materials even after repeated cycling
- Nontoxic: presents no toxic health risk to the operator or the patient and poses no hazard to the environment
- Organic material resistance: withstands reasonable organic material challenge without loss of efficacy

Test-organism notes:
- Test organism was G. stearothermophilus spores. The lumen test units had a removable 5-cm center piece (1.2 cm diameter) of stainless steel sealed to the narrower steel tubing by hard rubber septums.
- Test organism was G. stearothermophilus spores. The lumen test unit was a straight stainless steel tube.
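The 1:50 bleach dilution cited in the footnotes above is easy to verify: 1% (w/v) corresponds to 10,000 ppm, so 5.25%-6.15% household bleach is 52,500-61,500 ppm available chlorine, and a 1:50 dilution yields roughly 1,050-1,230 ppm. A minimal sketch of that arithmetic:

```python
# Available chlorine after diluting household bleach.
# 1% (w/v) = 10,000 ppm, so 5.25%-6.15% bleach is 52,500-61,500 ppm.

def diluted_chlorine_ppm(bleach_percent, dilution_factor):
    """ppm available chlorine after a 1:dilution_factor dilution."""
    return bleach_percent * 10_000 / dilution_factor

for pct in (5.25, 6.15):
    ppm = diluted_chlorine_ppm(pct, 50)
    print(f"{pct}% bleach diluted 1:50 -> {ppm:.0f} ppm")
# 5.25% -> 1050 ppm; 6.15% -> 1230 ppm: both exceed the 1000 ppm
# suggested for spills of cultures or concentrated microorganisms.
```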
# Executive Summary

The Guideline for Disinfection and Sterilization in Healthcare Facilities, 2008, presents evidence-based recommendations on the preferred methods for cleaning, disinfection, and sterilization of patient-care medical devices and for cleaning and disinfecting the healthcare environment. This document supersedes the relevant sections contained in the 1985 Centers for Disease Control (CDC) Guideline for Handwashing and Environmental Control. 1 Because maximum effectiveness from disinfection and sterilization results from first cleaning and removing organic and inorganic materials, this document also reviews cleaning methods. The chemical disinfectants discussed for patient-care equipment include alcohols, glutaraldehyde, formaldehyde, hydrogen peroxide, iodophors, ortho-phthalaldehyde, peracetic acid, phenolics, quaternary ammonium compounds, and chlorine. The choice of disinfectant, concentration, and exposure time is based on the risk for infection associated with use of the equipment and other factors discussed in this guideline. The sterilization methods discussed include steam sterilization, ethylene oxide (ETO), hydrogen peroxide gas plasma, and liquid peracetic acid. When properly used, these cleaning, disinfection, and sterilization processes can reduce the risk for infection associated with use of invasive and noninvasive medical and surgical devices. However, for these processes to be effective, health-care workers should adhere strictly to the cleaning, disinfection, and sterilization recommendations in this document and to instructions on product labels.

In addition to updated recommendations, new topics addressed in this guideline include:
1. inactivation of antibiotic-resistant bacteria, bioterrorist agents, emerging pathogens, and bloodborne pathogens;
2. toxicologic, environmental, and occupational concerns associated with disinfection and sterilization practices;
3. disinfection of patient-care equipment used in ambulatory settings and home care;
4. new sterilization processes, such as hydrogen peroxide gas plasma and liquid peracetic acid; and
5. disinfection of complex medical instruments (e.g., endoscopes).

# Introduction

In the United States, approximately 46.5 million surgical procedures and even more invasive medical procedures, including approximately 5 million gastrointestinal endoscopies, are performed each year. 2 Each procedure involves contact by a medical device or surgical instrument with a patient's sterile tissue or mucous membranes. A major risk of all such procedures is the introduction of pathogens that can lead to infection. Failure to properly disinfect or sterilize equipment carries not only risk associated with breach of host barriers but also risk for person-to-person transmission (e.g., hepatitis B virus) and transmission of environmental pathogens (e.g., Pseudomonas aeruginosa).

Disinfection and sterilization are essential for ensuring that medical and surgical instruments do not transmit infectious pathogens to patients. Because sterilization of all patient-care items is not necessary, health-care policies must identify, primarily on the basis of the items' intended use, whether cleaning, disinfection, or sterilization is indicated. Multiple studies in many countries have documented lack of compliance with established guidelines for disinfection and sterilization. [3][4][5][6] Failure to comply with scientifically-based guidelines has led to numerous outbreaks. [6][7][8][9][10][11][12]
This guideline presents a pragmatic approach to the judicious selection and proper use of disinfection and sterilization processes; the approach is based on well-designed studies assessing the efficacy (through laboratory investigations) and effectiveness (through clinical studies) of disinfection and sterilization procedures.

# Methods

This guideline resulted from a review of all MEDLINE articles in English listed under the MeSH headings of disinfection or sterilization (focusing on health-care equipment and supplies) from January 1980 through August 2006. References listed in these articles also were reviewed. Selected articles published before 1980 were reviewed and, if still relevant, included in the guideline. The three major peer-reviewed journals in infection control (American Journal of Infection Control, Infection Control and Hospital Epidemiology, and Journal of Hospital Infection) were searched for relevant articles published from January 1990 through August 2006. Abstracts presented at the annual meetings of the Society for Healthcare Epidemiology of America and the Association for Professionals in Infection Control and Epidemiology, Inc. during 1997-2006 also were reviewed; however, abstracts were not used to support the recommendations.

# Definition of Terms

Sterilization describes a process that destroys or eliminates all forms of microbial life and is carried out in health-care facilities by physical or chemical methods. Steam under pressure, dry heat, EtO gas, hydrogen peroxide gas plasma, and liquid chemicals are the principal sterilizing agents used in health-care facilities. Sterilization is intended to convey an absolute meaning; unfortunately, however, some health professionals and the technical and commercial literature refer to "disinfection" as "sterilization" and items as "partially sterile." When chemicals are used to destroy all forms of microbiologic life, they can be called chemical sterilants. These same germicides used for shorter exposure periods also can be part of the disinfection process (i.e., high-level disinfection).

Disinfection describes a process that eliminates many or all pathogenic microorganisms, except bacterial spores, on inanimate objects (Tables 1 and 2). In health-care settings, objects usually are disinfected by liquid chemicals or wet pasteurization. Each of the various factors that affect the efficacy of disinfection can nullify or limit the efficacy of the process. Factors that affect the efficacy of both disinfection and sterilization include prior cleaning of the object; organic and inorganic load present; type and level of microbial contamination; concentration of and exposure time to the germicide; physical nature of the object (e.g., crevices, hinges, and lumens); presence of biofilms; temperature and pH of the disinfection process; and, in some cases, relative humidity of the sterilization process (e.g., ethylene oxide).

Unlike sterilization, disinfection is not sporicidal. A few disinfectants will kill spores with prolonged exposure times (3-12 hours); these are called chemical sterilants. At similar concentrations but with shorter exposure periods (e.g., 20 minutes for 2% glutaraldehyde), these same disinfectants will kill all microorganisms except large numbers of bacterial spores; they are called high-level disinfectants. Low-level disinfectants can kill most vegetative bacteria, some fungi, and some viruses in a practical period of time (≤10 minutes).
Intermediate-level disinfectants might be cidal for mycobacteria, vegetative bacteria, most viruses, and most fungi but do not necessarily kill bacterial spores. Germicides differ markedly, primarily in their antimicrobial spectrum and rapidity of action.

Cleaning is the removal of visible soil (e.g., organic and inorganic material) from objects and surfaces and normally is accomplished manually or mechanically using water with detergents or enzymatic products. Thorough cleaning is essential before high-level disinfection and sterilization because inorganic and organic materials that remain on the surfaces of instruments interfere with the effectiveness of these processes. Decontamination removes pathogenic microorganisms from objects so they are safe to handle, use, or discard.

Terms with the suffix cide or cidal for killing action also are commonly used. For example, a germicide is an agent that can kill microorganisms, particularly pathogenic organisms ("germs"). The term germicide includes both antiseptics and disinfectants. Antiseptics are germicides applied to living tissue and skin; disinfectants are antimicrobials applied only to inanimate objects. In general, antiseptics are used only on the skin and not for surface disinfection, and disinfectants are not used for skin antisepsis because they can injure skin and other tissues. Virucide, fungicide, bactericide, sporicide, and tuberculocide can kill the type of microorganism identified by the prefix. For example, a bactericide is an agent that kills bacteria. [13][14][15][16][17][18]

# A Rational Approach to Disinfection and Sterilization

More than 30 years ago, Earle H. Spaulding devised a rational approach to disinfection and sterilization of patient-care items and equipment. 14 This classification scheme is so clear and logical that it has been retained, refined, and successfully used by infection control professionals and others when planning methods for disinfection or sterilization. 1,13,15,17,19,20 Spaulding believed the nature of disinfection could be understood readily if instruments and items for patient care were categorized as critical, semicritical, and noncritical according to the degree of risk for infection involved in use of the items. The CDC Guideline for Handwashing and Hospital Environmental Control 21, Guidelines for the Prevention of Transmission of Human Immunodeficiency Virus (HIV) and Hepatitis B Virus (HBV) to Health-Care and Public-Safety Workers 22, and Guideline for Environmental Infection Control in Health-Care Facilities 23 employ this terminology.

# Critical Items

Critical items confer a high risk for infection if they are contaminated with any microorganism. Thus, objects that enter sterile tissue or the vascular system must be sterile because any microbial contamination could transmit disease. This category includes surgical instruments, cardiac and urinary catheters, implants, and ultrasound probes used in sterile body cavities. Most of the items in this category should be purchased as sterile or be sterilized with steam if possible. Heat-sensitive objects can be treated with EtO or hydrogen peroxide gas plasma or, if other methods are unsuitable, with liquid chemical sterilants. Germicides categorized as chemical sterilants include ≥2.4% glutaraldehyde-based formulations, 0.95% glutaraldehyde with 1.64% phenol/phenate, 7.5% stabilized hydrogen peroxide, 7.35% hydrogen peroxide with 0.23% peracetic acid, 0.2% peracetic acid, and 0.08% peracetic acid with 1.0% hydrogen peroxide.
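Before turning to the liquid chemical sterilants in detail, the Spaulding scheme just described can be summarized as a simple lookup. In the minimal sketch below, the device examples are drawn from this guideline, but the mapping is illustrative only and is no substitute for device-specific, FDA-cleared reprocessing instructions:

```python
# Illustrative mapping of Spaulding categories to minimum processing levels.
# Device assignments follow examples in this guideline; real decisions depend
# on intended use and the manufacturer's instructions.

MINIMUM_PROCESS = {
    "critical": "sterilization",             # enters sterile tissue/vascular system
    "semicritical": "high-level disinfection",  # contacts mucous membranes/nonintact skin
    "noncritical": "low-level disinfection",    # contacts intact skin only
}

EXAMPLES = {
    "surgical instrument": "critical",
    "urinary catheter": "critical",
    "flexible endoscope": "semicritical",
    "laryngoscope blade": "semicritical",
    "blood pressure cuff": "noncritical",
    "bedside table": "noncritical",
}

def minimum_processing(device):
    category = EXAMPLES[device]
    return category, MINIMUM_PROCESS[category]

print(minimum_processing("flexible endoscope"))
# ('semicritical', 'high-level disinfection')
```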
Liquid chemical sterilants reliably produce sterility only if cleaning precedes treatment and if proper guidelines are followed regarding concentration, contact time, temperature, and pH.

# Semicritical Items

Semicritical items contact mucous membranes or nonintact skin. This category includes respiratory therapy and anesthesia equipment, some endoscopes, laryngoscope blades 24, esophageal manometry probes, cystoscopes 25, anorectal manometry catheters, and diaphragm fitting rings. These medical devices should be free from all microorganisms; however, small numbers of bacterial spores are permissible. Intact mucous membranes, such as those of the lungs and the gastrointestinal tract, generally are resistant to infection by common bacterial spores but susceptible to other organisms, such as bacteria, mycobacteria, and viruses. Semicritical items minimally require high-level disinfection using chemical disinfectants. Glutaraldehyde, hydrogen peroxide, ortho-phthalaldehyde, and peracetic acid with hydrogen peroxide are cleared by the Food and Drug Administration (FDA) and are dependable high-level disinfectants provided the factors influencing germicidal procedures are met (Table 1). When a disinfectant is selected for use with certain patient-care items, the chemical compatibility after extended use with the items to be disinfected also must be considered.

High-level disinfection traditionally is defined as complete elimination of all microorganisms in or on an instrument, except for small numbers of bacterial spores. The FDA definition of high-level disinfection is a sterilant used for a shorter contact time to achieve a 6-log10 kill of an appropriate Mycobacterium species. Cleaning followed by high-level disinfection should eliminate enough pathogens to prevent transmission of infection. 26,27

Laparoscopes and arthroscopes entering sterile tissue ideally should be sterilized between patients. However, in the United States, this equipment sometimes undergoes only high-level disinfection between patients. [28][29][30] As with flexible endoscopes, these devices can be difficult to clean and high-level disinfect or sterilize because of intricate device design (e.g., long narrow lumens, hinges). Meticulous cleaning must precede any high-level disinfection or sterilization process. Although sterilization is preferred, no reports have been published of outbreaks resulting from high-level disinfection of these scopes when they are properly cleaned and high-level disinfected. Newer models of these instruments can withstand steam sterilization, which for critical items would be preferable to high-level disinfection.

Rinsing endoscopes and flushing channels with sterile water, filtered water, or tap water will prevent adverse effects associated with disinfectant retained in the endoscope (e.g., disinfectant-induced colitis). Items can be rinsed and flushed using sterile water after high-level disinfection to prevent contamination with organisms in tap water, such as nontuberculous mycobacteria, 10,31,32 Legionella, [33][34][35] or gram-negative bacilli such as Pseudomonas. 1,17,[36][37][38] Alternatively, a tapwater or filtered water (0.2µ filter) rinse should be followed by an alcohol rinse and forced air drying. 28,[38][39][40] Forced-air drying markedly reduces bacterial contamination of stored endoscopes, most likely by removing the wet environment favorable for bacterial growth. 39
After rinsing, items should be dried and stored (e.g., packaged) in a manner that protects them from recontamination.

Some items that may come in contact with nonintact skin for a brief period of time (i.e., hydrotherapy tanks, bed side rails) are usually considered noncritical surfaces and are disinfected with intermediate-level disinfectants (i.e., phenolic, iodophor, alcohol, chlorine) 23. Since hydrotherapy tanks have been associated with spread of infection, some facilities have chosen to disinfect them with recommended levels of chlorine 23,41. In the past, high-level disinfection was recommended for mouthpieces and spirometry tubing (e.g., glutaraldehyde), but cleaning the interior surfaces of the spirometers was considered unnecessary. 42 This recommendation was based on a study showing that mouthpieces and spirometry tubing become contaminated with microorganisms but that there was no bacterial contamination of the surfaces inside the spirometers. Filters have been used to prevent contamination of this equipment distal to the filter; such filters and the proximal mouthpiece are changed between patients.

# Noncritical Items

Noncritical items are those that come in contact with intact skin but not mucous membranes. Intact skin acts as an effective barrier to most microorganisms; therefore, the sterility of items coming in contact with intact skin is "not critical." In this guideline, noncritical items are divided into noncritical patient-care items and noncritical environmental surfaces 43,44. Examples of noncritical patient-care items are bedpans, blood pressure cuffs, crutches, and computers 45. In contrast to critical and some semicritical items, most noncritical reusable items may be decontaminated where they are used and do not need to be transported to a central processing area. Virtually no risk has been documented for transmission of infectious agents to patients through noncritical items 37 when they are used as noncritical items and do not contact non-intact skin and/or mucous membranes.

Table 1 lists several low-level disinfectants that may be used for noncritical items. Most Environmental Protection Agency (EPA)-registered disinfectants have a 10-minute label claim. However, multiple investigators have demonstrated the effectiveness of these disinfectants against vegetative bacteria (e.g., Listeria, Escherichia coli, Salmonella, vancomycin-resistant Enterococci, methicillin-resistant Staphylococcus aureus), yeasts (e.g., Candida), mycobacteria (e.g., Mycobacterium tuberculosis), and viruses (e.g., poliovirus) at exposure times of 30-60 seconds [46][47][48][49][50][51][52][53][54][55][56][57][58][59][60][61][62][63][64]. Federal law requires all applicable label instructions on EPA-registered products to be followed (e.g., use-dilution, shelf life, storage, material compatibility, safe use, and disposal). If the user selects exposure conditions (e.g., exposure time) that differ from those on the EPA-registered product's label, the user assumes liability for any injuries resulting from off-label use and is potentially subject to enforcement action under the Federal Insecticide, Fungicide, and Rodenticide Act (FIFRA) 65.

Noncritical environmental surfaces include bed rails, some food utensils, bedside tables, patient furniture, and floors.
Noncritical environmental surfaces frequently touched by hand (e.g., bedside tables, bed rails) potentially could contribute to secondary transmission by contaminating hands of health-care workers or by contacting medical equipment that subsequently contacts patients 13,46-48,51,66,67. Mops and reusable cleaning cloths are regularly used to achieve low-level disinfection on environmental surfaces. However, they often are not adequately cleaned and disinfected, and if the water-disinfectant mixture is not changed regularly (e.g., after every three to four rooms, at no longer than 60-minute intervals), the mopping procedure actually can spread heavy microbial contamination throughout the health-care facility 68. In one study, standard laundering provided acceptable decontamination of heavily contaminated mopheads but chemical disinfection with a phenolic was less effective. 68 Frequent laundering of mops (e.g., daily), therefore, is recommended. Single-use disposable towels impregnated with a disinfectant also can be used for low-level disinfection when spot-cleaning of noncritical surfaces is needed 45.

# Changes in Disinfection and Sterilization Since 1981

The Table in the CDC Guideline for Environmental Control prepared in 1981 as a guide to the appropriate selection and use of disinfectants has undergone several important changes (Table 1). 15 First, formaldehyde-alcohol has been deleted as a recommended chemical sterilant or high-level disinfectant because it is irritating and toxic and not commonly used. Second, several new chemical sterilants have been added, including hydrogen peroxide, peracetic acid 58,69,70, and peracetic acid and hydrogen peroxide in combination. Third, 3% phenolics and iodophors have been deleted as high-level disinfectants because of their unproven efficacy against bacterial spores, M. tuberculosis, and/or some fungi. 55,71 Fourth, isopropyl alcohol and ethyl alcohol have been excluded as high-level disinfectants 15 because of their inability to inactivate bacterial spores and because of the inability of isopropyl alcohol to inactivate hydrophilic viruses (i.e., poliovirus, coxsackie virus). 72 Fifth, a 1:16 dilution of 2.0% glutaraldehyde-7.05% phenol-1.20% sodium phenate (which contained 0.125% glutaraldehyde, 0.440% phenol, and 0.075% sodium phenate when diluted) has been deleted as a high-level disinfectant because this product was removed from the marketplace in December 1991 because of a lack of bactericidal activity in the presence of organic matter; a lack of fungicidal, tuberculocidal and sporicidal activity; and reduced virucidal activity. 49,55,56,71,[73][74][75][76][77][78][79] Sixth, the exposure time required to achieve high-level disinfection has been changed from 10-30 minutes to 12 minutes or more depending on the FDA-cleared label claim and the scientific literature. 27,55,69,76,[80][81][82][83][84] A glutaraldehyde and an ortho-phthalaldehyde have an FDA-cleared label claim of 5 minutes when used at 35°C and 25°C, respectively, in an automated endoscope reprocessor with FDA-cleared capability to maintain the solution at the appropriate temperature. 85

In addition, many new subjects have been added to the guideline.
These include inactivation of emerging pathogens, bioterrorist agents, and bloodborne pathogens; toxicologic, environmental, and occupational concerns associated with disinfection and sterilization practices; disinfection of patient-care equipment used in ambulatory and home care; inactivation of antibiotic-resistant bacteria; new sterilization processes, such as hydrogen peroxide gas plasma and liquid peracetic acid; and disinfection of complex medical instruments (e.g., endoscopes).

# Disinfection of Healthcare Equipment

# Concerns about Implementing the Spaulding Scheme

One problem with implementing the aforementioned scheme is oversimplification. For example, the scheme does not consider problems with reprocessing of complicated medical equipment that often is heat-sensitive or problems of inactivating certain types of infectious agents (e.g., prions, such as Creutzfeldt-Jakob disease [CJD] agent). Thus, in some situations, choosing a method of disinfection remains difficult, even after consideration of the categories of risk to patients. This is true particularly for a few medical devices (e.g., arthroscopes, laparoscopes) in the critical category because of controversy about whether they should be sterilized or high-level disinfected. 28,86 Heat-stable scopes (e.g., many rigid scopes) should be steam sterilized. Some of these items cannot be steam sterilized because they are heat-sensitive; additionally, sterilization using ethylene oxide (EtO) can be too time-consuming for routine use between patients (new technologies, such as hydrogen peroxide gas plasma and the peracetic acid reprocessor, provide faster cycle times). However, evidence that sterilization of these items improves patient care by reducing the infection risk is lacking 29,[87][88][89][90][91]. Many newer models of these instruments can withstand steam sterilization, which for critical items is the preferred method.

Another problem with implementing the Spaulding scheme is processing of an instrument in the semicritical category (e.g., endoscope) that would be used in conjunction with a critical instrument that contacts sterile body tissues. For example, is an endoscope used for upper gastrointestinal tract investigation still a semicritical item when used with sterile biopsy forceps or in a patient who is bleeding heavily from esophageal varices? Provided that high-level disinfection is achieved, and all microorganisms except bacterial spores have been removed from the endoscope, the device should not represent an infection risk and should remain in the semicritical category [92][93][94]. Infection with spore-forming bacteria has not been reported from appropriately high-level disinfected endoscopes.

An additional problem with implementation of the Spaulding system is that the optimal contact time for high-level disinfection has not been defined or varies among professional organizations, resulting in different strategies for disinfecting different types of semicritical items (e.g., endoscopes, applanation tonometers, endocavitary transducers, cryosurgical instruments, and diaphragm fitting rings). Until simpler and effective alternatives are identified for device disinfection in clinical settings, following this guideline, other CDC guidelines 1,22,95,96, and FDA-cleared instructions for the liquid chemical sterilants/high-level disinfectants would be prudent.

# Reprocessing of Endoscopes

Physicians use endoscopes to diagnose and treat numerous medical disorders.
Even though endoscopes represent a valuable diagnostic and therapeutic tool in modern medicine and the incidence of infection associated with their use reportedly is very low (about 1 in 1.8 million procedures) 97, more healthcare-associated outbreaks have been linked to contaminated endoscopes than to any other medical device 6-8,12,98. To prevent the spread of health-care-associated infections, all heat-sensitive endoscopes (e.g., gastrointestinal endoscopes, bronchoscopes, nasopharyngoscopes) must be properly cleaned and, at a minimum, subjected to high-level disinfection after each use. High-level disinfection can be expected to destroy all microorganisms, although when high numbers of bacterial spores are present, a few spores might survive.

Because of the types of body cavities they enter, flexible endoscopes acquire high levels of microbial contamination (bioburden) during each use 99. For example, the bioburden found on flexible gastrointestinal endoscopes after use has ranged from 10^5 colony-forming units (CFU)/mL to 10^10 CFU/mL, with the highest levels found in the suction channels [99][100][101][102]. The average load on bronchoscopes before cleaning was 6.4 x 10^4 CFU/mL. Cleaning reduces the level of microbial contamination by 4-6 log10 83,103. Using human immunodeficiency virus (HIV)-contaminated endoscopes, several investigators have shown that cleaning completely eliminates the microbial contamination on the scopes 104,105. Similarly, other investigators found that EtO sterilization or soaking in 2% glutaraldehyde for 20 minutes was effective only when the device first was properly cleaned 106.

FDA maintains a list of cleared liquid chemical sterilants and high-level disinfectants that can be used to reprocess heat-sensitive medical devices, such as flexible endoscopes [This link is no longer active: http://www.fda.gov/cdrh/ode/germlab.html. The current version of this document may differ from the original version: FDA-Cleared Sterilants and High Level Disinfectants with General Claims for Processing Reusable Medical and Dental Devices - March 2015] (http://www.fda.gov/MedicalDevices/DeviceRegulationandGuidance/ReprocessingofReusableMedicalDevices/ucm437347.htm). At this time, the FDA-cleared and marketed formulations include: ≥2.4% glutaraldehyde, 0.55% ortho-phthalaldehyde (OPA), 0.95% glutaraldehyde with 1.64% phenol/phenate, 7.35% hydrogen peroxide with 0.23% peracetic acid, 1.0% hydrogen peroxide with 0.08% peracetic acid, and 7.5% hydrogen peroxide 85. These products have excellent antimicrobial activity; however, some oxidizing chemicals (e.g., 7.5% hydrogen peroxide, and 1.0% hydrogen peroxide with 0.08% peracetic acid [the latter product is no longer marketed]) reportedly have caused cosmetic and functional damage to endoscopes 69. Users should check with device manufacturers for information about germicide compatibility with their device. If the germicide is FDA-cleared, then it is safe when used according to label directions; however, professionals should review the scientific literature for newly available data regarding human safety or materials compatibility. EtO sterilization of flexible endoscopes is infrequent because it requires a lengthy processing and aeration time (e.g., 12 hours) and is a potential hazard to staff and patients. The two products most commonly used for reprocessing endoscopes in the United States are glutaraldehyde and an automated, liquid chemical sterilization process that uses peracetic acid 107.
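The bioburden and cleaning figures above imply simple log arithmetic. A worked example, starting from the worst-case figure quoted in the text:

```latex
% Worst-case GI endoscope bioburden (10^10 CFU/mL) after a 4-6 log10 cleaning step:
\[ 10^{10} \times 10^{-4\ \mathrm{to}\ -6} = 10^{4}\ \mathrm{to}\ 10^{6}\ \mathrm{CFU/mL} \]
% High-level disinfection must then inactivate the organisms that remain,
% which is why meticulous cleaning before disinfection is essential.
```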
The American Society for Gastrointestinal Endoscopy (ASGE) recommends glutaraldehyde solutions that do not contain surfactants because the soapy residues of surfactants are difficult to remove during rinsing 108. Ortho-phthalaldehyde (OPA) has begun to replace glutaraldehyde in many health-care facilities because it has several potential advantages over glutaraldehyde: it is not known to irritate the eyes and nasal passages, does not require activation or exposure monitoring, and has a 12-minute high-level disinfection claim in the United States 69. Disinfectants that are not FDA-cleared and should not be used for reprocessing endoscopes include iodophors, chlorine solutions, alcohols, quaternary ammonium compounds, and phenolics. These solutions might still be in use outside the United States, but their use should be strongly discouraged because of lack of proven efficacy against all microorganisms or materials incompatibility.

FDA clearance of the contact conditions listed on germicide labeling is based on the manufacturer's test results [This link is no longer active: http://www.fda.gov/cdrh/ode/germlab.html. The current version of this document may differ from the original version: FDA-Cleared Sterilants and High Level Disinfectants with General Claims for Processing Reusable Medical and Dental Devices - March 2015 (http://www.fda.gov/MedicalDevices/DeviceRegulationandGuidance/ReprocessingofReusableMedicalDevices/ucm437347.htm)]. Manufacturers test the product under worst-case conditions for germicide formulation (i.e., minimum recommended concentration of the active ingredient) and include organic soil. Typically, manufacturers use 5% serum as the organic soil and hard water as examples of organic and inorganic challenges. The soil represents the organic loading to which the device is exposed during actual use and that would remain on the device in the absence of cleaning. This method ensures that the contact conditions completely eliminate the test mycobacteria (e.g., 10^5 to 10^6 Mycobacterium tuberculosis in organic soil and dried on a scope) if inoculated in the most difficult areas for the disinfectant to penetrate and contact in the absence of cleaning and thus provides a margin of safety 109. For example, 2.4% glutaraldehyde requires a 45-minute immersion at 25°C to achieve high-level disinfection (i.e., 100% kill of M. tuberculosis). FDA itself does not conduct testing but relies solely on the disinfectant manufacturer's data.

Data suggest that M. tuberculosis levels can be reduced by at least 8 log10 with cleaning (4 log10) 83,101,102,110, followed by chemical disinfection for 20 minutes at 20°C (4 to 6 log10) 83,93,111,112. On the basis of these data, APIC 113, the Society of Gastroenterology Nurses and Associates (SGNA) 38,114,115, the ASGE 108, the American College of Chest Physicians 12, and a multi-society guideline 116 recommend alternative contact conditions with 2% glutaraldehyde to achieve high-level disinfection (e.g., that equipment be immersed in 2% glutaraldehyde at 20°C for at least 20 minutes for high-level disinfection). Federal regulations require users to follow the FDA-cleared label claim for high-level disinfectants. The FDA-cleared labels for high-level disinfection with >2% glutaraldehyde at 25°C range from 20-90 minutes, depending upon the product, based on three-tier testing that includes AOAC sporicidal tests, simulated-use testing with mycobacteria, and in-use testing.
The studies supporting the efficacy of >2% glutaraldehyde for 20 minutes at 20°C assume adequate cleaning prior to disinfection, whereas the FDA-cleared label claim incorporates an added margin of safety to accommodate possible lapses in cleaning practices. Facilities that have chosen to apply the 20-minute duration at 20°C have done so based on the IA recommendation in the July 2003 SHEA position paper, "Multi-society Guideline for Reprocessing Flexible Gastrointestinal Endoscopes" 19,57,83,94,108,111,[116][117][118][119][120][121].

Flexible GI Endoscope Reprocessing Update [June 2011]: Multisociety guideline on reprocessing flexible gastrointestinal endoscopes: 2011 (http://www.asge.org/uploadedFiles/Publications_and_Products/Practice_Guidelines/Multisociety%20guideline%20on%20reprocessing%20flexible%20gastrointestinal.pdf).

Flexible endoscopes are particularly difficult to disinfect 122 and easy to damage because of their intricate design and delicate materials. 123 Meticulous cleaning must precede any sterilization or high-level disinfection of these instruments. Failure to perform good cleaning can result in sterilization or disinfection failure, and outbreaks of infection can occur. Several studies have demonstrated the importance of cleaning in experimental studies with the duck hepatitis B virus (HBV) 106,124, HIV 125, and Helicobacter pylori 126.

An examination of health-care-associated infections related only to endoscopes through July 1992 found 281 infections transmitted by gastrointestinal endoscopy and 96 transmitted by bronchoscopy. The clinical spectrum ranged from asymptomatic colonization to death. Salmonella species and Pseudomonas aeruginosa repeatedly were identified as causative agents of infections transmitted by gastrointestinal endoscopy, and M. tuberculosis, atypical mycobacteria, and P. aeruginosa were the most common causes of infections transmitted by bronchoscopy 12. Major reasons for transmission were inadequate cleaning, improper selection of a disinfecting agent, failure to follow recommended cleaning and disinfection procedures 6,8,37,98, and flaws in endoscope design 127,128 or automated endoscope reprocessors 7,98. Failure to follow established guidelines has continued to result in infections associated with gastrointestinal endoscopes 8 and bronchoscopes 7,12. Potential device-associated problems should be reported to the FDA Center for Devices and Radiological Health. One multistate investigation found that 23.9% of the bacterial cultures from the internal channels of 71 gastrointestinal endoscopes grew ≥100,000 colonies of bacteria after completion of all disinfection and sterilization procedures (nine of 25 facilities were using a product that had been removed from the marketplace [six facilities using 1:16 glutaraldehyde phenate], a product not FDA-cleared as a high-level disinfectant [an iodophor], or no disinfecting agent) and before use on the next patient 129. The incidence of postendoscopic procedure infections from an improperly processed endoscope has not been rigorously assessed.

Automated endoscope reprocessors (AERs) offer several advantages over manual reprocessing: they automate and standardize several important reprocessing steps [130][131][132], reduce the likelihood that an essential reprocessing step will be skipped, and reduce personnel exposure to high-level disinfectants or chemical sterilants.
Failure of AERs has been linked to outbreaks of infections 133 or colonization 7,134, and the AER water filtration system might not be able to reliably provide "sterile" or bacteria-free rinse water 135,136. Establishment of correct connectors between the AER and the device is critical to ensure complete flow of disinfectants and rinse water 7,137. In addition, some endoscopes such as the duodenoscopes (e.g., endoscopic retrograde cholangiopancreatography [ERCP]) contain features (e.g., elevator-wire channel) that require a flushing pressure that is not achieved by most AERs and must be reprocessed manually using a 2- to 5-mL syringe, until new duodenoscopes equipped with a wider elevator-channel that AERs can reliably reprocess become available 132. Outbreaks involving removable endoscope parts 138,139 such as suction valves, and endoscopic accessories designed to be inserted through flexible endoscopes such as biopsy forceps, emphasize the importance of cleaning to remove all foreign matter before high-level disinfection or sterilization. 140 Some types of valves are now available as single-use, disposable products (e.g., bronchoscope valves) or steam-sterilizable products (e.g., gastrointestinal endoscope valves).

AERs need further development and redesign 7,141, as do endoscopes 123,142, so that they do not represent a potential source of infectious agents. Endoscopes employing disposable components (e.g., protective barrier devices or sheaths) might provide an alternative to conventional liquid chemical high-level disinfection/sterilization 143,144. Another new technology is a swallowable camera-in-a-capsule that travels through the digestive tract and transmits color pictures of the small intestine to a receiver worn outside the body. This capsule currently does not replace colonoscopies.

Published recommendations for cleaning and disinfecting endoscopic equipment should be strictly followed 12,38,108,[113][114][115][116][145][146][147][148]. Unfortunately, audits have shown that personnel do not consistently adhere to guidelines on reprocessing [149][150][151], and outbreaks of infection continue to occur. [152][153][154] To ensure reprocessing personnel are properly trained, each person who reprocesses endoscopic instruments should receive initial and annual competency testing 38,155.

In general, endoscope disinfection or sterilization with a liquid chemical sterilant involves five steps after leak testing (a sketch of the full sequence follows this list):
1. Clean: mechanically clean internal and external surfaces, including brushing internal channels and flushing each internal channel with water and a detergent or enzymatic cleaner (leak testing is recommended for endoscopes before immersion).
2. Disinfect: immerse the endoscope in high-level disinfectant (or chemical sterilant) and perfuse disinfectant into all accessible channels, such as the suction/biopsy channel and air/water channel (this eliminates air pockets and ensures contact of the germicide with the internal channels), and expose for a time recommended for the specific product.
3. Rinse: rinse the endoscope and all channels with sterile water, filtered water (commonly used with AERs), or tap water (i.e., high-quality potable water that meets federal clean water standards at the point of use).
4. Dry: rinse the insertion tube and inner channels with alcohol, and dry with forced air after disinfection and before storage.
5. Store: store the endoscope in a way that prevents recontamination and promotes drying (e.g., hung vertically).
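The five-step sequence above can be represented as an ordered checklist; in this minimal sketch the step names come from this guideline, while the data structure and function are illustrative only:

```python
# Ordered endoscope reprocessing checklist, following the five steps above
# (plus the preceding leak test). Structure and names are illustrative.

REPROCESSING_STEPS = (
    ("leak test", "performed before immersion"),
    ("clean", "brush and flush all channels with water and detergent/enzymatic cleaner"),
    ("disinfect", "immerse and perfuse all channels with high-level disinfectant "
                  "for the product's cleared exposure time"),
    ("rinse", "sterile, filtered, or high-quality potable tap water through all channels"),
    ("dry", "alcohol rinse of insertion tube and channels, then forced air"),
    ("store", "hung in a way that prevents recontamination and promotes drying"),
)

def next_step(completed):
    """Return the first step not yet completed, enforcing the order."""
    for name, detail in REPROCESSING_STEPS:
        if name not in completed:
            return name, detail
    return None  # fully reprocessed

print(next_step({"leak test", "clean"}))
# ('disinfect', 'immerse and perfuse all channels ...')
```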
Drying the endoscope (steps 3 and 4) is essential to greatly reduce the chance of recontamination of the endoscope by microorganisms that can be present in the rinse water 116,156. One study demonstrated that reprocessed endoscopes (i.e., air/water channel, suction/biopsy channel) generally were negative (100% after 24 hours; 90% after 7 days [1 CFU of coagulase-negative Staphylococcus in one channel]) for bacterial growth when stored by hanging vertically in a ventilated cabinet 157. Other investigators found all endoscopes were bacteria-free immediately after high-level disinfection, and only four of 135 scopes were positive during the subsequent 5-day assessment (skin bacteria cultured from endoscope surfaces). All flush-through samples remained sterile 158. Because tapwater can contain low levels of microorganisms 159, some researchers have suggested that only sterile water (which can be prohibitively expensive) 160 or AER filtered water be used. The suggestion to use only sterile water or filtered water is not consistent with published guidelines that allow tapwater with an alcohol rinse and forced air-drying 38,108,113 or with the scientific literature. 39,93 In addition, no evidence of disease transmission has been found when a tap water rinse is followed by an alcohol rinse and forced-air drying. AERs produce filtered water by passage through a bacterial filter (e.g., 0.2 µ). Filtered rinse water was identified as a source of bacterial contamination in a study that cultured the accessory and suction channels of endoscopes and the internal chambers of AERs during 1996-2001 and reported that 8.7% of samples collected during 1996-1998 had bacterial growth, with 54% being Pseudomonas species. After a system of hot water flushing of the piping (60°C for 60 minutes daily) was introduced, the frequency of positive cultures fell to approximately 2%, with only rare isolation of >10 CFU/mL 161.

In addition to the endoscope reprocessing steps, a protocol should be developed that ensures the user knows whether an endoscope has been appropriately cleaned and disinfected (e.g., using a room or cabinet for processed endoscopes only) or has not been reprocessed. When users leave endoscopes on movable carts, confusion can result about whether the endoscope has been processed. Although one guideline recommended endoscopes (e.g., duodenoscopes) be reprocessed immediately before use 147, other guidelines do not require this activity 38,108,115, and, except for the Association of periOperative Registered Nurses (AORN), professional organizations do not recommend that reprocessing be repeated as long as the original processing is done correctly. As part of a quality assurance program, health-care facility personnel can consider random bacterial surveillance cultures of processed endoscopes to ensure high-level disinfection or sterilization 7,[162][163][164]. Reprocessed endoscopes should be free of microbial pathogens except for small numbers of relatively avirulent microbes that represent exogenous environmental contamination (e.g., coagulase-negative Staphylococcus, Bacillus species, diphtheroids). Although recommendations exist for the final rinse water used during endoscope reprocessing to be microbiologically cultured at least monthly 165, a microbiologic standard has not been set, and the value of routine endoscope cultures has not been shown 166.
In addition, neither the routine culture of reprocessed endoscopes nor the final rinse water has been validated by correlating viable counts on an endoscope to infection after an endoscopic procedure. If reprocessed endoscopes were cultured, sampling the endoscope would assess water quality and other important steps (e.g., disinfectant effectiveness, exposure time, cleaning) in the reprocessing procedure. A number of methods for sampling endoscopes and water have been described 23,157,161,163,167,168. Novel approaches (e.g., detection of adenosine triphosphate [ATP]) to evaluate the effectiveness of endoscope cleaning 169,170 or endoscope reprocessing 171 also have been evaluated, but no method has been established as a standard for assessing the outcome of endoscope reprocessing.

The carrying case used to transport clean and reprocessed endoscopes outside the health-care environment should not be used to store an endoscope or to transport the instrument within the health-care facility. A contaminated endoscope should never be placed in the carrying case because the case can also become contaminated. When the endoscope is removed from the case, properly reprocessed, and put back in the case, the case could recontaminate the endoscope. A contaminated carrying case should be discarded (Olympus America, June 2002, written communication).

Infection-control professionals should ensure that institutional policies are consistent with national guidelines and conduct infection-control rounds periodically (e.g., at least annually) in areas where endoscopes are reprocessed to ensure policy compliance. Breaches in policy should be documented and corrective action instituted. In incidents in which endoscopes were not exposed to a high-level disinfection process, patients exposed to potentially contaminated endoscopes have been assessed for possible acquisition of HIV, HBV, and hepatitis C virus (HCV). A 14-step method for managing a failure incident associated with high-level disinfection or sterilization has been described (Rutala WA, 2006). The possible transmission of bloodborne and other infectious agents highlights the importance of rigorous infection control 172,173.

# Laparoscopes and Arthroscopes

Although high-level disinfection appears to be the minimum standard for processing laparoscopes and arthroscopes between patients 28,86,174,175, this practice continues to be debated 89,90,176. However, neither side in the high-level disinfection versus sterilization debate has sufficient data on which to base its conclusions. Proponents of high-level disinfection refer to membership surveys 29 or institutional experiences 87 involving more than 117,000 and 10,000 laparoscopic procedures, respectively, that cite a low risk for infection (<0.3%) when high-level disinfection is used for gynecologic laparoscopic equipment. Only one infection in the membership survey was linked to spores. In addition, growth of common skin microorganisms (e.g., Staphylococcus epidermidis, diphtheroids) has been documented from the umbilical area even after skin preparation with povidone-iodine and ethyl alcohol. Similar organisms were recovered in some instances from the pelvic serosal surfaces or from the laparoscopic telescopes, suggesting that the microorganisms probably were carried from the skin into the peritoneal cavity 177,178. Proponents of sterilization focus on the possibility of transmitting infection by spore-forming organisms.
Researchers have proposed several reasons why sterility was not necessary for all laparoscopic equipment: only a limited number of organisms (usually ≤10) are introduced into the peritoneal cavity during laparoscopy; minimal damage is done to inner abdominal structures with little devitalized tissue; the peritoneal cavity tolerates small numbers of spore-forming bacteria; equipment is simple to clean and disinfect; surgical sterility is relative; the natural bioburden on rigid lumened devices is low 179; and no evidence exists that high-level disinfection instead of sterilization increases the risk for infection 87,89,90. With the advent of laparoscopic cholecystectomy, concern about high-level disinfection is justifiable because the degree of tissue damage and bacterial contamination is greater than with laparoscopic procedures in gynecology. Failure to completely disassemble, clean, and high-level disinfect laparoscope parts has led to infections in patients 180. Data from one study suggested that disassembly, cleaning, and proper reassembly of laparoscopic equipment used in gynecologic procedures before steam sterilization presents no risk for infection 181.

As with laparoscopes and other equipment that enter sterile body sites, arthroscopes ideally should be sterilized before use. Older studies demonstrated that these instruments were commonly (57%) only high-level disinfected in the United States 28,86. A later survey (with a response rate of only 5%) reported that high-level disinfection was used by 31% of health-care facilities and a sterilization process in the remainder 30. High-level disinfection rather than sterilization presumably has been used because the incidence of infection is low and the few infections identified probably are unrelated to the use of high-level disinfection rather than sterilization. A retrospective study of 12,505 arthroscopic procedures found an infection rate of 0.04% (five infections) when arthroscopes were soaked in 2% glutaraldehyde for 15-20 minutes. Four infections were caused by S. aureus; the fifth was an anaerobic streptococcal infection 88. Because these organisms are very susceptible to high-level disinfectants, such as 2% glutaraldehyde, the infections most likely originated from the patient's skin. Two cases of Clostridium perfringens arthritis have been reported when the arthroscope was disinfected with glutaraldehyde for an exposure time that is not effective against spores 182,183.

Although only limited data are available, the evidence does not demonstrate that high-level disinfection of arthroscopes and laparoscopes poses an infection risk to the patient. For example, a prospective study that compared the reprocessing of arthroscopes and laparoscopes (per 1,000 procedures) with EtO sterilization to high-level disinfection with glutaraldehyde found no statistically significant difference in infection risk between the two methods (i.e., EtO, 7.5/1,000 procedures; glutaraldehyde, 2.5/1,000 procedures) 89. Although the debate over high-level disinfection versus sterilization of laparoscopes and arthroscopes will remain unsettled until well-designed, randomized clinical trials are published, this guideline should be followed 1,17. That is, laparoscopes, arthroscopes, and other scopes that enter normally sterile tissue should be sterilized before each use; if this is not feasible, they should receive at least high-level disinfection.
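The 0.04% infection rate in the retrospective arthroscopy study cited above is simple arithmetic to verify:

```latex
% Five infections in 12,505 arthroscopic procedures:
\[ \frac{5}{12\,505} \approx 4.0 \times 10^{-4} = 0.04\% \]
```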
# Tonometers, Cervical Diaphragm Fitting Rings, Cryosurgical Instruments, and Endocavitary Probes

Disinfection strategies vary widely for other semicritical items (e.g., applanation tonometers, rectal/vaginal probes, cryosurgical instruments, and diaphragm fitting rings). FDA requests that device manufacturers include at least one validated cleaning and disinfection/sterilization protocol in the labeling for their devices. As with all medications and devices, users should be familiar with the label instructions. One study revealed that no uniform technique was in use for disinfection of applanation tonometers, with disinfectant contact times varying from <15 seconds to 20 minutes 28 . In view of the potential for transmission of viruses (e.g., herpes simplex virus [HSV], adenovirus 8, or HIV) 184 by tonometer tips, CDC recommended that tonometer tips be wiped clean and disinfected for 5-10 minutes with either 3% hydrogen peroxide, 5,000 ppm chlorine, 70% ethyl alcohol, or 70% isopropyl alcohol 95 . However, more recent data suggest that 3% hydrogen peroxide and 70% isopropyl alcohol are not effective against adenovirus capable of causing epidemic keratoconjunctivitis and similar viruses and should not be used for disinfecting applanation tonometers 49,185,186 . Structural damage to Schiotz tonometers has been observed with 1:10 sodium hypochlorite (5,000 ppm chlorine) and with 3% hydrogen peroxide 187 . After disinfection, the tonometer should be thoroughly rinsed in tap water and air dried before use. Although these disinfectants and exposure times should kill pathogens that can infect the eyes, no studies directly support this 188,189 . The guidelines of the American Academy of Ophthalmology for preventing infections in ophthalmology focus on only one potential pathogen: HIV 190 . Because a short and simple decontamination procedure is desirable in the clinical setting, swabbing the tonometer tip with a 70% isopropyl alcohol wipe sometimes is practiced 189 . Preliminary reports suggest that wiping the tonometer tip with an alcohol swab and then allowing the alcohol to evaporate might be effective in eliminating HSV, HIV, and adenovirus 189,191,192 . However, because these studies involved only a few replicates and were conducted in a controlled laboratory setting, further studies are needed before this technique can be recommended. In addition, two reports have found that disinfection of pneumotonometer tips between uses with a 70% isopropyl alcohol wipe contributed to outbreaks of epidemic keratoconjunctivitis caused by adenovirus type 8 193,194 .

Limited studies have evaluated disinfection techniques for other items that contact mucous membranes, such as diaphragm fitting rings, cryosurgical probes, transesophageal echocardiography probes 195 , flexible cystoscopes 196 , or vaginal/rectal probes used in sonographic scanning. Lettau, Bond, and McDougal of CDC supported the recommendation of a diaphragm fitting ring manufacturer that involved using a soap-and-water wash followed by a 15-minute immersion in 70% alcohol 96 . This disinfection method should be adequate to inactivate HIV, HBV, and HSV even though alcohols are not classified as high-level disinfectants because their activity against picornaviruses is somewhat limited 72 . No data are available regarding inactivation of human papillomavirus (HPV) by alcohol or other disinfectants because in vitro replication of complete virions has not been achieved.
Thus, even though alcohol for 15 minutes should kill pathogens of relevance in gynecology, no clinical studies directly support this practice.

Vaginal probes are used in sonographic scanning. A vaginal probe and all endocavitary probes without a probe cover are semicritical devices because they have direct contact with mucous membranes (e.g., vagina, rectum, pharynx). Although use of a probe cover could be considered as changing the category, this guideline proposes use of a new condom/probe cover for each patient; because condoms/probe covers can fail 195,197-199 , the probe also should be high-level disinfected. The relevance of this recommendation is reinforced by the finding that sterile transvaginal ultrasound probe covers have a very high rate of perforations even before use (0%, 25%, and 65% perforations from three suppliers) 199 . One study found a very high rate of perforations in endovaginal probe covers from two suppliers after use for oocyte retrieval (75% and 81%) 199 ; other studies demonstrated a lower rate of perforations after use of condoms (2.0% and 0.9%) 197,200 . Condoms also have been found superior to commercially available probe covers for covering the ultrasound probe (1.7% leakage for condoms versus 8.3% for probe covers) 201 . These studies underscore the need for routine probe disinfection between examinations.
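The perforation rates above translate directly into the number of examinations in which the barrier is breached. The following is a minimal sketch using the rates cited in the text; the examination volume is a hypothetical figure chosen only for illustration.

```python
# Minimal sketch: expected number of exams with a breached barrier, using the
# perforation rates cited above. The exam volume is HYPOTHETICAL.
perforation_rates = {
    "probe cover, supplier A (after oocyte retrieval)": 0.75,
    "probe cover, supplier B (after oocyte retrieval)": 0.81,
    "condom, study 1 (after use)": 0.020,
    "condom, study 2 (after use)": 0.009,
}

exams = 1_000  # hypothetical annual endocavitary exam volume
for barrier, rate in perforation_rates.items():
    print(f"{barrier}: ~{rate * exams:.0f} of {exams} exams expose the probe")
```

Even at the lowest condom failure rate reported (0.9%), roughly nine exams per thousand would expose the probe directly to mucous membranes, which is why high-level disinfection is recommended regardless of cover use.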
Although most ultrasound manufacturers recommend use of 2% glutaraldehyde for high-level disinfection of contaminated transvaginal transducers, the use of this agent has been questioned 202 because it might shorten the life of the transducer and might have toxic effects on the gametes and embryos 203 . An alternative procedure for disinfecting the vaginal transducer involves the mechanical removal of the gel from the transducer, cleaning the transducer in soap and water, wiping the transducer with 70% alcohol or soaking it for 2 minutes in 500 ppm chlorine, and rinsing with tap water and air drying 204 . The effectiveness of this and other methods 200 has not been validated in either rigorous laboratory experiments or in clinical use. High-level disinfection with a product (e.g., hydrogen peroxide) that is not toxic to staff, patients, probes, and retrieved cells should be used until the effectiveness of alternative procedures against microbes of importance at the cavitary site is demonstrated by well-designed experimental scientific studies. Other probes, such as rectal, cryosurgical, and transesophageal probes or devices, also should be high-level disinfected between patients.

Ultrasound probes used during surgical procedures also can contact sterile body sites. These probes can be covered with a sterile sheath to reduce the level of contamination on the probe and reduce the risk for infection. However, because the sheath does not completely protect the probe, the probes should be sterilized between each patient use as with other critical items. If this is not possible, at a minimum the probe should be high-level disinfected and covered with a sterile probe cover.

Some cryosurgical probes are not fully immersible. During reprocessing, the tip of the probe should be immersed in a high-level disinfectant for the appropriate time; any other portion of the probe that could have mucous membrane contact can be disinfected by immersion or by wrapping with a cloth soaked in a high-level disinfectant to allow the recommended contact time. After disinfection, the probe should be rinsed with tap water and dried before use. Health-care facilities that use nonimmersible probes should replace them as soon as possible with fully immersible probes.

As with other high-level disinfection procedures, proper cleaning of probes is necessary to ensure the success of the subsequent disinfection 205 . One study demonstrated that vegetative bacteria inoculated on vaginal ultrasound probes decreased when the probes were cleaned with a towel 206 . No information is available about either the level of contamination of such probes by potential viral pathogens such as HBV and HPV or their removal by cleaning (such as with a towel). Because these pathogens might be present in vaginal and rectal secretions and contaminate probes during use, high-level disinfection of the probes after such use is recommended.

# Dental Instruments

Scientific articles and increased publicity about the potential for transmitting infectious agents in dentistry have focused attention on dental instruments as possible agents for pathogen transmission 207,208 . The American Dental Association recommends that surgical and other instruments that normally penetrate soft tissue or bone (e.g., extraction forceps, scalpel blades, bone chisels, periodontal scalers, and surgical burs) be classified as critical devices that should be sterilized after each use or discarded. Instruments not intended to penetrate oral soft tissues or bone (e.g., amalgam condensers and air/water syringes) but that could contact oral tissues are classified as semicritical, but sterilization after each use is recommended if the instruments are heat-tolerant 43,209 . If a semicritical item is heat-sensitive, it should, at a minimum, be processed with high-level disinfection 43,210 . Handpieces can be contaminated internally with patient material and should be heat sterilized after each patient. Handpieces that cannot be heat sterilized should not be used 211 . Methods of sterilization that can be used for critical or semicritical dental instruments and materials that are heat-stable include steam under pressure (autoclave), chemical (formaldehyde) vapor, and dry heat (e.g., 320°F for 2 hours). Dental professionals most commonly use the steam sterilizer 212 . All three sterilization procedures can damage some dental instruments, including steam-sterilized handpieces 213 . Heat-tolerant alternatives are available for most clinical dental applications and are preferred 43 .

CDC has divided noncritical surfaces in dental offices into clinical contact and housekeeping surfaces 43 . Clinical contact surfaces are surfaces that might be touched frequently with gloved hands during patient care or that might become contaminated with blood or other potentially infectious material and subsequently contact instruments, hands, gloves, or devices (e.g., light handles, switches, dental X-ray equipment, chair-side computers). Barrier protective coverings (e.g., clear plastic wraps) can be used for these surfaces, particularly those that are difficult to clean (e.g., light handles, chair switches). The coverings should be changed when visibly soiled or damaged and routinely (e.g., between patients). Protected surfaces should be disinfected at the end of each day or if contamination is evident. If not barrier-protected, these surfaces should be disinfected between patients with an intermediate-level disinfectant (i.e., EPA-registered hospital disinfectant with a tuberculocidal claim) or low-level disinfectant (i.e., EPA-registered hospital disinfectant with an HBV and HIV label claim) 43,214,215 .
Most housekeeping surfaces need to be cleaned only with a detergent and water or an EPA-registered hospital disinfectant, depending on the nature of the surface and the type and degree of contamination. When housekeeping surfaces are visibly contaminated by blood or body substances, however, prompt removal and surface disinfection is a sound infection-control practice and is required by the Occupational Safety and Health Administration (OSHA) 43,214 .

Several studies have demonstrated variability among dental practices in meeting these recommendations 216,217 . For example, 68% of respondents believed they were sterilizing their instruments but did not use appropriate chemical sterilants or exposure times, and 49% of respondents did not challenge autoclaves with biological indicators 216 . Other investigators using biologic indicators have found a high proportion (15%-65%) of positive spore tests after assessing the efficacy of sterilizers used in dental offices. In one study of Minnesota dental offices, operator error, rather than mechanical malfunction 218 , caused 87% of sterilization failures. Common factors in the improper use of sterilizers include chamber overload, low temperature setting, inadequate exposure time, failure to preheat the sterilizer, and interruption of the cycle. Mail-return sterilization monitoring services use spore strips to test sterilizers in dental clinics, but the delay caused by mailing to the test laboratory could potentially cause false-negative results. Studies revealed, however, that the post-sterilization time and temperature after a 7-day delay had no influence on the test results 219 . Delays (7 days at 27°C and 37°C, 3-day mail delay) did not cause any predictable pattern of inaccurate spore tests 220 .

# Disinfection of HBV-, HCV-, HIV-, or TB-Contaminated Devices

The CDC recommendation for high-level disinfection of HBV-, HCV-, HIV-, or TB-contaminated devices is appropriate because experiments have demonstrated the effectiveness of high-level disinfectants in inactivating these and other pathogens that might contaminate semicritical devices 61,62,73,81,105,121,125,221-238 . Nonetheless, some healthcare facilities have modified their disinfection procedures when endoscopes are used with a patient known or suspected to be infected with HBV, HIV, or M. tuberculosis 28,239 . This practice is inconsistent with the concept of Standard Precautions, which presumes all patients are potentially infected with bloodborne pathogens 228 . Several studies have highlighted the inability to distinguish HBV- or HIV-infected patients from noninfected patients on clinical grounds 240-242 . In addition, mycobacterial infection is unlikely to be clinically apparent in many patients. In most instances, hospitals that altered their disinfection procedure used EtO sterilization on the endoscopic instruments because they believed this practice reduced the risk for infection 28,239 . EtO is not routinely used for endoscope sterilization because of the lengthy processing time. Endoscopes and other semicritical devices should be managed the same way regardless of whether the patient is known to be infected with HBV, HCV, HIV, or M. tuberculosis. An evaluation of a manual disinfection procedure to eliminate HCV from experimentally contaminated endoscopes provided some evidence that cleaning and 2% glutaraldehyde for 20 minutes should prevent transmission 236 .
A study that used experimentally contaminated hysteroscopes detected HCV by polymerase chain reaction (PCR) in one (3%) of 34 samples after cleaning with a detergent, but no samples were positive after treatment with a 2% glutaraldehyde solution for 20 minutes 120 . Another study demonstrated complete elimination of HCV (as detected by PCR) from endoscopes used on chronically infected patients after cleaning and disinfection for 3-5 minutes in glutaraldehyde 118 . Similarly, PCR was used to demonstrate complete elimination of HCV after standard disinfection of experimentally contaminated endoscopes 236 , and endoscopes used on HCV-antibody-positive patients had no detectable HCV RNA after high-level disinfection 243 . A study of the inhibitory activity of a phenolic and a chlorine compound on HCV showed that the phenolic inhibited the binding and replication of HCV, but the chlorine was ineffective, probably because of its low concentration and its neutralization in the presence of organic matter 244 .

# Disinfection in the Hemodialysis Unit

Hemodialysis systems include hemodialysis machines, water supply, water-treatment systems, and distribution systems. During hemodialysis, patients have acquired bloodborne viruses and pathogenic bacteria 245-247 . Cleaning and disinfection are important components of infection control in a hemodialysis center. EPA and FDA regulate disinfectants used to reprocess hemodialyzers, hemodialysis machines, and water-treatment systems. Noncritical surfaces (e.g., dialysis bed or chair, countertops, external surfaces of dialysis machines, and equipment [scissors, hemostats, clamps, blood pressure cuffs, stethoscopes]) should be disinfected with an EPA-registered disinfectant unless the item is visibly contaminated with blood; in that case, a tuberculocidal agent (or a disinfectant with specific label claims for HBV and HIV) or a 1:100 dilution of a hypochlorite solution (500-600 ppm free chlorine) should be used 246,248 . This procedure accomplishes two goals: it removes soil on a regular basis and maintains an environment that is consistent with good patient care. Hemodialyzers are disinfected with peracetic acid, formaldehyde, glutaraldehyde, heat pasteurization with citric acid, and chlorine-containing compounds 249 . Hemodialysis systems usually are disinfected by chlorine-based disinfectants (e.g., sodium hypochlorite), aqueous formaldehyde, heat pasteurization, ozone, or peracetic acid 250,251 . All products must be used according to the manufacturers' recommendations. Some dialysis systems use hot-water disinfection to control microbial contamination.

At its high point, 82% of U.S. chronic hemodialysis centers were reprocessing (i.e., reusing) dialyzers for the same patient using high-level disinfection 249 . However, one of the large dialysis organizations has decided to phase out reuse, and by 2002 the percentage of dialysis facilities reprocessing hemodialyzers had decreased to 63% 252 . The two most commonly used disinfectants to reprocess dialyzers were peracetic acid (used by 72% of facilities) and formaldehyde (20%); another 4% of the facilities used either glutaraldehyde or heat pasteurization in combination with citric acid 252 . Infection-control recommendations for the hemodialysis setting, including disinfection and sterilization and the use of dedicated machines for hepatitis B surface antigen (HBsAg)-positive patients, are detailed in two reviews 245,246 . The Association for the Advancement of Medical Instrumentation (AAMI) has published recommendations for the reuse of hemodialyzers 253 .
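The hypochlorite concentrations cited in this guideline follow from simple dilution arithmetic: household bleach at 5.25% sodium hypochlorite corresponds to roughly 52,500 ppm available chlorine, so a 1:100 dilution yields roughly the 500-600 ppm figure above, and a 1:10 dilution roughly 5,000 ppm. A minimal sketch of the calculation follows; the helper function is illustrative, not from any cited source.

```python
# Minimal sketch of bleach dilution arithmetic. Assumes 1% (w/v) sodium
# hypochlorite ~= 10,000 ppm available chlorine, a common approximation.
def available_chlorine_ppm(percent_naocl: float, dilution: int) -> float:
    """Approximate ppm available chlorine for a 1:dilution bleach preparation."""
    stock_ppm = percent_naocl * 10_000
    return stock_ppm / dilution

for d in (10, 50, 100):
    ppm = available_chlorine_ppm(5.25, d)
    print(f"1:{d} dilution of 5.25% bleach ~= {ppm:,.0f} ppm available chlorine")
# 1:10  ~= 5,250 ppm (the "5,000 ppm" figure used for blood spills)
# 1:100 ~=   525 ppm (the "500-600 ppm" figure used here)
```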
# Inactivation of Clostridium difficile

The source of health-care-associated acquisition of Clostridium difficile in nonepidemic settings has not been determined. The environment and carriage on the hands of health-care personnel have been considered possible sources of infection 66,254 . Carpeted rooms occupied by a patient with C. difficile were more heavily contaminated with C. difficile than were noncarpeted rooms 255 . Because C. difficile spore production can increase when exposed to nonchlorine-based cleaning agents and the spores are more resistant than vegetative cells to commonly used surface disinfectants 256 , some investigators have recommended use of dilute solutions of hypochlorite (1,600 ppm available chlorine) for routine environmental disinfection of rooms of patients with C. difficile-associated diarrhea or colitis 257 , to reduce the incidence of C. difficile diarrhea 258 , or in units with high C. difficile rates 259 . Stool samples of patients with symptomatic C. difficile colitis contain spores of the organism, as demonstrated by ethanol treatment of the stool to reduce the overgrowth of fecal flora when isolating C. difficile in the laboratory 260,261 . C. difficile-associated diarrhea rates were shown to have decreased markedly in a bone-marrow transplant unit (from 8.6 to 3.3 cases per 1,000 patient-days) during a period of bleach disinfection (1:10 dilution) of environmental surfaces compared with cleaning with a quaternary ammonium compound. Because no EPA-registered products are specific for inactivating C. difficile spores, use of diluted hypochlorite should be considered in units with high C. difficile rates. Acidified bleach and regular bleach (5,000 ppm chlorine) can inactivate 10⁶ C. difficile spores in ≤10 minutes 262 . However, studies have shown that asymptomatic patients constitute an important reservoir within the health-care facility and that person-to-person transmission is the principal means of transmission between patients. Thus, combined use of hand washing, barrier precautions, and meticulous environmental cleaning with an EPA-registered disinfectant (e.g., germicidal detergent) should effectively prevent spread of the organism 263 .

Contaminated medical devices, such as colonoscopes and thermometers, can be vehicles for transmission of C. difficile spores 264 . For this reason, investigators have studied commonly used disinfectants and exposure times to assess whether current practices can place patients at risk. Data demonstrate that 2% glutaraldehyde 79,265-267 and peracetic acid 267,268 reliably kill C. difficile spores using exposure times of 5-20 minutes. ortho-Phthalaldehyde and ≥0.2% peracetic acid (WA Rutala, personal communication, April 2006) also can inactivate ≥10⁴ C. difficile spores in 10-12 minutes at 20°C 268 . Sodium dichloroisocyanurate at a concentration of 1,000 ppm available chlorine achieved lower log10 reduction factors against C. difficile spores at 10 minutes (0.7 to 1.5) than did 0.26% peracetic acid (2.7 to 6.0) 268 .
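The log10 reduction factors quoted above compare initial and surviving viable counts on a logarithmic scale. A minimal sketch of the calculation follows; the counts are illustrative, not data from the cited studies.

```python
# Minimal sketch of the log10 reduction factor (LRF):
# LRF = log10(initial viable count) - log10(surviving viable count).
import math

def log10_reduction(initial: float, surviving: float) -> float:
    """Log10 reduction factor between initial and surviving counts."""
    return math.log10(initial) - math.log10(surviving)

# An LRF of 6 means one survivor per million starting spores; an LRF of 1
# (comparable to the 0.7-1.5 range cited for dichloroisocyanurate above)
# leaves 10% of the inoculum viable.
print(log10_reduction(1e6, 1))    # 6.0
print(log10_reduction(1e6, 1e5))  # 1.0
```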
# OSHA Bloodborne Pathogen Standard

In December 1991, OSHA promulgated a standard entitled "Occupational Exposure to Bloodborne Pathogens" to eliminate or minimize occupational exposure to bloodborne pathogens 214 . One component of this requirement is that all equipment and environmental and working surfaces be cleaned and decontaminated with an appropriate disinfectant after contact with blood or other potentially infectious materials. Even though the OSHA standard does not specify the type of disinfectant or procedure, OSHA's original compliance document 269 suggested that a germicide must be tuberculocidal to kill HBV. To follow the OSHA compliance document, a tuberculocidal disinfectant (e.g., a phenolic or chlorine) would be needed to clean a blood spill. However, in February 1997, OSHA amended its policy and stated that EPA-registered disinfectants labeled as effective against HIV and HBV would be considered appropriate disinfectants ". . . provided such surfaces have not become contaminated with agent(s) or volumes of or concentrations of agent(s) for which higher level disinfection is recommended." When bloodborne pathogens other than HBV or HIV are of concern, OSHA continues to require use of EPA-registered tuberculocidal disinfectants or hypochlorite solution (diluted 1:10 or 1:100 with water) 215,228 . Studies demonstrate that, in the presence of large blood spills, a 1:10 final dilution of EPA-registered hypochlorite solution should be used initially to inactivate bloodborne viruses 63,235 and thereby minimize the risk for infection to health-care personnel from percutaneous injury during cleanup.

# Emerging Pathogens (Cryptosporidium, Helicobacter pylori, Escherichia coli O157:H7, Rotavirus, Human Papillomavirus, Norovirus, Severe Acute Respiratory Syndrome [SARS] Coronavirus)

Emerging pathogens are of growing concern to the general public and infection-control professionals. Relevant pathogens include Cryptosporidium parvum, Helicobacter pylori, E. coli O157:H7, HIV, HCV, rotavirus, norovirus, severe acute respiratory syndrome (SARS) coronavirus, multidrug-resistant M. tuberculosis, and nontuberculous mycobacteria (e.g., M. chelonae). The susceptibility of each of these pathogens to chemical disinfectants and sterilants has been studied. With the exceptions discussed below, all of these emerging pathogens are susceptible to currently available chemical disinfectants and sterilants 270 .

Cryptosporidium is resistant to chlorine at concentrations used in potable water. C. parvum is not completely inactivated by most disinfectants used in healthcare, including ethyl alcohol 271 , glutaraldehyde 271,272 , 5.25% hypochlorite 271 , peracetic acid 271 , ortho-phthalaldehyde 271 , phenol 271,272 , povidone-iodine 271,272 , and quaternary ammonium compounds 271 . The only chemical disinfectants and sterilants able to inactivate greater than 3 log10 of C. parvum were 6% and 7.5% hydrogen peroxide 271 . Sterilization methods, including steam 271 , EtO 271,273 , and hydrogen peroxide gas plasma 271 , will fully inactivate C. parvum. Although most disinfectants are ineffective against C. parvum, current cleaning and disinfection practices appear satisfactory to prevent healthcare-associated transmission. For example, endoscopes are unlikely to be an important vehicle for transmitting C. parvum because the results of bacterial studies indicate mechanical cleaning will remove approximately 10⁴ organisms, and drying results in rapid loss of C. parvum viability (e.g., 30 minutes, 2.9 log10 decrease; and 60 minutes, 3.8 log10 decrease) 271 .

Chlorine at ~1 ppm has been found capable of eliminating approximately 4 log10 of E. coli O157:H7 within 1 minute in a suspension test 64 .
Electrolyzed oxidizing water at 23°C was effective in 10 minutes in producing a 5-log10 decrease in E. coli O157:H7 inoculated onto kitchen cutting boards 274 . The following disinfectants eliminated >5 log10 of E. coli O157:H7 within 30 seconds: a quaternary ammonium compound, a phenolic, a hypochlorite (1:10 dilution of 5.25% bleach), and ethanol 53 . Disinfectants, including chlorine compounds, can reduce E. coli O157:H7 experimentally inoculated onto alfalfa seeds or sprouts 275,276 or beef carcass surfaces 277 .

Data are limited on the susceptibility of H. pylori to disinfectants. Using a suspension test, one study assessed the effectiveness of a variety of disinfectants against nine strains of H. pylori 60 . Ethanol (80%) and glutaraldehyde (0.5%) killed all strains within 15 seconds; chlorhexidine gluconate (0.05%, 1.0%), benzalkonium chloride (0.025%, 0.1%), alkyldiaminoethylglycine hydrochloride (0.1%), povidone-iodine (0.1%), and sodium hypochlorite (150 ppm) killed all strains within 30 seconds. Both ethanol (80%) and glutaraldehyde (0.5%) retained similar bactericidal activity in the presence of organic matter; the other disinfectants showed reduced bactericidal activity. In particular, the bactericidal activity of povidone-iodine (0.1%) and sodium hypochlorite (150 ppm) markedly decreased in the presence of dried yeast solution, with killing times increased to 5-10 minutes and 5-30 minutes, respectively. Immersing biopsy forceps in formalin before obtaining a specimen does not affect the ability to culture H. pylori from the biopsy specimen 278 . The following methods are ineffective for eliminating H. pylori from endoscopes: cleaning with soap and water 119,279 , immersion in 70% ethanol for 3 minutes 280 , instillation of 70% ethanol 126 , instillation of 30 mL of 83% methanol 279 , and instillation of 0.2% Hyamine solution 281 . The differing results with regard to the efficacy of ethyl alcohol against Helicobacter are unexplained. Cleaning followed by use of 2% alkaline glutaraldehyde (or automated peracetic acid) has been demonstrated by culture to be effective in eliminating H. pylori 119,279,282 . Epidemiologic investigations of patients who had undergone endoscopy with endoscopes mechanically washed and disinfected with 2.0%-2.3% glutaraldehyde have revealed no evidence of person-to-person transmission of H. pylori 126,283 . Disinfection of experimentally contaminated endoscopes using 2% glutaraldehyde (10-minute, 20-minute, and 45-minute exposure times) or the peracetic acid system (with and without active peracetic acid) has been demonstrated to be effective in eliminating H. pylori 119 . H. pylori DNA has been detected by PCR in fluid flushed from endoscope channels after cleaning and disinfection with 2% glutaraldehyde 284 ; the clinical significance of this finding is unclear. In vitro experiments have demonstrated a >3.5-log10 reduction in H. pylori after exposure to 0.5 mg/L of free chlorine for 80 seconds 285 .

An outbreak of healthcare-associated rotavirus gastroenteritis on a pediatric unit has been reported 286 . Person-to-person spread through the hands of health-care workers was proposed as the mechanism of transmission. Prolonged survival of rotavirus on environmental surfaces (90 minutes to >10 days at room temperature) and hands (>4 hours) has been demonstrated. Rotavirus suspended in feces can survive longer 287,288 . Vectors have included hands, fomites, air, water, and food 288,289 .
Products with demonstrated efficacy (>3 log10 reduction in virus) against rotavirus within 1 minute include 95% ethanol, 70% isopropanol, some phenolics, 2% glutaraldehyde, 0.35% peracetic acid, and some quaternary ammonium compounds 59,290-293 . In a human challenge study, a disinfectant spray (0.1% ortho-phenylphenol and 79% ethanol), sodium hypochlorite (800 ppm free chlorine), and a phenol-based product (14.7% phenol diluted 1:256 in tap water), when sprayed onto contaminated stainless steel disks, were effective in interrupting transfer of a human rotavirus from stainless steel disks to the fingerpads of volunteers after an exposure time of 3-10 minutes. A quaternary ammonium product (7.05% quaternary ammonium compound diluted 1:128 in tap water) and tap water allowed transfer of virus 52 .

No data exist on the inactivation of HPV by alcohol or other disinfectants because in vitro replication of complete virions has not been achieved. Similarly, little is known about inactivation of noroviruses (members of the family Caliciviridae and important causes of gastroenteritis in humans) because they cannot be grown in tissue culture. Improper disinfection of environmental surfaces contaminated by feces or vomitus of infected patients is believed to play a role in the spread of noroviruses in some settings 294-296 . Prolonged survival of a norovirus surrogate (i.e., feline calicivirus [FCV], a closely related cultivable virus) has been demonstrated (e.g., at room temperature, FCV in a dried state survived for 21-28 days) 297 . Inactivation studies with FCV have shown the effectiveness of chlorine, glutaraldehyde, and iodine-based products, whereas a quaternary ammonium compound, detergent, and ethanol failed to inactivate the virus completely 297 . An evaluation of the effectiveness of several disinfectants against FCV found that bleach diluted to 1,000 ppm of available chlorine reduced infectivity of FCV by 4.5 logs in 1 minute. Other effective disinfectants (log10 reduction factor of >4 in virus) included accelerated hydrogen peroxide, 5,000 ppm (3 min); chlorine dioxide, 1,000 ppm chlorine (1 min); a mixture of four quaternary ammonium compounds, 2,470 ppm (10 min); 79% ethanol with 0.1% quaternary ammonium compound (3 min); and 75% ethanol (10 min) 298 . A quaternary ammonium compound exhibited activity against FCV suspensions dried on hard-surface carriers in 10 minutes 299 . Seventy percent ethanol and 70% 1-propanol reduced FCV by 3-4 log10 in 30 seconds 300 .

CDC announced that a previously unrecognized human virus of the coronavirus family was the leading hypothesis for the cause of SARS 301 . Two coronaviruses that are known to infect humans cause one third of common colds and can cause gastroenteritis. The virucidal efficacy of chemical germicides against coronavirus has been investigated. A study of disinfectants against coronavirus 229E found several that were effective after a 1-minute contact time; these included sodium hypochlorite (at a free chlorine concentration of 1,000 ppm and 5,000 ppm), 70% ethyl alcohol, and povidone-iodine (1% iodine) 186 . In another study, 70% ethanol, 50% isopropanol, 0.05% benzalkonium chloride, 50 ppm iodine in iodophor, 0.23% sodium chlorite, 1% cresol soap, and 0.7% formaldehyde inactivated >3 logs of two animal coronaviruses (mouse hepatitis virus, canine coronavirus) after a 10-minute exposure time 302 .
The activity of povidone-iodine has been demonstrated against human coronaviruses 229E and OC43 303 . A study also showed complete inactivation of the SARS coronavirus by 70% ethanol and povidone-iodine with an exposure time of 1 minute, and by 2.5% glutaraldehyde with an exposure time of 5 minutes 304 . Because the SARS coronavirus is stable in feces and urine at room temperature for at least 1-2 days [The current version of this document may differ from original: First data on stability and resistance of SARS coronavirus compiled by members of WHO laboratory network (http://www.who.int/csr/sars/survival_2003_05_04/en/)], contaminated surfaces might be a source of infection with the SARS coronavirus and should be disinfected. Until more precise information is available, environments in which SARS patients are housed should be considered heavily contaminated, and rooms and equipment should be thoroughly disinfected daily and after the patient is discharged. EPA-registered disinfectants or a 1:100 dilution of household bleach and water should be used for surface disinfection and disinfection of noncritical patient-care equipment. High-level disinfection and sterilization of semicritical and critical medical devices, respectively, do not need to be altered for patients with known or suspected SARS.

Free-living amoebae can be pathogenic and can harbor agents of pneumonia such as Legionella pneumophila. Limited studies have shown that 2% glutaraldehyde and peracetic acid do not completely inactivate Acanthamoeba polyphaga in the 20-minute exposure time used for high-level disinfection. If amoebae are found to contaminate instruments and facilitate infection, longer immersion times or other disinfectants may need to be considered 305 .

# Inactivation of Bioterrorist Agents

Publications have highlighted concerns about the potential for biological terrorism 306,307 . CDC has categorized several agents as "high priority" because they can be easily disseminated or transmitted from person to person, cause high mortality, and are likely to cause public panic and social disruption 308 . These agents include Bacillus anthracis (the cause of anthrax), Yersinia pestis (plague), variola major (smallpox), Clostridium botulinum toxin (botulism), Francisella tularensis (tularemia), filoviruses (Ebola hemorrhagic fever, Marburg hemorrhagic fever), and arenaviruses (Lassa [Lassa fever], Junin [Argentine hemorrhagic fever]) and related viruses 308 .

A few comments can be made regarding the role of sterilization and disinfection of potential agents of bioterrorism 309 . First, the susceptibility of these agents to germicides in vitro is similar to that of other related pathogens. For example, variola is similar to vaccinia 72,310,311 , and B. anthracis is similar to B. atrophaeus (formerly B. subtilis) 312,313 . B. subtilis spores, for instance, proved as resistant as, if not more resistant than, B. anthracis spores (>6 log10 reduction of B. anthracis spores in 5 minutes with acidified bleach [5,250 ppm chlorine]) 313 . Thus, one can extrapolate from the larger database available on the susceptibility of genetically similar organisms 314 . Second, many of the potential bioterrorist agents are stable enough in the environment that contaminated environmental surfaces or fomites could lead to transmission of agents such as B. anthracis, F. tularensis, variola major, C. botulinum toxin, and C. burnetii 315 .
Third, data suggest that current disinfection and sterilization practices are appropriate for managing patient-care equipment and environmental surfaces when potentially contaminated patients are evaluated and/or admitted in a health-care facility after exposure to a bioterrorist agent. For example, sodium hypochlorite can be used for surface disinfection (see [This link is no longer active: http://www.epa.gov/pesticides/factsheets/chemicals/bleachfactsheet.htm.]). In instances where the health-care facility is the site of a bioterrorist attack, environmental decontamination might require special decontamination procedures (e.g., chlorine dioxide gas for B. anthracis spores). Because no antimicrobial products are registered for decontamination of biologic agents after a bioterrorist attack, EPA has granted a crisis exemption for each product (see [This link is no longer active: http://www.epa.gov/pesticides/factsheets/chemicals/bleachfactsheet.htm.]). Of only theoretical concern is the possibility that a bioterrorist agent could be engineered to be less susceptible to disinfection and sterilization processes 309 .

# Toxicological, Environmental and Occupational Concerns

Health hazards associated with the use of germicides in healthcare vary from mucous membrane irritation to death, with the latter involving accidental injection by mentally disturbed patients 316 . Although their degrees of toxicity vary 317-320 , all disinfectants should be used with the proper safety precautions 321 and only for the intended purpose. Key factors associated with assessing the health risk of a chemical exposure include the duration, intensity (i.e., how much chemical is involved), and route (e.g., skin, mucous membranes, and inhalation) of exposure. Toxicity can be acute or chronic. Acute toxicity usually results from an accidental spill of a chemical substance; exposure is sudden and often produces an emergency situation. Chronic toxicity results from repeated exposure to low levels of the chemical over a prolonged period. Employers are responsible for informing workers about the chemical hazards in the workplace and implementing control measures. The OSHA Hazard Communication Standard (29 CFR 1910.1200, 1915.99, 1917.28, 1918.90, 1926.59, and 1928.21) requires manufacturers and importers of hazardous chemicals to develop Material Safety Data Sheets (MSDS) for each chemical or mixture of chemicals. Employers must have these data sheets readily available to employees who work with the products to which they could be exposed.

Exposure limits have been published for many chemicals used in health care to help provide a safe environment and, as relevant, are discussed in each section of this guideline. Only the exposure limits published by OSHA carry the legal force of regulations. OSHA publishes a limit as a time-weighted average (TWA), that is, the average concentration for a normal 8-hour work day and a 40-hour work week to which nearly all workers can be repeatedly exposed without adverse health effects. For example, the permissible exposure limit (PEL) for EtO is 1.0 ppm as an 8-hour TWA. The CDC National Institute for Occupational Safety and Health (NIOSH) develops recommended exposure limits (RELs). RELs are occupational exposure limits recommended by NIOSH as being protective of worker health and safety over a working lifetime.
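A TWA is computed by weighting each measured concentration by the time spent at that concentration and dividing by the standard shift length. The sketch below is a minimal illustration of the arithmetic behind an 8-hour TWA such as the EtO PEL; the sampling intervals are hypothetical.

```python
# Minimal sketch of an 8-hour TWA: sum(concentration_i * hours_i) / 8.
# The intervals below are HYPOTHETICAL sampling results for one shift.
def twa_8hr(intervals):
    """intervals: iterable of (concentration_ppm, duration_hours) pairs."""
    return sum(conc * hours for conc, hours in intervals) / 8.0

shift = [
    (0.5, 2.0),   # 2 h at 0.5 ppm (e.g., working near the sterilizer)
    (2.0, 0.5),   # 30 min at 2.0 ppm (e.g., unloading a chamber)
    (0.1, 5.5),   # remainder of the shift at background levels
]
print(f"8-hour TWA = {twa_8hr(shift):.2f} ppm")  # ~0.32 ppm, below the 1.0 ppm EtO PEL
```

Note that a brief excursion above the PEL (the 2.0 ppm interval here) can still yield a compliant TWA; separate short-term or excursion limits exist to address such peaks.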
The NIOSH REL is frequently expressed as a TWA exposure for up to 10 hours per day during a 40-hour work week. These exposure limits are designed for inhalation exposures; irritant and allergic effects can occur below the exposure limits, and skin contact can result in dermal effects or systemic absorption without inhalation. The American Conference of Governmental Industrial Hygienists (ACGIH) also provides guidelines on exposure limits 322 . Information about workplace exposures and methods to reduce them (e.g., work practices, engineering controls, PPE) is available on the OSHA (https://www.osha.gov/) and NIOSH (https://www.cdc.gov/niosh/) websites.

Some states have excluded or limited concentrations of certain chemical germicides (e.g., glutaraldehyde, formaldehyde, and some phenols) from disposal through the sewer system. These rules are intended to minimize environmental harm. If health-care facilities exceed the maximum allowable concentration of a chemical (e.g., ≥5.0 mg/L), they have three options. First, they can switch to alternative products; for example, they can change from glutaraldehyde to another disinfectant for high-level disinfection or from phenolics to quaternary ammonium compounds for low-level disinfection. Second, the health-care facility can collect the disinfectant and dispose of it as a hazardous chemical. Third, the facility can use a commercially available small-scale treatment method (e.g., neutralize glutaraldehyde with glycine). Safe disposal of regulated chemicals is important throughout the medical community. For disposal of large volumes of spent solutions, users might decide to neutralize the microbicidal activity before disposal (e.g., glutaraldehyde). Solutions can be neutralized by reaction with chemicals such as sodium bisulfite 323,324 or glycine 325 .

European authors have suggested that instruments and ventilation therapy equipment should be disinfected by heat rather than by chemicals. The concerns with chemical disinfection include toxic side effects for the patient caused by chemical residues on the instrument or object, occupational exposure to toxic chemicals, and recontamination caused by rinsing the disinfectant off with microbially contaminated tap water 326 .

# Disinfection in Ambulatory Care, Home Care, and the Home

With the advent of managed healthcare, increasing numbers of patients are now being cared for in ambulatory-care and home settings. Many patients in these settings might have communicable diseases, immunocompromising conditions, or invasive devices. Therefore, adequate disinfection in these settings is necessary to provide a safe patient environment. Because the ambulatory-care setting (i.e., outpatient facility) presents the same risk for infection as the hospital, the Spaulding classification scheme described in this guideline should be followed (Table 1) 17 .

The home environment should be much safer than hospitals or ambulatory care. Epidemics should not be a problem, and cross-infection should be rare. The healthcare provider is responsible for giving the responsible family member information about infection-control procedures to follow in the home, including hand hygiene, proper cleaning and disinfection of equipment, and safe storage of cleaned and disinfected devices. Among the products recommended for home disinfection of reusable objects are bleach, alcohol, and hydrogen peroxide.
APIC recommends that reusable objects (e.g., tracheostomy tubes) that touch mucous membranes be disinfected by immersion in 70% isopropyl alcohol for 5 minutes or in 3% hydrogen peroxide for 30 minutes. Additionally, a 1:50 dilution of 5.25%-6.15% sodium hypochlorite (household bleach) for 5 minutes should be effective 327-329 . Noncritical items (e.g., blood pressure cuffs, crutches) can be cleaned with a detergent. Blood spills should be handled according to OSHA regulations as previously described (see section on OSHA Bloodborne Pathogen Standard). In general, sterilization of critical items is not practical in homes but theoretically could be accomplished by chemical sterilants or boiling. Single-use disposable items can be used, or reusable items can be sterilized in a hospital 330,331 .

Some environmental groups advocate "environmentally safe" products as alternatives to commercial germicides in the home-care setting. These alternatives (e.g., ammonia, baking soda, vinegar, Borax, liquid detergent) are not registered with EPA and should not be used for disinfecting because they are ineffective against S. aureus. Borax, baking soda, and detergents also are ineffective against Salmonella Typhi and E. coli; however, undiluted vinegar and ammonia are effective against S. Typhi and E. coli 53,332,333 . Common commercial disinfectants designed for home use also are effective against selected antibiotic-resistant bacteria 53 .

Public concerns have been raised that the use of antimicrobials in the home can promote development of antibiotic-resistant bacteria 334,335 . This issue is unresolved and needs to be considered further through scientific and clinical investigations. The public health benefits of using disinfectants in the home are unknown. However, some facts are known: many sites in the home kitchen and bathroom are microbially contaminated 336 , use of hypochlorites markedly reduces bacteria 337 , and good standards of hygiene (e.g., food hygiene, hand hygiene) can help reduce infections in the home 338,339 . In addition, laboratory studies indicate that many commercially prepared household disinfectants are effective against common pathogens 53 and can interrupt surface-to-human transmission of pathogens 48 . The "targeted hygiene concept," which means identifying situations and areas (e.g., food-preparation surfaces and bathrooms) where risk exists for transmission of pathogens, may be a reasonable way to identify when disinfection might be appropriate 340 .

# Susceptibility of Antibiotic-Resistant Bacteria to Disinfectants

As with antibiotics, reduced susceptibility (or acquired "resistance") of bacteria to disinfectants can arise by either chromosomal gene mutation or acquisition of genetic material in the form of plasmids or transposons 338,341-346 . When changes occur in bacterial susceptibility that render an antibiotic ineffective against an infection previously treatable by that antibiotic, the bacteria are referred to as "resistant." In contrast, reduced susceptibility to disinfectants does not correlate with failure of the disinfectant because concentrations used in disinfection still greatly exceed the cidal level. Thus, the word "resistance" when applied to these changes is incorrect, and the preferred term is "reduced susceptibility" or "increased tolerance" 344,347 .
No data are available showing that antibiotic-resistant bacteria are less sensitive to liquid chemical germicides than antibiotic-sensitive bacteria at currently used germicide contact conditions and concentrations.

MRSA and vancomycin-resistant Enterococcus (VRE) are important health-care-associated agents. On the basis of MICs, some antiseptics and disinfectants have been known for years to be somewhat less inhibitory to S. aureus strains that contain a plasmid-carrying gene encoding resistance to the antibiotic gentamicin 344 . For example, gentamicin resistance has been shown to also encode reduced susceptibility to propamidine, quaternary ammonium compounds, and ethidium bromide 348 , and MRSA strains have been found to be less susceptible than methicillin-sensitive S. aureus (MSSA) strains to chlorhexidine, propamidine, and the quaternary ammonium compound cetrimide 349 . In other studies, MRSA and MSSA strains have been equally sensitive to phenols and chlorhexidine, but MRSA strains were slightly more tolerant to quaternary ammonium compounds 350 . Two gene families (qacCD [now referred to as smr] and qacAB) are involved in providing protection against agents that are components of disinfectant formulations such as quaternary ammonium compounds. Staphylococci have been proposed to evade destruction because the protein specified by the qacA determinant is a cytoplasmic-membrane-associated protein involved in an efflux system that actively reduces intracellular accumulation of toxicants, such as quaternary ammonium compounds 351 . Other studies demonstrated that plasmid-mediated formaldehyde tolerance is transferable from Serratia marcescens to E. coli 352 and that plasmid-mediated quaternary ammonium tolerance is transferable from S. aureus to E. coli 353 . Tolerance to mercury and silver also is plasmid borne 341,343-346 . Because the concentrations of disinfectants used in practice are much higher than the MICs observed, even for the more tolerant strains, the clinical relevance of these observations is questionable.

Several studies have found antibiotic-resistant hospital strains of common healthcare-associated pathogens (i.e., Enterococcus, P. aeruginosa, Klebsiella pneumoniae, E. coli, S. aureus, and S. epidermidis) to be as susceptible to disinfectants as antibiotic-sensitive strains 53,354-356 . The susceptibility of glycopeptide-intermediate S. aureus was similar to that of vancomycin-susceptible MRSA 357 . On the basis of these data, routine disinfection and housekeeping protocols do not need to be altered because of antibiotic resistance, provided the disinfection method is effective 358,359 . A study that evaluated the efficacy of selected cleaning methods (e.g., QUAT-sprayed cloth and QUAT-immersed cloth) for eliminating VRE found that currently used disinfection processes most likely are highly effective in eliminating VRE. However, surface disinfection must involve contact with all contaminated surfaces 358 . A new method that uses an invisible fluorescent marker to objectively evaluate the thoroughness of cleaning activities in patient rooms might lead to improvement in cleaning of all objects and surfaces but needs further evaluation 360 .

Lastly, does the use of antiseptics or disinfectants facilitate the development of disinfectant-tolerant organisms? Evidence and reviews indicate enhanced tolerance to disinfectants can be developed in response to disinfectant exposure 334,335,346,347,361 .
However, the level of tolerance is not clinically important because it is low and because the concentrations of disinfectants actually used are much higher 347,362 .

The issue of whether low-level tolerance to germicides selects for antibiotic-resistant strains is unsettled but might depend on the mechanism by which tolerance is attained. For example, changes in the permeability barrier or efflux mechanisms might affect susceptibility to both antibiotics and germicides, but specific changes to a target site might not. Some researchers have suggested that use of disinfectants or antiseptics (e.g., triclosan) could facilitate development of antibiotic-resistant microorganisms 334,335,363 . Although evidence in laboratory studies indicates low-level resistance to triclosan, the concentrations of triclosan in these studies were low (generally <1 μg/mL) and dissimilar from the higher levels used in antimicrobial products (2,000-20,000 μg/mL) 364,365 . Thus, researchers can create laboratory-derived mutants that demonstrate reduced susceptibility to antiseptics or disinfectants. In some experiments, such bacteria have demonstrated reduced susceptibility to certain antibiotics 335 . There is no evidence that using antiseptics or disinfectants selects for antibiotic-resistant organisms in nature or that such mutants survive in nature 366 . In addition, the action of antibiotics and the action of disinfectants differ fundamentally. Antibiotics are selectively toxic and generally have a single target site in bacteria, thereby inhibiting a specific biosynthetic process. Germicides generally are considered nonspecific antimicrobials because of a multiplicity of toxic-effect mechanisms or target sites and are broader spectrum in the types of microorganisms against which they are effective 344,347 . The rotational use of disinfectants in some environments (e.g., pharmacy production units) has been recommended and practiced in an attempt to prevent development of resistant microbes 367,368 . There have been only rare case reports that appropriately used disinfectants have resulted in a clinical problem arising from the selection or development of nonsusceptible microorganisms 369 .
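The concentration-margin argument above can be made concrete with the triclosan figures cited in the text. The sketch below simply computes the fold difference between the laboratory study concentrations and product-level concentrations; it is illustrative arithmetic, not data from the cited studies.

```python
# Minimal sketch: fold margin between laboratory concentrations at which
# low-level triclosan resistance was observed (<1 ug/mL, per the text) and
# the levels used in antimicrobial products (2,000-20,000 ug/mL).
lab_conc_ug_ml = 1.0                    # upper bound of the lab-study range
product_conc_ug_ml = (2_000, 20_000)    # product levels cited in the text

for conc in product_conc_ug_ml:
    margin = conc / lab_conc_ug_ml
    print(f"{conc:,} ug/mL is at least a {margin:,.0f}-fold margin "
          f"over the lab-study concentrations")
```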
# Surface Disinfection

The effective use of disinfectants is part of a multibarrier strategy to prevent health-care-associated infections. Surfaces are considered noncritical items because they contact intact skin. Use of noncritical items or contact with noncritical surfaces carries little risk of causing an infection in patients or staff. Thus, the routine use of germicidal chemicals to disinfect hospital floors and other noncritical items is controversial 370-375 . A 1991 study expanded the Spaulding scheme by dividing the noncritical environmental surfaces into housekeeping surfaces and medical equipment surfaces 376 . The classes of disinfectants used on housekeeping and medical equipment surfaces can be similar; however, the frequency of decontamination can vary (see Recommendations). Medical equipment surfaces (e.g., blood pressure cuffs, stethoscopes, hemodialysis machines, and X-ray machines) can become contaminated with infectious agents and contribute to the spread of health-care-associated infections 248,375 . For this reason, noncritical medical equipment surfaces should be disinfected with an EPA-registered low- or intermediate-level disinfectant. Use of a disinfectant will provide antimicrobial activity that is likely to be achieved with minimal additional cost or work.

Environmental surfaces (e.g., bedside tables) also could potentially contribute to cross-transmission by contamination of health-care personnel from hand contact with contaminated surfaces, medical equipment, or patients 50,375,377 . A paper reviews the epidemiologic and microbiologic data (Table 3) regarding the use of disinfectants on noncritical surfaces 378 . Of the seven reasons to use a disinfectant on noncritical surfaces, five are particularly noteworthy and support the use of a germicidal detergent. First, hospital floors become contaminated with microorganisms from settling airborne bacteria, by contact with shoes, wheels, and other objects, and occasionally by spills. The removal of microbes is a component in controlling health-care-associated infections. In an investigation of the cleaning of hospital floors, the use of soap and water (80% reduction) was less effective in reducing the numbers of bacteria than was a phenolic disinfectant (94%-99.9% reduction) 379 . However, a few hours after floor disinfection, the bacterial count was nearly back to the pretreatment level. Second, detergents become contaminated and result in seeding the patient's environment with bacteria. Investigators have shown that mop water becomes increasingly dirty during cleaning and becomes contaminated if soap and water rather than a disinfectant is used. For example, in one study, bacterial contamination in soap and water without a disinfectant increased from 10 CFU/mL to 34,000 CFU/mL after cleaning a ward, whereas contamination in a disinfectant solution did not change (20 CFU/mL) 380 . Contamination of surfaces close to the patient that are frequently touched by the patient or staff (e.g., bed rails) could result in patient exposures 381 . In one study in which detergents were used on floors and patient room furniture, increased bacterial contamination of the patients' environmental surfaces was found after cleaning (average increase = 103.6 CFU/24 cm²) 382 . In addition, a P. aeruginosa outbreak was reported in a hematology-oncology unit associated with contamination of the surface cleaning equipment when nongermicidal cleaning solutions instead of disinfectants were used to decontaminate the patients' environment 383 , and another study demonstrated the role of environmental cleaning in controlling an outbreak of Acinetobacter baumannii 384 . Studies also have shown that, in situations where the cleaning procedure failed to eliminate contamination from the surface and the cloth is used to wipe another surface, the contamination is transferred to that surface and to the hands of the person holding the cloth 381,385 . Third, the CDC Isolation Guideline recommends that noncritical equipment contaminated with blood, body fluids, secretions, or excretions be cleaned and disinfected after use. The same guideline recommends that, in addition to cleaning, disinfection of bedside equipment and environmental surfaces (e.g., bedrails, bedside tables, carts, commodes, doorknobs, and faucet handles) is indicated for certain pathogens (e.g., enterococci) that can survive in the inanimate environment for prolonged periods 386 . Fourth, OSHA requires that surfaces contaminated with blood and other potentially infectious materials (e.g., amniotic or pleural fluid) be disinfected. Fifth, using a single product throughout the facility can simplify both training and appropriate practice.
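The floor-cleaning study above reports percent reductions, while much of this guideline uses log10 reduction factors; the two scales convert directly. A minimal sketch of the conversion follows, using the percentages quoted above.

```python
# Minimal sketch: converting percent reduction to a log10 reduction factor.
import math

def percent_to_log10(percent_reduction: float) -> float:
    """Convert a percent reduction (0-100, exclusive of 100) to log10 units."""
    surviving_fraction = 1.0 - percent_reduction / 100.0
    return -math.log10(surviving_fraction)

for pct in (80.0, 94.0, 99.9):
    print(f"{pct}% reduction = {percent_to_log10(pct):.1f} log10 reduction")
# 80%   -> 0.7 log10 (soap and water)
# 94%   -> 1.2 log10 (phenolic, lower bound)
# 99.9% -> 3.0 log10 (phenolic, upper bound)
```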
Reasons also exist for using a detergent alone on floors because noncritical surfaces contribute minimally to endemic health-care-associated infections 387 , and no differences have been found in healthcare-associated infection rates when floors are cleaned with detergent rather than disinfectant 382,388,389 . However, these studies have been small and of short duration and suffer from low statistical power because the outcome (healthcare-associated infection) is of low frequency. The low rate of infections makes the efficacy of an intervention statistically difficult to demonstrate. Because housekeeping surfaces are associated with the lowest risk for disease transmission, some researchers have suggested that either detergents or a disinfectant/detergent could be used 376 . No data exist that show reduced healthcare-associated infection rates with use of surface disinfection of floors, but some data demonstrate reduced microbial load associated with the use of disinfectants. Given this information, together with evidence that environmental surfaces (e.g., bedside tables, bed rails) close to the patient and in outpatient settings 390 can be contaminated with epidemiologically important microbes (such as VRE and MRSA) 47,390-394 and that these organisms survive on various hospital surfaces 395,396 , some researchers have suggested that such surfaces should be disinfected on a regular schedule 378 . Spot decontamination of fabrics that remain in hospital or clinic rooms while patients move in and out (e.g., privacy curtains) also should be considered. One study demonstrated the effectiveness of spraying the fabric with 3% hydrogen peroxide 397 . Future studies should evaluate the level of contamination on noncritical environmental surfaces as a function of high and low hand contact and whether some surfaces (e.g., bed rails) near the patient with high contact frequencies require more frequent disinfection. Regardless of whether a detergent or disinfectant is used on surfaces in a health-care facility, surfaces should be cleaned routinely and when dirty or soiled to provide an aesthetically pleasing environment and to prevent potentially contaminated objects from serving as a source for health-care-associated infections 398 . The value of designing surfaces (e.g., hexyl-polyvinylpyridine) that kill bacteria on contact 399 or have sustained antimicrobial activity 400 should be further evaluated.

Several investigators have recognized heavy microbial contamination of wet mops and cleaning cloths and the potential for spread of such contamination 68,401 . They have shown that wiping hard surfaces with contaminated cloths can contaminate hands, equipment, and other surfaces 68,402 . Data have been published that can be used to formulate effective policies for decontamination and maintenance of reusable cleaning cloths. For example, heat was the most reliable treatment of cleaning cloths: a detergent washing followed by drying at 80°C for 2 hours eliminated contamination. However, the dry heating process might be a fire hazard if the mop head contains petroleum-based products or if lint builds up within the equipment or vent hose (American Health Care Association, personal communication, March 2003). Alternatively, immersing the cloth in hypochlorite (4,000 ppm) for 2 minutes produced no detectable surviving organisms in 10 of 13 cloths 403 .
If reusable cleaning cloths or mops are used, they should be decontaminated regularly to prevent surface contamination during cleaning with subsequent transfer of organisms from these surfaces to patients or equipment by the hands of health-care workers. Some hospitals have begun using a new mopping technique involving microfiber materials to clean floors. Microfibers are densely constructed, polyester and polyamide (nylon) fibers, that are approximately 1/16 the thickness of a human hair. The positively charged microfibers attract dust (which has a negative charge) and are more absorbent than a conventional, cotton-loop mop. Microfiber materials also can be wet with disinfectants, such as quaternary ammonium compounds. In one study, the microfiber system tested demonstrated superior microbial removal compared with conventional string mops when used with a detergent cleaner (94% vs 68%). The use of a disinfectant did not improve the microbial elimination demonstrated by the microfiber system (95% vs 94%). However, use of disinfectant significantly improved microbial removal when a conventional string mop was used (95% vs 68%) (WA Rutala, unpublished data, August 2006). The microfiber system also prevents the possibility of transferring microbes from room to room because a new microfiber pad is used in each room. An important issue concerning use of disinfectants for noncritical surfaces in health-care settings is that the contact time specified on the label of the product is often too long to be practically followed. The labels of most products registered by EPA for use against HBV, HIV, or M. tuberculosis specify a contact time of 10 minutes. Such a long contact time is not practical for disinfection of environmental surfaces in a health-care setting because most health-care facilities apply a disinfectant and allow it to dry (~1 minute). Multiple scientific papers have demonstrated significant microbial reduction with contact times of 30 to 60 seconds [46][47][48][49][50][51][52][53][54][55][56][58][59][60][61][62][63][64] . In addition, EPA will approve a shortened contact time for any product for which the manufacturers will submit confirmatory efficacy data. Currently, some EPA-registered disinfectants have contact times of one to three minutes. By law, users must follow all applicable label instructions for EPA-registered products. Ideally, product users should consider and use products that have the shortened contact time. However, disinfectant manufacturers also need to obtain EPA approval for shortened contact times so these products will be used correctly and effectively in the health-care environment. # Air Disinfection Disinfectant spray-fog techniques for antimicrobial control in hospital rooms has been used. This technique of spraying of disinfectants is an unsatisfactory method of decontaminating air and surfaces and is not recommended for general infection control in routine patient-care areas 386 . Disinfectant fogging is rarely, if ever, used in U.S. healthcare facilities for air and surface disinfection in patient-care areas. Methods (e.g., filtration, ultraviolet germicidal irradiation, chlorine dioxide) to reduce air contamination in the healthcare setting are discussed in another guideline 23 . # Microbial Contamination of Disinfectants Contaminated disinfectants and antiseptics have been occasional vehicles of health-care infections and pseudoepidemics for more than 50 years. 
Published reports describing contaminated disinfectants and antiseptic solutions leading to health-care-associated infections have been summarized 404 . Since this summary additional reports have been published [405][406][407][408] . An examination of reports of disinfectants contaminated with microorganisms revealed noteworthy observations. Perhaps most importantly, high-level disinfectants/liquid chemical sterilants have not been associated with outbreaks due to intrinsic or extrinsic contamination.Members of the genus Pseudomonas (e.g., P. aeruginosa) are the most frequent isolates from contaminated disinfectants-recovered from 80% of contaminated products. Their ability to remain viable or grow in use-dilutions of disinfectants is unparalleled. This survival advantage for Pseudomonas results presumably from their nutritional versatility, their unique outer membrane that constitutes an effective barrier to the passage of germicides, and/or efflux systems 409 . Although the concentrated solutions of the disinfectants have not been demonstrated to be contaminated at the point of manufacture, an undiluted phenolic can be contaminated by a Pseudomonas sp. during use 410 . In most of the reports that describe illness associated with contaminated disinfectants, the product was used to disinfect patient-care equipment, such as cystoscopes, cardiac catheters, and thermometers. Germicides used as disinfectants that were reported to have been contaminated include chlorhexidine, quaternary ammonium compounds, phenolics, and pine oil. The following control measures should be instituted to reduce the frequency of bacterial growth in disinfectants and the threat of serious healthcare-associated infections from the use of such contaminated products 404 . First, some disinfectants should not be diluted; those that are diluted must be prepared correctly to achieve the manufacturers' recommended use-dilution. Second, infection-control professionals must learn from the literature what inappropriate activities result in extrinsic contamination (i.e., at the point of use) of germicides and train users to prevent recurrence. Common sources of extrinsic contamination of germicides in the reviewed literature are the water to make working dilutions, contaminated containers, and general contamination of the hospital areas where the germicides are prepared and/or used. Third, stock solutions of germicides must be stored as indicated on the product label. EPA verifies manufacturers' efficacy claims against microorganisms. These measures should provide assurance that products meeting the EPA registration requirements can achieve a certain level of antimicrobial activity when used as directed. # Factors Affecting the Efficacy of Disinfection and Sterilization The activity of germicides against microorganisms depends on a number of factors, some of which are intrinsic qualities of the organism, others of which are the chemical and external physical environment. Awareness of these factors should lead to better use of disinfection and sterilization processes and will be briefly reviewed. More extensive consideration of these and other factors is available elsewhere 13,14,16,[411][412][413] . # Number and Location of Microorganisms All other conditions remaining constant, the larger the number of microbes, the more time a germicide needs to destroy all of them. Spaulding illustrated this relation when he employed identical test conditions and demonstrated that it took 30 minutes to kill 10 B. 
atrophaeus (formerly Bacillus subtilis) spores but 3 hours to kill 100,000 Bacillus atrophaeus spores. This reinforces the need for scrupulous cleaning of medical instruments before disinfection and sterilization. Reducing the number of microorganisms that must be inactivated through meticulous cleaning, increases the margin of safety when the germicide is used according to the labeling and shortens the exposure time required to kill the entire microbial load. Researchers also have shown that aggregated or clumped cells are more difficult to inactivate than monodispersed cells 414 . The location of microorganisms also must be considered when factors affecting the efficacy of germicides are assessed. Medical instruments with multiple pieces must be disassembled and equipment such as endoscopes that have crevices, joints, and channels are more difficult to disinfect than are flatsurface equipment because penetration of the disinfectant of all parts of the equipment is more difficult. Only surfaces that directly contact the germicide will be disinfected, so there must be no air pockets and the equipment must be completely immersed for the entire exposure period. Manufacturers should be encouraged to produce equipment engineered for ease of cleaning and disinfection. # Innate Resistance of Microorganisms Microorganisms vary greatly in their resistance to chemical germicides and sterilization processes (Figure 1) 342 Intrinsic resistance mechanisms in microorganisms to disinfectants vary. For example, spores are resistant to disinfectants because the spore coat and cortex act as a barrier, mycobacteria have a waxy cell wall that prevents disinfectant entry, and gram-negative bacteria possess an outer membrane that acts as a barrier to the uptake of disinfectants 341,[343][344][345] . Implicit in all disinfection strategies is the consideration that the most resistant microbial subpopulation controls the sterilization or disinfection time. That is, to destroy the most resistant types of microorganisms (i.e., bacterial spores), the user needs to employ exposure times and a concentration of germicide needed to achieve complete destruction. Except for prions, bacterial spores possess the highest innate resistance to chemical germicides, followed by coccidia (e.g., Cryptosporidium), mycobacteria (e.g., M. tuberculosis), nonlipid or small viruses (e.g., poliovirus, and coxsackievirus), fungi (e.g., Aspergillus, and Candida), vegetative bacteria (e.g., Staphylococcus, and Pseudomonas) and lipid or medium-size viruses (e.g., herpes, and HIV). The germicidal resistance exhibited by the gram-positive and gram-negative bacteria is similar with some exceptions (e.g., P. aeruginosa which shows greater resistance to some disinfectants) 369,415,416 . P. aeruginosa also is significantly more resistant to a variety of disinfectants in its "naturally occurring" state than are cells subcultured on laboratory media 415,417 . Rickettsiae, Chlamydiae, and mycoplasma cannot be placed in this scale of relative resistance because information about the efficacy of germicides against these agents is limited 418 . Because these microorganisms contain lipid and are similar in structure and composition to other bacteria, they can be predicted to be inactivated by the same germicides that destroy lipid viruses and vegetative bacteria. A known exception to this supposition is Coxiella burnetti, which has demonstrated resistance to disinfectants 419 . 
# Concentration and Potency of Disinfectants With other variables constant, and with one exception (iodophors), the more concentrated the disinfectant, the greater its efficacy and the shorter the time necessary to achieve microbial kill. Generally not recognized, however, is that all disinfectants are not similarly affected by concentration adjustments. For example, quaternary ammonium compounds and phenol have a concentration exponent of 1 and 6, respectively; thus, halving the concentration of a quaternary ammonium compound requires doubling its disinfecting time, but halving the concentration of a phenol solution requires a 64-fold (i.e., 2 6 ) increase in its disinfecting time 365,413,420 . Considering the length of the disinfection time, which depends on the potency of the germicide, also is important. This was illustrated by Spaulding who demonstrated using the mucin-loop test that 70% isopropyl alcohol destroyed 10 4 M. tuberculosis in 5 minutes, whereas a simultaneous test with 3% phenolic required 2-3 hours to achieve the same level of microbial kill 14 . # Physical and Chemical Factors Several physical and chemical factors also influence disinfectant procedures: temperature, pH, relative humidity, and water hardness. For example, the activity of most disinfectants increases as the temperature increases, but some exceptions exist. Furthermore, too great an increase in temperature causes the disinfectant to degrade and weakens its germicidal activity and thus might produce a potential health hazard. An increase in pH improves the antimicrobial activity of some disinfectants (e.g., glutaraldehyde, quaternary ammonium compounds) but decreases the antimicrobial activity of others (e.g., phenols, hypochlorites, and iodine). The pH influences the antimicrobial activity by altering the disinfectant molecule or the cell surface 413 . Relative humidity is the single most important factor influencing the activity of gaseous disinfectants/sterilants, such as EtO, chlorine dioxide, and formaldehyde. Water hardness (i.e., high concentration of divalent cations) reduces the rate of kill of certain disinfectants because divalent cations (e.g., magnesium, calcium) in the hard water interact with the disinfectant to form insoluble precipitates 13,421 . # Organic and Inorganic Matter Organic matter in the form of serum, blood, pus, or fecal or lubricant material can interfere with the antimicrobial activity of disinfectants in at least two ways. Most commonly, interference occurs by a chemical reaction between the germicide and the organic matter resulting in a complex that is less germicidal or nongermicidal, leaving less of the active germicide available for attacking microorganisms. Chlorine and iodine disinfectants, in particular, are prone to such interaction. Alternatively, organic material can protect microorganisms from attack by acting as a physical barrier 422,423 . The effects of inorganic contaminants on the sterilization process were studied during the 1950s and 1960s 424,425 . These and other studies show the protection by inorganic contaminants of microorganisms to all sterilization processes results from occlusion in salt crystals 426,427 . This further emphasizes the importance of meticulous cleaning of medical devices before any sterilization or disinfection procedure because both organic and inorganic soils are easily removed by washing 426 . # Duration of Exposure Items must be exposed to the germicide for the appropriate minimum contact time. 
Multiple investigators have demonstrated the effectiveness of low-level disinfectants against vegetative bacteria (e.g., Listeria, E. coli, Salmonella, VRE, MRSA), yeasts (e.g., Candida), mycobacteria (e.g., M. tuberculosis), and viruses (e.g., poliovirus) at exposure times of 30-60 seconds [46][47][48][49][50][51][52][53][54][55][56][57][58][59][60][61][62][63][64] . By law, all applicable label instructions on EPA-registered products must be followed. If the user selects exposure conditions that differ from those on the EPA-registered product label, the user assumes liability for any injuries resulting from off-label use and is potentially subject to enforcement action under the Federal Insecticide, Fungicide, and Rodenticide Act (FIFRA). All lumens and channels of endoscopic instruments must contact the disinfectant. Air pockets interfere with the disinfection process, and items that float on the disinfectant will not be disinfected. The disinfectant must be introduced reliably into the internal channels of the device. The exact times for disinfecting medical items are somewhat elusive because of the effect of the aforementioned factors on disinfection efficacy. Certain contact times have proved reliable (Table 1), but, in general, longer contact times are more effective than shorter contact times. # Biofilms Microorganisms may be protected from disinfectants by production of thick masses of cells 428 and extracellular materials, or biofilms [429][430][431][432][433][434][435] . Biofilms are microbial communities that are tightly attached to surfaces and cannot be easly removed. Once these masses form, microbes within them can be resistant to disinfectants by multiple mechanisms, including physical characteristics of older biofilms, genotypic variation of the bacteria, microbial production of neutralizing enzymes, and physiologic gradients within the biofilm (e.g., pH). Bacteria within biofilms are up to 1,000 times more resistant to antimicrobials than are the same bacteria in suspension 436 . Although new decontamination methods 437 are being investigated for removing biofilms, chlorine and monochloramines can effectively inactivate biofilm bacteria 431 438 . Investigators have hypothesized that the glycocalyx-like cellular masses on the interior walls of polyvinyl chloride pipe would protect embedded organisms from some disinfectants and be a reservoir for continuous contamination 429,430,439 . Biofilms have been found in whirlpools 440 , dental unit waterlines 441 , and numerous medical devices (e.g., contact lenses, pacemakers, hemodialysis systems, urinary catheters, central venous catheters, endoscopes) 434,436,438,442 . Their presence can have serious implications for immunocompromised patients and patients who have indwelling medical devices. Some enzymes 436,443,444 and detergents 436 can degrade biofilms or reduce numbers of viable bacteria within a biofilm, but no products are EPA-registered or FDA-cleared for this purpose. # Cleaning Cleaning is the removal of foreign material (e.g., soil, and organic material) from objects and is normally accomplished using water with detergents or enzymatic products. Thorough cleaning is required before high-level disinfection and sterilization because inorganic and organic materials that remain on the surfaces of instruments interfere with the effectiveness of these processes. 
Also, if soiled materials dry or bake onto the instruments, the removal process becomes more difficult and the disinfection or sterilization process less effective or ineffective. Surgical instruments should be presoaked or rinsed to prevent drying of blood and to soften or remove blood from the instruments. Cleaning is done manually in use areas without mechanical units (e.g., ultrasonic cleaners or washerdisinfectors) or for fragile or difficult-to-clean instruments. With manual cleaning, the two essential components are friction and fluidics. Friction (e.g., rubbing/scrubbing the soiled area with a brush) is an old and dependable method. Fluidics (i.e., fluids under pressure) is used to remove soil and debris from internal channels after brushing and when the design does not allow passage of a brush through a channel 445 . When a washer-disinfector is used, care should be taken in loading instruments: hinged instruments should be opened fully to allow adequate contact with the detergent solution; stacking of instruments in washers should be avoided; and instruments should be disassembled as much as possible. The most common types of mechanical or automatic cleaners are ultrasonic cleaners, washerdecontaminators, washer-disinfectors, and washer-sterilizers. Ultrasonic cleaning removes soil by cavitation and implosion in which waves of acoustic energy are propagated in aqueous solutions to disrupt the bonds that hold particulate matter to surfaces. Bacterial contamination can be present in used ultrasonic cleaning solutions (and other used detergent solutions) because these solutions generally do not make antibacterial label claims 446 . Even though ultrasound alone does not significantly inactivate bacteria, sonication can act synergistically to increase the cidal efficacy of a disinfectant 447 . Users of ultrasonic cleaners should be aware that the cleaning fluid could result in endotoxin contamination of surgical instruments, which could cause severe inflammatory reactions 448 . Washer-sterilizers are modified steam sterilizers that clean by filling the chamber with water and detergent through which steam passes to provide agitation. Instruments are subsequently rinsed and subjected to a short steamsterilization cycle. Another washer-sterilizer employs rotating spray arms for a wash cycle followed by a steam sterilization cycle at 285°F 449,450 . Washer-decontaminators/disinfectors act like a dishwasher that uses a combination of water circulation and detergents to remove soil. These units sometimes have a cycle that subjects the instruments to a heat process (e.g., 93ºC for 10 minutes) 451 . Washer-disinfectors are generally computer-controlled units for cleaning, disinfecting, and drying solid and hollow surgical and medical equipment. In one study, cleaning (measured as 5-6 log10 reduction) was achieved on surfaces that had adequate contact with the water flow in the machine 452 . Detailed information about cleaning and preparing supplies for terminal sterilization is provided by professional organizations 453,454 and books 455 . Studies have shown that manual and mechanical cleaning of endoscopes achieves approximately a 4-log10 reduction of contaminating organisms 83,104,456,457 . Thus, cleaning alone effectively reduces the number of microorganisms on contaminated equipment. 
In a quantitative analysis of residual protein contamination of reprocessed surgical instruments, median levels of residual protein contamination per instrument for five trays were 267, 260, 163, 456, and 756 µg 458 . In another study, the median amount of protein from reprocessed surgical instruments from different hospitals ranged from 8 µg to 91 µg 459 . When manual methods were compared with automated methods for cleaning reusable accessory devices used for minimally invasive surgical procedures, the automated method was more efficient for cleaning biopsy forceps and ported and nonported laparoscopic devices and achieved a >99% reduction in soil parameters (i.e., protein, carbohydrate, hemoglobin) in the ported and nonported laparoscopic devices 460,461 For instrument cleaning, a neutral or near-neutral pH detergent solution commonly is used because such solutions generally provide the best material compatibility profile and good soil removal. Enzymes, usually proteases, sometimes are added to neutral pH solutions to assist in removing organic material. Enzymes in these formulations attack proteins that make up a large portion of common soil (e.g., blood, pus). Cleaning solutions also can contain lipases (enzymes active on fats) and amylases (enzymes active on starches). Enzymatic cleaners are not disinfectants, and proteinaceous enzymes can be inactivated by germicides. As with all chemicals, enzymes must be rinsed from the equipment or adverse reactions (e.g., fever, residual amounts of high-level disinfectants, proteinaceous residue) could result 462,463 . Enzyme solutions should be used in accordance with manufacturer's instructions, which include proper dilution of the enzymatic detergent and contact with equipment for the amount of time specified on the label 463 . Detergent enzymes can result in asthma or other allergic effects in users. Neutral pH detergent solutions that contain enzymes are compatible with metals and other materials used in medical instruments and are the best choice for cleaning delicate medical instruments, especially flexible endoscopes 457 . Alkalinebased cleaning agents are used for processing medical devices because they efficiently dissolve protein and fat residues 464 ; however, they can be corrosive 457 . Some data demonstrate that enzymatic cleaners are more effective than neutral detergents 465,466 in removing microorganisms from surfaces but two more recent studies found no difference in cleaning efficiency between enzymatic and alkaline-based cleaners 443,464 . Another study found no significant difference between enzymatic and non-enzymatic cleaners in terms of microbial cleaning efficacy 467 . A new non-enzyme, hydrogen peroxide-based formulation (not FDA-cleared) was as effective as enzymatic cleaners in removing protein, blood, carbohydrate, and endotoxin from surface test carriers 468 In addition, this product effected a 5-log10 reduction in microbial loads with a 3-minute exposure at room temperature 468 . Although the effectiveness of high-level disinfection and sterilization mandates effective cleaning, no "real-time" tests exist that can be employed in a clinical setting to verify cleaning. If such tests were commercially available they could be used to ensure an adequate level of cleaning. [469][470][471][472] The only way to ensure adequate cleaning is to conduct a reprocessing verification test (e.g., microbiologic sampling), but this is not routinely recommended 473 . 
Validation of the cleaning processes in a laboratory-testing program is possible by microorganism detection, chemical detection for organic contaminants, radionuclide tagging, and chemical detection for specific ions 426,471 . During the past few years, data have been published describing use of an artificial soil, protein, endotoxin, X-ray contrast medium, or blood to verify the manual or automated cleaning process 169,452,[474][475][476][477][478] and adenosine triphosphate bioluminescence and microbiologic sampling to evaluate the effectiveness of environmental surface cleaning 170,479 . At a minimum, all instruments should be individually inspected and be visibly clean. # Disinfection Many disinfectants are used alone or in combinations (e.g., hydrogen peroxide and peracetic acid) in the health-care setting. These include alcohols, chlorine and chlorine compounds, formaldehyde, glutaraldehyde, ortho-phthalaldehyde, hydrogen peroxide, iodophors, peracetic acid, phenolics, and quaternary ammonium compounds. Commercial formulations based on these chemicals are considered unique products and must be registered with EPA or cleared by FDA. In most instances, a given product is designed for a specific purpose and is to be used in a certain manner. Therefore, users should read labels carefully to ensure the correct product is selected for the intended use and applied efficiently. Disinfectants are not interchangeable, and incorrect concentrations and inappropriate disinfectants can result in excessive costs. Because occupational diseases among cleaning personnel have been associated with use of several disinfectants (e.g., formaldehyde, glutaraldehyde, and chlorine), precautions (e.g., gloves and proper ventilation) should be used to minimize exposure 318,480,481 . Asthma and reactive airway disease can occur in sensitized persons exposed to any airborne chemical, including germicides. Clinically important asthma can occur at levels below ceiling levels regulated by OSHA or recommended by NIOSH. The preferred method of control is elimination of the chemical (through engineering controls or substitution) or relocation of the worker. The following overview of the performance characteristics of each provides users with sufficient information to select an appropriate disinfectant for any item and use it in the most efficient way. # Chemical Disinfectants Alcohol Overview. In the healthcare setting, "alcohol" refers to two water-soluble chemical compounds-ethyl alcohol and isopropyl alcohol-that have generally underrated germicidal characteristics 482 . FDA has not cleared any liquid chemical sterilant or high-level disinfectant with alcohol as the main active ingredient. These alcohols are rapidly bactericidal rather than bacteriostatic against vegetative forms of bacteria; they also are tuberculocidal, fungicidal, and virucidal but do not destroy bacterial spores. Their cidal activity drops sharply when diluted below 50% concentration, and the optimum bactericidal concentration is 60%-90% solutions in water (volume/volume) 483,484 . # Mode of Action. The most feasible explanation for the antimicrobial action of alcohol is denaturation of proteins. This mechanism is supported by the observation that absolute ethyl alcohol, a dehydrating agent, is less bactericidal than mixtures of alcohol and water because proteins are denatured more quickly in the presence of water 484,485 . 
Protein denaturation also is consistent with observations that alcohol destroys the dehydrogenases of Escherichia coli 486 , and that ethyl alcohol increases the lag phase of Enterobacter aerogenes 487 and that the lag phase effect could be reversed by adding certain amino acids. The bacteriostatic action was believed caused by inhibition of the production of metabolites essential for rapid cell division. # Microbicidal Activity. Methyl alcohol (methanol) has the weakest bactericidal action of the alcohols and thus seldom is used in healthcare 488 . The bactericidal activity of various concentrations of ethyl alcohol (ethanol) was examined against a variety of microorganisms in exposure periods ranging from 10 seconds to 1 hour 483 . Pseudomonas aeruginosa was killed in 10 seconds by all concentrations of ethanol from 30% to 100% (v/v), and Serratia marcescens, E, coli and Salmonella typhosa were killed in 10 seconds by all concentrations of ethanol from 40% to 100%. The gram-positive organisms Staphylococcus aureus and Streptococcus pyogenes were slightly more resistant, being killed in 10 seconds by ethyl alcohol concentrations of 60%-95%. Isopropyl alcohol (isopropanol) was slightly more bactericidal than ethyl alcohol for E. coli and S. aureus 489 . Ethyl alcohol, at concentrations of 60%-80%, is a potent virucidal agent inactivating all of the lipophilic viruses (e.g., herpes, vaccinia, and influenza virus) and many hydrophilic viruses (e.g., adenovirus, enterovirus, rhinovirus, and rotaviruses but not hepatitis A virus (HAV) 58 or poliovirus) 49 . Isopropyl alcohol is not active against the nonlipid enteroviruses but is fully active against the lipid viruses 72 . Studies also have demonstrated the ability of ethyl and isopropyl alcohol to inactivate the hepatitis B virus(HBV) 224,225 and the herpes virus, 490 and ethyl alcohol to inactivate human immunodeficiency virus (HIV) 227 , rotavirus, echovirus, and astrovirus 491 . In tests of the effect of ethyl alcohol against M. tuberculosis, 95% ethanol killed the tubercle bacilli in sputum or water suspension within 15 seconds 492 . In 1964, Spaulding stated that alcohols were the germicide of choice for tuberculocidal activity, and they should be the standard by which all other tuberculocides are compared. For example, he compared the tuberculocidal activity of iodophor (450 ppm), a substituted phenol (3%), and isopropanol (70%/volume) using the mucin-loop test (10 6 M. tuberculosis per loop) and determined the contact times needed for complete destruction were 120-180 minutes, 45-60 minutes, and 5 minutes, respectively. The mucin-loop test is a severe test developed to produce long survival times. Thus, these figures should not be extrapolated to the exposure times needed when these germicides are used on medical or surgical material 482 . Ethyl alcohol (70%) was the most effective concentration for killing the tissue phase of Cryptococcus neoformans, Blastomyces dermatitidis, Coccidioides immitis, and Histoplasma capsulatum and the culture phases of the latter three organisms aerosolized onto various surfaces. The culture phase was more resistant to the action of ethyl alcohol and required about 20 minutes to disinfect the contaminated surface, compared with <1 minute for the tissue phase 493,494 . Isopropyl alcohol (20%) is effective in killing the cysts of Acanthamoeba culbertsoni (560) as are chlorhexidine, hydrogen peroxide, and thimerosal 496 . Uses. 
Alcohols are not recommended for sterilizing medical and surgical materials principally because they lack sporicidal action and they cannot penetrate protein-rich materials. Fatal postoperative wound infections with Clostridium have occurred when alcohols were used to sterilize surgical instruments contaminated with bacterial spores 497 . Alcohols have been used effectively to disinfect oral and rectal thermometers 498,499 , hospital pagers 500 , scissors 501 , and stethoscopes 502 . Alcohols have been used to disinfect fiberoptic endoscopes 503,504 but failure of this disinfectant have lead to infection 280,505 . Alcohol towelettes have been used for years to disinfect small surfaces such as rubber stoppers of multiple-dose medication vials or vaccine bottles. Furthermore, alcohol occasionally is used to disinfect external surfaces of equipment (e.g., stethoscopes, ventilators, manual ventilation bags) 506 , CPR manikins 507 , ultrasound instruments 508 or medication preparation areas. Two studies demonstrated the effectiveness of 70% isopropyl alcohol to disinfect reusable transducer heads in a controlled environment 509,510 . In contrast, three bloodstream infection outbreaks have been described when alcohol was used to disinfect transducer heads in an intensivecare setting 511 . The documented shortcomings of alcohols on equipment are that they damage the shellac mountings of lensed instruments, tend to swell and harden rubber and certain plastic tubing after prolonged and repeated use, bleach rubber and plastic tiles 482 and damage tonometer tips (by deterioration of the glue) after the equivalent of 1 working year of routine use 512 . Tonometer biprisms soaked in alcohol for 4 days developed rough front surfaces that potentially could cause corneal damage; this appeared to be caused by weakening of the cementing substances used to fabricate the biprisms 513 . Corneal opacification has been reported when tonometer tips were swabbed with alcohol immediately before measurement of intraocular pressure 514 . Alcohols are flammable and consequently must be stored in a cool, well-ventilated area. They also evaporate rapidly, making extended exposure time difficult to achieve unless the items are immersed. # Chlorine and Chlorine Compounds Overview. Hypochlorites, the most widely used of the chlorine disinfectants, are available as liquid (e.g., sodium hypochlorite) or solid (e.g., calcium hypochlorite). The most prevalent chlorine products in the United States are aqueous solutions of 5.25%-6.15% sodium hypochlorite (see glossary), usually called household bleach. They have a broad spectrum of antimicrobial activity, do not leave toxic residues, are unaffected by water hardness, are inexpensive and fast acting 328 , remove dried or fixed organisms and biofilms from surfaces 465 , and have a low incidence of serious toxicity [515][516][517] . Sodium hypochlorite at the concentration used in household bleach (5.25-6.15%) can produce ocular irritation or oropharyngeal, esophageal, and gastric burns 318,[518][519][520][521][522] . Other disadvantages of hypochlorites include corrosiveness to metals in high concentrations (>500 ppm), inactivation by organic matter, discoloring or "bleaching" of fabrics, release of toxic chlorine gas when mixed with ammonia or acid (e.g., household cleaning agents) [523][524][525] , and relative stability 327 . The microbicidal activity of chlorine is attributed largely to undissociated hypochlorous acid (HOCl). 
The dissociation of HOCI to the less microbicidal form (hypochlorite ion OCl -) depends on pH. The disinfecting efficacy of chlorine decreases with an increase in pH that parallels the conversion of undissociated HOCI to OCl -329, 526 . A potential hazard is production of the carcinogen bis(chloromethyl) ether when hypochlorite solutions contact formaldehyde 527 and the production of the animal carcinogen trihalomethane when hot water is hyperchlorinated 528 . After reviewing environmental fate and ecologic data, EPA has determined the currently registered uses of hypochlorites will not result in unreasonable adverse effects to the environment 529 . Alternative compounds that release chlorine and are used in the health-care setting include demandrelease chlorine dioxide, sodium dichloroisocyanurate, and chloramine-T. The advantage of these compounds over the hypochlorites is that they retain chlorine longer and so exert a more prolonged bactericidal effect. Sodium dichloroisocyanurate tablets are stable, and for two reasons, the microbicidal activity of solutions prepared from sodium dichloroisocyanurate tablets might be greater than that of sodium hypochlorite solutions containing the same total available chlorine. First, with sodium dichloroisocyanurate, only 50% of the total available chlorine is free (HOCl and OCl -), whereas the remainder is combined (monochloroisocyanurate or dichloroisocyanurate), and as free available chlorine is used up, the latter is released to restore the equilibrium. Second, solutions of sodium dichloroisocyanurate are acidic, whereas sodium hypochlorite solutions are alkaline, and the more microbicidal type of chlorine (HOCl) is believed to predominate [530][531][532][533] . Chlorine dioxide-based disinfectants are prepared fresh as required by mixing the two components (base solution [citric acid with preservatives and corrosion inhibitors] and the activator solution [sodium chlorite]). In vitro suspension tests showed that solutions containing about 140 ppm chlorine dioxide achieved a reduction factor exceeding 10 6 of S. aureus in 1 minute and of Bacillus atrophaeus spores in 2.5 minutes in the presence of 3 g/L bovine albumin. The potential for damaging equipment requires consideration because long-term use can damage the outer plastic coat of the insertion tube 534 . In another study, chlorine dioxide solutions at either 600 ppm or 30 ppm killed Mycobacterium avium-intracellulare within 60 seconds after contact but contamination by organic material significantly affected the microbicidal properties 535 . The microbicidal activity of a new disinfectant, "superoxidized water," has been examined The concept of electrolyzing saline to create a disinfectant or antiseptics is appealing because the basic materials of saline and electricity are inexpensive and the end product (i.e., water) does not damage the environment. The main products of this water are hypochlorous acid (e.g., at a concentration of about 144 mg/L) and chlorine. As with any germicide, the antimicrobial activity of superoxidized water is strongly affected by the concentration of the active ingredient (available free chlorine) 536 . One manufacturer generates the disinfectant at the point of use by passing a saline solution over coated titanium electrodes at 9 amps. The product generated has a pH of 5.0-6.5 and an oxidation-reduction potential (redox) of >950 mV. 
Although superoxidized water is intended to be generated fresh at the point of use, when tested under clean conditions the disinfectant was effective within 5 minutes when 48 hours old 537 . Unfortunately, the equipment required to produce the product can be expensive because parameters such as pH, current, and redox potential must be closely monitored. The solution is nontoxic to biologic tissues. Although the United Kingdom manufacturer claims the solution is noncorrosive and nondamaging to endoscopes and processing equipment, one flexible endoscope manufacturer (Olympus Key-Med, United Kingdom) has voided the warranty on the endoscopes if superoxidized water is used to disinfect them 538 . As with any germicide formulation, the user should check with the device manufacturer for compatibility with the germicide. Additional studies are needed to determine whether this solution could be used as an alternative to other disinfectants or antiseptics for hand washing, skin antisepsis, room cleaning, or equipment disinfection (e.g., endoscopes, dialyzers) 400,539,540 . In October 2002, the FDA cleared superoxidized water as a high-level disinfectant (FDA, personal communication, September 18, 2002). # Mode of Action. The exact mechanism by which free chlorine destroys microorganisms has not been elucidated. Inactivation by chlorine can result from a number of factors: oxidation of sulfhydryl enzymes and amino acids; ring chlorination of amino acids; loss of intracellular contents; decreased uptake of nutrients; inhibition of protein synthesis; decreased oxygen uptake; oxidation of respiratory components; decreased adenosine triphosphate production; breaks in DNA; and depressed DNA synthesis 329,347 . The actual microbicidal mechanism of chlorine might involve a combination of these factors or the effect of chlorine on critical sites 347 . Microbicidal Activity. Low concentrations of free available chlorine (e.g., HOCl, OCl -, and elemental chlorine-Cl2) have a biocidal effect on mycoplasma (25 ppm) and vegetative bacteria (<5 ppm) in seconds in the absence of an organic load 329,418 . Higher concentrations (1,000 ppm) of chlorine are required to kill M. tuberculosis using the Association of Official Analytical Chemists (AOAC) tuberculocidal test 73 . A concentration of 100 ppm will kill ≥99.9% of B. atrophaeus spores within 5 minutes 541,542 and destroy mycotic agents in <1 hour 329 . Acidified bleach and regular bleach (5,000 ppm chlorine) can inactivate 10 6 Clostridium difficile spores in ≤10 minutes 262 . One study reported that 25 different viruses were inactivated in 10 minutes with 200 ppm available chlorine 72 . Several studies have demonstrated the effectiveness of diluted sodium hypochlorite and other disinfectants to inactivate HIV 61 . Chlorine (500 ppm) showed inhibition of Candida after 30 seconds of exposure 54 . In experiments using the AOAC Use-Dilution Method, 100 ppm of free chlorine killed 10 6 -10 7 S. aureus, Salmonella choleraesuis, and P. aeruginosa in <10 minutes 327 . Because household bleach contains 5.25%-6.15% sodium hypochlorite, or 52,500-61,500 ppm available chlorine, a 1:1,000 dilution provides about 53-62 ppm available chlorine, and a 1:10 dilution of household bleach provides about 5250-6150 ppm. Data are available for chlorine dioxide that support manufacturers' bactericidal, fungicidal, sporicidal, tuberculocidal, and virucidal label claims [543][544][545][546] . 
A chlorine dioxide generator has been shown effective for decontaminating flexible endoscopes 534 but it is not currently FDA-cleared for use as a high-level disinfectant 85 . Chlorine dioxide can be produced by mixing solutions, such as a solution of chlorine with a solution of sodium chlorite 329 . In 1986, a chlorine dioxide product was voluntarily removed from the market when its use caused leakage of cellulose-based dialyzer membranes, which allowed bacteria to migrate from the dialysis fluid side of the dialyzer to the blood side 547 . Sodium dichloroisocyanurate at 2,500 ppm available chlorine is effective against bacteria in the presence of up to 20% plasma, compared with 10% plasma for sodium hypochlorite at 2,500 ppm 548 . "Superoxidized water" has been tested against bacteria, mycobacteria, viruses, fungi, and spores 537,539,549 . Freshly generated superoxidized water is rapidly effective (<2 minutes) in achieving a 5-log10 reduction of pathogenic microorganisms (i.e., M. tuberculosis, M. chelonae, poliovirus, HIV, multidrugresistant S. aureus, E. coli, Candida albicans, Enterococcus faecalis, P. aeruginosa) in the absence of organic loading. However, the biocidal activity of this disinfectant decreased substantially in the presence of organic material (e.g., 5% horse serum) 537,549,550 . No bacteria or viruses were detected on artificially contaminated endoscopes after a 5-minute exposure to superoxidized water 551 and HBV-DNA was not detected from any endoscope experimentally contaminated with HBV-positive mixed sera after a disinfectant exposure time of 7 minutes 552 . Uses. Hypochlorites are widely used in healthcare facilities in a variety of settings. 328 Inorganic chlorine solution is used for disinfecting tonometer heads 188 and for spot-disinfection of countertops and floors. A 1:10-1:100 dilution of 5.25%-6.15% sodium hypochlorite (i.e., household bleach) 22,228,553,554 or an EPA-registered tuberculocidal disinfectant has been recommended for decontaminating blood spills. For small spills of blood (i.e., drops of blood) on noncritical surfaces, the area can be disinfected with a 1:100 dilution of 5.25%-6.15% sodium hypochlorite or an EPA-registered tuberculocidal disinfectant. Because hypochlorites and other germicides are substantially inactivated in the presence of blood 63,548,555,556 , large spills of blood require that the surface be cleaned before an EPA-registered disinfectant or a 1:10 (final concentration) solution of household bleach is applied 557 . If a sharps injury is possible, the surface initially should be decontaminated 69,318 , then cleaned and disinfected (1:10 final concentration) 63 . Extreme care always should be taken to prevent percutaneous injury. At least 500 ppm available chlorine for 10 minutes is recommended for decontaminating CPR training manikins 558 . Full-strength bleach has been recommended for self-disinfection of needles and syringes used for illicit-drug injection when needle-exchange programs are not available. The difference in the recommended concentrations of bleach reflects the difficulty of cleaning the interior of needles and syringes and the use of needles and syringes for parenteral injection 559 . Clinicians should not alter their use of chlorine on environmental surfaces on the basis of testing methodologies that do not simulate actual disinfection practices 560,561 . 
Other uses in healthcare include as an irrigating agent in endodontic treatment 562 and as a disinfectant for manikins, laundry, dental appliances, hydrotherapy tanks 23,41 , regulated medical waste before disposal 328 , and the water distribution system in hemodialysis centers and hemodialysis machines 563 . Chlorine long has been used as the disinfectant in water treatment. Hyperchlorination of a Legionellacontaminated hospital water system 23 resulted in a dramatic decrease (from 30% to 1.5%) in the isolation of L. pneumophila from water outlets and a cessation of healthcare-associated Legionnaires' disease in an affected unit 528,564 . Water disinfection with monochloramine by municipal water-treatment plants substantially reduced the risk for healthcare-associated Legionnaires disease 565,566 . Chlorine dioxide also has been used to control Legionella in a hospital water supply. 567 Chloramine T 568 and hypochlorites 41 have been used to disinfect hydrotherapy equipment. Hypochlorite solutions in tap water at a pH >8 stored at room temperature (23 º C) in closed, opaque plastic containers can lose up to 40%-50% of their free available chlorine level over 1 month. Thus, if a user wished to have a solution containing 500 ppm of available chlorine at day 30, he or she should prepare a solution containing 1,000 ppm of chlorine at time 0. Sodium hypochlorite solution does not decompose after 30 days when stored in a closed brown bottle 327 . The use of powders, composed of a mixture of a chlorine-releasing agent with highly absorbent resin, for disinfecting spills of body fluids has been evaluated by laboratory tests and hospital ward trials. The inclusion of acrylic resin particles in formulations markedly increases the volume of fluid that can be soaked up because the resin can absorb 200-300 times its own weight of fluid, depending on the fluid consistency. When experimental formulations containing 1%, 5%, and 10% available chlorine were evaluated by a standardized surface test, those containing 10% demonstrated bactericidal activity. One problem with chlorine-releasing granules is that they can generate chlorine fumes when applied to urine 569 . # Formaldehyde Overview. Formaldehyde is used as a disinfectant and sterilant in both its liquid and gaseous states. Liquid formaldehyde will be considered briefly in this section, and the gaseous form is reviewed elsewhere 570 . Formaldehyde is sold and used principally as a water-based solution called formalin, which is 37% formaldehyde by weight. The aqueous solution is a bactericide, tuberculocide, fungicide, virucide and sporicide 72,82,[571][572][573] . OSHA indicated that formaldehyde should be handled in the workplace as a potential carcinogen and set an employee exposure standard for formaldehyde that limits an 8-hour timeweighted average exposure concentration of 0.75 ppm 574,575 . The standard includes a second permissible exposure limit in the form of a short-term exposure limit (STEL) of 2 ppm that is the maximum exposure allowed during a 15-minute period 576 . Ingestion of formaldehyde can be fatal, and long-term exposure to low levels in the air or on the skin can cause asthma-like respiratory problems and skin irritation, such as dermatitis and itching. For these reasons, employees should have limited direct contact with formaldehyde, and these considerations limit its role in sterilization and disinfection processes. 
Key provisions of the OSHA standard that protects workers from exposure to formaldehyde appear in Title 29 of the Code of Federal Regulations (CFR) Part 1910.1048 (and equivalent regulations in states with OSHA-approved state plans) 577 . # Mode of Action. Formaldehyde inactivates microorganisms by alkylating the amino and sulfhydral groups of proteins and ring nitrogen atoms of purine bases 376 . Microbicidal Activity. Varying concentrations of aqueous formaldehyde solutions destroy a wide range of microorganisms. Inactivation of poliovirus in 10 minutes required an 8% concentration of formalin, but all other viruses tested were inactivated with 2% formalin 72 . Four percent formaldehyde is a tuberculocidal agent, inactivating 10 4 M. tuberculosis in 2 minutes 82 , and 2.5% formaldehyde inactivated about 10 7 Salmonella Typhi in 10 minutes in the presence of organic matter 572 . The sporicidal action of formaldehyde was slower than that of glutaraldehyde in comparative tests with 4% aqueous formaldehyde and 2% glutaraldehyde against the spores of B. anthracis 82 . The formaldehyde solution required 2 hours of contact to achieve an inactivation factor of 10 4 , whereas glutaraldehyde required only 15 minutes. # Uses. Although formaldehyde-alcohol is a chemical sterilant and formaldehyde is a high-level disinfectant, the health-care uses of formaldehyde are limited by its irritating fumes and its pungent odor even at very low levels (<1 ppm). For these reasons and others-such as its role as a suspected human carcinogen linked to nasal cancer and lung cancer 578 , this germicide is excluded from Table 1. When it is used, , direct exposure to employees generally is limited; however, excessive exposures to formaldehyde have been documented for employees of renal transplant units 574,579 , and students in a gross anatomy laboratory 580 . Formaldehyde is used in the health-care setting to prepare viral vaccines (e.g., poliovirus and influenza); as an embalming agent; and to preserve anatomic specimens; and historically has been used to sterilize surgical instruments, especially when mixed with ethanol. A 1997 survey found that formaldehyde was used for reprocessing hemodialyzers by 34% of U.S. hemodialysis centers-a 60% decrease from 1983 249,581 . If used at room temperature, a concentration of 4% with a minimum exposure of 24 hours is required to disinfect disposable hemodialyzers reused on the same patient 582,583 . Aqueous formaldehyde solutions (1%-2%) also have been used to disinfect the internal fluid pathways of dialysis machines 583 . To minimize a potential health hazard to dialysis patients, the dialysis equipment must be thoroughly rinsed and tested for residual formaldehyde before use. Paraformaldehyde, a solid polymer of formaldehyde, can be vaporized by heat for the gaseous decontamination of laminar flow biologic safety cabinets when maintenance work or filter changes require access to the sealed portion of the cabinet. # Glutaraldehyde Overview. Glutaraldehyde is a saturated dialdehyde that has gained wide acceptance as a high-level disinfectant and chemical sterilant 107 . Aqueous solutions of glutaraldehyde are acidic and generally in this state are not sporicidal. Only when the solution is "activated" (made alkaline) by use of alkalinating agents to pH 7.5-8.5 does the solution become sporicidal. Once activated, these solutions have a shelf-life of minimally 14 days because of the polymerization of the glutaraldehyde molecules at alkaline pH levels. 
This polymerization blocks the active sites (aldehyde groups) of the glutaraldehyde molecules that are responsible for its biocidal activity. Novel glutaraldehyde formulations (e.g., glutaraldehyde-phenol-sodium phenate, potentiated acid glutaraldehyde, stabilized alkaline glutaraldehyde) produced in the past 30 years have overcome the problem of rapid loss of activity (e.g., use-life 28-30 days) while generally maintaining excellent microbicidal activity [584][585][586][587][588] . However, antimicrobial activity depends not only on age but also on use conditions, such as dilution and organic stress. Manufacturers' literature for these preparations suggests the neutral or alkaline glutaraldehydes possess microbicidal and anticorrosion properties superior to those of acid glutaraldehydes, and a few published reports substantiate these claims 542,589,590 . However, two studies found no difference in the microbicidal activity of alkaline and acid glutaraldehydes 73,591 . The use of glutaraldehyde-based solutions in health-care facilities is widespread because of their advantages, including excellent biocidal properties; activity in the presence of organic matter (20% bovine serum); and noncorrosive action to endoscopic equipment, thermometers, rubber, or plastic equipment (Tables 4 and 5). # Mode of Action. The biocidal activity of glutaraldehyde results from its alkylation of sulfhydryl, hydroxyl, carboxyl, and amino groups of microorganisms, which alters RNA, DNA, and protein synthesis. The mechanism of action of glutaraldehydes are reviewed extensively elsewhere 592,593 . Microbicidal Activity. The in vitro inactivation of microorganisms by glutaraldehydes has been extensively investigated and reviewed 592,593 . Several investigators showed that ≥2% aqueous solutions of glutaraldehyde, buffered to pH 7.5-8.5 with sodium bicarbonate effectively killed vegetative bacteria in <2 minutes; M. tuberculosis, fungi, and viruses in <10 minutes; and spores of Bacillus and Clostridium species in 3 hours 542,[592][593][594][595][596][597] . Spores of C. difficile are more rapidly killed by 2% glutaraldehyde than are spores of other species of Clostridium and Bacillus 79,265,266 . Microorganisms with substantial resistance to glutaraldehyde have been reported, including some mycobacteria (M. chelonae, Mycobacterium aviumintracellulare, M. xenopi) [598][599][600][601] , Methylobacterium mesophilicum 602 , Trichosporon, fungal ascospores (e.g., Microascus cinereus, Cheatomium globosum), and Cryptosporidium 271,603 . M. chelonae persisted in a 0.2% glutaraldehyde solution used to store porcine prosthetic heart valves 604 . Two percent alkaline glutaraldehyde solution inactivated 10 5 M. tuberculosis cells on the surface of penicylinders within 5 minutes at 18 º C 589 . However, subsequent studies 82 questioned the mycobactericidal prowess of glutaraldehydes. Two percent alkaline glutaraldehyde has slow action (20 to >30 minutes) against M. tuberculosis and compares unfavorably with alcohols, formaldehydes, iodine, and phenol 82 . Suspensions of M. avium, M. intracellulare, and M. gordonae were more resistant to inactivation by a 2% alkaline glutaraldehyde (estimated time to complete inactivation: ~60 minutes) than were virulent M. tuberculosis (estimated time to complete inactivation ~25 minutes) 605 . The rate of kill was directly proportional to the temperature, and a standardized suspension of M. tuberculosis could not be sterilized within 10 minutes 84 . 
An FDA-cleared chemical sterilant containing 2.5% glutaraldehyde uses increased temperature (35 º C) to reduce the time required to achieve high-level disinfection (5 minutes) 85,606 , but its use is limited to automatic endoscope reprocessors equipped with a heater. In another study employing membrane filters for measurement of mycobactericidal activity of 2% alkaline glutaraldehyde, complete inactivation was achieved within 20 minutes at 20 º C when the test inoculum was 10 6 M. tuberculosis per membrane 81 . Several investigators 55,57,73,76,80,81,84,605 have demonstrated that glutaraldehyde solutions inactivate 2.4 to >5.0 log10 of M. tuberculosis in 10 minutes (including multidrugresistant M. tuberculosis) and 4.0-6.4 log10 of M. tuberculosis in 20 minutes. On the basis of these data and other studies, 20 minutes at room temperature is considered the minimum exposure time needed to reliably kill Mycobacteria and other vegetative bacteria with ≥2% glutaraldehyde 17,19,27,57,83,94,108,111,[117][118][119][120][121]607 . Glutaraldehyde is commonly diluted during use, and studies showed a glutaraldehyde concentration decline after a few days of use in an automatic endoscope washer 608,609 . The decline occurs because instruments are not thoroughly dried and water is carried in with the instrument, which increases the solution's volume and dilutes its effective concentration 610 . This emphasizes the need to ensure that semicritical equipment is disinfected with an acceptable concentration of glutaraldehyde. Data suggest that 1.0%-1.5% glutaraldehyde is the minimum effective concentration for >2% glutaraldehyde solutions when used as a high-level disinfectant 76,589,590,609 . Chemical test strips or liquid chemical monitors 610,611 are available for determining whether an effective concentration of glutaraldehyde is present despite repeated use and dilution. The frequency of testing should be based on how frequently the solutions are used (e.g., used daily, test daily; used weekly, test before use; used 30 times per day, test each 10th use), but the strips should not be used to extend the use life beyond the expiration date. Data suggest the chemicals in the test strip deteriorate with time 612 and a manufacturer's expiration date should be placed on the bottles. The bottle of test strips should be dated when opened and used for the period of time indicated on the bottle (e.g., 120 days). The results of test strip monitoring should be documented. The glutaraldehyde test kits have been preliminarily evaluated for accuracy and range 612 but the reliability has been questioned 613 . To ensure the presence of minimum effective concentration of the high-level disinfectant, manufacturers of some chemical test strips recommend the use of quality-control procedures to ensure the strips perform properly. If the manufacturer of the chemical test strip recommends a qualitycontrol procedure, users should comply with the manufacturer's recommendations. The concentration should be considered unacceptable or unsafe when the test indicates a dilution below the product's minimum effective concentration (MEC) (generally to ≤1.0%-1.5% glutaraldehyde) by the indicator not changing color. 
A 2.0% glutaraldehyde-7.05% phenol-1.20% sodium phenate product that contained 0.125% glutaraldehyde-0.44% phenol-0.075% sodium phenate when diluted 1:16 is not recommended as a high-level disinfectant because it lacks bactericidal activity in the presence of organic matter and lacks tuberculocidal, fungicidal, virucidal, and sporicidal activity 49,55,56,71,73-79,614. In December 1991, EPA issued an order to stop the sale of all batches of this product because efficacy data showed the product was not effective against spores, and possibly other microorganisms, as claimed on the label 615. FDA has cleared a glutaraldehyde-phenol/phenate concentrate as a high-level disinfectant that contains 1.12% glutaraldehyde with 1.93% phenol/phenate at its use concentration. Other FDA-cleared glutaraldehyde sterilants that contain 2.4%-3.4% glutaraldehyde are used undiluted 606.

Uses. Glutaraldehyde is used most commonly as a high-level disinfectant for medical equipment such as endoscopes 69,107,504, spirometry tubing, dialyzers 616, transducers, anesthesia and respiratory therapy equipment 617, hemodialysis proportioning and dialysate delivery systems 249,618, and reuse of laparoscopic disposable plastic trocars 619. Glutaraldehyde is noncorrosive to metal and does not damage lensed instruments, rubber, or plastics. Glutaraldehyde should not be used for cleaning noncritical surfaces because it is too toxic and expensive.

Colitis believed to be caused by glutaraldehyde exposure from residual disinfecting solution in endoscope solution channels has been reported and is preventable by careful endoscope rinsing 318,620-630. One study found that residual glutaraldehyde levels were higher and more variable after manual disinfection (<0.2 mg/L to 159.5 mg/L) than after automatic disinfection (0.2-6.3 mg/L) 631. Similarly, keratopathy and corneal decompensation were caused by ophthalmic instruments that were inadequately rinsed after soaking in 2% glutaraldehyde 632,633.

Healthcare personnel can be exposed to elevated levels of glutaraldehyde vapor when equipment is processed in poorly ventilated rooms, when spills occur, when glutaraldehyde solutions are activated or changed 634, or when open immersion baths are used. Acute or chronic exposure can result in skin irritation or dermatitis, mucous membrane irritation (eye, nose, mouth), or pulmonary symptoms 318,635-639. Epistaxis, allergic contact dermatitis, asthma, and rhinitis also have been reported in healthcare workers exposed to glutaraldehyde 636,640-647. Glutaraldehyde exposure should be monitored to ensure a safe work environment. Testing can be done by four techniques: a silica gel tube/gas chromatography with a flame ionization detector, dinitrophenylhydrazine (DNPH)-impregnated filter cassette/high-performance liquid chromatography (HPLC) with an ultraviolet (UV) detector, a passive badge/HPLC, or a handheld glutaraldehyde air monitor 648. The silica gel tube and the DNPH-impregnated cassette are suitable for monitoring the 0.05 ppm ceiling limit. The passive badge, with a 0.02 ppm limit of detection, is considered marginal at the American Conference of Governmental Industrial Hygienists (ACGIH) ceiling level. The ceiling level is considered too close to the glutaraldehyde meter's 0.03 ppm limit of detection to provide confidence in the readings 648.
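Interpreting these measurements against the ACGIH ceiling is simple arithmetic. A minimal sketch (Python); the sampling sites and readings are hypothetical and the function name is ours:

```python
ACGIH_CEILING_PPM = 0.05  # glutaraldehyde ceiling limit cited above

def review_exposure(samples_ppm: dict[str, float]) -> dict[str, str]:
    """Flag measured glutaraldehyde vapor levels (ppm) that exceed
    the ACGIH ceiling; per the guidance that follows, an exceedance
    warrants corrective action and repeat monitoring."""
    return {
        site: ("exceeds ceiling: corrective action and repeat monitoring"
               if ppm > ACGIH_CEILING_PPM else "below ceiling")
        for site, ppm in samples_ppm.items()
    }

# Hypothetical readings from an endoscope reprocessing room
print(review_exposure({"soak station": 0.08, "pass-through window": 0.02}))
```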
ACGIH does not require a specific monitoring schedule for glutaraldehyde; however, a monitoring schedule is needed to ensure the level remains below the ceiling limit. For example, monitoring should be done initially to determine glutaraldehyde levels, after procedural or equipment changes, and in response to worker complaints 649. In the absence of an OSHA permissible exposure limit, if the glutaraldehyde level is higher than the ACGIH ceiling limit of 0.05 ppm, corrective action and repeat monitoring would be prudent 649. Engineering and work-practice controls that can be used to resolve these problems include ducted exhaust hoods, air systems that provide 7-15 air exchanges per hour, ductless fume hoods with absorbents for the glutaraldehyde vapor, tight-fitting lids on immersion baths, personal protection (e.g., nitrile or butyl rubber gloves but not natural latex gloves, goggles) to minimize skin or mucous membrane contact, and automated endoscope processors 7,650. If engineering controls fail to maintain levels below the ceiling limit, institutions can consider the use of respirators (e.g., a half-face respirator with organic vapor cartridge 640 or a type "C" supplied air respirator with a full facepiece operated in a positive pressure mode) 651. In general, engineering controls are preferred over work-practice and administrative controls because they do not require active participation by the health-care worker. Even though enforcement of the OSHA ceiling limit was suspended in 1993 by the U.S. Court of Appeals 577, limiting employee exposure to 0.05 ppm (according to ACGIH) is prudent because, at this level, glutaraldehyde can irritate the eyes, throat, and nose 318,577,639,652. If glutaraldehyde disposal through the sanitary sewer system is restricted, sodium bisulfite can be used to neutralize the glutaraldehyde and make it safe for disposal.

# Hydrogen Peroxide

Overview. The literature contains several accounts of the properties, germicidal effectiveness, and potential uses for stabilized hydrogen peroxide in the health-care setting. Published reports ascribe good germicidal activity to hydrogen peroxide and attest to its bactericidal, virucidal, sporicidal, and fungicidal properties 653-655 (Tables 4 and 5). The FDA website lists cleared liquid chemical sterilants and high-level disinfectants containing hydrogen peroxide and their cleared contact conditions.

# Mode of Action.

Hydrogen peroxide works by producing destructive hydroxyl free radicals that can attack membrane lipids, DNA, and other essential cell components. Catalase, produced by aerobic organisms and facultative anaerobes that possess cytochrome systems, can protect cells from metabolically produced hydrogen peroxide by degrading hydrogen peroxide to water and oxygen. This defense is overwhelmed by the concentrations used for disinfection 653,654.

Microbicidal Activity. Hydrogen peroxide is active against a wide range of microorganisms, including bacteria, yeasts, fungi, viruses, and spores 78,654. A 0.5% accelerated hydrogen peroxide demonstrated bactericidal and virucidal activity in 1 minute and mycobactericidal and fungicidal activity in 5 minutes 656. Bactericidal effectiveness and stability of hydrogen peroxide in urine have been demonstrated against a variety of health-care-associated pathogens; organisms with high cellular catalase activity (e.g., S.
marcescens, and Proteus mirabilis) required 30-60 minutes of exposure to 0.6% hydrogen peroxide for a 10^8 reduction in cell counts, whereas organisms with lower catalase activity (e.g., E. coli, Streptococcus species, and Pseudomonas species) required only 15 minutes' exposure 657. In an investigation of 3%, 10%, and 15% hydrogen peroxide for reducing spacecraft bacterial populations, a complete kill of 10^6 spores (i.e., Bacillus species) occurred with a 10% concentration and a 60-minute exposure time. A 3% concentration for 150 minutes killed 10^6 spores in six of seven exposure trials 658. A 10% hydrogen peroxide solution resulted in a 10^3 decrease in B. atrophaeus spores, and a ≥10^5 decrease when tested against 13 other pathogens in 30 minutes at 20°C 659,660. A 3.0% hydrogen peroxide solution was ineffective against VRE after 3- and 10-minute exposure times 661 and caused only a 2-log10 reduction in the number of Acanthamoeba cysts in approximately 2 hours 662. A 7% stabilized hydrogen peroxide proved to be sporicidal (6 hours of exposure), mycobactericidal (20 minutes), fungicidal (5 minutes) at full strength, and virucidal (5 minutes) and bactericidal (3 minutes) at a 1:16 dilution when a quantitative carrier test was used 655. The 7% solution of hydrogen peroxide, tested after 14 days of stress (in the form of germ-loaded carriers and respiratory therapy equipment), was sporicidal (>7 log10 reduction in 6 hours), mycobactericidal (>6.5 log10 reduction in 25 minutes), fungicidal (>5 log10 reduction in 20 minutes), bactericidal (>6 log10 reduction in 5 minutes), and virucidal (5 log10 reduction in 5 minutes) 663. Synergistic sporicidal effects were observed when spores were exposed to a combination of hydrogen peroxide (5.9%-23.6%) and peracetic acid 664. Other studies demonstrated the antiviral activity of hydrogen peroxide against rhinovirus 665. The time required for inactivating three serotypes of rhinovirus using a 3% hydrogen peroxide solution was 6-8 minutes; this time increased with decreasing concentrations (18-20 minutes at 1.5%, 50-60 minutes at 0.75%).

Concentrations of hydrogen peroxide from 6% to 25% show promise as chemical sterilants. The product marketed as a sterilant is a premixed, ready-to-use chemical that contains 7.5% hydrogen peroxide and 0.85% phosphoric acid (to maintain a low pH) 69. The mycobactericidal activity of 7.5% hydrogen peroxide has been corroborated in a study showing the inactivation of >10^5 multidrug-resistant M. tuberculosis after a 10-minute exposure 666. Thirty minutes were required for >99.9% inactivation of poliovirus and HAV 667. Three percent and 6% hydrogen peroxide were unable to inactivate HAV in 1 minute in a carrier test 58. When the effectiveness of 7.5% hydrogen peroxide at 10 minutes was compared with 2% alkaline glutaraldehyde at 20 minutes in manual disinfection of endoscopes, no significant difference in germicidal activity was observed 668. No complaints were received from the nursing or medical staff regarding odor or toxicity. In one study, 6% hydrogen peroxide (unused product was 7.5%) was more effective in the high-level disinfection of flexible endoscopes than was the 2% glutaraldehyde solution 456. A new, rapid-acting 13.4% hydrogen peroxide formulation (not yet FDA-cleared) has demonstrated sporicidal, mycobactericidal, fungicidal, and virucidal efficacy. Manufacturer data demonstrate that this solution sterilizes in 30 minutes and provides high-level disinfection in 5 minutes 669.
This product has not been used long enough to evaluate material compatibility with endoscopes and other semicritical devices, and further assessment by instrument manufacturers is needed. Under normal conditions, hydrogen peroxide is extremely stable when properly stored (e.g., in dark containers). The decomposition or loss of potency in small containers is less than 2% per year at ambient temperatures 670.

Uses. Commercially available 3% hydrogen peroxide is a stable and effective disinfectant when used on inanimate surfaces. It has been used in concentrations from 3% to 6% for disinfecting soft contact lenses (e.g., 3% for 2-3 hours) 653,671,672, tonometer biprisms 513, ventilators 673, fabrics 397, and endoscopes 456. Hydrogen peroxide was effective in spot-disinfecting fabrics in patients' rooms 397. Corneal damage from a hydrogen peroxide-soaked tonometer tip that was not properly rinsed has been reported 674. Hydrogen peroxide also has been instilled into urinary drainage bags in an attempt to eliminate the bag as a source of bladder bacteriuria and environmental contamination 675. Although the instillation of hydrogen peroxide into the bag reduced microbial contamination of the bag, this procedure did not reduce the incidence of catheter-associated bacteriuria 675. A chemical irritation resembling pseudomembranous colitis caused by either 3% hydrogen peroxide or 2% glutaraldehyde has been reported 621. An epidemic of pseudomembrane-like enteritis and colitis in seven patients in a gastrointestinal endoscopy unit also has been associated with inadequate rinsing of 3% hydrogen peroxide from the endoscope 676. As with other chemical sterilants, dilution of the hydrogen peroxide must be monitored by regularly testing the minimum effective concentration (i.e., 7.5%-6.0%). Compatibility testing by Olympus America of the 7.5% hydrogen peroxide found both cosmetic changes (e.g., discoloration of black anodized metal finishes) 69 and functional changes with the tested endoscopes (Olympus, written communication, October 15, 1999).

# Iodophors

Overview. Iodine solutions or tinctures long have been used by health professionals, primarily as antiseptics on skin or tissue. Iodophors, on the other hand, have been used both as antiseptics and disinfectants. FDA has not cleared any liquid chemical sterilant or high-level disinfectant with iodophors as the main active ingredient. An iodophor is a combination of iodine and a solubilizing agent or carrier; the resulting complex provides a sustained-release reservoir of iodine and releases small amounts of free iodine in aqueous solution. The best-known and most widely used iodophor is povidone-iodine, a compound of polyvinylpyrrolidone with iodine. This product and other iodophors retain the germicidal efficacy of iodine but, unlike iodine, generally are nonstaining and relatively free of toxicity and irritancy 677,678. Several reports that documented intrinsic microbial contamination of antiseptic formulations of povidone-iodine and poloxamer-iodine 679-681 caused a reappraisal of the chemistry and use of iodophors 682. "Free" iodine (I2) contributes to the bactericidal activity of iodophors, and dilutions of iodophors demonstrate more rapid bactericidal action than does a full-strength povidone-iodine solution.
The reason dilution increases bactericidal activity is unclear, but dilution of povidone-iodine might weaken the iodine linkage to the carrier polymer, with an accompanying increase of free iodine in solution 680. Therefore, iodophors must be diluted according to the manufacturers' directions to achieve antimicrobial activity.

# Mode of Action.

Iodine can penetrate the cell wall of microorganisms quickly, and the lethal effects are believed to result from disruption of protein and nucleic acid structure and synthesis.

Microbicidal Activity. Published reports on the in vitro antimicrobial efficacy of iodophors demonstrate that iodophors are bactericidal, mycobactericidal, and virucidal but can require prolonged contact times to kill certain fungi and bacterial spores 14, 71-73, 290, 683-686. Three brands of povidone-iodine solution have demonstrated more rapid kill (seconds to minutes) of S. aureus and M. chelonae at a 1:100 dilution than did the stock solution 683. The virucidal activity of 75-150 ppm available iodine was demonstrated against seven viruses 72. Other investigators have questioned the efficacy of iodophors against poliovirus in the presence of organic matter 685 and rotavirus SA-11 in distilled or tap water 290. Manufacturers' data demonstrate that commercial iodophors are not sporicidal, but they are tuberculocidal, fungicidal, virucidal, and bactericidal at their recommended use-dilution.

# Uses.

Besides their use as an antiseptic, iodophors have been used for disinfecting blood culture bottles and medical equipment, such as hydrotherapy tanks, thermometers, and endoscopes. Antiseptic iodophors are not suitable for use as hard-surface disinfectants because of concentration differences: iodophors formulated as antiseptics contain less free iodine than do those formulated as disinfectants 376. Iodine or iodine-based antiseptics should not be used on silicone catheters because they can adversely affect the silicone tubing 687.

# Ortho-phthalaldehyde (OPA)

Overview. Ortho-phthalaldehyde is a high-level disinfectant that received FDA clearance in October 1999. It contains 0.55% 1,2-benzenedicarboxaldehyde (OPA). OPA solution is a clear, pale-blue liquid with a pH of 7.5 (Tables 4 and 5).

# Mode of Action.

Preliminary studies on the mode of action of OPA suggest that both OPA and glutaraldehyde interact with amino acids, proteins, and microorganisms. However, OPA is a less potent cross-linking agent. This is compensated for by the lipophilic aromatic nature of OPA, which is likely to assist its uptake through the outer layers of mycobacteria and gram-negative bacteria 688-690. OPA appears to kill spores by blocking the spore germination process 691.

Microbicidal Activity. Studies have demonstrated excellent microbicidal activity in vitro 69,100,271,400,692-703. For example, OPA has superior mycobactericidal activity (5-log10 reduction in 5 minutes) to glutaraldehyde. The mean time required to produce a 6-log10 reduction for M. bovis using 0.21% OPA was 6 minutes, compared with 32 minutes using 1.5% glutaraldehyde 693. OPA showed good activity against the mycobacteria tested, including glutaraldehyde-resistant strains, but 0.5% OPA was not sporicidal with 270 minutes of exposure. Increasing the pH from its unadjusted level (about 6.5) to pH 8 improved the sporicidal activity of OPA 694. The level of biocidal activity was directly related to the temperature.
A greater than 5-log10 reduction of B. atrophaeus spores was observed in 3 hours at 35°C, compared with 24 hours at 20°C. Also, with an exposure time ≤5 minutes, biocidal activity decreased with increasing serum concentration; however, efficacy did not differ when the exposure time was ≥10 minutes 697. In addition, OPA is effective (>5-log10 reduction) against a wide range of microorganisms, including glutaraldehyde-resistant mycobacteria and B. atrophaeus spores 694.

The influence of laboratory adaptation of test strains, such as P. aeruginosa, to 0.55% OPA has been evaluated. Resistant and multiresistant strains increased substantially in susceptibility to OPA after laboratory adaptation (log10 reduction factors increased by 0.54 and 0.91 for resistant and multiresistant strains, respectively) 704. Other studies have found naturally occurring cells of P. aeruginosa were more resistant to a variety of disinfectants than were subcultured cells 705.

Uses. OPA has several potential advantages over glutaraldehyde. It has excellent stability over a wide pH range (pH 3-9), is not a known irritant to the eyes and nasal passages 706, does not require exposure monitoring, has a barely perceptible odor, and requires no activation. OPA, like glutaraldehyde, has excellent material compatibility. A potential disadvantage of OPA is that it stains proteins gray (including unprotected skin) and thus must be handled with caution 69. However, skin staining would indicate improper handling that requires additional training and/or personal protective equipment (e.g., gloves, eye and mouth protection, and fluid-resistant gowns). OPA residues remaining on inadequately water-rinsed transesophageal echo probes can stain the patient's mouth 707. Meticulous cleaning, using the correct OPA exposure time (e.g., 12 minutes), and copious rinsing of the probe with water should eliminate this problem. The results of one study provided a basis for a recommendation that rinsing of instruments disinfected with OPA will require at least 250 mL of water per channel to reduce the chemical residue to a level that will not compromise patient or staff safety (<1 ppm) 708. Personal protective equipment should be worn when contaminated instruments, equipment, and chemicals are handled 400. In addition, equipment must be thoroughly rinsed to prevent discoloration of a patient's skin or mucous membranes.

In April 2004, the manufacturer of OPA disseminated information to users about patients who reportedly experienced an anaphylaxis-like reaction after cystoscopy with a scope that had been reprocessed using OPA. Of approximately 1 million urologic procedures performed using instruments reprocessed with OPA, 24 cases (17 in the United States, 6 in Japan, 1 in the United Kingdom) of anaphylaxis-like reactions were reported after repeated cystoscopy (typically after four to nine treatments). Preventive measures include removal of OPA residues by thorough rinsing and not using OPA for reprocessing urologic instrumentation used to treat patients with a history of bladder cancer (Nevine Erian, personal communication, June 4, 2004; Product Notification, Advanced Sterilization Products, April 23, 2004) 709.

A few OPA clinical studies are available. In a clinical-use study, OPA exposure of 100 endoscopes for 5 minutes resulted in a >5-log10 reduction in bacterial load. Furthermore, OPA was effective over a 14-day use cycle 100.
Manufacturer data show that OPA will last longer in an automatic endoscope reprocessor before reaching its MEC limit (MEC after 82 cycles) than will glutaraldehyde (MEC after 40 cycles) 400. High-pressure liquid chromatography confirmed that OPA levels are maintained above 0.3% for at least 50 cycles 706,710. OPA must be disposed of in accordance with local and state regulations. If OPA disposal through the sanitary sewer system is restricted, glycine (25 grams/gallon) can be used to neutralize the OPA and make it safe for disposal. The high-level disinfectant label claims for OPA solution at 20°C vary worldwide (e.g., 5 minutes in Europe, Asia, and Latin America; 10 minutes in Canada and Australia; and 12 minutes in the United States). These label claims differ because of differences in the test methodology and requirements for licensure. In an automated endoscope reprocessor with an FDA-cleared capability to maintain solution temperatures at 25°C, the contact time for OPA is 5 minutes.

# Peracetic Acid

Overview. Peracetic, or peroxyacetic, acid is characterized by rapid action against all microorganisms. Special advantages of peracetic acid are that its decomposition products (i.e., acetic acid, water, oxygen, hydrogen peroxide) are harmless, that it enhances removal of organic material 711, and that it leaves no residue. It remains effective in the presence of organic matter and is sporicidal even at low temperatures (Tables 4 and 5). Peracetic acid can corrode copper, brass, bronze, plain steel, and galvanized iron, but these effects can be reduced by additives and pH modifications. It is considered unstable, particularly when diluted; for example, a 1% solution loses half its strength through hydrolysis in 6 days, whereas 40% peracetic acid loses 1%-2% of its active ingredients per month 654.

# Mode of Action.

Little is known about the mechanism of action of peracetic acid, but it is believed to function like other oxidizing agents: it denatures proteins, disrupts cell wall permeability, and oxidizes sulfhydryl and sulfur bonds in proteins, enzymes, and other metabolites 654.

Microbicidal Activity. Peracetic acid will inactivate gram-positive and gram-negative bacteria, fungi, and yeasts in ≤5 minutes at <100 ppm. In the presence of organic matter, 200-500 ppm is required. For viruses, the dosage range is wide (12-2,250 ppm); poliovirus in yeast extract was inactivated in 15 minutes with 1,500-2,250 ppm. In one study, 3.5% peracetic acid was ineffective against HAV after a 1-minute exposure using a carrier test 58. Peracetic acid (0.26%) was effective (log10 reduction factor >5) against all test strains of mycobacteria (M. tuberculosis, M. avium-intracellulare, M. chelonae, and M. fortuitum) within 20-30 minutes in the presence or absence of an organic load 607,712. With bacterial spores, 500-10,000 ppm (0.05%-1%) inactivates spores in 15 seconds to 30 minutes using a spore suspension test 654,659,713-715.

Uses. An automated machine using peracetic acid to chemically sterilize medical (e.g., endoscopes, arthroscopes), surgical, and dental instruments is used in the United States 716-718. As previously noted, dental handpieces should be steam sterilized. The sterilant, 35% peracetic acid, is diluted to 0.2% with filtered water at 50°C.
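That final dilution step is simple proportional arithmetic. A minimal sketch (Python) using only the figures just quoted; the helper name is ours:

```python
def dilution_factor(stock_pct: float, use_pct: float) -> float:
    """Fold dilution needed to bring a stock germicide to its use
    concentration (stock concentration / use concentration)."""
    return stock_pct / use_pct

# Figures quoted above: 35% peracetic acid is diluted to 0.2% by the
# automated processor with filtered water at 50°C, a 175-fold dilution.
print(dilution_factor(35.0, 0.2))  # -> 175.0
```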
Simulated-use trials have demonstrated excellent microbicidal activity 111,718-722, and three clinical trials have demonstrated both excellent microbial killing and no clinical failures leading to infection 90,723,724. The high efficacy of the system was demonstrated in a comparison with ethylene oxide: only the peracetic acid system completely killed 6 log10 of M. chelonae, E. faecalis, and B. atrophaeus spores with both an organic and inorganic challenge 722. An investigation that compared the costs, performance, and maintenance of urologic endoscopic equipment processed by high-level disinfection (with glutaraldehyde) with those of the peracetic acid system reported no clinical differences between the two systems. However, the use of this system led to higher costs than high-level disinfection, including costs for processing ($6.11 vs. $0.45 per cycle), purchasing and training ($24,845 vs. $16), installation ($5,800 vs. $0), and endoscope repairs ($6,037 vs. $445) 90. Furthermore, three clusters of infection involving the peracetic acid automated endoscope reprocessor were linked to inadequately processed bronchoscopes when inappropriate channel connectors were used with the system 725. These clusters highlight the importance of training, proper model-specific endoscope connector systems, and quality-control procedures to ensure compliance with endoscope manufacturer recommendations and professional organization guidelines. An alternative high-level disinfectant available in the United Kingdom contains 0.35% peracetic acid. Although this product is rapidly effective against a broad range of microorganisms 466,726,727, it tarnishes the metal of endoscopes and is unstable, resulting in only a 24-hour use life 727.

# Peracetic Acid and Hydrogen Peroxide

Overview. Two chemical sterilants are available that contain peracetic acid plus hydrogen peroxide: 0.08% peracetic acid plus 1.0% hydrogen peroxide (no longer marketed) and 0.23% peracetic acid plus 7.35% hydrogen peroxide (Tables 4 and 5).

Microbicidal Activity. The bactericidal properties of peracetic acid and hydrogen peroxide have been demonstrated 728. Manufacturer data demonstrated this combination of peracetic acid and hydrogen peroxide inactivated all microorganisms except bacterial spores within 20 minutes. The 0.08% peracetic acid plus 1.0% hydrogen peroxide product effectively inactivated glutaraldehyde-resistant mycobacteria 729.

Uses. The combination of peracetic acid and hydrogen peroxide has been used for disinfecting hemodialyzers 730. The percentage of dialysis centers using a peracetic acid-hydrogen peroxide-based disinfectant for reprocessing dialyzers increased from 5% in 1983 to 56% in 1997 249. Olympus America does not endorse use of 0.08% peracetic acid plus 1.0% hydrogen peroxide on any Olympus endoscope because of cosmetic and functional damage and will not assume liability for chemical damage resulting from use of this product (Olympus America, personal communication, April 15, 1998). This product is not currently available. FDA has cleared a newer chemical sterilant with 0.23% peracetic acid and 7.35% hydrogen peroxide (Tables 4 and 5).
After testing the 7.35% hydrogen peroxide and 0.23% peracetic acid product, Olympus America concluded it was not compatible with the company's flexible gastrointestinal endoscopes; this conclusion was based on immersion studies in which the test insertion tubes failed because of swelling and loosening of the black polymer layer of the tube (Olympus America, personal communication, September 13, 2000).

# Phenolics

Overview. Phenol has occupied a prominent place in the field of hospital disinfection since its initial use as a germicide by Lister in his pioneering work on antiseptic surgery. In the past 30 years, however, work has concentrated on the numerous phenol derivatives (phenolics) and their antimicrobial properties. Phenol derivatives originate when a functional group (e.g., alkyl, phenyl, benzyl, halogen) replaces one of the hydrogen atoms on the aromatic ring. Two phenol derivatives commonly found as constituents of hospital disinfectants are ortho-phenylphenol and ortho-benzyl-para-chlorophenol. The antimicrobial properties of these compounds and many other phenol derivatives are much improved over those of the parent chemical. Phenolics are absorbed by porous materials, and the residual disinfectant can irritate tissue. In 1970, depigmentation of the skin was reported to be caused by phenolic germicidal detergents containing para-tertiary butylphenol and para-tertiary amylphenol 731.

# Mode of Action.

In high concentrations, phenol acts as a gross protoplasmic poison, penetrating and disrupting the cell wall and precipitating the cell proteins. Low concentrations of phenol and higher-molecular-weight phenol derivatives cause bacterial death by inactivation of essential enzyme systems and leakage of essential metabolites from the cell wall 732.

Microbicidal Activity. Published reports on the antimicrobial efficacy of commonly used phenolics showed they were bactericidal, fungicidal, virucidal, and tuberculocidal 14,61,71,73,227,416,573,732-738. One study demonstrated little or no virucidal effect of a phenolic against coxsackie B4, echovirus 11, and poliovirus 1 736. Similarly, 12% ortho-phenylphenol failed to inactivate any of the three hydrophilic viruses after a 10-minute exposure time, although 5% phenol was lethal for these viruses 72. A 0.5% dilution of a phenolic (2.8% ortho-phenylphenol and 2.7% ortho-benzyl-para-chlorophenol) inactivated HIV 227, and a 2% solution of a phenolic (15% ortho-phenylphenol and 6.3% para-tertiary-amylphenol) inactivated all but one of 11 fungi tested 71. Manufacturers' data using the standardized AOAC methods demonstrate that commercial phenolics are not sporicidal but are tuberculocidal, fungicidal, virucidal, and bactericidal at their recommended use-dilution. Attempts to substantiate the bactericidal label claims of phenolics using the AOAC Use-Dilution Method occasionally have failed 416,737. However, results from these same studies have varied dramatically among laboratories testing identical products.

Uses. Many phenolic germicides are EPA-registered as disinfectants for use on environmental surfaces (e.g., bedside tables, bedrails, and laboratory surfaces) and noncritical medical devices. Phenolics are not FDA-cleared as high-level disinfectants for use with semicritical items but could be used to preclean or decontaminate critical and semicritical devices before terminal sterilization or high-level disinfection.
The use of phenolics in nurseries has been questioned because of hyperbilirubinemia in infants placed in bassinets where phenolic detergents were used 739. In addition, bilirubin levels were reported to increase in phenolic-exposed infants, compared with nonphenolic-exposed infants, when the phenolic was prepared according to the manufacturers' recommended dilution 740. If phenolics are used to clean nursery floors, they must be diluted as recommended on the product label. Phenolics (and other disinfectants) should not be used to clean infant bassinets and incubators while occupied. If phenolics are used to terminally clean infant bassinets and incubators, the surfaces should be rinsed thoroughly with water and dried before reuse 17.

# Quaternary Ammonium Compounds

Overview. The quaternary ammonium compounds are widely used as disinfectants. Health-care-associated infections have been reported from contaminated quaternary ammonium compounds used to disinfect patient-care supplies or equipment, such as cystoscopes or cardiac catheters 741,742. The quaternaries are good cleaning agents, but high water hardness 743 and materials such as cotton and gauze pads can make them less microbicidal, because hard water forms insoluble precipitates and cotton and gauze pads absorb the active ingredients. One study showed a significant decline (~40%-50% lower at 1 hour) in the concentration of quaternaries released when cotton rags or cellulose-based wipers were used in the open-bucket system, compared with nonwoven spunlace wipers in the closed-bucket system 744. As with several other disinfectants (e.g., phenolics, iodophors), gram-negative bacteria can survive or grow in them 404.

Chemically, the quaternaries are organically substituted ammonium compounds in which the nitrogen atom has a valence of 5, four of the substituent radicals (R1-R4) are alkyl or heterocyclic radicals of a given size or chain length, and the fifth (X-) is a halide, sulfate, or similar radical 745. Each compound exhibits its own antimicrobial characteristics, hence the search for one compound with outstanding antimicrobial properties. Some of the chemical names of quaternary ammonium compounds used in healthcare are alkyl dimethyl benzyl ammonium chloride, alkyl didecyl dimethyl ammonium chloride, and dialkyl dimethyl ammonium chloride. The newer quaternary ammonium compounds (i.e., fourth generation), referred to as twin-chain or dialkyl quaternaries (e.g., didecyl dimethyl ammonium bromide and dioctyl dimethyl ammonium bromide), purportedly remain active in hard water and are tolerant of anionic residues 746. A few case reports have documented occupational asthma as a result of exposure to benzalkonium chloride 747.

# Mode of Action.

The bactericidal action of the quaternaries has been attributed to the inactivation of energy-producing enzymes, denaturation of essential cell proteins, and disruption of the cell membrane 746. Evidence exists that supports these and other possibilities 745,748.

Microbicidal Activity. Results from manufacturers' data sheets and from published scientific literature indicate that the quaternaries sold as hospital disinfectants are generally fungicidal, bactericidal, and virucidal against lipophilic (enveloped) viruses; they are not sporicidal and generally not tuberculocidal or virucidal against hydrophilic (nonenveloped) viruses 14, 54-56, 58, 59, 61, 71, 73, 186, 297, 748, 749.
The poor mycobactericidal activities of quaternary ammonium compounds have been demonstrated 55,73. Quaternary ammonium compounds (as well as 70% isopropyl alcohol, a phenolic, and a chlorine-containing wipe [80 ppm]) effectively (>95%) remove and/or inactivate contaminants (i.e., multidrug-resistant S. aureus, vancomycin-resistant Enterococcus, P. aeruginosa) from computer keyboards with a 5-second application time. No functional damage or cosmetic changes occurred to the computer keyboards after 300 applications of the disinfectants 45. Attempts to reproduce the manufacturers' bactericidal and tuberculocidal claims using the AOAC tests with a limited number of quaternary ammonium compounds occasionally have failed 73,416,737. However, test results have varied extensively among laboratories testing identical products 416,737.

Uses. The quaternaries commonly are used in ordinary environmental sanitation of noncritical surfaces, such as floors, furniture, and walls. EPA-registered quaternary ammonium compounds are appropriate to use for disinfecting medical equipment that contacts intact skin (e.g., blood pressure cuffs).

# Miscellaneous Inactivating Agents

Other Germicides. Several compounds have antimicrobial activity but for various reasons have not been incorporated into the armamentarium of health-care disinfectants. These include mercurials, sodium hydroxide, β-propiolactone, chlorhexidine gluconate, cetrimide-chlorhexidine, glycols (triethylene and propylene), and the Tego disinfectants. Two authoritative references examine these agents in detail 16,412. A peroxygen-containing formulation had marked bactericidal action when used as a 1% weight/volume solution and virucidal activity at 3% 49, but did not have mycobactericidal activity at concentrations of 2.3% and 4% and exposure times ranging from 30 to 120 minutes 750. It also required 20 hours to kill B. atrophaeus spores 751. A powder-based peroxygen compound for disinfecting contaminated spills was strongly and rapidly bactericidal 752. In preliminary studies, nanoemulsions (composed of detergents and lipids in water) showed activity against vegetative bacteria, enveloped viruses, and Candida, and thus represent potential topical biocidal agents 753-755. New disinfectants that require further evaluation include glucoprotamin 756, tertiary amines 703, and a light-activated antimicrobial coating 757. Several other disinfection technologies might have potential applications in the healthcare setting 758.

# Metals as Microbicides

Comprehensive reviews of antisepsis 759, disinfection 421, and anti-infective chemotherapy 760 barely mention the antimicrobial activity of heavy metals 761,762. Nevertheless, the anti-infective activity of some heavy metals has been known since antiquity. Heavy metals such as silver have been used for prophylaxis of conjunctivitis of the newborn, topical therapy for burn wounds, and bonding to indwelling catheters, and the use of heavy metals as antiseptics or disinfectants is again being explored 763. Inactivation of bacteria on stainless steel surfaces by zeolite ceramic coatings containing silver and zinc ions has also been demonstrated 764,765. Metals such as silver, iron, and copper could be used for environmental control, for disinfection of water or reusable medical devices, or incorporated into medical devices (e.g., intravascular catheters) 400,761-763,766-770.
A comparative evaluation of six disinfectant formulations for residual antimicrobial activity showed that only the silver disinfectant demonstrated significant residual activity against S. aureus and P. aeruginosa 763. Preliminary data suggest metals are effective against a wide variety of microorganisms. Clinical uses of other heavy metals include copper-8-quinolinolate as a fungicide against Aspergillus, copper-silver ionization for Legionella disinfection 771-774, and organic mercurials as an antiseptic (e.g., mercurochrome) and preservative/disinfectant (e.g., thimerosal [currently being removed from vaccines]) in pharmaceuticals and cosmetics 762.

# Ultraviolet Radiation (UV)

The wavelength of UV radiation ranges from 328 nm to 210 nm (3280 Å to 2100 Å). Its maximum bactericidal effect occurs at 240-280 nm. Mercury vapor lamps emit more than 90% of their radiation at 253.7 nm, which is near the maximum microbicidal activity 775. Inactivation of microorganisms results from destruction of nucleic acid through induction of thymine dimers. UV radiation has been employed in the disinfection of drinking water 776, air 775, titanium implants 777, and contact lenses 778. Bacteria and viruses are more easily killed by UV light than are bacterial spores 775. UV radiation has several potential applications, but unfortunately its germicidal effectiveness and use are influenced by organic matter; wavelength; type of suspension; temperature; type of microorganism; and UV intensity, which is affected by distance and by dirty lamp tubes 779. The application of UV radiation in the health-care environment (i.e., operating rooms, isolation rooms, and biologic safety cabinets) is limited to destruction of airborne organisms or inactivation of microorganisms on surfaces. The effect of UV radiation on postoperative wound infections was investigated in a double-blind, randomized study in five university medical centers. After following 14,854 patients over a 2-year period, the investigators reported the overall wound infection rate was unaffected by UV radiation, although postoperative infection in the "refined clean" surgical procedures decreased significantly (from 3.8% to 2.9%) 780. No data support the use of UV lamps in isolation rooms, and this practice has caused at least one epidemic of UV-induced skin erythema and keratoconjunctivitis in hospital patients and visitors 781.

# Pasteurization

Pasteurization is not a sterilization process; its purpose is to destroy all pathogenic microorganisms. However, pasteurization does not destroy bacterial spores. The time-temperature relation for hot-water pasteurization is generally ~70°C (158°F) for 30 minutes. The water temperature and time should be monitored as part of a quality-assurance program 782. Pasteurization of respiratory therapy 783,784 and anesthesia equipment 785 is a recognized alternative to chemical disinfection. The efficacy of this process has been tested using an inoculum that the authors believed might simulate contamination by an infected patient. Use of a large inoculum (10^7) of P. aeruginosa or Acinetobacter calcoaceticus in sets of respiratory tubing before processing demonstrated that machine-assisted chemical processing was more efficient than machine-assisted pasteurization, with disinfection failure rates of 6% and 83%, respectively 783.
Other investigators found hot-water disinfection to be effective (inactivation factor >5 log10) against multiple bacteria, including multidrug-resistant bacteria, for disinfecting reusable anesthesia or respiratory therapy equipment 784-786.

# Flushing- and Washer-Disinfectors

Flushing- and washer-disinfectors are automated, closed equipment that clean and disinfect objects ranging from bedpans and washbowls to surgical instruments and anesthesia tubes. Items such as bedpans and urinals can be cleaned and disinfected in flushing-disinfectors, which have a short cycle of a few minutes. They clean by flushing with warm water, possibly with a detergent, and then disinfect by flushing the items with hot water or with steam. Because this machine empties, cleans, and disinfects, manual cleaning is eliminated, fewer disposable items are needed, and fewer chemical germicides are used. A microbiologic evaluation of one washer/disinfector demonstrated complete inactivation of suspensions of E. faecalis or poliovirus 787. Other studies have shown that strains of Enterococcus faecium can survive the British Standard for heat disinfection of bedpans (80°C for 1 minute). The significance of this finding with reference to the potential for enterococci to survive and disseminate in the health-care environment is debatable 788-790. These machines are available and used in many European countries.

Surgical instruments and anesthesia equipment are more difficult to clean. They are run in washer-disinfectors on a longer cycle of approximately 20-30 minutes with a detergent. These machines also disinfect by hot water at approximately 90°C 791.

# The Regulatory Framework for Disinfectants and Sterilants

Before using the guidance provided in this document, health-care workers should be aware of the federal laws and regulations that govern the sale, distribution, and use of disinfectants and sterilants. In particular, health-care workers need to know what requirements pertain to them when they apply these products. Finally, they should understand the relative roles of EPA, FDA, and CDC so the context for the guidance provided in this document is clear.

# EPA and FDA

In the United States, chemical germicides formulated as sanitizers, disinfectants, or sterilants are regulated in interstate commerce by the Antimicrobials Division, Office of Pesticide Programs, EPA, under the authority of the Federal Insecticide, Fungicide, and Rodenticide Act (FIFRA) of 1947, as amended 792. Under FIFRA, any substance or mixture of substances intended to prevent, destroy, repel, or mitigate any pest (including microorganisms but excluding those in or on living humans or animals) must be registered before sale or distribution. To obtain a registration, a manufacturer must submit specific data about the safety and effectiveness of each product. For example, EPA requires manufacturers of sanitizers, disinfectants, or chemical sterilants to test formulations by using accepted methods for microbicidal activity, stability, and toxicity to animals and humans. The manufacturers submit these data to EPA along with proposed labeling. If EPA concludes the product can be used without causing "unreasonable adverse effects," then the product and its labeling are registered, and the manufacturer can sell and distribute the product in the United States. FIFRA also requires users of products to follow explicitly the labeling directions on each product.
The following standard statement appears on all labels under the "Directions for Use" heading: "It is a violation of federal law to use this product in a manner inconsistent with its labeling." This statement means a health-care worker must follow the safety precautions and use directions on the labeling of each registered product. Failure to follow the specified use-dilution, contact time, method of application, or any other condition of use is considered a misuse of the product and potentially subject to enforcement action under FIFRA.

In general, EPA regulates disinfectants and sterilants used on environmental surfaces, and not those used on critical or semicritical medical devices; the latter are regulated by FDA. In June 1993, FDA and EPA issued a "Memorandum of Understanding" that divided responsibility for review and surveillance of chemical germicides between the two agencies. Under the agreement, FDA regulates liquid chemical sterilants used on critical and semicritical devices, and EPA regulates disinfectants used on noncritical surfaces and gaseous sterilants 793. In 1996, Congress passed the Food Quality Protection Act (FQPA). This act amended FIFRA in regard to several types of products regulated by both EPA and FDA. One provision of FQPA removed regulation of liquid chemical sterilants used on critical and semicritical medical devices from EPA's jurisdiction; it now rests solely with FDA 792,794. EPA continues to register nonmedical chemical sterilants. FDA and EPA have considered the impact of FQPA, and in January 2000, FDA published its final guidance document on product submissions and labeling. Antiseptics are considered antimicrobial drugs used on living tissue and thus are regulated by FDA under the Food, Drug and Cosmetic Act. FDA regulates liquid chemical sterilants and high-level disinfectants intended to process critical and semicritical devices. FDA has published recommendations on the types of test methods that manufacturers should submit to FDA for 510[k] clearance of such agents.

# CDC

At CDC, the mission of the Coordinating Center for Infectious Diseases is to guide the public on how to prevent and respond to infectious diseases both in health-care settings and at home. With respect to disinfectants and sterilants, part of CDC's role is to inform the public (in this case, healthcare personnel) of current scientific evidence pertaining to these products, to comment about their safety and efficacy, and to recommend which chemicals might be most appropriate or effective for specific microorganisms and settings.

# Test Methods

The methods EPA has used for registration are standardized by AOAC International; however, a survey of the scientific literature reveals a number of problems with these tests, reported during 1987-1990 58,76,80,428,736,737,795-800, that cause them to be neither accurate nor reproducible 416,737. As part of their regulatory authority, EPA and FDA support development and validation of methods for assessing disinfection claims 801-803. For example, EPA has supported the work of Dr. Syed Sattar and coworkers, who have developed a two-tier quantitative carrier test to assess sporicidal, mycobactericidal, bactericidal, fungicidal, virucidal, and protozoacidal activity of chemical germicides 701,803. EPA is accepting label claims against hepatitis B virus (HBV) using a surrogate organism, the duck HBV, to quantify disinfectant activity 124,804.
EPA also is accepting labeling claims against hepatitis C virus using the bovine viral diarrhea virus as a surrogate. For nearly 30 years, EPA also performed intramural preregistration and postregistration efficacy testing of some chemical disinfectants in its own laboratories. In 1982, this testing was stopped, reportedly for budgetary reasons. At that time, manufacturers did not need to have microbiologic activity claims verified by EPA or an independent testing laboratory when registering a disinfectant or chemical sterilant 805. This change coincided with an increase in the frequency of contaminated germicides and of infections secondary to their use 404. Investigations demonstrating that interlaboratory reproducibility of test results was poor and that manufacturers' label claims were not verifiable 416,737, along with symposia sponsored by the American Society for Microbiology 800, heightened awareness of these problems and reconfirmed the need to improve the AOAC methods and reinstate a microbiologic activity verification program. A General Accounting Office report entitled Disinfectants: EPA Lacks Assurance They Work 806 seemed to provide the necessary impetus for EPA to initiate corrective measures, including cooperative agreements to improve the AOAC methods and independent verification testing for all products labeled as sporicidal and disinfectants labeled as tuberculocidal. For example, of 26 sterilant products tested by EPA, 15 were canceled because of product failure. A list of products registered with EPA and labeled for use as sterilants or tuberculocides or against HIV and/or HBV is available through EPA's website [This link is no longer active: http://www.epa.gov/oppad001/chemregindex.htm. The current version of this document may differ from original version: Selected EPA-registered Disinfectants (https://www.epa.gov/pesticideregistration/selected-epa-registered-disinfectants).]. Organizations (e.g., Organization for Economic Cooperation and Development) are working to standardize requirements for germicide testing and registration.

# Neutralization of Germicides

One of the difficulties associated with evaluating the bactericidal activity of disinfectants is prevention of bacteriostasis from disinfectant residues carried over into the subculture media. Likewise, small amounts of disinfectants on environmental surfaces can make an accurate bacterial count difficult to obtain when sampling the health-care environment as part of an epidemiologic or research investigation. These problems may be overcome by employing neutralizers that inactivate residual disinfectants 807-809. Two commonly used neutralizing media for chemical disinfectants are Letheen Media and D/E Neutralizing Media. The former contains lecithin to neutralize quaternaries and polysorbate 80 (Tween 80) to neutralize phenolics, hexachlorophene, formalin, and, with lecithin, ethanol. The D/E Neutralizing Media will neutralize a broad spectrum of antiseptic and disinfectant chemicals, including quaternary ammonium compounds, phenols, iodine and chlorine compounds, mercurials, formaldehyde, and glutaraldehyde 810. A review of neutralizers used in germicide testing has been published 808.

# Sterilization

Most medical and surgical devices used in healthcare facilities are made of materials that are heat stable and therefore undergo heat, primarily steam, sterilization.
However, since 1950, there has been an increase in medical devices and instruments made of materials (e.g., plastics) that require low-temperature sterilization. Ethylene oxide gas has been used since the 1950s for heat- and moisture-sensitive medical devices. Within the past 15 years, a number of new, low-temperature sterilization systems (e.g., hydrogen peroxide gas plasma, peracetic acid immersion, ozone) have been developed and are being used to sterilize medical devices. This section reviews sterilization technologies used in healthcare and makes recommendations for their optimum performance in the processing of medical devices 1,18,811-820.

Sterilization destroys all microorganisms on the surface of an article or in a fluid to prevent disease transmission associated with the use of that item. Although the use of inadequately sterilized critical items represents a high risk of transmitting pathogens, documented transmission of pathogens associated with an inadequately sterilized critical item is exceedingly rare 821,822. This is likely due to the wide margin of safety associated with the sterilization processes used in healthcare facilities. The concept of what constitutes "sterile" is measured as a probability of sterility for each item to be sterilized. This probability is commonly referred to as the sterility assurance level (SAL) of the product and is defined as the probability of a single viable microorganism occurring on a product after sterilization. SAL is normally expressed as 10^-n. For example, if the probability of a spore surviving were one in one million, the SAL would be 10^-6 823,824. In short, a SAL is an estimate of the lethality of the entire sterilization process and is a conservative calculation; a worked example of this arithmetic follows the steam sterilization discussion below. Dual SALs (e.g., 10^-3 SAL for blood culture tubes and drainage bags; 10^-6 SAL for scalpels and implants) have been used in the United States for many years, and the choice of a 10^-6 SAL was strictly arbitrary and not associated with any adverse outcomes (e.g., patient infections) 823.

Medical devices that have contact with sterile body tissues or fluids are considered critical items. These items should be sterile when used because any microbial contamination could result in disease transmission. Such items include surgical instruments, biopsy forceps, and implanted medical devices. If these items are heat resistant, the recommended sterilization process is steam sterilization, because it has the largest margin of safety due to its reliability, consistency, and lethality. However, reprocessing heat- and moisture-sensitive items requires use of a low-temperature sterilization technology (e.g., ethylene oxide, hydrogen peroxide gas plasma, peracetic acid) 825. A summary of the advantages and disadvantages of commonly used sterilization technologies is presented in Table 6.

# Steam Sterilization

Overview. Of all the methods available for sterilization, moist heat in the form of saturated steam under pressure is the most widely used and the most dependable. Steam sterilization is nontoxic and inexpensive 826, is rapidly microbicidal and sporicidal, and rapidly heats and penetrates fabrics (Table 6) 827. Like all sterilization processes, steam sterilization has some deleterious effects on some materials, including corrosion and combustion of lubricants associated with dental handpieces 212; reduction in ability to transmit light associated with laryngoscopes 828; and increased hardening time (5.6-fold) with plaster casts 829.
The basic principle of steam sterilization, as accomplished in an autoclave, is to expose each item to direct steam contact at the required temperature and pressure for the specified time. Thus, there are four parameters of steam sterilization: steam, pressure, temperature, and time. The ideal steam for sterilization is dry saturated steam and entrained water (dryness fraction ≥97%) 813,819. Pressure serves as a means to obtain the high temperatures necessary to quickly kill microorganisms. Specific temperatures must be obtained to ensure the microbicidal activity. The two common steam-sterilizing temperatures are 121°C (250°F) and 132°C (270°F). These temperatures (and other high temperatures) 830 must be maintained for a minimal time to kill microorganisms. Recognized minimum exposure periods for sterilization of wrapped healthcare supplies are 30 minutes at 121°C (250°F) in a gravity displacement sterilizer or 4 minutes at 132°C (270°F) in a prevacuum sterilizer (Table 7). At constant temperatures, sterilization times vary depending on the type of item (e.g., metal versus rubber, plastic, items with lumens), whether the item is wrapped or unwrapped, and the sterilizer type.

The two basic types of steam sterilizers (autoclaves) are the gravity displacement autoclave and the high-speed prevacuum sterilizer. In the former, steam is admitted at the top or the sides of the sterilizing chamber and, because the steam is lighter than air, forces air out the bottom of the chamber through the drain vent. The gravity displacement autoclaves are primarily used to process laboratory media, water, pharmaceutical products, regulated medical waste, and nonporous articles whose surfaces have direct steam contact. For gravity displacement sterilizers, the penetration time into porous items is prolonged because of incomplete air elimination. This point is illustrated by the decontamination of 10 lbs of microbiological waste, which requires at least 45 minutes at 121°C because the entrapped air remaining in a load of waste greatly retards steam permeation and heating efficiency 831,832. The high-speed prevacuum sterilizers are similar to the gravity displacement sterilizers except they are fitted with a vacuum pump (or ejector) to ensure air removal from the sterilizing chamber and load before the steam is admitted. The advantage of using a vacuum pump is that there is nearly instantaneous steam penetration even into porous loads.

The Bowie-Dick test is used to detect air leaks and inadequate air removal and consists of folded 100% cotton surgical towels that are clean and preconditioned. A commercially available Bowie-Dick-type test sheet should be placed in the center of the pack. The test pack should be placed horizontally in the front, bottom section of the sterilizer rack, near the door and over the drain, in an otherwise empty chamber and run at 134°C for 3.5 minutes 813,819. The test is used each day the vacuum-type steam sterilizer is used, before the first processed load. Air that is not removed from the chamber will interfere with steam contact. Smaller disposable test packs (or process challenge devices) have been devised to replace the stack of folded surgical towels for testing the efficacy of the vacuum system in a prevacuum sterilizer 833. These devices are "designed to simulate product to be sterilized and to constitute a defined challenge to the sterilization process" 819,834. They should be representative of the load and simulate the greatest challenge to the load 835.
Sterilizer vacuum performance is acceptable if the sheet inside the test pack shows a uniform color change. Entrapped air will cause a spot to appear on the test sheet, due to the inability of the steam to reach the chemical indicator. If the sterilizer fails the Bowie-Dick test, do not use the sterilizer until it is inspected by the sterilizer maintenance personnel and passes the Bowie-Dick test 813,819,836.

Another design in steam sterilization is a steam flush-pressure pulsing process, which removes air rapidly by repeatedly alternating a steam flush and a pressure pulse above atmospheric pressure. Air is rapidly removed from the load as with the prevacuum sterilizer, but air leaks do not affect this process because the steam in the sterilizing chamber is always above atmospheric pressure. Typical sterilization temperatures and times are 132°C to 135°C with 3 to 4 minutes exposure time for porous loads and instruments 827,837.

Like other sterilization systems, the steam cycle is monitored by mechanical, chemical, and biological monitors. Steam sterilizers usually are monitored using a printout (or graphically) by measuring temperature, the time at the temperature, and pressure. Typically, chemical indicators are affixed to the outside of each pack and incorporated into the pack to monitor the temperature or time and temperature. The effectiveness of steam sterilization is monitored with a biological indicator containing spores of Geobacillus stearothermophilus (formerly Bacillus stearothermophilus). Positive spore test results are a relatively rare event 838 and can be attributed to operator error, inadequate steam delivery 839, or equipment malfunction.

Portable (table-top) steam sterilizers are used in outpatient, dental, and rural clinics 840. These sterilizers are designed for small instruments, such as hypodermic syringes and needles and dental instruments. The ability of the sterilizer to reach the physical parameters necessary to achieve sterilization should be monitored by mechanical, chemical, and biological indicators.

Microbicidal Activity. The oldest and most recognized agent for inactivation of microorganisms is heat. D-values (the time to reduce the surviving population by 90%, or 1 log10) allow a direct comparison of the heat resistance of microorganisms. Because a D-value can be determined at various temperatures, a subscript is used to designate the exposure temperature (i.e., D121C). D121C-values for Geobacillus stearothermophilus used to monitor the steam sterilization process range from 1 to 2 minutes. Heat-resistant nonspore-forming bacteria, yeasts, and fungi have such low D121C-values that they cannot be experimentally measured 841.

Mode of Action. Moist heat destroys microorganisms by the irreversible coagulation and denaturation of enzymes and structural proteins. In support of this fact, it has been found that the presence of moisture significantly affects the coagulation temperature of proteins and the temperature at which microorganisms are destroyed.

Uses. Steam sterilization should be used whenever possible on all critical and semicritical items that are heat and moisture resistant (e.g., steam-sterilizable respiratory therapy and anesthesia equipment), even when not essential to prevent pathogen transmission. Steam sterilizers also are used in healthcare facilities to decontaminate microbiological waste and sharps containers 831,832,842, but additional exposure time is required in the gravity displacement sterilizer for these items.
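The relationship among the D-value, exposure time, and the resulting sterility assurance level can be worked through with a short calculation. The following sketch assumes simple log-linear inactivation kinetics and uses illustrative values drawn from the ranges cited above (a D121C-value of 1.5 minutes for G. stearothermophilus and a starting bioburden of 10^6 spores); it is a simplified illustration, not a validated cycle calculation.

```python
# Illustrative only: assumes log-linear (first-order) inactivation kinetics.
# The values below are representative, not validated cycle parameters.

d_value_min = 1.5          # D121C for G. stearothermophilus, minutes (typical range: 1-2)
initial_bioburden = 1e6    # starting spore population on the item (N0)
exposure_min = 30.0        # wrapped supplies at 121 deg C, gravity displacement cycle

# Each D-value of exposure reduces the surviving population by 90% (1 log10).
log_reductions = exposure_min / d_value_min                  # 30 / 1.5 = 20 log10
expected_survivors = initial_bioburden * 10 ** (-log_reductions)

print(f"log10 reductions delivered: {log_reductions:.0f}")
print(f"expected probability of a surviving spore: {expected_survivors:.0e}")
# 10^6 x 10^-20 = 10^-14, far below the 10^-6 SAL required for critical items.
```

The roughly 20 log10 reductions delivered by a 30-minute gravity displacement cycle, measured against a required SAL of 10^-6, illustrate the wide margin of safety noted earlier for steam sterilization.

# Flash Sterilization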
"Flash" steam sterilization was originally defined by Underwood and Perkins as sterilization of an unwrapped object at 132°C for 3 minutes at 27-28 lbs. of pressure in a gravity displacement sterilizer 843 . Currently, the time required for flash sterilization depends on the type of sterilizer and the type of item (i.e., porous vs non-porous items)(see Table 8). Although the wrapped method of sterilization is preferred for the reasons listed below, correctly performed flash sterilization is an effective process for the sterilization of critical medical devices 844,845 . Flash sterilization is a modification of conventional steam sterilization (either gravity, prevacuum, or steam-flush pressure-pulse) in which the flashed item is placed in an open tray or is placed in a specially designed, covered, rigid container to allow for rapid penetration of steam. Historically, it is not recommended as a routine sterilization method because of the lack of timely biological indicators to monitor performance, absence of protective packaging following sterilization, possibility for contamination of processed items during transportation to the operating rooms, and the sterilization cycle parameters (i.e., time, temperature, pressure) are minimal. To address some of these concerns, many healthcare facilities have done the following: placed equipment for flash sterilization in close proximity to operating rooms to facilitate aseptic delivery to the point of use (usually the sterile field in an ongoing surgical procedure); extended the exposure time to ensure lethality comparable to sterilized wrapped items (e.g., 4 minutes at 132°C) 846,847 ; used biological indicators that provide results in 1 hour for flash-sterilized items 846,847 ; and used protective packaging that permits steam penetration 812, 817-819, 845, 848 . Further, some rigid, reusable sterilization container systems have been designed and validated by the container manufacturer for use with flash cycles. When sterile items are open to air, they will eventually become contaminated. Thus, the longer a sterile item is exposed to air, the greater the number of microorganisms that will settle on it. Sterilization cycle parameters for flash sterilization are shown in Table 8. A few adverse events have been associated with flash sterilization. When evaluating an increased incidence of neurosurgical infections, the investigators noted that surgical instruments were flash sterilized between cases and 2 of 3 craniotomy infections involved plate implants that were flash sterilized 849 . A report of two patients who received burns during surgery from instruments that had been flash sterilized reinforced the need to develop policies and educate staff to prevent the use of instruments hot enough to cause clinical burns 850 . Staff should use precautions to prevent burns with potentially hot instruments (e.g., transport tray using heat-protective gloves). Patient burns may be prevented by either air-cooling the instruments or immersion in sterile liquid (e.g., saline). Uses. Flash sterilization is considered acceptable for processing cleaned patient-care items that cannot be packaged, sterilized, and stored before use. It also is used when there is insufficient time to sterilize an item by the preferred package method. Flash sterilization should not be used for reasons of convenience, as an alternative to purchasing additional instrument sets, or to save time 817 . 
Because of the potential for serious infections, flash sterilization is not recommended for implantable devices (i.e., devices placed into a surgically or naturally formed cavity of the human body); however, flash sterilization may be unavoidable for some devices (e.g., orthopedic screws, plates). If flash sterilization of an implantable device is unavoidable, recordkeeping (i.e., load identification, patient's name/hospital identifier, and biological indicator result) is essential for epidemiological tracking (e.g., of surgical site infection, or tracing results of biological indicators to patients who received the item to document sterility) and for an assessment of the reliability of the sterilization process (e.g., evaluation of biological monitoring records and sterilization maintenance records noting preventive maintenance and repairs with dates).

# Low-Temperature Sterilization Technologies

Ethylene oxide (ETO) has been widely used as a low-temperature sterilant since the 1950s. It has been the most commonly used process for sterilizing temperature- and moisture-sensitive medical devices and supplies in healthcare institutions in the United States. Two types of ETO sterilizers are available, mixed gas and 100% ETO. Until 1995, ethylene oxide sterilizers combined ETO with a chlorofluorocarbon (CFC) stabilizing agent, most commonly in a ratio of 12% ETO mixed with 88% CFC (referred to as 12/88 ETO).

For several reasons, healthcare personnel have been exploring the use of new low-temperature sterilization technologies 825,851. First, CFCs were phased out in December 1995 under provisions of the Clean Air Act 852. CFCs were classified as a Class I substance under the Clean Air Act because of scientific evidence linking them to destruction of the earth's ozone layer. Second, some states (e.g., California, New York, Michigan) require the use of ETO abatement technology to reduce the amount of ETO released into ambient air by 90 to 99.9%, depending on the state. Third, OSHA regulates the acceptable vapor levels of ETO (i.e., 1 ppm averaged over 8 hours) because of concerns that ETO exposure represents an occupational hazard 318. These constraints have led to the development of alternative technologies for low-temperature sterilization in the healthcare setting.

Alternative technologies to ETO with chlorofluorocarbon that are currently available and cleared by the FDA for medical equipment include 100% ETO; ETO with a different stabilizing gas, such as carbon dioxide or hydrochlorofluorocarbons (HCFC); immersion in peracetic acid; hydrogen peroxide gas plasma; and ozone. Technologies under development for use in healthcare facilities, but not cleared by the FDA, include vaporized hydrogen peroxide, vapor-phase peracetic acid, gaseous chlorine dioxide, ionizing radiation, and pulsed light 400,758,853. However, there is no guarantee that these new sterilization technologies will receive FDA clearance for use in healthcare facilities. These new technologies should be compared against the characteristics of an ideal low-temperature (<60°C) sterilant (Table 9) 851. While it is apparent that all technologies will have limitations (Table 9), understanding the limitations imposed by restrictive device designs (e.g., long, narrow lumens) is critical for proper application of new sterilization technology 854. For example, the development of increasingly small and complex endoscopes presents a difficult challenge for current sterilization processes.
This occurs because microorganisms must be in direct contact with the sterilant for inactivation to occur. Several peer-reviewed scientific publications include data demonstrating concerns about the efficacy of several of the low-temperature sterilization processes (i.e., gas plasma, vaporized hydrogen peroxide, ETO, peracetic acid), particularly when the test organisms are challenged in the presence of serum and salt and a narrow-lumen vehicle 469,721,825,855,856. Factors shown to affect the efficacy of sterilization are shown in Table 10.

# Ethylene Oxide "Gas" Sterilization

Overview. ETO is a colorless gas that is flammable and explosive. The four essential parameters (operational ranges) are: gas concentration (450 to 1200 mg/l); temperature (37 to 63°C); relative humidity (40 to 80%) (water molecules carry ETO to reactive sites); and exposure time (1 to 6 hours). These parameters influence the effectiveness of ETO sterilization 814,857,858. Within certain limitations, an increase in gas concentration and temperature may shorten the time necessary for achieving sterilization. The main disadvantages associated with ETO are the lengthy cycle time, the cost, and its potential hazards to patients and staff; the main advantage is that it can sterilize heat- or moisture-sensitive medical equipment without deleterious effects on the materials used in the medical devices (Table 6). Acute exposure to ETO may result in irritation (e.g., to skin, eyes, gastrointestinal or respiratory tracts) and central nervous system depression 859-862. Chronic inhalation has been linked to the formation of cataracts, cognitive impairment, neurologic dysfunction, and disabling polyneuropathies 860,861,863-866. Occupational exposure in healthcare facilities has been linked to hematologic changes 867 and an increased risk of spontaneous abortions and various cancers 318,868-870. ETO should be considered a known human carcinogen 871.

The basic ETO sterilization cycle consists of five stages (i.e., preconditioning and humidification, gas introduction, exposure, evacuation, and air washes) and takes approximately 2 1/2 hrs excluding aeration time. Mechanical aeration for 8 to 12 hours at 50 to 60°C allows desorption of the toxic ETO residual contained in exposed absorbent materials. Most modern ETO sterilizers combine sterilization and aeration in the same chamber as a continuous process. These ETO models minimize potential ETO exposure during door opening and load transfer to the aerator. Ambient room aeration also will achieve desorption of the toxic ETO but requires 7 days at 20°C. There are no federal regulations for ETO sterilizer emission; however, many states have promulgated emission-control regulations 814.

The use of ETO evolved when few alternatives existed for sterilizing heat- and moisture-sensitive medical devices; however, favorable properties (Table 6) account for its continued widespread use 872. Two ETO gas mixtures are available to replace ETO-chlorofluorocarbon (CFC) mixtures for large-capacity, tank-supplied sterilizers. The ETO-carbon dioxide (CO2) mixture consists of 8.5% ETO and 91.5% CO2. This mixture is less expensive than ETO-hydrochlorofluorocarbons (HCFC), but a disadvantage is the need for pressure vessels rated for steam sterilization, because higher pressures (28 psi gauge) are required. The other mixture, which is a drop-in CFC replacement, is ETO mixed with HCFC. HCFCs are approximately 50-fold less damaging to the earth's ozone layer than are CFCs.
The EPA will begin regulation of HCFC in the year 2015 and will terminate production in the year 2030. Two companies provide ETO-HCFC mixtures as drop-in replacements for CFC-12; one mixture consists of 8.6% ETO and 91.4% HCFC, and the other mixture is composed of 10% ETO and 90% HCFC 872. An alternative to the pressurized mixed-gas ETO systems is 100% ETO. The 100% ETO sterilizers using unit-dose cartridges eliminate the need for external tanks.

ETO is absorbed by many materials. For this reason, following sterilization the item must undergo aeration to remove residual ETO. Guidelines regarding allowable ETO residue limits have been promulgated for devices; these limits depend on how the device is used, how often, and for how long, so that residues pose a minimal risk to patients in normal product use 814. ETO toxicity has been established in a variety of animals. Exposure to ETO can cause eye pain, sore throat, difficulty breathing, and blurred vision. Exposure can also cause dizziness, nausea, headache, convulsions, blisters, vomiting, and coughing 873. In a variety of in vitro and animal studies, ETO has been demonstrated to be carcinogenic. ETO has been linked to spontaneous abortion, genetic damage, nerve damage, peripheral paralysis, muscle weakness, and impaired thinking and memory 873. Occupational exposure in healthcare facilities has been linked to an increased risk of spontaneous abortions and various cancers 318. Injuries (e.g., tissue burns) to patients have been associated with ETO residues in implants used in surgical procedures 874. Residual ETO in capillary flow dialysis membranes has been shown to be neurotoxic in vitro 875. OSHA has established a permissible exposure limit (PEL) of 1 ppm airborne ETO in the workplace, expressed as a time-weighted average (TWA) for an 8-hour work shift in a 40-hour work week. The "action level" for ETO is 0.5 ppm, expressed as an 8-hour TWA, and the short-term excursion limit is 5 ppm, expressed as a 15-minute TWA 814. For details of the requirements in OSHA's ETO standard for occupational exposures, see Title 29 of the Code of Federal Regulations (CFR) Part 1910.1047 873. Several personnel monitoring methods (e.g., charcoal tubes and passive sampling devices) are in use 814. OSHA has established a PEL of 5 ppm for ethylene chlorohydrin (a toxic by-product of ETO) in the workplace 876. Additional information regarding use of ETO in healthcare facilities is available from NIOSH.

Mode of Action. The microbicidal activity of ETO is considered to be the result of alkylation of protein, DNA, and RNA. Alkylation, or the replacement of a hydrogen atom with an alkyl group, within cells prevents normal cellular metabolism and replication 877.

Microbicidal Activity. The excellent microbicidal activity of ETO has been demonstrated in several studies 469,721,722,856,878,879 and summarized in published reports 877. ETO inactivates all microorganisms, although bacterial spores (especially B. atrophaeus) are more resistant than other microorganisms. For this reason, B. atrophaeus is the recommended biological indicator. Like all sterilization processes, the effectiveness of ETO sterilization can be altered by lumen length, lumen diameter, inorganic salts, and organic materials 469,721,722,855,856,879. For example, although ETO is not used commonly for reprocessing endoscopes 28, several studies have shown failure of ETO to inactivate contaminating spores in endoscope channels 855 or lumen test units 469,721,879, as well as residual ETO levels averaging 66.2 ppm even after the standard degassing time 456.
Failure of ETO also has been observed when dental handpieces were contaminated with Streptococcus mutans and exposed to ETO 880. It is recommended that dental handpieces be steam sterilized.

Uses. ETO is used in healthcare facilities to sterilize critical items (and sometimes semicritical items) that are moisture or heat sensitive and cannot be sterilized by steam sterilization.

# Hydrogen Peroxide Gas Plasma

Overview. New sterilization technology based on plasma was patented in 1987 and marketed in the United States in 1993. Gas plasmas have been referred to as the fourth state of matter (i.e., liquids, solids, gases, and gas plasmas). Gas plasmas are generated in an enclosed chamber under deep vacuum using radio frequency or microwave energy to excite the gas molecules and produce charged particles, many of which are in the form of free radicals. A free radical is an atom or molecule with an unpaired electron and is a highly reactive species. The proposed mechanism of action of this device is the production of free radicals within a plasma field that are capable of interacting with essential cell components (e.g., enzymes, nucleic acids) and thereby disrupting the metabolism of microorganisms. The type of seed gas used and the depth of the vacuum are two important variables that can determine the effectiveness of this process.

In the late 1980s the first hydrogen peroxide gas plasma system for sterilization of medical and surgical devices was field-tested. According to the manufacturer, the sterilization chamber is evacuated, and hydrogen peroxide solution is injected from a cassette and vaporized in the sterilization chamber to a concentration of 6 mg/l. The hydrogen peroxide vapor diffuses through the chamber (50 minutes), exposes all surfaces of the load to the sterilant, and initiates the inactivation of microorganisms. An electrical field created by a radio frequency is applied to the chamber to create a gas plasma. Microbicidal free radicals (e.g., hydroxyl and hydroperoxyl) are generated in the plasma. The excess gas is removed, and in the final stage (i.e., vent) of the process the sterilization chamber is returned to atmospheric pressure by introduction of high-efficiency filtered air. The by-products of the cycle (e.g., water vapor, oxygen) are nontoxic and eliminate the need for aeration. Thus, the sterilized materials can be handled safely, either for immediate use or storage. The process operates in the range of 37-44°C and has a cycle time of 75 minutes. If any moisture is present on the objects, the vacuum will not be achieved and the cycle aborts 856,881-883.

A newer version of the unit improves sterilizer efficacy by using two cycles, each with a hydrogen peroxide diffusion stage and a plasma stage, per sterilization cycle. This revision, which is achieved by a software modification, reduces total processing time from 73 to 52 minutes. The manufacturer believes that the enhanced activity obtained with this system is due in part to the pressure changes that occur during the injection and diffusion phases of the process and to the fact that the process consists of two equal and consecutive half-cycles, each with a separate injection of hydrogen peroxide 856,884,885. This system and a smaller version 400,882 have received FDA 510(k) clearance with limited application for sterilization of medical devices (Table 6). The biological indicator used with this system is Bacillus atrophaeus spores 851.
The newest version of the unit, which employs a new vaporization system that removes most of the water from the hydrogen peroxide, has a cycle time of 28-38 minutes (see the manufacturer's literature for device dimension restrictions). Penetration of hydrogen peroxide vapor into long or narrow lumens has been addressed outside the United States by the use of a diffusion enhancer. This is a small, breakable glass ampoule of concentrated hydrogen peroxide (50%) with an elastic connector that is inserted into the device lumen and crushed immediately before sterilization 470,885. The diffusion enhancer has been shown to sterilize bronchoscopes contaminated with Mycobacterium tuberculosis 886. At the present time, the diffusion enhancer is not FDA cleared.

Another gas plasma system, which differs from the above in several important ways, including the use of peracetic acid-acetic acid-hydrogen peroxide vapor, was removed from the marketplace because of reports of corneal destruction in patients when ophthalmic surgery instruments had been processed in the sterilizer 887,888. In this investigation, exposure of potentially wet ophthalmologic surgical instruments with small bores and brass components to the plasma gas led to degradation of the brass to copper and zinc 888,889. When rabbit eyes were exposed to the rinsates of the gas plasma-sterilized instruments, corneal decompensation was documented. This toxicity is highly unlikely with the hydrogen peroxide gas plasma process, since a toxic, soluble form of copper would not form (LA Feldman, written communication, April 1998).

Mode of Action. This process inactivates microorganisms primarily by the combined use of hydrogen peroxide gas and the generation of free radicals (hydroxyl and hydroperoxyl free radicals) during the plasma phase of the cycle.

Microbicidal Activity. This process has the ability to inactivate a broad range of microorganisms, including resistant bacterial spores. Studies have been conducted against vegetative bacteria (including mycobacteria), yeasts, fungi, viruses, and bacterial spores 469,721,856,881-883,890-893. Like all sterilization processes, the effectiveness can be altered by lumen length, lumen diameter, inorganic salts, and organic materials 469,721,855,856,890,891,893.

Uses. Materials and devices that cannot tolerate high temperatures and humidity, such as some plastics, electrical devices, and corrosion-susceptible metal alloys, can be sterilized by hydrogen peroxide gas plasma. This method has been compatible with most (>95%) medical devices and materials tested 884,894,895.

# Peracetic Acid Sterilization

Overview. Peracetic acid is a highly biocidal oxidizer that maintains its efficacy in the presence of organic soil. Peracetic acid removes surface contaminants (primarily protein) on endoscopic tubing 711,717. An automated machine using peracetic acid to chemically sterilize medical, surgical, and dental instruments (e.g., endoscopes, arthroscopes) was introduced in 1988. This microprocessor-controlled, low-temperature sterilization method is commonly used in the United States 107. The sterilant, 35% peracetic acid, and an anticorrosive agent are supplied in a single-dose container. The container is punctured at the time of use, immediately prior to closing the lid and initiating the cycle. The concentrated peracetic acid is diluted to 0.2% with filtered water (0.2 µm) at a temperature of approximately 50°C.
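The use-dilution just described follows from simple mass-balance arithmetic; the sketch below checks it. The per-milliliter volume figure it prints is illustrative arithmetic only, not a manufacturer specification.

```python
# Illustrative dilution check for the peracetic acid system described above.
# Only the 35% and 0.2% concentrations come from the text; the rest is arithmetic.

concentrate_pct = 35.0   # % peracetic acid in the single-dose container
use_dilution_pct = 0.2   # % peracetic acid after dilution with filtered water

# C1 * V1 = C2 * V2, so the dilution factor is the ratio of the concentrations.
dilution_factor = concentrate_pct / use_dilution_pct   # 175-fold dilution
print(f"dilution factor: {dilution_factor:.0f}x")

# Each mL of 35% concentrate therefore yields about 175 mL of 0.2% use-dilution.
print(f"1 mL of concentrate yields about {dilution_factor:.0f} mL of use-dilution")
```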
The diluted peracetic acid is circulated within the chamber of the machine and pumped through the channels of the endoscope for 12 minutes, decontaminating exterior surfaces, lumens, and accessories. Interchangeable trays are available to permit the processing of up to three rigid endoscopes or one flexible endoscope. Connectors are available for most types of flexible endoscopes for the irrigation of all channels by directed flow. Rigid endoscopes are placed within a lidded container, and the sterilant fills the lumens either by immersion in the circulating sterilant or by use of channel connectors to direct flow into the lumen(s) (see below for the importance of channel connectors). The peracetic acid is discarded via the sewer, and the instrument is rinsed four times with filtered water. Concern has been raised that filtered water may be inadequate to maintain sterility 896. Limited data have shown that low-level bacterial contamination may follow the use of filtered water in an AER, but no data have been published on AERs using the peracetic acid system 161. Clean filtered air is passed through the chamber of the machine and endoscope channels to remove excess water 719.

As with any sterilization process, the system can only sterilize surfaces that can be contacted by the sterilant. For example, bronchoscopy-related infections occurred when bronchoscopes were processed using the wrong connector 155,725. Investigation of these incidents revealed that bronchoscopes were inadequately reprocessed when inappropriate channel connectors were used and when there were inconsistencies between the reprocessing instructions provided by the manufacturer of the bronchoscope and the manufacturer of the automatic endoscope reprocessor 155. The importance of channel connectors to achieve sterilization was also shown for rigid-lumen devices 137,856.

The manufacturers suggest the use of biological monitors (G. stearothermophilus spore strips) both at the time of installation and routinely to ensure effectiveness of the process. The manufacturer's clip must be used to hold the strip in the designated spot in the machine, as a broader clamp will not allow the sterilant to reach the spores trapped under it 897. One investigator reported a 3% failure rate when the appropriate clips were used to hold the spore strip within the machine 718. The use of biological monitors designed to monitor either steam sterilization or ETO for a liquid chemical sterilizer has been questioned for several reasons, including spore wash-off from the filter-paper strips, which may make monitoring less valid 898-901. The processor is equipped with a conductivity probe that will automatically abort the cycle if the buffer system is not detected in a fresh container of the peracetic acid solution. A chemical monitoring strip that detects that the active ingredient is >1500 ppm is available for routine use as an additional process control.

Mode of Action. Only limited information is available regarding the mechanism of action of peracetic acid, but it is thought to function like other oxidizing agents, i.e., it denatures proteins, disrupts cell wall permeability, and oxidizes sulfhydryl and sulfur bonds in proteins, enzymes, and other metabolites 654,726.

Microbicidal Activity. Peracetic acid will inactivate gram-positive and gram-negative bacteria, fungi, and yeasts in <5 minutes at <100 ppm. In the presence of organic matter, 200-500 ppm is required.
For viruses, the dosage range is wide (12-2250 ppm), with poliovirus inactivated in yeast extract in 15 minutes with 1500 to 2250 ppm. Bacterial spores in suspension are inactivated in 15 seconds to 30 minutes with 500 to 10,000 ppm (0.05 to 1%) 654. Simulated-use trials have demonstrated microbicidal activity 111,718-722, and three clinical trials have demonstrated both microbial killing and no clinical failures leading to infection 90,723,724. Alfa and coworkers, who compared the peracetic acid system with ETO, demonstrated the high efficacy of the system. Only the peracetic acid system was able to completely kill 6 log10 of Mycobacterium chelonae, Enterococcus faecalis, and B. atrophaeus spores with both an organic and inorganic challenge 722. Like other sterilization processes, the efficacy of the process can be diminished by soil challenges 902 and test conditions 856.

Uses. This automated machine is used to chemically sterilize medical (e.g., GI endoscopes) and surgical (e.g., flexible endoscopes) instruments in the United States. Lumened endoscopes must be connected to an appropriate channel connector to ensure that the sterilant has direct contact with the contaminated lumen 137,856,903. Olympus America has not listed this system as a compatible product for use in reprocessing Olympus bronchoscopes and gastrointestinal endoscopes (Olympus America, January 30, 2002, written communication).

# Microbicidal Activity of Low-Temperature Sterilization Technologies

Sterilization processes used in the United States must be cleared by FDA, and FDA requires that sterilizer microbicidal performance be tested under simulated-use conditions 904. FDA requires that the test article be inoculated with 10^6 colony-forming units of the most resistant test organism and prepared with organic and inorganic test loads as would occur after actual use. FDA requires manufacturers to use organic soil (e.g., 5% fetal calf serum), dried onto the device with the inoculum, to represent soil remaining on the device following marginal cleaning. However, 5% fetal calf serum as a measure of marginal cleaning has not been validated by measurements of the protein load on devices following use and the level of protein removal by various cleaning methods. The inocula must be placed in various locations of the test articles, including those least favorable to penetration and contact with the sterilant (e.g., lumens). Cleaning before sterilization is not allowed in the demonstration of sterilization efficacy 904.

Several studies have evaluated the relative microbicidal efficacy of these low-temperature sterilization technologies (Table 11). These studies have either tested the activity of a sterilization process against specific microorganisms 892,905,906, evaluated the microbicidal activity of a single technology 711,719,724,855,879,882-884,890,891,907, or evaluated the comparative effectiveness of several sterilization technologies 271,426,469,721,722,856,908,909. Several test methodologies use stainless steel or porcelain carriers that are inoculated with a test organism. Commonly used test organisms include vegetative bacteria, mycobacteria, and spores of Bacillus species. The available data demonstrate that low-temperature sterilization technologies are able to provide a 6-log10 reduction of microbes when inoculated onto carriers in the absence of salt and serum.
However, tests can be constructed such that all of the available sterilization technologies are unable to reliably achieve complete inactivation of a microbial load 425,426,469,721,856,909. For example, almost all of the sterilization processes will fail to reliably inactivate the microbial load in the presence of salt and serum 469,721,909.

The effect of salts and serum on the sterilization process was studied initially in the 1950s and 1960s 424,910. These studies showed that a high concentration of crystalline-type materials and a low protein content provided greater protection to spores than did serum with a high protein content 426. A study by Doyle and Ernst demonstrated that the resistance conferred on spores by crystalline material applied not only to low-temperature sterilization technology but also to steam and dry heat 425. These studies showed that occlusion of Bacillus atrophaeus spores in calcium carbonate crystals dramatically increased the time required for inactivation as follows: from 10 seconds to 150 minutes for steam (121°C), from 3.5 hours to 50 hours for dry heat (121°C), and from 30 seconds to >2 weeks for ETO (54°C). Investigators have corroborated and extended these findings 469,470,721,855,908,909. While soils containing both organic and inorganic materials impair microbial killing, soils that contain a high inorganic salt-to-protein ratio favor crystal formation and impair sterilization by occlusion of organisms 425,426,881.

Alfa and colleagues demonstrated a 6-log10 reduction of the microbial inoculum of porcelain penicylinders using a variety of vegetative and spore-forming organisms (Table 11) 469. However, if the bacterial inoculum was in tissue-culture medium supplemented with 10% serum, only the ETO 12/88 and ETO-HCFC sterilization mixtures could sterilize 95% to 97% of the penicylinder carriers. The plasma and 100% ETO sterilizers demonstrated significantly reduced activity (Table 11). For all sterilizers evaluated using penicylinder carriers (i.e., ETO 12/88, 100% ETO, hydrogen peroxide gas plasma), there was a 3- to 6-log10 reduction of inoculated bacteria even in the presence of serum and salt. For each sterilizer evaluated, the ability to inactivate microorganisms in the presence of salt and serum was reduced even further when the inoculum was placed in a narrow-lumen test object (3 mm diameter by 125 cm long). Although there was a 2- to 4-log10 reduction in microbial kill, less than 50% of the lumen test objects were sterile when processed using any of the sterilization methods evaluated except the peracetic acid immersion system (Table 11) 721. Complete killing (or removal) of 6 log10 of Enterococcus faecalis, Mycobacterium chelonae, and Bacillus atrophaeus spores in the presence of salt and serum and lumen test objects was observed only for the peracetic acid immersion system.

With respect to the results by Alfa and coworkers 469, Jacobs showed that the use of the tissue-culture media created a technique-induced sterilization failure 426. Jacobs et al. showed that microorganisms mixed with tissue-culture media, used as a surrogate body fluid, formed physical crystals that protected the microorganisms used as a challenge. If the carriers were exposed for 60 seconds to nonflowing water, the salts dissolved and the protective effect disappeared. Since any device would be exposed to water for a short period of time during the washing procedure, these protective effects would have little clinical relevance 426.
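The log10 reduction figures cited in these studies are calculated from the colony counts recovered before and after processing. A minimal sketch of that arithmetic follows; the survivor count used here is hypothetical, chosen only to illustrate the calculation.

```python
# Computes the log10 reduction achieved by a process from recovered counts.
# The inoculum level matches the FDA test requirement cited above; the
# survivor count is hypothetical.

import math

inoculum_cfu = 1e6      # CFU inoculated onto the carrier
survivors_cfu = 1e2     # CFU recovered after processing (hypothetical)

log_reduction = math.log10(inoculum_cfu) - math.log10(survivors_cfu)
print(f"log10 reduction: {log_reduction:.1f}")   # 6 - 2 = 4 log10

# "Complete kill" of a 10^6 inoculum (no survivors detected) corresponds to
# at least a 6-log10 reduction, the benchmark used in the studies cited above.
```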
Narrow lumens provide a challenge to some low-temperature sterilization processes. For example, Rutala and colleagues showed that, as lumen size decreased, increased failures occurred with some low-temperature sterilization technologies. However, some low-temperature processes, such as ETO-HCFC and the hydrogen peroxide gas plasma process, remained effective even when challenged by a lumen as small as 1 mm in the absence of salt and serum 856.

The importance of allowing the sterilant to come into contact with the inoculated carrier is demonstrated by comparing the results of two investigators who studied the peracetic acid immersion system. Alfa and coworkers demonstrated excellent activity of the peracetic acid immersion system against three test organisms using a narrow-lumen device. In these experiments, the lumen test object was connected to channel irrigators, which ensured that the sterilant had direct contact with the contaminated carriers 722. This effectiveness was achieved through a combination of organism wash-off and killing of the test organisms by the peracetic acid sterilant 722. The data reported by Rutala et al. demonstrated failure of the peracetic acid immersion system to eliminate Geobacillus stearothermophilus spores from a carrier placed in a lumen test object. In these experiments, the lumen test unit was not connected to channel irrigators. The authors attributed the failure of the peracetic acid immersion system to eliminate the high levels of spores from the center of the test unit to the inability of the peracetic acid to diffuse into the center of 40-cm long, 3-mm diameter tubes. This may be caused by an air lock or air bubbles formed in the lumen, impeding the flow of the sterilant through the long, narrow lumen and limiting complete access to the Bacillus spores 137,856. Experiments using a channel connector specifically designed for 1-, 2-, and 3-mm lumen test units with the peracetic acid immersion system were completely effective in eliminating an inoculum of 10^6 Geobacillus stearothermophilus spores 7. The restricted diffusion environment that exists in the test conditions would not exist with flexible scopes processed in the peracetic acid immersion system, because the scopes are connected to channel irrigators to ensure that the sterilant has direct contact with contaminated surfaces. Alfa and associates attributed the efficacy of the peracetic acid immersion system to the ability of the liquid chemical process to dissolve salts and remove protein and bacteria due to the flushing action of the fluid 722.

# Bioburden of Surgical Devices

In general, used medical devices are contaminated with a relatively low bioburden of organisms 179,911,912. Nystrom evaluated medical instruments used in general surgical, gynecological, orthopedic, and ear-nose-throat operations and found that 62% of the instruments were contaminated with <10^1 organisms after use, 82% with <10^2, and 91% with <10^3. After being washed in an instrument washer, more than 98% of the instruments had <10^1 organisms, and none had >10^2 organisms 911. Other investigators have published similar findings 179,912. For example, after a standard cleaning procedure, 72% of 50 surgical instruments contained <10^1 organisms, 86% <10^2, and only 6% had >3 × 10^2 912. In another study of rigid-lumen medical devices, the bioburden on both the inner and outer surfaces of the lumen ranged from 10^1 to 10^4 organisms per device. After cleaning, 83% of the devices had a bioburden ≤10^2 organisms 179.
In all of these studies, the contaminating microflora consisted mainly of vegetative bacteria, usually of low pathogenicity (e.g., coagulase-negative Staphylococcus) 179,911,912. An evaluation of the microbial load on used critical medical devices such as spinal anesthesia needles and angiographic catheters and sheaths demonstrated that mesophilic microorganisms were detected at levels of 10^1 to 10^2 in only two of five needles. The bioburden on used angiographic catheters and sheath introducers exceeded 10^3 CFUs on 14% (3 of 21) and 21% (6 of 28), respectively 907.

# Effect of Cleaning on Sterilization Efficacy

The effect of salt and serum on the efficacy of low-temperature sterilization technologies has raised concern regarding the margin of safety of these technologies. Experiments have shown that salts have the greatest impact on protecting microorganisms from killing 426,469. However, other studies have suggested that these concerns may not be clinically relevant. One study evaluated the relative rates of removal of inorganic salts, organic soil, and microorganisms from medical devices to better understand the dynamics of the cleaning process 426. These tests were conducted by inoculating Alfa soil (tissue-culture media and 10% fetal bovine serum) 469 containing 10^6 G. stearothermophilus spores onto the surface of a stainless-steel scalpel blade. After drying for 30 minutes at 35°C followed by 30 minutes at room temperature, the samples were placed in water at room temperature. The blades were removed at specified times, and the concentrations of total protein and chloride ion were measured. The results showed that brief soaking in deionized water released >95% of the chloride ion within 20 seconds from NaCl solution, within 30 seconds from Alfa soil, and within 120 seconds from fetal bovine serum. Thus, contact with water for short periods, even in the presence of protein, rapidly leads to dissolution of salt crystals and complete inactivation of spores by a low-temperature sterilization process (Table 10). Based on these experimental data, cleaning procedures would eliminate the detrimental effect of high salt content on a low-temperature sterilization process.

These articles 426,469,721 assessing low-temperature sterilization technology reinforce the importance of meticulous cleaning before sterilization. These data support the critical need for healthcare facilities to develop rigid protocols for cleaning contaminated objects before sterilization 472. Sterilization of instruments and medical devices is compromised if the process is not preceded by meticulous cleaning.

The cleaning of any narrow-lumen medical device used in patient care presents a major challenge to reprocessing areas. While attention has been focused on flexible endoscopes, cleaning issues related to other narrow-lumen medical devices, such as sphincterotomes, have been investigated 913. This study compared manual cleaning with automated cleaning using a narrow-lumen cleaner and found that only retro-flushing with the narrow-lumen cleaner provided adequate cleaning of the three channels. If reprocessing was delayed for more than 24 hours, retro-flush cleaning was no longer effective, and ETO sterilization failure was detected when devices were held for 7 days 913. In another study involving simulated-use cleaning of laparoscopic devices, Alfa found that, at a minimum, retro-flushing should be used during cleaning of non-ported laparoscopic devices 914.

# Other Sterilization Methods

Ionizing Radiation.
Sterilization by ionizing radiation, primarily by cobalt 60 gamma rays or electron accelerators, is a low-temperature sterilization method that has been used for a number of medical products (e.g., tissue for transplantation, pharmaceuticals, medical devices). There are no FDA-cleared ionizing radiation sterilization processes for use in healthcare facilities. Because of high sterilization costs, this method is an unfavorable alternative to ETO and plasma sterilization in healthcare facilities but is suitable for large-scale sterilization. Some deleterious effects on patient-care equipment associated with gamma radiation include induced oxidation in polyethylene 915 and delamination and cracking in polyethylene knee bearings 916. Several reviews 917,918 dealing with the sources, effects, and application of ionizing radiation may be consulted for more detail.

Dry-Heat Sterilizers. This method should be used only for materials that might be damaged by moist heat or that are impenetrable to moist heat (e.g., powders, petroleum products, sharp instruments). The advantages of dry heat include the following: it is nontoxic and does not harm the environment; a dry-heat cabinet is easy to install and has relatively low operating costs; it penetrates materials; and it is noncorrosive for metal and sharp instruments. The disadvantages of dry heat are the slow rate of heat penetration and microbial killing, which makes this a time-consuming method. In addition, the high temperatures are not suitable for most materials 919. The most common time-temperature relationships for sterilization with hot air sterilizers are 170°C (340°F) for 60 minutes, 160°C (320°F) for 120 minutes, and 150°C (300°F) for 150 minutes. B. atrophaeus spores should be used to monitor the sterilization process for dry heat because they are more resistant to dry heat than are G. stearothermophilus spores. The primary lethal process is considered to be oxidation of cell constituents.

There are two types of dry-heat sterilizers: the static-air type and the forced-air type. The static-air type is referred to as the oven-type sterilizer, as heating coils in the bottom of the unit cause the hot air to rise inside the chamber via gravity convection. This type of dry-heat sterilizer is much slower in heating, requires a longer time to reach sterilizing temperature, and is less uniform in temperature control throughout the chamber than is the forced-air type. The forced-air or mechanical convection sterilizer is equipped with a motor-driven blower that circulates heated air throughout the chamber at a high velocity, permitting a more rapid transfer of energy from the air to the instruments 920.

Liquid Chemicals. Several FDA-cleared liquid chemical sterilants include indications for sterilization of medical devices (Tables 4 and 5) 69. The indicated contact times range from 3 hours to 12 hours. However, except for a few of the products, the contact time is based only on the conditions to pass the AOAC Sporicidal Test as a sterilant and not on simulated-use testing with devices. These solutions are commonly used as high-level disinfectants when a shorter processing time is required. Generally, chemical liquid sterilants cannot be monitored using a biological indicator to verify sterility 899,900. The survival kinetics for thermal sterilization methods, such as steam and dry heat, have been studied and characterized extensively, whereas the kinetics for sterilization with liquid sterilants are less well understood 921.
The information that is available in the literature suggests that sterilization processes based on liquid chemical sterilants, in general, may not convey the same sterility assurance level as sterilization achieved using thermal or physical methods 823. The data indicate that the survival curves for liquid chemical sterilants may not exhibit log-linear kinetics, and the shape of the survivor curve may vary depending on the formulation, chemical nature, and stability of the liquid chemical sterilant. In addition, the design of the AOAC Sporicidal Test does not provide quantification of the microbial challenge. Therefore, sterilization with a liquid chemical sterilant may not convey the same sterility assurance as other sterilization methods.

One of the differences between thermal and liquid chemical processes for sterilization of devices is the accessibility of microorganisms to the sterilant. Heat can penetrate barriers, such as biofilm, tissue, and blood, to attain organism kill, whereas liquids cannot adequately penetrate these barriers. In addition, the viscosity of some liquid chemical sterilants impedes their access to organisms in the narrow lumens and mated surfaces of devices 922. Another limitation of sterilization of devices with liquid chemical germicides is the post-processing environment of the device. Devices cannot be wrapped or adequately contained during processing in a liquid chemical sterilant to maintain sterility following processing and during storage. Furthermore, devices may require rinsing following exposure to the liquid chemical sterilant with water that typically is not sterile. Therefore, because of the inherent limitations of using liquid chemical sterilants, their use should be restricted to reprocessing critical devices that are heat-sensitive and incompatible with other sterilization methods. Several published studies compare the sporicidal effect of liquid chemical germicides against spores of Bacillus and Clostridium 78,659,660,715.

Performic Acid. Performic acid is a fast-acting sporicide that has been incorporated into an automated endoscope reprocessing system 400. Systems using performic acid are not currently FDA cleared.

Filtration. Although filtration is not a lethality-based process and is not an FDA-cleared sterilization method, this technology is used to remove bacteria from thermolabile pharmaceutical fluids that cannot be purified by any other means. In order to remove bacteria, the membrane pore size (e.g., 0.22 µm) must be smaller than the bacteria and uniform throughout 923. Some investigators have appropriately questioned whether the removal of microorganisms by filtration really is a sterilization method, because slight bacterial passage through filters, viral passage through filters, and transference of the sterile filtrate into the final container under aseptic conditions all entail a risk of contamination 924.

Microwave. Microwaves are used in medicine for disinfection of soft contact lenses, dental instruments, dentures, milk, and urinary catheters for intermittent self-catheterization 925-931. However, microwaves must only be used with products that are compatible (e.g., do not melt) 931. Microwaves are radio-frequency waves, usually used at a frequency of 2450 MHz. The microwaves produce friction of water molecules in an alternating electrical field.
The intermolecular friction derived from the vibrations generates heat; some authors believe that the effect of microwaves depends on the heat produced, while others postulate a nonthermal lethal effect 932-934. The initial reports showed microwaves to be an effective microbicide. The microwaves produced by a "home-type" microwave oven (2.45 GHz) completely inactivate bacterial cultures, mycobacteria, viruses, and G. stearothermophilus spores within 60 seconds to 5 minutes, depending on the challenge organism 933,935-937. Another study confirmed these results but also found that higher-power microwaves in the presence of water may be needed for sterilization 932. Complete destruction of Mycobacterium bovis was obtained with 4 minutes of microwave exposure (600 W, 2450 MHz) 937. The effectiveness of microwave ovens for different sterilization and disinfection purposes should be tested and demonstrated, as test conditions (e.g., presence of water, microwave power) affect the results. Sterilization of metal instruments can be accomplished but requires certain precautions 926. Of concern is that home-type microwave ovens may not have an even distribution of microwave energy over the entire dry device (there may be hot and cold spots on solid medical devices); hence, there may be areas that are not sterilized or disinfected. The use of microwave ovens to disinfect intermittent-use catheters also has been suggested. Researchers found that test bacteria (e.g., E. coli, Klebsiella pneumoniae, Candida albicans) were eliminated from red rubber catheters within 5 minutes 931. Microwaves used for sterilization of medical devices have not been FDA cleared.

Glass Bead "Sterilizer". Glass bead "sterilization" uses small glass beads (1.2-1.5 mm diameter) and high temperature (217°C-232°C) for brief exposure times (e.g., 45 seconds) to inactivate microorganisms. These devices have been used for several years in the dental profession 938-940. FDA believes there is a risk of infection with this device because of potential failure to sterilize dental instruments, and their use should be discontinued until the device has received FDA clearance.

Vaporized Hydrogen Peroxide (VHP). Hydrogen peroxide solutions have been used as chemical sterilants for many years. However, VHP was not developed for the sterilization of medical equipment until the mid-1980s. One method for delivering VHP to the reaction site uses a deep vacuum to pull liquid hydrogen peroxide (30-35% concentration) from a disposable cartridge through a heated vaporizer and then, following vaporization, into the sterilization chamber. A second approach to VHP delivery is the flow-through approach, in which the VHP is carried into the sterilization chamber by a carrier gas such as air, using either a slight negative pressure (vacuum) or slight positive pressure. Applications of this technology include vacuum systems for industrial sterilization of medical devices and atmospheric systems for decontaminating large and small areas 853. VHP offers several appealing features, including rapid cycle time (e.g., 30-45 minutes); low temperature; environmentally safe by-products (H2O, oxygen [O2]); good material compatibility; and ease of operation, installation, and monitoring. VHP has limitations: cellulose cannot be processed, nylon becomes brittle, and VHP penetration capabilities are less than those of ETO. VHP has not been cleared by FDA for sterilization of medical devices in healthcare facilities.
The feasibility of utilizing vapor-phase hydrogen peroxide as a surface decontaminant and sterilizer was evaluated in a centrifuge decontamination application. In this study, vapor-phase hydrogen peroxide was shown to possess significant sporicidal activity 941. In preliminary studies, hydrogen peroxide vapor decontamination has been found to be a highly effective method of eradicating MRSA, Serratia marcescens, Clostridium botulinum spores, and Clostridium difficile from rooms, furniture, surfaces, and/or equipment; however, further investigation of this method to demonstrate both safety and effectiveness in reducing infection rates is required 942-945.

Ozone. Ozone has been used for years as a drinking water disinfectant. Ozone is produced when O2 is energized and split into two monatomic oxygen molecules (O1). The monatomic oxygen molecules then collide with O2 molecules to form ozone, which is O3. Thus, ozone consists of O2 with a loosely bonded third oxygen atom that is readily available to attach to, and oxidize, other molecules. This additional oxygen atom makes ozone a powerful oxidant that destroys microorganisms, but it is highly unstable (i.e., half-life of 22 minutes at room temperature).

A new sterilization process, which uses ozone as the sterilant, was cleared by FDA in August 2003 for processing reusable medical devices. The sterilizer creates its own sterilant internally from USP-grade oxygen, steam-quality water, and electricity; the sterilant is converted back to oxygen and water vapor at the end of the cycle by passing through a catalyst before being exhausted into the room. The duration of the sterilization cycle is about 4 hours and 15 minutes, and it occurs at 30-35°C. Microbial efficacy has been demonstrated by achieving a SAL of 10^-6 with a variety of microorganisms, including the most resistant microorganism, Geobacillus stearothermophilus. The ozone process is compatible with a wide range of commonly used materials, including stainless steel, titanium, anodized aluminum, ceramic, glass, silica, PVC, Teflon, silicone, polypropylene, polyethylene, and acrylic. In addition, rigid lumen devices of the following diameters and lengths can be processed: internal diameter (ID) >2 mm, length ≤25 cm; ID >3 mm, length ≤47 cm; and ID >4 mm, length ≤60 cm. The process should be safe for use by the operator because there is no handling of the sterilant, no toxic emissions, and no residue to aerate, and the low operating temperature means there is no danger of an accidental burn. The cycle is monitored using a self-contained biological indicator and a chemical indicator. The sterilization chamber is small, about 4 ft^3 (written communication, S Dufresne, July 2004).

A gaseous ozone generator was investigated for decontamination of rooms used to house patients colonized with MRSA. The results demonstrated that the device tested would be inadequate for the decontamination of a hospital room 946.

Formaldehyde Steam. Low-temperature steam with formaldehyde is used as a low-temperature sterilization method in many countries, particularly in Scandinavia, Germany, and the United Kingdom. The process involves the use of formalin, which is vaporized into a formaldehyde gas that is admitted into the sterilization chamber. A formaldehyde concentration of 8-16 mg/l is generated at an operating temperature of 70-75°C.
The sterilization cycle consists of a series of stages that include an initial vacuum to remove air from the chamber and load, followed by steam admission to the chamber with the vacuum pump running to purge the chamber of air and to heat the load, followed by a series of pulses of formaldehyde gas, followed by steam. Formaldehyde is removed from the sterilizer and load by repeated alternate evacuations and flushings with steam and air. This system has some advantages, e.g., the cycle time for formaldehyde gas is faster than that for ETO, and the cost per cycle is relatively low. However, ETO is more penetrating and operates at lower temperatures than do steam/formaldehyde sterilizers. Low-temperature steam formaldehyde sterilization has been found effective against vegetative bacteria, mycobacteria, B. atrophaeus and G. stearothermophilus spores, and Candida albicans 947-949.

Formaldehyde vapor cabinets also may be used in healthcare facilities to sterilize heat-sensitive medical equipment 950. Commonly, there is no circulation of formaldehyde and no temperature and humidity controls. The release of gas from paraformaldehyde tablets (placed on the lower tray) is slow and produces a low partial pressure of gas. The microbicidal quality of this procedure is unknown 951. Reliable sterilization using formaldehyde is achieved when performed with a high concentration of gas, at a temperature between 60° and 80°C, and with a relative humidity of 75 to 100%.

Studies indicate that formaldehyde is a mutagen and a potential human carcinogen, and OSHA regulates formaldehyde. The permissible exposure limit for formaldehyde in work areas is 0.75 ppm measured as an 8-hour TWA. The OSHA standard includes a short-term exposure limit (STEL) of 2 ppm (i.e., the maximum exposure allowed during a 15-minute period). As with the ETO standard, the formaldehyde standard requires that the employer conduct initial monitoring to identify employees who are exposed to formaldehyde at or above the action level or STEL. If this exposure level is maintained, employers may discontinue exposure monitoring until there is a change that could affect exposure levels or an employee reports formaldehyde-related signs and symptoms 269,578. The formaldehyde steam sterilization system has not been FDA cleared for use in healthcare facilities.

Gaseous Chlorine Dioxide. A gaseous chlorine dioxide system for sterilization of healthcare products was developed in the late 1980s 853,952,953. Chlorine dioxide is not mutagenic or carcinogenic in humans. As the chlorine dioxide concentration increases, the time required to achieve sterilization becomes progressively shorter. For example, only 30 minutes were required at 40 mg/l to sterilize 10^6 B. atrophaeus spores at 30° to 32°C 954. Currently, no gaseous chlorine dioxide system is FDA cleared.

Vaporized Peracetic Acid. The sporicidal activity of peracetic acid vapor at 20, 40, 60, and 80% relative humidity and 25°C was determined using Bacillus atrophaeus spores on paper and glass surfaces. Appreciable activity occurred within 10 minutes of exposure to 1 mg of peracetic acid per liter at 40% or higher relative humidity 955. No vaporized peracetic acid system is FDA cleared.

Infrared Radiation. An infrared radiation prototype sterilizer was investigated and found to destroy B. atrophaeus spores.
# Gaseous chlorine dioxide. A gaseous chlorine dioxide system for sterilization of healthcare products was developed in the late 1980s 853,952,953. Chlorine dioxide is not mutagenic or carcinogenic in humans. As the chlorine dioxide concentration increases, the time required to achieve sterilization becomes progressively shorter. For example, only 30 minutes were required at 40 mg/l to sterilize 10⁶ B. atrophaeus spores at 30° to 32°C 954. Currently, no gaseous chlorine dioxide system is FDA cleared. Vaporized Peracetic Acid. The sporicidal activity of peracetic acid vapor at 20, 40, 60, and 80% relative humidity and 25°C was determined on Bacillus atrophaeus spores on paper and glass surfaces. Appreciable activity occurred within 10 minutes of exposure to 1 mg of peracetic acid per liter at 40% or higher relative humidity 955. No vaporized peracetic acid system is FDA cleared. # Infrared radiation. An infrared radiation prototype sterilizer was investigated and found to destroy B. atrophaeus spores. Some of the possible advantages of infrared technology include short cycle time, low energy consumption, no cycle residuals, and no toxicologic or environmental effects. This may provide an alternative technology for sterilization of selected heat-resistant instruments, but there are no FDA-cleared systems for use in healthcare facilities 956. The other sterilization technologies mentioned above may be used for sterilization of critical medical items if cleared by the FDA and, ideally, if the microbicidal effectiveness of the technology has been published in the scientific literature. The selection and use of disinfectants, chemical sterilants and sterilization processes in the healthcare field are dynamic, and products may become available that were not in existence when this guideline was written. As newer disinfectants and sterilization processes become available, persons or committees responsible for selecting disinfectants and sterilization processes should be guided by products cleared by FDA and EPA as well as information in the scientific literature. # Sterilizing Practices Overview. The delivery of sterile products for use in patient care depends not only on the effectiveness of the sterilization process but also on the unit design, decontamination, disassembling and packaging of the device, loading the sterilizer, monitoring, sterilant quality and quantity, the appropriateness of the cycle for the load contents, and other aspects of device reprocessing. Healthcare personnel should perform most cleaning, disinfecting, and sterilizing of patient-care supplies in a central processing department in order to more easily control quality. The aim of central processing is the orderly processing of medical and surgical instruments to protect patients from infections while minimizing risks to staff and preserving the value of the items being reprocessed 957. Healthcare facilities should promote the same level of efficiency and safety in the preparation of supplies in other areas (e.g., operating room, respiratory therapy) as is practiced in central processing. Ensuring consistency of sterilization practices requires a comprehensive program that ensures operator competence and proper methods of cleaning and wrapping instruments, loading the sterilizer, operating the sterilizer, and monitoring of the entire process. Furthermore, care must be consistent from an infection prevention standpoint in all patient-care settings, such as hospital and outpatient facilities. # Sterilization Cycle Verification. A sterilization process should be verified before it is put into use in healthcare settings. All steam, ETO, and other low-temperature sterilizers are tested with biological and chemical indicators upon installation, when the sterilizer is relocated or redesigned, after major repairs, and after a sterilization failure has occurred, to ensure they are functioning before being placed into routine use. Three consecutive empty steam cycles are run with a biological and chemical indicator in an appropriate test package or tray. Each type of steam cycle used for sterilization (e.g., vacuum-assisted, gravity) is tested separately. In a prevacuum steam sterilizer three consecutive empty cycles are also run with a Bowie-Dick test. The sterilizer is not put back into use until all biological indicators are negative and chemical indicators show a correct end-point response 811-814, 819, 958.
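The release-to-service criterion described above (three consecutive passing empty test cycles, plus Bowie-Dick testing for prevacuum sterilizers) can be expressed as simple logic. The sketch below is a hypothetical illustration; the class and function names are invented, not part of any standard.

```python
# Illustrative only: release-to-service logic for sterilizer verification,
# following the criteria described in the text. Names are hypothetical.
from dataclasses import dataclass

@dataclass
class TestCycle:
    bi_negative: bool             # biological indicator shows no growth
    ci_correct: bool              # chemical indicator reached correct end point
    bowie_dick_pass: bool = True  # only meaningful for prevacuum cycles

def may_return_to_service(cycles, prevacuum: bool) -> bool:
    """True if the last three consecutive empty test cycles all passed."""
    if len(cycles) < 3:
        return False
    last_three = cycles[-3:]
    ok = all(c.bi_negative and c.ci_correct for c in last_three)
    if prevacuum:  # three consecutive empty Bowie-Dick cycles are also run
        ok = ok and all(c.bowie_dick_pass for c in last_three)
    return ok

runs = [TestCycle(True, True), TestCycle(True, True), TestCycle(True, True)]
print(may_return_to_service(runs, prevacuum=True))  # True
```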
Biological and chemical indicator testing is also done for ongoing quality assurance testing of representative samples of actual products being sterilized and for product testing when major changes are made in packaging, wraps, or load configuration. Biological and chemical indicators are placed in products, which are processed in a full load. When three consecutive cycles show negative biological indicators and chemical indicators with a correct end-point response, the change can be put into routine use [811][812][813][814]958. Items processed during the three evaluation cycles should be quarantined until the test results are negative. Physical Facilities. The central processing area(s) ideally should be divided into at least three areas: decontamination, packaging, and sterilization and storage. Physical barriers should separate the decontamination area from the other sections to contain contamination on used items. In the decontamination area reusable contaminated supplies (and possibly disposable items that are reused) are received, sorted, and decontaminated. The recommended airflow pattern should contain contaminants within the decontamination area and minimize the flow of contaminants to the clean areas. The American Institute of Architects 959 recommends negative pressure and no fewer than six air exchanges per hour in the decontamination area (AAMI recommends 10 air changes per hour) and 10 air changes per hour with positive pressure in the sterilizer equipment room. The packaging area is for inspecting, assembling, and packaging clean, but not sterile, material. The sterile storage area should be a limited-access area with a controlled temperature (may be as high as 75°F) and relative humidity (30-60% in all work areas except sterile storage, where the relative humidity should not exceed 70%) 819. The floors and walls should be constructed of materials capable of withstanding chemical agents used for cleaning or disinfecting. Ceilings and wall surfaces should be constructed of non-shedding materials. Physical arrangements of processing areas are presented schematically in four references 811,819,920,957.
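The environmental targets quoted above are concrete enough to check automatically, for example from building-monitoring data. The following sketch is hypothetical; it simply encodes the AIA/AAMI figures cited in the text.

```python
# Illustrative only: checking central-processing room readings against
# the environmental targets quoted in the text. Thresholds per AIA/AAMI
# as cited; function and field names are hypothetical.
def check_decontamination_room(ach: float, pressure: str) -> list:
    problems = []
    if ach < 6:                      # AIA: no fewer than 6 ACH (AAMI: 10)
        problems.append("air changes per hour below 6")
    if pressure != "negative":
        problems.append("room should be under negative pressure")
    return problems

def check_sterile_storage(temp_f: float, rh_percent: float) -> list:
    problems = []
    if temp_f > 75:                  # controlled temperature, up to 75°F
        problems.append("temperature above 75°F")
    if rh_percent > 70:              # sterile-storage RH should not exceed 70%
        problems.append("relative humidity above 70%")
    return problems

print(check_decontamination_room(ach=5, pressure="positive"))
print(check_sterile_storage(temp_f=72, rh_percent=55))  # [] means OK
```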
Cleaning. As repeatedly mentioned, items must be cleaned using water with detergents or enzymatic cleaners 465,466,468 before processing. Cleaning reduces the bioburden and removes foreign material (i.e., organic residue and inorganic salts) that interferes with the sterilization process by acting as a barrier to the sterilization agent 179,426,457,911,912. Surgical instruments are generally presoaked or prerinsed to prevent drying of blood and tissue. Precleaning in patient-care areas may be needed on items that are heavily soiled with feces, sputum, blood, or other material. Items sent to central processing without removal of gross soil may be difficult to clean because of dried secretions and excretions. Cleaning and decontamination should be done as soon as possible after items have been used. Several types of mechanical cleaning machines (e.g., utensil washer-sanitizer, ultrasonic cleaner, washer-sterilizer, dishwasher, washer-disinfector) may facilitate cleaning and decontamination of most items. This equipment often is automated and may increase productivity, improve cleaning effectiveness, and decrease worker exposure to blood and body fluids. Delicate and intricate objects and heat- or moisture-sensitive articles may require careful cleaning by hand. All used items sent to the central processing area should be considered contaminated (unless decontaminated in the area of origin), handled with gloves (forceps or tongs are sometimes needed to avoid exposure to sharps), and decontaminated by one of the aforementioned methods to render them safer to handle. Items composed of more than one removable part should be disassembled. Care should be taken to ensure that all parts are kept together, so that reassembly can be accomplished efficiently 811. Investigators have described the degree of cleanliness by visual and microscopic examination. One study found 91% of the instruments to be visually clean but, when examined microscopically, 84% of the instruments had residual debris. Sites that contained residual debris included junctions between insulating sheaths and activating mechanisms of laparoscopic instruments and articulations and grooves of forceps. More research is needed to understand the clinical significance of these findings 960 and how to ensure proper cleaning. Personnel working in the decontamination area should wear household-cleaning-type rubber or plastic gloves when handling or cleaning contaminated instruments and devices. Face masks, eye protection such as goggles or full-length face shields, and appropriate gowns should be worn when exposure to blood and contaminated fluids may occur (e.g., when manually cleaning contaminated devices) 961. Contaminated instruments are a source of microorganisms that could inoculate personnel through nonintact skin on the hands or through contact with the mucous membranes of eyes, nose, or mouth 214,811,813. Reusable sharps that have been in contact with blood present a special hazard. Employees must not reach with their gloved hands into trays or containers that hold these sharps to retrieve them 214. Rather, employees should use engineering controls (e.g., forceps) to retrieve these devices. Packaging. Once items are cleaned, dried, and inspected, those requiring sterilization must be wrapped or placed in rigid containers and should be arranged in instrument trays/baskets according to the guidelines provided by the AAMI and other professional organizations 454, 811-814, 819, 836, 962. These guidelines state that hinged instruments should be opened; items with removable parts should be disassembled unless the device manufacturer or researchers provide specific instructions or test data to the contrary 181; complex instruments should be prepared and sterilized according to the device manufacturer's instructions and test data; devices with concave surfaces should be positioned to facilitate drainage of water; heavy items should be positioned so as not to damage delicate items; and the weight of the instrument set should be based on the design and density of the instruments and the distribution of metal mass 811,962. While there is no longer a specified sterilization weight limit for surgical sets, heavy metal mass is a cause of wet packs (i.e., moisture inside the case and tray after completion of the sterilization cycle) 963. Other parameters that may influence drying are the density of the wraps and the design of the set 964.
There are several choices in methods to maintain sterility of surgical instruments, including rigid containers, peel-open pouches (e.g., self-sealed or heat-sealed plastic and paper pouches), roll stock or reels (i.e., paper-plastic combinations of tubing designed to allow the user to cut and seal the ends to form a pouch) 454, and sterilization wraps (woven and nonwoven). Healthcare facilities may use all of these packaging options. The packaging material must allow penetration of the sterilant, provide protection against contact contamination during handling, provide an effective barrier to microbial penetration, and maintain the sterility of the processed item after sterilization 965. An ideal sterilization wrap would successfully address barrier effectiveness, penetrability (i.e., allows sterilant to penetrate), aeration (e.g., allows ETO to dissipate), ease of use, drapeability, flexibility, puncture resistance, tear strength, toxicity, odor, waste disposal, linting, cost, and transparency 966. Packaging that is unacceptable for use with ETO (e.g., foil, polyvinyl chloride, and polyvinylidene chloride [kitchen-type transparent wrap]) 814 or hydrogen peroxide gas plasma (e.g., linens and paper) should not be used to wrap medical items. In central processing, double wrapping can be done sequentially or nonsequentially (i.e., simultaneous wrapping). Wrapping should be done in such a manner as to avoid tenting and gapping. The sequential wrap uses two sheets of the standard sterilization wrap, one wrapped after the other. This procedure creates a package within a package. The nonsequential process uses two sheets wrapped at the same time so that the wrapping needs to be performed only once. This latter method provides multiple layers of protection of surgical instruments from contamination and saves time since wrapping is done only once. Multiple layers are still common practice due to the rigors of handling within the facility, even though the barrier efficacy of a single sheet of wrap has improved over the years 966. Written and illustrated procedures for preparation of items to be packaged should be readily available and used by personnel when packaging procedures are performed 454. # Loading. All items to be sterilized should be arranged so all surfaces will be directly exposed to the sterilizing agent. Thus, loading procedures must allow for free circulation of steam (or another sterilant) around each item. Historically, it was recommended that muslin fabric packs should not exceed the maximal dimensions, weight, and density of 12 inches wide × 12 inches high × 20 inches long, 12 lbs, and 7.2 lbs per cubic foot, respectively. Due to the variety of textiles and metal/plastic containers on the market, the textile and metal/plastic container manufacturers and the sterilizer manufacturers should be consulted for instructions on pack preparation and density parameters 819. There are several important basic principles for loading a sterilizer: allow for proper sterilant circulation; place perforated trays so the tray is parallel to the shelf; place nonperforated containers (e.g., basins) on their edge; place small items loosely in wire baskets; and place peel packs on edge in perforated or mesh-bottom racks or baskets 454,811,836. Storage. Studies in the early 1970s suggested that wrapped surgical trays remained sterile for varying periods depending on the type of material used to wrap the trays.
Safe storage times for sterile packs vary with the porosity of the wrapper and storage conditions (e.g., open versus closed cabinets). Heat-sealed, plastic peel-down pouches and wrapped packs sealed in 3-mil (3/1000 inch) polyethylene overwrap have been reported to be sterile for as long as 9 months after sterilization. The 3-mil polyethylene is applied after sterilization to extend the shelf life of infrequently used items 967. Supplies wrapped in double-thickness muslin comprising four layers, or equivalent, remain sterile for at least 30 days. Any item that has been sterilized should not be used after the expiration date has been exceeded or if the sterilized package is wet, torn, or punctured. Although some hospitals continue to date every sterilized product and use the time-related shelf-life practice, many hospitals have switched to an event-related shelf-life practice. This latter practice recognizes that the product should remain sterile until some event causes the item to become contaminated (e.g., a tear in the packaging, the packaging becomes wet, the seal is broken) 968. Event-related factors that contribute to the contamination of a product include bioburden (i.e., the amount of contamination in the environment), air movement, traffic, location, humidity, insects, vermin, flooding, storage area space, open/closed shelving, temperature, and the properties of the wrap material 966,969. There are data that support the event-related shelf-life practice [970][971][972]. One study examined the effect of time on the sterile integrity of paper envelopes, peel pouches, and nylon sleeves. The most important finding was the absence of a trend toward an increased rate of contamination over time for any pack when placed in covered storage 971. Another study evaluated the effectiveness of event-related outdating by microbiologically testing sterilized items. During the 2-year study period, all of the items tested were sterile 972. Thus, contamination of a sterile item is event-related, and the probability of contamination increases with increased handling 973. Following the sterilization process, medical and surgical devices must be handled using aseptic technique in order to prevent contamination. Sterile supplies should be stored far enough from the floor (8 to 10 inches), the ceiling (5 inches, unless near a sprinkler head [18 inches from a sprinkler head]), and the outside walls (2 inches) to allow for adequate air circulation, ease of cleaning, and compliance with local fire codes (e.g., supplies must be at least 18 inches from sprinkler heads). Medical and surgical supplies should not be stored under sinks or in other locations where they can become wet. Sterile items that become wet are considered contaminated because moisture brings with it microorganisms from the air and surfaces. Closed or covered cabinets are ideal, but open shelving may be used for storage. Any package that has fallen or been dropped on the floor must be inspected for damage to the packaging and contents (if the items are breakable). If the package is heat-sealed in impervious plastic and the seal is still intact, the package should be considered not contaminated. If undamaged, items packaged in plastic need not be reprocessed.
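The storage clearances above can likewise be encoded in a short audit helper. The sketch below is hypothetical and uses only the distances quoted in the text.

```python
# Illustrative only: auditing sterile-storage shelf clearances against
# the distances quoted in the text. Names and structure are hypothetical.
def audit_shelf(floor_in: float, sprinkler_in: float, wall_in: float) -> list:
    findings = []
    if floor_in < 8:
        findings.append("less than 8 inches above the floor")
    if sprinkler_in < 18:
        findings.append("less than 18 inches below a sprinkler head")
    if wall_in < 2:
        findings.append("less than 2 inches from an outside wall")
    return findings

print(audit_shelf(floor_in=10, sprinkler_in=20, wall_in=3))  # [] means OK
print(audit_shelf(floor_in=4, sprinkler_in=12, wall_in=1))
```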
Monitoring. The sterilization procedure should be monitored routinely by using a combination of mechanical, chemical, and biological indicators to evaluate the sterilizing conditions and indirectly the microbiologic status of the processed items. The mechanical monitors for steam sterilization include the daily assessment of cycle time and temperature by examining the temperature record chart (or computer printout) and an assessment of pressure via the pressure gauge. The mechanical monitors for ETO include time, temperature, and pressure recorders that provide data via computer printouts, gauges, and/or displays 814. Generally, two essential elements for ETO sterilization (i.e., the gas concentration and humidity) cannot be monitored in healthcare ETO sterilizers. Chemical indicators are convenient, are inexpensive, and indicate that the item has been exposed to the sterilization process. In one study, chemical indicators were more likely than biological indicators to inaccurately indicate sterilization at marginal sterilization times (e.g., 2 minutes) 847. Chemical indicators should be used in conjunction with biological indicators but, based on current studies, should not replace them, because chemical indicators may indicate sterilization at marginal sterilization times and because only a biological indicator consisting of resistant spores can measure the microbial killing power of the sterilization process 847,974. Chemical indicators are affixed on the outside of each pack to show that the package has been processed through a sterilization cycle, but these indicators do not prove sterilization has been achieved. Preferably, a chemical indicator also should be placed on the inside of each pack to verify sterilant penetration. Chemical indicators usually are either heat- or chemical-sensitive inks that change color when one or more sterilization parameters (e.g., steam: time, temperature, and/or saturated steam; ETO: time, temperature, relative humidity, and/or ETO concentration) are present. Chemical indicators have been grouped into five classes based on their ability to monitor one or multiple sterilization parameters 813,819. If the internal and/or external indicator suggests inadequate processing, the item should not be used 815. An air-removal test (Bowie-Dick test) must be performed daily in an empty dynamic-air-removal sterilizer (e.g., prevacuum steam sterilizer) to ensure air removal. Biological indicators are recognized by most authorities as being closest to the ideal monitors of the sterilization process 974,975 because they measure the sterilization process directly by using the most resistant microorganisms (i.e., Bacillus spores), and not by merely testing the physical and chemical conditions necessary for sterilization. Since the Bacillus spores used in biological indicators are more resistant and present in greater numbers than are the common microbial contaminants found on patient-care equipment, the demonstration that the biological indicator has been inactivated strongly implies that other potential pathogens in the load have been killed 844. An ideal biological monitor of the sterilization process should be easy to use, be inexpensive, not be subject to exogenous contamination, provide positive results as soon as possible after the cycle so that corrective action may be accomplished, and provide positive results only when the sterilization parameters (e.g., steam: time, temperature, and/or saturated steam; ETO: time, temperature, relative humidity, and/or ETO concentration) are inadequate to kill microbial contaminants 847. Biological indicators are the only process indicators that directly monitor the lethality of a given sterilization process.
Spores used to monitor a sterilization process have demonstrated resistance to the sterilizing agent and are more resistant than the bioburden found on medical devices 179,911,912. B. atrophaeus spores (10⁶) are used to monitor ETO and dry heat, and G. stearothermophilus spores (10⁵) are used to monitor steam sterilization, hydrogen peroxide gas plasma, and liquid peracetic acid sterilizers. G. stearothermophilus is incubated at 55-60°C, and B. atrophaeus is incubated at 35-37°C. Steam and low-temperature sterilizers (e.g., hydrogen peroxide gas plasma, peracetic acid) should be monitored at least weekly with the appropriate commercial preparation of spores. If a sterilizer is used frequently (e.g., several loads per day), daily use of biological indicators allows earlier discovery of equipment malfunctions or procedural errors and thus minimizes the extent of patient surveillance and product recall needed in the event of a positive biological indicator 811. Each load should be monitored if it contains implantable objects. If feasible, implantable items should not be used until the results of spore tests are known to be negative.
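The pairing of indicator organism, spore population, and incubation temperature with each sterilization process, described above, is essentially a lookup table. The sketch below is a hypothetical encoding of those pairings as stated in the text.

```python
# Illustrative only: biological-indicator pairings as described in the
# text (organism, nominal spore count, incubation temperature range).
BIOLOGICAL_INDICATORS = {
    "ETO":                   ("B. atrophaeus",         10**6, "35-37°C"),
    "dry heat":              ("B. atrophaeus",         10**6, "35-37°C"),
    "steam":                 ("G. stearothermophilus", 10**5, "55-60°C"),
    "H2O2 gas plasma":       ("G. stearothermophilus", 10**5, "55-60°C"),
    "liquid peracetic acid": ("G. stearothermophilus", 10**5, "55-60°C"),
}

organism, spores, incubation = BIOLOGICAL_INDICATORS["steam"]
print(f"{organism}: {spores:.0e} spores, incubate at {incubation}")
```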
Originally, spore-strip biological indicators required up to 7 days of incubation to detect viable spores from marginal cycles (i.e., when few spores remained viable). The next generation of biological indicator was self-contained in plastic vials containing a spore-coated paper strip and a growth medium in a crushable glass ampoule. This indicator had a maximum incubation of 48 hours, but significant failures could be detected in ≤24 hours. A rapid-readout biological indicator that detects the presence of enzymes of G. stearothermophilus by reading a fluorescent product produced by the enzymatic breakdown of a nonfluorescent substrate has been marketed for more than 10 years. Studies demonstrate that the sensitivity of rapid-readout tests for steam sterilization (1 hour for 132°C gravity sterilizers, 3 hours for 121°C gravity and 132°C vacuum sterilizers) parallels that of the conventional sterilization-specific biological indicators 846,847,976,977 and the fluorescent rapid-readout results reliably predict 24- and 48-hour and 7-day growth 978. The rapid-readout biological indicator is a dual indicator system, as it also detects acid metabolites produced during growth of the G. stearothermophilus spores. This system is different from the indicator system consisting of an enzyme system of bacterial origin without spores. Independent comparative data using suboptimal sterilization cycles (e.g., reduced time or temperature) with the enzyme-based indicator system have not been published 979. A new rapid-readout ETO biological indicator has been designed for rapid and reliable monitoring of ETO sterilization processes. The indicator has been cleared by the FDA for use in the United States 400. The rapid-readout ETO biological indicator detects the presence of B. atrophaeus by detecting a fluorescent signal indicating the activity of an enzyme present within the B. atrophaeus organism, beta-glucosidase. The fluorescence indicates the presence of an active spore-associated enzyme and a sterilization process failure. This indicator also detects acid metabolites produced during growth of the B. atrophaeus spore. Per the manufacturer's data, the enzyme always was detected whenever viable spores were present. This was expected because the enzyme is relatively ETO resistant and is inactivated at a slightly longer exposure time than the spore. The rapid-readout ETO biological indicator can be used to monitor 100% ETO and ETO-HCFC mixture sterilization cycles. It has not been tested in ETO-CO2 mixture sterilization cycles. The standard biological indicator used for monitoring full-cycle steam sterilizers does not provide reliable monitoring of flash sterilizers 980. Biological indicators specifically designed for monitoring flash sterilization are now available, and studies comparing them have been published 846,847,981. Since sterilization failure can occur (about 1% for steam) 982, a procedure to follow in the event of positive spore tests with steam sterilization has been provided by CDC and the Association of periOperative Registered Nurses (AORN). The 1981 CDC recommendation is that "objects, other than implantable objects, do not need to be recalled because of a single positive spore test unless the steam sterilizer or the sterilization procedure is defective." The rationale for this recommendation is that single positive spore tests in sterilizers occur sporadically. They may occur for reasons such as slight variation in the resistance of the spores 983, improper use of the sterilizer, and laboratory contamination during culture (uncommon with self-contained spore tests). If the mechanical (e.g., time, temperature, pressure in the steam sterilizer) and chemical (internal and/or external) indicators suggest that the sterilizer was functioning properly, a single positive spore test probably does not indicate sterilizer malfunction, but the spore test should be repeated immediately 983. If the spore tests remain positive, use of the sterilizer should be discontinued until it is serviced 1. Similarly, AORN states that a single positive spore test does not necessarily indicate a sterilizer failure. If the test is positive, the sterilizer should immediately be rechallenged for proper use and function. Items, other than implantable ones, do not necessarily need to be recalled unless a sterilizer malfunction is found. If a sterilizer malfunction is discovered, the items must be considered nonsterile, and the items from the suspect load(s) should be recalled, insofar as possible, and reprocessed 984,839. A more conservative approach also has been recommended 813 in which any positive spore test is assumed to represent sterilizer malfunction and requires that all materials processed in that sterilizer, dating from the sterilization cycle having the last negative biological indicator to the next cycle showing satisfactory biological indicator challenge results, must be considered nonsterile and retrieved, if possible, and reprocessed. This more conservative approach should be used for sterilization methods other than steam (e.g., ETO, hydrogen peroxide gas plasma).
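The CDC/AORN response to a positive spore test reduces to a small decision tree: repeat the test, weigh the mechanical and chemical monitors, recall only on confirmed malfunction, and apply the conservative recall-everything approach to non-steam methods. The sketch below is a hypothetical paraphrase of that logic, not an operational policy.

```python
# Illustrative only: decision logic for a single positive spore test,
# paraphrasing the CDC/AORN approach described in the text.
def respond_to_positive_bi(other_monitors_ok: bool,
                           repeat_bi_positive: bool,
                           method: str) -> str:
    if method != "steam":
        # More conservative approach for ETO, H2O2 gas plasma, etc.
        return "recall and reprocess all loads since last negative BI"
    if repeat_bi_positive:
        return "remove sterilizer from service until serviced; recall loads"
    if other_monitors_ok:
        return "no recall needed (except implants); repeat test, keep monitoring"
    return "investigate sterilizer use and function; rechallenge"

print(respond_to_positive_bi(True, False, "steam"))
print(respond_to_positive_bi(True, True, "ETO"))
```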
However, no action is necessary if there is strong evidence for the biological indicator being defective 983 or if the growth medium contained a Bacillus contaminant 985. If patient-care items were used before retrieval, the infection control professional should assess the risk of infection in collaboration with central processing, surgical services, and risk management staff. The factors that should be considered include the chemical indicator result (e.g., a nonreactive chemical indicator may indicate the temperature was not achieved); the results of other biological indicators that followed the positive biological indicator (e.g., positive on Tuesday, negative on Wednesday); the parameters of the sterilizer associated with the positive biological indicator (e.g., reduced time at correct temperature); the time-temperature chart (or printout); and the microbial load associated with decontaminated surgical instruments (e.g., 85% of decontaminated surgical instruments have less than 100 CFU). The margin of safety in steam sterilization is sufficiently large that there is minimal infection risk associated with items in a load that show spore growth, especially if the item was properly cleaned and the temperature was achieved (e.g., as shown by an acceptable chemical indicator or temperature chart). There are no published studies that document disease transmission via a nonretrieved surgical instrument following a sterilization cycle with a positive biological indicator. False-positive biological indicators may occur from improper testing or faulty indicators. The latter may occur from improper storage, processing, product contamination, material failure, or variation in resistance of spores. Gram stain and subculture of a positive biological indicator may determine if a contaminant has created a false-positive result 839,986. However, in one incident, the broth used as growth medium contained a contaminant, B. coagulans, which resulted in broth turbidity at 55°C 985. Testing of paired biological indicators from different manufacturers can assist in assessing a product defect 839. False-positive biological indicators due to extrinsic contamination when using self-contained biological indicators should be uncommon. A biological indicator should not be considered a false-positive indicator until a thorough analysis of the entire sterilization process shows this to be likely. The size and composition of the biological indicator test pack should be standardized to create a significant challenge to air removal and sterilant penetration and to obtain interpretable results. There is a standard 16-towel pack recommended by AAMI for steam sterilization 813,819,987 consisting of 16 clean, preconditioned, reusable huck or absorbent surgical towels, each of which is approximately 16 inches by 26 inches. Each towel is folded lengthwise into thirds and then folded widthwise in the middle. One or more biological indicators are placed between the eighth and ninth towels in the approximate geometric center of the pack. When the towels are folded and placed one on top of another to form a stack (approximately 6 inches high), the pack should weigh approximately 3 pounds and should have a density of approximately 11.3 pounds per cubic foot 813. This test pack has not gained universal use as a standard pack that simulates the actual in-use conditions of steam sterilizers. Commercially available disposable test packs that have been shown to be equivalent to the AAMI 16-towel test pack also may be used. The test pack should be placed flat in an otherwise fully loaded sterilizer chamber, in the area least favorable to sterilization (i.e., the area representing the greatest challenge to the biological indicator). This area is normally in the front, bottom section of the sterilizer, near the drain 811,813.
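As a quick arithmetic check of the test-pack specification above, the implied pack volume can be computed from the stated weight and density and compared with a roughly 6-inch stack of folded towels. The folded-footprint estimate below is an assumption for illustration, not an AAMI figure.

```python
# Illustrative only: sanity-checking the AAMI 16-towel test-pack figures
# quoted in the text (about 3 lb at about 11.3 lb/ft^3).
WEIGHT_LB = 3.0
DENSITY_LB_FT3 = 11.3

implied_volume_in3 = WEIGHT_LB / DENSITY_LB_FT3 * 12**3
print(f"implied volume: {implied_volume_in3:.0f} in^3")  # ~459 in^3

# A 16 x 26 inch towel folded lengthwise into thirds and then in half
# gives a footprint of roughly 8 x 8.7 inches (an estimate, not a spec).
footprint_in2 = (16 / 2) * (26 / 3)
stack_height_in = implied_volume_in3 / footprint_in2
print(f"implied stack height: {stack_height_in:.1f} in")
# ~6.6 in, consistent with the approximately 6-inch stack described.
```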
A control biological indicator from the lot used for testing should be left unexposed to the sterilant and then incubated to verify the presterilization viability of the test spores and proper incubation. The most conservative approach would be to use a control for each run; however, less frequent use may be adequate (e.g., weekly). There also is a routine test pack for ETO, in which a biological indicator is placed in a plastic syringe with plunger, then placed in the folds of a clean surgical towel, and wrapped. Alternatively, commercially available disposable test packs that have been shown to be equivalent to the AAMI test pack may be used. The test pack is placed in the center of the sterilizer load 814. Sterilization records (mechanical, chemical, and biological) should be retained for a time period in compliance with standards (e.g., Joint Commission for the Accreditation of Healthcare Facilities requests 3 years) and state and federal regulations. In Europe, biological monitors are not used routinely to monitor the sterilization process. Instead, release of sterilized items is based on monitoring the physical conditions of the sterilization process, which is termed "parametric release." Parametric release requires that there is a defined quality system in place at the facility performing the sterilization and that the sterilization process be validated for the items being sterilized. At present in Europe, parametric release is accepted for steam, dry heat, and ionizing radiation processes, as the physical conditions are understood and can be monitored directly 988. For example, with steam sterilizers the load could be monitored with probes that would yield data on temperature, time, and humidity at representative locations in the chamber and compared to the specifications developed during the validation process. Periodic infection control rounds to areas using sterilizers to standardize the sterilizers' use may identify correctable variances in operator competence; documentation of sterilization records, including chemical and biological indicator test results; sterilizer maintenance and wrapping; and load numbering of packs. These rounds also may identify improvement activities to ensure that operators are adhering to established standards 989.
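Parametric release, as described above, amounts to comparing measured physical parameters against a validated specification instead of incubating a biological monitor. The sketch below is a hypothetical illustration of such a comparison; the specification values and names are invented examples.

```python
# Illustrative only: parametric-release style check comparing probe
# readings against a validated specification. All numbers are invented.
SPEC = {"temp_c_min": 132.0, "hold_min": 4.0}  # hypothetical steam spec

def parametric_release(probe_temps_c, minutes_per_reading: float) -> bool:
    """Release only if the validated temperature was held long enough
    at the monitored location in the chamber."""
    hold = sum(minutes_per_reading
               for t in probe_temps_c if t >= SPEC["temp_c_min"])
    return hold >= SPEC["hold_min"]

readings = [121.0, 128.5, 132.4, 132.6, 132.5, 132.7, 132.5]  # one probe
print(parametric_release(readings, minutes_per_reading=1.0))  # True
```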
# Reuse of Single-Use Medical Devices The reuse of single-use medical devices began in the late 1970s. Before this time most devices were considered reusable. Reuse of single-use devices increased as a cost-saving measure. Approximately 20 to 30% of U.S. hospitals reported that they reuse at least one type of single-use device. Reuse of single-use devices involves regulatory, ethical, medical, legal and economic issues and has been extremely controversial for more than two decades 990. The U.S. public has expressed increasing concern regarding the risk of infection and injury when reusing medical devices intended and labeled for single use. Although some investigators have demonstrated it is safe to reuse disposable medical devices such as cardiac electrode catheters, [991][992][993] additional studies are needed to define the risks 994 and document the benefits. In August 2000, FDA released a guidance document on single-use devices reprocessed by third parties or hospitals 995. In this guidance document, FDA states that hospitals or third-party reprocessors will be considered "manufacturers" and regulated in the same manner. A reused single-use device will have to comply with the same regulatory requirements that applied to the device when it was originally manufactured. This document presents FDA's intent to enforce premarket submission requirements within 6 months (February 2001) for class III devices (e.g., cardiovascular intra-aortic balloon pump, transluminal coronary angioplasty catheter); 12 months (August 2001) for class II devices (e.g., blood pressure cuff, bronchoscope biopsy forceps); and 18 months (February 2002) for class I devices (e.g., disposable medical scissors, ophthalmic knife). FDA uses two types of premarket requirements for nonexempt class I and II devices: a 510(k) submission, which may have to show that the device is as safe and effective as the same device when new, and a premarket approval application. The 510(k) submission must provide scientific evidence that the device is safe and effective for its intended use. FDA allowed hospitals a year to comply with the nonpremarket requirements (registration and listing, reporting adverse events associated with medical devices, quality system regulations, and proper labeling). The options for hospitals are to stop reprocessing single-use devices, comply with the rule, or outsource to a third-party reprocessor. The FDA guidance document does not apply to permanently implantable pacemakers, hemodialyzers, opened but unused single-use devices, or healthcare settings other than acute-care hospitals. The reuse of single-use medical devices continues to be an evolving area of regulation. For this reason, healthcare workers should refer to FDA (http://www.fda.gov/) for the latest guidance 996. # Conclusion When properly used, disinfection and sterilization can ensure the safe use of invasive and noninvasive medical devices. However, current disinfection and sterilization guidelines must be strictly followed. # Recommendations for Disinfection and Sterilization in Healthcare Facilities # A. Rationale The ultimate goal of the Recommendations for Disinfection and Sterilization in Health-Care Facilities, 2008, is to reduce rates of health-care-associated infections through appropriate use of both disinfection and sterilization. Each recommendation is categorized according to scientific evidence, theoretical rationale, applicability, and federal regulations. Examples are included in some recommendations to aid the reader; however, these examples are not intended to define the only method of implementing the recommendation. The CDC system for categorizing recommendations is defined in the following (Rankings) section. # B. Rankings Category IA. Strongly recommended for implementation and strongly supported by well-designed experimental, clinical, or epidemiologic studies. Category IB. Strongly recommended for implementation and supported by some experimental, clinical, or epidemiologic studies, and by a strong theoretical rationale. Category IC. Required by state or federal regulations. Because of state differences, readers should not assume that the absence of an IC recommendation implies the absence of state regulations. Category II. Suggested for implementation and supported by suggestive clinical or epidemiologic studies or by a theoretical rationale. No recommendation. Unresolved issue. These include practices for which insufficient evidence or no consensus exists regarding efficacy.
# Cleaning of Patient-Care Devices a. In hospitals, perform most cleaning, disinfection, and sterilization of patient-care devices in a central processing department in order to more easily control quality. Category II. 454,836,959 b. Meticulously clean patient-care items with water and detergent, or with water and enzymatic cleaners, before high-level disinfection or sterilization procedures. Category IB. 6, 83, 101, 104-106, 124, 179, 424-426, 436, 465, 471, 911-913, 1004 i. Remove visible organic residue (e.g., residue of blood and tissue) and inorganic salts with cleaning. Use cleaning agents that are capable of removing visible organic and inorganic residues. Category IB. 424-426, 466, 468, 469, 471, 908, 910 ii. Clean medical devices as soon as practical after use (e.g., at the point of use) because soiled materials become dried onto the instruments. Dried or baked materials on the instrument make the removal process more difficult and the disinfection or sterilization process less effective or ineffective. Category IB. 55,56,59,291,465,1005,1006 c. Perform either manual cleaning (i.e., using friction) or mechanical cleaning (e.g., with ultrasonic cleaners, washer-disinfectors, washer-sterilizers). Category IB. 426,456,471,999 d. If using an automatic washer/disinfector, ensure that the unit is used in accordance with the manufacturer's recommendations. Category IB. 7,133,155,725 e. Ensure that the detergents or enzymatic cleaners selected are compatible with the metals and other materials used in medical instruments. Ensure that the rinse step is adequate for removing cleaning residues to levels that will not interfere with subsequent disinfection/sterilization processes. Category II. 836,1004 f. Inspect equipment surfaces for breaks in integrity that would impair either cleaning or disinfection/sterilization. Discard or repair equipment that no longer functions as intended or cannot be properly cleaned and disinfected or sterilized. Category II. 888 # Indications for Sterilization, High-Level Disinfection, and Low-Level Disinfection a. Before use on each patient, sterilize critical medical and surgical devices and instruments that enter normally sterile tissue or the vascular system or through which a sterile body fluid flows (e.g., blood). See recommendation 7g for exceptions. Category IA. 179,497,821,822,907,911,912 b. Provide, at a minimum, high-level disinfection for semicritical patient-care equipment (e.g., gastrointestinal endoscopes, endotracheal tubes, anesthesia breathing circuits, and respiratory therapy equipment) that touches either mucous membranes or nonintact skin. Category IA. 6-8, 17, 20 # Selection and Use of Low-Level Disinfectants for Noncritical Patient-Care Devices a. Process noncritical patient-care devices using a disinfectant and the concentration of germicide listed in Table 1. Category IB. 17, 46-48, 50-52, 67, 68, 378, 382, 401 b. Disinfect noncritical medical devices (e.g., blood pressure cuff) with an EPA-registered hospital disinfectant using the label's safety precautions and use directions. Most EPA-registered hospital disinfectants have a label contact time of 10 minutes. However, multiple scientific studies have demonstrated the efficacy of hospital disinfectants against pathogens with a contact time of at least 1 minute. By law, all applicable label instructions on EPA-registered products must be followed. If the user selects exposure conditions that differ from those on the EPA-registered product label, the user assumes liability for any injuries resulting from off-label use and is potentially subject to enforcement action under FIFRA. Category IB.
17, 47, 48, 50, 51, 53-57, 59, 60, 62-64, 355, 378, 382 c. Ensure that, at a minimum, noncritical patient-care devices are disinfected when visibly soiled and on a regular basis (such as after use on each patient, or once daily or once weekly). Category II. 378,380,1008 d. If dedicated, disposable devices are not available, disinfect noncritical patient-care equipment after using it on a patient who is on contact precautions and before using this equipment on another patient. Category IB. 47,67,391,1009 # Cleaning and Disinfecting Environmental Surfaces in Healthcare Facilities a. Clean housekeeping surfaces (e.g., floors, tabletops) on a regular basis, when spills occur, and when these surfaces are visibly soiled. Category II. 23,378,380,382,1008,1010 b. Disinfect (or clean) environmental surfaces on a regular basis (e.g., daily, three times per week) and when surfaces are visibly soiled. Category II. 378,380,402,1008 c. Follow manufacturers' instructions for proper use of disinfecting (or detergent) products, such as recommended use-dilution, material compatibility, storage, shelf-life, and safe use and disposal. Category II. 327,365,404 d. Clean walls, blinds, and window curtains in patient-care areas when these surfaces are visibly contaminated or soiled. Category II. 1011 e. Prepare disinfecting (or detergent) solutions as needed and replace these with fresh solution frequently (e.g., replace floor mopping solution every three patient rooms, and change it no less often than at 60-minute intervals), according to the facility's policy. Category IB. 68,379 f. Decontaminate mop heads and cleaning cloths regularly to prevent contamination (e.g., launder and dry at least daily). Category II. 68,402,403 g. Use a one-step process and an EPA-registered hospital disinfectant designed for housekeeping purposes in patient-care areas where 1. uncertainty exists about the nature of the soil on the surfaces (e.g., blood or body fluid contamination versus routine dust or dirt); or 2. uncertainty exists about the presence of multidrug-resistant organisms on such surfaces. See 5n for recommendations on cleaning and disinfecting blood-contaminated surfaces. Category II. 23,47,48,51,214,378,379,382,416,1012 h. Detergent and water are adequate for cleaning surfaces in nonpatient-care areas (e.g., administrative offices). Category II. 23 i. Do not use high-level disinfectants/liquid chemical sterilants for disinfection of noncritical surfaces. Category IB. 23,69,318 j. Wet-dust horizontal surfaces regularly (e.g., daily, three times per week) using clean cloths moistened with an EPA-registered hospital disinfectant (or detergent). Prepare the disinfectant (or detergent) as recommended by the manufacturer. Category II. 68,378,380,402,403,1008 k. Disinfect noncritical surfaces with an EPA-registered hospital disinfectant according to the label's safety precautions and use directions. Most EPA-registered hospital disinfectants have a label contact time of 10 minutes. However, many scientific studies have demonstrated the efficacy of hospital disinfectants against pathogens with a contact time of at least 1 minute. By law, the user must follow all applicable label instructions on EPA-registered products. If the user selects exposure conditions that differ from those on the EPA-registered product label, the user assumes liability for any injuries resulting from off-label use and is potentially subject to enforcement action under FIFRA. Category II, IC. 17, 47, 48, 50, 51, 53-57, 59, 60, 62-64, 355, 378, 382 l.
Do not use disinfectants to clean infant bassinets and incubators while these items are occupied. If disinfectants (e.g., phenolics) are used for the terminal cleaning of infant bassinets and incubators, thoroughly rinse the surfaces of these items with water and dry them before these items are reused. Category IB. 17,739,740 m. Promptly clean and decontaminate spills of blood and other potentially infectious materials. Discard blood-contaminated items in compliance with federal regulations. Category IB, IC. 214 n. For site decontamination of spills of blood or other potentially infectious materials (OPIM), implement the following procedures. Use protective gloves and other PPE (e.g., when sharps are involved, use forceps to pick up sharps, and discard these items in a puncture-resistant container) appropriate for this task. Disinfect areas contaminated with blood spills using an EPA-registered tuberculocidal agent, a registered germicide on the EPA Lists D and E (i.e., products with specific label claims for HIV or HBV), or freshly diluted hypochlorite solution. Category II, IC. 1. * If sodium hypochlorite solutions are selected, use a 1:100 dilution (e.g., a 1:100 dilution of 5.25-6.15% sodium hypochlorite provides 525-615 ppm available chlorine; a worked example of this arithmetic follows this list) to decontaminate nonporous surfaces after a small spill (e.g., <10 mL) of either blood or OPIM. If a spill involves large amounts (e.g., >10 mL) of blood or OPIM, or involves a culture spill in the laboratory, use a 1:10 dilution for the first application of hypochlorite solution before cleaning in order to reduce the risk of infection during the cleaning process in the event of a sharps injury. Follow this decontamination process with a terminal disinfection, using a 1:100 dilution of sodium hypochlorite. Category IB, IC. 63,215,557 o. If the spill contains large amounts of blood or body fluids, clean the visible matter with disposable absorbent material, and discard the contaminated materials in appropriate, labeled containment. Category II, IC. 44,214 p. Use protective gloves and other PPE appropriate for this task. Category II, IC. 44,214 q. In units with high rates of endemic Clostridium difficile infection or in an outbreak setting, use dilute solutions of 5.25%-6.15% sodium hypochlorite (e.g., a 1:10 dilution of household bleach) for routine environmental disinfection. Currently, no products are EPA-registered specifically for inactivating C. difficile spores. Category II. [257][258][259] r. If chlorine solution is not prepared fresh daily, it can be stored at room temperature for up to 30 days in a capped, opaque plastic bottle, with a 50% reduction in chlorine concentration after 30 days of storage (e.g., 1000 ppm chlorine [approximately a 1:50 dilution] at day 0 decreases to 500 ppm chlorine by day 30). Category IB. 327,1014 s. An EPA-registered sodium hypochlorite product is preferred, but if such products are not available, generic versions of sodium hypochlorite solutions (e.g., household chlorine bleach) can be used. Category II. 44
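The ppm figures in the hypochlorite items above follow from simple dilution arithmetic: percent available chlorine times 10,000 gives ppm, divided by the dilution factor. The sketch below reproduces the quantities quoted in items n.1 and r; it is illustrative only.

```python
# Illustrative only: available-chlorine arithmetic for the hypochlorite
# dilutions quoted in the text. 1% available chlorine = 10,000 ppm.
def available_chlorine_ppm(stock_percent: float, dilution: int) -> float:
    """ppm of available chlorine for a 1:<dilution> dilution of stock."""
    return stock_percent * 10_000 / dilution

for pct in (5.25, 6.15):
    print(f"{pct}% bleach at 1:100 -> {available_chlorine_ppm(pct, 100):.0f} ppm")
    # prints 525 and 615 ppm, matching the range in recommendation n.1

# Item r: an unrefreshed solution loses about 50% of its chlorine in
# 30 days, e.g., ~1000 ppm (about a 1:50 dilution) falls to ~500 ppm.
print(available_chlorine_ppm(5.25, 50) * 0.5)  # ~525 ppm after 30 days
```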
6. Disinfectant Fogging a. Do not perform disinfectant fogging for routine purposes in patient-care areas. Category II. 23,228 Environmental Fogging Clarification Statement [December 2009]: CDC and HICPAC have recommendations in the 2008 Guideline for Disinfection and Sterilization in Healthcare Facilities that state that CDC does not support disinfectant fogging. These recommendations refer to the spraying or fogging of chemicals (e.g., formaldehyde, phenol-based agents, or quaternary ammonium compounds) as a way to decontaminate environmental surfaces or disinfect the air in patient rooms. The recommendation against fogging was based on studies in the 1970s that reported a lack of microbicidal efficacy (e.g., use of quaternary ammonium compounds in mist applications) but also adverse effects on healthcare workers and others in facilities where these methods were used. Furthermore, some of these chemicals are not EPA-registered for use in fogging-type applications. These recommendations do not apply to newer technologies involving fogging for room decontamination (e.g., ozone mists, vaporized hydrogen peroxide) that have become available since the 2008 recommendations were made. These newer technologies were assessed by CDC and HICPAC in the 2011 Guideline for the Prevention and Control of Norovirus Gastroenteritis Outbreaks in Healthcare Settings, which makes the recommendation: "More research is required to clarify the effectiveness and reliability of fogging, UV irradiation, and ozone mists to reduce norovirus environmental contamination. (No recommendation/unresolved issue)" The 2008 recommendations still apply; however, CDC does not yet make a recommendation regarding these newer technologies. This issue will be revisited as additional evidence becomes available. # High-Level Disinfection of Endoscopes a. To detect damaged endoscopes, test each flexible endoscope for leaks as part of each reprocessing cycle. Remove from clinical use any instrument that fails the leak test, and repair this instrument. Category II. 113,115,116 b. Immediately after use, meticulously clean the endoscope with an enzymatic cleaner that is compatible with the endoscope. Cleaning is necessary before both automated and manual disinfection. Category IA. 83, 101, 104-106, 113, 115, 116, 124, 126, 456, 465, 466, 471, 1015 c. Disconnect and disassemble endoscopic components (e.g., suction valves) as completely as possible and completely immerse all components in the enzymatic cleaner. Steam sterilize these components if they are heat stable. Category IB. 115,116,139,465,466 d. Flush and brush all accessible channels to remove all organic (e.g., blood, tissue) and other residue. Clean the external surfaces and accessories of the devices by using a soft cloth or sponge or brushes. Continue brushing until no debris appears on the brush. Category IA. 6,17,108,113,115,116,137,145,147,725,856,903 e. Use cleaning brushes appropriate for the size of the endoscope channel or port (e.g., bristles should contact surfaces). Cleaning items (e.g., brushes, cloth) should be disposable or, if they are not disposable, they should be thoroughly cleaned and either high-level disinfected or sterilized after each use. Category II. 113,115,116,1016 f. Discard enzymatic cleaners (or detergents) after each use because they are not microbicidal and, therefore, will not retard microbial growth. Category IB. 38,113,115,116,466 g. Process endoscopes (e.g., arthroscopes, cystoscopes, laparoscopes) that pass through normally sterile tissues using a sterilization procedure before each use; if this is not feasible, provide at least high-level disinfection. High-level disinfection of arthroscopes, laparoscopes, and cystoscopes should be followed by a sterile water rinse. Category IB. 1,17,31,32,35,89,90,113,554 h.
Phase out endoscopes that are critical items (e.g., arthroscopes, laparoscopes) but cannot be steam sterilized. Replace these endoscopes with steam-sterilizable instruments when feasible. Category II. i. Mechanically clean reusable accessories inserted into endoscopes (e.g., biopsy forceps or other cutting instruments) that break the mucosal barrier (e.g., ultrasonically clean biopsy forceps) and then sterilize these items between each patient. Category IA. 1,6,8,17,108,113,115,116,138,145,147,153,278 j. Use ultrasonic cleaning of reusable endoscopic accessories to remove soil and organic material from hard-to-clean areas. Category II. 116,145,148 k. Process endoscopes and accessories that contact mucous membranes as semicritical items, and use at least high-level disinfection after use on each patient. Category IA. 1, 6, 8, 17, 108, 113, 115, 116, 129, 138, 145-148, 152-154, 278 l. Use an FDA-cleared sterilant or high-level disinfectant for sterilization or high-level disinfection (Table 1). Category IA. 1, 6-8, 17, 85, 108, 113, 115, 116, 147 m. After cleaning, use formulations containing glutaraldehyde, glutaraldehyde with phenol/phenate, ortho-phthalaldehyde, hydrogen peroxide, or both hydrogen peroxide and peracetic acid to achieve high-level disinfection, followed by rinsing and drying (see Table 1 for recommended concentrations). Category IB. 1, 6-8, 17, 38, 85, 108, 113, 145-148 n. Extend exposure times beyond the minimum effective time for disinfecting semicritical patient-care equipment cautiously and conservatively, because extended exposure to a high-level disinfectant is more likely to damage delicate and intricate instruments, such as flexible endoscopes. The exposure times vary among the Food and Drug Administration (FDA)-cleared high-level disinfectants (Table 2). Category IB. 17,69,73,76,78,83 o. Federal regulations require following the FDA-cleared label claim for high-level disinfectants. The FDA-cleared labels for high-level disinfection with >2% glutaraldehyde at 25°C range from 20-90 minutes, depending upon the product, based on three-tier testing that includes AOAC sporicidal tests, simulated-use testing with mycobacteria, and in-use testing. Category IC. p. Several scientific studies and professional organizations support the efficacy of >2% glutaraldehyde for 20 minutes at 20°C; that efficacy assumes adequate cleaning prior to disinfection, whereas the FDA-cleared label claim incorporates an added margin of safety to accommodate possible lapses in cleaning practices. Facilities that have chosen to apply the 20-minute duration at 20°C have done so based on the IA recommendation in the July 2003 SHEA position paper, "Multi-society Guideline for Reprocessing Flexible Gastrointestinal Endoscopes." 12, 17, 19, 26, 27, 49, 55, 57, 58, 60, 73, 76, 79-81, 83-85, 93, 94, 104-106, 110, 111, 115-121, 124, 125, 233, 235, 236, 243, 265, 266, 609 Update [June 2011]: Multisociety guideline on reprocessing flexible gastrointestinal endoscopes: 2011 (http://www.asge.org/uploadedFiles/Publications_and_Products/Practice_Guidelines/Multisociety%20guideline%20on%20reprocessing%20flexible%20gastrointestinal.pdf). q. When using FDA-cleared high-level disinfectants, use manufacturers' recommended exposure conditions.
Certain products may require a shorter exposure time (e.g., 0.55% ortho-phthalaldehyde for 12 minutes at 20°C, 7.35% hydrogen peroxide plus 0.23% peracetic acid for 15 minutes at 20°C) than glutaraldehyde at room temperature because of their rapid inactivation of mycobacteria, or a reduced exposure time because of increased mycobactericidal activity at elevated temperature (e.g., 2.5% glutaraldehyde for 5 minutes at 35°C). Category IB. 83,100,689,693,694,700 r. Select a disinfectant or chemical sterilant that is compatible with the device that is being reprocessed. Avoid using reprocessing chemicals on an endoscope if the endoscope manufacturer warns against using these chemicals because of functional damage (with or without cosmetic damage). Category IB. 69,113,116 s. Completely immerse the endoscope in the high-level disinfectant, and ensure all channels are perfused. As soon as is feasible, phase out nonimmersible endoscopes. Category IB. 108, 113-116, 137, 725, 856, 882 t. After high-level disinfection, rinse endoscopes and flush channels with sterile water, filtered water, or tapwater to prevent adverse effects on patients associated with disinfectant retained in the endoscope (e.g., disinfectant-induced colitis). Follow this water rinse with a rinse with 70%-90% ethyl or isopropyl alcohol. Category IB. 17, 31-35, 38, 39, 108, 113, 115, 116, 134, 145-148, 620-622, 624-630, 1017 u. After flushing all channels with alcohol, purge the channels using forced air to reduce the likelihood of contamination of the endoscope by waterborne pathogens and to facilitate drying. Category IB. 39,113,115,116,145,147 v. Hang endoscopes in a vertical position to facilitate drying. Category II. 17,108,113,115,116,145,815 w. Store endoscopes in a manner that will protect them from damage or contamination. Category II. 17, 108,113,115,116,145 x. Sterilize or high-level disinfect both the water bottle used to provide intraprocedural flush solution and its connecting tube at least once daily. After sterilizing or high-level disinfecting the water bottle, fill it with sterile water. Category IB. 10, 31-35, 113, 116, 1017 y. Maintain a log for each procedure and record the following: patient's name and medical record number (if available), procedure, date, endoscopist, system used to reprocess the endoscope (if more than one system could be used in the reprocessing area), and serial number or other identifier of the endoscope used. Category II. 108,113,115,116 z. Design facilities where endoscopes are used and disinfected to provide a safe environment for healthcare workers and patients. Use air-exchange equipment (e.g., the ventilation system, out-exhaust ducts) to minimize exposure of all persons to potentially toxic vapors (e.g., glutaraldehyde vapor). Do not exceed the allowable limits of the vapor concentration of the chemical sterilant or high-level disinfectant (e.g., those of ACGIH and OSHA). Category IB, IC. 116,145,318,322,577,652 aa. Routinely test the liquid sterilant/high-level disinfectant to ensure the minimal effective concentration of the active ingredient. Check the solution each day of use (or more frequently) using the appropriate chemical indicator (e.g., a glutaraldehyde chemical indicator to test the minimal effective concentration of glutaraldehyde) and document the results of this testing. Discard the solution if the chemical indicator shows the concentration is less than the minimum effective concentration.
Do not use the liquid sterilant/high-level disinfectant beyond the reuse-life recommended by the manufacturer (e.g., 14 days for ortho-phthalaldehyde). Category IA. 76,108,113,115,116,608,609 (see the sketch following this section)
ab. Provide personnel assigned to reprocess endoscopes with device-specific reprocessing instructions to ensure proper cleaning and high-level disinfection or sterilization. Require competency testing on a regular basis (e.g., beginning of employment, annually) of all personnel who reprocess endoscopes. Category IA. 6-8, 108, 113, 115, 116, 145, 148, 155
ac. Educate all personnel who use chemicals about the possible biologic, chemical, and environmental hazards of performing procedures that require disinfectants. Category IB, IC. 116,997,998,1018,1019
ad. Make PPE (e.g., gloves, gowns, eyewear, face mask or shields, respiratory protection devices) available and use these items appropriately to protect workers from exposure to both chemicals and microorganisms (e.g., HBV). Category IB, IC. 115,116,214,961,997,998,1020,1021
ae. If using an automated endoscope reprocessor (AER), place the endoscope in the reprocessor and attach all channel connectors according to the AER manufacturer's instructions to ensure exposure of all internal surfaces to the high-level disinfectant/chemical sterilant. Category IB. 7,8,115,116,155,725,903
af. If using an AER, ensure the endoscope can be effectively reprocessed in the AER. Also, ensure any required manual cleaning/disinfecting steps are performed (e.g., the elevator wire channel of duodenoscopes might not be effectively disinfected by most AERs). Category IB. 7,8,115,116,155,725
ag. Review the FDA advisories and the scientific literature for reports of deficiencies that can lead to infection, because design flaws and improper operation and practices have compromised the effectiveness of AERs. Category II. 7,98,133,134,155,725
ah. Develop protocols to ensure that users can readily identify an endoscope that has been properly processed and is ready for patient use. Category II.
ai. Do not use the carrying case designed to transport clean and reprocessed endoscopes outside of the healthcare environment to store an endoscope or to transport the instrument within the healthcare environment. Category II.
aj. No recommendation is made about routinely performing microbiologic testing of either endoscopes or rinse water for quality assurance purposes. Unresolved issue. 116,164
ak. If environmental microbiologic testing is conducted, use standard microbiologic techniques. Category II. 23,116,157,161,167
al. If a cluster of endoscopy-related infections occurs, investigate potential routes of transmission (e.g., person-to-person, common source) and reservoirs. Category IA. 8,1022
am. Report outbreaks of endoscope-related infections to persons responsible for institutional infection control and risk management and to FDA. Category IB. 6,7,113,116,1023
1. Notify the local and the state health departments, CDC, and the manufacturer(s). Category II.
an. No recommendation is made regarding the reprocessing of an endoscope again immediately before use if that endoscope has been processed after use according to the recommendations in this guideline. Unresolved issue. 157
ao. Compare the reprocessing instructions provided by the endoscope manufacturer and the AER manufacturer and resolve any conflicting recommendations. Category IB. 116,155
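Recommendations y and aa above amount to structured record-keeping plus a simple concentration check. The following Python sketch illustrates both; the class, field, and function names and the example MEC value are illustrative assumptions, not part of the guideline.

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

# Sketch of the per-procedure log in recommendation y and the daily
# minimum-effective-concentration (MEC) check in recommendation aa.
# Field names and the example MEC value are hypothetical.

@dataclass
class ReprocessingLogEntry:
    procedure_date: date
    patient_name: str
    medical_record_number: Optional[str]  # "if available"
    procedure: str
    endoscopist: str
    reprocessing_system: str              # relevant if more than one system is in use
    endoscope_id: str                     # serial number or other identifier

def solution_usable(indicator_reading_pct: float, mec_pct: float,
                    days_in_use: int, max_reuse_days: int) -> bool:
    """Discard the solution once the chemical indicator falls below the MEC
    or the manufacturer's reuse life (e.g., 14 days for OPA) is exceeded."""
    return indicator_reading_pct >= mec_pct and days_in_use <= max_reuse_days

# Example: glutaraldehyde tested at 1.8% against a hypothetical 1.5% MEC, day 10 of 14.
print(solution_usable(1.8, 1.5, days_in_use=10, max_reuse_days=14))  # True
```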
# Management of Equipment and Surfaces in Dentistry
a. Dental instruments that penetrate soft tissue or bone (e.g., extraction forceps, scalpel blades, bone chisels, periodontal scalers, and surgical burs) are classified as critical and should be sterilized after each use or discarded. In addition, after each use, sterilize dental instruments that are not intended to penetrate oral soft tissue or bone (e.g., amalgam condensers, air-water syringes) but that might contact oral tissues and are heat-tolerant, although they are classified as semicritical. Clean and, at a minimum, high-level disinfect heat-sensitive semicritical items. Category IA. 43, 209-211
b. Noncritical clinical contact surfaces, such as uncovered operatory surfaces (e.g., countertops, switches, light handles), should be barrier-protected or disinfected between patients with an intermediate-level disinfectant (i.e., EPA-registered hospital disinfectant with a tuberculocidal claim) or low-level disinfectant (i.e., EPA-registered hospital disinfectant with HIV and HBV claims). Category IB. 43, 209-211
c. Barrier protective coverings can be used for noncritical clinical contact surfaces that are touched frequently with gloved hands during the delivery of patient care, that are likely to become contaminated with blood or body substances, or that are difficult to clean. Change these coverings when they are visibly soiled, when they become damaged, and on a routine basis (e.g., between patients). Disinfect protected surfaces at the end of the day or if visibly soiled. Category II. 43,210
10. Disinfection Strategies for Other Semicritical Devices
b. When probe covers are available, use a probe cover or condom to reduce the level of microbial contamination. Category II. 197-201 Do not use a lower category of disinfection or cease to follow the appropriate disinfectant recommendations when using probe covers because these sheaths and condoms can fail. Category IB. 197-201
c. After high-level disinfection, rinse all items. Use sterile water, filtered water, or tap water followed by an alcohol rinse for semicritical equipment that will have contact with mucous membranes of the upper respiratory tract (e.g., nose, pharynx, esophagus). Category II. 10, 31-35, 1017
d. There is no recommendation to use sterile or filtered water rather than tap water for rinsing semicritical equipment that contacts the mucous membranes of the rectum (e.g., rectal probes, anoscope) or vagina (e.g., vaginal probes). Unresolved issue. 11
e. Wipe clean tonometer tips and then disinfect them by immersing for 5-10 minutes in either 5000 ppm chlorine or 70% ethyl alcohol. None of these listed disinfectant products are FDA-cleared high-level disinfectants. Category II. 49,95,185,188,293
11. Disinfection by Healthcare Personnel in Ambulatory Care and Home Care
a. Follow the same classification scheme described above (i.e., critical devices require sterilization, semicritical devices require high-level disinfection, and noncritical equipment requires low-level disinfection) in the ambulatory-care (outpatient medical/surgical facilities) setting because the risk for infection in this setting is similar to that in the hospital setting (see Table 1). Category IB. 6-8, 17, 330
b. When performing care in the home, clean and disinfect reusable objects that touch mucous membranes (e.g., tracheostomy tubes) by immersing these objects in a 1:50 dilution of 5.25%-6.15% sodium hypochlorite (household bleach) (3 minutes), 70% isopropyl alcohol (5 minutes), or 3% hydrogen peroxide (30 minutes) because the home environment is, in most instances, safer than either hospital or ambulatory-care settings because person-to-person transmission is less likely. Category II. 327,328,330,331
c. Clean noncritical items that would not be shared between patients (e.g., crutches, blood pressure cuffs) with a detergent or commercial household disinfectant. Category II.
e. Completely aerate surgical and medical items that have been sterilized in the EtO sterilizer (e.g., polyvinylchloride tubing requires 12 hours at 50°C, 8 hours at 60°C) before using these items in patient care. Category IB. 814
f. Sterilization using the peracetic acid immersion system can be used to sterilize heat-sensitive immersible medical and surgical items. Category IB. 90, 717-719, 721-724
g. Critical items that have been sterilized by the peracetic acid immersion process must be used immediately (i.e., items are not completely protected from contamination, making long-term storage unacceptable). Category II. 817,825
h. Dry-heat sterilization (e.g., 340°F for 60 minutes) can be used to sterilize items (e.g., powders, oils) that can sustain high temperatures. Category IB. 815,827
i. Comply with the sterilizer manufacturer's instructions regarding the sterilizer cycle parameters (e.g., time, temperature, concentration). Category IB. 155, 725, 811-814, 819
j. Because narrow-lumen devices provide a challenge to all low-temperature sterilization technologies and direct contact is necessary for the sterilant to be effective, ensure that the sterilant has direct contact with contaminated surfaces (e.g., scopes processed in peracetic acid must be connected to channel irrigators). Category IB.
d. The shelf life of a packaged sterile item depends on the quality of the wrapper, the storage conditions, the conditions during transport, the amount of handling, and other events (e.g., moisture) that compromise the integrity of the package. If event-related storage of sterile items is used, then packaged sterile items can be used indefinitely unless the packaging is compromised (see f and g below). Category IB. 816,819,836,968,973,1030,1031
e. Evaluate packages before use for loss of integrity (e.g., torn, wet, punctured). The pack can be used unless the integrity of the packaging is compromised. Category II. 819,968
f. If the integrity of the packaging is compromised (e.g., torn, wet, or punctured), repack and reprocess the pack before use. Category II. 819,1032
g. If time-related storage of sterile items is used, label the pack at the time of sterilization with an expiration date. Once this date expires, reprocess the pack. Category II. 819,968 (a release check combining items d-g is sketched below)
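The storage recommendations d-g above reduce to a small release decision: event-related storage checks only package integrity, while time-related storage additionally checks the labeled expiration date. A minimal sketch, with hypothetical function and parameter names:

```python
from datetime import date
from typing import Optional

def pack_may_be_used(integrity_intact: bool,
                     expiration: Optional[date] = None,
                     today: Optional[date] = None) -> bool:
    """Event-related storage: a pack is usable indefinitely unless its packaging
    is compromised (torn, wet, punctured). Time-related storage additionally
    requires that the labeled expiration date has not passed; once it has,
    the pack must be reprocessed."""
    if not integrity_intact:
        return False  # repack and reprocess before use
    if expiration is not None:  # time-related storage
        return (today or date.today()) <= expiration
    return True  # event-related storage

print(pack_may_be_used(True))                            # event-related: usable
print(pack_may_be_used(True, expiration=date(2008, 1, 1),
                       today=date(2008, 6, 1)))          # expired: reprocess
```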
19. Quality Control
a. Provide comprehensive and intensive training for all staff assigned to reprocess semicritical and critical medical/surgical instruments to ensure they understand the importance of reprocessing these instruments. To achieve and maintain competency, train each member of the staff that reprocesses semicritical and/or critical instruments as follows: 1. provide hands-on training according to the institutional policy for reprocessing critical and semicritical devices; 2. supervise all work until competency is documented for each reprocessing task; 3. conduct competency testing at the beginning of employment and regularly thereafter (e.g., annually); and 4. review the written reprocessing instructions regularly to ensure they comply with the scientific literature and the manufacturers' instructions. Category IB. 6-8, 108, 114, 129, 155, 725, 813, 819
b. Compare the reprocessing instructions (e.g., for the appropriate use of endoscope connectors, the capping/noncapping of specific lumens) provided by the instrument manufacturer and the sterilizer manufacturer and resolve any conflicting recommendations by communicating with both manufacturers. Category IB.
# Glossary
Action level: concentration of a regulated substance (e.g., ethylene oxide, formaldehyde) within the employee breathing zone, above which OSHA requirements apply.
Activation of a sterilant: process of mixing the contents of a chemical sterilant that comes in two containers (a small vial with the activator solution; a container of the chemical). Keeping the two chemicals separate until use extends the shelf life of the chemicals.
Aeration: method by which ethylene oxide (EtO) is removed from EtO-sterilized items by warm air circulation in an enclosed cabinet specifically designed for this purpose.
Antimicrobial agent: any agent that kills or suppresses the growth of microorganisms.
Antiseptic: substance that prevents or arrests the growth or action of microorganisms by inhibiting their activity or by destroying them. The term is used especially for preparations applied topically to living tissue.
Asepsis: prevention of contact with microorganisms.
Autoclave: device that sterilizes instruments or other objects using steam under pressure. The length of time required for sterilization depends on temperature, vacuum, and pressure.
Bacterial count: method of estimating the number of bacteria per unit sample. The term also refers to the estimated number of bacteria per unit sample, usually expressed as number of colony-forming units.
Bactericide: agent that kills bacteria.
Bioburden: number and types of viable microorganisms with which an item is contaminated; also called bioload or microbial load.
Biofilm: accumulated mass of bacteria and extracellular material that is tightly adhered to a surface and cannot be easily removed.
Biologic indicator: device for monitoring the sterilization process. The device consists of a standardized, viable population of microorganisms (usually bacterial spores) known to be resistant to the sterilization process being monitored. Biologic indicators are intended to demonstrate whether conditions were adequate to achieve sterilization. A negative biologic indicator does not prove that all items in the load are sterile or that they were all exposed to adequate sterilization conditions.
Bleach: household bleach (which includes 5.25% or 6.00%-6.15% sodium hypochlorite, depending on the manufacturer) is usually diluted in water at 1:10 or 1:100. Approximate dilutions are 1.5 cups of bleach in a gallon of water for a 1:10 dilution (~6,000 ppm) and 0.25 cup of bleach in a gallon of water for a 1:100 dilution (~600 ppm). Sodium hypochlorite products that make pesticidal claims, such as sanitization or disinfection, must be registered by EPA and be labeled with an EPA Registration Number.
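The dilution arithmetic in the Bleach entry can be checked directly; 1% sodium hypochlorite corresponds to roughly 10,000 ppm. A minimal sketch with an illustrative function name:

```python
# Approximate available chlorine after diluting household bleach, following the
# arithmetic in the Bleach entry above (1% sodium hypochlorite ~ 10,000 ppm).

def available_chlorine_ppm(hypochlorite_pct: float, dilution: int) -> float:
    """dilution=10 means 1 part bleach to 9 parts water (a 1:10 dilution)."""
    return hypochlorite_pct * 10_000 / dilution

print(round(available_chlorine_ppm(6.0, 10)))    # ~6000 ppm, the 1:10 figure above
print(round(available_chlorine_ppm(6.0, 100)))   # ~600 ppm, the 1:100 figure above
print(round(available_chlorine_ppm(5.25, 50)))   # ~1050 ppm, the >1000 ppm 1:50 spill dilution
```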
Bowie-Dick test: diagnostic test of a sterilizer's ability to remove air from the chamber of a prevacuum steam sterilizer. The air-removal or Bowie-Dick test is not a test for sterilization.
Ceiling limit: concentration of an airborne chemical contaminant that should not be exceeded during any part of the workday. If instantaneous monitoring is not feasible, the ceiling must be assessed as a 15-minute time-weighted average exposure.
Centigrade or Celsius: a temperature scale (0°C = freezing point of water; 100°C = boiling point of water at sea level). Equivalents mentioned in the guideline are as follows: 20°C = 68°F; 25°C = 77°F; 121°C = 250°F; 132°C = 270°F; 134°C = 273°F. For other temperatures the formula is: °F = (°C × 9/5) + 32, or °C = (°F - 32) × 5/9 (see the sketch below).
Central processing or Central service department: the department within a health-care facility that processes, issues, and controls professional supplies and equipment, both sterile and nonsterile, for some or all patient-care areas of the facility.
Challenge test pack: pack used in installation, qualification, and ongoing quality assurance testing of health-care facility sterilizers.
Contact time: time a disinfectant is in direct contact with the surface or item to be disinfected. For surface disinfection, this period runs from application to the surface until complete drying has occurred.
Container system, rigid container: sterilization containment device designed to hold medical devices for sterilization, storage, transportation, and aseptic presentation of contents.
Contaminated: state of having actual or potential contact with microorganisms. As used in health care, the term generally refers to the presence of microorganisms that could produce disease or infection.
Control, positive: biologic indicator, from the same lot as a test biologic indicator, that is left unexposed to the sterilization cycle and then incubated to verify the viability of the test biologic indicator.
Cleaning: removal, usually with detergent and water or enzyme cleaner and water, of adherent visible soil, blood, protein substances, microorganisms, and other debris from the surfaces, crevices, serrations, joints, and lumens of instruments, devices, and equipment by a manual or mechanical process that prepares the items for safe handling and/or further decontamination.
Culture: growth of microorganisms in or on a nutrient medium; to grow microorganisms in or on such a medium.
Culture medium: substance or preparation used to grow and cultivate microorganisms.
Cup: 8 fluid ounces.
Decontamination: according to OSHA, "the use of physical or chemical means to remove, inactivate, or destroy bloodborne pathogens on a surface or item to the point where they are no longer capable of transmitting infectious particles and the surface or item is rendered safe for handling, use, or disposal" [29 CFR 1910.1030]. In health-care facilities, the term generally refers to all pathogenic organisms.
Decontamination area: area of a health-care facility designated for collection, retention, and cleaning of soiled and/or contaminated items.
Detergent: cleaning agent that makes no antimicrobial claims on the label. Detergents comprise a hydrophilic component and a lipophilic component and can be divided into four types: anionic, cationic, amphoteric, and non-ionic.
Disinfectant: usually a chemical agent (but sometimes a physical agent) that destroys disease-causing pathogens or other harmful microorganisms but might not kill bacterial spores. It refers to substances applied to inanimate objects. EPA groups disinfectants by product label claims of "limited," "general," or "hospital" disinfection.
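The conversion formula in the Centigrade entry translates directly to code; this sketch reproduces the equivalents listed in that entry:

```python
# Direct implementation of the conversion formulas in the Centigrade entry above.

def c_to_f(c: float) -> float:
    return c * 9 / 5 + 32

def f_to_c(f: float) -> float:
    return (f - 32) * 5 / 9

# Reproduces the equivalents listed in the glossary (68, 77, 250, 270, 273°F):
for c in (20, 25, 121, 132, 134):
    print(f"{c}°C = {c_to_f(c):.0f}°F")
```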
Disinfection: thermal or chemical destruction of pathogenic and other types of microorganisms. Disinfection is less lethal than sterilization because it destroys most recognized pathogenic microorganisms but not necessarily all microbial forms (e.g., bacterial spores).
D value: time or radiation dose required to inactivate 90% of a population of the test microorganism under stated exposure conditions (see the worked example below).
Endoscope: an instrument that allows examination and treatment of the interior of body canals and hollow organs.
Enzyme cleaner: a solution used before disinfecting instruments to improve removal of organic material (e.g., proteases to assist in removing protein).
EPA Registration Number or EPA Reg. No.: a hyphenated, two- or three-part number assigned by EPA to identify each germicidal product registered within the United States. The first number is the company identification number, the second is the specific product number, and the third (when present) is the company identification number for a supplemental registrant.
Exposure time: period in a sterilization process during which items are exposed to the sterilant at the specified sterilization parameters. For example, in a steam sterilization process, exposure time is the period during which items are exposed to saturated steam at the specified temperature.
Flash sterilization: process designed for the steam sterilization of unwrapped patient-care items for immediate use (or placed in a specially designed, covered, rigid container to allow for rapid penetration of steam).
Fungicide: agent that destroys fungi (including yeasts) and/or fungal spores pathogenic to humans or other animals in the inanimate environment.
General disinfectant: EPA-registered disinfectant labeled for use against both gram-negative and gram-positive bacteria. Efficacy is demonstrated against both Salmonella choleraesuis and Staphylococcus aureus. Also called broad-spectrum disinfectant.
Germicide: agent that destroys microorganisms, especially pathogenic organisms.
Microorganisms: animals or plants of microscopic size. As used in health care, generally refers to bacteria, fungi, viruses, and bacterial spores.
Minimum effective concentration (MEC): the minimum concentration of a liquid chemical germicide needed to achieve the claimed microbicidal activity as determined by dose-response testing. Sometimes used interchangeably with minimum recommended concentration.
Muslin: loosely woven (by convention, 140 threads per square inch), 100% cotton cloth. Formerly used as a wrap for sterile packs or a surgical drape. Fabric wraps used currently consist of a cotton-polyester blend.
Mycobacteria: bacteria with a thick, waxy coat that makes them more resistant to chemical germicides than other types of vegetative bacteria.
Nonlipid viruses: generally considered more resistant to inactivation than lipid viruses. Also called nonenveloped or hydrophilic viruses.
One-step disinfection process: simultaneous cleaning and disinfection of a noncritical surface or item.
Pasteurization: process developed by Louis Pasteur of heating milk, wine, or other liquids to 65-77°C (or the equivalent) for approximately 30 minutes to kill or markedly reduce the number of pathogenic and spoilage organisms other than bacterial spores.
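The D value entry implies the usual first-order survival model; as a worked illustration (the starting population and D value here are hypothetical, not from the guideline):

```latex
% Survival under first-order inactivation: N_0 organisms at time zero,
% decimal-reduction value D, exposure time t.
\[
N(t) = N_0 \cdot 10^{-t/D}
\]
% Hypothetical example: N_0 = 10^6 spores and D = 1.5 min give
\[
N(9\,\mathrm{min}) = 10^{6} \cdot 10^{-9/1.5} = 10^{0} = 1 \text{ survivor.}
\]
```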
Parametric release: declaration that a product is sterile on the basis of physical and/or chemical process data rather than on sample testing or biologic indicator results.
Penicylinder: carriers inoculated with the test bacteria for in vitro tests of germicides. Can be constructed of stainless steel, porcelain, glass, or other materials and are approximately 8 x 10 mm in diameter.
Prions: transmissible pathogenic agents that cause a variety of neurodegenerative diseases of humans and animals, including scrapie in sheep and goats, bovine spongiform encephalopathy in cattle, and Creutzfeldt-Jakob disease in humans. They are unlike any other infectious pathogens because they are composed of an abnormal conformational isoform of a normal cellular protein, the prion protein (PrP). Prions are extremely resistant to inactivation by sterilization processes and disinfecting agents.
Process challenge device (PCD): item designed to simulate product to be sterilized and to constitute a defined challenge to the sterilization process and used to assess the effective performance of the process. A PCD is a challenge test pack or test tray that contains a biologic indicator, a Class 5 integrating indicator, or an enzyme-only indicator.
QUAT: abbreviation for quaternary ammonium compound, a surface-active, water-soluble disinfecting substance that has four carbon atoms linked to a nitrogen atom through covalent bonds.
Recommended exposure limit (REL): occupational exposure limit recommended by NIOSH as being protective of worker health and safety over a working lifetime. Frequently expressed as a 40-hour time-weighted-average exposure for up to 10 hours per day during a 40-hour workweek.
Reprocess: method to ensure proper disinfection or sterilization; can include cleaning, inspection, wrapping, sterilizing, and storing.
Sanitizer: agent that reduces the number of bacterial contaminants to safe levels as judged by public health requirements. Commonly used with substances applied to inanimate objects. According to the protocol for the official sanitizer test, a sanitizer is a chemical that kills 99.999% of the specific test bacteria in 30 seconds under the conditions of the test.
Shelf life: length of time an undiluted or use-dilution of a product can remain active and effective. Also refers to the length of time a sterilized product (e.g., sterile instrument set) is expected to remain sterile.
Spaulding classification: strategy for reprocessing contaminated medical devices. The system classifies a medical device as critical, semicritical, or noncritical on the basis of the risk to patient safety from contamination on the device. The system also established three levels of germicidal activity (sterilization, high-level disinfection, and low-level disinfection) for strategies with the three classes of medical devices (critical, semicritical, and noncritical) (see the sketch below).
Spore: relatively water-poor round or elliptical resting cell consisting of condensed cytoplasm and nucleus surrounded by an impervious cell wall or coat. Spores are relatively resistant to disinfectant and sterilant activity and to drying conditions (specifically in the genera Bacillus and Clostridium).
Spore strip: paper strip impregnated with a known population of spores that meets the definition of biological indicators.
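The Spaulding classification entry maps each device class to a required level of processing. A minimal encoding; the comments give common illustrative examples rather than an exhaustive list from this guideline:

```python
# Minimal encoding of the Spaulding classification described above.

SPAULDING = {
    "critical":     "sterilization",            # e.g., enters sterile tissue or the vasculature
    "semicritical": "high-level disinfection",  # e.g., contacts mucous membranes or non-intact skin
    "noncritical":  "low-level disinfection",   # e.g., contacts intact skin only
}

def required_process(device_class: str) -> str:
    return SPAULDING[device_class.lower()]

print(required_process("semicritical"))  # high-level disinfection
```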
Steam quality: steam characteristic reflecting the dryness fraction (weight of dry steam in a mixture of dry saturated steam and entrained water) and the level of noncondensable gas (air or other gas that will not condense under the conditions of temperature and pressure used during the sterilization process). The dryness fraction (i.e., the proportion of completely dry steam in the steam being considered) should not fall below 97%.
Steam sterilization: sterilization process that uses saturated steam under pressure for a specified exposure time and at a specified temperature as the sterilizing agent.
Steam sterilization, dynamic air removal type: one of two types of sterilization cycles in which air is removed from the chamber and the load by a series of pressure and vacuum excursions (prevacuum cycle) or by a series of steam flushes and pressure pulses above atmospheric pressure (steam-flush-pressure-pulse cycle).
Sterile or Sterility: state of being free from all living microorganisms. In practice, usually described as a probability function, e.g., as the probability of a microorganism surviving sterilization being one in one million.
Sterility assurance level (SAL): probability of a viable microorganism being present on a product unit after sterilization. Usually expressed as 10^-6; a SAL of 10^-6 means a ≤1 in 1 million chance that a single viable microorganism is present on a sterilized item. A SAL of 10^-6 generally is accepted as appropriate for items intended to contact compromised tissue (i.e., tissue that has lost the integrity of the natural body barriers). The sterilizer manufacturer is responsible for ensuring the sterilizer can achieve the desired SAL. The user is responsible for monitoring the performance of the sterilizer to ensure it is operating in conformance to the manufacturer's recommendations (see the worked example below).
Sterilization: validated process used to render a product free of all forms of viable microorganisms. In a sterilization process, the presence of microorganisms on any individual item can be expressed in terms of probability. Although this probability can be reduced to a very low number, it can never be reduced to zero.
Sterilization area: area of a health-care facility designed to house sterilization equipment, such as steam, ethylene oxide, hydrogen peroxide gas plasma, or ozone sterilizers.
Sterilizer: apparatus used to sterilize medical devices, equipment, or supplies by direct exposure to the sterilizing agent.
Sterilizer, prevacuum type: type of steam sterilizer that depends on one or more pressure and vacuum excursions at the beginning of the cycle to remove air. This method of operation results in shorter cycle times for wrapped items because of the rapid removal of air from the chamber and the load by the vacuum system and because of the usually higher operating temperature (132-135°C [270-275°F]; 141-144°C [285-291°F]). This type of sterilizer generally provides for shorter exposure time and accelerated drying of fabric loads by pulling a further vacuum at the end of the sterilizing cycle.
Sterilizer, steam-flush pressure-pulse type: type of sterilizer in which a repeated sequence consisting of a steam flush and a pressure pulse removes air from the sterilizing chamber and processed materials using steam at above atmospheric pressure (no vacuum is required). Like a prevacuum sterilizer, a steam-flush pressure-pulse sterilizer rapidly removes air from the sterilizing chamber and wrapped items; however, the system is not susceptible to air leaks because air is removed with the sterilizing chamber pressure at above atmospheric pressure. Typical operating temperatures are 121-123°C (250-254°F), 132-135°C (270-275°F), and 141-144°C (285-291°F).
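Combining the SAL entry with the D-value survival model shown earlier gives the exposure time needed to reach a target SAL; the bioburden figure in the example is hypothetical:

```latex
% Setting the survivors equal to the target SAL and solving for t:
\[
N_0 \cdot 10^{-t/D} = \mathrm{SAL}
\quad\Longrightarrow\quad
t = D\left(\log_{10} N_0 - \log_{10}\mathrm{SAL}\right).
\]
% Hypothetical example: N_0 = 10^2 organisms and a target SAL of 10^{-6}
% require t = D(2 - (-6)) = 8D, i.e., an 8-log reduction.
```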
Surfactant: agent that reduces the surface tension of water or the tension at the interface between water and another liquid; a wetting agent found in many sterilants and disinfectants.
Tabletop steam sterilizer: a compact gravity-displacement steam sterilizer that has a chamber volume of not more than 2 cubic feet and that generates its own steam when distilled or deionized water is added.
Time-weighted average (TWA): an average of all the concentrations of a chemical to which a worker has been exposed during a specific sampling time, reported as an average over the sampling time (see the sketch below). For example, the permissible exposure limit for ethylene oxide is 1 ppm as an 8-hour TWA. Exposures above the 1 ppm limit are permitted if they are compensated for by equal or longer exposures below the limit during the 8-hour workday, as long as they do not exceed the ceiling limit; the short-term exposure limit; or, in the case of ethylene oxide, the excursion limit of 5 ppm averaged over a 15-minute sampling period.
Tuberculocide: an EPA-classified hospital disinfectant that also kills Mycobacterium tuberculosis (tubercle bacilli). EPA has registered approximately 200 tuberculocides. Such agents also are called mycobactericides.
Use-life: the length of time a diluted product can remain active and effective. The stability of the chemical and the storage conditions (e.g., temperature and presence of air, light, organic matter, or metals) determine the use-life of antimicrobial products.
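The TWA entry is a duration-weighted average over the shift. A minimal sketch with invented sampling values:

```python
# Time-weighted-average exposure, as defined in the TWA entry above.
# The sampling values below are hypothetical.

def twa(samples, period_hours=8.0):
    """samples: (concentration_ppm, duration_hours) pairs covering the shift."""
    return sum(c * t for c, t in samples) / period_hours

# A worker exposed to 2 ppm EtO for 1 h and 0.5 ppm for the remaining 7 h:
print(twa([(2.0, 1.0), (0.5, 7.0)]))  # 0.6875 ppm, below the 1 ppm 8-hour PEL
```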
# Tables and Figure
Modified from Rutala and Simmons. 15,17,18,421 The selection and use of disinfectants in the healthcare field is dynamic, and products may become available that are not in existence when this guideline was written. As newer disinfectants become available, persons or committees responsible for selecting disinfectants and sterilization processes should be guided by products cleared by the FDA and the EPA as well as information in the scientific literature.
1 See text for discussion of hydrotherapy.
2 The longer the exposure to a disinfectant, the more likely it is that all microorganisms will be eliminated. Follow the FDA-cleared high-level disinfection claim. Ten-minute exposure is not adequate to disinfect many objects, especially those that are difficult to clean because they have narrow channels or other areas that can harbor organic material and bacteria. Twenty-minute exposure at 20°C is the minimum time needed to reliably kill M. tuberculosis and nontuberculous mycobacteria with 2% glutaraldehyde. Some high-level disinfectants have a reduced exposure time (e.g., ortho-phthalaldehyde at 12 minutes at 20°C) because of their rapid activity against mycobacteria, or a reduced exposure time due to increased mycobactericidal activity at elevated temperature (e.g., 2.5% glutaraldehyde at 5 minutes at 35°C, 0.55% OPA at 5 minutes at 25°C in an automated endoscope reprocessor).
3 Tubing must be completely filled for high-level disinfection and liquid chemical sterilization; care must be taken to avoid entrapment of air bubbles during immersion.
4 Material compatibility should be investigated when appropriate.
5 A concentration of 1000 ppm available chlorine should be considered where cultures or concentrated preparations of microorganisms have spilled (5.25% to 6.15% household bleach diluted 1:50 provides >1000 ppm available chlorine). This solution may corrode some surfaces.
6 Pasteurization (washer-disinfector) of respiratory therapy or anesthesia equipment is a recognized alternative to high-level disinfection. Some data challenge the efficacy of some pasteurization units.
7 Thermostability should be investigated when appropriate.
8 Do not mix rectal and oral thermometers at any stage of handling or processing.
9 By law, all applicable label instructions on EPA-registered products must be followed. If the user selects exposure conditions that differ from those on the EPA-registered product's label, the user assumes liability from any injuries resulting from off-label use and is potentially subject to enforcement action under FIFRA.
Modified from Russell and Favero. 13,344
1 All products are effective in the presence of organic soil, are relatively easy to use, and have a broad spectrum of antimicrobial activity (bacteria, fungi, viruses, bacterial spores, and mycobacteria). The above characteristics are documented in the literature; contact the manufacturer of the instrument and sterilant for additional information. All products listed above are FDA-cleared as chemical sterilants except OPA, which is an FDA-cleared high-level disinfectant.
• Rapid activity: ability to quickly achieve sterilization
• Strong penetrability: ability to penetrate common medical-device packaging materials and penetrate into the interior of device lumens
• Material compatibility: produces only negligible changes in the appearance or the function of processed items and packaging materials even after repeated cycling
• Nontoxic: presents no toxic health risk to the operator or the patient and poses no hazard to the environment
• Organic material resistance: withstands reasonable organic material challenge without loss of efficacy
3 Test organism was G. stearothermophilus spores. The lumen test units had a removable 5-cm center piece (1.2 cm diameter) of stainless steel sealed to the narrower steel tubing by hard rubber septums.
4 Test organism was G. stearothermophilus spores. The lumen test unit was a straight stainless steel tube.
# Acknowledgements
The authors gratefully acknowledge Eva P. Clontz, M.S., for her assistance in referencing this guideline. The Healthcare Infection Control Practices Advisory Committee thanks the following experts for reviewing a draft of this guideline: Martin S. Favero, Ph.D., Syed A. Sattar, Ph.D., A. Denver Russell, D.Sc., and Martin Exner, M.D. The opinions of the reviewers might not be reflected in all the recommendations contained in this document.
# Web-Based Disinfection and Sterilization Resources
Additional information about disinfection and sterilization is available at the following dedicated websites: Food and Drug Administration, Rockville, Maryland
# Performance Indicators
1. Monitor adherence to high-level disinfection and/or sterilization guidelines for endoscopes on a regular basis.
This monitoring should include ensuring the proper training of persons performing reprocessing and their adherence to all endoscope reprocessing steps, as demonstrated by competency testing at commencement of employment and annually.
2. Develop a mechanism for the occupational health service to report all adverse health events potentially resulting from exposure to disinfectants and sterilants; review such exposures; and implement engineering controls, work-practice controls, and PPE to prevent future exposures.
3. Monitor possible sterilization failures that resulted in instrument recall. Assess whether additional training of personnel or equipment maintenance is required.
# Glossary (continued)
Germicidal detergent: detergent that also is EPA-registered as a disinfectant.
High-level disinfectant: agent capable of killing bacterial spores when used in sufficient concentration under suitable conditions. It therefore is expected to kill all other microorganisms.
Huck towel: all-cotton surgical towel with a honeycomb weave; both warp and fill yarns are tightly twisted. Huck towels can be used to prepare biologic indicator challenge test packs.
Implantable device: according to FDA, "device that is placed into a surgically or naturally formed cavity of the human body if it is intended to remain there for a period of 30 days or more" [21 CFR 812.3(d)].
Inanimate surface: nonliving surface (e.g., floors, walls, furniture).
Incubator: apparatus for maintaining a constant and suitable temperature for the growth and cultivation of microorganisms.
Infectious microorganisms: microorganisms capable of producing disease in appropriate hosts.
Inorganic and organic load: naturally occurring or artificially placed inorganic (e.g., metal salts) or organic (e.g., proteins) contaminants on a medical device before exposure to a microbicidal process.
Intermediate-level disinfectant: agent that destroys all vegetative bacteria, including tubercle bacilli, lipid and some nonlipid viruses, and fungi, but not bacterial spores.
Limited disinfectant: disinfectant registered for use against a specific major group of organisms (gram-negative or gram-positive bacteria). Efficacy has been demonstrated in laboratory tests against either Salmonella choleraesuis or Staphylococcus aureus bacteria.
Lipid virus: virus surrounded by an envelope of lipoprotein in addition to the usual core of nucleic acid surrounded by a coat of protein. This type of virus (e.g., HIV) is generally easily inactivated by many types of disinfectants. Also called enveloped or lipophilic virus.
Low-level disinfectant: agent that destroys all vegetative bacteria (except tubercle bacilli), lipid viruses, some nonlipid viruses, and some fungi, but not bacterial spores.
Mechanical indicator: devices that monitor the sterilization process (e.g., graphs, gauges, printouts).
Medical device: instrument, apparatus, material, or other article, whether used alone or in combination, including software necessary for its application, intended by the manufacturer to be used for human beings for
• diagnosis, prevention, monitoring, treatment, or alleviation of disease;
• diagnosis, monitoring, treatment, or alleviation of or compensation for an injury or handicap;
• investigation, replacement, or modification of the anatomy or of a physiologic process; or
• control of conception
and that does not achieve its primary intended action in or on the human body by pharmacologic, immunologic, or metabolic means but might be assisted in its function by such means.
Vegetative bacteria: bacteria that are devoid of spores and usually can be readily inactivated by many types of germicides.
Virucide: an agent that kills viruses to make them noninfective.
Adapted from Association for the Advancement of Medical Instrumentation, 811-814,819 Association of periOperative Registered Nurses (AORN), 815 American Hospital Association, 319 and Block. 16,1034
Table 10. Factors affecting the efficacy of sterilization
Cleaning 1: Failure to adequately clean an instrument results in higher bioburden, protein load, and salt concentration. These will decrease sterilization efficacy.
Bioburden 1: The natural bioburden of used surgical devices is 10^0 to 10^3 organisms (primarily vegetative bacteria), which is substantially below the 10^5-10^6 spores used with biological indicators.
Pathogen type: Spore-forming organisms are most resistant to sterilization and are the test organisms required for FDA clearance. However, the contaminating microflora on used surgical instruments consists mainly of vegetative bacteria.
Protein 1: Residual protein decreases efficacy of sterilization. However, cleaning appears to rapidly remove protein load.
Salt 1: Residual salt decreases efficacy of sterilization more than does protein load. However, cleaning appears to rapidly remove salt load.
Biofilm accumulation 1: Biofilm accumulation reduces efficacy of sterilization by impairing exposure of the sterilant to the microbial cell.
Lumen length: Increasing lumen length impairs sterilant penetration. May require forced flow through the lumen to achieve sterilization.
Lumen diameter: Decreasing lumen diameter impairs sterilant penetration. May require forced flow through the lumen to achieve sterilization.
Restricted flow: Sterilant must come into contact with microorganisms. Device designs that prevent or inhibit this contact (e.g., sharp bends, blind lumens) will decrease sterilization efficacy.
Device design and construction: Materials used in construction may affect compatibility with different sterilization processes and affect sterilization efficacy. Design issues (e.g., screws, hinges) will also affect sterilization efficacy.
Modified from Alfa and Rutala. 470,825
1 Factor only relevant for reused surgical/medical devices.
# Recommended Laboratory HIV Testing Algorithm for Serum or Plasma Specimens
1. Laboratories should conduct initial testing for HIV with an FDA-approved antigen/antibody combination immunoassay* that detects HIV-1 and HIV-2 antibodies and HIV-1 p24 antigen to screen for established infection with HIV-1 or HIV-2 and for acute HIV-1 infection. No further testing is required for specimens that are nonreactive on the initial immunoassay.
2. Specimens with a reactive antigen/antibody combination immunoassay result (or repeatedly reactive, if repeat testing is recommended by the manufacturer or required by regulatory authorities) should be tested with an FDA-approved antibody immunoassay that differentiates HIV-1 antibodies from HIV-2 antibodies. Reactive results on the initial antigen/antibody combination immunoassay and the HIV-1/HIV-2 antibody differentiation immunoassay should be interpreted as positive for HIV-1 antibodies, HIV-2 antibodies, or HIV antibodies, undifferentiated.
3. Specimens that are reactive on the initial antigen/antibody combination immunoassay and nonreactive or indeterminate on the HIV-1/HIV-2 antibody differentiation immunoassay should be tested with an FDA-approved HIV-1 nucleic acid test (NAT).
- A reactive HIV-1 NAT result and nonreactive HIV-1/HIV-2 antibody differentiation immunoassay result indicates laboratory evidence for acute HIV-1 infection.
- A reactive HIV-1 NAT result and indeterminate HIV-1/HIV-2 antibody differentiation immunoassay result indicates the presence of HIV-1 infection confirmed by HIV-1 NAT.
- A negative HIV-1 NAT result and nonreactive or indeterminate HIV-1/HIV-2 antibody differentiation immunoassay result indicates a false-positive result on the initial immunoassay.
4. Laboratories should use this same testing algorithm, beginning with an antigen/antibody combination immunoassay, with serum or plasma specimens submitted for testing after a reactive (preliminary positive) result from any rapid HIV test.
* Data are insufficient to recommend use of the FDA-approved single-use rapid HIV-1/HIV-2 antigen/antibody combination immunoassay as the initial assay in the algorithm.
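Steps 1-3 of the algorithm above form a small decision tree. A minimal sketch: the function, parameter names, and result strings are ours and paraphrase the interpretations in the text; they are not part of the recommendation.

```python
from typing import Optional

def interpret_hiv_results(ag_ab_reactive: bool,
                          differentiation: Optional[str] = None,
                          nat_reactive: Optional[bool] = None) -> str:
    """differentiation: 'HIV-1', 'HIV-2', 'HIV undifferentiated',
    'nonreactive', or 'indeterminate'; None until that assay has been run."""
    if not ag_ab_reactive:
        return "Negative: no further testing required"           # step 1
    if differentiation in ("HIV-1", "HIV-2", "HIV undifferentiated"):
        return f"Positive for {differentiation} antibodies"      # step 2
    if differentiation in ("nonreactive", "indeterminate"):      # step 3
        if nat_reactive is None:
            return "Test with an FDA-approved HIV-1 NAT"
        if nat_reactive:
            return ("Laboratory evidence of acute HIV-1 infection"
                    if differentiation == "nonreactive"
                    else "HIV-1 infection confirmed by HIV-1 NAT")
        return "False-positive result on the initial immunoassay"
    return "Run the HIV-1/HIV-2 antibody differentiation immunoassay"

print(interpret_hiv_results(True, "nonreactive", nat_reactive=True))
# -> Laboratory evidence of acute HIV-1 infection
```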
Low water volumes combined with high temperatures and heavy bather loads make public hot tub operation challenging. The result can be low disinfectant levels that allow the growth and spread of a variety of germs (e.g., Pseudomonas and Legionella) that can cause skin and respiratory Recreational Water Illnesses (RWIs). Operators who focus on hot tub maintenance and operation to ensure continuous, high water quality are the first line of defense in preventing the spread of RWIs.
- Ensure availability of trained operation staff during weekends when hot tubs are used most.
- Maintain free chlorine (2-4 parts per million [ppm]) or bromine (4-6 ppm) levels continuously.
- Maintain the pH level of the water at 7.2-7.8.
- Test pH and disinfectant levels at least twice per day (hourly when in heavy use); a simple range check is sketched after these lists.
- Maintain accurate records of disinfectant/pH measurements and maintenance activities.
- Maintain filtration and recirculation systems according to manufacturer recommendations.
- Inspect accessible recirculation system components for a slime layer and clean as needed.
- Scrub hot tub surfaces to remove any slime layer.
- Enforce bather load limits.
- Drain and replace all or portions of the water on a weekly to monthly basis, depending on usage and water quality. Depending on filter type, clean the filter or replace the filter media before refilling the hot tub.
- Treat the hot tub with a biocidal shock treatment on a daily to weekly basis, depending on water quality and frequency of water replacement.
- Institute a preventive maintenance program to replace equipment or parts before they fail (e.g., feed pump tubing, sensor probes).
- Provide disinfection guidelines for fecal accidents and body fluid spills.
- Develop a clear communication chain for reporting operation problems.
- Cover hot tubs, if possible, to minimize loss of disinfectant and reduce the levels of environmental contamination (e.g., debris and dirt).
- Educate hot tub users about appropriate hot tub use.
# Additional Hot Tub Safety Measures
- Prevent the water temperature from exceeding 104°F (40°C).
- Exclude children less than five years old from using hot tubs.
- Maintain a locked safety cover for the hot tub when possible.
- Recommend that all pregnant women consult a physician before hot tub use, particularly in the first trimester.
- Prevent entrapment injuries with appropriate drain design and configuration.
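The operating ranges in the bullets above can be checked mechanically. A minimal sketch; the numeric thresholds come straight from the bullets, while the function and parameter names are ours:

```python
from typing import Optional

def water_quality_ok(ph: float, temp_f: float,
                     free_chlorine_ppm: Optional[float] = None,
                     bromine_ppm: Optional[float] = None) -> bool:
    if not (7.2 <= ph <= 7.8):
        return False
    if temp_f > 104:                        # never exceed 104°F (40°C)
        return False
    if free_chlorine_ppm is not None:
        return 2 <= free_chlorine_ppm <= 4  # free chlorine 2-4 ppm
    if bromine_ppm is not None:
        return 4 <= bromine_ppm <= 6        # bromine 4-6 ppm
    return False                            # no disinfectant reading supplied

print(water_quality_ok(ph=7.5, temp_f=102, free_chlorine_ppm=3))  # True
print(water_quality_ok(ph=7.5, temp_f=106, bromine_ppm=5))        # False: too hot
```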
Tickborne relapsing fever (TBRF) is a bacterial illness caused by certain species of Borrelia and transmitted through brief and painless bites from Ornithodoros ticks (1,2). Illness usually is characterized by intermittent periods of fever, fatigue, and muscle aches. In April 2005, CDC received reports of two cases of severe TBRF associated with acute respiratory distress syndrome (ARDS) in residents of California and Nevada. After a report describing these cases was posted on CDC's Epidemic Information Exchange (Epi-X), health officials in Washington reported a third severe case associated with ARDS. This report summarizes these three cases and the results of the subsequent epidemiologic investigations. The findings indicate that ARDS might occur more frequently in patients with TBRF than previously recognized. Optimal management of TBRF requires both prompt diagnosis and careful observation during the initial phases of treatment.
# Case Reports
Nevada. On February 17, 2005, a previously healthy woman aged 46 years from Washoe County, Nevada, had onset of nonspecific leg pain, which progressed during the next 24 hours to generalized myalgia. She visited a local hospital emergency department (ED), where a viral syndrome was diagnosed. She was treated with intravenous (IV) fluids and pain medication and discharged home. Two days later, she returned to the ED with fever, chills, fatigue, anorexia, nausea, and an episode of syncope. On arrival, she was noted to be tachycardic (130 beats per minute [bpm]), tachypneic (24 breaths per minute), and hypotensive (systolic blood pressure: 89 mm Hg) with a temperature of 96.8°F (36.0°C). A physical examination was otherwise unremarkable. Pulse oximetry on room air indicated an oxygen saturation of 96%. Initial laboratory testing revealed a white blood cell count (WBC) of 11.4 × 10^3/µL, hemoglobin level of 13 g/dL, platelet count of 66 × 10^3/µL, and alanine aminotransferase (ALT) of 153 U/L. A chest radiograph revealed a right middle lobe infiltrate, consistent with community-acquired pneumonia. She was treated with gatifloxacin and transferred to the intensive care unit (ICU). Approximately 10 hours after admission, she was intubated for worsening tachypnea (respiratory rate [RR]: 40 breaths per minute). Diffuse bilateral infiltrates were noted on chest radiograph, and an arterial blood gas sample yielded oxygenation of 53 mm Hg on 100% inspired oxygen. The patient's antimicrobial treatment was broadened to include vancomycin and doxycycline. The next day, the treating physician was notified that spirochetes were observed during examination of a blood smear obtained when the patient was admitted; the smear had been manually reviewed because of thrombocytopenia. The patient remained intubated for 12 days for what was ultimately determined to be ARDS. During this time, she was administered three additional antimicrobials (ciprofloxacin, tobramycin, and ceftriaxone) and drotrecogin-α.* She was discharged after 21 days and recovered completely. The causative organism was identified as Borrelia hermsii by polymerase chain reaction (PCR) performed on a whole blood sample and serologic testing of convalescent-phase serum, both performed at CDC.
* Drotrecogin-α (Xigris®) primarily is used to treat severe sepsis. The drug is a recombinant form of human activated protein C that has antithrombotic, antiinflammatory, and profibrinolytic properties.
Although the patient did not recall receiving a tick bite, she did report staying at a resort near South Lake Tahoe (an area known to be highly endemic for TBRF) 5 days before becoming ill.
California. On April 12, 2005, a previously healthy woman aged 43 years from El Dorado County, California, had onset of lethargy and myalgia. She went to a local hospital ED on April 14 with fever, chills, headache, myalgia, and dehydration. She was febrile (100.5°F [38.1°C]), tachycardic (138 bpm), mildly tachypneic (16 breaths per minute), and hypotensive (systolic blood pressure: 97 mm Hg). A physical examination was otherwise unremarkable. Pulse oximetry on room air indicated an oxygen saturation of 97%. A chest radiograph was not obtained, but initial blood tests indicated an elevated bilirubin level of 3.6 mg/dL, aspartate aminotransferase (AST) of 93 U/L, and ALT of 88 U/L. She was treated with acetaminophen and discharged home with instructions to return for reevaluation of blood tests the next day. The patient returned the next day with headache, sweating, fatigue, and nausea. A physical examination revealed rhonchi; a chest radiograph was obtained and read as normal. She was treated with IV fluids and IV ceftriaxone. Within 1 hour of receiving antibiotics, her pulse increased to 127 bpm, her systolic blood pressure decreased to 85 mm Hg, and pulse oximetry on room air indicated an oxygen saturation of 95%. TBRF was diagnosed by observation of spirochetes in smears of peripheral blood (Figure). The patient was treated with dopamine for hypotension and doxycycline for TBRF and transferred to another medical center. Shortly after arrival, a chest radiograph taken because of worsening respiratory distress demonstrated diffuse bilateral infiltrates. The patient was intubated for respiratory failure (RR: 44 breaths per minute; oxygen saturation of 82% on 100% inspired oxygen via nonrebreather mask) attributed to ARDS. Laboratory testing revealed a WBC of 3.4 × 10^3/µL, hemoglobin level of 11.4 g/dL, and platelet count of 19 × 10^3/µL. Platelets and fresh frozen plasma were administered. The patient remained intubated for 10 days, during which she was administered four different antimicrobials (vancomycin, piperacillin/tazobactam, metronidazole, and doxycycline) and drotrecogin-α. She was discharged after 19 days of hospitalization and eventually recovered from her illness. A blood sample obtained early in illness and cultured at CDC yielded B. hermsii. An environmental investigation was conducted at her home, located 5 miles south of Lake Tahoe and approximately 10 miles from the resort visited by the Nevada patient. An engorged soft tick was found in her bedroom, and removal of house siding revealed multiple rodent nests from which approximately 30 Ornithodoros hermsi ticks were recovered.
Washington. A woman aged 40 years from King County, Washington, visited a hospital ED on September 21, 2004, with myalgia, arthralgia, nausea, vomiting, and headache. She was treated with IV fluids, promethazine, and hydrocodone. Hospital admission was recommended, but she refused. After experiencing a syncopal episode at home, she returned and was noted to be febrile (102°F [38.9°C]), hypotensive (systolic blood pressure: 100 mm Hg), mildly tachycardic (107 bpm), and hypoxic (oxygen saturation: 92% on 4 L of oxygen). A physical examination was otherwise unremarkable. Her chest radiograph revealed bilateral lower lobe infiltrates.
Initial laboratory studies indicated a WBC of 9.5 × 10^3/µL, hematocrit of 33%, platelet count of 49 × 10^3/µL, ALT of 192 U/L, and a D-dimer of 754. She was admitted for presumed community-acquired pneumonia with sepsis and treated empirically with IV cefuroxime and azithromycin. After receiving the cefuroxime, she was transiently hypotensive and became somnolent. She was intubated and transferred to the ICU with a diagnosis of ARDS and worsening mental status. Because of thrombocytopenia, a peripheral blood smear was examined, revealing spirochetes diagnostic of TBRF. Her transient hypotension was attributed to a Jarisch-Herxheimer reaction (JHR).† She remained intubated for 3 days, was discharged home after 10 days, and eventually recovered from her illness. The most likely site of exposure was a forest cabin in Chelan County, Washington, where she had slept approximately 11 days before illness onset. On inspection, the cabin had evidence of rodent infestation; however, attempts to trap ticks and rodents were unsuccessful.
# Epidemiologic Investigations
To determine the frequency of ARDS among patients with TBRF acquired in the South Lake Tahoe area, case-report forms for all TBRF cases reported to Nevada and California state and local health departments during 1995-2004 were reviewed. Additionally, cases were ascertained by 1) a computerized search of discharge records from Lake Tahoe area hospitals where cases had been diagnosed; 2) interviews with physicians and laboratorians from area hospitals and private practices where cases had been diagnosed; and 3) postings on Epi-X and the Emerging Infections Network. Including the California and Nevada cases described in this report, 65 cases of TBRF among persons who reported living in or visiting the Lake Tahoe area during the usual incubation period of 2-18 days before illness onset were ascertained. Thirty (46%) were in patients who required hospitalization. Detailed clinical information from medical records was available for 38 (58%) patients. Among these 38 patients, 16 (42%) experienced one or more of the following complications: eight (21%), JHR; six (16%), hypoxia; five (13%), elevated liver enzyme levels; three (8%), arrhythmia or myocarditis; two (5%), azotemia; and two (5%), ARDS.
TBRF cases in Washington state were similarly reviewed by using all case reports submitted to the state health department during 1996-2005. Including the single case described in this report, 46 TBRF cases were reported in Washington during 1996-2005, of which 37 (80%) were in patients who required hospitalization. Comments on case-report forms indicated that five (13%) patients required care in an ICU, three (6%) had JHR, and three (6%) had ARDS. All three ARDS cases occurred after 2001.
been described previously (9), and this case occurred in a woman who was pregnant and therefore more susceptible to severe TBRF (10). Results of this investigation indicate that ARDS might occur more frequently in patients with TBRF than previously recognized and can occur in persons without predisposing conditions. All cases of TBRF-associated ARDS identified in this review occurred after 2001, but further surveillance will be needed to determine whether the risk for ARDS in TBRF is increasing. Increases might be related to changes in medical practice, use of newer antimicrobials, or possibly the emergence of a more virulent strain.
All three cases described in this report occurred in women, but no common medical history (e.g., menopausal status, hormone replacement therapy, or oral contraceptive use) was identified. All three patients had received antimicrobial treatment before onset of ARDS; however, whether they had ARDS as a result of JHR or underlying sepsis could not be determined.

The findings in this report are subject to at least two limitations. First, cases were evaluated in only two geographic areas; therefore, results might not be generalizable to the endemic western states. Second, TBRF is not a nationally notifiable disease, and each state has different reporting requirements; therefore, case information is subject to underreporting and ascertainment bias. These methodological differences might have affected the observed rates of hospitalization and classification of ARDS.

Health-care professionals should report suspected TBRF cases to local or state health departments, providing a thorough clinical and exposure history and, as appropriate, samples (i.e., serum or whole blood) for diagnostic testing. The observation of spirochetes in a Wright- or Giemsa-stained peripheral blood smear collected during a febrile episode is considered diagnostic of TBRF and is not typical of other spirochetal infections (1). Laboratory diagnosis also can be made by culture, serology, or PCR of serum and blood at certain reference laboratories. TBRF can be prevented by minimizing rodent infestations in homes. Health officials in endemic areas should consider educational measures that increase awareness of potential exposures, demonstrate methods for rodent proofing dwellings, and promote early recognition of cases by health-care professionals (5). These measures are especially important in mountainous resort areas that serve numerous visitors.

* Drotrecogin-α (Xigris®) primarily is used to treat severe sepsis. The drug is a recombinant form of human activated protein C that has antithrombocytic, antiinflammatory, and profibrinolytic properties.

# Emergence of Antimicrobial-Resistant Serotype 19A Streptococcus pneumoniae - Massachusetts, 2001-2006

Streptococcus pneumoniae (pneumococcus) is a leading cause of otitis, sinusitis, pneumonia, and meningitis worldwide. Treatment of the most serious type of pneumococcal infection, invasive pneumococcal disease (IPD), is complicated by antimicrobial resistance. Widespread introduction in 2000 of heptavalent pneumococcal conjugate vaccine (PCV7) against serotypes 4, 6B, 9V, 14, 18C, 19F, and 23F resulted in a decline in antimicrobial-nonsusceptible IPD in the United States (1,2), including in Massachusetts (3). However, development of antimicrobial resistance in serotypes not covered by PCV7 is a growing concern (1,4). In Massachusetts during 2001-2006, IPD surveillance identified an increased number of cases in children caused by pneumococcal serotypes (most notably 19A) not covered by PCV7 and an associated increase in antimicrobial resistance among these isolates. This report examines these trends and clinical characteristics of Massachusetts patients with antimicrobial-nonsusceptible, non-PCV7-type IPD. The findings indicated that, despite increases in incidence of antimicrobial-nonsusceptible IPD, overall rates of IPD remained stable during 2001-2006. In addition, persons with IPD caused by antimicrobial-nonsusceptible S. pneumoniae had clinical outcomes comparable to those of persons with IPD caused by antimicrobial-susceptible serotypes. Although PCV7 is effective in preventing IPD, these results confirm that antimicrobial resistance among serotypes not covered by PCV7 remains a concern.
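Incidence rates of the kind summarized above reduce to simple division (cases per 100,000 children, using census-based denominators as described in the methods that follow); a minimal sketch with hypothetical counts, not the Massachusetts surveillance figures:

```python
# Annual IPD incidence as cases per 100,000 children. The case counts and
# population denominator below are hypothetical placeholders used only to
# illustrate the rate calculation.
cases_by_year = {2002: 110, 2003: 105, 2004: 112, 2005: 108, 2006: 111}
children_population = 1_500_000  # census-based denominator (hypothetical)

for year, cases in cases_by_year.items():
    rate = cases / children_population * 100_000
    print(f"{year}: {rate:.1f} cases per 100,000 children")
```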
On October 1, 2001, the Massachusetts Department of Public Health and the Section of Pediatric Infectious Diseases at Boston University Medical Center initiated statewide laboratory- and population-based surveillance for IPD among children.† For this report, cases of IPD were defined by isolation of pneumococcus from a normally sterile body site (e.g., blood or cerebrospinal, pleural, or joint fluid) in a Massachusetts resident aged <18 years during October 1, 2001-September 30, 2006. Demographic and clinical data were obtained from telephone interviews with primary-care providers or adult caregivers. PCV7 vaccination rates were estimated using CDC's National Immunization Survey.§ Serotyping was performed at Boston University Medical Center, using the Quellung reaction with pneumococcal antisera. Susceptibility to five antimicrobials often used in pediatric patients (i.e., amoxicillin, penicillin, ceftriaxone, azithromycin, and trimethoprim-sulfamethoxazole) was determined by E-test (epsilometer test, an agar diffusion method), and interpretations were based on Clinical and Laboratory Standards Institute 2007 guidelines (5). For each antimicrobial agent tested, isolates with either intermediate-level or high-level antimicrobial resistance were considered nonsusceptible to the antimicrobial agent unless otherwise indicated. Population denominators were obtained from 2000-2005 census figures. Mantel-Haenszel chi-square test for trend was used to identify changes in serotype distribution or antimicrobial resistance over time. Chi-square or Fisher's exact tests of proportions were used to compare risk factors and clinical characteristics of disease. Because IPD surveillance did not begin until after introduction of PCV7, no data on pre-PCV7 susceptibility were available for comparison.

No significant changes were noted in the proportions of IPD caused by other PCV7 or PCV7-related serotypes or by non-PCV7 serogroups (Figure 2). Because 19A was the most common serotype isolated during 2005-2006, the antimicrobial susceptibility of 19A isolates was examined further (Table). The majority of 19A isolates were nonsusceptible to penicillin. During 2001-2006, significant increases were noted in the proportion of 19A isolates that were nonsusceptible to amoxicillin (minimum inhibitory concentration [MIC] >2 µg/mL), ceftriaxone (MIC >0.5 µg/mL), or three or more classes of antimicrobials (Table). Fourteen (15%) of 94 isolates of 19A were highly resistant to ceftriaxone (MIC >2 µg/mL), a first-line antimicrobial used for empiric bacterial meningitis treatment. No significant trends in the antimicrobial resistance of non-19A isolates were noted.

To describe the clinical features of and identify risk factors for infection with ceftriaxone-nonsusceptible serotype 19A, demographic and clinical characteristics of the 14 patients with highly ceftriaxone-resistant 19A IPD were compared with those of 73 patients with ceftriaxone-susceptible 19A IPD and 237 patients with ceftriaxone-susceptible non-19A IPD. The results indicated that patients with highly ceftriaxone-resistant 19A disease did not differ from the other groups with regard to established risk factors for antimicrobial-nonsusceptible pneumococcal disease, including age, sex, race/ethnicity, geographic region, degree of household crowding, or day care exposure.
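The two statistical procedures named in the methods can be sketched briefly. The chi-square test for trend is shown here in its Cochran-Armitage form with hypothetical yearly counts (the Massachusetts data are not reproduced here), and Fisher's exact test uses one of the group comparisons reported below (underlying conditions in 3 of 14 highly ceftriaxone-resistant 19A patients versus 9 of 73 ceftriaxone-susceptible 19A patients):

```python
# Minimal sketches of the trend and exact tests named in the methods.
import numpy as np
from scipy.stats import chi2, fisher_exact

# 1) Chi-square test for trend (Cochran-Armitage form) across years.
years = np.array([1, 2, 3, 4, 5])      # surveillance-year scores
n = np.array([40, 45, 50, 48, 52])     # isolates tested per year (hypothetical)
r = np.array([4, 7, 11, 14, 20])       # nonsusceptible isolates (hypothetical)

p_bar = r.sum() / n.sum()              # pooled proportion nonsusceptible
t_stat = (years * r).sum() - p_bar * (years * n).sum()
var_t = p_bar * (1 - p_bar) * ((n * years**2).sum() - (n * years).sum()**2 / n.sum())
chi2_trend = t_stat**2 / var_t
print(f"chi-square for trend = {chi2_trend:.2f}, p = {chi2.sf(chi2_trend, df=1):.4f}")

# 2) Fisher's exact test comparing two proportions (2x2 table); counts
#    are from the comparison of underlying conditions reported below.
table = [[3, 14 - 3],                  # resistant group: with/without condition
         [9, 73 - 9]]                  # susceptible group: with/without condition
odds_ratio, p_value = fisher_exact(table)
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.2f}")
# A large p value is consistent with the report's finding of no
# significant difference between the groups.
```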
Underlying medical conditions that might predispose to IPD (e.g., sickle cell disease or congenital or acquired immune deficiencies) were not significantly more common among patients with highly ceftriaxone-resistant 19A IPD (three of 14 [21%]) than among patients in the ceftriaxone-susceptible 19A group (nine of 73 [12%]) or the non-19A group (33 of 237 [14%]). In addition, no significant differences among the three groups were detected in the proportion of patients with meningitis, pneumonia, or bacteremia without focus, case-fatality ratios, rates of hospitalization (79% versus 68% and 59%, respectively), or longer hospital stay (64% with >4 days versus 40% and 51%, respectively).

In Massachusetts and in other states, serotype 19A has emerged as the most common cause of IPD, and the proportion of 19A isolates that are nonsusceptible to commonly used antimicrobials is greater than the proportion for other serotypes (1,4). As a member of the same serogroup as the PCV7-type 19F, serotype 19A is considered a PCV7-related serotype. However, PCV7-induced antibodies to 19F are not active against serotype 19A (9). Concern exists that emergence of antimicrobial-nonsusceptible non-PCV7-type IPD could erode the success of PCV7 against pneumococcal infections.

The limited number of 19A cases restricted the ability of this study to identify risk factors or characteristic clinical features of antimicrobial-nonsusceptible 19A disease. However, the study found no evidence that infections caused by antimicrobial-nonsusceptible serotype 19A had different clinical syndromes or outcomes than infections caused by antimicrobial-susceptible 19A.

Despite the lack of continuous surveillance data before PCV7 introduction, the overall stability of IPD incidence in Massachusetts during the study period indicates that the decline in IPD resulting from PCV7 introduction is being maintained (3,6). Furthermore, antimicrobial-nonsusceptible infections have not negated the positive impact of PCV7. Accordingly, vaccination with PCV7 remains a priority in Massachusetts. Nonetheless, the emergence of antimicrobial-nonsusceptible non-PCV7-type IPD is of concern. Continued surveillance for IPD in Massachusetts will provide data on the clinical impact of antimicrobial-nonsusceptible 19A infection and will be useful in development and monitoring of new pneumococcal vaccines.

The findings in this report support the continued empiric use of combination therapy with vancomycin and cefotaxime or ceftriaxone (the antimicrobials of choice to treat nonsusceptible pneumococci) for children with bacterial meningitis caused by, or possibly caused by, S. pneumoniae, and for critically ill children with nonmeningeal IPD (10). Antimicrobial-resistance data obtained through surveillance will continue to guide empiric treatment regimens for IPD in Massachusetts and provide data that can be used to tailor treatment recommendations to state-specific resistance patterns. State-based surveillance also will help detect trends in the emergence of nonsusceptible non-PCV7 IPD. The recent development of polymerase chain reaction (PCR)-based serotyping provides the opportunity for state public health laboratories and academic partners to identify IPD isolates by serotype. Serotyping based on the Quellung reaction requires expensive reagents and substantial training and experience to perform reliably.
In contrast, PCR-based serotyping can be performed using commercially available reagents and equipment and technical expertise already available in most state public health laboratories.¶ If applied in other states, these techniques might increase understanding of IPD trends that have occurred nationally since introduction of PCV7.

# Update: Prevention of Hepatitis A After Exposure to Hepatitis A Virus and in International Travelers. Updated Recommendations of the Advisory Committee on Immunization Practices (ACIP)

In 1995, highly effective inactivated hepatitis A vaccines were first licensed in the United States for preexposure prophylaxis against hepatitis A virus (HAV) among persons aged >2 years. In 2005, vaccine manufacturers received Food and Drug Administration approval for use of the vaccines in children aged 12-23 months (1). The Advisory Committee on Immunization Practices (ACIP) issued recommendations for preexposure use of hepatitis A vaccine in 1996, 1999, and 2006 (1). Currently, ACIP recommends hepatitis A vaccination of all children at age 12-23 months, catch-up vaccination of older children in selected areas, and vaccination of persons at increased risk for hepatitis A (e.g., travelers to endemic areas, users of illicit drugs, or men who have sex with men) (1). For decades, immune globulin (IG) has been recommended for prophylaxis after exposure to HAV (1). IG also has been recommended in addition to hepatitis A vaccine for preexposure prophylaxis for travelers to countries with high or intermediate hepatitis A endemicity who are scheduled to depart <4 weeks after receiving the initial vaccine dose. This report details updated recommendations, made by ACIP in June 2007, for prevention of hepatitis A after exposure to HAV and in departing international travelers (Box) and incorporates existing ACIP recommendations for prevention of hepatitis A (1).

# BOX. Summary of updated recommendations for prevention of hepatitis A after exposure to hepatitis A virus (HAV) and in departing international travelers

# Postexposure prophylaxis
Persons who recently have been exposed to HAV and who previously have not received hepatitis A vaccine should be administered a single dose of single-antigen hepatitis A vaccine or immune globulin (IG) (0.02 mL/kg) as soon as possible.

# International travel
All susceptible persons traveling to or working in countries that have high or intermediate hepatitis A endemicity should be vaccinated or receive IG before departure. Hepatitis A vaccine at the age-appropriate dose is preferred to IG. The first dose of hepatitis A vaccine should be administered as soon as travel is considered.
- One dose of single-antigen hepatitis A vaccine administered at any time before departure can provide adequate protection for most healthy persons.
- Older adults, immunocompromised persons, and persons with chronic liver disease or other chronic medical conditions planning to depart to an area in <2 weeks should receive the initial dose of vaccine and also simultaneously can be administered IG (0.02 mL/kg) at a separate anatomic injection site.
- Travelers who elect not to receive vaccine, are aged <12 months, or are allergic to a vaccine component should receive a single dose of IG (0.02 mL/kg), which provides effective protection for up to 3 months.

# NOTE: Previous recommendations remain unchanged regarding 1) settings in which postexposure prophylaxis is indicated, and 2) timing of administration of postexposure prophylaxis.

# Rationale and Methods for Updated Recommendations

When administered within 2 weeks of last exposure, IG is 80%-90% effective in preventing clinical hepatitis A. Despite previously available limited data suggesting that hepatitis A vaccine might be efficacious when administered after exposure (2), in the absence of an appropriately designed clinical trial comparing the postexposure efficacy of vaccine with that of IG, ACIP continued to recommend IG exclusively for postexposure use (1). Hepatitis A vaccine, if recommended for other reasons, could be given at the same time.
ACIP was prompted to revisit these recommendations when findings became available from a randomized, double-blind noninferiority clinical trial comparing the efficacy of hepatitis A vaccine and IG after exposure to HAV (3). The results of this clinical trial were presented to ACIP at its February 2007 meeting. During April-May 2007, the ACIP Hepatitis Vaccines Workgroup considered these results in a series of teleconferences. During these teleconferences, the workgroup also considered the experiences of other countries (e.g., Canada and the United Kingdom) where hepatitis A vaccine has been recommended for postexposure use for >5 years and reviewed data on the immunogenicity of hepatitis A vaccine, the risk for HAV transmission in various settings, and factors known to affect the severity of hepatitis A. Additionally, the workgroup took into account potential advantages of vaccine, recognized disadvantages of IG, and relevance of these data to existing recommendations for use of hepatitis A vaccine and IG in international travelers departing <4 weeks after receiving the first dose of hepatitis A vaccine. The workgroup also considered the likelihood that no additional postexposure efficacy data would become available, because of the difficulties of conducting postexposure efficacy studies of IG and vaccine. On the basis of this evidence and the expert opinions of workgroup members, other scientists, and feedback from ACIP partner organizations, the ACIP Hepatitis Vaccines Workgroup drafted a revision of the hepatitis A postexposure prophylaxis and travel recommendations. These updated recommendations were deliberated and approved by ACIP at the June 2007 meeting.

# I. Prevention of Hepatitis A After Exposure to HAV

Efficacy of hepatitis A vaccine versus IG. The clinical trial comparing hepatitis A vaccine with IG was conducted among 1,090 persons aged 2-40 years who were contacts of hepatitis A cases and susceptible to HAV infection. The trial compared the efficacy of hepatitis A vaccine and IG in preventing laboratory-confirmed symptomatic hepatitis A (i.e., the primary outcome) when administered <14 days after exposure to HAV (3). The primary outcome occurred among 25 (4.4%) of 568 recipients of hepatitis A vaccine and 17 (3.3%) of 522 IG recipients (relative risk: 1.35; 95% confidence interval [CI] = 0.70-2.67); the prespecified statistical criterion for noninferiority was met. The low frequency of study endpoints among IG and vaccine recipients indicated that both interventions provided good protection. The risk for hepatitis A in the vaccine group was never more than 1.5 percentage points greater than that for the IG group for the primary outcome or any secondary study endpoint. Assuming IG is 90% efficacious, the point estimate for hepatitis A vaccine efficacy relative to IG in preventing clinical hepatitis A was 86% (CI = 73%-93%) (3).
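The reported 86% (CI = 73%-93%) estimate follows directly from the trial's relative risk and the stated assumption that IG is 90% efficacious; a short calculation makes the arithmetic explicit:

```python
# Vaccine efficacy relative to no prophylaxis, derived from the relative
# risk (vaccine vs. IG) under the stated assumption that IG is 90%
# efficacious: the attack rate with IG is (1 - 0.90) x baseline, so the
# attack rate with vaccine is RR x 0.10 x baseline.
IG_EFFICACY = 0.90

def vaccine_efficacy(rr_vaccine_vs_ig):
    return 1 - rr_vaccine_vs_ig * (1 - IG_EFFICACY)

for label, rr in [("point estimate", 1.35), ("CI lower", 2.67), ("CI upper", 0.70)]:
    print(f"{label}: RR = {rr} -> efficacy = {vaccine_efficacy(rr):.0%}")
# point estimate: 86%; the RR interval 0.70-2.67 maps to the 73%-93% CI
# (the upper RR bound gives the lower efficacy bound, and vice versa).
```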
This clinical trial suggested that the performance of vaccine, when administered <14 days after exposure, approaches that of IG in healthy children and adults aged <40 years. However, these findings might not be generalizable to all populations and settings. In contrast, years of experience have demonstrated that IG performs well as postexposure prophylaxis in all populations and settings.

Advantages of hepatitis A vaccine. The ability to use hepatitis A vaccine for postexposure prophylaxis provides numerous public health advantages, including the induction of active immunity and longer protection, greater ease of administration, higher acceptability and availability, and a cost per dose that is similar to that of IG. Also, the greater availability and ease of administration of hepatitis A vaccine might increase the number of persons at risk for infection who receive postexposure prophylaxis.

Risk for HAV transmission in various settings. The risk for transmission of HAV is influenced by host and environmental factors and varies considerably in different settings. For example, without postexposure prophylaxis, secondary attack rates of 15%-30% have been reported in households, with higher rates of transmission occurring from infected young children than from infected adolescents and adults (4-6). In contrast, attack rates among patrons of food service establishments who have been exposed to HAV-infected food handlers generally are low (7). Indeed, most food handlers with hepatitis A do not transmit HAV to exposed consumers or restaurant patrons (7). Given the wide range of HAV transmission risks in various settings for which postexposure prophylaxis is recommended, magnitude of risk in each situation is an important factor in determining whether to use IG or vaccine.

Factors affecting clinical manifestations of hepatitis A. Older persons and persons with chronic liver disease are more likely to have severe manifestations of hepatitis A. Among older children and adults, infection typically is symptomatic, with jaundice occurring in >70% of patients (8). The case-fatality rate among cases reported through national surveillance reaches a high of 1.8% among persons aged >60 years, and fulminant hepatitis has been reported more frequently among older patients with hepatitis A (9). Although not at increased risk for HAV infection, persons with chronic liver disease also are at increased risk for fulminant hepatitis A (10). Because of the frequency of severe consequences, preventing hepatitis A among exposed older persons and persons with chronic liver disease is particularly vital. The performance of hepatitis A vaccine as postexposure prophylaxis in these groups was not assessed in the recent clinical trial and remains unknown. In contrast, IG has been recommended and used successfully for many years in these groups and in the general population.

These recommendations replace previous ACIP recommendations for postexposure prophylaxis with IG (1), incorporating new recommendations for use of single-antigen hepatitis A vaccine and updated recommendations for use of IG postexposure. These recommendations also incorporate and consolidate existing recommendations regarding recommended settings for which postexposure prophylaxis is indicated, including close personal contact with a person with hepatitis A and selected circumstances in which hepatitis A is recognized in a food handler or in a child care center (1).
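A minimal sketch (a hypothetical helper, illustrative only and not clinical guidance) of the vaccine-versus-IG choice detailed in the recommendations that follow:

```python
# Encodes the postexposure prophylaxis preferences detailed below: vaccine
# for healthy persons aged 12 months-40 years; IG preferred for persons
# aged >40 years (vaccine if IG cannot be obtained); IG for infants aged
# <12 months, immunocompromised persons, persons with chronic liver
# disease, and persons for whom vaccine is contraindicated.
def hep_a_pep_choice(age_years, immunocompromised=False,
                     chronic_liver_disease=False, vaccine_contraindicated=False):
    if (age_years < 1 or immunocompromised or chronic_liver_disease
            or vaccine_contraindicated):
        return "IG (0.02 mL/kg)"
    if age_years <= 40:
        return "single-antigen hepatitis A vaccine (age-appropriate dose)"
    return "IG preferred; vaccine if IG cannot be obtained"

print(hep_a_pep_choice(30))   # vaccine preferred
print(hep_a_pep_choice(55))   # IG preferred
print(hep_a_pep_choice(0.5))  # IG
```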
Also, the updated recommendations leave unchanged the recommendation that postexposure prophylaxis (using vaccine or IG) should be administered as soon as possible. No information exists regarding the efficacy of IG or vaccine if administered >2 weeks after exposure (1). The updated recommendations for use of hepatitis A vaccine alone for postexposure prophylaxis do not apply to the combination hepatitis A/hepatitis B vaccine because no data exist regarding the performance of the combination vaccine for prophylaxis after exposure to HAV. The concentration of HAV antigen in the currently available combination vaccine formulation is half that included in the single-antigen vaccine available from the same manufacturer (1).

Recommendations for postexposure prophylaxis with IG or hepatitis A vaccine. Persons who recently have been exposed to HAV and who previously have not received hepatitis A vaccine should be administered a single dose of single-antigen vaccine or IG (0.02 mL/kg) as soon as possible. Information about the relative efficacy of vaccine compared with IG postexposure is limited, and no data are available for persons aged >40 years or those with underlying medical conditions. Therefore, decisions to use vaccine or IG should take into account patient characteristics associated with more severe manifestations of hepatitis A, including older age and chronic liver disease. For healthy persons aged 12 months-40 years, single-antigen hepatitis A vaccine at the age-appropriate dose is preferred to IG because of vaccine advantages that include long-term protection and ease of administration. For persons aged >40 years, IG is preferred because of the absence of information regarding vaccine performance and the more severe manifestations of hepatitis A in this age group; vaccine can be used if IG cannot be obtained. The magnitude of the risk for HAV transmission from the exposure should be considered in decisions to use IG or vaccine. IG should be used for children aged <12 months, immunocompromised persons, persons who have had chronic liver disease diagnosed, and persons for whom vaccine is contraindicated. Persons administered IG for whom hepatitis A vaccine also is recommended for other reasons should receive a dose of vaccine simultaneously with IG. For persons who receive vaccine, the second dose should be administered according to the licensed schedule to complete the series. The efficacy of IG or vaccine when administered >2 weeks after exposure has not been established.

Close personal contact. Hepatitis A vaccine or IG should be administered to all previously unvaccinated household and sexual contacts of persons with serologically confirmed hepatitis A. In addition, persons who have shared illicit drugs with a person who has serologically confirmed hepatitis A should receive hepatitis A vaccine, or IG and hepatitis A vaccine simultaneously. Consideration also should be given to providing IG or hepatitis A vaccine to persons with other types of ongoing, close personal contact (e.g., regular babysitting) with a person with hepatitis A.

Child care centers. Hepatitis A vaccine or IG should be administered to all previously unvaccinated staff members and attendees of child care centers or homes if 1) one or more cases of hepatitis A are recognized in children or employees or 2) cases are recognized in two or more households of center attendees.
In centers that do not provide care to children who wear diapers, hepatitis A vaccine or IG need be administered only to classroom contacts of the index patient. When an outbreak occurs (i.e., hepatitis A cases in three or more families), hepatitis A vaccine or IG also should be considered for members of households that have children (center attendees) in diapers.

Common-source exposure. If a food handler receives a diagnosis of hepatitis A, vaccine or IG should be administered to other food handlers at the same establishment. Because common-source transmission to patrons is unlikely, hepatitis A vaccine or IG administration to patrons typically is not indicated but may be considered if 1) during the time when the food handler was likely to be infectious, the food handler both directly handled uncooked or cooked foods and had diarrhea or poor hygienic practices and 2) patrons can be identified and treated <2 weeks after the exposure. In settings in which repeated exposures to HAV might have occurred (e.g., institutional cafeterias), stronger consideration of hepatitis A vaccine or IG use could be warranted. In the event of a common-source outbreak, postexposure prophylaxis should not be provided to exposed persons after cases have begun to occur because the 2-week period after exposure during which IG or hepatitis A vaccine is known to be effective will have been exceeded.

Schools, hospitals, and work settings. Hepatitis A postexposure prophylaxis is not routinely indicated when a single case occurs in an elementary or secondary school or an office or other work setting, and the source of infection is outside the school or work setting. Similarly, when a person who has hepatitis A is admitted to a hospital, staff members should not routinely be administered hepatitis A postexposure prophylaxis; instead, careful hygienic practices should be emphasized. Hepatitis A vaccine or IG should be administered to persons who have close contact with index patients if an epidemiologic investigation indicates HAV transmission has occurred among students in a school or among patients or between patients and staff members in a hospital.

# II. Prevention of Hepatitis A Before International Travel

Hepatitis A vaccination is recommended to prevent hepatitis A among travelers to countries with high or intermediate hepatitis A endemicity. Previously, however, because few data were available regarding the immunogenicity of hepatitis A vaccine during the 4 weeks immediately following administration of the first dose, ACIP recommended that, for optimal protection, persons traveling to an area where the risk for transmission was high <4 weeks after the initial vaccine dose also could be administered IG (1). In June 2007, ACIP concluded that if hepatitis A vaccine alone can be recommended for prophylaxis after exposure to HAV, vaccine also should be recommended for healthy international travelers aged <40 years regardless of their scheduled dates for departure. Similar to updated recommendations for postexposure prophylaxis, ACIP recognized that, for certain international travelers (e.g., older adults or those with underlying medical conditions), the performance of vaccine alone is unknown and clinical manifestations of hepatitis A tend to be more severe.
Hence, under the updated recommendations for international travelers, for optimal protection, IG can be considered in addition to vaccine for older adults, immunocompromised persons, and persons with chronic liver disease or other chronic medical conditions who are traveling to an area within 2 weeks. The following recommendation updates recommendations for prevention of hepatitis A among travelers departing in <4 weeks to areas where prophylaxis is recommended and consolidates other recommendations for prevention of hepatitis A among international travelers (1). These recommendations replace previous ACIP recommendations for preexposure protection against hepatitis A for travelers (1).

Recommendations for preexposure protection against hepatitis A for travelers. All susceptible persons traveling to or working in countries that have high or intermediate hepatitis A endemicity are at increased risk for HAV infection and should be vaccinated or receive IG before departure. Hepatitis A vaccination at the age-appropriate dose is preferred to IG. Data are not available regarding the risk for hepatitis A for persons traveling to certain areas of the Caribbean, although prophylaxis should be considered if travel to areas with questionable sanitation is anticipated. Travelers to Australia, Canada, western Europe, Japan, or New Zealand (i.e., countries in which endemicity is low) are at no greater risk for infection than persons living or traveling in the United States.

The first dose of hepatitis A vaccine should be administered as soon as travel is considered. Based on limited data indicating equivalent postexposure efficacy of IG and vaccine among healthy persons aged <40 years, 1 dose of single-antigen hepatitis A vaccine administered at any time before departure can provide adequate protection for most healthy persons. However, no data are available for other populations or other hepatitis A vaccine formulations (e.g., Twinrix®). For optimal protection, older adults, immunocompromised persons, and persons with chronic liver disease or other chronic medical conditions planning to depart to an area in <2 weeks should receive the initial dose of vaccine and also simultaneously can be administered IG (0.02 mL/kg) at a separate anatomic injection site. Completion of the vaccine series according to the licensed schedule is necessary for long-term protection. Travelers who elect not to receive vaccine, are aged <12 months, or are allergic to a vaccine component should receive a single dose of IG (0.02 mL/kg), which provides effective protection for up to 3 months; those who plan to travel for >2 months should be administered IG at 0.06 mL/kg, and administration must be repeated if the travel period is >5 months. The full statement containing licensed vaccination schedule and recommended dose of IG and vaccine has been published previously (1).
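A minimal sketch (illustrative only, not clinical guidance) of the IG travel-dosing arithmetic described above; note that the >2-month travel threshold is taken from the reconstructed sentence above:

```python
# IG volume for a traveler, per the dosing described above: 0.02 mL/kg
# protects for up to 3 months; longer travel calls for 0.06 mL/kg, with
# repeat administration if the travel period exceeds 5 months.
# Illustrative only, not clinical guidance.
def ig_travel_dose(weight_kg, travel_months):
    if travel_months <= 2:
        per_kg, repeat_needed = 0.02, False
    else:
        per_kg, repeat_needed = 0.06, travel_months > 5
    return per_kg * weight_kg, repeat_needed

volume_ml, repeat_needed = ig_travel_dose(weight_kg=70, travel_months=6)
print(f"{volume_ml:.1f} mL per dose; repeat during travel: {repeat_needed}")  # 4.2 mL; True
```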
# Notice to Readers: Recommendations for Public Health Curriculum - Consensus Conference on Undergraduate Public Health Education, November 2006

The Institute of Medicine of the National Academies has recommended that all undergraduates have access to education in public health (1).

Conference attendees agreed that undergraduate public health education can help produce an educated citizenry that is better prepared to cope with public health challenges ranging from acquired immunodeficiency syndrome to aging, avian influenza, and health-care costs. Conference working groups recommended that two introductory courses, Public Health 101 and Epidemiology 101, be offered by all U.S. colleges and universities to fulfill undergraduate social science and science distribution requirements, respectively. The groups further recommended that high-quality minors in public health should be developed, with core courses, experience-based learning, and focus areas such as global health. The full recommendations from the conference have been published online by CCAS at .

The modern era of undergraduate public health education began at Johns Hopkins University in the mid-1970s, when a public health major was approved through the School of Arts and Sciences in collaboration with what was then the School of Hygiene and Public Health. After slow growth in the 1980s, interest in undergraduate public health education grew rapidly in the 1990s. By the end of the 20th century, a substantial number of schools of public health were experimenting with undergraduate courses, minors, and majors. Programs in public health also were revising professionally focused curricula and developing broader approaches to undergraduate public health education (2,3). Recent surveys indicate that the majority of the approximately 40 accredited public health schools (ASPH, unpublished data, 2006) and approximately 60 accredited public health programs (Association for Prevention Teaching and Research, unpublished data, 2006) offer undergraduate courses in public health. However, public health courses are offered rarely among the 1,900 colleges and universities that have no public health schools or programs yet might choose to include public health in their arts and sciences curricula. The conference working groups recommended that Public Health 101 and Epidemiology 101 be designed to fit within the broadest possible array of arts and sciences curricula.

Methods for integrating recommendations from the conference into the nation's long-term strategy for public health also were discussed. These included 1) websites to provide information on undergraduate public health and share curriculum materials, 2) faculty development measures to assist colleges and universities in developing new introductory public health courses, 3) encouragement of applicants by health professions education and graduate public health degree programs to enroll in introductory undergraduate public health courses, 4) continued discussion of approaches for developing minors in public health and global health in institutions with and without schools or programs in public health, and 5) participation by public-health practitioners in experiential or service-learning and other components of undergraduate education.

In addition, 1,489 dead corvids and 435 other dead birds with WNV infection have been reported in 34 states and New York City during 2007. WNV infections have been reported in horses in 31 states, in three canines in Idaho and Oregon, in 26 squirrels in California and Oregon, and in three unidentified animal species in Idaho and Montana. WNV seroconversions have been reported in 637 sentinel chicken flocks in 11 states (Arizona, Arkansas, California, Delaware, Florida, Iowa, North Carolina, North Dakota, Oregon, Utah, and Virginia) and Puerto Rico. A total of 7,208 WNV-positive mosquito pools have been reported from 36 states, the District of Columbia, and New York City. Additional information about national WNV activity is available from CDC at /westnile/index.htm and at .

# Recommended Adult Immunization Schedule - United States, October 2007-September 2008

# Changes for October 2007-September 2008

Age-Based Schedule (Figure 1)
- The yellow bar for varicella vaccine has been extended through all age groups, indicating that the vaccine is recommended for all adults without evidence of immunity to varicella.
- Zoster vaccine has been added, with a yellow bar indicating that the vaccine is recommended for persons aged >60 years.

Medical/Other Indications Schedule (Figure 2)
- The title has been changed to "Vaccines that might be indicated for adults based on medical and other indications," indicating that not all of the vaccines are recommended based on medical indications.
- The word "contraindicated" has been added to the red bars and removed from the legend.
- The "immunocompromising conditions" column heading has been shortened by removing the list of conditions.
- The "human immunodeficiency virus (HIV) infection" column has been moved next to the "immunocompromising conditions" column.
- The HIV column has been split into CD4+ T lymphocyte counts of <200 cells/µL and >200 cells/µL.
- The indication "recipients of clotting factor concentrates" has been removed from the column heading "chronic liver disease" because only one vaccine has this recommendation. The indication remains in the hepatitis A vaccine footnote.
- The varicella vaccine yellow bar has been extended to include persons infected with HIV who have CD4+ T lymphocyte counts of >200 cells/µL (1).
- The influenza vaccine yellow bar for "health-care personnel" indicates that health-care personnel can receive either trivalent inactivated influenza vaccine (TIV) or live, attenuated influenza vaccine (LAIV).
- The yellow bar for influenza vaccine has been extended to include persons in the "asplenia" risk group.
- The bar for meningococcal vaccine has been revised to indicate that 1 or more doses might be indicated.
- Zoster vaccine has been added to the schedule with a yellow bar to indicate that the vaccine is recommended for all indications except pregnancy, immunocompromising conditions, and HIV. A red bar, indicating a contraindication, has been inserted for pregnancy, immunocompromising conditions, and HIV infection with a CD4+ T lymphocyte count of <200 cells/µL.

# Footnotes (Figures 1 and 2)
- Text for vaccine contraindications in pregnancy has been removed from the footnotes of human papillomavirus (HPV) (#2); measles, mumps, rubella (MMR) (#3); and varicella (#4) to be consistent with the intent of the footnotes to summarize the indications for vaccine use. Pregnancy contraindications are indicated with a red bar.
- The HPV footnote (#2) has been revised to clarify evidence of prior infection, clarify that HPV vaccine is not specifically indicated based on medical conditions, and indicate that efficacy and immunogenicity might be lower in persons with certain medical conditions.
- The varicella footnote (#4) has been revised to clarify that birth before 1980 for immunocompromised persons is not evidence of immunity and to add a requirement for evidence of immunity.
- The pneumococcal polysaccharide vaccine (PPV) footnote (#6) has been revised by adding chronic alcoholism and cerebrospinal fluid leaks and deleting the immunocompromising conditions.
- The hepatitis B footnote (#9) has been revised by removing persons who receive clotting factor concentrates as a risk group and by clarifying the special formulations dose.
- The meningococcal vaccine footnote (#10) has been revised to clarify that persons who remain at increased risk for infection might be indicated for revaccination.
- A footnote (#11) has been added to reflect ACIP recommendations for herpes zoster vaccination for persons aged >60 years.
- A footnote (#13) has been added to provide a reference for vaccines in persons with immunocompromising conditions.

Reference
1. CDC. Prevention of varicella: recommendations of the Advisory Committee on Immunization Practices (ACIP). MMWR 2007;56(No. RR-4).

The Recommended Adult Immunization Schedule has been approved by the Advisory Committee on Immunization Practices, the American Academy of Family Physicians, the American College of Obstetricians and Gynecologists, and the American College of Physicians. The standard MMWR footnote format has been modified for publication of this schedule.

# Tetanus, diphtheria, pertussis (Td/Tdap) vaccination

Adults with uncertain histories of a complete primary vaccination series with tetanus and diphtheria toxoid-containing vaccines should begin or complete a primary vaccination series. A primary series for adults is 3 doses of tetanus and diphtheria toxoid-containing vaccines; administer the first 2 doses at least 4 weeks apart and the third dose 6-12 months after the second. However, Tdap can substitute for any one of the doses of Td in the 3-dose primary series. The booster dose of tetanus and diphtheria toxoid-containing vaccine should be administered to adults who have completed a primary series and if the last vaccination was received >10 years previously. Tdap or Td vaccine may be used, as indicated. If the person is pregnant and received the last Td vaccination >10 years previously, administer Td during the second or third trimester; if the person received the last Td vaccination in <10 years, administer Tdap during the immediate postpartum period. A one-time administration of 1 dose of Tdap with an interval as short as 2 years from a previous Td vaccination is recommended for postpartum women, close contacts of infants aged <12 months, and all health-care workers with direct patient contact. In certain situations, Td can be deferred during pregnancy and Tdap substituted in the immediate postpartum period, or Tdap can be administered instead of Td to a pregnant woman after an informed discussion with the woman. Consult the ACIP statement for recommendations for administering Td as prophylaxis in wound management.

# Human papillomavirus (HPV) vaccination

HPV vaccination is recommended for all females aged <26 years who have not completed the vaccine series. History of genital warts, abnormal Papanicolaou test, or positive HPV DNA test is not evidence of prior infection with all vaccine HPV types; HPV vaccination is still recommended for these persons. Ideally, vaccine should be administered before potential exposure to HPV through sexual activity; however, females who are sexually active should still be vaccinated. Sexually active females who have not been infected with any of the HPV vaccine types receive the full benefit of the vaccination. Vaccination is less beneficial for females who have already been infected with one or more of the HPV vaccine types. A complete series consists of 3 doses. The second dose should be administered 2 months after the first dose; the third dose should be administered 6 months after the first dose. Although HPV vaccination is not specifically recommended for females with the medical indications described in Figure 2, "Vaccines that might be indicated for adults based on medical and other indications," it is not a live-virus vaccine and can be administered. However, immune response and vaccine efficacy might be less than in persons who do not have the medical indications described or who are immunocompetent.

# Measles, mumps, rubella (MMR) vaccination

Measles component: adults born before 1957 can be considered immune to measles.
Adults born during or after 1957 should receive >1 dose of MMR unless they have a medical contraindication, documentation of >1 dose, history of measles based on health-care provider diagnosis, or laboratory evidence of immunity. A second dose of MMR is recommended for adults who 1) have been recently exposed to measles or are in an outbreak setting; 2) have been previously vaccinated with killed measles vaccine; 3) have been vaccinated with an unknown type of measles vaccine during 1963-1967; 4) are students in postsecondary educational institutions; 5) work in a health-care facility; or 6) plan to travel internationally.

Mumps component: adults born before 1957 can generally be considered immune to mumps. Adults born during or after 1957 should receive 1 dose of MMR unless they have a medical contraindication, history of mumps based on health-care provider diagnosis, or laboratory evidence of immunity. A second dose of MMR is recommended for adults who 1) are in an age group that is affected during a mumps outbreak; 2) are students in postsecondary educational institutions; 3) work in a health-care facility; or 4) plan to travel internationally. For unvaccinated health-care workers born before 1957 who do not have other evidence of mumps immunity, consider administering 1 dose on a routine basis and strongly consider administering a second dose during an outbreak.

Rubella component: administer 1 dose of MMR vaccine to women whose rubella vaccination history is unreliable or who lack laboratory evidence of immunity. For women of childbearing age, regardless of birth year, routinely determine rubella immunity and counsel women regarding congenital rubella syndrome. Women who do not have evidence of immunity should receive MMR vaccine on completion or termination of pregnancy and before discharge from the health-care facility.

# Varicella vaccination

All adults without evidence of immunity to varicella should receive 2 doses of single-antigen varicella vaccine unless they have a medical contraindication. Special consideration should be given to those who 1) have close contact with persons at high risk for severe disease (e.g., health-care personnel and family contacts of immunocompromised persons) or 2) are at high risk for exposure or transmission (e.g., teachers; child care employees; residents and staff members of institutional settings, including correctional institutions; college students; military personnel; adolescents and adults living in households with children; nonpregnant women of childbearing age; and international travelers).

Evidence of immunity to varicella in adults includes any of the following: 1) documentation of 2 doses of varicella vaccine at least 4 weeks apart; 2) U.S.-born before 1980 (although for health-care personnel and pregnant women, birth before 1980 should not be considered evidence of immunity); 3) history of varicella based on diagnosis or verification of varicella by a health-care provider (for a patient reporting a history of or presenting with an atypical case, a mild case, or both, health-care providers should seek either an epidemiologic link with a typical varicella case or to a laboratory-confirmed case or evidence of laboratory confirmation, if it was performed at the time of acute disease); 4) history of herpes zoster based on health-care provider diagnosis; or 5) laboratory evidence of immunity or laboratory confirmation of disease.

Assess pregnant women for evidence of varicella immunity. Women who do not have evidence of immunity should receive the first dose of varicella vaccine upon completion or termination of pregnancy and before discharge from the health-care facility. The second dose should be administered 4-8 weeks after the first dose.
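The varicella evidence-of-immunity criteria above amount to a simple decision rule; a minimal sketch (illustrative only, not clinical guidance):

```python
# Evidence of varicella immunity per the criteria listed above. Note that
# U.S. birth before 1980 does not count as evidence for health-care
# personnel or pregnant women. Illustrative only, not clinical guidance.
def varicella_immune(two_documented_doses=False, us_born_before_1980=False,
                     health_care_personnel=False, pregnant=False,
                     provider_verified_history=False, zoster_history=False,
                     lab_evidence=False):
    if (two_documented_doses or provider_verified_history
            or zoster_history or lab_evidence):
        return True
    return us_born_before_1980 and not (health_care_personnel or pregnant)

print(varicella_immune(us_born_before_1980=True))                 # True
print(varicella_immune(us_born_before_1980=True, pregnant=True))  # False
```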
# Influenza vaccination

Medical indications: chronic disorders of the cardiovascular or pulmonary systems, including asthma; chronic metabolic diseases, including diabetes mellitus, renal or hepatic dysfunction, hemoglobinopathies, or immunosuppression (including immunosuppression caused by medications or human immunodeficiency virus [HIV]); any condition that compromises respiratory function or the handling of respiratory secretions or that can increase the risk of aspiration (e.g., cognitive dysfunction, spinal cord injury, or seizure disorder or other neuromuscular disorder); and pregnancy during the influenza season. No data exist on the risk for severe or complicated influenza disease among persons with asplenia; however, influenza is a risk factor for secondary bacterial infections that can cause severe disease among persons with asplenia.

Occupational indications: health-care personnel and employees of long-term-care and assisted-living facilities.

Other indications: residents of nursing homes and other long-term-care and assisted-living facilities; persons likely to transmit influenza to persons at high risk (e.g., in-home household contacts and caregivers of children aged 0-59 months, or persons of all ages with high-risk conditions); and anyone who would like to be vaccinated. Healthy, nonpregnant adults aged <49 years without high-risk medical conditions who are not contacts of severely immunocompromised persons in special care units can receive either intranasally administered live, attenuated influenza vaccine (FluMist®) or inactivated vaccine. Other persons should receive the inactivated vaccine.

# Pneumococcal polysaccharide vaccination

Medical indications: chronic pulmonary disease (excluding asthma); chronic cardiovascular diseases; diabetes mellitus; chronic liver diseases, including liver disease as a result of alcohol abuse (e.g., cirrhosis); chronic alcoholism, chronic renal failure, or nephrotic syndrome; functional or anatomic asplenia (e.g., sickle cell disease or splenectomy); immunosuppressive conditions; and cochlear implants and cerebrospinal fluid leaks. Vaccinate as close to HIV diagnosis as possible.

Other indications: Alaska Natives and certain American Indian populations and residents of nursing homes or other long-term-care facilities.

# Revaccination with pneumococcal polysaccharide vaccine

One-time revaccination after 5 years for persons with chronic renal failure or nephrotic syndrome; functional or anatomic asplenia (e.g., sickle cell disease or splenectomy); or immunosuppressive conditions. For persons aged >65 years, one-time revaccination if they were vaccinated >5 years previously and were aged <65 years at the time of primary vaccination.

# Hepatitis A vaccination

Medical indications: persons with chronic liver disease and persons who receive clotting factor concentrates.

Behavioral indications: men who have sex with men and persons who use illegal drugs.

Occupational indications: persons working with hepatitis A virus (HAV)-infected primates or with HAV in a research laboratory setting.

Other indications: persons traveling to or working in countries that have high or intermediate endemicity of hepatitis A (a list of countries is available at ) and any person seeking protection from HAV infection. Single-antigen vaccine formulations should be administered in a 2-dose schedule at either 0 and 6-12 months (Havrix®), or 0 and 6-18 months (Vaqta®).
If the combined hepatitis A and hepatitis B vaccine (Twinrix®) is used, administer 3 doses at 0, 1, and 6 months.

# Hepatitis B vaccination

Medical indications: persons with end-stage renal disease, including patients receiving hemodialysis; persons seeking evaluation or treatment for a sexually transmitted disease (STD); persons with HIV infection; and persons with chronic liver disease.

Occupational indications: health-care personnel and public-safety workers who are exposed to blood or other potentially infectious body fluids.

Behavioral indications: sexually active persons who are not in a long-term, mutually monogamous relationship (e.g., persons with more than one sex partner during the previous 6 months); current or recent injection-drug users; and men who have sex with men.

Other indications: household contacts and sex partners of persons with chronic hepatitis B virus (HBV) infection; clients and staff members of institutions for persons with developmental disabilities; international travelers to countries with high or intermediate prevalence of chronic HBV infection (a list of countries is available at /travel/contentdiseases.aspx); and any adult seeking protection from HBV infection.

Settings where hepatitis B vaccination is recommended for all adults: STD treatment facilities; HIV testing and treatment facilities; facilities providing drug-abuse treatment and prevention services; health-care settings targeting services to injection-drug users or men who have sex with men; correctional facilities; end-stage renal disease programs and facilities for chronic hemodialysis patients; and institutions and nonresidential day care facilities for persons with developmental disabilities.

Special formulation indications: for adult patients receiving hemodialysis and other immunocompromised adults, 1 dose of 40 µg/mL (Recombivax HB®) or 2 doses of 20 µg/mL (Engerix-B®), administered simultaneously.

# Meningococcal vaccination

Medical indications: adults with anatomic or functional asplenia or terminal complement component deficiencies.

Other indications: first-year college students living in dormitories; microbiologists who are routinely exposed to isolates of Neisseria meningitidis; military recruits; and persons who travel to or live in countries in which meningococcal disease is hyperendemic or epidemic (e.g., the "meningitis belt" of sub-Saharan Africa during the dry season), particularly if their contact with local populations will be prolonged. Vaccination is required by the government of Saudi Arabia for all travelers to Mecca during the annual Hajj. Meningococcal conjugate vaccine is preferred for adults with any of the preceding indications who are aged <55 years, although meningococcal polysaccharide vaccine (MPSV4) is an acceptable alternative. Revaccination after 3-5 years might be indicated for adults previously vaccinated with MPSV4 who remain at increased risk for infection (e.g., persons residing in areas in which disease is epidemic).

# Herpes zoster vaccination

A single dose of zoster vaccine is recommended for adults aged >60 years regardless of whether they report a prior episode of herpes zoster. Persons with chronic medical conditions may be vaccinated unless a contraindication or precaution exists for their condition.

# Selected conditions for which Haemophilus influenzae type b (Hib) vaccine may be used

Hib conjugate vaccines are licensed for children aged 6 weeks-71 months.
No efficacy data are available on which to base a recommendation concerning use of Hib vaccine for older children and adults with the chronic conditions associated with an increased risk for Hib disease. However, studies suggest good immunogenicity in patients who have sickle cell disease, leukemia, or HIV infection or who have had splenectomies; administering vaccine to these patients is not contraindicated.

# Immunocompromising conditions

Inactivated vaccines generally are acceptable (e.g., pneumococcal, meningococcal, and influenza) and live vaccines generally are avoided in persons with immune deficiencies or immune suppressive conditions. Information on specific conditions is available at .

This schedule indicates the recommended age groups and medical indications for routine administration of currently licensed vaccines for persons aged >19 years, as of October 1, 2007. Licensed combination vaccines may be used whenever any components of the combination are indicated and when the vaccine's other components are not contraindicated. For detailed recommendations on all vaccines, including those used primarily for travelers or those issued during the year, consult the manufacturers' package inserts and the complete statements from the Advisory Committee on Immunization Practices (available at ). Report all clinically significant postvaccination reactions to the Vaccine Adverse Event Reporting System (VAERS). Reporting forms and instructions on filing a VAERS report are available at or by telephone, 800-822-7967.
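The multi-dose series in the hepatitis A and hepatitis B footnotes above translate into simple calendar arithmetic; a minimal sketch (product names are from the footnotes; the scheduling helper is hypothetical and months are approximated as 30-day blocks):

```python
# Computes target dates for the multi-dose series described in the
# footnotes above: Havrix at 0 and 6-12 months, Vaqta at 0 and 6-18
# months, Twinrix at 0, 1, and 6 months. The helper is hypothetical,
# not part of the schedule.
from datetime import date, timedelta

SERIES_MONTHS = {
    "Havrix": [0, 6],     # second dose due at 6-12 months
    "Vaqta": [0, 6],      # second dose due at 6-18 months
    "Twinrix": [0, 1, 6],
}

def dose_dates(product, first_dose):
    return [first_dose + timedelta(days=30 * m) for m in SERIES_MONTHS[product]]

for d in dose_dates("Twinrix", date(2007, 10, 19)):
    print(d.isoformat())  # 2007-10-19, 2007-11-18, 2008-04-16
```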
Drotrecogin-α (Xigris ® ) primarily is used to treat severe sepsis. The drug is a recombinant form of human activated protein C that has antithrombocytic, antiinflammatory, and profibrinolytic properties.# Tickborne relapsing fever (TBRF) is a bacterial illness caused by certain species of Borrelia and transmitted through brief and painless bites from Ornithodoros ticks (1,2). Illness usually is characterized by intermittent periods of fever, fatigue, and muscle aches. In April 2005, CDC received reports of two cases of severe TBRF associated with acute respiratory distress syndrome (ARDS) in residents of California and Nevada. After a report describing these cases was posted on CDC's Epidemic Information Exchange (Epi-X), health officials in Washington reported a third severe case associated with ARDS. This report summarizes these three cases and the results of the subsequent epidemiologic investigations. The findings indicate that ARDS might occur more frequently in patients with TBRF than previously recognized. Optimal management of TBRF requires both prompt diagnosis and careful observation during the initial phases of treatment. # Case Reports Nevada. On February 17, 2005, a previously healthy woman aged 46 years from Washoe County, Nevada, had onset of nonspecific leg pain, which progressed during the next 24 hours to generalized myalgia. She visited a local hospital emergency department (ED), where a viral syndrome was diagnosed. She was treated with intravenous (IV) fluids and pain medication and discharged home. Two days later, she returned to the ED with fever, chills, fatigue, anorexia, nausea, and an episode of syncope. On arrival, she was noted to be tachycardic (130 beats per minute [bpm]), tachypneic (24 breaths per minute), and hypotensive (systolic blood pressure: 89 mm Hg) with a temperature of 96.8°F (36.0°C). A physical examination was otherwise unremarkable. Pulse oximetry on room air indicated an oxygen saturation of 96%. Initial laboratory testing revealed a white blood cell count (WBC) of 11.4 × 10 3 /µL, hemoglobin level of 13 g/dL, platelet count of 66 × 10 3 /µL, and alanine aminotransferase (ALT) of 153 U/L. A chest radiograph revealed a right middle lobe infiltrate, consistent with community-acquired pneumonia. She was treated with gatifloxacin and transferred to the intensive care unit (ICU). Approximately 10 hours after admission, she was intubated for worsening tachypnea (respiratory rate [RR]: 40 breaths per minute). Diffuse bilateral infiltrates were noted on chest radiograph, and an arterial blood gas sample yielded oxygenation of 53 mmHg on 100% inspired oxygen. The patient's antimicrobial treatment was broadened to include vancomycin and doxycycline. The next day, the treating physician was notified that spirochetes were observed during examination of a blood smear obtained when the patient was admitted; the smear had been manually reviewed because of thrombocytopenia. The patient remained intubated for 12 days for what was ultimately determined to be ARDS. During this time, she was administered three additional antimicrobials (ciprofloxacin, tobramycin, and ceftriaxone) and drotrecogin-α.* She was discharged after 21 days and recovered completely. The causative organism was identified as Borrelia hermsii by polymerase chain reaction (PCR) performed on a whole blood sample and serologic testing of convalescent-phase serum, both performed at CDC. 
Although the patient did not recall receiving a tick bite, she did report staying at a resort near South Lake Tahoe (an area known to be highly endemic for TBRF) 5 days before becoming ill.

California. On April 12, 2005, a previously healthy woman aged 43 years from El Dorado County, California, had onset of lethargy and myalgia. She went to a local hospital ED on April 14 with fever, chills, headache, myalgia, and dehydration. She was febrile (100.5°F [38.1°C]), tachycardic (138 bpm), mildly tachypneic (16 breaths per minute), and hypotensive (systolic blood pressure: 97 mm Hg). A physical examination was otherwise unremarkable. Pulse oximetry on room air indicated an oxygen saturation of 97%. A chest radiograph was not obtained, but initial blood tests indicated an elevated bilirubin level of 3.6 mg/dL, aspartate aminotransferase (AST) of 93 U/L, and ALT of 88 U/L. She was treated with acetaminophen and discharged home with instructions to return for reevaluation of blood tests the next day. The patient returned the next day with headache, sweating, fatigue, and nausea. A physical examination revealed rhonchi; a chest radiograph was obtained and read as normal. She was treated with IV fluids and IV ceftriaxone. Within 1 hour of receiving antibiotics, her pulse increased to 127 bpm, her systolic blood pressure decreased to 85 mm Hg, and pulse oximetry on room air indicated an oxygen saturation of 95%. TBRF was diagnosed by observation of spirochetes in smears of peripheral blood (Figure). The patient was treated with dopamine for hypotension and doxycycline for TBRF and transferred to another medical center. Shortly after arrival, a chest radiograph taken because of worsening respiratory distress demonstrated diffuse bilateral infiltrates. The patient was intubated for respiratory failure (RR: 44 breaths per minute; oxygen saturation of 82% on 100% inspired oxygen via nonrebreather mask) attributed to ARDS. Laboratory testing revealed a WBC of 3.4 × 10³/µL, hemoglobin level of 11.4 g/dL, and platelet count of 19 × 10³/µL. Platelets and fresh frozen plasma were administered. The patient remained intubated for 10 days, during which she was administered four different antimicrobials (vancomycin, piperacillin/tazobactam, metronidazole, and doxycycline) and drotrecogin-α. She was discharged after 19 days of hospitalization and eventually recovered from her illness. A blood sample obtained early in illness and cultured at CDC yielded B. hermsii. An environmental investigation was conducted at her home, located 5 miles south of Lake Tahoe and approximately 10 miles from the resort visited by the Nevada patient. An engorged soft tick was found in her bedroom, and removal of house siding revealed multiple rodent nests from which approximately 30 Ornithodoros hermsi ticks were recovered.

Washington. A woman aged 40 years from King County, Washington, visited a hospital ED on September 21, 2004, with myalgia, arthralgia, nausea, vomiting, and headache. She was treated with IV fluids, promethazine, and hydrocodone. Hospital admission was recommended, but she refused. After experiencing a syncopal episode at home, she returned and was noted to be febrile (102°F [38.9°C]), hypotensive (systolic blood pressure: 100 mm Hg), mildly tachycardic (107 bpm), and hypoxic (oxygen saturation: 92% on 4 L of oxygen). A physical examination was otherwise unremarkable. Her chest radiograph revealed bilateral lower lobe infiltrates.
Initial laboratory studies indicated a WBC of 9.5 × 10³/µL, hematocrit of 33%, platelet count of 49 × 10³/µL, ALT of 192 U/L, and a D-dimer of 754. She was admitted for presumed community-acquired pneumonia with sepsis and treated empirically with IV cefuroxime and azithromycin. After receiving the cefuroxime, she was transiently hypotensive and became somnolent. She was intubated and transferred to the ICU with a diagnosis of ARDS and worsening mental status. Because of thrombocytopenia, a peripheral blood smear was examined, revealing spirochetes diagnostic of TBRF. Her transient hypotension was attributed to a Jarisch-Herxheimer reaction (JHR).† She remained intubated for 3 days, was discharged home after 10 days, and eventually recovered from her illness. The most likely site of exposure was a forest cabin in Chelan County, Washington, where she had slept approximately 11 days before illness onset. On inspection, the cabin had evidence of rodent infestation; however, attempts to trap ticks and rodents were unsuccessful.

# Epidemiologic Investigations

To determine the frequency of ARDS among patients with TBRF acquired in the South Lake Tahoe area, case-report forms for all TBRF cases reported to Nevada and California state and local health departments during 1995-2004 were reviewed. Additionally, cases were ascertained by 1) a computerized search of discharge records from Lake Tahoe area hospitals where cases had been diagnosed; 2) interviews with physicians and laboratorians from area hospitals and private practices where cases had been diagnosed; and 3) postings on Epi-X and the Emerging Infections Network. Including the California and Nevada cases described in this report, 65 cases of TBRF among persons who reported living in or visiting the Lake Tahoe area during the usual incubation period of 2-18 days before illness onset were ascertained. Thirty (46%) were in patients who required hospitalization. Detailed clinical information from medical records was available for 38 (58%) patients. Among these 38 patients, 16 (42%) experienced one or more of the following complications: eight (21%), JHR; six (16%), hypoxia; five (13%), elevated liver enzyme levels; three (8%), arrhythmia or myocarditis; two (5%), azotemia; and two (5%), ARDS.

TBRF cases in Washington state were similarly reviewed by using all case reports submitted to the state health department during 1996-2005. Including the single case described in this report, 46 TBRF cases were reported in Washington during 1996-2005, of which 37 (80%) were in patients who required hospitalization. Comments on case-report forms indicated that five (13%) patients required care in an ICU, three (6%) had JHR, and three (6%) had ARDS. All three ARDS cases occurred after 2001.

Only one case of TBRF-associated ARDS had been described previously (9), and this case occurred in a woman who was pregnant and therefore more susceptible to severe TBRF (10). Results of this investigation indicate that ARDS might occur more frequently in patients with TBRF than previously recognized and can occur in persons without predisposing conditions. All cases of TBRF-associated ARDS identified in this review occurred after 2001, but further surveillance will be needed to determine whether the risk for ARDS in TBRF is increasing. Increases might be related to changes in medical practice, use of newer antimicrobials, or possibly the emergence of a more virulent strain.
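The complication frequencies above are simple proportions of the 38 patients with detailed records; a minimal sketch of that arithmetic (counts taken from the text; variable names are illustrative):

```python
# Complication counts among the 38 Lake Tahoe-area TBRF patients with
# detailed medical records, as reported above.
complications = {
    "JHR": 8,
    "hypoxia": 6,
    "elevated liver enzymes": 5,
    "arrhythmia or myocarditis": 3,
    "azotemia": 2,
    "ARDS": 2,
}
n_with_records = 38

print(f"any complication: 16/{n_with_records} = {16 / n_with_records:.0%}")
for name, count in complications.items():
    print(f"{name}: {count}/{n_with_records} = {count / n_with_records:.0%}")
```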
All three cases described in this report occurred in women, but no common medical history (e.g., menopausal status, hormone replacement therapy, or oral contraceptive use) was identified. All three patients had received antimicrobial treatment before onset of ARDS; however, whether they had ARDS as a result of JHR or underlying sepsis could not be determined.

The findings in this report are subject to at least two limitations. First, cases were evaluated in only two geographic areas; therefore, results might not be generalizable to other western states where TBRF is endemic. Second, TBRF is not a nationally notifiable disease, and each state has different reporting requirements; therefore, case information is subject to underreporting and ascertainment bias. These methodological differences might have affected the observed rates of hospitalization and classification of ARDS.

Health-care professionals should report suspected TBRF cases to local or state health departments, providing a thorough clinical and exposure history and, as appropriate, samples (i.e., serum or whole blood) for diagnostic testing. The observation of spirochetes in a Wright- or Giemsa-stained peripheral blood smear collected during a febrile episode is considered diagnostic of TBRF and is not typical of other spirochetal infections (1). Laboratory diagnosis also can be made by culture, serology, or PCR of serum and blood at certain reference laboratories. TBRF can be prevented by minimizing rodent infestations in homes. Health officials in endemic areas should consider educational measures that increase awareness of potential exposures, demonstrate methods for rodent proofing dwellings, and promote early recognition of cases by health-care professionals (5). These measures are especially important in mountainous resort areas that serve numerous visitors.

# Emergence of Antimicrobial-Resistant Serotype 19A Streptococcus pneumoniae - Massachusetts, 2001-2006

Streptococcus pneumoniae (pneumococcus) is a leading cause of otitis, sinusitis, pneumonia, and meningitis worldwide. Treatment of the most serious type of pneumococcal infection, invasive pneumococcal disease (IPD),* is complicated by antimicrobial resistance. Widespread introduction in 2000 of heptavalent pneumococcal conjugate vaccine (PCV7) against serotypes 4, 6B, 9V, 14, 18C, 19F, and 23F resulted in a decline in antimicrobial-nonsusceptible IPD in the United States (1,2), including in Massachusetts (3). However, development of antimicrobial resistance in serotypes not covered by PCV7 is a growing concern (1,4). In Massachusetts during 2001-2006, IPD surveillance identified an increased number of cases in children caused by pneumococcal serotypes (most notably 19A) not covered by PCV7 and an associated increase in antimicrobial resistance among these isolates. This report examines these trends and clinical characteristics of Massachusetts patients with antimicrobial-nonsusceptible, non-PCV7-type IPD. The findings indicated that, despite increases in incidence of antimicrobial-nonsusceptible IPD, overall rates of IPD remained stable during 2001-2006. In addition, persons with IPD caused by antimicrobial-nonsusceptible S. pneumoniae had clinical outcomes comparable to those of persons with IPD caused by antimicrobial-susceptible serotypes. Although PCV7 is effective in preventing IPD, these results confirm that antimicrobial resistance among serotypes not covered by PCV7 remains a concern.
On October 1, 2001, the Massachusetts Department of Public Health and the Section of Pediatric Infectious Diseases at Boston University Medical Center initiated statewide laboratory- and population-based surveillance for IPD among children.† For this report, cases of IPD were defined by isolation of pneumococcus from a normally sterile body site (e.g., blood or cerebrospinal, pleural, or joint fluid) in a Massachusetts resident aged <18 years during October 1, 2001-September 30, 2006. Demographic and clinical data were obtained from telephone interviews with primary-care providers or adult caregivers. PCV7 vaccination rates were estimated using CDC's National Immunization Survey.§ Serotyping was performed at Boston University Medical Center, using the Quellung reaction with pneumococcal antisera. Susceptibility to five antimicrobials often used in pediatric patients (i.e., amoxicillin, penicillin, ceftriaxone, azithromycin, and trimethoprim-sulfamethoxazole) was determined by E-test (epsilometer test, an agar diffusion method), and interpretations were based on Clinical and Laboratory Standards Institute 2007 guidelines (5). For each antimicrobial agent tested, isolates with either intermediate-level or high-level antimicrobial resistance were considered nonsusceptible to the antimicrobial agent unless otherwise indicated. Population denominators were obtained from 2000-2005 census figures. The Mantel-Haenszel chi-square test for trend was used to identify changes in serotype distribution or antimicrobial resistance over time. Chi-square or Fisher's exact tests of proportions were used to compare risk factors and clinical characteristics of disease. Because IPD surveillance did not begin until after introduction of PCV7, no data on pre-PCV7 susceptibility were available for comparison.

The proportion of IPD cases caused by serotype 19A increased during the surveillance period (Figure 2). No significant changes were noted in the proportions of IPD caused by other PCV7 or PCV7-related serotypes or by non-PCV7 serogroups (Figure 2). Because 19A was the most common serotype isolated during 2005-2006, the antimicrobial susceptibility of 19A isolates was examined further (Table). The majority of 19A isolates were nonsusceptible to penicillin. During 2001-2006, significant increases were noted in the proportion of 19A isolates that were nonsusceptible to amoxicillin (minimum inhibitory concentration [MIC] >2 µg/mL), ceftriaxone (MIC >0.5 µg/mL), or three or more classes of antimicrobials (Table). Fourteen (15%) of 94 isolates of 19A were highly resistant to ceftriaxone (MIC >2 µg/mL), a first-line antimicrobial used for empiric bacterial meningitis treatment. No significant trends in the antimicrobial resistance of non-19A isolates were noted.

To describe the clinical features of and identify risk factors for infection with ceftriaxone-nonsusceptible serotype 19A, demographic and clinical characteristics of the 14 patients with highly ceftriaxone-resistant 19A IPD were compared with those of 73 patients with ceftriaxone-susceptible 19A IPD and 237 patients with ceftriaxone-susceptible non-19A IPD. The results indicated that patients with highly ceftriaxone-resistant 19A disease did not differ from the other groups with regard to established risk factors for antimicrobial-nonsusceptible pneumococcal disease, including age, sex, race/ethnicity, geographic region, degree of household crowding, or day care exposure.
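The MIC cutoffs cited above translate directly into a classification rule; a minimal sketch using only the ceftriaxone and amoxicillin breakpoints named in this report (real CLSI interpretation distinguishes intermediate from high-level resistance and varies by syndrome, so this is illustrative only):

```python
# Ceftriaxone breakpoints as cited above (MIC in µg/mL): >0.5 nonsusceptible,
# >2 highly resistant; highly resistant isolates are a subset of the
# nonsusceptible group. Amoxicillin: >2 nonsusceptible.
def classify_ceftriaxone(mic_ug_per_ml: float) -> str:
    if mic_ug_per_ml > 2.0:
        return "nonsusceptible (highly resistant)"
    if mic_ug_per_ml > 0.5:
        return "nonsusceptible"
    return "susceptible"

def classify_amoxicillin(mic_ug_per_ml: float) -> str:
    return "nonsusceptible" if mic_ug_per_ml > 2.0 else "susceptible"

# Hypothetical ceftriaxone MICs for three isolates:
for mic in (0.25, 1.0, 4.0):
    print(f"ceftriaxone MIC {mic}: {classify_ceftriaxone(mic)}")
```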
Underlying medical conditions that might predispose to IPD (e.g., sickle cell disease or congenital or acquired immune deficiencies) were not significantly more common among patients with highly ceftriaxone-resistant 19A IPD (three of 14 [21%]) than among patients in the ceftriaxone-susceptible 19A group (nine of 73 [12%]) or the non-19A group (33 of 237 [14%]). In addition, no significant differences among the three groups were detected in the proportion of patients with meningitis, pneumonia, or bacteremia without focus, case-fatality ratios, rates of hospitalization (79% versus 68% and 59%, respectively), or longer hospital stay (64% with >4 days versus 40% and 51%, respectively).

In Massachusetts and in other states, serotype 19A has emerged as the most common cause of IPD, and the proportion of 19A isolates that are nonsusceptible to commonly used antimicrobials is greater than the proportion for other serotypes (1,4). As a member of the same serogroup as the PCV7-type 19F, serotype 19A is considered a PCV7-related serotype. However, PCV7-induced antibodies to 19F are not active against serotype 19A (9). Concern exists that emergence of antimicrobial-nonsusceptible non-PCV7-type IPD could erode the success of PCV7 against pneumococcal infections.

The limited number of 19A cases restricted the ability of this study to identify risk factors or characteristic clinical features of antimicrobial-nonsusceptible 19A disease. However, the study found no evidence that infections caused by antimicrobial-nonsusceptible serotype 19A had different clinical syndromes or outcomes than infections caused by antimicrobial-susceptible 19A. Despite the lack of continuous surveillance data before PCV7 introduction, the overall stability of IPD incidence in Massachusetts during the study period indicates that the decline in IPD resulting from PCV7 introduction is being maintained (3,6). Furthermore, antimicrobial-nonsusceptible infections have not negated the positive impact of PCV7. Accordingly, vaccination with PCV7 remains a priority in Massachusetts. Nonetheless, the emergence of antimicrobial-nonsusceptible non-PCV7-type IPD is of concern. Continued surveillance for IPD in Massachusetts will provide data on the clinical impact of antimicrobial-nonsusceptible 19A infection and will be useful in development and monitoring of new pneumococcal vaccines.

The findings in this report support the continued empiric use of combination therapy with vancomycin and cefotaxime or ceftriaxone (the antimicrobials of choice to treat nonsusceptible pneumococci) for children with bacterial meningitis caused by, or possibly caused by, S. pneumoniae, and for critically ill children with nonmeningeal IPD (10). Antimicrobial-resistance data obtained through surveillance will continue to guide empiric treatment regimens for IPD in Massachusetts and provide data that can be used to tailor treatment recommendations to state-specific resistance patterns. State-based surveillance also will help detect trends in the emergence of nonsusceptible non-PCV7 IPD. The recent development of polymerase chain reaction (PCR)-based serotyping provides the opportunity for state public health laboratories and academic partners to identify IPD isolates by serotype. Serotyping based on the Quellung reaction requires expensive reagents and substantial training and experience to perform reliably.
In contrast, PCR-based serotyping can be performed using commercially available reagents and equipment and technical expertise already available in most state public health laboratories.¶ If applied in other states, these techniques might increase understanding of IPD trends that have occurred nationally since introduction of PCV7.

# Update: Prevention of Hepatitis A After Exposure to Hepatitis A Virus and in International Travelers. Updated Recommendations of the Advisory Committee on Immunization Practices (ACIP)

In 1995, highly effective inactivated hepatitis A vaccines were first licensed in the United States for preexposure prophylaxis against hepatitis A virus (HAV) among persons aged ≥2 years. In 2005, vaccine manufacturers received Food and Drug Administration approval for use of the vaccines in children aged 12-23 months (1). The Advisory Committee on Immunization Practices (ACIP) issued recommendations for preexposure use of hepatitis A vaccine in 1996, 1999, and 2006 (1). Currently, ACIP recommends hepatitis A vaccination of all children at age 12-23 months, catch-up vaccination of older children in selected areas, and vaccination of persons at increased risk for hepatitis A (e.g., travelers to endemic areas, users of illicit drugs, or men who have sex with men) (1). For decades, immune globulin (IG) has been recommended for prophylaxis after exposure to HAV (1). IG also has been recommended in addition to hepatitis A vaccine for preexposure prophylaxis for travelers to countries with high or intermediate hepatitis A endemicity who are scheduled to depart <4 weeks after receiving the initial vaccine dose. This report details updated recommendations, made by ACIP in June 2007, for prevention of hepatitis A after exposure to HAV and in departing international travelers (Box) and incorporates existing ACIP recommendations for prevention of hepatitis A (1).

# BOX. Summary of updated recommendations for prevention of hepatitis A after exposure to hepatitis A virus (HAV) and in departing international travelers

# Postexposure prophylaxis
• Persons who recently have been exposed to HAV and who previously have not received hepatitis A vaccine should be administered a single dose of single-antigen hepatitis A vaccine or immune globulin (IG) (0.02 mL/kg) as soon as possible.

# International travel
• All susceptible persons traveling to or working in countries that have high or intermediate hepatitis A endemicity should be vaccinated or receive IG before departure. Hepatitis A vaccine at the age-appropriate dose is preferred to IG.
• The first dose of hepatitis A vaccine should be administered as soon as travel is considered.
• One dose of single-antigen hepatitis A vaccine administered at any time before departure can provide adequate protection for most healthy persons.
• Older adults, immunocompromised persons, and persons with chronic liver disease or other chronic medical conditions planning to depart to an area in <2 weeks should receive the initial dose of vaccine and also simultaneously can be administered IG (0.02 mL/kg) at a separate anatomic injection site.
• Travelers who elect not to receive vaccine, are aged <12 months, or are allergic to a vaccine component should receive a single dose of IG (0.02 mL/kg), which provides effective protection for up to 3 months.

NOTE: Previous recommendations remain unchanged regarding 1) settings in which postexposure prophylaxis is indicated, and 2) timing of administration of postexposure prophylaxis.

# Rationale and Methods for Updated Recommendations

When administered within 2 weeks of last exposure, IG is 80%-90% effective in preventing clinical hepatitis A. Despite previously available limited data suggesting that hepatitis A vaccine might be efficacious when administered after exposure (2), in the absence of an appropriately
designed clinical trial comparing the postexposure efficacy of vaccine with that of IG, ACIP continued to recommend IG exclusively for postexposure use (1). Hepatitis A vaccine, if recommended for other reasons, could be given at the same time. ACIP was prompted to revisit these recommendations when findings became available from a randomized, double-blind noninferiority clinical trial comparing the efficacy of hepatitis A vaccine and IG after exposure to HAV (3). The results of this clinical trial were presented to ACIP at its February 2007 meeting. During April-May 2007, the ACIP Hepatitis Vaccines Workgroup considered these results in a series of teleconferences. During these teleconferences, the workgroup also considered the experiences of other countries (e.g., Canada and the United Kingdom) where hepatitis A vaccine has been recommended for postexposure use for >5 years and reviewed data on the immunogenicity of hepatitis A vaccine, the risk for HAV transmission in various settings, and factors known to affect the severity of hepatitis A. Additionally, the workgroup took into account potential advantages of vaccine, recognized disadvantages of IG, and the relevance of these data to existing recommendations for use of hepatitis A vaccine and IG in international travelers departing <4 weeks after receiving the first dose of hepatitis A vaccine. The workgroup also considered the likelihood that no additional postexposure efficacy data would become available, because of the difficulties of conducting postexposure efficacy studies of IG and vaccine. On the basis of this evidence and the expert opinions of workgroup members, other scientists, and feedback from ACIP partner organizations, the ACIP Hepatitis Vaccines Workgroup drafted a revision of the hepatitis A postexposure prophylaxis and travel recommendations. These updated recommendations were deliberated and approved by ACIP at the June 2007 meeting.

# I. Prevention of Hepatitis A After Exposure to HAV

Efficacy of hepatitis A vaccine versus IG. The clinical trial comparing hepatitis A vaccine with IG was conducted among 1,090 persons aged 2-40 years who were contacts of hepatitis A cases and susceptible to HAV infection. The trial compared the efficacy of hepatitis A vaccine and IG in preventing laboratory-confirmed symptomatic hepatitis A (i.e., the primary outcome) when administered <14 days after exposure to HAV (3). The primary outcome occurred among 25 (4.4%) of 568 recipients of hepatitis A vaccine and 17 (3.3%) of 522 IG recipients (relative risk: 1.35; 95% confidence interval [CI] = 0.70-2.67); the prespecified statistical criterion for noninferiority was met. The low frequency of study endpoints among IG and vaccine recipients indicated that both interventions provided good protection. The risk for hepatitis A in the vaccine group was never more than 1.5 percentage points greater than that for the IG group for the primary outcome or any secondary study endpoint. Assuming IG is 90% efficacious, the point estimate for hepatitis A vaccine efficacy relative to IG in preventing clinical hepatitis A was 86% (CI = 73%-93%) (3).
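The reported point estimates follow from a few lines of arithmetic; a minimal sketch (the CI below uses a standard log-scale Wald interval, which approximates but need not exactly match the published 0.70-2.67, because the trial's exact method is not described here):

```python
from math import exp, log, sqrt

# Trial arms as reported above: 25 of 568 vaccine recipients and 17 of 522
# IG recipients developed laboratory-confirmed symptomatic hepatitis A.
a, n1 = 25, 568  # vaccine: events, persons
b, n2 = 17, 522  # IG: events, persons

rr = (a / n1) / (b / n2)  # relative risk, ~1.35

# Approximate 95% CI for the RR on the log scale (Wald interval).
se_log_rr = sqrt(1 / a - 1 / n1 + 1 / b - 1 / n2)
ci_low = exp(log(rr) - 1.96 * se_log_rr)
ci_high = exp(log(rr) + 1.96 * se_log_rr)

# If IG is assumed 90% efficacious against clinical hepatitis A, the
# implied vaccine efficacy is 1 - rr * (1 - 0.90), the reported 86%.
vaccine_efficacy = 1 - rr * (1 - 0.90)

print(f"RR = {rr:.2f} (approx. 95% CI: {ci_low:.2f}-{ci_high:.2f})")
print(f"implied vaccine efficacy = {vaccine_efficacy:.0%}")
```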
This clinical trial suggested that the performance of vaccine, when administered <14 days after exposure, approaches that of IG in healthy children and adults aged <40 years. However, these findings might not be generalizable to all populations and settings. In contrast, years of experience have demonstrated that IG performs well as postexposure prophylaxis in all populations and settings.

Advantages of hepatitis A vaccine. The ability to use hepatitis A vaccine for postexposure prophylaxis provides numerous public health advantages, including the induction of active immunity and longer protection, greater ease of administration, higher acceptability and availability, and a cost per dose similar to that of IG. Also, the greater availability and ease of administration of hepatitis A vaccine might increase the number of persons at risk for infection who receive postexposure prophylaxis.

Risk for HAV transmission in various settings. The risk for transmission of HAV is influenced by host and environmental factors and varies considerably in different settings. For example, without postexposure prophylaxis, secondary attack rates of 15%-30% have been reported in households, with higher rates of transmission occurring from infected young children than from infected adolescents and adults (4-6). In contrast, attack rates among patrons of food service establishments who have been exposed to HAV-infected food handlers generally are low (7). Indeed, most food handlers with hepatitis A do not transmit HAV to exposed consumers or restaurant patrons (7). Given the wide range of HAV transmission risks in various settings for which postexposure prophylaxis is recommended, the magnitude of risk in each situation is an important factor in determining whether to use IG or vaccine.

Factors affecting clinical manifestations of hepatitis A. Older persons and persons with chronic liver disease are more likely to have severe manifestations of hepatitis A. Among older children and adults, infection typically is symptomatic, with jaundice occurring in >70% of patients (8). The case-fatality rate among cases reported through national surveillance reaches a high of 1.8% among persons aged ≥60 years, and fulminant hepatitis has been reported more frequently among older patients with hepatitis A (9). Although not at increased risk for HAV infection, persons with chronic liver disease also are at increased risk for fulminant hepatitis A (10). Because of the frequency of severe consequences, preventing hepatitis A among exposed older persons and persons with chronic liver disease is particularly vital. The performance of hepatitis A vaccine as postexposure prophylaxis in these groups was not assessed in the recent clinical trial and remains unknown. In contrast, IG has been recommended and used successfully for many years in these groups and in the general population.

These recommendations replace previous ACIP recommendations for postexposure prophylaxis with IG (1), incorporating new recommendations for use of single-antigen hepatitis A vaccine and updated recommendations for use of IG postexposure. These recommendations also incorporate and consolidate existing recommendations regarding settings for which postexposure prophylaxis is indicated, including close personal contact with a person with hepatitis A and selected circumstances in which hepatitis A is recognized in a food handler or in a child care center (1).
Also, the updated recommendations leave unchanged the recommendation that postexposure prophylaxis (using vaccine or IG) should be administered as soon as possible. No information exists regarding the efficacy of IG or vaccine if administered >2 weeks after exposure (1). The updated recommendations for use of hepatitis A vaccine alone for postexposure prophylaxis do not apply to the combination hepatitis A/hepatitis B vaccine because no data exist regarding the performance of the combination vaccine for prophylaxis after exposure to HAV. The concentration of HAV antigen in the currently available combination vaccine formulation is half that included in the single-antigen vaccine available from the same manufacturer (1).

Recommendations for postexposure prophylaxis with IG or hepatitis A vaccine. Persons who recently have been exposed to HAV and who previously have not received hepatitis A vaccine should be administered a single dose of single-antigen vaccine or IG (0.02 mL/kg) as soon as possible. Information about the relative efficacy of vaccine compared with IG postexposure is limited, and no data are available for persons aged >40 years or those with underlying medical conditions. Therefore, decisions to use vaccine or IG should take into account patient characteristics associated with more severe manifestations of hepatitis A, including older age and chronic liver disease. For healthy persons aged 12 months-40 years, single-antigen hepatitis A vaccine at the age-appropriate dose is preferred to IG because of vaccine advantages that include long-term protection and ease of administration. For persons aged >40 years, IG is preferred because of the absence of information regarding vaccine performance and the more severe manifestations of hepatitis A in this age group; vaccine can be used if IG cannot be obtained. The magnitude of the risk for HAV transmission from the exposure should be considered in decisions to use IG or vaccine. IG should be used for children aged <12 months, immunocompromised persons, persons who have had chronic liver disease diagnosed, and persons for whom vaccine is contraindicated. Persons administered IG for whom hepatitis A vaccine also is recommended for other reasons should receive a dose of vaccine simultaneously with IG. For persons who receive vaccine, the second dose should be administered according to the licensed schedule to complete the series. The efficacy of IG or vaccine when administered >2 weeks after exposure has not been established.

Close personal contact. Hepatitis A vaccine or IG should be administered to all previously unvaccinated household and sexual contacts of persons with serologically confirmed hepatitis A. In addition, persons who have shared illicit drugs with a person who has serologically confirmed hepatitis A should receive hepatitis A vaccine, or IG and hepatitis A vaccine simultaneously. Consideration also should be given to providing IG or hepatitis A vaccine to persons with other types of ongoing, close personal contact (e.g., regular babysitting) with a person with hepatitis A.

Child care centers. Hepatitis A vaccine or IG should be administered to all previously unvaccinated staff members and attendees of child care centers or homes if 1) one or more cases of hepatitis A are recognized in children or employees or 2) cases are recognized in two or more households of center attendees.
In centers that do not provide care to children who wear diapers, hepatitis A vaccine or IG need be administered only to classroom contacts of the index patient. When an outbreak occurs (i.e., hepatitis A cases in three or more families), hepatitis A vaccine or IG also should be considered for members of households that have children (center attendees) in diapers.

Common-source exposure. If a food handler receives a diagnosis of hepatitis A, vaccine or IG should be administered to other food handlers at the same establishment. Because common-source transmission to patrons is unlikely, hepatitis A vaccine or IG administration to patrons typically is not indicated but may be considered if 1) during the time when the food handler was likely to be infectious, the food handler both directly handled uncooked or cooked foods and had diarrhea or poor hygienic practices and 2) patrons can be identified and treated <2 weeks after the exposure. In settings in which repeated exposures to HAV might have occurred (e.g., institutional cafeterias), stronger consideration of hepatitis A vaccine or IG use could be warranted. In the event of a common-source outbreak, postexposure prophylaxis should not be provided to exposed persons after cases have begun to occur because the 2-week period after exposure during which IG or hepatitis A vaccine is known to be effective will have been exceeded.

Schools, hospitals, and work settings. Hepatitis A postexposure prophylaxis is not routinely indicated when a single case occurs in an elementary or secondary school or an office or other work setting, and the source of infection is outside the school or work setting. Similarly, when a person who has hepatitis A is admitted to a hospital, staff members should not routinely be administered hepatitis A postexposure prophylaxis; instead, careful hygienic practices should be emphasized. Hepatitis A vaccine or IG should be administered to persons who have close contact with index patients if an epidemiologic investigation indicates HAV transmission has occurred among students in a school or among patients or between patients and staff members in a hospital.

# II. Prevention of Hepatitis A Before International Travel

Hepatitis A vaccination is recommended to prevent hepatitis A among travelers to countries with high or intermediate hepatitis A endemicity. Previously, however, because few data were available regarding the immunogenicity of hepatitis A vaccine during the 4 weeks immediately following administration of the first dose, ACIP recommended that, for optimal protection, persons departing for a high-risk area <4 weeks after receiving the initial vaccine dose also could be administered IG (1). In June 2007, ACIP concluded that if hepatitis A vaccine alone can be recommended for prophylaxis after exposure to HAV, vaccine also should be recommended for healthy international travelers aged <40 years regardless of their scheduled dates for departure. Similar to the updated recommendations for postexposure prophylaxis, ACIP recognized that, for certain international travelers (e.g., older adults or those with underlying medical conditions), the performance of vaccine alone is unknown and clinical manifestations of hepatitis A tend to be more severe.
Hence, under the updated recommendations for international travelers, for optimal protection, IG can be considered in addition to vaccine for older adults, immunocompromised persons, and persons with chronic liver disease or other chronic medical conditions who are traveling to an area within 2 weeks. The following recommendation updates recommendations for prevention of hepatitis A among travelers departing in <4 weeks to areas where prophylaxis is recommended and consolidates other recommendations for prevention of hepatitis A among international travelers (1). These recommendations replace previous ACIP recommendations for preexposure protection against hepatitis A for travelers (1).

Recommendations for preexposure protection against hepatitis A for travelers. All susceptible persons traveling to or working in countries that have high or intermediate hepatitis A endemicity are at increased risk for HAV infection and should be vaccinated or receive IG before departure.* Hepatitis A vaccination at the age-appropriate dose is preferred to IG. Data are not available regarding the risk for hepatitis A for persons traveling to certain areas of the Caribbean, although prophylaxis should be considered if travel to areas with questionable sanitation is anticipated. Travelers to Australia, Canada, western Europe, Japan, or New Zealand (i.e., countries in which endemicity is low) are at no greater risk for infection than persons living or traveling in the United States.

The first dose of hepatitis A vaccine should be administered as soon as travel is considered. Based on limited data indicating equivalent postexposure efficacy of IG and vaccine among healthy persons aged <40 years, 1 dose of single-antigen hepatitis A vaccine administered at any time before departure can provide adequate protection for most healthy persons. However, no data are available for other populations or other hepatitis A vaccine formulations (e.g., Twinrix®). For optimal protection, older adults, immunocompromised persons, and persons with chronic liver disease or other chronic medical conditions planning to depart to an area in <2 weeks should receive the initial dose of vaccine and also simultaneously can be administered IG (0.02 mL/kg) at a separate anatomic injection site. Completion of the vaccine series according to the licensed schedule is necessary for long-term protection.

Travelers who elect not to receive vaccine, are aged <12 months, or are allergic to a vaccine component should receive a single dose of IG (0.02 mL/kg), which provides effective protection against hepatitis A for up to 3 months. Such travelers whose travel period is expected to be >2 months should be administered IG at 0.06 mL/kg; administration must be repeated if the travel period is >5 months. The full statement containing the licensed vaccination schedule and recommended doses of IG and vaccine has been published previously (1).
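The traveler guidance above reduces to a small decision rule; a minimal sketch, with hypothetical inputs and function names (the doses and thresholds are those stated above; this illustrates the logic only and is not a clinical tool):

```python
# Doses and thresholds from the recommendations above: IG 0.02 mL/kg covers
# up to 3 months of travel; 0.06 mL/kg is used for travel >2 months and is
# repeated if travel exceeds 5 months.
def ig_only_regimen(travel_months: float) -> str:
    dose = 0.06 if travel_months > 2 else 0.02
    repeat = "; repeat if travel exceeds 5 months" if travel_months > 5 else ""
    return f"IG {dose} mL/kg{repeat}"

def traveler_prophylaxis(age_months: int, healthy: bool,
                         departs_within_2_weeks: bool,
                         travel_months: float) -> str:
    if age_months < 12:
        # Infants aged <12 months receive IG instead of vaccine.
        return ig_only_regimen(travel_months)
    if healthy and age_months <= 40 * 12:
        return "single-antigen hepatitis A vaccine at any time before departure"
    if departs_within_2_weeks:
        # Older or chronically ill travelers departing in <2 weeks can receive
        # IG 0.02 mL/kg simultaneously with the first vaccine dose.
        return "hepatitis A vaccine + IG 0.02 mL/kg at a separate anatomic site"
    return "hepatitis A vaccine before departure"

print(traveler_prophylaxis(age_months=8, healthy=True,
                           departs_within_2_weeks=True, travel_months=4))
# -> IG 0.06 mL/kg
```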
In addition, 1,489 dead corvids and 435 other dead birds with WNV infection have been reported in 34 states and New York City during 2007. WNV infections have been reported in horses in 31 states, in three canines in Idaho and Oregon, in 26 squirrels in California and Oregon, and in three unidentified animal species in Idaho and Montana. WNV seroconversions have been reported in 637 sentinel chicken flocks in 11 states (Arizona, Arkansas, California, Delaware, Florida, Iowa, North Carolina, North Dakota, Oregon, Utah, and Virginia) and Puerto Rico. A total of 7,208 WNV-positive mosquito pools have been reported from 36 states, the District of Columbia, and New York City. Additional information about national WNV activity is available from CDC at http://www.cdc.gov/ncidod/dvbid/westnile/index.htm and at http://westnilemaps.usgs.gov.

# Notice to Readers

# Recommendations for Public Health Curriculum - Consensus Conference on Undergraduate Public Health Education, November 2006

The Institute of Medicine of the National Academies has recommended that all undergraduates have access to education in public health (1). Conference attendees agreed that undergraduate public health education can help produce an educated citizenry that is better prepared to cope with public health challenges ranging from acquired immunodeficiency syndrome to aging, avian influenza, and health-care costs. Conference working groups recommended that two introductory courses, Public Health 101 and Epidemiology 101, be offered by all U.S. colleges and universities to fulfill undergraduate social science and science distribution requirements, respectively. The groups further recommended that high-quality minors in public health be developed, with core courses, experience-based learning, and focus areas such as global health. The full recommendations from the conference have been published online by CCAS at http://www.ccas.net.

The modern era of undergraduate public health education began at Johns Hopkins University in the mid-1970s, when a public health major was approved through the School of Arts and Sciences in collaboration with what was then the School of Hygiene and Public Health. After slow growth in the 1980s, interest in undergraduate public health education grew rapidly in the 1990s. By the end of the 20th century, a substantial number of schools of public health were experimenting with undergraduate courses, minors, and majors. Programs in public health also were revising professionally focused curricula and developing broader approaches to undergraduate public health education (2,3). Recent surveys indicate that the majority of the approximately 40 accredited public health schools (ASPH, unpublished data, 2006) and approximately 60 accredited public health programs (Association for Prevention Teaching and Research, unpublished data, 2006) offer undergraduate courses in public health. However, public health courses are offered rarely among the 1,900 colleges and universities that have no public health schools or programs yet might choose to include public health in their arts and sciences curricula. The conference working groups recommended that Public Health 101 and Epidemiology 101 be designed to fit within the broadest possible array of arts and sciences curricula. Methods for integrating recommendations from the conference into the nation's long-term strategy for public health also were discussed.
These included 1) websites to provide information on undergraduate public health and share curriculum materials, 2) faculty development measures to assist colleges and universities in developing new introductory public health courses, 3) encouragement of applicants by health professions education and graduate public health degree programs to enroll in introductory undergraduate public health courses, 4) continued discussion of approaches for developing minors in public health and global health in institutions with and without schools or programs in public health, and 5) participation by public-health practitioners in experiential or service-learning and other components of undergraduate education.

# Changes for October 2007-September 2008

Age-Based Schedule (Figure 1)
• The yellow bar for varicella vaccine has been extended through all age groups, indicating that the vaccine is recommended for all adults without evidence of immunity to varicella.
• Zoster vaccine has been added, with a yellow bar indicating that the vaccine is recommended for persons aged ≥60 years.

Medical/Other Indications Schedule (Figure 2)
• The title has been changed to "Vaccines that might be indicated for adults based on medical and other indications," indicating that not all of the vaccines are recommended based on medical indications.
• The word "contraindicated" has been added to the red bars and removed from the legend.
• The "immunocompromising conditions" column heading has been shortened by removing the list of conditions.
• The "human immunodeficiency virus (HIV) infection" column has been moved next to the "immunocompromising conditions" column.
• The HIV column has been split into CD4+ T lymphocyte counts of <200 cells/µL and ≥200 cells/µL.
• The indication "recipients of clotting factor concentrates" has been removed from the column heading "chronic liver disease" because only one vaccine has this recommendation. The indication remains in the hepatitis A vaccine footnote.
• The varicella vaccine yellow bar has been extended to include persons infected with HIV who have CD4+ T lymphocyte counts of ≥200 cells/µL (1).
• The influenza vaccine yellow bar for "health-care personnel" indicates that health-care personnel can receive either trivalent inactivated influenza vaccine (TIV) or live, attenuated influenza vaccine (LAIV).
• The yellow bar for influenza vaccine has been extended to include persons in the "asplenia" risk group.
• The bar for meningococcal vaccine has been revised to indicate that 1 or more doses might be indicated.
• Zoster vaccine has been added to the schedule with a yellow bar to indicate that the vaccine is recommended for all indications except pregnancy, immunocompromising conditions, and HIV infection. A red bar, indicating a contraindication, has been inserted for pregnancy, immunocompromising conditions, and HIV infection with a CD4+ T lymphocyte count of <200 cells/µL.

Footnotes (Figures 1 and 2)
• Text for vaccine contraindications in pregnancy has been removed from the footnotes of human papillomavirus (HPV) (#2); measles, mumps, rubella (MMR) (#3); and varicella (#4) to be consistent with the intent of the footnotes to summarize the indications for vaccine use. Pregnancy contraindications are indicated with a red bar.
• The HPV footnote (#2) has been revised to clarify evidence of prior infection, clarify that HPV vaccine is not specifically indicated based on medical conditions, and indicate that efficacy and immunogenicity might be lower in persons with certain medical conditions.
• The varicella footnote (#4) has been revised to clarify that birth before 1980 for immunocompromised persons is not evidence of immunity and to add a requirement for evidence of immunity.
• The pneumococcal polysaccharide vaccine (PPV) footnote (#6) has been revised by adding chronic alcoholism and cerebrospinal fluid leaks and deleting the immunocompromising conditions.
• The hepatitis B footnote (#9) has been revised by removing persons who receive clotting factor concentrates as a risk group and by clarifying the special formulations dose.
• The meningococcal vaccine footnote (#10) has been revised to clarify that revaccination might be indicated for persons who remain at increased risk for infection.
• A footnote (#11) has been added to reflect ACIP recommendations for herpes zoster vaccination for persons aged ≥60 years.
• A footnote (#13) has been added to provide a reference for vaccines in persons with immunocompromising conditions.

Reference
1. CDC. Prevention of varicella: recommendations of the Advisory Committee on Immunization Practices (ACIP). MMWR 2007;56(No. RR-4).

The Recommended Adult Immunization Schedule has been approved by the Advisory Committee on Immunization Practices, the American Academy of Family Physicians, the American College of Obstetricians and Gynecologists, and the American College of Physicians. The standard MMWR footnote format has been modified for publication of this schedule.

# Tetanus, diphtheria, and acellular pertussis (Td/Tdap) vaccination

Adults with uncertain histories of a complete primary vaccination series with tetanus and diphtheria toxoid-containing vaccines should begin or complete a primary vaccination series. A primary series for adults is 3 doses of tetanus and diphtheria toxoid-containing vaccines; administer the first 2 doses at least 4 weeks apart and the third dose 6-12 months after the second. However, Tdap can substitute for any one of the doses of Td in the 3-dose primary series. The booster dose of tetanus and diphtheria toxoid-containing vaccine should be administered to adults who have completed a primary series if the last vaccination was received ≥10 years previously. Tdap or Td vaccine may be used, as indicated. If the person is pregnant and received the last Td vaccination ≥10 years previously, administer Td during the second or third trimester; if the person received the last Td vaccination <10 years previously, administer Tdap during the immediate postpartum period. A one-time administration of 1 dose of Tdap with an interval as short as 2 years from a previous Td vaccination is recommended for postpartum women, close contacts of infants aged <12 months, and all health-care workers with direct patient contact. In certain situations, Td can be deferred during pregnancy and Tdap substituted in the immediate postpartum period, or Tdap can be administered instead of Td to a pregnant woman after an informed discussion with the woman. Consult the ACIP statement for recommendations for administering Td as prophylaxis in wound management.

# Human papillomavirus (HPV) vaccination

HPV vaccination is recommended for all females aged ≤26 years who have not completed the vaccine series.
History of genital warts, abnormal Papanicolaou test, or positive HPV DNA test is not evidence of prior infection with all vaccine HPV types; HPV vaccination is still recommended for these persons. Ideally, vaccine should be administered before potential exposure to HPV through sexual activity; however, females who are sexually active should still be vaccinated. Sexually active females who have not been infected with any of the HPV vaccine types receive the full benefit of the vaccination. Vaccination is less beneficial for females who have already been infected with one or more of the HPV vaccine types. A complete series consists of 3 doses. The second dose should be administered 2 months after the first dose; the third dose should be administered 6 months after the first dose. Although HPV vaccination is not specifically recommended for females with the medical indications described in Figure 2, "Vaccines that might be indicated for adults based on medical and other indications," it is not a live-virus vaccine and can be administered. However, immune response and vaccine efficacy might be less than in persons who do not have the medical indications described or who are immunocompetent.

# Measles, mumps, rubella (MMR) vaccination

Measles component: adults born before 1957 can be considered immune to measles. Adults born during or after 1957 should receive ≥1 dose of MMR unless they have a medical contraindication, documentation of ≥1 dose, history of measles based on health-care provider diagnosis, or laboratory evidence of immunity. A second dose of MMR is recommended for adults who 1) have been recently exposed to measles or are in an outbreak setting; 2) have been previously vaccinated with killed measles vaccine; 3) have been vaccinated with an unknown type of measles vaccine during 1963-1967; 4) are students in postsecondary educational institutions; 5) work in a health-care facility; or 6) plan to travel internationally.

Mumps component: adults born before 1957 can generally be considered immune to mumps. A second dose of MMR is recommended for adults who 1) are in an age group that is affected during a mumps outbreak; 2) are students in postsecondary educational institutions; 3) work in a health-care facility; or 4) plan to travel internationally. For unvaccinated health-care workers born before 1957 who do not have other evidence of mumps immunity, consider administering 1 dose on a routine basis and strongly consider administering a second dose during an outbreak.

Rubella component: administer 1 dose of MMR vaccine to women whose rubella vaccination history is unreliable or who lack laboratory evidence of immunity. For women of childbearing age, regardless of birth year, routinely determine rubella immunity and counsel women regarding congenital rubella syndrome. Women who do not have evidence of immunity should receive MMR vaccine on completion or termination of pregnancy and before discharge from the health-care facility.

# Varicella vaccination

All adults without evidence of immunity to varicella should receive 2 doses of single-antigen varicella vaccine unless they have a medical contraindication.
Special consideration should be given to those who 1) have close contact with persons at high risk for severe disease (e.g., health-care personnel and family contacts of immunocompromised persons) or 2) are at high risk for exposure or transmission (e.g., teachers; child care employees; residents and staff members of institutional settings, including correctional institutions; college students; military personnel; adolescents and adults living in households with children; nonpregnant women of childbearing age; and international travelers).

Evidence of immunity to varicella in adults includes any of the following: 1) documentation of 2 doses of varicella vaccine at least 4 weeks apart; 2) U.S.-born before 1980 (although for health-care personnel and pregnant women, birth before 1980 should not be considered evidence of immunity); 3) history of varicella based on diagnosis or verification of varicella by a health-care provider (for a patient reporting a history of or presenting with an atypical case, a mild case, or both, health-care providers should seek either an epidemiologic link to a typical varicella case or to a laboratory-confirmed case, or evidence of laboratory confirmation, if it was performed at the time of acute disease); 4) history of herpes zoster based on health-care provider diagnosis; or 5) laboratory evidence of immunity or laboratory confirmation of disease. Assess pregnant women for evidence of varicella immunity. Women who do not have evidence of immunity should receive the first dose of varicella vaccine upon completion or termination of pregnancy and before discharge from the health-care facility. The second dose should be administered 4-8 weeks after the first dose.

# Influenza vaccination

Medical indications: chronic disorders of the cardiovascular or pulmonary systems, including asthma; chronic metabolic diseases, including diabetes mellitus, renal or hepatic dysfunction, hemoglobinopathies, or immunosuppression (including immunosuppression caused by medications or human immunodeficiency virus [HIV]); any condition that compromises respiratory function or the handling of respiratory secretions or that can increase the risk of aspiration (e.g., cognitive dysfunction, spinal cord injury, or seizure disorder or other neuromuscular disorder); and pregnancy during the influenza season. No data exist on the risk for severe or complicated influenza disease among persons with asplenia; however, influenza is a risk factor for secondary bacterial infections that can cause severe disease among persons with asplenia.

Occupational indications: health-care personnel and employees of long-term-care and assisted-living facilities.

Other indications: residents of nursing homes and other long-term-care and assisted-living facilities; persons likely to transmit influenza to persons at high risk (e.g., in-home household contacts and caregivers of children aged 0-59 months, or persons of all ages with high-risk conditions); and anyone who would like to be vaccinated. Healthy, nonpregnant adults aged ≤49 years without high-risk medical conditions who are not contacts of severely immunocompromised persons in special care units can receive either intranasally administered live, attenuated influenza vaccine (FluMist®) or inactivated vaccine. Other persons should receive the inactivated vaccine.
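Several footnotes above specify dose intervals as month offsets from the first dose (e.g., the HPV series at 0, 2, and 6 months); a minimal sketch of that date arithmetic, with an illustrative start date and a hand-rolled month-addition helper (Python's standard library has no direct month arithmetic):

```python
from datetime import date

def add_months(d: date, months: int) -> date:
    """Add calendar months to a date, clamping the day for short months."""
    total = d.year * 12 + (d.month - 1) + months
    year, month_index = divmod(total, 12)
    leap = year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)
    days_in_month = [31, 29 if leap else 28, 31, 30, 31, 30,
                     31, 31, 30, 31, 30, 31][month_index]
    return date(year, month_index + 1, min(d.day, days_in_month))

# HPV series per the footnote above: dose 2 at 2 months and dose 3 at
# 6 months after dose 1. The start date is illustrative.
first_dose = date(2007, 10, 19)
print("HPV dose 2 due:", add_months(first_dose, 2))  # 2007-12-19
print("HPV dose 3 due:", add_months(first_dose, 6))  # 2008-04-19
```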
# Pneumococcal polysaccharide vaccination

Medical indications: chronic pulmonary disease (excluding asthma); chronic cardiovascular diseases; diabetes mellitus; chronic liver diseases, including liver disease as a result of alcohol abuse (e.g., cirrhosis); chronic alcoholism, chronic renal failure, or nephrotic syndrome; functional or anatomic asplenia (e.g., sickle cell disease or splenectomy [if elective splenectomy is planned, vaccinate at least 2 weeks before surgery]); immunosuppressive conditions; and cochlear implants and cerebrospinal fluid leaks. Vaccinate as close to HIV diagnosis as possible.

Other indications: Alaska Natives and certain American Indian populations and residents of nursing homes or other long-term-care facilities.

# Revaccination with pneumococcal polysaccharide vaccine

One-time revaccination after 5 years for persons with chronic renal failure or nephrotic syndrome; functional or anatomic asplenia (e.g., sickle cell disease or splenectomy); or immunosuppressive conditions. For persons aged ≥65 years, one-time revaccination if they were vaccinated ≥5 years previously and were aged <65 years at the time of primary vaccination.

# Hepatitis A vaccination

Medical indications: persons with chronic liver disease and persons who receive clotting factor concentrates.

Behavioral indications: men who have sex with men and persons who use illegal drugs.

Occupational indications: persons working with hepatitis A virus (HAV)-infected primates or with HAV in a research laboratory setting.

Other indications: persons traveling to or working in countries that have high or intermediate endemicity of hepatitis A (a list of countries is available at http://wwwn.cdc.gov/travel/contentdiseases.aspx) and any person seeking protection from HAV infection. Single-antigen vaccine formulations should be administered in a 2-dose schedule at either 0 and 6-12 months (Havrix®) or 0 and 6-18 months (Vaqta®). If the combined hepatitis A and hepatitis B vaccine (Twinrix®) is used, administer 3 doses at 0, 1, and 6 months.

# Hepatitis B vaccination

Medical indications: persons with end-stage renal disease, including patients receiving hemodialysis; persons seeking evaluation or treatment for a sexually transmitted disease (STD); persons with HIV infection; and persons with chronic liver disease.

Occupational indications: health-care personnel and public-safety workers who are exposed to blood or other potentially infectious body fluids.

Behavioral indications: sexually active persons who are not in a long-term, mutually monogamous relationship (e.g., persons with more than one sex partner during the previous 6 months); current or recent injection-drug users; and men who have sex with men.

Other indications: household contacts and sex partners of persons with chronic hepatitis B virus (HBV) infection; clients and staff members of institutions for persons with developmental disabilities; international travelers to countries with high or intermediate prevalence of chronic HBV infection (a list of countries is available at http://wwwn.cdc.gov/travel/contentdiseases.aspx); and any adult seeking protection from HBV infection.
Settings where hepatitis B vaccination is recommended for all adults: STD treatment facilities; HIV testing and treatment facilities; facilities providing drug-abuse treatment and prevention services; health-care settings targeting services to injection-drug users or men who have sex with men; correctional facilities; end-stage renal disease programs and facilities for chronic hemodialysis patients; and institutions and nonresidential day care facilities for persons with developmental disabilities.

Special formulation indications: for adult patients receiving hemodialysis and other immunocompromised adults, 1 dose of 40 µg/mL (Recombivax HB®) or 2 doses of 20 µg/mL (Engerix-B®), administered simultaneously.

# Meningococcal vaccination

Medical indications: adults with anatomic or functional asplenia or terminal complement component deficiencies. Other indications: first-year college students living in dormitories; microbiologists who are routinely exposed to isolates of Neisseria meningitidis; military recruits; and persons who travel to or live in countries in which meningococcal disease is hyperendemic or epidemic (e.g., the "meningitis belt" of sub-Saharan Africa during the dry season [December-June]), particularly if their contact with local populations will be prolonged. Vaccination is required by the government of Saudi Arabia for all travelers to Mecca during the annual Hajj. Meningococcal conjugate vaccine is preferred for adults with any of the preceding indications who are aged <55 years, although meningococcal polysaccharide vaccine (MPSV4) is an acceptable alternative. Revaccination after 3-5 years might be indicated for adults previously vaccinated with MPSV4 who remain at increased risk for infection (e.g., persons residing in areas in which disease is epidemic).

# Herpes zoster vaccination

A single dose of zoster vaccine is recommended for adults aged ≥60 years regardless of whether they report a prior episode of herpes zoster. Persons with chronic medical conditions may be vaccinated unless a contraindication or precaution exists for their condition.

# Selected conditions for which Haemophilus influenzae type b (Hib) vaccine may be used

Hib conjugate vaccines are licensed for children aged 6 weeks-71 months. No efficacy data are available on which to base a recommendation concerning use of Hib vaccine for older children and adults with the chronic conditions associated with an increased risk for Hib disease. However, studies suggest good immunogenicity in patients who have sickle cell disease, leukemia, or HIV infection or who have had splenectomies; administering vaccine to these patients is not contraindicated.

# Immunocompromising conditions

Inactivated vaccines generally are acceptable (e.g., pneumococcal, meningococcal, and influenza [trivalent inactivated influenza vaccine]), and live vaccines generally are avoided, in persons with immune deficiencies or immunosuppressive conditions. Information on specific conditions is available at http://www.cdc.gov/vaccines/pubs/acip-list.htm.

This schedule indicates the recommended age groups and medical indications for routine administration of currently licensed vaccines for persons aged ≥19 years, as of October 1, 2007. Licensed combination vaccines may be used whenever any components of the combination are indicated and when the vaccine's other components are not contraindicated.
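Returning to the pneumococcal revaccination criteria above: they reduce to a one-time decision rule. The following sketch (hypothetical inputs; illustrative only, not a clinical tool) captures that logic:

```python
def psv23_revaccination_due(age: int, age_at_primary_dose: int,
                            years_since_primary_dose: float,
                            high_risk_condition: bool,
                            already_revaccinated: bool) -> bool:
    """One-time pneumococcal polysaccharide revaccination rule as
    summarized above. high_risk_condition covers chronic renal failure,
    nephrotic syndrome, asplenia, and immunosuppressive conditions."""
    if already_revaccinated:
        return False  # revaccination is one-time only
    # High-risk persons: revaccinate once, 5 years after the first dose.
    if high_risk_condition and years_since_primary_dose >= 5:
        return True
    # Persons aged >=65: revaccinate once if the primary dose was given
    # >=5 years earlier and before age 65.
    return (age >= 65 and years_since_primary_dose >= 5
            and age_at_primary_dose < 65)

# Example: a 70-year-old first vaccinated at age 62, 8 years ago.
print(psv23_revaccination_due(70, 62, 8, False, False))  # True
```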
For detailed recommendations on all vaccines, including those used primarily for travelers or those issued during the year, consult the manufacturers' package inserts and the complete statements from the Advisory Committee on Immunization Practices (available at http://www.cdc.gov/vaccines/pubs/acip-list.htm). Report all clinically significant postvaccination reactions to the Vaccine Adverse Event Reporting System (VAERS). Reporting forms and instructions on filing a VAERS report are available at http://www.vaers.hhs.gov or by telephone, 800-822-7967. # Information on how to file a Vaccine Injury Compensation
# Immunization of Adolescents

Recommendations of the Advisory Committee on Immunization Practices, the American Academy of Pediatrics, the American Academy of Family Physicians, and the American Medical Association

Summary

This report concerning the immunization of adolescents (i.e., persons 11-21 years of age, as defined by the American Medical Association [AMA] and the American Academy of Pediatrics [AAP]) is a supplement to previous publications (i.e., MMWR 1994;43(No. RR-1):1-38; the AAP 1994 Red Book: Report of the Committee on Infectious Diseases; Summary of Policy Recommendations for Periodic Health Examination, August 1996, from the American Academy of Family Physicians [AAFP]; and AMA Guidelines for Adolescent Preventive Services [GAPS]: Recommendations and Rationale). This report presents a new strategy to improve the delivery of vaccination services to adolescents and to integrate recommendations for vaccination with other preventive services provided to adolescents. This new strategy emphasizes vaccination of adolescents 11-12 years of age by establishing a routine visit to their health-care providers. Specifically, the purposes of this visit are to a) vaccinate adolescents who have not been previously vaccinated with varicella virus vaccine, hepatitis B vaccine, or the second dose of the measles, mumps, and rubella (MMR) vaccine; b) provide a booster dose of tetanus and diphtheria toxoids; c) administer other vaccines that may be recommended for certain adolescents; and d) provide other recommended preventive services. The recommendations for vaccination of adolescents are based on new or current information for each vaccine. The most recent recommendations from ACIP, AAP, AAFP, and AMA concerning specific vaccines and delivery of preventive services should be consulted for details (Exhibit 2).

# BACKGROUND

In the United States, vaccination programs that focus on infants and children have decreased the occurrence of many childhood vaccine-preventable diseases (1). However, many adolescents (i.e., persons 11-21 years of age, as defined by AMA and AAP) and young adults (i.e., persons 22-39 years of age) continue to be adversely affected by vaccine-preventable diseases (e.g., varicella, hepatitis B, measles, and rubella), partially because vaccination programs have not focused on improving vaccination coverage among adolescents. These recommendations for the immunization of adolescents were developed to improve vaccination coverage among adolescents and focus on establishing a routine visit to health-care providers (i.e., providers) for adolescents ages 11-12 years. Such a visit provides the opportunity for a) ensuring vaccination of those adolescents not previously vaccinated with hepatitis B vaccine, varicella virus vaccine (if indicated), or the second dose of the measles, mumps, and rubella (MMR) vaccine; b) administering a tetanus and diphtheria toxoid (Td) booster; c) administering other vaccines that may be recommended for certain adolescents; and d) providing other recommended preventive services. Flexibility in scheduling vaccinations is an important factor for improving vaccination coverage among adolescents. Because multiple-dose vaccines or simultaneous administration of several vaccines may be indicated for adolescents (Table 1), providers may need to be flexible in determining which vaccines to administer during the initial visit and which to administer on return visits.
# IMMUNIZATION AS A PREVENTIVE HEALTH SERVICE FOR ADOLESCENTS

Administration of vaccinations should be integrated with other preventive services provided to adolescents. The importance of improving vaccination levels and of providing other preventive services indicated for adolescents and young adults has been emphasized recently by many national organizations (Exhibit 1). In particular, AAP has advocated and provided specific recommendations for the vaccination of adolescents (2,3). Similarly, AMA and the Health Resources and Services Administration (HRSA) have proposed comprehensive recommendations that provide a framework for organizing the content and delivery of preventive health services (including vaccinations) for adolescents (4,5). The United States Preventive Services Task Force (USPSTF) has advocated specific vaccinations for adolescents that are based on the patient's age and risk factors (6). In addition, the American Academy of Family Physicians (AAFP) has recommended delivery of preventive services based on reviews by USPSTF and the AAFP Commission on Clinical Policies and Research (7). Guidelines recommended by these organizations include the delivery of preventive health services during a series of regular visits by adolescents to providers. These services include specific guidance on health behaviors; screening for biomedical, behavioral, and emotional conditions; and delivery of other health services, including vaccinations. The recommendations for vaccination of adolescents adopted by the Advisory Committee on Immunization Practices (ACIP), AAP, AAFP, and AMA are consistent with those of other groups that promote preventive health services for adolescents.

Footnotes to Table 1:

The second dose of hepatitis B vaccine is recommended at age 1 month and the third dose (Hep B-3) at age 6 months.
§ Adolescents who have not received three doses of hepatitis B vaccine should initiate or complete the series at ages 11-12 years. The second dose should be administered at least 1 month after the first dose, and the third dose should be administered at least 4 months after the first dose and at least 2 months after the second dose.
¶ The fourth dose of diphtheria and tetanus toxoids and pertussis vaccine (DTP) may be administered at age 12 months if at least 6 months have elapsed since the third dose of DTP. Diphtheria and tetanus toxoids and acellular pertussis vaccine (DTaP) is licensed for the fourth and/or fifth vaccine dose(s) for children ages ≥15 months and may be preferred for these doses in this age group. Tetanus and diphtheria toxoids, adsorbed, for adult use (Td) is recommended at ages 11-12 years if at least 5 years have elapsed since the last dose of DTP, DTaP, or diphtheria and tetanus toxoids, adsorbed, for pediatric use (DT).
** Three H. influenzae type b (Hib) conjugate vaccines are licensed for infant use. If PedvaxHIB® (Merck & Co.) Haemophilus b conjugate vaccine (Meningococcal Protein Conjugate) (PRP-OMP) is administered at ages 2 and 4 months, a dose at 6 months is not required. After completing the primary series, any Hib conjugate vaccine may be used as a booster.
†† Oral poliovirus vaccine (OPV) is recommended for routine vaccination of infants. Inactivated poliovirus vaccine (IPV) is recommended for persons (or household contacts of persons) with a congenital or acquired immune-deficiency disease or an altered immune status resulting from disease or immunosuppressive therapy and is an acceptable alternative for other persons. The primary three-dose series for IPV should be given with a minimum interval of 4 weeks between the first and second doses and 6 months between the second and third doses.
§§ The second dose of measles, mumps, and rubella vaccine (MMR) is routinely recommended at ages 4-6 years or at ages 11-12 years but may be administered at any visit provided at least 1 month has elapsed since receipt of the first dose.
¶¶ Varicella virus vaccine (Var) can be administered to susceptible children and adolescents at any time after age 12 months. Unvaccinated adolescents who lack a reliable history of chickenpox should be vaccinated at ages 11-12 years.
Use of trade names and commercial sources is for identification only and does not imply endorsement by the Public Health Service or the U.S. Department of Health and Human Services.
Source: Advisory Committee on Immunization Practices, American Academy of Pediatrics, and American Academy of Family Physicians.

# RATIONALE FOR VACCINE ADMINISTRATION DURING AN ADOLESCENT'S VISIT TO PROVIDERS

# Hepatitis B Vaccine

In the United States, most persons infected with hepatitis B virus (HBV) acquired their infection as young adults or adolescents. HBV is transmitted primarily through sexual contact, injecting-drug use, regular household contact with a chronically infected person, or occupational exposure. However, the source of infection is unknown for approximately one third of persons who have acute hepatitis B (8). A comprehensive vaccination strategy to eliminate transmission of HBV through routine vaccination of infants, adolescents ages 11-12 years, and adolescents who are at increased risk for HBV infection has been adopted (3,7,9,10). Any reduction in HBV-related liver disease resulting from universal vaccination of infants cannot be expected until vaccinated children reach adolescence and adulthood. Routine vaccination of adolescents 11-12 years of age who have not been vaccinated previously is an effective strategy for more rapidly lowering the incidence of HBV infection and assisting in the elimination of HBV transmission in the United States (3,10). An adolescent's visit at ages 11-12 years gives the provider an opportunity to initiate protection against HBV before the adolescent begins high-risk behaviors. Unvaccinated adolescents >12 years of age who are at increased risk for HBV infection also should be vaccinated (10). Such adolescents should be vaccinated against hepatitis B if they a) have multiple sexual partners (i.e., more than one partner in a 6-month period), b) use illegal injecting drugs, c) are males who have sex with males, d) have sexual or regular household contact with a person who is positive for hepatitis B surface antigen, e) are health-care or public-safety workers who are occupationally exposed to human blood, f) are undergoing hemodialysis, g) are residents of institutions for the developmentally disabled, h) are administered clotting factors, or i) travel to an area of high or intermediate HBV endemicity for ≥6 months. In addition, AAP recommends that providers administer hepatitis B vaccine to all adolescents for whom they provide services (3). Adolescents can be vaccinated against hepatitis B in various settings, including schools and providers' offices. In the United States, school-based demonstration projects to vaccinate adolescents against hepatitis B have achieved >70% vaccination coverage (11-13).
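To make the screening criteria above concrete, here is a minimal sketch (all names hypothetical; a real record system would capture far more detail) that flags adolescents for hepatitis B vaccination:

```python
# Risk categories a)-i) listed above, encoded as hypothetical flags.
HBV_RISK_FACTORS = {
    "multiple_sex_partners",          # >1 partner in a 6-month period
    "injecting_drug_use",
    "male_sex_with_males",
    "contact_hbsag_positive",         # sexual or regular household contact
    "occupational_blood_exposure",    # health-care/public-safety workers
    "hemodialysis",
    "institutionalized_developmental_disability",
    "clotting_factor_recipient",
    "long_term_travel_high_endemicity",  # >=6 months in endemic area
}

def recommend_hepb(age: int, doses_received: int, risk_factors: set[str]) -> bool:
    """Flag an adolescent for hepatitis B vaccination: all 11-12-year-olds
    who have not completed the series, and older unvaccinated adolescents
    with any listed risk factor (illustrative sketch only)."""
    if doses_received >= 3:
        return False  # series complete
    if age in (11, 12):
        return True   # routine catch-up at the 11-12-year visit
    return age > 12 and bool(risk_factors & HBV_RISK_FACTORS)
```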
Adolescents should receive three age-appropriate doses of hepatitis B vaccine (Table 2). Hepatitis B vaccine is highly immunogenic in adolescents and young adults when administered in varying three-dose schedules (14,15). A schedule of 0, 1-2, and 4-6 months is recommended. Flexibility in scheduling is an important factor for achieving high rates of vaccination in adolescents. When the vaccination schedule is interrupted, the vaccine series does not require reinitiation (CDC, unpublished data; 16). Studies of "off-schedule" vaccinations indicate that if the series is interrupted after the first dose, the second dose should be administered as soon as possible, and the second and third doses should be separated by an interval of at least 2 months. If only the third dose is delayed, it should be administered as soon as possible. Intervals of up to 1 year between administration of the first and third doses induce excellent antibody responses (15), and studies are in progress to evaluate longer intervals.

# Measles, Mumps, and Rubella Vaccine

The sustained decline of measles in the United States has been associated with a shift in occurrence from children to infants and young adults. During 1990-1994, 47% of reported cases occurred in persons ages ≥10 years, compared with only 10% during 1960-1964 (CDC, unpublished data; 17). During the 1980s, outbreaks of measles occurred among school-age children in schools with measles-vaccination levels of ≥98% (18). Primary vaccine failure was considered the principal contributing factor in these outbreaks. As a result, beginning in 1989, a two-dose measles-vaccination schedule for students in primary schools, secondary schools, and colleges and universities was recommended (18-20). This two-dose vaccination schedule provides protection to ≥98% of persons vaccinated. Administration of a second dose of MMR at entry to elementary school (i.e., at ages 4-6 years) or junior high or middle school (i.e., at ages 11-12 years) is recommended (21-23). State policies for implementing the two-dose strategy have varied; some states require the second dose for entry into primary school, and others require it for entry into middle school. Because the recommendation for a second dose of MMR was made in 1989, many children born before 1985 (and some children born after 1985, depending on local policy) may not have received the second vaccine dose. The routine visit to providers at ages 11-12 years affords an opportunity to administer a second dose of MMR to adolescents who have not received two doses of MMR at ≥12 months of age. MMR should not be given to adolescents who are known to be pregnant or to adolescents who are considering becoming pregnant within 3 months of vaccination. Asking adolescents if they are pregnant, excluding those who say they are, and explaining the theoretical risk of fetal infection to the other female adolescents are recommended precautions.

# Tetanus and Diphtheria Toxoids

Although booster doses of Td are recommended at 10-year intervals, no special strategies have been developed to ensure that this recommendation is fully implemented. During 1991-1994, 191 (95%) of the 201 reported cases of tetanus in the United States occurred in persons ages ≥20 years, and nine (45%) of the 20 reported cases of diphtheria occurred in persons ages ≥20 years (CDC, unpublished data). Data from a serosurvey conducted in Minnesota indicated that 62% of persons 18-39 years of age lacked adequate protection against diphtheria (24).
Epidemic diphtheria has reemerged in the New Independent States (NIS) of the former Soviet Union and has resulted in >47,000 cases reported in 1994 and >50,000 in 1995 (CDC, unpublished data; 25). Although no imported cases were reported in the United States during those years, ≥20 cases of diphtheria were reported in Europe, and two cases occurred among U.S. citizens who resided or were traveling in the NIS. This threat of infection underscores the importance of maintaining high levels of diphtheria immunity in the U.S. population. Recent data from CDC's National Health and Nutrition Examination Survey (NHANES III) suggested that immunity to tetanus varied with age (26). Among children ages 6-16 years, 82% had protective levels of tetanus antitoxin (defined as a serum level >0.15 IU per mL). Immunity decreased at ages 9-13 years, with 15%-36% of these persons unprotected (CDC, unpublished data). Immunity also varied inversely with the length of time since the last tetanus vaccination. Among children who were reported as being vaccinated 6-10 years before the serologic survey, 28% lacked immunity to tetanus, compared with 14% of those vaccinated 1-5 years before the survey and 5% of those vaccinated ≤1 year before the survey (27). A Td booster is essential to ensure long-lasting immunity against tetanus. Lowering the age for administration of the first Td booster from ages 14-16 years to ages 11-12 years should increase compliance and thereby reduce the susceptibility of adolescents to tetanus and diphtheria. Administering the Td booster at ages 11-12 years provides a rationale for a routine visit to providers for adolescents, regardless of their need for other vaccines. Data suggest there should be no increased risk for serious side effects to Td when the first booster dose is administered at ages 11-12 years rather than at ages 14-16 years (CDC, unpublished data). With the exception of the Td booster at ages 11-12 years, routine boosters should be administered every 10 years. If a dose of Td has been administered after receipt of tetanus- and diphtheria-containing vaccine at ages 4-6 years and before the routine Td booster at ages 11-12 years, the dose at ages 11-12 years is not indicated. The next dose should follow the last dose by 10 years, unless specifically indicated because of a tetanus-prone injury (i.e., persons who sustain a tetanus-prone injury should be administered a Td booster immediately if >5 years have elapsed since their last Td booster).

# Varicella Virus Vaccine

Before varicella virus vaccine became available in 1995, most persons in the United States contracted varicella (i.e., chickenpox), resulting in an estimated 4 million infections annually. At present, approximately 20% of adolescents ages 11-12 years remain susceptible to varicella (CDC, unpublished data). The rate of complications, including death, is greater for persons who contract chickenpox when they are ≥15 years of age. Varicella virus vaccine should be administered to adolescents ages 11-12 years if they have not been vaccinated and do not have a reliable history of chickenpox (7,27,28). At ages 11-12 years, providers should assess the adolescent's need for varicella virus vaccine and administer the vaccine to those who are eligible. When administered to children <13 years of age, a single dose of vaccine induces protective antibodies in >95% of recipients. For susceptible persons ≥13 years of age, two doses separated by 4-8 weeks are recommended.
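The age-based varicella dosing rule above can be summarized in a few lines; this sketch (illustrative only) assumes the only inputs are age, prior dose count, and disease history:

```python
def varicella_doses_needed(age_years: int, prior_doses: int,
                           reliable_chickenpox_history: bool) -> int:
    """Doses of varicella vaccine still needed under the schedule above:
    a reliable history of chickenpox means none; children <13 years need
    one dose; susceptible persons >=13 years need two doses, given
    4-8 weeks apart. A sketch, not a clinical decision tool."""
    if reliable_chickenpox_history:
        return 0
    required = 1 if age_years < 13 else 2
    return max(0, required - prior_doses)

print(varicella_doses_needed(12, 0, False))  # 1
print(varicella_doses_needed(15, 0, False))  # 2
```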
Varicella vaccine should not be given to adolescents who are known to be pregnant or to adolescents who are considering becoming pregnant within 1 month of vaccination. Asking adolescents if they are pregnant, excluding those who say they are, and explaining the potential effects of the vaccine virus on the fetus to the other female adolescents are recommended precautions.

# OTHER VACCINES INDICATED FOR CERTAIN ADOLESCENTS

# Influenza Vaccine

More than 8 million children and adolescents in the United States, including 2.2 million persons ages 10-18 years who have asthma (CDC, unpublished data), have at least one medical condition that places them at high risk for complications associated with influenza. Such adolescents should be vaccinated annually for influenza; however, few actually receive the vaccine. Adolescents at high risk who should be administered influenza vaccine annually are those who a) have chronic disorders of the pulmonary system (including those who have asthma) or the cardiovascular system; b) reside in chronic-care facilities that house persons of any age who have chronic medical conditions; c) have required regular medical follow-up or hospitalization during the preceding year because of chronic metabolic disease(s) (including those who have diabetes mellitus), renal dysfunction, hemoglobinopathy, or immunosuppression (including those who have immunosuppression caused by medication); or d) receive long-term aspirin therapy and, therefore, may be at risk for contracting Reye syndrome after influenza. In addition, adolescents who have close contact* with persons who meet any of these conditions or with persons ≥65 years of age should be administered influenza vaccine annually. Students in institutional settings (e.g., those residing in dormitories) should be encouraged to receive influenza vaccine annually to minimize any disruption of routine activities during epidemics. In addition, any adolescent may be vaccinated annually to reduce the likelihood of acquiring influenza infection. Administration of influenza vaccine to adolescents ages 11-12 years may assist in establishing the lifetime practice of annual influenza vaccination in persons for whom it is indicated. Providers should administer influenza vaccine to adolescents who visit them for routine care if vaccination is indicated and if their visit occurs during the time of year appropriate for influenza vaccination (i.e., September-December); such adolescents should be scheduled for an additional visit if they are seen at a time of year when vaccination is not indicated. Adolescents may receive influenza vaccine at the same time they receive other recommended vaccines. Additional strategies are needed to improve delivery of influenza vaccine to adolescents for whom it is indicated.

# Pneumococcal Polysaccharide Vaccine

Approximately 340,000 persons 2-18 years of age have chronic illnesses associated with increased risk for pneumococcal disease or its complications and should receive the 23-valent pneumococcal vaccine. Adolescents who should be vaccinated include those who have a) anatomic or functional asplenia (including sickle cell disease), b) nephrotic syndrome, c) cerebrospinal-fluid leaks, or d) conditions associated with immunosuppression (including human immunodeficiency virus [HIV]).
Revaccination is recommended for adolescents at highest risk for serious pneumococcal infection and those likely to experience rapid decline in pneumococcal-antibody levels, provided ≥5 years have passed since administration of the first dose of pneumococcal vaccine. The possible need for subsequent doses following revaccination requires further study. Persons at highest risk and persons likely to have a rapid decline in pneumococcal-antibody levels include those who have a) splenic dysfunction or anatomic asplenia, b) sickle cell disease, c) HIV infection, d) Hodgkin's disease, e) lymphoma, f) multiple myeloma, g) chronic renal failure, h) nephrotic syndrome, or i) other conditions associated with immunosuppression (e.g., undergoing organ transplantation or receiving immunosuppressive chemotherapy).

*Close contact occurs when persons live with, work with, or otherwise are frequently in close physical proximity to other persons.

# Hepatitis A Vaccine

Each year, approximately 140,000 persons in the United States are infected with hepatitis A virus (HAV). The highest rates of disease occur among persons 5-14 years of age. Most cases of hepatitis A can be attributed to person-to-person transmission. Adolescents who plan to travel to or work in a country that has high or intermediate endemicity of HAV infection* should be administered hepatitis A vaccine or immune globulin (29). For adolescents who plan to travel repeatedly to or reside for long periods in such areas, administration of hepatitis A vaccine rather than immune globulin is preferred (29). Unvaccinated adolescents who reside in a community that has a high rate of HAV infection and periodic outbreaks of hepatitis A disease also should be vaccinated. During outbreaks in such a community, age-specific disease rates provide an indirect indication of the age groups in which a large percentage of persons have prior immunity and, therefore, would benefit little from vaccination. Often the upper-age cutoff for hepatitis A vaccination is between 10 and 15 years of age. In addition, adolescents should be vaccinated against hepatitis A if they a) have chronic liver disease, b) are administered clotting factors, c) use illegal injecting or noninjecting drugs (i.e., if local epidemiologic data indicate current or past outbreaks have occurred among persons who have such risk behaviors), or d) are males who have sex with males.

*This includes countries other than Australia, Canada, Japan, New Zealand, and those located in western Europe.

# SCHEDULING VACCINATIONS

# Simultaneous Administration of Vaccines

Extensive clinical experience and experimental evidence from studies of infants and children have strengthened the scientific basis for administering certain vaccines simultaneously. Although specific studies have not been conducted regarding the simultaneous administration of all vaccines recommended for routine use in adolescents, no evidence has established that this practice is unsafe or ineffective (30). All indicated vaccinations should be administered at the scheduled immunization visit for adolescents who are 11-12 years of age. However, some adolescents may require multiple (i.e., four or more) vaccinations, and the provider may choose not to administer all indicated vaccines during the same visit. In these circumstances, the provider may prioritize which vaccines to administer during the visit and schedule the adolescent for one or more return visits.
Factors to consider in this decision include which vaccines require multiple doses, which diseases pose an immediate threat to the adolescent, and whether the adolescent is likely to return for scheduled visits.

# Documentation of Previous Vaccinations

Providers may encounter adolescents who do not have documentation of previously received vaccines. In these circumstances, providers should attempt to assess each adolescent's vaccination status through documentation obtained from the parent, previous providers, or school records. If documentation of an adolescent's vaccination status is not available at the time of the visit, the following strategy is recommended while awaiting documentation: a) for those vaccinations required by law or regulation that the adolescent previously was subject to, assume that the adolescent has been vaccinated (unless required vaccinations have not been administered for religious, philosophic, or medical reasons) and withhold those vaccinations; and b) administer those vaccines that the adolescent previously was not subject to by law or regulation.

# STATE VACCINATION LAWS AND REGULATIONS

In the United States, state vaccination laws and regulations for kindergarten through grade 12 are effective in ensuring high coverage levels among school attendees and have led to a marked decline in overall morbidity and mortality from vaccine-preventable diseases. Additional state laws and regulations requiring documentation of up-to-date immunization of adolescents, or a reliable history of disease-related immunity, at entry into sixth or seventh grade would ensure implementation of these recommendations and would lead to further reduction in transmission of vaccine-preventable disease.

# RECOMMENDATIONS FOR VACCINATION OF ADOLESCENTS

The recommendations for administering each vaccine are consistent with current ACIP, AAP, AAFP, and AMA documents (Exhibit 2). However, the Td recommendation has been changed recently such that the age at which the first Td booster is administered may be lowered from 14-16 years to 11-12 years (21-23). General recommendations and vaccine-specific recommendations for providers are as follows:

# General Recommendations

- Establish a visit to providers for adolescents ages 11-12 years to screen for immunization deficiencies, and administer those indicated vaccines that have not been received (Table 1). During the initial visit, schedule appointments to receive needed doses of vaccine that are not administered during the initial visit. Provide other indicated preventive services during this and all other visits.
- Check the vaccination status of adolescents during each subsequent visit to providers and correct any deficiencies, including those associated with the three-dose series of hepatitis B vaccinations.

# Vaccine-Specific Recommendations

- Hepatitis B vaccine. Vaccinate adolescents 11-12 years of age who have not been vaccinated previously with the three-dose series of hepatitis B vaccine. Ensure completion of the series by scheduling the vaccinations that are needed and by following up on those adolescents who do not receive these scheduled vaccinations. In addition, adolescents >12 years of age who are at increased risk for HBV infection should be vaccinated.
- MMR (second dose). Administer the second dose of MMR to adolescents who have not received two doses of MMR at ≥12 months of age.
- Td booster.
Administer a booster dose of Td vaccine to adolescents at ages 11-12 or 14-16 years if they have received the primary series of vaccinations and if no dose has been received during the previous 5 years. All subsequent, routine Td boosters (i.e., in the absence of tetanus-prone injury) should be administered at 10-year intervals (a timing sketch follows this list).
- Varicella virus vaccine. Administer varicella virus vaccine to adolescents ages 11-12 years who do not have a reliable history of chickenpox and who have not been vaccinated with varicella virus vaccine.
- Influenza vaccine. Administer influenza vaccine annually to adolescents who, because of an underlying medical condition, are at high risk for complications associated with influenza. If the adolescent is seen at a time of year when vaccination is not indicated, schedule an influenza vaccination at the appropriate time (i.e., September-December). Vaccinate adolescents who have close contact with persons at high risk for complications associated with influenza. This vaccine also may be administered to adolescents who have no underlying medical condition to reduce their risk for influenza infection.
- Pneumococcal polysaccharide vaccine. Administer pneumococcal vaccine to adolescents who have chronic illnesses associated with increased risk for pneumococcal disease or its complications. Use adolescents' visits to providers to ensure that the vaccine has been administered to persons for whom it is indicated.
- Hepatitis A vaccine.
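A minimal sketch of the Td booster timing described in the list above (illustrative only; it assumes a completed primary series and uses a hypothetical date-based interface):

```python
from datetime import date

def td_booster_due(last_td_dose: date, today: date,
                   tetanus_prone_injury: bool = False) -> bool:
    """Td booster timing per the rules above: a routine booster every
    10 years, or an immediate booster after a tetanus-prone injury if
    more than 5 years have passed since the last dose. A sketch, not a
    clinical decision tool."""
    years_elapsed = (today - last_td_dose).days / 365.25
    if tetanus_prone_injury:
        return years_elapsed > 5
    return years_elapsed >= 10

# Example: an injury 6 years after the last booster triggers a dose.
print(td_booster_due(date(2000, 6, 1), date(2006, 9, 1),
                     tetanus_prone_injury=True))  # True
```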
# Immunization of Adolescents Recommendations of the Advisory Committee on Immunization Practices, the American Academy of Pediatrics, the American Academy of Family Physicians, and the American Medical Association Summary This report concerning the immunization of adolescents (i.e., persons 11-21 years of age, as defined by the American Medical Association [AMA] and the American Academy of Pediatrics [AAP]) is a supplement to previous publications (i.e., MMWR 1994;43 [No. RR-1]1-38; the AAP 1994 Red Book: Report of the Committee on Infectious Diseases; Summary of Policy Recommendations for Periodic Health Examination, August 1996 from the American Academy of Family Physicians [AAFP]; and AMA Guidelines for Adolescent Preventive Services [GAPS]: Recommendations and Rationale). This report presents a new strategy to improve the delivery of vaccination services to adolescents and to integrate recommendations for vaccination with other preventive services provided to adolescents. This new strategy emphasizes vaccination of adolescents 11-12 years of age by establishing a routine visit to their health-care providers. Specifically, the purposes of this visit are to a) vaccinate adolescents who have not been previously vaccinated with varicella virus vaccine, hepatitis B vaccine, or the second dose of the measles, mumps, and rubella (MMR) vaccine; b) provide a booster dose of tetanus and diphtheria toxoids; c) administer other vaccines that may be recommended for certain adolescents; and d) provide other recommended preventive services. The recommendations for vaccination of adolescents are based on new or current information for each vaccine. The most recent recommendations from ACIP, AAP, AAFP, and AMA concerning specific vaccines and delivery of preventive services should be consulted for details (Exhibit 2). # BACKGROUND In the United States, vaccination programs that focus on infants and children have decreased the occurrence of many childhood, vaccine-preventable diseases (1 ). However, many adolescents (i.e., persons 11-21 years of age i.e., as defined by the American Medical Association [AMA] and the American Academy of Pediatrics [AAP]) and young adults (i.e., persons 22-39 years of age) continue to be adversely affected by vaccine-preventable diseases (e.g., varicella, hepatitis B, measles, and rubella), partially because vaccination programs have not focused on improving vaccination coverage among adolescents. These recommendations for the immunization of adolescents were developed to improve vaccination coverage among adolescents and focus on establishing a routine visit to health-care providers (i.e., providers) for adolescents ages 11-12 years. Such a visit provides the opportunity for a) ensuring vaccination of those adolescents not previously vaccinated with hepatitis B vaccine, varicella virus vaccine (if indicated), or the second dose of the measles, mumps, and rubella (MMR) vaccine; b) administering a tetanus and diphtheria toxoid (Td) booster; c) administering other vaccines that may be recommended for certain adolescents; and d) providing other recommended preventive services. Flexibility in scheduling vaccinations is an important factor for improving vaccination coverage among adolescents. Because multiple-dose vaccines or simultaneous administration of several vaccines may be indicated for adolescents (Table 1), providers may need to be flexible in determining which vaccines to administer during the initial visit and which to administer on return visits. 
# IMMUNIZATION AS A PREVENTIVE HEALTH SERVICE FOR ADOLESCENTS Administration of vaccinations should be integrated with other preventive services provided to adolescents. The importance of improving the vaccination levels and of providing other preventive services indicated for adolescents and young adults has been emphasized recently by many national organizations (Exhibit 1). In particular, AAP has advocated and provided specific recommendations for the vaccination of adolescents (2,3 ). Similarly, AMA and the Health Resources and Services Administration (HRSA) have proposed comprehensive recommendations that provide a framework for organizing the content and delivery of preventive health services (including vaccinations) for adolescents (4,5 ). The United States Preventive Services Task Force (USPSTF) has advocated specific vaccinations for adolescents that are based on the patient's age and risk factors (6 ). In addition, the American Academy of Family Physicians (AAFP) has recommended delivery of preventive services based on reviews by USPSTF and the AAFP Commission on Clinical Policies and Research (7 ). Guidelines recommended by these organizations include the delivery of preventive health services during a series of regular visits by adolescents to providers. These services include specific guidance on health behaviors; screening for biomedical, behavioral, and emotional conditions; and delivery of other health services, including vaccinations. The recommendations for vaccination of adolescents adopted by the Advisory Committee on Immunization Practices (ACIP), AAP, AAFP, and AMA are consistent with those of other groups that promote preventive health services for adolescents. # RATIONALE FOR VACCINE ADMINISTRATION DURING AN ADOLESCENT'S VISIT TO PROVIDERS # Hepatitis B Vaccine In the United States, most persons infected with hepatitis B virus (HBV) acquired their infection as young adults or adolescents. HBV is transmitted primarily through sexual contact, injecting-drug use, regular household contact with a chronically in- The second dose of vaccine is recommended at age 1 month and the third dose (Hep B-3) at age 6 months. § Adolescents who have not received three doses of hepatitis B vaccine should initiate or complete the series at ages 11-12 years. The second dose should be administered at least 1 month after the first dose, and the third dose should be administered at least 4 months after the first dose and at least 2 months after the second dose. ¶ The fourth dose of diphtheria and tetanus toxoids and pertussis vaccine (DTP) may be administered at age 12 months if at least 6 months have elapsed since the third dose of DTP. Diphtheria and tetanus toxoids and acellular pertussis vaccine (DTaP) is licensed for the fourth and/or fifth vaccine dose(s) for children ages ≥15 months and may be preferred for these doses in this age group. Tetanus and diphtheria toxoids, adsorbed, for adult use (Td) is recommended at ages 11-12 years if at least 5 years have elapsed since the last dose of DTP, DTaP, or diphtheria and tetanus toxoids, adsorbed, for pediatric use (DT). ** Three H. influenzae type b (Hib) conjugate vaccines are licensed for infant use. If PedvaxHIB ® (Merck & Co.) Haemophilus b conjugate vaccine (Meningococcal Protein Conjugate) (PRP-OMP) is administered at ages 2 and 4 months, a dose at 6 months is not required. After completing the primary series, any Hib conjugate vaccine may be used as a booster. 
† † Oral poliovirus vaccine (OPV) is recommended for routine vaccination of infants. Inactivated poliovirus vaccine (IPV) is recommended for persons-or household contacts of persons-with a congenital or acquired immune-deficiency disease or an altered immune status resulting from disease or immunosuppressive therapy and is an acceptable alternative for other persons. The primary three-dose series for IPV should be given with a minimum interval of 4 weeks between the first and second doses and 6 months between the second and third doses. § § The second dose of measles, mumps, and rubella vaccine (MMR) is routinely recommended at ages 4-6 years or at ages 11-12 years but may be administered at any visit provided at least 1 month has elapsed since receipt of the first dose. ¶ ¶ Varicella virus vaccine (Var) can be administered to susceptible children and adolescents at any time after age 12 months. Unvaccinated adolescents who lack a reliable history of chickenpox should be vaccinated at ages 11-12 years. Use of trade names and commercial sources is for identification only and does not imply endorsement by the Public Health Service or the U.S. Department of Health and Human Services. Source: Advisory Committee on Immunization Practices, American Academy of Pediatrics, and American Academy of Family Physicians. fected person, or occupational exposure. However, the source of infection is unknown for approximately one third of persons who have acute hepatitis B (8 ). A comprehensive vaccination strategy to eliminate transmission of HBV through routine vaccination of infants, adolescents ages 11-12 years, and adolescents who are at increased risk for HBV infection has been adopted (3,7,9,10 ). Any reduction in HBVrelated liver disease resulting from universal vaccination of infants cannot be expected until vaccinated children reach adolescence and adulthood. Routine vaccination of adolescents 11-12 years of age who have not been vaccinated previously is an effective strategy for more rapidly lowering the incidence of HBV infection and assisting in the elimination of HBV transmission in the United States (3,10 ). An adolescent's visit at ages 11-12 years gives the provider an opportunity to initiate protection against HBV before the adolescent begins high-risk behaviors. Unvaccinated adolescents >12 years of age who are at increased risk for HBV infection also should be vaccinated (10 ). Such adolescents are at increased risk for HBV infection and should be vaccinated against hepatitis B if they a) have multiple sexual partners (i.e., more than one partner in a 6-month period), b) use illegal injecting drugs, c) are males who have sex with males, d) have sexual or regular household contact with a person who is positive for hepatitis B surface antigen, e) are health-care or public-safety workers who are occupationally exposed to human blood, f) are undergoing hemodialysis, g) are residents of institutions for the developmentally disabled, h) are administered clotting factors, or i) travel to an area of high or intermediate HBV endemicity for ≥6 months. In addition, AAP recommends that providers administer hepatitis B vaccine to all adolescents for whom they provide services (3 ). Adolescents can be vaccinated against hepatitis B in various settings, including schools and providers' offices. In the United States, school-based demonstration projects to vaccinate adolescents against hepatitis B have achieved >70% vaccination coverage (11)(12)(13). 
Adolescents should receive three age-appropriate doses of hepatitis B vaccine (Table 2). Hepatitis B vaccine is highly immunogenic in adolescents and young adults when administered in varying three-dose schedules (14,15 ). A schedule of 0, 1-2, and 4-6 months is recommended. Flexibility in scheduling is an important factor for achieving high rates of vaccination in adolescents. When the vaccination schedule is interrupted, the vaccine series does not require reinitiation (CDC, unpublished data; 16 ) . Studies of "off-schedule" vaccinations indicate that if the series is interrupted after the first dose, the second dose should be administered as soon as possible, and the second and third doses should be separated by an interval of at least 2 months. If only the third dose is delayed, it should be administered as soon as possible. Intervals of up to 1 year between administration of the first and third doses induce excellent antibody responses (15 ), and studies are in progress to evaluate longer intervals. # Measles, Mumps, and Rubella Vaccine The sustained decline of measles in the United States has been associated with a shift in occurrence from children to infants and young adults. During 1990-1994, 47% of reported cases occurred in persons ages ≥10 years, compared with only 10% during 1960-1964 (CDC, unpublished data; 17 ). During the 1980s, outbreaks of measles occurred among school-age children in schools with measles-vaccination levels of ≥98% (18 ). Primary vaccine failure was considered the principal contributing factor in these outbreaks. As a result, beginning in 1989, a two-dose measles-vaccination schedule for students in primary schools, secondary schools, and colleges and universities was recommended (18)(19)(20). This two-dose vaccination schedule provides protection to ≥98% of persons vaccinated. Administration of a second dose of MMR at entry to elementary school (i.e., at ages 4-6 years) or junior high or middle school (i.e., at ages 11-12 years) is recommended (21)(22)(23). State policies for implementing the two-dose strategy have varied; some states require the second dose for entry into primary school, and others require it for entry into middle school. Because the recommendation for a second dose of MMR was made in 1989, many children born before 1985 (and some children born after 1985, depending on local policy) may not have received the second vaccine dose. The routine visit to providers at ages 11-12 years affords an opportunity to administer a second dose of MMR to adolescents who have not received two doses of MMR at ≥12 months of age. MMR should not be given to adolescents who are known to be pregnant or to adolescents who are considering becoming pregnant within 3 months of vaccination. Asking adolescents if they are pregnant, excluding those who say they are, and explaining the theoretical risk of fetal infection to the other female adolescents are recommended precautions. # Tetanus and Diphtheria Toxoids Although booster doses of Td are recommended at 10-year intervals, no special strategies have been developed to ensure that this recommendation is fully implemented. During 1991-1994, 191 (95%) of the 201 reported cases of tetanus in the United States occurred in persons ages ≥20 years, and nine (45%) of the 20 reported cases of diphtheria occurred in persons ages ≥20 years (CDC, unpublished data). Data from a serosurvey conducted in Minnesota indicated that 62% of persons 18-39 years of age lacked adequate protection against diphtheria (24 ). 
Epidemic diphtheria has reemerged in the New Independent States (NIS) of the former Soviet Union and has resulted in >47,000 cases reported in 1994 and >50,000 in 1995 (CDC, unpublished data; 25 ). Although no imported cases were reported in the United States during those years, ≥20 cases of diphtheria were reported in Europe, and two cases occurred among U.S. citizens who resided or were traveling in the NIS. This threat of infection underscores the importance of maintaining high levels of diphtheria immunity in the U.S. population. Recent data from CDC's National Health and Nutrition Examination Survey (NHANES III) suggested that immunity to tetanus varied with age (26 ). Among children ages 6-16 years, 82% had protective levels of tetanus antitoxin (defined as a serum level >0.15 IU per mL). Immunity in persons decreased at ages 9-13 years, with 15%-36% of these persons unprotected (CDC, unpublished data). Immunity also varied inversely with the length of time since the last tetanus vaccination. Among children who were reported as being vaccinated 6-10 years before the serologic survey, 28% lacked immunity to tetanus, compared with 14% who were reported as being vaccinated 1-5 years before the survey and 5% who were reported as being vaccinated ≤1 year before the survey (27 ). A Td booster is essential to ensure long-lasting immunity against tetanus. Lowering the age for administration of the first Td booster from ages 14-16 years to ages 11-12 years should increase compliance and thereby reduce the susceptibility of adolescents to tetanus and diphtheria. Administering the Td booster at ages 11-12 years provides a rationale for a routine visit to providers for adolescents, regardless of their need for other vaccines. Data suggest there should be no increased risk for serious side effects to Td when the first booster dose is administered at ages 11-12 years rather than at ages 14-16 years (CDC, unpublished data). With the exception of the Td booster at ages 11-12 years, routine boosters should be administered every 10 years. If a dose of Td has been administered after receipt of tetanus-and diphtheria-containing vaccine at ages 4-6 years and before the routine Td booster at ages 11-12 years, the dose at ages 11-12 years is not indicated. The next dose should follow the last dose by 10 years, unless specifically indicated because of a tetanus-prone injury (i.e., persons who sustain a tetanus-prone injury should be administered a Td booster immediately if >5 years have elapsed since their last Td booster). # Varicella Virus Vaccine Before varicella virus vaccine became available in 1995, most persons in the United States contracted varicella (i.e., chickenpox), resulting in an estimated 4 million infections annually. At present, approximately 20% of adolescents ages 11-12 years remain susceptible to varicella (CDC, unpublished data). The rate of complications, including death, is greater for persons who contract chickenpox when they are ≥15 years of age. Varicella virus vaccine should be administered to adolescents ages 11-12 years if they have not been vaccinated and do not have a reliable history of chickenpox (7,27,28 ). At ages 11-12 years, providers should assess the adolescent's need for varicella virus vaccine and administer the vaccine to those who are eligible. When administered to children <13 years of age, a single dose of vaccine induces protective antibodies in >95% of recipients. For susceptible persons ≥13 years of age, two doses separated by 4-8 weeks are recommended. 
Varicella vaccine should not be given to adolescents who are known to be pregnant or to adolescents who are considering becoming pregnant within 1 month of vaccination. Asking adolescents if they are pregnant, excluding those who say they are, and explaining the potential effects of the vaccine virus on the fetus to the other female adolescents are recommended precautions. # OTHER VACCINES INDICATED FOR CERTAIN ADOLESCENTS Influenza Vaccine More than 8 million children and adolescents in the United States, including 2.2 million persons ages 10-18 years who have asthma (CDC, unpublished data), have at least one medical condition that places them at high risk for complications associated with influenza. Such adolescents should be vaccinated annually for influenza; however, few actually receive the vaccine. Adolescents at high risk who should be administered influenza vaccine annually are those who a) have chronic disorders of the pulmonary system (including those who have asthma) or the cardiovascular system; b) reside in chronic-care facilities that house persons of any age who have chronic medical conditions; c) have required regular medical follow-up or hospitalization during the preceding year because of chronic metabolic disease(s) (including those who have diabetes mellitus), renal dysfunction, hemoglobinopathy, or immunosuppression (including those who have immunosuppression caused by medication); or d) receive long-term aspirin therapy and, therefore, may be at risk for contracting Reye syndrome after influenza. In addition, adolescents who have close contact* with persons who meet any of these conditions or with persons ≥65 years of age should be administered influenza vaccine annually. Students in institutional settings (e.g., those residing in dormitories) should be encouraged to receive influenza vaccine annually to minimize any disruption of routine activities during epidemics. In addition, any adolescent may be vaccinated annually to reduce the likelihood of acquiring influenza infection. Administration of influenza vaccine to adolescents ages 11-12 years may assist in establishing the lifetime practice of annual influenza vaccination in persons for whom it is indicated. Providers should administer influenza vaccine to adolescents who visit them for routine care if vaccination is indicated and if their visit is during the time of year appropriate for influenza vaccination (i.e., September-December); such adolescents should be scheduled for an additional visit if they are seen at a time of year when vaccination is not indicated. Adolescents may receive influenza vaccine at the same time they receive other recommended vaccines. Additional strategies are needed to improve delivery of influenza vaccine to adolescents for whom it is indicated. # Pneumococcal Polysaccharide Vaccine Approximately 340,000 persons 2-18 years of age have chronic illnesses associated with increased risk for pneumococcal disease or its complications and should receive the 23-valent pneumococcal vaccine. Adolescents who should be vaccinated include those who have a) anatomic or functional asplenia (including sickle cell disease), b) nephrotic syndrome, c) cerebrospinal-fluid leaks, or d) conditions associated with immunosuppression (including human immunodeficiency virus [HIV]). 
Revaccination is recommended for adolescents at highest risk for serious pneumococcal infection and those likely to experience rapid decline in pneumococcalantibody levels, provided ≥5 years have passed since administration of the first dose of pneumococcal vaccine. The possible need for subsequent doses following revaccination requires further study. Persons at highest risk and persons likely to have a rapid decline in pneumococcal-antibody levels include those who have a) splenic dysfunction or anatomic asplenia, b) sickle cell disease, c) HIV infection, d) Hodgkin's disease, e) lymphoma, f) multiple myeloma, g) chronic renal failure, h) nephrotic syndrome, or i) other conditions associated with immunosuppression (e.g., undergoing organ transplantation or receiving immunosuppressive chemotherapy). *Close contact occurs when persons live with, work with, or otherwise are frequently in close physical proximity to other persons. # Hepatitis A Vaccine Each year, approximately 140,000 persons in the United States are infected with hepatitis A virus (HAV). The highest rates of disease occur among persons 5-14 years of age. Most cases of hepatitis A can be attributed to person-to-person transmission. Adolescents who plan to travel to or work in a country that has high or intermediate endemicity of hepatitis A virus (HAV) infection* should be administered hepatitis A vaccine or immune globulin (29 ). For adolescents who plan to travel repeatedly to or reside for long periods in such areas, administration of hepatitis A vaccine rather than immune globulin is preferred (29 ). Unvaccinated adolescents who reside in a community that has a high rate of HAV infection and periodic outbreaks of hepatitis A disease also should be vaccinated. During outbreaks in such a community, age-specific disease rates provide an indirect indication of the age groups in which a large percentage of the group has prior immunity and, therefore, would benefit little from vaccination. Often the upper-age cutoff for hepatitis A vaccination is between 10 years of age and 15 years of age. In addition, adolescents should be vaccinated against hepatitis A if they a) have chronic liver disease, b) are administered clotting factors, c) use illegal injecting or noninjecting drugs (i.e., if local epidemiologic data indicate current or past outbreaks have occurred among persons who have such risk behaviors), or d) are males who have sex with males. # SCHEDULING VACCINATIONS # Simultaneous Administration of Vaccines Extensive clinical experience and experimental evidence from studies of infants and children have strengthened the scientific basis for administering certain vaccines simultaneously. Although specific studies have not been conducted regarding the simultaneous administration of all vaccines recommended for routine use in adolescents, no evidence has established that this practice is unsafe or ineffective (30 ). All indicated vaccinations should be administered at the scheduled immunization visit for adolescents who are 11-12 years of age. However, some adolescents may require multiple (i.e., four or more) vaccinations, and the provider may choose not to administer all indicated vaccines during the same visit. In these circumstances, the provider may prioritize which vaccines to administer during the visit and schedule the adolescent for one or more return visits. 
Factors to consider in this decision include which vaccines require multiple doses, which diseases pose an immediate threat to the adolescent, and whether the adolescent is likely to return for scheduled visits.

# Documentation of Previous Vaccinations

Providers may encounter adolescents who do not have documentation of previously received vaccines. In these circumstances, providers should attempt to assess each adolescent's vaccination status through documentation obtained from the parent, previous providers, or school records. If documentation of an adolescent's vaccination status is not available at the time of the visit, the following strategy is recommended while awaiting documentation: a) for those vaccinations required by law or regulation that the adolescent previously was subject to, assume that the adolescent has been vaccinated (unless required vaccinations have not been administered for religious, philosophic, or medical reasons) and withhold those vaccinations; and b) administer those vaccines that the adolescent previously was not subject to by law or regulation.

# STATE VACCINATION LAWS AND REGULATIONS

In the United States, state vaccination laws and regulations for kindergarten through grade 12 are effective in ensuring high coverage levels among school attendees and have led to a marked decline of overall morbidity and mortality from vaccine-preventable diseases. Additional state laws and regulations requiring documentation of up-to-date immunization of adolescents or a reliable history of disease-related immunity at entry into sixth or seventh grade would ensure implementation of these recommendations and would lead to further reduction in transmission of vaccine-preventable disease.

# RECOMMENDATIONS FOR VACCINATION OF ADOLESCENTS

The recommendations for administering each vaccine are consistent with current ACIP, AAP, AAFP, and AMA documents (Exhibit 2). However, the Td recommendation has been changed recently such that the age at which the first Td booster is administered may be lowered from 14-16 years to 11-12 years (21-23). General recommendations and vaccine-specific recommendations for providers are as follows:

# General Recommendations

• Establish a visit to providers for adolescents ages 11-12 years to screen for immunization deficiencies, and administer those indicated vaccines that have not been received (Table 1). During the initial visit, schedule appointments to receive needed doses of vaccine that are not administered during the initial visit. Provide other indicated preventive services during this and all other visits.

• Check the vaccination status of adolescents during each subsequent visit to providers and correct any deficiencies, including those associated with the three-dose series of hepatitis B vaccinations.

# Vaccine-Specific Recommendations

• Hepatitis B vaccine. Vaccinate adolescents 11-12 years of age who have not been vaccinated previously with the three-dose series of hepatitis B vaccine. Ensure completion of the series by scheduling the vaccinations that are needed and by following up on those adolescents who do not receive these scheduled vaccinations. In addition, adolescents >12 years of age who are at increased risk for HBV infection should be vaccinated.

• MMR (second dose). Administer the second dose of MMR to adolescents who have not received two doses of MMR at ≥12 months of age.
• Td booster. Administer a booster dose of Td vaccine to adolescents at ages 11-12 or 14-16 years if they have received the primary series of vaccinations and if no dose has been received during the previous 5 years. All subsequent, routine Td boosters (i.e., in the absence of tetanus-prone injury) should be administered at 10-year intervals.

• Varicella virus vaccine. Administer varicella virus vaccine to adolescents ages 11-12 years who do not have a reliable history of chickenpox and who have not been vaccinated with varicella virus vaccine.

• Influenza vaccine. Administer influenza vaccine annually to adolescents who, because of an underlying medical condition, are at high risk for complications associated with influenza. If an adolescent is seen at a time of year when vaccination is not indicated, schedule the adolescent for an influenza vaccination at the appropriate time (i.e., September-December). Vaccinate adolescents who have close contact with persons at high risk for complications associated with influenza. This vaccine also may be administered to adolescents who have no underlying medical condition to reduce their risk for influenza infection.

• Pneumococcal polysaccharide vaccine. Administer pneumococcal vaccine to adolescents who have chronic illnesses associated with increased risk for pneumococcal disease or its complications. Use adolescents' visits to providers to ensure that the vaccine has been administered to persons for whom it is indicated.

• Hepatitis A vaccine.
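Several of the recommendations above are simple interval rules: a 10-year routine Td booster cycle, pneumococcal revaccination after ≥5 years for adolescents at highest risk, and a September-December influenza window. The following is a minimal sketch of those rules only, assuming hypothetical date fields and using 365-day years as a simplification; it is not a clinical decision tool.

```python
from datetime import date

# Minimal sketch of three interval rules from the recommendations above.
# Field names are hypothetical; 365-day years are a simplification.

def td_booster_due(last_td, visit):
    """Routine Td boosters (absent tetanus-prone injury) at 10-year intervals."""
    return (visit - last_td).days >= 10 * 365

def pneumococcal_revaccination_due(first_dose, visit, highest_risk):
    """Revaccinate highest-risk adolescents once >=5 years after the first dose."""
    return highest_risk and (visit - first_dose).days >= 5 * 365

def in_influenza_window(visit):
    """Influenza vaccination is appropriate September-December; else reschedule."""
    return visit.month in (9, 10, 11, 12)

visit = date(1997, 10, 15)
print(td_booster_due(date(1986, 5, 1), visit))                        # True
print(pneumococcal_revaccination_due(date(1994, 3, 1), visit, True))  # False (<5 years)
print(in_influenza_window(visit))                                     # True
```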
# SUMMARY AND CONCLUSIONS

Exposure to chrysene as an isolated chemical presently occurs only in specific occupations, i.e., chrysene synthesis, laboratory experimentation, and possibly in the synthesis of anthraquinone vat dyes. However, chrysene and its derivatives, e.g., certain methylchrysene isomers, along with hundreds of other polycyclic aromatic hydrocarbons (PAH's), are formed during the pyrolysis of organic matter and can occur in any occupational environment where this process takes place. While it has been suggested that 700 C is the optimum pyrolytic temperature for PAH formation, other factors such as the chemical and physical nature of the pyrolyzed material, the presence or absence of oxygen, and the period of time during which the compound is pyrolyzed also determine the amounts and mixtures of various PAH's formed. Chrysene has been detected in many materials which result from, or are used in, industrial processes. Chrysene also has been detected in the ambient atmosphere and in certain food products.

A number of experimental animal studies have been conducted for the purpose of assessing the carcinogenicity of chrysene, with varying results. When three 100 µg doses of chrysene were administered by subcutaneous (s.c.) injection to newborn mice, an increased incidence of liver tumors was observed. The increase, however, was not statistically significant. Marginal results were obtained when mice were injected (s.c.) with 1 mg chrysene in arachis oil, once a week, for 10 weeks. This treatment resulted in the development of injection-site tumors in 2 of 20 animals. Tumors also developed after chrysene (1% solution in acetone) was applied to the shaved backs of mice. Chrysene appears to exert a stronger carcinogenic effect when used as a tumor initiator than when used as a "complete" carcinogen, e.g., when the shaved backs of each of 20 mice were painted with ten doses of 0.1 mg chrysene (each in 0.1 ml acetone), followed by treatment with tetradecanoyl phorbol acetate (TPA), 11 of the 18 surviving mice had tumors (total of 19 tumors) after 20 weeks. Similar results were obtained using chrysene (total dose of 4.4 micromoles) either as an initiator with TPA, or when chrysene (1.0 mg in 0.4 ml acetone) was painted on the shaved backs of mice prior to treatment with croton resin. Chrysene also has produced positive results in mammalian cell transformation tests and in the "Ames" test for mutagenicity. Certain methylchrysene isomers also have been shown to possess carcinogenic potential: 5-methylchrysene has been reported to be a strong "complete" carcinogen as well as a stronger tumor initiator than chrysene.

Epidemiologic evidence for the carcinogenicity of chrysene as an isolated chemical was not located. However, certain industry-related materials which contain many PAH's, including chrysene and its methyl derivatives, have been associated with increased cancer risk in exposed workers. Chrysene may be generated inadvertently in numerous industrial processes which involve the pyrolysis of organic matter. Further, conditions which are favorable for the formation of chrysene also are likely to be favorable for the formation of methylchrysenes and numerous other PAH's whose carcinogenic potencies range from greater than to less than that of chrysene. While the aforementioned control measures may be applied to the general class of PAH's, NIOSH recognizes that under certain circumstances of PAH exposure, they will be neither appropriate nor feasible.
While no workplace environmental limit for chrysene is recommended at this time, information on analytical methodology is included in this review. Most methods for the identification and quantitation of chrysene involve various forms of chromatography and spectrometry.

# INTRODUCTION

The carcinogenic potential of certain polycyclic aromatic hydrocarbons (PAH's), e.g., benz(a)pyrene and benz(a)anthracene, has been well documented (IARC Monograph, Volume 3, 1973; Dipple, 1976). However, comparatively little information exists upon which to base an evaluation of the carcinogenic potential of chrysene. Further, isolated exposure to chrysene occurs only in specific operations, i.e., chrysene manufacture, laboratory experimentation, and possibly in the synthesis of anthraquinone vat dyes. The number of workers thus exposed is small relative to the number of workers exposed to chrysene admixed with other PAH's in industrial processes which involve the pyrolysis of organic matter. This notwithstanding, sufficient quantities of chrysene may now be in use to elicit concern over any biological effects exerted by this chemical. Thus, an integration and evaluation of all experimental data pertinent to the carcinogenic potential of chrysene appears warranted. There is now no federal occupational health standard for chrysene, or for PAH's as a general class.

# II. COMMON OPERATIONS, USE, AND OCCURRENCE

# A. Workplace Environment

PAH's are perhaps the most ubiquitous of the carcinogenic agents that have been detected in the atmosphere. Chrysene, along with other PAH's, occurs most frequently as an adsorbate on airborne particulates resulting from the pyrolysis of carbonaceous materials (Filatova et al., 1972). Chrysene is sometimes encountered as a vapor. The yield of chrysene can be affected by the chemical and physical nature of the pyrolyzed compound, the pyrolysis temperature, the presence or absence of adequate oxygen for complete combustion, and the period of time during which the compound is exposed to heat. Experimental determinations have shown the optimal temperature for PAH formation to be around 700 C (Masuda et al., 1967). From an occupational health standpoint it is important to identify the sources of these compounds. Table I lists some of these sources for chrysene.

[Table I, giving sources of chrysene and reported concentrations, is not reproduced here; references cited include Cleary, 1963; Qazi and Nau, 1975; Rhee and Bratzler, 1968; Lijinsky et al., 1963; Lijinsky and Raha, 1961; Filatova et al., 1972; Tye et al., 1966; Hendrickson et al., 1963; and Falk et al., 1951.]

*Unless otherwise stated, the units (ppm) refer to parts of chrysene per million parts of source material sampled, as a weight/weight ratio.

# B. Occupational Areas of Concern

At present, it is believed that exposure to chrysene as an isolated chemical occurs only in specific occupations, i.e., in chrysene manufacture, laboratory experimentation, and perhaps in the synthesis of anthraquinone vat dyes. It was Holbro (1953) who suggested the use of chrysene as a starting material in the synthesis of dyestuffs, particularly of anthraquinone vat dyes, to which the results of his research showed chrysene imparted favorable properties regarding "affinity, shade, and fastness". While the scope of this report is limited primarily to chrysene as an isolated chemical, its occurrence with other PAH's in a wide range of occupational environments cannot be disregarded.
The following list of occupational areas, where worker exposure to chrysene and other PAH's is likely to occur, is separated into stationary-point and non-stationary-point source categories: the former refers to in-place operations, i.e., where engineering controls could be expected to substantially reduce employee exposure, and the latter refers to outdoor or mobile operations where the application of engineering controls may be more difficult. In the latter case, however, it would be expected that natural factors (wind, etc.) would contribute to reductions in worker exposure. While this list is based on the occurrence of chrysene as it has been reported in the scientific literature, many other occupational exposure sources undoubtedly exist.

# C. General Environment

Chrysene is found in the general environment primarily as a product of the pyrolysis of fossil fuels for energy production and the operation of diesel- and gasoline-fueled internal combustion engines. Chrysene has been measured in the atmospheres of several major cities. In Sydney, Australia, a 12-month survey of the ambient PAH content of suspended particulate matter showed chrysene at concentrations ranging from 0.0002 to 0.0065 µg/cu m of air (Cleary and Sullivan, 1965). Chrysene was found in the atmosphere of Buffalo, New York, at concentrations of 0.082-0.116 µg/cu m of air, and in composite samples from the air of urban Cincinnati, Ohio, at concentrations of 0.0032 and 0.006 µg/cu m of air (Conlee et al., 1967).

Studies have been conducted to determine the contribution of the gasoline engine to atmospheric levels of PAH's. Gasoline engine exhaust tar was found to contain 175 ppm chrysene (Hoffman and Wynder, 1962). In another investigation, the PAH emissions from diesel- and gasoline-powered vehicles were compared (Sullivan and Cleary, 1964), and chrysene was found in a diesel-trafficked area at a concentration of 0.0032 µg/cu m of air; in the gasoline-trafficked area, chrysene was found at a concentration of 0.0034 µg/cu m of air.

In addition to the finding of chrysene in ambient air, its presence in soil and water has been substantiated. Although quantitative data are few, chrysene has been identified in soils in rural Massachusetts and Connecticut (Blumer, 1961), and in wells, rivers, lakes, and soils in Germany (Borneff, 1964; Borneff and Kunte, 1965). One investigator suggested that the occurrence of chrysene and other PAH's in rural soil is not due to fallout from polluted air but, rather, is the result of the natural pyrolytic transformation of plant organic matter to peat and lignite; or that, alternatively, they occur as metabolites of organisms which reside in the soils (Blumer, 1961). Chrysene also has been detected in certain food products: roasted coffee (Kuratsune and Hueper, 1960), spinach and tomatoes (Grimmer and Hildebrand, 1965), baker's yeast (Grimmer and Wilhelm, 1969), "liquid smoke" (Lijinsky and Shubik, 1965), smoked ham and barbecued beef (Malanoski et al., 1968), charcoal-broiled steak and chicken (Lijinsky and Ross, 1967), and electric-broiled fish (Masuda et al., 1966).

Of particular importance is the occurrence of chrysene in cigarette-smoke condensate. Chrysene has been found in cigarette-smoke condensate at a concentration of 0.06 µg per 100 cigarettes (Van Duuren, 1958). Cigarette smoke may add significantly to the total exposure to chrysene from both the general environment and work environment.
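The concentrations quoted above can be put in rough perspective with simple arithmetic. The sketch below is a back-of-the-envelope illustration only; the 10 cu m per 8-hour workday breathing volume is an assumed conventional figure, not a value taken from this report.

```python
# Back-of-the-envelope exposure arithmetic; illustrative only.

def inhaled_mass_ug(conc_ug_per_cu_m, air_volume_cu_m=10.0):
    """Inhaled chrysene (ug) = concentration (ug/cu m) x air volume (cu m).
    The default 10 cu m/workday is an assumed conventional breathing volume."""
    return conc_ug_per_cu_m * air_volume_cu_m

def mass_in_source_ug(ppm_w_w, source_mass_g):
    """Per the Table I footnote, ppm is weight/weight: 1 ppm = 1 ug per g."""
    return ppm_w_w * source_mass_g

# Gasoline-trafficked area, 0.0034 ug/cu m (Sullivan and Cleary, 1964):
print(inhaled_mass_ug(0.0034))      # ~0.034 ug over an assumed workday
# Gasoline engine exhaust tar, 175 ppm chrysene (Hoffman and Wynder, 1962):
print(mass_in_source_ug(175, 1.0))  # 175 ug of chrysene per gram of tar
```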
# III. BIOLOGICAL EFFECTS OF EXPOSURE

# A. Toxic Effects in Animals

Schmid et al. (1967) used mortality and body and organ weight changes in mice as measures of chrysene toxicity. Each of 10 male AKR and 10 male C57BL/6 mice was given a single i.p. injection of 7.5 mg chrysene dissolved in 1 ml of sesame oil. All animals were alive after 20 days of observation, at which time they were killed. Autopsies revealed that none of the mice had experienced any significant change in body, spleen, or thymus weight. The investigators concluded that chrysene was not toxic as reflected by the aforementioned parameters and experimental conditions.

Gershbein (1958) tested the effects of chrysene on liver regeneration in partially hepatectomized adult male Sprague-Dawley rats. Single s.c. injections of 12.0 mg chrysene in peanut oil were given daily for the 11-day test period to each rat (320 mg/kg body weight). This treatment caused no significant change in the extent of liver regeneration from that in the corresponding controls.

Bernheim et al. (1953) found that a 0.3% solution of chrysene in benzene, when painted on the skins of white mice, inhibited fatty acid peroxide-induced thiobarbituric acid-color development on the epidermal side, but not in the subcutaneous fat. The investigators attributed these results to inhibition of fatty acid oxidation. They noted that oxidized fatty acids are more effective inhibitors of certain enzymes and of bacterial growth than are the corresponding unoxidized acids, so that a carcinogen, by inhibiting peroxide formation, might produce conditions favorable for growth. They also suggested that inhibition of fatty acid oxidation may be a necessary, but certainly not a sufficient, condition for tumorigenesis.

# B. Carcinogenicity in Animals

A number of experimental animal studies of the carcinogenic potential of chrysene have yielded varying results. Most of these studies have been conducted with mice, and various investigators have described chrysene as a weak carcinogen (Wynder and Hoffman, 1959); a cancer "initiator" (Hoffman et al., 1974); an "incomplete" carcinogen (Horton and Christian, 1974); and as a cancer "inhibitor" (Huggins et al., 1964). In one study, newborn mice each received three 100 µg doses of chrysene by s.c. injection; of the treated mice surviving the observation period, 9 had liver tumors (1 of which was multiple) and 4 mice had lung tumors (1 of which was multiple). The investigators concluded that chrysene seemed to be a more active tumorigen than its epoxide, and that it increased the incidence of liver tumors above that in the controls, while the epoxide did not do so, and that neither chrysene nor the epoxide affected the incidence of lung tumors. The liver tumors ranged in appearance from well-differentiated liver cell masses that were difficult to distinguish from normal liver tissues to those exhibiting pleomorphism, invasion, and metastasis. The increased incidence of liver tumors after exposure to chrysene, however, was not statistically significant (P=0.05).

Boyland and Sims (1967) conducted a similar experiment with chrysene and chrysene 5,6-oxide (Table II.B.). C57 black mice, 120 days old, were injected s.c. with 1.0 ml suspensions of arachis oil that contained 1.0 mg of either chrysene or chrysene 5,6-oxide, once a week, for 10 weeks. A third group, given only arachis oil, served as the control. After an observation period ranging from 60 to 80 weeks, 2 of the 20 chrysene-treated mice surviving at the time of observation of the first tumor (initial number of animals treated was not stated) had developed injection-site tumors.
A comparable number of animals (3 of 20) developed injection-site tumors from the epoxide, but the time-to-tumor was longer by an average of 8.6 days. Because of the longer latency period for the epoxide, the investigators concluded that the parent PAH was a more active tumorigen. None of the control animals had tumors. Of the tumors induced by the active compounds, about 80% were spindle-cell sarcomas, about 12% were pleomorphic sarcomas, and about 8% were squamous cell carcinomas.

Wynder and Hoffman (1959) applied a 1% solution of chrysene in acetone to the backs of female Swiss (Millerton) mice, three times weekly until the last animal died (sometime during the 13th month of treatment). Nine of the original 20 animals had developed papillomas and 8 had developed carcinomas, with the first tumor appearing after 8 months of treatment. From the original data it is not possible to determine with exactitude the number of animals with both papillomas and carcinomas. A decrease in lifespan was also noted; no controls were included in the experiment. It is unlikely, however, that such a high incidence of carcinomas would occur spontaneously in this strain of mouse.

Several investigators have shown chrysene to act as a cancer "initiator." Hoffman et al. (1974) tested the tumor-initiating activity of chrysene and of six of its methyl derivatives. The purities of the test substances were greater than 99.9%. Ten doses of 0.1 ml of acetone, each containing 0.1 mg of the test PAH (total dose, 1.0 mg), were applied on alternate days to the shaved backs of each of 20 female Swiss albino mice (Ha/ICR/Mil). Application of the "promoting" substance, tetradecanoyl phorbol acetate (TPA), was begun 10 days following the last PAH treatment. TPA, in doses of 2.5 µg in 0.1 ml of acetone, was applied three times a week for 20 weeks, for a total dose of 0.15 mg. Acetone alone (10 doses of 0.1 ml) served as a control treatment and was reported to have produced negative results in a separate group of mice, although no quantitative data were given. In the chrysene-exposed group, after 20 weeks of TPA applications, 11 of the 18 surviving mice had a total of 19 tumors. The 3-methyl- and especially the 5-methylchrysene were even stronger tumor initiators than was chrysene, with 14 of 20 surviving mice sharing a total of 26 tumors, and 17 of 20 surviving mice sharing a total of 96 tumors, respectively.

Each compound was concurrently bioassayed for "complete" carcinogenicity in mice of the same strain and sex. Solutions of 0.1 mg of each material in 0.1 ml acetone were applied under identical conditions to the shaved backs of the animals, three times weekly for the duration of the experiment. After 30 weeks, 5-methylchrysene showed high carcinogenic activity. Each of the 20 mice had tumors (with a total of 99 tumors), including 12 with 37 carcinomas, and 2 with multiple metastases to the lungs and spleen. Chrysene and the other five methylchrysenes showed no significant tumorigenic activity according to the investigators. Quantitative data for chrysene and for the control vehicle were not given.

Hecht et al. (1976) also tested 5-methylchrysene for "complete" carcinogenicity via two routes. Ten s.c. injections of 0.05 mg 5-methylchrysene in trioctanoin (0.1 ml each injection) were given to each of 25 male C-57 black mice at 2-week intervals, after which the animals were observed for 32 weeks. This exposure to 5-methylchrysene resulted in 24 fibrosarcomas in 22 animals, with the first tumor appearing at 25 weeks.
Vehicle controls had no tumors. In addition, 5-methylchrysene was compared with benzo(a)pyrene (BaP) for tumor-initiating potential and "complete" carcinogenicity.

Horton and Christian (1974) found that a 0.15% solution of chrysene in decahydronaphthalene (Decalin), a "non-carcinogenic" agent, when painted on the backs of 20 C3H male mice (60 µl doses, twice a week), induced a papilloma in 1 of 12 surviving animals after 76 weeks of treatment (Table II.E.). However, when chrysene was applied in a 0.15% solution with a 50:50 Decalin:n-dodecane mixture (a known co-carcinogenic mixture) to the backs of 20 mice, 12 and 5 out of 19 surviving animals developed papillomas and carcinomas, respectively. The median time for tumor induction was 49 weeks. The effects of n-dodecane alone were not tested. The authors classified chrysene as an "incomplete" carcinogen, i.e., one lacking the co-carcinogenic activity of a strong "complete" carcinogen.

Steiner and Falk (1951) studied the carcinogenic activity of chrysene in combination with 1,2-benzanthracene (Table II.F.). The actions of chrysene and of 1,2-benzanthracene were first tested separately. Fifty C57 black mice, 3-4 months old and in about equal numbers of males and females, were injected s.c. with 5.0 mg chrysene in 0.5 cc tricaprylin. There was a total of 4 injection-site sarcomas among the 24 surviving mice, for a tumor incidence of 16.6% compared with a 1.3% incidence of tumors in the tricaprylin controls and a zero incidence in the uninjected controls. The investigators considered the results to have shown that chrysene has a definite carcinogenic potential. A tumor incidence of 18.2% was seen after the administration of 1,2-benzanthracene, with an average induction time of 285 days. A mixed solution of the two PAH's (2.5 mg each) in tricaprylin was then administered to each of 50 mice in a like manner. The tumor incidence from this dose was 44.1%. Because this incidence was a little greater than the summation of the individual responses at full doses, the investigators concluded that the results were demonstrative of either summation or synergism.
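As a back-of-the-envelope check on the summation-versus-synergism reading, the observed mixture incidence can be compared with two naive baselines. The independent-action calculation below is not part of the original analysis; it is added here only for illustration.

```python
# Illustrative check on the Steiner and Falk (1951) mixture result.
p_chrysene = 0.166        # tumor incidence, chrysene alone
p_benzanthracene = 0.182  # tumor incidence, 1,2-benzanthracene alone
observed = 0.441          # tumor incidence, mixed dose (2.5 mg each)

naive_sum = p_chrysene + p_benzanthracene                    # 0.348
independent = 1 - (1 - p_chrysene) * (1 - p_benzanthracene)  # ~0.318

print(f"naive summation:       {naive_sum:.3f}")
print(f"independent action:    {independent:.3f}")
print(f"observed with mixture: {observed:.3f}")
# The observed 44.1% exceeds both baselines, consistent with the
# investigators' interpretation of summation or possible synergism.
```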
In a follow-up study by Steiner (1955), a mixture of the "strong" carcinogen 1,2,5,6-dibenzanthracene with chrysene, when administered s.c. to approximately fifty C57BL mice, showed no significant additivity of effects, i.e., the incidence of tumors was the same as for either agent alone, and the question was raised by the investigator as to whether there had not been actual inhibition of tumorigenesis.

Huggins et al. (1964) attempted to detect any inhibitory effects of chrysene upon the action of a more potent carcinogen. In these experiments, 2 mg i.v. injections of 7,12-dimethylbenz(a)anthracene (7,12-DMBA) were given to each of 74 rats on days 50, 53, and 56 of their lives. Mammary tumors resulted in all 74 animals (100%). A separate group of animals was given (by gastric intubation) single doses of chrysene (40 mg) followed by the same 7,12-DMBA treatment. These animals showed no significant change in the incidence of mammary tumors. However, in another group of test animals, multiple doses of chrysene (15 mg x 16 days) during a period that overlapped i.v. injections of 7,12-DMBA greatly reduced the incidence of mammary tumors (60% vs 100%).

Riegel et al. (1951) painted a mixture of chrysene (15%) and methylcholanthrene (15%) in acetone on the shaved backs of CF1 mice (0.02 ml of test solution, twice weekly, for 31 weeks) and compared its carcinogenic activity with that of methylcholanthrene (15% in acetone) alone. In the group treated only with methylcholanthrene, 23 of 28 animals (82%) developed tumors after a mean latent period of 14.1 weeks. In the group painted with the chrysene/methylcholanthrene mixture, 17 of 20 animals (85%) developed tumors after a mean latent period of 13.8 weeks. Mean latent periods were calculated from the day that painting with the mixture was begun. From these results, the investigators concluded that chrysene had no significant inhibitory effect on the carcinogenicity of methylcholanthrene. Similar applications of 0.20% chrysene in acetone to 20 mice resulted in one tumor among 16 survivors at 31 weeks.

In addition to the positive animal test results, indirect evidence for the carcinogenicity of chrysene can be derived from results with two different in vitro test systems. McCann et al. (1975) reported that both chrysene and chrysene 5,6-oxide were mutagenic in the Ames test. This test utilizes the organism Salmonella typhimurium to indicate DNA damage, and mammalian liver extracts for metabolic activation of the test chemical. Chrysene was tested on strain TA100 and resulted in 38 revertants per nanomole. The investigators noted that the relationship between the mutagenic potency of a substance on a particular strain and its overall mutagenic potential on DNA in general, and its carcinogenic potency, remained to be determined.

Huberman et al. (1972) tested the effects of chrysene (15.0 µg/ml for 4 hours) on Syrian hamster embryo cells in culture and found definite dose-dependent increases in the numbers of "malignant" transformations in exposed cell colonies. Also, it was found that chrysene 5,6-oxide (15.0 µg/ml) was even more active in producing transformations. Similar relationships were observed between other PAH's and their epoxides. For this reason, and because such epoxides had been shown to be metabolic intermediates of PAH's, the investigators postulated that the epoxides are possibly the ultimate carcinogenic forms of at least those PAH's that do not contain active methyl groups. In a follow-up to the 1972 study of Huberman et al., Huberman and Sachs (1976) tested chrysene, in the presence of enzymes required for its metabolic activation, on Chinese hamster cells in culture (2-day treatment at 1 µg/ml/day). Chrysene showed negligible mutagenic activity (9 mutants for chrysene vs 6 mutants for solvent alone, per 10,000 surviving cells). Negative results were also obtained by Marquardt et al. (1972), who tested the ability of chrysene and its epoxide (1.0-10.0 µg/ml doses for each) to transform cells derived from mouse prostate.

According to other investigators, the primary process in cancer induction (by PAH's) involves binding to a cellular component, e.g., DNA, RNA, or protein, at the K-region of the PAH (Raha et al., 1973). Typically, the K-region of a PAH is a highly reactive site. Experimentally measured bond lengths, chemical reactivities, and valence-bond and molecular orbital calculations all agree in assigning a higher electron density to the K-region than to any other bond in the PAH molecule (Herndon, 1973).
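The Ames potency quoted above (38 revertants per nanomole on strain TA100) can be related to plate doses expressed in micrograms by a unit conversion. In the sketch below, chrysene's molar mass (C18H12, about 228.3 g/mol) is a standard value not stated in this review, and the linear scaling is a naive low-dose extrapolation, not a claim about the assay's actual dose-response.

```python
# Illustrative unit conversion for the Ames result above; not from the review.
CHRYSENE_MW = 228.3  # g/mol for C18H12 (standard value, assumed here)

def ug_to_nmol(mass_ug):
    """Micrograms of chrysene -> nanomoles: ug / (g/mol) * 1000."""
    return mass_ug / CHRYSENE_MW * 1000.0

def expected_revertants(mass_ug, potency_per_nmol=38.0):
    """Naive linear extrapolation of the reported 38 revertants/nmol."""
    return ug_to_nmol(mass_ug) * potency_per_nmol

print(f"{ug_to_nmol(10.0):.1f} nmol in a 10 ug plate dose")       # ~43.8 nmol
print(f"~{expected_revertants(10.0):.0f} revertants at 38/nmol")  # ~1664
```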
# C. Toxic Effects in Humans

No reports concerning the toxicity of chrysene in humans were located. Considerations for any toxic effects of chrysene in humans would primarily be concerned with inhalation of particulate matter, since chrysene is encountered most frequently as an airborne particulate or as an adsorbate on airborne particulates. The occurrence of any biological effect would be closely related to the physical characteristics of the particle. Particle size predominantly determines the extent to which penetration of (and retention by) the tracheobronchial tree will occur, with maximum retention falling within an aerodynamic particle size range of 0.5 to 2.0 microns (Kotin and Falk, 1964). Ingestion and skin absorption must also be considered potential routes of exposure. Further, it should be noted that small amounts of chrysene-containing particles might be ingested following their inhalation, as particles and mucus reflux from the bronchial tree (Hill et al., 1972).

No epidemiologic studies concerning the carcinogenicity of chrysene per se were located in the literature. The carcinogenic potential of chrysene in humans must therefore be estimated on the basis of animal studies or be deduced from experiences with mixed exposure to chrysene and other PAH's.

# D. Summary of Carcinogenicity Data

While it is clear that exposure to PAH's is associated with increased risk for both lung and skin cancer in humans, the relative contribution of any single PAH, such as chrysene, to these effects cannot be determined from the available data.

Certain generalizations can be made regarding the tumorigenic potential of chrysene. For instance, its tumorigenic potential seems to vary in a dose-dependent manner. Large doses and repeated applications, as well as lengthy induction periods, are usually required to obtain significant incidences of tumorigenesis, prompting many investigators to classify chrysene as a "weak" carcinogen. Further, it seems that chrysene may have a greater effect when acting in concert with other, more potent agents, i.e., as a tumor initiator, a synergist, or an "incomplete" carcinogen, enhancing tumor formation but lacking the capacity for complete carcinogenesis. In one study, chrysene appeared to have inhibited the activity of a more potent carcinogen, but other similar investigations have produced evidence to the contrary. Thus, no generalization can be made regarding chrysene's possible tumor-"inhibiting" properties.

Tests on mammalian cells in culture also have produced conflicting results: some investigations have shown chrysene to produce malignant transformations in a dose-dependent fashion, while others have shown it to have a negligible effect. In microbial assays, however, chrysene has been mutagenic.

Chrysene's methyl derivatives and certain of their epoxides have been shown to possess carcinogenic potential. The 5,6-oxide of chrysene (an epoxide) was shown to be less active than chrysene in in vivo tests, but in in vitro tests on mammalian cell cultures and in microbial assays, the converse was seen. Because such epoxides have been shown to react covalently with cellular macromolecules in vitro, they have been postulated to be likely candidates for the role of the ultimate carcinogenic derivatives of PAH's. Several possible explanations have been suggested for the lack of epoxide activity in vivo as opposed to in vitro. For instance, because of solubility and other factors the epoxide may be less able to enter the cell (Boyland and Sims, 1967), or possibly the high chemical reactivity of such epoxides could cause them to react nonspecifically in tissues (with extracellular keratin and other molecules) and hence be depleted before reaching and reacting with intracellular target macromolecules (Huberman et al., 1972). 5-Methylchrysene has been shown to be both a strong "complete" carcinogen and a more active tumor "initiator" than the parent chrysene.
One possible explanation for the carcinogenic potential of this compound may lie in its metabolic activation by the formation of a carbonium ion on the methyl group (Marquardt et al., 1972). Thus, there exists certain evidence indicating that chrysene and its derivatives have carcinogenic potential. Additional research is needed to elucidate the nature and extent of chrysene's carcinogenic activity and the mechanisms whereby such effects may be exerted. No experimental studies were found in the literature concerning the inhalation of chrysene as an adsorbate on airborne particulate matter, i.e., the mode of entry that represents one of the most probable routes of occupational exposure. Further, and most importantly, it should be noted that early studies which produced negative results for chrysene quite likely used inadequate numbers of animals with insufficient periods of observation (Steiner and Falk, 1951).

# H. Recordkeeping and Availability of Records

Employers should keep accurate records of (1) all measurements taken to determine employee exposure to chrysene, (2) measurements demonstrating the effectiveness of mechanical ventilation, and (3) all data obtained from medical examinations which are pertinent to chrysene exposure. Pertinent records from medical examinations should be maintained for at least 30 years after the worker's employment has ended.

# V. SAMPLING AND ANALYTICAL METHODS

A workplace environmental limit for chrysene is not recommended at this time. However, in the event that NIOSH should establish such a limit (either for chrysene or for the general class of PAH's) in the future, a validated sampling and analytical procedure would be required. NIOSH has not validated a sampling method for chrysene. Chrysene is normally encountered as an airborne pollutant (usually adsorbed on airborne particulate matter, but sometimes as a vapor) along with numerous other PAH's and their derivatives. When chrysene is encountered as a solid or is adsorbed on particulate matter, a sampling method similar to the one described in the occupational standard for coke oven emissions (29 CFR 1910.1029) would be appropriate.

NIOSH has not validated a specific analytical procedure for chrysene. The choice of any particular method depends upon the sample source, the amount collected, time constraints, financial constraints, or personal preferences. The occupational standard for coke oven emissions (29 CFR 1910.1029) utilizes the benzene-soluble fraction of total particulate matter (BSFTPM) as an indicator of worker exposure to PAH's. This procedure, however, does not identify individual PAH's. In order to identify and quantify individual PAH's, they must first be separated from any interfering compounds with which they occur. Many reviews concerning the state of the art of PAH analysis have appeared in the literature, e.g., Sawicki, 1964; Sawicki et al., 1967; Rutzinger et al., 1973; Zander, 1975. In addition, Appendix A contains a list of some analytical methods which have been used to determine the PAH content of various compounds.
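To make the BSFTPM approach concrete, the sketch below computes a BSFTPM-style exposure index from filter-extraction data. All numbers and names are hypothetical; this is an illustration of the underlying arithmetic only, not the regulatory procedure in 29 CFR 1910.1029 itself.

```python
# Hypothetical illustration of a BSFTPM-style exposure index; not the
# regulatory procedure itself.

def sampled_air_volume_cu_m(flow_l_per_min, minutes):
    """Air volume drawn through the filter, converted from liters to cu m."""
    return flow_l_per_min * minutes / 1000.0

def bsftpm_mg_per_cu_m(benzene_soluble_mg, air_volume_cu_m):
    """Benzene-soluble mass recovered from the filter per cu m of air sampled."""
    return benzene_soluble_mg / air_volume_cu_m

volume = sampled_air_volume_cu_m(flow_l_per_min=2.0, minutes=480)  # full shift
print(f"{volume:.2f} cu m of air sampled")                         # 0.96
print(f"{bsftpm_mg_per_cu_m(0.14, volume):.3f} mg/cu m BSFTPM")    # ~0.146
```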
# v SUMMARY AND CONCLUSIONS Exposure to chrysene as an isolated chemical presently occurs only in specific occupations, i.e., chrysene synthesis, laboratory experimentation, and possibly in the synthesis of anthraquinone vat dyes. However, chrysene and its derivatives, e.g., certain methylchrysene isomers, along with hundreds of other polycyclic aromatic hydrocarbons (PAH's), are formed during the pyrolysis of organic matter and can occur in any occupational environment where this process takes place. While it has been suggested that 700 C is the optimum pyrolytic temperature for PAH-formation, other factors such as the chemical and physical nature of the pyrolyzed material, the presence or absence of oxygen, and the period of tine during which the compound is pyrolyzed also determine the amounts and mixtures of various PAH's formed. Chrysene has been detected in many materials which result from, or are used in,industrial processes. Chrysene also has been detected in the ambient atmosphere and in certain food products. A number of experimental animal studies have been conducted for the purpose of assessing the carcinogenicity of chrysene, with varying results. When three 100(u)g doses of chrysene were administered by subcutaneous (s.c.) injection to newborn mice, an increased incidence of liver tumors was observed. The increase, however, was not statistically significant. Marginal results were obtained when mice were injected (s.c.) with 1 mg chrysene in arachis oil, once a week, for 10 weeks. This treatment resulted in the development of injection-site tumors in 2 of 20 animals. Tumors also developed after chrysene (1% solution in acetone) was applied to the shaved backs of mice. Chrysene appears to exert a stronger carcinogenic effect when used as a tumor initiator than when used as a "complete" carcinogen, e.g., when the shaved backs of each of 20 mice were painted with ten doses of 0.1 mg chrysene (each in 0.1 ml acetone), followed by treatment with tetradecanoyl phorbol acetate (TPA) ,11 of the 18 surviving mice had tumors (total of 19 tumors) after 20 weeks. Similar results were obtained using chrysene (total dose of 4.4 micromoles) either as an initiator with TPA, or when chrysene (1.0 mg in 0.4 ml acetone) was painted on the shaved backs of mice prior to treatment with croton resin. Chrysene also has produced positive results in mammalian cell transformation tests and in the "Ames" test for mutagenicity. Certain methylchrysene isomers have also been shown to possess carcinogenic potential: 5-methylchrysene has been reported to be a strong "complete" carcinogen as well as a stronger tumor initiator than chrysene. Epidemiologic evidence for the carcinogenicity of chrysene as an isolated chemical was not located. However, certain industry-related materials which contain many PAH's, including chrysene and its methyl derivatives, Chrysene may be generated inadvertently in numerous industrial processes which involve the pyrolysis of organic matter. Further, conditions which are favorable for the formation of chrysene also are likely to be favorable for the formation of methylchrysenes and numerous other PAH's whose carcinogenic potencies range from greater than to less than that of chrysene. While the aforementioned control measures may be applied to the general class of PAH's, NIOSH recognizes that under certain circumstances of PAH-exposure, they will be neither appropriate nor feasible. 
While no workplace environmental limit for chrysene is recommended at this time, information on analytical methodology is included in this review. Most methods for the identification and quantitation of chrysene involve various forms of chromatography and spectrometry. # INTRODUCTION The carcinogenic potential of certain polycyclic aromatic hydrocarbons (PAH's), e.g., benz(a)pyrene and benz(a)anthracene, has been well documented (IARC Monograph, Volume 3, 1973;Dipple, 1976). However, comparatively little information exists upon which to base an evaluation of the carcinogenic potential of chrysene. Further, isolated exposure to chrysene occurs only in specific operations, i.e., chrysene manufacture, laboratory experimentation, and possibly in the synthesis of anthraquinone vat dyes. The number of workers thus exposed is small relative to the number of workers exposed to chrysene ad-mixed with other PAH's in industrial processes which involve the pyrolysis of organic matter. This notwithstanding, sufficient quantities of chrysene may now be in use to elicit concern over any biological effects exerted by this chemical. Thus, an integration and evaluation of all experimental data pertinent to the carcinogenic potential of chrysene appears warranted. There is now no federal occupational health standard for chrysene, or # II. COMMON OPERATIONS, USE, AND OCCURRENCE A'. Workplace Environment PAH's are perhaps the most ubiquitous of the carcinogenic agents that have been detected in the atmosphere. Chrysene, along with other PAH's, occurs most frequently as an adsórbate on airborne particulates resulting from the pyrolysis of carbonaceous materials (Filatova et al., 1972). Chrysene is sometimes encountered as a vapor. The yield of chrysene can be affected by the chemical and physical nature of the pyrolyzed compound, the pyrolysis temperature, the presence or absence of adequate oxygen for complete combustion, and the period of time during which the compound is exposed to heat. Experimental determinations have shown the optimal temperature for PAH formation to be around 700 C (Masuda et al., 1967). From an occupational health standpoint it is important to identify the sources of these compounds. Table I lists some of these sources for chrysene. Cleary, 1963Qazi and Nau, 1975Cleary, 1963Rhee and Bratzler, 1968 Lijinsky et al., 1963Lijinsky and Raha, 1961Filatova et al., 1972Tye et al., 1966Hendrickson et al., 1963Falk et al., 1951 *Unless otherwise stated, the units (ppm) refer to parts of chrysene per million parts of source material sampled, as a weight/weight ratio. # B. Occupational Areas of Concern At present, it is believed that exposure to chrysene as an isolated chemical occurs only in specific occupations, i.e., in chrysene manufacture, laboratory experimentation, and perhaps in the synthesis of anthraauinone vat dyes. It was Holbro (1953) who suggested the use of chrysene as a starting material in the synthesis of dyestuffs, particularly of anthraquinone vat dyes, to which the results of his research showed chrysene imparted favorable properties regarding "affinity, shade, and fastness". While the scope of this report is limited primarily to chrysene as an isolated chemical, its occurrence with other PAH's in a wide range of occupational environments cannot be disregarded. 
The following list of occupational areas, where worker exposure to chrysene and other PAH's is likely to occur, is separated into stationarypoint and non-stationary point source categories: the former refers to inplace operations, i.e., where engineering controls could be expected to substantially reduce employee exposure, and the latter refers to outdoor or mobile operations where the application of engineering controls may be more difficult. In the latter case, however, it would be expected that natural factors (wind, etc.) would contribute to reductions in worker exposure. While this list is based on the occurrence of chrysene as it has been reported in the scientific literature, many other occupational exposure sources undoubtedly exist. # C. General Environment Chrysene is found in the general environment primarily as a product of the pyrolysis of fossil fuels for energy production and the operation of diesel-and gasoline-fueled internal combustion engines. Chrysene has been measured in the atmospheres of several major cities. In Sydney, Australia, a 12-month survey of the ambient PAH content of suspended particulate matter showed chrysene at concentrations ranging from 0.0002 to 0.0065 (u)g/cu m of air (Cleary and Sullivan, 1965). Chrysene was found in the atmosphere of Buffalo, New York, at concentrations of 0.082-0.116 (u)g/cu m of air, and in composite samples from the air of urban Cincinnati, Ohio at concentrations of 0.0032 and 0.006 (u)g/cu m of air CCcnlee et al., 1967). Studies have been conducted to determine the contribution of the gasoline engine to atmospheric levels of PAH's. Gasoline engine exhaust tar was found to contain 175 ppm chrysene (Hoffman and Wynder, 1962). In another investigation, the PAH emissions from diesel-and gasoline-powered vehicles were compared (Sullivan and Cleary, 1964), and chrysene was found in a diesel-trafficked area at a concentration of 0.0032 (u)g/cu m of air; in the gasoline-trafficked area, chrysene was found at a concentration of 0.0034 (u)g/cu m of air. In addition to the finding of chrysene in ambient air, its presence in soil and water has been substantiated. Although quantitative data are few, chrysene has been identified in soils in rural Massachusetts and Connecticut (Blumer, 1961), and in wells, rivers, lakes, and soils in Germany (Borneff, 1964;Borneff and Kunte, 1965). One investigator suggested that the occurrence of chrysene and other PAH's in rural soil is not due to fallout from polluted air but, rather, is the result of the natural pyrolytic transformation of plant organic matter to peat and lignite; or that alternatively, they occur as metabolites of organisms which reside in the soils (Blumer, 1961). Chrysene also has been detected in certain food products: roasted coffee (Kuratsune and Hueper, 1960), spinach and tomatoes (Grimmer and Hildebrand, 1965), baker's yeast (Grimmer and Wilhelm, 1969), "liquid smoke" (Lijinsky and Shubik, 1965), smoked ham and barbecued beef (Malanoski et al., 1968), charcoal-broiled steak and chicken (Lijinsky and Ross, 1967), and electric-broiled fish (Masuda et al., 1966). Of particular importance is the occurrence of chrysene in cigarettesmoke condensate. Chrysene has been found in cigarette-smoke condensate at a concentration of 0.06 (u)g per 100 cigarettes (Van Duuren, 1958). Cigarette smoke may add significantly to the total exposure to chrysene from both the general environment and work environment. # III. BIOLOGICAL EFFECTS OF EXPOSURE A. Toxic Effects in Animals Schmid et al. 
(1967) used mortality and body and organ weight changes in mice as measures of chrysene toxicity. Each of 10 male AKR and 10 male C57BL/6 mice was given a single i.p. injection of 7.5 mg chrysene dissolved in 1 ml of sesame oil. All animals were alive after 20 days of observation at which time they were killed. Autopsies revealed that none of the mice had experienced any significant change in body, spleen, or thymus weight. The investigators concluded that chrysene was not toxic as reflected by the aforementioned parameters and experimental conditions. Gershbein (1958) tested the effects of chrysene on liver regeneration in partially hepatectomized adult male Sprague-Dawley rats. Single s.c. injections of 12.0 mg chrysene in peanut oil were given daily for the 11day test period to each rat (320 mg/kg body weight). This treatment caused no significant change in the extent of liver regeneration from that in the corresponding controls. Bernheim et al. (1953) found that a 0.3% solution of chrysene in benzene, when painted on the skins of white mice, inhibited fatty acid peroxide-induced thiobarM ■ -».ric acid-color development on the epidermal side, but did not in the subcutaneous fat. The investigators attributed these results to inhibition of fatty acid oxidation. They noted that oxidized fatty acids are more effective inhibitors of certain enzymes and of bacterial growth than are the corresponding unoxidized acids, so that a carcinogen, by inhibiting peroxide formation, might produce conditions favorable for growth. They also suggested that inhibition of fatty acid oxidation may be a necessary, but certainly not a sufficient, condition for tumorigenesis. # B. Carcinogenicity in Animals A number of experimental animal studies of the carcinogenic potential of chrysene have yielded varying results. Most of these studies have been conducted with mice, and various investigators have described chrysene as a weak carcinogen (Wynder and Hoffman, 1959); a cancer "initiator" (Hoffman et al., 1974); an "incomplete" carcinogen (Horton and Christian, 1974); and as a cancer "inhibitor" (Huggins et al., 1964 weeks, 9 had liver tumors (1 of which was multiple) and 4 mice had lung tumors (1 of which was multiple). The investigators concluded that chrysene seemed to be a more active tumorigen than its epoxide, and that it increased the incidence of liver tumors above that in the controls, while the epoxide did not do so, and that neither chrysene nor the epoxide affected the incidence of lung tumors. The liver tumors ranged in appearance from well differentiated liver cell masses that were difficult to distinguish from normal liver tissues to those exhibiting pleomorphism, invasion, and metastasis. The increased incidence of liver tumors after exposure to chrysene, however, was not statistically significant (P=0.05). Boyland and Sims (1967) conducted a similar experiment with chrysene and chrysene 5,6-oxide (Table II.B.). C57 black mice, 120 days old, were injected s.c. with 1.0 ml suspensions of arachis oil that contained 1.0 mg of either chrysene or chrysene 5,6-oxide, once/week, for 10 weeks. A third group, given only arachis oil, served as the control. After an observation period ranging from 60 to 80 weeks, 2 of the 20 chrysene-treated mice surviving at the time of observation of the first tumor (initial number of animals treated was not stated) had developed injection-site tumors. 
A comparable number of animals (3 of 20) developed injection-site tumors from the epoxide but the time-to-tumor was longer by an average of 8.6 days. Because of the longer latency period for the epoxide, the investigators concluded that the parent PAH was a more active tumorigen. None of the control animals had tumors. Of the tumors induced by the active compounds, about 80% were spindle-cell sarcomas, about 12% were pleomorphic sarcomas and about 8% were squamous cell carcinomas. Wynder and Hoffman (1959) applied a 1% solution of chrysene in acetone to the backs of female Swiss (Millerton) mice, three times weekly until the last animal died (sometime during the 13th month of treatment). Nine of the original 20 animals had developed papillomas and 8 had developed carcinomas, with the first tumor appearing after 8 months of treatment. From the original data it is not possible to determine with exactitude the number of animals with both papillomas and carcinomas. A decrease in lifespan was also noted; no controls were included in the experiment. It is unlikely, however, that such a high incidence of carcinomas would occur spontaneously in this strain of mouse. Several investigators have shown chrysene to act as a cancer "initiator." Hoffman et al. (1974) tested the tumor-initiating activity of chrysene and of six of its methyl derivatives. The purities of the test substances were greater than 99.9 %. Ten doses of 0.1 ml of acetone each containing 0.1 mg of the test PAH (total dose, 1.0 mg) were applied on alternate days to the shaved backs of each of 20 female Swiss albino mice (Ha/ICR/Mil). Application of the "promoting" substance, tetradecanoyl phorbol acetate (TPA) was begun 10 days following the last PAH-treatment. TPA, in doses of 2.5 (u)g in 0.1 ml of acetone, was applied three times a week for 20 weeks, for total doses of 0.15 mg. Acetone alone (10 doses of 0.1 ml) served as a control treatment and was reported to have produced negative results in a separate group of mice, although no quantitative data were given. In the chrysene-exposed group, after 20 weeks of TPA applications, 11 of the 18 surviving mice had a total of 19 tumors. The 3methyl-and especially the 5-methylchrysene were even stronger tumor initiators than was chrysene, with 14 of 20 surviving mice sharing a total of 26 tumors , and 17 of 20 surviving mice sharing a total of 96 tumors, respectively. Each compound was concurrently bioassayed for "complete" carcinogenicity in mice of the same strain and sex. Solutions of 0.1 mg of each material in 0.1 ml acetone were applied under identical conditions to the shaved backs of the animals, three times weekly for the duration of the experiment. After 30 weeks, 5-methylchrysene showed high carcinogenic activity. Each of the 20 mice had tumors (with a total of 99 tumors), including 12 with 37 carcinomas, and 2 with multiple métastasés to the lungs and spleen. Chrysene and the other five methylchrysenes showed no significant tumorigenic activity according to the investigators. Quantitative data for chrysene and for the control vehicle were not given. Hecht et al. (1976) also tested 5-methylchrysene for "complete" carcinogenicity via two routes. Ten s.c. injections of 0.05 mg 5methylchrysene in trioctanoin (0,1 ml each injection) were given to each of 25 male C-57 black mice at 2-week intervals, after which the animals were observed for 32 weeks. This exposure to 5-methylchrysene resulted in 24 fibrosarcomas in 22 animals, with the first tumor appearing at 25 weeks. 
Vehicle controls had no tumors. In addition, 5-methylchrysene was compared with benzo(a)pyrene (BaP) for tumor-initiating potential and "complete" (1974) found that a 0.15% solution of chrysene in decahydronaphthalene (Decalin), a "non-carcinogenic" agent, when painted on the backs of 20 C3H male mice (60 (u)l doses, 2 x week), induced a papilloma in 1 of 12 surviving animals after 76 weeks of treatment (Table II.E.). However, when chrysene was applied in a 0.15% solution with a 50:50 Decalin: n-dodecane mixture (a known co-carcinogenic mixture) to the backs of 20 mice, 12 and 5 out of 19 surviving animals developed papillomas and carcinomas, respectively. The median time for tumor induction was 49 weeks. The effects of n-dodecane alone were not tested for. The authors classified chrysene as an "incomplete" carcinogen, i.e., one lacking the co-carcinogenic activity of a strong "complete" carcinogen. Steiner and Falk (1951) studied the carcinogenic activity of chrysene in combination with 1,2-benzanthracene (Table II.F.). The actions of chrysene and of 1,2-benzanthracene were first tested separately. Fifty C57 black mice, 3-4 months old and in about equal numbers of males and females, were injected s.c. with 5.0 mg chrysene in 0.5 cc tricaprylin. There was a total of 4 injection-site sarcomas among the 24 surviving mice for a tumor incidence of 16.6 % compared with a 1.3 % incidence of tumors in the tricaprylin controls and a zero incidence in the uninjected controls. The investigators considered the results to have shown that chrysene has a definite carcinogenic potential. A tumor incidence of 18.2 % was seen after the administration of 1,2-benzanthracene with the average induction time of 285 days. A mixed solution of the two PAH's (2.5 mg each) in tricaprylin was then administered to each of 50 mice in a like manner. The tumor incidence from this dose was 44.1%. Because this incidence was a little greater than the summation of the individual responses at full doses, the investigators concluded that the results were demonstrative of either summation or synergism. In a follow-up study by Steiner (1955) , a mixture of the "strong" carcinogen 1,2,5,6-dibenzanthrene with chrysene, when administered s.c. to approximately fifty C57BL mice, showed no significant additivity of effects, i.e., the incidence of tumors was the same as for either agent alone, and the question was raised by the investigator as to whether there had not been actual inhibition of tumorigenesis. Huggins et al. (1964) attempted to detect any inhibitory effects of chrysene upon the action of a more potent carcinogen. In these experiments, 2 mg i.v. injections of 7,12-dimethylbenz(a)anthracene (7,12-DMBA) were given to each of 74 rats on days 50, 53, and 56 of their lives. Mammary tumors resulted in all 74 animals (100%) . A separate group of animals was given (gastric intubation) single doses of chrysene (40 mg) followed by the same 7,12-DMBA treatment. These animals showed no significant change in the incidence of mammary tumors. However, in another group of test animals, multiple doses of chrysene (15 mg x 16 days) during a period that overlapped i.v. injections of 7,12-DMBA greatly reduced the incidence of mammary tumors (60% vs 100%) . Riegel et al. (1951), painted a mixture of chrysene (15%) and methylcholanthrene (15%) in acetone on the shaved backs of CF1 mice (0.02 ml of test solution, twice weekly, for 31 weeks) and compared its carcinogenic activity with that of methylcholanthrene (15% in acetone) alone. 
In a follow-up study by Steiner (1955), a mixture of the "strong" carcinogen 1,2,5,6-dibenzanthracene with chrysene, when administered s.c. to approximately fifty C57BL mice, showed no significant additivity of effects, i.e., the incidence of tumors was the same as for either agent alone, and the investigator raised the question of whether there had been actual inhibition of tumorigenesis.

Huggins et al. (1964) attempted to detect any inhibitory effects of chrysene upon the action of a more potent carcinogen. In these experiments, 2 mg i.v. injections of 7,12-dimethylbenz(a)anthracene (7,12-DMBA) were given to each of 74 rats on days 50, 53, and 56 of their lives. Mammary tumors resulted in all 74 animals (100%). A separate group of animals was given (by gastric intubation) single doses of chrysene (40 mg) followed by the same 7,12-DMBA treatment. These animals showed no significant change in the incidence of mammary tumors. However, in another group of test animals, multiple doses of chrysene (15 mg x 16 days) during a period that overlapped the i.v. injections of 7,12-DMBA greatly reduced the incidence of mammary tumors (60% vs 100%).

Riegel et al. (1951) painted a mixture of chrysene (15%) and methylcholanthrene (15%) in acetone on the shaved backs of CF1 mice (0.02 ml of test solution, twice weekly, for 31 weeks) and compared its carcinogenic activity with that of methylcholanthrene (15% in acetone) alone. In the group treated only with methylcholanthrene, 23 of 28 animals (82%) developed tumors after a mean latent period of 14.1 weeks. In the group painted with the chrysene/methylcholanthrene mixture, 17 of 20 animals (85%) developed tumors after a mean latent period of 13.8 weeks. Mean latent periods were calculated from the day that painting with the mixture was begun. From these results, the investigators concluded that chrysene had no significant inhibitory effect on the carcinogenicity of methylcholanthrene. Similar applications of 0.20% chrysene in acetone to 20 mice resulted in one tumor among 16 survivors at 31 weeks.

In addition to the positive animal test results, indirect evidence for the carcinogenicity of chrysene can be derived from results with two different in vitro test systems. McCann et al. (1975) reported that both chrysene and chrysene-5,6-oxide were mutagenic in the Ames test. This test utilizes the organism Salmonella typhimurium to indicate DNA damage and mammalian liver extracts for metabolic activation of the test chemical. Chrysene was tested on strain TA100 and produced 38 revertants per nanomole. The investigators noted that the relationship of the mutagenic potency of a substance on a particular strain to its overall mutagenic potential on DNA in general, and to carcinogenic potency, remained to be determined. Huberman et al. (1972) tested the effects of chrysene (15.0 µg/ml for 4 hours) on Syrian hamster embryo cells in culture and found definite dose-dependent increases in the numbers of "malignant" transformations in exposed cell colonies. It was also found that chrysene 5,6-oxide (15.0 µg/ml) was even more active in producing transformations. Similar relationships were observed between other PAH's and their epoxides. For this reason, and because such epoxides had been shown to be metabolic intermediates of PAH's, the investigators postulated that the epoxides are possibly the ultimate carcinogenic forms of at least those PAH's that do not contain active methyl groups. In a follow-up to the 1972 study of Huberman et al., Huberman and Sachs (1976) tested chrysene, in the presence of enzymes required for its metabolic activation, on Chinese hamster cells in culture (2-day treatment at 1 µg/ml/day). Chrysene showed negligible mutagenic activity (9 mutants for chrysene vs 6 mutants for solvent alone, per 10,000 surviving cells). Negative results were also obtained by Marquardt et al. (1972), who tested the ability of chrysene and its epoxide (1.0-10.0 µg/ml doses for each) to transform cells derived from mouse prostate. According to other investigators, the primary process in cancer induction by PAH's involves binding to a cellular component, e.g., DNA, RNA, or protein, at the K-region of the PAH (Raha et al., 1973). Typically, the K-region of a PAH is a highly reactive site. Experimentally measured bond lengths, chemical reactivities, and valence-bond and molecular orbital calculations all agree in assigning a higher electron density to the K-region than to any other bond in the PAH molecule (Herndon, 1973).

# C. Toxic Effects in Humans

No reports concerning the toxicity of chrysene in humans were located. Consideration of any toxic effects of chrysene in humans would primarily concern inhalation of particulate matter, since chrysene is encountered most frequently as an airborne particulate or as an adsorbate on airborne particulates.
The occurrence of any biological effect would be closely related to the physical characteristics of the particle. Particle size predominantly determines the extent to which penetration of (and retention by) the tracheobronchial tree will occur, with maximum retention falling within an aerodynamic particle size range of 0.5 to 2.0 microns (Kotin and Falk, 1964). Ingestion and skin absorption must also be considered potential routes of exposure. Further, small amounts of chrysene-containing particles might be swallowed after inhalation, as particles and mucus are cleared from the bronchial tree (Hill et al., 1972). No epidemiologic studies concerning the carcinogenicity of chrysene per se were located in the literature. The carcinogenic potential of chrysene in humans must therefore be estimated on the basis of animal studies or be deduced from experiences with mixed exposure to chrysene and other PAH's.

# D. Summary of Carcinogenicity Data

While it is clear that exposure to PAH's is associated with increased risk for both lung and skin cancer in humans, the relative contribution of any single PAH, such as chrysene, cannot be determined from the available data. Certain generalizations can be made regarding the tumorigenic potential of chrysene. For instance, its tumorigenic potential seems to vary in a dose-dependent manner. Large doses and repeated applications, as well as lengthy induction periods, are usually required to obtain significant incidences of tumorigenesis, prompting many investigators to classify chrysene as a "weak" carcinogen. Further, it seems that chrysene may have a greater effect when acting in concert with other, more potent agents, i.e., as a tumor initiator, a synergist, or an "incomplete" carcinogen, enhancing tumor formation but lacking the capacity for complete carcinogenesis. In one study, chrysene appeared to have inhibited the activity of a more potent carcinogen, but other similar investigations produced evidence to the contrary. Thus, no generalization can be made regarding chrysene's possible tumor-"inhibiting" properties. Tests on mammalian cells in culture have also produced conflicting results: some investigations have shown chrysene to produce malignant transformations in a dose-dependent fashion, while others have shown it to have a negligible effect. In microbial assays, however, chrysene has been mutagenic.

Chrysene's methyl derivatives and certain of their epoxides have been shown to possess carcinogenic potential. The 5,6-oxide of chrysene (an epoxide) was shown to be less active than chrysene in in vivo tests, but in in vitro tests on mammalian cell cultures and in microbial assays the converse was seen. Because such epoxides have been shown to react covalently with cellular macromolecules in vitro, they have been postulated to be likely candidates for the role of the ultimate carcinogenic derivatives of PAH's. Several possible explanations have been suggested for the lack of epoxide activity in vivo as opposed to in vitro. For instance, because of solubility and other factors the epoxide may be less able to enter the cell (Boyland and Sims, 1967), or possibly the high chemical reactivity of such epoxides could cause them to react nonspecifically in tissues (with extracellular keratin and other molecules) and hence be depleted before reaching and reacting with intracellular target macromolecules (Huberman et al., 1972). 5-Methylchrysene has been shown to be both a strong "complete" carcinogen and a more active tumor "initiator" than the parent chrysene.
One possible explanation for the carcinogenic potential of this compound may lie in its metabolic activation by the formation of a carbonium ion on the methyl group (Marquardt et al., 1972). Thus, there exists evidence indicating that chrysene and its derivatives have carcinogenic potential. Additional research is needed to elucidate the nature and extent of chrysene's carcinogenic activity and the mechanisms whereby such effects may be exerted. No experimental studies were found in the literature concerning the inhalation of chrysene as an adsorbate on airborne particulate matter, i.e., the mode of entry that represents one of the most probable routes of occupational exposure. Further, and most importantly, it should be noted that early studies which produced negative results for chrysene quite likely used inadequate numbers of animals with insufficient periods of observation (Steiner and Falk, 1951).

# H. Recordkeeping and Availability of Records

Employers should keep accurate records of (1) all measurements taken to determine employee exposure to chrysene, (2) measurements demonstrating the effectiveness of mechanical ventilation, and (3) all data obtained from medical examinations that are pertinent to chrysene exposure. Pertinent records from medical examinations should be maintained for at least 30 years after the worker's employment has ended.

# V. SAMPLING AND ANALYTICAL METHODS

A workplace environmental limit for chrysene is not recommended at this time. However, in the event that NIOSH should establish such a limit (either for chrysene or for the general class of PAH's) in the future, a validated sampling and analytical procedure would be required. NIOSH has not validated a sampling method for chrysene. Chrysene is normally encountered as an airborne pollutant (usually adsorbed on airborne particulate matter, but sometimes as a vapor) along with numerous other PAH's and their derivatives. When chrysene is encountered as a solid or is adsorbed on particulate matter, a sampling method similar to the one described in the occupational standard for coke oven emissions (29 CFR 1910.1029) would be appropriate.

NIOSH has not validated a specific analytical procedure for chrysene. The choice of any particular method depends upon the sample source, the amount collected, time constraints, financial constraints, or personal preferences. The occupational standard for coke oven emissions (29 CFR 1910.1029) utilizes the benzene-soluble fraction of total particulate matter (BSFTPM) as an indicator of worker exposure to PAH's. This procedure, however, does not identify individual PAH's. In order to identify and quantify individual PAH's, they must first be separated from any interfering compounds with which they occur. Many reviews concerning the state of the art of PAH analysis have appeared in the literature, e.g., Sawicki, 1964; Sawicki et al., 1967; Hutzinger et al., 1973; Zander, 1975. In addition, Appendix A contains a list of some analytical methods which have been used to determine the PAH content of various compounds.
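Although no limit is recommended here, the coke-oven-emissions approach cited above reduces to straightforward gravimetric arithmetic once a filter sample is in hand. A minimal sketch, assuming hypothetical pump and filter values (none of these numbers appear in the text):

```python
# Benzene-soluble fraction of total particulate matter (BSFTPM) from a
# personal filter sample, per the general approach of 29 CFR 1910.1029.
# All sample values below are hypothetical, for illustration only.

def air_volume_m3(flow_l_per_min: float, minutes: float) -> float:
    """Sampled air volume in cubic meters (1 m^3 = 1000 L)."""
    return flow_l_per_min * minutes / 1000.0

def bsftpm_mg_per_m3(benzene_soluble_mg: float, volume_m3: float) -> float:
    """Benzene-soluble particulate concentration over the sampling period."""
    return benzene_soluble_mg / volume_m3

volume = air_volume_m3(flow_l_per_min=2.0, minutes=480)  # full-shift sample
residue_mg = 0.12  # hypothetical mass of benzene-extractable filter residue

print(f"sampled volume: {volume:.2f} m^3")                        # 0.96 m^3
print(f"BSFTPM: {bsftpm_mg_per_m3(residue_mg, volume):.3f} mg/m^3")  # 0.125
```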
The Advisory Committee on Immunization Practices (ACIP) annually publishes an immunization schedule for persons aged 0 through 18 years that summarizes recommendations for currently licensed vaccines for children aged 18 years and younger and includes recommendations in effect as of December 15, 2009. Changes to the previous schedule (1) include the following: The statement concerning use of combination vaccines in the introductory paragraph has been changed to reflect the revised ACIP recommendation on this issue (2). The last dose in the inactivated poliovirus vaccine series is now recommended to be administered on or after the fourth birthday and at least 6 months after the previous dose. In addition, if 4 doses are administered before age 4 years, an additional (fifth) dose should be administered at age 4 through 6 years (3). The hepatitis A footnote has been revised to allow vaccination of children older than 23 months for whom immunity against hepatitis A is desired. Revaccination with meningococcal conjugate vaccine is now recommended for children who remain at increased risk for meningococcal disease after 3 years (if the first dose was administered at age 2 through 6 years), or after 5 years (if the first dose was administered at age 7 years or older) (4). Footnotes for human papillomavirus (HPV) vaccine have been modified to include 1) the availability of and recommendations for bivalent HPV vaccine, and 2) a permissive recommendation for administration of quadrivalent HPV vaccine to males aged 9 through 18 years to reduce the likelihood of acquiring genital warts (5).

4. CDC. Updated recommendation from the Advisory Committee on Immunization Practices (ACIP) for revaccination of persons at prolonged increased risk for meningococcal disease. MMWR 2009;58:1042--3.
5. CDC. ACIP provisional recommendations for HPV vaccine. Atlanta, GA: US Department of Health and Human Services, CDC; 2009. Available at http://www.cdc.gov/vaccines/recs/provisional/downloads/hpv-vac-dec2009-508.pdf. Accessed December 23, 2009.
6. American Academy of Pediatrics. Active and passive immunization. In: Pickering LK, Baker CJ, Kimberlin DW, Long SS, eds. 2009 red book: report of the Committee on Infectious Diseases. 28th ed. Elk Grove Village, IL: American Academy of Pediatrics; 2009.

The recommended immunization schedules for persons aged 0 through 18 years and the catch-up immunization schedule for 2010 have been approved by the Advisory Committee on Immunization Practices, the American Academy of Pediatrics, and the American Academy of Family Physicians. Suggested citation: Centers for Disease Control and Prevention. Recommended immunization schedules for persons aged 0 through 18 years---United States, 2010. MMWR 2010;58(51&52). This schedule includes recommendations in effect as of December 15, 2009. Any dose not administered at the recommended age should be administered at a subsequent visit, when indicated and feasible.

Infants born to HBsAg-positive mothers should be tested for HBsAg and antibody to HBsAg 1 to 2 months after completion of at least 3 doses of the HepB series, at age 9 through 18 months (generally at the next well-child visit). Administration of 4 doses of HepB to infants is permissible when a combination vaccine containing HepB is administered after the birth dose. The fourth dose should be administered no earlier than age 24 weeks.

# Rotavirus vaccine (RV). (Minimum age: 6 weeks) Administer the first dose at age 6 through 14 weeks (maximum age: 14 weeks 6 days). Vaccination should not be initiated for infants aged 15 weeks 0 days or older. The maximum age for the final dose in the series is 8 months 0 days. If Rotarix is administered at ages 2 and 4 months, a dose at 6 months is not indicated.
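The rotavirus age limits above are specified to the day, so they are easiest to apply as date arithmetic. The sketch below is illustrative only; the helper functions and example dates are ours, not part of the ACIP schedule.

```python
# Day-level checks of the rotavirus age windows quoted above: first dose at
# 6 weeks 0 days through 14 weeks 6 days (do not start the series at
# 15 weeks 0 days or older); final dose by 8 months 0 days.
# Function names and example dates are illustrative only.
import calendar
from datetime import date

def add_months(d: date, months: int) -> date:
    """Calendar-month addition, clamped to the last day of short months."""
    m0 = d.month - 1 + months
    year, month = d.year + m0 // 12, m0 % 12 + 1
    return date(year, month, min(d.day, calendar.monthrange(year, month)[1]))

def may_start_series(dob: date, visit: date) -> bool:
    age_days = (visit - dob).days
    return 6 * 7 <= age_days <= 14 * 7 + 6      # 42 through 104 days

def may_give_final_dose(dob: date, visit: date) -> bool:
    return visit <= add_months(dob, 8)          # at or before 8 months 0 days

dob = date(2010, 1, 1)
print(may_start_series(dob, date(2010, 2, 15)))    # True: 6 weeks 3 days
print(may_start_series(dob, date(2010, 4, 16)))    # False: 15 weeks 0 days
print(may_give_final_dose(dob, date(2010, 9, 1)))  # True: exactly 8 months
```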
3. Diphtheria and tetanus toxoids and acellular pertussis vaccine (DTaP). (Minimum age: 6 weeks) The fourth dose may be administered as early as age 12 months, provided at least 6 months have elapsed since the third dose. Administer the final dose in the series at age 4 through 6 years.

4. Haemophilus influenzae type b conjugate vaccine (Hib). (Minimum age: 6 weeks) If PRP-OMP (PedvaxHIB or Comvax [HepB-Hib]) is administered at ages 2 and 4 months, a dose at age 6 months is not indicated. TriHiBit (DTaP/Hib) and Hiberix (PRP-T) should not be used for doses at ages 2, 4, or 6 months for the primary series but can be used as the final dose in children aged 12 months through 4 years.

# Pneumococcal vaccine. (Minimum age: 6 weeks for pneumococcal conjugate vaccine [PCV]; 2 years for pneumococcal polysaccharide vaccine [PPSV]) PCV is recommended for all children aged younger than 5 years. Administer 1 dose of PCV to all healthy children aged 24 through 59 months who are not completely vaccinated for their age. Administer PPSV 2 or more months after the last dose of PCV to children aged 2 years or older with certain underlying medical conditions, including a cochlear implant. See MMWR 1997;46(No. RR-8).

# Inactivated poliovirus vaccine (IPV). (Minimum age: 6 weeks) The final dose in the series should be administered on or after the fourth birthday and at least 6 months following the previous dose. If 4 doses are administered prior to age 4 years, a fifth dose should be administered at age 4 through 6 years. See MMWR 2009;58(30):829--30.

# Influenza vaccine (seasonal). (Minimum age: 6 months for trivalent inactivated influenza vaccine [TIV]; 2 years for live, attenuated influenza vaccine [LAIV]) Administer annually to children aged 6 months through 18 years. For healthy children aged 2 through 6 years (i.e., those who do not have underlying medical conditions that predispose them to influenza complications), either LAIV or TIV may be used, except LAIV should not be given to children aged 2 through 4 years who have had wheezing in the past 12 months. Children receiving TIV should receive 0.25 mL if aged 6 through 35 months or 0.5 mL if aged 3 years or older. Administer 2 doses (separated by at least 4 weeks) to children aged younger than 9 years who are receiving influenza vaccine for the first time or who were vaccinated for the first time during the previous influenza season but only received 1 dose. For recommendations on the use of influenza A (H1N1) 2009 monovalent vaccine, see MMWR 2009;58(No. RR-10).

# Measles, mumps, and rubella vaccine (MMR). (Minimum age: 12 months) Administer the second dose routinely at age 4 through 6 years. However, the second dose may be administered before age 4, provided at least 28 days have elapsed since the first dose.

9. Varicella vaccine. (Minimum age: 12 months) Administer the second dose routinely at age 4 through 6 years. However, the second dose may be administered before age 4, provided at least 3 months have elapsed since the first dose. For children aged 12 months through 12 years, the minimum interval between doses is 3 months. However, if the second dose was administered at least 28 days after the first dose, it can be accepted as valid.

10. Hepatitis A vaccine (HepA). (Minimum age: 12 months) Administer to all children aged 1 year (i.e., aged 12 through 23 months). Administer 2 doses at least 6 months apart.
Children not fully vaccinated by age 2 years can be vaccinated at subsequent visits.

A 5-year interval from the last Td dose is encouraged when Tdap is used as a booster dose; however, a shorter interval may be used if pertussis immunity is needed.

# Human papillomavirus vaccine (HPV). (Minimum age: 9 years) Two HPV vaccines are licensed: a quadrivalent vaccine (HPV4) for the prevention of cervical, vaginal, and vulvar cancers (in females) and genital warts (in females and males), and a bivalent vaccine (HPV2) for the prevention of cervical cancers in females. HPV vaccines are most effective for both males and females when given before exposure to HPV through sexual contact. HPV4 or HPV2 is recommended for the prevention of cervical precancers and cancers in females. HPV4 is recommended for the prevention of cervical, vaginal, and vulvar precancers and cancers and genital warts in females. Administer the first dose to females at age 11 or 12 years. Administer the second dose 1 to 2 months after the first dose and the third dose 6 months after the first dose (at least 24 weeks after the first dose). Administer the series to females at age 13 through 18 years if not previously vaccinated. HPV4 may be administered in a 3-dose series to males aged 9 through 18 years to reduce their likelihood of acquiring genital warts.

# Meningococcal conjugate vaccine (MCV4). Administer at age 11 or 12 years, or at age 13 through 18 years if not previously vaccinated. Administer to previously unvaccinated college freshmen living in a dormitory. Administer MCV4 to children aged 2 through 10 years with persistent complement component deficiency, anatomic or functional asplenia, or certain other conditions placing them at high risk. Administer to children previously vaccinated with MCV4 or MPSV4 who remain at increased risk after 3 years (if the first dose was administered at age 2 through 6 years) or after 5 years (if the first dose was administered at age 7 years or older). Persons whose only risk factor is living in on-campus housing are not recommended to receive an additional dose. See MMWR 2009;58:1042--3.

# Influenza vaccine (seasonal). Administer annually to children aged 6 months through 18 years. For healthy nonpregnant persons aged 7 through 18 years (i.e., those who do not have underlying medical conditions that predispose them to influenza complications), either LAIV or TIV may be used. Administer 2 doses (separated by at least 4 weeks) to children aged younger than 9 years who are receiving influenza vaccine for the first time or who were vaccinated for the first time during the previous influenza season but only received 1 dose. For recommendations on the use of influenza A (H1N1) 2009 monovalent vaccine, see MMWR 2009;58(No. RR-10).

5. Pneumococcal polysaccharide vaccine (PPSV). Administer to children with certain underlying medical conditions, including a cochlear implant. A single revaccination should be administered after 5 years to children with functional or anatomic asplenia or an immunocompromising condition. See MMWR 1997;46(No. RR-8).

6. Hepatitis A vaccine (HepA). Administer 2 doses at least 6 months apart. HepA is recommended for children aged older than 23 months who live in areas where vaccination programs target older children, who are at increased risk for infection, or for whom immunity against hepatitis A is desired.

7. Hepatitis B vaccine (HepB). Administer the 3-dose series to those not previously vaccinated.
A 2-dose series (separated by at least 4 months) of adult formulation Recombivax HB is licensed for children aged 11 through 15 years.

8. Inactivated poliovirus vaccine (IPV). The final dose in the series should be administered on or after the fourth birthday and at least 6 months following the previous dose. If both OPV and IPV were administered as part of a series, a total of 4 doses should be administered, regardless of the child's current age.

9. Measles, mumps, and rubella vaccine (MMR). If not previously vaccinated, administer 2 doses, or the second dose for those who have received only 1 dose, with at least 28 days between doses.

10. Varicella vaccine. For persons aged 7 through 18 years without evidence of immunity (see MMWR 2007;56[No. RR-4]), administer 2 doses if not previously vaccinated or the second dose if only 1 dose has been administered. For persons aged 7 through 12 years, the minimum interval between doses is 3 months. However, if the second dose was administered at least 28 days after the first dose, it can be accepted as valid. For persons aged 13 years and older, the minimum interval between doses is 28 days.

# Rotavirus vaccine (RV). The maximum age for the first dose is 14 weeks 6 days. Vaccination should not be initiated for infants aged 15 weeks 0 days or older. The maximum age for the final dose in the series is 8 months 0 days. If Rotarix was administered for the first and second doses, a third dose is not indicated.

3. Diphtheria and tetanus toxoids and acellular pertussis vaccine (DTaP). The fifth dose is not necessary if the fourth dose was administered at age 4 years or older.

# Haemophilus influenzae type b conjugate vaccine (Hib). Hib vaccine is not generally recommended for persons aged 5 years or older. No efficacy data are available on which to base a recommendation concerning use of Hib vaccine for older children and adults. However, studies suggest good immunogenicity in persons who have sickle cell disease, leukemia, or HIV infection, or who have had a splenectomy; administering 1 dose of Hib vaccine to these persons who have not previously received Hib vaccine is not contraindicated. If the first 2 doses were PRP-OMP (PedvaxHIB or Comvax) and administered at age 11 months or younger, the third (and final) dose should be administered at age 12 through 15 months and at least 8 weeks after the second dose. If the first dose was administered at age 7 through 11 months, administer the second dose at least 4 weeks later and a final dose at age 12 through 15 months.

5. Pneumococcal vaccine. Administer 1 dose of pneumococcal conjugate vaccine (PCV) to all healthy children aged 24 through 59 months who have not received at least 1 dose of PCV on or after age 12 months. For children aged 24 through 59 months with underlying medical conditions, administer 1 dose of PCV if 3 doses were received previously, or administer 2 doses of PCV at least 8 weeks apart if fewer than 3 doses were received previously. Administer pneumococcal polysaccharide vaccine (PPSV) to children aged 2 years or older with certain underlying medical conditions, including a cochlear implant, at least 8 weeks after the last dose of PCV. See MMWR 1997;46(No. RR-8).

# Inactivated poliovirus vaccine (IPV). The final dose in the series should be administered on or after the fourth birthday and at least 6 months following the previous dose.
A fourth dose is not necessary if the third dose was administered at age 4 years or older and at least 6 months following the previous dose. In the first 6 months of life, minimum age and minimum intervals are only recommended if the person is at risk for imminent exposure to circulating poliovirus (i.e., travel to a polio-endemic region or during an outbreak).

7. Measles, mumps, and rubella vaccine (MMR). Administer the second dose routinely at age 4 through 6 years. However, the second dose may be administered before age 4, provided at least 28 days have elapsed since the first dose. If not previously vaccinated, administer 2 doses with at least 28 days between doses.

8. Varicella vaccine. Administer the second dose routinely at age 4 through 6 years. However, the second dose may be administered before age 4, provided at least 3 months have elapsed since the first dose. For persons aged 12 months through 12 years, the minimum interval between doses is 3 months. However, if the second dose was administered at least 28 days after the first dose, it can be accepted as valid. For persons aged 13 years and older, the minimum interval between doses is 28 days.

9. Hepatitis A vaccine (HepA). HepA is recommended for children aged older than 23 months who live in areas where vaccination programs target older children, who are at increased risk for infection, or for whom immunity against hepatitis A is desired.

10. Tetanus and diphtheria toxoids vaccine (Td) and tetanus and diphtheria toxoids and acellular pertussis vaccine (Tdap). Doses of DTaP are counted as part of the Td/Tdap series. Tdap should be substituted for a single dose of Td in the catch-up series or as a booster for children aged 10 through 18 years; use Td for other doses.

11. Human papillomavirus vaccine (HPV). Administer the series to females at age 13 through 18 years if not previously vaccinated. Use recommended routine dosing intervals for series catch-up (i.e., the second and third doses should be administered at 1 to 2 and 6 months after the first dose). The minimum interval between the first and second doses is 4 weeks. The minimum interval between the second and third doses is 12 weeks, and the third dose should be administered at least 24 weeks after the first dose.
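The HPV catch-up spacing above combines three pairwise minimums (4 weeks, 12 weeks, and 24 weeks), which makes it a natural candidate for a programmatic check. A minimal sketch, with hypothetical dates and a helper name of our own:

```python
# Validate HPV catch-up spacing per the footnote above: dose 2 at least
# 4 weeks after dose 1; dose 3 at least 12 weeks after dose 2 and at
# least 24 weeks after dose 1. Helper name and dates are illustrative.
from datetime import date, timedelta

def hpv_series_valid(d1: date, d2: date, d3: date) -> bool:
    return (d2 - d1 >= timedelta(weeks=4)
            and d3 - d2 >= timedelta(weeks=12)
            and d3 - d1 >= timedelta(weeks=24))

# Routine 0, 2, 6-month spacing easily satisfies the minimums:
print(hpv_series_valid(date(2010, 1, 4), date(2010, 3, 1), date(2010, 7, 5)))   # True
# A third dose only 20 weeks after the first fails the 24-week minimum:
print(hpv_series_valid(date(2010, 1, 4), date(2010, 2, 1), date(2010, 5, 24)))  # False
```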
† Respondents who reported currently smoking manufactured (i.e., commercial) cigarettes on a "daily" or "less than daily" basis. The term "smokers" in this report refers to current smokers of manufactured cigarettes. Smokers of other tobacco products, such as bidis, kreteks, hand-rolled cigarettes, cigars, pipes, and waterpipes, who did not also smoke manufactured cigarettes are not included in this analysis.

§ "In the last 30 days, did you notice any health warnings on cigarette packages?" and "In the last 30 days, have warning labels on cigarette packages led you to think about quitting?"

Tobacco use is the leading preventable cause of death; this year approximately 5 million persons worldwide will die from tobacco-related heart attacks, strokes, cancers, and other diseases (1). Sponsored by the World Health Organization (WHO), World No Tobacco Day is observed every year on May 31. This year, World No Tobacco Day highlights the WHO Framework Convention on Tobacco Control (WHO FCTC) (1). Development of WHO FCTC began with a resolution adopted at the 1996 World Health Assembly; the treaty took effect in 2005 and is maintained by the United Nations. A total of 172 countries have adopted the treaty (2), making WHO FCTC one of the most widely embraced evidence-based treaties in United Nations history. WHO FCTC urges all countries to ratify the treaty, fully implement its provisions, and adopt its guidelines (1), and WHO provides country-level assistance for implementing effective tobacco control measures (3). CDC has supported this effort by working with partners to provide technical assistance and infrastructure support for the creation of sustainable surveillance systems (the Global Adult Tobacco Survey and the Global Youth Tobacco Survey). Additional information is available at /index.html.

# Cigarette Package Health Warnings and Interest in Quitting Smoking - 14 Countries, 2008-2010

The World Health Organization (WHO) Framework Convention on Tobacco Control (FCTC) requires health warnings on tobacco product packages sold in countries that ratified the WHO FCTC treaty (1). These warnings are expected to 1) describe the harmful effects of tobacco use; 2) be approved by the appropriate national authority; 3) appear on at least 30%, and ideally 50% or more, of the package's principal display areas; 4) be large, clear, visible, and legible in the country's principal language(s); 5) have multiple, rotating messages; and 6) preferably use pictures or pictograms. To assess the effects of cigarette package health warnings on interest in quitting smoking among smokers of manufactured cigarettes aged ≥15 years, this report examines 2008-2010 data from the Global Adult Tobacco Survey (GATS) in 14 WHO FCTC countries. Among men, the prevalence of manufactured cigarette smoking ranged from 9.6% in India to 59.3% in Russia. Among men in 12 of the countries and women in seven countries, >90% of smokers reported noticing a package warning in the previous 30 days. The percentage of smokers thinking about quitting because of the warnings was >50% in six countries and >25% in men and women in all countries except Poland. WHO has identified providing tobacco health information, including graphic health warnings on tobacco packages, as a powerful "best buy" in combating noncommunicable disease (2). Implementing effective warning labels as a component of a comprehensive approach can help decrease tobacco use and its many health consequences.
GATS is a nationally representative household survey conducted among persons aged ≥15 years using a standardized questionnaire, sample design, data collection method, and analysis protocol to obtain measures on key tobacco control indicators and to ensure comparability across countries. GATS was conducted once in each of the 14 countries during 2008-2010 by national governments, ministries of health, survey-implementing agencies, and international partners. In each country, a multistage cluster sample design is used, with households selected proportional to population size. Data are weighted to reflect the noninstitutionalized population aged ≥15 years in each country. For this analysis, current smokers of manufactured cigarettes† were asked whether they had noticed health warnings on a cigarette package in the previous 30 days, and whether the label led them to think about quitting smoking.§ Responses were analyzed by sex and, within sex strata, by age and education level using bivariate analysis within individual countries. Differences in response estimates were considered statistically significant if 95% confidence intervals did not overlap. Overall response rates ranged from 65.1% in Poland to 97.7% in Russia. The health warnings on cigarette packages in each country at the time GATS was conducted were described according to WHO FCTC guidelines (3,4).

All GATS countries had warning labels on cigarette packages describing harmful effects of smoking at the time their survey was conducted. Four of the 14 countries (Brazil, Egypt, Thailand, and Uruguay) had pictorial warnings. A fifth country, India, introduced pictorial warnings in 2009 and had both text and pictorial warnings in circulation when GATS was conducted (Table 1). In all 14 countries, men were more likely to be cigarette smokers than women. Among men, prevalence of smoking ranged from 9.6% in India to 59.3% in Russia (Table 2). Among women, prevalence of smoking was <25% in all countries and <2% in Bangladesh, China, Egypt, India, Thailand, and Vietnam. In some countries, including Egypt, not enough women reported current smoking to calculate the percentage noticing warnings. Smokers aged ≥65 years were less likely to notice warnings in Bangladesh (men), Brazil (men and women), Mexico (men), Philippines (men and women), Thailand (men), and Ukraine (men) (Table 2). Smokers who had not completed primary school education were less likely to have noticed warnings in Bangladesh (men), China (men and women), India (men), Mexico (women), Philippines (men and women), Turkey (women), and Vietnam (men) (Table 2). Among smokers who noticed a package warning, the percentage thinking about quitting because of the warning was >50% in six GATS countries (Bangladesh, Brazil, India, Thailand, Ukraine, and Vietnam) and >25% for men and women in all countries except one (Poland). Older male smokers were less likely to think about quitting in India and Uruguay; no other age group differences were noted.
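The significance rule stated in the methods above (non-overlap of 95% confidence intervals) can be sketched in a few lines; the estimates and standard errors below are hypothetical, not GATS values.

```python
# Non-overlap of 95% confidence intervals, the significance rule stated
# in the methods above. Estimates and standard errors are hypothetical.
Z95 = 1.96  # normal critical value for a 95% interval

def ci(estimate: float, se: float) -> tuple[float, float]:
    return estimate - Z95 * se, estimate + Z95 * se

def significantly_different(est1, se1, est2, se2) -> bool:
    lo1, hi1 = ci(est1, se1)
    lo2, hi2 = ci(est2, se2)
    return hi1 < lo2 or hi2 < lo1   # intervals do not overlap

# e.g., 93.2% (SE 1.1 points) in one group vs 85.4% (SE 1.8) in another:
print(significantly_different(0.932, 0.011, 0.854, 0.018))  # True
```

Note that the CI-overlap rule is conservative: two estimates can differ significantly under a direct difference test even when their intervals overlap slightly.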
This report is the first to provide survey results from all 14 countries that participated in GATS during 2008-2010. In these countries, the prevalence of smoking manufactured cigarettes varied widely and was more common among men. Warning the public about the dangers of tobacco is one of the strategies in WHO's MPOWER package to combat the tobacco epidemic (3). Most of these countries had met the minimum WHO FCTC health warning label criteria for cigarette packages at the time GATS was conducted. The majority of smokers noticed the health warnings, and in most countries >25% of those who noticed the warnings said they were led to think about quitting. These results indicate that package warnings can be effective for various populations and settings, including countries in which cigarette smoking prevalence currently is low.

# What is already known on this topic?

Warning the public about the dangers of tobacco is one of the key strategies in the World Health Organization's MPOWER package to combat tobacco use.

# What is added by this report?

For the first time, data from all 14 Global Adult Tobacco Survey (GATS) countries are available. In these countries, the prevalence of smoking manufactured cigarettes varies widely and is more common among men. The majority of smokers noticed package warning labels. Among smokers who noticed a health warning, the percentage thinking about quitting because of the warning was >50% in six GATS countries.

# What are the implications for public health practice?

Strong health warning labels on cigarette packages are effective in motivating smokers to consider quitting. These findings emphasize the importance of using warnings that are effective in communicating the risks of smoking to all cigarette smokers.

To be effective, cigarette package warnings must capture smokers' attention and educate them about the health effects of tobacco use (5). The WHO FCTC guidelines provide parameters to accomplish these objectives by emphasizing features that increase the salience of warnings (1,4). Prominent, pictorial warnings have been found to be the most effective in communicating the harms of smoking in several studies (4,6). Smokers who perceive a greater health risk from smoking are more likely to think about quitting and to quit successfully (6). Further, evidence indicates that warnings are more likely to be effective if they elicit strong emotions, such as fear, seem personally relevant, and increase confidence in the ability to quit (4,7). For example, a comparative analysis of responses to labels in Brazil, Mexico, and Uruguay found that the Brazilian warnings depicting human suffering had the strongest impact on thinking about quitting (8). Rotating warnings also is important because the impact of an individual label will decrease over time (5). Thus, a warning that is small in total size or font size, has been in circulation for a long time, or lacks informational content that generates an emotional response likely will not have the strongest possible impact. Graphic warnings have the potential to reach those who do not notice or read text-only warnings; they also have the potential to better evoke emotional responses, increase knowledge of health risks, and reinforce motivations to quit smoking (9). Therefore, the WHO FCTC guidelines strongly encourage the use of graphic warnings (5).

Low education level and older age were associated with not noticing warnings in some countries; virtually all of these countries had text-only warnings. Women were less likely to notice warnings than men in India, China, and Vietnam, countries where cigarette smoking prevalence is very low among women. These findings emphasize the importance of using warnings that are effective in communicating the risks of smoking to all cigarette smokers and using other evidence-based tobacco control measures that reach populations that are not frequently exposed to cigarette packages. Warnings were more effective at getting smokers to think about quitting in some countries than in others.
Brazil and Thailand, countries with numerous prominent and graphic pictorial warnings in rotation, had among the highest prevalences of smokers thinking about quitting because of the warnings; these warnings received WHO's highest rating (3). However, reported thinking about quitting smoking also was relatively high in Bangladesh and Vietnam, where warnings covered less of the package and were text-only. The reasons for these findings are not immediately clear but might relate to the relative importance of package warnings among other contextual factors, such as smokers' baseline knowledge about health risks, level of interest in quitting, and level of tobacco dependence, as well as concurrent tobacco control efforts and social norms surrounding tobacco use (7). Further research might be helpful in elucidating these factors and in determining the extent to which thinking about quitting because of warnings leads to quit attempts in GATS countries.

The findings in this report are subject to at least five limitations. First, all data were self-reported, and social norms (e.g., the unacceptability in some countries of women smoking) might have affected responses. Second, the education categories used in Brazil are not comparable to the categories used in this analysis, so Brazil's data were not evaluated for differences in prevalence by education. Third, this analysis included only smokers of manufactured cigarettes; however, other tobacco products (e.g., bidis, kreteks, hand-rolled cigarettes, waterpipes, and smokeless tobacco) are commonly used in India and other GATS countries. Fourth, the prevalence of smoking among women is low in some countries, so analyzing or interpreting results on the impact of package warnings among women was not possible. Finally, GATS was not designed to evaluate the effectiveness of individual health warning labels, and its core questions did not distinguish between the different labels in circulation in a country.

After GATS was conducted, Mexico, Philippines, Turkey, and Ukraine passed legislation requiring pictorial warning labels, and Thailand and Uruguay increased the size of their warnings. Worldwide, a majority of countries now have warnings on cigarette packages, but their features and strength vary (7). As of 2010, approximately 30 countries had pictorial warning labels covering at least 50% of the package (7), and additional countries were developing such labels. Future GATS rounds will allow tracking of progress toward key tobacco use and control indicators. Smokers view their cigarette packages every time they remove a cigarette; therefore, the cigarette package represents a powerful vehicle to deliver health warnings directly to smokers. Nonsmokers and former smokers also can be discouraged from smoking by viewing comprehensive warnings (7). WHO has identified price increases; smoke-free policies; bans on tobacco advertising, promotion, and sponsorship; and providing tobacco health information via mass media campaigns and graphic health warnings to the public as tobacco "best buys" because they can reduce tobacco initiation, help to prevent progression from initiation to addiction, increase cessation, decrease consumption, and change social norms (2). Providing information about the dangers of using tobacco products with package warnings is a simple and cost-effective strategy to motivate quit attempts, thus helping to prevent the life-threatening effects of tobacco use (9,10).
# Human Jamestown Canyon Virus Infection - Montana, 2009

Jamestown Canyon virus (JCV) is a mosquito-borne zoonotic pathogen belonging to the California serogroup of bunyaviruses. Although JCV is widely distributed throughout temperate North America, reports of human JCV infection in the United States are rare. This is the first report of human JCV infection detected in Montana, one of only 15 cases reported in the United States since 2004, when JCV became reportable. On May 26, 2009, a man aged 51 years with no travel history outside of Montana went to a local emergency department immediately following onset of fever, severe frontal headache, dizziness, left-sided numbness, and tingling. His blood pressure was elevated. Stroke was ruled out, oxygen was administered, medication was prescribed for hypertension, and the patient was sent home. One week later, the patient visited his primary-care physician complaining of continued neurologic symptoms consistent with acute febrile encephalitis and recent mosquito bites. Although West Nile virus (WNV) disease was diagnosed based on detection of WNV immunoglobulin M (IgM) and G (IgG) antibodies, subsequent testing indicated that the WNV antibodies were from a past infection and that his illness was caused by JCV. The final diagnosis of JCV infection was based on positive JCV-specific IgM enzyme-linked immunosorbent assay (ELISA) results and a fourfold rise in paired-sample JCV plaque reduction neutralization test (PRNT) titers. This finding represents a previously unrecognized risk for JCV infection in Montana; clinicians should consider JCV infection when assessing patients for suspected arboviral infections.

# Case Report

On May 26, 2009, a previously healthy man aged 51 years with no travel history outside of Montana went to a local emergency department immediately following onset of fever, severe acute frontal headache, dizziness, left-sided numbness, and tingling. No other symptoms were noted. Results of a physical examination were normal, except for an elevated blood pressure of 214/119 mmHg. Blood chemistries and cardiac enzyme tests were within normal limits, except for an elevated glucose of 130 mg/dL (normal: 70-110 mg/dL). Results of an electrocardiogram, magnetic resonance imaging, and computed tomography scan of the brain were normal. Oxygen was administered to the patient, telmisartan was prescribed for hypertension, and he was sent home. A week later, on June 2, the patient visited his primary-care physician complaining of fever, persistent headache, and new onset of muscle pain and weakness. The physician considered the patient's symptoms to be consistent with a neurologic illness and evaluated the patient further for a possible stroke or arboviral infection. A carotid Doppler test showed no evidence of abnormal arterial blood flow. A lumbar puncture performed on June 11 showed clear, colorless cerebrospinal fluid with no leukocytes or erythrocytes, and bacterial culture showed no growth at 72 hours; no tests for virus were performed. The patient was referred to and visited a neurologist on July 6. The neurologist found no evidence of stroke, diagnosed a complex migraine, and prescribed medication for headache management. The patient's symptoms gradually improved, and he reported no residual symptoms 6 months after illness onset.
On visiting his primary-care physician and during interviews conducted by the local health department and Montana Department of Public Health and Human Services (DPHHS), the patient reported recent exposure to mosquitoes while working outdoors around his home, which was located in a rural area of Montana. An acute-phase serum sample collected on June 2 (1 week after symptom onset) tested positive for WNV-specific IgM and IgG by ELISA at the Montana Public Health Laboratory (MTPHL). These laboratory results, in combination with the patient's symptoms and history of recent mosquito bites, supported a presumptive diagnosis of WNV disease. The acute sample was then sent to CDC's arbovirus diagnostic laboratory (CDC-ADL) in Fort Collins, Colorado, to confirm the diagnosis by PRNT. Testing at CDC-ADL was positive for WNV-specific IgM and IgG antibodies, with a neutralizing titer of 320. Testing also was positive for St. Louis encephalitis virus (SLEV)-specific IgG antibodies, but a negative SLEV-specific IgM antibody test and a neutralizing titer of 10 suggested cross-reactive flaviviral antibodies. An initial convalescent serum sample drawn on June 11 (16 days after symptom onset) also tested positive for WNV-specific IgM and IgG by ELISA at MTPHL but was not available for testing at CDC-ADL. However, another convalescent serum sample was obtained on December 1 (189 days after symptom onset) and was tested at CDC-ADL. Results indicated persistence of WNV-specific IgM and IgG antibodies and stable neutralizing titers (Table). Because the stable titers suggested a previously acquired WNV infection (>6 months before illness onset), WNV avidity testing was performed at the Viral Zoonoses Section, National Microbiology Laboratory, Public Health Agency of Canada (NML-PHAC), in Winnipeg, Manitoba, Canada. Testing found high-avidity WNV IgG, strongly suggesting that the WNV antibodies were from a past WNV infection (1). In addition to WNV testing, CDC-ADL tested the acute specimen collected on June 2 for antibodies against other arboviruses. Results were equivocal for IgM and IgG antibodies against La Crosse virus (LACV) by ELISA. Neutralizing titers of 40 against LACV and 80 against JCV suggested a possible recent infection with a California serogroup virus (Table). Follow-up testing on the day 189 sample was negative for LACV IgM antibodies by ELISA, but showed a twofold increase in LACV neutralizing titers and a fourfold increase in JCV titers. These results suggested that the patient's infection most likely was caused by JCV. To confirm the diagnosis, samples were sent to NML-PHAC for testing with their recently developed IgM ELISA assays incorporating JCV antigen. Patient sera obtained June 11 and December 1 were positive for JCV-specific IgM antibodies (Table). The presence of JCV-specific IgM and the fourfold diagnostic rise in JCV-neutralizing antibody titers confirmed the diagnosis of JCV infection. This finding indicated that JCV is present in Montana and that a risk for human infection exists.
# Editorial Note
Arthropod-borne viruses (i.e., arboviruses) are transmitted to humans primarily through bites from infected mosquitoes or ticks. Most arboviruses of public health importance belong to one of three virus genera: Flavivirus, Alphavirus, and Bunyavirus. Human cases caused by the following domestic arboviruses are nationally reportable to CDC: West Nile, St.
Louis encephalitis, Powassan, eastern equine encephalitis, western equine encephalitis, and California serogroup viruses (i.e., La Crosse, Jamestown Canyon, California encephalitis, Keystone, snowshoe hare, and trivittatus). JCV is distributed throughout temperate North America, where it circulates primarily between deer and various mosquito species (2)(3)(4). Despite its wide geographic range, only 15 human JCV infections (mean: <3 per year) have been reported in the United States since 2004, when JCV became a reportable condition, and those cases have been reported predominantly from midwestern and northeastern states. JCV infection initially was described in the early 1970s as causing a mild febrile illness in humans (5). Serosurveys in Connecticut and New York have shown evidence of JCV infection in up to 12% of the population (3,6). Despite descriptions of mild illness caused by JCV, at least 11 subsequent cases with moderate-to-severe meningoencephalitis have been described: 10 in the early 1980s and one in 2001 (3,6). A retrospective study of patients with central nervous system manifestations and serologic findings for California serogroup viruses during 1971-1981 confirmed that 41 of 53 patients (77%) had antibodies to JCV, indicating that JCV originally was underdiagnosed in these patients (7). In comparison with clinical illness caused by LACV, JCV has been described as more often affecting adults and as more likely to cause meningitis (6,7). Furthermore, while LACV infections in humans generally occur in August, JCV infections can occur earlier, in May and June, and continue through the end of summer, likely because the seasonal distribution of mosquito vectors differs for each virus (8). Although the Montana patient with JCV infection was suspected to have an acute WNV infection, human cases of WNV infection in Montana typically are not reported until late July, with the majority of cases occurring in late August and early September. The onset of illness for this patient was during late spring, which is consistent with approximately 40% of recognized human JCV infections. The differences in the seasonal distribution of these diseases likely are related to the mosquito species that transmit the viruses. Mosquitoes belonging to snow-melt Aedes species are common vectors of JCV, emerge early in spring, and are distributed throughout Montana (3,9). Vertical transmission of JCV in mosquitoes, overwintering of the virus in mosquito eggs, and larval maturation in temporary ponds produced by melting snow increase the likelihood of human JCV transmission in the spring (10). Detection of JCV previously has relied on cross-reactive antibodies in the LACV-specific ELISA (6,7). Testing of the acute serum sample for this case yielded equivocal anti-LACV IgM results, with a slightly higher neutralizing antibody titer against JCV than against LACV. The titers against JCV and LACV were not different enough to determine the etiology. Although the convalescent sample confirmed a fourfold rise in JCV-neutralizing antibody titers, testing of paired acute and convalescent samples using a JCV antigen-specific ELISA was necessary to confirm the positive JCV IgM results. The discordant anti-LACV and anti-JCV IgM results suggested that cross-reactivity between LACV and JCV antibodies in the LACV-specific ELISA was incomplete, and that sole reliance on the LACV-specific ELISA to detect JCV can lead to missed JCV infections. In response, CDC has developed a JCV-specific IgM ELISA.
Currently, testing is available only at CDC on request. As more information about the distribution and frequency of JCV infections and disease becomes known, testing might be expanded to include regional or state laboratories. The availability of this test will enable clinicians and public health officials to quickly differentiate among arboviral infections, especially those within the California serogroup. Initial diagnostic tests in this case included testing for several arboviral diseases. However, the lack of a readily available diagnostic test specific to JCV delayed the diagnosis and led the clinician to consider noninfectious causes of illness. For the patient, the delayed diagnosis resulted in unnecessary medical procedures, including a carotid Doppler ultrasound, as well as several hours of travel and lost work time to seek additional medical evaluation from a specialist. Clinically, patient care might not have differed significantly; however, supportive care, including headache management, and the patient's prognosis would have been established more quickly. Treatment for JCV infection typically includes supportive care and management of complications, such as relieving increased intracranial pressure. This case underscores the importance of Montana clinicians considering JCV infection in patients with a febrile neurologic illness when an arboviral infection is suspected and WNV testing is inconclusive. Improved and timely diagnosis will aid clinicians in making patient-care and management decisions, help public health professionals perform accurate epidemiologic investigations and implement preventive measures, and provide a better understanding of California serogroup virus distribution.
# What is already known on this topic?
Jamestown Canyon virus (JCV) circulates widely in North America, primarily between deer and various mosquito species. Reports of human JCV infections in the United States have been rare and are confined primarily to the midwestern and northeastern states. JCV's nonspecific clinical presentation and the limited availability of sensitive tests for JCV might contribute to many human infections going undetected.
# What is added by this report?
This first reported human case of JCV in Montana suggests that the geographic distribution of human JCV infection is wider than previously recognized, and that increased JCV surveillance is needed to determine whether mosquito-borne arboviruses other than West Nile virus (WNV) pose a substantial risk to humans in the region.
# What are the implications for public health practice?
Clinicians should consider JCV infection in differential diagnoses when an arboviral infection is suspected to be causing a febrile neurologic illness, but WNV testing is inconclusive. Improved and timely arboviral disease diagnostics will aid clinicians in making patient-care and management decisions, help public health professionals perform accurate epidemiologic investigations and target preventive measures, and provide a better understanding of arboviral disease distribution in the United States.
# Contribution of Occupational Physical Activity Toward Meeting Recommended Physical Activity Guidelines - United States, 2007
Regular physical activity helps maintain a healthy weight and reduces the likelihood of developing chronic diseases. The 2008 Physical Activity Guidelines for Americans (1) are derived from the most recent scientific review of physical activity health benefits and do not differentiate among physical activity for leisure, transportation, work, or other purposes.
To examine the potential influence of occupational physical activity on meeting minimum weekly aerobic physical activity guidelines, the Washington State Department of Health (WADOH) analyzed demographic patterns in physical activity levels with and without consideration of occupational physical activity, using 2007 Behavioral Risk Factor Surveillance System (BRFSS) data. This report describes the results of that analysis, which indicated that approximately two thirds (64.3%) of U.S. adults met minimum physical activity guidelines through nonoccupational physical activity. When occupational physical activity (defined as reported work activity of mostly walking or heavy labor) was considered, an additional 6.5% of adults likely met the guidelines. The increase was greatest for Hispanic men (14.4%) and men with less than a high school education (15.9%). Public health agencies conducting surveillance of population physical activity levels also should consider including occupational physical activity, which will help to identify demographic groups for targeted programs that increase physical activity. BRFSS is a state-based, random-digit-dialed telephone survey of the noninstitutionalized, U.S. civilian adult population. The Council of American Survey Research Organizations (CASRO) median response rate for the 2007 BRFSS survey was 50.6%. Among 430,912 respondents, complete occupational and nonoccupational physical activity data were available for 386,397 respondents from 50 states and the District of Columbia. BRFSS collects data on frequency and duration of nonoccupational physical activity, which includes leisure, transportation (e.g., walking), and maintaining a home. WADOH computed the products of activity frequency (days per week) and duration (minutes per day) for moderate-intensity and vigorous-intensity activities. Consistent with the guidelines, WADOH classified respondents as having met guidelines if they reported weekly nonoccupational physical activity of ≥150 minutes of moderate-intensity activity (e.g., brisk walking or gardening), ≥75 minutes of vigorous-intensity activity (e.g., running or heavy yard work), or a combination of moderate-intensity and vigorous-intensity activity (with vigorous-intensity activity minutes multiplied by two) totaling ≥150 minutes. BRFSS does not collect data on occupational physical activity frequency and duration; instead, respondents who indicate employment are asked whether their activity at work is mostly standing or sitting, mostly walking, or mostly heavy labor or physically demanding work.- For this analysis, respondents who did not meet guidelines through nonoccupational physical activity were coded as meeting the guidelines if they reported mostly walking or mostly heavy labor or physically demanding work (Figure). WADOH computed age-adjusted prevalences of meeting physical activity guidelines by selected demographic characteristics and calculated age-adjusted prevalence ratios (PRs) for meeting guidelines by fitting two sets of Poisson regressions in which the outcome measures were meeting recommendations (in the first set, through nonoccupational activity; in the second set, through either nonoccupational or occupational activity). Each Poisson regression contained age and, except for the analysis in which age was the only predictor, an additional predictor variable: race/ethnicity, annual household income, or education.
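The classification rule described above translates directly into code. A minimal sketch follows, assuming per-respondent weekly minutes of moderate and vigorous nonoccupational activity plus the three-level work-activity response; the variable names and shortened response strings are illustrative, not BRFSS field names.

```python
def meets_nonoccupational(moderate_min: float, vigorous_min: float) -> bool:
    """2008 guideline test: >=150 moderate-equivalent minutes per week,
    with each vigorous minute counted as two moderate minutes.
    This single inequality covers all three stated criteria."""
    return moderate_min + 2 * vigorous_min >= 150

def meets_guidelines(moderate_min: float, vigorous_min: float,
                     work_activity: str) -> bool:
    """Respondents short of the guideline through nonoccupational activity
    are coded as meeting it if work is mostly walking or heavy labor."""
    if meets_nonoccupational(moderate_min, vigorous_min):
        return True
    return work_activity in ("mostly walking", "mostly heavy labor")

# Examples: 75 vigorous minutes alone meet the guideline (75 * 2 = 150);
# a desk worker with only 60 moderate minutes does not.
print(meets_guidelines(0, 75, "mostly sitting or standing"))   # True
print(meets_guidelines(60, 0, "mostly sitting or standing"))   # False
print(meets_guidelines(60, 0, "mostly heavy labor"))           # True
```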
All analyses were stratified by sex and conducted using statistical software that accounted for the complex sampling design.
- Regarding occupational physical activity, respondents were asked the following: "When you are at work, which of the following best describes what you do? Would you say 1) mostly sitting or standing; 2) mostly walking; or 3) mostly heavy labor or physically demanding work?" Responses of "don't know/not sure" and a respondent's refusal to respond ("refused") also were included. Additional information available at .
# FIGURE. Classification for meeting 2008 physical activity guidelines*
Approximately two thirds (68.5%) of men met guidelines through nonoccupational physical activity. When occupational physical activity levels also were considered, the proportion meeting guidelines increased from 68.5% to 76.3% (Table 1); 14.8% (95% confidence interval [CI] = 14.4%-15.3%) of men reported "mostly walking," and 14.3% (CI = 13.9%-14.7%) reported "mostly heavy labor" at work. For women, the proportion increased from 60.4% to 65.7% (Table 2); 12.7% (CI = 12.4%-13.0%) of women reported "mostly walking," and 3.4% (CI = 3.3%-3.6%) of women reported "mostly heavy labor" at work. Hispanic men and men with less than a high school education exhibited the greatest absolute gains in the proportion meeting guidelines when occupational physical activity was included (from 60.6% to 75.0% and from 55.7% to 71.6%, respectively). Among Hispanic men, 24.3% (CI = 22.6%-26.2%) reported "mostly walking," and 15.0% (CI = 13.5%-16.5%) reported "mostly heavy labor" at work; among men with less than a high school education, 21.2% (CI = 19.4%-23.2%) reported "mostly walking," and 18.4% (CI = 16.8%-20.0%) reported "mostly heavy labor." Hispanic men had a lower prevalence of meeting guidelines through nonoccupational physical activity compared with non-Hispanic white men (PR = 0.85) (Table 1). However, when occupational physical activity was included, the PR was attenuated (i.e., it approached 1.0; PR = 0.97). Similarly, men with less than a high school education had a lower prevalence of meeting physical activity guidelines through nonoccupational physical activity compared with men with a college degree (PR = 0.75). When occupational physical activity was included, the PR was attenuated (PR = 0.93). Similar patterns in attenuation of PRs were noted when comparing men with reported annual household incomes of ≤$35,000 with those reporting higher incomes. As expected, findings from this report provide evidence for a modest contribution of occupational physical activity toward successfully meeting minimum physical activity guidelines among U.S. adults, with a larger impact for some subpopulations than others. National Health Interview Survey (NHIS) data from 1990 and earlier revealed that approximately half of respondents classified as sedentary in leisure time reported ≥1 hour of strenuous occupational activity daily; that report indicated that assessing only leisure activity might underestimate physical activity (2). Recent analyses from NHIS are not available because questions regarding the amount of job-related physical activity have not been asked since 1990. The findings presented in this report are consistent with reports of Hispanic persons expending more energy at work than persons of other racial/ethnic groups (3).
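The attenuation comparisons above come from the "modified Poisson" approach named in the methods: a Poisson regression fit to a binary outcome with robust standard errors yields prevalence ratios directly. A minimal unweighted sketch with statsmodels follows; the data frame, variable names, and robust-variance choice are illustrative, and the published analysis used survey-weighted software that accounts for the complex sampling design.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Illustrative data: one row per respondent (simulated, not BRFSS data).
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "met_guidelines": rng.integers(0, 2, 1000),           # 1 = met guidelines
    "age_group": rng.choice(["18-34", "35-54", "55+"], 1000),
    "hispanic": rng.integers(0, 2, 1000),                 # 1 = Hispanic
})

# Poisson GLM on the binary outcome with robust (HC0) standard errors;
# exponentiated coefficients are age-adjusted prevalence ratios.
fit = smf.glm("met_guidelines ~ C(age_group) + hispanic",
              data=df, family=sm.families.Poisson()).fit(cov_type="HC0")
print(np.exp(fit.params["hispanic"]))  # PR, Hispanic vs. non-Hispanic
```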
In addition, education and income are strong predictors of leisure-time physical activity, and they remain important predictors of total activity, even though including occupational activity attenuates the association between education and physical activity for men. Although the BRFSS occupational physical activity question has been reported as valid and reliable for classifying physical activities at work (4-6), the question does not quantify the intensity or duration of continuous occupational physical activity. For this report, the analysis assumed that "mostly walking" included moderate-intensity activity in ≥10-minute intervals for ≥150 minutes per week and "mostly heavy labor" included vigorous-intensity activity in ≥10-minute intervals for ≥75 minutes per week. If the actual time spent in activity of sufficient intensity is less than this, then the effect of occupational physical activity on meeting the minimum aerobic activity guidelines will be overestimated. Relative to a standard work week of 40 hours, these assumptions seem reasonable. Also, a variety of occupational walking activities are in the "moderate" range, and a variety of heavy labor activities are in the "vigorous" range, based on comparisons of energy need while performing a task to energy need at rest (5,7). However, a more detailed assessment of occupational physical activities would be needed to confirm these assumptions. The findings in this report are subject to at least three limitations. First, because the duration of "mostly" walking or heavy physical labor is unavailable, it was not possible to assess whether respondents who did not meet guidelines through nonoccupational activity alone might meet guidelines through the combination of occupational and nonoccupational physical activity. As such, the proportions of persons meeting guidelines might have been underestimated. Second, BRFSS excludes persons in households without landline telephones. Finally, the 2007 BRFSS survey had a low CASRO response rate. These latter two factors can lead to bias, especially if physical activity patterns differ between those with and without landline telephones and between respondents and nonrespondents. The directions of these potential biases are unknown. As one of the 10 leading health indicators in the United States (8), physical activity is monitored at state and national levels to provide information for public health program planning, implementation, and evaluation. The state of Washington has used combined occupational and nonoccupational physical activity data as part of its assessment to target communities for policy and environmental changes. Debate about the health benefits of physical activity at work is ongoing, but the current guidelines do not distinguish between occupational and nonoccupational physical activity. Thus, public health surveillance that includes both occupational and nonoccupational physical activity more accurately describes whether persons meet guidelines than surveillance that includes only nonoccupational physical activity. Because demographic groups vary in amounts of physical activity at work (9), surveillance that includes both occupational and nonoccupational physical activity can be used to target groups that could derive health benefits by being more physically active. # What is already known on this topic? 
Prevalence estimates of physical activity predominantly focus on nonoccupational physical activity; however, physical activity at work also can contribute to levels of physical activity sufficient to meet physical activity recommendations.
# What is added by this report?
This report is the first to provide estimates based on national surveillance data of the potential contribution of occupational physical activity toward meeting physical activity guidelines described in the 2008 Physical Activity Guidelines for Americans; when occupational physical activity was considered, an estimated additional 6.5% of adults overall likely met the guidelines, and, for some groups, an estimated additional 14%-16% met the guidelines.
# What are the implications for public health practice?
Pending evaluation of the usefulness of collecting information on occupational physical activity frequency and duration, consideration of occupational physical activity in the monitoring of population physical activity levels can help to identify demographic groups for targeted programs to increase physical activity.
# Recommendations for Use of a Booster Dose of Inactivated Vero Cell Culture-Derived Japanese Encephalitis Vaccine - Advisory Committee on Immunization Practices, 2011
Japanese encephalitis (JE) virus, a mosquito-borne flavivirus, is an important cause of encephalitis in Asia, with a case-fatality rate of 20%-30% and neurologic or psychiatric sequelae in 30%-50% of survivors (1). Travelers to JE-endemic countries and laboratory personnel who work with infectious JE virus are at potential risk for JE virus infection. In 2010, CDC's Advisory Committee on Immunization Practices (ACIP) updated recommendations for prevention of JE. The updated recommendations included information on use of a new inactivated, Vero cell culture-derived JE vaccine (JE-VC) that was licensed in the United States in 2009. Data on the need for and timing of booster doses with JE-VC were not available when the vaccine was licensed. This report summarizes new data on the persistence of neutralizing antibodies following primary vaccination with JE-VC and the safety and immunogenicity of a booster dose of JE-VC. The report also provides updated guidance to health-care personnel regarding use of a booster dose of JE-VC for U.S. travelers and laboratory personnel. ACIP recommends that if the primary series of JE-VC was administered >1 year previously, a booster dose may be given before potential JE virus exposure.
# Background
For most travelers to Asia, the risk for JE is very low but varies based on destination, duration, season, and activities (2). ACIP recommends JE vaccine for travelers who plan to spend a month or longer in JE-endemic areas during the JE virus transmission season. JE vaccine should be considered for short-term travelers (<1 month) if they plan to travel outside of an urban area and have an itinerary or activities that will increase the risk for JE virus exposure. JE vaccine also is recommended for laboratory personnel with a potential for exposure to infectious JE virus (1). In 2009, the Food and Drug Administration (FDA) licensed JE-VC for use in persons aged ≥17 years. JE-VC is manufactured by Intercell Biomedical (Livingston, United Kingdom) and is distributed in the United States by Novartis Vaccines (Cambridge, Massachusetts). JE-VC is administered in a 2-dose primary series at 0 and 28 days. Another JE vaccine, an inactivated mouse brain-derived vaccine (JE-VAX [JE-MB]), has been licensed in the U.S. since 1992. However, JE-MB is no longer being produced, and remaining doses expire in May 2011. Additional JE-VC study data have become available since the vaccine's licensure.
The ACIP JE Vaccines Workgroup reviewed JE-VC clinical trial data on the persistence of neutralizing antibodies following primary vaccination with JE-VC and the safety and immunogenicity of a booster dose of JE-VC. These data were primarily from published, peer-reviewed studies; however, unpublished data also were considered. FDA approved an update to the prescribing information for JE-VC in September 2010. No previous guidance had been issued on booster doses with JE-VC. At the February 2011 ACIP meeting, the workgroup presented data supporting use of a booster dose and proposed recommendations for a booster dose. ACIP approved the booster dose recommendations at the meeting.
# Persistence of protective neutralizing antibodies after primary vaccination with JE-VC
Three clinical trials have provided data on persistence of protective neutralizing antibodies after a primary 2-dose series of JE-VC. In JE vaccine clinical trials, JE virus neutralizing antibody levels measured by plaque reduction neutralization test (PRNT) can be used as a surrogate for protection. A 50% PRNT (PRNT50) titer of ≥10 is accepted as an immunologic correlate of protection from JE in humans (3). In a study performed in central Europe (Austria, Germany, and Romania) to evaluate persistence of neutralizing antibodies among subjects who received 2 doses of JE-VC, 95% (172 of 181), 83% (151 of 181), 82% (148 of 181), and 85% (129 of 152) had protective antibodies at 6 months, 12 months, 24 months, and 36 months after receiving the first dose, respectively (Table 1) (4-6). A study that used similar methods but was performed in western and northern Europe (Germany and Northern Ireland) found that among adults receiving 2 doses of JE-VC, seroprotection rates were 83% (96 of 116) at 6 months, 58% (67 of 116) at 12 months, and 48% (56 of 116) at 24 months after the first vaccination (Table 1) (7). The manufacturer suggested that the different seroprotection rates in the two populations may have resulted from differences in prior vaccination against tick-borne encephalitis (TBE) virus, a related flavivirus. An estimated 75% of subjects in the first study might have received prior TBE vaccine, compared with none of the subjects in the second study. A higher JE virus neutralizing antibody response after the first dose of JE-VC previously had been found in subjects with preexisting TBE antibodies compared with those without TBE antibodies (8). In a third clinical trial, conducted in Austria and Germany, at 15 months after the first dose of a 2-dose JE-VC immunization series, 69% (137 of 198) of subjects had a protective neutralizing antibody titer (Table 1) (9).
# Safety and immunogenicity of a booster dose of JE-VC
Two clinical trials have provided data on the response to a booster dose of JE-VC. In a study conducted in Austria and Germany, 198 adults aged ≥18 years who had received a 2-dose primary series of JE-VC were administered a booster dose 15 months after the first dose (9). The percentage of subjects with a protective neutralizing antibody titer increased from 69% (137 of 198) on Day 0 before the booster dose to 100% (198 of 198) at Day 28 after the booster dose. Protective titers were found in 98% (194 of 197) at 6 months and 98% (191 of 194) at 12 months after the booster dose (Table 2).
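The two summary measures used throughout these trials, the seroprotection rate (the proportion of subjects with a PRNT50 titer ≥10) and the geometric mean titer reported next, are straightforward to compute. A minimal sketch follows; the titer values are made up for illustration and are not trial data.

```python
import math

def seroprotection_rate(titers, threshold=10):
    """Proportion of subjects at or above the accepted correlate of
    protection (PRNT50 titer >= 10)."""
    return sum(t >= threshold for t in titers) / len(titers)

def geometric_mean_titer(titers):
    """GMT: the exponential of the mean of the log titers."""
    return math.exp(sum(math.log(t) for t in titers) / len(titers))

titers = [5, 10, 40, 160, 640, 1280]           # illustrative PRNT50 titers
print(seroprotection_rate(titers))             # 0.833... (5 of 6 subjects)
print(round(geometric_mean_titer(titers), 1))  # ~80.0
```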
The geometric mean titer (GMT) before the booster was 23 and increased 40-fold to 900 at Day 28 after the booster dose. GMTs were 487 and 361 at 6 and 12 months after the booster, respectively (Table 2). During the 7 days following the booster dose, local adverse events were reported in subject diaries by 31% (60 of 195) of subjects. The most frequent local reactions were tenderness in 19% (37 of 193) and pain in 13% (25 of 195) (Table 3). Systemic adverse events were reported by 23% (44 of 190) of subjects within 7 days of the booster dose. The most commonly reported systemic reactions were headache in 11% (21 of 194) and fatigue in 10% (18 of 188) (6). No serious adverse events were reported during the 28 days following the booster dose. In a second study, a booster dose administered to 40 subjects who had received primary immunization but no longer had protective neutralizing antibody titers resulted in protective titers in all subjects when the booster was administered at 11 months (n = 16) or 23 months (n = 24) after the first dose (7). GMTs at 1 month after the booster increased to 676 and to 2,496 in the groups administered the dose at 11 months and 23 months after the first dose, respectively. Among the 16 subjects who received the booster dose at 11 months, all still had seroprotective titers 13 months later.
TABLE 2. Number and percentage of subjects with a protective Japanese encephalitis virus neutralizing antibody titer (≥10) and the geometric mean titers (GMT) prior to and at day 28, month 6, and month 12 after a booster dose of inactivated Vero cell culture-derived Japanese encephalitis vaccine (JE-VC) administered 15 months after dose 1 of a 2-dose primary JE-VC series
# Guidelines for use of a booster dose of JE-VC
ACIP recommends that if the primary series of JE-VC was administered >1 year previously, a booster dose may be given before potential JE virus exposure. ACIP recommendations should be consulted for information on prevention of JE and settings in which JE vaccine is recommended, can be considered, or is not recommended (1). Data on the response to a booster dose administered >2 years after the primary series of JE-VC are not available. Data on the need for and timing of additional booster doses also are not available. No data exist on the use of JE-VC as a booster dose after a primary series with inactivated mouse brain-derived JE vaccine (JE-MB). Adults aged ≥17 years who have received JE-MB previously and require further vaccination against JE virus should receive a 2-dose primary series of JE-VC. ACIP will review any additional data that are made available. Recommendations will be updated as needed.
# Update on Japanese Encephalitis Vaccine for Children - United States, May 2011
Inactivated mouse brain-derived Japanese encephalitis (JE) vaccine (JE-MB), the only JE vaccine that is licensed for use in children in the United States, is no longer available. This notice provides updated information regarding options for obtaining JE vaccine for U.S. children.
# JE among U.S. travelers
JE virus is the leading cause of vaccine-preventable encephalitis in Asia and the western Pacific. For most travelers to Asia, the risk for JE is low but varies on the basis of destination, duration, season, and activities. During the past 4 decades, 17 cases of JE have been reported among U.S. travelers and expatriates, including three cases among U.S. children aged <18 years (1,2). JE is a severe disease; 20%-30% of patients die, and 30%-50% of survivors have neurologic or psychiatric sequelae (3).
# Recommendations for prevention of JE among travelers
The Advisory Committee on Immunization Practices (ACIP) recommends that all travelers, including children, take precautions to avoid mosquito bites to reduce the risk for JE and other vector-borne infectious diseases (3). These precautions include using insect repellent, permethrin-impregnated clothing, and bed nets, and staying in accommodations with screened or air-conditioned rooms. Additional information on protection against mosquitoes and other arthropods is available in CDC's Health Information for International Travel (Yellow Book) (4). For some travelers who will be in a high-risk setting on the basis of season, location, duration, and activities, JE vaccine can reduce the risk for disease further (3).
# JE vaccine for U.S. children
JE-MB has been licensed in the United States since 1992 for use in adults and children aged ≥1 year. JE-MB has been associated with serious, but rare, allergic and neurologic adverse events (3). During 2002-2009, a total of 848,571 doses of JE-MB were distributed in the United States (mean: 106,071 doses per year), of which 534,330 (63%) doses were distributed to military health-care providers (5). During this period, an estimated 2,000-3,000 doses of JE-MB were distributed for use in U.S. children each year (Sanofi Pasteur, unpublished data, 2011). However, JE-MB is no longer being produced, and all remaining doses expire in May 2011. In 2009, the Food and Drug Administration (FDA) approved an inactivated Vero cell culture-derived JE vaccine (JE-VC) for use in adults aged ≥17 years. One pediatric dose-ranging study has been completed among 60 children aged 12-35 months in India (48 children received JE-VC, and 12 children received another inactivated mouse brain-derived JE vaccine) (6). A safety and immunogenicity study is ongoing among approximately 1,900 children aged 2 months-17 years in the Philippines, and a safety and immunogenicity bridging study has been initiated in the United States and other nonendemic countries with a targeted enrollment of approximately 100 children. Despite these ongoing studies, it likely will be several years before JE-VC is licensed in the United States for use in children. JE-VC product information is available online from FDA at / approvedproducts/ucm179132.htm.
# Current options for obtaining JE vaccine for U.S. children
For U.S. health-care providers interested in obtaining JE vaccine for pediatric patients they judge to be at risk, current options include 1) enrolling children in the ongoing clinical trial, 2) administering JE-VC off-label, or 3) arranging for children to receive JE vaccine at an international travelers' health clinic in Asia. The ongoing pediatric safety and immunogenicity trial with JE-VC is enrolling children aged 2 months-17 years at five U.S. sites (trial identifier NCT01047839). The study is open-label, and all enrollees receive 2 doses of JE-VC administered 28 days apart. A third study visit is required at 56 days after the first dose of vaccine. Additional information about the clinical trial is available online from the National Institutes of Health at . In addition, a list of U.S. clinical trial sites and contact information is available online from CDC at / jencephalitis/children.htm. JE-VC is FDA-licensed for use in adults aged ≥17 years. However, a health-care provider may choose to administer the vaccine off-label in children aged <17 years. Data from the one completed pediatric study have been published (6).
Additional information about the use of JE-VC in children is available from Novartis Medical Communications by telephone (877-683-4732) or e-mail ([email protected]). Several JE vaccines are manufactured and available for pediatric use in Asia but are not licensed in the United States. Vaccines available at international travelers' health clinics in Asia include another inactivated mouse brain-derived JE vaccine manufactured in South Korea, a live attenuated SA 14-14-2 vaccine manufactured in China, and another Vero cell culture-derived JE vaccine manufactured in Japan. The recommended number of doses and schedule varies by vaccine and country. A partial list of international travelers' health clinics in Asia that administer JE vaccines to children is available online from CDC at / children.htm.
# Measles - United States, January-May 20, 2011
On May 24, this report was posted as an MMWR Early Release on the MMWR website (). Measles is a highly contagious, acute viral illness that can lead to serious complications and death. Endemic or sustained measles transmission has not occurred in the United States since the late 1990s, despite continued importations (1). During 2001-2008, a median of 56 (range: 37-140) measles cases were reported to CDC annually (2); during the first 19 weeks of 2011, 118 cases of measles were reported, the highest number reported for this period since 1996. Of the 118 cases, 105 (89%) were associated with importation from other countries, including 46 importations (34 among U.S. residents traveling abroad and 12 among foreign visitors). Among those 46 cases, 40 (87%) were importations from the World Health Organization (WHO) European and South-East Asia regions. Of the 118 patients, 105 (89%) were unvaccinated. Forty-seven (40%) patients were hospitalized, and nine had pneumonia. The increased number of measles importations into the United States this year underscores the importance of vaccination to prevent measles and its complications. Measles cases are reported by state health departments to CDC, and confirmed cases are reported via the National Notifiable Disease Surveillance System (NNDSS) using standard case definitions (3). Cases are considered internationally imported if at least some of the exposure period (7-21 days before rash onset) occurred outside the United States and rash occurred within 21 days of entry into the United States, with no known exposure to measles in the United States during that time. Import-associated cases include 1) internationally imported cases; 2) cases that are related epidemiologically to imported cases; and 3) imported virus cases for which an epidemiologic link has not been identified but the viral genotype detected suggests recent importation.- Laboratory confirmation of measles is made by detection in serum of measles-specific immunoglobulin M antibodies, isolation of measles virus, or detection of measles virus RNA by nucleic acid amplification in an appropriate clinical specimen (e.g., nasopharyngeal/oropharyngeal swabs, nasal aspirates, throat washes, or urine). For this report, persons with reported unknown or undocumented vaccination status are considered unvaccinated. An outbreak of measles is defined as a chain of transmission with three or more confirmed cases. During January 1-May 20, 2011, a total of 118 cases were reported from 23 states and New York City (Figure 1), the highest reported number for the same period since 1996 (Figure 2).
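The import-classification rule above is essentially a date-window test: part of the 7-21-day pre-rash exposure period fell outside the United States, and rash began within 21 days of entry. A minimal sketch follows; the function and parameter names are illustrative, and the full case definition's additional requirement of no known U.S. exposure during that window is noted only as a comment.

```python
from datetime import date, timedelta

def is_internationally_imported(rash_onset: date, entry_to_us: date,
                                abroad_from: date, abroad_until: date) -> bool:
    """Imported case: some of the exposure period (7-21 days before rash
    onset) was spent abroad, and rash began within 21 days of U.S. entry.
    (The full definition also requires no known U.S. exposure in that
    window, omitted here for brevity.)"""
    exposure_start = rash_onset - timedelta(days=21)
    exposure_end = rash_onset - timedelta(days=7)
    exposure_abroad = (abroad_from <= exposure_end
                       and abroad_until >= exposure_start)
    rash_soon_after_entry = 0 <= (rash_onset - entry_to_us).days <= 21
    return exposure_abroad and rash_soon_after_entry

# A traveler abroad April 20-May 5 with U.S. entry May 5 and rash May 10:
print(is_internationally_imported(date(2011, 5, 10), date(2011, 5, 5),
                                  date(2011, 4, 20), date(2011, 5, 5)))  # True
```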
Patients ranged in age from 3 months to 68 years; 18 (15%) were aged <12 months, 24 (20%) were aged 1-4 years, 23 (19%) were aged 5-19 years, and 53 (45%) were aged ≥20 years. Measles was laboratory-confirmed in 105 (89%) cases, and measles virus RNA was detected in 52 (44%) cases. Among the 118 cases, 105 (89%) were import-associated, of which 46 (44%) were importations from at least 15 countries (Table), 49 (47%) were import-linked, and 10 (10%) were imported virus cases. The source of the 13 cases that were not import-associated could not be determined. Among the 46 imported cases, most were among persons who acquired the disease in the WHO European Region (20) or South-East Asia Region (20), and 34 (74%) occurred in U.S. residents traveling abroad. Of the 118 cases, 47 (40%) resulted in hospitalization. Nine patients had pneumonia, but none had encephalitis and none died. All but one hospitalized patient were unvaccinated. The vaccinated patient reported having received 1 dose of measles-containing vaccine and was hospitalized for observation only. Hospitalization rates were highest among infants and children aged <5 years (52%), but rates also were high among children and adults aged ≥5 years (33%). Unvaccinated persons accounted for 105 (89%) of the 118 cases. Among the 45 U.S. residents aged 12 months-19 years who acquired measles, 39 (87%) were unvaccinated, including 24 whose parents claimed a religious or personal exemption and eight who missed opportunities for vaccination. Among the 42 U.S. residents aged ≥20 years who acquired measles, 35 (83%) were unvaccinated, including six who declined vaccination because of philosophical objections to vaccination. Of the 33 U.S. residents who were vaccine-eligible and had traveled abroad, 30 were unvaccinated and one had received only 1 of the 2 recommended doses.
FIGURE 1. Distribution and origin of reported measles cases (N = 118) - United States, January 1-May 20, 2011
- Additional information is available at / nndss/casedef/measles_2010.htm.
Nine outbreaks accounted for 58 (49%) of the 118 cases. The median outbreak size was four cases (range: 3-21). In six outbreaks, the index case acquired measles abroad; the source of the other three outbreaks could not be determined. Transmission occurred in households, child care centers, shelters, schools, emergency departments, and at a large community event. The largest outbreak occurred among 21 persons in a Minnesota population in which many children were unvaccinated because of parental concerns about the safety of measles, mumps, and rubella (MMR) vaccine. That outbreak resulted in exposure of many persons and infection of at least seven infants too young to receive MMR vaccine (4).
# Reported by
Div of Viral Diseases, National Center for Immunization and Respiratory Diseases, CDC. Corresponding contributor: Huong McLean, [email protected], 404-639-7714.
# Editorial Note
As a result of high vaccination coverage, measles elimination (i.e., the absence of endemic transmission) was achieved in the United States in the late 1990s (1) and likely in the rest of the Americas since the early 2000s (5). However, as long as measles remains endemic in the rest of the world, importations into the Western Hemisphere will continue. The unusually large number of importations into the United States in the first 19 weeks of 2011 is related to recent increases in measles in countries visited by U.S. travelers.
The most frequent sources of importation in 2011 were countries in the WHO European Region, which has accounted for the majority of measles importations into the United States since 2005 (2), and the South-East Asia Region. This year, 33 countries in the WHO European Region have reported an increase in measles. France, the source of most of the importations from the European Region, is experiencing a large outbreak, with approximately 10,000 cases reported during the first 4 months of 2011, including 12 cases of encephalitis, a complication that often results in permanent neurologic sequelae, 360 cases of severe measles pneumonia, and six measles-related deaths (6). Measles can be severe and is highly infectious; following exposure, up to 90% of susceptible persons develop measles. Measles can lead to life-threatening complications. During 1989-1991, a resurgence of measles in the United States resulted in >100 deaths among >55,000 cases reported, reminding U.S. residents of the potential severity of measles, even in the era of modern medical care (7). In the years that followed, the United States witnessed the return of subacute sclerosing panencephalitis among U.S. children, a rare, fatal neurologic complication of measles that had all but disappeared after measles vaccine was introduced in the 1960s (8). Children and adults who remain unvaccinated and develop measles also put others in their community at risk. For infants too young for routine vaccination (age <12 months) and persons with medical conditions that contraindicate measles immunization, the risk for measles complications is particularly high. These persons depend on high MMR vaccination coverage among those around them to protect them from exposure. In the United States this year, infants aged <12 months accounted for 15% of cases and 15% of hospitalizations. In Europe in recent years, measles has been fatal for several children and adolescents, including some who could not be vaccinated because they were immune compromised.
- Patient had visited more than one country where measles is endemic during the incubation period, and exposure could have occurred in any of the countries listed.
† Although the patient acquired measles in the Dominican Republic, the likely source of infection was a French tourist with measles who stayed in an adjacent room at the same resort at the same time as the patient. The genotype identified in this patient was D4, a genotype commonly circulating in France.
MMR vaccine is safe and highly effective in preventing measles and its complications. MMR vaccine is recommended routinely for all children at age 12-15 months, with a second dose at age 4-6 years. For adults with no evidence of immunity to measles, † 1 dose of MMR vaccine is recommended unless the adult is in a high-risk group (i.e., health care personnel, international travelers, or students at post-high school educational institutions), in which case 2 doses of MMR vaccine are recommended. Measles is endemic in many countries, and exposures might occur in airports and in countries of travel. All travelers aged ≥6 months are eligible to receive MMR vaccine and should be vaccinated before travel (10). Maintaining high immunization rates with MMR vaccine is the cornerstone of outbreak prevention.
Rapid control efforts by state and local public health agencies, which are both time-intensive and costly, have been a key factor in limiting the size of outbreaks and preventing the spread of measles into communities with increased numbers of unvaccinated persons. Nonetheless, maintenance of high 2-dose MMR vaccination coverage is the most critical factor for sustaining elimination. For measles, even a small decrease in coverage can increase the risk for large outbreaks and endemic transmission, as occurred in the United Kingdom in the past decade (9). Because of ongoing importations of measles to the United States, health-care providers should suspect measles in persons with a febrile rash illness and clinically compatible symptoms (e.g., cough, coryza, and/or conjunctivitis) who have recently traveled abroad or have had contact with travelers. Providers should isolate and report suspected measles cases immediately to their local health department and obtain specimens for measles testing, including viral specimens for confirmation and genotyping.
# What is already known on this topic?
Measles, mumps, and rubella (MMR) vaccine is highly effective in preventing measles and its complications. Sustained measles transmission was eliminated from the United States in the late 1990s, but the disease remains common in many countries globally, and cases of measles are imported into the United States regularly.
# What is added by this report?
During the first 19 weeks of 2011, 118 cases of measles were reported in the United States, the highest number for the same period in any year since 1996, and hospitalization rates were high (40%). Of the 105 import-associated cases, 46 (40% of all cases) were importations, including 34 (74%) among U.S. residents who had recently traveled abroad.
# What are the implications for public health practice?
High 2-dose MMR vaccine coverage is critical for decreasing the risk for reestablishment of endemic measles transmission after importation of measles into the United States. Before any international travel, infants aged 6-11 months should receive 1 dose of MMR vaccine, and persons aged ≥12 months should receive 2 doses of MMR vaccine at least 28 days apart or have other evidence of immunity to measles.
# Notice to Readers
# Updated "N" Indicators for the Year 2010 in National Notifiable Diseases Surveillance System Tables
The 2010 Council of State and Territorial Epidemiologists (CSTE) State Reportable Conditions Assessment (2010 SRCA) has collected data from 55 reporting jurisdictions (50 U.S. states, the District of Columbia, New York City, and three territories) to determine which of the nationally notifiable conditions (NNC) were reportable in each reporting jurisdiction in 2010. The 2010 SRCA gathered information regarding whether a condition is 1) explicitly reportable (i.e., listed as a specific disease or as a category of diseases on reportable disease lists), 2) implicitly reportable (i.e., included in a general category of the reportable disease list, such as "rare diseases of public health importance"), or 3) not reportable within each jurisdiction. Only conditions that were explicitly reportable were considered reportable based on the 2010 SRCA methodology. Results of the 2010 SRCA will be used to indicate whether each NNC is or is not reportable for the specified period and reporting jurisdiction.
NNC that are not reportable are noted with an "N" indicator (for "not reportable") in the MMWR Table II weekly update (Provisional cases of selected notifiable diseases, United States) and in the MMWR Summary of Notifiable Diseases-United States, 2010. This notation will allow readers to distinguish whether 1) no cases were reported even though the condition is reportable or 2) no cases were reported because the condition is not reportable. The 2010 SRCA data collection and validation concluded in April 2011; results will be used to populate the "N" indicators for National Notifiable Diseases Surveillance System (NNDSS) data in the 2011 MMWR tables for the current week and for both the 2010 and 2011 cumulative year columns. The 2011 NNDSS data displayed in the MMWR weekly provisional tables will reflect reporting requirements gathered from the 2010 SRCA until official 2011 SRCA results are available.
# Announcement
# Preventive Medicine Residency and Fellowship Applications Deadline - September 15, 2011
The Preventive Medicine Residency and Fellowship (PMR/F) programs are accepting applications from physicians for the residency and from veterinarians, dentists, nurses, physician assistants, and international medical graduates for the fellowship. Applicants with public health and applied epidemiologic practice experience who seek to become preventive medicine and population health specialists and public health leaders are encouraged to apply. The PMR/F prepares clinicians for leadership roles in public health at international, federal, state, and local levels through instruction and supervised practical experiences focused on translating epidemiology to public health practice, management, and policy and program development. Development of leadership and management competency is emphasized. Residents and fellows conduct their training at a CDC location or in a state or local health department. PMR/F alumni occupy many leadership positions at CDC, at state and local health departments, in academia, and in private-sector agencies. Completion of the residency, which is accredited by the Accreditation Council for Graduate Medical Education for 24 months of training, qualifies graduates to apply for certification by the American Board of Preventive Medicine (ABPM) in Public Health and General Preventive Medicine. Selected candidates can be considered for 1 year of residency training, which also should qualify them to apply for ABPM certification. Training for PMF clinicians also is 1 year. Applications are being accepted for the class that begins in mid-June 2012. Applications must be submitted online by September 15, 2011, and supporting documents must be received in the PMR/F office by that same day. Additional information regarding the programs, eligibility criteria, and application process is available at , by telephone at 404-498-6140, or by e-mail at [email protected].
# Erratum
# Vol. 60, No. RR-3
In the MMWR Recommendations and Reports "Updated Norovirus Outbreak Management and Disease Prevention Guidelines," an error occurred on page 9. The second sentence under the heading "Environmental Specimens" should read: "If a food or a water source is strongly suspected as the source of an outbreak, a sample should be obtained as early as possible with respect to the time of exposure, and CDC or FDA should be contacted for further guidance on testing. Food samples should be stored frozen at -4°F (-20°C), and water samples should be stored refrigerated or chilled on ice at 39°F (4°C)."
- Hypertension was defined as systolic blood pressure ≥140 mmHg or diastolic blood pressure ≥90 mmHg or currently taking medication to lower blood pressure, based on positive responses to the following questions: "Because of your high blood pressure/hypertension have you ever been told to take prescribed medicine?" and "Are you now taking a prescribed medicine?" Undiagnosed hypertension was a finding of hypertension and a negative response to the following question: "Have you ever been told by a doctor or other health professional that you had hypertension, also called high blood pressure?"
† Health insurance coverage is at the time of interview. Public coverage includes Medicaid, Children's Health Insurance Program (CHIP), state-sponsored or other government-sponsored health plans, Medicare (disability), or military health plans (TRICARE, VA, or CHAMP-VA). Persons with both public and private insurance coverage were included in the private coverage category only.
§ 95% confidence interval.
During 2005-2008, among U.S. adults aged 20-64 years with hypertension, 40% of those with no health insurance had hypertension that was undiagnosed, compared with 21% of those with private insurance and 16% of those with public insurance. In the 20-39 years and 40-64 years age groups, undiagnosed hypertension also was more common among persons with no health insurance compared with those with private or public insurance.
TABLE II. Provisional cases of selected notifiable diseases, United States (state- and territory-level counts not shown)
- Case counts for reporting year 2010 and 2011 are provisional and subject to change. For further information on interpretation of these data, see / nndss/phs/files/ProvisionalNationa%20NotifiableDiseasesSurveillanceData20100927.pdf. Data for TB are displayed in Table IV, which appears quarterly.
† Dengue Fever includes cases that meet criteria for Dengue Fever with hemorrhage, and other clinical and unknown case classifications.
§ DHF includes cases that meet criteria for dengue shock syndrome (DSS), a more severe form of DHF.
¶ Contains data reported through the National Electronic Disease Surveillance System (NEDSS).
The Morbidity and Mortality Weekly Report (MMWR) Series is prepared by the Centers for Disease Control and Prevention (CDC) and is available free of charge in electronic format. To receive an electronic copy each week, visit MMWR's free subscription page at .html. Paper copy subscriptions are available through the Superintendent of Documents, U.S. Government Printing Office, Washington, DC 20402; telephone 202-512-1800. Data presented by the Notifiable Disease Data Team and 122 Cities Mortality Data Team in the weekly MMWR are provisional, based on weekly reports to CDC by state health departments. Address all inquiries about the MMWR Series, including material to be considered for publication, to Editor, MMWR Series, Mailstop E-90, CDC, 1600 Clifton Rd., N.E., Atlanta, GA 30333 or to [email protected].
All material in the MMWR Series is in the public domain and may be used and reprinted without permission; citation as to source, however, is appreciated. Use of trade names and commercial sources is for identification only and does not imply endorsement by the U.S. Department of Health and Human Services. References to non-CDC sites on the Internet are provided as a service to MMWR readers and do not constitute or imply endorsement of these organizations or their programs by CDC or the U.S. Department of Health and Human Services. CDC is not responsible for the content of these sites. URL addresses listed in MMWR were current as of the date of publication.
† Respondents who reported currently smoking manufactured (i.e., commercial) cigarettes on a "daily" or "less than daily" basis. The term "smokers" in this report refers to current smokers of manufactured cigarettes. Smokers of other tobacco products, such as bidis, kreteks, hand-rolled cigarettes, cigars, pipes, and waterpipes, who did not also smoke manufactured cigarettes are not included in this analysis.
§ "In the last 30 days, did you notice any health warnings on cigarette packages?" and "In the last 30 days, have warning labels on cigarette packages led you to think about quitting?"
Tobacco use is the leading preventable cause of death; this year approximately 5 million persons worldwide will die from tobacco-related heart attacks, strokes, cancers, and other diseases (1). Sponsored by the World Health Organization (WHO), World No Tobacco Day is observed every year on May 31. This year, World No Tobacco Day highlights the WHO Framework Convention on Tobacco Control (WHO FCTC) (1). Initiated by a resolution at WHO's 1996 World Health Assembly, WHO FCTC took effect in 2005 and is maintained by the United Nations. A total of 172 countries have adopted the treaty (2), making WHO FCTC one of the most widely embraced evidence-based treaties in United Nations history. WHO FCTC urges all countries to ratify the treaty, fully implement its provisions, and adopt its guidelines (1), and WHO provides country-level assistance for implementing effective tobacco control measures (3). CDC has supported this effort by working with partners to provide technical assistance and infrastructure support for the creation of sustainable surveillance systems (the Global Adult Tobacco Survey and the Global Youth Tobacco Survey). Additional information is available at http://www.who.int/fctc/en/index.html.
# Cigarette Package Health Warnings and Interest in Quitting Smoking - 14 Countries, 2008-2010
The World Health Organization (WHO) Framework Convention on Tobacco Control (FCTC) requires health warnings on tobacco product packages sold in countries that ratified the WHO FCTC treaty (1). These warnings are expected to 1) describe the harmful effects of tobacco use; 2) be approved by the appropriate national authority; 3) appear on at least 30%, and ideally 50% or more, of the package's principal display areas; 4) be large, clear, visible, and legible in the country's principal language(s); 5) have multiple, rotating messages; and 6) preferably use pictures or pictograms. To assess the effects of cigarette package health warnings on interest in quitting smoking among smokers of manufactured cigarettes aged ≥15 years, this report examines 2008-2010 data from the Global Adult Tobacco Survey (GATS) in 14 WHO FCTC countries. Among men, the prevalence of manufactured cigarette smoking ranged from 9.6% in India to 59.3% in Russia. Among men in 12 of the countries and women in seven countries, >90% of smokers reported noticing a package warning in the previous 30 days. The percentage of smokers thinking about quitting because of the warnings was >50% in six countries and >25% among men and women in all countries except Poland. WHO has identified providing tobacco health information, including graphic health warnings on tobacco packages, as a powerful "best buy" in combating noncommunicable disease (2). Implementing effective warning labels as a component of a comprehensive approach can help decrease tobacco use and its many health consequences.
GATS is a nationally representative household survey conducted among persons aged ≥15 years using a standardized questionnaire, sample design, data collection method, and analysis protocol to obtain measures on key tobacco control indicators and ensure comparability across countries.* GATS was conducted once in each of the 14 countries during 2008-2010 by national governments, ministries of health, survey-implementing agencies, and international partners. In each country, a multistage cluster sample design was used, with households selected proportional to population size, and data were weighted to reflect the noninstitutionalized population aged ≥15 years. For this analysis, current smokers of manufactured cigarettes† were asked whether they had noticed health warnings on a cigarette package in the previous 30 days and whether the labels had led them to think about quitting smoking.§ Responses were analyzed by sex and, within sex strata, by age and education level using bivariate analysis within individual countries. Differences in response estimates were considered statistically significant if 95% confidence intervals did not overlap. Overall response rates ranged from 65.1% in Poland to 97.7% in Russia. The health warnings on cigarette packages in each country at the time GATS was conducted were described according to WHO FCTC guidelines (3,4).

All GATS countries had warning labels on cigarette packages describing the harmful effects of smoking at the time their surveys were conducted. Four of the 14 countries (Brazil, Egypt, Thailand, and Uruguay) had pictorial warnings. A fifth country, India, introduced pictorial warnings in 2009 and had both text and pictorial warnings in circulation when GATS was conducted (Table 1).

In all 14 countries, men were more likely than women to be cigarette smokers. Among men, prevalence of smoking ranged from 9.6% in India to 59.3% in Russia (Table 2). Among women, prevalence of smoking was <25% in all countries and <2% in Bangladesh, China, Egypt, India, Thailand, and Vietnam. More than 90% of smokers reported noticing a package warning in the previous 30 days among men in 12 of the countries and among women in seven countries; in some countries, including Egypt, not enough women reported current smoking to calculate this percentage. Smokers aged ≥65 years were less likely to notice warnings in Bangladesh (men), Brazil (men and women), Mexico (men), Philippines (men and women), Thailand (men), and Ukraine (men) (Table 2). Smokers who had not completed primary school education were less likely to have noticed warnings in Bangladesh (men), China (men and women), India (men), Mexico (women), Philippines (men and women), Turkey (women), and Vietnam (men) (Table 2). Among smokers who noticed a package warning, the percentage thinking about quitting because of the warning was >50% in six GATS countries (Bangladesh, Brazil, India, Thailand, Ukraine, and Vietnam) and >25% among both men and women in all countries except one (Poland). Older male smokers were less likely to think about quitting in India and Uruguay; no other age group differences were noted.

This report is the first to provide survey results from all 14 countries that participated in GATS during 2008-2010. In these countries, the prevalence of smoking manufactured cigarettes varied widely and was more common among men. Warning the public about the dangers of tobacco is one of the strategies in WHO's MPOWER package to combat the tobacco epidemic (3). Most of these countries had met the minimum WHO FCTC health warning label criteria for cigarette packages at the time GATS was conducted.
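The significance rule used in this analysis (two estimates differ only when their 95% confidence intervals fail to overlap) is easy to state precisely. The Python sketch below is a minimal illustration of that rule under a simplifying assumption of simple random sampling; the actual GATS analysis used survey weights and design-based variance estimation, and the counts shown are invented.

```python
import math

def prop_ci(successes, n, z=1.96):
    """Wald 95% confidence interval for a proportion (illustration only;
    GATS used survey weights and design-based variance estimation)."""
    p = successes / n
    half_width = z * math.sqrt(p * (1 - p) / n)
    return (max(0.0, p - half_width), min(1.0, p + half_width))

def intervals_overlap(ci_a, ci_b):
    """True if two confidence intervals share any common value."""
    return ci_a[0] <= ci_b[1] and ci_b[0] <= ci_a[1]

# Invented counts: smokers noticing warnings in two education strata.
ci_low_edu = prop_ci(620, 800)   # ~77.5% noticed
ci_high_edu = prop_ci(760, 800)  # ~95.0% noticed

# Per the report's rule, the difference counts as statistically
# significant only when the intervals do NOT overlap.
print(not intervals_overlap(ci_low_edu, ci_high_edu))  # True
```

Note that the non-overlap rule is conservative: two estimates can differ significantly even when their intervals overlap slightly.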
The majority of smokers noticed the health warnings, and in most countries >25% of those who noticed the warnings said the warnings led them to think about quitting. These results indicate that package warnings can be effective across various populations and settings, including countries in which cigarette smoking prevalence currently is low.

To be effective, cigarette package warnings must capture smokers' attention and educate them about the health effects of tobacco use (5). The WHO FCTC guidelines provide parameters to accomplish these objectives by emphasizing features that increase the salience of warnings (1,4). In several studies, prominent, pictorial warnings have been found to be the most effective in communicating the harms of smoking (4,6). Smokers who perceive a greater health risk from smoking are more likely to think about quitting and to quit successfully (6). Further, evidence indicates that warnings are more likely to be effective if they elicit strong emotions (such as fear), seem personally relevant, and increase confidence in the ability to quit (4,7). For example, a comparative analysis of responses to labels in Brazil, Mexico, and Uruguay found that the Brazilian warnings depicting human suffering had the strongest impact on thinking about quitting (8). Rotating warnings also is important because the impact of an individual label decreases over time (5). Thus, a warning that is small in total size or font size, has been in circulation for a long time, or lacks informational content that generates an emotional response likely will not have the strongest possible impact. Graphic warnings have the potential to reach those who do not notice or read text-only warnings; they also can better evoke emotional responses, increase knowledge of health risks, and reinforce motivations to quit smoking (9). Therefore, the WHO FCTC guidelines strongly encourage the use of graphic warnings (5).

Low education level and older age were associated with not noticing warnings in some countries; virtually all of these countries had text-only warnings. Women were less likely than men to notice warnings in India, China, and Vietnam, countries where cigarette smoking prevalence is very low among women. These findings emphasize the importance of using warnings that are effective in communicating the risks of smoking to all cigarette smokers and of using other evidence-based tobacco control measures that reach populations that are not frequently exposed to cigarette packages. Warnings were more effective at getting smokers to think about quitting in some countries than in others.

# What is already known on this topic?

Warning the public about the dangers of tobacco is one of the key strategies in the World Health Organization's MPOWER package to combat tobacco use.

# What is added by this report?

For the first time, data from all 14 Global Adult Tobacco Survey (GATS) countries are available. In these countries, the prevalence of smoking manufactured cigarettes varies widely and is more common among men. The majority of smokers noticed package warning labels. Among smokers who noticed a health warning, the percentage thinking about quitting because of the warning was >50% in six GATS countries.

# What are the implications for public health practice?

Strong health warning labels on cigarette packages are effective in motivating smokers to consider quitting. These findings emphasize the importance of using warnings that are effective in communicating the risks of smoking to all cigarette smokers.
Brazil and Thailand, countries with numerous prominent and graphic pictorial warnings in rotation, had among the highest prevalences of smokers thinking about quitting because of the warnings; these warnings received WHO's highest rating (3). However, reported thinking about quitting smoking also was relatively high in Bangladesh and Vietnam, where warnings covered less of the package and were text-only. The reasons for these findings are not immediately clear but might relate to the relative importance of package warnings among other contextual factors, such as smokers' baseline knowledge about health risks, level of interest in quitting, and level of tobacco dependence, as well as concurrent tobacco control efforts and social norms surrounding tobacco use (7). Further research might help elucidate these factors and determine the extent to which thinking about quitting because of warnings leads to quit attempts in GATS countries.

The findings in this report are subject to at least five limitations. First, all data were self-reported, and social norms (e.g., the unacceptability in some countries of women smoking) might have affected responses. Second, the education categories used in Brazil are not comparable to the categories used in this analysis, so Brazil's data were not evaluated for differences in prevalence by education. Third, this analysis included only smokers of manufactured cigarettes; however, other tobacco products (e.g., bidis, kreteks, hand-rolled cigarettes, waterpipes, and smokeless tobacco) are commonly used in India and other GATS countries. Fourth, the prevalence of smoking among women is low in some countries, so analyzing or interpreting results on the impact of package warnings among women was not possible. Finally, GATS was not designed to evaluate the effectiveness of individual health warning labels, and its core questions did not distinguish between the different labels in circulation in a country.

After GATS was conducted, Mexico, Philippines, Turkey, and Ukraine passed legislation requiring pictorial warning labels, and Thailand and Uruguay increased the size of their warnings. Worldwide, a majority of countries now have warnings on cigarette packages, but their features and strength vary (7). As of 2010, approximately 30 countries had pictorial warning labels covering at least 50% of the package (7), and additional countries were developing such labels.¶ Future GATS will allow tracking of progress toward key tobacco use and control indicators.

Smokers view their cigarette packages every time they remove a cigarette; therefore, the cigarette package represents a powerful vehicle to deliver health warnings directly to smokers. Nonsmokers and former smokers also can be discouraged from smoking by viewing comprehensive warnings (7). WHO has identified price increases; smoke-free policies; bans on tobacco advertising, promotion, and sponsorship; and providing tobacco health information via mass media campaigns and graphic health warnings as tobacco "best buys"** because they can reduce tobacco initiation, help to prevent progression from initiation to addiction, increase cessation, decrease consumption, and change social norms (2). Providing information about the dangers of using tobacco products with package warnings is a simple and cost-effective strategy to motivate quit attempts, thus helping to prevent the life-threatening effects of tobacco use (9,10).
# Human Jamestown Canyon Virus Infection - Montana, 2009

Jamestown Canyon virus (JCV) is a mosquito-borne zoonotic pathogen belonging to the California serogroup of bunyaviruses. Although JCV is widely distributed throughout temperate North America, reports of human JCV infection in the United States are rare. This is the first report of human JCV infection detected in Montana, one of only 15 cases reported in the United States since 2004, when JCV infection became reportable. On May 26, 2009, a man aged 51 years with no travel history outside of Montana went to a local emergency department immediately following onset of fever, severe frontal headache, dizziness, left-sided numbness, and tingling. His blood pressure was elevated. Stroke was ruled out, oxygen was administered, medication was prescribed for hypertension, and the patient was sent home. One week later, the patient visited his primary-care physician complaining of continued neurologic symptoms consistent with acute febrile encephalitis and reported recent mosquito bites. Although West Nile virus (WNV) disease was diagnosed based on detection of WNV immunoglobulin M (IgM) and G (IgG) antibodies, subsequent testing indicated that the WNV antibodies were from a past infection and that his illness was caused by JCV. The final diagnosis of JCV infection was based on positive JCV-specific IgM enzyme-linked immunosorbent assay (ELISA) results and a fourfold rise in JCV plaque reduction neutralization test (PRNT) titers between paired samples. This finding represents a previously unrecognized risk for JCV infection in Montana; clinicians should consider JCV infection when assessing patients for suspected arboviral infections.

# Case Report

On May 26, 2009, a previously healthy man aged 51 years with no travel history outside of Montana went to a local emergency department immediately following onset of fever, severe acute frontal headache, dizziness, left-sided numbness, and tingling. No other symptoms were noted. Results of a physical examination were normal, except for an elevated blood pressure of 214/119 mmHg. Blood chemistries and cardiac enzyme tests were within normal limits, except for an elevated glucose of 130 mg/dL (normal: 70-110 mg/dL). Results of an electrocardiogram, magnetic resonance imaging, and computed tomography scan of the brain were normal. Oxygen was administered to the patient, telmisartan was prescribed for hypertension, and he was sent home.

A week later, on June 2, the patient visited his primary-care physician complaining of fever, persistent headache, and new onset of muscle pain and weakness. The physician considered the patient's symptoms to be consistent with a neurologic illness and evaluated the patient further for a possible stroke or arboviral infection. A carotid Doppler test showed no evidence of abnormal arterial blood flow. A lumbar puncture performed on June 11 showed clear, colorless cerebrospinal fluid with no leukocytes or erythrocytes, and bacterial culture showed no growth at 72 hours; no tests for virus were performed. The patient was referred to and visited a neurologist on July 6. The neurologist found no evidence of stroke, diagnosed a complex migraine, and prescribed medication for headache management. The patient's symptoms gradually improved, and he reported no residual symptoms 6 months after illness onset.
On visiting his primary-care physician and during interviews conducted by the local health department and Montana Department of Public Health and Human Services (DPHHS), the patient reported recent exposure to mosquitoes while working outdoors around his home, which was located in a rural area of Montana. An acute-phase serum sample collected on June 2 (1 week after symptom onset) tested positive for WNV-specific IgM and IgG by ELISA at the Montana Public Health Laboratory (MTPHL). These laboratory results, in combination with the patient's symptoms and history of recent mosquito bites, supported a presumptive diagnosis of WNV disease. The acute sample was then sent to CDC's arbovirus diagnostic laboratory (CDC-ADL) in Fort Collins, Colorado, to confirm the diagnosis by PRNT. Testing at CDC-ADL was positive for WNV-specific IgM and IgG antibodies, with a neutralizing titer of 320. Testing also was positive for St. Louis encephalitis virus (SLEV)-specific IgG antibodies, but a negative SLEV-specific IgM antibody test and a neutralizing titer of 10 suggested cross-reactive flaviviral antibodies.

An initial convalescent serum sample drawn on June 11 (16 days after symptom onset) also tested positive for WNV-specific IgM and IgG by ELISA at MTPHL but was not available for testing at CDC-ADL. However, another convalescent serum sample was obtained on December 1 (189 days after symptom onset) and was tested at CDC-ADL. Results indicated persistence of WNV-specific IgM and IgG antibodies and stable neutralizing titers (Table). Because the stable titers suggested a previously acquired WNV infection (>6 months before illness onset), WNV avidity testing was obtained from the Viral Zoonoses Section, National Microbiology Laboratory, Public Health Agency of Canada (NML-PHAC), in Winnipeg, Manitoba, Canada. Testing found high-avidity WNV IgG, strongly suggesting that the WNV antibodies were from a past WNV infection (1).

In addition to WNV testing, CDC-ADL tested the acute specimen collected on June 2 for antibodies against other arboviruses. Results were equivocal for IgM and IgG antibodies against La Crosse virus (LACV) by ELISA. Neutralizing titers of 40 against LACV and 80 against JCV suggested a possible recent infection with a California serogroup virus (Table). Follow-up testing on the day 189 sample was negative for LACV IgM antibodies by ELISA but showed a twofold increase in LACV neutralizing titers and a fourfold increase in JCV titers. These results suggested that the patient's infection most likely was caused by JCV. To confirm the diagnosis, samples were sent to NML-PHAC for testing with their recently developed IgM ELISA assays incorporating JCV antigen. Patient sera obtained June 11 and December 1 were positive for JCV-specific IgM antibodies (Table). The presence of JCV-specific IgM and the fourfold diagnostic rise in JCV-neutralizing antibody titers confirmed the diagnosis of JCV infection. This finding indicated that JCV is present in Montana and that a risk for human infection exists.

# Editorial Note

Arthropod-borne viruses (i.e., arboviruses) are transmitted to humans primarily through bites from infected mosquitoes or ticks. Most arboviruses of public health importance belong to one of three virus genera: Flavivirus, Alphavirus, and Bunyavirus. Human cases caused by the following domestic arboviruses are nationally reportable to CDC: West Nile, St.
Louis encephalitis, Powassan, eastern equine encephalitis, western equine encephalitis, and California serogroup viruses (i.e., La Crosse, Jamestown Canyon, California encephalitis, Keystone, snowshoe hare, and trivittatus).

JCV is distributed throughout temperate North America, where it circulates primarily between deer and various mosquito species (2-4). Despite its wide geographic range, only 15 human JCV infections (mean: <3 per year) have been reported in the United States since 2004, when JCV became a reportable condition, and those have originated predominantly from the midwestern and northeastern states. JCV infection initially was described in the early 1970s as causing a mild febrile illness in humans (5). Serosurveys in Connecticut and New York have shown evidence of JCV infection in up to 12% of the population (3,6). Despite early descriptions of mild illness caused by JCV, at least 11 subsequent cases with moderate-to-severe meningoencephalitis have been described: 10 in the early 1980s and one in 2001 (3,6). A retrospective study of patients with central nervous system manifestations and serologic findings for California serogroup viruses during 1971-1981 confirmed that 41 of 53 patients (77%) had antibodies to JCV, indicating that JCV originally was underdiagnosed in these patients (7). In comparison with clinical illness caused by LACV, JCV illness has been described as affecting adults and as more likely to cause meningitis (6,7). Furthermore, whereas LACV infections in humans generally occur in August, JCV infections can occur earlier, in May and June, and continue through the end of summer, likely because the seasonal distribution of mosquito vectors differs for each virus (8).

Although the Montana patient with JCV infection was suspected to have an acute WNV infection, human cases of WNV infection in Montana typically are not reported until late July, with the majority of cases occurring in late August and early September. The onset of illness for this patient was during late spring, which is consistent with approximately 40% of recognized human JCV infections. The differences in the seasonal distribution of these diseases likely are related to the mosquito species that transmit the viruses. Mosquitoes belonging to snow-melt Aedes species are common vectors of JCV, emerge early in spring, and are distributed throughout Montana (3,9). Vertical transmission of JCV in mosquitoes, overwintering of the virus in mosquito eggs, and larval maturation in temporary ponds produced by melting snow increase the likelihood of human JCV transmission in the spring (10).

Detection of JCV previously has relied on cross-reactive antibodies in the LACV-specific ELISA (6,7). Testing of the acute serum sample for this case yielded equivocal anti-LACV IgM results, with a slightly higher neutralizing antibody titer against JCV than against LACV. The titers against JCV and LACV were not different enough to determine the etiology. Although the convalescent sample confirmed a fourfold rise in JCV-neutralizing antibody titers, testing of paired acute and convalescent samples using a JCV antigen-specific ELISA was necessary to confirm the JCV IgM-positive results. The discordant anti-LACV and anti-JCV IgM results suggested that cross-reactivity between LACV and JCV antibodies in the LACV-specific ELISA was incomplete, and that sole reliance on the LACV-specific ELISA to detect JCV can lead to missed JCV infections.
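The serologic reasoning used in this investigation (comparing paired acute and convalescent neutralizing antibody titers and requiring at least a fourfold rise for confirmation) reduces to simple arithmetic. The Python sketch below is a hypothetical illustration of that criterion, not a CDC diagnostic algorithm; the titer values follow the doubling-dilution convention (10, 20, 40, 80, ...) and are modeled loosely on this case.

```python
def fourfold_rise(acute_titer, convalescent_titer):
    """Classic serologic confirmation criterion: a >=4-fold rise between
    paired acute and convalescent neutralizing antibody titers."""
    return convalescent_titer >= 4 * acute_titer

# Modeled loosely on this case: the JCV titer rose fourfold while the
# LACV titer rose only twofold, pointing to JCV as the infecting
# California serogroup virus.
print(fourfold_rise(80, 320))  # True: consistent with recent JCV infection
print(fourfold_rise(40, 80))   # False: a twofold rise is not diagnostic
```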
In response, CDC has developed a JCV-specific IgM ELISA. Currently, testing is available only at CDC on request. As more information about the distribution and frequency of JCV infections and disease becomes known, testing might be expanded to regional or state laboratories. The availability of this test will enable clinicians and public health officials to quickly differentiate between arboviral infections, especially within the California serogroup.

Initial diagnostic tests in this case included testing for several arboviral diseases. However, the lack of a readily available diagnostic test specific to JCV delayed the diagnosis and led the clinician to consider noninfectious causes of illness. For the patient, the delayed diagnosis resulted in unnecessary medical procedures, including a carotid Doppler ultrasound, as well as several hours of travel and lost work to seek additional medical evaluation from a specialist. Clinically, patient care might not have differed significantly; however, supportive care, including headache management, and the patient's prognosis would have been established more quickly. Treatment for JCV infection typically includes supportive care and management of complications, such as relieving increased intracranial pressure.

This case underscores the importance of Montana clinicians considering JCV infection in patients with a febrile neurologic illness when an arboviral infection is suspected and WNV testing is inconclusive. Improved and timely diagnosis will aid clinicians in making patient-care and management decisions, help public health professionals perform accurate epidemiologic investigations and implement preventive measures, and provide a better understanding of California serogroup virus distribution.

# What is already known on this topic?

Jamestown Canyon virus (JCV) circulates widely in North America, primarily between deer and various mosquito species. Reports of human JCV infections in the United States have been rare and are confined primarily to the midwestern and northeastern states. JCV's nonspecific clinical presentation and the limited availability of sensitive tests for JCV might contribute to many human infections going undetected.

# What is added by this report?

This first reported human case of JCV infection in Montana suggests that the geographic distribution of human JCV infection is wider than previously recognized, and that increased JCV surveillance is needed to determine whether mosquito-borne arboviruses other than West Nile virus (WNV) pose a substantial risk to humans in the region.

# What are the implications for public health practice?

Clinicians should consider JCV infection in differential diagnoses when an arboviral infection is suspected to be causing a febrile neurologic illness but WNV testing is inconclusive. Improved and timely arboviral disease diagnostics will aid clinicians in making patient-care and management decisions, help public health professionals perform accurate epidemiologic investigations and target preventive measures, and provide a better understanding of arboviral disease distribution in the United States.

# Contribution of Occupational Physical Activity Toward Meeting Recommended Physical Activity Guidelines - United States, 2007

Regular physical activity helps maintain healthy weight and reduces the likelihood of developing chronic diseases. The 2008 Physical Activity Guidelines for Americans (1) are derived from the most recent scientific review of physical activity health benefits and do not differentiate among physical activity for leisure, transportation, work, or other purposes.
To examine the potential influence of occupational physical activity on meeting minimum weekly aerobic physical activity guidelines, the Washington State Department of Health (WADOH) analyzed demographic patterns in physical activity levels, with and without consideration of occupational physical activity, using 2007 Behavioral Risk Factor Surveillance System (BRFSS) data. This report describes the results of that analysis, which indicated that approximately two thirds (64.3%) of U.S. adults met minimum physical activity guidelines through nonoccupational physical activity. When occupational physical activity (defined as reported work activity of mostly walking or heavy labor) was considered, an additional 6.5% of adults likely met the guidelines. The increase was greatest for Hispanic men (14.4%) and men with less than a high school education (15.9%). Public health agencies conducting surveillance of population physical activity levels also should consider including occupational physical activity, which will help to identify demographic groups for targeted programs that increase physical activity.

BRFSS is a state-based, random-digit-dialed telephone survey of the noninstitutionalized U.S. civilian adult population. The Council of American Survey Research Organizations (CASRO) median response rate for the 2007 BRFSS survey was 50.6%. Among 430,912 respondents, complete occupational and nonoccupational physical activity data were available for 386,397 respondents from 50 states and the District of Columbia. BRFSS collects data on the frequency and duration of nonoccupational physical activity, which includes leisure, transportation (e.g., walking), and maintaining a home. WADOH computed the products of activity frequency (days per week) and duration (minutes per day) for moderate-intensity and vigorous-intensity activities. Consistent with the guidelines, WADOH classified respondents as having met guidelines if they reported weekly nonoccupational physical activity of ≥150 minutes of moderate-intensity activity (e.g., brisk walking or gardening), ≥75 minutes of vigorous-intensity activity (e.g., running or heavy yard work), or a combination of moderate-intensity and vigorous-intensity activity (with vigorous-intensity activity minutes multiplied by two) totaling ≥150 minutes. BRFSS does not collect data on occupational physical activity frequency and duration; instead, respondents who indicate employment are asked whether their activity at work is mostly standing or sitting, mostly walking, or mostly heavy labor or physically demanding work.* For this analysis, respondents who did not meet guidelines through nonoccupational physical activity were coded as meeting the guidelines if they reported mostly walking or mostly heavy labor or physically demanding work (Figure). WADOH computed age-adjusted prevalence of meeting physical activity guidelines by selected demographic characteristics and calculated age-adjusted prevalence ratios (PRs) for meeting guidelines by fitting two sets of Poisson regressions in which the outcome measures were meeting recommendations (in the first set, through nonoccupational activity; in the second set, through either nonoccupational or occupational activity). Each Poisson regression contained age and, except for the analysis in which age was the only predictor, one additional predictor variable: race/ethnicity, annual household income, or education. All analyses were stratified by sex and conducted using statistical software that accounted for the complex sampling design.
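In code, the classification rule described above reduces to a few comparisons. The following Python sketch illustrates the rule as this analysis operationalized it; the function name, argument names, and response labels are illustrative and are not BRFSS variable names or codes.

```python
def meets_guidelines(moderate_min_per_week,
                     vigorous_min_per_week,
                     work_activity):
    """Classify a respondent against the 2008 minimum aerobic activity
    guidelines as operationalized in this analysis.

    Weekly minutes are frequency (days/week) x duration (minutes/day);
    work_activity is one of "sitting_or_standing", "mostly_walking", or
    "heavy_labor" (labels are illustrative, not BRFSS response codes).
    """
    # Vigorous-intensity minutes count double toward the 150-minute target,
    # which also covers the standalone >=75-minute vigorous criterion.
    nonoccupational_ok = (moderate_min_per_week
                          + 2 * vigorous_min_per_week) >= 150
    # Analysis assumption: respondents reporting "mostly walking" or
    # "mostly heavy labor" at work were coded as meeting the guidelines.
    occupational_ok = work_activity in ("mostly_walking", "heavy_labor")
    return nonoccupational_ok or occupational_ok

# 90 minutes of brisk walking alone falls short, but a walking job qualifies.
print(meets_guidelines(90, 0, "mostly_walking"))       # True
print(meets_guidelines(90, 0, "sitting_or_standing"))  # False
```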
* Regarding occupational physical activity, respondents were asked the following: "When you are at work, which of the following best describes what you do? Would you say 1) mostly sitting or standing; 2) mostly walking; or 3) mostly heavy labor or physically demanding work?" Responses of "don't know/not sure" and a respondent's refusal to respond ("refused") also were included. Additional information is available at http://www.cdc.gov/brfss/questionnaires/pdfques/2007brfss.pdf.

FIGURE. Classification for meeting 2008 physical activity guidelines* (flow chart; figure not shown)

Approximately two thirds (68.5%) of men met guidelines through nonoccupational physical activity. When occupational physical activity levels also were considered, the proportion meeting guidelines increased from 68.5% to 76.3% (Table 1); 14.8% (95% confidence interval [CI] = 14.4%-15.3%) of men reported "mostly walking," and 14.3% (CI = 13.9%-14.7%) reported "mostly heavy labor" at work. For women, the proportion increased from 60.4% to 65.7% (Table 2); 12.7% (CI = 12.4%-13.0%) of women reported "mostly walking," and 3.4% (CI = 3.3%-3.6%) reported "mostly heavy labor" at work.

Hispanic men and men with less than a high school education exhibited the greatest absolute gains in the proportion meeting guidelines when occupational physical activity was included (from 60.6% to 75.0% and from 55.7% to 71.6%, respectively). Among Hispanic men, 24.3% (CI = 22.6%-26.2%) reported "mostly walking," and 15.0% (CI = 13.5%-16.5%) reported "mostly heavy labor" at work; among men with less than a high school education, 21.2% (CI = 19.4%-23.2%) reported "mostly walking," and 18.4% (CI = 16.8%-20.0%) reported "mostly heavy labor."

Hispanic men had a lower prevalence of meeting guidelines through nonoccupational physical activity compared with non-Hispanic white men (PR = 0.85) (Table 1). However, when occupational physical activity was included, the PR was attenuated (i.e., it approached 1.0; PR = 0.97). Similarly, men with less than a high school education had a lower prevalence of meeting physical activity guidelines through nonoccupational physical activity compared with men with a college degree (PR = 0.75). When occupational physical activity was included, the PR was attenuated (PR = 0.93). Similar patterns of attenuation in PRs were noted when comparing men with reported annual household incomes of ≤$35,000 with those reporting higher incomes.

As expected, the findings from this report provide evidence for a modest contribution of occupational physical activity toward successfully meeting minimum physical activity guidelines among U.S. adults, with a larger impact for some subpopulations than others. National Health Interview Survey (NHIS) data from 1990 and earlier revealed that approximately half of respondents classified as sedentary in leisure time reported ≥1 hour of strenuous occupational activity daily; that report indicated that assessing only leisure activity might underestimate physical activity (2). Recent analyses from NHIS are not available because questions regarding the amount of job-related physical activity have not been asked since 1990. The findings presented in this report are consistent with reports of Hispanic persons expending more energy at work than persons of other racial/ethnic groups (3).
In addition, education and income are strong predictors of leisure-time physical activity, and they remain important predictors of total activity, even though including occupational activity attenuates the association between education and physical activity for men.

Although the BRFSS occupational physical activity question has been reported as valid and reliable for classifying physical activity at work (4-6), the question does not quantify the intensity or duration of continuous occupational physical activity. For this report, the analysis assumed that "mostly walking" included moderate-intensity activity in ≥10-minute intervals for ≥150 minutes per week and that "mostly heavy labor" included vigorous-intensity activity in ≥10-minute intervals for ≥75 minutes per week. If the actual time spent in activity of sufficient intensity is less than this, then the effect of occupational physical activity on meeting the minimum aerobic activity guidelines will be overestimated. Relative to a standard work week of 40 hours, these assumptions seem reasonable. Also, a variety of occupational walking activities are in the "moderate" range, and a variety of heavy labor activities are in the "vigorous" range, based on comparisons of energy need while performing a task to energy need at rest (5,7). However, a more detailed assessment of occupational physical activities would be needed to confirm these assumptions.

The findings in this report are subject to at least three limitations. First, because the duration of "mostly" walking or heavy physical labor is unavailable, it was not possible to assess whether respondents who did not meet guidelines through nonoccupational activity alone might meet guidelines through the combination of occupational and nonoccupational physical activity. As such, the proportions of persons meeting guidelines might have been underestimated. Second, BRFSS excludes persons in households without landline telephones. Finally, the 2007 BRFSS survey had a low CASRO response rate. These latter two factors can lead to bias, especially if physical activity patterns differ between those with and without landline telephones and between respondents and nonrespondents. The directions of these potential biases are unknown.

As one of the 10 leading health indicators in the United States (8), physical activity is monitored at state and national levels to provide information for public health program planning, implementation, and evaluation. The state of Washington has used combined occupational and nonoccupational physical activity data as part of its assessment to target communities for policy and environmental changes. Debate about the health benefits of physical activity at work is ongoing, but the current guidelines do not distinguish between occupational and nonoccupational physical activity. Thus, public health surveillance that includes both occupational and nonoccupational physical activity more accurately describes whether persons meet guidelines than surveillance that includes only nonoccupational physical activity. Because demographic groups vary in amounts of physical activity at work (9), surveillance that includes both occupational and nonoccupational physical activity can be used to target groups that could derive health benefits by being more physically active.

# What is already known on this topic?
Prevalence estimates of physical activity predominantly focus on nonoccupational physical activity; however, physical activity at work also can contribute to levels of physical activity sufficient to meet physical activity recommendations.

# What is added by this report?

This report is the first to provide estimates based on national surveillance data of the potential contribution of occupational physical activity toward meeting the physical activity guidelines described in the 2008 Physical Activity Guidelines for Americans; when occupational physical activity was considered, an estimated additional 6.5% of adults overall very likely met the guidelines, and, for some groups, an estimated additional 14%-16% met the guidelines.

# What are the implications for public health practice?

Pending evaluation of the usefulness of collecting information on occupational physical activity frequency and duration, consideration of occupational physical activity in the monitoring of population physical activity levels can help to identify demographic groups for targeted programs to increase physical activity.

# Recommendations for Use of a Booster Dose of Inactivated Vero Cell Culture-Derived Japanese Encephalitis Vaccine - Advisory Committee on Immunization Practices, 2011

Japanese encephalitis (JE) virus, a mosquito-borne flavivirus, is an important cause of encephalitis in Asia, with a case-fatality rate of 20%-30% and neurologic or psychiatric sequelae in 30%-50% of survivors (1). Travelers to JE-endemic countries and laboratory personnel who work with infectious JE virus are at potential risk for JE virus infection. In 2010, CDC's Advisory Committee on Immunization Practices (ACIP) updated recommendations for prevention of JE. The updated recommendations included information on use of a new inactivated, Vero cell culture-derived JE vaccine (JE-VC [manufactured as Ixiaro]) that was licensed in the United States in 2009. Data on the need for and timing of booster doses with JE-VC were not available when the vaccine was licensed. This report summarizes new data on the persistence of neutralizing antibodies following primary vaccination with JE-VC and on the safety and immunogenicity of a booster dose of JE-VC. The report also provides updated guidance to health-care personnel regarding use of a booster dose of JE-VC for U.S. travelers and laboratory personnel. ACIP recommends that if the primary series of JE-VC was administered >1 year previously, a booster dose may be given before potential JE virus exposure.

# Background

For most travelers to Asia, the risk for JE is very low but varies based on destination, duration, season, and activities (2). ACIP recommends JE vaccine for travelers who plan to spend a month or longer in JE-endemic areas during the JE virus transmission season. JE vaccine should be considered for short-term travelers (<1 month) if they plan to travel outside of an urban area and have an itinerary or activities that will increase the risk for JE virus exposure. JE vaccine also is recommended for laboratory personnel with a potential for exposure to infectious JE virus (1).

In 2009, the Food and Drug Administration (FDA) licensed JE-VC for use in persons aged ≥17 years. JE-VC is manufactured by Intercell Biomedical (Livingston, United Kingdom) and is distributed in the United States by Novartis Vaccines (Cambridge, Massachusetts). JE-VC is administered in a 2-dose primary series at 0 and 28 days. Another JE vaccine, an inactivated mouse brain-derived vaccine (JE-VAX [JE-MB]), has been licensed in the United States since 1992. However, JE-MB is no longer being produced, and remaining doses expire in May 2011.
Additional JE-VC study data have become available since the vaccine's licensure. The ACIP JE Vaccines Workgroup reviewed JE-VC clinical trial data on the persistence of neutralizing antibodies following primary vaccination with JE-VC and on the safety and immunogenicity of a booster dose of JE-VC. These data were primarily from published, peer-reviewed studies; however, unpublished data also were considered. FDA approved an update to the prescribing information for JE-VC in September 2010. No previous guidance on booster doses of JE-VC had been issued. At the February 2011 ACIP meeting, the workgroup presented data supporting use of a booster dose and proposed recommendations for a booster dose. ACIP approved the booster dose recommendations at the meeting.

# Persistence of protective neutralizing antibodies after primary vaccination with JE-VC

Three clinical trials have provided data on the persistence of protective neutralizing antibodies after a primary 2-dose series of JE-VC. In JE vaccine clinical trials, JE virus neutralizing antibody levels measured by plaque reduction neutralization test (PRNT) can be used as a surrogate for protection; a 50% PRNT (PRNT50) titer of ≥10 is accepted as an immunologic correlate of protection from JE in humans (3).

In a study performed in central Europe (Austria, Germany, and Romania) to evaluate persistence of neutralizing antibodies among subjects who received 2 doses of JE-VC, 95% (172 of 181), 83% (151 of 181), 82% (148 of 181), and 85% (129 of 152) had protective antibodies at 6, 12, 24, and 36 months after receiving the first dose, respectively (Table 1) (4-6). A study that used similar methods but was performed in western and northern Europe (Germany and Northern Ireland) found that among adults receiving 2 doses of JE-VC, seroprotection rates were 83% (96 of 116) at 6 months, 58% (67 of 116) at 12 months, and 48% (56 of 116) at 24 months after the first vaccination (Table 1) (7). The manufacturer suggested that the different seroprotection rates in the two populations might have resulted from differences in prior vaccination against tick-borne encephalitis (TBE) virus, a related flavivirus. An estimated 75% of subjects in the first study might have received prior TBE vaccine, compared with none of the subjects in the second study. A higher JE virus neutralizing antibody response after the first dose of JE-VC previously had been found in subjects with preexisting TBE antibodies compared with those without TBE antibodies (8). In a third clinical trial, conducted in Austria and Germany, at 15 months after the first dose of a 2-dose JE-VC immunization series, 69% (137 of 198) of subjects had a protective neutralizing antibody titer (Table 1) (9).
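Two summary measures recur throughout these trial results: the seroprotection rate (the proportion of subjects with a PRNT50 titer ≥10, the correlate of protection noted above) and the geometric mean titer (GMT). The following Python sketch shows how both are computed from a list of titers; the titer values are invented for illustration.

```python
import math

PROTECTIVE_TITER = 10  # PRNT50 immunologic correlate of protection

def seroprotection_rate(titers):
    """Proportion of subjects at or above the protective PRNT50 titer."""
    return sum(t >= PROTECTIVE_TITER for t in titers) / len(titers)

def geometric_mean_titer(titers):
    """Geometric mean titer: the exponentiated mean of the log titers."""
    return math.exp(sum(math.log(t) for t in titers) / len(titers))

# Invented post-booster titers, for illustration only.
titers = [160, 320, 640, 1280, 2560]
print(f"Seroprotection: {seroprotection_rate(titers):.0%}")  # 100%
print(f"GMT: {geometric_mean_titer(titers):.0f}")            # 640
```

The geometric mean is used rather than the arithmetic mean because titers come from serial doubling dilutions and are therefore distributed on a log scale.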
# Safety and immunogenicity of a booster dose of JE-VC

Two clinical trials have provided data on the response to a booster dose of JE-VC. In a study conducted in Austria and Germany, 198 adults aged ≥18 years who had received a 2-dose primary series of JE-VC were administered a booster dose 15 months after the first dose (9). The percentage of subjects with a protective neutralizing antibody titer increased from 69% (137 of 198) on Day 0, before the booster dose, to 100% (198 of 198) at Day 28 after the booster dose. Protective titers were found in 98% (194 of 197) of subjects at 6 months and in 98% (191 of 194) at 12 months after the booster dose (Table 2). The geometric mean titer (GMT) before the booster was 23 and increased 40-fold to 900 at Day 28 after the booster dose. GMTs were 487 and 361 at 6 and 12 months after the booster, respectively (Table 2).

TABLE 2. Number and percentage of subjects with a protective Japanese encephalitis virus neutralizing antibody titer (≥10) and the geometric mean titers (GMT) prior to and at day 28, month 6, and month 12 after a booster dose of inactivated Vero cell culture-derived Japanese encephalitis vaccine (JE-VC [manufactured as Ixiaro]) administered 15 months after dose 1 of a 2-dose primary JE-VC series. (Table not shown.)

During the 7 days following the booster dose, local adverse events were reported in subject diaries by 31% (60 of 195) of subjects. The most frequent local reactions were tenderness, in 19% (37 of 193), and pain, in 13% (25 of 195) (Table 3). Systemic adverse events were reported by 23% (44 of 190) of subjects within 7 days of the booster dose. The most commonly reported systemic reactions were headache, in 11% (21 of 194), and fatigue, in 10% (18 of 188) (6). No serious adverse events were reported during the 28 days following the booster dose.

In a second study, a booster dose administered to 40 subjects who had received primary immunization but no longer had protective neutralizing antibody titers resulted in protective titers in all subjects, whether the booster was administered at 11 months (n = 16) or 23 months (n = 24) after the first dose (7). GMTs at 1 month after the booster increased to 676 in the group boosted at 11 months and to 2,496 in the group boosted at 23 months after the first dose. Among the 16 subjects who received the booster dose at 11 months, all still had seroprotective titers 13 months later.

# Guidelines for use of a booster dose of JE-VC

ACIP recommends that if the primary series of JE-VC was administered >1 year previously, a booster dose may be given before potential JE virus exposure. ACIP recommendations should be consulted for information on prevention of JE and on settings in which JE vaccine is recommended, can be considered, or is not recommended (1). Data on the response to a booster dose administered >2 years after the primary series of JE-VC are not available, and data on the need for and timing of additional booster doses also are not available. No data exist on the use of JE-VC as a booster dose after a primary series with inactivated mouse brain-derived JE vaccine (JE-MB [manufactured as JE-Vax]). Adults aged ≥17 years who have received JE-MB previously and require further vaccination against JE virus should receive a 2-dose primary series of JE-VC. ACIP will review any additional data that are made available, and recommendations will be updated as needed.

# Update on Japanese Encephalitis Vaccine for Children - United States, May 2011

Inactivated mouse brain-derived Japanese encephalitis (JE) vaccine (JE-MB [manufactured as JE-Vax]), the only JE vaccine licensed for use in children in the United States, is no longer available. This notice provides updated information regarding options for obtaining JE vaccine for U.S. children.

# JE among U.S. travelers

JE virus is the leading cause of vaccine-preventable encephalitis in Asia and the western Pacific. For most travelers to Asia, the risk for JE is low but varies on the basis of destination, duration, season, and activities. During the past 4 decades, 17 cases of JE have been reported among U.S. travelers and expatriates, including three cases among U.S.
children aged <18 years (1,2). JE is a severe disease; 20%-30% of patients die, and 30%-50% of survivors have neurologic or psychiatric sequelae (3).

# Recommendations for prevention of JE among travelers

The Advisory Committee on Immunization Practices (ACIP) recommends that all travelers, including children, take precautions to avoid mosquito bites to reduce the risk for JE and other vector-borne infectious diseases (3). These precautions include using insect repellent, permethrin-impregnated clothing, and bed nets, and staying in accommodations with screened or air-conditioned rooms. Additional information on protection against mosquitoes and other arthropods is available in CDC's Health Information for International Travel (Yellow Book) (4). For some travelers who will be in a high-risk setting on the basis of season, location, duration, and activities, JE vaccine can further reduce the risk for disease (3).

# JE vaccine for U.S. children

JE-MB has been licensed in the United States since 1992 for use in adults and children aged ≥1 year. JE-MB has been associated with serious, but rare, allergic and neurologic adverse events (3). During 2002-2009, a total of 848,571 doses of JE-MB were distributed in the United States (mean: 106,071 doses per year), of which 534,330 (63%) were distributed to military health-care providers (5). During this period, an estimated 2,000-3,000 doses of JE-MB were distributed for use in U.S. children each year (Sanofi Pasteur, unpublished data, 2011). However, JE-MB is no longer being produced, and all remaining doses expire in May 2011.

In 2009, the Food and Drug Administration (FDA) approved an inactivated Vero cell culture-derived JE vaccine (JE-VC [manufactured as Ixiaro]) for use in adults aged ≥17 years. One pediatric dose-ranging study has been completed among 60 children aged 12-35 months in India (48 children received JE-VC, and 12 children received another inactivated mouse brain-derived JE vaccine [manufactured as JenceVac]) (6). A safety and immunogenicity study is ongoing among approximately 1,900 children aged 2 months-17 years in the Philippines, and a safety and immunogenicity bridging study has been initiated in the United States and other nonendemic countries, with a targeted enrollment of approximately 100 children. Despite these ongoing studies, it likely will be several years before JE-VC is licensed in the United States for use in children. JE-VC product information is available online from FDA at http://www.fda.gov/biologicsbloodvaccines/vaccines/approvedproducts/ucm179132.htm.

# Current options for obtaining JE vaccine for U.S. children

For U.S. health-care providers interested in obtaining JE vaccine for pediatric patients they judge to be at risk, current options include 1) enrolling children in the ongoing clinical trial, 2) administering JE-VC off-label, or 3) having the child receive JE vaccine at an international travelers' health clinic in Asia.

The ongoing pediatric safety and immunogenicity trial of JE-VC is enrolling children aged 2 months-17 years at five U.S. sites (trial identifier NCT01047839). The study is open-label, and all enrollees receive 2 doses of JE-VC administered 28 days apart. A third study visit is required 56 days after the first dose of vaccine. Additional information about the clinical trial is available online from the National Institutes of Health at http://clinicaltrials.gov/ct2/show/nct01047839. In addition, a list of U.S.
clinical trial sites and contact information is available online from CDC at http://www.cdc.gov/ncidod/dvbid/jencephalitis/children.htm.

JE-VC is FDA-licensed for use in adults aged ≥17 years; however, a health-care provider may choose to administer the vaccine off-label in children aged <17 years. Data from the one completed pediatric study have been published (6). Additional information about the use of JE-VC in children is available from Novartis Medical Communications by telephone (877-683-4732) or e-mail ([email protected]).

Several JE vaccines are manufactured and available for pediatric use in Asia but are not licensed in the United States. Vaccines available at international travelers' health clinics in Asia include another inactivated mouse brain-derived JE vaccine manufactured in South Korea, a live attenuated SA 14-14-2 vaccine manufactured in China, and another Vero cell culture-derived JE vaccine manufactured in Japan. The recommended number of doses and the schedule vary by vaccine and country. A partial list of international travelers' health clinics in Asia that administer JE vaccines to children is available online from CDC at http://www.cdc.gov/ncidod/dvbid/jencephalitis/children.htm.

# Measles - United States, January-May 20, 2011

On May 24, this report was posted as an MMWR Early Release on the MMWR website (http://www.cdc.gov/mmwr).

Measles is a highly contagious, acute viral illness that can lead to serious complications and death. Endemic or sustained measles transmission has not occurred in the United States since the late 1990s, despite continued importations (1). During 2001-2008, a median of 56 (range: 37-140) measles cases was reported to CDC annually (2); during the first 19 weeks of 2011, 118 cases of measles were reported, the highest number reported for this period since 1996. Of the 118 cases, 105 (89%) were associated with importation from other countries, including 46 importations (34 among U.S. residents traveling abroad and 12 among foreign visitors). Among those 46 cases, 40 (87%) were importations from the World Health Organization (WHO) European and South-East Asia regions. Of the 118 patients, 105 (89%) were unvaccinated. Forty-seven (40%) patients were hospitalized, and nine had pneumonia. The increased number of measles importations into the United States this year underscores the importance of vaccination to prevent measles and its complications.

Measles cases are reported by state health departments to CDC, and confirmed cases are reported via the National Notifiable Diseases Surveillance System (NNDSS) using standard case definitions (3). Cases are considered internationally imported if at least some of the exposure period (7-21 days before rash onset) occurred outside the United States and rash occurred within 21 days of entry into the United States, with no known exposure to measles in the United States during that time. Import-associated cases include 1) internationally imported cases; 2) cases that are related epidemiologically to imported cases; and 3) imported virus cases, for which an epidemiologic link has not been identified but the viral genotype detected suggests recent importation.* Laboratory confirmation of measles is made by detection in serum of measles-specific immunoglobulin M antibodies, isolation of measles virus, or detection of measles virus RNA by nucleic acid amplification in an appropriate clinical specimen (e.g., nasopharyngeal/oropharyngeal swabs, nasal aspirates, throat washes, or urine).
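The date-window portion of the "internationally imported" case definition just described can be expressed as two comparisons. The Python sketch below is a simplified illustration of that logic only; it is not NNDSS code, and it omits the "no known U.S. exposure" condition, which requires epidemiologic investigation rather than date arithmetic.

```python
from datetime import date, timedelta

def internationally_imported(rash_onset, entry_to_us):
    """Date-window checks from the 'internationally imported' definition:
    some of the exposure period (7-21 days before rash onset) fell outside
    the United States, and rash began within 21 days of U.S. entry."""
    exposure_start = rash_onset - timedelta(days=21)
    # If entry occurred after the exposure window opened, part of the
    # window preceded entry and (for a returning traveler) was spent abroad.
    some_exposure_abroad = entry_to_us > exposure_start
    rash_soon_after_entry = (timedelta(0)
                             <= (rash_onset - entry_to_us)
                             <= timedelta(days=21))
    return some_exposure_abroad and rash_soon_after_entry

# Hypothetical traveler: entered the United States on May 1, rash onset May 10.
print(internationally_imported(date(2011, 5, 10), date(2011, 5, 1)))  # True
```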
For this report, persons with reported unknown or undocumented vaccination status are considered unvaccinated. An outbreak of measles is defined as a chain of transmission with three or more confirmed cases.

During January 1-May 20, 2011, a total of 118 cases were reported from 23 states and New York City (Figure 1), the highest number reported for the same period since 1996 (Figure 2). Patients ranged in age from 3 months to 68 years; 18 (15%) were aged <12 months, 24 (20%) were aged 1-4 years, 23 (19%) were aged 5-19 years, and 53 (45%) were aged ≥20 years. Measles was laboratory-confirmed in 105 (89%) cases, and measles virus RNA was detected in 52 (44%) cases.

FIGURE 1. Distribution and origin of reported measles cases (N = 118) - United States, January 1-May 20, 2011. (Map of import-associated cases and cases from unknown sources; figure not shown.)

* Additional information is available at http://www.cdc.gov/osels/ph_surveillance/nndss/casedef/measles_2010.htm.

Among the 118 cases, 105 (89%) were import-associated: 46 (44%) were importations from at least 15 countries (Table), 49 (47%) were import-linked, and 10 (10%) were imported virus cases. The source of the 13 cases that were not import-associated could not be determined. Among the 46 imported cases, most were among persons who acquired the disease in the WHO European Region (20 cases) or South-East Asia Region (20 cases), and 34 (74%) occurred in U.S. residents traveling abroad.

* Patient had visited more than one country where measles is endemic during the incubation period, and exposure could have occurred in any of the countries listed.
† Although the patient acquired measles in the Dominican Republic, the likely source of infection was a French tourist with measles who stayed in an adjacent room at the same resort at the same time as the patient. The genotype identified in this patient was D4, a genotype commonly circulating in France.

Of the 118 cases, 47 (40%) resulted in hospitalization. Nine patients had pneumonia, but none had encephalitis and none died. All but one hospitalized patient were unvaccinated; the vaccinated patient reported having received 1 dose of measles-containing vaccine and was hospitalized for observation only. Hospitalization rates were highest among infants and children aged <5 years (52%), but rates also were high among children and adults aged ≥5 years (33%).

Unvaccinated persons accounted for 105 (89%) of the 118 cases. Among the 45 U.S. residents aged 12 months-19 years who acquired measles, 39 (87%) were unvaccinated, including 24 whose parents claimed a religious or personal exemption and eight who missed opportunities for vaccination. Among the 42 U.S. residents aged ≥20 years who acquired measles, 35 (83%) were unvaccinated, including six who declined vaccination because of philosophical objections. Of the 33 U.S. residents who were vaccine-eligible and had traveled abroad, 30 were unvaccinated and one had received only 1 of the 2 recommended doses.

Nine outbreaks accounted for 58 (49%) of the 118 cases. The median outbreak size was four cases (range: 3-21). In six outbreaks, the index case acquired measles abroad; the source of the other three outbreaks could not be determined. Transmission occurred in households, child care centers, shelters, schools, emergency departments, and at a large community event. The largest outbreak occurred among 21 persons in a Minnesota population in which many children were unvaccinated because of parental concerns about the safety of measles, mumps, and rubella (MMR) vaccine. That outbreak resulted in exposure of many persons and infection of at least seven infants too young to receive MMR vaccine (4).

# Reported by

Div of Viral Diseases, National Center for Immunization and Respiratory Diseases, CDC.
Corresponding contributor: Huong McLean, [email protected], 404-639-7714.

# Editorial Note

As a result of high vaccination coverage, measles elimination (i.e., the absence of endemic transmission) was achieved in the United States in the late 1990s (1) and likely in the rest of the Americas since the early 2000s (5). However, as long as measles remains endemic in the rest of the world, importations into the Western Hemisphere will continue. The unusually large number of importations into the United States in the first 19 weeks of 2011 is related to recent increases in measles in countries visited by U.S. travelers. The most frequent sources of importation in 2011 were countries in the WHO European Region, which has accounted for the majority of measles importations into the United States since 2005 (2), and the South-East Asia Region. This year, 33 countries in the WHO European Region have reported an increase in measles. France, the source of most of the importations from the European Region, is experiencing a large outbreak, with approximately 10,000 cases reported during the first 4 months of 2011, including 12 cases of encephalitis (a complication that often results in permanent neurologic sequelae), 360 cases of severe measles pneumonia, and six measles-related deaths (6).

Measles can be severe and is highly infectious; following exposure, up to 90% of susceptible persons develop measles, and measles can lead to life-threatening complications. During 1989-1991, a resurgence of measles in the United States resulted in >100 deaths among >55,000 reported cases, reminding U.S. residents of the potential severity of measles, even in the era of modern medical care (7). In the years that followed, the United States witnessed the return of subacute sclerosing panencephalitis among U.S. children, a rare, fatal neurologic complication of measles that had all but disappeared after measles vaccine was introduced in the 1960s (8).

Children and adults who remain unvaccinated and develop measles also put others in their community at risk. For infants too young for routine vaccination (age <12 months) and persons with medical conditions that contraindicate measles immunization, the risk for measles complications is particularly high.
These persons depend on high MMR vaccination coverage among those around them to protect them from exposure. In the United States this year, infants aged <12 months accounted for 15% of cases and 15% of hospitalizations. In Europe in recent years, measles has been fatal for several children and adolescents, including some who could not be vaccinated because they were immunocompromised.

Rapid control efforts by state and local public health agencies, which are both time-intensive and costly, have been a key factor in limiting the size of outbreaks and preventing the spread of measles into communities with increased numbers of unvaccinated persons. Nonetheless, maintenance of high 2-dose MMR vaccination coverage is the most critical factor for sustaining elimination. For measles, even a small decrease in coverage can increase the risk for large outbreaks and endemic transmission, as occurred in the United Kingdom in the past decade (9).

Because of ongoing importations of measles to the United States, health-care providers should suspect measles in persons with a febrile rash illness and clinically compatible symptoms (e.g., cough, coryza, and/or conjunctivitis) who have recently traveled abroad or have had contact with travelers. Providers should isolate and report suspected measles cases immediately to their local health department and obtain specimens for measles testing, including viral specimens for confirmation and genotyping.

What is already known on this topic?

Measles, mumps, and rubella (MMR) vaccine is highly effective in preventing measles and its complications. Sustained measles transmission was eliminated from the United States in the late 1990s, but the disease remains common in many countries globally, and cases of measles are imported into the United States regularly.

What is added by this report?

During the first 19 weeks of 2011, 118 cases of measles were reported in the United States, the highest number for the same period in any year since 1996, and the hospitalization rate was high (40%). Importations accounted for 46 (40%) of the 118 cases, including 34 (74%) among U.S. residents who had recently traveled abroad; overall, 105 cases were import-associated.

What are the implications for public health practice?

High 2-dose MMR vaccine coverage is critical for decreasing the risk for reestablishment of endemic measles transmission after importation of measles into the United States. Before any international travel, infants aged 6-11 months should receive 1 dose of MMR vaccine, and persons aged ≥12 months should receive 2 doses of MMR vaccine at least 28 days apart or have other evidence of immunity to measles.
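The pre-travel recommendation in the box above reduces to a simple age-based rule. A minimal sketch, assuming age is known in months and that "evidence of immunity" is assessed separately; the function name and return strings are illustrative only:

```python
def mmr_doses_before_travel(age_months: int) -> str:
    """Pre-travel MMR guidance by age, per the recommendation above.

    Note: doses given before age 12 months do not count toward the
    routine 2-dose series (standard ACIP practice, not stated in the
    report above), which is why infants vaccinated at 6-11 months are
    revaccinated later.
    """
    if age_months < 6:
        return "too young for MMR vaccine"
    if age_months < 12:
        return "1 dose before departure (routine series still due at >=12 months)"
    return "2 doses at least 28 days apart, unless other evidence of immunity"

# Example: an 8-month-old infant traveling abroad
print(mmr_doses_before_travel(8))
```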
# Notice to Readers

# Updated "N" Indicators for the Year 2010 in National Notifiable Diseases Surveillance System Tables

The 2010 Council of State and Territorial Epidemiologists (CSTE) State Reportable Conditions Assessment (2010 SRCA) has collected data from 55 reporting jurisdictions (50 U.S. states, the District of Columbia, New York City, and three territories [American Samoa, Guam, and Puerto Rico]) to determine which of the nationally notifiable conditions (NNC) were reportable in each reporting jurisdiction in 2010. The 2010 SRCA gathered information regarding whether each condition was 1) explicitly reportable (i.e., listed as a specific disease or as a category of diseases on reportable disease lists), 2) implicitly reportable (i.e., included in a general category of the reportable disease list, such as "rare diseases of public health importance"), or 3) not reportable within each jurisdiction. Only conditions that were explicitly reportable were considered reportable under the 2010 SRCA methodology.

Results of the 2010 SRCA will be used to indicate whether each NNC is or is not reportable for the specified period and reporting jurisdiction. NNC that are not reportable are noted with an "N" indicator (for "not reportable") in the MMWR Table II weekly update (Provisional cases of selected notifiable diseases, United States) and in the MMWR Summary of Notifiable Diseases - United States, 2010. This notation allows readers to distinguish whether 1) no cases were reported even though the condition is reportable or 2) no cases were reported because the condition is not reportable (see the sketch below). The 2010 SRCA data collection and validation concluded in April 2011; results will be used to populate the "N" indicators for National Notifiable Diseases Surveillance System (NNDSS) data in the 2011 MMWR tables for the current week and for both the 2010 and 2011 cumulative-year columns. The 2011 NNDSS data displayed in the MMWR weekly provisional tables will reflect reporting requirements gathered from the 2010 SRCA until official 2011 SRCA results are available.
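The display rule this notice describes separates "not reportable" from "zero reported cases." A minimal sketch of that logic, with a hypothetical cell-rendering function (the NNDSS tables themselves are not stated to be generated this way; the function is for illustration):

```python
from typing import Optional

def render_table_cell(reportable: bool, case_count: Optional[int]) -> str:
    """Render one MMWR table cell for a condition/jurisdiction pair.

    "N"  -> condition is not reportable in the jurisdiction
    "-"  -> reportable, but no cases reported for the period
    "12" -> reportable, with 12 reported cases
    """
    if not reportable:
        return "N"          # "N" indicator: not reportable
    if not case_count:      # None or 0: nothing reported
        return "-"
    return str(case_count)

print(render_table_cell(False, None))  # N
print(render_table_cell(True, 0))      # -
print(render_table_cell(True, 12))     # 12
```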
# Announcement

# Preventive Medicine Residency and Fellowship Applications Deadline - September 15, 2011

The Preventive Medicine Residency and Fellowship (PMR/F) programs are accepting applications from physicians for the residency and from veterinarians, dentists, nurses, physician assistants, and international medical graduates for the fellowship. Applicants with public health and applied epidemiologic practice experience who seek to become preventive medicine and population health specialists and public health leaders are encouraged to apply. The PMR/F prepares clinicians for leadership roles in public health at the international, federal, state, and local levels through instruction and supervised practical experiences focused on translating epidemiology to public health practice, management, and policy and program development. Development of leadership and management competency is emphasized. Residents and fellows conduct their training at a CDC location or in a state or local health department. PMR/F alumni occupy many leadership positions at CDC, at state and local health departments, in academia, and in private-sector agencies.

Completion of the residency, which is accredited by the Accreditation Council for Graduate Medical Education for 24 months of training, qualifies graduates to apply for certification by the American Board of Preventive Medicine (ABPM) in Public Health and General Preventive Medicine. Select candidates can be considered for 1 year of residency training that also should qualify them to apply for ABPM certification. Training for PMF clinicians also is 1 year. Applications are being accepted for the class that begins in mid-June 2012. Applications must be submitted online by September 15, 2011, and supporting documents must be received in the PMR/F office by that same day. Additional information regarding the programs, eligibility criteria, and application process is available at http://www.cdc.gov/prevmed, by telephone at 404-498-6140, or by e-mail at [email protected].

# Erratum

# Vol. 60, No. RR-3

In the MMWR Recommendations and Reports "Updated Norovirus Outbreak Management and Disease Prevention Guidelines," an error occurred on page 9. The second sentence under the heading "Environmental Specimens" should read: "If a food or a water source is strongly suspected as the source of an outbreak, a sample should be obtained as early as possible with respect to the time of exposure, and CDC or FDA should be contacted for further guidance on testing. Food samples should be stored frozen at -4°F (-20°C), and water samples should be stored refrigerated or chilled on ice at 39°F (4°C)."

* Hypertension was defined as systolic blood pressure ≥140 mmHg or diastolic blood pressure ≥90 mmHg, or currently taking medication to lower blood pressure, based on positive responses to the following questions: "Because of your high blood pressure/hypertension, have you ever been told to take prescribed medicine?" and "Are you now taking a prescribed medicine?" Undiagnosed hypertension was a finding of hypertension and a negative response to the following question: "Have you ever been told by a doctor or other health professional that you had hypertension, also called high blood pressure?"
† Health insurance coverage is at the time of interview. Public coverage includes Medicaid, Children's Health Insurance Program (CHIP), state-sponsored or other government-sponsored health plans, Medicare (disability), or military health plans (TRICARE, VA, or CHAMP-VA). Persons with both public and private insurance coverage were included in the private coverage category only.
§ 95% confidence interval.

During 2005-2008, among U.S. adults aged 20-64 years with hypertension, 40% of those with no health insurance had hypertension that was undiagnosed, compared with 21% of those with private insurance and 16% of those with public insurance. In the 20-39 years and 40-64 years age groups, undiagnosed hypertension also was more common among persons with no health insurance than among those with private or public insurance.

[Table: provisional case counts of selected notifiable diseases (dengue fever and dengue hemorrhagic fever), by region, state, and territory, for the Mountain and Pacific divisions and the territories. Most cells are "N" (not reportable) or "-" (no reported cases); see footnotes below.]

* Case counts for reporting years 2010 and 2011 are provisional and subject to change. For further information on interpretation of these data, see http://www.cdc.gov/osels/ph_surveillance/nndss/phs/files/ProvisionalNationa%20NotifiableDiseasesSurveillanceData20100927.pdf. Data for TB are displayed in Table IV, which appears quarterly.
† Dengue fever includes cases that meet the criteria for dengue fever with hemorrhage and other clinical and unknown case classifications.
§ DHF includes cases that meet the criteria for dengue shock syndrome (DSS), a more severe form of DHF.
¶ Contains data reported through the National Electronic Disease Surveillance System (NEDSS).
# CRITERIA DOCUMENT: RECOMMENDATIONS FOR AN OCCUPATIONAL EXPOSURE STANDARD FOR TOLUENE DIISOCYANATE

# RECOMMENDATIONS FOR A TOLUENE DIISOCYANATE STANDARD

Administrative controls can also be used to reduce exposure to TDI. Respiratory protective equipment may be used only where the TDI concentration does not exceed 100 times the time-weighted average for continuous work, or 100 times the ceiling for work of short duration, e.g., 20 minutes or less.

(1) Rubbers shall be decontaminated and ventilated after contamination. Leather shoes which have been contaminated with TDI shall be decontaminated or disposed of. Workers within 10 feet of spray operations, or at greater distances when there is a greater drift of spray, shall be protected with impervious clothing, gloves, and footwear in addition to a supplied-air impervious hood. (2) Chemical workers' goggles shall be worn where splashes are likely to occur.

The two isomers are believed to have similar physiological properties. Their physical and chemical properties are very similar, except that the 2,6 isomer has a lower freezing point. The 80% 2,4 : 20% 2,6 mixture represents better than 95% of industrial usage. Properties of commercial samples of this mixture are listed in the accompanying table.

TDI reacts with water to form an unstable carbamic acid, which decomposes to the corresponding amine and carbon dioxide:

R-NCO + H2O yields R-NH-COOH, which yields R-NH2 + CO2

The primary amine so produced will react further with excess TDI, with the formation of a urea derivative:

R-NH2 + R'-NCO yields R-NH-CO-NH-R'

Probably because it reacts avidly with all proteins, its direct effects in accidental or occupational exposure of man are virtually confined to its reaction upon the surface membranes of the body. Systemic absorption of TDI, with toxic effects upon internal organs, has not been reported in man, except in the special sense of the hypothetical immunologic involvement of the reticuloendothelial system in those subjects who become sensitized to TDI. In the occupational exposure of man to TDI in the vapor or aerosol phase, its impact upon the respiratory tract is overwhelmingly the most important. Its topical effects upon other tissues will be briefly considered first.

In a plant producing slabs of polyurethane foam by a continuous process, air levels of TDI and the workers' health were studied for a period of 2-1/2 years. More than 1,000 air samples were analyzed. The extreme range of air level values at various sites and times was a reported 0.00 to 3.0 ppm. The range of average values for the various sites was given as 0.00 to 2.6 ppm. It is impossible to determine a time-weighted average level from the data published, but monthly average levels throughout the plant were reported to be in the 0.00 to 0.15 ppm range. During the study period, a total of 83 illnesses attributed to TDI required medical attention.

Twenty of the original cohort of workers had by 1969 been followed at 6-month intervals for a total of two years (five series of examinations). During the second year of follow-up, the decline of FEV 1.0 had continued at a mean annual rate of 0.11 liters, which exceeds the predicted rate of decline due to aging alone. It is confirmed in this report that acute daily changes in ventilatory capacity continued to occur in workers after two years and more of continued exposure to TDI at low levels, that workers with respiratory symptoms showed a greater acute and cumulative response to TDI than asymptomatic workers, that there was a strong correlation between one-day change and cumulative effect, and that the effect of smoking does not seem important.
For the highly sensitized individual, it is doubtful whether there is a measurable safe level of TDI below which that individual is completely safe from response. Whether there is a measurable level below which no one will become sensitized de novo is not completely clear from the available evidence to date, although this is the intended basis for the TLV of 0.02 ppm.

(1) Additional company analyses verify that air levels were high. (2) The workers wore respirators, which probably indicates acute irritation. (3) Some workers had been transferred after complaints.

For calibration of the sampling method, the calibration instrument will be connected in sequence to the impinger unit, which will be followed by the sampler pump. In this way, the calibration instrument will be at atmospheric pressure. If the personal sampler pump is used, each pump must be calibrated separately. If the burette is used, it should be set up so that the flow is toward the narrow end of the unit.

From these data, it can be seen that at all exposure levels of 0.01 ppm or higher, some cases of TDI toxicity occurred, but there were no cases at 0.007 ppm or lower. At 0.009 ppm, there were no established cases, but one questionable one; there were several established cases at 0.008 ppm. It is concluded that the time-weighted average environmental limit should be below 0.01 ppm, and should be 0.005 ppm to ensure some margin of safety.
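Exposure limits for TDI are quoted above in ppm; occupational hygiene practice often needs the equivalent mass concentration. A minimal sketch of the standard vapor conversion, assuming a TDI molecular weight of about 174.2 g/mol and a molar volume of 24.45 L/mol at 25 degrees C and 1 atm (these constants are supplied here for illustration; they are not taken from the criteria document):

```python
# Convert a vapor concentration in ppm to mg/m3 at 25 degrees C and 1 atm:
#   mg/m3 = ppm * (molecular weight in g/mol) / (molar volume in L/mol)
TDI_MOLECULAR_WEIGHT = 174.2   # g/mol, toluene diisocyanate (assumed value)
MOLAR_VOLUME_25C = 24.45       # L/mol at 25 degrees C and 1 atm

def ppm_to_mg_per_m3(ppm: float, mol_weight: float = TDI_MOLECULAR_WEIGHT) -> float:
    return ppm * mol_weight / MOLAR_VOLUME_25C

# The recommended time-weighted average limit of 0.005 ppm:
print(f"{ppm_to_mg_per_m3(0.005):.4f} mg/m3")  # approximately 0.036 mg/m3
```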
# Table of Contents

# I. Executive Summary

This guideline updates and expands the original Centers for Disease Control and Prevention (CDC) Guideline for Prevention of Catheter-associated Urinary Tract Infections (CAUTI) published in 1981. Several developments necessitated revision of the 1981 guideline, including new research and technological advancements for preventing CAUTI, an increasing need to address patients in non-acute care settings and patients requiring long-term urinary catheterization, and greater emphasis on prevention initiatives as well as better-defined goals and metrics for outcomes and process measures. In addition to updating the previous guideline, this revised guideline reviews the available evidence on CAUTI prevention for patients requiring chronic indwelling catheters and individuals who can be managed with alternative methods of urinary drainage (e.g., intermittent catheterization). The revised guideline also includes specific recommendations for implementation, performance measurement, and surveillance. Although the general principles of CAUTI prevention have not changed from the previous version, the revised guideline provides clarification and more specific guidance based on a defined, systematic review of the literature through July 2007. For areas where knowledge gaps exist, recommendations for further research are listed. Finally, the revised guideline outlines high-priority recommendations for CAUTI prevention in order to offer guidance for implementation.

This document is intended for use by infection prevention staff, healthcare epidemiologists, healthcare administrators, nurses, other healthcare providers, and persons responsible for developing, implementing, and evaluating infection prevention and control programs for healthcare settings across the continuum of care. The guideline can also be used as a resource for societies or organizations that wish to develop more detailed implementation guidance for prevention of CAUTI.

Our goal was to develop a guideline based on a targeted systematic review of the best available evidence, with explicit links between the evidence and recommendations. To accomplish this, we used an adapted GRADE system approach for evaluating quality of evidence and determining strength of recommendations. The methodology, structure, and components of this guideline are approved by HICPAC and will be used for subsequent guidelines issued by HICPAC. A more detailed description of our approach is available in the Methods section.

To evaluate the evidence on preventing CAUTI, we examined data addressing three key questions and related subquestions. Evidence addressing the key questions was used to formulate recommendations, and explicit links between the evidence and recommendations are available in the Evidence Review in the body of the guideline and in the Evidence Tables and GRADE Tables in the Appendices. It is important to note that Category I recommendations are all considered strong recommendations and should be equally implemented; it is only the quality of the evidence underlying the recommendation that distinguishes between levels A and B. Category IC recommendations are required by state or federal regulation and may have any level of supporting evidence. The categorization scheme used in this guideline is presented in Table 1 in the Summary of Recommendations and described further in the Methods section.

The Summary of Recommendations is organized as follows:
1. recommendations for who should receive indwelling urinary catheters (or, for certain populations, alternatives to indwelling catheters);
2. recommendations for catheter insertion;
3. recommendations for catheter maintenance;
4. quality improvement programs to achieve appropriate placement, care, and removal of catheters;
5. administrative infrastructure required; and
6. surveillance strategies.

The Implementation and Audit section includes a prioritization of recommendations (i.e., high-priority recommendations that are essential for every healthcare facility), organized by modules, in order to provide facilities more guidance on implementation of these guidelines. A list of recommended performance measures that can potentially be used for internal reporting purposes is also included. Areas in need of further research identified during the evidence review are outlined in the Recommendations for Further Research. This section includes guidance for specific methodological approaches that should be used in future studies. Readers who wish to examine the primary evidence underlying the recommendations are referred to the Evidence Review in the body of the guideline and the Evidence Tables and GRADE Tables in the Appendices. The Evidence Review includes narrative summaries of the data presented in the Evidence Tables and GRADE Tables. The Evidence Tables include all study-level data used in the guideline, and the GRADE Tables assess the overall quality of evidence for each question. The Appendices also contain a clearly delineated search strategy that will be used for periodic updates to ensure that the guideline remains a timely resource as new information becomes available.

# II. Summary of Recommendations

# B. Examples of Inappropriate Uses of Indwelling Catheters

- As a substitute for nursing care of the patient or resident with incontinence.
- As a means of obtaining urine for culture or other diagnostic tests when the patient can voluntarily void.
- For prolonged postoperative duration without appropriate indications (e.g., structural repair of urethra or contiguous structures, prolonged effect of epidural anaesthesia, etc.).

# II. Proper Techniques for Urinary Catheter Insertion

# III. Implementation and Audit

# Prioritization of Recommendations

In this section, the recommendations considered essential for all healthcare facilities caring for patients requiring urinary catheterization are organized into modules in order to provide more guidance to facilities on implementation of these guidelines. The high-priority recommendations were chosen by a consensus of experts based on strength of recommendation as well as on the likely impact of the strategy in preventing CAUTI. The administrative functions and infrastructure listed above in the summary of recommendations are necessary to accomplish the high-priority recommendations and are therefore critical to the success of a prevention program. In addition, quality improvement programs should be implemented as an active approach to accomplishing these recommendations and when process and outcome measure goals are not being met based on internal reporting.

# Priority Recommendations for Appropriate Urinary Catheter Use (Module 1)

- Insert catheters only for appropriate indications (see Table 2), and leave in place only as long as needed.

Conduct random audits of selected units and calculate the compliance rate (a worked example follows this list):

- Numerator: number of patients on the unit with catheters with proper documentation of insertion and removal dates
- Denominator: number of patients on the unit with a catheter in place at some point during admission
- Standardization factor: 100 (i.e., multiply by 100 so that the measure is expressed as a percentage)

c) Compliance with documentation of indication for catheter placement: conduct random audits of selected units and calculate the compliance rate
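As referenced above, each audit measure reduces to a single ratio. A minimal sketch, assuming a hypothetical audit record per patient with boolean fields; the record and field names are illustrative, not part of the guideline:

```python
from dataclasses import dataclass

@dataclass
class AuditRecord:
    had_catheter: bool      # catheter in place at some point during admission
    dates_documented: bool  # insertion and removal dates properly documented

def documentation_compliance_rate(records: list[AuditRecord]) -> float:
    """Percentage of catheterized patients with proper date documentation."""
    denominator = [r for r in records if r.had_catheter]
    if not denominator:
        raise ValueError("no catheterized patients in the audit sample")
    numerator = sum(1 for r in denominator if r.dates_documented)
    return 100.0 * numerator / len(denominator)  # standardization factor: 100

# Example audit of five patients: three catheterized, two properly documented.
sample = [AuditRecord(True, True), AuditRecord(True, False),
          AuditRecord(True, True), AuditRecord(False, False),
          AuditRecord(False, False)]
print(f"{documentation_compliance_rate(sample):.0f}%")  # 67%
```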
# IV. Recommendations for Further Research

Our literature review revealed that many of the studies addressing strategies to prevent CAUTI were not of sufficient quality to allow firm conclusions regarding the benefit of certain interventions. Future studies of CAUTI prevention should:

1. Be primary analytic research (i.e., systematic reviews, meta-analyses, interventional studies, and observational studies)
2. Evaluate clinically relevant outcomes (e.g., SUTI, bloodstream infections secondary to CAUTI)
3. Adjust for confounders as needed using multivariable analyses
4. Stratify outcomes by patient populations at risk for CAUTI
5. Ensure adequate statistical power to detect differences

The following is a compilation of recommendations for further research:

# V. Background

Urinary tract infections are the most common type of healthcare-associated infection, accounting for more than 30% of infections reported by acute care hospitals. 19 Virtually all healthcare-associated UTIs are caused by instrumentation of the urinary tract. Catheter-associated urinary tract infection (CAUTI) has been associated with increased morbidity, mortality, hospital cost, and length of stay. In addition, bacteriuria commonly leads to unnecessary antimicrobial use, and urinary drainage systems are often reservoirs for multidrug-resistant bacteria and a source of transmission to other patients. 10,11

# Definitions

An indwelling urinary catheter is a drainage tube that is inserted into the urinary bladder through the urethra, is left in place, and is connected to a closed collection system. Alternative methods of urinary drainage may be employed in some patients. Intermittent ("in-and-out") catheterization involves brief insertion of a catheter into the bladder through the urethra to drain urine at intervals. An external catheter is a urine containment device that fits over or adheres to the genitalia and is attached to a urinary drainage bag. The most commonly used external catheter is a soft flexible sheath that fits over the penis ("condom" catheter). A suprapubic catheter is surgically inserted into the bladder through an incision above the pubis.

The limitations and heterogeneity of definitions of CAUTI used in various studies present major challenges in appraising the quality of evidence in the CAUTI literature. Study investigators have used numerous different definitions for CAUTI outcomes, ranging from simple bacteriuria at a range of concentrations to, less commonly, symptomatic infection defined by combinations of bacteriuria and various signs and symptoms. Furthermore, most studies that used CDC/NHSN definitions for CAUTI did not distinguish between SUTI and ASB in their analyses. 30 The heterogeneity of definitions used for CAUTI may reduce the quality of evidence for a given intervention and often precludes meta-analyses.

The clinical significance of ASB in catheterized patients is undefined.
Approximately 75% to 90% of patients with ASB do not develop a systemic inflammatory response or other signs or symptoms to suggest infection. 6,31 Monitoring and treatment of ASB is also not an effective prevention measure for SUTI, as most cases of SUTI are not preceded by bacteriuria for more than a day. 25 Treatment of ASB has not been shown to be clinically beneficial and is associated with the selection of antimicrobial-resistant organisms.

# Epidemiology

Between 15% and 25% of hospitalized patients may receive short-term indwelling urinary catheters. 12,13 In many cases, catheters are placed for inappropriate indications, and healthcare providers are often unaware that their patients have catheters, leading to prolonged, unnecessary use. In acute care hospitals reporting to NHSN in 2006, pooled mean urinary catheter utilization ratios in ICU and non-ICU areas ranged from 0.23 to 0.91 urinary catheter-days/patient-days. 17 While the numbers of units reporting were small, the highest ratios were in trauma ICUs and the lowest in inpatient medical/surgical wards. The overall prevalence of long-term indwelling urethral catheterization is unknown. The prevalence of urinary catheter use among residents of long-term care facilities in the United States is on the order of 5%, representing approximately 50,000 residents with catheters at any given time. 18 This number appears to be declining over time, likely because of federally mandated nursing home quality measures. However, the high prevalence of urinary catheters in patients transferred to skilled nursing facilities suggests that acute care hospitals should focus more efforts on removing unnecessary catheters prior to transfer. 18

Reported rates of UTI among patients with urinary catheters vary substantially. National data from NHSN acute care hospitals in 2006 showed a range of pooled mean CAUTI rates of 3.1 to 7.5 infections per 1,000 catheter-days. 17 The highest rates were in burn ICUs, followed by inpatient medical wards and neurosurgical ICUs, although these sites also had the fewest numbers of locations reporting. The lowest rates were in medical/surgical ICUs.

Although morbidity and mortality from CAUTI are considered relatively low compared with other HAIs, the high prevalence of urinary catheter use leads to a large cumulative burden of infections, with resulting infectious complications and deaths. An estimate of annual incidence of HAIs and mortality in 2002, based on a broad survey of U.S. hospitals, found that urinary tract infections made up the highest number of infections (>560,000) compared with other HAIs, and attributable deaths from UTI were estimated to be over 13,000 (mortality rate 2.3%). 19 And while fewer than 5% of bacteriuric cases develop bacteremia, 6 CAUTI is the leading cause of secondary nosocomial bloodstream infections; about 17% of hospital-acquired bacteremias are from a urinary source, with an associated mortality of approximately 10%. 20 In the nursing home setting, bacteremias are most commonly caused by UTIs, the majority of which are catheter-related. 21 An estimated 17% to 69% of CAUTI may be preventable with recommended infection control measures, which means that up to 380,000 infections and 9,000 deaths related to CAUTI per year could be prevented. 22
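The two surveillance metrics quoted above, the catheter utilization ratio (catheter-days per patient-day) and the CAUTI rate (infections per 1,000 catheter-days), are simple ratios. A minimal sketch of the arithmetic, with illustrative input values that are not from any facility:

```python
def catheter_utilization_ratio(catheter_days: int, patient_days: int) -> float:
    """Urinary catheter-days per patient-day for a location and period."""
    return catheter_days / patient_days

def cauti_rate(infections: int, catheter_days: int) -> float:
    """CAUTI per 1,000 urinary catheter-days."""
    return 1000.0 * infections / catheter_days

# Illustrative monthly totals for one hypothetical ICU:
catheter_days, patient_days, infections = 410, 620, 2
print(f"Utilization ratio: "
      f"{catheter_utilization_ratio(catheter_days, patient_days):.2f}")
print(f"CAUTI rate: {cauti_rate(infections, catheter_days):.1f} "
      f"per 1,000 catheter-days")
```

With these made-up inputs the sketch yields a utilization ratio of 0.66 and a CAUTI rate of 4.9, both within the NHSN ranges cited above.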
# Pathogenesis and Microbiology

The source of microorganisms causing CAUTI can be endogenous, typically via meatal, rectal, or vaginal colonization, or exogenous, such as via contaminated hands of healthcare personnel or equipment. Microbial pathogens can enter the urinary tract either by the extraluminal route, via migration along the outside of the catheter in the periurethral mucous sheath, or by the intraluminal route, via movement along the internal lumen of the catheter from a contaminated collection bag or catheter-drainage tube junction. The relative contribution of each route in the pathogenesis of CAUTI is not well known. The marked reduction in risk of bacteriuria with the introduction of the sterile, closed urinary drainage system in the 1960s 23 suggests the importance of the intraluminal route. However, even with the closed drainage system, bacteriuria inevitably occurs over time, either via breaks in the sterile system or via the extraluminal route. 24 The daily risk of bacteriuria with catheterization is 3% to 10%, 25,26 approaching 100% after 30 days, which is considered the delineation between short- and long-term catheterization. 27

Formation of biofilms by urinary pathogens on the surface of the catheter and drainage system occurs universally with prolonged duration of catheterization. 28 Over time, the urinary catheter becomes colonized with microorganisms living in a sessile state within the biofilm, rendering them resistant to antimicrobials and host defenses and virtually impossible to eradicate without removing the catheter. The role of bacteria within biofilms in the pathogenesis of CAUTI is unknown and is an area requiring further research.

Antimicrobial resistance among urinary pathogens is an ever-increasing problem. About a quarter of E. coli isolates and one third of P. aeruginosa isolates from CAUTI cases were fluoroquinolone-resistant. Resistance of gram-negative pathogens to other agents, including third-generation cephalosporins and carbapenems, was also substantial. 5 The proportion of organisms that were multidrug-resistant, defined by non-susceptibility to all agents in 4 classes, was 4% of P. aeruginosa, 9% of K. pneumoniae, and 21% of Acinetobacter baumannii. 29

# VI. Scope and Purpose

This guideline updates and expands the original CDC Guideline for Prevention of CAUTI published in 1981. The revised guideline addresses the prevention of CAUTI for patients in need of either short- or long-term (i.e., >30 days) urinary catheterization in any type of healthcare facility and evaluates evidence for alternative methods of urinary drainage, including intermittent catheterization, external catheters, and suprapubic catheters. The guideline also includes specific recommendations for implementation, performance measurement, and surveillance. Recommendations for further research are also provided to address the knowledge gaps in CAUTI prevention identified during the literature review. To evaluate the evidence on preventing CAUTI, we examined data addressing three key questions and related subquestions.

This document is intended for use by infection prevention staff, healthcare epidemiologists, healthcare administrators, nurses, other healthcare providers, and persons responsible for developing, implementing, and evaluating infection prevention and control programs for healthcare settings across the continuum of care. The guideline can also be used as a resource for societies or organizations that wish to develop more detailed implementation guidance for prevention of CAUTI.

# VII. Methods

This guideline was based on a targeted systematic review of the best available evidence on CAUTI prevention.
We used the Grading of Recommendations Assessment, Development and Evaluation (GRADE) approach to provide explicit links between the available evidence and the resulting recommendations. Our guideline development process is outlined in Figure 1.

# Figure 1. The Guideline Development Process

# Development of Key Questions

We first conducted an electronic search of the National Guideline Clearinghouse®.

# Literature Search

Following the development of the key questions, search terms were developed for identifying literature relevant to the key questions. For the purposes of quality assurance, we compared these terms to those used in relevant seminal studies and guidelines. These search terms were then incorporated into search strategies for the relevant electronic databases. Searches were performed in Medline® (National Library of Medicine) using the Ovid® platform (Ovid Technologies, Wolters Kluwer, New York, NY), EMBASE® (Elsevier BV, Amsterdam, Netherlands), CINAHL® (Ebsco Publishing, Ipswich, MA), and Cochrane® (Cochrane Collaboration, Oxford, UK) (all databases were searched in July 2007), and the resulting references were imported into a reference manager, where duplicates were resolved. For Cochrane reviews ultimately included in our guideline, we checked for updates in July 2008. The detailed search strategy used for identifying primary literature and the results of the search can be found in Appendix 1B.

# Study Selection

Titles and abstracts from references were screened by a single author (C.V.G., R.K.A., or D.A.P.), and the full-text articles were retrieved if they were 1. relevant to one or more key questions, 2. primary analytic research, systematic reviews, or meta-analyses, and 3. written in English. Likewise, the full-text articles were screened by a single author (C.V.G. or D.A.P.) using the same criteria, and included studies underwent a second review for inclusion by another author (R.K.A.). Disagreements were resolved by the remaining authors. The results of this process are depicted in Figure 2.

# Data Extraction and Synthesis

Data on the study author, year, design, objective, population, setting, sample size, power, follow-up, and definitions and results of clinically relevant outcomes were extracted into evidence tables (Appendix 2). Three evidence tables were developed, each of which represented one of our key questions. Studies were extracted into the most relevant evidence table. Then, studies were organized by the common themes that emerged within each evidence table. Data were extracted by one author (R.K.A.) and cross-checked by another (C.V.G.). Disagreements were resolved by the remaining authors. Data and analyses were extracted as originally presented in the included studies. Meta-analyses were performed only where their use was deemed critical to a recommendation, and only in circumstances where multiple studies with sufficiently homogeneous populations, interventions, and outcomes could be analyzed.

Systematic reviews were included in our review. To avoid duplication of data, we excluded primary studies if they were also included in a systematic review captured by our search. The only exception to this was if the primary study also addressed a relevant question that was outside the scope of the included systematic review. Before exclusion, data from the primary studies that we originally captured were abstracted into the evidence tables and reviewed.
We also excluded systematic reviews that analyzed primary studies that were fully captured in a more recent systematic review. The only exception to this was if the older systematic review also addressed a relevant question that was outside the scope of the newer systematic review. To ensure that all relevant studies were captured in the search, the bibliography was vetted by a panel of clinical experts.

# Grading of Evidence

First, the quality of each study was assessed using scales adapted from existing methodology checklists, and scores were recorded in the evidence tables. Appendix 3 includes the sets of questions we used to assess the quality of each of the major study designs. Next, the quality of the evidence base was assessed using methods adapted from the GRADE Working Group. 32 Briefly, GRADE tables were developed for each of the interventions or questions addressed within the evidence tables. Included in the GRADE tables were the intervention of interest, any outcomes listed in the evidence tables that were judged to be clinically important, the quantity and type of evidence for each outcome, the relevant findings, and the GRADE of evidence for each outcome, as well as an overall GRADE of the evidence base for the given intervention or question.

The initial GRADE of evidence for each outcome was deemed high if the evidence base included a randomized controlled trial (RCT) or a systematic review of RCTs, low if the evidence base included only observational studies, or very low if the evidence base consisted only of uncontrolled studies. The initial GRADE could then be modified by eight criteria. 34 Criteria that could decrease the GRADE of an evidence base included quality, consistency, directness, precision, and publication bias. Criteria that could increase the GRADE included a large magnitude of effect, a dose-response gradient, or inclusion of unmeasured confounders that would increase the magnitude of effect (Table 3).

GRADE definitions are as follows:

1. High - further research is very unlikely to change confidence in the estimate of effect
2. Moderate - further research is likely to affect confidence in the estimate of effect and may change the estimate
3. Low - further research is very likely to affect confidence in the estimate of effect and is likely to change the estimate
4. Very low - any estimate of effect is very uncertain

After determining the GRADE of the evidence base for each outcome of a given intervention or question, we calculated the overall GRADE of the evidence base for that intervention or question. The overall GRADE was based on the lowest GRADE for the outcomes deemed critical to making a recommendation (see the sketch below).

# Table 3. Rating the Quality of Evidence Using the GRADE Approach
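The grading procedure just described is mechanical enough to express in code. A minimal sketch, assuming a simple numeric ordering of the four GRADE levels; the data structures are illustrative and are not part of the guideline's methodology tooling:

```python
GRADE_ORDER = {"very low": 0, "low": 1, "moderate": 2, "high": 3}
LEVELS = ["very low", "low", "moderate", "high"]

def initial_grade(study_designs: set[str]) -> str:
    """Initial GRADE from the strongest design present in the evidence base."""
    if "rct" in study_designs or "systematic review of rcts" in study_designs:
        return "high"
    if "observational" in study_designs:
        return "low"
    return "very low"  # uncontrolled studies only

def adjust(grade: str, downgrades: int = 0, upgrades: int = 0) -> str:
    """Apply the modifying criteria (quality, consistency, directness, ...)."""
    idx = max(0, min(len(LEVELS) - 1, GRADE_ORDER[grade] - downgrades + upgrades))
    return LEVELS[idx]

def overall_grade(critical_outcome_grades: list[str]) -> str:
    """Overall GRADE = lowest GRADE among outcomes critical to the decision."""
    return min(critical_outcome_grades, key=GRADE_ORDER.__getitem__)

# Example: one RCT-based outcome downgraded once for imprecision, and one
# observational outcome; the overall GRADE is the lower of the two.
g1 = adjust(initial_grade({"rct"}), downgrades=1)  # "moderate"
g2 = adjust(initial_grade({"observational"}))      # "low"
print(overall_grade([g1, g2]))                     # low
```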
# Formulating Recommendations

Narrative evidence summaries were then drafted by the working group using the evidence and GRADE tables. One summary was written for each theme that emerged under each key question. The working group then used the narrative evidence summaries to develop guideline recommendations. Factors determining the strength of a recommendation included 1. the values and preferences used to determine which outcomes were "critical," 2. the harms and benefits that result from weighing the "critical" outcomes, and 3. the overall GRADE of the evidence base for the given intervention or question (Table 4). 33

If weighing the "critical outcomes" for a given intervention or question resulted in a "net benefit" or a "net harm," then a "Category I Recommendation" was formulated to strongly recommend for or against the given intervention, respectively. If weighing the "critical outcomes" for a given intervention or question resulted in a "trade-off" between benefits and harms, then a "Category II Recommendation" was formulated to recommend that providers or institutions consider the intervention when deemed appropriate. If weighing the "critical outcomes" for a given intervention or question resulted in an "uncertain trade-off" between benefits and harms, then a "No Recommendation" was formulated to reflect this uncertainty. "No recommendation/unresolved issue" denotes an unresolved issue for which there is low to very low quality evidence with uncertain trade-offs between benefits and harms.

Category I recommendations are defined as strong recommendations with the following implications:

1. For patients: Most people in the patient's situation would want the recommended course of action and only a small proportion would not; request discussion if the intervention is not offered.
2. For clinicians: Most patients should receive the recommended course of action.
3. For policymakers: The recommendation may be adopted as a policy.

Category II recommendations are defined as weak recommendations with the following implications:

1. For patients: Most people in the patient's situation would want the recommended course of action, but many would not.
2. For clinicians: Different choices will be appropriate for different patients, and clinicians must help each patient to arrive at a management decision consistent with her or his values and preferences.
3. For policymakers: Policy making will require substantial debate and involvement of many stakeholders.

It should be noted that Category II recommendations are discretionary for the individual institution and are not intended to be enforced. The wording of each recommendation was carefully selected to reflect the recommendation's strength. In most cases, we used the active voice when writing Category I recommendations, the strong recommendations. Phrases like "do" or "do not" and verbs without auxiliaries or conditionals were used to convey certainty. We used a more passive voice when writing Category II recommendations, the weak recommendations. Words like "consider" and phrases like "is preferable," "is suggested," "is not suggested," or "is not recommended" were chosen to reflect the lesser certainty of the Category II recommendations. Rather than a simple statement of fact, each recommendation is actionable, describing precisely a proposed action to take.

The category "No recommendation/unresolved issue" was most commonly applied to situations where either 1. the overall quality of the evidence base for a given intervention was low to very low and there was no consensus on the benefit of the intervention or 2. there was no published evidence on outcomes deemed critical to weighing the risks and benefits of a given intervention. If the latter was the case, those critical outcomes are noted at the end of the relevant evidence summary. Our evidence-based recommendations were cross-checked with those from guidelines identified in our original systematic search.
Recommendations from previous guidelines for topics not directly addressed by our systematic review of the evidence were included in our "Summary of Recommendations" if they were deemed critical to the target users of this guideline. Unlike recommendations informed by our literature search, these recommendations are not linked to a key question. These recommendations were agreed upon by expert consensus and are designated either IB if they represent a strong recommendation based on accepted practices (e.g., aseptic technique) or II if they are a suggestion based on a probable net benefit despite limited evidence. All recommendations were approved by HICPAC. Recommendations focused only on efficacy, effectiveness, and safety. The optimal use of these guidelines should include a consideration of the costs relevant to the local setting of guideline users.

# Reviewing and Finalizing the Guideline

After a draft of the tables, narrative summaries, and recommendations was completed, the working group shared the draft with the expert panel for in-depth review. While the expert panel was reviewing this draft, the working group completed the remaining sections of the guideline, including the executive summary, background, scope and purpose, methods, summary of recommendations, and recommendations for guideline implementation, audit, and further research. The working group then made revisions to the draft based on feedback from members of the expert panel and presented the entire draft guideline to HICPAC for review. The guideline was then posted in the Federal Register for public comment. After a period of public comment, the guideline was revised accordingly, and the changes were reviewed and voted on by HICPAC. The final guideline was cleared internally by CDC and published and posted on the HICPAC website.

# Updating the Guideline

Future revisions to this guideline will be dictated by new research and technological advancements for preventing CAUTI and will occur at the request of HICPAC.

# VIII. Evidence Review

# Q1. Who should receive urinary catheters?

To answer this question, we focused on three subquestions: A. When is urinary catheterization necessary? B. What are the risk factors for CAUTI? and C. What populations are at highest risk of mortality from urinary catheters?

# Q1A. When is urinary catheterization necessary?

The available data examined five main populations. In all populations, we considered CAUTI outcomes as well as other outcomes we deemed critical to weighing the risks and benefits of catheterization. The evidence for this question consists of 1 systematic review, 37 9 RCTs, and 12 observational studies. The findings of the evidence review and the grades for all important outcomes are shown in Evidence Review Table 1A.

For operative patients, low-quality evidence suggested a benefit of avoiding urinary catheterization. This was based on a decreased risk of bacteriuria/unspecified UTI, no effect on bladder injury, and an increased risk of urinary retention in patients without catheters. Urinary retention in patients without catheters was specifically seen following urogenital surgeries. The most common surgeries studied were urogenital, gynecological, laparoscopic, and orthopedic surgeries. Our search did not reveal data on the impact of catheterization on peri-operative hemodynamic management.

For incontinent patients, low-quality evidence suggested a benefit of avoiding urinary catheterization. 45
This was based on a decreased risk of both SUTI and bacteriuria/unspecified UTI in male nursing home residents without urinary catheters compared with those with continuous condom catheters. We found no difference in the risk of UTI between having a condom catheter only at night and having no catheter. Our search did not reveal data on the impact of catheterization on skin breakdown.

For patients with bladder outlet obstruction, very low-quality evidence suggested a benefit of a urethral stent over an indwelling catheter. 53 This was based on a reduced risk of bacteriuria in those receiving a urethral stent. Our search did not reveal data on the impact of catheterization versus stent placement on urinary complications.

For patients with spinal cord injury, very low-quality evidence suggested a benefit of avoiding indwelling urinary catheters. 54,56 This was based on a decreased risk of SUTI and bacteriuria in those without indwelling catheters (including patients managed with spontaneous voiding, clean intermittent catheterization (CIC), and external striated sphincterotomy with condom catheter drainage), as well as a lower risk of urinary complications, including hematuria, stones, and urethral injury (fistula, erosion, stricture).

For children with myelomeningocele and neurogenic bladder, very low-quality evidence suggested a benefit of CIC compared with urinary diversion or self-voiding. 46,57,58 This was based on a decreased risk of bacteriuria/unspecified UTI in patients receiving CIC compared with urinary diversion, and a lower risk of urinary tract deterioration (defined by febrile urinary tract infection, vesicoureteral reflux, hydronephrosis, or increases in BUN or serum creatinine) compared with self-voiding and in those receiving CIC early (<3 years of age).

# Evidence Review Table 1A. When is urinary catheterization necessary?

1A.1. Use urinary catheters in operative patients only as necessary, rather than routinely.

# Q1B. What are the risk factors for CAUTI?

To answer this question, we reviewed the quality of evidence for those risk factors examined in more than one study. We considered the critical outcomes for decision-making to be SUTI and bacteriuria. The evidence for this question consists of 11 RCTs 59-69 and 37 observational studies. 9,50,54 The findings of the evidence review and the grades for all important outcomes are shown in Evidence Review Table 1B. For SUTI, 50,54,61,62,74,75,79,83,102,103

# Q1C. What populations are at highest risk of mortality from urinary catheters?

To answer this question, we reviewed the quality of evidence for those risk factors examined in more than one study. The evidence for this question consists of 2 observational studies. 7,74 The findings of the evidence review and the grades for all important outcomes are shown in Evidence Review Table 1C. Low-quality evidence suggested that older age, higher severity of illness, and being on an internal medicine service compared with a surgical service were independent risk factors for mortality in patients with indwelling urinary catheters. Both studies evaluating these risk factors found the highest risk of mortality in patients over 70 years of age. Low-quality evidence also suggested that CAUTI was a risk factor for mortality in patients with catheters.

# Evidence Review Table 1C. What populations are at highest risk of mortality from catheters?
1C.1. Minimize urinary catheter use and duration in all patients, particularly those who may be at higher risk for mortality due to catheterization, such as the elderly and patients with severe illness. (Category IB)

# Q2. For those who may require urinary catheters, what are the best practices?

To answer this question, we focused on four subquestions: A. What are the risks and benefits associated with different approaches to catheterization? B. What are the risks and benefits associated with different types of catheters or collecting systems? C. What are the risks and benefits associated with different catheter management techniques? D. What are the risks and benefits associated with different systems interventions?

# Q2A. What are the risks and benefits associated with different approaches to catheterization?

The available data examined the following comparisons of different catheterization approaches:

1. External versus indwelling urethral
2. Intermittent versus indwelling urethral
3. Intermittent versus suprapubic
4. Suprapubic versus indwelling urethral
5. Clean intermittent versus sterile intermittent

For all comparisons, we considered SUTI, bacteriuria/unspecified UTI, or combinations of these outcomes depending on availability, as well as other outcomes critical to weighing the risks and benefits of different catheterization approaches. The evidence for this question consists of 6 systematic reviews, 37 16 RCTs, 62,63 and 18 observational studies. 54,73,81,84 The findings of the evidence review and the grades for all important outcomes are shown in Evidence Review Table 2A.

# Q2A.1. External versus indwelling urethral

Low-quality evidence suggested a benefit of using external catheters over indwelling urethral catheters in male patients who require a urinary collection device but do not have an indication for an indwelling catheter, such as urinary retention or bladder outlet obstruction. 81,109,123 This was based on a decreased risk of a composite outcome of SUTI, bacteriuria, or death, as well as increased patient satisfaction with condom catheters. Differences were most pronounced in men without dementia. Statistically significant differences were not found or reported for the individual CAUTI outcomes or death. Our search did not reveal data on differences in local complications such as skin maceration or phimosis.

# Q2A.2. Intermittent versus indwelling urethral

Low-quality evidence suggested a benefit of using intermittent catheterization over indwelling urethral catheters in selected populations. 84,135,136 This was based on a decreased risk of SUTI and bacteriuria/unspecified UTI but an increased risk of urinary retention in postoperative patients with intermittent catheterization. In one study, urinary retention and bladder distension were avoided by performing catheterization at regular intervals (every 6-8 hours) until return of voiding. Studies of patients with neurogenic bladder most consistently found a decreased risk of CAUTI with intermittent catheterization. Studies in operative patients whose catheters were removed within 24 hours of surgery found no differences in bacteriuria with intermittent versus indwelling catheterization, while studies in which catheters were left in for longer durations had mixed results. Our search did not reveal data on differences in patient satisfaction.
Intermittent versus suprapubic Very low-quality evidence suggested a benefit of intermittent over suprapubic catheterization in selected populations 115,116, based on increased patient acceptability and decreased risk of urinary complications (bladder calculi, vesicoureteral reflux, and upper tract abnormalities). Although we found a decreased risk of bacteriuria/unspecified UTI with suprapubic catheterization, there were no differences in SUTI. The populations studied included women undergoing urogynecologic surgery and spinal cord injury patients. # Q2A.4. Suprapubic versus indwelling urethral Low-quality evidence suggested a benefit of suprapubic catheters over indwelling urethral catheters in selected populations. 37,62,104,107,108,135,136 This was based on a decreased risk of bacteriuria/unspecified UTI, recatheterization, and urethral stricture, and increased patient comfort and satisfaction. However, there were no differences in SUTI and an increased risk of longer duration of catheterization with suprapubic catheters. Studies involved primarily postoperative and spinal cord injury patients. Our search did not reveal data on differences in complications related to catheter insertion or the catheter site. # Q2A.5. Clean intermittent versus sterile intermittent Moderate-quality evidence suggested no benefit of using sterile over clean technique for intermittent catheterization. 63,73,105, No differences were found in the risk of SUTI or bacteriuria/unspecified UTI. Study populations included nursing home residents and adults and children with neurogenic bladder/spinal cord injury. # Q2B. What are the risks and benefits associated with different catheters or collecting systems? The available data examined the following comparisons between different types of catheters and drainage systems: 1. Antimicrobial/antiseptic catheters vs. standard catheters a. Silver-coated catheters vs. standard catheters b. Nitrofurazone-impregnated catheters vs. standard catheters 2. Hydrophilic catheters vs. standard catheters 3. Closed vs. open drainage systems 4. Complex vs. simple drainage systems 5. Preconnected/sealed junction catheters vs. standard catheters 6. Catheter valves vs. catheter bags For all comparisons, we considered CAUTI outcomes as well as other outcomes critical to weighing the risks and benefits of different types of catheters or collecting systems. The evidence for this question consists of 5 systematic reviews, 37, 17 RCTs, 64, 23 observational studies, 82,86,89,97, and 3 economic analyses. 179180,181 The findings of the evidence review and the grades for all important outcomes are shown in Evidence Review Table 2B. # Q2B.1.a. Silver-coated catheters vs. standard catheters Low-quality evidence suggested a benefit of silver-coated catheters over standard latex catheters. 37,82,86,137-139,143,159-163, 165,166 This was based on a decreased risk of bacteriuria/unspecified UTI with silver-coated catheters and no evidence of increased urethral irritation or antimicrobial resistance in studies that reported data on microbiological outcomes. Differences were significant for silver alloy-coated catheters but not silver oxide-coated catheters. 
In a meta-analysis of randomized controlled trials (see Appendix), silver alloy-coated catheters reduced the risk of asymptomatic bacteriuria compared to standard latex catheters (control latex catheters were either uncoated or coated with hydrogel, Teflon®, or silicone), whereas there were no differences when compared to standard, all silicone catheters. The effect of silver alloy catheters compared to latex catheters was more pronounced when used in patients catheterized <1 week. The results were robust to inclusion or exclusion of non peerreviewed studies. Only one observational study found a decrease in SUTI with silver alloycoated catheters. 166 The setting was a burn referral center, where the control catheters were latex, and patients in the intervention group had new catheters placed on admission, whereas the control group did not. Recent observational studies in hospitalized patients found mixed results for bacteriuria/unspecified UTI. # Q2B.1.b. Nitrofurazone-impregnated catheters vs. standard catheters Low-quality evidence suggested a benefit of nitrofurazone-impregnated catheters in patients catheterized for short periods of time. 137,138 This was based on a decreased risk of bacteriuria and no evidence of increased antimicrobial resistance in studies that reported microbiological outcomes. Differences were significant in a meta-analysis of three studies examining nitrofurazone-impregnated catheters (only one individual study significant) when duration of catheterization was 1 week, although the meta-analysis was borderline significant. # Q2B.2. Hydrophilic catheters vs. standard catheters Very low-quality evidence suggested a benefit of hydrophilic catheters over standard nonhydrophilic catheters in specific populations undergoing clean intermittent catheterization. 137,169 This was based on a decreased risk of SUTI, bacteriuria, hematuria, and pain during insertion, and increased patient satisfaction. Differences in CAUTI outcomes were limited to one study of spinal cord injury patients and one study of patients receiving intravesical immunochemoprophylaxis for bladder cancer, while multiple other studies found no significant differences. # Q2B.3. Closed vs. open drainage systems Very low-quality evidence suggested a benefit of using a closed rather than open urinary drainage system. 89,171 This was based on a decreased risk of bacteriuria with a closed drainage system. One study also found a suggestion of a decreased risk of SUTI, bacteremia, and UTIrelated mortality associated with closed drainage systems, but differences were not statistically significant. Sterile, continuously closed drainage systems became the standard of care based on an uncontrolled study published in 1966 demonstrating a dramatic reduction in the risk of infection in short-term catheterized patients with the use of a closed system. 23 Recent data also include the finding that disconnection of the drainage system is a risk factor for bacteriuria (Q1B). # Q2B.4. Complex vs. simple drainage systems Low-quality evidence suggested no benefit of complex closed urinary drainage systems over simple closed urinary drainage systems. 154,172,176,177 Although there was a decreased risk of bacteriuria with the complex systems, differences were found only in studies published before 1990, and not in more recent studies. 
# I. Executive Summary

This guideline updates and expands the original Centers for Disease Control and Prevention (CDC) Guideline for Prevention of Catheter-associated Urinary Tract Infections (CAUTI) published in 1981. Several developments necessitated revision of the 1981 guideline, including new research and technological advancements for preventing CAUTI, the increasing need to address patients in non-acute care settings and patients requiring long-term urinary catheterization, and a greater emphasis on prevention initiatives as well as better-defined goals and metrics for outcomes and process measures.

In addition to updating the previous guideline, this revised guideline reviews the available evidence on CAUTI prevention for patients requiring chronic indwelling catheters and individuals who can be managed with alternative methods of urinary drainage (e.g., intermittent catheterization). The revised guideline also includes specific recommendations for implementation, performance measurement, and surveillance. Although the general principles of CAUTI prevention have not changed from the previous version, the revised guideline provides clarification and more specific guidance based on a defined, systematic review of the literature through July 2007. For areas where knowledge gaps exist, recommendations for further research are listed. Finally, the revised guideline outlines high-priority recommendations for CAUTI prevention in order to offer guidance for implementation.

This document is intended for use by infection prevention staff, healthcare epidemiologists, healthcare administrators, nurses, other healthcare providers, and persons responsible for developing, implementing, and evaluating infection prevention and control programs for healthcare settings across the continuum of care. The guideline can also be used as a resource for societies or organizations that wish to develop more detailed implementation guidance for prevention of CAUTI.

Our goal was to develop a guideline based on a targeted systematic review of the best available evidence, with explicit links between the evidence and recommendations. To accomplish this, we used an adapted GRADE system approach for evaluating quality of evidence and determining strength of recommendations. The methodology, structure, and components of this guideline are approved by HICPAC and will be used for subsequent guidelines issued by HICPAC. A more detailed description of our approach is available in the Methods section.

To evaluate the evidence on preventing CAUTI, we examined data addressing three key questions and related subquestions:

1. Who should receive urinary catheters?
2. For those who may require urinary catheters, what are the best practices?
3. What are the best practices for preventing UTI associated with obstructed urinary catheters?

Evidence addressing the key questions was used to formulate recommendations, and explicit links between the evidence and recommendations are available in the Evidence Review in the body of the guideline and the Evidence Tables and GRADE Tables in the Appendices (https://www.cdc.gov/infectioncontrol/guidelines/pdf/cauti-guidelines-appendix.pdf).

It is important to note that Category I recommendations are all considered strong recommendations and should be equally implemented; it is only the quality of the evidence underlying the recommendation that distinguishes between levels A and B. Category IC recommendations are required by state or federal regulation and may have any level of supporting evidence. The categorization scheme used in this guideline is presented in Table 1 in the Summary of Recommendations and described further in the Methods section.
The Summary of Recommendations is organized as follows:

1. recommendations for who should receive indwelling urinary catheters (or, for certain populations, alternatives to indwelling catheters);
2. recommendations for catheter insertion;
3. recommendations for catheter maintenance;
4. quality improvement programs to achieve appropriate placement, care, and removal of catheters;
5. administrative infrastructure required; and
6. surveillance strategies.

The Implementation and Audit section includes a prioritization of recommendations (i.e., high-priority recommendations that are essential for every healthcare facility), organized by modules, in order to provide facilities more guidance on implementation of these guidelines. A list of recommended performance measures that can potentially be used for internal reporting purposes is also included. Areas in need of further research identified during the evidence review are outlined in the Recommendations for Further Research. This section includes guidance for specific methodological approaches that should be used in future studies.

Readers who wish to examine the primary evidence underlying the recommendations are referred to the Evidence Review in the body of the guideline, and the Evidence Tables and GRADE Tables in the Appendices (https://www.cdc.gov/infectioncontrol/guidelines/pdf/cauti-guidelines-appendix.pdf). The Evidence Review includes narrative summaries of the data presented in the Evidence Tables and GRADE Tables. The Evidence Tables include all study-level data used in the guideline, and the GRADE Tables assess the overall quality of evidence for each question. The Appendices also contain a clearly delineated search strategy that will be used for periodic updates to ensure that the guideline remains a timely resource as new information becomes available.

# II. Summary of Recommendations

# B. Examples of Inappropriate Uses of Indwelling Catheters

• As a substitute for nursing care of the patient or resident with incontinence.
• As a means of obtaining urine for culture or other diagnostic tests when the patient can voluntarily void.
• For prolonged postoperative duration without appropriate indications (e.g., structural repair of urethra or contiguous structures, prolonged effect of epidural anesthesia, etc.).

# II. Proper Techniques for Urinary Catheter Insertion

# III. Implementation and Audit

# Prioritization of Recommendations

In this section, the recommendations considered essential for all healthcare facilities caring for patients requiring urinary catheterization are organized into modules in order to provide more guidance to facilities on implementation of these guidelines. The high-priority recommendations were chosen by a consensus of experts based on strength of recommendation as well as on the likely impact of the strategy in preventing CAUTI. The administrative functions and infrastructure listed above in the summary of recommendations are necessary to accomplish the high-priority recommendations and are therefore critical to the success of a prevention program. In addition, quality improvement programs should be implemented as an active approach to accomplishing these recommendations and when process and outcome measure goals are not being met based on internal reporting.

# Priority Recommendations for Appropriate Urinary Catheter Use (Module 1)

• Insert catheters only for appropriate indications (see Table 2), and leave in place only as long as needed.

b) Compliance with documentation of catheter insertion and removal dates: Conduct random audits of selected units and calculate compliance rate:
• Numerator: number of patients on unit with catheters with proper documentation of insertion and removal dates
• Denominator: number of patients on the unit with a catheter in place at some point during admission
• Standardization factor: 100 (i.e., multiply by 100 so that measure is expressed as a percentage)

c) Compliance with documentation of indication for catheter placement: Conduct random audits of selected units and calculate compliance rate
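The compliance measures above reduce to a simple proportion. The following minimal sketch (illustrative only and not part of the guideline; the function name and example counts are assumptions) shows the calculation for the documentation measure:

```python
def compliance_rate(numerator: int, denominator: int) -> float:
    """Audit compliance rate, expressed as a percentage.

    numerator   -- catheterized patients with insertion and removal dates
                   properly documented
    denominator -- patients with a catheter in place at some point during
                   admission
    """
    if denominator == 0:
        raise ValueError("audit denominator must be greater than zero")
    # Standardization factor of 100 expresses the measure as a percentage.
    return 100.0 * numerator / denominator


# Example: 18 of 25 catheterized patients on the audited unit had complete
# documentation of insertion and removal dates.
print(f"{compliance_rate(18, 25):.1f}%")  # -> 72.0%
```

The same function serves measure c) by substituting the appropriate numerator (patients with a documented indication for catheter placement).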
# IV. Recommendations for Further Research

Our literature review revealed that many of the studies addressing strategies to prevent CAUTI were not of sufficient quality to allow firm conclusions regarding the benefit of certain interventions. Future studies of CAUTI prevention should:

1. Be primary analytic research (i.e., systematic reviews, meta-analyses, interventional studies, and observational studies [cohort, case-control, analytic cross-sectional studies])
2. Evaluate clinically relevant outcomes (e.g., SUTI, bloodstream infections secondary to CAUTI)
3. Adjust for confounders as needed using multivariable analyses
4. Stratify outcomes by patient populations at risk for CAUTI
5. Ensure adequate statistical power to detect differences

The following is a compilation of recommendations for further research:

# V. Background

Urinary tract infections are the most common type of healthcare-associated infection, accounting for more than 30% of infections reported by acute care hospitals. 19 Virtually all healthcare-associated UTIs are caused by instrumentation of the urinary tract. Catheter-associated urinary tract infection (CAUTI) has been associated with increased morbidity, mortality, hospital cost, and length of stay. [6][7][8][9] In addition, bacteriuria commonly leads to unnecessary antimicrobial use, and urinary drainage systems are often reservoirs for multidrug-resistant bacteria and a source of transmission to other patients. 10,11

# Definitions

An indwelling urinary catheter is a drainage tube that is inserted into the urinary bladder through the urethra, is left in place, and is connected to a closed collection system. Alternative methods of urinary drainage may be employed in some patients. Intermittent ("in-and-out") catheterization involves brief insertion of a catheter into the bladder through the urethra to drain urine at intervals. An external catheter is a urine containment device that fits over or adheres to the genitalia and is attached to a urinary drainage bag. The most commonly used external catheter is a soft flexible sheath that fits over the penis ("condom" catheter). A suprapubic catheter is surgically inserted into the bladder through an incision above the pubis.

The limitations and heterogeneity of definitions of CAUTI used in various studies present major challenges in appraising the quality of evidence in the CAUTI literature. Study investigators have used numerous different definitions for CAUTI outcomes, ranging from simple bacteriuria at a range of concentrations to, less commonly, symptomatic infection defined by combinations of bacteriuria and various signs and symptoms. Furthermore, most studies that used CDC/NHSN definitions for CAUTI did not distinguish between SUTI and ASB in their analyses. 30 The heterogeneity of definitions used for CAUTI may reduce the quality of evidence for a given intervention and often precludes meta-analyses. The clinical significance of ASB in catheterized patients is undefined.
Approximately 75% to 90% of patients with ASB do not develop a systemic inflammatory response or other signs or symptoms to suggest infection. 6,31 Monitoring and treatment of ASB is also not an effective prevention measure for SUTI, as most cases of SUTI are not preceded by bacteriuria for more than a day. 25 Treatment of ASB has not been shown to be clinically beneficial and is associated with the selection of antimicrobial-resistant organisms.

# Epidemiology

Between 15% and 25% of hospitalized patients may receive short-term indwelling urinary catheters. 12,13 In many cases, catheters are placed for inappropriate indications, and healthcare providers are often unaware that their patients have catheters, leading to prolonged, unnecessary use. [14][15][16] In acute care hospitals reporting to NHSN in 2006, pooled mean urinary catheter utilization ratios in ICU and non-ICU areas ranged from 0.23 to 0.91 urinary catheter-days/patient-days. 17 While the numbers of units reporting were small, the highest ratios were in trauma ICUs and the lowest in inpatient medical/surgical wards. The overall prevalence of long-term indwelling urethral catheterization use is unknown. The prevalence of urinary catheter use in residents in long-term care facilities in the United States is on the order of 5%, representing approximately 50,000 residents with catheters at any given time. 18 This number appears to be declining over time, likely because of federally mandated nursing home quality measures. However, the high prevalence of urinary catheters in patients transferred to skilled nursing facilities suggests that acute care hospitals should focus more efforts on removing unnecessary catheters prior to transfer. 18

Reported rates of UTI among patients with urinary catheters vary substantially. National data from NHSN acute care hospitals in 2006 showed a range of pooled mean CAUTI rates of 3.1-7.5 infections per 1000 catheter-days. 17 The highest rates were in burn ICUs, followed by inpatient medical wards and neurosurgical ICUs, although these sites also had the fewest numbers of locations reporting. The lowest rates were in medical/surgical ICUs.

Although morbidity and mortality from CAUTI are considered to be relatively low compared to other HAIs, the high prevalence of urinary catheter use leads to a large cumulative burden of infections with resulting infectious complications and deaths. An estimate of annual incidence of HAIs and mortality in 2002, based on a broad survey of US hospitals, found that urinary tract infections made up the highest number of infections (> 560,000) compared to other HAIs, and attributable deaths from UTI were estimated to be over 13,000 (mortality rate 2.3%). 19 While fewer than 5% of bacteriuric cases develop bacteremia, 6 CAUTI is the leading cause of secondary nosocomial bloodstream infections; about 17% of hospital-acquired bacteremias are from a urinary source, with an associated mortality of approximately 10%. 20 In the nursing home setting, bacteremias are most commonly caused by UTIs, the majority of which are catheter-related. 21 An estimated 17% to 69% of CAUTI may be preventable with recommended infection control measures, which means that up to 380,000 infections and 9000 deaths related to CAUTI per year could be prevented. 22
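For readers implementing surveillance, the two NHSN metrics cited above are straightforward to compute. The sketch below is illustrative only (the function names and example counts are assumptions, not part of the guideline) and shows the catheter utilization ratio, the CAUTI rate per 1000 catheter-days, and the preventable-burden arithmetic:

```python
def utilization_ratio(catheter_days: int, patient_days: int) -> float:
    """Urinary catheter utilization ratio: catheter-days per patient-days."""
    return catheter_days / patient_days


def cauti_rate(infections: int, catheter_days: int) -> float:
    """CAUTI incidence per 1000 catheter-days."""
    return 1000.0 * infections / catheter_days


# Example: a unit with 2 CAUTI over 300 catheter-days and 900 patient-days.
print(round(utilization_ratio(300, 900), 2))  # -> 0.33
print(round(cauti_rate(2, 300), 1))           # -> 6.7 per 1000 catheter-days

# Preventable burden implied by the 17%-69% estimate, applied to the cited
# annual figures of >560,000 UTIs and >13,000 attributable deaths:
for fraction in (0.17, 0.69):
    print(f"{560_000 * fraction:,.0f} infections, {13_000 * fraction:,.0f} deaths")
```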
# Pathogenesis and Microbiology

The source of microorganisms causing CAUTI can be endogenous, typically via meatal, rectal, or vaginal colonization, or exogenous, such as via contaminated hands of healthcare personnel or equipment. Microbial pathogens can enter the urinary tract either by the extraluminal route, via migration along the outside of the catheter in the periurethral mucous sheath, or by the intraluminal route, via movement along the internal lumen of the catheter from a contaminated collection bag or catheter-drainage tube junction. The relative contribution of each route in the pathogenesis of CAUTI is not well known. The marked reduction in risk of bacteriuria with the introduction of the sterile, closed urinary drainage system in the 1960s 23 suggests the importance of the intraluminal route. However, even with the closed drainage system, bacteriuria inevitably occurs over time either via breaks in the sterile system or via the extraluminal route. 24 The daily risk of bacteriuria with catheterization is 3% to 10%, 25,26 approaching 100% after 30 days, which is considered the delineation between short- and long-term catheterization 27 (see the worked example at the end of this section).

Formation of biofilms by urinary pathogens on the surface of the catheter and drainage system occurs universally with prolonged duration of catheterization. 28 Over time, the urinary catheter becomes colonized with microorganisms living in a sessile state within the biofilm, rendering them resistant to antimicrobials and host defenses and virtually impossible to eradicate without removing the catheter. The role of bacteria within biofilms in the pathogenesis of CAUTI is unknown and is an area requiring further research.

Antimicrobial resistance among urinary pathogens is an ever-increasing problem. About a quarter of E. coli isolates and one third of P. aeruginosa isolates from CAUTI cases were fluoroquinolone-resistant. Resistance of gram-negative pathogens to other agents, including third-generation cephalosporins and carbapenems, was also substantial. 5 The proportion of organisms that were multidrug-resistant, defined by non-susceptibility to all agents in 4 classes, was 4% of P. aeruginosa, 9% of K. pneumoniae, and 21% of Acinetobacter baumannii. 29
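As a worked example of how the cited daily risk compounds (assuming, for illustration only, a constant and independent daily risk p, which the guideline itself does not assert), the cumulative probability of bacteriuria after n catheterized days is

```latex
P(\text{bacteriuria by day } n) = 1 - (1 - p)^n
```

With p = 0.05, the midpoint of the cited 3% to 10% range, this gives 1 - (0.95)^30 ≈ 0.79 after 30 days; with p = 0.10 it gives 1 - (0.90)^30 ≈ 0.96, consistent with the observation that bacteriuria approaches 100% by 30 days.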
# VI. Scope and Purpose

This guideline updates and expands the original CDC Guideline for Prevention of CAUTI published in 1981. The revised guideline addresses the prevention of CAUTI for patients in need of either short- or long-term (i.e., > 30 days) urinary catheterization in any type of healthcare facility and evaluates evidence for alternative methods of urinary drainage, including intermittent catheterization, external catheters, and suprapubic catheters. The guideline also includes specific recommendations for implementation, performance measurement, and surveillance. Recommendations for further research are also provided to address the knowledge gaps in CAUTI prevention identified during the literature review.

To evaluate the evidence on preventing CAUTI, we examined data addressing three key questions and related subquestions:

1. Who should receive urinary catheters?
2. For those who may require urinary catheters, what are the best practices?
3. What are the best practices for preventing UTI associated with obstructed urinary catheters?

This document is intended for use by infection prevention staff, healthcare epidemiologists, healthcare administrators, nurses, other healthcare providers, and persons responsible for developing, implementing, and evaluating infection prevention and control programs for healthcare settings across the continuum of care. The guideline can also be used as a resource for societies or organizations that wish to develop more detailed implementation guidance for prevention of CAUTI.

# VII. Methods

This guideline was based on a targeted systematic review of the best available evidence on CAUTI prevention. We used the Grading of Recommendations Assessment, Development and Evaluation (GRADE) approach [32][33][34] to provide explicit links between the available evidence and the resulting recommendations. Our guideline development process is outlined in Figure 1.

# Figure 1. The Guideline Development Process

# Development of Key Questions

We first conducted an electronic search of the National Guideline Clearinghouse®.

# Literature Search

Following the development of the key questions, search terms were developed for identifying literature relevant to the key questions. For the purposes of quality assurance, we compared these terms to those used in relevant seminal studies and guidelines. These search terms were then incorporated into search strategies for the relevant electronic databases. Searches were performed in Medline® (National Library of Medicine) using the Ovid® Platform (Ovid Technologies, Wolters Kluwer, New York, NY), EMBASE® (Elsevier BV, Amsterdam, Netherlands), CINAHL® (Ebsco Publishing, Ipswich, MA), and Cochrane® (Cochrane Collaboration, Oxford, UK) (all databases were searched in July 2007), and the resulting references were imported into a reference manager, where duplicates were resolved. For Cochrane reviews ultimately included in our guideline, we checked for updates in July 2008. The detailed search strategy used for identifying primary literature and the results of the search can be found in Appendix 1B (https://www.cdc.gov/infectioncontrol/guidelines/pdf/cauti-guidelines-appendix.pdf).

# Study Selection

Titles and abstracts from references were screened by a single author (C.V.G., R.K.A., or D.A.P.), and the full-text articles were retrieved if they were 1. relevant to one or more key questions, 2. primary analytic research, systematic reviews, or meta-analyses, and 3. written in English. Likewise, the full-text articles were screened by a single author (C.V.G. or D.A.P.) using the same criteria, and included studies underwent a second review for inclusion by another author (R.K.A.). Disagreements were resolved by the remaining authors. The results of this process are depicted in Figure 2.

# Data Extraction and Synthesis

Data on the study author, year, design, objective, population, setting, sample size, power, follow-up, and definitions and results of clinically relevant outcomes were extracted into evidence tables (Appendix 2 (https://www.cdc.gov/infectioncontrol/guidelines/pdf/cauti-guidelines-appendix.pdf)). Three evidence tables were developed, each of which represented one of our key questions. Studies were extracted into the most relevant evidence table. Then, studies were organized by the common themes that emerged within each evidence table. Data were extracted by one author (R.K.A.) and cross-checked by another (C.V.G.). Disagreements were resolved by the remaining authors. Data and analyses were extracted as originally presented in the included studies. Meta-analyses were performed only where their use was deemed critical to a recommendation, and only in circumstances where multiple studies with sufficiently homogenous populations, interventions, and outcomes could be analyzed.

Systematic reviews were included in our review. To avoid duplication of data, we excluded primary studies if they were also included in a systematic review captured by our search. The only exception to this was if the primary study also addressed a relevant question that was outside the scope of the included systematic review.
Before exclusion, data from the primary studies that we originally captured were abstracted into the evidence tables and reviewed. We also excluded systematic reviews that analyzed primary studies that were fully captured in a more recent systematic review. The only exception to this was if the older systematic review also addressed a relevant question that was outside the scope of the newer systematic review. To ensure that all relevant studies were captured in the search, the bibliography was vetted by a panel of clinical experts.

# Grading of Evidence

First, the quality of each study was assessed using scales adapted from existing methodology checklists, and scores were recorded in the evidence tables. Appendix 3 (https://www.cdc.gov/infectioncontrol/guidelines/pdf/cauti-guidelines-appendix.pdf) includes the sets of questions we used to assess the quality of each of the major study designs.

Next, the quality of the evidence base was assessed using methods adapted from the GRADE Working Group. 32 Briefly, GRADE tables were developed for each of the interventions or questions addressed within the evidence tables. Included in the GRADE tables were the intervention of interest, any outcomes listed in the evidence tables that were judged to be clinically important, the quantity and type of evidence for each outcome, the relevant findings, and the GRADE of evidence for each outcome, as well as an overall GRADE of the evidence base for the given intervention or question.

The initial GRADE of evidence for each outcome was deemed high if the evidence base included a randomized controlled trial (RCT) or a systematic review of RCTs, low if the evidence base included only observational studies, or very low if the evidence base consisted only of uncontrolled studies. The initial GRADE could then be modified by eight criteria. 34 Criteria that could decrease the GRADE of an evidence base included quality, consistency, directness, precision, and publication bias. Criteria that could increase the GRADE included a large magnitude of effect, a dose-response gradient, or inclusion of unmeasured confounders that would increase the magnitude of effect (Table 3).

GRADE definitions are as follows:

1. High: further research is very unlikely to change confidence in the estimate of effect
2. Moderate: further research is likely to affect confidence in the estimate of effect and may change the estimate
3. Low: further research is very likely to affect confidence in the estimate of effect and is likely to change the estimate
4. Very low: any estimate of effect is very uncertain

After determining the GRADE of the evidence base for each outcome of a given intervention or question, we calculated the overall GRADE of the evidence base for that intervention or question. The overall GRADE was based on the lowest GRADE for the outcomes deemed critical to making a recommendation.

# Table 3. Rating the Quality of Evidence Using the GRADE Approach
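The grading logic just described is essentially algorithmic. The following minimal sketch is illustrative only (the function names, the example, and the mechanical treatment of modifiers are assumptions; the real GRADE process involves structured expert judgment, not a lookup). It captures the three steps: assign an initial grade from study design, adjust it by the modifying criteria, and take the lowest grade across critical outcomes:

```python
# A sketch of the evidence-grading steps described above (illustrative only).

GRADE_LEVELS = ["very low", "low", "moderate", "high"]


def initial_grade(designs: set) -> str:
    """Assign the starting GRADE from the strongest study design present."""
    if "rct" in designs or "systematic review of rcts" in designs:
        return "high"
    if "observational" in designs:
        return "low"
    return "very low"  # evidence base consists only of uncontrolled studies


def adjust(grade: str, downgrades: int = 0, upgrades: int = 0) -> str:
    """Move the grade down or up one level per criterion met (e.g., imprecision
    or publication bias down; large effect or dose-response gradient up)."""
    i = GRADE_LEVELS.index(grade) - downgrades + upgrades
    return GRADE_LEVELS[max(0, min(i, len(GRADE_LEVELS) - 1))]


def overall_grade(critical_outcome_grades: list) -> str:
    """Overall GRADE is the lowest grade among the critical outcomes."""
    return min(critical_outcome_grades, key=GRADE_LEVELS.index)


# Example: an RCT-based outcome downgraded once for inconsistency ("moderate"),
# combined with an observational outcome ("low"), yields an overall "low".
g1 = adjust(initial_grade({"rct"}), downgrades=1)
g2 = initial_grade({"observational"})
print(overall_grade([g1, g2]))  # -> low
```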
# Formulating Recommendations

Narrative evidence summaries were then drafted by the working group using the evidence and GRADE tables. One summary was written for each theme that emerged under each key question. The working group then used the narrative evidence summaries to develop guideline recommendations. Factors determining the strength of a recommendation included:

1. the values and preferences used to determine which outcomes were "critical,"
2. the harms and benefits that result from weighing the "critical" outcomes, and
3. the overall GRADE of the evidence base for the given intervention or question (Table 4). 33

If weighing the "critical outcomes" for a given intervention or question resulted in a "net benefit" or a "net harm," then a "Category I Recommendation" was formulated to strongly recommend for or against the given intervention, respectively. If weighing the "critical outcomes" for a given intervention or question resulted in a "trade-off" between benefits and harms, then a "Category II Recommendation" was formulated to recommend that providers or institutions consider the intervention when deemed appropriate. If weighing the "critical outcomes" for a given intervention or question resulted in an "uncertain trade-off" between benefits and harms, then a "No Recommendation" was formulated to reflect this uncertainty. ("No recommendation/unresolved issue" denotes an unresolved issue for which there is low- to very low-quality evidence with uncertain trade-offs between benefits and harms.)

Category I recommendations are defined as strong recommendations with the following implications:

1. For patients: Most people in the patient's situation would want the recommended course of action and only a small proportion would not; request discussion if the intervention is not offered.
2. For clinicians: Most patients should receive the recommended course of action.
3. For policymakers: The recommendation may be adopted as a policy.

Category II recommendations are defined as weak recommendations with the following implications:

1. For patients: Most people in the patient's situation would want the recommended course of action, but many would not.
2. For clinicians: Different choices will be appropriate for different patients, and clinicians must help each patient to arrive at a management decision consistent with her or his values and preferences.
3. For policymakers: Policy making will require substantial debate and involvement of many stakeholders.

It should be noted that Category II recommendations are discretionary for the individual institution and are not intended to be enforced.

The wording of each recommendation was carefully selected to reflect the recommendation's strength. In most cases, we used the active voice when writing Category I recommendations, the strong recommendations. Phrases like "do" or "do not" and verbs without auxiliaries or conditionals were used to convey certainty. We used a more passive voice when writing Category II recommendations, the weak recommendations. Words like "consider" and phrases like "is preferable," "is suggested," "is not suggested," or "is not recommended" were chosen to reflect the lesser certainty of the Category II recommendations. Rather than a simple statement of fact, each recommendation is actionable, describing precisely a proposed action to take.

The category "No recommendation/unresolved issue" was most commonly applied to situations where either 1. the overall quality of the evidence base for a given intervention was low to very low and there was no consensus on the benefit of the intervention or 2. there was no published evidence on outcomes deemed critical to weighing the risks and benefits of a given intervention. If the latter was the case, those critical outcomes are noted at the end of the relevant evidence summary.

Our evidence-based recommendations were cross-checked with those from guidelines identified in our original systematic search.
Recommendations from previous guidelines for topics not directly addressed by our systematic review of the evidence were included in our "Summary of Recommendations" if they were deemed critical to the target users of this guideline. Unlike recommendations informed by our literature search, these recommendations are not linked to a key question. These recommendations were agreed upon by expert consensus and are designated either IB if they represent a strong recommendation based on accepted practices (e.g., aseptic technique) or II if they are a suggestion based on a probable net benefit despite limited evidence. All recommendations were approved by HICPAC. Recommendations focused only on efficacy, effectiveness, and safety. The optimal use of these guidelines should include a consideration of the costs relevant to the local setting of guideline users.

# Reviewing and Finalizing the Guideline

After a draft of the tables, narrative summaries, and recommendations was completed, the working group shared the draft with the expert panel for in-depth review. While the expert panel was reviewing this draft, the working group completed the remaining sections of the guideline, including the executive summary, background, scope and purpose, methods, summary of recommendations, and recommendations for guideline implementation, audit, and further research. The working group then made revisions to the draft based on feedback from members of the expert panel and presented the entire draft guideline to HICPAC for review. The guideline was then posted in the Federal Register for public comment. After a period of public comment, the guideline was revised accordingly, and the changes were reviewed and voted on by HICPAC. The final guideline was cleared internally by CDC and published and posted on the HICPAC website.

# Updating the Guideline

Future revisions to this guideline will be dictated by new research and technological advancements for preventing CAUTI and will occur at the request of HICPAC.

# VIII. Evidence Review

# Q1. Who should receive urinary catheters?

To answer this question, we focused on three subquestions: A. When is urinary catheterization necessary? B. What are the risk factors for CAUTI? and C. What populations are at highest risk of mortality from urinary catheters?

# Q1A. When is urinary catheterization necessary?

The available data examined five main populations. In all populations, we considered CAUTI outcomes as well as other outcomes we deemed critical to weighing the risks and benefits of catheterization. The evidence for this question consists of 1 systematic review, 37 9 RCTs, [38][39][40][41][42][43][44][45][46] and 12 observational studies. [47][48][49][50][51][52][53][54][55][56][57][58] The findings of the evidence review and the grades for all important outcomes are shown in Evidence Review Table 1A.

For operative patients, low-quality evidence suggested a benefit of avoiding urinary catheterization. [37][38][39][40][41][42][43][44][47][48][49] This was based on a decreased risk of bacteriuria/unspecified UTI, no effect on bladder injury, and increased risk of urinary retention in patients without catheters. Urinary retention in patients without catheters was specifically seen following urogenital surgeries. The most common surgeries studied were urogenital, gynecological, laparoscopic, and orthopedic surgeries. Our search did not reveal data on the impact of catheterization on peri-operative hemodynamic management.
For incontinent patients, low-quality evidence suggested a benefit of avoiding urinary catheterization. 45,[50][51][52] This was based on a decreased risk of both SUTI and bacteriuria/unspecified UTI in male nursing home residents without urinary catheters compared to those with continuous condom catheters. We found no difference in the risk of UTI between having a condom catheter only at night and having no catheter. Our search did not reveal data on the impact of catheterization on skin breakdown.

For patients with bladder outlet obstruction, very low-quality evidence suggested a benefit of a urethral stent over an indwelling catheter. 53 This was based on a reduced risk of bacteriuria in those receiving a urethral stent. Our search did not reveal data on the impact of catheterization versus stent placement on urinary complications.

For patients with spinal cord injury, very low-quality evidence suggested a benefit of avoiding indwelling urinary catheters. 54,56 This was based on a decreased risk of SUTI and bacteriuria in those without indwelling catheters (including patients managed with spontaneous voiding, clean intermittent catheterization [CIC], and external striated sphincterotomy with condom catheter drainage), as well as a lower risk of urinary complications, including hematuria, stones, and urethral injury (fistula, erosion, stricture).

For children with myelomeningocele and neurogenic bladder, very low-quality evidence suggested a benefit of CIC compared to urinary diversion or self-voiding. 46,57,58 This was based on a decreased risk of bacteriuria/unspecified UTI in patients receiving CIC compared to urinary diversion, and a lower risk of urinary tract deterioration (defined by febrile urinary tract infection, vesicoureteral reflux, hydronephrosis, or increases in BUN or serum creatinine) compared to self-voiding and in those receiving CIC early (< 1 year of age) versus late (> 3 years of age).

# Evidence Review Table 1A. When is urinary catheterization necessary?

1A.1. Use urinary catheters in operative patients only as necessary, rather than routinely. (Category IB)

# Q1B. What are the risk factors for CAUTI?

To answer this question, we reviewed the quality of evidence for those risk factors examined in more than one study. We considered the critical outcomes for decision-making to be SUTI and bacteriuria. The evidence for this question consists of 11 RCTs 59-69 and 37 observational studies. 9,50,54, The findings of the evidence review and the grades for all important outcomes are shown in Evidence Review Table 1B. For SUTI, 50,54,61,62,74,75,79,83,102,103

# Q1C. What populations are at highest risk of mortality from urinary catheters?

To answer this question, we reviewed the quality of evidence for those risk factors examined in more than one study. The evidence for this question consists of 2 observational studies. 7,74 The findings of the evidence review and the grades for all important outcomes are shown in Evidence Review Table 1C.

Low-quality evidence suggested that older age, higher severity of illness, and being on an internal medicine service compared to a surgical service were independent risk factors for mortality in patients with indwelling urinary catheters. Both studies evaluating these risk factors found the highest risk of mortality in patients over 70 years of age. Low-quality evidence also suggested that CAUTI was a risk factor for mortality in patients with catheters.

# Evidence Review Table 1C. What populations are at highest risk of mortality from catheters?

1C.1.
Minimize urinary catheter use and duration in all patients, particularly those who may be at higher risk for mortality due to catheterization, such as the elderly and patients with severe illness. (Category IB)

# Q2. For those who may require urinary catheters, what are the best practices?

To answer this question, we focused on four subquestions: A. What are the risks and benefits associated with different approaches to catheterization? B. What are the risks and benefits associated with different types of catheters or collecting systems? C. What are the risks and benefits associated with different catheter management techniques? D. What are the risks and benefits associated with different systems interventions?

# Q2A. What are the risks and benefits associated with different approaches to catheterization?

The available data examined the following comparisons of different catheterization approaches:

1. External versus indwelling urethral
2. Intermittent versus indwelling urethral
3. Intermittent versus suprapubic
4. Suprapubic versus indwelling urethral
5. Clean intermittent versus sterile intermittent

For all comparisons, we considered SUTI, bacteriuria/unspecified UTI, or combinations of these outcomes depending on availability, as well as other outcomes critical to weighing the risks and benefits of different catheterization approaches. The evidence for this question consists of 6 systematic reviews, 37,[104][105][106][107][108] 16 RCTs, 62,63,[109][110][111][112][113][114][115][116][117][118][119][120][121][122] and 18 observational studies. 54,73,81,84,[123][124][125][126][127][128][129][130][131][132][133][134][135][136] The findings of the evidence review and the grades for all important outcomes are shown in Evidence Review Table 2A.

# Q2A.1. External versus indwelling urethral

Low-quality evidence suggested a benefit of using external catheters over indwelling urethral catheters in male patients who require a urinary collection device but do not have an indication for an indwelling catheter such as urinary retention or bladder outlet obstruction. 81,109,123 This was based on a decreased risk of a composite outcome of SUTI, bacteriuria, or death as well as increased patient satisfaction with condom catheters. Differences were most pronounced in men without dementia. Statistically significant differences were not found or reported for the individual CAUTI outcomes or death. Our search did not reveal data on differences in local complications such as skin maceration or phimosis.

# Q2A.2. Intermittent versus indwelling urethral

Low-quality evidence suggested a benefit of using intermittent catheterization over indwelling urethral catheters in selected populations. 84,[104][105][106][110][111][112][113][114][124][125][126]135,136 This was based on a decreased risk of SUTI and bacteriuria/unspecified UTI but an increased risk of urinary retention in postoperative patients with intermittent catheterization. In one study, urinary retention and bladder distension were avoided by performing catheterization at regular intervals (every 6-8 hrs) until return of voiding. Studies of patients with neurogenic bladder most consistently found a decreased risk of CAUTI with intermittent catheterization. Studies in operative patients whose catheters were removed within 24 hrs of surgery found no differences in bacteriuria with intermittent vs. indwelling catheterization, while studies where catheters were left in for longer durations had mixed results.
Our search did not reveal data on differences in patient satisfaction.

# Q2A.3. Intermittent versus suprapubic

Very low-quality evidence suggested a benefit of intermittent over suprapubic catheterization in selected populations 115,116,134-136 based on increased patient acceptability and a decreased risk of urinary complications (bladder calculi, vesicoureteral reflux, and upper tract abnormalities). Although we found a decreased risk of bacteriuria/unspecified UTI with suprapubic catheterization, there were no differences in SUTI. The populations studied included women undergoing urogynecologic surgery and spinal cord injury patients.

# Q2A.4. Suprapubic versus indwelling urethral

Low-quality evidence suggested a benefit of suprapubic catheters over indwelling urethral catheters in selected populations. 37,62,104,107,108,128-133,135,136 This was based on a decreased risk of bacteriuria/unspecified UTI, recatheterization, and urethral stricture, and increased patient comfort and satisfaction. However, there were no differences in SUTI, and there was an increased risk of longer duration of catheterization with suprapubic catheters. Studies involved primarily postoperative and spinal cord injury patients. Our search did not reveal data on differences in complications related to catheter insertion or the catheter site.

# Q2A.5. Clean intermittent versus sterile intermittent

Moderate-quality evidence suggested no benefit of using sterile over clean technique for intermittent catheterization. 63,73,105,117-122 No differences were found in the risk of SUTI or bacteriuria/unspecified UTI. Study populations included nursing home residents and adults and children with neurogenic bladder/spinal cord injury.

# Q2B. What are the risks and benefits associated with different catheters or collecting systems?

The available data examined the following comparisons between different types of catheters and drainage systems:
1. Antimicrobial/antiseptic catheters vs. standard catheters
   a. Silver-coated catheters vs. standard catheters
   b. Nitrofurazone-impregnated catheters vs. standard catheters
2. Hydrophilic catheters vs. standard catheters
3. Closed vs. open drainage systems
4. Complex vs. simple drainage systems
5. Preconnected/sealed junction catheters vs. standard catheters
6. Catheter valves vs. catheter bags

For all comparisons, we considered CAUTI outcomes as well as other outcomes critical to weighing the risks and benefits of different types of catheters or collecting systems. The evidence for this question consists of 5 systematic reviews, 37,137-140 17 RCTs, 64,143-158 23 observational studies, 82,86,89,97,159-163,165-178 and 3 economic analyses. 179-181 The findings of the evidence review and the grades for all important outcomes are shown in Evidence Review Table 2B.

# Q2B.1.a. Silver-coated catheters vs. standard catheters

Low-quality evidence suggested a benefit of silver-coated catheters over standard latex catheters. 37,82,86,137-139,143,159-163,165,166 This was based on a decreased risk of bacteriuria/unspecified UTI with silver-coated catheters and no evidence of increased urethral irritation or antimicrobial resistance in studies that reported data on microbiological outcomes. Differences were significant for silver alloy-coated catheters but not silver oxide-coated catheters.
In a meta-analysis of randomized controlled trials (see Appendix), silver alloy-coated catheters reduced the risk of asymptomatic bacteriuria compared to standard latex catheters (control latex catheters were either uncoated or coated with hydrogel, Teflon®, or silicone), whereas there were no differences when compared to standard, all-silicone catheters. The effect of silver alloy catheters compared to latex catheters was more pronounced when used in patients catheterized <1 week. The results were robust to inclusion or exclusion of non-peer-reviewed studies. Only one observational study found a decrease in SUTI with silver alloy-coated catheters. 166 The setting was a burn referral center, where the control catheters were latex, and patients in the intervention group had new catheters placed on admission, whereas the control group did not. Recent observational studies in hospitalized patients found mixed results for bacteriuria/unspecified UTI.

# Q2B.1.b. Nitrofurazone-impregnated catheters vs. standard catheters

Low-quality evidence suggested a benefit of nitrofurazone-impregnated catheters in patients catheterized for short periods of time. 137,138 This was based on a decreased risk of bacteriuria and no evidence of increased antimicrobial resistance in studies that reported microbiological outcomes. Differences were significant in a meta-analysis of three studies examining nitrofurazone-impregnated catheters (only one individual study was significant) when the duration of catheterization was <1 week. No differences were seen when the duration of catheterization was >1 week, although the meta-analysis result was of borderline significance.

# Q2B.2. Hydrophilic catheters vs. standard catheters

Very low-quality evidence suggested a benefit of hydrophilic catheters over standard nonhydrophilic catheters in specific populations undergoing clean intermittent catheterization. 137,144-148,169 This was based on a decreased risk of SUTI, bacteriuria, hematuria, and pain during insertion, and increased patient satisfaction. Differences in CAUTI outcomes were limited to one study of spinal cord injury patients and one study of patients receiving intravesical immunochemoprophylaxis for bladder cancer, while multiple other studies found no significant differences.

# Q2B.3. Closed vs. open drainage systems

Very low-quality evidence suggested a benefit of using a closed rather than open urinary drainage system. 89,171 This was based on a decreased risk of bacteriuria with a closed drainage system. One study also found a suggestion of a decreased risk of SUTI, bacteremia, and UTI-related mortality associated with closed drainage systems, but the differences were not statistically significant. Sterile, continuously closed drainage systems became the standard of care based on an uncontrolled study published in 1966 demonstrating a dramatic reduction in the risk of infection in short-term catheterized patients with the use of a closed system. 23 Recent data also include the finding that disconnection of the drainage system is a risk factor for bacteriuria (Q1B).

# Q2B.4. Complex vs. simple drainage systems

Low-quality evidence suggested no benefit of complex closed urinary drainage systems over simple closed urinary drainage systems. 150-152,154,172,176,177 Although there was a decreased risk of bacteriuria with the complex systems, differences were found only in studies published before 1990, and not in more recent studies.
The complex drainage systems studied included various mechanisms for reducing bacterial entry, such as antiseptic-releasing cartridges at the drain port of the urine collection bag; see the evidence table for the systems evaluated.

# Q2B.5. Preconnected/sealed junction catheters vs. standard catheters

Low-quality evidence suggested a benefit of using preconnected catheters with junction seals over catheters with unsealed junctions to reduce the risk of disconnections. 64,153,156,175 This was based on a decreased risk of SUTI and bacteriuria with preconnected sealed catheters. Studies that found differences had higher rates of CAUTI in the control group than studies that did not find an effect.

# Q2B.6. Catheter valves vs. drainage bags

Moderate-quality evidence suggested a benefit of catheter valves over drainage bags in selected patients with indwelling urinary catheters. 140 Catheter valves led to greater patient satisfaction but no differences in bacteriuria/unspecified UTI or pain/bladder spasms. Details regarding the setting for recruitment and follow-up of the patients in the studies were unclear, and the majority of subjects were men. Our search did not reveal data on the effect of catheter valves on bladder function, bladder/urethral trauma, or catheter blockage.

# Q2C. What are the risks and benefits associated with different catheter management techniques?

The available data examined the following catheter management techniques (those addressed in Q2C.1 through Q2C.12 below):
1. Antimicrobial prophylaxis
2. Urinary antiseptics
3. Bladder irrigation
4. Antiseptic instillation in the drainage bag
5. Periurethral care
6. Routine catheter or bag change
7. Catheter lubricants
8. Securing devices
9. Bacterial interference
10. Catheter cleansing
11. Catheter removal strategies
12. Assessment of urine volumes

For all comparisons, we considered CAUTI outcomes as well as other outcomes critical to weighing the risks and benefits of different catheter management techniques. The evidence for this question consists of 6 systematic reviews, 37,105,106,182-184 56 RCTs, 60,61,65-69,143,158, 34 observational studies, 83,85,88,90,96,102,133,167,178, and 1 economic analysis. 180 The findings of the evidence review and the grades for all important outcomes are shown in Evidence Review Table 2C.

# Q2C.1. Antimicrobial prophylaxis

Low-quality evidence suggested no benefit of antimicrobial prophylaxis in patients undergoing short-term catheterization. 37,60,61,83,85,133,158,178,182,185,186,189-191,232-234 This was based on heterogeneous results for SUTI and bacteriuria/unspecified UTI and no adverse events related to antimicrobials. Lack of consistency in specific factors, such as patient population, antimicrobial agents, timing of administration, and duration of follow-up, did not allow for a summary of evidence of the effect of antimicrobial prophylaxis on CAUTI in patients undergoing short-term catheterization. Only two studies evaluated adverse events related to antimicrobials. Our search did not reveal data on antimicrobial resistance or Clostridium difficile infection.

Low-quality evidence suggested no benefit of antimicrobial prophylaxis in patients undergoing long-term catheterization (indwelling and clean intermittent catheterization). 106,183,192,194,235,238 This was based on a decreased risk of bacteriuria, heterogeneous results for SUTI, and no differences reported for catheter encrustation or adverse events, although data were sparse. One systematic review suggested an increase in antimicrobial resistance with antimicrobial use.

# Q2C.2. Urinary antiseptics

Low-quality evidence suggested a benefit of methenamine for short-term catheterized patients. 196,197 This was based on a reduced risk of SUTI and bacteriuria and no differences in adverse events.
Evidence was limited to two studies of patients following gynecological surgery in Norway and Sweden. Very low-quality evidence suggested a benefit of methenamine for long-term catheterized patients. 106,236-239 This was based on a reduced risk of encrustation but no differences in the risk of SUTI or bacteriuria. Data on encrustation were limited to one study. Studies involved primarily elderly and spinal cord injury patients with chronic indwelling catheters.

# Q2C.3. Bladder irrigation

Low-quality evidence suggested no benefit of bladder irrigation in patients with indwelling or intermittent catheters. 66,69,199-206,240-242 This was based on no differences in SUTI and heterogeneous findings for bacteriuria.

# Q2C.4. Antiseptic instillation in the drainage bag

Low-quality evidence suggested no benefit of antiseptic instillation in urinary drainage bags. 90,207-211,243-245 This was based on no differences in SUTI and heterogeneous results for bacteriuria.

# Q2C.5. Periurethral care

Low-quality evidence suggested no benefit of antiseptic meatal cleaning regimens before or during catheterization to prevent CAUTI. 65,67,68,88,158,212-216,246,247 This was based on no difference in the risk of bacteriuria in patients receiving periurethral care regimens compared to those not receiving them. One study found a higher risk of bacteriuria with cleaning of the urethral meatus-catheter junction (either twice-daily application of povidone-iodine or once-daily cleaning with a non-antiseptic solution of green soap and water) in a subgroup of women with positive meatal cultures and in patients not receiving antimicrobials. Periurethral cleaning with chlorhexidine before catheter insertion did not have an effect in two studies.

# Q2C.6. Routine catheter or bag change

Low-quality evidence suggested no benefit of routine catheter or drainage bag changes to prevent CAUTI. 102,217-219,248,249 This was based on no difference or an increased risk of SUTI, and no difference in bacteriuria, with routine compared to as-needed changes or with more frequent changing intervals. One study in nursing home residents found no differences in SUTI with routine monthly catheter changes compared to changing only for obstruction or infection, but the study was underpowered to detect a difference. Another study in home care patients found an increased risk of SUTI when catheters were changed more frequently than monthly.

# Q2C.7. Catheter lubricants

Very low-quality evidence suggested a benefit of using lubricants for catheter insertion. 167,220-223,250-254 This was based on a decreased risk of SUTI and bacteriuria with the use of a prelubricated catheter compared to a catheter lubricated by the patient, and a decreased risk of bacteriuria with use of a lubricant versus no lubricant. Studies were heterogeneous in both the interventions and the outcomes studied. Several studies comparing antiseptic lubricants to nonantiseptic lubricants found no significant differences.

# Q2C.8. Securing devices

Low-quality evidence suggested no benefit of using catheter securing devices to prevent CAUTI. 224 This was based on no significant difference in the risk of SUTI or meatal erosion. The only study in this category looked at one particular product.

# Q2C.9. Bacterial interference

Moderate-quality evidence suggested a benefit of using bacterial interference in catheterized patients. 225
In the one study evaluating this intervention, urinary colonization with a nonpathogenic Escherichia coli was associated with a decreased risk of SUTI in adults with spinal cord injury and a history of frequent CAUTI.

# Q2C.10. Catheter cleansing

Very low-quality evidence suggested a benefit of wet versus dry storage procedures for catheters used in clean intermittent catheterization. 255 This was based on a decreased risk of SUTI with a wet storage procedure in one study of spinal cord injury patients undergoing clean intermittent catheterization, compared to a dry storage procedure in which the catheter was left to air dry after washing. In the wet procedure, the catheter was stored in a dilute povidone-iodine solution after washing with soap and water.

# Q2C.11. Catheter removal strategies

a. Clamping vs. free drainage prior to removal. Low-quality evidence suggested no benefit of clamping versus free drainage before catheter removal. 37,184 This was based on no difference in the risk of bacteriuria, urinary retention, or recatheterization between the two strategies. One study comparing a clamp-and-release strategy to free drainage over 72 hours found a greater risk of bacteriuria in the clamping group.

b. Postoperative duration of catheterization. Moderate-quality evidence suggested a benefit of shorter versus longer postoperative durations of catheterization. 37,184,227,228 This was based on a decreased risk of bacteriuria/unspecified UTI, decreased time to ambulation and length of stay, no differences in urinary retention and SUTI, and an increased risk of recatheterization. Significant decreases in bacteriuria/unspecified UTI were found specifically for comparisons of 1 day versus 3 or 5 days of postoperative catheterization. Recatheterization risk was greater in only one study, which compared immediate removal to removal 6 or 12 hours after hysterectomy.

# Q2C.12. Assessment of urine volumes

Low-quality evidence suggested a benefit of using portable ultrasound to assess urine volume in patients undergoing intermittent catheterization. 229,230 This was based on fewer catheterizations but no reported differences in the risk of unspecified UTI. Patients studied were adults with neurogenic bladder in inpatient rehabilitation centers. Our search did not reveal data on the use of ultrasound in catheterized patients in other settings.

# Q2D. What are the risks and benefits associated with different systems interventions?

We considered CAUTI outcomes, duration of catheterization, recatheterization, and transmission of pathogens when weighing the risks and benefits of different systems interventions. The evidence for this question consists of 1 RCT 259 and 19 observational studies. 3,25,260-276 The findings of the evidence review and the grades for all important outcomes are shown in Evidence Review Table 2D.

# Q2D.1. Multifaceted infection control/quality improvement programs

Low-quality evidence suggested a benefit of multifaceted infection control/quality improvement programs to reduce the risk of CAUTI. 3,260-267 This was based on a decreased risk of SUTI, bacteriuria/unspecified UTI, and duration of catheter use with implementation of such programs. Studies evaluated various multifaceted interventions. The studies with significant findings included: 1. education and performance feedback regarding compliance with catheter care, emphasizing hand hygiene, and maintaining unobstructed urine flow; 2.
computerized alerts to physicians, nurse-driven protocols to remove catheters, and use of handheld bladder scanners to assess for urinary retention; 3. guidelines and education focusing on perioperative catheter management; and 4. a multifaceted infection control program including guidelines for catheter insertion and maintenance. A program using a checklist and algorithm for appropriate catheter use also suggested a decrease in unspecified UTI and catheter duration, but statistical differences were not reported.

# Q2D.2. Reminders

Very low-quality evidence suggested a benefit of using urinary catheter reminders to prevent CAUTI. 268-270 This was based on a decreased risk of bacteriuria and decreased duration of catheterization, with no differences in recatheterization or SUTI, when reminders were used. Reminders to physicians included both computerized and non-computerized alerts about the presence of urinary catheters and the need to remove unnecessary catheters.

# Q2D.3. Bacteriologic monitoring

Very low-quality evidence suggested no benefit of bacteriologic monitoring to prevent CAUTI. 25,271 Although one study found a decreased risk of bacteriuria during a period of bacteriologic monitoring and feedback, only 2% of SUTI episodes were considered potentially preventable with the use of bacteriologic monitoring.

# Q2D.4. Hand hygiene

Very low-quality evidence suggested a benefit of using alcohol hand sanitizer in reducing CAUTI. This was based on one study in a rehabilitation facility that found a decrease in unspecified UTI, although no statistical differences were reported. 272 A separate multifaceted study that included education and performance feedback on compliance with catheter care and hand hygiene showed a decrease in the risk of SUTI. 265

# Q2D.5. Patient placement

Very low-quality evidence suggested a benefit of spatially separating patients to prevent transmission of urinary pathogens. 273 This was based on a decreased risk of transmission of urinary bacterial pathogens in nursing home residents in separate rooms compared to residents in the same rooms.

# Q2D.6. Catheter team versus self-catheterization

Very low-quality evidence suggested no benefit of a catheter team to prevent CAUTI among patients requiring intermittent catheterization. 274 This was based on one study showing no difference in unspecified UTI between use of a catheter care team and self-catheterization for intermittent catheterization in paraplegic patients.

# Q2D.7. Feedback

Very low-quality evidence suggested a benefit of using nursing feedback to prevent CAUTI. 275 This was based on a decreased risk of unspecified UTI during an intervention in which nursing staff were provided with regular reports of unit-specific rates of CAUTI.

# Q2D.8. Nurse-directed catheter removal

Very low-quality evidence suggested a benefit of a nurse-directed catheter removal program to prevent CAUTI. 276 This was based on a decreased risk of unspecified UTI during an intervention in which criteria were developed that allowed a registered nurse to remove a catheter without a physician's order when it was no longer medically necessary. Of the three intensive care units where the intervention was implemented, differences were significant only in the coronary intensive care unit.

Evidence Review Table 2D.

# Q3: What are the best practices for preventing UTI associated with obstructed urinary catheters?

The available data examined the following practices:
1. Methods to prevent/reduce encrustations or blockage
2.
Catheter materials preventing blockage

For this question, the available relevant outcomes included blockage/encrustation. We did not find data on the outcomes of CAUTI. The evidence for this question consists of 1 systematic review, 277 2 RCTs, 278,279 and 2 observational studies. 280,281 The findings of the evidence review and the grades for all important outcomes are shown in Evidence Review Table 3.

# Q3.1. Methods to prevent/reduce encrustations or blockage

Low-quality evidence suggested a benefit of acidifying solutions or oral acetohydroxamic acid in preventing or reducing catheter encrustations and blockage in long-term catheterized patients. 277,278,280,281 No differences were seen with daily catheter irrigation with normal saline.

# Q3.2. Catheter materials preventing blockage

Low-quality evidence suggested a benefit of silicone over latex or Teflon-coated catheters in preventing or reducing catheter encrustations in long-term catheterized patients who were prone to blockage. No differences were seen with different materials in patients considered "nonblockers." 279

Evidence Review Table 3. What are the best practices for preventing UTI associated with obstructed urinary catheters?
Fetal alcohol syndrome (FAS) results from maternal alcohol use during pregnancy and carries lifelong consequences. Early recognition of FAS can result in better outcomes for persons who receive a diagnosis. Although FAS was first identified in 1973, persons with this condition often do not receive a diagnosis. In 2002, Congress directed CDC to update and refine diagnostic and referral criteria for FAS, incorporating recent scientific and clinical evidence. In 2002, CDC convened a scientific working group (SWG) of persons with expertise in FAS research, diagnosis, and treatment to draft criteria for diagnosing FAS. This report summarizes the diagnostic guidelines drafted by the SWG, provides recommendations for when and how to refer a person suspected of having problems related to prenatal alcohol exposure, and assesses existing practices for creating supportive environments that might prevent long-term adverse consequences associated with FAS. The guidelines were created on the basis of a review of scientific evidence, clinical expertise, and the experiences of families affected by FAS regarding the physical and neuropsychologic features of FAS and the medical, educational, and social services needed by persons with FAS and their families. The guidelines are intended to facilitate early identification of persons affected by prenatal exposure to alcohol so they and their families can receive services that enable them to achieve healthy lives and reach their full potential. This report also includes recommendations to enhance identification of and intervention for women at risk for alcohol-exposed pregnancies. Additional data are needed to develop diagnostic criteria for other related disorders (e.g., alcohol-related neurodevelopmental disorder).

# Introduction

Prenatal exposure to alcohol damages the developing fetus and is a leading preventable cause of birth defects and developmental disabilities (1-3). Children exposed to alcohol during fetal development can suffer multiple negative effects, including physical and cognitive deficits. Although the number and severity of negative effects can range from subtle to serious, they are always lifelong. Referral and diagnosis for fetal alcohol syndrome (FAS) can be made throughout the lifespan. However, the majority of persons with FAS are referred and receive a diagnosis during childhood. Thus, the terms "child" or "children" as used in these guidelines are not intended to preclude referral, assessment, and diagnosis of older persons.

# Background

The effects of prenatal exposure to alcohol and the basic diagnostic features of FAS were first described in 1973 (4-8). In 1981, the U.S. Surgeon General issued a public health advisory warning that alcohol use during pregnancy could cause birth defects (9); this warning was reissued in 2004 (10). In 1989, Congress mandated that language warning of the consequences of drinking during pregnancy be included on alcohol product labels (11). Despite the known adverse effects of prenatal exposure to alcohol (4,5), children who experience these effects often do not receive a correct diagnosis or referral for diagnostic evaluation because of the absence of uniformly accepted diagnostic criteria and guidelines for referral. Early identification and diagnosis of FAS in affected persons are essential components of providing health, education, and social services that promote optimal well-being.
In 2002, Congress directed CDC to 1) develop guidelines for diagnosing FAS and other negative birth outcomes resulting from prenatal exposure to alcohol, 2) incorporate these guidelines into curricula for medical and allied health students and practitioners, and 3) disseminate curricula concerning these guidelines to facilitate training of medical and allied health students and practitioners. These guidelines represent a consensus of opinion from persons with expertise in relevant scientific and clinical fields, with input from service professionals and families affected by FAS. Information that served as the basis for the development of these guidelines was obtained from the published scientific literature, the clinical knowledge of participants, and the experience of families affected by FAS. CDC staff initially identified reports and other documents that were used as the scientific basis for creating diagnostic guidelines. On the basis of this information, and in coordination with the National Task Force on Fetal Alcohol Syndrome and Fetal Alcohol Effect (NTFFAS/FAE), other federally funded FAS programs, and nongovernment organizations concerned with FAS, CDC formed a scientific working group (SWG) of persons with expertise in research and clinical practice regarding prenatal exposure to alcohol to develop diagnostic guidelines for FAS. Guidelines were formulated on the basis of consensus among SWG members and NTFFAS/FAE.

To assist in defining the dysmorphologic features most useful for identifying persons with FAS, SWG members assembled a matrix of the major and associated dysmorphic features of non-FAS syndromes that have one or more features in common with FAS. This matrix was used to determine the combination of dysmorphic features most discriminative for FAS. To assist deliberations concerning central nervous system (CNS) abnormalities associated with FAS, persons with expertise in the science, assessment, and treatment of psychological aspects of FAS were asked to identify the CNS abnormalities and other neurobehavioral domains most common among persons affected by prenatal alcohol exposure. These responses formed the basis for discussion and the resulting guidelines for CNS abnormalities for persons with FAS.

This report summarizes the guidelines drafted as a result of the SWG's deliberations, provides recommendations for when and how to refer a person suspected of having problems related to prenatal alcohol exposure, and assesses existing practices for creating supportive environments that might prevent long-term adverse consequences associated with FAS.

# Prevalence

Varied FAS prevalences (range: 0.2-1.5 cases per 1,000 live births) have been reported worldwide (12-15). Other studies that used different ascertainment methodologies have produced different estimates (range: 0.5-2.0 cases per 1,000 live births) (16-22). These rates are comparable with or higher than rates for other common developmental disabilities (e.g., Down syndrome or spina bifida) (23). On the basis of these prevalence estimates, of the approximately 4 million infants born each year in the United States, an estimated 1,000-6,000 are born with FAS (see the illustrative calculation below). Studies have consistently reported that >50% of all U.S. women of childbearing age report alcohol consumption during the previous month (1,24-28). The majority of these women drank only occasionally, but >13% could be classified as moderate or heavy drinkers.
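As a point of reference, the 1,000-6,000 estimate cited above is simply the reported prevalence range applied to the approximately 4 million annual U.S. births. A back-of-the-envelope check (an illustration only, not part of the original surveillance estimates):

$$0.2/1{,}000 \times 4{,}000{,}000 = 800 \qquad\text{and}\qquad 1.5/1{,}000 \times 4{,}000{,}000 = 6{,}000$$

The product at the lowest reported prevalence (800) is of the same order as the cited lower bound of 1,000, and the higher ascertainment-based estimates (0.5-2.0 cases per 1,000) would imply roughly 2,000-8,000 affected births annually.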
In addition, 12% of women reported binge drinking (i.e., consuming five or more drinks on one occasion) during the preceding month (1,25,27,28). Approximately half of all U.S. pregnancies are unintended, and millions of women of childbearing age are sexually active while not using adequate contraception (24-28). Recent data from the Behavioral Risk Factor Surveillance System indicate that an estimated 12%-13% of U.S. women aged 18-44 years are sexually active, do not use contraception effectively, and drink alcohol frequently or binge drink, thereby putting them at risk for an alcohol-exposed pregnancy (24). Because data are not available for all subpopulations, prevalences might be greater than these data indicate.

# Fetal Alcohol Spectrum Disorders

Multiple terms are used to describe the continuum of effects that result from prenatal exposure to alcohol, including fetal alcohol effects, alcohol-related birth defects (ARBD), alcohol-related neurodevelopmental disorder (ARND), and, most recently, fetal alcohol spectrum disorders (FASDs) (29). In April 2004, the National Organization on Fetal Alcohol Syndrome (NOFAS) convened a meeting of representatives from three federal agencies (the National Institutes of Health [NIH], CDC, and the Substance Abuse and Mental Health Services Administration [SAMHSA]) and persons with expertise in the field to develop a consensus definition of FASDs. The resulting definition, which is used in this report, defined FASDs as the range of effects that can occur in a person whose mother drank alcohol during pregnancy, including physical, mental, behavioral, and learning disabilities, with possible lifelong implications. As this definition indicates, multiple diagnostic categories (e.g., FAS, ARND, and ARBD) are subsumed under the term FASDs. However, FASDs is not itself a diagnostic category and should be used only when referring to the collection of diagnostic terms resulting from prenatal exposure to alcohol.

# Recommendations

# Diagnostic Criteria

For the majority of health-care providers, the key indicator of FAS is the set of characteristic facial features first described in 1973 (4). Alcohol is a teratogen that results in dysmorphia, growth problems, and abnormalities of the central nervous system in multiple ways (30,31). Confirmation and documentation of prenatal alcohol exposure can be difficult to establish. For birth mothers, admission of alcohol use during pregnancy can be stigmatizing. The situation can be further complicated if the woman continues to use alcohol, especially at high consumption rates. Clinicians might need to obtain information regarding alcohol use from other reliable informants, such as a relative.

Clinicians often have to evaluate a child or adult for FAS without definitive information regarding the mother's use of alcohol during pregnancy. This situation occurs frequently for children in foster and adoptive homes. In such cases, every effort should be made to obtain the necessary information, but lack of confirmation of alcohol use during pregnancy should not preclude a diagnosis of FAS if all other criteria are present. In rare instances, absence of exposure will be confirmed. Documentation that the birth mother did not drink any amount of alcohol from conception through birth would indicate that a FAS diagnosis is not appropriate.
This finding typically implies either that the birth mother knew the date of conception (e.g., a planned pregnancy) and did not consume alcohol from that day forward or that she was prevented from drinking for a certain reason (e.g., incarceration). Because of the imprecise nature of exposure information, the following two qualifying terms are suggested for a finding of prenatal alcohol exposure:

- FAS with confirmed prenatal alcohol exposure requires documentation of the alcohol consumption patterns of the birth mother during the index pregnancy on the basis of clinical observation; self-reports; reports of heavy alcohol use during pregnancy by a reliable informant; medical records documenting positive blood alcohol levels or alcohol treatment; or other social, legal, or medical problems related to drinking during the index pregnancy.
- FAS with unknown prenatal alcohol exposure indicates neither a confirmed presence nor a confirmed absence of exposure. Examples include situations in which the child is adopted and any prenatal exposure is unknown, the birth mother is an alcoholic but confirmed evidence of exposure during pregnancy does not exist, or conflicting reports regarding exposure exist that cannot be reliably resolved.

Prenatal exposure to alcohol alone is not sufficient to warrant a diagnosis of FAS. Despite the heterogeneity of expression for features related to prenatal exposure to alcohol, a diagnosis of FAS requires documentation of three findings: 1) three specific facial abnormalities, 2) growth deficit, and 3) CNS abnormalities (Appendix) (30,31) (Box).

# BOX. Characteristics for diagnosing fetal alcohol syndrome

# Facial dysmorphia
On the basis of racial norms (i.e., those appropriate for a person's race), the person exhibits all three of the following characteristic facial features:
- smooth philtrum (University of Washington Lip-Philtrum Guide rank 4 or 5),
- thin vermillion border (University of Washington Lip-Philtrum Guide rank 4 or 5), and
- small palpebral fissures (<10th percentile).

# Growth problems
Confirmed, documented prenatal or postnatal height, weight, or both <10th percentile, adjusted for age, sex, gestational age, and race or ethnicity.

# Central nervous system abnormalities
Documented structural, neurologic, or functional abnormality (see CNS Abnormalities and the Appendix). A minimal illustrative code sketch of how these three required findings combine appears at the end of this subsection.

# Considerations When Diagnosing FAS

Because FAS is a syndrome rather than a specific disease, additional features can be present. For example, in addition to the key facial dysmorphic features, maxillary hypoplasia is often noted for persons with FAS (3). Features often change with age or development. After puberty, the characteristic facial features associated with FAS can become more difficult to detect (32). However, the key features remain constant for the majority of persons with FAS (33,34). Changes in growth pattern across development also lead to variability in presentation. For certain affected persons, growth problems might occur at a younger age but not be present at the time of the diagnostic evaluation. This is particularly important when considering prenatal growth retardation or early growth problems caused by failure to thrive. Because multiple treatments exist for growth problems (e.g., use of feeding tubes or hormone therapy), any history of growth retardation, including prenatal growth deficiencies, is consistent with the criteria for diagnosing FAS (35). The clinician should be certain that the child was not nutritionally deprived at the single point in time when the growth deficit was present.
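To make the structure of the three-part rule in the Box concrete, here is a minimal sketch of the decision logic (illustrative only; the function and field names are hypothetical, not part of the guidelines, and no code can substitute for clinical judgment and standardized assessment):

```python
from dataclasses import dataclass

@dataclass
class Findings:
    """Hypothetical record of documented findings for one person."""
    smooth_philtrum: bool           # Lip-Philtrum Guide rank 4 or 5
    thin_vermillion_border: bool    # Lip-Philtrum Guide rank 4 or 5
    small_palpebral_fissures: bool  # <10th percentile
    growth_deficit: bool            # height and/or weight <10th percentile,
                                    # prenatal or postnatal, adjusted for norms
    cns_abnormality: bool           # structural, neurologic, or functional

def meets_fas_criteria(f: Findings) -> bool:
    """All three facial features AND a growth deficit AND a CNS abnormality."""
    facial = (f.smooth_philtrum
              and f.thin_vermillion_border
              and f.small_palpebral_fissures)
    return facial and f.growth_deficit and f.cns_abnormality
```

Note that prenatal alcohol exposure does not appear as an input: per the guidelines, exposure history qualifies the diagnosis ("confirmed" vs. "unknown" exposure) but is neither sufficient on its own nor, when unconfirmed, disqualifying.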
The adopted threshold for growth (<10th percentile) represents an attempt to maximize sensitivity, even though it reduces specificity.

# CNS Abnormalities

The diagnostic criteria for CNS abnormality require documentation of one of three types of deficits or abnormalities (i.e., structural, neurologic, or functional). A person might have more than one CNS abnormality (36). Identifying CNS abnormalities resulting from prenatal alcohol exposure can be the most difficult aspect of a FAS diagnosis because of the heterogeneity of expression for these deficits across persons (Appendix). Approximately one fourth of persons who receive a diagnosis of FAS perform at two standard deviations below the mean (which approaches substantial impairment) on standardized measures of cognition (37). To capture the full spectrum of effects adequately, two levels of functional deficits are consistent with the criteria for a CNS abnormality: 1) performance below the third percentile (i.e., two standard deviations below the mean) on a measure of global cognitive functioning, or 2) performance <16th percentile (i.e., one standard deviation below the mean) on standardized measures of three functional domains (see the brief worked example later in this section). Thus, persons scoring below the normal range on a global measure of intelligence or development, and persons scoring in the below-average range on standardized measures of three specific functional domains, would be consistent with the criteria for functional CNS abnormality for diagnostic purposes.

Because of the importance of documenting CNS abnormalities and the variability in functional deficits, the diagnostic process should include a thorough neuropsychologic evaluation that assesses multiple domains. Extensive standardized testing might not be readily available in all diagnostic settings. Clinicians are encouraged to supplement their observations by obtaining standardized testing through early intervention programs, public schools, and psychologists in private practice. Such testing will facilitate the development of appropriate, personalized treatment plans for persons who receive a diagnosis of FAS. These guidelines recommend that functional domains be assessed by using norm-referenced standardized measures. Assessments should be conducted by professionals using reliable and validated instruments.

# Differential Diagnosis

Individual dysmorphic features are not unique to any particular syndrome. Even rare defects or certain clusters of dysmorphic features can appear in multiple syndromes. Therefore, a process of differential diagnosis is essential in making an accurate FAS diagnosis. Features that discriminate these disorders from FAS have been described (38). Certain syndromes have single overlapping features with FAS. With the exception of toluene embryopathy, no other known syndrome has the full constellation of small palpebral fissures, thin vermillion border, and smooth philtrum. However, for certain syndromes (e.g., Williams syndrome, Dubowitz syndrome, or fetal dilantin syndrome), the overall constellation of features (primary features, occasional features, or both) is similar to FAS, and these syndromes should be considered in particular when completing the differential diagnosis.

Growth retardation and deficiencies occur among children, adolescents, and adults for multiple reasons. Insufficient nutrition could be a particular problem for infants with poor sucking responses who fail to thrive. In addition, certain genetic disorders result in specific growth deficiencies (e.g., dwarfism).
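The percentile cutoffs used in the functional criteria above correspond to standard-deviation units under a normal (Gaussian) model of test scores. The following brief check (an illustration only, using Python's standard library) shows why two standard deviations below the mean is described as "below the third percentile" and one standard deviation below as "<16th percentile":

```python
from math import erf, sqrt

def percentile_from_z(z: float) -> float:
    """Percentile rank implied by a z-score under a normal model:
    100 * Phi(z), where Phi is the standard normal CDF."""
    return 100 * 0.5 * (1 + erf(z / sqrt(2)))

print(percentile_from_z(-2.0))  # ~2.3  -> below the 3rd percentile
print(percentile_from_z(-1.0))  # ~15.9 -> just under the 16th percentile
```

The same correspondence holds for standard scores (e.g., a score of 70 on a scale with mean 100 and SD 15 is a z-score of -2).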
Prenatal growth retardation can result from multiple factors, including maternal smoking or other behaviors leading to hypoxia, poor maternal nutrition, or genetic disorders unrelated to maternal alcohol consumption. Both environmental and genetic bases for growth retardation should be considered in the differential diagnosis when considering a FAS diagnosis. Finally, because a threshold of <10th percentile (rather than the lower threshold of the third percentile commonly used to denote growth retardation) was adopted, certain children will be classified as consistent with this criterion for reasons other than prenatal exposure to alcohol (e.g., parents having short stature). However, because the diagnosis of FAS is made only when facial dysmorphia and CNS abnormalities also are present, the 10th percentile threshold, with its increased sensitivity, was selected.

Differential diagnosis of CNS abnormalities involves not only ruling out other disorders but also specifying simultaneously occurring disorders. CNS deficits associated with FAS (especially functional deficits) can be produced by multiple factors in addition to prenatal alcohol exposure. Observed functional deficits should be determined not to be better explained by other causes. In addition to other organic syndromes that produce deficits in one or more of the previously cited domains (e.g., Williams syndrome and Down syndrome), disrupted home environments or other external factors can produce functional deficits in multiple domains that overlap those affected by FAS. In making a differential diagnosis of FAS, the clinician should evaluate CNS abnormalities in conjunction with dysmorphia and laboratory findings. CNS abnormalities resulting from environmental influences (e.g., abuse or neglect, disruptive homes, and lack of opportunities) are harder to differentiate. To assist with differential diagnosis between FAS and environmental causes of CNS abnormalities, clinicians should obtain a complete, detailed history for the person and family members.

In addition to ruling out other causes of CNS abnormalities, a complete diagnosis should identify and specify other disorders that can coexist with FAS (e.g., autism, conduct disorder, or oppositional defiant disorder). A particular person might have a conduct disorder in addition to FAS; however, not all persons with FAS have conduct disorders, and not all persons with conduct disorders have FAS. Certain functional deficits might lead to additional behavior problems. For example, a child with an attention problem also could have conduct disorder. Clinicians should consider organic causes, environmental contributions, and comorbidity for both inclusive and exclusive purposes when evaluating a person for a FAS diagnosis (32,39). Because differential diagnosis for CNS abnormalities within a FAS diagnosis is difficult, the evaluation should be conducted by professionals trained in both the features of FAS and those of a broad array of birth defects and developmental disabilities.

# Conditions Consistent with a Subset of Diagnostic Criteria for FAS

The majority of persons with deficits resulting from prenatal exposure to alcohol do not express all the features necessary for a FAS diagnosis (36). Sufficient scientific evidence is not available to define diagnostic criteria for any prenatal alcohol-related condition other than FAS.
Persons who have the neurodevelopmental deficits required for a FAS diagnosis but who do not have all three facial features or growth deficits might not receive a diagnosis and thus might not be provided with services. Ongoing funding has been provided by the National Institute on Alcohol Abuse and Alcoholism to conduct research that might lead to evidence-based diagnostic criteria for persons with other conditions caused by prenatal alcohol use. CDC is using a collaborative database of neurodevelopmental data from five intervention studies to explore the nature of persons who could be considered in the diagnostic category of alcohol-related neurodevelopmental disorder, as well as data from a prospective cohort study in Denmark of children aged 5 years. FAS is the only diagnostic category with scientific evidence to support clinical criteria at this time. As future data become available, these guidelines can be refined and expanded to delineate other conditions resulting from prenatal alcohol exposure.

# Mental Health Problems and Other Lifelong Consequences

FAS has lifelong consequences. Common FAS-related mental health conditions (excluding attention problems) reported include conduct disorders, oppositional defiant disorders, anxiety disorders, adjustment disorders, sleep disorders, and depression (37,40-44). Although attention problems can be classified as a mental health issue or psychiatric condition, in these guidelines they are treated as a primary deficit resulting from alcohol-related CNS damage rather than a secondary mental health concern (45). Decreased adaptive skills and increased problems with daily living abilities (e.g., dependent living conditions, disrupted school experiences, poor employment records, and encounters with law enforcement, including incarceration) have been documented among persons with FAS (37). These mental health-related consequences should not be used for diagnosis. However, they are prevalent among persons with FAS and are likely to result in referral and comprehensive diagnostic evaluation.

# Referral Considerations

Providers of medical, educational, and social services often must decide whether to refer a child, person, or family to a specialist for a full FAS diagnostic evaluation. This decision can be difficult. For biologically related family members, social stigma might be associated with any evaluation concerning prenatal alcohol exposure. In adoptive or foster families, alcohol use during pregnancy might be suspected, but direct information might not be available. The following guidelines were developed to assist service providers in making referral decisions. Each case should be evaluated individually. When in doubt, providers should refer persons for a full evaluation by a multidisciplinary team with experience in evaluating prenatal alcohol exposure. The following circumstances should prompt a diagnostic referral:

- When prenatal alcohol exposure is known, a child should be referred for full FAS evaluation when substantial prenatal alcohol use (i.e., seven or more drinks per week, three or more drinks on multiple occasions, or both) has been confirmed. If substantial prenatal alcohol exposure is known, in the absence of any other positive criteria (i.e., dysmorphia, growth deficits, or CNS abnormalities), the primary health-care provider should document this exposure and monitor the child's ongoing growth and development closely.
- When information regarding prenatal alcohol exposure is unknown, a child should be referred for full FAS evaluation for any one of the following:
  - any report of concern by a parent or caregiver (e.g., foster or adoptive parent) that a child has or might have FAS;
  - presence of all three facial features (i.e., smooth philtrum, thin vermillion border, and small palpebral fissures);
  - presence of one or more of these facial features, with growth deficits in height, weight, or both;
  - presence of one or more facial features, with one or more CNS abnormalities; or
  - presence of one or more facial features, with growth deficits and one or more CNS abnormalities.

(These referral criteria are restated as an illustrative code sketch at the end of this section.)

In addition to specific features associated with a FAS diagnosis, certain social and family history factors have been associated with prenatal alcohol exposure (46). The possibility of prenatal alcohol exposure should be considered fully for persons who are experiencing or have experienced one or more of the following:
- premature maternal death related to alcohol use (either disease or trauma),
- living with an alcoholic parent,
- current or previous abuse or neglect,
- current or previous involvement with child protective services agencies (PSAs),
- a history of transient caregiving situations, or
- foster or adoptive placements (including kinship care).

Although such situations might have a negative impact on the development of any child, evidence exists that children with FAS or a related disorder are particularly likely to experience negative situations that involve a dysfunctional family unit (46), especially if the biologic mother abuses alcohol.

# Services for Persons with FAS

For persons with developmental disabilities and their families, diagnosis is never an endpoint. This is particularly true for persons with FAS, their families, and their communities. The diagnostic process (especially the neuropsychologic assessment) should be part of a continuum of care that identifies and facilitates appropriate health-care, education, and community services. Early diagnosis and a stable, nurturing home environment have been identified as strong protective factors for persons with FAS (46). Limited information is available regarding strategies for interventions specific to persons with FAS. The information available has been gathered primarily from the experience of persons with other disabilities and from that of parents, gained through trial and error and shared through informal networks. Treatments currently employed to reduce the risk for adverse effects of FAS have not been evaluated systematically or scientifically. In 2001, CDC provided the first federal funding to develop and test systematic, scientifically developed interventions specific to FAS (e.g., a modified mathematics curriculum or a program to develop peer friendship skills). These projects are in their final stages, and findings will be published.

The learning and life skills affected by prenatal alcohol exposure vary among persons, depending on the amount, timing, and pattern of exposure and on each person's current and past environment (47,48). As a result, services needed for persons with FAS and their families vary according to the parts of the brain affected, the person's age or level of maturation, the health or functioning of the family, and the person's overall living environment. Thus, the service needs of affected persons and their families should be individualized (49).
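Pulling together the referral criteria above, the decision logic can be summarized in a short sketch (illustrative only; all names are hypothetical, the drink-count thresholds are those given in the referral guidance above, and the sketch is no substitute for the individualized, case-by-case judgment the guidelines call for):

```python
def should_refer_for_fas_evaluation(
    exposure_known: bool,
    substantial_exposure: bool,  # >=7 drinks/week, or >=3 drinks on multiple
                                 # occasions, or both
    caregiver_concern: bool,     # parent/caregiver reports child has or might have FAS
    n_facial_features: int,      # 0-3 of: smooth philtrum, thin vermillion border,
                                 #         small palpebral fissures
    growth_deficit: bool,
    cns_abnormality: bool,
) -> bool:
    """Referral decision rule sketched from the guidelines' criteria."""
    if exposure_known:
        # Known exposure: refer when substantial prenatal alcohol use is confirmed.
        return substantial_exposure
    # Unknown exposure: refer on caregiver concern, all three facial features,
    # or any facial feature combined with growth deficit and/or CNS abnormality.
    return (caregiver_concern
            or n_facial_features == 3
            or (n_facial_features >= 1 and (growth_deficit or cns_abnormality)))
```

In the known-exposure case with no other positive findings, the guidelines additionally direct the primary health-care provider to document the exposure and monitor growth and development; that monitoring step is outside the scope of this sketch, as are the social and family history factors, which indicate that exposure should be considered rather than mandating referral outright.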
Certain general areas of service and specific services have been identified as helpful to persons with FAS and their families (32). Interventions should include strategies that stabilize home placement and improve parent-child interaction (47). One means of accomplishing this goal is to increase understanding of the disorder among parents, teachers, law enforcement personnel, and other professionals who might become involved with the affected person. Children with FAS often need specialized parenting techniques because of their difficulty with cause-and-effect reasoning and other executive functioning skills (47). Caregiver education should highlight and explain differences in the thought processes of children with FAS compared with typically developing children and children with other developmental disabilities. This knowledge would enable parents to avoid potentially difficult situations (e.g., overly stimulating environments) and better manage problems when they do arise. Overall, a better-functioning family that results from caregiver education promotes the stable, nurturing home that has been demonstrated to be a protective factor for children with FAS (50).

Professionals who work with persons affected by FAS could benefit from a better understanding of the disorder and of the services available for affected persons and their families (39). These professionals can help link families with needed community resources and ensure that affected children receive maximum benefit from the services provided. Interacting with social and educational service agencies can be overwhelming and confusing, and each agency typically uses a specialized vocabulary (i.e., jargon) that is difficult for nonspecialists to understand. In addition to being able to diagnose FAS, clinicians should help parents and caregivers identify available services, determine which ones are effective for their children, and understand how to work productively with service providers (32).

Prenatally exposed infants and children often enter the foster or adoptive care system at an early age. The prevalence of children with FAS or a related disorder in the foster care system is estimated to be 10 times that of the general population (51). Although PSAs might have information regarding a child's prenatal history, PSA staff generally do not know about FAS, understand how FAS affects the child, or communicate with other service systems regarding the child's FAS status (51). As a result, foster and adoptive families typically are not educated regarding the long-term effects of FAS and are unprepared to meet their children's needs. The majority of PSAs require foster parents to take a specified number of educational courses annually. These courses should include education regarding the effects and developmental needs of children with FAS, because the majority of foster parents will encounter at least one child with FAS or a related disorder during their time as a foster parent (51). Projects funded by CDC have developed FAS curricula for parents, educators, and juvenile justice systems; information regarding these curricula is available at /ncbddd/fas/awareness.htm.

The assessment process is integral to both the FAS diagnosis and the development of an effective treatment plan. Such a treatment plan minimizes risk factors for lifelong negative consequences and promotes protective factors that maximize developmental potential.
Clinicians and service providers must ensure that assessments include communication and social skills, emotional maturity, verbal and comprehension abilities, language usage, and, if appropriate, referral for medication assessments. Finally, the health and development of children with disabilities, including children with FAS, can be promoted by public support for programs that provide access to school, recreational, and social activities.

# Alcohol Use During Pregnancy

Because no safe threshold of alcohol use during pregnancy has been established, CDC and NTFFAS/FAE recommend that women who are pregnant, planning a pregnancy, or at risk for pregnancy should not drink alcohol. Women of childbearing age who are not pregnant should drink no more than seven drinks per week and no more than three drinks on any one occasion. Federal, state, and local agencies; clinicians and researchers; educational and social service professionals; and families should work together to educate women of childbearing age and communities countrywide regarding the risks of drinking alcohol during pregnancy. Women who have had at least one child with FAS are at especially high risk for giving birth to a second affected child (2,52).

Universal screening for alcohol use among all women of childbearing age might help identify women who drink above recommended levels as well as those who drink and might become pregnant. Screening can be performed in clinicians' offices or in community health settings. Screening techniques that include measures of quantity, frequency, and heavy episodic drinking, as well as behavioral manifestations of risk drinking, have proven to be most beneficial; simple questionnaires have been developed to screen for problematic alcohol use among adults in multiple populations and settings (53). Effective prevention programs frequently employ a multicomponent approach that combines cognitive-behavioral techniques with norms clarification, education, and motivational enhancement interventions.

For women who screen positive for hazardous alcohol use or abuse, brief interventions that use time-limited, self-help, and preventive strategies to promote reductions in alcohol use in nondependent persons, and that facilitate referral of dependent persons to specialized treatment programs, are low-cost, effective treatment alternatives (54-57). The acronym FRAMES encompasses six key elements of the majority of successful brief interventions: 1) feedback of personal risk, 2) responsibility for personal control, 3) advice to change, 4) menu of ways to reduce or stop drinking, 5) empathetic counseling style, and 6) self-efficacy or optimism regarding reducing or stopping drinking (58). Preconception counseling of women of childbearing age who are at risk for an alcohol-exposed pregnancy and who are not using effective contraception has been demonstrated to be a promising method of prevention (59). Project CHOICES, funded by CDC, is an example of a brief intervention that has been effective. Information regarding this project and other federally sponsored studies of prenatal alcohol screening and intervention programs is available at federal agency websites, including services.ahrq.gov.
# Summary of Recommendations

On the basis of a review of current scientific and clinical evidence, the following recommendations are made concerning referral of children and diagnosis of FAS:

# Diagnosis of FAS

- A diagnosis of FAS should be made if documentation exists of 1) all three dysmorphic facial features (i.e., smooth philtrum, thin vermillion border, and small palpebral fissures), 2) prenatal or postnatal growth deficit in height or weight, and 3) CNS abnormality.
- The diagnosis should be classified on the basis of available history as confirmed prenatal alcohol exposure or unknown prenatal alcohol exposure.
- CNS abnormality may be documented as structural, neurologic, or functional (Box).

# Referral

- If prenatal alcohol exposure is known, a child or person should be referred for full FAS evaluation when alcohol abuse (defined as seven or more alcohol drinks per week, three or more alcohol drinks on multiple occasions, or both) is confirmed.
- If prenatal alcohol exposure is unknown, a child or person should be referred for full FAS evaluation when:
  - a parent or caregiver (foster or adoptive parent) reports that a child has or might have FAS;
  - all three facial features (i.e., smooth philtrum, thin vermillion border, and small palpebral fissures) are present;
  - one or more facial features are present, in addition to growth deficits in height, weight, or both;
  - one or more facial features are present, with one or more CNS abnormalities; or
  - one or more facial features are present, with growth deficits and one or more CNS abnormalities.
- In addition to specific features associated with the FAS diagnosis, the following social and family history factors associated with prenatal exposure to alcohol might indicate a need for referral:
  - premature maternal death related to alcohol use (either disease or trauma),
  - living with an alcoholic parent,
  - current or previous abuse or neglect,
  - current or previous involvement with child PSAs,
  - a history of transient caregiving situations, or
  - having been in foster or adoptive care (including kinship care).

# Services

- The FAS diagnosis and the diagnostic process (especially the neuropsychologic assessment) should be considered part of a continuum of care that identifies and facilitates appropriate health-care, education, and community services.
- General areas of service needs for persons with FAS and their families should include strategies that stabilize home placement, improve parent-child interaction through caregiver education, advocate for access to services, and educate service professionals involved with affected persons and their families regarding FAS and its consequences.
- Specific intervention services should be tailored to a person's individual needs and deficits. These might include communication and social skills; emotional development; verbal and comprehension abilities; language usage; and, if appropriate, referral for medication assessments.
- The needs of children in adoptive or foster placements should receive particular attention in the diagnostic and referral process.

# Prevention

- Federal, state, and local agencies; clinicians and researchers; educational and social service professionals; and families should work together to educate women of childbearing age and communities countrywide regarding the risks of drinking alcohol during pregnancy.
- Universal screening by health-care providers for alcohol use is recommended for all women of childbearing age.
- For women drinking at risk levels and not effectively using contraception, brief interventions have proven effective in reducing the risk for an alcohol-exposed pregnancy. - Because no safe threshold of alcohol use during pregnancy has been established, women who are pregnant, planning a pregnancy, or at risk for pregnancy should be advised not to drink alcohol. Women who are not pregnant, not planning a pregnancy, or not at risk for unintended preg-nancy should be advised to drink no more than seven drinks per week and no more than three drinks on any one occasion. - Additional information regarding these guidelines has been published (60). # MMWR October 28, 2005 ing on the amount, timing, and pattern of alcohol exposure (e.g., chronic exposure versus binge episodes). Despite this inherent variation in effects, areas of functional vulnerability have been observed consistently by clinicians and researchers with particular damage to corresponding structures reported (e.g., corpus callosum, cerebellum, or basal ganglia). For functional deficits, multiple locations in the brain (and corresponding functional capability) are generally accepted to be affected by prenatal exposure to alcohol. Functional deficits consistent with CNS abnormality criteria can be identified in two ways: 1) global cognitive deficit (e.g., decreased IQ) or substantial developmental delay in children too young for an IQ assessment or 2) deficits in three or more specific functional domains. These two ways of assessing functional CNS abnormality were adopted because of the composite nature of cognitive, intellectual, and developmental measures (15,16). Decreased performance on a standardized measure of cognition, intelligence, and development assumes deficits in multiple domains. In the absence of such a measure, multiple domains should be assessed individually to determine that multiple functional domains have been affected. For each domain, other agents and environmental factors can produce deficits or outcomes similar to prenatal alcohol exposure, making differential diagnosis essential. The specific domains most often cited as areas of deficit or concern for persons with FAS are described below. These descriptions are intended to be suggestive and are examples of likely and possible problems a clinician might encounter and need to assess by using psychometric instruments. The examples are not intended to be exhaustive or to present a list of behaviors to be used as a checklist without reliable and valid assessment. - Cognitive deficits or significant developmental discrepancies. Global deficits or delays can leave the child scoring in the normal range of development but below what would be expected for the child's environment and background (17)(18)(19)(20)(21)(22). In addition to formal testing (either through records or current testing), behaviors that might be observed or reported in the clinical setting that suggest cognitive deficits or developmental delays that should be assessed by standardized testing include but are not limited to specific learning disabilities (especially mathematic or visual-spatial deficits), uneven profile of (22,(36)(37)(38). Behaviors that can be observed or reported in the clinical setting that indicate motor problems that should be assessed by standardized testing include but are not limited to delayed motor milestones, difficulty with writing or drawing, clumsiness, balance problems, tremors, and poor dexterity. For infants, a poor suck is often observed (17,(38)(39)(40). 
- Attention and hyperactivity problems. Attention problems are often noted for children with FAS, with children frequently receiving a diagnosis of attention-deficit hyperactivity disorder (ADHD) (41). Although such a diagnosis can be applied, attention problems for children with FAS do not appear to be consistent with the classic pattern of ADHD. Persons with FAS tend to have difficulty with the encoding of information and flexibility (shifting) aspects of attention, whereas children with ADHD typically display problems with focus and sustaining attention (42,43). Persons with FAS also can appear to display hyperactivity because their impulsivity might lead to increased activity levels. Behaviors that might be observed or reported in the clinical setting that suggest attention problems related to FAS that should be assessed by standardized testing include but are not limited to being described by adults as "busy," inattentive, easily distracted, difficulty calming down, being overly active, difficulty completing tasks, and/or trouble with transitions. Parents might report inconsistency in attention from day to day (e.g., "on" days and "off" days) (44-50). - Social skills problems. The executive, attention, and developmental problems described previously often lead to clinically significant difficulty for persons with FAS when interacting with peers and others. Because of the mental representation problems, persons with FAS often have social perception or social communication problems that make it difficult for them to grasp the more subtle aspects of human interactions (51,52). Consistent difficulty understanding the consequence of behavior or inappropriate behavior frequently is described for persons with FAS (53,54). Behaviors that can be observed or reported in the clinical setting that indicate these types of social difficulties that should be assessed by standardized testing include but are not limited to lack of fear of strangers, naiveté and gullibility, being taken advantage of easily, inappropriate choice of friends, preferring younger friends, immaturity, superficial interactions, adaptive skills significantly below cognitive potential, inappropriate sexual behaviors, difficulty understanding the perspective of others, poor social cognition, and clinically significant inappropriate initiations or interactions (55)(56)(57). Standardized assessment of social problems can be difficult; social functioning is a multifaceted domain that can require multiple areas of assessment. - Other potential domains that can be affected. In addition to these five most-often-cited problem areas, deficits and problems to be assessed by standardized testing can present in several other areas, including sensory problems (e.g., tactile defensiveness and oral sensitivity), pragmatic language problems (e.g., difficulty reading facial expression, and poor ability to understand the perspectives of others), memory deficits (e.g., forgetting welllearned material, and needing many trials to remember), and difficulty responding appropriately to common parenting practices (e.g., not understanding cause-andeffect discipline). Although abnormalities in these areas have been reported for persons with FAS, deficits in these areas are reported at a lower frequency than are those in the other five specific domains (53). # INSTRUCTIONS ACCREDITATION Continuing Medical Education (CME). CDC is accredited by the Accreditation Council for Continuing Medical Education to provide continuing medical education for physicians. 
CDC designates this educational activity for a maximum of 1.25 hours in category 1 credit toward the AMA Physician's Recognition Award. Each physician should claim only those hours of credit that he/she actually spent in the educational activity. Continuing Education Unit (CEU). CDC has been approved as an authorized provider of continuing education and training programs by the International Association for Continuing Education and Training. CDC will award 0.1 CEUs to participants who successfully complete this activity. # Continuing Nursing Education (CNE). This activity for 1.3 contact hours is provided by CDC, which is accredited as a provider of continuing education in nursing by the American Nurses Credentialing Center's Commission on Accreditation. # What percentage of sexually active women of childbearing age do not use contraception effectively and drink alcohol frequently or binge drink, putting them at risk for an alcohol-exposed pregnancy? A. 1%-2%. B. 12%-13%. C. 20%-40%. D. 50%-75%. # The diagnosis of FAS includes which of the following criteria? A. Documentation of all three facial abnormalities (i.e., smooth philtrum, thin vermillion, and small palpebral fissures). B. Documentation of growth deficits. C. Documentation of central nervous system abnormalities. D. Documentation of mental retardation. E. A, B, and C. F. A and D. # Which of the following statements is true? A. One of the diagnostic criteria for FAS is mental retardation. B. Persons who have been exposed to alcohol prenatally but whose physical condition is not consistent with the criteria for FAS might have substantial cognitive deficits. C. All persons with FAS have attention deficit hyperactivity disorder. D. Persons with FAS are likely to have speech and language impairments but not fine motor deficits. # A person should be referred for a complete multidisciplinary diagnostic evaluation when… A. all three facial features are present. B. any concern is reported by a parent or caregiver that a child has or might possibly have been exposed to alcohol prenatally. C. a child is living with an alcoholic parent, or the biological mother died as a result of alcohol-related disease or trauma. D. all of the above. # Goal and Objectives This report provides updated criteria for diagnosis of fetal alcohol syndrome (FAS) among persons affected by prenatal alcohol exposure. The goal of this report is to provide guidance for health-care providers in determining which persons might need referral for a complete multidisciplinary diagnostic evaluation and information regarding medical, educational, social, and family services appropriate for affected persons. Upon completion of this educational activity, the reader should be able to 1) describe the negative outcomes associated with prenatal exposure to alcohol, 2) list the specific criteria that constitute a diagnosis of FAS, 3) identify persons who should receive a referral for a multidisciplinary evaluation for FAS, 4) list services appropriate for a person receiving a FAS diagnosis, and 5) list instruments appropriate for screening women of childbearing age for alcohol use or abuse. To receive continuing education credit, please answer all of the following questions.
Fetal alcohol syndrome (FAS) results from maternal alcohol use during pregnancy and carries lifelong consequences. Early recognition of FAS can result in better outcomes for persons who receive a diagnosis. Although FAS was first identified in 1973, persons with this condition often do not receive a diagnosis. In 2002, Congress directed CDC to update and refine diagnostic and referral criteria for FAS, incorporating recent scientific and clinical evidence. That year, CDC convened a scientific working group (SWG) of persons with expertise in FAS research, diagnosis, and treatment to draft criteria for diagnosing FAS. This report summarizes the diagnostic guidelines drafted by the SWG, provides recommendations for when and how to refer a person suspected of having problems related to prenatal alcohol exposure, and assesses existing practices for creating supportive environments that might prevent long-term adverse consequences associated with FAS. The guidelines were created on the basis of a review of scientific evidence, clinical expertise, and the experiences of families affected by FAS regarding the physical and neuropsychologic features of FAS and the medical, educational, and social services needed by persons with FAS and their families. The guidelines are intended to facilitate early identification of persons affected by prenatal exposure to alcohol so they and their families can receive services that enable them to achieve healthy lives and reach their full potential. This report also includes recommendations to enhance identification of and intervention for women at risk for alcohol-exposed pregnancies. Additional data are needed to develop diagnostic criteria for other related disorders (e.g., alcohol-related neurodevelopmental disorder).

# Introduction

Prenatal exposure to alcohol damages the developing fetus and is a leading preventable cause of birth defects and developmental disabilities (1-3). Children exposed to alcohol during fetal development can suffer multiple negative effects, including physical and cognitive deficits. Although these effects can range from subtle to serious, they are always lifelong. Referral and diagnosis for fetal alcohol syndrome (FAS) can occur throughout the lifespan. However, the majority of persons with FAS are referred and receive a diagnosis during childhood. Thus, the terms "child" and "children" as used in these guidelines are not intended to preclude referral, assessment, and diagnosis of older persons.

# Background

The effects of prenatal exposure to alcohol and the basic diagnostic features of FAS were first described in 1973 (4-8). In 1981, the U.S. Surgeon General issued a public health advisory warning that alcohol use during pregnancy could cause birth defects (9); this warning was reissued in 2004 (10). In 1989, Congress mandated that language warning of the consequences of drinking during pregnancy be included on alcohol product labels (11). Despite the known adverse effects of prenatal exposure to alcohol (4,5), children who experience these effects often do not receive a correct diagnosis or a referral for diagnostic evaluation because of the absence of uniformly accepted diagnostic criteria and guidelines for referral. Early identification and diagnosis of FAS in affected persons are essential components of providing health, education, and social services that promote optimal well-being.
In 2002, Congress directed CDC to 1) develop guidelines for diagnosing FAS and other negative birth outcomes resulting from prenatal exposure to alcohol, 2) incorporate these guidelines into curricula for medical and allied health students and practitioners, and 3) disseminate curricula concerning these guidelines to facilitate training of medical and allied health students and practitioners. These guidelines represent a consensus of opinion from persons with expertise in relevant scientific and clinical fields, with input from service professionals and families affected by FAS. Information that served as the basis for the development of these guidelines was obtained from published scientific literature, the clinical knowledge of participants, and the experience of families affected by FAS. CDC staff initially identified reports and other documents that were used as the scientific basis for creating diagnostic guidelines. On the basis of this information, and in coordination with the National Taskforce on Fetal Alcohol Syndrome and Fetal Alcohol Effect (NTFFAS/FAE), other federally funded FAS programs, and nongovernment organizations concerned with FAS, CDC formed a scientific working group (SWG) of persons with expertise in research and clinical practice regarding prenatal exposure to alcohol to develop diagnostic guidelines for FAS. Guidelines were formulated on the basis of consensus among SWG members and NTFFAS/FAE. To assist in defining the dysmorphologic features most useful for identifying persons with FAS, SWG members assembled a matrix of the major and associated dysmorphic features of non-FAS syndromes that had one or more features in common with FAS. This matrix was used to determine the combination of dysmorphic features most discriminative for FAS. To assist deliberations concerning central nervous system (CNS) abnormalities associated with FAS, persons with expertise in the science, assessment, and treatment of psychological aspects of FAS were asked to identify the CNS abnormalities and other neurobehavioral domains most common among persons affected by prenatal alcohol exposure. These responses formed the basis for discussion and the resulting guidelines for CNS abnormalities among persons with FAS. This report summarizes the guidelines drafted as a result of the SWG's deliberations, provides recommendations for when and how to refer a person suspected of having problems related to prenatal alcohol exposure, and assesses existing practices for creating supportive environments that might prevent long-term adverse consequences associated with FAS.

# Prevalence

Varied FAS prevalences (range: 0.2-1.5 cases per 1,000 live births) have been reported worldwide (12-15). Other studies that used different ascertainment methodologies have produced different estimates (range: 0.5-2.0 cases per 1,000 live births) (16-22). These rates are comparable with or higher than rates for other common developmental disabilities (e.g., Down syndrome or spina bifida) (23). On the basis of these prevalence estimates, of the approximately 4 million infants born each year in the United States, an estimated 1,000-6,000 are born with FAS.
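As a rough check of the arithmetic behind that estimate, the short sketch below multiplies the reported prevalence range by an assumed 4 million annual U.S. births; the figures are illustrative only.

```python
# Back-of-the-envelope check of the FAS prevalence estimates above.
# Assumes ~4 million U.S. live births per year (approximate figure).
ANNUAL_BIRTHS = 4_000_000

# Reported worldwide prevalence range: 0.2-1.5 cases per 1,000 live births.
low_rate, high_rate = 0.2, 1.5

low_cases = ANNUAL_BIRTHS * low_rate / 1_000    # 800 cases
high_cases = ANNUAL_BIRTHS * high_rate / 1_000  # 6,000 cases

print(f"Expected FAS births per year: {low_cases:,.0f}-{high_cases:,.0f}")
# Consistent with the cited estimate of roughly 1,000-6,000 affected infants per year.
```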
Studies have reported consistently that >50% of all U.S. women of childbearing age report alcohol consumption during the previous month (1,24-28). The majority of these women drank only occasionally, but >13% could be classified as moderate or heavy drinkers. In addition, 12% of women reported binge drinking (i.e., consuming five or more drinks on one occasion) during the preceding month (1,25,27,28). Approximately half of all U.S. pregnancies are unintended, and millions of women of childbearing age are sexually active while not using adequate contraception (24-28). Recent data from the Behavioral Risk Factor Surveillance System indicate that an estimated 12%-13% of U.S. women aged 18-44 years are sexually active, do not use contraception effectively, and drink alcohol frequently or binge drink, thereby putting them at risk for an alcohol-exposed pregnancy (24). Because data are not available for all subpopulations, prevalences might be greater than these data indicate.

# Fetal Alcohol Spectrum Disorders

Multiple terms are used to describe the continuum of effects that result from prenatal exposure to alcohol, including fetal alcohol effects, alcohol-related birth defects (ARBD), alcohol-related neurodevelopmental disorder (ARND), and, most recently, fetal alcohol spectrum disorders (FASDs) (29). In April 2004, the National Organization on Fetal Alcohol Syndrome (NOFAS) convened a meeting of representatives from three federal agencies (the National Institutes of Health [NIH], CDC, and the Substance Abuse and Mental Health Services Administration [SAMHSA]) and persons with expertise in the field to develop a consensus definition of FASDs. The resulting definition, which is used in this report, defined FASDs as the range of effects that can occur in a person whose mother drank alcohol during pregnancy, including physical, mental, behavioral, and learning disabilities, with possible lifelong implications. As this definition indicates, multiple diagnostic categories (e.g., FAS, ARND, and ARBD) are subsumed under the term FASDs. However, FASDs is not a diagnostic category and should be used only when referring to the collection of diagnostic terms resulting from prenatal exposure to alcohol.

# Recommendations

# Diagnostic Criteria

For the majority of health-care providers, the key indicator of FAS is the set of characteristic facial features first described in 1973 (4). Alcohol is a teratogen that results in dysmorphia, growth problems, and abnormalities of the central nervous system in multiple ways (30,31). Confirmation and documentation of prenatal alcohol exposure can be difficult to establish. For birth mothers, admission of alcohol use during pregnancy can be stigmatizing. The situation can be further complicated if the woman continues to use alcohol, especially at high consumption rates. Clinicians might need to obtain information regarding alcohol use from other reliable informants, such as a relative. Clinicians often have to evaluate a child or adult for FAS without definitive information regarding the mother's use of alcohol during pregnancy. This situation occurs frequently for children in foster and adoptive homes. In such cases, every effort should be made to obtain the necessary information, but lack of confirmation of alcohol use during pregnancy should not preclude a diagnosis of FAS if all other criteria are present. In rare instances, absence of exposure will be confirmed. Documentation that the birth mother did not drink any amount of alcohol from conception through birth would indicate that a FAS diagnosis is not appropriate.
This finding typically implies either that the birth mother knew the date of conception (e.g., a planned pregnancy) and did not consume alcohol from that day forward or that she was prevented from drinking for a certain reason (e.g., incarceration). Because of the imprecise nature of exposure information, the following two qualifying terms are suggested for a finding of prenatal alcohol exposure:
• FAS with confirmed prenatal alcohol exposure requires documentation of the alcohol consumption patterns of the birth mother during the index pregnancy on the basis of clinical observation; self-reports; reports of heavy alcohol use during pregnancy by a reliable informant; medical records documenting positive blood alcohol levels or alcohol treatment; or other social, legal, or medical problems related to drinking during the index pregnancy.
• FAS with unknown prenatal alcohol exposure indicates neither a confirmed presence nor a confirmed absence of exposure. Examples include situations in which the child is adopted and any prenatal exposure is unknown; the birth mother is an alcoholic, but confirmed evidence of exposure during pregnancy does not exist; or conflicting reports regarding exposure exist that cannot be reliably resolved.
Prenatal exposure to alcohol alone is not sufficient to warrant a diagnosis of FAS. Despite the heterogeneity of expression for features related to prenatal exposure to alcohol, a diagnosis of FAS requires documentation of three findings: 1) three specific facial abnormalities; 2) growth deficit; and 3) CNS abnormalities (Appendix) (30,31) (Box).

# BOX. Characteristics for diagnosing fetal alcohol syndrome

Facial dysmorphia: On the basis of racial norms (i.e., those appropriate for a person's race), the person exhibits all three of the following characteristic facial features:
• smooth philtrum (University of Washington Lip-Philtrum Guide rank 4 or 5),
• thin vermillion border (University of Washington Lip-Philtrum Guide rank 4 or 5), and
• small palpebral fissures (<10th percentile).

Growth problems: Confirmed, documented prenatal or postnatal height, weight, or both <10th percentile, adjusted for age, sex, gestational age, and race or ethnicity.

Central nervous system abnormalities: Documented structural, neurologic, or functional CNS abnormality (see Appendix).
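Read as a decision rule, the Box reduces to a conjunction of three documented findings plus an exposure qualifier. The sketch below (Python) restates that rule for illustration; the record and field names are hypothetical, and a real diagnosis rests on a multidisciplinary evaluation, not a checklist.

```python
from dataclasses import dataclass

@dataclass
class DocumentedFindings:
    """Hypothetical record of documented findings; field names are illustrative."""
    smooth_philtrum: bool            # Lip-Philtrum Guide rank 4 or 5
    thin_vermillion_border: bool     # Lip-Philtrum Guide rank 4 or 5
    palpebral_fissure_pct: float     # percentile against appropriate norms
    growth_pct: float                # lowest documented height/weight percentile
    cns_abnormality: bool            # structural, neurologic, or functional
    exposure: str                    # "confirmed", "unknown", or "absent"

def fas_diagnosis(f: DocumentedFindings) -> str:
    """Sketch of the three-part rule in the Box plus the exposure qualifier."""
    if f.exposure == "absent":
        # Confirmed absence of exposure rules out a FAS diagnosis.
        return "FAS diagnosis not appropriate (confirmed absence of exposure)"
    facial = (f.smooth_philtrum and f.thin_vermillion_border
              and f.palpebral_fissure_pct < 10)
    growth = f.growth_pct < 10       # prenatal or postnatal height/weight
    if facial and growth and f.cns_abnormality:
        return f"FAS with {f.exposure} prenatal alcohol exposure"
    return "criteria not met; consider referral and further evaluation"
```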
# Considerations When Diagnosing FAS

Because FAS is a syndrome rather than a specific disease, additional features can be present. For example, in addition to the key facial dysmorphic features, maxillary hypoplasia is often noted for persons with FAS (3). Features often change with age or development. After puberty, the characteristic facial features associated with FAS can become more difficult to detect (32). However, the key features remain constant for the majority of persons with FAS (33,34). Changes in growth pattern across development also lead to variability in presentation. For certain affected persons, growth problems might occur at a younger age but not be present at the time of the diagnostic evaluation. This is particularly important when considering prenatal growth retardation or early growth problems caused by failure to thrive. Because multiple treatments exist for growth problems (e.g., use of feeding tubes or hormone therapy), any history of growth retardation, including prenatal growth deficiencies, is consistent with the criteria for diagnosing FAS (35). The clinician should be certain that the child was not nutritionally deprived at the single point in time when the growth deficit was present. The adopted threshold for growth (<10th percentile) represents an attempt to maximize sensitivity, even though it reduces specificity.

# CNS Abnormalities

The diagnostic criteria for CNS abnormality require documentation of one of three types of deficits or abnormalities (i.e., structural, neurologic, or functional). A person might have more than one CNS abnormality (36). Identifying CNS abnormalities resulting from prenatal alcohol exposure can be the most difficult aspect of a FAS diagnosis because of the heterogeneity of expression for these deficits across persons (Appendix). Approximately one fourth of persons who receive a diagnosis of FAS perform at two standard deviations below the mean (which approaches substantial impairment [i.e., mental retardation]) on standardized measures of cognition (37). To capture the full spectrum of effects adequately, two levels of functional deficits are consistent with the criteria for a CNS abnormality: 1) performance below the third percentile (i.e., two standard deviations below the mean) on a measure of global cognitive functioning or 2) performance below the 16th percentile (i.e., one standard deviation below the mean) on standardized measures of three functional domains. Thus, persons scoring below the normal range on a global measure of intelligence or development and persons scoring in the below-average range on standardized measures of three specific functional domains would be consistent with the criteria for functional CNS abnormality for diagnostic purposes. Because of the importance of documenting CNS abnormalities and the variability in functional deficits, the diagnostic process should include a thorough neuropsychologic evaluation that assesses multiple domains. Extensive standardized testing might not be readily available in all diagnostic settings. Clinicians are encouraged to supplement their observations by obtaining standardized testing through early intervention programs, public schools, and psychologists in private practice. Such testing will facilitate the development of appropriate personalized treatment plans for persons who receive a diagnosis of FAS. These guidelines recommend that functional domains be assessed by using norm-referenced standardized measures. Assessments should be conducted by professionals using reliable and validated instruments.
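The percentile cutoffs above correspond to standard-deviation units under a normal distribution; the short sketch below verifies that correspondence and encodes the two-path functional criterion. It is a sketch of the stated thresholds only, with hypothetical function names, not a clinical instrument.

```python
from statistics import NormalDist

std_normal = NormalDist()  # mean 0, standard deviation 1

# Percentile equivalents of the stated standard-deviation cutoffs:
pct_2sd = std_normal.cdf(-2.0) * 100  # ~2.3, i.e., below the 3rd percentile
pct_1sd = std_normal.cdf(-1.0) * 100  # ~15.9, i.e., below the 16th percentile
print(f"2 SD below mean: {pct_2sd:.1f}%; 1 SD below mean: {pct_1sd:.1f}%")

def functional_cns_deficit(global_pct: float | None,
                           domain_pcts: list[float]) -> bool:
    """Two-path criterion from the guidelines: a global cognitive score below
    the 3rd percentile, or three or more specific functional domains below
    the 16th percentile on standardized, norm-referenced measures."""
    global_path = global_pct is not None and global_pct < 3
    domain_path = sum(p < 16 for p in domain_pcts) >= 3
    return global_path or domain_path
```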
# Differential Diagnosis

Individual dysmorphic features are not unique to any particular syndrome. Even rare defects or certain clusters of dysmorphic features can appear in multiple syndromes. Therefore, a process of differential diagnosis is essential in making an accurate FAS diagnosis. Features that discriminate these disorders from FAS have been described (38). Certain syndromes have single overlapping features with FAS. With the exception of toluene embryopathy, no other known syndrome has the full constellation of small palpebral fissures, thin vermillion border, and smooth philtrum. However, for certain syndromes (e.g., Williams syndrome, Dubowitz syndrome, or fetal dilantin syndrome), the overall constellation of features (primary features, occasional features, or both) is similar to FAS, and these syndromes should be considered in particular when completing the differential diagnosis. Growth retardation and deficiencies occur among children, adolescents, and adults for multiple reasons. Insufficient nutrition could be a particular problem for infants with poor sucking responses who fail to thrive. In addition, certain genetic disorders result in specific growth deficiencies (e.g., dwarfism). Prenatal growth retardation can result from multiple factors, including maternal smoking or other behaviors leading to hypoxia, poor maternal nutrition, or genetic disorders unrelated to maternal alcohol consumption. Both environmental and genetic bases for growth retardation should be evaluated in the differential diagnosis when a FAS diagnosis is being considered. Finally, because a threshold of <10th percentile (rather than the lower threshold of the third percentile commonly used to denote growth retardation) was adopted, certain children will be classified as consistent with this criterion for reasons other than prenatal exposure to alcohol (e.g., parents having short stature). However, because the diagnosis of FAS is made only when facial dysmorphia and CNS abnormalities also are present, the 10th percentile was selected to increase sensitivity. Differential diagnosis of CNS abnormalities involves not only ruling out other disorders but also specifying simultaneously occurring disorders. CNS deficits associated with FAS (especially functional deficits) can be produced by multiple factors in addition to prenatal alcohol exposure. Observed functional deficits should be determined not to be better explained by other causes. In addition to other organic syndromes that produce deficits in one or more of the previously cited domains (e.g., Williams syndrome and Down syndrome), disrupted home environments or other external factors can produce functional deficits in multiple domains that overlap those affected by FAS. In making a differential diagnosis of FAS, the clinician should evaluate CNS abnormalities in conjunction with dysmorphia and laboratory findings. CNS abnormalities resulting from environmental influences (e.g., abuse or neglect, disruptive homes, and lack of opportunities) are harder to differentiate. To assist with differential diagnosis between FAS and environmental causes for CNS abnormalities, clinicians should obtain a complete, detailed history for the person and family members. In addition to ruling out other causes for CNS abnormalities, a complete diagnosis should identify and specify other disorders that can coexist with FAS (e.g., autism, conduct disorder, or oppositional defiant disorder). A particular person might have a conduct disorder in addition to FAS; however, not all persons with FAS have conduct disorders, and not all persons with conduct disorders have FAS. Certain functional deficits might lead to additional behavior problems. For example, a child with an attention problem also could have a conduct disorder. Clinicians should consider organic causes, environmental contributions, and comorbidity for both inclusive and exclusive purposes when evaluating a person for a FAS diagnosis (32,39). Because differential diagnosis for CNS abnormalities within a FAS diagnosis is difficult, the evaluation should be conducted by professionals trained in both the features of FAS and those of a broad array of birth defects and developmental disabilities.

# Conditions Consistent with a Subset of Diagnostic Criteria for FAS

The majority of persons with deficits resulting from prenatal exposure to alcohol do not express all the features necessary for a FAS diagnosis (36). Sufficient scientific evidence is not available to define diagnostic criteria for any prenatal alcohol-related condition other than FAS.
Persons who have the neurodevelopmental deficits required for a FAS diagnosis but who do not have all three facial features or growth deficits might not receive a diagnosis and thus might not be provided with services. Ongoing funding has been provided by the National Institute on Alcohol Abuse and Alcoholism to conduct research that might lead to evidence-based diagnostic criteria for persons with other conditions caused by prenatal alcohol use. CDC is using a collaborative database of neurodevelopmental data from five intervention studies, as well as data from a prospective cohort study of children aged 5 years in Denmark, to explore the characteristics of persons who could be considered in the diagnostic category of alcohol-related neurodevelopmental disorder. FAS is the only diagnostic category with scientific evidence to support clinical criteria at this time. As future data become available, these guidelines can be refined and expanded to delineate other conditions resulting from prenatal alcohol exposure.

# Mental Health Problems and Other Lifelong Consequences

FAS has lifelong consequences. Common FAS-related mental health conditions (excluding attention problems) reported include conduct disorders, oppositional defiant disorders, anxiety disorders, adjustment disorders, sleep disorders, and depression (37,40-44). Although attention problems can be classified as a mental health issue or psychiatric condition, in these guidelines they are treated as a primary deficit resulting from alcohol-related CNS damage rather than a secondary mental health concern (45). Decreased adaptive skills and increased problems with daily living abilities (e.g., dependent living conditions, disrupted school experiences, poor employment records, and encounters with law enforcement, including incarceration) have been documented among persons with FAS (37). These mental health-related consequences should not be used for diagnosis. However, they are prevalent among persons with FAS and are likely to result in referral and comprehensive diagnostic evaluation.

# Referral Considerations

Providers of medical, educational, and social services often must decide whether to refer a child, person, or family to a specialist for a full FAS diagnostic evaluation. This decision can be difficult. For biologically related family members, social stigma might be associated with any evaluation concerning prenatal alcohol exposure. In adoptive or foster families, alcohol use during pregnancy might be suspected, but direct information might not be available. The following guidelines were developed to assist service providers in making referral decisions. Each case should be evaluated individually. When in doubt, providers should refer persons for a full evaluation by a multidisciplinary team with experience in evaluating prenatal alcohol exposure. The following circumstances should prompt a diagnostic referral:
• When prenatal alcohol exposure is known, a child should be referred for full FAS evaluation when substantial prenatal alcohol use (i.e., seven or more drinks per week, three or more drinks on multiple occasions, or both) has been confirmed. If substantial prenatal alcohol exposure is known, in the absence of any other positive criteria (i.e., dysmorphia, growth deficits, or CNS abnormalities), the primary health-care provider should document this exposure and monitor the child's ongoing growth and development closely.
• When information regarding prenatal alcohol exposure is unknown, a child should be referred for full FAS evaluation for any one of the following:
-any report of concern by a parent or caregiver (e.g., foster or adoptive parent) that a child has or might have FAS;
-presence of all three facial features (i.e., smooth philtrum, thin vermillion border, and small palpebral fissures);
-presence of one or more of these facial features, with growth deficits in height, weight, or both;
-presence of one or more facial features, with one or more CNS abnormalities; or
-presence of one or more facial features, with growth deficits and one or more CNS abnormalities.
In addition to specific features associated with a FAS diagnosis, certain social and family history factors have been associated with prenatal alcohol exposure (46). The possibility of prenatal alcohol exposure should be considered fully for persons who are experiencing or have experienced one or more of the following:
• premature maternal death related to alcohol use (either disease or trauma),
• living with an alcoholic parent,
• current or previous abuse or neglect,
• current or previous involvement with child protective services agencies (PSAs),
• a history of transient caregiving situations, or
• foster or adoptive placements (including kinship care).
Although such situations might have a negative impact on the development of any child, evidence exists that children with FAS or a related disorder are particularly likely to experience negative situations that involve a dysfunctional family unit (46), especially if the biologic mother abuses alcohol.
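The referral criteria above amount to a small decision rule. The following sketch (Python; function and parameter names are hypothetical) restates it for illustration only; it is not a clinical instrument, and when in doubt a provider should refer.

```python
def refer_for_fas_evaluation(exposure_known: bool,
                             substantial_exposure_confirmed: bool,
                             caregiver_concern: bool,
                             n_facial_features: int,  # 0-3 cardinal facial features
                             growth_deficit: bool,
                             cns_abnormality: bool) -> bool:
    """Sketch of the referral rules stated above; not a clinical instrument."""
    if exposure_known:
        # Substantial use: seven or more drinks/week, three or more drinks
        # on multiple occasions, or both.
        return substantial_exposure_confirmed
    # Exposure unknown: refer on any of the listed combinations of findings.
    return (caregiver_concern
            or n_facial_features == 3
            or (n_facial_features >= 1 and (growth_deficit or cns_abnormality)))
```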
# Services for Persons with FAS

For persons with developmental disabilities and their families, diagnosis is never an endpoint. This is particularly true for persons with FAS, their families, and their communities. The diagnostic process (especially the neuropsychologic assessment) should be part of a continuum of care that identifies and facilitates appropriate health-care, education, and community services. Early diagnosis and a stable, nurturing home environment have been identified as strong protective factors for persons with FAS (46). Limited information is available regarding strategies for interventions specific to persons with FAS. The information available has been gathered primarily from the experience of persons with other disabilities and from that of parents gained through trial and error and shared through informal networks. Treatments currently employed to reduce the risk for adverse effects of FAS have not been evaluated systematically or scientifically. In 2001, CDC provided the first federal funding to develop and test systematic, scientifically developed interventions specific to FAS (e.g., a modified mathematics curriculum or a program to develop peer friendship skills). These projects are in their final stages, and findings will be published. The learning and life skills affected by prenatal alcohol exposure vary among persons, depending on the amount, timing, and pattern of exposure and on each person's current and past environment (47,48). As a result, services needed for persons with FAS and their families vary according to the parts of the brain affected, the person's age or level of maturation, the health or functioning of the family, and the person's overall living environment. Thus, the service needs of affected persons and their families should be individualized (49).

Certain general areas of service and specific services have been identified as helpful to persons with FAS and their families (32). Interventions should include strategies that stabilize home placement and improve parent-child interaction (47). One means of accomplishing this goal is to increase understanding of the disorder among parents, teachers, law enforcement personnel, and other professionals who might become involved with the affected person. Children with FAS often need specialized parenting techniques because of their difficulty with cause-and-effect reasoning and other executive functioning skills (47). Caregiver education should highlight and explain differences in the thought processes of children with FAS compared with typically developing children and children with other developmental disabilities. This knowledge would enable parents to avoid potentially difficult situations (e.g., overly stimulating environments) and better manage problems when they do arise. Overall, a better functioning family that results from caregiver education promotes the stable, nurturing home that has been demonstrated to be a protective factor for children with FAS (50). Professionals who work with persons affected by FAS could benefit from a better understanding of the disorder and the services available for affected persons and their families (39). These professionals can help link families with needed community resources and ensure that affected children receive maximum benefit from services provided. Interacting with social and educational service agencies can be overwhelming and confusing, and each agency typically uses a specialized vocabulary (i.e., jargon) that is difficult for nonspecialists to understand. In addition to being able to diagnose FAS, clinicians should help parents and caregivers identify available services, determine which ones are effective for their children, and understand how to work productively with service providers (32). Prenatally exposed infants and children often enter the foster or adoptive care system at an early age. The prevalence of children with FAS or a related disorder in the foster care system is estimated to be 10 times that of the general population (51). Although PSAs might have information regarding a child's prenatal history, PSA staff generally do not know about FAS, understand how FAS affects the child, or communicate with other service systems regarding the child's FAS status (51). As a result, foster and adoptive families typically are not educated regarding the long-term effects of FAS and are unprepared to meet their children's needs. The majority of PSAs require foster parents to take a specified number of educational courses annually. These courses should include education regarding the effects and developmental needs of children with FAS because the majority of foster parents will encounter at least one child with FAS or a related disorder during their time as a foster parent (51). Projects funded by CDC have developed FAS curricula for parents, educators, and juvenile justice systems; information regarding these curricula is available at http://www.cdc.gov/ncbddd/fas/awareness.htm. The assessment process is integral to both the FAS diagnosis and the development of an effective treatment plan. Such a treatment plan minimizes risk factors for lifelong negative consequences and promotes protective factors that maximize developmental potential.
Clinicians and service providers must ensure that assessments include communication and social skills, emotional maturity, verbal and comprehension abilities, language usage, and, if appropriate, referral for medication assessments. Finally, the health and development of children with disabilities, including children with FAS, can be promoted by public support for programs that provide access to school, recreational, and social activities.

# Alcohol Use During Pregnancy

Because no safe threshold of alcohol use during pregnancy has been established, CDC and NTFFAS/FAE recommend that women who are pregnant, planning a pregnancy, or at risk for pregnancy not drink alcohol. Women of childbearing age who are not pregnant should drink no more than seven drinks per week and no more than three drinks on any one occasion. Federal, state, and local agencies; clinicians and researchers; educational and social service professionals; and families should work together to educate women of childbearing age and communities countrywide regarding the risks of drinking alcohol during pregnancy. Women who have had at least one child with FAS are at especially high risk for giving birth to a second affected child (2,52). Universal screening for alcohol use among all women of childbearing age might help identify women who drink above recommended levels as well as those who drink and might become pregnant. Screening can be performed in clinicians' offices or in community health settings. Screening techniques that include measures of quantity, frequency, and heavy episodic drinking, as well as behavioral manifestations of risk drinking, have proven to be most beneficial; simple questionnaires have been developed to screen for problematic alcohol use among adults in multiple populations and settings (53). Effective prevention programs frequently employ a multicomponent approach that combines cognitive-behavioral techniques with norms clarification, education, and motivational enhancement interventions. For women who screen positive for hazardous alcohol use or abuse, brief interventions that use time-limited, self-help, and preventive strategies to promote reductions in alcohol use in nondependent persons and that facilitate referral of dependent persons to specialized treatment programs are low-cost, effective treatment alternatives (54-57). The acronym FRAMES encompasses six key elements of the majority of successful brief interventions: 1) feedback of personal risk, 2) responsibility for personal control, 3) advice to change, 4) menu of ways to reduce or stop drinking, 5) empathetic counseling style, and 6) self-efficacy or optimism regarding reducing or stopping drinking (58). Preconception counseling of women of childbearing age who are at risk for an alcohol-exposed pregnancy and who are not using effective contraception has been demonstrated to be a promising method of prevention (59). Project CHOICES, funded by CDC, is an example of a brief intervention that has been effective. Information regarding this project and other federally sponsored studies of prenatal alcohol screening and intervention programs is available at http://www.cdc.gov/ncbddd/fas, http://www.niaaa.nih.gov, http://www.fascenter.samhsa.gov, and http://www.preventiveservices.ahrq.gov.
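As an illustration only, the quantity/frequency thresholds above can be written as a simple flag; the function names are hypothetical, and screening programs rely on validated questionnaires rather than this toy rule.

```python
def drinking_above_recommended(drinks_per_week: float,
                               max_drinks_per_occasion: float) -> bool:
    """Flags drinking above the guideline for nonpregnant women:
    more than seven drinks per week or more than three on one occasion."""
    return drinks_per_week > 7 or max_drinks_per_occasion > 3

def at_risk_alcohol_exposed_pregnancy(drinks_per_week: float,
                                      max_drinks_per_occasion: float,
                                      pregnant_or_might_become: bool) -> bool:
    """Because no safe threshold during pregnancy has been established,
    any drinking counts as risk when pregnancy is present or possible."""
    if pregnant_or_might_become:
        return drinks_per_week > 0 or max_drinks_per_occasion > 0
    return drinking_above_recommended(drinks_per_week, max_drinks_per_occasion)
```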
# Summary of Recommendations

On the basis of a review of current scientific and clinical evidence, the following recommendations are made concerning referral of children and diagnosis of FAS:

# Diagnosis of FAS

• A diagnosis of FAS should be made if documentation exists of 1) all three dysmorphic facial features (i.e., smooth philtrum, thin vermillion border, and small palpebral fissures), 2) prenatal or postnatal growth deficit in height or weight, and 3) CNS abnormality.
• The diagnosis should be classified on the basis of available history as confirmed prenatal alcohol exposure or unknown prenatal alcohol exposure.
• CNS abnormality may be documented as structural, neurologic, or functional (Box).

# Referral

• If prenatal alcohol exposure is known, a child or person should be referred for full FAS evaluation when alcohol abuse (defined as seven or more alcohol drinks per week or three or more alcohol drinks on multiple occasions, or both) is confirmed.
• If prenatal alcohol exposure is unknown, a child or person should be referred for full FAS evaluation when:
-a parent or caregiver (foster or adoptive parent) reports that a child has or might have FAS;
-all three facial features (i.e., smooth philtrum, thin vermillion border, and small palpebral fissures) are present;
-one or more facial features are present, in addition to growth deficits in height, weight, or both;
-one or more facial features are present, with one or more CNS abnormalities; or
-one or more facial features are present, with growth deficits and one or more CNS abnormalities.
• In addition to specific features associated with the FAS diagnosis, the following social and family history factors associated with prenatal exposure to alcohol might indicate a need for referral:
-premature maternal death related to alcohol use (either disease or trauma),
-living with an alcoholic parent,
-current or previous abuse or neglect,
-current or previous involvement with child PSAs,
-a history of transient caregiving situations, or
-having been in foster or adoptive care (including kinship care).

# Services

• The FAS diagnosis and the diagnostic process (especially the neuropsychologic assessment) should be considered part of a continuum of care that identifies and facilitates appropriate health-care, education, and community services.
• General areas of service needs for persons with FAS and their families should include strategies that stabilize home placement, improve parent-child interaction through caregiver education, advocate for access to services, and educate service professionals involved with affected persons and their families regarding FAS and its consequences.
• Specific intervention services should be tailored to a person's individual needs and deficits. These might include communication and social skills; emotional development; verbal and comprehension abilities; language usage; and, if appropriate, referral for medication assessments.
• The needs of children in adoptive or foster placements should receive particular attention in the diagnostic and referral process.

# Prevention

• Federal, state, and local agencies; clinicians and researchers; educational and social service professionals; and families should work together to educate women of childbearing age and communities countrywide regarding the risks of drinking alcohol during pregnancy.
• Universal screening by health-care providers for alcohol use is recommended for all women of childbearing age.
• For women drinking at risk levels and not effectively using contraception, brief interventions have proven effective in reducing the risk for an alcohol-exposed pregnancy.
• Because no safe threshold of alcohol use during pregnancy has been established, women who are pregnant, planning a pregnancy, or at risk for pregnancy should be advised not to drink alcohol. Women who are not pregnant, not planning a pregnancy, or not at risk for unintended pregnancy should be advised to drink no more than seven drinks per week and no more than three drinks on any one occasion.
• Additional information regarding these guidelines has been published (60).

# Appendix

The functional deficits associated with prenatal alcohol exposure vary, depending on the amount, timing, and pattern of alcohol exposure (e.g., chronic exposure versus binge episodes). Despite this inherent variation in effects, areas of functional vulnerability have been observed consistently by clinicians and researchers, with particular damage to corresponding structures reported (e.g., corpus callosum, cerebellum, or basal ganglia). For functional deficits, multiple locations in the brain (and the corresponding functional capabilities) are generally accepted to be affected by prenatal exposure to alcohol. Functional deficits consistent with CNS abnormality criteria can be identified in two ways: 1) global cognitive deficit (e.g., decreased IQ) or substantial developmental delay in children too young for an IQ assessment or 2) deficits in three or more specific functional domains. These two ways of assessing functional CNS abnormality were adopted because of the composite nature of cognitive, intellectual, and developmental measures (15,16). Decreased performance on a standardized measure of cognition, intelligence, and development assumes deficits in multiple domains. In the absence of such a measure, multiple domains should be assessed individually to determine that multiple functional domains have been affected. For each domain, other agents and environmental factors can produce deficits or outcomes similar to those of prenatal alcohol exposure, making differential diagnosis essential. The specific domains most often cited as areas of deficit or concern for persons with FAS are described below. These descriptions are intended to be suggestive; they are examples of likely and possible problems a clinician might encounter and need to assess by using psychometric instruments. The examples are not intended to be exhaustive or to present a list of behaviors to be used as a checklist without reliable and valid assessment.
• Cognitive deficits or significant developmental discrepancies. Global deficits or delays can leave the child scoring in the normal range of development but below what would be expected for the child's environment and background (17-22). In addition to formal testing (either through records or current testing), behaviors that might be observed or reported in the clinical setting that suggest cognitive deficits or developmental delays that should be assessed by standardized testing include but are not limited to specific learning disabilities (especially mathematic or visual-spatial deficits) and an uneven profile of cognitive skills (22,36-38).
• Motor functioning problems. Behaviors that can be observed or reported in the clinical setting that indicate motor problems that should be assessed by standardized testing include but are not limited to delayed motor milestones, difficulty with writing or drawing, clumsiness, balance problems, tremors, and poor dexterity. For infants, a poor suck is often observed (17,38-40).
• Attention and hyperactivity problems. Attention problems are often noted for children with FAS, with children frequently receiving a diagnosis of attention-deficit hyperactivity disorder (ADHD) (41). Although such a diagnosis can be applied, attention problems for children with FAS do not appear to be consistent with the classic pattern of ADHD. Persons with FAS tend to have difficulty with the encoding of information and the flexibility (shifting) aspects of attention, whereas children with ADHD typically display problems with focus and sustained attention (42,43). Persons with FAS also can appear to display hyperactivity because their impulsivity might lead to increased activity levels. Behaviors that might be observed or reported in the clinical setting that suggest attention problems related to FAS that should be assessed by standardized testing include but are not limited to being described by adults as "busy," inattentiveness, easy distractibility, difficulty calming down, overactivity, difficulty completing tasks, and/or trouble with transitions. Parents might report inconsistency in attention from day to day (e.g., "on" days and "off" days) (44-50).
• Social skills problems. The executive, attention, and developmental problems described previously often lead to clinically significant difficulty for persons with FAS when interacting with peers and others. Because of mental representation problems, persons with FAS often have social perception or social communication problems that make it difficult for them to grasp the more subtle aspects of human interaction (51,52). Consistent difficulty understanding the consequences of behavior, or inappropriate behavior, frequently is described for persons with FAS (53,54). Behaviors that can be observed or reported in the clinical setting that indicate these types of social difficulties that should be assessed by standardized testing include but are not limited to lack of fear of strangers, naiveté and gullibility, being taken advantage of easily, inappropriate choice of friends, preferring younger friends, immaturity, superficial interactions, adaptive skills significantly below cognitive potential, inappropriate sexual behaviors, difficulty understanding the perspective of others, poor social cognition, and clinically significant inappropriate initiations or interactions (55-57). Standardized assessment of social problems can be difficult; social functioning is a multifaceted domain that can require multiple areas of assessment.
• Other potential domains that can be affected. In addition to these five most-often-cited problem areas, deficits and problems to be assessed by standardized testing can present in several other areas, including sensory problems (e.g., tactile defensiveness and oral sensitivity), pragmatic language problems (e.g., difficulty reading facial expressions and poor ability to understand the perspectives of others), memory deficits (e.g., forgetting well-learned material and needing many trials to remember), and difficulty responding appropriately to common parenting practices (e.g., not understanding cause-and-effect discipline). Although abnormalities in these areas have been reported for persons with FAS, deficits in these areas are reported at a lower frequency than are those in the other five specific domains (53).

# INSTRUCTIONS

# ACCREDITATION

Continuing Medical Education (CME). CDC is accredited by the Accreditation Council for Continuing Medical Education to provide continuing medical education for physicians.
CDC designates this educational activity for a maximum of 1.25 hours in category 1 credit toward the AMA Physician's Recognition Award. Each physician should claim only those hours of credit that he/she actually spent in the educational activity.

Continuing Education Unit (CEU). CDC has been approved as an authorized provider of continuing education and training programs by the International Association for Continuing Education and Training. CDC will award 0.1 CEUs to participants who successfully complete this activity.

Continuing Nursing Education (CNE). This activity for 1.3 contact hours is provided by CDC, which is accredited as a provider of continuing education in nursing by the American Nurses Credentialing Center's Commission on Accreditation.

# Goal and Objectives

This report provides updated criteria for diagnosis of fetal alcohol syndrome (FAS) among persons affected by prenatal alcohol exposure. The goal of this report is to provide guidance for health-care providers in determining which persons might need referral for a complete multidisciplinary diagnostic evaluation and information regarding medical, educational, social, and family services appropriate for affected persons. Upon completion of this educational activity, the reader should be able to 1) describe the negative outcomes associated with prenatal exposure to alcohol, 2) list the specific criteria that constitute a diagnosis of FAS, 3) identify persons who should receive a referral for a multidisciplinary evaluation for FAS, 4) list services appropriate for a person receiving a FAS diagnosis, and 5) list instruments appropriate for screening women of childbearing age for alcohol use or abuse. To receive continuing education credit, please answer all of the following questions.

# What percentage of sexually active women of childbearing age do not use contraception effectively and drink alcohol frequently or binge drink, putting them at risk for an alcohol-exposed pregnancy?

A. 1%-2%.
B. 12%-13%.
C. 20%-40%.
D. 50%-75%.

# The diagnosis of FAS includes which of the following criteria?

A. Documentation of all three facial abnormalities (i.e., smooth philtrum, thin vermillion border, and small palpebral fissures).
B. Documentation of growth deficits.
C. Documentation of central nervous system abnormalities.
D. Documentation of mental retardation.
E. A, B, and C.
F. A and D.

# Which of the following statements is true?

A. One of the diagnostic criteria for FAS is mental retardation.
B. Persons who have been exposed to alcohol prenatally but whose physical condition is not consistent with the criteria for FAS might have substantial cognitive deficits.
C. All persons with FAS have attention-deficit hyperactivity disorder.
D. Persons with FAS are likely to have speech and language impairments but not fine motor deficits.

# A person should be referred for a complete multidisciplinary diagnostic evaluation when…

A. all three facial features are present.
B. any concern is reported by a parent or caregiver that a child has or might possibly have been exposed to alcohol prenatally.
C. a child is living with an alcoholic parent, or the biological mother died as a result of alcohol-related disease or trauma.
D. all of the above.
# CRITERIA DOCUMENT: RECOMMENDATIONS FOR AN OCCUPATIONAL EXPOSURE STANDARD FOR CARBON MONOXIDE
The National Institute for Occupational Safety and Health (NIOSH) recommends that employee exposure to carbon monoxide (CO) at the workplace be controlled by requiring compliance with the standard set forth in the following eight sections. Control of employee exposure to CO at his place of employment at the limits stated will (1) prevent acute CO poisoning, (2) protect the employee from deleterious myocardial alterations associated with levels of carboxyhemoglobin (COHb) in excess of 5 percent, and (3) provide the employee protection from adverse behavioral manifestations resulting from exposure to low levels of CO. The recommended standard is measurable by techniques that are valid, reproducible, and currently available to industry and governmental agencies and is attainable with existing technology. The recommended standard is designed to protect the safety and health of workers who are performing a normal 8-hour per day, 40-hour per week work assignment. It is not designed for the population-at-large, and any extrapolation beyond the general worker population is unwarranted. Because of the well-defined relationship between smoking and the concomitant exposure to CO in inhaled smoke, the recommended standard may not provide the same degree of protection to those workers who smoke as it will to nonsmokers. Likewise, under conditions of reduced ambient oxygen concentration, such as would be encountered by workers at very high altitudes (e.g., 5,000-8,000 feet above sea level), the permissible exposure stated in the recommended standard should be appropriately lowered to compensate for the loss in the oxygen-carrying capacity of the blood. In addition, workers with physical impairments that interfere with normal oxygen delivery to the tissues (e.g., emphysema, anemia, coronary heart disease) will not be provided the same degree of protection as the healthy worker population. The criteria and the standard recommended in this document will be reviewed and revised as necessary.
# Section 1 - Work Environment
(a) Concentration
(1) Occupational exposure to carbon monoxide shall be controlled so that no worker shall be exposed at a concentration greater than 35 ppm, determined as a time-weighted average (TWA) exposure for an 8-hour workday, as measured with a portable, direct-reading, hopcalite-type carbon monoxide meter calibrated against known concentrations of CO, or with gas detector tube units certified under Title 42 of the Code of Federal Regulations, Part 84. (An illustrative TWA compliance calculation appears after this section.)
(2) No level of carbon monoxide to which workers are exposed shall exceed a ceiling concentration of 200 ppm.
(b) Calibration, Sampling and Analysis
Procedures for calibration of equipment and for sampling and analysis of CO samples shall be followed as provided in Appendix I.
Because employees with overt cardiovascular disease may not be protected by an occupational exposure limit of 35 ppm of CO, a medical program should be instituted, consisting of preplacement and periodic examinations with special attention to the cardiovascular system and to medical conditions which could be exacerbated by exposure to CO. Such a medical program could also provide the opportunity for conducting antismoking programs for high-risk employees.
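As an illustration of the arithmetic behind the limits above, the following Python sketch computes an 8-hour TWA from timed measurements and flags both TWA and ceiling exceedances. It is a minimal sketch for illustration only; the sample data, function names, and the assumption that the measurements cover the full workday in known intervals are hypothetical and are not part of this criteria document.

```python
# Illustrative only: a minimal 8-hour TWA compliance check.
# The two thresholds follow the recommended standard (35 ppm TWA over
# 8 hours; 200 ppm ceiling); everything else here is hypothetical.

TWA_LIMIT_PPM = 35.0       # 8-hour time-weighted average limit
CEILING_LIMIT_PPM = 200.0  # ceiling concentration limit

def eight_hour_twa(samples):
    """samples: list of (concentration_ppm, duration_hours) tuples.
    Returns the time-weighted average over an 8-hour workday."""
    total_exposure = sum(c * t for c, t in samples)
    total_time = sum(t for _, t in samples)
    if abs(total_time - 8.0) > 1e-6:
        raise ValueError("samples must cover the full 8-hour workday")
    return total_exposure / total_time

# Hypothetical workday: six timed measurements covering 8 hours.
day = [(20.0, 2.0), (50.0, 1.0), (30.0, 2.0),
       (40.0, 1.0), (25.0, 1.5), (15.0, 0.5)]

twa = eight_hour_twa(day)
print(f"8-hour TWA: {twa:.1f} ppm")                     # 29.4 ppm
print("TWA compliant:", twa <= TWA_LIMIT_PPM)           # True
print("Ceiling compliant:",
      all(c <= CEILING_LIMIT_PPM for c, _ in day))      # True
```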
Areas where significant exposure to carbon monoxide is likely to occur shall be posted with a sign stating:
# CARBON MONOXIDE (CO) DANGER
High concentrations may be fatal
Provide adequate ventilation
High concentrations in air may be explosive
Seek immediate medical attention if you experience any of the symptoms below:
1 - Severe headache
2 - Dizziness
3 - Nausea and vomiting
Gas masks are located: (Specific location to be filled in by employer)
For purposes of this section, the term "significant exposure to carbon monoxide" refers to eight-hour TWA exposures exceeding 25 ppm but excludes such exposure as may be self-administered through smoking.
In the event of an emergency, or when a variance has been allowed and the use of respiratory protective equipment has been authorized by the Secretary of Labor, the employer shall provide and insure that the employee wears, as appropriate, one of the following respiratory protective devices approved by NIOSH or the Bureau of Mines as provided in Part 11 of Title 30, Code of Federal Regulations:
(a) Type N Gas Mask (Emergency Situation): For entry into or escape from an environment containing not over 20,000 ppm, which is not deficient in oxygen, for a total exposure period of not more than 30 minutes.
(b) Areas in which large amounts of CO are stored, used, or emitted, or areas within the workplace through which large amounts of CO are transported, shall be provided with sufficient respiratory protective devices of the types specified in Section 4, and these devices shall be readily accessible to persons who may be located in the area, to assure a timely, orderly evacuation of the area by all persons in the event of an accidental, massive release of CO. An automatic visual and audible alarm, set to be activated when the CO concentration reaches 500 ppm, should be employed in such areas. Employees working in such areas shall be informed of the hazards and symptoms of acute CO poisoning (see Section 6) and shall be trained to implement an emergency evacuation plan designed for such an occurrence. Periodic drills, held not less frequently than every six months, shall be conducted to assure that such plans are adequate and effective in case of an emergency situation.
(c) Fire Hazards
Adequate fire extinguishing agents shall be readily available in the areas outlined in paragraph (b) of this section, since CO will burn when mixed with air and may be explosive at concentrations between 12.5 and 74.2 percent.
# Section 5 - Emergency Procedures
(a) Each employee who receives significant exposure to carbon monoxide [see Section 3(b)] shall be apprised of all hazards, relevant symptoms, appropriate emergency procedures, and proper conditions and precautions for the safe use of CO or safe exposure to CO, and shall be instructed as to the location of such information, which shall be kept on file as prescribed in paragraph (b) of this section and shall be readily accessible to all employees at each establishment where CO is involved in industrial processes and operations.
(a) Each container in which CO is stored shall be examined for leaks upon its arrival at the establishment or upon filling and shall be reexamined periodically, at least every three months.
(b) Prior to transferring CO from a storage container, an inspection shall be conducted to detect any gas leaks in the transport system (e.g., cylinder seal with gas regulator, regulator apparatus, regulator seal with transport conduits, conduit system, etc.).
(1) Personal exposure ("breathing zone") samples shall be collected in accordance with the procedures specified in Appendix I in all workplaces where employees are significantly exposed to CO [see Section 3(b)]. Samples shall be collected and evaluated as both TWA and ceiling concentration values.
(2) The frequency of monitoring should be at least annual. It is recognized that more frequent monitoring may be indicated in certain circumstances, depending upon the nature of the process, the rate of production, the effectiveness of control measures, the time of day, and seasonal variation; however, more frequent monitoring requirements cannot be recommended until there are extensive studies of specific operations in industry.
(3) Blood analysis for COHb, as specified in Appendix II, shall be performed on all persons employed in work areas when, in the judgment of the OSHA Industrial Hygienist, biologic standards are needed to evaluate borderline exposure to CO (see Chapter V).
# II. INTRODUCTION
The Occupational Safety and Health Act of 1970 directs that standards be based upon the results of research, demonstrations, and experiments and any other information available which will assure, insofar as practicable, that no employee will suffer diminished health, functional capacity, or life expectancy as a result of his work experience. The National Institute for Occupational Safety and Health (NIOSH), after a review of data and consultations with others, formalized a system for the development of criteria upon which standards can be established to protect the health of employees from exposure to hazardous chemical and physical agents. It should be emphasized that any criteria documentation for a recommended standard should enable management and labor to develop better engineering controls and more healthful work practices and should not be accepted as a final goal in itself [1].
In evaluating occupational hazards and setting priorities, it was determined that the potential for exposure of employees at the workplace to CO was greater than that for any other chemical or physical agent. The significance of the CO dose-response relationship in man is further attested by numerous research studies documented in the scientific literature. In a recent conference on the biological effects of CO sponsored by the New York Academy of Sciences, the findings and views of scientists from the United States, Australia, England, France, and Denmark were presented [2]. Background information for this report was obtained from many sources, including those directly cited and listed in the references. In addition, there is a bibliography of nearly 1,000 references and abstracts on CO [3]. The criteria contained in this document were developed to assure that the recommended standard based thereon would (1) protect employees against both acute and chronic CO exposure, (2) be measurable by techniques that are valid, reproducible, and available to industry and governmental agencies, and (3) be attainable with existing technology.
In 1969 the Committee on Effects of Atmospheric Contaminants on Human Health and Welfare, appointed by the Environmental Studies Board of the National Academy of Sciences, reported on the evaluation of current knowledge concerning the effects of CO on man. While this report deals primarily with the effects on man of exposure to CO from air pollution sources, it is pertinent to note the following statement in its Introduction: "...Today, the two main sources of carbon monoxide appear to be cigarette smoke and the internal combustion engine.
And the subject of concern has changed from the acute effects of short-term exposure...to the lasting effects of long-term, low-level exposure, of a duration anywhere from a month to a lifetime, and in the range of CO concentrations that would produce 0.5-10% COHb." Thus, in the case of occupational exposures to CO, a worker's smoking habits must be taken into consideration when evaluating the environment. This factor has not been generally considered until the past few years. It should be emphasized that the above report raised many questions concerning the effects of low-level exposures to CO and that the Committee recommended many additional research studies in this area.
Although important evidence exists which indicates that subtle aberrations may occur in the central nervous system (CNS) during exposure to levels of CO lower than those of the recommended standard, the significance of these changes and their translation into effects upon employee safety and health are not entirely clear. The diversity of opinions and the conflicting experimental evidence in this area do not permit a clear-cut assessment of the scientific merit of such data, or of its extrapolation to the normal working population, at this time. If reliable data become available which clearly demonstrate significant impairment of employee behavior during exposure to very low levels of CO (producing less than 5 percent COHb), then the criteria for the recommended standard will be reassessed on the basis of the additional evidence.
# III. BIOLOGIC EFFECTS OF EXPOSURE TO CARBON MONOXIDE
Carbon monoxide (CO) is an odorless, colorless, tasteless gas which is an active reducing agent for chemicals at elevated temperatures but is principally encountered as a waste product of the incomplete combustion of carbonaceous material. A summary of its physical properties is presented in Table I. The best understood biologic effect of CO is its combination with hemoglobin (Hb) to form carboxyhemoglobin (COHb), thereby rendering the hemoglobin molecule less able to bind with oxygen. This action of CO results in more persons succumbing each year to acute CO poisoning than to any other single toxic agent, except alcohol [6].
# Extent of Exposure
With the single exception of carbon dioxide (CO2), total emissions of CO each year exceed those of all other atmospheric pollutants combined. In 1968 it was estimated that 102 million tons of CO were released into the atmosphere by the major sources of emission. Over one-half of this amount (58 percent) was produced by the gasoline-powered internal combustion engine. Specified industrial processes accounted for approximately 10 percent of the total. A summation of CO emission estimates for 1968 by specific industrial processes is presented in Table II. From Table II it can be observed that large amounts of CO are emitted from petroleum refineries, iron foundries, kraft pulp mills, sintering mills, lampblack plants, and formaldehyde manufacturers. Major sources of CO production within the first four industries have been identified as the cupola in the iron foundry, the catalytic cracking units in the petroleum refineries, the lime kilns and the kraft recovery furnaces in the kraft paper mills, and the sintering of blast furnace feed in sintering plants.
Aside from the above major industrial processes which produce large quantities of CO, there are numerous operations (arc welding, automobile repair, traffic control, tunnel construction, etc.)
where the occupational exposure of a worker to CO can be considerable. In fact, any industrial process or operation in which incomplete combustion of carbonaceous material occurs may easily be of consequence as concerns occupational exposure to CO.
# Historical Reports and Theoretical Considerations
Man's intimate association with carbon monoxide as a true environmental hazard began with his discovery of fire. Since that time his technological history has been one of advancement by means of incomplete combustion. The seriousness of exposure to high concentrations of CO has been recognized since the earliest medical writings. The Greeks and Romans used exposure to CO as a means of executing criminals and of committing suicide [7]. Priestley discovered the chemical composition of CO in the late 18th century.
The affinity constant of hemoglobin for CO relative to oxygen was originally reported to lie within the range of 220 to 290, and later investigations have generally confirmed this range, although at a slightly lower level, under conditions of complete hemoglobin saturation by oxygen and CO. Briefly, the affinity constant may be expressed as the number of moles of oxygen which must be present with each mole of CO in order to maintain an equal saturation of hemoglobin. While CO actually combines less rapidly with reduced hemoglobin than does oxygen, the tenacity of the binding between CO and hemoglobin is some 200 to 300 times that of oxygen. The saturation of hemoglobin by either oxygen or CO occurs in four steps involving intermediates with four equilibrium constants:

$$\mathrm{Hb_4} + \mathrm{X} \rightleftharpoons \mathrm{XHb_4}, \qquad K_1 = k'_1/k_{-1}$$

$$\mathrm{XHb_4} + \mathrm{X} \rightleftharpoons \mathrm{(X)_2Hb_4}, \qquad K_2 = k'_2/k_{-2}$$

$$\mathrm{(X)_2Hb_4} + \mathrm{X} \rightleftharpoons \mathrm{(X)_3Hb_4}, \qquad K_3 = k'_3/k_{-3}$$

$$\mathrm{(X)_3Hb_4} + \mathrm{X} \rightleftharpoons \mathrm{(X)_4Hb_4}, \qquad K_4 = k'_4/k_{-4}$$

In the above equations, X represents either oxygen or CO, with a different equilibrium constant for each equation. It has been determined that in both cases K4 is much higher (18 to 50 times higher) than K1, K2, or K3, because k'4 is much greater than the association rate constants of the earlier steps. Under these circumstances the last ligand to bind to hemoglobin (the K4 equation) dissociates much more readily than it binds [15]. The differences between the four constants depend on intra-molecular forces occurring as a result of the interactions of each ligand with the others, or with other portions of the hemoglobin molecule. Hence, the effect is an allosteric one, resulting in conformational changes in the hemoglobin molecule. These differences in reaction rates affect the dissociation of oxygen or CO from hemoglobin such that, as can be observed in Figure 1, the shapes of the respective dissociation curves are sigmoidal rather than parabolic, as is the case with myoglobin, which possesses only one heme group with a single dissociation constant [16].
The difference in the partial pressure of oxygen (P_O2) between freshly oxygenated arterial blood (P_O2 = 100 mm Hg) and mixed venous blood (P_O2 = 40 mm Hg) represents a release to the tissues of approximately 5 milliliters of O2 per 100 milliliters of blood [16]. A shift of the steep portion of the oxyhemoglobin dissociation curve to the left would tend to change the release of oxygen to the tissues appreciably. While the dissociation curve shifts to the right, allowing for a more efficient dissociation of oxygen to the peripheral tissues, under conditions of reduced ambient oxygen tension (hypoxic hypoxia), just the opposite situation occurs during exposure to CO (anemic hypoxia) [4,16,19,21].
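The affinity constant described in the passage above is commonly written as the Haldane relation. As a worked illustration (the numerical choices are assumptions made for this sketch, not figures from this document), take M ≈ 220, the lower end of the reported range, an alveolar oxygen tension of 100 mm Hg, and continuous exposure at the recommended 35 ppm TWA. With an alveolar dry-gas pressure of roughly 713 mm Hg, the CO partial pressure is about 35 × 10⁻⁶ × 713 ≈ 0.025 mm Hg, so at equilibrium

$$\frac{[\mathrm{COHb}]}{[\mathrm{O_2Hb}]} = M\,\frac{P_{\mathrm{CO}}}{P_{\mathrm{O_2}}} \approx 220 \times \frac{0.025}{100} \approx 0.055,$$

which corresponds to an equilibrium COHb of roughly 0.055/1.055, or about 5 percent of total hemoglobin. This back-of-the-envelope estimate is consistent with the standard's aim of holding COHb at or below 5 percent, though it neglects endogenous CO production and the many hours of exposure required to approach equilibrium.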
The leftward shift during CO exposure occurs because of the much greater affinity of CO for hemoglobin, and in spite of the fact that the amount of oxygen in physical solution in the blood remains near normal [4]. The oxygen content of the blood is not only lowered during exposure to CO, but the shift of the oxyhemoglobin dissociation curve to the left also decreases the amount of the remaining oxygen that is made available to the tissues.
As mentioned earlier in this report, it has been confirmed that an endogenous source of CO exists as a product of heme catabolism [51,52]. When an α-methylene bridge in the heme portion of hemoglobin is broken during the catabolic process, a molecule of CO is released [51]. It has been estimated that this production amounts to approximately 0.3 to 1.0 milliliter (ml)/hour, with an additional 0.1 ml/hour resulting from a similar catabolic process involving other heme-containing compounds (e.g., myoglobin and the cytochrome and catalase enzymes) [6,53]. This endogenous production of CO, which gives rise to approximately 0.5 to 0.8 percent COHb, would not be expected to be of important physiologic consequence per se, since the hypothesized mechanism has evolved with man, but its removal would merit due consideration in a closed system such as a submarine or a space capsule. Also, in the determination of total CO exposure this quantity must be included as "baseline" COHb.
# Cardiovascular Effects of CO
The critical importance of cardiovascular involvement during exposure to CO is becoming increasingly evident [6,28,49-60]. While the brain has a higher requirement for oxygen than the heart, in contrast to the cerebral circulation the coronary circulation must supply an even greater amount of oxygen during periods of generalized tissue hypoxia, since under these circumstances the heart is forced to increase both its rate and its output in order to meet the normal oxygen demands of the body [49]. This increase in myocardial activity demands an increased oxygen supply to the myocardium, which must be met by the coronary circulation. Under hypoxic conditions an increased oxygen supply to the peripheral tissues can be accommodated by increased blood flow (via vascular dilatation) and/or increased oxygen extraction by the tissues. As mentioned earlier, the peculiar dissociation characteristics of O2Hb permit an oxygen reserve which is used at reduced P_O2. The myocardium under these circumstances appears only to increase the flow of blood rather than to extract an additional amount of oxygen from the coronary circulation. While the peripheral tissues normally extract only 25 percent of the oxygen content of the perfusing arterial blood during resting conditions, the myocardium extracts 75 percent, thus leaving the mixed venous blood only 25 percent saturated. This mechanism has the overall effect of maintaining the myocardial oxygen tension at a higher level than would be present in other muscle tissue and thus insures a continual aerobic metabolism, even under hypoxic duress. In terms of oxygen tension, the mixed venous blood of the peripheral tissues is approximately 40 mm Hg, while the mixed venous blood of the coronary circulation is only 20 mm Hg. In the presence of COHb (and the shift to the left of the oxyhemoglobin dissociation curve), however, the arterio-venous difference can only be maintained by an increased flow in the coronary circulation, as the simplified calculation below illustrates.
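As a simplified numerical illustration of the preceding paragraph (the oxygen contents are assumed textbook round numbers, not values from this document), let the arterial oxygen content be 20 ml O2 per 100 ml of blood. With the resting extraction fractions cited above, peripheral venous blood retains about 15 ml/100 ml, whereas coronary venous blood retains only about 5 ml/100 ml. If 5 percent of circulating hemoglobin is bound as COHb, the arterial content falls to roughly 19 ml/100 ml, and because the myocardium has little extraction reserve, oxygen delivery can be maintained only by raising coronary flow:

$$Q' = Q_0\,\frac{C_a - C_v}{C_a' - C_v} = Q_0\,\frac{20 - 5}{19 - 5} \approx 1.07\,Q_0 .$$

Even this understates the requirement, since the accompanying leftward shift of the dissociation curve further restricts oxygen unloading at any given venous tension.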
In an individual with diminished coronary circulation because of coronary heart disease, however, this situation may result in a decrease in the mixed venous oxygen tension of the myocardium, precipitated by an inability to maintain the normal arterio-venous gradient. This hypoxic effect is further enhanced, as mentioned above, by an increase in cardiac rate and output as a general response to peripheral tissue hypoxemia. A person with diminished coronary circulation caused by coronary heart disease, consequently, may be constantly near the point of myocardial tissue hypoxia. In a study of dogs, exposure to CO increased the COHb slightly less than 1 percent per minute. It appears that the canine may be able to tolerate a much higher level of COHb than the human before significant myocardial changes occur.
Jaffe has emphasized the relationship between CO poisoning and an elevated titer of serum lactic dehydrogenase as an indicator of myocardial damage. He speculated that "...even 'normal' amounts of carbon monoxide may operate as the last straw in precipitating coronary attacks." When rats were exposed to 500 ppm CO for four hours, Lassiter, Coleman, and Lawrence [65] observed a statistically significant aberration in the plasma lactic dehydrogenase isoenzyme distribution which was considered to be highly indicative of myocardial damage. The COHb level in the animals at termination of exposure was <40 percent.
The relationship between chronic cigarette smoking and increased risk of coronary heart disease (CHD) is undeniable [66], as is the fact that cigarette smoking causes increased exposure to CO. A CO concentration of 4 percent (40,000 ppm) in cigarette smoke, which will cause an alveolar concentration of 0.04 to 0.05 percent (400 to 500 ppm), will produce a COHb concentration of 3 to 10 percent [6,67-69]. Goldsmith [70] estimated that the cigarette smoker is exposed to 475 ppm of CO for approximately six minutes per cigarette. In a review article on CO and human health, Goldsmith and Landaw stated that the COHb level in the one-pack-a-day cigarette smoker is 5.9 percent, which they say is a sufficient concentration to impose a serious health threat to persons with underlying vascular insufficiency. In another paper these two investigators used a regression analysis of expired air samples from 3,311 longshoremen and found a COHb of 6.8 percent in two-pack-a-day smokers and 1.2 percent COHb in nonsmokers. They believed that the high level in the nonsmokers, above the 0.5 to 0.8 percent normally present as a result of endogenous production, was accounted for by occupational exposure. In a study by Kjeldsen, COHb levels of smokers and nonsmokers were compared among 934 CHD-free persons. The mean COHb was 0.4 percent for nonsmokers and 7.3 percent for cigarette smokers who inhaled. In addition, the mean level of COHb for all 416 smokers in the study, regardless of inhalation habits or number of cigarettes smoked, was 4.0 percent. A similar study is presently being conducted by Stewart and Peterson [72] to determine the range of COHb in the American public. The presently available data, from Milwaukee, have demonstrated that the mean COHb level for nonsmokers is 1.33 ±0.85 percent and for smokers is 4.47 ±2.52 percent. In an epidemiologic survey involving approximately 4,000 middle-aged males, who were kept under medical surveillance for 8 to 10 years, Doyle and co-workers [74] found that one-pack-a-day smokers had about a three-fold greater risk of myocardial infarction than did nonsmokers or former smokers.
The finding that death rates for ex-smokers were no higher than for nonsmokers prompted Bartlett [16] to state that the effect of cigarette smoking on this particular pattern is completely reversible when an individual ceases to smoke. He concluded that smoking caused a myocardial hypoxemia by some acute, reversible process, probably unrelated to the formation of hard, irreversible, atherosclerotic lesions, and that CO would fit such an epidemiologic pattern very well. He further stated, however, that other components of cigarette smoke, including nicotine, may be responsible for this pattern and that the question, in his estimation, remained unsolved. Astrup, Kjeldsen, and Wanstrup, reporting on an investigation in progress in 1970 in which 1,000 randomly chosen individuals were examined for evidence of arteriosclerotic disease, demonstrated a clear relationship between this disease and high COHb levels after smoking. They further stated that it is very likely "...that it is the inhalation of CO in tobacco smoke that is, in part, responsible for the much higher risk of smokers to develop coronary heart disease and other obliterating arterial diseases in comparison to that of nonsmokers."
In Knelson's study of patients with angina pectoris, exposure to CO significantly (p <0.05) decreased the time to onset of exercise-induced pain in the patients exposed to 50 ppm for four hours. The same level of significance was also found for the patients exposed to 100 ppm, although the differences in results between the two regimes were not significantly different. The other parameter measured, duration of pain, was statistically significant (p <0.05) only for the patients exposed to 100 ppm CO. Again the differences in results between the lower and higher exposures were not significant. Knelson concluded, using this particular protocol of exposure, that patients with angina pectoris who were exposed to CO and then exercised experienced pain earlier, and their pain lasted longer, than when these same patients were exposed to ambient air.
An interesting set of data has been produced recently by Horvat and co-workers [79] concerning the effect of oxygen breathing on the threshold of angina pain in patients with confirmed coronary heart disease. The investigators first determined the angina threshold while the patients breathed air, by increasing the heart rate via right atrial pacing. Patients were then, unknowingly, switched to 100 percent oxygen and the heart was paced to the previously determined angina threshold. Angina pain did not occur in nine of eleven patients. Associated with this improvement was increased lactate extraction, from an average of -17 ±15 to +18 ±10 percent (p <0.025). In four of six patients lactate production turned to lactate extraction; in six of seven patients S-T abnormalities in the EKG were improved, as was pulsus alternans in three of five patients. These data present significant evidence that oxygen breathing permits the heart to do more work before coronary insufficiency develops. They also indicate the validity of the approach used by Knelson in determining the angina threshold.
Lloyd and co-workers noted that while certain work areas (e.g., janitors) had an excess of deaths due to heart disease, other areas had a deficit in this specific cause of mortality. In attempting to determine why an apparent selection for health should have been more likely among steelworkers dying from heart disease, it was suggested that such persons may have been more likely to migrate between work areas.
Thus, employees exhibiting symptoms of cardiac insufficiency (e.g., shortness of breath) may move to less physically demanding jobs on their own. Similarly, employees with CHD who return to work following an attack are often returned to less physically demanding jobs (e.g., janitorial and mechanical maintenance work). As a result, the investigators indicate that selection for health may have been more important than environmental factors in explaining the excess in mortality from heart disease among janitors and mechanical maintenance personnel. If this suggestion is correct, then the work areas from which those employees dying from heart disease migrated would not be expected to show similar increases in deaths from heart disease. For example, if these employees migrated from the aforementioned areas that produce significant exposure to CO, then those areas would be expected to have a lower standardized mortality ratio (SMR) for heart disease (a minimal illustration of the SMR calculation follows this passage). Although this rate for those areas was not published in the paper, because of statistical insignificance, the SMR for all deaths in those areas, as mentioned, did not demonstrate statistically significant trends. Based upon this study, then, it would not seem possible either to infer or to deny that occupational exposure to CO at the three work places in question resulted in an excess of deaths due to heart disease. More information is needed concerning the daily exposure of workers, engaged in various levels of activity, to specific sources of CO within their working environment, and the correlation of such information with both smoking habits and nonoccupational exposure.
Regardless of CO source, however, the rate of CO excretion from the blood is dependent upon the concentration in the ambient air, and for smokers with elevated COHb levels who are occupationally exposed to CO, excretion of CO will be proportionally delayed. Because the recommended standard for occupational exposure to CO is designed to protect employees with CHD, it is necessary to characterize the extent of this disease among the general worker population. Friedberg has defined CHD to represent clinical heart disease due to lesions of the coronary arteries. However, the term CHD is generally used to refer to the process of atherosclerosis of the coronary arteries leading to disturbances in the myocardial blood supply. It is in this latter context that the term applies in this criteria document. It is an established fact that each year more persons in the U.S. die from CHD than from any other disease [103]. Coronary atherosclerotic heart disease is the most common form of cardiac disease in adults in the U.S. [104]. During the Korean War, autopsies performed on young soldiers with an average age of twenty-two years revealed that 77.3 percent had gross pathologic evidence of CHD [104,105]. A study of autopsies in a stabilized population of 30,000 revealed that CHD was the cause of death in 40 percent of the males. Friedberg states: "The diagnosis of coronary (atherosclerotic) heart disease refers to clinical manifestations and not to the mere presence of atherosclerotic lesions. Extensive pathologic lesions may be present but cannot be diagnosed unless they produce overt clinical manifestations or are revealed by coronary angiography." In most cases the first clinical manifestation of CHD is expressed either as the angina pectoris syndrome or as frank myocardial infarction.
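The standardized mortality ratio invoked in the mortality discussion above is straightforward to compute. The following is a minimal sketch; the observed and expected death counts are hypothetical and are not taken from the study cited.

```python
# Illustrative only: standardized mortality ratio (SMR) for a work area.
# Observed and expected deaths below are hypothetical.

def smr(observed_deaths, expected_deaths):
    """SMR = 100 * observed / expected; 100 means mortality equal to the
    reference population, >100 an excess, <100 a deficit."""
    return 100.0 * observed_deaths / expected_deaths

# Hypothetical work area: 30 heart-disease deaths observed where
# reference-population rates applied to the area's person-years predict 24.
print(f"SMR = {smr(30, 24):.0f}")  # -> SMR = 125, a 25 percent excess
```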
According to the Framingham, Massachusetts, study conducted by the USPHS, in one-sixth of the cases of CHD the disease was first manifested as sudden death.
The anaerobic metabolism produced in the myocardium of patients with coronary artery disease during acute exposure to concentrations of CO sufficient to produce a COHb content of less than 9 percent provides metabolic evidence of myocardial hypoxia occurring under this regime. The finding that in patients with coronary artery disease the coronary blood flow did not increase at this level of COHb saturation, although it did increase in controls, serves to further underscore the possible consequences for an individual with coronary heart disease (CHD) who is carrying less than 9 percent COHb. Furthermore, it must be emphasized that these coronary patients were at rest and were not subjected to exercise. The consequences of this additional stress were documented earlier in the study by Knelson [78]. His discovery that the time to onset of pain was diminished in cigarette smokers with angina pectoris during exercise immediately following exposure to 50 ppm of CO for four hours (average COHb of 3.0 percent) provides considerable evidence that workers with CHD, and particularly those with CHD who smoke, should not be exposed to concentrations of CO which will produce a level of COHb in excess of 5 percent.
Similarly, the reported effects of CO on behavior are not in complete agreement. As mentioned previously, several mechanisms have been suggested to account for the differing results. These suggestions can be summarized by noting that no one study has completely replicated the experimental design of any previous study that indicated a behavioral impairment due to CO.
Because the emission of CO is on so large a scale, it becomes difficult to single out, identify, and rank individual sources in order of importance as concerns occupational exposure. Indeed, as emphasized earlier in this report, the smoking habits of the worker have a tremendous bearing on his actual daily exposure to CO. The CO pollution from traffic to and from the place of employment likewise will affect his total daily exposure. Several studies concerning occupational exposure to CO are presented below. The investigators' comments on these studies emphasize the need for the employer to recognize the effect that the level of activity has upon the uptake of CO, to judiciously evaluate the exposure of his employees, and to limit their activity accordingly. The data in Appendix IV have been included specifically for this purpose. In addition, the employer must give special consideration to limiting the activity of employees exposed to CO at high altitudes, in order to compensate for the dual loss in the oxygen-carrying capacity of the blood.
# VI. COMPATIBILITY WITH AIR QUALITY STANDARDS
The Environmental Protection Agency (EPA), under provisions of the Clean Air Act (PL 91-604), promulgated national primary and secondary air quality standards on April 30, 1971 [123]. Concerning the primary and secondary standards for CO, the Administrator stated: "In the comments, serious questions were raised about the soundness of this evidence. Extensive consideration was given to this matter. The conclusions reached were that the evidence regarding impaired time-interval discrimination had not been refuted and that a less restrictive national standard for CO would therefore not provide the margin of safety which may be needed to protect the health of persons especially sensitive to the effects of elevated carboxyhemoglobin levels.
The only change made in the national standards for CO was a modification of the 1-hour value. The revised standard affords protection from the same low levels of blood carboxyhemoglobin as a result of short-term exposure. The national standards for carbon monoxide, as set forth below, are intended to protect against the occurrence of carboxyhemoglobin levels above 2 percent. It is the Administrator's judgment that attainment of the national standards for carbon monoxide will provide an adequate safety margin for protection of public health and will protect against known and anticipated adverse effects on public welfare."
The air quality standard is designed to protect the population-at-large and takes into consideration 24-hour per day exposure of the very young, the very old, and the seriously ill. The evidence presented in this (NIOSH) criteria documentation supports the necessity of providing protection for that portion of the general worker population with coronary heart disease (CHD), who are especially sensitive to elevated levels of COHb. Although the Administrator of EPA has stated above that the evidence concerning the impairment of time-interval discrimination at low levels of COHb, presumably as low as 2 percent, has not been refuted, neither has such evidence been confirmed.
# VII. APPENDIX I: SAMPLING AND ANALYSIS OF CARBON MONOXIDE (CO)
Worker exposure to CO shall be measured with a portable, direct-reading, hopcalite-type carbon monoxide meter calibrated against known concentrations of CO, or with gas detector tube units certified under Title 42 of the Code of Federal Regulations, Part 84. Samples shall be collected from the worker's breathing zone, and a sufficient number shall be collected at random intervals throughout the workday so that a statistically accurate determination of compliance may be made as outlined in the following section.
# Principles for Air Sampling for Carbon Monoxide (CO)
The characteristic manner in which CO proceeds from the pulmonary alveolar spaces, through the blood capillary membrane "barrier," into the plasma, through the red blood cell membrane, and ultimately to combination with hemoglobin imposes certain requirements on air sampling if a proper determination of compliance with the recommended standard is to be made. The rate of CO diffusion from the lung to hemoglobin depends upon the partition coefficient for CO between alveolar air and pulmonary blood. The magnitude of this coefficient is such as to delay the transfer of CO to the circulating hemoglobin (Hb). This time delay makes it essential, in any estimation of the carboxyhemoglobin level, to know how long the exposure was experienced as well as the CO concentration during the exposure. Reference to Figure 2 shows that a continuous exposure of over eight hours' duration is required to attain maximum combination of CO with hemoglobin at the recommended standard of 35 ppm (an illustrative uptake calculation appears below).
# Air Sampling Methods
The sampling and analytical procedures recommended below will provide the necessary data to determine compliance with the recommended standard. The methodology and equipment utilized to collect and analyze samples to determine concentrations of CO in the air, or the concentration of COHb in the exposed worker, shall be subjected to a proficiency and/or calibration test program conducted by NIOSH or by another agency of the federal government under agreement with NIOSH for certification for such determinations.
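The time dependence noted above (Figure 2) can be illustrated with a simple first-order uptake model. This is a minimal sketch under stated assumptions, not the model used to construct Figure 2: COHb is assumed to approach its equilibrium value exponentially, with round-number values for the equilibrium level, the endogenous baseline, and the effective time constant at rest.

```python
# Illustrative only: first-order approach of COHb to its equilibrium value
# during continuous exposure. The equilibrium level (~5% at 35 ppm), the
# baseline (~0.7%, endogenous), and the time constant (~4 h at rest) are
# assumed round numbers for illustration, not values taken from Figure 2.
import math

BASELINE_PCT = 0.7     # endogenous COHb
EQUILIBRIUM_PCT = 5.0  # assumed equilibrium COHb at 35 ppm
TAU_HOURS = 4.0        # assumed effective uptake time constant at rest

def cohb_at(t_hours):
    """COHb (percent) after t hours of continuous exposure at 35 ppm."""
    return BASELINE_PCT + (EQUILIBRIUM_PCT - BASELINE_PCT) * (
        1.0 - math.exp(-t_hours / TAU_HOURS))

for t in (1, 2, 4, 8, 12, 24):
    print(f"{t:2d} h: {cohb_at(t):.2f}% COHb")
# After 8 h the level is ~86% of the way to equilibrium, consistent with the
# statement that more than eight hours is required to approach the maximum.
```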
# Determination of Compliance
The following procedure shall be used to determine compliance with the eight-hour, time-weighted average standard, based on a small number of instantaneous (grab) samples collected at random intervals during the workday. Given: the results of n samples with a mean m and a range r (the difference between the least and greatest values). If, for from 3 to 10 samples, m is greater than the total of: ...
An extensive review of the methodology covering the determination of carbon monoxide has been prepared by Maehly [124], and an updating review has since appeared.
Because carboxyhemoglobin constitutes the toxic product formed during carbon monoxide intoxication, and because the carboxyhemoglobin formed is quite stable, the analyst is presented with the opportunity of directly evaluating the toxic agent. In so doing, two facts are established: (1) proof that an exposure has occurred, and (2) quantitative proof that the exposure has produced a particular toxic concentration in the body. Diffusion, chromatographic, and gasometric methods all involve liberation of carbon monoxide from hemoglobin and either gasometric transfer operations or secondary titrations, which are technically difficult for the microliter gas volumes involved. This limitation is even more critical when the effects of temperature and pressure on gas volumes are considered. The methods for direct measurement of carboxyhemoglobin have been carefully reviewed in detail. Of these methods, three are considered feasible candidates for the recommended method.
# Analytical Methodology
(a) The colorimetric method of Amenta uses small blood samples and measures the absorbance of an ammonia-hemolyzed sample at three wavelength settings on the spectrophotometer: 560, 575, and 498 nm. In calculating the results, the absorbance of the mixture at 498 nm (an isosbestic point) is the denominator in the ratio computed for oxyhemoglobin, for carboxyhemoglobin, and for the unknown. This mathematical manipulation gives a value corrected for the total hemoglobin concentration (i.e., the R factor). The reference values (R_100 = 1.097, R_0 = 0.057) are then used for calculating the percentage of COHb by the ratio (R_x - R_0)/(R_100 - R_0) × 100 = percentage COHb (a minimal numerical sketch of this calculation follows this section). The relatively small absorbance difference observed with this method reduces its precision and accuracy, especially in the low COHb range (ca. 10 percent).
(b) The method of Harper involves hemolysis of blood by distilled water, addition of a buffer (pH 7.2), and reduction of oxyhemoglobin by sodium hydrosulfite. The reduced hemoglobin is then converted to methemoglobin by the addition of potassium ferricyanide. The carboxyhemoglobin is converted to methemoglobin at a slower rate than reduced hemoglobin; therefore, the carboxyhemoglobin can be measured before appreciable conversion to methemoglobin takes place. The method requires careful control of timed manipulations to obtain reproducible results.
# Range and Sensitivity
For 1 ml of oxalated blood the range is 0 to 100 percent saturation COHb. Sensitivity is 0.5 percent saturation.
# Interferences
Hemolyzed blood contains pigments arising from the breakdown of hemoglobin which will cause interference in the method. Bile pigments may also interfere.
# Precision and Accuracy
The differences in concentrations obtained by this procedure, compared with results obtained on the same samples analyzed by the Van Slyke method, are not significant at the 5 percent level [t-value = 1.81 (14 degrees of freedom)].
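The ratio calculation in method (a) can be sketched in a few lines. The reference R values follow the figures given above; the absorbance readings, and the exact form of the numerator of the R factor, are assumptions made for this illustration rather than details taken from the Amenta method itself.

```python
# Illustrative only: percentage COHb from the spectrophotometric R factor,
# following the ratio form given for the Amenta method. R_0 and R_100 are
# the reference R values for 0 and 100 percent COHb as read from the text;
# the sample absorbances below, and the numerator of r_factor, are assumed.

R_0 = 0.057    # R factor of a fully CO-free (oxygenated) reference
R_100 = 1.097  # R factor of a fully CO-saturated reference

def percent_cohb(r_sample):
    """Linear interpolation between the two reference R factors."""
    return (r_sample - R_0) / (R_100 - R_0) * 100.0

def r_factor(a_560, a_575, a_498):
    """R factor: an absorbance difference normalized by the isosbestic
    reading at 498 nm, which corrects for total hemoglobin.
    (The exact numerator used by Amenta is assumed here.)"""
    return (a_560 - a_575) / a_498

# Hypothetical readings for one hemolyzed sample:
r = r_factor(a_560=0.412, a_575=0.380, a_498=0.250)
print(f"R = {r:.3f}; COHb = {percent_cohb(r):.1f}%")
```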
(A) Chemical stability, incompatibility, hazardous decomposition products, and hazardous polymerization.
(A) Detailed procedures to be followed, with emphasis on precautions to be taken in cleaning up and safe disposal of materials leaked or spilled. This includes proper labeling and disposal of containers containing residues, contaminated absorbents, etc.
(ix) Section VIII. Special Protection Information. (A) Requirements for personal protective equipment, such as respirators, eye protection, and protective clothing, and for ventilation, such as local exhaust (at the site of product use or application), general, or other special types.
(x) Section IX. Special Precautions. (A) Any other general precautionary information, such as personal protective equipment for exposure to the thermal decomposition products listed in Section VI, and to particulates formed by abrading a dry coating, such as by a power sanding disc.
(xi) The signature of the responsible person filling out the data sheet, his address, and the date on which it is filled out.
(xii) The NFPA 704M numerical hazard ratings as defined in Section (c)(5) following. The entry shall be made immediately to the right of the heading, "Material Safety Data Sheet," at the top of the page and within a diamond symbol preprinted on the forms.
[Table: time-based numeric exposure data follow at this point in the source; the values are not legible in this copy.]
# CRITERIA DOCUMENT: RECOMMENDATIONS FOR AN OCCUPATIONAL EXPOSURE STANDARD FOR CARBON MONOXIDE recommends that employee exposure to carbon monoxide (CO) at the workplace be controlled by requiring compliance with the standard set forth in the following eight sections. Control of employee exposure to CO at his place of employment at the limits stated will (1) prevent acute CO poisoning, (2) protect the employee from deleterious myocardial alterations associated with levels of carboxyhemoglobin (COHb) in excess of 5 percent and (3) provide the employee protection from adverse behavioral manifestations resulting from exposure to low levels of CO. The recommended standard is measurable by techniques that are valid, reproducible and currently available to industry and governmental agencies and is attainable with existing technology. The recommended standard is designed to protect the safety and health of workers who are performing a normal 8-hour per day, 40-hour per week work assignment. It is not designed for the population-at-large and any extrapolation beyond the general worker population is unwarranted. Because of the well-defined relationship between smoking and the concomitant exposure to CO in inhaled smoke the recommended standard may not provide the same degree of protection to those workers who smoke as it will to nonsmokers. Likewise, under conditions of reduced ambient oxygen concentration, such as would be encountered by workers at very high altitudes (e.g., 5,000 -8,000 feet above sea level), the permissible exposure stated in the recommended standard should be appropriately lowered to compensate for loss in the oxygen-carrying capacity of the blood. In 1-1 addition, workers with physical impairments that interfere with normal oxygen delivery to the tissues (e.g., emphysema, anemia, coronary heart disease) will not be provided the same degree of protection as the healthy worker population. The criteria and the standard recommended in this docu ment will be reviewed and revised as necessary. # 1-2 (a) Concentration (1) Occupational exposure to carbon monoxide shall be controlled so that no worker shall be exposed at a concentration greater than 35 ppm determined as a time-weighted average (TWA) exposure for an 8-hour workday, as measured with a portable, direct reading, hopcalite-type carbon monoxide meter calibrated against known concentrations of CO, or with gas detector tube units certified under Title 42 of the Code of Federal Regulations, Part 84. (2) No level of carbon monoxide to which workers are exposed shall exceed a ceiling concentration of 200 ppm. # (b) Calibration, Sampling and Analysis Procedures for calibration of equipment, sampling and analysis of CO samples shall be followed as provided in Appendix I. # Section 1 -Work Environment 1-3 Because employees with overt cardiovascular disease may not be protected by an occupational exposure to 35 ppm of CO, a medical program should be instituted consisting of preplacement and periodic examinations with special attention to the cardiovascular system and to medical con ditions which could be exacerbated by exposure to CO. Such a medical program could also provide the opportunity for conducting antismoking programs for high-risk employees. 
Areas where significant exposure to carbon monoxide is likely to occur shall be posted with a sign stating: # CARBON MONOXIDE (CO) DANGER High concentrations may be fatal Provide adequate ventilation High concentrations in air may be explosive Seek immediate medical attention if you experience any of the below symptoms: 1 -Severe Headache 2 -Dizziness 3 -Nausea and vomiting Gas masks are located: (Specific location to be filled in by employer) For purposes of this section the term "significant exposure to carbon monoxide" refers to eight-hour TWA exposures exceeding 25 ppm but excludes such exposure as may be self-administered through smoking. # 1-5 In the event of an emergency or when a variance has been allowed and the use of respiratory protective equipment authorized by the Secretary of Labor, the employer shall provide and insure that the employee wears, as appropriate, one of the following respiratory protective devices approved by NIOSH or the Bureau of Mines as provided in Part 11 of Title 30, Code of Federal Regulations: (a) Type N Gas Mask (Emergency Situation): For entry into or escape from an environment containing not over 20,000 ppm, which is not deficient in oxygen, for a total exposure period of not more than 30 minutes. Areas in which large amounts of CO are stored, used or emitted, or areas within the workplace through which large amounts of CO are trans ported, shall be provided with sufficient respiratory protective devices of the types specified in Section 4 and shall be readily accessible to persons who may be located in the area to assure a timely, orderly evacuation of the area by all persons in the event of accidental, massive release of CO. An automatic visual and audible alarm that is set to be activated when the CO concentration reaches 500 ppm should be employed in such areas. Employees working in such areas shall be informed of the hazards and symptoms of acute CO poisoning (see Section 6) and shall be trained to implement an emergency evacuation plan designed for such an occurrence. Periodic drills, held not less frequently than every six months, shall be conducted to assure that such plans are adequate and effective in case of an emergency situation. (c) Fire Hazards Adequate fire extinguishing agents shall be readily available in the areas outlined in paragraph (b) of this section since CO will burn when mixed with air and may be explosive when concentrations of between 12.5 percent to 74.2 percent are reached. Section 5 -Emergency Procedures 1-7 (a) Each employee who receives significant exposure to carbon monoxide [see Section 3(b)] shall be apprised of all hazards, relevant symptoms, appropriate emergency procedures, and proper conditions and precautions for the safe use of CO or safe exposure to CO and shall be instructed as to the location of such information, which shall be kept on file as prescribed in paragraph (b) of this section and shall be readily accessible to all employees at each establishment where CO is involved in industrial processes and operations. (a) Each container in which CO is stored shall be examined for leaks upon its arrival at the establishment or upon filling and shall be reexamined periodically at least every three months. (b) Prior to transferring CO from a storage container, an inspection shall be conducted to detect any gas leaks in the transport system (e.g., cylinder seal with gas regulator, regulator apparatus, regulator seal with transport conduits, conduit system, etc.). 
(1) Personal exposure ("breathing zone") samples* shall be collected in accordance with procedures specified in Appendix I in all workplaces where employees are significantly exposed to CO [see Section 3(b)], Samples will be collected and evaluated as both TWA and ceiling concentration values. (2) Frequency of monitoring should be at least annually. It is recognized that more frequent monitoring may be indicated in certain circum stances depending upon the nature of the process, the rate of production, the effectiveness of control measurements, the time of day and seasonal variation, however, more frequent monitoring requirements cannot be recommended until there are extensive studies of specific operations in industry. (3) Blood analysis for COHb, as specified in Appendix II, shall be performed on all persons employed in work areas when, in the judgment of the OSHA Industrial Hygienist, biologic standards are needed to evaluate borderline exposure to CO (see Chapter V). and experiments and any other information available to him which will assure insofar as practicable that no employee will suffer diminished health, functional capacity, or life expectancy as a result of his work experience. The National Institute for Occupational Safety and Health (NIOSH), after a review of data and consultations with others, formalized a system for the development of criteria upon which standards can be established to protect the health of employees from exposure to hazardous chemical and physical agents. It should be emphasized that any criteria documentation for a recommended standard should enable management and labor to develop better engineering controls and more healthful work practices and should not be accepted as a final goal in itself. 1 In evaluating occupational hazards and setting priorities, it was determined that the potential for exposure of employees at the workplace to CO was greater than that for any other chemical or physical agent. The significance of the CO dose-response relationship in man is attested furthermore by numerous research studies which have been documented in II-l the scientific literature. In a recent conference on the biological effects of CO sponsored by the New York Academy of Sciences, the findings and views of scientists from the United States, Australia, England, France, and 2 Denmark were presented. Background information for this report was obtained from many sources, including those directly cited and listed in the references. In addition 3 there is a bibliography of nearly 1,000 references and abstracts on CO The criteria contained in this document were developed to assure that the recommended standard based thereon would (1) protect employees against both acute and chronic CO exposure, (2) be measurable by techniques that are valid, reproducible and available to industry and governmental agencies and (3) be attainable with existing technology. # II-2 In 1969 the Committee on Effects of Atmospheric Contaminants on Human Health and Welfare, appointed by the Environmental Studies Board of the National Academy of Sciences, reported on the evaluation of current knowledge concerning the effects of CO on man.^ While this report deals primarily with the effects on man of exposure to CO from air pollution sources, it is pertinent to note the following statement in the Introduction: "...Today, the two main sources of carbon monoxide appear to be cigarette smoke and the internal combustion engine. 
And the subject of concern has changed from the acute effects of short-term exposure...to the lasting effects of long-term, low-level exposure, of a duration anywhere from a month to a lifetime, and in the range of CO concentrations that would produce 0.5-10% COHb." Thus, in the case of occupational exposures to CO, a worker's smoking habits must be taken into consideration when evaluating the environment. This factor has not been generally considered until the past few years. It should be emphasized that the above report raised many questions concerning the effects of low-level exposures to CO, and that Committee recommended many additional research studies in this area. Although important evidence exists which indicates that subtle aber rations may occur in the central nervous system (CNS) during exposure to levels of CO lower than those of the recommended standard, the significance of these changes and their translation to effects upon employee safety and health is not entirely clear. The diversity of opinions and the conflicting experimental evidence existing in this area does not permit the clear-cut assessment of the scientific merit of such data or its extrapolation to the normal working population at this time. If reliable data become available which clearly demonstrate significant impairment of employee II-3 behavior during exposure to very low levels of CO (producing less than 5 percent COHb), then the criteria for the recommended standard will be reassessed on the basis of the additional evidence. # II-4 III. BIOLOGIC EFFECTS OF EXPOSURE TO CARBON MONOXIDE Carbon monoxide (CO) is an odorless, colorless, tasteless gas which is an active reducing agent for chemicals at elevated temperatures, but is principally encountered as a waste product of incomplete combustion of carbonaceous material. A summary of the physical properties is presented in Table I. The best understood biologic effect of CO is its combination with hemoglobin (Hb) to form carboxyhemoglobin (COIIb) , thereby rendering the hemoglobin molecule less able to bind with oxygen. This action of CO results in more persons succumbing each year to acute CO poisoning than to any other single toxic agent, except alcohol.6 # Extent of Exposure With the single exception of carbon dioxide (CO2X total emissions of CO each year exceed those of all other atmospheric pollutants combined. In 1968 it was estimated that 102 million tons of CO were released into the atmosphere by the major sources of emission.^ Over one-half of this amount (58 percent) was produced by the gasoline-powered internal combustion engine. Specified industrial processes accounted for approximately 10 percent of the total. A summation of CO emission estimates for 1968 by specific industrial processes is presented in Table II. From Table II it can be observed that large amounts of CO are emitted from petroleum refineries, iron foundries, kraft pulp mills, sintering mills, lampblack plants and formaldehyde manufacturers. Major sources of CO production within the first four industries have been identified as the cupola in the iron foundry, the catalytic cracking units in the petroleum refineries, the lime kilns and the kraft recovery furnaces in the kraft paper mills, and the sintering of blast furnace feed in sintering plants. # III-l Aside from the above major industrial processes which produce large quantities of CO, there are numerous operations (arc welding, automobile repair, traffic control, tunnel construction, etc.) 
where the occupational exposure of a worker to CO can be considerable. In fact, any industrial process or operation where incomplete combustion of carbonaceous material occurs may easily be of consequence as concerns occupational exposure to CO.

# Historical Reports and Theoretical Considerations

Man's intimate association with carbon monoxide as a true environmental hazard began with his discovery of fire. Since that time his technological history has been one of advancement by means of incomplete combustion. The seriousness of exposure to high concentrations of CO has been realized since the earliest medical writings. The Greeks and Romans used exposure to CO as a means of executing criminals and of committing suicide. [7] Priestley discovered the chemical composition of CO in the late 18th century.

...within the range 220 to 290, and later investigations have generally confirmed this range, although at a slightly lower level, under conditions of complete hemoglobin saturation by oxygen and CO. Briefly, the affinity constant may be expressed as the number of moles of oxygen which must be present with each mole of CO in order to maintain an equal saturation of hemoglobin. While CO actually combines less rapidly with reduced hemoglobin than does oxygen, the tenacity of the binding between CO and hemoglobin is some 200 to 300 times that of oxygen.

This saturation of hemoglobin by either oxygen or CO occurs in four steps involving intermediates with four equilibrium constants:

Hb4 + X ⇌ XHb4,            K1 = k'1/k-1
XHb4 + X ⇌ (X)2Hb4,        K2 = k'2/k-2
(X)2Hb4 + X ⇌ (X)3Hb4,     K3 = k'3/k-3
(X)3Hb4 + X ⇌ (X)4Hb4,     K4 = k'4/k-4

In the above equations, X represents either oxygen or CO, with a different equilibrium constant for each equation. It has been determined that in both cases K4 is much higher (18 to 50 times higher) than K1, K2, or K3 because k'4 is much greater than the corresponding constants for the other steps. Under these circumstances the last ligand to bind to hemoglobin (the K4 equation) dissociates much more readily than it binds. [15] The differences between the four constants depend on intra-molecular forces occurring as a result of the interactions of each ligand with the others, or with other portions of the hemoglobin molecule. Hence, the effect is an allosteric one resulting in conformational changes in the hemoglobin molecule. These differences in reaction rates affect the dissociation of oxygen or CO from hemoglobin such that, as can be observed in Figure 1, the shapes of the respective dissociation curves are sigmoidal rather than parabolic, as is the case with myoglobin, which possesses only one heme group with a single dissociation constant. [16]

The difference in the partial pressure of oxygen (PO2) between freshly oxygenated arterial blood (PO2 = 100 mm Hg) and mixed venous blood (PO2 = 40 mm Hg) represents a release to the tissues of approximately 5 milliliters O2/100 milliliters blood. [16] A shift of the steep portion of the oxyhemoglobin dissociation curve to the left would tend to change the release of oxygen to the tissues appreciably. While the dissociation curve shifts to the right, allowing for a more efficient dissociation of oxygen to the peripheral tissues under conditions of reduced ambient oxygen tension (hypoxic hypoxia), just the opposite situation occurs during exposure to CO (anemic hypoxia). [4,16,19,21]
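The affinity constant described above is commonly written as the Haldane relationship; the following worked form is a minimal sketch, with the mid-range value M = 250 chosen here purely for illustration:

$$\frac{[\mathrm{COHb}]}{[\mathrm{O_2Hb}]} = M \cdot \frac{P_{\mathrm{CO}}}{P_{\mathrm{O_2}}}, \qquad M \approx 220\ \text{to}\ 290$$

At equal saturation, [COHb] = [O2Hb], so PO2/PCO = M; with M = 250 and an alveolar PO2 of about 100 mm Hg, equal saturation would be maintained at a PCO of approximately 0.4 mm Hg, roughly 500 ppm of CO in alveolar air.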
The leftward shift during CO exposure occurs because of the much greater affinity of CO for hemoglobin, and in spite of the fact that the amount of oxygen in physical solution in the blood remains near normal. [4] The oxygen content of the blood is not only lowered during exposure to CO, but the shift of the oxyhemoglobin dissociation curve to the left decreases the amount of remaining oxygen that is made available to the tissues.

As mentioned earlier in this report, it has been confirmed that an endogenous source of CO exists as a product of heme catabolism. [51,52] When an α-methylene bridge in the heme portion of hemoglobin is broken during the catabolic process, a molecule of CO is released. [51] It has been estimated that this production amounts to approximately 0.3 to 1.0 milliliter (ml)/hour, with an additional 0.1 ml/hour resulting from a similar catabolic process involving other heme-containing compounds (e.g., myoglobin and the cytochrome and catalase enzymes). [6,53] This endogenous production of CO, which gives rise to approximately 0.5 to 0.8 percent COHb, would not be expected to be of important physiologic consequence per se, since the hypothesized mechanism has evolved with man, but its removal would merit due consideration in a closed system such as a submarine or a space capsule. Also, in the determination of total CO exposure, this quantity must be included as "baseline" COHb.

# Cardiovascular Effects of CO

The critical importance of cardiovascular involvement during exposure to CO is becoming increasingly evident. [6,28,49-60] While the brain has a higher requirement for oxygen than the heart, in contrast to the cerebral circulation the coronary circulation must supply an increased amount of oxygen during periods of generalized tissue hypoxia, since under these circumstances the heart is forced to increase both its rate and its output in order to meet the normal oxygen demands of the body. [49] This increase in myocardial activity demands an increased oxygen supply to the myocardium, which must be met by the coronary circulation. Under hypoxic conditions, increased oxygen supply to the peripheral tissues can be accommodated by increased blood flow (via vascular dilatation) and/or increased oxygen extraction by the tissues. As mentioned earlier, the peculiar dissociation characteristics of O2Hb permit an oxygen reserve which is used at reduced PO2. The myocardium under these circumstances appears only to increase the flow of blood rather than to extract an additional amount of oxygen from the coronary circulation. While the peripheral tissues normally extract only 25 percent of the oxygen content of the perfusing arterial blood during resting conditions, the myocardium extracts 75 percent, thus leaving the mixed venous blood only 25 percent saturated. This mechanism has the overall effect of maintaining the myocardial oxygen tension at a higher level than would be present in other muscle tissue and thus insures a continual aerobic metabolism, even under hypoxic duress. In terms of oxygen tension, the mixed venous blood of the peripheral tissues is approximately 40 mm Hg, while the mixed venous blood of the coronary circulation is only 20 mm Hg. In the presence of COHb (and the shift to the left of the oxyhemoglobin dissociation curve), however, the arterio-venous difference can only be maintained by an increased flow in the coronary circulation.
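The extraction percentages above can be made concrete with a short worked example; the arterial oxygen content of approximately 20 ml O2/100 ml blood used here is a standard physiologic value assumed for illustration, not a figure taken from this document:

$$\text{peripheral tissues: } 0.25 \times 20 = 5\ \mathrm{ml\ O_2/100\ ml\ blood}; \qquad \text{myocardium: } 0.75 \times 20 = 15\ \mathrm{ml\ O_2/100\ ml\ blood}$$

The peripheral figure agrees with the arterio-venous release of approximately 5 ml O2/100 ml blood cited earlier, and the myocardial figure shows why coronary venous blood is left only about 25 percent saturated.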
In an individual with diminished coronary circulation because of coronary heart disease, however, this situation may result in a decrease in the mixed venous oxygen tension of the myocardium, precipitated by an inability to maintain the normal arterio-venous gradient. This hypoxic effect is further enhanced, as mentioned above, by an increase in cardiac rate and output as a general response to peripheral tissue hypoxemia. A person with diminished coronary circulation caused by coronary heart disease, consequently, may be constantly near the point of myocardial tissue hypoxia.

...This exposure to CO increased the COHb slightly less than 1 percent per minute. It appears that the canine may be able to tolerate a much higher level of COHb than the human before significant myocardial changes occur.

Jaffe has emphasized the relationship between CO poisoning and an elevated titer of serum lactic dehydrogenase as an indicator of myocardial damage. He speculated that "...even 'normal' amounts of carbon monoxide may operate as the last straw in precipitating coronary attacks." When rats were exposed to 500 ppm CO for four hours, Lassiter, Coleman, and Lawrence [65] observed a statistically significant aberration in the plasma lactic dehydrogenase isoenzyme distribution which was considered to be highly indicative of myocardial damage. The COHb level in the animals at termination of exposure was <40 percent.

The relationship between chronic cigarette smoking and increased risk of coronary heart disease (CHD) is undeniable [66], as is the fact that cigarette smoking causes increased exposure to CO. A CO concentration of 4 percent (40,000 ppm) in cigarette smoke, which will cause an alveolar concentration of 0.04 to 0.05 percent (400 to 500 ppm), will produce a COHb concentration of 3 to 10 percent. [6,67-69] Goldsmith [70] estimated that the cigarette smoker is exposed to 475 ppm of CO for approximately six minutes per cigarette.

In a review article on CO and human health, Goldsmith and Landaw stated that the COHb level in the one-pack-a-day cigarette smoker is 5.9 percent, which they say is a sufficient concentration to impose a serious health threat to persons with underlying vascular insufficiency. In another paper these two investigators used a regression analysis of expired air samples from 3,311 longshoremen and found a COHb of 6.8 percent in two-pack-a-day smokers and 1.2 percent COHb in nonsmokers. They believed that the high level in the nonsmokers, above the 0.5 to 0.8 percent normally present as a result of endogenous production, was accounted for by occupational exposure. In a study by Kjeldsen, COHb levels of both smokers and nonsmokers were compared in 934 CHD-free persons. The mean COHb was 0.4 percent for nonsmokers and 7.3 percent for cigarette smokers who inhaled. In addition, the mean level of COHb for all 416 smokers in the study, regardless of inhalation habits or number of cigarettes smoked, was 4.0 percent. A similar study is presently being conducted by Stewart and Peterson [72] to determine the range of COHb in the American public. The presently available data, from Milwaukee, demonstrate that the mean COHb level for nonsmokers is 1.33 ±0.85 percent and for smokers is 4.47 ±2.52 percent.
In an epidemiologic survey involving approximately 4,000 middle-aged males, who were kept under medical surveillance for 8 to 10 years, Doyle and co-workers [74] found that one-pack-a-day smokers had about a three-fold greater risk of myocardial infarction than did nonsmokers or former smokers. The finding that death rates for ex-smokers were no higher than for nonsmokers prompted Bartlett [16] to state that the effect of cigarette smoking on this particular pattern is completely reversible when an individual ceases to smoke. He concluded that smoking caused a myocardial hypoxemia by some acute, reversible process, which was probably unrelated to the formation of hard, irreversible, atherosclerotic lesions, and that CO would fit such an epidemiologic pattern very well. He further stated, however, that other components of cigarette smoke, including nicotine, may be responsible for this pattern and that the question, in his estimation, remained unsolved.

Astrup, Kjeldsen, and Wanstrup, reporting on an investigation in progress in 1970 in which 1,000 randomly chosen individuals were examined for evidence of arteriosclerotic disease, demonstrated a clear relationship between this disease and high COHb levels after smoking. They further stated that it is very likely "...that it is the inhalation of CO in tobacco smoke that is, in part, responsible for the much higher risk of smokers to develop coronary heart disease and other obliterating arterial diseases in comparison to that of nonsmokers."

...pain in the patients exposed to 50 ppm for four hours. The same level of significance was also found for the patients exposed to 100 ppm, although the differences in results between the two regimes were not significantly different. The other parameter measured, duration of pain, was statistically significant (p <0.05) only for the patients exposed to 100 ppm CO. Again, the differences in results between the lower and higher exposures were not significant. Knelson concluded, using this particular protocol of exposure, that patients with angina pectoris who were exposed to CO and then exercised experienced pain earlier, and the pain lasted longer, than when these same patients were exposed to ambient air.

An interesting set of data has been produced recently by Horvat and co-workers concerning the effect of oxygen breathing on the threshold of angina pain in patients with confirmed coronary heart disease. The investigators first determined the angina threshold while the patients breathed air by increasing the heart rate via right atrial pacing. Patients were then, unknowingly, switched to 100 percent oxygen and the heart paced to the previously determined angina threshold. Angina pain did not occur in nine of eleven patients. Associated with this improvement was increased lactate extraction, from an average of -17 ±15 to +18 ±10 percent (p <0.025). In four of six patients lactate production turned to lactate extraction; in six of seven patients S-T abnormalities in the EKG were improved, as was pulsus alternans in three of five patients. These data present significant evidence that oxygen breathing permits the heart to do more work before coronary insufficiency develops. They also indicate the validity of the approach used by Knelson in determining...

Lloyd and co-workers noted that while certain areas (e.g., janitors) had an excess of deaths due to heart disease, other areas had a deficit in this specific cause of mortality.
In attempting to determine why an apparent selection for health should have been more likely among steelworkers dying from heart disease, it was suggested that such persons may have been more likely to migrate between work areas. Thus, employees exhibiting symptoms of cardiac insufficiency (e.g., shortness of breath) may move to less physically demanding jobs on their own. Similarly, employees with CHD who return to work following an attack are often returned to less physically demanding jobs (e.g., janitors and mechanical maintenance). As a result, the investigators indicate that selection for health may have been more important than environmental factors in explaining the excess in mortality from heart disease among janitors and mechanical maintenance personnel. If this suggestion is correct, then the work areas from which those employees dying from heart disease migrated would not be expected to show similar increases in deaths from heart disease. For example, if these employees migrated from the aforementioned areas that produce significant exposure to CO, then those areas would be expected to have a lower SMR for heart disease. Although this rate for those areas was not published in the paper, because of statistical insignificance, the SMR for all deaths in those areas, as mentioned, did not demonstrate trends which were statistically significant. Based upon this study, then, it would not seem possible either to infer or to deny that occupational exposure to CO at the three work places in question resulted in an excess of deaths due to heart disease. More information is needed concerning the daily exposure of workers, engaged in various levels of activity, to specific sources of CO within their working environment, and the correlation of such information to both smoking habits and nonoccupational exposure.

Regardless of CO source, however, the rate of CO excretion from the blood is dependent upon the concentration in the ambient air, and for smokers with elevated COHb levels who are occupationally exposed to CO, excretion of CO will be proportionally delayed. Because the recommended standard for occupational exposure to CO is designed to protect employees with CHD, it is necessary to characterize the extent of this disease among the general worker population.

Friedberg [102] has defined CHD to represent clinical heart disease due to lesions of the coronary arteries. However, the term CHD is generally used to refer to the process of atherosclerosis of the coronary arteries leading to disturbances in the myocardial blood supply. It is in this latter context that the term applies in the criteria document. It is an established fact that each year more persons in the U.S. die from CHD than from any other disease. [103] Coronary atherosclerotic heart disease is the most common form of cardiac disease in adults in the U.S. [104] During the Korean War, autopsies performed on young soldiers, with an average age of twenty-two years, revealed that 77.3 percent had gross pathologic evidence of CHD. [104,105] A study of autopsies in a stabilized population of 30,000 revealed that CHD was the cause of death in 40 percent of the males.

Friedberg [102] states: "The diagnosis of coronary (atherosclerotic) heart disease refers to clinical manifestations and not to the mere presence of atherosclerotic lesions. Extensive pathologic lesions may be present but cannot be diagnosed unless they produce overt clinical manifestations or are revealed by coronary angiography."
In most cases the first clinical manifestation of CHD is expressed either as the angina pectoris syndrome or as frank myocardial infarction. According to the Framingham, Massachusetts, study conducted by the USPHS, CHD was first manifested as sudden death in one-sixth of the cases of CHD.

...produced in the myocardium of patients with coronary artery disease during acute exposure to concentrations of CO sufficient to produce a COHb content of less than 9 percent provides metabolic evidence of myocardial hypoxia occurring under this regime. His finding that in patients with coronary artery disease the coronary blood flow did not increase at this level of COHb saturation, although it did increase in controls, serves to further underscore the possible consequences for an individual with coronary heart disease (CHD) who is carrying less than 9 percent COHb. Furthermore, it must be emphasized that these coronary patients were at rest and were not subjected to exercise. The consequences of this additional stress were documented earlier in the study by Knelson. [78] His discovery that the time to onset of pain was diminished in cigarette smokers with angina pectoris during exercise immediately following exposure to 50 ppm of CO for four hours (average COHb of 3.0 percent) provides considerable evidence that workers with CHD, and particularly those with CHD who smoke, should not be exposed to concentrations of CO which will produce a level of COHb in excess of 5 percent.

Similarly, findings on the effects of CO on behavior are not in complete agreement. As mentioned previously, several mechanisms have been suggested to possibly account for the differing results. These suggestions can be summarized by noting that no one study has completely replicated the experimental design of any previous study that indicated a behavioral impairment due to CO.

Because the emission of CO is on so large a scale, it becomes difficult to single out, identify, and rank individual sources in order of importance as concerns occupational exposure. Indeed, as emphasized earlier in this report, the smoking habits of the worker have a tremendous bearing on his actual daily exposure to CO. The CO pollution from traffic to and from the place of employment likewise will affect his total daily exposure. Several studies concerning occupational exposure to CO are presented below.

The investigators commented on the results of the study by saying: ...

...to recognize the effect that the level of activity has upon the uptake of CO and to judiciously evaluate the exposure of his employees and limit their activity accordingly. The data in Appendix IV have been included specifically for this purpose. In addition, the employer must give special consideration to limiting the activity of employees exposed to CO at high altitudes in order to compensate for the dual loss in oxygen-carrying capacity of the blood.

# VI. COMPATIBILITY WITH AIR QUALITY STANDARDS

The Environmental Protection Agency (EPA), under provisions of the Clean Air Act (PL 91-604), promulgated national primary and secondary air quality standards on April 30, 1971. [123] The primary and secondary standards for...

"In the comments, serious questions were raised about the soundness of this evidence. Extensive consideration was given to this matter.
The conclusions reached were that the evidence regarding impaired time-interval discrimination had not been refuted and that a less restrictive national standard for CO would therefore not provide the margin of safety which may be needed to protect the health of persons especially sensitive to the effects of elevated carboxyhemoglobin levels. The only change made in the national standards for CO was a modification of the 1-hour value. The revised standard affords protection from the same low levels of blood carboxyhemoglobin as a result of short-term exposure. The national standards for carbon monoxide, as set forth below, are intended to protect against the occurrence of carboxyhemoglobin levels above 2 percent. It is the Administrator's judgment that attainment of the national standards for carbon monoxide will provide an adequate safety margin for protection of public health and will protect against known and anticipated adverse effects on public welfare."

The air quality standard is designed to protect the population-at-large and takes into consideration 24-hour per day exposure of the very young, the very old, and the seriously ill. The evidence presented in this (NIOSH) criteria documentation supports the concept of the necessity of providing protection for that portion of the general worker population with coronary heart disease (CHD), who are especially sensitive to elevated levels of COHb. Although the Administrator of EPA has stated above that the evidence concerning the impairment of time-interval discrimination at low levels of COHb, presumably as low as 2 percent, has not been refuted, neither has such evidence been confirmed.

# VII. APPENDIX I: SAMPLING AND ANALYSIS OF CARBON MONOXIDE (CO)

Worker exposure to CO shall be measured with a portable, direct-reading, hopcalite-type carbon monoxide meter calibrated against known concentrations of CO, or with gas detector tube units certified under Title 42 of the Code of Federal Regulations, Part 84. Samples shall be collected from the worker's breathing zone, and a sufficient number shall be collected at random intervals throughout the workday so that a statistically accurate determination of compliance may be made as outlined in the following section.

# Principles for Air Sampling for Carbon Monoxide (CO)

The characteristic manner in which CO proceeds from the pulmonary alveolar spaces, through the blood capillary membrane "barrier," into the plasma, through the red blood cell membrane, ultimately to combine with hemoglobin imposes certain requirements for air sampling if proper determination of compliance with the recommended standard is to be made. The rate of CO diffusion from the lung to hemoglobin depends upon the partition coefficient for CO between alveolar air and pulmonary blood. The magnitude of this coefficient is such as to delay transfer of CO to the circulating hemoglobin (Hb). This time delay makes it essential in any estimation of the carboxyhemoglobin level to know how long the exposure was experienced as well as the CO concentration during the exposure. Reference to Figure 2 shows that a continuous exposure of over eight hours' duration is required to attain maximum combination of CO with hemoglobin at the recommended standard of 35 ppm.

# Air Sampling Methods

The sampling and analytical procedures recommended below will provide the necessary data to determine compliance with the recommended standard.
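The time dependence just described can be illustrated with a minimal sketch. The single-compartment, first-order uptake model, the time constant, and the Haldane constant used below are illustrative assumptions, not the basis of Figure 2:

```python
import math

def cohb_percent(t_hours, co_ppm, m=250.0, tau_hours=3.5, baseline=0.7):
    """Rough %COHb during a constant-concentration CO exposure.

    Assumes a first-order approach to the Haldane equilibrium value;
    m (Haldane constant), tau (uptake time constant, hours), and the
    endogenous baseline are illustrative assumptions.
    """
    p_co_mmhg = co_ppm * 760.0 / 1e6             # ambient ppm -> partial pressure
    ratio = m * p_co_mmhg / 100.0                # [COHb]/[O2Hb] at equilibrium,
                                                 # taking alveolar PO2 ~ 100 mm Hg
    equilibrium = 100.0 * ratio / (1.0 + ratio)  # equilibrium %COHb
    return baseline + (equilibrium - baseline) * (1.0 - math.exp(-t_hours / tau_hours))

for t in (1, 2, 4, 8):
    print(f"{t} h at 35 ppm -> ~{cohb_percent(t, 35):.1f}% COHb")
```

With these assumptions the computed value is still rising at eight hours, which is consistent with the statement that more than eight hours of continuous exposure at 35 ppm is required to approach maximum COHb.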
The methodology and equipment utilized to collect and analyze samples to determine concentrations of CO in the air, or concentrations of COHb in the exposed worker, shall be subjected to a proficiency and/or calibration test program conducted by NIOSH, or by another agency of the federal government under agreement with NIOSH, for certification for such determinations.

# Determination of Compliance

The following procedure shall be used to determine compliance with the eight-hour, time-weighted average standard based on a small number of instantaneous (grab) samples collected at random intervals during the workday. Given: the results of n samples with a mean m and a range (difference between least and greatest) r. If, for from 3 to 10 samples, m is greater than the total of: ...

An extensive review of the methodology covering the determination of carbon monoxide has been prepared by Maehly. [124] In an updating review of...

# Analytical Methodology

Because carboxyhemoglobin constitutes the toxic product formed during carbon monoxide intoxication, and because the carboxyhemoglobin formed is quite stable, the analyst is presented with the opportunity of directly evaluating the toxic agent. In so doing, two facts are established: (1) proof that an exposure has occurred, and (2) quantitative proof that the exposure has produced a particular toxic concentration in the body. Diffusion, chromatographic, and gasometric methods all involve liberation of carbon monoxide from hemoglobin and either gasometric transfer operations or secondary titrations which are technically difficult for the microliter gas volumes involved. This limitation is even more critical when the effects of temperature and pressure on gas volumes are considered. Careful evaluation of the methods for direct measurement of carboxyhemoglobin has been reviewed in detail. Of these methods, three are considered to be feasible for consideration as the recommended method.

(a) The colorimetric method of Amenta uses small blood samples and measures the absorbance of an ammonia-hemolyzed sample at three wavelength settings on the spectrophotometer: 560, 575, and 498 nm. In calculating the results, the absorbance of the mixture at 498 nm (an isosbestic point) is the denominator in the equation for oxyhemoglobin, carboxyhemoglobin, and the unknown. This mathematical manipulation gives a value corrected for the total hemoglobin concentration (i.e., the R factor). The R values for the pure pigments (R(O2Hb) = 1.097, R(COHb) = 0.057) are then used for calculating percentage COHb by the ratio (R(O2Hb) - R(x)) / (R(O2Hb) - R(COHb)) x 100 = percentage COHb. The relatively small absorbance difference observed with this method reduces its precision and accuracy, especially in the low COHb range (ca. 10 percent).

(b) The method of Harper involves hemolysis of blood by distilled water, addition of a buffer (pH 7.2), and reduction of oxyhemoglobin by sodium hydrosulfite. The reduced hemoglobin is then converted to methemoglobin by the addition of potassium ferricyanide. The carboxyhemoglobin is converted to methemoglobin at a slower rate than reduced hemoglobin; therefore, the carboxyhemoglobin can be measured before appreciable conversion to methemoglobin takes place. The method requires careful control of timed manipulations to obtain reproducible results.

# Range and Sensitivity

For 1 ml of oxalated blood the range is 0 to 100 percent saturation COHb. Sensitivity is 0.5 percent saturation.
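The two-point calculation in method (a) reduces to a linear interpolation between the R factors of the pure pigments. The direction of the scale below follows the reconstruction of the garbled formula above and should be treated as an assumption; any real use would require calibration against known standards:

```python
def percent_cohb(r_sample, r_o2hb=1.097, r_cohb=0.057):
    """Percentage COHb by linear interpolation between pure-pigment R factors.

    r_sample is the R factor of the unknown; the endpoint defaults are the
    pure oxyhemoglobin and carboxyhemoglobin values quoted above.
    """
    return 100.0 * (r_o2hb - r_sample) / (r_o2hb - r_cohb)

print(percent_cohb(1.097))            # pure O2Hb  ->   0.0
print(percent_cohb(0.057))            # pure COHb  -> 100.0
print(round(percent_cohb(1.045), 1))  # an intermediate sample -> ~5.0% COHb
```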
# Interferences

Hemolyzed blood contains pigments arising from the breakdown of hemoglobin, and these will cause interference in the method. Bile pigments may also interfere.

# Precision and Accuracy

The differences in concentrations obtained by this procedure and in results obtained on the same samples analyzed by the Van Slyke method are not significant at the 5 percent level [t-value = 1.81 (14 degrees of freedom)].

...

(A) Chemical stability, incompatibility, hazardous decomposition products, and hazardous polymerization.

(A) Detailed procedures to be followed, with emphasis on precautions to be taken in cleaning up and safe disposal of materials leaked or spilled. This includes proper labeling and disposal of containers containing residues, contaminated absorbents, etc.

(ix) Section VIII. Special Protection Information. (A) Requirements for personal protective equipment, such as respirators, eye protection, and protective clothing, and for ventilation, such as local exhaust (at site of product use or application), general, or other special types.

(x) Section IX. Special Precautions. (A) Any other general precautionary information, such as personal protective equipment for exposure to the thermal decomposition products listed in Section VI, and to particulates formed by abrading a dry coating, such as by a power sanding disc.

(xi) The signature of the responsible person filling out the data sheet, his address, and the date on which it is filled out.

(xii) The NFPA 704M numerical hazard ratings as defined in Section (c)(5) following. The entry shall be made immediately to the right of the heading, "Material Safety Data Sheet," at the top of the page and within a diamond symbol preprinted on the forms.
When first evaluated at WRAMC, the 22 patients had a median of three (range: one to nine) skin lesions, which ranged from 3 mm to 40 mm in diameter. Higher proportions of the lesions were located on the upper (39%) or lower (32%) extremities than on the trunk/back (16%) or face/neck (13%). Typically, the lesions were painless, had enlarged slowly, and ultimately had central ulceration, often covered with eschar and surrounded by an erythematous, indurated border (Figure 2). Regional lymph nodes (e.g., epitrochlear, axillary, and inguinal), if palpable, usually were <1 cm in diameter. None of the patients had systemic symptoms. In 17 (77%) of the 22 cases, parasites were noted on light-microscopic examination of tissue. Of the 19 patients who had tissue cultured for parasites, 14 (74%) had positive cultures, of which 13 (93%) had sufficient organisms for species identification by isoenzyme electrophoresis. All nine of the 13 patients whose cultures had been tested as of October 20, 2003, were infected with Leishmania major. Additional evidence that 21 (95%) of the 22 patients had CL was obtained by testing tissue with an investigational, fluorogenic, genus-specific polymerase chain reaction (PCR) assay developed and conducted by staff of WRAMC and Walter Reed Army Institute of Research (Silver Spring, Maryland) (6).

Since 1978, military personnel with potential cases of leishmaniasis have been referred to WRAMC for evaluation and therapy with the pentavalent antimonial compound sodium stibogluconate (Pentostam®, The Wellcome Foundation, United Kingdom). Although treatment of cases of CL with pentavalent antimonial compounds has been considered the standard of care for over half a century (1), these compounds...

Editorial Note: Leishmaniasis is a vector-borne parasitic disease endemic in parts of the tropics, subtropics, and southern Europe. The World Health Organization estimates that 1.5 million cases of CL and 500,000 cases of visceral leishmaniasis (VL) occur each year (1). Both cutaneous and visceral infection can remain asymptomatic or be associated with mild, nonspecific, and nonprogressive symptoms (1). Clinical manifestations, if they develop, typically are first noted weeks to months after exposure. The skin lesions of CL, which can be chronic and disfiguring, typically evolve from papules to nodules to ulcerative lesions but can persist as nodules or plaques (1). Host (e.g., immune status) and parasite (e.g., species and strain) characteristics affect the natural history and the ease and importance of diagnosing and treating cases of CL. Although both L. major and L. tropica are common etiologic agents of CL in Afghanistan, Iraq, and Kuwait (1,7,8), which species has caused a particular case of CL depends on such factors as the geographic area and ecologic setting of exposure and the species of the sand-fly vector. VL is more prevalent in Iraq (8) than in Afghanistan or Kuwait. Manifestations of cases of advanced VL include fever, cachexia, hepatosplenomegaly, pancytopenia, and hypergammaglobulinemia; such cases can be fatal if not treated appropriately and quickly (1). No FDA-approved vaccines or prophylactic medications to prevent leishmaniasis are available (1). Control measures against vectors or reservoir hosts of infection might be effective in particular settings (1,8,9).
Personal protective measures to decrease risk for infection include avoiding, if possible, areas where leishmaniasis is endemic, particularly from dusk through dawn; using permethrin-treated bed nets and clothing; minimizing the amount of exposed skin; and applying insect repellents containing 30%-35% DEET (lower percentages for children) to exposed skin. Transmission of leishmanial parasites through blood transfusion has not been reported in the United States. However, as a precautionary measure, the Armed Services Blood Program Office of the Department of Defense (DoD) (Falls Church, Virginia) and the American Association of Blood Banks (AABB) (Bethesda, Maryland) are implementing policies to defer prospective blood donors who have been in Iraq from donating blood for 12 months after the last date they left Iraq. Additional information about these deferral policies is available from DoD at /downloads/prevmed/leishmanAug03.pdf and from AABB at .

In Operations Desert Shield and Desert Storm during 1990-1991, among approximately 697,000 deployed military personnel, WRAMC identified 12 cases of so-called viscerotropic leishmaniasis caused by L. tropica (a syndrome associated with visceral infection but not necessarily the classic clinical manifestations of VL) and 20 cases of CL (3,10; WRAMC, unpublished data, 2003). During August 2002-September 2003, WRAMC identified 22 cases of CL among personnel participating in Operations Iraqi and Enduring Freedom. The apparent decline in numbers of cases of CL with self-reported onset of lesions during July-August 2003 (Figure 1) could reflect delays in persons seeking medical evaluation for skin lesions that might not cause concern initially. U.S. personnel in Iraq have reported being bitten by sand flies (some persons have received >100 bites in a single night), and up to 2% of female phlebotomine sand flies collected in Iraq were infected with leishmanial parasites. As of October 21, WRAMC had identified nine more cases of CL in addition to the 22 cases described in this report. WRAMC is evaluating additional potential cases of CL in deployed personnel, and the number of confirmed cases probably will continue to increase. U.S. health-care providers should consider the possibility of CL in persons with chronic skin lesions who were deployed to Southwest/Central Asia or who were in other areas where leishmaniasis is endemic, and that of VL in such persons with persistent, febrile illnesses, especially if associated with other manifestations suggestive of VL (e.g., splenomegaly and pancytopenia) (1,4,8,10). Information about diagnosing and treating CL and VL has been published (1,5). Both WRAMC and CDC provide diagnostic services and the antileishmanial compound sodium stibogluconate. For treatment of health-care beneficiaries of the military, health-care providers should contact WRAMC, telephone 202-782-6740. For treatment of civilians, providers should contact CDC's Drug Service, telephone 404-639-3670.

# Infant Health Among Puerto Ricans - Puerto Rico and U.S. Mainland, 1989-2000

Although the overall U.S. infant mortality rate (IMR) declined dramatically during the 1900s, striking racial/ethnic disparities in infant mortality remain (1,2). Infant health disparities associated with maternal place of birth also exist within some racial/ethnic populations (3,4). Eliminating disparities in infant health is crucial to achieving the 2010 national health objective of reducing the infant death rate to 4.5 per 1,000 live births (objective 16-1c) (5).
Hispanics comprise the largest racial/ethnic minority population in the United States. Among U.S. Hispanics, considerable heterogeneity exists in infant health, with the poorest outcomes reported among Puerto Rican infants (6). This report compares trends during the previous decade in IMRs and major determinants of these rates, such as low birthweight (LBW), preterm delivery (PTD), and selected maternal characteristics, among infants born to Puerto Rican women on the U.S. mainland (50 states and the District of Columbia) with corresponding trends among infants born in Puerto Rico. The findings indicate that despite having a lower prevalence of selected maternal risk factors, Puerto Rico-born infants are at greater risk for LBW, PTD, and infant death than mainland-born Puerto Rican infants. This report also highlights a persistent disparity in IMRs and an emerging disparity in LBW and PTD rates between Puerto Rico-born infants and mainland-born Puerto Rican infants. Future research should focus on identifying factors responsible for these disparities to improve infant health in Puerto Rico.

Linked birth/infant death files for the 50 states, the District of Columbia, and Puerto Rico for 1989-1991 and 1998-2000 were used to assess IMR trends. Natality files for 1990-2000 were used to examine trends in rates of LBW (<2,500 g), PTD (<37 weeks' gestation), and selected maternal characteristics among live-born infants. Analyses were limited to infants born to Puerto Rican women (i.e., those born in Puerto Rico, those born on the mainland to Puerto Rico-born mothers, or those born on the mainland to mothers who reported being of Puerto Rican ethnicity). Infants born in Puerto Rico to women not born either in Puerto Rico or on the mainland were excluded. Four subpopulations of Puerto Rican infants were examined initially: infants born in Puerto Rico to Puerto Rico-born mothers, infants born in Puerto Rico to mainland-born mothers, infants born on the mainland to Puerto Rico-born mothers, and infants born on the mainland to mainland-born mothers of Puerto Rican ethnicity. However, because maternal place of birth was not associated substantially with infant health outcomes, data are shown for Puerto Rico-born and mainland-born infants without regard to maternal place of birth. Chi-square tests were used to compare differences in the prevalence of infant and maternal characteristics and differences in IMRs among the groups.

# Low Birthweight and Preterm Delivery

In 1990, Puerto Rico-born infants were 1.03 times more likely to be of LBW than mainland-born infants, and in 2000, this disparity increased to 1.2 (Figure). From 1990 to 2000, the LBW rate for Puerto Rico-born infants increased 18.0%, from 9.2% to 10.9%; for mainland-born infants, the LBW rate increased 3.7%, from 8.9% to 9.3%. Similar differences in LBW rate increases were observed when analyses were restricted to full-term and singleton births. The increase in the LBW rate among Puerto Rico-born infants was associated predominantly with an increase in the percentage of infants with an intermediate LBW (ILBW; 1,500-2,499 g); however, a small increase also was observed in the percentage with a very low birthweight (VLBW; <1,500 g) (Table 1). In 2000, Puerto Rico-born infants were less likely than mainland-born infants to be of VLBW (ratio = 0.7) but more likely to be of ILBW (ratio = 1.3) (Table 1).
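The comparisons above rest on rate ratios and chi-square tests. A minimal sketch follows; the denominators (and therefore the counts) are hypothetical, chosen only so that the resulting rates match the 2000 LBW rates quoted above:

```python
from scipy.stats import chi2_contingency

# Hypothetical 2x2 table: rows = birth cohort, columns = (LBW, non-LBW).
table = [
    [5450, 44550],   # Puerto Rico-born infants
    [4650, 45350],   # mainland-born infants
]

rate_pr = table[0][0] / sum(table[0])   # 10.9%
rate_ml = table[1][0] / sum(table[1])   # 9.3%
print(f"LBW rates: {rate_pr:.1%} vs {rate_ml:.1%}; ratio = {rate_pr / rate_ml:.2f}")

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi-square = {chi2:.1f}, df = {dof}, p = {p:.2g}")
```

With these illustrative counts the printed ratio of about 1.17 rounds to the 1.2 disparity reported for 2000.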
In 1990, Puerto Rico-born infants were less likely than mainland-born infants to be born preterm (ratio = 0.9) (Figure). From 1990 to 2000, the PTD rate among Puerto Rico-born infants increased 29.3% (from 11.6% to 15.0%), and that among mainland-born infants increased 0.9% (from 13.3% to 13.4%). As a result, in 2000, Puerto Rico-born infants were 1.1 times more likely than mainland-born infants to be born preterm. Similar differences in PTD rates were observed when analyses were limited to singleton births. The increase in the PTD rate among Puerto Rico-born infants was attributable primarily to an increase in the rate of moderately preterm births (32-36 weeks' gestation), although the rate of very preterm births (<32 weeks' gestation) also increased slightly (Table 1). In 2000, despite higher rates of LBW and PTD among Puerto Rico-born infants, their mothers were less likely than mothers of mainland-born infants to report selected maternal risk factors, including receiving late/no prenatal care, having <12 years of education, not being married, having plural births, and using tobacco (Table 1). The prevalence of first trimester prenatal care and the percentage of mothers aged <19 years at their infant's birth were similar for the two groups (Table 1). The prevalence of these maternal characteristics did not differ by maternal place of birth.

# Infant Mortality

From 1989-1991 to 1998-2000, the combined IMR for Puerto Rico-born and mainland-born infants declined approximately 24%. The 1989-1991 IMR for Puerto...
This report summarizes the results of this investigation, which indicate possible nonmosquito transmission among birds and subsequent infection of humans at farm A. Because the mode of transmission in this outbreak is unknown, turkey handlers should take appropriate precautions, including use of DEETcontaining mosquito repellents, protective clothing and gloves, respiratory protection, and proper hand hygiene. Suspected occupationally acquired WNV infections should be reported immediately to local and state health departments. During November 2002, WDPH and the Wisconsin State Laboratory of Hygiene (WSLH) confirmed that two ill residents of county A had been infected with WNV. Before these reports, only one human WNV infection had been reported in this county. Both persons worked at farm A and had febrile illness with rash during late September-early October. These human illnesses occurred after a suspected fowl pox outbreak among farm A turkeys in September. Workers were concerned the pox outbreak might be associated with their illnesses. Farm A is one of six turkey breeder farms in county A owned by a company that also operates nonbreeder farms and a turkey meat processing plant in county A. The five other turkey breeder farms are located within 10 miles of farm A, and multiple private residences are within a quarter mile. In February 2003, county and state public health staff, in collaboration with the company, identified workers at the six turkey breeder farms, the nonbreeder farms, and the plant, and requested their consent to participate in a serosurvey. Serum samples were collected from participating workers (N = 93) to identify persons infected recently. A questionnaire was administered to identify persons who had a febrile illness during August-October 2002. Serum samples also were collected from residents (N = 14) who lived within a quarter mile of farm A. All serum samples were tested for WNV-specific IgM antibody at WSLH (1). IgM-positive specimens were confirmed by plaque-reduction neutralization tests at CDC (2). Of 107 total participants, 10 (9%) were seropositive. Of approximately 90 workers at the six breeder farms, 57 (63%) participated; of these, 10 (18%) were infected recently with WNV (Table ). None of the meat processing workers or other area residents was infected. Of 11 persons who worked exclusively at farm A, six (55%) were WNV IgM-positive, compared with two (25%) of eight who worked at both farm A and other breeder farms and two (5%) of 38 who worked only at other breeder farms. Of the 10 IgM-positive workers, six (60%) reported febrile headaches during August-October (all occurring during the last week of September), compared with seven (7%) of 97 IgM-negative persons sampled (p = 0.0002 by Fisher exact test). All six IgMpositive persons who reported febrile headache had worked at farm A. All six noted a skin rash, and one had meningoencephalitis and was hospitalized; no deaths occurred. Reported mosquito exposures and bites were similar for IgM-positive (nine and eight of 10, respectively) and IgMnegative workers (67 and 54 of 79, respectively). Only one (2%) of 57 breeder farm workers reported using insect repellent while working. Farm A includes two breeder bird barns and a juvenile flock barn. The breeder barns separate uncaged females from male turkeys with a solid plywood wall. The sides of the barns housing the female turkeys are covered with 1 in. x 1 in. 
mesh wire fencing and plastic curtains that can be adjusted to lower the temperature during warm months. Serum from farm A turkeys and turkeys from the nearest breeder farm were collected in late January 2003. The farm A flock sampled was the group of birds housed in the juvenile flock barn from mid-June to early December 2002, at which time this flock was moved to a breeder barn on farm A to replace a flock slaughtered in November. The flock sampled on the nearby farm was a breeder flock also in place in...

Editorial Note: The investigation described in this report found that workers at farm A had a higher incidence of febrile illness and prevalence of WNV antibodies than workers at other breeder and nonbreeder farms, workers at a turkey meat processing facility, or persons who lived on or near the affected farm and who did not work in the turkey barns. The mode of transmission to these workers is unknown. Although the majority of human WNV infections are mosquito-borne, transmission by less typical routes might have occurred, including percutaneous (e.g., exposure of broken skin or mucosa to infected turkey feces or serous exudates from dually infected pox lesions), fecal-oral, or respiratory (e.g., exposure to aerosolized infected turkey feces). The WNV seroprevalence (96%) among female turkeys on farm A was high. However, experimental evidence suggests that turkeys develop insufficient levels of WNV viremia to contribute to a bird-mosquito-bird amplification cycle (3). Although WNV was detected in the feces of these turkeys, no oropharyngeal shedding or transmission to cage mates was observed (3). Nonvector-borne WNV transmission has been demonstrated experimentally among rodents and among certain bird species other than turkeys (4,5). Once WNV was introduced to female turkeys at farm A (presumably by mosquitoes), widespread transmission within that flock might have taken place by fecal-oral, respiratory, or another atypical (e.g., percutaneous exposure associated with pecking behavior or vaccination) route. In addition, other unique conditions at farm A, including possible co-infection with an avian pox virus, might have resulted in higher WNV viremias or infectious materials with higher WNV titers than laboratory studies have suggested.

Despite uncertainty over the mode(s) of transmission, epidemiologic evidence suggests that this outbreak was related to occupational exposure. Occupationally acquired WNV infections have been reported previously among laboratory or field workers who experienced a known percutaneous injury or aerosol exposure while working with high concentrations of WNV in cell culture or infected animal tissues (6-9). In this investigation, no such exposure was documented. Because the mode of transmission in this outbreak is unknown, turkey handlers should 1) take personal protective measures, including wearing protective clothing and using mosquito repellents (e.g., those containing DEET on skin and clothing and those containing permethrin on clothing), as recommended for outdoor workers; 2) wear gloves; and 3) wash hands frequently. In addition, respiratory protection has been recommended for reducing other exposures to workers in turkey barns (10). Respiratory protection should be selected and used in accordance with the Occupational Safety and Health Administration (OSHA) respiratory protection standard (Title 29 CFR 1910.134).
Workers should receive training that reinforces awareness of potential occupational hazards and risks and stresses the importance of timely reporting of all injuries and illnesses of suspected occupational origin. Health-care workers should inquire about a patient's outdoor exposure and occupation when a human WNV infection is suspected or identified and consider WNV as a possible etiology among turkey farm workers with febrile headache or rash, meningitis, encephalitis, or other severe neurologic illness, especially when WNV illnesses exist among co-workers or birds. Suspected occupationally acquired WNV infections should be reported immediately to local and state health departments. The investigation of turkey breeder farm workers in county A is ongoing. In addition, further studies are needed to determine the factors involved in this outbreak, to better define the occupational risk for WNV infections, and to assess appropriate personal protective measures. On the basis of recommendations from public health staff, the company has made mosquito repellent containing 30% DEET available at farm A and other turkey breeder farms. Recommendations outlined previously that are in place at the company farms include protective clothing, frequent hand washing, and an OSHA-required respiratory protection program. Gloves and safety glasses also are available to workers.

# Public Health and Aging

# Nonfatal Injuries Among Older Adults Treated in Hospital Emergency Departments - United States, 2001

Because injuries generally are considered a problem of the young, injuries among older adults (i.e., persons aged ≥65 years) have received little attention. However, injuries are the eighth leading cause of death among older adults in the United States (1). In 2001, approximately 2.7 million older adults were treated for nonfatal injuries in hospital emergency departments (EDs); the majority of these injuries were the result of falls (1). To characterize nonfatal injuries among older adults, CDC analyzed data from the National Electronic Injury Surveillance System-All Injury Program (NEISS-AIP). This report summarizes the results of that analysis, which indicate differences in type and mechanism of injury by sex, suggesting that prevention programs should be designed and tailored differently for men and women.

NEISS-AIP is operated by the U.S. Consumer Product Safety Commission and collects data about initial visits for all types and causes of injuries treated in U.S. EDs, drawing from a nationally representative sample of 66 hospitals selected as a stratified probability sample of hospitals in the United States. Data from these cases are weighted by the inverse of the probability of selection to produce national estimates (2). For this report, annualized estimates were calculated on the basis of weighted data for 36,752 nonfatal injuries among older adults treated in EDs during January-December 2001. U.S. Census Bureau population estimates for 2001 were used to calculate injury rates (3). A direct variance estimation procedure was used to calculate 95% confidence intervals and to account for the complex sample design (2). All nonfatal injuries were classified according to the mechanism of injury (e.g., fall, struck by/against, or motor vehicle crash), diagnosis, primary body part injured, disposition, location of injury, and intent. The diagnosis and intent of the injury were classified according to the most severe injury (4).
Injuries of unknown intent were grouped with those classified as unintentional. During 2001, an estimated 935,556 men and 1,731,640 women aged ≥65 years were treated in EDs for nonfatal injuries. The overall injury rate was higher among women (8,466 per 100,000 persons) than among men (6,404). Injury rates increased with age, to 15,272 for women aged ≥85 years and 11,547 for men aged ≥85 years. Nearly all injuries (99%) were classified as unintentional/unknown intent (Table). Overall, falls resulted in the highest rates of injury (4,684 per 100,000 persons) and were the most common mechanism of injury, accounting for 62% of all nonfatal injury ED visits in this population. The injury rate from falls was higher among women (5,659) than men (3,319). However, the injury rates for women were lower for certain other types of injuries, such as being struck by/against (588 versus 617), occupying a motor vehicle (525 versus 540), and being cut or pierced (243 versus 488) (Table).

The greatest number of nonfatal injuries among older adults were diagnosed as fractures (26%), followed by contusions/abrasions (23%), lacerations (17%), strains/sprains (13%), and internal injuries (5%). Diagnoses varied by sex. Fractures of all parts of the body were more common among women than men (30% versus 19%), and lacerations were more common among men than women (22% versus 14%). The parts of the body affected most were the head/neck (25%) and arms/hands (22%). The majority (82%) of older adults were treated and released; 16% were hospitalized. The ratio of patients treated/released to those hospitalized was lower among women (4.7:1) than men (5.9:1), suggesting women were more often hospitalized after a nonfatal injury. The most common (47%) location for nonfatal injuries was the home (Table).

Editorial Note: Falls remain the leading cause of both nonfatal and fatal injury among older adults aged ≥65 years in the United States (1). The findings in this report, which indicate that falls were the most common reason for injury-related ED visits among persons aged ≥65 years, are consistent with previous studies indicating that approximately 40% of older adults living in community settings (e.g., in private residences or minimally assisted environments) fall each year (5). In this study, 82% of persons aged ≥65 years were treated and released following injury, compared with 95% of persons aged <65 years. Older adults were more than three times more likely (1,217 per 100,000 persons) to be hospitalized than persons aged <65 years (353) (1).

The findings in this report are subject to at least five limitations. First, NEISS-AIP provides national estimates and does not allow for estimates by region, state, or local jurisdiction. Second, injury outcomes are specific to ED visits and do not include subsequent outcomes. Third, NEISS-AIP data reflect only those injuries that were severe enough to require treatment in an ED. Fourth, in cases with multiple injuries, only data regarding the most severe injury are recorded. Finally, data for intent are classified on the basis of information contained in the medical record. Injuries for which intent cannot be determined conclusively from the ED record are grouped with unintentional injuries.

The findings in this report can form the basis for targeting prevention efforts to different populations of older adults. For example, exercise can reduce the risk for falls among older adults by 15% (6).
Because women are more likely to sustain fallrelated injuries, exercise can be an especially important preventive measure for this population. Data from NEISS-AIP can continue to be a source for monitoring trends, evaluating interventions, and characterizing nonfatal injuries among persons aged >65 years. (n = four), New Mexico (n = four), North Dakota (n = four), Alabama (n = three), Ohio (n = three), Indiana (n = two), Missouri (n = two), Montana (n = two), New Jersey (n = two), Delaware (n = one), Illinois (n = one), Kansas (n = one), Kentucky (n = one), Louisiana (n = one), Michigan (n = one), Mississippi (n = one), Tennessee (n = one), and Virginia (n = one). A total of 682 presumptive West Nile viremic blood donors have been reported to ArboNET. Of these, 596 (87%) were reported from the following nine western and midwestern states: Colorado, Kansas, Nebraska, New Mexico, North Dakota, Oklahoma, South Dakota, Texas, and Wyoming. Of the 529 donors for whom data were reported completely, six subsequently had meningoencephalitis, and 76 subsequently had West Nile fever. In addition, 10,453 dead birds with WNV infection have been reported from 42 states, the District of Columbia, and New York City; 3,270 WNV infections in horses, 16 WNV infections in dogs, 14 infections in squirrels, and 24 infections in unidentified # Human WNV disease and animal WNV activity Animal WNV activity only one. In addition, seropositivity was reported from one other unidentified animal species. A total of 6,667 WNV-positive mosquito pools have been reported from 38 states, the District of Columbia, and New York City. Additional information about WNV activity is available from CDC at / index.htm and . # Notice to Readers # Guidelines for Maintaining and Managing the Vaccine Cold Chain In February 2002, the Advisory Committee on Immunization Practices (ACIP) and American Academy of Family Physicians (AAFP) released their revised General Recommendations on Immunization (1), which included recommendations on the storage and handling of immunobiologics. Because of increased concern over the potential for errors with the vaccine cold chain (i.e., maintaining proper vaccine temperatures during storage and handling to preserve potency), this notice advises vaccine providers of the importance of proper cold chain management practices. This report describes proper storage units and storage temperatures, outlines appropriate temperature-monitoring practices, and recommends steps for evaluating a temperature-monitoring program. The success of efforts against vaccine-preventable diseases is attributable in part to proper storage and handling of vaccines. Exposure of vaccines to temperatures outside the recommended ranges can affect potency adversely, thereby reducing protection from vaccine-preventable diseases (1). Good practices to maintain proper vaccine storage and handling can ensure that the full benefit of immunization is realized. # Recommended Storage Temperatures The majority of commonly recommended vaccines require storage temperatures of 35°F-46°F (2°C-8°C) and must not be exposed to freezing temperatures. Introduction of varicella vaccine in 1995 and of live attenuated influenza vaccine (LAIV) more recently increased the complexity of vaccine storage. Both varicella vaccine and LAIV must be stored in a continuously frozen state <5°F (-15°C) with no freeze-thaw cycles (Table 1). In recent years, instances of improper vaccine storage have been reported. 
An estimated 17%-37% of providers expose vaccines to improper storage temperatures, and refrigerator temperatures are more commonly kept too cold than too warm (2,3). Freezing temperatures can irreversibly reduce the potency of vaccines required to be stored at 35°F-46°F (2°C-8°C). Certain freeze-sensitive vaccines contain an aluminum adjuvant that precipitates when exposed to freezing temperatures. This results in loss of the adjuvant effect and vaccine potency (4). Physical changes are not always apparent after exposure to freezing temperatures and visible signs of freezing are not necessary to result in a decrease in vaccine potency. Although the potency of the majority of vaccines can be affected adversely by storage temperatures that are too warm, these effects are usually more gradual, predictable, and smaller in magnitude than losses from temperatures that are too cold. In contrast, varicella vaccine and LAIV are required to be stored in continuously frozen states and lose potency when stored above the recommended temperature range. # Vaccine Storage Requirements Vaccine storage units must be selected carefully and used properly. A combination refrigerator/freezer unit sold for home use is acceptable for vaccine storage if the refrigerator and freezer compartments each have a separate door. However, vaccines should not be stored near the cold air outlet from the freezer to the refrigerator. Many combination units cool the refrigerator compartment by using air from the freezer compartment. In these units, the freezer thermostat controls freezer temperature while the refrigerator thermostat controls the volume of freezer temperature air entering the refrigerator. This can result in different temperature zones within the refrigerator. Refrigerators without freezers and stand-alone freezers usually perform better at maintaining the precise temperatures required for vaccine storage, and such single-purpose units sold for home use are less expensive alternatives to medical specialty equipment. Any refrigerator or freezer used for vaccine storage must maintain the required temperature range year-round, be large enough to hold the year's largest inventory, and be dedicated to storage of biologics (i.e., food or beverages should not be stored in vaccine storage units). In addition, vaccines should be stored centrally in the refrigerator or freezer, not in the door or on the bottom of the storage unit, and sufficiently away from walls to allow air to circulate. # Temperature Monitoring Proper temperature monitoring is key to proper cold chain management. Thermometers should be placed in a central location in the storage unit, adjacent to the vaccine. Temperatures should be read and documented twice each day, once when the office or clinic opens and once at the end of the day. Temperature logs should be kept on file for >3 years, unless state statutes or rules require a longer period. Immediate action must be taken to correct storage temperatures that are outside the recommended ranges. Mishandled vaccines should not be administered. One person should be assigned primary responsibility for maintaining temperature logs, along with one backup person. Temperature logs should be reviewed by the backup person at least weekly. All staff members working with vaccines should be familiar with proper temperature monitoring. Different types of thermometers can be used, including standard fluid-filled, min-max, and continuous chart recorder thermometers (Table 2). 
Standard fluid-filled thermometers are the simplest and least expensive products, but some models might perform poorly. Product temperature thermometers (i.e., those encased in biosafe liquids) might reflect vaccine temperature more accurately. Min-max thermometers monitor the temperature range. Continuous chart recorder thermometers monitor temperature range and duration and can be recalibrated at specified intervals. All thermometers used for monitoring vaccine storage temperatures should be calibrated and certified by an appropriate agency (e.g., National Institute of Standards and Technology). In addition, temperature indicators (e.g., Freeze Watch ™ or ColdMark ™ ) can be considered as a backup monitoring system (5); however, such indicators should not be used as a substitute for twice daily temperature readings and documentation. All medical care providers who administer vaccines should evaluate their cold chain maintenance and management to ensure that 1) designated personnel and backup personnel have written duties and are trained in vaccine storage and handling; 2) accurate thermometers are placed properly in all vaccine storage units and any limitations of the storage system are fully known; 3) vaccines are placed properly within the refrigerator or freezer in which proper temperatures are maintained; 4) temperature logs are reviewed for completeness and any deviations from recommended temperature ranges; 5) any outof-range temperatures prompt immediate action to fix the problem, with results of these actions documented; 6) any vaccines exposed to out-of-range temperatures are marked "do not use" and isolated physically; 7) when a problem is discovered, the exposed vaccine is maintained at proper temperatures while state or local health departments, or the vaccine manufacturers, are contacted for guidance; and 8) written emergency retrieval and storage procedures are in place in case of equipment failures or power outages. Around-the-clock monitoring systems might be considered to alert staff to afterhours emergencies, particularly if large vaccine inventories are maintained. Additional information on vaccine storage and handling is available from the Immunization Action Coalition at http:// www.immunize.org/izpractices/index.htm. Links to state and local health departments are available at / other.htm. Especially detailed guidelines from the Commonwealth of Australia on vaccine storage and handling, vaccine storage units, temperature monitoring, and stability of vaccines at different temperatures (6) # Notice to Readers # International Conference on Emerging Infectious Diseases CDC's National Center for Infectious Diseases, the Council of State and Territorial Epidemiologists, the American Society for Microbiology, and the World Health Organization will cosponsor the International Conference on Emerging Infectious Diseases February 29-March 3, 2004, at the Marriott Marquis Hotel in Atlanta, Georgia. The conference will explore the most current research, surveillance, and prevention and control programs addressing all aspects of emerging infectious diseases. Attendance is limited to 2,500 participants. The conference will include general and plenary sessions, symposia, panels of speakers, presentations on emerging infections activities, oral and poster presentations, and exhibits. The deadline for abstract submission for presentations is November 14, 2003. Information about submitting abstracts is available at . 
Abstracts should address new, reemerging, or drug-resistant infectious diseases that affect human health. The deadline for late-breaker abstracts is January 16, 2004. Registration information is available at and at and by e-mail at [email protected] or at [email protected]. # Errata: Vol. 52, No. 40 In the article, "Cigarette Smoking Among Adults -United States, 2001," an error occurred in the table on page 955. Total prevalence for persons with 0-12 yrs (no diploma) of education was reported to be 28.4% (95% CI +1.4). The correct prevalence for this population should have been 27.5% (95% CI +1.4). In the article, "Recommended Adult Immunization Schedule -United States, 2003-2004," on page 968, an incorrect volume number was given for the fourth reference. The reference should read, "4. CDC. Prevention of pneumococcal disease: recommendations of the Advisory Committee on Immunization Practices (ACIP). MMWR 1997;46(No. RR-8)." # Errata: Vol. 52, No. RR-10 In the MMWR Recommendations and Reports, "Guidelines for Environmental Infection Control in Health-Care Facilities: Recommendations of CDC and the Healthcare Infection Control Practices Advisory Committee (HICPAC)," On page 10, in Figure 1, the third bullet under the footnote should read as follows: "- air volume differential >125 cfm supply versus exhaust." On page 11, in Figure 2, the label "Neutral anteroom" should read "anteroom." Also, the first, second, and seventh bullets under the footnote should read as follows: "- pressure differential of 2.5 Pa (0.01-in. water gauge) measured at the door between patient room and anteroom; - air volume differential >125 cfm, depending on anteroom airflow direction (i.e., pressurized versus depressurized); - anteroom airflow patterns (i.e., anteroom is pressurized in top and middle panels, and depressurized in bottom panel)." On page 12, in Figure 3, the third bullet under the footnote should read as follows: " e ncore. Week after week, MMWR Online plays an important role in helping you stay informed. From the latest CDC guidance to breaking health news, count on MMWR Online to deliver the news you need, when you need it. Log on to cdc.gov/mmwr and enjoy MMWR performance. - No measles or rubella cases were reported for the current 4-week period yielding a ratio for week 42 of zero (0). † Ratio of current 4-week total to mean of 15 4-week totals (from previous, comparable, and subsequent 4-week periods for the past 5 years). The point where the hatched area begins is based on the mean and two standard deviations of these 4-week totals. U: Unavailable. -:No reported cases. - Mortality data in this table are voluntarily reported from 122 cities in the United States, most of which have populations of >100,000. A death is reported by the place of its occurrence and by the week that the death certificate was filed. Fetal deaths are not included. † Pneumonia and influenza. § Because of changes in reporting methods in this Pennsylvania city, these numbers are partial counts for the current week. Complete counts will be available in 4 to 6 weeks. ¶ Total includes unknown ages.
When first evaluated at WRAMC, the 22 patients had a median of three (range: one to nine) skin lesions, which ranged from 3 mm to 40 mm in diameter. Higher proportions of the lesions were located on the upper (39%) or lower (32%) extremities than on the trunk/back (16%) or face/neck (13%). Typically, the lesions were painless, had enlarged slowly, and ultimately had central ulceration, often covered with eschar and surrounded by an erythematous, indurated border (Figure 2). Regional lymph nodes (e.g., epitrochlear, axillary, and inguinal), if palpable, usually were <1 cm in diameter. None of the patients had systemic symptoms. In 17 (77%) of the 22 cases, parasites were noted on light-microscopic examination of tissue. Of the 19 patients who had tissue cultured for parasites, 14 (74%) had positive cultures, of which 13 (93%) had sufficient organisms for species identification by isoenzyme electrophoresis. Of these 13 patients, the nine whose cultures had been tested as of October 20, 2003, were all infected with Leishmania major. Additional evidence that 21 (95%) of the 22 patients had CL was obtained by testing tissue with an investigational, fluorogenic, genus-specific polymerase chain reaction (PCR) assay developed and conducted by staff of WRAMC and the Walter Reed Army Institute of Research (Silver Spring, Maryland) (6). Since 1978, military personnel with potential cases of leishmaniasis have been referred to WRAMC for evaluation and therapy with the pentavalent antimonial compound sodium stibogluconate (Pentostam®, The Wellcome Foundation, United Kingdom). Although treatment of cases of CL with pentavalent antimonial compounds has been considered the standard of care for over half a century (1), these compounds ...

Editorial Note: Leishmaniasis is a vector-borne parasitic disease endemic in parts of the tropics, subtropics, and southern Europe. The World Health Organization estimates that 1.5 million cases of CL and 500,000 cases of visceral leishmaniasis (VL) occur each year (1). Both cutaneous and visceral infection can remain asymptomatic or be associated with mild, nonspecific, and nonprogressive symptoms (1). Clinical manifestations, if they develop, typically are first noted weeks to months after exposure. The skin lesions of CL, which can be chronic and disfiguring, typically evolve from papules to nodules to ulcerative lesions but can persist as nodules or plaques (1). Host (e.g., immune status) and parasite (e.g., species and strain) characteristics affect the natural history and the ease and importance of diagnosing and treating cases of CL. Although both L. major and L. tropica are common etiologic agents of CL in Afghanistan, Iraq, and Kuwait (1,7,8), which species has caused a particular case of CL depends on such factors as the geographic area and ecologic setting of exposure and the species of the sand-fly vector. VL is more prevalent in Iraq (8) than in Afghanistan or Kuwait. Manifestations of advanced VL include fever, cachexia, hepatosplenomegaly, pancytopenia, and hypergammaglobulinemia; such cases can be fatal if not treated appropriately and quickly (1). No FDA-approved vaccines or prophylactic medications to prevent leishmaniasis are available (1). Control measures against vectors or reservoir hosts of infection might be effective in particular settings (1,8,9).
Personal protective measures to decrease risk for infection include avoiding, if possible, areas where leishmaniasis is endemic, particularly from dusk through dawn; using permethrin-treated bed nets and clothing; minimizing the amount of exposed skin; and applying insect repellents containing 30%-35% DEET (lower percentages for children) to exposed skin. Transmission of leishmanial parasites through blood transfusion has not been reported in the United States. However, as a precautionary measure, the Armed Services Blood Program Office of the Department of Defense (DoD) (Falls Church, Virginia) and the American Association of Blood Banks (AABB) (Bethesda, Maryland) are implementing policies to defer prospective blood donors who have been in Iraq from donating blood for 12 months after the last date they left Iraq. Additional information about these deferral policies is available from DoD at http://www-nehc.med.navy.mil/downloads/prevmed/leishmanAug03.pdf and from AABB at http://www.aabb.org. In Operations Desert Shield and Desert Storm during 1990-1991, among approximately 697,000 deployed military personnel, WRAMC identified 12 cases of so-called viscerotropic leishmaniasis caused by L. tropica (a syndrome associated with visceral infection but not necessarily the classic clinical manifestations of VL) and 20 cases of CL (3,10; WRAMC, unpublished data, 2003). During August 2002-September 2003, WRAMC identified 22 cases of CL among personnel participating in Operations Iraqi and Enduring Freedom. The apparent decline in numbers of cases of CL with self-reported onset of lesions during July-August 2003 (Figure 1) could reflect delays in persons seeking medical evaluation for skin lesions that might not cause concern initially. U.S. personnel in Iraq have reported being bitten by sand flies (some persons have received >100 bites in a single night), and up to 2% of female phlebotomine sand flies collected in Iraq were infected with leishmanial parasites. As of October 21, WRAMC had identified nine more cases of CL in addition to the 22 cases described in this report. WRAMC is evaluating additional potential cases of CL in deployed personnel, and the number of confirmed cases probably will continue to increase. U.S. health-care providers should consider the possibility of CL in persons with chronic skin lesions who were deployed to Southwest/Central Asia or who were in other areas where leishmaniasis is endemic, and the possibility of VL in such persons with persistent febrile illnesses, especially if associated with other manifestations suggestive of VL (e.g., splenomegaly and pancytopenia) (1,4,8,10). Information about diagnosing and treating CL and VL has been published (1,5). Both WRAMC and CDC provide diagnostic services and the antileishmanial compound sodium stibogluconate. For treatment of health-care beneficiaries of the military, health-care providers should contact WRAMC, telephone 202-782-6740. For treatment of civilians, providers should contact CDC's Drug Service, telephone 404-639-3670.

# Infant Health Among Puerto Ricans -Puerto Rico and U.S. Mainland, 1989-2000

Although the overall U.S. infant mortality rate (IMR) declined dramatically during the 1900s, striking racial/ethnic disparities in infant mortality remain (1,2). Infant health disparities associated with maternal place of birth also exist within some racial/ethnic populations (3,4).
Eliminating disparities in infant health is crucial to achieving the 2010 national health objective of reducing the infant death rate to 4.5 per 1,000 live births (objective 16-1c) (5). Hispanics comprise the largest racial/ethnic minority population in the United States. Among U.S. Hispanics, considerable heterogeneity exists in infant health, with the poorest outcomes reported among Puerto Rican infants (6). This report compares trends during the previous decade in IMRs and major determinants of these rates, such as low birthweight (LBW), preterm delivery (PTD), and selected maternal characteristics, among infants born to Puerto Rican women on the U.S. mainland (50 states and the District of Columbia) with corresponding trends among infants born in Puerto Rico. The findings indicate that despite having a lower prevalence of selected maternal risk factors, Puerto Rico-born infants are at greater risk for LBW, PTD, and infant death than mainland-born Puerto Rican infants. This report also highlights a persistent disparity in IMRs and an emerging disparity in LBW and PTD rates between Puerto Rico-born infants and mainland-born Puerto Rican infants. Future research should focus on identifying factors responsible for these disparities to improve infant health in Puerto Rico. Linked birth/infant death files for the 50 states, the District of Columbia, and Puerto Rico for 1989-1991 and 1998-2000 were used to assess IMR trends. Natality files for 1990-2000 were used to examine trends in rates of LBW (<2,500 g), PTD (<37 weeks' gestation), and selected maternal characteristics among live-born infants. Analyses were limited to infants born to Puerto Rican women (i.e., those born in Puerto Rico, those born on the mainland to Puerto Rico-born mothers, or those born on the mainland to mothers who reported being of Puerto Rican ethnicity). Infants born in Puerto Rico to women not born either in Puerto Rico or on the mainland were excluded. Four subpopulations of Puerto Rican infants were examined initially: infants born in Puerto Rico to Puerto Rico-born mothers, infants born in Puerto Rico to mainland-born mothers, infants born on the mainland to Puerto Rico-born mothers, and infants born on the mainland to mainland-born mothers of Puerto Rican ethnicity. However, because maternal place of birth was not associated substantially with infant health outcomes, data are shown for Puerto Rico-born and mainland-born infants without regard to maternal place of birth. Chi-square tests were used to compare differences in the prevalence of infant and maternal characteristics and differences in IMRs among the groups.

# Low Birthweight and Preterm Delivery

In 1990, Puerto Rico-born infants were 1.03 times more likely to be of LBW than mainland-born infants, and by 2000, this disparity had increased to 1.2 (Figure). From 1990 to 2000, the LBW rate for Puerto Rico-born infants increased 18.0%, from 9.2% to 10.9%; for mainland-born infants, the LBW rate increased 3.7%, from 8.9% to 9.3%. Similar differences in LBW rate increases were observed when analyses were restricted to full-term and singleton births.
The increase in the LBW rate among Puerto Rico-born infants was associated predominantly with an increase in the percentage of infants with an intermediate LBW (ILBW; 1,500-2,499 g); however, a small increase also was observed in the percentage with a very low birthweight (VLBW; <1,500 g) (Table 1). In 2000, Puerto Rico-born infants were less likely than mainland-born infants to be of VLBW (ratio = 0.7) but more likely to be of ILBW (ratio = 1.3) (Table 1). In 1990, Puerto Rico-born infants were less likely than mainland-born infants to be born preterm (ratio = 0.9) (Figure). From 1990 to 2000, the PTD rate among Puerto Rico-born infants increased 29.3% (from 11.6% to 15.0%), and that among mainland-born infants increased 0.9% (from 13.3% to 13.4%). As a result, in 2000, Puerto Rico-born infants were 1.1 times more likely than mainland-born infants to be born preterm. Similar differences in PTD rates were observed when analyses were limited to singleton births. The increase in the PTD rate among Puerto Rico-born infants was attributable primarily to an increase in the rate of moderately preterm births (32-36 weeks' gestation), although the rate of very preterm births (<32 weeks' gestation) also increased slightly (Table 1). In 2000, despite higher rates of LBW and PTD among Puerto Rico-born infants, their mothers were less likely than mothers of mainland-born infants to report selected maternal risk factors, including receiving late/no prenatal care, having <12 years of education, not being married, having plural births, and using tobacco (Table 1). The prevalence of first-trimester prenatal care and the percentage of mothers aged <19 years at their infant's birth were similar for the two groups (Table 1). The prevalence of these maternal characteristics did not differ by maternal place of birth.

# Infant Mortality

From 1989-1991 to 1998-2000, the combined IMR for Puerto Rico-born and mainland-born infants declined approximately 24%. The 1989-1991 IMR for Puerto ...

... (6), risk factors such as maternal tobacco use might be reported less completely in Puerto Rico than on the mainland. Second, because Hispanic origin is not recorded on birth certificates in Puerto Rico, this study was based on records for infants born either in Puerto Rico or on the mainland; mainland-born infants were defined as having Puerto Rican ethnicity if their mothers were born in Puerto Rico or reported being Puerto Rican. This report highlights a continuing disparity in infant mortality rates and an emerging disparity in LBW and PTD rates between Puerto Rico-born infants and infants born on the mainland to Puerto Rican mothers. These differences should be considered in the planning and implementation of efforts to reduce IMRs among Puerto Ricans. The higher birthweight- and gestational age-specific IMRs in Puerto Rico contribute more to the overall higher IMR in Puerto Rico than do the differences in birthweight and gestational age distributions between Puerto Rico- and mainland-born infants (8). Efforts to reduce the IMR in Puerto Rico should focus on reducing mortality rates among LBW and preterm infants, perhaps by examining existing perinatal services. Additional opportunities might exist for lowering the overall IMR if the underlying causes of the increases in the prevalence of LBW and PTD can be identified and prevented. Improving infant health among Puerto Ricans will most likely require interventions at the individual, provider, and health-care system levels.
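The disparity ratios and percent changes reported above are simple arithmetic on the published rates, and a short worked example can make the calculations explicit. The sketch below is illustrative only and not part of the original analysis; because it uses the rounded rates quoted in the text, its results can differ slightly from the report's figures, which were computed from unrounded data.

```python
# Illustrative arithmetic only: reproduces the report's LBW and PTD
# comparisons from the rounded rates quoted in the text.

def pct_change(old, new):
    """Percent change from old to new."""
    return 100.0 * (new - old) / old

def rate_ratio(a, b):
    """Ratio of rate a to rate b (disparity ratio)."""
    return a / b

# Low birthweight (LBW) rates (%), Puerto Rico-born vs. mainland-born
lbw_pr_1990, lbw_pr_2000 = 9.2, 10.9
lbw_ml_1990, lbw_ml_2000 = 8.9, 9.3

print(f"LBW change, PR-born:       {pct_change(lbw_pr_1990, lbw_pr_2000):+.1f}%")  # ~ +18%
print(f"LBW change, mainland-born: {pct_change(lbw_ml_1990, lbw_ml_2000):+.1f}%")  # ~ +4%
print(f"LBW disparity ratio, 2000: {rate_ratio(lbw_pr_2000, lbw_ml_2000):.2f}")    # ~ 1.2

# Preterm delivery (PTD) rates (%)
ptd_pr_1990, ptd_pr_2000 = 11.6, 15.0
ptd_ml_1990, ptd_ml_2000 = 13.3, 13.4

print(f"PTD change, PR-born:       {pct_change(ptd_pr_1990, ptd_pr_2000):+.1f}%")  # ~ +29.3%
print(f"PTD disparity ratio, 2000: {rate_ratio(ptd_pr_2000, ptd_ml_2000):.2f}")    # ~ 1.1
```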
# West Nile Virus Infection Among Turkey Breeder Farm Workers -Wisconsin, 2002

In 2002, Wisconsin public health officials were notified of two cases of febrile illness in workers at a commercial turkey breeder farm (farm A) in county A. The Wisconsin Division of Public Health (WDPH) initiated an investigation that found a high prevalence of West Nile virus (WNV) antibody among farm A workers and turkeys. An associated high incidence of febrile illness among farm A workers also was observed. This report summarizes the results of this investigation, which indicate possible nonmosquito transmission among birds and subsequent infection of humans at farm A. Because the mode of transmission in this outbreak is unknown, turkey handlers should take appropriate precautions, including use of DEET-containing mosquito repellents, protective clothing and gloves, respiratory protection, and proper hand hygiene. Suspected occupationally acquired WNV infections should be reported immediately to local and state health departments. During November 2002, WDPH and the Wisconsin State Laboratory of Hygiene (WSLH) confirmed that two ill residents of county A had been infected with WNV. Before these reports, only one human WNV infection had been reported in this county. Both persons worked at farm A and had febrile illness with rash during late September-early October. These human illnesses occurred after a suspected fowl pox outbreak among farm A turkeys in September. Workers were concerned the pox outbreak might be associated with their illnesses. Farm A is one of six turkey breeder farms in county A owned by a company that also operates nonbreeder farms and a turkey meat processing plant in county A. The five other turkey breeder farms are located within 10 miles of farm A, and multiple private residences are within a quarter mile. In February 2003, county and state public health staff, in collaboration with the company, identified workers at the six turkey breeder farms, the nonbreeder farms, and the plant, and requested their consent to participate in a serosurvey. Serum samples were collected from participating workers (N = 93) to identify persons infected recently. A questionnaire was administered to identify persons who had a febrile illness during August-October 2002. Serum samples also were collected from residents (N = 14) who lived within a quarter mile of farm A. All serum samples were tested for WNV-specific IgM antibody at WSLH (1). IgM-positive specimens were confirmed by plaque-reduction neutralization tests at CDC (2). Of 107 total participants, 10 (9%) were seropositive. Of approximately 90 workers at the six breeder farms, 57 (63%) participated; of these, 10 (18%) were infected recently with WNV (Table). None of the meat processing workers or other area residents was infected. Of 11 persons who worked exclusively at farm A, six (55%) were WNV IgM-positive, compared with two (25%) of eight who worked at both farm A and other breeder farms and two (5%) of 38 who worked only at other breeder farms. Of the 10 IgM-positive workers, six (60%) reported febrile headaches during August-October (all occurring during the last week of September), compared with seven (7%) of 97 IgM-negative persons sampled (p = 0.0002 by Fisher exact test). All six IgM-positive persons who reported febrile headache had worked at farm A. All six noted a skin rash, and one had meningoencephalitis and was hospitalized; no deaths occurred.
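The reported association between IgM positivity and febrile headache can be reproduced from the counts given in the text (six of 10 IgM-positive vs. seven of 97 IgM-negative persons with febrile headache). The sketch below is a minimal illustration using SciPy, not the investigators' actual analysis code.

```python
# A minimal sketch of the 2x2 comparison reported above, using a Fisher
# exact test on the counts quoted in the text.
from scipy.stats import fisher_exact

table = [[6, 4],    # IgM-positive: febrile headache, no headache
         [7, 90]]   # IgM-negative: febrile headache, no headache

odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"odds ratio = {odds_ratio:.1f}, p = {p_value:.4f}")
# Two-sided p is on the order of 0.0002, consistent with the report.
```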
Reported mosquito exposures and bites were similar for IgM-positive (nine [90%] and eight [80%] of 10, respectively) and IgM-negative workers (67 [85%] and 54 [68%] of 79, respectively). Only one (2%) of 57 breeder farm workers reported using insect repellent while working. Farm A includes two breeder bird barns and a juvenile flock barn. The breeder barns separate uncaged females from male turkeys with a solid plywood wall. The sides of the barns housing the female turkeys are covered with 1 in. x 1 in. mesh wire fencing and plastic curtains that can be adjusted to lower the temperature during warm months. Serum from farm A turkeys and turkeys from the nearest breeder farm were collected in late January 2003. The farm A flock sampled was the group of birds housed in the juvenile flock barn from mid-June to early December 2002, at which time this flock was moved to a breeder barn on farm A to replace a flock slaughtered in November. The flock sampled on the nearby farm was a breeder flock also in place in ...

Editorial Note: The investigation described in this report found that workers at farm A had a higher incidence of febrile illness and prevalence of WNV antibodies than workers at other breeder and nonbreeder farms, workers at a turkey meat processing facility, or persons who lived on or near the affected farm and who did not work in the turkey barns. The mode of transmission to these workers is unknown. Although the majority of human WNV infections are mosquito-borne, transmission by less typical routes might have occurred, including percutaneous (e.g., exposure of broken skin or mucosa to infected turkey feces or serous exudates from dually infected pox lesions), fecal-oral, or respiratory (e.g., exposure to aerosolized infected turkey feces). The WNV seroprevalence (96%) among female turkeys on farm A was high. However, experimental evidence suggests that turkeys develop insufficient levels of WNV viremia to contribute to a bird-mosquito-bird amplification cycle (3). Although WNV was detected in the feces of these turkeys, no oropharyngeal shedding or transmission to cage mates was observed (3). Nonvector-borne WNV transmission has been demonstrated experimentally among rodents and among certain bird species other than turkeys (4,5). Once WNV was introduced to female turkeys at farm A (presumably by mosquitoes), widespread transmission within that flock might have taken place by fecal-oral, respiratory, or another atypical (e.g., percutaneous exposure associated with pecking behavior or vaccination) route. In addition, other unique conditions at farm A, including possible co-infection with an avian pox virus, might have resulted in higher WNV viremias or infectious materials with higher WNV titers than laboratory studies have suggested. Despite uncertainty over the mode(s) of transmission, epidemiologic evidence suggests that this outbreak was related to occupational exposure. Occupationally acquired WNV infections have been reported previously among laboratory or field workers who experienced a known percutaneous injury or aerosol exposure while working with high concentrations of WNV in cell culture or infected animal tissues (6-9). In this investigation, no such exposure was documented.
Because the mode of transmission in this outbreak is unknown, turkey handlers should 1) take personal protective measures, including wearing protective clothing and using mosquito repellents (e.g., those containing DEET on skin and clothing and those containing permethrin on clothing), as recommended for outdoor workers; 2) wear gloves; and 3) wash hands frequently. In addition, respiratory protection has been recommended for reducing other exposures to workers in turkey barns (10). Respiratory protection should be selected and used in accordance with the Occupational Safety and Health Administration (OSHA) respiratory protection standard (Title 29 CFR 1910.134). Workers should receive training that reinforces awareness of potential occupational hazards and risks and stresses the importance of timely reporting of all injuries and illnesses of suspected occupational origin. Health-care workers should inquire about a patient's outdoor exposure and occupation when a human WNV infection is suspected or identified and consider WNV as a possible etiology among turkey farm workers with febrile headache or rash, meningitis, encephalitis, or other severe neurologic illness, especially when WNV illnesses exist among co-workers or birds. Suspected occupationally acquired WNV infections should be reported immediately to local and state health departments. The investigation of turkey breeder farm workers in county A is ongoing. In addition, further studies are needed to determine the factors involved in this outbreak, to better define the occupational risk for WNV infections, and to assess appropriate personal protective measures. On the basis of recommendations from public health staff, the company has made mosquito repellent containing 30% DEET available at farm A and other turkey breeder farms. Recommendations previously in place at the company farms include protective clothing, frequent hand washing, and an OSHA-required respiratory protection program. Gloves and safety glasses also are available to workers.

# Public Health and Aging

# Nonfatal Injuries Among Older Adults Treated in Hospital Emergency Departments -United States, 2001

Because injuries generally are considered a problem of the young, injuries among older adults (i.e., persons aged ≥65 years) have received little attention. However, injuries are the eighth leading cause of death among older adults in the United States (1). In 2001, approximately 2.7 million older adults were treated for nonfatal injuries in hospital emergency departments (EDs); the majority of these injuries were the result of falls (1). To characterize nonfatal injuries among older adults, CDC analyzed data from the National Electronic Injury Surveillance System-All Injury Program (NEISS-AIP). This report summarizes the results of that analysis, which indicate differences in type and mechanism of injury by sex, suggesting that prevention programs should be designed and tailored differently for men and women. NEISS-AIP is operated by the U.S. Consumer Product Safety Commission and collects data about initial visits for all types and causes of injuries treated in U.S. EDs, drawing from a nationally representative sample of 66 hospitals selected as a stratified probability sample of hospitals in the United States. Data from these cases are weighted by the inverse of the probability of selection to produce national estimates (2).
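To make the weighting step concrete, the sketch below shows inverse-probability weighting in miniature. The hospital strata, selection probabilities, and population figure are hypothetical, and NEISS-AIP's actual weights may also incorporate adjustments (e.g., for nonresponse) not shown here.

```python
# A minimal sketch, with hypothetical numbers, of inverse-probability
# weighting: each sampled case counts as 1/p cases nationally, where p is
# the selection probability of the hospital where the case was seen.

cases = [
    # (stratum, selection_probability) for each sampled ED case
    ("very_large_hospital", 0.25),
    ("very_large_hospital", 0.25),
    ("small_hospital", 0.02),
]

# Sum of weights = weighted national estimate of cases.
national_estimate = sum(1.0 / p for _, p in cases)
print(f"weighted national estimate: {national_estimate:,.0f}")  # 4 + 4 + 50 = 58

# An injury rate per 100,000 divides the weighted count by the relevant
# census population estimate (figure below is hypothetical).
population = 35_000_000
rate = national_estimate / population * 100_000
print(f"rate per 100,000 persons: {rate:.4f}")
```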
For this report, annualized estimates were calculated on the basis of weighted data for 36,752 nonfatal injuries among older adults treated in EDs during January-December 2001. U.S. Census Bureau population estimates for 2001 were used to calculate injury rates (3). A direct variance estimation procedure was used to calculate 95% confidence intervals and to account for the complex sample design (2). All nonfatal injuries were classified according to the mechanism of injury (e.g., fall, struck by/against, or motor vehicle crash), diagnosis, primary body part injured, disposition, location of injury, and intent. The diagnosis and intent of the injury were classified according to the most severe injury (4). Injuries of unknown intent were grouped with those classified as unintentional. During 2001, an estimated 935,556 men and 1,731,640 women aged ≥65 years were treated in EDs for nonfatal injuries. The overall injury rate was higher among women (8,466 per 100,000 persons) than among men (6,404). Injury rates increased with age, to 15,272 for women aged ≥85 years and 11,547 for men aged ≥85 years. Nearly all injuries (99%) were classified as unintentional/unknown intent (Table). Overall, falls resulted in the highest rate of injury (4,684 per 100,000 persons) and were the most common mechanism of injury, accounting for 62% of all nonfatal injury ED visits in this population. The injury rate from falls was higher among women (5,659) than men (3,319). However, the injury rates for women were lower for certain other types of injuries, such as being struck by/against (588 versus 617), occupying a motor vehicle (525 versus 540), and being cut or pierced (243 versus 488) (Table). The greatest number of nonfatal injuries among older adults were diagnosed as fractures (26%), followed by contusions/abrasions (23%), lacerations (17%), strains/sprains (13%), and internal injuries (5%). Diagnoses varied by sex. Fractures of all parts of the body were more common among women than men (30% versus 19%), and lacerations were more common among men than women (22% versus 14%). The parts of the body affected most were the head/neck (25%) and arms/hands (22%). The majority (82%) of older adults were treated and released; 16% were hospitalized. The ratio of patients treated/released to those hospitalized was lower among women (4.7:1) than men (5.9:1), indicating that women were more often hospitalized after a nonfatal injury. The most common (47%) location for nonfatal injuries was the home (Table).

Editorial Note: Falls remain the leading cause of both nonfatal and fatal injury among adults aged ≥65 years in the United States (1). The findings in this report, which indicate that falls were the most common reason for injury-related ED visits among persons aged ≥65 years, are consistent with previous studies indicating that approximately 40% of older adults living in community settings (e.g., in private residences or minimally assisted environments) fall each year (5). In this study, 82% of persons aged ≥65 years were treated and released following injury, compared with 95% of persons aged <65 years. Older adults were more than three times as likely (1,217 per 100,000 persons) to be hospitalized as persons aged <65 years (353) (1). The findings in this report are subject to at least five limitations. First, NEISS-AIP provides national estimates and does not allow for estimates by region, state, or local jurisdiction.
Second, injury outcomes are specific to ED visits and do not include subsequent outcomes. Third, NEISS-AIP data reflect only those injuries that were severe enough to require treatment in an ED. Fourth, in cases with multiple injuries, only data regarding the most severe injury are recorded. Finally, data for intent are classified on the basis of information contained in the medical record. Injuries for which intent cannot be determined conclusively from the ED record are grouped with unintentional injuries. The findings in this report can form the basis for targeting prevention efforts to different populations of older adults. For example, exercise can reduce the risk for falls among older adults by 15% (6). Because women are more likely to sustain fall-related injuries, exercise can be an especially important preventive measure for this population. Data from NEISS-AIP can continue to be a source for monitoring trends, evaluating interventions, and characterizing nonfatal injuries among persons aged ≥65 years.

... (n = four), New Mexico (n = four), North Dakota (n = four), Alabama (n = three), Ohio (n = three), Indiana (n = two), Missouri (n = two), Montana (n = two), New Jersey (n = two), Delaware (n = one), Illinois (n = one), Kansas (n = one), Kentucky (n = one), Louisiana (n = one), Michigan (n = one), Mississippi (n = one), Tennessee (n = one), and Virginia (n = one). A total of 682 presumptive West Nile viremic blood donors have been reported to ArboNET. Of these, 596 (87%) were reported from the following nine western and midwestern states: Colorado, Kansas, Nebraska, New Mexico, North Dakota, Oklahoma, South Dakota, Texas, and Wyoming. Of the 529 donors for whom data were reported completely, six subsequently had meningoencephalitis, and 76 subsequently had West Nile fever. In addition, 10,453 dead birds with WNV infection have been reported from 42 states, the District of Columbia, and New York City; 3,270 WNV infections in horses, 16 WNV infections in dogs, 14 infections in squirrels, and 24 infections in unidentified animal species also have been reported. In addition, seropositivity was reported from one other unidentified animal species. A total of 6,667 WNV-positive mosquito pools have been reported from 38 states, the District of Columbia, and New York City.

[Map: Human WNV disease and animal WNV activity; animal WNV activity only]

Additional information about WNV activity is available from CDC at http://www.cdc.gov/ncidod/dvbid/westnile/index.htm and http://westnilemaps.usgs.gov.

# Notice to Readers

# Guidelines for Maintaining and Managing the Vaccine Cold Chain

In February 2002, the Advisory Committee on Immunization Practices (ACIP) and the American Academy of Family Physicians (AAFP) released their revised General Recommendations on Immunization (1), which included recommendations on the storage and handling of immunobiologics. Because of increased concern over the potential for errors with the vaccine cold chain (i.e., maintaining proper vaccine temperatures during storage and handling to preserve potency), this notice advises vaccine providers of the importance of proper cold chain management practices. This report describes proper storage units and storage temperatures, outlines appropriate temperature-monitoring practices, and recommends steps for evaluating a temperature-monitoring program. The success of efforts against vaccine-preventable diseases is attributable in part to proper storage and handling of vaccines.
Exposure of vaccines to temperatures outside the recommended ranges can affect potency adversely, thereby reducing protection from vaccine-preventable diseases (1). Good practices to maintain proper vaccine storage and handling can ensure that the full benefit of immunization is realized.

# Recommended Storage Temperatures

The majority of commonly recommended vaccines require storage temperatures of 35°F-46°F (2°C-8°C) and must not be exposed to freezing temperatures. Introduction of varicella vaccine in 1995 and of live attenuated influenza vaccine (LAIV) more recently increased the complexity of vaccine storage. Both varicella vaccine and LAIV must be stored in a continuously frozen state at ≤5°F (-15°C) with no freeze-thaw cycles (Table 1). In recent years, instances of improper vaccine storage have been reported. An estimated 17%-37% of providers expose vaccines to improper storage temperatures, and refrigerator temperatures are more commonly kept too cold than too warm (2,3). Freezing temperatures can irreversibly reduce the potency of vaccines required to be stored at 35°F-46°F (2°C-8°C). Certain freeze-sensitive vaccines contain an aluminum adjuvant that precipitates when exposed to freezing temperatures, resulting in loss of the adjuvant effect and of vaccine potency (4). Physical changes are not always apparent after exposure to freezing temperatures, and visible signs of freezing are not necessary for a decrease in vaccine potency to occur. Although the potency of the majority of vaccines can be affected adversely by storage temperatures that are too warm, these effects are usually more gradual, predictable, and smaller in magnitude than losses from temperatures that are too cold. In contrast, varicella vaccine and LAIV are required to be stored in continuously frozen states and lose potency when stored above the recommended temperature range.

# Vaccine Storage Requirements

Vaccine storage units must be selected carefully and used properly. A combination refrigerator/freezer unit sold for home use is acceptable for vaccine storage if the refrigerator and freezer compartments each have a separate door. However, vaccines should not be stored near the cold air outlet from the freezer to the refrigerator. Many combination units cool the refrigerator compartment by using air from the freezer compartment. In these units, the freezer thermostat controls the freezer temperature, while the refrigerator thermostat controls the volume of freezer-temperature air entering the refrigerator. This can result in different temperature zones within the refrigerator. Refrigerators without freezers and stand-alone freezers usually perform better at maintaining the precise temperatures required for vaccine storage, and such single-purpose units sold for home use are less expensive alternatives to medical specialty equipment. Any refrigerator or freezer used for vaccine storage must maintain the required temperature range year-round, be large enough to hold the year's largest inventory, and be dedicated to the storage of biologics (i.e., food or beverages should not be stored in vaccine storage units). In addition, vaccines should be stored centrally in the refrigerator or freezer, not in the door or on the bottom of the storage unit, and sufficiently away from walls to allow air to circulate.

# Temperature Monitoring

Proper temperature monitoring is key to proper cold chain management. Thermometers should be placed in a central location in the storage unit, adjacent to the vaccine.
Temperatures should be read and documented twice each day, once when the office or clinic opens and once at the end of the day. Temperature logs should be kept on file for ≥3 years, unless state statutes or rules require a longer period. Immediate action must be taken to correct storage temperatures that are outside the recommended ranges. Mishandled vaccines should not be administered. One person should be assigned primary responsibility for maintaining temperature logs, along with one backup person. Temperature logs should be reviewed by the backup person at least weekly. All staff members working with vaccines should be familiar with proper temperature monitoring. Different types of thermometers can be used, including standard fluid-filled, min-max, and continuous chart recorder thermometers (Table 2). Standard fluid-filled thermometers are the simplest and least expensive products, but some models might perform poorly. Product temperature thermometers (i.e., those encased in biosafe liquids) might reflect vaccine temperature more accurately. Min-max thermometers monitor the temperature range. Continuous chart recorder thermometers monitor temperature range and duration and can be recalibrated at specified intervals. All thermometers used for monitoring vaccine storage temperatures should be calibrated and certified by an appropriate agency (e.g., the National Institute of Standards and Technology). In addition, temperature indicators (e.g., Freeze Watch™ [3M; St. Paul, Minnesota] or ColdMark™ [Cold Ice, Inc.; Oakland, California]) can be considered as a backup monitoring system (5); however, such indicators should not be used as a substitute for twice-daily temperature readings and documentation. All medical care providers who administer vaccines should evaluate their cold chain maintenance and management to ensure that 1) designated personnel and backup personnel have written duties and are trained in vaccine storage and handling; 2) accurate thermometers are placed properly in all vaccine storage units and any limitations of the storage system are fully known; 3) vaccines are placed properly within the refrigerator or freezer in which proper temperatures are maintained; 4) temperature logs are reviewed for completeness and for any deviations from recommended temperature ranges; 5) any out-of-range temperatures prompt immediate action to fix the problem, with the results of these actions documented; 6) any vaccines exposed to out-of-range temperatures are marked "do not use" and isolated physically; 7) when a problem is discovered, the exposed vaccine is maintained at proper temperatures while state or local health departments, or the vaccine manufacturers, are contacted for guidance; and 8) written emergency retrieval and storage procedures are in place in case of equipment failures or power outages. Around-the-clock monitoring systems might be considered to alert staff to after-hours emergencies, particularly if large vaccine inventories are maintained. Additional information on vaccine storage and handling is available from the Immunization Action Coalition at http://www.immunize.org/izpractices/index.htm. Links to state and local health departments are available at http://www.cdc.gov/other.htm.
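As an illustration of the monitoring routine described above, the sketch below checks a hypothetical twice-daily temperature log against the recommended ranges. The log format and field names are invented for this example; the temperature thresholds are the ones given in this notice.

```python
# A hypothetical sketch of the twice-daily log review: flag any reading
# outside the recommended ranges (35°F-46°F for refrigerated vaccines;
# 5°F or colder for varicella vaccine and LAIV in the freezer).

REFRIGERATOR_RANGE_F = (35.0, 46.0)   # 2°C-8°C
FREEZER_MAX_F = 5.0                   # -15°C; must stay at or below

def out_of_range(unit: str, temp_f: float) -> bool:
    """Return True if a reading violates the recommended range."""
    if unit == "refrigerator":
        lo, hi = REFRIGERATOR_RANGE_F
        return not (lo <= temp_f <= hi)
    if unit == "freezer":
        return temp_f > FREEZER_MAX_F
    raise ValueError(f"unknown storage unit: {unit}")

# (date, time of day, unit, °F) -- two readings per day per unit
log = [
    ("2003-10-24", "opening", "refrigerator", 38.0),
    ("2003-10-24", "closing", "refrigerator", 47.5),  # too warm -> flag
    ("2003-10-24", "opening", "freezer", -4.0),
    ("2003-10-24", "closing", "freezer", 9.0),        # too warm -> flag
]

for date, when, unit, temp in log:
    if out_of_range(unit, temp):
        print(f"{date} {when}: {unit} at {temp}°F is out of range -- "
              "act immediately and mark exposed vaccine 'do not use'")
```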
Especially detailed guidelines from the Commonwealth of Australia on vaccine storage and handling, vaccine storage units, temperature monitoring, and stability of vaccines at different temperatures also are available (6).

# Notice to Readers

# International Conference on Emerging Infectious Diseases

CDC's National Center for Infectious Diseases, the Council of State and Territorial Epidemiologists, the American Society for Microbiology, and the World Health Organization will cosponsor the International Conference on Emerging Infectious Diseases, February 29-March 3, 2004, at the Marriott Marquis Hotel in Atlanta, Georgia. The conference will explore the most current research, surveillance, and prevention and control programs addressing all aspects of emerging infectious diseases. Attendance is limited to 2,500 participants. The conference will include general and plenary sessions, symposia, panels of speakers, presentations on emerging infections activities, oral and poster presentations, and exhibits. The deadline for abstract submission for presentations is November 14, 2003. Information about submitting abstracts is available at http://www.iceid.org/abssub.asp. Abstracts should address new, reemerging, or drug-resistant infectious diseases that affect human health. The deadline for late-breaker abstracts is January 16, 2004. Registration information is available at http://www.iceid.org and http://www.cdc.gov/ncidod and by e-mail at [email protected] or [email protected].

# Errata: Vol. 52, No. 40

In the article, "Cigarette Smoking Among Adults -United States, 2001," an error occurred in the table on page 955. Total prevalence for persons with 0-12 yrs (no diploma) of education was reported to be 28.4% (95% CI = ±1.4). The correct prevalence for this population should have been 27.5% (95% CI = ±1.4). In the article, "Recommended Adult Immunization Schedule -United States, 2003-2004," on page 968, an incorrect volume number was given for the fourth reference. The reference should read, "4. CDC. Prevention of pneumococcal disease: recommendations of the Advisory Committee on Immunization Practices (ACIP). MMWR 1997;46(No. RR-8)."

# Errata: Vol. 52, No. RR-10

In the MMWR Recommendations and Reports, "Guidelines for Environmental Infection Control in Health-Care Facilities: Recommendations of CDC and the Healthcare Infection Control Practices Advisory Committee (HICPAC)," on page 10, in Figure 1, the third bullet under the footnote should read as follows: "• air volume differential >125 cfm supply versus exhaust." On page 11, in Figure 2, the label "Neutral anteroom" should read "anteroom." Also, the first, second, and seventh bullets under the footnote should read as follows: "• pressure differential of 2.5 Pa (0.01-in. water gauge) measured at the door between patient room and anteroom; • air volume differential >125 cfm, depending on anteroom airflow direction (i.e., pressurized versus depressurized); • anteroom airflow patterns (i.e., anteroom is pressurized in top and middle panels, and depressurized in bottom panel)." On page 12, in Figure 3, the third bullet under the footnote should read as follows: "..."
This report updates the 2008 recommendations by CDC's Advisory Committee on Immunization Practices (ACIP) regarding the use of influenza vaccine for the prevention and control of seasonal influenza (CDC. Prevention and control of influenza: recommendations of the Advisory Committee on Immunization Practices. MMWR 2008;57). Information on vaccination issues related to the recently identified novel influenza A (H1N1) virus will be published later in 2009. The 2009 seasonal influenza recommendations include new and updated information. Highlights of the 2009 recommendations include 1) a recommendation that annual vaccination be administered to all children aged 6 months-18 years for the 2009-10 influenza season; 2) a recommendation that vaccines containing the 2009-10 trivalent vaccine virus strains A/Brisbane/59/2007 (H1N1)-like, A/Brisbane/10/2007 (H3N2)-like, and B/Brisbane/60/2008-like antigens be used; and 3) a notice that recommendations for influenza diagnosis and antiviral use will be published before the start of the 2009-10 influenza season. Vaccination efforts should begin as soon as vaccine is available and continue through the influenza season. Approximately 83% of the United States population is specifically recommended for annual vaccination against seasonal influenza; however, <40% of the U.S. population received the 2008-09 influenza vaccine. These recommendations also include a summary of safety data for U.S.-licensed influenza vaccines. These recommendations and other information are available at CDC's influenza website (http://www.cdc.gov/flu); any updates or supplements that might be required during the 2009-10 influenza season also can be found at this website. Vaccination and health-care providers should be alert to announcements of recommendation updates and should check the CDC influenza website periodically for additional information.

# Introduction

In the United States, annual epidemics of seasonal influenza occur typically during the late fall through early spring. Influenza viruses can cause disease among persons in any age group, but rates of infection are highest among children (1-3). Rates of serious illness and death are highest among persons aged ≥65 years, children aged <2 years, and persons of any age who have medical conditions that place them at increased risk for complications from influenza (1,4,5). An annual average of approximately 36,000 deaths during 1990-1999 and 226,000 hospitalizations during 1979-2001 have been associated with influenza epidemics (6,7). Annual influenza vaccination is the most effective method for preventing influenza virus infection and its complications. Influenza vaccine can be administered to any person aged ≥6 months who does not have contraindications to vaccination to reduce the likelihood of becoming ill with influenza or of transmitting influenza to others. Trivalent inactivated influenza vaccine (TIV) can be used for any person aged ≥6 months, including those with high-risk conditions (Boxes 1 and 2). Live, attenuated influenza vaccine (LAIV) may be used for healthy, nonpregnant persons aged 2-49 years.

Box 1. Children and adolescents aged 6 months-18 years: All children aged 6 months-18 years should be vaccinated annually.
Children and adolescents at higher risk for influenza complications should continue to be a focus of vaccination efforts as providers and programs transition to routinely vaccinating all children and adolescents, including those who: are aged 6 months-4 years ( - 59 months); have chronic pulmonary (including asthma), cardiovas-- cular (except hypertension), renal, hepatic, cognitive, neurologic/neuromuscular, hematological or metabolic disorders (including diabetes mellitus); are immunosuppressed (including immunosuppression - caused by medications or by human immunodeficiency virus); are receiving long-term aspirin therapy and therefore - might be at risk for experiencing Reye syndrome after influenza virus infection; are residents of long-term care facilities; and - will be pregnant during the influenza season. - Note: Children aged <6 months cannot receive influenza vaccination. Household and other close contacts (e.g., daycare providers) of children aged <6 months, including older children and adolescents, should be vaccinated. is indicated for LAIV or TIV when considering vaccination of healthy, nonpregnant persons aged 2-49 years. Because the safety or effectiveness of LAIV has not been established in persons with underlying medical conditions that confer a higher risk for influenza complications, these persons should be vaccinated only with TIV. Influenza viruses undergo frequent antigenic change (i.e., antigenic drift); to gain immunity against viruses in circulation, patients must receive an annual vaccination against the influenza viruses that are predicted on the basis of viral surveillance data. . Although vaccination coverage has increased in recent years for many groups targeted for routine vaccination, coverage remains low among most of these groups, and strategies to improve vaccination coverage, including use of reminder/recall systems and standing orders programs, should be implemented or expanded. Antiviral medications are an adjunct to vaccination and are effective when administered as treatment and when used for chemoprophylaxis after an exposure to influenza virus. However, the emergence since 2005 of resistance to one or more of the four licensed antiviral agents (oseltamivir, zanamivir, amantadine and rimantadine) among circulating strains Annual vaccination against influenza is recommended for any adult who wants to reduce the risk of becoming ill with influenza or of transmitting it to others. Vaccination is recommended for all adults without contraindications in the following groups, because these persons either are at higher risk for influenza complications, or are close contacts of persons at higher risk has complicated antiviral treatment and chemoprophylaxis recommendations. Updated antiviral treatment and chemoprophylaxis recommendations will be provided in a separate set of guidelines later in 2009. CDC has issued interim recommendations for antiviral treatment and chemoprophylaxis of influenza (8), and these guidelines should be consulted pending issuance of new recommendations. In April 2009, a novel influenza A (H1N1) virus that is similar to influenza viruses previously identified in swine was determined to be the cause of an influenza respiratory illness that spread across North America and was identified in many areas of the world by May 2009. 
The symptoms of novel influenza A (H1N1) virus infection are similar to those of seasonal influenza, and specific diagnostic testing is required to distinguish novel influenza A (H1N1) virus infection from seasonal influenza (9). The epidemiology of this illness is still being studied, and prevention issues related to this newly emerging influenza virus will be published separately.

# Methods

CDC's Advisory Committee on Immunization Practices (ACIP) provides annual recommendations for the prevention and control of influenza. The ACIP Influenza Vaccine Working Group meets monthly throughout the year to discuss newly published studies, review current guidelines, and consider revisions to the recommendations. In reviewing the annual recommendations for consideration by the full committee, members of the working group consider a variety of issues, including the burden of influenza illness; vaccine effectiveness, safety, and coverage in groups recommended for vaccination; feasibility; cost-effectiveness; and anticipated vaccine supply. Working group members also request periodic updates on vaccine and antiviral production, supply, safety, and efficacy from vaccinologists, epidemiologists, and manufacturers. State and local vaccination program representatives are consulted. CDC's Influenza Division (available at http://www.cdc.gov/flu) provides influenza surveillance and antiviral resistance data. The Vaccines and Related Biological Products Advisory Committee provides advice on vaccine strain selection to the Food and Drug Administration (FDA), which selects the viral strains to be used in the annual trivalent influenza vaccines.

Published, peer-reviewed studies are the primary source of data used by ACIP in making recommendations for the prevention and control of influenza, but unpublished data that are relevant to issues under discussion also might be considered. Among studies discussed or cited, those of greatest scientific quality and those that measured influenza-specific outcomes are the most influential. For example, population-based estimates that use outcomes associated with laboratory-confirmed influenza virus infection contribute the most specific data for estimates of influenza burden. The best evidence for vaccine or antiviral efficacy and effectiveness comes from randomized controlled trials that assess laboratory-confirmed influenza infections as an outcome measure and consider factors such as timing and intensity of influenza circulation and degree of match between vaccine strains and circulating strains (10,11). Randomized, placebo-controlled trials cannot be performed ethically in populations for which vaccination already is recommended, but observational studies that assess outcomes associated with laboratory-confirmed influenza infection can provide important vaccine or antiviral effectiveness data. Randomized, placebo-controlled clinical trials are the best source of vaccine and antiviral safety data for common adverse events; however, such studies do not have the statistical power to identify rare but potentially serious adverse events. The frequency of rare adverse events that might be associated with vaccination is best assessed by reviewing computerized medical records from large linked clinical databases and medical charts of persons who are identified as having a potential adverse event after vaccination (12,13).
Vaccine coverage data from a nationally representative, randomly selected population that includes verification of vaccination through health-care record review are superior to coverage data derived from limited populations or without verification of vaccination; however, these data rarely are available for older children or adults (14). Finally, studies that assess vaccination program practices that improve vaccination coverage are most influential in formulating recommendations if the study design includes a nonintervention comparison group. In cited studies that included statistical comparisons, a difference was considered to be statistically significant if the p-value was <0.05 or the 95% confidence interval (CI) around an estimate of effect allowed rejection of the null hypothesis (i.e., no effect).

These recommendations were presented to the full ACIP and approved in February 2009. Modifications were made to the ACIP statement during the subsequent review process at CDC to update and clarify wording in the document. Vaccine recommendations apply only to persons who do not have contraindications to vaccine use (see Contraindications and Precautions for use of TIV and Contraindications and Precautions for use of LAIV). Data presented in this report were current as of July 17, 2009. Further updates, if needed, will be posted at CDC's influenza website (http://www.cdc.gov/flu).

# Primary Changes and Updates in the Recommendations

The 2009 recommendations include three principal changes or updates:
- Annual vaccination of all children aged 6 months-18 years should begin as soon as the 2009-10 influenza vaccine is available. Annual vaccination of all children aged 6 months-4 years (59 months) and older children with conditions that place them at increased risk for complications from influenza should continue to be a primary focus of vaccination efforts as providers and programs transition to routinely vaccinating all children.
- The majority of seasonal influenza A (H1N1) virus strains from the United States and other countries are now resistant to oseltamivir.
- Recommendations for influenza diagnosis and antiviral use will be published later in 2009. CDC issued interim recommendations for antiviral treatment and chemoprophylaxis of influenza in December 2008, and these should be consulted for guidance pending recommendations from the ACIP (8).

# Background and Epidemiology

# Biology of Influenza

Influenza A and B are the two types of influenza viruses that cause epidemic human disease. Influenza A viruses are categorized into subtypes on the basis of two surface antigens: hemagglutinin and neuraminidase. Since 1977, influenza A (H1N1) viruses, influenza A (H3N2) viruses, and influenza B viruses have circulated globally. Influenza A (H1N2) viruses that probably emerged after genetic reassortment between human A (H3N2) and A (H1N1) viruses also have been identified in some influenza seasons. In April 2009, human infections with a novel influenza A (H1N1) virus were identified; as of June 2009, infections with the novel influenza A (H1N1) virus have been reported worldwide. This novel virus is derived partly from influenza A viruses that circulate in swine and is antigenically distinct from human influenza A (H1N1) viruses in circulation since 1977. Influenza A subtypes and B viruses are further separated into groups on the basis of antigenic similarities. New influenza virus variants result from frequent antigenic change (i.e., antigenic drift) resulting from point mutations and recombination events that occur during viral replication (15).
Recent studies have begun to shed some light on the complex molecular evolution and epidemiologic dynamics of influenza A viruses (16-18). Currently circulating influenza B viruses are separated into two distinct genetic lineages (Yamagata and Victoria) but are not categorized into subtypes. Influenza B viruses undergo antigenic drift less rapidly than influenza A viruses. Influenza B viruses from both lineages have circulated in most recent influenza seasons (19).

Immunity to the surface antigens, particularly the hemagglutinin, reduces the likelihood of infection (20). Antibody against one influenza virus type or subtype confers limited or no protection against another type or subtype of influenza virus. Furthermore, antibody to one antigenic type or subtype of influenza virus might not protect against infection with a new antigenic variant of the same type or subtype (21). Frequent emergence of antigenic variants through antigenic drift is the virologic basis for seasonal epidemics and is the reason for annually reassessing the need to change one or more of the recommended strains for influenza vaccines.

More dramatic changes, or antigenic shifts, occur less frequently. Antigenic shift occurs when a new subtype of influenza A virus appears and can result in the emergence of a novel influenza A virus with the potential to cause a pandemic. New influenza A subtypes have the potential to cause a pandemic when they cause human illness, demonstrate efficient human-to-human transmission, and encounter little or no previously existing immunity among humans (15). Novel influenza A (H1N1) virus is not a new subtype, but because the large majority of humans appear to have no pre-existing antibody to key novel influenza A (H1N1) virus hemagglutinin epitopes, substantial potential exists for widespread infection (16).

# Health-Care Use, Hospitalizations, and Deaths Attributed to Influenza

In the United States, annual epidemics of influenza typically occur during the fall or winter months, but the peak of influenza activity can occur as late as April or May (Figure 1). Influenza-related complications requiring urgent medical care, including hospitalizations or deaths, can result from the direct effects of influenza virus infection, from complications associated with age or pregnancy, or from complications of underlying cardiopulmonary conditions or other chronic diseases. Studies that have measured rates of a clinical outcome without a laboratory confirmation of influenza virus infection (e.g., respiratory illness requiring hospitalization during influenza season) to assess the effect of influenza can be difficult to interpret because of circulation of other respiratory pathogens (e.g., respiratory syncytial virus) during the same time as influenza viruses (22-24). However, increases in health-care provider visits for acute febrile respiratory illness occur each year during the time when influenza viruses circulate. Data from the U.S. Outpatient Influenza-like Illness Surveillance Network (ILINet) demonstrate the annual increase in physician visits for influenza-like illness (ILI) during each influenza season; for 2009, the data also indicate the recent resurgence of respiratory illness associated with circulation of novel influenza A (H1N1) virus (Figure 2) (25,26).
(Figure 2 notes, partial: for seasons that include week 53, the week 53 data point is an average of weeks 52 and 1. The national baseline is the mean percentage of visits for ILI during noninfluenza weeks for the previous three seasons plus two standard deviations.)

During seasonal influenza epidemics from 1979-1980 through 2000-2001, the estimated annual overall number of influenza-associated hospitalizations in the United States ranged from approximately 55,000 to 431,000 per annual epidemic (mean: 226,000) (7). The estimated annual number of deaths attributed to influenza from the 1990-91 influenza season through the 1998-99 season ranged from 17,000 to 51,000 per epidemic (mean: 36,000) (6). In the United States, the estimated number of influenza-associated deaths increased over this period, in part because of the increasing number of persons aged >65 years who were at increased risk for death from influenza complications (6). In one study, an average of approximately 19,000 influenza-associated pulmonary and circulatory deaths per influenza season occurred during 1976-1990 compared with an average of approximately 36,000 deaths per season during 1990-1999 (6). In addition, influenza A (H3N2) viruses, which have been associated with higher mortality (27), predominated in 90% of influenza seasons during 1990-1999 compared with 57% of seasons during 1976-1990 (6).

Influenza viruses cause disease among persons in all age groups (1-5). Rates of infection are highest among children, but the risks for complications, hospitalizations, and deaths from influenza are higher among persons aged >65 years, young children, and persons of any age who have medical conditions that place them at increased risk for complications from influenza (1,4,5,28-31). Estimated rates of influenza-associated hospitalizations and deaths varied substantially by age group in studies conducted during different influenza epidemics. During 1990-1999, estimated average rates of influenza-associated pulmonary and circulatory deaths per 100,000 persons were 0.4-0.6 among persons aged 0-49 years, 7.5 among persons aged 50-64 years, and 98.3 among persons aged >65 years (6).

# Children

Among children aged <5 years, influenza-related illness is a common cause of visits to medical practices and emergency departments (EDs). During two influenza seasons (2002-03 and 2003-04), the percentage of visits among children aged <5 years with acute respiratory illness or fever caused by laboratory-confirmed influenza ranged from 10%-19% of medical office visits to 6%-29% of ED visits during the influenza season. On the basis of these data, the rate of visits to medical clinics for influenza was estimated to be 50-95 per 1,000 children, and the rate of visits to EDs was estimated to be 6-27 per 1,000 children (32). A multiyear study in New York City used viral surveillance data to estimate influenza strain-specific illness rates among ED visits. In addition to the expected variation by season and age group, influenza B epidemics were found to be an important cause of illness among school-aged children in several seasons, and annual epidemics of both influenza A and B peaked among school-aged children before other age groups (33). Retrospective studies using medical records data have demonstrated similar rates of illness among children aged <5 years during other influenza seasons (29,34,35).
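The visit and hospitalization figures in this section are expressed as events per 1,000 children or per 100,000 person-years. As a minimal illustration of that arithmetic (the counts and denominators below are invented for the example, not data from the cited studies):

```python
# Sketch: converting raw surveillance counts into the rate units used in this
# report. All numbers below are hypothetical, not data from the cited studies.

def rate_per(events: int, denominator: float, scale: int) -> float:
    """Return events per `scale` units of the denominator
    (e.g., per 1,000 children or per 100,000 person-years)."""
    return events / denominator * scale

# 950 influenza-related clinic visits among 10,000 children -> per 1,000 children
print(rate_per(950, 10_000, 1_000))    # 95.0 visits per 1,000 children

# 54 hospitalizations over 50,000 person-years -> per 100,000 person-years
print(rate_per(54, 50_000, 100_000))   # 108.0 per 100,000 person-years
```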
During the influenza season, an estimated 7-12 additional outpatient visits and 5-7 additional antibiotic prescriptions per 100 children aged <15 years have been documented when compared with periods when influenza viruses are not circulating, with rates decreasing with increasing age of the child (35). During 1993-2004 in the Boston area, the rate of ED visits for respiratory illness that was attributed to influenza virus based on viral surveillance data among children aged <7 years during the winter respiratory illness season ranged from 22.0 per 1,000 children aged 6-23 months to 5.4 per 1,000 children aged 5-7 years (36).

Rates of influenza-associated hospitalization are substantially higher among infants and young children than among older children when influenza viruses are in circulation and are similar to rates for other groups considered at high risk for influenza-related complications (37-42), including persons aged >65 years (35,39). During 1979-2001, on the basis of data from a national sample of hospital discharges of influenza-associated hospitalizations among children aged <5 years, the estimated rate of influenza-associated hospitalizations in the United States was 108 hospitalizations per 100,000 person-years (7). Recent population-based studies that measured hospitalization rates for laboratory-confirmed influenza in young children have documented hospitalization rates that are similar to or higher than rates derived from studies that analyzed hospital discharge data (32,34,41,43,44). Annual hospitalization rates for laboratory-confirmed influenza decrease with increasing age, ranging from 240-720 per 100,000 children aged <6 months to approximately 20 per 100,000 children aged 2-5 years (32). Hospitalization rates for children aged <5 years with high-risk medical conditions are approximately 250-500 per 100,000 children (29,31,45).

Influenza-associated deaths are uncommon among children. An estimated annual average of 92 influenza-related deaths (0.4 deaths per 100,000 persons) occurred among children aged <5 years, compared with an average of 32,651 deaths (98.3 per 100,000 persons) among adults aged >65 years (6). Of 153 laboratory-confirmed influenza-related pediatric deaths reported during the 2003-04 influenza season, 96 (63%) deaths occurred among children aged <5 years and 61 (40%) among children aged <2 years. Among the 149 children who died and for whom information on underlying health status was available, 100 (67%) did not have an underlying medical condition that was an indication for vaccination at that time (46). In California during the 2003-04 and 2004-05 influenza seasons, 51% of children with laboratory-confirmed influenza who died and 40% of those who required admission to an intensive care unit had no underlying medical conditions (47). These data indicate that although children with risk factors for influenza complications are at higher risk for death, the majority of pediatric deaths occur among children with no known high-risk conditions. The annual number of influenza-associated deaths among children reported to CDC for the past four influenza seasons has ranged from 44 during 2004-05 to 84 during 2007-08 (48). As of July 8, 2009, a total of 17 deaths caused by novel influenza A (H1N1) virus infection had occurred in 2009 among children in the United States (CDC, unpublished data, 2009). Death associated with laboratory-confirmed influenza virus infection among children (defined as persons aged <18 years) is a nationally reportable condition.
Deaths among children that have been attributed to co-infection with influenza and Staphylococcus aureus, particularly methicillin-resistant S. aureus (MRSA), have increased during the preceding four influenza seasons (26,49). The reason for this increase is not established but might reflect an increasing prevalence within the general population of colonization with MRSA strains, some of which carry certain virulence factors (50,51).

# Adults

Hospitalization rates during the influenza season are substantially increased for persons aged >65 years. One retrospective analysis based on data from managed-care organizations collected during 1996-2000 estimated that the risk during influenza season among persons aged >65 years with underlying conditions that put them at risk for influenza-related complications (i.e., one or more of the conditions listed as indications for vaccination) was approximately 560 influenza-associated hospitalizations per 100,000 persons compared with approximately 190 per 100,000 healthy persons. Persons aged 50-64 years with underlying medical conditions also were at substantially increased risk for hospitalizations during influenza season compared with healthy adults aged 50-64 years. No increased risk for influenza-related hospitalizations was demonstrated among healthy adults aged 50-64 years or among those aged 19-49 years, regardless of underlying medical conditions (28).

Influenza is an important contributor to the annual increase in deaths attributed to pneumonia and influenza that is observed during the winter months (Figure 3). During 1976-2001, an estimated yearly average of 32,651 (90%) influenza-related deaths occurred among adults aged >65 years (6). Risk for influenza-related death was highest among the oldest elderly, with persons aged >85 years 16 times more likely to die from an influenza-related illness than persons aged 65-69 years (6).

The duration of influenza symptoms is prolonged and the severity of influenza illness increased among persons with human immunodeficiency virus (HIV) infection (52-56). A retrospective study of young and middle-aged women enrolled in Tennessee's Medicaid program determined that the attributable risk for cardiopulmonary hospitalizations among women with HIV infection was higher during influenza seasons than it was either before or after influenza was circulating. The risk for hospitalization was higher for HIV-infected women than it was for women with other underlying medical conditions (57).

Figure 3 notes: * Each week, the vital statistics offices of 122 cities report the total number of death certificates received and the number of those for which pneumonia or influenza (P&I) was listed as the underlying or contributing cause of death by age group. The percentage of all deaths attributable to P&I is compared with a seasonal baseline and epidemic threshold value calculated for each week. † An increase of 1.645 standard deviations above the seasonal baseline of deaths is considered the "epidemic threshold," i.e., the point at which the observed proportion of deaths attributed to pneumonia or influenza is significantly higher than would be expected at that time of the year in the absence of substantial influenza-related mortality. § The seasonal baseline of P&I deaths is calculated using a periodic regression model that incorporates a robust regression procedure applied to data from the previous 5 years.
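The Figure 3 notes above describe the baseline and threshold computation only in words. The following is a minimal sketch of a periodic-regression baseline with an epidemic threshold at 1.645 standard deviations above it; the simulated weekly P&I percentages and the plain least-squares fit are illustrative assumptions (the actual surveillance system applies a robust regression procedure to 5 years of data):

```python
# Sketch: periodic-regression (Serfling-style) baseline for weekly P&I mortality
# percentages, with the epidemic threshold set at baseline + 1.645 residual SDs.
# Simulated data and ordinary least squares are illustrative assumptions only.
import numpy as np

rng = np.random.default_rng(0)
weeks = np.arange(52 * 5, dtype=float)                 # 5 years of weekly data
true_seasonal = 6.5 + 1.2 * np.cos(2 * np.pi * weeks / 52)
pni_pct = true_seasonal + rng.normal(0, 0.3, weeks.size)

# Design matrix: intercept, linear trend, and annual sine/cosine terms
X = np.column_stack([
    np.ones_like(weeks),
    weeks,
    np.sin(2 * np.pi * weeks / 52),
    np.cos(2 * np.pi * weeks / 52),
])
coef, *_ = np.linalg.lstsq(X, pni_pct, rcond=None)

baseline = X @ coef
resid_sd = (pni_pct - baseline).std(ddof=X.shape[1])
threshold = baseline + 1.645 * resid_sd                # "epidemic threshold"

exceeds = pni_pct > threshold                          # weeks flagged as epidemic
print(f"weeks above threshold: {exceeds.sum()} of {weeks.size}")
```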
Another study estimated that the risk for influenza-related death was 94-146 deaths per 100,000 persons with acquired immunodeficiency syndrome (AIDS) compared with 0.9-1.0 deaths per 100,000 persons aged 25-54 years and 64-70 deaths per 100,000 persons aged >65 years in the general population (58).

Influenza-related excess deaths among pregnant women were reported during the pandemics of 1918-1919 and 1957-1958 (59-63). Case reports and several epidemiologic studies also indicate that pregnancy increases the risk for influenza complications for the mother (64-69). The majority of studies that have attempted to assess the effect of influenza on pregnant women have measured changes in excess hospitalizations for respiratory illness during influenza season but not laboratory-confirmed influenza hospitalizations. Pregnant women have an increased number of medical visits for respiratory illnesses during influenza season compared with nonpregnant women (70). Hospitalized pregnant women with respiratory illness during influenza season have increased lengths of stay compared with hospitalized pregnant women without respiratory illness. Hospitalizations for respiratory illness were twice as common during influenza season (71). A retrospective cohort study of approximately 134,000 pregnant women conducted in Nova Scotia during 1990-2002 compared medical record data for pregnant women to data from the same women during the year before pregnancy. Among pregnant women, 0.4% were hospitalized and 25% visited a clinician during pregnancy for a respiratory illness. The rate of third-trimester hospital admissions during the influenza season was five times higher than the rate during the influenza season in the year before pregnancy and more than twice as high as the rate during the noninfluenza season. An excess of 1,210 hospital admissions in the third trimester per 100,000 pregnant women with comorbidities and 68 admissions per 100,000 women without comorbidities was reported (72). In one study, pregnant women with respiratory hospitalizations did not have an increase in adverse perinatal outcomes or delivery complications (73); another study indicated an increase in delivery complications, including fetal distress, preterm labor, and cesarean delivery. However, infants born to women with laboratory-confirmed influenza during pregnancy do not have higher rates of low birth weight, congenital abnormalities, or lower Apgar scores compared with infants born to uninfected women (64,74).

# Options for Controlling Influenza

The most effective strategy for preventing influenza is annual vaccination (10,15). Strategies that focus on providing routine vaccination to persons at higher risk for influenza complications have long been recommended, although coverage among the majority of these groups remains low. Routine vaccination of certain persons (e.g., children, contacts of persons at risk for influenza complications, and health-care personnel) who serve as a source of influenza virus transmission might provide additional protection to persons at risk for influenza complications and reduce the overall influenza burden. However, coverage levels among these persons need to be increased before effects on transmission can be measured reliably. Antiviral drugs used for chemoprophylaxis or treatment of influenza are adjuncts to vaccine but are not substitutes for annual vaccination. However, antiviral drugs might be underused among those hospitalized with influenza (75).
Nonpharmacologic interventions (e.g., advising frequent handwashing and improved respiratory hygiene) are reasonable and inexpensive; these strategies have been demonstrated to reduce respiratory diseases, and reductions in detectable influenza A viruses on hands after handwashing also have been demonstrated (76-78). Few data are available to assess the effects of community-level respiratory disease mitigation strategies (e.g., closing schools, avoiding mass gatherings, or using respiratory protection) on reducing influenza virus transmission during typical seasonal influenza epidemics (79,80).

# Influenza Vaccine Efficacy, Effectiveness, and Safety

# Evaluating Influenza Vaccine Efficacy and Effectiveness Studies

The efficacy (i.e., prevention of illness among vaccinated persons in controlled trials) and effectiveness (i.e., prevention of illness in vaccinated populations) of influenza vaccines depend in part on the age and immunocompetence of the vaccine recipient, the degree of similarity between the viruses in the vaccine and those in circulation (see Effectiveness of Influenza Vaccination when Circulating Influenza Virus Strains Differ from Vaccine Strains), and the outcome being measured. Influenza vaccine efficacy and effectiveness studies have used multiple possible outcome measures, including the prevention of medically attended acute respiratory illness (MAARI), prevention of laboratory-confirmed influenza virus illness, prevention of influenza or pneumonia-associated hospitalizations or deaths, or prevention of seroconversion to circulating influenza virus strains. Efficacy or effectiveness for more specific outcomes such as laboratory-confirmed influenza typically will be higher than for less specific outcomes such as MAARI because the causes of MAARI include infections with other pathogens that influenza vaccination would not be expected to prevent (81). Observational studies that compare less-specific outcomes among vaccinated populations to those among unvaccinated populations are subject to biases that are difficult to control for during analyses. For example, an observational study that determines that influenza vaccination reduces overall mortality might be biased if healthier persons in the study are more likely to be vaccinated (82,83). Randomized controlled trials that measure laboratory-confirmed influenza virus infections as the outcome are the most persuasive evidence of vaccine efficacy, but such trials cannot be conducted ethically among groups recommended to receive vaccine annually.

# Influenza Vaccine Composition

Both LAIV and TIV contain strains of influenza viruses that are antigenically equivalent to the annually recommended strains: one influenza A (H3N2) virus, one influenza A (H1N1) virus, and one influenza B virus. Each year, one or more virus strains in the vaccine might be changed on the basis of global surveillance for influenza viruses and the emergence and spread of new strains. For the 2009-10 influenza season, the influenza B vaccine virus strain was changed to B/Brisbane/60/2008 (a representative of the B/Victoria lineage) compared with the 2008-09 season. The influenza A (H1N1) and A (H3N2) vaccine virus strains were not changed (84). Viruses for both types of currently licensed vaccines are grown in eggs. Both vaccines are administered annually to provide optimal protection against influenza virus infection (Table 1). Both TIV and LAIV are widely available in the United States.
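For quick reference, the 2009-10 trivalent composition described above can be captured in a simple mapping (the dictionary layout is our own illustration, not an official CDC or FDA data format):

```python
# Sketch: the 2009-10 trivalent seasonal vaccine composition stated above.
# The data structure is illustrative only, not an official format.
TRIVALENT_2009_10 = {
    "A (H1N1)": "A/Brisbane/59/2007-like",  # unchanged from 2008-09
    "A (H3N2)": "A/Brisbane/10/2007-like",  # unchanged from 2008-09
    "B":        "B/Brisbane/60/2008-like",  # new for 2009-10; B/Victoria lineage
}

for component, strain in TRIVALENT_2009_10.items():
    print(f"{component}: {strain}")
```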
Although both types of vaccines are expected to be effective, the vaccines differ in several respects (Table 1).

# Major Differences Between TIV and LAIV

During the preparation of TIV, the vaccine viruses are made noninfectious (i.e., inactivated or killed) (15). Only subvirion and purified surface antigen preparations of TIV (often referred to as "split" and subunit vaccines, respectively) are available in the United States. TIV contains killed viruses and thus cannot cause influenza. LAIV contains live, attenuated influenza viruses that have the potential to cause mild signs or symptoms (e.g., runny nose, nasal congestion, fever, or sore throat). LAIV is administered intranasally by sprayer, whereas TIV is administered intramuscularly by injection. LAIV is licensed for use among nonpregnant persons aged 2-49 years; safety has not been established in persons with underlying medical conditions that confer a higher risk for influenza complications. TIV is licensed for use among persons aged >6 months, including those who are healthy and those with chronic medical conditions (Table 1).

# Correlates of Protection after Vaccination

Immune correlates of protection against influenza infection after vaccination include serum hemagglutination inhibition antibody and neutralizing antibody (20,85). Increased levels of antibody induced by vaccination decrease the risk for illness caused by strains that are antigenically similar to those strains of the same type or subtype included in the vaccine (86-89). The majority of healthy children and adults have high titers of antibody after vaccination (87,90). Although immune correlates such as achievement of certain antibody titers after vaccination correlate well with immunity on a population level, the significance of reaching or failing to reach a certain antibody threshold is not well understood on the individual level. Other immunologic correlates of protection that might best indicate clinical protection after receipt of an intranasal vaccine such as LAIV (e.g., mucosal antibody) are more difficult to measure (91,92). Laboratory measurements that correlate with protective immunity induced by LAIV have been described, including measurement of cell-mediated immunity with ELISPOT assays that measure gamma-interferon (89).

Table 1 notes (row fragment): If not simultaneously administered, the vaccine can be administered within 4 weeks of an inactivated vaccine (TIV: Yes; LAIV: Yes). * Children aged 6 months-8 years who have never received influenza vaccine before should receive 2 doses. Those who only receive 1 dose in their first year of vaccination should receive 2 doses in the following year, spaced 4 weeks apart. † Persons at higher risk for complications of influenza infection because of underlying medical conditions should not receive LAIV.
Persons at higher risk for complications of influenza infection because of underlying medical conditions include adults and children with chronic disorders of the pulmonary or cardiovascular systems; adults and children with chronic metabolic diseases (including diabetes mellitus), renal dysfunction, hemoglobinopathies, or immunosuppression; children and adolescents receiving long-term aspirin therapy (at risk for developing Reye syndrome after wild-type influenza infection); persons who have any condition (e.g., cognitive dysfunction, spinal cord injuries, seizure disorders, or other neuromuscular disorders) that can compromise respiratory function or the handling of respiratory secretions or that can increase the risk for aspiration; pregnant women; and residents of nursing homes and other chronic-care facilities that house persons with chronic medical conditions. § Clinicians and immunization programs should screen for possible reactive airways diseases when considering use of LAIV for children aged 2-4 years and should avoid use of this vaccine in children with asthma or a recent wheezing episode. Health-care providers should consult the medical record, when available, to identify children aged 2-4 years with asthma or recurrent wheezing that might indicate asthma. In addition, to identify children who might be at greater risk for asthma and possibly at increased risk for wheezing after receiving LAIV, parents or caregivers of children aged 2-4 years should be asked: "In the past 12 months, has a health-care provider ever told you that your child had wheezing or asthma?" Children whose parents or caregivers answer "yes" to this question and children who have asthma or who had a wheezing episode noted in the medical record during the preceding 12 months should not receive LAIV. ¶ LAIV coadministration has been evaluated systematically only among children aged 12-15 months who received measles, mumps, and rubella vaccine or varicella vaccine. TIV coadministration has been evaluated systematically only among adults who received pneumococcal polysaccharide or zoster vaccine.

# Immunogenicity, Efficacy, and Effectiveness of TIV

# Children

Children aged >6 months typically have protective levels of anti-influenza antibody against specific influenza virus strains after receiving the recommended number of doses of influenza vaccine (85,90,93-97). In most seasons, one or more vaccine antigens are changed compared with the previous season. In consecutive years when vaccine antigens change, children aged <9 years who received only 1 dose of vaccine in their first year of vaccination are less likely to have protective antibody responses when administered only a single dose during their second year of vaccination compared with children who received 2 doses in their first year of vaccination (98-100). When the vaccine antigens do not change from one season to the next, priming children aged 6-23 months with a single dose of vaccine in the spring followed by a dose in the fall engenders similar antibody responses compared with a regimen of 2 doses in the fall (101). However, one study conducted during a season when the vaccine antigens did not change compared with the previous season estimated 62% effectiveness against ILI for healthy children who had received only 1 dose in the previous influenza season and only 1 dose in the study season compared with 82% for those who received 2 doses separated by >4 weeks during the study season (102).
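The pediatric dosing rule described above (and in the Table 1 notes) can be expressed as a small decision function. This is an illustrative sketch with invented function and parameter names, not clinical decision software:

```python
# Sketch of the pediatric dosing rule described above: children aged <9 years
# receive 2 doses (>=4 weeks apart) in their first season of vaccination, and
# those who received only 1 dose in their first season receive 2 doses the
# following season. Function and parameter names are our own invention.
def recommended_doses(age_years: float,
                      first_season: bool,
                      second_season_after_single_dose: bool = False) -> int:
    """Return the number of influenza vaccine doses recommended this season."""
    if age_years >= 9:
        return 1
    if first_season:
        return 2      # never vaccinated before: 2 doses, >=4 weeks apart
    if second_season_after_single_dose:
        return 2      # only 1 dose received in the first season: catch up now
    return 1

print(recommended_doses(4, first_season=True))                    # 2
print(recommended_doses(4, first_season=False,
                        second_season_after_single_dose=True))    # 2
print(recommended_doses(12, first_season=True))                   # 1
```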
The antibody response among children at higher risk for influenza-related complications (e.g., children with chronic medical conditions) might be lower than those reported typically among healthy children (103,104). However, antibody responses among children with asthma are similar to those of healthy children and are not substantially altered during asthma exacerbations requiring short-term prednisone treatment (105).

Vaccine effectiveness studies also have indicated that 2 doses are needed to provide adequate protection during the first season that young children are vaccinated. Among children aged <5 years who have never received influenza vaccine previously or who received only 1 dose of influenza vaccine in their first year of vaccination, vaccine effectiveness is lower compared with children who received 2 doses in their first year of being vaccinated. Two large retrospective studies of young children who had received only 1 dose of TIV in their first year of being vaccinated determined that no decrease was observed in ILI-related office visits compared with unvaccinated children (102,106). Similar results were reported in a case-control study of children aged 6-59 months (107). These results, along with the immunogenicity data indicating that antibody responses are significantly higher when young children are given 2 doses, are the basis for the recommendation that all children aged <9 years who are being vaccinated for the first time should receive 2 vaccine doses separated by at least 4 weeks.

Estimates of vaccine efficacy or effectiveness among children aged >6 months have varied by season and study design. In a randomized trial conducted during five influenza seasons (1985-1990) in the United States among children aged 1-15 years, annual vaccination reduced laboratory-confirmed influenza A substantially (77%-91%) (87). A limited 1-year placebo-controlled study reported vaccine efficacy against laboratory-confirmed influenza illness of 56% among healthy children aged 3-9 years and 100% among healthy children and adolescents aged 10-18 years (108). A randomized, double-blind, placebo-controlled trial conducted during two influenza seasons among children aged 6-24 months indicated that efficacy was 66% against culture-confirmed influenza illness during the 1999-00 influenza season but did not reduce culture-confirmed influenza illness significantly during the 2000-01 influenza season (109). A case-control study conducted during the 2003-04 season found vaccine effectiveness of 49% against laboratory-confirmed influenza (107). An observational study among children aged 6-59 months with laboratory-confirmed influenza compared with children who tested negative for influenza reported vaccine effectiveness of 44% in the 2003-04 influenza season and 57% during the 2004-05 season (110). Partial vaccination (only 1 dose for children being vaccinated for the first time) was not effective in either study. During an influenza season (2003-04) with a suboptimal vaccine match, a retrospective cohort study conducted among approximately 30,000 children aged 6 months-8 years indicated vaccine effectiveness of 51% against medically attended, clinically diagnosed pneumonia or influenza (i.e., no laboratory confirmation of influenza) among fully vaccinated children and 49% among approximately 5,000 children aged 6-23 months (106).
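The efficacy and effectiveness percentages quoted throughout this section are computed as one minus a measure of relative risk (a risk or rate ratio in cohort designs, an odds ratio in case-control designs). A minimal worked illustration, with invented numbers rather than the cited studies' data:

```python
# Sketch: vaccine efficacy/effectiveness (VE) as 1 - relative risk (cohort
# designs) or 1 - odds ratio (case-control designs). Numbers are invented.

def ve_from_attack_rates(ar_vaccinated: float, ar_unvaccinated: float) -> float:
    """VE = 1 - (attack rate in vaccinated / attack rate in unvaccinated)."""
    return 1.0 - ar_vaccinated / ar_unvaccinated

def ve_from_odds_ratio(odds_ratio: float) -> float:
    """VE = 1 - OR, the usual case-control approximation."""
    return 1.0 - odds_ratio

# 2.5% attack rate among vaccinated vs 5.0% among unvaccinated -> VE = 50%
print(f"{ve_from_attack_rates(0.025, 0.050):.0%}")  # 50%
# A case-control odds ratio of 0.51 -> VE = 49%
print(f"{ve_from_odds_ratio(0.51):.0%}")            # 49%
```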
Another retrospective cohort study of similar size conducted during the same influenza season in Denver but limited to healthy children aged 6-21 months estimated clinical effectiveness of 2 TIV doses to be 87% against pneumonia or influenza-related office visits (102). Among children, TIV effectiveness might increase with age (87,111). A systematic review of published studies estimated vaccine effectiveness at 59% for children aged >2 years but concluded that additional evidence was needed to demonstrate effectiveness among children aged 6 months-2 years (112).

Because of the recognized influenza-related disease burden among children with other chronic diseases or immunosuppression and the long-standing recommendation for vaccination of these children, randomized placebo-controlled studies to study efficacy in these children have not been conducted. In a nonrandomized controlled trial among children aged 2-6 years and 7-14 years who had asthma, vaccine efficacy was 54% and 78% against laboratory-confirmed influenza type A infection and 22% and 60% against laboratory-confirmed influenza type B infection, respectively. Vaccinated children aged 2-6 years with asthma did not have substantially fewer type B influenza virus infections compared with the control group in this study (113). The association between vaccination and prevention of asthma exacerbations is unclear. One study suggested that vaccination might provide protection against asthma exacerbations (114); however, other studies of children with asthma have not demonstrated decreased exacerbations (115).

TIV has been demonstrated to reduce acute otitis media in some studies. Two studies have reported that TIV decreases the risk for influenza-related otitis media by approximately 30% among children with mean ages of 20 and 27 months, respectively (116,117). However, a large study conducted among children with a mean age of 14 months indicated that TIV was not effective against acute otitis media (109). Influenza vaccine effectiveness against a nonspecific clinical outcome such as acute otitis media, which is caused by a variety of pathogens and is not typically diagnosed using influenza virus culture, would be expected to be relatively low.

# Adults Aged <65 Years

One dose of TIV is highly immunogenic in healthy adults aged <65 years. Limited or no increase in antibody response is reported among adults when a second dose is administered during the same season (118-120). When the vaccine and circulating viruses are antigenically similar, TIV prevents laboratory-confirmed influenza illness among approximately 70%-90% of healthy adults aged <65 years in randomized controlled trials (121-124). Vaccination of healthy adults also has resulted in decreased work absenteeism and decreased use of health-care resources, including use of antibiotics, when the vaccine and circulating viruses are well-matched (121-123). Efficacy or effectiveness against laboratory-confirmed influenza illness was 47%-77% in studies conducted during different influenza seasons when the vaccine strains were antigenically dissimilar to the majority of circulating strains (117,119,121-124). However, effectiveness among healthy adults against influenza-related hospitalization, measured in the most recent of these studies, was 90% (125).
In certain studies, persons with certain chronic diseases have lower serum antibody responses after vaccination compared with healthy young adults and can remain susceptible to influenza virus infection and influenza-related upper respiratory tract illness (126,127). Vaccine effectiveness among adults aged <65 years who are at higher risk for influenza complications typically is lower than that reported for healthy adults. In a case-control study conducted during the 2003-04 influenza season, when the vaccine was a suboptimal antigenic match to many circulating virus strains, effectiveness for prevention of laboratory-confirmed influenza illness among adults aged 50-64 years with high-risk conditions was 48% compared with 60% for healthy adults (125). Effectiveness against hospitalization among adults aged 50-64 years with high-risk conditions was 36% compared with 90% effectiveness among healthy adults in that age range (125). A randomized controlled trial among adults in Thailand with chronic obstructive pulmonary disease (median age: 68 years) indicated a vaccine effectiveness of 76% in preventing laboratory-confirmed influenza during a season when viruses were well-matched to vaccine viruses. Effectiveness did not decrease with increasing severity of underlying lung disease (128).

Few randomized controlled trials have studied the effect of influenza vaccination on noninfluenza outcomes. A randomized controlled trial conducted in Argentina among 301 adults hospitalized with myocardial infarction or undergoing angioplasty for cardiovascular disease (56% of whom were aged >65 years) found that a significantly lower percentage (6%) of cardiovascular deaths occurred among vaccinated persons at 1 year after vaccination compared with unvaccinated persons (17%) (129). A randomized, double-blind, placebo-controlled study conducted in Poland among 658 persons with coronary artery disease indicated that significantly fewer vaccinated persons had a cardiac ischemic event during the 9 months of follow-up compared with unvaccinated persons (p<0.05) (130).

Observational studies that have measured clinical endpoints without laboratory confirmation of influenza virus infection typically have demonstrated substantial reductions in hospitalizations or deaths among adults with risk factors for influenza complications. In a case-control study conducted during 1999-2000 in Denmark among adults aged <65 years with underlying medical conditions, vaccination reduced deaths attributable to any cause by 78% and reduced hospitalizations attributable to respiratory infections or cardiopulmonary diseases by 87% (131). A benefit was reported after the first vaccination and increased with subsequent vaccinations in subsequent years (132). Among patients with diabetes mellitus, vaccination was associated with a 56% reduction in any complication, a 54% reduction in hospitalizations, and a 58% reduction in deaths (133). Certain experts have noted that the substantial effects on morbidity and mortality among those who received influenza vaccination in these observational studies should be interpreted with caution because of the difficulties in ensuring that those who received vaccination had similar baseline health status as those who did not (82,83). One meta-analysis of published studies concluded that evidence was insufficient to demonstrate that persons with asthma benefit from vaccination (134).
However, a meta-analysis that examined effectiveness among persons with chronic obstructive pulmonary disease identified evidence of benefit from vaccination (135).

# Immunocompromised Persons

TIV produces adequate antibody concentrations against influenza among vaccinated HIV-infected persons who have minimal AIDS-related symptoms and normal or near-normal CD4+ T-lymphocyte cell counts (136-138). Among persons who have advanced HIV disease and low CD4+ T-lymphocyte cell counts, TIV might not induce protective antibody titers (138,139); a second dose of vaccine does not improve the immune response in these persons (139,140). A randomized, placebo-controlled trial determined that TIV was highly effective in preventing symptomatic, laboratory-confirmed influenza virus infection among HIV-infected persons with a mean of 400 CD4+ T-lymphocyte cells/mm3, although few persons with low CD4+ T-lymphocyte cell counts were included. Another study found that influenza vaccination was most effective among persons with >100 CD4+ cells and among those with <30,000 viral copies of HIV type-1/mL (53). On the basis of certain limited studies, immunogenicity for persons with solid organ transplants varies according to transplant type. Among persons with kidney or heart transplants, the proportion who developed seroprotective antibody concentrations was similar or slightly reduced compared with healthy persons (141-143). However, a study among persons with liver transplants indicated reduced immunologic responses to influenza vaccination (144-146), especially if vaccination occurred within the 4 months after the transplant procedure (144).

# Pregnant Women and Neonates

Pregnant women have protective levels of anti-influenza antibodies after vaccination (147,148). Passive transfer of anti-influenza antibodies that might provide protection from vaccinated women to neonates has been reported (147,149-151). A retrospective, clinic-based study conducted during 1998-2003 documented a nonsignificant trend toward fewer episodes of MAARI during one influenza season among vaccinated pregnant women compared with unvaccinated pregnant women and substantially fewer episodes of MAARI during the peak influenza season (148). However, a retrospective study conducted during 1997-2002 that used clinical records data did not indicate a reduction in ILI among vaccinated pregnant women or their infants (152). In another study conducted during 1995-2001, medical visits for respiratory illness among the infants were not substantially reduced (153). One randomized controlled trial conducted in Bangladesh that provided vaccination to pregnant women during the third trimester demonstrated a 36% reduction in respiratory illness with fever among the women and a 29% reduction in respiratory illness with fever among their infants during the first 6 months after birth. In addition, infants born to vaccinated women had a 63% reduction in laboratory-confirmed influenza illness during the first 6 months of life (154). All women in this trial breastfed their infants (mean duration: 14 weeks).

# Older Adults

Adults aged >65 years typically have a diminished immune response to influenza vaccination compared with young healthy adults, suggesting that immunity might be of shorter duration (although still extending through one influenza season) (155,156). However, a review of the published literature concluded that no clear evidence existed that immunity declined more rapidly in the elderly (157), and additional vaccine doses during the same season do not increase the antibody response (118,120).
Infections among the vaccinated elderly might be associated with an age-related reduction in the ability to respond to vaccination rather than a reduced duration of immunity (127,128). One prospective cohort study found that immunogenicity among hospitalized persons who were either aged >65 years or aged 18-64 years with one or more chronic medical conditions was similar to that among outpatients (158).

The only randomized controlled trial among community-dwelling persons aged >60 years reported a vaccine efficacy of 58% (CI = 26%-77%) against laboratory-confirmed influenza illness during a season when the vaccine strains were considered to be well-matched to circulating strains (159). Additional information from this trial published separately indicated that efficacy among those aged >70 years was 57% (CI = -36%-87%), similar to younger persons. However, few persons aged >75 years participated in this study, and the wide confidence interval for the estimate of efficacy among participants aged >70 years included 0 (160). Influenza vaccine effectiveness in preventing MAARI among the elderly in nursing homes has been estimated at 20%-40% (161,162), and reported outbreaks among well-vaccinated nursing home populations have suggested that vaccination might not have any significant effectiveness when circulating strains are drifted from vaccine strains (163,164). In contrast, some studies have indicated that vaccination can be up to 80% effective in preventing influenza-related death (161,165-167). Among elderly persons not living in nursing homes or similar long-term-care facilities, influenza vaccine is 27%-70% effective in preventing hospitalization for pneumonia and influenza (168-170). Influenza vaccination reduces the frequency of secondary complications and reduces the risk for influenza-related hospitalization and death among community-dwelling adults aged >65 years with and without high-risk medical conditions (e.g., heart disease and diabetes) (169-174). However, studies demonstrating large reductions in hospitalizations and deaths among the vaccinated elderly have been conducted using medical record databases and have not measured reductions in laboratory-confirmed influenza illness. These studies have been challenged because of concerns that they have not adequately controlled for differences in the propensity for healthier persons to be more likely than less healthy persons to receive vaccination (82,83,166,175-177).

# TIV Dosage, Administration, and Storage

The composition of TIV varies according to manufacturer, and package inserts should be consulted. TIV formulations in multidose vials contain the vaccine preservative thimerosal; preservative-free, single-dose preparations also are available. TIV should be stored at 35°F-46°F (2°C-8°C) and should not be frozen. TIV that has been frozen should be discarded. Dosage recommendations and schedules vary according to age group (Table 2). Vaccine prepared for a previous influenza season should not be administered to provide protection for any subsequent season.

The intramuscular route is recommended for TIV. Adults and older children should be vaccinated in the deltoid muscle. A needle length of >1 inch (>25 mm) should be considered for persons in these age groups because needles of <1 inch might be of insufficient length to penetrate muscle tissue in certain adults and older children (178).
When injecting into the deltoid muscle among children with adequate deltoid muscle mass, a needle length of 7/8-1.25 inches is recommended (179). Infants and young children should be vaccinated in the anterolateral aspect of the thigh. A needle length of 7/8-1 inch should be used for children aged <12 months.

Table notes (LAIV): In addition, to identify children who might be at greater risk for asthma and possibly at increased risk for wheezing after receiving FluMist, parents or caregivers of children aged 2-4 years should be asked: "In the past 12 months, has a health-care provider ever told you that your child had wheezing or asthma?" Children whose parents or caregivers answer "yes" to this question and children who have asthma or who had a wheezing episode noted in the medical record during the preceding 12 months should not receive FluMist. †† Two doses administered at least 4 weeks apart are recommended for children aged 2-8 years who are receiving LAIV for the first time, and those who only received 1 dose in their first year of vaccination should receive 2 doses in the following year.

# Adverse Events After Receipt of TIV

# Children

Studies support the safety of annual TIV in children and adolescents. The largest published postlicensure population-based study assessed TIV safety in 251,600 children aged <18 years (including 8,476 vaccinations in children aged 6-23 months) who were enrolled in one of five health maintenance organizations (HMOs) participating in the Vaccine Safety Datalink (VSD) during 1993-1999. This study indicated no increase in clinically important medically attended events during the 2 weeks after inactivated influenza vaccination compared with control periods 3-4 weeks before and after vaccination (180). A retrospective cohort study using VSD medical records data from 45,356 children aged 6-23 months provided additional evidence supporting the overall safety of TIV in this age group. During the 2 weeks after vaccination, TIV was not associated with statistically significant increases in any clinically important medically attended events other than gastritis/duodenitis, and 13 diagnoses, including acute upper respiratory illness, otitis media, and asthma, were significantly less common (181). On chart review, most children with a diagnosis of gastritis/duodenitis had self-limited vomiting or diarrhea. The positive or negative associations between TIV and any of these diagnoses do not necessarily indicate a causal relationship (181).

In a study of 791 healthy children aged 1-15 years, postvaccination fever was noted among 12% of those aged 1-5 years, 5% among those aged 6-10 years, and 5% among those aged 11-15 years (87). Fever, malaise, myalgia, and other systemic symptoms that can occur after vaccination with inactivated vaccine most often affect persons who have had no previous exposure to the influenza virus antigens in the vaccine (e.g., young children) (182,183). These reactions begin 6-12 hours after vaccination and can persist for 1-2 days. Data about potential adverse events among children after influenza vaccination are available from the Vaccine Adverse Event Reporting System (VAERS). Because of the limitations of passive reporting systems, determining causality for specific types of adverse events usually is not possible using VAERS data alone.
Published reviews of VAERS reports submitted after administration of TIV to children aged 6-23 months indicated that the most frequently reported adverse events were fever, rash, injection-site reactions, and seizures; the majority of the limited number of reported seizures appeared to be febrile (184,185). Seizure and fever were the leading serious adverse events (SAEs), defined using standard criteria, reported to VAERS in these studies (184,185); further investigation in VSD did not confirm an association with febrile seizures as identified in VAERS (181).

# Adults

In placebo-controlled studies among adults, the most frequent side effect of vaccination was soreness at the vaccination site (affecting 10%-64% of patients) that typically lasted <2 days. One prospective cohort study found that the rate of adverse events among hospitalized persons who were either aged >65 years or aged 18-64 years with one or more chronic medical conditions was similar compared with outpatients (158). Adverse events in adults aged >18 years reported to VAERS during 1990-2005 were analyzed. The most common adverse events reported to VAERS in adults included injection-site reactions, pain, fever, myalgia, and headache. The VAERS review identified no new safety concerns. In clinical trials, SAEs were reported to occur after vaccination with TIV at a rate of <1%. A small proportion (14%) of the TIV VAERS reports in adults were classified as SAEs, without assessment of causality. The most common SAE reported after TIV in VAERS in adults was Guillain-Barré syndrome (GBS) (189). The potential association between TIV and GBS has been an area of ongoing research (see Guillain-Barré Syndrome and TIV).

# Pregnant Women and Neonates

FDA has classified TIV as a "Pregnancy Category C" medication, indicating that adequate animal reproduction studies have not been conducted. Available data indicate that influenza vaccine does not cause fetal harm when administered to a pregnant woman or affect reproductive capacity. One study of approximately 2,000 pregnant women who received TIV during pregnancy demonstrated no adverse fetal effects and no adverse effects during infancy or early childhood (190). A matched case-control study of 252 pregnant women who received TIV within the 6 months before delivery determined no adverse events after vaccination among pregnant women and no difference in pregnancy outcomes compared with 826 pregnant women who were not vaccinated (148). During 2000-2003, an estimated 2 million pregnant women were vaccinated, and only 20 adverse events among women who received TIV were reported to VAERS during this time, including nine injection-site reactions and eight systemic reactions (e.g., fever, headache, and myalgias). In addition, three miscarriages were reported, but these were not known to be causally related to vaccination (191). Similar results have been reported in certain smaller studies (147,149,192), and a recent international review of data on the safety of TIV concluded that no evidence exists to suggest harm to the fetus (193). The rate of adverse events associated with TIV was similar to the rate of adverse events among pregnant women who received pneumococcal polysaccharide vaccine in one small randomized controlled trial in Bangladesh, and no severe adverse events were reported in any study group (154).

# Persons with Chronic Medical Conditions

In a randomized cross-over study of children and adults with asthma, no increase in asthma exacerbations was reported for either age group (194), and two additional studies also have indicated no increase in wheezing among vaccinated asthmatic children (114) or adults (195).
One study reported that 20%-28% of children with asthma aged 9 months-18 years had local pain and swelling at the site of influenza vaccination (104), and another study reported that 23% of children aged 6 months-4 years with chronic heart or lung disease had local reactions (93). A blinded, randomized, cross-over study of 1,952 adults and children with asthma demonstrated that only self-reported "body aches" were reported more frequently after TIV (25%) than after placebo injection (21%) (194). However, a placebo-controlled trial of TIV indicated no difference in local reactions among 53 children aged 6 months-6 years with high-risk medical conditions or among 305 healthy children aged 3-12 years (97). Among children with high-risk medical conditions, one study of 52 children aged 6 months-3 years reported fever among 27% and irritability and insomnia among 25% (93), and a study among 33 children aged 6-18 months reported that one child had irritability and one had a fever and seizure after vaccination (196). No placebo comparison group was used in these studies.

# Immunocompromised Persons

Data demonstrating the safety of TIV for HIV-infected persons are limited, but no evidence exists that vaccination has a clinically important impact on HIV infection or immunocompetence. One study demonstrated a transient (i.e., 2-4 week) increase in HIV RNA (ribonucleic acid) levels in one HIV-infected person after influenza virus infection (197). Studies have demonstrated a transient increase in replication of HIV-1 in the plasma or peripheral blood mononuclear cells of HIV-infected persons after vaccine administration (138,198). However, more recent and better-designed studies have not documented a substantial increase in the replication of HIV (199-202). CD4+ T-lymphocyte cell counts or progression of HIV disease have not been demonstrated to change substantially after influenza vaccination among HIV-infected persons compared with unvaccinated HIV-infected persons (138,203). Limited information is available about the effect of antiretroviral therapy on increases in HIV RNA levels after either natural influenza virus infection or influenza vaccination (52,204). Data are similarly limited for persons with other immunocompromising conditions. In small studies, vaccination did not affect allograft function or cause rejection episodes in recipients of kidney transplants (141,142), heart transplants (143), or liver transplants (144).

# Immediate Hypersensitivity Reactions After Influenza Vaccines

Vaccine components can rarely cause allergic reactions, also called immediate hypersensitivity reactions, among certain recipients. Immediate hypersensitivity reactions are mediated by preformed immunoglobulin E (IgE) antibodies against a vaccine component and usually occur within minutes to hours of exposure (205). Symptoms of immediate hypersensitivity range from mild urticaria (hives) and angioedema to anaphylaxis. Anaphylaxis is a severe, life-threatening reaction that involves multiple organ systems and can progress rapidly. Symptoms and signs of anaphylaxis can include but are not limited to generalized urticaria, wheezing, swelling of the mouth and throat, difficulty breathing, vomiting, hypotension, decreased level of consciousness, and shock. Minor symptoms such as red eyes or hoarse voice also might be present (179,205-208).
Allergic reactions might be caused by the vaccine antigen, residual animal protein, antimicrobial agents, preservatives, stabilizers, or other vaccine components (209). Manufacturers use a variety of compounds to inactivate influenza viruses and add antibiotics to prevent bacterial growth. Package inserts for specific vaccines of interest should be consulted for additional information. ACIP has recommended that all vaccine providers should be familiar with the office emergency plan and be certified in cardiopulmonary resuscitation (179). The Clinical Immunization Safety Assessment (CISA) network, a collaboration between CDC and six medical research centers with expertise in vaccination safety, has developed an algorithm to guide evaluation and revaccination decisions for persons with suspected immediate hypersensitivity after vaccination (205). Immediate hypersensitivity reactions after TIV and LAIV are rare. A VSD study of children aged <18 years in four HMOs during 1991-1997 estimated the overall risk for postvaccination anaphylaxis to be less than 1 case per 500,000 doses administered; no cases were identified among TIV recipients in this study (210). Anaphylaxis occurring after receipt of TIV or LAIV in adults has rarely been reported to VAERS (189). Some immediate hypersensitivity reactions after TIV or LAIV are caused by the presence of residual egg protein in the vaccines (211). Although influenza vaccines contain only a limited quantity of egg protein, this protein can induce immediate hypersensitivity reactions among persons who have severe egg allergy. Asking persons if they can eat eggs without adverse effects is a reasonable way to determine who might be at risk for allergic reactions from receiving influenza vaccines (179). Persons who have had symptoms such as hives or swelling of the lips or tongue, or who have experienced acute respiratory distress after eating eggs, should consult a physician for appropriate evaluation to help determine if future influenza vaccine should be administered. Persons who have documented IgE-mediated hypersensitivity to eggs, including those who have had occupational asthma related to egg exposure or other allergic responses to egg protein, also might be at increased risk for allergic reactions to influenza vaccine, and consultation with a physician before vaccination should be considered (212-214). A regimen has been developed for administering influenza vaccine to asthmatic children with severe disease and egg hypersensitivity (213). Hypersensitivity reactions to other vaccine components also can rarely occur. Although exposure to vaccines containing thimerosal can lead to hypersensitivity (215), the majority of patients do not have reactions to thimerosal when it is administered as a component of vaccines, even when patch or intradermal tests for thimerosal indicate hypersensitivity (216,217). When reported, hypersensitivity to thimerosal typically has consisted of local delayed hypersensitivity reactions (216).

# Ocular and Respiratory Symptoms After TIV

Ocular or respiratory symptoms have occasionally been reported within 24 hours after TIV administration, but these symptoms typically are mild and resolve quickly without specific treatment. In some trials conducted in the United States, ocular or respiratory symptoms included red eyes (<1%-6%), cough (1%-7%), wheezing (1%), and chest tightness (1%-3%) (207,208,218-220). However, most of these trials were not placebo-controlled, and causality cannot be determined.
In addition, ocular and respiratory symptoms are features of a variety of respiratory illnesses and seasonal allergies that would be expected to occur coincidentally among vaccine recipients, unrelated to vaccination. A placebo-controlled vaccine effectiveness study among young adults found that 2% of persons who received the 2006-07 formulation of Fluzone (Sanofi Pasteur) reported red eyes compared with none of the controls (p = 0.03) (221). A similar trial conducted during the 2005-06 influenza season found that 3% of Fluzone recipients reported red eyes compared with 1% of placebo recipients; however, the difference was not statistically significant (222). Oculorespiratory syndrome (ORS), an acute, self-limited reaction to TIV with prominent ocular and respiratory symptoms, was first described during the 2000-01 influenza season in Canada. The initial case definition for ORS was the onset of one or more of the following within 2-24 hours after receiving TIV: bilateral red eyes, facial swelling, or respiratory symptoms (cough, wheezing, chest tightness, difficulty breathing, sore throat, hoarseness, or difficulty swallowing) (223). ORS was strongly associated with one vaccine preparation (Fluviral S/F, Shire Biologics, Quebec, Canada) that was not available in the United States during the 2000-01 influenza season (224). Subsequent investigations identified persons with ocular or respiratory symptoms meeting an ORS case definition in safety monitoring systems and trials that had been conducted before 2000 in Canada, the United States, and several European countries (225-227). The cause of ORS has not been established; however, studies suggest the reaction is not IgE-mediated (228). After changes in the manufacturing process of the vaccine preparation associated with ORS during 2000-01, the incidence of ORS in Canada was greatly reduced (226). In one placebo-controlled study, only hoarseness, cough, and itchy or sore eyes (but not red eyes) were significantly associated with a reformulated Fluviral preparation. These findings indicated that ORS symptoms following use of the reformulated vaccine were mild, resolved within 24 hours, and might not typically be of sufficient concern to cause vaccine recipients to seek medical care (229). Ocular and respiratory symptoms reported after TIV administration, including ORS, have some similarities with immediate hypersensitivity reactions. One study indicated that the risk for ORS recurrence with subsequent vaccination is low, and persons with ocular or respiratory symptoms (e.g., bilateral red eyes, cough, sore throat, or hoarseness) after TIV that did not involve the lower respiratory tract have been revaccinated without reports of SAEs after subsequent exposure to TIV (230). VAERS routinely monitors for adverse events such as ocular or respiratory symptoms after receipt of TIV.

# Contraindications and Precautions for Use of TIV

TIV is contraindicated and should not be administered to persons known to have anaphylactic hypersensitivity to eggs or to other components of the influenza vaccine unless the recipient has been desensitized. Prophylactic use of antiviral agents is an option for preventing influenza among such persons. Information about vaccine components is located in package inserts from each manufacturer.
Persons with moderate to severe acute febrile illness usually should not be vaccinated until their symptoms have abated. Moderate or severe acute illness, with or without fever, is a precaution§ for TIV. GBS within 6 weeks following a previous dose of influenza vaccine also is considered a precaution for use of influenza vaccines.

# Revaccination in Persons Who Experienced Ocular or Respiratory Symptoms After TIV

When assessing whether a patient who experienced ocular and respiratory symptoms should be revaccinated, providers should determine if concerning signs and symptoms of IgE-mediated immediate hypersensitivity are present (see Immediate Hypersensitivity Reactions After Influenza Vaccines). Health-care providers who are unsure whether symptoms reported or observed after TIV represent an IgE-mediated hypersensitivity immune response should seek advice from an allergist/immunologist. Persons with symptoms of possible IgE-mediated hypersensitivity after TIV should not receive influenza vaccination unless hypersensitivity is ruled out or revaccination is administered under close medical supervision (205). Ocular or respiratory symptoms observed after TIV often are coincidental and unrelated to TIV administration, as observed among placebo recipients in some randomized controlled studies. Determining whether ocular or respiratory symptoms are coincidental or related to possible ORS might not be possible. Persons who have had red eyes, mild upper facial swelling, or mild respiratory symptoms (e.g., sore throat, cough, or hoarseness) after TIV without other concerning signs or symptoms of hypersensitivity can receive TIV in subsequent seasons without further evaluation. Two studies showed that persons who had symptoms of ORS after TIV were at higher risk for ORS after subsequent TIV administration; however, these events usually were milder than the first episode (230,231).

# Guillain-Barré Syndrome and TIV

The annual incidence of GBS is 10-20 cases per 1 million adults (232). Substantial evidence exists that multiple infectious illnesses, most notably Campylobacter jejuni gastrointestinal infections and upper respiratory tract infections, are associated with GBS (233-235). A recent study identified serologically confirmed influenza virus infection as a trigger of GBS, with time from onset of influenza illness to GBS of 3-30 days. The estimated frequency of influenza-related GBS was four to seven times higher than the frequency that has been estimated for influenza-vaccine-associated GBS (236). The 1976 swine influenza vaccine was associated with an increased frequency of GBS, estimated at one additional case of GBS per 100,000 persons vaccinated (237,238). The risk for influenza-vaccine-associated GBS was higher among persons aged >25 years than among persons aged <25 years (239). However, obtaining epidemiologic evidence for a small increase in risk for a rare condition with multiple causes is difficult, and consistent evidence for a causal relation between vaccines prepared from other influenza viruses and GBS is lacking. None of the studies conducted using influenza vaccines other than the 1976 swine influenza vaccine has demonstrated a substantial increase in GBS associated with influenza vaccines. During three of four influenza seasons studied during 1977-1991, the overall relative risk estimates for GBS after influenza vaccination were not statistically significant (240-242).
However, in a study of the 1992-93 and 1993-94 seasons, the overall relative risk for GBS was 1.7 (CI = 1.0-2.8; p = 0.04) during the 6 weeks after vaccination, representing approximately one additional case of GBS per 1 million persons vaccinated; the combined number of GBS cases peaked 2 weeks after vaccination (238). Results of a study that examined health-care data from Ontario, Canada, during 1992-2004 demonstrated a small but statistically significant temporal association between receiving influenza vaccination and subsequent hospital admission for GBS. However, no increase in cases of GBS at the population level was reported after introduction of a mass public influenza vaccination program in Ontario beginning in 2000 (243). Data from VAERS have documented decreased reporting of GBS occurring after vaccination across age groups over time, despite overall increased reporting of other, non-GBS conditions occurring after administration of influenza vaccine (237). Published data from the United Kingdom's General Practice Research Database (GPRD) found influenza vaccine to be associated with a decreased risk for GBS, although whether this finding reflects protection against influenza or confounding from a "healthy vaccinee" effect (i.e., healthier persons might be more likely to be vaccinated and also be at lower risk for GBS) is unclear (244). A separate GPRD analysis found no association between vaccination and GBS for a 9-year period; only three cases of GBS occurred within 6 weeks after administration of influenza vaccine (245). A third GPRD analysis found that GBS was associated with recent ILI but not with influenza vaccination (246).

The estimated risk for GBS (on the basis of the few studies that have demonstrated an association between vaccination and GBS) is low (i.e., approximately one additional case per 1 million persons vaccinated). The potential benefits of influenza vaccination in preventing serious illness, hospitalization, and death substantially outweigh these estimates of risk for vaccine-associated GBS. No evidence indicates that the case-fatality ratio for GBS differs between vaccinated and unvaccinated persons.

§ A precaution is a condition in a recipient that might increase the risk for a serious adverse reaction or that might compromise the ability of the vaccine to produce immunity (179).

# Use of TIV Among Patients with a History of GBS

The incidence of GBS among the general population is low, but persons with a history of GBS have a substantially greater likelihood of subsequently experiencing GBS than persons without such a history (232). Thus, the likelihood of coincidentally experiencing GBS after influenza vaccination is expected to be greater among persons with a history of GBS than among persons with no history of this syndrome. Whether influenza vaccination specifically might increase the risk for recurrence of GBS is unknown. Among 311 patients with GBS who responded to a survey, 11 (4%) reported some worsening of symptoms after influenza vaccination; however, some of these patients had received other vaccines at the same time, and recurring symptoms were generally mild (247). However, as a precaution, persons who are not at high risk for severe influenza complications and who are known to have experienced GBS within 6 weeks of a previous influenza vaccination generally should not be vaccinated. As an alternative, physicians might consider using influenza antiviral chemoprophylaxis for these persons.
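For orientation, the approximately-one-case-per-million figure cited above is consistent with a simple attributable-risk calculation based on the background incidence of GBS (10-20 cases per 1 million adults per year) and the relative risk of 1.7 reported for the 6-week postvaccination window; the arithmetic below is an illustrative reconstruction, not a computation taken from the cited studies:

\[
\underbrace{(10\text{--}20 \text{ per } 10^{6} \text{ per year}) \times \tfrac{6}{52}}_{\text{background risk in a 6-week window} \;\approx\; 1.2\text{--}2.3 \text{ per } 10^{6}} \times \; (RR - 1) \;\approx\; (1.2\text{--}2.3 \text{ per } 10^{6}) \times 0.7 \;\approx\; 1 \text{ excess case per } 10^{6} \text{ vaccinees.}
\]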
Although data are limited, the established benefits of influenza vaccination might outweigh the risks for many persons who have a history of GBS and who also are at high risk for severe complications from influenza.

# Vaccine Preservative (Thimerosal) in Multidose Vials of TIV

Thimerosal, a mercury-containing antibacterial compound, has been used as a preservative in vaccines and other medications since the 1930s (248) and is used in multidose vial preparations of TIV to reduce the likelihood of bacterial growth. No scientific evidence indicates that thimerosal in vaccines, including influenza vaccines, is a cause of adverse events other than occasional local hypersensitivity reactions in vaccine recipients. In addition, no scientific evidence exists that thimerosal-containing vaccines are a cause of adverse events among children born to women who received vaccine during pregnancy. The weight of accumulating evidence does not suggest an increased risk for neurodevelopmental disorders from exposure to thimerosal-containing vaccines (249-258). The U.S. Public Health Service and other organizations have recommended that efforts be made to eliminate or reduce the thimerosal content in vaccines as part of a strategy to reduce mercury exposures from all sources (249,250,259). In addition, continuing public concern about exposure to mercury in vaccines has been viewed as a potential barrier to achieving higher vaccine coverage levels and reducing the burden of vaccine-preventable diseases. Since mid-2001, vaccines routinely recommended for infants aged <6 months in the United States have been manufactured either without thimerosal or with greatly reduced (trace) amounts. As a result, a substantial reduction in the total mercury exposure from vaccines for infants and children already has been achieved (179). ACIP and other federal agencies and professional medical organizations continue to support efforts to provide thimerosal-preservative-free vaccine options.

The benefits of influenza vaccination for all recommended groups, including pregnant women and young children, outweigh concerns based on the theoretical risk from thimerosal exposure through vaccination. The risks for severe illness from influenza virus infection are elevated among both young children and pregnant women, and vaccination has been demonstrated to reduce the risk for severe influenza illness and subsequent medical complications. In contrast, no scientifically conclusive evidence has demonstrated harm from exposure to vaccine containing thimerosal preservative. For these reasons, persons recommended to receive TIV may receive any age- and risk factor-appropriate vaccine preparation, depending on availability. An analysis of VAERS reports found no difference in the safety profile of preservative-containing compared with preservative-free TIV vaccines in infants aged 6-23 months (184). Nonetheless, as of May 2009, some states have enacted legislation banning the administration of vaccines containing mercury; the provisions defining mercury content vary (260). These laws might present a barrier to vaccination unless influenza vaccines that do not contain thimerosal as a preservative are easily available in those states. LAIV and many of the single-dose vial or syringe preparations of TIV are thimerosal-free, and the number of influenza vaccine doses that do not contain thimerosal as a preservative is expected to increase (Table 2). The U.S.
vaccine supply for infants and pregnant women is in a period of transition as manufacturers expand the availability of thimerosal-reduced or thimerosal-free vaccine to reduce the cumulative exposure of infants to mercury. Other environmental sources of mercury exposure are more difficult or impossible to avoid or eliminate (249).

# LAIV Dosage, Administration, and Storage

Each dose of LAIV contains the same three vaccine antigens used in TIV. However, the antigens are constituted as live, attenuated, cold-adapted, temperature-sensitive vaccine viruses. Providers should refer to the package insert, which contains additional information about the formulation of this vaccine and other vaccine components. LAIV does not contain thimerosal. LAIV is made from attenuated viruses that are able to replicate efficiently only at temperatures present in the nasal mucosa. LAIV does not cause systemic symptoms of influenza in vaccine recipients, although a minority of recipients experience nasal congestion or fever, which is probably a result of effects of intranasal vaccine administration or local viral replication (261). LAIV is intended for intranasal administration only and should not be administered by the intramuscular, intradermal, or intravenous route. LAIV is not licensed for vaccination of children aged <2 years or persons aged >49 years. LAIV is supplied in a prefilled, single-use sprayer containing 0.2 mL of vaccine. Approximately 0.1 mL (i.e., half of the total sprayer contents) is sprayed into the first nostril while the recipient is in the upright position. An attached dose-divider clip is removed from the sprayer to administer the second half of the dose into the other nostril. LAIV is shipped at 35°F-46°F (2°C-8°C). LAIV should be stored at 35°F-46°F (2°C-8°C) on receipt and can remain at that temperature until the expiration date is reached (261). Vaccine prepared for a previous influenza season should not be administered to provide protection for any subsequent season.

# Shedding, Transmission, and Stability of Vaccine Viruses

Available data indicate that both children and adults vaccinated with LAIV can shed vaccine viruses after vaccination, although in lower amounts than occur typically with shedding of wild-type influenza viruses. In rare instances, shed vaccine viruses can be transmitted from vaccine recipients to unvaccinated persons. However, serious illnesses have not been reported among unvaccinated persons who have been infected inadvertently with vaccine viruses. One study of 197 children aged 8-36 months in a child care center assessed transmissibility of vaccine viruses from 98 vaccinated children to the other 99 unvaccinated children; 80% of vaccine recipients shed one or more virus strains (mean duration: 7.6 days). One influenza type B vaccine strain isolate was recovered from a placebo recipient and was confirmed to be vaccine-type virus. The type B isolate retained the cold-adapted, temperature-sensitive, attenuated phenotype, and it possessed the same genetic sequence as a virus shed from a vaccine recipient who was in the same play group. The placebo recipient from whom the influenza type B vaccine strain was isolated had symptoms of a mild upper respiratory illness but did not experience any serious clinical events. The estimated probability of acquiring vaccine virus after close contact with a single LAIV recipient in this child care population was 1%-2% (262).
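As a rough illustration of where an estimate of this magnitude comes from (an informal back-of-envelope reading of the study data, not the published method, which also accounted for culture sensitivity and duration of exposure):

\[
\hat{p} \;\approx\; \frac{\text{confirmed transmissions}}{\text{unvaccinated close contacts}} \;=\; \frac{1}{99} \;\approx\; 1\%,
\]

which is consistent with the published 1%-2% estimate.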
Studies assessing whether vaccine viruses are shed have been based on viral cultures or PCR detection of vaccine viruses in nasal aspirates from persons who have received LAIV. Among 345 subjects aged 5-49 years, 30% had detectable virus in nasal secretions obtained by nasal swabbing after receiving LAIV. The duration of virus shedding and the amount of virus shed were inversely correlated with age, and maximal shedding occurred within 2 days of vaccination. Symptoms reported after vaccination, including runny nose, headache, and sore throat, did not correlate with virus shedding (263). Other smaller studies have reported similar findings (264,265). Vaccine strain virus was detected in nasal secretions from one (2%) of 57 HIV-infected adults who received LAIV, from none of 54 HIV-negative participants (266), and from three (13%) of 23 HIV-infected children compared with seven (28%) of 25 children who were not HIV-infected (267). No participants in these studies had detectable virus beyond 10 days after receipt of LAIV. The possibility of person-to-person transmission of vaccine viruses was not assessed in these studies (264-267). In clinical trials, viruses isolated from vaccine recipients have retained attenuated phenotypes. In one study, nasal and throat swab specimens were collected from 17 study participants for 2 weeks after vaccine receipt (268). Virus isolates were analyzed by multiple genetic techniques. All isolates retained the LAIV genotype after replication in the human host, and all retained the cold-adapted and temperature-sensitive phenotypes. A study conducted in a child care setting demonstrated that limited genetic change occurred in the LAIV strains following replication in the vaccine recipients (269).

# Immunogenicity, Efficacy, and Effectiveness of LAIV

LAIV virus strains replicate primarily in nasopharyngeal epithelial cells. The protective mechanisms induced by vaccination with LAIV are not understood completely but appear to involve both serum and nasal secretory antibodies. The immunogenicity of the approved LAIV has been assessed in multiple studies conducted among children and adults (270-276).

# Healthy Children

A randomized, double-blind, placebo-controlled trial among 1,602 healthy children aged 15-71 months assessed the efficacy of LAIV against culture-confirmed influenza during two seasons (277,278). This trial included a subset of children aged 60-71 months who received 2 doses in the first season. During season one (1996-97), when vaccine and circulating virus strains were well-matched, efficacy against culture-confirmed influenza was 94% for participants who received 2 doses of LAIV separated by >6 weeks and 89% for those who received 1 dose. During season two (1997-98), when the A (H3N2) component in the vaccine was not well-matched with circulating virus strains, one-dose efficacy was 86%, for an overall two-season efficacy of 92%. Receipt of LAIV also resulted in 21% fewer febrile illnesses and a significant decrease in acute otitis media requiring antibiotics (277,279).
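The efficacy estimates quoted throughout this section follow the standard trial definition of vaccine efficacy: the proportional reduction in attack rate among vaccinees relative to placebo recipients. As an illustrative restatement of the 94% figure above (the formula is standard; the worked interpretation is ours):

\[
VE \;=\; \left(1 - \frac{AR_{\text{vaccinated}}}{AR_{\text{unvaccinated}}}\right) \times 100\%,
\]

so VE = 94% implies that the attack rate of culture-confirmed influenza among 2-dose LAIV recipients was roughly 6% of that among placebo recipients.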
Other randomized, placebo-controlled trials demonstrating the efficacy of LAIV in young children against culture-confirmed influenza include a study conducted among children aged 6-35 months attending child care centers during consecutive influenza seasons, in which 85%-89% efficacy was observed (280), and a study conducted among children aged 12-36 months living in Asia during consecutive influenza seasons, in which 64%-70% efficacy was documented (281). In one community-based, nonrandomized, open-label study, reductions in MAARI were observed among children who received 1 dose of LAIV during the 1999-00 and 2000-01 influenza seasons even though antigenically drifted influenza A/H1N1 and B viruses were circulating during those seasons (282). LAIV efficacy in preventing laboratory-confirmed influenza also has been demonstrated in studies comparing the efficacy of LAIV with TIV rather than with a placebo (see Comparisons of LAIV and TIV Efficacy or Effectiveness).

# Healthy Adults

A randomized, double-blind, placebo-controlled trial of LAIV effectiveness among 4,561 healthy working adults aged 18-64 years assessed multiple endpoints, including reductions in self-reported respiratory tract illness without laboratory confirmation, work loss, health-care visits, and medication use during influenza outbreak periods. The study was conducted during the 1997-98 influenza season, when the vaccine and circulating A (H3N2) strains were not well-matched. The frequency of febrile illnesses was not significantly decreased among LAIV recipients compared with those who received placebo. However, vaccine recipients had significantly fewer severe febrile illnesses (19% reduction) and febrile upper respiratory tract illnesses (24% reduction), as well as significant reductions in days of illness, days of work lost, days with health-care-provider visits, and use of prescription antibiotics and over-the-counter medications (283). Efficacy against culture-confirmed influenza in a randomized, placebo-controlled study was 57% in the 2004-05 influenza season and 43% in the 2005-06 influenza season, although these efficacy estimates were not statistically significant (221,222).

# Adverse Events After Receipt of LAIV

# Healthy Children Aged 2-18 Years

In a subset of healthy children aged 60-71 months from one clinical trial, certain signs and symptoms were reported more often after the first dose among LAIV recipients (n = 214) than among placebo recipients (n = 95), including runny nose (48% and 44%, respectively), headache (18% and 12%, respectively), vomiting (5% and 3%, respectively), and myalgias (6% and 4%, respectively) (277). However, these differences were not statistically significant. In other trials, signs and symptoms reported after LAIV administration have included runny nose or nasal congestion (20%-75%), headache (2%-46%), fever (0-26%), vomiting (3%-13%), abdominal pain (2%), and myalgias (0-21%) (270,272,273,280,284-287). These symptoms were associated more often with the first dose and were self-limited. A placebo-controlled trial in 9,689 children aged 1-17 years assessed prespecified medically attended outcomes during the 42 days after vaccination (286). More than 1,500 statistical analyses were conducted, and biologically plausible elevated risks in the 42 days after LAIV were observed for the following conditions: asthma, upper respiratory infection, musculoskeletal pain, otitis media with effusion, and adenitis/adenopathy.
The increased risk for wheezing events after LAIV was observed among children aged 18-35 months (RR: 4.06; 90% CI = 1.3-17.9). In this study, the rate of SAEs was 0.2% in LAIV and placebo recipients; none of the SAEs was judged to be related to the vaccine by the study investigators (286). In a randomized trial published in 2007, LAIV and TIV were compared among children aged 6-59 months (288). Children with medically diagnosed or treated wheezing within 42 days before enrollment or with a history of severe asthma were excluded from this study. Among children aged 24-59 months who received LAIV, the rate of medically significant wheezing, using a prespecified definition, was not greater compared with those who received TIV (288). Wheezing was observed more frequently among younger LAIV recipients aged 6-23 months in this study; LAIV is not licensed for this age group. In a previous randomized, placebo-controlled safety trial among children aged 12 months-17 years without a history of asthma by parental report, an elevated risk for asthma events (RR: 4.1; CI = 1.3-17.9) was documented among 728 children aged 18-35 months who received LAIV. Of the 16 children with asthma-related events in this study, seven had a history of asthma on the basis of subsequent medical record review. None required hospitalization, and elevated risks for asthma were not observed in other age groups (286). Another study was conducted among >11,000 children aged 18 months-18 years in which 18,780 doses of vaccine were administered over 4 years. For children aged 18 months-4 years, no increase was reported in asthma visits 0-15 days after vaccination compared with the prevaccination period. A significant increase in asthma events was reported 15-42 days after vaccination, but only in vaccine year 1 (289). A 4-year, open-label field trial assessed the safety of more than 2,000 doses of LAIV administered to children aged 18 months-18 years with a history of intermittent wheeze who were otherwise healthy. Among these children, no increased risk was reported for medically attended acute respiratory illnesses, including acute asthma exacerbation, during the 0-14 or 0-42 days after LAIV compared with the pre- and postvaccination reference periods (290). Initial data from VAERS during 2007-2008, following ACIP's recommendation for LAIV use in healthy children aged 2-4 years, did not suggest a concern for wheezing after LAIV in young children. However, data also suggest that uptake of LAIV was limited, and safety monitoring for wheezing events after LAIV is ongoing (CDC, unpublished data, 2008).

# Adults Aged 19-49 Years

Among adults, runny nose or nasal congestion (28%-78%), headache (16%-44%), and sore throat (15%-27%) have been reported more often among vaccine recipients than placebo recipients (277,291). In one clinical trial among a subset of healthy adults aged 18-49 years, signs and symptoms reported significantly more often (p<0.05) among LAIV recipients (n = 2,548) than placebo recipients (n = 1,290) within 7 days after each dose included cough (14% and 11%, respectively), runny nose (45% and 27%, respectively), sore throat (28% and 17%, respectively), chills (9% and 6%, respectively), and tiredness/weakness (26% and 22%, respectively) (92). A review of 460 reports to VAERS after distribution of approximately 2.5 million doses during the 2003-04 and 2004-05 influenza seasons did not indicate any new safety concerns (292).
Few of the LAIV VAERS reports (9%) were SAEs; respiratory events were the most common conditions reported.

# Persons at Higher Risk for Influenza-Related Complications

Limited data assessing the safety of LAIV use for certain groups at higher risk for influenza-related complications are available. In one study of 54 HIV-infected persons aged 18-58 years with CD4+ counts >200 cells/mm3 who received LAIV, no SAEs were reported during a 1-month follow-up period (266). Similarly, one study demonstrated no significant difference in the frequency of adverse events or viral shedding among HIV-infected children aged 1-8 years on effective antiretroviral therapy who were administered LAIV compared with HIV-uninfected children receiving LAIV (267). LAIV was well-tolerated among adults aged >65 years with chronic medical conditions (293). These findings suggest that persons at risk for influenza complications who have inadvertent exposure to LAIV would not have significant adverse events or prolonged viral shedding and that persons who have contact with persons at higher risk for influenza-related complications may receive LAIV.

# Comparisons of LAIV and TIV Efficacy or Effectiveness

Both TIV and LAIV have been demonstrated to be effective in children and adults. However, data directly comparing the efficacy or effectiveness of these two types of influenza vaccines are limited and insufficient to identify whether one vaccine might offer a clear advantage over the other in certain settings or populations. Studies comparing the efficacy of TIV to that of LAIV have been conducted in a variety of settings and populations using several different outcomes. One randomized, double-blind, placebo-controlled challenge study conducted among 92 healthy adults aged 18-41 years assessed the efficacy of both LAIV and TIV in preventing influenza infection when participants were challenged with wild-type strains that were antigenically similar to vaccine strains (294). The overall efficacy of LAIV and TIV in preventing laboratory-documented influenza from all three influenza strains combined was 85% and 71%, respectively, when participants were challenged 28 days after vaccination by viruses to which they were susceptible before vaccination. The difference in efficacy between the two vaccines was not statistically significant in this limited study. No additional challenges were conducted to assess efficacy at time points later than 28 days (294). In a randomized, double-blind, placebo-controlled trial conducted among young adults during the 2004-05 influenza season, when the majority of circulating H3N2 viruses were antigenically drifted from that season's vaccine viruses, the efficacy of LAIV and TIV against culture-confirmed influenza was 57% and 77%, respectively. The difference in efficacy was not statistically significant and was attributable primarily to a difference in efficacy against influenza B (222). A similar study conducted during the 2005-06 influenza season found no significant difference in vaccine efficacy (221). A randomized controlled clinical trial conducted among children aged 6-59 months during the 2004-05 influenza season demonstrated a 55% reduction in cases of culture-confirmed influenza among children who received LAIV compared with those who received TIV (288). In this study, LAIV efficacy was higher than that of TIV against both antigenically drifted and well-matched viruses (288).
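In head-to-head trials such as this one, the reported reduction is a relative efficacy (LAIV relative to TIV) rather than efficacy against placebo. Expressed with the same attack-rate arithmetic used earlier (an illustrative restatement of the 55% result, not the trial's published model):

\[
VE_{\text{relative}} \;=\; \left(1 - \frac{AR_{\text{LAIV}}}{AR_{\text{TIV}}}\right) \times 100\% \;=\; 55\% \quad\Longrightarrow\quad AR_{\text{LAIV}} \;\approx\; 0.45 \times AR_{\text{TIV}}.
\]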
An open-label, nonrandomized, community-based influenza vaccine trial conducted during an influenza season when circulating H3N2 strains were poorly matched with strains contained in the vaccine also indicated that LAIV, but not TIV, was effective against antigenically drifted H3N2 strains during that influenza season. In this study, children aged 5-18 years who received LAIV had significant protection against laboratory-confirmed influenza (37%) and pneumonia and influenza events (50%) (295). A recent observational study conducted among military personnel aged 17-49 years over three influenza seasons indicated that persons who received TIV had a significantly lower incidence of health-care encounters resulting in diagnostic coding for pneumonia and influenza compared with those who received LAIV. However, among new recruits being vaccinated for the first time, the incidence of pneumonia- and influenza-coded health-care encounters among those who received LAIV was similar to that among those receiving TIV (296). Although LAIV is not licensed for use in persons with risk factors for influenza complications, certain studies have compared the efficacy of LAIV to TIV in these groups. LAIV provided 32% increased protection in preventing culture-confirmed influenza compared with TIV in one study conducted among children aged >6 years and adolescents with asthma (297) and 52% increased protection compared with TIV among children aged 6-71 months with recurrent respiratory tract infections (298).

# Effectiveness of Vaccination for Decreasing Transmission to Contacts

Decreasing transmission of influenza from caregivers and household contacts to persons at high risk might reduce ILI and complications among persons at high risk. Influenza virus infection and ILI are common among HCP (299-301). Influenza outbreaks have been attributed to low vaccination rates among HCP in hospitals and long-term-care facilities (302-304). One serosurvey demonstrated that 23% of HCP had serologic evidence of influenza virus infection during a single influenza season; the majority had mild illness or subclinical infection (299). Observational studies have demonstrated that vaccination of HCP is associated with decreased deaths among nursing home patients (305,306). In one cluster-randomized controlled trial that included 2,604 residents of 44 nursing homes, significant decreases in mortality, ILI, and medical visits for ILI care were demonstrated among residents in nursing homes in which staff were offered influenza vaccination (coverage rate: 48%) compared with nursing homes in which staff were not provided with vaccination (coverage rate: 6%) (307). A review concluded that vaccination of HCP in settings in which patients also were vaccinated provided significant reductions in deaths among elderly patients from all causes and deaths from pneumonia (308). Epidemiologic studies of community outbreaks of influenza demonstrate that school-aged children typically have the highest influenza illness attack rates, suggesting that routine universal vaccination of children might reduce transmission to their household contacts and possibly others in the community. Results from certain studies have indicated that the benefits of vaccinating children might extend to protection of their adult contacts and to persons at risk for influenza complications in the community. However, these data are limited, and studies have not used laboratory-confirmed influenza as an outcome measure.
A single-blinded, randomized controlled study conducted as part of a 1996-1997 vaccine effectiveness study demonstrated that vaccinating preschool-aged children with TIV reduced influenza-related morbidity among some household contacts (309). A randomized, placebo-controlled trial among children with recurrent respiratory tract infections demonstrated that members of families with children who had received LAIV were significantly less likely to have respiratory tract infections and reported significantly fewer workdays lost compared with families with children who received placebo (310). In nonrandomized community-based studies, administration of LAIV has been demonstrated to reduce MAARI (311,312) and ILI-related economic and medical consequences (e.g., workdays lost and number of health-care provider visits) among contacts of vaccine recipients (312). Households with children attending schools in which school-based LAIV vaccination programs had been established reported less ILI and fewer physician visits during peak influenza season compared with households with children in schools in which no LAIV vaccination had been offered. However, a decrease in the overall rate of school absenteeism was not reported in communities in which LAIV vaccination was offered (312). During an influenza outbreak in the 2005-06 influenza season, countywide school-based influenza vaccination was associated with reduced absenteeism among elementary and high school students in one county that implemented a school-based vaccination program compared with another county without such a program (313). These community-based studies have not used laboratory-confirmed influenza as an outcome. Some studies also have documented reductions in influenza illness among persons living in communities where focused programs for vaccinating children have been conducted. A community-based observational study conducted during the 1968 pandemic using a univalent inactivated vaccine reported that a vaccination program targeting school-aged children (coverage rate: 86%) in one community reduced influenza rates within the community among all age groups compared with another community in which aggressive vaccination was not conducted among school-aged children (314). An observational study conducted in Russia demonstrated reductions in ILI among the community-dwelling elderly after implementation of a vaccination program using TIV for children aged 3-6 years (57% coverage achieved) and children and adolescents aged 7-17 years (72% coverage achieved) (315). In a nonrandomized community-based study conducted over three influenza seasons, 8%-18% reductions in the incidence of MAARI during the influenza season among adults aged >35 years were observed in communities in which LAIV was offered to all children aged >18 months (estimated coverage rate: 20%-25%) compared with communities that did not provide routine influenza vaccination programs for all children (311). In a subsequent influenza season, the same investigators documented a 9% reduction in MAARI rates during the influenza season among persons aged 35-44 years in intervention communities, where coverage was estimated at 31% among school children. However, MAARI rates among persons aged >45 years were lower in the intervention communities regardless of the presence of influenza in the community, suggesting that the lower rates could not be attributed to vaccination of school children against influenza (295).
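The community-level reductions cited in these studies are, implicitly, estimates of indirect (herd) effectiveness obtained by comparing event rates between intervention and comparison communities. In generic form (the standard rate-ratio comparison, shown for orientation rather than as any one study's exact model):

\[
\text{Indirect effectiveness} \;=\; \left(1 - \frac{\text{MAARI rate, intervention communities}}{\text{MAARI rate, comparison communities}}\right) \times 100\%,
\]

so the 8%-18% reductions reported above correspond to rate ratios of approximately 0.82-0.92.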
The largest study to examine the community effects of increasing overall vaccine coverage was an ecologic study that described the experience in Ontario, Canada, which was the only province to implement a universal influenza vaccination program beginning in 2000. On the basis of models developed from administrative and viral surveillance data, influenza-related mortality, hospitalizations, ED use, and physicians' office visits decreased significantly more in Ontario after program introduction than in other provinces, with the largest reductions observed in younger age groups (316).

# Effectiveness of Influenza Vaccination When Circulating Influenza Virus Strains Differ from Vaccine Strains

Manufacturing trivalent influenza virus vaccines is a challenging process that takes 6-8 months to complete. Vaccination can provide reduced but substantial cross-protection against drifted strains in some seasons, including reductions in severe outcomes such as hospitalization. Usually one or more circulating viruses with antigenic changes compared with the vaccine strains are identified in each influenza season. In addition, two distinct lineages of influenza B viruses have co-circulated in recent years, and limited cross-protection is observed against the lineage not represented in the vaccine (48). However, the clinical effectiveness of influenza vaccines cannot be determined solely by laboratory evaluation of the degree of antigenic match between vaccine and circulating strains. In some influenza seasons, circulating influenza viruses with significant antigenic differences predominate, and reductions in vaccine effectiveness sometimes are observed compared with seasons when vaccine and circulating strains are well-matched (107,121,125,173,222). However, even during years when vaccine strains were not antigenically well-matched to circulating strains (the result of antigenic drift), substantial protection has been observed against severe outcomes, presumably because of vaccine-induced cross-reacting antibodies (121,125,222,283). For example, in one study conducted during the 2003-04 influenza season, when the predominant circulating strain was an influenza A (H3N2) virus that was antigenically different from that season's vaccine strain, effectiveness against laboratory-confirmed influenza illness among persons aged 50-64 years was 60% among healthy persons and 48% among persons with medical conditions that increased the risk for influenza complications (125). An interim, within-season analysis during the 2007-08 influenza season indicated that vaccine effectiveness was 44% overall, 54% among healthy persons aged 5-49 years, and 58% against influenza A, despite the finding that viruses circulating in the study area were predominantly a drifted influenza A (H3N2) strain and an influenza B strain from a different lineage compared with vaccine strains (317). Among children, both TIV and LAIV provide protection against infection even in seasons when vaccines and circulating strains are not well-matched. Vaccine effectiveness against ILI was 49%-69% in two observational studies, and 49% against medically attended, laboratory-confirmed influenza in a case-control study conducted among young children during the 2003-04 influenza season, when a drifted influenza A (H3N2) strain predominated on the basis of viral surveillance data (102,106). However, continued improvements in collecting representative circulating viruses and use of surveillance data to forecast antigenic drift are needed.
Shortening the manufacturing time, which would allow more time to identify suitable vaccine candidate strains from among the most recently circulating strains, also is important. Data from multiple seasons that are collected in a consistent manner are needed to better understand vaccine effectiveness during seasons when circulating and vaccine virus strains are not well-matched. Seasonal influenza vaccines are not expected to provide protection against novel influenza A (H1N1) virus infection because the hemagglutinin of this novel strain is substantially different from that of seasonal influenza A (H1N1). Preliminary immunologic data indicate that few persons have antibody that shows evidence of cross-reactivity against novel influenza A (H1N1) virus, and few show increases in antibody titer to novel influenza A (H1N1) virus after vaccination with the 2007-08 or the 2008-09 seasonal influenza vaccines (318). Vaccines currently are being developed that are specific to novel influenza A (H1N1) virus.

# Cost-Effectiveness of Influenza Vaccination

Economic studies of influenza vaccination are difficult to compare because they have used different measures of both costs and benefits (e.g., cost-only, cost-effectiveness, cost-benefit, or cost-utility). However, most studies find that vaccination reduces or minimizes health-care, societal, and individual costs and the productivity losses and absenteeism associated with influenza illness. One national study estimated the annual economic burden of seasonal influenza in the United States (using 2003 population and dollars) to be $87.1 billion, including $10.4 billion in direct medical costs (319). Studies of influenza vaccination in the United States among persons aged >65 years have estimated substantial reductions in hospitalizations and deaths and overall societal cost savings (168,169). Studies comparing adults in different age groups also find that vaccination is economically beneficial. One study that compared the economic impact of vaccination among persons aged >65 years with those aged 15-64 years indicated that vaccination resulted in a net savings per quality-adjusted life year (QALY) and that the Medicare program saved costs of treating illness by paying for vaccination (320). A study of a larger population comparing persons aged 50-64 years with those aged >65 years estimated the cost-effectiveness of influenza vaccination to be $28,000 per QALY saved (in 2000 dollars) in persons aged 50-64 years compared with $980 per QALY saved among persons aged >65 years (321). Economic analyses among adults aged <65 years have reported mixed results regarding influenza vaccination. Two studies in the United States found that vaccination can reduce both direct medical costs and indirect costs from work absenteeism and reduced productivity (322,323). However, another U.S. study indicated no productivity and absentee savings in a strategy to vaccinate healthy working adults, although vaccination was still estimated to be cost-effective (324). Cost analyses have documented the considerable financial burden of illness among children. In a study of 727 children conducted at a medical center during 2000-2004, the mean total cost of hospitalization for influenza-related illness was $13,159 ($39,792 for patients admitted to an intensive care unit and $7,030 for patients cared for exclusively on the wards) (325).
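The dollars-per-QALY figures cited above are incremental cost-effectiveness ratios (ICERs). In generic form (a standard health-economics identity, shown for orientation rather than as the specific model used in the cited analyses):

\[
\text{ICER} \;=\; \frac{C_{\text{vaccination}} - C_{\text{illness averted}}}{\text{QALYs gained}},
\]

where a negative numerator denotes net cost savings, and smaller positive ratios (e.g., $980 per QALY among persons aged >65 years versus $28,000 per QALY at ages 50-64 years) denote more favorable cost-effectiveness.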
A strategy that focuses on vaccinating children with medical conditions that confer a higher risk for influenza complications is more cost-effective than a strategy of vaccinating all children (324). An analysis that compared the costs of vaccinating children of varying ages with TIV and LAIV indicated that costs per QALY saved increased with age for both vaccines. In 2003 dollars per QALY saved, costs for routine vaccination using TIV were $12,000 for healthy children aged 6-23 months and $119,000 for healthy adolescents aged 12-17 years, compared with $9,000 and $109,000, respectively, using LAIV (326). Economic evaluations of vaccinating children have demonstrated a wide range of cost estimates but have generally found this strategy to be either cost saving or cost-beneficial (327-330). Economic analyses are sensitive to the vaccination venue, with vaccination in medical care settings incurring higher projected costs. In a published model, the mean cost (year 2004 values) of vaccination was lower in mass vaccination ($17.04) and pharmacy ($11.57) settings than in scheduled doctor's office visits ($28.67) (331). Vaccination in nonmedical settings was projected to be cost saving for healthy adults aged >50 years and for high-risk adults of all ages. For healthy adults aged 18-49 years, preventing an episode of influenza would cost $90 if vaccination were delivered in a pharmacy setting, $210 in a mass vaccination setting, and $870 during a scheduled doctor's office visit (331). Medicare payment rates in recent years have been less than the costs associated with providing vaccination in a medical practice (332).

# Vaccination Coverage Levels

Continued annual monitoring is needed to determine the effects on vaccination coverage of vaccine supply delays and shortages, changes in influenza vaccination recommendations and target groups for vaccination, reimbursement rates for vaccine and vaccine administration, and other factors related to vaccination coverage among adults and children. One of the Healthy People 2010 objectives (objective no. 14-29a) includes achieving an influenza vaccination coverage level of 90% for persons aged >65 years and among nursing home residents (333,334); new strategies to improve coverage are needed to achieve this objective (335,336). Increasing vaccination coverage among persons who have high-risk conditions and are aged <65 years, including children at high risk, is the highest priority for expanding influenza vaccine use. Recent coverage estimates (Table 3) are only slightly lower than coverage levels observed before the 2004-05 vaccine shortage year (337-339). In the 2006-07 and 2007-08 influenza seasons, estimated vaccination coverage levels among adults with high-risk conditions aged 18-49 years were 25% and 30%, respectively, substantially lower than the Healthy People 2000 and Healthy People 2010 objectives of 60% (Table 3) (333,334). Studies conducted among children and adults indicate that opportunities to vaccinate persons at risk for influenza complications (e.g., during hospitalizations for other causes) often are missed. In one study, 23% of children hospitalized with influenza and a comorbidity had a previous hospitalization during the preceding influenza vaccination season (340). In a study of hospitalized Medicare patients, only 31.6% were vaccinated before admission, 1.9% during admission, and 10.6% after admission (341).
A study in New York City conducted during 2001-2005 among 7,063 children aged 6-23 months indicated that 2-dose vaccine coverage increased from 1.6% to 23.7% over time; however, although the average number of medical visits during which an opportunity to vaccinate was missed decreased during the course of the study from 2.9 to 2.0 per child, 55% of all visits during the final year of the study still represented a missed vaccination opportunity (342). Using standing orders in hospitals increases vaccination rates among hospitalized persons (343), and vaccination of hospitalized patients is safe and stimulates an appropriate immune response (158). In one survey, the strongest predictor of receiving vaccination was the respondent's belief that he or she was in a high-risk group; however, many persons in high-risk groups did not know that they were in a group recommended for vaccination (344).

Reducing racial/ethnic health disparities, including disparities in influenza vaccination coverage, is an overarching national goal that is not being met (334). Estimated vaccination coverage levels in 2007 among persons aged ≥65 years were 70% for non-Hispanic whites, 58% for non-Hispanic blacks, and 54% for Hispanics (345). Among Medicare beneficiaries, other key factors that contribute to disparities in coverage include variations in the propensity of patients to actively seek vaccination and variations in the likelihood that providers recommend vaccination (346,347). One study estimated that eliminating these disparities in vaccination coverage would have an impact on mortality similar to the impact of eliminating deaths attributable to kidney disease among blacks or liver disease among Hispanics (348).

Reported vaccination levels are low among children at increased risk for influenza complications. Coverage among children aged 2-17 years with asthma for the 2004-05 influenza season was estimated to be 29% (349). One study reported 79% vaccination coverage among children attending a cystic fibrosis treatment center (350). During the first season for which ACIP recommended that all children aged 6-23 months receive vaccination, 33% received 1 or more doses of influenza vaccine, and 18% of those not previously vaccinated received the recommended 2 doses (351). Among children enrolled in HMOs who had received a first dose during 2001-2004, second dose coverage varied from 29% to 44% among children aged 6-23 months and from 12% to 24% among children aged 2-8 years (352). A rapid analysis of influenza vaccination coverage levels among members of an HMO in Northern California demonstrated that during the 2004-05 influenza season, the first year of the recommendation for vaccination of children aged 6-23 months, 1-dose coverage was 57% (353). During the 2006-07 influenza season, coverage among children aged 6-23 months remained low and did not increase substantially from the 2004-05 season. Data collected in 2007 by the National Immunization Survey indicated that for the 2006-07 season, 32% of children aged 6-23 months received at least 1 dose of influenza vaccine and 21% were fully vaccinated (i.e., received 1 or 2 doses depending on previous vaccination history); however, results varied substantially among states (354).
As has been reported for older adults, a physician recommendation for vaccination and the perception that having a child be vaccinated "is a smart idea" were associated positively with likelihood of vaccination of children aged 6-23 months (355). Similarly, children with asthma were more likely to be vaccinated if their parents recalled a physician recommendation to be vaccinated or believed that the vaccine worked well (356). Implementation of a reminder/recall system in a pediatric clinic increased the percentage of children with asthma receiving vaccination from 5% to 32% (357).

Although annual vaccination is recommended for HCP and is a high priority for reducing morbidity associated with influenza in health-care settings and for expanding influenza vaccine use (358)(359)(360), national survey data demonstrated a vaccination coverage level of only 42% among HCP during the 2005-06 season and 44% during the 2006-07 season (Table 3).

Footnotes to Table 3:

† † Adults categorized as being at high risk for influenza-related complications self-reported one or more of the following: 1) ever being told by a physician they had diabetes, emphysema, coronary heart disease, angina, heart attack, or other heart condition; 2) having a diagnosis of cancer during the previous 12 months (excluding nonmelanoma skin cancer) or ever being told by a physician they have lymphoma, leukemia, or blood cancer during the previous 12 months (post coding for a cancer diagnosis was not yet completed at the time of this publication, so this diagnosis was not included in the 2006-07 season data); 3) being told by a physician they have chronic bronchitis or weak or failing kidneys; or 4) reporting an asthma episode or attack during the preceding 12 months. For children aged <18 years, high-risk conditions included ever having been told by a physician of having diabetes, cystic fibrosis, sickle cell anemia, congenital heart disease, other heart disease, or neuromuscular conditions (seizures, cerebral palsy, and muscular dystrophy), or having an asthma episode or attack during the preceding 12 months.

§ § Aged 18-44 years, pregnant at the time of the survey, and without high-risk conditions.

¶ ¶ Adults were classified as health-care workers if they were employed in a health-care occupation or in a health-care-industry setting, on the basis of standard occupation and industry categories recoded in groups by CDC's National Center for Health Statistics.

* Interviewed sample child or adult in each household containing at least one of the following: a child aged <5 years, an adult aged ≥65 years, or any person aged 5-17 years at high risk (see footnote † † above). To obtain information on household composition and high-risk status of household members, the sampled adult, child, and person files from NHIS were merged. Interviewed adults who were health-care workers or who had high-risk conditions were excluded. Information could not be assessed regarding high-risk status of other adults aged 18-64 years in the household; therefore, certain adults aged 18-64 years who lived with an adult aged 18-64 years at high risk were not included in the analysis. Also note that although the recommendation for children aged 2-4 years was not in place during the 2005-06 season, children aged 2-4 years in these calculations were considered to have an indication for vaccination to facilitate comparison of coverage data for subsequent years.
Vaccination of HCP has been associated with reduced work absenteeism (300) and with fewer deaths among nursing home patients (305,307) and elderly hospitalized patients (308). Factors associated with a higher rate of influenza vaccination among HCP include older age, being a hospital employee, having employer-provided health-care insurance, having had pneumococcal or hepatitis B vaccination in the past, and having visited a health-care professional during the preceding year. Non-Hispanic black HCP were less likely than non-Hispanic white HCP to be vaccinated (361). HCP who decline vaccination frequently express doubts about the risk for influenza and the need for vaccination, are concerned about vaccine effectiveness and side effects, and dislike injections (362).

Vaccine coverage among pregnant women increased during the 2007-08 influenza season, with 24% of pregnant women reporting vaccination, excluding pregnant women who reported diabetes, heart disease, lung disease, and other selected high-risk conditions (Table 3). However, the sample size was small, and the increase in coverage compared with previous seasons was not statistically significant. In a study of influenza vaccine acceptance by pregnant women, 71% of those who were offered the vaccine chose to be vaccinated (363). However, a 1999 survey of obstetricians and gynecologists determined that only 39% administered influenza vaccine to obstetric patients in their practices, although 86% agreed that pregnant women's risk for influenza-related morbidity and mortality increases during the last two trimesters (364).

Influenza vaccination coverage in all groups recommended for vaccination remains suboptimal. Despite the typically late peak of influenza activity, administration of vaccine decreases substantially after November. According to results from the NHIS regarding the two most recent influenza seasons for which these data are available, approximately 84% of all influenza vaccinations were administered during September-November. Among persons aged ≥65 years, the percentage of September-November vaccinations was 92% (365). Because many persons recommended for vaccination remain unvaccinated at the end of November, CDC encourages public health partners and health-care providers to conduct vaccination clinics and other activities that promote seasonal influenza vaccination annually during National Influenza Vaccination Week (December 6-12, 2009) and throughout the remainder of the influenza season.

Self-report of influenza vaccination among adults, compared with determination of vaccination status from the medical record, is a sensitive and specific source of information (366,367). Patient self-reports should be accepted as evidence of influenza vaccination in clinical practice (367). However, information on the validity of parents' reports of pediatric influenza vaccination is not yet available.

# Recommendations for Using TIV and LAIV During the 2009-10 Influenza Season

Both TIV and LAIV prepared for the 2009-10 season will include A/Brisbane/59/2007 (H1N1)-like, A/Brisbane/10/2007 (H3N2)-like, and B/Brisbane/60/2008-like antigens. The influenza B virus component of the 2009-10 vaccine is from the Victoria lineage (368). These viruses will be used because they are representative of seasonal influenza viruses that are predicted to be circulating in the United States during the 2009-10 influenza season and have favorable growth properties in eggs.
Seasonal influenza vaccines are not expected to provide substantial protection against infection with the recently identified novel influenza A (H1N1) virus (318), and guidance for the prevention of infection with this virus will be published separately.

TIV and LAIV can be used to reduce the risk for influenza virus infection and its complications. Vaccination providers should administer influenza vaccine to any person who wishes to reduce the likelihood of becoming ill with influenza or transmitting influenza to others should they become infected. Healthy, nonpregnant persons aged 2-49 years can choose to receive either vaccine. Some TIV formulations are FDA-licensed for use in persons as young as age 6 months (see Recommended Vaccines for Different Age Groups). TIV is licensed for use in persons with high-risk conditions (Table 2). LAIV is FDA-licensed for use only in persons aged 2-49 years. In addition, FDA has indicated that the safety of LAIV has not been established in persons with underlying medical conditions that confer a higher risk for influenza complications. All children aged 6 months-8 years who have not previously been vaccinated at any time with at least 1 dose of either LAIV (if appropriate) or TIV should receive 2 doses of age-appropriate vaccine in the same season, with a single dose during subsequent seasons.

# Target Groups for Protection Through Vaccination

Influenza vaccine should be provided to all persons who want to reduce the risk for becoming ill with influenza or of transmitting it to others. However, emphasis on providing routine vaccination annually to certain groups at higher risk for influenza infection or complications is advised, including all children aged 6 months-18 years, all persons aged ≥50 years, and other adults at risk for medical complications from influenza. In addition, all persons who live with or care for persons at high risk for influenza-related complications, including contacts of children aged <6 months, should receive influenza vaccine annually (Boxes 1 and 2). Approximately 85% of the U.S. population is included in one or more of these target groups; however, <40% of the U.S. population received an influenza vaccination during the 2008-09 influenza season.

# Children Aged 6 Months-18 Years

Beginning with the 2008-09 influenza season, annual vaccination for all children aged 6 months-18 years was recommended. Children and adolescents at high risk for influenza complications should continue to be a focus of vaccination efforts as providers and programs transition to routinely vaccinating all children. Healthy children aged 2-18 years can receive either LAIV or TIV. Children aged 6-23 months, and children aged 2-4 years who have evidence of asthma or wheezing or who have medical conditions that put them at higher risk for influenza complications, should receive TIV (see Considerations When Using LAIV). All children aged 6 months-8 years who have not received vaccination against influenza previously should receive 2 doses of vaccine the first year they are vaccinated.

# Persons at Risk for Medical Complications

Vaccination to prevent influenza is particularly important for the following persons, who are at increased risk for severe complications from influenza or at higher risk for influenza-related outpatient, ED, or hospital visits:
- all children aged 6 months-4 years (59 months);
- all persons aged ≥50 years;
- children and adolescents (aged 6 months-18 years) receiving long-term aspirin therapy, who therefore might be at risk for Reye syndrome after influenza virus infection;
- women who will be pregnant during the influenza season;
- adults and children who have chronic pulmonary (including asthma), cardiovascular (except hypertension), renal, hepatic, cognitive, neurologic/neuromuscular, hematological, or metabolic (including diabetes mellitus) disorders;
- adults and children who have immunosuppression (including immunosuppression caused by medications or by HIV); and
- residents of nursing homes and other chronic-care facilities.
For children, the risk for severe complications from seasonal influenza is highest among those aged <2 years, who have much higher rates of hospitalization for influenza-related complications compared with older children (7,32,39). Medical care and ED visits attributable to influenza are increased among children aged <5 years compared with older children (32). Chronic neurologic and neuromuscular conditions include any condition (e.g., cognitive dysfunction, spinal cord injuries, seizure disorders, or other neuromuscular disorders) that can compromise respiratory function or the handling of respiratory secretions or that can increase the risk for aspiration (30).

# Persons Who Live With or Care for Persons at High Risk for Influenza-Related Complications

To prevent transmission to persons identified above, vaccination with TIV or LAIV (unless contraindicated) also is recommended for the following persons. When vaccine supply is limited, vaccination efforts should focus on delivering vaccination to these persons:
- HCP;
- household contacts (including children) and caregivers of children aged <5 years and adults aged ≥50 years, with particular emphasis on vaccinating contacts of children aged <6 months; and
- household contacts (including children) and caregivers of persons with medical conditions that put them at higher risk for severe complications from influenza.

# Children Aged <6 Months

Vaccination is not recommended for children aged <6 months, and antivirals are not licensed for use among infants. Protection of young infants, who have hospitalization rates similar to those observed among the elderly, depends on vaccination of the infants' close contacts. A recent study conducted in Bangladesh demonstrated that infants born to vaccinated women have significant protection from laboratory-confirmed influenza, either through transfer of influenza-specific maternal antibodies or because maternal vaccination reduces the infants' risk for exposure to influenza (154). All household contacts, health-care and day care providers, and other close contacts of young infants should be vaccinated.

# Vaccination of Specific Populations

# Children Aged 6 Months-18 Years

All children aged 6 months-18 years should be vaccinated against influenza annually. In 2004, ACIP recommended routine vaccination for all children aged 6-23 months, and in 2006, ACIP expanded the recommendation to include all children aged 24-59 months. Recommendations to provide routine influenza vaccination to all children and adolescents aged 6 months-18 years are made on the basis of 1) accumulated evidence that influenza vaccine is effective and safe for children (see Influenza Vaccine Efficacy, Effectiveness, and Safety); 2) increased evidence that influenza has substantial adverse impacts among children and their contacts (e.g., school absenteeism, increased antibiotic use, medical care visits, and parental work loss) (see Health-Care Use, Hospitalizations, and Deaths Attributed to Influenza); and 3) an expectation that a simplified age-based influenza vaccine recommendation for all children and adolescents will improve vaccine coverage levels among children who already have a risk- or contact-based indication for annual influenza vaccination. Children typically have the highest attack rates during community outbreaks of influenza and serve as a major source of transmission within communities (1,2).
If sufficient vaccination coverage among children can be achieved, potential benefits include the indirect effect of reducing influenza among persons who have close contact with children and reducing overall transmission within communities. Achieving and sustaining community-level reductions in influenza will require mobilization of community resources and development of sustainable annual vaccination campaigns to assist health-care providers and vaccination programs in providing influenza vaccination services to children of all ages. In many areas, innovative community-based efforts, which might include mass vaccination programs in school or other community settings, will be needed to supplement vaccination services provided in health-care providers' offices or public health clinics. In nonrandomized community-based controlled trials, reductions in ILI-related symptoms and medical visits among household contacts have been demonstrated in communities where vaccination programs among school-aged children were established compared with communities without such vaccination programs (295,314,315).

Reducing influenza-related illness among children who are at high risk for influenza complications should continue to be a primary focus of influenza-prevention efforts. Children who should be vaccinated because they are at high risk for influenza complications include all children aged 6-59 months, children with certain medical conditions, children who are contacts of children aged <5 years or of adults aged ≥50 years, and children who are contacts of persons at high risk for influenza complications because of medical conditions.

All children aged 6 months-8 years who have not received vaccination against influenza previously should receive 2 doses of vaccine the first influenza season that they are vaccinated; the second dose should be administered 4 or more weeks after the initial dose. When only 1 dose is administered to a child aged 6 months-8 years during the child's first year of vaccination, 2 doses should be administered in the following season. However, 2 doses should be administered only in the first season of vaccination or, if only 1 dose was administered in the first season, in the season that immediately follows. For example, children aged 6 months-8 years who were vaccinated for the first time with the 2008-09 influenza vaccine but received only 1 dose should receive 2 doses of the 2009-10 influenza vaccine. All other children aged 6 months-8 years who have previously received 1 or more doses of influenza vaccine at any time should receive 1 dose of the 2009-10 influenza vaccine; children aged 6 months-8 years who received only a single vaccination during a season before 2007-08 likewise should receive 1 dose of the 2009-10 influenza vaccine (these dose-selection rules are sketched in the illustrative example below). If possible, both doses should be administered before onset of influenza season. However, vaccination, including the second dose, is recommended even after influenza virus begins to circulate in a community.
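To make the pediatric dose-selection rules above concrete, the following is a minimal sketch; the function name and its inputs are hypothetical illustrations of the logic described in this subsection, not a clinical decision tool, and ages are simplified to months.

```python
from typing import Optional

def doses_needed_2009_10(age_months: int,
                         prior_doses_any_season: int,
                         first_season: Optional[str],
                         doses_in_first_season: int) -> int:
    """Illustrative sketch of the 2009-10 dose-count rules for children.

    first_season is the season of first-ever influenza vaccination
    (e.g., "2008-09"), or None if the child was never vaccinated.
    """
    if age_months < 6:
        raise ValueError("influenza vaccination is not recommended under 6 months")
    if age_months >= 9 * 12:
        return 1  # children aged 9 years and older receive a single annual dose
    if prior_doses_any_season == 0:
        return 2  # first season of vaccination: 2 doses, >=4 weeks apart
    if first_season == "2008-09" and doses_in_first_season == 1:
        return 2  # catch-up applies only in the season immediately after the first
    return 1      # all other previously vaccinated children receive 1 dose

# Example from the text: first vaccinated in 2008-09 but received only 1 dose.
assert doses_needed_2009_10(30, 1, "2008-09", 1) == 2
```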
# HCP and Other Persons Who Can Transmit Influenza to Those at High Risk

Healthy persons who are infected with influenza virus, including those with subclinical infection, can transmit influenza virus to persons at higher risk for complications from influenza. In addition to HCP, groups that can transmit influenza to high-risk persons and that should be vaccinated include employees of assisted living and other residences for persons in groups at high risk, persons who provide home care to persons in groups at high risk, and household contacts of persons in groups at high risk.

All HCP and persons in training for health-care professions should be vaccinated annually against influenza. Persons working in health-care settings who should be vaccinated include physicians, nurses, and other workers in both hospital and outpatient-care settings, medical emergency-response workers (e.g., paramedics and emergency medical technicians), employees of nursing homes and long-term-care facilities who have contact with patients or residents, and students in these professions who will have contact with patients (359,360,371).

Facilities that employ HCP should provide vaccine to workers by using approaches that have been demonstrated to be effective in increasing vaccination coverage. Health-care administrators should consider the level of vaccination coverage among HCP to be one measure of a patient safety quality program and consider obtaining signed declinations from personnel who decline influenza vaccination for reasons other than medical contraindications (360,372,373). Influenza vaccination rates among HCP within facilities should be regularly measured and reported, and ward-, unit-, and specialty-specific coverage rates should be provided to staff and administration (360). Studies have demonstrated that organized campaigns can attain higher rates of vaccination among HCP with moderate effort and by using strategies that increase vaccine acceptance (358,360,374).

Efforts to increase vaccination coverage among HCP are supported by various national accrediting and professional organizations and in certain states by statute. The Joint Commission on Accreditation of Health-Care Organizations has approved an infection-control standard that requires accredited organizations to offer influenza vaccinations to staff, including volunteers and licensed independent practitioners with close patient contact. The standard became an accreditation requirement beginning January 1, 2007 (375). In addition, the Infectious Diseases Society of America has recommended mandatory vaccination for HCP, with a provision for declination of vaccination based on religious or medical reasons (376). Some states have regulations regarding vaccination of HCP in long-term-care facilities (377), require that health-care facilities offer influenza vaccination to HCP, or require that HCP either receive influenza vaccination or indicate a religious, medical, or philosophic reason for not being vaccinated (378,379).

# Close Contacts of Immunocompromised Persons

Immunocompromised persons are at risk for influenza complications but might have inadequate protection after vaccination. Close contacts of immunocompromised persons, including HCP, should be vaccinated to reduce the risk for influenza transmission. TIV is recommended for vaccinating household members, HCP, and others who have close contact with severely immunosuppressed persons (e.g., patients with hematopoietic stem cell transplants) during those periods in which the immunosuppressed person requires care in a protective environment (typically defined as a specialized patient-care area with a positive airflow relative to the corridor, high-efficiency particulate air filtration, and frequent air changes) (360,380). LAIV transmission from a recently vaccinated person causing clinically important illness in an immunocompromised contact has not been reported. The rationale for avoiding use of LAIV among HCP or other close contacts of severely immunocompromised patients is the theoretical risk that a live, attenuated vaccine virus could be transmitted to the severely immunosuppressed person.
As a precautionary measure, HCP who receive LAIV should avoid providing care for severely immunosuppressed patients requiring a protected environment for 7 days after vaccination. Hospital visitors who have received LAIV should avoid contact with severely immunosuppressed persons in protected environments for 7 days after vaccination but should not be restricted from visiting less severely immunosuppressed patients.

No preference is indicated for TIV use by persons who have close contact with persons with lesser degrees of immunosuppression (e.g., persons with diabetes, persons with asthma who take corticosteroids, persons who have recently received chemotherapy or radiation but who are not being cared for in a protective environment as defined above, or persons infected with HIV) or for TIV use by HCP or other healthy nonpregnant persons aged 2-49 years in close contact with persons in all other groups at high risk.

# Pregnant Women

Pregnant women and newborns are at risk for influenza complications, and all women who are pregnant or will be pregnant during influenza season should be vaccinated. The American College of Obstetricians and Gynecologists and the American Academy of Family Physicians also have recommended routine vaccination of all pregnant women (381). No preference is indicated for use of TIV that does not contain thimerosal as a preservative (see Vaccine Preservative in Multidose Vials of TIV) for any group recommended for vaccination, including pregnant women. LAIV is not licensed for use in pregnant women. However, pregnant women do not need to avoid contact with persons recently vaccinated with LAIV.

# Breastfeeding Mothers

Vaccination is recommended for all persons, including breastfeeding women, who are contacts of infants or children aged <5 years because infants and young children are at high risk for influenza complications and are more likely to require medical care or hospitalization if infected. Breastfeeding does not affect the immune response adversely and is not a contraindication for vaccination (179). Unless contraindicated because of other medical conditions, women who are breastfeeding can receive either TIV or LAIV. In one randomized controlled trial conducted in Bangladesh, infants born to women vaccinated during pregnancy had a lower risk for laboratory-confirmed influenza. However, the relative contributions to this protection of breastfeeding and of passive transfer of maternal antibodies during pregnancy were not determined (154).

# Travelers

The risk for exposure to influenza during travel depends on the time of year and destination. In the temperate regions of the Southern Hemisphere, influenza activity occurs typically during April-September. In temperate climate zones of the Northern and Southern Hemispheres, travelers also can be exposed to influenza during the summer, especially when traveling as part of large tourist groups (e.g., on cruise ships) that include persons from areas of the world in which influenza viruses are circulating (382,383). In the tropics, influenza occurs throughout the year. In a study among Swiss travelers to tropical and subtropical countries, influenza was the most frequently acquired vaccine-preventable disease (384). Any traveler who wants to reduce the risk for influenza infection should consider influenza vaccination, preferably at least 2 weeks before departure.
In particular, persons at high risk for complications of influenza who were not vaccinated with influenza vaccine during the preceding fall or winter should consider receiving influenza vaccine before travel if they plan to travel
- to the tropics,
- with organized tourist groups at any time of year, or
- to the Southern Hemisphere during April-September.

No information is available about the benefits of revaccinating persons before summer travel who already were vaccinated during the preceding fall, and revaccination is not recommended. Persons at high risk who receive the previous season's vaccine before travel should be revaccinated with the current vaccine the following fall or winter. Persons at higher risk for influenza complications should consult with their health-care practitioner to discuss the risk for influenza or other travel-related diseases before embarking on travel during the summer.

# General Population

Vaccination is recommended for any person who wishes to reduce the likelihood of becoming ill with influenza or transmitting influenza to others should they become infected. Healthy, nonpregnant persons aged 2-49 years might choose to receive either TIV or LAIV. All other persons aged ≥6 months should receive TIV. Persons who provide essential community services should be considered for vaccination to minimize disruption of essential activities during influenza outbreaks. Students or other persons in institutional settings (e.g., those who reside in dormitories or correctional facilities) should be encouraged to receive vaccine to minimize morbidity and the disruption of routine activities during influenza epidemics (385,386).

# Recommended Vaccines for Different Age Groups

When vaccinating children aged 6-35 months with TIV, health-care providers should use TIV that has been licensed by the FDA for this age group (i.e., TIV manufactured by Sanofi Pasteur [Fluzone]) (219). TIV from Novartis (Fluvirin) is FDA-approved in the United States for use among persons aged ≥4 years (220). TIV from GlaxoSmithKline (Fluarix and FluLaval) or CSL Biotherapies (Afluria) is labeled for use in persons aged ≥18 years because data to demonstrate immunogenicity or efficacy among younger persons have not been provided to FDA (207,208,218). LAIV from MedImmune (FluMist) is recommended for use by healthy nonpregnant persons aged 2-49 years (Table 2) (291). If a pediatric vaccine dose (0.25 mL) is administered to an adult, an additional pediatric dose (0.25 mL) should be given to provide a full adult dose (0.5 mL). If the error is discovered later (after the patient has left the vaccination setting), an adult dose should be administered as soon as the patient can return. No action needs to be taken if an adult dose is administered to a child. Several new vaccine formulations are being evaluated in immunogenicity and efficacy trials; when licensed, these new products will increase the influenza vaccine supply and provide additional vaccine choices for practitioners and their patients.
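The age-based licensure rules above amount to a small lookup. The following is a minimal sketch of that logic; the function name and product labels are illustrative simplifications, not a clinical tool, and the brand list reflects only the products named in this section.

```python
from typing import List

def licensed_products(age_years: float, healthy: bool, pregnant: bool) -> List[str]:
    """Illustrative sketch of the age-based licensure rules described above."""
    options: List[str] = []
    if age_years >= 0.5:          # Sanofi Pasteur TIV: licensed from age 6 months
        options.append("TIV (Sanofi Pasteur)")
    if age_years >= 4:            # Fluvirin: persons aged >=4 years
        options.append("TIV (Novartis, Fluvirin)")
    if age_years >= 18:           # Fluarix, FluLaval, Afluria: persons aged >=18 years
        options += ["TIV (GlaxoSmithKline, Fluarix/FluLaval)",
                    "TIV (CSL Biotherapies, Afluria)"]
    if 2 <= age_years <= 49 and healthy and not pregnant:
        options.append("LAIV (MedImmune, FluMist)")  # healthy nonpregnant, 2-49 years
    return options

# Example: a healthy 3-year-old may receive the Sanofi Pasteur TIV or LAIV,
# but not products labeled only for persons aged >=4 or >=18 years.
print(licensed_products(3, healthy=True, pregnant=False))
```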
# Influenza Vaccines and Use of Influenza Antiviral Medications

Unvaccinated persons who are receiving antiviral medications for treatment or chemoprophylaxis often also are recommended for vaccination. Administration of TIV to persons receiving influenza antivirals is acceptable. The effect on safety and effectiveness of LAIV coadministration with influenza antiviral medications has not been studied. However, because influenza antivirals reduce replication of influenza viruses, LAIV should not be administered until 48 hours after cessation of influenza antiviral therapy, and influenza antiviral medications should not be administered for 2 weeks after receipt of LAIV. Persons receiving antivirals within the period 2 days before to 14 days after vaccination with LAIV should be revaccinated at a later date with any approved vaccine formulation (179,291).

# Considerations When Using LAIV

LAIV is an option for vaccination of healthy, nonpregnant persons aged 2-49 years, including HCP and other close contacts of high-risk persons (excepting severely immunocompromised persons who require care in a protected environment). No preference is indicated for LAIV or TIV when considering vaccination of healthy,¶ nonpregnant persons aged 2-49 years. Possible advantages of LAIV include its potential to induce a broad mucosal and systemic immune response in children, its ease of administration, and the possibly increased acceptability of an intranasal rather than intramuscular route of administration.

If the vaccine recipient sneezes after administration, the dose should not be repeated. However, if nasal congestion is present that might impede delivery of the vaccine to the nasopharyngeal mucosa, deferral of administration should be considered until resolution of the illness, or TIV should be administered instead. No data exist about concomitant use of nasal corticosteroids or other intranasal medications (261).

Although FDA licensure of LAIV excludes children aged 2-4 years with a history of asthma or recurrent wheezing, the precise risk, if any, of wheezing caused by LAIV among these children is unknown because experience with LAIV among these young children is limited. Young children might not have a history of recurrent wheezing if their exposure to respiratory viruses has been limited because of their age. Certain children might have a history of wheezing with respiratory illnesses but have not had asthma diagnosed.

The following screening recommendations should be used to assist persons who administer influenza vaccines in providing the appropriate vaccine for children aged 2-4 years. Clinicians and vaccination programs should screen for asthma or wheezing illness (or a history of wheezing illness) when considering use of LAIV for children aged 2-4 years and should avoid use of this vaccine in children with asthma or a wheezing episode within the previous 12 months. Health-care providers should consult the medical record, when available, to identify children aged 2-4 years with asthma or recurrent wheezing that might indicate asthma. In addition, to identify children who might be at greater risk for asthma and possibly at increased risk for wheezing after receiving LAIV, parents or caregivers of children aged 2-4 years should be asked: "In the past 12 months, has a health-care provider ever told you that your child had wheezing or asthma?" Children whose parents or caregivers answer "yes" to this question and children who have asthma or who had a wheezing episode noted in the medical record during the preceding 12 months should not receive LAIV. TIV is available for use in children with asthma or wheezing (387).

LAIV can be administered to persons with minor acute illnesses (e.g., diarrhea or mild upper respiratory tract infection with or without fever). However, if nasal congestion is present that might impede delivery of the vaccine to the nasopharyngeal mucosa, deferral of administration should be considered until resolution of the illness.
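Pulling together the LAIV considerations above, a minimal screening sketch follows. The function and field names are hypothetical, the rules are deliberately simplified, and the sketch is illustrative only; the authoritative criteria are the contraindications and precautions enumerated in the next subsection.

```python
from datetime import date, timedelta
from typing import Optional

def laiv_reasonable(age_years: float, pregnant: bool, high_risk: bool,
                    wheeze_or_asthma_past_12m: bool,
                    antiviral_therapy_ended: Optional[date],
                    vaccination_date: date) -> bool:
    """Illustrative LAIV screening sketch based on the considerations above."""
    if not 2 <= age_years <= 49:
        return False  # LAIV is licensed only for persons aged 2-49 years
    if pregnant or high_risk:
        return False  # LAIV is for healthy, nonpregnant persons only
    if age_years < 5 and wheeze_or_asthma_past_12m:
        return False  # screen children aged 2-4 years for asthma or wheezing
    if antiviral_therapy_ended is not None:
        # LAIV should not be given until 48 hours after antiviral therapy ends;
        # antivirals likewise should not be given for 2 weeks after LAIV.
        if vaccination_date < antiviral_therapy_ended + timedelta(hours=48):
            return False
    return True

# Example: a healthy 30-year-old whose antiviral course ended the same day
# should defer LAIV for 48 hours, so this returns False.
print(laiv_reasonable(30, False, False, False, date(2009, 10, 1), date(2009, 10, 1)))
```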
# Contraindications and Precautions for Use of LAIV

The effectiveness or safety of LAIV is not known for the following groups, and administration of LAIV to them is contraindicated:
- persons aged <2 years or those aged ≥50 years;
- persons with underlying medical conditions that confer a higher risk for influenza complications;
- children aged 2-4 years with asthma or a wheezing episode during the preceding 12 months;
- children or adolescents receiving aspirin or other salicylates (because of the association of Reye syndrome with wild-type influenza virus infection); or
- pregnant women.

A moderate or severe illness with or without fever is a precaution for use of LAIV. GBS within 6 weeks following a previous dose of influenza vaccine is considered to be a precaution for use of influenza vaccines. LAIV should not be administered to close contacts of immunosuppressed persons who require a protected environment.

¶ Use of the term "healthy" in this recommendation refers to persons who do not have any of the underlying medical conditions that confer high risk for severe complications (see Contraindications and Precautions for Use of LAIV).

# Personnel Who Can Administer LAIV

Low-level introduction of vaccine viruses into the environment probably is unavoidable when administering LAIV. The risk for acquiring vaccine viruses from the environment is unknown but is probably low. Severely immunosuppressed persons should not administer LAIV. However, other persons at higher risk for influenza complications can administer LAIV. These include persons with underlying medical conditions placing them at higher risk or who are likely to be at risk, including pregnant women, persons with asthma, and persons aged ≥50 years.

# Concurrent Administration of Influenza Vaccine with Other Vaccines

Concurrent use of LAIV with measles, mumps, and rubella (MMR) vaccine alone or with MMR and varicella vaccines among children aged 12-15 months has been studied, and no interference with the immunogenicity of antigens in any of the vaccines was observed (261,388). Among adults aged ≥50 years, the safety and immunogenicity of zoster vaccine and TIV were similar whether administered simultaneously or spaced 4 weeks apart (389). In the absence of specific data indicating interference, following ACIP's general recommendations for vaccination is prudent (179). Inactivated vaccines do not interfere with the immune response to other inactivated vaccines or to live vaccines. Inactivated or live vaccines can be administered simultaneously with LAIV. However, after administration of a live vaccine, at least 4 weeks should pass before another live vaccine is administered.

# Recommendations for Vaccination Administration and Vaccination Programs

Although influenza vaccination levels increased substantially during the 1990s, little progress has been made since 2000 toward achieving national health objectives, and further improvements in vaccine coverage levels are needed to reduce the annual impact of influenza substantially. Strategies to improve vaccination levels, including using reminder/recall systems and standing orders programs (335,336,345), should be implemented whenever feasible. Vaccination efforts should begin as soon as vaccine is available and continue through the influenza season. Vaccination coverage can be increased by administering vaccine before and during the influenza season to persons during hospitalizations or routine health-care visits.
Vaccinations can be provided in alternative settings (e.g., pharmacies, grocery stores, workplaces, or other locations in the community), thereby making special visits to physicians' offices or clinics unnecessary. Coordinated campaigns such as the National Influenza Vaccination Week (December 6-12, 2009) provide opportunities to refocus public attention on the benefits, safety, and availability of influenza vaccination throughout the influenza season. When educating patients about adverse events, clinicians should provide access to Vaccine Information Sheets (available at http://www.cdc.gov/vaccines/pubs/vis), and emphasize that 1) TIV contains noninfectious killed viruses and cannot cause influenza, 2) LAIV contains weakened influenza viruses that cannot replicate outside the upper respiratory tract and are unlikely to infect others, and 3) concomitant symptoms or respiratory disease unrelated to vaccination with either TIV or LAIV can occur after vaccination. Adverse events after influenza vaccination should be reported promptly to VAERS at http://vaers.hhs.gov even if the health-care professional is not certain that the vaccine caused the event.

# Information About the Vaccines for Children Program

The Vaccines for Children (VFC) program supplies vaccine to all states, territories, and the District of Columbia for use by participating providers. These vaccines are to be provided to eligible children without vaccine cost to the patient or the provider, although the provider might charge a vaccine administration fee. All routine childhood vaccines recommended by ACIP are available through this program, including influenza vaccines. The program saves parents and providers out-of-pocket expenses for vaccine purchases and provides cost savings to states through CDC's vaccine contracts. The program results in lower vaccine prices and ensures that all states pay the same contract prices. Detailed information about the VFC program is available at http://www.cdc.gov/vaccines/programs/vfc/default.htm.

# Influenza Vaccine Supply Considerations

The annual supply of influenza vaccine and the timing of its distribution cannot be guaranteed in any year. During the 2008-09 influenza season, 113 million doses of influenza vaccine were distributed in the United States. For the 2009-10 season, total production of seasonal influenza vaccine for the United States is anticipated to be >130 million doses, depending on demand and production yields. However, influenza vaccine distribution delays or vaccine shortages remain possible. One factor that affects production is the inherently tight time constraint on manufacturing the vaccine, given the annual updating of the influenza vaccine strains. Multiple manufacturing and regulatory issues, including the anticipated need to produce a separate vaccine against novel influenza A (H1N1), also might affect the production schedule.

To ensure optimal use of available doses of influenza vaccine, health-care providers, persons planning organized campaigns, and state and local public health agencies should develop plans for expanding outreach and infrastructure to vaccinate more persons in targeted groups and others who wish to reduce their risk for influenza. They also should develop contingency plans for the timing and prioritization of administering influenza vaccine if the supply of vaccine is delayed or reduced. If supplies of TIV are not adequate, vaccination should be carried out in accordance with local circumstances of supply and demand based on the judgment of state and local health officials and health-care providers.
Guidance for tiered use of TIV during prolonged distribution delays or supply shortfalls is available from CDC and will be modified as needed in the event of a shortage. CDC and other public health agencies will assess the vaccine supply on a continuing basis throughout the manufacturing period and will inform both providers and the general public if any indication exists of a substantial delay or an inadequate supply.

Because LAIV is recommended for use only in healthy nonpregnant persons aged 2-49 years, no recommendations for prioritization of LAIV use are made. Either LAIV or TIV can be used when considering vaccination of healthy, nonpregnant persons aged 2-49 years. However, during shortages of TIV, LAIV should be used preferentially when feasible for all healthy nonpregnant persons aged 2-49 years (including HCP) who desire or are recommended for vaccination, to increase the availability of inactivated vaccine for persons at high risk.

# Timing of Vaccination

Vaccination efforts should be structured to ensure the vaccination of as many persons as possible over the course of several months, with emphasis on vaccinating before influenza activity in the community begins. Even if vaccine distribution begins before October, distribution probably will not be completed until December or January. The following recommendations reflect this phased distribution of vaccine.

In any given year, the optimal time to vaccinate patients cannot be determined precisely because influenza seasons vary in their timing and duration, and more than one outbreak might occur in a single community in a single year. In the United States, localized outbreaks that indicate the start of seasonal influenza activity can occur as early as October. However, in >80% of influenza seasons since 1976, peak influenza activity (which often is close to the midpoint of influenza activity for the season) has not occurred until January or later, and in >60% of seasons, the peak was in February or later (Figure 1).

In general, health-care providers should begin offering vaccination soon after vaccine becomes available and if possible by October. To avoid missed opportunities for vaccination, providers should offer vaccination during routine health-care visits or during hospitalizations whenever vaccine is available. The potential addition of a novel influenza A (H1N1) vaccination program to the existing burden on vaccination programs and providers underscores the need for careful planning of seasonal vaccination programs. Beginning use of seasonal vaccine as soon as it is available, including in September or earlier, might reduce the overlap of seasonal and novel influenza vaccination efforts.

Vaccination efforts should continue throughout the season, because the duration of the influenza season varies, and influenza might not appear in certain communities until February or March. Providers should offer influenza vaccine routinely, and organized vaccination campaigns should continue throughout the influenza season, including after influenza activity has begun in the community. Vaccine administered in December or later, even if influenza activity has already begun, is likely to be beneficial in the majority of influenza seasons. The majority of adults have antibody protection against influenza virus infection within 2 weeks after vaccination (390,391).
All children aged 6 months-8 years who have not received vaccination against influenza previously should receive their first dose as soon after vaccine becomes available as is feasible and should receive the second dose ≥4 weeks later. This practice increases the opportunity for both doses to be administered before or shortly after the onset of influenza activity. Vaccination clinics should be scheduled through December, and later if feasible, with attention to settings that serve children aged ≥6 months, pregnant women, other persons aged ≥50 years, HCP, and persons who are household contacts of children aged <59 months or of other persons at high risk. Planners are encouraged to develop the capacity and flexibility to schedule at least one vaccination clinic in December. Guidelines for planning large-scale vaccination clinics are available at http://www.cdc.gov/flu/professionals/vaccination/vax_clinic.htm.

During a vaccine shortage or delay, substantial proportions of TIV doses might not be released and distributed until November and December or later. When the vaccine is substantially delayed or disease activity has not subsided, providers should consider offering vaccination clinics into January and beyond as long as vaccine supplies are available. Campaigns using LAIV also can extend into January and beyond.

# Strategies for Implementing Vaccination Recommendations in Health-Care Settings

Successful vaccination programs combine publicity and education for HCP and other potential vaccine recipients, a plan for identifying persons recommended for vaccination, use of reminder/recall systems, assessment of practice-level vaccination rates with feedback to staff, and efforts to remove administrative and financial barriers that prevent persons from receiving the vaccine, including use of standing orders programs (336,392). The use of standing orders programs by long-term-care facilities (e.g., nursing homes and skilled nursing facilities), hospitals, and home health agencies ensures that vaccination is offered. Standing orders programs for influenza vaccination should be conducted under the supervision of a licensed practitioner according to a physician-approved facility or agency policy by HCP trained to screen patients for contraindications to vaccination, administer vaccine, and monitor for adverse events. The Centers for Medicare and Medicaid Services (CMS) has removed the physician signature requirement for the administration of influenza and pneumococcal vaccines to Medicare and Medicaid patients in hospitals, long-term-care facilities, and home health agencies (393). To the extent allowed by local and state law, these facilities and agencies can implement standing orders for influenza and pneumococcal vaccination of Medicare- and Medicaid-eligible patients. Payment for influenza vaccine under Medicare Part B is available (394,395). Other settings (e.g., outpatient facilities, managed care organizations, assisted living facilities, correctional facilities, pharmacies, and adult workplaces) are encouraged to introduce standing orders programs (396). In addition, physician reminders (e.g., flagging charts) and patient reminders are recognized strategies for increasing rates of influenza vaccination. Persons for whom influenza vaccine is recommended can be identified and vaccinated in the settings described in the following sections.
# Outpatient Facilities Providing Ongoing Care

Staff in facilities providing ongoing medical care (e.g., physicians' offices, public health clinics, employee health clinics, hemodialysis centers, hospital specialty-care clinics, and outpatient rehabilitation programs) should identify and label the medical records of patients who should receive vaccination. Vaccine should be offered during visits throughout the influenza season. The offer of vaccination and its receipt or refusal should be documented in the medical record or vaccination information system. Patients for whom vaccination is recommended and who do not have regularly scheduled visits during the fall should be reminded by mail, telephone, or other means of the need for vaccination.

# Outpatient Facilities Providing Episodic or Acute Care

Acute health-care facilities (e.g., EDs and walk-in clinics) should offer vaccinations throughout the influenza season to persons for whom vaccination is recommended or provide written information regarding why, where, and how to obtain the vaccine. This written information should be available in languages appropriate for the populations served by the facility.

# Nursing Homes and Other Long-Term-Care Facilities

Vaccination should be provided routinely to all residents of long-term-care facilities. If possible, all residents should be vaccinated at one time before influenza season. In the majority of seasons, TIV will become available to long-term-care facilities in October or November, and vaccination should commence as soon as vaccine is available. As soon as possible after admission to the facility, the benefits and risks of vaccination should be discussed and educational materials provided (397). Signed consent is not required (398). Residents admitted after completion of the vaccination program at the facility should be vaccinated at the time of admission.

Since October 2005, CMS has required nursing homes participating in the Medicare and Medicaid programs to offer all residents influenza and pneumococcal vaccines and to document the results. According to the requirements, each resident is to be vaccinated unless contraindicated medically, the resident or a legal representative refuses vaccination, or the vaccine is not available because of shortage. This information is to be reported as part of the CMS Minimum Data Set, which tracks nursing home health parameters (395,399).

# Acute-Care Hospitals

Hospitals should serve as a key setting for identifying persons at increased risk for influenza complications. Unvaccinated persons of all ages (including children) with high-risk conditions and persons aged 6 months-18 years or ≥50 years who are hospitalized at any time during the period when vaccine is available should be offered and strongly encouraged to receive influenza vaccine before they are discharged. Standing orders to offer influenza vaccination to all hospitalized persons should be considered.

# Visiting Nurses and Others Providing Home Care to Persons at High Risk

Nursing-care plans should identify patients for whom vaccination is recommended, and vaccine should be administered in the home, if necessary, as soon as influenza vaccine is available and throughout the influenza season. Caregivers and other persons in the household (including children) should be referred for vaccination.
# Other Facilities Providing Services to Persons Aged ≥50 Years

Facilities providing services to persons aged ≥50 years (e.g., assisted living housing, retirement communities, and recreation centers) should offer unvaccinated residents, attendees, and staff annual on-site vaccination before the start of the influenza season. Continuing to offer vaccination throughout the fall and winter months is appropriate. Efforts to vaccinate newly admitted patients or new employees also should be continued, both to prevent illness and to avoid having these persons serve as a source of new influenza infections. Staff education should emphasize the benefits of protection from influenza through vaccination for staff members themselves and for their patients.

# Health-Care Personnel

Health-care facilities should offer influenza vaccinations to all HCP, including night, weekend, and temporary staff. Particular emphasis should be placed on providing vaccinations to workers who provide direct care for persons at high risk for influenza complications. Efforts should be made to educate HCP regarding the benefits of vaccination and the potential health consequences of influenza illness for their patients, themselves, and their family members. All HCP should be provided convenient access to influenza vaccine at the work site, free of charge, as part of employee health programs (360,374,375).

# Future Directions for Research and Recommendations Related to Influenza Vaccine

Although available influenza vaccines are effective and safe, additional research is needed to improve prevention efforts. Most mortality from influenza occurs among persons aged ≥65 years (6), and more immunogenic influenza vaccines are needed for this age group and other groups at high risk for mortality. Additional research also is needed to understand potential biases in estimating the benefits of vaccination among older adults in reducing hospitalizations and deaths (82,175,400). Additional studies of the relative cost-effectiveness and cost-utility of influenza vaccination among children and adults, especially those aged <65 years, are needed and should be designed to account for year-to-year variations in influenza attack rates, illness severity, hospitalization costs and rates, and vaccine effectiveness when evaluating the long-term costs and benefits of annual vaccination (401). Additional data on indirect effects of vaccination also are needed to quantify the benefits of influenza vaccination of HCP in protecting their patients (308) and the benefits of vaccinating children to reduce influenza complications among those at risk.

Because expansions in ACIP recommendations for vaccination will lead to more persons being vaccinated, much larger research networks are needed that can identify and assess the causality of very rare events that occur after vaccination, including GBS. Ongoing studies of safety in pediatric populations with expanded recommendations are needed and are underway. These research networks also could provide a platform for effectiveness and safety studies in the event of a pandemic. A recent study showed that influenza vaccines contain structures that can induce anti-GM1 antibodies after inoculation into mice (402). Further research on potential biologic or genetic risk factors for GBS in humans also is needed (397). In addition, a better understanding is needed of how to motivate persons at risk to seek annual influenza vaccination.
ACIP continues to review new vaccination strategies to protect against influenza, including the possibility of expanding routine influenza vaccination recommendations toward universal vaccination or other approaches that will help reduce or prevent the transmission of influenza and reduce the burden of severe disease (403)(404)(405)(406)(407)(408). The 2009 ACIP expansion of annual vaccination recommendations to include all children aged 6 months-18 years will require a substantial increase in resources for epidemiologic research to develop long-term studies capable of assessing the possible effects on community-level transmission. Additional planning to improve surveillance systems capable of monitoring effectiveness, safety, and vaccine coverage, and further development of implementation strategies, will also be necessary. In addition, as noted by the National Vaccine Advisory Committee, strengthening the U.S. influenza vaccination system will require improving vaccine financing and demand and implementing systems to help better understand the burden of influenza in the United States (409). Vaccination programs capable of delivering annual influenza vaccination to a broad range of the population could potentially serve as a resilient and sustainable platform for delivering vaccines and monitoring outcomes for other urgently required public health interventions (e.g., vaccines for pandemic influenza or medications to prevent or treat illnesses caused by acts of terrorism).

# Seasonal Influenza Vaccine and Influenza Viruses of Animal Origin

Human infection with novel or nonhuman influenza A virus strains, including influenza A viruses of animal origin, is a nationally notifiable disease (410). Human infections with nonhuman or novel human influenza A viruses should be identified quickly and investigated to determine possible sources of exposure, identify additional cases, and evaluate the possibility of human-to-human transmission, because transmission patterns could change over time with variations in these influenza A viruses.

Sporadic severe and fatal human cases of infection with highly pathogenic avian influenza A (H5N1) virus have been identified in Asia, Africa, Europe, and the Middle East, primarily among persons who have had direct or close unprotected contact with sick or dead birds associated with the ongoing H5N1 panzootic among birds (411)(412)(413)(414)(415)(416)(417)(418)(419). Limited, nonsustained human-to-human transmission of H5N1 virus has likely occurred in some case clusters (420,421). To date, no evidence exists of genetic reassortment between human influenza A and H5N1 viruses. However, influenza viruses derived from strains circulating among poultry (e.g., the H5N1 virus that has caused outbreaks of avian influenza and has occasionally infected humans) have the potential to recombine with human influenza A viruses (422,423). To date, highly pathogenic H5N1 virus has not been identified in wild or domestic birds or in humans in the United States. Guidance for testing suspected cases of H5N1 virus infection among persons in the U.S. and for follow-up of contacts is available (424,425).

Human illness from infection with other avian influenza A virus subtypes also has been documented, including infections with low pathogenic and highly pathogenic viruses.
A range of clinical illness has been reported for human infection with low pathogenic avian influenza viruses, including conjunctivitis with influenza A (H7N7) virus in the United Kingdom, lower respiratory tract disease and conjunctivitis with influenza A (H7N2) virus in the United Kingdom, and uncomplicated ILI with influenza A (H9N2) virus in Hong Kong and China (426-432). Two human cases of infection with low pathogenic influenza A (H7N2) virus were reported in the United States (429). Although persons infected with highly pathogenic avian influenza A (H7N7) virus typically have had ILI or conjunctivitis, severe infections, including one fatal case in the Netherlands, have been reported (433,434). Conjunctivitis also has been reported after human infection with highly pathogenic influenza A (H7N3) virus in Canada and low pathogenic influenza A (H7N3) virus in the United Kingdom (426,434). In contrast, sporadic infections with highly pathogenic avian influenza A (H5N1) virus have caused severe illness in many countries, with an overall case-fatality proportion of >60% (421,435).

Swine influenza A (H1N1), A (H1N2), and A (H3N2) viruses, including reassortant viruses, are endemic among pig populations in the United States (436). Two clusters of influenza A (H2N3) virus infections among pigs have been reported recently (437). Outbreaks among pigs normally occur in colder weather months (late fall and winter) and sometimes with the introduction of new pigs into susceptible herds. An estimated 30% of the pig population in the United States has serologic evidence of having had swine influenza A (H1N1) virus infection. Sporadic human infections with a variety of swine influenza A viruses occur in the United States, but the incidence of these human infections is unknown (438-443). Persons infected with swine influenza A viruses typically report direct contact with ill pigs or places where pigs have been present (e.g., agricultural fairs or farms) and have symptoms that are clinically indistinguishable from those caused by other respiratory viruses (440,441,444,445). Clinicians should consider swine influenza A virus infection in the differential diagnosis of patients with ILI who have had recent contact with pigs. The sporadic cases identified in recent years have not resulted in sustained human-to-human transmission of swine influenza A viruses or community outbreaks (368,445). Although immunity to swine influenza A viruses appears to be low (<2%) in the overall human population, 10%-20% of persons exposed occupationally to pigs (e.g., pig farmers or pig veterinarians) have been documented in certain studies to have antibody evidence of prior swine influenza A (H1N1) virus infection (438,446).

In April 2009, a novel influenza A (H1N1) virus similar to influenza viruses previously identified in swine was determined to be the cause of an influenza-like respiratory illness among humans that spread across North America and throughout most of the world by May 2009 (9,447). The epidemiology of influenza caused by this novel influenza virus is still being studied, and whether this virus will achieve long-term circulation among humans or even replace one of the other seasonal influenza viruses as the cause of annual epidemics is unknown.
Current seasonal influenza vaccines are not expected to provide protection against human infection with avian influenza A viruses, including influenza A (H5N1) viruses, or to provide protection against currently circulating swine influenza A viruses or the novel influenza A (H1N1) virus (318,448). However, reducing seasonal influenza risk through influenza vaccination of persons who might be exposed to nonhuman influenza viruses (e.g., H5N1 virus) might reduce the theoretical risk for recombination of influenza A viruses of animal origin and human influenza A viruses by preventing seasonal influenza A virus infection within a human host. CDC has recommended that persons who are charged with responding to avian influenza outbreaks among poultry receive seasonal influenza vaccination (448,449). As part of preparedness activities, the Occupational Safety and Health Administration (OSHA) has issued an advisory notice regarding poultry worker safety that is intended for implementation in the event of a suspected or confirmed avian influenza outbreak at a poultry facility in the United States. OSHA guidelines recommend that poultry workers in an involved facility receive vaccination against seasonal influenza; OSHA also has recommended that HCP involved in the care of patients with documented or suspected avian influenza be vaccinated with the most recent seasonal human influenza vaccine to reduce the risk for co-infection with human influenza A viruses (449).

# Recommendations for Using Antiviral Agents for Seasonal Influenza

Annual vaccination is the primary strategy for preventing complications of influenza virus infections. Antiviral medications with activity against influenza viruses are useful adjuncts in the prevention of influenza and are effective for treatment when used early in the course of illness. Four influenza antiviral agents are licensed in the United States: amantadine, rimantadine, zanamivir, and oseltamivir. During the 2007-08 influenza season, influenza A (H1N1) viruses with a mutation that confers resistance to oseltamivir became more common in the United States and other countries (450-452). As of July 2009, in the United States, approximately 99% of seasonal human influenza A (H1N1) viruses tested have been resistant to oseltamivir, whereas none of the influenza A (H3N2) or influenza B viruses tested have been resistant. As of July 2, 2009, with few exceptions, novel influenza A (H1N1) viruses that began circulating in April 2009 remained sensitive to oseltamivir (453). Oseltamivir resistance among circulating seasonal influenza A (H1N1) virus strains presents challenges for the selection of antiviral medications for treatment and chemoprophylaxis of influenza and provides additional reasons for clinicians to test patients for influenza virus infection and to consult surveillance data when evaluating persons with acute respiratory illnesses during influenza season. CDC has published interim guidelines that provide options for treatment or chemoprophylaxis of influenza in the United States if oseltamivir-resistant seasonal influenza A (H1N1) viruses are circulating widely in a community or if the prevalence of oseltamivir-resistant influenza A (H1N1) viruses is uncertain (8). Updated guidance on antiviral use will be available from ACIP before the start of the 2009-10 influenza season. This guidance will include a summary of antiviral resistance data from the 2008-09 influenza season and will be published separately from the vaccination recommendations.
Until the ACIP recommendations for use of antivirals against influenza are published, CDC's previously published recommendations for use of influenza antiviral medications should be consulted for guidance on antiviral use (8). New guidance on clinical management of influenza, including use of antivirals, also is available from the Infectious Diseases Society of America (454).

# Sources of Information Regarding Influenza and Its Surveillance

Information regarding influenza surveillance, prevention, detection, and control is available at http://www.cdc.gov/flu. During October-May, surveillance information is updated weekly. In addition, periodic updates regarding influenza are published in MMWR (http://www.cdc.gov/mmwr). Additional information regarding influenza vaccine can be obtained by calling 1-800-CDC-INFO (1-800-232-4636). State and local health departments should be consulted about availability of influenza vaccine, access to vaccination programs, information related to state or local influenza activity, reporting of influenza outbreaks and influenza-related pediatric deaths, and advice concerning outbreak control.

# Vaccine Adverse Event Reporting System (VAERS)

Clinically significant adverse events that follow influenza vaccination should be reported promptly to VAERS, even if the reporter is unsure whether vaccine caused the event. Reports may be filed securely online at http://vaers.hhs.gov; reporting forms and other assistance are available by telephone at 1-800-822-7967.

# National Vaccine Injury Compensation Program

The National Vaccine Injury Compensation Program (VICP), established by the National Childhood Vaccine Injury Act of 1986, as amended, provides a mechanism through which compensation can be paid on behalf of a person determined to have been injured or to have died as a result of receiving a vaccine covered by VICP. The Vaccine Injury Table lists the vaccines covered by VICP and the injuries and conditions (including death) for which compensation might be paid. If the injury or condition is not on the Table, or does not occur within the specified time period on the Table, persons must prove that the vaccine caused the injury or condition. For a person to be eligible for compensation, the general filing deadlines for injuries require claims to be filed within 3 years after the first symptom of the vaccine injury; for a death, claims must be filed within 2 years of the vaccine-related death and not more than 4 years after the start of the first symptom of the vaccine-related injury from which the death occurred. When a new vaccine is covered by VICP or when a new injury/condition is added to the Table, claims that do not meet the general filing deadlines must be filed within 2 years from the date the vaccine or injury/condition is added to the Table for injuries or deaths that occurred up to 8 years before the Table change. Persons of all ages who receive a VICP-covered vaccine might be eligible to file a claim. Both the intranasal (LAIV) and injectable (TIV) trivalent influenza vaccines are covered under VICP. Additional information about VICP is available at http://www.hrsa.gov/vaccinecompensation or by calling 1-800-338-2382.

# Additional Information Regarding Influenza Virus Infection Control Among Specific Populations

Each year, ACIP provides general, annually updated information regarding control and prevention of influenza.
Other reports related to controlling and preventing influenza among specific populations (e.g., immunocompromised persons, HCP, hospital patients, pregnant women, children, and travelers) also are available in other CDC publications.
This report updates the 2008 recommendations by CDC's Advisory Committee on Immunization Practices (ACIP) regarding the use of influenza vaccine for the prevention and control of seasonal influenza (CDC. Prevention and control of influenza: recommendations of the Advisory Committee on Immunization Practices [ACIP]. MMWR 2008;57[No. RR-7]). Information on vaccination issues related to the recently identified novel influenza A (H1N1) virus will be published later in 2009. The 2009 seasonal influenza recommendations include new and updated information. Highlights of the 2009 recommendations include 1) a recommendation that annual vaccination be administered to all children aged 6 months-18 years for the 2009-10 influenza season; 2) a recommendation that vaccines containing the 2009-10 trivalent vaccine virus strains A/Brisbane/59/2007 (H1N1)-like, A/Brisbane/10/2007 (H3N2)-like, and B/Brisbane/60/2008-like antigens be used; and 3) a notice that recommendations for influenza diagnosis and antiviral use will be published before the start of the 2009-10 influenza season. Vaccination efforts should begin as soon as vaccine is available and continue through the influenza season. Approximately 83% of the United States population is specifically recommended for annual vaccination against seasonal influenza; however, <40% of the U.S. population received the 2008-09 influenza vaccine. These recommendations also include a summary of safety data for U.S.-licensed influenza vaccines. These recommendations and other information are available at CDC's influenza website (http://www.cdc.gov/flu); any updates or supplements that might be required during the 2009-10 influenza season also can be found at this website. Vaccination and health-care providers should be alert to announcements of recommendation updates and should check the CDC influenza website periodically for additional information.

# Introduction

In the United States, annual epidemics of seasonal influenza occur typically during the late fall through early spring. Influenza viruses can cause disease among persons in any age group, but rates of infection are highest among children (1-3). Rates of serious illness and death are highest among persons aged >65 years, children aged <2 years, and persons of any age who have medical conditions that place them at increased risk for complications from influenza (1,4,5). An annual average of approximately 36,000 deaths during 1990-1999 and 226,000 hospitalizations during 1979-2001 have been associated with influenza epidemics (6,7).

Annual influenza vaccination is the most effective method for preventing influenza virus infection and its complications. Influenza vaccine can be administered to any person aged >6 months who does not have contraindications to vaccination to reduce the likelihood of becoming ill with influenza or of transmitting influenza to others. Trivalent inactivated influenza vaccine (TIV) can be used for any person aged >6 months, including those with high-risk conditions (Boxes 1 and 2). Live, attenuated influenza vaccine (LAIV) may be used for healthy, nonpregnant persons aged 2-49 years. No preference is indicated for LAIV or TIV when considering vaccination of healthy, nonpregnant persons aged 2-49 years. Because the safety or effectiveness of LAIV has not been established in persons with underlying medical conditions that confer a higher risk for influenza complications, these persons should be vaccinated only with TIV. Influenza viruses undergo frequent antigenic change (i.e., antigenic drift); to gain immunity against viruses in circulation, patients must receive an annual vaccination against the influenza viruses that are predicted on the basis of viral surveillance data. Although vaccination coverage has increased in recent years for many groups targeted for routine vaccination, coverage remains low among most of these groups, and strategies to improve vaccination coverage, including use of reminder/recall systems and standing orders programs, should be implemented or expanded.

Box 1 (children and adolescents): All children aged 6 months-18 years should be vaccinated annually. Children and adolescents at higher risk for influenza complications should continue to be a focus of vaccination efforts as providers and programs transition to routinely vaccinating all children and adolescents, including those who:
• are aged 6 months-4 years (59 months);
• have chronic pulmonary (including asthma), cardiovascular (except hypertension), renal, hepatic, cognitive, neurologic/neuromuscular, hematological, or metabolic disorders (including diabetes mellitus);
• are immunosuppressed (including immunosuppression caused by medications or by human immunodeficiency virus);
• are receiving long-term aspirin therapy and therefore might be at risk for experiencing Reye syndrome after influenza virus infection;
• are residents of long-term care facilities; and
• will be pregnant during the influenza season.
Note: Children aged <6 months cannot receive influenza vaccination. Household and other close contacts (e.g., daycare providers) of children aged <6 months, including older children and adolescents, should be vaccinated.

Box 2 (adults): Annual vaccination against influenza is recommended for any adult who wants to reduce the risk of becoming ill with influenza or of transmitting it to others. Vaccination is recommended for all adults without contraindications in the following groups, because these persons either are at higher risk for influenza complications or are close contacts of persons at higher risk.

Antiviral medications are an adjunct to vaccination and are effective when administered as treatment and when used for chemoprophylaxis after an exposure to influenza virus. However, the emergence since 2005 of resistance to one or more of the four licensed antiviral agents (oseltamivir, zanamivir, amantadine, and rimantadine) among circulating strains has complicated antiviral treatment and chemoprophylaxis recommendations. Updated antiviral treatment and chemoprophylaxis recommendations will be provided in a separate set of guidelines later in 2009. CDC has issued interim recommendations for antiviral treatment and chemoprophylaxis of influenza (8), and these guidelines should be consulted pending issuance of new recommendations. In April 2009, a novel influenza A (H1N1) virus that is similar to influenza viruses previously identified in swine was determined to be the cause of an influenza respiratory illness that spread across North America and was identified in many areas of the world by May 2009.
The symptoms of novel influenza A (H1N1) virus infection are similar to those of seasonal influenza, and specific diagnostic testing is required to distinguish novel influenza A (H1N1) virus infection from seasonal influenza (9). The epidemiology of this illness is still being studied, and prevention issues related to this newly emerging influenza virus will be published separately.

# Methods

CDC's Advisory Committee on Immunization Practices (ACIP) provides annual recommendations for the prevention and control of influenza. The ACIP Influenza Vaccine Working Group* meets monthly throughout the year to discuss newly published studies, review current guidelines, and consider revisions to the recommendations. In preparing the annual recommendations for consideration by the full committee, members of the working group consider a variety of issues, including the burden of influenza illness; vaccine effectiveness, safety, and coverage in groups recommended for vaccination; feasibility; cost-effectiveness; and anticipated vaccine supply. Working group members also request periodic updates on vaccine and antiviral production, supply, safety, and efficacy from vaccinologists, epidemiologists, and manufacturers. State and local vaccination program representatives are consulted. CDC's Influenza Division (available at http://www.cdc.gov/flu) provides influenza surveillance and antiviral resistance data. The Vaccines and Related Biological Products Advisory Committee provides advice on vaccine strain selection to the Food and Drug Administration (FDA), which selects the viral strains to be used in the annual trivalent influenza vaccines.

Published, peer-reviewed studies are the primary source of data used by ACIP in making recommendations for the prevention and control of influenza, but unpublished data that are relevant to issues under discussion also might be considered. Among studies discussed or cited, those of greatest scientific quality and those that measured influenza-specific outcomes are the most influential. For example, population-based estimates that use outcomes associated with laboratory-confirmed influenza virus infection contribute the most specific data for estimates of influenza burden. The best evidence for vaccine or antiviral efficacy and effectiveness comes from randomized controlled trials that assess laboratory-confirmed influenza infections as an outcome measure and consider factors such as timing and intensity of influenza circulation and degree of match between vaccine strains and circulating wild-type strains (10,11). Randomized, placebo-controlled trials cannot be performed ethically in populations for which vaccination already is recommended, but observational studies that assess outcomes associated with laboratory-confirmed influenza infection can provide important vaccine or antiviral effectiveness data. Randomized, placebo-controlled clinical trials are the best source of vaccine and antiviral safety data for common adverse events; however, such studies do not have the statistical power to identify rare but potentially serious adverse events. The frequency of rare adverse events that might be associated with vaccination is best assessed by reviewing computerized medical records from large linked clinical databases and medical charts of persons who are identified as having a potential adverse event after vaccination (12,13).
Vaccine coverage data from a nationally representative, randomly selected population that includes verification of vaccination through health-care record review are superior to coverage data derived from limited populations or data without verification of vaccination; however, these data rarely are available for older children or adults (14). Finally, studies that assess vaccination program practices that improve vaccination coverage are most influential in formulating recommendations if the study design includes a nonintervention comparison group. In cited studies that included statistical comparisons, a difference was considered to be statistically significant if the p-value was <0.05 or the 95% confidence interval (CI) around an estimate of effect allowed rejection of the null hypothesis (i.e., no effect).

These recommendations were presented to the full ACIP and approved in February 2009. Modifications were made to the ACIP statement during the subsequent review process at CDC to update and clarify wording in the document. Vaccine recommendations apply only to persons who do not have contraindications to vaccine use (see Contraindications and Precautions for Use of TIV and Contraindications and Precautions for Use of LAIV). Data presented in this report were current as of July 17, 2009. Further updates, if needed, will be posted at CDC's influenza website (http://www.cdc.gov/flu).

# Primary Changes and Updates in the Recommendations

The 2009 recommendations include three principal changes or updates:
• Annual vaccination of all children aged 6 months-18 years should begin as soon as the 2009-10 influenza vaccine is available. Annual vaccination of all children aged 6 months-4 years (59 months) and older children with conditions that place them at increased risk for complications from influenza should continue to be a primary focus of vaccination efforts as providers and programs transition to routinely vaccinating all children.
• Most circulating seasonal influenza A (H1N1) viruses from the United States and other countries are now resistant to oseltamivir.
• Recommendations for influenza diagnosis and antiviral use will be published later in 2009. CDC issued interim recommendations for antiviral treatment and chemoprophylaxis of influenza in December 2008, and these should be consulted for guidance pending recommendations from the ACIP (8).

# Background and Epidemiology

# Biology of Influenza

Influenza A and B are the two types of influenza viruses that cause epidemic human disease. Influenza A viruses are categorized into subtypes on the basis of two surface antigens: hemagglutinin and neuraminidase. Since 1977, influenza A (H1N1) viruses, influenza A (H3N2) viruses, and influenza B viruses have circulated globally. Influenza A (H1N2) viruses that probably emerged after genetic reassortment between human A (H3N2) and A (H1N1) viruses also have been identified in some influenza seasons. In April 2009, human infections with a novel influenza A (H1N1) virus were identified; as of June 2009, infections with the novel influenza A (H1N1) virus have been reported worldwide. This novel virus is derived partly from influenza A viruses that circulate in swine and is antigenically distinct from human influenza A (H1N1) viruses in circulation since 1977. Influenza A subtypes and B viruses are further separated into groups on the basis of antigenic similarities. New influenza virus variants result from frequent antigenic change (i.e., antigenic drift) caused by point mutations and recombination events that occur during viral replication (15).
Recent studies have begun to shed some light on the complex molecular evolution and epidemiologic dynamics of influenza A viruses (16-18). Currently circulating influenza B viruses are separated into two distinct genetic lineages (Yamagata and Victoria) but are not categorized into subtypes. Influenza B viruses undergo antigenic drift less rapidly than influenza A viruses. Influenza B viruses from both lineages have circulated in most recent influenza seasons (19).

Immunity to the surface antigens, particularly the hemagglutinin, reduces the likelihood of infection (20). Antibody against one influenza virus type or subtype confers limited or no protection against another type or subtype of influenza virus. Furthermore, antibody to one antigenic type or subtype of influenza virus might not protect against infection with a new antigenic variant of the same type or subtype (21). Frequent emergence of antigenic variants through antigenic drift is the virologic basis for seasonal epidemics and is the reason for annually reassessing the need to change one or more of the recommended strains for influenza vaccines.

More dramatic changes, or antigenic shifts, occur less frequently. Antigenic shift occurs when a new subtype of influenza A virus appears and can result in the emergence of a novel influenza A virus with the potential to cause a pandemic. New influenza A subtypes have the potential to cause a pandemic when they are able to cause human illness and demonstrate efficient human-to-human transmission in a population with little or no previously existing immunity (15). Novel influenza A (H1N1) virus is not a new subtype, but because the large majority of humans appear to have no pre-existing antibody to key novel influenza A (H1N1) virus hemagglutinin epitopes, substantial potential exists for widespread infection (16).

# Health-Care Use, Hospitalizations, and Deaths Attributed to Influenza

In the United States, annual epidemics of influenza typically occur during the fall or winter months, but the peak of influenza activity can occur as late as April or May (Figure 1). Influenza-related complications requiring urgent medical care, including hospitalizations or deaths, can result from the direct effects of influenza virus infection, from complications associated with age or pregnancy, or from complications of underlying cardiopulmonary conditions or other chronic diseases. Studies that have measured rates of a clinical outcome without laboratory confirmation of influenza virus infection (e.g., respiratory illness requiring hospitalization during influenza season) to assess the effect of influenza can be difficult to interpret because other respiratory pathogens (e.g., respiratory syncytial virus) circulate during the same time as influenza viruses (22-24). However, increases in health-care provider visits for acute febrile respiratory illness occur each year during the time when influenza viruses circulate. Data from the U.S. Outpatient Influenza-like Illness Surveillance Network (ILINet) demonstrate the annual increase in physician visits for influenza-like illness (ILI) during each influenza season; for 2009, the data also indicate the recent resurgence of respiratory illness associated with circulation of novel influenza A (H1N1) virus (Figure 2) (25,26).

(Figure 2 notes: For seasons containing a week 53, the week 53 data point is an average of weeks 52 and 1. The national baseline is the mean percentage of visits for ILI during noninfluenza weeks for the previous three seasons plus two standard deviations; a noninfluenza week is a week during which <10% of specimens tested positive for influenza.)
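The national baseline definition retained in the Figure 2 notes above is directly computable from weekly surveillance values. The following is a minimal sketch of that calculation; the function name and the weekly values are hypothetical illustrations, not part of ILINet's published tooling.

```python
from statistics import mean, stdev

def national_baseline(weekly_pct_ili, weekly_pct_positive):
    """Mean %ILI during noninfluenza weeks plus two standard deviations,
    where a noninfluenza week has <10% of specimens positive for influenza
    (the definition given in the Figure 2 notes). In practice, weeks from
    the previous three seasons are pooled."""
    noninfluenza = [ili for ili, pos in zip(weekly_pct_ili, weekly_pct_positive)
                    if pos < 10.0]
    return mean(noninfluenza) + 2 * stdev(noninfluenza)

# Hypothetical weekly values pooled from prior seasons:
pct_ili = [1.1, 1.3, 1.2, 2.8, 3.5, 1.0, 1.2]    # % of visits for ILI
pct_pos = [2.0, 4.0, 6.0, 22.0, 30.0, 3.0, 5.0]  # % of specimens positive
print(round(national_baseline(pct_ili, pct_pos), 2))  # prints 1.39
```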
During seasonal influenza epidemics from 1979-1980 through 2000-2001, the estimated annual overall number of influenza-associated hospitalizations in the United States ranged from approximately 55,000 to 431,000 per annual epidemic (mean: 226,000) (7). The estimated annual number of deaths attributed to influenza from the 1990-91 influenza season through the 1998-99 season ranged from 17,000 to 51,000 per epidemic (mean: 36,000) (6). In the United States, the estimated number of influenza-associated deaths increased during 1990-1999. This increase was attributed in part to the substantial increase in the number of persons aged >65 years who were at increased risk for death from influenza complications (6). In one study, an average of approximately 19,000 influenza-associated pulmonary and circulatory deaths per influenza season occurred during 1976-1990 compared with an average of approximately 36,000 deaths per season during 1990-1999 (6). In addition, influenza A (H3N2) viruses, which have been associated with higher mortality (27), predominated in 90% of influenza seasons during 1990-1999 compared with 57% of seasons during 1976-1990 (6).

Influenza viruses cause disease among persons in all age groups (1-5). Rates of infection are highest among children, but the risks for complications, hospitalizations, and deaths from influenza are higher among persons aged >65 years, young children, and persons of any age who have medical conditions that place them at increased risk for complications from influenza (1,4,5,28-31). Estimated rates of influenza-associated hospitalizations and deaths varied substantially by age group in studies conducted during different influenza epidemics. During 1990-1999, estimated average rates of influenza-associated pulmonary and circulatory deaths per 100,000 persons were 0.4-0.6 among persons aged 0-49 years, 7.5 among persons aged 50-64 years, and 98.3 among persons aged >65 years (6).

# Children

Among children aged <5 years, influenza-related illness is a common cause of visits to medical practices and emergency departments (EDs). During two influenza seasons (2002-03 and 2003-04), the percentage of visits among children aged <5 years with acute respiratory illness or fever caused by laboratory-confirmed influenza ranged from 10%-19% of medical office visits to 6%-29% of ED visits during the influenza season. On the basis of these data, the rate of visits to medical clinics for influenza was estimated to be 50-95 per 1,000 children, and the rate of visits to EDs was estimated to be 6-27 per 1,000 children (32). A multiyear study in New York City used viral surveillance data to estimate influenza strain-specific illness rates among ED visits. In addition to the expected variation by season and age group, influenza B epidemics were found to be an important cause of illness among school-aged children in several seasons, and annual epidemics of both influenza A and B peaked among school-aged children before other age groups (33). Retrospective studies using medical records data have demonstrated similar rates of illness among children aged <5 years during other influenza seasons (29,34,35).
During the influenza season, an estimated 7-12 additional outpatient visits and 5-7 additional antibiotic prescriptions per 100 children aged <15 years have been documented when compared with periods when influenza viruses are not circulating, with rates decreasing with increasing age of the child (35). During 1993-2004 in the Boston area, the rate of ED visits for respiratory illness attributed to influenza virus on the basis of viral surveillance data among children aged <7 years during the winter respiratory illness season ranged from 22.0 per 1,000 children aged 6-23 months to 5.4 per 1,000 children aged 5-7 years (36).

Rates of influenza-associated hospitalization are substantially higher among infants and young children than among older children when influenza viruses are in circulation and are similar to rates for other groups considered at high risk for influenza-related complications (37-42), including persons aged >65 years (35,39). During 1979-2001, on the basis of data from a national sample of hospital discharges, the estimated rate of influenza-associated hospitalizations among children aged <5 years in the United States was 108 hospitalizations per 100,000 person-years (7). Recent population-based studies that measured hospitalization rates for laboratory-confirmed influenza in young children have documented hospitalization rates that are similar to or higher than rates derived from studies that analyzed hospital discharge data (32,34,41,43,44). Annual hospitalization rates for laboratory-confirmed influenza decrease with increasing age, ranging from 240-720 per 100,000 children aged <6 months to approximately 20 per 100,000 children aged 2-5 years (32). Hospitalization rates for children aged <5 years with high-risk medical conditions are approximately 250-500 per 100,000 children (29,31,45).

Influenza-associated deaths are uncommon among children. An estimated annual average of 92 influenza-related deaths (0.4 deaths per 100,000 persons) occurred among children aged <5 years during the 1990s compared with 32,651 deaths (98.3 per 100,000 persons) among adults aged >65 years (6). Of 153 laboratory-confirmed influenza-related pediatric deaths reported during the 2003-04 influenza season, 96 (63%) deaths occurred among children aged <5 years and 61 (40%) among children aged <2 years. Among the 149 children who died and for whom information on underlying health status was available, 100 (67%) did not have an underlying medical condition that was an indication for vaccination at that time (46). In California during the 2003-04 and 2004-05 influenza seasons, 51% of children with laboratory-confirmed influenza who died and 40% of those who required admission to an intensive care unit had no underlying medical conditions (47). These data indicate that although children with risk factors for influenza complications are at higher risk for death, the majority of pediatric deaths occur among children with no known high-risk conditions. The annual number of influenza-associated deaths among children reported to CDC for the past four influenza seasons has ranged from 44 during 2004-05 to 84 during 2007-08 (48). As of July 8, 2009, a total of 17 deaths caused by novel influenza A (H1N1) virus infection had occurred in 2009 among children in the United States (CDC, unpublished data, 2009).
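The age-specific mortality rates cited above follow from simple rate arithmetic. As a consistency check (the denominator below is back-calculated from the report's own figures, not stated in the source), the average of 92 pediatric deaths and the 0.4 per 100,000 rate imply a population of roughly 23 million children aged <5 years:

```latex
\[
\text{rate per } 100{,}000 \;=\; \frac{\text{events}}{\text{population at risk}} \times 100{,}000
\]
\[
\frac{92}{\approx 2.3\times 10^{7}} \times 100{,}000 \;\approx\; 0.4
\ \text{deaths per } 100{,}000\ \text{children aged} <5\ \text{years per year}
\]
```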
Death associated with laboratory-confirmed influenza virus infection among children (defined as persons aged <18 years) is a nationally reportable condition. Deaths among children that have been attributed to co-infection with influenza and Staphylococcus aureus, particularly methicillin-resistant S. aureus (MRSA), have increased during the preceding four influenza seasons (26,49). The reason for this increase is not established but might reflect an increasing prevalence within the general population of colonization with MRSA strains, some of which carry certain virulence factors (50,51).

# Adults

Hospitalization rates during the influenza season are substantially increased for persons aged >65 years. One retrospective analysis based on data from managed-care organizations collected during 1996-2000 estimated that the risk during influenza season among persons aged >65 years with underlying conditions that put them at risk for influenza-related complications (i.e., one or more of the conditions listed as indications for vaccination) was approximately 560 influenza-associated hospitalizations per 100,000 persons compared with approximately 190 per 100,000 healthy persons. Persons aged 50-64 years with underlying medical conditions also were at substantially increased risk for hospitalizations during influenza season compared with healthy adults aged 50-64 years. No increased risk for influenza-related hospitalizations was demonstrated among healthy adults aged 50-64 years or among those aged 19-49 years, regardless of underlying medical conditions (28).

Influenza is an important contributor to the annual increase in deaths attributed to pneumonia and influenza that is observed during the winter months (Figure 3). During 1976-2001, an estimated yearly average of 32,651 (90%) influenza-related deaths occurred among adults aged >65 years (6). Risk for influenza-related death was highest among the oldest elderly, with persons aged >85 years 16 times more likely to die from an influenza-related illness than persons aged 65-69 years (6).

The duration of influenza symptoms is prolonged and the severity of influenza illness is increased among persons with human immunodeficiency virus (HIV) infection (52-56). A retrospective study of young and middle-aged women enrolled in Tennessee's Medicaid program determined that the attributable risk for cardiopulmonary hospitalizations among women with HIV infection was higher during influenza seasons than it was either before or after influenza was circulating. The risk for hospitalization was higher for HIV-infected women than it was for women with other underlying medical conditions (57). Another study estimated that the risk for influenza-related death was 94-146 deaths per 100,000 persons with acquired immunodeficiency syndrome (AIDS) compared with 0.9-1.0 deaths per 100,000 persons aged 25-54 years and 64-70 deaths per 100,000 persons aged >65 years in the general population (58).

(Figure 3 notes: Each week, the vital statistics offices of 122 cities report the total number of death certificates received and the number of those for which pneumonia or influenza [P&I] was listed as the underlying or contributing cause of death, by age group; the percentage of all deaths attributable to P&I is compared with a seasonal baseline and an epidemic threshold value calculated for each week. The "epidemic threshold" is 1.645 standard deviations above the seasonal baseline of deaths, i.e., the point at which the observed proportion of deaths attributed to P&I is significantly higher than would be expected at that time of the year in the absence of substantial influenza-related mortality. The seasonal baseline of P&I deaths is calculated using a periodic regression model that incorporates a robust regression procedure applied to data from the previous 5 years.)
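The baseline and epidemic threshold described in the Figure 3 notes can be written out explicitly. The report specifies only "a periodic regression model"; the harmonic (Serfling-type) form below is a common formulation and is shown as an illustrative sketch rather than as CDC's exact specification:

```latex
% Illustrative Serfling-type periodic regression for the expected weekly
% percentage of deaths attributed to P&I, fit to the previous 5 years:
\[
\hat{y}(t) \;=\; \alpha + \beta t
  + \gamma_{1}\cos\!\left(\tfrac{2\pi t}{52}\right)
  + \gamma_{2}\sin\!\left(\tfrac{2\pi t}{52}\right)
\]
% Epidemic threshold: 1.645 standard deviations above the baseline;
% 1.645 is the one-sided 95th percentile of the standard normal distribution.
\[
\text{threshold}(t) \;=\; \hat{y}(t) + 1.645\,\hat{\sigma}
\]
```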
Influenza-related excess deaths among pregnant women were reported during the pandemics of 1918-1919 and 1957-1958 (59-63). Case reports and several epidemiologic studies also indicate that pregnancy increases the risk for influenza complications for the mother (64-69). The majority of studies that have attempted to assess the effect of influenza on pregnant women have measured changes in excess hospitalizations for respiratory illness during influenza season but not laboratory-confirmed influenza hospitalizations. Pregnant women have an increased number of medical visits for respiratory illnesses during influenza season compared with nonpregnant women (70). Hospitalized pregnant women with respiratory illness during influenza season have increased lengths of stay compared with hospitalized pregnant women without respiratory illness, and rates of hospitalization for respiratory illness were twice as high during influenza season (71). A retrospective cohort study of approximately 134,000 pregnant women conducted in Nova Scotia during 1990-2002 compared medical record data for pregnant women with data from the same women during the year before pregnancy. Among pregnant women, 0.4% were hospitalized and 25% visited a clinician during pregnancy for a respiratory illness. The rate of third-trimester hospital admissions during the influenza season was five times higher than the rate during the influenza season in the year before pregnancy and more than twice as high as the rate during the noninfluenza season. An excess of 1,210 hospital admissions in the third trimester per 100,000 pregnant women with comorbidities and 68 admissions per 100,000 women without comorbidities was reported (72). In one study, pregnant women with respiratory hospitalizations did not have an increase in adverse perinatal outcomes or delivery complications (73); another study indicated an increase in delivery complications, including fetal distress, preterm labor, and cesarean delivery. However, infants born to women with laboratory-confirmed influenza during pregnancy do not have higher rates of low birth weight, congenital abnormalities, or lower Apgar scores compared with infants born to uninfected women (64,74).

# Options for Controlling Influenza

The most effective strategy for preventing influenza is annual vaccination (10,15). Strategies that focus on providing routine vaccination to persons at higher risk for influenza complications have long been recommended, although coverage among the majority of these groups remains low. Routine vaccination of certain persons (e.g., children, contacts of persons at risk for influenza complications, and health-care personnel [HCP]) who serve as a source of influenza virus transmission might provide additional protection to persons at risk for influenza complications and reduce the overall influenza burden. However, coverage levels among these persons need to be increased before effects on transmission can be measured reliably.
Antiviral drugs used for chemoprophylaxis or treatment of influenza are adjuncts to vaccine but are not substitutes for annual vaccination. However, antiviral drugs might be underused among those hospitalized with influenza (75). Nonpharmacologic interventions (e.g., advising frequent handwashing and improved respiratory hygiene) are reasonable and inexpensive; these strategies have been demonstrated to reduce respiratory diseases, and reductions in detectable influenza A viruses on hands after handwashing also have been demonstrated (76-78). Few data are available to assess the effects of community-level respiratory disease mitigation strategies (e.g., closing schools, avoiding mass gatherings, or using respiratory protection) on reducing influenza virus transmission during typical seasonal influenza epidemics (79,80).

# Influenza Vaccine Efficacy, Effectiveness, and Safety

# Evaluating Influenza Vaccine Efficacy and Effectiveness Studies

The efficacy (i.e., prevention of illness among vaccinated persons in controlled trials) and effectiveness (i.e., prevention of illness in vaccinated populations) of influenza vaccines depend in part on the age and immunocompetence of the vaccine recipient, the degree of similarity between the viruses in the vaccine and those in circulation (see Effectiveness of Influenza Vaccination when Circulating Influenza Virus Strains Differ from Vaccine Strains), and the outcome being measured. Influenza vaccine efficacy and effectiveness studies have used multiple possible outcome measures, including the prevention of medically attended acute respiratory illness (MAARI), prevention of laboratory-confirmed influenza virus illness, prevention of influenza- or pneumonia-associated hospitalizations or deaths, or prevention of seroconversion to circulating influenza virus strains. Efficacy or effectiveness for more specific outcomes such as laboratory-confirmed influenza typically will be higher than for less specific outcomes such as MAARI because the causes of MAARI include infections with other pathogens that influenza vaccination would not be expected to prevent (81). Observational studies that compare less-specific outcomes among vaccinated populations to those among unvaccinated populations are subject to biases that are difficult to control for during analyses. For example, an observational study that determines that influenza vaccination reduces overall mortality might be biased if healthier persons in the study are more likely to be vaccinated (82,83). Randomized controlled trials that measure laboratory-confirmed influenza virus infections as the outcome are the most persuasive evidence of vaccine efficacy (a worked example of the standard efficacy calculation appears after the following section), but such trials cannot be conducted ethically among groups recommended to receive vaccine annually.

# Influenza Vaccine Composition

Both LAIV and TIV contain strains of influenza viruses that are antigenically equivalent to the annually recommended strains: one influenza A (H3N2) virus, one influenza A (H1N1) virus, and one influenza B virus. Each year, one or more virus strains in the vaccine might be changed on the basis of global surveillance for influenza viruses and the emergence and spread of new strains. For the 2009-10 influenza season, the influenza B vaccine virus strain was changed to B/Brisbane/60/2008 (a representative of the B/Victoria lineage) compared with the 2008-09 season. The influenza A (H1N1) and A (H3N2) vaccine virus strains were not changed (84). Viruses for both types of currently licensed vaccines are grown in eggs.
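As referenced above, efficacy in controlled trials reduces to a comparison of attack rates in the unvaccinated and vaccinated arms; the attack rates in the worked example below are hypothetical:

```latex
\[
VE \;=\; \frac{AR_{u} - AR_{v}}{AR_{u}} \;=\; 1 - RR,
\qquad
RR = \frac{AR_{v}}{AR_{u}}
\]
\[
\text{Example: } AR_{u} = 10\%,\; AR_{v} = 3\%
\;\Longrightarrow\;
VE = 1 - \tfrac{3}{10} = 70\%.
\]
```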
Both vaccines are administered annually to provide optimal protection against influenza virus infection (Table 1). Both TIV and LAIV are widely available in the United States. Although both types of vaccines are expected to be effective, the vaccines differ in several respects (Table 1).

# Major Differences Between TIV and LAIV

During the preparation of TIV, the vaccine viruses are made noninfectious (i.e., inactivated or killed) (15). Only subvirion and purified surface antigen preparations of TIV (often referred to as "split" and subunit vaccines, respectively) are available in the United States. TIV contains killed viruses and thus cannot cause influenza. LAIV contains live, attenuated influenza viruses that have the potential to cause mild signs or symptoms (e.g., runny nose, nasal congestion, fever, or sore throat). LAIV is administered intranasally by sprayer, whereas TIV is administered intramuscularly by injection. LAIV is licensed for use among nonpregnant persons aged 2-49 years; safety has not been established in persons with underlying medical conditions that confer a higher risk for influenza complications. TIV is licensed for use among persons aged >6 months, including those who are healthy and those with chronic medical conditions (Table 1).

# Correlates of Protection after Vaccination

Immune correlates of protection against influenza infection after vaccination include serum hemagglutination inhibition antibody and neutralizing antibody (20,85). Increased levels of antibody induced by vaccination decrease the risk for illness caused by strains that are antigenically similar to those strains of the same type or subtype included in the vaccine (86-89). The majority of healthy children and adults have high titers of antibody after vaccination (87,90). Although immune correlates such as achievement of certain antibody titers after vaccination correlate well with immunity on a population level, the significance of reaching or failing to reach a certain antibody threshold is not well understood on the individual level. Other immunologic correlates of protection that might best indicate clinical protection after receipt of an intranasal vaccine such as LAIV (e.g., mucosal antibody) are more difficult to measure (91,92). Laboratory measurements that correlate with protective immunity induced by LAIV have been described, including measurement of cell-mediated immunity with ELISPOT assays that measure gamma-interferon (89).

(Table 1 notes: If not administered simultaneously, either vaccine can be administered within 4 weeks of an inactivated vaccine.
* Children aged 6 months-8 years who have never received influenza vaccine before should receive 2 doses. Those who receive only 1 dose in their first year of vaccination should receive 2 doses in the following year, spaced 4 weeks apart.
† Persons at higher risk for complications of influenza infection because of underlying medical conditions should not receive LAIV. These persons include adults and children with chronic disorders of the pulmonary or cardiovascular systems; adults and children with chronic metabolic diseases (including diabetes mellitus), renal dysfunction, hemoglobinopathies, or immunosuppression; children and adolescents receiving long-term aspirin therapy (at risk for developing Reye syndrome after wild-type influenza infection); persons who have any condition (e.g., cognitive dysfunction, spinal cord injuries, seizure disorders, or other neuromuscular disorders) that can compromise respiratory function or the handling of respiratory secretions or that can increase the risk for aspiration; pregnant women; and residents of nursing homes and other chronic-care facilities that house persons with chronic medical conditions.
§ Clinicians and immunization programs should screen for possible reactive airways diseases when considering use of LAIV for children aged 2-4 years and should avoid use of this vaccine in children with asthma or a recent wheezing episode. Health-care providers should consult the medical record, when available, to identify children aged 2-4 years with asthma or recurrent wheezing that might indicate asthma. In addition, to identify children who might be at greater risk for asthma and possibly at increased risk for wheezing after receiving LAIV, parents or caregivers of children aged 2-4 years should be asked: "In the past 12 months, has a health-care provider ever told you that your child had wheezing or asthma?" Children whose parents or caregivers answer "yes" to this question and children who have asthma or who had a wheezing episode noted in the medical record during the preceding 12 months should not receive LAIV.
¶ LAIV coadministration has been evaluated systematically only among children aged 12-15 months who received measles, mumps, and rubella vaccine or varicella vaccine.
** TIV coadministration has been evaluated systematically only among adults who received pneumococcal polysaccharide or zoster vaccine.)

# Immunogenicity, Efficacy, and Effectiveness of TIV

# Children

Children aged >6 months typically have protective levels of anti-influenza antibody against specific influenza virus strains after receiving the recommended number of doses of influenza vaccine (85,90,93-97). In most seasons, one or more vaccine antigens are changed compared with the previous season. In consecutive years when vaccine antigens change, children aged <9 years who received only 1 dose of vaccine in their first year of vaccination are less likely to have protective antibody responses when administered only a single dose during their second year of vaccination compared with children who received 2 doses in their first year of vaccination (98-100). When the vaccine antigens do not change from one season to the next, priming children aged 6-23 months with a single dose of vaccine in the spring followed by a dose in the fall engenders similar antibody responses compared with a regimen of 2 doses in the fall (101). However, one study conducted during a season when the vaccine antigens did not change compared with the previous season estimated 62% effectiveness against ILI for healthy children who had received only 1 dose in the previous influenza season and only 1 dose in the study season compared with 82% for those who received 2 doses separated by >4 weeks during the study season (102).
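Immunogenicity outcomes such as those summarized above are conventionally reported as geometric mean hemagglutination inhibition (HI) titers and seroconversion proportions. The sketch below applies commonly used HI criteria (a ≥4-fold rise reaching a titer of ≥1:40); these conventions and the titer values are illustrative assumptions, not criteria stated in this report.

```python
import math

def geometric_mean_titer(titers):
    """Geometric mean of reciprocal HI titers (e.g., 40 represents 1:40)."""
    return math.exp(sum(math.log(t) for t in titers) / len(titers))

def seroconversion_rate(pre, post, min_fold=4, min_titer=40):
    """Proportion of subjects with a >=4-fold titer rise reaching >=1:40
    (a common HI convention, used here for illustration)."""
    converted = sum(1 for a, b in zip(pre, post)
                    if b >= min_fold * a and b >= min_titer)
    return converted / len(pre)

pre  = [10, 20, 10, 40, 10]   # hypothetical prevaccination titers
post = [80, 40, 160, 80, 20]  # hypothetical postvaccination titers
print(round(geometric_mean_titer(post)))  # prints 61
print(seroconversion_rate(pre, post))     # prints 0.4
```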
The antibody response among children at higher risk for influenza-related complications (e.g., children with chronic medical conditions) might be lower than that reported typically among healthy children (103,104). However, antibody responses among children with asthma are similar to those of healthy children and are not substantially altered during asthma exacerbations requiring short-term prednisone treatment (105).

Vaccine effectiveness studies also have indicated that 2 doses are needed to provide adequate protection during the first season that young children are vaccinated. Among children aged <5 years who have never received influenza vaccine previously or who received only 1 dose of influenza vaccine in their first year of vaccination, vaccine effectiveness is lower compared with children who received 2 doses in their first year of being vaccinated. Two large retrospective studies of young children who had received only 1 dose of TIV in their first year of being vaccinated determined that no decrease was observed in ILI-related office visits compared with unvaccinated children (102,106). Similar results were reported in a case-control study of children aged 6-59 months (107). These results, along with the immunogenicity data indicating that antibody responses are significantly higher when young children are given 2 doses, are the basis for the recommendation that all children aged <9 years who are being vaccinated for the first time should receive 2 vaccine doses separated by at least 4 weeks.

Estimates of vaccine efficacy or effectiveness among children aged >6 months have varied by season and study design. In a randomized trial conducted during five influenza seasons (1985-1990) in the United States among children aged 1-15 years, annual vaccination reduced laboratory-confirmed influenza A substantially (77%-91%) (87). A limited 1-year placebo-controlled study reported vaccine efficacy against laboratory-confirmed influenza illness of 56% among healthy children aged 3-9 years and 100% among healthy children and adolescents aged 10-18 years (108). A randomized, double-blind, placebo-controlled trial conducted during two influenza seasons among children aged 6-24 months indicated that efficacy against culture-confirmed influenza illness was 66% during the 1999-00 influenza season but that vaccination did not reduce culture-confirmed influenza illness significantly during the 2000-01 influenza season (109). A case-control study conducted during the 2003-04 season found vaccine effectiveness of 49% against laboratory-confirmed influenza (107). An observational study among children aged 6-59 months with laboratory-confirmed influenza compared with children who tested negative for influenza reported vaccine effectiveness of 44% in the 2003-04 influenza season and 57% during the 2004-05 season (110). Partial vaccination (only 1 dose for children being vaccinated for the first time) was not effective in either study. During an influenza season (2003-04) with a suboptimal vaccine match, a retrospective cohort study conducted among approximately 30,000 children aged 6 months-8 years indicated vaccine effectiveness of 51% against medically attended, clinically diagnosed pneumonia or influenza (i.e., no laboratory confirmation of influenza) among fully vaccinated children and 49% among approximately 5,000 children aged 6-23 months (106).
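Several of the observational studies above compare vaccination odds between laboratory-confirmed cases and test-negative controls; in such designs, effectiveness is typically estimated from the odds ratio of vaccination. The 2-by-2 counts in the worked example below are hypothetical:

```latex
\[
OR \;=\; \frac{a\,d}{b\,c},
\qquad
VE \;\approx\; (1 - OR)\times 100\%
\]
% a = vaccinated cases, b = vaccinated test-negative controls,
% c = unvaccinated cases, d = unvaccinated test-negative controls.
\[
a=20,\; b=80,\; c=50,\; d=100
\;\Longrightarrow\;
OR=\frac{20 \times 100}{80 \times 50}=0.5,
\qquad VE \approx 50\%.
\]
```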
Another retrospective cohort study of similar size, conducted during the same influenza season in Denver but limited to healthy children aged 6-21 months, estimated clinical effectiveness of 2 TIV doses to be 87% against pneumonia- or influenza-related office visits (102). Among children, TIV effectiveness might increase with age (87,111). A systematic review of published studies estimated vaccine effectiveness at 59% for children aged >2 years but concluded that additional evidence was needed to demonstrate effectiveness among children aged 6 months-2 years (112).

Because of the recognized influenza-related disease burden among children with other chronic diseases or immunosuppression and the long-standing recommendation for vaccination of these children, randomized placebo-controlled efficacy studies among these children have not been conducted. In a nonrandomized controlled trial among children aged 2-6 years and 7-14 years who had asthma, vaccine efficacy was 54% and 78% against laboratory-confirmed influenza type A infection and 22% and 60% against laboratory-confirmed influenza type B infection, respectively. Vaccinated children aged 2-6 years with asthma did not have substantially fewer type B influenza virus infections compared with the control group in this study (113). The association between vaccination and prevention of asthma exacerbations is unclear. One study suggested that vaccination might provide protection against asthma exacerbations (114); however, other studies of children with asthma have not demonstrated decreased exacerbations (115).

TIV has been demonstrated to reduce acute otitis media in some studies. Two studies have reported that TIV decreases the risk for influenza-related otitis media by approximately 30% among children with mean ages of 20 and 27 months, respectively (116,117). However, a large study conducted among children with a mean age of 14 months indicated that TIV was not effective against acute otitis media (109). Influenza vaccine effectiveness against a nonspecific clinical outcome such as acute otitis media, which is caused by a variety of pathogens and is not typically diagnosed using influenza virus culture, would be expected to be relatively low.

# Adults Aged <65 Years

One dose of TIV is highly immunogenic in healthy adults aged <65 years. Limited or no increase in antibody response is reported among adults when a second dose is administered during the same season (118-120). When the vaccine and circulating viruses are antigenically similar, TIV prevents laboratory-confirmed influenza illness among approximately 70%-90% of healthy adults aged <65 years in randomized controlled trials (121-124). Vaccination of healthy adults also has resulted in decreased work absenteeism and decreased use of health-care resources, including use of antibiotics, when the vaccine and circulating viruses are well-matched (121-123). Efficacy or effectiveness against laboratory-confirmed influenza illness was 47%-77% in studies conducted during different influenza seasons when the vaccine strains were antigenically dissimilar to the majority of circulating strains (117,119,121-124). However, effectiveness among healthy adults against influenza-related hospitalization, measured in the most recent of these studies, was 90% (125).
In some studies, persons with certain chronic diseases have demonstrated lower serum antibody responses after vaccination compared with healthy young adults and can remain susceptible to influenza virus infection and influenza-related upper respiratory tract illness (126,127). Vaccine effectiveness among adults aged <65 years who are at higher risk for influenza complications typically is lower than that reported for healthy adults. In a case-control study conducted during the 2003-04 influenza season, when the vaccine was a suboptimal antigenic match to many circulating virus strains, effectiveness for prevention of laboratory-confirmed influenza illness among adults aged 50-64 years with high-risk conditions was 48%, compared with 60% for healthy adults (125). Effectiveness against hospitalization among adults aged 50-64 years with high-risk conditions was 36%, compared with 90% effectiveness among healthy adults in that age range (125). A randomized controlled trial among adults in Thailand with chronic obstructive pulmonary disease (median age: 68 years) indicated a vaccine effectiveness of 76% in preventing laboratory-confirmed influenza during a season when viruses were well-matched to vaccine viruses. Effectiveness did not decrease with increasing severity of underlying lung disease (128).

Few randomized controlled trials have studied the effect of influenza vaccination on noninfluenza outcomes. A randomized controlled trial conducted in Argentina among 301 adults hospitalized with myocardial infarction or undergoing angioplasty for cardiovascular disease (56% of whom were aged >65 years) found that a significantly lower percentage of cardiovascular deaths occurred among vaccinated persons (6%) at 1 year after vaccination compared with unvaccinated persons (17%) (129). A randomized, double-blind, placebo-controlled study conducted in Poland among 658 persons with coronary artery disease indicated that significantly fewer vaccinated persons had a cardiac ischemic event during the 9 months of follow-up compared with unvaccinated persons (p <0.05) (130).

Observational studies that have measured clinical endpoints without laboratory confirmation of influenza virus infection typically have demonstrated substantial reductions in hospitalizations or deaths among adults with risk factors for influenza complications. In a case-control study conducted during 1999-2000 in Denmark among adults aged <65 years with underlying medical conditions, vaccination reduced deaths attributable to any cause by 78% and reduced hospitalizations attributable to respiratory infections or cardiopulmonary diseases by 87% (131). A benefit was reported after the first vaccination and increased with revaccination in subsequent years (132). Among patients with diabetes mellitus, vaccination was associated with a 56% reduction in any complication, a 54% reduction in hospitalizations, and a 58% reduction in deaths (133). Certain experts have noted that the substantial effects on morbidity and mortality among those who received influenza vaccination in these observational studies should be interpreted with caution because of the difficulties in ensuring that those who received vaccination had baseline health status similar to that of those who did not (82,83). One meta-analysis of published studies concluded that evidence was insufficient to demonstrate that persons with asthma benefit from vaccination (134).
However, a meta-analysis that examined effectiveness among persons with chronic obstructive pulmonary disease identified evidence of benefit from vaccination (135).

# Immunocompromised Persons

TIV produces adequate antibody concentrations against influenza among vaccinated HIV-infected persons who have minimal AIDS-related symptoms and normal or near-normal CD4+ T-lymphocyte cell counts (136-138). Among persons who have advanced HIV disease and low CD4+ T-lymphocyte cell counts, TIV might not induce protective antibody titers (138,139); a second dose of vaccine does not improve the immune response in these persons (139,140). A randomized, placebo-controlled trial determined that TIV was highly effective in preventing symptomatic, laboratory-confirmed influenza virus infection among HIV-infected persons with a mean of 400 CD4+ T-lymphocyte cells/mm3; however, a limited number of persons with CD4+ T-lymphocyte cell counts of <200 cells/mm3 were included in that study (140). A nonrandomized study of HIV-infected persons determined that influenza vaccination was most effective among persons with >100 CD4+ cells/mm3 and among those with <30,000 viral copies of HIV type-1/mL (53).

On the basis of certain limited studies, immunogenicity for persons with solid organ transplants varies according to transplant type. Among persons with kidney or heart transplants, the proportion who developed seroprotective antibody concentrations was similar or slightly reduced compared with healthy persons (141-143). However, studies among persons with liver transplants indicated reduced immunologic responses to influenza vaccination (144-146), especially if vaccination occurred within the 4 months after the transplant procedure (144).

# Pregnant Women and Neonates

Pregnant women have protective levels of anti-influenza antibodies after vaccination (147,148). Passive transfer of anti-influenza antibodies that might provide protection from vaccinated women to neonates has been reported (147,149-151). A retrospective, clinic-based study conducted during 1998-2003 documented a nonsignificant trend toward fewer episodes of MAARI during one influenza season among vaccinated pregnant women compared with unvaccinated pregnant women and substantially fewer episodes of MAARI during the peak influenza season (148). However, a retrospective study conducted during 1997-2002 that used clinical records data did not indicate a reduction in ILI among vaccinated pregnant women or their infants (152). In another study conducted during 1995-2001, medical visits for respiratory illness among the infants were not substantially reduced (153). One randomized controlled trial conducted in Bangladesh that provided vaccination to pregnant women during the third trimester demonstrated a 36% reduction in respiratory illness with fever among the vaccinated women and a 29% reduction in respiratory illness with fever among their infants during the first 6 months after birth. In addition, infants born to vaccinated women had a 63% reduction in laboratory-confirmed influenza illness during the first 6 months of life (154). All women in this trial breastfed their infants (mean duration: 14 weeks).

# Older Adults

Adults aged >65 years typically have a diminished immune response to influenza vaccination compared with young healthy adults, suggesting that immunity might be of shorter duration (although still extending through one influenza season) (155,156).
However, a review of the published literature concluded that no clear evidence existed that immunity declined more rapidly in the elderly (157), and additional vaccine doses during the same season do not increase the antibody response (118,120). Infections among the vaccinated elderly might be associated with an age-related reduction in the ability to respond to vaccination rather than reduced duration of immunity (127,128). One prospective cohort study found that immunogenicity among hospitalized persons who were either aged >65 years or aged 18-64 years with one or more chronic medical conditions was similar to that among outpatients (158). The only randomized controlled trial among community-dwelling persons aged >60 years reported a vaccine efficacy of 58% (CI = 26%-77%) against laboratory-confirmed influenza illness during a season when the vaccine strains were considered to be well-matched to circulating strains (159). Additional information from this trial published separately indicated that efficacy among those aged >70 years was 57% (CI = -36%-87%), similar to that among younger persons. However, few persons aged >75 years participated in this study, and the wide confidence interval for the estimate of efficacy among participants aged >70 years included 0 (160).

Influenza vaccine effectiveness in preventing MAARI among the elderly in nursing homes has been estimated at 20%-40% (161,162), and reported outbreaks among well-vaccinated nursing home populations have suggested that vaccination might not have any significant effectiveness when circulating strains are drifted from vaccine strains (163,164). In contrast, some studies have indicated that vaccination can be up to 80% effective in preventing influenza-related death (161,165-167). Among elderly persons not living in nursing homes or similar long-term-care facilities, influenza vaccine is 27%-70% effective in preventing hospitalization for pneumonia and influenza (168-170). Influenza vaccination reduces the frequency of secondary complications and reduces the risk for influenza-related hospitalization and death among community-dwelling adults aged >65 years with and without high-risk medical conditions (e.g., heart disease and diabetes) (169-174). However, studies demonstrating large reductions in hospitalizations and deaths among the vaccinated elderly have been conducted using medical record databases and have not measured reductions in laboratory-confirmed influenza illness. These studies have been challenged because of concerns that they have not adequately controlled for the propensity of healthier persons to be more likely than less healthy persons to receive vaccination (82,83,166,175-177).

# TIV Dosage, Administration, and Storage

The composition of TIV varies according to manufacturer, and package inserts should be consulted. TIV formulations in multidose vials contain the vaccine preservative thimerosal; preservative-free, single-dose preparations also are available. TIV should be stored at 35°F-46°F (2°C-8°C) and should not be frozen. TIV that has been frozen should be discarded. Dosage recommendations and schedules vary according to age group (Table 2). Vaccine prepared for a previous influenza season should not be administered to provide protection for any subsequent season. The intramuscular route is recommended for TIV. Adults and older children should be vaccinated in the deltoid muscle.
A needle length of >1 inch (>25 mm) should be considered for persons in these age groups because needles of <1 inch might be of insufficient length to penetrate muscle tissue in certain adults and older children (178). When injecting into the deltoid muscle among children with adequate deltoid muscle mass, a needle length of 7/8-1.25 inches is recommended (179). Infants and young children should be vaccinated in the anterolateral aspect of the thigh. A needle length of 7/8-1 inch should be used for children aged <12 months.

In addition, to identify children who might be at greater risk for asthma and possibly at increased risk for wheezing after receiving FluMist, parents or caregivers of children aged 2-4 years should be asked: "In the past 12 months, has a health-care provider ever told you that your child had wheezing or asthma?" Children whose parents or caregivers answer "yes" to this question and children who have asthma or who had a wheezing episode noted in the medical record during the preceding 12 months should not receive FluMist.

†† Two doses administered at least 4 weeks apart are recommended for children aged 2-8 years who are receiving LAIV for the first time, and those who only received 1 dose in their first year of vaccination should receive 2 doses in the following year.

# Adverse Events After Receipt of TIV

# Children

Studies support the safety of annual TIV vaccination in children and adolescents. The largest published postlicensure population-based study assessed TIV safety in 251,600 children aged <18 years (including 8,476 vaccinations in children aged 6-23 months) who were enrolled in one of five health maintenance organizations (HMOs) participating in the Vaccine Safety Datalink (VSD) during 1993-1999. This study indicated no increase in clinically important medically attended events during the 2 weeks after inactivated influenza vaccination compared with control periods 3-4 weeks before and after vaccination (180). A retrospective cohort study using VSD medical records data from 45,356 children aged 6-23 months provided additional evidence supporting the overall safety of TIV in this age group. During the 2 weeks after vaccination, TIV was not associated with statistically significant increases in any clinically important medically attended events other than gastritis/duodenitis, and 13 diagnoses, including acute upper respiratory illness, otitis media, and asthma, were significantly less common (181). On chart review, most children with a diagnosis of gastritis/duodenitis had self-limited vomiting or diarrhea. The positive or negative associations between TIV and any of these diagnoses do not necessarily indicate a causal relationship (181).

In a study of 791 healthy children aged 1-15 years, postvaccination fever was noted among 12% of those aged 1-5 years, 5% of those aged 6-10 years, and 5% of those aged 11-15 years (87). Fever, malaise, myalgia, and other systemic symptoms that can occur after vaccination with inactivated vaccine most often affect persons who have had no previous exposure to the influenza virus antigens in the vaccine (e.g., young children) (182,183). These reactions begin 6-12 hours after vaccination and can persist for 1-2 days. Data about potential adverse events among children after influenza vaccination are available from the Vaccine Adverse Event Reporting System (VAERS). Because of the limitations of passive reporting systems, determining causality for specific types of adverse events usually is not possible using VAERS data alone.
Published reviews of VAERS reports submitted after administration of TIV to children aged 6-23 months indicated that the most frequently reported adverse events were fever, rash, injection-site reactions, and seizures; the majority of the limited number of reported seizures appeared to be febrile (184,185). Seizure and fever were the leading serious adverse events (SAEs), defined using standard criteria, reported to VAERS in these studies (184,185); further investigation in VSD did not confirm the association with febrile seizures identified in VAERS (181).

# Adults

In placebo-controlled studies among adults, the most frequent side effect of vaccination was soreness at the vaccination site (affecting 10%-64% of patients) that lasted <2 days (186,187). These local reactions typically were mild and rarely interfered with the recipients' ability to conduct usual daily activities. Placebo-controlled trials demonstrate that among older persons and healthy young adults, administration of TIV is not associated with higher rates of systemic symptoms (e.g., fever, malaise, myalgia, and headache) when compared with placebo injections (121,134,186-188). One prospective cohort study found that the rate of adverse events was similar among hospitalized persons who were either aged >65 years or aged 18-64 years with one or more chronic medical conditions compared with outpatients (158). Adverse events in adults aged >18 years reported to VAERS during 1990-2005 were analyzed. The most common adverse events reported to VAERS in adults included injection-site reactions, pain, fever, myalgia, and headache. The VAERS review identified no new safety concerns. In clinical trials, SAEs were reported to occur after vaccination with TIV at a rate of <1%. A small proportion (14%) of the TIV VAERS reports in adults were classified as SAEs, without assessment of causality. The most common SAE reported after TIV in VAERS in adults was Guillain-Barré syndrome (GBS) (189). The potential association between TIV and GBS has been an area of ongoing research (see Guillain-Barré Syndrome and TIV).

# Pregnant Women and Neonates

FDA has classified TIV as a "Pregnancy Category C" medication, indicating that adequate animal reproduction studies have not been conducted. Available data indicate that influenza vaccine does not cause fetal harm or affect reproductive capacity when administered to a pregnant woman. One study of approximately 2,000 pregnant women who received TIV during pregnancy demonstrated no adverse fetal effects and no adverse effects during infancy or early childhood (190). A matched case-control study of 252 pregnant women who received TIV within the 6 months before delivery determined no adverse events after vaccination among pregnant women and no difference in pregnancy outcomes compared with 826 pregnant women who were not vaccinated (148). During 2000-2003, an estimated 2 million pregnant women were vaccinated, and only 20 adverse events among women who received TIV were reported to VAERS during this time, including nine injection-site reactions and eight systemic reactions (e.g., fever, headache, and myalgias). In addition, three miscarriages were reported, but these were not known to be causally related to vaccination (191). Similar results have been reported in certain smaller studies (147,149,192), and a recent international review of data on the safety of TIV concluded that no evidence exists to suggest harm to the fetus (193).
The rate of adverse events associated with TIV was similar to the rate of adverse events among pregnant women who received pneumococcal polysaccharide vaccine in one small randomized controlled trial in Bangladesh, and no severe adverse events were reported in any study group (154).

# Persons with Chronic Medical Conditions

In a randomized cross-over study of children and adults with asthma, no increase in asthma exacerbations was reported for either age group (194), and two additional studies also have indicated no increase in wheezing among vaccinated asthmatic children (114) or adults (195). One study reported that 20%-28% of children with asthma aged 9 months-18 years had local pain and swelling at the site of influenza vaccination (104), and another study reported that 23% of children aged 6 months-4 years with chronic heart or lung disease had local reactions (93). A blinded, randomized, cross-over study of 1,952 adults and children with asthma demonstrated that only self-reported "body aches" were reported more frequently after TIV (25%) than after placebo injection (21%) (194). However, a placebo-controlled trial of TIV indicated no difference in local reactions among 53 children aged 6 months-6 years with high-risk medical conditions or among 305 healthy children aged 3-12 years (97). Among children with high-risk medical conditions, one study of 52 children aged 6 months-3 years reported fever among 27% and irritability and insomnia among 25% (93), and a study among 33 children aged 6-18 months reported that one child had irritability and one had a fever and seizure after vaccination (196). No placebo comparison group was used in these studies.

# Immunocompromised Persons

Data demonstrating safety of TIV for HIV-infected persons are limited, but no evidence exists that vaccination has a clinically important impact on HIV infection or immunocompetence. One study demonstrated a transient (i.e., 2-4 week) increase in HIV RNA (ribonucleic acid) levels in one HIV-infected person after influenza virus infection (197). Studies have demonstrated a transient increase in replication of HIV-1 in the plasma or peripheral blood mononuclear cells of HIV-infected persons after vaccine administration (138,198). However, more recent and better-designed studies have not documented a substantial increase in the replication of HIV (199-202). CD4+ T-lymphocyte cell counts or progression of HIV disease have not been demonstrated to change substantially after influenza vaccination among HIV-infected persons compared with unvaccinated HIV-infected persons (138,203). Limited information is available about the effect of antiretroviral therapy on increases in HIV RNA levels after either natural influenza virus infection or influenza vaccination (52,204). Data are similarly limited for persons with other immunocompromising conditions. In small studies, vaccination did not affect allograft function or cause rejection episodes in recipients of kidney transplants (141,142), heart transplants (143), or liver transplants (144).

# Immediate Hypersensitivity Reactions After Influenza Vaccines

Vaccine components can rarely cause allergic reactions, also called immediate hypersensitivity reactions, among certain recipients. Immediate hypersensitivity reactions are mediated by preformed immunoglobulin E (IgE) antibodies against a vaccine component and usually occur within minutes to hours of exposure (205). Symptoms of immediate hypersensitivity range from mild urticaria (hives) and angioedema to anaphylaxis.
Anaphylaxis is a severe, life-threatening reaction that involves multiple organ systems and can progress rapidly. Symptoms and signs of anaphylaxis can include but are not limited to generalized urticaria, wheezing, swelling of the mouth and throat, difficulty breathing, vomiting, hypotension, decreased level of consciousness, and shock. Minor symptoms such as red eyes or hoarse voice also might be present (179,205-208). Allergic reactions might be caused by the vaccine antigen, residual animal protein, antimicrobial agents, preservatives, stabilizers, or other vaccine components (209). Manufacturers use a variety of compounds to inactivate influenza viruses and add antibiotics to prevent bacterial growth. Package inserts for specific vaccines of interest should be consulted for additional information. ACIP has recommended that all vaccine providers be familiar with the office emergency plan and be certified in cardiopulmonary resuscitation (179). The Clinical Immunization Safety Assessment (CISA) network, a collaboration between CDC and six medical research centers with expertise in vaccination safety, has developed an algorithm to guide evaluation and revaccination decisions for persons with suspected immediate hypersensitivity after vaccination (205).

Immediate hypersensitivity reactions after TIV or LAIV are rare. A VSD study of children aged <18 years in four HMOs during 1991-1997 estimated the overall risk for postvaccination anaphylaxis to be less than 1 case per 500,000 doses administered; in that study, no cases were identified in TIV recipients (210). Anaphylaxis occurring after receipt of TIV or LAIV in adults has rarely been reported to VAERS (189). Some immediate hypersensitivity reactions after TIV or LAIV are caused by the presence of residual egg protein in the vaccines (211). Although influenza vaccines contain only a limited quantity of egg protein, this protein can induce immediate hypersensitivity reactions among persons who have severe egg allergy. Asking persons if they can eat eggs without adverse effects is a reasonable way to determine who might be at risk for allergic reactions from receiving influenza vaccines (179). Persons who have had symptoms such as hives or swelling of the lips or tongue, or who have experienced acute respiratory distress after eating eggs, should consult a physician for appropriate evaluation to help determine if future influenza vaccine should be administered. Persons who have documented IgE-mediated hypersensitivity to eggs, including those who have had occupational asthma related to egg exposure or other allergic responses to egg protein, also might be at increased risk for allergic reactions to influenza vaccine, and consultation with a physician before vaccination should be considered (212-214). A regimen has been developed for administering influenza vaccine to asthmatic children with severe disease and egg hypersensitivity (213).

Hypersensitivity reactions to other vaccine components also can rarely occur. Although exposure to vaccines containing thimerosal can lead to hypersensitivity (215), the majority of patients do not have reactions to thimerosal when it is administered as a component of vaccines, even when patch or intradermal tests for thimerosal indicate hypersensitivity (216,217). When reported, hypersensitivity to thimerosal typically has consisted of local delayed hypersensitivity reactions (216).
# Ocular and Respiratory Symptoms After TIV

Ocular or respiratory symptoms have occasionally been reported within 24 hours after TIV administration, but these symptoms typically are mild and resolve quickly without specific treatment. In some trials conducted in the United States, ocular or respiratory symptoms included red eyes (<1%-6%), cough (1%-7%), wheezing (1%), and chest tightness (1%-3%) (207,208,218-220). However, most of these trials were not placebo-controlled, and causality cannot be determined. In addition, ocular and respiratory symptoms are features of a variety of respiratory illnesses and seasonal allergies that would be expected to occur coincidentally among vaccine recipients, unrelated to vaccination. A placebo-controlled vaccine effectiveness study among young adults found that 2% of persons who received the 2006-07 formulation of Fluzone (Sanofi Pasteur) reported red eyes compared with none of the controls (p = 0.03) (221). A similar trial conducted during the 2005-06 influenza season found that 3% of Fluzone recipients reported red eyes compared with 1% of placebo recipients; however, the difference was not statistically significant (222).

Oculorespiratory syndrome (ORS), an acute, self-limited reaction to TIV with prominent ocular and respiratory symptoms, was first described during the 2000-01 influenza season in Canada. The initial case-definition for ORS was the onset of one or more of the following within 2-24 hours after receiving TIV: bilateral red eyes and/or facial edema and/or respiratory symptoms (cough, wheeze, chest tightness, difficulty breathing, difficulty swallowing, hoarseness, or sore throat) (223). ORS was strongly associated with one vaccine preparation (Fluviral S/F, Shire Biologics, Quebec, Canada) not available in the United States during the 2000-01 influenza season (224). Subsequent investigations identified persons with ocular or respiratory symptoms meeting an ORS case-definition in safety monitoring systems and trials that had been conducted before 2000 in Canada, the United States, and several European countries (225-227). The cause of ORS has not been established; however, studies suggest the reaction is not IgE-mediated (228). After changes in the manufacturing process of the vaccine preparation associated with ORS during 2000-01, the incidence of ORS in Canada was greatly reduced (226). In one placebo-controlled study, only hoarseness, cough, and itchy or sore eyes (but not red eyes) were significantly associated with a reformulated Fluviral preparation. These findings indicated that ORS symptoms following use of the reformulated vaccine were mild, resolved within 24 hours, and might not typically be of sufficient concern to cause vaccine recipients to seek medical care (229).

Ocular and respiratory symptoms reported after TIV administration, including ORS, have some similarities with immediate hypersensitivity reactions. One study indicated that the risk for ORS recurrence with subsequent vaccination is low, and persons with ocular or respiratory symptoms (e.g., bilateral red eyes, cough, sore throat, or hoarseness) after TIV that did not involve the lower respiratory tract have been revaccinated without reports of SAEs after subsequent exposure to TIV (230). VAERS routinely monitors for adverse events such as ocular or respiratory symptoms after receipt of TIV.
# Contraindications and Precautions for Use of TIV

TIV is contraindicated and should not be administered to persons known to have anaphylactic hypersensitivity to eggs or to other components of the influenza vaccine unless the recipient has been desensitized. Prophylactic use of antiviral agents is an option for preventing influenza among such persons. Information about vaccine components is located in package inserts from each manufacturer. Persons with moderate to severe acute febrile illness usually should not be vaccinated until their symptoms have abated. Moderate or severe acute illness with or without fever is a precaution§ for TIV. GBS within 6 weeks following a previous dose of influenza vaccine also is considered a precaution for use of influenza vaccines.

# Revaccination in Persons Who Experienced Ocular or Respiratory Symptoms After TIV

When assessing whether a patient who experienced ocular and respiratory symptoms should be revaccinated, providers should determine if concerning signs and symptoms of IgE-mediated immediate hypersensitivity are present (see Immediate Hypersensitivity Reactions After Influenza Vaccines). Health-care providers who are unsure whether symptoms reported or observed after TIV represent an IgE-mediated hypersensitivity immune response should seek advice from an allergist/immunologist. Persons with symptoms of possible IgE-mediated hypersensitivity after TIV should not receive influenza vaccination unless hypersensitivity is ruled out or revaccination is administered under close medical supervision (205). Ocular or respiratory symptoms observed after TIV often are coincidental and unrelated to TIV administration, as observed among placebo recipients in some randomized controlled studies. Determining whether ocular or respiratory symptoms are coincidental or related to possible ORS might not be possible. Persons who have had red eyes, mild upper facial swelling, or mild respiratory symptoms (e.g., sore throat, cough, or hoarseness) after TIV without other concerning signs or symptoms of hypersensitivity can receive TIV in subsequent seasons without further evaluation. Two studies showed that persons who had symptoms of ORS after TIV were at a higher risk for ORS after subsequent TIV administration; however, these events usually were milder than the first episode (230,231).

# Guillain-Barré Syndrome and TIV

The annual incidence of GBS is 10-20 cases per 1 million adults (232). Substantial evidence exists that multiple infectious illnesses, most notably Campylobacter jejuni gastrointestinal infections and upper respiratory tract infections, are associated with GBS (233-235). A recent study identified serologically confirmed influenza virus infection as a trigger of GBS, with time from onset of influenza illness to GBS of 3-30 days. The estimated frequency of influenza-related GBS was four to seven times higher than the frequency that has been estimated for influenza-vaccine-associated GBS (236). The 1976 swine influenza vaccine was associated with an increased frequency of GBS, estimated at one additional case of GBS per 100,000 persons vaccinated (237,238). The risk for influenza-vaccine-associated GBS was higher among persons aged >25 years than among persons aged <25 years (239). However, obtaining epidemiologic evidence for a small increase in risk for a rare condition with multiple causes is difficult, and no consistent evidence exists for a causal relation between subsequent vaccines prepared from other influenza viruses and GBS.
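To put these figures in context, the back-of-the-envelope sketch below shows why an excess risk on this order is difficult to detect. The arithmetic is illustrative and is not taken from the cited studies; it uses only the baseline incidence and 6-week window cited above and the relative risk of 1.7 reported for the 1992-93 and 1993-94 seasons, discussed below.

```latex
% Illustrative arithmetic (not from the cited studies).
% Expected background GBS cases among 1 million vaccinees during a
% 6-week postvaccination window, given an annual incidence of
% 10--20 cases per 1 million adults:
\[
(10\text{--}20) \times \tfrac{6}{52} \approx 1.2\text{--}2.3
  \text{ cases per 1 million persons}
\]
% A relative risk of 1.7 in that window then implies an excess of:
\[
(1.7 - 1) \times (1.2\text{--}2.3) \approx 0.8\text{--}1.6
  \text{ cases per 1 million vaccinees}
\]
% consistent with the estimate, cited below, of approximately one
% additional case of GBS per 1 million persons vaccinated.
```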
None of the studies conducted using influenza vaccines other than the 1976 swine influenza vaccine has demonstrated a substantial increase in GBS associated with influenza vaccines. During three of four influenza seasons studied during 1977-1991, the overall relative risk estimates for GBS after influenza vaccination were slightly elevated but were not statistically significant in any of these studies (240-242). However, in a study of the 1992-93 and 1993-94 seasons, the overall relative risk for GBS was 1.7 (CI = 1.0-2.8; p = 0.04) during the 6 weeks after vaccination, representing approximately one additional case of GBS per 1 million persons vaccinated; the combined number of GBS cases peaked 2 weeks after vaccination (238). Results of a study that examined health-care data from Ontario, Canada, during 1992-2004 demonstrated a small but statistically significant temporal association between receiving influenza vaccination and subsequent hospital admission for GBS. However, no increase in cases of GBS at the population level was reported after introduction of a mass public influenza vaccination program in Ontario beginning in 2000 (243). Data from VAERS have documented decreased reporting of GBS occurring after vaccination across age groups over time, despite overall increased reporting of other, non-GBS conditions occurring after administration of influenza vaccine (237). Published data from the United Kingdom's General Practice Research Database (GPRD) found influenza vaccine to be associated with a decreased risk for GBS, although whether this finding reflected protection against influenza or confounding from a "healthy vaccinee" effect (e.g., healthier persons might be more likely to be vaccinated and also be at lower risk for GBS) is unclear (244). A separate GPRD analysis found no association between vaccination and GBS over a 9-year period; only three cases of GBS occurred within 6 weeks after administration of influenza vaccine (245). A third GPRD analysis found that GBS was associated with recent ILI but not with influenza vaccination (246).

§ A precaution is a condition in a recipient that might increase the risk for a serious adverse reaction or that might compromise the ability of the vaccine to produce immunity (179).

The estimated risk for GBS (on the basis of the few studies that have demonstrated an association between vaccination and GBS) is low (i.e., approximately one additional case per 1 million persons vaccinated). The potential benefits of influenza vaccination in preventing serious illness, hospitalization, and death substantially outweigh these estimates of risk for vaccine-associated GBS. No evidence indicates that the case-fatality ratio for GBS differs between vaccinated persons and those not vaccinated.

# Use of TIV Among Patients with a History of GBS

The incidence of GBS among the general population is low, but persons with a history of GBS have a substantially greater likelihood of subsequently experiencing GBS than persons without such a history (232). Thus, the likelihood of coincidentally experiencing GBS after influenza vaccination is expected to be greater among persons with a history of GBS than among persons with no history of this syndrome. Whether influenza vaccination specifically might increase the risk for recurrence of GBS is unknown.
Among 311 patients with GBS who responded to a survey, 11 (4%) reported some worsening of symptoms after influenza vaccination; however, some of these patients had received other vaccines at the same time, and recurring symptoms were generally mild (247). However, as a precaution, persons who are not at high risk for severe influenza complications and who are known to have experienced GBS within 6 weeks after a previous influenza vaccination generally should not be vaccinated. As an alternative, physicians might consider using influenza antiviral chemoprophylaxis for these persons. Although data are limited, the established benefits of influenza vaccination might outweigh the risks for many persons who have a history of GBS and who also are at high risk for severe complications from influenza.

# Vaccine Preservative (Thimerosal) in Multidose Vials of TIV

Thimerosal, a mercury-containing antibacterial compound, has been used as a preservative in vaccines and other medications since the 1930s (248) and is used in multidose vial preparations of TIV to reduce the likelihood of bacterial growth. No scientific evidence indicates that thimerosal in vaccines, including influenza vaccines, is a cause of adverse events other than occasional local hypersensitivity reactions in vaccine recipients. In addition, no scientific evidence exists that thimerosal-containing vaccines are a cause of adverse events among children born to women who received vaccine during pregnancy. The weight of accumulating evidence does not suggest an increased risk for neurodevelopmental disorders from exposure to thimerosal-containing vaccines (249-258). The U.S. Public Health Service and other organizations have recommended that efforts be made to eliminate or reduce the thimerosal content in vaccines as part of a strategy to reduce mercury exposures from all sources (249,250,259). In addition, continuing public concern about exposure to mercury in vaccines has been viewed as a potential barrier to achieving higher vaccine coverage levels and reducing the burden of vaccine-preventable diseases. Since mid-2001, vaccines routinely recommended for infants aged <6 months in the United States have been manufactured either without thimerosal or with only greatly reduced (trace) amounts. As a result, a substantial reduction in the total mercury exposure from vaccines for infants and children already has been achieved (179). ACIP and other federal agencies and professional medical organizations continue to support efforts to provide thimerosal-preservative-free vaccine options.

The benefits of influenza vaccination for all recommended groups, including pregnant women and young children, outweigh concerns based on a theoretical risk from thimerosal exposure through vaccination. The risks for severe illness from influenza virus infection are elevated among both young children and pregnant women, and vaccination has been demonstrated to reduce the risk for severe influenza illness and subsequent medical complications. In contrast, no scientifically conclusive evidence has demonstrated harm from exposure to vaccine containing thimerosal preservative. For these reasons, persons recommended to receive TIV may receive any age- and risk-factor-appropriate vaccine preparation, depending on availability. An analysis of VAERS reports found no difference in the safety profile of preservative-containing compared with preservative-free TIV vaccines in infants aged 6-23 months (184).
Nonetheless, as of May 2009, some states have enacted legislation banning the administration of vaccines containing mercury; the provisions defining mercury content vary (260). LAIV and many of the single-dose vial or syringe preparations of TIV are thimerosal-free, and the number of influenza vaccine doses that do not contain thimerosal as a preservative is expected to increase (Table 2). However, these laws might present a barrier to vaccination unless influenza vaccines that do not contain thimerosal as a preservative are easily available in those states. The U.S. vaccine supply for infants and pregnant women is in a period of transition as manufacturers expand the availability of thimerosal-reduced or thimerosal-free vaccine to reduce the cumulative exposure of infants to mercury. Other environmental sources of mercury exposure are more difficult or impossible to avoid or eliminate (249).

# LAIV Dosage, Administration, and Storage

Each dose of LAIV contains the same three vaccine antigens used in TIV. However, the antigens are constituted as live, attenuated, cold-adapted, temperature-sensitive vaccine viruses. Providers should refer to the package insert, which contains additional information about the formulation of this vaccine and other vaccine components. LAIV does not contain thimerosal. LAIV is made from attenuated viruses that are able to replicate efficiently only at temperatures present in the nasal mucosa. LAIV does not cause systemic symptoms of influenza in vaccine recipients, although a minority of recipients experience nasal congestion or fever, which is probably a result of intranasal vaccine administration or local viral replication (261).

LAIV is intended for intranasal administration only and should not be administered by the intramuscular, intradermal, or intravenous route. LAIV is not licensed for vaccination of children aged <2 years or adults aged >49 years. LAIV is supplied in a prefilled, single-use sprayer containing 0.2 mL of vaccine. Approximately 0.1 mL (i.e., half of the total sprayer contents) is sprayed into the first nostril while the recipient is in the upright position. An attached dose-divider clip is removed from the sprayer to administer the second half of the dose into the other nostril. LAIV is shipped at 35°F-46°F (2°C-8°C). LAIV should be stored at 35°F-46°F (2°C-8°C) on receipt and can remain at that temperature until the expiration date is reached (261). Vaccine prepared for a previous influenza season should not be administered to provide protection for any subsequent season.

# Shedding, Transmission, and Stability of Vaccine Viruses

Available data indicate that both children and adults vaccinated with LAIV can shed vaccine viruses after vaccination, although in lower amounts than occur typically with shedding of wild-type influenza viruses. In rare instances, shed vaccine viruses can be transmitted from vaccine recipients to unvaccinated persons. However, serious illnesses have not been reported among unvaccinated persons who have been infected inadvertently with vaccine viruses. One study of 197 children aged 8-36 months in a child care center assessed transmissibility of vaccine viruses from 98 vaccinated children to the other 99 unvaccinated children; 80% of vaccine recipients shed one or more virus strains (mean duration: 7.6 days). One influenza type B vaccine strain isolate was recovered from a placebo recipient and was confirmed to be vaccine-type virus.
The type B isolate retained the cold-adapted, temperature-sensitive, attenuated phenotype, and it possessed the same genetic sequence as a virus shed from a vaccine recipient who was in the same play group. The placebo recipient from whom the influenza type B vaccine strain was isolated had symptoms of a mild upper respiratory illness but did not experience any serious clinical events. The estimated probability of acquiring vaccine virus after close contact with a single LAIV recipient in this child care population was 1%-2% (262).

Studies assessing whether vaccine viruses are shed have been based on viral cultures or PCR detection of vaccine viruses in nasal aspirates from persons who have received LAIV. Among 345 subjects aged 5-49 years, 30% had detectable virus in nasal secretions obtained by nasal swabbing after receiving LAIV. The duration of virus shedding and the amount of virus shed were inversely correlated with age, and maximal shedding occurred within 2 days of vaccination. Symptoms reported after vaccination, including runny nose, headache, and sore throat, did not correlate with virus shedding (263). Other smaller studies have reported similar findings (264,265). Vaccine strain virus was detected in nasal secretions of one (2%) of 57 HIV-infected adults who received LAIV, none of 54 HIV-negative participants (266), and three (13%) of 23 HIV-infected children compared with seven (28%) of 25 children who were not HIV-infected (267). No participants in these studies had detectable virus beyond 10 days after receipt of LAIV. The possibility of person-to-person transmission of vaccine viruses was not assessed in these studies (264-267).

In clinical trials, viruses isolated from vaccine recipients have retained attenuated phenotypes. In one study, nasal and throat swab specimens were collected from 17 study participants for 2 weeks after vaccine receipt (268). Virus isolates were analyzed by multiple genetic techniques. All isolates retained the LAIV genotype after replication in the human host, and all retained the cold-adapted and temperature-sensitive phenotypes. A study conducted in a child care setting demonstrated that limited genetic change occurred in the LAIV strains following replication in the vaccine recipients (269).

# Immunogenicity, Efficacy, and Effectiveness of LAIV

LAIV virus strains replicate primarily in nasopharyngeal epithelial cells. The protective mechanisms induced by vaccination with LAIV are not understood completely but appear to involve both serum and nasal secretory antibodies. The immunogenicity of the approved LAIV has been assessed in multiple studies conducted among children and adults (270-276).

# Healthy Children

A randomized, double-blind, placebo-controlled trial among 1,602 healthy children aged 15-71 months assessed the efficacy of LAIV against culture-confirmed influenza during two seasons (277,278). This trial included a subset of children aged 60-71 months who received 2 doses in the first season. During season one (1996-97), when vaccine and circulating virus strains were well-matched, efficacy against culture-confirmed influenza was 94% for participants who received 2 doses of LAIV separated by >6 weeks and 89% for those who received 1 dose. During season two (1997-98), when the A (H3N2) component in the vaccine was not well-matched with circulating virus strains, efficacy (1 dose) was 86%, for an overall efficacy of 92% across the two influenza seasons.
Receipt of LAIV also resulted in 21% fewer febrile illnesses and a significant decrease in acute otitis media requiring antibiotics (277,279). Other randomized, placebo-controlled trials demonstrating the efficacy of LAIV in young children against culture-confirmed influenza include a study conducted among children aged 6-35 months attending child care centers during consecutive influenza seasons (280), in which 85%-89% efficacy was observed, and a study conducted among children aged 12-36 months living in Asia during consecutive influenza seasons, in which 64%-70% efficacy was documented (281). In one community-based, nonrandomized, open-label study, reductions in MAARI were observed among children who received 1 dose of LAIV during the 1999-00 and 2000-01 influenza seasons, even though antigenically drifted influenza A/H1N1 and B viruses were circulating during that period (282). LAIV efficacy in preventing laboratory-confirmed influenza also has been demonstrated in studies comparing the efficacy of LAIV with TIV rather than with a placebo (see Comparisons of LAIV and TIV Efficacy or Effectiveness).

# Healthy Adults

A randomized, double-blind, placebo-controlled trial of LAIV effectiveness among 4,561 healthy working adults aged 18-64 years assessed multiple endpoints, including reductions in self-reported respiratory tract illness without laboratory confirmation, work loss, health-care visits, and medication use during influenza outbreak periods. The study was conducted during the 1997-98 influenza season, when the vaccine and circulating A (H3N2) strains were not well-matched. The frequency of febrile illnesses was not significantly decreased among LAIV recipients compared with those who received placebo. However, vaccine recipients had significantly fewer severe febrile illnesses (19% reduction) and febrile upper respiratory tract illnesses (24% reduction), as well as significant reductions in days of illness, days of work lost, days with health-care-provider visits, and use of prescription antibiotics and over-the-counter medications (283). Efficacy against culture-confirmed influenza in a randomized, placebo-controlled study was 57% in the 2004-05 influenza season and 43% in the 2005-06 influenza season, although efficacy in these studies was not demonstrated to be significantly greater than placebo (221,222).

# Adverse Events After Receipt of LAIV

# Healthy Children Aged 2-18 Years

In a subset of healthy children aged 60-71 months from one clinical trial, certain signs and symptoms were reported more often after the first dose among LAIV recipients (n = 214) than among placebo recipients (n = 95), including runny nose (48% and 44%, respectively), headache (18% and 12%, respectively), vomiting (5% and 3%, respectively), and myalgias (6% and 4%, respectively) (277). However, these differences were not statistically significant. In other trials, signs and symptoms reported after LAIV administration have included runny nose or nasal congestion (20%-75%), headache (2%-46%), fever (0-26%), vomiting (3%-13%), abdominal pain (2%), and myalgias (0-21%) (270,272,273,280,284-287). These symptoms were associated more often with the first dose and were self-limited. A placebo-controlled trial in 9,689 children aged 1-17 years assessed prespecified medically attended outcomes during the 42 days after vaccination (286).
Following >1,500 statistical analyses of events occurring in the 42 days after LAIV receipt, elevated risks that were biologically plausible were observed for the following conditions: asthma, upper respiratory infection, musculoskeletal pain, otitis media with effusion, and adenitis/adenopathy. The increased risk for wheezing events after LAIV was observed among children aged 18-35 months (RR: 4.06; 90% CI = 1.3-17.9). In this study, the rate of SAEs was 0.2% in LAIV and placebo recipients; none of the SAEs was judged by the study investigators to be related to the vaccine (286).

In a randomized trial published in 2007, LAIV and TIV were compared among children aged 6-59 months (288). Children with medically diagnosed or treated wheezing within 42 days before enrollment or with a history of severe asthma were excluded from this study. Among children aged 24-59 months who received LAIV, the rate of medically significant wheezing, using a prespecified definition, was not greater compared with those who received TIV (288). Wheezing was observed more frequently among younger LAIV recipients aged 6-23 months in this study; LAIV is not licensed for this age group. In a previous randomized, placebo-controlled safety trial among children aged 12 months-17 years without a history of asthma by parental report, an elevated risk for asthma events (RR: 4.1; CI = 1.3-17.9) was documented among 728 children aged 18-35 months who received LAIV. Of the 16 children with asthma-related events in this study, seven had a history of asthma on the basis of subsequent medical record review. None required hospitalization, and elevated risks for asthma were not observed in other age groups (286).

Another study was conducted among >11,000 children aged 18 months-18 years in which 18,780 doses of vaccine were administered over 4 years. For children aged 18 months-4 years, no increase was reported in asthma visits 0-15 days after vaccination compared with the prevaccination period. A significant increase in asthma events was reported 15-42 days after vaccination, but only in vaccine year 1 (289). A 4-year, open-label field trial assessed the safety of more than 2,000 doses of LAIV administered to children aged 18 months-18 years with a history of intermittent wheeze who were otherwise healthy. Among these children, no increased risk was reported for medically attended acute respiratory illnesses, including acute asthma exacerbation, during the 0-14 or 0-42 days after LAIV compared with the pre- and postvaccination reference periods (290). Initial data from VAERS during 2007-2008, following ACIP's recommendation for LAIV use in healthy children aged 2-4 years, did not suggest a concern for wheezing after LAIV in young children. However, these data also suggest that uptake of LAIV was limited, and safety monitoring for wheezing events after LAIV is ongoing (CDC, unpublished data, 2008).

# Adults Aged 19-49 Years

Among adults, runny nose or nasal congestion (28%-78%), headache (16%-44%), and sore throat (15%-27%) have been reported more often among vaccine recipients than placebo recipients (277,291). In one clinical trial among a subset of healthy adults aged 18-49 years, signs and symptoms reported significantly more often (p <0.05) among LAIV recipients (n = 2,548) than placebo recipients (n = 1,290) within 7 days after each dose included cough (14% and 11%, respectively), runny nose (45% and 27%, respectively), sore throat (28% and 17%, respectively), chills (9% and 6%, respectively), and tiredness/weakness (26% and 22%, respectively) (92).
A review of 460 reports to VAERS after distribution of approximately 2.5 million doses during the 2003-04 and 2004-05 influenza seasons did not indicate any new safety concerns (292). Few of the LAIV VAERS reports (9%) were SAEs; respiratory events were the most common conditions reported.

# Persons at Higher Risk for Influenza-Related Complications

Limited data assessing the safety of LAIV use for certain groups at higher risk for influenza-related complications are available. In one study of 54 HIV-infected persons aged 18-58 years with CD4+ counts >200 cells/mm3 who received LAIV, no SAEs were reported during a 1-month follow-up period (266). Similarly, one study demonstrated no significant difference in the frequency of adverse events or viral shedding among HIV-infected children aged 1-8 years on effective antiretroviral therapy who were administered LAIV compared with HIV-uninfected children receiving LAIV (267). LAIV was well-tolerated among adults aged >65 years with chronic medical conditions (293). These findings suggest that persons at risk for influenza complications who have inadvertent exposure to LAIV would not have significant adverse events or prolonged viral shedding and that persons who have contact with persons at higher risk for influenza-related complications may receive LAIV.

# Comparisons of LAIV and TIV Efficacy or Effectiveness

Both TIV and LAIV have been demonstrated to be effective in children and adults. However, data directly comparing the efficacy or effectiveness of these two types of influenza vaccines are limited and insufficient to identify whether one vaccine might offer a clear advantage over the other in certain settings or populations. Studies comparing the efficacy of TIV to that of LAIV have been conducted in a variety of settings and populations using several different outcomes. One randomized, double-blind, placebo-controlled challenge study conducted among 92 healthy adults aged 18-41 years assessed the efficacy of both LAIV and TIV in preventing influenza infection when participants were challenged with wild-type strains that were antigenically similar to vaccine strains (294). The overall efficacy of LAIV and TIV in preventing laboratory-documented influenza from all three influenza strains combined was 85% and 71%, respectively, when participants were challenged 28 days after vaccination by viruses to which they were susceptible before vaccination. The difference in efficacy between the two vaccines was not statistically significant in this limited study. No additional challenges were conducted to assess efficacy at time points later than 28 days (294). In a randomized, double-blind, placebo-controlled trial conducted among young adults during the 2004-05 influenza season, when the majority of circulating H3N2 viruses were antigenically drifted from that season's vaccine viruses, the efficacy of LAIV and TIV against culture-confirmed influenza was 57% and 77%, respectively. The difference in efficacy was not statistically significant and was attributable primarily to a difference in efficacy against influenza B (222). A similar study conducted during the 2005-06 influenza season found no significant difference in vaccine efficacy (221). A randomized controlled clinical trial conducted among children aged 6-59 months during the 2004-05 influenza season demonstrated a 55% reduction in cases of culture-confirmed influenza among children who received LAIV compared with those who received TIV (288).
In this study, LAIV efficacy was higher than that of TIV against both antigenically drifted viruses and well-matched viruses (288). An open-label, nonrandomized, community-based influenza vaccine trial conducted during an influenza season when circulating H3N2 strains were poorly matched with strains contained in the vaccine also indicated that LAIV, but not TIV, was effective against antigenically drifted H3N2 strains during that influenza season. In this study, children aged 5-18 years who received LAIV had significant protection against laboratory-confirmed influenza (37%) and pneumonia and influenza events (50%) (295). A recent observational study conducted among military personnel aged 17-49 years over three influenza seasons indicated that persons who received TIV had a significantly lower incidence of health-care encounters resulting in diagnostic coding for pneumonia and influenza compared with those who received LAIV. However, among new recruits being vaccinated for the first time, the incidence of pneumonia- and influenza-coded health-care encounters among those who received LAIV was similar to that among those who received TIV (296). Although LAIV is not licensed for use in persons with risk factors for influenza complications, certain studies have compared the efficacy of LAIV to TIV in these groups. LAIV provided 32% increased protection in preventing culture-confirmed influenza compared with TIV in one study conducted among children aged >6 years and adolescents with asthma (297) and 52% increased protection compared with TIV among children aged 6-71 months with recurrent respiratory tract infections (298).

# Effectiveness of Vaccination for Decreasing Transmission to Contacts

Decreasing transmission of influenza from caregivers and household contacts to persons at high risk might reduce ILI and complications among persons at high risk. Influenza virus infection and ILI are common among HCP (299-301). Influenza outbreaks have been attributed to low vaccination rates among HCP in hospitals and long-term-care facilities (302-304). One serosurvey demonstrated that 23% of HCP had serologic evidence of influenza virus infection during a single influenza season; the majority had mild illness or subclinical infection (299). Observational studies have demonstrated that vaccination of HCP is associated with decreased deaths among nursing home patients (305,306). In one cluster-randomized controlled trial that included 2,604 residents of 44 nursing homes, significant decreases in mortality, ILI, and medical visits for ILI care were demonstrated among residents in nursing homes in which staff were offered influenza vaccination (coverage rate: 48%) compared with nursing homes in which staff were not provided with vaccination (coverage rate: 6%) (307). A review concluded that vaccination of HCP in settings in which patients also were vaccinated provided significant reductions in deaths among elderly patients from all causes and deaths from pneumonia (308).

Epidemiologic studies of community outbreaks of influenza demonstrate that school-aged children typically have the highest influenza illness attack rates, suggesting that routine universal vaccination of children might reduce transmission to their household contacts and possibly others in the community. Results from certain studies have indicated that the benefits of vaccinating children might extend to protection of their adult contacts and of persons at risk for influenza complications in the community.
However, these data are limited, and studies have not used laboratory-confirmed influenza as an outcome measure. A single-blinded, randomized controlled study conducted as part of a 1996-97 vaccine effectiveness study demonstrated that vaccinating preschool-aged children with TIV reduced influenza-related morbidity among some household contacts (309). A randomized, placebo-controlled trial among children with recurrent respiratory tract infections demonstrated that members of families with children who had received LAIV were significantly less likely to have respiratory tract infections and reported significantly fewer workdays lost compared with families with children who received placebo (310).

In nonrandomized community-based studies, administration of LAIV has been demonstrated to reduce MAARI (311,312) and ILI-related economic and medical consequences (e.g., workdays lost and number of health-care provider visits) among contacts of vaccine recipients (312). Households with children attending schools in which school-based LAIV vaccination programs had been established reported less ILI and fewer physician visits during peak influenza season compared with households with children in schools in which no LAIV vaccination had been offered. However, a decrease in the overall rate of school absenteeism was not reported in communities in which LAIV vaccination was offered (312). During an influenza outbreak in the 2005-06 influenza season, countywide school-based influenza vaccination was associated with reduced absenteeism among elementary and high school students in one county that implemented a school-based vaccination program compared with another county without such a program (313). These community-based studies have not used laboratory-confirmed influenza as an outcome.

Some studies also have documented reductions in influenza illness among persons living in communities where focused programs for vaccinating children have been conducted. A community-based observational study conducted during the 1968 pandemic using a univalent inactivated vaccine reported that a vaccination program targeting school-aged children (coverage rate: 86%) in one community reduced influenza rates within the community among all age groups compared with another community in which aggressive vaccination was not conducted among school-aged children (314). An observational study conducted in Russia demonstrated reductions in ILI among the community-dwelling elderly after implementation of a vaccination program using TIV for children aged 3-6 years (57% coverage achieved) and children and adolescents aged 7-17 years (72% coverage achieved) (315). In a nonrandomized community-based study conducted over three influenza seasons, 8%-18% reductions in the incidence of MAARI during the influenza season among adults aged ≥35 years were observed in communities in which LAIV was offered to all children aged ≥18 months (estimated coverage rate: 20%-25%) compared with communities that did not provide routine influenza vaccination programs for all children (311). In a subsequent influenza season, the same investigators documented a 9% reduction in MAARI rates during the influenza season among persons aged 35-44 years in intervention communities, where coverage was estimated at 31% among school-aged children.
However, MAARI rates among persons aged ≥45 years were lower in the intervention communities regardless of the presence of influenza in the community, suggesting that the lower rates could not be attributed to vaccination of school-aged children against influenza (295). The largest study to examine the community effects of increasing overall vaccine coverage was an ecologic study that described the experience in Ontario, Canada, the only province to implement a universal influenza vaccination program, beginning in 2000. On the basis of models developed from administrative and viral surveillance data, influenza-related mortality, hospitalizations, ED use, and physicians' office visits decreased significantly more in Ontario after program introduction than in other provinces, with the largest reductions observed in younger age groups (316).

# Effectiveness of Influenza Vaccination When Circulating Influenza Virus Strains Differ from Vaccine Strains

Manufacturing trivalent influenza virus vaccines is a challenging process that takes 6-8 months to complete. Vaccination can provide reduced but substantial cross-protection against drifted strains in some seasons, including reductions in severe outcomes such as hospitalization. Usually one or more circulating viruses with antigenic changes compared with the vaccine strains are identified in each influenza season. In addition, two distinct lineages of influenza B viruses have co-circulated in recent years, and limited cross-protection is observed against the lineage not represented in the vaccine (48). However, the clinical effectiveness of influenza vaccines cannot be determined solely by laboratory evaluation of the degree of antigenic match between vaccine and circulating strains.

In some influenza seasons, circulating influenza viruses with significant antigenic differences predominate, and reductions in vaccine effectiveness sometimes are observed compared with seasons when vaccine and circulating strains are well-matched (107,121,125,173,222). However, even during years when vaccine strains were not antigenically well matched to circulating strains (the result of antigenic drift), substantial protection has been observed against severe outcomes, presumably because of vaccine-induced cross-reacting antibodies (121,125,222,283). For example, in one study conducted during the 2003-04 influenza season, when the predominant circulating strain was an influenza A (H3N2) virus that was antigenically different from that season's vaccine strain, effectiveness against laboratory-confirmed influenza illness among persons aged 50-64 years was 60% among healthy persons and 48% among persons with medical conditions that increased the risk for influenza complications (125). An interim, within-season analysis during the 2007-08 influenza season indicated that vaccine effectiveness was 44% overall, 54% among healthy persons aged 5-49 years, and 58% against influenza A, despite the finding that viruses circulating in the study area were predominately a drifted influenza A (H3N2) strain and an influenza B strain from a different lineage compared with vaccine strains (317). Among children, both TIV and LAIV provide protection against infection even in seasons when vaccine and circulating strains are not well-matched.
Vaccine effectiveness against ILI was 49%-69% in two observational studies, and effectiveness against medically attended, laboratory-confirmed influenza was 49% in a case-control study conducted among young children during the 2003-04 influenza season, when, on the basis of viral surveillance data, a drifted influenza A (H3N2) strain predominated (102,106).

However, continued improvements in collecting representative circulating viruses and in using surveillance data to forecast antigenic drift are needed. Shortening manufacturing time, which would increase the time available to identify good vaccine candidate strains from among the most recent circulating strains, also is important. Data from multiple seasons that are collected in a consistent manner are needed to better understand vaccine effectiveness during seasons when circulating and vaccine virus strains are not well-matched.

Seasonal influenza vaccines are not expected to provide protection against novel influenza A (H1N1) virus infection because the hemagglutinin of this novel strain is substantially different from that of seasonal influenza A (H1N1) viruses. Preliminary immunologic data indicate that few persons have antibody that shows evidence of cross-reactivity against novel influenza A (H1N1) virus, and few show increases in antibody titer to novel influenza A (H1N1) virus after vaccination with the 2007-08 or the 2008-09 seasonal influenza vaccines (318). Vaccines specific to novel influenza A (H1N1) virus currently are being developed.

# Cost-Effectiveness of Influenza Vaccination

Economic studies of influenza vaccination are difficult to compare because they have used different measures of both costs and benefits (e.g., cost-only, cost-effectiveness, cost-benefit, or cost-utility). However, most studies find that vaccination reduces or minimizes health-care, societal, and individual costs and the productivity losses and absenteeism associated with influenza illness. One national study estimated the annual economic burden of seasonal influenza in the United States (using 2003 population and dollars) to be $87.1 billion, including $10.4 billion in direct medical costs (319).

Studies of influenza vaccination in the United States among persons aged ≥65 years have estimated substantial reductions in hospitalizations and deaths and overall societal cost savings (168,169). Studies comparing adults in different age groups also find that vaccination is economically beneficial. One study that compared the economic impact of vaccination among persons aged ≥65 years with that among persons aged 15-64 years indicated that vaccination resulted in a net savings per quality-adjusted life year (QALY) and that the Medicare program saved costs of treating illness by paying for vaccination (320). A study of a larger population comparing persons aged 50-64 years with those aged ≥65 years estimated the cost-effectiveness of influenza vaccination to be $28,000 per QALY saved (in 2000 dollars) for persons aged 50-64 years compared with $980 per QALY saved among persons aged ≥65 years (321).

Economic analyses among adults aged <65 years have reported mixed results regarding influenza vaccination. Two studies in the United States found that vaccination can reduce both direct medical costs and indirect costs from work absenteeism and reduced productivity (322,323). However, another U.S. study indicated no productivity and absentee savings in a strategy to vaccinate healthy working adults, although vaccination was still estimated to be cost-effective (324).
Cost analyses have documented the considerable financial burden of influenza illness among children. In a study of 727 children conducted at a medical center during 2000-2004, the mean total cost of hospitalization for influenza-related illness was $13,159 ($39,792 for patients admitted to an intensive care unit and $7,030 for patients cared for exclusively on the wards) (325). A strategy that focuses on vaccinating children with medical conditions that confer a higher risk for influenza complications is more cost-effective than a strategy of vaccinating all children (324). An analysis that compared the costs of vaccinating children of varying ages with TIV and LAIV indicated that costs per QALY saved increased with age for both vaccines. In 2003 dollars per QALY saved, costs for routine vaccination using TIV were $12,000 for healthy children aged 6-23 months and $119,000 for healthy adolescents aged 12-17 years, compared with $9,000 and $109,000, respectively, using LAIV (326). Economic evaluations of vaccinating children have demonstrated a wide range of cost estimates but have generally found this strategy to be either cost saving or cost beneficial (327-330).

Economic analyses are sensitive to the vaccination venue, with vaccination in medical care settings incurring higher projected costs. In a published model, the mean cost (year 2004 values) of vaccination was lower in mass vaccination ($17.04) and pharmacy ($11.57) settings than in scheduled doctor's office visits ($28.67) (331). Vaccination in nonmedical settings was projected to be cost saving for healthy adults aged ≥50 years and for high-risk adults of all ages. For healthy adults aged 18-49 years, preventing an episode of influenza would cost $90 if vaccination were delivered in a pharmacy setting, $210 in a mass vaccination setting, and $870 during a scheduled doctor's office visit (331). Medicare payment rates in recent years have been less than the costs associated with providing vaccination in a medical practice (332).

# Vaccination Coverage Levels

Continued annual monitoring is needed to determine the effects on vaccination coverage of vaccine supply delays and shortages, changes in influenza vaccination recommendations and target groups for vaccination, reimbursement rates for vaccine and vaccine administration, and other factors related to vaccination coverage among adults and children. One of the Healthy People 2010 objectives (objective no. 14-29a) includes achieving an influenza vaccination coverage level of 90% for persons aged ≥65 years and among nursing home residents (333,334); new strategies to improve coverage are needed to achieve this objective (335,336). Increasing vaccination coverage among persons who have high-risk conditions and are aged <65 years, including children at high risk, is the highest priority for expanding influenza vaccine use. On the basis of national survey data, recent estimated coverage levels among persons aged ≥65 years (Table 3) are only slightly lower than the coverage levels observed before the 2004-05 vaccine shortage year (337-339). In the 2006-07 and 2007-08 influenza seasons, estimated vaccination coverage levels among adults with high-risk conditions aged 18-49 years were 25% and 30%, respectively, substantially lower than the Healthy People 2000 and Healthy People 2010 objectives of 60% (Table 3) (333,334).

Studies conducted among children and adults indicate that opportunities to vaccinate persons at risk for influenza complications (e.g., during hospitalizations for other causes) often are missed.
In one study, 23% of children hospitalized with influenza and a comorbidity had a previous hospitalization during the preceding influenza vaccination season (340). In a study of hospitalized Medicare patients, only 31.6% were vaccinated before admission, 1.9% during admission, and 10.6% after admission (341). A study conducted in New York City during 2001-2005 among 7,063 children aged 6-23 months indicated that 2-dose vaccine coverage increased from 1.6% to 23.7% over time; however, although the average number of medical visits during which an opportunity to be vaccinated was missed decreased during the course of the study from 2.9 to 2.0 per child, 55% of all visits during the final year of the study still represented a missed vaccination opportunity (342). Using standing orders in hospitals increases vaccination rates among hospitalized persons (343), and vaccination of hospitalized patients is safe and stimulates an appropriate immune response (158). In one survey, the strongest predictor of receiving vaccination was the respondent's belief that he or she was in a high-risk group; however, many persons in high-risk groups did not know that they were in a group recommended for vaccination (344).

Reducing racial/ethnic health disparities, including disparities in influenza vaccination coverage, is an overarching national goal that is not being met (334). Estimated vaccination coverage levels in 2007 among persons aged ≥65 years were 70% for non-Hispanic whites, 58% for non-Hispanic blacks, and 54% for Hispanics (345). Among Medicare beneficiaries, other key factors that contribute to disparities in coverage include variations in the propensity of patients to actively seek vaccination and variations in the likelihood that providers recommend vaccination (346,347). One study estimated that eliminating these disparities in vaccination coverage would have an impact on mortality similar to the impact of eliminating deaths attributable to kidney disease among blacks or liver disease among Hispanics (348).

Reported vaccination levels are low among children at increased risk for influenza complications. Coverage among children aged 2-17 years with asthma for the 2004-05 influenza season was estimated to be 29% (349). One study reported 79% vaccination coverage among children attending a cystic fibrosis treatment center (350). During the first season for which ACIP recommended that all children aged 6-23 months receive vaccination, 33% received 1 or more doses of influenza vaccine, and 18% received 2 doses if they were previously unvaccinated (351). Among children enrolled in HMOs who had received a first dose during 2001-2004, second-dose coverage varied from 29% to 44% among children aged 6-23 months and from 12% to 24% among children aged 2-8 years (352). A rapid analysis of influenza vaccination coverage levels among members of an HMO in Northern California demonstrated that during the 2004-05 influenza season, the first year of the recommendation for vaccination of children aged 6-23 months, 1-dose coverage was 57% (353). During the 2006-07 influenza season, the third season for which ACIP recommended that all children aged 6-23 months receive vaccination, coverage remained low and did not increase substantially from the 2004-05 season.
Data collected in 2007 by the National Immunization Survey indicated that for the 2006-07 season, 32% of children aged 6-23 months received at least 1 dose of influenza vaccine and 21% were fully vaccinated (i.e., received 1 or 2 doses depending on previous vaccination history); however, results varied substantially among states (354). As has been reported for older adults, a physician recommendation for vaccination and the perception that having a child be vaccinated "is a smart idea" were positively associated with the likelihood of vaccination of children aged 6-23 months (355). Similarly, children with asthma were more likely to be vaccinated if their parents recalled a physician recommendation to be vaccinated or believed that the vaccine worked well (356). Implementation of a reminder/recall system in a pediatric clinic increased the percentage of children with asthma receiving vaccination from 5% to 32% (357).

Although annual vaccination is recommended for HCP and is a high priority for reducing morbidity associated with influenza in health-care settings and for expanding influenza vaccine use (358-360), national survey data demonstrated a vaccination coverage level of only 42% among HCP during the 2005-06 season and 44% during the 2006-07 season (Table 3).

†† Adults categorized as being at high risk for influenza-related complications self-reported one or more of the following: 1) ever having been told by a physician that they had diabetes, emphysema, coronary heart disease, angina, heart attack, or other heart condition; 2) having had a diagnosis of cancer during the previous 12 months (excluding nonmelanoma skin cancer) or ever having been told by a physician that they had lymphoma, leukemia, or blood cancer (postcoding for a cancer diagnosis was not yet completed at the time of this publication, so this diagnosis was not included in the 2006-07 season data); 3) having been told by a physician that they have chronic bronchitis or weak or failing kidneys; or 4) reporting an asthma episode or attack during the preceding 12 months. For children aged <18 years, high-risk conditions included ever having been told by a physician of having diabetes, cystic fibrosis, sickle cell anemia, congenital heart disease, other heart disease, or neuromuscular conditions (seizures, cerebral palsy, and muscular dystrophy), or having had an asthma episode or attack during the preceding 12 months.
§§ Aged 18-44 years, pregnant at the time of the survey, and without high-risk conditions.
¶¶ Adults were classified as health-care workers if they were employed in a health-care occupation or in a health-care-industry setting, on the basis of standard occupation and industry categories recoded in groups by CDC's National Center for Health Statistics.
*** Interviewed sample child or adult in each household containing at least one of the following: a child aged <5 years, an adult aged ≥65 years, or any person aged 5-17 years at high risk (see footnote †† above). To obtain information on household composition and high-risk status of household members, the sampled adult, child, and person files from NHIS were merged. Interviewed adults who were health-care workers or who had high-risk conditions were excluded. Information could not be assessed regarding the high-risk status of other adults aged 18-64 years in the household; therefore, certain adults aged 18-64 years who lived with an adult aged 18-64 years at high risk were not included in the analysis. Also, although the recommendation for children aged 2-4 years was not in place during the 2005-06 season, children aged 2-4 years in these calculations were considered to have an indication for vaccination, to facilitate comparison of coverage data for subsequent years.

Vaccination of HCP has been associated with reduced work absenteeism (300) and with fewer deaths among nursing home patients (305,307) and elderly hospitalized patients (308). Factors associated with a higher rate of influenza vaccination among HCP include older age, being a hospital employee, having employer-provided health-care insurance, having had pneumococcal or hepatitis B vaccination in the past, and having visited a health-care professional during the preceding year. Non-Hispanic black HCP were less likely than non-Hispanic white HCP to be vaccinated (361). HCP who decline vaccination frequently express doubts about the risk for influenza and the need for vaccination, are concerned about vaccine effectiveness and side effects, and dislike injections (362).

Vaccination coverage among pregnant women increased during the 2007-08 influenza season, with 24% of pregnant women reporting vaccination, excluding pregnant women who reported diabetes, heart disease, lung disease, and other selected high-risk conditions (Table 3). However, the sample size is small, and the increase in coverage compared with previous seasons was not statistically significant. In a study of influenza vaccine acceptance by pregnant women, 71% of those who were offered the vaccine chose to be vaccinated (363). However, a 1999 survey of obstetricians and gynecologists determined that only 39% administered influenza vaccine to obstetric patients in their practices, although 86% agreed that pregnant women's risk for influenza-related morbidity and mortality increases during the last two trimesters (364).

Influenza vaccination coverage in all groups recommended for vaccination remains suboptimal. Despite the timing of the peak of influenza disease, administration of vaccine decreases substantially after November. According to results from the NHIS for the two most recent influenza seasons for which these data are available, approximately 84% of all influenza vaccinations were administered during September-November. Among persons aged ≥65 years, the percentage of September-November vaccinations was 92% (365). Because many persons recommended for vaccination remain unvaccinated at the end of November, CDC encourages public health partners and health-care providers to conduct vaccination clinics and other activities that promote seasonal influenza vaccination annually during National Influenza Vaccination Week (December 6-12, 2009) and throughout the remainder of the influenza season.

Self-report of influenza vaccination among adults, compared with determination of vaccination status from the medical record, is a sensitive and specific source of information (366,367). Patient self-reports should be accepted as evidence of influenza vaccination in clinical practice (367). However, information on the validity of parents' reports of pediatric influenza vaccination is not yet available.

# Recommendations for Using TIV and LAIV During the 2009-10 Influenza Season

Both TIV and LAIV prepared for the 2009-10 season will include A/Brisbane/59/2007 (H1N1)-like, A/Brisbane/10/2007 (H3N2)-like, and B/Brisbane/60/2008-like antigens. The influenza B virus component of the 2009-10 vaccine is from the Victoria lineage (368).
These viruses will be used because they are representative of seasonal influenza viruses that are predicted to be circulating in the United States during the 2009-10 influenza season and have favorable growth properties in eggs. Seasonal influenza vaccines are not expected to provide substantial protection against infection with the recently identified novel influenza A (H1N1) virus (318), and guidance for the prevention of infection with this virus will be published separately.

TIV and LAIV can be used to reduce the risk for influenza virus infection and its complications. Vaccination providers should administer influenza vaccine to any person who wishes to reduce the likelihood of becoming ill with influenza or of transmitting influenza to others should they become infected. Healthy, nonpregnant persons aged 2-49 years can choose to receive either vaccine. Some TIV formulations are FDA-licensed for use in persons as young as 6 months (see Recommended Vaccines for Different Age Groups). TIV is licensed for use in persons with high-risk conditions (Table 2). LAIV is FDA-licensed for use only in persons aged 2-49 years. In addition, FDA has indicated that the safety of LAIV has not been established in persons with underlying medical conditions that confer a higher risk for influenza complications. All children aged 6 months-8 years who have not previously been vaccinated at any time with at least 1 dose of either LAIV (if appropriate) or TIV should receive 2 doses of age-appropriate vaccine in the same season, with a single dose during subsequent seasons.

# Target Groups for Protection Through Vaccination

Influenza vaccine should be provided to all persons who want to reduce the risk for becoming ill with influenza or of transmitting it to others. However, emphasis on providing routine vaccination annually to certain groups at higher risk for influenza infection or complications is advised, including all children aged 6 months-18 years, all persons aged ≥50 years, and other adults at risk for medical complications from influenza. In addition, all persons who live with or care for persons at high risk for influenza-related complications, including contacts of children aged <6 months, should receive influenza vaccine annually (Boxes 1 and 2). Approximately 85% of the U.S. population is included in one or more of these target groups; however, <40% of the U.S. population received an influenza vaccination during the 2008-09 influenza season.

# Children Aged 6 Months-18 Years

Beginning with the 2008-09 influenza season, annual vaccination was recommended for all children aged 6 months-18 years. Children and adolescents at high risk for influenza complications should continue to be a focus of vaccination efforts as providers and programs transition to routinely vaccinating all children. Healthy children aged 2-18 years can receive either LAIV or TIV. Children aged 6-23 months, and those aged 2-4 years who have evidence of asthma or wheezing or who have medical conditions that put them at higher risk for influenza complications, should receive TIV (see Considerations When Using LAIV). All children aged 6 months-8 years who have not previously received vaccination against influenza should receive 2 doses of vaccine the first year they are vaccinated.
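To illustrate how these age, pregnancy, and risk-status criteria combine when choosing between TIV and LAIV, the following is a minimal Python sketch. The function and field names are hypothetical, and a real screening workflow would also need to apply the product-specific age indications and the full LAIV contraindications described later in this report.

```python
def vaccine_options(age_years, pregnant, high_risk_condition,
                    wheeze_or_asthma_past_year=False):
    """Return the set of 2009-10 seasonal vaccine types that may be
    offered, per the criteria described above (illustrative only)."""
    options = set()
    if age_years >= 0.5:  # some TIV formulations are licensed from age 6 months
        options.add("TIV")
    # LAIV: healthy, nonpregnant persons aged 2-49 years; children aged
    # 2-4 years with asthma or recent wheezing should receive TIV instead.
    if (2 <= age_years <= 49 and not pregnant and not high_risk_condition
            and not (age_years < 5 and wheeze_or_asthma_past_year)):
        options.add("LAIV")
    return options

print(sorted(vaccine_options(30, False, False)))        # ['LAIV', 'TIV']
print(sorted(vaccine_options(3, False, False, True)))   # ['TIV']
```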
# Persons at Risk for Medical Complications

Vaccination to prevent influenza is particularly important for the following persons, who are at increased risk for severe complications from influenza or at higher risk for influenza-related outpatient, ED, or hospital visits:
• all children aged 6 months-4 years (59 months);
• all persons aged ≥50 years;
• children and adolescents (aged 6 months-18 years) receiving long-term aspirin therapy who therefore might be at risk for Reye syndrome after influenza virus infection;
• women who will be pregnant during the influenza season;
• adults and children who have chronic pulmonary (including asthma), cardiovascular (except hypertension), renal, hepatic, cognitive, neurologic/neuromuscular, hematological, or metabolic disorders (including diabetes mellitus);
• adults and children who have immunosuppression (including immunosuppression caused by medications or by human immunodeficiency virus); and
• residents of nursing homes and other chronic-care facilities.

For children, the risk for severe complications from seasonal influenza is highest among those aged <2 years, who have much higher rates of hospitalization for influenza-related complications compared with older children (7,32,39). Medical care and ED visits attributable to influenza are increased among children aged <5 years compared with older children (32). Chronic neurologic and neuromuscular conditions include any condition (e.g., cognitive dysfunction, spinal cord injuries, seizure disorders, or other neuromuscular disorders) that can compromise respiratory function or the handling of respiratory secretions or that can increase the risk for aspiration (30).

# Persons Who Live With or Care for Persons at High Risk for Influenza-Related Complications

To prevent transmission to the persons identified above, vaccination with TIV or LAIV (unless contraindicated) also is recommended for the following persons. When vaccine supply is limited, vaccination efforts should focus on delivering vaccination to these persons:
• HCP;
• household contacts (including children) and caregivers of children aged <59 months (i.e., aged <5 years) and of adults aged ≥50 years; and
• household contacts (including children) and caregivers of persons with medical conditions that put them at higher risk for severe complications from influenza.

# Children Aged <6 Months

Influenza vaccine is not recommended for children aged <6 months, and antiviral medications are not licensed for use among infants. Protection of young infants, who have hospitalization rates similar to those observed among the elderly, depends on vaccination of the infants' close contacts. A recent study conducted in Bangladesh demonstrated that infants born to vaccinated women have significant protection from laboratory-confirmed influenza, either through transfer of influenza-specific maternal antibodies or because vaccination of the mother reduces the infant's risk for exposure to influenza (154). All household contacts, health-care and day care providers, and other close contacts of young infants should be vaccinated.

# Vaccination of Specific Populations

# Children Aged 6 Months-18 Years

All children aged 6 months-18 years should be vaccinated against influenza annually. In 2004, ACIP recommended routine vaccination for all children aged 6-23 months, and in 2006, ACIP expanded the recommendation to include all children aged 24-59 months. Recommendations to provide routine influenza vaccination to all children and adolescents aged 6 months-18 years are made on the basis of 1) accumulated evidence that influenza vaccine is effective and safe for children (see Influenza Vaccine Efficacy, Effectiveness, and Safety); 2) increased evidence that influenza has substantial adverse impacts among children and their contacts (e.g., school absenteeism, increased antibiotic use, medical care visits, and parental work loss) (see Health-Care Use, Hospitalizations, and Deaths Attributed to Influenza); and 3) an expectation that a simplified, age-based influenza vaccine recommendation for all children and adolescents will improve vaccine coverage levels among children who already have a risk- or contact-based indication for annual influenza vaccination.
Children typically have the highest attack rates during community outbreaks of influenza and serve as a major source of transmission within communities (1,2). If sufficient vaccination coverage among children can be achieved, potential benefits include the indirect effect of reducing influenza among persons who have close contact with children and reducing overall transmission within communities. Achieving and sustaining community-level reductions in influenza will require mobilization of community resources and development of sustainable annual vaccination campaigns to assist health-care providers and vaccination programs in providing influenza vaccination services to children of all ages. In many areas, innovative community-based efforts, which might include mass vaccination programs in school or other community settings, will be needed to supplement vaccination services provided in health-care providers' offices or public health clinics. In nonrandomized community-based controlled trials, reductions in ILI-related symptoms and medical visits among household contacts have been demonstrated in communities where vaccination programs for school-aged children were established compared with communities without such vaccination programs (295,314,315).

Reducing influenza-related illness among children who are at high risk for influenza complications should continue to be a primary focus of influenza-prevention efforts. Children who should be vaccinated because they are at high risk for influenza complications include all children aged 6-59 months, children with certain medical conditions, children who are contacts of children aged <5 years (<60 months) or of persons aged ≥50 years, and children who are contacts of persons at high risk for influenza complications because of medical conditions.

All children aged 6 months-8 years who have not previously received vaccination against influenza should receive 2 doses of vaccine the first influenza season that they are vaccinated. The second dose should be administered 4 or more weeks after the initial dose. When only 1 dose is administered to a child aged 6 months-8 years during the child's first year of vaccination, 2 doses should be administered in the following season. However, 2 doses should be administered only in the first season of vaccination or, if only 1 dose was administered in the first season, in the season that immediately follows. For example, children aged 6 months-8 years who were vaccinated for the first time with the 2008-09 influenza vaccine but received only 1 dose should receive 2 doses of the 2009-10 influenza vaccine. All other children aged 6 months-8 years who have previously received 1 or more doses of influenza vaccine at any time should receive 1 dose of the 2009-10 influenza vaccine; this includes children who received only a single vaccination during a season before 2007-08. If possible, both doses should be administered before the onset of influenza season. However, vaccination, including the second dose, is recommended even after influenza virus begins to circulate in a community.
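The dose rules above can be summarized algorithmically. The following Python sketch encodes them for the 2009-10 season; the function signature and record fields are hypothetical simplifications of a real vaccination history, not part of any ACIP tool.

```python
def doses_recommended_2009_10(age_months, first_vaccination_season=None,
                              doses_in_first_season=0):
    """Number of 2009-10 seasonal influenza vaccine doses recommended for
    a child aged 6 months-8 years, per the rules described above.

    first_vaccination_season: e.g., "2008-09", or None if never vaccinated.
    doses_in_first_season: doses received during that first season.
    """
    if not 6 <= age_months < 9 * 12:
        raise ValueError("rules shown here apply only to ages 6 months-8 years")
    if first_vaccination_season is None:
        return 2  # first season of vaccination: 2 doses, >= 4 weeks apart
    if first_vaccination_season == "2008-09" and doses_in_first_season == 1:
        return 2  # only 1 dose in the immediately preceding (first) season
    return 1      # vaccinated in an earlier season: 1 dose this season

print(doses_recommended_2009_10(24))                      # 2 (never vaccinated)
print(doses_recommended_2009_10(24, "2008-09", 1))        # 2
print(doses_recommended_2009_10(60, "2006-07", 1))        # 1
```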
# HCP and Other Persons Who Can Transmit Influenza to Those at High Risk

Healthy persons who are infected with influenza virus, including those with subclinical infection, can transmit influenza virus to persons at higher risk for complications from influenza. In addition to HCP, groups that can transmit influenza to high-risk persons and that should be vaccinated include
• employees of assisted living and other residences for persons in groups at high risk;
• persons who provide home care to persons in groups at high risk; and
• household contacts of persons in groups at high risk.

All HCP and persons in training for health-care professions should be vaccinated annually against influenza. Persons working in health-care settings who should be vaccinated include physicians, nurses, and other workers in both hospital and outpatient-care settings; medical emergency-response workers (e.g., paramedics and emergency medical technicians); employees of nursing home and long-term-care facilities who have contact with patients or residents; and students in these professions who will have contact with patients (359,360,371).

Facilities that employ HCP should provide vaccine to workers by using approaches that have been demonstrated to be effective in increasing vaccination coverage. Health-care administrators should consider the level of vaccination coverage among HCP to be one measure of a patient safety quality program and should consider obtaining signed declinations from personnel who decline influenza vaccination for reasons other than medical contraindications (360,372,373). Influenza vaccination rates among HCP within facilities should be regularly measured and reported, and ward-, unit-, and specialty-specific coverage rates should be provided to staff and administration (360). Studies have demonstrated that organized campaigns can attain higher rates of vaccination among HCP with moderate effort and by using strategies that increase vaccine acceptance (358,360,374).

Efforts to increase vaccination coverage among HCP are supported by various national accrediting and professional organizations and, in certain states, by statute. The Joint Commission on Accreditation of Health-Care Organizations has approved an infection-control standard that requires accredited organizations to offer influenza vaccinations to staff, including volunteers and licensed independent practitioners with close patient contact. The standard became an accreditation requirement beginning January 1, 2007 (375). In addition, the Infectious Diseases Society of America has recommended mandatory vaccination for HCP, with a provision for declination based on religious or medical reasons (376). Some states have regulations regarding vaccination of HCP in long-term-care facilities (377), require that health-care facilities offer influenza vaccination to HCP, or require that HCP either receive influenza vaccination or indicate a religious, medical, or philosophic reason for not being vaccinated (378,379).

# Close Contacts of Immunocompromised Persons

Immunocompromised persons are at risk for influenza complications but might have inadequate protection after vaccination. Close contacts of immunocompromised persons, including HCP, should be vaccinated to reduce the risk for influenza transmission. TIV is recommended for vaccinating household members, HCP, and others who have close contact with severely immunosuppressed persons (e.g., patients with hematopoietic stem cell transplants) during those periods in which the immunosuppressed person requires care in a protective environment (typically defined as a specialized patient-care area with positive airflow relative to the corridor, high-efficiency particulate air filtration, and frequent air changes) (360,380). LAIV transmission from a recently vaccinated person causing clinically important illness in an immunocompromised contact has not been reported.
The rationale for avoiding use of LAIV among HCP or other close contacts of severely immunocompromised patients is the theoretical risk that a live, attenuated vaccine virus could be transmitted to the severely immunosuppressed person. As a precautionary measure, HCP who receive LAIV should avoid providing care for severely immunosuppressed patients requiring a protected environment for 7 days after vaccination. Hospital visitors who have received LAIV should avoid contact with severely immunosuppressed persons in protected environments for 7 days after vaccination but should not be restricted from visiting less severely immunosuppressed patients.

No preference is indicated for TIV use by persons who have close contact with persons with lesser degrees of immunosuppression (e.g., persons with diabetes, persons with asthma who take corticosteroids, persons who have recently received chemotherapy or radiation but who are not being cared for in a protective environment as defined above, or persons infected with HIV) or for TIV use by HCP or other healthy nonpregnant persons aged 2-49 years in close contact with persons in all other groups at high risk.

# Pregnant Women

Pregnant women and newborns are at risk for influenza complications, and all women who are pregnant or who will be pregnant during influenza season should be vaccinated. The American College of Obstetricians and Gynecologists and the American Academy of Family Physicians also have recommended routine vaccination of all pregnant women (381). No preference is indicated for use of TIV that does not contain thimerosal as a preservative (see Vaccine Preservative [Thimerosal] in Multidose Vials of TIV) for any group recommended for vaccination, including pregnant women. LAIV is not licensed for use in pregnant women. However, pregnant women do not need to avoid contact with persons recently vaccinated with LAIV.

# Breastfeeding Mothers

Vaccination is recommended for all persons, including breastfeeding women, who are contacts of infants or children aged <5 years because infants and young children are at high risk for influenza complications and are more likely to require medical care or hospitalization if infected. Breastfeeding does not adversely affect the immune response and is not a contraindication for vaccination (179). Unless contraindicated because of other medical conditions, women who are breastfeeding can receive either TIV or LAIV. In one randomized controlled trial conducted in Bangladesh, infants born to women vaccinated during pregnancy had a lower risk for laboratory-confirmed influenza. However, the relative contributions to protection of breastfeeding and of passive transfer of maternal antibodies during pregnancy were not determined (154).

# Travelers

The risk for exposure to influenza during travel depends on the time of year and the destination. In the temperate regions of the Southern Hemisphere, influenza activity typically occurs during April-September. In temperate climate zones of both the Northern and Southern Hemispheres, travelers also can be exposed to influenza during the summer, especially when traveling as part of large tourist groups (e.g., on cruise ships) that include persons from areas of the world in which influenza viruses are circulating (382,383). In the tropics, influenza occurs throughout the year. In a study among Swiss travelers to tropical and subtropical countries, influenza was the most frequently acquired vaccine-preventable disease (384).
Any traveler who wants to reduce the risk for influenza infection should consider influenza vaccination, preferably at least 2 weeks before departure. In particular, persons at high risk for complications of influenza who were not vaccinated with influenza vaccine during the preceding fall or winter should consider receiving influenza vaccine before travel if they plan to travel
• to the tropics,
• with organized tourist groups at any time of year, or
• to the Southern Hemisphere during April-September.

No information is available about the benefits of revaccinating persons before summer travel who already were vaccinated during the preceding fall, and revaccination is not recommended. Persons at high risk who receive the previous season's vaccine before travel should be revaccinated with the current vaccine the following fall or winter. Persons at higher risk for influenza complications should consult with their health-care practitioner to discuss the risk for influenza or other travel-related diseases before embarking on travel during the summer.

# General Population

Vaccination is recommended for any person who wishes to reduce the likelihood of becoming ill with influenza or of transmitting influenza to others should they become infected. Healthy, nonpregnant persons aged 2-49 years might choose to receive either TIV or LAIV. All other persons aged ≥6 months should receive TIV. Persons who provide essential community services should be considered for vaccination to minimize disruption of essential activities during influenza outbreaks. Students or other persons in institutional settings (e.g., those who reside in dormitories or correctional facilities) should be encouraged to receive vaccine to minimize morbidity and the disruption of routine activities during influenza epidemics (385,386).

# Recommended Vaccines for Different Age Groups

When vaccinating children aged 6-35 months with TIV, health-care providers should use TIV that has been licensed by the FDA for this age group (i.e., TIV manufactured by Sanofi Pasteur [FluZone]) (219). TIV from Novartis (Fluvirin) is FDA-approved in the United States for use among persons aged ≥4 years (220). TIV from GlaxoSmithKline (Fluarix and FluLaval) or CSL Biotherapies (Afluria) is labeled for use in persons aged ≥18 years because data demonstrating immunogenicity or efficacy among younger persons have not been provided to FDA (207,208,218). LAIV from MedImmune (FluMist) is recommended for use by healthy nonpregnant persons aged 2-49 years (Table 2) (291).

If a pediatric vaccine dose (0.25 mL) is administered to an adult, an additional pediatric dose (0.25 mL) should be given to provide a full adult dose (0.5 mL). If the error is discovered later (after the patient has left the vaccination setting), an adult dose should be administered as soon as the patient can return. No action needs to be taken if an adult dose is administered to a child. Several new vaccine formulations are being evaluated in immunogenicity and efficacy trials; when licensed, these new products will increase the influenza vaccine supply and provide additional vaccine choices for practitioners and their patients.
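The labeled age indications above lend themselves to a simple lookup. This Python sketch is illustrative only; the minimum ages reflect the licensures listed in the preceding paragraph, and the dictionary and function names are ours.

```python
# Minimum labeled ages (in years) for 2009-10 TIV products, per the text
# above; LAIV (FluMist, MedImmune) is separately indicated for healthy
# nonpregnant persons aged 2-49 years.
TIV_MINIMUM_AGE_YEARS = {
    "FluZone (Sanofi Pasteur)": 0.5,   # licensed from age 6 months
    "Fluvirin (Novartis)": 4,
    "Fluarix (GlaxoSmithKline)": 18,
    "FluLaval (GlaxoSmithKline)": 18,
    "Afluria (CSL Biotherapies)": 18,
}

def tiv_products_for_age(age_years):
    """Return the TIV products whose labeled age indication covers this age."""
    return [product for product, minimum in TIV_MINIMUM_AGE_YEARS.items()
            if age_years >= minimum]

print(tiv_products_for_age(1))  # ['FluZone (Sanofi Pasteur)']
```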
# Influenza Vaccines and Use of Influenza Antiviral Medications

Unvaccinated persons who are receiving antiviral medications for influenza treatment or chemoprophylaxis often also are recommended for vaccination. Administration of TIV to persons receiving influenza antivirals is acceptable. The effect of coadministering LAIV with influenza antiviral medications on the safety and effectiveness of LAIV has not been studied. However, because influenza antivirals reduce replication of influenza viruses, LAIV should not be administered until 48 hours after cessation of influenza antiviral therapy, and influenza antiviral medications should not be administered for 2 weeks after receipt of LAIV. Persons receiving antivirals within the period from 2 days before to 14 days after vaccination with LAIV should be revaccinated at a later date with any approved vaccine formulation (179,291).

# Considerations When Using LAIV

LAIV is an option for vaccination of healthy, nonpregnant persons aged 2-49 years, including HCP and other close contacts of high-risk persons (excepting severely immunocompromised persons who require care in a protected environment). No preference is indicated for LAIV or TIV when considering vaccination of healthy,¶ nonpregnant persons aged 2-49 years. Possible advantages of LAIV include its potential to induce a broad mucosal and systemic immune response in children, its ease of administration, and the possibly greater acceptability of an intranasal rather than intramuscular route of administration. LAIV can be administered to persons with minor acute illnesses (e.g., diarrhea or mild upper respiratory tract infection with or without fever). However, if nasal congestion is present that might impede delivery of the vaccine to the nasopharyngeal mucosa, deferral of administration should be considered until resolution of the illness, or TIV should be administered instead. If the vaccine recipient sneezes after administration, the dose should not be repeated. No data exist about concomitant use of nasal corticosteroids or other intranasal medications (261).

Although FDA licensure of LAIV excludes children aged 2-4 years with a history of asthma or recurrent wheezing, the precise risk, if any, of wheezing caused by LAIV among these children is unknown because experience with LAIV among these young children is limited. Young children might not have a history of recurrent wheezing if their exposure to respiratory viruses has been limited because of their age. Certain children might have a history of wheezing with respiratory illnesses but have not had asthma diagnosed. The following screening recommendations should be used to assist persons who administer influenza vaccines in providing the appropriate vaccine for children aged 2-4 years.

Clinicians and vaccination programs should screen for asthma or wheezing illness (or history of wheezing illness) when considering use of LAIV for children aged 2-4 years and should avoid use of this vaccine in children with asthma or a wheezing episode within the previous 12 months. Health-care providers should consult the medical record, when available, to identify children aged 2-4 years with asthma or recurrent wheezing that might indicate asthma. In addition, to identify children who might be at greater risk for asthma and possibly at increased risk for wheezing after receiving LAIV, parents or caregivers of children aged 2-4 years should be asked: "In the past 12 months, has a health-care provider ever told you that your child had wheezing or asthma?" Children whose parents or caregivers answer "yes" to this question and children who have asthma or who had a wheezing episode noted in the medical record during the preceding 12 months should not receive LAIV. TIV is available for use in children with asthma or wheezing (387).
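The screening steps above for children aged 2-4 years reduce to a short decision rule. This Python sketch is a simplified illustration; the parameter names are hypothetical stand-ins for the parental interview and medical-record checks described in the text.

```python
def laiv_appropriate_age_2_4(parent_reports_wheezing_or_asthma,
                             record_shows_asthma,
                             record_shows_wheezing_past_12_months):
    """Return True if LAIV may be considered for a child aged 2-4 years.

    Returns False (administer TIV instead) if the parent or caregiver
    answers 'yes' to the screening question, or if the medical record
    notes asthma or a wheezing episode during the preceding 12 months.
    """
    if (parent_reports_wheezing_or_asthma or record_shows_asthma
            or record_shows_wheezing_past_12_months):
        return False
    return True

print(laiv_appropriate_age_2_4(False, False, False))  # True
print(laiv_appropriate_age_2_4(True, False, False))   # False: use TIV
```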
# Contraindications and Precautions for Use of LAIV

The effectiveness and safety of LAIV have not been established for the following groups, and administration of LAIV to them is contraindicated:
• persons aged <2 years or aged ≥50 years;
• persons with underlying medical conditions that confer a higher risk for influenza complications (see Persons at Risk for Medical Complications);
• children aged 2-4 years with asthma or a wheezing episode during the preceding 12 months;
• children or adolescents receiving aspirin or other salicylates (because of the association of Reye syndrome with wild-type influenza virus infection); and
• pregnant women.

A moderate or severe illness with or without fever is a precaution for use of LAIV. GBS within 6 weeks following a previous dose of influenza vaccine is considered a precaution for use of influenza vaccines. LAIV should not be administered to close contacts of immunosuppressed persons who require a protected environment.

¶ Use of the term "healthy" in this recommendation refers to persons who do not have any of the underlying medical conditions that confer high risk for severe complications (see Contraindications and Precautions for Use of LAIV).

# Personnel Who Can Administer LAIV

Low-level introduction of vaccine viruses into the environment probably is unavoidable when administering LAIV. The risk for acquiring vaccine viruses from the environment is unknown but is probably low. Severely immunosuppressed persons should not administer LAIV. However, other persons at higher risk for influenza complications can administer LAIV, including persons with underlying medical conditions that place them at higher risk, pregnant women, persons with asthma, and persons aged ≥50 years.

# Concurrent Administration of Influenza Vaccine with Other Vaccines

Use of LAIV concurrently with measles, mumps, and rubella (MMR) vaccine alone and with MMR and varicella vaccines among children aged 12-15 months has been studied, and no interference with the immunogenicity of antigens in any of the vaccines was observed (261,388). Among adults aged ≥50 years, the safety and immunogenicity of zoster vaccine and TIV were similar whether the vaccines were administered simultaneously or spaced 4 weeks apart (389). In the absence of specific data indicating interference, following ACIP's general recommendations for vaccination is prudent (179). Inactivated vaccines do not interfere with the immune response to other inactivated vaccines or to live vaccines. Inactivated or live vaccines can be administered simultaneously with LAIV. However, after administration of a live vaccine, at least 4 weeks should pass before another live vaccine is administered.
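Two of the timing rules in this and the preceding sections, the 48-hour washout after influenza antiviral therapy before LAIV and the 4-week spacing between nonsimultaneous live vaccines, can be checked mechanically. The following Python sketch is illustrative only; the function and parameter names are ours.

```python
from datetime import date, timedelta

def laiv_timing_ok(laiv_date, last_antiviral_dose=None, last_live_vaccine=None):
    """Check the LAIV timing rules described above (illustrative only):
    LAIV should wait at least 48 hours after cessation of influenza
    antiviral therapy, and nonsimultaneous live vaccines should be
    separated by at least 4 weeks."""
    if last_antiviral_dose is not None:
        if laiv_date - last_antiviral_dose < timedelta(hours=48):
            return False
    if last_live_vaccine is not None and last_live_vaccine != laiv_date:
        if laiv_date - last_live_vaccine < timedelta(weeks=4):
            return False
    return True

# Example: LAIV 10 days after a live vaccine given on a separate day is too soon.
print(laiv_timing_ok(date(2009, 10, 20), last_live_vaccine=date(2009, 10, 10)))  # False
```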
# Recommendations for Vaccination Administration and Vaccination Programs

Although influenza vaccination levels increased substantially during the 1990s, little progress has been made since 2000 toward achieving national health objectives, and further improvements in vaccine coverage levels are needed to reduce the annual impact of influenza substantially. Strategies to improve vaccination levels, including the use of reminder/recall systems and standing orders programs (335,336,345), should be implemented whenever feasible. Vaccination efforts should begin as soon as vaccine is available and continue through the influenza season. Vaccination coverage can be increased by administering vaccine before and during the influenza season to persons during hospitalizations or routine health-care visits. Vaccinations also can be provided in alternative settings (e.g., pharmacies, grocery stores, workplaces, or other locations in the community), thereby making special visits to physicians' offices or clinics unnecessary. Coordinated campaigns such as National Influenza Vaccination Week (December 6-12, 2009) provide opportunities to refocus public attention on the benefits, safety, and availability of influenza vaccination throughout the influenza season.

When educating patients about adverse events, clinicians should provide access to Vaccine Information Sheets (available at http://www.cdc.gov/vaccines/pubs/vis) and emphasize that 1) TIV contains noninfectious killed viruses and cannot cause influenza, 2) LAIV contains weakened influenza viruses that cannot replicate outside the upper respiratory tract and are unlikely to infect others, and 3) concomitant symptoms or respiratory disease unrelated to vaccination with either TIV or LAIV can occur after vaccination. Adverse events after influenza vaccination should be reported promptly to VAERS at http://vaers.hhs.gov, even if the health-care professional is not certain that the vaccine caused the event.

# Information About the Vaccines for Children Program

The Vaccines for Children (VFC) program supplies vaccine to all states, territories, and the District of Columbia for use by participating providers. These vaccines are to be provided to eligible children without vaccine cost to the patient or the provider, although the provider might charge a vaccine administration fee. All routine childhood vaccines recommended by ACIP are available through this program, including influenza vaccines. The program saves parents and providers out-of-pocket expenses for vaccine purchases and provides cost savings to states through CDC's vaccine contracts. The program results in lower vaccine prices and ensures that all states pay the same contract prices. Detailed information about the VFC program is available at http://www.cdc.gov/vaccines/programs/vfc/default.htm.

# Influenza Vaccine Supply Considerations

The annual supply of influenza vaccine and the timing of its distribution cannot be guaranteed in any year. During the 2008-09 influenza season, 113 million doses of influenza vaccine were distributed in the United States. For the 2009-10 season, total production of seasonal influenza vaccine for the United States is anticipated to be >130 million doses, depending on demand and production yields. However, influenza vaccine distribution delays or vaccine shortages remain possible. One factor that affects production is the inherent time constraint of manufacturing the vaccine, given the annual updating of the influenza vaccine strains. Multiple manufacturing and regulatory issues, including the anticipated need to produce a separate vaccine against novel influenza A (H1N1), also might affect the production schedule.

To ensure optimal use of available doses of influenza vaccine, health-care providers, persons planning organized campaigns, and state and local public health agencies should develop plans for expanding outreach and infrastructure to vaccinate more persons in targeted groups and others who wish to reduce their risk for influenza. They also should develop contingency plans for the timing and prioritization of administering influenza vaccine if the supply of vaccine is delayed or reduced.
If supplies of TIV are not adequate, vaccination should be carried out in accordance with local circumstances of supply and demand, based on the judgment of state and local health officials and health-care providers. Guidance for tiered use of TIV during prolonged distribution delays or supply shortfalls is available at http://www.cdc.gov/flu/professionals/vaccination/vax_priority.htm and will be modified as needed in the event of a shortage. CDC and other public health agencies will assess the vaccine supply on a continuing basis throughout the manufacturing period and will inform both providers and the general public if any indication exists of a substantial delay or an inadequate supply.

Because LAIV is recommended for use only in healthy nonpregnant persons aged 2-49 years, no recommendations for prioritization of LAIV use are made. Either LAIV or TIV can be used when considering vaccination of healthy, nonpregnant persons aged 2-49 years. However, during shortages of TIV, LAIV should be used preferentially, when feasible, for all healthy nonpregnant persons aged 2-49 years (including HCP) who desire or are recommended for vaccination, to increase the availability of inactivated vaccine for persons at high risk.

# Timing of Vaccination

Vaccination efforts should be structured to ensure the vaccination of as many persons as possible over the course of several months, with emphasis on vaccinating before influenza activity in the community begins. Even if vaccine distribution begins before October, distribution probably will not be completed until December or January. The following recommendations reflect this phased distribution of vaccine.

In any given year, the optimal time to vaccinate patients cannot be determined precisely because influenza seasons vary in their timing and duration, and more than one outbreak might occur in a single community in a single year. In the United States, localized outbreaks that indicate the start of seasonal influenza activity can occur as early as October. However, in >80% of influenza seasons since 1976, peak influenza activity (which often is close to the midpoint of influenza activity for the season) has not occurred until January or later, and in >60% of seasons, the peak was in February or later (Figure 1).

In general, health-care providers should begin offering vaccination soon after vaccine becomes available and, if possible, by October. To avoid missed opportunities for vaccination, providers should offer vaccination during routine health-care visits or during hospitalizations whenever vaccine is available. The potential addition of a novel influenza A (H1N1) vaccine program to the current burden on vaccination programs and providers underscores the need for careful planning of seasonal vaccination programs. Beginning use of seasonal vaccine as soon as it is available, including in September or earlier, might reduce the overlap of seasonal and novel influenza vaccination efforts.

Vaccination efforts should continue throughout the season because the duration of the influenza season varies and influenza might not appear in certain communities until February or March. Providers should offer influenza vaccine routinely, and organized vaccination campaigns should continue throughout the influenza season, including after influenza activity has begun in the community. Vaccine administered in December or later, even if influenza activity has already begun, is likely to be beneficial in the majority of influenza seasons.
The majority of adults have antibody protection against influenza virus infection within 2 weeks after vaccination (390,391). All children aged 6 months-8 years who have not received vaccination against influenza previously should receive their first dose as soon after vaccine becomes available as is feasible and should receive the second dose ≥4 weeks later. This practice increases the opportunity for both doses to be administered before or shortly after the onset of influenza activity. Vaccination clinics should be scheduled through December, and later if feasible, with attention to settings that serve children aged ≥6 months, pregnant women, other persons aged <50 years at increased risk for influenza-related complications, persons aged ≥50 years, HCP, and persons who are household contacts of children aged <59 months or other persons at high risk. Planners are encouraged to develop the capacity and flexibility to schedule at least one vaccination clinic in December. Guidelines for planning large-scale vaccination clinics are available at http://www.cdc.gov/flu/professionals/vaccination/vax_clinic.htm. During a vaccine shortage or delay, substantial proportions of TIV doses might not be released and distributed until November and December or later. When the vaccine is substantially delayed or disease activity has not subsided, providers should consider offering vaccination clinics into January and beyond as long as vaccine supplies are available. Campaigns using LAIV also can extend into January and beyond. # Strategies for Implementing Vaccination Recommendations in Health-Care Settings Successful vaccination programs combine publicity and education for HCP and other potential vaccine recipients, a plan for identifying persons recommended for vaccination, use of reminder/recall systems, assessment of practice-level vaccination rates with feedback to staff, and efforts to remove administrative and financial barriers that prevent persons from receiving the vaccine, including use of standing orders programs (336,392). The use of standing orders programs by long-term-care facilities (e.g., nursing homes and skilled nursing facilities), hospitals, and home health agencies ensures that vaccination is offered. Standing orders programs for influenza vaccination should be conducted under the supervision of a licensed practitioner according to a physician-approved facility or agency policy by HCP trained to screen patients for contraindications to vaccination, administer vaccine, and monitor for adverse events. The Centers for Medicare and Medicaid Services (CMS) has removed the physician signature requirement for the administration of influenza and pneumococcal vaccines to Medicare and Medicaid patients in hospitals, long-term-care facilities, and home health agencies (393). To the extent allowed by local and state law, these facilities and agencies can implement standing orders for influenza and pneumococcal vaccination of Medicare- and Medicaid-eligible patients. Payment for influenza vaccine under Medicare Part B is available (394,395). Other settings (e.g., outpatient facilities, managed care organizations, assisted living facilities, correctional facilities, pharmacies, and adult workplaces) are encouraged to introduce standing orders programs (396). In addition, physician reminders (e.g., flagging charts) and patient reminders are recognized strategies for increasing rates of influenza vaccination.
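The two-dose requirement for young children and the reminder/recall strategies described above lend themselves to simple automation in a practice's record system. The following is a minimal illustrative sketch, not drawn from any actual registry software; the record fields and function names are assumptions made for the example. It encodes two rules from the text: children aged 6 months-8 years with no previous influenza vaccination need two doses, and the second dose should follow the first by at least 4 weeks.

```python
from datetime import date, timedelta

# Minimum interval between the two pediatric doses, per the text (>= 4 weeks).
MIN_INTERVAL = timedelta(weeks=4)

def needs_second_dose(age_years, previously_vaccinated, doses_this_season):
    """Children aged 6 months-8 years receiving influenza vaccine for the
    first time need a second dose this season (hypothetical record fields)."""
    return (0.5 <= age_years <= 8
            and not previously_vaccinated
            and doses_this_season == 1)

def earliest_second_dose(first_dose_date):
    """Earliest permissible date for the second dose."""
    return first_dose_date + MIN_INTERVAL

# Example recall: a never-vaccinated 3-year-old whose first dose was October 1.
if needs_second_dose(age_years=3, previously_vaccinated=False, doses_this_season=1):
    print("Recall on or after", earliest_second_dose(date(2009, 10, 1)))
```

A reminder/recall pass over practice records could apply the same test to flag families for mail or telephone contact, consistent with the strategies cited above.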
Persons for whom influenza vaccine is recommended can be identified and vaccinated in the settings described in the following sections. # Outpatient Facilities Providing Ongoing Care Staff in facilities providing ongoing medical care (e.g., physicians' offices, public health clinics, employee health clinics, hemodialysis centers, hospital specialty-care clinics, and outpatient rehabilitation programs) should identify and label the medical records of patients who should receive vaccination. Vaccine should be offered during visits throughout the influenza season. The offer of vaccination and its receipt or refusal should be documented in the medical record or vaccination information system. Patients for whom vaccination is recommended and who do not have regularly scheduled visits during the fall should be reminded by mail, telephone, or other means of the need for vaccination. # Outpatient Facilities Providing Episodic or Acute Care Acute health-care facilities (e.g., EDs and walk-in clinics) should offer vaccinations throughout the influenza season to persons for whom vaccination is recommended or provide written information regarding why, where, and how to obtain the vaccine. This written information should be available in languages appropriate for the populations served by the facility. # Nursing Homes and Other Long-Term-Care Facilities Vaccination should be provided routinely to all residents of long-term-care facilities. If possible, all residents should be vaccinated at one time before influenza season. In the majority of seasons, TIV will become available to long-term-care facilities in October or November, and vaccination should commence as soon as vaccine is available. As soon as possible after admission to the facility, the benefits and risks of vaccination should be discussed and education materials provided (397). Signed consent is not required (398). Residents admitted after completion of the vaccination program at the facility should be vaccinated at the time of admission. Since October 2005, CMS has required nursing homes participating in the Medicare and Medicaid programs to offer all residents influenza and pneumococcal vaccines and to document the results. According to the requirements, each resident is to be vaccinated unless vaccination is contraindicated medically, the resident or a legal representative refuses vaccination, or the vaccine is not available because of shortage. This information is to be reported as part of the CMS Minimum Data Set, which tracks nursing home health parameters (395,399). # Acute-Care Hospitals Hospitals should serve as a key setting for identifying persons at increased risk for influenza complications. Unvaccinated persons of all ages (including children) with high-risk conditions and persons aged 6 months-18 years or ≥50 years who are hospitalized at any time during the period when vaccine is available should be offered and strongly encouraged to receive influenza vaccine before they are discharged. Standing orders to offer influenza vaccination to all hospitalized persons should be considered. # Visiting Nurses and Others Providing Home Care to Persons at High Risk Nursing-care plans should identify patients for whom vaccination is recommended, and vaccine should be administered in the home if necessary as soon as influenza vaccine is available and throughout the influenza season. Caregivers and other persons in the household (including children) should be referred for vaccination.
# Other Facilities Providing Services to Persons Aged ≥50 Years Facilities providing services to persons aged ≥50 years (e.g., assisted living housing, retirement communities, and recreation centers) should offer unvaccinated residents, attendees, and staff annual on-site vaccination before the start of the influenza season. Continuing to offer vaccination throughout the fall and winter months is appropriate. Efforts to vaccinate newly admitted patients or new employees also should be continued, both to prevent illness and to avoid having these persons serve as a source of new influenza infections. Staff education should emphasize the benefits, for self, staff, and patients, of protection from influenza through vaccination. # Health-Care Personnel Health-care facilities should offer influenza vaccinations to all HCP, including night, weekend, and temporary staff. Particular emphasis should be placed on providing vaccinations to workers who provide direct care for persons at high risk for influenza complications. Efforts should be made to educate HCP regarding the benefits of vaccination and the potential health consequences of influenza illness for their patients, themselves, and their family members. All HCP should be provided convenient access to influenza vaccine at the work site, free of charge, as part of employee health programs (360,374,375). # Future Directions for Research and Recommendations Related to Influenza Vaccine Although available influenza vaccines are effective and safe, additional research is needed to improve prevention efforts. Most mortality from influenza occurs among persons aged ≥65 years (6), and more immunogenic influenza vaccines are needed for this age group and other groups at high risk for mortality. Additional research also is needed to understand potential biases in estimating the benefits of vaccination among older adults in reducing hospitalizations and deaths (82,175,400). Additional studies of the relative cost-effectiveness and cost utility of influenza vaccination among children and adults, especially those aged <65 years, are needed and should be designed to account for year-to-year variations in influenza attack rates, illness severity, hospitalization costs and rates, and vaccine effectiveness when evaluating the long-term costs and benefits of annual vaccination (401). Additional data on the indirect effects of vaccination also are needed to quantify the benefits of influenza vaccination of HCP in protecting their patients (308) and the benefits of vaccinating children to reduce influenza complications among those at risk. Because expansions in ACIP recommendations for vaccination will lead to more persons being vaccinated, much larger research networks are needed that can identify and assess the causality of very rare events that occur after vaccination, including GBS. Ongoing studies of safety in pediatric populations with expanded recommendations are needed and are underway. These research networks also could provide a platform for effectiveness and safety studies in the event of a pandemic. A recent study showed that influenza vaccines contain structures that can induce anti-GM1 antibodies after inoculation into mice (402). Further research on potential biologic or genetic risk factors for GBS in humans also is needed (397). In addition, a better understanding is needed of how to motivate persons at risk to seek annual influenza vaccination.
ACIP continues to review new vaccination strategies to protect against influenza, including the possibility of expanding routine influenza vaccination recommendations toward universal vaccination or other approaches that will help reduce or prevent the transmission of influenza and reduce the burden of severe disease (403-408). The 2009 ACIP expansion of annual vaccination recommendations to include all children aged 6 months-18 years will require a substantial increase in resources for epidemiologic research to develop long-term studies capable of assessing the possible effects on community-level transmission. Additional planning to improve surveillance systems capable of monitoring effectiveness, safety, and vaccine coverage, and further development of implementation strategies, will also be necessary. In addition, as noted by the National Vaccine Advisory Committee, strengthening the U.S. influenza vaccination system will require improving vaccine financing and demand and implementing systems to help better understand the burden of influenza in the United States (409). Vaccination programs capable of delivering annual influenza vaccination to a broad range of the population could potentially serve as a resilient and sustainable platform for delivering vaccines and monitoring outcomes for other urgently required public health interventions (e.g., vaccines for pandemic influenza or medications to prevent or treat illnesses caused by acts of terrorism). # Seasonal Influenza Vaccine and Influenza Viruses of Animal Origin Human infection with novel or nonhuman influenza A virus strains, including influenza A viruses of animal origin, is a nationally notifiable disease (410). Human infections with nonhuman or novel human influenza A viruses should be identified quickly and investigated to determine possible sources of exposure, identify additional cases, and evaluate the possibility of human-to-human transmission, because transmission patterns could change over time with variations in these influenza A viruses. Sporadic severe and fatal human cases of infection with highly pathogenic avian influenza A (H5N1) virus have been identified in Asia, Africa, Europe, and the Middle East, primarily among persons who have had direct or close unprotected contact with sick or dead birds associated with the ongoing H5N1 panzootic among birds (411-419). Limited, nonsustained human-to-human transmission of H5N1 virus has likely occurred in some case clusters (420,421). To date, no evidence exists of genetic reassortment between human influenza A and H5N1 viruses. However, influenza viruses derived from strains circulating among poultry (e.g., the H5N1 virus that has caused outbreaks of avian influenza and has occasionally infected humans) have the potential to recombine with human influenza A viruses (422,423). To date, highly pathogenic H5N1 virus has not been identified in wild or domestic birds or in humans in the United States. Guidance for testing suspected cases of H5N1 virus infection among persons in the United States and follow-up of contacts is available (424,425). Human illness from infection with other avian influenza A subtype viruses also has been documented, including infections with low pathogenic and highly pathogenic viruses.
A range of clinical illness has been reported for human infection with low pathogenic avian influenza viruses, including conjunctivitis with influenza A (H7N7) virus in the United Kingdom, lower respiratory tract disease and conjunctivitis with influenza A (H7N2) virus in the United Kingdom, and uncomplicated ILI with influenza A (H9N2) virus in Hong Kong and China (426-432). Two human cases of infection with low pathogenic influenza A (H7N2) virus were reported in the United States (429). Although human infections with highly pathogenic A (H7N7) virus typically have caused ILI or conjunctivitis, severe infections, including one fatal case in the Netherlands, have been reported (433,434). Conjunctivitis also has been reported as a result of human infection with highly pathogenic influenza A (H7N3) virus in Canada and low pathogenic A (H7N3) virus in the United Kingdom (426,434). In contrast, sporadic infections with highly pathogenic avian influenza A (H5N1) virus have caused severe illness in many countries, with an overall case-fatality proportion of >60% (421,435). Swine influenza A (H1N1), A (H1N2), and A (H3N2) viruses, including reassortant viruses, are endemic among pig populations in the United States (436). Two clusters of influenza A (H2N3) virus infections among pigs have been reported recently (437). Outbreaks among pigs normally occur in colder weather months (late fall and winter) and sometimes with the introduction of new pigs into susceptible herds. An estimated 30% of the pig population in the United States has serologic evidence of having had swine influenza A (H1N1) virus infection. Sporadic human infections with a variety of swine influenza A viruses occur in the United States, but the incidence of these human infections is unknown (438-443). Persons infected with swine influenza A viruses typically report direct contact with ill pigs or places where pigs have been present (e.g., agricultural fairs or farms) and have symptoms that are clinically indistinguishable from infection with other respiratory viruses (440,441,444,445). Clinicians should consider swine influenza A virus infection in the differential diagnosis of patients with ILI who have had recent contact with pigs. The sporadic cases identified in recent years have not resulted in sustained human-to-human transmission of swine influenza A viruses or community outbreaks (368,445). Although immunity to swine influenza A viruses appears to be low (<2%) in the overall human population, 10%-20% of persons exposed occupationally to pigs (e.g., pig farmers or pig veterinarians) have been documented in certain studies to have antibody evidence of prior swine influenza A (H1N1) virus infection (438,446). In April 2009, a novel influenza A (H1N1) virus similar to influenza viruses previously identified in swine was determined to be the cause of an influenza-like respiratory illness among humans that spread across North America and throughout most of the world by May 2009 (9,447). The epidemiology of influenza caused by this novel influenza virus is still being studied, and whether this virus will achieve long-term circulation among humans or even replace one of the other seasonal influenza viruses as the cause of annual epidemics is unknown.
Current seasonal influenza vaccines are not expected to provide protection against human infection with avian influenza A viruses, including influenza A (H5N1) viruses, or to provide protection against currently circulating swine influenza A or the novel influenza A (H1N1) viruses (318,448). However, reducing seasonal influenza risk through influenza vaccination of persons who might be exposed to nonhuman influenza viruses (e.g., H5N1 virus) might reduce the theoretical risk for recombination of influenza A viruses of animal origin and human influenza A viruses by preventing seasonal influenza A virus infection within a human host. CDC has recommended that persons who are charged with responding to avian influenza outbreaks among poultry receive seasonal influenza vaccination (448,449). As part of preparedness activities, the Occupational Safety and Health Administration (OSHA) has issued an advisory notice regarding poultry worker safety that is intended for implementation in the event of a suspected or confirmed avian influenza outbreak at a poultry facility in the United States. OSHA guidelines recommend that poultry workers in an involved facility receive vaccination against seasonal influenza; OSHA also has recommended that HCP involved in the care of patients with documented or suspected avian influenza be vaccinated with the most recent seasonal human influenza vaccine to reduce the risk for co-infection with human influenza A viruses (449). # Recommendations for Using Antiviral Agents for Seasonal Influenza Annual vaccination is the primary strategy for preventing complications of influenza virus infections. Antiviral medications with activity against influenza viruses are useful adjuncts in the prevention of influenza and are effective when used early in the course of illness for treatment. Four influenza antiviral agents are licensed in the United States: amantadine, rimantadine, zanamivir, and oseltamivir. During the 2007-08 influenza season, influenza A (H1N1) viruses with a mutation that confers resistance to oseltamivir became more common in the United States and other countries (450-452). As of July 2009, in the United States, approximately 99% of seasonal human influenza A (H1N1) viruses tested have been resistant to oseltamivir, whereas none of the influenza A (H3N2) or influenza B viruses tested have been. As of July 2, 2009, with few exceptions, the novel influenza A (H1N1) viruses that began circulating in April 2009 remained sensitive to oseltamivir (453). Oseltamivir resistance among circulating seasonal influenza A (H1N1) virus strains presents challenges for the selection of antiviral medications for treatment and chemoprophylaxis of influenza, and it provides additional reasons for clinicians to test patients for influenza virus infection and to consult surveillance data when evaluating persons with acute respiratory illnesses during influenza season. CDC has published interim guidelines to provide options for treatment or chemoprophylaxis of influenza in the United States if oseltamivir-resistant seasonal influenza A (H1N1) viruses are circulating widely in a community or if the prevalence of oseltamivir-resistant influenza A (H1N1) viruses is uncertain (8). Updated guidance on antiviral use will be available from ACIP before the start of the 2009-10 influenza season. This guidance will include a summary of antiviral resistance data from the 2008-09 influenza season and will be published separately from the vaccination recommendations.
Until the ACIP recommendations for use of antivirals against influenza are published, CDC's previously published recommendations for use of influenza antiviral medications should be consulted for guidance on antiviral use (8). New guidance on clinical management of influenza, including use of antivirals, also is available from the Infectious Diseases Society of America (454). # Sources of Information Regarding Influenza and its Surveillance Information regarding influenza surveillance, prevention, detection, and control is available at http://www.cdc.gov/flu. During October-May, surveillance information is updated weekly. In addition, periodic updates regarding influenza are published in MMWR (http://www.cdc.gov/mmwr). Additional information regarding influenza vaccine can be obtained by calling 1-800-CDC-INFO (1-800-232-4636). State and local health departments should be consulted about availability of influenza vaccine, access to vaccination programs, information related to state or local influenza activity, reporting of influenza outbreaks and influenza-related pediatric deaths, and advice concerning outbreak control. # Vaccine Adverse Event Reporting System (VAERS) Clinically significant adverse events that follow influenza vaccination should be reported promptly to VAERS at http://vaers.hhs.gov, even if the reporter is unsure whether the vaccine caused the event. Reports may be filed securely online or by telephone at 1-800-822-7967 to request reporting forms or other assistance. # National Vaccine Injury Compensation Program The National Vaccine Injury Compensation Program (VICP), established by the National Childhood Vaccine Injury Act of 1986, as amended, provides a mechanism through which compensation can be paid on behalf of a person determined to have been injured or to have died as a result of receiving a vaccine covered by VICP. The Vaccine Injury Table lists the vaccines covered by VICP and the injuries and conditions (including death) for which compensation might be paid. If the injury or condition is not on the Table, or does not occur within the specified time period on the Table, persons must prove that the vaccine caused the injury or condition. For a person to be eligible for compensation, the general filing deadlines for injuries require claims to be filed within 3 years after the first symptom of the vaccine injury; for a death, claims must be filed within 2 years of the vaccine-related death and not more than 4 years after the start of the first symptom of the vaccine-related injury from which the death occurred. When a new vaccine is covered by VICP or when a new injury/condition is added to the Table, claims that do not meet the general filing deadlines must be filed within 2 years from the date the vaccine or injury/condition is added to the Table for injuries or deaths that occurred up to 8 years before the Table change. Persons of all ages who receive a VICP-covered vaccine might be eligible to file a claim. Both the intranasal (LAIV) and injectable (TIV) trivalent influenza vaccines are covered under VICP. Additional information about VICP is available at http://www.hrsa.gov/vaccinecompensation or by calling 1-800-338-2382. # Additional Information Regarding Influenza Virus Infection Control Among Specific Populations Each year, ACIP provides general, annually updated information regarding control and prevention of influenza.
Other reports related to controlling and preventing influenza among specific populations (e.g., immunocompromised persons, HCP, hospital patients, pregnant women, children, and travelers) also are available in other CDC publications.
CDC convened a workshop in Atlanta on March 26-28, 1991, to discuss chlamydia prevention. The following persons participated in the workshop and provided expert technical and scientific review:# INTRODUCTION Chlamydia trachomatis infections are common in sexually active adolescents and young adults in the United States (CDC, unpublished review). More than 4 million chlamydial infections occur annually (2,3). Infection by this organism is insidious: symptoms are absent or minor among most infected women and many men. This large group of asymptomatic and infectious persons sustains transmission within a community. In addition, these persons are at risk for acute illness and serious long-term sequelae. The direct and indirect costs of chlamydial illness exceed $2.4 billion annually (2-4)*. Until recently, chlamydia prevention and patient care were impeded by the lack of suitable laboratory tests for screening and diagnosis. Such tests are now available. Through education, screening, partner referral, and proper patient care, public health workers and health-care practitioners can combine efforts to decrease the morbidity and costs resulting from this infection. # Prevalence of Chlamydial Infection Adolescents and young adults are at substantial risk of becoming infected with chlamydia. Unrecognized infection is highly prevalent in this group (CDC, unpublished review). In the United States, published studies of sexually active females screened during visits to health-care providers indicate that age is the sociodemographic factor most strongly associated with chlamydial infection. Prevalence has been highest (>10%) among sexually active, adolescent females (CDC, unpublished review). The prevalence of chlamydial infection also has been higher among those patients who live in inner cities, have a lower socioeconomic status, or are black (5-11). Although prevalences have been higher in these subgroups, with few exceptions prevalences are ≥5% regardless of region of the country, urban/rural location of provider, or race/ethnicity (CDC, unpublished review). Fewer screening studies have been reported for men, but prevalences have been >5% among young men seeking health care for reasons other than genitourinary tract problems. Published reports of these studies from North America and Europe have included male high school students (12), military personnel (13,14), semen donors (15,16), adolescents in detention centers (17-20), teens attending adolescent clinics (17,20), adolescents attending university health centers (20,21), and patients in chemical dependency units (22). # Clinical Spectrum of Chlamydial Infection Among Nonpregnant Women Pelvic inflammatory disease (PID) accounts for most of the serious acute illness, morbidity, and economic cost resulting from chlamydial infection. Treating women with acute, symptomatic PID is not a sufficient prevention strategy because treatment does not always prevent sequelae and because many women with tubal infection have symptoms too mild or too nonspecific for them to seek treatment. A woman's exposure to chlamydia is usually a result of sexual intercourse. The site of initial infection is most often the cervix; the urethra and the rectum also may be infected (23-28). Chlamydia causes symptoms in a minority of infected women (9,29-38). When symptoms occur, they include vaginal discharge and dysuria (9,23,25-27,29-33,35-38).
The ascension of lower genitourinary tract infection to the endometrium and fallopian tubes may cause lower abdominal pain and menstrual abnormalities. Untreated infections among women often persist for months (7,34,39-41). During this period, complications may develop, and many of these women transmit their infection to others. The proportion of women with chlamydial infection who develop infection of the upper reproductive tract (endometritis, salpingitis, and pelvic peritonitis) is uncertain; however, in one study an estimated 8% of women with chlamydia also had overt salpingitis (42). A second study of women with dual gonococcal and chlamydial infections who were treated only for gonorrhea revealed that 30% developed salpingitis during follow-up (43). Chlamydia, alone or with other microorganisms, has been isolated from 5% to 50% of women seeking care for symptoms of PID (CDC, unpublished review). Symptomatic PID prompts 2.5 million outpatient visits to physicians annually (44,45). More than 275,000 women are hospitalized and more than 100,000 surgical procedures are performed yearly because of PID (45). Both the diagnosis and the treatment of PID are unsatisfactory: approximately 17% of women treated for PID will be infertile; an equal proportion will experience chronic pelvic pain as a result of infection (46,47); and 10% of those who do conceive will have an ectopic pregnancy (48). The importance of undetected, untreated fallopian tube infections as a cause of infertility and ectopic pregnancy is clear. In two studies of chlamydia and PID in infertile women, PID was the cause of the infertility in half of the women, and anti-chlamydial antibody was strongly associated with a tubal etiology for infertility (49,50). Of concern in these studies was the small proportion of women with tubal-factor infertility who reported a past history of salpingitis. Other studies indicate that many of these women may have had salpingitis but were not treated because symptoms were absent or nonspecific (49,51-60). Unrecognized PID also may be a contributor to the occurrence of ectopic pregnancies (60,61). Because chlamydial infections are not usually associated with overt symptoms, prevention of infection is the most effective means of preventing sequelae. Pregnant women with chlamydial infection are at risk for postpartum PID. Postpartum and perinatal disease are preventable by treating infected pregnant women and their sex partners. Endometritis, and possibly salpingitis, developed among 10%-28% of pregnant women with untreated chlamydial infection who underwent induced abortions (27,62-64). Similarly, endometritis may develop during the late postpartum period among 19%-34% of infected pregnant women who deliver vaginally and at term (65,66). # Clinical Spectrum of Chlamydial Infection Among Infants Nearly two-thirds of the infants born vaginally to mothers with chlamydial infection become infected during delivery (67,68). Even after ophthalmia prophylaxis with silver nitrate or antibiotic ointment, 15%-25% of infants exposed to chlamydia developed chlamydia conjunctivitis, and 3%-16% developed chlamydia pneumonia (41,68-73). C. trachomatis is the most common cause of neonatal conjunctivitis (72,74-78) and is one of the most common causes of pneumonia during the first few months of life (79-85).
Infants with chlamydia pneumonia are at increased risk for abnormal pulmonary function tests later in childhood (86-88). # Clinical Spectrum of Chlamydial Infection Among Men Although urethritis is the most common illness resulting from chlamydia, chlamydial infections among men rarely result in sequelae. However, asymptomatic infected men may unknowingly infect their sex partner(s) before seeking treatment. Chlamydial infections among heterosexual men are usually urethral. Symptoms are similar to those of gonorrhea (e.g., urethral discharge or dysuria). In contrast to gonorrheal urethritis, chlamydia symptoms are often absent or mild (13-22,89). Therefore, the number of heterosexual men with asymptomatic chlamydial infections is larger than the number of such men with gonorrhea. Chlamydial infections of the lower genitourinary tract account for 30%-40% of the 4-6 million physician visits for nongonococcal urethritis (NGU), and 50% of the 158,000 outpatient visits and 7,000 hospitalizations for epididymitis among adolescent and young adult males (2,3,90,91). Chlamydial infections among men readily respond to treatment with antibiotics. Most of the morbidity and economic costs of chlamydial infections among heterosexual men result from infection of female sex partners who develop sequelae (3). The rectum is a common site of initial chlamydial infection for men who engage in receptive anal intercourse. Rectal infections are generally asymptomatic but may cause symptoms characteristic of proctitis (e.g., rectal discharge, pain during defecation) or proctocolitis (92-96). # Other Chlamydial Illnesses Among adolescents and young adults of both sexes, chlamydial infection is an important part of the differential diagnosis for other, less common, illnesses. Chlamydia salpingitis may progress to perihepatitis (the Fitz-Hugh-Curtis syndrome) (97). Although chlamydia is an uncommon cause of cystitis symptoms among female patients, chlamydia has been an important cause of such symptoms in some groups of young women who have pyuria but sterile urine cultures (acute dysuria-pyuria syndrome or urethral syndrome) (5,38,98). Another important, but uncommon, complication of urogenital chlamydial infection is Reiter's syndrome (reactive arthritis, conjunctivitis, and urethritis), which occurs primarily among men. Chlamydia is receiving greater emphasis in the differential diagnosis of arthritis in young women because of the availability of diagnostic tests for chlamydia. Chlamydial infection is more likely to be missed among women than among men who have arthritis because of the less obvious role of urethritis. Chlamydia is also part of the differential diagnosis of chronic conjunctivitis among adolescents and young adults (99-102). Ocular or ophthalmic infections may result from exposure to infectious genital secretions during oral-genital sexual contact or by autoinoculation. Although chlamydia can be detected in the pharynx after inoculation from oral-genital exposure (103-111), chlamydia has not been established as a cause of pharyngitis (103,104,110-114). # PREVENTION STRATEGIES The principal goal of chlamydia prevention strategies is to prevent both overt and silent chlamydia salpingitis and its sequelae. Other goals include the prevention of perinatal and postpartum infection and other adverse consequences of chlamydial infection.
# General Approach The prevention of chlamydia salpingitis, pregnancy-related complications, and other chlamydial illnesses requires that chlamydia prevention programs include both primary and secondary prevention strategies. # Primary Prevention Strategies Primary prevention strategies are efforts to prevent chlamydial infection. Primary prevention of chlamydia can be accomplished in two general ways. - Behavioral changes that reduce the risk of acquiring or transmitting infection should be promoted (e.g., delaying age at first intercourse, decreasing the number of sex partners, partner selection, and the use of barrier contraception). Efforts to effect behavioral changes are not specific to chlamydia prevention but are also critical components in preventing sexual transmission of the human immunodeficiency virus (HIV) and other sexually transmitted diseases (STDs) (45,115-117). - Persons with genital chlamydial infection should be identified and treated before they infect their sex partners and, in the case of pregnant women, before they infect their babies. Efforts to detect chlamydial infection are essential to chlamydia prevention. Identifying and treating chlamydial infections require active screening and referral of sex partners of infected persons, since infections among women and men are usually asymptomatic. # Secondary Prevention Strategies Secondary prevention strategies are efforts to prevent complications among persons infected with chlamydia. The most important complication to be prevented is salpingitis and its potential sequelae (i.e., ectopic pregnancy, tubal infertility, and chronic pelvic pain). Secondary prevention of chlamydia salpingitis can be accomplished by a) screening women to identify and treat asymptomatic chlamydial infection; b) treating the female partners of men with infection; and c) recognizing clinical conditions such as mucopurulent cervicitis (MPC) and the urethral syndrome, and then applying appropriate chlamydia diagnostic tests and treatment. # Target Population Chlamydial infection is especially prevalent among adolescents. Furthermore, PID occurs more commonly after chlamydial infection among adolescent females than among older women (42). Therefore, chlamydia prevention efforts should be directed toward young women. All sexually active adolescents and young adults are at high risk for chlamydia; the infection is broadly distributed geographically and socioeconomically. Therefore, chlamydia prevention programs should target all sexually active adolescents and young adults. All private and public health-care providers should be involved in these prevention efforts. # Specific Strategies Specific strategies for the prevention of chlamydia are grouped into two categories: a) community-based strategies, and b) health-care provider strategies. # Community-Based Strategies Since the prevalence of chlamydia is consistently high among adolescents and young adults regardless of socioeconomic status, race, or geographic location, prevention efforts should be implemented communitywide. Public Awareness. Community-based strategies should increase public awareness of chlamydia, its consequences, and the availability and importance of diagnosis and treatment.
Groups at high risk for chlamydial infection and the persons who educate and care for them (e.g., parents, teachers, and health-care providers) must be informed about the high rate of genital chlamydial infection and its sequelae among sexually active adolescents and young adults. HIV/STD Risk Reduction Programs. Programs designed to reduce the risk of sexual transmission of HIV and other STDs by means of behavioral changes should emphasize the especially high risk of chlamydial infection. In addition, chlamydial infection may be a sentinel for unsafe sexual practices. Concern about chlamydia may provide additional motivation for persons to delay initiation of sexual activity, limit the number of sex partners, avoid sex partners at increased risk for STDs, and use condoms. Schools. Because of their access to adolescents, educators have an important role to play in chlamydia prevention programs. Chlamydia-specific material should be integrated into educational curricula that address HIV and other STDs. In addition, school programs should assist students in developing the social and behavioral skills needed to avoid chlamydial infection, HIV, and other STDs. Although most school health education curricula address HIV, fewer discuss other STDs (including chlamydia). Information that should be provided through school health education programs is listed below: - Rates of chlamydial infection among adolescents - Adverse consequences of chlamydia (e.g., PID and infertility) - Symptoms and signs of chlamydial infection (and other STDs) - Asymptomatic infection - Treatment for sex partners - Where and how to obtain health care (including locations, telephone numbers, costs, and issues of confidentiality) Some schools may offer more than classroom instruction (e.g., access to health care for infected persons and screening programs to identify asymptomatic chlamydial infection). Personnel in school-based clinics that perform pelvic examinations should use the opportunity to test for chlamydial infection. However, tests also are needed to identify asymptomatic males (e.g., by screening urine collected during sports physical examinations or other health-screening programs). The leukocyte esterase test (LET) of urine is one possible method for testing males, but the accuracy of the test requires additional evaluation (118). Such approaches to identifying and treating young men with asymptomatic chlamydial infection may prove important since these persons may account for most of the transmission of chlamydia to young women (see Laboratory Testing and Patient Care: An Expanded Role for the Use of Chlamydia Tests). Out-of-School Adolescents. The prevalence of chlamydia may be even greater among adolescents who have dropped out of school. Therefore, organizations serving these adolescents (e.g., Job Corps, vocational training centers, detention centers, and community-based recreational programs) should offer health care that addresses chlamydial infection as part of STD/HIV risk reduction programs. # Health-Care Provider Strategies Reducing the high prevalence of chlamydial infection requires that health-care providers be aware of the high prevalence of chlamydia and recognize chlamydial illness, screen asymptomatic patients, arrange for the treatment of sex partners, and counsel all sexually active patients about the risks of STD infections.
Medical providers should be trained to recognize and manage the following conditions that may be caused by chlamydia: MPC, PID, urethral syndrome (women), and urethritis and epididymitis (men). Screening. The screening of women for chlamydial infection is a critical component of a chlamydia prevention program since many women are asymptomatic and the infection may persist for extended periods of time. Many women of reproductive age undergo pelvic examination during visits for routine health care or because of illness. During these examinations, specimens can be obtained for chlamydia screening tests. Female patients of adolescent-care providers, women undergoing induced abortion, women attending STD clinics, and women in detention facilities should be screened for chlamydial infection. Screening of these women is important because a) many are adolescents or young adults, b) they are at high risk for salpingitis, and c) they or their partners are likely to transmit infection. Chlamydia screening at family planning and prenatal care clinics is particularly cost-effective because of the large number of sexually active young women who undergo pelvic examinations. Providers such as family physicians, internists, obstetricians-gynecologists, and pediatricians who provide care for sexually active young women also should implement chlamydia screening programs, although a lower volume of such patients may increase the cost of testing. The following criteria can help identify women who should be tested for chlamydia (a schematic rendering of these criteria appears in the sketch below): - Women with MPC - Sexually active women <20 years of age - Women 20-24 years of age who meet either criterion, and women >24 years of age who meet both criteria: inconsistent use of barrier contraception, or new or more than one sex partner during the last 3 months. Patient selection criteria should be evaluated periodically. Although the incidence of chlamydial infection among all women previously tested for chlamydia is unknown, the incidence of chlamydial infection among previously infected adolescent females has been as high as 39% (118). Because of the high incidence of chlamydial infection among sexually active adolescent females, recommendations for the frequency of testing are listed below: - Women <20 years of age should be tested when undergoing a pelvic examination, unless sexual activity since the last test for chlamydia has been limited to a single, mutually monogamous partner. - All other women who meet the suggested screening criteria (listed above) should be tested for chlamydia annually. Although young men infrequently seek routine health care, medical providers should use such opportunities to evaluate them for asymptomatic chlamydial infection, possibly by means of the LET (119) (see Laboratory Testing and Patient Care: An Expanded Role for the Use of Chlamydia Tests). Treatment of Sex Partners. Treatment of sex partners of infected persons is an important strategy for reaching large numbers of men and women with asymptomatic chlamydial infection. Also, if partners are not treated, reinfection may occur. In addition, treating the male partners of infected women is critical since this is the principal way to eliminate asymptomatic infection among males. If chlamydia screening is widely implemented, the number of infected women identified may exceed the capacity of some public health systems to notify, evaluate, and treat partners. Therefore, health department personnel should assist health-care providers in developing cooperative approaches to refer partners for treatment.
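Returning to the screening criteria listed above: they amount to a small decision rule, rendered schematically in the following sketch. The function and parameter names are illustrative assumptions made for this example, not part of these recommendations, and the category boundaries simply follow the bulleted list.

```python
def meets_screening_criteria(age, has_mpc, inconsistent_barrier_use,
                             new_or_multiple_partners_3mo):
    """Illustrative rendering of the testing criteria for sexually active
    women; parameter names are assumptions made for this sketch."""
    if has_mpc:
        return True                       # MPC alone is an indication
    if age < 20:
        return True                       # all sexually active women <20
    behavioral = (inconsistent_barrier_use, new_or_multiple_partners_3mo)
    if 20 <= age <= 24:
        return any(behavioral)            # either criterion suffices
    return all(behavioral)                # women >24 must meet both

# Example: a 22-year-old with a new partner but consistent barrier use
# meets the criteria; a 30-year-old with the same history does not.
print(meets_screening_criteria(22, False, False, True))  # True
print(meets_screening_criteria(30, False, False, True))  # False
```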
Where possible, health-care providers who treat female patients for chlamydia should offer examination and treatment services for the patients' male sex partner(s), or should arrange the appropriate referral of such partners. # Risk Reduction Counseling. In addition to screening, treatment, and referral of sex partner(s) of persons with chlamydial infection, health-care providers should: - Educate sexually active patients regarding HIV and other STDs, - Assess the patients' risk factors for infection, - Offer at-risk patients advice about behavior changes to reduce the risk of infection, and - Encourage the use of condoms. Preventing Chlamydial Infection During Pregnancy. To prevent maternal postnatal complications and chlamydial infection among infants, pregnant women should be screened for chlamydia during the third trimester, so that treatment, if needed, will be completed before delivery (see Primary Prevention Strategies). The screening criteria already discussed can identify those at higher risk for infection. Screening during the first trimester prevents transmission of the infection and adverse effects of chlamydia during the pregnancy; however, the evidence for adverse effects during pregnancy is minimal, and if screening is performed only during the first trimester, a longer period exists for infection before delivery. Infants with chlamydial infections respond readily to treatment; morbidity can be limited by the early diagnosis and systemic treatment of infants who have conjunctivitis and pneumonia caused by chlamydial infection (secondary prevention strategies). Further, the mothers of infants diagnosed with chlamydial infection and the sex partner(s) of those mothers should be evaluated and treated. # LABORATORY TESTING Diagnostic test manufacturers have introduced a variety of nonculture tests for chlamydia, including enzyme immunoassays (EIAs) to detect chlamydia antigens, fluorescein-conjugated monoclonal antibodies for the direct visualization of chlamydia elementary bodies on smears, nucleic acid hybridization tests, and rapid (stat) tests. Because nonculture tests do not require strict handling of specimens, they are easier to perform and less expensive than culture tests; consequently, the numbers of laboratories and health-care providers offering chlamydia testing have increased. The expanded use of nonculture tests is a cornerstone of chlamydia prevention strategies. Although nonculture tests have advantages that make them more suitable than cell culture tests for widespread screening programs, nonculture tests also have limitations. In particular, nonculture tests are less specific than culture tests and may produce false-positive results. All positive nonculture results should be interpreted as presumptive infection until verified by culture or another nonculture test. The decision to treat and perform additional tests should be based on the specific clinical situation. The test methodologies discussed in these recommendations are for the detection of Chlamydia trachomatis, not other chlamydia species (e.g., C. psittaci and C. pneumoniae). A number of commercial products for the detection of C. trachomatis are available; however, most information relating to the performance of these products is the manufacturers' data. When possible, health-care providers and laboratory staff should compare a potential nonculture test's performance with that of an appropriate standard in their own laboratory.
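The caution that positive nonculture results be treated as presumptive follows directly from test arithmetic: even a fairly specific test yields many false positives when prevalence is low. The short calculation below uses illustrative figures (80% sensitivity, 97% specificity) chosen within the performance ranges discussed later in this section; they are not measured values for any particular product.

```python
def ppv(sensitivity, specificity, prevalence):
    """Positive predictive value: the share of positive results that are true."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

for prev in (0.20, 0.10, 0.05, 0.02):
    print(f"prevalence {prev:4.0%}: PPV = {ppv(0.80, 0.97, prev):.0%}")
# prevalence  20%: PPV = 87%
# prevalence  10%: PPV = 75%
# prevalence   5%: PPV = 58%
# prevalence   2%: PPV = 35%
```

Under these assumptions, at a 5% prevalence roughly two of every five positive results would be false, which is why verification of positive nonculture results is emphasized before they are treated as diagnostic.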
# Specimens for Screening The proper collection and handling of specimens are important in all the methods used to identify chlamydia. Even diagnostic tests with the highest performance ratings cannot produce accurate results when specimens submitted to the laboratory are improperly collected. Clinicians require training and periodic assessment to maintain proper technique. Because chlamydia are obligate intracellular organisms that infect the columnar epithelium, the objective of specimen collection procedures is to obtain columnar epithelial cells from the endocervix or the urethra. The following recommendations for specimen collection apply to all screening tests. # Preferred Anatomic Sites The endocervix is the preferred anatomic site for collecting screening specimens from women. When culture isolation is to be used, processing an additional specimen from the urethra may increase sensitivity by 23% (120). Placing the two specimens in the same transport container is acceptable. For nonculture tests, the usefulness of a second specimen from the urethra has not been determined. The urethra is the preferred site for collecting screening specimens from men. # Collecting Specimens The following guidelines are recommended for obtaining endocervical specimens: - Obtain specimens for chlamydia tests after obtaining specimens for gram-stained smear, Neisseria gonorrhoeae culture, or Papanicolaou smear. - Before obtaining a specimen for a chlamydia test, use a sponge or large swab to remove all secretions and discharge from the cervical os. - For nonculture chlamydia tests, use the swab supplied or specified by the manufacturer of the test. - Insert the appropriate swab or endocervical brush 1-2 cm into the endocervical canal (i.e., past the squamocolumnar junction). Rotate the swab against the wall of the endocervical canal several times for 10-30 seconds. Withdraw the swab without touching any vaginal surfaces and place it in the appropriate transport medium (for culture, EIA, or deoxyribonucleic acid [DNA] probe testing) or prepare a slide (for direct fluorescent antibody [DFA] testing). The following guidelines are recommended for obtaining urethral specimens: - Delay obtaining specimens until 2 hours after the patient has voided. - Obtain specimens for chlamydia tests after obtaining specimens for gram-stained smear or N. gonorrhoeae culture. - For nonculture chlamydia tests, use the swab supplied or specified by the manufacturer. - Gently insert the urogenital swab into the urethra (females: 1-2 cm, males: 2-4 cm). Rotate the swab in one direction for at least one revolution for 5 seconds. Withdraw the swab and place it in the appropriate transport medium (for culture, EIA, or DNA probe testing) or use the swab to prepare a slide for DFA testing. # Quality Assurance of Specimen Collection Without specimen quality assurance, ≥10% of specimens are likely to be unsatisfactory because they contain secretions or exudate but lack urethral or endocervical columnar cells (6,121,122). Periodic cytologic evaluation of specimen quality is recommended when using non-DFA tests to ensure continued proper specimen collection. # Cell Culture Compared with other diagnostic tests for C. trachomatis, a major advantage of cell culture isolation is a specificity that approaches 100%. In cell culture, organisms from each of the three chlamydia species (C. trachomatis, C. pneumoniae, and C. psittaci) grow and produce intracytoplasmic inclusions.
The direct visualization of these inclusions contributes to the specificity of cell culture in identifying chlamydia. The preferred method for identifying inclusions is to stain the infected cells with a species-specific, monoclonal fluorescein-labeled antibody (FA) for C. trachomatis. Alternative antibodies used to stain chlamydia inclusions that bind to chlamydia lipopolysaccharide (LPS) are NOT specific for C. trachomatis and also will stain C. psittaci and C. pneumoniae inclusions. In these recommendations, the use of C. trachomatis-specific, anti-major outer membrane protein (MOMP) antibody is the presumed method used for the detection of C. trachomatis isolated in cell culture. Culture sensitivity is approximately 70%-90% in experienced laboratories (123). Since culture amplifies small numbers of organisms, it is preferred for specimens in which low numbers of organisms are anticipated (e.g., asymptomatic infection). Culture also allows the organism to be preserved for additional studies, including the determination of immunotype (serovar) and antimicrobial susceptibilities. Cell culture isolation has several disadvantages. First, the method is technically difficult and requires 3-7 days to obtain a result. Second, since only viable organisms are detected, special transport media must be used, and transportation and storage temperature requirements are stringent. Finally, some specimens contain contaminating microorganisms or substances that are toxic to the cell monolayers used for isolating chlamydia. # Indications for the Use of Cell Culture A high-quality culture system for chlamydia is the diagnostic method of choice and is essential for the diagnosis of chlamydia in all medical/legal situations. Culture serves as the standard for the quality assurance of nonculture tests and for the evaluation of new diagnostic methods. Culture is recommended for detection of chlamydia in specimens for which nonculture methods have not been developed, have not been adequately evaluated, or perform poorly. Culture is recommended for specimens from the following sites: - Urethral specimens (women and asymptomatic men). - Nasopharyngeal specimens (infants). - Rectal specimens (all patients, regardless of age). - Vaginal specimens (prepubertal girls). # Collecting Specimens for Cell Culture Swabs with plastic or wire shafts can be used to obtain specimens for cell culture. Swab tips can be made of cotton, rayon, dacron, or calcium alginate. Swabs with wooden shafts should not be used because the wood may contain substances that are toxic to chlamydia. As part of routine quality control, samples of each lot of swabs that are used to collect specimens for chlamydia isolation should be screened for possible toxicity to chlamydia. The substitution of an endocervical brush for a swab may increase the sensitivity of culture for endocervical specimens from nonpregnant women (124). However, the use of the endocervical brush may induce bleeding. Although such bleeding does not interfere with the isolation of C. trachomatis, patients should be advised about possible spotting. # Transporting Cell Culture Specimens The viability of chlamydia organisms must be maintained during transport to the laboratory. Consult with the laboratory regarding the selection of appropriate transport medium and procedures (125,126). Specific recommendations for transporting specimens are listed below: - Specimens for culture should be stored at 4 °C and inoculated in cell culture as quickly as possible after collection.
The elapsed time until inoculation should not exceed 24 hours. If specimens cannot be inoculated within 24 hours, they should be maintained at -70 °C or colder. - Specimens for culture should never be stored at -20 °C or in "frost-free" freezers. These conditions cause a decrease in chlamydia viability. - Only cell lines that have been checked for their ability to support chlamydial growth should be used. # Analyzing Cell Culture Specimens - The identification of C. trachomatis should be based on visualizing characteristic chlamydia inclusions using species-specific FA staining. - Laboratories should determine periodically whether multiple passages will increase sensitivity. # NOTE: The specificity of culture is based on the identification of inclusions. The identification of chlamydia after culture by EIA, or any other method that does not identify characteristic intracytoplasmic inclusions, is NOT recommended. # Interpreting Cell Culture Results A positive cell culture with visualization of characteristic inclusions by species-specific monoclonal FA staining is required to diagnose a chlamydial infection. With proper technique, the specificity of cell culture isolation of C. trachomatis is nearly 100%. A negative chlamydia culture does not rule out the presence of a chlamydial infection. The sensitivity of culture is approximately 70%-90% when recommended laboratory and specimen collection techniques are performed. # Quality Assurance of Cell Culture Specimen Collection The isolation and detection of chlamydia in cell culture are technically demanding. The training of clinicians and laboratorians, along with quality assurance of laboratory performance and specimen collection, is critical to performing cell culture adequately. Specific recommendations for cell culture specimens are listed below: - To monitor the sensitivity and specificity of the cell culture system, laboratories should include known samples as appropriate positive and negative controls when clinical specimens are cultured. - Periodically (i.e., monthly or bimonthly), positive controls should be submitted from patient settings to verify the effectiveness of transport systems. - Reference laboratories should provide a quality assurance system for monitoring culture performance and specimen adequacy. These laboratories also should provide assistance in the evaluation of unexpected or discrepant results. # Nonculture Chlamydia Tests More aggressive prevention strategies are now possible with the availability of nonculture test methods for the detection of chlamydia. Published evaluations of the performance of nonculture chlamydia tests are based primarily on the MicroTrak® DFA (Syva) test and the Chlamydiazyme® EIA (Abbott) test. Additional tests have been approved by the Food and Drug Administration (FDA) for the detection of chlamydial infection. Published information on these other tests is increasing rapidly, but is still limited. The recommended uses of nonculture chlamydia tests for presumptive diagnosis, with and without additional testing to verify a positive result, are summarized in these recommendations (Table 1). # Description of Nonculture Tests Nonculture chlamydia tests include several categories that differ in format and execution. Direct Fluorescent Antibody (DFA) Tests. Depending on the commercial product used, the antigen that is detected by the antibody in the DFA procedure is either the MOMP or LPS.
Specimen material is obtained with a swab or endocervical brush, which is then rolled over the specimen "well" of a slide. After the slide has dried and the fixative has been applied, the slide can be stored or shipped at ambient temperature. The slide should be processed by the laboratory within 7 days after the specimen has been obtained. Staining consists of covering the smear with fluorescent monoclonal antibody that binds to chlamydia elementary bodies. Stained elementary bodies are then identified by fluorescence microscopy. Total processing time is 30-40 minutes. Only C. trachomatis organisms will stain with the anti-MOMP antibodies used in commercial kits. Cross-reactions can occur between the anti-LPS antibodies used in commercial kits and other bacterial species, as well as with C. pneumoniae and C. psittaci.
Enzyme Immunoassay (EIA) Tests. EIA tests detect chlamydia LPS with a monoclonal or polyclonal antibody that has been labeled with an enzyme. The enzyme converts a colorless substrate into a colored product. The intensity of the color is measured with a spectrophotometer, which provides a numerical readout. The specimens for EIA are collected by using specimen collection kits supplied by the manufacturer. These kits include swabs, transport tubes, and instructions (to obtain optimum performance of the test kit, the manufacturer's instructions should be followed). Specimens can be stored and transported at ambient temperature and should be processed within the time indicated by the manufacturer. Total processing time is 3-4 hours. One disadvantage of the EIA methods that detect LPS is that cross-reaction of the antibody with other microorganisms leads to false-positive results (125,127-129 ). In addition, the LPS-based EIA tests detect all three chlamydia species and, therefore, are not specific for C. trachomatis. Some manufacturers have developed blocking assays that are used to verify positive EIA test results. The test is repeated on initially positive specimens with the addition of a monoclonal antibody specific for chlamydia LPS. The monoclonal antibody competitively inhibits chlamydia-specific binding by the enzyme-labeled antibody; a negative result with the blocking antibody is interpreted as verification of the initial positive test result.
Nucleic Acid Hybridization Tests (DNA Probe). Nucleic acid hybridization methods can be used for the diagnosis of chlamydial infections. In the hybridization assay, a chemiluminescent DNA probe that is complementary to a specific sequence of C. trachomatis ribosomal RNA (rRNA) is allowed to hybridize to any chlamydia rRNA that is present in the specimen. The resulting DNA:rRNA hybrids are adsorbed to magnetic particles and are then detected by using a luminometer that provides a numerical readout. The test kit includes a specimen collection swab and transport medium. Specimens should be maintained at temperatures of 2-25 C during transport and storage. The manufacturer suggests that assays be performed within 7 days after collection; otherwise the specimens should be stored at -20 C or colder. Total processing time is 2-3 hours. The technical requirements and the necessary expertise to perform nucleic acid hybridization tests are similar to those of the EIA methods. The probe assay is specific for C. trachomatis; cross-reactions with organisms other than C. trachomatis, including C. pneumoniae and C. psittaci, have not been reported. A competitive probe assay has been developed to provide a means of assuring high specificity.
The usefulness of the competitive assay is being evaluated in clinical trials; the assay has not been approved for in vitro use by the Food and Drug Administration (FDA).
Rapid Chlamydia Tests. Chlamydia tests have been developed that can be performed within 30 minutes, do not require expensive or sophisticated equipment, and are packaged as single units. The results are read qualitatively. These rapid or stat tests can offer advantages in physicians' offices, small clinics and hospitals, and settings in which results are needed immediately (e.g., when making decisions about additional testing or treatment while the patient is still present). Like EIAs, these tests use antibodies against LPS that detect all three chlamydia species and are subject to the same potential for false-positive results due to cross-reactions with other microorganisms. The performance characteristics of these tests have not been extensively evaluated. In addition, since rapid chlamydia tests are designed to be performed by nonlaboratory personnel, quality assurance is essential. Personnel requirements, quality assurance, and quality control requirements relating to the use of these and other tests are published in the Clinical Laboratory Improvement Amendments (CLIA) (130 ) and are governed by the test categorization compilation.
Leukocyte Esterase Test (LET). The LET is a dipstick test that is applied to urine specimens to screen for urinary tract infection. The LET detects enzymes that are produced by polymorphonuclear leukocytes. These enzymes hydrolyze an indoxylcarbonic acid ester on the dipstick to indoxyl, which reacts with an indicator in the strip to produce a purple color. The procedure requires <2 minutes to perform after the specimen is collected.
# Accuracy of Nonculture Tests and Recommendations for Use
For small-volume testing, DFA may be preferred because the quality of the specimen can be assessed. Automated tests are probably preferable for high-volume testing, but a quality control system is essential for monitoring the adequacy of specimens. Published evaluations of the performance of nonculture chlamydia tests are based primarily on the MicroTrak ® DFA (Syva) test and the Chlamydiazyme ® EIA (Abbott) test. Additional tests have been approved by the FDA for the detection of chlamydial infection. Published information on these other tests is increasing, but is still limited.
Cervical Specimens from Women (Sensitivity). The performance characteristics of MicroTrak ® DFA and Chlamydiazyme ® nonculture chlamydia tests have been reported in published evaluations of women in high-prevalence (≥5%) patient populations. Sensitivities, using culture as a standard, have varied greatly but generally exceed 70% (CDC, unpublished review ). This variability in reported sensitivity is a result of differences in specimen collection technique, performance of the culture system used as a standard, and patient characteristics (e.g., age, MPC, and other factors such as prevalence of chlamydial infection, duration of infection, and previous exposure to chlamydia) (121,122,128,131-136 ). A sensitivity of at least 70% is adequate for screening, but not sufficient to exclude chlamydial infection (see Patient Care: An Expanded Role for the Use of Chlamydia Tests). Evaluations have been presented or published for the use of a number of tests for cervical specimens from women. Such tests include IDEIA ® (Dako Diagnostics), Pace 2 ® (Gen-Probe), MicroTrak ® EIA (Syva), and Clearview ® (Unipath) tests.
Sensitivities reported for these tests are comparable to sensitivities reported for Chlamydiazyme ® and MicroTrak ® DFA. However, chlamydia diagnostics experts recommend additional evaluation of these tests. First, the true sensitivity of these tests may be less than that indicated in published reports. The number of evaluations published for any given test is small, and the number of patients with chlamydial infections studied in most evaluations is also small. Therefore, sensitivities may prove to be lower (or higher) as more evaluations are reported. More evaluations need to be conducted by laboratories that have quality assurance programs for culture, including, for example, exchange of specimens with other laboratories (proficiency testing). Evaluations that compare new tests directly with established tests would aid in determining the performance of new tests. Second, none of these tests have been adequately evaluated in low-prevalence patient populations. Diagnostics experts are concerned that differences in factors such as previous exposure to chlamydia and duration of infection may result in lower test sensitivities in low-prevalence populations (128 ). Third, although the reported performance of the Clearview ® rapid (stat) test approaches that of Chlamydiazyme ® and MicroTrak ® DFA, additional evaluations are needed to determine its performance when the test is used in an outpatient setting during the patient's visit. Too few evaluations have been reported for other chlamydia tests to assess their sensitivities (≤3 patient care-laboratory settings or 240 patients with chlamydia isolated by cell culture).
# Specificity and Predictive Value Positive
The specificity of nonculture tests for cervical specimens has been high (97%-99%). However, clinicians should be aware that even with these high specificities, false-positive results account for an important proportion of all positive test results among groups of patients with a low prevalence of chlamydial infection. The effect of false-positive tests in a population can be quantified by the predictive value positive (PVP). The PVP is the proportion of all persons who have a positive test result for a condition who actually have that condition. In chlamydia screening applications, the PVP is influenced primarily by the specificity of the test and the prevalence of chlamydia. For example, when a nonculture test with a specificity of 98% and a sensitivity of 80% is used to screen 1,000 patients from a high-risk patient population with a chlamydia prevalence of 15% (150 patients have an infection), the test produces 137 positive results: 120 patients are actually infected and 17 are not infected (false-positives). The PVP is 120/137 = 0.88. When this same test is used to screen 1,000 patients from a low-risk patient population with a chlamydia prevalence of only 2% (20 patients have an infection), 36 positive results are obtained: 16 patients are infected and 20 are not. The PVP is 16/36 = 0.44. In the low-risk patient population, fewer than half the patients with positive tests actually have chlamydial infection; the remainder are at risk of being incorrectly identified as having an STD unless the clinician takes other measures, such as arranging for verification when screening results are positive. Specificities of the nonculture tests have been <99% in most published evaluations even when a third test was used to detect false-negative cultures.
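The PVP arithmetic in this example generalizes to any combination of sensitivity, specificity, and prevalence. The short Python sketch below reproduces the figures cited above; it is offered only as an illustration, and the function name and the fixed population size of 1,000 are our own choices rather than anything specified in these recommendations.

```python
def predictive_value_positive(sensitivity, specificity, prevalence, n=1000):
    """Return (true positives, false positives, PVP) expected when
    screening n patients, using the same arithmetic as the text."""
    infected = n * prevalence
    uninfected = n - infected
    true_pos = infected * sensitivity            # infected patients who test positive
    false_pos = uninfected * (1 - specificity)   # uninfected patients who test positive
    return true_pos, false_pos, true_pos / (true_pos + false_pos)

# High-prevalence population (15%): 120 true positives, 17 false positives, PVP = 0.88
print(predictive_value_positive(0.80, 0.98, 0.15))
# Low-prevalence population (2%): 16 true positives and 19.6 false positives,
# PVP = 0.45 (the text rounds the false positives to 20, giving 16/36 = 0.44)
print(predictive_value_positive(0.80, 0.98, 0.02))
```

Applied at the 5% prevalence cutoff discussed in the NOTE below (sensitivity 80%, specificity 99%), the same calculation yields a PVP of approximately 0.81, consistent with the 80% figure cited there.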
Contamination of cervical specimens with vaginal secretions is responsible for some false-positive tests when using Chlamydiazyme ®. The polyclonal anti-LPS antibody used in Chlamydiazyme ® cross-reacts with LPS on other bacteria found in the vagina and urinary tract of these patients (121,122,125,127,128,137 ). The source of nonspecific fluorescence with MicroTrak ® DFA and of nonspecific signals with tests using monoclonal antibodies or genetic probes has not been determined.
NOTE: Hereafter, a prevalence of <5% is considered to be "low prevalence." At a 5% prevalence (specificity 99% and sensitivity 80%), the PVP would be 80%. Although defining low prevalence as <5% is arbitrary, the potential for 20% of positive test results to be falsely positive (in the absence of a second test for verification) supports the utility of this cutoff.
Verifying Positive Screening Test Results. Clinicians should verify positive screening test results with a supplemental test if a false-positive test result is likely to have adverse medical, social, or psychological consequences (Table 1). Verification should probably be routine in low-prevalence patient populations, but might be selective in high-prevalence populations. Methods that are suitable for verifying positive nonculture tests for cervical specimens are recommended below:
- Verify positive tests by culture, using fluorescein-conjugated C. trachomatis-specific antibody. Culture is the most sensitive and specific method for verification. However, verification with culture requires a second specimen that is collected either at the time the first specimen is collected or during a return visit. Because of the limited availability of culture testing and the requirement for a second specimen, culture should be used for verification only if very high specificity is required (e.g., medical/legal situations).
- Perform a second nonculture test that identifies a C. trachomatis antigen or a nucleic acid sequence that is different from that identified by the screening test. Theoretically, detecting a second, highly specific antigen or nucleic acid sequence should provide adequate specificity for verification. However, this approach has had limited evaluation in field trials. A positive screening test may be verified by a second test using a second specimen that is collected either at the time the first specimen is collected or during a return visit. To avoid taking a second specimen, some EIA test manufacturers suggest using the excess specimen in the EIA transport medium for verification of positive EIA tests with a DFA test.
- Use an unlabeled "blocking" antibody or "competitive" probe that verifies a positive test result by preventing attachment of the labeled antibody or probe that is used in the standard assay. These methods are theoretically less desirable because they identify the same molecules as those identified by the initial nonculture tests. However, the Chlamydiazyme ® blocking antibody test appears to produce adequate results (127,138-142 ). These methods do not require collecting a second specimen.
With the exception of the Chlamydiazyme ® blocking antibody test, nonculture methods of verifying initial positive chlamydia test results have received insufficient evaluation. Because the sensitivity of tests used for verification of positive chlamydia tests is uncertain and may be <90%, failure to verify the initial positive test does not rule out chlamydial infection.
Evaluation studies are needed to determine which of the above approaches is preferable in relation to sensitivity, specificity, and cost (see Clinician-Laboratory Protocols for Verifying Positive Tests).
Urethral Specimens from Men. The use of the Chlamydiazyme ® EIA test and the MicroTrak ® DFA test for urethral specimens from males with symptomatic urethritis has been evaluated in published studies. Reported sensitivities of the Chlamydiazyme ® EIA and MicroTrak ® DFA tests among men with chlamydial urethritis have been highly variable but usually exceed 70%. These sensitivities are sufficiently high to recommend using these tests to detect chlamydial infections among men with symptomatic urethritis. In such men, detection of chlamydial infection may be useful in promoting examination and treatment of sex partners and managing the patient's infection. Neither nonculture tests nor tissue culture isolation is sufficiently sensitive to rule out chlamydial infection based solely on a negative test result. Positive Chlamydiazyme ® and MicroTrak ® DFA tests have been highly predictive of chlamydial infection among adolescent and young adult men with symptomatic urethritis. The reported specificities of the Chlamydiazyme ® EIA and MicroTrak ® DFA tests for urethral specimens from men with symptomatic urethritis have been sufficiently high (97%-≥99%). Because Chlamydiazyme ® may yield false-positive results from the urine of men with bladder infections, a positive nonculture test result may be less predictive of chlamydial infection among older men, who have a higher incidence of nonchlamydial urinary tract infection (143 ). Because information on the performance of the nonculture tests among asymptomatic men is limited and tests for identifying asymptomatic urethral infection among men may be insensitive, none of the nonculture tests are recommended for this group of patients.
# Other Nonculture Test Specimens
Urine. The LET can be used to screen sexually active teenage males for urethritis, which is often caused by chlamydia or gonorrhea. Patients with positive results indicating the presence of urethritis require specific tests for C. trachomatis and N. gonorrhoeae. Reported sensitivities of the LET in screening for chlamydia and gonorrhea range from 46% to 100%, and specificities range from 83% to 100% (19,20,144,145 ). Data are insufficient to recommend its use for older males or for women.
Rectum. Culture is the preferred method for detecting chlamydia in rectal specimens. Some DFA reagents have been approved by the FDA to evaluate rectal specimens; however, the specificity of the test for rectal specimens may be less than that for specimens from the cervix or the urethra. If DFA is used, slides should be read only by highly experienced microscopists.
Conjunctiva. Although data are limited, the performance of nonculture tests with conjunctival specimens has been at least as good as their performance with genital specimens.
Nasopharynx (Infants). Nonculture tests have not been adequately evaluated for the detection of C. trachomatis in nasopharyngeal specimens. Many of these tests cannot distinguish among C. trachomatis, C. pneumoniae, and C. psittaci.
Serum. Chlamydia serology has little value in the routine clinical care of genital tract infections. Commercial serologic tests are not useful in routine diagnosis because previous chlamydial infections elicit long-lasting antibodies that cannot be easily distinguished from the antibodies produced in a current infection.
Immunoglobulin M microimmunofluorescence (MIF) is the test used frequently for the diagnosis of chlamydial pneumonia among infants. Chlamydia serology is also useful for persons with symptoms consistent with lymphogranuloma venereum (LGV). For such persons, a fourfold rise in MIF titer to LGV antigens or a complement fixation titer of ≥1:32 supports a presumptive diagnosis of LGV. The difficulties in preparing antigens and in performing these tests restrict their use to a limited number of reference and research laboratories.
Post-Treatment Tests. If a post-treatment test is performed using a nonculture test, the test should be scheduled a minimum of 3 weeks after completion of antimicrobial therapy. Tests performed earlier may be false-negative because of small numbers of chlamydia organisms; the presence of dead organisms also causes false-positive nonculture test results (see Follow-up of Patients Treated for Chlamydial Infection).
# Quality Assurance of Nonculture Tests
As more laboratories begin to provide diagnostic services for chlamydial infections, the development of an infrastructure for laboratory quality assurance is increasingly important. Sites in which laboratory testing is performed must adhere to the CLIA regulations for staffing, professional training, patient test management, quality assurance, and quality control (130 ). Federal and state regulatory agencies that monitor quality assurance and quality control may have additional requirements and should be consulted for specific information. Each laboratory should verify the accuracy of nonculture test methods by periodically comparing its results with those obtained by using a high-quality culture system. In addition, CLIA recommends that laboratories enroll in proficiency testing programs, such as those provided by the College of American Pathologists and the American Proficiency Institute. These quality assurance measures are especially important when a laboratory implements a new test method. Training is recommended for the performance of all laboratory tests for chlamydia. Sources of training include product manufacturers, the National Laboratory Training Network (a source of instructional material that also may assist in locating or cosponsoring training workshops), and individual state public health laboratories. All clinicians, particularly new providers of chlamydia testing services, should be trained in order to obtain adequate specimens. This training should include a) instruction in obtaining sufficient numbers of cells from any particular site, and b) instruction in obtaining endocervical cells rather than ectocervical cells or vaginal material from the cervix.
# Laboratory Testing for Sexual Assault and Abuse Victims
Detailed information concerning the evaluation and treatment of suspected victims of sexual assault or abuse may be obtained from the 1993 Sexually Transmitted Diseases Treatment Guidelines (146 ) and the Sexually Transmitted Diseases Clinical Practice Guidelines 1991 (147 ). Specimens that are taken for chlamydia testing immediately after sexual assault may yield false-negative results because of small numbers of organisms present early in infection. Also, tests may be positive because of a prior infection rather than one acquired during the assault. Specimens for chlamydia cultures should be obtained from adults and adolescents during the initial evaluation and at a follow-up visit 2 weeks later. Specimens should be obtained from all sites of exposure.
Among children, specimens should be routinely collected from the pharynx and the rectum in addition to the vagina (girls). In the absence of signs of urethral infection, obtaining a urethral specimen from boys may not be justified because of a relatively low yield of positive test results and the discomfort associated with obtaining the specimen. The decision to obtain specimens at a follow-up examination must be made on a case-by-case basis. Obtaining such follow-up specimens may not be justified if the exposure occurred several days or more before the initial examination, or if the examination would be psychologically traumatic. Only cell culture isolation using standard methods employing C. trachomatis-specific antibodies should be used to detect C. trachomatis infection in the investigation of possible sexual abuse (129 ). Nonculture tests are not sufficiently sensitive or specific to be used in the investigation of sexual abuse (129,138,148-151 ). All specimens and isolates from both suspected victims and alleged assailants should be stored at -70 C or colder in case additional testing by a qualified reference laboratory is needed.
# Future Directions for Laboratory Testing
The availability of sensitive and noninvasive C. trachomatis-specific screening tests for men and women (e.g., urine tests) will greatly expand the population that can be screened. For women, such tests are not yet available. For men, sensitivities of EIA methods for detecting chlamydia in first-voided urine range from 30% in a group of asymptomatic men (152 ) to approximately 88% in men with symptoms (153 ). Although specificity is ≥97%, increased rates of false-positive results have been reported with specimens from men with urinary tract infections caused by Escherichia coli and Klebsiella pneumoniae (143 ). Further studies are needed before noninvasive C. trachomatis screening tests for asymptomatic men can be recommended. Recent evidence suggests that the numerical results of nonculture tests, together with the cutoff point for a positive result, might aid laboratories and clinicians in determining when to perform a second test to verify the initial results (139,154 ). Positive results that substantially exceed the cutoff point may be more likely to be true positives than those near the cutoff point. Similarly, negative results that are close to the cutoff point may be less likely to be true negatives than those with lower values. By establishing a zone just above and just below the cutoff, specimens giving low-positive or high-negative results would be evaluated by a second test (see the sketch at the end of this section). The desired effect would be to increase both the sensitivity and specificity of tests. Although studies are in progress, data are insufficient for a formal recommendation on the use of numerical results. Technologies will continue to be refined, thereby improving both the sensitivity and the specificity of available tests for sexually transmitted chlamydial infections. Because of their ability to amplify chlamydia DNA in the specimen, new technologies such as polymerase chain reaction (PCR) and ligase chain reaction (LCR) promise specificities equal to, and sensitivities higher than, those of culture.
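The cutoff-zone approach described above lends itself to a simple decision rule. The Python sketch below is illustrative only: because data are insufficient for a formal recommendation, the cutoff value and the width of the equivocal zone are hypothetical choices, not validated thresholds.

```python
def classify_numerical_result(signal, cutoff, zone_fraction=0.2):
    """Classify a numerical nonculture test readout against its cutoff.

    Readouts in a zone just above or just below the cutoff are flagged
    for a supplementary (verification) test. The 20% zone width is a
    hypothetical value chosen for illustration.
    """
    if signal < cutoff * (1 - zone_fraction):
        return "negative"                      # well below the cutoff
    if signal > cutoff * (1 + zone_fraction):
        return "positive"                      # well above the cutoff
    return "equivocal: perform a second test"  # low-positive or high-negative

# With a hypothetical cutoff of 0.25 absorbance units, readouts between
# 0.20 and 0.30 would be referred for a second (verification) test.
print(classify_numerical_result(0.27, cutoff=0.25))
```

# PATIENT CARE: AN EXPANDED ROLE FOR THE USE OF CHLAMYDIA TESTS
The Chlamydia trachomatis infections policy guidelines published in 1985 (1 ) emphasized the need to include treatment for chlamydia in regimens for patients whose diagnoses were strongly associated with chlamydial infection.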
The increasing availability of accurate and economical chlamydia tests not only permits widespread screening of asymptomatic persons but also suggests that treating symptomatic patients and referring their sex partners in the absence of testing, a key strategy in past guidelines, should be discouraged. These tests should be used to diagnose chlamydia for patients with symptoms or signs suggestive of chlamydial infection even if therapy is administered and partners are referred before test results are available. A specific chlamydia diagnosis should facilitate sex partner referral, since a positive chlamydia test result indicates that the patient's infection is sexually transmitted. A specific diagnosis may also facilitate medical care for patients who do not respond as expected to initial chlamydia therapy. Some providers do not have the resources both to screen asymptomatic patients for chlamydia and to test patients whose conditions warrant presumptive treatment and partner referral. If presumptive treatment without testing is elected for patients with symptoms or signs of chlamydia, efforts must be made to ensure the treatment of partners. Although the benefits of chlamydia tests for screening and diagnosis justify their use, the potential for adverse consequences must also be recognized and steps taken to minimize them. The adverse consequences for infected patients and their sex partners, including disease complications and transmission of chlamydial infection, may occur if chlamydia treatment is delayed while waiting for test results or if treatment is withheld because of a false-negative test result. Adverse consequences also may occur if uninfected patients and their sex partners are treated unnecessarily because the chlamydia test result is unavailable or false-positive. The adverse consequences of treating uninfected persons are more likely to be psychosocial, resulting from the misdiagnosis of a sexually transmitted infection; adverse effects of the antibiotic used to treat chlamydia are relatively uncommon and mild. These adverse consequences can be avoided by a) treating patients who are symptomatic or have a substantially increased risk of chlamydial infection and treating their sex partners for chlamydia without waiting for chlamydia test results, and b) arranging for a second test to verify an initial positive screening test result for patients and their sex partners who are susceptible to the adverse psychosocial consequences of having a false diagnosis of an STD.
# Presumptive Diagnosis of Chlamydial Infection
Several conditions (NGU, PID, epididymitis, and gonococcal infection) are consistently associated with an increased prevalence of chlamydial infection among patients and their sex partners (Table 2). Patients with these illnesses require immediate treatment to relieve symptoms and/or to prevent complications. Treatment for these conditions should include an antibiotic regimen for chlamydia. Sex partners of infected patients should be evaluated and treated for chlamydia without waiting for the patient's test results. Immediate chlamydia treatment of MPC is warranted by an increased prevalence of chlamydia among women with this condition in most patient care settings. However, immediate chlamydia treatment of MPC or urethral syndrome among females, or proctitis among homosexual males, and the referral of sex partners of patients with any of these conditions before obtaining a microbiologic diagnosis, may not always be warranted (Table 3).
Chlamydia risk factors (e.g., age <25 years, having new or multiple sex partners) and the likelihood of compliance with follow-up visits should be considered when deciding whether to defer chlamydia treatment and referral of sex partners until chlamydia test results are available. Whenever possible, these decisions should be based on local estimates of chlamydia prevalence.
# Chlamydia Tests for Patients Who Are Treated Presumptively
Even if a patient with a presumptive diagnosis of chlamydial infection will be treated and counseled to refer partners before test results are known, chlamydia tests should be performed for the following reasons:
- To ensure appropriate medical care, particularly if symptoms persist.
- To facilitate counseling of the patient.
- To provide firm grounds for partner notification.
- To improve compliance.
Limited resources may require health-care providers to decide between performing chlamydia tests for patients who would be treated for chlamydia because of symptoms or signs and performing them for patients who are asymptomatic and would not otherwise receive chlamydia treatment. If the health-care provider elects to provide presumptive treatment for patients with symptoms or signs, but without chlamydia testing, efforts must be made to ensure that the sex partners of these patients are treated.
# Screening Women for Chlamydial Infection
Screening of women for chlamydial infection is a principal element of a chlamydia prevention program (see Health-Care Provider Strategies). The decision to provide treatment for patients whose screening results are positive and to evaluate and treat their sex partners depends upon the patient's risk for a sexually transmitted infection and the potential for adverse psychosocial consequences. A positive second chlamydia test strongly supports the validity of a positive screening test; a negative second test following a positive screening test does not rule out chlamydial infection. Verification of the initial positive chlamydia test result should be obtained for persons who have a positive nonculture chlamydia test and who are at low risk for infection (e.g., persons who are in a monogamous relationship, have no history of sexually transmitted infection, or are members of low-prevalence patient populations) or for whom a misdiagnosis of chlamydial infection could lead to social/psychological distress. If verification of the initial positive chlamydia test result is indicated, these patients and their sex partners should be treated while waiting for the results of the supplementary test. The health-care provider should postpone treatment and partner referral only if the likelihood and adverse consequences of a false-positive test outweigh the risks of transmission and disease progression. Risk factors for chlamydial infection and the probability that the patient will return for follow-up visits should also be considered (see Nonculture Chlamydia Tests and Clinician-Laboratory Protocols for Verifying Positive Tests).
# Screening Men for Chlamydial Infection
Screening tests for chlamydia would be more acceptable if urine rather than intraurethral swab specimens could be used. Chlamydia-specific nonculture tests (i.e., EIA, DFA, and nucleic acid probe tests) have not been adequately evaluated for use with urine. However, the LET detects inflammatory cells in urine. Persons whose LET results are positive require further evaluation of the cause of inflammation, which generally will be urethritis due to chlamydia, N. gonorrhoeae, or other sexually transmitted agents, or occasionally a urinary tract infection unrelated to a sexually transmitted agent.
The utility of the LET in detecting urethritis has been evaluated primarily among adolescent males, for whom urinary tract infections other than urethritis are rare. In this group, the sensitivity of the LET in detecting asymptomatic chlamydial infection has varied from 46% to 100% (19,20,144,145 ). Compared with chlamydia-specific tests, the LET is inexpensive and easy to use, and it provides immediate results. Although the test cannot exclude infection among asymptomatic males, continued evaluation of the LET is recommended to better define its usefulness in detecting asymptomatic chlamydial (and gonococcal) infections.
# Physical Examination of Sex Partners
Female partners of males with chlamydial infection should be referred for examination, chlamydia testing, and treatment. The examination and testing of female partners are recommended because a) sensitive tests are available, and a positive test result may lead to the treatment of additional partners who are likely to be infected; b) women can be asymptomatic but, when examined, have signs of PID, which requires more intensive therapy; and c) women may be asymptomatically infected with other STDs. However, information is needed on the rates of PID and other STDs among female partners of infected men. Male sex partners of females with chlamydial infections should be evaluated for symptoms of chlamydia and other sexually transmitted infections and for allergy to the treatment drug. A physical examination of male sex partners should be encouraged, but an examination is less important than treatment. The examination and testing of asymptomatic male partners are recommended because a) a positive test result may lead to the treatment of additional partners who are likely to be infected, b) men can be asymptomatically infected with other STDs, and c) male partners may be allergic to the treatment drug. However, chlamydia tests for asymptomatic males are insensitive. Further, low rates of other STDs among asymptomatic male partners of women with chlamydial infection have been demonstrated in limited studies. Also, many males do not have readily identifiable sources of medical care for STDs and so may be unlikely to be evaluated by a clinician even if asked to do so by a sex partner. For some male partners of women with chlamydial infection, therefore, it may be reasonable for the woman's clinician to evaluate the male partner even if, in the case of providers who do not offer health-care services for men, this means evaluation without a physical examination. Although approaches to evaluating male partners without a physical examination have not been adequately studied, evaluation of the male partner could be performed at the clinician's office or possibly by telephone. Before prescribing treatment without an examination, the clinician should determine that the male partner does not have symptoms suggestive of another STD and is not allergic to the treatment drug.
# Exposure Periods
For men with symptomatic chlamydial infection, health-care providers should treat all sex partners with whom the patient had sexual exposure during the 30 days preceding the onset of symptoms. For women with chlamydial infections and for asymptomatically infected men, health-care providers should treat all sex partners with whom patients have ongoing sexual relations and all other partners with whom patients have had sexual exposures within the 60 days before the date of the patient's examination/test.
For males with symptomatic chlamydial infection, the 30-day period is sufficient to detect the person(s) who probably transmitted the infection to the index patient, as well as recent sex partners who may have been exposed to the infection by the patient. For males and females with asymptomatic infections, a longer exposure period helps to identify additional infected partners. These extended periods, however, have received insufficient evaluation to support specific recommendations. If no sexual exposure has occurred within the specified exposure periods, the most recent sex partner is presumed to be at increased risk for chlamydial infection and should be evaluated.
# Responsibility for Referral of Sex Partners
Health-care providers should inform infected patients that they must have their sex partners evaluated and treated. Health-care providers or health departments should ensure the notification, evaluation, and treatment of the sex partners of patients with chlamydial infection. Partner referral can be performed by patients (patient referral) or by providers (provider referral). With patient referral, patients themselves notify their sex partners of their exposure and encourage them to be examined and treated. Provider referral requires that third parties (e.g., health department personnel) assume responsibility for notifying sex partners of their exposure and for providing evaluation and treatment. Provider referral of partners, including field follow-up by health department staff, is cost-effective (155 ). However, because of the high prevalence of chlamydial infection among some populations and the limited number of health department outreach workers, patient referral remains the only method of referral available to most clinicians. The responsibility for evaluating the sex partners of persons with chlamydial infection is often unclear and is a major reason partners remain untreated. This is a particular problem for male partners of females with chlamydial infection; male partners (who are often asymptomatic) may be reluctant to visit an STD clinic. Health-care providers who treat women with chlamydial infection should assist in making arrangements for the evaluation and treatment of male partner(s). Health departments can assist health-care providers in developing effective referral systems.
# Clinician-Laboratory Protocols for Verifying Positive Tests
Clinician-laboratory protocols for chlamydia testing are necessary to maximize the benefits of testing for chlamydia while minimizing adverse consequences and cost. These protocols should address which initial and supplementary tests the laboratory will perform, how clinicians should request these tests, and what specimens the clinicians should collect and submit to the laboratory during the patient's initial and follow-up visits. State and local health departments should facilitate the collaboration between health-care providers and laboratories that is necessary to develop suitable testing protocols (see Nonculture Chlamydia Tests). When developing clinician-laboratory protocols for chlamydia testing, the prevalence of chlamydial infection in the patient population should be considered. In settings with a high prevalence of infection (i.e., settings in which false-positive test results account for a small proportion of total positive test results), initial positive test results might be verified only if requested by the clinician on the basis of an assessment of a patient's risk for infection and the potential adverse effects of a false-positive result.
In settings with a low prevalence of infection (i.e., settings in which false-positive test results account for a substantial proportion of total positive test results), an additional test for verification should be performed on all patients whose screening test results are positive. Three alternative testing protocols are suitable for supplementary testing. With the first protocol, a test system is chosen that permits performing the supplementary test on the residual material from the initial specimen. With the second protocol, a test system is chosen in which the supplementary test is performed on a second specimen. The clinician routinely obtains the second specimen at the same time as the initial specimen and submits both specimens to the laboratory; the supplementary test is performed on the second specimen if the initial test is positive and verification is required. With the third protocol, the laboratory reports a positive result from the initial test to the clinician, who arranges the patient's return visit for treatment and then obtains a second specimen for a supplemental test. In most settings, one of the first two protocols is preferable, since neither requires a return visit to collect an additional specimen. The choice of supplementary test systems and related protocols is difficult because insufficient information is available regarding their comparative performance and cost.
# ANTIMICROBIAL REGIMENS
Recommendations for the treatment of genital chlamydial infections have been published (146,156 ). Two new antimicrobials approved by the FDA for the treatment of chlamydia, ofloxacin and azithromycin, offer the clinician additional therapeutic choices. A substantial advantage of azithromycin, in comparison with all other therapies, is that a single dose is effective; this antimicrobial may prove most useful in situations in which compliance with a 7-day regimen of another antimicrobial cannot be ensured. In view of the high efficacy of tetracycline and doxycycline, cost also should be considered when selecting a treatment regimen. The recommended treatment regimens for uncomplicated urethral, endocervical, or rectal chlamydial infections among adults are listed below:
- Doxycycline 100 mg orally 2 times a day for 7 days, or
- Azithromycin 1 gm orally in a single dose
NOTE: Doxycycline and azithromycin are not recommended for use during pregnancy.
# Alternative Treatment Regimens
Alternative treatment regimens for uncomplicated urethral, endocervical, or rectal chlamydial infections among adults are listed below:
- Ofloxacin 300 mg orally 2 times a day for 7 days, or
- Erythromycin base 500 mg orally 4 times a day for 7 days, or
- Erythromycin ethylsuccinate 800 mg orally 4 times a day for 7 days, or
- Sulfisoxazole 500 mg orally 4 times a day for 10 days
NOTE: Ofloxacin is not recommended for treating adolescents ≤17 years of age or pregnant women. The efficacy of sulfisoxazole is inferior to that of the other regimens.
The recommended treatment regimen for chlamydial infection during pregnancy is stated below:
- Erythromycin base 500 mg orally 4 times a day for 7 days
If this regimen cannot be tolerated, the following regimens are recommended:
- Erythromycin base 250 mg orally 4 times a day for 14 days, or
- Erythromycin ethylsuccinate 800 mg orally 4 times a day for 7 days, or
- Erythromycin ethylsuccinate 400 mg orally 4 times a day for 14 days
If the patient cannot tolerate erythromycin, the following regimen is recommended:
- Amoxicillin 500 mg orally 3 times a day for 7-10 days
NOTE: Erythromycin estolate is contraindicated during pregnancy, since drug-related hepatotoxicity can result.
For the treatment of complicated chlamydial infection, chlamydia-associated conditions, and chlamydial infection among infants or children, see the 1993 Sexually Transmitted Diseases Treatment Guidelines (146 ) for the treatment of adults, and the American Academy of Pediatrics Report of the Committee on Infectious Diseases (157 ) for the treatment of infants and children.
# Follow-up of Patients Treated for Chlamydial Infection
Treatment failure, indicated by positive cultures 7-14 days after therapy, is uncommon after successful completion of a ≥7-day regimen of tetracycline or doxycycline; failure rates of 0%-3% have been reported for males and 0%-8% for females (156,158 ). However, in one study of adolescents followed up to 24 months after therapy for chlamydial infection, rates of infection were 39% (118 ). Whether these infections are reinfections or cases of latent, unsuccessfully treated chlamydial infection is unknown. Further, some studies suggest that women with chlamydial infections are at increased risk for subsequent infection. Although routine test-of-cure visits during the immediate post-treatment period are not recommended, health-care providers should consider retesting females infected with chlamydia weeks to months after initial therapy (see Nonculture Chlamydia Tests).
# Establishing a Surveillance System
To measure chlamydia trends accurately, to identify populations with an increased frequency of chlamydial infection, and to estimate the burden of chlamydia disease, health departments should develop active chlamydia surveillance systems. These systems should incorporate the following recommendations:
- Encourage participation by private and public health-care providers and laboratories to ensure that results are representative of the whole population.
- Include measures of incidence as well as prevalence of chlamydial infections.
- Use screening results as an index of the prevalence of asymptomatic chlamydial infection.
# Monitoring Incidence
Health departments should monitor chlamydia incidence by collecting information regarding men who seek care because of symptoms of urethritis. If tests are performed, the results of chlamydia and gonorrhea laboratory tests should be included with reported cases of urethritis. If chlamydia testing is not performed, the reported incidence of NGU should be used as an index of chlamydia incidence.
# Monitoring Prevalence (Screening)
The number of chlamydia cases identified by community screening programs should be monitored as an index of chlamydia prevalence.
Information on the following factors that affect the number of chlamydia cases detected should also be collected:
- Number of chlamydia tests being performed
- Reason for chlamydia testing (e.g., asymptomatic screening or diagnostic testing performed because of symptoms)
- Characteristics of the population undergoing chlamydia testing
In some states, the volume of chlamydia cases precludes collecting sufficient information on each case of chlamydia. These states should develop community-based sentinel surveillance to monitor trends in the distribution of chlamydial infections and disease burden by demographic, behavioral, and clinical factors. With a sentinel system, a group of health-care providers serving patients who are representative of select populations in the community is recruited to monitor chlamydial infection rates. Examples of potential sentinel sites include STD clinics, family planning clinics, prenatal-care providers, community health centers, school health providers, jails and detention centers, and group practices or health maintenance organizations. A sentinel site monitoring the prevalence of chlamydia should test all patients (or a given number of consecutive patients or a random or systematic sample of patients) for chlamydial infection regardless of the patient's chief complaint or symptom status. Demographic, behavioral, and clinical information should be recorded for all patients tested. A sentinel site that monitors the incidence of NGU should collect information on all male patients with symptoms of urethritis and negative test results for gonorrhea. When possible, sentinel sites should monitor the prevalence of gonococcal infections and the incidence of gonococcal urethritis for all patients, as well as the key sequelae of chlamydial and gonococcal infections (e.g., PID, infertility, ectopic pregnancy).
# Laboratory Surveillance
Because the surveillance definition of chlamydial infection requires laboratory testing, local laboratories are an important component of a chlamydia surveillance system. Health departments should establish an inventory of laboratories conducting chlamydia testing and poll them regularly for the number of chlamydia tests performed and the number of positive test results.
# Monitoring Interventions
Chlamydia prevention programs should monitor intervention activities. Health departments should monitor the extent and quality of the interventions (process evaluation). Data should be analyzed to determine whether chlamydia trends (measured by surveillance systems) can be linked to interventions (outcome evaluation). Screening and partner notification are interventions of particular interest. The following guidelines are recommended:
- To permit health departments to monitor screening programs, participating health-care providers should provide screening test results (see Monitoring Prevalence).
- Where substantial resources are allocated to partner notification or other interventions, health departments should monitor these activities to ensure optimal distribution of those resources.
# ORGANIZING STRATEGIC PARTNERSHIPS
The primary goal of a chlamydia prevention strategy should be to secure the resources to provide adolescents and young adults with access to information regarding chlamydial infection, screening and treatment services, and partner notification. Meeting this goal will require the cooperation of all agencies and programs that serve the health-care needs of adolescents and young adults.
Every community has an opportunity to provide appropriate educational material regarding STDs and sexual behaviors, and to make screening and treatment for STDs more readily available to those at high risk for chlamydial infection. For example, some family planning programs provide chlamydia testing and treatment for their clients and their male sex partners, and a similar service is provided in some STD clinics. However, these efforts reach only a portion of the population that is infected with chlamydia. Such programs should be expanded to include other primary care providers who deliver medical services to sexually active adolescents and young adults: community health centers, migrant health centers, Native-American health centers, school-based clinics, Job Corps, detention centers, active-duty military facilities, hospital emergency rooms, and providers in the private sector. Schools, community-based recreational and after-school programs such as YMCAs, and other agencies can offer information about STDs (including HIV infection). A successful chlamydia prevention program should effectively coordinate all resources in a community. State STD/HIV prevention programs can provide solutions, since their staff have experience in organizing clinical, educational, and laboratory resources within states, and they have developed partnerships with providers (e.g., family planning, prenatal, and migrant health clinics) who are part of any prevention program. The challenge is to focus on the common problem (high rates of chlamydial infection throughout the country) and to develop the interagency relationships and coalitions that are needed to deliver appropriate chlamydia services in both public and private settings.
# SURVEILLANCE AND PROGRAM EVALUATION
Information should be collected at the national, state, and local levels to provide quantitative estimates of disease occurrence, to monitor trends, and to monitor interventions and evaluate their impact. The epidemiologic analysis of surveillance and intervention data should be the basis for decision-making in chlamydia prevention programs, including allocation of intervention resources and evaluation of prevention efforts. In addition, these data can be used to develop hypotheses for etiologic and intervention research. Surveillance of chlamydial infections is difficult for four reasons. First, the prevalence rate is high, and the potential burden of reporting cases is correspondingly large. Second, many infected patients are asymptomatic and, therefore, are difficult to identify except through screening programs. Third, because the duration of infection cannot be determined for many patients, prevalence and incidence are difficult to distinguish. Finally, maintaining a surveillance system requires substantial resources for laboratory testing and information systems.
# Reporting Laws
All states should have laws or regulations requiring that information on cases of chlamydial infection be reported to the appropriate public health departments.
- Reporting laws or regulations should acknowledge chlamydia as a public health problem and should provide the legal basis for establishing chlamydia surveillance systems.
- Reporting laws or regulations should allow states to collect information regarding cases of chlamydial infection from health-care providers and laboratories.
Specific categories of data should include demographics (age, race/ethnicity, sex, geographic location, source of report), clinical characteristics (anatomic site, symptoms/signs, treatment), and behaviors (risk factors).
NOTE: In many areas, reporting laws or regulations are also important because they are linked to the laws and regulations that authorize health departments to initiate and support prevention activities such as screening and partner notification.
# Reportable Conditions
The recommended surveillance definition is a case of chlamydial infection diagnosed by a positive laboratory test result. If tests are performed to verify a positive chlamydia test result, reporting should be contingent on verification of the initial positive test result. Reports of chlamydial complications or of conditions serving as surrogates for chlamydial infection (when chlamydia tests are not available) should be analyzed separately. NGU, PID, ophthalmia neonatorum, and epididymitis are the principal conditions that might be reported as complications of, or surrogates for, chlamydial infection. The performance of chlamydia tests and their results should be included with reports of these conditions.
CDC convened a workshop in Atlanta on March 26-28, 1991, to discuss chlamydia prevention. The following persons participated in the workshop and provided expert technical and scientific review:
# INTRODUCTION
Chlamydia trachomatis infections are common in sexually active adolescents and young adults in the United States (CDC, unpublished review ). More than 4 million chlamydial infections occur annually (2,3 ). Infection by this organism is insidious; symptoms are absent or minor among most infected women and many men. This large group of asymptomatic and infectious persons sustains transmission within a community. In addition, these persons are at risk for acute illness and serious long-term sequelae. The direct and indirect costs of chlamydial illness exceed $2.4 billion annually (2-4 )*. Until recently, chlamydia prevention and patient care were impeded by the lack of suitable laboratory tests for screening and diagnosis. Such tests are now available. Through education, screening, partner referral, and proper patient care, public health workers and health-care practitioners can combine efforts to decrease the morbidity and costs resulting from this infection.
# Prevalence of Chlamydial Infection
Adolescents and young adults are at substantial risk of becoming infected with chlamydia. Unrecognized infection is highly prevalent in this group (CDC, unpublished review ). In the United States, published studies of sexually active females screened during visits to health-care providers indicate that age is the sociodemographic factor most strongly associated with chlamydial infection. Prevalence has been highest (>10%) among sexually active adolescent females (CDC, unpublished review ). The prevalence of chlamydial infection also has been higher among those patients who live in inner cities, have a lower socioeconomic status, or are black (5-11 ). Although prevalences have been higher in these subgroups, with few exceptions prevalences are ≥5% regardless of region of the country, urban/rural location of provider, or race/ethnicity (CDC, unpublished review ). Fewer screening studies have been reported for men, but prevalences have been >5% among young men seeking health care for reasons other than genitourinary tract problems. Published reports of these studies from North America and Europe have included male high school students (12 ), military personnel (13,14 ), semen donors (15,16 ), adolescents in detention centers (17-20 ), teens attending adolescent clinics (17,20 ), adolescents attending university health centers (20,21 ), and persons in chemical dependency units (22 ).
# Clinical Spectrum of Chlamydial Infection Among Nonpregnant Women
Pelvic inflammatory disease (PID) accounts for most of the serious acute illness, morbidity, and economic cost resulting from chlamydial infection. Treating women with acute, symptomatic PID is not a sufficient prevention strategy because treatment does not always prevent sequelae and because many women with tubal infection have symptoms too mild or too nonspecific for them to seek treatment. A woman's exposure to chlamydia is usually a result of sexual intercourse. The site of initial infection is most often the cervix; the urethra and the rectum may also be infected (23-28 ). Chlamydia causes symptoms in a minority of infected women (9,29-38 ). When symptoms occur, they include vaginal discharge and dysuria (9,23,25-27,29-33,35-38 ).
The ascension of lower genitourinary tract infection to the endometrium and fallopian tubes may cause lower abdominal pain and menstrual abnormalities. Untreated infections among women often persist for months (7,34,39-41 ). During this period, complications may develop, and many of these women transmit their infection to others. The proportion of women with chlamydial infection who develop infection of the upper reproductive tract (endometritis, salpingitis, and pelvic peritonitis) is uncertain; however, in one study, an estimated 8% of women with chlamydia also had overt salpingitis (42 ). A second study of women with dual gonococcal and chlamydial infections who were treated only for gonorrhea revealed that 30% developed salpingitis during follow-up (43 ). Chlamydia, alone or with other microorganisms, has been isolated from 5% to 50% of women seeking care for symptoms of PID (CDC, unpublished review). Symptomatic PID prompts 2.5 million outpatient visits to physicians annually (44,45 ). More than 275,000 women are hospitalized and more than 100,000 surgical procedures are performed yearly because of PID (45 ). Both the diagnosis and the treatment of PID are unsatisfactory: approximately 17% of women treated for PID will be infertile; an equal proportion will experience chronic pelvic pain as a result of infection (46,47 ); and 10% who do conceive will have an ectopic pregnancy (48 ). The importance of undetected, untreated fallopian tube infections as a cause of infertility and ectopic pregnancy is clear. In two studies of chlamydia and PID in infertile women, PID was the cause of the infertility in half of the women, and anti-chlamydial antibody was strongly associated with a tubal etiology for infertility (49,50 ). Of concern in these studies was the small proportion of women with tubal-factor infertility who reported a past history of salpingitis. Other studies indicate that many of these women may have had salpingitis, but were not treated because symptoms were absent or nonspecific (49,51-60 ). Unrecognized PID also may be a contributor to the occurrence of ectopic pregnancies (60,61 ). Because chlamydial infections are not usually associated with overt symptoms, prevention of infection is the most effective means of preventing sequelae. Pregnant women with chlamydial infection are at risk for postpartum PID. Postpartum and perinatal disease are preventable by treating infected pregnant women and their sex partners. Endometritis, and possibly salpingitis, developed among 10%-28% of pregnant women with untreated chlamydial infection who underwent induced abortions (27,62-64 ). Similarly, endometritis may develop during the late postpartum period among 19%-34% of infected pregnant women who deliver vaginally and at term (65,66 ).
# Clinical Spectrum of Chlamydial Infection Among Infants
Nearly two-thirds of the infants born vaginally to mothers with chlamydial infection become infected during delivery (67,68 ). Even after ophthalmia prophylaxis with silver nitrate or antibiotic ointment, 15%-25% of infants exposed to chlamydia developed chlamydia conjunctivitis, and 3%-16% developed chlamydia pneumonia (41,68-73 ). C. trachomatis is the most common cause of neonatal conjunctivitis (72,74-78 ) and is one of the most common causes of pneumonia during the first few months of life (79-85 ).
Infants with chlamydia pneumonia are at increased risk for abnormal pulmonary function tests later in childhood (86-88).
# Clinical Spectrum of Chlamydial Infection Among Men
Although urethritis is the most common illness resulting from chlamydia, chlamydial infections among men rarely result in sequelae. However, asymptomatic infected men may unknowingly infect their sex partner(s) before seeking treatment. Chlamydial infections among heterosexual men are usually urethral. Symptoms are similar to those of gonorrhea (e.g., urethral discharge or dysuria). In contrast to gonorrheal urethritis, chlamydia symptoms are often absent or mild (13-22,89). Therefore, the number of heterosexual men with asymptomatic chlamydial infections is larger than the number of such men with gonorrhea. Chlamydial infections of the lower genitourinary tract account for 30%-40% of the 4-6 million physician visits for nongonococcal urethritis (NGU), and 50% of the 158,000 outpatient visits and 7,000 hospitalizations for epididymitis among adolescent and young adult males (2,3,90,91). Chlamydial infections among men readily respond to treatment with antibiotics. Most of the morbidity and economic costs of chlamydial infections among heterosexual men result from infection of female sex partners who develop sequelae (3). The rectum is a common site of initial chlamydial infection for men who engage in receptive anal intercourse. Rectal infections are generally asymptomatic, but may cause symptoms characteristic of proctitis (e.g., rectal discharge, pain during defecation) or proctocolitis (92-96).
# Other Chlamydial Illnesses
Among adolescents and young adults of both sexes, chlamydial infection is an important part of the differential diagnosis for other, less common, illnesses. Chlamydia salpingitis may progress to perihepatitis (the Fitz-Hugh-Curtis syndrome) (97). Although chlamydia is an uncommon cause of cystitis symptoms among female patients, chlamydia has been an important cause of such symptoms in some groups of young women who have pyuria but sterile urine cultures (acute dysuria-pyuria syndrome or urethral syndrome) (5,38,98). Another important, but uncommon, complication of urogenital chlamydial infection is Reiter's syndrome (reactive arthritis, conjunctivitis, and urethritis), which occurs primarily among men. Chlamydia is receiving greater emphasis in the differential diagnosis of arthritis in young women because of the availability of diagnostic tests for chlamydia. Chlamydial infection is more likely to be missed among women than among men who have arthritis because the role of urethritis is less obvious in women. Chlamydia is also part of the differential diagnosis of chronic conjunctivitis among adolescents and young adults (99-102). Ocular or ophthalmic infections may result from exposure to infectious genital secretions during oral-genital sexual contact, or by autoinoculation. Although chlamydia can be detected in the pharynx after inoculation from oral-genital exposure (103-111), chlamydia has not been established as a cause of pharyngitis (103,104,110-114).
# PREVENTION STRATEGIES
The principal goal of chlamydia prevention strategies is to prevent both overt and silent chlamydia salpingitis and its sequelae. Other goals include the prevention of perinatal and postpartum infection and other adverse consequences of chlamydial infection.
# General Approach
The prevention of chlamydia salpingitis, pregnancy-related complications, and other chlamydial illnesses requires that chlamydia prevention programs include both primary and secondary prevention strategies.
# Primary Prevention Strategies
Primary prevention strategies are efforts to prevent chlamydial infection. Primary prevention of chlamydia can be accomplished in two general ways.
• Behavioral changes that reduce the risk of acquiring or transmitting infection should be promoted (e.g., delaying age at first intercourse, decreasing the number of sex partners, partner selection, and the use of barrier contraception [condoms]). Efforts to effect behavioral changes are not specific to chlamydia prevention but are also critical components in preventing sexual transmission of the human immunodeficiency virus (HIV) and other sexually transmitted diseases (STDs) (45,115-117).
• Persons with genital chlamydial infection should be identified and treated before they infect their sex partners and, in the case of pregnant women, before they infect their babies. Efforts to detect chlamydial infection are essential to chlamydia prevention. Identifying and treating chlamydial infections require active screening and referral of the sex partners of infected persons, since infections among women and men are usually asymptomatic.
# Secondary Prevention Strategies
Secondary prevention strategies are efforts to prevent complications among persons infected with chlamydia. The most important complication to be prevented is salpingitis and its potential sequelae (i.e., ectopic pregnancy, tubal infertility, and chronic pelvic pain). Secondary prevention of chlamydia salpingitis can be accomplished by a) screening women to identify and treat asymptomatic chlamydial infection; b) treating the female partners of men with infection; and c) recognizing clinical conditions such as mucopurulent cervicitis (MPC) and the urethral syndrome, and then applying appropriate chlamydia diagnostic tests and treatment.
# Target Population
Chlamydial infection is especially prevalent among adolescents. Furthermore, PID occurs more commonly after chlamydial infection among adolescent females than among older women (42). Therefore, chlamydia prevention efforts should be directed toward young women. All sexually active adolescents and young adults are at high risk for chlamydia; the infection is broadly distributed geographically and socioeconomically. Therefore, chlamydia prevention programs should target all sexually active adolescents and young adults. All private and public health-care providers should be involved in these prevention efforts.
# Specific Strategies
Specific strategies for the prevention of chlamydia are grouped into two categories: a) community-based strategies and b) health-care provider strategies.
# Community-Based Strategies
Since the prevalence of chlamydia is consistently high among adolescents and young adults regardless of socioeconomic status, race, or geographic location, prevention efforts should be implemented communitywide. Public Awareness. Community-based strategies should increase public awareness of chlamydia, its consequences, and the availability and importance of diagnosis and treatment.
Groups at high risk for chlamydial infection and the persons who educate and care for them (e.g., parents, teachers, and health-care providers) must be informed about the high rate of genital chlamydial infection and its sequelae among sexually active adolescents and young adults. HIV/STD Risk Reduction Programs. Programs designed to reduce the risk of sexual transmission of HIV and other STDs by means of behavioral changes should emphasize the especially high risk of chlamydial infection. In addition, chlamydial infection may be a sentinel for unsafe sexual practices. Concern about chlamydia may provide additional motivation for persons to delay initiation of sexual activity, limit the number of sex partners, avoid sex partners at increased risk for STDs, and use condoms. Schools. Because of their access to adolescents, educators have an important role to play in chlamydia prevention programs. Chlamydia-specific material should be integrated into educational curricula that address HIV and other STDs. In addition, school programs should assist students in developing the social and behavioral skills needed to avoid chlamydial infection, HIV, and other STDs. Although most school health education curricula address HIV, fewer discuss other STDs (including chlamydia). Information that should be provided through school health education programs is listed below:
• Rates of chlamydial infection among adolescents
• Adverse consequences of chlamydia (e.g., PID and infertility)
• Symptoms and signs of chlamydial infection (and other STDs)
• Asymptomatic infection
• Treatment for sex partners
• Where and how to obtain health care (including locations, telephone numbers [e.g., STD hot-line number], costs, and issues of confidentiality)
Some schools may offer more than classroom instruction (e.g., access to health care for infected persons and screening programs to identify asymptomatic chlamydial infection). Personnel in school-based clinics that perform pelvic examinations should use the opportunity to test for chlamydial infection. However, tests also are needed to identify asymptomatic males (e.g., by screening urine collected during sports physical examinations or other health-screening programs). The leukocyte esterase test (LET) of urine is one possible method for testing males, but the accuracy of the test requires additional evaluation (118). Such approaches to identifying and treating young men with asymptomatic chlamydial infection may prove important since these persons may account for most of the transmission of chlamydia to young women (see Laboratory Testing and Patient Care: An Expanded Role for the Use of Chlamydia Tests). Out-of-School Adolescents. The prevalence of chlamydia may be even greater among adolescents who have dropped out of school. Therefore, organizations serving these adolescents (e.g., Job Corps, vocational training centers, detention centers, community-based recreational programs) should offer health care that addresses chlamydial infection as part of STD/HIV risk reduction programs.
# Health-Care Provider Strategies
Reducing the high prevalence of chlamydial infection requires that health-care providers be aware of its high prevalence, recognize chlamydial illness, screen asymptomatic patients, arrange for the treatment of sex partners, and counsel all sexually active patients about the risks of STD infections.
Medical providers should be trained to recognize and manage the following conditions that may be caused by chlamydia: MPC, PID, and the urethral syndrome (women), and urethritis and epididymitis (men). Screening. The screening of women for chlamydial infection is a critical component of a chlamydia prevention program since many women are asymptomatic, and the infection may persist for extended periods of time. Many women of reproductive age undergo pelvic examination during visits for routine health care or because of illness. During these examinations, specimens can be obtained for chlamydia screening tests. Female patients of adolescent-care providers, women undergoing induced abortion, women attending STD clinics, and women in detention facilities should be screened for chlamydial infection. Screening of these women is important because a) many are adolescents or young adults, b) they are at high risk for salpingitis, and c) they or their partners are likely to transmit infection. Chlamydia screening at family planning and prenatal care clinics is particularly cost-effective because of the large number of sexually active young women who undergo pelvic examinations. Providers such as family physicians, internists, obstetricians-gynecologists, and pediatricians who provide care for sexually active young women also should implement chlamydia screening programs, although a lower volume of such patients may increase the cost of testing. The following criteria can help identify women who should be tested for chlamydia:
• Women with MPC
• Sexually active women <20 years of age
• Women 20-24 years of age who meet either of the following criteria, or women >24 years of age who meet both criteria: inconsistent use of barrier contraception, or new or more than one sex partner during the last 3 months
Patient selection criteria should be evaluated periodically. Although the incidence of chlamydial infection among all women previously tested for chlamydia is unknown, the incidence of chlamydial infection among previously infected adolescent females has been as high as 39% (118). Because of the high incidence of chlamydial infection among sexually active adolescent females, recommendations for the frequency of testing are listed below:
• Women <20 years of age should be tested when undergoing a pelvic examination, unless sexual activity since the last test for chlamydia has been limited to a single, mutually monogamous partner.
• All other women who meet the suggested screening criteria (listed above) should be tested for chlamydia annually.
Although young men infrequently seek routine health care, medical providers should use such opportunities to evaluate them for asymptomatic chlamydial infection, possibly by means of the LET (119) (see Laboratory Testing and Patient Care: An Expanded Role for the Use of Chlamydia Tests). Treatment of Sex Partners. Treatment of the sex partners of infected persons is an important strategy for reaching large numbers of men and women with asymptomatic chlamydial infection. Also, if partners are not treated, reinfection may occur. In addition, treating the male partners of infected women is critical since this is the principal way to eliminate asymptomatic infection among males. If chlamydia screening is widely implemented, the number of infected women identified may exceed the capacity of some public health systems to notify, evaluate, and treat partners.
Therefore, health department personnel should assist health-care providers in developing cooperative approaches to refer partners for treatment. Where possible, health-care providers who treat female patients for chlamydia should offer examination and treatment services for the patients' male sex partner(s), or should arrange the appropriate referral of such partners. Risk Reduction Counseling. In addition to screening, treatment, and referral of the sex partner(s) of persons with chlamydial infection, health-care providers should:
• Educate sexually active patients regarding HIV and other STDs,
• Assess the patients' risk factors for infection,
• Offer at-risk patients advice about behavior changes to reduce the risk of infection,
• Encourage the use of condoms.
Preventing Chlamydial Infection During Pregnancy. To prevent maternal postnatal complications and chlamydial infection among infants, pregnant women should be screened for chlamydia during the third trimester, so that treatment, if needed, will be completed before delivery (see Primary Prevention Strategies). The screening criteria already discussed can identify those at higher risk for infection. Screening during the first trimester prevents transmission of the infection and adverse effects of chlamydia during the pregnancy. However, the evidence for adverse effects during pregnancy is minimal. If screening is performed only during the first trimester, a longer period exists during which infection may be acquired before delivery. Infants with chlamydial infections respond readily to treatment; morbidity can be limited by the early diagnosis and systemic treatment of infants who have conjunctivitis and pneumonia caused by chlamydial infection (secondary prevention strategies). Further, the mothers of infants diagnosed with chlamydial infection and the sex partner(s) of those mothers should be evaluated and treated.
# LABORATORY TESTING
Diagnostic test manufacturers have introduced a variety of nonculture tests for chlamydia, including enzyme immunoassays (EIAs) to detect chlamydia antigens, fluorescein-conjugated monoclonal antibodies for the direct visualization of chlamydia elementary bodies on smears, nucleic acid hybridization tests, and rapid (stat) tests. Because nonculture tests do not require strict handling of specimens, they are easier to perform and less expensive than culture tests; consequently, the number of laboratories and health-care providers offering chlamydia testing has increased. The expanded use of nonculture tests is a cornerstone of chlamydia prevention strategies. Although nonculture tests have advantages that make them more suitable than cell culture tests for widespread screening programs, nonculture tests also have limitations. In particular, nonculture tests are less specific than culture tests and may produce false-positive results. All positive nonculture results should be interpreted as presumptive infection until verified by culture or another nonculture test. The decision to treat and perform additional tests should be based on the specific clinical situation. The test methodologies discussed in these recommendations are for the detection of Chlamydia trachomatis, not other chlamydia species (e.g., C. psittaci and C. pneumoniae). A number of commercial products for the detection of C. trachomatis are available; however, most information on the performance of these products comes from the manufacturers' data.
When possible, health-care providers and laboratory staff should compare a potential nonculture test's performance with that of an appropriate standard in their own laboratory.
# Specimens for Screening
The proper collection and handling of specimens are important in all the methods used to identify chlamydia. Even diagnostic tests with the highest performance ratings cannot produce accurate results when specimens submitted to the laboratory are improperly collected. Clinicians require training and periodic assessment to maintain proper technique. Because chlamydiae are obligate intracellular organisms that infect the columnar epithelium, the objective of specimen collection procedures is to obtain columnar epithelial cells from the endocervix or the urethra. The following recommendations for specimen collection apply to all screening tests.
# Preferred Anatomic Sites
The endocervix is the preferred anatomic site for collecting screening specimens from women. When culture isolation is to be used, processing an additional specimen from the urethra may increase sensitivity by 23% (120). Placing the two specimens in the same transport container is acceptable. For nonculture tests, the usefulness of a second specimen from the urethra has not been determined. The urethra is the preferred site for collecting screening specimens from men.
# Collecting Specimens
The following guidelines are recommended for obtaining endocervical specimens:
• Obtain specimens for chlamydia tests after obtaining specimens for Gram-stained smear, Neisseria gonorrhoeae culture, or Papanicolaou smear.
• Before obtaining a specimen for a chlamydia test, use a sponge or large swab to remove all secretions and discharge from the cervical os.
• For nonculture chlamydia tests, use the swab supplied or specified by the manufacturer of the test.
• Insert the appropriate swab or endocervical brush 1-2 cm into the endocervical canal (i.e., past the squamocolumnar junction). Rotate the swab against the wall of the endocervical canal several times for 10-30 seconds. Withdraw the swab without touching any vaginal surfaces and place it in the appropriate transport medium (culture, EIA, or deoxyribonucleic acid [DNA] probe testing) or prepare a slide (direct fluorescent antibody [DFA] testing).
The following guidelines are recommended for obtaining urethral specimens:
• Delay obtaining specimens until 2 hours after the patient has voided.
• Obtain specimens for chlamydia tests after obtaining specimens for Gram-stained smear or N. gonorrhoeae culture.
• For nonculture chlamydia tests, use the swab supplied or specified by the manufacturer.
• Gently insert the urogenital swab into the urethra (females: 1-2 cm, males: 2-4 cm). Rotate the swab in one direction for at least one revolution for 5 seconds. Withdraw the swab and place it in the appropriate transport medium (culture, EIA, or DNA probe testing) or use the swab to prepare a slide for DFA testing.
# Quality Assurance of Specimen Collection
Without specimen quality assurance, ≥10% of specimens are likely to be unsatisfactory because they contain secretions or exudate but lack urethral or endocervical columnar cells (6,121,122). Periodic cytologic evaluation of specimen quality is recommended when using non-DFA tests to ensure continued proper specimen collection.
# Cell Culture
Compared with other diagnostic tests for C. trachomatis, a major advantage of cell culture isolation is a specificity that approaches 100%.
In cell culture, organisms from each of the three chlamydia species (C. trachomatis, C. pneumoniae, and C. psittaci) grow and produce intracytoplasmic inclusions. The direct visualization of these inclusions contributes to the specificity of cell culture in identifying chlamydia. The preferred method for identifying inclusions is to stain the infected cells with a species-specific, monoclonal fluorescein-labeled antibody (FA) for C. trachomatis. Alternative staining antibodies that bind to chlamydia lipopolysaccharide (LPS) are NOT specific for C. trachomatis and also will stain C. psittaci and C. pneumoniae inclusions. In these recommendations, the use of C. trachomatis-specific, anti-major outer membrane protein (MOMP) antibody is the presumed method for the detection of C. trachomatis isolated in cell culture. Culture sensitivity is approximately 70%-90% in experienced laboratories (123). Since culture amplifies small numbers of organisms, it is preferred for specimens in which low numbers of organisms are anticipated (e.g., asymptomatic infection). Culture also allows the organism to be preserved for additional studies, including the determination of immunotype (serovar) and antimicrobial susceptibilities. Cell culture isolation has several disadvantages. First, the method is technically difficult and requires 3-7 days to obtain a result. Second, since only viable organisms are detected, special transport media must be used, and transportation and storage temperature requirements are stringent. Finally, some specimens contain contaminating microorganisms or substances that are toxic to the cell monolayers used for isolating chlamydia.
# Indications for the Use of Cell Culture
A high-quality culture system for chlamydia is the diagnostic method of choice and is essential for the diagnosis of chlamydia in all medical/legal situations. Culture serves as the standard for the quality assurance of nonculture tests and for the evaluation of new diagnostic methods. Culture is recommended for the detection of chlamydia in specimens for which nonculture methods have not been developed, have not been adequately evaluated, or perform poorly. Culture is recommended for specimens from the following sites:
• Urethral specimens (women and asymptomatic men).
• Nasopharyngeal specimens (infants).
• Rectal specimens (all patients, regardless of age).
• Vaginal specimens (prepubertal girls).
# Collecting Specimens for Cell Culture
Swabs with plastic or wire shafts can be used to obtain specimens for cell culture. Swab tips can be made of cotton, rayon, Dacron, or calcium alginate. Swabs with wooden shafts should not be used because the wood may contain substances that are toxic to chlamydia. As part of routine quality control, samples of each lot of swabs that are used to collect specimens for chlamydia isolation should be screened for possible toxicity to chlamydia. The substitution of an endocervical brush for a swab may increase the sensitivity of culture for endocervical specimens from nonpregnant women (124). However, the use of the endocervical brush may induce bleeding. Although such bleeding does not interfere with the isolation of C. trachomatis, patients should be advised about possible spotting.
# Transporting Cell Culture Specimens
The viability of chlamydia organisms must be maintained during transport to the laboratory. Consult with the laboratory regarding the selection of appropriate transport medium and procedures (125,126).
Specific recommendations for transporting specimens are listed below:
• Specimens for culture should be stored at 4 °C and inoculated in cell culture as quickly as possible after collection. The elapsed time until inoculation should not exceed 24 hours. If specimens cannot be inoculated within 24 hours, they should be maintained at -70 °C or colder.
• Specimens for culture should never be stored at -20 °C or in "frost-free" freezers. These conditions cause a decrease in chlamydia viability.
• Only cell lines that have been checked for their ability to support chlamydial growth should be used.
# Analyzing Cell Culture Specimens
• The identification of C. trachomatis should be based on visualizing characteristic chlamydia inclusions using species-specific FA staining.
• Laboratories should determine periodically whether multiple passages will increase sensitivity.
NOTE: The specificity of culture is based on the identification of inclusions. The identification of chlamydia after culture by EIA, or any other method that does not identify characteristic intracytoplasmic inclusions, is NOT recommended.
# Interpreting Cell Culture Results
A positive cell culture and visualization of characteristic inclusions by species-specific monoclonal FA staining are required to diagnose a chlamydial infection. With proper technique, the specificity of cell culture isolation of C. trachomatis is nearly 100%. A negative chlamydia culture does not rule out the presence of a chlamydial infection. The sensitivity of culture is approximately 70%-90% when recommended laboratory and specimen collection techniques are used.
# Quality Assurance of Cell Culture Specimen Collection
The isolation and detection of chlamydia in cell culture are technically demanding. The training of clinicians and laboratorians, along with quality assurance of laboratory performance and specimen collection, is critical to performing cell culture adequately. Specific recommendations for cell culture specimens are listed below:
• To monitor the sensitivity and specificity of the cell culture system, laboratories should include appropriate positive and negative control samples when clinical specimens are cultured.
• Periodically (i.e., monthly or bimonthly), positive controls should be submitted from patient settings to verify the effectiveness of transport systems.
• Reference laboratories should provide a quality assurance system for monitoring culture performance and specimen adequacy. These laboratories also should provide assistance in the evaluation of unexpected or discrepant results.
# Nonculture Chlamydia Tests
More aggressive prevention strategies are now possible with the availability of nonculture test methods for the detection of chlamydia. Published evaluations of the performance of nonculture chlamydia tests are based primarily on the MicroTrak® DFA (Syva) test and the Chlamydiazyme® EIA (Abbott) test. Additional tests have been approved by the Food and Drug Administration (FDA) for the detection of chlamydial infection. Published information on these other tests is increasing rapidly, but is still limited. The recommended uses of nonculture chlamydia tests for presumptive diagnosis, with and without additional testing to verify a positive result, are summarized in these recommendations (Table 1).
# Description of Nonculture Tests
Nonculture chlamydia tests include several categories that differ in format and execution. Direct Fluorescent Antibody (DFA) Tests.
Depending on the commercial product used, the antigen that is detected by the antibody in the DFA procedure is either the MOMP or LPS. Specimen material is obtained with a swab or endocervical brush, which is then rolled over the specimen "well" of a slide. After the slide has dried and the fixative has been applied, the slide can be stored or shipped at ambient temperature. The slide should be processed by the laboratory within 7 days after the specimen has been obtained. Staining consists of covering the smear with fluorescent monoclonal antibody that binds to chlamydia elementary bodies. Stained elementary bodies are then identified by fluorescence microscopy. Total processing time is 30-40 minutes. Only C. trachomatis organisms will stain with the anti-MOMP antibodies used in commercial kits. Cross-reactions can occur between the anti-LPS antibodies used in commercial kits and other bacterial species, as well as with C. pneumoniae and C. psittaci. Enzyme Immunoassay (EIA) Tests. EIA tests detect chlamydia LPS with a monoclonal or polyclonal antibody that has been labeled with an enzyme. The enzyme converts a colorless substrate into a colored product. The intensity of the color is measured with a spectrophotometer, which provides a numerical readout. The specimens for EIA are collected by using specimen collection kits supplied by the manufacturer. These kits include swabs, transport tubes, and instructions (to obtain optimum performance of the test kit, the manufacturer's instructions should be followed). Specimens can be stored and transported at ambient temperature and should be processed within the time indicated by the manufacturer. Total processing time is 3-4 hours. One disadvantage of the EIA methods that detect LPS is that cross-reaction of the antibody with other microorganisms leads to false-positive results (125,127-129). In addition, the LPS-based EIA tests detect all three chlamydia species and, therefore, are not specific for C. trachomatis. Some manufacturers have developed blocking assays that are used to verify positive EIA test results. The test is repeated on initially positive specimens with the addition of a monoclonal antibody specific for chlamydia LPS. The monoclonal antibody competitively inhibits chlamydia-specific binding by the enzyme-labeled antibody; a negative result with the blocking antibody is interpreted as verification of the initial positive test result. Nucleic Acid Hybridization Tests (DNA Probe). Nucleic acid hybridization methods can be used for the diagnosis of chlamydial infections. In the hybridization assay, a chemiluminescent DNA probe that is complementary to a specific sequence of C. trachomatis ribosomal RNA (rRNA) is allowed to hybridize to any chlamydia rRNA that is present in the specimen. The resulting DNA:rRNA hybrids are adsorbed to magnetic particles and are then detected by using a luminometer that provides a numerical readout. The test kit includes a specimen collection swab and transport medium. Specimens should be maintained at temperatures of 2-25 °C during transport and storage. The manufacturer suggests that assays be performed within 7 days after collection; otherwise, the specimens should be stored at -20 °C or colder. Total processing time is 2-3 hours. The technical requirements and the necessary expertise to perform nucleic acid hybridization tests are similar to those of the EIA methods. The probe assay is specific for C. trachomatis; cross-reactions with organisms other than C. trachomatis, including C. pneumoniae and C. psittaci, have not been reported.
A competitive probe assay has been developed to provide a means of assuring high specificity. The usefulness of the competitive assay is being evaluated in clinical trials, and the assay has not been approved for in vitro use by the FDA. Rapid Chlamydia Tests. Chlamydia tests have been developed that can be performed within 30 minutes, do not require expensive or sophisticated equipment, and are packaged as single units. The results are read qualitatively. These rapid or stat tests can offer advantages in physicians' offices, small clinics and hospitals, and settings in which results are needed immediately (e.g., when making decisions about additional testing or treatment while the patient is still present). Like EIAs, these tests use antibodies against LPS that detect all three chlamydia species and are subject to the same potential for false-positive results due to cross-reactions with other microorganisms. The performance characteristics of these tests have not been extensively evaluated. In addition, since rapid chlamydia tests are designed to be performed by nonlaboratory personnel, quality assurance is essential. Personnel requirements, quality assurance, and quality control requirements relating to the use of these and other tests are published in the Clinical Laboratory Improvement Amendments (CLIA) (130) and are governed by the test categorization compilation. Leukocyte Esterase Test (LET). The LET is a dipstick test that is applied to urine specimens to screen for urinary tract infection. The LET detects enzymes that are produced by polymorphonuclear leukocytes. These enzymes hydrolyze an indoxyl carbonic acid ester on the dipstick to indoxyl, which reacts with an indicator in the strip to produce a purple color. The procedure requires <2 minutes to perform after the specimen is collected.
# Accuracy of Nonculture Tests and Recommendations for Use
For small-volume testing, DFA may be preferred because the quality of the specimen can be assessed. Automated tests are probably preferable for high-volume testing, but a quality control system is essential for monitoring the adequacy of specimens. Published evaluations of the performance of nonculture chlamydia tests are based primarily on the MicroTrak® DFA (Syva) test and the Chlamydiazyme® EIA (Abbott) test. Additional tests have been approved by the FDA for the detection of chlamydial infection. Published information on these other tests is increasing, but is still limited. Cervical Specimens from Women (Sensitivity). The performance characteristics of the MicroTrak® DFA and Chlamydiazyme® nonculture chlamydia tests have been reported in published evaluations of women in high-prevalence (≥5%) patient populations. Sensitivities, using culture as a standard, have varied greatly but generally exceed 70% (CDC, unpublished review). This variability in reported sensitivity is a result of differences in specimen collection technique, performance of the culture system used as a standard, and patient characteristics (e.g., age, MPC, and other factors such as prevalence of chlamydial infection, duration of infection, and previous exposure to chlamydia) (121,122,128,131-136). A sensitivity of at least 70% is adequate for screening, but not sufficient to exclude chlamydial infection (see Patient Care: An Expanded Role for the Use of Chlamydia Tests).
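The arithmetic behind this caveat can be made concrete. The following sketch is not part of the original recommendations; the 98% specificity and 15% prevalence figures are assumptions borrowed from the PVP example in the next section, chosen only to illustrate why a 70%-sensitive test cannot rule out infection:

```python
# Minimal sketch: why ~70% sensitivity is adequate for screening but cannot
# exclude infection. All inputs are illustrative assumptions.

def prob_infected_given_negative(sensitivity, specificity, prevalence):
    """Proportion of negative test results that come from infected patients."""
    false_negatives = prevalence * (1 - sensitivity)    # infected but test-negative
    true_negatives = (1 - prevalence) * specificity     # uninfected and test-negative
    return false_negatives / (false_negatives + true_negatives)

# With sensitivity 0.70, specificity 0.98, and prevalence 0.15, roughly 5% of
# negative results come from infected patients, so a negative result cannot
# rule out chlamydial infection.
print(f"{prob_infected_given_negative(0.70, 0.98, 0.15):.1%}")  # ~5.1%
```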
Evaluations have been presented or published for the use of a number of tests for cervical specimens from women. Such tests include the IDEIA® (Dako Diagnostics), Pace 2® (Gen-Probe), MicroTrak® EIA (Syva), and Clearview® (Unipath) tests. Sensitivities reported for these tests are comparable to sensitivities reported for Chlamydiazyme® and MicroTrak® DFA. However, chlamydia diagnostics experts recommend additional evaluation of these tests. First, the true sensitivity of these tests may be less than that indicated in published reports. The number of evaluations published for any given test is small, and the number of patients with chlamydial infections studied in most evaluations is also small. Therefore, sensitivities may prove to be lower (or higher) as more evaluations are reported. More evaluations need to be conducted by laboratories that have quality assurance programs for culture, including, for example, exchange of specimens with other laboratories (proficiency testing). Evaluations that compare new tests directly with established tests would aid in determining the performance of new tests. Second, none of these tests has been adequately evaluated in low-prevalence patient populations. Diagnostics experts are concerned that differences in factors such as previous exposure to chlamydia and duration of infection may result in lower test sensitivities in low-prevalence populations (128). Third, although the reported performance of the Clearview® rapid (stat) test approaches that of Chlamydiazyme® and MicroTrak® DFA, additional evaluations are needed to determine its performance when the test is used in an outpatient setting during the patient's visit. Too few evaluations have been reported for other chlamydia tests to assess their sensitivities (≤3 patient care-laboratory settings or 240 patients with chlamydia isolated by cell culture). Specificity and Predictive Value Positive. The specificity of nonculture tests for cervical specimens has been high (97%-99%). However, clinicians should be aware that even with these high specificities, false-positive results account for an important proportion of all positive test results among groups of patients with a low prevalence of chlamydial infection. The effect of false-positive tests in a population can be quantified by the predictive value positive (PVP). The PVP is the proportion of all persons who have a positive test result for a condition who actually have that condition. In chlamydia screening applications, the PVP is influenced primarily by the specificity of the test and the prevalence of chlamydia. For example, when a nonculture test with a specificity of 98% and a sensitivity of 80% is used to screen 1,000 patients from a high-risk patient population with a chlamydia prevalence of 15% (150 patients have an infection), the test produces 137 positive results: 120 patients are actually infected and 17 are not infected (false positives). The PVP is 120/137 = 0.88. When this same test is used to screen 1,000 patients from a low-risk patient population with a chlamydia prevalence of only 2% (20 patients have an infection), 36 positive results are obtained: 16 patients are infected and 20 are not. The PVP is 16/36 = 0.44. In the low-risk patient population, fewer than half the patients with positive tests actually have chlamydial infection; the remainder are at risk of being incorrectly identified as having an STD unless the clinician takes other measures, such as arranging for verification when screening results are positive.
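The PVP arithmetic in the worked example above can be reproduced with a short calculation. The sketch below is illustrative only; counts are rounded to whole patients to match the rounding used in the text:

```python
# Minimal sketch of the predictive value positive (PVP) calculation used in
# the worked example above (sensitivity 80%, specificity 98%, n = 1,000).

def predictive_value_positive(sensitivity, specificity, prevalence, n=1000):
    infected = round(n * prevalence)
    true_pos = round(infected * sensitivity)                # infected, test-positive
    false_pos = round((n - infected) * (1 - specificity))   # uninfected, test-positive
    return true_pos, false_pos, true_pos / (true_pos + false_pos)

print(predictive_value_positive(0.80, 0.98, 0.15))  # (120, 17, ~0.88) high prevalence
print(predictive_value_positive(0.80, 0.98, 0.02))  # (16, 20, ~0.44) low prevalence
```

Running the same test at 15% and 2% prevalence reproduces the PVPs of 0.88 and 0.44 given above, which is the quantitative basis for verifying positive results in low-prevalence populations.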
Specificities of the nonculture tests have been <99% in most published evaluations, even when a third test was used to detect false-negative cultures. Contamination of cervical specimens with vaginal secretions is responsible for some false-positive tests when using Chlamydiazyme®. The polyclonal anti-LPS antibody used in Chlamydiazyme® cross-reacts with LPS on other bacteria found in the vagina and urinary tract of these patients (121,122,125,127,128,137). The source of nonspecific fluorescence with MicroTrak® DFA and of nonspecific signals with tests using monoclonal antibodies or genetic probes has not been determined. NOTE: Hereafter, a prevalence of <5% is considered to be "low prevalence." At a 5% prevalence (specificity 99% and sensitivity 80%), the PVP would be 80%. Although defining low prevalence as <5% is arbitrary, the potential for 20% of positive test results to be falsely positive (in the absence of a second test for verification) supports the utility of this cutoff. Verifying Positive Screening Test Results. Clinicians should verify positive screening test results with a supplemental test if a false-positive test result is likely to have adverse medical, social, or psychological consequences (Table 1). Verification should probably be routine in low-prevalence patient populations, but might be selective in high-prevalence populations. Methods that are suitable for verifying positive nonculture tests for cervical specimens are recommended below:
• Verify positive tests by culture, using fluorescein-conjugated C. trachomatis-specific antibody. Culture is the most sensitive and specific method for verification. However, verification with culture requires a second specimen that is collected either at the time the first specimen is collected or during a return visit. Because of the limited availability of culture testing and the requirement for a second specimen, culture should be used for verification only if very high specificity is required (e.g., medical/legal situations).
• Perform a second nonculture test that identifies a C. trachomatis antigen or a nucleic acid sequence that is different from that identified by the screening test. Theoretically, detecting a second, highly specific antigen or nucleic acid sequence should provide adequate specificity for verification. However, this approach has had limited evaluation in field trials. A positive screening test may be verified by a second test using a second specimen that is collected either at the time the first specimen is collected or during a return visit. To avoid taking a second specimen, some EIA test manufacturers suggest using the excess specimen in the EIA transport medium for verification of positive EIA tests with a DFA test.
• Use an unlabeled "blocking" antibody or "competitive" probe that verifies a positive test result by preventing attachment of the labeled antibody or probe that is used in the standard assay. These methods are theoretically less desirable because they identify the same molecules as those identified by the initial nonculture tests. However, the Chlamydiazyme® blocking antibody test appears to produce adequate results (127,138-142). These methods do not require collecting a second specimen.
With the exception of the Chlamydiazyme® blocking antibody test, nonculture methods of verifying initial positive chlamydia test results have received insufficient evaluation.
Since the sensitivity of tests used for verification of positive chlamydia tests is uncertain but <90%, failure to verify the initial positive test does not rule out chlamydial infection. Evaluation studies are needed to determine which of the above approaches is preferable in relation to sensitivity, specificity, and cost (see Clinician-Laboratory Protocols for Verifying Positive Tests). Urethral Specimens from Men. The use of the Chlamydiazyme® EIA test and the MicroTrak® DFA test for urethral specimens from males with symptomatic urethritis has been evaluated in published studies. Reported sensitivities of the Chlamydiazyme® EIA and MicroTrak® DFA tests among men with chlamydial urethritis have been highly variable but usually exceed 70%. These sensitivities are sufficiently high to recommend using these tests to detect chlamydial infections among men with symptomatic urethritis. In such men, detection of chlamydial infection may be useful in promoting examination and treatment of sex partners and in managing the patient's infection. Neither nonculture tests nor tissue culture isolation is sufficiently sensitive to rule out chlamydial infection based solely on a negative test result. Positive Chlamydiazyme® and MicroTrak® DFA tests have been highly predictive of chlamydial infection among adolescent and young adult men with symptomatic urethritis. The reported specificities of the Chlamydiazyme® EIA and MicroTrak® DFA tests for urethral specimens from men with symptomatic urethritis have been sufficiently high (97% to ≥99%). Because Chlamydiazyme® may yield false-positive results from the urine of men with bladder infections, a positive nonculture test result may be less predictive of chlamydial infection among older men, who have a higher incidence of nonchlamydial urinary tract infection (143). Because information on the performance of the nonculture tests among asymptomatic men is limited and tests for identifying asymptomatic urethral infection among men may be insensitive, none of the nonculture tests is recommended for this group of patients.
# Other Nonculture Test Specimens
Urine. The LET can be used to screen sexually active teenage males for urethritis, which is often caused by chlamydia or gonorrhea. Patients with positive results indicating the presence of urethritis require specific tests for C. trachomatis and N. gonorrhoeae. Reported sensitivities of the LET in screening for chlamydia and gonorrhea range from 46% to 100%, and specificities range from 83% to 100% (19,20,144,145). Data are insufficient to recommend its use for older males or for women. Rectum. Culture is the preferred method for detecting chlamydia in rectal specimens. Some DFA reagents have been approved by the FDA for evaluating rectal specimens; however, the specificity of the test for rectal specimens may be less than that for specimens from the cervix or the urethra. If DFA is used, slides should be read only by highly experienced microscopists. Conjunctiva. Although data are limited, nonculture tests have performed at least as well with conjunctival specimens as with genital specimens. Nasopharynx (Infants). Nonculture tests have not been adequately evaluated for the detection of C. trachomatis in nasopharyngeal specimens. Many of these tests cannot distinguish among C. trachomatis, C. pneumoniae, and C. psittaci. Serum. Chlamydia serology has little value in the routine clinical care of genital tract infections.
Commercial serologic tests are not useful in routine diagnosis because previous chlamydial infections elicit long-lasting antibodies that cannot be easily distinguished from the antibodies produced in a current infection. Immunoglobulin M microimmunofluorescence (MIF) is the test used frequently for the diagnosis of chlamydial pneumonia among infants. Chlamydia serology is also useful for persons with symptoms consistent with lymphogranuloma venereum (LGV). For such persons, a fourfold rise in MIF titer to LGV antigens or a complement fixation titer of ≥1:32 supports a presumptive diagnosis of LGV. The difficulties in preparing antigens and in performing these tests restrict their use to a limited number of reference and research laboratories. Post-Treatment Tests. If a post-treatment test is performed using a nonculture test, the test should be scheduled a minimum of 3 weeks after completion of antimicrobial therapy. Tests performed earlier may yield false-negative results because of small numbers of chlamydia organisms; in addition, the presence of dead organisms may cause false-positive nonculture test results (see Follow-up of Patients Treated for Chlamydial Infection).
# Quality Assurance of Nonculture Tests
As more laboratories begin to provide diagnostic services for chlamydial infections, the development of an infrastructure for laboratory quality assurance is increasingly important. Sites in which laboratory testing is performed must adhere to the CLIA regulations for staffing, professional training, patient test management, quality assurance, and quality control (130). Federal and state regulatory agencies that monitor quality assurance and quality control may have additional requirements and should be consulted for specific information. Each laboratory should verify the accuracy of nonculture test methods by periodically comparing its results with those obtained by using a high-quality culture system. In addition, CLIA recommends that laboratories enroll in proficiency testing programs, such as those provided by the College of American Pathologists and the American Proficiency Institute. These quality assurance measures are especially important when a laboratory implements a new test method. Training is recommended for the performance of all laboratory tests for chlamydia. Sources of training include product manufacturers, the National Laboratory Training Network (a source of instructional material that also may assist in locating or cosponsoring training workshops), and individual state public health laboratories. All clinicians, particularly new providers of chlamydia testing services, should be trained to obtain adequate specimens. This training should include a) instruction in obtaining sufficient numbers of cells from any particular site, and b) instruction in obtaining endocervical cells rather than ectocervical cells or vaginal material from the cervix.
# Laboratory Testing for Sexual Assault and Abuse Victims
Detailed information concerning the evaluation and treatment of suspected victims of sexual assault or abuse may be obtained from the 1993 Sexually Transmitted Diseases Treatment Guidelines (146) and the Sexually Transmitted Diseases Clinical Practice Guidelines 1991 (147). Specimens that are taken for chlamydia testing immediately after sexual assault may yield false-negative results because of the small numbers of organisms present early in infection. Also, tests may be positive because of prior, not assault-acquired, infection.
Specimens for chlamydia cultures should be obtained from adults and adolescents during the initial evaluation and at a follow-up visit 2 weeks later. Specimens should be obtained from all sites of exposure. Among children, specimens should be routinely collected from the pharynx and the rectum in addition to the vagina (girls). In the absence of signs of urethral infection, obtaining a urethral specimen from boys may not be justified because of the relatively low yield of positive test results and the discomfort associated with obtaining the specimen. The decision to obtain specimens at a follow-up examination must be made on a case-by-case basis. Obtaining such follow-up specimens may not be justified if the exposure occurred several days or more before the initial examination, or if the examination would be psychologically traumatic. Only cell culture isolation using standard methods employing C. trachomatis-specific antibodies should be used to detect C. trachomatis infection in the investigation of possible sexual abuse (129). Nonculture tests are not sufficiently sensitive or specific to be used in the investigation of sexual abuse (129,138,148-151). All specimens and isolates from both suspected victims and alleged assailants should be stored at -70 °C or colder in case additional testing by a qualified reference laboratory is needed.
# Future Directions for Laboratory Testing
The availability of sensitive and noninvasive C. trachomatis-specific screening tests for men and women (e.g., urine tests) will greatly expand the population that can be screened. For women, such tests are not yet available. For men, sensitivities of EIA methods for detecting chlamydia in first-voided urine range from 30% in a group of asymptomatic men (152) to approximately 88% in men with symptoms (153). Although specificity is ≥97%, increased rates of false-positive results have been reported with specimens from men with urinary tract infections caused by Escherichia coli and Klebsiella pneumoniae (143). Further studies are needed before noninvasive C. trachomatis screening tests for asymptomatic men can be recommended. Recent evidence suggests that the numerical results of nonculture tests, together with the cutoff point for a positive result, might aid laboratories and clinicians in determining when to perform a second test to verify the initial results (139,154). Positive results that substantially exceed the cutoff point may be more likely to be true positives than those near the cutoff point. Similarly, negative results that are close to the cutoff point may be less likely to be true negatives than those with lower values. By establishing a zone just above and just below the cutoff, laboratories could identify specimens giving low-positive or high-negative results for evaluation by a second test, as in the sketch below. The desired effect would be to increase both the sensitivity and the specificity of testing. Although studies are in progress, data are insufficient for a formal recommendation on the use of numerical results. Technologies will continue to be refined, thereby improving both the sensitivity and the specificity of available tests for sexually transmitted chlamydial infections. Because of their ability to amplify chlamydia DNA in the specimen, new technologies such as polymerase chain reaction (PCR) and ligase chain reaction (LCR) promise specificities equal to, and sensitivities higher than, those of culture.
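To make the cutoff-zone idea referenced above concrete, the following sketch shows one possible retesting rule. The cutoff and the zone width are hypothetical values chosen for illustration; as the text notes, data are insufficient for a formal recommendation:

```python
# Minimal sketch of a retest zone around a nonculture test's numerical cutoff.
# The zone width (±20% of the cutoff) is a hypothetical illustration only.

def classify_readout(signal, cutoff, zone_fraction=0.20):
    """Classify a numerical readout, flagging values near the cutoff for retest."""
    lower = cutoff * (1 - zone_fraction)   # high-negative boundary
    upper = cutoff * (1 + zone_fraction)   # low-positive boundary
    if lower <= signal <= upper:
        return "equivocal: perform a second test to verify"
    return "positive" if signal > cutoff else "negative"

print(classify_readout(signal=1.05, cutoff=1.0))  # near the cutoff -> retest
print(classify_readout(signal=2.40, cutoff=1.0))  # well above the cutoff -> positive
print(classify_readout(signal=0.30, cutoff=1.0))  # well below the cutoff -> negative
```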
# PATIENT CARE: AN EXPANDED ROLE FOR THE USE OF CHLAMYDIA TESTS
The Chlamydia trachomatis infections policy guidelines published in 1985 (1) emphasized the need to include treatment for chlamydia in regimens for patients whose diagnoses were strongly associated with chlamydial infection. The increasing availability of accurate and economical chlamydia tests permits widespread screening of asymptomatic persons; it also suggests that treating symptomatic patients and referring sex partners without testing, a key strategy in past guidelines, should be discouraged. These tests should be used to diagnose chlamydia for patients with symptoms or signs suggestive of chlamydial infection, even if therapy is administered and partners are referred before test results are available. A specific chlamydia diagnosis should facilitate sex partner referral since a positive chlamydia test result indicates that the patient's infection is sexually transmitted. A specific diagnosis may also facilitate medical care for patients who do not respond as expected to initial chlamydia therapy. Some providers do not have the resources both to screen asymptomatic patients for chlamydia and to test patients whose conditions warrant presumptive treatment and partner referral. If presumptive treatment of patients with symptoms or signs of chlamydia without testing is elected, efforts must be made to ensure the treatment of partners. Although the benefits of chlamydia tests for screening and diagnosis justify their use, the potential for adverse consequences must also be recognized and steps taken to minimize them. Adverse consequences for infected patients and their sex partners, including disease complications and transmission of chlamydial infection, may occur if chlamydia treatment is delayed while waiting for test results or if treatment is withheld because of a false-negative test result. Adverse consequences also may occur if uninfected patients and their sex partners are treated unnecessarily because the chlamydia test result is unavailable or false-positive. The adverse consequences of treating uninfected persons are more likely to be psychosocial, resulting from the misdiagnosis of a sexually transmitted infection; adverse effects of the antibiotics used to treat chlamydia are relatively uncommon and mild. These adverse consequences can be avoided by a) treating patients who are symptomatic or have a substantially increased risk of chlamydial infection, and their sex partners, for chlamydia without waiting for chlamydia test results; and b) arranging for a second test to verify an initial positive screening test result for patients and their sex partners who are susceptible to the adverse psychosocial consequences of a false diagnosis of an STD.
# Presumptive Diagnosis of Chlamydial Infection
Several conditions (NGU, PID, epididymitis, and gonococcal infection) are consistently associated with an increased prevalence of chlamydial infection among patients and their sex partners (Table 2). Patients with these illnesses require immediate treatment to relieve symptoms and/or to prevent complications. Treatment for these conditions should include an antibiotic regimen for chlamydia. Sex partners of infected patients should be evaluated and treated for chlamydia without waiting for the patient's test results. Immediate chlamydia treatment of MPC is warranted by the increased prevalence of chlamydia among women with this condition in most patient-care settings.
However, immediate chlamydia treatment of MPC or the urethral syndrome among females, or of proctitis among homosexual males, and the referral of the sex partners of patients with any of these conditions before obtaining a microbiologic diagnosis, may not always be warranted (Table 3). Chlamydia risk factors (e.g., age <25 years, having new or multiple sex partners) and the likelihood of compliance with follow-up visits should be considered when deciding whether to defer chlamydia treatment and referral of sex partners until chlamydia test results are available. Whenever possible, these decisions should be based on local estimates of chlamydia prevalence.
# Chlamydia Tests for Patients Who Are Treated Presumptively
Even if a patient with a presumptive diagnosis of chlamydial infection will be treated and counseled to refer partners before test results are known, chlamydia tests should be performed for the following reasons:
• To ensure appropriate medical care, particularly if symptoms persist,
• To facilitate counseling of the patient,
• To provide firm grounds for partner notification,
• To improve compliance.
Limited resources may require health-care providers to decide between performing chlamydia tests for patients who would be treated for chlamydia because of symptoms or signs and performing them for patients who are asymptomatic and would not otherwise receive chlamydia treatment. If the health-care provider elects to provide presumptive treatment for patients with symptoms or signs, but without chlamydia testing, efforts must be made to ensure the treatment of the patients' sex partners.
# Screening Women for Chlamydial Infection
Screening of women for chlamydial infection is a principal element of a chlamydia prevention program (see Health-Care Provider Strategies). The decision to provide treatment for patients whose screening results are positive, and to evaluate and treat their sex partners, depends upon the patient's risk for a sexually transmitted infection and the potential for adverse psychosocial consequences. A positive second chlamydia test strongly supports the validity of a positive screening test; a negative second test following a positive screening test does not rule out chlamydial infection. Verification of the initial positive chlamydia test result should be obtained for persons who have a positive nonculture chlamydia test and who are at low risk for infection (e.g., persons involved in a monogamous relationship, persons with no history of sexually transmitted infection, or members of low-prevalence [<5%] patient populations) or for whom a misdiagnosis of chlamydial infection could lead to social/psychological distress. If verification of the initial positive chlamydia test result is indicated, these patients and their sex partners should be treated while waiting for the results of the supplementary test. The health-care provider should postpone treatment and partner referral only if the likelihood and adverse consequences of a false-positive test outweigh the risks of transmission and disease progression. Risk factors for chlamydial infection and the probability that the patient will return for follow-up visits should also be considered (see Nonculture Chlamydia Tests and Clinician-Laboratory Protocols for Verifying Positive Tests).
# Screening Men for Chlamydial Infection
Screening tests for chlamydia would be more acceptable if urine rather than intraurethral swab specimens could be used. Chlamydia-specific nonculture tests (i.e., EIA, DFA, and nucleic acid probe tests) have not been adequately evaluated for use with urine.
However, the leukocyte esterase test (LET) detects inflammatory cells in urine. Persons whose LET results are positive require further evaluation of the cause of inflammation, which generally will be urethritis due to chlamydia, N. gonorrhoeae, or other sexually transmitted agents, or occasionally a urinary tract infection unrelated to a sexually transmitted agent. The utility of the LET in detecting urethritis has been evaluated primarily among adolescent males, for whom urinary tract infections other than urethritis are rare. In this group, the sensitivity of the LET in detecting asymptomatic chlamydial infection has varied from 46% to 100% (19,20,144,145). Compared with chlamydia-specific tests, the LET is inexpensive and easy to use, and it provides immediate results. Although the test cannot exclude infection among asymptomatic males, continued evaluation of the LET is recommended to better define its usefulness in detecting asymptomatic chlamydial (and gonococcal) infections.

# Physical Examination of Sex Partners

Female partners of males with chlamydial infection should be referred for examination, chlamydia testing, and treatment. The examination and testing of female partners are recommended because a) sensitive tests are available, and a positive test result may lead to the treatment of additional partners who are likely to be infected; b) women can be asymptomatic but, when examined, have signs of PID, which requires more intensive therapy; and c) women may be asymptomatically infected with other STDs. However, information is needed on the rates of PID and other STDs among female partners of infected men.

Male sex partners of females with chlamydial infections should be evaluated for symptoms of chlamydia and other sexually transmitted infections and for allergy to the treatment drug. A physical examination of male sex partners should be encouraged, but an examination is less important than treatment. The examination and testing of asymptomatic male partners are recommended because a) a positive test result may lead to the treatment of additional partners who are likely to be infected, b) men can be asymptomatically infected with other STDs, and c) male partners may be allergic to the treatment drug. However, chlamydia tests for asymptomatic males are insensitive. Further, low rates of other STDs among asymptomatic male partners of women with chlamydial infection have been demonstrated in limited studies. Also, many males do not have readily identifiable sources of medical care for STDs and so may be unlikely to be evaluated by a clinician, even if asked to do so by a sex partner. For some male partners of women with chlamydial infection, therefore, it may be reasonable for the woman's clinician to evaluate the male partner, even if this means evaluation without a physical examination in the case of providers who do not offer health-care services for men. Although approaches to evaluating male partners without a physical examination have not been adequately studied, evaluation of the male partner could be performed at the clinician's office or possibly by telephone. Before prescribing treatment without an examination, the clinician should determine that the male partner does not have symptoms suggestive of another STD and is not allergic to the treatment drug.
# Exposure Periods

For women with chlamydial infections and for asymptomatically infected men, health-care providers should treat all sex partners with whom patients have ongoing sexual relations and all other partners with whom patients have had sexual exposures within the 60 days before the date of the patient's examination/test. For males with symptomatic chlamydial infection, health-care providers should treat all sex partners with whom the patient has had sexual exposures within the 30 days before the onset of symptoms; this 30-day period is sufficient to detect the person(s) who probably transmitted the infection to the index patient, as well as recent sex partners who may have been exposed to the infection by the patient. For males and females with asymptomatic infections, the longer exposure period helps to identify additional infected partners. These extended periods, however, have received insufficient evaluation to support specific recommendations. If no sexual exposure has occurred within the specified exposure periods, the most recent sex partner is presumed to be at increased risk for chlamydial infection and should be evaluated.

# Responsibility for Referral of Sex Partners

Health-care providers should inform infected patients that they must have their sex partners evaluated and treated. Health-care providers or health departments should ensure the notification, evaluation, and treatment of the sex partners of patients with chlamydial infection. Partner referral can be performed by patients (patient referral) or by providers (provider referral). With patient referral, patients who comply notify their sex partners of their exposure and encourage them to be examined and treated. With provider referral, third parties (e.g., health department personnel) assume responsibility for notifying sex partners of their exposure and providing evaluation and treatment. Provider referral of partners, including field follow-up by health department staff, is cost effective (155). However, because of the high prevalence of chlamydial infection among some populations and the limited number of health department outreach workers, patient referral remains the only method of referral available to most clinicians.

The responsibility for evaluating the sex partners of persons with chlamydial infection is often unclear and is a major reason partners remain untreated. This is a particular problem for male partners of females with chlamydial infection; male partners (who are often asymptomatic) may be reluctant to visit an STD clinic. Health-care providers who treat women with chlamydial infection should assist in making arrangements for the evaluation and treatment of male partner(s). Health departments can assist health-care providers in developing effective referral systems.

# Clinician-Laboratory Protocols for Verifying Positive Tests

Clinician-laboratory protocols for chlamydia testing are necessary to maximize the benefits of testing for chlamydia while minimizing adverse consequences and cost. These protocols should address which initial and supplementary tests the laboratory will perform, how clinicians should request these tests, and what specimens the clinicians should collect and submit to the laboratory during the patient's initial and follow-up visits. State and local health departments should facilitate the collaboration between health-care providers and laboratories that is necessary to develop suitable testing protocols (see Nonculture Chlamydia Tests). When developing clinician-laboratory protocols for chlamydia testing, the prevalence of chlamydial infection in the patient population should be considered, because prevalence largely determines the predictive value of a positive test.
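The dependence of a test's positive predictive value (PPV) on prevalence can be made concrete with a worked example. The sensitivity (Se = 0.85) and specificity (Sp = 0.97) used below are assumed, illustrative values, not figures drawn from this guideline:

```latex
% Worked example with assumed test characteristics: Se = 0.85, Sp = 0.97.
\mathrm{PPV} = \frac{Se \cdot p}{Se \cdot p + (1 - Sp)(1 - p)}

% High-prevalence setting, p = 0.15:
\mathrm{PPV} = \frac{0.85 \times 0.15}{0.85 \times 0.15 + 0.03 \times 0.85} \approx 0.83

% Low-prevalence setting, p = 0.02:
\mathrm{PPV} = \frac{0.85 \times 0.02}{0.85 \times 0.02 + 0.03 \times 0.98} \approx 0.37
```

Under these assumptions, most positive results in the high-prevalence setting are true positives, whereas nearly two of every three positives in the low-prevalence setting are false, which is why routine verification of all positive screening results is recommended in low-prevalence settings.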
In settings with a high prevalence of infection (i.e., where false-positive test results account for a small proportion of total positive test results), initial positive test results might be verified only if requested by the clinician on the basis of an assessment of a patient's risk for infection and the potential adverse effects of a false-positive result. In settings with a low prevalence of infection (i.e., where false-positive test results account for a substantial proportion of total positive test results), an additional test for verification should be performed for all patients whose screening test results are positive.

Three alternative testing protocols are suitable for supplementary testing. With the first protocol, a test system is chosen that permits performing the supplementary test on the residual material from the initial specimen. With the second protocol, a test system is chosen in which the supplementary test is performed on a second specimen. The clinician routinely obtains the second specimen at the same time as the initial specimen and submits both specimens to the laboratory; the supplementary test is performed on the second specimen if the initial test is positive and verification is required. With the third protocol, the laboratory reports a positive result from the initial test to the clinician, who arranges the patient's return visit for treatment and then obtains a second specimen for a supplementary test. In most settings, one of the first two protocols is preferable, since they do not require a return visit to collect an additional specimen. The choice of supplementary test systems and related protocols is difficult because insufficient information is available regarding their comparative performance and cost.

# ANTIMICROBIAL REGIMENS

Recommendations for the treatment of genital chlamydial infections have been published (146,156). Two new antimicrobials approved by the FDA for the treatment of chlamydia, ofloxacin and azithromycin, offer the clinician additional therapeutic choices. A substantial advantage of azithromycin, in comparison with all other therapies, is that a single dose is effective; this antimicrobial may prove most useful in situations in which compliance with a 7-day regimen of another antimicrobial cannot be ensured. In view of the high efficacy of tetracycline and doxycycline, cost also should be considered when selecting a treatment regimen. The recommended treatment regimens for uncomplicated urethral, endocervical, or rectal chlamydial infections among adults are listed below:
• Doxycycline 100 mg orally 2 times a day for 7 days or
• Azithromycin 1 g orally in a single dose
NOTE: Doxycycline and azithromycin are not recommended for use during pregnancy.

# Alternative Treatment Regimens

Alternative treatment regimens for uncomplicated urethral, endocervical, or rectal chlamydial infections among adults are listed below:
• Ofloxacin 300 mg orally 2 times a day for 7 days or
• Erythromycin base 500 mg orally 4 times a day for 7 days or
• Erythromycin ethylsuccinate 800 mg orally 4 times a day for 7 days or
• Sulfisoxazole 500 mg orally 4 times a day for 10 days
NOTE: Ofloxacin is not recommended for treating adolescents ≤17 years of age or pregnant women. The efficacy of sulfisoxazole is inferior to that of the other regimens.
The recommended treatment regimen for chlamydial infection during pregnancy is stated below:
• Erythromycin base 500 mg orally 4 times a day for 7 days
If this regimen cannot be tolerated, the following regimens are recommended:
• Erythromycin base 250 mg orally 4 times a day for 14 days or
• Erythromycin ethylsuccinate 800 mg orally 4 times a day for 7 days or
• Erythromycin ethylsuccinate 400 mg orally 4 times a day for 14 days
If the patient cannot tolerate erythromycin, the following regimen is recommended:
• Amoxicillin 500 mg orally 3 times a day for 7-10 days
NOTE: Erythromycin estolate is contraindicated during pregnancy, since drug-related hepatotoxicity can result.

For the treatment of complicated chlamydial infection, chlamydia-associated conditions, and chlamydial infection among infants or children, see the 1993 Sexually Transmitted Diseases Treatment Guidelines (146) for the treatment of adults, and the American Academy of Pediatrics Report of the Committee on Infectious Diseases (157) for the treatment of infants and children.

# Follow-up of Patients Treated for Chlamydial Infection

Treatment failure, indicated by positive cultures 7-14 days after therapy, is uncommon after successful completion of a ≥7-day regimen of tetracycline or doxycycline; failure rates of 0%-3% have been reported for males and 0%-8% for females (156,158). However, in one study of adolescents followed for up to 24 months after therapy for chlamydial infection, the rate of infection during follow-up was 39% (118). Whether these infections were reinfections or cases of latent, unsuccessfully treated chlamydial infection is unknown. Further, some studies suggest that women with chlamydial infections are at increased risk for subsequent infection. Although routine test-of-cure visits during the immediate post-treatment period are not recommended, health-care providers should consider retesting females infected with chlamydia weeks to months after initial therapy (see Nonculture Chlamydia Tests).

# Establishing a Surveillance System

To measure chlamydia trends accurately, to identify populations with an increased frequency of chlamydial infection, and to estimate the burden of chlamydial disease, health departments should develop active chlamydia surveillance systems. These systems should incorporate the following recommendations:
• Encourage participation by private and public health-care providers and laboratories to ensure that results are representative of the whole population.
• Include measures of incidence as well as prevalence of chlamydial infections.
• Use screening results as an index of the prevalence of asymptomatic chlamydial infection.

# Monitoring Incidence

Health departments should monitor chlamydia incidence by collecting information regarding men who seek care because of symptoms of urethritis. If tests are performed, the results of chlamydia and gonorrhea laboratory tests should be included with reported cases of urethritis. If chlamydia testing is not performed, the reported incidence of NGU should be used as an index of chlamydia incidence.

# Monitoring Prevalence (Screening)

The number of chlamydia cases identified by community screening programs should be monitored as an index of chlamydia prevalence.
Information on the following factors that affect the number of chlamydia cases detected should also be collected:
• Number of chlamydia tests being performed
• Reason for chlamydia testing (e.g., asymptomatic screening or diagnostic testing performed because of symptoms)
• Characteristics of the population undergoing chlamydia testing
In some states, the volume of chlamydia cases precludes collecting sufficient information on each case. These states should develop community-based sentinel surveillance to monitor trends in the distribution of chlamydial infections and disease burden by demographic, behavioral, and clinical factors. With a sentinel system, a group of health-care providers serving patients who are representative of select populations in the community is recruited to monitor chlamydial infection rates. Examples of potential sentinel sites include STD clinics, family planning clinics, prenatal-care providers, community health centers, school health providers, jails and detention centers, and group practices or health maintenance organizations. A sentinel site monitoring the prevalence of chlamydia should test all patients (or a given number of consecutive patients, or a random or systematic sample of patients) for chlamydial infection, regardless of the patient's chief complaint or symptom status. Demographic, behavioral, and clinical information should be recorded for all patients tested. A sentinel site that monitors the incidence of NGU should collect information on all male patients with symptoms of urethritis and negative test results for gonorrhea. When possible, sentinel sites should also monitor the prevalence of gonococcal infections and the incidence of gonococcal urethritis among all patients, as well as the key sequelae of chlamydial and gonococcal infections (e.g., PID, infertility, ectopic pregnancy).

# Laboratory Surveillance

Because the surveillance definition of chlamydial infection requires laboratory testing, local laboratories are an important component of a chlamydia surveillance system. Health departments should establish an inventory of laboratories conducting chlamydia testing and poll them regularly for the number of chlamydia tests performed and the number of positive test results.

# Monitoring Interventions

Chlamydia prevention programs should monitor intervention activities. Health departments should monitor the extent and quality of the interventions (process evaluation). Data should be analyzed to determine whether chlamydia trends (measured by surveillance systems) can be linked to interventions (outcome evaluation). Screening and partner notification are interventions of particular interest. The following guidelines are recommended:
• To permit health departments to monitor screening programs, participating health-care providers should provide screening test results (see Monitoring Prevalence).
• Where substantial resources are allocated to partner notification or other interventions, health departments should monitor these activities to ensure optimal distribution of those resources.

# ORGANIZING STRATEGIC PARTNERSHIPS

The primary goal of a chlamydia prevention strategy should be to secure the resources to provide adolescents and young adults with access to information regarding chlamydial infection, screening and treatment services, and partner notification. Meeting this goal will require the cooperation of all agencies and programs that serve the health-care needs of adolescents and young adults.
Every community has an opportunity to provide appropriate educational material regarding STDs and sexual behaviors and to make screening and treatment for STDs more readily available to those at high risk for chlamydial infection. For example, some family planning programs provide chlamydia testing and treatment for their clients and their clients' male sex partners, and a similar service is provided in some STD clinics. However, these efforts reach only a portion of the population that is infected with chlamydia. Such programs should be expanded to include other primary care providers who deliver medical services to sexually active adolescents and young adults: community health centers, migrant health centers, Native-American health centers, school-based clinics, Job Corps, detention centers, active-duty military facilities, hospital emergency rooms, and providers in the private sector. Schools, community-based recreational and after-school programs such as YMCAs, and other agencies can offer information about STDs (including HIV infection).

A successful chlamydia prevention program should effectively coordinate all resources in a community. State STD/HIV prevention programs can provide solutions, since their staff have experience in organizing clinical, educational, and laboratory resources within states, and they have developed partnerships with providers (e.g., family planning, prenatal, and migrant health clinics) who are part of any prevention program. The challenge is to focus on the common problem, high rates of chlamydial infection throughout the country, and to develop the interagency relationships and coalitions that are needed to deliver appropriate chlamydia services in both public and private settings.

# SURVEILLANCE AND PROGRAM EVALUATION

Information should be collected at the national, state, and local levels to provide quantitative estimates of disease occurrence, to monitor trends, and to monitor interventions and evaluate their impact. The epidemiologic analysis of surveillance and intervention data should be the basis for decision-making in chlamydia prevention programs, including allocation of intervention resources and evaluation of prevention efforts. In addition, these data can be used to develop hypotheses for etiologic and intervention research.

Surveillance of chlamydial infections is difficult for four reasons. First, the prevalence rate is high, and the potential burden of reporting cases is correspondingly large. Second, many infected patients are asymptomatic and, therefore, are difficult to identify except through screening programs. Third, because the duration of infection cannot be determined for many patients, prevalence and incidence are difficult to distinguish. Finally, maintaining a surveillance system requires substantial resources for laboratory testing and information systems.

# Reporting Laws

All states should have laws or regulations requiring that information on cases of chlamydial infection be reported to the appropriate public health departments.
• Reporting laws or regulations should acknowledge chlamydia as a public health problem and should provide the legal basis for establishing chlamydia surveillance systems.
• Reporting laws or regulations should allow states to collect information regarding cases of chlamydial infection from health-care providers and laboratories.
Specific categories of data should include demographics (age, race/ethnicity, sex, geographic location, source of report), clinical characteristics (anatomic site, symptoms/signs, treatment), and behaviors (risk factors).
NOTE: In many areas, reporting laws or regulations are also important because they are linked to the laws and regulations that authorize health departments to initiate and support prevention activities such as screening and partner notification.

# Reportable Conditions

The recommended surveillance definition is a case of chlamydial infection diagnosed by a positive laboratory test result. If tests are performed to verify a positive chlamydia test result, reporting should be contingent on verification of the initial positive test result. Reports of chlamydial complications, or of conditions serving as surrogates for chlamydial infection when chlamydia tests are not available, should be analyzed separately. NGU, PID, ophthalmia neonatorum, and epididymitis are the principal conditions that might be reported as complications of, or surrogates for, chlamydial infection. The performance of chlamydia tests and their results should be included with reports of these conditions.
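As a sketch only, a minimal case-report record covering these data categories might be structured as follows; the Python field names here are hypothetical illustrations, not a CDC-specified schema:

```python
# Illustrative sketch of a surveillance case-report record mirroring the
# recommended data categories. Field names are hypothetical, not a CDC schema.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class ChlamydiaCaseReport:
    # Demographics
    age: int
    race_ethnicity: str
    sex: str
    geographic_location: str      # e.g., county of residence
    report_source: str            # e.g., "laboratory" or "provider"
    # Clinical characteristics
    anatomic_site: str            # e.g., "cervix", "urethra", "rectum"
    symptomatic: Optional[bool] = None
    treatment: Optional[str] = None
    # Behaviors
    risk_factors: List[str] = field(default_factory=list)
    # Verification status (see Reportable Conditions above)
    initial_test_positive: bool = True
    verified: Optional[bool] = None
```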
Laboratory data, including CD4 and viral load test results, are an essential component of the national HIV surveillance system. CD4 and viral load data can be used to identify cases, classify stage of disease at diagnosis, and monitor disease progression. These data can also be used to evaluate HIV testing and prevention efforts, determine entry into care and retention in care, measure viral load suppression, and assess unmet health care needs. Analyses at the national level to monitor progress against HIV can only occur if all HIV-related CD4 and viral load test results are reported by all jurisdictions. States with laws, regulations, or policies that support the reporting of all CD4 and viral load test results to HIV surveillance programs have increased reporting and improved completeness and timeliness of HIV surveillance data. Although all states have reporting laws, regulations, or policies, the level at which results must be reported varies. A state, for example, may require that only CD4 counts above 500 or detectable viral load results be reported. CDC recommends the reporting of all HIV-related CD4 results (counts and percentages) and all viral load results (undetectable and specific values). Where laws, regulations, or policies are not aligned with these recommendations, states might consider strategies to best implement these recommendations within current parameters or consider steps to resolve conflicts with these recommendations. In addition, reporting of HIV-1 nucleotide sequences from genotypic resistance testing might also be considered to monitor the prevalence of antiretroviral drug resistance, HIV genetic diversity (subtypes), and transmission patterns.

Laboratory data should be encrypted with methods that meet Federal Information Processing Standards (FIPS) Publication 197, ADVANCED ENCRYPTION STANDARD (AES) (see http://csrc.nist.gov/publications/fips/fips197/fips-197.pdf) and sent securely to the state/local health department along with results from all other reportable conditions. If reported to a central location within the health department, the data would then be parsed by the health department and HIV-related results shared with the HIV program. The Epidemiology and Laboratory Capacity for Infectious Disease Cooperative Agreement (ELC) is a CDC cooperative agreement that includes support for implementing electronic laboratory reporting (ELR) solutions. The ELC has assisted many jurisdictions with developing an infrastructure for ELR. All jurisdictions receive ELC funds in some capacity, and most have taken advantage of these funds by implementing tools for receiving laboratory reports electronically. HIV programs are encouraged to leverage existing ELC-funded resources when implementing ELR.

Enhancements in electronic death reporting systems can also improve the quality and timeliness of death ascertainment for persons with HIV and can be achieved through implementation of electronic death registration. Mortality surveillance is a core public health function and critical to HIV surveillance to ensure accurate estimates of prevalence and other related measures used to monitor the National HIV/AIDS Strategy (NHAS). HIV surveillance programs are encouraged to work with their Vital Records offices to support adoption of electronic death registration where possible. CDC is committed to providing the technical assistance necessary to improve laboratory reporting so that it enhances, rather than disrupts, ongoing HIV surveillance.
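A minimal sketch of this kind of encryption step is shown below, using AES-256-GCM (AES per FIPS 197, in an authenticated mode) from the third-party Python cryptography package. The message content, function names, and key handling are hypothetical; a production ELR pipeline would follow the jurisdiction's own messaging and key-management standards.

```python
# Illustrative sketch only, not a CDC-specified implementation.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def encrypt_lab_report(plaintext: bytes, key: bytes) -> bytes:
    """Return nonce || ciphertext for a serialized laboratory report."""
    nonce = os.urandom(12)  # 96-bit nonce, unique per message
    return nonce + AESGCM(key).encrypt(nonce, plaintext, None)

def decrypt_lab_report(blob: bytes, key: bytes) -> bytes:
    nonce, ciphertext = blob[:12], blob[12:]
    return AESGCM(key).decrypt(nonce, ciphertext, None)

key = AESGCM.generate_key(bit_length=256)  # in practice, managed key material
report = b"OBX|1|NM|HIV-1 viral load||4200|copies/mL"  # hypothetical HL7-like segment
assert decrypt_lab_report(encrypt_lab_report(report, key), key) == report
```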
# Introduction

# Overview

Food allergies are a growing food safety and public health concern that affect an estimated 4%-6% of children in the United States. 1,2 Children with food allergies are two to four times more likely to have asthma or other allergic conditions than those without food allergies. 1 The prevalence of food allergies among children increased 18% during 1997-2007, and allergic reactions to foods have become the most common cause of anaphylaxis in community health settings. 1,3 In 2006, about 88% of schools had one or more students with a food allergy. 4

Staff who work in schools and early care and education (ECE) programs should develop plans for how they will respond effectively to children with food allergies. Although the number of children with food allergies in any one school or ECE program may seem small, allergic reactions can be life-threatening and have far-reaching effects on children and their families, as well as on the schools or ECE programs they attend. Any child with a food allergy deserves attention, and the school or ECE program should create a plan for preventing an allergic reaction and responding to a food allergy emergency. Studies show that 16%-18% of children with food allergies have had a reaction from accidentally eating food allergens while at school. 5,6 In addition, 25% of the severe and potentially life-threatening reactions (anaphylaxis) reported at schools happened in children with no previous diagnosis of food allergy. 5,7 School and ECE program staff should be ready to address the needs of children with known food allergies. They also should be prepared to respond effectively to the emergency needs of children who are not known to have food allergies but who exhibit allergic signs and symptoms.

Until now, no national guidelines had been developed to help schools and ECE programs address the needs of the growing numbers of children with food allergies. However, 14 states and many school districts have formal policies or guidelines to improve the management of food allergies in schools. 8,9 Many schools and ECE programs have implemented some of the steps needed to manage food allergies effectively. 4 Yet systematic planning for managing the risk of food allergies and responding to food allergy emergencies in schools and ECE programs remains incomplete and inconsistent. 10,11

# FDA Food Safety Modernization Act

These guidelines, Voluntary Guidelines for Managing Food Allergies in Schools and Early Care and Education Programs (hereafter called the Voluntary Guidelines for Managing Food Allergies), were developed in response to Section 112 of the FDA Food Safety Modernization Act, which was enacted in 2011. 12 This act is designed to improve food safety in the United States by shifting the focus from response to prevention.
Section 112(b) calls for the Secretary of Health and Human Services, in consultation with the Secretary of Education, to "develop guidelines to be used on a voluntary basis to develop plans for individuals to manage the risk of food allergy and anaphylaxis in schools a and early childhood education programs b" and "make such guidelines available to local educational agencies, schools, early childhood education programs, and other interested entities and individuals to be implemented on a voluntary basis only." 12 Each plan described in the act, developed for an individual to manage the risk of food allergy and anaphylaxis in schools and early childhood education programs, shall be considered an education record for the purpose of Section 444 of the General Education Provisions Act (commonly referred to as the Family Educational Rights and Privacy Act). The act specifies that nothing in the guidelines developed under its auspices should be construed to preempt state law. (A link to the FDA Food Safety Modernization Act, including Section 112 statutory language, is provided in Section 6, Resources.)

Specifically, the content of these guidelines should address the following:

- Parental obligation to provide the school or early childhood education program, prior to the start of every school year, with documentation from their child's physician or nurse supporting a diagnosis of food allergy, and any risk of anaphylaxis, if applicable; identifying any food to which the child is allergic; describing, if appropriate, any prior history of anaphylaxis; listing any medication prescribed for the child for the treatment of anaphylaxis; detailing emergency treatment procedures in the event of a reaction; listing the signs and symptoms of a reaction; assessing the child's readiness for self-administration of prescription medication; and a list of substitute meals that may be offered to the child by school or early childhood education program food service personnel.

- The creation and maintenance of an individual plan for food allergy management, in consultation with the parent, tailored to the needs of each child with a documented risk for anaphylaxis, including any procedures for the self-administration of medication by such children in instances where the children are capable of self-administering medication and such administration is not prohibited by state law.

- Communication strategies between individual schools or early childhood education programs and providers of emergency medical services, including appropriate instructions for emergency medical response.

- Strategies to reduce the risk of exposure to anaphylactic causative agents in classrooms and common school or early childhood education program areas such as cafeterias.

a. The term school is defined in the FDA Food Safety Modernization Act (FDA FSMA) to include public kindergartens, elementary schools, and secondary schools. 12

b. The term early childhood education program is defined in the FDA Food Safety Modernization Act to include a Head Start program or an Early Head Start program carried out under the Head Start Act (42 U.S.C. 9831 et seq.), a state-licensed or state-regulated child care program or school, or a state prekindergarten program that serves children from birth through kindergarten. 12 Some of these early childhood education programs may provide early intervention and preschool services under the Individuals with Disabilities Education Act (IDEA).
Some programs that provide these IDEA early intervention and preschool services may not fall under the FDA FSMA but may still wish to consider these voluntary guidelines when developing procedures for children with food allergies.

- The dissemination of general information on life-threatening food allergies to school or early childhood education program staff, parents, and children.

- Food allergy management training of school or early childhood education program personnel who regularly come into contact with children with life-threatening food allergies.

- The authorization and training of school or early childhood education program personnel to administer epinephrine when the nurse is not immediately available.

- The timely accessibility of epinephrine by school or early childhood education program personnel when the nurse is not immediately available.

- The creation of a plan contained in each individual plan for food allergy management that addresses the appropriate response to an incident of anaphylaxis of a child while such child is engaged in extracurricular programs of a school or early childhood education program, such as non-academic outings and field trips, before- and after-school programs or before- and after-early childhood education programs, and school-sponsored or early childhood education program-sponsored programs held on weekends.

- Maintenance of information for each administration of epinephrine to a child at risk for anaphylaxis and prompt notification to parents.

- Other elements determined necessary for the management of food allergies and anaphylaxis in schools and early childhood education programs. 12

Section 1 of the Voluntary Guidelines for Managing Food Allergies addresses these content requirements. Section 2 provides a list of recommended actions for school board members and administrators and staff at the district level. Section 3 provides a list of recommended actions for administrators and staff in schools, while Section 4 provides recommended actions for administrators and staff in ECE programs. Section 5 provides information about relevant federal laws that are enforced or administered by the U.S. Department of Education (ED), the U.S. Department of Justice (DOJ), and the U.S. Department of Agriculture (USDA). Section 6 provides a list of resources with more information and strategies.

# Methods

To develop these guidelines, staff at the Centers for Disease Control and Prevention (CDC) created a systematic process to collect, review, and compile expert advice, scientific literature, state guidelines, best practice documents, and position statements from individuals, agencies, and organizations. In January 2010, CDC sponsored a meeting of experts to gather their input into the critical processes and actions needed to protect students with food allergies and to respond to food allergy emergencies in schools. This meeting included representatives from the following groups (see Acknowledgements for details):

- Federal agencies with expertise in food allergy management in the public health sector, including in schools (CDC, the Food and Drug Administration, the U.S. National Institute of Allergy and Infectious Diseases, the U.S. Department of Education, and the U.S. Department of Agriculture).
- Organizations with expertise or experience in clinical food allergy management and food allergy advice to consumers (Food Allergy and Anaphylaxis Network, Food Allergy Initiative, Asthma and Allergy Foundation of America, American Academy of Pediatrics, and the American College of Asthma, Allergy, and Immunology).

- Organizations representing professionals who work in schools (National School Boards Association, National Education Association, National Association of School Nurses, National Association of School Administrators, National Association of Elementary School Principals, National Association of Secondary School Principals, School Nutrition Association, American School Counseling Association, and the American School Health Association).

- One state educational agency.

- One local school district.

- Two parents of children with food allergies.

Meeting participants provided input on the organization and content of the food allergy guidelines. In terms of organization, they recommended that the guidelines should:

- Use a management framework that is similar to the framework used to address other chronic conditions in schools and ECE programs. This framework should include creation of a building-wide team and essential elements of a plan to manage food allergies.

- Make sure that management and support systems address the needs of children with food allergies and promote an allergen-safe c school or ECE program.

- Promote partnerships among schools and ECE programs, children with food allergies and their families, and health care providers.

c. The term allergen-safe refers to an environment that is made as safe as possible from food allergens. The phrase should not be interpreted to mean an allergen-free environment that is totally safe from food allergens; there is no failsafe way to prevent an allergen from inadvertently entering a school or ECE program facility. When guarding against exposures to food allergens, a school or ECE program should still plan properly for children with any life-threatening food allergies, educate all school personnel accordingly, and ensure that school staff are trained and prepared to prevent and respond to a food allergy emergency.

- Emphasize the value of an onsite, full-time registered nurse to manage food allergies in children, but provide guidance for schools and ECE programs that don't have a full-time or part-time registered nurse.

- Convey scientific evidence and established clinical practice in terms that leaders and staff in schools and ECE programs understand. Make sure this information is consistent with regulatory and planning guidelines from other federal agencies.

- Emphasize the need to follow state laws, including regulations (such as Nurse Practice Acts and state food safety regulations) and school or ECE program policies, when deciding which procedures and actions are permissible or allowed.

In terms of content, meeting participants recommended that the guidelines should:

- Recommend that children with food allergies be identified so an individual plan for managing their food allergies can be developed.

- Include essential practices for protecting children from allergic reactions and responding to reactions that occur at schools or ECE programs, at sponsored events, or during transport to and from schools or ECE programs.

- Develop strategies to reduce the risk of exposure to food allergens in classrooms, cafeterias, and other school or ECE program settings.
- Develop steps for responding to food allergy emergencies, including the administration of epinephrine, that are consistent with a school's or ECE program's "all-hazards" plan.

- Emphasize training for staff to improve their understanding of food allergies, their ability to help children prevent exposure to food allergens, and their ability to respond to food allergy emergencies (including administration of epinephrine). This training can help to create an environment of acceptance and support for children with food allergies.

- Emphasize the need to teach children about food allergies as part of the school's or ECE program's health education curriculum.

- Address the need to teach all parents about food allergies.

- Address the physical safety and emotional needs of children with food allergies (i.e., stigma, bullying, harassment).

- Include a list of actions for all staff working in schools and ECE programs who may have a role in managing the risk of food allergy, including administrators, registered nurses, teachers, paraprofessional staff, counselors, food service staff, and custodial staff.

Meeting participants provided an initial set of data and literature sources for analysis. CDC staff then reviewed published literature from peer-reviewed and non-peer-reviewed sources, including descriptive studies, epidemiologic studies, expert statements, policy statements, and relevant Web-based content from federal agencies. CDC scientists conducted an extensive search for scientific reports, using four electronic citation databases: PubMed, Medline, Web of Science, and ERIC. To ensure a comprehensive review of food allergy sources, CDC used search terms that included a combination of terms, such as "food allergy," "school," "anaphylaxis," "epinephrine," "peanut allergy," "child," "pediatric," "food allergy and bullying," "emergencies," "life-threatening allergy," "school nurse," and "food allergy and child care." CDC staff reviewed all recommended documents and eliminated sources that were outdated (earlier than 2000), were superseded by more current data or recommendations, were in conflict with current standards of clinical and school-based practice, reflected international recommendations not relevant to U.S. schools or ECE programs, or were limited to adult food allergies.

CDC staff relied heavily on the content and references in the 2010 Guidelines for the Diagnosis and Management of Food Allergy in the United States. 13 These 2010 guidelines reflect the most up-to-date, extensive systematic review of the literature and assessment of the body of evidence on the science of food allergies. They met the standards of rigorous systematic search and review methods, and they provide clear recommendations that are based on consensus among researchers, scientists, clinical practitioners, and the public. While the 2010 guidelines did not address the management of patients with food allergies outside of clinical settings (and thus did not directly address the management of food allergies in schools), they were deemed an important source for informing the clinical practice recommendations for managing risks for children with food allergies in the Voluntary Guidelines for Managing Food Allergies.

To ensure that recommendations for managing the risk of food allergies were consistent with those recommended for other chronic conditions, CDC staff added search terms that included "school" and "asthma," "diabetes," "epilepsy," "chronic condition," and "management." In particular, they used information from the following three documents: Students with Chronic Illness: Guidance for Families, Schools, and Students, 14 Managing Asthma: A Guide for Schools, 15 and Helping the Student with Diabetes Succeed: A Guide for School Personnel. 16

CDC staff analyzed best practice documents, state school food allergy guidelines (n = 14), and relevant health and education organizations' position statements for compatibility with the priorities outlined by experts, common themes, and the accuracy and clarity of recommendations or positions based on clinical standards and scientific evidence. To ensure that these Voluntary Guidelines for Managing Food Allergies were compatible with existing federal laws, federal regulations, and current guidelines for schools and ECE programs, CDC solicited expertise and input from the following sources:

- Office of the General Counsel; Office of Safe and Healthy Students, Office of Elementary and Secondary Education; Office for Civil Rights; and Office of Special Education and Rehabilitative Services, U.S. Department of Education (ED).

- Civil Rights Division, U.S. Department of Justice (DOJ).

- Office of the Deputy Assistant Secretary for Early Childhood, Administration for Children and Families, U.S. Department of Health and Human Services (HHS).

In addition to the input from the experts meeting, the analysis of research and practice documents, and the technical advice and assistance provided by regulatory federal agencies, CDC conducted three formal rounds of expert review and comment. During the first round, CDC staff worked with meeting participants and agency partners to get concurrence with recommendations. Second- and third-round reviews were used to refine the content. Reviewers included participants from the first meeting and additional reviewers added after each round to ensure input from at least one new person who had not previously reviewed the document. In addition to these formal reviews, CDC staff asked for multiple reviews from select experts in food allergy management, schools, and ECE programs to ensure the accuracy of the information and the relevance of the recommendations to professional practice. During the review process, CDC staff reviewed and accepted additional references that supported changes made in draft recommendations.

The resulting Voluntary Guidelines for Managing Food Allergies include recommendations for practice in the following five priority areas that should be addressed in each school's or ECE program's Food Allergy Management and Prevention Plan:

1. Ensure the daily management of food allergies in individual children.
2. Prepare for food allergy emergencies.
3. Provide professional development on food allergies for staff members.
4. Educate children and family members about food allergies.
5. Create and maintain a healthy and safe educational environment.

# Purpose

The Voluntary Guidelines for Managing Food Allergies are intended to support implementation of food allergy management and prevention plans and practices in schools and ECE programs. They provide practical information, planning steps, and strategies for reducing allergic reactions and responding to life-threatening reactions for parents, district administrators, school administrators and staff, and ECE program administrators and staff. They can guide improvements in existing food allergy management plans and practices, and they can help schools and ECE programs develop a plan where none currently exists.
Schools and ECE programs will not need to change their organization or structure or incorporate burdensome practices to respond effectively. They also should not have to incur significant financial costs where basic health and emergency services are already provided. Although the practices in these guidelines are voluntary, any actions taken for individual children must be implemented consistent with applicable federal and state laws and local policies. Many of the practices reinforce relevant federal laws and regulations administered or enforced by ED, HHS, DOJ, and USDA. How these laws apply case by case will depend upon the facts in each situation. These guidelines also do not address state and local laws or local school district policies because the requirements of these laws and policies vary from state to state and from school district to school district. References to state guidelines reflect support for and consistency with the recommendations in these Voluntary Guidelines for Managing Food Allergies but do not suggest federal endorsement of state guidelines. While these guidelines provide information related to certain applicable laws, they should not be construed as giving legal advice. Schools and ECE programs should consult local legal professionals for such advice.

Although schools and ECE programs have some common characteristics, they operate under different laws and regulations and serve children with different developmental and supervisory needs. Different practices are needed in each setting to manage the risk of food allergies. These guidelines include recommendations that apply to both settings, and they identify how the recommendations should be applied differently in each setting when appropriate. These guidelines do not provide specific guidance for unlicensed child care settings, although many recommendations can be used in these settings.

If a school or ECE program participates in the Child Nutrition Programs (CNPs), then the USDA Food and Nutrition Service is the federal agency with oversight of the meals served. The CNPs include the National School Lunch and School Breakfast Programs, the Special Milk Program, the Fresh Fruit and Vegetable Program, the Child and Adult Care Food Program, and the Summer Food Service Program. Schools, institutions, and sites participating in the CNPs are required under relevant statutes to make accommodations to program meals for children who are determined to have a food allergy disability. A food allergy can be a food-related disability if the allergy is acknowledged to be a disability by a licensed doctor. These guidelines can assist CNP program operators in providing safe meals and a safe environment for this population of children.

# About Food Allergies

A food allergy is defined as an adverse health effect arising from a specific immune response that occurs reproducibly on exposure to a given food. 13 The immune response can be severe and life-threatening. Although the immune system normally protects people from germs, in people with food allergies, the immune system mistakenly responds to food as if it were harmful. One way that the immune system causes food allergies is by making a protein antibody called immunoglobulin E (IgE) to the food. The substance in food that causes this reaction is called the food allergen.
When exposed to the food allergen, the IgE antibodies alert cells to release powerful substances, such as histamine, that cause symptoms that can affect the respiratory system, gastrointestinal tract, skin, or cardiovascular system and lead to a life-threatening reaction called anaphylaxis. 17 The Voluntary Guidelines for Managing Food Allergies focus not on all food allergies but on IgE-associated food allergies, because those are the food allergies associated with the risk of anaphylaxis. There are other types of food-related conditions and diseases that range from the frequent problem of digesting lactose in milk, resulting in gas, bloating, and diarrhea, to reactions caused by cereal grains (celiac disease) that can result in severe malabsorption and a variety of other serious health problems. These conditions and diseases may be serious but are not immediately life-threatening and are not addressed in these guidelines. 13

More than 170 foods are known to cause IgE-mediated food allergies. In the United States, the following eight foods or food groups account for 90% of serious allergic reactions: milk, eggs, fish, crustacean shellfish, wheat, soy, peanuts, and tree nuts. 13 Federal law requires food labels in the United States to clearly identify the food allergen source of all foods and ingredients that are (or contain any protein derived from) these common allergens. 20 Some nonfood products used in schools and ECE programs, such as clay, paste, or finger paints, can also contain allergens that may or may not be identified as ingredients on product labels. 21

The symptoms of allergic reactions to food vary both in type and severity among individuals and even in one individual over time. Symptoms associated with an allergic reaction to food include the following:

- Mucous Membrane Symptoms: red watery eyes or swollen lips, tongue, or eyes.
- Skin Symptoms: itchiness, flushing, rash, or hives.
- Gastrointestinal Symptoms: nausea, pain, cramping, vomiting, diarrhea, or acid reflux.
- Upper Respiratory Symptoms: nasal congestion, sneezing, hoarse voice, trouble swallowing, dry staccato cough, or numbness around the mouth.
- Lower Respiratory Symptoms: deep cough, wheezing, shortness of breath or difficulty breathing, or chest tightness.
- Cardiovascular Symptoms: pale or blue skin color, weak pulse, dizziness or fainting, confusion or shock, hypotension (decrease in blood pressure), or loss of consciousness.
- Mental or Emotional Symptoms: sense of "impending doom," irritability, change in alertness, mood change, or confusion.

# Food Allergy Symptoms in Children

Children with food allergies might communicate their symptoms in the following ways:
- It feels like something is poking my tongue.
- My tongue (or mouth) is tingling (or burning).
- My tongue (or mouth) itches.
- My tongue feels like there is hair on it.
- My mouth feels funny.
- There's a frog in my throat; there's something stuck in my throat.
- My tongue feels full (or heavy).
- My lips feel tight.
- It feels like there are bugs in there (to describe itchy ears).
- It (my throat) feels thick.
- It feels like a bump is on the back of my tongue (throat).
Source: The Food Allergy & Anaphylaxis Network. Food Allergy News. 2003;13(2).

Children sometimes do not exhibit overt and visible symptoms after ingesting an allergen, making early diagnosis difficult. 13,22 Some children may not be able to communicate their symptoms clearly because of their age or developmental challenges.
Complaints such as abdominal pain, itchiness, or other discomforts may be the first signs of an allergic reaction (see Food Allergy Symptoms in Children). Signs and symptoms can become evident within a few minutes or up to 1-2 hours after ingestion of the allergen and, rarely, several hours after ingestion. Breathing difficulty, voice hoarseness, or faintness associated with a change in mood or alertness, or rapid progression of symptoms involving a combination of skin, gastrointestinal, or cardiovascular symptoms, signals a more severe allergic reaction (anaphylaxis) and requires immediate attention.

The severity of reactions to food allergens is difficult to predict and varies depending on the child's particular sensitivity to the food and on the type and amount of exposure to the food. Most severe reactions are triggered by ingesting a food allergen, while inhaling or having skin contact with food allergens generally causes mild reactions. 23,24,25 The severity of a reaction from food ingestion also can be influenced by the child's age, how quickly the allergen is absorbed (e.g., absorption is faster if food is taken on an empty stomach or if ingestion is associated with exercise), and by co-existing health conditions or factors. 17 For example, a person with asthma might be at greater risk of having a more severe anaphylactic reaction. Exercise and certain medications also can increase the harmful effects of certain food allergens. 13,26,27

# Allergic Reactions and Anaphylaxis

Anaphylaxis is best described as a severe allergic reaction that is rapid in onset and may cause death. 31 Not all allergic reactions will develop into anaphylaxis. In fact, most are mild and resolve without problems. However, early signs of anaphylaxis can resemble a mild allergic reaction. Unless obvious symptoms (such as throat hoarseness or swelling, persistent wheezing, or fainting or low blood pressure) are present, it is not easy to predict whether these initial, mild symptoms will progress to become an anaphylactic reaction that can result in death. 13 Therefore, all children with known or suspected ingestion of a food allergen and the appearance of symptoms consistent with an allergic reaction must be closely monitored and possibly treated for early signs of anaphylaxis.

# Food Allergies and Asthma

One-third of children with food allergies also have asthma, which increases their risk of experiencing a severe or fatal reaction. 28 Data also suggest that children with asthma and food allergies have more visits to hospitals and emergency departments than children who don't have asthma. 2,29,30 Because asthma can pose serious risks to the health of children with food allergies, schools and ECE programs must consider these risks when they develop plans for managing food allergies.

# Characteristics and Risk Factors

Food allergies account for 35%-50% of all cases of anaphylaxis in emergency care settings. 32 Many different food allergens (e.g., milk, egg, fish, shellfish) can cause anaphylaxis. In the United States, fatal or near-fatal reactions are most often caused by peanuts (50%-62%) and tree nuts (15%-30%). 33 Results of studies of fatal allergic reactions to food found that a delay in administering epinephrine was one of the most significant risk factors associated with fatal outcomes. 13,26 Some population groups, including children with a history of anaphylaxis, are at higher risk of having a severe reaction to food (see Fatal Food Allergy Reactions).
# Fatal Food Allergy Reactions

# Risk Factors
- Delayed administration of epinephrine.
- Reliance on oral antihistamines alone to treat symptoms.
- Consuming alcohol and the food allergen at the same time.

# Groups at Higher Risk
- Adolescents and young adults.
- Children with a known food allergy.
- Children with a prior history of anaphylaxis.
- Children with asthma, particularly those with poorly controlled asthma.

# Timing of Symptoms

In general, anaphylaxis caused by a food allergen occurs within minutes to several hours after food ingestion. 13 Death due to food-induced anaphylaxis may occur within 30 minutes to 2 hours of exposure, usually from cardiorespiratory compromise. 13 By the time the symptoms of an allergic reaction are recognized, a child may already be experiencing anaphylaxis. Symptoms of anaphylaxis can begin with mild skin symptoms (e.g., hives, flushing) that progress slowly, can appear rapidly with more severe symptoms, or can appear (in rare circumstances) with shock in the absence of other symptoms. In fact, many fatal anaphylaxis cases caused by food do not follow a predictable pattern that starts with mild skin symptoms. Even if initial symptoms are successfully treated or resolve completely, up to 20% of anaphylactic reactions recur within 4-8 hours (a biphasic reaction). In other cases, symptoms do not completely resolve and require additional emergency care. For these reasons, children with food-induced anaphylaxis must be monitored closely and evaluated as soon as possible in an emergency care setting.

# Treatment of Anaphylaxis and Use of Epinephrine

No treatment exists to prevent reactions to food allergies or anaphylaxis. Strict avoidance of the food allergen is the only way to prevent a reaction. However, avoidance is not always easy or possible, and staff in schools and ECE programs must be prepared to deal with allergic reactions, including anaphylaxis. Early and quick recognition and treatment of allergic reactions that may lead to anaphylaxis can prevent serious health problems or death.

The recommended first line of treatment for anaphylaxis is the prompt use of epinephrine. Early use of epinephrine to treat anaphylaxis improves a person's chance of survival and quick recovery. 13,34

# Allergens that May Result in Anaphylaxis that Require Use of Epinephrine
- Foods such as peanuts, tree nuts, milk, eggs, fish, or shellfish.
- Medications such as penicillin or aspirin.
- Bee venom or insect stings, such as from yellow jackets, wasps, hornets, or fire ants.
- Latex, such as from gloves.

Epinephrine, also called adrenaline, is naturally produced by the body. When given by injection, it rapidly improves breathing, increases heart rate, and reduces swelling of the face, lips, and throat. Epinephrine is typically available in the form of an auto-injector, a spring-loaded syringe that delivers a measured dose of epinephrine and is designed for self-administration by patients or for administration by persons untrained in other needle-based forms of epinephrine delivery. In a clinical setting, patients may receive epinephrine through other needle-based delivery methods. Epinephrine can quickly improve a person's symptoms, but the effects are not long lasting. If symptoms recur (biphasic reaction), additional doses of epinephrine are needed.
Even when epinephrine is used, 911 or other emergency medical services (EMS) must be called so the person can be transported quickly in an emergency vehicle to the nearest hospital emergency department for further medical treatment and observation. 13

It is not possible to set one guideline for when to use epinephrine to treat allergic reactions caused by food. A person needs clinical experience and judgment to recognize the symptoms associated with anaphylaxis, and not all school or ECE program staff have this experience. Clinical guidelines for how to manage food-induced allergic reactions have mainly focused on the health care setting. They emphasize the need to watch patients closely and give the proper treatment, including epinephrine. Treatment decisions are based on the progression or increased severity of symptoms and whether the patient has a history of risk factors for anaphylaxis (see Fatal Food Allergy Reactions). 13 For example, the clinical guidelines favor quick and early use of epinephrine as soon as even mild symptoms appear for children who have had severe allergic reactions in the past.

Some schools and ECE programs offer clinical services from a doctor or registered nurse. In these cases, the doctor or nurse can use the clinical guidelines to assess children and make decisions about treatment, including if or when to use epinephrine. However, many schools and most ECE programs do not have a doctor or nurse onsite to make such an assessment. In these cases, a staff person at the scene should call 911 or EMS immediately. If staff are trained to recognize symptoms of an allergic reaction or anaphylaxis and are delegated and trained to administer epinephrine, they also should administer epinephrine by auto-injector at the first signs of an allergic reaction, especially if the child's breathing changes. In addition, school or ECE program staff should make sure that the child is transported without delay in an emergency vehicle to the nearest hospital emergency department for further medical treatment and observation. 24,35

These actions may result in administering epinephrine and activating emergency response systems for a child whose allergic reaction does not progress to life-threatening anaphylaxis. However, the delay or failure to administer epinephrine and the lack of medical attention have contributed to many fatal anaphylaxis cases from food allergies. 25 The risk of death from untreated anaphylaxis outweighs the risk of adverse side effects from using epinephrine in these cases.

# Emotional Impact on Children with Food Allergies and Their Parents d

The health of a child with a food allergy can be compromised at any time by an allergic reaction to food that is severe or life-threatening. Many studies have shown that food allergies have a significant effect on the psychosocial well-being of children with food allergies and their families.

Parents of a child with a food allergy may have constant fear about the possibility of a life-threatening reaction and stress from the constant vigilance needed to prevent a reaction. They also have to trust their child to the care of others, make sure their child is safe outside the home, and help their child have a normal sense of identity. Children with food allergies may also have constant fear and stress about the possibility of a life-threatening reaction. The fear of ingesting a food allergen without knowing it can lead to coping strategies that limit social and other daily activities.
Children can carry emotional burdens because they are not accepted by other people, they are socially isolated, or they believe they are a burden to others. They also may have anxiety and distress caused by teasing, taunting, harassment, or bullying by peers, teachers, or other adults. School and ECE program staff must consider these factors as they develop plans for managing the risk of food allergies for children who have them.

d. For the purposes of this document, the word parent is used to refer to the adult primary caregiver(s) of a child's basic needs (e.g., feeding, safety). This includes biological parents; other biological relatives such as grandparents, aunts, uncles, or siblings; and non-biological parents such as adoptive, foster, or stepparents.

# Section 1. Food Allergy Management in Schools and Early Care and Education Programs

# Essential First Steps

School and ECE program staff should develop a comprehensive strategy to manage the risk of food allergy reactions in children. This strategy should include (1) a coordinated approach, (2) strong leadership, and (3) a specific and comprehensive plan for managing food allergies.

# Use a coordinated approach that is based on effective partnerships.

The management of any chronic health condition should be based on a partnership among school or ECE program staff, children and their families, and the family's allergist or other doctor. 11,42,46

[Figure: Effective management of food allergies rests on a three-way partnership among school or ECE program staff, the child with a food allergy and his or her parent, and the allergist or other doctor.]

The collective knowledge and experience of a licensed doctor, children with food allergies, and their families can guide the most effective management of food allergies in schools or ECE programs for each child. Close working relationships can help ease anxiety among parents, build trust, and improve the knowledge and skill of school or ECE program staff members. 10,14,42

In schools, all staff members play a part in protecting the health and safety of children with chronic conditions. These staff members include administrators, school nurses or school doctors, food service staff (including food service contract staff), classroom and specialty teachers, athletic coaches, school counselors, bus drivers, custodial and maintenance staff, therapists, paraeducators, special education service providers, librarians and media specialists, security staff, substitute teachers, and volunteers, such as playground monitors and field trip chaperones. In ECE programs, staff members include the center director, health consultants, nutrition and food service staff, Head Start and child care providers, preschool teachers, teaching assistants, aides, volunteers, and transportation staff.

A team structure allows for collective management of food allergies, with coordinated planning and communication to ensure that staff responsibilities are carried out in a clear and consistent manner. Instead of creating a new team to address the food allergy-related needs of a particular student with a food allergy, schools and ECE programs can use an existing team, such as the student's Section 504 committee (which addresses Section 504 of the Rehabilitation Act of 1973), the student's Individualized Education Program (IEP) team, the school improvement team, the child and learning support team, the school health or wellness team, or the Head Start Health Services Advisory Committee.
Involving the school doctor (if applicable), an allergist in the community, or the child's doctor can help the school or ECE program reduce the risk of accidental exposure to allergens. 47 A doctor's diagnosis of a food allergy is necessary to accurately inform plans for avoiding food allergens and managing allergic reactions. The doctor can also give advice on the best practices to control or manage food allergies. 24,25 An allergist is a licensed doctor with specialty training in the diagnosis and treatment of allergic diseases, asthma, and diseases of the immune system. Children with food allergies and their parents have firsthand experience with allergic reactions and are most familiar with a child's unique signs and symptoms. Parents should give the school or ECE program documentation that supports a doctor's diagnosis of food allergy, as well as information about prior history and current risk of anaphylaxis. This information is critical to preventing risk of exposure to allergens and outlining the actions that must be taken if a food allergen exposure occurs. Parents should be continually involved in helping to build a learning environment that is responsive to their child's unique health condition. 47 By working together, parents and school or ECE program staff can communicate better and make sure they have the same expectations. This partnership also shows a shared commitment to the child's well-being and builds parental support and confidence in the ability of school or ECE program staff to manage food allergies. Many parents give their ECE program an Emergency Care Plan (ECP) developed by the child's allergist or other doctor. This plan may be the only information ECE program staff members have to manage the child's food allergy. When multiple children have food allergies, the result can be multiple approaches for addressing and managing food allergies and reactions. Instead, ECE programs should use a coordinated approach that is built on partnerships among ECE program staff, parents, and doctors. With a coordinated approach, staff can create one consistent plan of action for responding to any child with a food allergy and to any allergic reaction. 10,44,48 # Provide clear leadership to guide planning and ensure implementation of food allergy management plans and practices. Successful coordination of food allergy planning requires strong school and ECE program leadership. The support of school principals and ECE program administrators is critical, but it may make more sense for the person who provides or coordinates health services for children to lead the food allergy planning process. 49,50 For example, most schools and some ECE programs have a school or district nurse, school doctor, or health consultant or manager. Nationwide, about 85% of schools have either a part-time or full-time nurse to provide health services to students (37% have a full-time nurse). 8 In Head Start programs, health services must be supervised by staff members or consultants with training in health-related fields. 50,51 School nurses, school doctors, and health consultants or managers should have the expertise to help schools and ECE programs develop plans to manage food allergies. Specifically, these staff members can: - Work with families and doctors to obtain or create an Emergency Care Plan (ECP) for children with food allergies. 
- Make sure that each child's plan for managing food allergies is consistent with federal laws and regulations; state laws, including regulations; local policies; and standards of professional practice.
- Act as a liaison between school and district policy makers, ECE program administrators, health services staff members, food service staff members, community health service providers, and emergency responders.
- Make sure that education records that include personally identifiable information about a student's food allergy are generally not disclosed without the prior written consent of the parent (or eligible student), in compliance with the Family Educational Rights and Privacy Act of 1974 (FERPA), 20 U.S.C. 1232g, and its implementing regulations in 34 CFR part 99, and any other applicable federal and state laws that protect the privacy or confidentiality of student information. (FERPA may not require parental consent in all circumstances.) FERPA also includes an emergency exception to the prior consent requirement if there is an articulable and significant threat to the health or safety of the student or others. (See Section 5 for more information about FERPA.)
- Monitor the use of medication in the school or ECE program setting.
- Obtain an epinephrine auto-injector and make sure it is rapidly available to designated and trained staff members to respond to a child's food allergy emergency.
- Recognize and handle medical emergencies.
- Learn about best practices for managing food allergies.
- Help schools and ECE programs develop a comprehensive approach to managing food allergies. Coordinate a team to put the resulting plan into action.
- Work with food service staff on parts of the plan that involve meal and snack preparation and service.
- Identify internal resources and community partners that can support the planning process.
- Share general information about food allergies with staff members, parents, and others who need it.
- Make sure staff receive the training they need, including how to administer an epinephrine auto-injector. A doctor or registered nurse can provide this training.
- Talk with staff members, doctors, children, and their families about food allergies and how they should be managed. Share concerns from parents and children with the food allergy management team.
- Review the school or ECE program plan on a regular basis to look for ways to reduce exposure to food allergens and better manage allergic reactions. Recommend changes when needed to make the plans better. 49,50,52

The many responsibilities outlined in this section demonstrate the benefit of schools having a full- or part-time registered nurse (and a part-time doctor) and of ECE programs having a medically trained and knowledgeable health consultant or manager. However, if a registered nurse, doctor, or health consultant or manager is not available, the school or ECE program administrator can develop a comprehensive plan that may include delegating some critical responsibilities to other trained professional staff. This plan should also include seeking advice from the child's primary doctor or allergist, as well as training guidance and assistance from health services staff at the district level. 47 (See Sections 2 and 3.)

# Develop and implement a comprehensive plan for managing food allergies.
To effectively manage food allergies and the risks associated with these conditions, many people inside and outside the school or ECE program must come together to develop a comprehensive plan, called the Food Allergy Management and Prevention Plan (FAMPP). This plan should include all strategies and actions needed to manage food allergies in the school or ECE program. It also should be compatible with the approach used to address other chronic conditions in each individual setting. 14

The FAMPP should reinforce the efforts of each school or ECE program to create a safe learning environment for all children. It should address systemwide planning, implementation, and follow-up and include specific actions for each individual child with a food allergy. The FAMPP should:
- Meet the requirements of federal laws and regulations, such as Section 504 of the Rehabilitation Act of 1973, the Americans with Disabilities Act (ADA), and the Richard B. Russell National School Lunch Act, if applicable. An explanation of how these federal laws could apply to students with food allergies is provided in Section 5. Among other things, these federal laws address individualized assessment of each child's needs and parental participation in the development of any plan or program designed to meet these dietary needs. An effective FAMPP also would need to meet the requirements of state and local laws and regulations and district policies.
- Reflect clear goals, purposes, and expectations for food allergy management that are consistent with the school's or ECE program's mission and policies.
- Be clear and easy to understand and implement.
- Be responsive to the needs of any child with food allergies by taking into account the different and unique needs of each child.
- Be adaptable and updated regularly on the basis of experiences, best practices, current research, and changes in district policy or state or county law.

The FAMPP should address the following five priorities:
1. Ensure the daily management of food allergies for individual children.
2. Prepare for food allergy emergencies.
3. Provide professional development on food allergies for staff members.
4. Educate children and family members about food allergies.
5. Create and maintain a healthy and safe educational environment.

The remainder of this section provides more detail and specific recommendations for each priority. It concludes with a comprehensive Food Allergy Management and Prevention Plan (FAMPP) Checklist for use in schools and ECE programs. This checklist can help schools and ECE programs improve their ability to manage the risk of food allergies and assess whether their plans address all five priorities.

# Priorities for Managing Food Allergies

# Ensure the daily management of food allergies for individual children.

To protect the health and safety of an individual child with food allergies, school and ECE program staff must identify children with a history of food allergies and develop or obtain plans to manage their allergies.

a. Identify children with food allergies.

Schools and ECE programs usually have forms and procedures to identify children with chronic conditions, including food allergies, when they enroll or transfer to the school, or when the condition is not initially reported but becomes evident during the academic year. Examples include health condition forms and parent interviews. Children or parents may report a food allergy on the required forms, but this information may not be accurate or complete.
Schools and ECE program staff must work with parents to obtain, directly from the child's health care provider, the medical information necessary to develop plans for managing the individual care and emergency actions. The USDA requires a doctor's statement that a child has a food allergy disability before food service staff in the Child Nutrition Program can make meal accommodations and provide a safe meal for a child with a food allergy.

b. Develop a plan to manage and reduce the risk of food allergy reactions in individual children.

Parents and doctors should provide information and recommendations to help schools and ECE programs develop written plans to manage food allergies for children on a daily basis. This information may be provided on health condition forms, medical orders, a doctor's statement, or diet orders. A variety of names are used for written plans for individual children with food allergies. It is essential for children to have a short, easy-to-follow plan for emergency care. This is usually a food allergy Emergency Care Plan (ECP). Other names used for the ECP include a "food allergy action plan," an "emergency action plan," or, in ECE programs, an "individual care plan." Schools or ECE programs may need to establish additional plans, such as a Section 504 plan or, if appropriate, an Individualized Education Program (IEP), or may establish a nursing assessment and outcome-type Individualized Health Plan (IHP).

The ECP is the basic form used to collect food allergy information, and it should be completed for every child identified as having a food allergy. 24,25,48 (If an ECP form is used by the Child Nutrition Program staff to make meal accommodations, it should include the medical information required by the USDA and must be signed by the doctor.) This form should be kept in each child's school health record, and it may include the following:
- A recent photo of the child.
- Information about the food allergen, including a confirmed written diagnosis from the child's doctor or allergist.
- Information about signs and symptoms of the child's possible reactions to known allergens.
- Information about the possible severity of reactions, including any history of prior anaphylaxis (keeping in mind that anaphylaxis can occur even in children without a history of prior anaphylaxis).
- A treatment plan for responding to a food allergy reaction or emergency, including whether an epinephrine auto-injector should be used.
- Information about other conditions, such as asthma or exercise-induced anaphylaxis, that might affect food allergy management.
- Contact information for parents and doctors, including alternate phone numbers for notification in case of emergency.

The ECP should be written by the child's doctor and confirmed with the parents. In some cases, it can be written by a registered nurse or school doctor, as long as the child's doctor is consulted and the parents confirm the plan. The child's doctor and parents should sign and date the ECP, and schools and ECE programs should not accept a child's ECP without confirmation and signature from the child's doctor. If a public elementary or secondary school maintains an ECP on an individual child, the ECP would be covered by FERPA as an "education record."
" The ECP should specifically state who may have access to the information in the plan, and should ensure that any such access to this information is permissible under FERPA and any other applicable federal or state laws that protect the privacy or confidentiality of student information. (See Section 5 for more information about FERPA.) Section 6 lists state and organizational resources that include examples of ECPs and suggested processes that schools and ECE programs might use to develop their ECPs. An IHP is a written document that outlines how children will receive health care services at school and is developed and used by a registered nurse. The IHP documents a specific student's health needs and outlines specific health outcome expectations and plans for achieving these expectations. The use of an IHP is standard practice for schools with a full-time or part-time registered nurse and it is commonly used to document the progress of children with an identified chronic condition such as food allergies. 39,47,52,61 The IHP helps registered nurses manage the risk of food allergies, prevent allergic reactions, and coordinate care with other staff (such as food service staff ) and health service providers outside the school. Federal law does not require the use of an IHP, but its contents can be useful to the nurse in addressing the requirements of federal laws related to school responsibilities for children with food allergies. Section 6 lists state and organizational resources that include examples of an IHP. If a doctor determines that a child's food allergy may result in anaphylaxis and if the child's food allergy constitutes a disability under applicable federal disability laws, school staff can integrate information from the ECP, doctor's statement, and IHP into a Section 504 plan or, if appropriate, into an IEP. (See Section 5 for more information on applicable federal laws.) Schools should still use an ECP with specific, easy-to-read information about how to respond to a food allergy reaction. For children that are identified as having a food allergy disability and who attend a school or ECE program that participates in the U.S. Department of Agriculture's (USDA's) Child Nutrition Programs, a meal or food substitution or modification must be made when the diagnosis is supported by a doctors' signed statement. Before Child Nutrition Program food service staff can provide a safe meal accommodation, parents must provide a statement from a licensed doctor that identifies: °°The child's disability (according to pertinent statutes). °°An explanation of why the disability restricts the child's diet. °°The major life activity affected by the disability. °°The food or foods to be omitted from the child's diet. °°The food or choice of foods that must be substituted. 62 A child recognized by the Child Nutrition Program staff as having a food allergy disability does not have to have a Section 504 plan, ECP, IHP, or IEP in order for a meal accommodation to be provided. A statement signed by a licensed doctor addressing the points above is sufficient. However, the Child Nutrition Program-required doctor's statement can be integrated in any plan a school or ECE develops to meet a child's special dietary needs. 
If a Section 504 plan or, if appropriate, an IEP, is developed in connection with the provision of services required under those laws to address the student's food allergy disability, information from the ECP is still useful and can be referenced in, or incorporated into, the Section 504 plan or IEP. Note that a Section 504 plan or IEP is an education record subject to FERPA. For children not covered by federal disability laws, schools can use the ECP and IHP to manage each child's food allergy. The IHP can include information about modifications and substitutions for meal and snack planning. An IHP or ECP developed for an individual student is also an education record subject to FERPA.

In ECE programs, every child with a food allergy should have an ECP or individual care plan, even if the child has a Section 504 plan or, if eligible for services under IDEA, has an IEP or, if appropriate, an individualized family service plan (IFSP). (See Section 5 for more information regarding these federal laws.) Because most ECE programs do not have a registered nurse on staff to develop such a plan, the ECE program's health consultant, health manager, or administrator should review each child's health records and emergency information at enrollment and work with parents to obtain an ECP for each child diagnosed with a food allergy. The ECP should be updated at least once a year. Health consultants or managers can share information about any allergic reactions, changes in the child's health status, and exposure to allergens with parents and doctors (with the parents' permission). Working with parents and the child's health care provider is essential to make sure that children get the medical services and accommodations they need. Staff should consider referring children without access to health care to health services, when possible. ECPs used by ECE programs should be signed and dated by the child's doctor and parents. The plan should specifically state who has access to it and which staff members are responsible for the care, transportation, and feeding of children with food allergies. 12,50 (See Section 5 for more information about applicable federal laws.)

c. Help students manage their own food allergies.

Young children in ECE programs and the early elementary grades generally cannot manage their own food allergies. However, some students, especially adolescents, can take responsibility for managing their own food allergies, including carrying and using epinephrine when needed. When medication is required by students who have chronic health conditions, especially when it may be lifesaving, it is best practice to encourage and assist students to become educated and competent in their own care. 48,54,63,64

Students who can manage their own food allergies should have quick (within a few minutes) access to an epinephrine auto-injector, both at school and during school-related events. 13 Some schools allow students to carry prescribed epinephrine auto-injectors (e.g., in a pocket, backpack, or purse) at school. Some state laws allow students to carry auto-injectors during activities on school property and during transportation to and from school or school-related events. 63 Federal law requires reasonable modifications of school policies when necessary to avoid disability discrimination, and in some cases this may require allowing a student to carry an epinephrine auto-injector.
School officials should check state and federal laws before setting their policies and practices. (See Section 5 for more information about applicable federal laws.) Before students are allowed to carry and use medication, school staff should assess students' knowledge, attitudes, behaviors, and skills to determine their ability to handle this responsibility. 64 This decision should be reassessed periodically, and the school nurse or another assigned staff member should randomly check to make sure students are carrying their epinephrine auto-injector.

Some students with food allergies may choose to wear medical alert bracelets, which can aid emergency response. 13 School officials can encourage students to wear these bracelets, but they should not require them. Some students will not want to wear such jewelry because they fear being stigmatized.

School nurses and other school staff members should reinforce self-management skills for students with food allergies. These skills include reading labels, asking questions about foods in the school meal and snack programs, avoiding unlabeled or unknown foods, using epinephrine auto-injectors when needed, and recognizing and reporting an allergic reaction to an adult. Even when students are able to manage their own food allergies, school staff need to know which students have allergies so they can have plans in place to monitor each student's condition and be able to respond in an emergency. Because some symptoms of anaphylaxis may continue after a dose of epinephrine is administered and because students might not always have their medication with them, schools should also keep a second epinephrine auto-injector (provided by the parent or student) in a secure but rapidly accessible location. 63,65,66 (See the textbox Justification for More Than One Dose of Epinephrine later in this section.)

# Prepare for food allergy emergencies.

All schools and ECE programs should anticipate and prepare for food allergy emergencies in the same ways they approach emergency preparedness for other hazards. Comprehensive emergency planning includes prevention, preparedness, response, and recovery for any type of emergency. This "all-hazards" model is often used to plan for natural disasters, weather-related emergencies, and pandemic influenza. A school's all-hazards emergency plan also should address potential crises caused by violence or food allergy emergencies. 67 This plan should go beyond each child's ECP to include building-level planning, communication, training, and emergency response procedures. 68

a. Set up communication systems that are easy to use.

Communication devices, such as intercoms, walkie-talkies, or cell phones, should be available at all times in case of an emergency. School and ECE program staff in classrooms, gymnasiums, cafeterias, playgrounds, and transportation vehicles should be able to communicate easily and quickly with the school nurse, school authorities, health consultants or managers, emergency responders, and parents. Communication devices should be checked regularly to make sure they work.

b. Make sure staff can get to epinephrine auto-injectors quickly and easily.

Quick access to and immediate availability of epinephrine to respond to anaphylaxis emergencies are essential. 13 It is the parent's responsibility to provide one or two epinephrine auto-injectors for a child with food allergies if they are prescribed by a doctor.
It is the school's or ECE program's responsibility to store epinephrine auto-injectors in a place that can be reached quickly and easily and to delegate and train staff to give epinephrine in response to allergic reactions. Studies have shown that quick access to epinephrine is critical to saving lives in episodes of anaphylaxis. 24,25,37 To ensure quick access to epinephrine, auto-injectors should be kept in a safe and secure place that trained staff members can get to quickly during school or ECE program hours. 63

At the same time, staff must also follow federal and state laws, including regulations, and local policies that may require medications to be locked in a secure place. For example, federal Head Start regulations require that all "grantee and delegate agencies establish and maintain written procedures regarding the administration, handling, and storage of medication for every child," including "labeling and storing, under lock and key, and refrigerating, if necessary, all medications, including those required for staff and volunteers." 51 State regulations and local policies may similarly require locking medications in a secure location. School and ECE program staff should seek guidance from federal and state regulatory agencies and local policy makers when deciding how to store epinephrine auto-injectors.

These decisions also must take into account the needs of each student and the specific characteristics of the school district, the staff, and the school building. Decisions on where to store medication, such as in a central location (office or health room), in the classroom, or in several locations (on a large school campus), may vary among school districts and schools. These decisions should be based on state and local laws and regulations and school policies. They also must ensure the safety of children with food allergies. The Guidelines for Managing Life-threatening Food Allergies in Connecticut Schools list some issues to consider, including the general safety standards for handling and storage of medication, the developmental stage and competence of the student, the size of the building, the availability of a full-time school nurse in the building, the availability of communication devices linking the school nurse with teachers and paraprofessionals inside the building or on the playground, the school nurse's response time from the health office to the classroom, the preferences and other responsibilities of the teacher, the preferences of the parent, the preferences of the student (as applicable), and the movement of the student within the building. 54

The location(s) of medications should be listed in the school's overall emergency plan and in each child's ECP (and IHP, Section 504 plan, or IEP, if appropriate). Schools and ECE programs should also identify which staff members will be responsible for reviewing expiration dates and replacing outdated epinephrine auto-injectors and for carrying medication during field trips and other school events. 70

c. Make sure that epinephrine is used when needed and someone immediately contacts emergency medical services.

Delays in using epinephrine have resulted in near-fatal and fatal food allergy reactions in schools and ECE programs. 25,36,37 In a food allergy emergency, trained staff should give epinephrine immediately. Early and appropriate administration of epinephrine can temporarily stop allergic reactions and provide the critical time needed to get medical help.
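The sequence that trained, delegated staff are expected to follow (epinephrine first, an immediate 911/EMS call second, then parent notification and transport) can be summarized schematically. The following sketch is a training illustration only, with hypothetical helper functions; it is not clinical software, and it does not replace the clinical judgment and delegation requirements described in this section.

```python
# Schematic outline of the response sequence described above: epinephrine
# first, 911/EMS immediately after, then parent notification. A training
# illustration with hypothetical helper names -- not an official protocol.
from datetime import datetime

def administer_epinephrine(child: str) -> datetime:
    # Placeholder: in reality, only delegated, trained staff give epinephrine.
    print(f"Epinephrine auto-injector given to {child}")
    return datetime.now()

def call_ems(time_given: datetime) -> None:
    # EMS should be told the emergency is an allergic reaction, that
    # epinephrine was given and when, and that a second dose may be needed.
    print(f"911 called: allergic reaction; epinephrine given at "
          f"{time_given:%H:%M}; a second dose may be needed")

def notify_parents(child: str) -> None:
    # Staff must not wait for parents before calling EMS.
    print(f"Parents of {child} notified of the emergency and the "
          "hospital the child is being transported to")

def respond_to_suspected_anaphylaxis(child: str) -> None:
    time_given = administer_epinephrine(child)  # give epinephrine immediately
    call_ems(time_given)                        # then call 911/EMS right away
    notify_parents(child)                       # parents are informed last

respond_to_suspected_anaphylaxis("example student")
```

The ordering is the point of the sketch: the EMS call is never deferred until parents can be reached, and the call itself carries the epinephrine timing information responders need.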
State laws, state nursing regulations, and local school board policies direct medication administration in schools and ECE programs. They often define which medications nonhealth professionals are allowed to administer in schools, including who may administer epinephrine by auto-injector. If nonhealth staff members are permitted to administer epinephrine, training should be required. 39,71

When epinephrine is used, school or ECE program staff must call 911 or emergency medical services (EMS). EMS should be informed that the emergency is due to an allergic reaction, whether epinephrine has been administered and when, and that an additional dose of epinephrine may be needed. The child should be transported quickly in an emergency vehicle to the nearest hospital emergency department for further medical treatment and observation. 13 Staff also should contact the child's parents to inform them of the emergency and tell them where the child is being transported. Because medical attention is needed urgently in this situation, staff must not wait for parents to come and pick up their children before calling EMS.

# Justification for More Than One Dose of Epinephrine

Schools and ECE programs should consider keeping multiple doses of epinephrine onsite so they can respond quickly to a food allergy emergency. Although some schools allow students to carry their own auto-injectors, a second auto-injector should be available at school in case a student does not have one at the time of the emergency. School and ECE program staff may also decide that having more than one auto-injector at different locations (especially for a large building or campus) will best meet a child's needs. In addition, some symptoms of anaphylaxis may continue after one dose of epinephrine, so a second dose may be needed at school if EMS does not arrive quickly.

Some state laws allow for the prescribing of a stock supply of non-patient-specific epinephrine auto-injectors for use in schools, which may allow schools or ECE programs to acquire the needed additional doses of epinephrine. When allowed by state law and local policy, schools and ECE programs that have a doctor or nurse onsite can stock their emergency medical kits with epinephrine auto-injectors to be used for anaphylaxis emergencies. 63,65,66,72 In states where legislation does not exist or does not allow schools or ECE programs to stock epinephrine, staff will need to work with parents and their doctors to get additional epinephrine auto-injectors for students who need them.

# d. Identify the role of each staff member in an emergency.

Any plan for managing food allergies should state specifically what each staff member should do in an emergency. This information should be simple and easy to follow, particularly when a staff member who is not a licensed health professional is delegated to administer epinephrine. 24,68 Ideally, a registered nurse or doctor would be available to assess a food allergy emergency and decide if epinephrine is needed. When a nurse or doctor is not onsite, trained unlicensed assistive personnel or nonhealth professionals can recognize the signs and symptoms of an allergic reaction, get quick access to an epinephrine auto-injector, and administer epinephrine. 70,71 Examples of these staff members include health aides and assistants, teachers, athletic coaches, food service staff, administrators, and parent or adult chaperones.
A licensed health care professional, such as a registered nurse, doctor, or allergist, should train, evaluate, and supervise unlicensed assistive personnel or delegated nonhealth professionals. This training should teach staff how to recognize the signs and symptoms of a reaction, administer epinephrine, contact EMS, and understand state and local laws and regulations related to giving medication to students. ECE programs that care for children with chronic conditions such as food allergies should seek the services of a trained health advocate or consultant to help staff develop emergency plans, write policies, and train staff.

ECE programs are required to have a certified first aider present at all times. 50 All ECE program staff should get annual first aid training that teaches them how to recognize and respond to pediatric emergencies. 49 This training should include how to recognize the signs and symptoms of an allergic reaction and how to give epinephrine through an auto-injector. 23 ECE programs should keep records of all staff training.

e. Prepare for food allergy reactions in children without a prior history of food allergies.

Schools and ECE programs should be ready to respond to severe allergic reactions in children with no history of anaphylaxis or no previously diagnosed food allergies. At a minimum, schools and ECE programs should establish a protocol for contacting emergency services when an allergic reaction is suspected and follow this protocol immediately when a child exhibits signs of anaphylaxis. If allowed by state law, the school doctor or nurse may stock emergency medical kits with epinephrine auto-injectors to be used for anaphylaxis emergencies. If the school or ECE program has a FAMPP, a written protocol, and licensed or delegated trained staff, an epinephrine auto-injector may be used for anaphylaxis regardless of previous allergy history.

f. Document the response to a food allergy emergency.

Emergency response should include a protocol for documenting or recording each emergency incident and each use of epinephrine. 12,65,68 Documentation should include the following:
- Time and location of the incident.
- Food allergen that triggered the reaction (if known).
- Whether epinephrine was used and the time it was given.
- Notification of parents and EMS.
- Staff members who responded to the emergency.

Section 6 lists state and organizational resources that include examples of epinephrine administration reports. Corrective actions and lessons learned from an incident should be used to revise the child's individual plan and the school's or ECE program's FAMPP, if needed. School and ECE program administrators also should review the emergency response with the child's parents, the staff members involved in the response, local EMS responders, and the child. 63,70 See the Example Checklist for steps to follow after a nonfatal food allergy emergency.

# Example Checklist: Steps to Take Within 24 Hours of a Nonfatal Food Allergy Reaction

- Call the parent or guardian to follow up on the student's condition.
- Review the anaphylactic or allergic episode with the parent or guardian and the student.
  - Identify the allergen and route of exposure, and discuss signs and symptoms with the parent or guardian.
  - Review the actions taken.
  - Discuss positive and negative outcomes.
  - Discuss any needed revision to the care plan based on the experience or outcome.
- Discuss the family's role with the parent or guardian to improve outcomes.
- Discuss school, ECE program, and home concerns to improve prevention, response, and student outcomes.
- Ask the parent or guardian to replace the epinephrine dose that was given, if needed.
- Ask the parent or guardian to follow up with the child's health care provider.

Source: National Association of School Nurses, 2011.

# Provide professional development on food allergies for staff.

Schools and ECE programs should provide training to all staff members to increase their knowledge about food allergies and how to respond to food allergy emergencies. This training should focus on how to reduce the risk of an allergic reaction, respond to allergic reactions, and support the social and academic development of children with food allergies. 25,38,73 Schools and ECE programs should coordinate training activities with a licensed health care professional, such as a school nurse, public health nurse, public health educator, or school or community doctor. Training can include use of existing materials that provide general information about food allergies, as well as information and resources to help staff meet the specific needs of individual children. 23,24,39,49,73,74 Administrators should allow enough time for proper training, and all training should be evaluated to make sure it is effective.

In 2010, the National Diabetes Education Program updated its guidance to help students manage their diabetes in schools. 16 This updated guide outlines three levels of training: basic training for all staff and more specialized training for specific staff members. This approach provides a useful framework that has been adapted here to guide training on food allergy management in schools and ECE programs.

# a. Provide general training on food allergies for all staff.

Any staff member who might interact with children with food allergies or be asked to help respond to a food allergy emergency should be trained. Examples include administrators, nutrition and food service staff (including contract staff), classroom and specialty teachers, athletic coaches, school counselors, bus drivers, custodial and maintenance staff, therapists, paraeducators, special education service providers, librarians and media specialists, security staff, substitute teachers, and volunteers such as playground monitors and field trip chaperones. General training content should include the following:
- School or ECE program policies and practices.
- An overview of food allergies.
- Definitions of key terms, including food allergy, major allergens, epinephrine, and anaphylaxis.
- The difference between a potentially life-threatening food allergy and other food-related problems.
- Signs and symptoms of a food allergy reaction and anaphylaxis, and information on common emergency medications.
- General strategies for reducing and preventing exposure to allergens (in food and nonfood items).
- Policies on bullying and harassment and how they apply to children with food allergies.
- How to respond to a food allergy emergency.
- Information about federal laws that could apply, such as the ADA, Section 504, and FERPA, as well as any state laws, including regulations, or district policies that apply. (See Section 5 for more information about applicable federal laws.)
- How to administer epinephrine with an auto-injector (for those formally delegated to do so).
- How to help children treat their own food allergy episodes.

# b. Provide in-depth training for staff members who have frequent contact with children with food allergies.

In addition to the general training content, this training should include the following:
- Effects of food allergies on children's behavior and ability to learn.
- Importance of giving emotional support to children with food allergies and to other children who might witness a severe food allergy reaction (anaphylaxis).
- Common risk factors, triggers, and areas of exposure to food allergens in schools or ECE programs.
- Specific strategies for fully integrating children with food allergies into school and class activities while reducing the risk of exposure to allergens in classrooms, during meals, during nonacademic outings, on field trips, during official activities before and after school or ECE programs, and during events sponsored by schools or ECE programs that are held outside of regular hours. These strategies could address (but are not limited to) the following:
  - Special seating arrangements when age and circumstance appropriate (e.g., during meal times, birthday parties).
  - Plans for keeping foods with allergens separated from foods provided to children with food allergies.
  - Rules on how staff and students should wash their hands and clean surfaces to reduce the risk of exposure to food allergens.
  - The importance of not sharing food.
  - How to read food labels to identify food allergens.

# c. Provide specialized training for staff who are responsible for managing the health of children with food allergies on a daily basis.

This training should be required for district nurses, school nurses, school doctors, and professionally qualified health coordinators or managers. In addition to the general and in-depth content described previously, this training should include information about how to:
- Create ECPs and review or develop other individual care plans as needed.
- Manage and store medication.
- Delegate and train unlicensed assistive personnel to administer epinephrine.
- Help children manage their own food allergies.
- Document the tasks performed as part of food allergy management.
- Evaluate emergency responses and staff members' ability to respond to food allergy emergencies.

Training should be conducted at least once a year and should be reviewed after a food allergy reaction or anaphylaxis emergency to improve prevention and response. Schools and ECE programs should consult with parents of children with food allergies when they design staff training. These parents have knowledge and experience on how to manage their child's food allergies, as well as information from their child's doctor. Parents do not need to participate in the delivery of training sessions or attend staff training.

# Educate children and family members about food allergies.

a. Teach all children about food allergies.

All children need to learn about food allergies, but teaching methods will differ on the basis of their age and the setting. For example, schools can provide food allergy education as part of health education or other curriculum topics, such as family and consumer sciences, general science, physical education, and character education. 41,45,67,69,75,76 ECE programs can provide food allergy education with help from certified health education specialists. Food allergy education should be appropriate for the developmental level and culture of the children in a particular school or ECE program. It should focus on increasing awareness and understanding of food allergies and building support and acceptance of people with food allergies. 59,76 At a minimum, all children should be able to:
- Identify signs and symptoms of anaphylaxis.
- Know and understand why it is wrong to tease or bully others, including people with food allergies.
- Know and understand the importance of finding a staff member who can help respond to suspected food allergy emergencies.
- Understand rules on hand washing, food sharing, allergen-safe zones, and personal conduct.

Food allergy awareness is reinforced when staff members model behaviors and attitudes that comply with rules that reduce exposure to food allergens. 48

b. Teach all parents and families about food allergies.

A successful FAMPP needs support and participation from parents of children with food allergies and from parents of children without food allergies. All parents should get information to increase their awareness and understanding of food allergies, the policies and practices that protect children with food allergies, the roles of all staff members in protecting children with food allergies, and the measures parents of children with and without food allergies can take to help ensure this protection. 45,76

School and ECE program administrators, working with school or district nurses or health consultants or managers, should educate families on food allergy policies and practices. Classroom teachers should provide information to all parents about what is being done to prevent food allergy reactions in the classroom. Food service staff should provide information to families about the federal regulations of the U.S. Department of Agriculture's Food and Nutrition Service and the practices that protect children and manage food allergies during meals served under USDA meal programs. District and school policies and protocols to prevent bullying, respond to food allergy emergencies, and create a safe environment for all children should be shared with all families. Schools and ECE programs can share information in many ways, including through letters or e-mails to parents; updates on school Web sites; and announcements at parent-teacher association meetings, school nights, health fairs, and community events.

# Create and maintain a healthy and safe educational environment.

Schools, ECE programs, and communities have a shared responsibility to promote a safe physical environment that protects children with food allergies and a climate that supports their positive psychological and social development. 77,78

a. Create an environment that is as safe as possible from exposure to food allergens.

Schools and ECE programs can create a safer learning environment by reducing children's exposure to potential allergens. 24,39,74 When a child has a documented food allergy, staff should take active steps to reduce the risk of exposure in all common areas, such as classrooms and cafeterias. 12 Some schools or ECE programs have considered banning or have banned specific foods across the entire school or ECE program setting in an attempt to eliminate exposing a child with a food allergy to that food. But such an option cannot guarantee a totally safe environment because there is no reasonable or fail-safe way to prevent an allergen from inadvertently entering a building. Even with such a ban in place, a school or ECE program still has a responsibility to properly plan for children with any life-threatening food allergies, to educate all school personnel accordingly, and to ensure that school staff are trained and prepared to prevent and respond to a food allergy emergency.
Schools or ECE programs may choose other alternatives to banning allergens, including the designation of allergen-safe zones, such as an individual classroom or eating area in the cafeteria, or the designation of food-free zones, such as a library, classroom, or buses. 45

b. Develop food-handling policies and procedures to prevent food allergens from unintentionally contacting another food.

State and local food safety regulations 79 provide school districts, schools, and ECE programs with requirements governing the cleaning and sanitizing of surfaces and other practices that can protect against the unintentional transfer of residue or trace amounts of an allergenic food into another food (cross-contact). Some practices to reduce this cross-contact include the following:
- Clean and sanitize all surfaces that come into contact with food in kitchens, classrooms, and other locations where food is prepared or eaten, using soap and water or all-purpose cleaning agents and sanitizers that meet state and local food safety regulations. Cleaning with water alone will not remove food allergens.
- Clean and sanitize food preparation equipment, such as food slicers, and utensils before and after use to prevent cross-contact.
- Clean and sanitize trays and baking sheets after each use. Oils can seep through wax paper or other liners and cause cross-contact.
- Prepare food separately for children with food allergies. Strategies should include preparing items without allergens first, using a separate work space and equipment, and labeling and storing items before preparing other foods.
- Train all staff who prepare, handle, or serve food how to read labels to identify food allergens. Make sure that staff members are knowledgeable about current labeling laws. Because food labels often change, they should be read every time the food is purchased. Ingredient lists posted on Web sites are not reliable. The manufacturer of the food should be contacted if clarification is needed.
- Use appropriate hand-washing procedures that emphasize the use of soap and water. Hand sanitizers are not effective in removing food allergens.

Nutrition and food service staff in schools and ECE programs are required to follow local food safety and sanitation laws and be trained in practices that prevent food, surface-to-food, and food-to-food contamination, practices that also help prevent cross-contact of food allergens. Meals and snacks may be served in locations other than cafeterias, handled by staff members other than the food service staff, or provided outside of a USDA Child Nutrition Program. When developing policies and procedures for food handling, consider all possible situations where food might be prepared or served, any staff members who might be involved, and the state and local food safety regulations that might be appropriate to help prevent the transfer of food allergens in these situations.

In ECE programs, additional precautions are recommended to reduce the risk of food allergy reactions, especially among children with a history of anaphylaxis. Many of these recommendations are consistent with common practices for managing any child in an ECE program.
- Make sure that all staff members can read product labels and identify food allergens.
- Recommend, but do not require, that children with known food allergies wear a medical alert bracelet.
- Promote good hand-washing practices before and after eating.
- Supervise children closely during mealtimes. Consider assigned seating for meals, especially in situations with family-style dining. Emphasize that children should not share food.
- Put children's names on cups, plates, and utensils to avoid confusion and cross-contact.
°°Designate food storage areas for foods brought from home. 6,45,77 c. Make outside groups aware of food allergy policies and rules when they use school or ECE program facilities before or after hours. Local agencies, community groups, and community members who use school or ECE program facilities before or after operating hours should be aware of and comply with policies on food, cleaning, and sanitation procedures. If food is allowed in the building, consider banning food from specific classrooms or areas that children with food allergies use often. School and ECE program staff should be notified when outside groups are using their facilities. # d. Create a positive psychosocial climate. Schools and ECE programs should foster a climate that promotes positive psychological and social development; that actively promotes safety, respect, and acceptance of differences; and that fosters positive interpersonal relationships between staff members and children and between the children themselves. The psychosocial climate is influenced by clear and consistent disciplinary policies, meaningful opportunities for participation, and supportive behaviors by staff members and parents. 78 Children with food allergies need an environment where they feel secure and can interact with caring people they trust. Bullying, teasing, and harassment can lead to psychological distress for children with food allergies, which could lead to a more severe reaction when the allergen is present. 22,43,44 A positive psychosocial climate, coupled with food allergy education and awareness for all children, families, and staff members, can help reduce feelings of anxiety and alienation among children with food allergies. 43,44 To create a positive psychosocial climate, staff members, children, and parents must all work together. School nurses, school counselors, or mental health consultants can provide leadership and guidance to set best practices and strategies for a positive psychosocial climate. Staff members should promote and reinforce expectations for a positive and supportive climate by making sure the needs of children with food allergies are addressed. For example, they can avoid using language and activities that isolate children with food allergies and encourage everyone's help in keeping the classroom safe from food allergens. Children can help develop classroom rules, rewards, and activities. All children and staff members share responsibility for preventing bullying and social isolation of children with food allergies. School and ECE program staff should recognize that acceptance by peers is one of the most important influences on a child's emotional and social development. 78 Among adolescents, food allergy education and awareness can be an effective strategy to improve social interactions, reduce peer pressure, and decrease risk-taking behaviors that expose them to food allergens. 22 Children should be expected to treat others with respect and to be good citizens, not passive bystanders, when they are aware of bullying or peers who seem troubled. Children should understand the positive or negative consequences associated with their actions. Rules and policies against bullying behavior should be developed in partnership with staff members, families, and children. They should be posted in buildings; published in school handbooks; and discussed with staff members, children, and families. All children and staff members should be encouraged to report bullying and harassment of any child with food allergies. 80,81
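As noted earlier in this section, ingredient labels should be checked against each child's documented allergens every time a food is purchased. The sketch below is a minimal illustration of how such a check could be organized; the roster, the sample item, and the matching rule are hypothetical, and no sketch replaces reading the physical label or contacting the manufacturer.

```python
# Hypothetical sketch: flag items whose ingredient lists contain a child's
# documented allergens. The roster, item, and matching rule are illustrative
# only; substring matching can produce false positives (e.g., "egg" in
# "eggplant"), and nothing here replaces reading the physical label or
# contacting the manufacturer.

DOCUMENTED_ALLERGIES = {  # child -> allergens documented in the child's plan
    "Student A": {"peanut", "egg"},
    "Student B": {"milk"},
}

def flag_allergens(ingredients, allergens):
    """Return the documented allergens that appear in an ingredient list."""
    text = " ".join(ingredients).lower()
    return {allergen for allergen in allergens if allergen in text}

def review_item(item_name, ingredients):
    """Report every child for whom this item lists a documented allergen."""
    for child, allergens in DOCUMENTED_ALLERGIES.items():
        hits = flag_allergens(ingredients, allergens)
        if hits:
            print(f"{item_name}: lists {', '.join(sorted(hits))} -- review for {child}")

review_item("Granola bar", ["oats", "honey", "peanut butter", "soy lecithin"])
```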
# Conclusion
Schools and ECE programs are responsible for the health and safety of children with food allergies. The strategies presented in these guidelines can help schools and ECE programs take a comprehensive approach to managing food allergies. Through the collective efforts of school and ECE program staff members, parents, and health care providers, children with food allergies can be assured a safe place to thrive, learn, and succeed.
# Meals and Snacks
In the classroom:
- Use nonfood incentives for prizes, gifts, and awards.
- Help students with food allergies read labels of foods provided by others so they can avoid ingesting hidden food allergens.
- Consider methods (such as assigned cubicles) to prevent cross-contact of food allergens from lunches and snacks stored in the classroom.
- Support parents of children with food allergies who wish to provide safe snack items for their child in the event of unexpected circumstances.
- Encourage children to wash hands before and after handling or consuming food.
In the cafeteria:
- Designate an allergen-safe food preparation area.
- Provide advance copies of menus for parents to use in planning.
- Be prepared to share food labels, recipes, or ingredient lists used to prepare meals and snacks with others.
During transportation:
- Do not allow food to be eaten on buses except by children with special needs, such as those with diabetes.
- Encourage children to wash hands before and after handling or consuming food.
On field trips and at events:
- Identify special needs before field trips or events.
- Package meals and snacks appropriately to prevent cross-contact.
- Encourage children to wash hands before and after handling or consuming food.
In food service operations:
- Keep food labels from all foods served to children with allergies for at least 24 hours after serving the food in case the child has a reaction.
- Keep current contact information for vendors and suppliers so you can get food ingredient information.
- Read all food labels and recheck with each purchase for potential food allergens.
- Report mistakes, such as cross-contact with an allergen or errors in the ingredient list or menu, immediately to administrators and parents.
- Wash all tables and chairs with soap and water or all-purpose cleaning agents before each meal period.
- Encourage children, school staff, and volunteers to wash hands before and after handling or consuming food.
# Physical Education and Recess
- Encourage hand washing before and after handling or consuming food.
a USDA Web site: www.fns.usda.gov/cnd/guidance/special_dietary_needs.pdf.
# Food Allergy Management and Prevention Plan Checklist
Use this checklist to determine if your school or ECE program has appropriate plans in place to promote the health and well-being of children with food allergies. For each priority, check the box to the left if you have plans and practices in place. Develop plans to address the priorities you did not check. You can also use the checklist to evaluate your response to food allergy emergencies; ongoing evaluation can help you improve your plans and actions. Review the full descriptions of the five priorities to make sure that your plans and practices are complete and that your plans for improvement will meet the needs of children, their families, administrators, and staff.
# Section 2. Putting Guidelines into Practice: Actions for School District Leaders
This section presents the actions that school district leaders can take to implement the voluntary recommendations in Section 1.
Although the focus of the recommendations is on the management of food allergies at the school building level, district-level leadership and policy and staff support are essential for the success of school-level food allergy management. # School District Policy Support School boards can adopt written policies that direct and support clear, consistent, and effective practices for managing the risk of food allergies and response to food allergy emergencies. Data from CDC's 2006 School Health Policies and Programs Study indicate that only slightly more than 40% of school districts have model food allergy policies. 8 A comprehensive and uniform set of district policies for managing food allergies in schools can: - Communicate the district's commitment to effectively managing food allergies to school administrators and staff members, parents, and the community. - Promote consistency of priorities, actions, and options for managing food allergies across the district to avoid confusion and haphazard responses. - Align food allergy management plans in schools with federal and state laws, including regulations, and policies, as well as other established school policies. - Make protective practices and strategies for managing food allergies in schools an integral part of ongoing school activities. - Support the food allergy management decisions and practices of school administrators and staff members. - Increase public knowledge about food allergies and applicable laws and public support for implementation of effective food allergy management practices in schools. Section 6 provides a list of resources with more information and strategies to inform school policymakers. # School District Staff Support District policies are implemented with the support of board members, the district superintendent, and district-level staff members. 82 District leaders and staff can: - Communicate policy requirements and school system directives. - Help schools implement and comply with applicable federal and state laws, including regulations, and policies. - Help communicate lines of authority for managing food allergies in school buildings. - Provide standardized forms, procedures, tools, and plans, including a sample Food Allergy Management and Prevention Plan (FAMPP), to schools. - Coordinate training to improve consistency of practices across the district. - Make sure that food allergy policies and practices address competitive foods, such as those available in vending machines, in school stores, during class parties, at athletic events, and during after-school programs. - Help schools plan and implement their FAMPPs. District staff might also provide direct assistance to schools to help them meet the needs of students with food allergies, especially when the school does not have key staff, such as a doctor or full-time registered nurse, working at the building level. District staff sometimes communicate directly with parents and doctors who might need additional information about a school's food allergy policies and practices. They also may communicate directly with parents whose children need help managing their food allergy as they move from one school to the next within the district. # School Board Members 1. Set the direction for the school district's coordinated approach to managing food allergies. - Develop a comprehensive set of school district policies to manage food allergies in school settings. 
Work with a variety of school staff, including school administrators, Section 504 coordinators, licensed health care professionals (e.g., doctors, registered nurses), school health advisory council members, teachers, paraeducators, school food service staff, bus drivers and other transportation staff, custodians and maintenance staff, after-school program staff, students, parents, community experts, and others who will implement policies. Section 6 provides a list of resources with more information and strategies to inform school policymakers. - Align food allergy policies and practices with the district's "all-hazards" approach to emergency planning and with policies on the care of students with chronic health conditions. - Be familiar with federal and state laws, including regulations, and policies relevant to the obligations of schools to students with food allergies and make sure local school policies and practices follow these laws and policies. - Use multiple mechanisms, such as newsletters and Web sites, to disseminate and communicate food allergy policies to appropriate district staff, families, and the community. - Give parents and students information about the school district's procedures they can use if they disagree with the food allergy policies and plans implemented by the school district. - On a regular schedule, review and evaluate the district's food allergy-related policies and revise as needed. # Prepare for food allergy emergencies. - Make sure that responding to life-threatening food allergy reactions is part of the school district's "all-hazards" approach to emergency planning. - Support and allocate resources to trained and appropriately certified or licensed staff members to respond to food allergy emergencies in all schools. - Review data and information (e.g., when and where medication was used) from incident reports of food allergy reactions and assess the effect of the incident on all students involved. Modify your policies as needed. # Support professional development on food allergies for staff. - Support and allocate resources and time for professional development and training on food allergies. - Identify professional development and training needs to make sure that district and school staff, especially those on food allergy management teams, are adequately trained, competent, and confident to perform assigned responsibilities to help students with life-threatening food allergies and respond to an emergency. # Educate students and family members about food allergies. - Encourage the inclusion of information about food allergies in the district's health education or other curriculum for students to raise awareness. - Support and allocate resources for awareness education for students and parents. # Create and maintain a healthy and safe school environment. - Endorse the use of signs and other strategies to increase awareness about food allergies throughout the school environment. - Make sure that food allergy policies and practices address competitive foods, such as those available in vending machines, in school stores, during class parties, at athletic events, and during after-school programs. - Support collaboration with district and community experts to integrate the management of food allergies with the management of other chronic health conditions. - Support collaboration with district and community experts to make sure schools have healthy and safe physical environments. 
- Develop and consistently enforce policies that prohibit discrimination and bullying against all students, including those with food allergies. # School District Superintendent 1. Lead the school district's coordinated approach to managing food allergies. - Provide leadership and designate school district resources to implement the school district's comprehensive approach to managing food allergies. - Promote, disseminate, and communicate food allergy-related policies to all school staff, families, and the community. - Make sure that each school has a team that is responsible for food allergy management. - Be familiar with federal and state laws, including regulations, and policies relevant to the obligations of schools to students with food allergies and make sure your policies and practices follow these laws and policies. - Give parents and students information about the school district's procedures they can use if they disagree with the food allergy policies and plans implemented by the school district. - On a regular schedule, review and evaluate the school district's food allergy policies and practices and revise as needed. - Establish evaluation strategies for determining when the district's food allergy policies and practices or the school's FAMPP are not effectively implemented. # Prepare for food allergy emergencies. - Make sure that responding to life-threatening food allergy reactions is part of the school district's all-hazards approach to emergency planning. - Make sure that trained and appropriately certified or licensed staff members in each school develop and implement written Emergency Care Plans (ECPs) for students with food allergies. Additional plans can include Individualized Healthcare Plans (IHPs), Section 504 plans, or, if appropriate, Individualized Education Programs (IEPs). - Encourage periodic emergency response drills and practice on how to handle a food allergy emergency in schools. - Review data and information (e.g., when and where medication was administered) from incident reports of food allergy reactions and assess the effect of the incident on all students involved. Modify policies as needed. # Support professional development on food allergies for staff. - Make sure that district and school staff, especially those responsible for implementing the FAMPP, have professional development and training opportunities to become adequately trained, competent, and confident to perform assigned responsibilities to help students with food allergies and respond to an emergency. # Educate students and family members about food allergies. - Help ensure that information about food allergies is included in the district's health education curriculum for students to raise awareness. - Communicate with parents about the district's policies and practices to protect the health of students with food allergies. # Create and maintain a healthy and safe school environment. - Increase awareness of food allergies throughout the school environment. - Collaborate with school board members, school administrators, and other school staff to create a safe environment for students with food allergies. Provide oversight of schools with children who have food allergies. - Make sure that food allergy policies and practices address competitive foods, such as those available in vending machines, in school stores, during class parties, at athletic events, and during after-school programs.
- Consistently enforce policies that prohibit discrimination and bullying against all students, including those with food allergies. # Health Services Director The health services director can be a doctor or registered nurse working at the district level. # Participate in the school's coordinated approach to managing food allergies. - Help develop a school district's comprehensive approach to managing life-threatening food allergies that will support the FAMPP used in each school. - Provide leadership and obtain the resources needed to implement the district's comprehensive approach to managing food allergies. - Promote, disseminate, and communicate the food allergy policies and practices to all school staff, families, the school community, and the local medical community. - Know and educate others about federal and state laws, including regulations and policies relevant to the obligations of schools to students with food allergies and make sure policies and practices follow these laws. - Make sure a doctor or registered nurse reviews all FAMPPs and ECPs. Create other plans as needed. - Provide direct assistance to help schools develop procedures and plans for monitoring students with food allergies, including, if appropriate, through Section 504 plans or IEPs. - Coordinate with other district staff, including the food service director, curriculum coordinator, and student support services director. - Make sure that food allergy policies and practices address competitive foods (foods and beverages sold outside of the federal reimbursable school meals program), such as those available in vending machines, in school stores, during class parties, at athletic events, and during after-school programs. - On a regular schedule, review and evaluate the school district's food allergy policies and practices and revise as needed. # Ensure the daily management of food allergies for individual students. - Help the school team responsible for the FAMPP write this plan. If a student is eligible to receive services under Section 504 or, if appropriate, IDEA, make sure all provisions of these federal laws are met. - Create standard forms, such as health forms, school registration forms, and ECPs, for schools to use to identify students with food allergies and develop individual management plans for them. Establish protocols for tasks related to developing management plans, such as how to interview parents, get appropriate documentation from doctors, and coordinate meals with food service staff. - Help schools implement policies and procedures for managing student medications. These policies should include how epinephrine auto-injectors are stored and accessed, how their use is monitored, and the schedule for regularly inspecting auto-injector expiration dates. They should also include plans for supporting students who are permitted and capable of managing their own food allergies by carrying and using epinephrine auto-injectors. - Help schools that do not have a registered nurse on site develop plans to manage food allergies in individual students, provide health services when needed, and respond to food allergy emergencies. - Help schools link students with food allergies and their families to community health services and family support services when needed. # Prepare for food allergy emergencies. - Develop protocols for responding to food allergy emergencies that can guide practices at the building level.
- If allowed by state laws, including regulations, and district policy, obtain or write nonpatient-specific prescriptions and standing orders for epinephrine auto-injectors that can be used to respond to anaphylaxis emergencies. - Work directly with local emergency responders to confirm that they carry epinephrine auto-injectors for anaphylaxis emergencies. - Review school emergency response plans to make sure they include the actions needed to respond to food allergy emergencies. - Help schools conduct periodic emergency response drills and practice how to handle food allergy emergencies. - Help schools conduct debriefing meetings after a food allergy reaction or emergency. - Review data and information (e.g., when and where medication was administered) from incident reports of food allergy reactions and assess the effect of the incident on all students involved. Provide input to modify policies and practices as needed. - Collect school data to monitor and track food allergy emergencies across the district. Use these data to guide improvements in policies and practices. # Support professional development on food allergies for staff. - Seek professional development opportunities to learn updated information about managing food allergies. - Educate district and school staff about food allergies so they are adequately trained, competent, and confident to perform assigned responsibilities to help students with food allergies and respond to an emergency. - Coordinate district training for school nurses and others who might lead school teams responsible for implementing FAMPPs to make sure they have the information they need to develop effective plans. - Know and educate others about federal and state laws, including regulations, and policies relevant to the obligations of schools to students with food allergies and make sure district policies and practices follow these laws and policies. - Help school building leaders plan and provide food allergy training for staff, parents, and students. - Help train delegated staff members on how to store, access, and administer epinephrine auto-injectors. # Educate students and family members about food allergies. - Work collaboratively with the curriculum coordinator or health education coordinator at the district level to identify appropriate food allergy content for the district's health education curriculum. - Help school administrators communicate the district's policies and practices for managing food allergies to parents through newsletters, announcements, and other methods. # Create and maintain a healthy and safe school environment. - Work collaboratively with district staff to enforce policies that promote healthy physical environments. - Work collaboratively with student support services staff at the district level to enforce policies that prohibit discrimination and bullying against all students, including those with food allergies. # Student Support Services Director The student support services director can be a school psychologist, school counselor, or child and family services director. 1. Participate in the school's coordinated approach to managing food allergies. - Help develop a school district's comprehensive approach to managing food allergies that will support the FAMPP used in each school. - Promote, disseminate, and communicate food allergy policies to all school staff, families, and the community.
- Know and inform others about federal and state laws, including regulations, and policies relevant to the obligations of schools to students with food allergies and make sure district policies and practices follow these laws and policies. - Provide direct assistance to help schools establish procedures and plans for monitoring students with food allergies, including, if appropriate, through Section 504 plans or IEPs. - Coordinate with other district staff, including the food service director, curriculum coordinator, and health services director. - On a regular schedule, review and evaluate the school district's food allergy policies and practices and revise as needed. # Ensure the daily management of food allergies for individual students. - Help the school team responsible for implementing the FAMPP write this plan. If a student is eligible to receive services under Section 504 or, if appropriate, IDEA, make sure all provisions of these federal laws are met. - Help schools link students with food allergies and their families to community health services and family support services when needed. # Prepare for food allergy emergencies. - Help develop protocols for responding to food allergy emergencies that can guide practices in district schools. - Review school emergency response plans to make sure they include the actions needed to respond to food allergy emergencies. - Help schools conduct periodic emergency response drills and practice how to handle a food allergy emergency. - Review data and information (e.g., when and where medication was administered) from incident reports of food allergy reactions and assess the effect of the incident on affected students. Provide input to modify policies and practices as needed. # Support professional development on food allergies for staff. - Help educate district and school staff about food allergies so they are adequately trained, competent, and confident to perform assigned responsibilities to help students with food allergies and respond to an emergency. - Help develop district training for all school staff to help them improve their FAMPPs. - Know and educate others about federal and state laws, including regulations, and policies relevant to the obligations of schools to students with food allergies and make sure district and school policies and practices follow these laws and policies. - Help school building leaders plan and provide food allergy training for staff, parents, and students. # Educate students and family members about food allergies. - Help school administrators communicate the district's policies and practices for preventing food allergy reactions to parents through newsletters, announcements, and other methods. # Create and maintain a healthy and safe school environment. - Work collaboratively with district staff to enforce policies that promote healthy physical environments. - Work collaboratively with district health services staff, school principals, school counselors, and others to help enforce policies that prohibit discrimination and bullying against all students, including those with food allergies. # District Food Service Director 1. Participate in the school's coordinated approach to managing food allergies. - Help develop a school district's comprehensive approach to managing food allergies that will support the FAMPP used in each school.
- Make sure that food allergy policies and practices address competitive foods, such as those available in vending machines, in school stores, at fundraisers, during class parties, at athletic events, and during after-school programs. - Access and use resources and guidance from local health departments and the state agency that administers child nutrition programs. - Promote, disseminate, and communicate the food allergy policies to school staff, families, and the community. - Know and educate others about federal and state laws, including regulations, and policies on food allergies and the need to follow these laws and policies, including those regulations that pertain to the U.S. Department of Agriculture's (USDA's) Child Nutrition Program. - Ensure that food service staff understand USDA's required doctor's statement as written and that the statement provides sufficient information to prepare a safe meal. - Help establish school-level procedures and plans for monitoring students with food allergies, including plans for accommodating the special nutritional needs of individual students when necessary. - Coordinate with other district staff, including the student support services director, curriculum coordinator, and health services director. - On a regular schedule, review and evaluate the school district's food allergy policies and practices and revise as needed. # Ensure the daily management of food allergies for individual students. - Develop and implement procedures in each school for identifying students with food allergies in school cafeterias. Make sure that procedures governing access to personally identifiable information from education records are consistent with student rights under the Family Educational Rights and Privacy Act of 1974 (FERPA) and any other federal and state laws that protect the privacy or confidentiality of student information. (See Section 5 for more information about FERPA.) - Work with the health services director, principals, and other school staff responsible for implementing FAMPPs to set up procedures for handling food allergies in the cafeteria. These plans should be consistent with the student's IHP, Section 504 plan, or, if appropriate, IEP, and USDA regulations on meals and food substitutions, as reflected in the USDA's Accommodating Children with Special Dietary Needs in the School Nutrition Programs. Procedures should be established for children who participate in school meals programs and those who bring food from home. - Work with school teams responsible for developing ECPs for students with food allergies. For schools that participate in the USDA's Child Nutrition programs, make sure that documents that list appropriate food substitutions for a student with a food allergy disability are signed by a licensed doctor. The doctor's statement must identify: °°The child's food allergy. °°An explanation of why the allergy restricts the child's diet. °°The major life activity affected by the allergy. °°The food or foods to be omitted from the child's diet and the foods or choices that can be substituted. - Establish procedures for obtaining information to clarify food substitutions and other relevant medical information from a student's doctor as needed. - Coordinate food substitutions for all schools with students who have food allergies, in consultation as necessary with each child's doctor, and manage the documentation of these activities. When possible, use foods that are already served in school meals or snacks to make appropriate substitutions.
- Provide oversight and tracking of each student's dietary plans, including tracking allergic reactions that occur during school meals. - Develop and implement policies and procedures to prevent allergic reactions and cross-contact during meal preparation and service. Communicate these policies and procedures to school food service staff. - Keep information about ingredients for all foods bought and served by school food service programs and keep labels of foods given to food-allergic children for at least 24 hours so that the labels can be reviewed if needed (a simple tracking sketch appears later in this section). - Be prepared to share information about ingredients in recipes and foods served by food service programs with parents. # Prepare for food allergy emergencies. - Help develop protocols for responding to food allergy emergencies that can guide practices in district schools. - Help the health services director communicate the appropriate ways to avoid exposure to food allergens and respond to food allergy emergencies to all staff members who are involved in managing a student's food allergy in the cafeteria. - Make sure that food service staff are able to respond to a food allergy emergency in the cafeteria and implement an ECP. - Review school emergency response plans to make sure they include the actions needed to respond to food allergy emergencies during school meals. - Help schools conduct periodic emergency response drills and practice how to handle a food allergy emergency. - Review data and information (e.g., when and where medication was administered) from incident reports on any food allergy reactions and assess the effect of the incident on affected students. Provide input to modify policies and practices as needed. # Support professional development on food allergies for staff. - Help educate district and school staff about food allergies so they are adequately trained, competent, and confident to perform assigned responsibilities to help students with food allergies and respond to an emergency. - Provide training opportunities for school food service staff to help them understand how to follow policies and procedures for preparing and serving safe meals and snacks for students with food allergies. - Make sure that school food service staff participate in district training on food allergies. - Make sure that all school staff understand their role in preventing and responding to emergencies in the school cafeteria. - Help school building leaders plan and provide food allergy training for staff, parents, and students. # Educate students and family members about food allergies. - Help the curriculum coordinator or health education coordinator integrate food allergy lessons, such as how to read food labels, into the district's health education curriculum. - Communicate with parents about any foods that might be served as part of school meals programs, such as the School Breakfast Program or the Fresh Fruit and Vegetable Program. - Share information about options for food substitutions with the parents of students with food allergies. Schools are encouraged to make substitutions with foods that have already been bought, when possible. - Work with administrators, classroom teachers, and parent-teacher organizations to offer food allergy education to parents in schools. - Help school administrators communicate the policies and procedures used in food service programs to prevent food allergy reactions to parents through newsletters, announcements, and other methods.
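The label retention and dietary tracking duties above lend themselves to a simple serving log. The sketch below is a minimal illustration only; the field names, in-memory list, and retention-window handling are hypothetical, and a real system would use durable records governed by district policy and student privacy laws such as FERPA.

```python
# Hypothetical sketch of a served-food log for the practices above: retain
# label information for foods served to children with food allergies for at
# least 24 hours so labels can be reviewed if a reaction occurs.
# All names and fields are illustrative, not a real system.

from datetime import datetime, timedelta

RETENTION = timedelta(hours=24)  # minimum label-retention window named above
log = []                         # a real system would use durable storage

def record_serving(student, item, label_text):
    """Record what was served, to whom, and the label text as purchased."""
    log.append({"time": datetime.now(), "student": student,
                "item": item, "label": label_text})

def labels_for_review(student):
    """Return log entries for a student within the retention window."""
    cutoff = datetime.now() - RETENTION
    return [entry for entry in log
            if entry["student"] == student and entry["time"] >= cutoff]

record_serving("Student B", "Sunflower butter sandwich",
               "Ingredients: sunflower seed, whole wheat bread, salt")
for entry in labels_for_review("Student B"):
    print(entry["item"], "--", entry["label"])
```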
# Create and maintain a healthy and safe school environment. - Work collaboratively with district staff to help enforce policies that promote healthy physical environments. - Work collaboratively with district health services staff, school principals, school food service staff, and others to help enforce policies that prohibit discrimination and bullying against students with food allergies. - Provide guidance to school food service staff that helps them meet the dietary needs of students with food allergies and protect their health during school meals, while guarding against practices that could result in alienation of or discrimination against these students. See Section 6 for more resources and tools that might assist in managing food allergies and allergy-related emergencies in schools. # Section 3. Putting Guidelines into Practice: Actions for School Administrators and Staff Effective management of food allergies in schools requires the participation of many people. This section presents the actions that school building administrators and staff can take to implement the recommendations in Section 1. Some actions duplicate responsibilities required under applicable federal and state laws, including regulations, and policies. Although many of the actions presented here are not required by statute, they can contribute to better management of food allergies in schools. Some actions are intentionally repeated for different staff positions to ensure that critical actions are addressed even if a particular position does not exist in the district or school (e.g., school doctor). This duplication also reinforces the need for different staff members to work together to manage food allergies effectively. All actions are important, but some will have a greater effect than others. Some actions may be most appropriately carried out by district-level staff members whose roles are to support food allergy management plans and practices across schools or to provide specific services to schools that do not have an on-site staff person to provide these services. Ultimately, each school district or school must determine which actions are most practical and necessary to implement and who should be responsible for those actions. # School Administrator The school administrator can be a principal or assistant principal. # Lead the school's coordinated approach to managing food allergies. - Coordinate planning and implementation of a comprehensive Food Allergy Management and Prevention Plan (FAMPP) for your school. If your school has an on-site registered nurse, work with this person and the members of any relevant team, such as the school wellness team, school health team, or school improvement team, to plan and implement the FAMPP. Designate a qualified person (e.g., the registered nurse) to lead development of the FAMPP and designate responsibilities for implementing the plan as appropriate. If your school does not have an on-site nurse, ask for help from a registered nurse at the district level or from a public health nurse in the community. - Make sure staff understand the school's responsibilities under Section 504 of the Rehabilitation Act of 1973, the Americans with Disabilities Act (ADA), the Individuals with Disabilities Education Act (IDEA), and the Richard B. Russell National School Lunch Act to students who are or may be eligible for services under those laws. Make sure they understand the need to comply with the Family Educational Rights and Privacy Act of 1974 (FERPA) and any other federal and state laws that protect the privacy of student information.
(See Section 5 for information about applicable federal laws.) - Communicate school district policies and the school's practices for managing food allergies to all school staff, substitute teachers, classroom volunteers, and families. - Make sure staff implement school district policies for managing food allergies. - Help staff implement the school's FAMPP. - On a regular basis, review and evaluate your school's FAMPP and revise as needed. # Ensure the daily management of food allergies for individual students. - Make sure that mechanisms, such as health forms, registration forms, and parent interviews, are in place to identify students with food allergies. - If your school does not have an on-site registered nurse, work with the parents of children with food allergies and their doctor to develop a written Emergency Care Plan (ECP) (sometimes called a Food Allergy Action Plan). This plan is needed to manage and monitor students with food allergies on a daily basis, whether they are at school or at school-sponsored events. If a student has been determined to be eligible for services under Section 504 or, if appropriate, IDEA, make sure that all provisions of these federal laws are met. - Share information about students with food allergies with all staff members who need to know, provided the exchange of information occurs in accordance with FERPA and any other federal and state laws that protect the confidentiality or privacy of student information. (See Section 5 for more information about FERPA.) Make sure these staff members are aware of what actions are needed to manage each student's food allergy on a daily basis. # Prepare for and respond to food allergy emergencies. - Make sure that responding to life-threatening food allergy reactions is part of the school's "all-hazards" approach to emergency planning. - Make sure that parents of students with food allergies provide epinephrine auto-injectors to use in food allergy emergencies, if their use is called for in a student's ECP. - Set up communication systems that are easy to use for staff who need to respond to food allergy reactions and emergencies. - Make sure that staff who are delegated and trained to administer epinephrine auto-injectors can get to them quickly and easily. - Make sure that local emergency responders know that epinephrine may be needed when they are called to respond to a school emergency. - Prepare for food allergy reactions in students without a prior history of food allergies or anaphylaxis. - Make sure that staff plan for the needs of students with food allergies during class field trips and during other extracurricular activities. - Conduct periodic emergency response drills and practice how to handle a food allergy emergency. - Contact parents immediately after any suspected allergic reaction and after a child with a food allergy ingests or has contact with a food that may contain an allergen, even if an allergic reaction does not occur. If the child may need treatment, recommend that the parents notify the child's primary health care provider or allergist. - Document all responses to food allergy emergencies. Review data and information (e.g., when and where medication was used) from incident reports of food allergy emergencies and assess the effect on affected students. Provide input to modify your school district's emergency response policies and practices as needed.
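Documenting every response and reviewing when and where medication was used, as the last bullet above calls for, presupposes a consistent incident record. A minimal sketch follows; the record fields and the two sample incidents are hypothetical, and real reports would follow district forms and privacy rules.

```python
# Hypothetical sketch of a food allergy incident record and a simple review,
# mirroring "review when and where medication was used." Field names and the
# two sample incidents are illustrative only.

from collections import Counter

incidents = [
    {"school": "Elementary 1", "location": "cafeteria",
     "epinephrine_given": True, "ems_called": True},
    {"school": "Elementary 1", "location": "classroom",
     "epinephrine_given": False, "ems_called": False},
]

def review(incidents):
    """Summarize where reactions occurred and how often epinephrine was used."""
    by_location = Counter(i["location"] for i in incidents)
    epinephrine_uses = sum(i["epinephrine_given"] for i in incidents)
    return by_location, epinephrine_uses

locations, uses = review(incidents)
print(f"Reactions by location: {dict(locations)}; epinephrine used {uses} time(s)")
```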
# Support professional development on food allergies for staff. - Make sure staff receive professional development and training on food allergies. - Coordinate training with licensed health care professionals, such as school or district doctors or nurses or local health department staff, and with other essential school or district professionals, such as the district's food service director, if appropriate. Invite parents of students with food allergies to help develop the content for this training. # Educate students and family members about food allergies. - Make sure that the school's curricular offerings include information about food allergies to raise awareness among students. - Communicate the school's responsibilities, expectations, and practices for managing food allergies to all parents through newsletters, announcements, and other methods. # Create and maintain a healthy and safe school environment. - Increase awareness of food allergies throughout the school environment. - Emphasize and support practices that protect and promote the health of students with food allergies across the school environment, during before- and after-school activities, and during transportation of students. - Make sure that students with food allergies have an equal opportunity to participate in all school activities and events. - Make sure that food allergy policies and practices address competitive foods, such as those available in vending machines, in school stores, at fundraisers, during class parties, at athletic events, and during after-school programs. - Reinforce the school's rules that prohibit discrimination and bullying as they relate to students with food allergies. # Registered School Nurses # Participate in the school's coordinated approach to managing food allergies. - Take the lead in planning and implementing the school's FAMPP or help the school administrator with this task. - Support partnerships among school staff and the parents and doctors (e.g., pediatricians or allergists) of students with food allergies. - Consult state and local Nurse Practice Acts and guidelines to clarify the roles and responsibilities of school nurses. # Supervise the daily management of food allergies for individual students. - Make sure that students with food allergies are identified. Share information with other staff members as needed, provided the exchange of information occurs in accordance with FERPA and any other federal and state laws that protect the confidentiality or privacy of student information. - Obtain or develop an ECP for each student with a food allergy or food allergy disability. Get the medical information needed to care for children with food allergies when they are at school, such as medical records and emergency information. Communicate with parents and health care providers (with parental consent) about known food allergies, signs of allergic reactions, relevant use of medications, complicating conditions, and other relevant health information. - Make sure that USDA's required doctor's statement is completed and provides clear information to assist in the preparation of a safe meal accommodation. This statement can be part of an ECP or a separate document. - Use a team approach to develop an Individualized Healthcare Plan (IHP) for each student with a food allergy and, if required by federal law, a Section 504 plan or, if appropriate, an Individualized Education Program (IEP). - Monitor each student's ECP or other relevant plan on a regular basis and modify plans when needed.
- Refer parents of children who do not have access to health care to services in the community. - For students who have permission to carry and use their own epinephrine auto-injectors, regularly assess their ability to perform these tasks. # Prepare for and respond to food allergy emergencies. - Develop instructions for responding to an emergency if a school nurse is not immediately available. Add these instructions to the school's FAMPP. - File ECPs in a place where staff can get to them easily in an emergency. Distribute ECPs to staff on a need-to-know basis. - Make sure that the administration of an epinephrine auto-injector follows school policies and state mandates. Make sure that medications are kept in a secure place that staff can get to quickly and easily. Keep back-up epinephrine auto-injectors for students who carry their own. Regularly inspect the expiration date on all stored epinephrine auto-injectors. - Train and supervise delegated staff members on how to administer an epinephrine auto-injector and how to recognize the signs and symptoms of food allergy reactions and anaphylaxis. - If allowed by state and local laws, work with school leaders to get extra epinephrine auto-injectors or nonpatient-specific prescriptions or standing orders for auto-injectors to keep at school for use by staff delegated and trained to administer epinephrine in an anaphylaxis emergency. - Assess whether students can reliably carry and use their own epinephrine auto-injectors and encourage self-directed care when appropriate. - Make sure that school emergency plans include procedures for responding to any student who experiences signs of anaphylaxis, whether the student has been identified as having a food allergy or not. - Make sure that staff plan for the needs of students with food allergies during class field trips and during other extracurricular activities. - Contact parents immediately after any suspected allergic reaction and after a child with a food allergy ingests or has contact with a food that may contain an allergen, even if an allergic reaction does not occur. If the child may need treatment, recommend that the parents notify the child's primary health care provider or allergist. - After each food allergy emergency, review how it was handled with the school administrator, school doctor or nurse (if applicable), parents, staff members involved in the response, emergency medical services (EMS) responders, and the student to identify ways to prevent future emergencies and improve emergency response. - Help students with food allergies transition back to school after an emergency. - Talk with students who may have witnessed a life-threatening allergic reaction in a way that does not violate the privacy rights of the student with the food allergy. # Help provide professional development on food allergies for staff. - Stay up-to-date on best practices for managing food allergies. Sources for this information include allergists or other doctors who are treating students with food allergies, local health department staff, national school nursing resources, and the district's food service director or registered dietitian. - Educate teachers and other school staff about food allergies and the needs of specific students with food allergies in a manner consistent with FERPA, USDA, and any other federal and state laws that protect the privacy or confidentiality of student information. (See Section 5 for more information about FERPA.)
- Advise staff to refer students to the school nurse when food allergy symptoms or side effects interfere with school activities so that medical and educational services can be properly coordinated. # Provide food allergy education to students and parents. - Teach students with food allergies about food allergies and help them develop self-management skills. - Make sure that students who are able to manage their own food allergies know how to recognize the signs and symptoms of their own allergic reactions, are capable of using an epinephrine auto-injector, and know how to notify an adult who can respond to a food allergy reaction. - Help classroom teachers add food allergy lessons to their health and education curricula. - Find ways for the parents of students with food allergies to share their knowledge and experience with other parents. - Work with administrators, classroom teachers, and parent-teacher organizations to offer food allergy education for parents at school. - Help the school administrator communicate the school's policies and practices for preventing food allergy reactions to parents through newsletters, announcements, and other methods. # Create and maintain a healthy and safe school environment. - Work with other school staff and parents to create a safe environment for students with food allergies. On a regular basis, assess the school environment, including the cafeteria and classrooms, to identify allergens in the environment that could lead to allergic reactions. Work with appropriate staff to develop strategies to help children avoid identified allergens. - Make sure that food allergy policies and practices address competitive foods, such as those available in vending machines, in school stores, fundraisers, during class parties, at athletic events, and during after-school programs. - Work with school counselors and other school staff to provide emotional support to students with food allergies. - Promote an environment that encourages students with food allergies to tell a staff member if they are bullied because of their allergy. # School Doctors A school doctor works full-time or part-time to provide consultation and a wide range of health services to the school population. # Participate in the school's coordinated approach to managing food allergies. - Lead or help plan and implement the school's FAMPP. - Support partnerships among school staff and the parents and doctors (e.g., allergists, pediatricians) of students with food allergies. - Keep current on federal, state, and local guidance on food allergy management. - Consult state and local Nurse Practice Acts to make sure the roles and responsibilities of school nurses are appropriate. - Guide and support the food allergy management practices of school nursing staff. - Help evaluate school FAMPPs. # Ensure the daily management of food allergies for individual students. - Help the school nurse perform the actions necessary to manage students with food allergies on a daily basis. (See the items under Action 2 for Registered School Nurses.) 3. Prepare for and respond to food allergy emergencies. - Help the school nurse make sure that all students with food allergies have an ECP. - If allowed by state and local laws, write prescriptions or standing orders for nonpatient-specific epinephrine auto-injectors so the school can stock back-up medication for use in food allergy emergencies. 
- Help the school nurse assess whether students can reliably carry and use their own epinephrine auto-injector and encourage self-directed care when appropriate. - Help the school nurse train staff on how to use epinephrine auto-injectors and recognize the signs and symptoms of food allergy reactions and anaphylaxis. - Help the school nurse and health assistants regularly inspect the expiration date on all stored epinephrine auto-injectors. - Make sure that school emergency plans include procedures for responding to any student who experiences signs of anaphylaxis, whether diagnosed with a food allergy or not. - Make sure that staff plan for the needs of students with food allergies during class field trips and during other extracurricular activities. - Contact parents immediately after any suspected allergic reaction and after a child with a food allergy ingests or has contact with a food that may contain an allergen, even if an allergic reaction does not occur. If the child may need treatment, recommend that the parents notify the child's primary health care provider or allergist. - After each food allergy emergency, review how it was handled with the school administrator, school nurse, parents, staff members involved in the response, EMS responders, and the student to identify ways to prevent future emergencies and improve emergency response. # Help provide professional development on food allergies for staff. - Share current and relevant knowledge of best practices for managing food allergies with school leaders (e.g., school administrator, school nurse). - Help educate teachers and other school staff about food allergies and the needs of specific students with food allergies, in a manner consistent with FERPA, USDA, and any other federal and state laws that protect the privacy or confidentiality of student information. (See Section 5 for more information about FERPA.) - Advise staff to refer students to the school doctor or nurse when symptoms or side effects of a food allergy interfere with school activities so that medical and educational services can be coordinated. # Provide food allergy education to students and parents. - Help teach students with food allergies about food allergies and help them develop self-management skills. - Help the school nurse make sure that students who are able to manage their food allergies know how to recognize the signs and symptoms of their own allergic reactions, are capable of using an epinephrine auto-injector, and know how to notify an adult who can respond to a food allergy reaction. - Help classroom teachers add food allergy lessons to their health and education curricula. - Help find ways for parents of students with food allergies to share their knowledge and experience with other parents. - Work with administrators, the school nurse, classroom teachers, and parent-teacher organizations to offer food allergy education for parents at school. - Help the school administrator communicate the school's policies and practices for preventing food allergy reactions to parents through newsletters, announcements, and other methods. # Create and maintain a healthy and safe school environment. - Work with other school staff and parents to create a safe environment for students with food allergies. - On a regular basis, assess the school environment, including the cafeteria and classrooms, to identify allergens in the environment that could lead to allergic reactions. Work with appropriate staff to manage identified allergens.
- Make sure that food allergy policies and practices address competitive foods, such as those available in vending machines, in school stores, at fundraisers, during class parties, at athletic events, and during after-school programs. - Work with school counselors, the school nurse, and other school staff to provide emotional support to students with food allergies. - Promote an environment that encourages students with food allergies to tell a staff member if they are bullied or harassed because of their allergy. # Health Assistants, Health Aides, or Other Unlicensed Personnel These staff members work with the school or district nurse or doctor. # Help with the daily management of food allergies for individual students. - Help the school nurse identify students with food allergies. Review the medical records and emergency information of all students. - Talk with the school nurse about any allergic reactions and changes in a student's health status. # Prepare for and respond to food allergy emergencies. - Get a copy of the ECP for every student with food allergies. Make sure the plan includes information about signs and symptoms of an allergic reaction, how to respond, and whether medications should be given. - File ECPs in a place where staff can get to them easily in an emergency. - Be ready to respond to a food allergy emergency if a nurse is not immediately available. If school policies and state mandates allow you to give medication and you are delegated to perform this task, complete training on how to administer epinephrine, regularly review instructions, and practice this task. Make sure that medications are kept in a secure place that you or other delegated staff members can get to quickly and easily. Regularly inspect the expiration date on all stored epinephrine auto-injectors. - After each food allergy emergency, participate in a review of how it was handled with the school administrator, school doctor (if applicable), school nurse, parents, staff members involved in the response, EMS responders, and the student to identify ways to prevent future emergencies and improve emergency response. # Participate in professional development on food allergies. - Complete training to help you recognize and understand the following (a simple sketch for tracking training completion appears later in this section): °°Signs and symptoms of allergic reactions and how they are communicated by students. °°How to read food labels and identify allergens. °°How to use an epinephrine auto-injector (if delegated and trained to do so). °°How to deal with emergencies in the school in ways that are consistent with a student's ECP. °°Your role in implementing a student's ECP. °°When and how to call EMS and parents. °°How FERPA, USDA, and other federal and state laws that protect the privacy and confidentiality of student information apply to students with food allergies and food allergy disabilities. °°General strategies for reducing or preventing exposure to food allergens in the classroom, such as cleaning surfaces, using nonfood items for celebrations, and getting rid of nonfood items that contain food allergens (e.g., clay, paste). °°Policies on bullying and discrimination against all students, including those with food allergies. # Provide food allergy education to students and parents. - Get help from the school counselor or other mental health professionals to teach students about bullying of and discrimination against students with food allergies. - Help communicate policies on bullying and discrimination to parents.
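Schools that coordinate this training across many staff members may want a simple way to see who still needs which topic. The sketch below is purely illustrative; the topic list condenses the items above, and the staff names and data structure are hypothetical.

```python
# Hypothetical sketch: track which staff members have completed the training
# topics listed above and flag gaps. Topic and staff names are illustrative.

REQUIRED_TOPICS = {
    "signs and symptoms",
    "label reading",
    "epinephrine auto-injector use",
    "role in the ECP",
    "when and how to call EMS",
}

completed = {
    "Aide 1": {"signs and symptoms", "label reading"},
    "Aide 2": set(REQUIRED_TOPICS),
}

def training_gaps(staff_member):
    """Return the required topics a staff member has not yet completed."""
    return REQUIRED_TOPICS - completed.get(staff_member, set())

for staff_member in completed:
    gaps = training_gaps(staff_member)
    status = "training complete" if not gaps else "needs: " + ", ".join(sorted(gaps))
    print(f"{staff_member}: {status}")
```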
- Work with other school staff and parents to create a safe environment for students with food allergies.
- Promote an environment that encourages support for students with food allergies and promotes positive interactions between students.
- Report all cases of bullying against students, including those with food allergies, to the school administrator, school nurse, or school counselor.
# Classroom Teachers
This category includes classroom teachers in all basic subjects, as well as physical education teachers, instructional specialists such as music or art teachers, paraeducators, student teachers, long-term substitute teachers, classroom aides, and classroom volunteers.
# Participate in the school's coordinated approach to managing food allergies.
- Ask the school nurse or school administrator for information on current policies and practices for managing students with food allergies, including how to manage medications and respond to a food allergy reaction.
- Help plan and implement the school's FAMPP.
# Help with the daily management of food allergies for individual students.
- Make sure you understand the essential actions that you need to take to help manage food allergies when students with food allergies are under your supervision, including when meals or snacks are served in the classroom, on field trips, or during extracurricular activities. Seek guidance and help from the school administrator, school nurse, or school food service director as needed.
- Be available and willing to help students who manage their own food allergies.
- Work with parents, the school nurse, and other appropriate school personnel to determine if any classroom modifications are needed to make sure that students with food allergies can participate fully in class activities.
- With parental consent, share information and responsibilities with substitute teachers and other adults who regularly help in the classroom (e.g., paraeducators, volunteers, instructional specialists). (Depending on a school district's FERPA notice as to which individuals would constitute school officials with legitimate educational interests, FERPA may not require parental consent in these circumstances. FERPA also includes an emergency exception to the prior consent requirement if there is an articulable and significant threat to the health or safety of the student or others. See Section 5 for more information about FERPA.)
- Refer students with undiagnosed but suspected food allergies to the school nurse for follow-up.
- If your school does not have a nurse on-site, talk with parents about the signs and symptoms you have seen and recommend that they discuss them with their primary health care provider.
- If you suspect a severe food allergy reaction or anaphylaxis, take immediate action, consistent with your school's FAMPP or "all-hazards" emergency response protocol.
# Prepare for and respond to food allergy emergencies.
- Read and regularly review each student's ECP. Never hesitate to activate the plan in an emergency. If you are delegated and trained according to state laws, including regulations, be ready to use an epinephrine auto-injector.
- Keep copies of ECPs for your students in a secure place that you can get to easily in an emergency. With parental consent, share information from the ECP with substitute teachers and other adults who regularly help in the classroom to help them know how to respond to a food allergy emergency.
(Depending on a school district's FERPA notice as to which individuals would constitute school officials with legitimate educational interests, FERPA may not require parental consent in these circumstances. FERPA also includes an emergency exception to the prior consent requirement if there is an articulable and significant threat to the health or safety of the student or others. See Section 5 for more information about FERPA.)
- Support and help students who have permission to carry and use their own epinephrine in cases of an allergic reaction.
- Make sure that the needs of students with food allergies are met during class field trips and during other extracurricular activities.
- Immediately contact the school administrator and, if available, the school nurse after any suspected allergic reaction.
- After each food allergy emergency, review how it was handled with the school administrator, school nurse, parents, other staff members involved in the response, EMS responders, and the student to identify ways to prevent future emergencies and improve emergency response.
- Help students with food allergies transition back to school after an emergency.
- Address concerns with students who witness a life-threatening allergic reaction in a way that does not compromise the confidentiality rights of the student with the allergy.
# Participate in professional development on food allergies.
- Complete training to help you recognize and understand the following:
°°Signs and symptoms of food allergies and how they are manifested in and communicated by students.
°°How to read food labels and identify allergens.
°°How to use an epinephrine auto-injector (if delegated and trained to do so).
°°How to respond to food allergy emergencies in ways that are consistent with a student's ECP and, if appropriate, a Section 504 plan or IEP.
°°When and how to call EMS and parents.
°°Your role in implementing a student's ECP.
°°FERPA, USDA, and other federal and state laws that protect the privacy or confidentiality of student information, and other legal rights of students with food allergies. (See Section 5 for more information about federal laws.)
°°General strategies for reducing or preventing exposure to food allergens in the classroom, such as cleaning surfaces, using nonfood items for celebrations, getting rid of nonfood materials that contain food allergens (e.g., clay, paste), and preventing cross-contact of allergens when meals or snacks are served in the classroom.
°°Policies that prohibit discrimination and bullying against all students, including those with food allergies.
# Provide food allergy education to students and parents.
- Look for ways to add information about food allergies to your curriculum. Work with other teachers to plan lessons and activities to teach students how they can prevent allergic reactions.
- Work with the school nurse to educate parents about the presence and needs of students with food allergies in the classroom. Raise awareness and educate the parents of children without food allergies about "food rules" for the classroom. Ask parents to help you keep certain foods out of the classroom during meals, celebrations, and other activities that might include food.
- Ask the school counselor or other mental health professionals for help or resources to teach students about policies that prohibit discrimination and bullying against all students, including those with food allergies.
- Communicate policies on bullying and discrimination to all parents.
# Create and maintain a healthy and safe school environment.
- Promote a safe physical environment through the following actions:
°°Create classroom rules and practices for dealing with food allergies. Tell parents about these rules and practices at the beginning of the school year or when you find out that a student with a food allergy will be in your class.
°°Create ways for students with food allergies to participate in all class activities.
°°Avoid using known allergens in classroom activities, such as arts and crafts, counting, science projects, parties, holidays and celebrations, or cooking.
°°Enforce hand washing before and after eating, particularly for younger students.
°°Use nonfood items for rewards or incentives.
°°Encourage the use of allergen-safe foods or nonfood items for birthday parties or other celebrations in the classroom. Support parents of students with food allergies who wish to send allergen-safe snacks for their children.
°°Discourage trading or sharing of food with a student with a food allergy in the classroom, particularly for younger students.
°°Enforce food allergy prevention practices while supervising students in the cafeteria.
- Manage food allergies on field trips through the following actions:
°°Determine if the intended location is safe for students with food allergies. If it is not safe, the field trip might have to be changed or cancelled if accommodations cannot be made. Students cannot be excluded from field trips because of food allergies.
°°Invite the parents of students with food allergies to chaperone or go with their child on the field trip. Many parents may want to go, but they cannot be required to go.
°°Work with school food service staff to plan meals and snacks.
°°Make sure the group includes someone who is delegated and trained to administer epinephrine, that you have quick access to an epinephrine auto-injector, and that you know where the nearest medical facilities are located. If a food allergy emergency occurs, activate the student's ECP and notify the parents.
°°Make sure there are appropriate emergency protocols and mechanisms in place to respond to a food allergy emergency when away from the school.
°°Make sure that communication devices are working so you can respond quickly during an emergency.
- Promote a positive psychosocial climate through the following actions:
°°Be a role model by respecting the needs of students with food allergies.
°°Help students make decisions about and manage their own food allergies.
°°Encourage supportive and positive interactions between students.
°°Reinforce the school's rules against discrimination and bullying.
°°Take action to address all reports of bullying or harassment of a student with a food allergy.
°°Tell parents if their child has been bullied, and report all cases of bullying to the school administrator.
°°Tell parents and the school nurse if you see negative changes in a student's academic performance or behavior.
# School Food Service Managers and Staff
# Participate in the school's coordinated approach to managing food allergies.
- Use resources and guidance from the district food service director, local board of health, USDA, and dietitians to reduce exposure to food allergens.
- Help plan and implement the school's FAMPP. Make sure that it includes specific practices for managing food allergens in school meals served inside and outside of the cafeteria.
# Help with the daily management of food allergies for individual students.
- Identify students with food allergies in a way that does not compromise students' privacy or confidentiality rights.
- Make sure you have and understand dietary orders, or the doctor's statement, and other relevant medical information that you need to make meal accommodations for students with food allergies and food allergy disabilities.
- Consult with the district food service director to help develop individual dietary and cafeteria management plans for each student with a food allergy and food allergy disability. These plans should be consistent with the student's IHP; with the student's Section 504 plan or, if appropriate, IEP, if the student has a food allergy disability; and with USDA regulations on meals and food substitutions, as reflected in USDA's Accommodating Children with Special Dietary Needs in the School Nutrition Programs.
- Communicate appropriate actions for avoiding allergic reactions and responding to food allergy emergencies to all staff members, including food service staff, who are expected to help manage a student's food allergy in the cafeteria.
- Follow policies and procedures to prevent allergic reactions and cross-contact of potential food allergens during food preparation and service.
- Understand how to read labels to identify allergens in foods and beverages served in school meals. Work with the school food service director, the district food service director, or the food manufacturer if additional information or clarification is needed on the product's ingredients.
- Manage food substitutions for students with food allergies and food allergy disabilities and manage the documentation of these activities. Work with the school administrator or school nurse and the district food service director to make sure that the information needed to meet USDA and state regulations for food service is documented as required.
- Be prepared to share information about ingredients in recipes and foods served by the school food service program with parents.
# Prepare for and respond to food allergy emergencies.
- Be familiar with students' ECPs and the doctor's statement required by USDA, and know what actions must be taken if a food allergy emergency occurs in the cafeteria. Make sure that food service staff are able to respond to a food allergy emergency in the cafeteria and implement an ECP.
- If you are delegated and trained according to state laws, including regulations, be ready to use an epinephrine auto-injector.
- If appropriate and allowed by state laws, including regulations, school policy, and the school's FAMPP, keep an epinephrine auto-injector in a secure place in the cafeteria that you can get to quickly and easily.
- Provide support and help to students who carry and use their own medication.
- After each food allergy emergency, participate in a review of how it was handled with the school administrator, school doctor (if applicable), school nurse, parents, staff members involved in the response, EMS responders, and the student to identify ways to prevent future emergencies and improve emergency response.
# Participate in professional development on food allergies.
- Complete training to help you recognize and understand the following:
°°Signs and symptoms of food allergies and how they are communicated by students.
°°How to read food labels and identify allergens.
°°How to plan meals for students with food allergies and food allergy disabilities and how to prevent cross-contact of allergens. Consult with the district school food service director when necessary.
°°How to deal with emergencies in the school in ways that are consistent with a student's ECP.
°°The role of the food service manager and staff in implementing a child's doctor's statement under USDA requirements and the child's ECP, if applicable.
°°How to use an epinephrine auto-injector (if delegated and trained to do so).
°°FERPA, USDA, and other federal and state laws that protect the privacy or confidentiality of student information and other legal rights of students with food allergies. (See Section 5 for more information about federal laws.)
°°General strategies for reducing or preventing allergic reactions in the cafeteria.
°°Policies on bullying and discrimination against all students, including those with food allergies.
# Provide food allergy education to students and parents.
- Help classroom teachers add food allergy lessons to their health and education curricula, including teaching students how to read food labels.
- Share menu ideas with parents of students with food allergies to identify potential allergens and improve healthy eating.
- Find ways for parents of students with food allergies to share their knowledge and experience with other parents.
- Help the school administrator communicate the policies and practices used by the food service staff to prevent food allergy reactions to parents through newsletters, announcements, and other methods.
# Create and maintain a healthy and safe school environment.
- Reduce the potential for allergic reactions through the following actions:
°°Be able to recognize students with food allergies and food allergy disabilities in the cafeteria.
°°Follow procedures for handling food allergies in the cafeteria, even if a student is not participating in school meals under the Child Nutrition Programs.
°°Read food labels to identify allergens.
°°Follow policies and procedures to prevent cross-contact of potential food allergens during food preparation and service.
°°Make sure that food allergy policies and practices address competitive foods, such as those available in vending machines and school stores, at fundraisers, class parties, and athletic events, and during after-school programs.
- Promote a positive psychosocial climate in the cafeteria through the following actions:
°°Encourage supportive and positive interactions between students.
°°Reinforce the school's rules against bullying and discrimination.
°°Take action to address all reports of bullying or harassment of a student with a food allergy.
°°Report all cases of bullying and harassment against students, including those with food allergies, to the school administrator, school nurse, or school counselor.
# School Counselors and Other Mental Health Services Staff
This category includes school psychologists and school social workers.
# Participate in the school's coordinated approach to managing food allergies.
- Help plan and implement the school's FAMPP.
# Help with the daily management of food allergies for individual students.
- Address immediate and long-term mental health problems, such as anxiety, depression, low self-esteem, negative behavior, or eating disorders, among students with food allergies.
- Address adolescent oppositional behavior, such as noncompliance with IHPs.
- Make referrals to mental health services and professionals outside the school for students who need them, consistent with applicable requirements of Section 504 and IDEA, if appropriate.
- Work with school health service staff (e.g., school doctor, school nurse) to develop consistent protocols for referrals.
- Read and regularly review each student's ECP. Never hesitate to activate the plan in an emergency. If you are the person delegated and trained according to state laws, including regulations, be ready to use an epinephrine auto-injector if needed.
- After each food allergy emergency, participate in a review of how it was handled with the school administrator, school doctor (if applicable), school nurse, parents, staff members involved in the response, EMS responders, and the student to identify ways to prevent future allergic reactions and to improve emergency response.
- Help students with food allergies transition back to school after an emergency.
- Be prepared to respond to the emotional needs of students who witness a life-threatening allergic reaction in a way that does not compromise the students' privacy or confidentiality rights.
# Participate in professional development on food allergies.
- Work with the school or district nurse and other health professionals to support training and education for staff on the mental and emotional health issues faced by a student with food allergies.
- Complete training to help you recognize and understand the following:
°°Signs and symptoms of food allergies and how they are communicated by students.
°°How to read food labels and identify allergens.
°°How to use an epinephrine auto-injector (if delegated and trained to do so).
°°How to deal with emergencies in the school in ways that are consistent with a student's ECP.
°°Your role in implementing a child's ECP.
°°FERPA, USDA, and other federal and state laws that protect the privacy or confidentiality of student information and other legal rights of students with food allergies. (See Section 5 for more information about federal laws.)
°°Policies that prohibit discrimination and bullying against students with food allergies.
# Provide food allergy education to students and parents.
- Work with classroom teachers and other school staff to educate parents and students about bullying and discrimination against students with food allergies.
# Create and maintain a healthy and safe school environment.
- Encourage staff to support a broad range of school-based mental health promotion efforts that serve all students, promote positive interactions between students, build a positive school climate, encourage diversity and acceptance, discourage bullying, and promote student independence.
- Reinforce the school's rules against bullying and discrimination.
- Take action to address all reports of bullying or harassment of a student with a food allergy.
- Tell parents if their child has been bullied, and report all cases of bullying to school administrators.
# Bus Drivers and School Transportation Staff
# Participate in the school's coordinated approach to managing food allergies.
- Ask the school nurse or school administrator for information on current policies and practices for managing students with food allergies, including how to manage medications and respond to a food allergy reaction.
- Support the school's FAMPP.
# Help with the daily management of food allergies for individual students.
- Be aware of students with food allergies and know how to respond to an allergic reaction if it occurs while the student is being transported to or from school.
- Enforce district food policies for all students riding a school bus.
# Prepare for and respond to food allergy emergencies.
- Read and regularly review the ECP for any student riding to and from school on a bus. Never hesitate to activate the plan in an emergency. If you are the person delegated and trained according to state laws, including regulations, be ready to use an epinephrine auto-injector if needed.
- Know procedures for communicating an emergency while transporting children to and from school. Make sure that other adults on the bus are aware of the emergency communication protocol.
- Make sure communication devices are working so you can reach school officials, EMS, and others during a food allergy emergency.
- Call 911 or EMS to ask for emergency transportation of any student exhibiting signs of anaphylaxis. Notify the school administrator of your actions and the need for someone to contact the student's parents.
- After any food allergy emergency that occurs while a student is being transported to or from school, participate in a review of how it was handled with the school administrator, school doctor (if applicable), school nurse, parents, staff members involved in the response, EMS responders, and the student to identify ways to prevent future allergic reactions and improve emergency response.
# Participate in professional development on food allergies.
- Complete training to help you recognize and understand the following:
°°Signs and symptoms of food allergies and how they are communicated by students.
°°How to respond to a food allergy emergency while transporting children to and from school.
°°How to use an epinephrine auto-injector (if delegated and trained to do so).
°°How to deal with emergencies in a way that is consistent with a student's ECP or transportation emergency protocol.
°°Your role in implementing a child's ECP.
°°FERPA, USDA, and other federal and state laws that protect the privacy or confidentiality of student information and other legal rights of students with food allergies. (See Section 5 for more information about federal laws.)
°°Policies that prohibit discrimination and bullying against all students, including those with food allergies.
# Create a healthy and safe environment.
- Advocate for two-way communication systems between schools and transportation vehicles that are kept in working order.
- Enforce district food policies for all students riding a school bus.
- Encourage supportive and positive interactions between students.
- Reinforce the school's rules against discrimination and bullying.
- Report all cases of bullying or harassment of students, including those with food allergies, to the school administrator.
# Facilities and Maintenance Staff
This category includes custodial staff.
# Participate in the school's coordinated approach to managing food allergies.
- Help plan and implement the school's FAMPP.
# Help with the daily management of food allergies for individual students.
- Be aware of students with food allergies and know how to respond to an allergic reaction if it occurs while the student is at school.
- Help create a safe and healthy environment to prevent allergic reactions.
# Prepare for and respond to food allergy emergencies.
- Activate your school's "all-hazards" emergency response practices if a student displays signs or symptoms of an allergic reaction.
- Know and understand your school's communication protocols for an emergency.
- Make sure communication devices are working.
- After each food allergy emergency, participate in a review of how it was handled with the school administrator, school doctor (if applicable), school nurse, parents, staff members involved in the response, EMS responders, and the student to identify ways to prevent future allergic reactions and improve emergency response.
# Participate in professional development on food allergies.
- Complete training to help you recognize and understand the following:
°°Signs and symptoms of food allergies and how they are communicated by students.
°°How to respond to emergencies at the school.
°°Your role in supporting a child's ECP.
°°Policies that prohibit discrimination and bullying against all students, including those with food allergies.
°°Policies and standards for washing hands and cleaning surfaces to reduce food allergens on surfaces.
# Create and maintain a healthy and safe environment.
- Promote a safe and healthy physical environment through the following actions:
°°Advocate for two-way communication systems throughout school buildings that are kept in working order.
°°Enforce district food policies.
°°Clean floors, surfaces, and food-handling areas with approved soap and water or all-purpose cleaning products.
- Promote a positive psychosocial climate through the following actions:
°°Encourage supportive and positive interactions between students.
°°Reinforce the school's rules against discrimination and bullying.
°°Report all cases of bullying or harassment of students, including those with food allergies, to the school administrator.
See Section 6 for more resources and tools that might assist in managing food allergies and allergy-related emergencies in schools.
# Section 4. Putting Guidelines into Practice: Actions for Early Care and Education Administrators and Staff
Effective management of food allergies in early care and education (ECE) programs requires the participation of many people. This section presents the actions that ECE program staff can take to implement the recommendations in Section 1. Some actions duplicate responsibilities required under applicable federal and state laws, including regulations, and policies. Although many responsibilities presented here are not required by statute, they can contribute to better management of food allergies in ECE programs. If the ECE program participates in USDA's Child Nutrition Programs, it must follow USDA statutes, regulations, and guidance for providing meal accommodations for children with food allergy disabilities.
Some actions are intentionally repeated for different staff positions to ensure that critical actions are addressed even if a particular position does not exist in the ECE program. This duplication also reinforces the need for different staff members to work together to manage food allergies effectively. All actions are important, but some will have a greater effect than others. Ultimately, each ECE program must determine which actions are most practical and necessary to implement and who should be responsible for those actions. Although these guidelines are specifically for licensed ECE programs, many of the recommendations can be used in unlicensed child care settings.
# Program Directors and Family Child Care Providers
# Lead the ECE program's coordinated approach to managing food allergies.
- Coordinate planning and implementation of a comprehensive Food Allergy Management and Prevention Plan (FAMPP). Work with staff, parents, food services, and the children's health care providers.
- Designate a qualified person (e.g., health manager, health consultant) to lead development of the program's FAMPP and assign responsibilities for implementing the plan as appropriate.
# Ensure the daily management of food allergies for individual children.
- Make sure that mechanisms, such as health forms, registration forms, the USDA-required doctor's statement, and parent interviews, are in place to identify children with food allergies.
- Work with the parents of children with food allergies and the child's primary health care provider or allergist to obtain a written Emergency Care Plan (ECP) to manage and monitor children with food allergies on a daily basis.
- Share information about children with food allergies with all staff who need to know. Make sure they are aware of what actions are needed to manage each child's food allergy on a daily basis.
# Prepare for and respond to food allergy emergencies.
- Make sure that all ECPs include the following:
°°A doctor's statement addressing the meal accommodation needs of a particular child with a food allergy disability as required for USDA's Child Nutrition Programs.
°°Written instructions about food(s) to which the child is allergic and steps that should be taken to avoid that food.
°°A detailed treatment plan to be implemented if an allergic reaction occurs. This plan should include the names and doses of medications and how they should be used. It should also include specific symptoms that would indicate the need to give one or more medications or take the child to an emergency medical facility.
- Make sure that parents of children with food allergies provide epinephrine auto-injectors to use in food allergy emergencies if their use is called for in the child's ECP.
- Make sure that medications are kept in a secure place and that staff who are delegated and trained to use epinephrine auto-injectors can get to them quickly and easily.
- Make sure that staff plan for the needs of children with food allergies during field trips and other program activities.
- Contact parents immediately after any suspected allergic reaction. You also should contact parents immediately after a child ingests a potential allergen or has contact with a potential allergen, even if an allergic reaction does not occur. If the child needed treatment, recommend that the parents notify the child's primary health care provider or allergist.
- If epinephrine is given, contact emergency medical services (EMS) and have the child transported to an emergency room by ambulance. Contact the parents to tell them the child's location and condition.
- Conduct periodic emergency response drills and practice how to handle a food allergy emergency.
- Be ready to respond to severe allergic reactions in children with no history of diagnosed food allergies or anaphylaxis.
- Review data and information (e.g., when and where medication was used) from incident reports of food allergy emergencies and assess the effect on affected children. Modify policies and practices as needed.
# Support professional development on food allergies for staff.
- Make sure staff receive professional development and training on food allergies.
- Make sure that training helps your program meet any applicable Head Start Program Performance Standards and Other Regulations.
- Coordinate training with licensed health care professionals.
- Invite parents of children with food allergies to participate in training for staff.
# Educate children and family members about food allergies.
- Communicate your program's responsibilities, expectations, and practices for managing food allergies to all parents through newsletters, announcements, and other methods.
# Create and maintain a healthy and safe ECE program environment.
- Increase awareness of food allergies and food allergy disabilities throughout the program environment.
- Make sure that children with food allergies have an equal opportunity to participate in all program activities and events.
# Child Care Providers, Preschool Teachers, Teaching Assistants, Volunteers, Aides, and Other Staff
# Participate in the ECE program's coordinated approach to managing food allergies.
- Help plan and implement the program's FAMPP.
# Help with the daily management of food allergies for individual children.
- Make sure all children with food allergies have an ECP. In programs that participate in USDA's Child Nutrition Programs, include a doctor's statement of disability.
- Make sure you understand the essential actions that you need to take to help manage food allergies and food allergy disabilities in children when they are under your supervision.
- Enforce hand washing practices and make sure tables and surfaces are cleaned before and after meals with approved soap and water or all-purpose cleaning products to reduce cross-contact of allergens.
- Work with parents to determine if any modifications are needed to make sure that children with food allergies can participate fully in all program activities.
# Prepare for and respond to food allergy emergencies.
- Make sure that all ECPs include the following:
°°A doctor's statement addressing the meal accommodation needs of a particular child with a food allergy disability as required for USDA's Child Nutrition Programs.
°°Written instructions about food(s) to which the child is allergic and steps that should be taken to avoid that food.
°°A detailed treatment plan to be implemented if an allergic reaction occurs. This plan should include the names and doses of medications and how they should be used. It should also include specific symptoms that would indicate the need to give one or more medications or take the child to an emergency medical facility.
- Make sure that parents of children with food allergies provide epinephrine auto-injectors to use in food allergy emergencies if their use is called for in the child's ECP.
- Make sure that medications are kept in a secure place and that staff who are delegated and trained to use epinephrine auto-injectors can get to them quickly and easily.
- Never hesitate to activate a child's ECP in an emergency. If you are delegated and trained according to state laws, including regulations, be ready to use an epinephrine auto-injector.
- Keep copies of ECPs for children in your care in a secure place that you can get to quickly and easily in an emergency.
- Provide feedback on the child's ECP and participate in a debriefing meeting after a food allergy reaction or emergency.
- Contact parents immediately after any suspected allergic reaction. You also should contact parents immediately after a child ingests a potential allergen or has contact with a potential allergen, even if an allergic reaction does not occur. If the child needed treatment, recommend that the parents notify the child's primary health care provider or allergist.
- If epinephrine is given, contact EMS, tell them when epinephrine was administered, and have the child transported to an emergency room by ambulance. Contact the parents to tell them the child's location and condition.
- After each food allergy emergency, review how it was handled with the ECE program administrator, registered nurse, parents, staff members involved in the response, EMS responders, and the child to identify ways to prevent future emergencies and improve emergency response.
# Participate in professional development on food allergies.
- Complete training to help you recognize and understand the following:
°°Signs and symptoms of allergic reactions and how they are communicated by young children.
°°How to read food labels and identify allergens.
°°Your role in implementing a child's ECP.
°°How to use an epinephrine auto-injector (if delegated and trained to do so).
°°General strategies for reducing or preventing exposure to food allergens in the program setting and during field trips or other program-sponsored events.
°°Policies that prohibit discrimination and bullying against children with food allergies.
# Create and maintain a healthy and safe ECE program environment.
- Promote a safe physical environment through the following actions:
°°Create rules and practices for dealing with food allergies, including preventing exposure to allergens. Tell parents about these rules and practices each year or when you find out that a child with a food allergy will be in your care.
°°Create ways for children with food allergies to participate in all class activities.
°°Avoid using known allergens in program activities, such as arts and crafts, counting, science projects, parties, holidays and celebrations, or cooking.
°°Enforce hand washing before and after eating.
°°Clean tables and chairs before and after eating with approved soap and water or all-purpose cleaning products.
°°Use nonfood items for rewards or incentives.
°°Encourage the use of allergen-safe foods or nonfood items for birthday parties or other celebrations. Support parents of children with food allergies who wish to send allergen-safe snacks for their children.
°°Discourage trading or sharing of food.
- Manage food allergies on field trips through the following actions:
°°Determine if the intended location is safe for children with food allergies.
°°Make sure that field trips and other events are consistent with the program's food allergy policies.
°°Plan for meals and snacks.
°°Make sure you have quick access to an epinephrine auto-injector or other medications and that you know where the nearest medical facilities are located. If a food allergy emergency occurs, activate the child's ECP and notify the parents.
°°Make sure that a person who is certified in first aid and trained to use an epinephrine auto-injector is available.
- Promote a positive psychosocial climate through the following actions:
°°Be a role model by respecting the needs of children with food allergies.
°°Encourage supportive and positive interactions between children.
°°Take action to address all reports of bullying or harassment of a child with a food allergy.
°°Tell parents if you see negative changes in their child's behavior.
# Nutrition Services Staff
# Participate in the ECE program's coordinated approach to managing food allergies.
- Help plan and implement the program's FAMPP.
# Help with the daily management of food allergies for individual children.
- Read and regularly review each child's ECP. Make sure you understand the essential actions that you need to take to help manage food allergies in children during meals.
- Make sure you get the dietary orders and other relevant medical information that you need to accommodate children with food allergies.
- Document information about meal substitutions as outlined in each child's ECP. Make sure that the information needed to meet the U.S. Department of Agriculture's (USDA's) Child Nutrition Program regulations and state regulations is documented.
- Work with the state agency that administers USDA programs, the local health department, and dietitians in the community to get the information and resources you need to make sure your program is following all federal and state regulations and you are responding to each child's dietary requirements.
- Take the food allergies of the children in your program into account when you buy food and formula.
- Establish and follow policies and procedures to prevent allergic reactions and cross-contact of potential food allergens during food preparation and service.
# Prepare for and respond to food allergy emergencies.
- Make sure all ECPs include the following:
°°A doctor's statement addressing the meal accommodation needs of a particular child with a food allergy disability as required for USDA's Child Nutrition Programs.
°°Written instructions about food(s) to which the child is allergic and steps that should be taken to avoid that food. If your program participates in USDA's Child Nutrition Programs, make sure you have the proper documentation to meet USDA and state regulations.
°°A detailed treatment plan to be implemented if an allergic reaction occurs. This plan should include the names and doses of medications and how they should be used.
- Make sure that medications are kept in a secure place and that staff who are delegated and trained to use epinephrine auto-injectors can get to them quickly and easily.
- Never hesitate to activate a child's ECP in an emergency. If you are delegated and trained according to state laws, including regulations, be ready to use an epinephrine auto-injector.
- Keep copies of ECPs for children in your care in a secure place that you can get to quickly and easily in an emergency.
- Provide feedback on the child's ECP and participate in a debriefing meeting after a food allergy reaction or emergency.
- Contact parents immediately after any suspected allergic reaction. You also should contact parents immediately after a child ingests a potential allergen or has contact with a potential allergen, even if an allergic reaction does not occur. If the child needed treatment, recommend that the parents notify the child's primary health care provider or allergist.
- If epinephrine is given, contact EMS, tell them when epinephrine was administered, and have the child transported to an emergency room by ambulance. Contact the parents to tell them the child's location and condition.
- Make sure that a food service staff member who has been trained to respond to a food allergy reaction is available during all meals and snack times.
# Participate in professional development on food allergies.
- Complete training to help you recognize and understand the following:
°°Signs and symptoms of food allergies and how they are communicated by young children.
°°How to read food labels and identify food allergens.
°°How to use an epinephrine auto-injector (if delegated and trained to do so).
°°How to deal with emergencies in the ECE program setting in ways that are consistent with a child's ECP.
°°Legal rights of children with food allergies.
°°USDA's statutes, regulations, and guidance (for ECE programs participating in USDA's Child Nutrition Programs).
°°State and local laws and policies for food services and food safety.
°°General strategies for reducing or preventing exposure to food allergens in the kitchen or area where food is served.
°°The role of nutrition staff in implementing a child's ECP, including the specific duties outlined in Head Start Program Performance Standards and Other Regulations.
# Health Services Staff
# Participate in the ECE program's coordinated approach to managing food allergies.
- Help plan and implement the program's FAMPP.
# Ensure the daily management of food allergies for individual children.
- Make sure children with food allergies are identified in a way that complies with Head Start Program Performance Standards and Other Regulations and established enrollment practices but does not compromise their confidentiality rights.
- Read and regularly review medical records and emergency information for all children with food allergies.
- Communicate with parents and health care providers (with parental consent) about any allergic reactions, changes in a child's health, and exposures to allergens.
- Read and regularly review each child's ECP. Make sure that all ECPs include the following:
°°A doctor's statement addressing the meal accommodation needs of a particular child with a food allergy disability as required for USDA's Child Nutrition Programs.
°°Written instructions about food(s) to which the child is allergic and steps that should be taken to avoid that food.
°°A detailed treatment plan to be implemented if an allergic reaction occurs. This plan should include the names and doses of medications and how they should be used. It should also include specific symptoms that would indicate the need to give one or more medications or take the child to an emergency medical facility.
- Work with parents and health care providers to make sure that the medical needs of children with food allergies are met and that all necessary accommodations are made.
- Refer parents of children who do not have access to health care to State Children's Health Insurance Program providers.
# Prepare for and respond to food allergy emergencies.
- Keep copies of ECPs for children in your care in a secure place that you can get to quickly and easily in an emergency.
- Make sure that parents of children with food allergies provide epinephrine auto-injectors to use in food allergy emergencies if their use is called for in the child's ECP.
- Make sure that medications are kept in a secure place and that staff who are delegated and trained to use epinephrine auto-injectors can get to them quickly and easily. Regularly inspect the expiration date of epinephrine auto-injectors.
- Make sure that staff plan for the needs of children with food allergies during field trips and other program activities.
- If allowed by state and local laws, work with the program director to get extra epinephrine auto-injectors or nonpatient-specific prescriptions or standing orders for auto-injectors that can be used by a registered nurse and those delegated and trained to administer epinephrine during allergy emergencies.
- Never hesitate to activate a child's ECP in an emergency. If you are delegated and trained according to state laws, including regulations, be ready to use an epinephrine auto-injector.
- After each food allergy emergency, review how it was handled with the ECE program administrator, parents, staff members involved in the response, EMS responders, and the child to identify ways to prevent future emergencies and improve emergency response. Make revisions to the child's ECP as appropriate.
# Help provide professional development on food allergies for staff.
- Stay up-to-date on best practices for managing food allergies. Sources for this information include allergists who are treating children with food allergies and local health departments.
- Educate other staff about food allergies and the needs of children with food allergies in a way that does not compromise their confidentiality rights.
- Use each child's ECP to train other staff members how to recognize the specific signs of an allergic reaction in each child and how to respond to a food allergy emergency.
- Coordinate annual training for all staff on relevant federal and state regulations for managing food allergies in children.
- Coordinate annual training for all staff on emergency response protocols and practices, including how to respond to food allergy emergencies.
- Provide or coordinate training for delegated staff on how to use an epinephrine auto-injector.
See Section 6 for more resources and tools that might assist in managing food allergies and allergy-related emergencies in ECE programs.
# Section 5. Federal Laws and Regulations that Govern Food Allergies in Schools and Early Care and Education Programs
The federal laws and regulations described in this section address the responsibilities of schools and early care and education (ECE) programs to help children and adolescents manage food allergies that may constitute a disability under federal law and to ensure that children are not subject to discrimination on the basis of their disability. This section also addresses privacy and confidentiality requirements that apply to the education records of students with food allergies, regardless of whether they have been found to have a disability under federal law. Schools and ECE programs are encouraged to copy and distribute relevant laws to appropriate staff and to reinforce relevant laws and regulations in all training provided to staff. For information on how to get copies of relevant federal laws and regulations, see Section 6. In addition to becoming familiar with these federal laws, schools and ECE programs should determine which applicable state statutes, regulations, and policies and local statutes and policies should be considered when developing management plans for children with food allergies.
The federal laws described in this section are enforced or administered by the U.S. Department of Education (ED), the U.S. Department of Justice (DOJ), and the U.S. Department of Agriculture (USDA).
# Section 504 of the Rehabilitation Act of 1973 (Section 504) and the Americans with Disabilities Act of 1990 (ADA)
Section 504 is a federal law that prohibits discrimination on the basis of disability in programs and activities that receive federal financial assistance. Recipients of federal financial assistance from ED include public school districts, other state and local educational agencies, and postsecondary educational institutions. The Department of Education's Office for Civil Rights (OCR) enforces Section 504 as it applies to these recipients. The USDA enforces Section 504 as it applies to recipients of federal financial assistance from USDA. Title II of the ADA prohibits discrimination on the basis of disability by public entities, including public elementary, secondary, and postsecondary educational institutions, whether or not they receive federal financial assistance. For public schools, OCR shares Title II enforcement responsibilities with DOJ.
Section 504 and Title II of the ADA require that qualified individuals with disabilities, including students, parents, and other program participants, not be excluded from or denied the benefits of services, programs, or activities or otherwise subjected to discrimination by reason of a disability. Public school districts that receive federal financial assistance are covered by both Section 504 and Title II. As a general rule, because Title II does not provide less protection than Section 504, violations of Section 504 also constitute violations of Title II. To the extent that Title II provides greater protections, schools must also comply with Title II and provide those additional protections.
If a student's food allergy is a disability, that student is entitled to the protections of Section 504 and the ADA. Both laws define a disability as a physical or mental impairment that substantially limits a major life activity. Children with food allergies may be substantially limited in major life activities such as eating, breathing, or the operation of major bodily functions such as the respiratory or gastrointestinal system. The U.S. Congress has made clear that the definition of disability under Section 504 and the ADA is to be construed broadly.
Under both Section 504 and Title II, students with disabilities in public schools must be given an equal opportunity to participate in academic, nonacademic, and extracurricular activities. ED's Section 504 regulation outlines a process for schools to use to determine whether a student has a disability and to determine what services a student with a disability needs. This evaluation process must be tailored individually because each student is different, and his or her needs will vary. The Section 504 regulations specify that school districts must identify all students with disabilities and provide them with a free appropriate public education (FAPE). Under ED's Section 504 regulation, FAPE is the provision of regular or special education and related aids and services designed to meet the individual educational needs of students with disabilities as adequately as the needs of students who do not have disabilities are met. A student does not have to receive special education services, however, in order to receive related aids and services under Section 504. The most common practice is to include these related aids and services, as well as any needed special education services, in a written document, sometimes called a Section 504 plan. Even if a school district does not believe that a student needs special education or related aids and services, Section 504 and Title II require the district to consider whether it can reasonably modify policies, practices, or procedures to ensure that a student with a disability has an equal opportunity to participate in and benefit from the school's services and programs. ED's Section 504 regulation also states that public preschool and day care programs operated by recipients of federal funds may not, on the basis of disability, exclude students with disabilities and must take their needs into account when determining the aid, benefits, or services to be provided.
Under ED's Section 504 regulation, private schools that receive federal financial assistance may not exclude an individual student with a disability if the school can, with minor adjustments, provide an appropriate education to that student. Private, nonreligious schools and ECE programs are covered by Title III of the ADA. Title III prohibits public accommodations such as these from discriminating against individuals with disabilities in the full and equal enjoyment of the entity's services and activities. Under Title III, private schools and ECE programs must make reasonable modifications to policies, practices, and procedures when necessary to give children with disabilities, including those with food allergies, full and equal access to and participation in programs and services unless the entity can show that the modification would result in a fundamental alteration of those programs and services.
Under Section 504 and the ADA, children with food allergy disabilities in schools and ECE programs must be provided with the services and modifications they need in order to attend. Examples of these services and modifications might include implementing allergen-safe food plans, administering epinephrine according to a doctor's orders (even if the school or ECE program has a no-medication policy), allowing students to carry their own medication, and providing an allergen-safe environment in which the student can eat meals.
Disability harassment is a form of discrimination prohibited by Section 504 and Titles II and III of the ADA. Harassment creates a hostile environment when the conduct is sufficiently serious so as to interfere with or limit a student's ability to participate in or benefit from the services, activities, or opportunities offered by a school. When student-on-student disability harassment occurs and the school knows or reasonably should know about the harassment, a school must take prompt and effective steps reasonably calculated to end the harassment, prevent its recurrence, and eliminate any hostile environment created by the harassment. Section 504 and Title II require schools to take such steps and prohibit schools from encouraging, tolerating, or ignoring peer harassment based on disability that creates a hostile environment. Bullying, teasing, or harassment about an allergy can lead to psychological distress for children with food allergies, which could lead to a more severe reaction when the allergen is present. Exposing an allergic child to the allergen (e.g., putting the allergen in the child's food or forcing the child to ingest it) can have very serious, even fatal, consequences. School districts, in developing and implementing policies on bullying and harassment, should instruct staff and students as to how such policies apply to children with food allergies, including the possible disciplinary consequences for bullying and harassment that targets or places children with food allergies at risk. Additional consequences could be to separate the harasser from the target and to provide counseling for the target or the harasser. Finally, a school should take steps to stop further harassment and prevent any retaliation against the person who made the complaint (or was the subject of the harassment) or against those who provided information as witnesses.
The ADA, as amended; Section 504 of the Rehabilitation Act; the Richard B.
Russell National School Lunch Act (42 USC 1758(a)); the Child Nutrition Act; CNP regulations; and USDA's nondiscrimination regulations at 7 CFR 15b govern meal accommodations for children with food-related disabilities in schools and ECE programs that participate in the CNPs. USDA has oversight for providing meals in these programs. Program operators in the CNPs must make meal accommodations to regular program meals for children identified by a licensed doctor as having a food allergy disability that prevents them from consuming a meal as prepared. For purposes of discussion in these guidelines, if a child has a food allergy that is identified as a disability by a licensed doctor, meal accommodations must be provided. Additionally, a school, institution, or site participating in USDA's Child Nutrition Programs is not required to establish a Section 504 plan, IEP, IHP, or ICP (or any plan that may be used by a child with special dietary needs) to make an accommodation to a program meal for a child with a food-related disability. Instead, the CNPs require a written statement from a licensed doctor that identifies the following:
- The child's disability (according to pertinent statutes).
- An explanation of why the disability restricts the child's diet.
- The major life activity affected by the disability.
- The food or foods to be omitted from the child's diet.
- The food or choice of foods that must be substituted.
A statement signed by a licensed doctor addressing the points above is sufficient. However, the written statement from a licensed doctor may be incorporated in any of the plans discussed above.
# Individuals with Disabilities Education Act (IDEA)
IDEA Part B provides federal funds to help states make FAPE available to eligible children with disabilities in the least restrictive environment. The obligation to make FAPE available in the least restrictive environment begins at the child's third birthday and could last until the child's twenty-second birthday, depending on state law or practice. FAPE under IDEA Part B refers to the provision of special education and related services at no cost to the parents that meet state education standards and include an appropriate preschool, elementary school, or secondary school education in the state involved. Eligibility determinations under IDEA Part B are made at the state and local school district level on an individual, case-by-case basis in light of applicable IDEA Part B requirements and state education standards. At the federal level, IDEA is administered by the Office of Special Education Programs in the Office of Special Education and Rehabilitative Services in ED.
A child could be found eligible for services under IDEA Part B because of a food allergy only if the allergy adversely affects the child's educational performance and the child needs special education and related services because of the food allergy. If determined eligible, the school district must develop an Individualized Education Program (IEP) for the child or, if appropriate, an Individualized Family Service Plan (IFSP) for a child age three through five.
An IEP is a written document developed by a team that includes the child's parents and school officials. It sets out, among other elements, the special education and related services and supplementary aids and services to be provided to the child.
If parents place their child with a disability in a private school at their own expense, IDEA Part B generally would not require the school district to develop an IEP for the child at the private school. In general, if a child with a food allergy only needs a related service and does not need special education, that child would not be eligible for services under IDEA Part B. Such a child might still be eligible for services or modifications under Section 504 or Title II. In addition, IDEA Part C provides federal funds to assist states in identifying and providing early intervention services to children with disabilities from birth to age three and, at the state's discretion, through age five or when the child enters kindergarten. Under IDEA Part C, a child is eligible based on a developmental delay, diagnosed condition, or, at the state's discretion, at-risk status. An IFSP is a document written by a team that includes the child's parents that identifies the specific early intervention services needed by the child.

# Family Educational Rights and Privacy Act (FERPA) of 1974

FERPA applies to educational agencies or institutions that receive federal funds under a program administered by ED. FERPA generally prohibits schools and school districts from disclosing personally identifiable information from a student's education record unless the student's parent or the eligible student (a student who is aged 18 years or older or who attends an institution of postsecondary education) provides prior, written consent for the disclosure. This requirement has several exceptions. One exception permits schools to disclose personally identifiable information from a student's education record, without obtaining prior written consent, to school officials, including teachers, who have legitimate educational interests in the information, including the educational interests of the child. Schools must use reasonable methods, such as physical, technological, or administrative access controls, to ensure that school officials obtain access only to those education records in which they have legitimate educational interests. To use this exception, schools must include in their annual notification of FERPA rights to parents and eligible students the criteria for determining who constitutes a school official and what constitutes a legitimate educational interest. This exception for school officials also applies to a contractor, consultant, volunteer, or other party to whom a school has outsourced institutional services or functions, provided that the outside party:

- Performs an institutional service or function for which the school would otherwise use employees.
- Is under the direct control of the school with respect to the use and maintenance of education records.
- Is subject to the requirements in FERPA that govern the use and redisclosure of personally identifiable information from education records.

Another exception to the requirement of prior written consent permits schools to disclose personally identifiable information from an education record to appropriate parties, including the parent of an eligible student, in connection with an emergency if knowledge of the information is necessary to protect the health or safety of the student or other individuals. Under this exception, a school may take into account the totality of the circumstances pertaining to a threat to the health or safety of a student or other individuals.
If a school determines that there is an articulable and significant threat to the health or safety of a student or other individuals, it may disclose information from education records to any person whose knowledge of the information is necessary to protect the health or safety of the student or other individuals. If the information available at the time of the incident forms a rational basis for the decision to disclose information, ED will not substitute its judgment for that of the school in evaluating the circumstances and making its determination. When disclosures are made under this exception, a school must record the articulable and significant threat to the health or safety of a student or other individual that formed the basis for the disclosure and the parties to whom the information was disclosed. In addition, under FERPA, the parent or eligible student must be given the opportunity to inspect and review the student's education records. A school must comply with a request for access to the student's education records within a reasonable period of time, but not more than 45 days after it has received the request. Additional information and resources, including how to access copies of federal laws, are provided in Section 6.
Services (HHS), including the Administration for Children and Families (ACF), Food and Drug Administration (FDA), and National Institutes of Health (NIH). CDC acknowledges Mr. Pete Hunt (HHS/CDC) for his role as lead author of these guidelines.

# Introduction

# Overview

Food allergies are a growing food safety and public health concern that affect an estimated 4%-6% of children in the United States. 1,2 Children with food allergies are two to four times more likely to have asthma or other allergic conditions than those without food allergies. 1 The prevalence of food allergies among children increased 18% during 1997-2007, and allergic reactions to foods have become the most common cause of anaphylaxis in community health settings. 1,3 In 2006, about 88% of schools had one or more students with a food allergy. 4

Staff who work in schools and early care and education (ECE) programs should develop plans for how they will respond effectively to children with food allergies. Although the number of children with food allergies in any one school or ECE program may seem small, allergic reactions can be life-threatening and have far-reaching effects on children and their families, as well as on the schools or ECE programs they attend. Any child with a food allergy deserves attention, and the school or ECE program should create a plan for preventing an allergic reaction and responding to a food allergy emergency. Studies show that 16%-18% of children with food allergies have had a reaction from accidentally eating food allergens while at school. 5,6 In addition, 25% of the severe and potentially life-threatening reactions (anaphylaxis) reported at schools happened in children with no previous diagnosis of food allergy. 5,7 School and ECE program staff should be ready to address the needs of children with known food allergies. They also should be prepared to respond effectively to the emergency needs of children who are not known to have food allergies but who exhibit allergic signs and symptoms.

Until now, no national guidelines had been developed to help schools and ECE programs address the needs of the growing numbers of children with food allergies. However, 14 states and many school districts have formal policies or guidelines to improve the management of food allergies in schools. 8,9 Many schools and ECE programs have implemented some of the steps needed to manage food allergies effectively. 4 Yet systematic planning for managing the risk of food allergies and responding to food allergy emergencies in schools and ECE programs remains incomplete and inconsistent. 10,11

# FDA Food Safety Modernization Act

These guidelines, Voluntary Guidelines for Managing Food Allergies in Schools and Early Care and Education Programs (hereafter called the Voluntary Guidelines for Managing Food Allergies), were developed in response to Section 112 of the FDA Food Safety Modernization Act, which was enacted in 2011. 12 This act is designed to improve food safety in the United States by shifting the focus from response to prevention.
Section 112(b) calls for the Secretary of Health and Human Services, in consultation with the Secretary of Education, to "develop guidelines to be used on a voluntary basis to develop plans for individuals to manage the risk of food allergy and anaphylaxis in schools a and early childhood education programs b" and "make such guidelines available to local educational agencies, schools, early childhood education programs, and other interested entities and individuals to be implemented on a voluntary basis only." 12 Each plan described in the act that is developed for an individual to manage the risk of food allergy and anaphylaxis in schools and early childhood education programs shall be considered an education record for the purpose of Section 444 of the General Education Provisions Act (commonly referred to as the Family Educational Rights and Privacy Act). The act specifies that nothing in the guidelines developed under its auspices should be construed to preempt state law. (A link to the FDA Food Safety Modernization Act, including the Section 112 statutory language, is provided in Section 6, Resources.)

Specifically, the content of these guidelines should address the following:

• Parental obligation to provide the school or early childhood education program, prior to the start of every school year, with documentation from their child's physician or nurse supporting a diagnosis of food allergy, and any risk of anaphylaxis, if applicable; identifying any food to which the child is allergic; describing, if appropriate, any prior history of anaphylaxis; listing any medication prescribed for the child for the treatment of anaphylaxis; detailing emergency treatment procedures in the event of a reaction; listing the signs and symptoms of a reaction; assessing the child's readiness for self-administration of prescription medication; and a list of substitute meals that may be offered to the child by school or early childhood education program food service personnel.

• The creation and maintenance of an individual plan for food allergy management, in consultation with the parent, tailored to the needs of each child with a documented risk for anaphylaxis, including any procedures for the self-administration of medication by such children in instances where the children are capable of self-administering medication and such administration is not prohibited by state law.

• Communication strategies between individual schools or early childhood education programs and providers of emergency medical services, including appropriate instructions for emergency medical response.

• Strategies to reduce the risk of exposure to anaphylactic causative agents in classrooms and common school or early childhood education program areas such as cafeterias.

a. The term school is defined in the FDA Food Safety Modernization Act (FDA FSMA) to include public kindergartens, elementary schools, and secondary schools. 12

b. The term early childhood education program is defined in the FDA Food Safety Modernization Act to include a Head Start program or an Early Head Start program carried out under the Head Start Act (42 U.S.C. 9831 et seq.), a state-licensed or state-regulated child care program or school, or a state prekindergarten program that serves children from birth through kindergarten. 12 Some of these early childhood education programs may provide early intervention and preschool services under the Individuals with Disabilities Education Act (IDEA).
Some programs that provide these IDEA early intervention and preschool services may not fall under the FDA FSMA, but may still wish to consider these voluntary guidelines when developing procedures for children with food allergies.

• The dissemination of general information on life-threatening food allergies to school or early childhood education program staff, parents, and children.

• Food allergy management training of school or early childhood education program personnel who regularly come into contact with children with life-threatening food allergies.

• The authorization and training of school or early childhood education program personnel to administer epinephrine when the nurse is not immediately available.

• The timely accessibility of epinephrine by school or early childhood education program personnel when the nurse is not immediately available.

• The creation of a plan contained in each individual plan for food allergy management that addresses the appropriate response to an incident of anaphylaxis of a child while such child is engaged in extracurricular programs of a school or early childhood education program, such as non-academic outings and field trips, before- and after-school programs or before- and after-early childhood education programs, and school-sponsored or early childhood education program-sponsored programs held on weekends.

• Maintenance of information for each administration of epinephrine to a child at risk for anaphylaxis and prompt notification to parents.

• Other elements determined necessary for the management of food allergies and anaphylaxis in schools and early childhood education programs. 12

Section 1 of the Voluntary Guidelines for Managing Food Allergies addresses these content requirements. Section 2 provides a list of recommended actions for school board members and administrators and staff at the district level. Section 3 provides a list of recommended actions for administrators and staff in schools, while Section 4 provides recommended actions for administrators and staff in ECE programs. Section 5 provides information about relevant federal laws that are enforced or administered by the U.S. Department of Education (ED), the U.S. Department of Justice (DOJ), and the U.S. Department of Agriculture (USDA). Section 6 provides a list of resources with more information and strategies.

# Methods

To develop these guidelines, staff at the Centers for Disease Control and Prevention (CDC) created a systematic process to collect, review, and compile expert advice, scientific literature, state guidelines, best practice documents, and position statements from individuals, agencies, and organizations. In January 2010, CDC sponsored a meeting of experts to gather their input on the critical processes and actions needed to protect students with food allergies and to respond to food allergy emergencies in schools. This meeting included representatives from the following groups (see Acknowledgements for details):

• Federal agencies with expertise in food allergy management in the public health sector, including in schools (CDC, Food and Drug Administration, U.S. National Institute of Allergy and Infectious Diseases, U.S. Department of Education [ED], U.S. Department of Agriculture [USDA]).
• Organizations with expertise or experience in clinical food allergy management and food allergy advice to consumers (Food Allergy and Anaphylaxis Network, Food Allergy Initiative, Asthma and Allergy Foundation of America, American Academy of Pediatrics, and the American College of Allergy, Asthma, and Immunology).

• Organizations representing professionals who work in schools (National School Boards Association, National Education Association, National Association of School Nurses, National Association of School Administrators, National Association of Elementary School Principals, National Association of Secondary School Principals, School Nutrition Association, American School Counseling Association, and the American School Health Association).

• One state educational agency.

• One local school district.

• Two parents of children with food allergies.

Meeting participants provided input on the organization and content for food allergy guidelines. In terms of organization, they recommended that the guidelines should:

• Use a management framework that is similar to the framework used to address other chronic conditions in schools and ECE programs. This framework should include creation of a building-wide team and essential elements of a plan to manage food allergies.

• Make sure that management and support systems address the needs of children with food allergies and promote an allergen-safe c school or ECE program.

• Promote partnerships among schools and ECE programs, children with food allergies and their families, and health care providers.

c. The term allergen-safe refers to an environment that is made as safe as possible from food allergens. The phrase should not be interpreted to mean an allergen-free environment that is totally safe from food allergens. There is no failsafe way to prevent an allergen from inadvertently entering a school or ECE program facility. When guarding against exposures to food allergens, a school or ECE program should still plan properly for children with any life-threatening food allergies, educate all school personnel accordingly, and ensure that school staff are trained and prepared to prevent and respond to a food allergy emergency.

• Emphasize the value of an onsite, full-time registered nurse to manage food allergies in children, but provide guidance for schools and ECE programs that don't have a full-time or part-time registered nurse.

• Convey scientific evidence and established clinical practice in terms that leaders and staff in schools and ECE programs understand. Make sure this information is consistent with regulatory and planning guidelines from other federal agencies.

• Emphasize the need to follow state laws, including regulations (such as Nurse Practice Acts and state food safety regulations) and school or ECE program policies when deciding which procedures and actions are permissible or allowed.

In terms of content, meeting participants recommended that the guidelines should:

• Recommend that children with food allergies be identified so an individual plan for managing their food allergies can be developed.

• Include essential practices for protecting children from allergic reactions and responding to reactions that occur at schools or ECE programs, at sponsored events, or during transport to and from schools or ECE programs.

• Develop strategies to reduce the risk of exposure to food allergens in classrooms, cafeterias, and other school or ECE program settings.
• Develop steps for responding to food allergy emergencies, including the administration of epinephrine, that are consistent with a school's or ECE program's "all-hazards" plan.

• Emphasize training for staff to improve their understanding of food allergies, their ability to help children prevent exposure to food allergens, and their ability to respond to food allergy emergencies (including administration of epinephrine). This training can help to create an environment of acceptance and support for children with food allergies.

• Emphasize the need to teach children about food allergies as part of the school's or ECE program's health education curriculum.

• Address the need to teach all parents about food allergies.

• Address the physical safety and emotional needs of children with food allergies (i.e., stigma, bullying, harassment).

• Include a list of actions for all staff working in schools and ECE programs that may have a role in managing risk of food allergy, including administrators, registered nurses, teachers, paraprofessional staff, counselors, food service staff, and custodial staff.

Meeting participants provided an initial set of data and literature sources for analysis. CDC staff then reviewed published literature from peer-reviewed and nonpeer-reviewed sources, including descriptive studies, epidemiologic studies, expert statements, policy statements, and relevant Web-based content from federal agencies. CDC scientists conducted an extensive search for scientific reports, using four electronic citation databases: PubMed, Medline, Web of Science, and ERIC. To ensure a comprehensive review of food allergy sources, CDC used search terms that included a combination of terms, such as "food allergy," "school," "anaphylaxis," "epinephrine," "peanut allergy," "child," "pediatric," "food allergy and bullying," "emergencies," "life-threatening allergy," "school nurse," and "food allergy and child care." CDC staff reviewed all recommended documents and eliminated sources that were outdated (earlier than 2000), were superseded by more current data or recommendations, were in conflict with current standards of clinical and school-based practice, reflected international recommendations not relevant to U.S. schools or ECE programs, or were limited to adult food allergies.

CDC staff relied heavily on the content and references in the 2010 Guidelines for the Diagnosis and Management of Food Allergy in the United States. 13 These 2010 guidelines reflect the most up-to-date, extensive systematic review of the literature and assessment of the body of evidence on the science of food allergies. They met the standards of rigorous systematic search and review methods, and they provide clear recommendations that are based on consensus among researchers, scientists, clinical practitioners, and the public. While the 2010 guidelines did not address the management of patients with food allergies outside of clinical settings (and thus did not directly address the management of food allergies in schools), they were deemed an important source for informing the clinical practice recommendations for managing risks for children with food allergies in the Voluntary Guidelines for Managing Food Allergies. To ensure that recommendations for managing the risk of food allergies were consistent with those recommended for other chronic conditions, CDC staff added search terms that included "school" and "asthma," "diabetes," "epilepsy," "chronic condition," and "management."
" In particular, they used information from the following three documents: Students with Chronic Illness: Guidance for Families, Schools, and Students, 14 Managing Asthma: A Guide for Schools, 15 and Helping the Student with Diabetes Succeed: A Guide for School Personnel. 16 CDC staff analyzed best practice documents, state school food allergy guidelines (n = 14), and relevant health and education organizations' position statements for compatibility with the priorities outlined by experts, common themes, and the accuracy and clarity of recommendations or positions based on clinical standards and scientific evidence. To ensure that these Voluntary Guidelines for Managing Food Allergies were compatible with existing federal laws, federal regulations, and current guidelines for schools and ECE programs, CDC solicited expertise and input from the following sources: • Office of the General Counsel; Office of Safe and Healthy Students, Office of Elementary and Secondary Education; Office for Civil Rights; and Office of Special Education and Rehabilitative Services, U.S. Department of Education (ED). • Civil Rights Division, U.S. Department of Justice (DOJ). • Office of the Deputy Assistant Secretary for Early Childhood, Administration for Children and Families, U.S. Department of Health and Human Services (HHS). In addition to the input from the experts meeting, the analysis of research and practice documents, and the technical advice and assistance provided by regulatory federal agencies, CDC conducted three formal rounds of expert review and comment. During the first round, CDC staff worked with meeting participants and agency partners to get concurrence with recommendations. Second and third round reviews were used to refine the content. Reviewers included participants from the first meeting and additional reviewers added after each round to ensure input from at least one new person who had not previously reviewed the document. In addition to these formal reviews, CDC staff asked for multiple reviews from select experts in food allergy management, schools, and ECE programs to ensure the accuracy of the information and relevance of the recommendations to professional practice. During the review process, CDC staff reviewed and accepted additional references that supported changes made in draft recommendations. The resulting Voluntary Guidelines for Managing Food Allergies include recommendations for practice in the following five priority areas that should be addressed in each school's or ECE program's Food Allergy Management Prevention Plan: 1. Ensure the daily management of food allergies in individual children. 2. Prepare for food allergy emergencies. 3. Provide professional development on food allergies for staff members. 4. Educate children and family members about food allergies. 5. Create and maintain a healthy and safe educational environment. # Purpose The Voluntary Guidelines for Managing Food Allergies are intended to support implementation of food allergy management and prevention plans and practices in schools and ECE programs. They provide practical information, planning steps, and strategies for reducing allergic reactions and responding to life-threatening reactions for parents, district administrators, school administrators and staff, and ECE program administrators and staff. They can guide improvements in existing food allergy management plans and practices. They can help schools and ECE programs develop a plan where none currently exists. 
Schools and ECE programs will not need to change their organization or structure or incorporate burdensome practices to respond effectively. They also should not have to incur significant financial costs where basic health and emergency services are already provided. Although the practices in these guidelines are voluntary, any actions taken for individual children must be implemented consistent with applicable federal and state laws and local policies. Many of the practices reinforce relevant federal laws and regulations administered or enforced by ED, HHS, DOJ, and USDA. How these laws apply case by case will depend upon the facts in each situation. These guidelines also do not address state and local laws or local school district policies because the requirements of these laws and policies vary from state to state and from school district to school district. References to state guidelines reflect support for and consistency with the recommendations in the Voluntary Guidelines for Managing Food Allergies, but do not suggest federal endorsement of state guidelines. While these guidelines provide information related to certain applicable laws, they should not be construed as giving legal advice. Schools and ECE programs should consult local legal professionals for such advice.

Although schools and ECE programs have some common characteristics, they operate under different laws and regulations and serve children with different developmental and supervisory needs. Different practices are needed in each setting to manage the risk of food allergies. These guidelines include recommendations that apply to both settings, and they identify how the recommendations should be applied differently in each setting when appropriate. These guidelines do not provide specific guidance for unlicensed child care settings, although many recommendations can be used in these settings.

If a school or ECE program participates in the Child Nutrition Programs (CNPs), then USDA Food and Nutrition Service is the federal agency with oversight of the meals served. The CNPs include the National School Lunch and School Breakfast Programs, the Special Milk Program, the Fresh Fruit and Vegetable Program, the Child and Adult Care Food Program, and the Summer Food Service Program. Schools, institutions, and sites participating in the CNPs are required under relevant statutes to make accommodations to program meals for children who are determined to have a food allergy disability. A food allergy can be a food-related disability if the allergy is acknowledged to be a disability by a licensed doctor. These guidelines can assist CNP program operators in providing safe meals and a safe environment for this population of children.

# About Food Allergies

A food allergy is defined as an adverse health effect arising from a specific immune response that occurs reproducibly on exposure to a given food. 13 The immune response can be severe and life-threatening. Although the immune system normally protects people from germs, in people with food allergies, the immune system mistakenly responds to food as if it were harmful. One way that the immune system causes food allergies is by making a protein antibody called immunoglobulin E (IgE) to the food. The substance in foods that causes this reaction is called the food allergen.
When exposed to the food allergen, the IgE antibodies alert cells to release powerful substances, such as histamine, that cause symptoms that can affect the respiratory system, gastrointestinal tract, skin, or cardiovascular system and lead to a life-threatening reaction called anaphylaxis. 17 The Voluntary Guidelines for Managing Food Allergies focuses not on all food allergies but on food allergies associated with IgE because those are the food allergies that are associated with the risk of anaphylaxis. There are other types of food-related conditions and diseases that range from the frequent problem of digesting lactose in milk, resulting in gas, bloating, and diarrhea, to reactions caused by cereal grains (celiac disease) that can result in severe malabsorption and a variety of other serious health problems. These conditions and diseases may be serious but are not immediately life-threatening and are not addressed in these guidelines. 13,17-19

More than 170 foods are known to cause IgE-mediated food allergies. In the United States, the following eight foods or food groups account for 90% of serious allergic reactions: milk, eggs, fish, crustacean shellfish, wheat, soy, peanuts, and tree nuts. 13 Federal law requires food labels in the United States to clearly identify the food allergen source of all foods and ingredients that are (or contain any protein derived from) these common allergens. 20 Some nonfood products used in schools and ECE programs-such as clay, paste, or finger paints-can also contain allergens that may or may not be identified as ingredients on product labels. 21

The symptoms of allergic reactions to food vary both in type and severity among individuals and even in one individual over time. Symptoms associated with an allergic reaction to food include the following:

• Mucous Membrane Symptoms: red watery eyes or swollen lips, tongue, or eyes.
• Skin Symptoms: itchiness, flushing, rash, or hives.
• Gastrointestinal Symptoms: nausea, pain, cramping, vomiting, diarrhea, or acid reflux.
• Upper Respiratory Symptoms: nasal congestion, sneezing, hoarse voice, trouble swallowing, dry staccato cough, or numbness around mouth.
• Lower Respiratory Symptoms: deep cough, wheezing, shortness of breath or difficulty breathing, or chest tightness.
• Cardiovascular Symptoms: pale or blue skin color, weak pulse, dizziness or fainting, confusion or shock, hypotension (decrease in blood pressure), or loss of consciousness.
• Mental or Emotional Symptoms: sense of "impending doom," irritability, change in alertness, mood change, or confusion.

# Food Allergy Symptoms in Children

Children with food allergies might communicate their symptoms in the following ways:

• It feels like something is poking my tongue.
• My tongue (or mouth) is tingling (or burning).
• My tongue (or mouth) itches.
• My tongue feels like there is hair on it.
• My mouth feels funny.
• There's a frog in my throat; there's something stuck in my throat.
• My tongue feels full (or heavy).
• My lips feel tight.
• It feels like there are bugs in there (to describe itchy ears).
• It (my throat) feels thick.
• It feels like a bump is on the back of my tongue (throat).

Source: The Food Allergy & Anaphylaxis Network. Food Allergy News. 2003;13(2).

Children sometimes do not exhibit overt and visible symptoms after ingesting an allergen, making early diagnosis difficult. 13,22 Some children may not be able to communicate their symptoms clearly because of their age or developmental challenges.
Complaints such as abdominal pain, itchiness, or other discomforts may be the first signs of an allergic reaction (see Food Allergy Symptoms in Children). Signs and symptoms can become evident within a few minutes or up to 1-2 hours after ingestion of the allergen, and, rarely, several hours after ingestion. Symptoms of breathing difficulty, voice hoarseness, or faintness associated with change in mood or alertness, or rapid progression of symptoms that involve a combination of skin, gastrointestinal, or cardiovascular symptoms, signal a more severe allergic reaction (anaphylaxis) and require immediate attention.

The severity of reactions to food allergens is difficult to predict and varies depending on the child's particular sensitivity to the food and on the type and amount of exposure to the food. Ingesting a food allergen triggers most severe reactions, while inhaling or having skin contact with food allergens generally causes mild reactions. 23,24,25 The severity of reaction from food ingestion also can be influenced by the child's age, how quickly the allergen is absorbed (e.g., absorption is faster if food is taken on an empty stomach or ingestion is associated with exercise), and by co-existing health conditions or factors. 17 For example, a person with asthma might be at greater risk of having a more severe anaphylactic reaction. Exercise and certain medications also can increase the harmful effects of certain food allergens. 13,26,27

# Food Allergies and Asthma

One-third of children with food allergies also have asthma, which increases their risk of experiencing a severe or fatal reaction. 28 Data also suggest that children with asthma and food allergies have more visits to hospitals and emergency departments than children who don't have asthma. 2,29,30 Because asthma can pose serious risks to the health of children with food allergies, schools and ECE programs must consider these risks when they develop plans for managing food allergies.

# Allergic Reactions and Anaphylaxis

Anaphylaxis is best described as a severe allergic reaction that is rapid in onset and may cause death. 31 Not all allergic reactions will develop into anaphylaxis. In fact, most are mild and resolve without problems. However, early signs of anaphylaxis can resemble a mild allergic reaction. Unless obvious symptoms-such as throat hoarseness or swelling, persistent wheezing, or fainting or low blood pressure-are present, it is not easy to predict whether these initial, mild symptoms will progress to become an anaphylactic reaction that can result in death. 13 Therefore, all children with known or suspected ingestion of a food allergen and the appearance of symptoms consistent with an allergic reaction must be closely monitored and possibly treated for early signs of anaphylaxis.

# Characteristics and Risk Factors

Food allergies account for 35%-50% of all cases of anaphylaxis in emergency care settings. 32 Many different food allergens (e.g., milk, egg, fish, shellfish) can cause anaphylaxis. In the United States, fatal or near-fatal reactions are most often caused by peanuts (50%-62%) and tree nuts (15%-30%). 33 Results of studies of fatal allergic reactions to food found that a delay in administering epinephrine was one of the most significant risk factors associated with fatal outcomes. 13,26 Some population groups, including children with a history of anaphylaxis, are at higher risk of having a severe reaction to food (see Fatal Food Allergy Reactions).
# Fatal Food Allergy Reactions

# Risk Factors

• Delayed administration of epinephrine.
• Reliance on oral antihistamines alone to treat symptoms.
• Consuming alcohol and the food allergen at the same time.

# Groups at Higher Risk

• Adolescents and young adults.
• Children with a known food allergy.
• Children with a prior history of anaphylaxis.
• Children with asthma, particularly those with poorly controlled asthma.

# Timing of Symptoms

In general, anaphylaxis caused by a food allergen occurs within minutes to several hours after food ingestion. 13 Death due to food-induced anaphylaxis may occur within 30 minutes to 2 hours of exposure, usually from cardiorespiratory compromise. 13 By the time symptoms of an allergic reaction are recognized, a child is likely to already be experiencing anaphylaxis. Symptoms of anaphylaxis can begin with mild skin symptoms (e.g., hives, flushing) that progress slowly, appear rapidly with more severe symptoms, or appear (in rare circumstances) with shock in the absence of other symptoms. In fact, many fatal anaphylaxis cases caused by food do not follow a predictable pattern that starts with mild skin symptoms. Even if initial symptoms are successfully treated or resolve completely, up to 20% of anaphylactic reactions recur within 4-8 hours (called a biphasic reaction). In other cases, symptoms do not completely resolve and require additional emergency care. For these reasons, children with food-induced anaphylaxis must be monitored closely and evaluated as soon as possible in an emergency care setting.

# Treatment of Anaphylaxis and Use of Epinephrine

No treatment exists to prevent reactions to food allergies or anaphylaxis. Strict avoidance of the food allergen is the only way to prevent a reaction. However, avoidance is not always easy or possible, and staff in schools and ECE programs must be prepared to deal with allergic reactions, including anaphylaxis. Early and quick recognition and treatment of allergic reactions that may lead to anaphylaxis can prevent serious health problems or death.

The recommended first line of treatment for anaphylaxis is the prompt use of epinephrine. Early use of epinephrine to treat anaphylaxis improves a person's chance of survival and quick recovery. 13,34

# Allergens that May Result in Anaphylaxis that Require Use of Epinephrine

• Foods such as peanuts, tree nuts, milk, eggs, fish, or shellfish.
• Medications such as penicillin or aspirin.
• Bee venom or insect stings, such as from yellow jackets, wasps, hornets, or fire ants.
• Latex, such as from gloves.

Epinephrine, also called adrenaline, is naturally produced by the body. When given by injection, it rapidly improves breathing, increases heart rate, and reduces swelling of the face, lips, and throat. Epinephrine is typically available in the form of an auto-injector, a spring-loaded syringe used to deliver a measured dose of epinephrine, designed for self-administration by patients or administration by persons untrained in other needle-based forms of epinephrine delivery. In a clinical setting, patients may receive epinephrine through other needle-based delivery methods. Epinephrine can quickly improve a person's symptoms, but the effects are not long-lasting. If symptoms recur (a biphasic reaction), additional doses of epinephrine are needed.
Even when epinephrine is used, 911 or other emergency medical services (EMS) must be called so the person can be transported quickly in an emergency vehicle to the nearest hospital emergency department for further medical treatment and observation. 13

It is not possible to set one guideline for when to use epinephrine to treat allergic reactions caused by food. A person needs clinical experience and judgment to recognize the symptoms associated with anaphylaxis, and not all school or ECE program staff have this experience. Clinical guidelines for how to manage food-induced allergic reactions have mainly focused on the health care setting. They emphasize the need to watch patients closely and give the proper treatment, including epinephrine. Treatment decisions are based on the progression or increased severity of symptoms and whether the patient has a history of risk factors for anaphylaxis (see Fatal Food Allergy Reactions). 13 For example, the clinical guidelines favor quick and early use of epinephrine as soon as even mild symptoms appear for children who have had severe allergic reactions in the past.

Some schools and ECE programs offer clinical services from a doctor or registered nurse. In these cases, the doctor or nurse can use the clinical guidelines to assess children and make decisions about treatment, including if or when to use epinephrine. However, many schools and most ECE programs do not have a doctor or nurse onsite to make such an assessment. In these cases, a staff person at the scene should call 911 or EMS immediately. If staff are trained to recognize symptoms of an allergic reaction or anaphylaxis and are delegated and trained to administer epinephrine, they also should administer epinephrine by auto-injector at the first signs of an allergic reaction, especially if the child's breathing changes. In addition, school or ECE program staff should make sure that the child is transported without delay in an emergency vehicle to the nearest hospital emergency department for further medical treatment and observation. 24,35 These actions may result in administering epinephrine and activating emergency response systems for a child whose allergic reaction does not progress to life-threatening anaphylaxis. However, the delay or failure to administer epinephrine and the lack of medical attention have contributed to many fatal anaphylaxis cases from food allergies. 25,36-38 The risk of death from untreated anaphylaxis outweighs the risk of adverse side effects from using epinephrine in these cases.

# Emotional Impact on Children with Food Allergies and Their Parents d

The health of a child with a food allergy can be compromised at any time by an allergic reaction to food that is severe or life-threatening. Many studies have shown that food allergies have a significant effect on the psychosocial well-being of children with food allergies and their families. 39-45 Parents of a child with a food allergy may have constant fear about the possibility of a life-threatening reaction and stress from the constant vigilance needed to prevent a reaction. They also have to trust their child to the care of others, make sure their child is safe outside the home, and help their child have a normal sense of identity. Children with food allergies may also have constant fear and stress about the possibility of a life-threatening reaction. The fear of ingesting a food allergen without knowing it can lead to coping strategies that limit social and other daily activities.
Children can carry emotional burdens because they are not accepted by other people, they are socially isolated, or they believe they are a burden to others. They also may have anxiety and distress that is caused by teasing, taunting, harassment, or bullying by peers, teachers, or other adults. School and ECE program staff must consider these factors as they develop plans for managing the risk of food allergy for children with food allergies.

d. For the purposes of this document, the word parent is used to refer to the adult primary caregiver(s) of a child's basic needs (e.g., feeding, safety). This includes biological parents; other biological relatives such as grandparents, aunts, uncles, or siblings; and non-biological parents such as adoptive, foster, or stepparents.

# Section 1. Food Allergy Management in Schools and Early Care and Education Programs

# Essential First Steps

School and ECE program staff should develop a comprehensive strategy to manage the risk of food allergy reactions in children. This strategy should include (1) a coordinated approach, (2) strong leadership, and (3) a specific and comprehensive plan for managing food allergies.

# Use a coordinated approach that is based on effective partnerships.

The management of any chronic health condition should be based on a partnership among school or ECE program staff, children and their families, and the family's allergist or other doctor. 11,42,46

[Figure: partnership diagram showing that effective management of food allergies rests on the school or ECE program staff, the child with a food allergy and parent, and the allergist or other doctor.]

The collective knowledge and experience of a licensed doctor, children with food allergies, and their families can guide the most effective management of food allergies in schools or ECE programs for each child. Close working relationships can help ease anxiety among parents, build trust, and improve the knowledge and skill of school or ECE program staff members. 10,14,42

In schools, all staff members play a part in protecting the health and safety of children with chronic conditions. These staff members include administrators, school nurses or school doctors, food service staff (including food service contract staff), classroom and specialty teachers, athletic coaches, school counselors, bus drivers, custodial and maintenance staff, therapists, paraeducators, special education service providers, librarians and media specialists, security staff, substitute teachers, and volunteers, such as playground monitors and field trip chaperones. In ECE programs, staff members include the center director, health consultants, nutrition and food service staff, Head Start and child care providers, preschool teachers, teaching assistants, aides, volunteers, and transportation staff. A team structure allows for collective management of food allergies, with coordinated planning and communication to ensure that staff responsibilities are carried out in a clear and consistent manner.
Instead of creating a new team to address the food allergy-related needs of a particular student with a food allergy, schools and ECE programs can use an existing team such as the student's Section 504 committee (which addresses Section 504 of the Rehabilitation Act of 1973), the student's Individualized Education Program (IEP) team [which addresses special education and related services under Part B of the Individuals With Disabilities Education Act (IDEA)], the school improvement team, child and learning support team, school health or wellness team, or Head Start Health Services Advisory Committee.

Involving the school doctor (if applicable), an allergist in the community, or the child's doctor can help the school or ECE program reduce the risk of accidental exposure to allergens. 47 A doctor's diagnosis of a food allergy is necessary to accurately inform plans for avoiding food allergens and managing allergic reactions. The doctor can also give advice on the best practices to control or manage food allergies. 24,25 An allergist is a licensed doctor with specialty training in the diagnosis and treatment of allergic diseases, asthma, and diseases of the immune system.

Children with food allergies and their parents have firsthand experience with allergic reactions and are most familiar with a child's unique signs and symptoms. Parents should give the school or ECE program documentation that supports a doctor's diagnosis of food allergy, as well as information about prior history and current risk of anaphylaxis. This information is critical to preventing risk of exposure to allergens and outlining the actions that must be taken if a food allergen exposure occurs. Parents should be continually involved in helping to build a learning environment that is responsive to their child's unique health condition. 47 By working together, parents and school or ECE program staff can communicate better and make sure they have the same expectations. This partnership also shows a shared commitment to the child's well-being and builds parental support and confidence in the ability of school or ECE program staff to manage food allergies.

Many parents give their ECE program an Emergency Care Plan (ECP) developed by the child's allergist or other doctor. This plan may be the only information ECE program staff members have to manage the child's food allergy. When multiple children have food allergies, the result can be multiple approaches for addressing and managing food allergies and reactions. Instead, ECE programs should use a coordinated approach that is built on partnerships among ECE program staff, parents, and doctors. With a coordinated approach, staff can create one consistent plan of action for responding to any child with a food allergy and to any allergic reaction. 10,44,48

# Provide clear leadership to guide planning and ensure implementation of food allergy management plans and practices.

Successful coordination of food allergy planning requires strong school and ECE program leadership. The support of school principals and ECE program administrators is critical, but it may make more sense for the person who provides or coordinates health services for children to lead the food allergy planning process. 49,50 For example, most schools and some ECE programs have a school or district nurse, school doctor, or health consultant or manager. Nationwide, about 85% of schools have either a part-time or full-time nurse to provide health services to students (37% have a full-time nurse). 8
In Head Start programs, health services must be supervised by staff members or consultants with training in health-related fields. 50,51 School nurses, school doctors, and health consultants or managers should have the expertise to help schools and ECE programs develop plans to manage food allergies. Specifically, these staff members can:

• Work with families and doctors to obtain or create an Emergency Care Plan (ECP) for children with food allergies.

• Make sure that each child's plan for managing food allergies is consistent with federal laws and regulations, state laws, including regulations, local policies, and standards of professional practice.

• Act as a liaison between school and district policy makers, ECE program administrators, health services staff members, food service staff members, community health service providers, and emergency responders.

• Make sure that education records that include personally identifiable information about a student's food allergy are generally not disclosed without the prior written consent of the parent (or eligible student) in compliance with the Family Educational Rights and Privacy Act of 1974 (FERPA), 20 U.S.C. 1232g and its implementing regulations in 34 CFR part 99, and any other applicable federal and state laws that protect the privacy or confidentiality of student information. (FERPA may not require parental consent in all circumstances.) FERPA also includes an emergency exception to the prior consent requirement if there is an articulable and significant threat to the health or safety of the student or others. (See Section 5 for more information about FERPA.)

• Monitor the use of medication in the school or ECE program setting.

• Obtain an epinephrine auto-injector and make sure it is rapidly available to designated and trained staff members to respond to a child's food allergy emergency.

• Recognize and handle medical emergencies.

• Learn about best practices for managing food allergies.

• Help schools and ECE programs develop a comprehensive approach to managing food allergies. Coordinate a team to put the resulting plan into action.

• Work with food service staff on parts of the plan that involve meal and snack preparation and services.

• Identify internal resources and community partners that can support the planning process.

• Share general information about food allergies with staff members, parents, and others who need it.

• Make sure staff receive the training they need, including how to administer an epinephrine auto-injector. A doctor or registered nurse can provide this training.

• Talk with staff members, doctors, children, and their families about food allergies and how they should be managed. Share concerns from parents and children with the food allergy management team.

• Review the school or ECE program plan on a regular basis to look for ways to reduce exposure to food allergens and better manage allergic reactions. Recommend changes when needed to make the plans better. 49,50,52

The many responsibilities outlined in this section demonstrate the benefit of schools having a full- or part-time registered nurse (and a part-time doctor) and of ECE programs having a medically trained and knowledgeable health consultant or manager. However, if a registered nurse, doctor, or health consultant or manager is not available, the school or ECE program administrator can develop a comprehensive plan that may include delegating some critical responsibilities to other trained professional staff.
This plan should also include seeking advice from the child's primary doctor or allergist and training guidance and assistance from health services staff at the district level. 47 (See Sections 2 and 3.)

# Develop and implement a comprehensive plan for managing food allergies.

To effectively manage food allergies and the risks associated with these conditions, many people inside and outside the school or ECE program must come together to develop a comprehensive plan, called the Food Allergy Management and Prevention Plan (FAMPP). This plan should include all strategies and actions needed to manage food allergies in the school or ECE program. It also should be compatible with the approach used to address other chronic conditions in each individual setting. 14 The FAMPP should reinforce the efforts of each school or ECE program to create a safe learning environment for all children. It should address systemwide planning, implementation, and follow-up and include specific actions for each individual child with a food allergy. The FAMPP should:

• Meet the requirements of federal laws and regulations, such as Section 504 of the Rehabilitation Act of 1973, the Americans with Disabilities Act (ADA), and the Richard B. Russell National School Lunch Act, if applicable. An explanation of how these federal laws could apply to students with food allergies is provided in Section 5. Among other things, these federal laws address individualized assessment of each child's needs and parental participation in the development of any plan or program designed to meet these dietary needs. An effective FAMPP also would need to meet the requirements of state and local laws and regulations and district policies.

• Reflect clear goals, purposes, and expectations for food allergy management that are consistent with the school's or ECE program's mission and policies.

• Be clear and easy to understand and implement.

• Be responsive to the needs of any child with food allergies by taking into account the different and unique needs of each child.

• Be adaptable and updated regularly on the basis of experiences, best practices, current research, and changes in district policy or state or county law.

The FAMPP should address the following five priorities:

1. Ensure the daily management of food allergies for individual children.
2. Prepare for food allergy emergencies.
3. Provide professional development on food allergies for staff members.
4. Educate children and family members about food allergies.
5. Create and maintain a healthy and safe educational environment.

The remainder of this section provides more detail and specific recommendations for each priority. This section concludes with a comprehensive Food Allergy Management and Prevention Plan (FAMPP) Checklist for use in schools and ECE programs. This checklist can help schools and ECE programs improve their ability to manage the risk of food allergies and assess whether their plans address all five priorities.

# Priorities for Managing Food Allergies

# Ensure the daily management of food allergies for individual children.

To protect the health and safety of an individual child with food allergies, school and ECE program staff must identify children with a history of food allergies and develop or obtain plans to manage their allergies.

a. Identify children with food allergies.
Schools and ECE programs usually have forms and procedures to identify children with chronic conditions, including food allergies, when they enroll or transfer to the school-or when the condition is not initially reported but becomes evident during the academic year. Examples include health condition forms or parent interviews. Children or parents may report a food allergy on the required forms, but this information may not be accurate or complete. School and ECE program staff must work with parents to obtain, directly from the child's health care provider, the medical information necessary to develop plans for individual care and emergency action. The USDA requires a doctor's statement that a child has a food allergy disability before food service staff in the Child Nutrition Program can make meal accommodations and provide a safe meal for a child with a food allergy.

b. Develop a plan to manage and reduce the risk of food allergy reactions in individual children.

Parents and doctors should provide information and recommendations to help schools and ECE programs develop written plans to manage food allergies for children on a daily basis. This information may be provided on health condition forms, medical orders, a doctor's statement, or diet orders. A variety of names are used for written plans for individual children with food allergies. It is essential for children to have a short, easy-to-follow plan for emergency care. This is usually a food allergy Emergency Care Plan (ECP). Other names used for the ECP include a "food allergy action plan," an "emergency action plan," or, in ECE programs, an "individual care plan." Schools or ECE programs may need to establish additional plans, such as a Section 504 plan or, if appropriate, an Individualized Education Program (IEP), or may establish a nursing assessment- and outcome-type Individualized Health Plan (IHP).

The ECP is the basic form used to collect food allergy information, and it should be completed for every child identified as having a food allergy. 24,25,48,52-60 (If an ECP form is used by the Child Nutrition Program staff to make meal accommodations, it should include the medical information required by the USDA and must be signed by the doctor.) This form should be kept in each child's school health record, and it may include the following:

- A recent photo of the child.
- Information about the food allergen, including a confirmed written diagnosis from the child's doctor or allergist.
- Information about signs and symptoms of the child's possible reactions to known allergens.
- Information about the possible severity of reactions, including any history of prior anaphylaxis (although anaphylaxis can occur in children without a history of prior anaphylaxis).
- A treatment plan for responding to a food allergy reaction or emergency, including whether an epinephrine auto-injector should be used.
- Information about other conditions, such as asthma or exercise-induced anaphylaxis, that might affect food allergy management.
- Contact information for parents and doctors, including alternate phone numbers for notification in case of emergency.

The ECP should be written by the child's doctor and confirmed with the parents. In some cases, it can be written by a registered nurse or school doctor, as long as the child's doctor is consulted and the parents confirm the plan.
The child's doctor and parents should sign and date the ECP, and schools and ECE programs should not accept a child's ECP without confirmation and signature from the child's doctor. If a public elementary or secondary school maintains an ECP on an individual child, the ECP would be covered by FERPA as an "education record." The ECP should specifically state who may have access to the information in the plan and should ensure that any such access is permissible under FERPA and any other applicable federal or state laws that protect the privacy or confidentiality of student information. (See Section 5 for more information about FERPA.) Section 6 lists state and organizational resources that include examples of ECPs and suggested processes that schools and ECE programs might use to develop their ECPs.

An IHP is a written document, developed and used by a registered nurse, that outlines how children will receive health care services at school. The IHP documents a specific student's health needs and outlines specific health outcome expectations and plans for achieving these expectations. The use of an IHP is standard practice for schools with a full-time or part-time registered nurse, and it is commonly used to document the progress of children with an identified chronic condition such as food allergies.39,47,52,61 The IHP helps registered nurses manage the risk of food allergies, prevent allergic reactions, and coordinate care with other staff (such as food service staff) and health service providers outside the school. Federal law does not require the use of an IHP, but its contents can be useful to the nurse in addressing the requirements of federal laws related to school responsibilities for children with food allergies. Section 6 lists state and organizational resources that include examples of an IHP.

If a doctor determines that a child's food allergy may result in anaphylaxis and the child's food allergy constitutes a disability under applicable federal disability laws, school staff can integrate information from the ECP, doctor's statement, and IHP into a Section 504 plan or, if appropriate, into an IEP. (See Section 5 for more information on applicable federal laws.) Schools should still use an ECP with specific, easy-to-read information about how to respond to a food allergy reaction.

For children who are identified as having a food allergy disability and who attend a school or ECE program that participates in the U.S. Department of Agriculture's (USDA's) Child Nutrition Programs, a meal or food substitution or modification must be made when the diagnosis is supported by a doctor's signed statement. Before Child Nutrition Program food service staff can provide a safe meal accommodation, parents must provide a statement from a licensed doctor that identifies:
- The child's disability (according to pertinent statutes).
- An explanation of why the disability restricts the child's diet.
- The major life activity affected by the disability.
- The food or foods to be omitted from the child's diet.
- The food or choice of foods that must be substituted.62

A child recognized by the Child Nutrition Program staff as having a food allergy disability does not have to have a Section 504 plan, ECP, IHP, or IEP in order for a meal accommodation to be provided. A statement signed by a licensed doctor addressing the points above is sufficient.
However, the doctor's statement required by the Child Nutrition Program can be integrated into any plan a school or ECE program develops to meet a child's special dietary needs. If a Section 504 plan or, if appropriate, an IEP is developed in connection with the provision of services required under those laws to address the student's food allergy disability, information from the ECP is still useful and can be referenced in, or incorporated into, the Section 504 plan or IEP. Note that a Section 504 plan or IEP is an education record subject to FERPA. For children not covered by federal disability laws, schools can use the ECP and IHP to manage each child's food allergy. The IHP can include information about modifications and substitutions for meal and snack planning. An IHP or ECP developed for an individual student is also an education record subject to FERPA.

In ECE programs, every child with a food allergy should have an ECP or individual care plan, even if the child has a Section 504 plan or, if eligible for services under the Individuals with Disabilities Education Act (IDEA), has an IEP or, if appropriate, an individualized family service plan (IFSP). (See Section 5 for more information regarding these federal laws.) Because most ECE programs do not have a registered nurse on staff to develop such a plan, the ECE program's health consultant, health manager, or administrator should review each child's health records and emergency information at enrollment and work with parents to obtain an ECP for each child diagnosed with a food allergy. The ECP should be updated at least once a year. Health consultants or managers can share information about any allergic reactions, changes in the child's health status, and exposure to allergens with parents and doctors (with the parents' permission). Working with parents and the child's health care provider is essential to make sure that children get the medical services and accommodations they need. Staff should consider referring children without access to health care to health services when possible. ECPs used by ECE programs should be signed and dated by the child's doctor and parents. The plan should specifically state who has access to the plan. The plan also should state which staff members are responsible for the care, transportation, and feeding of children with food allergies.12,50 (See Section 5 for more information about applicable federal laws.)

c. Help students manage their own food allergies.

Young children in ECE programs and early elementary grades generally cannot manage their own food allergies. However, some students, especially adolescents, can take responsibility for managing their own food allergies, including carrying and using epinephrine when needed. When medication is required by students who have chronic health conditions, especially when medication may be lifesaving, it is best practice to encourage and assist students to become educated and competent in their own care.48,54,63,64

Students who can manage their own food allergies should have quick (within a few minutes) access to an epinephrine auto-injector, both at school and during school-related events.13 Some schools allow students to carry prescribed epinephrine auto-injectors (e.g., in a pocket, backpack, or purse) at school. Some state laws allow students to carry auto-injectors during activities on school property and during transportation to and from school or school-related events.63
Federal law requires reasonable modifications of school policies when necessary to avoid disability discrimination, and in some cases this may require allowing a student to carry an epinephrine auto-injector. School officials should check state and federal laws before setting their policies and practices. (See Section 5 for more information about applicable federal laws.)

Before students are allowed to carry and use medication, school staff should assess students' knowledge, attitudes, behaviors, and skills to determine their ability to handle this responsibility.64 This decision should be reassessed periodically, and the school nurse or another assigned staff member should randomly check to make sure students are carrying their epinephrine auto-injectors.

Some students with food allergies may choose to wear medical alert bracelets, which can aid emergency response.13 School officials can encourage students to wear these bracelets, but they should not require them. Some students will not want to wear such jewelry because they fear being stigmatized.

School nurses and other school staff members should reinforce self-management skills for students with food allergies. These skills include reading labels, asking questions about foods in the school meal and snack programs, avoiding unlabeled or unknown foods, using epinephrine auto-injectors when needed, and recognizing and reporting an allergic reaction to an adult. Even when students are able to manage their own food allergies, school staff need to know which students have allergies so they can have plans in place to monitor each student's condition and be able to respond in an emergency. Because some symptoms of anaphylaxis may continue after a dose of epinephrine is administered, and because students might not always have their medication with them, schools should also keep a second epinephrine auto-injector (provided by the parent or student) in a secure but rapidly accessible location.63,65,66 (See the textbox on page 31 related to the justification for more than one dose of epinephrine.)

# Prepare for food allergy emergencies.

All schools and ECE programs should anticipate and prepare for food allergy emergencies in the same ways they approach emergency preparedness for other hazards. Comprehensive emergency planning includes prevention, preparedness, response, and recovery for any type of emergency. This "all-hazards" model is often used to plan for natural disasters, weather-related emergencies, and pandemic influenza. A school's all-hazards emergency plan also should address potential crises caused by violence, as well as food allergy emergencies.67 This plan should go beyond each child's ECP to include building-level planning, communication, training, and emergency response procedures.68

a. Set up communication systems that are easy to use.

Communication devices, such as intercoms, walkie-talkies, or cell phones, should be available at all times in case of an emergency. School and ECE program staff in classrooms, gymnasiums, cafeterias, playgrounds, and transportation vehicles should be able to communicate easily and quickly with the school nurse, school authorities, health consultants or managers, emergency responders, and parents. Communication devices should be checked regularly to make sure they work.

b. Make sure staff can get to epinephrine auto-injectors quickly and easily.

Quick access to and immediate availability of epinephrine to respond to anaphylaxis emergencies is essential.13
It is the parent's responsibility to provide at least one, and preferably two, epinephrine auto-injectors for a child with food allergies if they are prescribed by a doctor. It is the school's or ECE program's responsibility to store epinephrine auto-injectors in a place that can be reached quickly and easily and to delegate and train staff to give epinephrine in response to allergic reactions. Studies have shown that quick access to epinephrine is critical to saving lives in episodes of anaphylaxis.24,25,37

To ensure quick access to epinephrine, auto-injectors should be kept in a safe and secure place that trained staff members can get to quickly during school or ECE program hours.63,68-70 At the same time, staff must also follow federal and state laws, including regulations, and local policies that may require medications to be locked in a secure place. For example, federal Head Start regulations require that all "grantee and delegate agencies establish and maintain written procedures regarding the administration, handling, and storage of medication for every child," including "labeling and storing, under lock and key, and refrigerating, if necessary, all medications, including those required for staff and volunteers."51 State regulations and local policies may similarly require locking medications in a secure location. School and ECE program staff should seek guidance from federal and state regulatory agencies and local policy makers when deciding how to store epinephrine auto-injectors.

These decisions also must take into account the needs of each student and the specific characteristics of the school district, the staff, and the school building. Decisions on where to store medication, such as in a central location (office or health room), in the classroom, or in several locations (on a large school campus), may vary among school districts and schools. These decisions should be based on state and local laws and regulations and school policies, and they must ensure the safety of children with food allergies. The Guidelines for Managing Life-threatening Food Allergies in Connecticut Schools list some issues to consider, including:54
- General safety standards for handling and storing medication.
- The developmental stage and competence of the student.
- The size of the building.
- The availability of a full-time school nurse in the building.
- The availability of communication devices between the school nurse and the teachers and paraprofessionals who are inside the building or on the playground.
- The school nurse's response time from the health office to the classroom.
- Preferences and other responsibilities of the teacher.
- Preferences of the parent.
- Preferences of the student (as applicable).
- Movement of the student within the building.

The location(s) of medications should be listed in the school's overall emergency plan and in each child's ECP (and IHP, Section 504 plan, or IEP, if appropriate). Schools and ECE programs should also identify which staff members will be responsible for reviewing expiration dates and replacing outdated epinephrine auto-injectors and for carrying medication during field trips and other school events.70

c. Make sure that epinephrine is used when needed and someone immediately contacts emergency medical services.

Delays in using epinephrine have resulted in near-fatal and fatal food allergy reactions in schools and ECE programs.25,36,37 In a food allergy emergency, trained staff should give epinephrine immediately.
Early and appropriate administration of epinephrine can temporarily stop allergic reactions and provide the critical time needed to get medical help. State laws, state nursing regulations, and local school board policies direct medication administration in schools and ECE programs. They often define which medications nonhealth professionals are allowed to administer in schools, including who may administer epinephrine by auto-injector. If nonhealth staff members are permitted to administer epinephrine, training should be required.39,71

When epinephrine is used, school or ECE program staff must call 911 or emergency medical services (EMS). EMS should be told that the emergency is due to an allergic reaction, whether epinephrine has been administered and when it was administered, and that an additional dose of epinephrine may be needed. The child should be transported quickly in an emergency vehicle to the nearest hospital emergency department for further medical treatment and observation.13 Staff also should contact the child's parents to inform them of their child's food allergy emergency and tell them where the child is being transported. Because medical attention is needed urgently in this situation, staff must not wait for parents to come and pick up their children before calling EMS.

# Justification for More Than One Dose of Epinephrine

Schools and ECE programs should consider keeping multiple doses of epinephrine onsite so they can respond quickly to a food allergy emergency. Although some schools allow students to carry their own auto-injectors, a second auto-injector should be available at school in case a student does not have one at the time of the emergency. School and ECE program staff may also decide that having more than one auto-injector at different locations (especially for a large building or campus) will best meet a child's needs. In addition, some symptoms of anaphylaxis may continue after one dose of epinephrine, so a second dose may be needed at school if EMS does not arrive quickly.

Some state laws allow the prescribing of a stock supply of non-patient-specific epinephrine auto-injectors for use in schools, which may allow schools or ECE programs to acquire the needed additional doses of epinephrine. When allowed by state law and local policy, schools and ECE programs that have a doctor or nurse onsite can stock their emergency medical kits with epinephrine auto-injectors to be used for anaphylaxis emergencies.63,65,66,72 In states where legislation does not exist or does not allow schools or ECE programs to stock epinephrine, staff will need to work with parents and their doctors to get additional epinephrine auto-injectors for students who need them.

d. Identify the role of each staff member in an emergency.

Any plan for managing food allergies should state specifically what each staff member should do in an emergency. This information should be simple and easy to follow, particularly when a staff member who is not a licensed health professional is delegated to administer epinephrine.24,68 Ideally, a registered nurse or doctor would be available to assess a food allergy emergency and decide if epinephrine is needed. When a nurse or doctor is not onsite, trained unlicensed assistive personnel or nonhealth professionals can recognize the signs and symptoms of an allergic reaction, have quick access to an epinephrine auto-injector, and administer epinephrine.70,71
Examples of these staff members may include health aides and assistants, teachers, athletic coaches, food service staff, administrators, and parent or adult chaperones. A licensed health care professional, such as a registered nurse, doctor, or allergist, should train, evaluate, and supervise unlicensed assistive personnel or delegated nonhealth professionals. This training should teach staff how to recognize the signs and symptoms of a reaction, administer epinephrine, contact EMS, and understand state and local laws and regulations related to giving medication to students.

ECE programs that care for children with chronic conditions such as food allergies should seek the services of a trained health advocate or consultant to help staff develop emergency plans, write policies, and train staff. ECE programs are required to have a certified first aider present at all times.50 All ECE program staff should get annual first aid training that teaches them how to recognize and respond to pediatric emergencies.49 This training should include how to recognize the signs and symptoms of an allergic reaction and how to give epinephrine through an auto-injector.23 ECE programs should keep records of all staff training.

e. Prepare for food allergy reactions in children without a prior history of food allergies.

Schools and ECE programs should be ready to respond to severe allergic reactions in children with no history of anaphylaxis or no previously diagnosed food allergies. At a minimum, schools and ECE programs should establish a protocol for contacting emergency services when an allergic reaction is suspected and follow this protocol immediately when a child exhibits signs of anaphylaxis. If allowed by state law, the school doctor or nurse may stock their emergency medical kits with epinephrine auto-injectors to be used for anaphylaxis emergencies. If the school or ECE program has a FAMPP, a written protocol, and licensed or delegated trained staff, an epinephrine auto-injector may be used for anaphylaxis regardless of previous allergy history.

f. Document the response to a food allergy emergency.

Emergency response should include a protocol for documenting or recording each emergency incident and use of epinephrine.12,65,68 Documentation should include the following:
- Time and location of the incident.
- Food allergen that triggered the reaction (if known).
- Whether epinephrine was used and, if so, when it was used.
- Notification of parents and EMS.
- Staff members who responded to the emergency.

Section 6 lists state and organizational resources that include examples of epinephrine administration reports. Corrective actions and lessons learned from an incident should be used to revise the child's individual plan and the school's or ECE program's FAMPP, if needed. School and ECE program administrators also should review the emergency response with the child's parents, the staff members involved in the response, local EMS responders, and the child.63,70 See the Example Checklist below for steps to follow after a nonfatal food allergy emergency.

# Example Checklist: Steps to Take Within 24 Hours of a Nonfatal Food Allergy Reaction

• Call the parent or guardian to follow up on the student's condition.
• Review the anaphylactic or allergic episode with the parent or guardian and the student.
- Identify the allergen and route of exposure, and discuss signs and symptoms with the parent or guardian.
- Review actions taken.
- Discuss positive and negative outcomes.
- Discuss any needed revisions to the care plan based on experience or outcome.
• Discuss the family's role with the parent or guardian to improve outcomes.
• Discuss school, ECE program, and home concerns to improve prevention, response, and student outcomes.
• Ask the parent or guardian to replace the epinephrine dose that was given, if needed.
• Ask the parent or guardian to follow up with the student's health care provider.

Source: National Association of School Nurses, 2011.

# Provide professional development on food allergies for staff.

Schools and ECE programs should provide training to all staff members to increase their knowledge about food allergies and how to respond to food allergy emergencies. This training should focus on how to reduce the risk of an allergic reaction, respond to allergic reactions, and support the social and academic development of children with food allergies.25,38,73 Schools and ECE programs should coordinate training activities with a licensed health care professional, such as a school nurse, public health nurse, public health educator, or school or community doctor. Training can include use of existing materials that provide general information about food allergies, as well as information and resources to help staff meet the specific needs of individual children.23,24,39,49,54-60,73,74 Administrators should allow enough time for proper training, and all training should be evaluated to make sure it is effective.

In 2010, the National Diabetes Education Program updated its guidance to help students manage their diabetes in schools.16 This updated guide outlines three levels of training that include basic training for all staff and specialized training for specific staff members. This approach provides a useful framework that has been adapted here to guide training on food allergy management in schools and ECE programs.

a. Provide general training on food allergies for all staff.

Any staff member who might interact with children with food allergies or be asked to help respond to a food allergy emergency should be trained. Examples include administrators, nutrition and food service staff (including contract staff), classroom and specialty teachers, athletic coaches, school counselors, bus drivers, custodial and maintenance staff, therapists, paraeducators, special education service providers, librarians and media specialists, security staff, substitute teachers, and volunteers such as playground monitors and field trip chaperones. General training content should include the following:
- School or ECE program policies and practices.
- An overview of food allergies.
- Definitions of key terms, including food allergy, major allergens, epinephrine, and anaphylaxis.
- The difference between potentially life-threatening food allergy and other food-related problems.
- Signs and symptoms of a food allergy reaction and anaphylaxis, and information on common emergency medications.
- General strategies for reducing and preventing exposure to allergens (in food and nonfood items).
- Policies on bullying and harassment and how they apply to children with food allergies.
- How to respond to a food allergy emergency.
- Information about federal laws that could apply, such as the ADA, Section 504, and FERPA, and about any state laws, including regulations, or district policies that apply. (See Section 5 for more information about applicable federal laws.)
- How to administer epinephrine with an auto-injector (for those formally delegated to do so).
b. Provide in-depth training for staff members who have frequent contact with children with food allergies. In addition to the general content above, this training should cover:
- How to help children treat their own food allergy episodes.
- Effects of food allergies on children's behavior and ability to learn.
- The importance of giving emotional support to children with food allergies and to other children who might witness a severe food allergy reaction (anaphylaxis).
- Common risk factors, triggers, and areas of exposure to food allergens in schools or ECE programs.
- Specific strategies for fully integrating children with food allergies into school and class activities while reducing the risk of exposure to allergens in classrooms, during meals, during nonacademic outings, on field trips, during official activities before and after school or ECE programs, and during events sponsored by schools or ECE programs that are held outside of regular hours. These strategies could address (but are not limited to) the following:
  - Special seating arrangements when age- and circumstance-appropriate (e.g., during meal times, birthday parties).
  - Plans for keeping foods with allergens separated from foods provided to children with food allergies.
  - Rules on how staff and students should wash their hands and clean surfaces to reduce the risk of exposure to food allergens.
  - The importance of not sharing food.
  - How to read food labels to identify food allergens.

c. Provide specialized training for staff who are responsible for managing the health of children with food allergies on a daily basis.

This training should be required for district nurses, school nurses, school doctors, and professionally qualified health coordinators or managers. In addition to the general and in-depth content described previously, this training should include information about how to:
- Create ECPs and review or develop other individual care plans as needed.
- Manage and store medication.
- Delegate and train unlicensed assistive personnel to administer epinephrine.
- Help children manage their own food allergies.
- Document the tasks performed as part of food allergy management.
- Evaluate emergency responses and staff members' ability to respond to food allergy emergencies.

Training should be conducted at least once a year and should be reviewed after a food allergy reaction or anaphylaxis emergency for the purpose of improving prevention and response.

Schools and ECE programs should consult with parents of children with food allergies when they design staff training. These parents have knowledge and experience on how to manage their child's food allergies, as well as information from their child's doctor. Parents do not need to participate in the delivery of training sessions or attend staff training.

# Educate children and family members about food allergies.

a. Teach all children about food allergies.

All children need to learn about food allergies, but teaching methods will differ on the basis of their age and the setting. For example, schools can provide food allergy education as part of health education or other curriculum topics, such as family and consumer sciences, general science, physical education, and character education.41,45,67,69,75,76 ECE programs can provide food allergy education with help from certified health education specialists. Food allergy education should be appropriate for the developmental level and culture of the children in a particular school or ECE program. It should focus on increasing awareness and understanding of food allergies and building support and acceptance of people with food allergies.59,76
At a minimum, all children should be able to:
- Identify signs and symptoms of anaphylaxis.
- Know and understand why it is wrong to tease or bully others, including people with food allergies.
- Know and understand the importance of finding a staff member who can help respond to suspected food allergy emergencies.
- Understand rules on hand washing, food sharing, allergen-safe zones, and personal conduct.

Food allergy awareness is reinforced when staff members model behaviors and attitudes that comply with rules that reduce exposure to food allergens.48

b. Teach all parents and families about food allergies.

A successful FAMPP needs support and participation from parents of children with food allergies and from parents of children without food allergies. All parents should get information to increase their awareness and understanding of food allergies, the policies and practices that protect children with food allergies, the roles of all staff members in protecting children with food allergies, and the measures parents of children with and without food allergies can take to help ensure this protection.45,76

School and ECE program administrators, working with school or district nurses or health consultants or managers, should educate families on food allergy policies and practices. Classroom teachers should provide information to all parents about what is being done to prevent food allergy reactions in the classroom. Food service staff should provide information to families about the U.S. Department of Agriculture's Food and Nutrition Service regulations and the practices used to protect children and manage food allergies during meals served under USDA meal programs. District and school policies and protocols to prevent bullying, respond to food allergy emergencies, and create a safe environment for all children should be shared with all families. Schools and ECE programs can share information in many ways, including through letters or e-mails to parents; updates on school Web sites; and announcements at parent-teacher association meetings, school nights, health fairs, and community events.

# Create and maintain a healthy and safe educational environment.

Schools, ECE programs, and communities have a shared responsibility to promote a safe physical environment that protects children with food allergies and a climate that supports their positive psychological and social development.77,78

a. Create an environment that is as safe as possible from exposure to food allergens.

Schools and ECE programs can create a safer learning environment by reducing children's exposure to potential allergens.24,39,54-59,74 When a child has a documented food allergy, staff should take active steps to reduce the risk of exposure in all common areas, such as classrooms and cafeterias.12 Some schools or ECE programs have considered banning or have banned specific foods across the entire school or ECE program setting in an attempt to eliminate exposing a child with a food allergy to that food. But such an option cannot guarantee a totally safe environment, because there is no reasonable or fail-safe way to prevent an allergen from inadvertently entering a building. Even with such a ban in place, a school or ECE program still has a responsibility to properly plan for children with any life-threatening food allergies, to educate all school personnel accordingly, and to ensure that school staff are trained and prepared to prevent and respond to a food allergy emergency.
Schools or ECE programs may choose other alternatives to banning allergens, including the designation of allergen-safe zones, such as an individual classroom or eating area in the cafeteria, or the designation of food-free zones, such as a library, classroom, or buses.45

b. Develop food-handling policies and practices to prevent cross-contact of food allergens.

State and local food safety regulations79 provide school districts, schools, and ECE programs with requirements governing the cleaning and sanitizing of surfaces and other practices that can protect against the unintentional transfer of residue or trace amounts of an allergenic food into another food. Some practices to reduce this cross-contact include the following:
- Clean and sanitize all surfaces that come into contact with food in kitchens, classrooms, and other locations where food is prepared or eaten, using soap and water or all-purpose cleaning agents and sanitizers that meet state and local food safety regulations. Cleaning with water alone will not remove food allergens.
- Clean and sanitize food preparation equipment, such as food slicers, and utensils before and after use to prevent cross-contact.
- Clean and sanitize trays and baking sheets after each use. Oils can seep through wax paper or other liners and cause cross-contact.
- Prepare food separately for children with food allergies. Strategies should include preparing items without allergens first, using a separate work space and equipment, and labeling and storing items before preparing other foods.
- Train all staff who prepare, handle, or serve food how to read labels to identify food allergens. Make sure that staff members are knowledgeable about current labeling laws. Because food labels often change, they should be read every time the food is purchased. Ingredient lists posted on Web sites are not reliable. The manufacturer of the food should be contacted if clarification is needed.
- Use appropriate hand-washing procedures that emphasize the use of soap and water. Hand sanitizers are not effective in removing food allergens.

Nutrition and food service staff in schools and ECE programs are required to follow local food safety and sanitation laws and to be trained in practices that prevent food, surface-to-food, and food-to-food contamination, which also help prevent cross-contact of food allergens. Meals and snacks may be served in locations other than cafeterias, handled by staff members other than the food service staff, or provided outside of a USDA Child Nutrition Program. When developing policies and procedures for food handling, consider all possible situations where food might be prepared or served, any staff members who might be involved, and the state and local food safety regulations that might apply to help prevent the transfer of food allergens in these situations.

In ECE programs, additional precautions are recommended to reduce the risk of food allergy reactions, especially among children with a history of anaphylaxis. Many of these recommendations are consistent with common practices for managing any child in an ECE program.
- Make sure that all staff members can read product labels and identify food allergens.
- Recommend, but do not require, that children with known food allergies wear a medical alert bracelet.
- Promote good hand-washing practices before and after eating.
- Supervise children closely during mealtimes. Consider assigned seating for meals, especially in situations with family-style dining. Emphasize that children should not share food.
- Put children's names on cups, plates, and utensils to avoid confusion and cross-contact.
- Designate food storage areas for foods brought from home.6,45,77

c. Make outside groups aware of food allergy policies and rules when they use school or ECE program facilities before or after hours.

Local agencies, community groups, and community members who use school or ECE program facilities before or after operating hours should be aware of and comply with policies on food, cleaning, and sanitation procedures. If food is allowed in the building, consider banning food from specific classrooms or areas that children with food allergies use often. School and ECE program staff should be notified when outside groups are using their facilities.

d. Create a positive psychosocial climate.

Schools and ECE programs should foster a climate that promotes positive psychological and social development; that actively promotes safety, respect, and acceptance of differences; and that fosters positive interpersonal relationships between staff members and children and among the children themselves. The psychosocial climate is influenced by clear and consistent disciplinary policies, meaningful opportunities for participation, and supportive behaviors by staff members and parents.78 Children with food allergies need an environment where they feel secure and can interact with caring people they trust. Bullying, teasing, and harassment can lead to psychological distress for children with food allergies, which could lead to a more severe reaction when the allergen is present.22,43,44 A positive psychosocial climate, coupled with food allergy education and awareness for all children, families, and staff members, can help remove feelings of anxiety and alienation among children with food allergies.43,44

To create a positive psychosocial climate, staff members, children, and parents must all work together. School nurses, school counselors, or mental health consultants can provide leadership and guidance to set best practices and strategies for a positive psychosocial climate. Staff members should promote and reinforce expectations for a positive and supportive climate by making sure the needs of children with food allergies are addressed. For example, they can avoid using language and activities that isolate children with food allergies and encourage everyone's help in keeping the classroom safe from food allergens. Children can help develop classroom rules, rewards, and activities.

All children and staff members share responsibility for preventing bullying and social isolation of children with food allergies. School and ECE program staff should recognize that acceptance by peers is one of the most important influences on a child's emotional and social development.78 Among adolescents, food allergy education and awareness can be an effective strategy to improve social interactions, reduce peer pressure, and decrease risk-taking behaviors that expose them to food allergens.22 Children should be expected to treat others with respect and to be good citizens, not passive bystanders, when they are aware of bullying or peers who seem troubled. Children should understand the positive or negative consequences associated with their actions. Rules and policies against bullying behavior should be developed in partnership with staff members, families, and children. They should be posted in buildings; published in school handbooks; and discussed with staff members, children, and families. All children and staff members should be encouraged to report bullying and harassment of any child with food allergies.80,81
# Conclusion

Schools and ECE programs are responsible for the health and safety of children with food allergies. The strategies presented in these guidelines can help schools and ECE programs take a comprehensive approach to managing food allergies. Through the collective efforts of school and ECE program staff members, parents, and health care providers, children with food allergies can be assured a safe place to thrive, learn, and succeed.

# Meals and Snacks

• Use nonfood incentives for prizes, gifts, and awards.
• Help students with food allergies read labels of foods provided by others so they can avoid ingesting hidden food allergens.
• Consider methods (such as assigned cubicles) to prevent cross-contact of food allergens from lunches and snacks stored in the classroom.
• Support parents of children with food allergies who wish to provide safe snack items for their child in the event of unexpected circumstances.
• Designate an allergen-safe food preparation area.
• Provide advance copies of menus for parents to use in planning.
• Be prepared to share food labels, recipes, or ingredient lists used to prepare meals and snacks with others.
• Do not allow food to be eaten on buses except by children with special needs, such as those with diabetes.
• Identify special needs before field trips or events.
• Package meals and snacks appropriately to prevent cross-contact.
• Encourage children to wash hands before and after handling or consuming food.

# Physical Education and Recess

• Encourage hand washing before and after handling or consuming food.

# Meals and Snacks

• Keep food labels from all foods served to children with allergies for at least 24 hours after serving the food in case the child has a reaction.
• Keep current contact information for vendors and suppliers so you can get food ingredient information.
• Read all food labels and recheck with each purchase for potential food allergens.
• Report mistakes, such as cross-contact with an allergen or errors in the ingredient list or menu, immediately to administrators and parents.
• Wash all tables and chairs with soap and water or all-purpose cleaning agents before each meal period.
• Encourage children, school staff, and volunteers to wash hands before and after handling or consuming food.

a USDA Web site: www.fns.usda.gov/cnd/guidance/special_dietary_needs.pdf.

# Food Allergy Management and Prevention Plan Checklist

Use this checklist to determine if your school or ECE program has appropriate plans in place to promote the health and well-being of children with food allergies. For each priority, check the box to the left if you have plans and practices in place. Develop plans to address the priorities you did not check. You can also use the checklist to evaluate your response to food allergy emergencies. Ongoing evaluation can help you improve your plans and actions. Review the full descriptions of the five priorities (pages 25-40) to make sure that your plans and practices are complete and that your plans for improvement will meet the needs of children, their families, administrators, and staff.

This section presents the actions that school district leaders can take to implement the voluntary recommendations in Section 1.
Although the focus of the recommendations is on the management of food allergies at the school building level, district-level leadership, policy, and staff support are essential for the success of school-level food allergy management.

# School District Policy Support

School boards can adopt written policies that direct and support clear, consistent, and effective practices for managing the risk of food allergies and the response to food allergy emergencies. Data from CDC's 2006 School Health Policies and Programs Study indicate that only slightly more than 40% of school districts have model food allergy policies.8 A comprehensive and uniform set of district policies for managing food allergies in schools can:
• Communicate the district's commitment to effectively managing food allergies to school administrators and staff members, parents, and the community.
• Promote consistency of priorities, actions, and options for managing food allergies across the district to avoid confusion and haphazard responses.
• Align food allergy management plans in schools with federal and state laws, including regulations, and policies, as well as other established school policies.
• Make protective practices and strategies for managing food allergies in schools an integral part of ongoing school activities.
• Support the food allergy management decisions and practices of school administrators and staff members.
• Increase public knowledge about food allergies and applicable laws and public support for implementation of effective food allergy management practices in schools.82-85

Section 6 provides a list of resources with more information and strategies to inform school policymakers.

# School District Staff Support

District policies are implemented with the support of board members, the district superintendent, and district-level staff members.82 District leaders and staff can:
• Communicate policy requirements and school system directives.
• Help schools implement and comply with applicable federal and state laws, including regulations, and policies.
• Help communicate lines of authority for managing food allergies in school buildings.
• Provide standardized forms, procedures, tools, and plans, including a sample Food Allergy Management and Prevention Plan (FAMPP), to schools.
• Coordinate training to improve consistency of practices across the district.
• Make sure that food allergy policies and practices address competitive foods, such as those available in vending machines, in school stores, during class parties, at athletic events, and during after-school programs.
• Help schools plan and implement their FAMPPs.

District staff might also provide direct assistance to schools to help them meet the needs of students with food allergies, especially when the school does not have key staff, such as a doctor or full-time registered nurse, working at the building level. District staff sometimes communicate directly with parents and doctors who might need additional information about a school's food allergy policies and practices. They also may communicate directly with parents whose children need help managing their food allergy as they move from one school to the next within the district.

# School Board Members

1. Set the direction for the school district's coordinated approach to managing food allergies.
• Develop a comprehensive set of school district policies to manage food allergies in school settings.
• Work with a variety of school staff, including school administrators, Section 504 coordinators, licensed health care professionals (e.g., doctors, registered nurses), school health advisory council members, teachers, paraeducators, school food service staff, bus drivers and other transportation staff, custodians and maintenance staff, after-school program staff, students, parents, community experts, and others who will implement policies.
• Align food allergy policies and practices with the district's "all-hazards" approach to emergency planning and with policies on the care of students with chronic health conditions.
• Be familiar with federal and state laws, including regulations, and policies relevant to the obligations of schools to students with food allergies and make sure local school policies and practices follow these laws and policies.
• Use multiple mechanisms, such as newsletters and Web sites, to disseminate and communicate food allergy policies to appropriate district staff, families, and the community.
• Give parents and students information about the school district's procedures they can use if they disagree with the food allergy policies and plans implemented by the school district.
• On a regular schedule, review and evaluate the district's food allergy-related policies and revise them as needed.

2. Prepare for food allergy emergencies.
• Make sure that responding to life-threatening food allergy reactions is part of the school district's "all-hazards" approach to emergency planning.
• Support and allocate resources for trained and appropriately certified or licensed staff members to respond to food allergy emergencies in all schools.
• Review data and information (e.g., when and where medication was used) from incident reports of food allergy reactions and assess the effect of the incident on all students involved. Modify your policies as needed.

3. Support professional development on food allergies for staff.
• Support and allocate resources and time for professional development and training on food allergies.
• Identify professional development and training needs to make sure that district and school staff, especially those on food allergy management teams, are adequately trained, competent, and confident to perform assigned responsibilities to help students with life-threatening food allergies and respond to an emergency.

4. Educate students and family members about food allergies.
• Encourage the inclusion of information about food allergies in the district's health education or other curriculum for students to raise awareness.
• Support and allocate resources for awareness education for students and parents.

5. Create and maintain a healthy and safe school environment.
• Endorse the use of signs and other strategies to increase awareness about food allergies throughout the school environment.
• Make sure that food allergy policies and practices address competitive foods, such as those available in vending machines, in school stores, during class parties, at athletic events, and during after-school programs.
• Support collaboration with district and community experts to integrate the management of food allergies with the management of other chronic health conditions.
• Support collaboration with district and community experts to make sure schools have healthy and safe physical environments.
• Develop and consistently enforce policies that prohibit discrimination and bullying against all students, including those with food allergies.

# School District Superintendent

1. Lead the school district's coordinated approach to managing food allergies.
• Provide leadership and designate school district resources to implement the school district's comprehensive approach to managing food allergies.
• Promote, disseminate, and communicate food allergy-related policies to all school staff, families, and the community.
• Make sure that each school has a team that is responsible for food allergy management.
• Be familiar with federal and state laws, including regulations, and policies relevant to the obligations of schools to students with food allergies and make sure your policies and practices follow these laws and policies.
• Give parents and students information about the school district's procedures they can use if they disagree with the food allergy policies and plans implemented by the school district.
• On a regular schedule, review and evaluate the school district's food allergy policies and practices and revise them as needed.
• Establish evaluation strategies for determining when the district's food allergy policies and practices or the school's FAMPP are not effectively implemented.

2. Prepare for food allergy emergencies.
• Make sure that responding to life-threatening food allergy reactions is part of the school district's all-hazards approach to emergency planning.
• Make sure that in each school, trained and appropriately certified or licensed staff members develop and implement written Emergency Care Plans (ECPs) for students with food allergies. Additional plans can include Individualized Healthcare Plans (IHPs), Section 504 plans, or, if appropriate, Individualized Education Programs (IEPs).
• Encourage periodic emergency response drills and practice on how to handle a food allergy emergency in schools.
• Review data and information (e.g., when and where medication was administered) from incident reports of food allergy reactions and assess the effect of the incident on all students involved. Modify policies as needed.

3. Support professional development on food allergies for staff.
• Make sure that district and school staff, especially those responsible for implementing the FAMPP, have professional development and training opportunities to become adequately trained, competent, and confident to perform assigned responsibilities to help students with food allergies and respond to an emergency.

4. Educate students and family members about food allergies.
• Help ensure that information about food allergies is included in the district's health education curriculum for students to raise awareness.
• Communicate with parents about the district's policies and practices to protect the health of students with food allergies.

5. Create and maintain a healthy and safe school environment.
• Increase awareness of food allergies throughout the school environment.
• Collaborate with school board members, school administrators, and other school staff to create a safe environment for students with food allergies.
• Provide oversight of schools with children who have food allergies.
• Make sure that food allergy policies and practices address competitive foods, such as those available in vending machines, in school stores, during class parties, at athletic events, and during after-school programs.
• Consistently enforce policies that prohibit discrimination and bullying against all students, including those with food allergies.

# Health Services Director

The health services director can be a doctor or registered nurse working at the district level.

1. Participate in the school's coordinated approach to managing food allergies.
• Help develop a school district's comprehensive approach to managing life-threatening food allergies that will support the FAMPP used in each school.
• Provide leadership and obtain the resources needed to implement the district's comprehensive approach to managing food allergies.
• Promote, disseminate, and communicate the food allergy policies and practices to all school staff, families, the school community, and the local medical community.
• Know and educate others about federal and state laws, including regulations, and policies relevant to the obligations of schools to students with food allergies and make sure policies and practices follow these laws.
• Make sure a doctor or registered nurse reviews all FAMPPs and ECPs. Create other plans as needed.
• Provide direct assistance to help schools develop procedures and plans for monitoring students with food allergies, including, if appropriate, through Section 504 plans or IEPs.
• Coordinate with other district staff, including the food service director, curriculum coordinator, and student support services director.
• Make sure that food allergy policies and practices address competitive foods (foods and beverages sold outside of the federal reimbursable school meals program), such as those available in vending machines, in school stores, during class parties, at athletic events, and during after-school programs.
• On a regular schedule, review and evaluate the school district's food allergy policies and practices and revise them as needed.

2. Ensure the daily management of food allergies for individual students.
• Help the school team responsible for the FAMPP write this plan. If a student is eligible to receive services under Section 504 or, if appropriate, IDEA, make sure all provisions of these federal laws are met.
• Create standard forms, such as health forms, school registration forms, and ECPs, for schools to use to identify students with food allergies and develop individual management plans for them. Establish protocols for tasks related to developing management plans, such as how to interview parents, get appropriate documentation from doctors, and coordinate meals with food service staff.
• Help schools implement policies and procedures for managing student medications. These policies should include how epinephrine auto-injectors are stored and accessed, how their use is monitored, and the schedule for regularly inspecting auto-injector expiration dates. They should also include plans for supporting students who are permitted and capable of managing their own food allergies by carrying and using epinephrine auto-injectors.
• Help schools that do not have a registered nurse on site develop plans to manage food allergies in individual students, provide health services when needed, and respond to food allergy emergencies.
• Help schools link students with food allergies and their families to community health services and family support services when needed.

3. Prepare for food allergy emergencies.
• Develop protocols for responding to food allergy emergencies that can guide practices at the building level.
• If allowed by state laws, including regulations, and district policy, obtain or write non-patient-specific prescriptions and standing orders for epinephrine auto-injectors that can be used to respond to anaphylaxis emergencies.
• Work directly with local emergency responders to confirm that they carry epinephrine auto-injectors for anaphylaxis emergencies.
• Review school emergency response plans to make sure they include the actions needed to respond to food allergy emergencies.
• Help schools conduct periodic emergency response drills and practice how to handle food allergy emergencies.
• Help schools conduct debriefing meetings after a food allergy reaction or emergency.
• Review data and information (e.g., when and where medication was administered) from incident reports of food allergy reactions and assess the effect of the incident on all students involved. Provide input to modify policies and practices as needed.
• Collect school data to monitor and track food allergy emergencies across the district. Use these data to guide improvements in policies and practices.

4. Support professional development on food allergies for staff.
• Seek professional development opportunities to learn updated information about managing food allergies.
• Educate district and school staff about food allergies so they are adequately trained, competent, and confident to perform assigned responsibilities to help students with food allergies and respond to an emergency.
• Coordinate district training for school nurses and others who might lead school teams responsible for implementing FAMPPs to make sure they have the information they need to develop effective plans.
• Know and educate others about federal and state laws, including regulations, and policies relevant to the obligations of schools to students with food allergies and make sure district policies and practices follow these laws and policies.
• Help school building leaders plan and provide food allergy training for staff, parents, and students.
• Help train delegated staff members on how to store, access, and administer epinephrine auto-injectors.

5. Educate students and family members about food allergies.
• Work collaboratively with the curriculum coordinator or health education coordinator at the district level to identify appropriate food allergy content for the district's health education curriculum.
• Help school administrators communicate the district's policies and practices for managing food allergies to parents through newsletters, announcements, and other methods.

6. Create and maintain a healthy and safe school environment.
• Work collaboratively with district staff to enforce policies that promote healthy physical environments.
• Work collaboratively with student support services staff at the district level to enforce policies that prohibit discrimination and bullying against all students, including those with food allergies.

# Student Support Services Director

The student support services director can be a school psychologist, school counselor, or child and family services director.

1. Participate in the school's coordinated approach to managing food allergies.
• Help develop a school district's comprehensive approach to managing food allergies that will support the FAMPP used in each school.
• Promote, disseminate, and communicate food allergy policies to all school staff, families, and the community.
• Know and inform others about federal and state laws, including regulations, and policies relevant to the obligations of schools to students with food allergies and make sure district policies and practices follow these laws and policies.
• Provide direct assistance to help schools establish procedures and plans for monitoring students with food allergies, including, if appropriate, through Section 504 plans or IEPs.
• Coordinate with other district staff, including the food service director, curriculum coordinator, and health services director.
• On a regular schedule, review and evaluate the school district's food allergy policies and practices and revise as needed.
# Ensure the daily management of food allergies for individual students.
• Help the school team responsible for implementing the FAMPP write this plan. If a student is eligible to receive services under Section 504 or, if appropriate, IDEA, make sure all provisions of these federal laws are met.
• Help schools link students with food allergies and their families to community health services and family support services when needed.
# Prepare for food allergy emergencies.
• Help develop protocols for responding to food allergy emergencies that can guide practices in district schools.
• Review school emergency response plans to make sure they include the actions needed to respond to food allergy emergencies.
• Help schools conduct periodic emergency response drills and practice how to handle a food allergy emergency.
• Review data and information (e.g., when and where medication was administered) from incident reports of food allergy reactions and assess the effect of the incident on affected students. Provide input to modify policies and practices as needed.
# Support professional development on food allergies for staff.
• Help educate district and school staff about food allergies so they are adequately trained, competent, and confident to perform assigned responsibilities to help students with food allergies and respond to an emergency.
• Help develop district training for all school staff to help them improve their FAMPPs.
• Know and educate others about federal and state laws, including regulations, and policies relevant to the obligations of schools to students with food allergies and make sure district and school policies and practices follow these laws and policies.
• Help school building leaders plan and provide food allergy training for staff, parents, and students.
# Educate students and family members about food allergies.
• Help school administrators communicate the district's policies and practices for preventing food allergy reactions to parents through newsletters, announcements, and other methods.
# Create and maintain a healthy and safe school environment.
• Work collaboratively with district staff to enforce policies that promote healthy physical environments.
• Work collaboratively with district health services staff, school principals, school counselors, and others to help enforce policies that prohibit discrimination and bullying against all students, including those with food allergies.
# District Food Service Director
1. Participate in the school's coordinated approach to managing food allergies.
• Help develop a school district's comprehensive approach to managing food allergies that will support the FAMPP used in each school.
• Make sure that food allergy policies and practices address competitive foods, such as those available in vending machines, in school stores, fundraisers, during class parties, at athletic events, and during after-school programs.
• Access and use resources and guidance from local health departments and the state agency that administers child nutrition programs.
• Promote, disseminate, and communicate the food allergy policies to school staff, families, and the community.
• Know and educate others about federal and state laws, including regulations, and policies on food allergies and the need to follow these laws and policies, including those regulations that pertain to the U.S. Department of Agriculture's (USDA's) Child Nutrition Programs.
• Ensure that food service staff understand USDA's required doctor's statement as written and that the statement provides sufficient information to prepare a safe meal.
• Help establish school-level procedures and plans for monitoring students with food allergies, including plans for accommodating the special nutritional needs of individual students when necessary.
• Coordinate with other district staff, including the student support services director, curriculum coordinator, and health services director.
• On a regular schedule, review and evaluate the school district's food allergy policies and practices and revise as needed.
# Ensure the daily management of food allergies for individual students.
• Develop and implement procedures in each school for identifying students with food allergies in school cafeterias. Make sure that procedures governing access to personally identifiable information from education records are consistent with student rights under the Family Educational Rights and Privacy Act of 1974 (FERPA) and any other federal and state laws that protect the privacy or confidentiality of student information. (See Section 5 for more information about FERPA.)
• Work with the health services director, principals, and other school staff responsible for implementing FAMPPs to set up procedures for handling food allergies in the cafeteria. These plans should be consistent with the student's IHP, Section 504 plan, or, if appropriate, IEP, and USDA regulations on meals and food substitutions, as reflected in the USDA's Accommodating Children with Special Dietary Needs in the School Nutrition Programs. Procedures should be established for children who participate in school meals programs and those who bring food from home.
• Work with school teams responsible for developing ECPs for students with food allergies. For schools that participate in the USDA's Child Nutrition Programs, make sure that documents that list appropriate food substitutions for a student with a food allergy disability are signed by a licensed doctor. The doctor's statement must identify:
°°The child's food allergy.
°°An explanation of why the allergy restricts the child's diet.
°°The major life activity affected by the allergy.
°°The food or foods to be omitted from the child's diet and the foods or choices that can be substituted.
• Establish procedures for obtaining information to clarify food substitutions and other relevant medical information from a student's doctor as needed.
• Coordinate food substitutions for all schools with students who have food allergies, in consultation as necessary with each child's doctor, and manage the documentation of these activities. When possible, use foods that are already served in school meals or snacks to make appropriate substitutions.
• Provide oversight and tracking of each student's dietary plans, including tracking allergic reactions that occur during school meals. • Develop and implement policies and procedures to prevent allergic reactions and cross-contact during meal preparation and service. Communicate these policies and procedures to school food service staff. • Keep information about ingredients for all foods bought and served by school food service programs and keep labels of foods given to food-allergic children for at least 24 hours so that the labels can be reviewed if needed. • Be prepared to share information about ingredients in recipes and foods served by food service programs with parents. # Prepare for food allergy emergencies. • Help develop protocols for responding to food allergy emergencies that can guide practices in district schools. • Help the health services director communicate the appropriate ways to avoid exposure to food allergens and respond to food allergy emergencies to all staff members who are involved in managing a student's food allergy in the cafeteria. • Make sure that food service staff are able to respond to a food allergy emergency in the cafeteria and implement an ECP. • Review school emergency response plans to make sure they include the actions needed to respond to food allergy emergencies during school meals. • Help schools conduct periodic emergency response drills and practice how to handle a food allergy emergency. • Review data and information (e.g., when and where medication was administered) from incident reports on any food allergy reactions and assess the effect of the incident on affected students. Provide input to modify policies and practices as needed. # Support professional development on food allergies for staff. • Help educate district and school staff about food allergies so they are adequately trained, competent, and confident to perform assigned responsibilities to help students with food allergies and respond to an emergency. • Provide training opportunities for school food service staff to help them understand how to follow policies and procedures for preparing and serving safe meals and snacks for students with food allergies. • Make sure that school food service staff participate in district training on food allergies. • Make sure that all school staff understand their role in preventing and responding to emergencies in the school cafeteria. • Help school building leaders plan and provide food allergy training for staff, parents, and students. # Educate students and family members about food allergies. • Help the curriculum coordinator or health education coordinator integrate food allergy lessons, such as how to read food labels, into the district's health education curriculum. • Communicate with parents about any foods that might be served as part of school meals programs such as the School Breakfast Program or the Fresh Fruit and Vegetable Program. • Share information about options for food substitutions with the parents of students with food allergies. Schools are encouraged to make substitutions with foods that have already been bought, when possible. • Work with administrators, classroom teachers, and parent-teacher organizations to offer food allergy education to parents in schools. • Help school administrators communicate the policies and procedures used in food service programs to prevent food allergy reactions to parents through newsletters, announcements, and other methods. # Create and maintain a healthy and safe school environment. 
• Work collaboratively with district staff to help enforce policies that promote healthy physical environments.
• Work collaboratively with district health services staff, school principals, school food service staff, and others to help enforce policies that prohibit discrimination and bullying against students with food allergies.
• Provide guidance to school food service staff that helps them to meet the dietary needs of students with food allergies and protect their health during school meals, while guarding against practices that could result in alienation of or discrimination against these students.
See Section 6 for more resources and tools that might assist in managing food allergies and allergy-related emergencies in schools.
# Section 3. Putting Guidelines into Practice: Actions for School Administrators and Staff
Effective management of food allergies in schools requires the participation of many people. This section presents the actions that school building administrators and staff can take to implement the recommendations in Section 1. Some actions duplicate responsibilities required under applicable federal and state laws, including regulations, and policies. Although many of the actions presented here are not required by statute, they can contribute to better management of food allergies in schools.
Some actions are intentionally repeated for different staff positions to ensure that critical actions are addressed even if a particular position does not exist in the district or school (e.g., school doctor). This duplication also reinforces the need for different staff members to work together to manage food allergies effectively. All actions are important, but some will have a greater effect than others. Some actions may be most appropriately carried out by district-level staff members whose roles are to support food allergy management plans and practices across schools or to provide specific services to schools that do not have an on-site staff person to provide these services. Ultimately, each school district or school must determine which actions are most practical and necessary to implement and who should be responsible for those actions.
# School Administrator
The school administrator can be a principal or assistant principal.
# Lead the school's coordinated approach to managing food allergies.
• Coordinate planning and implementation of a comprehensive Food Allergy Management and Prevention Plan (FAMPP) for your school. If your school has an on-site registered nurse, work with this person and the members of any relevant team, such as the school wellness team, school health team, or school improvement team, to plan and implement the FAMPP. Designate a qualified person (e.g., the registered nurse) to lead development of the FAMPP and designate responsibilities for implementing the plan as appropriate. If your school does not have an on-site nurse, ask for help from a registered nurse at the district level or from a public health nurse in the community.
• Make sure staff understand the school's responsibilities under Section 504 of the Rehabilitation Act of 1973, the Americans with Disabilities Act (ADA), the Individuals with Disabilities Education Act (IDEA), and the Richard B. Russell National School Lunch Act to students who are or may be eligible for services under those laws. Make sure they understand the need to comply with the Family Educational Rights and Privacy Act of 1974 (FERPA) and any other federal and state laws that protect the privacy of student information.
(See Section 5 for information about applicable federal laws.)
• Communicate school district policies and the school's practices for managing food allergies to all school staff, substitute teachers, classroom volunteers, and families.
• Make sure staff implement school district policies for managing food allergies.
• Help staff implement the school's FAMPP.
• On a regular basis, review and evaluate your school's FAMPP and revise as needed.
# Ensure the daily management of food allergies for individual students.
• Make sure that mechanisms, such as health forms, registration forms, and parent interviews, are in place to identify students with food allergies.
• If your school does not have an on-site registered nurse, work with the parents of children with food allergies and their doctor to develop a written Emergency Care Plan (ECP) (sometimes called a Food Allergy Action Plan). This plan is needed to manage and monitor students with food allergies on a daily basis, whether they are at school or at school-sponsored events. If a student has been determined to be eligible for services under Section 504 or, if appropriate, IDEA, make sure that all provisions of these federal laws are met.
• Share information about students with food allergies with all staff members who need to know, provided the exchange of information occurs in accordance with FERPA and any other federal and state laws that protect the confidentiality or privacy of student information. (See Section 5 for more information about FERPA.) Make sure these staff members are aware of what actions are needed to manage each student's food allergy on a daily basis.
# Prepare for and respond to food allergy emergencies.
• Make sure that responding to life-threatening food allergy reactions is part of the school's "all-hazards" approach to emergency planning.
• Make sure that parents of students with food allergies provide epinephrine auto-injectors to use in food allergy emergencies, if their use is called for in a student's ECP.
• Set up communication systems that are easy to use for staff who need to respond to food allergy reactions and emergencies.
• Make sure that staff who are delegated and trained to administer epinephrine auto-injectors can get to them quickly and easily.
• Make sure that local emergency responders know that epinephrine may be needed when they are called to respond to a school emergency.
• Prepare for food allergy reactions in students without a prior history of food allergies or anaphylaxis.
• Make sure that staff plan for the needs of students with food allergies during class field trips and during other extracurricular activities.
• Conduct periodic emergency response drills and practice how to handle a food allergy emergency.
• Contact parents immediately after any suspected allergic reaction and after a child with a food allergy ingests or has contact with a food that may contain an allergen, even if an allergic reaction does not occur. If the child may need treatment, recommend that the parents notify the child's primary health care provider or allergist.
• Document all responses to food allergy emergencies. Review data and information (e.g., when and where medication was used) from incident reports of food allergy emergencies and assess the effect on affected students. Provide input to modify your school district's emergency response policies and practices as needed.
# Support professional development on food allergies for staff.
• Make sure staff receive professional development and training on food allergies.
• Coordinate training with licensed health care professionals, such as school or district doctors or nurses or local health department staff, and with other essential school or district professionals, such as the district's food service director, if appropriate. Invite parents of students with food allergies to help develop the content for this training.
# Educate students and family members about food allergies.
• Make sure that the school's curricular offerings include information about food allergies to raise awareness among students.
• Communicate the school's responsibilities, expectations, and practices for managing food allergies to all parents through newsletters, announcements, and other methods.
# Create and maintain a healthy and safe school environment.
• Increase awareness of food allergies throughout the school environment.
• Emphasize and support practices that protect and promote the health of students with food allergies across the school environment, during before- and after-school activities, and during transportation of students.
• Make sure that students with food allergies have an equal opportunity to participate in all school activities and events.
• Make sure that food allergy policies and practices address competitive foods, such as those available in vending machines, in school stores, fundraisers, during class parties, at athletic events, and during after-school programs.
• Reinforce the school's rules that prohibit discrimination and bullying as they relate to students with food allergies.
# Registered School Nurses
# Participate in the school's coordinated approach to managing food allergies.
• Take the lead in planning and implementing the school's FAMPP or help the school administrator with this task.
• Support partnerships among school staff and the parents and doctors (e.g., pediatricians or allergists) of students with food allergies.
• Consult state and local Nurse Practice Acts and guidelines to guide the roles and responsibilities of school nurses.
# Supervise the daily management of food allergies for individual students.
• Make sure that students with food allergies are identified. Share information with other staff members as needed, provided the exchange of information occurs in accordance with FERPA and any other federal and state laws that protect the confidentiality or privacy of student information.
• Obtain or develop an ECP for each student with a food allergy or food allergy disability. Get the medical information needed to care for children with food allergies when they are at school, such as medical records and emergency information. Communicate with parents and health care providers (with parental consent) about known food allergies, signs of allergic reactions, relevant use of medications, complicating conditions, and other relevant health information.
• Make sure that USDA's required doctor's statement is completed and provides clear information to assist in the preparation of a safe meal accommodation. This statement can be part of an ECP or a separate document.
• Use a team approach to develop an Individualized Healthcare Plan (IHP) for each student with a food allergy and, if required by federal law, a Section 504 plan or, if appropriate, an Individualized Education Program (IEP).
• Monitor each student's ECP or other relevant plan on a regular basis and modify plans when needed.
• Refer parents of children who do not have access to health care to services in the community.
• For students who have permission to carry and use their own epinephrine auto-injectors, regularly assess their ability to perform these tasks.
# Prepare for and respond to food allergy emergencies.
• Develop instructions for responding to an emergency if a school nurse is not immediately available. Add these instructions to the school's FAMPP.
• File ECPs in a place where staff can get to them easily in an emergency. Distribute ECPs to staff on a need-to-know basis.
• Make sure that the administration of an epinephrine auto-injector follows school policies and state mandates. Make sure that medications are kept in a secure place that staff can get to quickly and easily. Keep back-up epinephrine auto-injectors for students who carry their own. Regularly inspect the expiration date on all stored epinephrine auto-injectors.
• Train and supervise delegated staff members on how to administer an epinephrine auto-injector and how to recognize the signs and symptoms of food allergy reactions and anaphylaxis.
• If allowed by state and local laws, work with school leaders to get extra epinephrine auto-injectors or nonpatient-specific prescriptions or standing orders for auto-injectors to keep at school for use by staff delegated and trained to administer epinephrine in an anaphylaxis emergency.
• Assess whether students can reliably carry and use their own epinephrine auto-injectors and encourage self-directed care when appropriate.
• Make sure that school emergency plans include procedures for responding to any student who experiences signs of anaphylaxis, whether the student has been identified as having a food allergy or not.
• Make sure that staff plan for the needs of students with food allergies during class field trips and during other extracurricular activities.
• Contact parents immediately after any suspected allergic reaction and after a child with a food allergy ingests or has contact with a food that may contain an allergen, even if an allergic reaction does not occur. If the child may need treatment, recommend that the parents notify the child's primary health care provider or allergist.
• After each food allergy emergency, review how it was handled with the school administrator, school doctor or nurse (if applicable), parents, staff members involved in the response, emergency medical services (EMS) responders, and the student to identify ways to prevent future emergencies and improve emergency response.
• Help students with food allergies transition back to school after an emergency.
• Talk with students who may have witnessed a life-threatening allergic reaction in a way that does not violate the privacy rights of the student with the food allergy.
# Help provide professional development on food allergies for staff.
• Stay up-to-date on best practices for managing food allergies. Sources for this information include allergists or other doctors who are treating students with food allergies, local health department staff, national school nursing resources, and the district's food service director or registered dietitian.
• Educate teachers and other school staff about food allergies and the needs of specific students with food allergies in a manner consistent with FERPA, USDA, and any other federal and state laws that protect the privacy or confidentiality of student information. (See Section 5 for more information about FERPA.)
• Advise staff to refer students to the school nurse when food allergy symptoms or side effects interfere with school activities so that medical and educational services can be properly coordinated.
# Provide food allergy education to students and parents.
• Teach students with food allergies about food allergies and help them develop self-management skills.
• Make sure that students who are able to manage their own food allergies know how to recognize the signs and symptoms of their own allergic reactions, are capable of using an epinephrine auto-injector, and know how to notify an adult who can respond to a food allergy reaction.
• Help classroom teachers add food allergy lessons to their health and education curricula.
• Find ways for the parents of students with food allergies to share their knowledge and experience with other parents.
• Work with administrators, classroom teachers, and parent-teacher organizations to offer food allergy education for parents at school.
• Help the school administrator communicate the school's policies and practices for preventing food allergy reactions to parents through newsletters, announcements, and other methods.
# Create and maintain a healthy and safe school environment.
• Work with other school staff and parents to create a safe environment for students with food allergies. On a regular basis, assess the school environment, including the cafeteria and classrooms, to identify allergens in the environment that could lead to allergic reactions. Work with appropriate staff to develop strategies to help children avoid identified allergens.
• Make sure that food allergy policies and practices address competitive foods, such as those available in vending machines, in school stores, fundraisers, during class parties, at athletic events, and during after-school programs.
• Work with school counselors and other school staff to provide emotional support to students with food allergies.
• Promote an environment that encourages students with food allergies to tell a staff member if they are bullied because of their allergy.
# School Doctors
A school doctor works full-time or part-time to provide consultation and a wide range of health services to the school population.
# Participate in the school's coordinated approach to managing food allergies.
• Lead or help plan and implement the school's FAMPP.
• Support partnerships among school staff and the parents and doctors (e.g., allergists, pediatricians) of students with food allergies.
• Keep current on federal, state, and local guidance on food allergy management.
• Consult state and local Nurse Practice Acts to make sure the roles and responsibilities of school nurses are appropriate.
• Guide and support the food allergy management practices of school nursing staff.
• Help evaluate school FAMPPs.
# Ensure the daily management of food allergies for individual students.
• Help the school nurse perform the actions necessary to manage students with food allergies on a daily basis. (See the items under Action 2 for Registered School Nurses.)
# Prepare for and respond to food allergy emergencies.
• Help the school nurse make sure that all students with food allergies have an ECP.
• If allowed by state and local laws, write prescriptions or standing orders for nonpatient-specific epinephrine auto-injectors so the school can stock back-up medication for use in food allergy emergencies.
• Help the school nurse assess whether students can reliably carry and use their own epinephrine auto-injector and encourage self-directed care when appropriate.
• Help the school nurse train staff on how to use epinephrine auto-injectors and how to recognize the signs and symptoms of food allergy reactions and anaphylaxis.
• Help the school nurse and health assistants regularly inspect the expiration date on all stored epinephrine auto-injectors.
• Make sure that school emergency plans include procedures for responding to any student who experiences signs of anaphylaxis, whether diagnosed with a food allergy or not.
• Make sure that staff plan for the needs of students with food allergies during class field trips and during other extracurricular activities.
• Contact parents immediately after any suspected allergic reaction and after a child with a food allergy ingests or has contact with a food that may contain an allergen, even if an allergic reaction does not occur. If the child may need treatment, recommend that the parents notify the child's primary health care provider or allergist.
• After each food allergy emergency, review how it was handled with the school administrator, school nurse, parents, staff members involved in the response, EMS responders, and the student to identify ways to prevent future emergencies and improve emergency response.
# Help provide professional development on food allergies for staff.
• Share current and relevant knowledge of best practices for managing food allergies with school leaders (e.g., school administrator, school nurse).
• Help educate teachers and other school staff about food allergies and the needs of specific students with food allergies, in a manner consistent with FERPA, USDA, and any other federal and state laws that protect the privacy or confidentiality of student information. (See Section 5 for more information about FERPA.)
• Advise staff to refer students to the school doctor or nurse when symptoms or side effects of a food allergy interfere with school activities so that medical and educational services can be coordinated.
# Provide food allergy education to students and parents.
• Help teach students with food allergies about food allergies and help them develop self-management skills.
• Help the school nurse make sure that students who are able to manage their food allergies know how to recognize the signs and symptoms of their own allergic reactions, are capable of using an epinephrine auto-injector, and know how to notify an adult who can respond to a food allergy reaction.
• Help classroom teachers add food allergy lessons to their health and education curricula.
• Help find ways for parents of students with food allergies to share their knowledge and experience with other parents.
• Work with administrators, the school nurse, classroom teachers, and parent-teacher organizations to offer food allergy education for parents at school.
• Help the school administrator communicate the school's policies and practices for preventing food allergy reactions to parents through newsletters, announcements, and other methods.
# Create and maintain a healthy and safe school environment.
• Work with other school staff and parents to create a safe environment for students with food allergies.
• On a regular basis, assess the school environment, including the cafeteria and classrooms, to identify allergens in the environment that could lead to allergic reactions. Work with appropriate staff to manage identified allergens.
• Make sure that food allergy policies and practices address competitive foods, such as those available in vending machines, in school stores, fundraisers, during class parties, at athletic events, and during after-school programs. • Work with school counselors, the school nurse, and other school staff to provide emotional support to students with food allergies. • Promote an environment that encourages students with food allergies to tell a staff member if they are bullied or harassed because of their allergy. # Health Assistants, Health Aides, or Other Unlicensed Personnel These staff members work with the school or district nurse or doctor. # Help with the daily management of food allergies for individual students. • Help the school nurse identify students with food allergies. Review the medical records and emergency information of all students. • Talk with the school nurse about any allergic reactions and changes in a student's health status. # Prepare for and respond to food allergy emergencies. • Get a copy of the ECP for every student with food allergies. Make sure the plan includes information about signs and symptoms of an allergic reaction, how to respond, and whether medications should be given. • File ECPs in a place where staff can get to them easily in an emergency. • Be ready to respond to a food allergy emergency if a nurse is not immediately available. If school policies and state mandates allow you to give medication and you are delegated to perform this task, complete training on how to administer epinephrine, regularly review instructions, and practice this task. Make sure that medications are kept in a secure place that you or other delegated staff members can get to quickly and easily. Regularly inspect the expiration date on all stored epinephrine auto-injectors. • After each food allergy emergency, participate in a review of how it was handled with the school administrator, school doctor (if applicable), school nurse, parents, staff members involved in the response, EMS responders, and the student to identify ways to prevent future emergencies and improve emergency response. # Participate in professional development on food allergies. • Complete training to help you recognize and understand the following: °°Signs and symptoms of allergic reactions and how they are communicated by students. °°How to read food labels and identify allergens. °°How to use an epinephrine auto-injector (if delegated and trained to do so). °°How to deal with emergencies in the school in ways that are consistent with a student's ECP. °°Your role in implementing a student's ECP. °°When and how to call EMS and parents. °°How FERPA, USDA, and other federal and state laws that protect the privacy and confidentiality of student information apply to students with food allergies and food allergy disabilities. °°General strategies for reducing or preventing exposure to food allergens in the classroom, such as cleaning surfaces, using nonfood items for celebrations, and getting rid of nonfood items that contain food allergens (e.g., clay, paste). °°Policies on bullying and discrimination against all students, including those with food allergies. # Provide food allergy education to students and parents. • Get help from the school counselor or other mental health professionals to teach students about bullying of and discrimination against students with food allergies. • Help communicate policies on bullying and discrimination to parents. # Create and maintain a healthy and safe school environment. 
• Work with other school staff and parents to create a safe environment for students with food allergies.
• Promote an environment that encourages support for students with food allergies and promotes positive interactions between students.
• Report all cases of bullying against students, including those with food allergies, to the school administrator, school nurse, or school counselor.
# Classroom Teachers
This category includes classroom teachers in all basic subjects, as well as physical education teachers, instructional specialists such as music or art teachers, paraeducators, student teachers, long-term substitute teachers, classroom aides, and classroom volunteers.
# Participate in the school's coordinated approach to managing food allergies.
• Ask the school nurse or school administrator for information on current policies and practices for managing students with food allergies, including how to manage medications and respond to a food allergy reaction.
• Help plan and implement the school's FAMPP.
# Help with the daily management of food allergies for individual students.
• Make sure you understand the essential actions that you need to take to help manage food allergies when students with food allergies are under your supervision, including when meals or snacks are served in the classroom, on field trips, or during extracurricular activities. Seek guidance and help from the school administrator, school nurse, or school food service director as needed.
• Be available and willing to help students who manage their own food allergies.
• Work with parents and the school nurse and other appropriate school personnel to determine if any classroom modifications are needed to make sure that students with food allergies can participate fully in class activities.
• With parental consent, share information and responsibilities with substitute teachers and other adults who regularly help in the classroom (e.g., paraeducators, volunteers, instructional specialists). (Depending on a school district's FERPA notice as to which individuals would constitute school officials with legitimate educational interests, FERPA may not require parental consent in these circumstances. FERPA also includes an emergency exception to the prior consent requirement if there is an articulable and significant threat to the health or safety of the student or others. See Section 5 for more information about FERPA.)
• Refer students with undiagnosed but suspected food allergies to the school nurse for follow-up.
• If your school does not have a nurse on-site, talk with parents about the signs and symptoms you have seen and recommend that they discuss them with their primary health care provider.
• If you suspect a severe food allergy reaction or anaphylaxis, take immediate action, consistent with your school's FAMPP or "all-hazards" emergency response protocol.
# Prepare for and respond to food allergy emergencies.
• Read and regularly review each student's ECP. Never hesitate to activate the plan in an emergency. If you are delegated and trained according to state laws, including regulations, be ready to use an epinephrine auto-injector.
• Keep copies of ECPs for your students in a secure place that you can get to easily in an emergency. With parental consent, share information from the ECP with substitute teachers and other adults who regularly help in the classroom to help them know how to respond to a food allergy emergency.
(Depending on a school district's FERPA notice as to which individuals would constitute school officials with legitimate educational interests, FERPA may not require parental consent in these circumstances. FERPA also includes an emergency exception to the prior consent requirement if there is an articulable and significant threat to the health or safety of the student or others. See Section 5 for more information about FERPA.)
• Support and help students who have permission to carry and use their own epinephrine in cases of an allergic reaction.
• Make sure that the needs of students with food allergies are met during class field trips and during other extracurricular activities.
• Immediately contact the school administrator and, if available, the school nurse after any suspected allergic reaction.
• After each food allergy emergency, review how it was handled with the school administrator, school nurse, parents, other staff members involved in the response, EMS responders, and the student to identify ways to prevent future emergencies and improve emergency response.
• Help students with food allergies transition back to school after an emergency.
• Address concerns with students who witness a life-threatening allergic reaction in a way that does not compromise the confidentiality rights of the student with the allergy.
# Participate in professional development on food allergies.
• Complete training to help you recognize and understand the following:
°°Signs and symptoms of food allergies and how they are manifested in and communicated by students.
°°How to read food labels and identify allergens.
°°How to use an epinephrine auto-injector (if delegated and trained to do so).
°°How to respond to food allergy emergencies in ways that are consistent with a student's ECP and, if appropriate, a Section 504 plan or IEP.
°°When and how to call EMS and parents.
°°Your role in implementing a student's ECP.
°°FERPA, USDA, and other federal and state laws that protect the privacy or confidentiality of student information, and other legal rights of students with food allergies. (See Section 5 for more information about federal laws.)
°°General strategies for reducing or preventing exposure to food allergens in the classroom, such as cleaning surfaces, using nonfood items for celebrations, getting rid of nonfood materials that contain food allergens (e.g., clay, paste), and preventing cross-contact of allergens when meals or snacks are served in the classroom.
°°Policies that prohibit discrimination and bullying against all students, including those with food allergies.
# Provide food allergy education to students and parents.
• Look for ways to add information about food allergies to your curriculum. Work with other teachers to plan lessons and activities to teach students how they can prevent allergic reactions.
• Work with the school nurse to educate parents about the presence and needs of students with food allergies in the classroom. Raise awareness and educate the parents of children without food allergies about "food rules" for the classroom. Ask parents to help you keep certain foods out of the classroom during meals, celebrations, and other activities that might include food.
• Ask the school counselor or other mental health professionals for help or resources to teach students about policies that prohibit discrimination and bullying against all students, including those with food allergies.
• Communicate policies on bullying and discrimination to all parents.
# Create and maintain a healthy and safe school environment. • Promote a safe physical environment through the following actions: °°Create classroom rules and practices for dealing with food allergies. Tell parents about these rules and practices at the beginning of the school year or when you find out that a student with a food allergy will be in your class. °°Create ways for students with food allergies to participate in all class activities. °°Avoid using known allergens in classroom activities, such as arts and crafts, counting, science projects, parties, holidays and celebrations, or cooking. °°Enforce hand washing before and after eating, particularly for younger students. °°Use nonfood items for rewards or incentives. °°Encourage the use of allergen-safe foods or nonfood items for birthday parties or other celebrations in the classroom. Support parents of students with food allergies who wish to send allergen-safe snacks for their children. °°Discourage trading or sharing of food with a student with a food allergy in the classroom, particularly for younger students. °°Enforce food allergy prevention practices while supervising students in the cafeteria. • Manage food allergies on field trips through the following actions: °°Determine if the intended location is safe for students with food allergies. If it is not safe, the field trip might have to be changed or cancelled if accommodations cannot be made. Students cannot be excluded from field trips because of food allergies. °°Invite the parents of students with food allergies to chaperone or go with their child on the field trip. Many parents may want to go, but they cannot be required to go. °°Work with school food service staff to plan meals and snacks. °°Make sure you include someone who is delegated and trained to administer epinephrine, that you have quick access to an epinephrine auto-injector, and that you know where the nearest medical facilities are located. If a food allergy emergency occurs, activate the student's ECP and notify the parents. °°Make sure there are appropriate emergency protocols and mechanisms in place to respond to a food allergy emergency when away from the school. °°Make sure that communication devices are working so you can respond quickly during an emergency. • Promote a positive psychosocial climate through the following actions: °°Be a role model by respecting the needs of students with food allergies. °°Help students make decisions about and manage their own food allergies. °°Encourage supportive and positive interactions between students. °°Reinforce the school's rules against discrimination and bullying. °°Take action to address all reports of bullying or harassment of a student with a food allergy. °°Tell parents if their child has been bullied, and report all cases of bullying to the school administrator. °°Tell parents and the school nurse if you see negative changes in a student's academic performance or behavior. # School Food Service Managers and Staff 1. Participate in the school's coordinated approach to managing food allergies. • Use resources and guidance from the district food service director, local board of health, USDA, and dietitians to reduce exposure to food allergens. • Help plan and implement the school's FAMPP. Make sure that it includes specific practices for managing food allergens in school meals served inside and outside of the cafeteria. # Help with the daily management of food allergies for individual students. 
• Identify students with food allergies in a way that does not compromise students' privacy or confidentiality rights.
• Make sure you have and understand dietary orders, or the doctor's statement, and other relevant medical information that you need to make meal accommodations for students with food allergies and food allergy disabilities.
• Consult with the district food service director to help develop individual dietary and cafeteria management plans for each student with a food allergy and food allergy disability. These plans should be consistent with the student's IHP, and, if the student has a food allergy disability, the student's Section 504 plan, or, if appropriate, IEP, and USDA regulations on meals and food substitutions, as reflected in the USDA's Accommodating Children with Special Dietary Needs in the School Nutrition Programs.
• Help communicate appropriate actions to avoid allergic reactions and respond to food allergy emergencies to all staff members and food service staff who are expected to help manage a student's food allergy in the cafeteria.
• Follow policies and procedures to prevent allergic reactions and cross-contact of potential food allergens during food preparation and service.
• Understand how to read labels to identify allergens in foods and beverages served in school meals. Work with the school food service director, the district food service director, or the food manufacturer if additional information or clarification is needed on the product's ingredients.
• Manage food substitutions for students with food allergies and food allergy disabilities and manage the documentation of these activities. Work with the school administrator or school nurse and the district food service director to make sure that the information needed to meet USDA and state regulations for food service is documented as required.
• Be prepared to share information about ingredients in recipes and foods served by the school food service program with parents.
# Prepare for and respond to food allergy emergencies.
• Be familiar with students' ECPs and the doctor's statement required by USDA, and know what actions must be taken if a food allergy emergency occurs in the cafeteria. Make sure that food service staff are able to respond to a food allergy emergency in the cafeteria and implement an ECP.
• If you are delegated and trained according to state laws, including regulations, be ready to use an epinephrine auto-injector.
• If appropriate and allowed by state laws, including regulations, school policy, and the school's FAMPP, keep an epinephrine auto-injector in a secure place in the cafeteria that you can get to quickly and easily.
• Provide support and help to students who carry and use their own medication.
• After each food allergy emergency, participate in a review of how it was handled with the school administrator, school doctor (if applicable), school nurse, parents, staff members involved in the response, EMS responders, and the student to identify ways to prevent future emergencies and improve emergency response.
# Participate in professional development on food allergies.
• Complete training to help you recognize and understand the following:
°°Signs and symptoms of food allergies and how they are communicated by students.
°°How to read food labels and identify allergens.
°°How to plan meals for students with food allergies and food allergy disabilities and how to prevent cross-contact of allergens. Consult with the district food service director when necessary.
°°How to deal with emergencies in the school in ways that are consistent with a student's ECP.
°°The role of the food service manager and staff in implementing a child's doctor's statement, as required by USDA, and the child's ECP, if applicable.
°°How to use an epinephrine auto-injector (if delegated and trained to do so).
°°FERPA, USDA, and other federal and state laws that protect the privacy or confidentiality of student information and other legal rights of students with food allergies. (See Section 5 for more information about federal laws.)
°°General strategies for reducing or preventing allergic reactions in the cafeteria.
°°Policies on bullying and discrimination against all students, including those with food allergies.
# Provide food allergy education to students and parents.
• Help classroom teachers add food allergy lessons to their health and education curricula, including teaching students how to read food labels.
• Share menu ideas with parents of students with food allergies to identify potential allergens and improve healthy eating.
• Find ways for parents of students with food allergies to share their knowledge and experience with other parents.
• Help the school administrator communicate the policies and practices used by the food service staff to prevent food allergy reactions to parents through newsletters, announcements, and other methods.
# Create and maintain a healthy and safe school environment.
• Reduce the potential for allergic reactions through the following actions:
°°Be able to recognize students with food allergies and food allergy disabilities in the cafeteria.
°°Follow procedures for handling food allergies in the cafeteria, even if a student is not participating in school meals programs under USDA's Child Nutrition Programs.
°°Read food labels to identify allergens.
°°Follow policies and procedures to prevent cross-contact of potential food allergens during food preparation and service.
°°Make sure that food allergy policies and practices address competitive foods, such as those available in vending machines, in school stores, fundraisers, during class parties, at athletic events, and during after-school programs.
• Promote a positive psychosocial climate in the cafeteria through the following actions:
°°Encourage supportive and positive interactions between students.
°°Reinforce the school's rules against bullying and discrimination.
°°Take action to address all reports of bullying or harassment of a student with a food allergy.
°°Report all cases of bullying and harassment against students, including those with food allergies, to the school administrator, school nurse, or school counselor.
# School Counselors and Other Mental Health Services Staff
This category includes school psychologists and school social workers.
1. Participate in the school's coordinated approach to managing food allergies.
• Help plan and implement the school's FAMPP.
# Help with the daily management of food allergies for individual students.
• Address immediate and long-term mental health problems, such as anxiety, depression, low self-esteem, negative behavior, or eating disorders, among students with food allergies.
• Address adolescent oppositional behavior, such as noncompliance with IHPs.
• Make referrals to mental health services and professionals outside the school for students who need them, consistent with applicable requirements of Section 504 and IDEA, if appropriate.
• Work with school health service staff (e.g., school doctor, school nurse) to develop consistent protocols for referrals.
• Read and regularly review each student's ECP. Never hesitate to activate the plan in an emergency. If you are the person delegated and trained according to state laws, including regulations, be ready to use an epinephrine auto-injector if needed.
• After each food allergy emergency, participate in a review of how it was handled with the school administrator, school doctor (if applicable), school nurse, parents, staff members involved in the response, EMS responders, and the student to identify ways to prevent future allergic reactions and to improve emergency response.
• Help students with food allergies transition back to school after an emergency.
• Be prepared to respond to the emotional needs of students who witness a life-threatening allergic reaction in a way that does not compromise the students' privacy or confidentiality rights.
# Participate in professional development on food allergies.
• Work with the school or district nurse and other health professionals to support training and education for staff on the mental and emotional health issues faced by a student with food allergies.
• Complete training to help you recognize and understand the following:
°°Signs and symptoms of food allergies and how they are communicated by students.
°°How to read food labels and identify allergens.
°°How to use an epinephrine auto-injector (if delegated and trained to do so).
°°How to deal with emergencies in the school in ways that are consistent with a student's ECP.
°°Your role in implementing a child's ECP.
°°FERPA, USDA, and other federal and state laws that protect the privacy or confidentiality of student information and other legal rights of students with food allergies. (See Section 5 for more information about federal laws.)
°°Policies that prohibit discrimination and bullying against students with food allergies.
# Provide food allergy education to students and parents.
• Work with classroom teachers and other school staff to educate parents and students about bullying and discrimination against students with food allergies.
# Create and maintain a healthy and safe school environment.
• Encourage staff to support a broad range of school-based mental health promotion efforts that support all students, promote positive interactions between students, build a positive school climate, encourage diversity and acceptance, discourage bullying, and promote student independence.
• Reinforce the school's rules against bullying and discrimination.
• Take action to address all reports of bullying or harassment of a student with a food allergy.
• Tell parents if their child has been bullied, and report all cases of bullying to school administrators.
# Bus Drivers and School Transportation Staff
1. Participate in the school's coordinated approach to managing food allergies.
• Ask the school nurse or school administrator for information on current policies and practices for managing students with food allergies, including how to manage medications and respond to a food allergy reaction.
• Support the school's FAMPP.
# Help with the daily management of food allergies for individual students.
• Be aware of students with food allergies and know how to respond to an allergic reaction if it occurs while the student is being transported to or from school.
• Enforce district food policies for all students riding a school bus.
# Prepare for and respond to food allergy emergencies.
• Read and regularly review the ECP for any student riding to and from school on a bus. Never hesitate to activate the plan in an emergency. If you are the person delegated and trained according to state laws, including regulations, be ready to use an epinephrine auto-injector if needed.
• Know procedures for communicating an emergency during the transporting of children to and from school. Make sure that other adults on the bus are aware of emergency communication protocol.
• Make sure communication devices are working so you can reach school officials, EMS, and others during a food allergy emergency.
• Call 911 or EMS to ask for emergency transportation of any student exhibiting signs of anaphylaxis. Notify the school administrator of your actions and the need for someone to contact the student's parents.
• After any food allergy emergency that occurs while a student is being transported to or from school, participate in a review of how it was handled with the school administrator, school doctor (if applicable), school nurse, parents, staff members involved in the response, EMS responders, and the student to identify ways to prevent future allergic reactions and improve emergency response.
# Participate in professional development on food allergies.
• Complete training to help you recognize and understand the following:
°°Signs and symptoms of food allergies and how they are communicated by students.
°°How to respond to a food allergy emergency while transporting children to and from school.
°°How to use an epinephrine auto-injector (if delegated and trained to do so).
°°How to deal with emergencies in a way that is consistent with a student's ECP or transportation emergency protocol.
°°Your role in implementing a child's ECP.
°°FERPA, USDA, and other federal and state laws that protect the privacy or confidentiality of student information and other legal rights of students with food allergies. (See Section 5 for more information about federal laws.)
°°Policies that prohibit discrimination and bullying against all students, including those with food allergies.
# Create a healthy and safe environment.
• Advocate for two-way communication systems between schools and transportation vehicles that are kept in working order.
• Enforce district food policies for all students riding a school bus.
• Encourage supportive and positive interactions between students.
• Reinforce the school's rules against discrimination and bullying.
• Report all cases of bullying or harassment of students, including those with food allergies, to the school administrator.
# Facilities and Maintenance Staff
This category includes custodial staff.
# Participate in the school's coordinated approach to managing food allergies.
• Help plan and implement the school's FAMPP.
# Help with the daily management of food allergies for individual students.
• Be aware of students with food allergies and know how to respond to an allergic reaction if it occurs while the student is at school.
• Help create a safe and healthy environment to prevent allergic reactions.
# Prepare for and respond to food allergy emergencies.
• Activate your school's "all-hazards" emergency response practices if a student displays signs or symptoms of an allergic reaction.
• Know and understand your school's communication protocols for an emergency.
• Make sure communication devices are working.
• After each food allergy emergency, participate in a review of how it was handled with the school administrator, school doctor (if applicable), school nurse, parents, staff members involved in the response, EMS responders, and the student to identify ways to prevent future allergic reactions and improve emergency response.
# Participate in professional development on food allergies.
• Complete training to help you recognize and understand the following:
°°Signs and symptoms of food allergies and how they are communicated by students.
°°How to respond to emergencies at the school.
°°Your role in supporting a child's ECP.
°°Policies that prohibit discrimination and bullying against all students, including those with food allergies.
°°Policies and standards for washing hands and cleaning surfaces to reduce food allergens.
# Create and maintain a healthy and safe environment.
• Promote a safe and healthy physical environment through the following actions:
°°Advocate for two-way communication systems throughout school buildings that are kept in working order.
°°Enforce district food policies.
°°Clean floors, surfaces, and food-handling areas with approved soap and water or all-purpose cleaning products.
• Promote a positive psychosocial climate through the following actions:
°°Encourage supportive and positive interactions between students.
°°Reinforce the school's rules against discrimination and bullying.
°°Report all cases of bullying or harassment of students, including those with food allergies, to the school administrator.
See Section 6 for more resources and tools that might assist in managing food allergies and allergy-related emergencies in schools.
# Section 4. Putting Guidelines into Practice: Actions for Early Care and Education Administrators and Staff
Effective management of food allergies in early care and education (ECE) programs requires the participation of many people. This section presents the actions that ECE program staff can take to implement the recommendations in Section 1. Some actions duplicate responsibilities required under applicable federal and state laws (including regulations) and policies. Although many responsibilities presented here are not required by statute, they can contribute to better management of food allergies in ECE programs. If the ECE program participates in USDA's Child Nutrition Programs, the ECE program must follow USDA statutes, regulations, and guidance for providing meal accommodations for children with food allergy disabilities.
Some actions are intentionally repeated for different staff positions to ensure that critical actions are addressed even if a particular position does not exist in the ECE program. This duplication also reinforces the need for different staff members to work together to manage food allergies effectively. All actions are important, but some will have a greater effect than others. Ultimately, each ECE program must determine which actions are most practical and necessary to implement and who should be responsible for those actions. Although these guidelines are specifically for licensed ECE programs, many of the recommendations can be used in unlicensed child care settings.
# Program Directors and Family Child Care Providers
# Lead the ECE program's coordinated approach to managing food allergies.
• Coordinate planning and implementation of a comprehensive Food Allergy Management and Prevention Plan (FAMPP). Work with staff, parents, food services, and the children's health care providers.
• Designate a qualified person (e.g., health manager, health consultant) to lead development of the program's FAMPP and designate responsibilities for implementing the plan as appropriate.
# Ensure the daily management of food allergies for individual children.
• Make sure that mechanisms (e.g., health forms, registration forms, USDA-required doctor's statements, and parent interviews) are in place to identify children with food allergies.
• Work with the parents of children with food allergies and the child's primary health care provider or allergist to obtain a written Emergency Care Plan (ECP) to manage and monitor children with food allergies on a daily basis.
• Share information about children with food allergies with all staff who need to know. Make sure they are aware of what actions are needed to manage each child's food allergy on a daily basis.
# Prepare for and respond to food allergy emergencies.
• Make sure that all ECPs include the following:
°°A doctor's statement addressing the meal accommodation needs of a particular child with a food allergy disability, as required for USDA's Child Nutrition Programs.
°°Written instructions about food(s) to which the child is allergic and steps that should be taken to avoid that food.
°°A detailed treatment plan to be implemented if an allergic reaction occurs. This plan should include the names and doses of medications and how they should be used. It should also include specific symptoms that would indicate the need to give one or more medications or take the child to an emergency medical facility.
• Make sure that parents of children with food allergies provide epinephrine auto-injectors to use in food allergy emergencies if their use is called for in the child's ECP.
• Make sure that medications are kept in a secure place and that staff who are delegated and trained to use epinephrine auto-injectors can get to them quickly and easily.
• Make sure that staff plan for the needs of children with food allergies during class field trips and during other extracurricular activities.
• Contact parents immediately after any suspected allergic reaction. You also should contact parents immediately after a child ingests a potential allergen or has contact with a potential allergen, even if an allergic reaction does not occur. If the child needed treatment, recommend that the parents notify the child's primary health care provider or allergist.
• If epinephrine is given, contact emergency medical services (EMS) and have the child transported to an emergency room by ambulance. Contact the parents to tell them the child's location and condition.
• Conduct periodic emergency response drills and practice how to handle a food allergy emergency.
• Be ready to respond to severe allergic reactions in children with no history of diagnosed food allergies or anaphylaxis.
• Review data and information (e.g., when and where medication was used) from incident reports of food allergy emergencies and assess the effect on affected children. Modify policies and practices as needed.
# Support professional development on food allergies for staff.
• Make sure staff receive professional development and training on food allergies.
• Make sure that training helps your program meet any applicable Head Start Program Performance Standards and Other Regulations.
• Coordinate training with licensed health care professionals.
• Invite parents of children with food allergies to participate in training for staff.
# Educate children and family members about food allergies.
• Communicate your program's responsibilities, expectations, and practices for managing food allergies to all parents through newsletters, announcements, and other methods.
# Create and maintain a healthy and safe ECE program environment.
• Increase awareness of food allergies and food allergy disabilities throughout the program environment.
• Make sure that children with food allergies have an equal opportunity to participate in all program activities and events.
# Child Care Providers, Preschool Teachers, Teaching Assistants, Volunteers, Aides, and Other Staff
# Participate in the ECE program's coordinated approach to managing food allergies.
• Help plan and implement the program's FAMPP.
# Help with the daily management of food allergies for individual children.
• Make sure all children with food allergies have an ECP. In programs that participate in USDA's Child Nutrition Programs, include a doctor's statement of disability.
• Make sure you understand the essential actions that you need to take to help manage food allergies and food allergy disabilities in children when they are under your supervision.
• Enforce hand washing practices and make sure tables and surfaces are cleaned before and after meals with approved soap and water or all-purpose cleaning products to reduce cross-contact of allergens.
• Work with parents to determine if any modifications are needed to make sure that children with food allergies can participate fully in all program activities.
# Prepare for and respond to food allergy emergencies.
• Make sure that all ECPs include the following:
°°A doctor's statement addressing the meal accommodation needs of a particular child with a food allergy disability, as required for USDA's Child Nutrition Programs.
°°Written instructions about food(s) to which the child is allergic and steps that should be taken to avoid that food.
°°A detailed treatment plan to be implemented if an allergic reaction occurs. This plan should include the names and doses of medications and how they should be used. It should also include specific symptoms that would indicate the need to give one or more medications or take the child to an emergency medical facility.
• Make sure that parents of children with food allergies provide epinephrine auto-injectors to use in food allergy emergencies if their use is called for in the child's ECP.
• Make sure that medications are kept in a secure place and that staff who are delegated and trained to use epinephrine auto-injectors can get to them quickly and easily.
• Never hesitate to activate a child's ECP in an emergency. If you are delegated and trained according to state laws, including regulations, be ready to use an epinephrine auto-injector.
• Keep copies of ECPs for children in your care in a secure place that you can get to quickly and easily in an emergency.
• Provide feedback on the child's ECP and participate in a debriefing meeting after a food allergy reaction or emergency.
• Contact parents immediately after any suspected allergic reaction. You also should contact parents immediately after a child ingests a potential allergen or has contact with a potential allergen, even if an allergic reaction does not occur. If the child needed treatment, recommend that the parents notify the child's primary health care provider or allergist.
• If epinephrine is given, contact EMS, tell them when epinephrine was administered, and have the child transported to an emergency room by ambulance. Contact the parents to tell them the child's location and condition.
• After each food allergy emergency, review how it was handled with the ECE program administrator, registered nurse, parents, staff members involved in the response, EMS responders, and the child to identify ways to prevent future emergencies and improve emergency response.
# Participate in professional development on food allergies.
• Complete training to help you recognize and understand the following:
°°Signs and symptoms of allergic reactions and how they are communicated by young children.
°°How to read food labels and identify allergens.
°°Your role in implementing a child's ECP.
°°How to use an epinephrine auto-injector (if delegated and trained to do so).
°°General strategies for reducing or preventing exposure to food allergens in the program setting and during field trips or other program-sponsored events.
°°Policies that prohibit discrimination and bullying against children with food allergies.
# Create and maintain a healthy and safe ECE program environment.
• Promote a safe physical environment through the following actions:
°°Create rules and practices for dealing with food allergies, including preventing exposure to allergens. Tell parents about these rules and practices each year or when you find out that a child with a food allergy will be in your care.
°°Create ways for children with food allergies to participate in all class activities.
°°Avoid using known allergens in program activities, such as arts and crafts, counting, science projects, parties, holidays and celebrations, or cooking.
°°Enforce hand washing before and after eating.
°°Clean tables and chairs before and after eating with approved soap and water or all-purpose cleaning products.
°°Use nonfood items for rewards or incentives.
°°Encourage the use of allergen-safe foods or nonfood items for birthday parties or other celebrations. Support parents of children with food allergies who wish to send allergen-safe snacks for their children.
°°Discourage trading or sharing of food.
• Manage food allergies on field trips through the following actions:
°°Determine if the intended location is safe for children with food allergies.
°°Make sure that field trips and other events are consistent with the program's food allergy policies.
°°Plan for meals and snacks.
°°Make sure you have quick access to an epinephrine auto-injector or other medications and that you know where the nearest medical facilities are located. If a food allergy emergency occurs, activate the child's ECP and notify the parents.
°°Make sure that a person who is certified in first aid and trained to use an epinephrine auto-injector is available.
• Promote a positive psychosocial climate through the following actions:
°°Be a role model by respecting the needs of children with food allergies.
°°Encourage supportive and positive interactions between children.
°°Take action to address all reports of bullying or harassment of a child with a food allergy.
°°Tell parents if you see negative changes in their child's behavior.
# Nutrition Services Staff
# Participate in the ECE program's coordinated approach to managing food allergies.
• Help plan and implement the program's FAMPP.
# Help with the daily management of food allergies for individual children.
• Read and regularly review each child's ECP. Make sure you understand the essential actions that you need to take to help manage food allergies in children during meals.
• Make sure you get the dietary orders and other relevant medical information that you need to accommodate children with food allergies.
• Document information about meal substitutions as outlined in each child's ECP. Make sure that the information needed to meet the U.S. Department of Agriculture's (USDA's) Child Nutrition Program regulations and state regulations is documented.
• Work with the state agency that administers USDA programs, the local health department, and dietitians in the community to get the information and resources you need to make sure your program is following all federal and state regulations and you are responding to a child's dietary requirements.
• Take the food allergies of the children in your program into account when you buy food and formula.
• Establish and follow policies and procedures to prevent allergic reactions and cross-contact of potential food allergens during food preparation and service.
# Prepare for and respond to food allergy emergencies.
• Make sure all ECPs include the following:
°°A doctor's statement addressing the meal accommodation needs of a particular child with a food allergy disability, as required for USDA's Child Nutrition Programs.
°°Written instructions about food(s) to which the child is allergic and steps that should be taken to avoid that food. If your program participates in the USDA's Child Nutrition Program, make sure you have the proper documentation to meet USDA and state regulations.
°°A detailed treatment plan to be implemented if an allergic reaction occurs. This plan should include the names and doses of medication and how they should be used.
• Make sure that medications are kept in a secure place and that staff who are delegated and trained to use epinephrine auto-injectors can get to them quickly and easily.
• Never hesitate to activate a child's ECP in an emergency. If you are delegated and trained according to state laws, including regulations, be ready to use an epinephrine auto-injector.
• Keep copies of ECPs for children in your care in a secure place that you can get to quickly and easily in an emergency.
• Provide feedback on the child's ECP and participate in a debriefing meeting after a food allergy reaction or emergency.
• Contact parents immediately after any suspected allergic reaction. You also should contact parents immediately after a child ingests a potential allergen or has contact with a potential allergen, even if an allergic reaction does not occur. If the child needed treatment, recommend that the parents notify the child's primary health care provider or allergist.
• If epinephrine is given, contact EMS, tell them when epinephrine was administered, and have the child transported to an emergency room by ambulance. Contact the parents to tell them the child's location and condition.
• Make sure that a food service staff member who has been trained to respond to a food allergy reaction is available during all meals and snack times.
# Participate in professional development on food allergies.
• Complete training to help you recognize and understand the following:
°°Signs and symptoms of food allergies and how they are communicated by young children.
°°How to read food labels and identify food allergens.
°°How to use an epinephrine auto-injector (if delegated and trained to do so).
°°How to deal with emergencies in the ECE program setting in ways that are consistent with a child's ECP.
°°Legal rights of children with food allergies.
°°USDA's statutes, regulations, and guidance (for ECE programs participating in USDA's Child Nutrition Programs).
°°State and local laws and policies for food services and food safety.
°°General strategies for reducing or preventing exposure to food allergens in the kitchen or area where food is served.
°°The role of nutrition staff in implementing a child's ECP, including the specific duties outlined in Head Start Program Performance Standards and Other Regulations.
# Health Services Staff
# Participate in the ECE program's coordinated approach to managing food allergies.
• Help plan and implement the program's FAMPP.
# Ensure the daily management of food allergies for individual children.
• Make sure children with food allergies are identified in a way that complies with Head Start Program Performance Standards and Other Regulations and established enrollment practices but does not compromise their confidentiality rights.
• Read and regularly review medical records and emergency information for all children with food allergies.
• Communicate with parents and health care providers (with parental consent) about any allergic reactions, changes in a child's health, and exposures to allergens.
• Read and regularly review each child's ECP. Make sure that all ECPs include the following (the required elements are also summarized in the sketch after this list of actions):
°°A doctor's statement addressing the meal accommodation needs of a particular child with a food allergy disability, as required for USDA's Child Nutrition Programs.
°°Written instructions about food(s) to which the child is allergic and steps that should be taken to avoid that food.
°°A detailed treatment plan to be implemented if an allergic reaction occurs. This plan should include the names and doses of medications and how they should be used. It should also include specific symptoms that would indicate the need to give one or more medications or take the child to an emergency medical facility.
• Work with parents and health care providers to make sure that the medical needs of children with food allergies are met and that all necessary accommodations are made.
• Refer parents of children who do not have access to health care to State Children's Health Insurance Program providers.
# Prepare for and respond to food allergy emergencies.
• Keep copies of ECPs for children in your care in a secure place that you can get to quickly and easily in an emergency.
• Make sure that parents of children with food allergies provide epinephrine auto-injectors to use in food allergy emergencies if their use is called for in the child's ECP.
• Make sure that medications are kept in a secure place and that staff who are delegated and trained to use epinephrine auto-injectors can get to them quickly and easily. Regularly check the expiration dates of epinephrine auto-injectors.
• Make sure that staff plan for the needs of children with food allergies during class field trips and during other extracurricular activities.
• If allowed by state and local laws, work with the program director to get extra epinephrine auto-injectors or nonpatient-specific prescriptions or standing orders for auto-injectors that can be used by a registered nurse and those delegated and trained to administer epinephrine during allergy emergencies.
• Never hesitate to activate a child's ECP in an emergency. If you are delegated and trained according to state laws, including regulations, be ready to use an epinephrine auto-injector.
• After each food allergy emergency, review how it was handled with the ECE program administrator, parents, staff members involved in the response, EMS responders, and the child to identify ways to prevent future emergencies and improve emergency response. Make revisions to the child's ECP as appropriate.
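Taken together, the ECP elements repeated throughout this section amount to a small structured record. The sketch below is one hypothetical way a program might represent those elements when building its own tracking form or database; the class and field names are illustrative assumptions, not part of these guidelines, and a real ECP remains a doctor-authored document rather than a data file.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class EmergencyCarePlan:
    """Hypothetical record of the ECP elements these guidelines describe."""
    child_name: str
    allergens: List[str]            # food(s) to which the child is allergic
    avoidance_steps: List[str]      # steps to take to avoid those foods
    medications: List[str]          # names and doses from the treatment plan
    emergency_symptoms: List[str]   # symptoms that call for medication or EMS
    doctor_statement_on_file: bool = False  # needed for USDA CNP meal accommodations

    def is_complete(self) -> bool:
        """A plan is usable only when every required element is present."""
        return all([self.child_name, self.allergens, self.avoidance_steps,
                    self.medications, self.emergency_symptoms])

# Example with hypothetical values:
plan = EmergencyCarePlan("J. Doe", ["peanut"], ["no shared snacks"],
                         ["epinephrine auto-injector 0.15 mg"],
                         ["hives", "wheezing"])
print(plan.is_complete())  # True
```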
# Help provide professional development on food allergies for staff.
• Stay up-to-date on best practices for managing food allergies. Sources for this information include allergists who are treating children with food allergies and local health departments.
• Educate other staff about food allergies and the needs of children with food allergies in a way that does not compromise their confidentiality rights.
• Use each child's ECP to train other staff members how to recognize the specific signs of an allergic reaction in each child and how to respond to a food allergy emergency.
• Coordinate annual training for all staff on relevant federal and state regulations for managing food allergies in children.
• Coordinate annual training for all staff on emergency response protocol and practices, including how to respond to food allergy emergencies.
• Provide or coordinate training for delegated staff on how to use an epinephrine auto-injector.
See Section 6 for more resources and tools that might assist in managing food allergies and allergy-related emergencies in ECE programs.
# Section 5. Federal Laws and Regulations that Govern Food Allergies in Schools and Early Care and Education Programs
The federal laws and regulations described in this section address the responsibilities of schools and early care and education (ECE) programs to help children and adolescents manage food allergies that may constitute a disability under federal law and to ensure that children are not subject to discrimination on the basis of their disability. This section also addresses privacy and confidentiality requirements that apply to the education records of students with food allergies, regardless of whether they have been found to have a disability under federal law. Schools and ECE programs are encouraged to copy and distribute relevant laws to appropriate staff and to reinforce relevant laws and regulations in all training provided to staff. For information on how to get copies of relevant federal laws and regulations, see Section 6. The federal laws described in this section are enforced or administered by the U.S. Department of Education (ED), the U.S. Department of Justice (DOJ), and the U.S. Department of Agriculture (USDA).
# Section 504 of the Rehabilitation Act of 1973 (Section 504) and the Americans with Disabilities Act of 1990 (ADA)
Section 504 is a federal law that prohibits discrimination on the basis of disability in programs and activities that receive federal financial assistance. Recipients of federal financial assistance from ED include public school districts, other state and local educational agencies, and postsecondary educational institutions. The Department of Education's Office for Civil Rights (OCR) enforces Section 504 as it applies to these recipients. The USDA enforces Section 504 as it applies to recipients of federal financial assistance from USDA. Title II of the ADA prohibits discrimination on the basis of disability by public entities, including public elementary, secondary, and postsecondary educational institutions, whether or not they receive federal financial assistance. For public schools, OCR shares Title II enforcement responsibilities with DOJ.
Section 504 and Title II of the ADA require that qualified individuals with disabilities, including students, parents, and other program participants, not be excluded from or denied the benefits of services, programs, or activities or otherwise subjected to discrimination by reason of a disability. Public school districts that receive federal financial assistance are covered by both Section 504 and Title II. As a general rule, because Title II does not provide less protection than Section 504, violations of Section 504 also constitute violations of Title II. To the extent that Title II provides greater protections, schools must also comply with Title II and provide those additional protections.
Note: In addition to becoming familiar with these relevant federal laws, schools and ECE programs should determine which applicable state and local statutes, regulations, and policies should be considered when developing management plans for children with food allergies.
If a student's food allergy is a disability, that student is entitled to the protections of Section 504 and the ADA. Both laws define a disability as a physical or mental impairment that substantially limits a major life activity. Children with food allergies may be substantially limited in major life activities such as eating, breathing, or the operation of major bodily functions such as the respiratory or gastrointestinal system. The U.S. Congress has made clear that the definition of disability under Section 504 and the ADA is to be construed broadly.
Under both Section 504 and Title II, students with disabilities in public schools must be given an equal opportunity to participate in academic, nonacademic, and extracurricular activities. ED's Section 504 regulation outlines a process for schools to use to determine whether a student has a disability and to determine what services a student with a disability needs. This evaluation process must be tailored individually because each student is different, and his or her needs will vary.
The Section 504 regulations specify that school districts must identify all students with disabilities and provide them with a free appropriate public education (FAPE). Under ED's Section 504 regulation, FAPE is the provision of regular or special education and related aids and services designed to meet the individual educational needs of students with disabilities as adequately as the needs of students who do not have disabilities are met. A student does not have to receive special education services, however, in order to receive related aids and services under Section 504. The most common practice is to include these related aids and services, as well as any needed special education services, in a written document, sometimes called a Section 504 plan. Even if a school district does not believe that a student needs special education or related aids and services, Section 504 and Title II require the district to consider whether it can reasonably modify policies, practices, or procedures to ensure that a student with a disability has an equal opportunity to participate in and benefit from the school's services and programs. ED's Section 504 regulation also states that public preschool and day care programs operated by recipients of federal funds may not, on the basis of disability, exclude students with disabilities and must take their needs into account when determining the aid, benefits, or services to be provided.
Under ED's Section 504 regulation, private schools that receive federal financial assistance may not exclude an individual student with a disability if the school can, with minor adjustments, provide an appropriate education to that student. Private, nonreligious schools and ECE programs are covered by Title III of the ADA. Title III prohibits public accommodations such as these from discriminating against individuals with disabilities in the full and equal enjoyment of the entity's services and activities. Under Title III, private schools and ECE programs must make reasonable modifications to policies, practices, and procedures when necessary to give children with disabilities, including those with food allergies, full and equal access to and participation in programs and services, unless the entity can show that the modification would result in a fundamental alteration of those programs and services.
Under Section 504 and the ADA, children with food allergy disabilities in schools and ECE programs must be provided with the services and modifications they need in order to attend. Examples of these services and modifications might include implementing allergen-safe food plans, administering epinephrine according to a doctor's orders (even if the school or ECE program has a no-medication policy), allowing students to carry their own medication, and providing an allergen-safe environment in which the student can eat meals.
Disability harassment is a form of discrimination prohibited by Section 504 and Titles II and III of the ADA. Harassment creates a hostile environment when the conduct is sufficiently serious so as to interfere with or limit a student's ability to participate in or benefit from the services, activities, or opportunities offered by a school. When student-on-student disability harassment occurs and the school knows or reasonably should know about the harassment, a school must take prompt and effective steps reasonably calculated to end the harassment, prevent its recurrence, and eliminate any hostile environment created by the harassment. Section 504 and Title II require schools to take such steps and prohibit schools from encouraging, tolerating, or ignoring peer harassment based on disability that creates a hostile environment. Bullying, teasing, or harassment about an allergy can lead to psychological distress for children with food allergies, which could lead to a more severe reaction when the allergen is present. And exposing an allergic child to the allergen (e.g., putting the allergen in the child's food or forcing the child to ingest the allergen) can have very serious, even fatal, consequences. School districts, in developing and implementing policies on bullying and harassment, should instruct staff and students as to how such policies apply to children with food allergies, including the possible disciplinary consequences for bullying and harassment that target or place food-allergic children at risk. Additional consequences could be to separate the harasser from the target and to provide counseling for the target or the harasser. Finally, a school should take steps to stop further harassment and prevent any retaliation against the person who made the complaint (or was the subject of the harassment) or against those who provided information as witnesses.
The ADA, as amended, Section 504 of the Rehabilitation Act, the Richard B. Russell National School Lunch Act (42 U.S.C. 1758(a)), the Child Nutrition Act, Child Nutrition Program (CNP) regulations, and USDA's nondiscrimination regulations at 7 CFR 15b govern meal accommodations for children with food-related disabilities in schools and ECE programs that participate in the CNPs. USDA has oversight for providing meals in these programs. Program operators in the CNPs must make accommodations to regular program meals for children identified by a licensed doctor as having a food allergy disability that prevents them from consuming a meal as prepared. For purposes of discussion in these guidelines, if a child has a food allergy that is identified as a disability by a licensed doctor, meal accommodations must be provided.
Additionally, a school, institution, or site participating in USDA's Child Nutrition Programs is not required to establish a Section 504 plan, IEP, IHP, or ICP (or any plan that may be used by a child with special dietary needs) to make an accommodation to a program meal for a child with a food-related disability. Instead, the CNPs require a written statement from a licensed doctor that identifies the following:
• The child's disability (according to pertinent statutes).
• An explanation of why the disability restricts the child's diet.
• The major life activity affected by the disability.
• The food or foods to be omitted from the child's diet.
• The food or choice of foods that must be substituted (62).
A statement signed by a licensed doctor addressing the points above is sufficient. However, the written statement from a licensed doctor may be incorporated in any of the plans discussed above.
# Individuals with Disabilities Education Act (IDEA)
IDEA Part B provides federal funds to help states make FAPE available to eligible children with disabilities in the least restrictive environment. The obligation to make FAPE available in the least restrictive environment begins at the child's third birthday and could last until the child's twenty-second birthday, depending on state law or practice. FAPE under IDEA Part B refers to the provision of special education and related services at no cost to the parents that include an appropriate preschool, elementary school, or secondary school education at the state level. Eligibility determinations under IDEA Part B are made at the state and local school district level on an individual, case-by-case basis in light of applicable IDEA Part B requirements and state education standards. At the federal level, IDEA is administered by the Office of Special Education Programs in the Office of Special Education and Rehabilitative Services in the ED.
A child could be found eligible for services under IDEA Part B because of a food allergy only if it adversely affects the child's educational performance, and the child needs special education and related services because of the food allergy. If determined eligible, the school district must develop an Individualized Education Program (IEP) for the child, or if appropriate, an Individualized Family Service Plan (IFSP) for a child aged three through five. An IEP is a written document developed by a team that includes the child's parents and school officials. It sets out, among other elements, the special education and related services and supplementary aids and services to be provided to the child.
If parents place their child with a disability at a private school at their own expense, IDEA Part B generally would not require the school district to develop an IEP for the child at the private school. In general, if a child with a food allergy only needs a related service and does not need special education, that child would not be eligible for services under IDEA Part B. Such a child might still be eligible for services or modifications under Section 504 or Title II.
In addition, IDEA Part C provides federal funds to assist states in identifying and providing early intervention services to children with disabilities from birth to age three and, at the state's discretion, through five or when the child enters kindergarten. Under IDEA Part C, a child is eligible based on a developmental delay, diagnosed condition, or, at the state's discretion, at-risk status. An IFSP is a document written by a team that includes the child's parents that identifies the specific early intervention services needed by the child.
# Family Educational Rights and Privacy Act (FERPA) of 1974
FERPA applies to educational agencies or institutions that receive federal funds under a program administered by ED. FERPA generally prohibits schools and school districts from disclosing personally identifiable information from a student's education record unless the student's parent or the eligible student (a student who is aged 18 years or older or who attends an institution of postsecondary education) provides prior, written consent for the disclosure. This requirement has several exceptions.
One exception permits schools to disclose personally identifiable information from a student's education record without obtaining prior written consent to school officials, including teachers, who have legitimate educational interests in the information, including the educational interests of the child. Schools must use reasonable methods, such as physical, technological, or administrative access controls, to ensure that school officials obtain access only to those education records in which they have legitimate educational interests. To use this exception, schools must include in their annual notification of FERPA rights to parents and eligible students the criteria for determining who constitutes a school official and what constitutes a legitimate educational interest. This exception for school officials also applies to a contractor, consultant, volunteer, or other party to whom a school has outsourced institutional services or functions, provided that the outside party:
• Performs an institutional service or function for which the school would otherwise use employees.
• Is under the direct control of the school with respect to the use and maintenance of education records.
• Is subject to the requirements in FERPA that govern the use and redisclosure of personally identifiable information from education records.
Another exception to the requirement of prior written consent permits schools to disclose personally identifiable information from an education record to appropriate parties, including the parent of an eligible student, in connection with an emergency if knowledge of the information is necessary to protect the health or safety of the student or other individuals. Under this exception, a school may take into account the totality of the circumstances pertaining to a threat to the health or safety of a student or other individuals.
If a school determines that there is an articulable and significant threat to the health or safety of a student or other individuals, it may disclose information from education records to any person whose knowledge of the information is necessary to protect the health or safety of the student or other individuals. If the information available at the time of the incident forms a rational basis for the decision to disclose information, ED will not substitute its judgment for that of the school in evaluating the circumstances and making its determination. When disclosures are made under this exception, a school must record the articulable and significant threat to the health or safety of a student or other individual that formed the basis for the disclosure and the parties to whom the information was disclosed.
In addition, under FERPA, the parent or eligible student must be given the opportunity to inspect and review the student's education records. A school must comply with a request for access to the student's education records within a reasonable period of time, but not more than 45 days after it has received the request.
Additional information and resources, including how to access copies of federal laws, are provided in Section 6.
# Acknowledgements
# Food Allergies: What School Employees Need to Know
http://neahin.org/foodallergies
Developed by the NEA (National Education Association) Health Information Network, with support from the U.S. Department of Agriculture, this booklet is designed to educate school employees about food allergies and how they can help to prevent and respond to allergic reactions in schools. Booklets are available in print and online in both English and Spanish.
# National Food Service Management Institute Resources
# Federal Resources
# Food Allergy Overview
www.niaid.nih.gov/topics/foodAllergy/understanding/Pages/default.aspx
U.S. Department of Health and Human Services (HHS), National Institute of Allergy and Infectious Diseases. These resources are designed to improve understanding of food allergies, share information about food allergy research, and present current guidelines for clinical diagnosis and management of food allergies in the United States. General information about food allergies is also available in PDF format at www.niaid.nih.gov/topics/foodallergy/documents/foodallergy.pdf.
# Food Allergies: What You Need to Know
www.fda.gov/Food/ResourcesForYou/Consumers/ucm079311.htm
U.S. Department of Health and Human Services (HHS), Food and Drug Administration. These resources are designed to improve understanding of food allergies and labeling of food products that contain proteins derived from the eight most common food allergens. Information also includes food allergy updates for consumers.
# Readiness and Emergency Management for Schools (REMS) Technical Assistance (TA) Center
http://rems.ed.gov
Sponsored by the U.S. Department of Education (ED), the REMS TA Center's primary goal is to support schools, school districts, and institutions of higher education in school emergency management, including the development and implementation of comprehensive all-hazards emergency management plans. The TA Center disseminates information about school emergency management to help individual schools, school districts, and institutions of higher education learn more about developing, implementing, and evaluating comprehensive, all-hazards school emergency management plans.
In addition, the TA Center helps ED coordinate technical assistance meetings, shares school emergency management information, and responds to direct requests for technical assistance and training.
# The National Center on Safe Supportive Learning Environments
http://safesupportiveschools.ed.gov
Supported by the U.S. Department of Education (ED), the National Center on Safe Supportive Learning Environments (NCSSLE) provides information and technical assistance to states, districts, schools, institutions of higher education, communities, and other federal grantee programs on how to improve conditions for learning. To improve conditions for learning, the Center assists others in measuring school climate and conditions for learning and implementing appropriate programmatic interventions, so that all students have the opportunity to realize academic success in safe and supportive environments. The Center also specifically addresses issues related to bullying, violence, and substance abuse prevention that often negatively impact learning environments.
# Guidance Related to Federal Laws
# Federal Statutes and Regulations
The following statutes are in the U.S. Code (U.S.C.).
# Food Code (April 1, 2008)
The Food Code is a model that assists food control jurisdictions at all levels of government by providing them with a scientifically sound technical and legal basis for regulating the retail and food service segment of the industry. It serves as a reference document for state, city, county, and tribal agencies that regulate restaurants, retail food stores, vending operations, and food service operations in institutions such as schools, hospitals, nursing homes, and child care centers.
# Food Allergy and Anaphylaxis: An NASN Tool Kit
www.nasn.org/ToolsResources/FoodAllergyandAnaphylaxis
Sponsored by the National Association of School Nurses (NASN), this site provides a variety of tools and templates to educate and help people who are responsible for managing students with food allergies as an integral part of the delivery of health care services in schools.
# National Nongovernmental Resources: School Policy
# Safe at School and Ready to Learn: A Comprehensive Policy Guide for Protecting Students with Life-Threatening Food Allergies
www.nsba.org/foodallergyguide.pdf
Developed by the National School Boards Association, this guide is designed to help school leaders, especially school boards, make sure that policies at the district and school level support the safety, well-being, and success of students with life-threatening food allergies. It includes a checklist that schools can use to assess the extent to which the guide's components are included in their food allergy policies and used in practice. It also has examples of state and local education policies.
# Statewide Guidelines for Schools
http://www.foodallergy.org/laws-and-regulations/statewide-guidelines-for-schools
Hosted by Food Allergy Research & Education (FARE), this site provides state guidelines for managing food allergies in schools.
# National Nongovernmental Resources: Food Allergy Training
# How to CARE for Students with Food Allergies: What Every Educator Should Know
http://allergyready.com
This free online course, developed by Food Allergy Research & Education (FARE), is designed to help teachers, administrators, and other school staff members prevent and manage potentially life-threatening allergic reactions. Educational materials include guidance for people who might be training staff how to use an epinephrine auto-injector.
# Managing Food Allergies in Schools: Food Allergy Education for the School Community
www.allergyhome.org/schools
This resource was developed in partnership with Kids with Food Allergies, the Asthma and Allergy Foundation of America New England Chapter, the Association of Camp Nurses, and the American Camping Association. It was modified for and approved by the Massachusetts Department of Public Health's School Health Services. It provides practical teaching tools, including presentations with audio, to assist in nurse, staff, parent, and student education. This resource provides school nurses with tools to assist in training their school community, including students and parents without food allergies, and includes guidance for school nurses who will train staff on administration of epinephrine by auto-injector. It includes links to other allergy education sites, materials for families of children with food allergies, and materials for others working in child care and camp programs.
# National Nongovernmental Resources: Parent Education
# Glossary of Abbreviations and Acronyms
This report provides recommendations for use of a newly developed recombinant outer-surface protein A (rOspA) Lyme disease vaccine (LYMErix™, SmithKline Beecham Pharmaceuticals) for persons aged 15-70 years in the United States. The purpose of these recommendations is to provide health-care providers, public health authorities, and the public with guidance regarding the risk for acquiring Lyme disease and the role of vaccination as an adjunct to preventing Lyme disease. The Advisory Committee on Immunization Practices recommends that decisions regarding vaccine use be made on the basis of assessment of individual risk, taking into account both geographic risk and a person's activities and behaviors relating to tick exposure.
*sensu lato: including all subordinate taxa of a taxon that would otherwise be considered separately.
†sensu stricto: excluding similar taxa that otherwise would be considered together.
# INTRODUCTION
Lyme disease is a tickborne zoonosis caused by infection with the spirochete Borrelia burgdorferi. The number of annually reported cases of Lyme disease in the United States has increased approximately 25-fold since national surveillance began in 1982; during 1993-1997, a mean of 12,451 cases annually were reported by states to CDC (1,2; CDC, unpublished data, 1998). In the United States, the disease is primarily localized to states in the northeastern, mid-Atlantic, and upper north-central regions, and to several areas in northwestern California (1).
Lyme disease is a multisystem, multistage, inflammatory illness. In its early stages, Lyme disease can be treated successfully with oral antibiotics; however, untreated or inadequately treated infection can progress to late-stage complications requiring more intensive therapy. The first line of defense against Lyme disease and other tickborne illnesses is avoidance of tick-infested habitats, use of personal protective measures (e.g., repellents and protective clothing), and checking for and removing attached ticks. Early diagnosis and treatment are effective in preventing late-stage complications.
Recently, two Lyme disease vaccines have been developed that use recombinant B. burgdorferi lipidated outer-surface protein A (rOspA) as immunogen: LYMErix™, SmithKline Beecham Pharmaceuticals, and ImuLyme™, Pasteur Mérieux Connaught. As of publication of this report, only LYMErix has been licensed by the U.S. Food and Drug Administration for use in the United States; therefore, these recommendations apply only to the use of that vaccine. Additional statements will be provided as other Lyme disease vaccines are licensed.
Results of a large-scale, randomized, controlled (Phase III) trial of safety and efficacy of LYMErix in persons aged 15-70 years residing in disease-endemic areas of the northeastern and north-central United States indicate that the vaccine is safe and efficacious when administered on a three-dose schedule of 0, 1, and 12 months (3,4). Information regarding vaccine safety and efficacy beyond the transmission season immediately after the third dose is not available. Thus, the duration of protective immunity and need for booster doses beyond the third dose are unknown.
# CLINICAL FEATURES OF LYME DISEASE
# Clinical Description
Most often, Lyme disease is evidenced by a characteristic rash (erythema migrans) accompanied by nonspecific symptoms (e.g., fever, malaise, fatigue, headache, myalgia, and arthralgia) (5-7).
The incubation period from infection to onset of erythema migrans is typically 7-14 days but can be as short as 3 days or as long as 30 days. Some infected persons have no recognized illness (i.e., asymptomatic infection determined by serologic testing), or they manifest only nonspecific symptoms (e.g., fever, headache, fatigue, and myalgia).
Lyme disease spirochetes disseminate from the site of inoculation by cutaneous, lymphatic, and bloodborne routes. The signs of early disseminated infection usually occur from days to weeks after the appearance of a solitary erythema migrans lesion. In addition to multiple or secondary erythema migrans lesions, early disseminated infection can be manifested as disease of the nervous system, the musculoskeletal system, or the heart (5-7). Early neurologic manifestations include lymphocytic meningitis; cranial neuropathy, especially facial nerve palsy; and radiculoneuritis. Musculoskeletal manifestations can include migratory joint and muscle pains with or without objective signs of joint swelling. Cardiac manifestations are rare but can include myocarditis and transient atrioventricular block of varying degree.
B. burgdorferi infection in the untreated or inadequately treated patient can progress to late-disseminated disease from weeks to months after infection (5-7). The most common objective manifestation of late-disseminated Lyme disease is intermittent swelling and pain of one or some joints, usually large, weight-bearing joints (e.g., the knee). Some patients experience chronic axonal polyneuropathy, or encephalopathy, the latter usually manifested by cognitive disorders, sleep disturbance, fatigue, and personality changes. Infrequently, Lyme disease morbidity can be severe, chronic, and disabling (8,9). An ill-defined post-Lyme disease syndrome occurs in some persons after treatment for Lyme disease (10-12). Lyme disease is rarely, if ever, fatal.
# Diagnosis
The diagnosis of Lyme disease is based primarily on clinical findings, and treating patients with early disease solely on the basis of objective signs and a known exposure is often appropriate (13). Serologic testing can, however, provide valuable supportive diagnostic information in patients with endemic exposure and objective clinical findings that indicate later-stage disseminated Lyme disease (13). When serologic testing is indicated, CDC recommends testing initially with a sensitive first test, either an enzyme-linked immunosorbent assay (ELISA) or an indirect fluorescent antibody test, followed by testing with the more specific Western immunoblot (WB) test to corroborate equivocal or positive results obtained with the first test (14); this two-tier strategy is sketched below. Although antibiotic treatment in early localized disease can blunt or abrogate the antibody response, patients with early disseminated or late-stage disease usually have strong serologic reactivity and demonstrate expanded WB immunoglobulin G (IgG) banding patterns to diagnostic B. burgdorferi antigens (15,16). Antibodies often persist for months or years after successfully treated or untreated infection. Thus, seroreactivity alone cannot be used as a marker of active disease. Neither positive serologic test results nor a history of previous Lyme disease ensures that a person has protective immunity. Repeated infection with B. burgdorferi has been reported (17).
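The two-tier serologic testing strategy above is, at bottom, a small decision procedure. The following sketch expresses that logic in Python. The function name and the three-valued test results are illustrative assumptions rather than part of the CDC recommendation, and actual laboratory practice involves additional detail (e.g., choosing IgM versus IgG immunoblots according to illness duration).

```python
def two_tier_result(first_tier, immunoblot=None):
    """Illustrative two-tier Lyme disease serology logic (hypothetical API).

    first_tier:  ELISA or indirect fluorescent antibody result:
                 "negative", "equivocal", or "positive".
    immunoblot:  Western immunoblot result ("negative" or "positive"),
                 required when the first tier is equivocal or positive.
    """
    if first_tier == "negative":
        # A negative result on the sensitive first-tier test ends the
        # algorithm; no immunoblot is performed.
        return "seronegative"
    if first_tier in ("equivocal", "positive"):
        # Equivocal or positive first-tier results are corroborated with
        # the more specific Western immunoblot.
        if immunoblot is None:
            return "immunoblot required"
        return "seropositive" if immunoblot == "positive" else "seronegative"
    raise ValueError(f"unrecognized first-tier result: {first_tier!r}")

# Example: an equivocal ELISA followed by a negative immunoblot is
# reported as seronegative under the two-tier rule. Note that, as the
# text emphasizes, seropositivity alone is not a marker of active disease.
print(two_tier_result("equivocal", "negative"))  # -> seronegative
```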
B. burgdorferi can be cultured from 80% or more of biopsy specimens taken from early erythema migrans lesions (18). However, the diagnostic usefulness of this procedure is limited because of the need for a special bacteriologic medium (i.e., modified Barbour-Stoenner-Kelly medium) and protracted observation of cultures. Polymerase chain reaction (PCR) has been used to amplify genomic DNA of B. burgdorferi in skin, blood, cerebrospinal fluid, and synovial fluid (19,20), but PCR has not been standardized for routine diagnosis of Lyme disease.
# Treatment
Lyme disease can usually be treated successfully with standard antibiotic regimens (5,6). Early and uncomplicated infection, including infection with isolated cranial nerve palsy, usually responds satisfactorily to treatment with orally administered antibiotics (21). Parenteral antibiotics are generally recommended for treating meningitis, carditis, later-stage neurologic Lyme disease, and complicated Lyme disease arthritis. Late, complicated Lyme disease might respond slowly or incompletely, and more than one antibiotic treatment course can be required to eliminate active infection (8,9). Refractory Lyme disease arthritis is associated with expression of certain Class II major histocompatibility complex (MHC II) molecules (22) and can require anti-inflammatory agents and surgical synovectomy for relief of symptoms (8). In a limited number of patients, persistent or recurrent symptoms after appropriate antibiotic therapy often can be attributed to causes other than persistent infection (22,23).
# EPIDEMIOLOGY OF LYME DISEASE
# Antigenic Variation of B. burgdorferi Sensu Lato*
In the United States, a number of genospecies of B. burgdorferi sensu lato have been isolated from animals and ticks, but only OspA-expressing B. burgdorferi sensu stricto† has been isolated from humans (24). Existing evidence also indicates that rOspA vaccines will be protective against most, if not all, human infections in the United States (25). B. burgdorferi sensu stricto also occurs in Europe, but the dominant European and Asian genospecies are B. garinii and B. afzelii, both of which are antigenically distinct from B. burgdorferi sensu stricto (26) and vary in their expression of OspA. Vaccines using combinations of immunogenic proteins might be necessary to provide protection against multiple genospecies (27).
# Routes of Transmission
Humans acquire B. burgdorferi infection from infected ticks at the time the tick takes a blood meal (28); Lyme disease is not spread by person-to-person contact or by direct contact with infected animals. Transplacental transmission of B. burgdorferi has been reported (29,30), but the effects of such transmission on the fetus remain unclear. The results of two epidemiologic studies document that congenital Lyme disease must be rare, if it occurs at all (31,32). Transmission in breast milk has not been described. B. burgdorferi can be cultured from the blood in some patients with early acute infection, and it is able to survive for several weeks in stored blood. However, at least one study has found that the risk for transfusion-acquired infection is minimal (33).
# Tick Vectors of Lyme Disease
B. burgdorferi is transmitted to humans by ticks of the Ixodes ricinus complex (34). I. scapularis, the black-legged or deer tick, is the vector in the eastern United States; I. pacificus, the western black-legged tick, transmits B. burgdorferi in the western United States (35,36). I. scapularis is also a vector for human granulocytic ehrlichiosis and babesiosis (34,37).
In their nymphal stage, these ticks feed predominantly in the late spring and early summer. The majority of Lyme disease cases result from bites by infected nymphs. In highly enzootic areas of the United States, approximately 15%-30% of questing I. scapularis nymphs and up to 14% of I. pacificus nymphs are infected with B. burgdorferi (38-41). However, in the southern United States, the prevalence of infection in I. scapularis ticks is generally 0%-3% (36).
The risk for acquiring Lyme disease in the United States varies with the distribution, density, and prevalence of infection in vector ticks (Appendix). During the past several decades, the distribution of I. scapularis has spread slowly in the northeastern and upper north-central regions of the United States (42). Although deer are not competent reservoirs of B. burgdorferi, they are the principal maintenance hosts for adult black-legged ticks, and the presence of deer appears to be a prerequisite for the establishment of I. scapularis in any area (43). The explosive repopulation of the eastern United States by white-tailed deer during recent decades has been linked to the spread of I. scapularis ticks and of Lyme disease in this region. The future limits of this spread are not known (42).
# Distribution of Human Cases of Lyme Disease
Lyme disease is endemic in several regions in the United States, Canada, and temperate Eurasia (1,44). The disease accounts for more than 95% of all reported cases of vectorborne illness in the United States. Using a national surveillance case definition (45), state health officials reported >62,000 cases to CDC during 1993-1997, and the national mean annual rate during this 5-year period was 5.5 cases/100,000 population (1,2; CDC, unpublished data, 1998). Persons of all ages are equally susceptible to infection, although the highest reported rates of Lyme disease occur in children aged <15 years and in adults aged 30-59 years (1). Both underreporting and overdiagnosis are common (46-48). Approximately 90% of cases are reported by approximately 140 counties located along the northeastern and mid-Atlantic seaboard and in the upper north-central region of the United States (Appendix).
A rash similar to erythema migrans of Lyme disease, but not caused by B. burgdorferi infection, has been described in patients who have been bitten by ticks in the southern United States (49,50). This rash is suspected of being associated with the bite of Amblyomma americanum ticks (51).
# Populations at Risk for Lyme Disease
Most B. burgdorferi infections result from periresidential exposure to infected ticks (38,52-55) during property maintenance, recreation, and leisure activities. Thus, persons who live or work in residential areas surrounded by woods or overgrown brush infested by vector ticks are at risk for acquiring Lyme disease. In addition, persons who participate in recreational activities away from home (e.g., hiking, camping, fishing, and hunting) in tick habitat and persons who engage in outdoor occupations (e.g., landscaping, brush clearing, forestry, and wildlife and parks management) in endemic areas might also be at elevated risk for acquiring Lyme disease (56-58).
# PREVENTION AND CONTROL OF LYME DISEASE
# Avoidance of Tick Habitat
Whenever possible, persons should avoid entering areas that are likely to be infested with ticks, particularly in spring and summer when nymphal ticks feed.
Ticks favor a moist, shaded environment, especially that provided by leaf litter and low-lying vegetation in wooded, brushy, or overgrown grassy habitat. Both deer and rodent hosts must be abundant to maintain the enzootic cycle of B. burgdorferi. Sources of information regarding the distribution of ticks in an area include state and local health departments, park personnel, and agricultural extension services.

# Personal Protection

Persons who are exposed to tick-infested areas should wear light-colored clothing so that ticks can be spotted more easily and removed before becoming attached. Wearing long-sleeved shirts and tucking pants into socks or boot tops can help keep ticks from reaching the skin. Ticks are usually located close to the ground, so wearing high rubber boots can provide additional protection. Applying insect repellents containing DEET (N,N-diethyl-m-toluamide) to clothes and exposed skin, and applying permethrin (which kills ticks on contact) to clothes, should also help reduce the risk for tick attachment. DEET can be used safely on children and adults but should be applied according to U.S. Environmental Protection Agency guidelines to reduce the possibility of toxicity (59 ). Because transmission of B. burgdorferi from an infected tick is unlikely to occur before 36 hours of tick attachment (28,60 ), daily checks for ticks and their prompt removal will help prevent infection.

# Strategies for Reducing Tick Abundance

The number of ticks in endemic residential areas can be reduced by removing leaf litter, brush, and woodpiles around houses and at the edges of yards and by clearing trees and brush to admit more sunlight, thus reducing deer, rodent, and tick habitat (61 ). Tick populations have also been effectively suppressed by applying pesticides to residential properties (62,63 ). Community-based interventions to reduce deer populations or to kill ticks on deer and rodents have not been extensively implemented, but they might be effective in reducing communitywide risk for Lyme disease (64 ). The effectiveness of deer feeding stations equipped with pesticide applicators to kill ticks on deer, and of other baited devices to kill ticks on rodents, is currently under evaluation.

# Prophylaxis After Tick Bite

The relative cost-effectiveness of postexposure treatment of tick bites to avoid Lyme disease in endemic areas depends on the probability of B. burgdorferi infection after a tick bite (65 ). In most circumstances, treating persons for tick bite alone is not recommended (6,66 ). Persons who are bitten by a deer tick should remove the tick and seek medical attention if any signs and symptoms of early Lyme disease, ehrlichiosis, or babesiosis develop during the ensuing days or weeks.

# Early Diagnosis and Treatment

Lyme disease is readily treatable in its early stages (5,6 ). The early diagnosis and proper antibiotic treatment of Lyme disease are important strategies for avoiding the morbidity and costs of complicated and late-stage illness.

# LYME DISEASE VACCINE

Description

LYMErix is made from lipidated rOspA of B. burgdorferi sensu stricto. The rOspA protein is expressed in Escherichia coli and purified. Each 0.5-mL dose of LYMErix contains 30 µg of purified lipidated rOspA protein adsorbed onto aluminum hydroxide adjuvant.

# Mechanism of Action
Several studies in animals have provided evidence that B. burgdorferi in a vector tick undergoes substantial antigenic change between the time of tick attachment on a mammalian host and subsequent transmission of the bacterium to the host. The spirochetes residing in the tick gut at the initiation of tick feeding express primarily OspA. As tick feeding begins, the expression of outer-surface protein C (OspC) is increased and the expression of OspA is decreased, so that spirochetes that reach the mammalian host after passing through the tick salivary glands express primarily OspC (67 ). Thus, the rOspA vaccine might exert its principal protective effect by eliciting antibodies that kill Lyme disease spirochetes within the tick gut (68,69 ).

# Route of Administration, Vaccination Schedule, and Dosage

LYMErix is administered by intramuscular injection, 0.5 mL (30 µg), into the deltoid muscle. Three doses are required for optimal protection. The first dose is followed by a second dose 1 month later and a third dose administered 12 months after the first dose. Vaccine administration should be timed so that the second dose of the vaccine (year 1) and the third dose (year 2) are administered several weeks before the beginning of the B. burgdorferi transmission season, which usually begins in April. The safety and immunogenicity of alternate dosing schedules are currently being evaluated.

# VACCINE PERFORMANCE

# Safety

# Randomized, Controlled Clinical (Phase III) Trial of LYMErix

A total of 10,936 subjects aged 15-70 years living in Lyme disease-endemic areas were recruited at 31 sites and randomized to receive three doses of vaccine or placebo (3 ); 5,469 subjects received at least one 30-µg dose of rOspA vaccine, and 5,467 subjects received at least one injection of placebo. The subjects were then followed for 20 months. Information regarding adverse events that were believed to be related or possibly related to injection was available from 4,999 subjects in each group. Soreness at the injection site was the most frequently reported adverse event; it was reported without solicitation by 24.1% of vaccine recipients and 7.6% of placebo recipients (p < 0.001). Redness and swelling at the injection site were reported by <2% of either group but were reported more frequently among vaccine recipients than among those who received placebo (p < 0.001). Myalgia, influenza-like illness, fever, and chills were more common among vaccine recipients than placebo recipients (p < 0.001), but none of these events was reported by more than 3.2% of subjects (3 ). Reports of arthritis were not significantly different between vaccine and placebo recipients, but vaccine recipients were significantly (p < 0.05) more likely to report arthralgia or myalgia within 30 days after each dose (70 ). No statistically significant differences existed between the vaccine and placebo groups in the incidence of adverse events more than 30 days after receiving a dose, and no episodes of immediate hypersensitivity among vaccine recipients were noted (3 ).

# Safety in Patients with Previously Diagnosed Lyme Disease

The safety of three different dosage strengths of rOspA vaccine with adjuvant was evaluated in an uncontrolled safety and immunogenicity trial in 30 adults with previous Lyme disease (71 ). Doses were administered at 0, 1, and 2 months. Follow-up of subjects was conducted 1 month after the third dose. No serious adverse events were recorded during the study period.
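The reactogenicity comparisons above are reported only as percentages and p-values. As a check on the arithmetic, the following minimal sketch reconstructs a pooled two-proportion z-test from the published soreness rates (24.1% versus 7.6% among 4,999 subjects per group). The case counts are back-calculated approximations from those percentages, and this is a generic test for illustration; the trial's actual statistical methods are not specified in this report.

```python
import math

def two_proportion_z(x1, n1, x2, n2):
    """Two-sided z-test for a difference in proportions (pooled standard error)."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_two_sided = math.erfc(abs(z) / math.sqrt(2))  # two-sided normal tail
    return z, p_two_sided

# Counts approximated from the reported rates: 24.1% and 7.6% of 4,999 subjects.
z, p = two_proportion_z(round(0.241 * 4999), 4999, round(0.076 * 4999), 4999)
print(f"z = {z:.1f}, two-sided p = {p:.2g}")
```

Run as written, the sketch yields a z statistic of roughly 23, consistent with the reported p < 0.001.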
In the randomized controlled Phase III trial of LYMErix, the incidence of adverse events among vaccinees who were seropositive at baseline was similar to the incidence among those who were seronegative (70 ). The incidence of musculoskeletal symptoms within the first 30 days after vaccination was higher among vaccinees with a self-reported previous history of Lyme disease compared with vaccinees with no such history. This difference was not statistically significant at the p = 0.05 level in the placebo group. No statistically significant difference existed in the incidence of late musculoskeletal adverse events between vaccine and placebo recipients with a selfreported previous history of Lyme disease (70 ). # Risk for Possible Immunopathogenicity of rOspA Vaccine After infection with B. burgdorferi, persons who express certain MHC II molecules are more likely than others to develop chronic, poorly responsive Lyme arthritis associated with high levels of antibody to OspA in serum and synovial fluid (22 ). In chronic Lyme arthritis patients, the levels of antibody to OspA, and especially to the C-terminal epitope of OspA, have been found to correlate directly with the severity and duration of the arthritis (72 ). Researchers have proposed that an autoimmune reaction might develop within the joints of some Lyme arthritis patients as a result of molecular mimicry between the dominant T-cell epitope of OspA and human leukocyte function associated antigen 1 (hLFA-1) (73 ). The Phase III trial did not detect differences in the incidence of neurologic or rheumatologic disorders between vaccine recipients and their placebo controls during the 20 months after the initial dose (3 ). However, because the association between immune reactivity to OspA and treatment-resistant Lyme arthritis is poorly understood, the vaccine should not be administered to persons with a history of treatment-resistant Lyme arthritis. # Efficacy Randomized, Controlled Trial (Phase III) of LYMErix Using an intention-to-treat analysis, the vaccine efficacy in protecting against "definite" Lyme disease after two doses was 49% (95% confidence interval = 15%-69%) and after three doses was 76% (95% CI = 58%-86%) (3 ). (In this study, "definite" Lyme disease was defined as the presence of erythema migrans or objective neurologic, musculoskeletal, or cardiovascular manifestations of Lyme disease, plus laboratory confirmation of infection by cultural isolation, PCR positivity, or WB seroconversion.) Efficacy in protecting against asymptomatic infection (no recognized symptoms, but with WB seroconversions recorded in year 1 or year 2) was 83% (95% CI = 32%-97%) in year 1 and 100% (95% CI = 26%-100%) in year 2. # Immunogenicity A subset of adult subjects enrolled in the Phase III clinical trial of LYMErix was studied for the development of OspA antibodies at months 2, 12, 13, and 20 (3 ). At month 2, one month after the second injection, the geometric mean antibody titer (GMT) of IgG anti-OspA antibodies was 1,227 ELISA units/mL. Ten months later, the GMT had declined to 116 ELISA units/mL. At month 13, one month after the third injection, a marked anamnestic response resulted in a GMT of 6,006 ELISA units/mL. At month 20, the mean response had decreased to 1,991 ELISA units/mL (70 ). 
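The efficacy estimates reported above are point estimates of VE = 1 - (attack rate among vaccinees)/(attack rate among placebo recipients). The sketch below computes VE with a standard Katz log risk-ratio normal-approximation confidence interval. The case counts and group sizes are hypothetical placeholders chosen only to produce numbers of the same order as the trial's; they are not the trial data, which this report does not tabulate.

```python
import math

def vaccine_efficacy(cases_vax, n_vax, cases_plc, n_plc, z=1.96):
    """VE = 1 - relative risk, with a Katz log risk-ratio 95% CI."""
    rr = (cases_vax / n_vax) / (cases_plc / n_plc)
    se_log_rr = math.sqrt(1 / cases_vax - 1 / n_vax + 1 / cases_plc - 1 / n_plc)
    rr_lo, rr_hi = rr * math.exp(-z * se_log_rr), rr * math.exp(z * se_log_rr)
    return 1 - rr, 1 - rr_hi, 1 - rr_lo  # point estimate, lower CI, upper CI

# Hypothetical counts for illustration only (not the Phase III data).
ve, lo, hi = vaccine_efficacy(cases_vax=16, n_vax=5000, cases_plc=66, n_plc=5000)
print(f"VE = {ve:.0%} (95% CI = {lo:.0%} to {hi:.0%})")
```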
An analysis of antibody titers and the risk for developing Lyme disease for a subset of subjects enrolled in the Phase III clinical trial concluded that a titer >1,200 ELISA units/mL correlated with protection (SmithKline Beecham poster at Infectious Diseases Society of America Conference, Denver, Colorado, November 1998).

# Effect of Vaccination on the Serologic Diagnosis of Lyme Disease

Care providers and laboratorians should be advised that vaccine-induced anti-rOspA antibodies routinely cause false-positive ELISA results for Lyme disease (74 ). Experienced laboratory workers, through careful interpretation of the results of WB, can usually discriminate between B. burgdorferi infection and previous rOspA immunization, because anti-OspA antibodies do not develop after natural infection.

# COST-EFFECTIVENESS OF LYME DISEASE VACCINATION

The cost of Lyme disease has been evaluated from both a societal and a third-party-payer perspective (75 ). The cost-effectiveness of vaccinating against Lyme disease has also been analyzed from a societal perspective (76 ). At an assumed cost of vaccination of $100/person/year, a vaccine effectiveness of 0.85, a probability of 0.85 of correctly identifying and treating early Lyme disease, and an assumed incidence of Lyme disease of 1,000/100,000 persons/year, the net cost of vaccination to society was $5,692/case averted and $35,375/complicated neurologic or arthritic case avoided (Figure 1). Using these same baseline assumptions, the societal cost of vaccination exceeds the cost of not vaccinating unless the incidence of Lyme disease is >1,973/100,000 persons/year. Of the variables examined, the incidence of Lyme disease had the greatest impact on the cost-effectiveness of vaccination. The likelihood of early diagnosis and treatment also has a substantial impact on vaccine cost-effectiveness because of the reduced incidence of sequelae when Lyme disease is diagnosed and patients are treated early in the disease.

# FIGURE 1. Cost-effectiveness of Lyme disease vaccination

Most disease-endemic states and counties report Lyme disease incidence rates that are substantially below 1,000/100,000 persons/year. For example, in 1997, the highest reported state incidence was 70/100,000 persons in Connecticut, and the highest reported county incidence was 600/100,000 persons in Nantucket County, Massachusetts. However, some studies document that only approximately 10%-15% of physician-diagnosed cases of Lyme disease are reported to state authorities in highly endemic areas (46,47 ). Epidemiologic studies of populations at high risk in the northeastern United States have estimated annual incidences of >1,000/100,000 persons/year in several communities (77-80 ).

# ASSESSING THE RISK FOR LYME DISEASE

The decision to administer Lyme disease vaccine should be made on the basis of an assessment of individual risk, which depends on a person's likelihood of being bitten by tick vectors infected with B. burgdorferi. This likelihood is primarily determined by the following:

- density of vector ticks in the environment, which varies by place and season;
- the prevalence of B. burgdorferi infection in vector ticks; and
- the extent of person-tick contact, which is related to the type, frequency, and duration of a person's activities in a tick-infested environment.

Assessing risk should include considering the geographic distribution of Lyme disease. The areas of highest Lyme disease risk in the United States are concentrated within some northeastern and north-central states.
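As a rough illustration of how these three determinants combine, the toy index below multiplies nymph density, infection prevalence, and time spent in tick habitat. The function name, the input scales, and the example values are all hypothetical; this is not a validated risk score, and the infection prevalences merely echo the ranges cited earlier for highly enzootic northeastern sites versus southern sites.

```python
def relative_exposure_index(nymph_density, infection_prevalence, hours_per_week):
    """Toy index: density of infected nymphs scaled by weekly habitat exposure.

    All scales are illustrative; actual entomologic risk measurement requires
    field sampling of host-seeking nymphs (see Appendix).
    """
    return nymph_density * infection_prevalence * hours_per_week

# Same tick density and habitat exposure, differing only in infection prevalence
# (about 25% in a highly enzootic northeastern site versus 2% in the South).
northeast = relative_exposure_index(20, 0.25, 10)
south = relative_exposure_index(20, 0.02, 10)
print(f"relative index, Northeast vs. South: {northeast:.0f} vs. {south:.1f}")
```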
The risk for Lyme disease differs not only among regions, states, and counties within states (Appendix) but even within counties and townships. Detailed information regarding the distribution of Lyme disease risk within specific areas is best obtained from state and local public health authorities.

The second step in determining Lyme disease risk is to assess a person's activities. Activities that place persons at high risk are those that involve frequent or prolonged exposure to the habitat of infected ticks at times of the year when the nymphal stages of these ticks are actively seeking hosts, which in most endemic areas is April-July. Typical habitats of Ixodes ticks are wooded, brushy, or overgrown grassy areas that are favorable for deer and the ticks' rodent hosts. Several recreational, property maintenance, occupational, or leisure pursuits that are carried out in tick habitat can be risky activities. When in highly endemic areas, persons can reduce their risk for Lyme disease and other tickborne illnesses by avoiding tick-infested habitats.
If exposure to a tick-infested habitat cannot be avoided, persons should use repellents, wear protective clothing, and regularly check themselves for ticks. Persons who are unlikely to seek medical care for early manifestations of Lyme disease can be at increased risk for Lyme disease complications. Morbidity from Lyme disease can be substantially reduced by detecting and treating the infection in its early stages, because early and correct treatment usually results in a prompt and uncomplicated cure.

# RECOMMENDATIONS FOR USE OF LYME DISEASE VACCINE

Lyme disease vaccine does not protect all recipients against infection with B. burgdorferi and offers no protection against other tickborne diseases. Vaccinated persons should continue to practice personal protective measures against ticks and should seek early diagnosis and treatment of suspected tickborne infections. Because Lyme disease is not transmitted person-to-person, use of the vaccine will not reduce risk among unvaccinated persons. Decisions regarding the use of vaccine should be based on individual assessment of the risk for exposure to infected ticks and on careful consideration of the relative risks and benefits of vaccination compared with other protective measures, including early diagnosis and treatment of Lyme disease. The risk for Lyme disease is focally distributed in the United States (Appendix). Detailed information regarding the distribution of Lyme disease risk within specific areas is best obtained from state and local public health authorities. The following recommendations are made regarding use of Lyme disease vaccine:

- Lyme disease vaccination may be considered for persons aged 15-70 years who are exposed to tick-infested habitat but whose exposure is neither frequent nor prolonged. The benefit of vaccination beyond that provided by basic personal protection and early diagnosis and treatment of infection is uncertain.
- Lyme disease vaccination is not recommended for persons who have minimal or no exposure to tick-infested habitat.
- Travelers can obtain some protection from two doses of vaccine but will not achieve optimal protection until the full series of three doses has been administered. All travelers to high- or moderate-risk areas during the Lyme disease transmission season should practice personal protection measures as described earlier and seek prompt diagnosis and treatment if signs or symptoms of Lyme disease develop. Lyme disease is endemic in some temperate areas of Europe and Asia; however, considerable heterogeneity of expression exists in the Eurasian strains of B. burgdorferi sensu lato that infect humans, and whether the rOspA vaccine licensed for use in the United States would protect against infection with Eurasian strains is uncertain.
- Pregnant Women: Because the safety of rOspA vaccines administered during pregnancy has not been established, vaccination of women who are known to be pregnant is not recommended. No evidence exists that pregnancy increases the risk for Lyme disease or its severity. Acute Lyme disease during pregnancy responds well to antibiotic therapy, and adverse fetal outcomes have not been reported in pregnant women receiving standard courses of treatment. A vaccine pregnancy registry has been established by SmithKline Beecham Pharmaceuticals. In the event that a pregnant woman is vaccinated, health-care providers are encouraged to register this vaccination by calling, toll-free, (800) 366-8900, ext. 5231.
- Persons with Immunodeficiency: Persons with immunodeficiency were excluded from the Phase III safety and efficacy trial, and no data are available regarding Lyme disease vaccine use in this group.
- Persons with Musculoskeletal Disease: Persons with diseases associated with joint swelling (including rheumatoid arthritis) or diffuse musculoskeletal pain were excluded from the Phase III safety and efficacy trial, and only limited data are available regarding Lyme disease vaccine use in such patients.
- Persons with a Previous History of Lyme Disease: Vaccination should be considered for persons with a history of previous uncomplicated Lyme disease who are at continued high risk. Persons who have treatment-resistant Lyme arthritis should not be vaccinated because of the association between this condition and immune reactivity to OspA. Persons with chronic joint or neurologic illness related to Lyme disease, as well as persons with second- or third-degree atrioventricular block, were excluded from the Phase III safety and efficacy trial; thus, the safety and efficacy of Lyme disease vaccine in such persons are unknown.
- Vaccine Schedule, Including Spacing and Timing of Administration: Three doses of the vaccine should be administered by intramuscular injection. The initial dose should be followed by a second dose 1 month later and a third dose 12 months after the first dose. Vaccine administration should be timed so that the second dose of the vaccine (year 1) and the third dose (year 2) are administered several weeks before the beginning of the B. burgdorferi transmission season, which usually begins in April. (A date-arithmetic sketch of this schedule follows the research agenda below.)
- Boosters: Whether protective immunity will last longer than 1 year beyond the month-12 dose is unknown. Data regarding antibody levels during a 20-month period after the first injection of LYMErix indicate that boosters beyond the month-12 dose might be necessary (see Immunogenicity). Additional data are needed before recommendations regarding vaccination with more than three doses of rOspA vaccine can be made.
- Simultaneous Administration with Other Vaccines: The safety and efficacy of the simultaneous administration of rOspA vaccine with other vaccines have not been established. If LYMErix must be administered concurrently with other vaccines, each vaccine should be administered in a separate syringe at a separate injection site.

# FUTURE CONSIDERATIONS

# Recommendations for Surveillance, Research, Education, and Program Evaluation Activities

- Determine the safety, immunogenicity, and efficacy of Lyme disease vaccine in children.
- Determine optimal vaccine dosage schedules and timing of administration.
- Determine the need for and spacing of booster doses.
- Determine the safety and efficacy of the vaccine in persons aged >70 years.
- Develop additional serodiagnostic tests that discriminate between infection and vaccine-induced antibody production.
- Develop a program of Lyme disease vaccine education for care providers and prospective vaccine clients.
- Develop an information sheet to be distributed to prospective vaccine recipients or to persons at the time of vaccine administration.
- Conduct surveillance for rare or late-developing adverse effects of vaccination.
- Establish postlicensure epidemiologic studies of safety, efficacy, prevention effectiveness, cost-effectiveness, and patterns of use.
- Develop a program to monitor vaccine use at the local, state, and national levels and to measure its public health and economic impact.
- Develop population-based studies to assess the impact of vaccine use on incidence of Lyme disease in communities.
- Continue to develop maps of geographic distribution of Lyme disease with improved accuracy and predictive power.
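To illustrate the 0-, 1-, and 12-month spacing and the pre-season timing recommended above, the following sketch computes the three dose dates from a first-dose date and checks that each falls before the April start of the transmission season. The example first-dose date, the 4-week margin standing in for "several weeks," and the helper names are assumptions for illustration; the month arithmetic deliberately uses the first of the month to sidestep day-of-month overflow.

```python
from datetime import date

def add_months(d: date, months: int) -> date:
    """Shift a date by whole months (day-of-month overflow not handled)."""
    y, m = divmod(d.month - 1 + months, 12)
    return d.replace(year=d.year + y, month=m + 1)

def lyme_vaccine_schedule(first_dose: date) -> list:
    """Doses at 0, 1, and 12 months after the first dose."""
    return [first_dose, add_months(first_dose, 1), add_months(first_dose, 12)]

def precedes_season(dose: date, margin_weeks: int = 4) -> bool:
    """True if the dose falls at least margin_weeks before the next April 1."""
    season = date(dose.year if dose.month < 4 else dose.year + 1, 4, 1)
    return (season - dose).days >= margin_weeks * 7

# Hypothetical first dose on February 1; doses 2 and 3 then land on March 1
# of year 1 and February 1 of year 2, both ahead of the April season start.
for i, d in enumerate(lyme_vaccine_schedule(date(2000, 2, 1)), start=1):
    print(f"dose {i}: {d}, precedes transmission season: {precedes_season(d)}")
```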
# APPENDIX

Lyme disease risk is measurable as a function of two epidemiologic parameters: entomologic risk and human exposure. Entomologic risk for Lyme disease is defined as the density per unit area of host-seeking nymphal ticks infected with Borrelia burgdorferi (1 ). Field studies needed for determination of entomologic risk require trained entomologists, and such studies are limited to a narrow seasonal window within the life-cycle of vector ticks. Limited resources preclude the direct measurement of entomologic risk over large geographic areas; therefore, indirect measures were used to estimate risk to develop this national Lyme disease risk map. First, data on vector distribution, abundance, B. burgdorferi infection prevalence, and human exposure were compiled on a county-unit scale for the United States. Then geographic information systems (GIS) technology was used to combine these data and categorize each of the 3,140 counties into four risk classes.

# ENTOMOLOGIC RISK

Vector Distribution

Vector data were obtained from a national distribution map of Ixodes scapularis and I. pacificus, which was previously published by CDC (2 ). These data delineate three classes of tick distribution based on all published and unpublished county collection records available to CDC before 1998. The three classes are as follows:

- established populations (≥6 ticks reported or more than one life stage);
- reported occurrence (<6 ticks reported and only one life stage); and
- absence of ticks or missing data.

Although these data are currently the best source of vector distribution available, many gaps exist because of uneven sampling efforts among the counties. Therefore, a neighborhood analysis GIS procedure was used to modify the original tick distribution to smooth absent data and minimize the impact of reporting gaps. In this process, the original tick coverage map was rasterized to 1 km, and each cell was given a numeric value corresponding to the county tick class (0 = absent; 1 = reported; and 2 = established). A neighborhood analysis was performed using ERDAS IMAGINE* image-processing software. This function employed a moving filter (25 by 25 km), which summed the values of the area surrounding each 1-km pixel and created a new focally smoothed image. An outline of counties was overlaid to define boundaries on the smoothed map, and new values were summed from the total pixel values for each county. The three original vector classes were maintained with the new classification. The revised map employed a threshold reclassification based on mean summary statistics generated from the neighborhood analysis. This procedure resulted in a weighted value for each county that was determined by the classes of surrounding counties, thus smoothing the map to minimize rough edges and isolated holes in the data. The modified vector distribution increased the number of counties containing I. scapularis and I. pacificus from 1,058 counties (34% of total counties) in the original data set to 1,404 (45% of total) in the modified version. This modification resulted in greater continuity among adjacent counties, as well as a less-conservative description of vector distribution.

# Infection Prevalence in Vectors

The prevalence of infection with B. burgdorferi is low throughout the distribution of I. pacificus (3 ), with the exception of one California county (4 ). Within the entire southern distribution of I. scapularis, prevalence of infection with B. burgdorferi is low compared with the Northeast and upper Midwest (3 ). One possible reason for these differences is the geographic variation in abundance of hosts that are competent reservoirs of infection for immature ticks. The white-footed mouse (Peromyscus leucopus) is the principal host for ticks in the Northeast and upper Midwest and is a competent reservoir for the spirochete. But in the Southeast and West Coast regions, reptiles appear to serve as major hosts for immature ticks, and reptiles are either inefficient or incompetent reservoir hosts for spirochetes. This pattern of tick-host association might result from the greater population density of lizards relative to rodents (5 ), resulting in reduced transmission rates in regions where lizards dominate. An index was created to map the effect of host-species composition on infection prevalence in I. scapularis ticks.

A literature survey was conducted to identify a complete list of hosts for I. scapularis (6 ). A total of 38 nondomestic host species was identified, including 32 mammal species and 6 reptile species. Birds were excluded because of their migratory nature and their uncertain role as natural reservoir hosts. Species range maps were obtained from the literature (7,8 ), then digitized by county into ArcView GIS software for presence or absence of reservoir hosts. The county data were then summed to determine the total host species composition available for I. scapularis. A ratio of total reptiles divided by the total hosts, multiplied by 100, was calculated for each county and mapped. The reptile ratio index delineates those areas having a high reptile-to-total-hosts ratio (>10) and forms a linear boundary, below which reptiles are more likely to serve as hosts for ticks. The geographic boundary runs roughly along the 38° north latitude from Virginia to Missouri. This reptile ratio illustrates that although total hosts in the northern states can be equal to those of the southern states, reptiles dilute the force of transmission, thus lowering the prevalence of infection in ticks and creating less of a risk to humans in the South.

# HUMAN EXPOSURE TO RISK

CDC case reports were used as a measure of human exposure to entomologic risk. County-specific data were compiled for the years 1994-1997. Counties comprising the ninetieth percentile of all human cases reported during this 4-year period were selected to represent counties with high human exposure. These 137 counties reported a minimum total of 23 cases. Heuristic (i.e., procedure-based) decision rules were employed to construct the national Lyme disease risk map. Expert decision rules were applied to construct the risk classification as follows:

# Risk Classes

- High Risk. Counties where I. scapularis or I. pacificus populations are established and where prevalence of infection is predicted to be high, and which are in the top tenth percentile of counties reporting human cases during the 4-year period, 1994-1997.

# FIGURE. National Lyme disease risk map with four categories of risk (high risk; moderate risk; low risk; minimal or no risk; areas of predicted Lyme disease transmission)

Note: This map demonstrates an approximate distribution of predicted Lyme disease risk in the United States. The true relative risk in any given county compared with other counties might differ from that shown here and might change from year to year. Risk categories are defined in the accompanying text. Information on risk distribution within states and counties is best obtained from state and local public health authorities.

The national map illustrates a clear focal pattern of Lyme disease risk, with the greatest risk occurring in the Northeast and upper Midwest regions. Overall, 115 (4%) counties were classified as high risk, followed by 146 (5%) moderate-risk, 1,143 (36%) low-risk, and 1,736 (55%) minimal or no-risk counties.

*ERDAS IMAGINE map production computer software, a product of ERDAS, Inc., 2801 Buford Highway, Atlanta, GA 30329-2137, (404) 248-9000.
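The neighborhood analysis described above can be approximated in a few lines. In the sketch below, the toy raster, the reclassification cutoffs, and the substitution of SciPy's uniform_filter for the ERDAS IMAGINE focal-sum function are all assumptions for illustration; only the 25-by-25 window on a 1-km grid and the class coding (0 = absent, 1 = reported, 2 = established) come from the text.

```python
import numpy as np
from scipy.ndimage import uniform_filter

# Toy 1-km raster of tick classes: 0 = absent, 1 = reported, 2 = established.
rng = np.random.default_rng(0)
raster = rng.choice([0, 1, 2], size=(200, 200), p=[0.6, 0.2, 0.2]).astype(float)

# 25 x 25 moving-window sum: uniform_filter returns the window mean, so
# multiplying by the window area gives the focal sum used in the analysis.
window = 25
focal_sum = uniform_filter(raster, size=window, mode="nearest") * window ** 2

# Threshold reclassification back into three classes. The real procedure
# derived thresholds from mean summary statistics; these cutoffs are invented.
cutoffs = [0.5 * window ** 2, 1.2 * window ** 2]
smoothed_class = np.digitize(focal_sum, bins=cutoffs)  # values 0, 1, or 2
print(np.bincount(smoothed_class.ravel()))  # cell counts per smoothed class
```

In the actual procedure, the smoothed pixel values were further re-aggregated within county boundaries before the threshold reclassification was applied.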
This report provides recommendations for use of a newly developed recombinant outer-surface protein A (rOspA) Lyme disease vaccine (LYMErix,™ SmithKline Beecham Pharmaceuticals) for persons aged 15-70 years in the United States. The purpose of these recommendations is to provide health-care providers, public health authorities, and the public with guidance regarding the risk for acquiring Lyme disease and the role of vaccination as an adjunct to preventing Lyme disease. The Advisory Committee on Immunization Practices recommends that decisions regarding vaccine use be made on the basis of assessment of individual risk, taking into account both geographic risk and a person's activities and behaviors relating to tick exposure. *sensu lato: including all subordinate taxa of a taxon that would otherwise be considered separately. † sensu stricto: excluding similar taxa that otherwise would be considered together.# INTRODUCTION Lyme disease is a tickborne zoonosis caused by infection with the spirochete Borrelia burgdorferi. The number of annually reported cases of Lyme disease in the United States has increased approximately 25-fold since national surveillance began in 1982; during 1993-1997, a mean of 12,451 cases annually were reported by states to CDC (1,2, CDC, unpublished data, 1998). In the United States, the disease is primarily localized to states in the northeastern, mid-Atlantic, and upper north-central regions, and to several areas in northwestern California (1 ). Lyme disease is a multisystem, multistage, inflammatory illness. In its early stages, Lyme disease can be treated successfully with oral antibiotics; however, untreated or inadequately treated infection can progress to late-stage complications requiring more intensive therapy. The first line of defense against Lyme disease and other tickborne illnesses is avoidance of tick-infested habitats, use of personal protective measures (e.g., repellents and protective clothing), and checking for and removing attached ticks. Early diagnosis and treatment are effective in preventing late-stage complications. Recently, two Lyme disease vaccines have been developed that use recombinant B. burgdorferi lipidated outer-surface protein A (rOspA) as immunogen -LYMErix,™ SmithKline Beecham Pharmaceuticals, and ImuLyme,™ Pasteur Mérieux Connaught. As of publication of this report, only LYMErix has been licensed by the U.S. Food and Drug Administration for use in the United States; therefore, these recommendations apply only to the use of that vaccine. Additional statements will be provided as other Lyme disease vaccines are licensed. Results of a large-scale, randomized, controlled (Phase III) trial of safety and efficacy of LYMErix in persons aged 15-70 years residing in disease-endemic areas of the northeastern and north-central United States indicate that the vaccine is safe and efficacious when administered on a three-dose schedule of 0, 1, and 12 months (3,4 ). Information regarding vaccine safety and efficacy beyond the transmission season immediately after the third dose is not available. Thus, the duration of protective immunity and need for booster doses beyond the third dose are unknown. # CLINICAL FEATURES OF LYME DISEASE Clinical Description Most often, Lyme disease is evidenced by a characteristic rash (erythema migrans) accompanied by nonspecific symptoms (e.g., fever, malaise, fatigue, headache, myalgia, and arthralgia) (5)(6)(7). 
The incubation period from infection to onset of erythema migrans is typically 7-14 days but can be as short as 3 days or as long as 30 days. Some infected persons have no recognized illness (i.e., asymptomatic infection determined by serologic testing), or they manifest only nonspecific symptoms (e.g., fever, headache, fatigue, and myalgia). Lyme disease spirochetes disseminate from the site of inoculation by cutaneous, lymphatic, and bloodborne routes. The signs of early disseminated infection usually occur from days to weeks after the appearance of a solitary erythema migrans lesion. In addition to multiple or secondary erythema migrans lesions, early disseminated infection can be manifested as disease of the nervous system, the musculoskeletal system, or the heart (5)(6)(7). Early neurologic manifestations include lymphocytic meningitis; cranial neuropathy, especially facial nerve palsy; and radiculoneuritis. Musculoskeletal manifestations can include migratory joint and muscle pains with or without objective signs of joint swelling. Cardiac manifestations are rare but can include myocarditis and transient atrioventricular block of varying degree. B. burgdorferi infection in the untreated or inadequately treated patient can progress to late-disseminated disease from weeks to months after infection (5)(6)(7). The most common objective manifestation of late-disseminated Lyme disease is intermittent swelling and pain of one or some joints, usually large, weight-bearing joints (e.g., the knee). Some patients experience chronic axonal polyneuropathy, or encephalopathy, the latter usually manifested by cognitive disorders, sleep disturbance, fatigue, and personality changes. Infrequently, Lyme disease morbidity can be severe, chronic, and disabling (8,9 ). An ill-defined post-Lyme disease syndrome occurs in some persons after treatment for Lyme disease (10)(11)(12). Lyme disease is rarely, if ever, fatal. # Diagnosis The diagnosis of Lyme disease is based primarily on clinical findings, and treating patients with early disease solely on the basis of objective signs and a known exposure is often appropriate (13 ). Serologic testing can, however, provide valuable supportive diagnostic information in patients with endemic exposure and objective clinical findings that indicate later-stage disseminated Lyme disease (13 ). When serologic testing is indicated, CDC recommends testing initially with a sensitive first test, either an enzyme-linked immunosorbent assay (ELISA) or an indirect fluorescent antibody test, followed by testing with the more specific Western immunoblot (WB) test to corroborate equivocal or positive results obtained with the first test (14 ). Although antibiotic treatment in early localized disease can blunt or abrogate the antibody response, patients with early disseminated or late-stage disease usually have strong serologic reactivity and demonstrate expanded WB immunoglobulin G (IgG) banding patterns to diagnostic B. burgdorferi antigens (15,16 ). Antibodies often persist for months or years after successfully treated or untreated infection. Thus, seroreactivity alone cannot be used as a marker of active disease. Neither positive serologic test results nor a history of previous Lyme disease ensures that a person has protective immunity. Repeated infection with B. burgdorferi has been reported (17 ). B. burgdorferi can be cultured from 80% or more of biopsy specimens taken from early erythema migrans lesions (18 ). 
However, the diagnostic usefulness of this procedure is limited because of the need for a special bacteriologic medium (i.e., modified Barbour-Stoenner-Kelly medium) and protracted observation of cultures. Polymerase chain reaction (PCR) has been used to amplify genomic DNA of B. burgdorferi in skin, blood, cerebrospinal fluid, and synovial fluid (19,20 ), but PCR has not been standardized for routine diagnosis of Lyme disease. # Treatment Lyme disease can usually be treated successfully with standard antibiotic regimens (5,6 ). Early and uncomplicated infection, including infection with isolated cranial nerve palsy, usually responds satisfactorily to treatment with orally administered antibiotics (21 ). Parenteral antibiotics are generally recommended for treating meningitis, carditis, later-stage neurologic Lyme disease, and complicated Lyme disease arthritis. Late, complicated Lyme disease might respond slowly or incompletely, and more than one antibiotic treatment course can be required to eliminate active infection (8,9 ). Refractory Lyme disease arthritis is associated with expression of certain Class II major histocompatibility complex (MHC II) molecules (22 ), and can require anti-inflammatory agents and surgical synovectomy for relief of symptoms (8 ). In a limited number of patients, persistent or recurrent symptoms after appropriate antibiotic therapy often can be attributed to causes other than persistent infection (22,23 ). # EPIDEMIOLOGY OF LYME DISEASE Antigenic Variation of B. burgdorferi Sensu Lato* In the United States, a number of genospecies of B. burgdorferi sensu lato have been isolated from animals and ticks, but only OspA expressing B. burgdorferi sensu stricto † has been isolated from humans (24 ). Existing evidence also demonstrates that rOspA vaccines will be protective against most if not all human infections in the United States (25 ). B. burgdorferi sensu stricto also occurs in Europe, but the dominant European and Asian genospecies are B. garinii and B. afzelii, both of which are antigenically distinct from B. burgdorferi sensu stricto (26 ) and vary in their expression of OspA. Vaccines using combinations of immunogenic proteins might be necessary to provide protection against multiple genospecies (27 ). # Routes of Transmission Humans acquire B. burgdorferi infection from infected ticks at the time the tick takes a blood meal (28 ); Lyme disease is not spread by person-to-person contact or by direct contact with infected animals. Transplacental transmission of B. burgdorferi has been reported (29,30 ), but the effects of such transmission on the fetus remain unclear. The results of two epidemiologic studies document that congenital Lyme disease must be rare, if it occurs at all (31,32 ). Transmission in breast milk has not been described. B. burgdorferi can be cultured from the blood in some patients with early acute infection, and it is able to survive for several weeks in stored blood. However, at least one study has found that the risk for transfusion-acquired infection is minimal (33 ). # Tick Vectors of Lyme Disease B. burgdorferi is transmitted to humans by ticks of the Ixodes ricinus complex (34 ). I. scapularis, the black-legged or deer tick, is the vector in the eastern United States; I. pacificus, the western black-legged tick, transmits B. burgdorferi in the western United States (35,36 ). I. scapularis is also a vector for human granulocytic ehrlichiosis and babesiosis (34,37 ). 
In their nymphal stage, these ticks feed predominantly in the late spring and early summer. The majority of Lyme disease cases result from bites by infected nymphs. In highly enzootic areas of the United States, approximately 15%-30% of questing I. scapularis nymphs and up to 14% of I. pacificus nymphs are infected with B. burgdorferi (38)(39)(40)(41). However, in the southern United States, the prevalence of infection in I. scapularis ticks is generally 0%-3% (36 ). The risk for acquiring Lyme disease in the United States varies with the distribution, density, and prevalence of infection in vector ticks (Appendix). During the past several decades, the distribution of I. scapularis has spread slowly in the northeastern and upper north-central regions of the United States (42 ). Although deer are not competent reservoirs of B. burgdorferi, they are the principal maintenance hosts for adult black-legged ticks, and the presence of deer appears to be a prerequisite for the establishment of I. scapularis in any area (43 ). The explosive repopulation in the eastern United States by white-tailed deer during recent decades has been linked to the spread of I. scapularis ticks and of Lyme disease in this region. The future limits of this spread are not known (42 ). # Distribution of Human Cases of Lyme Disease Lyme disease is endemic in several regions in the United States, Canada, and temperate Eurasia (1,44 ). The disease accounts for more than 95% of all reported cases of vectorborne illness in the United States. Using a national surveillance case definition (45 ), state health officials reported >62,000 cases to CDC during 1993-1997, and the national mean annual rate during this 5-year period was 5.5 cases/100,000 population (1,2, CDC, unpublished data, 1998). Persons of all ages are equally susceptible to infection, although the highest reported rates of Lyme disease occur in children aged <15 years and in adults aged 30-59 years (1 ). Both underreporting and overdiagnosis are common (46)(47)(48). Approximately 90% of cases are reported by approximately 140 counties located along the northeastern and mid-Atlantic seaboard and in the upper north-central region of the United States (Appendix). A rash similar to erythema migrans of Lyme disease, but not caused by B. burgdorferi infection, has been described in patients who have been bitten by ticks in the southern United States (49,50 ). This rash is suspected of being associated with the bite of Amblyomma americanum ticks (51 ). # Populations at Risk for Lyme Disease Most B. burgdorferi infections result from periresidential exposure to infected ticks (38,(52)(53)(54)(55) during property maintenance, recreation, and leisure activities. Thus, persons who live or work in residential areas surrrounded by woods or overgrown brush infested by vector ticks are at risk for acquiring Lyme disease. In addition, persons who participate in recreational activities away from home (e.g., hiking, camping, fishing, and hunting) in tick habitat and persons who engage in outdoor occupations (e.g., landscaping, brush clearing, forestry, and wildlife and parks management) in endemic areas might also be at elevated risk for acquiring Lyme disease (56)(57)(58). # PREVENTION AND CONTROL OF LYME DISEASE # Avoidance of Tick Habitat Whenever possible, persons should avoid entering areas that are likely to be infested with ticks, particularly in spring and summer when nymphal ticks feed. 
Ticks favor a moist, shaded environment, especially that provided by leaf litter and lowlying vegetation in wooded, brushy, or overgrown grassy habitat. Both deer and rodent hosts must be abundant to maintain the enzootic cycle of B. burgdorferi. Sources of information regarding the distribution of ticks in an area include state and local health departments, park personnel, and agricultural extension services. # Personal Protection Persons who are exposed to tick-infested areas should wear light-colored clothing so that ticks can be spotted more easily and removed before becoming attached. Wearing long-sleeved shirts and tucking pants into socks or boot tops can help keep ticks from reaching the skin. Ticks are usually located close to the ground, so wearing high rubber boots can provide additional protection. Applying insect repellents containing DEET (n,n-diethyl-m-toluamide) to clothes and exposed skin and applying permethrin, which kills ticks on contact, to clothes, should also help reduce the risk of tick attachment. DEET can be used safely on children and adults but should be applied according to the U.S. Environmental Protection Agency guidelines to reduce the possibility of toxicity (59 ). Because transmission of B. burgdorferi from an infected tick is unlikely to occur before 36 hours of tick attachment (28,60 ), daily checks for ticks and their prompt removal will help prevent infection. # Strategies for Reducing Tick Abundance The number of ticks in endemic residential areas can be reduced by removing leaf litter, brush, and woodpiles around houses and at the edges of yards and by clearing trees and brush to admit more sunlight, thus reducing deer, rodent, and tick habitat (61 ). Tick populations have also been effectively suppressed by applying pesticides to residential properties (62,63 ). Community-based interventions to reduce deer populations or to kill ticks on deer and rodents have not been extensively implemented, but might be effective in reducing communitywide risk for Lyme disease (64 ). The effectiveness of deer feeding stations equipped with pesticide applicators to kill ticks on deer and other baited devices to kill ticks on rodents is currently under evaluation. # Prophylaxis After Tick Bite The relative cost-effectiveness of postexposure treatment of tick bites to avoid Lyme disease in endemic areas is dependent on the probability of B. burgdorferi infection after a tick bite (65 ). In most circumstances, treating persons for tick bite alone is not recommended (6,66 ). Persons who are bitten by a deer tick should remove the tick and seek medical attention if any signs and symptoms of early Lyme disease, ehrlichiosis, or babesiosis develop during the ensuing days or weeks. # Early Diagnosis and Treatment Lyme disease is readily treatable in its early stages (5,6 ). The early diagnosis and proper antibiotic treatment of Lyme disease are important strategies for avoiding the morbidity and costs of complicated and late-stage illness. # LYME DISEASE VACCINE Description LYMErix is made from lipidated rOspA of B. burgdorferi sensu stricto. The rOspA protein is expressed in Escherichia coli and purified. Each 0.5-mL dose of LYMErix contains 30 µg of purified rOspA lipidated protein adsorbed onto aluminum hydroxide adjuvant. # Mechanism of Action Several studies in animals have provided evidence that B. 
burgdorferi in a vector tick undergoes substantial antigenic change between the time of tick attachment on a mammalian host and subsequent transmission of the bacterium to the host. The spirochetes residing in the tick gut at the initiation of tick feeding express primarily OspA. As tick feeding begins, the expression of outer-surface protein C (OspC) is increased and the expression of OspA is decreased, so that spirochetes that reach the mammalian host after passing through the tick salivary glands express primarily OspC (67 ). Thus, the rOspA vaccine might exert its principal protective effect by eliciting antibodies that kill Lyme disease spirochetes within the tick gut (68,69 ). # Route of Administration, Vaccination Schedule, and Dosage LYMErix is administered by intramuscular injection, 0.5 mL (30 µg), into the deltoid muscle. Three doses are required for optimal protection. The first dose is followed by a second dose 1 month later and a third dose administered 12 months after the first dose. Vaccine administration should be timed so that the second dose of the vaccine (year 1) and the third dose (year 2) are administered several weeks before the beginning of the B. burgdorferi transmission season, which usually begins in April. The safety and immunogenicity of alternate dosing schedules are currently being evaluated. # VACCINE PERFORMANCE # Safety # Randomized, Controlled Clinical (Phase III) Trial of LYMErix A total of 10,936 subjects aged 15-70 years living in Lyme disease-endemic areas were recruited at 31 sites and randomized to receive three doses of vaccine or placebo (3 ); 5,469 subjects received at least one 30-µg dose of rOspA vaccine, and 5,467 subjects received at least one injection of placebo. The subjects were then followed for 20 months. Information regarding adverse events that were believed to be related or possibly related to injection were available from 4,999 subjects in each group. Soreness at the injection site was the most frequently reported adverse event, which was reported without solicitation by 24.1% of vaccine recipients and 7.6% of placebo recipients (p < 0.001). Redness and swelling at the injection site were reported by <2% of either group but were reported more frequently among vaccine recipients than among those who received placebo (p < 0.001). Myalgia, influenza-like illness, fever, and chills were more common among vaccine recipients than placebo recipients (p < 0.001), but none of these was reported by more than 3.2% of subjects (3 ). Reports of arthritis were not significantly different between vaccine and placebo recipients, but vaccine recipients were significantly (p < 0.05) more likely to report arthralgia or myalgia within 30 days after each dose (70 ). No statistically significant differences existed between vaccine and placebo groups in the incidence of adverse events more than 30 days after receiving a dose, and no episodes of immediate hypersensitivity among vaccine recipients were noted (3 ). # Safety in Patients with Previously Diagnosed Lyme Disease The safety of three different dosage strengths of rOspA vaccine with adjuvant in 30 adults with previous Lyme disease was evaluated in an uncontrolled safety and immunogenicity trial (71 ). Doses were administered at 0, 1, and 2 months. Follow-up of subjects was conducted 1 month after the third dose. No serious adverse events were recorded during the study period. 
In the randomized controlled Phase III trial of LYMErix, the incidence of adverse events among vaccinees who were seropositive at baseline was similar to the incidence among those who were seronegative (70 ). The incidence of musculoskeletal symptoms within the first 30 days after vaccination was higher among vaccinees with a self-reported previous history of Lyme disease compared with vaccinees with no such history. This difference was not statistically significant at the p = 0.05 level in the placebo group. No statistically significant difference existed in the incidence of late musculoskeletal adverse events between vaccine and placebo recipients with a selfreported previous history of Lyme disease (70 ). # Risk for Possible Immunopathogenicity of rOspA Vaccine After infection with B. burgdorferi, persons who express certain MHC II molecules are more likely than others to develop chronic, poorly responsive Lyme arthritis associated with high levels of antibody to OspA in serum and synovial fluid (22 ). In chronic Lyme arthritis patients, the levels of antibody to OspA, and especially to the C-terminal epitope of OspA, have been found to correlate directly with the severity and duration of the arthritis (72 ). Researchers have proposed that an autoimmune reaction might develop within the joints of some Lyme arthritis patients as a result of molecular mimicry between the dominant T-cell epitope of OspA and human leukocyte function associated antigen 1 (hLFA-1) (73 ). The Phase III trial did not detect differences in the incidence of neurologic or rheumatologic disorders between vaccine recipients and their placebo controls during the 20 months after the initial dose (3 ). However, because the association between immune reactivity to OspA and treatment-resistant Lyme arthritis is poorly understood, the vaccine should not be administered to persons with a history of treatment-resistant Lyme arthritis. # Efficacy Randomized, Controlled Trial (Phase III) of LYMErix Using an intention-to-treat analysis, the vaccine efficacy in protecting against "definite" Lyme disease after two doses was 49% (95% confidence interval [CI] = 15%-69%) and after three doses was 76% (95% CI = 58%-86%) (3 ). (In this study, "definite" Lyme disease was defined as the presence of erythema migrans or objective neurologic, musculoskeletal, or cardiovascular manifestations of Lyme disease, plus laboratory confirmation of infection by cultural isolation, PCR positivity, or WB seroconversion.) Efficacy in protecting against asymptomatic infection (no recognized symptoms, but with WB seroconversions recorded in year 1 or year 2) was 83% (95% CI = 32%-97%) in year 1 and 100% (95% CI = 26%-100%) in year 2. # Immunogenicity A subset of adult subjects enrolled in the Phase III clinical trial of LYMErix was studied for the development of OspA antibodies at months 2, 12, 13, and 20 (3 ). At month 2, one month after the second injection, the geometric mean antibody titer (GMT) of IgG anti-OspA antibodies was 1,227 ELISA units/mL. Ten months later, the GMT had declined to 116 ELISA units/mL. At month 13, one month after the third injection, a marked anamnestic response resulted in a GMT of 6,006 ELISA units/mL. At month 20, the mean response had decreased to 1,991 ELISA units/mL (70 ). 
An analysis of antibody titers and the risk for developing Lyme disease for a subset of subjects enrolled in the Phase III clinical trial concluded that a titer >1,200 ELISA units/mL correlated with protection (SmithKline Beecham poster at Infectious Disease Society of America Conference, Denver, Colorado, November 1998). # Effect of Vaccination on the Serologic Diagnosis of Lyme Disease Care providers and laboratorians should be advised that vaccine-induced anti-rOspA antibodies routinely cause false-positive ELISA results for Lyme disease (74 ). Experienced laboratory workers, through careful interpretation of the results of WB, can usually discriminate between B. burgdorferi infection and previous rOspA immunization, because anti-OspA antibodies do not develop after natural infection. # COST-EFFECTIVENESS OF LYME DISEASE VACCINATION The cost of Lyme disease has been evaluated from both a societal and a thirdparty-payer perspective (75 ). The cost-effectiveness of vaccinating against Lyme disease has also been analyzed from a societal perspective (76 ). At an assumed cost of vaccination of $100/person/year, a vaccine effectiveness of 0.85, a probability of 0.85 of correctly identifying and treating early Lyme disease, and an assumed incidence of Lyme disease of 1,000/100,000 persons/year, the net cost of vaccination to society was $5,692/case averted and $35,375/complicated neurologic or arthritic case avoided (Figure 1). Using these same baseline assumptions, the societal cost of vaccination exceeds the cost of not vaccinating, unless the incidence of Lyme disease is >1,973/100,000 persons/year. Of the variables examined, the incidence of Lyme disease had the greatest impact on cost-effectiveness of vaccination. The likelihood of early diagnosis and treatment also has a substantial impact on vaccine costeffectiveness because of the reduced incidence of sequelae when Lyme disease is diagnosed and patients are treated early in the disease. Most disease-endemic states and counties report Lyme disease incidence that are substantially below 1,000/100,000 persons/year. For example, in 1997, the highest reported state incidence was 70/100,000 persons in Connecticut, and the highest reported county incidence was 600/100,000 population in Nantucket County, Massachusetts. However, some studies document that approximately 10%-15% of physician-diagnosed cases of Lyme disease are reported to state authorities in highly endemic areas (46,47 ). Epidemiologic studies of populations at high risk in the northeastern United States have estimated annual incidence of >1,000/100,000 persons/ year in several communities (77)(78)(79)(80). # ASSESSING THE RISK FOR LYME DISEASE The decision to administer Lyme disease vaccine should be made on the basis of an assessment of individual risk, which depends on a person's likelihood of being bitten by tick vectors infected with B. burgdorferi. This likelihood is primarily determined by the following: • density of vector ticks in the environment, which varies by place and season; • the prevalence of B. burgdorferi infection in vector ticks; and • the extent of person-tick contact, which is related to the type, frequency, and duration of a person's activities in a tick-infested environment. Assessing risk should include considering the geographic distribution of Lyme disease. The areas of highest Lyme disease risk in the United States are concentrated within some northeastern and north-central states. 
# ASSESSING THE RISK FOR LYME DISEASE

The decision to administer Lyme disease vaccine should be made on the basis of an assessment of individual risk, which depends on a person's likelihood of being bitten by tick vectors infected with B. burgdorferi. This likelihood is primarily determined by the following:
• the density of vector ticks in the environment, which varies by place and season;
• the prevalence of B. burgdorferi infection in vector ticks; and
• the extent of person-tick contact, which is related to the type, frequency, and duration of a person's activities in a tick-infested environment.
Assessing risk should include considering the geographic distribution of Lyme disease. The areas of highest Lyme disease risk in the United States are concentrated within some northeastern and north-central states. The risk for Lyme disease differs not only between regions, states, and counties within states (Appendix), but even within counties and townships. Detailed information regarding the distribution of Lyme disease risk within specific areas is best obtained from state and local public health authorities.
The second step in determining Lyme disease risk is to assess a person's activities. Activities that place persons at high risk are those that involve frequent or prolonged exposure to the habitat of infected ticks at times of the year when the nymphal stages of these ticks are actively seeking hosts, which in most endemic areas is April-July. Typical habitats of Ixodes ticks are wooded, brushy, or overgrown grassy areas that are favorable for deer and the ticks' rodent hosts. Several recreational, property maintenance, occupational, or leisure pursuits that are carried out in tick habitat can be risky activities.

# FIGURE 1. Cost-effectiveness of Lyme disease vaccination

When in highly endemic areas, persons can reduce their risk for Lyme disease and other tickborne illnesses by avoiding tick-infested habitats. If exposure to a tick-infested habitat cannot be avoided, persons should use repellents, wear protective clothing, and regularly check themselves for ticks. Persons who are unlikely to seek medical care for early manifestations of Lyme disease can be at increased risk for Lyme disease complications. Morbidity from Lyme disease can be substantially reduced by detecting and treating the infection in its early stages, because early and correct treatment usually results in a prompt and uncomplicated cure.

# RECOMMENDATIONS FOR USE OF LYME DISEASE VACCINE

Lyme disease vaccine does not protect all recipients against infection with B. burgdorferi and offers no protection against other tickborne diseases. Vaccinated persons should continue to practice personal protective measures against ticks and should seek early diagnosis and treatment of suspected tickborne infections. Because Lyme disease is not transmitted person-to-person, use of the vaccine will not reduce risk among unvaccinated persons. Decisions regarding the use of vaccine should be based on individual assessment of the risk for exposure to infected ticks and on careful consideration of the relative risks and benefits of vaccination compared with other protective measures, including early diagnosis and treatment of Lyme disease. The risk for Lyme disease is focally distributed in the United States (Appendix). Detailed information regarding the distribution of Lyme disease risk within specific areas is best obtained from state and local public health authorities. The following recommendations are made regarding use of Lyme disease vaccine:
• Lyme disease vaccination may be considered for persons aged 15-70 years who are exposed to tick-infested habitat but whose exposure is neither frequent nor prolonged. The benefit of vaccination beyond that provided by basic personal protection and early diagnosis and treatment of infection is uncertain.
• Lyme disease vaccination is not recommended for persons who have minimal or no exposure to tick-infested habitat.
• Travelers -Travelers can obtain some protection from two doses of vaccine but will not achieve optimal protection until the full series of three doses has been administered. All travelers to high- or moderate-risk areas during the Lyme disease transmission season should practice personal protection measures as described earlier and seek prompt diagnosis and treatment if signs or symptoms of Lyme disease develop. Lyme disease is endemic in some temperate areas of Europe and Asia; however, considerable heterogeneity of expression exists in the Eurasian strains of B. burgdorferi sensu lato that infect humans, and whether the rOspA vaccine licensed for use in the United States would protect against infection with Eurasian strains is uncertain.
• Pregnant Women -Because the safety of rOspA vaccines administered during pregnancy has not been established, vaccination of women who are known to be pregnant is not recommended. No evidence exists that pregnancy increases the risk for Lyme disease or its severity. Acute Lyme disease during pregnancy responds well to antibiotic therapy, and adverse fetal outcomes have not been reported in pregnant women receiving standard courses of treatment.
A vaccine pregnancy registry has been established by SmithKline Beecham Pharmaceuticals. In the event that a pregnant woman is vaccinated, health-care providers are encouraged to register this vaccination by calling, toll-free, (800) 366-8900, ext. 5231.
• Persons with Immunodeficiency -Persons with immunodeficiency were excluded from the Phase III safety and efficacy trial, and no data are available regarding Lyme disease vaccine use in this group.
• Persons with Musculoskeletal Disease -Persons with diseases associated with joint swelling (including rheumatoid arthritis) or diffuse musculoskeletal pain were excluded from the Phase III safety and efficacy trial, and only limited data are available regarding Lyme disease vaccine use in such patients.
• Persons with a Previous History of Lyme Disease
-Vaccination should be considered for persons with a history of previous uncomplicated Lyme disease who are at continued high risk.
-Persons who have treatment-resistant Lyme arthritis should not be vaccinated because of the association between this condition and immune reactivity to OspA.
-Persons with chronic joint or neurologic illness related to Lyme disease, as well as persons with second- or third-degree atrioventricular block, were excluded from the Phase III safety and efficacy trial, and thus, the safety and efficacy of Lyme disease vaccine in such persons are unknown.
• Vaccine Schedule, Including Spacing and Timing of Administration -Three doses of the vaccine should be administered by intramuscular injection. The initial dose should be followed by a second dose 1 month later and a third dose 12 months after the first dose. Vaccine administration should be timed so that the second dose of the vaccine (year 1) and the third dose (year 2) are administered several weeks before the beginning of the B. burgdorferi transmission season, which usually begins in April (see the schedule sketch following these recommendations).
• Boosters -Whether protective immunity will last longer than 1 year beyond the month-12 dose is unknown. Data regarding antibody levels during a 20-month period after the first injection of LYMErix indicate that boosters beyond the month-12 booster might be necessary (see Immunogenicity). Additional data are needed before recommendations regarding vaccination with more than three doses of rOspA vaccine can be made.
• Simultaneous Administration with Other Vaccines -The safety and efficacy of the simultaneous administration of rOspA vaccine with other vaccines have not been established. If LYMErix must be administered concurrently with other vaccines, each vaccine should be administered in a separate syringe at a separate injection site.
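A minimal sketch of the dosing calendar described in the schedule recommendation above (doses at 0, 1, and 12 months, with doses 2 and 3 timed ahead of the April start of the transmission season). The date-arithmetic helper is illustrative and assumes the day of the month exists in the target month.

```python
from datetime import date

def add_months(d, months):
    """Shift a date by whole months (assumes the day exists in the target month)."""
    years, month_index = divmod(d.month - 1 + months, 12)
    return d.replace(year=d.year + years, month=month_index + 1)

def lymerix_schedule(first_dose):
    """Dates for the three-dose series: 0, 1, and 12 months."""
    return {
        "dose 1": first_dose,
        "dose 2": add_months(first_dose, 1),   # 1 month after the first dose
        "dose 3": add_months(first_dose, 12),  # 12 months after the first dose
    }

# Example: a first dose in early February places doses 2 and 3 several weeks
# before the usual April onset of B. burgdorferi transmission.
for dose, when in lymerix_schedule(date(1999, 2, 1)).items():
    print(dose, when.isoformat())
```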
# FUTURE CONSIDERATIONS

# Recommendations for Surveillance, Research, Education, and Program Evaluation Activities

• Determine safety, immunogenicity, and efficacy of Lyme disease vaccine in children.
• Determine optimal vaccine dosage schedules and timing of administration.
• Determine the need for and spacing of booster doses.
• Determine safety and efficacy of the vaccine in persons aged >70 years.
• Develop additional serodiagnostic tests that discriminate between infection and vaccine-induced antibody production.
• Develop a program of Lyme disease vaccine education for care providers and prospective vaccine clients.
• Develop an information sheet to be distributed to prospective vaccine recipients or to persons at the time of vaccine administration.
• Conduct surveillance for rare or late-developing adverse effects of vaccination.
• Establish postlicensure epidemiologic studies of safety, efficacy, prevention effectiveness, cost-effectiveness, and patterns of use.
• Develop a program to monitor vaccine use at the local, state, and national levels and to measure its public health and economic impact.
• Develop population-based studies to assess the impact of vaccine use on incidence of Lyme disease in communities.
• Continue to develop maps of geographic distribution of Lyme disease with improved accuracy and predictive power.

# Appendix

Lyme disease risk is measurable as a function of two epidemiologic parameters: entomologic risk and human exposure. Entomologic risk for Lyme disease is defined as the density per unit area of host-seeking nymphal ticks infected with Borrelia burgdorferi (1 ). Field studies needed for determination of entomologic risk require trained entomologists, and such studies are limited to a narrow seasonal window within the life-cycle of vector ticks. Limited resources preclude the direct measurement of entomologic risk over large geographic areas; therefore, indirect measures were used to estimate risk to develop this national Lyme disease risk map. First, data on vector distribution, abundance, B. burgdorferi infection prevalence, and human exposure were compiled on a county-unit scale for the United States. Then geographic information systems (GIS) technology was used to combine these data and categorize each of the 3,140 counties into four risk classes.

# ENTOMOLOGIC RISK

Vector Distribution
Vector data were obtained from a national distribution map of Ixodes scapularis and I. pacificus, which was previously published by CDC (2 ). These data delineate three classes of tick distribution based on all published and unpublished county collection records available to CDC before 1998. The three classes are as follows:
• established populations (≥6 ticks reported or more than one life stage);
• reported occurrence (<6 ticks reported and only one life stage); and
• absence of ticks or missing data.
Although these data are currently the best source of vector distribution available, many gaps exist because of uneven sampling efforts among the counties. Therefore, a neighborhood analysis GIS procedure was used to modify the original tick distribution to smooth absent data and minimize the impact of reporting gaps. In this process, the original tick coverage map was rasterized to 1 km, and each cell was given a numeric value corresponding to the county tick class (0 = absent; 1 = reported; and 2 = established). A neighborhood analysis was performed using ERDAS IMAGINE* image-processing software. This function employed a moving filter (25 by 25 km), which summed the values of the area surrounding each 1-km pixel and created a new focally smoothed image. An outline of counties was overlaid to define boundaries on the smoothed map, and new values were summed from the total pixel values for each county. The three original vector classes were maintained with the new classification. The revised map employed a threshold reclassification based on mean summary statistics generated from the neighborhood analysis. This procedure resulted in a weighted value for each county that was determined by the classes of surrounding counties, thus smoothing the map to minimize rough edges and isolated holes in the data. The modified vector distribution increased the number of counties containing I. scapularis and I. pacificus from 1,058 counties (34% of total counties) in the original data set to 1,404 (45% of total) in the modified version. This modification resulted in greater continuity among adjacent counties, as well as a less-conservative description of vector distribution.
*ERDAS IMAGINE map production computer software, a product of ERDAS, Inc., 2801 Buford Highway, Atlanta, GA 30329-2137, (404) 248-9000, <http://www.erdas.com>.

# Infection Prevalence in Vectors

The prevalence of infection with B. burgdorferi is low throughout the distribution of I. pacificus (3 ), with the exception of one California county (4 ). Within the entire southern distribution of I. scapularis, prevalence of infection with B. burgdorferi is low compared with the Northeast and upper Midwest (3 ). One possible reason for these differences is the geographic variation in abundance of hosts that are competent reservoirs of infection for immature ticks. The white-footed mouse (Peromyscus leucopus) is the principal host for ticks in the Northeast and upper Midwest and is a competent reservoir for the spirochete. But in the Southeast and West Coast regions, reptiles appear to serve as major hosts for immature ticks, and reptiles are either inefficient or incompetent reservoir hosts for spirochetes. This pattern of tick-host association might result from the greater population density of lizards relative to rodents (5 ), resulting in reduced transmission rates in regions where lizards dominate. An index was created to map the effect of host-species composition on infection prevalence in I. scapularis ticks. A literature survey was conducted to identify a complete list of hosts for I. scapularis (6 ). A total of 38 nondomestic host species was identified, including 32 mammal species and 6 reptile species. Birds were excluded because of their migratory nature and their uncertain role as natural reservoir hosts. Species range maps were obtained from the literature (7,8 ), then digitized by county into ArcView GIS software for presence or absence of reservoir hosts. The county data were then summed to determine the total host-species composition available for I. scapularis. A ratio of total reptiles divided by total hosts, multiplied by 100, was calculated for each county and mapped. The reptile ratio index delineates those areas having a high reptile-to-total-hosts ratio (>10) and forms a linear boundary, below which reptiles are more likely to serve as hosts for ticks. The geographic boundary runs roughly along the 38° north latitude from Virginia to Missouri. This reptile ratio illustrates that although total hosts in the northern states can be equal to those of the southern states, reptiles dilute the force of transmission, thus lowering the prevalence of infection in ticks and creating less of a risk to humans in the South.

# HUMAN EXPOSURE TO RISK

CDC case reports were used as a measure of human exposure to entomologic risk. County-specific data were compiled for the years 1994-1997. Counties comprising the ninetieth percentile of all human cases reported during this 4-year period were selected to represent counties with high human exposure. These 137 counties reported a minimum total of 23 cases.
A heuristic, or procedure-based, decision rule was employed to construct the national Lyme disease risk map. Expert decision rules were applied to construct the risk classification as follows:

# Risk Classes

• High Risk. Counties where I. scapularis or I. pacificus populations are established and where prevalence of infection is predicted to be high, and which are in the top tenth percentile of counties reporting human cases during the 4-year period, 1994-1997.

# National Lyme disease risk map with four categories of risk (high risk; moderate risk; low risk; minimal or no risk; areas of predicted Lyme disease transmission)

Note: This map demonstrates an approximate distribution of predicted Lyme disease risk in the United States. The true relative risk in any given county compared with other counties might differ from that shown here and might change from year to year. Risk categories are defined in the accompanying text. Information on risk distribution within states and counties is best obtained from state and local public health authorities.
The national map illustrates a clear focal pattern of Lyme disease risk, with the greatest risk occurring in the Northeast and upper Midwest regions. Overall, 115 (4%) counties were classified as high risk, followed by 146 (5%) moderate risk, 1,143 (36%) low risk, and 1,736 (55%) minimal or no-risk counties.
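The county classification just described is essentially a small data pipeline: smooth the tick raster, compute the reptile ratio, flag top-decile case counts, and apply the decision rule. The sketch below restates that logic in code; it is illustrative only (the published analysis used ERDAS IMAGINE and ArcView GIS on actual county data), and the fallback tier is a placeholder because the moderate, low, and minimal risk definitions are truncated in the text above.

```python
from scipy.ndimage import uniform_filter

def smooth_tick_raster(tick_raster):
    """Neighborhood analysis: moving 25 x 25 km sum over a 1-km numpy raster
    whose cells hold 0 (absent), 1 (reported), or 2 (established)."""
    # uniform_filter computes the window mean; scale by window area for the sum.
    return uniform_filter(tick_raster.astype(float), size=25) * 25 * 25

def reptile_ratio(reptile_species, total_species):
    """100 * reptile hosts / total hosts; values >10 mark areas where reptiles
    dilute transmission, so predicted infection prevalence in ticks is low."""
    return 100.0 * reptile_species / total_species

def classify_county(vectors_established, ratio, cases_1994_1997):
    """Expert decision rule for the 'high risk' tier (>=23 reported cases
    marked the top tenth percentile of counties for 1994-1997)."""
    high_prevalence_predicted = ratio <= 10
    top_decile_cases = cases_1994_1997 >= 23
    if vectors_established and high_prevalence_predicted and top_decile_cases:
        return "high risk"
    return "moderate/low/minimal risk (definitions truncated in the source)"

print(classify_county(True, reptile_ratio(2, 30), cases_1994_1997=40))  # high risk
```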
These recommendations include information on two vaccines recently licensed for use among infants: Haemophilus b Conjugate Vaccine (PRP-T [ActHIB TM , OmniHIB TM ]), manufactured by Pasteur Mérieux Vaccins, and TETRAMUNE TM , manufactured by Lederle Laboratories/Praxis Biologics. This statement also updates recommendations for use of other available Haemophilus b vaccines (PRP-D [ProHIBiT ® ]; HbOC [HibTITER ® ]; and PRP-OMP [PedvaxHIB ® ]) for infants and children.

# INTRODUCTION

The incidence of Haemophilus influenzae type b (Hib) disease in the United States has declined since the mid-1980s (1 ). Before the introduction of effective vaccines, Hib was the leading cause of bacterial meningitis and other invasive bacterial disease among children <5 years of age; approximately one in 200 children developed invasive Hib disease before the age of 5 years. Nearly all infections occurred among children <5 years of age, and approximately two thirds of all cases occurred among children <18 months of age. Meningitis occurred in approximately two thirds of children with invasive Hib disease, resulting in hearing impairment or neurologic sequelae in 15%-30%. The case-fatality rate was 2%-5% (2 ).
In 1985, the first Hib vaccines were licensed for use in the United States. These vaccines contained purified polyribosylribitol phosphate (PRP) capsular material from type b strains. Antibody against PRP was shown to be the primary component of serum bactericidal activity against the organism. Although the vaccine was highly effective in trials in Finland among children ≥18 months of age (3 ), postmarketing efficacy studies in the United States demonstrated variable efficacy (4,5 ). PRP vaccines were ineffective in children <18 months of age because of the T-cell-independent nature of the immune response to PRP polysaccharide (3 ). Conjugation of the PRP polysaccharide with protein carriers confers T-cell-dependent characteristics to the vaccine and substantially enhances the immunologic response to the PRP antigen. By 1989, Hib conjugate vaccines (PRP-D [ProHIBiT ® ]; HbOC [HibTITER ® ]; and PRP-OMP [PedvaxHIB ® ]) were licensed for use among children ≥15 months of age. In late 1990, following two prospective studies, PRP-OMP (6 ) and HbOC (7 ) were licensed for use among infants. On the basis of findings establishing comparable immunogenicity, a third conjugate vaccine, PRP-T (ActHIB TM , OmniHIB TM ), has now been licensed for use among infants. Specific characteristics of the four conjugate vaccines available for infants and children vary (e.g., the type of protein carrier, the size of the polysaccharide, and the chemical linkage between the polysaccharide and carrier) (Table 1).
Current recommendations for universal vaccination of infants require parenteral administration of three different vaccines (diphtheria-tetanus-pertussis [DTP], Hib conjugate, and hepatitis B) during two or three different visits to a health-care provider. Combination vaccines were developed to reduce the number of injections at each visit. TETRAMUNE TM is the first licensed combination vaccine that provides protection against diphtheria, tetanus, pertussis, and Hib disease.
This statement a) summarizes current immunogenicity and efficacy data regarding Hib conjugate vaccines, including PRP-T; b) summarizes available information regarding the safety and immunogenicity of TETRAMUNE TM ; and c) provides updated recommendations from the Advisory Committee on Immunization Practices (ACIP) for use of conjugate Hib vaccines and TETRAMUNE TM for infants and children.

# HIB CONJUGATE VACCINES

# Immunogenicity

Studies have been performed with all four Hib conjugate vaccines to determine immunogenicity in infants 2-6 months of age. Direct comparison of studies is complicated by differing vaccination and blood-collection regimens and interlaboratory variation in assays for measurement of PRP antibody. Also, the precise level of antibody required for protection against invasive disease is not clearly established. However, a geometric mean titer (GMT) of 1 µg/mL 3 weeks postvaccination correlated with protection in studies following vaccination with unconjugated PRP vaccine and suggests long-term protection from invasive disease (8 ). After three vaccinations at ages 2, 4, and 6 months, each of three Hib conjugate vaccines (HbOC, PRP-OMP, and PRP-T) produced protective levels of anticapsular antibody (9-13 ). One study reported comparative immunogenicity with PRP-OMP, HbOC, and PRP-T (12 ), and two other studies compared immunogenicity with all four conjugate vaccines (11,13 ) (Table 2). In this age group, only PRP-OMP vaccine produced a substantial increase in antibody after one dose (12,13 ). Antibody response among infants following a series of three infant vaccinations with PRP-D is limited (only 15%-45% of infants develop a GMT ≥1 µg/mL after 3 doses) (11,13,14 ) and is lower than with HbOC, PRP-OMP, or PRP-T.
With each conjugate vaccine, antibody levels decline after administration of the primary series. Regardless of the conjugate vaccine used in the primary series for infants, booster vaccination of children ≥12 months of age with any of the licensed conjugate vaccines will likely elicit an adequate response, as studies indicate that a) unconjugated PRP (administered at 12-14 months of age) elicits a good booster response in infants who are administered PRP-OMP (15 ), HbOC (15,16 ), or PRP-T (15,16 ) during infancy; b) PRP-D administered as a booster at age 15 months induces adequate immunologic response regardless of the Hib conjugate administered in the initial series (17 ); and c) each conjugate vaccine demonstrates adequate immunogenicity when administered as a single vaccination to children ≥15 months of age (18,19 ).
Limited information is available regarding the interchangeability of different Hib vaccines for the primary series at 2, 4, and 6 months of age (Table 3). Preliminary findings from two studies suggest that the vaccination series consisting of PRP-OMP at 2 months, followed by either PRP-T (20,21 ) or HbOC (20,22 ) at 4 months and 6 months, induces adequate anti-PRP antibody response. The sequence HbOC, PRP-T, PRP-T was also immunogenic (20 ) after the complete primary series was administered. This information suggests that any combination of three doses of Hib conjugate vaccines licensed for use among infants will provide adequate protection.
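Because antibody titers are roughly log-normally distributed, the GMT used throughout this section is the exponential of the mean log titer rather than the arithmetic mean. A minimal sketch, with illustrative placeholder titers:

```python
import math

def geometric_mean_titer(titers):
    """GMT = exp(mean(log(titer)))."""
    return math.exp(sum(math.log(t) for t in titers) / len(titers))

# Illustrative anti-PRP titers (ug/mL) from five infants after a third dose.
print(f"GMT = {geometric_mean_titer([0.4, 1.2, 3.5, 8.0, 2.2]):.2f} ug/mL")
```

Note how a single high titer pulls the arithmetic mean (3.06 µg/mL here) well above the GMT (about 1.97 µg/mL), which is why the GMT is the conventional summary for titer data.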
The carrier proteins used in PRP-T, PRP-D, and HbOC (but not PRP-OMP) are derived from the toxoids used in DTP vaccine (Table 1). Limited data have indicated that prior or concurrent administration of DTP vaccine may enhance anti-PRP antibody response following vaccination with these Hib vaccines (23-26 ). It has been suggested that priming T-cells to the carrier proteins may be important for optimal antibody response to the conjugated vaccine. For infants, the immunogenicity of PRP-OMP (which has a carrier derived from the meningococcal outer-membrane protein) appears to be unaffected by the absence of prior or concurrent DTP vaccination (24 ).

# Efficacy

Efficacy studies of PRP-D, PRP-OMP, HbOC, and PRP-T vaccines administered to infants 2-6 months of age are summarized in this report (Table 4). Two randomized, double-blind trials evaluating PRP-T vaccine efficacy were discontinued in the United States in October 1990 after licensure was granted to HbOC and PRP-OMP vaccines. A third efficacy trial of PRP-T in England has recently been completed.
Efficacy of the PRP-D conjugate has varied with the age of the population studied. Postlicensure studies performed in the United States among children aged 15-60 months demonstrated point estimates of efficacy ranging from 74% to 96%. Two studies have prospectively evaluated efficacy of PRP-D administered to children <12 months of age. In a trial in Finland involving 114,000 infants vaccinated at 3, 4, and 6 months of age, the point estimate of efficacy was 89% (95% confidence interval [CI] = 70-96) (27 ). In comparison, among Alaskan Natives vaccinated at ages 2, 4, and 6 months, the point estimate of efficacy was 35% (95% CI = −57 to 73) and not significantly different from zero (28 ). PRP-OMP was shown to be 93%-100% efficacious in Navajo infants (a population at particularly high risk for disease) vaccinated at 2 and 4 months of age (6 ). Two studies have indicated a point estimate of efficacy of ≥97% following the administration of two doses of HbOC (in Finland) (29 ) or three doses of vaccine (in the United States) (7 ) to infants. Although the trials evaluating PRP-T vaccine efficacy among infants in the United States were terminated early, no cases of invasive Hib disease were reported among more than 6,200 vaccinees at the time of termination (30; J.C. Parke, Carolinas Medical Center, unpublished data). In a trial in Great Britain, PRP-T was protective in infants vaccinated at ages 2, 3, and 4 months (31 ). Efficacy of PRP-T was also suggested in a nationwide immunization program that was implemented in Finland in January 1990. There were no reported cases of invasive Hib disease among more than 97,000 infants who received ≥2 doses of PRP-T. Two children developed disease after one dose of vaccine (32 ).
In the United States, 30-50 cases of invasive Hib disease have been reported among children who had received appropriate vaccination with a Hib conjugate vaccine and therefore were expected to have had adequate levels of protective antibody (33-35 ). These cases may represent vaccine failure. Results of immunologic evaluation of children who had vaccine failure vary with the age of the child. In a study of children vaccinated with Hib conjugate vaccine at age ≥15 months, subnormal immunoglobulin concentrations were present in approximately 40% of those who developed invasive Hib disease (33 ). However, in a separate study that evaluated fully vaccinated children <12 months of age who developed invasive Hib disease, only 9% had evidence of low immunoglobulin levels (34 ).
# COMBINATION VACCINES

# TETRAMUNE TM

On March 30, 1993, the Food and Drug Administration licensed TETRAMUNE TM , a vaccine manufactured by Lederle Laboratories/Praxis Biologics by combining a licensed DTP vaccine (TRI-IMMUNOL ® ) and HbOC (HibTITER ® ). TETRAMUNE TM contains 12.5 Lf of diphtheria toxoid, 5 Lf of tetanus toxoid, an estimated 4 protective units of pertussis vaccine, 10 µg of purified Hib polysaccharide, and approximately 25 µg of CRM197 protein. The single-dose volume is 0.5 mL, to be administered intramuscularly.

# Safety and Immunogenicity

The safety of TETRAMUNE TM vaccine was evaluated in a study of 6,497 infants in California who received doses at ages 2, 4, and 6 months, compared with 3,935 infants who received DTP (TRI-IMMUNOL ® ) and HbOC vaccines as separate concurrent injections (36; Lederle Laboratories/Praxis Biologics, unpublished data). Based on follow-up of a randomized subset of 1,411 infants (for whom 1,347 parents were interviewed), the risk for local and systemic reactions was similar for infants who received the combination product when compared with those who received two separate injections of DTP and HbOC. The infants who received the combination vaccine, however, experienced a higher likelihood of restless sleep following the second dose and ≥1 inch of swelling at the injection site after administration of the first dose of the combination vaccine (Table 5).
Immunogenicity studies among infants who received TETRAMUNE TM at ages 2, 4, and 6 months (37; Lederle Laboratories/Praxis Biologics, unpublished data) indicated that antibody responses to Hib PRP, diphtheria, and tetanus toxins were comparable to or higher than those in persons who received separate but concurrent administration of DTP and HbOC vaccines. Similarly, antibody responses to pertussis toxin, filamentous hemagglutinin, 69-kd outer-membrane protein (pertactin), and pertussis agglutinins were comparable to or higher than those following separate concurrent administration of DTP and HbOC vaccines (Table 6). Comparable immunogenicity results also were reported for children aged 15-18 months (who had received prior doses of DTP, but not Hib vaccine) following a dose of TETRAMUNE TM when compared with separate but concurrent administration of DTP and HbOC vaccines.

# Efficacy

No efficacy information is available for TETRAMUNE TM , but the immunogenicity information suggests that TETRAMUNE TM will provide protection against Hib disease and diphtheria, tetanus, and pertussis similar to that provided by the separate administration of HbOC and DTP vaccines.

# Other combination vaccines

Published studies have evaluated the safety and immunogenicity of other vaccines that combine DTP (or DTaP [acellular pertussis component]) and Hib in the same formulation or the same syringe. However, none of these formulations (PRP-D-DTaP [38 ], HbOC-DTaP [39,40 ], or PRP-T-DTP [41 ]) has been licensed for use. Two studies have shown a slight reduction in the immune response to pertussis when certain Hib vaccines were combined with DTP (42,43 ), but the magnitude of the effect is unlikely to be clinically relevant.

# Simultaneous vaccination

Large prelicensure and postlicensure studies have demonstrated the safety, immunogenicity, and efficacy of each of the licensed Hib conjugate vaccines administered to infants concurrently with DTP vaccine and oral polio vaccine (OPV) (6,7,9,32 ), as well as the safety and immunogenicity of TETRAMUNE TM when administered concurrently with OPV in recommended schedules (36,37 ).
More limited data also support the safety and immunogenicity of Hib conjugate vaccines when administered simultaneously with hepatitis B vaccine to infants, and with DTP, OPV, measles, mumps, and rubella (MMR), and/or hepatitis B vaccine when administered at ages 12-18 months.

# Recommendations for Hib Vaccination

# General

All infants should receive a conjugate Hib vaccine (separate or in combination with DTP [TETRAMUNE TM ]), beginning at age 2 months (but not earlier than 6 weeks). If the first vaccination is delayed beyond age 6 months, the schedule of vaccination for previously unimmunized children should be followed (Table 7; the age-based logic is sketched in code after the PRP-D section below). When possible, the Hib conjugate vaccine used at the first vaccination should be used for all subsequent vaccinations in the primary series. When either Hib vaccines or TETRAMUNE TM is used, the vaccine should be administered intramuscularly using a separate syringe and at a separate site from any other concurrent vaccinations.

# HbOC or PRP-T

Previously unvaccinated infants aged 2-6 months should receive three doses of vaccine administered 2 months apart, followed by a booster dose at age 12-15 months, at least 2 months after the last vaccination. Unvaccinated children aged 7-11 months should receive two doses of vaccine, 2 months apart, followed by a booster dose at age 12-18 months, at least 2 months after the last vaccination. Unvaccinated children aged 12-14 months should receive two doses of vaccine, at least 2 months apart. Any previously unvaccinated child aged 15-59 months should receive a single dose of vaccine.

# PRP-OMP

Previously unvaccinated infants aged 2-6 months should receive two doses of vaccine administered at least 2 months apart. Although PRP-OMP induces a substantial antibody response after one dose, all children should receive all recommended doses of PRP-OMP. Because of the substantial antibody response after one dose, it may be advantageous to use PRP-OMP vaccine in populations that are known to be at increased risk for disease during early infancy (e.g., Alaskan Natives). A booster dose should be administered to all children at 12-15 months of age, at least 2 months after the last vaccination. Unvaccinated children aged 7-11 months should receive two doses of vaccine, 2 months apart, followed by a booster dose at 12-18 months of age, at least 2 months after the last dose. Unvaccinated children aged 12-14 months should receive two doses of vaccine, 2 months apart. Any previously unvaccinated child 15-59 months of age should receive a single dose of vaccine.

# PRP-D

One dose of PRP-D may be administered to unvaccinated children aged 15-59 months. This vaccine may be used as a booster dose at 12-18 months of age following a two- or three-dose primary series, regardless of the vaccine used in the primary series. This vaccine is not licensed for use among infants because of its limited immunogenicity and variable protective efficacy in this age group.
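The age-banded schedules above reduce to a simple lookup on age at the first dose. A minimal sketch follows (ages in months; an illustrative helper, not clinical software):

```python
def hib_primary_series(age_months, vaccine):
    """Primary series by age at first vaccination, per the recommendations above."""
    if vaccine not in ("HbOC", "PRP-T", "PRP-OMP"):
        raise ValueError("this sketch covers only the conjugate vaccines above")
    if 2 <= age_months <= 6:
        doses = 2 if vaccine == "PRP-OMP" else 3
        return (f"{doses} doses 2 months apart, booster at 12-15 months of age "
                "(>=2 months after the last dose)")
    if 7 <= age_months <= 11:
        return ("2 doses 2 months apart, booster at 12-18 months of age "
                "(>=2 months after the last dose)")
    if 12 <= age_months <= 14:
        return "2 doses at least 2 months apart"
    if 15 <= age_months <= 59:
        return "a single dose"
    return "outside the age range covered by these recommendations"

print(hib_primary_series(4, "PRP-OMP"))  # 2 doses 2 months apart, booster ...
```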
# TETRAMUNE TM

The combination vaccine TETRAMUNE TM may be used for routine vaccination of infants, beginning at age 2 months, to prevent diphtheria, tetanus, pertussis, and invasive Hib disease. Previously unvaccinated infants aged 2-6 months should receive three doses administered at least 2 months apart. An additional dose should be administered at 12-15 months of age, after at least a 6-month interval following the third dose. Alternatively, acellular DTP and Hib vaccine can be administered as separate injections at 12-15 months of age. Acellular DTP is preferred for doses four and five of the five-dose DTP series.
For infants who begin both Hib and DTP vaccinations late (after 2 months of age), TETRAMUNE TM may be used for the first and second doses of the vaccine series. However, because delay in initiation of the DTP series does not reduce the number of required doses of DTP, additional doses of DTP without Hib are necessary to ensure that all four doses are administered. Infants aged 7-11 months who have not previously been vaccinated with DTP or Hib vaccines should receive two doses of TETRAMUNE TM , administered at least 2 months apart, followed by a dose of DTP vaccine 4-8 weeks after the second dose of TETRAMUNE TM . An additional dose of DTP and Hib vaccines should then be administered: DTP vaccine at least 6 months after the third immunizing dose against diphtheria, tetanus, and pertussis; and Hib vaccine at 12-18 months of age, at least 2 months after the last Hib dose.
TETRAMUNE TM may be used to complete an infant immunization series started with any Hib vaccine (licensed for use in this age group) and with any DTP vaccine if both vaccines are to be administered simultaneously. Completion of the primary series using the same Hib vaccine, however, is preferable. Conversely, any DTP vaccine may be used to complete a series initiated with TETRAMUNE TM (see the general ACIP statement on Diphtheria, Tetanus and Pertussis: Recommendations for Vaccine Use and Other Preventive Measures [44 ] for further information).

# Other considerations for Hib vaccination

Other considerations for Hib vaccination include the following: 1) Although an interval of 2 months between doses of Hib vaccine in the primary series is recommended, an interval of 1 month is acceptable, if necessary.

# Adverse reactions

Adverse reactions to each of the four Hib conjugate vaccines are generally uncommon. Swelling, redness, and/or pain have been reported in 5%-30% of recipients and usually resolve within 12-24 hours. Systemic reactions such as fever and irritability are infrequent. Available information on side effects and adverse reactions suggests that the risks for local and systemic events following TETRAMUNE TM administration are similar to those following concurrent administration of its individual component vaccines (i.e., DTP and Hib vaccines) and may be due largely to the pertussis component of the DTP vaccine (52 ).
Surveillance regarding the safety of TETRAMUNE TM , PRP-T, and other Hib vaccines in large-scale use aids in the assessment of vaccine safety by identifying potential events that may warrant further study. The Vaccine Adverse Event Reporting System (VAERS) of the U.S. Department of Health and Human Services encourages reports of all serious adverse events that occur after receipt of any vaccine. Invasive Hib disease is a reportable condition in 43 states. All health-care workers should report any case of invasive Hib disease to local and state health departments.

# Contraindications and Precautions

Vaccination with a specific Hib conjugate vaccine is contraindicated in persons known to have experienced anaphylaxis following a prior dose of that vaccine. Vaccination should be delayed in children with moderate or severe illnesses. Minor illnesses (e.g., mild upper-respiratory infection) are not contraindications to vaccination.
Contraindications and precautions of the use of TETRAMUNE TM are the same as those for its individual component vaccines (i.e., DTP or Hib) (see the general ACIP statement on Diphtheria, Tetanus, and Pertussis: Recommendations for Vaccine Use and Other Preventive Measures [44 ] for more details on the use of vaccines containing DTP). # The following CDC staff members prepared this report: # Advisory Committee on Immunization Practices Membership List, October 1993 -Continued
None
None
db29efcc9b1369cc37601a2af1d1842df9202278
cdc
None
the nation's, the proportion of total homicide-attributable YPLL in Michigan involving blacks is 68% compared with 44% in the nation. These differences largely reflect the higher homicide rate for blacks in Michigan than for the U.S. black population. Black homicide victims are also slightly younger than white victims in Michigan; in 1985, they had an average of 33 YPLL per homicide death, compared with 31 for whites. Examining descriptive data such as those presented here is important for public health agencies addressing homicide. In addition, analytic studies of potentially modifiable risk factors are needed. Because 67% of Michigan's homicides in 1985 occurred in the Detroit area, these data highlight the importance of implementing and evaluating prevention measures, such as the recently implemented handgun ordi nance, in Detroit. At the state level, excess homicide has led to plans to integrate health department and police data bases for surveillance of homicide. These data may help define factors associated with excess homicide in Michigan. # INTRODUCTION Since measles vaccine was introduced in the United States in 1963, the reported incidence of measles has decreased 99%, and indigenous measles transmission has been eliminated from most of the country. However, the goal to eliminate measles by October 1982 has not been met. Between 1981 and 1987, a low of 1497 (1983) to a high of 6282 (1986) cases were reported annually (7). Two major types of outbreaks have occurred recently in the United States: those among unvaccinated preschool-aged children, including children younger than the recommended age for routine vaccination (i.e., 15 months), and those among vac cinated school-aged children (2). Large outbreaks among unvaccinated preschoolaged children have occurred in several inner-city areas. In these outbreaks, up to 88% of cases in vaccine-eligible children 16 months to 4 years of age were unvaccinated; as many as 40% of all cases occurred in children <16 months of age. Surveys of immunization levels in areas where these outbreaks occurred indicate that only 49%-65% of 2-year-olds had received measles vaccine (3). Many outbreaks have occurred among school-aged children in schools with vaccination levels above 98%. These outbreaks have occurred in all parts of the country. Attack rates in individual schools have been low (1%-5%), and the calculated vaccine efficacy has been high. Primary vaccine failures (i.e., the approximately January 13,1989 ACIP: Measles -Continued 2%-10% of vaccinees who fail to seroconvert after measles vaccination) have played a substantial role in transmission. In many of these outbreaks, children vaccinated at 12-14 months of age have had higher attack rates than those vaccinated at older ages (4). In a few outbreaks (5,6), persons vaccinated in the more distant past, independent of age at vaccination, have been at increased risk for disease. However, no conclusive data indicate that waning vaccine-induced immunity itself has been a major problem. # EVALUATION OF THE CURRENT MEASLES ELIMINATION STRATEGY The current measles elimination strategy calls for administration of one dose of measles vaccine at 15 months of age (7). A documented history of vaccination at or after 12 months of age, however, is considered appropriate vaccination. High immunization levels, along with careful surveillance and aggressive outbreak control, are the three essential elements of this strategy. 
The Immunization Practices Advisory Committee (ACIP) has periodically reviewed the current strategy and progress toward measles elimination (7). At a recent meeting, the ACIP again reviewed the epidemi ology of measles in the United States as well as recommendations, made by a group of consultants convened by CDC in February 1988, for modification of the measles elimination strategy. To increase vaccine coverage among preschool-aged children in inner-city areas, the ACIP considered it essential that research be conducted to determine ways to increase vaccine delivery. A variety of additions and/or changes in the current strategy were considered, including a routine two-dose measles vaccination schedule and a one-time mass revaccination for school-aged children. Two new strategies were recommended and are described below (Table 1). # NEW RECOMMENDATIONS Changes in vaccination schedule in areas with recurrent measles transmission among preschool-aged children To improve immunity levels in high-risk children <15 months of age, the ACIP recommends that a routine two-dose vaccination schedule for preschoolers be implemented in areas with recurrent measles transmission (i.e., counties with more than five reported cases among preschool-aged children during each of the last 5 years). If recurrent measles transmission is occurring in defined parts of a county, local officials may elect to implement the routine two-dose schedule selectively in those parts. Health authorities in other urban areas that have experienced recent outbreaks among unvaccinated preschool-aged children may also consider imple menting this policy. The first dose of measles vaccine should be administered at age 9 months or at the first health-care contact thereafter. Infants vaccinated before their first birthday should receive a second dose at or about 15 months of age. Single antigen (monovalent) measles vaccine should be used for infants <1 year of age, and measles, mumps, and rubella vaccine (MMR), for persons vaccinated on or after the first birthday. Although some data suggest that children who do not respond to the first dose administered at a young age may have an altered immune response when revaccinated at an older age (8), there are no data to suggest that such children are not protected from measles (9 ). If resource constraints do not permit a routine two-dose schedule, an acceptable alternative is to lower the age for routine vaccination to 12 months in those areas using one dose of MMR. If children also need diphtheria and tetanus toxoids and pertussis vaccine (DTP) and oral polio vaccine (OPV), these vaccines can be admin istered simultaneously with measles vaccine or MMR. # Changes in outbreak-control strategies for school-based outbreaks Because of the prominent role that persons with primary vaccine failure are playing in measles transmission, the ACIP recommends the institution of some form of revaccination in outbreaks that occur in junior or senior high schools, colleges, universities, or other secondary institutions. In an outbreak, the ACIP recommends that, in affected schools as well as unaffected schools at risk of measles transmission from students in affected schools, all students and their siblings who received their most recent dose of measles vaccine before 1980 should be revaccinated. 
This date was selected for several reasons: 1) this strategy will capture almost all students vaccinated between 12 and 14 months of age, a group known to be at increased risk of primary vaccine failure, since the recommended age for routine vaccination was changed from 12 to 15 months in 1976; 2) it may be easier to identify students by year of vaccination than by age at vaccination; and 3) in some outbreak investigations, students vaccinated before 1978-1980 have been found to be at increased risk for measles. This is not felt to be due to waning immunity but rather to a higher rate o primary vaccine failure in persons vaccinated before that time. This higher rate may be due to different reasons, including less than optimal vaccine storage and handling or to the greater lability of the measles vaccine manufactured before a new stabilizer was used in 1979. While the exact date has not been determined, 1980 is a conservative cutoff. If all students vaccinated before 1980 cannot be revaccinated, then persons vaccinated before 15 months of age should be targeted. # Notices to Readers # Epidemiology in Action Course CDC and Emory University will cosponsor a course designed for practicing state and local health department professionals. This course, " Epidemiology in Action/' will be held at CDC May 15-26, 1989. It emphasizes the practical application of epidemiology to public health problems and will consist of lectures, workshops, classroom exercises (including actual epidemiologic problems), roundtable discus sions, and an on-site community survey.
# the nation's, the proportion of total homicide-attributable YPLL in Michigan involving blacks is 68% compared with 44% in the nation. These differences largely reflect the higher homicide rate for blacks in Michigan than for the U.S. black population. Black homicide victims are also slightly younger than white victims in Michigan; in 1985, they had an average of 33 YPLL per homicide death, compared with 31 for whites. Examining descriptive data such as those presented here is important for public health agencies addressing homicide. In addition, analytic studies of potentially modifiable risk factors are needed. Because 67% of Michigan's homicides in 1985 occurred in the Detroit area, these data highlight the importance of implementing and evaluating prevention measures, such as the recently implemented handgun ordi nance, in Detroit. At the state level, excess homicide has led to plans to integrate health department and police data bases for surveillance of homicide. These data may help define factors associated with excess homicide in Michigan. # INTRODUCTION Since measles vaccine was introduced in the United States in 1963, the reported incidence of measles has decreased 99%, and indigenous measles transmission has been eliminated from most of the country. However, the goal to eliminate measles by October 1982 has not been met. Between 1981 and 1987, a low of 1497 (1983) to a high of 6282 (1986) cases were reported annually (7). Two major types of outbreaks have occurred recently in the United States: those among unvaccinated preschool-aged children, including children younger than the recommended age for routine vaccination (i.e., 15 months), and those among vac cinated school-aged children (2). Large outbreaks among unvaccinated preschoolaged children have occurred in several inner-city areas. In these outbreaks, up to 88% of cases in vaccine-eligible children 16 months to 4 years of age were unvaccinated; as many as 40% of all cases occurred in children <16 months of age. Surveys of immunization levels in areas where these outbreaks occurred indicate that only 49%-65% of 2-year-olds had received measles vaccine (3). Many outbreaks have occurred among school-aged children in schools with vaccination levels above 98%. These outbreaks have occurred in all parts of the country. Attack rates in individual schools have been low (1%-5%), and the calculated vaccine efficacy has been high. Primary vaccine failures (i.e., the approximately January 13,1989 ACIP: Measles -Continued 2%-10% of vaccinees who fail to seroconvert after measles vaccination) have played a substantial role in transmission. In many of these outbreaks, children vaccinated at 12-14 months of age have had higher attack rates than those vaccinated at older ages (4). In a few outbreaks (5,6), persons vaccinated in the more distant past, independent of age at vaccination, have been at increased risk for disease. However, no conclusive data indicate that waning vaccine-induced immunity itself has been a major problem. # EVALUATION OF THE CURRENT MEASLES ELIMINATION STRATEGY The current measles elimination strategy calls for administration of one dose of measles vaccine at 15 months of age (7). A documented history of vaccination at or after 12 months of age, however, is considered appropriate vaccination. High immunization levels, along with careful surveillance and aggressive outbreak control, are the three essential elements of this strategy. 
The Immunization Practices Advisory Committee (ACIP) has periodically reviewed the current strategy and progress toward measles elimination (7). At a recent meeting, the ACIP again reviewed the epidemi ology of measles in the United States as well as recommendations, made by a group of consultants convened by CDC in February 1988, for modification of the measles elimination strategy. To increase vaccine coverage among preschool-aged children in inner-city areas, the ACIP considered it essential that research be conducted to determine ways to increase vaccine delivery. A variety of additions and/or changes in the current strategy were considered, including a routine two-dose measles vaccination schedule and a one-time mass revaccination for school-aged children. Two new strategies were recommended and are described below (Table 1). # NEW RECOMMENDATIONS Changes in vaccination schedule in areas with recurrent measles transmission among preschool-aged children To improve immunity levels in high-risk children <15 months of age, the ACIP recommends that a routine two-dose vaccination schedule for preschoolers be implemented in areas with recurrent measles transmission (i.e., counties with more than five reported cases among preschool-aged children during each of the last 5 years). If recurrent measles transmission is occurring in defined parts of a county, local officials may elect to implement the routine two-dose schedule selectively in those parts. Health authorities in other urban areas that have experienced recent outbreaks among unvaccinated preschool-aged children may also consider imple menting this policy. The first dose of measles vaccine should be administered at age 9 months or at the first health-care contact thereafter. Infants vaccinated before their first birthday should receive a second dose at or about 15 months of age. Single antigen (monovalent) measles vaccine should be used for infants <1 year of age, and measles, mumps, and rubella vaccine (MMR), for persons vaccinated on or after the first birthday. Although some data suggest that children who do not respond to the first dose administered at a young age may have an altered immune response when revaccinated at an older age (8), there are no data to suggest that such children are not protected from measles (9 ). If resource constraints do not permit a routine two-dose schedule, an acceptable alternative is to lower the age for routine vaccination to 12 months in those areas using one dose of MMR. If children also need diphtheria and tetanus toxoids and pertussis vaccine (DTP) and oral polio vaccine (OPV), these vaccines can be admin istered simultaneously with measles vaccine or MMR. # Changes in outbreak-control strategies for school-based outbreaks Because of the prominent role that persons with primary vaccine failure are playing in measles transmission, the ACIP recommends the institution of some form of revaccination in outbreaks that occur in junior or senior high schools, colleges, universities, or other secondary institutions. In an outbreak, the ACIP recommends that, in affected schools as well as unaffected schools at risk of measles transmission from students in affected schools, all students and their siblings who received their most recent dose of measles vaccine before 1980 should be revaccinated. 
This date was selected for several reasons: 1) this strategy will capture almost all students vaccinated between 12 and 14 months of age, a group known to be at increased risk of primary vaccine failure, since the recommended age for routine vaccination was changed from 12 to 15 months in 1976; 2) it may be easier to identify students by year of vaccination than by age at vaccination; and 3) in some outbreak investigations, students vaccinated before 1978-1980 have been found to be at increased risk for measles. This is not felt to be due to waning immunity but rather to a higher rate o primary vaccine failure in persons vaccinated before that time. This higher rate may be due to different reasons, including less than optimal vaccine storage and handling or to the greater lability of the measles vaccine manufactured before a new stabilizer was used in 1979. While the exact date has not been determined, 1980 is a conservative cutoff. If all students vaccinated before 1980 cannot be revaccinated, then persons vaccinated before 15 months of age should be targeted. # Notices to Readers # Epidemiology in Action Course CDC and Emory University will cosponsor a course designed for practicing state and local health department professionals. This course, " Epidemiology in Action/' will be held at CDC May 15-26, 1989. It emphasizes the practical application of epidemiology to public health problems and will consist of lectures, workshops, classroom exercises (including actual epidemiologic problems), roundtable discus sions, and an on-site community survey.
None
None
616b92a2fa71d500b78c7cf8a3c4001ab569216a
cdc
None
# Prevention and Control of Influenza: Part II, Antiviral Agents Recommendations of the Advisory Committee on Immunization Practices (ACIP) Summary These recommendations provide information about two antiviral agents: amantadine hydrochloride and rimantadine hydrochloride. These recommendations supersede MMWR 1992;41(No. RR-9). The primary changes include information about the recently licensed drug rimantadine, expanded information on the potential for adverse reactions to amantadine and rimantadine, and guidelines for the use of these drugs among certain persons. # INTRODUCTION The two antiviral agents with specific activity against influenza A viruses are amantadine hydrochloride and rimantadine hydrochloride. These chemically related drugs interfere with the replication cycle of type A (but not type B) influenza viruses. When administered prophylactically to healthy adults or children before and throughout the epidemic period, both drugs are approximately 70%-90% effective in preventing illness caused by naturally occurring strains of type A influenza viruses. Because antiviral agents taken prophylactically may prevent illness but not subclinical infection, some persons who take these drugs may still develop immune responses that will protect them when they are exposed to antigenically related viruses in later years. In otherwise healthy adults, amantadine and rimantadine can reduce the severity and duration of signs and symptoms of influenza A illness when administered within 48 hours of illness onset. Studies evaluating the efficacy of treatment for children with either amantadine or rimantadine are limited. Amantadine was approved for treatment and prophylaxis of all influenza type A virus infections in 1976. Although few placebo-controlled studies were conducted to determine the efficacy of amantadine treatment among children prior to approval, amantadine is indicated for treatment and prophylaxis of adults and children ≥1 year of age. Rimantadine was approved in 1993 for treatment and prophylaxis in adults but was approved only for prophylaxis in children. Further studies may provide the data needed to support future approval of rimantadine treatment in this age group. As with all drugs, amantadine and rimantadine may cause adverse reactions in some persons. Such adverse reactions are rarely severe; however, for some categories of patients, severe adverse reactions are more likely to occur. Amantadine has been associated with a higher incidence of adverse central nervous system (CNS) reactions than rimantadine (see Considerations for Selecting Amantadine or Rimantadine for Chemoprophylaxis or Treatment). # RECOMMENDATIONS FOR THE USE OF AMANTADINE AND RIMANTADINE # Use as Prophylaxis Chemoprophylaxis is not a substitute for vaccination. Recommendations for chemoprophylaxis are provided primarily to help health-care providers make decisions regarding persons who are at greatest risk of severe illness and complications if infected with influenza A virus (i.e., persons at high risk). 
Groups at high risk for influenza-related complications include: - persons ≥65 years of age; - residents of nursing homes and other chronic-care facilities that house persons of any age with chronic medical conditions; - adults and children with chronic disorders of the pulmonary or cardiovascular systems, including children with asthma; - adults and children who have required regular medical follow-up or hospitalization during the preceding year because of chronic metabolic diseases (including diabetes mellitus), renal dysfunction, hemoglobinopathies, or immunosuppression (including immunosuppression caused by medications); and - children and teenagers (6 months-18 years of age) who are receiving long-term aspirin therapy and therefore may be at risk for developing Reye syndrome after influenza. When amantadine or rimantadine is administered as prophylaxis, factors such as cost, compliance, and potential side effects should be considered when determining the period of prophylaxis. To be maximally effective as prophylaxis, the drug must be taken each day for the duration of influenza activity in the community. However, to be most cost effective, amantadine or rimantadine prophylaxis should be taken only during the period of peak influenza activity in a community. # Persons at High Risk Vaccinated After Influenza A Activity Has Begun Persons at high risk can still be vaccinated after an outbreak of influenza A has begun in a community. However, the development of antibodies in adults after vaccination can take as long as 2 weeks, during which time chemoprophylaxis should be considered. Children who receive influenza vaccine for the first time may require as long as 6 weeks of prophylaxis (i.e., prophylaxis for 2 weeks after the second dose of vaccine has been received). Amantadine and rimantadine do not interfere with the antibody response to the vaccine. # Persons Providing Care to Those at High Risk To reduce the spread of virus to persons at high risk, chemoprophylaxis may be considered during community outbreaks for a) unvaccinated persons who have frequent contact with persons at high risk (e.g., household members, visiting nurses, and volunteer workers) and b) unvaccinated employees of hospitals, clinics, and chroniccare facilities. For those persons who cannot be vaccinated, chemoprophylaxis during the period of peak influenza activity may be considered. For those persons who receive vaccine at a time when influenza A is present in the community, chemoprophylaxis can be administered for 2 weeks after vaccination. Prophylaxis should be considered for all employees, regardless of their vaccination status, if the outbreak is caused by a variant strain of influenza A that may not be controlled by the vaccine. # Persons Who Have Immune Deficiency Chemoprophylaxis may be indicated for persons at high risk who are expected to have an inadequate antibody response to influenza vaccine. This category includes persons with human immunodeficiency virus (HIV) infection, especially those with advanced HIV disease. No data are available on possible interactions with other drugs used in the management of patients with HIV infection. Such patients should be monitored closely if amantadine or rimantadine chemoprophylaxis is administered. # Persons for Whom Influenza Vaccine Is Contraindicated Chemoprophylaxis throughout the influenza season or during peak influenza activity may be appropriate for persons at high risk who should not be vaccinated. 
Influenza vaccine may be contraindicated in persons with severe anaphylactic hypersensitivity to egg protein or other vaccine components. # Other Persons Amantadine or rimantadine also can be administered prophylactically to anyone who wishes to avoid influenza A illness. The health-care provider and patient should make this decision on an individual basis. # Use of Antivirals as Therapy Amantadine and rimantadine can reduce the severity and shorten the duration of influenza A illness among healthy adults when administered within 48 hours of illness onset. Whether antiviral therapy will prevent complications of influenza type A among high-risk persons is unknown. Insufficient data exist to determine the efficacy of rimantadine treatment in children. Thus, rimantadine is currently approved only for prophylaxis in children, but it is not approved for treatment in this age group. Amantadine-and rimantadine-resistant influenza A viruses can emerge when either of these drugs is administered for treatment; amantadine-resistant strains are cross-resistant to rimantadine and vice versa. Both the frequency with which resistant viruses emerge and the extent of their transmission are unknown, but data indicate that amantadine-and rimantadine-resistant viruses are no more virulent or transmissible than amantadine-and rimantadine-sensitive viruses. The screening of naturally occurring epidemic strains of influenza type A has rarely detected amantadine-and rimantadine-resistant viruses. Resistant viruses have most frequently been isolated from persons taking one of these drugs as therapy for influenza A infection. Resistant viruses have been isolated from persons who live at home or in an institution where other residents are taking or have recently taken amantadine or rimantadine as therapy. Persons who have influenza-like illness should avoid contact with uninfected persons as much as possible, regardless of whether they are being treated with amantadine or rimantadine. Persons who have influenza type A infection and who are treated with either drug may shed amantadine-or rimantadine-sensitive viruses early in the course of treatment, but may later shed drug-resistant viruses, especially after 5-7 days of therapy. Such persons can benefit from therapy even when resistant viruses emerge; however, they also can transmit infection to other persons with whom they come in contact. Because of possible induction of amantadine or rimantadine resistance, treatment of persons who have influenza-like illness should be discontinued as soon as clinically warranted, generally after 3-5 days of treatment or within 24-48 hours after the disappearance of signs and symptoms. Laboratory isolation of influenza viruses obtained from persons who are receiving amantadine or rimantadine should be reported to CDC through state health departments, and the isolates should be saved for antiviral sensitivity testing. # Outbreak Control in Institutions When confirmed or suspected outbreaks of influenza A occur in institutions that house persons at high risk, chemoprophylaxis should be started as early as possible to reduce the spread of the virus. Contingency planning is needed to ensure rapid administration of amantadine or rimantadine to residents. This planning should include preapproved medication orders or plans to obtain physicians' orders on short notice. 
When amantadine or rimantadine is used for outbreak control, the drug should be administered to all residents of the institution-regardless of whether they received influenza vaccine the previous fall. The drug should be continued for at least 2 weeks or until approximately 1 week after the end of the outbreak. The dose for each resident should be determined after consulting the dosage recommendations and precautions (see Considerations for Selecting Amantadine or Rimantadine for Chemoprophylaxis or Treatment) and the manufacturer's package insert. To reduce the spread of virus and to minimize disruption of patient care, chemoprophylaxis also can be offered to unvaccinated staff who provide care to persons at high risk. Prophylaxis should be considered for all employees, regardless of their vaccination status, if the outbreak is caused by a variant strain of influenza A that is not controlled by the vaccine. Chemoprophylaxis also may be considered for controlling influenza A outbreaks in other closed or semi-closed settings (e.g., dormitories or other settings where persons live in close proximity). To reduce the spread of infection and the chances of prophylaxis failure due to transmission of drug-resistant virus, measures should be taken to reduce contact as much as possible between persons on chemoprophylaxis and those taking drug for treatment. # CONSIDERATIONS FOR SELECTING AMANTADINE OR RIMANTADINE FOR CHEMOPROPHYLAXIS OR TREATMENT Side Effects/Toxicity Despite the similarities between the two drugs, amantadine and rimantadine differ in their pharmacokinetic properties. More than 90% of amantadine is excreted unchanged, whereas approximately 75% of rimantadine is metabolized by the liver. However, both drugs and their metabolites are excreted by the kidney. The pharmacokinetic differences between amantadine and rimantadine may partially explain differences in side effects. Although both drugs can cause CNS and gastrointestinal side effects when administered to young, healthy adults at equivalent dosages of 200 mg/day, the incidence of CNS side effects (e.g., nervousness, anxiety, difficulty concentrating, and lightheadedness) is higher among persons taking amantadine compared with those taking rimantadine. In a 6-week study of prophylaxis in healthy adults, approximately 6% of participants taking rimantadine at a dose of 200 mg/day experienced at least one CNS symptom, compared with approximately 14% of those taking the same dose of amantadine and 4% of those taking placebo. The incidence of gastrointestinal side effects (e.g., nausea and anorexia) is approximately 3% in persons taking either drug, compared with 1%-2% of persons receiving the placebo. Side effects associated with both drugs are usually mild and cease soon after discontinuing the drug. Side effects may diminish or disappear after the first week despite continued drug ingestion. However, serious side effects have been observed (e.g., marked behavioral changes, delirium, hallucinations, agitation, and seizures). These more severe side effects have been associated with high plasma drug concentrations and have been observed most often among persons who have renal insufficiency, seizure disorders, or certain psychiatric disorders and among elderly persons who have been taking amantadine as prophylaxis at a dose of 200 mg/day. 
Clinical observations and studies have indicated that lowering the dosage of amantadine among these persons reduces the incidence and severity of such side effects, and recommendations for reduced dosages for these groups of patients have been made. Because rimantadine has only recently been approved for marketing, its safety in certain patient populations (e.g., chronically ill and elderly persons) has been evaluated less frequently. Clinical trials of rimantadine have more commonly involved young, healthy persons. Providers should review the package insert before using amantadine or rimantadine for any patient. The patient's age, weight, renal function, other medications, presence of other medical conditions, and indications for use of amantadine or rimantadine (prophylaxis or therapy) must be considered, and the dosage and duration of treatment must be adjusted appropriately. Modifications in dosage may be required for persons who have impaired renal or hepatic function, the elderly, children, and persons with a history of seizures. The following are guidelines for the use of amantadine and rimantadine in certain patient populations. Dosage recommendations are also summarized (Table 1). # Persons Who Have Impaired Renal Function # Amantadine Amantadine is excreted unchanged in the urine by glomerular filtration and tubular secretion. Thus, renal clearance of amantadine is reduced substantially in persons with renal insufficiency. A reduction in dosage is recommended for patients with creatinine clearance ≤50 mL/min. Guidelines for amantadine dosage based on creatinine clearance are found in the packet insert. However, because recommended dosages based on creatinine clearance may provide only an approximation of the optimal dose for a given patient, such persons should be observed carefully so that adverse reactions can be recognized promptly and either the dose can be further reduced or the drug can be discontinued, if necessary. Hemodialysis contributes little to drug clearance. # Rimantadine The safety and pharmacokinetics of rimantadine among patients with renal insufficiency have been evaluated only after single-dose administration. Further studies are needed to determine the multiple-dose pharmacokinetics and the most appropriate dosages for these patients. In a single-dose study of patients with anuric renal failure, the apparent clearance of rimantadine was approximately 40% lower, and the elimination half-life was approximately 1.6-fold greater than that in healthy controls of the same age. Hemodialysis did not contribute to drug clearance. In studies among persons with less severe renal disease, drug clearance was also reduced, and plasma concentrations were *The drug package insert should be consulted for dosage recommendations for administering amantadine to persons with creatinine clearance ≤50 mL/min. † 5 mg/kg of amantadine or rimantadine syrup = 1 tsp/22 lbs. § Children ≥10 years of age who weigh 100 mg/day should be observed closely, and the dosage should be reduced or the drug discontinued, if necessary. Elderly nursing-home residents should be administered only 100 mg/day of rimantadine. A reduction in dose to 100 mg/day should be considered for all persons ≥65 years of age if they experience possible side effects when taking 200 mg/day. NA=Not applicable. higher compared with control patients without renal disease who were the same weight, age, and sex. A reduction in dosage to 100 mg/day is recommended for persons with creatinine clearance ≤10 mL/min. 
Because of the potential for accumulation of rimantadine and its metabolites, patients with any degree of renal insufficiency, including elderly persons, should be monitored for adverse effects, and either the dosage should be reduced or the drug should be discontinued, if necessary. # Persons ≥65 Years of Age # Amantadine Because renal function declines with increasing age, the daily dose for persons ≥65 years of age should not exceed 100 mg for prophylaxis or treatment. For some elderly persons, the dose should be further reduced. Studies suggest that because of their smaller average body size, elderly women are more likely than elderly men to experience side effects at a daily dose of 100 mg. # Rimantadine The incidence and severity of CNS side effects among elderly persons appear to be substantially lower among those taking rimantadine at a dose of 200 mg/day compared with elderly persons taking the same dose of amantadine. However, when rimantadine has been administered at a dosage of 200 mg/day to chronically ill elderly persons, they have had a higher incidence of CNS and gastrointestinal symptoms than healthy, younger persons taking rimantadine at the same dosage. After longterm administration of rimantadine at a dosage of 200 mg/day, serum rimantadine concentrations among elderly nursing-home residents have been two to four times greater than those reported in younger adults. The dosage of rimantadine should be reduced to 100 mg/day for treatment or prophylaxis of elderly nursing-home residents. Although further studies are needed to determine the optimal dose for other elderly persons, a reduction in dosage to 100 mg/day should be considered for all persons ≥65 years of age if they experience signs and symptoms that may represent side effects when taking a dosage of 200 mg/day. # Persons Who Have Liver Disease # Amantadine No increase in adverse reactions to amantadine has been observed among persons with liver disease. # Rimantadine The safety and pharmacokinetics of rimantadine have only been evaluated after single-dose administration. In a study of persons with chronic liver disease (most with stabilized cirrhosis), no alterations were observed after a single dose. However, in persons with severe liver dysfunction, the apparent clearance of rimantadine was 50% lower than that reported for persons without liver disease. A dose reduction to 100 mg/day is recommended for persons with severe hepatic dysfunction. # Persons Who Have Seizure Disorders # Amantadine An increased incidence of seizures has been reported in patients with a history of seizure disorders who have received amantadine. Patients with seizure disorders should be observed closely for possible increased seizure activity when taking amantadine. # Rimantadine In clinical trials, seizures (or seizure-like activity) have been observed in a few persons with a history of seizures who were not receiving anticonvulsant medication while taking rimantadine. The extent to which rimantadine may increase the incidence of seizures among persons with seizure disorders has not been adequately evaluated, because such persons have usually been excluded from participating in clinical trials of rimantadine. # Children # Amantadine The use of amantadine in children <1 year of age has not been adequately evaluated. The FDA-approved dosage for children 1-9 years of age is 4.4-8.8 mg/kg/day, not to exceed 150 mg/day. 
Although further studies to determine the optimal dosage for children are needed, physicians should consider prescribing only 5 mg/kg/day (not to exceed 150 mg/day) to reduce the risk for toxicity. The approved dosage for children ≥10 years of age is 200 mg/day; however, for children weighing <40 kg, prescribing 5 mg/kg/day, regardless of age, is advisable. # Rimantadine The use of rimantadine in children <1 year of age has not been adequately evaluated. In children 1-9 years of age, rimantadine should be administered in one or two divided doses at a dosage of 5 mg/kg/day, not to exceed 150 mg/day. The approved dosage for children ≥10 years of age is 200 mg/day (100 mg twice a day); however, for children weighing <40 kg, prescribing 5 mg/kg/day, regardless of age, also is recommended. # Drug Interactions # Amantadine Careful observation is advised when amantadine is administered concurrently with drugs that affect the CNS, especially CNS stimulants. # Rimantadine No clinically significant drug interactions have been identified. For more detailed information concerning potential drug interactions for either drug, the package insert should be consulted. # Advisory Committee on Immunization Practices Membership List, October 1994 -Continued # SOURCES OF INFORMATION ON INFLUENZA-CONTROL PROGRAMS Information regarding influenza surveillance is available through the CDC Voice Information System (influenza update), telephone (404) 332-4551, or through the CDC Information Service on the Public Health Network electronic bulletin board. From October through May, the information is updated at least every other week. In addition, periodic updates about influenza are published in the weekly MMWR. State and local health departments should be consulted regarding availability of influenza vaccine, access to vaccination programs, and information about state or local influenza activity.
# Prevention and Control of Influenza: Part II, Antiviral Agents Recommendations of the Advisory Committee on Immunization Practices (ACIP) Summary These recommendations provide information about two antiviral agents: amantadine hydrochloride and rimantadine hydrochloride. These recommendations supersede MMWR 1992;41(No. RR-9). The primary changes include information about the recently licensed drug rimantadine, expanded information on the potential for adverse reactions to amantadine and rimantadine, and guidelines for the use of these drugs among certain persons. # INTRODUCTION The two antiviral agents with specific activity against influenza A viruses are amantadine hydrochloride and rimantadine hydrochloride. These chemically related drugs interfere with the replication cycle of type A (but not type B) influenza viruses. When administered prophylactically to healthy adults or children before and throughout the epidemic period, both drugs are approximately 70%-90% effective in preventing illness caused by naturally occurring strains of type A influenza viruses. Because antiviral agents taken prophylactically may prevent illness but not subclinical infection, some persons who take these drugs may still develop immune responses that will protect them when they are exposed to antigenically related viruses in later years. In otherwise healthy adults, amantadine and rimantadine can reduce the severity and duration of signs and symptoms of influenza A illness when administered within 48 hours of illness onset. Studies evaluating the efficacy of treatment for children with either amantadine or rimantadine are limited. Amantadine was approved for treatment and prophylaxis of all influenza type A virus infections in 1976. Although few placebo-controlled studies were conducted to determine the efficacy of amantadine treatment among children prior to approval, amantadine is indicated for treatment and prophylaxis of adults and children ≥1 year of age. Rimantadine was approved in 1993 for treatment and prophylaxis in adults but was approved only for prophylaxis in children. Further studies may provide the data needed to support future approval of rimantadine treatment in this age group. As with all drugs, amantadine and rimantadine may cause adverse reactions in some persons. Such adverse reactions are rarely severe; however, for some categories of patients, severe adverse reactions are more likely to occur. Amantadine has been associated with a higher incidence of adverse central nervous system (CNS) reactions than rimantadine (see Considerations for Selecting Amantadine or Rimantadine for Chemoprophylaxis or Treatment). # RECOMMENDATIONS FOR THE USE OF AMANTADINE AND RIMANTADINE # Use as Prophylaxis Chemoprophylaxis is not a substitute for vaccination. Recommendations for chemoprophylaxis are provided primarily to help health-care providers make decisions regarding persons who are at greatest risk of severe illness and complications if infected with influenza A virus (i.e., persons at high risk). 
Groups at high risk for influenza-related complications include: • persons ≥65 years of age; • residents of nursing homes and other chronic-care facilities that house persons of any age with chronic medical conditions; • adults and children with chronic disorders of the pulmonary or cardiovascular systems, including children with asthma; • adults and children who have required regular medical follow-up or hospitalization during the preceding year because of chronic metabolic diseases (including diabetes mellitus), renal dysfunction, hemoglobinopathies, or immunosuppression (including immunosuppression caused by medications); and • children and teenagers (6 months-18 years of age) who are receiving long-term aspirin therapy and therefore may be at risk for developing Reye syndrome after influenza. When amantadine or rimantadine is administered as prophylaxis, factors such as cost, compliance, and potential side effects should be considered when determining the period of prophylaxis. To be maximally effective as prophylaxis, the drug must be taken each day for the duration of influenza activity in the community. However, to be most cost effective, amantadine or rimantadine prophylaxis should be taken only during the period of peak influenza activity in a community. # Persons at High Risk Vaccinated After Influenza A Activity Has Begun Persons at high risk can still be vaccinated after an outbreak of influenza A has begun in a community. However, the development of antibodies in adults after vaccination can take as long as 2 weeks, during which time chemoprophylaxis should be considered. Children who receive influenza vaccine for the first time may require as long as 6 weeks of prophylaxis (i.e., prophylaxis for 2 weeks after the second dose of vaccine has been received). Amantadine and rimantadine do not interfere with the antibody response to the vaccine. # Persons Providing Care to Those at High Risk To reduce the spread of virus to persons at high risk, chemoprophylaxis may be considered during community outbreaks for a) unvaccinated persons who have frequent contact with persons at high risk (e.g., household members, visiting nurses, and volunteer workers) and b) unvaccinated employees of hospitals, clinics, and chroniccare facilities. For those persons who cannot be vaccinated, chemoprophylaxis during the period of peak influenza activity may be considered. For those persons who receive vaccine at a time when influenza A is present in the community, chemoprophylaxis can be administered for 2 weeks after vaccination. Prophylaxis should be considered for all employees, regardless of their vaccination status, if the outbreak is caused by a variant strain of influenza A that may not be controlled by the vaccine. # Persons Who Have Immune Deficiency Chemoprophylaxis may be indicated for persons at high risk who are expected to have an inadequate antibody response to influenza vaccine. This category includes persons with human immunodeficiency virus (HIV) infection, especially those with advanced HIV disease. No data are available on possible interactions with other drugs used in the management of patients with HIV infection. Such patients should be monitored closely if amantadine or rimantadine chemoprophylaxis is administered. # Persons for Whom Influenza Vaccine Is Contraindicated Chemoprophylaxis throughout the influenza season or during peak influenza activity may be appropriate for persons at high risk who should not be vaccinated. 
Influenza vaccine may be contraindicated in persons with severe anaphylactic hypersensitivity to egg protein or other vaccine components. # Other Persons Amantadine or rimantadine also can be administered prophylactically to anyone who wishes to avoid influenza A illness. The health-care provider and patient should make this decision on an individual basis. # Use of Antivirals as Therapy Amantadine and rimantadine can reduce the severity and shorten the duration of influenza A illness among healthy adults when administered within 48 hours of illness onset. Whether antiviral therapy will prevent complications of influenza type A among high-risk persons is unknown. Insufficient data exist to determine the efficacy of rimantadine treatment in children. Thus, rimantadine is currently approved only for prophylaxis in children, but it is not approved for treatment in this age group. Amantadine-and rimantadine-resistant influenza A viruses can emerge when either of these drugs is administered for treatment; amantadine-resistant strains are cross-resistant to rimantadine and vice versa. Both the frequency with which resistant viruses emerge and the extent of their transmission are unknown, but data indicate that amantadine-and rimantadine-resistant viruses are no more virulent or transmissible than amantadine-and rimantadine-sensitive viruses. The screening of naturally occurring epidemic strains of influenza type A has rarely detected amantadine-and rimantadine-resistant viruses. Resistant viruses have most frequently been isolated from persons taking one of these drugs as therapy for influenza A infection. Resistant viruses have been isolated from persons who live at home or in an institution where other residents are taking or have recently taken amantadine or rimantadine as therapy. Persons who have influenza-like illness should avoid contact with uninfected persons as much as possible, regardless of whether they are being treated with amantadine or rimantadine. Persons who have influenza type A infection and who are treated with either drug may shed amantadine-or rimantadine-sensitive viruses early in the course of treatment, but may later shed drug-resistant viruses, especially after 5-7 days of therapy. Such persons can benefit from therapy even when resistant viruses emerge; however, they also can transmit infection to other persons with whom they come in contact. Because of possible induction of amantadine or rimantadine resistance, treatment of persons who have influenza-like illness should be discontinued as soon as clinically warranted, generally after 3-5 days of treatment or within 24-48 hours after the disappearance of signs and symptoms. Laboratory isolation of influenza viruses obtained from persons who are receiving amantadine or rimantadine should be reported to CDC through state health departments, and the isolates should be saved for antiviral sensitivity testing. # Outbreak Control in Institutions When confirmed or suspected outbreaks of influenza A occur in institutions that house persons at high risk, chemoprophylaxis should be started as early as possible to reduce the spread of the virus. Contingency planning is needed to ensure rapid administration of amantadine or rimantadine to residents. This planning should include preapproved medication orders or plans to obtain physicians' orders on short notice. 
When amantadine or rimantadine is used for outbreak control, the drug should be administered to all residents of the institution-regardless of whether they received influenza vaccine the previous fall. The drug should be continued for at least 2 weeks or until approximately 1 week after the end of the outbreak. The dose for each resident should be determined after consulting the dosage recommendations and precautions (see Considerations for Selecting Amantadine or Rimantadine for Chemoprophylaxis or Treatment) and the manufacturer's package insert. To reduce the spread of virus and to minimize disruption of patient care, chemoprophylaxis also can be offered to unvaccinated staff who provide care to persons at high risk. Prophylaxis should be considered for all employees, regardless of their vaccination status, if the outbreak is caused by a variant strain of influenza A that is not controlled by the vaccine. Chemoprophylaxis also may be considered for controlling influenza A outbreaks in other closed or semi-closed settings (e.g., dormitories or other settings where persons live in close proximity). To reduce the spread of infection and the chances of prophylaxis failure due to transmission of drug-resistant virus, measures should be taken to reduce contact as much as possible between persons on chemoprophylaxis and those taking drug for treatment. # CONSIDERATIONS FOR SELECTING AMANTADINE OR RIMANTADINE FOR CHEMOPROPHYLAXIS OR TREATMENT Side Effects/Toxicity Despite the similarities between the two drugs, amantadine and rimantadine differ in their pharmacokinetic properties. More than 90% of amantadine is excreted unchanged, whereas approximately 75% of rimantadine is metabolized by the liver. However, both drugs and their metabolites are excreted by the kidney. The pharmacokinetic differences between amantadine and rimantadine may partially explain differences in side effects. Although both drugs can cause CNS and gastrointestinal side effects when administered to young, healthy adults at equivalent dosages of 200 mg/day, the incidence of CNS side effects (e.g., nervousness, anxiety, difficulty concentrating, and lightheadedness) is higher among persons taking amantadine compared with those taking rimantadine. In a 6-week study of prophylaxis in healthy adults, approximately 6% of participants taking rimantadine at a dose of 200 mg/day experienced at least one CNS symptom, compared with approximately 14% of those taking the same dose of amantadine and 4% of those taking placebo. The incidence of gastrointestinal side effects (e.g., nausea and anorexia) is approximately 3% in persons taking either drug, compared with 1%-2% of persons receiving the placebo. Side effects associated with both drugs are usually mild and cease soon after discontinuing the drug. Side effects may diminish or disappear after the first week despite continued drug ingestion. However, serious side effects have been observed (e.g., marked behavioral changes, delirium, hallucinations, agitation, and seizures). These more severe side effects have been associated with high plasma drug concentrations and have been observed most often among persons who have renal insufficiency, seizure disorders, or certain psychiatric disorders and among elderly persons who have been taking amantadine as prophylaxis at a dose of 200 mg/day. 
Clinical observations and studies have indicated that lowering the dosage of amantadine among these persons reduces the incidence and severity of such side effects, and recommendations for reduced dosages for these groups of patients have been made. Because rimantadine has only recently been approved for marketing, its safety in certain patient populations (e.g., chronically ill and elderly persons) has been evaluated less frequently. Clinical trials of rimantadine have more commonly involved young, healthy persons. Providers should review the package insert before using amantadine or rimantadine for any patient. The patient's age, weight, renal function, other medications, presence of other medical conditions, and indications for use of amantadine or rimantadine (prophylaxis or therapy) must be considered, and the dosage and duration of treatment must be adjusted appropriately. Modifications in dosage may be required for persons who have impaired renal or hepatic function, the elderly, children, and persons with a history of seizures. The following are guidelines for the use of amantadine and rimantadine in certain patient populations. Dosage recommendations are also summarized (Table 1). # Persons Who Have Impaired Renal Function # Amantadine Amantadine is excreted unchanged in the urine by glomerular filtration and tubular secretion. Thus, renal clearance of amantadine is reduced substantially in persons with renal insufficiency. A reduction in dosage is recommended for patients with creatinine clearance ≤50 mL/min. Guidelines for amantadine dosage based on creatinine clearance are found in the packet insert. However, because recommended dosages based on creatinine clearance may provide only an approximation of the optimal dose for a given patient, such persons should be observed carefully so that adverse reactions can be recognized promptly and either the dose can be further reduced or the drug can be discontinued, if necessary. Hemodialysis contributes little to drug clearance. # Rimantadine The safety and pharmacokinetics of rimantadine among patients with renal insufficiency have been evaluated only after single-dose administration. Further studies are needed to determine the multiple-dose pharmacokinetics and the most appropriate dosages for these patients. In a single-dose study of patients with anuric renal failure, the apparent clearance of rimantadine was approximately 40% lower, and the elimination half-life was approximately 1.6-fold greater than that in healthy controls of the same age. Hemodialysis did not contribute to drug clearance. In studies among persons with less severe renal disease, drug clearance was also reduced, and plasma concentrations were *The drug package insert should be consulted for dosage recommendations for administering amantadine to persons with creatinine clearance ≤50 mL/min. † 5 mg/kg of amantadine or rimantadine syrup = 1 tsp/22 lbs. § Children ≥10 years of age who weigh <40 kg should be administered amantadine or rimantadine at a dose of 5 mg/kg/day. ¶ A reduction in dose to 100 mg/day of rimantadine is recommended for persons who have severe hepatic dysfunction or those with creatinine clearance ≤10 mL/min. Other persons with less severe hepatic or renal dysfunction taking >100 mg/day should be observed closely, and the dosage should be reduced or the drug discontinued, if necessary. ** Elderly nursing-home residents should be administered only 100 mg/day of rimantadine. 
A reduction in dose to 100 mg/day should be considered for all persons ≥65 years of age if they experience possible side effects when taking 200 mg/day. NA=Not applicable. higher compared with control patients without renal disease who were the same weight, age, and sex. A reduction in dosage to 100 mg/day is recommended for persons with creatinine clearance ≤10 mL/min. Because of the potential for accumulation of rimantadine and its metabolites, patients with any degree of renal insufficiency, including elderly persons, should be monitored for adverse effects, and either the dosage should be reduced or the drug should be discontinued, if necessary. # Persons ≥65 Years of Age # Amantadine Because renal function declines with increasing age, the daily dose for persons ≥65 years of age should not exceed 100 mg for prophylaxis or treatment. For some elderly persons, the dose should be further reduced. Studies suggest that because of their smaller average body size, elderly women are more likely than elderly men to experience side effects at a daily dose of 100 mg. # Rimantadine The incidence and severity of CNS side effects among elderly persons appear to be substantially lower among those taking rimantadine at a dose of 200 mg/day compared with elderly persons taking the same dose of amantadine. However, when rimantadine has been administered at a dosage of 200 mg/day to chronically ill elderly persons, they have had a higher incidence of CNS and gastrointestinal symptoms than healthy, younger persons taking rimantadine at the same dosage. After longterm administration of rimantadine at a dosage of 200 mg/day, serum rimantadine concentrations among elderly nursing-home residents have been two to four times greater than those reported in younger adults. The dosage of rimantadine should be reduced to 100 mg/day for treatment or prophylaxis of elderly nursing-home residents. Although further studies are needed to determine the optimal dose for other elderly persons, a reduction in dosage to 100 mg/day should be considered for all persons ≥65 years of age if they experience signs and symptoms that may represent side effects when taking a dosage of 200 mg/day. # Persons Who Have Liver Disease # Amantadine No increase in adverse reactions to amantadine has been observed among persons with liver disease. # Rimantadine The safety and pharmacokinetics of rimantadine have only been evaluated after single-dose administration. In a study of persons with chronic liver disease (most with stabilized cirrhosis), no alterations were observed after a single dose. However, in persons with severe liver dysfunction, the apparent clearance of rimantadine was 50% lower than that reported for persons without liver disease. A dose reduction to 100 mg/day is recommended for persons with severe hepatic dysfunction. # Persons Who Have Seizure Disorders # Amantadine An increased incidence of seizures has been reported in patients with a history of seizure disorders who have received amantadine. Patients with seizure disorders should be observed closely for possible increased seizure activity when taking amantadine. # Rimantadine In clinical trials, seizures (or seizure-like activity) have been observed in a few persons with a history of seizures who were not receiving anticonvulsant medication while taking rimantadine. 
The extent to which rimantadine may increase the incidence of seizures among persons with seizure disorders has not been adequately evaluated, because such persons have usually been excluded from participating in clinical trials of rimantadine.
# Children
# Amantadine
The use of amantadine in children <1 year of age has not been adequately evaluated. The FDA-approved dosage for children 1-9 years of age is 4.4-8.8 mg/kg/day, not to exceed 150 mg/day. Although further studies to determine the optimal dosage for children are needed, physicians should consider prescribing only 5 mg/kg/day (not to exceed 150 mg/day) to reduce the risk for toxicity. The approved dosage for children ≥10 years of age is 200 mg/day; however, for children weighing <40 kg, prescribing 5 mg/kg/day, regardless of age, is advisable.
# Rimantadine
The use of rimantadine in children <1 year of age has not been adequately evaluated. In children 1-9 years of age, rimantadine should be administered in one or two divided doses at a dosage of 5 mg/kg/day, not to exceed 150 mg/day. The approved dosage for children ≥10 years of age is 200 mg/day (100 mg twice a day); however, for children weighing <40 kg, prescribing 5 mg/kg/day, regardless of age, also is recommended.
# Drug Interactions
# Amantadine
Careful observation is advised when amantadine is administered concurrently with drugs that affect the CNS, especially CNS stimulants.
# Rimantadine
No clinically significant drug interactions have been identified. For more detailed information concerning potential drug interactions for either drug, the package insert should be consulted.
# SOURCES OF INFORMATION ON INFLUENZA-CONTROL PROGRAMS
Information regarding influenza surveillance is available through the CDC Voice Information System (influenza update), telephone (404) 332-4551, or through the CDC Information Service on the Public Health Network electronic bulletin board. From October through May, the information is updated at least every other week. In addition, periodic updates about influenza are published in the weekly MMWR. State and local health departments should be consulted regarding availability of influenza vaccine, access to vaccination programs, and information about state or local influenza activity.
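The dosage adjustments described in the preceding sections reduce largely to simple arithmetic on age, weight, and organ function. The following fragment is a minimal sketch of that arithmetic, assuming hypothetical function and parameter names (none appear in this report or in the drug package inserts); it is illustrative only and is not a substitute for the package insert or clinical judgment.

```python
def suggested_daily_dose_mg(drug, age_years, weight_kg,
                            creatinine_clearance_ml_min=None,
                            nursing_home_resident=False):
    """Illustrative arithmetic only -- not clinical software. Encodes the
    dosing rules summarized in Table 1 and the text above; all names here
    are hypothetical, not from this report."""
    if drug not in ("amantadine", "rimantadine"):
        raise ValueError("drug must be 'amantadine' or 'rimantadine'")
    if age_years < 1:
        raise ValueError("use in children <1 year has not been adequately evaluated")

    if age_years < 10:
        # Children 1-9 years: 5 mg/kg/day, not to exceed 150 mg/day
        # (5 mg/kg of syrup = 1 tsp per 22 lbs of body weight).
        return min(5 * weight_kg, 150)

    if age_years >= 65:
        # Amantadine: no more than 100 mg/day for persons >=65 years.
        # Rimantadine: 100 mg/day for elderly nursing-home residents; for
        # other persons >=65, reduce from 200 to 100 mg/day if side effects occur.
        if drug == "amantadine" or nursing_home_resident:
            return 100
        return 200

    if weight_kg < 40:
        # Persons >=10 years weighing <40 kg: 5 mg/kg/day regardless of age.
        return 5 * weight_kg

    if (drug == "rimantadine" and creatinine_clearance_ml_min is not None
            and creatinine_clearance_ml_min <= 10):
        # Severe renal insufficiency: reduce rimantadine to 100 mg/day.
        return 100

    # Usual adult dosage is 200 mg/day; for amantadine with creatinine
    # clearance <=50 mL/min, the package insert's reduced schedule applies.
    return 200


# Worked example: a 6-year-old weighing 22 kg gets 5 x 22 = 110 mg/day,
# which is below the 150-mg/day ceiling.
print(suggested_daily_dose_mg("rimantadine", age_years=6, weight_kg=22))  # 110
```

As a unit check on the syrup conversion in the footnotes to Table 1: 22 lbs is approximately 10 kg, and 10 kg × 5 mg/kg/day = 50 mg/day, which corresponds to one teaspoon (5 mL) of syrup if a 50-mg/5-mL syrup concentration is assumed.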
# Preventing and Controlling Oral and Pharyngeal Cancer Recommendations from a National Strategic Planning Conference Summary In August 1996, CDC convened a national conference to develop strategies for preventing and controlling oral and pharyngeal cancer in the United States. The conference, which was cosponsored by the National Institute of Dental Research of the National Institutes of Health and the American Dental Association, included 125 experts in oral and pharyngeal cancer prevention, treatment, and research; both the private and public sectors were represented. Participants at the conference developed recommendations concerning advocacy, collaboration, and coalition building; public health policy; public education; professional education and practice; and data collection, evaluation, and research. A follow-up meeting consisting of selected participants of the 1996 conference was held in September 1997. During this meeting, changes that had occurred in the political and scientific arenas since the 1996 conference were considered, and 10 recommended strategies from the conference were selected for priority implementation. These 10 strategies were to a) establish a mechanism to implement and monitor the recommended strategies developed during the conference; b) urge oral health professionals to become more actively involved in community health; c) require instruction in preventing and controlling tobacco and alcohol use at all levels of training in dental, medical, nursing, and other related health-care disciplines; d) encourage Medicaid, Medicare, traditional insurance plans, and managed-care entities to consider making oral cancer examinations an integral part of comprehensive physical and oral examinations; e) designate federal funding for a national program of oral cancer prevention, early detection, and control; f) after assessing local needs, develop, implement, and evaluate statewide models to educate all relevant groups; g) develop and conduct a national promotional campaign to raise public awareness of oral cancer and its link to tobacco use and heavy alcohol consumption; h) develop health-care curricula that require competency in prevention, diagnosis, and multidisciplinary management of oral and pharyngeal cancer; i) sponsor and promote continuing education for health-care professionals on the multidisciplinary management of all phases of oral cancer and its sequelae; and j) strengthen organizational approaches to reducing oral cancer by developing organized cooperative and collaborative arrangements, funding formal centers, and involving commercial firms. CDC will use these recommended strategies to develop programs to reduce the burden of oral and pharyngeal cancer in the United States. Through the Oral Cancer Roundtable, a group of conference and meeting participants, CDC will communicate to interested agencies, organizations, and state health departments ways in which they can implement elements of the national plan. The Roundtable will help CDC track the efforts and progress of these groups. # INTRODUCTION During the past decade, federal health agencies have focused on reducing the incidence of oral and pharyngeal cancer and increasing the 5-year survival rate from these cancers in the United States. Beginning with a consortium of health agencies in 1992 (and including a strategic planning conference in 1996 and a follow-up meeting in 1997), CDC has been involved in concerted efforts to establish a national plan for preventing and controlling these cancers. 
This report presents recommended strategies for action from the 1996 conference and a list of priority recommendations from the 1997 meeting. These recommendations will enable CDC to develop a coordinated national plan to reduce morbidity and mortality from oral and pharyngeal cancer in the United States.
# ORAL AND PHARYNGEAL CANCER
Oral cancer (i.e., cancer of the lip, tongue, floor of the mouth, palate, gingiva and alveolar mucosa, buccal mucosa, or oropharynx)* accounts for 2%-4% of cancers diagnosed annually in the United States; approximately two thirds occur in the oral cavity, and the remainder occurs in the oropharynx (1). In 1998, this diagnosis will be made in an estimated 30,300 Americans, and approximately 8,000 deaths (5,200 males and 2,800 females) are expected in that year (2). Ninety-five percent of cases of oral cancer occur among persons aged >40 years, and the average age at diagnosis is 60 years (3). In 1950, the male-to-female ratio of oral cancer incidence was approximately 6:1; by 1997, it was approximately 2:1. The changing ratio is likely the result of the increase in smoking among women during the past three decades (3). In addition, cancer is an age-related disease, and in the United States, the number of women aged >65 years now exceeds the number of men aged >65 years by almost 50% (3). During 1990-1994, the annual incidence rate among black males in the United States was 1.6 times higher than the rate among white males (20.1 versus 12.9 new cases per 100,000), and the annual mortality rate among black males was 2.5 times higher (7.6 versus 3.1 deaths per 100,000); the annual incidence rate among black females was slightly higher than that among white females (5.6 versus 4.9 new cases per 100,000), as was the annual mortality rate (1.8 versus 1.2 deaths per 100,000) (4). Despite aggressive combinations of surgery, radiation therapy, and chemotherapy, the 5-year survival rate for oral cancer is poor (blacks: 35%; whites: 55%) (1,5).
Tobacco smoking (i.e., cigarette, pipe, or cigar smoking), particularly when combined with heavy alcohol consumption (i.e., ≥30 drinks per week), has been identified as the primary risk factor for approximately 75% of oral cancers in the United States (6). The use of tobacco in other forms (i.e., snuff and chew) has also been identified as a risk factor (7-9), as have certain other lifestyle and environmental factors (e.g., diet and occupational exposure to sunlight) (10). Approximately 90% of oral cancer lesions are squamous cell carcinomas. Persons who have oral cancer often develop multiple primary lesions (i.e., field cancerization), and they develop second primary tumors at a rate of approximately 4% annually (11). Persons having primary oral cancer are more likely to develop a second primary cancer of the aerodigestive tract (i.e., oral cavity, pharynx, esophagus, larynx, and lungs) (12,13). The initially diagnosed disease accounts for one half of the deaths caused by oral cancer; one fourth of these deaths are due to a second primary cancer, and the remaining one fourth are attributable to other illnesses (13). Diagnosing cancers at an early stage is crucial to improving survival rate and reducing morbidity. At the time of diagnosis of oral cancer, 36% of persons have localized disease, 43% have regional disease, and 9% have distant disease (for 12%, the disease is unstaged) (4). The 5-year survival rate for persons having oral cancer is 81% for those with localized disease, 42% for those with regional disease, and 17% for those with distant metastases (4). During the past decade, the stage at diagnosis has not changed significantly (3).
*Hereafter, pharyngeal cancer is also included in the term oral cancer.
# ORAL CANCER STRATEGIC PLANNING CONFERENCE
Background
In 1992, a consortium of health agencies led by CDC and the National Institute of Dental Research (NIDR) of the National Institutes of Health began to establish goals, objectives, and programs to reduce oral cancer morbidity and mortality in the United States. The Oral Cancer Work Group, which was formed as part of this initiative, subsequently developed short-term and long-term goals for preventing and controlling oral cancer. A list of these goals was disseminated to interested organizations and individuals in 1993. One of the recommendations of the Oral Cancer Work Group was to summarize the state of the science regarding oral cancer. In response, CDC commissioned nine background papers regarding the prevention, control, and treatment of the disease and addressing current knowledge, emerging trends, opportunities, and barriers to further progress. The authors, representing several specialties and areas of expertise, drew on current literature reviews, in-depth critiques, and personal experience. The Oral Cancer Work Group also suggested that CDC convene a conference to develop national strategies to help make oral cancer prevention and control a higher public health priority. Subsequently, CDC, in partnership with NIDR and the American Dental Association (ADA), formed a conference planning group. The planning group, along with a larger cadre of oral cancer experts, developed a draft set of strategies. This draft and the nine background papers were distributed to invited participants before the conference.
# Conference Format
The Oral Cancer Strategic Planning Conference was held August 7-9, 1996, at the ADA headquarters in Chicago. Participants included 125 invited experts in oral cancer prevention, treatment, and research; both the private and public sectors were represented. Following brief welcoming remarks by ADA, CDC, and NIDR representatives, nationally recognized experts made presentations on the etiology of oral cancer, its epidemiology, ongoing and needed research, and clinical experience with five other cancers (i.e., leukemia and breast, cervical, lung, and prostate cancers). A survivor of oral cancer described the human impact of the disease. Conference participants broke into five work groups: advocacy, collaboration, and coalition building; public health policy; public education; professional education and practice; and data collection, evaluation, and research. Each work group had a chairperson and co-chairperson who were preselected from the conference participants; toward the conclusion of the conference, chairpersons presented their work groups' recommended strategies to all conference participants, who provided oral and written feedback. The work groups made revisions, including comments raised during the general session. After the conference, the recommended strategies were disseminated to all participants for final review and comments. These last comments were incorporated to produce the finalized recommended strategies to reduce oral cancer morbidity and mortality in the United States.
# Recommended Strategies from Work Groups
Advocacy, Collaboration, and Coalition Building
The work group on advocacy, collaboration, and coalition building (e.g., formation by the oral health community of partnerships with other health professionals and public or private organizations to facilitate increased awareness of the risk factors for oral cancer) developed the following main recommended strategies.
- Establish an ongoing, institutionalized mechanism to implement and monitor progress made regarding the recommended strategies developed during the conference.
- Urge professionals in oral health and other health disciplines to become more actively involved in community health concerns, especially in preventing tobacco and heavy alcohol use, by
-developing a comprehensive advocacy training program for a core group of oral health professionals;
-recruiting persons from the health community and enrolling them in a national database for tobacco and oral cancer advocacy;
-designing outreach programs to encourage local and state dental societies to be proactive in oral cancer and related coalitions;
-establishing an advocacy network of oral cancer survivors; and
-developing a speakers bureau of sports figures and other prominent persons willing to speak about risk factors for oral cancer and the importance of its early detection.
# Public Health Policy
This work group presented its recommended strategies in four categories.
# Prevention and Control of Tobacco and Alcohol Use.
- Increase excise taxes on tobacco and alcohol products to provide targeted funding for oral cancer prevention programs.
- Strengthen and enforce laws regarding youth access to tobacco and alcohol.
- Give the U.S. Food and Drug Administration regulatory authority over tobacco, because nicotine is an addictive drug.
- Prohibit all advertising and promotional activities by the tobacco industry and conduct a well-funded counteradvertising campaign that focuses on cigarettes, cigars, pipe tobacco, and spit tobacco.
- Deny federal health and medical research funding to organizations that accept health research funding from the tobacco industry or its research institutes.*
- Increase excise taxes on spit tobacco to an amount equal to or greater than the taxes on cigarettes.
- Encourage professional sports teams to ban the use of tobacco products among team members during practices and games.
- Add strong statements to tobacco and alcohol warning labels about the risk of oral cancer. Ensure that tobacco warning labels cover 25%-30% of the front or back of a product's package and advertising copy. Model warnings after those used in Australia and Canada.
# Professional Knowledge and Behaviors. †
- Require instruction in preventing and controlling tobacco and alcohol use, including tobacco cessation, at all levels of training in dental, medical, nursing, and related health-care disciplines.
- Ensure that clinicians learn procedures to detect oral cancer that are appropriate to their professional practice.
- Urge all health professionals to routinely assess tobacco and alcohol intake by their patients.
- Encourage health-care agencies and professionals to recommend that all clinicians who deliver primary health care routinely examine their patients for oral cancer. §
*This strategy generated considerable discussion among the conference participants. The work group recognized this strategy could negatively affect research support for oral cancer but still recommended it.
† These strategies complement those developed by the work group on professional education and practice but are listed here because of their implications for public policy.
§ The U.S. Preventive Services Task Force states that "there is insufficient evidence to recommend for or against routine screening of asymptomatic persons for oral cancer by primary care physicians" but that "clinicians should remain alert to signs and symptoms of oral cancer and premalignancy in persons who use tobacco and alcohol" (16). The work group, however, believed that all persons should be routinely examined and chose a stronger recommendation.
# Compensation.
- Work with the ADA and the American Medical Association to reaffirm that existing codes for reimbursement (e.g., Common Procedure Terminology and Common Dental Terminology) appropriately identify oral cancer examinations as part of the standard oral examination.
- Encourage Medicaid, Medicare, traditional insurance plans, and managed-care entities to make oral cancer examinations an integral part of comprehensive physical and oral examinations.
- Base reimbursement for oral cancer examinations on the service provided rather than the academic degree of the provider.
# National Programs.
- Designate federal funding for a national program of oral cancer prevention, early detection, and control that includes support for outcomes assessment and policy-based research.
# Public Education
Seven major strategies were recommended by the work group on public education.
- Develop and disseminate guidelines and lists of resources to assist communities (e.g., states, counties, cities, towns, and members of organizations and institutions) in developing, implementing, and evaluating models for oral cancer education. This effort could include an inventory of available guidelines, literature, processes, and educational models.
- Develop, implement, and evaluate statewide models to educate all relevant groups. These models should be tailored to local needs, practical, culturally appropriate, and user friendly and should include the following content areas:
-risk factors for oral cancer (e.g., tobacco use, alcohol use, and nutritional deficiencies);
-signs and symptoms of oral cancer;
-procedures for a thorough oral cancer examination and the ease with which the examination can be performed; and
-methods of public advocacy.
- Persuade relevant CDC and National Institutes of Health decisionmakers, members of Congress, and members of other organizations to secure funding for statewide oral cancer model demonstration projects and to establish an oral health component in CDC's Initiatives to Mobilize for the Prevention and Control of Tobacco Use (IMPACT) program.
- Develop and conduct a national campaign to raise public awareness of oral cancer and its link to tobacco use and heavy alcohol consumption. The campaign might include a mascot or logo, sports figures or other distinguished persons as spokespersons, or a national oral cancer awareness week.
- Ensure that behavioral and educational research in oral cancer is included in the budget of organizations that sponsor such research (e.g., the National Institutes of Health, universities, and foundations).
- Increase the representation of educators, behavioral scientists, and oral cancer specialists on the grant review committees of cancer and dental research institutions.
- Ensure that a national research agenda is developed that includes the following:
-ongoing surveillance to monitor knowledge, opinions, attitudes, and practices of the public, especially populations at high risk for oral cancer;
-surveys of the knowledge, opinions, attitudes, and practices of relevant health-care providers regarding oral cancer;
-evaluations of the effectiveness of educational interventions among targeted populations;
-changes in existing survey instruments (e.g., the National Health Interview Survey) to include items on oral cancer comparable to items on other cancers;
-inclusion of oral cancer questions in state Behavioral Risk Factor Surveillance System surveys;
-determination of the proficiency of persons who have been taught to perform an oral cancer self-examination; and
-assessment of the quality (e.g., reading level or scientific accuracy), quantity, and availability of educational materials directed to the public about oral cancer.
# Professional Education and Practice
This work group developed five recommended strategies.
- Develop health-care curricula that require competency in prevention, diagnosis, and multidisciplinary management of oral cancer, including the prevention and cessation of tobacco use and alcohol abuse.
- Promote soft tissue examination for oral cancer as a standard part of a complete patient examination.
- Develop, promote, and maintain a database of all professional education materials related to oral cancer.
- Define, identify, develop, and promote centers of excellence in oral cancer management.
- Sponsor and promote continuing education for health-care professionals on the multidisciplinary management of all phases of oral cancer and its sequelae.
In addition, the work group identified seven initiatives that would facilitate achievement of their recommended strategies: develop educational standards and standards of care for oral cancer; standardize techniques for oral cancer examination and implement them consistently; create a national speakers bureau with standardized educational materials; place an oral cancer home page on the World Wide Web; create guidelines for developing screening and detection programs; develop self-instructional materials for health professionals on a range of topics (e.g., risk factors, early detection, and counseling of high-risk patients); and identify and catalog professional education materials, determine deficits in these materials, and ensure access to the cataloged materials.
# Data Collection, Evaluation, and Research
These recommended strategies would facilitate research regarding the etiology, prevention, and treatment of oral cancer and would translate research findings into effective public health action.
- Increase funding or target existing funding to initiate and sustain research concerning oral cancer.
- Improve the capacity of individual health practitioners and small medical centers to participate in research regarding prevention strategies and therapeutic approaches.*
- Develop curricula for basic preparation and continuing education for health professionals that will improve their knowledge of the nature, value, implementation, and importance of well-designed and well-conducted research studies.
- Improve researchers' access to tissues, study populations, and data sources. Possible approaches include
-using population-based cancer registries for follow-up studies;
-combining information from state-based population-based cancer registries and from national registries (e.g., the Surveillance, Epidemiology, and End Results [SEER] program) to help develop an enhanced descriptive epidemiology of oral cancer, particularly for smaller subpopulations insufficiently represented in the SEER or state registries;
-encouraging the use of existing databases, either singly or in combination, to address questions about oral cancer care, consequences, and costs (e.g., one such database combines SEER incidence and survival data for Medicare beneficiaries with their Medicare claims data);
-developing a systematic approach to providing researchers with access to tissue specimens and detailed information about the behavioral and medical characteristics of persons who are at high risk for oral cancer, have premalignant lesions, or currently have oral cancer; finding creative ways to share appropriate biopsy specimens, research subjects and patients, and research findings so that researchers can maximize the information gained from biological studies; and developing laboratory assays that conserve specimens, thus allowing for multiple assessments of the same tissue; and
-evaluating innovative approaches for identifying persons at greatest risk for oral cancer and recruiting them for research studies (e.g., form partnerships with organizations serving residents of homeless shelters or clients of alcohol treatment centers).
- Develop valid and reliable patient-oriented indices of health, quality of life, and functioning.
- Create multidisciplinary groups to facilitate movement of findings in two directions-from basic research to applied research and from research in the clinical sciences, epidemiology, and health-services delivery to basic science-thus helping to focus basic research efforts. Such strategies may include the following:
-developing innovative science transfer techniques (e.g., Internet applications) for researchers, clinicians, and the public;
-developing effective means of communicating complex biological processes to clinicians, students, and the public; and
-increasing research on how health-care practitioners and the public understand and act on the concept of risk of a disease and its consequences.
- Strengthen organizational approaches to reducing oral cancer by developing cooperative and collaborative arrangements, funding formal centers, and involving commercial firms. The following means are suggested:
-consortia of researchers and medical and dental practitioners could share patient sources, standardize clinical protocols, achieve adequate sample sizes, recruit patients and at-risk persons for research studies, and enhance science transfer; individual practitioners as well as organizations (e.g., alcohol treatment centers) that serve populations at risk for oral cancer or its sequelae could be sources of study subjects;
-other formal centers could be established in addition to those funded by NIDR and the National Cancer Institute; and
-commercial firms could use their marketing and distribution systems to enhance science transfer, health promotion, and disease prevention activities; in addition, they could join with academic or government groups to fund or otherwise facilitate research.
*This strategy would facilitate execution of multicenter studies, which are often needed to produce highly generalizable findings and to provide adequate statistical power to detect relatively small differences. It also recognizes the growing trend toward treating oral cancer in ambulatory settings and within managed-care delivery systems. Differences in treatment outcomes for all the major delivery systems and settings cannot be assessed completely if physicians' and dentists' offices are not included in research studies of small medical centers.
# ORAL CANCER WORKING GROUP
The Oral Cancer Working Group, a multidisciplinary group of participants who attended the 1996 Oral Cancer Strategic Planning Conference, met September 29-30, 1997, to identify 10 strategies from the 1996 meeting recommendations to receive immediate attention and implementation by the agencies they represented. The Oral Cancer Working Group considered political and scientific changes that had occurred after the 1996 conference (e.g., the U.S. Food and Drug Administration had been given regulatory authority over tobacco, legal cases involving tobacco had been settled in several states, national tobacco legislation had been proposed, and four comprehensive oral cancer research centers had been funded by NIDR) and selected strategies the group could effect (as opposed to strategies already under way as a result of the leadership and support of other groups). Leadership at the 1997 meeting was shared by representatives of ADA, the American Association of Dental Research, the Association of State and Territorial Dental Directors, CDC, the International Society of Oral Oncology, NIDR, and Oral Health America. The 10 priority strategies are as follows.
# Advocacy, Collaboration, and Coalition Building
- Establish a mechanism to implement and monitor progress made regarding the recommended strategies developed during the 1996 national conference.
- Urge oral health professionals to become more actively involved in community health concerns.
# Public Health Policy
- Require instruction in preventing and controlling tobacco and alcohol use at all levels of training in dental, medical, nursing, and related health-care disciplines.
- Encourage Medicaid, Medicare, traditional insurance plans, and managed-care entities to make oral cancer examinations an integral part of comprehensive physical and oral examinations.
- Designate federal funding for a national program of oral cancer prevention, early detection, and control.
# Public Education
- After assessing local needs, develop, implement, and evaluate statewide models to educate all relevant groups.
- Develop and conduct a national campaign to raise public awareness of oral cancer and its link to tobacco use and heavy alcohol consumption.
# Professional Education and Practice
- Develop health-care curricula that require competency in prevention, diagnosis, and multidisciplinary management of oral cancer.
- Sponsor and promote continuing education for health-care professionals on the multidisciplinary management of all phases of oral cancer and its sequelae.
# Data Collection, Evaluation, and Research
- Strengthen organizational approaches to reducing oral cancer by developing cooperative and collaborative arrangements, funding formal centers, and involving commercial firms.
At the 1997 follow-up meeting, the Oral Cancer Working Group created a smaller group known as the Oral Cancer Roundtable. Members of the Roundtable will communicate among themselves to discuss implementation of the priority recommendations and the recommendations from the 1996 conference and to share information on progress made.
Through the Roundtable, CDC will communicate to interested agencies, organizations, and state health departments ways in which they can implement elements of the national plan. The Roundtable will help CDC track the efforts and progress of these groups.
# CONCLUSION
National efforts to reduce morbidity and mortality associated with oral cancer must focus on two areas: primary prevention (i.e., reducing risk factors) and early detection. Although persons at high risk for the disease are more likely to visit a physician than a dentist, physicians may be less likely than dentists to perform an oral cancer examination on such patients (17-21). Thus, all primary-care providers must assume more responsibility for counseling patients about behaviors that put them at risk for developing this cancer, examining patients who are at high risk for developing the disease because of tobacco use or excessive alcohol consumption (22), and referring patients to an appropriate specialist for management of a suspicious oral lesion. Comprehensive education of medical and dental practitioners in diagnosing and promptly managing early lesions could facilitate the multidisciplinary collaboration necessary to detect oral cancer in its earliest stages. Furthermore, because of the public's lack of knowledge about the risk factors for oral cancer and because this disease can often be detected in its early stages (21,23), the public's awareness of oral cancer (including its risk factors, signs, and symptoms) must also be increased. Oral cancer occurs in sites that lend themselves to early detection by most primary health-care providers and, to a lesser extent, by self-examination. Heightened awareness in the general population could help with early detection of this cancer and could stimulate dialogue between patients and their primary health-care providers about behaviors that may increase the risk for developing oral cancer. Recent advances in understanding the molecular events involved in developing cancer might provide the tools needed to design novel preventive, diagnostic, prognostic, and therapeutic regimens to combat oral cancer. Acquiring greater knowledge of the biology, immunology, and pathology of the oral mucosa may also help to reduce the morbidity and mortality from this disease.
# Preventing and Controlling Oral and Pharyngeal Cancer Recommendations from a National Strategic Planning Conference Summary In August 1996, CDC convened a national conference to develop strategies for preventing and controlling oral and pharyngeal cancer in the United States. The conference, which was cosponsored by the National Institute of Dental Research of the National Institutes of Health and the American Dental Association, included 125 experts in oral and pharyngeal cancer prevention, treatment, and research; both the private and public sectors were represented. Participants at the conference developed recommendations concerning advocacy, collaboration, and coalition building; public health policy; public education; professional education and practice; and data collection, evaluation, and research. A follow-up meeting consisting of selected participants of the 1996 conference was held in September 1997. During this meeting, changes that had occurred in the political and scientific arenas since the 1996 conference were considered, and 10 recommended strategies from the conference were selected for priority implementation. These 10 strategies were to a) establish a mechanism to implement and monitor the recommended strategies developed during the conference; b) urge oral health professionals to become more actively involved in community health; c) require instruction in preventing and controlling tobacco and alcohol use at all levels of training in dental, medical, nursing, and other related health-care disciplines; d) encourage Medicaid, Medicare, traditional insurance plans, and managed-care entities to consider making oral cancer examinations an integral part of comprehensive physical and oral examinations; e) designate federal funding for a national program of oral cancer prevention, early detection, and control; f) after assessing local needs, develop, implement, and evaluate statewide models to educate all relevant groups; g) develop and conduct a national promotional campaign to raise public awareness of oral cancer and its link to tobacco use and heavy alcohol consumption; h) develop health-care curricula that require competency in prevention, diagnosis, and multidisciplinary management of oral and pharyngeal cancer; i) sponsor and promote continuing education for health-care professionals on the multidisciplinary management of all phases of oral cancer and its sequelae; and j) strengthen organizational approaches to reducing oral cancer by developing organized cooperative and collaborative arrangements, funding formal centers, and involving commercial firms. CDC will use these recommended strategies to develop programs to reduce the burden of oral and pharyngeal cancer in the United States. Through the Oral Cancer Roundtable, a group of conference and meeting participants, CDC will communicate to interested agencies, organizations, and state health departments ways in which they can implement elements of the national plan. The Roundtable will help CDC track the efforts and progress of these groups. # INTRODUCTION During the past decade, federal health agencies have focused on reducing the incidence of oral and pharyngeal cancer and increasing the 5-year survival rate from these cancers in the United States. Beginning with a consortium of health agencies in 1992 (and including a strategic planning conference in 1996 and a follow-up meeting in 1997), CDC has been involved in concerted efforts to establish a national plan for preventing and controlling these cancers. 
This report presents recommended strategies for action from the 1996 conference and a list of priority recommendations from the 1997 meeting. These recommendations will enable CDC to develop a coordinated national plan to reduce morbidity and mortality from oral and pharyngeal cancer in the United States. # ORAL AND PHARYNGEAL CANCER Oral cancer (i.e., cancer of the lip, tongue, floor of the mouth, palate, gingiva and alveolar mucosa, buccal mucosa, or oropharynx)* accounts for 2%-4% of cancers diagnosed annually in the United States; approximately two thirds occur in the oral cavity, and the remainder occurs in the oropharynx (1 ). In 1998, this diagnosis will be made in an estimated 30,300 Americans; approximately 8,000 deaths (5,200 males and 2,800 females) are expected in this year (2 ). Ninety-five percent of cases of oral cancer occur among persons aged >40 years, and the average age at diagnosis is 60 years (3 ). In 1950, the male-to-female ratio of oral cancer incidence was approximately 6:1; by 1997, it was approximately 2:1. The changing ratio is likely the result of the increase in smoking among women in the past three decades (3 ). In addition, cancer is an age-related disease, and in the United States, the number of women aged >65 years now exceeds the number of men aged >65 years by almost 50% (3 ). During 1990-1994, the annual incidence rate among black males in the United States was 1.6 times higher than the rate among white males (20.1 versus 12.9 new cases per 100,000) and the annual mortality rate among black males was 2.5 times higher (7.6 versus 3.1 deaths per 100,000); the annual incidence rate among black females was slightly higher than that among white females (5.6 versus 4.9 new cases per 100,000), as was the annual mortality rate (1.8 versus 1.2 deaths per 100,000) (4 ). Despite agressive combinations of surgery, radiation therapy, and chemotherapy, the 5-year survival rate for oral cancer is poor (blacks: 35%; whites: 55%) (1,5 ). Tobacco smoking (i.e., cigarette, pipe, or cigar smoking), particularly when combined with heavy alcohol consumption (i.e., ≥30 drinks per week), has been identified as the primary risk factor for approximately 75% of oral cancers in the United States (6 ) . The use of tobacco in other forms (i.e., snuff and chew) has also been identified as a risk factor (7-9 ), as have certain other lifestyle and environmental factors (e.g., diet and occupational exposure to sunlight) (10 ). Approximately 90% of oral cancer lesions are squamous cell carcinomas. Persons who have oral cancer often develop multiple primary lesions (i.e., field cancerization), and they develop second primary tumors at a rate of approximately 4% annually (11 ). Persons having primary oral cancer are more likely to develop a second primary cancer of the aerodigestive tract (i.e., oral cavity, pharynx, esophagus, larynx, and lungs) *Hereafter, pharyngeal cancer is also included in the term oral cancer. (12,13 ). The initally diagnosed disease accounts for one half of the deaths caused by oral cancer; one fourth of these deaths are due to a second primary cancer, and the remaining one fourth are attributable to other illnesses (13 ). Diagnosing cancers at an early stage is crucial to improving survival rate and reducing morbidity. At the time of diagnosis of oral cancer, 36% of persons have localized disease, 43% have regional disease, and 9% have distant disease (for 12% the disease is unstaged) (4 ). 
The 5-year survival rate for persons having oral cancer is 81% for those with localized disease, 42% for patients with regional disease, and 17% for those with distant metastases (4 ). During the past decade, at diagnosis stage has not changed significantly (3 ). # ORAL CANCER STRATEGIC PLANNING CONFERENCE Background In 1992, a consortium of health agencies led by CDC and the National Institute of Dental Research (NIDR) of the National Institutes of Health began to establish goals, objectives, and programs to reduce oral cancer morbidity and mortality in the United States. The Oral Cancer Work Group, which was formed as part of this initiative, subsequently developed short-term and long-term goals for preventing and controlling oral cancer. A list of these goals was disseminated to interested organizations and individuals in 1993. One of the recommendations of the Oral Cancer Work Group was to summarize the state of the science regarding oral cancer. In response, CDC commissioned nine background papers regarding the prevention, control, and treatment of the disease and addressing current knowledge, emerging trends, opportunities, and barriers to further progress. The authors, representing several specialties and expertise, drew on current literature reviews, in-depth critiques, and personal experience. The Oral Cancer Work Group also suggested that CDC convene a conference to develop national strategies to help make oral cancer prevention and control a higher public health priority. Subsequently, CDC, in partnership with NIDR and the American Dental Association (ADA), formed a conference planning group. The planning group, along with a larger cadre of oral cancer experts, developed a draft set of strategies. This draft and the nine background papers were distributed to invited participants before the conference. # Conference Format The Oral Cancer Strategic Planning Conference was held August 7-9, 1996, at the ADA headquarters in Chicago. Participants included 125 invited experts in oral cancer prevention, treatment, and research; both the private and public sectors were represented. Following brief welcoming remarks by ADA, CDC, and NIDR representatives, nationally recognized experts made presentations on the etiology of oral cancer, its epidemiology, ongoing and needed research, and clinical experience with five other cancers (i.e., leukemia and breast, cervical, lung, and prostate cancers). A survivor of oral cancer described the human impact of the disease. Conference participants broke into five work groups: advocacy, collaboration, and coalition building; public health policy; public education; professional education and practice; and data collection, evaluation, and research. Each work group had a chairperson and co-chairperson who were preselected from the conference participants; toward the conclusion of the conference, chairpersons presented their work groups' recommended strategies to all conference participants, who provided oral and written feedback. The work groups made revisions, including comments raised during the general session. After the conference, the recommended strategies were disseminated to all participants for final review and comments. These last comments were incorporated to produce the finalized recommended strategies to reduce oral cancer morbidity and mortality in the United States. 
# Recommended Strategies from Work Groups Advocacy, Collaboration, and Coalition Building The work group on advocacy, collaboration, and coalition building (e.g., formation by the oral health community of partnerships with other health professionals and public or private organizations to facilitate increased awareness of the risk factors for oral cancer) developed three main recommended strategies. • Establish an ongoing, institutionalized mechanism to implement and monitor progress made regarding the recommended strategies developed during the conference. • Urge professionals in oral health and other health disciplines to become more actively involved in community health concerns, especially in preventing tobacco and heavy alcohol use, by -developing a comprehensive advocacy training program for a core group of oral health professionals; -recruiting persons from the health community and enrolling them in a national database for tobacco and oral cancer advocacy; -designing outreach programs to encourage local and state dental societies to be proactive in oral cancer and related coalitions; -establishing an advocacy network of oral cancer survivors; and -developing a speakers bureau of sports figures and other prominent persons willing to speak about risk factors for oral cancer and the importance of its early detection. • Increase excise taxes on tobacco and alcohol products to provide targeted funding for oral cancer prevention programs. • Strengthen and enforce laws regarding youth access to tobacco and alcohol. • Give the U.S. Food and Drug Administration regulatory authority over tobacco, because nicotine is an addictive drug. • Prohibit all advertising and promotional activities by the tobacco industry and conduct a well-funded counteradvertising campaign that focuses on cigarettes, cigars, pipe tobacco, and spit tobacco. • Deny federal health and medical research funding to organizations that accept health research funding from the tobacco industry or its research institutes.* • Increase excise taxes on spit tobacco to an amount equal to or greater than the taxes on cigarettes. • Encourage professional sports teams to ban the use of tobacco products among team members during practices and games. • Add strong statements to tobacco and alcohol warning labels about the risk of oral cancer. Ensure that tobacco warning labels cover 25%-30% of the front or back of a product's package and advertising copy. Model warnings after those used in Australia and Canada. # Professional Knowledge and Behaviors. † • Require instruction in preventing and controlling tobacco and alcohol use, including tobacco cessation, at all levels of training in dental, medical, nursing, and related health-care disciplines. • Ensure that clinicians learn procedures to detect oral cancer that are appropriate to their professional practice. • Urge all health professionals to routinely assess tobacco and alcohol intake by their patients. • Encourage health-care agencies and professionals to recommend that all clinicians who deliver primary health care routinely examine their patients for oral cancer. § *This strategy generated considerable discussion among the conference participants. The work group recognized this strategy could negatively affect research support for oral cancer but still recommended it. † These strategies complement those developed by the work group on professional education and practice but are listed here because of their implications for public policy. § The U.S. 
Preventive Services Task Force states that "there is insufficient evidence to recommend for or against routine screening of asymptomatic persons for oral cancer by primary care physicians" but that "clinicians should remain alert to signs and symptoms of oral cancer and premalignancy in persons who use tobacco and alcohol" (16 ). The work group, however, believed that all persons should be routinely examined and chose a stronger recommendation. # Compensation. • Work with the ADA and the American Medical Association to reaffirm that existing codes for reimbursement (e.g., Common Procedure Terminology and Common Dental Terminology) appropriately identify oral cancer examinations as part of the standard oral examination. • Encourage Medicaid, Medicare, traditional insurance plans, and managed-care entities to make oral cancer examinations an integral part of comprehensive physical and oral examinations. • Base reimbursement for oral cancer examinations on the service provided rather than the academic degree of the provider. # National Programs. • Designate federal funding for a national program of oral cancer prevention, early detection, and control that includes support for outcomes assessment and policy-based research. # Public Education Seven major strategies were recommended by the work group on public education. • Develop and disseminate guidelines and lists of resources to assist communities (e.g., states, counties, cities, towns, and members of organizations and institutions) in developing, implementing, and evaluating models for oral cancer education. This effort could include an inventory of available guidelines, literature, processes, and educational models. • Develop, implement, and evaluate statewide models to educate all relevant groups. These models should be tailored to local needs, practical, culturally appropriate, and user friendly and should include the following content areas: -risk factors for oral cancer (e.g., tobacco use, alcohol use, and nutritional deficiencies); -signs and symptoms of oral cancer; -procedures for a thorough oral cancer examination and the ease with which the examination can be performed; and -methods of public advocacy. • Pursuade relevant CDC and National Institutes of Health decisionmakers, members of Congress, and members of other organizations to secure funding for statewide oral cancer model demonstration projects and to establish an oral health component in CDC's Initiatives to Mobilize for the Prevention and Control of Tobacco Use (IMPACT) program. • Develop and conduct a national campaign to raise public awareness of oral cancer and its link to tobacco use and heavy alcohol consumption. The campaign might include a mascot or logo, sports figures or other distinguished persons as spokespersons, or a national oral cancer awareness week. • Ensure that behavioral and educational research in oral cancer is included in the budget of organizations that sponsor such research (e.g., the National Institutes of Health, universities, and foundations). • Increase the representation of educators, behavioral scientists, and oral cancer specialists on the grant review committees of cancer and dental research institutions. 
• Ensure that a national research agenda is developed that includes the following: -ongoing surveillance to monitor knowledge, opinions, attitudes, and practices of the public, especially populations at high risk for oral cancer; -surveys of the knowledge, opinions, attitudes, and practices of relevant healthcare providers regarding oral cancer; -evaluations of the effectiveness of educational interventions among targeted populations; -changes in existing survey instruments (e.g., the National Health Interview Survey) to include items on oral cancer comparable to items on other cancers; -inclusion of oral cancer questions in state Behavioral Risk Factor Surveillance System surveys; -determination of the proficiency of persons who have been taught to perform an oral cancer self-examination; and -assessment of the quality (e.g., reading level or scientific accuracy), quantity, and availability of educational materials directed to the public about oral cancer. # Professional Education and Practice This work group developed five recommended strategies. • Develop health-care curricula that require competency in prevention, diagnosis, and multidisciplinary management of oral cancer, including the prevention and cessation of tobacco use and alcohol abuse. • Promote soft tissue examination for oral cancer as a standard part of a complete patient examination. • Develop, promote, and maintain a database of all professional education materials related to oral cancer. • Define, identify, develop, and promote centers of excellence in oral cancer management. • Sponsor and promote continuing education for health-care professionals on the multidisciplinary management of all phases of oral cancer and its sequelae. In addition, the work group identified seven initiatives that would facilitate achievement of their recommended strategies: develop educational standards and standards of care for oral cancer; standardize techniques for oral cancer examination and implement them consistently; create a national speakers bureau with standardized educational materials; place an oral cancer home page on the World Wide Web; create guidelines for developing screening and detection programs; develop self-instructional materials for health professionals on a range of topics (e.g., risk factors, early detection, and counseling of high-risk patients); and identify and catalog professional education materials, determine deficits in these materials, and ensure access to the cataloged materials. appropriate biopsy specimens, research subjects and patients, and research findings so that researchers can maximize the information gained from biological studies; and developing laboratory assays that conserve specimens, thus allowing for multiple assessments of the same tissue; and -evaluating innovative approaches for identifying persons at greatest risk for oral cancer and recruiting them for research studies (e.g., form partnerships with organizations serving residents of homeless shelters or clients of alcohol treatment centers). • Develop valid and reliable patient-oriented indices of health, quality of life, and functioning. • • Create multidisciplinary groups to facilitate movement of findings in two directions-from basic research to applied research and from research in the clinical sciences, epidemiology, and health-services delivery to basic science-thus helping to focus basic research efforts. 
Such strategies may include the following: -developing innovative science transfer techniques (e.g., Internet applications) for researchers, clinicians, and the public; -develop effective means of communicating the complex biological processes to clinicians, students, and the public; and -increasing research on how health-care practitioners and the public understand and act on the concept of risk of a disease and its consequences. • Strengthen organizational approaches to reducing oral cancer by developing cooperative and collaborative arrangements, funding formal centers, and involving commercial firms. The following means are suggested: -consortia of researchers and medical and dental practitioners could share patient sources, standardize clinical protocols, achieve adequate sample sizes, recruit patients and at-risk persons for research studies, and enhance science transfer; individual practitioners as well as organizations (e.g., alcohol treatment centers) that serve populations at risk for oral cancer or its sequelae could be sources of study subjects; -other formal centers could be established in addition to those funded by NIDR and the National Cancer Institute; and -commercial firms could use their marketing and distribution systems to enhance science transfer, health promotion, and disease prevention activities; in addition, they could join with academic or government groups to fund or otherwise facilitate research. # ORAL CANCER WORKING GROUP The Oral Cancer Working Group, a multidisciplinary group who attended the 1996 Oral Cancer Strategic Planning Conference, met September 29-30, 1997, to identify 10 strategies from the 1996 meeting recommendations to receive immediate attention and implementation by the agencies they represented. The Oral Cancer Working Group considered political and scientific changes that had occurred after the 1996 conference (e.g., the U.S. Food and Drug Administration had been given regulatory authority over tobacco, legal cases involving tobacco had been settled in several states, national tobacco legislation had been proposed, and four comprehensive oral cancer research centers had been funded by NIDR) and selected strategies the group could effect (as opposed to strategies already under way as a result of the leadership and support of other groups). Leadership at the 1997 meeting was shared by representatives of ADA, the American Association of Dental Research, the Association of State and Territorial Dental Directors, CDC, the International Society of Oral Oncology, NIDR, and Oral Health America. The 10 priority strategies are as follows. # Advocacy, Collaboration, and Coalition Building • Establish a mechanism to implement and monitor progress made regarding the recommended strategies developed during the 1996 national conference. • Urge oral health professionals to become more actively involved in community health concerns. # Public Health Policy • Require instruction in preventing and controlling tobacco and alcohol use at all levels of training in dental, medical, nursing, and related health-care disciplines. • Encourage Medicaid, Medicare, traditional insurance plans, and managed-care entities to make oral cancer examinations an integral part of comprehensive physical and oral examinations. • Designate federal funding for a national program of oral cancer prevention, early detection, and control. # Public Education • After assessing local needs, develop, implement, and evaluate statewide models to educate all relevant groups. 
• Develop and conduct a national campaign to raise public awareness of oral cancer and its link to tobacco use and heavy alcohol consumption. # Professional Education and Practice • Develop health-care curricula that require competency in prevention, diagnosis, and multidisciplinary management of oral cancer. # Public Health Policy This work group presented its recommended strategies in four categories. # Prevention and Control of Tobacco and Alcohol Use. # Data Collection, Evaluation, and Research These recommended strategies would facilitate research regarding the etiology, prevention, and treatment of oral cancer and would translate research findings into effective public health action. • Increase funding or target existing funding to initiate and sustain research concerning oral cancer. • Improve the capacity of individual health practitioners and small medical centers to participate in research regarding prevention strategies and therapeutic approaches.* • Develop curricula for basic preparation and continuing education for health professionals that will improve their knowledge of the nature, value, implementation, and importance of well-designed and well-conducted research studies. • Improve researchers' access to tissues, study populations, and data sources. Possible approaches include -using population-based cancer registries for follow-up studies; -combining information from state-based population-based cancer registries and from national registries (e.g., the Surveillance, Epidemiology, and End Results [SEER] program) to help develop an enhanced descriptive epidemiology of oral cancer, particularly for smaller subpopulations insufficiently represented in the SEER or state registries; -encouraging the use of existing databases, either singly or in combination, to address questions about oral cancer care, consequences, and costs (e.g., one such database combines SEER incidence and survival data for Medicare beneficiaries with their Medicare claims data); -developing a systematic approach to providing researchers with access to tissue specimens and detailed information about the behavioral and medical characteristics of persons who are at high risk for oral cancer, have premalignant lesions, or currently have oral cancer; finding creative ways to share * This strategy would facilitate execution of multicenter studies, which are often needed to produce highly generalizable findings and to provide adequate statistical power to detect relatively small differences. It also recognizes the growing trend toward treating oral cancer in ambulatory settings and within managed-care delivery systems. Differences in treatment outcomes for all the major delivery systems and settings cannot be assessed completely if physicians' and dentists' offices are not included in research studies of small medical centers. • Sponsor and promote continuing education for health-care professionals on the multidisciplinary management of all phases of oral cancer and its sequelae. # Data Collection, Evaluation, and Research • Strengthen organizational approaches to reducing oral cancer by developing cooperative and collaborative arrangements, funding formal centers, and involving commercial firms. At the 1997 follow-up meeting, the Oral Cancer Working Group created a smaller group known as the Oral Cancer Roundtable. 
Members of the Roundtable will communicate among themselves to discuss implementation of the priority recommendations and the recommendations from the 1996 conference and to share information on progress made. Through the Roundtable, CDC will communicate to interested agencies, organizations, and state health departments ways in which they can implement elements of the national plan. The Roundtable will help CDC track the efforts and progress of these groups.
# CONCLUSION
National efforts to reduce morbidity and mortality associated with oral cancer must focus on two areas: primary prevention (i.e., reducing risk factors) and early detection. Although persons at high risk for the disease are more likely to visit a physician than a dentist, physicians may be less likely than dentists to perform an oral cancer examination on such patients (17-21). Thus, all primary-care providers must assume more responsibility for counseling patients about behaviors that put them at risk for developing this cancer, examining patients who are at high risk for developing the disease because of tobacco use or excessive alcohol consumption (22), and referring patients to an appropriate specialist for management of a suspicious oral lesion. Comprehensive education of medical and dental practitioners in diagnosing and promptly managing early lesions could facilitate the multidisciplinary collaboration necessary to detect oral cancer in its earliest stages. Furthermore, because of the public's lack of knowledge about the risk factors for oral cancer and because this disease can often be detected in its early stages (21,23), the public's awareness of oral cancer (including its risk factors, signs, and symptoms) must also be increased. Oral cancer occurs in sites that lend themselves to early detection by most primary health-care providers and, to a lesser extent, by self-examination. Heightened awareness in the general population could help with early detection of this cancer and could stimulate dialogue between patients and their primary health-care providers about behaviors that may increase the risk for developing oral cancer. Recent advances in understanding the molecular events involved in developing cancer might provide the tools needed to design novel preventive, diagnostic, prognostic, and therapeutic regimens to combat oral cancer. Acquiring greater knowledge of the biology, immunology, and pathology of the oral mucosa may also help to reduce the morbidity and mortality from this disease.
The term nosocomial infection is retained to refer only to infections acquired in hospitals. The term healthcare-associated infection (HAI) is used to refer to infections associated with healthcare delivery in any setting (e.g., hospitals, long-term care facilities, ambulatory settings, home care). This term reflects the inability to determine with certainty where the pathogen is acquired, since patients may be colonized with or exposed to potential pathogens outside of the healthcare setting, before receiving health care, or may develop infections caused by those pathogens when exposed to the conditions associated with delivery of healthcare. Additionally, patients frequently move among the various settings within a healthcare system 8 .
A new addition to the practice recommendations for Standard Precautions is Respiratory Hygiene/Cough Etiquette. While Standard Precautions generally apply to the recommended practices of healthcare personnel during patient care, Respiratory Hygiene/Cough Etiquette applies broadly to all persons who enter a healthcare setting, including healthcare personnel, patients, and visitors. These recommendations evolved from observations during the SARS epidemic that failure to implement basic source control measures with patients, visitors, and healthcare personnel with signs and symptoms of respiratory tract infection may have contributed to SARS coronavirus (SARS-CoV) transmission. This concept has been incorporated into CDC planning documents for SARS and pandemic influenza 9,10 .
The term "Airborne Precautions" has been supplemented with the term "Airborne Infection Isolation Room (AIIR)" for consistency with the Guidelines for Environmental Infection Control in Healthcare Facilities 11 , the Guidelines for Preventing the Transmission of Mycobacterium tuberculosis in Health-Care Settings 2005 12 , and the American Institute of Architects (AIA) guidelines for design and construction of hospitals, 2006 13 .
A set of prevention measures termed Protective Environment has been added to the precautions used to prevent HAIs. These measures, which have been defined in other guidelines, consist of engineering and design interventions that decrease the risk of exposure to environmental fungi for severely immunocompromised allogeneic hematopoietic stem cell transplant (HSCT) patients during their highest risk phase, usually the first 100 days post-transplant, or longer in the presence of graft-versus-host disease 11 . Recommendations for a Protective Environment apply only to acute care hospitals that provide care to HSCT patients.
# Scope
This guideline, like its predecessors, focuses primarily on interactions between patients and healthcare providers. The Guidelines for the Prevention of MDRO Infection were published separately in November 2006 and are available online at www.cdc.gov/ncidod/dhqp/index.html. Several other HICPAC guidelines to prevent transmission of infectious agents associated with healthcare delivery are cited, e.g., Guideline for Hand Hygiene, Guideline for Environmental Infection Control, Guideline for Prevention of Healthcare-Associated Pneumonia, and Guideline for Infection Control in Healthcare Personnel 11,14,16,17 . In combination, these provide comprehensive guidance on the primary infection control measures for ensuring a safe environment for patients and healthcare personnel. This guideline does not discuss in detail specialized infection control issues in defined populations that are addressed elsewhere (e.g., Recommendations for Preventing Transmission of Infections among Chronic Hemodialysis Patients, Guidelines for Preventing the Transmission of Mycobacterium tuberculosis in Health-Care Facilities 2005, Guidelines for Infection Control in Dental Health-Care Settings, and Infection Control Recommendations for Patients with Cystic Fibrosis 12 ). An exception has been made by including abbreviated guidance for a Protective Environment used for allogeneic HSCT recipients, because components of the Protective Environment have been more completely defined since publication of the Guidelines for Preventing Opportunistic Infections Among HSCT Recipients in 2000 and the Guideline for Environmental Infection Control in Healthcare Facilities 11,15 .
# Tables
Table 1. Recent history of guidelines for prevention of healthcare-associated infections
Table 2. Clinical syndromes or conditions warranting additional empiric transmission-based precautions pending confirmation of diagnosis
Table 3. Infection control considerations for high-priority (CDC Category A) diseases that may result from bioterrorist attacks or are considered to be bioterrorist threats
Table 4. Recommendations for application of Standard Precautions for the care of all patients in all healthcare settings
# EXECUTIVE SUMMARY
The Guideline for Isolation Precautions: Preventing Transmission of Infectious Agents in Healthcare Settings updates and expands the 1996 Guideline for Isolation Precautions in Hospitals. The following developments led to revision of the 1996 guideline:
1. The transition of healthcare delivery from primarily acute care hospitals to other healthcare settings (e.g., home care, ambulatory care, free-standing specialty care sites, long-term care) created a need for recommendations that can be applied in all healthcare settings using common principles of infection control practice, yet can be modified to reflect setting-specific needs. Accordingly, the revised guideline addresses the spectrum of healthcare delivery settings. Furthermore, the term "nosocomial infections" is replaced by "healthcare-associated infections" (HAIs) to reflect the changing patterns in healthcare delivery and difficulty in determining the geographic site of exposure to an infectious agent and/or acquisition of infection.
2. The emergence of new pathogens (e.g., SARS-CoV associated with the severe acute respiratory syndrome [SARS], avian influenza in humans), renewed concern for evolving known pathogens (e.g., C. difficile, noroviruses, community-associated MRSA), development of new therapies (e.g., gene therapy), and increasing concern for the threat of bioweapons attacks established a need to address a broader scope of issues than in previous isolation guidelines.
3. The successful experience with Standard Precautions, first recommended in the 1996 guideline, has led to a reaffirmation of this approach as the foundation for preventing transmission of infectious agents in all healthcare settings. New additions to the recommendations for Standard Precautions are Respiratory Hygiene/Cough Etiquette and safe injection practices, including the use of a mask when performing certain high-risk, prolonged procedures involving spinal canal punctures (e.g., myelography, epidural anesthesia). The need for a recommendation for Respiratory Hygiene/Cough Etiquette grew out of observations during the SARS outbreaks, where failure to implement simple source control measures with patients, visitors, and healthcare personnel with respiratory symptoms may have contributed to SARS coronavirus (SARS-CoV) transmission. The recommended practices have a strong evidence base. The continued occurrence of outbreaks of hepatitis B and hepatitis C viruses in ambulatory settings indicated a need to re-iterate safe injection practice recommendations as part of Standard Precautions. The addition of a mask for certain spinal injections grew from recent evidence of an associated risk for developing meningitis caused by respiratory flora.
4. The accumulated evidence that environmental controls decrease the risk of life-threatening fungal infections in the most severely immunocompromised patients (allogeneic hematopoietic stem-cell transplant patients) led to the update on the components of the Protective Environment (PE).
5. Evidence that organizational characteristics (e.g., nurse staffing levels and composition, establishment of a safety culture) influence healthcare personnel adherence to recommended infection control practices, and therefore are important factors in preventing transmission of infectious agents, led to a new emphasis and recommendations for administrative involvement in the development and support of infection control programs.
6. Continued increase in the incidence of HAIs caused by multidrug-resistant organisms (MDROs) in all healthcare settings and the expanded body of knowledge concerning prevention of transmission of MDROs created a need for more specific recommendations for surveillance and control of these pathogens that would be practical and effective in various types of healthcare settings.
This document is intended for use by infection control staff, healthcare epidemiologists, healthcare administrators, nurses, other healthcare providers, and persons responsible for developing, implementing, and evaluating infection control programs for healthcare settings across the continuum of care. The reader is referred to other guidelines and websites for more detailed information and for recommendations concerning specialized infection control problems.
# Parts I-III: Review of the Scientific Data Regarding Transmission of Infectious Agents in Healthcare Settings
Part I reviews the relevant scientific literature that supports the recommended prevention and control practices. As with the 1996 guideline, the modes and factors that influence transmission risks are described in detail. New to the section on transmission are discussions of bioaerosols and of how droplet and airborne transmission may contribute to infection transmission. This became a concern during the SARS outbreaks of 2003, when transmission associated with aerosol-generating procedures was observed. Also new is a definition of "epidemiologically important organisms" that was developed to assist in the identification of clusters of infections that require investigation (i.e., multidrug-resistant organisms, C. difficile). Several other pathogens that hold special infection control interest (i.e., norovirus, SARS, Category A bioterrorist agents, prions, monkeypox, and the hemorrhagic fever viruses) also are discussed, to present new information and infection control lessons learned from experience with these agents. This section of the guideline also presents information on infection risks associated with specific healthcare settings and patient populations.
Part II updates information on the basic principles of hand hygiene, barrier precautions, safe work practices, and isolation practices that were included in previous guidelines. However, new to this guideline is important information on healthcare system components that influence transmission risks, including those under the influence of healthcare administrators. An important administrative priority that is described is the need for appropriate infection control staffing to meet the ever-expanding role of infection control professionals in the modern, complex healthcare system. Evidence presented also demonstrates another administrative concern, the importance of nurse staffing levels, including numbers of appropriately trained nurses in ICUs, for preventing HAIs. The role of the clinical microbiology laboratory in supporting infection control is described to emphasize the need for this service in healthcare facilities. Other factors that influence transmission risks are discussed (i.e., healthcare worker adherence to recommended infection control practices, organizational safety culture or climate, and education and training). Discussed for the first time in an isolation guideline is surveillance of healthcare-associated infections.
The information presented will be useful to new infection control professionals as well as persons involved in designing or responding to state programs for public reporting of HAI rates. Part III describes each of the categories of precautions developed by the Healthcare Infection Control Practices Advisory Committee (HICPAC) and the Centers for Disease Control and Prevention (CDC) and provides guidance for their application in various healthcare settings. The categories of Transmission-Based Precautions are unchanged from those in the 1996 guideline: Contact, Droplet, and Airborne. One important change is the recommendation to don the indicated personal protective equipment (gowns, gloves, mask) upon entry into the patient's room for patients who are on Contact and/or Droplet Precautions, since the nature of the interaction with the patient cannot be predicted with certainty and contaminated environmental surfaces are important sources for transmission of pathogens. In addition, the Protective Environment (PE) for allogeneic hematopoietic stem cell transplant patients, described in previous guidelines, has been updated.
# Tables, Appendices, and Other Information
There are several tables that summarize important information: 1) a summary of the evolution of this document; 2) guidance on using empiric isolation precautions according to a clinical syndrome; 3) a summary of infection control recommendations for category A agents of bioterrorism; 4) components of Standard Precautions and recommendations for their application; 5) components of the Protective Environment; and 6) a glossary of definitions used in this guideline. New in this guideline is a figure that shows a recommended sequence for donning and removing personal protective equipment used for isolation precautions to optimize safety and prevent self-contamination during removal.
# Appendix A: Type and Duration of Precautions Recommended for Selected Infections and Conditions
Appendix A consists of an updated alphabetical list of most infectious agents and clinical conditions for which isolation precautions are recommended. A preamble to the Appendix provides a rationale for recommending the use of one or more Transmission-Based Precautions, in addition to Standard Precautions, based on a review of the literature and evidence demonstrating a real or potential risk for person-to-person transmission in healthcare settings. The type and duration of recommended precautions are presented with additional comments concerning the use of adjunctive measures or other relevant considerations to prevent transmission of the specific agent. Relevant citations are included.
# Pre-Publication of the Guideline on Preventing Transmission of MDROs
New to this guideline is a comprehensive review and detailed recommendations for prevention of transmission of MDROs. This portion of the guideline was published electronically in October 2006 and updated in November 2006 (Siegel JD, Rhinehart E, Jackson M, Chiarello L, and HICPAC. Management of Multidrug-Resistant Organisms in Healthcare Settings 2006; www.cdc.gov/ncidod/dhqp/pdf/ar/mdroGuideline2006.pdf) and is considered a part of the Guideline for Isolation Precautions. This section provides a detailed review of the complex topic of MDRO control in healthcare settings and is intended to provide a context for evaluating MDROs at individual healthcare settings. A rationale and institutional requirements for developing an effective MDRO control program are summarized.
Although the focus of this guideline is on measures to prevent transmission of MDROs in healthcare settings, information concerning the judicious use of antimicrobial agents is presented, since such practices are intricately related to the size of the reservoir of MDROs, which in turn influences transmission (e.g., colonization pressure). There are two tables that summarize recommended prevention and control practices using the following seven categories of interventions to control MDROs: administrative measures, education of healthcare personnel, judicious antimicrobial use, surveillance, infection control precautions, environmental measures, and decolonization. Recommendations for each category apply to and are adapted for the various healthcare settings. With the increasing incidence and prevalence of MDROs, all healthcare facilities must prioritize effective control of MDRO transmission. Facilities should identify prevalent MDROs at the facility, implement control measures, assess the effectiveness of control programs, and demonstrate decreasing MDRO rates. A set of intensified MDRO prevention interventions is presented, to be added 1) if the incidence of transmission of a target MDRO is NOT decreasing despite implementation of basic MDRO infection control measures, and 2) when the first case(s) of an epidemiologically important MDRO is identified within a healthcare facility.
# Summary
This updated guideline responds to changes in healthcare delivery and addresses new concerns about transmission of infectious agents to patients and healthcare workers in the United States. The primary objective of the guideline is to improve the safety of the nation's healthcare delivery system by reducing the rates of HAIs.
# Objectives and methods
The objectives of this guideline are to 1) provide infection control recommendations for all components of the healthcare delivery system, including hospitals, long-term care facilities, ambulatory care, home care, and hospice; 2) reaffirm Standard Precautions as the foundation for preventing transmission during patient care in all healthcare settings; 3) reaffirm the importance of implementing Transmission-Based Precautions based on the clinical presentation or syndrome and likely pathogens until the infectious etiology has been determined (Table 2); and 4) provide epidemiologically sound and, whenever possible, evidence-based recommendations.
This guideline is designed for use by individuals who are charged with administering infection control programs in hospitals and other healthcare settings. The information also will be useful for other healthcare personnel, healthcare administrators, and anyone needing information about infection control measures to prevent transmission of infectious agents. Commonly used abbreviations are provided on page 12, and terms used in the guideline are defined in the Glossary (page 137).
MEDLINE and PubMed were used to search for relevant studies published in English, focusing on those published since 1996. Much of the evidence cited for preventing transmission of infectious agents in healthcare settings is derived from studies that used "quasi-experimental designs," also referred to as nonrandomized, pre-post-intervention study designs 2 . Although these types of studies can provide valuable information regarding the effectiveness of various interventions, several factors decrease the certainty of attributing improved outcome to a specific intervention.
These include: difficulties in controlling for important confounding variables; the use of multiple interventions during an outbreak; and results that are explained by the statistical principle of regression to the mean (e.g., improvement over time without any intervention) 3 . Observational studies remain relevant and have been used to evaluate infection control interventions 4,5 . The quality of studies, consistency of results, and correlation with results from randomized, controlled trials when available were considered during the literature review and assignment of evidence-based categories (see Part IV: Recommendations) to the recommendations in this guideline. Several authors have summarized properties to consider when evaluating studies for the purpose of determining if the results should change practice or for designing new studies 2,6,7 .
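Regression to the mean is easy to see in a toy simulation. The sketch below is illustrative only and is not drawn from the guideline or its cited studies; the monthly infection counts, the outbreak threshold, and all other parameters are arbitrary assumptions chosen to show why a pre-post design can credit an intervention for improvement that would have occurred anyway.

```python
import random

random.seed(42)   # reproducible sketch
TRUE_RATE = 10.0  # assumed long-run mean monthly infection count (arbitrary)

def monthly_count() -> int:
    """One month of counts: binomial(1000, 0.01), i.e., roughly Poisson(10)
    noise around a constant true rate -- no trend, no intervention effect."""
    return sum(random.random() < TRUE_RATE / 1000 for _ in range(1000))

months = [monthly_count() for _ in range(2000)]

# Call the top decile of months "outbreaks" -- the kind of spike that
# typically triggers an intervention in a pre/post study design.
threshold = sorted(months)[int(0.9 * len(months))]
outbreaks = [m for m in months if m >= threshold]
after = [months[i + 1] for i in range(len(months) - 1) if months[i] >= threshold]

print(f"overall mean:                 {sum(months) / len(months):.2f}")
print(f"mean of 'outbreak' months:    {sum(outbreaks) / len(outbreaks):.2f}")
print(f"mean of the following months: {sum(after) / len(after):.2f}  (no intervention)")
```

In a run of this sketch, months flagged as "outbreaks" average well above the true rate, while the months immediately after them fall back toward it with no intervention at all, which is exactly the confounder the quasi-experimental caveat above describes.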
# I.B. Rationale for Standard and Transmission-Based Precautions in healthcare settings
Transmission of infectious agents within a healthcare setting requires three elements: a source (or reservoir) of infectious agents, a susceptible host with a portal of entry receptive to the agent, and a mode of transmission for the agent. This section describes the interrelationship of these elements in the epidemiology of HAIs.
# I.B.1. Sources of infectious agents
Infectious agents transmitted during healthcare derive primarily from human sources, but inanimate environmental sources also are implicated in transmission. Human reservoirs include patients, healthcare personnel 17,29-35 , and household members and other visitors 36-39 . Such source individuals may have active infections, may be in the asymptomatic and/or incubation period of an infectious disease, or may be transiently or chronically colonized with pathogenic microorganisms, particularly in the respiratory and gastrointestinal tracts. The endogenous flora of patients (e.g., bacteria residing in the respiratory or gastrointestinal tract) also are a source of HAIs.
# I.B.2. Susceptible hosts
Infection is the result of a complex interrelationship between a potential host and an infectious agent. Most of the factors that influence infection and the occurrence and severity of disease are related to the host. However, characteristics of the host-agent interaction as it relates to pathogenicity, virulence, and antigenicity are also important, as are the infectious dose, mechanisms of disease production, and route of exposure 55 . There is a spectrum of possible outcomes following exposure to an infectious agent. Some persons exposed to pathogenic microorganisms never develop symptomatic disease, while others become severely ill and even die. Some individuals are prone to becoming transiently or permanently colonized but remain asymptomatic. Still others progress from colonization to symptomatic disease either immediately following exposure or after a period of asymptomatic colonization. The immune state at the time of exposure to an infectious agent, interaction between pathogens, and virulence factors intrinsic to the agent are important predictors of an individual's outcome. Host factors such as extremes of age and underlying disease (e.g., diabetes 56,57 ), human immunodeficiency virus/acquired immune deficiency syndrome 58,59 , malignancy, and transplants 18,60,61 can increase susceptibility to infection, as do a variety of medications that alter the normal flora (e.g., antimicrobial agents, gastric acid suppressants, corticosteroids, antirejection drugs, antineoplastic agents, and immunosuppressive drugs). Surgical procedures and radiation therapy impair defenses of the skin and other involved organ systems. Indwelling devices such as urinary catheters, endotracheal tubes, central venous and arterial catheters, and synthetic implants facilitate development of HAIs by allowing potential pathogens to bypass local defenses that would ordinarily impede their invasion and by providing surfaces for development of biofilms that may facilitate adherence of microorganisms and protect them from antimicrobial activity 65 . Some infections associated with invasive procedures result from transmission within the healthcare facility; others arise from the patient's endogenous flora. High-risk patient populations with noteworthy risk factors for infection are discussed further in Sections I.D, I.E., and I.F.
# I.B.3. Modes of transmission
Several classes of pathogens can cause infection, including bacteria, viruses, fungi, parasites, and prions. The modes of transmission vary by type of organism, and some infectious agents may be transmitted by more than one route: some are transmitted primarily by direct or indirect contact (e.g., herpes simplex virus [HSV], respiratory syncytial virus, Staphylococcus aureus), others by the droplet (e.g., influenza virus, B. pertussis) or airborne (e.g., M. tuberculosis) routes. Other infectious agents, such as bloodborne viruses (e.g., hepatitis B and C viruses and HIV), are transmitted rarely in healthcare settings, via percutaneous or mucous membrane exposure. Importantly, not all infectious agents are transmitted from person to person. These are distinguished in Appendix A. The three principal routes of transmission are summarized below.
# I.B.3.a. Contact transmission
The most common mode of transmission, contact transmission is divided into two subgroups: direct contact and indirect contact.
# I.B.3.a.i. Direct contact transmission
Direct transmission occurs when microorganisms are transferred from one infected person to another person without a contaminated intermediate object or person.
Opportunities for direct contact transmission between patients and healthcare personnel have been summarized in the Guideline for Infection Control in Healthcare Personnel, 1998 17 and include:
- blood or other blood-containing body fluids from a patient directly enter a caregiver's body through contact with a mucous membrane 66 or breaks (i.e., cuts, abrasions) in the skin 67 .
- mites from a scabies-infested patient are transferred to the skin of a caregiver while he/she is having direct ungloved contact with the patient's skin 68,69 .
- a healthcare provider develops herpetic whitlow on a finger after contact with HSV when providing oral care to a patient without using gloves, or HSV is transmitted to a patient from a herpetic whitlow on an ungloved hand of a healthcare worker (HCW) 70,71 .
# I.B.3.a.ii. Indirect contact transmission
Indirect transmission involves the transfer of an infectious agent through a contaminated intermediate object or person. In the absence of a point-source outbreak, it is difficult to determine how indirect transmission occurs. However, extensive evidence cited in the Guideline for Hand Hygiene in Health-Care Settings suggests that the contaminated hands of healthcare personnel are important contributors to indirect contact transmission 16 . Examples of opportunities for indirect contact transmission include:
- Hands of healthcare personnel may transmit pathogens after touching an infected or colonized body site on one patient or a contaminated inanimate object, if hand hygiene is not performed before touching another patient 72,73 .
- Patient-care devices (e.g., electronic thermometers, glucose monitoring devices) may transmit pathogens if devices contaminated with blood or body fluids are shared between patients without cleaning and disinfecting between uses 74-77 .
- Shared toys may become a vehicle for transmitting respiratory viruses (e.g., respiratory syncytial virus 24,78,79 ) or pathogenic bacteria (e.g., Pseudomonas aeruginosa 80 ) among pediatric patients.
- Instruments that are inadequately cleaned between patients before disinfection or sterilization (e.g., endoscopes or surgical instruments) or that have manufacturing defects that interfere with the effectiveness of reprocessing 86,87 may transmit bacterial and viral pathogens.
- Clothing, uniforms, laboratory coats, or isolation gowns used as personal protective equipment (PPE) may become contaminated with potential pathogens after care of a patient colonized or infected with an infectious agent (e.g., MRSA 88 , VRE 89 , and C. difficile 90 ). Although contaminated clothing has not been implicated directly in transmission, the potential exists for soiled garments to transfer infectious agents to successive patients.
# I.B.3.b. Droplet transmission
Droplet transmission is, technically, a form of contact transmission, and some infectious agents transmitted by the droplet route also may be transmitted by the direct and indirect contact routes. However, in contrast to contact transmission, respiratory droplets carrying infectious pathogens transmit infection when they travel directly from the respiratory tract of the infectious individual to susceptible mucosal surfaces of the recipient, generally over short distances, necessitating facial protection. Respiratory droplets are generated when an infected person coughs, sneezes, or talks 91,92 or during procedures such as suctioning, endotracheal intubation, cough induction by chest physiotherapy 97 , and cardiopulmonary resuscitation 98,99 .
Evidence for droplet transmission comes from epidemiological studies of disease outbreaks, experimental studies 104 , and from information on aerosol dynamics 91,105 . Studies have shown that the nasal mucosa, conjunctivae, and less frequently the mouth, are susceptible portals of entry for respiratory viruses 106 . The maximum distance for droplet transmission is currently unresolved, although pathogens transmitted by the droplet route have not been transmitted through the air over long distances, in contrast to the airborne pathogens discussed below. Historically, the area of defined risk has been a distance of <3 feet around the patient and is based on epidemiologic and simulated studies of selected infections 103,104 . Using this distance for donning masks has been effective in preventing transmission of infectious agents via the droplet route. However, experimental studies with smallpox 107,108 and investigations during the global SARS outbreaks of 2003 101 suggest that droplets from patients with these two infections could reach persons located 6 feet or more from their source. It is likely that the distance droplets travel depends on the velocity and mechanism by which respiratory droplets are propelled from the source, the density of respiratory secretions, environmental factors such as temperature and humidity, and the ability of the pathogen to maintain infectivity over that distance 105 . Thus, a distance of <3 feet around the patient is best viewed as an example of what is meant by "a short distance from a patient" and should not be used as the sole criterion for deciding when a mask should be donned to protect from droplet exposure. Based on these considerations, it may be prudent to don a mask when within 6 to 10 feet of the patient or upon entry into the patient's room, especially when exposure to emerging or highly virulent pathogens is likely. More studies are needed to improve understanding of droplet transmission under various circumstances. Droplet size is another variable under discussion. Droplets traditionally have been defined as being >5 µm in size. Droplet nuclei, particles arising from desiccation of suspended droplets, have been associated with airborne transmission and defined as <5 µm in size 105 , a reflection of the pathogenesis of pulmonary tuberculosis, which is not generalizable to other organisms. Observations of particle dynamics have demonstrated that a range of droplet sizes, including those with diameters of 30 µm or greater, can remain suspended in the air 109 . The behavior of droplets and droplet nuclei affects recommendations for preventing transmission. Whereas fine airborne particles containing pathogens that are able to remain infective may transmit infections over long distances, requiring an AIIR to prevent their dissemination within a facility, organisms transmitted by the droplet route do not remain infective over long distances and therefore do not require special air handling and ventilation. Examples of infectious agents that are transmitted via the droplet route include Bordetella pertussis 110 , influenza virus 23 , adenovirus 111 , rhinovirus 104 , Mycoplasma pneumoniae 112 , SARS-associated coronavirus (SARS-CoV) 21,96,113 , group A streptococcus 114 , and Neisseria meningitidis 95,103,115 .
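To put rough numbers on the droplet-size discussion above, the sketch below applies Stokes' law for the terminal settling velocity of a small sphere, v = (rho_p - rho_a) * g * d^2 / (18 * mu). It is illustrative only: the air properties, droplet density, and 1.5 m release height are textbook-style assumptions, not values taken from this guideline, and Stokes' law itself becomes inaccurate for the largest droplets.

```python
# Illustrative Stokes-law settling estimates for respiratory droplets.
# All constants are assumed textbook values, not figures from the guideline.

G = 9.81             # gravitational acceleration, m/s^2
MU_AIR = 1.8e-5      # dynamic viscosity of air near 20 C, Pa*s (assumed)
RHO_DROPLET = 1000   # droplet density, kg/m^3 (assumed: mostly water)
RHO_AIR = 1.2        # air density, kg/m^3 (assumed)
FALL_HEIGHT_M = 1.5  # assumed mouth-to-floor distance

def stokes_settling_velocity(diameter_um: float) -> float:
    """Terminal settling velocity (m/s) for a small sphere (Re << 1).

    Stokes' law: v = (rho_p - rho_a) * g * d^2 / (18 * mu).
    Accuracy degrades for droplets much larger than ~80 um.
    """
    d_m = diameter_um * 1e-6
    return (RHO_DROPLET - RHO_AIR) * G * d_m ** 2 / (18 * MU_AIR)

for d_um in (1, 5, 30, 100):
    v = stokes_settling_velocity(d_um)
    print(f"{d_um:>3} um: v = {v * 100:7.3f} cm/s, "
          f"~{FALL_HEIGHT_M / v:8.0f} s to fall {FALL_HEIGHT_M} m in still air")
```

Under these assumptions a 5 µm droplet settles at well under a millimeter per second and stays aloft for roughly half an hour in still air, a 30 µm droplet takes about a minute to fall, and a 100 µm droplet falls within seconds. This is consistent with the text's point that indoor air currents of a few to tens of centimeters per second can keep particles well below 100 µm suspended, while the largest droplets settle quickly near the source.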
Although respiratory syncytial virus may be transmitted by the droplet route, direct contact with infected respiratory secretions is the most important determinant of transmission, and consistent adherence to Standard plus Contact Precautions prevents transmission in healthcare settings 24,116,117 . Rarely, pathogens that are not transmitted routinely by the droplet route are dispersed into the air over short distances. For example, although S. aureus is transmitted most frequently by the contact route, viral upper respiratory tract infection has been associated with increased dispersal of S. aureus from the nose into the air for a distance of 4 feet under both outbreak and experimental conditions; this is known as the "cloud baby" and "cloud adult" phenomenon.
# I.B.3.c. Airborne transmission
Airborne transmission occurs by dissemination of either airborne droplet nuclei or small particles in the respirable size range containing infectious agents that remain infective over time and distance (e.g., spores of Aspergillus spp. and Mycobacterium tuberculosis). Microorganisms carried in this manner may be dispersed over long distances by air currents and may be inhaled by susceptible individuals who have not had face-to-face contact with (or been in the same room with) the infectious individual. Preventing the spread of pathogens that are transmitted by the airborne route requires the use of special air handling and ventilation systems (e.g., AIIRs) to contain and then safely remove the infectious agent 11,12 . Infectious agents to which this applies include Mycobacterium tuberculosis, rubeola virus (measles) 122 , and varicella-zoster virus (chickenpox) 123 . In addition, published data suggest the possibility that variola virus (smallpox) may be transmitted over long distances through the air under unusual circumstances, and AIIRs are recommended for this agent as well; however, droplet and contact routes are the more frequent routes of transmission for smallpox 108,128,129 . In addition to AIIRs, respiratory protection with a NIOSH-certified N95 or higher level respirator is recommended for healthcare personnel entering the AIIR to prevent acquisition of airborne infectious agents such as M. tuberculosis 12 . For certain other respiratory infectious agents, such as influenza 130,131 and rhinovirus 104 , and even some gastrointestinal viruses (e.g., norovirus 132 and rotavirus 133 ), there is some evidence that the pathogen may be transmitted via small-particle aerosols, under natural and experimental conditions. Such transmission has occurred over distances longer than 3 feet but within a defined airspace (e.g., patient room), suggesting that it is unlikely that these agents remain viable on air currents that travel long distances. AIIRs are not required routinely to prevent transmission of these agents. Additional issues concerning examples of small-particle aerosol transmission of agents that are most frequently transmitted by the droplet route are discussed below.
# I.B.3.d. Emerging issues concerning airborne transmission of infectious agents
# I.B.3.d.i. Transmission from patients
The emergence of SARS in 2002, the importation of monkeypox into the United States in 2003, and the emergence of avian influenza present challenges to the assignment of isolation categories because of conflicting information and uncertainty about possible routes of transmission. Although SARS-CoV is transmitted primarily by contact and/or droplet routes, airborne transmission over a limited distance (e.g.,
within a room) has been suggested, though not proven. This is true of other infectious agents such as influenza virus 130 and noroviruses 132,142,143 . Influenza viruses are transmitted primarily by close contact with respiratory droplets 23,102 , and acquisition by healthcare personnel has been prevented by Droplet Precautions, even when positive pressure rooms were used in one center 144 . However, inhalational transmission could not be excluded in an outbreak of influenza in the passengers and crew of a single aircraft 130 . Observations of a protective effect of UV lights in preventing influenza among patients with tuberculosis during the influenza pandemic of 1957-58 have been used to suggest airborne transmission 145,146 . In contrast to the strict interpretation of an airborne route for transmission (i.e., long distances beyond the patient room environment), short-distance transmission by small-particle aerosols generated under specific circumstances (e.g., during endotracheal intubation) to persons in the immediate area near the patient has been demonstrated. Also, aerosolized particles <100 μm can remain suspended in air when room air current velocities exceed the terminal settling velocities of the particles 109 . SARS-CoV transmission has been associated with endotracheal intubation, noninvasive positive pressure ventilation, and cardiopulmonary resuscitation 93,94,96,98,141 . Although the most frequent routes of transmission of noroviruses are contact and foodborne and waterborne routes, several reports suggest that noroviruses may be transmitted through aerosolization of infectious particles from vomitus or fecal material 142,143,147,148 . It is hypothesized that the aerosolized particles are inhaled and subsequently swallowed. Roy and Milton proposed a new classification for aerosol transmission when evaluating routes of SARS transmission: 1) obligate: under natural conditions, disease occurs following transmission of the agent only through inhalation of small-particle aerosols (e.g., tuberculosis); 2) preferential: natural infection results from transmission through multiple routes, but small-particle aerosols are the predominant route (e.g., measles, varicella); and 3) opportunistic: agents that naturally cause disease through other routes, but under special circumstances may be transmitted via fine-particle aerosols 149 . This conceptual framework can explain rare occurrences of airborne transmission of agents that are transmitted most frequently by other routes (e.g., smallpox, SARS, influenza, noroviruses). Concerns about unknown or possible routes of transmission of agents associated with severe disease and no known treatment often result in more extreme prevention strategies than may be necessary; therefore, recommended precautions could change as the epidemiology of an emerging infection is defined and controversial issues are resolved.
# I.B.3.d.ii. Transmission from the environment
Some airborne infectious agents are derived from the environment and do not usually involve person-to-person transmission. For example, anthrax spores present in a finely milled powdered preparation can be aerosolized from contaminated environmental surfaces and inhaled into the respiratory tract 150,151 . Spores of environmental fungi (e.g., Aspergillus spp.) are ubiquitous in the environment and may cause disease in immunocompromised patients who inhale aerosolized spores (e.g., via construction dust) 152,153 .
As a rule, neither of these organisms is subsequently transmitted from infected patients. However, there is one well-documented report of person-to-person transmission of Aspergillus sp. in the ICU setting that was most likely due to the aerosolization of spores during wound debridement 154 . A Protective Environment refers to isolation practices designed to decrease the risk of exposure to environmental fungal agents in allogeneic HSCT patients 11,14,15 . Transmission of environmental respiratory pathogens (e.g., Legionella) to humans through a common aerosol source is distinct from direct patient-to-patient transmission.
# I.B.3.e. Other sources of infection
Transmission of infection from sources other than infectious individuals includes transmission from common environmental sources or vehicles (e.g., contaminated food, water, or medications such as intravenous fluids). Although Aspergillus spp. have been recovered from hospital water systems 159 , the role of water as a reservoir for immunosuppressed patients remains uncertain. Vectorborne transmission of infectious agents from mosquitoes, flies, rats, and other vermin also can occur in healthcare settings. Prevention of vectorborne transmission is not addressed in this document.
# I.C. Infectious agents of special infection control interest for healthcare settings
Several infectious agents with important infection control implications that either were not discussed extensively in previous isolation guidelines or have emerged recently are discussed below. These are epidemiologically important organisms (e.g., C. difficile), agents of bioterrorism, prions, SARS-CoV, monkeypox, noroviruses, and the hemorrhagic fever viruses. Experience with these agents has broadened the understanding of modes of transmission and effective preventive measures. These agents are included for purposes of information and, for some (i.e., SARS-CoV, monkeypox), because of the lessons that have been learned about preparedness planning and responding effectively to new infectious agents.
# I.C.1. Epidemiologically important organisms
Any infectious agents transmitted in healthcare settings may, under defined conditions, become targeted for control because they are epidemiologically important. C. difficile is specifically discussed below because of wide recognition of its current importance in U.S. healthcare facilities. In determining what constitutes an "epidemiologically important organism," the following characteristics apply:
- A propensity for transmission within healthcare facilities, based on published reports and the occurrence of temporal or geographic clusters of ≥2 patients (e.g., C. difficile, norovirus, respiratory syncytial virus (RSV), influenza, rotavirus, Enterobacter spp., Serratia spp., group A streptococcus). A single case of healthcare-associated invasive disease caused by certain pathogens (e.g., group A streptococcus post-operatively 160 , in burn units 161 , or in a LTCF 162 ; Legionella sp. 14,163 ; Aspergillus sp. 164 ) is generally considered a trigger for investigation and enhanced control measures because of the risk of additional cases and severity of illness associated with these infections.
- Antimicrobial resistance:
-Resistance to first-line therapies (e.g., MRSA, VISA, VRSA, VRE, ESBL-producing organisms).
-Common and uncommon microorganisms with unusual patterns of resistance within a facility (e.g., the first isolate of Burkholderia cepacia complex or Ralstonia spp.
in non-CF patients or a quinolone-resistant strain of Pseudomonas aeruginosa in a facility).
-Difficult to treat because of innate or acquired resistance to multiple classes of antimicrobial agents (e.g., Stenotrophomonas maltophilia, Acinetobacter spp.).
- Association with serious clinical disease, increased morbidity and mortality (e.g., MRSA and MSSA, group A streptococcus).
- A newly discovered or reemerging pathogen.
# I.C.1.a. C. difficile
C. difficile is a spore-forming, gram-positive anaerobic bacillus that was first isolated from stools of neonates in 1935 165 and identified as the most common causative agent of antibiotic-associated diarrhea and pseudomembranous colitis in 1977 166 . This pathogen is a major cause of healthcare-associated diarrhea and has been responsible for many large outbreaks in healthcare settings that were extremely difficult to control. Important factors that contribute to healthcare-associated outbreaks include environmental contamination, persistence of spores for prolonged periods of time, resistance of spores to routinely used disinfectants and antiseptics, hand carriage by healthcare personnel to other patients, and exposure of patients to frequent courses of antimicrobial agents 167 . Antimicrobials most frequently associated with increased risk of C. difficile include third-generation cephalosporins, clindamycin, vancomycin, and fluoroquinolones. Since 2001, outbreaks and sporadic cases of C. difficile with increased morbidity and mortality have been observed in several U.S. states, Canada, England, and the Netherlands . The same strain of C. difficile has been implicated in these outbreaks 173 . This strain, toxinotype III, North American pulsed-field gel electrophoresis (PFGE) type 1, and PCR-ribotype 027 (NAP1/027), has been found to hyperproduce toxin A (16-fold increase) and toxin B (23-fold increase) compared with isolates from 12 different PFGE types. A recent survey of U.S. infectious disease physicians found that 40% perceived recent increases in the incidence and severity of C. difficile disease 174 . Standardization of testing methodology and surveillance definitions is needed for accurate comparisons of trends in rates among hospitals 175 . It is hypothesized that the incidence of disease and apparent heightened transmissibility of this new strain may be due, at least in part, to the greater production of toxins A and B, increasing the severity of diarrhea and resulting in more environmental contamination. Considering the greater morbidity, mortality, length of stay, and costs associated with C. difficile disease in both acute care and long-term care facilities, control of this pathogen is now even more important than previously. Prevention of transmission focuses on syndromic application of Contact Precautions for patients with diarrhea, accurate identification of patients, environmental measures (e.g., rigorous cleaning of patient rooms), and consistent hand hygiene. Use of soap and water, rather than alcohol-based hand rubs, for mechanical removal of spores from hands, and a bleach-containing disinfectant (5000 ppm) for environmental disinfection, may be valuable when there is transmission in a healthcare facility. See Appendix A for specific recommendations.
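The 5000 ppm figure cited above can be related to common bleach dilutions with simple arithmetic. The sketch below assumes a stock household bleach of 5.25% sodium hypochlorite; the guideline text specifies only the target concentration, so the stock strength and the resulting roughly 1:10 dilution are illustrative assumptions, not a prescribed recipe.

```python
# Rough dilution arithmetic for a ~5000 ppm hypochlorite solution.
# The 5.25% stock concentration is an assumption (typical US household
# bleach); only the 5000 ppm target comes from the text above.

STOCK_PCT = 5.25   # assumed sodium hypochlorite content of stock bleach
TARGET_PPM = 5000  # concentration cited in the guideline text

stock_ppm = STOCK_PCT * 10_000     # 1% (w/v) corresponds to 10,000 ppm
dilution_factor = stock_ppm / TARGET_PPM
parts_water = dilution_factor - 1

print(f"stock bleach: {stock_ppm:,.0f} ppm")
print(f"mix 1 part bleach with {parts_water:.1f} parts water "
      f"(about a 1:{dilution_factor:.0f} dilution) -> ~{TARGET_PPM} ppm")
```

Under these assumptions the arithmetic lands on the commonly referenced 1:10 dilution of household bleach; actual preparation should follow the product label and facility policy rather than this back-of-the-envelope figure.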
# I.C.1.b. Multidrug-Resistant Organisms (MDROs)
In general, MDROs are defined as microorganisms - predominantly bacteria - that are resistant to one or more classes of antimicrobial agents 176 . Although the names of certain MDROs suggest resistance to only one agent (e.g., methicillin-resistant Staphylococcus aureus [MRSA], vancomycin-resistant enterococci [VRE]), these pathogens are usually resistant to all but a few commercially available antimicrobial agents. This latter feature defines MDROs that are considered to be epidemiologically important and deserve special attention in healthcare facilities 177 . Other MDROs of current concern include multidrug-resistant Streptococcus pneumoniae (MDRSP), which is resistant to penicillin and other broad-spectrum agents such as macrolides and fluoroquinolones; multidrug-resistant gram-negative bacilli (MDR-GNB), especially those producing extended-spectrum beta-lactamases (ESBLs); and strains of S. aureus that are intermediate or resistant to vancomycin (i.e., VISA and VRSA) 178-197,198 . MDROs are transmitted by the same routes as antimicrobial-susceptible infectious agents. Patient-to-patient transmission in healthcare settings, usually via hands of HCWs, has been a major factor accounting for the increase in MDRO incidence and prevalence, especially for MRSA and VRE in acute care facilities . Preventing the emergence and transmission of these pathogens requires a comprehensive approach that includes administrative involvement and measures (e.g., nurse staffing, communication systems, performance improvement processes to ensure adherence to recommended infection control measures), education and training of medical and other healthcare personnel, judicious antibiotic use, comprehensive surveillance for targeted MDROs, application of infection control precautions during patient care, environmental measures (e.g., cleaning and disinfection of the patient care environment and equipment, dedicated single-patient-use of non-critical equipment), and decolonization therapy when appropriate. The prevention and control of MDROs is a national priority - one that requires that all healthcare facilities and agencies assume responsibility and participate in community-wide control programs 176,177 . A detailed discussion of this topic and recommendations for prevention were published in 2006 and may be found at www.cdc.gov/ncidod/dhqp/pdf/ar/mdroGuideline2006.pdf.
# I.C.2. Agents of bioterrorism
CDC has designated the agents that cause anthrax, smallpox, plague, tularemia, viral hemorrhagic fevers, and botulism as Category A (high priority) because these agents can be easily disseminated environmentally and/or transmitted from person to person; can cause high mortality and have the potential for major public health impact; might cause public panic and social disruption; and require special action for public health preparedness 202 . General information relevant to infection control in healthcare settings for Category A agents of bioterrorism is summarized in Table 3. Consult www.bt.cdc.gov for additional, updated information on Category A agents, as well as information concerning Category B and C agents of bioterrorism. Category B and C agents are important but are not as readily disseminated and cause less morbidity and mortality than Category A agents. Healthcare facilities confront a different set of issues when dealing with a suspected bioterrorism event as compared with other communicable diseases.
An understanding of the epidemiology, modes of transmission, and clinical course of each disease, as well as carefully drafted plans that provide an approach and relevant websites and other resources for disease-specific guidance to healthcare, administrative, and support personnel, are essential for responding to and managing a bioterrorism event. Infection control issues to be addressed include: 1) identifying persons who may be exposed or infected; 2) preventing transmission among patients, healthcare personnel, and visitors; 3) providing treatment, chemoprophylaxis, or vaccine to potentially large numbers of people; 4) protecting the environment, including the logistical aspects of securing sufficient numbers of AIIRs or designating areas for patient cohorts when there are an insufficient number of AIIRs available; 5) providing adequate quantities of appropriate personal protective equipment; and 6) identifying appropriate staff to care for potentially infectious patients (e.g., vaccinated healthcare personnel for care of patients with smallpox). The response is likely to differ for exposures resulting from an intentional release compared with naturally occurring disease because of the large number of persons that can be exposed at the same time and possible differences in pathogenicity. A variety of sources offer guidance for the management of persons exposed to the most likely agents of bioterrorism. Federal agency websites (e.g., www.usamriid.army.mil/publications/index.html, www.bt.cdc.gov) and state and county health department websites should be consulted for the most up-to-date information. Sources of information on specific agents include: anthrax 203 ; smallpox; plague 207,208 ; botulinum toxin 209 ; tularemia 210 ; and hemorrhagic fever viruses 211,212 .
# I.C.2.a. Pre-event administration of smallpox (vaccinia) vaccine to healthcare personnel
Vaccination of personnel in preparation for a possible smallpox exposure has important infection control implications. These include the need for meticulous screening for vaccine contraindications in persons who are at increased risk for adverse vaccinia events; containment and monitoring of the vaccination site to prevent transmission in the healthcare setting and at home; and the management of patients with vaccinia-related adverse events 216,217 . The pre-event U.S. smallpox vaccination program of 2003 is an example of the effectiveness of carefully developed recommendations for both screening potential vaccinees for contraindications and vaccination site care and monitoring. Approximately 760,000 individuals were vaccinated in the Department of Defense and 40,000 in the civilian or public health populations from December 2002 to February 2005, including approximately 70,000 who worked in healthcare settings. There were no cases of eczema vaccinatum, progressive vaccinia, fetal vaccinia, or contact transfer of vaccinia in healthcare settings or in military workplaces 218,219 . Outside the healthcare setting, there were 53 cases of contact transfer from military vaccinees to close personal contacts (e.g., bed partners or contacts during participation in sports such as wrestling 220 ). All contact transfers were from individuals who were not following recommendations to cover their vaccination sites. Vaccinia virus was confirmed by culture or PCR in 30 cases, and two of the confirmed cases resulted from tertiary transfer. All recipients, including one breast-fed infant, recovered without complication.
Subsequent studies using viral culture and PCR techniques have confirmed the effectiveness of semipermeable dressings to contain vaccinia. This experience emphasizes the importance of ensuring that newly vaccinated healthcare personnel adhere to recommended vaccination-site care, especially if they are to care for high-risk patients. Recommendations for pre-event smallpox vaccination of healthcare personnel and vaccinia-related infection control recommendations are published in the MMWR 216,225 , with updates posted on the CDC bioterrorism web site 205 .
# I.C.3. Prions
Creutzfeldt-Jakob disease (CJD) is a rapidly progressive, degenerative, neurologic disorder of humans with an incidence in the United States of approximately 1 person/million population/year 226,227 (www.cdc.gov/ncidod/diseases/cjd/cjd.htm). CJD is believed to be caused by a transmissible proteinaceous infectious agent termed a prion. Infectious prions are isoforms of a host-encoded glycoprotein known as the prion protein. The incubation period (i.e., time between exposure and onset of symptoms) varies from two years to many decades. However, death typically occurs within 1 year of the onset of symptoms. Approximately 85% of CJD cases occur sporadically with no known environmental source of infection, and 10% are familial. Iatrogenic transmission has occurred, with most cases resulting from treatment with human cadaveric pituitary-derived growth hormone or gonadotropin 228,229 , from implantation of contaminated human dura mater grafts 230 , or from corneal transplants 231 . Transmission has been linked to the use of contaminated neurosurgical instruments or stereotactic electroencephalogram electrodes 232-235 . Prion diseases in animals include scrapie in sheep and goats, bovine spongiform encephalopathy (BSE, or "mad cow disease") in cattle, and chronic wasting disease in deer and elk 236 . BSE, first recognized in the United Kingdom (UK) in 1986, was associated with a major epidemic among cattle that had consumed contaminated meat and bone meal. The possible transmission of BSE to humans, causing variant CJD (vCJD), was first described in 1996 and subsequently found to be associated with consumption of BSE-contaminated cattle products, primarily in the United Kingdom. There is strong epidemiologic and laboratory evidence for a causal association between the causative agent of BSE and vCJD 237 . Although most cases of vCJD have been reported from the UK, a few cases also have been reported from Europe, Japan, Canada, and the United States. Most vCJD cases worldwide lived in or visited the UK during the years of a large outbreak of BSE (1980-96) and may have consumed contaminated cattle products during that time (www.cdc.gov/ncidod/diseases/cjd/cjd.htm). Although there has been no indigenously acquired vCJD in the United States, the sporadic occurrence of BSE in cattle in North America has heightened awareness of the possibility that such infections could occur and has led to increased surveillance activities. Updated information may be found on the following website: www.cdc.gov/ncidod/diseases/cjd/cjd.htm. The public health impact of prion diseases has been reviewed 238 . vCJD in humans has different clinical and pathologic characteristics from sporadic or classic CJD 239 , including the following: 1) younger median age at death: 28 (range 16-48) vs. 68 years; 2) longer duration of illness: median 14 months vs.
4-6 months; 3) increased frequency of sensory symptoms and early psychiatric symptoms with delayed onset of frank neurologic signs; and 4) detection of prions in tonsillar and other lymphoid tissues from vCJD patients but not from sporadic CJD patients 240. As with sporadic CJD, there have been no reported cases of direct human-to-human transmission of vCJD by casual or environmental contact, droplet, or airborne routes. Ongoing blood safety surveillance in the U.S. has not detected sporadic CJD transmission through blood transfusion. However, bloodborne transmission of vCJD is believed to have occurred in two UK patients 244,245. FDA websites provide information on steps being taken in the US to protect the blood supply from CJD and vCJD.

Standard Precautions are used when caring for patients with suspected or confirmed CJD or vCJD. However, special precautions are recommended for handling tissue in the histology laboratory, for conducting an autopsy, for embalming, and for contact with a body that has undergone autopsy 246. Recommendations for reprocessing surgical instruments to prevent transmission of CJD in healthcare settings have been published by the World Health Organization (WHO) and are currently under review at CDC. Questions may arise concerning notification of patients potentially exposed to CJD or vCJD through contaminated instruments or through blood products from patients with CJD or vCJD or at risk of having vCJD. The risk of transmission associated with such exposures is believed to be extremely low but may vary based on the specific circumstance. Therefore, consultation on appropriate options is advised. The United Kingdom has developed several documents that clinicians and patients in the US may find useful.

# I.C.4. Severe Acute Respiratory Syndrome (SARS)

SARS is a newly discovered respiratory disease that emerged in China late in 2002 and spread to several countries 135,140; Mainland China, Hong Kong, Hanoi, Singapore, and Toronto were affected significantly. SARS is caused by SARS-CoV, a previously unrecognized member of the coronavirus family 247,248. The incubation period from exposure to the onset of symptoms is 2 to 7 days but can be as long as 10 days and, uncommonly, even longer 249. The illness is initially difficult to distinguish from other common respiratory infections. Signs and symptoms usually include fever >38.0°C and chills and rigors, sometimes accompanied by headache, myalgia, and mild to severe respiratory symptoms. A radiographic finding of atypical pneumonia is an important clinical indicator of possible SARS. Compared with adults, children have been affected less frequently, have milder disease, and are less likely to transmit SARS-CoV 135. The overall case-fatality rate is approximately 6.0%; underlying disease and advanced age increase the risk of mortality (www.who.int/csr/sarsarchive/2003_05_07a/en/).

Outbreaks in healthcare settings, with transmission to large numbers of healthcare personnel and patients, have been a striking feature of SARS; undiagnosed, infectious patients and visitors were important initiators of these outbreaks 21. The relative contribution of potential modes of transmission is not precisely known. There is ample evidence for droplet and contact transmission 96,101,113; however, opportunistic airborne transmission cannot be excluded 101,135-139,149,255.
For example, exposure to aerosol-generating procedures (e.g., endotracheal intubation, suctioning) was associated with transmission of infection to large numbers of healthcare personnel outside of the United States 93,94,96,98,253. Therefore, aerosolization of small infectious particles generated during these and other similar procedures could be a risk factor for transmission to others within a multi-bed room or shared airspace. A review of the infection control literature generated from the SARS outbreaks of 2003 concluded that the greatest risk of transmission is to those who have close contact, are not properly trained in use of protective infection control procedures, or do not consistently use PPE, and that N95 or higher respirators may offer additional protection to those exposed to aerosol-generating procedures and high-risk activities 256,257. Organizational and individual factors that affected adherence to infection control practices for SARS also were identified 257.

Control of SARS requires a coordinated, dynamic response by multiple disciplines in a healthcare setting 9. Early detection of cases is accomplished by screening persons who have symptoms of a respiratory infection for a history of travel to areas experiencing community transmission or of contact with SARS patients, followed by implementation of Respiratory Hygiene/Cough Etiquette (i.e., placing a mask over the patient's nose and mouth) and physical separation from other patients in common waiting areas. The precise combination of precautions to protect healthcare personnel has not been determined. At the time of this publication, CDC recommends Standard Precautions, with emphasis on the use of hand hygiene; Contact Precautions, with emphasis on environmental cleaning because SARS-CoV RNA has been detected by PCR on surfaces in rooms occupied by SARS patients 138,254,258; and Airborne Precautions, including use of fit-tested NIOSH-approved N95 or higher-level respirators and eye protection 259. In Hong Kong, the use of Droplet and Contact Precautions, which included use of a mask but not a respirator, was effective in protecting healthcare personnel 113. However, in Toronto, consistent use of an N95 respirator was slightly more protective than a mask 93. It is noteworthy that there was no transmission of SARS-CoV to public hospital workers in Vietnam despite inconsistent use of infection control measures, including use of PPE, which suggests that other factors (e.g., severity of disease, frequency of high-risk procedures or events, environmental features) may influence opportunities for transmission 260.

SARS-CoV also has been transmitted in the laboratory setting through breaches in recommended laboratory practices. Research laboratories where SARS-CoV was under investigation were the source of most cases reported after the first series of outbreaks in the winter and spring of 2003 261,262. Studies of the SARS outbreaks of 2003 and of transmissions that occurred in the laboratory reaffirm the effectiveness of recommended infection control precautions and highlight the importance of consistent adherence to these measures.

Lessons from the SARS outbreaks are useful for planning the response to future public health crises, such as pandemic influenza and bioterrorism events. Surveillance for cases among patients and healthcare personnel, ensuring the availability of adequate supplies and staffing, and limiting access to healthcare facilities were important factors in the response to SARS that have been summarized 9.
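The case-detection sequence just described (screen persons with respiratory symptoms for a travel or contact history, then mask and separate the patient) can be expressed as a simple triage sketch. The code below is purely illustrative; the function, field names, and action strings are hypothetical and do not replace clinical judgment or current CDC guidance.

```python
# Illustrative sketch of the SARS screening sequence described above.
# All names are hypothetical; this is not CDC-issued logic.

from dataclasses import dataclass

@dataclass
class PatientScreen:
    respiratory_symptoms: bool       # e.g., fever >38.0 C with cough
    travel_to_affected_area: bool    # area with community transmission
    contact_with_sars_patient: bool  # close contact with a known case

def initial_sars_triage(p: PatientScreen) -> list:
    """Return the first-line control measures suggested by screening."""
    actions = []
    if p.respiratory_symptoms and (p.travel_to_affected_area
                                   or p.contact_with_sars_patient):
        # Respiratory Hygiene/Cough Etiquette: mask the patient at once
        actions.append("place mask over the patient's nose and mouth")
        # Physical separation from others in common waiting areas
        actions.append("separate patient from common waiting areas")
        # Escalate so the full precaution set can be applied
        actions.append("notify infection control for further precautions")
    return actions

print(initial_sars_triage(PatientScreen(True, True, False)))
```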
Guidance for infection control precautions in various settings is available at www.cdc.gov/ncidod/sars.

# I.C.5. Monkeypox

Monkeypox is a rare viral disease found mostly in the rain forest countries of Central and West Africa. The disease is caused by an orthopoxvirus that is similar in appearance to smallpox virus but causes a milder disease. The only recognized outbreak of human monkeypox in the United States was detected in June 2003, after several people became ill following contact with sick pet prairie dogs. Infection in the prairie dogs was subsequently traced to their contact with a shipment of animals from Africa, including giant Gambian rats 263. This outbreak demonstrates the importance of recognition and prompt reporting of unusual disease presentations by clinicians to enable rapid identification of the etiology, and the potential of epizootic diseases to spread from animal reservoirs to humans through personal and occupational exposure 264.

Limited data on transmission of monkeypox are available. Transmission from infected animals and humans is believed to occur primarily through direct contact with lesions and respiratory secretions; airborne transmission from animals to humans is unlikely but cannot be excluded, and may have occurred in veterinary practices (e.g., during administration of nebulized medications to ill prairie dogs 265). Among humans, four instances of monkeypox transmission within hospitals have been reported in Africa among children, usually related to sharing the same ward or bed 266,267. Additional recent literature documents transmission of Congo Basin monkeypox in a hospital compound for an extended number of generations 268. There has been no evidence of airborne or any other person-to-person transmission of monkeypox in the United States, and no new cases of monkeypox have been identified since the outbreak in June 2003 269. The outbreak strain is a clade of monkeypox distinct from the Congo Basin clade and may have different epidemiologic properties, including human-to-human transmission potential 270; this awaits further study. Smallpox vaccine is 85% protective against Congo Basin monkeypox 271. Because the associated case-fatality rate is <10%, administration of smallpox vaccine within 4 days to individuals who have had direct exposure to patients or animals with monkeypox is a reasonable consideration 272. For the most current information on monkeypox, see www.cdc.gov/ncidod/monkeypox/clinicians.htm.

# I.C.6. Noroviruses

Noroviruses, formerly referred to as Norwalk-like viruses, are members of the Caliciviridae family. These agents are transmitted via contaminated food or water and from person to person, causing explosive outbreaks of gastrointestinal disease 273. Environmental contamination also has been documented as a contributing factor in ongoing transmission during outbreaks 274,275. Although noroviruses cannot be propagated in cell culture, detection of viral RNA by molecular diagnostic techniques has facilitated a greater appreciation of their role in outbreaks of gastrointestinal disease 276. Reported outbreaks in hospitals 132,142,277, nursing homes 275, cruise ships 284,285, hotels 143,147, schools 148, and large crowded shelters established for hurricane evacuees 286 demonstrate their highly contagious nature, the disruptive impact they have in healthcare facilities and the community, and the difficulty of controlling outbreaks in settings where people share common facilities and space.
Of note, the risk to patients is nearly 5-fold higher in outbreaks in which a patient is the index case than in outbreaks in which a staff member is the index case 287. The average incubation period for gastroenteritis caused by noroviruses is 12-48 hours, and the clinical course lasts 12-60 hours 273. Illness is characterized by acute onset of nausea, vomiting, abdominal cramps, and/or diarrhea. The disease is largely self-limited; rarely, death caused by severe dehydration can occur, particularly among the elderly with debilitating health conditions.

The epidemiology of norovirus outbreaks shows that, even though primary cases may result from exposure to fecally contaminated food or water, secondary and tertiary cases often result from person-to-person transmission facilitated by contamination of fomites 273,288 and dissemination of infectious particles, especially during the process of vomiting 132,142,143,147,148,273,279,280. Widespread, persistent, and inapparent contamination of the environment and fomites can make outbreaks extremely difficult to control 147,275,284. These clinical observations and the detection of norovirus RNA on horizontal surfaces 5 feet above the level that might normally be touched suggest that, under certain circumstances, aerosolized particles may travel distances beyond 3 feet 147. It is hypothesized that infectious particles may be aerosolized from vomitus, inhaled, and swallowed. In addition, individuals who are responsible for cleaning the environment may be at increased risk of infection. Development of disease and transmission may be facilitated by the low infectious dose (i.e., <100 viral particles) 289 and the resistance of these viruses to the usual cleaning and disinfection agents (i.e., the virus may survive exposure to <10 ppm chlorine). An alternative phenolic agent that was shown to be effective against feline calicivirus was used for environmental cleaning in one outbreak 275,293. There are insufficient data to determine the efficacy of alcohol-based hand rubs against noroviruses when the hands are not visibly soiled 294. Absence of disease in certain individuals during an outbreak may be explained by protection from infection conferred by the B histo-blood group antigen 295. Consultation on outbreaks of gastroenteritis is available through CDC's Division of Viral and Rickettsial Diseases 296.

# I.C.7. Hemorrhagic fever viruses (HFV)

The hemorrhagic fever viruses are a mixed group of viruses that cause serious disease with high fever, skin rash, bleeding diathesis, and, in some cases, high mortality; the disease caused is referred to as viral hemorrhagic fever (VHF). Among the more commonly known HFVs are Ebola and Marburg viruses (Filoviridae), Lassa virus (Arenaviridae), Crimean-Congo hemorrhagic fever virus and Rift Valley fever virus (Bunyaviridae), and dengue and yellow fever viruses (Flaviviridae) 212,297. These viruses are transmitted to humans via contact with infected animals or via arthropod vectors. Although none of these viruses is endemic in the United States, outbreaks in affected countries provide potential opportunities for importation by infected humans and animals. Furthermore, there are concerns that some of these agents could be used as bioweapons 212. Person-to-person transmission is documented for Ebola, Marburg, Lassa, and Crimean-Congo hemorrhagic fever viruses.
In resource-limited healthcare settings, transmission of these agents to healthcare personnel, patients, and visitors has been described and in some outbreaks has accounted for a large proportion of cases. Transmission within households also has occurred among individuals who had direct contact with ill persons or their body fluids, but not among those who did not have such contact 301. Evidence concerning the transmission of HFVs has been summarized 212,302. Person-to-person transmission is associated primarily with direct blood and body fluid contact. Percutaneous exposure to contaminated blood carries a particularly high risk for transmission and increased mortality 303,304. The finding of large numbers of Ebola viral particles in the skin and the lumina of sweat glands has raised concern that transmission could occur from direct contact with intact skin, though epidemiologic evidence to support this is lacking 305. Postmortem handling of infected bodies is an important risk for transmission 301,306,307. In rare situations, cases in which the mode of transmission was unexplained among individuals with no known direct contact have led to speculation that airborne transmission could have occurred 298. However, airborne transmission of naturally occurring HFVs in humans has not been documented. In one study of airplane passengers exposed to an in-flight index case of Lassa fever, there was no transmission to any passengers 308.

In the laboratory setting, animals have been infected experimentally with Marburg or Ebola viruses via direct inoculation of the nose, mouth, and/or conjunctiva 309,310 and by using mechanically generated virus-containing aerosols 311,312. Transmission of Ebola virus among laboratory primates in an animal facility has been described 313. Secondarily infected animals were in individual cages and separated by approximately 3 meters. Although the possibility of airborne transmission was suggested, the authors were not able to exclude droplet or indirect contact transmission in this incidental observation.

Guidance on infection control precautions for HFVs that are transmitted person-to-person has been published by CDC 1,211 and by the Johns Hopkins Center for Civilian Biodefense Strategies 212. The most recent recommendations at the time of publication of this document were posted on the CDC website on 5/19/05 314. Inconsistencies among the various recommendations have raised questions about the appropriate precautions to use in U.S. hospitals. In less-developed countries, outbreaks of HFVs have been controlled with basic hygiene, barrier precautions, safe injection practices, and safe burial practices 299,306. The preponderance of evidence on HFV transmission indicates that Standard, Contact, and Droplet Precautions with eye protection are effective in protecting healthcare personnel and visitors who may attend an infected patient. Single gloves are adequate for routine patient care; double-gloving is advised during invasive procedures (e.g., surgery) that pose an increased risk for blood exposure. Routine eye protection (i.e., goggles or face shield) is particularly important. Fluid-resistant gowns should be worn for all patient contact. Airborne Precautions are not required for routine patient care; however, use of AIIRs is prudent when procedures that could generate infectious aerosols are performed (e.g., endotracheal intubation, bronchoscopy, suctioning, autopsy procedures involving oscillating saws).
N95 or higher-level respirators may provide added protection for individuals in a room during aerosol-generating procedures (Table 3, Appendix A). When a patient with a syndrome consistent with hemorrhagic fever also has a history of travel to an endemic area, precautions are initiated upon presentation and then modified as more information is obtained (Table 2). Patients with hemorrhagic fever syndrome in the setting of a suspected bioweapon attack should be managed using Airborne Precautions, including AIIRs, because the epidemiology of a potentially weaponized hemorrhagic fever virus is unpredictable.

# I.D. Transmission risks associated with specific types of healthcare settings

Numerous factors influence differences in transmission risks among the various healthcare settings. These include population characteristics (e.g., increased susceptibility to infections, type and prevalence of indwelling devices), intensity of care, exposure to environmental sources, length of stay, and frequency of interaction of patients/residents with each other and with HCWs. These factors, as well as organizational priorities, goals, and resources, influence how different healthcare settings adapt transmission prevention guidelines to meet their specific needs 315,316. Infection control management decisions are informed by data regarding institutional experience/epidemiology; trends in community and institutional HAIs; local, regional, and national epidemiology; and emerging infectious disease threats.

# I.D.1. Hospitals

Infection transmission risks are present in all hospital settings. However, certain hospital settings and patient populations have unique conditions that predispose patients to infection and merit special mention. These are often sentinel sites for the emergence of new transmission risks that may be unique to that setting or present opportunities for transmission to other settings in the hospital.

# I.D.1.a. Intensive Care Units

Intensive care units (ICUs) serve patients who are immunocompromised by disease state and/or by treatment modalities, as well as patients with major trauma, respiratory failure, and other life-threatening conditions (e.g., myocardial infarction, congestive heart failure, overdoses, strokes, gastrointestinal bleeding, renal failure, hepatic failure, multi-organ system failure, and the extremes of age). Although ICUs account for a relatively small proportion of hospitalized patients, infections acquired in these units account for >20% of all HAIs 317. In the National Nosocomial Infection Surveillance (NNIS) system, 26.6% of HAIs were reported from ICU and high-risk nursery (NICU) patients in 2002 (NNIS, unpublished data). This patient population has increased susceptibility to colonization and infection, especially with MDROs and Candida spp. 318,319, because of underlying diseases and conditions; the invasive medical devices and technology used in their care (e.g., central venous catheters and other intravascular devices, mechanical ventilators, extracorporeal membrane oxygenation [ECMO], hemodialysis/hemofiltration, pacemakers, implantable left ventricular assist devices); the frequency of contact with healthcare personnel; prolonged length of stay; and prolonged exposure to antimicrobial agents. Furthermore, adverse patient outcomes in this setting are more severe and are associated with a higher mortality 332.
Outbreaks associated with a variety of bacterial, fungal, and viral pathogens due to common-source and person-to-person transmission are frequent in adult and pediatric ICUs 31,333-336,337,338.

# I.D.1.b. Burn Units

Burn wounds can provide optimal conditions for colonization, infection, and transmission of pathogens; infection acquired by burn patients is a frequent cause of morbidity and mortality 320,339,340. In patients with a burn injury involving >30% of the total body surface area (TBSA), the risk of invasive burn wound infection is particularly high 341,342. Infections that occur in patients with burn injuries involving <30% TBSA are usually associated with the use of invasive devices. Methicillin-susceptible Staphylococcus aureus, MRSA, enterococci (including VRE), gram-negative bacteria, and Candida spp. are prevalent pathogens in burn infections 53,340, and outbreaks of these organisms have been reported. Shifts over time in the predominance of pathogens causing infections among burn patients often lead to changes in burn care practices 343. Burn wound infections caused by Aspergillus spp. or other environmental molds may result from exposure to supplies contaminated during construction 359 or to dust generated during construction or other environmental disruption 360.

Hydrotherapy equipment is an important environmental reservoir of gram-negative organisms. Its use for burn care is discouraged based on demonstrated associations between use of contaminated hydrotherapy equipment and infections. Burn wound infections and colonization, as well as bloodstream infections, caused by multidrug-resistant P. aeruginosa 361, A. baumannii 362, and MRSA 352 have been associated with hydrotherapy; excision of burn wounds in operating rooms is preferred.

Advances in burn care, specifically early excision and grafting of the burn wound, use of topical antimicrobial agents, and institution of early enteral feeding, have led to decreased infectious complications. Other advances have included prophylactic antimicrobial usage, selective digestive decontamination (SDD), and use of antimicrobial-coated catheters (ACC), but few epidemiologic studies and no efficacy studies have been performed to show the relative benefit of these measures 357.

There is no consensus on the most effective infection control practices to prevent transmission of infections to and from patients with serious burns (e.g., single-bed rooms 358, laminar flow 363 and high-efficiency particulate air filtration 360, or maintaining burn patients in a separate unit without exposure to patients or equipment from other units 364). There also is controversy regarding the need for and type of barrier precautions for routine care of burn patients. One retrospective study demonstrated the efficacy and cost-effectiveness of a simplified barrier isolation protocol for wound colonization, emphasizing handwashing and use of gloves, caps, masks, and plastic impermeable aprons (rather than isolation gowns) for direct patient contact 365. However, no studies have defined the most effective combination of infection control precautions for use in burn settings. Prospective studies in this area are needed.

# I.D.1.c. Pediatrics

Studies of the epidemiology of HAIs in children have identified unique infection control issues in this population 63,64.
Pediatric intensive care unit (PICU) patients and the lowest-birthweight babies in the high-risk nursery (HRN) monitored in the NNIS system have had high rates of central venous catheter-associated bloodstream infections 64,320. Additionally, there is a high prevalence of community-acquired infections among hospitalized infants and young children who have not yet become immune either by vaccination or by natural infection. The result is more patients and their sibling visitors with transmissible infections present in pediatric healthcare settings, especially during seasonal epidemics (e.g., pertussis 36,40,41; respiratory viral infections, including those caused by RSV 24, influenza viruses 373, parainfluenza virus 374, human metapneumovirus 375, and adenoviruses 376; rubeola 34; varicella 377; and rotavirus 38,378).

Close physical contact between healthcare personnel and infants and young children (e.g., cuddling, feeding, playing, changing soiled diapers, and cleaning copious uncontrolled respiratory secretions) provides abundant opportunities for transmission of infectious material. Practices and behaviors such as congregation of children in play areas, where toys and bodily secretions are easily shared, and family members rooming-in with pediatric patients can further increase the risk of transmission. Pathogenic bacteria have been recovered from toys used by hospitalized patients 379; contaminated bath toys were implicated in an outbreak of multidrug-resistant P. aeruginosa on a pediatric oncology unit 80. In addition, several patient factors increase the likelihood that infection will result from exposure to pathogens in healthcare settings (e.g., immaturity of the neonatal immune system, lack of previous natural infection and resulting immunity, prevalence of patients with congenital or acquired immune deficiencies, congenital anatomic anomalies, and use of life-saving invasive devices in neonatal and pediatric intensive care units) 63.

There are theoretical concerns that infection risk will increase in association with innovative practices used in the NICU for the purpose of improving developmental outcomes. Such factors include co-bedding 380 and kangaroo care 381, which may increase opportunities for skin-to-skin exposure of multiple-gestation infants to each other and to their mothers, respectively; however, infection risk may actually be reduced among infants receiving kangaroo care 382. Children who attend child care centers 383,384 and pediatric rehabilitation units 385 may increase the overall burden of antimicrobial resistance (e.g., by contributing to the reservoir of community-associated MRSA). Patients in chronic care facilities may have increased rates of colonization with resistant GNBs and may be sources of introduction of resistant organisms to acute care settings 50.

# I.D.2. Nonacute healthcare settings

Healthcare is provided in various settings outside of hospitals, including long-term care facilities (LTCFs) (e.g., nursing homes), homes for the developmentally disabled, settings where behavioral health services are provided, rehabilitation centers, and hospices 392. In addition, healthcare may be provided in nonhealthcare settings, such as workplaces with occupational health clinics, adult day care centers, assisted living facilities, homeless shelters, jails and prisons, and school clinics and infirmaries.
Each of these settings has unique circumstances and population risks to consider when designing and implementing an infection control program. Several of the most common settings and their particular challenges are discussed below. While this Guideline does not address each setting, the principles and strategies provided may be adapted and applied as appropriate.

# I.D.2.a. Long-term care

The designation LTCF applies to a diverse group of residential settings, ranging from institutions for the developmentally disabled to nursing homes for the elderly and pediatric chronic-care facilities. Nursing homes for the elderly predominate numerically and frequently represent long-term care as a group of facilities. Approximately 1.8 million Americans reside in the nation's 16,500 nursing homes 396. Estimated HAI rates of 1.8 to 13.5 per 1000 resident-care days have been reported, with a range of 3 to 7 per 1000 resident-care days in the more rigorous studies (an illustrative calculation of this rate appears below). The infrastructure described in the Department of Veterans Affairs nursing home care units is a promising example for the development of a nationwide HAI surveillance system for LTCFs 402.

LTCFs are different from other healthcare settings in that elderly patients at increased risk for infection are brought together in one setting and remain in the facility for extended periods of time; for most residents, it is their home. An atmosphere of community is fostered, and residents share common eating and living areas and participate in various facility-sponsored activities 403,404. Because able residents interact freely with each other, controlling transmission of infection in this setting is challenging 405. Residents who are colonized or infected with certain microorganisms are, in some cases, restricted to their rooms. However, because of the psychosocial risks associated with such restriction, it has been recommended that psychosocial needs be balanced with infection control needs in the LTCF setting.

Documented LTCF outbreaks have been caused by various viruses (e.g., influenza virus 35, rhinovirus 413, adenovirus [conjunctivitis] 414, and norovirus 275,278,279,281) and bacteria, including group A streptococcus 162, B. pertussis 415, non-susceptible S. pneumoniae 197,198, other MDROs, and Clostridium difficile 416. These pathogens can lead to substantial morbidity and mortality and increased medical costs; prompt detection and implementation of effective control measures are required.

Risk factors for infection are prevalent among LTCF residents 395,417,418. Age-related declines in immunity may affect responses to immunizations for influenza and other infectious agents and increase susceptibility to tuberculosis. Immobility, incontinence, dysphagia, underlying chronic diseases, poor functional status, and age-related skin changes increase susceptibility to urinary, respiratory, and cutaneous and soft-tissue infections, while malnutrition can impair wound healing. Medications (e.g., drugs that affect level of consciousness, immune function, gastric acid secretion, and normal flora, including antimicrobial therapy) and invasive devices (e.g., urinary catheters and feeding tubes) heighten susceptibility to infection and colonization in LTCF residents. Finally, limited functional status and total dependence on healthcare personnel for activities of daily living have been identified as independent risk factors for infection 401,417,427 and for colonization with MRSA 428,429 and ESBL-producing K. pneumoniae 430.
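As a worked illustration of the incidence-density rates cited earlier in this subsection (infections per 1000 resident-care days), the following minimal sketch uses hypothetical numbers; it is an arithmetic example, not a surveillance tool.

```python
def hai_rate_per_1000_resident_days(infections, resident_care_days):
    """Incidence density: HAIs per 1000 resident-care days."""
    return infections / resident_care_days * 1000

# Hypothetical facility: an average of 100 occupied beds over a 30-day
# month yields 100 x 30 = 3000 resident-care days. If 15 HAIs are
# identified in that month, the rate is 5.0 per 1000 resident-care days,
# within the 3-7 range reported by the more rigorous studies cited above.
print(hai_rate_per_1000_resident_days(15, 100 * 30))  # -> 5.0
```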
Several position papers and review articles have been published that provide guidance on various aspects of infection control and antimicrobial resistance in LTCFs. The Centers for Medicare and Medicaid Services (CMS) have established regulations for the prevention of infection in LTCFs 437. Because residents of LTCFs are hospitalized frequently, they can transfer pathogens between LTCFs and the healthcare facilities in which they receive care 8. This is also true for pediatric long-term care populations. Pediatric chronic-care facilities have been associated with importing extended-spectrum cephalosporin-resistant gram-negative bacilli into one PICU 50. Children from pediatric rehabilitation units may contribute to the reservoir of community-associated MRSA 385.

# I.D.2.b. Ambulatory Care

In the past decade, healthcare delivery in the United States has shifted from the acute, inpatient hospital to a variety of ambulatory and community-based settings, including the home. Ambulatory care is provided in hospital-based outpatient clinics, nonhospital-based clinics and physician offices, public health clinics, free-standing dialysis centers, ambulatory surgical centers, urgent care centers, and many other settings. In 2000, there were 83 million visits to hospital outpatient clinics and more than 823 million visits to physician offices 442; ambulatory care now accounts for most patient encounters with the healthcare system 443. In these settings, adapting transmission prevention guidelines is challenging because patients remain in common areas for prolonged periods waiting to be seen by a healthcare provider or awaiting admission to the hospital, examination or treatment rooms are turned around quickly with limited cleaning, and infectious patients may not be recognized immediately. Furthermore, immunocompromised patients often receive chemotherapy in infusion rooms, where they stay for extended periods of time along with other types of patients.

There are few data on the risk of HAIs in ambulatory care settings, with the exception of hemodialysis centers 18,444,445. Transmission of infections in outpatient settings has been reviewed in three publications. Goodman and Solomon summarized 53 clusters of infections associated with the outpatient setting from 1961-1990 446. Overall, 29 clusters were associated with common-source transmission from contaminated solutions or equipment, 14 with person-to-person transmission from or involving healthcare personnel, and 10 with airborne or droplet transmission among patients and healthcare workers. Transmission of bloodborne pathogens (i.e., hepatitis B and C viruses and, rarely, HIV) in outbreaks, sometimes involving hundreds of patients, continues to occur in ambulatory settings. These outbreaks often are related to common-source exposures, usually a contaminated medical device, multi-dose vial, or intravenous solution 82. In all cases, transmission has been attributed to failure to adhere to fundamental infection control principles, including safe injection practices and aseptic technique. This subject has been reviewed, and recommended infection control and safe injection practices have been summarized 454. Airborne transmission of M. tuberculosis and measles in ambulatory settings, most frequently emergency departments, has been reported 34,127,446,448.
Measles virus was transmitted in physician offices and other outpatient settings during an era when immunization rates were low and measles outbreaks in the community were occurring regularly 34,122,458. Rubella has been transmitted in the outpatient obstetric setting 33; there are no published reports of varicella transmission in the outpatient setting. In the ophthalmology setting, adenovirus type 8 epidemic keratoconjunctivitis has been transmitted via incompletely disinfected ophthalmology equipment and/or from healthcare workers to patients, presumably by contaminated hands 17,446,448.

If transmission in outpatient settings is to be prevented, screening for potentially infectious symptomatic and asymptomatic individuals, especially those who may be at risk for transmitting airborne infectious agents (e.g., M. tuberculosis, varicella-zoster virus, rubeola), is necessary at the start of the initial patient encounter. Upon identification of a potentially infectious patient, prompt separation from other patients and implementation of appropriate control measures (e.g., Respiratory Hygiene/Cough Etiquette and Transmission-Based Precautions) can decrease transmission risks 9,12. Transmission of MRSA and VRE in outpatient settings has not been reported, but the association of CA-MRSA in healthcare personnel working in an outpatient HIV clinic with environmental CA-MRSA contamination in that clinic suggests the possibility of transmission in that setting 463. Patient-to-patient transmission of Burkholderia species and Pseudomonas aeruginosa in outpatient clinics for adults and children with cystic fibrosis has been confirmed 464,465.

# I.D.2.c. Home Care

Home care in the United States is delivered by more than 20,000 provider agencies, including home health agencies, hospices, durable medical equipment providers, home infusion therapy services, and personal care and support services providers. Home care is provided to patients of all ages with both acute and chronic conditions. The scope of services ranges from assistance with activities of daily living and physical and occupational therapy to the care of wounds, infusion therapy, and chronic ambulatory peritoneal dialysis (CAPD).

The incidence of infection in home care patients, other than that associated with infusion therapy, is not well studied. However, data collection and calculation of infection rates have been accomplished for central venous catheter-associated bloodstream infections in patients receiving home infusion therapy and for the risk of blood contact through percutaneous or mucosal exposures, demonstrating that surveillance can be performed in this setting 475. Draft definitions for home care-associated infections have been developed 476. Transmission risks during home care are presumed to be minimal. The main transmission risks to home care patients are from an infectious healthcare provider or contaminated equipment; providers also can be exposed to an infectious patient during home visits. Because home care involves patient care by a limited number of personnel in settings without multiple patients or shared equipment, the potential reservoir of pathogens is reduced.
Infections of home care providers that could pose a risk to home care patients include infections transmitted by the airborne or droplet routes (e.g., chickenpox, tuberculosis, influenza), skin infestations (e.g., scabies 69 and lice), and infections transmitted by direct or indirect contact (e.g., impetigo). There are no published data on indirect transmission of MDROs from one home care patient to another, although this is theoretically possible if contaminated equipment is transported from an infected or colonized patient and used on another patient. Of note, investigation of the first case of VISA in home care 186 and of the first two reported cases of VRSA 178,180,181,183 found no evidence of transmission of VISA or VRSA to other home care recipients. Home health care also may contribute to antimicrobial resistance; a review of outpatient vancomycin use found that 39% of recipients did not receive the antibiotic according to recommended guidelines 477.

Although most home care agencies implement policies and procedures to prevent transmission of organisms, the current approach is based on the adaptation of the 1996 Guideline for Isolation Precautions in Hospitals 1 as well as on other professional guidance 478,479. This issue has been very challenging in the home care industry, and practice has been inconsistent and frequently not evidence-based. For example, many home health agencies continue to observe "nursing bag technique," a practice that prescribes the use of barriers between the nursing bag and environmental surfaces in the home 480. While the home environment may not always appear clean, the use of barriers between two noncritical surfaces has been questioned 481,482. Opportunities exist to conduct research in home care related to infection transmission risks 483.

# I.D.2.d. Other sites of healthcare delivery

Facilities that are not primarily healthcare settings but in which healthcare is delivered include clinics in correctional facilities and shelters. Both settings can have suboptimal features, such as crowded conditions and poor ventilation. Economically disadvantaged individuals who may have chronic illnesses and healthcare problems related to alcoholism, injection drug use, poor nutrition, and/or inadequate shelter often receive their primary healthcare at such sites 484. Infectious diseases of special concern for transmission include tuberculosis, scabies, respiratory infections (e.g., N. meningitidis, S. pneumoniae), sexually transmitted and bloodborne diseases (e.g., HIV, HBV, HCV, syphilis, gonorrhea), hepatitis A virus (HAV), diarrheal agents such as norovirus, and foodborne diseases 286. A high index of suspicion for tuberculosis and CA-MRSA in these populations is needed, as outbreaks in these settings or among the populations they serve have been reported. Patient encounters in these types of facilities provide an opportunity to deliver recommended immunizations and to screen for M. tuberculosis infection, in addition to diagnosing and treating acute illnesses 498. Recommended infection control measures in these nontraditional areas designated for healthcare delivery are the same as for other ambulatory care settings. Therefore, these settings must be equipped to observe Standard Precautions and, when indicated, Transmission-Based Precautions.

# I.E. Transmission risks associated with special patient populations

As new treatments emerge for complex diseases, unique infection control challenges associated with special patient populations need to be addressed.
# I.E.1. Immunocompromised patients

Patients who have congenital primary immune deficiencies or acquired immune deficiencies (e.g., treatment-induced immune deficiencies) are at increased risk for numerous types of infections while receiving healthcare and may be located throughout the healthcare facility. The specific defects of the immune system determine the types of infections that are most likely to be acquired (e.g., viral infections are associated with T-cell defects, and fungal and bacterial infections occur in patients who are neutropenic). As a general group, immunocompromised patients can be cared for in the same environment as other patients; however, it is always advisable to minimize exposure to other patients with transmissible infections, such as influenza and other respiratory viruses 499,500. The use of more intense chemotherapy regimens for treatment of childhood leukemia may be associated with prolonged periods of neutropenia and suppression of other components of the immune system, extending the period of infection risk and raising the concern that additional precautions may be indicated for select groups 501,502. With the application of newer and more intense immunosuppressive therapies for a variety of medical conditions (e.g., rheumatologic disease 503,504, inflammatory bowel disease 505), immunosuppressed patients are likely to be more widely distributed throughout a healthcare facility rather than localized to single patient units (e.g., hematology-oncology). Guidelines for preventing infections in certain groups of immunocompromised patients have been published 15,506,507.

Published data provide evidence to support placing allogeneic HSCT patients in a Protective Environment 15,157,158. Also, three guidelines have been developed that address the special requirements of these immunocompromised patients, including use of antimicrobial prophylaxis and engineering controls to create a Protective Environment for the prevention of infections caused by Aspergillus spp. and other environmental fungi 11,14,15. As more intense chemotherapy regimens associated with prolonged periods of neutropenia or graft-versus-host disease are implemented, the period of risk and the duration of environmental protection may need to be prolonged beyond the traditional 100 days 508.

# I.E.2. Cystic fibrosis patients

Patients with cystic fibrosis (CF) require special consideration when developing infection control guidelines. Compared with other patients, CF patients require additional protection to prevent transmission from contaminated respiratory therapy equipment. Infectious agents such as Burkholderia cepacia complex and P. aeruginosa 464,465,514,515 have unique clinical and prognostic significance. In CF patients, B. cepacia infection has been associated with increased morbidity and mortality, while delayed acquisition of chronic P. aeruginosa infection may be associated with an improved long-term clinical outcome 519,520. Person-to-person transmission of B. cepacia complex has been demonstrated among children 517 and adults 521 with CF in healthcare settings 464,522, during various social contacts 523 (most notably attendance at camps for patients with CF 524), and among siblings with CF 525.
Successful infection control measures used to prevent transmission of respiratory secretions include segregation of CF patients from each other in ambulatory and hospital settings (including use of private rooms with separate showers), environmental decontamination of surfaces and equipment contaminated with respiratory secretions, elimination of group chest physiotherapy sessions, and disbanding of CF camps 97,526. The Cystic Fibrosis Foundation published a consensus document with evidence-based recommendations for infection control practices for CF patients 20.

# I.F. New therapies associated with potentially transmissible infectious agents

# I.F.1. Gene therapy

Gene therapy has been attempted using a number of different viral vectors, including nonreplicating retroviruses, adenoviruses, adeno-associated viruses, and replication-competent strains of poxviruses. Unexpected adverse events have restricted the growth of gene therapy protocols. The infectious hazards of gene therapy are theoretical at this time but require meticulous surveillance because of the possible occurrence of in vivo recombination and the subsequent emergence of a transmissible, genetically altered pathogen. The greatest concern attends the use of replication-competent viruses, especially vaccinia. As of the time of publication, no reports had described transmission of a vector virus from a gene therapy recipient to another individual, but surveillance is ongoing. Recommendations for monitoring infection control issues throughout the course of gene therapy trials have been published.

# I.F.2. Infections transmitted through blood, organs and other tissues

The potential for transmitting infectious pathogens through biologic products is a small but ever-present risk, despite donor screening. Reported infections transmitted by transfusion or transplantation include West Nile virus infection 530, cytomegalovirus infection 531, Creutzfeldt-Jakob disease 230, hepatitis C 532, infections with Clostridium spp. 533 and group A streptococcus 534, malaria 535, babesiosis 536, Chagas disease 537, lymphocytic choriomeningitis 538, and rabies 539,540. Therefore, it is important to consider receipt of biologic products when evaluating patients for potential sources of infection.

# I.F.3. Xenotransplantation

The transplantation of nonhuman cells, tissues, and organs into humans potentially exposes patients to zoonotic pathogens. Transmission of known zoonotic infections (e.g., trichinosis from porcine tissue) constitutes one concern, but also of concern is the possibility that transplantation of nonhuman cells, tissues, or organs may transmit previously unknown zoonotic infections (xenozoonoses) to immunosuppressed human recipients. Potential infections that might accompany transplantation of porcine organs have been described 541. Guidelines from the U.S. Public Health Service address many of the infectious disease and infection control issues that surround the developing field of xenotransplantation 542; work in this area is ongoing.

# Part II: Fundamental elements needed to prevent transmission of infectious agents in healthcare settings

# II.A. Healthcare system components that influence the effectiveness of precautions to prevent transmission

# II.A.1. Administrative measures

Healthcare organizations can demonstrate a commitment to preventing transmission of infectious agents by incorporating infection control into the objectives of the organization's patient and occupational safety programs.
An infrastructure to guide, support, and monitor adherence to Standard and Transmission-Based Precautions 434,548,549 will facilitate fulfillment of the organization's mission and achievement of the Joint Commission on Accreditation of Healthcare Organizations' patient safety goal to decrease HAIs 550. Policies and procedures that explain how Standard and Transmission-Based Precautions are applied, including systems used to identify and communicate information about patients with potentially transmissible infectious agents, are essential to ensure the success of these measures and may vary according to the characteristics of the organization.

A key administrative measure is the provision of fiscal and human resources for maintaining infection control and occupational health programs that are responsive to emerging needs. Specific components include bedside nurse 551 and infection prevention and control professional (ICP) staffing levels 552; inclusion of ICPs in facility construction and design decisions 11; clinical microbiology laboratory support 553,554; adequate supplies and equipment, including facility ventilation systems 11; adherence monitoring 555; assessment and correction of system failures that contribute to transmission 556,557; and provision of feedback to healthcare personnel and senior administrators 434,548,549,558. The positive influence of institutional leadership has been demonstrated repeatedly in studies of HCW adherence to recommended hand hygiene practices 176,177,434,548,549. Healthcare administrator involvement in infection control processes can improve administrators' awareness of the rationale and resource requirements for following recommended infection control practices.

Several administrative factors may affect the transmission of infectious agents in healthcare settings: institutional culture, individual worker behavior, and the work environment. Each of these areas is suitable for performance improvement monitoring and incorporation into the organization's patient safety goals 543,544,546,565.

# II.A.1.a. Scope of work and staffing needs for infection control professionals

The effectiveness of infection surveillance and control programs in preventing nosocomial infections in United States hospitals was assessed by CDC through the Study on the Efficacy of Nosocomial Infection Control (SENIC Project), conducted 1970-76 566. In a representative sample of US general hospitals, those with a trained infection control physician or microbiologist involved in an infection control program and at least one infection control nurse per 250 beds had a 32% lower rate of the four infections studied (CVC-associated bloodstream infections, ventilator-associated pneumonias, catheter-related urinary tract infections, and surgical site infections). Since that landmark study was published, the responsibilities of ICPs have expanded commensurate with the growing complexity of the healthcare system, the patient populations served, and the increasing numbers of medical procedures and devices used in all types of healthcare settings. The scope of work of ICPs was first assessed in 1982 by the Certification Board of Infection Control (CBIC) and has been reassessed every five years since that time 558. The findings of these task analyses have been used to develop and update the Infection Control Certification Examination, offered for the first time in 1983.
With each survey, it is apparent that the role of the ICP is growing in complexity and scope, beyond traditional infection control activities in acute care hospitals. Activities currently assigned to ICPs in response to emerging challenges include: 1) surveillance and infection prevention at facilities other than acute care hospitals (e.g., ambulatory clinics, day surgery centers, long-term care facilities, rehabilitation centers, home care); 2) oversight of employee health services related to infection prevention (e.g., assessment of risk and administration of recommended treatment following exposure to infectious agents, tuberculosis screening, influenza vaccination, respiratory protection fit testing, and administration of other vaccines as indicated, such as smallpox vaccine in 2003); 3) preparedness planning for annual influenza outbreaks, pandemic influenza, SARS, and bioweapons attacks; 4) adherence monitoring for selected infection control practices; 5) oversight of risk assessment and implementation of prevention measures associated with construction and renovation; 6) prevention of transmission of MDROs; 7) evaluation of new medical products that could be associated with increased infection risk (e.g., intravenous infusion materials); 8) communication with the public, facility staff, and state and local health departments concerning infection control-related issues; and 9) participation in local and multi-center research projects 434,549,552,558,573,574.

None of the CBIC job analyses addressed specific staffing requirements for the identified tasks, although the surveys did include information about hours worked; the 2001 survey included the number of ICPs assigned to the responding facilities 558. There is agreement in the literature that one ICP per 250 acute care beds is no longer adequate to meet current infection control needs; a Delphi project that assessed staffing needs of infection control programs in the 21st century concluded that a ratio of 0.8 to 1.0 ICP per 100 occupied acute care beds is an appropriate level of staffing 552. A survey of participants in the National Nosocomial Infections Surveillance (NNIS) system found that the average daily census per ICP was 115 316. Results of other studies have been similar: 3 per 500 beds for large acute care hospitals, 1 per 150-250 beds in long-term care facilities, and 1.56 per 250 beds in small rural hospitals 573,575. The foregoing demonstrates that infection control staffing can no longer be based on patient census alone but rather must be determined by the scope of the program, the characteristics of the patient population, the complexity of the healthcare system, the tools available to assist personnel to perform essential tasks (e.g., electronic tracking and laboratory support for surveillance), and the unique or urgent needs of the institution and community 552. Furthermore, appropriate training is required to optimize the quality of the work performed 558,572,576.
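To make the staffing benchmarks above concrete, the following sketch simply applies the published ratios (the traditional one ICP per 250 beds and the Delphi project's 0.8 to 1.0 ICP per 100 occupied beds) to a hypothetical hospital. It is an arithmetic illustration only, not a staffing tool, since staffing must also reflect program scope and institutional needs as noted above.

```python
def icp_fte_traditional(acute_care_beds):
    """SENIC-era benchmark: one ICP per 250 acute care beds."""
    return acute_care_beds / 250

def icp_fte_delphi(occupied_beds, ratio=0.8):
    """Delphi project recommendation: 0.8-1.0 ICP per 100 occupied beds."""
    return occupied_beds * ratio / 100

# Hypothetical 400-bed hospital running at 85% occupancy (340 occupied beds):
print(icp_fte_traditional(400))         # -> 1.6 FTE under the old benchmark
print(icp_fte_delphi(340))              # -> 2.72 FTE at the low end
print(icp_fte_delphi(340, ratio=1.0))   # -> 3.4 FTE at the high end
```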
# II.A.1.a.i. Infection Control Nurse Liaison

Designating a bedside nurse on a patient care unit as an infection control liaison or "link nurse" is reported to be an effective adjunct to enhance infection control at the unit level. Such individuals receive training in basic infection control and have frequent communication with the ICPs but maintain their primary role as bedside caregivers on their units. The infection control nurse liaison increases awareness of infection control at the unit level. He or she is especially effective in the implementation of new policies or control interventions because of the rapport with individuals on the unit, an understanding of unit-specific challenges, and the ability to promote strategies that are most likely to be successful in that unit. This position is an adjunct to, not a replacement for, fully trained ICPs. Furthermore, infection control liaison nurses should not be counted when considering ICP staffing.

# II.A.1.b. Bedside nurse staffing

There is increasing evidence that the level of bedside nurse staffing influences the quality of patient care 583,584. If nursing staff are adequate, it is more likely that infection control practices, including hand hygiene and Standard and Transmission-Based Precautions, will be given appropriate attention and applied correctly and consistently 552. A national multicenter study reported strong and consistent inverse relationships between nurse staffing and five adverse outcomes in medical patients, two of which were HAIs: urinary tract infections and pneumonia 583. The association of nursing staff shortages with increased rates of HAIs has been demonstrated in several outbreaks in hospitals and long-term care settings, and with increased transmission of hepatitis C virus in dialysis units 22,418,551. In most cases, when staffing improved as part of a comprehensive control intervention, the outbreak ended or the HAI rate declined. In two studies 590,596, the composition of the nursing staff ("pool" or "float" vs. regular staff nurses) influenced the rate of primary bloodstream infections, with an increased infection rate occurring when the proportion of regular nurses decreased and that of pool nurses increased.

# II.A.1.c. Clinical microbiology laboratory support

The critical role of the clinical microbiology laboratory in infection control and healthcare epidemiology has been well described 553,554 and is supported by the Infectious Diseases Society of America policy statement on consolidation of clinical microbiology laboratories, published in 2001 553. The clinical microbiology laboratory contributes to preventing transmission of infectious diseases in healthcare settings by promptly detecting and reporting epidemiologically important organisms, identifying emerging patterns of antimicrobial resistance, and assisting in assessment of the effectiveness of recommended precautions to limit transmission during outbreaks 598. Outbreaks of infections may be recognized first by laboratorians 162. Healthcare organizations need to ensure the availability of the recommended scope and quality of laboratory services, a sufficient number of appropriately trained laboratory staff members, and systems to promptly communicate epidemiologically important results to those who will take action (e.g., providers of clinical care, infection control staff, healthcare epidemiologists, and infectious disease consultants) 601. As concerns about emerging pathogens and bioterrorism grow, the role of the clinical microbiology laboratory takes on even greater importance. For healthcare organizations that outsource microbiology laboratory services (e.g., ambulatory care, home care, LTCFs, smaller acute care hospitals), it is important to specify by contract the types of services (e.g., periodic institution-specific aggregate susceptibility reports) required to support infection control.
Several key functions of the clinical microbiology laboratory are relevant to this guideline:
- Antimicrobial susceptibility testing and interpretation in accordance with current guidelines developed by the National Committee for Clinical Laboratory Standards (NCCLS), known as the Clinical and Laboratory Standards Institute (CLSI) since 2005 602 , for the detection of emerging resistance patterns 603,604 , and for the preparation, analysis, and distribution of periodic cumulative antimicrobial susceptibility summary reports (a minimal aggregation sketch follows this list). While not required, clinical laboratories ideally should have access to rapid genotypic identification of bacteria and their antibiotic resistance genes 608 .
- Performance of surveillance cultures when appropriate (including retention of isolates for analysis) to assess patterns of infection transmission and effectiveness of infection control interventions at the facility or organization. Microbiologists assist in decisions concerning the indications for initiating and discontinuing active surveillance programs and optimize the use of laboratory resources.
- Molecular typing, on-site or outsourced, in order to investigate and control healthcare-associated outbreaks 609 .
- Application of rapid diagnostic tests to support clinical decisions involving patient treatment, room selection, and implementation of control measures, including barrier precautions and use of vaccine or chemoprophylaxis agents (e.g., influenza, B. pertussis 613 , RSV 614,615 , and enteroviruses 616 ). The microbiologist provides guidance to limit rapid testing to clinical situations in which rapid results influence patient management decisions, as well as providing oversight of point-of-care testing performed by non-laboratory healthcare workers 617 .
- Detection and rapid reporting of epidemiologically important organisms, including those that are reportable to public health agencies.
- Implementation of a quality control program that ensures testing services are appropriate for the population served and stringently evaluated for sensitivity, specificity, applicability, and feasibility.
- Participation in a multidisciplinary team to develop and maintain an effective institutional program for the judicious use of antimicrobial agents 618,619 .
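As an illustration of the first function in the list above, the following minimal Python sketch aggregates percent-susceptible figures for a cumulative susceptibility ("antibiogram") report from line-listed isolate results. The rows and field names are hypothetical; actual reports should follow CLSI guidance (e.g., document M39) on first-isolate deduplication and minimum isolate counts.

```python
# A minimal sketch of aggregating a cumulative susceptibility ("antibiogram")
# report from line-listed results. Rows and field names are hypothetical;
# real reports follow CLSI guidance (e.g., M39) on deduplication and
# minimum isolate counts.
from collections import defaultdict

# (patient_id, organism, drug, result): one row per tested isolate
results = [
    ("p1", "E. coli", "ciprofloxacin", "S"),
    ("p2", "E. coli", "ciprofloxacin", "R"),
    ("p3", "E. coli", "ciprofloxacin", "S"),
    ("p1", "S. aureus", "oxacillin", "R"),
]

def antibiogram(rows):
    """Percent susceptible per organism/drug, first result per patient."""
    seen = set()
    tallies = defaultdict(lambda: [0, 0])  # (organism, drug) -> [S count, total]
    for patient, organism, drug, result in rows:
        if (patient, organism, drug) in seen:
            continue  # count only the first result per patient/organism/drug
        seen.add((patient, organism, drug))
        tallies[(organism, drug)][1] += 1
        if result == "S":
            tallies[(organism, drug)][0] += 1
    return {key: 100 * s / n for key, (s, n) in tallies.items()}

for (organism, drug), pct in antibiogram(results).items():
    print(f"{organism} vs {drug}: {pct:.0f}% susceptible")
```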
# II.A.2. Institutional safety culture and organizational characteristics

Safety culture (or safety climate) refers to a work environment where a shared commitment to safety on the part of management and the workforce is understood and followed 557,620,621 . The authors of the Institute of Medicine report, To Err is Human 543 , acknowledge that the causes of medical error are multifaceted but emphasize repeatedly the pivotal role of system failures and the benefits of a safety culture. A safety culture is created through 1) the actions management takes to improve patient and worker safety; 2) worker participation in safety planning; 3) the availability of appropriate protective equipment; 4) influence of group norms regarding acceptable safety practices; and 5) the organization's socialization process for new personnel. Safety and patient outcomes can be enhanced by improving or creating organizational characteristics within patient care units, as demonstrated by studies of surgical ICUs 622,623 . Each of these factors has a direct bearing on adherence to transmission prevention recommendations 257 . Measurement of an institutional culture of safety is useful for designing improvements in healthcare 624,625 . Several hospital-based studies have linked measures of safety culture with both employee adherence to safe practices and reduced exposures to blood and body fluids. One study of hand hygiene practices concluded that improved adherence requires integration of infection control into the organization's safety culture 561 . Several hospitals that are part of the Veterans Administration Healthcare System have taken specific steps toward improving the safety culture, including error reporting mechanisms, root cause analysis of identified problems, safety incentives, and employee education.

# II.A.3. Adherence of healthcare personnel to recommended guidelines

Adherence to recommended infection control practices decreases transmission of infectious agents in healthcare settings 116,562 . However, several observational studies have shown limited adherence to recommended practices by healthcare personnel 559 . Observed adherence to universal precautions ranged from 43% to 89% 641,642,649,651,652 ; however, the degree of adherence depended frequently on the practice that was assessed and, for glove use, the circumstances in which gloves were used. Appropriate glove use has ranged from a low of 15% 645 to a high of 82% 650 . However, 92% and 98% adherence with glove use have been reported during arterial blood gas collection and resuscitation, respectively, procedures in which there may be considerable blood contact 643,656 . Differences in observed adherence have been reported among occupational groups in the same healthcare facility 641 and between experienced and nonexperienced professionals 645 . In surveys of healthcare personnel, self-reported adherence was generally higher than that reported in observational studies. Furthermore, where an observational component was included with a self-reported survey, self-perceived adherence was often greater than observed adherence 657 . Among nurses and physicians, more years of experience predict lower adherence 645,651 .

Education to improve adherence is the primary intervention that has been studied. While positive changes in knowledge and attitude have been demonstrated 640,658 , there often has been limited or no accompanying change in behavior 642,644 . Self-reported adherence is higher in groups that have received an educational intervention 630,659 . Educational interventions that incorporated videotaping and performance feedback were successful in improving adherence during the period of study; the long-term effect of these interventions is not known 654 . The use of videotape also served to identify system problems (e.g., communication and access to personal protective equipment) that otherwise may not have been recognized. Use of engineering controls and facility design concepts for improving adherence is gaining interest. While introduction of automated sinks had a negative impact on consistent adherence to hand washing 660 , use of electronic monitoring and voice prompts to remind healthcare workers to perform hand hygiene, together with improved accessibility to hand hygiene products, increased adherence and contributed to a decrease in HAIs in one study 661 . More information is needed regarding how technology might improve adherence. Improving adherence to infection control practices requires a multifaceted approach that incorporates continuous assessment of both the individual and the work environment 559,561 .
Using several behavioral theories, Kretzer and Larson concluded that a single intervention (e.g., a handwashing campaign or putting up new posters about transmission precautions) would likely be ineffective in improving healthcare personnel adherence 662 . Improvement requires that the organizational leadership make prevention an institutional priority and integrate infection control practices into the organization's safety culture 561 . A recent review of the literature concluded that variations in organizational factors (e.g., safety climate, policies and procedures, education and training) and individual factors (e.g., knowledge, perceptions of risk, past experience) were determinants of adherence to infection control guidelines for protection against SARS and other respiratory pathogens 257 .

# II.B. Surveillance for healthcare-associated infections (HAIs)

Surveillance is an essential tool for case-finding of single patients or clusters of patients who are infected or colonized with epidemiologically important organisms (e.g., susceptible bacteria such as S. aureus, S. pyogenes, or Enterobacter-Klebsiella spp.; MRSA, VRE, and other MDROs; C. difficile; RSV; influenza virus) for which transmission-based precautions may be required. Surveillance is defined as the ongoing, systematic collection, analysis, interpretation, and dissemination of data regarding a health-related event for use in public health action to reduce morbidity and mortality and to improve health 663 . The work of Ignaz Semmelweis describing the role of person-to-person transmission in puerperal sepsis is the earliest example of the use of surveillance data to reduce transmission of infectious agents 664 . Surveillance of both process measures and the infection rates to which they are linked is important for evaluating the effectiveness of infection prevention efforts and identifying indications for change 555 . The Study on the Efficacy of Nosocomial Infection Control (SENIC) found that different combinations of infection control practices resulted in reduced rates of nosocomial surgical site infections, pneumonia, urinary tract infections, and bacteremia in acute care hospitals 566 ; however, surveillance was the only component essential for reducing all four types of HAIs. Although a similar study has not been conducted in other healthcare settings, a role for surveillance and the need for novel strategies have been described in LTCFs 398,434,669,670 and in home care.

The essential elements of a surveillance system are: 1) standardized definitions; 2) identification of patient populations at risk for infection; 3) statistical analysis (e.g., risk adjustment, calculation of rates using appropriate denominators, and trend analysis using methods such as statistical process control charts; see the sketch below); and 4) feedback of results to the primary caregivers. Data gathered through surveillance of high-risk populations, device use, procedures, and/or facility locations (e.g., ICUs) are useful for detecting transmission trends. Identification of clusters of infections should be followed by a systematic epidemiologic investigation to determine commonalities in persons, places, and time, and to guide implementation of interventions and evaluation of their effectiveness. Targeted surveillance based on the highest-risk areas or patients has been preferred over facility-wide surveillance for the most effective use of resources 673,676 .
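Returning to element 3 of the list above, the following is a minimal Python sketch of calculating a device-associated infection rate with an appropriate denominator (per 1,000 device-days) and applying a simple u-chart upper control limit to flag months for investigation. The monthly counts are invented for illustration; real programs would also risk-adjust and validate their data.

```python
# A minimal sketch of surveillance element 3: an infection rate with an
# appropriate denominator (per 1,000 device-days) plus a simple u-chart
# upper control limit. Monthly counts are invented for illustration; real
# programs also risk-adjust and validate their data.
from math import sqrt

# (month, infections, central_line_days): hypothetical CLABSI surveillance
months = [("Jan", 2, 950), ("Feb", 1, 1010), ("Mar", 12, 980), ("Apr", 2, 940)]

u_bar = sum(i for _, i, _ in months) / sum(d for _, _, d in months)

for month, infections, line_days in months:
    rate = 1000 * infections / line_days                # per 1,000 line-days
    ucl = 1000 * (u_bar + 3 * sqrt(u_bar / line_days))  # 3-sigma limit
    flag = "  <-- investigate" if rate > ucl else ""
    print(f"{month}: {rate:5.2f} per 1,000 line-days (UCL {ucl:.2f}){flag}")
```

In this invented series, only March exceeds its control limit and would trigger the systematic epidemiologic investigation described above.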
Surveillance for certain epidemiologically important organisms, however, may need to be facility-wide. Surveillance methods will continue to evolve as healthcare delivery systems change 392,677 and user-friendly electronic tools become more widely available for electronic tracking and trend analysis 674,678,679 . Individuals with experience in healthcare epidemiology and infection control should be involved in selecting software packages for data aggregation and analysis to ensure that the need for efficient and accurate HAI surveillance will be met. Effective surveillance is increasingly important as legislation requiring public reporting of HAI rates is passed and states work to develop effective systems to support such legislation 680 .

# II.C. Education of HCWs, patients, and families

Education and training of healthcare personnel are prerequisites for ensuring that policies and procedures for Standard and Transmission-Based Precautions are understood and practiced. Understanding the scientific rationale for the precautions will allow HCWs to apply procedures correctly, as well as safely modify precautions based on changing requirements, resources, or healthcare settings 14,655 . In one study, the likelihood of HCWs developing SARS was strongly associated with less than 2 hours of infection control training and lack of understanding of infection control procedures 689 . Education about the important role of vaccines (e.g., influenza, measles, varicella, pertussis, pneumococcal) in protecting healthcare personnel, their patients, and family members can help improve vaccination rates.

Patients, family members, and visitors can be partners in preventing transmission of infections in healthcare settings 9,42 . Information about Standard Precautions, especially hand hygiene, Respiratory Hygiene/Cough Etiquette, vaccination (especially against influenza), and other routine infection prevention strategies may be incorporated into patient information materials that are provided upon admission to the healthcare facility. Additional information about Transmission-Based Precautions is best provided at the time they are initiated. Fact sheets, pamphlets, and other printed material may include information on the rationale for the additional precautions, risks to household members, room assignment for Transmission-Based Precautions purposes, explanation about the use of personal protective equipment by HCWs, and directions for use of such equipment by family members and visitors. Such information may be particularly helpful in the home environment, where household members often have primary responsibility for adherence to recommended infection control practices. Healthcare personnel must be available and prepared to explain this material and answer questions as needed.

# II.D. Hand hygiene

Hand hygiene has been cited frequently as the single most important practice to reduce the transmission of infectious agents in healthcare settings 559,712,713 and is an essential element of Standard Precautions. The term "hand hygiene" includes both handwashing with either plain or antiseptic-containing soap and water, and use of alcohol-based products (gels, rinses, foams) that do not require the use of water. In the absence of visible soiling of hands, approved alcohol-based products for hand disinfection are preferred over antimicrobial or plain soap and water because of their superior microbicidal activity, reduced drying of the skin, and convenience 559 .
Improved hand hygiene practices have been associated with a sustained decrease in the incidence of MRSA and VRE infections, primarily in the ICU 561,562 . The scientific rationale, indications, methods, and products for hand hygiene are summarized in other publications 559,717 . The effectiveness of hand hygiene can be reduced by the type and length of fingernails 559,718,719 . Individuals wearing artificial nails have been shown to harbor more pathogenic organisms, especially gram-negative bacilli and yeasts, on the nails and in the subungual area than those with native nails 720,721 . In 2002, CDC/HICPAC recommended (Category IA) that artificial fingernails and extenders not be worn by healthcare personnel who have contact with high-risk patients (e.g., those in ICUs, ORs) due to the association with outbreaks of gram-negative bacillus and candidal infections, as confirmed by molecular typing of isolates 30,31,559 . The need to restrict the wearing of artificial fingernails by all healthcare personnel who provide direct patient care or by healthcare personnel who have contact with other high-risk groups (e.g., oncology, cystic fibrosis patients) has not been studied, but has been recommended by some experts 20 . At this time such decisions are at the discretion of an individual facility's infection control program. There is less evidence that jewelry affects the quality of hand hygiene. Although hand contamination with potential pathogens is increased with ring-wearing 559,726 , no studies have related this practice to HCW-to-patient transmission of pathogens.

# II.E. Personal protective equipment (PPE) for healthcare personnel

PPE refers to a variety of barriers and respirators used alone or in combination to protect mucous membranes, airways, skin, and clothing from contact with infectious agents. The selection of PPE is based on the nature of the patient interaction and/or the likely mode(s) of transmission. Guidance on the use of PPE is discussed in Part III. A suggested procedure for donning and removing PPE that will prevent skin or clothing contamination is presented in the Figure. Designated containers for used disposable or reusable PPE should be placed in a location that is convenient to the site of removal to facilitate disposal and containment of contaminated materials. Hand hygiene is always the final step after removing and disposing of PPE. The following sections highlight the primary uses and methods for selecting this equipment.

# II.E.1. Gloves

Gloves are used to prevent contamination of healthcare personnel hands when 1) anticipating direct contact with blood or body fluids, mucous membranes, nonintact skin, and other potentially infectious material; 2) having direct contact with patients who are colonized or infected with pathogens transmitted by the contact route (e.g., VRE, MRSA, RSV) 559,727,728 ; or 3) handling or touching visibly or potentially contaminated patient care equipment and environmental surfaces 72,73,559 . Gloves can protect both patients and healthcare personnel from exposure to infectious material that may be carried on hands 73 . The extent to which gloves will protect healthcare personnel from transmission of bloodborne pathogens (e.g., HIV, HBV, HCV) following a needlestick or other puncture that penetrates the glove barrier has not been determined.
Although gloves may reduce the volume of blood on the external surface of a sharp by 46-86% 729 , the residual blood in the lumen of a hollow-bore needle would not be affected; therefore, the effect on transmission risk is unknown. Gloves manufactured for healthcare purposes are subject to FDA evaluation and clearance 730 . Nonsterile disposable medical gloves made of a variety of materials (e.g., latex, vinyl, nitrile) are available for routine patient care 731 . The selection of glove type for non-surgical use is based on a number of factors, including the task that is to be performed, anticipated contact with chemicals and chemotherapeutic agents, latex sensitivity, sizing, and facility policies for creating a latex-free environment 17 . For contact with blood and body fluids during non-surgical patient care, a single pair of gloves generally provides adequate barrier protection 734 . However, there is considerable variability among gloves; both the quality of the manufacturing process and the type of material influence their barrier effectiveness 735 . While there is little difference in the barrier properties of unused intact gloves 736 , studies have shown repeatedly that vinyl gloves have higher failure rates than latex or nitrile gloves when tested under simulated and actual clinical conditions 731 . For this reason, either latex or nitrile gloves are preferable for clinical procedures that require manual dexterity and/or will involve more than brief patient contact. It may be necessary to stock gloves in several sizes. Heavier, reusable utility gloves are indicated for non-patient care activities, such as handling or cleaning contaminated equipment or surfaces 11,14 .

During patient care, transmission of infectious organisms can be reduced by adhering to the principle of working from "clean" to "dirty" and by confining or limiting contamination to surfaces that are directly needed for patient care. It may be necessary to change gloves during the care of a single patient to prevent cross-contamination of body sites 559,740 . It also may be necessary to change gloves if the patient interaction involves touching portable computer keyboards or other mobile equipment that is transported from room to room. Discarding gloves between patients is necessary to prevent transmission of infectious material. Gloves must not be washed for subsequent reuse, because microorganisms cannot be removed reliably from glove surfaces and continued glove integrity cannot be ensured. Furthermore, glove reuse has been associated with transmission of MRSA and gram-negative bacilli. When gloves are worn in combination with other PPE, they are put on last. Gloves that fit snugly around the wrist are preferred for use with an isolation gown because they will cover the gown cuff and provide a more reliable continuous barrier for the arms, wrists, and hands. Gloves that are removed properly will prevent hand contamination (Figure). Hand hygiene following glove removal further ensures that the hands will not carry potentially infectious material that might have penetrated through unrecognized tears or that could contaminate the hands during glove removal 559,728,741 .

# II.E.2. Isolation gowns

Isolation gowns are used as specified by Standard and Transmission-Based Precautions to protect the HCW's arms and exposed body areas and to prevent contamination of clothing with blood, body fluids, and other potentially infectious material 24,88,262 .
The need for, and type of, isolation gown selected is based on the nature of the patient interaction, including the anticipated degree of contact with infectious material and the potential for blood and body fluid penetration of the barrier. The wearing of isolation gowns and other protective apparel is mandated by the OSHA Bloodborne Pathogens Standard 739 . Clinical and laboratory coats or jackets worn over personal clothing for comfort and/or purposes of identity are not considered PPE. When applying Standard Precautions, an isolation gown is worn only if contact with blood or body fluid is anticipated. However, when Contact Precautions are used (i.e., to prevent transmission of an infectious agent that is not interrupted by Standard Precautions alone and that is associated with environmental contamination), donning of both gown and gloves upon room entry is indicated to address unintentional contact with contaminated environmental surfaces 54,72,73,88 . The routine donning of isolation gowns upon entry into an intensive care unit or other high-risk area does not prevent or influence potential colonization or infection of patients in those areas 365 .

Isolation gowns are always worn in combination with gloves, and with other PPE when indicated. Gowns are usually the first piece of PPE to be donned. Full coverage of the arms and body front, from neck to the mid-thigh or below, will ensure that clothing and exposed upper body areas are protected. Several gown sizes should be available in a healthcare facility to ensure appropriate coverage for staff members. Isolation gowns should be removed before leaving the patient care area to prevent possible contamination of the environment outside the patient's room. Isolation gowns should be removed in a manner that prevents contamination of clothing or skin (Figure). The outer, "contaminated" side of the gown is turned inward and rolled into a bundle, and then discarded into a designated container for waste or linen to contain contamination.

# II.E.3. Face protection: masks, goggles, face shields

# II.E.3.a. Masks

Masks are used for three primary purposes in healthcare settings: 1) placed on healthcare personnel to protect them from contact with infectious material from patients (e.g., respiratory secretions and sprays of blood or body fluids), consistent with Standard Precautions and Droplet Precautions; 2) placed on healthcare personnel when engaged in procedures requiring sterile technique, to protect patients from exposure to infectious agents carried in a healthcare worker's mouth or nose; and 3) placed on coughing patients to limit potential dissemination of infectious respiratory secretions from the patient to others (i.e., Respiratory Hygiene/Cough Etiquette). Masks may be used in combination with goggles to protect the mouth, nose, and eyes, or a face shield may be used instead of a mask and goggles to provide more complete protection for the face, as discussed below. Masks should not be confused with particulate respirators, which are used to prevent inhalation of small particles that may contain infectious agents transmitted via the airborne route, as described below.

The mucous membranes of the mouth, nose, and eyes are susceptible portals of entry for infectious agents, as are other skin surfaces if skin integrity is compromised (e.g., by acne, dermatitis) 66 . Therefore, use of PPE to protect these body sites is an important component of Standard Precautions.
The protective effect of masks for exposed healthcare personnel has been demonstrated 93,113,755,756 . Procedures that generate splashes or sprays of blood, body fluids, secretions, or excretions (e.g., endotracheal suctioning, bronchoscopy, invasive vascular procedures) require either a face shield (disposable or reusable) or a mask and goggles 93-96,113,115,262,739,757 . The wearing of masks, eye protection, and face shields in specified circumstances when blood or body fluid exposures are likely to occur is mandated by the OSHA Bloodborne Pathogens Standard 739 . Appropriate PPE should be selected based on the anticipated level of exposure.

Two mask types are available for use in healthcare settings: surgical masks, which are cleared by the FDA and required to have fluid-resistant properties, and procedure or isolation masks 758 . No studies have been published that compare mask types to determine whether one mask type provides better protection than another. Since procedure/isolation masks are not regulated by the FDA, there may be more variability in quality and performance than with surgical masks. Masks come in various shapes (e.g., molded and non-molded), sizes, filtration efficiencies, and methods of attachment (e.g., ties, elastic, ear loops). Healthcare facilities may find that different types of masks are needed to meet individual healthcare personnel needs.

# II.E.3.b. Goggles, face shields

Guidance on eye protection for infection control has been published 759 . The eye protection chosen for specific work situations (e.g., goggles or face shield) depends upon the circumstances of exposure, other PPE used, and personal vision needs. Personal eyeglasses and contact lenses are NOT considered adequate eye protection (www.cdc.gov/niosh/topics/eye/eye-infectious.html). NIOSH states that eye protection must be comfortable, allow for sufficient peripheral vision, and be adjustable to ensure a secure fit. It may be necessary to provide several different types, styles, and sizes of protective equipment. Indirectly vented goggles with a manufacturer's anti-fog coating may provide the most reliable practical eye protection from splashes, sprays, and respiratory droplets from multiple angles. Newer styles of goggles may provide better indirect airflow properties to reduce fogging, as well as better peripheral vision and more size options for fitting goggles to different workers. Many styles of goggles fit adequately over prescription glasses with minimal gaps. While effective as eye protection, goggles do not provide splash or spray protection to other parts of the face. The role of goggles, in addition to a mask, in preventing exposure to infectious agents transmitted via respiratory droplets has been studied only for RSV. Reports published in the mid-1980s demonstrated that eye protection reduced occupational transmission of RSV 760,761 . Whether this was due to preventing hand-eye contact or respiratory droplet-eye contact has not been determined. However, subsequent studies demonstrated that RSV transmission is effectively prevented by adherence to Standard plus Contact Precautions and that routine use of goggles is not necessary for this virus 24,116,117,684,762 .
It is important to remind healthcare personnel that even if Droplet Precautions are not recommended for a specific respiratory tract pathogen, protection for the eyes, nose, and mouth, by using a mask and goggles or a face shield alone, is necessary when it is likely that there will be a splash or spray of any respiratory secretions or other body fluids, as defined in Standard Precautions. Disposable or non-disposable face shields may be used as an alternative to goggles 759 . As compared with goggles, a face shield can provide protection to other facial areas in addition to the eyes. Face shields extending from chin to crown provide better face and eye protection from splashes and sprays; face shields that wrap around the sides may reduce splashes around the edge of the shield. Removal of a face shield, goggles, and mask can be performed safely after gloves have been removed and hand hygiene performed. The ties, ear pieces, and/or headband used to secure the equipment to the head are considered "clean" and therefore safe to touch with bare hands. The front of a mask, goggles, and face shield are considered contaminated (Figure).

# II.E.4. Respiratory protection

The subject of respiratory protection as it applies to preventing transmission of airborne infectious agents, including the need for and frequency of fit-testing, is under scientific review and was the subject of a CDC workshop in 2004 763 . Respiratory protection currently requires the use of a respirator with N95 or higher filtration to prevent inhalation of infectious particles. Information about respirators and respiratory protection programs is summarized in the Guideline for Preventing Transmission of Mycobacterium tuberculosis in Health-care Settings, 2005 (CDC. MMWR 2005;54:RR-17 12 ). Respiratory protection is broadly regulated by OSHA under the general industry standard for respiratory protection (29 CFR 1910.134) 764 , which requires that U.S. employers in all employment settings implement a program to protect employees from inhalation of toxic materials. OSHA program components include medical clearance to wear a respirator; provision and use of appropriate respirators, including fit-tested NIOSH-certified N95 and higher particulate filtering respirators; education on respirator use; and periodic re-evaluation of the respiratory protection program. When selecting particulate respirators, models with inherently good fit characteristics (i.e., those expected to provide protection factors of 10 or more to 95% of wearers) are preferred and could theoretically relieve the need for fit testing 765,766 . Issues pertaining to respiratory protection remain the subject of ongoing debate. Information on various types of respirators may be found at www.cdc.gov/niosh/npptl/respirators/respsars.html and in published studies 765,767,768 . A user-seal check (formerly called a "fit check") should be performed by the wearer of a respirator each time a respirator is donned, to minimize air leakage around the facepiece 769 . The optimal frequency of fit-testing has not been determined; re-testing may be indicated if there is a change in facial features of the wearer, onset of a medical condition that would affect respiratory function in the wearer, or a change in the model or size of the initially assigned respirator 12 . Respiratory protection was first recommended in 1989 to prevent exposure of U.S. healthcare personnel to M. tuberculosis.
That recommendation has been maintained in two successive revisions of the Guidelines for Prevention of Transmission of Tuberculosis in Hospitals and other Healthcare Settings 12,126 . The incremental benefit of respirator use, in addition to administrative and engineering controls (i.e., AIIRs, early recognition of patients likely to have tuberculosis and prompt placement in an AIIR, and maintenance of a patient with suspected tuberculosis in an AIIR until no longer infectious), for preventing transmission of airborne infectious agents (e.g., M. tuberculosis) is undetermined. Although some studies have demonstrated effective prevention of M. tuberculosis transmission in hospitals where surgical masks, instead of respirators, were used in conjunction with other administrative and engineering controls 637,770,771 , CDC currently recommends N95 or higher-level respirators for personnel exposed to patients with suspected or confirmed tuberculosis. Currently this is also true for other diseases that could be transmitted through the airborne route, including SARS 262 and smallpox 108,129,772 , until inhalational transmission is better defined or healthcare-specific protective equipment more suitable for preventing infection is developed. Respirators are also currently recommended to be worn during the performance of aerosol-generating procedures (e.g., intubation, bronchoscopy, suctioning) on patients with SARS-CoV infection, avian influenza, or pandemic influenza (See Appendix A).

Although Airborne Precautions are recommended for preventing airborne transmission of measles and varicella-zoster viruses, there are no data upon which to base a recommendation for respiratory protection to protect susceptible personnel against these two infections; transmission of varicella-zoster virus has been prevented among pediatric patients using negative pressure isolation alone 773 . Whether respiratory protection (i.e., wearing a particulate respirator) would enhance protection from these viruses has not been studied. Since the majority of healthcare personnel have natural or acquired immunity to these viruses, only immune personnel generally care for patients with these infections. Although there is no evidence to suggest that masks are not adequate to protect healthcare personnel in these settings, for purposes of consistency and simplicity, or because of difficulties in ascertaining immunity, some facilities may require the use of respirators for entry into all AIIRs, regardless of the specific infectious agent. Procedures for safe removal of respirators are provided (Figure). In some healthcare settings, particulate respirators used to provide care for patients with M. tuberculosis are reused by the same HCW. This is an acceptable practice provided that the respirator is not damaged or soiled, the fit is not compromised by a change in shape, and the respirator has not been contaminated with blood or body fluids. There are no data on which to base a recommendation for the length of time a respirator may be reused.

# II.F. Safe work practices to prevent HCW exposure to bloodborne pathogens

# II.F.1. Prevention of needlesticks and other sharps-related injuries

Injuries due to needles and other sharps have been associated with transmission of HBV, HCV, and HIV to healthcare personnel 778,779 . The prevention of sharps injuries has always been an essential element of Universal and now Standard Precautions 1,780 .
These include measures to handle needles and other sharp devices in a manner that will prevent injury to the user and to others who may encounter the device during or after a procedure. These measures apply to routine patient care and do not address the prevention of sharps injuries and other blood exposures during surgical and other invasive procedures, which are addressed elsewhere. Since 1991, when OSHA first issued its Bloodborne Pathogens Standard to protect healthcare personnel from blood exposure, the focus of regulatory and legislative activity has been on implementing a hierarchy of control measures. This has included focusing attention on removing sharps hazards through the development and use of engineering controls. The federal Needlestick Safety and Prevention Act, signed into law in November 2000, authorized OSHA's revision of its Bloodborne Pathogens Standard to more explicitly require the use of safety-engineered sharp devices 786 . CDC has provided guidance on sharps injury prevention 787,788 , including guidance for the design, implementation, and evaluation of a comprehensive sharps injury prevention program 789 .

# II.F.2. Prevention of mucous membrane contact

Exposure of the mucous membranes of the eyes, nose, and mouth to blood and body fluids has been associated with the transmission of bloodborne viruses and other infectious agents to healthcare personnel 66,752,754,779 . The prevention of mucous membrane exposures has always been an element of Universal and now Standard Precautions for routine patient care 1,753 and is subject to OSHA bloodborne pathogen regulations. Safe work practices, in addition to wearing PPE, are used to protect mucous membranes and non-intact skin from contact with potentially infectious material. These include keeping gloved and ungloved hands that are contaminated from touching the mouth, nose, eyes, or face, and positioning patients to direct sprays and splatter away from the face of the caregiver. Careful placement of PPE before patient contact will help avoid the need to make PPE adjustments, and thus possible face or mucous membrane contamination, during use. In areas where the need for resuscitation is unpredictable, mouthpieces, pocket resuscitation masks with one-way valves, and other ventilation devices provide an alternative to mouth-to-mouth resuscitation, preventing exposure of the caregiver's nose and mouth to oral and respiratory fluids during the procedure.

# II.F.2.a. Precautions during aerosol-generating procedures

The performance of procedures that can generate small-particle aerosols (aerosol-generating procedures), such as bronchoscopy, endotracheal intubation, and open suctioning of the respiratory tract, has been associated with transmission of infectious agents to healthcare personnel, including M. tuberculosis 790 , SARS-CoV 93,94,98 , and N. meningitidis 95 . Protection of the eyes, nose, and mouth, in addition to gown and gloves, is recommended during performance of these procedures in accordance with Standard Precautions. Use of a particulate respirator is recommended during aerosol-generating procedures when the aerosol is likely to contain M. tuberculosis, SARS-CoV, or avian or pandemic influenza viruses.

# II.G. Patient placement

# II.G.1. Hospitals and long-term care settings

Options for patient placement include single-patient rooms, two-patient rooms, and multi-bed wards. Of these, single-patient rooms are preferred when there is a concern about transmission of an infectious agent.
Although some studies have failed to demonstrate the efficacy of single-patient rooms to prevent HAIs 791 , other published studies, including one commissioned by the American Institute of Architects and the Facility Guidelines Institute, have documented a beneficial relationship between private rooms and reduction in infectious and noninfectious adverse patient outcomes 792,793 . The AIA notes that private rooms are the trend in hospital planning and design. However, most hospitals and long-term care facilities have multi-bed rooms and must consider many competing priorities when determining the appropriate room placement for patients (e.g., reason for admission; patient characteristics, such as age, gender, and mental status; staffing needs; family requests; psychosocial factors; reimbursement concerns). In the absence of obvious infectious diseases that require specified airborne infection isolation rooms (e.g., tuberculosis, SARS, chickenpox), the risk of transmission of infectious agents is not always considered when making placement decisions. When there are only a limited number of single-patient rooms, it is prudent to prioritize them for those patients who have conditions that facilitate transmission of infectious material to other patients (e.g., draining wounds, stool incontinence, uncontained secretions) and for those who are at increased risk of acquisition and adverse outcomes resulting from HAI (e.g., immunosuppression, open wounds, indwelling catheters, anticipated prolonged length of stay, total dependence on HCWs for activities of daily living) 15,24,43,430,794,795 . Single-patient rooms are always indicated for patients placed on Airborne Precautions and in a Protective Environment, and are preferred for patients who require Contact or Droplet Precautions 23,24,410,435,796,797 .

During a suspected or proven outbreak caused by a pathogen whose reservoir is the gastrointestinal tract, use of single-patient rooms with private bathrooms limits opportunities for transmission, especially when the colonized or infected patient has poor personal hygiene habits or fecal incontinence, or cannot be expected to assist in maintaining procedures that prevent transmission of microorganisms (e.g., infants, children, and patients with altered mental status or developmental delay). In the absence of continued transmission, it is not necessary to provide a private bathroom for patients colonized or infected with enteric pathogens as long as personal hygiene practices and Standard Precautions, especially hand hygiene and appropriate environmental cleaning, are maintained. Assignment of a dedicated commode to a patient, and cleaning and disinfecting fixtures and equipment that may have fecal contamination (e.g., bathrooms, commodes 798 , scales used for weighing diapers) and the adjacent surfaces with appropriate agents, may be especially important when a single-patient room cannot be used, since environmental contamination with intestinal tract pathogens is likely from both continent and incontinent patients 54,799 . Results of several studies to determine the benefit of a single-patient room to prevent transmission of Clostridium difficile are inconclusive 167 . Some studies have shown that being in the same room with a colonized or infected patient is not necessarily a risk factor for transmission 791 . However, for children, the risk of healthcare-associated diarrhea is increased with the increased number of patients per room 806 .
Thus, patient factors are important determinants of infection transmission risks, and the need for a single-patient room and/or private bathroom for any patient is best determined on a case-by-case basis. Cohorting is the practice of grouping together patients who are colonized or infected with the same organism to confine their care to one area and prevent contact with other patients. Cohorts are created based on clinical diagnosis, microbiologic confirmation when available, epidemiology, and mode of transmission of the infectious agent. It is generally preferred not to place severely immunosuppressed patients in rooms with other patients. Cohorting has been used extensively for managing outbreaks of MDROs, including MRSA 22,807-811 ; RSV 812,813 ; adenovirus keratoconjunctivitis 814 ; rotavirus 815 ; and SARS 816 . Modeling studies provide additional support for cohorting patients to control outbreaks. However, cohorting often is implemented only after routine infection control measures have failed to control an outbreak.

Assigning or cohorting healthcare personnel to care only for patients infected or colonized with a single target pathogen limits further transmission of the target pathogen to uninfected patients 740,819 , but this is difficult to achieve in the face of current staffing shortages in hospitals 583 and residential healthcare sites. However, when continued transmission is occurring after implementing routine infection control measures and creating patient cohorts, cohorting of healthcare personnel may be beneficial. During the seasons when RSV, human metapneumovirus 823 , parainfluenza, influenza, other respiratory viruses 824 , and rotavirus are circulating in the community, cohorting based on the presenting clinical syndrome is often a priority in facilities that care for infants and young children 825 . For example, during the respiratory virus season, infants may be cohorted based solely on the clinical diagnosis of bronchiolitis, due to the logistical difficulties and costs associated with requiring microbiologic confirmation prior to room placement and the predominance of RSV during most of the season. However, when available, single-patient rooms are always preferred, since a common clinical presentation (e.g., bronchiolitis) can be caused by more than one infectious agent 823,824,826 . Furthermore, the inability of infants and children to contain body fluids, and the close physical contact that occurs during their care, increase infection transmission risks for patients and personnel in this setting 24,795 .

# II.G.2. Ambulatory settings

Patients actively infected with or incubating transmissible infectious diseases are seen frequently in ambulatory settings (e.g., outpatient clinics, physicians' offices, emergency departments) and potentially expose healthcare personnel and other patients, family members, and visitors 21,34,127,135,142,827 . In response to the global outbreak of SARS in 2003 and in preparation for pandemic influenza, healthcare providers working in outpatient settings are urged to implement source containment measures (e.g., asking coughing patients to wear a surgical mask or cover their coughs with tissues) to prevent transmission of respiratory infections, beginning at the point of initial patient encounter 9,262,828 , as described below in section III.A.1.a.
Signs can be posted at the entrance to facilities or at the reception or registration desk requesting that the patient or individuals accompanying the patient promptly inform the receptionist if there are symptoms of a respiratory infection (e.g., cough, flu-like illness, increased production of respiratory secretions). The presence of diarrhea, skin rash, or known or suspected exposure to a transmissible disease (e.g., measles, pertussis, chickenpox, tuberculosis) also could be added. Placement of potentially infectious patients without delay in an examination room limits the number of exposed individuals in, for example, the common waiting area. In waiting areas, maintaining a distance between symptomatic and non-symptomatic patients (e.g., >3 feet), in addition to source control measures, may limit exposures. However, infections transmitted via the airborne route (e.g., M. tuberculosis, measles, chickenpox) require additional precautions 12,125,829 . Patients suspected of having such an infection can wear a surgical mask for source containment, if tolerated, and should be placed in an examination room, preferably an AIIR, as soon as possible. If this is not possible, having the patient wear a mask and segregate him/herself from other patients in the waiting area will reduce opportunities to expose others. Since the person(s) accompanying the patient also may be infectious, application of the same infection control precautions may need to be extended to these persons if they are symptomatic 21,252,830 . For example, family members accompanying children admitted with suspected M. tuberculosis have been found to have unsuspected pulmonary tuberculosis with cavitary lesions, even when asymptomatic 42,831 .

Patients with underlying conditions that increase their susceptibility to infection (e.g., those who are immunocompromised 43,44 or have cystic fibrosis 20 ) require special efforts to protect them from exposures to infected patients in common waiting areas. If such patients inform the receptionist of their infection risk upon arrival, appropriate steps may be taken to further protect them from infection. In some cystic fibrosis clinics, in order to avoid exposure to other patients who could be colonized with B. cepacia, patients have been given beepers upon registration so that they may leave the area and receive notification to return when an examination room becomes available 832 .

# II.G.3. Home care

In home care, patient placement concerns focus on protecting others in the home from exposure to an infectious household member. For individuals who are especially vulnerable to adverse outcomes associated with certain infections, it may be beneficial to either remove them from the home or segregate them within the home. Persons who are not part of the household may need to be prohibited from visiting during the period of infectivity. For example, if a patient with pulmonary tuberculosis is contagious and being cared for at home, very young children (<4 years of age) 833 and immunocompromised persons who have not yet been infected should be removed or excluded from the household. During the SARS outbreak of 2003, segregation of infected persons during the communicable phase of the illness was beneficial in preventing household transmission 249,834 .

# II.H. Transport of patients

Several principles are used to guide the transport of patients requiring Transmission-Based Precautions.
In the inpatient and residential settings these include 1) limiting transport of such patients to essential purposes, such as diagnostic and therapeutic procedures that cannot be performed in the patient's room; 2) when transport is necessary, using appropriate barriers on the patient (e.g., mask, gown, wrapping in sheets, or use of impervious dressings to cover the affected area(s) when infectious skin lesions or drainage are present), consistent with the route and risk of transmission; 3) notifying healthcare personnel in the receiving area of the impending arrival of the patient and of the precautions necessary to prevent transmission; and 4) for patients being transported outside the facility, informing the receiving facility and the medi-van or emergency vehicle personnel in advance about the type of Transmission-Based Precautions being used. For tuberculosis, additional precautions may be needed in a small shared air space such as in an ambulance 12 .

# II.I. Environmental measures

Cleaning and disinfecting non-critical surfaces in patient-care areas are part of Standard Precautions. In general, these procedures do not need to be changed for patients on Transmission-Based Precautions. The cleaning and disinfection of all patient-care areas is important for frequently touched surfaces, especially those closest to the patient, that are most likely to be contaminated (e.g., bedrails, bedside tables, commodes, doorknobs, sinks, surfaces and equipment in close proximity to the patient) 11,72,73,835 . The frequency or intensity of cleaning may need to change based on the patient's level of hygiene and the degree of environmental contamination, and for certain infectious agents whose reservoir is the intestinal tract 54 . This may be especially true in LTCFs and pediatric facilities, where patients with stool and urine incontinence are encountered more frequently. Also, increased frequency of cleaning may be needed in a Protective Environment to minimize dust accumulation 11 . Special recommendations for cleaning and disinfecting environmental surfaces in dialysis centers have been published 18 . In all healthcare settings, administrative, staffing, and scheduling activities should prioritize the proper cleaning and disinfection of surfaces that could be implicated in transmission. During a suspected or proven outbreak where an environmental reservoir is suspected, routine cleaning procedures should be reviewed, and the need for additional trained cleaning staff should be assessed. Adherence should be monitored and reinforced to ensure that consistent and correct cleaning is performed.

EPA-registered disinfectants or detergents/disinfectants that best meet the overall needs of the healthcare facility for routine cleaning and disinfection should be selected 11,836 . In general, use of the existing facility detergent/disinfectant according to the manufacturer's recommendations for amount, dilution, and contact time is sufficient to remove pathogens from surfaces of rooms where colonized or infected individuals were housed. This includes those pathogens that are resistant to multiple classes of antimicrobial agents (e.g., C. difficile, VRE, MRSA, MDR-GNB) 11,24,88,435,746,796,837 . Most often, environmental reservoirs of pathogens during outbreaks are related to a failure to follow recommended procedures for cleaning and disinfection rather than to the specific cleaning and disinfectant agents used. Certain pathogens (e.g., rotavirus, noroviruses, C. difficile) may be resistant to some routinely used hospital disinfectants 275,292 . The role of specific disinfectants in limiting transmission of rotavirus has been demonstrated experimentally 842 . Also, since C. difficile may display increased levels of spore production when exposed to non-chlorine-based cleaning agents, and the spores are more resistant than vegetative cells to commonly used surface disinfectants, some investigators have recommended the use of a 1:10 dilution of 5.25% sodium hypochlorite (household bleach) and water for routine environmental disinfection of rooms of patients with C. difficile when there is continued transmission 844,848 . In one study, the use of a hypochlorite solution was associated with a decrease in rates of C. difficile infections 847 .
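The dilution arithmetic above can be made explicit. The following is a minimal sketch, assuming the common reading of "1:10" as one part bleach plus nine parts water; the helper function is illustrative, and product labels and facility policy govern actual preparation and contact times.

```python
# A worked version of the dilution arithmetic above, assuming the common
# reading of "1:10" as one part bleach plus nine parts water. The helper
# is illustrative; product labels and facility policy govern preparation
# and contact times.
def available_chlorine_ppm(stock_percent: float, parts_bleach: int, parts_water: int) -> float:
    """Available chlorine of the final mixture, in parts per million."""
    fraction = parts_bleach / (parts_bleach + parts_water)
    return stock_percent / 100 * fraction * 1_000_000

# 5.25% stock diluted 1:10 -> ~0.525%, i.e., about 5,250 ppm
print(f"{available_chlorine_ppm(5.25, 1, 9):,.0f} ppm available chlorine")
```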
The need to change disinfectants based on the presence of these organisms can be determined in consultation with the infection control committee 11,847,848 . Detailed recommendations for disinfection and sterilization of surfaces and medical equipment that have been in contact with prion-containing tissue or high-risk body fluids, and for cleaning of blood and body substance spills, are available in the Guidelines for Environmental Infection Control in Health-Care Facilities 11 and in the Guideline for Disinfection and Sterilization 848 .

# II.J. Patient care equipment and instruments/devices

Medical equipment and instruments/devices must be cleaned and maintained according to the manufacturers' instructions to prevent patient-to-patient transmission of infectious agents 86,87,325,849 . Cleaning to remove organic material must always precede high-level disinfection and sterilization of critical and semi-critical instruments and devices, because residual proteinaceous material reduces the effectiveness of the disinfection and sterilization processes 836,848 . Noncritical equipment, such as commodes, intravenous pumps, and ventilators, must be thoroughly cleaned and disinfected before use on another patient. All such equipment and devices should be handled in a manner that will prevent HCW and environmental contact with potentially infectious material. It is important to include computers and personal digital assistants (PDAs) used in patient care in policies for cleaning and disinfection of non-critical items. The literature on contamination of computers with pathogens has been summarized 850 , and two reports have linked computer contamination to colonization and infections in patients 851,852 . Although keyboard covers and washable keyboards that can be easily disinfected are in use, the infection control benefit of those items and their optimal management have not been determined.

In all healthcare settings, providing patients who are on Transmission-Based Precautions with dedicated noncritical medical equipment (e.g., stethoscope, blood pressure cuff, electronic thermometer) has been beneficial for preventing transmission 74,89,740,853,854 . When this is not possible, disinfection after use is recommended. Consult other guidelines for detailed guidance in developing specific protocols for cleaning and reprocessing medical equipment and patient care items in both routine and special circumstances 11,14,18,20,740,836,848 . In home care, it is preferable to remove visible blood or body fluids from durable medical equipment before it leaves the home. Equipment can be cleaned on-site using a detergent/disinfectant and, when possible, should be placed in a single plastic bag for transport to the reprocessing location 20,739 .
# II.K. Textiles and laundry

Soiled textiles, including bedding, towels, and patient or resident clothing, may be contaminated with pathogenic microorganisms. However, the risk of disease transmission is negligible if they are handled, transported, and laundered in a safe manner 11,855,856 . Key principles for handling soiled laundry are 1) not shaking the items or handling them in any way that may aerosolize infectious agents; 2) avoiding contact of one's body and personal clothing with the soiled items being handled; and 3) containing soiled items in a laundry bag or designated bin. When laundry chutes are used, they must be maintained to minimize dispersion of aerosols from contaminated items 11 . The methods for handling, transporting, and laundering soiled textiles are determined by organizational policy and any applicable regulations 739 ; guidance is provided in the Guidelines for Environmental Infection Control 11 . Rather than rigid rules and regulations, hygienic and common-sense storage and processing of clean textiles is recommended 11,857 . When laundering occurs outside of a healthcare facility, the clean items must be packaged or completely covered and placed in an enclosed space during transport to prevent contamination with outside air or construction dust that could contain infectious fungal spores that are a risk for immunocompromised patients 11 . Institutions are required to launder garments used as personal protective equipment and uniforms visibly soiled with blood or infective material 739 . There are few data to determine the safety of home laundering of HCW uniforms, but no increase in infection rates was observed in the one published study 858 , and no pathogens were recovered from home- or hospital-laundered scrubs in another study 859 . In the home, textiles and laundry from patients with potentially transmissible infectious pathogens do not require special handling or separate laundering, and may be washed with warm water and detergent 11,858,859 .

# II.L. Solid waste

The management of solid waste emanating from the healthcare environment is subject to federal and state regulations for medical and non-medical waste 860,861 . No additional precautions are needed for non-medical solid waste that is being removed from rooms of patients on Transmission-Based Precautions. Solid waste may be contained in a single bag (as compared to using two bags) of sufficient strength 862 .

# II.M. Dishware and eating utensils

The combination of hot water and detergents used in dishwashers is sufficient to decontaminate dishware and eating utensils. Therefore, no special precautions are needed for dishware (e.g., dishes, glasses, cups) or eating utensils; reusable dishware and utensils may be used for patients requiring Transmission-Based Precautions. In the home and other communal settings, eating utensils and drinking vessels that are being used should not be shared, consistent with principles of good personal hygiene and for the purpose of preventing transmission of respiratory viruses, herpes simplex virus, and infectious agents that infect the gastrointestinal tract and are transmitted by the fecal/oral route (e.g., hepatitis A virus, noroviruses). If adequate resources for cleaning utensils and dishes are not available, disposable products may be used.

# II.N. Adjunctive measures
# II.N. Adjunctive measures
Important adjunctive measures that are not considered primary components of programs to prevent transmission of infectious agents, but that improve the effectiveness of such programs, include 1) antimicrobial management programs; 2) postexposure chemoprophylaxis with antiviral or antibacterial agents; 3) vaccines used for both pre- and postexposure prevention; and 4) screening and restricting visitors with signs of transmissible infections. Detailed discussion of judicious use of antimicrobial agents is beyond the scope of this document; however, the topic is addressed in the MDRO section (Management of Multidrug-Resistant Organisms in Healthcare Settings 2006. www.cdc.gov/ncidod/dhqp/pdf/ar/mdroGuideline2006.pdf).
# II.N.1. Chemoprophylaxis
Antimicrobial agents and topical antiseptics may be used to prevent infection and potential outbreaks of selected agents. Infections for which postexposure chemoprophylaxis is recommended under defined conditions include B. pertussis 17,863 , N. meningitidis 864 , B. anthracis after environmental exposure to aerosolizable material 865 , influenza virus 611 , HIV 866 , and group A streptococcus 160 . Orally administered antimicrobials may also be used under defined circumstances for MRSA decolonization of patients or healthcare personnel 867 . Another form of chemoprophylaxis is the use of topical antiseptic agents. For example, triple dye is used routinely on the umbilical cords of term newborns to reduce the risk of colonization, skin infections, and omphalitis caused by S. aureus, including MRSA, and group A streptococcus 868,869 . Extension of the use of triple dye to low birth weight infants in the NICU was one component of a program that controlled one longstanding MRSA outbreak 22 . Topical antiseptics are also used for decolonization of healthcare personnel or selected patients colonized with MRSA, using mupirocin, as discussed in the MDRO guideline 867,870-873 .
# II.N.2. Immunoprophylaxis
Certain immunizations recommended for susceptible healthcare personnel have decreased the risk of infection and the potential for transmission in healthcare facilities 17,874 . The OSHA mandate that requires employers to offer hepatitis B vaccination to HCWs played a substantial role in the sharp decline in incidence of occupational HBV infection 778,875 . The use of varicella vaccine in healthcare personnel has decreased the need to place susceptible HCWs on administrative leave following exposure to patients with varicella 775 . Also, reports of healthcare-associated transmission of rubella in obstetrical clinics 33,876 and measles in acute care settings 34 demonstrate the importance of immunization of susceptible healthcare personnel against childhood diseases. Many states have requirements for HCW vaccination for measles and rubella in the absence of evidence of immunity. Annual influenza vaccine campaigns targeted to patients and healthcare personnel in LTCFs and acute-care settings have been instrumental in preventing or limiting institutional outbreaks, and increasing attention is being directed toward improving influenza vaccination rates in healthcare personnel 35,611,690,877-879 . Transmission of B. pertussis in healthcare facilities has been associated with large and costly outbreaks that include both healthcare personnel and patients 17,36,41,100,683,827,880,881 .
HCWs who have close contact with infants with pertussis are at particularly high risk because of waning immunity and, until 2005, the absence of a vaccine that could be used in adults. However, two acellular pertussis vaccines were licensed in the United States in 2005, one for use in persons aged 11-18 years and one for use in persons aged 10-64 years 882 . Provisional ACIP recommendations at the time of publication of this document include vaccination of adolescents and adults, especially those with contact with infants <12 months of age and healthcare personnel with direct patient contact 883,884 . Immunization of children and adults will help prevent the introduction of vaccine-preventable diseases into healthcare settings. The recommended immunization schedule for children is published annually in the January issues of the Morbidity and Mortality Weekly Report, with interim updates as needed 885,886 . An adult immunization schedule also is available for healthy adults and those with special immunization needs due to high-risk medical conditions 887 . Some vaccines are also used for postexposure prophylaxis of susceptible individuals, including varicella 888 , influenza 611 , hepatitis B 778 , and smallpox 225 vaccines 17,874 . In the future, administration of a newly developed S. aureus conjugate vaccine (still under investigation) to selected patients may provide a novel method of preventing healthcare-associated S. aureus, including MRSA, infections in high-risk groups (e.g., hemodialysis patients and candidates for selected surgical procedures) 889,890 . Immune globulin preparations also are used for postexposure prophylaxis of certain infectious agents under specified circumstances (e.g., varicella-zoster virus, hepatitis B virus, rabies, measles, and hepatitis A virus) 17,833,874 . The RSV monoclonal antibody preparation, palivizumab, may have contributed to controlling a nosocomial outbreak of RSV in one NICU, but there is insufficient evidence to support a routine recommendation for its use in this setting 891 .
# II.N.3. Management of visitors
# II.N.3.a. Visitors as sources of infection
Visitors have been identified as the source of several types of HAIs (e.g., pertussis 40,41 , M. tuberculosis 42,892 , influenza and other respiratory viruses 24,43,44,373 , and SARS 21 ). However, effective methods for visitor screening in healthcare settings have not been studied. Visitor screening is especially important during community outbreaks of infectious diseases and for high-risk patient units. Sibling visits are often encouraged in birthing centers, postpartum rooms, pediatric inpatient units, ICUs, and residential settings for children; in hospital settings, a child visitor should visit only his or her own sibling. Screening of visiting siblings and other children before they are allowed into clinical areas is necessary to prevent the introduction of childhood illnesses and common respiratory infections. Screening may be passive, through the use of signs to alert family members and visitors with signs and symptoms of communicable diseases not to enter clinical areas. More active screening may include the completion of a screening tool or questionnaire that elicits information related to recent exposures or current symptoms. That information is reviewed by the facility staff, and the visitor is either permitted to visit or is excluded 833 .
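The active-screening workflow described above lends itself to simple decision logic. The following sketch is purely hypothetical: the symptom list, exposure questions, and dispositions are illustrative placeholders, not a validated screening instrument, and any real tool should be developed and reviewed by the facility's infection control program:

```python
# Hypothetical visitor-screening helper. The question set and exclusion
# criteria below are illustrative only, not a validated instrument.

EXCLUDING_SYMPTOMS = {"fever", "cough", "rash", "vomiting", "diarrhea"}

def screen_visitor(symptoms: set[str],
                   exposed_to_communicable_disease: bool,
                   community_outbreak_in_progress: bool,
                   visiting_high_risk_unit: bool) -> str:
    """Return a disposition for a prospective visitor."""
    if symptoms & EXCLUDING_SYMPTOMS:
        # Symptomatic visitors are excluded until medically evaluated.
        return "exclude: refer for medical screening"
    if exposed_to_communicable_disease and visiting_high_risk_unit:
        # Recent exposure matters most on high-risk units (e.g., HSCT).
        return "exclude: defer visit pending review by facility staff"
    if community_outbreak_in_progress and visiting_high_risk_unit:
        # Tighter screening during community outbreaks.
        return "permit: with instructions (hand hygiene, cough etiquette)"
    return "permit"

# Example: an asymptomatic visitor with a recent pertussis exposure
# headed to a high-risk unit is deferred for review.
print(screen_visitor(set(), True, False, True))
```

In this sketch, the "exclude" dispositions correspond to referral for the medical screening, diagnosis, or treatment described in the next paragraph.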
Family and household members visiting pediatric patients with pertussis and tuberculosis may need to be screened for a history of exposure as well as for signs and symptoms of current infection. Potentially infectious visitors are excluded until they receive appropriate medical screening, diagnosis, or treatment. If exclusion is not considered to be in the best interest of the patient or family (i.e., primary family members of critically or terminally ill patients), then the symptomatic visitor must wear a mask while in the healthcare facility and remain in the patient's room, avoiding exposure to others, especially in public waiting areas and the cafeteria. Visitor screening is used consistently on HSCT units 15,43 . However, considering the experience during the 2003 SARS outbreaks and the potential for pandemic influenza, developing effective visitor screening systems will be beneficial 9 . Education concerning Respiratory Hygiene/Cough Etiquette is a useful adjunct to visitor screening.
# II.N.3.b. Use of barrier precautions by visitors
The use of gowns, gloves, or masks by visitors in healthcare settings has not been addressed specifically in the scientific literature. Some studies included the use of gowns and gloves by visitors in the control of MDROs, but did not perform a separate analysis to determine whether their use by visitors had a measurable impact. Family members or visitors who are providing care or having very close patient contact (e.g., feeding, holding) may have contact with other patients and could contribute to transmission if barrier precautions are not used correctly. Specific recommendations may vary by facility or by unit and should be determined by the level of interaction.
# Part III: Precautions to Prevent Transmission of Infectious Agents
There are two tiers of HICPAC/CDC precautions to prevent transmission of infectious agents, Standard Precautions and Transmission-Based Precautions. Standard Precautions are intended to be applied to the care of all patients in all healthcare settings, regardless of the suspected or confirmed presence of an infectious agent. Implementation of Standard Precautions constitutes the primary strategy for the prevention of healthcare-associated transmission of infectious agents among patients and healthcare personnel. Transmission-Based Precautions are for patients who are known or suspected to be infected or colonized with infectious agents, including certain epidemiologically important pathogens, which require additional control measures to effectively prevent transmission. Since the infecting agent often is not known at the time of admission to a healthcare facility, Transmission-Based Precautions are used empirically, according to the clinical syndrome and the likely etiologic agents at the time, and then modified when the pathogen is identified or a transmissible infectious etiology is ruled out. Examples of this syndromic approach are presented in Table 2 and illustrated in the sketch below. The HICPAC/CDC Guidelines also include recommendations for creating a Protective Environment for allogeneic HSCT patients. The specific elements of Standard and Transmission-Based Precautions are discussed in Part II of this guideline. In Part III, the circumstances in which Standard Precautions, Transmission-Based Precautions, and a Protective Environment are applied are discussed. See Tables 4 and 5 for summaries of the key elements of these sets of precautions.
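To make the syndromic approach concrete, the sketch below encodes a few syndrome-to-empiric-precautions pairings of the kind presented in Table 2. The entries are abbreviated paraphrases for illustration only; Table 2 and Appendix A are the authoritative sources, and, as noted above, local adaptation is expected:

```python
# Illustrative (abbreviated, paraphrased) syndrome -> empiric precautions map.
# Standard Precautions always apply in addition; Table 2 is authoritative.
EMPIRIC_PRECAUTIONS = {
    "acute diarrhea, likely infectious, incontinent/diapered patient": ["Contact"],
    "meningitis": ["Droplet"],                      # e.g., N. meningitidis
    "vesicular rash": ["Airborne", "Contact"],      # e.g., varicella
    "maculopapular rash with cough, coryza, fever": ["Airborne"],  # e.g., measles
    "cough, fever, upper-lobe pulmonary infiltrate": ["Airborne"], # e.g., M. tuberculosis
    "abscess or draining wound that cannot be covered": ["Contact"],
}

def empiric_precautions(syndrome: str) -> list[str]:
    # Unrecognized presentations fall back to Standard Precautions alone,
    # pending clinical judgment and laboratory results.
    return EMPIRIC_PRECAUTIONS.get(syndrome, [])

print(empiric_precautions("vesicular rash"))  # ['Airborne', 'Contact']
```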
# III.A. Standard Precautions
Standard Precautions combine the major features of Universal Precautions (UP) 780,896 and Body Substance Isolation (BSI) 640 and are based on the principle that all blood, body fluids, secretions, excretions except sweat, nonintact skin, and mucous membranes may contain transmissible infectious agents. Standard Precautions include a group of infection prevention practices that apply to all patients, regardless of suspected or confirmed infection status, in any setting in which healthcare is delivered (Table 4). These include: hand hygiene; use of gloves, gown, mask, eye protection, or face shield, depending on the anticipated exposure; and safe injection practices. Also, equipment or items in the patient environment likely to have been contaminated with infectious body fluids must be handled in a manner to prevent transmission of infectious agents (e.g., wear gloves for direct contact, contain heavily soiled equipment, properly clean and disinfect or sterilize reusable equipment before use on another patient). The application of Standard Precautions during patient care is determined by the nature of the HCW-patient interaction and the extent of anticipated blood, body fluid, or pathogen exposure. For some interactions (e.g., performing venipuncture), only gloves may be needed; during other interactions (e.g., intubation), use of gloves, gown, and face shield or mask and goggles is necessary. Education and training on the principles and rationale for recommended practices are critical elements of Standard Precautions because they facilitate appropriate decision-making and promote adherence when HCWs are faced with new circumstances 655 . An example of the importance of the use of Standard Precautions is intubation, especially under emergency circumstances when infectious agents may not be suspected but later are identified (e.g., SARS-CoV, N. meningitidis). The application of Standard Precautions is described below and summarized in Table 4. Guidance on donning and removing gloves, gowns, and other PPE is presented in the Figure. Standard Precautions are also intended to protect patients by ensuring that healthcare personnel do not carry infectious agents to patients on their hands or via equipment used during patient care.
# III.A.1. New Elements of Standard Precautions
Infection control problems that are identified in the course of outbreak investigations often indicate the need for new recommendations or reinforcement of existing infection control recommendations to protect patients. Because such recommendations are considered a standard of care and may not be included in other guidelines, they are added here to Standard Precautions. Three such areas of practice that have been added are: Respiratory Hygiene/Cough Etiquette, safe injection practices, and use of masks for insertion of catheters or injection of material into spinal or epidural spaces via lumbar puncture procedures (e.g., myelogram, spinal or epidural anesthesia). While most elements of Standard Precautions evolved from Universal Precautions that were developed for protection of healthcare personnel, these new elements of Standard Precautions focus on protection of patients.
# III.A.1.a. Respiratory Hygiene/Cough Etiquette
The transmission of SARS-CoV in emergency departments by patients and their family members during the widespread SARS outbreaks in 2003 highlighted the need for vigilance and prompt implementation of infection control measures at the first point of encounter within a healthcare setting (e.g., reception and triage areas in emergency departments, outpatient clinics, and physician offices) 21,254,897 . The strategy proposed has been termed Respiratory Hygiene/Cough Etiquette 9,828 and is intended to be incorporated into infection control practices as a new component of Standard Precautions. The strategy is targeted at patients and accompanying family members and friends with undiagnosed transmissible respiratory infections, and applies to any person with signs of illness, including cough, congestion, rhinorrhea, or increased production of respiratory secretions, when entering a healthcare facility 40,41,43 . The term cough etiquette is derived from recommended source control measures for M. tuberculosis 12,126 . The elements of Respiratory Hygiene/Cough Etiquette include 1) education of healthcare facility staff, patients, and visitors; 2) posted signs, in language(s) appropriate to the population served, with instructions to patients and accompanying family members or friends; 3) source control measures (e.g., covering the mouth/nose with a tissue when coughing and prompt disposal of used tissues, using surgical masks on the coughing person when tolerated and appropriate); 4) hand hygiene after contact with respiratory secretions; and 5) spatial separation, ideally >3 feet, of persons with respiratory infections in common waiting areas when possible. Covering sneezes and coughs and placing masks on coughing patients are proven means of source containment that prevent infected persons from dispersing respiratory secretions into the air 107,145,898,899 . Masking may be difficult in some settings (e.g., pediatrics), in which case the emphasis by necessity may be on cough etiquette 900 . Physical proximity of <3 feet has been associated with an increased risk for transmission of infections via the droplet route (e.g., N. meningitidis 103 and group A streptococcus 114 ) and therefore supports the practice of distancing infected persons from others who are not infected. The effectiveness of good hygiene practices, especially hand hygiene, in preventing transmission of viruses and reducing the incidence of respiratory infections both within and outside healthcare settings is summarized in several reviews 559,717,904 . These measures should be effective in decreasing the risk of transmission of pathogens contained in large respiratory droplets (e.g., influenza virus 23 , adenovirus 111 , B. pertussis 827 , and Mycoplasma pneumoniae 112 ). Although fever will be present in many respiratory infections, patients with pertussis and mild upper respiratory tract infections are often afebrile. Therefore, the absence of fever does not always exclude a respiratory infection. Patients who have asthma, allergic rhinitis, or chronic obstructive lung disease also may be coughing and sneezing. While these patients often are not infectious, cough etiquette measures are prudent. Healthcare personnel are advised to observe Droplet Precautions (i.e., wear a mask) and hand hygiene when examining and caring for patients with signs and symptoms of a respiratory infection.
Healthcare personnel who have a respiratory infection are advised to avoid direct patient contact, especially with high-risk patients. If this is not possible, then a mask should be worn while providing patient care.
# III.A.1.b. Safe Injection Practices
The investigation of four large outbreaks of HBV and HCV among patients in ambulatory care facilities in the United States identified a need to define and reinforce safe injection practices 453 . The four outbreaks occurred in a private medical practice, a pain clinic, an endoscopy clinic, and a hematology/oncology clinic. The primary breaches in infection control practice that contributed to these outbreaks were 1) reinsertion of used needles into a multiple-dose vial or solution container (e.g., saline bag) and 2) use of a single needle/syringe to administer intravenous medication to multiple patients. In one of these outbreaks, preparation of medications in the same workspace where used needles/syringes were dismantled also may have been a contributing factor. These and other outbreaks of viral hepatitis could have been prevented by adherence to basic principles of aseptic technique for the preparation and administration of parenteral medications 453,454 . These include the use of a sterile, single-use, disposable needle and syringe for each injection given and prevention of contamination of injection equipment and medication. Whenever possible, use of single-dose vials is preferred over multiple-dose vials, especially when medications will be administered to multiple patients. Outbreaks related to unsafe injection practices indicate that some healthcare personnel are unaware of, do not understand, or do not adhere to basic principles of infection control and aseptic technique. A survey of US healthcare workers who provide medication through injection found that 1% to 3% reused the same needle and/or syringe on multiple patients 905 . Among the deficiencies identified in recent outbreaks were a lack of oversight of personnel and failure to follow up on reported breaches in infection control practices in ambulatory settings. Therefore, to ensure that all healthcare workers understand and adhere to recommended practices, principles of infection control and aseptic technique need to be reinforced in training programs and incorporated into institutional policies that are monitored for adherence 454 .
# III.A.1.c. Infection Control Practices for Special Lumbar Puncture Procedures
In 2004, CDC investigated eight cases of post-myelography meningitis that either were reported to CDC or identified through a survey of the Emerging Infections Network of the Infectious Disease Society of America. Blood and/or cerebrospinal fluid of all eight cases yielded streptococcal species consistent with oropharyngeal flora, and there were changes in the CSF indices and clinical status indicative of bacterial meningitis. Equipment and products used during these procedures (e.g., contrast media) were excluded as probable sources of contamination. Procedural details available for seven cases determined that antiseptic skin preparations and sterile gloves had been used. However, none of the clinicians wore a face mask, giving rise to the speculation that droplet transmission of oropharyngeal flora was the most likely explanation for these infections. Bacterial meningitis following myelogram and other spinal procedures (e.g., lumbar puncture, spinal and epidural anesthesia, intrathecal chemotherapy) has been reported previously.
As a result, the question of whether face masks should be worn to prevent droplet spread of oral flora during spinal procedures (e.g., myelogram, lumbar puncture, spinal anesthesia) has been debated 916,917 . Face masks are effective in limiting the dispersal of oropharyngeal droplets 918 and are recommended for the placement of central venous catheters 919 . In October 2005, the Healthcare Infection Control Practices Advisory Committee (HICPAC) reviewed the evidence and concluded that there is sufficient experience to warrant the additional protection of a face mask for the individual placing a catheter or injecting material into the spinal or epidural space.
# III.B. Transmission-Based Precautions
There are three categories of Transmission-Based Precautions: Contact Precautions, Droplet Precautions, and Airborne Precautions. Transmission-Based Precautions are used when the route(s) of transmission is (are) not completely interrupted using Standard Precautions alone. For some diseases that have multiple routes of transmission (e.g., SARS), more than one Transmission-Based Precautions category may be used. When used either singly or in combination, they are always used in addition to Standard Precautions. See Appendix A for recommended precautions for specific infections. When Transmission-Based Precautions are indicated, efforts must be made to counteract possible adverse effects on patients (i.e., anxiety, depression and other mood disturbances, perceptions of stigma 923 , reduced contact with clinical staff, and increases in preventable adverse events 565 ) in order to improve acceptance by the patients and adherence by HCWs.
# III.B.1. Contact Precautions
Contact Precautions are intended to prevent transmission of infectious agents, including epidemiologically important microorganisms, which are spread by direct or indirect contact with the patient or the patient's environment, as described in I.B.3.a. The specific agents and circumstances for which Contact Precautions are indicated are found in Appendix A. The application of Contact Precautions for patients infected or colonized with MDROs is described in the 2006 HICPAC/CDC MDRO guideline 927 . Contact Precautions also apply where the presence of excessive wound drainage, fecal incontinence, or other discharges from the body suggests an increased potential for extensive environmental contamination and risk of transmission. A single-patient room is preferred for patients who require Contact Precautions. When a single-patient room is not available, consultation with infection control personnel is recommended to assess the various risks associated with other patient placement options (e.g., cohorting, keeping the patient with an existing roommate). In multi-patient rooms, >3 feet spatial separation between beds is advised to reduce the opportunities for inadvertent sharing of items between the infected/colonized patient and other patients. Healthcare personnel caring for patients on Contact Precautions wear a gown and gloves for all interactions that may involve contact with the patient or potentially contaminated areas in the patient's environment. Donning PPE upon room entry and discarding it before exiting the patient room is done to contain pathogens, especially those that have been implicated in transmission through environmental contamination (e.g., VRE, C. difficile, noroviruses and other intestinal tract pathogens; RSV) 54,72,73,78,274,275,740 .
# III.B.2. Droplet Precautions
Droplet Precautions are intended to prevent transmission of pathogens spread through close respiratory or mucous membrane contact with respiratory secretions, as described in I.B.3.b. Because these pathogens do not remain infectious over long distances in a healthcare facility, special air handling and ventilation are not required to prevent droplet transmission. Infectious agents for which Droplet Precautions are indicated are found in Appendix A and include B. pertussis, influenza virus, adenovirus, rhinovirus, N. meningitidis, and group A streptococcus (for the first 24 hours of antimicrobial therapy). A single-patient room is preferred for patients who require Droplet Precautions. When a single-patient room is not available, consultation with infection control personnel is recommended to assess the various risks associated with other patient placement options (e.g., cohorting, keeping the patient with an existing roommate). Spatial separation of >3 feet and drawing the curtain between patient beds are especially important for patients in multi-bed rooms with infections transmitted by the droplet route. Healthcare personnel wear a mask (a respirator is not necessary) for close contact with infectious patients; the mask is generally donned upon room entry. Patients on Droplet Precautions who must be transported outside of the room should wear a mask if tolerated and follow Respiratory Hygiene/Cough Etiquette.
# III.B.3. Airborne Precautions
Airborne Precautions prevent transmission of infectious agents that remain infectious over long distances when suspended in the air (e.g., rubeola virus [measles], varicella virus [chickenpox], M. tuberculosis, and possibly SARS-CoV), as described in I.B.3.c and Appendix A. The preferred placement for patients who require Airborne Precautions is in an airborne infection isolation room (AIIR). An AIIR is a single-patient room that is equipped with special air handling and ventilation capacity that meets the American Institute of Architects/Facility Guidelines Institute (AIA/FGI) standards for AIIRs (i.e., monitored negative pressure relative to the surrounding area, 12 air exchanges per hour for new construction and renovation and 6 air exchanges per hour for existing facilities, air exhausted directly to the outside or recirculated through HEPA filtration before return) 12,13 . Some states require the availability of such rooms in hospitals, emergency departments, and nursing homes that care for patients with M. tuberculosis. A respiratory protection program that includes education about use of respirators, fit-testing, and user seal checks is required in any facility with AIIRs. In settings where Airborne Precautions cannot be implemented due to limited engineering resources (e.g., physician offices), masking the patient, placing the patient in a private room (e.g., office examination room) with the door closed, and providing healthcare personnel with N95 or higher-level respirators (or masks if respirators are not available) will reduce the likelihood of airborne transmission until the patient is either transferred to a facility with an AIIR or returned to the home environment, as deemed medically appropriate. Healthcare personnel caring for patients on Airborne Precautions wear a mask or respirator, depending on the disease-specific recommendations (Respiratory Protection II.E.4, Table 2, and Appendix A), that is donned prior to room entry.
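The air-exchange rates cited above (12 per hour for new construction and renovation, 6 for existing facilities) determine how quickly an AIIR clears airborne contaminants once the source is removed. A minimal sketch, assuming perfect air mixing and the first-order decay relationship used in the Guidelines for Environmental Infection Control 11 :

```python
import math

def clearance_minutes(ach: float, removal_fraction: float = 0.99) -> float:
    """Minutes for room air to reach the given removal fraction of an
    airborne contaminant, assuming perfect mixing: t = (60/ACH) * ln(C0/C)."""
    return (60.0 / ach) * math.log(1.0 / (1.0 - removal_fraction))

for ach in (6, 12):
    print(f"{ach} ACH: 99% removal in {clearance_minutes(ach):.0f} min, "
          f"99.9% in {clearance_minutes(ach, 0.999):.0f} min")
# 6 ACH:  99% in ~46 min, 99.9% in ~69 min
# 12 ACH: 99% in ~23 min, 99.9% in ~35 min
```

At 6 air changes per hour, 99.9% removal takes roughly 69 minutes, consistent with the guidance later in this document that a vacated room generally remain empty for about one hour.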
Whenever possible, non-immune HCWs should not care for patients with vaccine-preventable airborne diseases (e.g., measles, chickenpox, and smallpox).
# III.C. Syndromic and empiric applications of Transmission-Based Precautions
Diagnosis of many infections requires laboratory confirmation. Since laboratory tests, especially those that depend on culture techniques, often require two or more days for completion, Transmission-Based Precautions must be implemented while test results are pending, based on the clinical presentation and likely pathogens. Use of appropriate Transmission-Based Precautions at the time a patient develops symptoms or signs of transmissible infection, or arrives at a healthcare facility for care, reduces transmission opportunities. While it is not possible to identify prospectively all patients needing Transmission-Based Precautions, certain clinical syndromes and conditions carry a sufficiently high risk to warrant their use empirically while confirmatory tests are pending (Table 2). Infection control professionals are encouraged to modify or adapt this table according to local conditions.
# III.D. Discontinuation of Transmission-Based Precautions
Transmission-Based Precautions remain in effect for limited periods of time (i.e., while the risk for transmission of the infectious agent persists or for the duration of the illness) (Appendix A). For most infectious diseases, this duration reflects known patterns of persistence and shedding of infectious agents associated with the natural history of the infectious process and its treatment. For some diseases (e.g., pharyngeal or cutaneous diphtheria, RSV), Transmission-Based Precautions remain in effect until culture or antigen-detection test results document eradication of the pathogen and, for RSV, symptomatic disease is resolved. For other diseases (e.g., M. tuberculosis), state laws and regulations, and healthcare facility policies, may dictate the duration of precautions 12 . In immunocompromised patients, viral shedding can persist for prolonged periods of time (many weeks to months) and transmission to others may occur during that time; therefore, the duration of contact and/or droplet precautions may be prolonged for many weeks 500 . The duration of Contact Precautions for patients who are colonized or infected with MDROs remains undefined. MRSA is the only MDRO for which effective decolonization regimens are available 867 . However, carriers of MRSA who have negative nasal cultures after a course of systemic or topical therapy may resume shedding MRSA in the weeks that follow therapy 934,935 . Although early guidelines for VRE suggested discontinuation of Contact Precautions after three stool cultures obtained at weekly intervals proved negative 740 , subsequent experiences have indicated that such screening may fail to detect colonization that can persist for >1 year 27 . Likewise, available data indicate that colonization with VRE, MRSA 939 , and possibly MDR-GNB, can persist for many months, especially in the presence of severe underlying disease, invasive devices, and recurrent courses of antimicrobial agents. It may be prudent to assume that MDRO carriers are colonized permanently and manage them accordingly. Alternatively, an interval free of hospitalizations, antimicrobial therapy, and invasive devices (e.g., 6 or 12 months) before reculturing patients to document clearance of carriage may be used (see the sketch below). Determination of the best strategy awaits the results of additional studies.
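The alternative strategy just described can be read as a simple eligibility check before attempting to document clearance of MDRO carriage. The sketch below is hypothetical and only illustrates that reading; the interval, like the strategy itself, is not a validated criterion, and, as noted above, the best approach awaits further study:

```python
from datetime import date, timedelta

def eligible_for_reculture(last_hospitalization: date,
                           last_antimicrobial_course: date,
                           last_invasive_device: date,
                           today: date,
                           interval_months: int = 12) -> bool:
    """Illustrative check: has the patient been free of hospitalizations,
    antimicrobial therapy, and invasive devices for the chosen interval?"""
    interval = timedelta(days=interval_months * 30)
    return all(today - event >= interval for event in
               (last_hospitalization, last_antimicrobial_course,
                last_invasive_device))

# Example: all risk events more than 12 months ago -> eligible to reculture.
print(eligible_for_reculture(date(2024, 1, 5), date(2024, 2, 1),
                             date(2023, 12, 20), date(2025, 3, 15)))
```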
See the 2006 HICPAC/CDC MDRO guideline 927 for discussion of possible criteria to discontinue Contact Precautions for patients colonized or infected with MDROs.
# III.E. Application of Transmission-Based Precautions in ambulatory and home care settings
Although Transmission-Based Precautions generally apply in all healthcare settings, exceptions exist. For example, in home care, AIIRs are not available. Furthermore, family members already exposed to diseases such as varicella and tuberculosis would not use masks or respiratory protection, but visiting HCWs would need to use such protection. Similarly, management of patients colonized or infected with MDROs may necessitate Contact Precautions in acute care hospitals and in some LTCFs when there is continued transmission, but the risk of transmission in ambulatory care and home care has not been defined. Consistent use of Standard Precautions may suffice in these settings, but more information is needed.
# III.F. Protective Environment
A Protective Environment is designed for allogeneic HSCT patients to minimize fungal spore counts in the air and reduce the risk of invasive environmental fungal infections (see Table 5 for specifications) 11 . The need for such controls has been demonstrated in studies of aspergillus outbreaks associated with construction 11,14,15,157,158 . As defined by the American Institute of Architects 13 and presented in detail in the Guideline for Environmental Infection Control 2003 11,861 , air quality for HSCT patients is improved through a combination of environmental controls that include 1) HEPA filtration of incoming air; 2) directed room air flow; 3) positive room air pressure relative to the corridor; 4) well-sealed rooms (including sealed walls, floors, ceilings, windows, and electrical outlets) to prevent flow of air from the outside; 5) ventilation to provide >12 air changes per hour; 6) strategies to minimize dust (e.g., scrubbable surfaces rather than upholstery 940 and carpet 941 , and routinely cleaning crevices and sprinkler heads); and 7) prohibiting dried and fresh flowers and potted plants in the rooms of HSCT patients. The latter is based on molecular typing studies that have found indistinguishable strains of Aspergillus terreus in patients with hematologic malignancies and in potted plants in the vicinity of the patients. The desired quality of air may be achieved without incurring the inconvenience or expense of laminar airflow 15,157 . To prevent inhalation of fungal spores during periods when construction, renovation, or other dust-generating activities may be ongoing in and around the health-care facility, it has been advised that severely immunocompromised patients wear a high-efficiency respiratory-protection device (e.g., an N95 respirator) when they leave the Protective Environment 11,14,945 . The use of masks or respirators by HSCT patients when they are outside of the Protective Environment for prevention of environmental fungal infections in the absence of construction has not been evaluated. A Protective Environment does not include the use of barrier precautions beyond those indicated for Standard and Transmission-Based Precautions. No published reports support the benefit of placing solid organ transplant recipients or other immunocompromised patients in a Protective Environment.
# Part IV: Recommendations
These recommendations are designed to prevent transmission of infectious agents among patients and healthcare personnel in all settings where healthcare is delivered.
As in other CDC/HICPAC guidelines, each recommendation is categorized on the basis of existing scientific data, theoretical rationale, applicability, and, when possible, economic impact. The CDC/HICPAC system for categorizing recommendations is as follows:
Category IA Strongly recommended for implementation and strongly supported by well-designed experimental, clinical, or epidemiologic studies.
Category IB Strongly recommended for implementation and supported by some experimental, clinical, or epidemiologic studies and a strong theoretical rationale.
Category IC Required for implementation, as mandated by federal and/or state regulation or standard.
Category II Suggested for implementation and supported by suggestive clinical or epidemiologic studies or a theoretical rationale.
No recommendation; unresolved issue. Practices for which insufficient evidence or no consensus regarding efficacy exists.
# I. Administrative Responsibilities
Healthcare organization administrators should ensure the implementation of recommendations in this section.
underlying conditions that predispose to serious adverse outcomes)
- Analyze data to identify trends that may indicate increased rates of transmission
- Provide feedback on trends in the incidence and prevalence of HAIs, probable risk factors, and prevention strategies and their impact to the appropriate healthcare providers, organization administrators, and, as required, to local and state health authorities
III.C. Develop and implement strategies to reduce risks for transmission and evaluate effectiveness 566,673,684,963,970,971 . Category IB
III.D. When transmission of epidemiologically-important organisms continues despite implementation and documented adherence to infection prevention and control strategies, obtain consultation from persons knowledgeable in infection control and healthcare epidemiology to review the situation and recommend additional measures for control 247,566,687 . Category IB
III.E. Review periodically information on community or regional trends in the incidence and prevalence of epidemiologically-important organisms (e.g., influenza, RSV, pertussis, invasive group A streptococcal disease, MRSA, VRE), including in other healthcare facilities, that may impact transmission of organisms within the facility 398,687,972-974 . Category II
# IV. Standard Precautions
Assume that every person is potentially infected or colonized with an organism that could be transmitted in the healthcare setting, and apply the following infection control practices during the delivery of health care.
IV.A. Hand Hygiene
IV.A.1. During the delivery of healthcare, avoid unnecessary touching of surfaces in close proximity to the patient to prevent both contamination of clean hands from environmental surfaces and transmission of pathogens from contaminated hands to surfaces 72,73,739,800,975 . Category IB/IC
IV.A.2. When hands are visibly dirty, contaminated with proteinaceous material, or visibly soiled with blood or body fluids, wash hands with either a nonantimicrobial soap and water or an antimicrobial soap and water 559 . Category IA
IV.A.3. If hands are not visibly soiled, or after removing visible material with nonantimicrobial soap and water, decontaminate hands in the clinical situations described in IV.A.3.a-f. The preferred method of hand decontamination is with an alcohol-based hand rub 562,978 . Alternatively, hands may be washed with an antimicrobial soap and water.
Frequent use of alcohol-based hand rub immediately following handwashing with nonantimicrobial soap may increase the frequency of dermatitis 559 . Category IB
Perform hand hygiene:
IV.A.3.a. Before having direct contact with patients 664,979 . Category IB
IV.A.3.b. After contact with blood, body fluids or excretions, mucous membranes, nonintact skin, or wound dressings 664 . Category IA
IV.A.3.c. After contact with a patient's intact skin (e.g., when taking a pulse or blood pressure or lifting a patient) 167,976,979,980 . Category IB
IV.A.3.d. If hands will be moving from a contaminated body site to a clean body site during patient care. Category II
IV.A.3.e. After contact with inanimate objects (including medical equipment) in the immediate vicinity of the patient 72,73,88,800,981,982 . Category II
IV.A.3.f. After removing gloves 728,741,742 . Category IB
IV.A.4. Wash hands with non-antimicrobial soap and water or with antimicrobial soap and water if contact with spores (e.g., C. difficile or Bacillus anthracis) is likely to have occurred. The physical action of washing and rinsing hands under such circumstances is recommended because alcohols, chlorhexidine, iodophors, and other antiseptic agents have poor activity against spores 559,956,983 . Category II
IV.A.5. Do not wear artificial fingernails or extenders if duties include direct contact with patients at high risk for infection and associated adverse outcomes (e.g., those in ICUs or operating rooms) 30,31,559 . Category IA
IV.A.5.a. Develop an organizational policy on the wearing of non-natural nails by healthcare personnel who have direct contact with patients outside of the groups specified above 984 . Category II
IV.E.1. Establish policies and procedures for containing, transporting, and handling patient-care equipment and instruments/devices that may be contaminated with blood or body fluids 18,739,975 . Category IB/IC
IV.E.2. Remove organic material from critical and semi-critical instruments/devices, using recommended cleaning agents, before high-level disinfection and sterilization to enable effective disinfection and sterilization processes 836,991,992 . Category IA
IV.E.3. Wear PPE (e.g., gloves, gown), according to the level of anticipated contamination, when handling patient-care equipment and instruments/devices that are visibly soiled or may have been in contact with blood or body fluids 18,739,975 . Category IB/IC
IV.F. Care of the environment 11
IV.F.1. Establish policies and procedures for routine and targeted cleaning of environmental surfaces as indicated by the level of patient contact and degree of soiling 11 . Category II
IV.F.2. Clean and disinfect surfaces that are likely to be contaminated with pathogens, including those that are in close proximity to the patient (e.g., bed rails, overbed tables) and frequently-touched surfaces in the patient care environment (e.g., door knobs, surfaces in and surrounding toilets in patients' rooms) on a more frequent schedule compared to that for other surfaces (e.g., horizontal surfaces in waiting rooms) 11,72,73,740,746,800,835,993-995 . Category IB
IV.F.3. Use EPA-registered disinfectants that have microbiocidal (i.e., killing) activity against the pathogens most likely to contaminate the patient-care environment. Use in accordance with manufacturer's instructions 842-844,956,996 . Category IB/IC
IV.F.3.a. Review the efficacy of in-use disinfectants when evidence of continuing transmission of an infectious agent (e.g., rotavirus, C.
difficile, norovirus) may indicate resistance to the in-use product, and change to a more effective disinfectant as indicated 275,842,847 . Category II
IV.F.4. In facilities that provide health care to pediatric patients or have waiting areas with child play toys (e.g., obstetric/gynecology offices and clinics), establish policies and procedures for cleaning and disinfecting toys at regular intervals 80,379 . Category IB
Use the following principles in developing this policy and procedures: Category II
- Select play toys that can be easily cleaned and disinfected
- Do not permit use of stuffed furry toys if they will be shared
- Clean and disinfect large stationary toys (e.g., climbing equipment) at least weekly and whenever visibly soiled
- If toys are likely to be mouthed, rinse with water after disinfection; alternatively, wash in a dishwasher
- When a toy requires cleaning and disinfection, do so immediately or store it in a designated labeled container separate from toys that are clean and ready for use
IV.F.5. Include multi-use electronic equipment in policies and procedures for preventing contamination and for cleaning and disinfection, especially those items that are used by patients, those used during delivery of patient care, and mobile devices that are moved in and out of patient rooms frequently (e.g., daily) 850-852,997 . Category IB
IV.F.5.a. No recommendation for use of removable protective covers or washable keyboards. Unresolved issue
IV.G. Textiles and laundry
IV.G.1. Handle used textiles and fabrics with minimum agitation to avoid contamination of air, surfaces, and persons 739,998,999 . Category IB/IC
IV.G.2. If laundry chutes are used, ensure that they are properly designed, maintained, and used in a manner to minimize dispersion of aerosols from contaminated laundry 11,13,1000,1001 . Category IB/IC
IV.H. Safe injection practices
The following recommendations apply to the use of needles, cannulas that replace needles, and, where applicable, intravenous delivery systems 454 .
IV.H.1. Use aseptic technique to avoid contamination of sterile injection equipment 1002,1003 . Category IA
IV.H.2. Do not administer medications from a syringe to multiple patients, even if the needle or cannula on the syringe is changed. Needles, cannulae, and syringes are sterile, single-use items; they should not be reused for another patient nor to access a medication or solution that might be used for a subsequent patient 453,919,1004,1005 . Category IA
IV.H.3. Use fluid infusion and administration sets (i.e., intravenous bags, tubing, and connectors) for one patient only and dispose of them appropriately after use. Consider a syringe or needle/cannula contaminated once it has been used to enter or connect to a patient's intravenous infusion bag or administration set 453 . Category IB
IV.H.4. Use single-dose vials for parenteral medications whenever possible 453 . Category IA
IV.H.5. Do not administer medications from single-dose vials or ampules to multiple patients or combine leftover contents for later use 369,453,1005 . Category IA
IV.H.6. If multidose vials must be used, both the needle or cannula and syringe used to access the multidose vial must be sterile 453,1002 . Category IA
IV.H.7. Do not keep multidose vials in the immediate patient treatment area, and store them in accordance with the manufacturer's recommendations; discard if sterility is compromised or questionable 453,1003 . Category IA
IV.H.8.
Do not use bags or bottles of intravenous solution as a common source of supply for multiple patients 453,1006 . Category IB
IV.I. Infection control practices for special lumbar puncture procedures
Wear a surgical mask when placing a catheter or injecting material into the spinal canal or subdural space (i.e., during myelograms, lumbar puncture, and spinal or epidural anesthesia) 906-914,918,1007 . Category IB
IV.J. Worker safety
Adhere to federal and state requirements for protection of healthcare personnel from exposure to bloodborne pathogens 739 . Category IC
# V. Transmission-Based Precautions
V.A. General principles
V.A.1. In addition to Standard Precautions, use Transmission-Based Precautions for patients with documented or suspected infection or colonization with highly transmissible or epidemiologically-important pathogens for which additional precautions are needed to prevent transmission (see Appendix A) 24,93,126,141,306,806,1008 . Category IA
V.B.2.b. In long-term care and other residential settings, make decisions regarding patient placement on a case-by-case basis, balancing infection risks to other patients in the room, the presence of risk factors that increase the likelihood of transmission, and the potential adverse psychological impact on the infected or colonized patient 920,921 . Category II
V.B.2.c. In ambulatory settings, place patients who require Contact Precautions in an examination room or cubicle as soon as possible 20 . Category II
V.B.3. Use of personal protective equipment
V.B.3.a. Gloves
Wear gloves whenever touching the patient's intact skin 24,89,134,559,746,837 or surfaces and articles in close proximity to the patient (e.g., medical equipment, bed rails) 72,73,88,837 . Don gloves upon entry into the room or cubicle. Category IB
V.B.3.b. Gowns
V.B.3.b.i. Wear a gown whenever anticipating that clothing will have direct contact with the patient or potentially contaminated environmental surfaces or equipment in close proximity to the patient. Don the gown upon entry into the room or cubicle. Remove the gown and observe hand hygiene before leaving the patient-care environment 24,88,134,745,837 . Category IB
V.B.3.b.ii. After gown removal, ensure that clothing and skin do not contact potentially contaminated environmental surfaces that could result in possible transfer of microorganisms to other patients or environmental surfaces 72,73 . Category II
V.B.5. Patient-care equipment and instruments/devices
V.B.5.a. Handle patient-care equipment and instruments/devices according to Standard Precautions 739,836 . Category IB/IC
V.B.5.b. In acute care hospitals and long-term care and other residential settings, use disposable noncritical patient-care equipment (e.g., blood pressure cuffs) or implement patient-dedicated use of such equipment. If common use of equipment for multiple patients is unavoidable, clean and disinfect such equipment before use on another patient 24,88,796,836,837,854,1016 . Category IB
V.B.5.c. In home care settings
V.B.5.c.i. Limit the amount of non-disposable patient-care equipment brought into the home of patients on Contact Precautions. Whenever possible, leave patient-care equipment in the home until discharge from home care services. Category II
V.B.5.c.ii. If noncritical patient-care equipment (e.g., stethoscope) cannot remain in the home, clean and disinfect items before taking them from the home, using a low- to intermediate-level disinfectant. Alternatively, place contaminated reusable items in a plastic bag for transport and subsequent cleaning and disinfection. Category II
V.B.5.d.
In ambulatory settings, place contaminated reusable noncritical patient-care equipment in a plastic bag for transport to a soiled utility area for reprocessing. Category II
V.B.6. Environmental measures
Ensure that rooms of patients on Contact Precautions are prioritized for frequent cleaning and disinfection (e.g., at least daily), with a focus on frequently-touched surfaces (e.g., bed rails, overbed table, bedside commode, lavatory surfaces in patient bathrooms, doorknobs) and equipment in the immediate vicinity of the patient 11,24,88,746,837 . Category IB
V.B.7. Discontinue Contact Precautions after signs and symptoms of the infection have resolved or according to pathogen-specific recommendations in Appendix A. Category IB
V.C. Droplet Precautions
V.C.1. Use Droplet Precautions as recommended in Appendix A for patients known or suspected to be infected with pathogens transmitted by respiratory droplets (i.e., large-particle droplets >5 µm in size) that are generated by a patient who is coughing, sneezing, or talking 14,23,41,95,103,111,112,755,756,989,1017 . Category IB
V.C.2. Patient placement
V.C.2.a. In acute care hospitals, place patients who require Droplet Precautions in a single-patient room when available. Category II
When single-patient rooms are in short supply, apply the following principles for making decisions on patient placement:
- Prioritize patients who have excessive cough and sputum production for single-patient room placement. Category II
- Place together in the same room (cohort) patients who are infected with the same pathogen and are suitable roommates 814,816 . Category IB
- If it becomes necessary to place patients who require Droplet Precautions in a room with a patient who does not have the same infection:
- Avoid placing patients on Droplet Precautions in the same room with patients who have conditions that may increase the risk of adverse outcome from infection or that may facilitate transmission (e.g., those who are immunocompromised or have or are anticipated to have prolonged lengths of stay). Category II
- Ensure that patients are physically separated (i.e., >3 feet apart) from each other. Draw the privacy curtain between beds to minimize opportunities for close contact 103,104,410 . Category IB
- Change protective attire and perform hand hygiene between contact with patients in the same room, regardless of whether one patient or both patients are on Droplet Precautions 741-743,988,1014,1015 . Category IB
V.C.2.b. In long-term care and other residential settings, make decisions regarding patient placement on a case-by-case basis after considering infection risks to other patients in the room and available alternatives 410 . Category II
V.C.2.c. In ambulatory settings, place patients who require Droplet Precautions in an examination room or cubicle as soon as possible. Instruct patients to follow recommendations for Respiratory Hygiene/Cough Etiquette 9,447,448,828 . Category II
V.C.3. Use of personal protective equipment
V.C.3.a. Don a mask upon entry into the patient room or cubicle 14,23,41,103,111,113,115,827 . Category IB
V.D. Airborne Precautions
V.D.1. Use Airborne Precautions as recommended in Appendix A for patients known or suspected to be infected with infectious agents transmitted person-to-person by the airborne route (e.g., M. tuberculosis 12 , measles 34,122,1020 , chickenpox 123,773,1021 , disseminated herpes zoster 1022 ). Category IA/IC
V.D.2. Patient placement
V.D.2.a.
In acute care hospitals and long-term care settings, place patients who require Airborne Precautions in an AIIR that has been constructed in accordance with current guidelines. Category IA/IC
V.D.2.a.i. Provide at least six (existing facility) or 12 (new construction/renovation) air changes per hour.
V.D.2.a.ii. Direct exhaust of air to the outside. If it is not possible to exhaust air from an AIIR directly to the outside, the air may be returned to the air-handling system or adjacent spaces if all air is directed through HEPA filters.
V.D.2.a.iii. Whenever an AIIR is in use for a patient on Airborne Precautions, monitor air pressure daily with visual indicators (e.g., smoke tubes, flutter strips), regardless of the presence of differential pressure sensing devices (e.g., manometers) 11,12,1023,1024 .
V.D.2.a.iv. Keep the AIIR door closed when not required for entry and exit.
V.D.2.b. When an AIIR is not available, transfer the patient to a facility that has an available AIIR 12 . Category II
V.D.2.c. In the event of an outbreak or exposure involving large numbers of patients who require Airborne Precautions:
- Consult infection control professionals before patient placement to determine the safety of alternative rooms that do not meet engineering requirements for an AIIR.
- Place together (cohort) patients who are presumed to have the same infection (based on clinical presentation and diagnosis when known) in areas of the facility that are away from other patients, especially patients who are at increased risk for infection (e.g., immunocompromised patients).
- Use temporary portable solutions (e.g., exhaust fan) to create a negative pressure environment in the converted area of the facility. Discharge air directly to the outside, away from people and air intakes, or direct all the air through HEPA filters before it is introduced to other air spaces 12 . Category II
V.D.2.d.ii. Place the patient in an AIIR as soon as possible. If an AIIR is not available, place a surgical mask on the patient and place him/her in an examination room. Once the patient leaves, the room should remain vacant for the appropriate time, generally one hour, to allow for a full exchange of air 11,12,122 . Category IB/IC
V.D.2.d.iii. Instruct patients with a known or suspected airborne infection to wear a surgical mask and observe Respiratory Hygiene/Cough Etiquette. Once in an AIIR, the mask may be removed; the mask should remain on if the patient is not in an AIIR 12,107,145,899 . Category IB/IC
V.D.3. Personnel restrictions
Restrict susceptible healthcare personnel from entering the rooms of patients known or suspected to have measles (rubeola), varicella (chickenpox), disseminated zoster, or smallpox if other, immune healthcare personnel are available 17,775 . Category IB
V.D.4. Use of PPE
V.D.4.a. Wear a fit-tested, NIOSH-approved N95 or higher-level respirator for respiratory protection when entering the room or home of a patient when the following diseases are suspected or confirmed:
- Infectious pulmonary or laryngeal tuberculosis or when infectious tuberculosis skin lesions are present and procedures that would aerosolize viable organisms (e.g., irrigation, incision and drainage, whirlpool treatments) are performed 12,1025,1026 . Category IB
- Smallpox (vaccinated and unvaccinated).
Respiratory protection is recommended for all healthcare personnel, including those with a documented "take" after smallpox vaccination, due to the risk of a genetically engineered virus against which the vaccine may not provide protection, or of exposure to a very large viral load (e.g., from high-risk aerosol-generating procedures, immunocompromised patients, or hemorrhagic or flat smallpox) 108,129 . Category II
V.D.4.b. No recommendation is made regarding the use of PPE by healthcare personnel who are presumed to be immune to measles (rubeola) or varicella-zoster based on history of disease, vaccine, or serologic testing when caring for an individual with known or suspected measles, chickenpox, or disseminated zoster, due to difficulties in establishing definite immunity 1027,1028 . Unresolved issue
V.D.4.c. No recommendation is made regarding the type of personal protective equipment (i.e., surgical mask or respiratory protection with an N95 or higher respirator) to be worn by susceptible healthcare personnel who must have contact with patients with known or suspected measles, chickenpox, or disseminated herpes zoster. Unresolved issue
V.D.5. Patient transport
V.D.5.a. In acute care hospitals and long-term care and other residential settings, limit transport and movement of patients outside of the room to medically-necessary purposes. Category II
V.D.5.b. If transport or movement outside an AIIR is necessary, instruct patients to wear a surgical mask, if possible, and observe Respiratory Hygiene/Cough Etiquette 12 . Category II
V.D.5.c. For patients with skin lesions associated with varicella or smallpox or draining skin lesions caused by M. tuberculosis, cover the affected areas to prevent aerosolization of or contact with the infectious agent in skin lesions 108,1025,1026 . Category IB
V.D.5.d. Healthcare personnel transporting patients who are on Airborne Precautions do not need to wear a mask or respirator during transport if the patient is wearing a mask and infectious skin lesions are covered. Category II
V.D.6. Exposure management
Immunize or provide the appropriate immune globulin to susceptible persons as soon as possible following unprotected contact (i.e., exposure) with a patient with measles, varicella, or smallpox: Category IA
- Administer measles vaccine to exposed susceptible persons within 72 hours after the exposure, or administer immune globulin within six days of the exposure event for high-risk persons in whom vaccine is contraindicated 17 .
- Administer varicella vaccine to exposed susceptible persons within 120 hours after the exposure, or administer varicella immune globulin (VZIG or an alternative product), when available, within 96 hours for high-risk persons in whom vaccine is contraindicated (e.g., immunocompromised patients, pregnant women, newborns whose mother's varicella onset was <5 days before delivery or within 48 hours after delivery).
VI.C.1.c. Positive room air pressure relative to the corridor (>12.5 Pa) 13 . Category IB
VI.C.1.c.i. Monitor air pressure daily with visual indicators (e.g., smoke tubes, flutter strips) 11,1024 . Category IA
VI.C.1.d. Well-sealed rooms that prevent infiltration of outside air 13 . Category IB
VI.C.1.e. At least 12 air changes per hour 13 . Category IB
VI.C.2. Lower dust levels by using smooth, nonporous surfaces and finishes that can be scrubbed, rather than textured material (e.g., upholstery). Wet dust horizontal surfaces whenever dust is detected, and routinely clean crevices and sprinkler heads where dust may accumulate 940,941 . Category II
VI.C.3. Avoid carpeting in hallways and patient rooms in these areas 941 . Category IB
VI.C.4.
VI.C.4. Prohibit dried and fresh flowers and potted plants. Category II
VI.D. Minimize the length of time that patients who require a Protective Environment are outside their rooms for diagnostic procedures and other activities 11,158,945. Category IB
VI.E. During periods of construction, to prevent inhalation of respirable particles that could contain infectious spores, provide respiratory protection (e.g., N95 respirator) to patients who are medically fit to tolerate a respirator when they are required to leave the Protective Environment 945,158.
VI.F.3. Barrier precautions (e.g., masks, gowns, gloves) are not required for healthcare personnel in the absence of suspected or confirmed infection in the patient or if they are not indicated according to Standard Precautions 15. Category II
VI.F.4. Implement Airborne Precautions for patients who require a Protective Environment room and who also have an airborne infectious disease (e.g., pulmonary or laryngeal tuberculosis, acute varicella-zoster). Category IA
VI.F.4.a. Ensure that the Protective Environment is designed to maintain positive pressure 13. Category IB
VI.F.4.b. Use an anteroom to further support the appropriate air balance relative to the corridor and the Protective Environment; provide independent exhaust of contaminated air to the outside, or place a HEPA filter in the exhaust duct if the return air must be recirculated 13,1041. Category IB
VI.F.4.c. If an anteroom is not available, place the patient in an AIIR and use portable, industrial-grade HEPA filters in the room to enhance filtration of spores 1042. Category II

Preamble
The mode(s) and risk of transmission for each specific disease agent included in Appendix A were reviewed. Principal sources consulted for the development of disease-specific recommendations for Appendix A included infectious disease manuals and textbooks 833,1043,1044. The published literature was searched for evidence of person-to-person transmission in healthcare and non-healthcare settings, with a focus on reported outbreaks that would assist in developing recommendations for all settings where healthcare is delivered. The criteria used to assign Transmission-Based Precautions categories follow:
- A Transmission-Based Precautions category was assigned if there was strong evidence for person-to-person transmission via droplet, contact, or airborne routes in healthcare or non-healthcare settings and/or if patient factors (e.g., diapered infants, diarrhea, draining wounds) increased the risk of transmission
- Transmission-Based Precautions category assignments reflect the predominant mode(s) of transmission
- If there was no evidence for person-to-person transmission by droplet, contact or airborne routes, Standard Precautions were assigned
- If there was a low risk for person-to-person transmission and no evidence of healthcare-associated transmission, Standard Precautions were assigned
- Standard Precautions were assigned for bloodborne pathogens (e.g., hepatitis B and C viruses, human immunodeficiency virus), as per CDC recommendations for Universal Precautions issued in 1988 780. Subsequent experience has confirmed the efficacy of Standard Precautions to prevent exposure to infected blood and body fluids 778,779,866.
Additional information relevant to use of precautions was added in the comments column to assist the caregiver in decision-making.
Citations were added as needed to support a change in, or provide additional evidence for, recommendations for a specific disease and for new infectious agents (e.g., SARS-CoV, avian influenza) that have been added to Appendix A. The reader may refer to more detailed discussion concerning modes of transmission and emerging pathogens in the background text and, for MDRO control, in Appendix B.

Gastroenteritis (noroviruses): Use Contact Precautions for diapered or incontinent persons for the duration of illness or to control institutional outbreaks. Persons who clean areas heavily contaminated with feces or vomitus may benefit from wearing masks, since virus can be aerosolized from these body substances 142,147,148; ensure consistent environmental cleaning and disinfection, with a focus on restrooms, even when they are apparently unsoiled 273,1064. Hypochlorite solutions may be required when there is continued transmission. Alcohol is less active, but there is no evidence that alcohol antiseptic handrubs are not effective for hand decontamination 294. Cohorting of affected patients to separate airspaces and toilet facilities may help interrupt transmission during outbreaks.

Malaria: Install screens in windows and doors in endemic areas. Use DEET-containing mosquito repellants and clothing to cover extremities.

Marburg virus disease (see viral hemorrhagic fevers)

Measles (rubeola): Airborne Precautions until 4 days after onset of rash; duration of illness in the immunocompromised. Susceptible HCWs should not enter the room if immune care providers are available; no recommendation for face protection for immune HCWs; no recommendation for type of face protection (i.e., mask or respirator) for susceptible HCWs 1027,1028. For exposed susceptible persons, post-exposure vaccine within 72 hours or immune globulin within 6 days, when available 17,1032,1034.

To ensure that appropriate empiric precautions are implemented always, hospitals must have systems in place to evaluate patients routinely according to these criteria as part of their preadmission and admission care.
† Patients with the syndromes or conditions listed below may present with atypical signs or symptoms (e.g., neonates and adults with pertussis may not have paroxysmal or severe cough). The clinician's index of suspicion should be guided by the prevalence of specific conditions in the community, as well as clinical judgment.
‡ The organisms listed under the column "Potential Pathogens" are not intended to represent the complete, or even most likely, diagnoses, but rather possible etiologic agents that require additional precautions beyond Standard Precautions until they can be ruled out.
§ These pathogens include enterohemorrhagic Escherichia coli O157:H7, Shigella spp., hepatitis A virus, noroviruses, rotavirus, and C. difficile.

# Disease Anthrax (Bacillus anthracis)
# Site(s) of Infection; Transmission Mode Cutaneous (contact with spores); RT (inhalation of spores); GIT (ingestion of spores - rare). Comment: Spores can be inhaled into the lower respiratory tract. The infectious dose of B. anthracis in humans by any route is not precisely known. In primates, the LD50 (i.e., the dose required to kill 50% of animals) for an aerosol challenge with B. anthracis is estimated to be 8,000-50,000 spores; the infectious dose may be as low as 1-3 spores.
# Incubation Period Cutaneous: 1 to 12 days; RT: usually 1 to 7 days, but up to 43 days reported; GIT: 15-72 hours
# Clinical Features Cutaneous: painless, reddish papule, which develops a central vesicle or bulla in 1-2 days; over the next 3-7 days the lesion becomes pustular and then necrotic, with black eschar and extensive surrounding edema. RT: initial flu-like illness for 1-3 days, with headache, fever, malaise, cough; by day 4, severe dyspnea and shock; usually fatal (85%-90%) if untreated; meningitis in 50% of RT cases. GIT: if intestinal form, necrotic, ulcerated, edematous lesions develop in the intestines, with fever, nausea and vomiting and progression to hematemesis and bloody diarrhea.

# Disease Botulism
# Site(s) of Infection; Transmission Mode GIT: ingestion of toxin-containing food; RT: inhalation of toxin-containing aerosol; either causes disease. Comment: Toxin ingested or potentially delivered by aerosol in bioterrorist incidents. LD50 for type A is 0.001 μg/mL/kg.
# Incubation Period 1-5 days.
# Clinical Features Ptosis, generalized weakness, dizziness, dry mouth and throat, blurred vision, diplopia, dysarthria, dysphonia, and dysphagia, followed by symmetrical descending paralysis and respiratory failure.
# Diagnosis Clinical diagnosis; identification of toxin in stool; serology, unless toxin-containing material is available for toxin neutralization bioassays.
# Infectivity Not transmitted from person to person. Exposure to toxin is necessary for disease.
# Recommended Precautions Standard Precautions.

# Disease Ebola Hemorrhagic Fever
# Site(s) of Infection; Transmission Mode As a rule, infection develops after exposure of mucous membranes or the RT, or through broken skin or percutaneous injury.
# Incubation Period 2-19 days, usually 5-10 days
# Clinical Features Febrile illness with malaise, myalgias, headache, vomiting and diarrhea that is rapidly complicated by hypotension, shock, and hemorrhagic features. Massive hemorrhage in <50% of patients.
# Diagnosis Etiologic diagnosis can be made using RT-PCR, serologic detection of antibody and antigen, pathologic assessment with immunohistochemistry, and viral culture with EM confirmation of morphology.
# Infectivity Person-to-person transmission primarily occurs through unprotected contact with blood and body fluids; percutaneous injuries (e.g., needlesticks) are associated with a high rate of transmission; transmission in healthcare settings has been reported but is prevented by use of barrier precautions.
# Recommended Precautions Hemorrhagic fever-specific barrier precautions: If the disease is believed to be related to intentional release of a bioweapon, the epidemiology of transmission is unpredictable pending observation of disease transmission. Until the nature of the pathogen is understood and its transmission pattern confirmed, Standard, Contact and Airborne Precautions should be used. Once the pathogen is characterized, if the epidemiology of transmission is consistent with natural disease, Droplet Precautions can be substituted for Airborne Precautions. Emphasize: 1) use of sharps safety devices and safe work practices; 2) hand hygiene; 3) barrier protection against blood and body fluids upon entry into the room (single gloves and fluid-resistant or impermeable gown, face/eye protection with masks, goggles or face shields); and 4) appropriate waste handling.
Use N95 or higher respirators when performing aerosol-generating procedures. In settings where AIIRs are unavailable or large numbers of patients cannot be accommodated by existing AIIRs, observe Droplet Precautions (plus Standard Precautions and Contact Precautions) and segregate patients from those not suspected of VHF infection. Limit blood draws to those essential to care. See text for discussion and Appendix A for recommendations for naturally occurring VHFs.

# Disease Plague 2
# Site(s) of Infection; Transmission Mode RT: inhalation of respiratory droplets. Comment: Pneumonic plague is most likely to occur if plague is used as a biological weapon, but some cases of bubonic and primary septicemic plague may also occur. Infective dose: 100 to 500 bacteria.
# Incubation Period 1 to 6 days, usually 2 to 3 days.
# Clinical Features Pneumonic: fever, chills, headache, cough, dyspnea, rapid progression of weakness, and, in a later stage, hemoptysis, circulatory collapse, and bleeding diathesis.
# Diagnosis Presumptive diagnosis from Gram stain or Wayson stain of sputum, blood, or lymph node aspirate; definitive diagnosis from cultures of the same material, or paired acute/convalescent serology.
# Infectivity Person-to-person transmission occurs via respiratory droplets; risk of transmission is low during the first 20-24 hours of illness and requires close contact. Respiratory secretions probably are not infectious within a few hours after initiation of appropriate therapy.
# Recommended Precautions Standard Precautions; Droplet Precautions until patients have received 48 hours of appropriate therapy. Chemoprophylaxis: consider antibiotic prophylaxis for HCWs with close-contact exposure. Comment: Persons with pneumonic plague transmit the infection when the disease is in the end stage. These persons cough copious amounts of bloody sputum that contains many plague bacteria. Patients in the early stage of primary pneumonic plague (approximately the first 20-24 hours) apparently pose little risk. Antibiotic medication rapidly clears the sputum of plague bacilli, so that a patient generally is no longer infective within hours after initiation of effective antibiotic treatment. This means that in modern times many patients will never reach a stage where they pose a significant risk to others. Even in the end stage of disease, transmission occurs only after close contact. Simple protective measures, such as wearing masks, good hygiene, and avoiding close contact, have been effective in interrupting transmission during many pneumonic plague outbreaks. In the United States, the last known cases of person-to-person transmission of pneumonic plague occurred in 1925.

Smallpox: Only immune HCWs should care for patients; post-exposure vaccine within 4 days. Vaccinia: HCWs cover the vaccination site with gauze and a semi-permeable dressing until the scab separates (>21 days); observe hand hygiene. Adverse events with virus-containing lesions: Standard plus Contact Precautions until all lesions are crusted.
b Transmission by the airborne route is a rare event; Airborne Precautions are recommended when possible, but in the event of mass exposures, barrier precautions and containment within a designated area are most important 204,212.
c Vaccinia adverse events with lesions containing infectious virus include inadvertent autoinoculation, ocular lesions (blepharitis, conjunctivitis), generalized vaccinia, progressive vaccinia, and eczema vaccinatum; bacterial superinfection also requires the addition of Contact Precautions if exudates cannot be contained 216,217.
# Disease Tularemia
# Clinical Features Pneumonic: malaise, cough, sputum production, dyspnea. Typhoidal: fever, prostration, weight loss, and frequently an associated pneumonia.
# Diagnosis Diagnosis is usually made with serology on acute and convalescent serum specimens; the bacterium can be detected by PCR (LRN) or isolated from blood and other body fluids on cysteine-enriched media or by mouse inoculation.
# Infectivity Person-to-person spread is rare. Laboratory workers who encounter/handle cultures of this organism are at high risk for disease if exposed.
# Recommended Precautions Standard Precautions.

- During aerosol-generating procedures on patients with suspected or proven infections transmitted by respiratory aerosols (e.g., SARS), wear a fit-tested N95 or higher respirator in addition to gloves, gown, and face/eye protection.
- Monitor air pressure with visual indicators (e.g., flutter strips, smoke tubes) or a hand-held pressure gauge
- Self-closing door on all room exits
- Maintain back-up ventilation equipment (e.g., portable units for fans or filters) for emergency provision of ventilation requirements for PE areas, and take immediate steps to restore the fixed ventilation system
- For patients who require both a PE and Airborne Infection Isolation, use an anteroom to ensure proper air-balance relationships and provide independent exhaust of contaminated air to the outside, or place a HEPA filter in the exhaust duct. If an anteroom is not available, place the patient in an AIIR and use portable ventilation units with industrial-grade HEPA filters to enhance filtration of spores.

# Glossary

Bioaerosols. An airborne dispersion of particles containing whole or parts of biological entities, such as bacteria, viruses, dust mites, fungal hyphae, or fungal spores. Such aerosols usually consist of a mixture of mono-dispersed and aggregate cells, spores or viruses, carried by other materials, such as respiratory secretions and/or inert particles. Infectious bioaerosols (i.e., those that contain biological agents capable of causing an infectious disease) can be generated from human sources (e.g., expulsion from the respiratory tract during coughing, sneezing, talking or singing, or during suctioning or wound irrigation), wet environmental sources (e.g., HVAC and cooling-tower water with Legionella), or dry sources (e.g., construction dust with spores produced by Aspergillus spp.). Bioaerosols include large respiratory droplets and small droplet nuclei (Cole EC. AJIC 1998;26:453-64).

Caregivers. All persons who are not employees of an organization, are not paid, and provide or assist in providing healthcare to a patient (e.g., family member, friend), and who acquire technical training as needed based on the tasks that must be performed.

Cohorting. In the context of this guideline, this term applies to the practice of grouping patients infected or colonized with the same infectious agent together to confine their care to one area and prevent contact with susceptible patients (cohorting patients). During outbreaks, healthcare personnel may be assigned to a cohort of patients to further limit opportunities for transmission (cohorting staff).

Colonization. Proliferation of microorganisms on or within body sites without detectable host immune response, cellular damage, or clinical expression. The presence of a microorganism within a host may occur with varying duration, but may become a source of potential transmission. In many instances, colonization and carriage are synonymous.

Droplet nuclei.
Microscopic particles <5 µm in size that are the residue of evaporated droplets and are produced when a person coughs, sneezes, shouts, or sings. These particles can remain suspended in the air for prolonged periods of time and can be carried on normal air currents in a room or beyond, to adjacent spaces or areas receiving exhaust air.

Hand hygiene. A general term that applies to any one of the following: 1) handwashing with plain (non-antimicrobial) soap and water; 2) antiseptic handwash (soap containing antiseptic agents and water); 3) antiseptic handrub (waterless antiseptic product, most often alcohol-based, rubbed on all surfaces of hands); or 4) surgical hand antisepsis (antiseptic handwash or antiseptic handrub performed preoperatively by surgical personnel to eliminate transient hand flora and reduce resident hand flora) 559.

Healthcare-associated infection (HAI). An infection that develops in a patient who is cared for in any setting where healthcare is delivered (e.g., acute care hospital, chronic care facility, ambulatory clinic, dialysis center, surgicenter, home) and is related to receiving health care (i.e., was not incubating or present at the time healthcare was provided). In ambulatory and home settings, HAI would apply to any infection that is associated with a medical or surgical intervention. Since the geographic location of infection acquisition is often uncertain, the preferred term is considered to be healthcare-associated rather than healthcare-acquired.

Hematopoietic stem cell transplantation (HSCT). Any transplantation of blood- or bone marrow-derived hematopoietic stem cells, regardless of donor type (e.g., allogeneic or autologous) or cell source (e.g., bone marrow, peripheral blood, or placental/umbilical cord blood); associated with periods of severe immunosuppression that vary with the source of the cells, the intensity of chemotherapy required, and the presence of graft-versus-host disease (MMWR 2000;49:RR-10).

High-efficiency particulate air (HEPA) filter. An air filter that removes >99.97% of particles ≥0.3 µm (the most penetrating particle size) at a specified flow rate of air. HEPA filters may be integrated into the central air-handling system, installed at the point of use above the ceiling of a room, or used as portable units (MMWR 2003;52:RR-10).

Home care. A wide range of medical, nursing, rehabilitation, hospice and social services delivered to patients in their place of residence (e.g., private residence, senior living center, assisted living facility). Home health-care services include care provided by home health aides and skilled nurses, respiratory therapists, dieticians, physicians, chaplains, and volunteers; provision of durable medical equipment; home infusion therapy; and physical, speech, and occupational therapy.

Immunocompromised patients. Those patients whose immune mechanisms are deficient because of congenital or acquired immunologic disorders (e.g., human immunodeficiency virus infection, congenital immune deficiency syndromes), chronic diseases such as diabetes mellitus, cancer, emphysema, or cardiac failure, ICU care, malnutrition, or immunosuppressive therapy of another disease process. The type of infections for which an immunocompromised patient has increased susceptibility is determined by the severity of immunosuppression and the specific component(s) of the immune system that is affected.
Patients undergoing allogeneic HSCT and those with chronic graft-versus-host disease are considered the most vulnerable to HAIs. Immunocompromised states also make it more difficult to diagnose certain infections (e.g., tuberculosis) and are associated with more severe clinical disease than occurs in persons with the same infection and a normal immune system.

Infection. The transmission of microorganisms into a host after evading or overcoming defense mechanisms, resulting in the organism's proliferation and invasion within host tissue(s). Host responses to infection may include clinical symptoms or may be subclinical, with manifestations of disease mediated by direct organism pathogenesis and/or a function of cell-mediated or antibody responses that result in the destruction of host tissues.

Infection control and prevention professional (ICP). A person whose primary training is in either nursing, medical technology, microbiology, or epidemiology and who has acquired special training in infection control. Responsibilities may include collection, analysis, and feedback of infection data and trends to healthcare providers; consultation on infection risk assessment, prevention and control strategies; performance of education and training activities; implementation of evidence-based infection control practices or those mandated by regulatory and licensing agencies; application of epidemiologic principles to improve patient outcomes; participation in planning renovation and construction projects (e.g., to ensure appropriate containment of construction dust); evaluation of new products or procedures on patient outcomes; oversight of employee health services related to infection prevention; implementation of preparedness plans; communication within the healthcare setting, with local and state health departments, and with the community at large concerning infection control issues; and participation in research. Certification in infection control (CIC) is available through the Certification Board of Infection Control and Epidemiology.

Infection control and prevention program. A multidisciplinary program that includes a group of activities to ensure that recommended practices for the prevention of healthcare-associated infections are implemented and followed by HCWs, making the healthcare setting safe from infection for patients and healthcare personnel. The Joint Commission on Accreditation of Healthcare Organizations (JCAHO) requires the following five components of an infection control program for accreditation: 1) surveillance: monitoring patients and healthcare personnel for acquisition of infection and/or colonization; 2) investigation: identification and analysis of infection problems or undesirable trends; 3) prevention: implementation of measures to prevent transmission of infectious agents and to reduce risks for device- and procedure-related infections; 4) control: evaluation and management of outbreaks; and 5) reporting: provision of information to external agencies as required by state and federal law and regulation (www.jcaho.org). The infection control program staff has the ultimate authority to determine infection control policies for a healthcare organization, with the approval of the organization's governing body.

Long-term care facilities (LTCFs). An array of residential and outpatient facilities designed to meet the bio-psychosocial needs of persons with sustained self-care deficits.
These include skilled nursing facilities, chronic disease hospitals, nursing homes, foster and group homes, institutions for the developmentally disabled, residential care facilities, assisted living facilities, retirement homes, adult day health care facilities, rehabilitation centers, and long-term psychiatric hospitals.

Mask. A term that applies collectively to items used to cover the nose and mouth; it includes both procedure masks and surgical masks (www.fda.gov/cdrh/ode/guidance/094.html#4).

Multidrug-resistant organisms (MDROs). In general, bacteria that are resistant to one or more classes of antimicrobial agents and usually are resistant to all but one or two commercially available antimicrobial agents (e.g., MRSA, VRE, extended-spectrum beta-lactamase-producing or intrinsically resistant gram-negative bacilli) 176.

Nosocomial infection. A term derived from two Greek words, "nosos" (disease) and "komeion" (to take care of), that refers to any infection that develops during or as a result of an admission to an acute care facility (hospital) and was not incubating at the time of admission.

Personal protective equipment (PPE). A variety of barriers used alone or in combination to protect mucous membranes, skin, and clothing from contact with infectious agents. PPE includes gloves, masks, respirators, goggles, face shields, and gowns.

Procedure mask. A covering for the nose and mouth that is intended for use in general patient care situations. These masks generally attach to the face with ear loops rather than ties or elastic. Unlike surgical masks, procedure masks are not regulated by the Food and Drug Administration.

Protective Environment. A specialized patient-care area, usually in a hospital, that has a positive airflow relative to the corridor (i.e., air flows from the room to the outside adjacent space). The combination of high-efficiency particulate air (HEPA) filtration, high numbers (>12) of air changes per hour (ACH), and minimal leakage of air into the room creates an environment that can safely accommodate patients with a severely compromised immune system (e.g., those who have received allogeneic hematopoietic stem cell transplants) and decrease the risk of exposure to spores produced by environmental fungi. Other components include use of scrubbable surfaces instead of materials such as upholstery or carpeting, cleaning to prevent dust accumulation, and prohibition of fresh flowers or potted plants.

Quasi-experimental studies. Studies that evaluate interventions but do not use randomization as part of the study design; also referred to as nonrandomized, pre-post-intervention study designs. These studies aim to demonstrate causality between an intervention and an outcome but cannot achieve the level of confidence concerning attributable benefit obtained through a randomized, controlled trial. In hospitals and public health settings, randomized controlled trials often cannot be implemented for ethical, practical, or urgency reasons; therefore, quasi-experimental designs are used commonly. However, even if an intervention appears to be statistically effective, the possibility of alternative explanations for the result can be raised. Such a design is used when it is not logistically feasible or ethically possible to conduct a randomized, controlled trial (e.g., during outbreaks).
Within the classification of quasi-experimental study designs, there is a hierarchy of design features that may contribute to the validity of results (Harris et al. CID 2004;38:1586).

Residential care setting. A facility in which people live, minimal medical care is delivered, and the psychosocial needs of the residents are provided for.

Respirator. A personal protective device worn by healthcare personnel to protect them from inhalation exposure to airborne infectious agents that are <5 μm in size. These include infectious droplet nuclei from patients with M. tuberculosis, variola virus (smallpox), or SARS-CoV, and dust particles that contain infectious particles, such as spores of environmental fungi (e.g., Aspergillus sp.).

Respiratory Hygiene/Cough Etiquette. A combination of measures designed to minimize the transmission of respiratory pathogens via droplet or airborne routes in healthcare settings. The components of Respiratory Hygiene/Cough Etiquette are 1) covering the mouth and nose during coughing and sneezing, 2) using tissues to contain respiratory secretions, with prompt disposal into a no-touch receptacle, 3) offering a surgical mask to persons who are coughing, to decrease contamination of the surrounding environment, and 4) turning the head away from others and maintaining spatial separation, ideally >3 feet, when coughing. These measures are targeted to all patients with symptoms of respiratory infection and their accompanying family members or friends, beginning at the point of initial encounter with a healthcare setting (e.g., reception/triage in emergency departments, ambulatory clinics, healthcare provider offices) 126 (Srinivasan A. ICHE 2004;25:1020; www.cdc.gov/flu/professionals/infectioncontrol/resphygiene.htm).

Safety culture/climate. The shared perceptions of workers and management regarding the expectations of safety in the work environment. A hospital safety climate includes the following six organizational components: 1) senior management support for safety programs; 2) absence of workplace barriers to safe work practices; 3) cleanliness and orderliness of the worksite; 4) minimal conflict and good communication among staff members; 5) frequent safety-related feedback/training by supervisors; and 6) availability of PPE and engineering controls 620.

Source control. The process of containing an infectious agent either at the portal of exit from the body or within a confined space. The term is applied most frequently to containment of infectious agents transmitted by the respiratory route but could apply to other routes of transmission (e.g., a draining wound, vesicular or bullous skin lesions). Respiratory Hygiene/Cough Etiquette, which encourages individuals to "cover your cough" and/or wear a mask, is a source control measure. The use of enclosing devices for local exhaust ventilation (e.g., booths for sputum induction or administration of aerosolized medication) is another example of source control.

Standard Precautions. A group of infection prevention practices that apply to all patients, regardless of suspected or confirmed diagnosis or presumed infection status. Standard Precautions is a combination and expansion of Universal Precautions 780 and Body Substance Isolation 1102. Standard Precautions is based on the principle that all blood, body fluids, secretions, excretions except sweat, nonintact skin, and mucous membranes may contain transmissible infectious agents.
Standard Precautions includes hand hygiene and, depending on the anticipated exposure, use of gloves, gown, mask, eye protection, or face shield. Also, equipment or items in the patient environment likely to have been contaminated with infectious fluids must be handled in a manner that prevents transmission of infectious agents (e.g., wear gloves for handling, contain heavily soiled equipment, and properly clean and disinfect or sterilize reusable equipment before use on another patient).

Surgical mask. A device worn over the mouth and nose by operating room personnel during surgical procedures to protect both surgical patients and operating room personnel from transfer of microorganisms and body fluids. Surgical masks also are used to protect healthcare personnel from contact with large infectious droplets (>5 μm in size). According to draft guidance issued by the Food and Drug Administration on May 15, 2003, surgical masks are evaluated using standardized testing procedures for fluid resistance, bacterial filtration efficiency, differential pressure (air exchange), and flammability, in order to mitigate the risks to health associated with the use of surgical masks. These specifications apply to any masks that are labeled surgical, laser, isolation, or dental or medical procedure masks (www.fda.gov/cdrh/ode/guidance/094.html#4). Surgical masks do not protect against inhalation of small particles or droplet nuclei and should not be confused with particulate respirators that are recommended for protection against selected airborne infectious agents (e.g., Mycobacterium tuberculosis).

[Figure: recommended sequence for donning and removing PPE. Perform hand hygiene immediately after removing all PPE.]
The term nosocomial infection is retained to refer only to infections acquired in hospitals. The term healthcare-associated infection (HAI) is used to refer to infections associated with healthcare delivery in any setting (e.g., hospitals, long-term care facilities, ambulatory settings, home care). This term reflects the inability to determine with certainty where the pathogen was acquired, since patients may be colonized with or exposed to potential pathogens outside of the healthcare setting, before receiving health care, or may develop infections caused by those pathogens when exposed to the conditions associated with delivery of healthcare. Additionally, patients frequently move among the various settings within a healthcare system 8.

A new addition to the practice recommendations for Standard Precautions is Respiratory Hygiene/Cough Etiquette. While Standard Precautions generally apply to the recommended practices of healthcare personnel during patient care, Respiratory Hygiene/Cough Etiquette applies broadly to all persons who enter a healthcare setting, including healthcare personnel, patients and visitors. These recommendations evolved from observations during the SARS epidemic that failure to implement basic source control measures with patients, visitors, and healthcare personnel with signs and symptoms of respiratory tract infection may have contributed to SARS coronavirus (SARS-CoV) transmission. This concept has been incorporated into CDC planning documents for SARS and pandemic influenza 9,10.

The term "Airborne Precautions" has been supplemented with the term "Airborne Infection Isolation Room (AIIR)" for consistency with the Guidelines for Environmental Infection Control in Healthcare Facilities 11, the Guidelines for Preventing the Transmission of Mycobacterium tuberculosis in Health-Care Settings, 2005 12, and the American Institute of Architects (AIA) guidelines for design and construction of hospitals, 2006 13.

A set of prevention measures termed Protective Environment has been added to the precautions used to prevent HAIs. These measures, which have been defined in other guidelines, consist of engineering and design interventions that decrease the risk of exposure to environmental fungi for severely immunocompromised allogeneic hematopoietic stem cell transplant (HSCT) patients during their highest-risk phase, usually the first 100 days post-transplant, or longer in the presence of graft-versus-host disease 11,[13][14][15]. Recommendations for a Protective Environment apply only to acute care hospitals that provide care to HSCT patients.

# Scope
This guideline, like its predecessors, focuses primarily on interactions between patients and healthcare providers. The Guidelines for the Prevention of MDRO Infection were published separately in November 2006, and are available online at www.cdc.gov/ncidod/dhqp/index.html. Several other HICPAC guidelines to prevent transmission of infectious agents associated with healthcare delivery are cited; e.g., Guideline for Hand Hygiene, Guideline for Environmental Infection Control, Guideline for Prevention of Healthcare-Associated Pneumonia, and Guideline for Infection Control in Healthcare Personnel 11,14,16,17. In combination, these provide comprehensive guidance on the primary infection control measures for ensuring a safe environment for patients and healthcare personnel. This guideline does not discuss in detail specialized infection control issues in defined populations that are addressed elsewhere (e.g., Recommendations for Preventing Transmission of Infections among Chronic Hemodialysis Patients, Guidelines for Preventing the Transmission of Mycobacterium tuberculosis in Health-Care Settings, 2005, Guidelines for Infection Control in Dental Health-Care Settings, and Infection Control Recommendations for Patients with Cystic Fibrosis 12,[18][19][20]). An exception has been made by including abbreviated guidance for a Protective Environment used for allogeneic HSCT recipients, because components of the Protective Environment have been more completely defined since publication of the Guidelines for Preventing Opportunistic Infections Among HSCT Recipients in 2000 and the Guideline for Environmental Infection Control in Healthcare Facilities 11,15.
# EXECUTIVE SUMMARY
The Guideline for Isolation Precautions: Preventing Transmission of Infectious Agents in Healthcare Settings updates and expands the 1996 Guideline for Isolation Precautions in Hospitals. The following developments led to revision of the 1996 guideline:
1. The transition of healthcare delivery from primarily acute care hospitals to other healthcare settings (e.g., home care, ambulatory care, free-standing specialty care sites, long-term care) created a need for recommendations that can be applied in all healthcare settings using common principles of infection control practice, yet can be modified to reflect setting-specific needs. Accordingly, the revised guideline addresses the spectrum of healthcare delivery settings. Furthermore, the term "nosocomial infections" is replaced by "healthcare-associated infections" (HAIs) to reflect the changing patterns in healthcare delivery and the difficulty in determining the geographic site of exposure to an infectious agent and/or acquisition of infection.
2. The emergence of new pathogens (e.g., SARS-CoV associated with the severe acute respiratory syndrome [SARS], avian influenza in humans), renewed concern for evolving known pathogens (e.g., C. difficile, noroviruses, community-associated MRSA [CA-MRSA]), development of new therapies (e.g., gene therapy), and increasing concern for the threat of bioweapons attacks established a need to address a broader scope of issues than in previous isolation guidelines.
3. The successful experience with Standard Precautions, first recommended in the 1996 guideline, has led to a reaffirmation of this approach as the foundation for preventing transmission of infectious agents in all healthcare settings. New additions to the recommendations for Standard Precautions are Respiratory Hygiene/Cough Etiquette and safe injection practices, including the use of a mask when performing certain high-risk, prolonged procedures involving spinal canal punctures (e.g., myelography, epidural anesthesia). The need for a recommendation for Respiratory Hygiene/Cough Etiquette grew out of observations during the SARS outbreaks, where failure to implement simple source control measures with patients, visitors, and healthcare personnel with respiratory symptoms may have contributed to SARS coronavirus (SARS-CoV) transmission. The recommended practices have a strong evidence base. The continued occurrence of outbreaks of hepatitis B and hepatitis C viruses in ambulatory settings indicated a need to reiterate safe injection practice recommendations as part of Standard Precautions. The addition of a mask for certain spinal injections grew from recent evidence of an associated risk for developing meningitis caused by respiratory flora.
4. The accumulated evidence that environmental controls decrease the risk of life-threatening fungal infections in the most severely immunocompromised patients (allogeneic hematopoietic stem-cell transplant patients) led to the update on the components of the Protective Environment (PE).
5. Evidence that organizational characteristics (e.g., nurse staffing levels and composition, establishment of a safety culture) influence healthcare personnel adherence to recommended infection control practices, and therefore are important factors in preventing transmission of infectious agents, led to a new emphasis and recommendations for administrative involvement in the development and support of infection control programs.
6. The continued increase in the incidence of HAIs caused by multidrug-resistant organisms (MDROs) in all healthcare settings, and the expanded body of knowledge concerning prevention of transmission of MDROs, created a need for more specific recommendations for surveillance and control of these pathogens that would be practical and effective in various types of healthcare settings.

This document is intended for use by infection control staff, healthcare epidemiologists, healthcare administrators, nurses, other healthcare providers, and persons responsible for developing, implementing, and evaluating infection control programs for healthcare settings across the continuum of care. The reader is referred to other guidelines and websites for more detailed information and for recommendations concerning specialized infection control problems.

# Parts I-III: Review of the Scientific Data Regarding Transmission of Infectious Agents in Healthcare Settings
Part I reviews the relevant scientific literature that supports the recommended prevention and control practices. As with the 1996 guideline, the modes and factors that influence transmission risks are described in detail. New to the section on transmission are discussions of bioaerosols and of how droplet and airborne transmission may contribute to infection transmission. This became a concern during the SARS outbreaks of 2003, when transmission associated with aerosol-generating procedures was observed. Also new is a definition of "epidemiologically important organisms" that was developed to assist in the identification of clusters of infections that require investigation (i.e., multidrug-resistant organisms, C. difficile). Several other pathogens that hold special infection control interest (i.e., norovirus, SARS, Category A bioterrorist agents, prions, monkeypox, and the hemorrhagic fever viruses) also are discussed, to present new information and infection control lessons learned from experience with these agents. This section of the guideline also presents information on infection risks associated with specific healthcare settings and patient populations.

Part II updates information on the basic principles of hand hygiene, barrier precautions, safe work practices and isolation practices that were included in previous guidelines. New to this guideline, however, is important information on healthcare system components that influence transmission risks, including those under the influence of healthcare administrators. An important administrative priority that is described is the need for appropriate infection control staffing to meet the ever-expanding role of infection control professionals in the modern, complex healthcare system. Evidence presented also demonstrates another administrative concern: the importance of nurse staffing levels, including numbers of appropriately trained nurses in ICUs, for preventing HAIs. The role of the clinical microbiology laboratory in supporting infection control is described to emphasize the need for this service in healthcare facilities. Other factors that influence transmission risks are discussed (i.e., healthcare worker adherence to recommended infection control practices, organizational safety culture or climate, and education and training). Discussed for the first time in an isolation guideline is surveillance of healthcare-associated infections.
The information presented will be useful to new infection control professionals as well as persons involved in designing or responding to state programs for public reporting of HAI rates.

Part III describes each of the categories of precautions developed by the Healthcare Infection Control Practices Advisory Committee (HICPAC) and the Centers for Disease Control and Prevention (CDC) and provides guidance for their application in various healthcare settings. The categories of Transmission-Based Precautions are unchanged from those in the 1996 guideline: Contact, Droplet, and Airborne. One important change is the recommendation to don the indicated personal protective equipment (gowns, gloves, mask) upon entry into the patient's room for patients who are on Contact and/or Droplet Precautions, since the nature of the interaction with the patient cannot be predicted with certainty and contaminated environmental surfaces are important sources for transmission of pathogens. In addition, the Protective Environment (PE) for allogeneic hematopoietic stem cell transplant patients, described in previous guidelines, has been updated.

# Tables, Appendices, and other Information
There are several tables that summarize important information: 1) a summary of the evolution of this document; 2) guidance on using empiric isolation precautions according to a clinical syndrome; 3) a summary of infection control recommendations for Category A agents of bioterrorism; 4) components of Standard Precautions and recommendations for their application; 5) components of the Protective Environment; and 6) a glossary of definitions used in this guideline. New in this guideline is a figure that shows a recommended sequence for donning and removing personal protective equipment used for isolation precautions, to optimize safety and prevent self-contamination during removal.

# Appendix A: Type and Duration of Precautions Recommended for Selected Infections and Conditions
Appendix A consists of an updated alphabetical list of most infectious agents and clinical conditions for which isolation precautions are recommended. A preamble to the Appendix provides a rationale for recommending the use of one or more Transmission-Based Precautions, in addition to Standard Precautions, based on a review of the literature and evidence demonstrating a real or potential risk for person-to-person transmission in healthcare settings. The type and duration of recommended precautions are presented, with additional comments concerning the use of adjunctive measures or other relevant considerations to prevent transmission of the specific agent. Relevant citations are included.

# Pre-Publication of the Guideline on Preventing Transmission of MDROs
New to this guideline is a comprehensive review and detailed recommendations for prevention of transmission of MDROs. This portion of the guideline was published electronically in October 2006 and updated in November 2006 (Siegel JD, Rhinehart E, Jackson M, Chiarello L, and HICPAC. Management of Multidrug-Resistant Organisms in Healthcare Settings 2006. www.cdc.gov/ncidod/dhqp/pdf/ar/mdroGuideline2006.pdf), and is considered a part of the Guideline for Isolation Precautions. This section provides a detailed review of the complex topic of MDRO control in healthcare settings and is intended to provide a context for evaluation of MDROs at individual healthcare settings. A rationale and institutional requirements for developing an effective MDRO control program are summarized.
Although the focus of this guideline is on measures to prevent transmission of MDROs in healthcare settings, information concerning the judicious use of antimicrobial agents is presented, since such practices are intricately related to the size of the reservoir of MDROs, which in turn influences transmission (e.g., colonization pressure). There are two tables that summarize recommended prevention and control practices using the following seven categories of interventions to control MDROs: administrative measures, education of healthcare personnel, judicious antimicrobial use, surveillance, infection control precautions, environmental measures, and decolonization. Recommendations for each category apply to, and are adapted for, the various healthcare settings. With the increasing incidence and prevalence of MDROs, all healthcare facilities must prioritize effective control of MDRO transmission. Facilities should identify prevalent MDROs at the facility, implement control measures, assess the effectiveness of control programs, and demonstrate decreasing MDRO rates. A set of intensified MDRO prevention interventions is presented, to be added 1) if the incidence of transmission of a target MDRO is NOT decreasing despite implementation of basic MDRO infection control measures, and 2) when the first case(s) of an epidemiologically important MDRO is identified within a healthcare facility.

# Summary
This updated guideline responds to changes in healthcare delivery and addresses new concerns about transmission of infectious agents to patients and healthcare workers in the United States. The primary objective of the guideline is to improve the safety of the nation's healthcare delivery system by reducing the rates of HAIs.

# Objectives and methods
The objectives of this guideline are to 1) provide infection control recommendations for all components of the healthcare delivery system, including hospitals, long-term care facilities, ambulatory care, home care and hospice; 2) reaffirm Standard Precautions as the foundation for preventing transmission during patient care in all healthcare settings; 3) reaffirm the importance of implementing Transmission-Based Precautions based on the clinical presentation or syndrome and likely pathogens until the infectious etiology has been determined (Table 2); and 4) provide epidemiologically sound and, whenever possible, evidence-based recommendations. This guideline is designed for use by individuals who are charged with administering infection control programs in hospitals and other healthcare settings. The information also will be useful for other healthcare personnel, healthcare administrators, and anyone needing information about infection control measures to prevent transmission of infectious agents. Commonly used abbreviations are listed separately, and terms used in the guideline are defined in the Glossary. MEDLINE and PubMed were used to search for relevant studies published in English, focusing on those published since 1996. Much of the evidence cited for preventing transmission of infectious agents in healthcare settings is derived from studies that used "quasi-experimental designs," also referred to as nonrandomized, pre-post-intervention study designs 2. Although these types of studies can provide valuable information regarding the effectiveness of various interventions, several factors decrease the certainty of attributing improved outcome to a specific intervention.
These include: difficulties in controlling for important confounding variables; the use of multiple interventions during an outbreak; and results that are explained by the statistical principle of regression to the mean (e.g., improvement over time without any intervention) 3. Observational studies remain relevant and have been used to evaluate infection control interventions 4,5. The quality of studies, consistency of results, and correlation with results from randomized, controlled trials, when available, were considered during the literature review and assignment of evidence-based categories (see Part IV: Recommendations) to the recommendations in this guideline. Several authors have summarized properties to consider when evaluating studies for the purpose of determining if the results should change practice or in designing new studies 2,6,7.

# I.B. Rationale for Standard and Transmission-Based Precautions in healthcare settings
Transmission of infectious agents within a healthcare setting requires three elements: a source (or reservoir) of infectious agents, a susceptible host with a portal of entry receptive to the agent, and a mode of transmission for the agent. This section describes the interrelationship of these elements in the epidemiology of HAIs.

# I.B.1. Sources of infectious agents
Infectious agents transmitted during healthcare derive primarily from human sources, but inanimate environmental sources also are implicated in transmission. Human reservoirs include patients [20][21][22][23][24][25][26][27][28], healthcare personnel 17,29-39, and household members and other visitors [40][41][42][43][44][45]. Such source individuals may have active infections, may be in the asymptomatic and/or incubation period of an infectious disease, or may be transiently or chronically colonized with pathogenic microorganisms, particularly in the respiratory and gastrointestinal tracts. The endogenous flora of patients (e.g., bacteria residing in the respiratory or gastrointestinal tract) also are the source of HAIs [46][47][48][49][50][51][52][53][54].

# I.B.2. Susceptible hosts
Infection is the result of a complex interrelationship between a potential host and an infectious agent. Most of the factors that influence infection and the occurrence and severity of disease are related to the host. However, characteristics of the host-agent interaction as it relates to pathogenicity, virulence and antigenicity are also important, as are the infectious dose, mechanisms of disease production, and route of exposure 55. There is a spectrum of possible outcomes following exposure to an infectious agent. Some persons exposed to pathogenic microorganisms never develop symptomatic disease, while others become severely ill and even die. Some individuals are prone to becoming transiently or permanently colonized but remain asymptomatic. Still others progress from colonization to symptomatic disease either immediately following exposure or after a period of asymptomatic colonization. The immune state at the time of exposure to an infectious agent, interaction between pathogens, and virulence factors intrinsic to the agent are important predictors of an individual's outcome. Host factors such as extremes of age and underlying disease (e.g., diabetes 56,57), human immunodeficiency virus/acquired immune deficiency syndrome [HIV/AIDS] 58,59, malignancy, and transplants 18,60,61 can increase susceptibility to infection, as do a variety of medications that alter the normal flora (e.g., antimicrobial agents, gastric acid suppressants, corticosteroids, antirejection drugs, antineoplastic agents, and immunosuppressive drugs). Surgical procedures and radiation therapy impair defenses of the skin and other involved organ systems. Indwelling devices such as urinary catheters, endotracheal tubes, central venous and arterial catheters [62][63][64], and synthetic implants facilitate development of HAIs by allowing potential pathogens to bypass local defenses that would ordinarily impede their invasion and by providing surfaces for development of biofilms that may facilitate adherence of microorganisms and protect them from antimicrobial activity 65. Some infections associated with invasive procedures result from transmission within the healthcare facility; others arise from the patient's endogenous flora [46][47][48][49][50]. High-risk patient populations with noteworthy risk factors for infection are discussed further in Sections I.D, I.E, and I.F.

# I.B.3. Modes of transmission
Several classes of pathogens can cause infection, including bacteria, viruses, fungi, parasites, and prions. The modes of transmission vary by type of organism, and some infectious agents may be transmitted by more than one route: some are transmitted primarily by direct or indirect contact (e.g., Herpes simplex virus [HSV], respiratory syncytial virus, Staphylococcus aureus), others by the droplet route (e.g., influenza virus, B. pertussis) or airborne route (e.g., M. tuberculosis). Other infectious agents, such as bloodborne viruses (e.g., hepatitis B and C viruses [HBV, HCV] and HIV), are transmitted rarely in healthcare settings, via percutaneous or mucous membrane exposure. Importantly, not all infectious agents are transmitted from person to person. These are distinguished in Appendix A. The three principal routes of transmission are summarized below.

# I.B.3.a. Contact transmission
The most common mode of transmission, contact transmission is divided into two subgroups: direct contact and indirect contact.

# I.B.3.a.i. Direct contact transmission
Direct transmission occurs when microorganisms are transferred from one infected person to another person without a contaminated intermediate object or person. Opportunities for direct contact transmission between patients and healthcare personnel have been summarized in the Guideline for Infection Control in Healthcare Personnel, 1998 17, and include:
• blood or other blood-containing body fluids from a patient directly entering a caregiver's body through contact with a mucous membrane 66 or breaks (i.e., cuts, abrasions) in the skin 67;
• mites from a scabies-infested patient being transferred to the skin of a caregiver while he/she is having direct, ungloved contact with the patient's skin 68,69;
• a healthcare provider developing herpetic whitlow on a finger after contact with HSV when providing oral care to a patient without using gloves, or HSV being transmitted to a patient from a herpetic whitlow on an ungloved hand of a healthcare worker (HCW) 70,71.

# I.B.3.a.ii. Indirect contact transmission
Indirect transmission involves the transfer of an infectious agent through a contaminated intermediate object or person. In the absence of a point-source outbreak, it is difficult to determine how indirect transmission occurs. However, extensive evidence cited in the Guideline for Hand Hygiene in Health-Care Settings suggests that the contaminated hands of healthcare personnel are important contributors to indirect contact transmission 16. Examples of opportunities for indirect contact transmission include:
• Hands of healthcare personnel may transmit pathogens after touching an infected or colonized body site on one patient or a contaminated inanimate object, if hand hygiene is not performed before touching another patient 72,73.
• Patient-care devices (e.g., electronic thermometers, glucose monitoring devices) may transmit pathogens if devices contaminated with blood or body fluids are shared between patients without cleaning and disinfecting between patients 74-77.
• Shared toys may become a vehicle for transmitting respiratory viruses (e.g., respiratory syncytial virus 24,78,79) or pathogenic bacteria (e.g., Pseudomonas aeruginosa 80) among pediatric patients.
• Instruments that are inadequately cleaned between patients before disinfection or sterilization (e.g., endoscopes or surgical instruments) [81][82][83][84][85], or that have manufacturing defects that interfere with the effectiveness of reprocessing 86,87, may transmit bacterial and viral pathogens.
• Clothing, uniforms, laboratory coats, or isolation gowns used as personal protective equipment (PPE) may become contaminated with potential pathogens after care of a patient colonized or infected with an infectious agent (e.g., MRSA 88, VRE 89, and C. difficile 90). Although contaminated clothing has not been implicated directly in transmission, the potential exists for soiled garments to transfer infectious agents to successive patients.

# I.B.3.b. Droplet transmission
Droplet transmission is, technically, a form of contact transmission, and some infectious agents transmitted by the droplet route also may be transmitted by the direct and indirect contact routes. However, in contrast to contact transmission, respiratory droplets carrying infectious pathogens transmit infection when they travel directly from the respiratory tract of the infectious individual to susceptible mucosal surfaces of the recipient, generally over short distances, necessitating facial protection.
Respiratory droplets are generated when an infected person coughs, sneezes, or talks 91,92 or during procedures such as suctioning, endotracheal intubation, [93][94][95][96] , cough induction by chest physiotherapy 97 and cardiopulmonary resuscitation 98,99 . Evidence for droplet transmission comes from epidemiological studies of disease outbreaks [100][101][102][103] , experimental studies 104 and from information on aerosol dynamics 91,105 . Studies have shown that the nasal mucosa, conjunctivae and less frequently the mouth, are susceptible portals of entry for respiratory viruses 106 . The maximum distance for droplet transmission is currently unresolved, although pathogens transmitted by the droplet route have not been transmitted through the air over long distances, in contrast to the airborne pathogens discussed below. Historically, the area of defined risk has been a distance of <3 feet around the patient and is based on epidemiologic and simulated studies of selected infections 103,104 . Using this distance for donning masks has been effective in preventing transmission of infectious agents via the droplet route. However, experimental studies with smallpox 107,108 and investigations during the global SARS outbreaks of 2003 101 suggest that droplets from patients with these two infections could reach persons located 6 feet or more from their source. It is likely that the distance droplets travel depends on the velocity and mechanism by which respiratory droplets are propelled from the source, the density of respiratory secretions, environmental factors such as temperature and humidity, and the ability of the pathogen to maintain infectivity over that distance 105 . Thus, a distance of <3 feet around the patient is best viewed as an example of what is meant by "a short distance from a patient" and should not be used as the sole criterion for deciding when a mask should be donned to protect from droplet exposure. Based on these considerations, it may be prudent to don a mask when within 6 to 10 feet of the patient or upon entry into the patient's room, especially when exposure to emerging or highly virulent pathogens is likely. More studies are needed to improve understanding of droplet transmission under various circumstances. Droplet size is another variable under discussion. Droplets traditionally have been defined as being >5 µm in size. Droplet nuclei, particles arising from desiccation of suspended droplets, have been associated with airborne transmission and defined as <5 µm in size 105 , a reflection of the pathogenesis of pulmonary tuberculosis which is not generalizeable to other organisms. Observations of particle dynamics have demonstrated that a range of droplet sizes, including those with diameters of 30µm or greater, can remain suspended in the air 109 . The behavior of droplets and droplet nuclei affect recommendations for preventing transmission. Whereas fine airborne particles containing pathogens that are able to remain infective may transmit infections over long distances, requiring AIIR to prevent its dissemination within a facility; organisms transmitted by the droplet route do not remain infective over long distances, and therefore do not require special air handling and ventilation. 
Examples of infectious agents that are transmitted via the droplet route include Bordetella pertussis 110 , influenza virus 23 , adenovirus 111 , rhinovirus 104 , Mycoplasma pneumoniae 112 , SARS-associated coronavirus (SARS-CoV) 21,96,113 , group A streptococcus 114 , and Neisseria meningitidis 95,103,115 . Although respiratory syncytial virus may be transmitted by the droplet route, direct contact with infected respiratory secretions is the most important determinant of transmission and consistent adherence to Standard plus Contact Precautions prevents transmission in healthcare settings 24,116,117 . Rarely, pathogens that are not transmitted routinely by the droplet route are dispersed into the air over short distances. For example, although S. aureus is transmitted most frequently by the contact route, viral upper respiratory tract infection has been associated with increased dispersal of S. aureus from the nose into the air for a distance of 4 feet under both outbreak and experimental conditions and is known as the "cloud baby" and "cloud adult" phenomenon [118][119][120] . # I.B.3.c. Airborne transmission Airborne transmission occurs by dissemination of either airborne droplet nuclei or small particles in the respirable size range containing infectious agents that remain infective over time and distance (e.g., spores of Aspergillus spp, and Mycobacterium tuberculosis). Microorganisms carried in this manner may be dispersed over long distances by air currents and may be inhaled by susceptible individuals who have not had face-to-face contact with (or been in the same room with) the infectious individual [121][122][123][124] . Preventing the spread of pathogens that are transmitted by the airborne route requires the use of special air handling and ventilation systems (e.g., AIIRs) to contain and then safely remove the infectious agent 11,12 . Infectious agents to which this applies include Mycobacterium tuberculosis [124][125][126][127] , rubeola virus (measles) 122 , and varicella-zoster virus (chickenpox) 123 . In addition, published data suggest the possibility that variola virus (smallpox) may be transmitted over long distances through the air under unusual circumstances and AIIRs are recommended for this agent as well; however, droplet and contact routes are the more frequent routes of transmission for smallpox 108,128,129 . In addition to AIIRs, respiratory protection with NIOSH certified N95 or higher level respirator is recommended for healthcare personnel entering the AIIR to prevent acquisition of airborne infectious agents such as M. tuberculosis 12 . For certain other respiratory infectious agents, such as influenza 130,131 and rhinovirus 104 , and even some gastrointestinal viruses (e.g., norovirus 132 and rotavirus 133 ) there is some evidence that the pathogen may be transmitted via small-particle aerosols, under natural and experimental conditions. Such transmission has occurred over distances longer than 3 feet but within a defined airspace (e.g., patient room), suggesting that it is unlikely that these agents remain viable on air currents that travel long distances. AIIRs are not required routinely to prevent transmission of these agents. Additional issues concerning examples of small particle aerosol transmission of agents that are most frequently transmitted by the droplet route are discussed below. # I.B.3.d. Emerging issues concerning airborne transmission of infectious agents. # I.B.3.d.i. 
Transmission from patients The emergence of SARS in 2002, the importation of monkeypox into the United States in 2003, and the emergence of avian influenza present challenges to the assignment of isolation categories because of conflicting information and uncertainty about possible routes of transmission. Although SARS-CoV is transmitted primarily by contact and/or droplet routes, airborne transmission over a limited distance (e.g. within a room), has been suggested, though not proven [134][135][136][137][138][139][140][141] . This is true of other infectious agents such as influenza virus 130 and noroviruses 132,142,143 . Influenza viruses are transmitted primarily by close contact with respiratory droplets 23,102 and acquisition by healthcare personnel has been prevented by Droplet Precautions, even when positive pressure rooms were used in one center 144 However, inhalational transmission could not be excluded in an outbreak of influenza in the passengers and crew of a single aircraft 130 . Observations of a protective effect of UV lights in preventing influenza among patients with tuberculosis during the influenza pandemic of 1957-'58 have been used to suggest airborne transmission 145,146 . In contrast to the strict interpretation of an airborne route for transmission (i.e., long distances beyond the patient room environment), short distance transmission by small particle aerosols generated under specific circumstances (e.g., during endotracheal intubation) to persons in the immediate area near the patient has been demonstrated. Also, aerosolized particles <100 μm can remain suspended in air when room air current velocities exceed the terminal settling velocities of the particles 109 . SARS-CoV transmission has been associated with endotracheal intubation, noninvasive positive pressure ventilation, and cardiopulmonary resuscitation 93,94,96,98,141 . Although the most frequent routes of transmission of noroviruses are contact and food and waterborne routes, several reports suggest that noroviruses may be transmitted through aerosolization of infectious particles from vomitus or fecal material 142,143,147,148 . It is hypothesized that the aerosolized particles are inhaled and subsequently swallowed. Roy and Milton proposed a new classification for aerosol transmission when evaluating routes of SARS transmission: 1) obligate: under natural conditions, disease occurs following transmission of the agent only through inhalation of small particle aerosols (e.g., tuberculosis); 2) preferential: natural infection results from transmission through multiple routes, but small particle aerosols are the predominant route (e.g. measles, varicella); and 3) opportunistic: agents that naturally cause disease through other routes, but under special circumstances may be transmitted via fine particle aerosols 149 . This conceptual framework can explain rare occurrences of airborne transmission of agents that are transmitted most frequently by other routes (e.g., smallpox, SARS, influenza, noroviruses). Concerns about unknown or possible routes of transmission of agents associated with severe disease and no known treatment often result in more extreme prevention strategies than may be necessary; therefore, recommended precautions could change as the epidemiology of an emerging infection is defined and controversial issues are resolved. # I.B.3.d.ii. Transmission from the environment Some airborne infectious agents are derived from the environment and do not usually involve person-toperson transmission. 
For example, anthrax spores present in a finely milled powdered preparation can be aerosolized from contaminated environmental surfaces and inhaled into the respiratory tract 150,151 . Spores of environmental fungi (e.g., Aspergillus spp.) are ubiquitous in the environment and may cause disease in immunocompromised patients who inhale aerosolized (e.g., via construction dust) spores 152,153 . As a rule, neither of these organisms is subsequently transmitted from infected patients. However, there is one welldocumented report of person-to-person transmission of Aspergillus sp. in the ICU setting that was most likey due to the aerosolization of spores during wound debridement 154 . A Protective Environment refers to isolation practices designed to decrease the risk of exposure to environmental fungal agents in allogeneic HSCT patients 11,14,15,[155][156][157][158] . Environmental sources of respiratory pathogens (eg. Legionella) transmitted to humans through a common aerosol source is distinct from direct patient-topatient transmission. # I.B.3.e. Other sources of infection Transmission of infection from sources other than infectious individuals include those associated with common environmental sources or vehicles (e.g. contaminated food, water, or medications (e.g. intravenous fluids). Although Aspergillus spp. have been recovered from hospital water systems 159 , the role of water as a reservoir for immunosuppressed patients remains uncertain. Vectorborne transmission of infectious agents from mosquitoes, flies, rats, and other vermin also can occur in healthcare settings. Prevention of vector borne transmission is not addressed in this document. # I.C. Infectious agents of special infection control interest for healthcare settings Several infectious agents with important infection control implications that either were not discussed extensively in previous isolation guidelines or have emerged recently are discussed below. These are epidemiologically important organisms (e.g., C. difficile), agents of bioterrorism, prions, SARS-CoV, monkeypox, noroviruses, and the hemorrhagic fever viruses. Experience with these agents has broadened the understanding of modes of transmission and effective preventive measures. These agents are included for purposes of information and, for some (i.e., SARS-CoV, monkeypox), because of the lessons that have been learned about preparedness planning and responding effectively to new infectious agents. # I.C.1. Epidemiologically important organisms Any infectious agents transmitted in healthcare settings may, under defined conditions, become targeted for control because they are epidemiologically important. C. difficile is specifically discussed below because of wide recognition of its current importance in U.S. healthcare facilities. In determining what constitutes an "epidemiologically important organism", the following characteristics apply: • A propensity for transmission within healthcare facilities based on published reports and the occurrence of temporal or geographic clusters of > 2 patients, (e.g., C..difficile, norovirus, respiratory syncytial virus (RSV), influenza, rotavirus, Enterobacter spp; Serratia spp., group A streptococcus). A single case of healthcare-associated invasive disease caused by certain pathogens (e.g., group A streptococcus post-operatively 14, 163 160 , in burn units 161 , or in a LTCF 162 ; Legionella sp. , Aspergillus sp. 
164 ) is generally considered a trigger for investigation and enhanced control measures because of the risk of additional cases and severity of illness associated with these infections. Antimicrobial resistance • Resistance to first-line therapies (e.g., MRSA, VISA, VRSA, VRE, ESBLproducing organisms). • Common and uncommon microorganisms with unusual patterns of resistance within a facility (e.g., the first isolate of Burkholderia cepacia complex or Ralstonia spp. in non-CF patients or a quinolone-resistant strain of Pseudomonas aeruginosa in a facility). • Difficult to treat because of innate or acquired resistance to multiple classes of antimicrobial agents (e.g., Stenotrophomonas maltophilia, Acinetobacter spp.). • Association with serious clinical disease, increased morbidity and mortality (e.g., MRSA and MSSA, group A streptococcus) • A newly discovered or reemerging pathogen I.C.1.a. C.difficile C. difficile is a spore-forming gram positive anaerobic bacillus that was first isolated from stools of neonates in 1935 165 and identified as the most commonly identified causative agent of antibiotic-associated diarrhea and pseudomembranous colitis in 1977 166 . This pathogen is a major cause of healthcare-associated diarrhea and has been responsible for many large outbreaks in healthcare settings that were extremely difficult to control. Important factors that contribute to healthcare-associated outbreaks include environmental contamination, persistence of spores for prolonged periods of time, resistance of spores to routinely used disinfectants and antiseptics, hand carriage by healthcare personnel to other patients, and exposure of patients to frequent courses of antimicrobial agents 167 . Antimicrobials most frequently associated with increased risk of C. difficile include third generation cephalosporins, clindamycin, vancomycin, and fluoroquinolones. Since 2001, outbreaks and sporadic cases of C. difficile with increased morbidity and mortality have been observed in several U.S. states, Canada, England and the Netherlands [168][169][170][171][172] . The same strain of C. difficile has been implicated in these outbreaks 173 . This strain, toxinotype III, North American PFGE type 1, and PCR-ribotype 027 (NAP1/027). has been found to hyperproduce toxin A (16 fold increase) and toxin B (23 fold increase) compared with isolates from 12 different pulsed-field gel electrophoresisPFGE types. A recent survey of U.S. infectious disease physicians found that 40% perceived recent increases in the incidence and severity of C. difficile disease 174 . Standardization of testing methodology and surveillance definitions is needed for accurate comparisons of trends in rates among hospitals 175 . It is hypothesized that the incidence of disease and apparent heightened transmissibility of this new strain may be due, at least in part, to the greater production of toxins A and B, increasing the severity of diarrhea and resulting in more environmental contamination. Considering the greater morbidity, mortality, length of stay, and costs associated with C. difficile disease in both acute care and long term care facilities, control of this pathogen is now even more important than previously. Prevention of transmission focuses on syndromic application of Contact Precautions for patients with diarrhea, accurate identification of patients, environmental measures (e.g., rigorous cleaning of patient rooms) and consistent hand hygiene. 
Use of soap and water, rather than alcohol based handrubs, for mechanical removal of spores from hands, and a bleach-containing disinfectant (5000 ppm) for environmental disinfection, may be valuable when there is transmission in a healthcare facility. See Appendix A for specific recommendations. # I.C.1. b. Multidrug-Resistant Organisms (MDROs) In general, MDROs are defined as microorganisms -predominantly bacteria -that are resistant to one or more classes of antimicrobial agents 176 . Although the names of certain MDROs suggest resistance to only one agent (e.g., methicillin-resistant Staphylococcus aureus [MRSA], vancomycin resistant enterococcus [VRE]), these pathogens are usually resistant to all but a few commercially available antimicrobial agents. This latter feature defines MDROs that are considered to be epidemiologically important and deserve special attention in healthcare facilities 177 . Other MDROs of current concern include multidrug-resistant Streptococcus pneumoniae (MDRSP) which is resistant to penicillin and other broad-spectrum agents such as macrolides and fluroquinolones, multidrug-resistant gram-negative bacilli (MDR-GNB), especially those producing extended spectrum beta-lactamases (ESBLs); and strains of S. aureus that are intermediate or resistant to vancomycin (i.e., VISA and VRSA) 178-197 198 . MDROs are transmitted by the same routes as antimicrobial susceptible infectious agents. Patient-to-patient transmission in healthcare settings, usually via hands of HCWs, has been a major factor accounting for the increase in MDRO incidence and prevalence, especially for MRSA and VRE in acute care facilities [199][200][201] . Preventing the emergence and transmission of these pathogens requires a comprehensive approach that includes administrative involvement and measures (e.g., nurse staffing, communication systems, performance improvement processes to ensure adherence to recommended infection control measures), education and training of medical and other healthcare personnel, judicious antibiotic use, comprehensive surveillance for targeted MDROs, application of infection control precautions during patient care, environmental measures (e.g., cleaning and disinfection of the patient care environment and equipment, dedicated single-patient-use of non-critical equipment), and decolonization therapy when appropriate. The prevention and control of MDROs is a national priority -one that requires that all healthcare facilities and agencies assume responsibility and participate in community-wide control programs 176,177 . A detailed discussion of this topic and recommendations for prevention was published in 2006 may be found at http://www.cdc.gov/ncidod/dhqp/pdf/ar/mdroGuideline2006.pdf I.C.2. Agents of bioterrorism CDC has designated the agents that cause anthrax, smallpox, plague, tularemia, viral hemorrhagic fevers, and botulism as Category A (high priority) because these agents can be easily disseminated environmentally and/or transmitted from person to person; can cause high mortality and have the potential for major public health impact; might cause public panic and social disruption; and require special action for public health preparedness 202 . General information relevant to infection control in healthcare settings for Category A agents of bioterrorism is summarized in Table 3. Consult www.bt.cdc.gov for additional, updated Category A agent information as well as information concerning Category B and C agents of bioterrorism and updates. 
Category B and C agents are important but are not as readily disseminated and cause less morbidity and mortality than Category A agents. Healthcare facilities confront a different set of issues when dealing with a suspected bioterrorism event as compared with other communicable diseases. An understanding of the epidemiology, modes of transmission, and clinical course of each disease, as well as carefully drafted plans that provide an approach and relevant websites and other resources for disease-specific guidance to healthcare, administrative, and support personnel, are essential for responding to and managing a bioterrorism event. Infection control issues to be addressed include: 1) identifying persons who may be exposed or infected; 2) preventing transmission among patients, healthcare personnel, and visitors; 3) providing treatment, chemoprophylaxis or vaccine to potentially large numbers of people; 4) protecting the environment including the logistical aspects of securing sufficient numbers of AIIRs or designating areas for patient cohorts when there are an insufficient number of AIIRs available;5) providing adequate quantities of appropriate personal protective equipment; and 6) identifying appropriate staff to care for potentially infectious patients (e.g., vaccinated healthcare personnel for care of patients with smallpox). The response is likely to differ for exposures resulting from an intentional release compared with naturally occurring disease because of the large number persons that can be exposed at the same time and possible differences in pathogenicity. A variety of sources offer guidance for the management of persons exposed to the most likely agents of bioterrorism. Federal agency websites (e.g., www.usamriid.army.mil/publications/index.html , www.bt.cdc.gov ) and state and county health department web sites should be consulted for the most up-to-date information. Sources of information on specific agents include: anthrax 203 ; smallpox [204][205][206] ; plague 207,208 ; botulinum toxin 209 ; tularemia 210 ; and hemorrhagic fever viruses: 211,212 . # I.C.2.a. Pre-event administration of smallpox (vaccinia) vaccine to healthcare personnel Vaccination of personnel in preparation for a possible smallpox exposure has important infection control implications [213][214][215] . These include the need for meticulous screening for vaccine contraindications in persons who are at increased risk for adverse vaccinia events; containment and monitoring of the vaccination site to prevent transmission in the healthcare setting and at home; and the management of patients with vaccinia-related adverse events 216,217 . The pre-event U.S. smallpox vaccination program of 2003 is an example of the effectiveness of carefully developed recommendations for both screening potential vaccinees for contraindications and vaccination site care and monitoring. Approximately 760,000 individuals were vaccinated in the Department of Defense and 40,000 in the civilian or public health populations from December 2002 to February 2005, including approximately 70,000 who worked in healthcare settings. There were no cases of eczema vaccinatum, progressive vaccinia, fetal vaccinia, or contact transfer of vaccinia in healthcare settings or in military workplaces 218,219 . Outside the healthcare setting, there were 53 cases of contact transfer from military vaccinees to close personal contacts (e.g., bed partners or contacts during participation in sports such as wrestling 220 ). 
All contact transfers were from individuals who were not following recommendations to cover their vaccination sites. Vaccinia virus was confirmed by culture or PCR in 30 cases, and two of the confirmed cases resulted from tertiary transfer. All recipients, including one breast-fed infant, recovered without complication. Subsequent studies using viral culture and PCR techniques have confirmed the effectiveness of semipermeable dressings to contain vaccinia [221][222][223][224] . This experience emphasizes the importance of ensuring that newly vaccinated healthcare personnel adhere to recommended vaccination-site care, especially if they are to care for high-risk patients. Recommendations for preevent smallpox vaccination of healthcare personnel and vaccinia-related infection control recommendations are published in the MMWR 216,225 with updates posted on the CDC bioterrorism web site 205 . # I.C.3. Prions Creutzfeldt-Jakob disease (CJD) is a rapidly progressive, degenerative, neurologic disorder of humans with an incidence in the United States of approximately 1 person/million population/year 226,227 (www.cdc.gov/ncidod/diseases/cjd/cjd.htm). CJD is believed to be caused by a transmissible proteinaceous infectious agent termed a prion. Infectious prions are isoforms of a host-encoded glycoprotein known as the prion protein. The incubation period (i.e., time between exposure and and onset of symptoms) varies from two years to many decades. However, death typically occurs within 1 year of the onset of symptoms. Approximately 85% of CJD cases occur sporadically with no known environmental source of infection and 10% are familial. Iatrogenic transmission has occurred with most resulting from treatment with human cadaveric pituitary-derived growth hormone or gonadotropin 228,229 , from implantation of contaminated human dura mater grafts 230 or from corneal transplants 231 ). Transmission has been linked to the use of contaminated neurosurgical instruments or stereotactic electroencephalogram electrodes 232, 233 , 234 , 235 . Prion diseases in animals include scrapie in sheep and goats, bovine spongiform encephalopathy (BSE, or "mad cow disease") in cattle, and chronic wasting disease in deer and elk 236 . BSE, first recognized in the United Kingdom (UK) in 1986, was associated with a major epidemic among cattle that had consumed contaminated meat and bone meal. The possible transmission of BSE to humans causing variant CJD (vCJD) was first described in 1996 and subsequently found to be associated with consumption of BSE-contaminated cattle products primarily in the United Kingdom. There is strong epidemiologic and laboratory evidence for a causal association between the causative agent of BSE and vCJD 237 . Although most cases of vCJD have been reported from the UK, a few cases also have been reported from Europe, Japan, Canada, and the United States. Most vCJD cases worldwide lived in or visited the UK during the years of a large outbreak of BSE (1980-96) and may have consumed contaminated cattle products during that time (www.cdc.gov/ncidod/diseases/cjd/cjd.htm). Although there has been no indigenously acquired vCJD in the United States, the sporadic occurrence of BSE in cattle in North America has heightened awareness of the possibility that such infections could occur and have led to increased surveillance activities. Updated information may be found on the following website: www.cdc.gov/ncidod/diseases/cjd/cjd.htm. The public health impact of prion diseases has been reviewed 238 . 
vCJD in humans has different clinical and pathologic characteristics from sporadic or classic CJD 239 , including the following: 1) younger median age at death: 28 (range 16-48) vs. 68 years; 2) longer duration of illness: median 14 months vs. 4-6 months; 3) increased frequency of sensory symptoms and early psychiatric symptoms with delayed onset of frank neurologic signs; and 4) detection of prions in tonsillar and other lymphoid tissues from vCJD patients but not from sporadic CJD patients 240 . Similar to sporadic CJD, there have been no reported cases of direct human-to-human transmission of vCJD by casual or environmental contact, droplet, or airborne routes. Ongoing blood safety surveillance in the U.S. has not detected sporadic CJD transmission through blood transfusion [241][242][243] . However, bloodborne transmission of vCJD is believed to have occurred in two UK patients 244,245 . The following FDA websites provide information on steps that are being taken in the US to protect the blood supply from CJD and vCJD: http://www.fda.gov/cber/gdlns/cjdvcjd.htm; http://www.fda.gov/cber/gdlns/cjdvcjdq&a.htm. Standard Precautions are used when caring for patients with suspected or confirmed CJD or vCJD. However, special precautions are recommended for tissue handling in the histology laboratory and for conducting an autopsy, embalming, and for contact with a body that has undergone autopsy 246 . Recommendations for reprocessing surgical instruments to prevent transmission of CJD in healthcare settings have been published by the World Health Organization (WHO) and are currently under review at CDC. Questions concerning notification of patients potentially exposed to CJD or vCJD through contaminated instruments and blood products from patients with CJD or vCJD or at risk of having vCJD may arise. The risk of transmission associated with such exposures is believed to be extremely low but may vary based on the specific circumstance. Therefore consultation on appropriate options is advised. The United Kingdom has developed several documents that clinicians and patients in the US may find useful (http://www.hpa.org.uk/infections/topics_az/cjd/information_documents.htm). # I.C.4. Severe Acute Respiratory Syndrome (SARS) SARS is a newly discovered respiratory disease that emerged in China late in 2002 and spread to several countries 135,140 ; Mainland China, Hong Kong, Hanoi, Singapore, and Toronto were affected significantly. SARS is caused by SARS CoV, a previously unrecognized member of the coronavirus family 247,248 . The incubation period from exposure to the onset of symptoms is 2 to 7 days but can be as long as 10 days and uncommonly even longer 249 . The illness is initially difficult to distinguish from other common respiratory infections. Signs and symptoms usually include fever >38.0 o C and chills and rigors, sometimes accompanied by headache, myalgia, and mild to severe respiratory symptoms. Radiographic finding of atypical pneumonia is an important clinical indicator of possible SARS. Compared with adults, children have been affected less frequently, have milder disease, and are less likely to transmit SARS-CoV 135,[249][250][251] . The overall case fatality rate is approximately 6.0%; underlying disease and advanced age increase the risk of mortality (www.who.int/csr/sarsarchive/2003_05_07a/en/). 
Outbreaks in healthcare settings, with transmission to large numbers of healthcare personnel and patients have been a striking feature of SARS; undiagnosed, infectious patients and visitors were important initiators of these outbreaks 21,[252][253][254] . The relative contribution of potential modes of transmission is not precisely known. There is ample evidence for droplet and contact transmission 96,101,113 ; however, opportunistic airborne transmission cannot be excluded 101, 135-139, 149, 255 . For example, exposure to aerosol-generating procedures (e.g., endotracheal intubation, suctioning) was associated with transmission of infection to large numbers of healthcare personnel outside of the United States 93,94,96,98,253 .Therefore, aerosolization of small infectious particles generated during these and other similar procedures could be a risk factor for transmission to others within a multi-bed room or shared airspace. A review of the infection control literature generated from the SARS outbreaks of 2003 concluded that the greatest risk of transmission is to those who have close contact, are not properly trained in use of protective infection control procedures, do not consistently use PPE; and that N95 or higher respirators may offer additional protection to those exposed to aerosol-generating procedures and high risk activities 256,257 . Organizational and individual factors that affected adherence to infection control practices for SARS also were identified 257 . Control of SARS requires a coordinated, dynamic response by multiple disciplines in a healthcare setting 9 . Early detection of cases is accomplished by screening persons with symptoms of a respiratory infection for history of travel to areas experiencing community transmission or contact with SARS patients, followed by implementation of Respiratory Hygiene/Cough Etiquette (i.e., placing a mask over the patient's nose and mouth) and physical separation from other patients in common waiting areas.The precise combination of precautions to protect healthcare personnel has not been determined. At the time of this publication, CDC recommends Standard Precautions, with emphasis on the use of hand hygiene, Contact Precautions with emphasis on environmental cleaning due to the detection of SARS CoV RNA by PCR on surfaces in rooms occupied by SARS patients 138,254,258 , Airborne Precautions, including use of fit-tested NIOSH-approved N95 or higher level respirators, and eye protection 259 . In Hong Kong, the use of Droplet and Contact Precautions, which included use of a mask but not a respirator, was effective in protecting healthcare personnel 113 . However, in Toronto, consistent use of an N95 respirator was slightly more protective than a mask 93 . It is noteworthy that there was no transmission of SARS-CoV to public hospital workers in Vietnam despite inconsistent use of infection control measures, including use of PPE, which suggests other factors (e.g., severity of disease, frequency of high risk procedures or events, environmental features) may influence opportunities for transmission 260 . SARS-CoV also has been transmitted in the laboratory setting through breaches in recommended laboratory practices. Research laboratories where SARS-CoV was under investigation were the source of most cases reported after the first series of outbreaks in the winter and spring of 2003 261,262 . 
Studies of the SARS outbreaks of 2003 and transmissions that occurred in the laboratory re-affirm the effectiveness of recommended infection control precautions and highlight the importance of consistent adherence to these measures. Lessons from the SARS outbreaks are useful for planning to respond to future public health crises, such as pandemic influenza and bioterrorism events. Surveillance for cases among patients and healthcare personnel, ensuring availability of adequate supplies and staffing, and limiting access to healthcare facilities were important factors in the response to SARS that have been summarized 9 . Guidance for infection control precautions in various settings is available at www.cdc.gov/ncidod/sars. I.C.5. Monkeypox Monkeypox is a rare viral disease found mostly in the rain forest countries of Central and West Africa. The disease is caused by an orthopoxvirus that is similar in appearance to smallpox but causes a milder disease. The only recognized outbreak of human monkeypox in the United States was detected in June 2003 after several people became ill following contact with sick pet prairie dogs. Infection in the prairie dogs was subsequently traced to their contact with a shipment of animals from Africa, including giant Gambian rats 263 . This outbreak demonstrates the importance of recognition and prompt reporting of unusual disease presentations by clinicians to enable prompt identification of the etiology; and the potential of epizootic diseases to spread from animal reservoirs to humans through personal and occupational exposure 264 . Limited data on transmission of monkeypox are available. Transmission from infected animals and humans is believed to occur primarily through direct contact with lesions and respiratory secretions; airborne transmission from animals to humans is unlikely but cannot be excluded, and may have occurred in veterinary practices (e.g., during administration of nebulized medications to ill prairie dogs 265 ). Among humans, four instances of monkeypox transmission within hospitals have been reported in Africa among children, usually related to sharing the same ward or bed 266,267 . Additional recent literature documents transmission of Congo Basin monkeypox in a hospital compound for an extended number of generations 268 . There has been no evidence of airborne or any other person-to-person transmission of monkeypox in the United States, and no new cases of monkeypox have been identified since the outbreak in June 2003 269 . The outbreak strain is a clade of monkeypox distinct from the Congo Basin clade and may have different epidemiologic properties (including human-to-human transmission potential) from monkeypox strains of the Congo Basin 270 ; this awaits further study. Smallpox vaccine is 85% protective against Congo Basin monkeypox 271 . Since there is an associated case fatality rate of <10%, administration of smallpox vaccine within 4 days to individuals who have had direct exposure to patients or animals with monkeypox is a reasonable consideration 272 . For the most current information on monkeypox, see www.cdc.gov/ncidod/monkeypox/clinicians.htm. I.C.6. Noroviruses Noroviruses, formerly referred to as Norwalk-like viruses, are members of the Caliciviridae family. These agents are transmitted via contaminated food or water and from person-to-person, causing explosive outbreaks of gastrointestinal disease 273 . 
Environmental contamination also has been documented as a contributing factor in ongoing transmission during outbreaks 274,275 . Although noroviruses cannot be propagated in cell culture, DNA detection by molecular diagnostic techniques has facilitated a greater appreciation of their role in outbreaks of gastrointestinal disease 276 . Reported outbreaks in hospitals 132,142,277 , nursing homes 275,[278][279][280][281][282][283] , cruise ships 284,285 , hotels 143,147 , schools 148 , and large crowded shelters established for hurricane evacuees 286 , demonstrate their highly contagious nature, the disruptive impact they have in healthcare facilities and the community, and the difficulty of controlling outbreaks in settings where people share common facilites and space. Of note, there is nearly a 5 fold increase in the risk to patients in outbreaks where a patient is the index case compared with exposure of patients during outbreaks where a staff member is the index case 287 . The average incubation period for gastroenteritis caused by noroviruses is 12-48 hours and the clinical course lasts 12-60 hours 273 . Illness is characterized by acute onset of nausea, vomiting, abdominal cramps, and/or diarrhea. The disease is largely self-limited; rarely, death caused by severe dehydration can occur, particularly among the elderly with debilitating health conditions. The epidemiology of norovirus outbreaks shows that even though primary cases may result from exposure to a fecally-contaminated food or water, secondary and tertiary cases often result from person-to-person transmission that is facilitated by contamination of fomites 273,288 and dissemination of infectious particles, especially during the process of vomiting 132,142,143,147,148,273,279,280 . Widespread, persistent and inapparent contamination of the environment and fomites can make outbreaks extremely difficult to control 147,275,284 .These clinical observations and the detection of norovirus DNA on horizontal surfaces 5 feet above the level that might be touched normally suggest that, under certain circumstances, aerosolized particles may travel distances beyond 3 feet 147 . It is hypothesized that infectious particles may be aerosolized from vomitus, inhaled, and swallowed. In addition, individuals who are responsible for cleaning the environment may be at increased risk of infection. Development of disease and transmission may be facilitated by the low infectious dose (i.e., <100 viral particles) 289 and the resistance of these viruses to the usual cleaning and disinfection agents (i.e., may survive < 10 ppm chlorine) [290][291][292] . An alternate phenolic agent that was shown to be effective against feline calicivirus was used for environmental cleaning in one outbreak 275,293 . There are insufficient data to determine the efficacy of alcohol-based hand rubs against noroviruses when the hands are not visibly soiled 294 . Absence of disease in certain individuals during an outbreak may be explained by protection from infection conferred by the B histo-blood group antigen 295 . Consultation on outbreaks of gastroenteritis is available through CDC's Division of Viral and Rickettsial Diseases 296 . # I.C.7. Hemorrhagic fever viruses (HFV) The hemorrhagic fever viruses are a mixed group of viruses that cause serious disease with high fever, skin rash, bleeding diathesis, and in some cases, high mortality; the disease caused is referred to as viral hemorrhagic fever (VHF). 
Among the more commonly known HFVs are Ebola and Marburg viruses (Filoviridae), Lassa virus (Arenaviridae), Crimean-Congo hemorrhagic fever and Rift Valley Fever virus (Bunyaviridae), and Dengue and Yellow fever viruses (Flaviviridae) 212,297 . These viruses are transmitted to humans via contact with infected animals or via arthropod vectors. While none of these viruses is endemic in the United States, outbreaks in affected countries provide potential opportunities for importation by infected humans and animals. Furthermore, there are concerns that some of these agents could be used as bioweapons 212 . Person-to-person transmission is documented for Ebola, Marburg, Lassa and Crimean-Congo hemorrhagic fever viruses. In resource-limited healthcare settings, transmission of these agents to healthcare personnel, patients and visitors has been described and in some outbreaks has accounted for a large proportion of cases [298][299][300] . Transmissions within households also have occurred among individuals who had direct contact with ill persons or their body fluids, but not to those who did not have such contact 301 . Evidence concerning the transmission of HFVs has been summarized 212,302 . Person-to-person transmission is associated primarily with direct blood and body fluid contact. Percutaneous exposure to contaminated blood carries a particularly high risk for transmission and increased mortality 303,304 . The finding of large numbers of Ebola viral particles in the skin and the lumina of sweat glands has raised concern that transmission could occur from direct contact with intact skin though epidemiologic evidence to support this is lacking 305 . Postmortem handling of infected bodies is an important risk for transmission 301,306,307 . In rare situations, cases in which the mode of transmission was unexplained among individuals with no known direct contact , have led to speculation that airborne transmission could have occurred 298 . However, airborne transmission of naturally occurring HFVs in humans has not been seen. In one study of airplane passengers exposed to an in-flight index case of Lassa fever, there was no transmission to any passengers 308 . In the laboratory setting, animals have been infected experimentally with Marburg or Ebola viruses via direct inoculation of the nose, mouth and/or conjunctiva 309,310 and by using mechanically generated virus-containing aerosols 311,312 . Transmission of Ebola virus among laboratory primates in an animal facility has been described 313 . Secondarily infected animals were in individual cages and separated by approximately 3 meters. Although the possibility of airborne transmission was suggested, the authors were not able to exclude droplet or indirect contact transmission in this incidental observation. Guidance on infection control precautions for HVFs that are transmitted personto-person have been published by CDC 1, 211 and by the Johns Hopkins Center for Civilian Biodefense Strategies 212 . The most recent recommendations at the time of publication of this document were posted on the CDC website on 5/19/05 314 . Inconsistencies among the various recommendations have raised questions about the appropriate precautions to use in U.S. hospitals. In less developed countries, outbreaks of HFVs have been controlled with basic hygiene, barrier precautions, safe injection practices, and safe burial practices 299,306 . 
The preponderance of evidence on HFV transmission indicates that Standard, Contact and Droplet Precautions with eye protection are effective in protecting healthcare personnel and visitors who may attend an infected patient. Single gloves are adequate for routine patient care; double-gloving is advised during invasive procedures (e.g., surgery) that pose an increased risk for blood exposure. Routine eye protection (i.e. goggles or face shield) is particularly important. Fluid-resistant gowns should be worn for all patient contact. Airborne Precautions are not required for routine patient care; however, use of AIIRs is prudent when procedures that could generate infectious aerosols are performed (e.g., endotracheal intubation, bronchoscopy, suctioning, autopsy procedures involving oscillating saws). N95 or higher level respirators may provide added protection for individuals in a room during aerosol-generating procedures (Table 3, Appendix A). When a patient with a syndrome consistent with hemorrhagic fever also has a history of travel to an endemic area, precautions are initiated upon presentation and then modified as more information is obtained (Table 2). Patients with hemorrhagic fever syndrome in the setting of a suspected bioweapon attack should be managed using Airborne Precautions, including AIIRs, since the epidemiology of a potentially weaponized hemorrhagic fever virus is unpredictable. # I.D. Transmission risks associated with specific types of healthcare settings Numerous factors influence differences in transmission risks among the various healthcare settings. These include the population characteristics (e.g., increased susceptibility to infections, type and prevalence of indwelling devices), intensity of care, exposure to environmental sources, length of stay, and frequency of interaction between patients/residents with each other and with HCWs. These factors, as well as organizational priorities, goals, and resources, influence how different healthcare settings adapt transmission prevention guidelines to meet their specific needs 315,316 . Infection control management decisions are informed by data regarding institutional experience/epidemiology, trends in community and institutional HAIs, local, regional, and national epidemiology, and emerging infectious disease threats. # I.D.1. Hospitals Infection transmission risks are present in all hospital settings. However, certain hospital settings and patient populations have unique conditions that predispose patients to infection and merit special mention. These are often sentinel sites for the emergence of new transmission risks that may be unique to that setting or present opportunities for transmission to other settings in the hospital. # I.D.1.a. Intensive Care Units Intensive care units (ICUs) serve patients who are immunocompromised by disease state and/or by treatment modalities, as well as patients with major trauma, respiratory failure and other life-threatening conditions (e.g., myocardial infarction, congestive heart failure, overdoses, strokes, gastrointestinal bleeding, renal failure, hepatic failure, multi-organ system failure, and the extremes of age). Although ICUs account for a relatively small proportion of hospitalized patients, infections acquired in these units accounted for >20% of all HAIs 317 . In the National Nosocomial Infection Surveillance (NNIS) system, 26.6% of HAIs were reported from ICU and high risk nursery (NICU) patients in 2002 (NNIS, unpublished data). 
This patient population has increased susceptibility to colonization and infection, especially with MDROs and Candida sp. 318,319 , because of underlying diseases and conditions, the invasive medical devices and technology used in their care (e.g. central venous catheters and other intravascular devices, mechanical ventilators, extracorporeal membrane oxygenation (ECMO), hemodialysis/-filtration, pacemakers, implantable left ventricular assist devices), the frequency of contact with healthcare personnel, prolonged length of stay, and prolonged exposure to antimicrobial agents [320][321][322][323][324][325][326][327][328][329][330][331] . Furthermore, adverse patient outcomes in this setting are more severe and are associated with a higher mortality 332 . Outbreaks associated with a variety of bacterial, fungal and viral pathogens due to commonsource and person-to-person transmissions are frequent in adult and pediatric ICUs 31, 333-336, 337 , 338 . # I.D.1.b. Burn Units Burn wounds can provide optimal conditions for colonization, infection, and transmission of pathogens; infection acquired by burn patients is a frequent cause of morbidity and mortality 320,339,340 . In patients with a burn injury involving >30% of the total body surface area (TBSA), the risk of invasive burn wound infection is particularly high 341,342 . Infections that occur in patients with burn injury involving <30% TBSA are usually associated with the use of invasive devices. Methicillin-susceptible Staphylococcus aureus, MRSA, enterococci, including VRE, gram-negative bacteria, and candida are prevalent pathogens in burn infections 53,340,[343][344][345][346][347][348][349][350] and outbreaks of these organisms have been reported [351][352][353][354] . Shifts over time in the predominance of pathogens causing infections among burn patients often lead to changes in burn care practices 343,[355][356][357][358] . Burn wound infections caused by Aspergillus sp. or other environmental molds may result from exposure to supplies contaminated during construction 359 or to dust generated during construction or other environmental disruption 360 . Hydrotherapy equipment is an important environmental reservoir of gramnegative organisms. Its use for burn care is discouraged based on demonstrated associations between use of contaminated hydrotherapy equipment and infections. Burn wound infections and colonization, as well as bloodstream infections, caused by multidrug-resistant P. aeruginosa 361 , A. baumannii 362 , and MRSA 352 have been associated with hydrotherapy; excision of burn wounds in operating rooms is preferred. Advances in burn care, specifically early excision and grafting of the burn wound, use of topical antimicrobial agents, and institution of early enteral feeding, have led to decreased infectious complications. Other advances have included prophylactic antimicrobial usage, selective digestive decontamination (SDD), and use of antimicrobial-coated catheters (ACC), but few epidemiologic studies and no efficacy studies have been performed to show the relative benefit of these 357 measures . There is no consensus on the most effective infection control practices to prevent transmission of infections to and from patients with serious burns (e.g., singlebed rooms 358 , laminar flow 363 and high efficiency particulate air filtration [HEPA] 360 or maintaining burn patients in a separate unit without exposure to patients or equipment from other units 364 ). 
There also is controversy regarding the need for and type of barrier precautions for routine care of burn patients. One retrospective study demonstrated efficacy and cost effectiveness of a simplified barrier isolation protocol for wound colonization, emphasizing handwashing and use of gloves, caps, masks and plastic impermeable aprons (rather than isolation gowns) for direct patient contact 365 . However, there have been no studies that define the most effective combination of infection control precautions for use in burn settings. Prospective studies in this area are needed. # I.D.1.c. Pediatrics Studies of the epidemiology of HAIs in children have identified unique infection control issues in this population 63,64,[366][367][368][369][370] . Pediatric intensive care unit (PICU) patients and the lowest birthweight babies in the highrisk nursery (HRN) monitored in the NNIS system have had high rates of central venous catheter-associated bloodstream infections 64,320,[369][370][371][372] . Additionally, there is a high prevalence of community-acquired infections among hospitalized infants and young children who have not yet become immune either by vaccination or by natural infection. The result is more patients and their sibling visitors with transmissible infections present in pediatric healthcare settings, especially during seasonal epidemics (e.g., pertussis 36,40,41 , respiratory viral infections including those caused by RSV 24 , influenza viruses 373 , parainfluenza virus 374 , human metapneumovirus 375 , and adenoviruses 376 ; rubeola [measles] 34 , varicella [chickenpox] 377 , and rotavirus 38,378 ). Close physical contact between healthcare personnel and infants and young children (eg. cuddling, feeding, playing, changing soiled diapers, and cleaning copious uncontrolled respiratory secretions) provides abundant opportunities for transmission of infectious material. Practices and behaviors such as congregation of children in play areas where toys and bodily secretions are easily shared and family members rooming-in with pediatric patients can further increase the risk of transmission. Pathogenic bacteria have been recovered from toys used by hospitalized patients 379 ; contaminated bath toys were implicated in an outbreak of multidrug-resistant P. aeruginosa on a pediatric oncology unit 80 . In addition, several patient factors increase the likelihood that infection will result from exposure to pathogens in healthcare settings (e.g., immaturity of the neonatal immune system, lack of previous natural infection and resulting immunity, prevalence of patients with congenital or acquired immune deficiencies, congenital anatomic anomalies, and use of life-saving invasive devices in neontal and pediatric intensive care units) 63 . There are theoretical concerns that infection risk will increase in association with innovative practices used in the NICU for the purpose of improving developmental outcomes, Such factors include co-bedding 380 and kangaroo care 381 that may increase opportunity for skin-to-skin exposure of multiple gestation infants to each other and to their mothers, respectively; although infection risk smay actually be reduced among infants receiving kangaroo care 382 . Children who attend child care centers 383,384 and pediatric rehabilitation units 385 may increase the overall burden of antimicrobial resistance (eg. by contributing to the reservoir of community-associated MRSA [CA-MRSA]) [386][387][388][389][390][391] . 
Patients in chronic care facilities may have increased rates of colonization with resistant GNBs and may be sources of introduction of resistant organisms to acute care settings 50 . # I.D.2. Nonacute healthcare settings Healthcare is provided in various settings outside of hospitals including facilities, such as long-term care facilities (LTCF) (e.g. nursing homes), homes for the developmentally disabled, settings where behavioral health services are provided, rehabilitation centers and hospices 392 . In addition, healthcare may be provided in nonhealthcare settings such as workplaces with occupational health clinics, adult day care centers, assisted living facilities, homeless shelters, jails and prisons, school clinics and infirmaries. Each of these settings has unique circumstances and population risks to consider when designing and implementing an infection control program. Several of the most common settings and their particular challenges are discussed below. While this Guideline does not address each setting, the principles and strategies provided may be adapted and applied as appropriate. # I.D.2.a. Long-term care The designation LTCF applies to a diverse group of residential settings, ranging from institutions for the developmentally disabled to nursing homes for the elderly and pediatric chronic-care facilities [393][394][395] . Nursing homes for the elderly predominate numerically and frequently represent longterm care as a group of facilities. Approximately 1.8 million Americans reside in the nation's 16,500 nursing homes 396 . Estimates of HAI rates of 1.8 to 13.5 per 1000 resident-care days have been reported with a range of 3 to 7 per 1000 resident-care days in the more rigorous studies [397][398][399][400][401] . The infrastructure described in the Department of Veterans Affairs nursing home care units is a promising example for the development of a nationwide HAI surveillance system for LTCFs 402 . LCTFs are different from other healthcare settings in that elderly patients at increased risk for infection are brought together in one setting and remain in the facility for extended periods of time; for most residents, it is their home. An atmosphere of community is fostered and residents share common eating and living areas, and participate in various facility-sponsored activities 403,404 . Since able residents interact freely with each other, controlling transmission of infection in this setting is challenging 405 . Residents who are colonized or infected with certain microorganisms are, in some cases, restricted to their room. However, because of the psychosocial risks associated with such restriction, it has been recommended that psychosocial needs be balanced with infection control needs in the LTCF setting [406][407][408][409] . Documented LTCF outbreaks have been caused by various viruses (e.g., influenza virus 35,[410][411][412] , rhinovirus 413 , adenovirus (conjunctivitis) 414 , norovirus 278, 279 275, 281 ) and bacteria, including group A streptococcus 162 , B. pertussis 415 , non-susceptible S. pneumoniae 197,198 , other MDROs, and Clostridium difficile 416 ) These pathogens can lead to substantial morbidity and mortality, and increased medical costs; prompt detection and implementation of effective control measures are required. Risk factors for infection are prevalent among LTCF residents 395,417,418 . Agerelated declines in immunity may affect responses to immunizations for influenza and other infectious agents, and increase susceptibility to tuberculosis. 
Immobility, incontinence, dysphagia, underlying chronic diseases, poor functional status, and age-related skin changes increase susceptibility to urinary tract, respiratory, and cutaneous and soft tissue infections, while malnutrition can impair wound healing [419][420][421][422][423]. Medications (e.g., drugs that affect level of consciousness, immune function, gastric acid secretions, and normal flora, including antimicrobial therapy) and invasive devices (e.g., urinary catheters and feeding tubes) heighten susceptibility to infection and colonization in LTCF residents [424][425][426]. Finally, limited functional status and total dependence on healthcare personnel for activities of daily living have been identified as independent risk factors for infection 401,417,427 and for colonization with MRSA 428,429 and ESBL-producing K. pneumoniae 430. Several position papers and review articles have been published that provide guidance on various aspects of infection control and antimicrobial resistance in LTCFs [406][407][408][431][432][433][434][435][436]. The Centers for Medicare and Medicaid Services (CMS) have established regulations for the prevention of infection in LTCFs 437. Because residents of LTCFs are hospitalized frequently, they can transfer pathogens between LTCFs and the healthcare facilities in which they receive care 8,[438][439][440][441]. This is also true for pediatric long-term care populations. A pediatric chronic-care facility was associated with importing extended-spectrum cephalosporin-resistant gram-negative bacilli into one PICU 50. Children from pediatric rehabilitation units may contribute to the reservoir of community-associated MRSA 385,[389][390][391].

# I.D.2.b. Ambulatory Care

In the past decade, healthcare delivery in the United States has shifted from the acute, inpatient hospital to a variety of ambulatory and community-based settings, including the home. Ambulatory care is provided in hospital-based outpatient clinics, nonhospital-based clinics and physician offices, public health clinics, free-standing dialysis centers, ambulatory surgical centers, urgent care centers, and many other settings. In 2000, there were 83 million visits to hospital outpatient clinics and more than 823 million visits to physician offices 442; ambulatory care now accounts for most patient encounters with the healthcare system 443. In these settings, adapting transmission prevention guidelines is challenging because patients remain in common areas for prolonged periods waiting to be seen by a healthcare provider or awaiting admission to the hospital, examination or treatment rooms are turned around quickly with limited cleaning, and infectious patients may not be recognized immediately. Furthermore, immunocompromised patients often receive chemotherapy in infusion rooms where they stay for extended periods of time along with other types of patients. There are few data on the risk of HAIs in ambulatory care settings, with the exception of hemodialysis centers 18,444,445. Transmission of infections in outpatient settings has been reviewed in three publications [446][447][448]. Goodman and Solomon summarized 53 clusters of infections associated with the outpatient setting from 1961 to 1990 446. Overall, 29 clusters were associated with common-source transmission from contaminated solutions or equipment, 14 with person-to-person transmission from or involving healthcare personnel, and 10 with airborne or droplet transmission among patients and healthcare workers.
Transmission of bloodborne pathogens (i.e., hepatitis B and C viruses and, rarely, HIV) in outbreaks, sometimes involving hundreds of patients, continues to occur in ambulatory settings. These outbreaks often are related to common-source exposures, usually a contaminated medical device, multi-dose vial, or intravenous solution 82,[449][450][451][452][453]. In all cases, transmission has been attributed to failure to adhere to fundamental infection control principles, including safe injection practices and aseptic technique. This subject has been reviewed, and recommended infection control and safe injection practices have been summarized 454. Airborne transmission of M. tuberculosis and measles in ambulatory settings, most frequently emergency departments, has been reported 34,127,446,448,[455][456][457]. Measles virus was transmitted in physician offices and other outpatient settings during an era when immunization rates were low and measles outbreaks in the community were occurring regularly 34,122,458. Rubella has been transmitted in the outpatient obstetric setting 33; there are no published reports of varicella transmission in the outpatient setting. In the ophthalmology setting, adenovirus type 8 epidemic keratoconjunctivitis has been transmitted via incompletely disinfected ophthalmology equipment and/or from healthcare workers to patients, presumably by contaminated hands 17,446,448,[459][460][461][462]. If transmission in outpatient settings is to be prevented, screening for potentially infectious symptomatic and asymptomatic individuals, especially those who may be at risk for transmitting airborne infectious agents (e.g., M. tuberculosis, varicella-zoster virus, rubeola [measles]), is necessary at the start of the initial patient encounter. Upon identification of a potentially infectious patient, prompt separation from other patients and implementation of appropriate control measures (e.g., Respiratory Hygiene/Cough Etiquette and Transmission-Based Precautions) can decrease transmission risks 9,12. Transmission of MRSA and VRE in outpatient settings has not been reported, but the association of CA-MRSA in healthcare personnel working in an outpatient HIV clinic with environmental CA-MRSA contamination in that clinic suggests the possibility of transmission in that setting 463. Patient-to-patient transmission of Burkholderia species and Pseudomonas aeruginosa in outpatient clinics for adults and children with cystic fibrosis has been confirmed 464,465.

# I.D.2.c. Home Care

Home care in the United States is delivered by over 20,000 provider agencies that include home health agencies, hospices, durable medical equipment providers, home infusion therapy services, and personal care and support services providers. Home care is provided to patients of all ages with both acute and chronic conditions. The scope of services ranges from assistance with activities of daily living and physical and occupational therapy to the care of wounds, infusion therapy, and chronic ambulatory peritoneal dialysis (CAPD). The incidence of infections in home care patients, other than those associated with infusion therapy, is not well studied [466][467][468][469][470][471].
However, data collection and calculation of infection rates have been accomplished for central venous catheter-associated bloodstream infections in patients receiving home infusion therapy [470][471][472][473][474] and for the risk of blood contact through percutaneous or mucosal exposures, demonstrating that surveillance can be performed in this setting 475. Draft definitions for home care-associated infections have been developed 476. Transmission risks during home care are presumed to be minimal. The main transmission risks to home care patients are from an infectious healthcare provider or contaminated equipment; providers also can be exposed to an infectious patient during home visits. Because home care involves patient care by a limited number of personnel in settings without multiple patients or shared equipment, the potential reservoir of pathogens is reduced. Infections of home care providers that could pose a risk to home care patients include those transmitted by the airborne or droplet routes (e.g., chickenpox, tuberculosis, influenza), skin infestations (e.g., scabies 69 and lice), and infections (e.g., impetigo) transmitted by direct or indirect contact. There are no published data on indirect transmission of MDROs from one home care patient to another, although this is theoretically possible if contaminated equipment is transported from an infected or colonized patient and used on another patient. Of note, investigation of the first case of VISA in home care 186 and the first two reported cases of VRSA 178,180,181,183 found no evidence of transmission of VISA or VRSA to other home care recipients. Home health care also may contribute to antimicrobial resistance; a review of outpatient vancomycin use found that 39% of recipients did not receive the antibiotic according to recommended guidelines 477. Although most home care agencies implement policies and procedures to prevent transmission of organisms, the current approach is based on the adaptation of the 1996 Guideline for Isolation Precautions in Hospitals 1 as well as other professional guidance 478,479. This issue has been very challenging in the home care industry, and practice has been inconsistent and frequently not evidence-based. For example, many home health agencies continue to observe "nursing bag technique," a practice that prescribes the use of barriers between the nursing bag and environmental surfaces in the home 480. While the home environment may not always appear clean, the use of barriers between two noncritical surfaces has been questioned 481,482. Opportunities exist to conduct research in home care related to infection transmission risks 483.

# I.D.2.d. Other sites of healthcare delivery

Facilities that are not primarily healthcare settings but in which healthcare is delivered include clinics in correctional facilities and shelters. Both settings can have suboptimal features, such as crowded conditions and poor ventilation. Economically disadvantaged individuals who may have chronic illnesses and healthcare problems related to alcoholism, injection drug use, poor nutrition, and/or inadequate shelter often receive their primary healthcare at sites such as these 484. Infectious diseases of special concern for transmission include tuberculosis, scabies, respiratory infections (e.g., N. meningitidis, S.
pneumoniae), sexually transmitted and bloodborne diseases (e.g., HIV, HBV, HCV, syphilis, gonorrhea), hepatitis A virus (HAV), diarrheal agents such as norovirus, and foodborne diseases 286,[485][486][487][488]. A high index of suspicion for tuberculosis and CA-MRSA in these populations is needed, as outbreaks in these settings or among the populations they serve have been reported [489][490][491][492][493][494][495][496][497]. Patient encounters in these types of facilities provide an opportunity to deliver recommended immunizations and screen for M. tuberculosis infection in addition to diagnosing and treating acute illnesses 498. Recommended infection control measures in these nontraditional areas designated for healthcare delivery are the same as for other ambulatory care settings. Therefore, these settings must be equipped to observe Standard Precautions and, when indicated, Transmission-Based Precautions.

# I.E. Transmission risks associated with special patient populations

As new treatments emerge for complex diseases, unique infection control challenges associated with special patient populations need to be addressed.

# I.E.1. Immunocompromised patients

Patients who have congenital primary immune deficiencies or acquired immune deficiencies (e.g., treatment-induced immune deficiencies) are at increased risk for numerous types of infections while receiving healthcare and may be located throughout the healthcare facility. The specific defects of the immune system determine the types of infections that are most likely to be acquired (e.g., viral infections are associated with T-cell defects, and fungal and bacterial infections occur in patients who are neutropenic). As a general group, immunocompromised patients can be cared for in the same environment as other patients; however, it is always advisable to minimize exposure to other patients with transmissible infections, such as influenza and other respiratory viruses 499,500. The use of more intense chemotherapy regimens for treatment of childhood leukemia may be associated with prolonged periods of neutropenia and suppression of other components of the immune system, extending the period of infection risk and raising the concern that additional precautions may be indicated for select groups 501,502. With the application of newer and more intense immunosuppressive therapies for a variety of medical conditions (e.g., rheumatologic disease 503,504, inflammatory bowel disease 505), immunosuppressed patients are likely to be more widely distributed throughout a healthcare facility rather than localized to single patient units (e.g., hematology-oncology). Guidelines for preventing infections in certain groups of immunocompromised patients have been published 15,506,507. Published data provide evidence to support placing allogeneic hematopoietic stem cell transplant (HSCT) patients in a Protective Environment 15,157,158. Also, three guidelines have been developed that address the special requirements of these immunocompromised patients, including use of antimicrobial prophylaxis and engineering controls to create a Protective Environment for the prevention of infections caused by Aspergillus spp. and other environmental fungi 11,14,15. As more intense chemotherapy regimens associated with prolonged periods of neutropenia or graft-versus-host disease are implemented, the period of risk and duration of environmental protection may need to be prolonged beyond the traditional 100 days 508.

# I.E.2. Cystic fibrosis patients
Patients with cystic fibrosis (CF) require special consideration when developing infection control guidelines. Compared to other patients, CF patients require additional protection to prevent transmission from contaminated respiratory therapy equipment [509][510][511][512][513]. Infectious agents such as Burkholderia cepacia complex and P. aeruginosa 464,465,514,515 have unique clinical and prognostic significance. In CF patients, B. cepacia infection has been associated with increased morbidity and mortality [516][517][518], while delayed acquisition of chronic P. aeruginosa infection may be associated with an improved long-term clinical outcome 519,520. Person-to-person transmission of B. cepacia complex has been demonstrated among children 517 and adults 521 with CF in healthcare settings 464,522, during various social contacts 523, most notably attendance at camps for patients with CF 524, and among siblings with CF 525. Successful infection control measures used to prevent transmission of respiratory secretions include segregation of CF patients from each other in ambulatory and hospital settings (including use of private rooms with separate showers), environmental decontamination of surfaces and equipment contaminated with respiratory secretions, elimination of group chest physiotherapy sessions, and disbanding of CF camps 97,526. The Cystic Fibrosis Foundation published a consensus document with evidence-based recommendations for infection control practices for CF patients 20.

# I.F. New therapies associated with potentially transmissible infectious agents

# I.F.1. Gene therapy

Gene therapy has been attempted using a number of different viral vectors, including nonreplicating retroviruses, adenoviruses, adeno-associated viruses, and replication-competent strains of poxviruses. Unexpected adverse events have restricted the prevalence of gene therapy protocols. The infectious hazards of gene therapy are theoretical at this time but require meticulous surveillance because of the possible occurrence of in vivo recombination and the subsequent emergence of a transmissible, genetically altered pathogen. The greatest concern attends the use of replication-competent viruses, especially vaccinia. As of the time of publication, no reports have described transmission of a vector virus from a gene therapy recipient to another individual, but surveillance is ongoing. Recommendations for monitoring infection control issues throughout the course of gene therapy trials have been published [527][528][529].

# I.F.2. Infections transmitted through blood, organs and other tissues

The potential hazard of transmitting infectious pathogens through biologic products is a small but ever-present risk, despite donor screening. Reported infections transmitted by transfusion or transplantation include West Nile virus infection 530, cytomegalovirus infection 531, Creutzfeldt-Jakob disease 230, hepatitis C 532, infections with Clostridium spp. 533 and group A streptococcus 534, malaria 535, babesiosis 536, Chagas disease 537, lymphocytic choriomeningitis 538, and rabies 539,540. Therefore, it is important to consider receipt of biologic products when evaluating patients for potential sources of infection.

# I.F.3. Xenotransplantation

The transplantation of nonhuman cells, tissues, and organs into humans potentially exposes patients to zoonotic pathogens.
Transmission of known zoonotic infections (e.g., trichinosis from porcine tissue) constitutes one concern, but also of concern is the possibility that transplantation of nonhuman cells, tissues, or organs may transmit previously unknown zoonotic infections (xenozoonoses) to immunosuppressed human recipients. Potential infections that might accompany transplantation of porcine organs have been described 541. Guidelines from the U.S. Public Health Service address many infectious diseases and infection control issues that surround the developing field of xenotransplantation 542; work in this area is ongoing.

# Part II: Fundamental elements needed to prevent transmission of infectious agents in healthcare settings

# II.A. Healthcare system components that influence the effectiveness of precautions to prevent transmission

# II.A.1. Administrative measures

Healthcare organizations can demonstrate a commitment to preventing transmission of infectious agents by incorporating infection control into the objectives of the organization's patient and occupational safety programs [543][544][545][546][547]. An infrastructure to guide, support, and monitor adherence to Standard and Transmission-Based Precautions 434,548,549 will facilitate fulfillment of the organization's mission and achievement of the Joint Commission on Accreditation of Healthcare Organizations' patient safety goal to decrease HAIs 550. Policies and procedures that explain how Standard and Transmission-Based Precautions are applied, including systems used to identify and communicate information about patients with potentially transmissible infectious agents, are essential to ensure the success of these measures and may vary according to the characteristics of the organization. A key administrative measure is provision of fiscal and human resources for maintaining infection control and occupational health programs that are responsive to emerging needs. Specific components include bedside nurse 551 and infection prevention and control professional (ICP) staffing levels 552, inclusion of ICPs in facility construction and design decisions 11, clinical microbiology laboratory support 553,554, adequate supplies and equipment, including facility ventilation systems 11, adherence monitoring 555, assessment and correction of system failures that contribute to transmission 556,557, and provision of feedback to healthcare personnel and senior administrators 434,548,549,558. The positive influence of institutional leadership has been demonstrated repeatedly in studies of HCW adherence to recommended hand hygiene practices 176,177,434,548,549,[559][560][561][562][563][564]. Healthcare administrator involvement in infection control processes can improve administrators' awareness of the rationale and resource requirements for following recommended infection control practices. Several administrative factors may affect the transmission of infectious agents in healthcare settings: institutional culture, individual worker behavior, and the work environment. Each of these areas is suitable for performance improvement monitoring and incorporation into the organization's patient safety goals 543,544,546,565.

# II.A.1.a. Scope of work and staffing needs for infection control professionals

The effectiveness of infection surveillance and control programs in preventing nosocomial infections in United States hospitals was assessed by the CDC through the Study on the Efficacy of Nosocomial Infection Control (SENIC Project), conducted from 1970 to 1976 566.
In a representative sample of US general hospitals, those with a trained infection control physician or microbiologist involved in an infection control program, and at least one infection control nurse per 250 beds, were associated with a 32% lower rate of the four infections studied (CVC-associated bloodstream infections, ventilator-associated pneumonias, catheter-related urinary tract infections, and surgical site infections). Since that landmark study was published, the responsibilities of ICPs have expanded commensurate with the growing complexity of the healthcare system, the patient populations served, and the increasing numbers of medical procedures and devices used in all types of healthcare settings. The scope of work of ICPs was first assessed in 1982 [567][568][569] by the Certification Board of Infection Control (CBIC) and has been reassessed every five years since that time 558,[570][571][572]. The findings of these task analyses have been used to develop and update the Infection Control Certification Examination, offered for the first time in 1983. With each survey, it is apparent that the role of the ICP is growing in complexity and scope, beyond traditional infection control activities in acute care hospitals. Activities currently assigned to ICPs in response to emerging challenges include: 1) surveillance and infection prevention at facilities other than acute care hospitals (e.g., ambulatory clinics, day surgery centers, long-term care facilities, rehabilitation centers, home care); 2) oversight of employee health services related to infection prevention (e.g., assessment of risk and administration of recommended treatment following exposure to infectious agents, tuberculosis screening, influenza vaccination, respiratory protection fit testing, and administration of other vaccines as indicated, such as smallpox vaccine in 2003); 3) preparedness planning for annual influenza outbreaks, pandemic influenza, SARS, and bioweapons attacks; 4) adherence monitoring for selected infection control practices; 5) oversight of risk assessment and implementation of prevention measures associated with construction and renovation; 6) prevention of transmission of MDROs; 7) evaluation of new medical products that could be associated with increased infection risk (e.g., intravenous infusion materials); 8) communication with the public, facility staff, and state and local health departments concerning infection control-related issues; and 9) participation in local and multi-center research projects 434,549,552,558,573,574. None of the CBIC job analyses addressed specific staffing requirements for the identified tasks, although the surveys did include information about hours worked; the 2001 survey included the number of ICPs assigned to the responding facilities 558. There is agreement in the literature that 1 ICP per 250 acute care beds is no longer adequate to meet current infection control needs; a Delphi project that assessed staffing needs of infection control programs in the 21st century concluded that a ratio of 0.8 to 1.0 ICP per 100 occupied acute care beds is an appropriate level of staffing 552. A survey of participants in the National Nosocomial Infections Surveillance (NNIS) system found that the average daily census per ICP was 115 316. Results of other studies have been similar: 3 per 500 beds for large acute care hospitals, 1 per 150-250 beds in long-term care facilities, and 1.56 per 250 in small rural hospitals 573,575.
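To illustrate the arithmetic behind these benchmarks, the following sketch (in Python; the 400-bed facility and its census are hypothetical, not drawn from the cited studies) compares ICP full-time-equivalent (FTE) estimates under the historical 1-per-250-beds ratio and the Delphi-project ratio of 0.8 to 1.0 ICP per 100 occupied acute care beds 552.

```python
# Illustrative comparison of ICP staffing benchmarks (hypothetical facility).
# Historical benchmark: 1 ICP per 250 acute care beds (SENIC era).
# Delphi-project benchmark: 0.8-1.0 ICP per 100 *occupied* acute care beds.

def icp_staffing(licensed_beds: int, average_daily_census: int) -> dict:
    """Return ICP FTE estimates under both benchmarks."""
    historical_fte = licensed_beds / 250            # 1 per 250 licensed beds
    delphi_low = 0.8 * average_daily_census / 100   # 0.8 per 100 occupied beds
    delphi_high = 1.0 * average_daily_census / 100  # 1.0 per 100 occupied beds
    return {
        "historical_fte": round(historical_fte, 2),
        "delphi_fte_range": (round(delphi_low, 2), round(delphi_high, 2)),
    }

# Example: a hypothetical 400-bed hospital with an average daily census of 320.
print(icp_staffing(400, 320))
# -> {'historical_fte': 1.6, 'delphi_fte_range': (2.56, 3.2)}
```

For a heavily occupied facility, the census-adjusted Delphi ratio yields roughly twice the staffing of the older bed-count ratio, consistent with the conclusion that the 1-per-250 benchmark is no longer adequate.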
The foregoing demonstrates that infection control staffing can no longer be based on patient census alone but rather must be determined by the scope of the program, the characteristics of the patient population, the complexity of the healthcare system, the tools available to assist personnel to perform essential tasks (e.g., electronic tracking and laboratory support for surveillance), and the unique or urgent needs of the institution and community 552. Furthermore, appropriate training is required to optimize the quality of work performed 558,572,576.

# II.A.1.a.i. Infection Control Nurse Liaison

Designating a bedside nurse on a patient care unit as an infection control liaison or "link nurse" is reported to be an effective adjunct to enhance infection control at the unit level [577][578][579][580][581][582]. Such individuals receive training in basic infection control and have frequent communication with the ICPs but maintain their primary role as bedside caregiver on their units. The infection control nurse liaison increases awareness of infection control at the unit level. He or she is especially effective in implementation of new policies or control interventions because of the rapport with individuals on the unit, an understanding of unit-specific challenges, and the ability to promote strategies that are most likely to be successful in that unit. This position is an adjunct to, not a replacement for, fully trained ICPs. Furthermore, infection control liaison nurses should not be counted when considering ICP staffing.

# II.A.1.b. Bedside nurse staffing

There is increasing evidence that the level of bedside nurse staffing influences the quality of patient care 583,584. If nursing staffing is adequate, it is more likely that infection control practices, including hand hygiene and Standard and Transmission-Based Precautions, will be given appropriate attention and applied correctly and consistently 552. A national multicenter study reported strong and consistent inverse relationships between nurse staffing and five adverse outcomes in medical patients, two of which were HAIs: urinary tract infections and pneumonia 583. The association of nursing staff shortages with increased rates of HAIs has been demonstrated in several outbreaks in hospitals and long-term care settings, and with increased transmission of hepatitis C virus in dialysis units 22,418,551,[585][586][587][588][589][590][591][592][593][594][595][596][597]. In most cases, when staffing improved as part of a comprehensive control intervention, the outbreak ended or the HAI rate declined. In two studies 590,596, the composition of the nursing staff ("pool" or "float" vs. regular staff nurses) influenced the rate of primary bloodstream infections, with an increased infection rate occurring when the proportion of regular nurses decreased and that of pool nurses increased.

# II.A.1.c. Clinical microbiology laboratory support

The critical role of the clinical microbiology laboratory in infection control and healthcare epidemiology has been well described 553,554,[598][599][600] and is supported by the Infectious Diseases Society of America policy statement on consolidation of clinical microbiology laboratories published in 2001 553.
The clinical microbiology laboratory contributes to preventing transmission of infectious diseases in healthcare settings by promptly detecting and reporting epidemiologically important organisms, identifying emerging patterns of antimicrobial resistance, and assisting in assessment of the effectiveness of recommended precautions to limit transmission during outbreaks 598. Outbreaks of infections may be recognized first by laboratorians 162. Healthcare organizations need to ensure the availability of the recommended scope and quality of laboratory services, a sufficient number of appropriately trained laboratory staff members, and systems to promptly communicate epidemiologically important results to those who will take action (e.g., providers of clinical care, infection control staff, healthcare epidemiologists, and infectious disease consultants) 601. As concerns about emerging pathogens and bioterrorism grow, the role of the clinical microbiology laboratory takes on even greater importance. For healthcare organizations that outsource microbiology laboratory services (e.g., ambulatory care, home care, LTCFs, smaller acute care hospitals), it is important to specify by contract the types of services (e.g., periodic institution-specific aggregate susceptibility reports) required to support infection control. Several key functions of the clinical microbiology laboratory are relevant to this guideline:

• Antimicrobial susceptibility testing and interpretation in accordance with current guidelines developed by the National Committee for Clinical Laboratory Standards (NCCLS), known as the Clinical and Laboratory Standards Institute (CLSI) since 2005 602, for the detection of emerging resistance patterns 603,604, and for the preparation, analysis, and distribution of periodic cumulative antimicrobial susceptibility summary reports [605][606][607]. While not required, clinical laboratories ideally should have access to rapid genotypic identification of bacteria and their antibiotic resistance genes 608.

• Performance of surveillance cultures when appropriate (including retention of isolates for analysis) to assess patterns of infection transmission and effectiveness of infection control interventions at the facility or organization. Microbiologists assist in decisions concerning the indications for initiating and discontinuing active surveillance programs and optimize the use of laboratory resources.

• Molecular typing, on-site or outsourced, in order to investigate and control healthcare-associated outbreaks 609.

• Application of rapid diagnostic tests to support clinical decisions involving patient treatment, room selection, and implementation of control measures, including barrier precautions and use of vaccine or chemoprophylaxis agents (e.g., influenza [610][611][612], B. pertussis 613, RSV 614,615, and enteroviruses 616). The microbiologist provides guidance to limit rapid testing to clinical situations in which rapid results influence patient management decisions, as well as oversight of point-of-care testing performed by non-laboratory healthcare workers 617.

• Detection and rapid reporting of epidemiologically important organisms, including those that are reportable to public health agencies.

• Implementation of a quality control program that ensures testing services are appropriate for the population served and stringently evaluated for sensitivity, specificity, applicability, and feasibility.
• Participation in a multidisciplinary team to develop and maintain an effective institutional program for the judicious use of antimicrobial agents 618,619.

# II.A.2. Institutional safety culture and organizational characteristics

Safety culture (or safety climate) refers to a work environment where a shared commitment to safety on the part of management and the workforce is understood and followed 557,620,621. The authors of the Institute of Medicine report, To Err is Human 543, acknowledge that the causes of medical error are multifaceted but emphasize repeatedly the pivotal role of system failures and the benefits of a safety culture. A safety culture is created through 1) the actions management takes to improve patient and worker safety; 2) worker participation in safety planning; 3) the availability of appropriate protective equipment; 4) the influence of group norms regarding acceptable safety practices; and 5) the organization's socialization process for new personnel. Safety and patient outcomes can be enhanced by improving or creating organizational characteristics within patient care units, as demonstrated by studies of surgical ICUs 622,623. Each of these factors has a direct bearing on adherence to transmission prevention recommendations 257. Measurement of an institutional culture of safety is useful for designing improvements in healthcare 624,625. Several hospital-based studies have linked measures of safety culture with both employee adherence to safe practices and reduced exposures to blood and body fluids [626][627][628][629][630][631][632]. One study of hand hygiene practices concluded that improved adherence requires integration of infection control into the organization's safety culture 561. Several hospitals that are part of the Veterans Administration Healthcare System have taken specific steps toward improving the safety culture, including error-reporting mechanisms, root cause analysis of identified problems, safety incentives, and employee education [633][634][635].

# II.A.3. Adherence of healthcare personnel to recommended guidelines

Adherence to recommended infection control practices decreases transmission of infectious agents in healthcare settings 116,562,[636][637][638][639][640]. However, several observational studies have shown limited adherence to recommended practices by healthcare personnel 559,[640][641][642][643][644][645][646][647][648][649][650][651][652][653][654][655][656][657]. Observed adherence to universal precautions ranged from 43% to 89% 641,642,649,651,652. However, the degree of adherence frequently depended on the practice that was assessed and, for glove use, the circumstances in which gloves were worn. Appropriate glove use has ranged from a low of 15% 645 to a high of 82% 650. However, 92% and 98% adherence with glove use have been reported during arterial blood gas collection and resuscitation, respectively, procedures in which there may be considerable blood contact 643,656. Differences in observed adherence have been reported among occupational groups in the same healthcare facility 641 and between experienced and inexperienced professionals 645. In surveys of healthcare personnel, self-reported adherence was generally higher than that reported in observational studies. Furthermore, where an observational component was included with a self-reported survey, self-perceived adherence was often greater than observed adherence 657.
Among nurses and physicians, increasing years of experience is a negative predictor of adherence 645,651. Education to improve adherence is the primary intervention that has been studied. While positive changes in knowledge and attitude have been demonstrated 640,658, there often has been limited or no accompanying change in behavior 642,644. Self-reported adherence is higher in groups that have received an educational intervention 630,659. Educational interventions that incorporated videotaping and performance feedback were successful in improving adherence during the period of study; the long-term effect of these interventions is not known 654. The use of videotape also served to identify system problems (e.g., communication and access to personal protective equipment) that otherwise may not have been recognized. Use of engineering controls and facility design concepts for improving adherence is gaining interest. While the introduction of automated sinks had a negative impact on consistent adherence to handwashing 660, use of electronic monitoring and voice prompts to remind healthcare workers to perform hand hygiene, together with improved accessibility to hand hygiene products, increased adherence and contributed to a decrease in HAIs in one study 661. More information is needed regarding how technology might improve adherence. Improving adherence to infection control practices requires a multifaceted approach that incorporates continuous assessment of both the individual and the work environment 559,561. Using several behavioral theories, Kretzer and Larson concluded that a single intervention (e.g., a handwashing campaign or putting up new posters about transmission precautions) would likely be ineffective in improving healthcare personnel adherence 662. Improvement requires that the organizational leadership make prevention an institutional priority and integrate infection control practices into the organization's safety culture 561. A recent review of the literature concluded that variations in organizational factors (e.g., safety climate, policies and procedures, education and training) and individual factors (e.g., knowledge, perceptions of risk, past experience) were determinants of adherence to infection control guidelines for protection against SARS and other respiratory pathogens 257.

# II.B. Surveillance for healthcare-associated infections (HAIs)

Surveillance is an essential tool for case-finding of single patients or clusters of patients who are infected or colonized with epidemiologically important organisms (e.g., susceptible bacteria such as S. aureus, S. pyogenes [Group A streptococcus], or Enterobacter-Klebsiella spp.; MRSA, VRE, and other MDROs; C. difficile; RSV; influenza virus) for which transmission-based precautions may be required. Surveillance is defined as the ongoing, systematic collection, analysis, interpretation, and dissemination of data regarding a health-related event for use in public health action to reduce morbidity and mortality and to improve health 663. The work of Ignaz Semmelweis describing the role of person-to-person transmission in puerperal sepsis is the earliest example of the use of surveillance data to reduce transmission of infectious agents 664. Surveillance of both process measures and the infection rates to which they are linked is important for evaluating the effectiveness of infection prevention efforts and identifying indications for change 555,[665][666][667][668].
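As a concrete illustration of the denominator-adjusted measures discussed here and in the paragraph that follows (rates per 1000 device-days and statistical process control limits), the Python sketch below uses hypothetical counts; it is not an NNIS/NHSN tool, only a minimal example assuming Poisson-distributed infection counts.

```python
# Minimal sketch: device-associated infection rate per 1000 device-days,
# with a simple u-chart-style (Poisson) 3-sigma upper control limit.
# All counts below are hypothetical and for illustration only.

def rate_per_1000(infections: int, device_days: int) -> float:
    """Infections per 1000 device-days (a standard denominator-adjusted rate)."""
    return 1000 * infections / device_days

def u_chart_ucl(baseline_rate_per_1000: float, device_days: int) -> float:
    """3-sigma upper control limit for a monthly rate, assuming Poisson counts."""
    u = baseline_rate_per_1000 / 1000        # baseline infections per device-day
    ucl = u + 3 * (u / device_days) ** 0.5   # classic u-chart control limit
    return 1000 * ucl                        # re-express per 1000 device-days

# Example: an ICU with 4 central line-associated bloodstream infections over
# 950 central-line days, against a hypothetical baseline of 2.5 per 1000.
print(f"Monthly rate: {rate_per_1000(4, 950):.2f} per 1000 line-days")
print(f"Control limit: {u_chart_ucl(2.5, 950):.2f} per 1000 line-days")
```

In this hypothetical month, the observed rate (about 4.2 per 1000 line-days) remains below the control limit (about 7.4), so it would not, by itself, signal a departure from the baseline.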
The Study on the Efficacy of Nosocomial Infection Control (SENIC) found that different combinations of infection control practices resulted in reduced rates of nosocomial surgical site infections, pneumonia, urinary tract infections, and bacteremia in acute care hospitals 566; however, surveillance was the only component essential for reducing all four types of HAIs. Although a similar study has not been conducted in other healthcare settings, a role for surveillance and the need for novel strategies have been described in LTCFs 398,434,669,670 and in home care [470][471][472][473]. The essential elements of a surveillance system are: 1) standardized definitions; 2) identification of patient populations at risk for infection; 3) statistical analysis (e.g., risk adjustment, calculation of rates using appropriate denominators, and trend analysis using methods such as statistical process control charts); and 4) feedback of results to the primary caregivers [671][672][673][674][675][676]. Data gathered through surveillance of high-risk populations, device use, procedures, and/or facility locations (e.g., ICUs) are useful for detecting transmission trends [671][672][673]. Identification of clusters of infections should be followed by a systematic epidemiologic investigation to determine commonalities in persons, places, and time, and to guide implementation of interventions and evaluation of the effectiveness of those interventions. Targeted surveillance based on the highest-risk areas or patients has been preferred over facility-wide surveillance for the most effective use of resources 673,676. However, surveillance for certain epidemiologically important organisms may need to be facility-wide. Surveillance methods will continue to evolve as healthcare delivery systems change 392,677 and user-friendly electronic tools become more widely available for electronic tracking and trend analysis 674,678,679. Individuals with experience in healthcare epidemiology and infection control should be involved in selecting software packages for data aggregation and analysis to ensure that the need for efficient and accurate HAI surveillance will be met. Effective surveillance is increasingly important as legislation requiring public reporting of HAI rates is passed and states work to develop effective systems to support such legislation 680.

# II.C. Education of HCWs, patients, and families

Education and training of healthcare personnel are a prerequisite for ensuring that policies and procedures for Standard and Transmission-Based Precautions are understood and practiced. Understanding the scientific rationale for the precautions will allow HCWs to apply procedures correctly, as well as safely modify precautions based on changing requirements, resources, or healthcare settings 14,655,[681][682][683][684][685][686][687][688]. In one study, the likelihood of HCWs developing SARS was strongly associated with less than 2 hours of infection control training and lack of understanding of infection control procedures 689. Education about the important role of vaccines (e.g., influenza, measles, varicella, pertussis, pneumococcal) in protecting healthcare personnel, their patients, and family members can help improve vaccination rates [690][691][692][693]. Patients, family members, and visitors can be partners in preventing transmission of infections in healthcare settings 9,42,[709][710][711].
Information about Standard Precautions, especially hand hygiene, Respiratory Hygiene/Cough Etiquette, vaccination (especially against influenza), and other routine infection prevention strategies may be incorporated into patient information materials that are provided upon admission to the healthcare facility. Additional information about Transmission-Based Precautions is best provided at the time they are initiated. Fact sheets, pamphlets, and other printed material may include information on the rationale for the additional precautions, risks to household members, room assignment for Transmission-Based Precautions purposes, explanation about the use of personal protective equipment by HCWs, and directions for use of such equipment by family members and visitors. Such information may be particularly helpful in the home environment, where household members often have primary responsibility for adherence to recommended infection control practices. Healthcare personnel must be available and prepared to explain this material and answer questions as needed.

# II.D. Hand hygiene

Hand hygiene has been cited frequently as the single most important practice to reduce the transmission of infectious agents in healthcare settings 559,712,713 and is an essential element of Standard Precautions. The term "hand hygiene" includes both handwashing with either plain or antiseptic-containing soap and water, and use of alcohol-based products (gels, rinses, foams) that do not require the use of water. In the absence of visible soiling of hands, approved alcohol-based products for hand disinfection are preferred over antimicrobial or plain soap and water because of their superior microbicidal activity, reduced drying of the skin, and convenience 559. Improved hand hygiene practices have been associated with a sustained decrease in the incidence of MRSA and VRE infections, primarily in the ICU 561,562,[714][715][716][717]. The scientific rationale, indications, methods, and products for hand hygiene are summarized in other publications 559,717. The effectiveness of hand hygiene can be reduced by the type and length of fingernails 559,718,719. Individuals wearing artificial nails have been shown to harbor more pathogenic organisms, especially gram-negative bacilli and yeasts, on the nails and in the subungual area than those with native nails 720,721. In 2002, CDC/HICPAC recommended (Category IA) that artificial fingernails and extenders not be worn by healthcare personnel who have contact with high-risk patients (e.g., those in ICUs, ORs) due to the association with outbreaks of gram-negative bacillus and candidal infections, as confirmed by molecular typing of isolates 30,31,559,[722][723][724][725]. The need to restrict the wearing of artificial fingernails by all healthcare personnel who provide direct patient care or by healthcare personnel who have contact with other high-risk groups (e.g., oncology or cystic fibrosis patients) has not been studied but has been recommended by some experts 20. At this time, such decisions are at the discretion of an individual facility's infection control program. There is less evidence that jewelry affects the quality of hand hygiene. Although hand contamination with potential pathogens is increased with ring-wearing 559,726, no studies have related this practice to HCW-to-patient transmission of pathogens.

# II.E. Personal protective equipment (PPE) for healthcare personnel
PPE refers to a variety of barriers and respirators used alone or in combination to protect mucous membranes, airways, skin, and clothing from contact with infectious agents. The selection of PPE is based on the nature of the patient interaction and/or the likely mode(s) of transmission. Guidance on the use of PPE is discussed in Part III. A suggested procedure for donning and removing PPE that will prevent skin or clothing contamination is presented in the Figure. Designated containers for used disposable or reusable PPE should be placed in a location that is convenient to the site of removal to facilitate disposal and containment of contaminated materials. Hand hygiene is always the final step after removing and disposing of PPE. The following sections highlight the primary uses and methods for selecting this equipment.

# II.E.1. Gloves

Gloves are used to prevent contamination of healthcare personnel hands when 1) anticipating direct contact with blood or body fluids, mucous membranes, nonintact skin, and other potentially infectious material; 2) having direct contact with patients who are colonized or infected with pathogens transmitted by the contact route (e.g., VRE, MRSA, RSV) 559,727,728; or 3) handling or touching visibly or potentially contaminated patient care equipment and environmental surfaces 72,73,559. Gloves can protect both patients and healthcare personnel from exposure to infectious material that may be carried on hands 73. The extent to which gloves will protect healthcare personnel from transmission of bloodborne pathogens (e.g., HIV, HBV, HCV) following a needlestick or other puncture that penetrates the glove barrier has not been determined. Although gloves may reduce the volume of blood on the external surface of a sharp by 46-86% 729, the residual blood in the lumen of a hollow-bore needle would not be affected; therefore, the effect on transmission risk is unknown. Gloves manufactured for healthcare purposes are subject to FDA evaluation and clearance 730. Nonsterile disposable medical gloves made of a variety of materials (e.g., latex, vinyl, nitrile) are available for routine patient care 731. The selection of glove type for non-surgical use is based on a number of factors, including the task to be performed, anticipated contact with chemicals and chemotherapeutic agents, latex sensitivity, sizing, and facility policies for creating a latex-free environment 17,[732][733][734]. For contact with blood and body fluids during non-surgical patient care, a single pair of gloves generally provides adequate barrier protection 734. However, there is considerable variability among gloves; both the quality of the manufacturing process and the type of material influence their barrier effectiveness 735. While there is little difference in the barrier properties of unused intact gloves 736, studies have shown repeatedly that vinyl gloves have higher failure rates than latex or nitrile gloves when tested under simulated and actual clinical conditions 731,[735][736][737][738]. For this reason, either latex or nitrile gloves are preferable for clinical procedures that require manual dexterity and/or will involve more than brief patient contact. It may be necessary to stock gloves in several sizes. Heavier, reusable utility gloves are indicated for non-patient care activities, such as handling or cleaning contaminated equipment or surfaces 11,14.
During patient care, transmission of infectious organisms can be reduced by adhering to the principles of working from "clean" to "dirty" and confining or limiting contamination to surfaces that are directly needed for patient care. It may be necessary to change gloves during the care of a single patient to prevent cross-contamination of body sites 559,740. It also may be necessary to change gloves if the patient interaction involves touching portable computer keyboards or other mobile equipment that is transported from room to room. Discarding gloves between patients is necessary to prevent transmission of infectious material. Gloves must not be washed for subsequent reuse, because microorganisms cannot be removed reliably from glove surfaces and continued glove integrity cannot be ensured. Furthermore, glove reuse has been associated with transmission of MRSA and gram-negative bacilli [741][742][743]. When gloves are worn in combination with other PPE, they are put on last. Gloves that fit snugly around the wrist are preferred for use with an isolation gown because they will cover the gown cuff and provide a more reliable continuous barrier for the arms, wrists, and hands. Gloves that are removed properly will prevent hand contamination (Figure). Hand hygiene following glove removal further ensures that the hands will not carry potentially infectious material that might have penetrated through unrecognized tears or that could contaminate the hands during glove removal 559,728,741.

# II.E.2. Isolation gowns

Isolation gowns are used, as specified by Standard and Transmission-Based Precautions, to protect the HCW's arms and exposed body areas and prevent contamination of clothing with blood, body fluids, and other potentially infectious material 24,88,262,[744][745][746]. The need for, and type of, isolation gown selected is based on the nature of the patient interaction, including the anticipated degree of contact with infectious material and the potential for blood and body fluid penetration of the barrier. The wearing of isolation gowns and other protective apparel is mandated by the OSHA Bloodborne Pathogens Standard 739. Clinical and laboratory coats or jackets worn over personal clothing for comfort and/or purposes of identity are not considered PPE. When applying Standard Precautions, an isolation gown is worn only if contact with blood or body fluid is anticipated. However, when Contact Precautions are used (i.e., to prevent transmission of an infectious agent that is not interrupted by Standard Precautions alone and that is associated with environmental contamination), donning of both gown and gloves upon room entry is indicated to address unintentional contact with contaminated environmental surfaces 54,72,73,88. The routine donning of isolation gowns upon entry into an intensive care unit or other high-risk area does not prevent or influence potential colonization or infection of patients in those areas 365,[747][748][749][750]. Isolation gowns are always worn in combination with gloves, and with other PPE when indicated. Gowns are usually the first piece of PPE to be donned. Full coverage of the arms and body front, from neck to mid-thigh or below, will ensure that clothing and exposed upper body areas are protected. Several gown sizes should be available in a healthcare facility to ensure appropriate coverage for staff members.
Isolation gowns should be removed before leaving the patient care area to prevent possible contamination of the environment outside the patient's room. Isolation gowns should be removed in a manner that prevents contamination of clothing or skin (Figure). The outer, "contaminated" side of the gown is turned inward and rolled into a bundle, and then discarded into a designated container for waste or linen to contain contamination.

# II.E.3. Face protection: masks, goggles, face shields

# II.E.3.a. Masks

Masks are used for three primary purposes in healthcare settings: 1) placed on healthcare personnel to protect them from contact with infectious material from patients (e.g., respiratory secretions and sprays of blood or body fluids), consistent with Standard Precautions and Droplet Precautions; 2) placed on healthcare personnel when engaged in procedures requiring sterile technique, to protect patients from exposure to infectious agents carried in a healthcare worker's mouth or nose; and 3) placed on coughing patients to limit potential dissemination of infectious respiratory secretions from the patient to others (i.e., Respiratory Hygiene/Cough Etiquette). Masks may be used in combination with goggles to protect the mouth, nose, and eyes, or a face shield may be used instead of a mask and goggles to provide more complete protection for the face, as discussed below. Masks should not be confused with particulate respirators, which are used to prevent inhalation of small particles that may contain infectious agents transmitted via the airborne route, as described below. The mucous membranes of the mouth, nose, and eyes are susceptible portals of entry for infectious agents, as are other skin surfaces if skin integrity is compromised (e.g., by acne, dermatitis) 66,[751][752][753][754]. Therefore, use of PPE to protect these body sites is an important component of Standard Precautions. The protective effect of masks for exposed healthcare personnel has been demonstrated 93,113,755,756. Procedures that generate splashes or sprays of blood, body fluids, secretions, or excretions (e.g., endotracheal suctioning, bronchoscopy, invasive vascular procedures) require either a face shield (disposable or reusable) or a mask and goggles [93][94][95][96],113,115,262,739,757. The wearing of masks, eye protection, and face shields in specified circumstances when blood or body fluid exposures are likely to occur is mandated by the OSHA Bloodborne Pathogens Standard 739. Appropriate PPE should be selected based on the anticipated level of exposure. Two mask types are available for use in healthcare settings: surgical masks, which are cleared by the FDA and required to have fluid-resistant properties, and procedure or isolation masks 758. No studies have been published that compare mask types to determine whether one mask type provides better protection than another. Since procedure/isolation masks are not regulated by the FDA, there may be more variability in quality and performance than with surgical masks. Masks come in various shapes (e.g., molded and non-molded), sizes, filtration efficiencies, and methods of attachment (e.g., ties, elastic, ear loops). Healthcare facilities may find that different types of masks are needed to meet individual healthcare personnel needs.

# II.E.3.b. Goggles, face shields

Guidance on eye protection for infection control has been published 759.
The eye protection chosen for specific work situations (e.g., goggles or face shield) depends upon the circumstances of exposure, other PPE used, and personal vision needs. Personal eyeglasses and contact lenses are NOT considered adequate eye protection (www.cdc.gov/niosh/topics/eye/eye-infectious.html). NIOSH states that eye protection must be comfortable, allow for sufficient peripheral vision, and be adjustable to ensure a secure fit. It may be necessary to provide several different types, styles, and sizes of protective equipment. Indirectly-vented goggles with a manufacturer's anti-fog coating may provide the most reliable practical eye protection from splashes, sprays, and respiratory droplets from multiple angles. Newer styles of goggles may provide better indirect airflow properties to reduce fogging, as well as better peripheral vision and more size options for fitting goggles to different workers. Many styles of goggles fit adequately over prescription glasses with minimal gaps. While effective as eye protection, goggles do not provide splash or spray protection to other parts of the face. The role of goggles, in addition to a mask, in preventing exposure to infectious agents transmitted via respiratory droplets has been studied only for RSV. Reports published in the mid-1980s demonstrated that eye protection reduced occupational transmission of RSV 760,761. Whether this was due to preventing hand-eye contact or respiratory droplet-eye contact has not been determined. However, subsequent studies demonstrated that RSV transmission is effectively prevented by adherence to Standard plus Contact Precautions and that, for this virus, routine use of goggles is not necessary 24,116,117,684,762. It is important to remind healthcare personnel that, even if Droplet Precautions are not recommended for a specific respiratory tract pathogen, protection for the eyes, nose, and mouth by using a mask and goggles, or a face shield alone, is necessary when it is likely that there will be a splash or spray of any respiratory secretions or other body fluids, as defined in Standard Precautions. Disposable or non-disposable face shields may be used as an alternative to goggles 759. As compared with goggles, a face shield can provide protection to other facial areas in addition to the eyes. Face shields extending from chin to crown provide better face and eye protection from splashes and sprays; face shields that wrap around the sides may reduce splashes around the edge of the shield. Removal of a face shield, goggles, and mask can be performed safely after gloves have been removed and hand hygiene performed. The ties, ear pieces, and/or headband used to secure the equipment to the head are considered "clean" and therefore safe to touch with bare hands. The front of a mask, goggles, and face shield are considered contaminated (Figure).

# II.E.4. Respiratory protection

The subject of respiratory protection as it applies to preventing transmission of airborne infectious agents, including the need for and frequency of fit-testing, is under scientific review and was the subject of a CDC workshop in 2004 763. Respiratory protection currently requires the use of a respirator with N95 or higher filtration to prevent inhalation of infectious particles. Information about respirators and respiratory protection programs is summarized in the Guideline for Preventing Transmission of Mycobacterium tuberculosis in Health-care Settings, 2005 (CDC. MMWR 2005;54[RR-17]) 12.
Respiratory protection is broadly regulated by OSHA under the general industry standard for respiratory protection (29 CFR 1910.134) 764 , which requires that U.S. employers in all employment settings implement a program to protect employees from inhalation of toxic materials. OSHA program components include medical clearance to wear a respirator; provision and use of appropriate respirators, including fit-tested NIOSH-certified N95 and higher particulate filtering respirators; education on respirator use; and periodic re-evaluation of the respiratory protection program. When selecting particulate respirators, models with inherently good fit characteristics (i.e., those expected to provide protection factors of 10 or more to 95% of wearers) are preferred and could theoretically eliminate the need for fit testing 765,766 . Issues pertaining to respiratory protection remain the subject of ongoing debate. Information on various types of respirators may be found at www.cdc.gov/niosh/npptl/respirators/respsars.html and in published studies 765,767,768 . A user-seal check (formerly called a "fit check") should be performed by the wearer of a respirator each time a respirator is donned, to minimize air leakage around the facepiece 769 . The optimal frequency of fit-testing has not been determined; re-testing may be indicated if there is a change in facial features of the wearer, onset of a medical condition that would affect respiratory function in the wearer, or a change in the model or size of the initially assigned respirator 12 . Respiratory protection was first recommended to protect U.S. healthcare personnel from exposure to M. tuberculosis in 1989. That recommendation has been maintained in two successive revisions of the Guidelines for Prevention of Transmission of Tuberculosis in Hospitals and other Healthcare Settings 12,126 . The incremental benefit of respirator use, in addition to administrative and engineering controls (i.e., AIIRs, early recognition of patients likely to have tuberculosis and prompt placement in an AIIR, and maintenance of a patient with suspected tuberculosis in an AIIR until no longer infectious), for preventing transmission of airborne infectious agents (e.g., M. tuberculosis) is undetermined. Although some studies have demonstrated effective prevention of M. tuberculosis transmission in hospitals where surgical masks, instead of respirators, were used in conjunction with other administrative and engineering controls 637,770,771 , CDC currently recommends N95 or higher level respirators for personnel exposed to patients with suspected or confirmed tuberculosis. Currently this is also true for other diseases that could be transmitted through the airborne route, including SARS 262 and smallpox 108,129,772 , until inhalational transmission is better defined or healthcare-specific protective equipment more suitable for preventing infection is developed. Respirators are also currently recommended during the performance of aerosol-generating procedures (e.g., intubation, bronchoscopy, suctioning) on patients with SARS-CoV infection, avian influenza, or pandemic influenza (see Appendix A).
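The protection factor cited above can be made concrete with a small worked calculation. The sketch below is illustrative only and is not part of the guideline's recommendations; the function name and the example concentrations are hypothetical. It computes a fit factor, the ratio of the particle concentration outside the facepiece to the concentration inside it that a quantitative fit test estimates, and compares it with the minimum passing value of 100 that OSHA's fit-test protocols specify for half-facepiece respirators such as N95s.

```python
# Illustrative sketch only -- not part of the guideline's recommendations.
# A quantitative fit test estimates a fit factor as the ratio of the
# particle concentration outside the facepiece to the concentration
# measured inside it. The example values below are hypothetical.

def fit_factor(ambient_concentration: float, in_mask_concentration: float) -> float:
    """Dimensionless ratio of outside to inside particle concentration."""
    if in_mask_concentration <= 0:
        raise ValueError("in-mask concentration must be positive")
    return ambient_concentration / in_mask_concentration

# OSHA's quantitative fit-test protocols require a fit factor of at
# least 100 to pass with a half-facepiece respirator such as an N95.
MINIMUM_PASSING_FIT_FACTOR = 100

ambient = 5000.0  # particles/cc in the test environment (hypothetical)
in_mask = 40.0    # particles/cc inside the facepiece (hypothetical)

ff = fit_factor(ambient, in_mask)
print(f"Fit factor {ff:.0f}: {'PASS' if ff >= MINIMUM_PASSING_FIT_FACTOR else 'FAIL'}")
```

Note that the assigned protection factor of 10 expected in routine use is deliberately more conservative than the fit factor of 100 required under controlled test conditions.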
Although Airborne Precautions are recommended for preventing airborne transmission of measles and varicella-zoster viruses, there are no data upon which to base a recommendation for respiratory protection to protect susceptible personnel against these two infections; transmission of varicella-zoster virus has been prevented among pediatric patients using negative pressure isolation alone 773 . Whether respiratory protection (i.e., wearing a particulate respirator) would enhance protection from these viruses has not been studied. Since the majority of healthcare personnel have natural or acquired immunity to these viruses, only immune personnel generally care for patients with these infections 774-777 . Although there is no evidence to suggest that masks are not adequate to protect healthcare personnel in these settings, for purposes of consistency and simplicity, or because of difficulties in ascertaining immunity, some facilities may require the use of respirators for entry into all AIIRs, regardless of the specific infectious agent. Procedures for safe removal of respirators are provided (Figure). In some healthcare settings, particulate respirators used to provide care for patients with M. tuberculosis are reused by the same HCW. This is an acceptable practice provided that the respirator is not damaged or soiled, the fit is not compromised by a change in shape, and the respirator has not been contaminated with blood or body fluids. There are no data on which to base a recommendation for the length of time a respirator may be reused.
# II.F. Safe work practices to prevent HCW exposure to bloodborne pathogens
# II.F.1. Prevention of needlesticks and other sharps-related injuries
Injuries due to needles and other sharps have been associated with transmission of HBV, HCV, and HIV to healthcare personnel 778,779 . The prevention of sharps injuries has always been an essential element of Universal and now Standard Precautions 1,780 . These precautions include measures for handling needles and other sharp devices in a manner that will prevent injury to the user and to others who may encounter the device during or after a procedure. These measures apply to routine patient care and do not address the prevention of sharps injuries and other blood exposures during surgical and other invasive procedures, which are addressed elsewhere 781-785 . Since 1991, when OSHA first issued its Bloodborne Pathogens Standard to protect healthcare personnel from blood exposure, the focus of regulatory and legislative activity has been on implementing a hierarchy of control measures. This has included focusing attention on removing sharps hazards through the development and use of engineering controls. The federal Needlestick Safety and Prevention Act, signed into law in November 2000, authorized OSHA's revision of its Bloodborne Pathogens Standard to more explicitly require the use of safety-engineered sharp devices 786 . CDC has provided guidance on sharps injury prevention 787,788 , including for the design, implementation, and evaluation of a comprehensive sharps injury prevention program 789 .
# II.F.2. Prevention of mucous membrane contact
Exposure of mucous membranes of the eyes, nose, and mouth to blood and body fluids has been associated with the transmission of bloodborne viruses and other infectious agents to healthcare personnel 66,752,754,779 .
The prevention of mucous membrane exposures has always been an element of Universal and now Standard Precautions for routine patient care 1,753 and is subject to OSHA bloodborne pathogen regulations. Safe work practices, in addition to wearing PPE, are used to protect mucous membranes and non-intact skin from contact with potentially infectious material. These include keeping contaminated hands, whether gloved or ungloved, from touching the mouth, nose, eyes, or face; and positioning patients to direct sprays and splatter away from the face of the caregiver. Careful placement of PPE before patient contact will help avoid the need to make PPE adjustments, and thereby avoid possible face or mucous membrane contamination, during use. In areas where the need for resuscitation is unpredictable, mouthpieces, pocket resuscitation masks with one-way valves, and other ventilation devices provide an alternative to mouth-to-mouth resuscitation, preventing exposure of the caregiver's nose and mouth to oral and respiratory fluids during the procedure.
# II.F.2.a. Precautions during aerosol-generating procedures
The performance of procedures that can generate small particle aerosols (aerosol-generating procedures), such as bronchoscopy, endotracheal intubation, and open suctioning of the respiratory tract, has been associated with transmission of infectious agents to healthcare personnel, including M. tuberculosis 790 , SARS-CoV 93,94,98 , and N. meningitidis 95 . Protection of the eyes, nose, and mouth, in addition to gown and gloves, is recommended during performance of these procedures in accordance with Standard Precautions. Use of a particulate respirator is recommended during aerosol-generating procedures when the aerosol is likely to contain M. tuberculosis, SARS-CoV, or avian or pandemic influenza viruses.
# II.G. Patient placement
# II.G.1. Hospitals and long-term care settings
Options for patient placement include single-patient rooms, two-patient rooms, and multi-bed wards. Of these, single-patient rooms are preferred when there is a concern about transmission of an infectious agent. Although some studies have failed to demonstrate the efficacy of single-patient rooms to prevent HAIs 791 , other published studies, including one commissioned by the American Institute of Architects and the Facility Guidelines Institute, have documented a beneficial relationship between private rooms and reduction in infectious and noninfectious adverse patient outcomes 792,793 . The AIA notes that private rooms are the trend in hospital planning and design. However, most hospitals and long-term care facilities have multi-bed rooms and must consider many competing priorities when determining the appropriate room placement for patients (e.g., reason for admission; patient characteristics, such as age, gender, and mental status; staffing needs; family requests; psychosocial factors; reimbursement concerns). In the absence of obvious infectious diseases that require specified airborne infection isolation rooms (e.g., tuberculosis, SARS, chickenpox), the risk of transmission of infectious agents is not always considered when making placement decisions.
When there are only a limited number of single-patient rooms, it is prudent to prioritize them for those patients who have conditions that facilitate transmission of infectious material to other patients (e.g., draining wounds, stool incontinence, uncontained secretions) and for those who are at increased risk of acquisition and adverse outcomes resulting from HAI (e.g., immunosuppression, open wounds, indwelling catheters, anticipated prolonged length of stay, total dependence on HCWs for activities of daily living) 15,24,43,430,794,795 . Single-patient rooms are always indicated for patients placed on Airborne Precautions and in a Protective Environment and are preferred for patients who require Contact or Droplet Precautions 23,24,410,435,796,797 . During a suspected or proven outbreak caused by a pathogen whose reservoir is the gastrointestinal tract, use of single-patient rooms with private bathrooms limits opportunities for transmission, especially when the colonized or infected patient has poor personal hygiene habits or fecal incontinence, or cannot be expected to assist in maintaining procedures that prevent transmission of microorganisms (e.g., infants, children, and patients with altered mental status or developmental delay). In the absence of continued transmission, it is not necessary to provide a private bathroom for patients colonized or infected with enteric pathogens as long as personal hygiene practices and Standard Precautions, especially hand hygiene and appropriate environmental cleaning, are maintained. Assignment of a dedicated commode to a patient, and cleaning and disinfecting fixtures and equipment that may have fecal contamination (e.g., bathrooms, commodes 798 , scales used for weighing diapers) and the adjacent surfaces with appropriate agents, may be especially important when a single-patient room cannot be used, since environmental contamination with intestinal tract pathogens is likely from both continent and incontinent patients 54,799 . Results of several studies to determine the benefit of a single-patient room to prevent transmission of Clostridium difficile are inconclusive 167,800-802 . Some studies have shown that being in the same room with a colonized or infected patient is not necessarily a risk factor for transmission 791,803-805 . However, for children, the risk of healthcare-associated diarrhea increases with the number of patients per room 806 . Thus, patient factors are important determinants of infection transmission risks, and the need for a single-patient room and/or private bathroom for any patient is best determined on a case-by-case basis. Cohorting is the practice of grouping together patients who are colonized or infected with the same organism to confine their care to one area and prevent contact with other patients. Cohorts are created based on clinical diagnosis, microbiologic confirmation when available, epidemiology, and mode of transmission of the infectious agent. It is generally preferred not to place severely immunosuppressed patients in rooms with other patients. Cohorting has been used extensively for managing outbreaks of MDROs, including MRSA 22,807-811 ; RSV 812,813 ; adenovirus keratoconjunctivitis 814 ; rotavirus 815 ; and SARS 816 . Modeling studies provide additional support for cohorting patients to control outbreaks 817-819 . However, cohorting often is implemented only after routine infection control measures have failed to control an outbreak.
Assigning or cohorting healthcare personnel to care only for patients infected or colonized with a single target pathogen limits further transmission of the target pathogen to uninfected patients 740,819 but is difficult to achieve in the face of current staffing shortages in hospitals 583 and residential healthcare sites 820-822 . However, when continued transmission occurs after routine infection control measures have been implemented and patient cohorts created, cohorting of healthcare personnel may be beneficial. During the seasons when RSV, human metapneumovirus 823 , parainfluenza, influenza, other respiratory viruses 824 , and rotavirus are circulating in the community, cohorting based on the presenting clinical syndrome is often a priority in facilities that care for infants and young children 825 . For example, during the respiratory virus season, infants may be cohorted based solely on the clinical diagnosis of bronchiolitis because of the logistical difficulties and costs associated with requiring microbiologic confirmation prior to room placement, and the predominance of RSV during most of the season. However, when available, single-patient rooms are always preferred, since a common clinical presentation (e.g., bronchiolitis) can be caused by more than one infectious agent 823,824,826 . Furthermore, the inability of infants and children to contain body fluids, and the close physical contact that occurs during their care, increase infection transmission risks for patients and personnel in this setting 24,795 .
# II.G.2. Ambulatory settings
Patients actively infected with or incubating transmissible infectious diseases are seen frequently in ambulatory settings (e.g., outpatient clinics, physicians' offices, emergency departments) and potentially expose healthcare personnel and other patients, family members, and visitors 21,34,127,135,142,827 . In response to the global outbreak of SARS in 2003 and in preparation for pandemic influenza, healthcare providers working in outpatient settings are urged to implement source containment measures (e.g., asking coughing patients to wear a surgical mask or cover their coughs with tissues) to prevent transmission of respiratory infections, beginning at the point of initial patient encounter 9,262,828 , as described below in section III.A.1.a. Signs can be posted at the entrance to facilities or at the reception or registration desk requesting that the patient or individuals accompanying the patient promptly inform the receptionist if there are symptoms of a respiratory infection (e.g., cough, flu-like illness, increased production of respiratory secretions). The presence of diarrhea, skin rash, or known or suspected exposure to a transmissible disease (e.g., measles, pertussis, chickenpox, tuberculosis) also could be added. Placement of potentially infectious patients without delay in an examination room limits the number of exposed individuals (e.g., in the common waiting area). In waiting areas, maintaining a distance between symptomatic and nonsymptomatic patients (e.g., >3 feet), in addition to source control measures, may limit exposures. However, infections transmitted via the airborne route (e.g., M. tuberculosis, measles, chickenpox) require additional precautions 12,125,829 . Patients suspected of having such an infection can wear a surgical mask for source containment, if tolerated, and should be placed in an examination room, preferably an AIIR, as soon as possible.
If this is not possible, having the patient wear a mask and segregate him/herself from other patients in the waiting area will reduce opportunities to expose others. Since the person(s) accompanying the patient also may be infectious, application of the same infection control precautions may need to be extended to these persons if they are symptomatic 21,252,830 . For example, family members accompanying children admitted with suspected M. tuberculosis have been found to have unsuspected pulmonary tuberculosis with cavitary lesions, even when asymptomatic 42,831 . Patients with underlying conditions that increase their susceptibility to infection (e.g., those who are immunocompromised 43,44 or have cystic fibrosis 20 ) require special efforts to protect them from exposures to infected patients in common waiting areas. If such patients inform the receptionist of their infection risk upon arrival, appropriate steps can be taken to further protect them from infection. In some cystic fibrosis clinics, in order to avoid exposure to other patients who could be colonized with B. cepacia, patients have been given beepers upon registration so that they may leave the area and receive notification to return when an examination room becomes available 832 .
# II.G.3. Home care
In home care, patient placement concerns focus on protecting others in the home from exposure to an infectious household member. For individuals who are especially vulnerable to adverse outcomes associated with certain infections, it may be beneficial to either remove them from the home or segregate them within the home. Persons who are not part of the household may need to be prohibited from visiting during the period of infectivity. For example, if a patient with pulmonary tuberculosis is contagious and being cared for at home, very young children (<4 years of age) 833 and immunocompromised persons who have not yet been infected should be removed or excluded from the household. During the SARS outbreak of 2003, segregation of infected persons during the communicable phase of the illness was beneficial in preventing household transmission 249,834 .
# II.H. Transport of patients
Several principles are used to guide transport of patients requiring Transmission-Based Precautions. In inpatient and residential settings, these include 1) limiting transport of such patients to essential purposes, such as diagnostic and therapeutic procedures that cannot be performed in the patient's room; 2) when transport is necessary, using appropriate barriers on the patient (e.g., mask, gown, wrapping in sheets, or use of impervious dressings to cover the affected area(s) when infectious skin lesions or drainage are present), consistent with the route and risk of transmission; 3) notifying healthcare personnel in the receiving area of the impending arrival of the patient and of the precautions necessary to prevent transmission; and 4) for patients being transported outside the facility, informing the receiving facility and the medi-van or emergency vehicle personnel in advance about the type of Transmission-Based Precautions being used. For tuberculosis, additional precautions may be needed in a small shared air space, such as in an ambulance 12 .
# II.I. Environmental measures
Cleaning and disinfecting non-critical surfaces in patient-care areas are part of Standard Precautions. In general, these procedures do not need to be changed for patients on Transmission-Based Precautions.
The cleaning and disinfection of all patient-care areas is important for frequently touched surfaces, especially those closest to the patient, which are most likely to be contaminated (e.g., bedrails, bedside tables, commodes, doorknobs, sinks, and surfaces and equipment in close proximity to the patient) 11,72,73,835 . The frequency or intensity of cleaning may need to change based on the patient's level of hygiene and the degree of environmental contamination, and for certain infectious agents whose reservoir is the intestinal tract 54 . This may be especially true in LTCFs and pediatric facilities, where patients with stool and urine incontinence are encountered more frequently. Also, increased frequency of cleaning may be needed in a Protective Environment to minimize dust accumulation 11 . Special recommendations for cleaning and disinfecting environmental surfaces in dialysis centers have been published 18 . In all healthcare settings, administrative, staffing, and scheduling activities should prioritize the proper cleaning and disinfection of surfaces that could be implicated in transmission. During a suspected or proven outbreak where an environmental reservoir is suspected, routine cleaning procedures should be reviewed, and the need for additional trained cleaning staff should be assessed. Adherence should be monitored and reinforced to ensure that consistent and correct cleaning is performed. EPA-registered disinfectants or detergents/disinfectants that best meet the overall needs of the healthcare facility for routine cleaning and disinfection should be selected 11,836 . In general, use of the existing facility detergent/disinfectant according to the manufacturer's recommendations for amount, dilution, and contact time is sufficient to remove pathogens from surfaces of rooms where colonized or infected individuals were housed. This includes those pathogens that are resistant to multiple classes of antimicrobial agents (e.g., C. difficile, VRE, MRSA, MDR-GNB 11,24,88,435,746,796,837 ). Most often, environmental reservoirs of pathogens during outbreaks are related to a failure to follow recommended procedures for cleaning and disinfection rather than to the specific cleaning and disinfectant agents used 838-841 . Certain pathogens (e.g., rotavirus, noroviruses, C. difficile) may be resistant to some routinely used hospital disinfectants 275,292,842-847 . The role of specific disinfectants in limiting transmission of rotavirus has been demonstrated experimentally 842 . Also, since C. difficile may display increased levels of spore production when exposed to non-chlorine-based cleaning agents, and the spores are more resistant than vegetative cells to commonly used surface disinfectants, some investigators have recommended the use of a 1:10 dilution of 5.25% sodium hypochlorite (household bleach) and water for routine environmental disinfection of rooms of patients with C. difficile when there is continued transmission 844,848 . In one study, the use of a hypochlorite solution was associated with a decrease in rates of C. difficile infections 847 . The need to change disinfectants based on the presence of these organisms can be determined in consultation with the infection control committee 11,847,848 .
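The arithmetic behind the 1:10 bleach dilution mentioned above is simple but easy to get wrong at the point of use. The sketch below is illustrative only and not part of the guideline; the function name is hypothetical, and it assumes "1:10" means one part bleach brought to ten parts total volume (one part bleach plus nine parts water).

```python
# Illustrative sketch only -- converts a stock sodium hypochlorite
# concentration to the available chlorine level (ppm) of a working
# dilution. Assumes "1:10" means 1 part bleach in 10 parts total
# volume (1 part bleach + 9 parts water).

def diluted_ppm(stock_percent: float, parts_bleach: int, parts_total: int) -> float:
    stock_ppm = stock_percent * 10_000  # 1% (w/v) corresponds to 10,000 ppm
    return stock_ppm * parts_bleach / parts_total

# 5.25% household bleach is 52,500 ppm available chlorine; a 1:10
# working solution therefore contains about 5,250 ppm.
ppm = diluted_ppm(stock_percent=5.25, parts_bleach=1, parts_total=10)
print(f"Available chlorine: {ppm:,.0f} ppm")  # -> 5,250 ppm
```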
Detailed recommendations for disinfection and sterilization of surfaces and medical equipment that have been in contact with prion-containing tissue or high-risk body fluids, and for cleaning of blood and body substance spills, are available in the Guidelines for Environmental Infection Control in Health-Care Facilities 11 and in the Guideline for Disinfection and Sterilization 848 .
# II.J. Patient care equipment and instruments/devices
Medical equipment and instruments/devices must be cleaned and maintained according to the manufacturers' instructions to prevent patient-to-patient transmission of infectious agents 86,87,325,849 . Cleaning to remove organic material must always precede high-level disinfection and sterilization of critical and semi-critical instruments and devices, because residual proteinaceous material reduces the effectiveness of the disinfection and sterilization processes 836,848 . Noncritical equipment, such as commodes, intravenous pumps, and ventilators, must be thoroughly cleaned and disinfected before use on another patient. All such equipment and devices should be handled in a manner that will prevent HCW and environmental contact with potentially infectious material. It is important to include computers and personal digital assistants (PDAs) used in patient care in policies for cleaning and disinfection of non-critical items. The literature on contamination of computers with pathogens has been summarized 850 , and two reports have linked computer contamination to colonization and infections in patients 851,852 . Although keyboard covers and washable keyboards that can be easily disinfected are in use, the infection control benefit of those items and their optimal management have not been determined. In all healthcare settings, providing patients who are on Transmission-Based Precautions with dedicated noncritical medical equipment (e.g., stethoscope, blood pressure cuff, electronic thermometer) has been beneficial for preventing transmission 74,89,740,853,854 . When this is not possible, disinfection after use is recommended. Consult other guidelines for detailed guidance in developing specific protocols for cleaning and reprocessing medical equipment and patient care items in both routine and special circumstances 11,14,18,20,740,836,848 . In home care, it is preferable to remove visible blood or body fluids from durable medical equipment before it leaves the home. Equipment can be cleaned on-site using a detergent/disinfectant and, when possible, should be placed in a single plastic bag for transport to the reprocessing location 20,739 .
# II.K. Textiles and laundry
Soiled textiles, including bedding, towels, and patient or resident clothing, may be contaminated with pathogenic microorganisms. However, the risk of disease transmission is negligible if they are handled, transported, and laundered in a safe manner 11,855,856 . Key principles for handling soiled laundry are 1) not shaking the items or handling them in any way that may aerosolize infectious agents; 2) avoiding contact of one's body and personal clothing with the soiled items being handled; and 3) containing soiled items in a laundry bag or designated bin. When laundry chutes are used, they must be maintained to minimize dispersion of aerosols from contaminated items 11 . The methods for handling, transporting, and laundering soiled textiles are determined by organizational policy and any applicable regulations 739 ; guidance is provided in the Guidelines for Environmental Infection Control 11 .
Rather than rigid rules and regulations, hygienic and common-sense storage and processing of clean textiles is recommended 11,857 . When laundering occurs outside of a healthcare facility, the clean items must be packaged or completely covered and placed in an enclosed space during transport to prevent contamination with outside air or construction dust that could contain infectious fungal spores that are a risk for immunocompromised patients 11 . Institutions are required to launder garments used as personal protective equipment and uniforms that are visibly soiled with blood or infective material 739 . There are few data to determine the safety of home laundering of HCW uniforms, but no increase in infection rates was observed in the one published study 858 , and no pathogens were recovered from home- or hospital-laundered scrubs in another study 859 . In the home, textiles and laundry from patients with potentially transmissible infectious pathogens do not require special handling or separate laundering, and may be washed with warm water and detergent 11,858,859 .
# II.L. Solid waste
The management of solid waste emanating from the healthcare environment is subject to federal and state regulations for medical and non-medical waste 860,861 . No additional precautions are needed for non-medical solid waste that is being removed from rooms of patients on Transmission-Based Precautions. Solid waste may be contained in a single bag (as compared to using two bags) of sufficient strength 862 .
# II.M. Dishware and eating utensils
The combination of hot water and detergents used in dishwashers is sufficient to decontaminate dishware and eating utensils. Therefore, no special precautions are needed for dishware (e.g., dishes, glasses, cups) or eating utensils; reusable dishware and utensils may be used for patients requiring Transmission-Based Precautions. In the home and other communal settings, eating utensils and drinking vessels that are being used should not be shared, consistent with principles of good personal hygiene and for the purpose of preventing transmission of respiratory viruses, Herpes simplex virus, and infectious agents that infect the gastrointestinal tract and are transmitted by the fecal/oral route (e.g., hepatitis A virus, noroviruses). If adequate resources for cleaning utensils and dishes are not available, disposable products may be used.
# II.N. Adjunctive measures
Important adjunctive measures that are not considered primary components of programs to prevent transmission of infectious agents, but that improve the effectiveness of such programs, include 1) antimicrobial management programs; 2) postexposure chemoprophylaxis with antiviral or antibacterial agents; 3) vaccines used both for pre- and postexposure prevention; and 4) screening and restricting visitors with signs of transmissible infections. Detailed discussion of judicious use of antimicrobial agents is beyond the scope of this document; however, the topic is addressed in the MDRO section (Management of Multidrug-Resistant Organisms in Healthcare Settings 2006. www.cdc.gov/ncidod/dhqp/pdf/ar/mdroGuideline2006.pdf).
# II.N.1. Chemoprophylaxis
Antimicrobial agents and topical antiseptics may be used to prevent infection and potential outbreaks of selected agents. Infections for which postexposure chemoprophylaxis is recommended under defined conditions include B. pertussis 17,863 , N. meningitidis 864 ,
B. anthracis after environmental exposure to aerosolizable material 865 , influenza virus 611 , HIV 866 , and group A streptococcus 160 . Orally administered antimicrobials may also be used under defined circumstances for MRSA decolonization of patients or healthcare personnel 867 . Another form of chemoprophylaxis is the use of topical antiseptic agents. For example, triple dye is used routinely on the umbilical cords of term newborns to reduce the risk of colonization, skin infections, and omphalitis caused by S. aureus, including MRSA, and group A streptococcus 868,869 . Extension of the use of triple dye to low birth weight infants in the NICU was one component of a program that controlled one longstanding MRSA outbreak 22 . Topical antiseptics, such as mupirocin, are also used for decolonization of healthcare personnel or selected patients colonized with MRSA, as discussed in the MDRO guideline 867,870-873 .
# II.N.2. Immunoprophylaxis
Certain immunizations recommended for susceptible healthcare personnel have decreased the risk of infection and the potential for transmission in healthcare facilities 17,874 . The OSHA mandate that requires employers to offer hepatitis B vaccination to HCWs played a substantial role in the sharp decline in incidence of occupational HBV infection 778,875 . The use of varicella vaccine in healthcare personnel has decreased the need to place susceptible HCWs on administrative leave following exposure to patients with varicella 775 . Also, reports of healthcare-associated transmission of rubella in obstetrical clinics 33,876 and measles in acute care settings 34 demonstrate the importance of immunizing susceptible healthcare personnel against childhood diseases. Many states require HCW vaccination for measles and rubella in the absence of evidence of immunity. Annual influenza vaccine campaigns targeted to patients and healthcare personnel in LTCFs and acute-care settings have been instrumental in preventing or limiting institutional outbreaks, and increasing attention is being directed toward improving influenza vaccination rates in healthcare personnel 35,611,690,877-879 . Transmission of B. pertussis in healthcare facilities has been associated with large and costly outbreaks that include both healthcare personnel and patients 17,36,41,100,683,827,880,881 . HCWs who have close contact with infants with pertussis are at particularly high risk because of waning immunity and, until 2005, the absence of a vaccine that could be used in adults. However, two acellular pertussis vaccines were licensed in the United States in 2005, one for use in individuals aged 11-18 years and one for use in individuals aged 10-64 years 882 . Provisional ACIP recommendations at the time of publication of this document include vaccination of adolescents and adults, especially those with contact with infants <12 months of age, and of healthcare personnel with direct patient contact 883,884 . Immunization of children and adults will help prevent the introduction of vaccine-preventable diseases into healthcare settings. The recommended immunization schedule for children is published annually in the January issues of the Morbidity and Mortality Weekly Report, with interim updates as needed 885,886 . An adult immunization schedule also is available for healthy adults and those with special immunization needs due to high-risk medical conditions 887 .
Some vaccines are also used for postexposure prophylaxis of susceptible individuals, including varicella 888 , influenza 611 , hepatitis B 778 , and smallpox 225 vaccines 17,874 . In the future, administration of a newly developed S. aureus conjugate vaccine (still under investigation) to selected patients may provide a novel method of preventing healthcare-associated S. aureus, including MRSA, infections in high-risk groups (e.g., hemodialysis patients and candidates for selected surgical procedures) 889,890 . Immune globulin preparations also are used for postexposure prophylaxis of certain infectious agents under specified circumstances (e.g., varicella-zoster virus [VZIG], hepatitis B virus [HBIG], rabies [RIG], measles and hepatitis A virus [IG]) 17,833,874 . The RSV monoclonal antibody preparation, palivizumab, may have contributed to controlling a nosocomial outbreak of RSV in one NICU, but there is insufficient evidence to support a routine recommendation for its use in this setting 891 .
# II.N.3. Management of visitors
# II.N.3.a. Visitors as sources of infection
Visitors have been identified as the source of several types of HAIs (e.g., pertussis 40,41 , M. tuberculosis 42,892 , influenza and other respiratory viruses 24,43,44,373 , and SARS 21,252-254 ). However, effective methods for visitor screening in healthcare settings have not been studied. Visitor screening is especially important during community outbreaks of infectious diseases and for high-risk patient units. Sibling visits are often encouraged in birthing centers, postpartum rooms, pediatric inpatient units, ICUs, and residential settings for children; in hospital settings, a child visitor should visit only his or her own sibling. Screening of visiting siblings and other children before they are allowed into clinical areas is necessary to prevent the introduction of childhood illnesses and common respiratory infections. Screening may be passive, through the use of signs alerting family members and visitors with signs and symptoms of communicable diseases not to enter clinical areas. More active screening may include the completion of a screening tool or questionnaire that elicits information related to recent exposures or current symptoms. That information is reviewed by the facility staff, and the visitor is either permitted to visit or is excluded 833 . Family and household members visiting pediatric patients with pertussis and tuberculosis may need to be screened for a history of exposure as well as signs and symptoms of current infection. Potentially infectious visitors are excluded until they receive appropriate medical screening, diagnosis, or treatment. If exclusion is not considered to be in the best interest of the patient or family (i.e., primary family members of critically or terminally ill patients), then the symptomatic visitor must wear a mask while in the healthcare facility and remain in the patient's room, avoiding exposure to others, especially in public waiting areas and the cafeteria. Visitor screening is used consistently on HSCT units 15,43 . However, considering the experience during the 2003 SARS outbreaks and the potential for pandemic influenza, developing effective visitor screening systems will be beneficial 9 . Education concerning Respiratory Hygiene/Cough Etiquette is a useful adjunct to visitor screening.
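The active screening described above, a questionnaire reviewed by staff after which the visitor is either permitted to visit or excluded, reduces to a simple decision rule. The sketch below is a hypothetical illustration only; the screening questions and the any-positive exclusion rule are examples, not guideline criteria.

```python
# Illustrative sketch only; the screening questions and the decision
# rule below are hypothetical examples, not guideline criteria.
from dataclasses import dataclass, astuple

@dataclass
class VisitorScreen:
    fever: bool
    cough_or_flu_like_illness: bool
    rash: bool
    recent_exposure_to_communicable_disease: bool  # e.g., measles, pertussis

def may_visit(screen: VisitorScreen) -> bool:
    """Exclude the visitor if any screening question is answered yes."""
    return not any(astuple(screen))

# A visitor reporting a cough is excluded pending medical evaluation.
print(may_visit(VisitorScreen(False, True, False, False)))  # -> False
```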
# II.N.3.b. Use of barrier precautions by visitors
The use of gowns, gloves, or masks by visitors in healthcare settings has not been addressed specifically in the scientific literature. Some studies included the use of gowns and gloves by visitors in the control of MDROs but did not perform a separate analysis to determine whether their use by visitors had a measurable impact 893-895 . Family members or visitors who are providing care or having very close patient contact (e.g., feeding, holding) may have contact with other patients and could contribute to transmission if barrier precautions are not used correctly. Specific recommendations may vary by facility or by unit and should be determined by the level of interaction.
# Part III: Precautions to Prevent Transmission of Infectious Agents
There are two tiers of HICPAC/CDC precautions to prevent transmission of infectious agents: Standard Precautions and Transmission-Based Precautions. Standard Precautions are intended to be applied to the care of all patients in all healthcare settings, regardless of the suspected or confirmed presence of an infectious agent. Implementation of Standard Precautions constitutes the primary strategy for the prevention of healthcare-associated transmission of infectious agents among patients and healthcare personnel. Transmission-Based Precautions are for patients who are known or suspected to be infected or colonized with infectious agents, including certain epidemiologically important pathogens, that require additional control measures to effectively prevent transmission. Since the infecting agent often is not known at the time of admission to a healthcare facility, Transmission-Based Precautions are used empirically, according to the clinical syndrome and the likely etiologic agents at the time, and then modified when the pathogen is identified or a transmissible infectious etiology is ruled out. Examples of this syndromic approach are presented in Table 2. The HICPAC/CDC Guidelines also include recommendations for creating a Protective Environment for allogeneic HSCT patients. The specific elements of Standard and Transmission-Based Precautions are discussed in Part II of this guideline. In Part III, the circumstances in which Standard Precautions, Transmission-Based Precautions, and a Protective Environment are applied are discussed. See Tables 4 and 5 for summaries of the key elements of these sets of precautions.
# III.A. Standard Precautions
Standard Precautions combine the major features of Universal Precautions (UP) 780,896 and Body Substance Isolation (BSI) 640 and are based on the principle that all blood, body fluids, secretions, excretions except sweat, nonintact skin, and mucous membranes may contain transmissible infectious agents. Standard Precautions include a group of infection prevention practices that apply to all patients, regardless of suspected or confirmed infection status, in any setting in which healthcare is delivered (Table 4). These include: hand hygiene; use of gloves, gown, mask, eye protection, or face shield, depending on the anticipated exposure; and safe injection practices. Also, equipment or items in the patient environment likely to have been contaminated with infectious body fluids must be handled in a manner to prevent transmission of infectious agents (e.g., wear gloves for direct contact, contain heavily soiled equipment, properly clean and disinfect or sterilize reusable equipment before use on another patient).
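Because Standard Precautions key PPE selection to the anticipated exposure, the selection logic can be pictured as a lookup from task to barrier set. The sketch below is a hypothetical illustration, not a policy; the task-to-PPE pairs abbreviate the examples elaborated in the paragraph that follows.

```python
# Illustrative sketch only; the task list and PPE sets are abbreviated
# examples, not a complete institutional policy.
ANTICIPATED_EXPOSURE_PPE: dict[str, set[str]] = {
    "venipuncture": {"gloves"},
    "intubation": {"gloves", "gown", "face shield"},  # or mask plus goggles
    "endotracheal suctioning": {"gloves", "gown", "mask", "goggles"},
}

def ppe_for(task: str) -> set[str]:
    """Return the barrier set for a task; unknown tasks default to gloves
    plus clinical judgment about the anticipated exposure."""
    return ANTICIPATED_EXPOSURE_PPE.get(task, {"gloves"})

print(sorted(ppe_for("intubation")))  # -> ['face shield', 'gloves', 'gown']
```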
The application of Standard Precautions during patient care is determined by the nature of the HCW-patient interaction and the extent of anticipated blood, body fluid, or pathogen exposure. For some interactions (e.g., performing venipuncture), only gloves may be needed; during other interactions (e.g., intubation), use of gloves, gown, and face shield or mask and goggles is necessary. Education and training on the principles and rationale for recommended practices are critical elements of Standard Precautions because they facilitate appropriate decision-making and promote adherence when HCWs are faced with new circumstances 655,681-686 . An example of the importance of the use of Standard Precautions is intubation, especially under emergency circumstances when infectious agents may not be suspected but later are identified (e.g., SARS-CoV, N. meningitidis). The application of Standard Precautions is described below and summarized in Table 4. Guidance on donning and removing gloves, gowns, and other PPE is presented in the Figure. Standard Precautions are also intended to protect patients by ensuring that healthcare personnel do not carry infectious agents to patients on their hands or via equipment used during patient care.
# III.A.1. New Elements of Standard Precautions
Infection control problems that are identified in the course of outbreak investigations often indicate the need for new recommendations or reinforcement of existing infection control recommendations to protect patients. Because such recommendations are considered a standard of care and may not be included in other guidelines, they are added here to Standard Precautions. Three such areas of practice have been added: Respiratory Hygiene/Cough Etiquette, safe injection practices, and use of masks for insertion of catheters or injection of material into spinal or epidural spaces via lumbar puncture procedures (e.g., myelogram, spinal or epidural anesthesia). While most elements of Standard Precautions evolved from Universal Precautions, which were developed for protection of healthcare personnel, these new elements of Standard Precautions focus on protection of patients.
# III.A.1.a. Respiratory Hygiene/Cough Etiquette
The transmission of SARS-CoV in emergency departments by patients and their family members during the widespread SARS outbreaks in 2003 highlighted the need for vigilance and prompt implementation of infection control measures at the first point of encounter within a healthcare setting (e.g., reception and triage areas in emergency departments, outpatient clinics, and physician offices) 21,254,897 . The strategy proposed has been termed Respiratory Hygiene/Cough Etiquette 9,828 and is intended to be incorporated into infection control practices as a new component of Standard Precautions. The strategy is targeted at patients and accompanying family members and friends with undiagnosed transmissible respiratory infections, and applies to any person with signs of illness, including cough, congestion, rhinorrhea, or increased production of respiratory secretions, when entering a healthcare facility 40,41,43 . The term cough etiquette is derived from recommended source control measures for M. tuberculosis 12,126 .
The elements of Respiratory Hygiene/Cough Etiquette include 1) education of healthcare facility staff, patients, and visitors; 2) posted signs, in language(s) appropriate to the population served, with instructions to patients and accompanying family members or friends; 3) source control measures (e.g., covering the mouth/nose with a tissue when coughing and prompt disposal of used tissues, and using surgical masks on the coughing person when tolerated and appropriate); 4) hand hygiene after contact with respiratory secretions; and 5) spatial separation, ideally >3 feet, of persons with respiratory infections in common waiting areas when possible. Covering sneezes and coughs and placing masks on coughing patients are proven means of source containment that prevent infected persons from dispersing respiratory secretions into the air 107,145,898,899 . Masking may be difficult in some settings (e.g., pediatrics), in which case the emphasis by necessity may be on cough etiquette 900 . Physical proximity of <3 feet has been associated with an increased risk for transmission of infections via the droplet route (e.g., N. meningitidis 103 and group A streptococcus 114 ) and therefore supports the practice of distancing infected persons from others who are not infected. The effectiveness of good hygiene practices, especially hand hygiene, in preventing transmission of viruses and reducing the incidence of respiratory infections both within and outside healthcare settings 901-903 is summarized in several reviews 559,717,904 . These measures should be effective in decreasing the risk of transmission of pathogens contained in large respiratory droplets (e.g., influenza virus 23 , adenovirus 111 , B. pertussis 827 , and Mycoplasma pneumoniae 112 ). Although fever will be present in many respiratory infections, patients with pertussis and mild upper respiratory tract infections are often afebrile. Therefore, the absence of fever does not always exclude a respiratory infection. Patients who have asthma, allergic rhinitis, or chronic obstructive lung disease also may be coughing and sneezing. While these patients often are not infectious, cough etiquette measures are prudent. Healthcare personnel are advised to observe Droplet Precautions (i.e., wear a mask) and hand hygiene when examining and caring for patients with signs and symptoms of a respiratory infection. Healthcare personnel who have a respiratory infection are advised to avoid direct patient contact, especially with high-risk patients. If this is not possible, then a mask should be worn while providing patient care.
# III.A.1.b. Safe Injection Practices
The investigation of four large outbreaks of HBV and HCV among patients in ambulatory care facilities in the United States identified a need to define and reinforce safe injection practices 453 . The four outbreaks occurred in a private medical practice, a pain clinic, an endoscopy clinic, and a hematology/oncology clinic. The primary breaches in infection control practice that contributed to these outbreaks were 1) reinsertion of used needles into a multiple-dose vial or solution container (e.g., saline bag) and 2) use of a single needle/syringe to administer intravenous medication to multiple patients. In one of these outbreaks, preparation of medications in the same workspace where used needles/syringes were dismantled also may have been a contributing factor.
These and other outbreaks of viral hepatitis could have been prevented by adherence to basic principles of aseptic technique for the preparation and administration of parenteral medications 453,454 . These include the use of a sterile, single-use, disposable needle and syringe for each injection given and prevention of contamination of injection equipment and medication. Whenever possible, use of single-dose vials is preferred over multiple-dose vials, especially when medications will be administered to multiple patients. Outbreaks related to unsafe injection practices indicate that some healthcare personnel are unaware of, do not understand, or do not adhere to basic principles of infection control and aseptic technique. A survey of US healthcare workers who provide medication through injection found that 1% to 3% reused the same needle and/or syringe on multiple patients 905 . Among the deficiencies identified in recent outbreaks were a lack of oversight of personnel and failure to follow up on reported breaches in infection control practices in ambulatory settings. Therefore, to ensure that all healthcare workers understand and adhere to recommended practices, principles of infection control and aseptic technique need to be reinforced in training programs and incorporated into institutional policies that are monitored for adherence 454 .
# III.A.1.c. Infection Control Practices for Special Lumbar Puncture Procedures
In 2004, CDC investigated eight cases of post-myelography meningitis that were either reported to CDC or identified through a survey of the Emerging Infections Network of the Infectious Disease Society of America. Blood and/or cerebrospinal fluid of all eight cases yielded streptococcal species consistent with oropharyngeal flora, and there were changes in the CSF indices and clinical status indicative of bacterial meningitis. Equipment and products used during these procedures (e.g., contrast media) were excluded as probable sources of contamination. Procedural details available for seven cases indicated that antiseptic skin preparations and sterile gloves had been used. However, none of the clinicians wore a face mask, giving rise to the speculation that droplet transmission of oropharyngeal flora was the most likely explanation for these infections. Bacterial meningitis following myelogram and other spinal procedures (e.g., lumbar puncture, spinal and epidural anesthesia, intrathecal chemotherapy) has been reported previously 906-915 . As a result, the question of whether face masks should be worn to prevent droplet spread of oral flora during spinal procedures (e.g., myelogram, lumbar puncture, spinal anesthesia) has been debated 916,917 . Face masks are effective in limiting the dispersal of oropharyngeal droplets 918 and are recommended for the placement of central venous catheters 919 . In October 2005, the Healthcare Infection Control Practices Advisory Committee (HICPAC) reviewed the evidence and concluded that there is sufficient experience to warrant the additional protection of a face mask for the individual placing a catheter or injecting material into the spinal or epidural space.
# III.B. Transmission-Based Precautions
There are three categories of Transmission-Based Precautions: Contact Precautions, Droplet Precautions, and Airborne Precautions. Transmission-Based Precautions are used when the route(s) of transmission is (are) not completely interrupted using Standard Precautions alone.
For some diseases that have multiple routes of transmission (e.g., SARS), more than one Transmission-Based Precautions category may be used. Whether used singly or in combination, they are always applied in addition to Standard Precautions. See Appendix A for recommended precautions for specific infections. When Transmission-Based Precautions are indicated, efforts must be made to counteract possible adverse effects on patients (i.e., anxiety, depression, and other mood disturbances 920-922 ; perceptions of stigma 923 ; reduced contact with clinical staff 924-926 ; and increases in preventable adverse events 565 ) in order to improve acceptance by patients and adherence by HCWs.
# III.B.1. Contact Precautions
Contact Precautions are intended to prevent transmission of infectious agents, including epidemiologically important microorganisms, that are spread by direct or indirect contact with the patient or the patient's environment, as described in I.B.3.a. The specific agents and circumstances for which Contact Precautions are indicated are found in Appendix A. The application of Contact Precautions for patients infected or colonized with MDROs is described in the 2006 HICPAC/CDC MDRO guideline 927 . Contact Precautions also apply where the presence of excessive wound drainage, fecal incontinence, or other discharges from the body suggests an increased potential for extensive environmental contamination and risk of transmission. A single-patient room is preferred for patients who require Contact Precautions. When a single-patient room is not available, consultation with infection control personnel is recommended to assess the various risks associated with other patient placement options (e.g., cohorting, keeping the patient with an existing roommate). In multi-patient rooms, >3 feet of spatial separation between beds is advised to reduce the opportunities for inadvertent sharing of items between the infected/colonized patient and other patients. Healthcare personnel caring for patients on Contact Precautions wear a gown and gloves for all interactions that may involve contact with the patient or potentially contaminated areas in the patient's environment. Donning PPE upon room entry and discarding it before exiting the patient room is done to contain pathogens, especially those that have been implicated in transmission through environmental contamination (e.g., VRE, C. difficile, noroviruses and other intestinal tract pathogens; RSV) 54,72,73,78,274,275,740 .
# III.B.2. Droplet Precautions
Droplet Precautions are intended to prevent transmission of pathogens spread through close respiratory or mucous membrane contact with respiratory secretions, as described in I.B.3.b. Because these pathogens do not remain infectious over long distances in a healthcare facility, special air handling and ventilation are not required to prevent droplet transmission. Infectious agents for which Droplet Precautions are indicated are found in Appendix A and include B. pertussis, influenza virus, adenovirus, rhinovirus, N. meningitidis, and group A streptococcus (for the first 24 hours of antimicrobial therapy). A single-patient room is preferred for patients who require Droplet Precautions. When a single-patient room is not available, consultation with infection control personnel is recommended to assess the various risks associated with other patient placement options (e.g., cohorting, keeping the patient with an existing roommate).
Spatial separation of >3 feet and drawing the curtain between patient beds are especially important for patients in multi-bed rooms with infections transmitted by the droplet route. Healthcare personnel wear a mask (a respirator is not necessary) for close contact with an infectious patient; the mask is generally donned upon room entry. Patients on Droplet Precautions who must be transported outside of the room should wear a mask if tolerated and follow Respiratory Hygiene/Cough Etiquette.
# III.B.3. Airborne Precautions
Airborne Precautions prevent transmission of infectious agents that remain infectious over long distances when suspended in the air (e.g., rubeola virus [measles], varicella virus [chickenpox], M. tuberculosis, and possibly SARS-CoV), as described in I.B.3.c and Appendix A. The preferred placement for patients who require Airborne Precautions is in an airborne infection isolation room (AIIR). An AIIR is a single-patient room equipped with special air handling and ventilation capacity that meets the American Institute of Architects/Facility Guidelines Institute (AIA/FGI) standards for AIIRs (i.e., monitored negative pressure relative to the surrounding area; 12 air exchanges per hour for new construction and renovation and 6 air exchanges per hour for existing facilities; air exhausted directly to the outside or recirculated through HEPA filtration before return) 12,13 . Some states require the availability of such rooms in hospitals, emergency departments, and nursing homes that care for patients with M. tuberculosis. A respiratory protection program that includes education about use of respirators, fit-testing, and user seal checks is required in any facility with AIIRs. In settings where Airborne Precautions cannot be implemented due to limited engineering resources (e.g., physician offices), masking the patient, placing the patient in a private room (e.g., office examination room) with the door closed, and providing healthcare personnel with N95 or higher-level respirators (or masks if respirators are not available) will reduce the likelihood of airborne transmission until the patient is either transferred to a facility with an AIIR or returned to the home environment, as deemed medically appropriate. Healthcare personnel caring for patients on Airborne Precautions wear a mask or respirator, depending on the disease-specific recommendations (Respiratory Protection II.E.4, Table 2, and Appendix A), that is donned prior to room entry. Whenever possible, non-immune HCWs should not care for patients with vaccine-preventable airborne diseases (e.g., measles, chickenpox, and smallpox).
# III.C. Syndromic and empiric applications of Transmission-Based Precautions
Diagnosis of many infections requires laboratory confirmation. Since laboratory tests, especially those that depend on culture techniques, often require two or more days for completion, Transmission-Based Precautions must be implemented while test results are pending, based on the clinical presentation and likely pathogens. Use of appropriate Transmission-Based Precautions at the time a patient develops symptoms or signs of transmissible infection, or arrives at a healthcare facility for care, reduces transmission opportunities. While it is not possible to identify prospectively all patients needing Transmission-Based Precautions, certain clinical syndromes and conditions carry a sufficiently high risk to warrant their use empirically while confirmatory tests are pending (Table 2).
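As a minimal illustration of this syndromic approach, the sketch below encodes a few example rows of such a mapping; the entries are abbreviated paraphrases, and Table 2 of the guideline remains the authoritative source.

```python
# Illustrative sketch only; entries are abbreviated paraphrases of
# example syndromes, and Table 2 remains the authoritative source.
EMPIRIC_PRECAUTIONS: dict[str, list[str]] = {
    # clinical syndrome: precautions used in addition to Standard Precautions
    "acute infectious diarrhea in an incontinent patient": ["Contact"],
    "vesicular rash": ["Airborne", "Contact"],
    "cough, fever, upper-lobe infiltrate (possible tuberculosis)": ["Airborne"],
    "paroxysmal or severe persistent cough (possible pertussis)": ["Droplet"],
}

def empiric_precautions(syndrome: str) -> list[str]:
    """Empiric Transmission-Based Precautions while test results are pending."""
    return EMPIRIC_PRECAUTIONS.get(syndrome, [])

print(empiric_precautions("vesicular rash"))  # -> ['Airborne', 'Contact']
```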
Infection control professionals are encouraged to modify or adapt this table according to local conditions.

# III.D. Discontinuation of Transmission-Based Precautions

Transmission-Based Precautions remain in effect for limited periods of time (i.e., while the risk for transmission of the infectious agent persists or for the duration of the illness) (Appendix A). For most infectious diseases, this duration reflects known patterns of persistence and shedding of infectious agents associated with the natural history of the infectious process and its treatment. For some diseases (e.g., pharyngeal or cutaneous diphtheria, RSV), Transmission-Based Precautions remain in effect until culture or antigen-detection test results document eradication of the pathogen and, for RSV, symptomatic disease is resolved. For other diseases (e.g., M. tuberculosis), state laws and regulations, and healthcare facility policies, may dictate the duration of precautions 12. In immunocompromised patients, viral shedding can persist for prolonged periods of time (many weeks to months) and transmission to others may occur during that time; therefore, the duration of contact and/or droplet precautions may be prolonged for many weeks 500,928-933.

The duration of Contact Precautions for patients who are colonized or infected with MDROs remains undefined. MRSA is the only MDRO for which effective decolonization regimens are available 867. However, carriers of MRSA who have negative nasal cultures after a course of systemic or topical therapy may resume shedding MRSA in the weeks that follow therapy 934,935. Although early guidelines for VRE suggested discontinuation of Contact Precautions after three stool cultures obtained at weekly intervals proved negative 740, subsequent experience has indicated that such screening may fail to detect colonization that can persist for >1 year 27,936-938. Likewise, available data indicate that colonization with VRE, MRSA 939, and possibly MDR-GNB can persist for many months, especially in the presence of severe underlying disease, invasive devices, and recurrent courses of antimicrobial agents. It may be prudent to assume that MDRO carriers are colonized permanently and manage them accordingly. Alternatively, an interval free of hospitalizations, antimicrobial therapy, and invasive devices (e.g., 6 or 12 months) before reculturing patients to document clearance of carriage may be used. Determination of the best strategy awaits the results of additional studies. See the 2006 HICPAC/CDC MDRO guideline 927 for discussion of possible criteria to discontinue Contact Precautions for patients colonized or infected with MDROs.

# III.E. Application of Transmission-Based Precautions in ambulatory and home care settings

Although Transmission-Based Precautions generally apply in all healthcare settings, exceptions exist. For example, in home care, AIIRs are not available. Furthermore, family members already exposed to diseases such as varicella and tuberculosis would not use masks or respiratory protection, but visiting HCWs would need to use such protection. Similarly, management of patients colonized or infected with MDROs may necessitate Contact Precautions in acute care hospitals and in some LTCFs when there is continued transmission, but the risk of transmission in ambulatory care and home care has not been defined. Consistent use of Standard Precautions may suffice in these settings, but more information is needed.
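The AIIR ventilation parameters cited in III.B.3 lend themselves to a simple calculation. Under the standard well-mixed-room assumption, the concentration of an airborne contaminant decays exponentially with the air change rate, so the time to remove a given fraction of airborne particles is t = (60 / ACH) x ln(C0 / C) minutes. The sketch below is an illustrative aside, not part of the guideline's recommendations; the function name is invented, and the model assumes perfect mixing and no ongoing generation of contaminant.

```python
import math

def minutes_to_clear(ach: float, removal_fraction: float = 0.99) -> float:
    """Minutes for a well-mixed room to reach the target removal fraction of
    an airborne contaminant, using first-order decay: C(t) = C0 * exp(-ACH * t / 60)."""
    if not 0.0 < removal_fraction < 1.0:
        raise ValueError("removal_fraction must be strictly between 0 and 1")
    return 60.0 * math.log(1.0 / (1.0 - removal_fraction)) / ach

# AIIR minimums cited in III.B.3: 6 ACH (existing facilities),
# 12 ACH (new construction/renovation).
for ach in (6, 12):
    print(f"{ach} ACH: ~{minutes_to_clear(ach):.0f} min for 99% removal, "
          f"~{minutes_to_clear(ach, 0.999):.0f} min for 99.9% removal")
```

At 6 ACH this yields roughly 46 minutes for 99% removal, which is consistent with the later guidance that an examination room used by a patient with a suspected airborne infection remain vacant for about one hour before reuse.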
# III.F. Protective Environment

A Protective Environment is designed for allogeneic HSCT patients to minimize fungal spore counts in the air and reduce the risk of invasive environmental fungal infections (see Table 5 for specifications) 11,13-15. The need for such controls has been demonstrated in studies of Aspergillus outbreaks associated with construction 11,14,15,157,158. As defined by the American Institute of Architects 13 and presented in detail in the Guideline for Environmental Infection Control 2003 11,861, air quality for HSCT patients is improved through a combination of environmental controls that include 1) HEPA filtration of incoming air; 2) directed room air flow; 3) positive room air pressure relative to the corridor; 4) well-sealed rooms (including sealed walls, floors, ceilings, windows, and electrical outlets) to prevent flow of air from the outside; 5) ventilation to provide >12 air changes per hour; 6) strategies to minimize dust (e.g., scrubbable surfaces rather than upholstery 940 and carpet 941, and routine cleaning of crevices and sprinkler heads); and 7) prohibiting dried and fresh flowers and potted plants in the rooms of HSCT patients. The latter is based on molecular typing studies that have found indistinguishable strains of Aspergillus terreus in patients with hematologic malignancies and in potted plants in the vicinity of the patients 942-944. The desired quality of air may be achieved without incurring the inconvenience or expense of laminar airflow 15,157. To prevent inhalation of fungal spores during periods when construction, renovation, or other dust-generating activities may be ongoing in and around the health-care facility, it has been advised that severely immunocompromised patients wear a high-efficiency respiratory-protection device (e.g., an N95 respirator) when they leave the Protective Environment 11,14,945. The use of masks or respirators by HSCT patients when they are outside of the Protective Environment for prevention of environmental fungal infections in the absence of construction has not been evaluated. A Protective Environment does not include the use of barrier precautions beyond those indicated for Standard and Transmission-Based Precautions. No published reports support the benefit of placing solid organ transplant recipients or other immunocompromised patients in a Protective Environment.

# Part IV: Recommendations

These recommendations are designed to prevent transmission of infectious agents among patients and healthcare personnel in all settings where healthcare is delivered. As in other CDC/HICPAC guidelines, each recommendation is categorized on the basis of existing scientific data, theoretical rationale, applicability, and, when possible, economic impact. The CDC/HICPAC system for categorizing recommendations is as follows:

Category IA. Strongly recommended for implementation and strongly supported by well-designed experimental, clinical, or epidemiologic studies.
Category IB. Strongly recommended for implementation and supported by some experimental, clinical, or epidemiologic studies and a strong theoretical rationale.
Category IC. Required for implementation, as mandated by federal and/or state regulation or standard.
Category II. Suggested for implementation and supported by suggestive clinical or epidemiologic studies or a theoretical rationale.
No recommendation; unresolved issue. Practices for which insufficient evidence or no consensus regarding efficacy exists.
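Readers who track recommendations in an electronic checklist may find it convenient to encode the evidence scheme above as a small data structure. The sketch below is a hypothetical convenience, not something the guideline specifies; the class and field names are invented for illustration.

```python
from enum import Enum

class HICPACCategory(Enum):
    """CDC/HICPAC evidence categories, restated from Part IV."""
    IA = "Strongly recommended; strongly supported by well-designed studies"
    IB = "Strongly recommended; supported by some studies and a strong theoretical rationale"
    IC = "Required; mandated by federal and/or state regulation or standard"
    II = "Suggested; supported by suggestive studies or a theoretical rationale"
    UNRESOLVED = "No recommendation; insufficient evidence or no consensus"

# Example: tagging one recommendation record with its category.
recommendation = {
    "id": "IV.A.2",
    "text": "Wash visibly soiled hands with soap and water.",
    "category": HICPACCategory.IA,
}
```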
# I. Administrative Responsibilities

Healthcare organization administrators should ensure the implementation of recommendations in this section.

underlying conditions that predispose to serious adverse outcomes)
• Analyze data to identify trends that may indicate increased rates of transmission
• Provide feedback on trends in the incidence and prevalence of HAIs, probable risk factors, and prevention strategies and their impact to the appropriate healthcare providers and organization administrators, and, as required, to local and state health authorities

III.C. Develop and implement strategies to reduce risks for transmission and evaluate their effectiveness 566,673,684,970,963,971. Category IB

III.D. When transmission of epidemiologically-important organisms continues despite implementation of and documented adherence to infection prevention and control strategies, obtain consultation from persons knowledgeable in infection control and healthcare epidemiology to review the situation and recommend additional measures for control 566,247,687. Category IB

III.E. Review periodically information on community or regional trends in the incidence and prevalence of epidemiologically-important organisms (e.g., influenza, RSV, pertussis, invasive group A streptococcal disease, MRSA, VRE), including in other healthcare facilities, that may impact transmission of organisms within the facility 398,687,972-974. Category II

# IV. Standard Precautions

Assume that every person is potentially infected or colonized with an organism that could be transmitted in the healthcare setting, and apply the following infection control practices during the delivery of health care.

IV.A. Hand Hygiene

IV.A.1. During the delivery of healthcare, avoid unnecessary touching of surfaces in close proximity to the patient to prevent both contamination of clean hands from environmental surfaces and transmission of pathogens from contaminated hands to surfaces 72,73,739,800,975. Category IB/IC

IV.A.2. When hands are visibly dirty, contaminated with proteinaceous material, or visibly soiled with blood or body fluids, wash hands with either a non-antimicrobial soap and water or an antimicrobial soap and water 559. Category IA

IV.A.3. If hands are not visibly soiled, or after removing visible material with non-antimicrobial soap and water, decontaminate hands in the clinical situations described in IV.A.3.a-f. The preferred method of hand decontamination is with an alcohol-based hand rub 562,978. Alternatively, hands may be washed with an antimicrobial soap and water. Frequent use of alcohol-based hand rub immediately following handwashing with non-antimicrobial soap may increase the frequency of dermatitis 559. Category IB

Perform hand hygiene:
IV.A.3.a. Before having direct contact with patients 664,979. Category IB
IV.A.3.b. After contact with blood, body fluids or excretions, mucous membranes, nonintact skin, or wound dressings 664. Category IA
IV.A.3.c. After contact with a patient's intact skin (e.g., when taking a pulse or blood pressure or lifting a patient) 167,976,979,980. Category IB
IV.A.3.d. If hands will be moving from a contaminated body site to a clean body site during patient care. Category II
IV.A.3.e. After contact with inanimate objects (including medical equipment) in the immediate vicinity of the patient 72,73,88,800,981,982. Category II
IV.A.3.f. After removing gloves 728,741,742. Category IB
IV.A.4. Wash hands with non-antimicrobial soap and water or with antimicrobial soap and water if contact with spores (e.g., C. difficile or Bacillus anthracis) is likely to have occurred. The physical action of washing and rinsing hands under such circumstances is recommended because alcohols, chlorhexidine, iodophors, and other antiseptic agents have poor activity against spores 559,956,983. Category II

IV.A.5. Do not wear artificial fingernails or extenders if duties include direct contact with patients at high risk for infection and associated adverse outcomes (e.g., those in ICUs or operating rooms) 30,31,559,722-724. Category IA

IV.A.5.a. Develop an organizational policy on the wearing of non-natural nails by healthcare personnel who have direct contact with patients outside of the groups specified above 984.

IV.E. Patient-care equipment and instruments/devices

IV.E.1. Establish policies and procedures for containing, transporting, and handling patient-care equipment and instruments/devices that may be contaminated with blood or body fluids 18,739,975. Category IB/IC

IV.E.2. Remove organic material from critical and semi-critical instruments/devices, using recommended cleaning agents, before high-level disinfection and sterilization to enable effective disinfection and sterilization processes 836,991,992. Category IA

IV.E.3. Wear PPE (e.g., gloves, gown), according to the level of anticipated contamination, when handling patient-care equipment and instruments/devices that are visibly soiled or may have been in contact with blood or body fluids 18,739,975. Category IB/IC

IV.F. Care of the environment 11

IV.F.1. Establish policies and procedures for routine and targeted cleaning of environmental surfaces as indicated by the level of patient contact and degree of soiling 11. Category II

IV.F.2. Clean and disinfect surfaces that are likely to be contaminated with pathogens, including those that are in close proximity to the patient (e.g., bed rails, overbed tables) and frequently-touched surfaces in the patient care environment (e.g., doorknobs, surfaces in and surrounding toilets in patients' rooms), on a more frequent schedule than other surfaces (e.g., horizontal surfaces in waiting rooms) 11,72,73,740,746,800,835,993-995. Category IB

IV.F.3. Use EPA-registered disinfectants that have microbicidal (i.e., killing) activity against the pathogens most likely to contaminate the patient-care environment. Use in accordance with manufacturer's instructions 842-844,956,996. Category IB/IC

IV.F.3.a. Review the efficacy of in-use disinfectants when evidence of continuing transmission of an infectious agent (e.g., rotavirus, C. difficile, norovirus) may indicate resistance to the in-use product, and change to a more effective disinfectant as indicated 275,842,847. Category II

IV.F.4. In facilities that provide health care to pediatric patients or have waiting areas with child play toys (e.g., obstetric/gynecology offices and clinics), establish policies and procedures for cleaning and disinfecting toys at regular intervals 379,80.
Category IB

Use the following principles in developing this policy and these procedures: Category II
• Select play toys that can be easily cleaned and disinfected
• Do not permit use of stuffed furry toys if they will be shared
• Clean and disinfect large stationary toys (e.g., climbing equipment) at least weekly and whenever visibly soiled
• If toys are likely to be mouthed, rinse with water after disinfection; alternatively, wash in a dishwasher
• When a toy requires cleaning and disinfection, do so immediately or store it in a designated labeled container, separate from toys that are clean and ready for use

IV.F.5. Include multi-use electronic equipment in policies and procedures for preventing contamination and for cleaning and disinfection, especially those items that are used by patients, those used during delivery of patient care, and mobile devices that are moved in and out of patient rooms frequently (e.g., daily) 850-852,997. Category IB

IV.F.5.a. No recommendation for use of removable protective covers or washable keyboards. Unresolved issue

IV.G. Textiles and laundry

IV.G.1. Handle used textiles and fabrics with minimum agitation to avoid contamination of air, surfaces, and persons 739,998,999. Category IB/IC

IV.G.2. If laundry chutes are used, ensure that they are properly designed, maintained, and used in a manner that minimizes dispersion of aerosols from contaminated laundry 11,13,1000,1001. Category IB/IC

IV.H. Safe injection practices

The following recommendations apply to the use of needles, cannulas that replace needles, and, where applicable, intravenous delivery systems 454.

IV.H.1. Use aseptic technique to avoid contamination of sterile injection equipment 1002,1003. Category IA

IV.H.2. Do not administer medications from a syringe to multiple patients, even if the needle or cannula on the syringe is changed. Needles, cannulae, and syringes are sterile, single-use items; they should not be reused for another patient nor to access a medication or solution that might be used for a subsequent patient 453,919,1004,1005. Category IA

IV.H.3. Use fluid infusion and administration sets (i.e., intravenous bags, tubing, and connectors) for one patient only and dispose of them appropriately after use. Consider a syringe or needle/cannula contaminated once it has been used to enter or connect to a patient's intravenous infusion bag or administration set 453. Category IB

IV.H.4. Use single-dose vials for parenteral medications whenever possible 453. Category IA

IV.H.5. Do not administer medications from single-dose vials or ampules to multiple patients or combine leftover contents for later use 369,453,1005. Category IA

IV.H.6. If multidose vials must be used, both the needle or cannula and the syringe used to access the multidose vial must be sterile 453,1002. Category IA

IV.H.7. Do not keep multidose vials in the immediate patient treatment area, and store them in accordance with the manufacturer's recommendations; discard if sterility is compromised or questionable 453,1003. Category IA

IV.H.8. Do not use bags or bottles of intravenous solution as a common source of supply for multiple patients 453,1006. Category IB

IV.I. Infection control practices for special lumbar puncture procedures

Wear a surgical mask when placing a catheter or injecting material into the spinal canal or subdural space (i.e., during myelograms, lumbar puncture, and spinal or epidural anesthesia) 906-914,918,1007. Category IB
IV.J. Worker safety

Adhere to federal and state requirements for protection of healthcare personnel from exposure to bloodborne pathogens 739. Category IC

# V. Transmission-Based Precautions

V.A. General principles

V.A.1. In addition to Standard Precautions, use Transmission-Based Precautions for patients with documented or suspected infection or colonization with highly transmissible or epidemiologically-important pathogens for which additional precautions are needed to prevent transmission (see Appendix A) 24,93,126,141,306,806,1008.

V.B. Contact Precautions

V.B.2.b. In long-term care and other residential settings, make decisions regarding patient placement on a case-by-case basis, balancing infection risks to other patients in the room, the presence of risk factors that increase the likelihood of transmission, and the potential adverse psychological impact on the infected or colonized patient 920,921. Category II

V.B.2.c. In ambulatory settings, place patients who require Contact Precautions in an examination room or cubicle as soon as possible 20. Category II

V.B.3. Use of personal protective equipment

V.B.3.a. Gloves. Wear gloves whenever touching the patient's intact skin 24,89,134,559,746,837 or surfaces and articles in close proximity to the patient (e.g., medical equipment, bed rails) 72,73,88,837. Don gloves upon entry into the room or cubicle. Category IB

V.B.3.b. Gowns

V.B.3.b.i. Wear a gown whenever anticipating that clothing will have direct contact with the patient or potentially contaminated environmental surfaces or equipment in close proximity to the patient. Don the gown upon entry into the room or cubicle. Remove the gown and observe hand hygiene before leaving the patient-care environment 24,88,134,745,837. Category IB

V.B.3.b.ii. After gown removal, ensure that clothing and skin do not contact potentially contaminated environmental surfaces that could result in possible transfer of microorganisms to other patients or environmental surfaces 72,73.

V.B.5. Patient-care equipment and instruments/devices

V.B.5.a. Handle patient-care equipment and instruments/devices according to Standard Precautions 739,836. Category IB/IC

V.B.5.b. In acute care hospitals and long-term care and other residential settings, use disposable noncritical patient-care equipment (e.g., blood pressure cuffs) or implement patient-dedicated use of such equipment. If common use of equipment for multiple patients is unavoidable, clean and disinfect such equipment before use on another patient 24,88,796,836,837,854,1016. Category IB

V.B.5.c. In home care settings

V.B.5.c.i. Limit the amount of non-disposable patient-care equipment brought into the home of patients on Contact Precautions. Whenever possible, leave patient-care equipment in the home until discharge from home care services. Category II

V.B.5.c.ii. If noncritical patient-care equipment (e.g., stethoscope) cannot remain in the home, clean and disinfect items before taking them from the home, using a low- to intermediate-level disinfectant. Alternatively, place contaminated reusable items in a plastic bag for transport and subsequent cleaning and disinfection. Category II

V.B.5.d. In ambulatory settings, place contaminated reusable noncritical patient-care equipment in a plastic bag for transport to a soiled utility area for reprocessing. Category II
V.B.6. Environmental measures. Ensure that rooms of patients on Contact Precautions are prioritized for frequent cleaning and disinfection (e.g., at least daily) with a focus on frequently-touched surfaces (e.g., bed rails, overbed table, bedside commode, lavatory surfaces in patient bathrooms, doorknobs) and equipment in the immediate vicinity of the patient 11,24,88,746,837. Category IB

V.B.7. Discontinue Contact Precautions after signs and symptoms of the infection have resolved or according to pathogen-specific recommendations in Appendix A. Category IB

V.C. Droplet Precautions

V.C.1. Use Droplet Precautions as recommended in Appendix A for patients known or suspected to be infected with pathogens transmitted by respiratory droplets (i.e., large-particle droplets >5 µm in size) that are generated by a patient who is coughing, sneezing, or talking 14,23,41,95,103,111,112,755,756,989,1017. Category IB

V.C.2. Patient placement

V.C.2.a. In acute care hospitals, place patients who require Droplet Precautions in a single-patient room when available. Category II

When single-patient rooms are in short supply, apply the following principles for making decisions on patient placement:
• Prioritize patients who have excessive cough and sputum production for single-patient room placement. Category II
• Place together in the same room (cohort) patients who are infected with the same pathogen and are suitable roommates 814,816. Category IB
• If it becomes necessary to place patients who require Droplet Precautions in a room with a patient who does not have the same infection:
• Avoid placing patients on Droplet Precautions in the same room with patients who have conditions that may increase the risk of adverse outcome from infection or that may facilitate transmission (e.g., those who are immunocompromised or who have anticipated prolonged lengths of stay). Category II
• Ensure that patients are physically separated (i.e., >3 feet apart) from each other. Draw the privacy curtain between beds to minimize opportunities for close contact 103,104,410. Category IB
• Change protective attire and perform hand hygiene between contact with patients in the same room, regardless of whether one patient or both patients are on Droplet Precautions 741-743,988,1014,1015. Category IB

V.C.2.b. In long-term care and other residential settings, make decisions regarding patient placement on a case-by-case basis after considering infection risks to other patients in the room and available alternatives 410. Category II

V.C.2.c. In ambulatory settings, place patients who require Droplet Precautions in an examination room or cubicle as soon as possible. Instruct patients to follow recommendations for Respiratory Hygiene/Cough Etiquette 9,447,448,828. Category II

V.C.3. Use of personal protective equipment

V.C.3.a. Don a mask upon entry into the patient room or cubicle 14,23,41,103,111,113,115,827. Category IB

V.D. Airborne Precautions

V.D.1. Use Airborne Precautions as recommended in Appendix A for patients known or suspected to be infected with infectious agents transmitted person-to-person by the airborne route (e.g., M. tuberculosis 12, measles 34,122,1020, chickenpox 123,773,1021, disseminated herpes zoster 1022). Category IA/IC

V.D.2. Patient placement

V.D.2.a. In acute care hospitals and long-term care settings, place patients who require Airborne Precautions in an AIIR that has been constructed in accordance with current guidelines 11-13. Category IA/IC
V.D.2.a.i. Provide at least six (existing facility) or 12 (new construction/renovation) air changes per hour.

V.D.2.a.ii. Direct exhaust of air to the outside. If it is not possible to exhaust air from an AIIR directly to the outside, the air may be returned to the air-handling system or adjacent spaces if all air is directed through HEPA filters.

V.D.2.a.iii. Whenever an AIIR is in use for a patient on Airborne Precautions, monitor air pressure daily with visual indicators (e.g., smoke tubes, flutter strips), regardless of the presence of differential pressure sensing devices (e.g., manometers) 11,12,1023,1024.

V.D.2.a.iv. Keep the AIIR door closed when not required for entry and exit.

V.D.2.b. When an AIIR is not available, transfer the patient to a facility that has an available AIIR 12. Category II

V.D.2.c. In the event of an outbreak or exposure involving large numbers of patients who require Airborne Precautions:
• Consult infection control professionals before patient placement to determine the safety of alternative rooms that do not meet engineering requirements for an AIIR.
• Place together (cohort) patients who are presumed to have the same infection (based on clinical presentation and diagnosis, when known) in areas of the facility that are away from other patients, especially patients who are at increased risk for infection (e.g., immunocompromised patients).
• Use temporary portable solutions (e.g., exhaust fan) to create a negative-pressure environment in the converted area of the facility. Discharge air directly to the outside, away from people and air intakes, or direct all the air through HEPA filters before it is introduced to other air spaces 12. Category II

V.D.2.d.ii. Place the patient in an AIIR as soon as possible. If an AIIR is not available, place a surgical mask on the patient and place him/her in an examination room. Once the patient leaves, the room should remain vacant for the appropriate time, generally one hour, to allow for a full exchange of air 11,12,122. Category IB/IC

V.D.2.d.iii. Instruct patients with a known or suspected airborne infection to wear a surgical mask and observe Respiratory Hygiene/Cough Etiquette. Once in an AIIR, the mask may be removed; the mask should remain on if the patient is not in an AIIR 12,107,145,899. Category IB/IC

V.D.3. Personnel restrictions. Restrict susceptible healthcare personnel from entering the rooms of patients known or suspected to have measles (rubeola), varicella (chickenpox), disseminated zoster, or smallpox if other, immune healthcare personnel are available 17,775. Category IB

V.D.4. Use of PPE

V.D.4.a. Wear a fit-tested, NIOSH-approved N95 or higher-level respirator for respiratory protection when entering the room or home of a patient when the following diseases are suspected or confirmed:
• Infectious pulmonary or laryngeal tuberculosis, or when infectious tuberculosis skin lesions are present and procedures that would aerosolize viable organisms (e.g., irrigation, incision and drainage, whirlpool treatments) are performed 12,1025,1026. Category IB
• Smallpox (vaccinated and unvaccinated). Respiratory protection is recommended for all healthcare personnel, including those with a documented "take" after smallpox vaccination, due to the risk of a genetically engineered virus against which the vaccine may not provide protection, or of exposure to a very large viral load (e.g., from high-risk aerosol-generating procedures, immunocompromised patients, hemorrhagic or flat smallpox) 108,129. Category II
V.D.4.b. No recommendation is made regarding the use of PPE by healthcare personnel who are presumed to be immune to measles (rubeola) or varicella-zoster based on history of disease, vaccine, or serologic testing when caring for an individual with known or suspected measles, chickenpox, or disseminated zoster, due to difficulties in establishing definite immunity 1027,1028. Unresolved issue

V.D.4.c. No recommendation is made regarding the type of personal protective equipment (i.e., surgical mask or respiratory protection with an N95 or higher respirator) to be worn by susceptible healthcare personnel who must have contact with patients with known or suspected measles, chickenpox, or disseminated herpes zoster. Unresolved issue

V.D.5. Patient transport

V.D.5.a. In acute care hospitals and long-term care and other residential settings, limit transport and movement of patients outside of the room to medically-necessary purposes. Category II

V.D.5.b. If transport or movement outside an AIIR is necessary, instruct patients to wear a surgical mask, if possible, and observe Respiratory Hygiene/Cough Etiquette 12. Category II

V.D.5.c. For patients with skin lesions associated with varicella or smallpox, or draining skin lesions caused by M. tuberculosis, cover the affected areas to prevent aerosolization of or contact with the infectious agent in skin lesions 108,1025,1026,1029-1031. Category IB

V.D.5.d. Healthcare personnel transporting patients who are on Airborne Precautions do not need to wear a mask or respirator during transport if the patient is wearing a mask and infectious skin lesions are covered. Category II

V.D.6. Exposure management. Immunize or provide the appropriate immune globulin to susceptible persons as soon as possible following unprotected contact (i.e., exposure) with a patient with measles, varicella, or smallpox: Category IA
• Administer measles vaccine to exposed susceptible persons within 72 hours after the exposure, or administer immune globulin within six days of the exposure event for high-risk persons in whom vaccine is contraindicated 17,1032-1035.
• Administer varicella vaccine to exposed susceptible persons within 120 hours after the exposure, or administer varicella immune globulin (VZIG or alternative product), when available, within 96 hours for high-risk persons in whom vaccine is contraindicated (e.g., immunocompromised patients, pregnant women, newborns whose mother's varicella onset was <5 days before or within 48 hours after delivery) 888,1035-1037.

# VI. Protective Environment

VI.C.1.a. HEPA filtration of incoming air 13. Category IB

VI.C.1.b. Directed room airflow, with the air supply on one side of the room that moves air across the patient bed and out through an exhaust on the opposite side of the room 13. Category IB

VI.C.1.c. Positive air pressure in the room relative to the corridor (pressure differential of >12.5 Pa [0.01-inch water gauge]) 13. Category IB

VI.C.1.c.i. Monitor air pressure daily with visual indicators (e.g., smoke tubes, flutter strips) 11,1024. Category IA

VI.C.1.d. Well-sealed rooms that prevent infiltration of outside air 13. Category IB

VI.C.1.e. At least 12 air changes per hour 13. Category IB

VI.C.2. Lower dust levels by using smooth, nonporous surfaces and finishes that can be scrubbed, rather than textured material (e.g., upholstery). Wet dust horizontal surfaces whenever dust is detected, and routinely clean crevices and sprinkler heads where dust may accumulate 940,941. Category II

VI.C.3. Avoid carpeting in hallways and patient rooms in these areas 941. Category IB
VI.C.4. Prohibit dried and fresh flowers and potted plants 942-944. Category II

VI.D. Minimize the length of time that patients who require a Protective Environment are outside their rooms for diagnostic procedures and other activities 11,158,945. Category IB

VI.E. During periods of construction, to prevent inhalation of respirable particles that could contain infectious spores, provide respiratory protection (e.g., N95 respirator) to patients who are medically fit to tolerate a respirator when they are required to leave the Protective Environment 945,158. Barrier precautions (e.g., masks, gowns, gloves) are not required for healthcare personnel in the absence of suspected or confirmed infection in the patient or if they are not indicated according to Standard Precautions 15. Category II

VI.F.4. Implement Airborne Precautions for patients who require a Protective Environment room and who also have an airborne infectious disease (e.g., pulmonary or laryngeal tuberculosis, acute varicella-zoster). Category IA

VI.F.4.a. Ensure that the Protective Environment is designed to maintain positive pressure 13. Category IB

VI.F.4.b. Use an anteroom to further support the appropriate air balance relative to the corridor and the Protective Environment; provide independent exhaust of contaminated air to the outside, or place a HEPA filter in the exhaust duct if the return air must be recirculated 13,1041. Category IB

VI.F.4.c. If an anteroom is not available, place the patient in an AIIR and use portable, industrial-grade HEPA filters in the room to enhance filtration of spores 1042. Category II

# Preamble

The mode(s) and risk of transmission for each specific disease agent included in Appendix A were reviewed. Principal sources consulted for the development of disease-specific recommendations for Appendix A included infectious disease manuals and textbooks 833,1043,1044. The published literature was searched for evidence of person-to-person transmission in healthcare and non-healthcare settings, with a focus on reported outbreaks that would assist in developing recommendations for all settings where healthcare is delivered. Criteria used to assign Transmission-Based Precautions categories follow:
• A Transmission-Based Precautions category was assigned if there was strong evidence for person-to-person transmission via droplet, contact, or airborne routes in healthcare or non-healthcare settings and/or if patient factors (e.g., diapered infants, diarrhea, draining wounds) increased the risk of transmission
• Transmission-Based Precautions category assignments reflect the predominant mode(s) of transmission
• If there was no evidence for person-to-person transmission by droplet, contact, or airborne routes, Standard Precautions were assigned
• If there was a low risk for person-to-person transmission and no evidence of healthcare-associated transmission, Standard Precautions were assigned
• Standard Precautions were assigned for bloodborne pathogens (e.g., hepatitis B and C viruses, human immunodeficiency virus) as per CDC recommendations for Universal Precautions issued in 1988 780. Subsequent experience has confirmed the efficacy of Standard Precautions to prevent exposure to infected blood and body fluids 778,779,866.

Additional information relevant to use of precautions was added in the comments column to assist the caregiver in decision-making.
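Facilities that maintain Appendix A as reference data sometimes encode each row as a small record. The following is a minimal sketch, using the appendix's single-letter codes (S = Standard, C = Contact, D = Droplet, A = Airborne) and its duration conventions (e.g., DI = duration of illness). The example rows are simplified restatements of entries that appear later in this appendix; the record structure itself is hypothetical, not part of the guideline.

```python
from dataclasses import dataclass

@dataclass
class PrecautionEntry:
    """One simplified Appendix A row. Precaution codes: S/C/D/A, always
    applied in addition to Standard Precautions."""
    infection: str
    precautions: tuple
    duration: str
    comments: str = ""

EXAMPLE_ENTRIES = [
    PrecautionEntry("Measles (rubeola)", ("A",),
                    "Until 4 days after onset of rash; DI in immunocompromised patients",
                    "Susceptible HCWs should not enter the room if immune caregivers are available."),
    PrecautionEntry("Botulism", ("S",), "Not applicable",
                    "Not transmitted from person to person; exposure to toxin is necessary for disease."),
    PrecautionEntry("Tularemia", ("S",), "Not applicable",
                    "Person-to-person spread is rare."),
]
```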
Citations were added as needed to support a change in, or provide additional evidence for, recommendations for a specific disease and for new infectious agents (e.g., SARS-CoV, avian influenza) that have been added to Appendix A. The reader may refer to more detailed discussion concerning modes of transmission and emerging pathogens in the background text, and to Appendix B for MDRO control.

# APPENDIX A: TYPE AND DURATION OF PRECAUTIONS RECOMMENDED FOR SELECTED INFECTIONS AND CONDITIONS

Norovirus gastroenteritis: Use Contact Precautions for diapered or incontinent persons for the duration of illness or to control institutional outbreaks. Persons who clean areas heavily contaminated with feces or vomitus may benefit from wearing masks, since virus can be aerosolized from these body substances 142,147,148; ensure consistent environmental cleaning and disinfection, with a focus on restrooms even when apparently unsoiled 273,1064. Hypochlorite solutions may be required when there is continued transmission 290-292. Alcohol is less active, but there is no evidence that alcohol antiseptic handrubs are not effective for hand decontamination 294. Cohorting of affected patients to separate airspaces and toilet facilities may help interrupt transmission during outbreaks 1076-1079.

Mosquito-borne infections: Install screens in windows and doors in endemic areas. Use DEET-containing mosquito repellents and clothing to cover the extremities.

Marburg virus disease: see viral hemorrhagic fevers.

Measles (rubeola): Airborne Precautions until 4 days after onset of rash; for the duration of illness in immunocompromised patients. Susceptible HCWs should not enter the room if immune care providers are available; no recommendation for face protection for immune HCWs; no recommendation for type of face protection for susceptible HCWs (i.e., mask or respirator) 1027,1028. For exposed susceptibles, give post-exposure vaccine within 72 hours or immune globulin within 6 days when available 17,1032,1034.

Table 2 footnotes:
* To ensure that appropriate empiric precautions are implemented always, hospitals must have systems in place to evaluate patients routinely according to these criteria as part of their preadmission and admission care.
† Patients with the syndromes or conditions listed below may present with atypical signs or symptoms (e.g., neonates and adults with pertussis may not have paroxysmal or severe cough). The clinician's index of suspicion should be guided by the prevalence of specific conditions in the community, as well as clinical judgment.
‡ The organisms listed under the column "Potential Pathogens" are not intended to represent the complete, or even most likely, diagnoses, but rather possible etiologic agents that require additional precautions beyond Standard Precautions until they can be ruled out.
§ These pathogens include enterohemorrhagic Escherichia coli O157:H7, Shigella spp., hepatitis A virus, noroviruses, rotavirus, and C. difficile.

# Disease Anthrax (Bacillus anthracis)

# Site(s) of Infection; Transmission Mode
Cutaneous (contact with spores); RT (inhalation of spores); GIT (ingestion of spores; rare). Comment: Spores can be inhaled into the lower respiratory tract. The infectious dose of B. anthracis in humans by any route is not precisely known. In primates, the LD50 (i.e., the dose required to kill 50% of animals) for an aerosol challenge with
B. anthracis is estimated to be 8,000-50,000 spores; the infectious dose may be as low as 1-3 spores.

# Incubation Period
Cutaneous: 1 to 12 days; RT: usually 1 to 7 days, but up to 43 days reported; GIT: 15-72 hours.

# Clinical Features
Cutaneous: painless, reddish papule, which develops a central vesicle or bulla in 1-2 days; over the next 3-7 days the lesion becomes pustular and then necrotic, with black eschar and extensive surrounding edema. RT: initial flu-like illness for 1-3 days with headache, fever, malaise, and cough; by day 4, severe dyspnea and shock; usually fatal if untreated (85%-90%); meningitis in 50% of RT cases. GIT: in the intestinal form, necrotic, ulcerated, edematous lesions develop in the intestines, with fever, nausea, and vomiting, and progression to hematemesis and bloody diarrhea 25.

# Disease Botulism

# Site(s) of Infection; Transmission Mode
GIT: ingestion of toxin-containing food; RT: inhalation of toxin-containing aerosol. Comment: Toxin is ingested or potentially delivered by aerosol in bioterrorist incidents. LD50 for type A is 0.001 µg/mL/kg.

# Incubation Period
1-5 days.

# Clinical Features
Ptosis, generalized weakness, dizziness, dry mouth and throat, blurred vision, diplopia, dysarthria, dysphonia, and dysphagia, followed by symmetrical descending paralysis and respiratory failure.

# Diagnosis
Clinical diagnosis; identification of toxin in stool; serology unless toxin-containing material is available for toxin neutralization bioassays.

# Infectivity
Not transmitted from person to person. Exposure to toxin is necessary for disease.

# Recommended Precautions
Standard Precautions.

# Disease Ebola Hemorrhagic Fever

# Site(s) of Infection; Transmission Mode
As a rule, infection develops after exposure of mucous membranes or the RT, or through broken skin or percutaneous injury.

# Incubation Period
2-19 days, usually 5-10 days.

# Clinical Features
Febrile illness with malaise, myalgias, headache, vomiting, and diarrhea that is rapidly complicated by hypotension, shock, and hemorrhagic features. Massive hemorrhage in <50% of patients.

# Diagnosis
Etiologic diagnosis can be made using RT-PCR, serologic detection of antibody and antigen, pathologic assessment with immunohistochemistry, and viral culture with EM confirmation of morphology.

# Infectivity
Person-to-person transmission primarily occurs through unprotected contact with blood and body fluids; percutaneous injuries (e.g., needlestick) are associated with a high rate of transmission; transmission in healthcare settings has been reported but is prevented by use of barrier precautions.

# Recommended Precautions
Hemorrhagic fever-specific barrier precautions: If disease is believed to be related to intentional release of a bioweapon, the epidemiology of transmission is unpredictable pending observation of disease transmission. Until the nature of the pathogen is understood and its transmission pattern confirmed, Standard, Contact, and Airborne Precautions should be used. Once the pathogen is characterized, if the epidemiology of transmission is consistent with natural disease, Droplet Precautions can be substituted for Airborne Precautions. Emphasize: 1) use of sharps safety devices and safe work practices; 2) hand hygiene; 3) barrier protection against blood and body fluids upon entry into the room (single gloves and fluid-resistant or impermeable gown, face/eye protection with masks, goggles, or face shields); and 4) appropriate waste handling.
Use N95 or higher respirators when performing aerosol-generating procedures. In settings where AIIRs are unavailable or large numbers of patients cannot be accommodated by existing AIIRs, observe Droplet Precautions (plus Standard Precautions and Contact Precautions) and segregate patients from those not suspected of VHF infection. Limit blood draws to those essential to care. See text for discussion and Appendix A for recommendations for naturally occurring VHFs.

# Disease Plague 2

# Site(s) of Infection; Transmission Mode
RT: inhalation of respiratory droplets. Comment: Pneumonic plague is most likely to occur if plague is used as a biological weapon, but some cases of bubonic and primary septicemic plague may also occur. Infective dose: 100 to 500 bacteria.

# Incubation Period
1 to 6 days, usually 2 to 3 days.

# Clinical Features
Pneumonic: fever, chills, headache, cough, dyspnea, rapid progression of weakness, and, in a later stage, hemoptysis, circulatory collapse, and bleeding diathesis.

# Diagnosis
Presumptive diagnosis from Gram stain or Wayson stain of sputum, blood, or lymph node aspirate; definitive diagnosis from cultures of the same material, or paired acute/convalescent serology.

# Infectivity
Person-to-person transmission occurs via respiratory droplets; risk of transmission is low during the first 20-24 hours of illness and requires close contact. Respiratory secretions probably are not infectious within a few hours after initiation of appropriate therapy.

# Recommended Precautions
Standard Precautions; Droplet Precautions until patients have received 48 hours of appropriate therapy. Chemoprophylaxis: consider antibiotic prophylaxis for HCWs with close-contact exposure. Patients with pneumonic plague are most likely to transmit the infection when the disease is in the end stage. These persons cough copious amounts of bloody sputum that contains many plague bacteria. Patients in the early stage of primary pneumonic plague (approximately the first 20-24 h) apparently pose little risk [1,2]. Antibiotic medication rapidly clears the sputum of plague bacilli, so that a patient generally is not infective within hours after initiation of effective antibiotic treatment [3]. This means that in modern times many patients will never reach a stage where they pose a significant risk to others. Even in the end stage of disease, transmission occurs only after close contact. Simple protective measures, such as wearing masks, good hygiene, and avoiding close contact, have been effective in interrupting transmission during many pneumonic plague outbreaks [2]. In the United States, the last known cases of person-to-person transmission of pneumonic plague occurred in 1925 [2].

Smallpox: Only immune HCWs should care for patients; give post-exposure vaccine within 4 days. Vaccinia: HCWs should cover the vaccination site with gauze and a semi-permeable dressing until the scab separates (>21 days) and observe hand hygiene. For adverse events with virus-containing lesions: Standard plus Contact Precautions until all lesions are crusted.
b Transmission by the airborne route is a rare event; Airborne Precautions are recommended when possible, but in the event of mass exposures, barrier precautions and containment within a designated area are most important 204,212.
c Vaccinia adverse events with lesions containing infectious virus include inadvertent autoinoculation, ocular lesions (blepharitis, conjunctivitis), generalized vaccinia, progressive vaccinia, and eczema vaccinatum; bacterial superinfection also requires addition of Contact Precautions if exudates cannot be contained 216,217.
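Several entries tie discontinuation of Droplet Precautions to a fixed period of effective therapy (48 hours for pneumonic plague above; 24 hours for group A streptococcus in III.B.2). Where such intervals are tracked electronically, the arithmetic is simple date math, as in this illustrative sketch; the function name and example values are invented for illustration, and clinical and policy review always govern the actual decision.

```python
from datetime import datetime, timedelta

def droplet_precautions_end(therapy_start: datetime,
                            hours_of_effective_therapy: int) -> datetime:
    """Earliest time Droplet Precautions may be discontinued for pathogens
    with a fixed therapy-based duration (subject to clinical review)."""
    return therapy_start + timedelta(hours=hours_of_effective_therapy)

# Example: pneumonic plague, 48 h of appropriate therapy per the entry above.
start = datetime(2007, 6, 1, 9, 30)
print(droplet_precautions_end(start, 48))  # -> 2007-06-03 09:30:00
```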
# Disease Tularemia

# Clinical Features
Pneumonic: malaise, cough, sputum production, dyspnea. Typhoidal: fever, prostration, weight loss, and frequently an associated pneumonia.

# Diagnosis
Diagnosis is usually made with serology on acute and convalescent serum specimens; the bacterium can be detected by PCR (LRN) or isolated from blood and other body fluids on cysteine-enriched media or by mouse inoculation.

# Infectivity
Person-to-person spread is rare. Laboratory workers who encounter/handle cultures of this organism are at high risk for disease if exposed.

# Recommended Precautions
Standard Precautions.

* During aerosol-generating procedures on patients with suspected or proven infections transmitted by respiratory aerosols (e.g., SARS), wear a fit-tested N95 or higher respirator in addition to gloves, gown, and face/eye protection.

Protective Environment specifications (Table 5, in part):
• Monitor air pressure daily with visual indicators (e.g., flutter strips, smoke tubes) or a hand-held pressure gauge
• Self-closing door on all room exits
• Maintain back-up ventilation equipment (e.g., portable units for fans or filters) for emergency provision of ventilation requirements for PE areas, and take immediate steps to restore the fixed ventilation system
• For patients who require both a PE and Airborne Infection Isolation, use an anteroom to ensure proper air-balance relationships and provide independent exhaust of contaminated air to the outside, or place a HEPA filter in the exhaust duct. If an anteroom is not available, place the patient in an AIIR and use portable ventilation units and industrial-grade HEPA filters to enhance filtration of spores.

# Glossary

Bioaerosols. An airborne dispersion of particles containing whole or parts of biological entities, such as bacteria, viruses, dust mites, fungal hyphae, or fungal spores. Such aerosols usually consist of a mixture of mono-dispersed and aggregate cells, spores, or viruses carried by other materials, such as respiratory secretions and/or inert particles. Infectious bioaerosols (i.e., those that contain biological agents capable of causing an infectious disease) can be generated from human sources (e.g., expulsion from the respiratory tract during coughing, sneezing, talking, or singing, or during suctioning or wound irrigation), wet environmental sources (e.g., HVAC and cooling-tower water with Legionella), or dry sources (e.g., construction dust with spores produced by Aspergillus spp.). Bioaerosols include large respiratory droplets and small droplet nuclei (Cole EC. AJIC 1998;26:453-64).

Caregivers. All persons who are not employees of an organization, are not paid, and provide or assist in providing healthcare to a patient (e.g., family member, friend) and acquire technical training as needed based on the tasks that must be performed.

Cohorting. In the context of this guideline, this term applies to the practice of grouping patients infected or colonized with the same infectious agent together to confine their care to one area and prevent contact with susceptible patients (cohorting patients). During outbreaks, healthcare personnel may be assigned to a cohort of patients to further limit opportunities for transmission (cohorting staff).

Colonization. Proliferation of microorganisms on or within body sites without detectable host immune response, cellular damage, or clinical expression. The presence of a microorganism within a host may occur with varying duration but may become a source of potential transmission. In many instances, colonization and carriage are synonymous.

Droplet nuclei.
Microscopic particles <5 µm in size that are the residue of evaporated droplets and are produced when a person coughs, sneezes, shouts, or sings. These particles can remain suspended in the air for prolonged periods of time and can be carried on normal air currents in a room or beyond, to adjacent spaces or areas receiving exhaust air.

Hand hygiene. A general term that applies to any one of the following: 1) handwashing with plain (non-antimicrobial) soap and water; 2) antiseptic handwash (soap containing antiseptic agents and water); 3) antiseptic handrub (waterless antiseptic product, most often alcohol-based, rubbed on all surfaces of hands); or 4) surgical hand antisepsis (antiseptic handwash or antiseptic handrub performed preoperatively by surgical personnel to eliminate transient hand flora and reduce resident hand flora) 559.

Healthcare-associated infection (HAI). An infection that develops in a patient who is cared for in any setting where healthcare is delivered (e.g., acute care hospital, chronic care facility, ambulatory clinic, dialysis center, surgicenter, home) and is related to receiving health care (i.e., was not incubating or present at the time healthcare was provided). In ambulatory and home settings, HAI would apply to any infection that is associated with a medical or surgical intervention. Since the geographic location of infection acquisition is often uncertain, the preferred term is considered to be healthcare-associated rather than healthcare-acquired.

Hematopoietic stem cell transplantation (HSCT). Any transplantation of blood- or bone marrow-derived hematopoietic stem cells, regardless of donor type (e.g., allogeneic or autologous) or cell source (e.g., bone marrow, peripheral blood, or placental/umbilical cord blood); associated with periods of severe immunosuppression that vary with the source of the cells, the intensity of chemotherapy required, and the presence of graft-versus-host disease (MMWR 2000; 49: RR-10).

High-efficiency particulate air (HEPA) filter. An air filter that removes >99.97% of particles >0.3 µm (the most penetrating particle size) at a specified flow rate of air. HEPA filters may be integrated into the central air handling systems, installed at the point of use above the ceiling of a room, or used as portable units (MMWR 2003; 52: RR-10).

Home care. A wide range of medical, nursing, rehabilitation, hospice, and social services delivered to patients in their place of residence (e.g., private residence, senior living center, assisted living facility). Home health-care services include care provided by home health aides and skilled nurses, respiratory therapists, dieticians, physicians, chaplains, and volunteers; provision of durable medical equipment; home infusion therapy; and physical, speech, and occupational therapy.

Immunocompromised patients. Those patients whose immune mechanisms are deficient because of congenital or acquired immunologic disorders (e.g., human immunodeficiency virus [HIV] infection, congenital immune deficiency syndromes), chronic diseases (e.g., diabetes mellitus, cancer, emphysema, or cardiac failure), ICU care, malnutrition, or immunosuppressive therapy of another disease process (e.g., radiation, cytotoxic chemotherapy, anti-graft-rejection medication, corticosteroids, monoclonal antibodies directed against a specific component of the immune system).
The type of infections for which an immunocompromised patient has increased susceptibility is determined by the severity of immunosuppression and the specific component(s) of the immune system that is affected. Patients undergoing allogeneic HSCT and those with chronic graft-versus-host disease are considered the most vulnerable to HAIs. Immunocompromised states also make it more difficult to diagnose certain infections (e.g., tuberculosis) and are associated with more severe clinical disease states than occur in persons with the same infection and a normal immune system.

Infection. The transmission of microorganisms into a host after evading or overcoming defense mechanisms, resulting in the organism's proliferation and invasion within host tissue(s). Host responses to infection may include clinical symptoms or may be subclinical, with manifestations of disease mediated by direct organism pathogenesis and/or a function of cell-mediated or antibody responses that result in the destruction of host tissues.

Infection control and prevention professional (ICP). A person whose primary training is in either nursing, medical technology, microbiology, or epidemiology and who has acquired special training in infection control. Responsibilities may include collection, analysis, and feedback of infection data and trends to healthcare providers; consultation on infection risk assessment, prevention, and control strategies; performance of education and training activities; implementation of evidence-based infection control practices or those mandated by regulatory and licensing agencies; application of epidemiologic principles to improve patient outcomes; participation in planning renovation and construction projects (e.g., to ensure appropriate containment of construction dust); evaluation of new products or procedures on patient outcomes; oversight of employee health services related to infection prevention; implementation of preparedness plans; communication within the healthcare setting, with local and state health departments, and with the community at large concerning infection control issues; and participation in research. Certification in infection control (CIC) is available through the Certification Board of Infection Control and Epidemiology.

Infection control and prevention program. A multidisciplinary program that includes a group of activities to ensure that recommended practices for the prevention of healthcare-associated infections are implemented and followed by HCWs, making the healthcare setting safe from infection for patients and healthcare personnel. The Joint Commission on Accreditation of Healthcare Organizations (JCAHO) requires the following five components of an infection control program for accreditation: 1) surveillance: monitoring patients and healthcare personnel for acquisition of infection and/or colonization; 2) investigation: identification and analysis of infection problems or undesirable trends; 3) prevention: implementation of measures to prevent transmission of infectious agents and to reduce risks for device- and procedure-related infections; 4) control: evaluation and management of outbreaks; and 5) reporting: provision of information to external agencies as required by state and federal law and regulation (www.jcaho.org). The infection control program staff has the ultimate authority to determine infection control policies for a healthcare organization, with the approval of the organization's governing body.

Long-term care facilities (LTCFs).
An array of residential and outpatient facilities designed to meet the bio-psychosocial needs of persons with sustained self-care deficits. These include skilled nursing facilities, chronic disease hospitals, nursing homes, foster and group homes, institutions for the developmentally disabled, residential care facilities, assisted living facilities, retirement homes, adult day health care facilities, rehabilitation centers, and long-term psychiatric hospitals.

Mask. A term that applies collectively to items used to cover the nose and mouth; includes both procedure masks and surgical masks (www.fda.gov/cdrh/ode/guidance/094.html#4).

Multidrug-resistant organisms (MDROs). In general, bacteria that are resistant to one or more classes of antimicrobial agents and usually are resistant to all but one or two commercially available antimicrobial agents (e.g., MRSA, VRE, extended-spectrum beta-lactamase [ESBL]-producing or intrinsically resistant gram-negative bacilli) 176.

Nosocomial infection. A term derived from two Greek words, "nosos" (disease) and "komeion" (to take care of), that refers to any infection that develops during or as a result of an admission to an acute care facility (hospital) and was not incubating at the time of admission.

Personal protective equipment (PPE). A variety of barriers used alone or in combination to protect mucous membranes, skin, and clothing from contact with infectious agents. PPE includes gloves, masks, respirators, goggles, face shields, and gowns.

Procedure mask. A covering for the nose and mouth that is intended for use in general patient care situations. These masks generally attach to the face with ear loops rather than ties or elastic. Unlike surgical masks, procedure masks are not regulated by the Food and Drug Administration.

Protective Environment. A specialized patient-care area, usually in a hospital, that has a positive air flow relative to the corridor (i.e., air flows from the room to the outside adjacent space). The combination of high-efficiency particulate air (HEPA) filtration, high numbers (>12) of air changes per hour (ACH), and minimal leakage of air into the room creates an environment that can safely accommodate patients with a severely compromised immune system (e.g., those who have received an allogeneic hematopoietic stem-cell transplant [HSCT]) and decrease the risk of exposure to spores produced by environmental fungi. Other components include use of scrubbable surfaces instead of materials such as upholstery or carpeting, cleaning to prevent dust accumulation, and prohibition of fresh flowers or potted plants.

Quasi-experimental studies. Studies that evaluate interventions but do not use randomization as part of the study design. These studies are also referred to as nonrandomized, pre-post-intervention study designs. They aim to demonstrate causality between an intervention and an outcome but cannot achieve the level of confidence concerning attributable benefit obtained through a randomized, controlled trial. In hospitals and public health settings, randomized controlled trials often cannot be implemented for ethical, practical, or urgency reasons; therefore, quasi-experimental design studies are commonly used. However, even if an intervention appears to be statistically effective, the question can be raised of possible alternative explanations for the result.
# Quasi-experimental studies. Studies that evaluate interventions but do not use randomization as part of the study design; they are also referred to as nonrandomized, pre-post intervention studies. Such studies aim to demonstrate causality between an intervention and an outcome but cannot achieve the level of confidence concerning attributable benefit obtained through a randomized, controlled trial. In hospital and public health settings, randomized controlled trials often cannot be implemented for ethical or practical reasons or because of urgency, so quasi-experimental designs are used commonly; they are also used when it is not logistically feasible or ethically possible to conduct a randomized, controlled trial (e.g., during outbreaks). However, even when an intervention appears statistically effective, the possibility of alternative explanations for the result must be considered. Within the classification of quasi-experimental study designs, there is a hierarchy of design features that may contribute to the validity of results (Harris et al., Clin Infect Dis 2004;38:1586). # Residential care setting. A facility in which people live, minimal medical care is delivered, and the psychosocial needs of the residents are provided for. # Respirator. A personal protective device worn by healthcare personnel to protect them from inhalation exposure to airborne infectious agents that are <5 μm in size. These include infectious droplet nuclei from patients with M. tuberculosis, variola virus [smallpox], or SARS-CoV, and dust particles that contain infectious particles, such as spores of environmental fungi (e.g., Aspergillus sp.). # Respiratory Hygiene/Cough Etiquette. A combination of measures designed to minimize the transmission of respiratory pathogens via droplet or airborne routes in healthcare settings. The components of Respiratory Hygiene/Cough Etiquette are 1) covering the mouth and nose during coughing and sneezing, 2) using tissues to contain respiratory secretions, with prompt disposal into a no-touch receptacle, 3) offering a surgical mask to persons who are coughing to decrease contamination of the surrounding environment, and 4) turning the head away from others and maintaining spatial separation, ideally >3 feet, when coughing. These measures are targeted to all patients with symptoms of respiratory infection and their accompanying family members or friends beginning at the point of initial encounter with a healthcare setting (e.g., reception/triage in emergency departments, ambulatory clinics, healthcare provider offices) 126 (Srinivasan A, ICHE 2004;25:1020; www.cdc.gov/flu/professionals/infectioncontrol/resphygiene.htm). # Safety culture/climate. The shared perceptions of workers and management regarding the expectations of safety in the work environment. A hospital safety climate includes the following six organizational components: 1) senior management support for safety programs; 2) absence of workplace barriers to safe work practices; 3) cleanliness and orderliness of the worksite; 4) minimal conflict and good communication among staff members; 5) frequent safety-related feedback/training by supervisors; and 6) availability of PPE and engineering controls 620 . # Source Control. The process of containing an infectious agent either at the portal of exit from the body or within a confined space. The term is applied most frequently to containment of infectious agents transmitted by the respiratory route but could apply to other routes of transmission (e.g., a draining wound, vesicular or bullous skin lesions). Respiratory Hygiene/Cough Etiquette that encourages individuals to "cover your cough" and/or wear a mask is a source control measure. The use of enclosing devices for local exhaust ventilation (e.g., booths for sputum induction or administration of aerosolized medication) is another example of source control. # Standard Precautions. A group of infection prevention practices that apply to all patients, regardless of suspected or confirmed diagnosis or presumed infection status. Standard Precautions is a combination and expansion of Universal Precautions 780 and Body Substance Isolation 1102 .
Standard Precautions is based on the principle that all blood, body fluids, secretions, excretions except sweat, nonintact skin, and mucous membranes may contain transmissible infectious agents. Standard Precautions includes hand hygiene and, depending on the anticipated exposure, use of gloves, gown, mask, eye protection, or face shield. Also, equipment or items in the patient environment likely to have been contaminated with infectious fluids must be handled in a manner that prevents transmission of infectious agents (e.g., wear gloves for handling, contain heavily soiled equipment, and properly clean and disinfect or sterilize reusable equipment before use on another patient). # Surgical mask. A device worn over the mouth and nose by operating room personnel during surgical procedures to protect both surgical patients and operating room personnel from transfer of microorganisms and body fluids. Surgical masks also are used to protect healthcare personnel from contact with large infectious droplets (>5 μm in size). According to draft guidance issued by the Food and Drug Administration on May 15, 2003, surgical masks are evaluated using standardized testing procedures for fluid resistance, bacterial filtration efficiency, differential pressure (air exchange), and flammability in order to mitigate the risks to health associated with the use of surgical masks. These specifications apply to any masks that are labeled surgical, laser, isolation, or dental or medical procedure (www.fda.gov/cdrh/ode/guidance/094.html#4). Surgical masks do not protect against inhalation of small particles or droplet nuclei and should not be confused with particulate respirators that are recommended for protection against selected airborne infectious agents (e.g., Mycobacterium tuberculosis). # APPENDIX A: TYPE AND DURATION OF PRECAUTIONS RECOMMENDED FOR SELECTED INFECTIONS AND CONDITIONS # HAND HYGIENE Perform hand hygiene immediately after removing all PPE.
Centers for Disease Control and Prevention. Prevention and control of meningococcal disease and Meningococcal disease and college students: recommendations of the Advisory Committee on Immunization Practices (ACIP). MMWR 2000;49(No. RR-7):[inclusive page numbers].# INTRODUCTION Each year, 2,400-3,000 cases of meningococcal disease occur in the United States, resulting in a rate of 0.8-1.3 per 100,000 population (1)(2)(3). The case-fatality ratio for meningococcal disease is 10% (2 ), despite the continued sensitivity of meningococcus to many antibiotics, including penicillin (4 ). Meningococcal disease also causes substantial morbidity: 11%-19% of survivors have sequelae (e.g., neurologic disability, limb loss, and hearing loss [5,6 ]). During 1991-1998, the highest rate of meningococcal disease occurred among infants aged <1 year; however, the rate for persons aged 18-23 years was also higher than that for the general population (1.4 per 100,000) (CDC, National Electronic Telecommunications System for Surveillance, unpublished data). # BACKGROUND In the United States, 95%-97% of cases of meningococcal disease are sporadic; however, since 1991, the frequency of localized outbreaks has increased (7)(8). Most of these outbreaks have been caused by serogroup C. However, in the past 3 years, localized outbreaks caused by serogroup Y and B organisms have also been reported (8 ). The proportion of sporadic meningococcal cases caused by serogroup Y also increased from 2% during 1989-1991 to 30% during 1992-1996 (2,9 ). The proportion of cases caused by each serogroup varies by age group; more than half of cases among infants aged <1 year are caused by serogroup B, for which no vaccine is licensed or available in the United States (2,10 ). Persons who have certain medical conditions are at increased risk for developing meningococcal disease, particularly persons who have deficiencies in the terminal common complement pathway (C3, C5-9) (11 ). Antecedent viral infection, household crowding, chronic underlying illness, and both active and passive smoking also are associated with increased risk for meningococcal disease (12)(13)(14)(15)(16)(17)(18)(19). During outbreaks, bar or nightclub patronage and alcohol use have also been associated with higher risk for disease (20)(21)(22). In the United States, blacks and persons of low socioeconomic status have been consistently at higher risk for meningococcal disease (2,3,12,18 ). However, race and low socioeconomic status are likely risk markers, rather than risk factors, for this disease. A recent multi-state, case-control study, in which controls were matched to case-patients by age group, revealed that in a multivariable analysis (controlling for sex and education), active and passive smoking, recent respiratory illness, corticosteroid use, new residence, new school, Medicaid insurance, and household crowding were all associated with increased risk for meningococcal disease (13 ). Income and race were not associated with increased risk. Additional research is needed to identify groups at risk that could benefit from prevention efforts. # MENINGOCOCCAL POLYSACCHARIDE VACCINES The quadrivalent A, C, Y, W-135 vaccine (Menomune ® -A,C,Y,W-135, manufactured by Aventis Pasteur) is the formulation currently available in the United States (23 ). Each dose consists of 50 µg of each of the four purified bacterial capsular polysaccharides. Menomune ® is available in single-dose and 10-dose vials. (Fifty-dose vials are no longer available.)
# Primary Vaccination For both adults and children, vaccine is administered subcutaneously as a single, 0.5-ml dose. The vaccine can be administered at the same time as other vaccines but should be given at a different anatomic site. Protective levels of antibody are usually achieved within 7-10 days of vaccination. # Vaccine Immunogenicity and Efficacy The immunogenicity and clinical efficacy of the serogroups A and C meningococcal vaccines have been well established. The serogroup A polysaccharide induces antibody in some children as young as 3 months of age, although a response comparable with that occurring in adults is not achieved until age 4-5 years. The serogroup C component is poorly immunogenic in recipients aged <18-24 months (24,25 ). The serogroups A and C vaccines have demonstrated estimated clinical efficacies of ≥85% in school-aged children and adults and are useful in controlling outbreaks (26)(27)(28)(29). Serogroups Y and W-135 polysaccharides are safe and immunogenic in adults and in children aged >2 years (30)(31)(32); although clinical protection has not been documented, vaccination with these polysaccharides induces bactericidal antibody. The antibody responses to each of the four polysaccharides in the quadrivalent vaccine are serogroup-specific and independent. Reduced clinical efficacy has not been demonstrated among persons who have received multiple doses of vaccine. However, recent serologic studies have suggested that multiple doses of serogroup C polysaccharide may cause immunologic tolerance to the group C polysaccharide (33,34 ). # Duration of Protection In infants and children aged <5 years, measurable levels of antibodies against the group A and C polysaccharides decrease substantially during the first 3 years following a single dose of vaccine; in healthy adults, antibody levels also decrease, but antibodies are still detectable up to 10 years after vaccine administration (25,(35)(36)(37)(38). Similarly, although vaccine-induced clinical protection likely persists in school-aged children and adults for at least 3 years, the efficacy of the group A vaccine in children aged <5 years may decrease markedly within this period. In one study, efficacy declined from >90% to <10% 3 years after vaccination among children who were aged <4 years when vaccinated; efficacy was 67% among children who were ≥4 years of age at vaccination (39 ). # RECOMMENDATIONS FOR USE OF MENINGOCOCCAL VACCINE Current Advisory Committee on Immunization Practices (ACIP) guidelines (1 ) suggest that routine vaccination of civilians with the quadrivalent meningococcal polysaccharide vaccine is not recommended because of its relative ineffectiveness in children aged <2 years (the age group with the highest risk for sporadic disease) and because of its relatively short duration of protection. However, the vaccine is recommended for use in control of serogroup C meningococcal outbreaks. An outbreak is defined by the occurrence of three or more confirmed or probable cases of serogroup C meningococcal disease during a period of ≤3 months, with a resulting primary attack rate of at least 10 cases per 100,000 population. For calculation of this threshold, population-based rates are used and not age-specific attack rates, as have been calculated for college students. These recommendations are based on experience with serogroup C meningococcal outbreaks, but these principles may be applicable to outbreaks caused by the other vaccine-preventable meningococcal serogroups, including Y, W-135, and A.
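This outbreak definition is an operational threshold, and its arithmetic can be made explicit. The following sketch is illustrative only and is not part of the ACIP recommendations; the case count, time window, and community population are hypothetical values chosen for the example.

```python
# Illustrative sketch of the outbreak definition above; the case count and
# population figure are hypothetical, not taken from surveillance data.

def primary_attack_rate(cases: int, population: int) -> float:
    """Population-based attack rate per 100,000 (not age-specific)."""
    return cases / population * 100_000

def meets_outbreak_definition(cases: int, population: int,
                              window_months: float) -> bool:
    """>=3 confirmed/probable serogroup C cases within <=3 months, with a
    primary attack rate of at least 10 cases per 100,000 population."""
    return (cases >= 3
            and window_months <= 3
            and primary_attack_rate(cases, population) >= 10)

# Example: 3 serogroup C cases over 2 months in a community of 25,000.
print(primary_attack_rate(3, 25_000))            # 12.0 per 100,000
print(meets_outbreak_definition(3, 25_000, 2))   # True
```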
College freshmen, particularly those living in dormitories or residence halls, are at modestly increased risk for meningococcal disease compared with persons the same age who are not attending college. Therefore, ACIP has developed recommendations that address educating students and their parents about the risk for disease and about the vaccine so they can make individualized, informed decisions regarding vaccination. (See MMWR Vol. 49, RR-7, which can be referenced in the pages following this report.) Routine vaccination with the quadrivalent vaccine is also recommended for certain high-risk groups, including persons who have terminal complement component deficiencies and those who have anatomic or functional asplenia. Research, industrial, and clinical laboratory personnel who are exposed routinely to Neisseria meningitidis in solutions that may be aerosolized also should be considered for vaccination (1 ). Vaccination with the quadrivalent vaccine may benefit travelers to and U.S. citizens residing in countries in which N. meningitidis is hyperendemic or epidemic, particularly if contact with the local population will be prolonged. Epidemics of meningococcal disease are recurrent in that part of sub-Saharan Africa known as the "meningitis belt," which extends from Senegal in the West to Ethiopia in the East (40 ). Epidemics in the meningitis belt usually occur during the dry season (i.e., from December to June); thus, vaccination is recommended for travelers visiting this region during that time. Information concerning geographic areas for which vaccination is recommended can be obtained from international health clinics for travelers, state health departments, and CDC (telephone [404] 332-4559; internet http://www.cdc.gov/travel/). # Revaccination Revaccination may be indicated for persons at high risk for infection (e.g., persons residing in areas in which disease is epidemic), particularly for children who were first vaccinated when they were <4 years of age; such children should be considered for revaccination after 2-3 years if they remain at high risk. Although the need for revaccination of older children and adults has not been determined, antibody levels rapidly decline over 2-3 years, and if indications still exist for vaccination, revaccination may be considered 3-5 years after receipt of the initial dose (1 ). # Precautions and Contraindications Polysaccharide meningococcal vaccines (both A/C and A/C/Y/W-135) have been extensively used in mass vaccination programs as well as in the military and among international travelers. Adverse reactions to polysaccharide meningococcal vaccines are generally mild; the most frequent reaction is pain and redness at the injection site, lasting for 1-2 days. Estimates of the incidence of such local reactions have varied, ranging from 4% to 56% (41,42 ). Transient fever occurred in up to 5% of vaccinees in some studies and occurs more commonly in infants (24,43 ). Severe reactions to polysaccharide meningococcal vaccine are uncommon (24,32,41-48 ) (R. Ball, U.S. Food and Drug Administration, personal communication). Most studies report the rate of systemic allergic reactions (e.g., urticaria, wheezing, and rash) as 0.0-0.1 per 100,000 vaccine doses (24,48 ). Anaphylaxis has been documented in <0.1 per 100,000 vaccine doses (23,47 ). Neurological reactions (e.g., seizures, anesthesias, and paresthesias) are also infrequently observed (42,47 ). The Vaccine Adverse Events Reporting System (VAERS) is a passive surveillance system that detects adverse events that are temporally (but not necessarily causally) associated with vaccination, including adverse events that occur in military personnel. During 1991-1998, a total of 4,568,572 doses of polysaccharide meningococcal vaccine were distributed; 222 adverse events were reported, for a rate of 49 adverse events per million doses. In 1999, 42 reports of adverse events were received, but the total number of vaccine doses distributed in 1999 is not yet available (R. Ball, U.S. Food and Drug Administration, personal communication). In the United States from July 1990 through October 1999, a total of 264 adverse events (and no deaths) were reported.
Of these adverse events, 226 were categorized as "less serious," with fever, headache, dizziness, and injection-site reactions most commonly reported. Thirty-eight serious adverse events (i.e., those that require hospitalization, are life-threatening, or result in permanent disability) that were temporally associated with vaccination were reported. Serious injection-site reactions were reported in eight patients and allergic reactions in three patients. Four cases of Guillain-Barré Syndrome were reported in adults 7-16 days after receiving multiple vaccinations simultaneously, and one case of Guillain-Barré Syndrome was reported in a 9-year-old boy 32 days after receiving meningococcal vaccine alone. An additional seven patients reported serious nervous system abnormalities (e.g., convulsions, paresthesias, diplopia, and optic neuritis); all of these patients received multiple vaccinations simultaneously, making assessment of the role of meningococcal vaccine difficult. Of the 15 miscellaneous adverse events, only three occurred after meningococcal vaccine was administered alone. The minimal number of serious adverse events, coupled with the substantial amount of vaccine distributed (>4 million doses), indicates that the vaccine can be considered safe (R. Ball, U.S. Food and Drug Administration, personal communication). Studies of vaccination during pregnancy have not documented adverse effects among either pregnant women or newborns (49-51 ). Based on data from studies involving the use of meningococcal vaccines and other polysaccharide vaccines during pregnancy, altering meningococcal vaccination recommendations during pregnancy is unnecessary. # ANTIMICROBIAL CHEMOPROPHYLAXIS In the United States, the primary means for prevention of sporadic meningococcal disease is antimicrobial chemoprophylaxis of close contacts of infected persons (Table 1). Close contacts include a) household members, b) day care center contacts, and c) anyone directly exposed to the patient's oral secretions (e.g., through kissing, mouth-to-mouth resuscitation, endotracheal intubation, or endotracheal tube management). The attack rate for household contacts exposed to patients who have sporadic meningococcal disease is an estimated four cases per 1,000 persons exposed, which is 500-800 times greater than for the total population (52 ). Because the rate of secondary disease for close contacts is highest during the first few days after onset of disease in the index patient, antimicrobial chemoprophylaxis should be administered as soon as possible (ideally within 24 hours after identification of the index patient). Conversely, chemoprophylaxis administered >14 days after onset of illness in the index patient is probably of limited or no value. Oropharyngeal or nasopharyngeal cultures are not helpful in determining the need for chemoprophylaxis and may unnecessarily delay institution of this preventive measure. Rifampin, ciprofloxacin, and ceftriaxone are all 90%-95% effective in reducing nasopharyngeal carriage of N. meningitidis and are all acceptable alternatives for chemoprophylaxis (53)(54)(55)(56). Systemic antimicrobial therapy of meningococcal disease with agents other than ceftriaxone or other third-generation cephalosporins may not reliably eradicate nasopharyngeal carriage of N. meningitidis. If other agents have been used for treatment, the index patient should receive chemoprophylactic antibiotics for eradication of nasopharyngeal carriage before being discharged from the hospital (57 ).
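Two of the figures above are straightforward rate conversions. The sketch below is illustrative arithmetic only; the background rate used in the relative-risk comparison is back-calculated for the example and is not a value stated in this report.

```python
# Illustrative arithmetic only; reproduces figures quoted in the text above.

def rate(cases: float, denominator: float, per: float) -> float:
    """Convert a case count to a rate per `per` units of denominator."""
    return cases / denominator * per

# VAERS: 222 reports among 4,568,572 distributed doses during 1991-1998.
print(round(rate(222, 4_568_572, 1_000_000)))   # ~49 per million doses

# Household contacts: ~4 secondary cases per 1,000 exposed persons.
household = rate(4, 1_000, 100_000)             # 400 per 100,000
# The quoted 500-800-fold excess is consistent with a total-population rate
# of roughly 0.5-0.8 per 100,000 (back-calculated here for illustration).
for background in (0.8, 0.5):
    print(round(household / background))        # 500, then 800
```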
# PROSPECTS FOR IMPROVED MENINGOCOCCAL VACCINES Serogroup A, C, Y, and W-135 meningococcal polysaccharides have been chemically conjugated to protein carriers. These meningococcal conjugate vaccines provoke a T-cell-dependent response that induces a stronger immune response in infants, primes immunologic memory, and leads to a booster response to subsequent doses. These vaccines are expected to provide a longer duration of immunity than polysaccharide vaccines, even when administered in an infant series, and may provide herd immunity through protection from nasopharyngeal carriage. Clinical trials evaluating these vaccines are ongoing (58)(59)(60). When compared with polysaccharide vaccine, conjugated A and C meningococcal vaccines in infants and toddlers have resulted in similar side effects but an improved immune response. Prior vaccination with group C polysaccharide likely does not prevent induction of memory by a subsequent dose of conjugate vaccine (61 ). In late 1999, conjugate C meningococcal vaccines were introduced in the United Kingdom, where rates of meningococcal disease are approximately 2 per 100,000 population, and 30%-40% of cases are caused by serogroup C (62 ). In phase I of this program, infants are being vaccinated at 2, 3, and 4 months concurrently with DTP, Hib, and polio vaccines. Children aged 4-13 months are receiving "catch-up" vaccinations. Children aged 15-17 years are receiving one dose of conjugate C vaccine, and entering college students are receiving one dose of bivalent A/C polysaccharide vaccine. In phase II, scheduled to start in June 2000, a dose of conjugate vaccine will be administered to children aged 14 months-14 years and to persons aged 18-20 years who are not enrolled in college (62 ). Conjugate meningococcal vaccines should be available in the United States within the next 2-4 years. In the interim, the polysaccharide vaccine should not be incorporated into the routine childhood immunization schedule, because the currently available meningococcal polysaccharide vaccines provide limited efficacy of short duration in young children (39 ), in whom the risk for disease is highest (2,3 ). Because the group B polysaccharide is not immunogenic in humans, immunization strategies have focused primarily on noncapsular antigens (10,63 ). Several of these vaccines, developed from specific strains of serogroup B meningococci, have been safe, immunogenic, and efficacious among children and adults and have been used to control outbreaks in South America and Scandinavia (64)(65)(66)(67)(68). Strain-specific differences in outer-membrane proteins suggest that these vaccines may not provide protection against all serogroup B meningococci (69 ). No serogroup B vaccine is currently licensed or available in the United States. # CONCLUSIONS N. meningitidis is a leading cause of bacterial meningitis and sepsis in older children and young adults in the United States. Antimicrobial chemoprophylaxis of close contacts of persons who have sporadic meningococcal disease is the primary means for prevention of meningococcal disease in the United States. The quadrivalent polysaccharide meningococcal vaccine (which protects against serogroups A, C, Y, and W-135) is recommended for control of serogroup C meningococcal disease outbreaks and for use among persons in certain high-risk groups. Travelers to countries in which disease is hyperendemic or epidemic may benefit from vaccination.
In addition, college freshmen, especially those who live in dormitories, should be educated about meningococcal disease and the vaccine so that they can make an informed decision about vaccination. Conjugate C meningococcal vaccines were recently introduced into routine childhood immunization schedules in the United Kingdom. These vaccines should be available in the United States within 2-4 years, offering a better tool for control and prevention of meningococcal disease. # Meningococcal Disease and College Students # Recommendations of the Advisory Committee on Immunization Practices (ACIP) INTRODUCTION Neisseria meningitidis causes both sporadic disease and outbreaks. As a result of the control of Haemophilus influenzae type b infections, N. meningitidis has become the leading cause of bacterial meningitis in children and young adults in the United States (1 ). Outbreaks of meningococcal disease were rare in the United States in the 1980s; however, since 1991, the frequency of localized outbreaks has increased (2 ). From July 1994 through July 1997, 42 meningococcal outbreaks were reported, four of which occurred at colleges (3 ). However, outbreaks continue to represent <3% of total U.S. cases (3 ). Rates of meningococcal disease remain highest for infants, but in the past decade, rates have increased among adolescents and young adults (4 ). During 1994-1998, approximately two thirds of cases among persons aged 18-23 years were caused by serogroups C, Y, or W-135 and therefore were potentially preventable with available vaccines (5 ) (CDC, unpublished data) (Figure 1). Although the quadrivalent meningococcal polysaccharide vaccine is safe and efficacious (5,6 ), decisions about whom to target for vaccination require understanding of the groups at risk, the burden of disease, and the potential benefits of vaccination. New data are available regarding the risk for meningococcal disease in college students. This report reviews these data and provides medical professionals with guidelines concerning meningococcal disease and college students. # BACKGROUND Meningococcal Disease in the Military Military recruits and college freshmen have several common characteristics (e.g., age, diverse geographic backgrounds, and crowded living conditions). Therefore, data obtained from recruits have been used to evaluate meningococcal disease and vaccine among college freshmen. Before 1971, rates of meningococcal disease were elevated among U.S. military recruits. Outbreaks frequently followed large-scale mobilizations, and recruits in their (11 ), and by Fall 1982, all recruits were receiving the quadrivalent polysaccharide vaccine (7 ). However, rates of meningococcal disease in U.S. Army personnel declined before the 1971 vaccination campaigns (7 ), suggesting that smaller recruit populations at training installations and the natural periodicity of outbreaks may have contributed to the decline in disease. Rates of meningococcal disease remain low in the military, and large outbreaks no longer occur. Since 1990, records of all hospitalizations of active-duty service members in military hospitals worldwide have been integrated with military personnel records in the Defense Medical Surveillance System (DMSS). During 1990-1998, the overall rate of hospitalizations from meningococcal disease among enlisted, active-duty service members was 0.51 per 100,000 person-years (J. Brundage, DMSS Army Medical Surveillance Activity, personal communication).
Approximately 180,000 military recruits receive a single dose of quadrivalent polysaccharide meningococcal vaccine annually. Revaccination is only indicated when military personnel are traveling to countries in which N. meningitidis is hyperendemic or epidemic (D. Trump, personal communication). Before 1999, students reporting to two of the U.S. military academies routinely received meningococcal vaccine. Last year, the other academies initiated meningococcal vaccine programs. # MENINGOCOCCAL DISEASE AND COLLEGE STUDENTS Four recent studies provide data concerning the risk for sporadic meningococcal disease among college students (Table 1) (12)(13)(14)(15). The earliest of these studies was conducted during the 1990-1991 and 1991-1992 school years. A questionnaire designed to evaluate risk factors for meningococcal disease among college students was sent to 1,900 universities, resulting in a 38% response rate (12 ). Forty-three cases of meningococcal disease were reported during the 2 years from colleges with a total enrollment of 4,393,744 students, for a low overall incidence of 1.0 per 100,000 population per year. However, cases of meningococcal disease occurred 9-23 times more frequently in students residing in dormitories than in those residing in other types of accommodations. The low response rate and the inability of the study to control for other risk factors (e.g., freshman status) make these results difficult to interpret. In a retrospective cohort study conducted in Maryland for the period 1992-1997, 67 cases of meningococcal disease among persons aged 16-30 years were identified by active, laboratory-based surveillance (13 ). Of those cases, 14 were among students attending Maryland colleges, and 11 were among those in 4-year colleges. The overall incidence of meningococcal disease in Maryland college students was similar to the incidence in the U.S. population of persons the same age (1.74/100,000 vs. 1.44/100,000, respectively); however, rates of disease were elevated among students living in dormitories compared with students living off-campus (3.2/100,000 vs. 0.96/100,000, p=0.05). U.S. surveillance for meningococcal disease in college students was initiated in 1998; from September 1998 through August 1999, 90 cases of meningococcal disease were reported to CDC (14 ). These cases represent approximately 3% of the total cases of meningococcal disease that occur each year in the United States. Eighty-seven (97%) cases occurred in undergraduate students, and 40 (44%) occurred among the 2.27 million freshman students entering college each year (16 ). Among undergraduates, of the 71 (82%) isolates for which serogroup information was available, 35 (49%) were serogroup C, 17 (24%) were serogroup B, 15 (21%) were serogroup Y, and one (1%) was serogroup W-135. Eight (9%) students died. Of the five students who died for whom serogroup information was available, four had serogroup C isolates and one had serogroup Y. U.S. surveillance data from the 1998-1999 school year suggest that the overall rate of meningococcal disease among undergraduate college students is lower than the rate among persons aged 18-23 years who are not enrolled in college (Table 2) (0.7 vs. 1.5/100,000, respectively) (14,16 ). However, rates were higher among specific subgroups of college students.
Among the approximately 590,000 freshmen who live in dormitories (17 ), the rate of meningococcal disease was 4.6/100,000, higher than any age group in the population other than children aged <2 years, but lower than the threshold of 10/100,000 recommended for initiating meningococcal vaccination campaigns (6 ). Of the 90 college students who had meningococcal disease during the 1998-1999 school year, 50 were enrolled in a case-control study and matched to 148 controls by school, sex, and undergraduate vs. graduate status (14 ). In a multivariable analysis, freshmen living in dormitories were at higher risk for meningococcal disease. In addition, white race, radiator heat, and recent upper respiratory infection were associated with disease. In contrast to the United States, overall rates of meningococcal disease in the United Kingdom are higher among university students compared with non-students of similar age (15 ). From September 1994 through March 1997, university students had an increased annual rate of meningococcal disease (13.2/100,000) compared with nonstudents of similar age in the same health districts (5.5/100,000) and in those health districts without universities (3.7/100,000). As in the United States, regression analysis revealed that "catered hall accommodations," the U.K. equivalent of dormitories, were the main risk factor. Higher rates of disease were observed at universities providing catered hall accommodations for >10% of their student population compared with those providing such housing for <10% of students (15.3/100,000 vs. 5.9/100,000). The increased rate of disease among university students has prompted the United Kingdom to initiate routine vaccination of incoming university students with a bivalent A/C polysaccharide vaccine as part of a new vaccination program (see MMWR 2000; Vol.49, No. RR-6 which can be referenced in the pages preceding this report) (18 ). # MENINGOCOCCAL VACCINE AND COLLEGE STUDENTS On September 30, 1997, the American College Health Association (ACHA), which represents about half of colleges that have student health services, released a statement recommending that "college health services [take] a more proactive role in alerting students and their parents about the dangers of meningococcal disease," that "college students consider vaccination against potentially fatal meningococcal disease," and that "colleges and universities ensure all students have access to a vaccination program for those who want to be vaccinated" (Dr. MarJeanne Collins, Chairman, ACHA Vaccine Preventable Diseases Task Force, personal communication). Parent and college student advocates have also encouraged more widespread use of meningococcal vaccine in college students. In a joint study by ACHA and CDC, surveys were sent to 1,200 ACHA-member schools; of 691 responding schools, 57 (8%) reported that preexposure meningococcal vaccination campaigns had been conducted on their campus since September 1997. A median of 32 students were vaccinated at each school (range: 1-2,300) (J. Capparella, unpublished data). During the 1998-1999 school year, 3%-5% of 148 students enrolled in a case-control study reported receiving prophylactic meningococcal vaccination (14 ). Before the 1999 fall semester, many schools mailed information packets to incoming freshmen; data are not yet available regarding the proportion of students who have been vaccinated.
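The subgroup rates above follow directly from the counts and denominators quoted in the surveillance data. The sketch below is an illustrative back-calculation only, not output from the surveillance system itself.

```python
# Illustrative back-calculation from figures quoted above; counts are as
# reported for the 1998-1999 school year.

def per_100k(cases: float, population: float) -> float:
    return cases / population * 100_000

# 40 freshman cases among ~2.27 million entering freshmen:
print(round(per_100k(40, 2_270_000), 1))   # ~1.8 per 100,000

# A 4.6/100,000 rate among ~590,000 dormitory freshmen implies that roughly
# 27 of those 40 freshman cases occurred among dormitory residents.
print(round(4.6 / 100_000 * 590_000))      # ~27 cases
```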
# COST-EFFECTIVENESS OF MENINGOCOCCAL VACCINE IN COLLEGE STUDENTS Best- and worst-case scenarios were evaluated by varying the cost of vaccine and administration (range: $54-$88), costs per hospitalization ($10,924-$24,030), the value of premature death based on lifetime productivity ($1.3-$4.8 million), the cost of vaccine side effects ($3,500-$12,270 per one million doses), and the average cost of treating a case of sequelae ($0-$1,476). Vaccination coverage (60% and 100%) and vaccine efficacy (80% and 90%) were also varied for evaluation purposes. Vaccination of freshmen who live in dormitories would result in the administration of approximately 300,000-500,000 doses of vaccine each year, preventing 15-30 cases of meningococcal disease and one to three deaths each year. The cost per case prevented would be $600,000-$1.8 million, at a cost per death prevented of $7 million to $20 million. Vaccination of all freshmen would result in the administration of approximately 1.4-2.3 million doses of vaccine each year, preventing 37-69 cases of meningococcal disease and two to four deaths caused by meningococcal disease each year. The cost per case prevented would be $1.4-$2.9 million, at a cost per death prevented of $22 million to $48 million. These data are similar to data derived from previous studies (19 ). They suggest that for society as a whole, vaccination of college students is unlikely to be cost-effective (Scott et al, unpublished data).
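The cost-per-outcome figures above are ratios of program cost to cases or deaths prevented. The sketch below is a deliberately simplified illustration: it ignores the offsetting treatment savings and other inputs of the actual analysis, so its crude bounds only roughly bracket the published $600,000-$1.8 million range.

```python
# Simplified illustration of the cost-per-outcome arithmetic; ignores the
# treatment-cost offsets used in the actual analysis, so these are crude
# bounds rather than the published estimates.

def cost_per_outcome(doses: float, cost_per_dose: float,
                     outcomes_prevented: float) -> float:
    """Gross program cost divided by outcomes (cases or deaths) prevented."""
    return doses * cost_per_dose / outcomes_prevented

# Freshmen in dormitories: 300,000-500,000 doses at $54-$88 per dose,
# preventing 15-30 cases per year.
best = cost_per_outcome(300_000, 54, 30)     # ~$540,000 per case prevented
worst = cost_per_outcome(500_000, 88, 15)    # ~$2.9 million per case prevented
print(f"${best:,.0f} to ${worst:,.0f} per case prevented (gross)")
```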
# RECOMMENDATIONS FOR USE OF MENINGOCOCCAL POLYSACCHARIDE VACCINE IN COLLEGE STUDENTS College freshmen, particularly those who live in dormitories, are at modestly increased risk for meningococcal disease relative to other persons their age. Vaccination with the currently available quadrivalent meningococcal polysaccharide vaccine will decrease the risk for meningococcal disease among such persons. Vaccination does not eliminate risk because a) the vaccine confers no protection against serogroup B disease and b) although the vaccine is highly effective against serogroups C, Y, W-135, and A, efficacy is <100%. The risk for meningococcal disease among college students is low; therefore, vaccination of all college students, all freshmen, or only freshmen who live in dormitories or residence halls is not likely to be cost-effective for society as a whole. Thus, ACIP is issuing the following recommendations regarding the use of meningococcal polysaccharide vaccines for college students. - Providers of medical care to incoming and current college freshmen, particularly those who plan to or already live in dormitories and residence halls, should, during routine medical care, inform these students and their parents about meningococcal disease and the benefits of vaccination. ACIP does not consider the level of increased risk among freshmen sufficient to warrant any specific changes in their living situations. - College freshmen who want to reduce their risk for meningococcal disease should either be administered vaccine (by a doctor's office or student health service) or directed to a site where vaccine is available. - The risk for meningococcal disease among non-freshmen college students is similar to that for the general population. However, the vaccine is safe and efficacious and therefore can be provided to non-freshmen undergraduates who want to reduce their risk for meningococcal disease. - Colleges should inform incoming and/or current freshmen, particularly those who plan to live or already live in dormitories or residence halls, about meningococcal disease and the availability of a safe and effective vaccine. - Public health agencies should provide colleges and health-care providers with information about meningococcal disease and the vaccine as well as information regarding how to obtain vaccine. # Additional Considerations about Vaccination of College Students Although the need for revaccination of older children has not been determined, antibody levels decline rapidly over 2-3 years (6 ). Revaccination may be considered for freshmen who were vaccinated more than 3-5 years earlier (5 ). Routine revaccination of college students who were vaccinated as freshmen is not indicated. College students who are at higher risk for meningococcal disease because of a) underlying immune deficiencies or b) travel to countries in which N. meningitidis is hyperendemic or epidemic (i.e., the meningitis belt of sub-Saharan Africa) should be vaccinated (6 ). College students who are employed as research, industrial, and clinical laboratory personnel and who are routinely exposed to N. meningitidis in solutions that may be aerosolized should be considered for vaccination (6 ). No data are available regarding whether other closed civilian populations with characteristics similar to those of college freshmen living in dormitories (e.g., preparatory school students) are at the same increased risk for disease. Prevention efforts should focus on groups in whom higher risk has been documented. # CONCLUSIONS College freshmen, especially those who live in dormitories, are at a modestly increased risk for meningococcal disease compared with other persons of the same age, and vaccination with the currently available quadrivalent meningococcal polysaccharide vaccine will decrease their risk for meningococcal disease. Continued surveillance is necessary to evaluate the impact of these recommendations, which have already prompted many universities and clinicians to offer vaccine to college freshmen. Consultation on the use of these recommendations or other issues regarding meningococcal disease is available from the Meningitis and Special Pathogens Branch, Division of Bacterial and Mycotic Diseases, National Center for Infectious Diseases, CDC (telephone [404] 639-3158).
Centers for Disease Control and Prevention. Prevention and control of meningococcal disease and Meningococcal disease and college students: recommendations of the Advisory Committee on Immunization Practices (ACIP). MMWR 2000;49(No. RR-7):[inclusive page numbers].# INTRODUCTION Each year, 2,400-3,000 cases of meningococcal disease occur in the United States, resulting in a rate of 0.8-1.3 per 100,000 population (1)(2)(3). The case-fatality ratio for meningococcal disease is 10% (2 ), despite the continued sensitivity of meningococcus to many antibiotics, including penicillin (4 ). Meningococcal disease also causes substantial morbidity: 11%-19% of survivors have sequelae (e.g., neurologic disability, limb loss, and hearing loss [5,6 ]). During 1991-1998, the highest rate of meningococcal disease occurred among infants aged <1 year; however, the rate for persons aged 18-23 years was also higher than that for the general population (1.4 per 100,000) (CDC, National Electronic Telecommunications System for Surveillance, unpublished data). # BACKGOUND In the United States, 95%-97% of cases of meningococcal disease are sporadic; however, since 1991, the frequency of localized outbreaks has increased (7)(8). Most of these outbreaks have been caused by serogroup C. However, in the past 3 years, localized outbreaks caused by serogroup Y and B organisms have also been reported (8 ). The proportion of sporadic meningococcal cases caused by serogroup Y also increased from 2% during 1989-1991 to 30% during 1992-1996 (2,9 ). The proportion of cases caused by each serogroup varies by age group; more than half of cases among infants aged <1 year are caused by serogroup B, for which no vaccine is licensed or available in the United States (2,10 ). Persons who have certain medical conditions are at increased risk for developing meningococcal disease, particularly persons who have deficiencies in the terminal common complement pathway (C3, C5-9) (11 ). Antecedent viral infection, household crowding, chronic underlying illness, and both active and passive smoking also are associated with increased risk for meningococcal disease (12)(13)(14)(15)(16)(17)(18)(19). During outbreaks, bar or nightclub patronage and alcohol use have also been associated with higher risk for disease (20)(21)(22). In the United States, blacks and persons of low socioeconomic status have been consistently at higher risk for meningococcal disease (2,3,12,18 ). However, race and low socioeconomic status are likely risk markers, rather than risk factors, for this disease. A recent multi-state, case-control study, in which controls were matched to casepatients by age group, revealed that in a multivariable analysis (controlling for sex and education), active and passive smoking, recent respiratory illness, corticosteroid use, new residence, new school, Medicaid insurance, and household crowding were all associated with increased risk for meningococcal disease (13 ). Income and race were not associated with increased risk. Additional research is needed to identify groups at risk that could benefit from prevention efforts. # MENINGOCOCCAL POLYSACCHARIDE VACCINES The quadrivalent A, C, Y, W-135 vaccine (Menomune ® -A,C,Y,W-135, manufactured by Aventis Pasteur) is the formulation currently available in the United States (23 ). Each dose consists of 50 µg of the four purified bacterial capsular polysaccharides. Menomune ® is available in single-dose and 10-dose vials. (Fifty-dose vials are no longer available.) 
# Primary Vaccination For both adults and children, vaccine is administered subcutaneously as a single, 0.5-ml dose. The vaccine can be administered at the same time as other vaccines but should be given at a different anatomic site. Protective levels of antibody are usually achieved within 7-10 days of vaccination. # Vaccine Immunogenicity and Efficacy The immunogenicity and clinical efficacy of the serogroups A and C meningococcal vaccines have been well established. The serogroup A polysaccharide induces antibody in some children as young as 3 months of age, although a response comparable with that occurring in adults is not achieved until age 4-5 years. The serogroup C component is poorly immunogenic in recipients aged <18-24 months (24,25 ). The serogroups A and C vaccines have demonstrated estimated clinical efficacies of ≥85% in school-aged children and adults and are useful in controlling outbreaks (26)(27)(28)(29). Serogroups Y and W-135 polysaccharides are safe and immunogenic in adults and in children aged >2 years (30)(31)(32); although clinical protection has not been documented, vaccination with these polysaccharides induces bactericidal antibody. The antibody responses to each of the four polysaccharides in the quadrivalent vaccine are serogroupspecific and independent. Reduced clinical efficacy has not been demonstrated among persons who have received multiple doses of vaccine. However, recent serologic studies have suggested that multiple doses of serogroup C polysaccharide may cause immunologic tolerance to the group C polysaccharide (33,34 ). # Duration of Protection In infants and children aged <5 years, measurable levels of antibodies against the group A and C polysaccharides decrease substantially during the first 3 years following a single dose of vaccine; in healthy adults, antibody levels also decrease, but antibodies are still detectable up to 10 years after vaccine administration (25,(35)(36)(37)(38). Similarly, although vaccine-induced clinical protection likely persists in school-aged children and adults for at least 3 years, the efficacy of the group A vaccine in children aged <5 years may decrease markedly within this period. In one study, efficacy declined from >90% to <10% 3 years after vaccination among children who were aged <4 years when vaccinated; efficacy was 67% among children who were ≥4 years of age at vaccination (39 ). # RECOMMENDATIONS FOR USE OF MENINGOCOCCAL VACCINE Current Advisory Committee on Immunization Practices (ACIP) guidelines (1 ) suggest that routine vaccination of civilians with the quadrivalent meningococcal polysaccharide vaccine is not recommended because of its relative ineffectiveness in children aged <2 years (the age group with the highest risk for sporadic disease) and because of its relatively short duration of protection. However, the vaccine is recommended for use in control of serogroup C meningococcal outbreaks. An outbreak is defined by the occurrence of three or more confirmed or probable cases of serogroup C meningococcal disease during a period of ≤3 months, with a resulting primary attack rate of at least 10 cases per 100,000 population. For calculation of this threshold, population-based rates are used and not age-specific attack rates, as have been calculated for college students. These recommendations are based on experience with serogroup C meningococcal outbreaks, but these principles may be applicable to outbreaks caused by the other vaccine-preventable meningococcal serogroups, including Y, W-135, and A. 
College freshmen, particularly those living in dormitories or residence halls, are at modestly increased risk for meningococcal disease compared with persons the same age who are not attending college. Therefore, ACIP has developed recommendations that address educating students and their parents about the risk for disease and about the vaccine so they can make individualized, informed decisions regarding vaccination. (See MMWR Vol. 49, RR-7, which can be referenced in the pages following this report.) Routine vaccination with the quadrivalent vaccine is also recommended for certain high-risk groups, including persons who have terminal complement component deficiencies and those who have anatomic or functional asplenia. Research, industrial, and clinical laboratory personnel who are exposed routinely to Neisseria meningitidis in solutions that may be aerosolized also should be considered for vaccination (1 ). Vaccination with the quadrivalent vaccine may benefit travelers to and U.S. citizens residing in countries in which N. meningitidis is hyperendemic or epidemic, particularly if contact with the local population will be prolonged. Epidemics of meningococcal disease are recurrent in that part of sub-Saharan Africa known as the "meningitis belt," which extends from Senegal in the West to Ethiopia in the East (40 ). Epidemics in the meningitis belt usually occur during the dry season (i.e., from December to June); thus, vaccination is recommended for travelers visiting this region during that time. Information concerning geographic areas for which vaccination is recommended can be obtained from international health clinics for travelers, state health departments, and CDC (telephone [404] 332-4559; internet http://www.cdc.gov/travel/). # Revaccination Revaccination may be indicated for persons at high risk for infection (e.g., persons residing in areas in which disease is epidemic), particularly for children who were first vaccinated when they were <4 years of age; such children should be considered for revaccination after 2-3 years if they remain at high risk. Although the need for revaccination of older children and adults has not been determined, antibody levels rapidly decline over 2-3 years, and if indications still exist for vaccination, revaccination may be considered 3-5 years after receipt of the initial dose (1 ). # Precautions and Contraindications Polysaccharide meningococcal vaccines (both A/C and A/C/Y/W-135) have been extensively used in mass vaccination programs as well as in the military and among international travelers. Adverse reactions to polysaccharide meningococcal vaccines are generally mild; the most frequent reaction is pain and redness at the injection site, lasting for 1-2 days. Estimates of the incidence of such local reactions have varied, ranging from 4% to 56% (41,42 ). Transient fever occurred in up to 5% of vaccinees in some studies and occurs more commonly in infants (24,43 ). Severe reactions to polysaccharide meningococcal vaccine are uncommon (24,32,41-48 ) (R. Ball, U.S. Food and Drug Administration, personal communication). Most studies report the rate of systemic allergic reactions (e.g., urticaria, wheezing, and rash) as 0.0-0.1 per 100,000 vaccine doses (24,48 ). Anaphylaxis has been documented in <0.1 per 100,000 vaccine doses (23,47 ). Neurological reactions (e.g., seizures, anesthesias, and paresthesias) are also infrequently observed (42,47 ). 
The Vaccine Adverse Events Reporting System (VAERS) is a passive surveillance system that detects adverse events that are temporally (but not necessarily causally) associated with vaccination, including adverse events that occur in military personnel. During 1991-1998, a total of 4,568,572 doses of polysaccharide meningococcal vaccine were distributed; 222 adverse events were reported for a rate of 49 adverse events per million doses. In 1999, 42 reports of adverse events were received, but the total number of vaccine doses distributed in 1999 is not yet available (R. Ball, U.S. Food and Drug Administration, personal communication). In the United States from July 1990 through October 1999, a total of 264 adverse events (and no deaths) were reported. Of these adverse events, 226 were categorized as "less serious," with fever, headache, dizziness, and injection-site reactions most commonly reported. Thirty-eight serious adverse events (i.e., those that require hospitalization, are life-threatening, or result in permanent disability) that were temporally associated with vaccination were reported. Serious injection site reactions were reported in eight patients and allergic reactions in three patients. Four cases of Guillain-Barré Syndrome were reported in adults 7-16 days after receiving multiple vaccinations simultaneously, and one case of Guillain-Barré Syndrome was reported in a 9-year-old boy 32 days after receiving meningococcal vaccine alone. An additional seven patients reported serious nervous system abnormalities (e.g., convulsions, paresthesias, diploplia, and optic neuritis); all of these patients received multiple vaccinations simultaneously, making assessment of the role of meningococcal vaccine difficult. Of the 15 miscelleneous adverse events, only three occurred after meningococcal vaccine was administered alone. The minimal number of serious adverse events coupled with the substantial amount of vaccine distributed (>4 million doses) indicate that the vaccine can be considered safe (R. Ball, U.S. Food and Drug Administration, personal communication). Studies of vaccination during pregnancy have not documented adverse effects among either pregnant women or newborns (49-51 ). Based on data from studies involving the use of meningococcal vaccines and other polysaccharide vaccines during pregnancy, altering meningococcal vaccination recommendations during pregnancy is unnecessary. # ANTIMICROBIAL CHEMOPROPHYLAXIS In the United States, the primary means for prevention of sporadic meningococcal disease is antimicrobial chemoprophylaxis of close contacts of infected persons (Table 1). Close contacts include a) household members, b) day care center contacts, and c) anyone directly exposed to the patient's oral secretions (e.g., through kissing, mouth-to-mouth resuscitation, endotracheal intubation, or endotracheal tube management). The attack rate for household contacts exposed to patients who have sporadic meningococcal disease is an estimated four cases per 1,000 persons exposed, which is 500-800 times greater than for the total population (52 ). Because the rate of secondary disease for close contacts is highest during the first few days after onset of disease in the index patient, antimicrobial chemoprophylaxis should be administered as soon as possible (ideally within 24 hours after identification of the index patient). Conversely, chemoprophylaxis administered >14 days after onset of illness in the index patient is probably of limited or no value. 
Oropharyngeal or nasopharyngeal cultures are not helpful in determining the need for chemoprophylaxis and may unnecessarily delay institution of this preventive measure. Rifampin, ciprofloxacin, and ceftriaxone are all 90%-95% effective in reducing nasopharyngeal carriage of N. meningitidis and are all acceptable alternatives for chemoprophylaxis (53)(54)(55)(56). Systemic antimicrobial therapy of meningococcal disease with agents other than ceftriaxone or other third-generation cephalosporins may not reliably eradicate nasopharyngeal carriage of N. meningitidis. If other agents have been used for treatment, the index patient should receive chemoprophylactic antibiotics for eradication of nasopharyngeal carriage before being discharged from the hospital (57 ). # PROSPECTS FOR IMPROVED MENINGOCOCCAL VACCINES Serogroup A, C, Y, and W-135 meningococcal polysaccharides have been chemically conjugated to protein carriers. These meningococcal conjugate vaccines provoke a T-cell-dependent response that induces a stronger immune response in infants, primes immunologic memory, and leads to booster response to subsequent doses. These vaccines are expected to provide a longer duration of immunity than polysaccharides, even when administered in an infant series, and may provide herd immunity through protection from nasopharyngeal carriage. Clinical trials evaluating these vaccines are ongoing (58)(59)(60). When compared with polysaccharide vaccine, conjugated A and C meningococcal vaccines in infants and toddlers have resulted in similar side effects but improved immune response. Prior vaccination with group C polysaccharide likely does not prevent induction of memory by a subsequent dose of conjugate vaccine (61 ). In late 1999, conjugate C meningococcal vaccines were introduced in the United Kingdom, where rates of meningococcal disease are approximately 2 per 100,000 population, and 30%-40% of cases are caused by serogroup C (62 ). In phase I of this program, infants are being vaccinated at 2, 3, and 4 months concurrently with DTP, Hib, and polio vaccines. Children aged 4-13 months are receiving "catch-up" vaccinations. Children aged 15-17 years are receiving one dose of conjugate C vaccine, and entering college students are receiving one dose of bivalent A/C polysaccharide vaccine. In phase II, scheduled to start in June 2000, a dose of conjugate vaccine will be administered to children aged 14 months-14 years and to persons aged 18-20 years who are not enrolled in college (62 ). Conjugate meningococcal vaccines should be available in the United States within the next 2-4 years. In the interim, the polysaccharide vaccine should not be incorporated into the routine childhood immunization schedule, because the currently available meningococcal polysaccharide vaccines provide limited efficacy of short duration in young children (39 ), in whom the risk for disease is highest (2,3 ). Because the group B polysaccharide is not immunogenic in humans, immunization strategies have focused primarily on noncapsular antigens (10,63 ). Several of these vaccines, developed from specific strains of serogroup B meningococci, have been safe, immunogenic, and efficacious among children and adults and have been used to control outbreaks in South America and Scandinavia (64)(65)(66)(67)(68). Strain-specific differences in outer-membrane proteins suggest that these vaccines may not provide protection against all serogroup B meningococci (69 ). 
No serogroup B vaccine is currently licensed or available in the United States. # CONCLUSIONS N. meningitidis is a leading cause of bacterial meningitis and sepsis in older children and young adults in the United States. Antimicrobial chemoprophylaxis of close contacts of persons who have sporadic meningococcal disease is the primary means for prevention of meningococcal disease in the United States. The quadrivalent polysaccharide meningococcal vaccine (which protects against serogroups A, C, Y, and W-135) is recommended for control of serogroup C meningococcal disease outbreaks and for use among persons in certain high-risk groups. Travelers to countries in which disease is hyperendemic or epidemic may benefit from vaccination. In addition, college freshmen, especially those who live in dormitories, should be educated about meningococcal disease and the vaccine so that they can make an educated decision about vaccination. Conjugate C meningococcal vaccines were recently introduced into routine childhood immunization schedules in the United Kingdom. These vaccines should be available in the United States within 2-4 years, offering a better tool for control and prevention of meningococcal disease. # Meningococcal Disease and College Students # Recommendations of the Advisory Committee on Immunization Practices (ACIP) INTRODUCTION Neisseria meningitidis causes both sporadic disease and outbreaks. As a result of the control of Haemophilus influenzae type b infections, N. meningitidis has become the leading cause of bacterial meningitis in children and young adults in the United States (1 ). Outbreaks of meningococcal disease were rare in the United States in the 1980s; however, since 1991, the frequency of localized outbreaks has increased (2 ). From July 1994 through July 1997, 42 meningococcal outbreaks were reported, four of which occurred at colleges (3 ). However, outbreaks continue to represent <3% of total U.S. cases (3 ). Rates of meningococcal disease remain highest for infants, but in the past decade, rates have increased among adolescents and young adults (4 ). During 1994-1998, approximately two thirds of cases among persons aged 18-23 years were caused by serogroups C, Y, or W135 and therefore were potentially preventable with available vaccines (5 ) (CDC, unpublished data) (Figure 1). Although the quadrivalent meningococcal polysaccharide vaccine is safe and efficacious (5,6 ), decisions about who to target for vaccination require understanding of the groups at risk, the burden of disease, and the potential benefits of vaccination. New data are available regarding the risk for meningococcal disease in college students. This report reviews these data and provides medical professionals with guidelines concerning meningococcal disease and college students. # BACKGROUND Meningococcal Disease in the Military Military recruits and college freshmen have several common characteristics (e.g., age, diverse geographic backgrounds, and crowded living conditions). Therefore, data obtained from recruits have been used to evaluate meningococcal disease and vaccine among college freshmen. Before 1971, rates of meningococcal disease were elevated among U.S. military recruits. Outbreaks frequently followed large-scale mobilizations, and recruits in their (11 ), and by Fall 1982, all recruits were receiving the quadrivalent polysaccharide vaccine (7 ). However, rates of meningococcal disease in U.S. 
Army personnel declined before the 1971 vaccination campaigns (7), suggesting that smaller recruit populations at training installations and the natural periodicity of outbreaks may have contributed to the decline in disease. Rates of meningococcal disease remain low in the military, and large outbreaks no longer occur. Since 1990, records of all hospitalizations of active duty service members in military hospitals worldwide have been integrated with military personnel records in the Defense Medical Surveillance System (DMSS). During 1990-1998, the overall rate of hospitalizations from meningococcal disease among enlisted, active-duty service members was 0.51 per 100,000 person-years (J. Brundage, DMSS Army Medical Surveillance Activity, personal communication). Approximately 180,000 military recruits receive a single dose of quadrivalent polysaccharide meningococcal vaccine annually. Revaccination is only indicated when military personnel are traveling to countries in which N. meningitidis is hyperendemic or epidemic (D. Trump, personal communication). Before 1999, students reporting to two of the U.S. military academies routinely received meningococcal vaccine. Last year, the other academies initiated meningococcal vaccine programs.

# MENINGOCOCCAL DISEASE AND COLLEGE STUDENTS

Four recent studies provide data concerning the risk for sporadic meningococcal disease among college students (Table 1) (12-15). The earliest of these studies was conducted during the 1990-1991 and 1991-1992 school years. A questionnaire designed to evaluate risk factors for meningococcal disease among college students was sent to 1,900 universities, resulting in a 38% response rate (12). Forty-three cases of meningococcal disease were reported during the 2 years from colleges with a total enrollment of 4,393,744 students, for a low overall incidence of 1.0 per 100,000 population per year. However, cases of meningococcal disease occurred 9-23 times more frequently in students residing in dormitories than in those residing in other types of accommodations. The low response rate and the inability of the study to control for other risk factors (e.g., freshman status) make these results difficult to interpret.

In a retrospective cohort study conducted in Maryland for the period 1992-1997, 67 cases of meningococcal disease among persons aged 16-30 years were identified by active, laboratory-based surveillance (13). Of those cases, 14 were among students attending Maryland colleges, and 11 were among those in 4-year colleges. The overall incidence of meningococcal disease in Maryland college students was similar to the incidence in the U.S. population of persons the same age (1.74/100,000 vs. 1.44/100,000, respectively); however, rates of disease were elevated among students living in dormitories compared with students living off-campus (3.2/100,000 vs. 0.96/100,000, p=0.05).
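The dormitory versus off-campus comparison above reduces to simple person-time arithmetic. The following minimal sketch illustrates the calculation; the case counts and person-year denominators are illustrative values chosen only to reproduce the quoted rates, not the Maryland study's raw data:

```python
def rate_per_100k(cases: int, person_years: float) -> float:
    """Incidence rate per 100,000 person-years of observation."""
    return cases / person_years * 100_000

# Illustrative inputs chosen to reproduce the rates quoted above
# (3.2 vs. 0.96 per 100,000); these are not the study's actual counts.
dorm = rate_per_100k(cases=8, person_years=250_000)        # 3.2/100,000
off_campus = rate_per_100k(cases=6, person_years=625_000)  # 0.96/100,000

print(f"dormitory residents: {dorm:.2f}/100,000")
print(f"off-campus students: {off_campus:.2f}/100,000")
print(f"rate ratio: {dorm / off_campus:.1f}")  # ~3.3-fold higher in dormitories
```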
U.S. surveillance for meningococcal disease in college students was initiated in 1998; from September 1998 through August 1999, 90 cases of meningococcal disease were reported to CDC (14). These cases represent approximately 3% of the total cases of meningococcal disease that occur each year in the United States. Eighty-seven (97%) cases occurred in undergraduate students, and 40 (44%) occurred among the 2.27 million freshman students entering college each year (16). Among undergraduates, of the 71 (82%) isolates for which serogroup information was available, 35 (49%) were serogroup C, 17 (24%) were serogroup B, 15 (21%) were serogroup Y, and one (1%) was serogroup W-135. Eight (9%) students died. Of the five students who died for whom serogroup information was available, four had serogroup C isolates and one had serogroup Y.

U.S. surveillance data from the 1998-1999 school year suggest that the overall rate of meningococcal disease among undergraduate college students is lower than the rate among persons aged 18-23 years who are not enrolled in college (Table 2) (0.7 vs. 1.5/100,000, respectively) (14,16). However, rates were higher among specific subgroups of college students. Among the approximately 590,000 freshmen who live in dormitories (17), the rate of meningococcal disease was 4.6/100,000, higher than for any age group in the population other than children aged <2 years, but lower than the threshold of 10/100,000 recommended for initiating meningococcal vaccination campaigns (6). Of the 90 students with meningococcal disease who were attending college during the 1998-1999 school year, 50 were enrolled in a case-control study and matched to 148 controls by school, sex, and undergraduate vs. graduate status (14). In a multivariable analysis, freshmen living in dormitories were at higher risk for meningococcal disease. In addition, white race, radiator heat, and recent upper respiratory infection were associated with disease.

In contrast to the United States, overall rates of meningococcal disease in the United Kingdom are higher among university students compared with non-students of similar age (15). From September 1994 through March 1997, university students had an increased annual rate of meningococcal disease (13.2/100,000) compared with nonstudents of similar age in the same health districts (5.5/100,000) and in those health districts without universities (3.7/100,000). As in the United States, regression analysis revealed that "catered hall accommodations," the U.K. equivalent of dormitories, were the main risk factor. Higher rates of disease were observed at universities providing catered hall accommodations for >10% of their student population compared with those providing such housing for <10% of students (15.3/100,000 vs. 5.9/100,000). The increased rate of disease among university students has prompted the United Kingdom to initiate routine vaccination of incoming university students with a bivalent A/C polysaccharide vaccine as part of a new vaccination program (see MMWR 2000;49[No. RR-6], in the pages preceding this report) (18).

# MENINGOCOCCAL VACCINE AND COLLEGE STUDENTS

On September 30, 1997, the American College Health Association (ACHA), which represents about half of colleges that have student health services, released a statement recommending that "college health services [take] a more proactive role in alerting students and their parents about the dangers of meningococcal disease," that "college students consider vaccination against potentially fatal meningococcal disease," and that "colleges and universities ensure all students have access to a vaccination program for those who want to be vaccinated" (Dr. MarJeanne Collins, Chairman, ACHA Vaccine Preventable Diseases Task Force, personal communication). Parent and college student advocates have also encouraged more widespread use of meningococcal vaccine in college students.
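Because the quadrivalent vaccine covers serogroups A, C, Y, and W-135 but not B, the serogroup distribution from the 1998-1999 surveillance above fixes an upper bound on the share of cases that vaccination could have prevented. A minimal sketch using the undergraduate isolate counts quoted above (and setting aside that efficacy is <100%):

```python
# Serogroup counts among the 71 undergraduate isolates reported above
isolates = {"C": 35, "B": 17, "Y": 15, "W-135": 1}
vaccine_serogroups = {"A", "C", "Y", "W-135"}  # quadrivalent polysaccharide

total = sum(isolates.values())
coverable = sum(n for sg, n in isolates.items() if sg in vaccine_serogroups)
print(f"{coverable}/{total} isolates ({coverable / total:.0%}) "
      "were serogroups covered by the vaccine")
# -> 51/71 isolates (72%) were serogroups covered by the vaccine
```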
In a joint study by ACHA and CDC, surveys were sent to 1,200 ACHA-member schools; of 691 responding schools, 57 (8%) reported that preexposure meningococcal vaccination campaigns had been conducted on their campus since September 1997. A median of 32 students were vaccinated at each school (range: 1-2,300) (J. Capparella, unpublished data). During the 1998-1999 school year, 3%-5% of 148 students enrolled in a case-control study reported receiving prophylactic meningococcal vaccination (14). Before the 1999 fall semester, many schools mailed information packets to incoming freshmen; data are not yet available regarding the proportion of students who have been vaccinated.

# Cost-effectiveness of meningococcal vaccine in college students

Best and worst case scenarios were evaluated by varying the cost of vaccine and administration (range: $54-$88), costs per hospitalization ($10,924-$24,030), value of premature death based on lifetime productivity ($1.3-$4.8 million), cost of side effects of vaccine per case ($3,500-$12,270 per one million doses), and average cost of treating a case of sequelae ($0-$1,476). Vaccination coverage (60% and 100%) and vaccine efficacy (80% and 90%) were also varied for evaluation purposes. Vaccination of freshmen who live in dormitories would result in the administration of approximately 300,000-500,000 doses of vaccine each year, preventing 15-30 cases of meningococcal disease and one to three deaths each year. The cost per case prevented would be $600,000-$1.8 million, at a cost per death prevented of $7 million to $20 million. Vaccination of all freshmen would result in the administration of approximately 1.4-2.3 million doses of vaccine each year, preventing 37-69 cases of meningococcal disease and two to four deaths caused by meningococcal disease each year. The cost per case prevented would be $1.4-$2.9 million, at a cost per death prevented of $22 million to $48 million. These data are similar to data derived from previous studies (19). They suggest that for society as a whole, vaccination of college students is unlikely to be cost-effective (Scott et al., unpublished data).
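The cost-per-case figures above follow from arithmetic of the form total program cost divided by cases prevented. A minimal sketch of that dominant calculation (the full model also nets out averted treatment costs and varies coverage and efficacy, so these crude numbers only bracket the published estimates):

```python
def cost_per_outcome(doses: int, cost_per_dose: float, prevented: int) -> float:
    """Crude program cost divided by outcomes (cases or deaths) prevented."""
    return doses * cost_per_dose / prevented

# Dormitory-freshmen strategy, using the ranges quoted above
best = cost_per_outcome(doses=300_000, cost_per_dose=54, prevented=30)
worst = cost_per_outcome(doses=500_000, cost_per_dose=88, prevented=15)
print(f"cost per case prevented: ${best:,.0f} to ${worst:,.0f}")
# -> roughly $540,000 to $2,933,333, the same order of magnitude as the
#    published $600,000-$1.8 million range from the full model
```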
# RECOMMENDATIONS FOR USE OF MENINGOCOCCAL POLYSACCHARIDE VACCINE IN COLLEGE STUDENTS

College freshmen, particularly those who live in dormitories, are at modestly increased risk for meningococcal disease relative to other persons their age. Vaccination with the currently available quadrivalent meningococcal polysaccharide vaccine will decrease the risk for meningococcal disease among such persons. Vaccination does not eliminate risk because a) the vaccine confers no protection against serogroup B disease and b) although the vaccine is highly effective against serogroups C, Y, W-135, and A, efficacy is <100%. The risk for meningococcal disease among college students is low; therefore, vaccination of all college students, all freshmen, or only freshmen who live in dormitories or residence halls is not likely to be cost-effective for society as a whole. Thus, ACIP is issuing the following recommendations regarding the use of meningococcal polysaccharide vaccines for college students.

- Providers of medical care to incoming and current college freshmen, particularly those who plan to or already live in dormitories and residence halls, should, during routine medical care, inform these students and their parents about meningococcal disease and the benefits of vaccination. ACIP does not believe that the level of increased risk among freshmen warrants specific changes in living situations for freshmen.
- College freshmen who want to reduce their risk for meningococcal disease should either be administered vaccine (by a doctor's office or student health service) or directed to a site where vaccine is available.
- The risk for meningococcal disease among non-freshmen college students is similar to that for the general population. However, the vaccine is safe and efficacious and therefore can be provided to non-freshmen undergraduates who want to reduce their risk for meningococcal disease.
- Colleges should inform incoming and/or current freshmen, particularly those who plan to live or already live in dormitories or residence halls, about meningococcal disease and the availability of a safe and effective vaccine.
- Public health agencies should provide colleges and health-care providers with information about meningococcal disease and the vaccine as well as information regarding how to obtain vaccine.

# Additional Considerations about Vaccination of College Students

Although the need for revaccination of older children has not been determined, antibody levels decline rapidly over 2-3 years (6). Revaccination may be considered for freshmen who were vaccinated more than 3-5 years earlier (5). Routine revaccination of college students who were vaccinated as freshmen is not indicated. College students who are at higher risk for meningococcal disease because of a) underlying immune deficiencies or b) travel to countries in which N. meningitidis is hyperendemic or epidemic (i.e., the meningitis belt of sub-Saharan Africa) should be vaccinated (6). College students who are employed as research, industrial, and clinical laboratory personnel who are routinely exposed to N. meningitidis in solutions that may be aerosolized should be considered for vaccination (6). No data are available regarding whether other closed civilian populations with characteristics similar to college freshmen living in dormitories (e.g., preparatory school students) are at the same increased risk for disease. Prevention efforts should focus on groups in whom higher risk has been documented.

# CONCLUSIONS

College freshmen, especially those who live in dormitories, are at a modestly increased risk for meningococcal disease compared with other persons of the same age, and vaccination with the currently available quadrivalent meningococcal polysaccharide vaccine will decrease their risk for meningococcal disease. Continued surveillance is necessary to evaluate the impact of these recommendations, which have already prompted many universities and clinicians to offer vaccine to college freshmen. Consultation on the use of these recommendations or other issues regarding meningococcal disease is available from the Meningitis and Special Pathogens Branch, Division of Bacterial and Mycotic Diseases, National Center for Infectious Diseases, CDC (telephone: [404] 639-3158).
Compared with older children and adults, infants aged <12 months have substantially higher rates of pertussis and the largest burden of pertussis-related deaths. Since 2004, a mean of 3,055 infant pertussis cases with more than 19 deaths has been reported each year through the National Notifiable Diseases Surveillance System (CDC, unpublished data, 2011). The majority of pertussis cases, hospitalizations, and deaths occur in infants aged ≤2 months, who are too young to be vaccinated; therefore, other strategies are required for prevention of pertussis in this age group. Since 2005, the Advisory Committee on Immunization Practices (ACIP) has recommended tetanus toxoid, reduced diphtheria toxoid and acellular pertussis (Tdap) booster vaccines for unvaccinated postpartum mothers and other family members of newborn infants to protect infants from pertussis, a strategy referred to as cocooning (1). Over the past 5 years, cocooning programs have proven difficult to implement widely (2,3). Cocooning programs might achieve moderate vaccination coverage among postpartum mothers but have had limited success in vaccinating fathers or other family members. On June 22, 2011, ACIP made recommendations for use of Tdap in unvaccinated pregnant women and updated recommendations on cocooning and special situations. This report summarizes data considered and conclusions made by ACIP and provides guidance for implementing its recommendations.

ACIP recommends a single Tdap dose for persons aged 11 through 18 years who have completed the recommended childhood diphtheria and tetanus toxoids and pertussis/diphtheria and tetanus toxoids and acellular pertussis (DTP/DTaP) vaccination series and for adults aged 19 through 64 years who have not previously received Tdap (1,4). ACIP also recommends that adults aged 65 years and older receive a single dose of Tdap if they have or anticipate having close contact with an infant aged <12 months and have not previously received Tdap (5). Two Tdap vaccines are available in the United States. Adacel (Sanofi Pasteur) is licensed for use in persons aged 11 through 64 years. Boostrix (GlaxoSmithKline Biologicals) is licensed for use in persons aged ≥10 years (6). The ACIP Pertussis Vaccines Work Group reviewed unpublished Tdap safety data from pregnancy registries and the Vaccine Adverse Event Reporting System (VAERS) and published studies on use of Tdap in pregnant women. The Work Group also considered the epidemiology of pertussis in infants and provider and program feedback, and then presented policy options for consideration to ACIP. These updated recommendations on use of Tdap in pregnant women are consistent with the goal of reducing the burden of pertussis in infants.

# Safety of Tdap in Pregnant Women

In prelicensure evaluations, the safety of administering a booster dose of Tdap to pregnant women was not studied. Because information on use of Tdap in pregnant women was lacking, both manufacturers of Tdap established pregnancy registries to collect information and pregnancy outcomes from pregnant women vaccinated with Tdap. Data on the safety of administering Tdap to pregnant women are now available. ACIP reviewed published and unpublished data from VAERS, the Sanofi Pasteur (Adacel) and GlaxoSmithKline (Boostrix) pregnancy registries, and small studies (7,8).
ACIP concluded that available data from these studies did not suggest any elevated frequency or unusual patterns of adverse events in pregnant women who received Tdap and that the few serious adverse events reported were unlikely to have been caused by the vaccine. Both tetanus and diphtheria toxoids (Td) and tetanus toxoid vaccines have been used extensively in pregnant women worldwide to prevent neonatal tetanus. Tetanus- and diphtheria-toxoid-containing vaccines administered during pregnancy have not been shown to be teratogenic (9,10). From a safety perspective, ACIP concluded that administration of Tdap after 20 weeks' gestation is preferred to minimize the risk for any low-frequency adverse event and the possibility that any spurious association might appear causative.

# Transplacental Maternal Antibodies

For infants, transplacentally transferred maternal antibodies might provide protection against pertussis in early life and before beginning the primary DTaP series. Several studies provide evidence supporting the existence of efficient transplacental transfer of pertussis antibodies (7,11,12). Cord blood from newborn infants whose mothers received Tdap during pregnancy or before pregnancy had higher concentrations of pertussis antibodies when compared with cord blood from newborn infants of unvaccinated mothers (7,11). The half-life of transferred maternal pertussis antibodies is approximately 6 weeks (12). The effectiveness of maternal antipertussis antibodies in preventing infant pertussis is not yet known, but pertussis-specific antibodies likely confer protection and modify the severity of pertussis illness (13,14). In addition, a woman vaccinated with Tdap during pregnancy likely will be protected at time of delivery, and therefore less likely to transmit pertussis to her infant. After receipt of Tdap, boosted pertussis-specific antibody levels peak after several weeks, followed by a decline over several months (15,16). To optimize the concentration of maternal antibodies transferred to the fetus, ACIP concluded that unvaccinated pregnant women should receive Tdap, preferably in the third or late second (after 20 weeks' gestation) trimester.

# Interference with Infant Immune Response to Primary DTaP Vaccination

Several studies have suggested that maternal pertussis antibodies can inhibit active pertussis-specific antibody production after administration of DTaP vaccine to infants of mothers vaccinated with Tdap during pregnancy, referred to as blunting (12,17). Because correlates of protection are not fully understood, the clinical importance of blunting of an infant's immune response is not clear. Evidence suggests that any blunting would be short-lived because circulating maternal antibodies decline rapidly (12,18). Circulating maternal pertussis antibodies might reduce an infant's risk for pertussis in the first few months of life but slightly increase risk for disease because of a blunted immune response after receipt of primary DTaP doses. The benefit would be to reduce the risk for disease and death in infants aged <3 months, but the trade-off might be to increase the occurrence of pertussis in older infants; however, this group experiences a substantially lower burden of hospitalizations and mortality (National Notifiable Diseases Surveillance System, CDC, unpublished data, 2011).
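With a half-life of approximately 6 weeks, the fraction of transplacentally acquired antibody remaining in the infant follows simple exponential decay; the same arithmetic explains both the waning of early protection and why any blunting effect should be short-lived. A minimal sketch:

```python
def fraction_remaining(weeks: float, half_life_weeks: float = 6.0) -> float:
    """Fraction of transferred maternal antibody left after `weeks`."""
    return 0.5 ** (weeks / half_life_weeks)

for week in (0, 6, 12, 18, 24):
    print(f"week {week:2d}: {fraction_remaining(week):5.1%} of birth level")
# By roughly age 4-6 months, when the primary DTaP series is under way,
# little transferred maternal antibody remains.
```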
Currently, two clinical trials are being conducted to measure the immune response of infants receiving DTaP immunization at ages 2, 4, and 6 months whose mothers received Tdap during the third trimester of pregnancy (19,20). These trials also are designed to evaluate safety and immunogenicity of Tdap during pregnancy, but are not sufficiently powered to assess disease endpoints. Analysis of interim data from one trial (19, unpublished data) measured infant antibody to pertussis antigens in a blinded fashion for two groups: infants whose mothers received Tdap and infants whose mothers received Td. The first group had elevated antipertussis antibody levels compared with the second at birth and before dose 1, which might be the result of passive antibody transfer, but had lower antipertussis antibody levels after dose 3. In both groups, antipertussis antibody levels were comparable before doses 2 and 3. Although the first group had lower antipertussis antibody levels after dose 3, the evidence of sufficient immune response to DTaP doses compared with the second group was reassuring. ACIP concluded that the interim data are consistent with previously published literature suggesting a short duration of blunting of the infant response, and that the potential benefit of protection from maternal antibodies in newborn infants outweighs the potential risk for shifting disease burden to later in infancy.

# Cocooning

Cocooning is defined as the strategy of vaccinating pregnant women immediately postpartum and all other close contacts of infants aged <12 months with Tdap to reduce the risk for transmission of pertussis to infants. Cocooning has been recommended by ACIP since 2005. Cocooning programs have achieved moderate postpartum coverage among mothers but have had limited success in vaccinating fathers or other family members (3) (CDC, unpublished data, 2011). Programmatic challenges make implementation of cocooning programs complex and also impede program expansion and sustainability (2). The effectiveness of vaccinating postpartum mothers and close contacts to protect infants from pertussis is not yet known, but the delay in antibody response among those vaccinated with Tdap after an infant's birth might result in insufficient protection to infants during the first weeks of life (21). ACIP concluded that cocooning alone is an insufficient strategy to prevent pertussis morbidity and mortality in newborn infants. Regardless, ACIP concluded that cocooning likely provides indirect protection to infants and firmly supports vaccination with Tdap for unvaccinated persons who anticipate close contact with an infant.

# Decision and Cost Effectiveness Analysis

A decision analysis and cost effectiveness model was developed to assess the impact and cost effectiveness of maternal Tdap vaccination during pregnancy compared with immediately postpartum. The model showed that Tdap vaccination during pregnancy would prevent more infant cases, hospitalizations, and deaths compared with the postpartum dose for two reasons: 1) vaccination during pregnancy benefits the mother and infant by providing earlier protection to the mother, thereby protecting the infant at birth; and 2) vaccination during late pregnancy maximizes transfer of maternal antibodies to the infant, likely providing direct protection to the infant for a period after birth.
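A minimal sketch of the structure of that comparison: a dose during pregnancy protects the infant from birth via transferred antibody, whereas a postpartum dose leaves the infant without that direct protection during the first weeks of life. All parameter values below are hypothetical placeholders for illustration, not inputs from the published model:

```python
# Hypothetical parameters, for illustration only -- not the model's inputs.
WEEKLY_RISK = 1e-4          # assumed baseline weekly risk of infant pertussis
RR_MATERNAL_ANTIBODY = 0.3  # assumed relative risk with maternal antibody
WINDOW_WEEKS = 12           # window before primary DTaP protection develops

def expected_cases(relative_risk: float, weeks: int = WINDOW_WEEKS) -> float:
    """Expected pertussis cases per infant over the pre-DTaP window."""
    return WEEKLY_RISK * relative_risk * weeks

during_pregnancy = expected_cases(RR_MATERNAL_ANTIBODY)  # protected from birth
postpartum = expected_cases(1.0)  # no antibody transferred to the infant

print(f"dose during pregnancy: {during_pregnancy:.2e} expected cases/infant")
print(f"postpartum dose:       {postpartum:.2e} expected cases/infant")
```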
Model results were most sensitive to efficacy of maternal antibodies and risk for disease as a result of blunting; however, a sensitivity analysis in which infants were assumed to have as little as 20% efficacy of maternal antibodies and a 60% increase in risk for disease as a result of blunting found that maternal vaccination during pregnancy was more cost effective and prevented a greater proportion of infant cases and deaths than postpartum maternal vaccination (22).

# Guidance for Use

Maternal vaccination. ACIP recommends that women's health-care personnel implement a Tdap vaccination program for pregnant women who previously have not received Tdap. Health-care personnel should administer Tdap during pregnancy, preferably during the third or late second trimester (after 20 weeks' gestation). If not administered during pregnancy, Tdap should be administered immediately postpartum.

Cocooning. ACIP recommends that adolescents and adults (e.g., parents, siblings, grandparents, childcare providers, and health-care personnel) who have or anticipate having close contact with an infant aged <12 months should receive a single dose of Tdap to protect against pertussis if they have not previously received Tdap. Ideally, these adolescents and adults should receive Tdap at least 2 weeks before beginning close contact with the infant.

# Special Situations

Pregnant women due for tetanus booster. If a tetanus and diphtheria booster vaccination is indicated during pregnancy for a woman who has previously not received Tdap (i.e., more than 10 years since previous Td), then Tdap should be administered during pregnancy, preferably during the third or late second trimester (after 20 weeks' gestation).

Wound management for pregnant women. As part of standard wound management care to prevent tetanus, a tetanus toxoid-containing vaccine might be recommended for wound management in a pregnant woman if 5 years or more have elapsed since last receiving Td. If a tetanus booster is indicated for a pregnant woman who previously has not received Tdap, Tdap should be administered.

Pregnant women with unknown or incomplete tetanus vaccination. To ensure protection against maternal and neonatal tetanus, pregnant women who have never been vaccinated against tetanus should receive three vaccinations containing tetanus and reduced diphtheria toxoids. The recommended schedule is 0, 4 weeks, and 6 to 12 months. Tdap should replace 1 dose of Td, preferably during the third or late second trimester (after 20 weeks' gestation) of pregnancy.
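The 0, 4-week, 6-12-month catch-up schedule translates directly into calendar due dates. A minimal sketch, using the earliest acceptable point of the 6-12-month window for the third dose (the function name and example date are this sketch's own, not part of the recommendation):

```python
from datetime import date, timedelta

def td_catchup_schedule(first_dose: date) -> list[date]:
    """Due dates for the 3-dose catch-up series: 0, 4 weeks, ~6 months.

    One of the three doses should be Tdap rather than Td, preferably
    administered during the third or late second trimester.
    """
    return [
        first_dose,
        first_dose + timedelta(weeks=4),
        first_dose + timedelta(weeks=26),  # ~6 months; 6-12 months acceptable
    ]

for due in td_catchup_schedule(date(2011, 10, 3)):
    print(due.isoformat())
```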
The purpose of this Compendium is to provide information on rabies to veterinarians, public health officials, and others concerned with rabies control. These recommendations serve as the basis for animal rabies-control programs throughout the United States and facilitate standardization of procedures among jurisdictions, thereby contributing to an effective national rabies-control program. This document is reviewed annually and revised as necessary. Recommendations on immunization procedures are contained in Part I; all animal rabies vaccines licensed by the United States Department of Agriculture (USDA) and marketed in the United States are listed in Part II; Part III details the principles of rabies control.

No parenteral rabies vaccine is licensed for wildlife. For this reason and because virus-shedding periods are unknown, wild or exotic carnivores and bats should not be kept as pets. Zoos or research institutions may establish vaccination programs that attempt to protect valuable animals, but these programs should not be in lieu of appropriate public health activities that protect humans. The use of licensed oral vaccines for the mass immunization of wildlife should be considered in selected situations, with the approval of the state agency responsible for animal rabies control.

# E. Accidental Human Exposure to Vaccine

Accidental inoculation can occur during administration of animal rabies vaccine. Such exposure to inactivated vaccines constitutes no risk for acquiring rabies.

# F. Identification of Vaccinated Animals

All agencies and veterinarians should adopt the standard tag system. This practice will aid the administration of local, state, national, and international rabies-control procedures. Animal license tags should be distinguishable in shape and color from rabies tags. Anodized aluminum rabies tags should be no less than 0.064 inches in thickness.

Any animal bitten or scratched by a wild, carnivorous mammal (or a bat) not available for testing should be regarded as having been exposed to rabies.

a. Dogs and Cats. Unvaccinated dogs and cats exposed to a rabid animal should be euthanized immediately. If the owner is unwilling to have this done, the animal should be placed in strict isolation for 6 months and vaccinated 1 month before being released. Animals with expired vaccinations need to be evaluated on a case-by-case basis. Dogs and cats that are currently vaccinated should be revaccinated immediately, kept under the owner's control, and observed for 45 days.

b. Livestock. All species of livestock are susceptible to rabies; cattle and horses are among the most frequently infected. Livestock exposed to a rabid animal and currently vaccinated with a vaccine approved by USDA for that species should be revaccinated immediately and observed for 45 days. Unvaccinated livestock should be slaughtered immediately. If the owner is unwilling to have this done, the animal should be kept under close observation for 6 months. The following are recommendations for owners of unvaccinated livestock exposed to rabid animals: 1) If the animal is slaughtered within 7 days of being bitten, its tissues may be eaten without risk of infection, provided liberal portions of the exposed area are discarded. Federal meat inspectors must reject for slaughter any animal known to have been exposed to rabies within 8 months. 2) Neither tissues nor milk from a rabid animal should be used for human or animal consumption. However, because pasteurization temperatures will inactivate rabies virus, drinking pasteurized milk or eating cooked meat does not constitute a rabies exposure.
3) It is rare to have more than one rabid animal in a herd or to have herbivore-to-herbivore transmission; therefore, it may not be necessary to restrict the rest of the herd if a single animal has been exposed to or infected by rabies.

c. Other Animals. Other animals bitten by a rabid animal should be euthanized immediately. Such animals currently vaccinated with a vaccine approved by USDA for that species may be revaccinated immediately and placed in strict isolation for at least 90 days.

6. Management of Animals That Bite Humans. A healthy dog or cat that bites a person should be confined and observed for 10 days; it is recommended that rabies vaccine not be administered during the observation period. Such animals should be evaluated by a veterinarian at the first sign of illness during confinement. Any illness in the animal should be reported immediately to the local health department. If signs suggestive of rabies develop, the animal should be euthanized, its head removed, and the head shipped under refrigeration for examination by a qualified laboratory designated by the local or state health department. Any stray or unwanted dog or cat that bites a person may be euthanized immediately and the head submitted as described above for rabies examination. Other biting animals that might have exposed a person to rabies should be reported immediately to the local health department. Prior vaccination of an animal may not preclude the necessity for euthanasia and testing if the period of virus shedding is unknown for that species. Management of animals other than dogs and cats depends on the species, the circumstances of the bite, and the epidemiology of rabies in the area.

# C. Control Methods in Wildlife

The public should be warned not to handle wildlife. Wild mammals (as well as the offspring of wild species crossbred with domestic dogs and cats) that bite or otherwise expose people, pets, or livestock should be considered for euthanasia and rabies examination. A person bitten by any wild mammal should immediately report the incident to a physician who can evaluate the need for antirabies treatment.

1. Terrestrial Mammals. Continuous and persistent government-funded programs for trapping or poisoning wildlife are not cost effective in reducing wildlife rabies reservoirs on a statewide basis. However, limited control in high-contact areas (e.g., picnic grounds, camps, or suburban areas) might be indicated for the removal of selected high-risk species of wildlife. The state wildlife agency and state health department should be consulted for coordination of any proposed vaccination or population-reduction programs.

2. Bats. Indigenous rabid bats have been reported from every state except Alaska and Hawaii and have caused rabies in at least 22 humans in the United States. However, it is neither feasible nor desirable to control rabies in bats by programs to reduce bat populations. Bats should be excluded from houses and surrounding structures to prevent direct association with humans. Such structures should then be made bat-proof by sealing entrances used by bats.
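The exposure-management rules for dogs and cats above form a small decision table keyed on vaccination status and the owner's choice. A minimal sketch encoding them (the function name and return strings are this sketch's own, not Compendium language):

```python
def manage_exposed_dog_or_cat(currently_vaccinated: bool,
                              owner_accepts_euthanasia: bool) -> str:
    """Disposition of a dog or cat exposed to a rabid animal.

    Note: animals with expired vaccinations are evaluated case-by-case
    and are not modeled here.
    """
    if currently_vaccinated:
        return "revaccinate immediately; keep under owner's control; observe 45 days"
    if owner_accepts_euthanasia:
        return "euthanize immediately"
    return "strict isolation for 6 months; vaccinate 1 month before release"

print(manage_exposed_dog_or_cat(currently_vaccinated=True,
                                owner_accepts_euthanasia=False))
print(manage_exposed_dog_or_cat(currently_vaccinated=False,
                                owner_accepts_euthanasia=False))
```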
# Introduction

Epidemics of influenza typically occur during the winter months in temperate regions and have been responsible for an average of approximately 36,000 deaths/year in the United States during 1990-1999 (1). Influenza viruses also can cause pandemics, during which rates of illness and death from influenza-related complications can increase worldwide. Influenza viruses cause disease among all age groups (2-4). Rates of infection are highest among children, but rates of serious illness and death are highest among persons aged >65 years and persons of any age who have medical conditions that place them at increased risk for complications from influenza (2,5-7). Influenza vaccination is the primary method for preventing influenza and its severe complications. In this report from the Advisory Committee on Immunization Practices (ACIP), the primary target groups recommended for annual vaccination are 1) persons at increased risk for influenza-related complications (e.g., those aged >65 years, children aged 6-23 months, pregnant women, and persons of any age with certain chronic medical conditions); 2) persons aged 50-64 years, because this group has an elevated prevalence of certain chronic medical conditions; and 3) persons who live with or care for persons at high risk (e.g., health-care workers and household contacts who have frequent contact with persons at high risk and who can transmit influenza to those persons at high risk). Vaccination is associated with reductions in influenza-related respiratory illness and physician visits among all age groups, hospitalization and death among persons at high risk, otitis media among children, and work absenteeism among adults (8-18). Although influenza vaccination levels increased substantially during the 1990s, further improvements in vaccine coverage levels are needed, chiefly among persons aged ≥65 years, among children aged 6-23 months, and among health-care workers. ACIP recommends using strategies to improve vaccination levels, including using reminder/recall systems and standing orders programs (19,20). Although influenza vaccination remains the cornerstone for the control and treatment of influenza, information on antiviral medications is also presented because these agents are an adjunct to vaccine.

# Primary Changes and Updates in the Recommendations

The 2004 recommendations include four principal changes or updates:

1. ACIP recommends that healthy children aged 6-23 months, and close contacts of children aged 0-23 months, be vaccinated against influenza (see Target Groups for Vaccination).

# Influenza and Its Burden

# Biology of Influenza

Influenza A and B are the two types of influenza viruses that cause epidemic human disease (21). Influenza A viruses are further categorized into subtypes on the basis of two surface antigens: hemagglutinin (H) and neuraminidase (N). Influenza B viruses are not categorized into subtypes. Since 1977, influenza A (H1N1) viruses, influenza A (H3N2) viruses, and influenza B viruses have been in global circulation. In 2001, influenza A (H1N2) viruses that probably emerged after genetic reassortment between human A (H3N2) and A (H1N1) viruses began circulating widely.
Both influenza A and B viruses are further separated into groups on the basis of antigenic characteristics. New influenza virus variants result from frequent antigenic change (i.e., antigenic drift) caused by point mutations that occur during viral replication. Influenza B viruses undergo antigenic drift less rapidly than influenza A viruses. A person's immunity to the surface antigens, including hemagglutinin, reduces the likelihood of infection and the severity of disease if infection occurs (22). Antibody against one influenza virus type or subtype confers limited or no protection against another. Furthermore, antibody to one antigenic variant of influenza virus might not protect against a new antigenic variant of the same type or subtype (23). Frequent development of antigenic variants through antigenic drift is the virologic basis for seasonal epidemics and the reason for the usual incorporation of one or more new strains in each year's influenza vaccine.

# Clinical Signs and Symptoms of Influenza

Influenza viruses are spread from person to person primarily through the coughing and sneezing of infected persons (21). The incubation period for influenza is 1-4 days, with an average of 2 days (24). Adults typically are infectious from the day before symptoms begin through approximately 5 days after illness onset. Children can be infectious for >10 days, and young children can shed virus for ≤6 days before their illness onset. Severely immunocompromised persons can shed virus for weeks or months (25-28).

Uncomplicated influenza illness is characterized by the abrupt onset of constitutional and respiratory signs and symptoms (e.g., fever, myalgia, headache, malaise, nonproductive cough, sore throat, and rhinitis) (29). Among children, otitis media, nausea, and vomiting are also commonly reported with influenza illness (30-32). Respiratory illness caused by influenza is difficult to distinguish from illness caused by other respiratory pathogens on the basis of symptoms alone (see Role of Laboratory Diagnosis). In studies conducted primarily among adults, reported sensitivities and specificities of clinical definitions for influenza-like illness that include fever and cough have ranged from 63% to 78% and 55% to 71%, respectively, compared with viral culture (33,34). Sensitivity and predictive value of clinical definitions can vary, depending on the degree of co-circulation of other respiratory pathogens and the level of influenza activity (35). A study among older nonhospitalized patients determined that symptoms of fever, cough, and acute onset had a positive predictive value of 30% for influenza (36), whereas a study of hospitalized older patients with chronic cardiopulmonary disease determined that a combination of fever, cough, and illness of <7 days was 78% sensitive and 73% specific for influenza (37). However, a study among vaccinated older persons with chronic lung disease reported that cough was not predictive of influenza infection, although having a fever or feverishness was 68% sensitive and 54% specific for influenza infection (38).

Influenza illness typically resolves after a limited number of days for the majority of persons, although cough and malaise can persist for >2 weeks. Among certain persons, influenza can exacerbate underlying medical conditions (e.g., pulmonary or cardiac disease), lead to secondary bacterial pneumonia or primary influenza viral pneumonia, or occur as part of a coinfection with other viral or bacterial pathogens (39).
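The dependence of predictive value on influenza activity noted above can be made concrete with Bayes' rule. The following is a minimal sketch, not drawn from any of the cited studies: the 70% sensitivity and 60% specificity sit within the ranges quoted above, and the prevalence values are assumptions chosen for illustration.

```python
# Positive predictive value (PPV) of a clinical influenza-like-illness
# definition as a function of sensitivity, specificity, and the prevalence
# of influenza among patients evaluated. Illustrative assumptions only.

def ppv(sensitivity: float, specificity: float, prevalence: float) -> float:
    """P(influenza | case definition positive), by Bayes' rule."""
    true_positives = sensitivity * prevalence
    false_positives = (1.0 - specificity) * (1.0 - prevalence)
    return true_positives / (true_positives + false_positives)

# Assumed prevalence of influenza among symptomatic patients at low,
# moderate, and peak influenza activity (hypothetical values).
for prevalence in (0.05, 0.20, 0.40):
    print(f"prevalence {prevalence:.0%}: PPV = {ppv(0.70, 0.60, prevalence):.0%}")
```

With these assumed inputs, the PPV rises from roughly 8% at low activity to roughly 54% at peak activity, consistent with the observation above that the same case definition performs very differently as influenza activity waxes and wanes.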
Young children with influenza infection can have initial symptoms mimicking bacterial sepsis with high fevers (40,41), and ≤20% of children hospitalized with influenza can have febrile seizures (31,42). Influenza infection has also been associated with encephalopathy, transverse myelitis, Reye syndrome, myositis, myocarditis, and pericarditis (31,39,43,44).

# Hospitalizations and Deaths from Influenza

The risks for complications, hospitalizations, and deaths from influenza are higher among persons aged ≥65 years, young children, and persons of any age with certain underlying health conditions (see Persons at Increased Risk for Complications) than among healthy older children and younger adults (1,6,8,45-50). Estimated rates of influenza-associated hospitalizations have varied substantially by age group in studies conducted during different influenza epidemics (Table 1). Among children aged 0-4 years, hospitalization rates have ranged from approximately 500/100,000 children for those with high-risk medical conditions to 100/100,000 children for those without high-risk medical conditions (51-54). Within the 0-4 year age group, hospitalization rates are highest among children aged 0-1 years and are comparable to rates reported among persons aged ≥65 years (53,54) (Table 1).

During influenza epidemics from 1969-70 through 1994-95, the estimated overall number of influenza-associated hospitalizations in the United States ranged from approximately 16,000 to 220,000/epidemic. An average of approximately 114,000 influenza-related excess hospitalizations occurred per year, with 57% of all hospitalizations occurring among persons aged <65 years. Since the 1968 influenza A (H3N2) virus pandemic, the greatest numbers of influenza-associated hospitalizations have occurred during epidemics caused by type A (H3N2) viruses, with an estimated average of 142,000 influenza-associated hospitalizations per year (55).

Influenza-related deaths can result from pneumonia as well as from exacerbations of cardiopulmonary conditions and other chronic diseases. Older adults account for >90% of deaths attributed to pneumonia and influenza (1,50). In a recent study of influenza epidemics, approximately 19,000 influenza-associated pulmonary and circulatory deaths per influenza season occurred during 1976-1990, compared with approximately 36,000 deaths per season during 1990-1999 (1). Estimated rates of influenza-associated pulmonary and circulatory deaths/100,000 persons were 0.4-0.6 among persons aged 0-49 years, 7.5 among persons aged 50-64 years, and 98.3 among persons aged ≥65 years. In the United States, the number of influenza-associated deaths might be increasing in part because the number of older persons is increasing (56). In addition, influenza seasons in which influenza A (H3N2) viruses predominate are associated with higher mortality (57); influenza A (H3N2) viruses predominated in 90% of influenza seasons during 1990-1999, compared with 57% of seasons during 1976-1990 (1). Deaths from influenza are uncommon among children with and without high-risk conditions, but do occur (58,59). A study that modeled influenza-related deaths estimated that an average of 92 deaths occurred among children aged <5 years (1).
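As a rough check on how the age-specific rates above aggregate into the national death toll cited earlier, the rates can be applied to approximate age-group population sizes. The population figures below are round assumptions for the United States circa 2000, not census data:

```python
# Converting age-specific influenza-associated death rates (per 100,000,
# from the 1990-1999 estimates above) into approximate annual counts.
# Population sizes are rough assumptions, not census figures.

rates_per_100k = {"0-49 years": 0.5, "50-64 years": 7.5, ">=65 years": 98.3}
population = {"0-49 years": 210e6, "50-64 years": 40e6, ">=65 years": 35e6}

total_deaths = 0.0
for group, rate in rates_per_100k.items():
    deaths = rate / 100_000 * population[group]
    total_deaths += deaths
    print(f"{group}: ~{deaths:,.0f} deaths/year")

# Same order of magnitude as the ~36,000 deaths/year estimate cited above.
print(f"total: ~{total_deaths:,.0f} deaths/year")
```

Under these assumptions, roughly nine of every ten deaths fall in the ≥65-year group, mirroring the proportion reported above for pneumonia and influenza mortality.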
Preliminary reports of laboratory-confirmed pediatric deaths during the 2003-04 influenza season indicated that, of the 143 influenza-related deaths reported (as of April 10, 2004), 58 (41%) occurred among children aged <2 years and, of those aged 2-17 years, 65 (45%) did not have an underlying medical condition traditionally considered to place a person at risk for influenza-related complications (unpublished data, CDC National Center for Infectious Diseases, 2004). Further information is needed regarding the risk for severe influenza complications and optimal strategies for minimizing severe disease and death among children.

# Options for Controlling Influenza

In the United States, the primary option for reducing the effect of influenza is immunoprophylaxis with vaccine. Inactivated (i.e., killed virus) influenza vaccine and live, attenuated influenza vaccine are available for use in the United States (see Recommendations for Using Inactivated and Live, Attenuated Influenza Vaccine). Vaccinating persons at high risk for complications and their contacts each year before seasonal increases in influenza virus circulation is the most effective means of reducing the effect of influenza. Vaccination coverage can be increased by administering vaccine to persons during hospitalizations or routine health-care visits before the influenza season, making special visits to physicians' offices or clinics unnecessary. When vaccine and epidemic strains are well-matched, achieving increased vaccination rates among persons living in closed settings (e.g., nursing homes and other chronic-care facilities) and among staff can reduce the risk for outbreaks by inducing herd immunity (13). Vaccination of health-care workers and other persons in close contact with persons at increased risk for severe influenza illness can also reduce transmission of influenza and subsequent influenza-related complications. Antiviral drugs used for chemoprophylaxis or treatment of influenza are a key adjunct to vaccine (see Recommendations for Using Antiviral Agents for Influenza). However, antiviral medications are not a substitute for vaccination.

[Table 1 footnotes: Source: N Engl J Med 2000;342:232-9. §§ Outcomes were for acute pulmonary conditions; influenza-attributable hospitalization rates for children at high risk were not included in this study. ¶¶ Source: Barker WH, Mullooly JP. Impact of epidemic type A influenza in a defined adult population. Am J Epidemiol 1980;112:798-811. * Outcomes were limited to hospitalizations in which either pneumonia or influenza was listed as the first condition on discharge records (Simonsen) or included anywhere in the list of discharge diagnoses (Barker). ††† Source: Simonsen L, Fukuda K, Schonberger LB, Cox NJ. Impact of influenza epidemics on hospitalizations. J Infect Dis 2000;181:831-7. §§§ Persons at high risk and not at high risk for influenza-related complications are combined. ¶¶¶ The low estimate is the average during influenza A (H1N1)- or influenza B-predominant seasons, and the high estimate is the average during influenza A (H3N2)-predominant seasons.]
# Influenza Vaccine Composition

Because circulating influenza A (H1N2) viruses are a reassortant of influenza A (H1N1) and (H3N2) viruses, antibody directed against the influenza A (H1N1) and influenza A (H3N2) vaccine strains will provide protection against circulating influenza A (H1N2) viruses. Influenza viruses for both the inactivated and live, attenuated influenza vaccines are initially grown in embryonated hens' eggs; thus, both vaccines might contain limited amounts of residual egg protein.

For the inactivated vaccine, the vaccine viruses are made noninfectious (i.e., inactivated or killed) (60). Subvirion and purified surface antigen preparations of the inactivated vaccine are available. Manufacturing processes differ by manufacturer. Manufacturers might use different compounds to inactivate influenza viruses and add antibiotics to prevent bacterial contamination. Package inserts should be consulted for additional information.

# Thimerosal

Thimerosal, a mercury-containing compound, has been used as a preservative in vaccines since the 1930s and is used in multidose vials of inactivated influenza vaccine to reduce the likelihood of bacterial contamination. Although no scientific evidence indicates that thimerosal in vaccines leads to serious adverse events in vaccine recipients, in 1999 the U.S. Public Health Service and other organizations recommended that efforts be made to eliminate or reduce the thimerosal content in vaccines to decrease total mercury exposure, chiefly among infants (61-63). Since mid-2001, vaccines routinely recommended for infants in the United States have been manufactured either without thimerosal or with only trace amounts, providing a substantial reduction in the total mercury exposure from vaccines for children (64). Vaccines containing trace amounts of thimerosal have <1 mcg mercury/dose. In 1999, 15 of 28 vaccine products for which CDC had contracts did not contain thimerosal as a preservative. In 2004, 27 of 29 products under CDC contract do not contain thimerosal as a preservative.

Influenza Vaccines and Thimerosal. LAIV does not contain thimerosal. Thimerosal preservative-containing inactivated influenza vaccines, distributed in multidose containers in the United States, contain 25 mcg of mercury/0.5-mL dose (61,62). Inactivated influenza virus vaccines distributed in the United States as preservative-free vaccines in single-dose syringes contain only trace amounts of thimerosal as a residual from early manufacturing steps. Inactivated influenza vaccine that does not contain thimerosal as a preservative has <1 mcg mercury/0.5-mL dose or <0.5 mcg mercury/0.25-mL dose. This information is included in the package insert provided with each type of inactivated influenza virus vaccine.

Beginning in 2004, influenza vaccine is part of the routine childhood immunization schedule. For the 2004-05 influenza season, 6-8 million single-dose syringes of inactivated influenza virus vaccine without thimerosal as a preservative probably will be available, a substantial increase compared with the approximately 3.2 million such doses available during the 2003-04 influenza season. Inactivated influenza vaccine without thimerosal as a preservative is available from two manufacturers. Chiron produces Fluvirin™, which is approved by the Food and Drug Administration (FDA) for persons aged ≥4 years. Fluvirin is marketed as a formulation with thimerosal as a preservative in multidose vials and as a formulation without thimerosal as a preservative in 0.5-mL unit dose syringes. Aventis Pasteur produces Fluzone®, which is FDA-approved for persons aged ≥6 months. Fluzone containing thimerosal as a preservative is available in multidose vials. Preservative-free Fluzone packaged as 0.25-mL unit dose syringes is available for use among persons aged 6-35 months.
The total amount of inactivated influenza vaccine available without thimerosal as a preservative will be increased as manufacturing capabilities are expanded. The risks for severe illness from influenza infection are elevated among both young children and pregnant women, and both groups benefit from vaccination through the prevention of illness and death from influenza. In contrast, no scientifically conclusive evidence exists of harm from exposure to thimerosal preservative-containing vaccine, whereas evidence is accumulating of a lack of any harm resulting from exposure to such vaccines (61,65). Therefore, the benefits of influenza vaccination outweigh the theoretical risk, if any, from thimerosal exposure through vaccination. Nonetheless, certain persons remain concerned regarding exposure to thimerosal. The U.S. vaccine supply for infants and pregnant women is in a period of transition during which the thimerosal content of vaccines intended for these groups is being reduced by manufacturers as a feasible means of decreasing an infant's total exposure to mercury, because other environmental sources of exposure are more difficult or impossible to eliminate. Reductions in thimerosal in other vaccines have already been achieved and have resulted in substantially lowered cumulative exposure to thimerosal from vaccination among infants and children. For all of these reasons, persons recommended to receive inactivated influenza vaccine may receive either vaccine preparation, depending on availability. Supplies of inactivated influenza vaccine without thimerosal as a preservative will be increased for the 2004-05 influenza season compared with the 2003-04 season, and such vaccine will be included in CDC contracts to meet anticipated public demand in 2004.

# Efficacy and Effectiveness of Inactivated Influenza Vaccine

The effectiveness of inactivated influenza vaccine depends primarily on the age and immunocompetence of the vaccine recipient and the degree of similarity between the viruses in the vaccine and those in circulation. The majority of vaccinated children and young adults develop high postvaccination hemagglutination inhibition antibody titers (66-68). These antibody titers are protective against illness caused by strains similar to those in the vaccine (67-70).

Adults Aged <65 Years. When the vaccine and circulating viruses are antigenically similar, influenza vaccine prevents influenza illness among approximately 70%-90% of healthy adults aged <65 years (9,12,71,72). Vaccination of healthy adults also has resulted in decreased work absenteeism and decreased use of health-care resources, including use of antibiotics, when the vaccine and circulating viruses are well-matched (9-12,72,73).

Children. Children as young as 6 months can develop protective levels of antibody after influenza vaccination (66,67,74-77), although the antibody response among children at high risk for influenza-related complications might be lower than among healthy children (78,79). In a randomized study among children aged 1-15 years, inactivated influenza vaccine was 77%-91% effective against influenza respiratory illness and was 44%-49%, 74%-76%, and 70%-81% effective against influenza seroconversion among children aged 1-5, 6-10, and 11-15 years, respectively (68).
One study (80) reported a vaccine efficacy of 56% against influenza illness among healthy children aged 3-9 years, and another study (81) determined vaccine efficacy of 22%-54% and 60%-78% among children with asthma aged 2-6 years and 7-14 years, respectively. A 2-year randomized study of children aged 6-24 months determined that ≥89% of children seroconverted to all three vaccine strains during both years (82). During year 1, among 411 children, vaccine efficacy was 66% (95% confidence interval [CI] = 34%-82%) against culture-confirmed influenza (attack rates: 5.5% and 15.9% among vaccine and placebo groups, respectively; vaccine efficacy is computed as 1 minus the ratio of the attack rate among vaccinees to the attack rate among placebo recipients). During year 2, among 375 children, vaccine efficacy was -7% (95% CI = -247%-67%; attack rates: 3.6% and 3.3% among vaccine and placebo groups, respectively); the second year exhibited lower attack rates overall and was considered a mild season. However, no overall reduction in otitis media was reported (82). Other studies report that trivalent inactivated influenza vaccine decreases the incidence of influenza-associated otitis media among young children by approximately 30% (16,17).

Adults Aged ≥65 Years. Older persons and persons with certain chronic diseases might develop lower postvaccination antibody titers than healthy young adults and thus can remain susceptible to influenza-related upper respiratory tract infection (83-85). A randomized trial among noninstitutionalized persons aged ≥60 years reported a vaccine efficacy of 58% against influenza respiratory illness but indicated that efficacy might be lower among those aged ≥70 years (86). The vaccine can also be effective in preventing secondary complications and reducing the risk for influenza-related hospitalization and death among adults aged ≥65 years with and without high-risk medical conditions (e.g., heart disease and diabetes) (13-15,18,87). Among elderly persons not living in nursing homes or similar chronic-care facilities, influenza vaccine is 30%-70% effective in preventing hospitalization for pneumonia and influenza (15,88). Among older persons who do reside in nursing homes, influenza vaccine is most effective in preventing severe illness, secondary complications, and deaths. Among this population, the vaccine can be 50%-60% effective in preventing hospitalization or pneumonia and 80% effective in preventing death, although its effectiveness in preventing influenza illness often ranges from 30% to 40% (89-91).

# Efficacy and Effectiveness of LAIV

Healthy Children. A randomized, double-blind, placebo-controlled trial among 1,602 healthy children initially aged 15-71 months assessed the efficacy of trivalent LAIV against culture-confirmed influenza during two seasons (92,93). This trial included subsets of 238 healthy children (163 vaccinees and 75 placebo recipients) aged 60-71 months who received 2 doses and 74 children (54 vaccinees and 20 placebo recipients) aged 60-71 months who received a single dose during season one, and a subset of 544 children (375 vaccinees and 169 placebo recipients) aged 60-84 months during season two. Children who continued from season one to season two remained in the same study group. In season one, when vaccine and circulating virus strains were well-matched, efficacy was 93% for all participants, regardless of age, among persons receiving 2 doses of LAIV. Efficacy was 87% in the 60-71-month subset for those who received 2 doses and 91% in the subset for those who received 1 or 2 doses.
In season two, when the A (H3N2) component was not well-matched between vaccine and circulating virus strains, efficacy was 86% overall and 87% among those aged 60-84 months. The vaccine was 92% efficacious in preventing culture-confirmed influenza during the two-season study. Other results included a 27% reduction in febrile otitis media and a 28% reduction in otitis media with concomitant antibiotic use. Receipt of LAIV also resulted in decreased fever and otitis media among vaccine recipients who experienced influenza.

Healthy Adults. A randomized, double-blind, placebo-controlled trial among 4,561 healthy working adults aged 18-64 years assessed multiple endpoints, including reductions in illness, absenteeism, health-care visits, and medication use during peak and total influenza outbreak periods (94). The study was conducted during the 1997-98 influenza season, when the vaccine and circulating A (H3N2) strains were not well-matched. The study did not include laboratory testing of viruses. During peak outbreak periods, no difference was identified between LAIV and placebo recipients in the occurrence of any febrile episode. However, vaccination was associated with reductions of 19% in severe febrile illnesses and 24% in febrile upper respiratory tract illnesses. Vaccination also was associated with fewer days of illness, fewer days of work lost, fewer days with health-care provider visits, and reduced use of prescription antibiotics and over-the-counter medications. Among the subset of 3,637 healthy adults aged 18-49 years, LAIV recipients (n = 2,411) had 26% fewer febrile upper respiratory illness episodes, 27% fewer lost work days as a result of febrile upper respiratory illness, and 18%-37% fewer days of health-care provider visits caused by febrile illness, compared with placebo recipients (n = 1,226). Days of antibiotic use were reduced by 41%-45% in this age subset.

Another randomized, double-blind, placebo-controlled challenge study among 92 healthy adults (LAIV, n = 29; placebo, n = 31; inactivated influenza vaccine, n = 32) aged 18-41 years assessed the efficacy of both LAIV and inactivated vaccine (95). The overall efficacy of LAIV and inactivated influenza vaccine in preventing laboratory-documented influenza from all three influenza strains combined was 85% and 71%, respectively, on the basis of experimental challenge with viruses to which study participants were susceptible before vaccination. The difference between the two vaccines was not statistically significant.

# Cost-Effectiveness of Influenza Vaccine

Influenza vaccination can reduce both health-care costs and productivity losses associated with influenza illness. Economic studies of influenza vaccination of persons aged ≥65 years conducted in the United States have reported overall societal cost savings and substantial reductions in hospitalization and death (15,88,96). Among persons aged ≥65 years, vaccination resulted in a net savings per quality-adjusted life year (QALY) gained, whereas costs of $23-$256/QALY were reported among younger age groups. Additional studies of the relative cost-effectiveness and cost utility of influenza vaccination among children and among adults aged <65 years are needed and should be designed to account for year-to-year variations in influenza attack rates, illness severity, and vaccine efficacy when evaluating the long-term costs and benefits of annual vaccination.
# Vaccination Coverage Levels

Among persons aged ≥65 years, influenza vaccination levels increased from 33% in 1989 (103) to 66% in 1999 (104), surpassing the Healthy People 2000 objective of 60% (105). Vaccine coverage reached the highest level recorded (68%) during the 1999-00 influenza season, using as a proxy measure the percentage of adults who reported influenza vaccination during the previous 12 months in the National Health Interview Survey (NHIS) during the first and second quarters of each calendar year (104). Possible reasons for the increase in influenza vaccination levels among persons aged ≥65 years through the 1999-00 influenza season include 1) greater acceptance of preventive medical services by practitioners; 2) increased delivery and administration of vaccine by health-care providers and sources other than physicians; 3) new information regarding influenza vaccine effectiveness, cost-effectiveness, and safety; and 4) initiation of Medicare reimbursement for influenza vaccination in 1993 (8,14,15,89,90,106,107). Vaccine coverage increased more rapidly through the mid-1990s than during subsequent seasons (an average annual increase of 4 percentage points from 1988-89 to 1996-97 versus 1 percentage point from 1996-97 to 1999-00).

Estimated national adult vaccine coverage for the 2001-02 season (Table 2), the most recent season for which complete data are available, was 66% for adults aged ≥65 years and 34% for adults aged 50-64 years (104; unpublished data, CDC National Immunization Program, 2004). The estimated vaccination coverage among adults with high-risk conditions aged 18-49 years and 50-64 years was 23% and 44%, respectively, substantially lower than the Healthy People 2000 and 2010 objective of 60% (104,105,108). Continued annual monitoring is needed to determine the effects of vaccine supply delays, changes in influenza vaccination recommendations and target groups for vaccination, and other factors related to vaccination coverage among adults and children. The Healthy People 2010 objective is to achieve vaccination coverage for 90% of persons aged ≥65 years (108).

Reducing racial and ethnic health disparities, including disparities in vaccination coverage, is an overarching national goal (108). Although estimated influenza vaccination coverage for the 1999-00 season reached the highest levels recorded among older black, Hispanic, and white populations, vaccination levels among blacks and Hispanics continue to lag behind those among whites (104,109). Estimated influenza vaccination levels for 2001 among persons aged ≥65 years were 66% among non-Hispanic whites, 48% among non-Hispanic blacks, and 54% among Hispanics (109,110). Additional strategies are needed to achieve the Healthy People 2010 objectives among all racial and ethnic groups. In 1997 and 1998, vaccination coverage estimates among nursing home residents were 64%-82% and 83%, respectively (111,112). The Healthy People 2010 goal is to achieve influenza vaccination of 90% among nursing home residents, an increase from the Healthy People 2000 goal of 80% (105,108).

Reported vaccination levels are low among children at increased risk for influenza complications. One study conducted among patients in health maintenance organizations reported influenza vaccination percentages ranging from 9% to 10% among children with asthma (113). A 25% vaccination level was reported among children with severe to moderate asthma who attended an allergy and immunology clinic (114).
However, a study conducted in a pediatric clinic demonstrated an increase in the vaccination percentage of children with asthma or reactive airways disease from 5% to 32% after implementation of a reminder/recall system (115). One study reported 79% vaccination coverage among children attending a cystic fibrosis treatment center (116). Increasing vaccination coverage among persons who have high-risk conditions and are aged <65 years, including children at high risk, is the highest priority for expanding influenza vaccine use.

Annual vaccination is recommended for health-care workers. Nonetheless, NHIS reported vaccination coverage of only 34% and 38% among health-care workers in the 1997 and 2002 surveys, respectively (117,118; unpublished data, CDC National Immunization Program, 2004) (Table 2). Vaccination of health-care workers has been associated with reduced work absenteeism (9) and fewer deaths among nursing home patients (119,120).

Limited information is available regarding use of influenza vaccine among pregnant women. Among women aged 18-44 years without diabetes responding to the 2001 Behavioral Risk Factor Surveillance System, those reporting they were pregnant were less likely to report influenza vaccination during the previous 12 months (13.7%) than those who were not pregnant (16.8%) (121). Only 12% of pregnant women reported vaccination according to 2002 NHIS data, excluding pregnant women who reported diabetes, heart disease, lung disease, and other selected high-risk conditions (unpublished data, CDC National Immunization Program, 2004) (Table 2). Although not directly measuring influenza vaccination among women who were past the first trimester of pregnancy during influenza season, these data indicate low compliance with the ACIP recommendations for pregnant women. In a study of influenza vaccine acceptance by pregnant women, 71% of those who were offered the vaccine chose to be vaccinated (122). However, a 1999 survey of obstetricians and gynecologists determined that only 39% administered influenza vaccine to obstetric patients, although 86% agreed that pregnant women's risk for influenza-related morbidity and mortality increases during the last two trimesters (123).

[Table 2 footnotes: * As recommended by the Advisory Committee on Immunization Practices. † CI = confidence interval. § Persons categorized as being at high risk for influenza-related complications self-reported one or more of the following: 1) ever being told by a physician they had diabetes, emphysema, coronary heart disease, angina, heart attack, or other heart condition; 2) having a diagnosis of cancer during the past 12 months (excluding nonmelanoma skin cancer) or ever being told by a physician they have lymphoma, leukemia, or blood cancer; 3) being told by a physician they have chronic bronchitis or weak or failing kidneys; or 4) reporting an asthma episode or attack during the past 12 months. ¶ Aged 18-44 years, pregnant at the time of the survey, and without high-risk conditions. ** Adults were classified as health-care workers if they were currently employed in a health-care occupation or in a health-care industry setting, on the basis of standard occupation and industry categories recoded in groups by CDC's National Center for Health Statistics. †† Interviewed adult in each household containing at least one of the following: a child aged <2 years, an adult aged ≥65 years, or any person aged 2-64 years at high risk (see footnote §).]
Recent data indicate that self-report of influenza vaccination among adults, compared with extraction from the medical record, is both sensitive and specific. Patient self-reports should be accepted as evidence of influenza vaccination in clinical practice (124). However, information on the validity of parents' reports of pediatric influenza vaccination is not yet available.

# Recommendations for Using Inactivated and Live, Attenuated Influenza Vaccines

Both the inactivated influenza vaccine and LAIV can be used to reduce the risk for influenza. LAIV is approved for use only among healthy persons aged 5-49 years. Inactivated influenza vaccine is approved for persons aged ≥6 months, including those with high-risk conditions (see following sections on inactivated influenza vaccine and live, attenuated influenza vaccine).

# Target Groups for Vaccination

# Persons at Increased Risk for Complications

Vaccination with inactivated influenza vaccine is recommended for the following persons who are at increased risk for complications from influenza:
- persons aged ≥65 years;
- residents of nursing homes and other chronic-care facilities that house persons with chronic medical conditions;
- adults and children with chronic disorders of the pulmonary or cardiovascular systems;
- adults and children with chronic metabolic diseases (including diabetes mellitus), renal dysfunction, hemoglobinopathies, or immunosuppression;
- children and adolescents receiving long-term aspirin therapy (who are at risk for Reye syndrome after wild-type influenza infection);
- pregnant women; and
- children aged 6-23 months.

# Persons Aged 50-64 Years

Vaccination is recommended for persons aged 50-64 years because this group has an increased prevalence of persons with high-risk conditions. In 2000, approximately 42 million persons in the United States were aged 50-64 years, of whom 12 million (29%) had one or more high-risk medical conditions (125). Influenza vaccine has been recommended for this entire age group to increase the low vaccination rates among persons in this age group with high-risk conditions (see preceding section). Age-based strategies are more successful in increasing vaccine coverage than patient-selection strategies based on medical conditions. Persons aged 50-64 years without high-risk conditions also benefit from vaccination in the form of decreased rates of influenza illness, decreased work absenteeism, and decreased need for medical visits and medication, including antibiotics (9-12). Further, age 50 years marks a point when other preventive services begin and when routine assessment of vaccination and other preventive services has been recommended (126,127).

# Persons Who Can Transmit Influenza to Those at High Risk

Persons who are clinically or subclinically infected can transmit influenza virus to persons at high risk for complications from influenza. Decreasing transmission of influenza from caregivers and household contacts to persons at high risk might reduce influenza-related deaths among persons at high risk. Evidence from two studies indicates that vaccination of health-care personnel is associated with decreased deaths among nursing home patients (119,120). Health-care workers should be vaccinated against influenza annually. Facilities that employ health-care workers are strongly encouraged to provide vaccine to workers by using approaches that maximize vaccination rates. Doing so will protect health-care workers and their patients, will improve prevention and patient safety, and will reduce disease burden in communities. Health-care workers' influenza vaccination rates should be measured and reported regularly. Although rates of health-care worker vaccination are typically <40%, organized campaigns can attain higher rates of vaccination among this population with moderate effort (118).
The following groups should be vaccinated:
- physicians, nurses, and other personnel in both hospital and outpatient-care settings, including medical emergency response workers (e.g., paramedics and emergency medical technicians);
- employees of nursing homes and chronic-care facilities who have contact with patients or residents;
- employees of assisted living and other residences for persons in groups at high risk;
- persons who provide home care to persons in groups at high risk; and
- household contacts (including children) of persons in groups at high risk.

In addition, because children aged 0-23 months are at increased risk for influenza-related hospitalization (52-54), vaccination is recommended for their household contacts and out-of-home caregivers, particularly for contacts of children aged 0-5 months, because influenza vaccines have not been approved by FDA for use among children aged <6 months (see Healthy Young Children). Healthy persons aged 5-49 years in these groups who are not contacts of severely immunosuppressed persons (see Live, Attenuated Influenza Vaccine Recommendations) can receive either LAIV or inactivated influenza vaccine. All other persons in this group should receive inactivated influenza vaccine.

# Additional Information Regarding Vaccination of Specific Populations

# Pregnant Women

Influenza-associated excess deaths among pregnant women were documented during the pandemics of 1918-19 and 1957-58 (128-131). Case reports and limited studies also indicate that pregnancy can increase the risk for serious medical complications of influenza (132-136). An increased risk might result from 1) increases in heart rate, stroke volume, and oxygen consumption; 2) decreases in lung capacity; and 3) changes in immunologic function during pregnancy. A study of the effect of influenza during 17 interpandemic influenza seasons demonstrated that the relative risk for hospitalization for selected cardiorespiratory conditions among pregnant women enrolled in Medicaid increased from 1.4 during weeks 14-20 of gestation to 4.7 during weeks 37-42, in comparison with women who were 1-6 months postpartum (137). Women in their third trimester of pregnancy were hospitalized at a rate (i.e., 250/100,000 pregnant women) comparable to that of nonpregnant women who had high-risk medical conditions. Researchers estimate that an average of 1-2 hospitalizations can be prevented for every 1,000 pregnant women vaccinated. Because of the increased risk for influenza-related complications, women who will be pregnant during the influenza season should be vaccinated. Vaccination can occur in any trimester. One study of influenza vaccination of >2,000 pregnant women demonstrated no adverse fetal effects associated with influenza vaccine (138).

# Healthy Young Children

Studies indicate that rates of hospitalization are higher among young children than among older children when influenza viruses are in circulation (51-53,139,140). These increased hospitalization rates are comparable to rates for other groups considered at high risk for influenza-related complications. However, the interpretation of these findings has been confounded by co-circulation of respiratory syncytial viruses, which are a cause of serious respiratory viral illness among children and which frequently circulate during the same time as influenza viruses (141-143).
Two recent studies have attempted to separate the effects of respiratory syncytial viruses and influenza viruses on rates of hospitalization among children who do not have high-risk conditions (52,53). Both studies reported that otherwise healthy children aged <2 years, and possibly children aged 2-4 years, are at increased risk for influenza-related hospitalization compared with older healthy children (Table 1). Among the Tennessee Medicaid population during 1973-1993, healthy children aged 6 months-<3 years had rates of influenza-associated hospitalization comparable to or higher than rates among children aged 3-14 years with high-risk conditions (Table 1) (52,54). Another Tennessee study reported a hospitalization rate for laboratory-confirmed influenza of 3-4/1,000 healthy children aged <2 years per year (32).

Because children aged 6-23 months are at substantially increased risk for influenza-related hospitalizations, ACIP recommends vaccination of all children in this age group (144). ACIP continues to recommend influenza vaccination of persons aged ≥6 months who have high-risk medical conditions. The current inactivated influenza vaccine is not approved by FDA for use among children aged <6 months, the pediatric group at greatest risk for influenza-related complications (52). Vaccinating their household contacts and out-of-home caregivers might decrease the probability of influenza infection among these children. Beginning in March 2003, the group of children eligible for influenza vaccine coverage under the Vaccines for Children (VFC) program was expanded to include all VFC-eligible children aged 6-23 months and VFC-eligible children aged 2-18 years who are household contacts of children aged 0-23 months (145).

# Persons Infected with HIV

Limited information is available regarding the frequency and severity of influenza illness and the benefits of influenza vaccination among persons with HIV infection (146,147). However, a retrospective study of young and middle-aged women enrolled in Tennessee's Medicaid program determined that the attributable risk for cardiopulmonary hospitalizations among women with HIV infection was higher during influenza seasons than during the peri-influenza periods. The risk for hospitalization was higher for HIV-infected women than for women with other well-recognized high-risk conditions, including chronic heart and lung diseases (148). Another study estimated that the risk for influenza-related death was 9.4-14.6/10,000 persons with acquired immunodeficiency syndrome (AIDS), compared with 0.09-0.10/10,000 among all persons aged 25-54 years and 6.4-7.0/10,000 among persons aged ≥65 years (149). Other reports indicate that influenza symptoms might be prolonged and the risk for complications from influenza increased for certain HIV-infected persons (150-152).

Influenza vaccination has been demonstrated to produce substantial antibody titers against influenza among vaccinated HIV-infected persons who have minimal AIDS-related symptoms and high CD4+ T-lymphocyte cell counts (153-156). A limited, randomized, placebo-controlled trial determined that influenza vaccine was highly effective in preventing symptomatic, laboratory-confirmed influenza infection among HIV-infected persons with a mean of 400 CD4+ T-lymphocyte cells/mm3, although only a limited number of persons with low CD4+ T-lymphocyte cell counts were included; vaccination was effective among persons with >100 CD4+ cells and among those with <30,000 viral copies of HIV type-1/mL (152).
Among persons who have advanced HIV disease and low CD4+ T-lymphocyte cell counts, influenza vaccine might not induce protective antibody titers (155,156); a second dose of vaccine does not improve the immune response in these persons (156,157). One study determined that HIV RNA (ribonucleic acid) levels increased transiently in one HIV-infected person after influenza infection (158). Studies have demonstrated a transient (i.e., 2-4 week) increase in replication of HIV-1 in the plasma or peripheral blood mononuclear cells of HIV-infected persons after vaccine administration (155,159). Other studies using similar laboratory techniques have not documented a substantial increase in the replication of HIV (160-163). Deterioration of CD4+ T-lymphocyte cell counts or progression of HIV disease has not been demonstrated among HIV-infected persons who receive influenza vaccine, compared with unvaccinated persons (156,164). Limited information is available concerning the effect of antiretroviral therapy on increases in HIV RNA levels after either natural influenza infection or influenza vaccination (146,165). Because influenza can result in serious illness, and because influenza vaccination can result in the production of protective antibody titers, vaccination will benefit HIV-infected persons, including HIV-infected pregnant women.

# Breastfeeding Mothers

Influenza vaccine does not affect the safety of mothers who are breastfeeding or their infants. Breastfeeding does not adversely affect the immune response and is not a contraindication for vaccination.

# Travelers

The risk for exposure to influenza during travel depends on the time of year and destination. In the tropics, influenza can occur throughout the year. In the temperate regions of the Southern Hemisphere, the majority of influenza activity occurs during April-September. In temperate climate zones of the Northern and Southern Hemispheres, travelers also can be exposed to influenza during the summer, especially when traveling as part of large organized tourist groups (e.g., on cruise ships) that include persons from areas of the world where influenza viruses are circulating (166,167). Persons at high risk for complications of influenza who were not vaccinated with influenza vaccine during the preceding fall or winter should consider receiving influenza vaccine before travel if they plan to
- travel to the tropics,
- travel with organized tourist groups at any time of year, or
- travel to the Southern Hemisphere during April-September.

No information is available regarding the benefits of revaccinating persons before summer travel who were already vaccinated during the preceding fall. Persons at high risk who receive the previous season's vaccine before travel should be revaccinated with the current vaccine the following fall or winter. Persons aged ≥50 years and others at high risk should consult with their physicians before embarking on travel during the summer to discuss the symptoms and risks for influenza and the advisability of carrying antiviral medications for either prophylaxis or treatment of influenza.

# General Population

In addition to the groups for which annual influenza vaccination is recommended, physicians should administer influenza vaccine to any person who wishes to reduce the likelihood of becoming ill with influenza (the vaccine can be administered to children aged ≥6 months), depending on vaccine availability (see Influenza Vaccine Supply).
Persons who provide essential community services should be considered for vaccination to minimize disruption of essential activities during influenza outbreaks. Students and other persons in institutional settings (e.g., those who reside in dormitories) should be encouraged to receive vaccine to minimize the disruption of routine activities during epidemics.

# Comparison of LAIV with Inactivated Influenza Vaccine

Both inactivated influenza vaccine and LAIV are available to reduce the risk for influenza infection and illness. However, the vaccines also differ in key ways (Table 3).

# Major Similarities

LAIV and inactivated influenza vaccine contain strains of influenza viruses that are antigenically equivalent to the annually recommended strains: one influenza A (H3N2) virus, one A (H1N1) virus, and one B virus. Each year, one or more virus strains might be changed on the basis of global surveillance for influenza viruses and the emergence and spread of new strains. Viruses for both vaccines are grown in eggs. Both vaccines are administered annually to provide optimal protection against influenza infection (Table 3).

# Major Differences

Inactivated influenza vaccine contains killed viruses, whereas LAIV contains attenuated viruses still capable of replication. LAIV is administered intranasally by sprayer, whereas inactivated influenza vaccine is administered intramuscularly by injection. LAIV is more expensive than inactivated influenza vaccine. LAIV is approved for use only among healthy persons aged 5-49 years; inactivated influenza vaccine is approved for use among persons aged ≥6 months, including those who are healthy and those with chronic medical conditions (Table 3).

# Inactivated Influenza Vaccine Recommendations

Persons Who Should Not Be Vaccinated with Inactivated Influenza Vaccine. Inactivated influenza vaccine should not be administered to persons known to have anaphylactic hypersensitivity to eggs or to other components of the influenza vaccine without first consulting a physician (see Side Effects and Adverse Reactions). Prophylactic use of antiviral agents is an option for preventing influenza among such persons. However, persons who have a history of anaphylactic hypersensitivity to vaccine components but who are also at high risk for complications from influenza can benefit from vaccination after appropriate allergy evaluation and desensitization. Information regarding vaccine components is located in the package inserts from each manufacturer. Persons with acute febrile illness usually should not be vaccinated until their symptoms have abated. However, minor illnesses with or without fever do not contraindicate use of influenza vaccine, particularly among children with mild upper respiratory tract infection or allergic rhinitis.

# Dosage

Dosage recommendations vary according to age group (Table 4). Among previously unvaccinated children aged <9 years, two doses administered ≥1 month apart are recommended for satisfactory antibody responses. If possible, the second dose should be administered before December. If a child aged <9 years receiving vaccine for the first time does not receive a second dose of vaccine within the same season, only 1 dose of vaccine should be administered the following season; two doses are not required at that time. Among adults, studies have indicated limited or no improvement in antibody response when a second dose is administered during the same season (168-170).
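The dosing schedule above reduces to a simple rule. The following sketch encodes it for illustration; the function and its names are hypothetical, not part of any ACIP tool, and the age cutoffs assume the inactivated vaccine (approved for persons aged ≥6 months):

```python
# Illustrative encoding of the inactivated influenza vaccine dosing rule
# described above. Not an official ACIP algorithm.

def doses_this_season(age_years: float, previously_vaccinated: bool) -> int:
    """Number of inactivated influenza vaccine doses for the current season."""
    if age_years < 0.5:
        raise ValueError("inactivated influenza vaccine is not approved for ages <6 months")
    if age_years < 9 and not previously_vaccinated:
        # Two doses >=1 month apart; second dose ideally before December.
        return 2
    # Previously vaccinated children (even those who received only 1 dose in
    # their first season) and all persons aged >=9 years need a single dose.
    return 1

assert doses_this_season(2, previously_vaccinated=False) == 2
assert doses_this_season(2, previously_vaccinated=True) == 1
assert doses_this_season(35, previously_vaccinated=False) == 1
```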
Even when the current influenza vaccine contains one or more antigens administered in previous years, annual vaccination with the current vaccine is necessary because immunity declines during the year after vaccination (171,172). Vaccine prepared for a previous influenza season should not be administered to provide protection for the current season.

# Route

The intramuscular route is recommended for influenza vaccine. Adults and older children should be vaccinated in the deltoid muscle. A needle length ≥1 inch can be considered for these age groups because needles <1 inch might be of insufficient length to penetrate muscle tissue in certain adults and older children (173). Infants and young children should be vaccinated in the anterolateral aspect of the thigh (64). ACIP recommends a needle length of 7/8-1 inch for children aged <12 months for intramuscular vaccination into the anterolateral thigh. When injecting into the deltoid muscle among children with adequate deltoid muscle mass, a needle length of 7/8-1.25 inches is recommended (64).

# Side Effects and Adverse Reactions

When educating patients regarding potential side effects, clinicians should emphasize that 1) inactivated influenza vaccine contains noninfectious killed viruses and cannot cause influenza and 2) coincidental respiratory disease unrelated to influenza vaccination can occur after vaccination.

# Local Reactions

In placebo-controlled studies among adults, the most frequent side effect of vaccination is soreness at the vaccination site (affecting 10%-64% of patients) that lasts <2 days (12,174-176). These local reactions typically are mild and rarely interfere with the person's ability to conduct usual daily activities. One blinded, randomized, cross-over study among 1,952 adults and children with asthma demonstrated that only body aches were reported more frequently after inactivated influenza vaccine (25.1%) than after placebo injection (20.8%) (177). One study (79) reported local pain and swelling among 20%-28% of children with asthma aged 9 months-18 years, and another study (77) reported local reactions among 23% of children aged 6 months-4 years with chronic heart or lung disease. A different study (76) reported no difference in local reactions among 53 children aged 6 months-6 years with high-risk medical conditions or among 305 healthy children aged 3-12 years in a placebo-controlled trial of inactivated influenza vaccine. In a study of 12 children aged 5-32 months, no substantial local or systemic reactions were noted (178).

# Systemic Reactions

Fever, malaise, myalgia, and other systemic symptoms can occur after vaccination with inactivated vaccine and most often affect persons who have had no prior exposure to the influenza virus antigens in the vaccine (e.g., young children) (179,180).

[Table 3 footnotes: * Populations at high risk from complications of influenza infection include persons aged ≥65 years; residents of nursing homes and other chronic-care facilities that house persons with chronic medical conditions; adults and children with chronic disorders of the pulmonary or cardiovascular systems; adults and children with chronic metabolic diseases (including diabetes mellitus), renal dysfunction, hemoglobinopathies, or immunosuppression; children and adolescents receiving long-term aspirin therapy (at risk for developing Reye syndrome after wild-type influenza infection); pregnant women; and children aged 6-23 months. † No data are available regarding effect on safety or efficacy. § Inactivated influenza vaccine coadministration has been evaluated systematically only among adults, with pneumococcal polysaccharide vaccine.]
These reactions begin 6-12 hours after vaccination and can persist for 1-2 days. Recent placebo-controlled trials demonstrate that among older persons and healthy young adults, administration of split-virus influenza vaccine is not associated with higher rates of systemic symptoms (e.g., fever, malaise, myalgia, and headache) when compared with placebo injections (12,174-176).

Less information from published studies is available for children than for adults. However, in a randomized cross-over study among both children and adults with asthma, no increase in asthma exacerbations was reported for either age group (177). An analysis of 215,600 children aged <18 years and 8,476 children aged 6-23 months enrolled in one of five health maintenance organizations reported no increase in biologically plausible, medically attended events during the 2 weeks after inactivated influenza vaccination, compared with control periods 3-4 weeks before and after vaccination (181). In a study of 791 healthy children (68), postvaccination fever was noted among 11.5% of children aged 1-5 years, 4.6% of children aged 6-10 years, and 5.1% of children aged 11-15 years. Among children with high-risk medical conditions, one study of 52 children aged 6 months-4 years reported fever among 27% and irritability and insomnia among 25% (77), and a study among 33 children aged 6-18 months reported that one child had irritability and one had a fever and seizure after vaccination (182). No placebo comparison was made in these studies. However, in pediatric trials of A/New Jersey/76 swine influenza vaccine, no difference was reported between placebo and split-virus vaccine groups in febrile reactions after injection, although the vaccine was associated with mild local tenderness or erythema (76).

Limited data regarding potential adverse events after influenza vaccination are available from the Vaccine Adverse Event Reporting System (VAERS). During January 1, 1991-January 23, 2003, VAERS received 1,072 reports of adverse events among children aged <18 years, including 174 reports of adverse events among children aged 6-23 months. The number of influenza vaccine doses received by children during this period is unknown. The most frequently reported events among children were fever, injection-site reactions, and rash (unpublished data, CDC, 2003). Because of the limitations of spontaneous reporting systems, determining causality for specific types of adverse events, with the exception of injection-site reactions, is usually not possible by using VAERS data alone. Health-care professionals should promptly report to VAERS all clinically significant adverse events after influenza vaccination of children, even if the health-care professional is not certain that the vaccine caused the event. The Institute of Medicine has specifically recommended reporting of potential neurologic complications (e.g., demyelinating disorders such as Guillain-Barré syndrome [GBS]), although no evidence exists of a causal relationship between influenza vaccine and neurologic disorders in children.

Immediate, presumably allergic, reactions (e.g., hives, angioedema, allergic asthma, and systemic anaphylaxis) rarely occur after influenza vaccination (183).
These reactions probably result from hypersensitivity to certain vaccine components; the majority of reactions probably are caused by residual egg protein. Although current influenza vaccines contain only a limited quantity of egg protein, this protein can induce immediate hypersensitivity reactions among persons who have severe egg allergy. Persons who have had hives or swelling of the lips or tongue, or who have experienced acute respiratory distress or collapse after eating eggs, should consult a physician for appropriate evaluation to help determine whether vaccine should be administered. Persons who have documented immunoglobulin E (IgE)-mediated hypersensitivity to eggs, including those who have had occupational asthma or other allergic responses to egg protein, might also be at increased risk for allergic reactions to influenza vaccine, and consultation with a physician should be considered. Protocols have been published for safely administering influenza vaccine to persons with egg allergies (184-186).

[Table 4 footnotes: * Immunogenicity and side effects of split- and whole-virus vaccines are similar among adults when vaccines are administered at the recommended dosage. § For adults and older children, the recommended site of vaccination is the deltoid muscle; the preferred site for infants and young children is the anterolateral aspect of the thigh. ¶ Two doses administered at least 1 month apart are recommended for children aged <9 years who are receiving influenza vaccine for the first time.]

Hypersensitivity reactions to any vaccine component can occur. Although exposure to vaccines containing thimerosal can lead to induction of hypersensitivity, the majority of patients do not have reactions to thimerosal when it is administered as a component of vaccines, even when patch or intradermal tests for thimerosal indicate hypersensitivity (187,188). When reported, hypersensitivity to thimerosal usually has consisted of local, delayed hypersensitivity reactions (187).

# Guillain-Barré Syndrome

The 1976 swine influenza vaccine was associated with an increased frequency of GBS (189,190). Among persons who received the swine influenza vaccine in 1976, the rate of GBS was <10 cases/1 million persons vaccinated, and the risk was higher among persons aged ≥25 years than among persons aged <25 years (189). Evidence for a causal relation of GBS with subsequent vaccines prepared from other influenza viruses is unclear. Obtaining strong epidemiologic evidence for a possible limited increase in risk is difficult for a condition as rare as GBS, which has an annual incidence of 10-20 cases/1 million adults (191). More definitive data probably will require using other methodologies (e.g., laboratory studies of the pathophysiology of GBS).

During three of four influenza seasons studied during 1977-1991, the overall relative risk estimates for GBS after influenza vaccination were slightly elevated but were not statistically significant in any of these studies (192-194). However, in a study of the 1992-93 and 1993-94 seasons, the overall relative risk for GBS was 1.7 (95% CI = 1.0-2.8; p = 0.04) during the 6 weeks after vaccination, representing approximately 1 additional case of GBS/1 million persons vaccinated. The combined number of GBS cases peaked 2 weeks after vaccination (195). Thus, investigations to date indicate no substantial increase in GBS associated with influenza vaccines (other than the swine influenza vaccine in 1976) and suggest that, if influenza vaccine does pose a risk, it is probably slightly more than one additional case/1 million persons vaccinated.
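The "approximately 1 additional case per million vaccinees" figure can be reproduced from the numbers above. Taking an assumed background incidence at the midpoint of the 10-20 cases/1 million adults/year range and applying the relative risk of 1.7 over the 6-week postvaccination window:

```python
# Back-of-the-envelope excess GBS risk implied by a relative risk of 1.7
# in the 6 weeks after vaccination. The background incidence used here is
# an assumed midpoint of the 10-20 cases/1 million adults/year range
# cited above, not a measured value.

background_per_million_year = 15.0
baseline_6_weeks = background_per_million_year * 6 / 52   # expected cases in a 6-week window
excess = (1.7 - 1.0) * baseline_6_weeks                   # attributable cases if RR = 1.7
print(f"~{excess:.1f} excess GBS cases per 1 million persons vaccinated")
```

Under these assumptions, the excess is roughly 1.2 cases per million vaccinees, the same order as the estimate quoted above, which underscores how small the attributable risk is relative to the risk for severe influenza.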
Cases of GBS after influenza infection have been reported, but no epidemiologic studies have documented such an association (196,197). Substantial evidence exists that multiple infectious illnesses, most notably Campylobacter jejuni, as well as upper respiratory tract infections are associated with GBS (191,(198)(199)(200). Even if GBS were a true side effect of vaccination in the years after 1976, the estimated risk for GBS of approximately 1 additional case/1 million persons vaccinated is substantially less than the risk for severe influenza, which can be prevented by vaccination among all age groups, especially persons aged >65 years and those who have medical indications for influenza vaccination (Table 1) (see Hospitalizations and Deaths from Influenza). The potential benefits of influenza vaccination in preventing serious illness, hospitalization, and death substantially outweigh the possible risks for experiencing vaccine-associated GBS. The average case fatality ratio for GBS is 6% and increases with age (191,201). No evidence indicates that the case fatality ratio for GBS differs among vaccinated persons and those not vaccinated.

The incidence of GBS among the general population is low, but persons with a history of GBS have a substantially greater likelihood of subsequently experiencing GBS than persons without such a history (192,202). Thus, the likelihood of coincidentally experiencing GBS after influenza vaccination is expected to be greater among persons with a history of GBS than among persons with no history of this syndrome. Whether influenza vaccination specifically might increase the risk for recurrence of GBS is unknown; therefore, avoiding vaccinating persons who are not at high risk for severe influenza complications and who are known to have experienced GBS within 6 weeks after a previous influenza vaccination is prudent. As an alternative, physicians might consider using influenza antiviral chemoprophylaxis for these persons. Although data are limited, for the majority of persons who have a history of GBS and who are at high risk for severe complications from influenza, the established benefits of influenza vaccination justify yearly vaccination.

# Live, Attenuated Influenza Vaccine Recommendations

# Background

Description and Action Mechanisms. LAIVs have been in development since the 1960s in the United States, where they have been evaluated as mono-, bi-, and trivalent formulations (203)(204)(205)(206)(207). The LAIV licensed for use in the United States beginning in 2003 is produced by MedImmune, Inc. (Gaithersburg, Maryland) and marketed under the name FluMist™. It is a live, trivalent, intranasally administered vaccine that is
- attenuated, producing mild or no signs or symptoms related to influenza virus infection;
- temperature-sensitive, a property that limits the replication of the vaccine viruses at 38°C-39°C, and thus restricts LAIV viruses from replicating efficiently in human lower airways; and
- cold-adapted, replicating efficiently at 25°C, a temperature that is permissive for replication of LAIV viruses but restrictive for replication of many wild-type viruses.

In animal studies, LAIV viruses replicate in the mucosa of the nasopharynx, inducing protective immunity against viruses included in the vaccine, but replicate inefficiently in the lower airways or lungs. The first step in developing an LAIV was the derivation of two stably attenuated master donor viruses (MDV), one for type A and one for type B influenza viruses.
The two MDVs each acquired the cold-adapted, temperature-sensitive, attenuated phenotypes through serial passage in viral culture conducted at progressively lower temperatures. The vaccine viruses in LAIV are reassortant viruses containing genes from these MDVs that confer attenuation, temperature sensitivity, and cold adaptation and genes from the recommended contemporary wild-type influenza viruses, encoding the surface antigens hemagglutinin (HA) and neuraminidase (NA). Thus, MDVs provide the stably attenuated vehicles for presenting influenza HA and NA antigens, to which the protective antibody response is directed, to the immune system. The reassortant vaccine viruses are grown in embryonated hens' eggs. After the vaccine is formulated and inserted into individual sprayers for nasal administration, the vaccine must be stored at -15°C or colder.

The immunogenicity of the approved LAIV has been assessed in multiple studies (96,(208)(209)(210)(211)(212)(213), which included approximately 100 children aged 5-17 years and approximately 300 adults aged 18-49 years. LAIV virus strains replicate primarily in nasopharyngeal epithelial cells. The protective mechanisms induced by vaccination with LAIV are not completely understood but appear to involve both serum and nasal secretory antibodies. No single laboratory measurement closely correlates with protective immunity induced by LAIV.

Shedding and Transmission of Vaccine Viruses. Available data indicate that both children and adults vaccinated with LAIV can shed vaccine viruses for >2 days after vaccination, although in lower titers than typically occur with shedding of wild-type influenza viruses. Shedding should not be equated with person-to-person transmission of vaccine viruses, although, in rare instances, shed vaccine viruses can be transmitted from vaccinees to nonvaccinated persons. One unpublished study in a child care center setting assessed transmissibility of vaccine viruses from 98 vaccinated to 99 unvaccinated subjects, all aged 8-36 months. Eighty percent of vaccine recipients shed one or more virus strains, with a mean of 7.6 days' duration (214). One influenza type B vaccine strain isolate was recovered from a placebo recipient and was confirmed to be vaccine-type virus. The type B isolate retained the cold-adapted, temperature-sensitive, attenuated phenotype, and it possessed the same genetic sequence as a virus shed from a vaccine recipient in the same children's play group. The placebo recipient from whom the influenza type B vaccine virus was isolated did not exhibit symptoms that were different from those experienced by vaccine recipients. The estimated probability of acquiring vaccine virus after close contact with a single LAIV recipient in this child care population was 0.58%-2.4%.

One study assessing shedding of vaccine viruses in 20 healthy vaccinated adults aged 18-49 years demonstrated that the majority of shedding occurred within the first 3 days after vaccination, although one subject was noted to shed virus on day 7 after vaccine receipt. No subject shed vaccine viruses >10 days after vaccination. Duration or type of symptoms associated with receipt of LAIV did not correlate with duration of shedding of vaccine viruses. Person-to-person transmission of vaccine viruses was not assessed in this study (215).

Stability of Vaccine Viruses. In clinical trials, viruses shed by vaccine recipients have been phenotypically stable.
In one study, nasal and throat swab specimens were collected from 17 study participants for 2 weeks after vaccine receipt (216). Virus isolates were analyzed by multiple genetic techniques. All isolates retained the LAIV genotype after replication in the human host, and all retained the cold-adapted and temperature-sensitive phenotypes.

# Using Live, Attenuated Influenza Vaccine

LAIV is an option for vaccination of healthy persons aged 5-49 years, including persons in close contact with groups at high risk and those wanting to avoid influenza. Possible advantages of LAIV include its potential to induce a broad mucosal and systemic immune response, its ease of administration, and the acceptability of an intranasal rather than intramuscular route of administration.

# Persons Who Should Not Be Vaccinated with LAIV

The following populations should not be vaccinated with LAIV:
- persons aged <5 years or persons aged >50 years;
- persons with asthma, reactive airways disease, or other chronic disorders of the pulmonary or cardiovascular systems; persons with other underlying medical conditions, including such metabolic diseases as diabetes, renal dysfunction, and hemoglobinopathies; or persons with known or suspected immunodeficiency diseases or who are receiving immunosuppressive therapies;
- children or adolescents receiving aspirin or other salicylates (because of the association of Reye syndrome with wild-type influenza virus infection);
- persons with a history of GBS;
- pregnant women; or
- persons with a history of hypersensitivity, including anaphylaxis, to any of the components of LAIV or to eggs.

# Close Contacts of Persons at High Risk for Complications from Influenza

Close contacts of persons at high risk for complications from influenza should receive influenza vaccine to reduce transmission of wild-type influenza viruses to persons at high risk. Use of inactivated influenza vaccine is preferred for vaccinating household members, health-care workers, and others who have close contact with severely immunosuppressed persons (e.g., patients with hematopoietic stem cell transplants) during those periods in which the immunosuppressed person requires care in a protective environment. The rationale for not using LAIV among health-care workers caring for such patients is the theoretical risk that a live, attenuated vaccine virus could be transmitted to the severely immunosuppressed person and cause disease. No preference exists for inactivated influenza vaccine use by health-care workers or other persons who have close contact with persons with lesser degrees of immunosuppression (e.g., persons with diabetes, persons with asthma taking corticosteroids, or persons infected with human immunodeficiency virus), and no preference exists for inactivated influenza vaccine use by health-care workers or other healthy persons aged 5-49 years in close contact with all other groups at high risk. If a health-care worker receives LAIV, that worker should refrain from contact with severely immunosuppressed patients as described previously for 7 days after vaccine receipt. Hospital visitors who have received LAIV should refrain from contact with severely immunosuppressed persons for 7 days after vaccination; however, such persons need not be excluded from visitation of patients who are not severely immunosuppressed.

# Personnel Who May Administer LAIV

Low-level introduction of vaccine viruses into the environment is likely unavoidable when administering LAIV. The risk of acquiring vaccine viruses from the environment is unknown but likely to be limited. Severely immunosuppressed persons should not administer LAIV. However, other persons at high risk for influenza complications may administer LAIV. These include persons with underlying medical conditions placing them at high risk or who are likely to be at risk, including pregnant women, persons with asthma, and persons aged >50 years.

# LAIV Dosage and Administration

LAIV is intended for intranasal administration only and should not be administered by the intramuscular, intradermal, or intravenous route. LAIV must be stored at -15°C or colder.
LAIV should not be stored in a frost-free freezer (because the temperature might cycle above -15°C), unless a manufacturer-supplied freezer box is used. LAIV must be thawed before administration. This can be accomplished by holding an individual sprayer in the palm of the hand until thawed, with subsequent immediate administration. Alternatively, the vaccine can be thawed in a refrigerator and stored at 2°C-8°C for <24 hours before use. Vaccine should not be refrozen after thawing. LAIV is supplied in a prefilled single-use sprayer containing 0.5 mL of vaccine. Approximately 0.25 mL (i.e., half of the total sprayer contents) is sprayed into the first nostril while the recipient is in the upright position. An attached dose-divider clip is removed from the sprayer to administer the second half of the dose into the other nostril. If the vaccine recipient sneezes after administration, the dose should not be repeated.

LAIV should be administered annually according to the following schedule (a sketch of this logic follows the list):
- Children aged 5-8 years previously unvaccinated at any time with either LAIV or inactivated influenza vaccine should receive 2 doses of LAIV separated by 6-10 weeks.
- Children aged 5-8 years previously vaccinated at any time with either LAIV or inactivated influenza vaccine should receive 1 dose of LAIV. They do not require a second dose.
- Persons aged 9-49 years should receive 1 dose of LAIV.
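The schedule above reduces to a short rule. The following sketch is illustrative only (the function name and return strings are not from ACIP); it assumes "previously vaccinated" means receipt of any prior dose of either LAIV or inactivated influenza vaccine, as stated in the list.

```python
# Minimal sketch of the annual LAIV dosing schedule described above.

def laiv_schedule(age_years: int, previously_vaccinated: bool) -> str:
    if age_years < 5 or age_years > 49:
        return "not a candidate for LAIV (approved for healthy persons aged 5-49 years)"
    if age_years <= 8 and not previously_vaccinated:
        return "2 doses, separated by 6-10 weeks"
    return "1 dose"

print(laiv_schedule(6, previously_vaccinated=False))   # 2 doses, separated by 6-10 weeks
print(laiv_schedule(6, previously_vaccinated=True))    # 1 dose
print(laiv_schedule(30, previously_vaccinated=False))  # 1 dose
```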
LAIV can be administered to persons with minor acute illnesses (e.g., diarrhea or mild upper respiratory tract infection with or without fever). However, if clinical judgment indicates nasal congestion is present that might impede delivery of the vaccine to the nasopharyngeal mucosa, deferral of administration should be considered until resolution of the illness.

Whether concurrent administration of LAIV with other vaccines affects the safety or efficacy of either LAIV or the simultaneously administered vaccine is unknown. In the absence of specific data indicating interference, following the ACIP general recommendations for immunization is prudent (64). Inactivated vaccines do not interfere with the immune response to other inactivated vaccines or to live vaccines. An inactivated vaccine can be administered either simultaneously or at any time before or after LAIV. Two live vaccines not administered on the same day should be administered >4 weeks apart when possible.

# LAIV and Use of Influenza Antiviral Medications

The effect on safety and efficacy of LAIV coadministration with influenza antiviral medications has not been studied. However, because influenza antivirals reduce replication of influenza viruses, LAIV should not be administered until 48 hours after cessation of influenza antiviral therapy, and influenza antiviral medications should not be administered for 2 weeks after receipt of LAIV.

# LAIV Storage

LAIV must be stored at -15°C or colder. LAIV should not be stored in a frost-free freezer because the temperature might cycle above -15°C, unless a manufacturer-supplied freezer box or other strategy is used. LAIV can be thawed in a refrigerator and stored at 2°C-8°C for <24 hours before use. It should not be refrozen after thawing. Additional information is available from Wyeth Product Quality (1-800-411-0086) or at http://www.FluMist.com.

# Side Effects and Adverse Reactions

Twenty prelicensure clinical trials assessed the safety of the approved LAIV. In these combined studies, approximately 28,000 doses of the vaccine were administered to >20,000 subjects. A subset of these trials consisted of randomized, placebo-controlled studies in which >4,000 healthy children aged 5-17 years and >2,000 healthy adults aged 18-49 years were vaccinated. The incidence of adverse events possibly complicating influenza (e.g., pneumonia, bronchitis, bronchiolitis, or central nervous system events) was not statistically different among LAIV and placebo recipients aged 5-49 years.

Children. Signs and symptoms reported more often among vaccine recipients than placebo recipients included runny nose or nasal congestion (20%-75%), headache (2%-46%), fever (0%-26%), vomiting (3%-13%), abdominal pain (2%), and myalgias (0%-21%) (208,211,213,(217)(218)(219). These symptoms were associated more often with the first dose and were self-limited. In a subset of healthy children aged 60-71 months from one clinical trial (92,93), certain signs and symptoms were reported more often among LAIV recipients after the first dose (n = 214) than placebo recipients (n = 95) (e.g., runny nose, 48.1% versus 44.2%; headache, 17.8% versus 11.6%; vomiting, 4.7% versus 3.2%; myalgias, 6.1% versus 4.2%), but these differences were not statistically significant. Unpublished data from a study including subjects aged 1-17 years indicated an increase in asthma or reactive airways disease in the subset aged 12-59 months. Because of this, LAIV is not approved for use among children aged <60 months.

Adults. Among adults, runny nose or nasal congestion (28%-78%), headache (16%-44%), and sore throat (15%-27%) have been reported more often among vaccine recipients than placebo recipients (94,220,221). In one clinical trial (94), among a subset of healthy adults aged 18-49 years, signs and symptoms reported more frequently among LAIV recipients (n = 2,548) than placebo recipients (n = 1,290) within 7 days after each dose included cough (13.9% versus 10.8%); runny nose (44.5% versus 27.1%); sore throat (27.8% versus 17.1%); chills (8.6% versus 6.0%); and tiredness/weakness (25.7% versus 21.6%).

Safety Among Groups at High Risk from Influenza-Related Morbidity. Until additional data are acquired, persons at high risk for experiencing complications from influenza infection (e.g., immunocompromised patients; patients with asthma, cystic fibrosis, or chronic obstructive pulmonary disease; or persons aged >65 years) should not be vaccinated with LAIV. Protection from influenza among these groups should be accomplished by using inactivated influenza vaccine.

Serious Adverse Events. Serious adverse events among healthy children aged 5-17 years or healthy adults aged 18-49 years occurred at a rate of <1%. Surveillance should continue for adverse events that might not have been detected in previous studies. Health-care professionals should promptly report all clinically significant adverse events after LAIV administration to VAERS, as recommended for inactivated influenza vaccine.

# Recommended Vaccines for Different Age Groups

When vaccinating children aged 6 months-3 years, health-care providers should use inactivated influenza vaccine that has been approved by FDA for this age group. Inactivated influenza vaccine from Aventis Pasteur, Inc. (Fluzone split-virus) is approved for use among persons aged >6 months. Inactivated influenza vaccine from Chiron (Fluvirin) is labeled in the United States for use only among persons aged >4 years because data to demonstrate efficacy among younger persons have not been provided to FDA.
Live, attenuated influenza vaccine from MedImmune (FluMist) is approved for use by healthy persons aged 5-49 years (Table 5).

# Timing of Annual Influenza Vaccination

The annual supply of inactivated influenza vaccine and the timing of its distribution cannot be guaranteed in any year. Information regarding the supply of 2004-05 vaccine might not be available until late summer or early fall 2004. To allow vaccine providers to plan for the upcoming vaccination season, taking into account the yearly possibility of vaccine delays or shortages and the need to ensure vaccination of persons at high risk and their contacts, ACIP recommends that vaccine campaigns conducted in October focus their efforts primarily on persons at increased risk for influenza complications and their contacts, including health-care workers. Campaigns conducted in November and later should continue to vaccinate persons at high risk and their contacts, but also vaccinate other persons who wish to decrease their risk for influenza infection. Vaccination efforts for all groups should continue into December and beyond. CDC and other public health agencies will assess the vaccine supply on a continuing basis throughout the manufacturing period and will make recommendations in the summer preceding the 2004-05 influenza season regarding the need for tiered timing of vaccination of different risk groups.

# Vaccination in October and November

The optimal time to vaccinate is usually during October-November. ACIP recommends that vaccine providers focus their vaccination efforts in October and earlier primarily on persons aged >50 years, persons aged <50 years at increased risk for influenza-related complications (including children aged 6-23 months), household contacts of persons at high risk (including out-of-home caregivers and household contacts of children aged 0-23 months), and health-care workers. Vaccination of children aged <9 years who are receiving vaccine for the first time should also begin in October or earlier because those persons need a booster dose 1 month after the initial dose. Efforts to vaccinate other persons who wish to decrease their risk for influenza infection should begin in November; however, if such persons request vaccination in October, vaccination should not be deferred. Materials to assist providers in prioritizing early vaccine are available at (see also Travelers in this report).
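For planning purposes, the October prioritization above reduces to a short screening rule. The sketch below is illustrative only; the function name and parameters are hypothetical, and "high risk" refers to the conditions conferring increased risk for influenza-related complications discussed in this report.

```python
# Hedged sketch of the October vaccination triage described above; not an ACIP tool.

def offer_vaccine_in_october(age_years: int,
                             high_risk: bool,
                             household_contact_of_high_risk: bool,
                             health_care_worker: bool,
                             first_time_vaccinee_aged_under_9: bool) -> bool:
    """Return True if vaccination should be offered in October or earlier."""
    return (age_years >= 50
            or high_risk                           # includes children aged 6-23 months
            or household_contact_of_high_risk      # includes contacts of children aged 0-23 months
            or health_care_worker
            or first_time_vaccinee_aged_under_9)   # needs a booster dose 1 month later

# All other persons are offered vaccine beginning in November, although a request
# for vaccination in October should not be deferred.
```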
# Timing of Organized Vaccination Campaigns

Persons planning substantial organized vaccination campaigns should consider scheduling these events after mid-October because the availability of vaccine in any location cannot be ensured consistently in early fall. Scheduling campaigns after mid-October will minimize the need for cancellations because vaccine is unavailable. Campaigns conducted before November should focus efforts on vaccination of persons aged >50 years, persons aged <50 years at increased risk for influenza-related complications (including children aged 6-23 months), health-care workers, and household contacts of persons at high risk (including children aged 0-23 months) to the extent feasible.

# Vaccination in December and Later

After November, many persons who should or want to receive influenza vaccine remain unvaccinated. In addition, substantial amounts of vaccine have remained unused during three of the past four influenza seasons. To improve vaccine coverage, influenza vaccine should continue to be offered in December and throughout the influenza season as long as vaccine supplies are available, even after influenza activity has been documented in the community. In the United States, seasonal influenza activity can begin to increase as early as October or November, but influenza activity has not reached peak levels in the majority of recent seasons until late December-early March (Table 6). Therefore, although the timing of influenza activity can vary by region, vaccine administered after November is likely to be beneficial in the majority of influenza seasons. Adults develop peak antibody protection against influenza infection 2 weeks after vaccination (222,223).

# Vaccination Before October

To avoid missed opportunities for vaccination of persons at high risk for serious complications, such persons should be offered vaccine beginning in September during routine health-care visits or during hospitalizations, if vaccine is available. In facilities housing older persons (e.g., nursing homes), vaccination before October typically should be avoided because antibody levels in such persons can begin to decline within a limited time after vaccination (224). In addition, children aged <9 years who have not been previously vaccinated and who need 2 doses before the start of the influenza season can receive their first dose in September or earlier.

# Strategies for Implementing Vaccination Recommendations in Health-Care Settings

Successful vaccination programs combine publicity and education for health-care workers and other potential vaccine recipients, a plan for identifying persons at high risk, use of reminder/recall systems, and efforts to remove administrative and financial barriers that prevent persons from receiving the vaccine, including use of standing orders programs (19,225). Using standing orders programs is recommended for long-term-care facilities (e.g., nursing homes and skilled nursing facilities), hospitals, and home health agencies to ensure the administration of recommended vaccinations for adults (226). Standing orders programs for both influenza and pneumococcal vaccination should be conducted under the supervision of a licensed practitioner according to a physician-approved facility or agency policy by health-care personnel trained to screen patients for contraindications to vaccination, administer vaccine, and monitor for adverse events. The Centers for Medicare and Medicaid Services (CMS) has removed the physician signature requirement for the administration of influenza and pneumococcal vaccines to Medicare and Medicaid patients in hospitals, long-term-care facilities, and home health agencies (226). To the extent allowed by local and state law, these facilities and agencies may implement standing orders for influenza and pneumococcal vaccination of Medicare- and Medicaid-eligible patients. Other settings (e.g., outpatient facilities, managed care organizations, assisted living facilities, correctional facilities, pharmacies, and adult workplaces) are encouraged to introduce standing orders programs as well (20). Persons for whom influenza vaccine is recommended can be identified and vaccinated in the settings described in the following sections.
# Outpatient Facilities Providing Ongoing Care

Staff in facilities providing ongoing medical care (e.g., physicians' offices, public health clinics, employee health clinics, hemodialysis centers, hospital specialty-care clinics, and outpatient rehabilitation programs) should identify and label the medical records of patients who should receive vaccination. Vaccine should be offered during visits beginning in September and throughout the influenza season. The offer of vaccination and its receipt or refusal should be documented in the medical record. Patients for whom vaccination is recommended and who do not have regularly scheduled visits during the fall should be reminded by mail, telephone, or other means of the need for vaccination.

# Outpatient Facilities Providing Episodic or Acute Care

Beginning each September, acute health-care facilities (e.g., emergency rooms and walk-in clinics) should offer vaccinations to persons for whom vaccination is recommended or provide written information regarding why, where, and how to obtain the vaccine. This written information should be available in languages appropriate for the populations served by the facility.

# Nursing Homes and Other Residential Long-Term-Care Facilities

During October and November each year, vaccination should be routinely provided to all residents of chronic-care facilities with the concurrence of attending physicians. Consent for vaccination should be obtained from the resident or a family member at the time of admission to the facility or anytime afterwards. All residents should be vaccinated at one time, preceding the influenza season. Residents admitted through March after completion of the facility's vaccination program should be vaccinated at the time of admission.

# Acute-Care Hospitals

Persons of all ages (including children) with high-risk conditions and persons aged >50 years who are hospitalized at any time during September-March should be offered and strongly encouraged to receive influenza vaccine before they are discharged. In one study, 39%-46% of adult patients hospitalized during the winter with influenza-related diagnoses had been hospitalized during the preceding autumn (227). Thus, the hospital serves as a setting in which persons at increased risk for subsequent hospitalization can be identified and vaccinated. However, vaccination of persons at high risk during or after their hospitalizations is often not done. In a study of hospitalized Medicare patients, only 31.6% were vaccinated before admission, 1.9% during admission, and 10.6% after admission (228). Using standing orders in hospitals increases vaccination rates among hospitalized persons (229).

# Visiting Nurses and Others Providing Home Care to Persons at High Risk

Beginning in September, nursing-care plans should identify patients for whom vaccination is recommended, and vaccine should be administered in the home, if necessary. Caregivers and other persons in the household (including children) should be referred for vaccination.

# Other Facilities Providing Services to Persons Aged >50 Years

Beginning in October, such facilities as assisted living housing, retirement communities, and recreation centers should offer unvaccinated residents and attendees vaccination on-site before the influenza season. Staff education should emphasize the need for influenza vaccine.

# Health-Care Personnel

Beginning in October each year, health-care facilities should offer influenza vaccinations to all personnel, including night and weekend staff.
Particular emphasis should be placed on providing vaccinations to persons who care for members of groups at high risk. Efforts should be made to educate healthcare personnel regarding the benefits of vaccination and the potential health consequences of influenza illness for themselves and their patients. All health-care personnel should be provided convenient access to influenza vaccine at the work site, free of charge, as part of employee health programs (118). # Influenza Vaccine Supply During the 2002-03 season, approximately 95 million doses of influenza vaccine were produced, but 12 million doses went unused and had to be destroyed. During the 2003-04 season, approximately 87 million doses of vaccine were produced. During that season, shortages of vaccine were noted in multiple regions of the United States after an unprecedented demand for vaccine lasted longer into the season than usual, caused in part by increased media attention to influenza. On the basis of early projections, manufacturers anticipate production of 90-100 million doses of vaccine for the 2004-05 season. Influenza vaccine delivery delays or vaccine shortages remain possible in part because of the inherent critical time constraints in manufacturing the vaccine given the annual updating of the influenza vaccine strains. Steps being taken to address possible future delays or vaccine shortages include identification and implementation of ways to expand the influenza vaccine supply and improvement of targeted delivery of vaccine to groups at high risk when delays or shortages are expected. # Future Directions ACIP plans to review new vaccination strategies for improving prevention and control of influenza, including the possibility of expanding recommendations for use of influenza vaccines. In addition, strategies for regularly monitoring vaccine effectiveness will be reviewed. # Recommendations for Using Antiviral Agents for Influenza Antiviral drugs for influenza are an adjunct to influenza vaccine for controlling and preventing influenza. However, these agents are not a substitute for vaccination. Four licensed influenza antiviral agents are available in the United States: amantadine, rimantadine, zanamivir, and oseltamivir. Amantadine and rimantadine are chemically related antiviral drugs known as adamantanes with activity against influenza A viruses but not influenza B viruses. Amantadine was approved in 1966 for chemoprophylaxis of influenza A (H2N2) infection and was later approved in 1976 for treatment and chemoprophylaxis of influenza type A virus infections among adults and children aged >1 year. Rimantadine was approved in 1993 for treatment and chemoprophylaxis of influenza A infection among adults and prophylaxis among children. Although rimantadine is approved only for chemoprophylaxis of influenza A infection among children, certain specialists in the management of influenza consider it appropriate for treatment of influenza A among children (230). Zanamivir and oseltamivir are chemically related antiviral drugs known as neuraminidase inhibitors that have activity against both influenza A and B viruses. Both zanamivir and oseltamivir were approved in 1999 for treating uncomplicated influenza infections. Zanamivir is approved for treating persons aged >7 years, and oseltamivir is approved for treatment for persons aged >1 year. In 2000, oseltamivir was approved for chemoprophylaxis of influenza among persons aged >13 years. 
The four drugs differ in pharmacokinetics, side effects, routes of administration, approved age groups, dosages, and costs. An overview of the indications, use, administration, and known primary side effects of these medications is presented in the following sections. Information contained in this report might not represent FDA approval or approved labeling for the antiviral agents described. Package inserts should be consulted for additional information.

# Role of Laboratory Diagnosis

Appropriate treatment of patients with respiratory illness depends on accurate and timely diagnosis. Early diagnosis of influenza can reduce the inappropriate use of antibiotics and provide the option of using antiviral therapy. However, because certain bacterial infections can produce symptoms similar to influenza, bacterial infections should be considered and appropriately treated, if suspected. In addition, bacterial infections can occur as a complication of influenza. Influenza surveillance information and diagnostic testing can aid clinical judgment and help guide treatment decisions. The accuracy of clinical diagnosis of influenza on the basis of symptoms alone is limited because symptoms from illness caused by other pathogens can overlap considerably with influenza (29,33,34). Influenza surveillance by state and local health departments and CDC can provide information regarding the presence of influenza viruses in the community. Surveillance can also identify the predominant circulating types, subtypes, and strains of influenza.

Diagnostic tests available for influenza include viral culture, serology, rapid antigen testing, polymerase chain reaction (PCR), and immunofluorescence (24). Sensitivity and specificity of any test for influenza might vary by the laboratory that performs the test, the type of test used, and the type of specimen tested. Among respiratory specimens for viral isolation or rapid detection, nasopharyngeal specimens are typically more effective than throat swab specimens (231). As with any diagnostic test, results should be evaluated in the context of other clinical information available to health-care providers.

Commercial rapid diagnostic tests are available that can be used by laboratories in outpatient settings to detect influenza viruses within 30 minutes (24,232). These rapid tests differ in the types of influenza viruses they can detect and whether they can distinguish between influenza types. Different tests can detect 1) only influenza A viruses; 2) both influenza A and B viruses, but not distinguish between the two types; or 3) both influenza A and B and distinguish between the two. The types of specimens acceptable for use (i.e., throat swab, nasal wash, or nasal swab) also vary by test. The specificity and, in particular, the sensitivity of rapid tests are lower than for viral culture and vary by test (233,234). Because of the lower sensitivity of the rapid tests, physicians should consider confirming negative tests with viral culture or other means. Further, when interpreting results of a rapid influenza test, physicians should consider the positive and negative predictive values of the test in the context of the level of influenza activity in their community. Package inserts and the laboratory performing the test should be consulted for more details regarding use of rapid diagnostic tests. Additional information concerning diagnostic testing is located at http://www.cdc.gov/flu/professionals/labdiagnosis.htm.
Despite the availability of rapid diagnostic tests, collecting clinical specimens for viral culture is critical, because only culture isolates can provide specific information regarding circulating influenza subtypes and strains. This information is needed to compare current circulating influenza strains with vaccine strains, to guide decisions regarding influenza treatment and chemoprophylaxis, and to formulate vaccine for the coming year. Virus isolates also are needed to monitor the emergence of antiviral resistance and the emergence of novel influenza A subtypes that might pose a pandemic threat. # Indications for Use Treatment When administered within 2 days of illness onset to otherwise healthy adults, amantadine and rimantadine can reduce the duration of uncomplicated influenza A illness, and zanamivir and oseltamivir can reduce the duration of uncomplicated influenza A and B illness by approximately 1 day, compared with placebo (72,(235)(236)(237)(238)(239)(240)(241)(242)(243)(244)(245)(246)(247)(248)(249). More clinical data are available concerning the efficacy of zanamivir and oseltamivir for treatment of influenza A infection than for treatment of influenza B infection (250)(251)(252)(253)(254)(255)(256)(257)(258)(259)(260)(261)(262)(263)(264)(265)(266). However, in vitro data and studies of treatment among mice and ferrets (267)(268)(269)(270)(271)(272)(273)(274), in addition to clinical studies, have documented that zanamivir and oseltamivir have activity against influenza B viruses (241,(245)(246)(247)275,276). Data are limited regarding the effectiveness of the four antiviral agents in preventing serious influenza-related complications (e.g., bacterial or viral pneumonia or exacerbation of chronic diseases). Evidence for the effectiveness of these four antiviral drugs is principally based on studies of patients with uncomplicated influenza (277). Data are limited and inconclusive concerning the effectiveness of amantadine, rimantadine, zanamivir, and oseltamivir for treatment of influenza among persons at high risk for serious complications of influenza (27,235,237,238,240,241,248,(250)(251)(252)(253)(254). One study assessing oseltamivir treatment primarily among adults reported a reduction in complications necessitating antibiotic therapy compared with placebo (255). Fewer studies of the efficacy of influenza antivirals have been conducted among pediatric populations (235,238,244,245,251,256,257). One study of oseltamivir treatment documented a decreased incidence of otitis media among children (245). Inadequate data exist regarding the safety and efficacy of any of the influenza antiviral drugs for use among children aged <1 year (234). To reduce the emergence of antiviral drug-resistant viruses, amantadine or rimantadine therapy for persons with influenza A illness should be discontinued as soon as clinically warranted, typically after 3-5 days of treatment or within 24-48 hours after the disappearance of signs and symptoms. The recommended duration of treatment with either zanamivir or oseltamivir is 5 days. # Chemoprophylaxis Chemoprophylactic drugs are not a substitute for vaccination, although they are critical adjuncts in preventing and controlling influenza. Both amantadine and rimantadine are indicated for chemoprophylaxis of influenza A infection, but not influenza B. Both drugs are approximately 70%-90% effective in preventing illness from influenza A infection (72,235,251). 
When used as prophylaxis, these antiviral agents can prevent illness while permitting subclinical infection and development of protective antibody against circulating influenza viruses. Therefore, certain persons who take these drugs will develop protective immune responses to circulating influenza viruses. Amantadine and rimantadine do not interfere with the antibody response to the vaccine (235). Both drugs have been studied extensively among nursing home populations as a component of influenza outbreak-control programs, which can limit the spread of influenza within chronic care institutions (235,250,(258)(259)(260).

Among the neuraminidase inhibitor antivirals zanamivir and oseltamivir, only oseltamivir has been approved for prophylaxis, but community studies of healthy adults indicate that both drugs are similarly effective in preventing febrile, laboratory-confirmed influenza illness (efficacy: zanamivir, 84%; oseltamivir, 82%) (261,262,278). Both antiviral agents have also been reported to prevent influenza illness among persons administered chemoprophylaxis after a household member was diagnosed with influenza (263,275,278). Experience with prophylactic use of these agents in institutional settings or among patients with chronic medical conditions is limited in comparison with the adamantanes (247,253,254,(264)(265)(266). One 6-week study of oseltamivir prophylaxis among nursing home residents reported a 92% reduction in influenza illness (247,279). Use of zanamivir has not been reported to impair the immunologic response to influenza vaccine (246,280). Data are not available regarding the efficacy of any of the four antiviral agents in preventing influenza among severely immunocompromised persons.

When determining the timing and duration for administering influenza antiviral medications for prophylaxis, factors related to cost, compliance, and potential side effects should be considered. To be maximally effective as prophylaxis, the drug must be taken each day for the duration of influenza activity in the community. However, one study of amantadine and rimantadine prophylaxis reported that, to be most cost-effective, the drugs should be taken only during the period of peak influenza activity in a community (281).

Persons at High Risk Who Are Vaccinated After Influenza Activity Has Begun. Persons at high risk for complications of influenza still can be vaccinated after an outbreak of influenza has begun in a community. However, development of antibodies in adults after vaccination takes approximately 2 weeks (222,223). When influenza vaccine is administered while influenza viruses are circulating, chemoprophylaxis should be considered for persons at high risk during the time from vaccination until immunity has developed. Children aged <9 years who receive influenza vaccine for the first time can require 6 weeks of prophylaxis (i.e., prophylaxis for 4 weeks after the first dose of vaccine and an additional 2 weeks of prophylaxis after the second dose).

Persons Who Provide Care to Those at High Risk. To reduce the spread of virus to persons at high risk during community or institutional outbreaks, chemoprophylaxis during peak influenza activity can be considered for unvaccinated persons who have frequent contact with persons at high risk. Persons with frequent contact include employees of hospitals, clinics, and chronic-care facilities, household members, visiting nurses, and volunteer workers.
If an outbreak is caused by a variant strain of influenza that might not be controlled by the vaccine, chemoprophylaxis should be considered for all such persons, regardless of their vaccination status. Persons Who Have Immune Deficiencies. Chemoprophylaxis can be considered for persons at high risk who are expected to have an inadequate antibody response to influenza vaccine. This category includes persons infected with HIV, chiefly those with advanced HIV disease. No published data are available concerning possible efficacy of chemoprophylaxis among persons with HIV infection or interactions with other drugs used to manage HIV infection. Such patients should be monitored closely if chemoprophylaxis is administered. Other Persons. Chemoprophylaxis throughout the influenza season or during peak influenza activity might be appropriate for persons at high risk who should not be vaccinated. Chemoprophylaxis can also be offered to persons who wish to avoid influenza illness. Health-care providers and patients should make this decision on an individual basis. # Control of Influenza Outbreaks in Institutions Using antiviral drugs for treatment and prophylaxis of influenza is a key component of influenza outbreak control in institutions. In addition to antiviral medications, other outbreak-control measures include instituting droplet precautions and establishing cohorts of patients with confirmed or suspected influenza, re-offering influenza vaccinations to unvaccinated staff and patients, restricting staff movement between wards or buildings, and restricting contact between ill staff or visitors and patients (282)(283)(284) (for additional information regarding outbreak control in specific settings, see Additional Information Regarding Influenza Infection Control Among Specific Populations). The majority of published reports concerning use of antiviral agents to control influenza outbreaks in institutions are based on studies of influenza A outbreaks among nursing home populations where amantadine or rimantadine were used (235,250,(258)(259)(260)281). Less information is available concerning use of neuraminidase inhibitors in influenza A or B institutional outbreaks (253,254,266,279,285). When confirmed or suspected outbreaks of influenza occur in institutions that house persons at high risk, chemoprophylaxis should be started as early as possible to reduce the spread of the virus. In these situations, having preapproved orders from physicians or plans to obtain orders for antiviral medications on short notice can substantially expedite administration of antiviral medications. When outbreaks occur in institutions, chemoprophylaxis should be administered to all residents, regardless of whether they received influenza vaccinations during the previous fall, and should continue for a minimum of 2 weeks. If surveillance indicates that new cases continue to occur, chemoprophylaxis should be continued until approximately 1 week after the end of the outbreak. The dosage for each resident should be determined individually. Chemoprophylaxis also can be offered to unvaccinated staff who provide care to persons at high risk. Prophylaxis should be considered for all employees, regardless of their vaccination status, if the outbreak is caused by a variant strain of influenza that is not well-matched by the vaccine. 
In addition to nursing homes, chemoprophylaxis also can be considered for controlling influenza outbreaks in other closed or semiclosed settings (e.g., dormitories or other settings where persons live in close proximity). For example, chemoprophylaxis with rimantadine has been used successfully to control an influenza A outbreak aboard a large cruise ship (167).

To limit the potential transmission of drug-resistant virus during outbreaks in institutions, whether in chronic or acute-care settings or other closed settings, measures should be taken to reduce contact as much as possible between persons taking antiviral drugs for treatment and other persons, including those taking chemoprophylaxis (see Antiviral Drug-Resistant Strains of Influenza).

# Dosage

Dosage recommendations vary by age group and medical conditions (Table 7).

# Children

Amantadine. Use of amantadine among children aged <1 year has not been evaluated adequately. The approved dosage for children aged 1-9 years for treatment and prophylaxis is 4.4-8.8 mg/kg body weight/day, not to exceed 150 mg/day. The approved dosage for children aged >10 years is 200 mg/day (100 mg twice a day); however, for children weighing <40 kg, prescribing 5 mg/kg body weight/day, regardless of age, is advisable (252).

Rimantadine. Rimantadine is approved for prophylaxis among children aged >1 year and for treatment and prophylaxis among adults. Although rimantadine is approved only for prophylaxis of infection among children, certain specialists in the management of influenza consider it appropriate for treatment among children (230). For children aged 1-9 years, rimantadine should be administered in one or two divided doses at a dosage of 5 mg/kg body weight/day, not to exceed 150 mg/day. The approved dosage for children aged >10 years is 200 mg/day (100 mg twice a day); however, for children weighing <40 kg, prescribing 5 mg/kg body weight/day, regardless of age, is recommended (286).

Zanamivir. Zanamivir is approved for treatment among children aged >7 years. The recommended dosage of zanamivir for treatment of influenza is two inhalations (one 5-mg blister per inhalation for a total dose of 10 mg) twice daily (approximately 12 hours apart) (246).

Oseltamivir. Oseltamivir is approved for treatment among persons aged >1 year and for chemoprophylaxis among persons aged >13 years. Recommended treatment dosages for children vary by the weight of the child: for children weighing <15 kg, the dosage is 30 mg twice a day; for children weighing >15-23 kg, the dosage is 45 mg twice a day; for those weighing >23-40 kg, the dosage is 60 mg twice a day; and for children weighing >40 kg, the dosage is 75 mg twice a day. The treatment dosage for persons aged >13 years is 75 mg twice daily. For persons aged >13 years, the recommended dose for prophylaxis is 75 mg once a day (247).
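The weight-banded oseltamivir treatment schedule above can be expressed as a simple lookup. This sketch is illustrative only (the function name is hypothetical, and the bands follow the text above and Table 7); it is not a prescribing tool.

```python
# Minimal sketch of the weight-based oseltamivir treatment dosages described above.

def oseltamivir_treatment_dose_mg(weight_kg: float) -> int:
    """Return the twice-daily oseltamivir treatment dose (mg) for a child."""
    if weight_kg <= 15:
        return 30
    if weight_kg <= 23:
        return 45
    if weight_kg <= 40:
        return 60
    return 75  # also the twice-daily treatment dosage for persons aged >13 years

print(oseltamivir_treatment_dose_mg(12))  # 30
print(oseltamivir_treatment_dose_mg(30))  # 60
```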
# Persons Aged >65 Years

Amantadine. The daily dosage of amantadine for persons aged >65 years should not exceed 100 mg for prophylaxis or treatment, because renal function declines with increasing age. For certain older persons, the dose should be further reduced.

Rimantadine. Among older persons, the incidence and severity of central nervous system (CNS) side effects are substantially lower among those taking rimantadine at a dosage of 100 mg/day than among those taking amantadine at dosages adjusted for estimated renal clearance (287). However, chronically ill older persons have had a higher incidence of CNS and gastrointestinal symptoms and higher serum rimantadine concentrations than healthy, younger persons when rimantadine was administered at a dosage of 200 mg/day (235). For prophylaxis among persons aged >65 years, the recommended dosage is 100 mg/day. For treatment of older persons in the community, a reduction in dosage to 100 mg/day should be considered if they experience side effects when taking a dosage of 200 mg/day. For treatment of older nursing home residents, the dosage of rimantadine should be reduced to 100 mg/day (286).

Zanamivir and Oseltamivir. No reduction in dosage is recommended on the basis of age alone.

# Persons with Impaired Renal Function

Amantadine. A reduction in dosage is recommended for patients with creatinine clearance <50 mL/min/1.73 m². Guidelines for amantadine dosage on the basis of creatinine clearance are located in the package insert. Because recommended dosages on the basis of creatinine clearance might provide only an approximation of the optimal dose for a given patient, such persons should be observed carefully for adverse reactions. If necessary, further reduction in the dose or discontinuation of the drug might be indicated because of side effects. Hemodialysis contributes minimally to amantadine clearance (288,289).

Rimantadine. A reduction in dosage to 100 mg/day is recommended for persons with creatinine clearance <10 mL/min. Because of the potential for accumulation of rimantadine and its metabolites, patients with any degree of renal insufficiency, including older persons, should be monitored for adverse effects, and either the dosage should be reduced or the drug should be discontinued, if necessary. Hemodialysis contributes minimally to drug clearance (290).

Zanamivir. Limited data are available regarding the safety and efficacy of zanamivir for patients with impaired renal function. Among patients with renal failure who were administered a single intravenous dose of zanamivir, decreases in renal clearance, increases in half-life, and increased systemic exposure to zanamivir were observed (246,291). However, a limited number of healthy volunteers who were administered high doses of intravenous zanamivir tolerated systemic levels of zanamivir that were substantially higher than those resulting from administration of zanamivir by oral inhalation at the recommended dose (292,293). On the basis of these considerations, the manufacturer recommends no dose adjustment for inhaled zanamivir for a 5-day course of treatment for patients with either mild to moderate or severe impairment in renal function (246).

Oseltamivir. Serum concentrations of oseltamivir carboxylate (GS4071), the active metabolite of oseltamivir, increase with declining renal function (247,294). For patients with creatinine clearance of 10-30 mL/min (247), a reduction of the treatment dosage of oseltamivir to 75 mg once daily and in the prophylaxis dosage to 75 mg every other day is recommended. No treatment or prophylaxis dosing recommendations are available for patients undergoing routine renal dialysis treatment.
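The oseltamivir renal adjustment reduces to two dosing tiers. The sketch below is illustrative only (the function name and return strings are hypothetical); the thresholds are those stated above (247).

```python
# Minimal sketch of the oseltamivir renal dose adjustment described above.

def oseltamivir_renal_dosing(crcl_ml_min: float, indication: str) -> str:
    """indication is 'treatment' or 'prophylaxis'; crcl is creatinine clearance."""
    if crcl_ml_min > 30:
        return "75 mg twice daily" if indication == "treatment" else "75 mg once daily"
    if crcl_ml_min >= 10:
        return "75 mg once daily" if indication == "treatment" else "75 mg every other day"
    # Below 10 mL/min the text gives no recommendation (e.g., routine renal dialysis).
    return "no dosing recommendation available"

print(oseltamivir_renal_dosing(60, "treatment"))    # 75 mg twice daily
print(oseltamivir_renal_dosing(20, "prophylaxis"))  # 75 mg every other day
```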
# Persons with Liver Disease

Amantadine. No increase in adverse reactions to amantadine has been observed among persons with liver disease. Rare instances of reversible elevation of liver enzymes among patients receiving amantadine have been reported, although a specific relation between the drug and such changes has not been established (295).

Rimantadine. A reduction in dosage to 100 mg/day is recommended for persons with severe hepatic dysfunction.

Zanamivir and Oseltamivir. Neither of these medications has been studied among persons with hepatic dysfunction.

# Persons with Seizure Disorders

Amantadine. An increased incidence of seizures has been reported among patients with a history of seizure disorders who have received amantadine (296). Patients with seizure disorders should be observed closely for possible increased seizure activity when taking amantadine.

Rimantadine. Seizures (or seizure-like activity) have been reported among persons with a history of seizures who were not receiving anticonvulsant medication while taking rimantadine (297). The extent to which rimantadine might increase the incidence of seizures among persons with seizure disorders has not been adequately evaluated.

Zanamivir and Oseltamivir. Seizure events have been reported during postmarketing use of zanamivir and oseltamivir, although no epidemiologic studies have reported any increased risk for seizures with either zanamivir or oseltamivir use.

# Route

Amantadine, rimantadine, and oseltamivir are administered orally. Amantadine and rimantadine are available in tablet or syrup form, and oseltamivir is available in capsule or oral suspension form (298,299). Zanamivir is available as a dry powder that is self-administered via oral inhalation by using a plastic device included in the package with the medication. Patients will benefit from instruction and demonstration of correct use of this device (246).

# Pharmacokinetics

# Amantadine

Approximately 90% of amantadine is excreted unchanged in the urine by glomerular filtration and tubular secretion (258,(300)(301)(302)(303). Thus, renal clearance of amantadine is reduced substantially among persons with renal insufficiency, and dosages might need to be decreased (see Dosage) (Table 7).

# Rimantadine

Approximately 75% of rimantadine is metabolized by the liver (251). The safety and pharmacokinetics of rimantadine among persons with liver disease have been evaluated only after single-dose administration (251,304). In a study of persons with chronic liver disease (the majority with stabilized cirrhosis), no alterations in liver function were observed after a single dose. However, for persons with severe liver dysfunction, the apparent clearance of rimantadine was 50% lower than that reported for persons without liver disease (286).

Rimantadine and its metabolites are excreted by the kidneys. The safety and pharmacokinetics of rimantadine among patients with renal insufficiency have been evaluated only after single-dose administration (251,290). Further studies are needed to determine multiple-dose pharmacokinetics and the most appropriate dosages for patients with renal insufficiency. In a single-dose study of patients with anuric renal failure, the apparent clearance of rimantadine was approximately 40% lower, and the elimination half-life was approximately 1.6-fold greater than that among healthy persons of the same age (290). Hemodialysis did not contribute to drug clearance. In studies of persons with less severe renal disease, drug clearance was also reduced, and plasma concentrations were higher than those among control patients without renal disease who were the same weight, age, and sex (286,305).

# Zanamivir

In studies of healthy volunteers, approximately 7%-21% of the orally inhaled zanamivir dose reached the lungs, and 70%-87% was deposited in the oropharynx (246,306). Approximately 4%-17% of the total amount of orally inhaled zanamivir is systemically absorbed. Systemically absorbed zanamivir has a half-life of 2.5-5.1 hours and is excreted unchanged in the urine. Unabsorbed drug is excreted in the feces (246,293).

# Oseltamivir

Approximately 80% of orally administered oseltamivir is absorbed systemically (294). Absorbed oseltamivir is metabolized to oseltamivir carboxylate, the active neuraminidase inhibitor, primarily by hepatic esterases.
Oseltamivir carboxylate has a half-life of 6-10 hours and is excreted in the urine by glomerular filtration and tubular secretion via the anionic pathway (247,307). Unmetabolized oseltamivir also is excreted in the urine by glomerular filtration and tubular secretion (308).

# Side Effects and Adverse Reactions

When considering use of influenza antiviral medications (i.e., choice of antiviral drug, dosage, and duration of therapy), clinicians must consider the patient's age, weight, and renal function (Table 7); presence of other medical conditions; indications for use (i.e., prophylaxis or therapy); and the potential for interaction with other medications.

# Amantadine and Rimantadine

Both amantadine and rimantadine can cause CNS and gastrointestinal side effects when administered to young, healthy adults at equivalent dosages of 200 mg/day. However, incidence of CNS side effects (e.g., nervousness, anxiety, insomnia, difficulty concentrating, and lightheadedness) is higher among persons taking amantadine than among those taking rimantadine (308). In a 6-week study of prophylaxis among healthy adults, approximately 6% of participants taking rimantadine at a dosage of 200 mg/day experienced one or more CNS symptoms, compared with approximately 13% of those taking the same dosage of amantadine and 4% of those taking placebo (308). A study of older persons also demonstrated fewer CNS side effects associated with rimantadine compared with amantadine (287). Gastrointestinal side effects (e.g., nausea and anorexia) occur among approximately 1%-3% of persons taking either drug, compared with 1% of persons receiving the placebo (308).

Side effects associated with amantadine and rimantadine are usually mild and cease soon after discontinuing the drug. Side effects can diminish or disappear after the first week, despite continued drug ingestion. However, serious side effects have been observed (e.g., marked behavioral changes, delirium, hallucinations, agitation, and seizures) (288,296). These more severe side effects have been associated with high plasma drug concentrations and have been observed most often among persons who have renal insufficiency, seizure disorders, or certain psychiatric disorders and among older persons who have been taking amantadine as prophylaxis at a dosage of 200 mg/day (258). Clinical observations and studies have indicated that lowering the dosage of amantadine among these persons reduces the incidence and severity of such side effects (Table 7). In acute overdosage of amantadine, CNS, renal, respiratory, and cardiac toxicity, including arrhythmias, have been reported (288). Because rimantadine has been marketed for a shorter period than amantadine, its safety among certain patient populations (e.g., chronically ill and older persons) has been evaluated less frequently. Because amantadine has anticholinergic effects and might cause mydriasis, it should not be used among patients with untreated angle-closure glaucoma (288).

# Zanamivir

In a study of zanamivir treatment of influenza-like illness among persons with asthma or chronic obstructive pulmonary disease where study medication was administered after use of a β2-agonist, 13% of patients receiving zanamivir and 14% of patients who received placebo (inhaled powdered lactose vehicle) experienced a >20% decline in forced expiratory volume in 1 second (FEV1) after treatment (246,248).
However, in a phase-I study of persons with mild or moderate asthma who did not have influenza-like illness, 1 of 13 patients experienced bronchospasm after administration of zanamivir (246). In addition, during postmarketing surveillance, cases of respiratory function deterioration after inhalation of zanamivir have been reported. Certain patients had underlying airways disease (e.g., asthma or chronic obstructive pulmonary disease). Because of the risk for serious adverse events and because efficacy has not been demonstrated among this population, zanamivir is not recommended for treatment of patients with underlying airways disease (246). If physicians decide to prescribe zanamivir to patients with underlying chronic respiratory disease after carefully considering potential risks and benefits, the drug should be used with caution under conditions of appropriate monitoring and supportive care, including the availability of short-acting bronchodilators (277). Patients with asthma or chronic obstructive pulmonary disease who use zanamivir are advised to 1) have a fast-acting inhaled bronchodilator available when inhaling zanamivir and 2) stop using zanamivir and contact their physician if they experience difficulty breathing (246). No definitive evidence is available regarding the safety or efficacy of zanamivir for persons with underlying respiratory or cardiac disease or for persons with complications of acute influenza (277). Allergic reactions, including oropharyngeal or facial edema, have also been reported during postmarketing surveillance (246,253). In clinical treatment studies of persons with uncomplicated influenza, the frequencies of adverse events were similar for persons receiving inhaled zanamivir and those receiving placebo (i.e., inhaled lactose vehicle alone) (236-241,253). The most common adverse events reported by both groups were diarrhea; nausea; sinusitis; nasal signs and symptoms; bronchitis; cough; headache; dizziness; and ear, nose, and throat infections. Each of these symptoms was reported by <5% of persons in the clinical treatment studies combined (246).

# Oseltamivir

Nausea and vomiting were reported more frequently among adults receiving oseltamivir for treatment (nausea without vomiting, approximately 10%; vomiting, approximately 9%) than among persons receiving placebo (nausea without vomiting, approximately 6%; vomiting, approximately 3%) (242,243,247,309). Among children treated with oseltamivir, 14.3% had vomiting, compared with 8.5% of placebo recipients. Overall, 1% of children discontinued the drug because of this side effect (245), and a limited number of adults enrolled in clinical treatment trials of oseltamivir discontinued treatment because of these symptoms (247). Similar types and rates of adverse events were reported in studies of oseltamivir prophylaxis (247). Nausea and vomiting might be less severe if oseltamivir is taken with food (247,309).

# Use During Pregnancy

No clinical studies have been conducted regarding the safety or efficacy of amantadine, rimantadine, zanamivir, or oseltamivir for pregnant women; only two cases of amantadine use for severe influenza illness during the third trimester have been reported (134,135). However, both amantadine and rimantadine have been demonstrated in animal studies to be teratogenic and embryotoxic when administered at substantially high doses (286,288).
Because of the unknown effects of influenza antiviral drugs on pregnant women and their fetuses, these four drugs should be used during pregnancy only if the potential benefit justifies the potential risk to the embryo or fetus (see manufacturers' package inserts) (246,247,286,288).

# Drug Interactions

Careful observation is advised when amantadine is administered concurrently with drugs that affect the CNS, including CNS stimulants. Concomitant administration of antihistamines or anticholinergic drugs can increase the incidence of adverse CNS reactions (235). No clinically substantial interactions between rimantadine and other drugs have been identified. Clinical data are limited regarding drug interactions with zanamivir. However, no known drug interactions have been reported, and no clinically important drug interactions have been predicted on the basis of in vitro data and data from studies using rats (246,310). Limited clinical data are available regarding drug interactions with oseltamivir. Because oseltamivir and oseltamivir carboxylate are excreted in the urine by glomerular filtration and tubular secretion via the anionic pathway, a potential exists for interaction with other agents excreted by this pathway. For example, coadministration of oseltamivir and probenecid reduced the clearance of oseltamivir carboxylate by approximately 50%, with a corresponding approximately twofold increase in plasma levels of oseltamivir carboxylate (247,307). No published data are available concerning the safety or efficacy of using combinations of any of these four influenza antiviral drugs. For more detailed information concerning potential drug interactions for any of these influenza antiviral drugs, package inserts should be consulted.

# Antiviral Drug-Resistant Strains of Influenza

Amantadine-resistant viruses are cross-resistant to rimantadine and vice versa (311). Drug-resistant viruses can emerge in approximately one third of patients when either amantadine or rimantadine is used for therapy (257,312,313). During the course of amantadine or rimantadine therapy, resistant influenza strains can replace susceptible strains within 2-3 days of starting therapy (312,314). Resistant viruses have been isolated from persons who live at home or in an institution where other residents are taking or have recently taken amantadine or rimantadine as therapy (315,316); however, the frequency with which resistant viruses are transmitted and their effect on efforts to control influenza are unknown. Amantadine- and rimantadine-resistant viruses are not more virulent or transmissible than susceptible viruses (317). The screening of epidemic strains of influenza A has rarely detected amantadine- and rimantadine-resistant viruses (312,318,319). Persons who have influenza A infection and who are treated with either amantadine or rimantadine can shed susceptible viruses early in the course of treatment and later shed drug-resistant viruses, including after 5-7 days of therapy (257). Such persons can benefit from therapy even when resistant viruses emerge. Resistance to zanamivir and oseltamivir can be induced in influenza A and B viruses in vitro (320-327), but induction of resistance requires multiple passages in cell culture. By contrast, resistance to amantadine and rimantadine in vitro can be induced with fewer passages in cell culture (328,329).
Development of viral resistance to zanamivir and oseltamivir during treatment has been identified but does not appear to be frequent (247,330-333). In clinical treatment studies using oseltamivir, 1.3% of posttreatment isolates from patients aged >13 years and 8.6% of those from patients aged 1-12 years had decreased susceptibility to oseltamivir (247). No isolates with reduced susceptibility to zanamivir have been reported from clinical trials, although the number of posttreatment isolates tested is limited (334) and the risk for emergence of zanamivir-resistant isolates cannot be quantified (246). Only one clinical isolate with reduced susceptibility to zanamivir, obtained from an immunocompromised child on prolonged therapy, has been reported (331). Available diagnostic tests are not optimal for detecting clinical resistance to the neuraminidase inhibitor antiviral drugs, and additional tests are being developed (334,335). Postmarketing surveillance for neuraminidase inhibitor-resistant influenza viruses is being conducted (336).

# Sources of Information Regarding Influenza and Its Surveillance

Information regarding influenza surveillance, prevention, detection, and control is available at /weekly/fluactivity.htm. Surveillance information is available through the CDC Voice Information System (influenza update) at 888-232-3228 or the CDC Fax Information Service at 888-232-3299. During October-May, surveillance information is updated at least every other week. In addition, periodic updates regarding influenza are published in the MMWR Weekly. Additional information regarding influenza vaccine can be obtained by calling the CDC Immunization Hotline at 800-232-2522 (English) or 800-232-0233 (Spanish). State and local health departments should be consulted regarding the availability of influenza vaccine and access to vaccination programs, for information related to state or local influenza activity, and for reporting influenza outbreaks and receiving advice concerning outbreak control.

# Additional Information Regarding Influenza Infection Control Among Specific Populations

Each year, ACIP provides general, annually updated information regarding control and prevention of influenza. Other reports related to controlling and preventing influenza among specific populations (e.g., immunocompromised persons, health-care personnel, hospitals, and travelers) are also available in other CDC publications.
# Introduction

Epidemics of influenza typically occur during the winter months in temperate regions and have been responsible for an average of approximately 36,000 deaths/year in the United States during 1990-1999 (1). Influenza viruses also can cause pandemics, during which rates of illness and death from influenza-related complications can increase worldwide. Influenza viruses cause disease among all age groups (2-4). Rates of infection are highest among children, but rates of serious illness and death are highest among persons aged ≥65 years and persons of any age who have medical conditions that place them at increased risk for complications from influenza (2,5-7). Influenza vaccination is the primary method for preventing influenza and its severe complications. In this report from the Advisory Committee on Immunization Practices (ACIP), the primary target groups recommended for annual vaccination are 1) persons at increased risk for influenza-related complications (e.g., those aged ≥65 years, children aged 6-23 months, pregnant women, and persons of any age with certain chronic medical conditions); 2) persons aged 50-64 years, because this group has an elevated prevalence of certain chronic medical conditions; and 3) persons who live with or care for persons at high risk (e.g., health-care workers and household contacts who have frequent contact with persons at high risk and who can transmit influenza to those persons at high risk). Vaccination is associated with reductions in influenza-related respiratory illness and physician visits among all age groups, hospitalization and death among persons at high risk, otitis media among children, and work absenteeism among adults (8-18). Although influenza vaccination levels increased substantially during the 1990s, further improvements in vaccine coverage levels are needed, chiefly among persons aged <65 years who are at increased risk for influenza-related complications among all racial and ethnic groups, among blacks and Hispanics aged ≥65 years, among children aged 6-23 months, and among health-care workers. ACIP recommends using strategies to improve vaccination levels, including using reminder/recall systems and standing orders programs (19,20). Although influenza vaccination remains the cornerstone for the control and treatment of influenza, information on antiviral medications is also presented because these agents are an adjunct to vaccine.

# Primary Changes and Updates in the Recommendations

The 2004 recommendations include four principal changes or updates:
1. ACIP recommends that healthy children aged 6-23 months, and close contacts of children aged 0-23 months, be vaccinated against influenza (see Target Groups for Vaccination).

# Influenza and Its Burden

# Biology of Influenza

Influenza A and B are the two types of influenza viruses that cause epidemic human disease (21). Influenza A viruses are further categorized into subtypes on the basis of two surface antigens: hemagglutinin (H) and neuraminidase (N). Influenza B viruses are not categorized into subtypes. Since 1977, influenza A (H1N1) viruses, influenza A (H3N2) viruses, and influenza B viruses have been in global circulation.
In 2001, influenza A (H1N2) viruses, which probably emerged after genetic reassortment between human A (H3N2) and A (H1N1) viruses, began circulating widely. Both influenza A and B viruses are further separated into groups on the basis of antigenic characteristics. New influenza virus variants result from frequent antigenic change (i.e., antigenic drift) resulting from point mutations that occur during viral replication. Influenza B viruses undergo antigenic drift less rapidly than influenza A viruses. A person's immunity to the surface antigens, including hemagglutinin, reduces the likelihood of infection and the severity of disease if infection occurs (22). Antibody against one influenza virus type or subtype confers limited or no protection against another. Furthermore, antibody to one antigenic variant of influenza virus might not protect against a new antigenic variant of the same type or subtype (23). Frequent development of antigenic variants through antigenic drift is the virologic basis for seasonal epidemics and the reason for the usual incorporation of one or more new strains in each year's influenza vaccine.

# Clinical Signs and Symptoms of Influenza

Influenza viruses are spread from person to person primarily through the coughing and sneezing of infected persons (21). The incubation period for influenza is 1-4 days, with an average of 2 days (24). Adults typically are infectious from the day before symptoms begin through approximately 5 days after illness onset. Children can be infectious for >10 days, and young children can shed virus for <6 days before their illness onset. Severely immunocompromised persons can shed virus for weeks or months (25-28). Uncomplicated influenza illness is characterized by the abrupt onset of constitutional and respiratory signs and symptoms (e.g., fever, myalgia, headache, malaise, nonproductive cough, sore throat, and rhinitis) (29). Among children, otitis media, nausea, and vomiting are also commonly reported with influenza illness (30-32). Respiratory illness caused by influenza is difficult to distinguish from illness caused by other respiratory pathogens on the basis of symptoms alone (see Role of Laboratory Diagnosis). In studies conducted primarily among adults, reported sensitivities and specificities of clinical definitions for influenza-like illness that include fever and cough have ranged from 63% to 78% and 55% to 71%, respectively, compared with viral culture (33,34). Sensitivity and predictive value of clinical definitions can vary, depending on the degree of co-circulation of other respiratory pathogens and the level of influenza activity (35). A study among older nonhospitalized patients determined that symptoms of fever, cough, and acute onset had a positive predictive value of 30% for influenza (36), whereas a study of hospitalized older patients with chronic cardiopulmonary disease determined that a combination of fever, cough, and illness of <7 days was 78% sensitive and 73% specific for influenza (37). However, a study among vaccinated older persons with chronic lung disease reported that cough was not predictive of influenza infection, although having a fever or feverishness was 68% sensitive and 54% specific for influenza infection (38). Influenza illness typically resolves after a limited number of days for the majority of persons, although cough and malaise can persist for >2 weeks.
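The dependence of predictive value on influenza activity described above follows directly from Bayes' rule. As a minimal worked illustration (the sensitivity and specificity are rounded values within the ranges cited above; the two prevalence values are hypothetical, chosen to represent the margins and the peak of a season), with sensitivity Se = 0.70 and specificity Sp = 0.60:

$$\mathrm{PPV} = \frac{Se \cdot p}{Se \cdot p + (1 - Sp)(1 - p)}$$

$$p = 0.05:\quad \mathrm{PPV} = \frac{0.70 \times 0.05}{0.70 \times 0.05 + 0.40 \times 0.95} = \frac{0.035}{0.415} \approx 8\%$$

$$p = 0.25:\quad \mathrm{PPV} = \frac{0.70 \times 0.25}{0.70 \times 0.25 + 0.40 \times 0.75} = \frac{0.175}{0.475} \approx 37\%$$

Thus, the same clinical case definition can have a predictive value several times higher at the height of an influenza season than at its margins, consistent with the 30% positive predictive value reported in the study cited above.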
Among certain persons, influenza can exacerbate underlying medical conditions (e.g., pulmonary or cardiac disease), lead to secondary bacterial pneumonia or primary influenza viral pneumonia, or occur as part of a coinfection with other viral or bacterial pathogens (39). Young children with influenza infection can have initial symptoms mimicking bacterial sepsis with high fevers (40,41), and as many as 20% of children hospitalized with influenza can have febrile seizures (31,42). Influenza infection has also been associated with encephalopathy, transverse myelitis, Reye syndrome, myositis, myocarditis, and pericarditis (31,39,43,44).

# Hospitalizations and Deaths from Influenza

The risks for complications, hospitalizations, and deaths from influenza are higher among persons aged ≥65 years, young children, and persons of any age with certain underlying health conditions (see Persons at Increased Risk for Complications) than among healthy older children and younger adults (1,6,8,45-50). Estimated rates of influenza-associated hospitalizations have varied substantially by age group in studies conducted during different influenza epidemics (Table 1). Among children aged 0-4 years, hospitalization rates have ranged from approximately 500/100,000 children for those with high-risk medical conditions to 100/100,000 children for those without high-risk medical conditions (51-54). Within the 0-4-year age group, hospitalization rates are highest among children aged 0-1 years and are comparable to rates reported among persons aged ≥65 years (53,54) (Table 1). During influenza epidemics from 1969-70 through 1994-95, the estimated overall number of influenza-associated hospitalizations in the United States ranged from approximately 16,000 to 220,000/epidemic. An average of approximately 114,000 influenza-related excess hospitalizations occurred per year, with 57% of all hospitalizations occurring among persons aged <65 years. Since the 1968 influenza A (H3N2) virus pandemic, the greatest numbers of influenza-associated hospitalizations have occurred during epidemics caused by type A (H3N2) viruses, with an estimated average of 142,000 influenza-associated hospitalizations per year (55). Influenza-related deaths can result from pneumonia as well as from exacerbations of cardiopulmonary conditions and other chronic diseases. Older adults account for >90% of deaths attributed to pneumonia and influenza (1,50). In a recent study of influenza epidemics, approximately 19,000 influenza-associated pulmonary and circulatory deaths per influenza season occurred during 1976-1990, compared with approximately 36,000 deaths per season during 1990-1999 (1). Estimated rates of influenza-associated pulmonary and circulatory deaths/100,000 persons were 0.4-0.6 among persons aged 0-49 years, 7.5 among persons aged 50-64 years, and 98.3 among persons aged ≥65 years. In the United States, the number of influenza-associated deaths might be increasing in part because the number of older persons is increasing (56). In addition, influenza seasons in which influenza A (H3N2) viruses predominate are associated with higher mortality (57); influenza A (H3N2) viruses predominated in 90% of influenza seasons during 1990-1999, compared with 57% of seasons during 1976-1990 (1). Deaths from influenza are uncommon among children with and without high-risk conditions, but they do occur (58,59).
A study that modeled influenza-related deaths estimated that an average of 92 deaths occurred among children aged <5 years annually during the 1990s, compared with 35,274 deaths among adults aged ≥50 years (1). Preliminary reports of laboratory-confirmed pediatric deaths during the 2003-04 influenza season indicated that, among the 143 influenza-related deaths reported (as of April 10, 2004), 58 (41%) were among children aged <2 years and, of those aged 2-17 years, 65 (45%) did not have an underlying medical condition traditionally considered to place a person at risk for influenza-related complications (unpublished data, CDC National Center for Infectious Diseases, 2004). Further information is needed regarding the risk for severe influenza-related complications and optimal strategies for minimizing severe disease and death among children.

# Options for Controlling Influenza

In the United States, the primary option for reducing the effect of influenza is immunoprophylaxis with vaccine. Inactivated (i.e., killed virus) influenza vaccine and live, attenuated influenza vaccine are available for use in the United States (see Recommendations for Using Inactivated and Live, Attenuated Influenza Vaccine). Vaccinating persons at high risk for complications and their contacts each year before seasonal increases in influenza virus circulation is the most effective means of reducing the effect of influenza. Vaccination coverage can be increased by administering vaccine to persons during hospitalizations or routine health-care visits before the influenza season, making special visits to physicians' offices or clinics unnecessary. When vaccine and epidemic strains are well-matched, achieving increased vaccination rates among persons living in closed settings (e.g., nursing homes and other chronic-care facilities) and among staff can reduce the risk for outbreaks by inducing herd immunity (13). Vaccination of health-care workers and other persons in close contact with persons at increased risk for severe influenza illness can also reduce transmission of influenza and subsequent influenza-related complications. Antiviral drugs used for chemoprophylaxis or treatment of influenza are a key adjunct to vaccine (see Recommendations for Using Antiviral Agents for Influenza). However, antiviral medications are not a substitute for vaccination.

Table 1 footnotes (table not reproduced here):
• Source: New Engl J Med 2000;342:232-9; outcomes were for acute pulmonary conditions, and influenza-attributable hospitalization rates for children at high risk were not included in that study.
• Source: Barker WH, Mullooly JP. Impact of epidemic type A influenza in a defined adult population. Am J Epidemiol 1980;112:798-811.
• Source: Simonsen L, Fukuda K, Schonberger LB, Cox NJ. Impact of influenza epidemics on hospitalizations. J Infect Dis 2000;181:831-7.
• Outcomes were limited to hospitalizations in which either pneumonia or influenza was listed as the first condition on discharge records (Simonsen) or included anywhere in the list of discharge diagnoses (Barker).
• For certain estimates, persons at high risk and not at high risk for influenza-related complications are combined.
• The low estimate is the average during influenza A (H1N1)- or influenza B-predominant seasons, and the high estimate is the average during influenza A (H3N2)-predominant seasons.

# Influenza Vaccine Composition

Because circulating influenza A (H1N2) viruses are a reassortant of influenza A (H1N1) and (H3N2) viruses, antibody directed against influenza A (H1N1) and influenza A (H3N2) vaccine strains will provide protection against circulating influenza A (H1N2) viruses. Influenza viruses for both the inactivated and live, attenuated influenza vaccines are initially grown in embryonated hens' eggs. Thus, both vaccines might contain limited amounts of residual egg protein. For the inactivated vaccine, the vaccine viruses are made noninfectious (i.e., inactivated or killed) (60). Subvirion and purified surface antigen preparations of the inactivated vaccine are available. Manufacturing processes differ by manufacturer. Manufacturers might use different compounds to inactivate influenza viruses and add antibiotics to prevent bacterial contamination. Package inserts should be consulted for additional information.

# Thimerosal

Thimerosal, a mercury-containing compound, has been used as a preservative in vaccines since the 1930s and is used in multidose vials of inactivated influenza vaccine to reduce the likelihood of bacterial contamination. Although no scientific evidence indicates that thimerosal in vaccines leads to serious adverse events in vaccine recipients, in 1999, the U.S. Public Health Service and other organizations recommended that efforts be made to eliminate or reduce the thimerosal content in vaccines to decrease total mercury exposure, chiefly among infants (61-63). Since mid-2001, vaccines routinely recommended for infants in the United States have been manufactured either without thimerosal or with only trace amounts, to provide a substantial reduction in the total mercury exposure from vaccines for children (64). Vaccines containing trace amounts of thimerosal have <1 mcg mercury/dose. In 1999, 15 of 28 vaccine products for which CDC had contracts did not contain thimerosal as a preservative. In 2004, 27 of 29 products under CDC contract do not contain thimerosal as a preservative.

Influenza Vaccines and Thimerosal. LAIV does not contain thimerosal. Thimerosal preservative-containing inactivated influenza vaccines, distributed in multidose containers in the United States, contain 25 mcg of mercury/0.5-mL dose (61,62). Inactivated influenza virus vaccines distributed in the United States as preservative-free vaccines in single-dose syringes contain only trace amounts of thimerosal as a residual from early manufacturing steps. Inactivated influenza vaccine that does not contain thimerosal as a preservative has <1 mcg mercury/0.5-mL dose or <0.5 mcg mercury/0.25-mL dose. This information is included in the package insert provided with each type of inactivated influenza virus vaccine. Beginning in 2004, influenza vaccine is part of the routine childhood immunization schedule. For the 2004-05 influenza season, 6-8 million single-dose syringes of inactivated influenza virus vaccine without thimerosal as a preservative probably will be available. This represents a substantial increase in the available amount of inactivated influenza vaccine without thimerosal as a preservative, compared with the approximately 3.2 million doses that were available during the 2003-04 influenza season. Inactivated influenza vaccine without thimerosal as a preservative is available from two manufacturers. Chiron produces Fluvirin™, which is approved by the Food and Drug Administration (FDA) for persons aged ≥4 years. Fluvirin is marketed as a formulation with thimerosal as a preservative in multidose vials and as a formulation without thimerosal as a preservative in 0.5-mL unit-dose syringes. Aventis Pasteur produces FluZone®, which is FDA-approved for persons aged ≥6 months.
FluZone containing thimerosal as a preservative is available in multidose vials. Preservative-free FluZone packaged as 0.25-mL unit-dose syringes is available for use among persons aged 6-35 months. The total amount of inactivated influenza vaccine available without thimerosal as a preservative will be increased as manufacturing capabilities are expanded. The risks for severe illness from influenza infection are elevated among both young children and pregnant women, and both groups benefit from vaccination, which prevents illness and death from influenza. In contrast, no scientifically conclusive evidence exists of harm from exposure to thimerosal preservative-containing vaccine, whereas evidence is accumulating of a lack of any harm resulting from exposure to such vaccines (61,65). Therefore, the benefits of influenza vaccination outweigh the theoretical risk, if any, from thimerosal exposure through vaccination. Nonetheless, certain persons remain concerned regarding exposure to thimerosal. The U.S. vaccine supply for infants and pregnant women is in a period of transition during which the thimerosal content of vaccines intended for these groups is being reduced by manufacturers as a feasible means of decreasing an infant's total exposure to mercury, because other environmental sources of exposure are more difficult or impossible to eliminate. Reductions in thimerosal in other vaccines have already been achieved and have resulted in substantially lowered cumulative exposure to thimerosal from vaccination among infants and children. For all of these reasons, persons recommended to receive inactivated influenza vaccine may receive either vaccine preparation, depending on availability. Supplies of inactivated influenza vaccine without thimerosal as a preservative will be increased for the 2004-05 influenza season compared with the 2003-04 season, and such vaccine will be included in CDC contracts to meet anticipated public demand in 2004.

# Efficacy and Effectiveness of Inactivated Influenza Vaccine

The effectiveness of inactivated influenza vaccine depends primarily on the age and immunocompetence of the vaccine recipient and the degree of similarity between the viruses in the vaccine and those in circulation. The majority of vaccinated children and young adults develop high postvaccination hemagglutination inhibition antibody titers (66-68). These antibody titers are protective against illness caused by strains similar to those in the vaccine (67-70).

Adults Aged <65 Years. When the vaccine and circulating viruses are antigenically similar, influenza vaccine prevents influenza illness among approximately 70%-90% of healthy adults aged <65 years (9,12,71,72). Vaccination of healthy adults also has resulted in decreased work absenteeism and decreased use of health-care resources, including use of antibiotics, when the vaccine and circulating viruses are well-matched (9-12,72,73).

Children. Children as young as 6 months can develop protective levels of antibody after influenza vaccination (66,67,74-77), although the antibody response among children at high risk for influenza-related complications might be lower than among healthy children (78,79). In a randomized study among children aged 1-15 years, inactivated influenza vaccine was 77%-91% effective against influenza respiratory illness and was 44%-49%, 74%-76%, and 70%-81% effective against influenza seroconversion among children aged 1-5, 6-10, and 11-15 years, respectively (68).
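The efficacy estimates quoted throughout this section are derived from attack rates (ARs) in vaccinated and unvaccinated groups. As a worked illustration, using the rounded attack rates reported below for year 1 of the 2-year trial among children aged 6-24 months (the published estimate of 66% is based on the unrounded counts):

$$\mathrm{VE} = 1 - \frac{\mathrm{AR}_{\mathrm{vaccine}}}{\mathrm{AR}_{\mathrm{placebo}}} = 1 - \frac{5.5\%}{15.9\%} \approx 65\%$$

A negative estimate, as in year 2 of that trial, simply indicates that the attack rate was slightly higher in the vaccine group than in the placebo group during a mild season.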
One study (80) reported a vaccine efficacy of 56% against influenza illness among healthy children aged 3-9 years, and another study (81) determined vaccine efficacy of 22%-54% and 60%-78% among children with asthma aged 2-6 years and 7-14 years, respectively. A 2-year randomized study of children aged 6-24 months determined that >89% of children seroconverted to all three vaccine strains during both years (82). During year 1, among 411 children, vaccine efficacy was 66% (95% confidence interval [CI] = 34%-82%) against culture-confirmed influenza (attack rates: 5.5% and 15.9% among the vaccine and placebo groups, respectively). During year 2, among 375 children, vaccine efficacy was -7% (95% CI = -247% to 67%; attack rates: 3.6% and 3.3% among the vaccine and placebo groups, respectively; the second year exhibited lower attack rates overall and was considered a mild season). However, no overall reduction in otitis media was reported (82). Other studies report that trivalent inactivated influenza vaccine decreases the incidence of influenza-associated otitis media among young children by approximately 30% (16,17).

Adults Aged ≥65 Years. Older persons and persons with certain chronic diseases might develop lower postvaccination antibody titers than healthy young adults and thus can remain susceptible to influenza-related upper respiratory tract infection (83-85). A randomized trial among noninstitutionalized persons aged ≥60 years reported a vaccine efficacy of 58% against influenza respiratory illness but indicated that efficacy might be lower among those aged ≥70 years (86). The vaccine can also be effective in preventing secondary complications and reducing the risk for influenza-related hospitalization and death among adults aged ≥65 years with and without high-risk medical conditions (e.g., heart disease and diabetes) (13-15,18,87). Among elderly persons not living in nursing homes or similar chronic-care facilities, influenza vaccine is 30%-70% effective in preventing hospitalization for pneumonia and influenza (15,88). Among older persons who do reside in nursing homes, influenza vaccine is most effective in preventing severe illness, secondary complications, and deaths. Among this population, the vaccine can be 50%-60% effective in preventing hospitalization or pneumonia and 80% effective in preventing death, although the effectiveness in preventing influenza illness often ranges from 30% to 40% (89-91).

# Efficacy and Effectiveness of LAIV

Healthy Children. A randomized, double-blind, placebo-controlled trial among 1,602 healthy children initially aged 15-71 months assessed the efficacy of trivalent LAIV against culture-confirmed influenza during two seasons (92,93). This trial included subsets of 238 healthy children (163 vaccinees and 75 placebo recipients) aged 60-71 months who received 2 doses and 74 children (54 vaccinees and 20 placebo recipients) aged 60-71 months who received a single dose during season one, and a subset of 544 children (375 vaccinees and 169 placebo recipients) aged 60-84 months during season two. Children who continued from season one to season two remained in the same study group. In season one, when vaccine and circulating virus strains were well-matched, efficacy was 93% for all participants who received 2 doses of LAIV, regardless of age. Efficacy was 87% in the 60-71-month subset who received 2 doses and 91% in the subset who received 1 or 2 doses.
In season two, when the A (H3N2) component was not well-matched between vaccine and circulating virus strains, efficacy was 86% overall and 87% among those aged 60-84 months. The vaccine was 92% efficacious in preventing culture-confirmed influenza over the two-season study. Other results included a 27% reduction in febrile otitis media and a 28% reduction in otitis media with concomitant antibiotic use. Receipt of LAIV also resulted in decreased fever and otitis media among vaccine recipients who experienced influenza.

Healthy Adults. A randomized, double-blind, placebo-controlled trial among 4,561 healthy working adults aged 18-64 years assessed multiple endpoints, including reductions in illness, absenteeism, health-care visits, and medication use during peak and total influenza outbreak periods (94). The study was conducted during the 1997-98 influenza season, when the vaccine and circulating A (H3N2) strains were not well-matched. The study did not include laboratory testing of viruses. During peak outbreak periods, no difference was identified between LAIV and placebo recipients in the proportion experiencing any febrile episode. However, vaccination was associated with reductions of 19% in severe febrile illnesses and 24% in febrile upper respiratory tract illnesses. Vaccination also was associated with fewer days of illness, fewer days of work lost, fewer days with health-care provider visits, and reduced use of prescription antibiotics and over-the-counter medications. Among the subset of 3,637 healthy adults aged 18-49 years, LAIV recipients (n = 2,411) had 26% fewer febrile upper respiratory illness episodes, 27% fewer lost work days as a result of febrile upper respiratory illness, and 18%-37% fewer days of health-care provider visits caused by febrile illness, compared with placebo recipients (n = 1,226). Days of antibiotic use were reduced by 41%-45% in this age subset. Another randomized, double-blind, placebo-controlled challenge study among 92 healthy adults (LAIV, n = 29; placebo, n = 31; inactivated influenza vaccine, n = 32) aged 18-41 years assessed the efficacy of both LAIV and inactivated vaccine (95). The overall efficacy of LAIV and inactivated influenza vaccine in preventing laboratory-documented influenza from all three influenza strains combined was 85% and 71%, respectively, on the basis of experimental challenge with viruses to which study participants were susceptible before vaccination. The difference between the two vaccines was not statistically significant.

# Cost-Effectiveness of Influenza Vaccine

Influenza vaccination can reduce both health-care costs and productivity losses associated with influenza illness. Economic studies of influenza vaccination of persons aged ≥65 years conducted in the United States have reported overall societal cost savings and substantial reductions in hospitalization and death (15,88,96). Studies of adults aged <65 years have reported that vaccination can reduce both direct medical costs and indirect costs from work absenteeism (8,10-12,72,97). Reductions of 34%-44% in physician visits, 32%-45% in lost workdays (10,12), and 25% in antibiotic use for influenza-associated illnesses have been reported (12). One cost-effectiveness analysis estimated a cost of approximately $60-$4,000/illness averted among healthy persons aged 18-64 years, depending on the cost of vaccination, the influenza attack rate, and vaccine effectiveness against influenza-like illness (72).
Another cost-benefit economic model estimated an average annual savings of $13.66/person vaccinated (98). In the second study, 78% of all costs prevented were costs from lost work productivity, whereas the first study did not include productivity losses from influenza illness. Economic studies specifically evaluating the cost-effectiveness of vaccinating persons aged 50-64 years are not available, and the number of studies that examine the economics of routinely vaccinating children with inactivated or live, attenuated vaccine is limited (8,99-102). However, in a study of inactivated vaccine that included all age groups, cost utility improved with increasing age and among those with chronic medical conditions (8). Among persons aged ≥65 years, vaccination resulted in a net savings per quality-adjusted life year (QALY) gained; among younger age groups, it resulted in costs of $23-$256/QALY. Additional studies of the relative cost-effectiveness and cost utility of influenza vaccination among children and among adults aged <65 years are needed and should be designed to account for year-to-year variations in influenza attack rates, illness severity, and vaccine efficacy when evaluating the long-term costs and benefits of annual vaccination.

# Vaccination Coverage Levels

Among persons aged ≥65 years, influenza vaccination levels increased from 33% in 1989 (103) to 66% in 1999 (104), surpassing the Healthy People 2000 objective of 60% (105). Vaccine coverage reached the highest levels recorded (68%) during the 1999-00 influenza season, using as a proxy measure the percentage of adults who reported influenza vaccination during the past 12 months among participants in the National Health Interview Survey (NHIS) during the first and second quarters of each calendar year (104). Possible reasons for the increase in influenza vaccination levels among persons aged ≥65 years through the 1999-00 influenza season include 1) greater acceptance of preventive medical services by practitioners; 2) increased delivery and administration of vaccine by health-care providers and sources other than physicians; 3) new information regarding influenza vaccine effectiveness, cost-effectiveness, and safety; and 4) the initiation of Medicare reimbursement for influenza vaccination in 1993 (8,14,15,89,90,106,107). Vaccine coverage increased more rapidly through the mid-1990s than during subsequent seasons (average annual percentage increase of 4% from 1988-89 to 1996-97, versus 1% from 1996-97 to 1999-00). Estimated national adult vaccine coverage for the 2001-02 season (Table 2), the most recent for which complete data are available, was 66% for adults aged ≥65 years and 34% for adults aged 50-64 years (104; unpublished data, CDC National Immunization Program, 2004). The estimated vaccination coverage among adults with high-risk conditions aged 18-49 years and 50-64 years was 23% and 44%, respectively, substantially lower than the Healthy People 2000 and 2010 objective of 60% (104,105,108). Continued annual monitoring is needed to determine the effects of vaccine supply delays, changes in influenza vaccination recommendations and target groups for vaccination, and other factors related to vaccination coverage among adults and children. The Healthy People 2010 objective is to achieve vaccination coverage for 90% of persons aged ≥65 years (108).
Reducing racial and ethnic health disparities, including disparities in vaccination coverage, is an overarching national goal (108). Although estimated influenza vaccination coverage for the 1999-00 season reached the highest levels recorded among older black, Hispanic, and white populations, vaccination levels among blacks and Hispanics continue to lag behind those among whites (104,109). Estimated influenza vaccination levels for 2001 among persons aged ≥65 years were 66% among non-Hispanic whites, 48% among non-Hispanic blacks, and 54% among Hispanics (109,110). Additional strategies are needed to achieve the Healthy People 2010 objectives among all racial and ethnic groups. In 1997 and 1998, vaccination coverage estimates among nursing home residents were 64%-82% and 83%, respectively (111,112). The Healthy People 2010 goal is to achieve influenza vaccination of 90% among nursing home residents, an increase from the Healthy People 2000 goal of 80% (105,108). Reported vaccination levels are low among children at increased risk for influenza complications. One study conducted among patients in health maintenance organizations reported influenza vaccination percentages ranging from 9% to 10% among children with asthma (113). A 25% vaccination level was reported among children with moderate to severe asthma who attended an allergy and immunology clinic (114). However, a study conducted in a pediatric clinic demonstrated an increase in the vaccination percentage of children with asthma or reactive airways disease from 5% to 32% after implementation of a reminder/recall system (115). One study reported 79% vaccination coverage among children attending a cystic fibrosis treatment center (116). Increasing vaccination coverage among persons who have high-risk conditions and are aged <65 years, including children at high risk, is the highest priority for expanding influenza vaccine use. Annual vaccination is recommended for health-care workers. Nonetheless, NHIS reported vaccination coverage of only 34% and 38% among health-care workers in the 1997 and 2002 surveys, respectively (117,118; unpublished data, CDC National Immunization Program, 2004) (Table 2). Vaccination of health-care workers has been associated with reduced work absenteeism (9) and fewer deaths among nursing home patients (119,120). Limited information is available regarding use of influenza vaccine among pregnant women. Among women aged 18-44 years without diabetes responding to the 2001 Behavioral Risk Factor Surveillance System, those reporting they were pregnant were less likely to report influenza vaccination during the past 12 months (13.7%) than those not pregnant (16.8%) (121). Only 12% of pregnant women reported vaccination according to 2002 NHIS data, excluding pregnant women who reported diabetes, heart disease, lung disease, and other selected high-risk conditions (unpublished data, CDC National Immunization Program, 2004) (Table 2).
Although not directly measuring influenza vaccination among women who were past the first trimester of pregnancy during influenza season, these data indicate low compliance with the ACIP recommendations for pregnant women. In a study of influenza vaccine acceptance by pregnant women, 71% of those who were offered the vaccine chose to be vaccinated (122). However, a 1999 survey of obstetricians and gynecologists determined that only 39% administered influenza vaccine to obstetric patients, although 86% agreed that pregnant women's risk for influenza-related morbidity and mortality increases during the last two trimesters (123).

Table 2 footnotes (table not reproduced here):
* As recommended by the Advisory Committee on Immunization Practices.
† CI = confidence interval.
§ Persons categorized as being at high risk for influenza-related complications self-reported one or more of the following: 1) ever being told by a physician they had diabetes, emphysema, coronary heart disease, angina, heart attack, or other heart condition; 2) having a diagnosis of cancer during the past 12 months (excluding nonmelanoma skin cancer) or ever being told by a physician they had lymphoma, leukemia, or blood cancer; 3) being told by a physician they have chronic bronchitis or weak or failing kidneys; or 4) reporting an asthma episode or attack during the past 12 months.
¶ Aged 18-44 years, pregnant at the time of the survey, and without high-risk conditions.
** Adults were classified as health-care workers if they were currently employed in a health-care occupation or in a health-care industry setting, on the basis of standard occupation and industry categories recoded in groups by CDC's National Center for Health Statistics.
†† Interviewed adult in each household containing at least one of the following: a child aged <2 years, an adult aged ≥65 years, or any person aged 2-64 years at high risk (see footnote § above).

Recent data indicate that self-report of influenza vaccination among adults, compared with extraction from the medical record, is both sensitive and specific. Patient self-reports should be accepted as evidence of influenza vaccination in clinical practice (124). However, information on the validity of parents' reports of pediatric influenza vaccination is not yet available.

# Recommendations for Using Inactivated and Live, Attenuated Influenza Vaccines

Both the inactivated influenza vaccine and LAIV can be used to reduce the risk for influenza. LAIV is approved for use only among healthy persons aged 5-49 years. Inactivated influenza vaccine is approved for persons aged ≥6 months, including those with high-risk conditions (see following sections on inactivated influenza vaccine and live, attenuated influenza vaccine).

# Target Groups for Vaccination

# Persons at Increased Risk for Complications

Vaccination with inactivated influenza vaccine is recommended for the following persons who are at increased risk for complications from influenza: • (125).

# Persons Aged 50-64 Years

Vaccination is recommended for persons aged 50-64 years because this group has an increased prevalence of persons with high-risk conditions. In 2000, approximately 42 million persons in the United States were aged 50-64 years, of whom 12 million (29%) had one or more high-risk medical conditions (125). Influenza vaccine has been recommended for this entire age group to increase the low vaccination rates among those in the group with high-risk conditions (see preceding section). Age-based strategies are more successful in increasing vaccine coverage than patient-selection strategies based on medical conditions.
Persons aged 50-64 years without high-risk conditions also receive benefit from vaccination in the form of decreased rates of influenza illness, decreased work absenteeism, and decreased need for medical visits and medication, including antibiotics (9-12). Further, 50 years is an age when other preventive services begin and when routine assessment of vaccination and other preventive services has been recommended (126,127).

# Persons Who Can Transmit Influenza to Those at High Risk

Persons who are clinically or subclinically infected can transmit influenza virus to persons at high risk for complications from influenza. Decreasing transmission of influenza from caregivers and household contacts to persons at high risk might reduce influenza-related deaths among persons at high risk. Evidence from two studies indicates that vaccination of health-care personnel is associated with decreased deaths among nursing home patients (119,120). Health-care workers should be vaccinated against influenza annually, and facilities that employ health-care workers are strongly encouraged to provide vaccine to workers by using approaches that maximize vaccination rates. Doing so will protect health-care workers, their patients, and communities; improve prevention and patient safety; and reduce disease burden. Health-care workers' influenza vaccination rates should be regularly measured and reported. Although rates of health-care worker vaccination are typically <40%, with moderate effort, organized campaigns can attain higher rates of vaccination among this population (118). The following groups should be vaccinated:
• physicians, nurses, and other personnel in both hospital and outpatient-care settings, including medical emergency response workers (e.g., paramedics and emergency medical technicians);
• employees of nursing homes and chronic-care facilities who have contact with patients or residents;
• employees of assisted living and other residences for persons in groups at high risk;
• persons who provide home care to persons in groups at high risk; and
• household contacts (including children) of persons in groups at high risk.
In addition, because children aged 0-23 months are at increased risk for influenza-related hospitalization (52-54), vaccination is recommended for their household contacts and out-of-home caregivers, particularly for contacts of children aged 0-5 months, because influenza vaccines have not been approved by FDA for use among children aged <6 months (see Healthy Young Children). Healthy persons aged 5-49 years in these groups who are not contacts of severely immunosuppressed persons (see Live, Attenuated Influenza Vaccine Recommendations) can receive either LAIV or inactivated influenza vaccine. All other persons in this group should receive inactivated influenza vaccine.

# Additional Information Regarding Vaccination of Specific Populations

# Pregnant Women

Influenza-associated excess deaths among pregnant women were documented during the pandemics of 1918-19 and 1957-58 (128-131). Case reports and limited studies also indicate that pregnancy can increase the risk for serious medical complications of influenza (132-136). An increased risk might result from 1) increases in heart rate, stroke volume, and oxygen consumption; 2) decreases in lung capacity; and 3) changes in immunologic function during pregnancy.
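The absolute benefit of vaccination in this group can also be expressed as a number needed to vaccinate (NNV), the reciprocal of the absolute risk reduction (ARR). A minimal sketch, assuming the estimate cited in the study described next that vaccination prevents 1-2 hospitalizations per 1,000 pregnant women vaccinated (i.e., ARR = 0.001-0.002):

$$\mathrm{NNV} = \frac{1}{\mathrm{ARR}} = \frac{1}{0.001\ \text{to}\ 0.002} = 500\ \text{to}\ 1{,}000$$

That is, approximately 500-1,000 pregnant women would need to be vaccinated to prevent one influenza-related hospitalization.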
A study of the effect of influenza during 17 interpandemic influenza seasons demonstrated that the relative risk for hospitalization for selected cardiorespiratory conditions among pregnant women enrolled in Medicaid increased from 1.4 during weeks 14-20 of gestation to 4.7 during weeks 37-42, in comparison with women who were 1-6 months postpartum (137). Women in their third trimester of pregnancy were hospitalized at a rate (i.e., 250/100,000 pregnant women) comparable with that of nonpregnant women who had high-risk medical conditions. Researchers estimate that an average of 1-2 hospitalizations can be prevented for every 1,000 pregnant women vaccinated. Because of the increased risk for influenza-related complications, women who will be pregnant during the influenza season should be vaccinated. Vaccination can occur in any trimester. One study of influenza vaccination of >2,000 pregnant women demonstrated no adverse fetal effects associated with influenza vaccine (138).

# Healthy Young Children

Studies indicate that rates of hospitalization are higher among young children than older children when influenza viruses are in circulation (51-53,139,140). The increased rates of hospitalization are comparable with rates for other groups considered at high risk for influenza-related complications. However, the interpretation of these findings has been confounded by co-circulation of respiratory syncytial viruses, which are a cause of serious respiratory viral illness among children and which frequently circulate during the same time as influenza viruses (141-143). Two recent studies have attempted to separate the effects of respiratory syncytial viruses and influenza viruses on rates of hospitalization among children who do not have high-risk conditions (52,53). Both studies reported that otherwise healthy children aged <2 years, and possibly children aged 2-4 years, are at increased risk for influenza-related hospitalization compared with older healthy children (Table 1). Among the Tennessee Medicaid population during 1973-1993, healthy children aged 6 months-<3 years had rates of influenza-associated hospitalization comparable with or higher than rates among children aged 3-14 years with high-risk conditions (Table 1) (52,54). Another Tennessee study reported a hospitalization rate of 3-4/1,000 healthy children aged <2 years per year for laboratory-confirmed influenza (32). Because children aged 6-23 months are at substantially increased risk for influenza-related hospitalizations, ACIP recommends vaccination of all children in this age group (144). ACIP continues to recommend influenza vaccination of persons aged ≥6 months who have high-risk medical conditions. The current inactivated influenza vaccine is not approved by FDA for use among children aged <6 months, the pediatric group at greatest risk for influenza-related complications (52). Vaccinating their household contacts and out-of-home caregivers might decrease the probability of influenza infection among these children. Beginning in March 2003, the group of children eligible for influenza vaccine coverage under the Vaccines for Children (VFC) program was expanded to include all VFC-eligible children aged 6-23 months and VFC-eligible children aged 2-18 years who are household contacts of children aged 0-23 months (145).

# Persons Infected with HIV

Limited information is available regarding the frequency and severity of influenza illness and the benefits of influenza vaccination among persons with HIV infection (146,147).
However, a retrospective study of young and middle-aged women enrolled in Tennessee's Medicaid program determined that the attributable risk for cardiopulmonary hospitalizations among women with HIV infection was higher during influenza seasons than during the peri-influenza periods. The risk for hospitalization was higher for HIV-infected women than for women with other well-recognized high-risk conditions, including chronic heart and lung diseases (148). Another study estimated that the risk for influenza-related death was 9.4-14.6/10,000 persons with acquired immunodeficiency syndrome (AIDS), compared with 0.09-0.10/10,000 among all persons aged 25-54 years and 6.4-7.0/10,000 among persons aged ≥65 years (149). Other reports indicate that influenza symptoms might be prolonged and the risk for complications from influenza increased for certain HIV-infected persons (150-152). Influenza vaccination has been demonstrated to produce substantial antibody titers against influenza among vaccinated HIV-infected persons who have minimal AIDS-related symptoms and high CD4+ T-lymphocyte cell counts (153-156). A limited, randomized, placebo-controlled trial determined that influenza vaccine was highly effective in preventing symptomatic, laboratory-confirmed influenza infection among HIV-infected persons with a mean of 400 CD4+ T-lymphocyte cells/mm³; a limited number of persons with CD4+ T-lymphocyte cell counts of <200 were included in that study (147). A nonrandomized study among HIV-infected persons determined that influenza vaccination was most effective among persons with >100 CD4+ cells and among those with <30,000 viral copies of HIV type 1/mL (152). Among persons who have advanced HIV disease and low CD4+ T-lymphocyte cell counts, influenza vaccine might not induce protective antibody titers (155,156); a second dose of vaccine does not improve the immune response in these persons (156,157). One study determined that HIV RNA (ribonucleic acid) levels increased transiently in one HIV-infected person after influenza infection (158). Studies have demonstrated a transient (i.e., 2-4 week) increase in replication of HIV-1 in the plasma or peripheral blood mononuclear cells of HIV-infected persons after vaccine administration (155,159). Other studies using similar laboratory techniques have not documented a substantial increase in the replication of HIV (160-163). Deterioration of CD4+ T-lymphocyte cell counts or progression of HIV disease has not been demonstrated among HIV-infected persons after influenza vaccination compared with unvaccinated persons (156,164). Limited information is available concerning the effect of antiretroviral therapy on increases in HIV RNA levels after either natural influenza infection or influenza vaccination (146,165). Because influenza can result in serious illness, and because influenza vaccination can result in the production of protective antibody titers, vaccination will benefit HIV-infected persons, including HIV-infected pregnant women.

# Breastfeeding Mothers

Influenza vaccine does not affect the safety of mothers who are breastfeeding or their infants. Breastfeeding does not adversely affect the immune response and is not a contraindication for vaccination.

# Travelers

The risk for exposure to influenza during travel depends on the time of year and the destination. In the tropics, influenza can occur throughout the year.
In the temperate regions of the Southern Hemisphere, the majority of influenza activity occurs during April-September. In temperate climate zones of the Northern and Southern Hemispheres, travelers also can be exposed to influenza during the summer, especially when traveling as part of large organized tourist groups (e.g., on cruise ships) that include persons from areas of the world where influenza viruses are circulating (166,167). Persons at high risk for complications of influenza who were not vaccinated with influenza vaccine during the preceding fall or winter should consider receiving influenza vaccine before travel if they plan to
• travel to the tropics,
• travel with organized tourist groups at any time of year, or
• travel to the Southern Hemisphere during April-September.
No information is available regarding the benefits of revaccinating persons before summer travel who were already vaccinated in the preceding fall. Persons at high risk who receive the previous season's vaccine before travel should be revaccinated with the current vaccine the following fall or winter. Persons aged ≥50 years and others at high risk should consult with their physicians before embarking on travel during the summer to discuss the symptoms and risks for influenza and the advisability of carrying antiviral medications for either prophylaxis or treatment of influenza.

# General Population

In addition to the groups for which annual influenza vaccination is recommended, physicians should administer influenza vaccine to any person who wishes to reduce the likelihood of becoming ill with influenza (the vaccine can be administered to children aged ≥6 months), depending on vaccine availability (see Influenza Vaccine Supply). Persons who provide essential community services should be considered for vaccination to minimize disruption of essential activities during influenza outbreaks. Students or other persons in institutional settings (e.g., those who reside in dormitories) should be encouraged to receive vaccine to minimize the disruption of routine activities during epidemics.

# Comparison of LAIV with Inactivated Influenza Vaccine

Both inactivated influenza vaccine and LAIV are available to reduce the risk of influenza infection and illness. However, the vaccines also differ in key ways (Table 3).

# Major Similarities

LAIV and inactivated influenza vaccine contain strains of influenza viruses that are antigenically equivalent to the annually recommended strains: one influenza A (H3N2) virus, one A (H1N1) virus, and one B virus. Each year, one or more virus strains might be changed on the basis of global surveillance for influenza viruses and the emergence and spread of new strains. Viruses for both vaccines are grown in eggs. Both vaccines are administered annually to provide optimal protection against influenza infection (Table 3).

# Major Differences

Inactivated influenza vaccine contains killed viruses, whereas LAIV contains attenuated viruses still capable of replication. LAIV is administered intranasally by sprayer, whereas inactivated influenza vaccine is administered intramuscularly by injection. LAIV is more expensive than inactivated influenza vaccine. LAIV is approved for use only among healthy persons aged 5-49 years; inactivated influenza vaccine is approved for use among persons aged ≥6 months, including those who are healthy and those with chronic medical conditions (Table 3).
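The age and health-status rules summarized above reduce to a small amount of branching logic. The following is a minimal illustrative sketch of that logic only; the function name and structure are hypothetical, and Table 3, current approvals, and the package inserts remain the authoritative sources.

```python
def approved_vaccines(age_years: float, healthy: bool) -> list:
    """Vaccines a person could receive under the approvals summarized above."""
    options = []
    if age_years >= 0.5:                    # inactivated vaccine: aged >=6 months,
        options.append("inactivated")       # healthy or with chronic conditions
    if healthy and 5 <= age_years <= 49:    # LAIV: healthy persons aged 5-49 years
        options.append("LAIV")
    return options

assert approved_vaccines(70, healthy=False) == ["inactivated"]
assert approved_vaccines(30, healthy=True) == ["inactivated", "LAIV"]
assert approved_vaccines(0.25, healthy=True) == []  # aged <6 months: neither product
```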
# Inactivated Influenza Vaccine Recommendations

Persons Who Should Not Be Vaccinated with Inactivated Influenza Vaccine

Inactivated influenza vaccine should not be administered to persons known to have anaphylactic hypersensitivity to eggs or to other components of the influenza vaccine without first consulting a physician (see Side Effects and Adverse Reactions). Prophylactic use of antiviral agents is an option for preventing influenza among such persons. However, persons who have a history of anaphylactic hypersensitivity to vaccine components but who are also at high risk for complications from influenza can benefit from vaccine after appropriate allergy evaluation and desensitization. Information regarding vaccine components is located in package inserts from each manufacturer. Persons with acute febrile illness usually should not be vaccinated until their symptoms have abated. However, minor illnesses with or without fever do not contraindicate use of influenza vaccine, particularly among children with mild upper respiratory tract infection or allergic rhinitis.

# Dosage

Dosage recommendations vary according to age group (Table 4). Among previously unvaccinated children aged <9 years, 2 doses administered ≥1 month apart are recommended for satisfactory antibody responses. If possible, the second dose should be administered before December. If a child aged <9 years receiving vaccine for the first time does not receive a second dose of vaccine within the same season, only 1 dose of vaccine should be administered the following season. Two doses are not required at that time. Among adults, studies have indicated limited or no improvement in antibody response when a second dose is administered during the same season (168-170). Even when the current influenza vaccine contains one or more antigens administered in previous years, annual vaccination with the current vaccine is necessary because immunity declines during the year after vaccination (171,172). Vaccine prepared for a previous influenza season should not be administered to provide protection for the current season.

# Route

The intramuscular route is recommended for influenza vaccine. Adults and older children should be vaccinated in the deltoid muscle. A needle length ≥1 inch can be considered for these age groups because needles <1 inch might be of insufficient length to penetrate muscle tissue in certain adults and older children (173). Infants and young children should be vaccinated in the anterolateral aspect of the thigh (64). ACIP recommends a needle length of 7/8-1 inch for children aged <12 months for intramuscular vaccination into the anterolateral thigh. When injecting into the deltoid muscle among children with adequate deltoid muscle mass, a needle length of 7/8-1.25 inches is recommended (64).

# Side Effects and Adverse Reactions

When educating patients regarding potential side effects, clinicians should emphasize that 1) inactivated influenza vaccine contains noninfectious killed viruses and cannot cause influenza; and 2) coincidental respiratory disease unrelated to influenza vaccination can occur after vaccination.

# Local Reactions

In placebo-controlled studies among adults, the most frequent side effect of vaccination is soreness at the vaccination site (affecting 10%-64% of patients) that lasts <2 days (12,174-176). These local reactions typically are mild and rarely interfere with the person's ability to conduct usual daily activities.
One blinded, randomized, cross-over study among 1,952 adults and children with asthma demonstrated that only body aches were reported more frequently after inactivated influenza vaccine (25.1%) than placebo injection (20.8%) (177). One study (79) reported local pain and swelling among 20%-28% of children with asthma aged 9 months-18 years, and another study (77) reported local reactions among 23% of children aged 6 months-4 years with chronic heart or lung disease. A different study (76) reported no difference in local reactions among 53 children aged 6 months-6 years with high-risk medical conditions or among 305 healthy children aged 3-12 years in a placebo-controlled trial of inactivated influenza vaccine. In a study of 12 children aged 5-32 months, no substantial local or systemic reactions were noted (178).

# Systemic Reactions

Fever, malaise, myalgia, and other systemic symptoms can occur after vaccination with inactivated vaccine and most often affect persons who have had no prior exposure to the influenza virus antigens in the vaccine (e.g., young children) (179,180). These reactions begin 6-12 hours after vaccination and can persist for 1-2 days. Recent placebo-controlled trials demonstrate that among older persons and healthy young adults, administration of split-virus influenza vaccine is not associated with higher rates of systemic symptoms (e.g., fever, malaise, myalgia, and headache) when compared with placebo injections (12,174-176). Less information from published studies is available for children, compared with adults. However, in a randomized cross-over study among both children and adults with asthma, no increase in asthma exacerbations was reported for either age group (177). An analysis of 215,600 children aged <18 years and 8,476 children aged 6-23 months enrolled in one of five health maintenance organizations reported no increase in biologically plausible medically attended events during the 2 weeks after inactivated influenza vaccination, compared with control periods 3-4 weeks before and after vaccination (181). In a study of 791 healthy children (68), postvaccination fever was noted among 11.5% of children aged 1-5 years, 4.6% of children aged 6-10 years, and 5.1% of children aged 11-15 years. Among children with high-risk medical conditions, one study of 52 children aged 6 months-4 years reported fever among 27% and irritability and insomnia among 25% (77); and a study among 33 children aged 6-18 months reported that one child had irritability and one had a fever and seizure after vaccination (182). No placebo comparison was made in these studies.
However, in pediatric trials of A/New Jersey/76 swine influenza vaccine, no difference was reported between placebo and split-virus vaccine groups in febrile reactions after injection, although the vaccine was associated with mild local tenderness or erythema (76). Limited data regarding potential adverse events after influenza vaccination are available from the Vaccine Adverse Event Reporting System (VAERS). During January 1, 1991-January 23, 2003, VAERS received 1,072 reports of adverse events among children aged <18 years, including 174 reports of adverse events among children aged 6-23 months. The number of influenza vaccine doses received by children during this time period is unknown. The most frequently reported events among children were fever, injection-site reactions, and rash (unpublished data, CDC, 2003). Because of the limitations of spontaneous reporting systems, determining causality for specific types of adverse events, with the exception of injection-site reactions, is usually not possible by using VAERS data alone. Health-care professionals should promptly report all clinically significant adverse events after influenza vaccination of children to VAERS, even if the health-care professional is not certain that the vaccine caused the event. The Institute of Medicine has specifically recommended reporting of potential neurologic complications (e.g., demyelinating disorders such as Guillain-Barré syndrome [GBS]), although no evidence exists of a causal relationship between influenza vaccine and neurologic disorders in children. Immediate (presumably allergic) reactions (e.g., hives, angioedema, allergic asthma, and systemic anaphylaxis) rarely occur after influenza vaccination (183). These reactions probably result from hypersensitivity to certain vaccine components; the majority of reactions probably are caused by residual egg protein. Although current influenza vaccines contain only a limited quantity of egg protein, this protein can induce immediate hypersensitivity reactions among persons who have severe egg allergy. Persons who have had hives or swelling of the lips or tongue, or who have experienced acute respiratory distress or collapse after eating eggs should consult a physician for appropriate evaluation to help determine if vaccine should be administered. Persons who have documented immunoglobulin E (IgE)-mediated hypersensitivity to eggs, including those who have had occupational asthma or other allergic responses to egg protein, might also be at increased risk for allergic reactions to influenza vaccine, and consultation with a physician should be considered. Protocols have been published for safely administering influenza vaccine to persons with egg allergies (184-186). Immunogenicity and side effects of split- and whole-virus vaccines are similar among adults when vaccines are administered at the recommended dosage. Hypersensitivity reactions to any vaccine component can occur.
Although exposure to vaccines containing thimerosal can lead to induction of hypersensitivity, the majority of patients do not have reactions to thimerosal when it is administered as a component of vaccines, even when patch or intradermal tests for thimerosal indicate hypersensitivity (187,188). When reported, hypersensitivity to thimerosal usually has consisted of local, delayed hypersensitivity reactions (187).

# Guillain-Barré Syndrome

The 1976 swine influenza vaccine was associated with an increased frequency of GBS (189,190). Among persons who received the swine influenza vaccine in 1976, the rate of GBS was <10 cases/1 million persons vaccinated. The risk for influenza vaccine-associated GBS was higher among persons aged ≥25 years than among persons aged <25 years (189). Evidence for a causal relation of GBS with subsequent vaccines prepared from other influenza viruses is unclear. Obtaining strong epidemiologic evidence for a possible limited increase in risk is difficult for such a rare condition as GBS, which has an annual incidence of 10-20 cases/1 million adults (191). More definitive data probably will require using other methodologies (e.g., laboratory studies of the pathophysiology of GBS). During three of four influenza seasons studied during 1977-1991, the overall relative risk estimates for GBS after influenza vaccination were slightly elevated but were not statistically significant in any of these studies (192-194). However, in a study of the 1992-93 and 1993-94 seasons, the overall relative risk for GBS was 1.7 (95% CI = 1.0-2.8; p = 0.04) during the 6 weeks after vaccination, representing approximately 1 additional case of GBS/1 million persons vaccinated. The combined number of GBS cases peaked 2 weeks after vaccination (195). Thus, investigations to date indicate no substantial increase in GBS associated with influenza vaccines (other than the swine influenza vaccine in 1976), and that, if influenza vaccine does pose a risk, it is probably slightly more than one additional case/1 million persons vaccinated; a rough reconstruction of this estimate appears at the end of this section. Cases of GBS after influenza infection have been reported, but no epidemiologic studies have documented such an association (196,197). Substantial evidence exists that multiple infectious illnesses, most notably Campylobacter jejuni, as well as upper respiratory tract infections, are associated with GBS (191,198-200). Even if GBS were a true side effect of vaccination in the years after 1976, the estimated risk for GBS of approximately 1 additional case/1 million persons vaccinated is substantially less than the risk for severe influenza, which can be prevented by vaccination among all age groups, especially persons aged ≥65 years and those who have medical indications for influenza vaccination (Table 1) (see Hospitalizations and Deaths from Influenza). The potential benefits of influenza vaccination in preventing serious illness, hospitalization, and death substantially outweigh the possible risks for experiencing vaccine-associated GBS. The average case fatality ratio for GBS is 6% and increases with age (191,201). No evidence indicates that the case fatality ratio for GBS differs among vaccinated persons and those not vaccinated. The incidence of GBS among the general population is low, but persons with a history of GBS have a substantially greater likelihood of subsequently experiencing GBS than persons without such a history (192,202).
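As referenced above, the estimate of approximately 1 additional case/1 million persons vaccinated can be reconstructed from the cited background incidence (10-20 cases/1 million adults per year) and the relative risk of 1.7 during the 6 weeks after vaccination. This is back-of-envelope arithmetic for orientation, not a formal risk estimate:

\[
\text{background risk (6 weeks)} \approx \frac{6}{52} \times (10\ \text{to}\ 20) \approx 1.2\ \text{to}\ 2.3\ \text{cases/1 million}
\]
\[
\text{excess risk} \approx (\mathrm{RR} - 1) \times \text{background} \approx 0.7 \times (1.2\ \text{to}\ 2.3) \approx 0.8\ \text{to}\ 1.6\ \text{cases/1 million vaccinated}
\]

That is, on the order of 1 additional case/1 million persons vaccinated, consistent with the published estimate (195).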
Because persons with a history of GBS are at greater baseline risk, the likelihood of coincidentally experiencing GBS after influenza vaccination is expected to be greater among persons with a history of GBS than among persons with no history of this syndrome. Whether influenza vaccination specifically might increase the risk for recurrence of GBS is unknown; therefore, avoiding vaccinating persons who are not at high risk for severe influenza complications and who are known to have experienced GBS within 6 weeks after a previous influenza vaccination is prudent. As an alternative, physicians might consider using influenza antiviral chemoprophylaxis for these persons. Although data are limited, for the majority of persons who have a history of GBS and who are at high risk for severe complications from influenza, the established benefits of influenza vaccination justify yearly vaccination.

# Live, Attenuated Influenza Vaccine Recommendations

Background

Description and Action Mechanisms. LAIVs have been in development since the 1960s in the United States, where they have been evaluated as mono-, bi-, and trivalent formulations (203-207). The LAIV licensed for use in the United States beginning in 2003 is produced by MedImmune, Inc. (Gaithersburg, Maryland; http://www.medimmune.com) and marketed under the name FluMist™. It is a live, trivalent, intranasally administered vaccine that is
• attenuated, producing mild or no signs or symptoms related to influenza virus infection;
• temperature-sensitive, a property that limits the replication of the vaccine viruses at 38°C-39°C and thus restricts LAIV viruses from replicating efficiently in human lower airways; and
• cold-adapted, replicating efficiently at 25°C, a temperature that is permissive for replication of LAIV viruses but restrictive for replication of different wild-type viruses.
In animal studies, LAIV viruses replicate in the mucosa of the nasopharynx, inducing protective immunity against viruses included in the vaccine, but replicate inefficiently in the lower airways or lungs. The first step in developing an LAIV was the derivation of two stably attenuated master donor viruses (MDV), one for type A and one for type B influenza viruses. The two MDVs each acquired the cold-adapted, temperature-sensitive, attenuated phenotypes through serial passage in viral culture conducted at progressively lower temperatures. The vaccine viruses in LAIV are reassortant viruses containing genes from these MDVs that confer attenuation, temperature sensitivity, and cold adaptation, and genes from the recommended contemporary wild-type influenza viruses, encoding the surface antigens hemagglutinin (HA) and neuraminidase (NA). Thus, MDVs provide the stably attenuated vehicles for presenting influenza HA and NA antigens, to which the protective antibody response is directed, to the immune system. The reassortant vaccine viruses are grown in embryonated hens' eggs. After the vaccine is formulated and inserted into individual sprayers for nasal administration, the vaccine must be stored at -15°C or colder. The immunogenicity of the approved LAIV has been assessed in multiple studies (96,208-213), which included approximately 100 children aged 5-17 years and approximately 300 adults aged 18-49 years. LAIV virus strains replicate primarily in nasopharyngeal epithelial cells. The protective mechanisms induced by vaccination with LAIV are not completely understood but appear to involve both serum and nasal secretory antibodies.
No single laboratory measurement closely correlates with protective immunity induced by LAIV.

Shedding and Transmission of Vaccine Viruses. Available data indicate that both children and adults vaccinated with LAIV can shed vaccine viruses for >2 days after vaccination, although in lower titers than typically occur with shedding of wild-type influenza viruses. Shedding should not be equated with person-to-person transmission of vaccine viruses, although, in rare instances, shed vaccine viruses can be transmitted from vaccinees to nonvaccinated persons. One unpublished study in a child care center setting assessed transmissibility of vaccine viruses from 98 vaccinated to 99 unvaccinated subjects, all aged 8-36 months. Eighty percent of vaccine recipients shed one or more virus strains, with a mean of 7.6 days' duration (214). One vaccine-type influenza B isolate was recovered from a placebo recipient and was confirmed to be vaccine-type virus. The type B isolate retained the cold-adapted, temperature-sensitive, attenuated phenotype, and it possessed the same genetic sequence as a virus shed from a vaccine recipient in the same children's play group. The placebo recipient from whom the influenza type B vaccine virus was isolated did not exhibit symptoms that were different from those experienced by vaccine recipients. The estimated probability of acquiring vaccine virus after close contact with a single LAIV recipient in this child care population was 0.58%-2.4%. One study assessing shedding of vaccine viruses in 20 healthy vaccinated adults aged 18-49 years demonstrated that the majority of shedding occurred within the first 3 days after vaccination, although one subject was noted to shed virus on day 7 after vaccine receipt. No subject shed vaccine viruses >10 days after vaccination. Duration or type of symptoms associated with receipt of LAIV did not correlate with duration of shedding of vaccine viruses. Person-to-person transmission of vaccine viruses was not assessed in this study (215).

Stability of Vaccine Viruses. In clinical trials, viruses shed by vaccine recipients have been phenotypically stable. In one study, nasal and throat swab specimens were collected from 17 study participants for 2 weeks after vaccine receipt (216). Virus isolates were analyzed by multiple genetic techniques. All isolates retained the LAIV genotype after replication in the human host, and all retained the cold-adapted and temperature-sensitive phenotypes.

# Using Live, Attenuated Influenza Vaccine

LAIV is an option for vaccination of healthy persons aged 5-49 years, including persons in close contact with groups at high risk and those wanting to avoid influenza. Possible advantages of LAIV include its potential to induce a broad mucosal and systemic immune response, its ease of administration, and the acceptability of an intranasal rather than intramuscular route of administration.

# Persons Who Should Not Be Vaccinated with LAIV

The following populations should not be vaccinated with LAIV:
• persons aged <5 years or aged ≥50 years;
• persons with asthma, reactive airways disease, or other chronic disorders of the pulmonary or cardiovascular systems; persons with other underlying medical conditions, including metabolic diseases (e.g., diabetes mellitus), renal dysfunction, and hemoglobinopathies; or persons with known or suspected immunodeficiency diseases or who are receiving immunosuppressive therapies;
• children or adolescents receiving aspirin or other salicylates (because of the association of Reye syndrome with wild-type influenza infection);
• persons with a history of GBS;
• pregnant women; or
• persons with a history of hypersensitivity, including anaphylaxis, to any of the components of LAIV or to eggs.

# Close Contacts of Persons at High Risk for Complications from Influenza

Close contacts of persons at high risk for complications from influenza should receive influenza vaccine to reduce transmission of wild-type influenza viruses to persons at high risk.
Use of inactivated influenza vaccine is preferred for vaccinating household members, health-care workers, and others who have close contact with severely immunosuppressed persons (e.g., patients with hematopoietic stem cell transplants) during those periods in which the immunosuppressed person requires care in a protective environment. The rationale for not using LAIV among health-care workers caring for such patients is the theoretical risk that a live, attenuated vaccine virus could be transmitted to the severely immunosuppressed person and cause disease. No preference exists for inactivated influenza vaccine use by health-care workers or other persons who have close contact with persons with lesser degrees of immunosuppression (e.g., persons with diabetes, persons with asthma taking corticosteroids, or persons infected with human immunodeficiency virus), and no preference exists for inactivated influenza vaccine use by health-care workers or other healthy persons aged 5-49 years in close contact with all other groups at high risk. If a health-care worker receives LAIV, that worker should refrain from contact with severely immunosuppressed patients as described previously for 7 days after vaccine receipt. Hospital visitors who have received LAIV should refrain from contact with severely immunosuppressed persons for 7 days after vaccination; however, such persons need not be excluded from visitation of patients who are not severely immunosuppressed.

# Personnel Who May Administer LAIV

Low-level introduction of vaccine viruses into the environment is likely unavoidable when administering LAIV. The risk of acquiring vaccine viruses from the environment is unknown but likely to be limited. Severely immunosuppressed persons should not administer LAIV. However, other persons at high risk for influenza complications may administer LAIV. These include persons with underlying medical conditions placing them at high risk or who are likely to be at risk, including pregnant women, persons with asthma, and persons aged ≥50 years.

# LAIV Dosage and Administration

LAIV is intended for intranasal administration only and should not be administered by the intramuscular, intradermal, or intravenous route. LAIV must be stored at -15°C or colder. LAIV should not be stored in a frost-free freezer (because the temperature might cycle above -15°C) unless a manufacturer-supplied freezer box is used. LAIV must be thawed before administration. This can be accomplished by holding an individual sprayer in the palm of the hand until thawed, with subsequent immediate administration. Alternatively, the vaccine can be thawed in a refrigerator and stored at 2°C-8°C for <24 hours before use. Vaccine should not be refrozen after thawing. LAIV is supplied in a prefilled single-use sprayer containing 0.5 mL of vaccine. Approximately 0.25 mL (i.e., half of the total sprayer contents) is sprayed into the first nostril while the recipient is in the upright position. An attached dose-divider clip is removed from the sprayer to administer the second half of the dose into the other nostril. If the vaccine recipient sneezes after administration, the dose should not be repeated. LAIV should be administered annually according to the following schedule:
• Children aged 5-8 years previously unvaccinated at any time with either LAIV or inactivated influenza vaccine should receive 2 doses of LAIV separated by 6-10 weeks.
• Children aged 5-8 years previously vaccinated at any time with either LAIV or inactivated influenza vaccine should receive 1 dose of LAIV. They do not require a second dose.
• Persons aged 9-49 years should receive 1 dose of LAIV.
LAIV can be administered to persons with minor acute illnesses (e.g., diarrhea or mild upper respiratory tract infection with or without fever). However, if clinical judgment indicates nasal congestion is present that might impede delivery of the vaccine to the nasopharyngeal mucosa, deferral of administration should be considered until resolution of the illness. Whether concurrent administration of LAIV with other vaccines affects the safety or efficacy of either LAIV or the simultaneously administered vaccine is unknown. In the absence of specific data indicating interference, following the ACIP general recommendations for immunization is prudent (64). Inactivated vaccines do not interfere with the immune response to other inactivated vaccines or to live vaccines. An inactivated vaccine can be administered either simultaneously or at any time before or after LAIV. Two live vaccines not administered on the same day should be administered ≥4 weeks apart when possible.

# LAIV and Use of Influenza Antiviral Medications

The effect on safety and efficacy of LAIV coadministration with influenza antiviral medications has not been studied. However, because influenza antivirals reduce replication of influenza viruses, LAIV should not be administered until 48 hours after cessation of influenza antiviral therapy, and influenza antiviral medications should not be administered for 2 weeks after receipt of LAIV.

# LAIV Storage

LAIV must be stored at -15°C or colder. LAIV should not be stored in a frost-free freezer because the temperature might cycle above -15°C, unless a manufacturer-supplied freezer box or other strategy is used. LAIV can be thawed in a refrigerator and stored at 2°C-8°C for <24 hours before use. It should not be refrozen after thawing. Additional information is available from Wyeth Product Quality (1-800-411-0086) or at http://www.FluMist.com.

# Side Effects and Adverse Reactions

Twenty prelicensure clinical trials assessed the safety of the approved LAIV. In these combined studies, approximately 28,000 doses of the vaccine were administered to >20,000 subjects. A subset of these trials was randomized, placebo-controlled studies in which >4,000 healthy children aged 5-17 years and >2,000 healthy adults aged 18-49 years were vaccinated. The incidence of adverse events possibly complicating influenza (e.g., pneumonia, bronchitis, bronchiolitis, or central nervous system events) was not statistically different among LAIV and placebo recipients aged 5-49 years.

Children. Signs and symptoms reported more often among vaccine recipients than placebo recipients included runny nose or nasal congestion (20%-75%), headache (2%-46%), fever (0%-26%), vomiting (3%-13%), abdominal pain (2%), and myalgias (0%-21%) (208,211,213,217-219). These symptoms were associated more often with the first dose and were self-limited. In a subset of healthy children aged 60-71 months from one clinical trial (92,93), certain signs and symptoms were reported more often among LAIV recipients after the first dose (n = 214) than placebo recipients (n = 95) (e.g., runny nose, 48.1% versus 44.2%; headache, 17.8% versus 11.6%; vomiting, 4.7% versus 3.2%; myalgias, 6.1% versus 4.2%), but these differences were not statistically significant.
Unpublished data from a study including subjects aged 1-17 years indicated an increase in asthma or reactive airways disease in the subset aged 12-59 months. Because of this, LAIV is not approved for use among children aged <60 months.

Adults. Among adults, runny nose or nasal congestion (28%-78%), headache (16%-44%), and sore throat (15%-27%) have been reported more often among vaccine recipients than placebo recipients (94,220,221). In one clinical trial (94), among a subset of healthy adults aged 18-49 years, signs and symptoms reported more frequently among LAIV recipients (n = 2,548) than placebo recipients (n = 1,290) within 7 days after each dose included cough (13.9% versus 10.8%); runny nose (44.5% versus 27.1%); sore throat (27.8% versus 17.1%); chills (8.6% versus 6.0%); and tiredness/weakness (25.7% versus 21.6%).

Safety Among Groups at High Risk from Influenza-Related Morbidity. Until additional data are acquired, persons at high risk for experiencing complications from influenza infection (e.g., immunocompromised patients; patients with asthma, cystic fibrosis, or chronic obstructive pulmonary disease; or persons aged ≥65 years) should not be vaccinated with LAIV. Protection from influenza among these groups should be accomplished by using inactivated influenza vaccine.

Serious Adverse Events. Serious adverse events among healthy children aged 5-17 years or healthy adults aged 18-49 years occurred at a rate of <1%. Surveillance should continue for adverse events that might not have been detected in previous studies. Health-care professionals should promptly report all clinically significant adverse events after LAIV administration to VAERS, as recommended for inactivated influenza vaccine.

# Recommended Vaccines for Different Age Groups

When vaccinating children aged 6 months-3 years, health-care providers should use inactivated influenza vaccine that has been approved by FDA for this age group. Inactivated influenza vaccine from Aventis Pasteur, Inc. (FluZone split-virus) is approved for use among persons aged ≥6 months. Inactivated influenza vaccine from Chiron (Fluvirin) is labeled in the United States for use only among persons aged ≥4 years because data to demonstrate efficacy among younger persons have not been provided to FDA. Live, attenuated influenza vaccine from MedImmune (FluMist) is approved for use by healthy persons aged 5-49 years (Table 5).

# Timing of Annual Influenza Vaccination

The annual supply of inactivated influenza vaccine and the timing of its distribution cannot be guaranteed in any year. Information regarding the supply of 2004-05 vaccine might not be available until late summer or early fall 2004. To allow vaccine providers to plan for the upcoming vaccination season, taking into account the yearly possibility of vaccine delays or shortages and the need to ensure vaccination of persons at high risk and their contacts, ACIP recommends that vaccine campaigns conducted in October focus their efforts primarily on persons at increased risk for influenza complications and their contacts, including health-care workers. Campaigns conducted in November and later should continue to vaccinate persons at high risk and their contacts, but also vaccinate other persons who wish to decrease their risk for influenza infection. Vaccination efforts for all groups should continue into December and beyond.
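Condensed to its decision rule, the timing guidance above can be sketched as follows. The function and labels are hypothetical simplifications; they omit details covered elsewhere in this report, such as the two-dose schedule for first-time vaccinees aged <9 years.

```python
def campaign_focus(month: str, priority: bool) -> str:
    """priority: persons at high risk, their household contacts, health-care workers."""
    if month in ("September", "October"):
        # October and earlier: focus on priority groups; per the guidance below,
        # a non-priority person who requests vaccine in October is still not refused.
        return "vaccinate" if priority else "target in November (vaccinate if requested)"
    # November onward: all groups, continuing into December and throughout the season.
    return "vaccinate"

print(campaign_focus("October", priority=True))    # vaccinate
print(campaign_focus("December", priority=False))  # vaccinate
```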
CDC and other public health agencies will assess the vaccine supply on a continuing basis throughout the manufacturing period and will make recommendations in the summer preceding the 2004-05 influenza season regarding the need for tiered timing of vaccination of different risk groups.

# Vaccination in October and November

The optimal time to vaccinate is usually during October-November. ACIP recommends that vaccine providers focus their vaccination efforts in October and earlier primarily on persons aged ≥50 years, persons aged <50 years at increased risk for influenza-related complications (including children aged 6-23 months), household contacts of persons at high risk (including out-of-home caregivers and household contacts of children aged 0-23 months), and health-care workers. Vaccination of children aged <9 years who are receiving vaccine for the first time should also begin in October or earlier because those persons need a booster dose 1 month after the initial dose. Efforts to vaccinate other persons who wish to decrease their risk for influenza infection should begin in November; however, if such persons request vaccination in October, vaccination should not be deferred. Materials to assist providers in prioritizing early vaccine are available at http://www.cdc.gov/flu/professionals/vaccination/index.htm (see also Travelers in this report).

# Timing of Organized Vaccination Campaigns

Persons planning substantial organized vaccination campaigns should consider scheduling these events after mid-October because the availability of vaccine in any location cannot be ensured consistently in early fall. Scheduling campaigns after mid-October will minimize the need for cancellations because vaccine is unavailable. Campaigns conducted before November should focus efforts on vaccination of persons aged ≥50 years, persons aged <50 years at increased risk for influenza-related complications (including children aged 6-23 months), health-care workers, and household contacts of persons at high risk (including children aged 0-23 months) to the extent feasible.

# Vaccination in December and Later

After November, many persons who should or want to receive influenza vaccine remain unvaccinated. In addition, substantial amounts of vaccine have remained unused during three of the past four influenza seasons. To improve vaccine coverage, influenza vaccine should continue to be offered in December and throughout the influenza season as long as vaccine supplies are available, even after influenza activity has been documented in the community. In the United States, seasonal influenza activity can begin to increase as early as October or November, but influenza activity has not reached peak levels in the majority of recent seasons until late December-early March (Table 6). Therefore, although the timing of influenza activity can vary by region, vaccine administered after November is likely to be beneficial in the majority of influenza seasons. Adults develop peak antibody protection against influenza infection 2 weeks after vaccination (222,223).

# Vaccination Before October

To avoid missed opportunities for vaccination of persons at high risk for serious complications, such persons should be offered vaccine beginning in September during routine health-care visits or during hospitalizations, if vaccine is available.
In facilities housing older persons (e.g., nursing homes), vaccination before October typically should be avoided because antibody levels in such persons can begin to decline within a limited time after vaccination (224). In addition, children aged <9 years who have not been previously vaccinated and who need 2 doses before the start of the influenza season can receive their first dose in September or earlier.

# Strategies for Implementing Vaccination Recommendations in Health-Care Settings

Successful vaccination programs combine publicity and education for health-care workers and other potential vaccine recipients, a plan for identifying persons at high risk, use of reminder/recall systems, and efforts to remove administrative and financial barriers that prevent persons from receiving the vaccine, including use of standing orders programs (19,225). Using standing orders programs is recommended for long-term-care facilities (e.g., nursing homes and skilled nursing facilities), hospitals, and home health agencies to ensure the administration of recommended vaccinations for adults (226). Standing orders programs for both influenza and pneumococcal vaccination should be conducted under the supervision of a licensed practitioner according to a physician-approved facility or agency policy by health-care personnel trained to screen patients for contraindications to vaccination, administer vaccine, and monitor for adverse events. The Centers for Medicare and Medicaid Services (CMS) has removed the physician signature requirement for the administration of influenza and pneumococcal vaccines to Medicare and Medicaid patients in hospitals, long-term-care facilities, and home health agencies (226). To the extent allowed by local and state law, these facilities and agencies may implement standing orders for influenza and pneumococcal vaccination of Medicare- and Medicaid-eligible patients. Other settings (e.g., outpatient facilities, managed care organizations, assisted living facilities, correctional facilities, pharmacies, and adult workplaces) are encouraged to introduce standing orders programs as well (20). Persons for whom influenza vaccine is recommended can be identified and vaccinated in the settings described in the following sections.

# Outpatient Facilities Providing Ongoing Care

Staff in facilities providing ongoing medical care (e.g., physicians' offices, public health clinics, employee health clinics, hemodialysis centers, hospital specialty-care clinics, and outpatient rehabilitation programs) should identify and label the medical records of patients who should receive vaccination. Vaccine should be offered during visits beginning in September and throughout the influenza season. The offer of vaccination and its receipt or refusal should be documented in the medical record. Patients for whom vaccination is recommended and who do not have regularly scheduled visits during the fall should be reminded by mail, telephone, or other means of the need for vaccination.

# Outpatient Facilities Providing Episodic or Acute Care

Beginning each September, acute health-care facilities (e.g., emergency rooms and walk-in clinics) should offer vaccinations to persons for whom vaccination is recommended or provide written information regarding why, where, and how to obtain the vaccine. This written information should be available in languages appropriate for the populations served by the facility.
# Nursing Homes and Other Residential Long-Term-Care Facilities

During October and November each year, vaccination should be routinely provided to all residents of chronic-care facilities with the concurrence of attending physicians. Consent for vaccination should be obtained from the resident or a family member at the time of admission to the facility or anytime afterwards. All residents should be vaccinated at one time, preceding the influenza season. Residents admitted through March after completion of the facility's vaccination program should be vaccinated at the time of admission.

# Acute-Care Hospitals

Persons of all ages (including children) with high-risk conditions and persons aged ≥50 years who are hospitalized at any time during September-March should be offered and strongly encouraged to receive influenza vaccine before they are discharged. In one study, 39%-46% of adult patients hospitalized during the winter with influenza-related diagnoses had been hospitalized during the preceding autumn (227). Thus, the hospital serves as a setting in which persons at increased risk for subsequent hospitalization can be identified and vaccinated. However, vaccination of persons at high risk during or after their hospitalizations is often not done. In a study of hospitalized Medicare patients, only 31.6% were vaccinated before admission, 1.9% during admission, and 10.6% after admission (228). Using standing orders in hospitals increases vaccination rates among hospitalized persons (229).

# Visiting Nurses and Others Providing Home Care to Persons at High Risk

Beginning in September, nursing-care plans should identify patients for whom vaccination is recommended, and vaccine should be administered in the home, if necessary. Caregivers and other persons in the household (including children) should be referred for vaccination.

# Other Facilities Providing Services to Persons Aged ≥50 Years

Beginning in October, such facilities as assisted living housing, retirement communities, and recreation centers should offer unvaccinated residents and attendees vaccination on-site before the influenza season. Staff education should emphasize the need for influenza vaccine.

# Health-Care Personnel

Beginning in October each year, health-care facilities should offer influenza vaccinations to all personnel, including night and weekend staff. Particular emphasis should be placed on providing vaccinations to persons who care for members of groups at high risk. Efforts should be made to educate health-care personnel regarding the benefits of vaccination and the potential health consequences of influenza illness for themselves and their patients. All health-care personnel should be provided convenient access to influenza vaccine at the work site, free of charge, as part of employee health programs (118).

# Influenza Vaccine Supply

During the 2002-03 season, approximately 95 million doses of influenza vaccine were produced, but 12 million doses went unused and had to be destroyed. During the 2003-04 season, approximately 87 million doses of vaccine were produced. During that season, shortages of vaccine were noted in multiple regions of the United States after an unprecedented demand for vaccine lasted longer into the season than usual, caused in part by increased media attention to influenza. On the basis of early projections, manufacturers anticipate production of 90-100 million doses of vaccine for the 2004-05 season.
Influenza vaccine delivery delays or vaccine shortages remain possible in part because of the inherent critical time constraints in manufacturing the vaccine, given the annual updating of the influenza vaccine strains. Steps being taken to address possible future delays or vaccine shortages include identification and implementation of ways to expand the influenza vaccine supply and improvement of targeted delivery of vaccine to groups at high risk when delays or shortages are expected.

# Future Directions

ACIP plans to review new vaccination strategies for improving prevention and control of influenza, including the possibility of expanding recommendations for use of influenza vaccines. In addition, strategies for regularly monitoring vaccine effectiveness will be reviewed.

# Recommendations for Using Antiviral Agents for Influenza

Antiviral drugs for influenza are an adjunct to influenza vaccine for controlling and preventing influenza. However, these agents are not a substitute for vaccination. Four licensed influenza antiviral agents are available in the United States: amantadine, rimantadine, zanamivir, and oseltamivir. Amantadine and rimantadine are chemically related antiviral drugs known as adamantanes with activity against influenza A viruses but not influenza B viruses. Amantadine was approved in 1966 for chemoprophylaxis of influenza A (H2N2) infection and was later approved in 1976 for treatment and chemoprophylaxis of influenza type A virus infections among adults and children aged ≥1 year. Rimantadine was approved in 1993 for treatment and chemoprophylaxis of influenza A infection among adults and prophylaxis among children. Although rimantadine is approved only for chemoprophylaxis of influenza A infection among children, certain specialists in the management of influenza consider it appropriate for treatment of influenza A among children (230). Zanamivir and oseltamivir are chemically related antiviral drugs known as neuraminidase inhibitors that have activity against both influenza A and B viruses. Both zanamivir and oseltamivir were approved in 1999 for treating uncomplicated influenza infections. Zanamivir is approved for treating persons aged ≥7 years, and oseltamivir is approved for treatment for persons aged ≥1 year. In 2000, oseltamivir was approved for chemoprophylaxis of influenza among persons aged ≥13 years. The four drugs differ in pharmacokinetics, side effects, routes of administration, approved age groups, dosages, and costs. An overview of the indications, use, administration, and known primary side effects of these medications is presented in the following sections. Information contained in this report might not represent FDA approval or approved labeling for the antiviral agents described. Package inserts should be consulted for additional information.

# Role of Laboratory Diagnosis

Appropriate treatment of patients with respiratory illness depends on accurate and timely diagnosis. Early diagnosis of influenza can reduce the inappropriate use of antibiotics and provide the option of using antiviral therapy. However, because certain bacterial infections can produce symptoms similar to influenza, bacterial infections should be considered and appropriately treated, if suspected. In addition, bacterial infections can occur as a complication of influenza. Influenza surveillance information and diagnostic testing can aid clinical judgment and help guide treatment decisions.
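One reason surveillance matters for test interpretation, as discussed in the paragraphs that follow, is that the predictive value of a rapid influenza test shifts with how much influenza is circulating. The sketch below illustrates this with standard predictive-value arithmetic; the sensitivity, specificity, and prevalence figures are hypothetical placeholders, not characteristics of any specific licensed assay.

```python
def predictive_values(sensitivity: float, specificity: float, prevalence: float):
    """Positive and negative predictive values for a given pretest probability."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    false_neg = (1 - sensitivity) * prevalence
    true_neg = specificity * (1 - prevalence)
    ppv = true_pos / (true_pos + false_pos)
    npv = true_neg / (true_neg + false_neg)
    return ppv, npv

# The same hypothetical test, outside versus during peak influenza activity:
print(predictive_values(0.70, 0.95, prevalence=0.02))  # PPV ~0.22: most positives false
print(predictive_values(0.70, 0.95, prevalence=0.30))  # PPV ~0.86: positives far more reliable
```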
The accuracy of clinical diagnosis of influenza on the basis of symptoms alone is limited because symptoms from illness caused by other pathogens can overlap considerably with influenza (29,33,34). Influenza surveillance by state and local health departments and CDC can provide information regarding the presence of influenza viruses in the community. Surveillance can also identify the predominant circulating types, subtypes, and strains of influenza. Diagnostic tests available for influenza include viral culture, serology, rapid antigen testing, polymerase chain reaction (PCR), and immunofluorescence (24). Sensitivity and specificity of any test for influenza might vary by the laboratory that performs the test, the type of test used, and the type of specimen tested. Among respiratory specimens for viral isolation or rapid detection, nasopharyngeal specimens are typically more effective than throat swab specimens (231). As with any diagnostic test, results should be evaluated in the context of other clinical information available to health-care providers. Commercial rapid diagnostic tests are available that can be used by laboratories in outpatient settings to detect influenza viruses within 30 minutes (24,232). These rapid tests differ in the types of influenza viruses they can detect and whether they can distinguish between influenza types. Different tests can detect 1) only influenza A viruses; 2) both influenza A and B viruses, but not distinguish between the two types; or 3) both influenza A and B and distinguish between the two. The types of specimens acceptable for use (i.e., throat swab, nasal wash, or nasal swab) also vary by test. The specificity and, in particular, the sensitivity of rapid tests are lower than for viral culture and vary by test (233,234). Because of the lower sensitivity of the rapid tests, physicians should consider confirming negative tests with viral culture or other means. Further, when interpreting results of a rapid influenza test, physicians should consider the positive and negative predictive values of the test in the context of the level of influenza activity in their community. Package inserts and the laboratory performing the test should be consulted for more details regarding use of rapid diagnostic tests. Additional information concerning diagnostic testing is located at http://www.cdc.gov/flu/professionals/labdiagnosis.htm. Despite the availability of rapid diagnostic tests, collecting clinical specimens for viral culture is critical, because only culture isolates can provide specific information regarding circulating influenza subtypes and strains. This information is needed to compare current circulating influenza strains with vaccine strains, to guide decisions regarding influenza treatment and chemoprophylaxis, and to formulate vaccine for the coming year. Virus isolates also are needed to monitor the emergence of antiviral resistance and the emergence of novel influenza A subtypes that might pose a pandemic threat.

# Indications for Use

Treatment

When administered within 2 days of illness onset to otherwise healthy adults, amantadine and rimantadine can reduce the duration of uncomplicated influenza A illness, and zanamivir and oseltamivir can reduce the duration of uncomplicated influenza A and B illness by approximately 1 day, compared with placebo (72,235-249).
More clinical data are available concerning the efficacy of zanamivir and oseltamivir for treatment of influenza A infection than for treatment of influenza B infection (250-266). However, in vitro data and studies of treatment among mice and ferrets (267-274), in addition to clinical studies, have documented that zanamivir and oseltamivir have activity against influenza B viruses (241,245-247,275,276). Data are limited regarding the effectiveness of the four antiviral agents in preventing serious influenza-related complications (e.g., bacterial or viral pneumonia or exacerbation of chronic diseases). Evidence for the effectiveness of these four antiviral drugs is principally based on studies of patients with uncomplicated influenza (277). Data are limited and inconclusive concerning the effectiveness of amantadine, rimantadine, zanamivir, and oseltamivir for treatment of influenza among persons at high risk for serious complications of influenza (27,235,237,238,240,241,248,250-254). One study assessing oseltamivir treatment primarily among adults reported a reduction in complications necessitating antibiotic therapy compared with placebo (255). Fewer studies of the efficacy of influenza antivirals have been conducted among pediatric populations (235,238,244,245,251,256,257). One study of oseltamivir treatment documented a decreased incidence of otitis media among children (245). Inadequate data exist regarding the safety and efficacy of any of the influenza antiviral drugs for use among children aged <1 year (234). To reduce the emergence of antiviral drug-resistant viruses, amantadine or rimantadine therapy for persons with influenza A illness should be discontinued as soon as clinically warranted, typically after 3-5 days of treatment or within 24-48 hours after the disappearance of signs and symptoms. The recommended duration of treatment with either zanamivir or oseltamivir is 5 days.

# Chemoprophylaxis

Chemoprophylactic drugs are not a substitute for vaccination, although they are critical adjuncts in preventing and controlling influenza. Both amantadine and rimantadine are indicated for chemoprophylaxis of influenza A infection, but not influenza B. Both drugs are approximately 70%-90% effective in preventing illness from influenza A infection (72,235,251). When used as prophylaxis, these antiviral agents can prevent illness while permitting subclinical infection and development of protective antibody against circulating influenza viruses. Therefore, certain persons who take these drugs will develop protective immune responses to circulating influenza viruses. Amantadine and rimantadine do not interfere with the antibody response to the vaccine (235). Both drugs have been studied extensively among nursing home populations as a component of influenza outbreak-control programs, which can limit the spread of influenza within chronic-care institutions (235,250,258-260). Among the neuraminidase inhibitor antivirals, zanamivir and oseltamivir, only oseltamivir has been approved for prophylaxis, but community studies of healthy adults indicate that both drugs are similarly effective in preventing febrile, laboratory-confirmed influenza illness (efficacy: zanamivir, 84%; oseltamivir, 82%) (261,262,278).
Both antiviral agents have also been reported to prevent influenza illness among persons administered chemoprophylaxis after a household member was diagnosed with influenza (263,275,278). Experience with prophylactic use of these agents in institutional settings or among patients with chronic medical conditions is limited in comparison with the adamantanes (247,253,254,264-266). One 6-week study of oseltamivir prophylaxis among nursing home residents reported a 92% reduction in influenza illness (247,279). Use of zanamivir has not been reported to impair the immunologic response to influenza vaccine (246,280). Data are not available regarding the efficacy of any of the four antiviral agents in preventing influenza among severely immunocompromised persons. When determining the timing and duration for administering influenza antiviral medications for prophylaxis, factors related to cost, compliance, and potential side effects should be considered. To be maximally effective as prophylaxis, the drug must be taken each day for the duration of influenza activity in the community. However, to be most cost-effective, one study of amantadine or rimantadine prophylaxis reported that the drugs should be taken only during the period of peak influenza activity in a community (281).

Persons at High Risk Who Are Vaccinated After Influenza Activity Has Begun. Persons at high risk for complications of influenza still can be vaccinated after an outbreak of influenza has begun in a community. However, development of antibodies in adults after vaccination takes approximately 2 weeks (222,223). When influenza vaccine is administered while influenza viruses are circulating, chemoprophylaxis should be considered for persons at high risk during the time from vaccination until immunity has developed. Children aged <9 years who receive influenza vaccine for the first time can require 6 weeks of prophylaxis (i.e., prophylaxis for 4 weeks after the first dose of vaccine and an additional 2 weeks of prophylaxis after the second dose).

Persons Who Provide Care to Those at High Risk. To reduce the spread of virus to persons at high risk during community or institutional outbreaks, chemoprophylaxis during peak influenza activity can be considered for unvaccinated persons who have frequent contact with persons at high risk. Persons with frequent contact include employees of hospitals, clinics, and chronic-care facilities, household members, visiting nurses, and volunteer workers. If an outbreak is caused by a variant strain of influenza that might not be controlled by the vaccine, chemoprophylaxis should be considered for all such persons, regardless of their vaccination status.

Persons Who Have Immune Deficiencies. Chemoprophylaxis can be considered for persons at high risk who are expected to have an inadequate antibody response to influenza vaccine. This category includes persons infected with HIV, chiefly those with advanced HIV disease. No published data are available concerning possible efficacy of chemoprophylaxis among persons with HIV infection or interactions with other drugs used to manage HIV infection. Such patients should be monitored closely if chemoprophylaxis is administered.

Other Persons. Chemoprophylaxis throughout the influenza season or during peak influenza activity might be appropriate for persons at high risk who should not be vaccinated. Chemoprophylaxis can also be offered to persons who wish to avoid influenza illness.
# Control of Influenza Outbreaks in Institutions

Using antiviral drugs for treatment and prophylaxis of influenza is a key component of influenza outbreak control in institutions. In addition to antiviral medications, other outbreak-control measures include instituting droplet precautions and establishing cohorts of patients with confirmed or suspected influenza, re-offering influenza vaccinations to unvaccinated staff and patients, restricting staff movement between wards or buildings, and restricting contact between ill staff or visitors and patients (282)(283)(284) (for additional information regarding outbreak control in specific settings, see Additional Information Regarding Influenza Infection Control Among Specific Populations). The majority of published reports concerning use of antiviral agents to control influenza outbreaks in institutions are based on studies of influenza A outbreaks among nursing home populations in which amantadine or rimantadine was used (235,250,(258)(259)(260)281). Less information is available concerning use of neuraminidase inhibitors in influenza A or B institutional outbreaks (253,254,266,279,285).

When confirmed or suspected outbreaks of influenza occur in institutions that house persons at high risk, chemoprophylaxis should be started as early as possible to reduce the spread of the virus. In these situations, having preapproved orders from physicians or plans to obtain orders for antiviral medications on short notice can substantially expedite administration of antiviral medications. When outbreaks occur in institutions, chemoprophylaxis should be administered to all residents, regardless of whether they received influenza vaccinations during the previous fall, and should continue for a minimum of 2 weeks. If surveillance indicates that new cases continue to occur, chemoprophylaxis should be continued until approximately 1 week after the end of the outbreak. The dosage for each resident should be determined individually. Chemoprophylaxis also can be offered to unvaccinated staff who provide care to persons at high risk. Prophylaxis should be considered for all employees, regardless of their vaccination status, if the outbreak is caused by a variant strain of influenza that is not well-matched by the vaccine.

In addition to nursing homes, chemoprophylaxis also can be considered for controlling influenza outbreaks in other closed or semiclosed settings (e.g., dormitories or other settings where persons live in close proximity). For example, chemoprophylaxis with rimantadine has been used successfully to control an influenza A outbreak aboard a large cruise ship (167). To limit the potential transmission of drug-resistant virus during outbreaks in institutions, whether in chronic- or acute-care settings or other closed settings, measures should be taken to reduce contact as much as possible between persons taking antiviral drugs for treatment and other persons, including those taking chemoprophylaxis (see Antiviral Drug-Resistant Strains of Influenza).

# Dosage

Dosage recommendations vary by age group and medical conditions (Table 7).

# Children

Amantadine. Use of amantadine among children aged <1 year has not been adequately evaluated. The FDA-approved dosage for children aged 1-9 years for treatment and prophylaxis is 4.4-8.8 mg/kg body weight/day, not to exceed 150 mg/day. Although further studies are needed to determine the optimal dosage for children aged 1-9 years, physicians should consider prescribing only 5 mg/kg body weight/day (not to exceed 150 mg/day) to reduce the risk for toxicity. The approved dosage for children aged ≥10 years is 200 mg/day (100 mg twice a day); however, for children weighing <40 kg, prescribing 5 mg/kg body weight/day, regardless of age, is advisable (252).
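The pediatric rule just described (5 mg/kg body weight/day, not to exceed 150 mg/day, with the flat 200-mg/day adult dosage reserved for children aged ≥10 years who weigh ≥40 kg) is the same rule recommended for rimantadine below, and it reduces to a one-line calculation. The sketch that follows illustrates that arithmetic only; it is not a prescribing tool, and the function name is hypothetical.

```python
# Illustrative sketch of the pediatric amantadine/rimantadine dosing
# arithmetic described above (5 mg/kg/day, capped at 150 mg/day).
# Not a prescribing tool; consult Table 7 and the package insert.
def adamantane_pediatric_daily_dose_mg(age_years, weight_kg):
    if age_years < 1:
        raise ValueError("use among children aged <1 year has not been evaluated")
    if age_years >= 10 and weight_kg >= 40:
        return 200  # 100 mg twice a day
    # children aged 1-9 years, and older children weighing <40 kg
    return min(5 * weight_kg, 150)

print(adamantane_pediatric_daily_dose_mg(6, 20))   # 100 mg/day
print(adamantane_pediatric_daily_dose_mg(8, 35))   # 150 mg/day (capped)
print(adamantane_pediatric_daily_dose_mg(12, 45))  # 200 mg/day
```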
Rimantadine. Rimantadine is approved for prophylaxis among children aged ≥1 year and for treatment and prophylaxis among adults. Although rimantadine is approved only for prophylaxis of infection among children, certain specialists in the management of influenza consider it appropriate for treatment among children (230). Use of rimantadine among children aged <1 year has not been adequately evaluated. Rimantadine should be administered in 1 or 2 divided doses at a dosage of 5 mg/kg body weight/day, not to exceed 150 mg/day, for children aged 1-9 years. The approved dosage for children aged ≥10 years is 200 mg/day (100 mg twice a day); however, for children weighing <40 kg, prescribing 5 mg/kg body weight/day, regardless of age, is recommended (286).

Zanamivir. Zanamivir is approved for treatment among children aged ≥7 years. The recommended dosage of zanamivir for treatment of influenza is two inhalations (one 5-mg blister per inhalation for a total dose of 10 mg) twice daily (approximately 12 hours apart) (246).

Oseltamivir. Oseltamivir is approved for treatment among persons aged ≥1 year and for chemoprophylaxis among persons aged ≥13 years. Recommended treatment dosages for children vary by the weight of the child: for children who weigh ≤15 kg, the dosage is 30 mg twice a day; for children weighing >15-23 kg, the dosage is 45 mg twice a day; for those weighing >23-40 kg, the dosage is 60 mg twice a day; and for children weighing >40 kg, the dosage is 75 mg twice a day. The treatment dosage for persons aged ≥13 years is 75 mg twice daily. For persons aged ≥13 years, the recommended dose for prophylaxis is 75 mg once a day (247).

# Persons Aged ≥65 Years

Amantadine. The daily dosage of amantadine for persons aged ≥65 years should not exceed 100 mg for prophylaxis or treatment, because renal function declines with increasing age. For certain older persons, the dose should be further reduced.

Rimantadine. Among older persons, the incidence and severity of central nervous system (CNS) side effects are substantially lower among those taking rimantadine at a dosage of 100 mg/day than among those taking amantadine at dosages adjusted for estimated renal clearance (287). However, chronically ill older persons have had a higher incidence of CNS and gastrointestinal symptoms and higher serum rimantadine concentrations when taking rimantadine at a dosage of 200 mg/day (235). For prophylaxis among persons aged ≥65 years, the recommended dosage is 100 mg/day. For treatment of older persons in the community, a reduction in dosage to 100 mg/day should be considered if they experience side effects when taking a dosage of 200 mg/day. For treatment of older nursing home residents, the dosage of rimantadine should be reduced to 100 mg/day (286).

Zanamivir and Oseltamivir. No reduction in dosage is recommended on the basis of age alone.

# Persons with Impaired Renal Function

Amantadine. A reduction in dosage is recommended for patients with creatinine clearance <50 mL/min/1.73 m². Guidelines for amantadine dosage on the basis of creatinine clearance are located in the package insert.
Because recommended dosages on the basis of creatinine clearance might provide only an approximation of the optimal dose for a given patient, such persons should be observed carefully for adverse reactions. If necessary, further reduction in the dose or discontinuation of the drug might be indicated because of side effects. Hemodialysis contributes minimally to amantadine clearance (288,289).

Rimantadine. A reduction in dosage to 100 mg/day is recommended for persons with creatinine clearance <10 mL/min. Because of the potential for accumulation of rimantadine and its metabolites, patients with any degree of renal insufficiency, including older persons, should be monitored for adverse effects, and either the dosage should be reduced or the drug should be discontinued, if necessary. Hemodialysis contributes minimally to drug clearance (290).

Zanamivir. Limited data are available regarding the safety and efficacy of zanamivir for patients with impaired renal function. Among patients with renal failure who were administered a single intravenous dose of zanamivir, decreases in renal clearance, increases in half-life, and increased systemic exposure to zanamivir were observed (246,291). However, a limited number of healthy volunteers who were administered high doses of intravenous zanamivir tolerated systemic levels of zanamivir that were substantially higher than those resulting from administration of zanamivir by oral inhalation at the recommended dose (292,293). On the basis of these considerations, the manufacturer recommends no dose adjustment for inhaled zanamivir for a 5-day course of treatment for patients with either mild to moderate or severe impairment in renal function (246).

Oseltamivir. Serum concentrations of oseltamivir carboxylate (GS4071), the active metabolite of oseltamivir, increase with declining renal function (247,294). For patients with creatinine clearance of 10-30 mL/min (247), a reduction in the treatment dosage of oseltamivir to 75 mg once daily and in the prophylaxis dosage to 75 mg every other day is recommended. No treatment or prophylaxis dosing recommendations are available for patients undergoing routine renal dialysis treatment.

# Persons with Liver Disease

Amantadine. No increase in adverse reactions to amantadine has been observed among persons with liver disease. Rare instances of reversible elevation of liver enzymes among patients receiving amantadine have been reported, although a specific relation between the drug and such changes has not been established (295).

Rimantadine. A reduction in dosage to 100 mg/day is recommended for persons with severe hepatic dysfunction.

Zanamivir and Oseltamivir. Neither of these medications has been studied among persons with hepatic dysfunction.

# Persons with Seizure Disorders

Amantadine. An increased incidence of seizures has been reported among patients with a history of seizure disorders who have received amantadine (296). Patients with seizure disorders should be observed closely for possible increased seizure activity when taking amantadine.

Rimantadine. Seizures (or seizure-like activity) have been reported among persons with a history of seizures who were not receiving anticonvulsant medication while taking rimantadine (297). The extent to which rimantadine might increase the incidence of seizures among persons with seizure disorders has not been adequately evaluated.
Zanamivir and Oseltamivir. Seizure events have been reported during postmarketing use of zanamivir and oseltamivir, although no epidemiologic studies have reported any increased risk for seizures with either zanamivir or oseltamivir use.

# Route

Amantadine, rimantadine, and oseltamivir are administered orally. Amantadine and rimantadine are available in tablet or syrup form, and oseltamivir is available in capsule or oral suspension form (298,299). Zanamivir is available as a dry powder that is self-administered via oral inhalation by using a plastic device included in the package with the medication. Patients will benefit from instruction and demonstration of correct use of this device (246).

# Pharmacokinetics

Amantadine. Approximately 90% of amantadine is excreted unchanged in the urine by glomerular filtration and tubular secretion (258,(300)(301)(302)(303). Thus, renal clearance of amantadine is reduced substantially among persons with renal insufficiency, and dosages might need to be decreased (see Dosage) (Table 7).

Rimantadine. Approximately 75% of rimantadine is metabolized by the liver (251). The safety and pharmacokinetics of rimantadine among persons with liver disease have been evaluated only after single-dose administration (251,304). In a study of persons with chronic liver disease (the majority with stabilized cirrhosis), no alterations in liver function were observed after a single dose. However, for persons with severe liver dysfunction, the apparent clearance of rimantadine was 50% lower than that reported for persons without liver disease (286). Rimantadine and its metabolites are excreted by the kidneys. The safety and pharmacokinetics of rimantadine among patients with renal insufficiency have been evaluated only after single-dose administration (251,290). Further studies are needed to determine multiple-dose pharmacokinetics and the most appropriate dosages for patients with renal insufficiency. In a single-dose study of patients with anuric renal failure, the apparent clearance of rimantadine was approximately 40% lower, and the elimination half-life was approximately 1.6-fold greater, than that among healthy persons of the same age (290). Hemodialysis did not contribute to drug clearance. In studies of persons with less severe renal disease, drug clearance was also reduced, and plasma concentrations were higher than those among control patients without renal disease who were the same weight, age, and sex (286,305).

Zanamivir. In studies of healthy volunteers, approximately 7%-21% of the orally inhaled zanamivir dose reached the lungs, and 70%-87% was deposited in the oropharynx (246,306). Approximately 4%-17% of the total amount of orally inhaled zanamivir is systemically absorbed. Systemically absorbed zanamivir has a half-life of 2.5-5.1 hours and is excreted unchanged in the urine. Unabsorbed drug is excreted in the feces (246,293).

Oseltamivir. Approximately 80% of orally administered oseltamivir is absorbed systemically (294). Absorbed oseltamivir is metabolized to oseltamivir carboxylate, the active neuraminidase inhibitor, primarily by hepatic esterases. Oseltamivir carboxylate has a half-life of 6-10 hours and is excreted in the urine by glomerular filtration and tubular secretion via the anionic pathway (247,307). Unmetabolized oseltamivir also is excreted in the urine by glomerular filtration and tubular secretion (308).
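The oseltamivir rules in the Dosage section above (weight-banded pediatric treatment doses and the renal adjustment for creatinine clearance of 10-30 mL/min) can likewise be expressed as a short decision procedure. The sketch below is illustrative only and is not a clinical dosing tool; the function name and interface are hypothetical, and Table 7 and the package insert remain the authoritative sources.

```python
# Illustrative sketch of the oseltamivir treatment dosing rules from the
# Dosage section above; not a clinical tool. See Table 7 and the
# package insert for authoritative dosing.
def oseltamivir_treatment_dose(age_years, weight_kg, creatinine_clearance=None):
    if age_years < 1:
        return "not approved for children aged <1 year"
    if creatinine_clearance is not None and 10 <= creatinine_clearance <= 30:
        # reduced treatment dosage for creatinine clearance of 10-30 mL/min
        return "75 mg once daily"
    if age_years >= 13:
        return "75 mg twice daily"
    if weight_kg <= 15:
        return "30 mg twice daily"
    if weight_kg <= 23:
        return "45 mg twice daily"   # >15-23 kg
    if weight_kg <= 40:
        return "60 mg twice daily"   # >23-40 kg
    return "75 mg twice daily"       # >40 kg

print(oseltamivir_treatment_dose(age_years=8, weight_kg=28))
# 60 mg twice daily
print(oseltamivir_treatment_dose(age_years=70, weight_kg=65, creatinine_clearance=25))
# 75 mg once daily
```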
# Side Effects and Adverse Reactions

When considering use of influenza antiviral medications (i.e., choice of antiviral drug, dosage, and duration of therapy), clinicians must consider the patient's age, weight, and renal function (Table 7); presence of other medical conditions; indications for use (i.e., prophylaxis or therapy); and the potential for interaction with other medications.

# Amantadine and Rimantadine

Both amantadine and rimantadine can cause CNS and gastrointestinal side effects when administered to young, healthy adults at equivalent dosages of 200 mg/day. However, the incidence of CNS side effects (e.g., nervousness, anxiety, insomnia, difficulty concentrating, and lightheadedness) is higher among persons taking amantadine than among those taking rimantadine (308). In a 6-week study of prophylaxis among healthy adults, approximately 6% of participants taking rimantadine at a dosage of 200 mg/day experienced one or more CNS symptoms, compared with approximately 13% of those taking the same dosage of amantadine and 4% of those taking placebo (308). A study of older persons also demonstrated fewer CNS side effects associated with rimantadine compared with amantadine (287). Gastrointestinal side effects (e.g., nausea and anorexia) occur among approximately 1%-3% of persons taking either drug, compared with 1% of persons receiving the placebo (308).

Side effects associated with amantadine and rimantadine are usually mild and cease soon after discontinuing the drug. Side effects can diminish or disappear after the first week, despite continued drug ingestion. However, serious side effects have been observed (e.g., marked behavioral changes, delirium, hallucinations, agitation, and seizures) (288,296). These more severe side effects have been associated with high plasma drug concentrations and have been observed most often among persons who have renal insufficiency, seizure disorders, or certain psychiatric disorders and among older persons who have been taking amantadine as prophylaxis at a dosage of 200 mg/day (258). Clinical observations and studies have indicated that lowering the dosage of amantadine among these persons reduces the incidence and severity of such side effects (Table 7). In acute overdosage of amantadine, CNS, renal, respiratory, and cardiac toxicity, including arrhythmias, has been reported (288). Because rimantadine has been marketed for a shorter period than amantadine, its safety among certain patient populations (e.g., chronically ill and older persons) has been evaluated less frequently. Because amantadine has anticholinergic effects and might cause mydriasis, it should not be used among patients with untreated angle-closure glaucoma (288).

# Zanamivir

In a study of zanamivir treatment of influenza-like illness among persons with asthma or chronic obstructive pulmonary disease, in which study medication was administered after use of a β2-agonist, 13% of patients receiving zanamivir and 14% of patients who received placebo (inhaled powdered lactose vehicle) experienced a >20% decline in forced expiratory volume in 1 second (FEV1) after treatment (246,248). However, in a phase-I study of persons with mild or moderate asthma who did not have influenza-like illness, 1 of 13 patients experienced bronchospasm after administration of zanamivir (246). In addition, during postmarketing surveillance, cases of respiratory function deterioration after inhalation of zanamivir have been reported.
Certain patients had underlying airways disease (e.g., asthma or chronic obstructive pulmonary disease). Because of the risk for serious adverse events and because efficacy has not been demonstrated among this population, zanamivir is not recommended for treatment of patients with underlying airways disease (246). If physicians decide to prescribe zanamivir to patients with underlying chronic respiratory disease after carefully considering potential risks and benefits, the drug should be used with caution under conditions of appropriate monitoring and supportive care, including the availability of short-acting bronchodilators (277). Patients with asthma or chronic obstructive pulmonary disease who use zanamivir are advised to 1) have a fast-acting inhaled bronchodilator available when inhaling zanamivir and 2) stop using zanamivir and contact their physician if they experience difficulty breathing (246). No definitive evidence is available regarding the safety or efficacy of zanamivir for persons with underlying respiratory or cardiac disease or for persons with complications of acute influenza (277). Allergic reactions, including oropharyngeal or facial edema, have also been reported during postmarketing surveillance (246,253).

In clinical treatment studies of persons with uncomplicated influenza, the frequencies of adverse events were similar for persons receiving inhaled zanamivir and those receiving placebo (i.e., inhaled lactose vehicle alone) (236)(237)(238)(239)(240)(241)253). The most common adverse events reported by both groups were diarrhea; nausea; sinusitis; nasal signs and symptoms; bronchitis; cough; headache; dizziness; and ear, nose, and throat infections. Each of these symptoms was reported by <5% of persons in the clinical treatment studies combined (246).

# Oseltamivir

Nausea and vomiting were reported more frequently among adults receiving oseltamivir for treatment (nausea without vomiting, approximately 10%; vomiting, approximately 9%) than among persons receiving placebo (nausea without vomiting, approximately 6%; vomiting, approximately 3%) (242,243,247,309). Among children treated with oseltamivir, 14.3% had vomiting, compared with 8.5% of placebo recipients. Overall, 1% of children discontinued the drug secondary to this side effect (245), whereas a limited number of adults who were enrolled in clinical treatment trials of oseltamivir discontinued treatment because of these symptoms (247). Similar types and rates of adverse events were reported in studies of oseltamivir prophylaxis (247). Nausea and vomiting might be less severe if oseltamivir is taken with food (247,309).

# Use During Pregnancy

No clinical studies have been conducted regarding the safety or efficacy of amantadine, rimantadine, zanamivir, or oseltamivir for pregnant women; only two cases of amantadine use for severe influenza illness during the third trimester have been reported (134,135). However, both amantadine and rimantadine have been demonstrated in animal studies to be teratogenic and embryotoxic when administered at substantially high doses (286,288). Because of the unknown effects of influenza antiviral drugs on pregnant women and their fetuses, these four drugs should be used during pregnancy only if the potential benefit justifies the potential risk to the embryo or fetus (see manufacturers' package inserts) (246,247,286,288).

# Drug Interactions

Careful observation is advised when amantadine is administered concurrently with drugs that affect the CNS, including CNS stimulants.
Concomitant administration of antihistamines or anticholinergic drugs can increase the incidence of adverse CNS reactions (235). No clinically substantial interactions between rimantadine and other drugs have been identified.

Clinical data are limited regarding drug interactions with zanamivir. However, no known drug interactions have been reported, and no clinically critical drug interactions have been predicted on the basis of in vitro data and data from studies using rats (246,310). Limited clinical data are available regarding drug interactions with oseltamivir. Because oseltamivir and oseltamivir carboxylate are excreted in the urine by glomerular filtration and tubular secretion via the anionic pathway, a potential exists for interaction with other agents excreted by this pathway. For example, coadministration of oseltamivir and probenecid resulted in reduced clearance of oseltamivir carboxylate by approximately 50% and a corresponding approximate twofold increase in the plasma levels of oseltamivir carboxylate (247,307). No published data are available concerning the safety or efficacy of using combinations of any of these four influenza antiviral drugs. For more detailed information concerning potential drug interactions for any of these influenza antiviral drugs, package inserts should be consulted.

# Antiviral Drug-Resistant Strains of Influenza

Amantadine-resistant viruses are cross-resistant to rimantadine and vice versa (311). Drug-resistant viruses can appear in approximately one third of patients when either amantadine or rimantadine is used for therapy (257,312,313). During the course of amantadine or rimantadine therapy, resistant influenza strains can replace susceptible strains within 2-3 days of starting therapy (312,314). Resistant viruses have been isolated from persons who live at home or in an institution where other residents are taking or have recently taken amantadine or rimantadine as therapy (315,316); however, the frequency with which resistant viruses are transmitted and their effect on efforts to control influenza are unknown. Amantadine- and rimantadine-resistant viruses are not more virulent or transmissible than susceptible viruses (317). The screening of epidemic strains of influenza A has rarely detected amantadine- and rimantadine-resistant viruses (312,318,319). Persons who have influenza A infection and who are treated with either amantadine or rimantadine can shed susceptible viruses early in the course of treatment and later shed drug-resistant viruses, including after 5-7 days of therapy (257). Such persons can benefit from therapy even when resistant viruses emerge.

Resistance to zanamivir and oseltamivir can be induced in influenza A and B viruses in vitro (320)(321)(322)(323)(324)(325)(326)(327), but induction of resistance requires multiple passages in cell culture. By contrast, resistance to amantadine and rimantadine in vitro can be induced with fewer passages in cell culture (328,329). Development of viral resistance to zanamivir and oseltamivir during treatment has been identified but does not appear to be frequent (247,(330)(331)(332)(333). In clinical treatment studies using oseltamivir, 1.3% of posttreatment isolates from patients aged >13 years and 8.6% among patients aged 1-12 years had decreased susceptibility to oseltamivir (247).
No isolates with reduced susceptibility to zanamivir have been reported from clinical trials, although the number of posttreatment isolates tested is limited (334), and the risk for emergence of zanamivir-resistant isolates cannot be quantified (246). Only one clinical isolate with reduced susceptibility to zanamivir, obtained from an immunocompromised child on prolonged therapy, has been reported (331). Available diagnostic tests are not optimal for detecting clinical resistance to the neuraminidase inhibitor antiviral drugs, and additional tests are being developed (334,335). Postmarketing surveillance for neuraminidase inhibitor-resistant influenza viruses is being conducted (336).

# Sources of Information Regarding Influenza and Its Surveillance

Information regarding influenza surveillance, prevention, detection, and control is available at http://www.cdc.gov/flu/weekly/fluactivity.htm. Surveillance information is available through the CDC Voice Information System (influenza update) at 888-232-3228 or the CDC Fax Information Service at 888-232-3299. During October-May, surveillance information is updated at least every other week. In addition, periodic updates regarding influenza are published in the MMWR Weekly (http://www.cdc.gov/mmwr). Additional information regarding influenza vaccine can be obtained by calling the CDC Immunization Hotline at 800-232-2522 (English) or 800-232-0233 (Spanish). State and local health departments should be consulted concerning availability of influenza vaccine, access to vaccination programs, information related to state or local influenza activity, and for reporting influenza outbreaks and receiving advice concerning outbreak control.

# Additional Information Regarding Influenza Infection Control Among Specific Populations

Each year, ACIP provides general, annually updated information regarding control and prevention of influenza. Other reports related to controlling and preventing influenza among specific populations (e.g., immunocompromised persons, health-care personnel, hospitals, and travelers) are also available in the following publications:
• CDC.

# Advisory Committee on Immunization Practices Membership
These guidelines update previous CDC recommendations for the diagnosis, treatment, and prevention of tuberculosis (TB) among adults and children coinfected with human immunodeficiency virus (HIV) in the United States. The most notable changes in these guidelines reflect both the findings of clinical trials that evaluated new drug regimens for treating and preventing TB among HIV-infected persons and recent advances in the use of antiretroviral therapy. In September 1997, when CDC convened a meeting of expert consultants to discuss current information about HIV-related TB, special emphasis was given to issues related to coadministration of TB therapy and antiretroviral therapy and how to translate this information into management guidelines. Thus, these guidelines are based on the following scientific principles:
- Early diagnosis and effective treatment of TB among HIV-infected patients are critical for curing TB, minimizing the negative effects of TB on the course of HIV, and interrupting the transmission of Mycobacterium tuberculosis to other persons in the community.
- All HIV-infected persons at risk for infection with M. tuberculosis must be carefully evaluated and, if indicated, administered therapy to prevent the progression of latent infection to active TB disease and avoid the complications associated with HIV-related TB.
- All HIV-infected patients undergoing treatment for TB should be evaluated for antiretroviral therapy, because most patients with HIV-related TB are candidates for concurrent administration of antituberculosis and antiretroviral drug therapies. However, the use of rifampin with protease inhibitors or nonnucleoside reverse transcriptase inhibitors is contraindicated.

Ideally, the management of TB among HIV-infected patients taking antiretroviral drugs requires a) directly observed therapy, b) availability of experienced and coordinated TB/HIV care givers, and, in most situations, c) use of a TB treatment regimen that includes rifabutin instead of rifampin. Because alternatives to the use of rifampin for antituberculosis treatment are now available, the previously recommended practice of stopping protease inhibitor therapy to allow the use of rifampin for TB treatment is no longer recommended for patients with HIV-related TB. The use of rifabutin-containing antituberculosis regimens should always include an assessment of the patient's response to treatment to decide the appropriate duration of therapy (i.e., 6 months or 9 months). Physicians and patients also should be aware that paradoxical reactions might occur during the course of TB treatment when antiretroviral therapy restores immune function.

Adding to CDC's current recommendations for administering isoniazid preventive therapy to HIV-infected persons with positive tuberculin skin tests and to HIV-infected persons who were exposed to patients with infectious TB, this report also describes in detail the use of new short-course (i.e., 2 months) multidrug regimens (e.g., a rifamycin, such as rifampin or rifabutin, combined with pyrazinamide) to prevent TB in persons with HIV infection. A continuing education component for U.S. physicians and nurses is included.

# INTRODUCTION

These guidelines update previous CDC recommendations for treating and preventing active tuberculosis (TB) among adults and children coinfected with human immunodeficiency virus (HIV) (1)(2)(3).
The most notable changes in these guidelines reflect both the recent advances in the use of antiretroviral therapy and the findings of clinical trials that evaluated new drug regimens for the treatment and prevention of TB among HIV-infected persons. Antiretroviral therapy is discussed in the context of TB treatment only; more detailed information about antiretroviral therapy is published elsewhere (4). In September 1997, CDC convened a meeting of expert consultants who reviewed and considered background information about HIV-related TB in the United States and the scientific principles of therapy for both diseases (Part I of this report). The consultants then used this review as the basis for updating the recommendations for HIV-infected patients with TB (Part II). During their review of the scientific principles of therapy, the expert consultants focused on epidemiologic and clinical interactions between Mycobacterium tuberculosis infection and HIV infection, considering the frequency of coexisting TB and HIV infection and rates of drug-resistant TB among patients infected with HIV in the United States; the copathogenicity of TB and HIV disease; the potential for a poorer outcome of TB therapy and paradoxical reactions to TB treatment among HIV-infected patients; drug interactions between rifampin used for TB therapy and agents commonly used in antiretroviral therapy; use of TB treatment regimens that do not contain rifampin; and results of clinical trials of therapies to prevent TB among HIV-infected persons. Thus, in addition to CDC's current recommendations, these new guidelines include information about the following topics:
- directly observed therapy for all patients with HIV-related TB;
- rifabutin-containing antituberculosis regimens (or a streptomycin-based alternative regimen that does not contain a rifamycin) for treating TB among patients taking antiretroviral drugs that have interactions with rifampin;
- monitoring responses to antituberculosis treatment to decide about the appropriate duration of TB therapy;
- occurrence and management of paradoxical reactions during TB treatment, when immune function is restored because of antiretroviral therapy;
- use of 9 months of isoniazid daily or twice weekly for the treatment of M. tuberculosis infection;
- short-course multidrug therapy for latent M. tuberculosis infection; and
- special considerations that apply to children and pregnant women with HIV-related TB.

Health-care professionals need to be familiar with these new guidelines to ensure the use of the most effective management strategies for TB patients infected with HIV, while concurrently promoting optimal antiretroviral therapy for these patients. To help clinicians make informed treatment decisions based on the most current research results, the expert consultants have given each recommendation an evidence-based rating similar to the ratings used in previously issued guidelines (4,5). However, these recommendations are not intended to substitute for the judgment of an expert physician. When possible, the treatment of TB in HIV-infected persons should be directed by (or done in consultation with) a physician with extensive experience in the care of patients with TB and HIV disease. The implementation of these recommendations will help prevent cases of drug-resistant TB, reduce TB treatment failures, and diminish the adverse effects that TB has on HIV replication.
Moreover, these guidelines will contribute to efforts to control TB and eliminate it from the United States by minimizing the likelihood of M. tuberculosis transmission, which will prevent the occurrence of new cases of TB. In future years, health-care professionals can expect changes in the recommendations regarding the therapeutic options used to prevent and treat TB among patients infected with HIV. These changes will reflect the availability of new antiretroviral and antituberculosis agents, new information about existing agents, and subsequent changes in CDC's guidelines for the use of antiretroviral therapy for persons infected with HIV. Multiple copies of this report and all updates are available from the Office of Communications, National Center for HIV, STD, and TB Prevention, CDC, 1600 Clifton Road, Mail Stop E-06, Atlanta, GA 30333. The report also is posted on the CDC Division of TB Elimination website and the MMWR website. Readers should consult these sources regularly for updates in the guidelines.

# PART I. BACKGROUND AND SCIENTIFIC RATIONALE

# Frequency of Coexisting TB and HIV Infection and Disease in the United States

In the United States, epidemiologic evidence indicates that the HIV epidemic contributed substantially to the increased numbers of TB cases in the late 1980s and early 1990s (6,7). Overlap between the acquired immunodeficiency syndrome (AIDS) and TB epidemics continues to result in increases in TB morbidity. Analysis of national HIV-related TB surveillance data is limited by incomplete reporting of HIV status for persons with TB. As an alternative, state health department personnel have compared TB and AIDS registries to help estimate the proportion of persons reported with TB who are also infected with HIV. In the most recent comparison conducted by the 50 states and Puerto Rico, 14% of persons with TB in 1993-1994 (27% among those aged 25-44 years) also appeared in the AIDS registry (8). This proportion of TB patients with AIDS is believed to be a minimum estimate for the United States and might represent an increase over the proportion of TB patients identified as having TB and AIDS in 1990 (9%) (6). During 1993-1994, most persons with TB and AIDS (80%) were found in eight reporting areas: New York City, California, Florida, Georgia, Illinois, New Jersey, New York, and Texas (8).

In prospective epidemiologic studies, investigators have estimated that the annual rate of TB disease among untreated, tuberculin skin-test (TST)-positive, HIV-infected persons in the United States ranges from 1.7 to 7.9 TB cases per 100 person-years (Table 1) (9-11). The variability observed in these studies mirrors the differences in TB prevalence observed for different U.S. populations (i.e., the highest case rate was found in a study of a New York City population of intravenous drug users at a time when the incidence of TB was high and increasing; and the lowest case rate was evident in a community-based cohort of persons enrolled in a study of the pulmonary complications of HIV infection at a time and in a population in which the incidence of TB was relatively low). However, in all of these studies, the rate of TB disease among HIV-infected, TST-positive persons was approximately 4-26 times higher than the rate among comparable HIV-infected, TST-negative persons, and it was approximately 200-800 times higher than the rate of TB estimated for the U.S. population overall (0.01%) (12).

# TABLE 1. Annual rates (per 100 person-years) of tuberculosis among persons with human immunodeficiency virus infection, by tuberculin skin-test (TST) status - selected years and U.S. areas

Therefore, activities to control and eliminate TB in the United States must include aggressive efforts to identify HIV-infected persons with latent TB infection and to provide them with therapy to prevent progression to active TB disease.
Therefore, activities to control and eliminate TB in the United States must include aggressive efforts to identify HIV-infected persons with latent # TABLE 1. Annual rates- of tuberculosis among persons with human imunodeficiency virus infection, by tuberculin skin-test (TST) status -selected years and U.S. areas Location and source # Rate among persons with positive TSTs # Rate among persons with negative TSTs Rate ratio New York City Selwyn et al., 1989 (9 ) 7.9 0. TB infection and to provide them with therapy to prevent progression to active TB disease. # Rates of Drug-Resistant TB Among HIV-Infected Persons in the United States Resistance to antituberculosis drugs is an important consideration for some HIVinfected persons with TB. According to the results of a study of TB cases reported to CDC from 1993 through 1996, the risk of drug-resistant TB was higher among persons with known HIV infection compared with others (13 ). During this 4-year period, among U.S.-born persons aged 25-44 years with TB, HIV test results were reported as positive for 32% of persons, negative for 23%, and unknown for 45%. Using univariate analysis that excluded patients known to have had a previous episode of TB, investigators found that patients known to be HIV seropositive had a significantly higher rate of resistance to all first-line antituberculosis drugs, compared with HIV-seronegative patients and patients with unknown HIV serostatus (Table 2). Moreover, using a multivariate model that included age, history of previous TB, birth country, residence in New York City, and race/ethnicity, the investigators confirmed HIV-positive serostatus as a risk factor for resistance to at least isoniazid, for both isoniazid and rifampin resistance (multidrug-resistant TB) and for rifampin monoresistance (TB resistant to rifampin only). In some areas of the United States with a low level of occurrence of MDR TB, however, differences in MDR TB related to HIV status have not been found (8 ). Reasons for the increased risk for TB drug resistance among HIV-seropositive persons might reflect a higher proportion of TB disease resulting from recently acquired M. tuberculosis infection (14,15 ) and thus an increased risk of disease caused by drug-resistant strains in areas with high community and institutional transmission of drug-resistant strains of M. tuberculosis (16 ). Several well-described outbreaks of † The patient's Mycobacterium tuberculosis isolate had resistance to at least the specified drug but may have had resistance to other drugs as well. § The differences in drug-resistance rates among patients with TB known to be HIV-seropositive, compared with those known to be HIV-seronegative or of unknown status, are statistically significant (Chi-square test statistic, p<0.05). ¶ These figures were calculated for patients with M. tuberculosis isolates tested for isoniazid and rifampin always and streptomycin sometimes. Monoresistant isolates were resistant to rifampin but susceptible to the other first-line drugs tested. Source: CDC, National Tuberculosis Surveillance System. nosocomially transmitted MDR TB, primarily affecting persons with AIDS, support this association (17)(18)(19)(20)(21). In the past decade, reports have increased of TB caused by strains of M. tuberculosis resistant to rifampin only, and growing evidence has indicated that this rare event is associated with HIV coinfection (22)(23)(24)(25)(26)(27)(28)(29)(30)(31)(32). 
In retrospective studies, nonadherence with TB therapy has been associated with acquired rifampin monoresistance (22)(23)(24); and among a small number of patients, the use of rifabutin as prophylaxis for Mycobacterium avium complex was associated with the development of rifamycin resistance (31). However, the occurrence of TB relapse with acquired rifampin monoresistance also has been documented among patients with TB who initially had rifampin-susceptible isolates and who were treated with a rifampin-containing TB regimen by directly observed therapy (DOT) (30,32). The mechanisms involved in the development of acquired rifampin monoresistance are not clearly understood but could involve the persistence of actively multiplying mycobacteria in patients with severe cellular immunodeficiency, selective antituberculosis drug malabsorption, and inadequate tissue penetration of drugs. Thus, of critical importance for HIV-infected persons is the implementation of TB prevention and control strategies such as a) appropriate use of therapy for latent M. tuberculosis infection, b) early diagnosis and effective treatment of active TB (i.e., administering four-drug antituberculosis regimens by DOT to all coinfected patients), and c) prompt compliance with requirements for reporting TB cases and drug-susceptibility test results. Implementing these strategies for persons coinfected with HIV will not only help reduce new cases of TB in general but also could decrease further transmission of drug-resistant strains and new cases of drug-resistant TB.

# Copathogenicity of TB and HIV Disease

Human immunodeficiency virus type 1 (HIV-1) and M. tuberculosis are two intracellular pathogens that interact at the population, clinical, and cellular levels. Initial studies of HIV-1 and TB emphasized the impact of HIV-1 on the natural progression of TB, but mounting immunologic and virologic evidence now indicates that the host immune response to M. tuberculosis enhances HIV replication and might accelerate the natural progression of HIV infection (33). Therefore, the interaction between these two pathogens has important implications for the prevention and treatment of TB among HIV-infected persons.

Studies of the immune response in persons with TB disease support the biologic plausibility of copathogenesis in dually infected persons. The initial interaction between the host immune system and M. tuberculosis occurs in the alveolar macrophages that present mycobacterial antigens to antigen-specific CD4+ T cells (34). These T cells release interferon-gamma, a cytokine that acts at the cellular level to activate macrophages and enhance their ability to contain mycobacterial infection. The activated macrophages also release proinflammatory cytokines, such as tumor necrosis factor and interleukin (IL)-1, cytokines that enhance viral replication in monocyte cell lines in vitro (35)(36)(37)(38). The mycobacteria and their products also enhance viral replication by inducing nuclear factor kappa-B, the cellular factor that binds to promoter regions of HIV (39,40).

When TB disease develops in an HIV-infected person, the prognosis is often poor, though it depends on the person's degree of immunosuppression and response to appropriate antituberculosis therapy (41)(42)(43). The 1-year mortality rate for treated, HIV-related tuberculosis ranges from 20% to 35% and shows little variation between cohorts from industrialized and developing countries (44)(45)(46)(47)(48)(49).
The observed mortality rate for HIV-infected persons with TB is approximately four times greater than the rate for TB patients not infected with HIV (44,46,49,50). Although the cause of death in the initial period of therapy can be TB (46), death after the induction phase of antituberculosis therapy usually is attributed to complications of HIV other than TB (45,51,52).

Epidemiologic data suggest that active TB accelerates the natural progression of HIV infection. In a retrospective cohort study of HIV-infected women from Zaire, investigators estimated the relative risk of death to be 2.7 among women with active TB compared with those without TB (53). In a retrospective cohort study of HIV-infected subjects from the United States, active TB was associated with an increased risk for opportunistic infections and death (54). The risk of death, or hazard rate, for persons with HIV-related TB follows a bimodal distribution, peaking within the first 3 months of antituberculosis therapy and then again after 1 year (48); the reasons for this distribution are not clear but might relate to the impact of TB on HIV disease progression. The observation that active TB increases deaths associated with HIV infection has been corroborated in studies of three independent cohorts in Europe (55)(56)(57).

Early in the HIV epidemic, researchers postulated that the immune activation resulting from concurrent infection with parasitic or bacterial pathogens might alter the natural progression of HIV infection (58). Subsequent observations have demonstrated that immune activation from TB enhances both systemic and local HIV replication. In some patients with active TB, the plasma HIV RNA level rises substantially before TB is diagnosed (59). Moreover, TB treatment alone leads to reductions in the viral load in these dually infected patients. TB and HIV also interact in the lungs, the site of primary infection with M. tuberculosis. In a recently published study of HIV-infected patients with TB, researchers found that the viral load was higher in the bronchoalveolar lavage fluid from the affected versus the unaffected lung and was correlated with levels of tumor necrosis factor in bronchoalveolar fluid (60). Researchers used V3 loop viral sequences to construct a phylogenetic tree and observed that the HIV quasispecies from the affected lung differed from those in the plasma within the same patient. These data suggest that pulmonary TB might act as a potent stimulus for the cellular-level replication of HIV.

In summary, recent research findings have improved clinicians' understanding of how HIV affects the natural progression of TB and how TB affects the clinical course of HIV disease, and these findings support the recommendation for prevention, early recognition, and effective treatment of both diseases.

# TB Therapy Outcomes Among Patients with HIV-Related TB

Among patients treated for TB, early clinical response to therapy and the time in which M. tuberculosis sputum cultures convert from positive to negative appear to be similar for those with HIV infection and those without HIV infection (30,61,62). However, the data are less clear about whether rates of TB relapse (recurrence of TB following successful completion of treatment) differ among patients with or without HIV infection (63).
Current CDC and American Thoracic Society guidelines recommend a 6-month treatment regimen for drug-susceptible TB disease for patients coinfected with HIV (2) but suggest prolonged treatment for patients who have a delayed clinical and bacteriologic response to antituberculosis therapy. Some experts have suggested that to ensure an optimal antituberculosis treatment outcome, all patients with HIV-related TB should be treated with a longer course of therapy (i.e., 9 months), regardless of evidence of early response to therapy (64,65).

To make a recommendation on duration of therapy for HIV-related TB, expert consultants at the September 1997 CDC meeting considered the results of prospective studies that ascertained the posttreatment relapse rate following 6-month TB therapy regimens among patients with HIV infection (Table 3) (29,30,49,66,67). In these studies, patients generally received a daily induction regimen followed by a continuation regimen of 4 months of intermittent isoniazid and rifampin; the exceptions are that a) ethambutol was not used during the induction phase in Côte d'Ivoire and b) half of the patients in one of the U.S. studies (30) received levofloxacin in addition to the other four drugs during the induction phase. During the continuation phase, patients in Côte d'Ivoire received drugs daily, patients in Haiti received drugs three times a week, and patients in all other studies received drugs twice a week. In one of these studies, 51 comparable patients with HIV-related TB also were randomly assigned to a treatment arm in which the duration of the continuation phase was prolonged from 4 months to 7 months; the culture-confirmed posttreatment relapse rate (2%) in this study arm was not significantly different from the rate in the 6-month study arm (p=1.00); however, an isolate was not available for DNA fingerprinting.

Differences in the study designs, including those pertaining to eligibility for enrollment in the study and to the definition of TB relapse, limited the analysis of combined results from the five studies. Despite this limitation, the expert consultants were able to make the following observations: a) the studies had a posttreatment follow-up duration that ranged from 8 to 22 months (median duration: 18 months); b) in three studies (30,49,67), investigators found that 6-month TB regimens were associated with a clinically acceptable (≤5.4%) TB relapse rate; and c) in two studies (29,66), researchers found a high (≥9%) TB relapse rate associated with the use of 6-month TB regimens. In the Zaire study (66), TB patients coinfected with HIV had almost twofold higher posttreatment relapse rates than patients not infected with HIV who received the same TB treatment regimen; however, the authors did not investigate whether the relapses were the result of a recurrence of disease with the same strain of M. tuberculosis or reinfection (new disease) with a different strain. In the other study (29), which enrolled HIV-seropositive patients from 21 different sites in the United States, in all three patients who relapsed, the strain of M. tuberculosis isolated during the relapse episode matched, by DNA fingerprint, the strain of M. tuberculosis that was isolated during the initial episode of TB; this finding ruled out the possibility of reinfection.

The expert consultants who reviewed the available data agreed that short-course (i.e., 6-month) regimens should be used for the treatment of HIV-related pansusceptible TB (i.e., TB susceptible to all first-line antituberculosis drugs) in the United States, where patients are usually treated with DOT and where response to antituberculosis drugs can be monitored. This approach limits the use of lengthier multidrug antituberculosis therapies to the minimum possible number of patients with TB and HIV disease. Some experts believe the risk of TB treatment failure is increased among patients with advanced HIV-related immunosuppression and therefore advocate greater caution (or longer duration of therapy) when treating such patients for TB. The available data do not permit CDC to make a definitive recommendation regarding this issue. However, the experts recommended that clinicians treating TB in patients with HIV infection should consider the factors that increase a person's risk for a poor clinical outcome (e.g., lack of adherence to TB therapy, delayed conversion of M. tuberculosis sputum cultures from positive to negative, and delayed clinical response) when deciding the total duration of TB therapy.

# Paradoxical Reactions Associated with Initiation of Antiretroviral Therapy During the Course of TB Therapy

The temporary exacerbation of TB symptoms and lesions after initiation of antituberculosis therapy, known as a paradoxical reaction, has been described as a rare occurrence (68)(69)(70)(71)(72)(73)(74) attributed to causes such as recovery of the patient's delayed hypersensitivity response and an increase in exposure and reaction to mycobacterial antigens after bactericidal antituberculosis therapy is initiated (75). Recently, a similar phenomenon was reported among patients with HIV-related TB (76). These reactions appear to be related more often to the concurrent administration of antiretroviral and antituberculosis therapy, and they occur with greater frequency than do paradoxical reactions associated primarily with the administration of antituberculosis therapy alone. Patients with paradoxical reactions can have hectic fevers, lymphadenopathy (sometimes severe), worsening of chest radiographic manifestations of TB (e.g., miliary infiltrates, pleural effusions), and worsening of original tuberculous lesions (e.g., cutaneous and peritoneal). However, these reactions are not associated with changes in M. tuberculosis bacteriology (i.e., no change from negative to positive culture and smear), and patients generally feel well and have no signs of toxicity.

In a prospective study, paradoxical reactions were more common among 33 patients with HIV-related TB who received TB treatment and combination antiretroviral therapy (36%) than among 55 patients not infected with HIV who received antituberculosis drugs alone (2%) and among 28 HIV-infected patients (historical control patients during the pre-zidovudine era) who received antituberculosis drugs alone (7%) (76). Furthermore, among patients treated for both diseases, the paradoxical reactions were more temporally related to the initiation of combination antiretroviral therapy (mean ± standard deviation [SD]: 15 ± 11 days afterward) than to the initiation of antituberculosis treatment (mean ± SD: 109 ± 72 days afterward).
Researchers investigated potential causes for these symptoms and lesions (i.e., TB treatment failure, antituberculosis drug resistance, nonadherence with TB therapy, drug fever, and development of conditions not related to TB or HIV) but considered such causes unlikely because these evaluations produced negative results and TB was cured in patients who remained on unmodified antituberculosis regimens. Among patients in this study who received combination antiretroviral therapy, which usually included a protease inhibitor, the paradoxical reactions corresponded with a concurrent drop in HIV viral loads after antiretroviral therapy began and, in all but one patient, occurred while peripheral blood CD4+ T-cell counts were <200 cells/µL (76). In the historical control group (i.e., patients who were treated for TB but not for HIV), two (7%) of the 28 patients had a paradoxical reaction after antituberculosis therapy was initiated. This finding indicates that treatment of TB alone might sometimes decrease HIV viral load substantially and improve immune function (40,59,68,76).

After reviewing information about paradoxical reactions occurring during the course of TB therapy, expert consultants at the September 1997 CDC meeting concluded that exacerbation of TB signs and symptoms in patients with HIV-related TB can occur soon after combination antiretroviral therapy is initiated. Clinicians should always conduct a thorough investigation to eliminate other etiologies before making a diagnosis of paradoxical treatment reaction. For patients with paradoxical reactions, rarely are changes in antituberculosis or antiretroviral therapy needed. If the lymphadenopathy or other lesions are severe, one option is to continue with appropriate antituberculosis therapy and administer short-term steroids that suppress the enhanced immune response.

In the prospective study (76), despite having low CD4+ T-cell counts, six (86%) of seven TB patients who were initially TST-negative had positive TST results after combination antiretroviral therapy was started. The reaction sizes of postantiretroviral TSTs ranged from 7 to 67 mm of induration. Clinicians must be aware of the potential public health and clinical implications of restored TST reactivity among persons who have not been diagnosed with active TB but who might be latently infected with M. tuberculosis. Persons previously known to have negative TST results might benefit from repeat tuberculin testing if they have evidence of restored immune function after antiretroviral therapy is initiated, because TB preventive therapy is recommended for TST-positive HIV-infected persons.

# Considerations for TB Therapy for HIV-Infected Patients Treated with Antiretroviral Agents

Drug Interactions Between Rifamycins Used for TB Therapy and Antiretroviral Drugs Used for HIV Therapy. Widely used antiretroviral drugs available in the United States include protease inhibitors (saquinavir, indinavir, ritonavir, and nelfinavir) and nonnucleoside reverse transcriptase inhibitors (NNRTIs) (nevirapine, delavirdine, and efavirenz). Protease inhibitors and NNRTIs have substantive interactions with the rifamycins (rifampin, rifabutin, and rifapentine) used to treat mycobacterial infections (3,77). These drug interactions principally result from changes in the metabolism of the antiretroviral agents and the rifamycins secondary to induction or inhibition of the hepatic cytochrome CYP450 enzyme system (78,79).
Rifamycin-related CYP450 induction decreases the blood levels of drugs metabolized by CYP450. For example, if protease inhibitors are administered with rifampin (a potent CYP450 inducer), blood concentrations of the protease inhibitors (all of which are metabolized by CYP450) decrease markedly, and most likely the antiretroviral activity of these agents declines as well. Conversely, if ritonavir (a potent CYP450 inhibitor) is administered with rifabutin, blood concentrations of rifabutin increase markedly, and most likely rifabutin toxicity increases as well. Of the available rifamycins, rifampin is the most potent CYP450 inducer; rifabutin has substantially less activity as an inducer; and rifapentine, a newer rifamycin, has intermediate activity as an inducer (80-82 ). The four currently approved protease inhibitors and amprenavir (141W94, an investigational agent in Phase III clinical trials) are all, in differing degrees, inhibitors of CYP450 (83,84 ). The rank order of the agents in terms of potency in inhibiting CYP450 is ritonavir (the most potent); amprenavir, indinavir, and nelfinavir (with approximately equal potencies); and saquinavir (the least potent). The magnitude of the effects of coadministering rifamycins and protease inhibitors has been evaluated in limited pharmacokinetic studies (Table 4) (85-91 ).

The three approved NNRTIs have diverse effects on CYP450: nevirapine is an inducer, delavirdine is an inhibitor, and efavirenz is both an inducer and an inhibitor. The magnitude of the effects of coadministering rifamycins and NNRTIs has also been evaluated in pharmacokinetic studies or has been predicted on the basis of what is known about their potential for inducing or inhibiting CYP450 (Table 5) (92-96 ).

In contrast to the protease inhibitors and the NNRTIs, the other class of antiretroviral agents available, nucleoside reverse transcriptase inhibitors (NRTIs) (zidovudine, didanosine, zalcitabine, stavudine, and lamivudine), is not metabolized by CYP450. Rifampin (and to a lesser degree, rifabutin) increases the glucuronidation of zidovudine and thus slightly decreases the serum concentration of zidovudine (97-100 ). The effect of this interaction probably is not clinically important, and the concurrent use of NRTIs and rifamycins is not contraindicated (77 ). Also, no contraindication exists for the use of NRTIs, NNRTIs, and protease inhibitors with isoniazid, pyrazinamide, ethambutol, or streptomycin. These first-line antituberculosis medications, in contrast to the rifamycins, are not CYP450 inducers.

*Effects are expressed as a percentage change in AUC of the concomitant treatment relative to that of the drug-alone treatment. No data are available regarding the magnitude of these bidirectional interactions when rifamycins are administered two or three times a week instead of daily.
†Predicted effect based on knowledge of metabolic pathways for the two drugs.
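The induction and inhibition relationships described above can be captured in a small qualitative model. The sketch below is illustrative only: the rankings transcribe the text, while the data-structure names, the two-rule logic, and the output strings are assumptions of this example.

```python
# Illustrative only: a qualitative model of the bidirectional CYP450
# interactions described above. Rankings come from the text; the helper
# name and output format are invented for this sketch.

RIFAMYCIN_INDUCTION = {          # potency as CYP450 inducers
    "rifampin": "most potent",
    "rifapentine": "intermediate",
    "rifabutin": "substantially less",
}

PI_INHIBITION = {                # protease inhibitors as CYP450 inhibitors
    "ritonavir": "most potent",
    "amprenavir": "intermediate",
    "indinavir": "intermediate",
    "nelfinavir": "intermediate",
    "saquinavir": "least potent",
}

NNRTI_EFFECT = {"nevirapine": "inducer", "delavirdine": "inhibitor",
                "efavirenz": "inducer and inhibitor"}

def predicted_interactions(rifamycin, companion):
    """Return the qualitative, bidirectional effects of coadministration."""
    effects = []
    # Rule 1: rifamycin induction lowers levels of CYP450-metabolized companions.
    if companion in PI_INHIBITION or companion in NNRTI_EFFECT:
        effects.append(f"{rifamycin} induction ({RIFAMYCIN_INDUCTION[rifamycin]}) "
                       f"lowers {companion} levels")
    # Rule 2: a CYP450-inhibiting companion raises rifamycin levels.
    if companion in PI_INHIBITION or "inhibitor" in NNRTI_EFFECT.get(companion, ""):
        effects.append(f"{companion} inhibition raises {rifamycin} levels")
    return effects

print(predicted_interactions("rifampin", "indinavir"))
print(predicted_interactions("rifabutin", "ritonavir"))
```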
# Coadministration of Antituberculosis and Antiretroviral Therapies

According to 1998 U.S. Department of Health and Human Services guidelines on the use of antiretroviral agents among HIV-infected adults and adolescents (4 ), to improve the length and quality of patients' lives, all persons with symptomatic HIV infection should be offered antiretroviral therapy. HIV-infected patients with TB fall in this category. When used appropriately, combinations of potent antiretroviral agents can effect prolonged suppression of HIV replication and reduce the inherent tendency of HIV to generate drug-resistant viral strains. However, as antiretroviral therapeutic regimens have become increasingly effective, they also have become increasingly complex in themselves as well as in the problems they cause for the treatment of other HIV-associated diseases.

At present, regimens that include two NRTIs combined with a potent protease inhibitor (or, as an alternative, combined with an NNRTI) are the preferred choice for combination antiretroviral therapy for the majority of patients. Each of the antiretroviral drug combination regimens must be used according to optimum schedules and doses (4 ) because the potential for resistant mutations of HIV decreases if serum concentrations of the multiple antiretroviral drugs are maintained steadily. Because rifampin markedly lowers the blood levels of these drugs and is likely to result in suboptimal antiretroviral therapy, the use of rifampin to treat active TB in a patient who is taking a protease inhibitor or an NNRTI is always contraindicated. Rifabutin is a less potent inducer of the CYP450 cytochrome enzymes than is rifampin and, when used in appropriately modified doses, might not be associated with a clinically significant reduction of protease inhibitors or nevirapine (Table 6). Thus, the substitution of rifabutin for rifampin in TB treatment regimens has been proposed as a practical choice for patients who are also undergoing therapy with protease inhibitors (with the exception of ritonavir or hard-gel capsule saquinavir [Invirase™]) or with the NNRTIs nevirapine or efavirenz (but not delavirdine). Currently, more clinical and pharmacokinetic data are available on the use of indinavir or nelfinavir with rifabutin than on the use of amprenavir or soft-gel saquinavir (Fortovase™) with rifabutin. Rifapentine is not recommended as a substitute for rifampin because its safety and effectiveness have not been established for the treatment of patients with HIV-related TB. As an alternative to the use of rifamycin for the treatment of TB, the use of streptomycin-based regimens that do not contain rifamycin can be considered for the treatment of TB in patients undergoing antiretroviral therapy with protease inhibitors or NNRTIs.

# TABLE 6. Feasibility of using different antiretroviral drugs and rifabutin

(Each entry gives the antiretroviral agent, whether it can be used in combination with rifabutin, and comments.)

Saquinavir (soft-gel formulation): Probably. Use of the soft-gel formulation (Fortovase™) in higher-than-usual doses might allow adequate serum concentrations of this drug despite concurrent use of rifabutin.* However, the pharmacokinetic data for this combination are limited in comparison with other protease inhibitors. Because of the expected low bioavailability of the hard-gel formulation (Invirase™), the concurrent use of this agent with rifabutin is not recommended.

Ritonavir: No. Ritonavir increases concentrations of rifabutin by 35-fold and results in increased rates of toxicity (arthralgia, uveitis, skin discoloration, and leukopenia). These adverse events have been noted in studies of high-dose rifabutin therapy and when rifabutin is administered with clarithromycin (another CYP450 inhibitor) -an indication that these events might result from high serum concentrations of rifabutin.

Indinavir: Yes. Data from drug interaction studies (unpublished report, Merck Research Laboratories, West Point, PA, 1998) suggest that the dose of indinavir should be increased from 800 mg every 8 hours to 1,200 mg every 8 hours if used in combination with rifabutin.*

Nelfinavir: Yes. Some clinical experts suggest that the dose of nelfinavir should be increased from 750 mg three times a day to 1,000 mg three times a day if used in combination with rifabutin.*

Amprenavir: Probably. The drug interactions between amprenavir and rifabutin (and thus potential for rifabutin toxicity) are reported to be similar to those of ritonavir with rifabutin. However, potential advantages of using this combination are that a) rifabutin has a minimal effect on reducing the levels of amprenavir and b) even though it has not been studied, rifabutin toxicity is not expected if the daily dose of rifabutin is reduced when used in combination with amprenavir.

NRTIs†: Yes. Not expected to have clinically significant interaction.

Nevirapine: Yes. Not known whether nevirapine or rifabutin dose adjustments are necessary when these drugs are used together.*

Delavirdine: No. Not recommended on the basis of marked decreases in concentrations of delavirdine when administered with rifamycins.

Efavirenz: Probably. Newly approved agent. Preliminary drug interaction studies suggest that when rifabutin is used concurrently with efavirenz, the dose of rifabutin for both daily and twice-weekly administration should be increased from 300 mg to 450 mg.

*Daily dose of rifabutin should be reduced from 300 mg to 150 mg if used in combination with amprenavir, nelfinavir, or indinavir. It is unknown whether the dose of rifabutin should be reduced if used in combination with saquinavir (Fortovase™) or nevirapine.
†Nucleoside reverse transcriptase inhibitors, including zidovudine, didanosine, zalcitabine, stavudine, and lamivudine.

# Use of Rifabutin-Based Regimens for the Treatment of HIV-Related TB

At present, TB drug regimens that include rifabutin instead of rifampin appear to offer the best alternative for the treatment of active TB among patients taking antiretroviral therapies that include protease inhibitors or NNRTIs. This recommendation is based on findings from studies of equivalent in vitro antituberculosis activity of rifabutin and rifampin (104,105 ) and the results of three clinical trials (106-108 ). These trials demonstrated that 6-month rifabutin-containing regimens (at a daily dose of either 150 mg or 300 mg) were as effective and as safe as similar control regimens containing rifampin for the treatment of TB (Table 7). The smallest (n=49) of these three trials was conducted in Uganda (108 ) and is the only one to include HIV-coinfected patients (who were not undergoing antiretroviral therapy at the time of the study). This study indicated that 81% of patients taking a TB treatment regimen containing daily rifabutin converted their sputum from M. tuberculosis positive to negative after 2 months of treatment, compared with a 48% sputum conversion rate among patients taking a TB regimen containing daily rifampin (p<0.05). However, when the researchers controlled for differences in baseline characteristics (a greater proportion of patients in the rifampin group had cavitary disease), they found no difference in the time to sputum conversion between the two study groups. Studies are under way to evaluate the use of rifabutin administered daily (at a dose of 150 mg) or twice a week (at a dose of 300 mg) for the treatment of TB in HIV-infected patients who take protease inhibitors.
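For quick reference, Table 6 can be transcribed into a lookup structure. The sketch below is a hypothetical convenience, not part of the guidelines: the dictionary keys, verdict strings, and helper function are invented, and the notes paraphrase the table's comments.

```python
# A rough transcription of Table 6 into a lookup, for illustration only.
# Keys and notes paraphrase the table; the helper is invented and does not
# substitute for the guidance text.

RIFABUTIN_COMPAT = {
    "saquinavir-softgel": ("probably", "higher-than-usual Fortovase dose; limited pharmacokinetic data"),
    "saquinavir-hardgel": ("no", "low bioavailability of Invirase; concurrent use not recommended"),
    "ritonavir": ("no", "rifabutin concentrations rise ~35-fold; toxicity risk"),
    "indinavir": ("yes", "increase indinavir 800 mg q8h -> 1,200 mg q8h; rifabutin 150 mg daily"),
    "nelfinavir": ("yes", "some experts: nelfinavir 750 mg tid -> 1,000 mg tid; rifabutin 150 mg daily"),
    "amprenavir": ("probably", "reduce daily rifabutin to 150 mg; interaction similar to ritonavir"),
    "nrti": ("yes", "no clinically significant interaction expected"),
    "nevirapine": ("yes", "dose adjustments, if any, not established"),
    "delavirdine": ("no", "delavirdine concentrations markedly decreased"),
    "efavirenz": ("probably", "increase rifabutin 300 mg -> 450 mg, daily or twice weekly"),
}

def rifabutin_feasible(agent):
    verdict, note = RIFABUTIN_COMPAT[agent]
    return f"{agent}: {verdict} ({note})"

print(rifabutin_feasible("indinavir"))
print(rifabutin_feasible("ritonavir"))
```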
Physicians at a state tuberculosis hospital in Florida have treated or consulted on the treatment of approximately 30 HIV-infected patients who received a protease inhibitor while undergoing treatment for TB with rifabutin. Patients have been treated for TB primarily with administration of rifabutin (150 mg daily) as part of four-drug therapy for 2-4 weeks, followed by rifabutin (300 mg twice weekly) as part of four-drug therapy to complete 8 weeks of induction, and then a continuation phase consisting of twice-weekly isoniazid and rifabutin (300 mg) to complete 6 months of treatment. To date, patients treated with this regimen have not experienced clinically significant increases in rifabutin serum levels, have had a minimal incidence of adverse reactions from rifabutin (one patient developed a case of uveitis), and have had a good clinical response to TB and HIV therapies. Approximately 80% of the patients attained sputum conversion by the second month of treatment, most have attained and maintained suppression of HIV replication, and no TB relapses have occurred with up to 1 year of posttreatment follow-up (David Ashkin, M.D., and Masahiro Narita, M.D., A.G. Holley State Tuberculosis Hospital, Lantana, Florida, personal communication, 1998).

In previous reports, CDC and the American Thoracic Society jointly recommended the use of rifampin-containing short-course regimens for the initial treatment of HIV-related TB (2 ). The inclusion of rifampin in regimens to treat TB was supported by data collected from approximately 90 controlled clinical trials conducted from 1968 to 1988 (109 ). Excluding rifampin from the TB treatment regimen was not recommended because regimens not containing rifampin a) had not been proven to have acceptable efficacy (i.e., have been associated with higher rates of TB treatment failure and death and with slower bacteriologic responses to therapy leading to potential increases in the likelihood of M. tuberculosis transmission) and b) require prolonging duration of therapy from 6 months to 12-15 months. Presently, available data suggest that rifabutin in short-course (i.e., 6 months) multidrug regimens to treat TB provides the same benefits as the use of rifampin. Three additional reasons support the use of rifabutin for treating HIV-related TB: a) observations suggest that rifabutin might be more reliably absorbed than rifampin in patients with advanced HIV disease (110,111 ); b) the use of rifabutin appears to have been better tolerated in patients with rifampin-induced hepatotoxicity (David Ashkin, M.D., and Masahiro Narita, M.D., A.G. Holley State Tuberculosis Hospital, Lantana, Florida, personal communication, 1998); and c) the use of rifabutin might lessen the possibility of interactions with other medications commonly prescribed for patients with HIV infection (e.g., azole antifungal drugs, anticonvulsant agents, and methadone) (77 ).

# Use of Alternative TB Treatment Regimens that Contain Minimal or No Rifamycin

TB treatment regimens that contain no rifamycins have been proposed as an alternative for patients who take protease inhibitors or NNRTIs. Several clinical trials conducted in Hong Kong and Africa by the British Medical Research Council and published from 1974 through 1984 provide information about nonrifamycin and minimal-rifamycin regimens for the treatment of TB in patients who were not likely to be infected with HIV (Table 8) (112-115 ).
Most of these studies demonstrated high relapse rates when regimens not containing streptomycin were used and when the duration of therapy was less than 9 months. However, in a large (n=404) randomized controlled clinical trial in Hong Kong that evaluated the use of six TB treatment regimens consisting of streptomycin, isoniazid, and pyrazinamide either daily, three times a week, or two times a week for 6 or 9 months (112 ), almost all patients treated with any of the study regimens achieved rapid sputum conversions (86%-94% of patients converted within 3 months of therapy). In this study, the 30-month posttreatment follow-up relapse rates were high (18%-24%) among patients treated with 6-month regimens, but the relapse rates among patients treated with 9-month regimens (5%-6%) were similar to the relapse rates expected following the use of rifampin-based TB treatments. Thus, the expert consultants who developed these guidelines concluded that treatment of TB without rifamycin always requires longer-duration (at least 9 months) regimens that include streptomycin (or an injectable antituberculosis drug such as capreomycin, amikacin, or kanamycin) (63 ). However, these TB regimens have not been studied among patients with HIV infection.

Streptomycin is highly bactericidal against M. tuberculosis, but it is rarely used in the United States to treat drug-susceptible TB because of problems associated with its administration by injection that can be intensified in patients with low body mass or wasting and because of potential ototoxicity and nephrotoxicity. The associated potential toxicities and increased duration of therapy and the patient's difficulty in adhering to an injectable-drug-based TB regimen can compromise the effectiveness of streptomycin-based TB regimens, and these limitations should be considered by physicians and patients.

# Treatment of Latent M. tuberculosis Infection in Patients with HIV Infection

Scientific Rationale

Preventive therapy for TB is essential to controlling and eliminating TB in the United States (116,117 ). Treatment for HIV-infected persons who are latently infected with M. tuberculosis is an important part of this strategy and is also an important personal health intervention because of the serious complications associated with active TB in HIV-infected persons (118-120 ). Expert consultants attending the September 1997 CDC meeting and additional consultants attending a September 1998 meeting sponsored by the American Thoracic Society and CDC considered findings from multiple studies (121-130 ) before developing recommendations about the optimal duration of isoniazid preventive therapy regimens; frequency of administration (intermittency) of preventive therapy; new short-course multidrug regimens; and preventive therapy for anergic HIV-infected adults with a high risk of M. tuberculosis infection.

A key difference in most preventive therapy trials conducted before and after the beginning of the HIV epidemic is that the earlier trials focused on 12-month regimens of isoniazid, whereas five of seven trials (122-126 ) conducted in HIV-infected populations assessed 6-month regimens of isoniazid (Table 9). Four of these 6-month isoniazid regimens (122-125 ) were chosen for study on the basis of the operational feasibility of providing therapy in countries with limited resources where preventive therapy programs were not available; the fifth study (126 ), a U.S.
trial conducted among anergic patients, used a 6-month regimen because of the absence of previous data about optimal duration of therapy for TST-negative, HIV-infected patients. Despite these variations, the expert consultants concluded that the findings from these different preventive therapy studies should apply to most persons with latent M. tuberculosis infection, regardless of their HIV serostatus, because similar levels of protection have been observed when identical preventive therapy regimens have been administered to persons infected with HIV and those not infected.

# Optimal Duration of Isoniazid Regimens for Treatment of Latent M. tuberculosis Infection

The American Thoracic Society and CDC have previously recommended a regimen of 12 months of isoniazid alone for treatment of latent M. tuberculosis infection in HIV-infected adults (2 ). The recommended duration of TB preventive therapy for persons not infected with HIV was a minimum of 6 months. When considering the optimal duration of isoniazid preventive therapy, the consultants reviewed findings from two studies conducted in populations not known to be infected with HIV (128,129 ). One of these studies, a controlled trial conducted in seven European countries, compared the efficacy of three durations (3, 6, and 12 months) of isoniazid preventive treatment for TST-positive persons with stable, fibrotic lesions on chest radiographs (128 ). In this study, compliant patients who received medication for 12 months had better protection against TB (93%) than those who received medication for 6 months (69%). The other study was conducted among the Inuits in the Bethel area of Alaska, where participants received 0-24 months of isoniazid preventive therapy (129 ). In an assessment of observed posttherapy case rates of TB relative to the amount of isoniazid ingested (expressed as a percentage of a 12-month regimen), researchers found that higher amounts of therapy corresponded with lower TB rates among participants who had received 0-9 months of isoniazid therapy; after 9 months of therapy, participants had no additional benefits in terms of decreased TB case rates.

Four studies of HIV-infected persons have evaluated 6-month and 12-month regimens of daily isoniazid (121,123,125,127 ). Both of the studies that evaluated a 6-month regimen included a placebo comparison group and demonstrated reductions in the incidence of TB among persons in the treatment group -70% in Uganda (123 ) and 75% in Kenya (125 ). A study of the 12-month regimen (121 ), which was conducted in Haiti and also included a placebo comparison group, demonstrated an 83% reduction in the incidence of TB among persons in the treatment group. A multicenter trial conducted in the United States, Mexico, Brazil, and Haiti (127 ) demonstrated that the magnitude of protection obtained from a regimen of isoniazid administered daily for 12 months was similar to that obtained from a regimen of rifampin and pyrazinamide administered daily for 2 months. Isoniazid preventive therapy regimens of 6 and 12 months' duration have not been compared with each other in the same study conducted among HIV-infected persons.

In summary, these data indicate that a) the optimal duration of isoniazid preventive therapy should be >6 months to provide the maximum degree of protection against TB; b) therapy for 9 months appears to be sufficient; and c) therapy for >12 months does not appear to provide additional protection.
# Frequency of Administering Isoniazid Preventive Therapy

Two clinical trials (122,124 ) have evaluated 6-month twice-weekly isoniazid regimens for the prevention of active TB in HIV-infected persons (Table 9). Participants enrolled in the twice-weekly 6-month isoniazid arm of a study conducted in Zambia had a 40% reduction in the rate of TB compared with persons who took a placebo for 6 months (124 ). The findings of a trial conducted in Haiti (122 ) suggest that the magnitude of protection obtained from isoniazid administered twice a week for 6 months is equivalent to that obtained from rifampin and pyrazinamide regimens administered twice a week for 2 months. Preventive therapy trials that include twice-weekly isoniazid regimens for >6 months or comparisons of the same drugs administered daily versus intermittently have not been conducted. However, in a Baltimore demonstration project in which isoniazid was administered twice a week (10-15 mg/kg, with a maximum dose of 900 mg) to a cohort of injecting-drug users under directly observed preventive therapy (DOPT), the findings support the efficacy of twice-weekly isoniazid preventive therapy (130 ). Twice-weekly regimens with DOPT were used in Baltimore because the project staff expected that supervised delivery of therapy would enhance adherence with and completion of the preventive therapy regimen. Thus, the available data suggest that the protection obtained from isoniazid preventive therapy regimens should be the same whether the drug is administered daily or twice a week.

# Short-Course Multidrug Regimens for TB Preventive Therapy

Four clinical trials (122-124,127 ) conducted among HIV-infected populations have evaluated courses of preventive therapy that are shorter than 6 months and that include rifampin in combination with isoniazid or pyrazinamide (Table 9). The largest and most recent of these trials was a multicenter, randomized TB prevention study conducted from 1992 through 1998 (127 ). Researchers found identical rates of TB (1.2 per 100 person-years) in two groups of TST-positive, HIV-infected persons: those who primarily self-administered isoniazid daily for 12 months and those who primarily self-administered rifampin and pyrazinamide daily for 2 months. Both study groups had similar adverse events and mortality rates; persons taking rifampin and pyrazinamide for 2 months were significantly more likely (80%) to complete therapy than were persons taking isoniazid for 12 months (68%) (p<0.001). Two other trials conducted in Haiti and Zambia (122,124 ) have also evaluated regimens of rifampin and pyrazinamide for the prevention of TB but have not included comparison arms of 12-month isoniazid regimens. The study in Haiti (122 ) compared patients receiving rifampin and pyrazinamide administered twice a week for 2 months with patients receiving isoniazid twice a week for 6 months; in both arms of the study, one of the twice-weekly doses was administered by DOPT. Investigators observed no difference in TB risk or mortality among participants enrolled in the two treatment arms (122 ). The placebo-controlled trial in Zambia demonstrated comparable protection from 3 months of rifampin and pyrazinamide versus 6 months of isoniazid; both regimens were self-administered twice a week (124 ). In the multicenter trial (127 ) and in the Haiti and Zambia studies (122,124 ), regimens that included rifampin and pyrazinamide were well tolerated.
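The twice-weekly isoniazid dosing used in the Baltimore DOPT project described above (10-15 mg/kg, capped at 900 mg) reduces to one line of arithmetic. The function below is an invented illustration, not a dosing tool.

```python
# Minimal arithmetic sketch of the twice-weekly isoniazid dosing used in
# the Baltimore DOPT project (10-15 mg/kg, maximum 900 mg). The function
# name is invented; this is an illustration only.

def twice_weekly_isoniazid_mg(weight_kg, mg_per_kg=15.0):
    if not 10.0 <= mg_per_kg <= 15.0:
        raise ValueError("the project used 10-15 mg/kg")
    return min(weight_kg * mg_per_kg, 900.0)  # 900-mg cap from the text

print(twice_weekly_isoniazid_mg(70))      # 900.0 (cap applies)
print(twice_weekly_isoniazid_mg(50, 10))  # 500.0
```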
In a study conducted in Uganda (123 ), investigators observed no statistically significant reduction in TB rates but a high rate of toxicity and drug intolerance among persons who took three drugs (isoniazid, rifampin, and pyrazinamide) daily for 3 months compared with persons who took a placebo (Table 9); 3 months of daily self-administered rifampin and isoniazid provided protection similar to that of 6 months of daily self-administered isoniazid. Thus, short-course multidrug regimens (i.e., two drugs for 2-3 months) have been shown to be effective for the prevention of active TB in HIV-infected persons. The use of three drugs for preventive therapy, however, can be associated with unacceptably high rates of toxicity, and the use of a 3-month regimen of rifampin and isoniazid is not being considered for use in the United States. Available data indicate that in the United States, a regimen of rifampin and pyrazinamide administered daily for 2 months is a reasonable treatment option for HIV-infected adults with latent M. tuberculosis infection. The available data do not permit CDC to make a definitive statement regarding the intermittent (i.e., twice a week) administration of a 2-month regimen of rifampin and pyrazinamide.

# Preventive Therapy for Anergic HIV-Infected Adults with a High Risk of Latent M. tuberculosis Infection

Isoniazid preventive therapy has not been found to be useful or cost-effective in preventing TB when administered to anergic, HIV-infected persons (123,126 ) (Table 9). The anergic subjects who received isoniazid in the Uganda trial had a statistically insignificant (17%) reduction in the rate of TB (2.5 cases per 100 person-years) compared with patients in the placebo group (3.1 cases per 100 person-years) (123 ). Similarly, anergic HIV-infected persons with a high risk for tuberculous infection who were enrolled in a U.S. multicenter trial and treated with isoniazid daily for 6 months had a rate of TB (0.4 cases per 100 person-years) that was 50% less than, but not statistically different from, the rate observed among patients treated with placebo (0.9 cases per 100 person-years) (126 ). In both of these studies, HIV-infected persons with anergy tolerated isoniazid well, as suggested by the low rates of adverse reactions and high rates of therapy completion. These study findings do not support the routine use of preventive therapy in anergic, HIV-infected persons. Preventive therapy for TST-negative, HIV-infected persons also has not been proven effective (121,124,125 ) (Table 9); however, some experts recommend primary preventive therapy (to prevent M. tuberculosis infection) for TST-negative or anergic HIV-infected residents of institutions that pose an ongoing high risk for exposure to M. tuberculosis (e.g., prisons, jails, homeless shelters).

# Implications of Results of TB Preventive Therapy Trials

The effects of TB preventive therapy on mortality and progression of HIV infection appear to be limited, with the exception that such therapy can protect against the development of TB disease and its associated consequences. Moreover, the duration of this protective effect has not been clearly established for HIV-infected persons. Despite these limitations and uncertainties, preventive therapy is recommended because its benefits in preventing TB disease are thought to be greater than the risks of serious treatment-related adverse events, and such therapy benefits society by helping to prevent the spread of infection to other persons in the community.
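A schematic summary of the preventive-therapy findings above can be sketched as a small decision helper. The function and its inputs are invented simplifications; they compress the clinical nuance the text discusses.

```python
# Schematic summary of the preventive-therapy findings above. The function
# and its inputs are invented; they ignore many factors (exposure risk,
# drug interactions, tolerability) that the text discusses.

def preventive_therapy_options(tst_positive, anergic):
    if anergic or not tst_positive:
        # Routine preventive therapy is not supported for anergic or
        # TST-negative HIV-infected persons; some experts make exceptions
        # for ongoing high-risk institutional exposure.
        return ["not routinely recommended"]
    return [
        "isoniazid, daily or twice weekly (>6 months; 9 months appears sufficient)",
        "rifampin + pyrazinamide, daily for 2 months",
    ]

print(preventive_therapy_options(tst_positive=True, anergic=False))
print(preventive_therapy_options(tst_positive=False, anergic=True))
```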
The implementation of TB preventive therapy programs should be facilitated by the use of newly recommended short-course multidrug regimens and twice-weekly isoniazid regimens, especially among patients for whom DOPT is feasible. Because of the drug interactions between rifampin and protease inhibitors or NNRTIs, the use of shorter regimens containing rifampin is contraindicated for patients taking these antiretroviral drugs. Although preventive therapy trials evaluating rifabutin use among TST-positive, HIV-infected persons have not been conducted, the expert consultants reviewed available data and agreed that the use of rifabutin instead of rifampin is valid on the basis of the same scientific principles that support the use of rifabutin for the treatment of active TB.

# PART II. RECOMMENDATIONS

This section of the report provides clinicians with recommendations for diagnosing, treating, and preventing TB among persons coinfected with HIV while concurrently promoting optimal antiretroviral care for these patients. The recommendations reflect the current state of knowledge regarding the use of antiretroviral agents, but this field of science is rapidly evolving. As new antiretroviral agents and new data regarding existing agents alter therapeutic options and preferences for antiretroviral therapy, these changes might affect future recommendations for the treatment of TB infection and disease among patients coinfected with HIV and the treatment of HIV infection among persons with TB.

Expert consultants updated these recommendations after a September 1997 CDC meeting, where they reviewed and considered available information about the scientific principles of therapy for TB and HIV. To help clinicians make informed treatment decisions based on the most current research results, the expert consultants have given the recommendations evidence-based ratings (general recommendations have no rating). The ratings include a letter and a Roman numeral (Table 10), similar to the ratings used in previously issued guidelines (4,5 ). The letter indicates the strength of the recommendation, and the Roman numeral indicates the nature of the evidence supporting the recommendation. Thus, clinicians can use the ratings to differentiate between recommendations based on data from clinical trials versus those based on the opinions of experts familiar with the relevant clinical practice and scientific rationale for such practice (when clinical trial data are not available). However, these recommendations are not intended to substitute for the judgment of an expert physician. Management of HIV-related TB disease is complex, and clinical and public health consequences associated with treatment failure are serious. When possible, treatment of TB among HIV-infected persons should be directed by, or conducted in consultation with, a physician with extensive experience in the care of patients with these two diseases.

The objectives of implementing these recommendations are to reduce TB treatment failures, prevent drug-resistant TB, and diminish the adverse effects that TB has on HIV replication. Moreover, these guidelines contribute to efforts to control and eliminate TB from the United States by minimizing the likelihood of M. tuberculosis transmission, which will prevent the occurrence of new cases of TB.
# Active Tuberculosis

Clinical and Public Health Principles

Prompt initiation of effective antituberculosis treatment increases the probability that a patient with HIV infection who develops TB will be cured of this disease (45,131 ). TB treatment also quickly renders patients noninfectious (30 ), with the resulting reduction in the amount of M. tuberculosis transmitted to others, and it minimizes the patient's risk of death resulting from TB (41-43,132 ). Therefore, clinicians must immediately and thoroughly investigate the possibility of TB when a patient infected with HIV has symptoms consistent with TB. Persons suspected of having current TB disease should immediately be started on appropriate treatment, ideally with directly observed therapy (DOT) (133-135 ), and placed in TB isolation as necessary (136,137 ). Patients with TB and unknown HIV-infection status should be counseled and offered HIV testing. HIV-infected patients undergoing treatment for TB should be evaluated for antiretroviral therapy. Most patients with HIV-related TB are candidates for concurrent administration of antituberculosis and antiretroviral drug therapies (4 ).

Health-care providers, administrators, and TB controllers must strive to promote coordinated care for patients with TB and HIV and remove existing barriers to information-sharing between TB control programs and HIV/AIDS programs. TB control programs are responsible for setting TB treatment standards for physicians in the community, promoting the awareness and use of recommended TB infection-control practices, and enforcing state and local health department requirements concerning TB case notification and early reporting of drug-susceptibility test results. Because of the complexity of managing HIV-related TB disease and the serious public health consequences of mismanagement, care for persons with HIV-related TB should be provided by, or in consultation with, experts in the management of both TB and HIV disease.

# Diagnosis of HIV-Related Tuberculosis

The typical signs and symptoms of pulmonary TB are cough with or without fever, night sweats, weight loss, and upper-lobe infiltrates with or without cavitation on chest x-rays. The diagnosis of TB for some HIV-infected patients might be difficult because TB in an immunocompromised host can be associated with atypical symptoms, a lack of typical symptoms, and a paucity of findings in chest x-rays (138-140 ). Among persons with AIDS, the diagnosis of TB also can be complicated by the presence of other pulmonary infections such as Pneumocystis carinii pneumonia and Mycobacterium avium complex disease and by the occurrence of TB in extrapulmonary sites. For patients with unusual clinical and radiographic findings, the starting point for diagnosing active TB often is a positive tuberculin skin test (TST). All patients with positive TSTs should be evaluated to rule out active TB (see Diagnosis of M. tuberculosis Infection Among HIV-Infected Persons).

# Medical Evaluation of Patients Suspected of Having Active TB

(See Box 1.)

# Management of HIV-Infected Patients with Active TB

Coadministration of TB Treatment and Antiretroviral Therapy

The following management strategies are for patients with HIV-related pulmonary TB a) who are not known to have or who do not have risk factors for multidrug-resistant TB and b) for whom antiretroviral therapy is appropriate.
When they first receive care for active TB disease, some patients might already be receiving antiretroviral therapy, whereas other patients might be newly diagnosed with HIV infection (Figure 1). For these newly diagnosed patients, in addition to the currently established recommendations for the immediate initiation of antituberculosis therapy, recently published guidelines (4 ) recommend the use of antiretroviral therapy. When treatments for HIV and TB disease are begun simultaneously, the optimal setting is one with experienced and coordinated care givers as well as accessible resources to provide a continuum of medical services (e.g., a reliable source of medications and social, psychosocial, and nutritional services).

# FIGURE 1. Recommended management strategies for patients with human immunodeficiency virus (HIV) infection and tuberculosis (TB)

*Coadministration of rifabutin with ritonavir, saquinavir (Invirase™), or delavirdine is not recommended.
Because of drug interactions, the use of rifampin to treat TB is not recommended for patients who a) will start treatment with an antiretroviral regimen that includes a protease inhibitor or a nonnucleoside reverse transcriptase inhibitor (NNRTI) at the same time they begin treatment for TB (4 ) or b) have established HIV infection that is being maintained on such an antiretroviral regimen when TB is newly diagnosed and needs to be treated. Thus, two TB treatment options are currently recommended for these patients: a) a rifabutin-based regimen or b) an alternative nonrifamycin regimen that includes streptomycin (see Treatment Options for Patients with HIV Infection and Drug-Susceptible Pulmonary TB and Figure 1).
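The two-branch choice just described (and laid out in Figure 1) can be summarized in a short decision sketch. The drug-name sets transcribe the text; the function itself and the soft-gel/hard-gel key names are invented for illustration and omit the figure's other branches.

```python
# Sketch of the branching just described (and shown in Figure 1). Drug sets
# transcribe the text; the function and key names are invented.

PIS_AND_NNRTIS = {"saquinavir-softgel", "saquinavir-hardgel", "indinavir",
                  "ritonavir", "nelfinavir", "amprenavir",
                  "nevirapine", "delavirdine", "efavirenz"}
RIFABUTIN_INCOMPATIBLE = {"ritonavir", "saquinavir-hardgel", "delavirdine"}

def tb_treatment_options(antiretrovirals):
    """Return recommended TB regimen options given the antiretroviral plan."""
    if antiretrovirals & PIS_AND_NNRTIS:
        options = ["nonrifamycin regimen that includes streptomycin"]
        if not antiretrovirals & RIFABUTIN_INCOMPATIBLE:
            options.insert(0, "rifabutin-based regimen")
        return options
    # No protease inhibitor or NNRTI: rifampin-based therapy remains recommended.
    return ["rifampin-based regimen"]

print(tb_treatment_options({"indinavir", "zidovudine", "lamivudine"}))
print(tb_treatment_options({"ritonavir", "stavudine"}))
print(tb_treatment_options(set()))
```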
Using a rifampin-based TB treatment regimen continues to be recommended for patients with HIV infection a) who have not started antiretroviral therapy, when both the patient and the clinician agree that waiting to start such therapy would be prudent or b) for whom antiretroviral management does not include a protease inhibitor or an NNRTI (4 ) (Figure 1).

# BOX 1. Components of the medical evaluation for human immunodeficiency virus-infected patients suspected of having tuberculosis

The medical evaluation should include the following questions and assessments:

# Medical History
- Ask all patients about their history of tuberculosis (TB) treatment. If the patient has previously received treatment, care providers must determine the antituberculosis drugs used, duration of treatment, history of adverse reactions, reasons for discontinuation of treatment, history of adherence with treatment, and previous antituberculosis drug-susceptibility test results.
- Question all patients about the following risk factors for drug-resistant TB: a) previous treatment for TB, especially if it was incomplete; b) previous residence in a country outside the United States where drug-resistant TB is common; c) close contact with a person who has drug-resistant TB or multidrug-resistant TB; and d) previous residence in an institution (i.e., hospital, prison, homeless shelter) with documented transmission of a drug-resistant strain of TB.
- Ask all patients about their history of antiretroviral therapy and their history of therapies to prevent opportunistic infections. If the patient has previously received or is currently receiving these treatments, care providers should determine the drugs used, duration of treatment, history of adverse reactions, and reasons for discontinuation of treatment if treatment ended.
- Ask female patients whether they might be pregnant. Women of childbearing potential with menses more than 2 weeks late should receive a pregnancy test. (See TB Treatment for HIV-Infected Pregnant Women.)
- When clinical specimens for culture and susceptibility testing cannot be obtained from patients (e.g., young children, patients with skeletal or meningeal TB), the culture and drug-susceptibility results of the Mycobacterium tuberculosis strain isolated from the infecting source-patient should be investigated and reviewed if available so that TB treatment for the current patient can be tailored appropriately. (See TB Treatment for HIV-Infected Children.)
- If necessary, perform a Mantoux-method tuberculin skin test (TST) to help diagnose culture-negative TB.

# Chest X-Ray Examination
- Perform a chest x-ray examination. HIV-related immunosuppression reduces the inflammatory reaction and cavitation of pulmonary lesions, and therefore HIV-infected patients with pulmonary TB can have atypical findings or normal chest x-rays. Children younger than age 5 years should undergo both a posterior-anterior and a lateral chest x-ray. All other persons should receive a posterior-anterior chest x-ray; additional chest x-ray examinations should be performed at the physician's discretion. Pregnant women who are being evaluated for active TB disease should undergo a chest x-ray (with the appropriate shielding) without delay, even during the first trimester of pregnancy. Patients suspected of having extrapulmonary TB should undergo a chest x-ray to rule out pulmonary TB.

# Laboratory Tests
- Collect smears for acid-fast bacilli, cultures, and drug susceptibilities from expectorated or induced sputum samples on 3 consecutive days, preferably in the mornings. Children who are unable to produce sputum spontaneously or who cannot use the sputum induction machine should be admitted to the hospital for early morning gastric aspirates on 3 consecutive, separate days.
- Obtain a complete blood cell count, including platelets.
- Conduct chemistry panel tests, especially for liver enzyme levels (serum glutamic oxalacetic transaminase or aspartate aminotransferase [SGOT/AST] and serum glutamic pyruvic transaminase or alanine aminotransferase [SGPT/ALT]); total bilirubin; uric acid; blood urea nitrogen; and creatinine.

# Other Procedures
- Perform a baseline visual acuity exam and test for red-green color perception for all patients who will be receiving ethambutol.
- Perform baseline audiometry tests if an aminoglycoside (e.g., streptomycin, amikacin, kanamycin) or capreomycin will be administered.
- Perform additional procedures as necessary, such as bronchoscopies and bronchoalveolar lavage; biopsies and aspirates (e.g., of peripheral lymph nodes, visceral lymph nodes, liver, and bone marrow); mycobacterial culturing of nonrespiratory clinical specimens (e.g., blood, urine, pleural fluid); and radiologic evaluations other than chest x-rays (e.g., computerized tomographies, magnetic resonance imaging).

When determining the time to begin antiretroviral therapy for patients who are acutely ill with TB, clinicians and patients need to consider the existing clinical issues (e.g., drug interactions and toxicities, ability to adhere to two complex treatment regimens, and laboratory abnormalities). A staggered initiation of antituberculosis and antiretroviral treatments for patients not currently on antiretroviral therapy might promote greater adherence to the TB and HIV treatment regimens and reduce the associated drug toxicity of both regimens. This strategy might include starting antiretroviral therapy either at the end of the 2-month induction phase of TB therapy or after TB therapy is completed. When a decision is made to delay initiation of antiretroviral therapy, clinicians should monitor the patient's condition by measuring plasma HIV RNA levels (viral load) and CD4+ T-cell counts and assessing the HIV-associated clinical condition at least every 3 months (4 ), because such information will assist in decisions regarding the timing for initiating such therapy. For some patients, switching from a rifampin-based TB regimen to either a rifabutin-based or a nonrifamycin-based TB regimen will be necessary if the decision is made to start antiretroviral therapy before completion of antituberculosis therapy. Clinicians and patients should be aware that the potent effect of rifampin as a CYP450 inducer (77,80 ), which lowers the serum concentration of protease inhibitors and NNRTIs, continues up to at least 2 weeks following the discontinuation of rifampin. Thus, they should consider planning for a 2-week period between the last dose of rifampin and the first dose of protease inhibitors or NNRTIs (see TB Drug Interaction and Absorption and Table 1A of Appendix).

# Treatment Options for Patients with HIV Infection and Drug-Susceptible Pulmonary TB

A.II DOT and other strategies that promote adherence to therapy should be used for all patients with HIV-related TB.
A.II For patients who are receiving therapy with protease inhibitors or NNRTIs, the initial phase of a 6-month TB regimen consists of isoniazid, rifabutin, pyrazinamide, and ethambutol. These drugs are administered a) daily for 8 weeks or b) daily for at least the first 2 weeks, followed by twice-a-week dosing for 6 weeks, to complete the 2-month induction phase. The second phase of treatment consists of isoniazid and rifabutin administered daily or twice a week for 4 months (see Six-month RFB-based therapy in Table 1A of Appendix).

B.II For patients for whom the use of rifamycins is limited or contraindicated for any reason (e.g., intolerance to rifamycins, patient/clinician decision not to combine antiretroviral therapy with rifabutin), the initial phase of a 9-month TB regimen consists of isoniazid, streptomycin,* pyrazinamide, and ethambutol administered a) daily for 8 weeks or b) daily for at least the first 2 weeks, followed by twice-a-week dosing for 6 weeks, to complete the 2-month induction phase. The second phase of treatment consists of isoniazid, streptomycin,* and pyrazinamide administered 2-3 times a week for 7 months (see Nine-month SM-based therapy in Table 1A of Appendix).

A.I For patients who are not candidates for antiretroviral therapy, or for those patients for whom a decision is made not to combine the initiation of antiretroviral therapy with TB therapy, the preferred option continues to be a 6-month regimen that consists of isoniazid, rifampin, pyrazinamide, and ethambutol (or streptomycin). These drugs are administered a) daily for 8 weeks or b) daily for at least the first 2 weeks, followed by 2-3-times-per-week dosing for 6 weeks, to complete the 2-month induction phase. The second phase of treatment consists of isoniazid and rifampin administered daily or 2-3 times a week for 4 months. Isoniazid, rifampin, pyrazinamide, and ethambutol (or streptomycin) also can be administered three times a week for 6 months (see Six-month RIF-based therapy in Table 1A of Appendix).

D.II TB regimens consisting of isoniazid, ethambutol, and pyrazinamide (i.e., three-drug regimens that do not contain a rifamycin, an aminoglycoside, or capreomycin) should generally not be used for the treatment of patients with HIV-related TB; if these regimens are used for the treatment of TB, the minimum duration of therapy should be 18 months (or 12 months after documented culture conversion).

A.II Pyridoxine (vitamin B6) (25-50 mg daily or 50-100 mg twice weekly) should be administered to all HIV-infected patients who are undergoing TB treatment with isoniazid, to reduce the occurrence of isoniazid-induced side effects in the central and peripheral nervous system.

*Every effort should be made to continue administering streptomycin for the total duration of treatment or for at least 4 months after culture conversion (approximately 6-7 months from the start of treatment). Some experts suggest that in situations in which streptomycin is not included in the regimen for all of the recommended 9 months, ethambutol should be added to the regimen to replace streptomycin, and the duration of treatment should be prolonged from 9 months to 12 months. Alternatives to streptomycin are the injectable drugs amikacin, kanamycin, and capreomycin.
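The pyridoxine recommendation above ties the vitamin B6 dose to the isoniazid schedule; a trivial lookup captures the pairing (illustrative only).

```python
# Trivial lookup pairing pyridoxine (vitamin B6) with the isoniazid
# schedule, per the A.II recommendation above. Illustrative only.

PYRIDOXINE_BY_ISONIAZID_SCHEDULE = {
    "daily": "25-50 mg daily",
    "twice weekly": "50-100 mg twice weekly",
}

print(PYRIDOXINE_BY_ISONIAZID_SCHEDULE["daily"])
print(PYRIDOXINE_BY_ISONIAZID_SCHEDULE["twice weekly"])
```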
E.II Because CDC's most recent recommendations for the use of antiretroviral therapy strongly advise against interruptions of therapy,* and because alternative TB treatments that do not contain rifampin are available, previous antituberculosis therapy options that involved stopping protease inhibitor therapy to allow the use of rifampin (Option I and Option II) are no longer recommended.

*To minimize the emergence of drug-resistant HIV strains, if any antiretroviral medication must be temporarily discontinued for any reason, clinicians and patients should be aware of the theoretical advantage of stopping all antiretroviral agents simultaneously, rather than continuing the administration of one or two of these agents alone (4 ).

# Medications and Doses for Treatment of TB

No rating When rifabutin is used concurrently with indinavir, nelfinavir, or amprenavir, the recommended daily dose of rifabutin should be decreased from 300 mg to 150 mg (Table 2A of Appendix).

No rating The dose of rifabutin recommended for twice-weekly administration is 300 mg, and this dose recommendation does not change if rifabutin is used concurrently with indinavir, nelfinavir, or amprenavir (Table 2A of Appendix).

No rating Preliminary drug interaction studies suggest that when rifabutin is used concurrently with efavirenz, the dose of rifabutin for both daily and twice-weekly administration should be increased from 300 mg to 450 mg.

No rating Three-times-per-week administration of rifabutin used in combination with antiretroviral therapy has not been studied, and thus a recommendation for adjustment of dosages cannot currently be made.

No rating Experts do not know whether the daily dose of rifabutin should be reduced when this drug is used concurrently with either soft-gel saquinavir (Fortovase™) or nevirapine.

No rating No modifications in the usually recommended doses of isoniazid, ethambutol, pyrazinamide, or streptomycin (Table 2A of Appendix) are necessary if these drugs are used concurrently with protease inhibitors, NNRTIs, or nucleoside reverse transcriptase inhibitors (NRTIs).

No rating The safety and effectiveness of rifapentine (Priftin®), a rifamycin newly approved by the U.S. Food and Drug Administration for the treatment of pulmonary tuberculosis, have not been established for patients infected with HIV. Administration of rifapentine to patients with HIV-related TB is not currently recommended.

# Duration of TB Treatment

A.II The minimum duration of short-course rifabutin-containing TB treatment regimens is 6 months, to complete a) at least 180 doses (one dose per day for 6 months) or b) 14 induction doses (one dose per day for 2 weeks) followed by 12 induction doses (two doses per week for 6 weeks), plus 36 continuation doses (two doses per week for 18 weeks).*

*Three-times-per-week rifabutin regimens, used in combination with antiretroviral therapy, have not been studied.

A.II The final decision on the duration of therapy should consider the patient's response to treatment. For patients with delayed response to treatment (see Box 2), the duration of rifamycin-based regimens should be prolonged from 6 months to 9 months (or to 4 months after culture conversion is documented).

A.II The minimum duration of nonrifamycin, streptomycin-based TB treatment regimens is 9 months, to complete a) at least 60 induction doses (one dose per day for 2 months) or b) 14 induction doses (one dose per day for 2 weeks) followed by 12-18 induction doses (two to three doses per week for 6 weeks) plus either 60 continuation doses (two doses per week for 30 weeks) or 90 continuation doses (three doses per week for 30 weeks).
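The dose counting above reduces to simple arithmetic. The sketch below is an illustration only: the regimen keys and helper names are invented, and the planned totals transcribe the counts given in the duration recommendations here (completion is judged by doses administered, not elapsed time, per the interruption guidance that follows).

```python
# Illustrative dose arithmetic for the duration recommendations above.
# Planned totals transcribe the counts in the text; keys and function
# names are invented for this sketch.

PLANNED_DOSES = {
    # Rifabutin-based, 6 months
    "rifabutin, all daily": 180,                     # one dose/day for 6 months
    "rifabutin, intermittent": 14 + 12 + 36,         # per the counts above
    # Streptomycin-based, 9 months
    "streptomycin, intermittent 2/wk": 14 + 12 + 60, # 2/wk induction and continuation
    "streptomycin, intermittent 3/wk": 14 + 18 + 90, # 3/wk induction and continuation
}

STANDARD_MONTHS = {"rifamycin-based": 6, "streptomycin-based": 9}
PROLONGED_MONTHS = {"rifamycin-based": 9, "streptomycin-based": 12}

def therapy_complete(regimen, doses_taken):
    """Completion is defined by doses administered, not elapsed time."""
    return doses_taken >= PLANNED_DOSES[regimen]

def planned_months(regimen_class, delayed_response):
    """Delayed responders (see Box 2) get the prolonged duration."""
    table = PROLONGED_MONTHS if delayed_response else STANDARD_MONTHS
    return table[regimen_class]

print(therapy_complete("rifabutin, all daily", 172))              # False: 8 doses short
print(planned_months("rifamycin-based", delayed_response=True))   # 9
```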
# A.II When making the final decision on the duration of therapy, clinicians should consider the patient's response to treatment. For patients with delayed response to treatment (see Box 2), the duration of streptomycinbased regimens should be prolonged from 9 months to 12 months (or to 6 months after culture conversion is documented). # A.III Interruptions in therapy because of drug toxicity or other reasons should be taken into consideration when calculating the end-of-therapy date for individual patients. Completion of therapy is based on total number of medication doses administered and not on duration of therapy alone. A.III Reinstitution of therapy for patients with interrupted TB therapy might require a continuation of the regimen originally prescribed (as long as needed to complete the recommended duration of the particular regimen) *Three-times-per-week rifabutin regimens, used in combination with antiretroviral therapy, have not been studied. # BOX 2. Components of the monthly medical evaluation for human immunodeficiency virus-infected patients undergoing treatment for active tuberculosis For patients infected with human immunodeficiency virus (HIV) who are undergoing treatment for active tuberculosis (TB), clinicians should include the following components in the monthly evaluation: - Once a month, evaluate symptoms and signs of TB (response to treatment) by conducting a) a physical examination (the nature and extent of this evaluation will depend on the patient's symptoms and the site of disease) and b) for patients with pulmonary TB, an examination by smear and culture of an expectorated or induced sputum specimen until cultures are no longer positive for Mycobacterium tuberculosis. - Perform as necessary for individual patients laboratory tests such as: complete blood cell count, platelet count, and tests for serum glutamic oxalacetic transaminase or aspartate aminotransferase (SGOT/AST) and serum glutamic pyruvic transaminase or alanine aminotransferase (SGPT/ALT), alkaline phosphatase, total bilirubin, uric acid, blood urea nitrogen, and creatinine. - To assist in the decision about the duration of TB treatment, investigate the possibility of a delayed response to treatment (Table 1A of Appendix). Delayed response to treatment should be suspected (and in most cases treatment duration should be prolonged) if by the end of the 2-month induction phase of therapy, patients a) continue to be culture-positive for M. tuberculosis or b) do not experience resolution of signs or symptoms of TB or do experience progression of signs or symptoms of TB (e.g., persistent fever, progressive weight loss, or increase in size of lymph nodes, abscesses, or other tuberculous lesions, none of which can be accounted for by a disease other than TB). (See Duration of TB Treatment.) - Some factors potentially associated with TB treatment failure are a large mycobacterial load and extensive lung cavitation at baseline, nonadherence with the drug regimen (even among patients assumed to be on DOT), inappropriately low medication doses, and impaired absorption of drugs. Immediately institute corrective measures for those factors amenable to intervention. - Because patients with HIV-infection often are treated with multiple drugs in addition to antituberculosis drugs, at each visit, review all medications that the patient is taking and assess any change in medications for potential drug interactions with TB medications. 
Efforts to manage these potential problems related to drug interactions require the coordinated efforts of care givers for HIV and TB disease. (See TB Drug Interaction and Absorption.)
- Because several antituberculosis drugs have hepatotoxicity as a potential side effect (Table 2A of Appendix), advise all persons taking TB medications about the symptoms consistent with hepatitis (e.g., anorexia, nausea, vomiting, abdominal pain, jaundice) and instruct them to discontinue all TB medications immediately and seek medical attention promptly if they exhibit such symptoms. These patients usually will need an examination by a physician, liver function tests, and a planned strategy for restarting TB treatment.
- If ethambutol is administered, perform a monthly visual acuity exam and test for red-green color perception.
- If streptomycin is administered, perform audiometry and renal function tests as needed.
# Management of the Coadministration of TB and HIV Therapies, Including the Potential for Paradoxical Reactions
When antituberculosis treatment has been started, all patients should be monitored for response to antituberculosis therapy, drug-related toxicity, and drug interactions. Detailed recommendations for managing antiretroviral therapy are published elsewhere (4), and consultation with experts in this area is highly recommended. The frequency and type of most TB medication side effects are similar among TB patients with and without HIV infection (30,65,67). When caring for HIV-infected persons, clinicians must be aware of the following problems that can result from the administration of TB medications: a) patients might have a higher predisposition toward isoniazid-related peripheral neuropathy; b) evaluation of dermatologic reactions related to TB medications might be complicated because HIV-infected patients are subject to several dermatologic diseases related to HIV disease or to medications used for other treatment or prophylaxis reasons; and c) patients undergoing concurrent therapy with rifabutin and protease inhibitors or NNRTIs are at risk for rifabutin toxicity associated with increased serum concentrations of this drug. The reported adverse events associated with rifabutin toxicity include arthralgias, uveitis, and leukopenia (86,101-103). Detailed recommendations for managing these adverse reactions are published elsewhere and should be consulted (2,64).
Paradoxical reactions - temporary exacerbations of symptoms, signs, or radiographic manifestations of TB (e.g., recurrence of fever, enlarged lymph nodes, appearance of cavitation in a previously normal chest x-ray) among patients who have experienced a good clinical and bacteriologic response to antituberculosis therapy - have been reported among patients coinfected with HIV who have restored immune function because of antiretroviral therapy (76). The timing and severity of paradoxical reactions associated with antiretroviral therapy are not well understood; therefore, experts do not know whether the occurrence of these reactions should affect the timing of initiating or changing antiretroviral therapy when such therapy is indicated for a patient with HIV infection.
However, because an association between paradoxical reactions and initiation of antiretroviral therapy has been noted, clinicians should be aware of this possibility and discuss the risks with patients undergoing therapy for active TB.
# Monthly Medical Evaluation and the Diagnosis and Management of Paradoxical Reactions
A.II All patients should receive a monthly clinical evaluation (see Box 2) to monitor their response to treatment, adherence to treatment, and medication side effects (Table 2A of Appendix). During the early days of therapy, the interval between these evaluations might be shorter (e.g., every 2 weeks).
A.II Patients suspected of having paradoxical reactions should be evaluated to rule out other causes for their clinical presentation (e.g., TB treatment failure) before attributing their signs and symptoms to a paradoxical reaction.
C.III Some experts recommend that to avoid paradoxical reactions, clinicians should delay the initiation of or changes in antiretroviral therapy until the signs and symptoms of TB are well controlled (possibly 4-8 weeks from the initiation of TB therapy).
No rating For patients with a paradoxical reaction in whom the symptoms are not severe or life-threatening, the management of these reactions might consist of symptomatic therapy and no change in antituberculosis or antiretroviral therapy. For patients with a paradoxical reaction associated with severe or life-threatening clinical manifestations (e.g., uncontrollable fever, airway compromise from enlarging lymph nodes, enlarging serosal fluid collections, sepsis-like syndrome), the management might include hospitalization and possibly a time-limited course of corticosteroids (e.g., prednisone started daily at a dose of 60-80 mg and reduced after 1 or 2 weeks, with the resolution of symptoms as a guide; in most cases, corticosteroid therapy should last no more than 4-6 weeks).
# TB Drug Interaction and Absorption
E.II Given the expected drug interactions that would result in markedly decreased serum levels of antiretroviral agents, and given the overlapping toxicities, the coadministration of rifampin with any of the protease inhibitors or with NNRTIs, as well as the coadministration of rifabutin with ritonavir, hard-gel saquinavir (Invirase™), or delavirdine, is contraindicated.
A.II The potent effect of rifampin as a CYP450 inducer, which lowers the serum concentration of protease inhibitors and NNRTIs, is expected to continue for at least 2 weeks following the discontinuation of rifampin. Therefore, to diminish the likelihood of adverse effects on drug metabolism, clinicians should plan the start of therapy with protease inhibitors or NNRTIs at least 2 weeks after the date of the last dose of rifampin.
A.II Rifabutin is a less potent CYP450 inducer than rifampin and thus can be used (with adjustments in dosages) concurrently with the NNRTIs nevirapine or efavirenz or with certain protease inhibitors (e.g., indinavir, nelfinavir, and possibly soft-gel saquinavir and amprenavir).
No rating Indinavir serum concentrations are decreased by rifabutin-related induction of hepatic cytochrome P450; therefore, when indinavir is used in combination with rifabutin, the dose of indinavir usually is increased from 800 mg every 8 hours to 1,200 mg every 8 hours.
No rating Nelfinavir serum concentrations are also decreased when nelfinavir is used in combination with rifabutin (Table 1A of Appendix); however, the resultant metabolite of nelfinavir is known to be active against HIV.
Nevertheless, some experts suggest increasing the dose of nelfinavir from 750 mg three times per day to 1,000 mg three times per day when used in combination with rifabutin.
No rating Experts do not know whether dose modifications are needed for soft-gel saquinavir (Fortovase™), amprenavir, nevirapine, or efavirenz if these agents are used in combination with rifabutin.
No rating Many other medications commonly used by patients with HIV infection have drug interactions with the rifamycins (rifampin or rifabutin) of sufficient magnitude to require interventions such as dose adjustments or use of alternative therapies. Some examples of these drugs are hormonal contraceptives, dapsone, ketoconazole, fluconazole, itraconazole, narcotics (including methadone), anticoagulants, corticosteroids, cardiac glycosides, hypoglycemics (sulfonylureas), diazepam, beta-blockers, anticonvulsants, and theophylline.
No rating Malabsorption of antituberculosis drugs has been demonstrated in some patients with HIV infection, and in some cases, it has been associated with TB treatment failures and the selection of drug-resistant M. tuberculosis bacilli (141-145). Therapeutic drug monitoring has been advocated by some experts as an adjunct in the management of HIV-related TB (146). This approach might be useful when evaluating patients with TB treatment failure or relapse and in the treatment of multidrug-resistant (MDR) TB. However, the role of therapeutic drug monitoring in the routine management of TB among HIV-infected patients has not been established, and such monitoring is not presently recommended.
# Treatment of TB in Special Situations
The following general treatment recommendations address special situations such as drug-resistant forms of HIV-related TB, TB among HIV-infected pregnant women, TB among HIV-infected children, and extrapulmonary HIV-related TB. Detailed recommendations for managing these patients are published elsewhere (2,64,147-150), and consultation with experts in these areas is highly recommended.
# Treatment of Drug-Resistant TB
A.II TB disease resistant to isoniazid only. The treatment regimen should generally consist of a rifamycin (rifampin or rifabutin), pyrazinamide, and ethambutol for the duration of treatment. Intermittent therapy administered twice weekly can be used following at least 2 weeks (14 doses) of daily induction therapy (see Duration of TB Treatment). The recommended duration of treatment is 6-9 months, or 4 months after culture conversion. Isoniazid is generally stopped when resistance (>1% of bacilli resistant to 1.0 µg/mL of isoniazid) to this drug is discovered; however, when low-level resistance is discovered (>1% of bacilli resistant to 0.2 µg/mL of isoniazid, but no resistance to 1.0 µg/mL of isoniazid), some experts suggest continuing to use isoniazid as part of the treatment regimen. Because the development of acquired rifamycin resistance would result in MDR TB, clinicians should carefully supervise and manage TB treatment for these patients.
# TB Treatment for HIV-Infected Children
For HIV-infected children with TB, recommended treatment regimens include isoniazid and rifampin. If drug-susceptibility results are not available, a four-drug regimen (e.g., isoniazid, rifamycin, pyrazinamide, and ethambutol) for 2 months, followed by intermittent administration of isoniazid and a rifamycin for 4 months, is recommended. Considerations for antiretroviral therapy for children and adolescents have been published elsewhere (154).
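The distinction drawn above between high-level and low-level isoniazid resistance follows the proportion method at two critical concentrations. The decision rule can be written out as a short, purely illustrative sketch; the thresholds are taken from the text, but the function name and return labels are hypothetical.

```python
# Illustrative encoding of the isoniazid-resistance decision rule described
# above (proportion method at 0.2 and 1.0 ug/mL critical concentrations).
# Not a clinical tool; names and labels are hypothetical.

def classify_inh_resistance(pct_resistant_0_2, pct_resistant_1_0):
    """Classify isoniazid resistance from the percentage of bacilli growing
    at each concentration; >1% growth defines resistance at that level."""
    if pct_resistant_1_0 > 1.0:
        # High-level resistance: isoniazid is generally stopped.
        return "high-level resistance: isoniazid generally stopped"
    if pct_resistant_0_2 > 1.0:
        # Low-level resistance only: some experts continue isoniazid.
        return "low-level resistance: some experts continue isoniazid"
    return "susceptible"

print(classify_inh_resistance(5.0, 0.2))  # low-level resistance
print(classify_inh_resistance(5.0, 3.0))  # high-level resistance
```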
# TB Treatment for HIV-Infected Patients with Extrapulmonary TB
The basic principles that support the treatment of pulmonary TB in HIV-infected patients also apply to extrapulmonary forms of the disease. Most extrapulmonary forms of TB (including TB meningitis, tuberculous lymphadenitis, pericardial TB, pleural TB, and disseminated or miliary TB) are more common among persons with advanced-stage HIV disease (155,156) than among patients with asymptomatic HIV infection. The drug regimens and treatment durations that are recommended for treating pulmonary TB in HIV-infected adults and children (Table 1A of Appendix) are also recommended for treating most patients with extrapulmonary disease. However, for certain forms of extrapulmonary disease, such as meningeal, bone, and joint TB, using a rifamycin-based regimen for at least 9 months is generally recommended.
# Latent M. tuberculosis Infection
# Clinical and Public Health Principles
When caring for persons with HIV infection, clinicians should make aggressive efforts to identify those who also are infected with M. tuberculosis. Because the reliability of the tuberculin skin test (TST) can diminish as the CD4+ T-cell count declines, TB screening with TST should be performed as soon as possible after HIV infection is diagnosed. Because the risk of infection and disease with M. tuberculosis is particularly high among HIV-infected contacts of persons with infectious pulmonary or laryngeal TB, these persons must be evaluated for TB as soon as possible after learning of exposure to a patient with infectious TB.
Health-care providers, administrators, and TB controllers must coordinate their work and establish TB screening initiatives in settings where a) the prevalence of infection with M. tuberculosis among persons with HIV infection is expected to be high and b) referral for medical evaluation and TB preventive therapy can be accomplished. Such settings include prisons, jails, prenatal-care programs, drug treatment programs, syringe exchange programs, HIV specialty clinics, acute-care hospitals serving populations at high risk of TB, AIDS patient group residences, some community-health centers, psychiatric institutions, mental health residences, and homeless shelters. All HIV counseling and testing sites must have mechanisms in place to ensure that persons identified with HIV infection receive tuberculin skin testing. TB control programs in jurisdictions that have HIV reporting requirements should make efforts to ensure that all persons with HIV infection have TSTs.
A.II Because of the complexity of problems associated with active TB disease in HIV-infected persons, and as part of the efforts to control and eliminate TB in the United States, all HIV-infected persons identified as latently infected with M. tuberculosis should be offered preventive therapy.
# BOX 3. Components of the baseline medical evaluation for tuberculosis preventive treatment for patients infected with human immunodeficiency virus
When conducting a medical evaluation to rule out active tuberculosis (TB) and administer preventive treatment for patients infected with human immunodeficiency virus (HIV), clinicians should include the following questions and assessments:
# Medical History
- Ask patients about their history of recent close contact with a person who has TB disease. If the infecting source-patient is known, his or her culture and drug-susceptibility results should be investigated and reviewed so that the preventive therapy regimen can be tailored appropriately (see Treatment of Latent M. tuberculosis Infection in Special Situations).
- Assess patients for their risk of increased toxicity associated with the medications used for TB preventive therapy (e.g., history of excessive alcohol ingestion, liver disease, hepatitis, chronic use of other medications).
- Assess patients for contraindications to TB preventive therapy (Table 3A of Appendix).
- When persons have previously received TB preventive treatment, determine what drugs were used, the duration of treatment, any history of adverse reactions (Table 2A of Appendix), and adherence to preventive therapy.
- Ask patients about their history of antiretroviral therapy and their history of therapies to prevent opportunistic infections. If a patient has ever received or is receiving any of these treatments, the care provider must determine the drugs used, duration of treatment, history of adverse reactions, potential for drug interactions with TB medications, and any reasons for discontinuation of treatment if treatment ended.
# Chest X-Ray Examination
- A chest x-ray is indicated for all persons being considered for preventive therapy, to rule out active pulmonary TB disease. Children younger than 5 years old should undergo both a posterior-anterior and a lateral chest x-ray. All other individuals should receive a posterior-anterior chest x-ray only; additional x-rays should be performed at the physician's discretion. Pregnant women who have a positive TST or who have negative TST results but are recent contacts of a person who has infectious TB disease should undergo a chest x-ray (with appropriate shielding) without delay, even during the first trimester of pregnancy.
# Laboratory Tests
- Obtain a complete blood cell count, and also obtain a platelet count if the patient will be treated with a rifamycin (rifampin or rifabutin).
- Conduct chemistry panel tests, especially for liver enzyme levels (serum glutamic oxalacetic transaminase or aspartate aminotransferase [SGOT/AST] and serum glutamic pyruvic transaminase or alanine aminotransferase [SGPT/ALT]) and total bilirubin. In addition, check uric acid levels if the patient will be treated with pyrazinamide.
# BOX 4. Components of the monthly medical evaluation for human immunodeficiency virus-infected patients undergoing preventive treatment for latent Mycobacterium tuberculosis infection
For patients infected with human immunodeficiency virus (HIV) who are undergoing preventive treatment for latent Mycobacterium tuberculosis infection, clinicians should include the following components in the monthly evaluation:
- At every monthly evaluation, review signs and symptoms potentially related to active tuberculosis (TB) disease as well as signs and symptoms of drug reactions potentially related to antituberculosis drugs (Table 2A of Appendix).
- At the start of preventive therapy and at each subsequent monthly visit, remind patients of the need to immediately discontinue preventive therapy and seek prompt medical attention if signs and symptoms of hepatotoxicity appear (e.g., anorexia, abdominal pain, nausea, vomiting, change in color of urine and feces, jaundice).
- Patients with HIV infection often are treated with multiple drugs in addition to antituberculosis drugs. At each visit, all medications that a patient is taking should be reviewed and assessed for potential drug interactions with TB medications.
Some examples of these drugs are hormonal contraceptives, ketoconazole, fluconazole, itraconazole, narcotics (including methadone), anticoagulants, corticosteroids, cardiac glycosides, hypoglycemics (sulfonylureas), diazepam, beta-blockers, anticonvulsants, and theophylline. Efforts to manage these potential problems related to drug interactions require the coordinated efforts of HIV and TB patients' care givers.
# Treatment of Latent M. tuberculosis Infection in Special Situations
The choices for preventive treatment for persons who are likely to be infected with a strain of M. tuberculosis resistant to both isoniazid and rifamycins are published elsewhere (161). In general, the recommended preventive therapy regimens for these persons include the use of a combination of at least two antituberculosis drugs to which the infecting strain is believed to be susceptible (e.g., ethambutol and pyrazinamide, or levofloxacin and ethambutol). The clinician should review the drug-susceptibility pattern of the M. tuberculosis strain isolated from the infecting source-patient before choosing a preventive therapy regimen.
A.III For HIV-infected women who are candidates for TB preventive therapy, the initiation or discontinuation of preventive therapy should not be delayed on the basis of pregnancy alone, even during the first trimester. A 9-month regimen of isoniazid administered daily or twice a week is the only recommended option (Table 3A of Appendix).
No rating For HIV-infected children who are candidates for TB preventive therapy, a 12-month regimen of isoniazid administered daily is recommended by the American Academy of Pediatrics (162).
# Follow-up of HIV-Infected Persons Who Have Completed Preventive Therapy
# CONCLUSIONS
Implementing TB prevention and control strategies for persons infected with HIV has always been important and is even more critical now that a larger selection of new, more potent antiretroviral drugs has enabled clinicians to implement therapies that improve the health and prolong the lives of HIV-infected persons. These antiretroviral therapeutic strategies often include the use of drugs such as the protease inhibitors or the nonnucleoside reverse transcriptase inhibitors (NNRTIs), which, because of drug interactions, cannot be used concurrently with certain other drugs (e.g., rifampin). Thus, to improve the diagnosis and management of TB and HIV coinfection, TB control programs need to be prepared for the following challenges:
- Ensure that all patients with TB receive HIV counseling and testing either on site or elsewhere. Patients with latent M. tuberculosis infection who are at risk for HIV infection also should receive HIV counseling and testing.
- Initiate prompt and effective antituberculosis treatment (ideally with directly observed therapy) for all patients diagnosed with HIV-related TB.
- Promote optimal antiretroviral therapy for patients with M. tuberculosis and HIV infection.
- Become knowledgeable about the indications, potential dosing adjustments, and monitoring requirements of a rifabutin-containing regimen (or an alternative regimen that does not contain rifamycin) for the treatment of TB in patients who are undergoing antiretroviral therapy with protease inhibitors or NNRTIs.
- Identify potential risk factors for TB treatment failure or relapse as well as the potential for paradoxical treatment reactions, and learn how to recognize and manage these outcomes.
- Follow procedures to ensure early recognition and implementation of effective treatment for drug-resistant TB.
- Recognize that previous options that involved stopping protease inhibitor therapy to allow the use of rifampin for TB treatment are no longer recommended, for two reasons: a) the most recent guidelines for the use of antiretroviral therapy advise against interrupting HIV therapy, and b) alternatives for TB therapy that do not contain rifampin are available.
- Coordinate efforts and establish TB screening initiatives in settings where a) the prevalence of infection with M. tuberculosis among persons with HIV infection is expected to be high and b) referral for medical evaluation and therapy for active or latent TB is possible.
- Be aware of changes in options for TB preventive therapy. In addition to recommendations for using 9 months of isoniazid daily or twice a week, new short-course multidrug regimens (e.g., a 2-month course of a rifamycin such as rifabutin or rifampin with pyrazinamide) can be prescribed for HIV-infected patients with latent M. tuberculosis infection.
When faced with treatment choices, TB controllers and clinicians can use these recommendations to make informed decisions based on the most current research results available, keeping in mind that as new antiretroviral and antituberculosis agents become available, these guidelines will likely change. The aim of these recommendations is to help reduce TB treatment failures, prevent cases of drug-resistant TB, diminish the adverse effects that TB has on HIV replication, and support efforts not only to control TB, but to eliminate it from the United States.
*No contraindication exists for the use of rifabutin (RFB) with NRTIs. If the patient also is taking indinavir, nelfinavir, or amprenavir, the daily dose of RFB is decreased from 300 mg to 150 mg. The twice-weekly dose of RFB (300 mg) remains unchanged if the patient is also taking these protease inhibitors. If the patient also is taking efavirenz, the daily or twice-weekly dose of RFB is increased from 300 mg to 450 mg. Three-times-a-week administration of RFB used in combination with antiretroviral therapy has not been studied.
†The concurrent use of RFB is contraindicated with ritonavir, saquinavir (Invirase™), and delavirdine. Information regarding the use of rifabutin with saquinavir (Fortovase™), amprenavir, efavirenz, and nevirapine is limited.
§Not applicable. If nelfinavir, indinavir, or amprenavir is administered with RFB, blood concentrations of these protease inhibitors decrease. Thus, when RFB is used concurrently with any of these three drugs, the daily dose of RFB is reduced from 300 mg to 150 mg (the twice-weekly dose of RFB is unchanged, however).
¶NA=not applicable. If efavirenz is administered with RFB, blood concentrations of RFB decrease. Thus, when RFB is used concurrently with efavirenz, the dose of RFB for both daily and twice-weekly administration should be increased from 300 mg to 450 mg.
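The rifabutin dose adjustments summarized in these footnotes amount to a small lookup table keyed on the companion antiretroviral agent. The following minimal sketch is illustrative only; the dictionary contents are transcribed from the footnotes above, while the structure and names are hypothetical.

```python
# Illustrative lookup of the rifabutin (RFB) dose adjustments described above.
# Doses are in mg; None marks combinations the text calls contraindicated.
# Not a prescribing tool.

RFB_DOSE = {
    # companion agent: (daily dose, twice-weekly dose)
    "none":        (300, 300),
    "indinavir":   (150, 300),
    "nelfinavir":  (150, 300),
    "amprenavir":  (150, 300),
    "efavirenz":   (450, 450),
    "ritonavir":   None,  # contraindicated
    "delavirdine": None,  # contraindicated
}

def rifabutin_dose(agent, schedule="daily"):
    """Return the adjusted RFB dose, or a message if the combination is
    contraindicated or not characterized in the text above."""
    entry = RFB_DOSE.get(agent)
    if entry is None:
        return f"rifabutin with {agent}: contraindicated or not established"
    daily, twice_weekly = entry
    return daily if schedule == "daily" else twice_weekly

print(rifabutin_dose("indinavir"))                  # 150
print(rifabutin_dose("indinavir", "twice-weekly"))  # 300
print(rifabutin_dose("ritonavir"))                  # contraindicated
```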
These guidelines update previous CDC recommendations for the diagnosis, treatment, and prevention of tuberculosis (TB) among adults and children coinfected with human immunodeficiency virus (HIV) in the United States. The most notable changes in these guidelines reflect both the findings of clinical trials that evaluated new drug regimens for treating and preventing TB among HIV-infected persons and recent advances in the use of antiretroviral therapy. In September 1997, when CDC convened a meeting of expert consultants to discuss current information about HIV-related TB, special emphasis was given to issues related to coadministration of TB therapy and antiretroviral therapy and how to translate this information into management guidelines. Thus, these guidelines are based on the following scientific principles:
• Early diagnosis and effective treatment of TB among HIV-infected patients are critical for curing TB, minimizing the negative effects of TB on the course of HIV, and interrupting the transmission of Mycobacterium tuberculosis to other persons in the community.
• All HIV-infected persons at risk for infection with M. tuberculosis must be carefully evaluated and, if indicated, administered therapy to prevent the progression of latent infection to active TB disease and avoid the complications associated with HIV-related TB.
• All HIV-infected patients undergoing treatment for TB should be evaluated for antiretroviral therapy, because most patients with HIV-related TB are candidates for concurrent administration of antituberculosis and antiretroviral drug therapies. However, the use of rifampin with protease inhibitors or nonnucleoside reverse transcriptase inhibitors is contraindicated. Ideally, the management of TB among HIV-infected patients taking antiretroviral drugs requires a) directly observed therapy, b) availability of experienced and coordinated TB/HIV care givers, and in most situations, c) use of a TB treatment regimen that includes rifabutin instead of rifampin.
Because alternatives to the use of rifampin for antituberculosis treatment are now available, the previously recommended practice of stopping protease inhibitor therapy to allow the use of rifampin for TB treatment is no longer recommended for patients with HIV-related TB. The use of rifabutin-containing antituberculosis regimens should always include an assessment of the patient's response to treatment to decide the appropriate duration of therapy (i.e., 6 months or 9 months). Physicians and patients also should be aware that paradoxical reactions might occur during the course of TB treatment when antiretroviral therapy restores immune function. Adding to CDC's current recommendations for administering isoniazid preventive therapy to HIV-infected persons with positive tuberculin skin tests and to HIV-infected persons who were exposed to patients with infectious TB, this report also describes in detail the use of new short-course (i.e., 2 months) multidrug regimens (e.g., a rifamycin, such as rifampin or rifabutin, combined with pyrazinamide) to prevent TB in persons with HIV infection. A continuing education component for U.S. physicians and nurses is included.
# INTRODUCTION
These guidelines update previous CDC recommendations for treating and preventing active tuberculosis (TB) among adults and children coinfected with human immunodeficiency virus (HIV) (1-3).
The most notable changes in these guidelines reflect both the recent advances in the use of antiretroviral therapy and the findings of clinical trials that evaluated new drug regimens for the treatment and prevention of TB among HIV-infected persons. Antiretroviral therapy is discussed in the context of TB treatment only; more detailed information about antiretroviral therapy is published elsewhere (4).
In September 1997, CDC convened a meeting of expert consultants who reviewed and considered background information about HIV-related TB in the United States and the scientific principles of therapy for both diseases (Part I of this report). The consultants then used this review as the basis for updating the recommendations for HIV-infected patients with TB (Part II). During their review of the scientific principles of therapy, the expert consultants focused on epidemiologic and clinical interactions between Mycobacterium tuberculosis infection and HIV infection, considering the frequency of coexisting TB and HIV infection and rates of drug-resistant TB among patients infected with HIV in the United States; the copathogenicity of TB and HIV disease; the potential for a poorer outcome of TB therapy and paradoxical reactions to TB treatment among HIV-infected patients; drug interactions between rifampin used for TB therapy and agents commonly used in antiretroviral therapy; use of TB treatment regimens that do not contain rifampin; and results of clinical trials of therapies to prevent TB among HIV-infected persons. Thus, in addition to CDC's current recommendations, these new guidelines include information about the following topics:
• directly observed therapy for all patients with HIV-related TB;
• rifabutin-containing antituberculosis regimens (or a streptomycin-based alternative regimen that does not contain rifamycin) for treating TB among patients taking antiretroviral drugs that have interactions with rifampin;
• monitoring responses to antituberculosis treatment to decide about the appropriate duration of TB therapy;
• occurrence and management of paradoxical reactions during TB treatment, when immune function is restored because of antiretroviral therapy;
• use of 9 months of isoniazid daily or twice weekly for the treatment of latent M. tuberculosis infection;
• short-course multidrug therapy for latent M. tuberculosis infection; and
• special considerations that apply to children and pregnant women with HIV-related TB.
Health-care professionals need to be familiar with these new guidelines to ensure the use of the most effective management strategies for TB patients infected with HIV, while concurrently promoting optimal antiretroviral therapy for these patients. To help clinicians make informed treatment decisions based on the most current research results, the expert consultants have given each recommendation an evidence-based rating similar to the ratings used in previously issued guidelines (4,5). However, these recommendations are not intended to substitute for the judgment of an expert physician. When possible, the treatment of TB in HIV-infected persons should be directed by (or done in consultation with) a physician with extensive experience in the care of patients with TB and HIV disease. The implementation of these recommendations will help prevent cases of drug-resistant TB, reduce TB treatment failures, and diminish the adverse effects that TB has on HIV replication.
Moreover, these guidelines will contribute to efforts to control TB and eliminate it from the United States by minimizing the likelihood of M. tuberculosis transmission, which will prevent the occurrence of new cases of TB. In future years, health-care professionals can expect changes in the recommendations regarding the therapeutic options used to prevent and treat TB among patients infected with HIV. These changes will reflect the availability of new antiretroviral and antituberculosis agents, new information about existing agents, and subsequent changes in CDC's guidelines for the use of antiretroviral therapy for persons infected with HIV.
Multiple copies of this report and all updates are available from the Office of Communications, National Center for HIV, STD, and TB Prevention, CDC, 1600 Clifton Road, Mail Stop E-06, Atlanta, GA 30333. The report also is posted on the CDC Division of TB Elimination Internet website at <http://www.cdc.gov/nchstp/tb> and the MMWR website at <http://www.cdc.gov/epo/mmwr/mmwr.html>. Readers should consult these sources regularly for updates in the guidelines.
# PART I. BACKGROUND AND SCIENTIFIC RATIONALE
# Frequency of Coexisting TB and HIV Infection and Disease in the United States
In the United States, epidemiologic evidence indicates that the HIV epidemic contributed substantially to the increased numbers of TB cases in the late 1980s and early 1990s (6,7). Overlap between the acquired immunodeficiency syndrome (AIDS) and TB epidemics continues to result in increases in TB morbidity. Analysis of national HIV-related TB surveillance data is limited by incomplete reporting of HIV status for persons with TB. As an alternative, state health department personnel have compared TB and AIDS registries to help estimate the proportion of persons reported with TB who are also infected with HIV. In the most recent comparison conducted by the 50 states and Puerto Rico, 14% of persons with TB in 1993-1994 (27% among those aged 25-44 years) also appeared in the AIDS registry (8). This proportion of TB patients with AIDS is believed to be a minimum estimate for the United States and might represent an increase over the proportion of TB patients identified as having TB and AIDS in 1990 (9%) (6). During 1993-1994, most persons with TB and AIDS (80%) were found in eight reporting areas: New York City, California, Florida, Georgia, Illinois, New Jersey, New York, and Texas (8).
In prospective epidemiologic studies, investigators have estimated that the annual rate of TB disease among untreated, tuberculin skin-test (TST)-positive, HIV-infected persons in the United States ranges from 1.7 to 7.9 TB cases per 100 person-years (Table 1) (9-11). The variability observed in these studies mirrors the differences in TB prevalence observed for different U.S. populations (i.e., the highest case rate was found in a study of a New York City population of intravenous drug users at a time when the incidence of TB was high and increasing [9], and the lowest case rate was evident in a community-based cohort of persons enrolled in a study of the pulmonary complications of HIV infection at a time and in a population in which the incidence of TB was relatively low [11]). However, in all of these studies, the rate of TB disease among HIV-infected, TST-positive persons was approximately 4-26 times higher than the rate among comparable HIV-infected, TST-negative persons, and it was approximately 200-800 times higher than the rate of TB estimated for the U.S. population overall (0.01%) (12). Therefore, activities to control and eliminate TB in the United States must include aggressive efforts to identify HIV-infected persons with latent TB infection and to provide them with therapy to prevent progression to active TB disease.
# TABLE 1. Annual rates* of tuberculosis among persons with human immunodeficiency virus infection, by tuberculin skin-test (TST) status - selected years and U.S. areas (columns: location and source; rate among persons with positive TSTs; rate among persons with negative TSTs; rate ratio)
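The rates quoted above are incidence rates per 100 person-years, so the rate ratios follow from simple division. The sketch below is purely illustrative; apart from the 7.9 per 100 person-years figure cited for the Selwyn et al. cohort, the case and person-year counts are hypothetical.

```python
# Illustrative incidence-rate arithmetic for cohort data like Table 1.
# The counts below are hypothetical examples, not the studies' actual data.

def rate_per_100_py(cases, person_years):
    """Incidence rate expressed per 100 person-years of follow-up."""
    return 100.0 * cases / person_years

tst_pos = rate_per_100_py(cases=8, person_years=101)   # ~7.9, as cited for Selwyn et al.
tst_neg = rate_per_100_py(cases=3, person_years=1000)  # hypothetical 0.3

print(round(tst_pos, 1))            # 7.9
print(round(tst_pos / tst_neg, 1))  # rate ratio ~26, within the 4-26 range cited
```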
# Rates of Drug-Resistant TB Among HIV-Infected Persons in the United States
Resistance to antituberculosis drugs is an important consideration for some HIV-infected persons with TB. According to the results of a study of TB cases reported to CDC from 1993 through 1996, the risk of drug-resistant TB was higher among persons with known HIV infection compared with others (13). During this 4-year period, among U.S.-born persons aged 25-44 years with TB, HIV test results were reported as positive for 32% of persons, negative for 23%, and unknown for 45%. Using univariate analysis that excluded patients known to have had a previous episode of TB, investigators found that patients known to be HIV-seropositive had a significantly higher rate of resistance to all first-line antituberculosis drugs, compared with HIV-seronegative patients and patients with unknown HIV serostatus (Table 2). Moreover, using a multivariate model that included age, history of previous TB, birth country, residence in New York City, and race/ethnicity, the investigators confirmed HIV-positive serostatus as a risk factor for resistance to at least isoniazid, for resistance to both isoniazid and rifampin (multidrug-resistant [MDR] TB), and for rifampin monoresistance (TB resistant to rifampin only). In some areas of the United States with a low level of occurrence of MDR TB, however, differences in MDR TB related to HIV status have not been found (8).
†The patient's Mycobacterium tuberculosis isolate had resistance to at least the specified drug but may have had resistance to other drugs as well.
§The differences in drug-resistance rates among patients with TB known to be HIV-seropositive, compared with those known to be HIV-seronegative or of unknown status, are statistically significant (Chi-square test statistic, p<0.05).
¶These figures were calculated for patients with M. tuberculosis isolates tested for isoniazid and rifampin always and streptomycin sometimes. Monoresistant isolates were resistant to rifampin but susceptible to the other first-line drugs tested.
Source: CDC, National Tuberculosis Surveillance System.
The increased risk for TB drug resistance among HIV-seropositive persons might reflect a higher proportion of TB disease resulting from recently acquired M. tuberculosis infection (14,15) and thus an increased risk of disease caused by drug-resistant strains in areas with high community and institutional transmission of drug-resistant strains of M. tuberculosis (16). Several well-described outbreaks of nosocomially transmitted MDR TB, primarily affecting persons with AIDS, support this association (17-21). In the past decade, reports of TB caused by strains of M. tuberculosis resistant to rifampin only have increased, and growing evidence has indicated that this rare event is associated with HIV coinfection (22-32).
In retrospective studies, nonadherence with TB therapy has been associated with acquired rifampin monoresistance (22-24); and among a small number of patients, the use of rifabutin as prophylaxis for Mycobacterium avium complex was associated with the development of rifamycin resistance (31). However, the occurrence of TB relapse with acquired rifampin monoresistance also has been documented among patients with TB who initially had rifampin-susceptible isolates and who were treated with a rifampin-containing TB regimen by directly observed therapy (DOT) (30,32). The mechanisms involved in the development of acquired rifampin monoresistance are not clearly understood but could involve the persistence of actively multiplying mycobacteria in patients with severe cellular immunodeficiency, selective antituberculosis drug malabsorption, and inadequate tissue penetration of drugs.
Thus, of critical importance for HIV-infected persons is implementation of TB prevention and control strategies such as a) appropriate use of therapy for latent M. tuberculosis infection, b) early diagnosis and effective treatment of active TB (i.e., administering four-drug antituberculosis regimens by DOT to all coinfected patients), and c) prompt compliance with requirements for reporting TB cases and drug-susceptibility test results. Implementing these strategies for persons coinfected with HIV will not only help reduce new cases of TB in general; it also could decrease further transmission of drug-resistant strains and new cases of drug-resistant TB.
# Copathogenicity of TB and HIV Disease
Human immunodeficiency virus type 1 (HIV-1) and M. tuberculosis are two intracellular pathogens that interact at the population, clinical, and cellular levels. Initial studies of HIV-1 and TB emphasized the impact of HIV-1 on the natural progression of TB, but mounting immunologic and virologic evidence now indicates that the host immune response to M. tuberculosis enhances HIV replication and might accelerate the natural progression of HIV infection (33). Therefore, the interaction between these two pathogens has important implications for the prevention and treatment of TB among HIV-infected persons.
Studies of the immune response in persons with TB disease support the biologic plausibility of copathogenesis in dually infected persons. The initial interaction between the host immune system and M. tuberculosis occurs in the alveolar macrophages that present mycobacterial antigens to antigen-specific CD4+ T cells (34). These T cells release interferon-gamma, a cytokine that acts at the cellular level to activate macrophages and enhance their ability to contain mycobacterial infection. The activated macrophages also release proinflammatory cytokines, such as tumor necrosis factor and interleukin (IL)-1, cytokines that enhance viral replication in monocyte cell lines in vitro (35-38). The mycobacteria and their products also enhance viral replication by inducing nuclear factor kappa-B, the cellular factor that binds to promoter regions of HIV (39,40).
When TB disease develops in an HIV-infected person, the prognosis is often poor, though it depends on the person's degree of immunosuppression and response to appropriate antituberculosis therapy (41-43). The 1-year mortality rate for treated, HIV-related tuberculosis ranges from 20% to 35% and shows little variation between cohorts from industrialized and developing countries (44-49).
The observed mortality rate for HIV-infected persons with TB is approximately four times greater than the rate for TB patients not infected with HIV (44,46,49,50). Although the cause of death in the initial period of therapy can be TB (46), death after the induction phase of antituberculosis therapy usually is attributed to complications of HIV other than TB (45,51,52).
Epidemiologic data suggest that active TB accelerates the natural progression of HIV infection. In a retrospective cohort study of HIV-infected women from Zaire, investigators estimated the relative risk of death to be 2.7 among women with active TB compared with those without TB (53). In a retrospective cohort study of HIV-infected subjects from the United States, active TB was associated with an increased risk for opportunistic infections and death (54). The risk of death, or hazard rate, for persons with HIV-related TB follows a bimodal distribution, peaking within the first 3 months of antituberculosis therapy and then again after 1 year (48); the reasons for this distribution are not clear but might relate to the impact of TB on HIV disease progression. The observation that active TB increases deaths associated with HIV infection has been corroborated in studies of three independent cohorts in Europe (55-57).
Early in the HIV epidemic, researchers postulated that the immune activation resulting from concurrent infection with parasitic or bacterial pathogens might alter the natural progression of HIV infection (58). Subsequent observations have demonstrated that immune activation from TB enhances both systemic and local HIV replication. In some patients with active TB, the plasma HIV RNA level rises substantially before TB is diagnosed (59). Moreover, TB treatment alone leads to reductions in the viral load in these dually infected patients. TB and HIV also interact in the lungs, the site of primary infection with M. tuberculosis. In a recently published study of HIV-infected patients with TB, researchers found that the viral load was higher in the bronchoalveolar lavage fluid from the affected versus the unaffected lung and was correlated with levels of tumor necrosis factor in bronchoalveolar fluid (60). Researchers used V3 loop viral sequences to construct a phylogenetic tree and observed that the HIV quasispecies from the affected lung differed from those in the plasma within the same patient. These data suggest that pulmonary TB might act as a potent stimulus for the cellular-level replication of HIV.
In summary, recent research findings have improved clinicians' understanding of how HIV affects the natural progression of TB and how TB affects the clinical course of HIV disease, and these findings support the recommendation for prevention, early recognition, and effective treatment for both diseases.
# TB Therapy Outcomes Among Patients with HIV-Related TB
Among patients treated for TB, early clinical response to therapy and the time in which M. tuberculosis sputum cultures convert from positive to negative appear to be similar for those with HIV infection and those without HIV infection (30,61,62). However, the data are less clear about whether rates of TB relapse (recurrence of TB following successful completion of treatment) differ among patients with or without HIV infection (63).
Current CDC and American Thoracic Society guidelines recommend a 6-month treatment regimen for drug-susceptible TB disease for patients coinfected with HIV (2) but suggest prolonged treatment for patients who have a delayed clinical and bacteriologic response to antituberculosis therapy. Some experts have suggested that to ensure an optimal antituberculosis treatment outcome, all patients with HIV-related TB should be treated with a longer course of therapy (i.e., 9 months), regardless of evidence of early response to therapy (64,65).
To make a recommendation on duration of therapy for HIV-related TB, expert consultants at the September 1997 CDC meeting considered the results of prospective studies that ascertained the posttreatment relapse rate following 6-month TB therapy regimens among patients with HIV infection (Table 3)* (29,30,49,66,67). Differences in the study designs, including those pertaining to eligibility for enrollment in the study and to the definition of TB relapse, limited the analysis of combined results from the five studies. Despite this limitation, the expert consultants were able to make the following observations: a) the studies had a posttreatment follow-up duration that ranged from 8 to 22 months (median duration: 18 months); b) in three studies (30,49,67), investigators found that 6-month TB regimens were associated with a clinically acceptable (≤5.4%) TB relapse rate; and c) in two studies (29,66), researchers found a high (≥9%) TB relapse rate associated with the use of 6-month TB regimens.
In the Zaire study (66), TB patients coinfected with HIV had almost twofold higher posttreatment relapse rates than patients not infected with HIV who received the same TB treatment regimen; however, the authors did not investigate whether the relapses were the result of a recurrence of disease with the same strain of M. tuberculosis or reinfection (new disease) with a different strain. In the other study (29),† which enrolled HIV-seropositive patients from 21 different sites in the United States, in all three patients who relapsed, the strain of M. tuberculosis isolated during the relapse episode matched, by DNA fingerprint, the strain of M. tuberculosis that was isolated during the initial episode of TB; this finding ruled out the possibility of reinfection.
The expert consultants who reviewed the available data agreed that short-course (i.e., 6-month) regimens should be used for the treatment of HIV-related pansusceptible TB (i.e., TB susceptible to all first-line antituberculosis drugs) in the United States, where patients are usually treated with DOT and where response to antituberculosis drugs can be monitored.
*Study regimens consisted of a four-drug daily induction regimen followed by a continuation regimen of 4-month intermittent isoniazid and rifampin. The exceptions are that a) ethambutol was not used during the induction phase in Côte d'Ivoire, and b) half of the patients in one of the U.S. studies (30) received levofloxacin in addition to the other four drugs during the induction phase. During the continuation phase, patients in Côte d'Ivoire received drugs daily, patients in Haiti received drugs three times a week, and patients in all other studies received drugs twice a week.
†Also in this study, 51 comparable patients with HIV-related TB were randomly assigned to a treatment arm in which the duration of the continuation phase was prolonged from 4 months to 7 months. The culture-confirmed posttreatment relapse rate (2%) in this study arm was not significantly different from the rate in the 6-month study arm (p=1.00); however, an isolate was not available for DNA fingerprinting.
This approach limits the use of lengthier multidrug antituberculosis therapies to the minimum possible number of patients with TB and HIV disease. Some experts believe the risk of TB treatment failure is increased among patients with advanced HIV-related immunosuppression and therefore advocate greater caution (or a longer duration of therapy) when treating such patients for TB. The available data do not permit CDC to make a definitive recommendation regarding this issue. However, the experts recommended that clinicians treating TB in patients with HIV infection consider the factors that increase a person's risk for a poor clinical outcome (e.g., lack of adherence to TB therapy, delayed conversion of M. tuberculosis sputum cultures from positive to negative, and delayed clinical response) when deciding the total duration of TB therapy.
# Paradoxical Reactions Associated with Initiation of Antiretroviral Therapy During the Course of TB Therapy
The temporary exacerbation of TB symptoms and lesions after initiation of antituberculosis therapy - known as a paradoxical reaction - has been described as a rare occurrence (68-74) attributed to causes such as recovery of the patient's delayed hypersensitivity response and an increase in exposure and reaction to mycobacterial antigens after bactericidal antituberculosis therapy is initiated (75). Recently, a similar phenomenon was reported among patients with HIV-related TB (76). These reactions appear to be related more often to the concurrent administration of antiretroviral and antituberculosis therapy, and they occur with greater frequency than do paradoxical reactions associated primarily with the administration of antituberculosis therapy. Patients with paradoxical reactions can have hectic fevers, lymphadenopathy (sometimes severe), worsening of chest radiographic manifestations of TB (e.g., miliary infiltrates, pleural effusions), and worsening of original tuberculous lesions (e.g., cutaneous and peritoneal). However, these reactions are not associated with changes in M. tuberculosis bacteriology (i.e., no change from negative to positive culture and smear), and patients generally feel well and have no signs of toxicity.
In a prospective study, paradoxical reactions were more common among 33 patients with HIV-related TB who received TB treatment and combination antiretroviral therapy (36%) than among 55 patients not infected with HIV who received antituberculosis drugs alone (2%) and among 28 HIV-infected patients (historical control patients from the pre-zidovudine era) who received antituberculosis drugs alone (7%) (76). Furthermore, among patients treated for both diseases, the paradoxical reactions were more temporally related to the initiation of combination antiretroviral therapy (mean +/- standard deviation [SD]: 15 +/- 11 days afterward) than to the initiation of antituberculosis treatment (mean +/- SD: 109 +/- 72 days afterward).
Researchers investigated potential causes for these symptoms and lesions (i.e., TB treatment failure, antituberculosis drug resistance, nonadherence with TB therapy, drug fever, development of conditions not related to TB or HIV) but considered such causes unlikely because these evaluations produced negative results and because TB was cured in patients who remained on unmodified antituberculosis regimens. Among patients in this study who received combination antiretroviral therapy, which usually included a protease inhibitor, the paradoxical reactions corresponded with a concurrent drop in HIV viral loads after antiretroviral therapy began and, in all but one patient, occurred while peripheral blood CD4+ T-cell counts were <200 cells/µL (76). In the historical control group (i.e., patients who were treated for TB but not for HIV), two (7%) of the 28 patients had a paradoxical reaction after antituberculosis therapy was initiated. This finding indicates that treatment of TB alone might sometimes decrease HIV viral load substantially and improve immune function (40,59,68,76).
After reviewing information about paradoxical reactions occurring during the course of TB therapy, expert consultants at the September 1997 CDC meeting concluded that exacerbation of TB signs and symptoms in patients with HIV-related TB can occur soon after combination antiretroviral therapy is initiated. Clinicians should always conduct a thorough investigation to eliminate other etiologies before making a diagnosis of paradoxical treatment reaction. For patients with paradoxical reactions, changes in antituberculosis or antiretroviral therapy are rarely needed. If the lymphadenopathy or other lesions are severe, one option is to continue with appropriate antituberculosis therapy and administer short-term steroids to suppress the enhanced immune response.
In the prospective study (76), despite having low CD4+ T-cell counts, six (86%) of seven TB patients who were initially tuberculin skin-test (TST)-negative had positive TST results after combination antiretroviral therapy was started. The reaction sizes of postantiretroviral TSTs ranged from 7 to 67 mm of induration. Clinicians must be aware of the potential public health and clinical implications of restored TST reactivity among persons who have not been diagnosed with active TB but who might be latently infected with M. tuberculosis. Persons previously known to have negative TST results might benefit from repeat tuberculin testing if they have evidence of restored immune function after antiretroviral therapy is initiated, because TB preventive therapy is recommended for TST-positive HIV-infected persons.
# Considerations for TB Therapy for HIV-Infected Patients Treated with Antiretroviral Agents
# Drug Interactions Between Rifamycins Used for TB Therapy and Antiretroviral Drugs Used for HIV Therapy
Widely used antiretroviral drugs available in the United States include the protease inhibitors (saquinavir, indinavir, ritonavir, and nelfinavir) and the nonnucleoside reverse transcriptase inhibitors (NNRTIs) (nevirapine, delavirdine, and efavirenz). Protease inhibitors and NNRTIs have substantive interactions with the rifamycins (rifampin, rifabutin, and rifapentine) used to treat mycobacterial infections (3,77). These drug interactions principally result from changes in the metabolism of the antiretroviral agents and the rifamycins secondary to induction or inhibition of the hepatic cytochrome CYP450 enzyme system (78,79).
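Qualitatively, these CYP450-mediated interactions run in two directions, as the next paragraph details: an inducer lowers, and an inhibitor raises, the blood levels of coadministered drugs metabolized by CYP450. The following minimal sketch encodes only that qualitative rule; the drug classifications are taken from the surrounding text, while the function and dictionary structure are hypothetical, and real pharmacokinetics are far more complex.

```python
# Qualitative sketch of the CYP450 interaction rule described in the text:
# inducers lower, and inhibitors raise, blood levels of coadministered
# CYP450-metabolized drugs. Illustrative only; not a pharmacokinetic model.

CYP450_EFFECT = {
    "rifampin":    "potent inducer",
    "rifabutin":   "weaker inducer",
    "ritonavir":   "potent inhibitor",
    "nevirapine":  "inducer",
    "delavirdine": "inhibitor",
}

def predicted_effect(perpetrator, victim):
    """Predict the direction of change in the victim drug's blood level."""
    effect = CYP450_EFFECT.get(perpetrator, "unknown")
    if "inducer" in effect:
        return f"{perpetrator} ({effect}) decreases blood levels of {victim}"
    if "inhibitor" in effect:
        return f"{perpetrator} ({effect}) increases blood levels of {victim}"
    return f"interaction of {perpetrator} with {victim} not characterized here"

print(predicted_effect("rifampin", "indinavir"))   # decreased levels -> contraindicated
print(predicted_effect("ritonavir", "rifabutin"))  # increased levels -> rifabutin toxicity
```

The two printed examples correspond to the contraindications discussed below: rifampin with protease inhibitors, and ritonavir with rifabutin.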
Rifamycin-related CYP450 induction decreases the blood levels of drugs metabolized by CYP450. For example, if protease inhibitors are administered with rifampin (a potent CYP450 inducer), blood concentrations of the protease inhibitors (all of which are metabolized by CYP450) decrease markedly, and most likely the antiretroviral activity of these agents declines as well. Conversely, if ritonavir (a potent CYP450 inhibitor) is administered with rifabutin, blood concentrations of rifabutin increase markedly, and most likely rifabutin toxicity increases as well. Of the available rifamycins, rifampin is the most potent CYP450 inducer; rifabutin has substantially less activity as an inducer; and rifapentine, a newer rifamycin, has intermediate activity as an inducer (80-82).
The four currently approved protease inhibitors and amprenavir (141W94, an investigational agent in Phase III clinical trials) are all, in differing degrees, inhibitors of CYP450 (83,84). The rank order of the agents in terms of potency in inhibiting CYP450 is ritonavir (the most potent); amprenavir, indinavir, and nelfinavir (with approximately equal potencies); and saquinavir (the least potent). The magnitude of the effects of coadministering rifamycins and protease inhibitors has been evaluated in limited pharmacokinetic studies (Table 4)* (85-91). The three approved NNRTIs have diverse effects on CYP450: nevirapine is an inducer, delavirdine is an inhibitor, and efavirenz is both an inducer and an inhibitor. The magnitude of the effects of coadministering rifamycins and NNRTIs has also been evaluated in pharmacokinetic studies or has been predicted on the basis of what is known about their potential for inducing or inhibiting CYP450 (Table 5)† (92-96).
*Effects are expressed as a percentage change in AUC of the concomitant treatment relative to that of the drug-alone treatment. No data are available regarding the magnitude of these bidirectional interactions when rifamycins are administered two or three times a week instead of daily.
†Predicted effect based on knowledge of metabolic pathways for the two drugs.
In contrast to the protease inhibitors and the NNRTIs, the other class of available antiretroviral agents, the nucleoside reverse transcriptase inhibitors (NRTIs) (zidovudine, didanosine, zalcitabine, stavudine, and lamivudine), is not metabolized by CYP450. Rifampin (and, to a lesser degree, rifabutin) increases the glucuronidation of zidovudine and thus slightly decreases the serum concentration of zidovudine (97-100). The effect of this interaction probably is not clinically important, and the concurrent use of NRTIs and rifamycins is not contraindicated (77). Also, no contraindication exists for the use of NRTIs, NNRTIs, and protease inhibitors with isoniazid, pyrazinamide, ethambutol, or streptomycin. These first-line antituberculosis medications, in contrast to the rifamycins, are not CYP450 inducers.
# Coadministration of Antituberculosis and Antiretroviral Therapies
According to 1998 U.S. Department of Health and Human Services guidelines on the use of antiretroviral agents among HIV-infected adults and adolescents (4), to improve the length and quality of patients' lives, all persons with symptomatic HIV infection should be offered antiretroviral therapy. HIV-infected patients with TB fall into this category. When used appropriately, combinations of potent antiretroviral agents can effect prolonged suppression of HIV replication and reduce the inherent tendency of HIV to generate drug-resistant viral strains. However, as antiretroviral therapeutic regimens have become increasingly effective, they also have become increasingly complex, both in themselves and in the problems they cause for the treatment of other HIV-associated diseases.
At present, regimens that include two NRTIs combined with a potent protease inhibitor (or, as an alternative, combined with an NNRTI) are the preferred choice for combination antiretroviral therapy for the majority of patients. Each of the antiretroviral drug combination regimens must be used according to optimum schedules and doses (4) because the potential for resistant mutations of HIV decreases if serum concentrations of the multiple antiretroviral drugs are maintained steadily. Because rifampin markedly lowers the blood levels of these drugs and is likely to result in suboptimal antiretroviral therapy, the use of rifampin to treat active TB in a patient who is taking a protease inhibitor or an NNRTI is always contraindicated. Rifabutin is a less potent inducer of the CYP450 cytochrome enzymes than is rifampin and, when used in appropriately modified doses, might not be associated with a clinically significant reduction of protease inhibitors or nevirapine (Table 6). Thus, the substitution of rifabutin for rifampin in TB treatment regimens has been proposed as a practical choice for patients who are also undergoing therapy with protease inhibitors (with the exception of ritonavir [86,87,101-103] or hard-gel capsule saquinavir [Invirase™] [85]) or with the NNRTIs nevirapine or efavirenz (but not delavirdine [93,94]).

(Footnotes to Tables 4 and 5: *Effects are expressed as a percentage change in the AUC of the concomitant treatment relative to that of the drug-alone treatment. No data are available regarding the magnitude of these bidirectional interactions when rifamycins are administered two or three times a week instead of daily. †Predicted effect based on knowledge of metabolic pathways for the two drugs.)

# TABLE 6. Feasibility of using different antiretroviral drugs and rifabutin

For each antiretroviral agent, the table indicates whether it can be used in combination with rifabutin, followed by comments.

Saquinavir (soft-gel formulation): Probably. Use of the soft-gel formulation (Fortovase™) in higher-than-usual doses might allow adequate serum concentrations of this drug despite concurrent use of rifabutin.* However, the pharmacokinetic data for this combination are limited in comparison with other protease inhibitors. Because of the expected low bioavailability of the hard-gel formulation (Invirase™), the concurrent use of this agent with rifabutin is not recommended.

Ritonavir: No. Ritonavir increases concentrations of rifabutin by 35-fold and results in increased rates of toxicity (arthralgia, uveitis, skin discoloration, and leukopenia). These adverse events have been noted in studies of high-dose rifabutin therapy and when rifabutin is administered with clarithromycin (another CYP450 inhibitor), an indication that these events might result from high serum concentrations of rifabutin.

Indinavir: Yes. Data from drug interaction studies (unpublished report, Merck Research Laboratories, West Point, PA, 1998) suggest that the dose of indinavir should be increased from 800 mg every 8 hours to 1,200 mg every 8 hours if used in combination with rifabutin.*

Nelfinavir: Yes. Some clinical experts suggest that the dose of nelfinavir should be increased from 750 mg three times a day to 1,000 mg three times a day if used in combination with rifabutin.*

Amprenavir: Probably. The drug interactions between amprenavir and rifabutin (and thus the potential for rifabutin toxicity) are reported to be similar to those of ritonavir with rifabutin. However, potential advantages of using this combination are that a) rifabutin has a minimal effect on reducing the levels of amprenavir and b) even though it has not been studied, rifabutin toxicity is not expected if the daily dose of rifabutin is reduced when used in combination with amprenavir.

NRTIs†: Yes. Not expected to have clinically significant interaction.

Nevirapine: Yes. Not known whether nevirapine or rifabutin dose adjustments are necessary when these drugs are used together.*

Delavirdine: No. Not recommended on the basis of marked decreases in concentrations of delavirdine when administered with rifamycins.

Efavirenz: Probably. Newly approved agent. Preliminary drug interaction studies suggest that when rifabutin is used concurrently with efavirenz, the dose of rifabutin for both daily and twice-weekly administration should be increased from 300 mg to 450 mg.

*Daily dose of rifabutin should be reduced from 300 mg to 150 mg if used in combination with amprenavir, nelfinavir, or indinavir. It is unknown whether the dose of rifabutin should be reduced if used in combination with saquinavir (Fortovase™) or nevirapine.
†Nucleoside reverse transcriptase inhibitors, including zidovudine, didanosine, zalcitabine, stavudine, and lamivudine.
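Table 6 and its footnotes amount to a small lookup of feasibility verdicts and dose adjustments. The following is a minimal sketch restating them as data, assuming a simple dictionary structure; the names and format are illustrative, and this is not a substitute for the table or for clinical judgment:

```python
# Table 6 restated as data (illustrative; verdicts and dose notes are taken
# from the table and its footnotes above, the structure is hypothetical).

RIFABUTIN_COMPATIBILITY = {
    "saquinavir (Fortovase, soft-gel)": ("probably", "higher-than-usual saquinavir doses; rifabutin reduction unknown"),
    "saquinavir (Invirase, hard-gel)": ("no", "low bioavailability expected; combination not recommended"),
    "ritonavir": ("no", "rifabutin concentrations increase 35-fold; toxicity risk"),
    "indinavir": ("yes", "indinavir 800 mg -> 1,200 mg q8h; rifabutin 300 mg -> 150 mg daily"),
    "nelfinavir": ("yes", "nelfinavir 750 mg -> 1,000 mg three times a day; rifabutin 300 mg -> 150 mg daily"),
    "amprenavir": ("probably", "rifabutin 300 mg -> 150 mg daily"),
    "NRTIs": ("yes", "no clinically significant interaction expected"),
    "nevirapine": ("yes", "need for dose adjustments unknown"),
    "delavirdine": ("no", "delavirdine concentrations markedly decreased"),
    "efavirenz": ("probably", "rifabutin 300 mg -> 450 mg, daily or twice weekly"),
}

def rifabutin_verdict(agent: str) -> str:
    """Look up the Table 6 verdict and comment for one antiretroviral agent."""
    verdict, note = RIFABUTIN_COMPATIBILITY[agent]
    return f"{agent}: {verdict} ({note})"

print(rifabutin_verdict("indinavir"))
print(rifabutin_verdict("ritonavir"))
```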
Currently, more clinical and pharmacokinetic data are available on the use of indinavir or nelfinavir with rifabutin than on the use of amprenavir or soft-gel saquinavir (Fortovase™) with rifabutin. Rifapentine is not recommended as a substitute for rifampin because its safety and effectiveness have not been established for the treatment of patients with HIV-related TB. As an alternative to the use of rifamycin, streptomycin-based regimens that do not contain rifamycin can be considered for the treatment of TB in patients undergoing antiretroviral therapy with protease inhibitors or NNRTIs.

# Use of Rifabutin-Based Regimens for the Treatment of HIV-Related TB

At present, TB drug regimens that include rifabutin instead of rifampin appear to offer the best alternative for the treatment of active TB among patients taking antiretroviral therapies that include protease inhibitors or NNRTIs. This recommendation is based on findings from studies of equivalent in vitro antituberculosis activity of rifabutin and rifampin (104,105) and on the results of three clinical trials (106-108). These trials demonstrated that 6-month rifabutin-containing regimens (at a daily dose of either 150 mg or 300 mg) were as effective and as safe as similar control regimens containing rifampin for the treatment of TB (Table 7). The smallest (n=49) of these three trials was conducted in Uganda (108) and is the only one to include HIV-coinfected patients (who were not undergoing antiretroviral therapy at the time of the study). This study indicated that 81% of patients taking a TB treatment regimen containing daily rifabutin converted their sputum from M. tuberculosis positive to negative after 2 months of treatment, compared with a 48% sputum conversion rate among patients taking a TB regimen containing daily rifampin (p<0.05). However, when the researchers controlled for differences in baseline characteristics (a greater proportion of patients in the rifampin group had cavitary disease), they found no difference in the time to sputum conversion between the two study groups. Studies are under way to evaluate the use of rifabutin administered daily (at a dose of 150 mg) or twice a week (at a dose of 300 mg) for the treatment of TB in HIV-infected patients who take protease inhibitors.
Physicians at a state tuberculosis hospital in Florida have treated or consulted on the treatment of approximately 30 HIV-infected patients who received a protease inhibitor while undergoing treatment for TB with rifabutin. Patients have been treated for TB primarily with rifabutin (150 mg daily) as part of four-drug therapy for 2-4 weeks, followed by rifabutin (300 mg twice weekly) as part of four-drug therapy to complete 8 weeks of induction, and then a continuation phase consisting of twice-weekly isoniazid and rifabutin (300 mg) to complete 6 months of treatment. To date, patients treated with this regimen have not experienced clinically significant increases in rifabutin serum levels, have had a minimal incidence of adverse reactions from rifabutin (one patient developed a case of uveitis), and have had a good clinical response to TB and HIV therapies. Approximately 80% of the patients attained sputum conversion by the second month of treatment, most have attained and maintained suppression of HIV replication, and no TB relapses have occurred with up to 1 year of posttreatment follow-up (David Ashkin, M.D., and Masahiro Narita, M.D., A.G. Holley State Tuberculosis Hospital, Lantana, Florida, personal communication, 1998).

In previous reports, CDC and the American Thoracic Society jointly recommended the use of rifampin-containing short-course regimens for the initial treatment of HIV-related TB (2). The inclusion of rifampin in regimens to treat TB was supported by data collected from approximately 90 controlled clinical trials conducted from 1968 to 1988 (109). Excluding rifampin from the TB treatment regimen was not recommended because regimens not containing rifampin a) had not been proven to have acceptable efficacy (i.e., they have been associated with higher rates of TB treatment failure and death and with slower bacteriologic responses to therapy, leading to potential increases in the likelihood of M. tuberculosis transmission) and b) require prolonging the duration of therapy from 6 months to 12-15 months. Presently, available data suggest that the use of rifabutin in short-course (i.e., 6-month) multidrug regimens to treat TB provides the same benefits as the use of rifampin. Three additional reasons support the use of rifabutin for treating HIV-related TB: a) observations suggest that rifabutin might be more reliably absorbed than rifampin in patients with advanced HIV disease (110,111); b) rifabutin appears to have been better tolerated in patients with rifampin-induced hepatotoxicity (David Ashkin, M.D., and Masahiro Narita, M.D., A.G. Holley State Tuberculosis Hospital, Lantana, Florida, personal communication, 1998); and c) the use of rifabutin might lessen the possibility of interactions with other medications commonly prescribed for patients with HIV infection (e.g., azole antifungal drugs, anticonvulsant agents, and methadone) (77).

# Use of Alternative TB Treatment Regimens that Contain Minimal or No Rifamycin

TB treatment regimens that contain no rifamycins have been proposed as an alternative for patients who take protease inhibitors or NNRTIs. Several clinical trials conducted in Hong Kong and Africa by the British Medical Research Council and published from 1974 through 1984 provide information about nonrifamycin and minimal-rifamycin regimens for the treatment of TB in patients who were not likely to be infected with HIV (Table 8) (112-115).
Most of these studies demonstrated high relapse rates when regimens not containing streptomycin were used and when the duration of therapy was less than 9 months. However, in a large (n=404) randomized controlled clinical trial in Hong Kong that evaluated six TB treatment regimens consisting of streptomycin, isoniazid, and pyrazinamide administered daily, three times a week, or two times a week for 6 or 9 months (112), almost all patients treated with any of the study regimens achieved rapid sputum conversion (86%-94% of patients converted within 3 months of therapy). In this study, the 30-month posttreatment relapse rates were high (18%-24%) among patients treated with 6-month regimens, but the relapse rates among patients treated with 9-month regimens (5%-6%) were similar to the relapse rates expected following the use of rifampin-based TB treatments. Thus, the expert consultants who developed these guidelines concluded that treatment of TB without rifamycin always requires longer-duration (at least 9 months) regimens that include streptomycin (or an injectable antituberculosis drug such as capreomycin, amikacin, or kanamycin) (63). However, these TB regimens have not been studied among patients with HIV infection. Streptomycin is highly bactericidal against M. tuberculosis, but it is rarely used in the United States to treat drug-susceptible TB because of problems associated with its administration by injection, which can be intensified in patients with low body mass or wasting, and because of potential ototoxicity and nephrotoxicity. The associated potential toxicities, the increased duration of therapy, and the patient's difficulty in adhering to an injectable-drug-based TB regimen can compromise the effectiveness of streptomycin-based TB regimens, and these limitations should be considered by physicians and patients.

# Treatment of Latent M. tuberculosis Infection in Patients with HIV Infection

# Scientific Rationale

Preventive therapy for TB is essential to controlling and eliminating TB in the United States (116,117). Treatment for HIV-infected persons who are latently infected with M. tuberculosis is an important part of this strategy; it is also an important personal health intervention because of the serious complications associated with active TB in HIV-infected persons (118-120). Expert consultants attending the September 1997 CDC meeting and additional consultants attending a September 1998 meeting sponsored by the American Thoracic Society and CDC considered findings from multiple studies (121-130) before developing recommendations about the optimal duration of isoniazid preventive therapy regimens; the frequency of administration (intermittency) of preventive therapy; new short-course multidrug regimens; and preventive therapy for anergic HIV-infected adults with a high risk of M. tuberculosis infection. A key difference between most preventive therapy trials conducted before and after the beginning of the HIV epidemic is that the earlier trials focused on 12-month regimens of isoniazid, whereas five of seven trials (122-126) conducted in HIV-infected populations assessed 6-month regimens of isoniazid (Table 9). Four of these 6-month isoniazid regimens (122-125) were chosen for study on the basis of the operational feasibility of providing therapy in countries with limited resources where preventive therapy programs were not available; the fifth study (126), a U.S. trial conducted among anergic patients, used a 6-month regimen because of the absence of previous data about the optimal duration of therapy for TST-negative, HIV-infected patients.
Despite these variations, the expert consultants concluded that the findings from these different preventive therapy studies should apply to most persons with latent M. tuberculosis infection, regardless of their HIV serostatus, because similar levels of protection have been observed when identical preventive therapy regimens have been administered to persons infected with HIV and to those not infected.

# Optimal Duration of Isoniazid Regimens for Treatment of Latent M. tuberculosis Infection

The American Thoracic Society and CDC have previously recommended a regimen of 12 months of isoniazid alone for treatment of latent M. tuberculosis infection in HIV-infected adults (2). The recommended duration of TB preventive therapy for persons not infected with HIV was a minimum of 6 months. When considering the optimal duration of isoniazid preventive therapy, the consultants reviewed findings from two studies conducted in populations not known to be infected with HIV (128,129). One of these studies, a controlled trial conducted in seven European countries, compared the efficacy of three durations (3, 6, and 12 months) of isoniazid preventive treatment for TST-positive persons with stable, fibrotic lesions on chest radiographs (128). In this study, compliant patients who received medication for 12 months had better protection against TB (93%) than those who received medication for 6 months (69%). The other study was conducted among the Inuits in the Bethel area of Alaska, where participants received 0-24 months of isoniazid preventive therapy (129). In an assessment of observed posttherapy case rates of TB relative to the amount of isoniazid ingested (expressed as a percentage of a 12-month regimen), researchers found that higher amounts of therapy corresponded with lower TB rates among participants who had received 0-9 months of isoniazid therapy; beyond 9 months of therapy, participants gained no additional benefit in terms of decreased TB case rates.

Four studies of HIV-infected persons have evaluated 6-month and 12-month regimens of daily isoniazid (121,123,125,127). Both of the studies that evaluated a 6-month regimen included a placebo comparison group and demonstrated reductions in the incidence of TB among persons in the treatment group: 70% in Uganda (123) and 75% in Kenya (125). A study of the 12-month regimen (121), which was conducted in Haiti and also included a placebo comparison group, demonstrated an 83% reduction in the incidence of TB among persons in the treatment group. A multicenter trial conducted in the United States, Mexico, Brazil, and Haiti (127) demonstrated that the magnitude of protection obtained from a regimen of isoniazid administered daily for 12 months was similar to that obtained from a regimen of rifampin and pyrazinamide administered daily for 2 months. Isoniazid preventive therapy regimens of 6 and 12 months' duration have not been compared with each other in the same study conducted among HIV-infected persons. In summary, these data indicate that a) the optimal duration of isoniazid preventive therapy should be >6 months to provide the maximum degree of protection against TB; b) therapy for 9 months appears to be sufficient; and c) therapy for >12 months does not appear to provide additional protection.
# Frequency of Administering Isoniazid Preventive Therapy

Two clinical trials (122,124) have evaluated 6-month, twice-weekly isoniazid regimens for the prevention of active TB in HIV-infected persons (Table 9). Participants enrolled in the twice-weekly, 6-month isoniazid arm of a study conducted in Zambia had a 40% reduction in the rate of TB compared with persons who took a placebo for 6 months (124). The findings of a trial conducted in Haiti (122) suggest that the magnitude of protection obtained from isoniazid administered twice a week for 6 months is equivalent to that obtained from rifampin and pyrazinamide regimens administered twice a week for 2 months. Preventive therapy trials that include twice-weekly isoniazid regimens for >6 months, or comparisons of the same drugs administered daily versus intermittently, have not been conducted. However, the findings of a Baltimore demonstration project, in which isoniazid was administered twice a week (10-15 mg/kg, with a maximum dose of 900 mg) to a cohort of injecting-drug users under directly observed preventive therapy (DOPT), support the efficacy of twice-weekly isoniazid preventive therapy (130). Twice-weekly regimens with DOPT were used in Baltimore because the project staff expected that supervised delivery of therapy would enhance adherence with and completion of the preventive therapy regimen. Thus, the available data suggest that the protection obtained from isoniazid preventive therapy regimens should be the same whether the drug is administered daily or twice a week.

# Short-Course Multidrug Regimens for TB Preventive Therapy

Four clinical trials (122-124,127) conducted among HIV-infected populations have evaluated courses of preventive therapy that are shorter than 6 months and that include rifampin in combination with isoniazid or pyrazinamide (Table 9). The largest and most recent of these trials was a multicenter, randomized TB prevention study conducted from 1992 through 1998 (127). Researchers found identical rates of TB (1.2 per 100 person-years) in two groups of TST-positive, HIV-infected persons: those who primarily self-administered isoniazid daily for 12 months and those who primarily self-administered rifampin and pyrazinamide daily for 2 months. Both study groups had similar rates of adverse events and mortality; persons taking rifampin and pyrazinamide for 2 months were significantly more likely (80%) to complete therapy than were persons taking isoniazid for 12 months (68%) (p<0.001). Two other trials, conducted in Haiti and Zambia (122,124), have also evaluated regimens of rifampin and pyrazinamide for the prevention of TB but did not include comparison arms of 12-month isoniazid regimens. The study in Haiti (122) compared patients receiving rifampin and pyrazinamide administered twice a week for 2 months with patients receiving isoniazid twice a week for 6 months; in both arms of the study, one of the twice-weekly doses was administered by DOPT. Investigators observed no difference in TB risk or mortality among participants enrolled in the two treatment arms (122). The placebo-controlled trial in Zambia demonstrated comparable protection from 3 months of rifampin and pyrazinamide versus 6 months of isoniazid; both regimens were self-administered twice a week (124). In the multicenter trial (127) and in the Haiti and Zambia studies (122,124), regimens that included rifampin and pyrazinamide were well tolerated.
In a study conducted in Uganda (123), investigators observed no statistically significant reduction in TB rates but a high rate of toxicity and drug intolerance among persons who took three drugs (isoniazid, rifampin, and pyrazinamide) daily for 3 months compared with persons who took a placebo (Table 9); 3 months of daily, self-administered rifampin and isoniazid provided protection similar to that of 6 months of daily, self-administered isoniazid. Thus, short-course multidrug regimens (i.e., two drugs for 2-3 months) have been shown to be effective for the prevention of active TB in HIV-infected persons. The use of three drugs for preventive therapy, however, can be associated with unacceptably high rates of toxicity, and the use of a 3-month regimen of rifampin and isoniazid is not being considered for use in the United States. Available data indicate that in the United States, a regimen of rifampin and pyrazinamide administered daily for 2 months is a reasonable treatment option for HIV-infected adults with latent M. tuberculosis infection. The available data do not permit CDC to make a definitive statement regarding the intermittent (i.e., twice-a-week) administration of a 2-month regimen of rifampin and pyrazinamide.

# Preventive Therapy for Anergic HIV-Infected Adults with a High Risk of Latent M. tuberculosis Infection

Isoniazid preventive therapy has not been found to be useful or cost-effective in preventing TB when administered to anergic, HIV-infected persons (123,126) (Table 9). The anergic subjects who received isoniazid in the Uganda trial had a statistically insignificant (17%) reduction in the rate of TB (2.5 cases per 100 person-years) compared with patients in the placebo group (3.1 cases per 100 person-years) (123). Similarly, anergic HIV-infected persons with a high risk for tuberculous infection who were enrolled in a U.S. multicenter trial and treated with isoniazid daily for 6 months had a rate of TB (0.4 cases per 100 person-years) that was 50% less than, but not statistically different from, the rate observed among patients treated with placebo (0.9 cases per 100 person-years) (126). In both of these studies, HIV-infected persons with anergy tolerated isoniazid well, as suggested by the low rates of adverse reactions and high rates of therapy completion. These study findings do not support the routine use of preventive therapy in anergic, HIV-infected persons. Preventive therapy for TST-negative, HIV-infected persons also has not been proven effective (121,124,125) (Table 9); however, some experts recommend primary preventive therapy (to prevent M. tuberculosis infection) for TST-negative or anergic HIV-infected residents of institutions that pose an ongoing high risk for exposure to M. tuberculosis (e.g., prisons, jails, homeless shelters).

# Implications of Results of TB Preventive Therapy Trials

The effects of TB preventive therapy on mortality and progression of HIV infection appear to be limited, with the exception that such therapy can protect against the development of TB disease and its associated consequences. Moreover, the duration of this protective effect has not been clearly established for HIV-infected persons. Despite these limitations and uncertainties, preventive therapy is recommended because its benefits in preventing TB disease are thought to be greater than the risks of serious treatment-related adverse events, and such therapy benefits society by helping to prevent the spread of infection to other persons in the community.
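The percentage reductions quoted throughout these trials are protective efficacies, that is, one minus the rate ratio. The following is a worked sketch using the crude anergy-trial rates above; the published figures (17% and approximately 50%) differ slightly from this arithmetic because they reflect rounding and adjusted analyses:

```python
# Worked example: protective efficacy = 1 - (rate in treated / rate in placebo),
# using the crude incidence rates quoted above (cases per 100 person-years).
# Published figures differ slightly (rounding and covariate adjustment).

def protective_efficacy(rate_treated: float, rate_placebo: float) -> float:
    """Return efficacy as a percentage reduction in the TB rate."""
    return 100.0 * (1.0 - rate_treated / rate_placebo)

print(f"Uganda anergic arm: {protective_efficacy(2.5, 3.1):.0f}% reduction")          # ~19%
print(f"U.S. multicenter anergy trial: {protective_efficacy(0.4, 0.9):.0f}% reduction")  # ~56%
```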
The implementation of TB preventive therapy programs should be facilitated by the use of the newly recommended short-course multidrug regimens and twice-weekly isoniazid regimens, especially among patients for whom DOPT is feasible. Because of the drug interactions between rifampin and protease inhibitors or NNRTIs, the use of shorter regimens containing rifampin is contraindicated for patients taking these antiretroviral drugs. Although preventive therapy trials evaluating rifabutin use among TST-positive, HIV-infected persons have not been conducted, the expert consultants reviewed available data and agreed that the use of rifabutin instead of rifampin is valid on the basis of the same scientific principles that support the use of rifabutin for the treatment of active TB.

# PART II. RECOMMENDATIONS

This section of the report provides clinicians with recommendations for diagnosing, treating, and preventing TB among persons coinfected with HIV while concurrently promoting optimal antiretroviral care for these patients. The recommendations reflect the current state of knowledge regarding the use of antiretroviral agents, but this field of science is rapidly evolving. As new antiretroviral agents and new data regarding existing agents alter therapeutic options and preferences for antiretroviral therapy, these changes might affect future recommendations for the treatment of TB infection and disease among patients coinfected with HIV and for the treatment of HIV infection among persons with TB. Expert consultants updated these recommendations after a September 1997 CDC meeting, where they reviewed and considered available information about the scientific principles of therapy for TB and HIV.

To help clinicians make informed treatment decisions based on the most current research results, the expert consultants have given the recommendations evidence-based ratings (general recommendations have no rating). The ratings include a letter and a Roman numeral (Table 10), similar to the ratings used in previously issued guidelines (4,5). The letter indicates the strength of the recommendation, and the Roman numeral indicates the nature of the evidence supporting the recommendation. Thus, clinicians can use the ratings to differentiate between recommendations based on data from clinical trials and those based on the opinions of experts familiar with the relevant clinical practice and the scientific rationale for such practice (when clinical trial data are not available). However, these recommendations are not intended to substitute for the judgment of an expert physician.

Management of HIV-related TB disease is complex, and the clinical and public health consequences associated with treatment failure are serious. When possible, treatment of TB among HIV-infected persons should be directed by, or conducted in consultation with, a physician with extensive experience in the care of patients with these two diseases. The objectives of implementing these recommendations are to reduce TB treatment failures, prevent drug-resistant TB, and diminish the adverse effects that TB has on HIV replication. Moreover, these guidelines contribute to efforts to control and eliminate TB in the United States by minimizing the likelihood of M. tuberculosis transmission, which will prevent the occurrence of new cases of TB.
# Active Tuberculosis

# Clinical and Public Health Principles

Prompt initiation of effective antituberculosis treatment increases the probability that a patient with HIV infection who develops TB will be cured of this disease (45,131). TB treatment also quickly renders patients noninfectious (30), with a resulting reduction in the amount of M. tuberculosis transmitted to others, and it minimizes the patient's risk of death resulting from TB (41-43,132). Therefore, clinicians must immediately and thoroughly investigate the possibility of TB when a patient infected with HIV has symptoms consistent with TB. Persons suspected of having current TB disease should immediately be started on appropriate treatment, ideally with directly observed therapy (DOT) (133-135), and placed in TB isolation as necessary (136,137). Patients with TB and unknown HIV-infection status should be counseled and offered HIV testing. HIV-infected patients undergoing treatment for TB should be evaluated for antiretroviral therapy; most patients with HIV-related TB are candidates for concurrent administration of antituberculosis and antiretroviral drug therapies (4).

Health-care providers, administrators, and TB controllers must strive to promote coordinated care for patients with TB and HIV and remove existing barriers to information-sharing between TB control programs and HIV/AIDS programs. TB control programs are responsible for setting TB treatment standards for physicians in the community, promoting the awareness and use of recommended TB infection-control practices, and enforcing state and local health department requirements concerning TB case notification and early reporting of drug-susceptibility test results. Because of the complexity of managing HIV-related TB disease and the serious public health consequences of mismanagement, care for persons with HIV-related TB should be provided by, or in consultation with, experts in the management of both TB and HIV disease.

# Diagnosis of HIV-Related Tuberculosis

The typical signs and symptoms of pulmonary TB are cough with or without fever, night sweats, weight loss, and upper-lobe infiltrates with or without cavitation on chest x-rays. The diagnosis of TB for some HIV-infected patients might be difficult because TB in an immunocompromised host can be associated with atypical symptoms, a lack of typical symptoms, and a paucity of findings on chest x-rays (138-140). Among persons with AIDS, the diagnosis of TB also can be complicated by the presence of other pulmonary infections, such as Pneumocystis carinii pneumonia and Mycobacterium avium complex disease, and by the occurrence of TB in extrapulmonary sites. For patients with unusual clinical and radiographic findings, the starting point for diagnosing active TB often is a positive tuberculin skin test (TST). All patients with positive TSTs should be evaluated to rule out active TB (see Diagnosis of M. tuberculosis Infection Among HIV-Infected Persons).

# Medical Evaluation of Patients Suspected of Having Active TB

# Management of HIV-Infected Patients with Active TB

# Coadministration of TB Treatment and Antiretroviral Therapy

The following management strategies are for patients with HIV-related pulmonary TB a) who are not known to have, or who do not have, risk factors for multidrug-resistant TB and b) for whom antiretroviral therapy is appropriate.
When they first receive care for active TB disease, some patients might already be receiving antiretroviral therapy, whereas other patients might be newly diagnosed with HIV infection (Figure 1). For these newly diagnosed patients, in addition to the currently established recommendations for the immediate initiation of antituberculosis therapy, recently published guidelines (4) recommend the use of antiretroviral therapy. When treatments for HIV and TB disease are begun simultaneously, the optimal setting is one with experienced and coordinated care givers as well as accessible resources to provide a continuum of medical services (e.g., a reliable source of medications and social, psychosocial, and nutritional services).

# FIGURE 1. Recommended management strategies for patients with human immunodeficiency virus (HIV) infection and tuberculosis (TB) [figure not reproduced]

*Coadministration of rifabutin with ritonavir, saquinavir (Invirase™), or delavirdine is not recommended.
Because of drug interactions, the use of rifampin to treat TB is not recommended for patients who a) will start treatment with an antiretroviral regimen that includes a protease inhibitor or a nonnucleoside reverse transcriptase inhibitor (NNRTI) at the same time they begin treatment for TB (4) or b) have established HIV infection that is being maintained on such an antiretroviral regimen when TB is newly diagnosed and needs to be treated.
Thus, two TB treatment options are currently recommended for these patients: a) a rifabutin-based regimen or b) an alternative nonrifamycin regimen that includes streptomycin (see Treatment Options for Patients with HIV Infection and Drug-Susceptible Pulmonary TB and Figure 1). Using a rifampin-based TB treatment regimen continues to be recommended for patients with HIV infection a) who have not started antiretroviral therapy, when both the patient and the clinician agree that waiting to start such therapy would be prudent, or b) for whom antiretroviral management does not include a protease inhibitor or an NNRTI (4) (Figure 1).

# BOX 1. Components of the medical evaluation for human immunodeficiency virus-infected patients suspected of having tuberculosis

The medical evaluation should include the following questions and assessments:

# Medical History

• Ask all patients about their history of tuberculosis (TB) treatment. If the patient has previously received treatment, care providers must determine the antituberculosis drugs used, duration of treatment, history of adverse reactions, reasons for discontinuation of treatment, history of adherence with treatment, and previous antituberculosis drug-susceptibility test results.

• Question all patients about the following risk factors for drug-resistant TB: a) previous treatment for TB, especially if it was incomplete; b) previous residence in a country outside the United States where drug-resistant TB is common; c) close contact with a person who has drug-resistant TB or multidrug-resistant TB; and d) previous residence in an institution (i.e., hospital, prison, homeless shelter) with documented transmission of a drug-resistant strain of TB.

• Ask all patients about their history of antiretroviral therapy and their history of therapies to prevent opportunistic infections. If the patient has previously received or is currently receiving these treatments, care providers should determine the drugs used, duration of treatment, history of adverse reactions, and reasons for discontinuation of treatment if treatment ended.

• Ask female patients whether they might be pregnant. Women of childbearing potential with menses more than 2 weeks late should receive a pregnancy test. (See TB Treatment for HIV-Infected Pregnant Women.)

• When clinical specimens for culture and susceptibility testing cannot be obtained from patients (e.g., young children, patients with skeletal or meningeal TB), the culture and drug-susceptibility results of the Mycobacterium tuberculosis strain isolated from the infecting source-patient should be investigated and reviewed if available so that TB treatment for the current patient can be tailored appropriately. (See TB Treatment for HIV-Infected Children.)

• If necessary, perform a Mantoux-method tuberculin skin test (TST) to help diagnose culture-negative TB.

When determining the time to begin antiretroviral therapy for patients who are acutely ill with TB, clinicians and patients need to consider the existing clinical issues (e.g., drug interactions and toxicities, ability to adhere to two complex treatment regimens, and laboratory abnormalities). A staggered initiation of antituberculosis and antiretroviral treatments for patients not currently on antiretroviral therapy might promote greater adherence to the TB and HIV treatment regimens and reduce the associated drug toxicity of both regimens.
This strategy might include starting antiretroviral therapy either at the end of the 2-month induction phase of TB therapy or after TB therapy is completed. When a decision is made to delay initiation of antiretroviral therapy, clinicians should monitor the patient's condition by measuring plasma HIV RNA levels (viral load) and CD4+ T-cell counts and by assessing the HIV-associated clinical condition at least every 3 months (4), because such information will assist in decisions regarding the timing for initiating such therapy. For some patients, switching from a rifampin-based TB regimen to either a rifabutin-based or a nonrifamycin-based TB regimen will be necessary if the decision is made to start antiretroviral therapy before completion of antituberculosis therapy. Clinicians and patients should be aware that the potent effect of rifampin as a CYP450 inducer (77,80), which lowers the serum concentration of protease inhibitors and NNRTIs, continues for up to at least 2 weeks following the discontinuation of rifampin.

# BOX 1. Components of the medical evaluation for human immunodeficiency virus-infected patients suspected of having tuberculosis (Continued)

# Chest X-Ray Examination

• Perform a chest x-ray examination. HIV-related immunosuppression reduces the inflammatory reaction and cavitation of pulmonary lesions, and therefore HIV-infected patients with pulmonary TB can have atypical findings or normal chest x-rays. Children younger than age 5 years should undergo both a posterior-anterior and a lateral chest x-ray. All other persons should receive a posterior-anterior chest x-ray; additional chest x-ray examinations should be performed at the physician's discretion. Pregnant women who are being evaluated for active TB disease should undergo a chest x-ray (with the appropriate shielding) without delay, even during the first trimester of pregnancy. Patients suspected of having extrapulmonary TB should undergo a chest x-ray to rule out pulmonary TB.

# Laboratory Tests

• Collect smears for acid-fast bacilli, cultures, and drug susceptibilities from expectorated or induced sputum samples on 3 consecutive days, preferably in the mornings. Children who are unable to produce sputum spontaneously or who cannot use the sputum induction machine should be admitted to the hospital for early-morning gastric aspirates on 3 consecutive, separate days.

• Obtain a complete blood cell count, including platelets.

• Conduct chemistry panel tests, especially for liver enzyme levels (serum glutamic oxalacetic transaminase or aspartate aminotransferase [SGOT/AST] and serum glutamic pyruvic transaminase or alanine aminotransferase [SGPT/ALT]); total bilirubin; uric acid; blood urea nitrogen; and creatinine.

# Other Procedures

• Perform a baseline visual acuity exam and a test for red-green color perception for all patients who will be receiving ethambutol.

• Perform baseline audiometry tests if an aminoglycoside (e.g., streptomycin, amikacin, kanamycin) or capreomycin will be administered.

• Perform as necessary procedures such as bronchoscopies and bronchoalveolar lavage; biopsies and aspirates (e.g., of peripheral lymph nodes, visceral lymph nodes, liver, and bone marrow); mycobacterial culturing of nonrespiratory clinical specimens (e.g., blood, urine, pleural fluid); and radiologic evaluations other than chest x-rays (e.g., computerized tomographies, magnetic resonance imaging).
Thus, they should consider planning for a 2-week period between the last dose of rifampin and the first dose of protease inhibitors or NNRTIs (see TB Drug Interaction and Absorption and Table 1A of Appendix).

# Treatment Options for Patients with HIV Infection and Drug-Susceptible Pulmonary TB

A.II DOT and other strategies that promote adherence to therapy should be used for all patients with HIV-related TB.

A.II For patients who are receiving therapy with protease inhibitors or NNRTIs, the initial phase of a 6-month TB regimen consists of isoniazid, rifabutin, pyrazinamide, and ethambutol. These drugs are administered a) daily for 8 weeks or b) daily for at least the first 2 weeks, followed by twice-a-week dosing for 6 weeks, to complete the 2-month induction phase. The second phase of treatment consists of isoniazid and rifabutin administered daily or twice a week for 4 months (see Six-month RFB-based therapy in Table 1A of Appendix).

B.II For patients for whom the use of rifamycins is limited or contraindicated for any reason (e.g., intolerance to rifamycins, patient/clinician decision not to combine antiretroviral therapy with rifabutin), the initial phase of a 9-month TB regimen consists of isoniazid, streptomycin,* pyrazinamide, and ethambutol administered a) daily for 8 weeks or b) daily for at least the first 2 weeks, followed by twice-a-week dosing for 6 weeks, to complete the 2-month induction phase. The second phase of treatment consists of isoniazid, streptomycin,* and pyrazinamide administered 2-3 times a week for 7 months (see Nine-month SM-based therapy in Table 1A of Appendix).

A.I For patients who are not candidates for antiretroviral therapy, or for patients for whom a decision is made not to combine the initiation of antiretroviral therapy with TB therapy, the preferred option continues to be a 6-month regimen that consists of isoniazid, rifampin, pyrazinamide, and ethambutol (or streptomycin). These drugs are administered a) daily for 8 weeks or b) daily for at least the first 2 weeks, followed by 2-3-times-per-week dosing for 6 weeks, to complete the 2-month induction phase. The second phase of treatment consists of isoniazid and rifampin administered daily or 2-3 times a week for 4 months. Isoniazid, rifampin, pyrazinamide, and ethambutol (or streptomycin) also can be administered three times a week for 6 months (see Six-month RIF-based therapy in Table 1A of Appendix).

*Every effort should be made to continue administering streptomycin for the total duration of treatment or for at least 4 months after culture conversion (approximately 6-7 months from the start of treatment). Some experts suggest that in situations in which streptomycin is not included in the regimen for all of the recommended 9 months, ethambutol should be added to the regimen to replace streptomycin, and the duration of treatment should be prolonged from 9 months to 12 months. Alternatives to streptomycin are the injectable drugs amikacin, kanamycin, and capreomycin.

D.II TB regimens consisting of isoniazid, ethambutol, and pyrazinamide (i.e., three-drug regimens that do not contain a rifamycin, an aminoglycoside [e.g., streptomycin, amikacin, kanamycin], or capreomycin) should generally not be used for the treatment of patients with HIV-related TB; if these regimens are used for the treatment of TB, the minimum duration of therapy should be 18 months (or 12 months after documented culture conversion).
A.II Pyridoxine (vitamin B6) (25-50 mg daily or 50-100 mg twice weekly) should be administered to all HIV-infected patients who are undergoing TB treatment with isoniazid, to reduce the occurrence of isoniazid-induced side effects in the central and peripheral nervous system.

E.II Because CDC's most recent recommendations for the use of antiretroviral therapy strongly advise against interruptions of therapy,* and because alternative TB treatments that do not contain rifampin are available, previous antituberculosis therapy options that involved stopping protease inhibitor therapy to allow the use of rifampin (Option I and Option II [3]) are no longer recommended.

*To minimize the emergence of drug-resistant HIV strains, if any antiretroviral medication must be temporarily discontinued for any reason, clinicians and patients should be aware of the theoretical advantage of stopping all antiretroviral agents simultaneously, rather than continuing the administration of one or two of these agents alone (4).

# Medications and Doses for Treatment of TB

No rating: When rifabutin is used concurrently with indinavir, nelfinavir, or amprenavir, the recommended daily dose of rifabutin should be decreased from 300 mg to 150 mg (Table 2A of Appendix).

No rating: The dose of rifabutin recommended for twice-weekly administration is 300 mg, and this dose recommendation does not change if rifabutin is used concurrently with indinavir, nelfinavir, or amprenavir (Table 2A of Appendix).

No rating: Preliminary drug interaction studies suggest that when rifabutin is used concurrently with efavirenz, the dose of rifabutin for both daily and twice-weekly administration should be increased from 300 mg to 450 mg.

No rating: Three-times-per-week administration of rifabutin used in combination with antiretroviral therapy has not been studied, and thus a recommendation for adjustment of dosages cannot currently be made.

No rating: Experts do not know whether the daily dose of rifabutin should be reduced when this drug is used concurrently with either soft-gel saquinavir (Fortovase™) or nevirapine.

No rating: No modifications in the usually recommended doses of isoniazid, ethambutol, pyrazinamide, or streptomycin (Table 2A of Appendix) are necessary if these drugs are used concurrently with protease inhibitors, NNRTIs, or nucleoside reverse transcriptase inhibitors (NRTIs).

No rating: The safety and effectiveness of rifapentine (Priftin®), a rifamycin newly approved by the U.S. Food and Drug Administration for the treatment of pulmonary tuberculosis, have not been established for patients infected with HIV. Administration of rifapentine to patients with HIV-related TB is not currently recommended.

# Duration of TB Treatment

A.II The minimum duration of short-course, rifabutin-containing TB treatment regimens is 6 months, to complete a) at least 180 doses (one dose per day for 6 months) or b) 14 induction doses (one dose per day for 2 weeks) followed by 12 twice-weekly induction doses (two doses per week for 6 weeks), plus continuation doses administered daily or twice a week for 4 months (see Six-month RFB-based therapy in Table 1A of Appendix). The final decision on the duration of therapy should consider the patient's response to treatment. For patients with a delayed response to treatment (see Box 2), the duration of rifamycin-based regimens should be prolonged from 6 months to 9 months (or to 4 months after culture conversion is documented).
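Because completion is defined by doses received rather than by calendar time, the counts above can be checked mechanically. The following is a minimal sketch, assuming the intermittent option (14 daily plus 12 twice-weekly induction doses) and a twice-weekly continuation phase of about 18 weeks (roughly 36 doses, a figure inferred from the 4-month continuation schedule rather than stated explicitly in the text); the names are illustrative:

```python
# Illustrative dose-based completion check for the 6-month rifabutin regimen.
# The 36-dose continuation count is an inference (two doses per week for about
# 18 weeks), not an explicit figure from the guidelines.

DAILY_TOTAL = 180                  # one dose per day for 6 months
INTERMITTENT_TOTAL = 14 + 12 + 36  # daily induction + twice-weekly induction + continuation

def regimen_complete(doses_received: int, intermittent: bool = False) -> bool:
    """Completion depends on total doses administered, not duration alone."""
    required = INTERMITTENT_TOTAL if intermittent else DAILY_TOTAL
    return doses_received >= required

# Missed doses are made up, pushing back the end-of-therapy date
# rather than ending treatment at the 6-month mark.
print(regimen_complete(58, intermittent=True))    # False: 4 doses short of 62
print(regimen_complete(180, intermittent=False))  # True
```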
A.II The minimum duration of nonrifamycin, streptomycin-based TB treatment regimens is 9 months, to complete a) at least 60 induction doses (one dose per day for 2 months) or b) 14 induction doses (one dose per day for 2 weeks) followed by 12-18 induction doses (two to three doses per week for 6 weeks), plus either 60 continuation doses (two doses per week for 30 weeks) or 90 continuation doses (three doses per week for 30 weeks).

A.II When making the final decision on the duration of therapy, clinicians should consider the patient's response to treatment. For patients with a delayed response to treatment (see Box 2), the duration of streptomycin-based regimens should be prolonged from 9 months to 12 months (or to 6 months after culture conversion is documented).

A.III Interruptions in therapy because of drug toxicity or other reasons should be taken into consideration when calculating the end-of-therapy date for individual patients. Completion of therapy is based on the total number of medication doses administered and not on the duration of therapy alone.

A.III Reinstitution of therapy for patients with interrupted TB therapy might require a continuation of the regimen originally prescribed (as long as needed to complete the recommended duration of the particular regimen) or a complete renewal of the regimen. In either situation, when therapy is resumed after an interruption of ≥2 months, sputum samples (or other clinical samples as appropriate) should be taken for smear, culture, and drug-susceptibility testing.

*Three-times-per-week rifabutin regimens, used in combination with antiretroviral therapy, have not been studied.

# BOX 2. Components of the monthly medical evaluation for human immunodeficiency virus-infected patients undergoing treatment for active tuberculosis

For patients infected with human immunodeficiency virus (HIV) who are undergoing treatment for active tuberculosis (TB), clinicians should include the following components in the monthly evaluation:

• Once a month, evaluate symptoms and signs of TB (response to treatment) by conducting a) a physical examination (the nature and extent of this evaluation will depend on the patient's symptoms and the site of disease) and b) for patients with pulmonary TB, an examination by smear and culture of an expectorated or induced sputum specimen until cultures are no longer positive for Mycobacterium tuberculosis.

• Perform as necessary for individual patients laboratory tests such as a complete blood cell count, platelet count, and tests for serum glutamic oxalacetic transaminase or aspartate aminotransferase (SGOT/AST), serum glutamic pyruvic transaminase or alanine aminotransferase (SGPT/ALT), alkaline phosphatase, total bilirubin, uric acid, blood urea nitrogen, and creatinine.

• To assist in the decision about the duration of TB treatment, investigate the possibility of a delayed response to treatment (Table 1A of Appendix). A delayed response to treatment should be suspected (and in most cases treatment duration should be prolonged) if, by the end of the 2-month induction phase of therapy, patients a) continue to be culture-positive for M. tuberculosis or b) do not experience resolution of signs or symptoms of TB, or experience progression of signs or symptoms of TB (e.g., persistent fever, progressive weight loss, or increase in size of lymph nodes, abscesses, or other tuberculous lesions, none of which can be accounted for by a disease other than TB). (See Duration of TB Treatment.)

• Some factors potentially associated with TB treatment failure are a large mycobacterial load and extensive lung cavitation at baseline, nonadherence with the drug regimen (even among patients assumed to be on DOT), inappropriately low medication doses, and impaired absorption of drugs. Immediately institute corrective measures for those factors amenable to intervention.

• Because patients with HIV infection often are treated with multiple drugs in addition to antituberculosis drugs, at each visit review all medications that the patient is taking and assess any change in medications for potential drug interactions with TB medications. Efforts to manage these potential problems related to drug interactions require the coordinated efforts of care givers for HIV and TB disease. (See TB Drug Interaction and Absorption.)

• Because several antituberculosis drugs have hepatotoxicity as a potential side effect (Table 2A of Appendix), advise all persons taking TB medications about the symptoms consistent with hepatitis (e.g., anorexia, nausea, vomiting, abdominal pain, jaundice) and instruct them to discontinue all TB medications immediately and seek medical attention promptly if they exhibit such symptoms. These patients usually will need an examination by a physician, liver function tests, and a planned strategy for restarting TB treatment.

• If ethambutol is administered, perform a monthly visual acuity exam and a test for red-green color perception.

• If streptomycin is administered, perform audiometry and renal function tests as needed.

# Management of the Coadministration of TB and HIV Therapies, Including the Potential for Paradoxical Reactions

When antituberculosis treatment has been started, all patients should be monitored for response to antituberculosis therapy, drug-related toxicity, and drug interactions. Detailed recommendations for managing antiretroviral therapy are published elsewhere (4), and consultation with experts in this area is highly recommended. The frequency and type of most TB medication side effects are similar among TB patients with and without HIV infection (30,65,67). When caring for HIV-infected persons, clinicians must be aware of the following problems that can result from the administration of TB medications: a) patients might have a higher predisposition toward isoniazid-related peripheral neuropathy; b) evaluation of dermatologic reactions related to TB medications might be complicated because HIV-infected patients are subject to several dermatologic diseases related to HIV disease or to medications used for other treatment or prophylaxis reasons; and c) patients undergoing concurrent therapy with rifabutin and protease inhibitors or NNRTIs are at risk for rifabutin toxicity associated with increased serum concentrations of this drug. The reported adverse events associated with rifabutin toxicity include arthralgias, uveitis, and leukopenia (86,101-103). Detailed recommendations for managing these adverse reactions are published elsewhere and should be consulted (2,64).

Paradoxical reactions (temporary exacerbation of symptoms, signs, or radiographic manifestations of TB, such as recurrence of fever, enlarged lymph nodes, or appearance of cavitation on a previously normal chest x-ray, among patients who have experienced a good clinical and bacteriologic response to antituberculosis therapy) have been reported among patients coinfected with HIV who have restored immune function because of antiretroviral therapy (76).
The timing and severity of paradoxical reactions associated with antiretroviral therapy are not well understood; therefore, experts do not know whether the occurrence of these reactions should affect the timing of initiating or changing antiretroviral therapy when such therapy is indicated for a patient with HIV infection. However, because an association between paradoxical reactions and initiation of antiretroviral therapy has been noted, clinicians should be aware of this possibility and discuss the risks with patients undergoing therapy for active TB.
# Monthly Medical Evaluation and the Diagnosis and Management of Paradoxical Reactions
A.II All patients should receive a monthly clinical evaluation (see Box 2) to monitor their response to treatment, adherence to treatment, and medication side effects (Table 2A of Appendix). During the early days of therapy, the interval between these evaluations might be shorter (e.g., every 2 weeks).
# A.II Patients suspected of having paradoxical reactions should be evaluated to rule out other causes for their clinical presentation (e.g., TB treatment failure) before their signs and symptoms are attributed to a paradoxical reaction.
C.III Some experts recommend that, to avoid paradoxical reactions, clinicians delay the initiation of or changes in antiretroviral therapy until the signs and symptoms of TB are well controlled (possibly 4-8 weeks from the initiation of TB therapy).
No rating For patients with a paradoxical reaction in whom the symptoms are not severe or life-threatening, management might consist of symptomatic therapy and no change in antituberculosis or antiretroviral therapy. For patients with a paradoxical reaction associated with severe or life-threatening clinical manifestations (e.g., uncontrollable fever, airway compromise from enlarging lymph nodes, enlarging serosal fluid collections [pleuritis, pericarditis, peritonitis], sepsis-like syndrome), management might include hospitalization and possibly a time-limited course of corticosteroids (e.g., prednisone started daily at a dose of 60-80 mg and reduced after 1 or 2 weeks, with the resolution of symptoms as a guide; in most cases, corticosteroid therapy should last no more than 4-6 weeks).
# TB Drug Interaction and Absorption
E.II Given the expected drug interactions that would result in markedly decreased serum levels of antiretroviral agents, and given the overlapping toxicities, the coadministration of rifampin with any of the protease inhibitors or with NNRTIs, as well as the coadministration of rifabutin with ritonavir, hard-gel saquinavir (Invirase™), or delavirdine, is contraindicated.
# A.II The potent effect of rifampin as a CYP450 inducer, which lowers the serum concentration of protease inhibitors and NNRTIs, is expected to continue for at least 2 weeks following the discontinuation of rifampin. Therefore, to diminish the likelihood of adverse effects on drug metabolism, clinicians should plan the start of therapy with protease inhibitors or NNRTIs at least 2 weeks after the date of the last dose of rifampin.
# A.II Rifabutin is a less potent CYP450 inducer than rifampin and thus can be used (with adjustments in dosages) concurrently with the NNRTIs nevirapine or efavirenz or with certain protease inhibitors (e.g., indinavir, nelfinavir, and possibly soft-gel saquinavir [Fortovase™] and amprenavir).
No rating Indinavir serum concentrations are decreased by rifabutin-related induction of the hepatic cytochrome P450; therefore, when indinavir is used in combination with rifabutin, the dose of indinavir usually is increased from 800 mg every 8 hours to 1,200 mg every 8 hours.
No rating Nelfinavir serum concentrations are also decreased when nelfinavir is used in combination with rifabutin (Table 1A of Appendix); however, the resultant metabolite of nelfinavir is known to be active against HIV. Nevertheless, some experts suggest increasing the dose of nelfinavir from 750 mg three times per day to 1,000 mg three times per day when used in combination with rifabutin.
No rating Experts do not know whether dose modifications are needed for soft-gel saquinavir (Fortovase™), amprenavir, nevirapine, or efavirenz if these agents are used in combination with rifabutin.
No rating Many other medications commonly used by patients with HIV infection have drug interactions with the rifamycins (rifampin or rifabutin) of sufficient magnitude to require interventions such as dose adjustments or use of alternative therapies. Some examples of these drugs are hormonal contraceptives, dapsone, ketoconazole, fluconazole, itraconazole, narcotics (including methadone), anticoagulants, corticosteroids, cardiac glycosides, hypoglycemics (sulfonylureas), diazepam, beta-blockers, anticonvulsants, and theophylline.
No rating Malabsorption of antituberculosis drugs has been demonstrated in some patients with HIV infection, and in some cases, it has been associated with TB treatment failures and the selection of drug-resistant M. tuberculosis bacilli (141-145). Therapeutic drug monitoring has been advocated by some experts as an adjunct in the management of HIV-related TB (146). This approach might be useful when evaluating patients with TB treatment failure or relapse and in the treatment of multidrug-resistant (MDR) TB. However, the role of therapeutic drug monitoring in the routine management of TB among HIV-infected patients has not been established and is not presently recommended.
# Treatment of TB in Special Situations
The following general treatment recommendations address special situations such as drug-resistant forms of HIV-related TB, TB among HIV-infected pregnant women, TB among HIV-infected children, and extrapulmonary HIV-related TB. Detailed recommendations for managing these patients are published elsewhere (2,64,147-150), and consultation with experts in these areas is highly recommended.
# Treatment of Drug-Resistant TB
A.II TB disease resistant to isoniazid only. The treatment regimen should generally consist of a rifamycin (rifampin or rifabutin), pyrazinamide, and ethambutol for the duration of treatment. Intermittent therapy administered twice weekly can be used following at least 2 weeks (14 doses) of daily induction therapy (see Duration of TB Treatment). The recommended duration of treatment is 6-9 months, or 4 months after culture conversion. Isoniazid is generally stopped when resistance (>1% of bacilli resistant to 1.0 µg/mL of isoniazid) to this drug is discovered; however, when low-level resistance is discovered (>1% of bacilli resistant to 0.2 µg/mL of isoniazid, but no resistance to 1.0 µg/mL of isoniazid), some experts suggest continuing to use isoniazid as part of the treatment regimen. Because the development of acquired rifamycin resistance would result in MDR TB, clinicians should carefully supervise and manage TB treatment for these patients.
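The isoniazid decision just described hinges on two laboratory thresholds. As a minimal, hypothetical sketch (the function and argument names are ours, and the actual decision always rests with the treating clinician), the rule can be expressed as:

```python
# Sketch of the isoniazid (INH) decision rule described above. Resistance is
# defined as >1% of bacilli resistant at the stated INH concentration.
# Hypothetical helper for illustration; clinical judgment governs in practice.

def continue_isoniazid(resistant_at_0_2_ug_ml: bool, resistant_at_1_0_ug_ml: bool) -> bool:
    """Whether INH may remain in the regimen under the rule stated above."""
    if resistant_at_1_0_ug_ml:
        return False  # high-level resistance: isoniazid is generally stopped
    if resistant_at_0_2_ug_ml:
        return True   # low-level resistance only: some experts continue isoniazid
    return True       # fully susceptible: isoniazid remains part of the regimen

print(continue_isoniazid(resistant_at_0_2_ug_ml=True, resistant_at_1_0_ug_ml=False))  # True
```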
# TB Treatment for HIV-Infected Patients with Extrapulmonary TB
The basic principles that support the treatment of pulmonary TB in HIV-infected patients also apply to extrapulmonary forms of the disease. Most extrapulmonary forms of TB (including TB meningitis, tuberculous lymphadenitis, pericardial TB, pleural TB, and disseminated or miliary TB) are more common among persons with advanced-stage HIV disease (155,156) than among patients with asymptomatic HIV infection. The drug regimens and treatment durations that are recommended for treating pulmonary TB in HIV-infected adults and children (Table 1A of Appendix) are also recommended for treating most patients with extrapulmonary disease. However, for certain forms of extrapulmonary disease, such as meningeal, bone, and joint TB, using a rifamycin-based regimen for at least 9 months is generally recommended.
# Latent M. tuberculosis Infection
# Clinical and Public Health Principles
When caring for persons with HIV infection, clinicians should make aggressive efforts to identify those who also are infected with M. tuberculosis. Because the reliability of the tuberculin skin test (TST) can diminish as the CD4+ T-cell count declines, TB screening with TST should be performed as soon as possible after HIV infection is diagnosed. Because the risk of infection and disease with M. tuberculosis is particularly high among HIV-infected contacts of persons with infectious pulmonary or laryngeal TB, these persons must be evaluated for TB as soon as possible after learning of exposure to a patient with infectious TB.
Health-care providers, administrators, and TB controllers must coordinate their work and establish TB screening initiatives in settings where a) the prevalence of infection with M. tuberculosis among persons with HIV infection is expected to be high and b) referral for medical evaluation and TB preventive therapy can be accomplished. Such settings include prisons, jails, prenatal-care programs, drug treatment programs, syringe exchange programs, HIV specialty clinics, acute-care hospitals serving populations at high risk of TB, AIDS patient group residences, some community-health centers, psychiatric institutions, mental health residences, and homeless shelters. All HIV counseling and testing sites must have mechanisms in place to ensure that persons identified with HIV infection receive tuberculin skin testing. TB control programs in jurisdictions that have HIV reporting requirements should make efforts to ensure that all persons with HIV infection have TSTs.
A.II Because of the complexity of problems associated with active TB disease in HIV-infected persons, and as part of the efforts to control and eliminate TB in the United States, all HIV-infected persons identified as latently infected with M. tuberculosis should be offered preventive therapy.
# BOX 3. Components of the baseline medical evaluation for tuberculosis preventive treatment for patients infected with human immunodeficiency virus
When conducting a medical evaluation to rule out active tuberculosis (TB) and administer preventive treatment for patients infected with human immunodeficiency virus (HIV), clinicians should include the following questions and assessments:
# Medical History
• Ask patients about their history of recent close contact with a person who has TB disease. If the infecting source-patient is known, his or her culture and drug-susceptibility results should be investigated and reviewed so that the preventive therapy regimen can be tailored appropriately (see Treatment of Latent M. tuberculosis Infection in Special Situations).
• Assess patients for their risk of increased toxicity associated with the medications used for TB preventive therapy (e.g., history of excessive alcohol ingestion, liver disease, hepatitis, chronic use of other medications).
• Assess patients for contraindications to TB preventive therapy (Table 3A of Appendix).
• When persons have previously received TB preventive treatment, determine what drugs were used, the duration of treatment, and any history of adverse reactions (Table 2A of Appendix) and adherence to preventive therapy.
• Ask patients about their history of antiretroviral therapy and their history of therapies to prevent opportunistic infections. If a patient has ever received or is receiving any of these treatments, the care provider must determine the drugs used, duration of treatment, history of adverse reactions, potential for drug interactions with TB medications, and any reasons for discontinuation of treatment if treatment ended.
# Chest X-Ray Examination
• A chest x-ray is indicated for all persons being considered for preventive therapy, to rule out active pulmonary TB disease. Children younger than 5 years old should undergo both a posterior-anterior and a lateral chest x-ray. All other individuals should receive a posterior-anterior chest x-ray only; additional x-rays should be performed at the physician's discretion. Pregnant women who have a positive TST or who have negative TST results but are recent contacts of a person who has infectious TB disease should undergo a chest x-ray (with appropriate shielding) without delay, even during the first trimester of pregnancy.
# Laboratory Tests
• Obtain a complete blood cell count, and also obtain a platelet count if the patient will be treated with a rifamycin (rifampin or rifabutin).
• Conduct chemistry panel tests, especially for liver enzyme levels (serum glutamic oxalacetic transaminase or aspartate aminotransferase [SGOT/AST] and serum glutamic pyruvic transaminase or alanine aminotransferase [SGPT/ALT]) and total bilirubin. In addition, check uric acid levels if the patient will be treated with pyrazinamide.
# BOX 4. Components of the monthly medical evaluation for human immunodeficiency virus-infected patients undergoing preventive treatment for latent Mycobacterium tuberculosis infection
For patients infected with human immunodeficiency virus (HIV) who are undergoing preventive treatment for latent Mycobacterium tuberculosis infection, clinicians should include the following components in the monthly evaluation:
• At every monthly evaluation, review signs and symptoms potentially related to active tuberculosis (TB) disease as well as signs and symptoms of drug reactions potentially related to antituberculosis drugs (Table 2A of Appendix).
• At the start of preventive therapy and at each subsequent monthly visit, remind patients of the need to immediately discontinue preventive therapy and seek prompt medical attention if signs and symptoms of hepatotoxicity appear (e.g., anorexia, abdominal pain, nausea, vomiting, change in color of urine and feces, jaundice).
• Patients with HIV infection often are treated with multiple drugs in addition to antituberculosis drugs. At each visit, all medications that a patient is taking should be reviewed and assessed for potential drug interactions with TB medications. Some examples of these drugs are hormonal contraceptives, ketoconazole, fluconazole, itraconazole, narcotics (including methadone), anticoagulants, corticosteroids, cardiac glycosides, hypoglycemics (sulfonylureas), diazepam, beta-blockers, anticonvulsants, and theophylline. Efforts to manage these potential problems related to drug interactions require the coordinated efforts of the caregivers managing the patient's HIV infection and TB.
The choices for preventive treatment for persons who are likely to be infected with a strain of M. tuberculosis resistant to both isoniazid and rifamycins are published elsewhere (161). In general, the recommended preventive therapy regimens for these persons include the use of a combination of at least two antituberculosis drugs to which the infecting strain is believed to be susceptible (e.g., ethambutol and pyrazinamide, or levofloxacin and ethambutol). The clinician should review the drug-susceptibility pattern of the M. tuberculosis strain isolated from the infecting source-patient before choosing a preventive therapy regimen.
# Treatment of Latent M. tuberculosis Infection in Special Situations
# A.III For HIV-infected women who are candidates for TB preventive therapy, the initiation or discontinuation of preventive therapy should not be delayed on the basis of pregnancy alone, even during the first trimester. A 9-month regimen of isoniazid administered daily or twice a week is the only recommended option (Table 3A of Appendix).
No rating For HIV-infected children who are candidates for TB preventive therapy, a 12-month regimen of isoniazid administered daily is recommended by the American Academy of Pediatrics (162).
# Follow-up of HIV-Infected Persons Who Have Completed Preventive Therapy
# CONCLUSIONS
Implementing TB prevention and control strategies for persons infected with HIV has always been important and is even more critical now that a larger selection of new, more potent antiretroviral drugs has enabled clinicians to implement therapies that improve the health and prolong the lives of HIV-infected persons. These antiretroviral therapeutic strategies often include the use of drugs such as the protease inhibitors or the nonnucleoside reverse transcriptase inhibitors (NNRTIs), which, because of drug interactions, cannot be used concurrently with certain other drugs (e.g., rifampin). Thus, to improve the diagnosis and management of TB and HIV coinfection, TB control programs need to be prepared for the following challenges:
• Ensure that all patients with TB receive HIV counseling and testing either on site or elsewhere. Patients with latent M. tuberculosis infection who are at risk for HIV infection also should receive HIV counseling and testing.
• Initiate prompt and effective antituberculosis treatment (ideally with directly observed therapy) for all patients diagnosed with HIV-related TB.
• Promote optimal antiretroviral therapy for patients with M. tuberculosis and HIV infection.
• Become knowledgeable about the indications, potential dosing adjustments, and monitoring requirements of a rifabutin-containing regimen (or an alternative regimen that does not contain a rifamycin) for the treatment of TB in patients who are undergoing antiretroviral therapy with protease inhibitors or NNRTIs.
• Identify potential risk factors for TB treatment failure or relapse as well as the potential for paradoxical treatment reactions, and learn how to recognize and manage these outcomes.
• Follow procedures to ensure early recognition and implementation of effective treatment for drug-resistant TB.
• Recognize that previous options that involved stopping protease inhibitor therapy to allow the use of rifampin for TB treatment are no longer recommended for two reasons: a) the most recent guidelines for the use of antiretroviral therapy advise against interrupting HIV therapy, and b) alternatives for TB therapy that do not contain rifampin are available.
• Coordinate efforts and establish TB screening initiatives in settings where a) the prevalence of infection with M. tuberculosis among persons with HIV infection is expected to be high and b) referral for medical evaluation and therapy for active or latent TB is possible.
• Be aware of changes in options for TB preventive therapy. In addition to recommendations for using 9 months of isoniazid daily or twice a week, new short-course multidrug regimens (e.g., a 2-month course of a rifamycin such as rifabutin or rifampin with pyrazinamide) can be prescribed for HIV-infected patients with latent M. tuberculosis infection.
When faced with treatment choices, TB controllers and clinicians can use these recommendations to make informed decisions based on the most current research results available, keeping in mind that as new antiretroviral and antituberculosis agents become available, these guidelines will likely change. The aim of these recommendations is to help reduce TB treatment failures, prevent cases of drug-resistant TB, diminish the adverse effects that TB has on HIV replication, and support efforts not only to control TB, but to eliminate it from the United States.
The following footnotes accompany the Appendix table of recommended treatment options:
No contraindication exists for the use of RFB with NRTIs. If the patient also is taking indinavir, nelfinavir, or amprenavir, the daily dose of RFB is decreased from 300 mg to 150 mg. The twice-weekly dose of RFB (300 mg) remains unchanged if the patient is also taking these protease inhibitors. If the patient also is taking efavirenz, the daily or twice-weekly dose of RFB is increased from 300 mg to 450 mg. Three-times-a-week administration of RFB used in combination with antiretroviral therapy has not been studied.
† The concurrent use of RFB is contraindicated with ritonavir, saquinavir (Invirase™), and delavirdine. Information regarding the use of rifabutin with saquinavir (Fortovase™), amprenavir, efavirenz, and nevirapine is limited.
§ Not applicable. If nelfinavir, indinavir, or amprenavir is administered with RFB, blood concentrations of these protease inhibitors decrease. Thus, when RFB is used concurrently with any of these three drugs, the daily dose of RFB is reduced from 300 mg to 150 mg (the twice-weekly dose of RFB is unchanged, however).
¶ NA=not applicable. If efavirenz is administered with RFB, blood concentrations of RFB decrease. Thus, when RFB is used concurrently with efavirenz, the dose of RFB for both daily and twice-weekly administration should be increased from 300 mg to 450 mg.
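Taken together, these footnotes and the dosing guidance under TB Drug Interaction and Absorption reduce to a small lookup. The Python sketch below is illustrative only (the set and function names are invented; it is not a dosing reference):

```python
# Illustrative sketch of the rifabutin (RFB) dose adjustments summarized in the
# footnotes above. Drug names are from the text; the helper itself is hypothetical.

RFB_CONTRAINDICATED = {"ritonavir", "hard-gel saquinavir", "delavirdine"}
RFB_DAILY_HALVED = {"indinavir", "nelfinavir", "amprenavir"}  # 300 mg -> 150 mg daily
RFB_INCREASED = {"efavirenz"}                                 # 300 mg -> 450 mg

def rifabutin_dose_mg(companion: str, schedule: str) -> int:
    """Adjusted RFB dose, given the companion antiretroviral and dosing schedule."""
    if companion in RFB_CONTRAINDICATED:
        raise ValueError(f"rifabutin is contraindicated with {companion}")
    if companion in RFB_INCREASED:
        return 450                    # applies to both daily and twice-weekly dosing
    if companion in RFB_DAILY_HALVED and schedule == "daily":
        return 150                    # twice-weekly dose stays at 300 mg
    return 300                        # e.g., with NRTIs, no adjustment is needed

# Companion-drug doses also change with RFB (see TB Drug Interaction and
# Absorption): indinavir 800 mg -> 1,200 mg every 8 hours; nelfinavir
# 750 mg -> 1,000 mg three times per day (suggested by some experts).
print(rifabutin_dose_mg("indinavir", "daily"))         # 150
print(rifabutin_dose_mg("efavirenz", "twice_weekly"))  # 450
```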
# Acknowledgments
Staff with the Division of Tuberculosis Elimination (DTBE) at CDC's National Center for HIV, STD, and TB Prevention (NCHSTP) and the panel of expert consultants who prepared this report thank the following persons who attended a September 1998 meeting sponsored by the American Thoracic Society and CDC and provided advice about revising the recommendations on TB preventive therapy: George W. Comstock
# A.II TB disease resistant to rifampin only. The 9-month treatment regimen should generally consist of an initial 2-month phase of isoniazid, streptomycin, pyrazinamide, and ethambutol (see Nine-month SM-based therapy in Table 1A of Appendix). The second phase of treatment should consist of isoniazid, streptomycin, and pyrazinamide administered for 7 months. Because the development of acquired isoniazid resistance would result in MDR TB, clinicians should carefully supervise and manage TB treatment for these patients.
# A.III Multidrug-resistant TB (resistant to both isoniazid and rifampin). These patients should be managed by or in consultation with physicians experienced in the management of MDR TB. Findings from a retrospective study of patients with MDR TB strongly indicate that early aggressive treatment with appropriate regimens (based on the known or suspected drug-resistance pattern of the M. tuberculosis isolate) markedly decreases deaths associated with MDR TB (63,151-153). Most drug regimens currently used to treat MDR TB include an aminoglycoside (e.g., streptomycin, kanamycin, amikacin) or capreomycin, and a fluoroquinolone. The recommended duration of treatment for MDR TB in HIV-seropositive patients is 24 months after culture conversion, and posttreatment follow-up visits to monitor for TB relapse should be conducted every 4 months for 24 months. Because of the serious personal and public health concerns associated with MDR TB, health departments should always use DOT for these patients and take whatever steps are needed to ensure their adherence to therapy.
# TB Treatment for HIV-Infected Pregnant Women
HIV-infected pregnant women who have a positive M. tuberculosis culture or who are suspected of having TB disease should be treated without delay. Recommended TB treatment regimens for HIV-infected pregnant women are those that include a rifamycin (Table 1A of Appendix). Routine use of pyrazinamide during pregnancy is recommended by international organizations but has not been recommended in the United States because of inadequate teratogenicity data (2). However, for HIV-infected pregnant women, the benefits of a TB treatment regimen that includes pyrazinamide outweigh potential pyrazinamide-related risks to the fetus. Aminoglycosides (e.g., streptomycin, kanamycin, amikacin) and capreomycin are contraindicated for all pregnant women because of potential adverse effects on the fetus. Considerations for antiretroviral therapy for pregnant HIV-infected women have been published elsewhere (4).
# TB Treatment for HIV-Infected Children
If drug-susceptibility results are not available, a four-drug regimen (e.g., isoniazid, a rifamycin, pyrazinamide, and ethambutol) for 2 months, followed by intermittent administration of isoniazid and a rifamycin for 4 months, is recommended. Considerations for antiretroviral therapy for children and adolescents have been published elsewhere (154).
# Diagnosis of M. tuberculosis Infection Among HIV-Infected Persons
The Mantoux-method TST, with 5 TU of purified protein derivative, is used to diagnose M. tuberculosis infection. A TST reaction size of ≥5 mm of induration is considered positive (i.e., indicative of M. tuberculosis infection) in persons who are infected with HIV. Persons with a TST reaction size of <5 mm but with a history of exposure to TB also could be infected with M. tuberculosis; this possibility should be investigated (157).
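The TST cutoff just stated is a simple threshold. A minimal illustrative sketch follows (the constant and function names are hypothetical; cutoffs for other risk groups differ and are outside its scope):

```python
# Sketch of the TST interpretation rule for HIV-infected persons described above:
# induration >= 5 mm is read as positive.

HIV_TST_CUTOFF_MM = 5

def tst_positive_hiv(induration_mm: float) -> bool:
    """Apply the >=5 mm cutoff used for persons infected with HIV."""
    return induration_mm >= HIV_TST_CUTOFF_MM

print(tst_positive_hiv(6.0))  # True
print(tst_positive_hiv(4.0))  # False, but a history of TB exposure should still be investigated
```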
Whenever M. tuberculosis infection is suspected in a patient, an evaluation to rule out active TB and assess the need for preventive therapy should be conducted (see Box 3). This evaluation should include HIV counseling and testing for persons whose HIV status is unknown but who are at risk for HIV infection.
# Tuberculin Skin Testing Among HIV-Infected Persons
A.I As soon as possible after HIV infection is diagnosed, all persons should receive a TST unless previously tested and found to be TST-positive.
# A.II As soon as possible (ideally within 7 days) after learning of an exposure to a patient with infectious TB, all HIV-infected persons should be evaluated for TB and receive a TST, regardless of any previous TST results.
# B.III TSTs should be conducted periodically for HIV-infected persons who are TST-negative on initial evaluation and who belong to populations with a substantial risk of exposure to M. tuberculosis (e.g., residents of prisons, jails, or homeless shelters).
C.III Some experts recommend repeat TSTs for HIV-infected persons who are TST-negative on initial evaluation and whose immune function has been restored by effective antiretroviral therapy.
C.I Because results of anergy testing in HIV-infected populations in the United States do not seem useful to clinicians making decisions about preventive therapy, anergy testing is no longer recommended as a routine component of TB screening among HIV-infected persons (157). However, some experts support the use of anergy testing to help guide individual decisions regarding preventive therapy, and some recommend that a TST be performed on patients previously classified as anergic if evidence indicates that these patients' immune systems have responded to antiretroviral therapy.
All patients on twice-a-week dosing regimens should receive DOPT; some experts also recommend DOPT for patients on 2-month preventive therapy regimens. The administration of TB preventive therapy regimens that contain rifampin is contraindicated for patients who take protease inhibitors or NNRTIs. For these patients, the substitution of rifabutin for rifampin in preventive therapy regimens is recommended; however, the substitution of rifapentine for rifampin is not currently recommended because rifapentine's safety and effectiveness have not been established for patients infected with HIV.
# Candidates for TB Preventive Therapy Among HIV-Infected Persons
# Recommended Preventive Therapy Regimens for Patients Receiving Protease Inhibitors or NNRTIs
A.II For HIV-infected adults, a 9-month regimen of isoniazid can be administered daily.
# B.I For HIV-infected adults, a 9-month regimen of isoniazid can be administered twice a week (DOPT should be used with intermittent dosing regimens).
# B.III For HIV-infected adults, a 2-month regimen of rifabutin and pyrazinamide can be administered daily.
No rating The concurrent administration of rifabutin is contraindicated with ritonavir, hard-gel saquinavir (Invirase™), and delavirdine.
# Recommended Preventive Therapy Regimens for Patients Not Receiving Protease Inhibitors or NNRTIs
A.II For HIV-infected adults, a 9-month regimen of isoniazid can be administered daily.
# B.I For HIV-infected adults, a 9-month regimen of isoniazid can be administered twice a week.
A.I For HIV-infected adults, a 2-month regimen of rifampin and pyrazinamide can be administered daily.
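The two regimen lists above differ only in which rifamycin-containing option is permitted. A brief illustrative sketch of the selection logic (the names are ours, not the guidelines'):

```python
# Sketch of the preventive-therapy options listed above, keyed on whether the
# patient is taking protease inhibitors (PIs) or NNRTIs. Hypothetical helper.

from typing import List

def preventive_regimen_options(on_pi_or_nnrti: bool) -> List[str]:
    """Preventive-therapy options for HIV-infected adults, per the lists above."""
    options = [
        "isoniazid daily x 9 months",
        "isoniazid twice weekly x 9 months (with DOPT)",
    ]
    # Rifampin-containing preventive regimens are contraindicated with PIs/NNRTIs;
    # rifabutin is substituted for rifampin in that case.
    options.append("rifabutin + pyrazinamide daily x 2 months" if on_pi_or_nnrti
                   else "rifampin + pyrazinamide daily x 2 months")
    return options

print(preventive_regimen_options(on_pi_or_nnrti=True))
```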
# Duration of TB Preventive Therapy
A.II Daily isoniazid regimens should consist of at least 270 doses, administered within 9 months (or within up to 12 months if interruptions in therapy occur).
# Appendix
Recommended Treatment Options for Persons with Human Immunodeficiency Virus-Related Tuberculosis Infection and Disease
* For patients with intolerance to PZA, some experts recommend the use of a rifamycin (RIF or RFB) alone for preventive treatment. Most experts agree that available data support the recommendation that this treatment can be administered for as short a duration as 4 months, although some experts would treat for 6 months.
† The concurrent use of RFB is contraindicated with ritonavir, hard-gel saquinavir (Invirase™), and delavirdine. The information regarding the use of RFB with soft-gel saquinavir (Fortovase™), amprenavir, efavirenz, and nevirapine is limited.
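Like treatment of active disease, completion of preventive therapy under the rule above is a dose count within a bounded window. A final illustrative sketch for the TB recommendations (hypothetical names; the 12-month bound is approximated as 365 days):

```python
# Sketch of the completion rule for daily isoniazid preventive therapy described
# above: at least 270 doses, delivered within 9 months, extendable to 12 months
# when interruptions occur. Dates and names here are illustrative only.

from datetime import date

MIN_DOSES = 270
MAX_WINDOW_DAYS = 365  # ~12 months allowed when therapy is interrupted

def preventive_course_complete(doses_given: int, start: date, today: date) -> bool:
    """True if the dose total is met and the extended 12-month window has not elapsed."""
    within_window = (today - start).days <= MAX_WINDOW_DAYS
    return doses_given >= MIN_DOSES and within_window

print(preventive_course_complete(270, date(1999, 1, 1), date(1999, 10, 15)))  # True
print(preventive_course_complete(240, date(1999, 1, 1), date(1999, 10, 15)))  # False
```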
These recommendations update information concerning the vaccine and antiviral agents available for controlling influenza during the 1997-98 influenza season (superseding MMWR 1996;45:1-24). The principal changes include information about a) the influenza virus strains included in the trivalent vaccine for 1997-98, b) the vaccination of pregnant and breastfeeding women, and c) side effects and adverse reactions.# INTRODUCTION Influenza A viruses are classified into subtypes on the basis of two surface antigens: hemagglutinin (H) and neuraminidase (N). Three subtypes of hemagglutinin (H1, H2, and H3) and two subtypes of neuraminidase (N1 and N2) are recognized among influenza A viruses that have caused widespread human disease. Immunity to these antigens-especially to the hemagglutinin-reduces the likelihood of infection and lessens the severity of disease if infection occurs. Infection with a virus of one subtype confers little or no protection against viruses of other subtypes. Furthermore, over time, antigenic variation (antigenic drift) within a subtype may be so marked that infection or vaccination with one strain may not induce immunity to distantly related strains of the same subtype. Although influenza B viruses have shown more antigenic stability than influenza A viruses, antigenic variation does occur. For these reasons, major epidemics of respiratory disease caused by new variants of influenza continue to occur. The antigenic characteristics of circulating strains provide the basis for selecting the virus strains included in each year's vaccine. Typical influenza illness is characterized by abrupt onset of fever, myalgia, sore throat, and nonproductive cough. Unlike other common respiratory illnesses, influenza can cause severe malaise lasting several days. More severe illness can result if either primary influenza pneumonia or secondary bacterial pneumonia occurs. During influenza epidemics, high attack rates of acute illness result in both increased numbers of visits to physicians' offices, walk-in clinics, and emergency rooms and increased hospitalizations for management of lower respiratory tract complications. Elderly persons and persons with underlying health problems are at increased risk for complications of influenza. If they become ill with influenza, such members of high-risk groups (see Groups at Increased Risk for Influenza-Related Complications) are more likely than the general population to require hospitalization. During major epidemics, hospitalization rates for persons at high risk may increase twofold to fivefold, depending on the age group. Previously healthy children and younger adults also may require hospitalization for influenza-related complications, but the relative increase in their hospitalization rates is less than for persons who belong to high-risk groups. An increase in mortality further indicates the impact of influenza epidemics. Increased mortality results not only from influenza and pneumonia but also from cardiopulmonary and other chronic diseases that can be exacerbated by influenza. An estimated >20,000 influenza-associated deaths occurred during each of nine different U.S. epidemics from 1972-73 to 1991-92, and >40,000 influenza-associated deaths occurred during each of four of these epidemics. More than 90% of the deaths attributed to pneumonia and influenza occurred among persons aged ≥65 years. The number of elderly persons in the U.S. 
population is increasing, as well as the number of persons aged <65 years at increased risk for influenza-related complications. Longer life expectancy for a) organ-transplant recipients, b) neonates in intensive-care units, and c) persons who have cystic fibrosis and acquired immunodeficiency syndrome (AIDS) results in a higher survival rate for younger persons at high risk for influenza. Influenza vaccine campaigns are targeted to approximately 32 million persons aged ≥65 years and 27 million to 31 million persons aged <65 years who are at high risk for influenza-associated complications. National health objectives for the year 2000 include vaccination of at least 60% of persons at risk for severe influenza-related illness. Influenza vaccination levels among persons aged ≥65 years increased substantially from 1985 (23%) to 1994 (55%), although vaccination levels among persons aged <65 years at high risk for influenza are estimated to be <30%. Possible reasons for the increase in influenza vaccination levels, especially among persons aged ≥65 years, include greater acceptance of preventive medical services by practitioners, increased delivery and administration of vaccine by health-care providers and sources other than physicians, and the initiation of Medicare reimbursement for influenza vaccination in 1993.
# OPTIONS FOR THE CONTROL OF INFLUENZA
In the United States, two measures are available that can reduce the impact of influenza: immunoprophylaxis with inactivated (i.e., killed-virus) vaccine and chemoprophylaxis or therapy with an influenza-specific antiviral drug (amantadine or rimantadine). Vaccinating persons at high risk before the influenza season each year is the most effective measure for reducing the impact of influenza. Vaccination can be highly cost effective when it is a) directed at persons who are most likely to experience complications or who are at increased risk for exposure and b) administered to persons at high risk during hospitalizations or routine health-care visits before the influenza season, thus making special visits to physicians' offices or clinics unnecessary. When vaccine and epidemic strains of virus are well matched, achieving high vaccination rates among persons living in closed settings (e.g., nursing homes and other chronic-care facilities) can reduce the risk for outbreaks by inducing herd immunity.
# INACTIVATED VACCINE FOR INFLUENZA A AND B
Each year's influenza vaccine contains three virus strains (usually two type A and one type B) representing the influenza viruses that are likely to circulate in the United States in the upcoming winter. The vaccine is made from highly purified, egg-grown viruses that have been made noninfectious (inactivated). Influenza vaccine rarely causes systemic or febrile reactions. Whole-virus, subvirion, and purified-surface-antigen preparations are available. Most vaccinated children and young adults develop high postvaccination hemagglutination-inhibition antibody titers. These antibody titers are protective against illness caused by strains similar to those in the vaccine or the related variants that may emerge during outbreak periods. Elderly persons and persons with certain chronic diseases may develop lower postvaccination antibody titers than healthy young adults and thus may remain susceptible to influenza-related upper respiratory tract infection.
However, even if such persons develop influenza illness despite vaccination, the vaccine can be effective in preventing lower respiratory tract involvement or other secondary complications, thereby reducing the risk for hospitalization and death. The effectiveness of influenza vaccine in preventing or attenuating illness varies, depending primarily on the age and immunocompetence of the vaccine recipient and the degree of similarity between the virus strains included in the vaccine and those that circulate during the influenza season. When a good match exists between vaccine and circulating viruses, influenza vaccine has been shown to prevent illness in approximately 70%-90% of healthy persons aged <65 years. In these circumstances, studies also have indicated that the effectiveness of influenza vaccine in preventing hospitalization for pneumonia and influenza among elderly persons living in settings other than nursing homes or similar chronic-care facilities ranges from 30% to 70%. Among elderly persons residing in nursing homes, influenza vaccine is most effective in preventing severe illness, secondary complications, and death. Studies of this population have indicated that the vaccine can be 50%-60% effective in preventing hospitalization and pneumonia and 80% effective in preventing death, even though efficacy in preventing influenza illness may often be in the range of 30%-40% among the frail elderly. Achieving a high rate of vaccination among nursing home residents can reduce the spread of infection in a facility, thus preventing disease through herd immunity.
# RECOMMENDATIONS FOR THE USE OF INFLUENZA VACCINE
Influenza vaccine is strongly recommended for any person aged ≥6 months who-because of age or underlying medical condition-is at increased risk for complications of influenza. Health-care workers and others (including household members) in close contact with persons in high-risk groups also should be vaccinated. In addition, influenza vaccine may be administered to any person who wishes to reduce the chance of becoming infected with influenza. The trivalent influenza vaccine prepared for the 1997-98 season will include A/Bayern/07/95-like (H1N1), A/Wuhan/359/95-like (H3N2), and B/Beijing/184/93-like hemagglutinin antigens. For the A/Bayern/07/95-like, A/Wuhan/359/95-like, and B/Beijing/184/93-like antigens, U.S. manufacturers will use the antigenically equivalent strains A/Johannesburg/82/96 (H1N1), A/Nanchang/933/95 (H3N2), and B/Harbin/07/94 because of their growth properties. Guidelines for the use of vaccine among certain patient populations follow; dosage recommendations vary according to age group (Table 1). Although the current influenza vaccine can contain one or more of the antigens administered in previous years, annual vaccination with the current vaccine is necessary because immunity declines in the year following vaccination. Because the 1997-98 vaccine differs from the 1996-97 vaccine, supplies of 1996-97 vaccine should not be administered to provide protection for the 1997-98 influenza season. Two doses administered at least 1 month apart may be required for satisfactory antibody responses among previously unvaccinated children aged <9 years; however, studies of vaccines similar to those being used currently have indicated little or no improvement in antibody response when a second dose is administered to adults during the same season.
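The dosing rule just described (two doses for previously unvaccinated children aged <9 years, one dose otherwise) can be summarized in a short illustrative sketch; the function name is hypothetical:

```python
# Sketch of the age-based dosing rule described above: previously unvaccinated
# children aged <9 years may require two doses given at least 1 month apart;
# a second same-season dose adds little or nothing for adults.

def doses_needed(age_years: float, previously_vaccinated: bool) -> int:
    """Number of influenza vaccine doses recommended for the season."""
    if age_years < 9 and not previously_vaccinated:
        return 2  # administered at least 1 month apart
    return 1

print(doses_needed(4, previously_vaccinated=False))  # 2
print(doses_needed(35, previously_vaccinated=False)) # 1
```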
During recent decades, data on influenza vaccine immunogenicity and side effects have been obtained for intramuscularly administered vaccine. Because recent influenza vaccines have not been adequately evaluated when administered by other routes, the intramuscular route is recommended. Adults and older children should be vaccinated in the deltoid muscle and infants and young children in the anterolateral aspect of the thigh.
- Children and teenagers (aged 6 months-18 years) who are receiving long-term aspirin therapy and therefore might be at risk for developing Reye syndrome after influenza
- Women who will be in the second or third trimester of pregnancy during the influenza season
Influenza-associated excess mortality among pregnant women has not been documented except during the pandemics of 1918-19 and 1957-58. However, because death-certificate data often do not indicate whether a woman was pregnant at the time of death, studies conducted during interpandemic periods may underestimate the impact of influenza in this population. Case reports and limited studies suggest that pregnancy may increase the risk for serious medical complications of influenza as a result of increases in heart rate, stroke volume, and oxygen consumption, decreases in lung capacity, and changes in immunologic function. A recent study of the impact of influenza during 17 interpandemic influenza seasons documented that the relative risk of hospitalization for selected cardiorespiratory conditions among pregnant women increased from 1.4 during weeks 14-20 of gestation to 4.7 during weeks 37-42, compared with rates among women who were 1-6 months postpartum. Women in their third trimester of pregnancy were hospitalized at a rate comparable to that of nonpregnant women who have high-risk medical conditions for whom influenza vaccine has traditionally been recommended. Using data from this study, researchers estimated that an average of one to two hospitalizations among pregnant women could be prevented for every 1,000 pregnant women immunized. On the basis of these and other data that suggest that influenza infection may cause increased morbidity in women during the second and third trimesters of pregnancy, the Advisory Committee on Immunization Practices (ACIP) recommends that women who will be beyond the first trimester of pregnancy (14 weeks' gestation) during the influenza season be vaccinated. Pregnant women who have medical conditions that increase their risk for complications from influenza should be vaccinated before the influenza season-regardless of the stage of pregnancy. Studies of influenza immunization of more than 2,000 pregnant women have demonstrated no adverse fetal effects associated with influenza vaccine; however, more data are needed. Because influenza vaccine is not a live virus vaccine and major systemic reactions to it are rare, many experts consider influenza vaccination safe during any stage of pregnancy. However, because spontaneous abortion is common in the first trimester and unnecessary exposures have traditionally been avoided during this time, some experts prefer influenza vaccination during the second trimester to avoid coincidental association of the vaccine with early pregnancy loss.
# Groups that Can Transmit Influenza to Persons at High Risk
Persons who are clinically or subclinically infected can transmit influenza virus to persons at high risk whom they care for or live with.
Some persons at high risk (e.g., the elderly, transplant recipients, and persons with AIDS) can have a low antibody response to influenza vaccine. Efforts to protect these members of high-risk groups against influenza might be improved by reducing the likelihood of influenza exposure from their caregivers. Therefore, the following groups should be vaccinated:
- physicians, nurses, and other personnel in both hospital and outpatient-care settings;
- employees of nursing homes and chronic-care facilities who have contact with patients or residents;
- providers of home care to persons at high risk (e.g., visiting nurses and volunteer workers); and
- household members (including children) of persons in high-risk groups.
# VACCINATION OF OTHER GROUPS
# Persons Infected with Human Immunodeficiency Virus
Limited information exists regarding the frequency and severity of influenza illness among human immunodeficiency virus (HIV)-infected persons, but reports suggest that symptoms might be prolonged and the risk for complications increased for some HIV-infected persons. Influenza vaccine has produced protective antibody titers against influenza in vaccinated HIV-infected persons who have minimal AIDS-related symptoms and high CD4+ T-lymphocyte cell counts. In patients who have advanced HIV disease and low CD4+ T-lymphocyte cell counts, however, influenza vaccine may not induce protective antibody titers; a second dose of vaccine does not improve the immune response for these persons. Recent studies have examined the effect of influenza vaccination on replication of HIV type 1 (HIV-1). Although some studies have demonstrated a transient (i.e., 2-to 4-week) increase in replication of HIV-1 in the plasma or peripheral blood mononuclear cells of HIV-infected persons after vaccine administration, other studies using similar laboratory techniques have not indicated any substantial increase in replication. Deterioration of CD4+ T-lymphocyte cell counts and progression of clinical HIV disease have not been demonstrated among HIV-infected persons who receive vaccine. Because influenza can result in serious illness and complications and because influenza vaccination may result in protective antibody titers, vaccination will benefit many HIV-infected patients.
# Breastfeeding Mothers
Influenza vaccine does not affect the safety of breastfeeding for mothers or infants. Breastfeeding does not adversely affect immune response and is not a contraindication for vaccination.
# Persons Traveling to Foreign Countries
The risk for exposure to influenza during travel to foreign countries varies, depending on season and destination. In the tropics, influenza can occur throughout the year; in the Southern Hemisphere, most activity occurs from April through September. Because of the short incubation period for influenza, exposure to the virus during travel can result in clinical illness that begins while traveling, which is an inconvenience or potential danger, especially for persons at increased risk for complications. Persons preparing to travel to the tropics at any time of year or to the Southern Hemisphere from April through September should review their influenza vaccination histories. If they were not vaccinated the previous fall or winter, they should consider influenza vaccination before travel. Persons in high-risk groups should be especially encouraged to receive the most current vaccine.
Persons at high risk who received the previous season's vaccine before travel should be revaccinated in the fall or winter with the current vaccine.
# General Population
Physicians should administer influenza vaccine to any person who wishes to reduce the likelihood of becoming ill with influenza. Persons who provide essential community services should be considered for vaccination to minimize disruption of essential activities during influenza outbreaks. Students or other persons in institutional settings (e.g., those who reside in dormitories) should be encouraged to receive vaccine to minimize the disruption of routine activities during epidemics.
# PERSONS WHO SHOULD NOT BE VACCINATED
Inactivated influenza vaccine should not be administered to persons known to have anaphylactic hypersensitivity to eggs or to other components of the influenza vaccine without first consulting a physician (see Side Effects and Adverse Reactions). Use of an antiviral agent (i.e., amantadine or rimantadine) is an option for prevention of influenza A in such persons. However, persons who have a history of anaphylactic hypersensitivity to vaccine components but who are also at high risk for complications of influenza can benefit from vaccine after appropriate allergy evaluation and desensitization. Specific information about vaccine components can be found in package inserts for each manufacturer. Adults with acute febrile illness usually should not be vaccinated until their symptoms have abated. However, minor illnesses with or without fever should not contraindicate the use of influenza vaccine, particularly among children with mild upper respiratory tract infection or allergic rhinitis.
# SIDE EFFECTS AND ADVERSE REACTIONS
Because influenza vaccine contains only noninfectious viruses, it cannot cause influenza. Respiratory disease after vaccination represents coincidental illness unrelated to influenza vaccination. The most frequent side effect of vaccination is soreness at the vaccination site that lasts up to 2 days. These local reactions generally are mild and rarely interfere with the ability to conduct usual daily activities. In addition, two types of systemic reactions have occurred:
- Fever, malaise, myalgia, and other systemic symptoms can occur following vaccination and most often affect persons who have had no exposure to the influenza virus antigens in the vaccine (e.g., young children). These reactions begin 6-12 hours after vaccination and can persist for 1 or 2 days. Recent placebo-controlled trials suggest that in elderly persons and healthy young adults, split-virus influenza vaccine is not associated with higher rates of systemic symptoms (e.g., fever, malaise, myalgia, and headache) when compared with placebo injections.
- Immediate-presumably allergic-reactions (e.g., hives, angioedema, allergic asthma, and systemic anaphylaxis) rarely occur after influenza vaccination. These reactions probably result from hypersensitivity to some vaccine component; most reactions likely are caused by residual egg protein. Although current influenza vaccines contain only a small quantity of egg protein, this protein can induce immediate hypersensitivity reactions among persons who have severe egg allergy. Persons who have developed hives, have had swelling of the lips or tongue, or have experienced acute respiratory distress or collapse after eating eggs should consult a physician for appropriate evaluation to help determine if vaccine should be administered.
Persons who have documented immunoglobulin E (IgE)-mediated hypersensitivity to eggs-including those who have had occupational asthma or other allergic responses due to exposure to egg protein-might also be at increased risk for reactions from influenza vaccine, and similar consultation should be considered. The protocol for influenza vaccination developed by Murphy and Strunk may be considered for patients who have egg allergies and medical conditions that place them at increased risk for influenza-associated complications. Hypersensitivity reactions to any vaccine component can occur. Although exposure to vaccines containing thimerosal can lead to induction of hypersensitivity, most patients do not develop reactions to thimerosal when administered as a component of vaccines-even when patch or intradermal tests for thimerosal indicate hypersensitivity. When reported, hypersensitivity to thimerosal usually has consisted of local, delayed-type hypersensitivity reactions.
Unlike the 1976 swine influenza vaccine, subsequent vaccines prepared from other virus strains have not been clearly associated with an increased frequency of Guillain-Barré syndrome (GBS). However, obtaining a precise estimate of a small increase in risk is difficult for a rare condition such as GBS, which has an annual background incidence of only one to two cases per 100,000 adult population. During five of six seasons studied since 1976, the point estimates of the relative risks of GBS after influenza vaccination were slightly elevated; however, in none of these studies was the overall elevation in relative risk statistically significant. In the two most recently studied seasons, the combined number of GBS cases peaked 2 weeks after vaccination. Data from all of these studies suggest that if an increased relative risk does exist, it is lower for persons aged ≥65 years than for those 18-64 years of age. The slight increase in the point estimates of the relative risks and the increased number of cases in the second week after vaccination may be the result of vaccination but also could be due to other factors (e.g., confounding or diagnostic bias) rather than a true vaccine-related risk. Among persons who received the swine influenza vaccine in 1976, the rate of GBS that exceeded the background rate was slightly less than one case per 100,000 vaccinations. Even if GBS were a true side effect in subsequent years, the estimated risk for GBS was much lower than 1:100,000 and substantially less than the risk for severe influenza, which could be prevented by vaccination, especially among persons aged ≥65 years and those who have medical indications for influenza vaccination.
Whereas the incidence of GBS in the general population is very low, persons with a history of GBS have a substantially greater likelihood of subsequently developing GBS than persons without such a history. Thus, the likelihood of coincidentally developing GBS after influenza vaccination is expected to be greater among persons with a history of GBS than among persons with no history of this syndrome. Whether influenza vaccination might be causally associated with this risk for recurrence is not known. Although avoiding a subsequent influenza vaccination in persons known to have developed GBS within 6 weeks of a previous influenza vaccination seems prudent, for most persons with a history of GBS who are at high risk for severe complications from influenza, the established benefits of influenza vaccination justify yearly vaccination.
# SIMULTANEOUS ADMINISTRATION OF OTHER VACCINES, INCLUDING CHILDHOOD VACCINES
The target groups for influenza and pneumococcal vaccination overlap considerably. For persons at high risk who have not previously been vaccinated with pneumococcal vaccine, health-care providers should strongly consider administering pneumococcal and influenza vaccines concurrently. Both vaccines can be administered at the same time at different sites without increasing side effects. However, influenza vaccine is administered each year, whereas pneumococcal vaccine is not. Children at high risk for influenza-related complications can receive influenza vaccine at the same time they receive other routine vaccinations, including pertussis vaccine (DTaP or DTP). Because influenza vaccine can cause fever when administered to young children, DTaP (which is less frequently associated with fever and other adverse events) is preferable.
# TIMING OF INFLUENZA VACCINATION ACTIVITIES
Beginning each September (when vaccine for the upcoming influenza season becomes available), persons at high risk who are seen by health-care providers for routine care or as a result of hospitalization should be offered influenza vaccine. Opportunities to vaccinate persons at high risk for complications of influenza should not be missed. The optimal time for organized vaccination campaigns for persons in high-risk groups is usually the period from October through mid-November. In the United States, influenza activity generally peaks between late December and early March. High levels of influenza activity infrequently occur in the contiguous 48 states before December. Administering vaccine too far in advance of the influenza season should be avoided in facilities such as nursing homes, because antibody levels might begin to decline within a few months of vaccination. Vaccination programs can be undertaken as soon as current vaccine is available if regional influenza activity is expected to begin earlier than December. Children aged <9 years who have not been vaccinated previously should receive two doses of vaccine at least 1 month apart to maximize the likelihood of a satisfactory antibody response to all three vaccine antigens. The second dose should be administered before December, if possible. Vaccine should be offered to both children and adults up to and even after influenza virus activity is documented in a community.
# STRATEGIES FOR IMPLEMENTING INFLUENZA VACCINE RECOMMENDATIONS
Successful vaccination programs have combined education for health-care workers, publicity and education targeted toward potential recipients, a plan for identifying persons at high risk (usually by medical-record review), and efforts to remove administrative and financial barriers that prevent persons from receiving the vaccine. Persons for whom influenza vaccine is recommended can be identified and vaccinated in the settings described in the following paragraphs.
# Outpatient Clinics and Physicians' Offices
Staff in physicians' offices, clinics, health-maintenance organizations, and employee health clinics should be instructed to identify and label the medical records of patients who should receive vaccine. Vaccine should be offered during visits beginning in September and throughout the influenza season. The offer of vaccine and its receipt or refusal should be documented in the medical record. Patients in high-risk groups who do not have regularly scheduled visits during the fall should be reminded by mail or telephone of the need for vaccine.
If possible, arrangements should be made to provide vaccine with minimal waiting time and at the lowest possible cost.
# Facilities Providing Episodic or Acute Care
Health-care providers in these settings (e.g., emergency rooms and walk-in clinics) should be familiar with influenza vaccine recommendations. They should offer vaccine to persons in high-risk groups or should provide written information on why, where, and how to obtain the vaccine. Written information should be available in language(s) appropriate for the population served by the facility.
# Nursing Homes and Other Residential Long-Term-Care Facilities
Vaccination should be routinely provided to all residents of chronic-care facilities with the concurrence of attending physicians rather than by obtaining individual vaccination orders for each patient. Consent for vaccination should be obtained from the resident or a family member at the time of admission to the facility, and all residents should be vaccinated at one time, immediately preceding the influenza season. Residents admitted during the winter months after completion of the vaccination program should be vaccinated when they are admitted.
# Acute-Care Hospitals
All persons aged ≥65 years and younger persons (including children) with high-risk conditions who are hospitalized at any time from September through March should be offered and strongly encouraged to receive influenza vaccine before they are discharged. Household members and others with whom they will have contact should receive written information about why and where to obtain influenza vaccine.
# Outpatient Facilities Providing Continuing Care to Patients at High Risk
All patients should be offered vaccine before the beginning of the influenza season. Patients admitted to such programs (e.g., hemodialysis centers, hospital specialty-care clinics, and outpatient rehabilitation programs) during the winter months after the earlier vaccination program has been conducted should be vaccinated at the time of admission. Household members should receive written information regarding the need for vaccination and the places to obtain influenza vaccine.
# Visiting Nurses and Others Providing Home Care to Persons at High Risk
Nursing-care plans should identify patients in high-risk groups, and vaccine should be provided in the home if necessary. Caregivers and other persons in the household (including children) should be referred for vaccination.
# Facilities Providing Services to Persons Aged ≥65 Years
In these facilities (e.g., retirement communities and recreation centers), all unvaccinated residents/attendees should be offered vaccine on site before the influenza season. Education/publicity programs also should be provided; these programs should emphasize the need for influenza vaccine and provide specific information concerning how, where, and when to obtain it.
# Clinics and Others Providing Health Care for Travelers
Indications for influenza vaccination should be reviewed before travel. Vaccine should be offered, if appropriate (see Persons Traveling to Foreign Countries).
# Health-Care Workers
Administrators of all health-care facilities should arrange for influenza vaccine to be offered to all personnel before the influenza season. Personnel should be provided with appropriate educational materials and strongly encouraged to receive vaccine.
Particular emphasis should be placed on vaccination of persons who care for members of high-risk groups (e.g., staff of intensive-care units [including newborn intensive-care units], staff of medical/surgical units, and employees of nursing homes and chronic-care facilities). Using a mobile cart to take vaccine to hospital wards or other work sites and making vaccine available during night and weekend work shifts can enhance compliance, as can a follow-up campaign early in the course of a community outbreak. # ANTIVIRAL AGENTS FOR INFLUENZA A The two antiviral agents with specific activity against influenza A viruses are amantadine hydrochloride and rimantadine hydrochloride. These chemically related drugs interfere with the replication cycle of type A (but not type B) influenza viruses. When administered prophylactically to healthy adults or children before and throughout the epidemic period, both drugs are approximately 70%-90% effective in preventing illness caused by naturally occurring strains of type A influenza viruses. Because antiviral agents taken prophylactically can prevent illness but not subclinical infection, some persons who take these drugs can still develop immune responses that will protect them when they are exposed to antigenically related viruses in later years. In otherwise healthy adults, amantadine and rimantadine can reduce the severity and duration of signs and symptoms of influenza A illness when administered within 48 hours of illness onset. Studies evaluating the efficacy of treatment for children with either amantadine or rimantadine are limited. Amantadine was approved for treatment and prophylaxis of all influenza type A virus infections in 1976. Although few placebo-controlled studies were conducted to determine the efficacy of amantadine treatment among children before approval, amantadine is indicated for treatment and prophylaxis of adults and children aged ≥1 year. Rimantadine was approved in 1993 for treatment and prophylaxis in adults but was approved only for prophylaxis in children. Further studies might provide the data needed to support future approval of rimantadine treatment in this age group. As with all drugs, amantadine and rimantadine can cause adverse reactions in some persons. Such adverse reactions rarely are severe; however, for some categories of patients, severe adverse reactions are more likely to occur. Amantadine has been associated with a higher incidence of adverse central nervous system (CNS) reactions than rimantadine (see Considerations for Selecting Amantadine or Rimantadine for Chemoprophylaxis or Treatment). # RECOMMENDATIONS FOR THE USE OF AMANTADINE AND RIMANTADINE # Use as Prophylaxis Chemoprophylaxis is not a substitute for vaccination. Recommendations for chemoprophylaxis are provided primarily to help health-care providers make decisions regarding persons who are at greatest risk for severe illness and complications if infected with influenza A virus. When amantadine or rimantadine is administered as prophylaxis, factors such as cost, compliance, and potential side effects should be considered when determining the period of prophylaxis. To be maximally effective as prophylaxis, the drug must be taken each day for the duration of influenza activity in the community. However, to be most cost effective, amantadine or rimantadine prophylaxis should be taken only during the period of peak influenza activity in a community. 
# Persons at High Risk Vaccinated After Influenza A Activity Has Begun Persons at high risk still can be vaccinated after an outbreak of influenza A has begun in a community. However, the development of antibodies in adults after vaccination can take as long as 2 weeks, during which time chemoprophylaxis should be considered. Children who receive influenza vaccine for the first time can require as long as 6 weeks of prophylaxis (i.e., prophylaxis for 2 weeks after the second dose of vaccine has been received). Amantadine and rimantadine do not interfere with the antibody response to the vaccine. # Persons Providing Care to Those at High Risk To reduce the spread of virus to persons at high risk, chemoprophylaxis may be considered during community outbreaks for a) unvaccinated persons who have frequent contact with persons at high risk (e.g., household members, visiting nurses, and volunteer workers) and b) unvaccinated employees of hospitals, clinics, and chronic-care facilities. For those persons who cannot be vaccinated, chemoprophylaxis during the period of peak influenza activity may be considered. For those persons who receive vaccine at a time when influenza A is present in the community, chemoprophylaxis can be administered for 2 weeks after vaccination. Prophylaxis should be considered for all employees, regardless of their vaccination status, if the outbreak is caused by a variant strain of influenza A that might not be controlled by the vaccine. # Persons Who Have Immune Deficiency Chemoprophylaxis might be indicated for persons at high risk who are expected to have an inadequate antibody response to influenza vaccine. This category includes persons who have HIV infection, especially those who have advanced HIV disease. No data are available concerning possible interactions with other drugs used in the management of patients who have HIV infection. Such patients should be monitored closely if amantadine or rimantadine chemoprophylaxis is administered. # Persons for Whom Influenza Vaccine Is Contraindicated Chemoprophylaxis throughout the influenza season or during peak influenza activity might be appropriate for persons at high risk who should not be vaccinated. Influenza vaccine may be contraindicated in persons who have severe anaphylactic hypersensitivity to egg protein or other vaccine components. # Other Persons Amantadine or rimantadine also can be administered prophylactically to anyone who wishes to avoid influenza A illness. The health-care provider and patient should make this decision on an individual basis. # Use of Antivirals as Therapy Amantadine and rimantadine can reduce the severity and shorten the duration of influenza A illness among healthy adults when administered within 48 hours of illness onset. Whether antiviral therapy will prevent complications of influenza type A among persons at high risk is unknown. Insufficient data exist to determine the efficacy of rimantadine treatment in children. Thus, rimantadine is currently approved only for prophylaxis, not treatment, in children. Amantadine- and rimantadine-resistant influenza A viruses can emerge when either of these drugs is administered for treatment; amantadine-resistant strains are cross-resistant to rimantadine and vice versa. 
Both the frequency with which resistant viruses emerge and the extent of their transmission are unknown, but data indicate that amantadine- and rimantadine-resistant viruses are no more virulent or transmissible than amantadine- and rimantadine-sensitive viruses. The screening of naturally occurring epidemic strains of influenza type A has rarely detected amantadine- and rimantadine-resistant viruses. Resistant viruses have most frequently been isolated from persons taking one of these drugs as therapy for influenza A infection. Resistant viruses have been isolated from persons who live at home or in an institution where other residents are taking or have recently taken amantadine or rimantadine as therapy. Persons who have influenza-like illness should avoid contact with uninfected persons as much as possible, regardless of whether they are being treated with amantadine or rimantadine. Persons who have influenza type A infection and who are treated with either drug can shed amantadine- or rimantadine-sensitive viruses early in the course of treatment, but can later shed drug-resistant viruses, especially after 5-7 days of therapy. Such persons can benefit from therapy even when resistant viruses emerge; however, they also can transmit infection to other persons with whom they come in contact. Because of possible induction of amantadine or rimantadine resistance, treatment of persons who have influenza-like illness should be discontinued as soon as clinically warranted, generally after 3-5 days of treatment or within 24-48 hours after the disappearance of signs and symptoms. Laboratory isolation of influenza viruses obtained from persons who are receiving amantadine or rimantadine should be reported to CDC through state health departments, and the isolates should be sent to CDC for antiviral sensitivity testing. # Outbreak Control in Institutions When confirmed or suspected outbreaks of influenza A occur in institutions that house persons at high risk, chemoprophylaxis should be started as early as possible to reduce the spread of the virus. Contingency planning is needed to ensure rapid administration of amantadine or rimantadine to residents. This planning should include preapproved medication orders or plans to obtain physicians' orders on short notice. When amantadine or rimantadine is used for outbreak control, the drug should be administered to all residents of the institution-regardless of whether they received influenza vaccine the previous fall. The drug should be continued for at least 2 weeks or until approximately 1 week after the end of the outbreak. The dose for each resident should be determined after consulting the dosage recommendations and precautions (see Considerations for Selecting Amantadine or Rimantadine for Chemoprophylaxis or Treatment) and the manufacturer's package insert. To reduce the spread of virus and to minimize disruption of patient care, chemoprophylaxis also can be offered to unvaccinated staff who provide care to persons at high risk. Prophylaxis should be considered for all employees, regardless of their vaccination status, if the outbreak is caused by a variant strain of influenza A that is not controlled by the vaccine. Chemoprophylaxis also may be considered for controlling influenza A outbreaks in other closed or semi-closed settings (e.g., dormitories or other settings where persons live in close proximity). 
To reduce the spread of infection and the chances of prophylaxis failure resulting from transmission of drug-resistant virus, measures should be taken to reduce contact as much as possible between persons on chemoprophylaxis and those taking the drug for treatment. # CONSIDERATIONS FOR SELECTING AMANTADINE OR RIMANTADINE FOR CHEMOPROPHYLAXIS OR TREATMENT # Side Effects/Toxicity Despite the similarities between the two drugs, amantadine and rimantadine differ in their pharmacokinetic properties. More than 90% of amantadine is excreted unchanged, whereas approximately 75% of rimantadine is metabolized by the liver. However, both drugs and their metabolites are excreted by the kidneys. The pharmacokinetic differences between amantadine and rimantadine might explain differences in side effects. Although both drugs can cause CNS and gastrointestinal side effects when administered to young, healthy adults at equivalent dosages of 200 mg/day, the incidence of CNS side effects (e.g., nervousness, anxiety, difficulty concentrating, and lightheadedness) is higher among persons taking amantadine compared with those taking rimantadine. In a 6-week study of prophylaxis in healthy adults, approximately 6% of participants taking rimantadine at a dose of 200 mg/day experienced at least one CNS symptom, compared with approximately 14% of those taking the same dose of amantadine and 4% of those taking placebo. The incidence of gastrointestinal side effects (e.g., nausea and anorexia) is approximately 3% among persons taking either drug, compared with 1%-2% among persons receiving the placebo. Side effects associated with both drugs are usually mild and cease soon after discontinuing the drug. Side effects can diminish or disappear after the first week despite continued drug ingestion. However, serious side effects have been observed (e.g., marked behavioral changes, delirium, hallucinations, agitation, and seizures). These more severe side effects have been associated with high plasma drug concentrations and have been observed most often among persons who have renal insufficiency, seizure disorders, or certain psychiatric disorders and among elderly persons who have been taking amantadine as prophylaxis at a dose of 200 mg/day. Clinical observations and studies have indicated that lowering the dosage of amantadine among these persons reduces the incidence and severity of such side effects, and recommendations for reduced dosages for these groups of patients have been made. Because rimantadine has been marketed for a shorter period of time than amantadine, its safety in certain patient populations (e.g., chronically ill and elderly persons) has been evaluated less frequently. Clinical trials of rimantadine have more commonly involved young, healthy persons. Providers should review the package insert before using amantadine or rimantadine for any patient. The patient's age, weight, and renal function; the presence of other medical conditions; the indications for use of amantadine or rimantadine (i.e., prophylaxis or therapy); and the potential for interaction with other medications must be considered, and the dosage and duration of treatment must be adjusted appropriately. Modifications in dosage might be required for persons who have impaired renal or hepatic function, the elderly, children, and persons with a history of seizures (Table 2). The following are guidelines for the use of amantadine and rimantadine in certain patient populations. 
# Persons Who Have Impaired Renal Function Amantadine Amantadine is excreted unchanged in the urine by glomerular filtration and tubular secretion. Thus, renal clearance of amantadine is reduced substantially in persons with renal insufficiency. A reduction in dosage is recommended for patients with creatinine clearance ≤50 mL/min/1.73 m². Guidelines for amantadine dosage based on creatinine clearance are found in the package insert. However, because recommended dosages based on creatinine clearance might provide only an approximation of the optimal dose for a given patient, such persons should be observed carefully so that adverse reactions can be recognized promptly and either the dose can be further reduced or the drug can be discontinued, if necessary. Hemodialysis contributes minimally to drug clearance. # Rimantadine The safety and pharmacokinetics of rimantadine among patients with renal insufficiency have been evaluated only after single-dose administration. Further studies are needed to determine the multiple-dose pharmacokinetics and the most appropriate dosages for these patients. In a single-dose study of patients with anuric renal failure, the apparent clearance of rimantadine was approximately 40% lower, and the elimination half-life was approximately 1.6-fold greater than that in healthy controls of the same age. Hemodialysis did not contribute to drug clearance. In studies among persons with less severe renal disease, drug clearance was also reduced, and plasma concentrations were higher compared with control patients without renal disease who were the same weight, age, and sex. A reduction in dosage to 100 mg/day is recommended for persons with creatinine clearance ≤10 mL/min. Because of the potential for accumulation of rimantadine and its metabolites, patients with any degree of renal insufficiency, including elderly persons, should be monitored for adverse effects, and either the dosage should be reduced or the drug should be discontinued, if necessary. # Persons Aged ≥65 Years Amantadine Because renal function declines with increasing age, the daily dose for persons aged ≥65 years should not exceed 100 mg for prophylaxis or treatment. For some elderly persons, the dose should be further reduced. Studies suggest that because of their smaller average body size, elderly women are more likely than elderly men to experience side effects at a daily dose of 100 mg. # Rimantadine The incidence and severity of CNS side effects among elderly persons appear to be substantially lower among those taking rimantadine at a dose of 200 mg/day compared with elderly persons taking the same dose of amantadine. However, when rimantadine has been administered at a dosage of 200 mg/day to chronically ill elderly persons, they have had a higher incidence of CNS and gastrointestinal symptoms than healthy, younger persons taking rimantadine at the same dosage. After long-term administration of rimantadine at a dosage of 200 mg/day, serum rimantadine concentrations among elderly nursing-home residents have been twofold to fourfold greater than those reported in younger adults. The dosage of rimantadine should be reduced to 100 mg/day for treatment or prophylaxis of elderly nursing-home residents. Although further studies are needed to determine the optimal dose for other elderly persons, a reduction in dosage to 100 mg/day should be considered for all persons aged ≥65 years if they experience signs and symptoms that might represent side effects when taking a dosage of 200 mg/day. 
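The rimantadine dosage adjustments described above reduce to a short decision rule. The following Python sketch is illustrative only (the function name and parameters are hypothetical, not part of these recommendations), and the manufacturer's package insert remains the authoritative source for dosing:

```python
def rimantadine_daily_dose_mg(age_years, creatinine_clearance_ml_min=None,
                              nursing_home_resident=False):
    """Illustrative summary of the rimantadine adjustments described above;
    not a substitute for clinical judgment or the package insert."""
    # Creatinine clearance <=10 mL/min: reduce to 100 mg/day.
    if creatinine_clearance_ml_min is not None and creatinine_clearance_ml_min <= 10:
        return 100
    # Elderly nursing-home residents: 100 mg/day for treatment or prophylaxis.
    if age_years >= 65 and nursing_home_resident:
        return 100
    # Other adults, including persons aged >=65 years: 200 mg/day, with a
    # reduction to 100 mg/day considered if side effects occur (a clinical
    # decision not captured in this sketch).
    return 200
```

Note that persons with any degree of renal insufficiency still require monitoring for adverse effects, a judgment this rule cannot encode.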
# Persons Who Have Liver Disease Amantadine No increase in adverse reactions to amantadine has been observed among persons who have liver disease. Rare instances of reversible elevation of liver enzymes have been reported in patients receiving amantadine, although a specific relationship between the drug and such changes has not been established. # Rimantadine The safety and pharmacokinetics of rimantadine have been evaluated only after single-dose administration. In a study of persons with chronic liver disease (most with stabilized cirrhosis), no alterations in the pharmacokinetics of rimantadine were observed after a single dose. However, in persons with severe liver dysfunction, the apparent clearance of rimantadine was 50% lower than that reported for persons without liver disease. A dose reduction to 100 mg/day is recommended for persons with severe hepatic dysfunction. # Persons Who Have Seizure Disorders Amantadine An increased incidence of seizures has been reported in patients with a history of seizure disorders who have received amantadine. Patients with seizure disorders should be observed closely for possible increased seizure activity when taking amantadine. # Rimantadine In clinical trials, seizures (or seizure-like activity) have been observed in a few persons with a history of seizures who were not receiving anticonvulsant medication while taking rimantadine. The extent to which rimantadine might increase the incidence of seizures among persons with seizure disorders has not been adequately evaluated, because such persons usually have been excluded from participating in clinical trials of rimantadine. # Children Amantadine The use of amantadine in children aged <1 year has not been adequately evaluated. The FDA-approved dosage for children aged 1-9 years is 4.4-8.8 mg/kg/day, not to exceed 150 mg/day. Although further studies to determine the optimal dosage for children are needed, physicians should consider prescribing only 5 mg/kg/day (not to exceed 150 mg/day) to reduce the risk for toxicity. The approved dosage for children aged ≥10 years is 200 mg/day; however, for children weighing <40 kg, prescribing 5 mg/kg/day, regardless of age, is advisable. # Rimantadine The use of rimantadine in children aged <1 year has not been adequately evaluated. In children aged 1-9 years, rimantadine should be administered in one or two divided doses at a dosage of 5 mg/kg/day, not to exceed 150 mg/day. The approved dosage for children aged ≥10 years is 200 mg/day (100 mg twice a day); however, for children weighing <40 kg, prescribing 5 mg/kg/day, regardless of age, also is recommended. # Drug Interactions Amantadine Careful observation is advised when amantadine is administered concurrently with drugs that affect the CNS, especially CNS stimulants. Concomitant administration of antihistamines or anticholinergic drugs may increase the incidence of adverse CNS reactions. # Rimantadine No clinically significant interactions between rimantadine and other drugs have been identified. For more detailed information concerning potential drug interactions for either drug, the package insert should be consulted. 
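For children, the weight-based rule discussed above (5 mg/kg/day, not to exceed 150 mg/day, for children aged 1-9 years, and 5 mg/kg/day for older children weighing <40 kg) amounts to a one-line calculation. A minimal Python sketch, using the conservative 5 mg/kg/day figure rather than the full approved amantadine range; the function name is hypothetical:

```python
def pediatric_daily_dose_mg(age_years, weight_kg):
    """Illustrative sketch of the pediatric dosing guidance described above
    (amantadine or rimantadine); consult the package insert before prescribing."""
    if age_years < 1:
        raise ValueError("use in children aged <1 year has not been adequately evaluated")
    # Aged 1-9 years: 5 mg/kg/day, not to exceed 150 mg/day.
    if age_years <= 9:
        return min(5 * weight_kg, 150)
    # Aged >=10 years but weighing <40 kg: 5 mg/kg/day, regardless of age.
    if weight_kg < 40:
        return 5 * weight_kg
    # Aged >=10 years and weighing >=40 kg: 200 mg/day.
    return 200

# Example: a 7-year-old weighing 22 kg receives min(5 * 22, 150) = 110 mg/day,
# administered in one or two divided doses for rimantadine.
```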
These recommendations update information concerning the vaccine and antiviral agents available for controlling influenza during the 1997-98 influenza season (superseding MMWR 1996;45[No. RR-5]:1-24). The principal changes include information about a) the influenza virus strains included in the trivalent vaccine for 1997-98, b) the vaccination of pregnant and breastfeeding women, and c) side effects and adverse reactions.# INTRODUCTION Influenza A viruses are classified into subtypes on the basis of two surface antigens: hemagglutinin (H) and neuraminidase (N). Three subtypes of hemagglutinin (H1, H2, and H3) and two subtypes of neuraminidase (N1 and N2) are recognized among influenza A viruses that have caused widespread human disease. Immunity to these antigens-especially to the hemagglutinin-reduces the likelihood of infection and lessens the severity of disease if infection occurs. Infection with a virus of one subtype confers little or no protection against viruses of other subtypes. Furthermore, over time, antigenic variation (antigenic drift) within a subtype may be so marked that infection or vaccination with one strain may not induce immunity to distantly related strains of the same subtype. Although influenza B viruses have shown more antigenic stability than influenza A viruses, antigenic variation does occur. For these reasons, major epidemics of respiratory disease caused by new variants of influenza continue to occur. The antigenic characteristics of circulating strains provide the basis for selecting the virus strains included in each year's vaccine. Typical influenza illness is characterized by abrupt onset of fever, myalgia, sore throat, and nonproductive cough. Unlike other common respiratory illnesses, influenza can cause severe malaise lasting several days. More severe illness can result if either primary influenza pneumonia or secondary bacterial pneumonia occurs. During influenza epidemics, high attack rates of acute illness result in both increased numbers of visits to physicians' offices, walk-in clinics, and emergency rooms and increased hospitalizations for management of lower respiratory tract complications. Elderly persons and persons with underlying health problems are at increased risk for complications of influenza. If they become ill with influenza, such members of high-risk groups (see Groups at Increased Risk for Influenza-Related Complications) are more likely than the general population to require hospitalization. During major epidemics, hospitalization rates for persons at high risk may increase twofold to fivefold, depending on the age group. Previously healthy children and younger adults also may require hospitalization for influenza-related complications, but the relative increase in their hospitalization rates is less than for persons who belong to high-risk groups. An increase in mortality further indicates the impact of influenza epidemics. Increased mortality results not only from influenza and pneumonia but also from cardiopulmonary and other chronic diseases that can be exacerbated by influenza. An estimated >20,000 influenza-associated deaths occurred during each of nine different U.S. epidemics from 1972-73 to 1991-92, and >40,000 influenza-associated deaths occurred during each of four of these epidemics. More than 90% of the deaths attributed to pneumonia and influenza occurred among persons aged ≥65 years. The number of elderly persons in the U.S. 
population is increasing, as is the number of persons aged <65 years at increased risk for influenza-related complications. Longer life expectancy for a) organ-transplant recipients, b) neonates in intensive-care units, and c) persons who have cystic fibrosis and acquired immunodeficiency syndrome (AIDS) results in a higher survival rate for younger persons at high risk for influenza. Influenza vaccine campaigns are targeted to approximately 32 million persons aged ≥65 years and 27 million to 31 million persons aged <65 years who are at high risk for influenza-associated complications. National health objectives for the year 2000 include vaccination of at least 60% of persons at risk for severe influenza-related illness. Influenza vaccination levels among persons aged ≥65 years increased substantially from 1985 (23%) to 1994 (55%), although vaccination levels among persons aged <65 years at high risk for influenza are estimated to be <30%. Possible reasons for the increase in influenza vaccination levels, especially among persons aged ≥65 years, include greater acceptance of preventive medical services by practitioners, increased delivery and administration of vaccine by health-care providers and sources other than physicians, and the initiation of Medicare reimbursement for influenza vaccination in 1993. # OPTIONS FOR THE CONTROL OF INFLUENZA In the United States, two measures are available that can reduce the impact of influenza: immunoprophylaxis with inactivated (i.e., killed-virus) vaccine and chemoprophylaxis or therapy with an influenza-specific antiviral drug (amantadine or rimantadine). Vaccinating persons at high risk before the influenza season each year is the most effective measure for reducing the impact of influenza. Vaccination can be highly cost effective when it is a) directed at persons who are most likely to experience complications or who are at increased risk for exposure and b) administered to persons at high risk during hospitalizations or routine health-care visits before the influenza season, thus making special visits to physicians' offices or clinics unnecessary. When vaccine and epidemic strains of virus are well matched, achieving high vaccination rates among persons living in closed settings (e.g., nursing homes and other chronic-care facilities) can reduce the risk for outbreaks by inducing herd immunity. # INACTIVATED VACCINE FOR INFLUENZA A AND B Each year's influenza vaccine contains three virus strains (usually two type A and one type B) representing the influenza viruses that are likely to circulate in the United States in the upcoming winter. The vaccine is made from highly purified, egg-grown viruses that have been made noninfectious (inactivated). Influenza vaccine rarely causes systemic or febrile reactions. Whole-virus, subvirion, and purified-surface-antigen preparations are available. Most vaccinated children and young adults develop high postvaccination hemagglutination-inhibition antibody titers. These antibody titers are protective against illness caused by strains similar to those in the vaccine or the related variants that may emerge during outbreak periods. Elderly persons and persons with certain chronic diseases may develop lower postvaccination antibody titers than healthy young adults and thus may remain susceptible to influenza-related upper respiratory tract infection. 
However, even if such persons develop influenza illness despite vaccination, the vaccine can be effective in preventing lower respiratory tract involvement or other secondary complications, thereby reducing the risk for hospitalization and death. The effectiveness of influenza vaccine in preventing or attenuating illness varies, depending primarily on the age and immunocompetence of the vaccine recipient and the degree of similarity between the virus strains included in the vaccine and those that circulate during the influenza season. When a good match exists between vaccine and circulating viruses, influenza vaccine has been shown to prevent illness in approximately 70%-90% of healthy persons aged <65 years. In these circumstances, studies also have indicated that the effectiveness of influenza vaccine in preventing hospitalization for pneumonia and influenza among elderly persons living in settings other than nursing homes or similar chronic-care facilities ranges from 30% to 70%. Among elderly persons residing in nursing homes, influenza vaccine is most effective in preventing severe illness, secondary complications, and death. Studies of this population have indicated that the vaccine can be 50%-60% effective in preventing hospitalization and pneumonia and 80% effective in preventing death, even though efficacy in preventing influenza illness may often be in the range of 30%-40% among the frail elderly. Achieving a high rate of vaccination among nursing home residents can reduce the spread of infection in a facility, thus preventing disease through herd immunity. # RECOMMENDATIONS FOR THE USE OF INFLUENZA VACCINE Influenza vaccine is strongly recommended for any person aged ≥6 months who-because of age or underlying medical condition-is at increased risk for complications of influenza. Health-care workers and others (including household members) in close contact with persons in high-risk groups also should be vaccinated. In addition, influenza vaccine may be administered to any person who wishes to reduce the chance of becoming infected with influenza. The trivalent influenza vaccine prepared for the 1997-98 season will include A/Bayern/07/95-like (H1N1), A/Wuhan/359/95-like (H3N2), and B/Beijing/184/93-like hemagglutinin antigens. For the A/Bayern/07/95-like, A/Wuhan/359/95-like, and B/Beijing/184/93-like antigens, U.S. manufacturers will use the antigenically equivalent strains A/Johannesburg/82/96 (H1N1), A/Nanchang/933/95 (H3N2), and B/Harbin/07/94 because of their growth properties. Guidelines for the use of vaccine among certain patient populations follow; dosage recommendations vary according to age group (Table 1). Although the current influenza vaccine can contain one or more of the antigens administered in previous years, annual vaccination with the current vaccine is necessary because immunity declines in the year following vaccination. Because the 1997-98 vaccine differs from the 1996-97 vaccine, supplies of 1996-97 vaccine should not be administered to provide protection for the 1997-98 influenza season. Two doses administered at least 1 month apart may be required for satisfactory antibody responses among previously unvaccinated children aged <9 years; however, studies of vaccines similar to those being used currently have indicated little or no improvement in antibody response when a second dose is administered to adults during the same season. 
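The efficacy figures cited above translate directly into expected illness averted once an attack rate is assumed. A minimal Python sketch; the 10% attack rate below is purely illustrative and is not taken from these recommendations:

```python
def cases_prevented_per_1000(attack_rate, vaccine_efficacy):
    """Expected influenza cases prevented per 1,000 vaccinees, assuming all
    would otherwise be at risk of infection at the given attack rate."""
    return 1000 * attack_rate * vaccine_efficacy

# With a hypothetical 10% seasonal attack rate and 70% efficacy (the lower
# bound cited for healthy persons aged <65 years), vaccination prevents
# roughly 70 cases per 1,000 vaccinees; at 90% efficacy, roughly 90.
print(cases_prevented_per_1000(0.10, 0.70))  # 70.0
```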
During recent decades, data on influenza vaccine immunogenicity and side effects have been obtained for intramuscularly administered vaccine. Because recent influenza vaccines have not been adequately evaluated when administered by other routes, the intramuscular route is recommended. Adults and older children should be vaccinated in the deltoid muscle and infants and young children in the anterolateral aspect of the thigh. • Children and teenagers (aged 6 months-18 years) who are receiving long-term aspirin therapy and therefore might be at risk for developing Reye syndrome after influenza • Women who will be in the second or third trimester of pregnancy during the influenza season Influenza-associated excess mortality among pregnant women has not been documented except during the pandemics of 1918-19 and 1957-58. However, because death-certificate data often do not indicate whether a woman was pregnant at the time of death, studies conducted during interpandemic periods may underestimate the impact of influenza in this population. Case reports and limited studies suggest that pregnancy may increase the risk for serious medical complications of influenza as a result of increases in heart rate, stroke volume, and oxygen consumption, decreases in lung capacity, and changes in immunologic function. A recent study of the impact of influenza during 17 interpandemic influenza seasons documented that the relative risk of hospitalization for selected cardiorespiratory conditions among pregnant women increased from 1.4 during weeks 14-20 of gestation to 4.7 during weeks 37-42 compared with rates among women who were 1-6 months postpartum. Women in their third trimester of pregnancy were hospitalized at a rate comparable to that of nonpregnant women who have high-risk medical conditions for whom influenza vaccine has traditionally been recommended. Using data from this study, investigators estimated that an average of 1 to 2 hospitalizations among pregnant women could be prevented for every 1,000 pregnant women immunized (i.e., approximately 500-1,000 vaccinations per hospitalization averted). On the basis of these and other data that suggest that influenza infection may cause increased morbidity in women during the second and third trimesters of pregnancy, the Advisory Committee on Immunization Practices (ACIP) recommends that women who will be beyond the first trimester of pregnancy (14 weeks' gestation) during the influenza season be vaccinated. Pregnant women who have medical conditions that increase their risk for complications from influenza should be vaccinated before the influenza season-regardless of the stage of pregnancy. Studies of influenza immunization of more than 2,000 pregnant women have demonstrated no adverse fetal effects associated with influenza vaccine; however, more data are needed. Because influenza vaccine is not a live virus vaccine and major systemic reactions to it are rare, many experts consider influenza vaccination safe during any stage of pregnancy. However, because spontaneous abortion is common in the first trimester and unnecessary exposures have traditionally been avoided during this time, some experts prefer influenza vaccination during the second trimester to avoid coincidental association of the vaccine with early pregnancy loss. # Groups that Can Transmit Influenza to Persons at High Risk Persons who are clinically or subclinically infected can transmit influenza virus to persons at high risk whom they care for or live with. 
Some persons at high risk (e.g., the elderly, transplant recipients, and persons with AIDS) can have a low antibody response to influenza vaccine. Efforts to protect these members of high-risk groups against influenza might be improved by reducing the likelihood of influenza exposure from their caregivers. Therefore, the following groups should be vaccinated: • physicians, nurses, and other personnel in both hospital and outpatient-care settings; • employees of nursing homes and chronic-care facilities who have contact with patients or residents; • providers of home care to persons at high risk (e.g., visiting nurses and volunteer workers); and • household members (including children) of persons in high-risk groups. # VACCINATION OF OTHER GROUPS Persons Infected with Human Immunodeficiency Virus Limited information exists regarding the frequency and severity of influenza illness among human immunodeficiency virus (HIV)-infected persons, but reports suggest that symptoms might be prolonged and the risk for complications increased for some HIV-infected persons. Influenza vaccine has produced protective antibody titers against influenza in vaccinated HIV-infected persons who have minimal AIDS-related symptoms and high CD4+ T-lymphocyte cell counts. In patients who have advanced HIV disease and low CD4+ T-lymphocyte cell counts, however, influenza vaccine may not induce protective antibody titers; a second dose of vaccine does not improve the immune response for these persons. Recent studies have examined the effect of influenza vaccination on replication of HIV type 1 (HIV-1). Although some studies have demonstrated a transient (i.e., 2- to 4-week) increase in replication of HIV-1 in the plasma or peripheral blood mononuclear cells of HIV-infected persons after vaccine administration, other studies using similar laboratory techniques have not indicated any substantial increase in replication. Deterioration of CD4+ T-lymphocyte cell counts and progression of clinical HIV disease have not been demonstrated among HIV-infected persons who receive vaccine. Because influenza can result in serious illness and complications and because influenza vaccination may result in protective antibody titers, vaccination will benefit many HIV-infected patients. # Breastfeeding Mothers Influenza vaccine does not affect the safety of breastfeeding for mothers or infants. Breastfeeding does not adversely affect immune response and is not a contraindication for vaccination. # Persons Traveling to Foreign Countries The risk for exposure to influenza during travel to foreign countries varies, depending on season and destination. In the tropics, influenza can occur throughout the year; in the Southern Hemisphere, most activity occurs from April through September. Because of the short incubation period for influenza, exposure to the virus during travel can result in clinical illness that begins while traveling, which is an inconvenience or potential danger, especially for persons at increased risk for complications. Persons preparing to travel to the tropics at any time of year or to the Southern Hemisphere from April through September should review their influenza vaccination histories. If they were not vaccinated the previous fall or winter, they should consider influenza vaccination before travel. Persons in high-risk groups should be especially encouraged to receive the most current vaccine. 
Persons at high risk who received the previous season's vaccine before travel should be revaccinated in the fall or winter with the current vaccine. # General Population Physicians should administer influenza vaccine to any person who wishes to reduce the likelihood of becoming ill with influenza. Persons who provide essential community services should be considered for vaccination to minimize disruption of essential activities during influenza outbreaks. Students or other persons in institutional settings (e.g., those who reside in dormitories) should be encouraged to receive vaccine to minimize the disruption of routine activities during epidemics. # PERSONS WHO SHOULD NOT BE VACCINATED Inactivated influenza vaccine should not be administered to persons known to have anaphylactic hypersensitivity to eggs or to other components of the influenza vaccine without first consulting a physician (see Side Effects and Adverse Reactions). Use of an antiviral agent (i.e., amantadine or rimantadine) is an option for prevention of influenza A in such persons. However, persons who have a history of anaphylactic hypersensitivity to vaccine components but who are also at high risk for complications of influenza can benefit from vaccine after appropriate allergy evaluation and desensitization. Specific information about vaccine components can be found in package inserts for each manufacturer. Adults with acute febrile illness usually should not be vaccinated until their symptoms have abated. However, minor illnesses with or without fever should not contraindicate the use of influenza vaccine, particularly among children with mild upper respiratory tract infection or allergic rhinitis. # SIDE EFFECTS AND ADVERSE REACTIONS Because influenza vaccine contains only noninfectious viruses, it cannot cause influenza. Respiratory disease after vaccination represents coincidental illness unrelated to influenza vaccination. The most frequent side effect of vaccination is soreness at the vaccination site that lasts up to 2 days. These local reactions generally are mild and rarely interfere with the ability to conduct usual daily activities. In addition, two types of systemic reactions have occurred: • Fever, malaise, myalgia, and other systemic symptoms can occur following vaccination and most often affect persons who have had no exposure to the influenza virus antigens in the vaccine (e.g., young children). These reactions begin 6-12 hours after vaccination and can persist for 1 or 2 days. Recent placebo-controlled trials suggest that in elderly persons and healthy young adults, split-virus influenza vaccine is not associated with higher rates of systemic symptoms (e.g., fever, malaise, myalgia, and headache) when compared with placebo injections. • Immediate-presumably allergic-reactions (e.g., hives, angioedema, allergic asthma, and systemic anaphylaxis) rarely occur after influenza vaccination. These reactions probably result from hypersensitivity to some vaccine component; most reactions likely are caused by residual egg protein. Although current influenza vaccines contain only a small quantity of egg protein, this protein can induce immediate hypersensitivity reactions among persons who have severe egg allergy. Persons who have developed hives, have had swelling of the lips or tongue, or have experienced acute respiratory distress or collapse after eating eggs should consult a physician for appropriate evaluation to help determine if vaccine should be administered. 
Persons who have documented immunoglobulin E (IgE)-mediated hypersensitivity to eggs-including those who have had occupational asthma or other allergic responses due to exposure to egg protein-might also be at increased risk for reactions from influenza vaccine, and similar consultation should be considered. The protocol for influenza vaccination developed by Murphy and Strunk may be considered for patients who have egg allergies and medical conditions that place them at increased risk for influenza-associated complications. Hypersensitivity reactions to any vaccine component can occur. Although exposure to vaccines containing thimerosal can lead to induction of hypersensitivity, most patients do not develop reactions to thimerosal when administered as a component of vaccines-even when patch or intradermal tests for thimerosal indicate hypersensitivity. When reported, hypersensitivity to thimerosal usually has consisted of local, delayed-type hypersensitivity reactions. Unlike the 1976 swine influenza vaccine, subsequent vaccines prepared from other virus strains have not been clearly associated with an increased frequency of Guillain-Barré syndrome (GBS). However, obtaining a precise estimate of a small increase in risk is difficult for a rare condition such as GBS, which has an annual background incidence of only one to two cases per 100,000 adult population. During five of six seasons studied since 1976, the point estimates of the relative risks of GBS after influenza vaccination were slightly elevated; however, in none of these studies was the overall elevation in relative risk statistically significant. In the two most recently studied seasons, the combined number of GBS cases peaked 2 weeks after vaccination. Data from all of these studies suggest that if an increased relative risk does exist, it is lower for persons aged ≥65 years than for those 18-64 years of age. The slight increase in the point estimates of the relative risks and the increased number of cases in the second week after vaccination may be the result of vaccination but also could be due to other factors (e.g., confounding or diagnostic bias) rather than a true vaccine-related risk. Among persons who received the swine influenza vaccine in 1976, the rate of GBS that exceeded the background rate was slightly less than one case per 100,000 vaccinations. Even if GBS were a true side effect in subsequent years, the estimated risk for GBS was much lower than 1:100,000 and substantially less than that for severe influenza, which could be prevented by vaccination, especially for persons aged ≥65 years and those who have medical indications for influenza vaccination. Whereas the incidence of GBS in the general population is very low, persons with a history of GBS have a substantially greater likelihood of subsequently developing GBS than persons without such a history. Thus, the likelihood of coincidentally developing GBS after influenza vaccination is expected to be greater among persons with a history of GBS than among persons with no history of this syndrome. Whether influenza vaccination might be causally associated with this risk for recurrence is not known. Although avoiding a subsequent influenza vaccination in persons known to have developed GBS within 6 weeks of a previous influenza vaccination seems prudent, for most persons with a history of GBS who are at high risk for severe complications from influenza, the established benefits of influenza vaccination justify yearly vaccination. 
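The GBS figures cited above can be put side by side with simple arithmetic. A back-of-the-envelope Python sketch using only the numbers in the text (a background incidence of one to two cases per 100,000 adults per year and, for the 1976 swine influenza vaccine, an excess of slightly less than one case per 100,000 vaccinations); the cohort size is arbitrary:

```python
cohort = 1_000_000  # hypothetical number of vaccinated adults

background_low, background_high = 1 / 100_000, 2 / 100_000  # cases per person-year
swine_flu_excess = 1 / 100_000  # upper bound on the 1976 excess per vaccination

print(f"Expected background GBS cases per year: "
      f"{cohort * background_low:.0f}-{cohort * background_high:.0f}")
print(f"1976 swine influenza vaccine excess: fewer than {cohort * swine_flu_excess:.0f}")
# Even this worst-case excess (<10 cases per million vaccinations) is small
# relative to the severe influenza illness that vaccination prevents in
# high-risk groups.
```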
# SIMULTANEOUS ADMINISTRATION OF OTHER VACCINES, INCLUDING CHILDHOOD VACCINES The target groups for influenza and pneumococcal vaccination overlap considerably. For persons at high risk who have not previously been vaccinated with pneumococcal vaccine, health-care providers should strongly consider administering pneumococcal and influenza vaccines concurrently. Both vaccines can be administered at the same time at different sites without increasing side effects. However, influenza vaccine is administered each year, whereas pneumococcal vaccine is not. Children at high risk for influenza-related complications can receive influenza vaccine at the same time they receive other routine vaccinations, including pertussis vaccine (DTaP or DTP). Because influenza vaccine can cause fever when administered to young children, DTaP (which is less frequently associated with fever and other adverse events) is preferable. # TIMING OF INFLUENZA VACCINATION ACTIVITIES Beginning each September (when vaccine for the upcoming influenza season becomes available), persons at high risk who are seen by health-care providers for routine care or as a result of hospitalization should be offered influenza vaccine. Opportunities to vaccinate persons at high risk for complications of influenza should not be missed. The optimal time for organized vaccination campaigns for persons in high-risk groups is usually the period from October through mid-November. In the United States, influenza activity generally peaks between late December and early March. High levels of influenza activity infrequently occur in the contiguous 48 states before December. Administering vaccine too far in advance of the influenza season should be avoided in facilities such as nursing homes, because antibody levels might begin to decline within a few months of vaccination. Vaccination programs can be undertaken as soon as current vaccine is available if regional influenza activity is expected to begin earlier than December. Children aged <9 years who have not been vaccinated previously should receive two doses of vaccine at least 1 month apart to maximize the likelihood of a satisfactory antibody response to all three vaccine antigens. The second dose should be administered before December, if possible. Vaccine should be offered to both children and adults up to and even after influenza virus activity is documented in a community. # STRATEGIES FOR IMPLEMENTING INFLUENZA VACCINE RECOMMENDATIONS Successful vaccination programs have combined education for health-care workers, publicity and education targeted toward potential recipients, a plan for identifying persons at high risk (usually by medical-record review), and efforts to remove administrative and financial barriers that prevent persons from receiving the vaccine. Persons for whom influenza vaccine is recommended can be identified and vaccinated in the settings described in the following paragraphs. # Outpatient Clinics and Physicians' Offices Staff in physicians' offices, clinics, health-maintenance organizations, and employee health clinics should be instructed to identify and label the medical records of patients who should receive vaccine. Vaccine should be offered during visits beginning in September and throughout the influenza season. The offer of vaccine and its receipt or refusal should be documented in the medical record. Patients in high-risk groups who do not have regularly scheduled visits during the fall should be reminded by mail or telephone of the need for vaccine. 
If possible, arrangements should be made to provide vaccine with minimal waiting time and at the lowest possible cost. # Facilities Providing Episodic or Acute Care Health-care providers in these settings (e.g., emergency rooms and walk-in clinics) should be familiar with influenza vaccine recommendations. They should offer vaccine to persons in high-risk groups or should provide written information on why, where, and how to obtain the vaccine. Written information should be available in language(s) appropriate for the population served by the facility. # Nursing Homes and Other Residential Long-Term-Care Facilities Vaccination should be routinely provided to all residents of chronic-care facilities with the concurrence of attending physicians rather than by obtaining individual vaccination orders for each patient. Consent for vaccination should be obtained from the resident or a family member at the time of admission to the facility, and all residents should be vaccinated at one time, immediately preceding the influenza season. Residents admitted during the winter months after completion of the vaccination program should be vaccinated when they are admitted. # Acute-Care Hospitals All persons aged ≥65 years and younger persons (including children) with high-risk conditions who are hospitalized at any time from September through March should be offered and strongly encouraged to receive influenza vaccine before they are discharged. Household members and others with whom they will have contact should receive written information about why and where to obtain influenza vaccine. # Outpatient Facilities Providing Continuing Care to Patients at High Risk All patients should be offered vaccine before the beginning of the influenza season. Patients admitted to such programs (e.g., hemodialysis centers, hospital specialty-care clinics, and outpatient rehabilitation programs) during the winter months after the earlier vaccination program has been conducted should be vaccinated at the time of admission. Household members should receive written information regarding the need for vaccination and the places to obtain influenza vaccine. # Visiting Nurses and Others Providing Home Care to Persons at High Risk Nursing-care plans should identify patients in high-risk groups, and vaccine should be provided in the home if necessary. Caregivers and other persons in the household (including children) should be referred for vaccination. # Facilities Providing Services to Persons Aged ≥65 Years In these facilities (e.g., retirement communities and recreation centers), all unvaccinated residents/attendees should be offered vaccine on site before the influenza season. Education/publicity programs also should be provided; these programs should emphasize the need for influenza vaccine and provide specific information concerning how, where, and when to obtain it. # Clinics and Others Providing Health Care for Travelers Indications for influenza vaccination should be reviewed before travel. Vaccine should be offered, if appropriate (see Travelers to Foreign Countries). # Health-Care Workers Administrators of all health-care facilities should arrange for influenza vaccine to be offered to all personnel before the influenza season. Personnel should be provided with appropriate educational materials and strongly encouraged to receive vaccine. 
Particular emphasis should be placed on vaccination of persons who care for members of high-risk groups (e.g., staff of intensive-care units [including newborn intensive-care units], staff of medical/surgical units, and employees of nursing homes and chronic-care facilities). Using a mobile cart to take vaccine to hospital wards or other work sites and making vaccine available during night and weekend work shifts can enhance compliance, as can a follow-up campaign early in the course of a community outbreak. # ANTIVIRAL AGENTS FOR INFLUENZA A The two antiviral agents with specific activity against influenza A viruses are amantadine hydrochloride and rimantadine hydrochloride. These chemically related drugs interfere with the replication cycle of type A (but not type B) influenza viruses. When administered prophylactically to healthy adults or children before and throughout the epidemic period, both drugs are approximately 70%-90% effective in preventing illness caused by naturally occurring strains of type A influenza viruses. Because antiviral agents taken prophylactically can prevent illness but not subclinical infection, some persons who take these drugs can still develop immune responses that will protect them when they are exposed to antigenically related viruses in later years. In otherwise healthy adults, amantadine and rimantadine can reduce the severity and duration of signs and symptoms of influenza A illness when administered within 48 hours of illness onset. Studies evaluating the efficacy of treatment for children with either amantadine or rimantadine are limited. Amantadine was approved for treatment and prophylaxis of all influenza type A virus infections in 1976. Although few placebo-controlled studies were conducted to determine the efficacy of amantadine treatment among children before approval, amantadine is indicated for treatment and prophylaxis of adults and children aged ≥1 year. Rimantadine was approved in 1993 for treatment and prophylaxis in adults but was approved only for prophylaxis in children. Further studies might provide the data needed to support future approval of rimantadine treatment in this age group. As with all drugs, amantadine and rimantadine can cause adverse reactions in some persons. Such adverse reactions rarely are severe; however, for some categories of patients, severe adverse reactions are more likely to occur. Amantadine has been associated with a higher incidence of adverse central nervous system (CNS) reactions than rimantadine (see Considerations for Selecting Amantadine or Rimantadine for Chemoprophylaxis or Treatment). # RECOMMENDATIONS FOR THE USE OF AMANTADINE AND RIMANTADINE # Use as Prophylaxis Chemoprophylaxis is not a substitute for vaccination. Recommendations for chemoprophylaxis are provided primarily to help health-care providers make decisions regarding persons who are at greatest risk for severe illness and complications if infected with influenza A virus. When amantadine or rimantadine is administered as prophylaxis, factors such as cost, compliance, and potential side effects should be considered when determining the period of prophylaxis. To be maximally effective as prophylaxis, the drug must be taken each day for the duration of influenza activity in the community. However, to be most cost effective, amantadine or rimantadine prophylaxis should be taken only during the period of peak influenza activity in a community. 
# Persons at High Risk Vaccinated After Influenza A Activity Has Begun Persons at high risk still can be vaccinated after an outbreak of influenza A has begun in a community. However, the development of antibodies in adults after vaccination can take as long as 2 weeks, during which time chemoprophylaxis should be considered. Children who receive influenza vaccine for the first time can require as long as 6 weeks of prophylaxis (i.e., prophylaxis for 2 weeks after the second dose of vaccine has been received). Amantadine and rimantadine do not interfere with the antibody response to the vaccine. # Persons Providing Care to Those at High Risk To reduce the spread of virus to persons at high risk, chemoprophylaxis may be considered during community outbreaks for a) unvaccinated persons who have frequent contact with persons at high risk (e.g., household members, visiting nurses, and volunteer workers) and b) unvaccinated employees of hospitals, clinics, and chronic-care facilities. For those persons who cannot be vaccinated, chemoprophylaxis during the period of peak influenza activity may be considered. For those persons who receive vaccine at a time when influenza A is present in the community, chemoprophylaxis can be administered for 2 weeks after vaccination. Prophylaxis should be considered for all employees, regardless of their vaccination status, if the outbreak is caused by a variant strain of influenza A that might not be controlled by the vaccine. # Persons Who Have Immune Deficiency Chemoprophylaxis might be indicated for persons at high risk who are expected to have an inadequate antibody response to influenza vaccine. This category includes persons who have HIV infection, especially those who have advanced HIV disease. No data are available concerning possible interactions with other drugs used in the management of patients who have HIV infection. Such patients should be monitored closely if amantadine or rimantadine chemoprophylaxis is administered. # Persons for Whom Influenza Vaccine Is Contraindicated Chemoprophylaxis throughout the influenza season or during peak influenza activity might be appropriate for persons at high risk who should not be vaccinated. Influenza vaccine may be contraindicated in persons who have severe anaphylactic hypersensitivity to egg protein or other vaccine components. # Other Persons Amantadine or rimantadine also can be administered prophylactically to anyone who wishes to avoid influenza A illness. The health-care provider and patient should make this decision on an individual basis. # Use of Antivirals as Therapy Amantadine and rimantadine can reduce the severity and shorten the duration of influenza A illness among healthy adults when administered within 48 hours of illness onset. Whether antiviral therapy will prevent complications of influenza type A among persons at high risk is unknown. Insufficient data exist to determine the efficacy of rimantadine treatment in children. Thus, rimantadine is currently approved only for prophylaxis, not treatment, in children. Amantadine- and rimantadine-resistant influenza A viruses can emerge when either of these drugs is administered for treatment; amantadine-resistant strains are cross-resistant to rimantadine and vice versa. 
# Use of Antivirals as Therapy
Amantadine and rimantadine can reduce the severity and shorten the duration of influenza A illness among healthy adults when administered within 48 hours of illness onset. Whether antiviral therapy will prevent complications of influenza type A among persons at high risk is unknown. Insufficient data exist to determine the efficacy of rimantadine treatment in children. Thus, rimantadine is currently approved only for prophylaxis in children and is not approved for treatment in this age group.
Amantadine- and rimantadine-resistant influenza A viruses can emerge when either of these drugs is administered for treatment; amantadine-resistant strains are cross-resistant to rimantadine and vice versa. Both the frequency with which resistant viruses emerge and the extent of their transmission are unknown, but data indicate that amantadine- and rimantadine-resistant viruses are no more virulent or transmissible than amantadine- and rimantadine-sensitive viruses. The screening of naturally occurring epidemic strains of influenza type A has rarely detected amantadine- and rimantadine-resistant viruses. Resistant viruses have most frequently been isolated from persons taking one of these drugs as therapy for influenza A infection. Resistant viruses have been isolated from persons who live at home or in an institution where other residents are taking or have recently taken amantadine or rimantadine as therapy. Persons who have influenza-like illness should avoid contact with uninfected persons as much as possible, regardless of whether they are being treated with amantadine or rimantadine. Persons who have influenza type A infection and who are treated with either drug can shed amantadine- or rimantadine-sensitive viruses early in the course of treatment but can later shed drug-resistant viruses, especially after 5-7 days of therapy. Such persons can benefit from therapy even when resistant viruses emerge; however, they also can transmit infection to other persons with whom they come in contact. Because of possible induction of amantadine or rimantadine resistance, treatment of persons who have influenza-like illness should be discontinued as soon as clinically warranted, generally after 3-5 days of treatment or within 24-48 hours after the disappearance of signs and symptoms. Laboratory isolation of influenza viruses obtained from persons who are receiving amantadine or rimantadine should be reported to CDC through state health departments, and the isolates should be sent to CDC for antiviral sensitivity testing.
# Outbreak Control in Institutions
When confirmed or suspected outbreaks of influenza A occur in institutions that house persons at high risk, chemoprophylaxis should be started as early as possible to reduce the spread of the virus. Contingency planning is needed to ensure rapid administration of amantadine or rimantadine to residents. This planning should include preapproved medication orders or plans to obtain physicians' orders on short notice. When amantadine or rimantadine is used for outbreak control, the drug should be administered to all residents of the institution, regardless of whether they received influenza vaccine the previous fall. The drug should be continued for at least 2 weeks or until approximately 1 week after the end of the outbreak. The dose for each resident should be determined after consulting the dosage recommendations and precautions (see Considerations for Selecting Amantadine or Rimantadine for Chemoprophylaxis or Treatment) and the manufacturer's package insert. To reduce the spread of virus and to minimize disruption of patient care, chemoprophylaxis also can be offered to unvaccinated staff who provide care to persons at high risk. Prophylaxis should be considered for all employees, regardless of their vaccination status, if the outbreak is caused by a variant strain of influenza A that is not controlled by the vaccine. Chemoprophylaxis also may be considered for controlling influenza A outbreaks in other closed or semi-closed settings (e.g., dormitories or other settings where persons live in close proximity). To reduce the spread of infection and the chances of prophylaxis failure resulting from transmission of drug-resistant virus, measures should be taken to reduce contact as much as possible between persons taking the drugs for chemoprophylaxis and persons taking them for treatment.
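As a worked illustration of the durations just described, the sketch below computes stopping dates for individual treatment and for institutional outbreak prophylaxis. The helper names are hypothetical, and the "earlier of" reading of the treatment rule is one plausible interpretation of the text, not a clinical tool:

```python
from datetime import date, timedelta

def treatment_stop(start: date, symptoms_resolved: date) -> date:
    """Treatment is generally discontinued after 3-5 days or within
    24-48 hours after signs and symptoms disappear; this sketch takes
    the earlier of day 5 of treatment and 48 hours after resolution."""
    return min(start + timedelta(days=5),
               symptoms_resolved + timedelta(days=2))

def outbreak_prophylaxis_stop(start: date, outbreak_end: date) -> date:
    """Institutional prophylaxis continues for at least 2 weeks, or
    until approximately 1 week after the end of the outbreak,
    whichever is later."""
    return max(start + timedelta(weeks=2),
               outbreak_end + timedelta(weeks=1))

# Example: treatment started January 2 with symptoms gone January 4
# would generally stop by January 6.
print(treatment_stop(date(1997, 1, 2), date(1997, 1, 4)))
```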
# CONSIDERATIONS FOR SELECTING AMANTADINE OR RIMANTADINE FOR CHEMOPROPHYLAXIS OR TREATMENT
# Side Effects/Toxicity
Despite the similarities between the two drugs, amantadine and rimantadine differ in their pharmacokinetic properties. More than 90% of amantadine is excreted unchanged, whereas approximately 75% of rimantadine is metabolized by the liver. However, both drugs and their metabolites are excreted by the kidneys. The pharmacokinetic differences between amantadine and rimantadine might explain differences in side effects. Although both drugs can cause CNS and gastrointestinal side effects when administered to young, healthy adults at equivalent dosages of 200 mg/day, the incidence of CNS side effects (e.g., nervousness, anxiety, difficulty concentrating, and lightheadedness) is higher among persons taking amantadine than among those taking rimantadine. In a 6-week study of prophylaxis in healthy adults, approximately 6% of participants taking rimantadine at a dose of 200 mg/day experienced at least one CNS symptom, compared with approximately 14% of those taking the same dose of amantadine and 4% of those taking placebo. The incidence of gastrointestinal side effects (e.g., nausea and anorexia) is approximately 3% among persons taking either drug, compared with 1%-2% among persons receiving placebo. Side effects associated with both drugs are usually mild and cease soon after the drug is discontinued. Side effects can diminish or disappear after the first week despite continued drug ingestion. However, serious side effects have been observed (e.g., marked behavioral changes, delirium, hallucinations, agitation, and seizures). These more severe side effects have been associated with high plasma drug concentrations and have been observed most often among persons who have renal insufficiency, seizure disorders, or certain psychiatric disorders and among elderly persons who have been taking amantadine as prophylaxis at a dose of 200 mg/day. Clinical observations and studies have indicated that lowering the dosage of amantadine among these persons reduces the incidence and severity of such side effects, and recommendations for reduced dosages for these groups of patients have been made. Because rimantadine has been marketed for a shorter period of time than amantadine, its safety in certain patient populations (e.g., chronically ill and elderly persons) has been evaluated less frequently. Clinical trials of rimantadine have more commonly involved young, healthy persons.
Providers should review the package insert before using amantadine or rimantadine for any patient. The patient's age, weight, and renal function; the presence of other medical conditions; the indications for use of amantadine or rimantadine (i.e., prophylaxis or therapy); and the potential for interaction with other medications must be considered, and the dosage and duration of treatment must be adjusted appropriately. Modifications in dosage might be required for persons who have impaired renal or hepatic function, for elderly persons, for children, and for persons with a history of seizures (Table 2). The following are guidelines for the use of amantadine and rimantadine in certain patient populations.
# Persons Who Have Impaired Renal Function
# Amantadine
Amantadine is excreted unchanged in the urine by glomerular filtration and tubular secretion. Thus, renal clearance of amantadine is reduced substantially in persons with renal insufficiency. A reduction in dosage is recommended for patients with creatinine clearance ≤50 mL/min/1.73 m². Guidelines for amantadine dosage based on creatinine clearance are found in the package insert. However, because recommended dosages based on creatinine clearance might provide only an approximation of the optimal dose for a given patient, such persons should be observed carefully so that adverse reactions can be recognized promptly and either the dose can be further reduced or the drug can be discontinued, if necessary. Hemodialysis contributes minimally to drug clearance.
# Rimantadine
The safety and pharmacokinetics of rimantadine among patients with renal insufficiency have been evaluated only after single-dose administration. Further studies are needed to determine the multiple-dose pharmacokinetics and the most appropriate dosages for these patients. In a single-dose study of patients with anuric renal failure, the apparent clearance of rimantadine was approximately 40% lower, and the elimination half-life was approximately 1.6-fold greater, than that in healthy controls of the same age. Hemodialysis did not contribute to drug clearance. In studies among persons with less severe renal disease, drug clearance was also reduced, and plasma concentrations were higher compared with control patients without renal disease who were the same weight, age, and sex. A reduction in dosage to 100 mg/day is recommended for persons with creatinine clearance ≤10 mL/min. Because of the potential for accumulation of rimantadine and its metabolites, patients with any degree of renal insufficiency, including elderly persons, should be monitored for adverse effects, and either the dosage should be reduced or the drug should be discontinued, if necessary.
# Persons Aged ≥65 Years
# Amantadine
Because renal function declines with increasing age, the daily dose for persons aged ≥65 years should not exceed 100 mg for prophylaxis or treatment. For some elderly persons, the dose should be further reduced. Studies suggest that because of their smaller average body size, elderly women are more likely than elderly men to experience side effects at a daily dose of 100 mg.
# Rimantadine
The incidence and severity of CNS side effects among elderly persons appear to be substantially lower among those taking rimantadine at a dose of 200 mg/day compared with elderly persons taking the same dose of amantadine. However, when rimantadine has been administered at a dosage of 200 mg/day to chronically ill elderly persons, they have had a higher incidence of CNS and gastrointestinal symptoms than healthy, younger persons taking rimantadine at the same dosage. After long-term administration of rimantadine at a dosage of 200 mg/day, serum rimantadine concentrations among elderly nursing-home residents have been twofold to fourfold greater than those reported in younger adults. The dosage of rimantadine should be reduced to 100 mg/day for treatment or prophylaxis of elderly nursing-home residents. Although further studies are needed to determine the optimal dose for other elderly persons, a reduction in dosage to 100 mg/day should be considered for all persons aged ≥65 years if they experience signs and symptoms that might represent side effects when taking a dosage of 200 mg/day.
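To make the rimantadine dosage reductions above concrete, here is a minimal sketch. The amantadine creatinine-clearance tiers are deliberately not encoded, because they come from the package insert rather than this report; all names are illustrative:

```python
def rimantadine_daily_dose_mg(age_years: int,
                              creatinine_clearance: float,
                              nursing_home_resident: bool = False,
                              side_effects_on_200mg: bool = False) -> int:
    """Return a daily rimantadine dose (mg) per the reductions above.

    100 mg/day for severe renal insufficiency (CrCl <=10 mL/min) and
    for elderly nursing-home residents; a reduction to 100 mg/day
    should be considered for any person aged >=65 years who has
    possible side effects at 200 mg/day. Otherwise 200 mg/day.
    """
    if creatinine_clearance <= 10:
        return 100
    if age_years >= 65 and (nursing_home_resident or side_effects_on_200mg):
        return 100
    return 200

# Example: a 70-year-old nursing-home resident -> 100 mg/day.
print(rimantadine_daily_dose_mg(70, 60.0, nursing_home_resident=True))
```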
# Persons Who Have Liver Disease
# Amantadine
No increase in adverse reactions to amantadine has been observed among persons who have liver disease. Rare instances of reversible elevation of liver enzymes have been reported in patients receiving amantadine, although a specific relationship between the drug and such changes has not been established.
# Rimantadine
The safety and pharmacokinetics of rimantadine in persons with liver disease have been evaluated only after single-dose administration. In a study of persons with chronic liver disease (most with stabilized cirrhosis), no alterations were observed after a single dose. However, in persons with severe liver dysfunction, the apparent clearance of rimantadine was 50% lower than that reported for persons without liver disease. A dose reduction to 100 mg/day is recommended for persons with severe hepatic dysfunction.
# Persons Who Have Seizure Disorders
# Amantadine
An increased incidence of seizures has been reported in patients with a history of seizure disorders who have received amantadine. Patients with seizure disorders should be observed closely for possible increased seizure activity when taking amantadine.
# Rimantadine
In clinical trials, seizures (or seizure-like activity) have been observed in a few persons with a history of seizures who were not receiving anticonvulsant medication while taking rimantadine. The extent to which rimantadine might increase the incidence of seizures among persons with seizure disorders has not been adequately evaluated, because such persons usually have been excluded from participating in clinical trials of rimantadine.
# Children
# Amantadine
The use of amantadine in children aged <1 year has not been adequately evaluated. The FDA-approved dosage for children aged 1-9 years is 4.4-8.8 mg/kg/day, not to exceed 150 mg/day. Although further studies to determine the optimal dosage for children are needed, physicians should consider prescribing only 5 mg/kg/day (not to exceed 150 mg/day) to reduce the risk for toxicity. The approved dosage for children aged ≥10 years is 200 mg/day; however, for children weighing <40 kg, prescribing 5 mg/kg/day, regardless of age, is advisable.
# Rimantadine
The use of rimantadine in children aged <1 year has not been adequately evaluated. In children aged 1-9 years, rimantadine should be administered in one or two divided doses at a dosage of 5 mg/kg/day, not to exceed 150 mg/day. The approved dosage for children aged ≥10 years is 200 mg/day (100 mg twice a day); however, for children weighing <40 kg, prescribing 5 mg/kg/day, regardless of age, also is recommended.
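A weight-based sketch of the pediatric dosing just described, using the conservative 5 mg/kg/day figure; this is illustrative only, and prescribers should consult the package insert:

```python
def pediatric_daily_dose_mg(age_years: int, weight_kg: float) -> float:
    """Daily amantadine or rimantadine dose for children, per the
    guidance above: 5 mg/kg/day (maximum 150 mg/day) for ages 1-9;
    5 mg/kg/day for older children weighing <40 kg; otherwise the
    approved 200 mg/day for children aged >=10 years."""
    if age_years < 1:
        raise ValueError("use in children aged <1 year has not been adequately evaluated")
    if age_years <= 9:
        return min(5.0 * weight_kg, 150.0)
    if weight_kg < 40:
        return 5.0 * weight_kg
    return 200.0

# Example: a 7-year-old weighing 25 kg -> 125 mg/day,
# given in one or two divided doses for rimantadine.
print(pediatric_daily_dose_mg(7, 25.0))
```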
# Drug Interactions
# Amantadine
Careful observation is advised when amantadine is administered concurrently with drugs that affect the CNS, especially CNS stimulants. Concomitant administration of antihistamines or anticholinergic drugs may increase the incidence of adverse CNS reactions.
# Rimantadine
No clinically significant interactions between rimantadine and other drugs have been identified. For more detailed information concerning potential drug interactions for either drug, the package insert should be consulted.
# SOURCES OF INFORMATION ON INFLUENZA-CONTROL PROGRAMS
The Morbidity and Mortality Weekly Report (MMWR) Series is prepared by the Centers for Disease Control and Prevention (CDC) and is available free of charge in electronic format and on a paid subscription basis for paper copy. To receive an electronic copy on Friday of each week, send an e-mail message to [email protected]; the body content should read SUBscribe mmwr-toc. Electronic copy also is available from CDC's World-Wide Web server at http://www.cdc.gov/ or from CDC's file transfer protocol server at ftp.cdc.gov. To subscribe for paper copy, contact Superintendent of Documents, U.S. Government Printing Office, Washington, DC 20402; telephone (202) 512-1800.
Data in the weekly MMWR are provisional, based on weekly reports to CDC by state health departments. The reporting week concludes at close of business on Friday; compiled data on a national basis are officially released to the public on the following Friday. Address inquiries about the MMWR Series, including material to be considered for publication, to: Editor, MMWR Series, Mailstop C-08, CDC, 1600 Clifton Rd., N.E., Atlanta, GA 30333; telephone (404) 332-4555. All material in the MMWR Series is in the public domain and may be used and reprinted without permission; citation as to source, however, is appreciated.
This report updates the previously published summary of recommendations for vaccinating health-care personnel (HCP) in the United States (CDC. Immunization of health-care workers: recommendations of the Advisory Committee on Immunization Practices and the Hospital Infection Control Practices Advisory Committee. MMWR 1997;46[No. RR-18]). This report was reviewed by and includes input from the Healthcare (formerly Hospital) Infection Control Practices Advisory Committee. These updated recommendations can assist hospital administrators, infection-control practitioners, employee health clinicians, and HCP in optimizing infection prevention and control programs. The recommendations for vaccinating HCP are presented by disease in two categories: 1) those diseases for which vaccination or documentation of immunity is recommended because of risks to HCP in their work settings for acquiring disease or transmitting to patients and 2) those for which vaccination might be indicated in certain circumstances. Background information for each vaccine-preventable disease and specific recommendations for use of each vaccine are presented. Certain infection-control measures that relate to vaccination also are included in this report. In addition, ACIP recommendations for the remaining vaccines that are recommended for certain or all adults are summarized, as are considerations for catch-up and travel vaccinations and for work restrictions. This report summarizes all current ACIP recommendations for vaccination of HCP and does not contain any new recommendations or policies.
# Immunization of Health-Care Personnel
# Recommendations of the Advisory Committee on Immunization Practices (ACIP)
# Introduction
This report updates the previously published summary of recommendations of the Advisory Committee on Immunization Practices (ACIP) and the Healthcare (formerly Hospital) Infection Control Practices Advisory Committee (HICPAC) for vaccinating health-care personnel (HCP) in the United States (1). The report, which was reviewed by and includes input from HICPAC, summarizes all current ACIP recommendations for vaccination of HCP and does not contain any new recommendations or policies that have not been published previously. These recommendations can assist hospital administrators, infection-control practitioners, employee health clinicians, and HCP in optimizing infection prevention and control programs. HCP are defined as all paid and unpaid persons working in health-care settings who have the potential for exposure to patients and/or to infectious materials, including body substances, contaminated medical supplies and equipment, contaminated environmental surfaces, or contaminated air. HCP might include (but are not limited to) physicians, nurses, nursing assistants, therapists, technicians, emergency medical service personnel, dental personnel, pharmacists, laboratory personnel, autopsy personnel, students and trainees, contractual staff not employed by the health-care facility, and persons (e.g., clerical, dietary, housekeeping, laundry, security, maintenance, administrative, billing, and volunteer personnel) not directly involved in patient care but potentially exposed to infectious agents that can be transmitted to and from HCP and patients (2). Because of their contact with patients or infective material from patients, many HCP are at risk for exposure to (and possible transmission of) vaccine-preventable diseases.
Employers should maintain vaccination records for HCP so records can be retrieved easily as needed (3). Each record should reflect immunity status for indicated vaccine-preventable diseases (i.e., documented disease, vaccination history, or serology results) as well as vaccinations administered during employment and any documented episodes of adverse events after vaccination (4). For each vaccine, the record should include date of vaccine administration (including for those vaccines that might have been received prior to employment), vaccine manufacturer and lot number, edition and distribution date of the language-appropriate Vaccine Information Statement (VIS) provided to the vaccinee at the time of vaccination, and the name, address, and title of the person administering the vaccine (4). Accurate vaccination records can help to rapidly identify susceptible HCP (i.e., those with no history of vaccination or lack of documentation of immunity) during an outbreak situation and can help reduce costs and disruptions to health-care operations (5-7). HCP should be provided a copy of their vaccination records and encouraged to keep it with their personal health records so they can readily be made available to future employers. HICPAC has encouraged any facility or organization that provides direct patient care to formulate a comprehensive vaccination policy for all HCP (3). The American Hospital Association has endorsed the concept of vaccination programs for both hospital personnel and patients (8). To ensure that all HCP are up to date with respect to recommended vaccines, facilities should review HCP vaccination and immunity status at the time of hire and on a regular basis (i.e., at least annually), with consideration of offering needed vaccines, if necessary, in conjunction with routine annual disease-prevention measures (e.g., influenza vaccination or tuberculin testing). These recommendations (Tables 2 and 3) should be considered during policy development. Several states and health-care facilities have established requirements relating to assessment of vaccination status and/or administration of one or more vaccines for HCP (9,10). Disease-specific outbreak-control measures are described in this report and elsewhere (3,11,12). All HCP should adhere to all other recommended infection-control guidelines, whether or not they are individually determined to have immunity to a vaccine-preventable disease.
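The per-dose record elements listed above map naturally onto a simple data structure. The following sketch (Python; the field names are hypothetical, not a mandated schema) illustrates one way a facility might store them:

```python
from dataclasses import dataclass, field
from datetime import date
from typing import List, Optional

@dataclass
class VaccineDose:
    """One administered dose, with the elements named in the text."""
    vaccine: str                     # e.g., "hepatitis B"
    date_administered: date          # including pre-employment doses, if known
    manufacturer: str
    lot_number: str
    vis_edition_date: Optional[str]  # edition/distribution date of the VIS provided
    administered_by: str             # name, address, and title of the administrator

@dataclass
class HCPVaccinationRecord:
    person: str
    doses: List[VaccineDose] = field(default_factory=list)
    immunity_evidence: List[str] = field(default_factory=list)  # documented disease or serology results
    adverse_events: List[str] = field(default_factory=list)     # documented post-vaccination events
```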
# Methods
In 2008, the ACIP Immunization of Health-Care Personnel Work Group (the Work Group) was formed as a subgroup of the ACIP Adult Immunization Work Group to update the previously published recommendations for immunization of HCP. The Work Group comprised professionals from academic medicine (pediatrics, family medicine, internal medicine, occupational and environmental medicine, and infectious disease); federal and state public health professionals; and liaisons from the Society for Healthcare Epidemiology of America and HICPAC. The Work Group met monthly, developed an outline for the report, worked closely with subject matter experts at CDC (who developed, revised, and updated sections of the report), and provided subsequent critical review of the draft documents. The approach of the Work Group was to summarize previously published ACIP recommendations and not to make new recommendations or policies; a comprehensive list of publications containing the various vaccine-specific recommendations is provided (Table 1). In February 2011, the updated report was presented to ACIP, which voted to approve it.
The recommendations for vaccination of HCP are presented below by disease in two categories: 1) those diseases for which routine vaccination or documentation of immunity is recommended for HCP because of risks to HCP in their work settings and, should HCP become infected, to the patients they serve and 2) those diseases for which vaccination of HCP might be indicated in certain circumstances. Vaccines recommended in the first category are hepatitis B, seasonal influenza, measles, mumps, and rubella, pertussis, and varicella vaccines. Vaccines in the second category are meningococcal, typhoid, and polio vaccines. Except for influenza, all of the diseases prevented by these vaccines are notifiable at the national level (13). Main changes from the 1997 ACIP recommendations have been summarized (Box).
# Diseases for Which Vaccination Is Recommended
On the basis of documented nosocomial transmission, HCP are considered to be at substantial risk for acquiring or transmitting hepatitis B, influenza, measles, mumps, rubella, pertussis, and varicella. Current recommendations for vaccination are provided below.
# Hepatitis B
# Background
# Epidemiology and Risk Factors
Hepatitis B is an infection caused by the hepatitis B virus (HBV), which is transmitted through percutaneous (i.e., breaks in the skin) or mucosal (i.e., direct contact with mucous membranes) exposure to infectious blood or body fluids. The virus is highly infectious; for nonimmune persons, disease transmission from a needlestick exposure is up to 100 times more likely for exposure to hepatitis B e antigen (HBeAg)-positive blood than to HIV-positive blood (14). HBV infection is a well-recognized occupational risk for HCP in the United States and globally. The risk for HBV infection is associated with degree of contact with blood in the workplace and with the hepatitis B e-antigen status of the source persons (15). The virus is also environmentally stable, remaining infectious on environmental surfaces for at least 7 days (16). In 2009 in the United States, 3,371 cases of acute HBV infection were reported nationally, and an estimated 38,000 new cases of HBV infection occurred after accounting for underreporting and underdiagnosis (17). Of 4,519 persons reported with acute HBV infection in 2007, approximately 40% were hospitalized and 1.5% died (18). HBV can lead to chronic infection, which can result in cirrhosis of the liver, liver failure, liver cancer, and death. An estimated 800,000-1.4 million persons in the United States are living with chronic HBV infection; these persons serve as the main reservoir for continued HBV transmission (19). Vaccines to prevent hepatitis B became available in the United States in 1981; a decade later, a national strategy to eliminate HBV infection was implemented, and the routine vaccination of children was recommended (20). During 1990-2009, the rate of new HBV infections declined approximately 84%, from 8.5 to 1.1 cases per 100,000 population (17); the decline was greatest (98%) among persons aged <19 years, for whom recommendations for routine infant and adolescent vaccination have been applied.
Although hepatitis B vaccine coverage is high in infants, children, and adolescents (91.8% in infants aged 19-35 months and 91.6% in adolescents aged 13-17 years) (21,22), coverage remains lower (41.8% in 2009) for certain adult populations, including those with behavioral risks for HBV infection (e.g., men who have sex with men and persons who use injection drugs) (23).
# Hepatitis B in Health-Care Settings
During 1982, when hepatitis B vaccine was first recommended for HCP, an estimated 10,000 infections occurred among persons employed in a medical or dental field. By 2004, the number of HBV infections among HCP had decreased to an estimated 304 infections, largely resulting from the implementation of routine preexposure vaccination and improved infection-control precautions (24-26). The risk for acquiring HBV infection from occupational exposures is dependent on the frequency of percutaneous and mucosal exposures to blood or body fluids (e.g., semen, saliva, and wound exudates) containing HBV, particularly fluids containing HBeAg (a marker for high HBV replication and viral load) (27-31). The risk is higher during the professional training period and can vary throughout a person's career (1).
BOX. Summary of main changes from the 1997 Advisory Committee on Immunization Practices/Hospital (now Healthcare) Infection Control Practices Advisory Committee recommendations for immunization of health-care personnel (HCP)
# Hepatitis B
- HCP and trainees in certain populations at high risk for chronic hepatitis B (e.g., those born in countries with high and intermediate endemicity) should be tested for HBsAg and anti-HBc/anti-HBs to determine infection status.
# Influenza
- Emphasis that all HCP, not just those with direct patient care duties, should receive an annual influenza vaccination.
- Comprehensive programs to increase vaccine coverage among HCP are needed; influenza vaccination rates among HCP within facilities should be measured and reported regularly.
# Measles, mumps, and rubella (MMR)
- History of disease is no longer considered adequate presumptive evidence of measles or mumps immunity for HCP; laboratory confirmation of disease was added as acceptable presumptive evidence of immunity. History of disease has never been considered adequate evidence of immunity for rubella.
- The footnotes have been changed regarding the recommendations for personnel born before 1957 in routine and outbreak contexts. Specifically, guidance is provided for 2 doses of MMR for measles and mumps protection and 1 dose of MMR for rubella protection.
# Pertussis
- HCP, regardless of age, should receive a single dose of Tdap as soon as feasible if they have not previously received Tdap.
- The minimal interval was removed, and Tdap can now be administered regardless of interval since the last tetanus- or diphtheria-containing vaccine.
- Hospitals and ambulatory-care facilities should provide Tdap for HCP and use approaches that maximize vaccination rates.
# Varicella
Criteria for evidence of immunity to varicella were established. For HCP they include:
- written documentation with 2 doses of vaccine,
- laboratory evidence of immunity or laboratory confirmation of disease,
- diagnosis of history of varicella disease by health-care provider, or
- diagnosis of history of herpes zoster by health-care provider.
# Meningococcal
- HCP with anatomic or functional asplenia or persistent complement component deficiencies should now receive a 2-dose series of meningococcal conjugate vaccine.
- HCP with HIV infection who are vaccinated should also receive a 2-dose series.
- Those HCP who remain in groups at high risk are recommended to be revaccinated every 5 years.
Abbreviations: HBsAg = hepatitis B surface antigen; anti-HBc = hepatitis B core antibody; anti-HBs = hepatitis B surface antibody; Tdap = tetanus toxoid, reduced diphtheria toxoid and acellular pertussis vaccine; HIV = human immunodeficiency virus.
Depending on the tasks performed, health-care or public safety personnel might be at risk for HBV exposure; in addition, personnel providing care and assistance to persons in outpatient settings and those residing in long-term-care facilities (e.g., assisted living) might be at risk for acquiring or facilitating transmission of HBV infection when they perform procedures that expose them to blood (e.g., assisted blood-glucose monitoring and wound care) (32-34). A Federal Standard issued in December 1991 under the Occupational Safety and Health Act mandates that hepatitis B vaccine be made available at the employer's expense to all health-care personnel who are exposed occupationally to blood or other potentially infectious materials (35). The Federal Standard defines occupational exposure as reasonably anticipated skin, eye, mucous membrane, or parenteral contact with blood or other potentially infectious materials that might result from the performance of an employee's duties (35).
# Vaccine Effectiveness
The 3-dose vaccine series administered intramuscularly at 0, 1, and 6 months produces a protective antibody response in approximately 30%-55% of healthy adults aged <40 years after the first dose, 75% after the second dose, and >90% after the third dose (40-42). After age 40 years, <90% of persons vaccinated with 3 doses have a protective antibody response, and by age 60 years, protective levels of antibody develop in approximately 75% of vaccinated persons (43). Smoking, obesity, genetic factors, and immune suppression also are associated with diminished immune response to hepatitis B vaccination (43-46).
# Duration of Immunity
Protection against symptomatic and chronic HBV infection has been documented to persist for >22 years in vaccine responders (47). Immunocompetent persons who achieve hepatitis B surface antibody (anti-HBs) concentrations of >10 mIU/mL after preexposure vaccination have protection against both acute disease and chronic infection. Anti-HBs levels decline over time. Regardless, responders continue to be protected, and the majority of responders will show an anamnestic response to vaccine challenge (47-51). Declines might be somewhat faster among persons vaccinated as infants rather than as older children, adolescents, or adults and among those administered recombinant vaccine instead of plasma vaccine (which has not been commercially available in the United States since the late 1980s). Although immunogenicity is lower among immunocompromised persons, those who achieve and maintain a protective antibody response before exposure to HBV have a high level of protection from infection (52). Among persons who do not respond to a primary 3-dose vaccine series (i.e., those in whom anti-HBs concentrations of >10 mIU/mL were not achieved), 25%-50% respond to an additional vaccine dose, and 44%-100% respond to a 3-dose revaccination series using standard or high-dosage vaccine (43,53-58).
Persons who have measurable but low (i.e., 1-9 mIU/mL) levels of anti-HBs after the initial series have a better response to revaccination than persons who have no anti-HBs (49,53,54). Persons who do not have protective levels of anti-HBs 1-2 months after revaccination either are infected with HBV or can be considered primary nonresponders; for the latter group, genetic factors might be associated with nonresponse to hepatitis B vaccination (54,58,59). ACIP does not recommend more than two vaccine series in nonresponders (52).
# Vaccine Safety
Hepatitis B vaccines have been demonstrated to be safe when administered to infants, children, adolescents, and adults (52,60,61). Although rare cases of arthritis or alopecia have been associated temporally with hepatitis B vaccination, recent data do not support a causal relationship between hepatitis B vaccine and either arthritis or alopecia (61-63). During 1982-2004, an estimated 70 million adolescents and adults and 50 million infants and children in the United States received >1 dose of hepatitis B vaccine (52). The most frequently reported side effects in persons receiving hepatitis B vaccine are pain at the injection site (3%-29%) and temperature of >99.9°F (>37.7°C) (1%-6%) (64-67). However, in placebo-controlled studies, these side effects were reported no more frequently among persons receiving hepatitis B vaccine than among persons receiving placebo (40,41,64-67). Revaccination is not associated with an increase in adverse events. Hepatitis B vaccination is contraindicated for persons with a history of hypersensitivity to yeast or any vaccine component (4,64-66). Persons with a history of serious adverse events (e.g., anaphylaxis) after receipt of hepatitis B vaccine should not receive additional doses. As with other vaccines, vaccination of persons with moderate or severe acute illness, with or without fever, should be deferred until the illness resolves (4). Vaccination is not contraindicated in persons with a history of multiple sclerosis, Guillain-Barré syndrome, autoimmune disease (e.g., systemic lupus erythematosus and rheumatoid arthritis), or other chronic diseases. Pregnancy is not a contraindication to vaccination; limited data suggest that developing fetuses are not at risk for adverse events when hepatitis B vaccine is administered to pregnant women (4,68). Available vaccines contain noninfectious hepatitis B surface antigen (HBsAg) and do not pose any risk for infection to the fetus.
# Postexposure
The need for postexposure prophylaxis should be evaluated immediately after HCP experience any percutaneous, ocular, mucous-membrane, or nonintact-skin exposure to blood or body fluid in the workplace. Decisions to administer postexposure prophylaxis should be based on the HBsAg status of the source and the vaccination history and vaccine-response status of the exposed HCP (Table 4) (72).
# Unvaccinated and Incompletely Vaccinated HCP and Trainees
- Unvaccinated or incompletely vaccinated persons who experience a workplace exposure from persons known to be HBsAg-positive should receive 1 dose of hepatitis B immune globulin (HBIG) (i.e., passive vaccination) as soon as possible after exposure (preferably within 24 hours). The effectiveness of HBIG when administered >7 days after percutaneous or permucosal exposures is unknown (Table 4).
- Hepatitis B vaccine should be administered in the deltoid muscle as soon as possible after exposure; HBIG should be administered at the same time at another injection site. The 3-dose hepatitis B vaccine series should be completed for previously unvaccinated and incompletely vaccinated persons who have needlestick or other percutaneous exposures, regardless of the HBsAg status of the source and whether the status of the source is known. To document protective levels of anti-HBs (>10 mIU/mL), postvaccination testing of persons who received HBIG for postexposure prophylaxis should be performed after anti-HBs from HBIG is no longer detectable (4-6 months after administration).
# Vaccinated HCP and Trainees
- Vaccinated HCP with documented immunity (anti-HBs concentrations of >10 mIU/mL) require no postexposure prophylaxis, serologic testing, or additional vaccination.
- Vaccinated HCP with documented nonresponse to a 3-dose vaccine series should receive 1 dose of HBIG and a second 3-dose vaccine series if the source is HBsAg-positive or known to be at high risk for carrying hepatitis B. If the source is known or determined to be HBsAg-negative, these previously nonresponding HCP should complete the revaccination series and undergo postvaccination testing to ensure that their response status is documented (Table 4).
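The decision logic above (and in Table 4) can be summarized in code form. This sketch is a simplified reading for illustration only; the category names are invented here, and actual decisions should follow Table 4 and occupational-health consultation:

```python
def hbv_postexposure_plan(status: str, source_hbsag: str) -> str:
    """status: 'unvaccinated'/'incomplete', 'responder' (documented
    anti-HBs >=10 mIU/mL), or 'nonresponder' after a 3-dose series.
    source_hbsag: 'positive', 'high_risk', or 'negative'."""
    if status == "responder":
        return "no prophylaxis, serologic testing, or additional vaccination"
    if status in ("unvaccinated", "incomplete"):
        plan = "complete the 3-dose hepatitis B vaccine series"
        if source_hbsag == "positive":
            plan = ("HBIG x1 as soon as possible (preferably within "
                    "24 hours) plus " + plan)
        return plan
    if status == "nonresponder":
        if source_hbsag in ("positive", "high_risk"):
            return "HBIG x1 plus a second 3-dose vaccine series"
        return ("complete the revaccination series, then perform "
                "postvaccination testing to document response status")
    raise ValueError("unrecognized vaccination status")

# Example: an unvaccinated trainee with an HBsAg-positive source.
print(hbv_postexposure_plan("unvaccinated", "positive"))
```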
# Influenza
# Background
# Epidemiology and Risk Factors
Influenza causes an estimated average of >200,000 hospitalizations and 3,000-49,000 deaths annually in the United States (74-76). The majority of influenza-related severe illnesses and deaths occur among persons with chronic medical conditions, infants and young children, seniors, and pregnant women (74-78). Reducing the risk for influenza among persons at higher risk for complications is a major focus of influenza prevention strategies (77).
# Influenza Transmission in Health-Care Settings
HCP are exposed to patients with influenza in the workplace and are thus at risk of occupationally acquired influenza and of transmitting influenza to patients and other HCP. In a cross-sectional survey of hospital house staff (physicians in training), 37% reported influenza-like illness during September-April, and 9% reported more than one respiratory illness. Length of illness varied (range: 1-10 days; mean: 7 days), as did days of work missed (range: 0-10 days; mean: 0.7 days) (79). Infected HCP who continue to work while ill might transmit influenza to patients, many of whom are at increased risk for severe outcomes from influenza. HCP are therefore recommended for routine annual influenza vaccination (77). Few randomized trials of the effect that influenza vaccination has on illness in HCP have been conducted. In one randomized trial of 427 HCP, influenza vaccination of HCP failed to decrease episodes of respiratory infection or duration of illness but was associated with a 28% decrease in absenteeism (from 1.4 days to 1.0 day) attributable to respiratory infections (80). No laboratory confirmation of influenza was obtained in this study. In another randomized trial among HCP, vaccination was associated with a significantly lower rate of serological evidence of influenza infection, with a vaccine efficacy rate of 88% for influenza A and 89% for influenza B (p<0.05) (81); however, no significant differences were noted in days of febrile respiratory illness or absenteeism. Influenza can cause outbreaks of severe respiratory illness among hospitalized persons and long-term-care residents (82-90). Influenza outbreaks in hospitals (86-88) and long-term-care facilities (91) have been associated with low vaccination rates among HCP. One nonrandomized study demonstrated an increase in HCP vaccination rates and a decrease in nosocomially acquired, laboratory-confirmed influenza in a hospital after a mobile cart-based HCP vaccination program was introduced (86). Several randomized controlled studies of the impact of HCP vaccination on morbidity and mortality in long-term-care facilities have been performed (92-95). These studies have demonstrated substantial decreases in all-cause mortality (92-95) and influenza-like illness (92,94,95). However, studies that examine and demonstrate efficacy in preventing more specific outcomes (e.g., laboratory-confirmed influenza illness and mortality) are lacking. Recent systematic reviews suggest that vaccination of HCP in settings in which patients also were vaccinated provided significant reductions in deaths among elderly patients from all causes and deaths from pneumonia, but also note that additional randomized controlled trials are warranted (96,97), as are examinations of more specific outcomes. Preventing influenza among HCP who might serve as sources of influenza virus transmission provides additional protection to patients at risk for influenza complications. Vaccination of HCP can specifically benefit patients who cannot receive vaccination (e.g., infants aged <6 months), patients who respond poorly to vaccination (e.g., persons aged >85 years and immune-compromised persons), and persons for whom antiviral treatment is not available (e.g., persons with medical contraindications). Although annual vaccination has long been recommended for HCP and is a high priority for reducing morbidity associated with influenza in health-care settings (98-100), national survey data have demonstrated that the vaccination coverage level during the 2008-09 season was 52.9% (101).
# Considerations Regarding Influenza Vaccination of HCP
Barriers to HCP acceptance of influenza vaccination have included fear of vaccine side effects (particularly influenza-like symptoms), insufficient time or inconvenience, perceived ineffectiveness of the vaccine, perceived low likelihood of contracting influenza, avoidance of medications, and fear of needles (79,102-109). Factors demonstrated to increase vaccine acceptance include a desire for self-protection, previous receipt of influenza vaccine, a desire to protect patients, and perceived effectiveness of vaccine (79,105,106,109-112). Strategies that have demonstrated improvement in HCP vaccination rates have included campaigns to emphasize the benefits of HCP vaccination for staff and patients, vaccination of senior medical staff or opinion leaders, removing administrative barriers (e.g., costs), providing vaccine in locations and at times easily accessible by HCP, and monitoring and reporting HCP influenza vaccination rates (99,113-120). Intranasally administered live attenuated influenza vaccine (LAIV) is an option for healthy, nonpregnant adults aged <50 years who dislike needles. The practice of obtaining signed declinations from HCP offered influenza vaccination has been adopted by some institutions but has not yet been demonstrated to exceed coverage rates of 70%-80% (99,115,121-123). Institutions that require declination statements from HCP who refuse influenza vaccination should educate and counsel these HCP about benefits of the vaccine.
Each health-care facility should develop a comprehensive influenza vaccination strategy that includes targeted education about the disease, including disease risk among HCP and patients, and about the vaccine. In addition, the program should establish easily accessible vaccination sites and inform HCP about their locations and schedule. Facilities that employ HCP should provide influenza vaccine at no cost to personnel (124). The most effective combination of approaches for achieving high influenza vaccination coverage among HCP likely varies by institution. Hospitals and health-care organizations in the United States traditionally have employed an immunization strategy that includes one or more of the following components: education about influenza, easy access to vaccine, incentives to encourage immunization, organized campaigns, institution of declination policies, and legislative and regulatory efforts (e.g., vaccination requirements) (99,115,121-126). Beginning January 1, 2007, the Joint Commission on Accreditation of Health-Care Organizations required accredited organizations to offer influenza vaccinations to staff, including volunteers and licensed independent practitioners, and to report coverage levels among HCP (127). Standards are available for measuring vaccination coverage among HCP as a measure of program performance within a health-care setting (128). Beginning January 2013, the Centers for Medicare & Medicaid Services will require acute-care hospitals to report HCP influenza vaccination as part of its hospital inpatient quality reporting program.
# Vaccine Effectiveness, Duration of Immunity, and Vaccine Safety
Effectiveness of influenza vaccines varies from year to year and depends on the age and health status of the person getting the vaccine and the similarity or "match" between the viruses or virus in the vaccine and those in circulation. Vaccine strains are selected for inclusion in the influenza vaccine every year based on international surveillance and scientists' estimations about which types and strains of viruses will circulate in a given year. Annual vaccination is recommended because the predominant circulating influenza viruses typically change from season to season and because immunity declines over time postvaccination (77). In placebo-controlled studies among adults, the most frequent side effect of vaccination was soreness at the vaccination site (affecting 10%-64% of patients) that lasted <2 days (129,130). These injection-site reactions typically were mild and rarely interfered with the recipient's ability to conduct usual daily activities. The main contraindication to influenza vaccination is a history of anaphylactic hypersensitivity to egg or other components of the vaccine. A history of Guillain-Barré syndrome within 6 weeks following a previous dose of influenza vaccine is considered to be a precaution for use of influenza vaccines (77).
# Recommendations
# Vaccination
Annual influenza vaccination is recommended for all persons aged >6 months who have no medical contraindication; therefore, vaccination of all HCP who have no contraindications is recommended. The influenza vaccine is evaluated annually, with one or more vaccine strains updated almost every year. In addition, antibody titers decline during the year after vaccination. Thus, annual vaccination with the current season's formulation is recommended. Annual vaccination is appropriate and safe to begin as early in the season as vaccine is available.
HCP should be among the groups considered for prioritized receipt of influenza vaccines when vaccine supply is limited. Two types of influenza vaccines are available. LAIV is administered intranasally and is licensed for use in healthy nonpregnant persons aged 2-49 years. The trivalent inactivated vaccine (TIV) is administered as an intramuscular injection and can be given to any person aged >6 months. Both vaccine types contain vaccine virus strains that are selected to stimulate a protective immune response against the wild-type viruses that are thought to be most likely in circulation during the upcoming season. Use of LAIV for HCP who care for patients housed in protective inpatient environments has been a theoretic concern, but transmission of LAIV in health-care settings has not been reported. LAIV can be used for HCP who work in any setting, except those who care for severely immunocompromised hospitalized persons who require care in a protective environment. HCP who themselves have a condition that confers high risk for influenza complications, who are pregnant, or who are aged >50 years should not receive LAIV and should be administered TIV instead. An inactivated trivalent vaccine containing 60 mcg of hemagglutinin antigen per influenza vaccine virus strain (Fluzone High-Dose) is an alternative inactivated vaccine for persons aged >65 years. Persons aged >65 years may be administered any of the standard-dose TIV preparations or Fluzone High-Dose (77). The majority of TIV preparations are administered intramuscularly. An intradermally administered TIV was licensed in May 2011 and is an alternative to other TIV preparations for persons aged 18-64 years (131).
# Use of Antiviral Drugs for Treating Exposed Persons and Controlling Outbreaks
Use of antiviral drugs for chemoprophylaxis or treatment of influenza is an adjunct to (but not a substitute for) vaccination. Oseltamivir or zanamivir are recommended currently for chemoprophylaxis or treatment of influenza (132,133). TIV can be administered to exposed, unvaccinated HCP at the same time as chemoprophylaxis, but LAIV should be avoided because the antiviral medication will prevent the viral replication needed to stimulate a vaccine response (77). Antivirals are used often among patients during outbreaks in closed settings such as long-term-care facilities but also can be administered to unvaccinated HCP during outbreaks, when an exposure to a person with influenza occurs, or after exposure when vaccination is not thought to be protective against the strain to which a vaccinated HCP was exposed. Chemoprophylaxis consists of 1 dose (of either antiviral drug) daily for 10 days, and treatment consists of 1 dose twice daily for 5 days. In many instances of HCP exposure, watchful waiting and early initiation of treatment if symptoms appear is preferred rather than use of antiviral chemoprophylaxis immediately after exposure. The intensity and duration of the exposure and the underlying health status of the exposed worker are important factors in clinical judgments about whether to provide chemoprophylaxis. If chemoprophylaxis is used, the provider should base choice of the agent on whether the circulating strain or strains of influenza have demonstrated resistance to particular antivirals.
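The regimens in the preceding paragraph reduce to two fixed schedules. A minimal sketch follows (the constants simply restate the text; drug and dose selection should follow the cited guidance and resistance data):

```python
def antiviral_schedule(purpose: str) -> dict:
    """Oseltamivir or zanamivir, as summarized above:
    chemoprophylaxis = 1 dose daily for 10 days;
    treatment = 1 dose twice daily for 5 days."""
    schedules = {
        "chemoprophylaxis": {"doses_per_day": 1, "duration_days": 10},
        "treatment": {"doses_per_day": 2, "duration_days": 5},
    }
    return schedules[purpose]  # raises KeyError for any other purpose

# Example: treatment -> {'doses_per_day': 2, 'duration_days': 5}.
print(antiviral_schedule("treatment"))
```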
# Program Evaluation
- Health-care administrators should include influenza vaccination coverage among HCP as a measure of quality of care (124).
- Influenza vaccination rates among HCP within facilities should be regularly measured and reported, and ward-, unit-, and specialty-specific coverage rates should be provided to staff and administration (124). Such information might be useful to promote compliance with vaccination policies.
# Measles
# Background
# Epidemiology and Risk Factors
Measles is a highly contagious rash illness that is transmitted by respiratory droplets and airborne spread. Severe complications, which might result in death, include pneumonia and encephalitis. Before the national measles vaccination program was implemented in 1963, almost every person acquired measles before adulthood; an estimated 3-4 million persons in the United States acquired measles each year (134). Approximately 500,000 persons were reported to have had measles annually, of whom 500 persons died, 48,000 were hospitalized, and another 1,000 had permanent brain damage from measles encephalitis (134). Through a successful 2-dose measles vaccination program (i.e., a first dose at age 12-15 months and a second dose between ages 4-6 years) (135) and better measles control throughout the region of the Americas (136), endemic transmission of measles was interrupted in the United States, and measles was declared eliminated from the country in 2000 (137). However, measles remains widespread in the majority of countries outside the Western Hemisphere, with an estimated 20 million measles cases occurring worldwide (138) and approximately 164,000 related deaths (139). Thus, the United States continues to experience international importations that might lead to transmission among U.S. residents and limited outbreaks, especially in unvaccinated populations (140-143). During 2001-2008, a total of 557 confirmed measles cases were reported in the United States from 37 states and the District of Columbia (annual median: 56; range: 37 in 2004 to 140 in 2008), representing an annual incidence of less than one case per million population (144). Of the 557 reported case-patients, 126 (23%) were hospitalized (annual median: 16; range: 5-29); of these, at least five case-patients were admitted to intensive care. Two deaths were reported, both in 2003 (144). Of the 557 reported case-patients during 2001-2008, a total of 223 (40%) were adults, including 156 (28%) aged 20-39 years and 67 (12%) aged >40 years. Of the 438 measles cases among U.S. residents, 285 (65%) cases were considered preventable (i.e., occurred among persons who were eligible for vaccination but were unvaccinated) (144). The remaining 153 (35%) cases were considered nonpreventable. Cases were defined as nonpreventable if they occurred among U.S.-resident case-patients who had received >1 dose of measles-containing vaccine, if patients were vaccinated as recommended if traveling internationally, or if they were not vaccinated but had other evidence of immunity (i.e., were born before 1957 and therefore presumed immune from natural disease in childhood, had laboratory evidence of immunity, or had documentation of physician-diagnosed disease) or for whom vaccination is not recommended. During 2001-2008, a total of 12.5% (one of eight) of measles cases reported to CDC among HCP occurred in persons born before 1957; the other seven cases occurred among HCP born after 1957. Measles-mumps-rubella (MMR) vaccination policies have been enforced with variable success in United States health-care facilities over the past decade.
Even though medical settings were a primary site of measles transmission during the 1989-1991 measles resurgence (145,146), as of September 2011, only three states (New York, Oklahoma, and Rhode Island) had laws mandating that all hospital personnel have proof of measles immunity and did not allow for religious or philosophic exemptions (147). Vaccine coverage in the United States is high; in 2010, a total of 91.5% of children aged 19-35 months had received >1 dose of MMR vaccine.
# Measles Transmission and the Costs of Mitigating Measles Exposures in Health-Care Settings
Health-care-associated cases of measles are of public health concern. Because of the severity of measles, infected persons are likely to seek medical care in primary health-care facilities, emergency departments, or hospitals (141,149,150). Medical settings played a prominent role in perpetuating outbreaks of measles transmission during the 1989-1991 measles resurgence (145,146) and were a primary site of measles transmission in a health-care-associated outbreak in 2008 (149). During 2001-2008, a total of 27 reported measles cases were transmitted in U.S. health-care facilities, accounting for 5% of all reported U.S. measles cases. Because of the greater opportunity for exposure, HCP are at higher risk than the general population for becoming infected with measles. A study conducted in 1996 in medical facilities in a county in Washington state indicated that HCP were 19 times more likely to develop measles than other adults (151). During 2001-2008, in the 23 health-care settings in which measles transmission was reported, eight cases occurred among HCP, six (75%) of whom were unvaccinated or had unknown vaccination status. One health-care provider was hospitalized in an intensive care unit for 6 days from severe measles complications (142). During a health-care-associated measles outbreak in Arizona in 2008 with 14 cases, six cases were acquired in hospitals, and one was acquired in an outpatient setting. One unvaccinated health-care worker developed measles and infected a hospital emergency room patient who required intensive care following hospital admission for measles (149). High costs also are involved in evaluating and containing exposures and outbreaks in health-care facilities, as well as a substantial disruption of regular hospital routines when control measures are instituted, especially if hospitals do not have readily available data on the measles immunity status of their staff and others included in the facility vaccination program. In 2005 in Indiana, one hospital spent more than $113,000 responding to a measles outbreak (142), and in 2008 in Arizona, two hospitals spent $799,136 responding to and containing cases in their facilities (149). The Arizona outbreak response required rapid review of measles documentation of 14,844 HCP at seven hospitals and emergency vaccination of approximately 4,500 HCP who lacked documentation of measles immunity. Serologic testing at two hospitals among 1,583 HCP without documented history of vaccination or without documented laboratory evidence of measles immunity revealed that 138 (9%) of these persons lacked measles IgG antibodies (149).
# Vaccine Effectiveness, Duration of Immunity and Seroprevalence Studies, and Vaccine Safety
# Vaccine Effectiveness
MMR vaccine is highly effective in preventing measles, with a 1-dose vaccine effectiveness of 95% when administered on or after age 12 months and a 2-dose vaccine effectiveness of 99% (135).
# Duration of Immunity and Seroprevalence Studies
Two doses of live measles vaccine are considered to provide long-lasting immunity (135). Although antibody levels decline following vaccination, a study examining neutralizing antibody levels up to 10 years following the second dose of MMR vaccine in children indicates that antibodies remain above the level considered protective (152). Studies among HCP in the United States during the measles resurgence in the late 1980s through early 1990s demonstrated that 4%-10% of all HCP lacked measles IgG antibodies (153-156). During the 2008 Arizona outbreak, of the 1,077 health-care providers born during or after 1957 without documented measles immunity, 121 (11%) were seronegative (149). In a study of measles seroprevalence among 469 newly hired HCP at a hospital in North Carolina who were born before 1957, and thus considered immune by age, who could not provide written evidence of immunity to measles, serologic testing indicated that six (1.3%) lacked measles IgG antibodies (157). Other serologic studies of hospital-based HCP indicate that 2%-9% of those born before 1957 lacked antibodies to measles (156,158-160). A survey conducted during 1999-2004 found a seroprevalence of measles antibodies of 95.9% among persons in the U.S. population aged 6-49 years (161). The survey indicated that the lowest prevalence, 92.4%, was among adults born during 1967-1976 (161). A 1999 study of U.S. residents aged >20 years determined that 93% had antibodies to measles virus (162).
# Vaccine Safety
Measles vaccine is administered in combination with the mumps and rubella components as the MMR vaccine in the United States. Monovalent measles vaccine rarely has been used in the United States in the past 2 decades and is no longer available. After decades of use, evidence demonstrates that MMR vaccine has an excellent safety profile (134). The majority of documented adverse events occur in children. In rare circumstances, MMR vaccination of adults has been associated with the following adverse events: anaphylaxis (approximately 1.0-3.5 occurrences per million doses administered) (134), thrombocytopenia from the measles component or rubella component (a rate of three to four cases for every 100,000 doses) (134), and acute arthritis from the rubella component (arthralgia develops among approximately 25% of rubella-susceptible postpubertal females after MMR vaccination, and approximately 10% have acute arthritis-like signs and symptoms) (135). When joint symptoms occur, they generally persist for 1 day-3 weeks and rarely recur (135). Chronic joint symptoms attributable to the rubella component of the MMR vaccine are reported very rarely, if they occur at all. Evidence does not support an association between MMR vaccination and any of the following: hearing loss, retinopathy, optic neuritis, Guillain-Barré syndrome, type 1 diabetes, Crohn's disease, or autism (135,163-169). A woman can excrete the rubella vaccine virus in breast milk and transmit the virus to her infant, but the infection remains asymptomatic (135).
Presumptive evidence of immunity to measles for persons who work in health-care facilities includes any of the following:
- written documentation of vaccination with 2 doses of live measles or MMR vaccine administered at least 28 days apart,
- laboratory evidence of immunity,§
- laboratory confirmation of disease, or
- birth before 1957.¶

# Prevaccination Testing

Prevaccination antibody screening before MMR vaccination for an employee who does not have adequate presumptive evidence of immunity is not necessary unless the medical facility considers it cost effective (134,170-172), although no recent studies have been conducted. For HCP who have 2 documented doses of MMR vaccine or other acceptable evidence of immunity to measles, serologic testing for immunity is not recommended. In the event that an HCP who has 2 documented doses of MMR vaccine is tested serologically and determined to have negative or equivocal measles titer results, it is not recommended that the person receive an additional dose of MMR vaccine. Such persons should be considered to have presumptive evidence of measles immunity. Documented age-appropriate vaccination supersedes the results of subsequent serologic testing. Because rapid vaccination is necessary to halt disease transmission, serologic screening before vaccination is not recommended during outbreaks of measles.

# Use of Vaccine and Immune Globulin for Treating Exposed Persons and Controlling Outbreaks

Following airborne infection-control precautions and implementing other infection-control measures are important to control the spread of measles but might fail to prevent all nosocomial transmission, because transmission to other susceptible persons might occur before illness is recognized. Persons infected with measles are infectious 4 days before rash onset through 4 days after rash onset. When a person who is suspected of having measles visits a health-care facility, airborne infection-control precautions should be followed stringently. The patient should be asked immediately to wear a medical mask and should be placed in an airborne-infection isolation room (i.e., a negative air-pressure room) as soon as possible. If an airborne-infection isolation room is not available, the patient should be placed in a private room with the door closed and be asked to wear a mask. If possible, only staff with presumptive evidence of immunity should enter the room of a person with suspected or confirmed measles. Regardless of presumptive immunity status, all staff entering the room should use respiratory protection consistent with airborne infection-control precautions (i.e., use of an N95 respirator or a respirator with similar effectiveness in preventing airborne transmission) (3,150). Because of the possibility, albeit low (~1%), of measles vaccine failure in HCP exposed to infected patients (173), all HCP should observe airborne precautions in caring for patients with measles.

HCP in whom measles occurs should be excluded from work until >4 days following rash onset. Contacts with measles-compatible symptoms should be isolated, and appropriate infection-control measures (e.g., rapid vaccination of susceptible contacts) should be implemented to prevent further spread (174). If measles exposures occur in a health-care facility, all contacts should be evaluated immediately for presumptive evidence of measles immunity. HCP without evidence of immunity should be offered the first dose of MMR vaccine and excluded from work from day 5-21 following exposure (135).
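As the next paragraph details, the exclusion window runs from day 5 after the first exposure through day 21 after the last exposure, which reduces to simple calendar arithmetic. A minimal sketch (the helper function and dates are hypothetical illustrations, not part of the guidance):

```python
from datetime import date, timedelta

def measles_furlough_window(first_exposure: date, last_exposure: date) -> tuple:
    """Exclusion period for exposed HCP without evidence of immunity:
    day 5 after the first exposure through day 21 after the last exposure."""
    return first_exposure + timedelta(days=5), last_exposure + timedelta(days=21)

# Hypothetical exposure dates:
start, end = measles_furlough_window(date(2011, 6, 1), date(2011, 6, 3))
print(f"exclude from work {start} through {end}")  # 2011-06-06 through 2011-06-24
```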
HCP without evidence of immunity who are not vaccinated after exposure should be removed from all patient contact and excluded from the facility from day 5 after their first exposure through day 21 after the last exposure, even if they have received postexposure intramuscular immune globulin of 0.25 mL/kg (40 mg IgG/kg) (135). Those with documentation of 1 vaccine dose may remain at work and should receive the second dose.

Case-patient contacts who do not have presumptive evidence of measles immunity should be vaccinated, offered intramuscular immune globulin of 0.25 mL/kg (40 mg IgG/kg), which is the standard dosage for nonimmunocompromised persons (135), or quarantined until 21 days after their exposure to the case-patient. Contacts with measles-compatible symptoms should be isolated, and appropriate infection-control measures should be implemented to prevent further spread. If immune globulin is administered to an exposed person, observations should continue for signs and symptoms of measles for 28 days after exposure because immune globulin might prolong the incubation period.

Available data suggest that live virus measles vaccine, if administered within 72 hours of measles exposure, will prevent or modify disease (134). Even if it is too late to provide effective postexposure prophylaxis by administering MMR, the vaccine can provide protection against future exposure to all three infections. Identifying persons who lack evidence of measles immunity during contact investigations provides a good opportunity to offer MMR vaccine to protect against measles as well as mumps and rubella, not only for HCP who are part of an organization's vaccination program, but also for patients and visitors. If an exposed person is already incubating measles, MMR vaccination will not exacerbate symptoms. In these circumstances, persons should be advised that a measles-like illness occurring shortly after vaccination could be attributable either to natural infection or to the vaccine strain. In such circumstances, specimens should be submitted for viral strain identification.

# Mumps

# Background

# Epidemiology and Risk Factors

Mumps is an acute viral infection characterized by fever and inflammation of the salivary glands (usually parotitis) (175). The spectrum of illness ranges from subclinical infection (20%-40%) to nonspecific respiratory illness, sialadenitis including classic parotitis, deafness, orchitis, and meningoencephalitis; severity increases with age (175). In the prevaccine era, mumps was a common childhood illness, with approximately 186,000 mumps cases reported in the United States per year (176). After the introduction of the Jeryl Lynn strain mumps vaccine in 1967 and the implementation of the 1-dose mumps vaccine policy for children in 1977 (177), reports of mumps cases in the United States declined 99% (178). During 1986-1987, an increase in reported mumps cases occurred, primarily affecting unvaccinated adolescents and young adults. In the late 1980s, sporadic outbreaks continued to occur that affected both unvaccinated and 1-dose vaccinated adolescents and young adults (178). In 1989, a second dose of MMR vaccine was recommended nationwide for better measles control among school-aged children (179). Historically low rates of mumps followed, with only several hundred reported cases per year in the United States during 2000-2005. In 1998, a national goal to eliminate mumps was set for 2010 (180).
However, in 2006, a total of 6,584 mumps cases were reported in the United States, the largest U.S. mumps outbreak in nearly 20 years (181-183). Whereas overall national mumps incidence was 2.2 per 100,000 population, eight states in the Midwest were the most affected, with 2.5-66.1 cases per 100,000 population (183). The highest incidence (31.1 cases per 100,000 population) was among persons aged 18-24 years (e.g., college-aged students), the majority of whom had received 2 doses of mumps-containing vaccine. Of the 4,017 case-patients for whom age and vaccination status were known, 1,786 (44%) were aged >25 years (incidence: 7.2 cases per 100,000 persons); of these 1,786 patients, 351 (20%) received at least 2 doses, 444 (25%) received 1 dose, 336 (19%) were unvaccinated, and 655 (37%) had unknown vaccination status. Since the 2006 resurgence, two additional large U.S. mumps outbreaks have occurred, both during 2009-2010: one among members of a religious community with cases occurring throughout the northeastern United States (184) and the other in Guam (185).

# Mumps Transmission and the Costs of Mitigating Mumps Exposures in Health-Care Settings

Although health-care-associated transmission of mumps is infrequent, it might be underreported because of the high percentage (~20%-40%) of infected persons who might be asymptomatic (186-189). In a survey of 9,299 adults in different professions conducted in 1968, before vaccine was used routinely, the rate of mumps acquisition was highest among dentists and HCP, with rates of 18% among dentists and 15% among physicians (37% for pediatricians), compared with 9% among primary and secondary school teachers and 2% among university staff members (190).

In the postvaccine era, mumps transmission also has been documented in medical settings (191-193). During a mumps outbreak in Tennessee during 1986-1987, a total of 17 (12%) of 146 hospitals and three (50%) of six long-term-care facilities reported one or more practices that could contribute to the spread of mumps, including not isolating patients with mumps, assigning susceptible staff to care for patients with mumps, and not immunizing susceptible employees. Health-care-associated transmission resulted in six cases of mumps infection among health-care providers and nine cases of mumps infection among patients (191). In Utah in 1994, two health-care providers in a hospital developed mumps after they had contact with an infected patient (192). During the 2006 outbreak, one health-care facility in Chicago experienced ongoing mumps transmission lasting 4 weeks (193). During the 2006 multistate U.S. outbreak, 144 (8.5%) of 1,705 adult case-patients in Iowa for whom occupation was known were health-care providers (Iowa Department of Public Health, unpublished data, 2006). Whether transmission occurred from patients, coworkers, or persons in the community is unknown. During the 2009-2010 outbreak in the northeastern region of the United States, seven (0.2%) of the 3,400 case-patients were health-care providers, six of whom likely were infected by patients because they had no other known exposure.

Exposures to mumps in health-care settings also can result in added economic costs because of furlough or reassignment of staff members from patient-care duties or closure of wards (194). In 2006, a Kansas hospital spent $98,682 containing a mumps outbreak (195). During a mumps outbreak in Chicago in 2006, one health-care facility spent $262,788 controlling the outbreak (193).
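The incidence figures quoted in this section follow the standard rate formula (cases divided by population, scaled to 100,000). A minimal check, using a rounded, assumed denominator rather than the census denominators behind the published figures:

```python
# Illustrative rate arithmetic for the incidence figures quoted above.
# The report used actual census denominators; the population below is a
# rounded assumption, so the result only approximates the published 2.2.

def incidence_per_100k(cases: int, population: int) -> float:
    """Cases per 100,000 population."""
    return cases / population * 100_000

us_population_2006 = 300_000_000  # rounded assumption, not from this report
print(f"{incidence_per_100k(6_584, us_population_2006):.1f} per 100,000")  # 2.2
```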
# Vaccine Effectiveness, Duration of Immunity and Seroprevalence Studies, and Vaccine Safety

# Vaccine Effectiveness

MMR vaccine has a 1-dose vaccine effectiveness in preventing mumps of 80%-85% (range: 75%-91%) (175,196-199) and a 2-dose vaccine effectiveness of 79%-95% (199-202). In a study conducted on two Iowa college campuses during the 2006 mumps outbreak among a population that was primarily vaccinated with 2 doses, 2-dose vaccine effectiveness ranged from 79% to 88% (202).

# Duration of Immunity and Seroprevalence Studies

Mumps antibody levels wane over time following the first or second dose of vaccination (203,204), but the correlates of immunity to mumps are poorly understood, and the significance of these waning antibody levels is unclear. A study on a university campus in Nebraska in 2006 indicated lower levels of mumps neutralizing antibodies among students who had been vaccinated with a second MMR dose >15 years previously than among those who had been vaccinated 1-5 years previously, but the difference was not statistically significant (p>0.05) (205). In a 2006 study on a university campus in Kansas, students with mumps were more likely to have received a second dose of MMR vaccine >10 years previously than were their roommates without mumps (206). However, another 2006 study from an Iowa college campus identified no such association (202).

During 1999-2004, national seroprevalence for mumps antibodies for persons aged 6-49 years was 90% (95% confidence interval [CI] = 88.8-91.1) (207). In the Nebraska study, 414 (94%) of the 440 participants were seropositive for mumps antibodies (205). A study in Kansas in 2006 indicated that 13% of hospital employees lacked antibodies to the mumps virus (195). In a recent study of mumps seroprevalence among 381 newly hired health-care personnel at a hospital in North Carolina who were born before 1957, and thus considered immune by age, who could not provide written evidence of immunity to mumps, serologic testing indicated that 14 (3.7%) lacked IgG antibodies to mumps (157).

# Prevaccination Testing

For HCP who do not have adequate presumptive evidence of mumps immunity, prevaccination antibody screening before MMR vaccination is not necessary (135,175). For HCP who have 2 documented doses of MMR vaccine or other acceptable evidence of immunity to mumps, serologic testing for immunity is not recommended. In the event that a health-care provider who has 2 documented doses of MMR vaccine is tested serologically and determined to have negative or equivocal mumps titer results, it is not recommended that the person receive an additional dose of MMR vaccine. Such persons should be considered immune to mumps. Documented age-appropriate vaccination supersedes the results of subsequent serologic testing. Likewise, during outbreaks of mumps, serologic screening before vaccination is not recommended because rapid vaccination is necessary to halt disease transmission.

# Controlling Mumps Outbreaks in Health-Care Settings

Placing patients in droplet precautions and implementing other infection-control measures are important to control the spread of mumps but might fail to prevent all nosocomial transmission, because transmission to other susceptible persons might occur before illness is recognized (208).
When a person suspected of having mumps visits a health-care facility, only HCP with adequate presumptive evidence of immunity should be exposed to the person, and in addition to standard precautions, droplet precautions should be followed. The index case-patient should be isolated, and respiratory precautions (gown and gloves) should be used for patient contact. Negative-pressure rooms are not required. The patient should be isolated for 5 days after the onset of parotitis, during which time shedding of virus is likely to occur (209).

If mumps exposures occur in a health-care facility, all contacts should be evaluated for evidence of mumps immunity. HCP with no evidence of mumps immunity who are exposed to patients with mumps should be offered the first dose of MMR vaccine as soon as possible, but vaccine can be administered at any interval following exposure; they should be excluded from duty from day 12 after the first unprotected exposure through day 25 after the most recent exposure. HCP with documentation of 1 vaccine dose may remain at work and should receive the second dose. HCP with mumps should be excluded from work for 5 days from the onset of parotitis (209).

Antibody response to the mumps component of MMR vaccine generally is believed not to develop soon enough to provide effective prophylaxis after exposure to suspected mumps (191,210), but data are insufficient to rule out a prophylactic effect. Nonetheless, the vaccine is not recommended for prophylactic purposes after exposure. However, identifying persons who lack presumptive evidence of mumps immunity during contact investigations provides a good opportunity to offer MMR vaccine to protect against mumps as well as measles and rubella, not only for HCP who are part of an organization's vaccination program, but also for patients and visitors. If an exposed person already is incubating mumps, MMR vaccination will not exacerbate the symptoms. In these circumstances, persons should be advised that a mumps-like illness occurring shortly after vaccination is likely to be attributable to natural infection. In such circumstances, specimens should be submitted for viral strain identification to differentiate between vaccine and wild-type virus. Immune globulin is not routinely used for postexposure protection from mumps because no evidence exists that it is effective (135).

** The first dose of mumps-containing vaccine should be administered on or after the first birthday; the second dose should be administered no earlier than 28 days after the first dose.
†† Laboratory evidence of immunity: mumps immunoglobulin (IgG) in the serum; equivocal results should be considered negative.
§§ The majority of persons born before 1957 are likely to have been infected naturally between birth and 1977, the year that mumps vaccination was recommended for routine use, and may be presumed immune, even if they have not had clinically recognizable mumps disease.

# Rubella

# Background

# Epidemiology and Risk Factors

Rubella (German measles) is a viral disease characterized by rash, low-grade fever, lymphadenopathy, and malaise (211). Although rubella is considered a benign disease, transient arthralgia and arthritis are observed commonly in infected adults, particularly among postpubertal females. Chronic arthritis has been reported after rubella infection, but such reports are rare, and evidence of an association is weak (212). Other complications that occur infrequently are thrombocytopenia and encephalitis (211). Infection is asymptomatic in 25%-50% of cases (213).
Clinical diagnosis of rubella is unreliable and should not be considered in assessing immune status. Many rash illnesses might mimic rubella infection, and many rubella infections are unrecognized. The only reliable evidence of previous rubella infection is the presence of serum rubella IgG antibody (211). Of primary concern are the effects that rubella can have when a pregnant woman becomes infected, especially during the first trimester, which can result in miscarriages, stillbirths, therapeutic abortions, and congenital rubella syndrome (CRS), a constellation of birth defects that often includes blindness, deafness, mental retardation, and congenital heart defects (211,213).

Postnatal rubella is transmitted through direct or droplet contact from nasopharyngeal secretions. The incubation period ranges from 12 to 23 days (214,215). An ill person is most contagious when the rash first appears, but the period of maximal communicability extends from a few days before to 7 days after rash onset (213). Rubella is less contagious than measles.

In the prevaccine era, rubella was endemic globally, with larger epidemics occurring periodically; in the United States, rubella epidemics occurred approximately every 7 years (211). During the 1964-1965 global rubella epidemic, an estimated 12.5 million cases of rubella occurred in the United States, resulting in approximately 2,000 cases of encephalitis, 11,250 fetal deaths attributable to spontaneous or surgical abortions, 2,100 infants who were stillborn or died soon after birth, and 20,000 infants born with CRS. The economic impact of this epidemic in the United States alone was estimated at $1.5 billion in 1965 dollars ($10 billion in 2010 dollars) (216). After the rubella vaccine was licensed in the United States in 1969, reported rubella cases decreased from 57,686 in 1969 to 12,491 in 1976 (216), and CRS cases reported nationwide decreased from 68 in 1970 to 23 in 1976 (217). Declines in rubella age-specific incidence occurred in all age groups, including adolescents and adults, but the greatest declines were among children aged <15 years (216).

As of September 2011, only three states (i.e., New York, Oklahoma, and Rhode Island) had laws mandating that all hospital personnel have proof of rubella immunity and did not allow for religious or philosophical exemptions (147). Additional states had requirements for specific types of facilities or for certain employees within those facilities, but they did not have universal laws mandating proof of rubella immunity for all hospital personnel (147).

# Rubella Transmission and the Costs of Mitigating Rubella Exposures in Health-Care Settings

No documented transmission of rubella to HCP, other hospital staff, or patients in U.S. health-care facilities has occurred since elimination was declared. However, in the decades before elimination, rubella transmission was documented in at least 10 U.S. medical settings (221-231) and led to outbreaks with serious consequences, including pregnancy terminations, disruption of hospital routine, absenteeism from work, expensive containment measures, negative publicity, and the threat of litigation (232). In these outbreaks, transmission occurred from HCP to susceptible coworkers and patients, as well as from patients to HCP and other patients. No data are available on whether HCP are at increased risk for acquiring rubella compared with other professions.
# Vaccine Effectiveness, Duration of Immunity and Seroprevalence Studies, and Vaccine Safety

# Vaccine Effectiveness

Vaccine effectiveness of the RA 27/3 rubella vaccine is 95% (CI = 85%-99%) against clinical rubella and >99% against laboratory-confirmed clinical rubella (211,233). Antibody responses to rubella as part of MMR vaccine are equal (i.e., >99%) to those seen after the single-antigen RA 27/3 rubella vaccine (211,234).

# Duration of Immunity and Seroprevalence Studies

In clinical trials, 97%-99% of susceptible persons who received a single dose of the RA 27/3 rubella vaccine when they were aged >12 months developed antibody (211,235,236). Two studies have demonstrated that vaccine-induced rubella antibodies might wane after 12-15 years (237,238); however, rubella surveillance data do not indicate that rubella and CRS are increasing among vaccinated persons.

National seroprevalence for rubella antibodies among persons aged 6-49 years during 1999-2004 was 91% (239). During 1986-1990, serologic surveys in one hospital indicated that 5% of HCP (including persons born in 1957 or earlier) did not have detectable rubella antibody (240). Earlier studies indicated that up to 14%-19% of U.S. hospital personnel, including young women of childbearing age, lacked detectable rubella antibody (225,241,242). In a recent study of rubella seroprevalence among 477 newly hired HCP at a hospital in North Carolina who were born before 1957, and thus considered immune by age, who could not provide written evidence of immunity to rubella, serologic testing revealed that 14 (3.1%) lacked detectable levels of antibody to rubella (157). Because of the potential for contact with pregnant women in any type of health-care facility, all HCP should have documented presumptive evidence of immunity to rubella. History of disease is not considered adequate evidence of immunity.

# Vaccine Safety

As a result of the theoretic risk to the fetus, women should be counseled to avoid becoming pregnant for 28 days after receipt of a rubella-containing vaccine (243). However, receipt of rubella-containing vaccine during pregnancy should not be a reason to consider termination of pregnancy; data from 18 years of following to term 321 known rubella-susceptible women who were vaccinated within 3 months before or 3 months after conception indicated that none of the 324 infants born to these mothers had malformations compatible with congenital rubella syndrome, but five had evidence of subclinical rubella infection (244). The estimated risk for serious malformations to fetuses attributable to the mother receiving RA 27/3 vaccine is considered to range from zero to 1.6% (135,244). Evidence does not support a link between MMR vaccination and any of the following: hearing loss, retinopathy, optic neuritis, Guillain-Barré syndrome, type 1 diabetes, Crohn's disease, or autism (135,163-169). A woman can excrete the rubella vaccine virus in breast milk and transmit the virus to her infant, but the infection remains asymptomatic (135).

Presumptive evidence of immunity to rubella for persons who work in health-care facilities includes any of the following:
- written documentation of vaccination with 1 dose of live rubella or MMR vaccine,
- laboratory evidence of immunity,
- laboratory confirmation of rubella infection or disease, or
- birth before 1957* (except women of childbearing potential who could become pregnant, although pregnancy in this age group would be exceedingly rare).

* Because rubella can occur in some persons born before 1957 and because congenital rubella and congenital rubella syndrome can occur in the offspring of women infected with rubella virus during pregnancy, birth before 1957 is not acceptable evidence of rubella immunity for women who could become pregnant.
# Prevaccination Testing

For HCP who do not have adequate presumptive evidence of rubella immunity, prevaccination antibody screening before MMR vaccination is not necessary unless the medical facility considers it cost effective (135). For HCP who have 1 documented dose of MMR vaccine or other acceptable evidence of immunity to rubella, serologic testing for immunity is not recommended. In the event that a health-care provider who has at least 1 documented dose of rubella-containing vaccine is tested serologically and determined to have negative or equivocal rubella titer results, receipt of an additional dose of MMR vaccine for prevention of rubella is not recommended. Such persons should be considered immune to rubella. However, if the provider requires a second dose of measles or mumps vaccine, then a second dose of MMR should be administered. Documented age-appropriate vaccination supersedes the results of subsequent serologic testing. Likewise, during outbreaks of rubella, serologic screening before vaccination is not recommended because rapid vaccination is necessary to halt disease transmission.

# Controlling Rubella Outbreaks

To prevent transmission of rubella in health-care settings, patients suspected to have rubella should be placed in private rooms. In addition to standard precautions, droplet precautions should be followed until 7 days after onset of symptoms. Room doors can remain open, and special ventilation is not required.

Any exposed HCP who do not have adequate presumptive evidence of rubella immunity should be excluded from duty beginning 7 days after exposure to rubella and continuing through either 1) 23 days after the most recent exposure or 2) 7 days after rash appears if the provider develops rubella (213-215). Exposed HCP who do not have adequate presumptive evidence of immunity and who are vaccinated postexposure should be excluded from duty for 23 days after the most recent exposure to rubella because no evidence exists that postexposure vaccination is effective in preventing rubella infection (244).

Neither rubella-containing vaccine (244) nor immune globulin (IG) (211,244) is effective for postexposure prophylaxis of rubella. Although intramuscular administration of 20 mL of immune globulin within 72 hours of rubella exposure might reduce the risk for rubella, it will not eliminate the risk (135,245); infants with congenital rubella have been born to women who received IG shortly after exposure (213). In addition, administration of IG after exposure to rubella might modify or suppress symptoms and create an unwarranted sense of security with respect to transmission. If exposure to rubella does not cause infection, postexposure vaccination with MMR vaccine should induce protection against subsequent infection with rubella, as well as measles and mumps. If the exposure results in infection, no evidence indicates that administration of MMR vaccine during the presymptomatic or prodromal stage of illness increases the risk for vaccine-associated adverse events (213).

# Pertussis

# Background

# Epidemiology and Risk Factors

Pertussis is a highly contagious bacterial infection.
Secondary attack rates among susceptible household contacts exceed 80% (246,247). Transmission occurs by direct contact with respiratory secretions or large aerosolized droplets from the respiratory tract of infected persons. The incubation period is generally 7-10 days but can be as long as 21 days. The period of communicability starts with the onset of the catarrhal stage and extends into the paroxysmal stage. Symptoms of early pertussis (catarrhal phase) are indistinguishable from those of other upper respiratory infections.

Vaccinated adolescents and adults, whose immunity from childhood vaccinations wanes 5-10 years after the most recent dose of vaccine (usually administered at age 4-6 years), are an important source of pertussis infection for susceptible infants. Infants too young to be vaccinated are at greatest risk for severe pertussis, including hospitalization and death. The disease can be transmitted from adults to close contacts, especially unvaccinated children.

Vaccination coverage among infants and children for diphtheria and tetanus toxoids and acellular pertussis (DTaP) vaccine remains high. In 2010, coverage for children aged 19-35 months who had received ≥4 doses of DTaP/diphtheria and tetanus toxoids and pertussis vaccine (DTP)/diphtheria and tetanus toxoids vaccine (DT) was 84% (21). Among children entering kindergarten for the 2009-2010 school year, DTaP coverage was 93% (148). Vaccination coverage for tetanus toxoid, reduced diphtheria toxoid and acellular pertussis (Tdap) vaccine was 68.7% among adolescents in 2010 and <7% among adults in 2009 (22,248). Tdap vaccination coverage among HCP was 17.0% in 2009 (248).

# Disease in Health-Care Settings and Impact on Health-Care Personnel and Patients

In hospital settings, transmission of pertussis has occurred from hospital visitors to patients, from HCP to patients, and from patients to HCP (249-252). Although of limited size (range: 2-17 patients and 5-13 staff), documented outbreaks were costly and disruptive. In each outbreak, HCP were evaluated for cough illness and required diagnostic testing, prophylactic antibiotics, and exclusion from work.

During outbreaks that occur in hospitals, the risk for contracting pertussis among patients or staff is often difficult to quantify because exposure is not well defined. Serologic studies conducted among hospital staff indicate that exposure to pertussis is much more frequent than suggested by attack rates of clinical disease (246,249-254). In one outbreak, seroprevalence of pertussis agglutinating antibodies among HCP correlated with the degree of patient contact and was highest among pediatric house staff (82%) and ward nurses (71%) and lowest among nurses with administrative responsibilities (35%) (251).

A model to estimate the cost of vaccinating HCP and the net return from preventing nosocomial pertussis was constructed using probabilistic methods and a hypothetical cohort of 1,000 HCP with direct patient contact followed for 10 years (255). Baseline assumptions, determined from data in the literature, included incidence of pertussis in HCP, ratio of identified exposures per HCP case, symptomatic percentage of seroconfirmed pertussis infections in HCP, cost of infection-control measures per exposed person, vaccine efficacy, vaccine coverage, employment turnover rate, adverse events, and cost of vaccine (255).
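The next paragraph reports the model's published results. As an illustration of how a benefit-cost ratio of this kind is derived, here is a minimal, deterministic sketch; the program-cost input is a hypothetical value chosen so the ratio lands at the published 2.38, whereas the actual model was probabilistic and used the full set of assumptions listed above:

```python
def benefit_cost(control_cost_no_vax: float, control_cost_with_vax: float,
                 program_cost: float) -> tuple:
    """Return (net savings, benefit-cost ratio) over the analysis horizon."""
    averted = control_cost_no_vax - control_cost_with_vax  # control costs avoided
    return averted - program_cost, averted / program_cost

# $388,000 and $69,000 are the published 10-year control costs (reference 255);
# the $134,000 program cost is a hypothetical value for illustration only.
net, ratio = benefit_cost(388_000, 69_000, 134_000)
print(f"net savings ${net:,.0f}; benefit-cost ratio {ratio:.2f}")
# net savings $185,000; benefit-cost ratio 2.38
```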
Over a 10-year period, the cost of infection control would be $388,000 without Tdap vaccination of HCP compared with $69,000 with such a program (255). Introduction of a vaccination program would result in net savings as high as $535,000 and a benefit-cost ratio of 2.38 (i.e., for every dollar spent on the vaccination program, the hospital would save $2.38 on control measures) (255).

# Vaccine Effectiveness, Duration of Immunity, and Vaccine Safety

A prelicensure immunogenicity and safety study in adolescents and adults of a vaccine containing acellular pertussis estimated vaccine efficacy to be 92% (256). Recent postlicensure studies of Tdap demonstrate vaccine effectiveness of 78% and 66% (257,258). Duration of immunity from vaccination has yet to be evaluated. Data from pre- and postlicensure studies support the safety of Tdap in adolescents and adults (259-263).

Since the 2005 Tdap recommendations for HCP, one study tried to determine whether postexposure prophylaxis following pertussis exposure was necessary for Tdap-vaccinated HCP (264). During the study period, 116 exposures occurred among 94 HCP. Pertussis infection occurred in 2% of those who received postexposure prophylaxis compared with 10% of those who did not, suggesting a possible benefit of postexposure prophylaxis among Tdap-vaccinated HCP (264). Because Tdap coverage is suboptimal among HCP, and the duration of protection afforded by Tdap is unknown, vaccination status does not change the approach to evaluating the need for postexposure prophylaxis in exposed HCP. Postexposure prophylaxis is necessary for HCP in contact with persons at risk for severe disease. Other HCP either should receive postexposure prophylaxis or be monitored for 21 days after pertussis exposure and treated at the onset of signs and symptoms of pertussis. Recommended postexposure prophylaxis antibiotics for HCP exposed to pertussis include azithromycin, clarithromycin, or erythromycin. HCP are not at greater risk for diphtheria or tetanus than the general population.

# Recommendations

# Vaccination

Regardless of age, HCP should receive a single dose of Tdap as soon as feasible if they have not previously received Tdap and regardless of the time since their most recent Td vaccination. Vaccinating HCP with Tdap will protect them against pertussis and is expected to reduce transmission to patients, other HCP, household members, and persons in the community. Tdap is not licensed for multiple administrations; therefore, after receipt of Tdap, HCP should receive Td for future booster vaccination against tetanus and diphtheria. Hospitals and ambulatory-care facilities should provide Tdap for HCP and use approaches that maximize vaccination rates (e.g., education about the benefits of vaccination, convenient access, and the provision of Tdap at no charge).

# Prevaccination Testing

Prevaccination serologic testing is not recommended.

# Demonstrating Immunity

Immunity cannot be demonstrated through serologic testing because serologic correlates of protection are not well established.

# Controlling Pertussis Outbreaks in Health-Care Settings

Prevention of pertussis transmission in health-care settings involves diagnosis and early treatment of clinical cases, droplet isolation of infectious patients who are hospitalized, exclusion from work of HCP who are infectious, and postexposure prophylaxis.
Early diagnosis of pertussis, before secondary transmission occurs, is difficult because the disease is highly communicable during the catarrhal stage, when symptoms are still nonspecific. Pertussis should be considered in the differential diagnosis for any patient with an acute cough illness with severe or prolonged paroxysmal cough, particularly if characterized by posttussive vomiting, whoop, or apnea. Nasopharyngeal specimens should be taken, if possible, from the posterior nasopharynx with a calcium alginate or Dacron swab for culture and/or polymerase chain reaction (PCR) assay.

Health-care facilities should maximize efforts to prevent transmission of Bordetella pertussis. Precautions to prevent respiratory droplet transmission or spread by close or direct contact should be employed in the care of patients admitted to the hospital with suspected or confirmed pertussis (265). These precautions should remain in effect until patients are improved clinically and have completed at least 5 days of appropriate antimicrobial therapy. HCP in whom symptoms (i.e., unexplained rhinitis or acute cough) develop after known pertussis exposure might be at risk for transmitting pertussis and should be excluded from work until 5 days after the start of appropriate therapy (3).

Data on the need for postexposure prophylaxis in Tdap-vaccinated HCP are inconclusive (264). Certain vaccinated HCP are still at risk for B. pertussis infection, and Tdap might not preclude the need for postexposure prophylaxis. Postexposure antimicrobial prophylaxis is recommended for all HCP who have unprotected exposure to pertussis and are likely to expose a patient at risk for severe pertussis (e.g., hospitalized neonates and pregnant women). Other HCP should either receive postexposure antimicrobial prophylaxis or be monitored daily for 21 days after pertussis exposure and treated at the onset of signs and symptoms of pertussis.

# Varicella

# Background

# Epidemiology and Risk Factors

Varicella is a highly infectious disease caused by primary infection with varicella-zoster virus (VZV). VZV is transmitted from person to person by direct contact, by inhalation of aerosols from vesicular fluid of skin lesions of varicella or herpes zoster (HZ) (a localized, generally painful vesicular rash commonly called shingles), or by infected respiratory tract secretions that also might be aerosolized (266). The average incubation period is 14-16 days after exposure to rash (range: 10-21 days). Infected persons are contagious from an estimated 1-2 days before rash onset until all lesions are crusted, typically 4-7 days after rash onset (266). Varicella secondary attack rates can reach 90% among susceptible contacts. Typically, primary infection with VZV results in lifetime immunity. VZV remains dormant in sensory-nerve ganglia and can reactivate at a later time, causing HZ.

Before the U.S. childhood varicella vaccination program began in 1995, approximately 90% of varicella disease occurred among children aged <15 years. Implementation of the vaccination program has resulted in declines of >85% in varicella incidence, hospitalizations, and deaths (267-269). The decline in disease incidence was greatest among children for whom vaccination was recommended; however, declines occurred in every age group, including infants too young to be vaccinated and adults, indicating reduced communitywide transmission of VZV. Current incidence of varicella among adults is low (<0.1/1,000 population), and adult cases represent <10% of all reported varicella cases (270).
National seroprevalence data from 1999-2004 demonstrated that, in the early vaccine era, adults continued to have high immunity to varicella (271). In this study, 98% of persons aged 20-49 years had VZV-specific IgG antibodies. However, with declining likelihood of exposure to VZV, children and adolescents who did not receive 2 doses of varicella vaccine could remain susceptible to VZV infection as they age into adulthood, when varicella can be more severe.

The clinical presentation of varicella has changed since the implementation of the varicella vaccination program, with more than half of varicella cases reported in 2008 occurring among persons who were vaccinated previously, the majority of them children. Varicella disease in vaccinated children (breakthrough varicella) usually has a modified or atypical presentation; the rash is typically mild, with <50 lesions. Vaccinated children with <50 lesions have been found to be only one third as contagious as unvaccinated children, whereas vaccinated children with ≥50 lesions were as infectious as unvaccinated children (272). Because the majority of adults are immune and few need vaccination, fewer breakthrough cases have been reported among adults than among children, and breakthrough varicella in adults has tended to be milder than varicella in unvaccinated adults (273,274).

The epidemiology of varicella in tropical and subtropical regions differs from that in the United States. In these regions, a higher proportion of VZV infections are acquired later in life. Persons emigrating from these regions might be more likely to be susceptible to varicella compared with U.S.-born persons and, therefore, are at higher risk for developing varicella if unvaccinated and exposed (275,276).

# Disease in Health-Care Settings and Impact on Health-Care Personnel and Patients

Although relatively rare in the United States since introduction of varicella vaccine, nosocomial transmission of VZV is well recognized and can be life-threatening to certain patients (277-289). In addition to hospital settings, nosocomial VZV transmission has been reported in long-term-care facilities and a hospital-associated residential facility (290,291). Sources of nosocomial exposure that have resulted in transmission include patients, HCP, and visitors with either varicella or HZ. Both localized and disseminated HZ in immunocompetent as well as immunocompromised patients have been identified as sources of nosocomial transmission of VZV. Localized HZ has been demonstrated to be much less infectious than varicella; disseminated HZ is considered to be as infectious as varicella (266). Nosocomial transmission has been attributed to delays in the diagnosis or reporting of varicella or HZ and to failures to implement control measures promptly. In hospitals and other health-care settings, airborne transmission of VZV from patients with either varicella or HZ has resulted in varicella in HCP and patients who had no direct contact with the index case-patient (284-288,291).

Although all susceptible patients in health-care settings are at risk for severe varicella disease with complications, certain patients without evidence of immunity are at increased risk: pregnant women, premature infants born to susceptible mothers, infants born at <28 weeks' gestation or who weigh <1,000 grams regardless of maternal immune status, and immunocompromised persons of all ages (including persons who are undergoing immunosuppressive therapy, have malignant disease, or are immunodeficient).
VZV exposures among patients and HCP can be disruptive to patient care, time-consuming, and costly even when they do not result in VZV transmission (281,282,292). Studies of VZV exposure in health-care settings have documented that a single provider with unrecognized varicella can result in the exposure of >30 patients and >30 employees (292). Identification of susceptible patients and staff, medical management of susceptible exposed patients at risk for complications of varicella, and furloughing of susceptible exposed HCP are time-consuming and costly (281,282).

With the overall reduction in varicella disease attributable to the success of the vaccination program, the risk for exposure to VZV from varicella cases in health-care settings is likely declining. In addition, an increasing proportion of varicella cases occur in vaccinated persons, who are less contagious. Diagnosis of varicella has become increasingly challenging as a growing proportion of cases occur in vaccinated persons, in whom disease is mild, and as HCP encounter patients with varicella less frequently. Although not currently routinely recommended for the diagnosis and management of varicella, laboratory testing of suspected varicella cases is likely to become increasingly useful in health-care settings, especially as the positive predictive value of clinical diagnosis declines.

# Vaccine Effectiveness, Duration of Immunity, and Vaccine Safety

# Vaccine Effectiveness

Formal studies to evaluate vaccine efficacy or effectiveness have not been performed among adults. Studies of varicella vaccine effectiveness performed among children indicated good performance of 1 dose for prevention of all varicella (80%-85%) and >95% effectiveness for prevention of moderate and severe disease (266,293). Studies have indicated that a second dose among children produces an improved humoral and cellular immune response that correlates with improved protection against disease (266,294). Varicella vaccine effectiveness is expected to be lower in adults than in children. Adolescents and adults require 2 doses to achieve seroconversion rates similar to those seen in children after 1 dose (266). A study of adults who received 2 doses of varicella vaccine 4 or 8 weeks apart and were exposed subsequently to varicella in the household estimated an 80% reduction in the expected number of cases (295).

# Duration of Immunity

Serologic correlates of protection against varicella using commercially available assays have not been established for adults (266). In clinical studies, detectable antibody levels have persisted for at least 5 years in 97% of adolescents and adults who were administered 2 doses of varicella vaccine 4-8 weeks apart, but boosts in antibody levels were observed following exposures to varicella, which could account for the long-term persistence of antibodies after vaccination in these studies (295). Studies have demonstrated that whereas 25%-31% of adult vaccine recipients who seroconverted lost detectable antibodies 1-11 years after vaccination (273,296), vaccine-induced VZV-specific T-cell proliferation (a marker for cell-mediated immunity [CMI]) was maintained in 94% of adults 1 and 5 years postvaccination (297). Disease was mild in vaccinated persons who developed varicella after exposure to VZV, even among vaccinees who did not seroconvert or who lost detectable antibody (273,274). Severity of illness and attack rates among vaccinated adults did not increase over time.
These studies suggest that VZV-specific CMI affords protection to vaccinated adults, even in the absence of a detectable antibody response.

# Vaccine Safety

The varicella vaccine has an excellent safety profile. In clinical trials, the most common adverse events among adolescents and adults were injection-site complaints (24.4% after the first dose and 32.5% after the second dose) (266,295). Varicella-like rash at the injection site occurred in 3% of vaccine recipients after the first dose and in 1% after the second. A nonlocalized rash occurred in 5.5% of vaccine recipients after the first dose and in 0.9% after the second, with a median number of lesions of five, peaking at 7-21 and 0-23 days postvaccination, respectively (295). Data on serious adverse events among adults after varicella vaccination are limited, but the proportion of serious adverse events among all adverse events reported to the Vaccine Adverse Events Reporting System during 1995-2005 was low (5%) among both children and adults (298). Serious adverse events reported among children included pneumonia, hepatitis, HZ (some hospitalized), meningitis with HZ, ataxia, encephalitis, and thrombocytopenic purpura. Not all adverse events reported after varicella vaccination have been laboratory confirmed to be attributable to the vaccine-strain VZV (266,298).

Risk for transmission of vaccine virus was assessed in placebo recipients who were siblings of vaccinated children and among healthy siblings of vaccinated leukemic children (266). The findings suggest that transmission of varicella vaccine virus from healthy persons to susceptible contacts is very rare. The risk might be increased in vaccinees in whom a varicella-like rash develops after vaccination; however, this risk is also low. The benefits of vaccinating HCP without evidence of immunity outweigh this extremely low potential risk. Since implementation of the varicella vaccine program, transmission of vaccine virus has been documented from eight persons (all of whom had a rash after vaccination), resulting in nine secondary infections among household and long-term-care facility contacts (299). No transmission has been documented from vaccinated HCP.

# Recommendations

# Vaccination

Health-care institutions should ensure that all HCP have evidence of immunity to varicella. This information should be documented and readily available at the work location. HCP without evidence of immunity to varicella should receive 2 doses of varicella vaccine administered 4-8 weeks apart; if >8 weeks elapse after the first dose, the second dose may be administered without restarting the schedule (a scheduling sketch follows the list below). Recently vaccinated HCP do not require any restriction in their work activities; however, HCP who develop a vaccine-related rash after vaccination should avoid contact with persons without evidence of immunity to varicella who are at risk for severe disease and complications until all lesions resolve (i.e., are crusted over) or, if they develop lesions that do not crust (macules and papules only), until no new lesions appear within a 24-hour period.

Evidence of immunity for HCP includes any of the following (266):
- written documentation of vaccination with 2 doses of varicella vaccine,
- laboratory evidence of immunity§§§ or laboratory confirmation of disease,
- diagnosis or verification of a history of varicella disease by a health-care provider,¶¶¶ or
- diagnosis or verification of a history of HZ by a health-care provider.

§§§ Commercial assays can be used to assess disease-induced immunity, but they often lack sensitivity to detect vaccine-induced immunity (i.e., they might yield false-negative results).
¶¶¶ Verification of history or diagnosis of typical disease can be provided by any health-care provider (e.g., a school or occupational clinic nurse, nurse practitioner, physician assistant, or physician). For persons reporting a history of, or reporting with, atypical or mild cases, assessment by a physician or physician designee is recommended, and one of the following should be sought: 1) an epidemiologic link to a typical varicella case or to a laboratory-confirmed case or 2) evidence of laboratory confirmation if it was performed at the time of acute disease. When such documentation is lacking, persons should not be considered as having a valid history of disease because other diseases might mimic mild atypical varicella.
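The 2-dose schedule above reduces to a simple date rule. A minimal sketch (the helper function and dates are hypothetical illustrations, not part of the recommendations):

```python
from datetime import date, timedelta

def varicella_dose2_guidance(dose1: date, today: date) -> str:
    """Dose 2 is due 4-8 weeks after dose 1; if >8 weeks have elapsed,
    give dose 2 without restarting the 2-dose schedule."""
    if today - dose1 < timedelta(weeks=4):
        return f"too early; earliest dose-2 date is {dose1 + timedelta(weeks=4)}"
    return "administer dose 2 now (the schedule is never restarted)"

# Hypothetical dates: dose 1 on January 3, checked on April 18.
print(varicella_dose2_guidance(date(2011, 1, 3), date(2011, 4, 18)))
```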
In health-care settings, serologic screening before vaccination of personnel without evidence of immunity is likely to be cost effective. Key factors determining cost-effectiveness include sensitivity and specificity of serologic tests, the nosocomial transmission rate, seroprevalence of VZV antibody in the personnel population, and policies for managing vaccine recipients who develop a postvaccination rash or who are exposed subsequently to VZV. Institutions may elect to test all unvaccinated HCP, regardless of disease history, because a small proportion of persons with a positive history of disease might be susceptible. For the purpose of screening HCP, a less sensitive and more specific commercial ELISA should be considered. The latex agglutination test can produce false-positive results, and HCP who remained unvaccinated because of false test results have subsequently contracted varicella (289).

Routine testing for varicella immunity after 2 doses of vaccine is not recommended. Available commercial assays are not sensitive enough to detect antibody after vaccination in all instances. Sensitive tests that are not generally available have indicated that 92%-99% of adults develop antibodies after the second dose (266). Seroconversion does not always result in full protection against disease and, given the role of CMI in providing long-term protection, absence of antibodies does not necessarily mean susceptibility. Documented receipt of 2 doses of varicella vaccine supersedes results of subsequent serologic testing.

Health-care institutions should establish protocols and recommendations for screening and vaccinating HCP and for management of HCP after exposures in the workplace. Institutions also should consider precautions for HCP in whom rash occurs after vaccination, although they should also consider the possibility of wild-type disease in HCP with recent exposure to varicella or HZ.

A vaccine to prevent HZ is available and recommended for all persons aged ≥60 years without contraindications to vaccination. HZ vaccine is not indicated for HCP for the prevention of nosocomial transmission, but HCP aged ≥60 years may receive the vaccine on the basis of the general recommendation for HZ vaccination, to reduce their individual risk for HZ.

# Varicella Control Strategies

Appropriate measures should be implemented to manage cases and control outbreaks (300).

# Patient Care

Only HCP with evidence of immunity to varicella should care for patients who have confirmed or suspected varicella or HZ.
Airborne precautions (i.e., negative air-flow rooms) and contact precautions should be employed for all patients with varicella or disseminated HZ and for immunocompromised patients with localized HZ until disseminated infection is ruled out. These precautions should be kept in place until lesions are dry and crusted. If negative air-flow rooms are not available, patients should be isolated in closed rooms and should not have contact with persons without evidence of immunity to varicella. For immunocompetent persons with localized HZ, standard precautions and complete covering of the lesions are recommended.

# Postexposure Management of HCP and Patients

Exposure to VZV is defined as close contact with an infectious person, such as close indoor contact (e.g., in the same room) or face-to-face contact. Experts differ regarding the duration of contact: some suggest 5 minutes, and others up to 1 hour; all agree that it does not include transitory contact (301). All exposed, susceptible patients and HCP should be identified using the criteria for evidence of immunity. An additional criterion of evidence of immunity, only for patients who are not immunocompromised or pregnant, is birth in the United States before 1980. Postexposure prophylaxis with vaccination or varicella-zoster immune globulin, depending on immune status, of exposed HCP and patients without evidence of immunity is recommended (266).

HCP who have received 2 doses of vaccine and who are exposed to VZV (varicella, disseminated HZ, or uncovered lesions of localized HZ) should be monitored daily during days 8-21 after exposure for fever, skin lesions, and systemic symptoms suggestive of varicella. HCP can be monitored directly by occupational health program or infection-control practitioners or instructed to report fever, headache, or other constitutional symptoms and any atypical skin lesions immediately. HCP should be excluded from the work facility immediately if symptoms occur.

HCP who have received 1 dose of vaccine and who are exposed to VZV (varicella, disseminated HZ, or uncovered lesions of localized HZ) in the community or health-care setting/workplace should receive the second dose within 3-5 days after exposure to rash (provided 4 weeks have elapsed after the first dose). After vaccination, management is similar to that of 2-dose vaccine recipients. Those who did not receive a second dose or who received the second dose >5 days after exposure should be excluded from work during days 8-21 after exposure.

Unvaccinated HCP who have no other evidence of immunity and who are exposed to VZV (varicella, disseminated HZ, or uncovered lesions of localized HZ) are potentially infective from days 8-21 after exposure and should be furloughed during this period. They should receive postexposure vaccination as soon as possible. Vaccination within 3-5 days of exposure to rash might modify the disease if infection occurred. Vaccination >5 days postexposure is still indicated because it induces protection against subsequent exposures (if the current exposure did not cause infection).

For HCP at risk for severe disease for whom varicella vaccination is contraindicated (e.g., pregnant or immunocompromised HCP), varicella-zoster immune globulin after exposure is recommended. The varicella-zoster immune globulin product currently used in the United States, VariZIG (Cangene Corporation, Winnipeg, Canada), is available under an Investigational New Drug Application Expanded Access protocol; a sample release form is available at http://www.fda.gov/downloads/BiologicsBloodVaccines/SafetyAvailability/UCM176031.pdf.
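The postexposure steps above are keyed to the number of prior doses, with a common day 8-21 window of potential infectivity; as the next paragraph notes, varicella-zoster immune globulin can extend that window to day 28. A minimal decision sketch (names and dates are hypothetical, and the logic is a simplification that does not replace the full guidance):

```python
from datetime import date, timedelta

def varicella_exposure_plan(doses: int, exposure: date,
                            varizig_given: bool = False) -> str:
    """Postexposure handling keyed to prior dose count; the day 8-21 window
    of potential infectivity extends to day 28 if immune globulin was given."""
    end_day = 28 if varizig_given else 21
    start = exposure + timedelta(days=8)
    end = exposure + timedelta(days=end_day)
    if doses >= 2:
        return f"monitor daily {start} through {end}; exclude if symptoms develop"
    if doses == 1:
        return ("offer dose 2 within 3-5 days of exposure; if not received, "
                f"exclude from work {start} through {end}")
    return f"vaccinate as soon as possible and furlough {start} through {end}"

# Hypothetical case: unvaccinated HCP exposed May 2 who received VariZIG.
print(varicella_exposure_plan(0, date(2011, 5, 2), varizig_given=True))
```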
Varicella-zoster immune globulin might prolong the incubation period by a week, thus extending the time during which personnel should not work from 21 to 28 days. In case of an outbreak, HCP without evidence of immunity who have contraindications to vaccination should be excluded from the outbreak setting through 21 days after rash onset of the last identified case-patient because of the risk for severe disease in these groups.

If the VZV exposure was to localized HZ with covered lesions, no work restrictions are needed if the exposed HCP had previously received at least 1 dose of vaccine or received the first dose within 3-5 days postexposure. A second dose should be administered at the appropriate interval. These HCP should be monitored daily during days 8-21 after exposure for fever, skin lesions, and systemic symptoms suggestive of varicella and excluded from the work facility if symptoms occur. If at least 1 dose was not received, restriction from patient contact is recommended.

# Diseases for Which Vaccination Might Be Indicated in Certain Circumstances

Health-care facilities and other organizations should consider including in their vaccination programs vaccines to prevent meningococcal disease, typhoid fever, and polio for HCP who have certain health conditions or who work in laboratories or regions outside the United States where the risk for work-related exposure exists.

# Meningococcal Disease

# Background

# Epidemiology and Risk Factors

Meningococcal disease is rare among adults in the United States, and incidence has decreased to historic lows; during 1998-2007, the average annual incidence of meningococcal disease was 0.28 (range: 0.26-0.31) cases per 100,000 population among persons aged 25-64 years (302). Routine vaccination with meningococcal conjugate vaccine is recommended by ACIP for adolescents aged 11-18 years, with the primary dose at age 11-12 years and the booster dose at age 16 years. In 2010, coverage with meningococcal conjugate vaccine among persons aged 13-17 years was 62.7% (22).

Nosocomial transmission of Neisseria meningitidis is rare, but HCP have become infected after direct contact with respiratory secretions of infected persons (e.g., when managing an airway during resuscitation) and in laboratory settings. HCP can decrease the risk for infection by adhering to precautions to prevent exposure to respiratory droplets (303,304) and by taking antimicrobial chemoprophylaxis if exposed directly to respiratory secretions.

# Vaccine Effectiveness, Duration of Immunity, and Vaccine Safety

Two quadrivalent (A, C, W-135, Y) conjugate meningococcal vaccines (MCV4) are licensed for persons through age 55 years (305,306). Both protect against two of the three serogroups that cause the majority of meningococcal disease in the United States and against 75% of disease among adults. Available data indicate that the majority of persons do not have enough circulating functional antibody to be protected >5 years after a single dose of MCV4. Both vaccines had similar safety profiles in clinical trials. Quadrivalent (A, C, W-135, Y) meningococcal polysaccharide vaccine (MPSV4) is available for use in persons aged >55 years. No vaccine against serogroup B meningococcal disease is licensed in the United States.
# Recommendations

# Vaccination

MCV4 is not recommended routinely for all HCP.

# HCP Recommended to Receive Vaccine to Prevent Meningococcal Disease

A 2-dose vaccine series is recommended for HCP with known asplenia or persistent complement component deficiencies, because these conditions increase the risk for meningococcal disease. HCP traveling to countries in which meningococcal disease is hyperendemic or epidemic also are at increased risk for infection and should receive vaccine. Those with known asplenia or persistent complement component deficiencies should receive a 2-dose vaccine series. All other HCP traveling to work in high-risk areas should receive a single dose of MCV4 before travel if they have never received it or if they received it >5 years previously. Clinical microbiologists and research microbiologists who might be exposed routinely to isolates of N. meningitidis should receive a single dose of MCV4 and a booster dose every 5 years if they remain at increased risk. Health-care personnel aged >55 years who have any of the above risk factors for meningococcal disease should be vaccinated with MPSV4 (305).

# HCP Who May Elect to Receive Vaccine to Prevent Meningococcal Disease

HCP with known HIV infection are likely at increased risk for meningococcal disease and may elect vaccination. If these HCP are vaccinated, they should receive a 2-dose vaccine series (307).

# Booster Doses

HCP who receive the 2-dose MCV4 vaccine series and/or remain in a group at increased risk should receive a booster dose every 5 years (306).

# Postexposure Management of Exposed HCP

Postexposure prophylaxis is advised for all persons who have had intensive, unprotected contact (i.e., without wearing a mask) with infected patients (e.g., via mouth-to-mouth resuscitation, endotracheal intubation, or endotracheal tube management), including HCP who have been vaccinated with either the conjugate or polysaccharide vaccine (3). Antimicrobial prophylaxis can eradicate carriage of N. meningitidis and prevent infections in persons who have unprotected exposure to patients with meningococcal infections (305). Rifampin, ciprofloxacin, and ceftriaxone are effective in eradicating nasopharyngeal carriage of N. meningitidis. In areas of the United States where ciprofloxacin-resistant strains of N. meningitidis have been detected (as of August 30, 2011, only parts of Minnesota and North Dakota), ciprofloxacin should not be used for chemoprophylaxis (308). Azithromycin can be used as an alternative. Ceftriaxone can be used during pregnancy. Postexposure prophylaxis should be administered within 24 hours of exposure when feasible; postexposure prophylaxis administered >14 days after exposure is of limited or no value (305). HCP not otherwise indicated for vaccination may be recommended to receive meningococcal vaccine in the setting of a community or institutional outbreak of meningococcal disease caused by a serogroup contained in the vaccine.
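The meningococcal indications above reduce to a small decision rule. The Python sketch below is an illustrative summary only (function and parameter names are invented) and does not replace the vaccine-specific ACIP recommendations.

```python
def meningococcal_plan(age: int, asplenia_or_complement_deficiency: bool,
                       hiv_infection: bool, travel_to_hyperendemic_area: bool,
                       routine_exposure_to_isolates: bool) -> str:
    """Illustrative summary of the indications above (not a clinical tool)."""
    at_risk = any([asplenia_or_complement_deficiency, hiv_infection,
                   travel_to_hyperendemic_area, routine_exposure_to_isolates])
    if not at_risk:
        return "not routinely recommended"
    if age > 55:
        return "MPSV4"  # polysaccharide vaccine for persons aged >55 years
    if asplenia_or_complement_deficiency or hiv_infection:
        # HIV-infected HCP may elect vaccination; if vaccinated, 2 doses.
        return "2-dose MCV4 series; booster every 5 years while at risk"
    return "single dose of MCV4; booster every 5 years while at risk"
```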
# Typhoid Fever

# Background

# Epidemiology and Risk Factors

The incidence of typhoid fever declined steadily in the United States during 1900-1960 and has since remained low. During 1999-2006, on average, 237 cases were reported annually to the National Typhoid and Paratyphoid Fever Surveillance System (309). The median age of patients was 22 years, and 54% were male; 79% reported foreign travel during the 30 days before onset of symptoms. Among international travelers, the risk for Salmonella Typhi infection appears to be highest for those who visit friends and relatives in countries in which typhoid fever is endemic and for those who visit (even for a short time) the most highly endemic areas (e.g., the Indian subcontinent) (310).

Increasing resistance to fluoroquinolones such as ciprofloxacin, which are used to treat multidrug-resistant S. Typhi, has been seen, particularly among travelers to south and southeast Asia (311). Isolates with decreased susceptibility to ciprofloxacin (DCS) do not qualify as resistant according to current Clinical and Laboratory Standards Institute criteria but are associated with poorer clinical outcomes (311,312). Resistance to nalidixic acid, a quinolone, is a marker for DCS and increased from 19% in 1999 to 59% in 2008 (313). Nine isolates resistant to ciprofloxacin also were seen during this period (313). Although overall S. Typhi infections have declined in the United States, increased incidence and antimicrobial resistance, including resistance to fluoroquinolones, have been seen for paratyphoid fever caused by Paratyphi A (314). No vaccines that protect against Paratyphi A infection are available.

# Transmission and Exposure in Health-Care Settings

During 1985-1994, seven cases of laboratory-acquired typhoid fever were reported among persons working in microbiology laboratories, only one of whom had been vaccinated (315). Additionally, S. Typhi might be transmitted nosocomially via the hands of infected persons (315).

# Vaccine Effectiveness, Duration of Immunity, and Vaccine Safety

Two typhoid vaccines are distributed in the United States: oral live-attenuated Ty21a vaccine (one enteric-coated capsule taken on alternate days, for a total of four capsules) and the capsular polysaccharide parenteral vaccine (a single 0.5-mL intramuscular dose). Both vaccines protect 50%-80% of recipients. To maintain immunity, booster doses of the oral vaccine are required every 5 years, and booster doses of the injected vaccine are required every 2 years. Complication rates are low for both types of S. Typhi vaccines. During 1994-1999, serious adverse events requiring hospitalization occurred in an estimated 0.47 to 1.3 per 100,000 doses, and no deaths occurred (310). However, live-attenuated Ty21a vaccine should not be used among immunocompromised persons, including those infected with HIV (316). Theoretic concerns have been raised about the immunogenicity of live, attenuated Ty21a vaccine in persons concurrently receiving antimicrobials (including antimalarial chemoprophylaxis), viral vaccines, or immune globulin (317). A third type of vaccine, a parenteral heat-inactivated vaccine associated with higher reactogenicity, was discontinued in 2000 (310,318).

# Vaccination

Microbiologists and others who work frequently with S. Typhi should be vaccinated with either of the two licensed and available vaccines. Booster vaccinations should be administered on schedule according to the manufacturers' recommendations.

# Controlling the Spread of Typhoid Fever

Personal hygiene, particularly hand hygiene before and after all patient contacts, will minimize the risk for transmitting enteric pathogens to patients. However, HCP who contract an acute diarrheal illness accompanied by fever, cramps, or bloody stools are likely to excrete substantial numbers of infective organisms in their feces. Excluding these HCP from care of patients until the illness has been evaluated and treated can prevent transmission (3).
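Because the two products have different booster intervals, a reminder system needs only the product name and the date of the last dose. The following sketch is illustrative (names are invented, and years are approximated as 365 days).

```python
from datetime import date, timedelta

# Booster intervals described above (illustrative constants).
BOOSTER_INTERVAL_YEARS = {"oral Ty21a": 5, "parenteral ViCPS": 2}

def typhoid_booster_due(vaccine: str, last_dose: date) -> date:
    """Approximate date the next booster is due (365-day years)."""
    return last_dose + timedelta(days=365 * BOOSTER_INTERVAL_YEARS[vaccine])

print(typhoid_booster_due("parenteral ViCPS", date(2011, 6, 1)))  # -> 2013-05-31
```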
# Poliomyelitis

# Background

# Epidemiology and Risk Factors

In the United States, the last indigenously acquired cases of poliomyelitis caused by wild poliovirus occurred in 1979, and the Americas were certified to be free of indigenous wild poliovirus in 1994 (319,320). With the complete transition from use of oral poliovirus vaccine (OPV) to inactivated poliovirus vaccine (IPV) in 2000, vaccine-associated paralytic poliomyelitis (VAPP) attributable to OPV also has been eliminated (321,322).

# Transmission and Exposure in Health-Care Settings

Poliovirus can be recovered from infected persons, including from pharyngeal specimens, feces, urine, and (rarely) cerebrospinal fluid. HCP and laboratory workers might be exposed if they come into close contact with infected persons (e.g., travelers returning from areas where polio is endemic) or with specimens that contain poliovirus.

# Vaccine Effectiveness, Duration of Immunity, and Vaccine Safety

Both IPV and OPV are highly immunogenic and effective when administered according to their schedules. In studies conducted in the United States, 3 doses of IPV resulted in 100% seroconversion for types 2 and 3 poliovirus and 96%-100% for type 1 (326). Immunity is prolonged and might be lifelong. IPV is well tolerated, and no serious adverse events have been associated with its use. IPV is an inactivated vaccine and does not cause VAPP. IPV is contraindicated in persons with a history of hypersensitivity to any component of the vaccine, including 2-phenoxyethanol, formaldehyde, neomycin, streptomycin, and polymyxin B. OPV is no longer available in the United States.

# Recommendations

# Vaccination

Because the majority of adults born in the United States are likely immune to polio as a result of vaccination during childhood, poliovirus vaccine is not routinely recommended for persons aged >18 years. The childhood recommendation for poliovirus vaccine consists of 4 doses, at ages 2, 4, and 6-18 months and 4-6 years. However, vaccination is recommended for HCP who are at greater risk for exposure to polioviruses than the general population, including laboratory workers who handle specimens that might contain polioviruses and HCP who have close contact with patients who might be excreting wild polioviruses, including HCP who travel to work in areas where polioviruses are circulating. Unvaccinated HCP should receive a 3-dose series of IPV, with dose 2 administered 4-8 weeks after dose 1 and dose 3 administered 6-12 months after dose 2. HCP who have previously completed a routine series of poliovirus vaccine and who are at increased risk can receive a lifetime booster dose of IPV if they remain at increased risk for exposure. Available data do not indicate the need for more than a single lifetime booster dose with IPV for adults.

# Controlling the Spread of Poliovirus

Standard precautions always should be practiced when handling biologic specimens. Suspect cases require an immediate investigation, including collection of appropriate laboratory specimens and control measures. All suspect or confirmed cases should be reported immediately to the local or state health department.
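The IPV catch-up intervals above can be expressed as date offsets. The sketch below is illustrative only (names are invented, and the 6-month interval is approximated as 180 days); it uses the minimum intervals.

```python
from datetime import date, timedelta

def ipv_catchup_earliest(dose1: date):
    """Earliest dates for doses 2 and 3 of a 3-dose IPV series for an
    unvaccinated HCP, using the minimum intervals above (dose 2: 4 weeks
    after dose 1; dose 3: ~6 months, here 180 days, after dose 2)."""
    dose2 = dose1 + timedelta(weeks=4)
    dose3 = dose2 + timedelta(days=180)
    return dose2, dose3
```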
# Other Vaccines Recommended for Adults

Certain vaccines are recommended for adults on the basis of age or other individual risk factors but not because of occupational exposure (327). Vaccine-specific ACIP recommendations should be consulted for details on schedules, indications, contraindications, and precautions for these vaccines. HCP and other workers have not been demonstrated to be at increased risk for hepatitis A virus infection because of occupational exposure, including persons exposed to sewage. Hepatitis A vaccine is recommended for persons with chronic liver disease, international travelers, and certain other groups at increased risk for exposure to hepatitis A.

# Catch-Up and Travel Vaccination

# Catch-Up Programs

Managers of health-care facilities should implement catch-up vaccination programs for HCP who already are employed, in addition to developing policies for achieving high vaccination coverage among newly hired HCP. HCP vaccination records could be reviewed annually during the influenza vaccination season or concurrent with annual TB testing. This strategy could help prevent outbreaks of vaccine-preventable diseases. Because education, especially when combined with other interventions such as reminder/recall systems and low or no out-of-pocket costs, enhances the success of many vaccination programs, informational materials should be available to assist in answering questions from HCP regarding the diseases, vaccines, and toxoids as well as the program or policy being implemented (120,328). Conducting educational workshops or seminars several weeks before the initiation of a catch-up vaccination program might promote acceptance of program goals.

# Travel

Hospital personnel and other HCP who perform research or health-care work in foreign countries might be at increased risk for acquiring certain diseases that can be prevented by vaccines recommended in the United States (e.g., hepatitis B, influenza, MMR, Tdap, poliovirus, varicella, and meningococcal vaccines) and by travel-related vaccines (e.g., hepatitis A, Japanese encephalitis, rabies, typhoid, or yellow fever vaccines) (329). Elevated risks for acquiring these diseases might stem from exposure to patients in health-care settings (e.g., poliomyelitis and meningococcal disease) but also might arise from circumstances unrelated to patient care (e.g., high endemicity of hepatitis A or exposure to arthropod-vector diseases). All HCP should seek the advice of a health-care provider familiar with travel medicine at least 4-6 weeks before travel to ensure that they are up to date on routine vaccinations and that they receive vaccinations recommended for their destination (329). Although bacille Calmette-Guérin vaccination is not recommended routinely in the United States, HCP should discuss the potential benefits and other consequences of this vaccination with their health-care provider.

# Work Restrictions

Work restrictions for susceptible HCP (i.e., those with no history of vaccination or documented lack of immunity) exposed to or infected with certain vaccine-preventable diseases can range from restricting individual HCP from patient contact to complete exclusion from duty (Table 5). A furloughed employee should be considered in the same category as an employee excluded from the facility. Specific recommendations concerning work restrictions in these circumstances have been published previously (3,11).

The risk for rubella vaccine-associated malformations in the offspring of women who are pregnant when vaccinated or who become pregnant within 1 month after vaccination is negligible. Such women should be counseled regarding the theoretical basis of concern for the fetus. Commercial assays can be used to assess disease-induced immunity, but they often lack sensitivity to detect vaccine-induced immunity (i.e., they might yield false-negative results).
§§§ Verification of history or diagnosis of typical disease can be provided by any health-care provider (e.g., school or occupational clinic nurse, nurse practitioner, physician assistant, or physician). For persons reporting a history of, or reporting with, atypical or mild cases, assessment by a physician or their designee is recommended, and one of the following should be sought: 1) an epidemiologic link to a typical varicella case or to a laboratory-confirmed case or 2) evidence of laboratory confirmation, if it was performed at the time of acute disease. When such documentation is lacking, persons should not be considered as having a valid history of disease, because other diseases might mimic mild atypical varicella.
¶¶¶ For example, immunocompromised patients or pregnant women. Those who develop acute respiratory symptoms without fever should be considered for evaluation by occupational health to determine the appropriateness of contact with patients and can be allowed to work unless caring for patients in a protective environment; these personnel should be considered for temporary reassignment or exclusion from work for 7 days from symptom onset or until the resolution of all noncough symptoms, whichever is longer. If symptoms such as cough and sneezing are still present, HCP should wear a facemask during patient care activities. The importance of performing frequent hand hygiene (especially before and after each patient contact) should be reinforced.
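The exclusion period in the footnote above is simply the later of two dates. A minimal sketch, with invented names, follows.

```python
from datetime import date, timedelta

def respiratory_exclusion_end(symptom_onset: date, noncough_resolved: date) -> date:
    """Return the later of: 7 days from symptom onset, or the date all
    noncough symptoms resolved (illustrative sketch of the footnote rule)."""
    return max(symptom_onset + timedelta(days=7), noncough_resolved)
```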
This report updates the previously published summary of recommendations for vaccinating health-care personnel (HCP) in the United States (CDC. Immunization of health-care workers: recommendations of the Advisory Committee on Immunization Practices [ACIP] and the Hospital Infection Control Practices Advisory Committee [HICPAC]. MMWR 1997;46[No. RR-18]). This report was reviewed by and includes input from the Healthcare (formerly Hospital) Infection Control Practices Advisory Committee. These updated recommendations can assist hospital administrators, infection-control practitioners, employee health clinicians, and HCP in optimizing infection prevention and control programs. The recommendations for vaccinating HCP are presented by disease in two categories: 1) those diseases for which vaccination or documentation of immunity is recommended because of risks to HCP in their work settings for acquiring disease or transmitting it to patients and 2) those for which vaccination might be indicated in certain circumstances. Background information for each vaccine-preventable disease and specific recommendations for use of each vaccine are presented. Certain infection-control measures that relate to vaccination also are included in this report. In addition, ACIP recommendations for the remaining vaccines that are recommended for certain or all adults are summarized, as are considerations for catch-up and travel vaccinations and for work restrictions. This report summarizes all current ACIP recommendations for vaccination of HCP and does not contain any new recommendations or policies.

# Immunization of Health-Care Personnel

# Recommendations of the Advisory Committee on Immunization Practices (ACIP)

# Introduction

This report updates the previously published summary of recommendations of the Advisory Committee on Immunization Practices (ACIP) and the Healthcare (formerly Hospital) Infection Control Practices Advisory Committee (HICPAC) for vaccinating health-care personnel (HCP) in the United States (1). The report, which was reviewed by and includes input from HICPAC, summarizes all current ACIP recommendations for vaccination of HCP and does not contain any new recommendations or policies that have not been published previously. These recommendations can assist hospital administrators, infection-control practitioners, employee health clinicians, and HCP in optimizing infection prevention and control programs.

HCP are defined as all paid and unpaid persons working in health-care settings who have the potential for exposure to patients and/or to infectious materials, including body substances, contaminated medical supplies and equipment, contaminated environmental surfaces, or contaminated air. HCP might include (but are not limited to) physicians, nurses, nursing assistants, therapists, technicians, emergency medical service personnel, dental personnel, pharmacists, laboratory personnel, autopsy personnel, students and trainees, contractual staff not employed by the health-care facility, and persons (e.g., clerical, dietary, housekeeping, laundry, security, maintenance, administrative, billing, and volunteer staff) not directly involved in patient care but potentially exposed to infectious agents that can be transmitted to and from HCP and patients (2). Because of their contact with patients or infective material from patients, many HCP are at risk for exposure to (and possible transmission of) vaccine-preventable diseases.
Employers and health-care facilities should maintain vaccination records for HCP so records can be retrieved easily as needed (3). Each record should reflect immunity status for indicated vaccine-preventable diseases (i.e., documented disease, vaccination history, or serology results) as well as vaccinations administered during employment and any documented episodes of adverse events after vaccination (4). For each vaccine, the record should include the date of vaccine administration (including for those vaccines that might have been received prior to employment), vaccine manufacturer and lot number, edition and distribution date of the language-appropriate Vaccine Information Statement (VIS) provided to the vaccinee at the time of vaccination, and the name, address, and title of the person administering the vaccine (4). Accurate vaccination records can help to rapidly identify susceptible HCP (i.e., those with no history of vaccination or lack of documentation of immunity) during an outbreak situation and can help reduce costs and disruptions to health-care operations (5-7). HCP should be provided a copy of their vaccination records and encouraged to keep it with their personal health records so the records can readily be made available to future employers.
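The record fields listed above map naturally onto a simple structured record. The following Python dataclass is an illustrative sketch only; the field names are invented and do not reflect any standard schema.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class VaccineDoseRecord:
    """One vaccine dose in an HCP vaccination record, capturing the fields
    described above (field names are illustrative, not a standard schema)."""
    vaccine: str              # e.g., "hepatitis B"
    date_administered: date   # including doses received before employment
    manufacturer: str
    lot_number: str
    vis_edition_date: date    # edition/distribution date of the VIS provided
    administered_by: str      # name, address, and title of the administrator
    adverse_events: str = ""  # documented postvaccination adverse events, if any
```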
HICPAC has encouraged any facility or organization that provides direct patient care to formulate a comprehensive vaccination policy for all HCP (3). The American Hospital Association has endorsed the concept of vaccination programs for both hospital personnel and patients (8). To ensure that all HCP are up to date with respect to recommended vaccines, facilities should review HCP vaccination and immunity status at the time of hire and on a regular basis (i.e., at least annually), with consideration of offering needed vaccines, if necessary, in conjunction with routine annual disease-prevention measures (e.g., influenza vaccination or tuberculin testing). These recommendations (Tables 2 and 3) should be considered during policy development. Several states and health-care facilities have established requirements relating to assessment of vaccination status and/or administration of one or more vaccines for HCP (9,10). Disease-specific outbreak control measures are described in this report and elsewhere (3,11,12). All HCP should adhere to all other recommended infection-control guidelines, whether or not they are individually determined to have immunity to a vaccine-preventable disease.

# Methods

In 2008, the ACIP Immunization of Health-Care Personnel Work Group (the Work Group) was formed as a subgroup of the ACIP Adult Immunization Work Group to update the previously published recommendations for immunization of HCP. The Work Group comprised professionals from academic medicine (pediatrics, family medicine, internal medicine, occupational and environmental medicine, and infectious disease); federal and state public health professionals; and liaisons from the Society for Healthcare Epidemiology of America and HICPAC. The Work Group met monthly, developed an outline for the report, worked closely with subject matter experts at CDC (who developed, revised, and updated sections of the report), and provided subsequent critical review of the draft documents. The approach of the Work Group was to summarize previously published ACIP recommendations and not to make new recommendations or policies; a comprehensive list of publications containing the various vaccine-specific recommendations is provided (Table 1). In February 2011, the updated report was presented to ACIP, which voted to approve it.

The recommendations for vaccination of HCP are presented below by disease in two categories: 1) those diseases for which routine vaccination or documentation of immunity is recommended for HCP because of risks to HCP in their work settings and, should HCP become infected, to the patients they serve and 2) those diseases for which vaccination of HCP might be indicated in certain circumstances. Vaccines recommended in the first category are hepatitis B, seasonal influenza, measles, mumps, and rubella, pertussis, and varicella vaccines. Vaccines in the second category are meningococcal, typhoid, and polio vaccines. Except for influenza, all of the diseases prevented by these vaccines are notifiable at the national level (13). Main changes from the 1997 ACIP recommendations have been summarized (Box).

# Diseases for Which Vaccination Is Recommended

On the basis of documented nosocomial transmission, HCP are considered to be at substantial risk for acquiring or transmitting hepatitis B, influenza, measles, mumps, rubella, pertussis, and varicella. Current recommendations for vaccination are provided below.

# Hepatitis B

# Background

# Epidemiology and Risk Factors

Hepatitis B is an infection caused by the hepatitis B virus (HBV), which is transmitted through percutaneous (i.e., breaks in the skin) or mucosal (i.e., direct contact with mucous membranes) exposure to infectious blood or body fluids. The virus is highly infectious; for nonimmune persons, disease transmission from a needlestick exposure is up to 100 times more likely for exposure to hepatitis B e antigen (HBeAg)-positive blood than to HIV-positive blood (14). HBV infection is a well-recognized occupational risk for HCP in the United States and globally. The risk for HBV infection is associated with the degree of contact with blood in the workplace and with the hepatitis B e-antigen status of the source persons (15). The virus is also environmentally stable, remaining infectious on environmental surfaces for at least 7 days (16).

In 2009 in the United States, 3,371 cases of acute HBV infection were reported nationally, and an estimated 38,000 new cases of HBV infection occurred after accounting for underreporting and underdiagnosis (17). Of 4,519 persons reported with acute HBV infection in 2007, approximately 40% were hospitalized and 1.5% died (18). HBV can lead to chronic infection, which can result in cirrhosis of the liver, liver failure, liver cancer, and death. An estimated 800,000-1.4 million persons in the United States are living with chronic HBV infection; these persons serve as the main reservoir for continued HBV transmission (19).

Vaccines to prevent hepatitis B became available in the United States in 1981; a decade later, a national strategy to eliminate HBV infection was implemented, and the routine vaccination of children was recommended (20). During 1990-2009, the rate of new HBV infections declined approximately 84%, from 8.5 to 1.1 cases per 100,000 population (17); the decline was greatest (98%) among persons aged <19 years, for whom recommendations for routine infant and adolescent vaccination have been applied.
Although hepatitis B vaccine coverage is high in infants, children, and adolescents (91.8% in infants aged 19-35 months and 91.6% in adolescents aged 13-17 years) (21,22), coverage remains lower (41.8% in 2009) for certain adult populations, including those with behavioral risks for HBV infection (e.g., men who have sex with men and persons who use injection drugs) (23).

# Hepatitis B in Health-Care Settings

During 1982, when hepatitis B vaccine was first recommended for HCP, an estimated 10,000 infections occurred among persons employed in a medical or dental field. By 2004, the number of HBV infections among HCP had decreased to an estimated 304 infections, largely resulting from the implementation of routine preexposure vaccination and improved infection-control precautions (24-26). The risk for acquiring HBV infection from occupational exposures depends on the frequency of percutaneous and mucosal exposures to blood or body fluids (e.g., semen, saliva, and wound exudates) containing HBV, particularly fluids containing HBeAg (a marker for high HBV replication and viral load) (27-31). The risk is higher during the professional training period and can vary throughout a person's career (1). Depending on the tasks performed, health-care or public safety personnel might be at risk for HBV exposure; in addition, personnel providing care and assistance to persons in outpatient settings and those residing in long-term-care facilities (e.g., assisted living) might be at risk for acquiring or facilitating transmission of HBV infection when they perform procedures that expose them to blood (e.g., assisted blood-glucose monitoring and wound care) (32-34).

BOX. Summary of main changes* from 1997 Advisory Committee on Immunization Practices/Hospital (now Healthcare) Infection Control Practices Advisory Committee recommendations for immunization of health-care personnel (HCP)

# Hepatitis B
• HCP and trainees in certain populations at high risk for chronic hepatitis B (e.g., those born in countries with high and intermediate endemicity) should be tested for HBsAg and anti-HBc/anti-HBs to determine infection status.

# Influenza
• Emphasis that all HCP, not just those with direct patient care duties, should receive an annual influenza vaccination.
• Comprehensive programs to increase vaccine coverage among HCP are needed; influenza vaccination rates among HCP within facilities should be measured and reported regularly.

# Measles, mumps, and rubella (MMR)
• History of disease is no longer considered adequate presumptive evidence of measles or mumps immunity for HCP; laboratory confirmation of disease was added as acceptable presumptive evidence of immunity. History of disease has never been considered adequate evidence of immunity for rubella.
• The footnotes have been changed regarding the recommendations for personnel born before 1957 in routine and outbreak contexts. Specifically, guidance is provided for 2 doses of MMR for measles and mumps protection and 1 dose of MMR for rubella protection.

# Pertussis
• HCP, regardless of age, should receive a single dose of Tdap as soon as feasible if they have not previously received Tdap.
• The minimal interval was removed, and Tdap can now be administered regardless of the interval since the last tetanus- or diphtheria-containing vaccine.
• Hospitals and ambulatory-care facilities should provide Tdap for HCP and use approaches that maximize vaccination rates.

# Varicella
Criteria for evidence of immunity to varicella were established. For HCP they include
• written documentation of 2 doses of vaccine,
• laboratory evidence of immunity or laboratory confirmation of disease,
• diagnosis of history of varicella disease by a health-care provider, or
• diagnosis of history of herpes zoster by a health-care provider.

# Meningococcal
• HCP with anatomic or functional asplenia or persistent complement component deficiencies should now receive a 2-dose series of meningococcal conjugate vaccine. HCP with HIV infection who are vaccinated should also receive a 2-dose series.
• HCP who remain in groups at high risk are recommended to be revaccinated every 5 years.

Abbreviations: HBsAg = hepatitis B surface antigen; anti-HBc = hepatitis B core antibody; anti-HBs = hepatitis B surface antibody; Tdap = tetanus toxoid, reduced diphtheria toxoid, and acellular pertussis vaccine; HIV = human immunodeficiency virus.

A Federal Standard issued in December 1991 under the Occupational Safety and Health Act mandates that hepatitis B vaccine be made available at the employer's expense to all health-care personnel who are exposed occupationally to blood or other potentially infectious materials (35). The Federal Standard defines occupational exposure as reasonably anticipated skin, eye, mucous-membrane, or parenteral contact with blood or other potentially infectious materials that might result from the performance of an employee's duties (35).

# Vaccine Effectiveness

The 3-dose vaccine series administered intramuscularly at 0, 1, and 6 months produces a protective antibody response in approximately 30%-55% of healthy adults aged <40 years after the first dose, 75% after the second dose, and >90% after the third dose (40-42). After age 40 years, <90% of persons vaccinated with 3 doses have a protective antibody response, and by age 60 years, protective levels of antibody develop in approximately 75% of vaccinated persons (43). Smoking, obesity, genetic factors, and immune suppression also are associated with diminished immune response to hepatitis B vaccination (43-46).

# Duration of Immunity

Protection against symptomatic and chronic HBV infection has been documented to persist for >22 years in vaccine responders (47). Immunocompetent persons who achieve hepatitis B surface antibody (anti-HBs) concentrations of >10 mIU/mL after preexposure vaccination have protection against both acute disease and chronic infection. Anti-HBs levels decline over time. Regardless, responders continue to be protected, and the majority of responders will show an anamnestic response to vaccine challenge (47-51). Declines might be somewhat faster among persons vaccinated as infants rather than as older children, adolescents, or adults and among those administered recombinant vaccine instead of plasma vaccine (which has not been commercially available in the United States since the late 1980s). Although immunogenicity is lower among immunocompromised persons, those who achieve and maintain a protective antibody response before exposure to HBV have a high level of protection from infection (52).

Among persons who do not respond to a primary 3-dose vaccine series (i.e., those in whom anti-HBs concentrations of >10 mIU/mL were not achieved), 25%-50% respond to an additional vaccine dose, and 44%-100% respond to a 3-dose revaccination series using standard- or high-dosage vaccine (43,53-58).
Persons who have measurable but low (i.e., 1-9 mIU/mL) levels of anti-HBs after the initial series have a better response to revaccination than persons who have no anti-HBs (49,53,54). Persons who do not have protective levels of anti-HBs 1-2 months after revaccination either are infected with HBV or can be considered primary nonresponders; for the latter group, genetic factors might be associated with nonresponse to hepatitis B vaccination (54,58,59). ACIP does not recommend more than two vaccine series in nonresponders (52).

# Vaccine Safety

Hepatitis B vaccines have been demonstrated to be safe when administered to infants, children, adolescents, and adults (52,60,61). Although rare cases of arthritis or alopecia have been associated temporally with hepatitis B vaccination, recent data do not support a causal relationship between hepatitis B vaccine and either arthritis or alopecia (61-63). During 1982-2004, an estimated 70 million adolescents and adults and 50 million infants and children in the United States received >1 dose of hepatitis B vaccine (52). The most frequently reported side effects in persons receiving hepatitis B vaccine are pain at the injection site (3%-29%) and temperature of >99.9°F (>37.7°C) (1%-6%) (64-67). However, in placebo-controlled studies, these side effects were reported no more frequently among persons receiving hepatitis B vaccine than among persons receiving placebo (40,41,64-67). Revaccination is not associated with an increase in adverse events.

Hepatitis B vaccination is contraindicated for persons with a history of hypersensitivity to yeast or any vaccine component (4,64-66). Persons with a history of serious adverse events (e.g., anaphylaxis) after receipt of hepatitis B vaccine should not receive additional doses. As with other vaccines, vaccination of persons with moderate or severe acute illness, with or without fever, should be deferred until the illness resolves (4). Vaccination is not contraindicated in persons with a history of multiple sclerosis, Guillain-Barré syndrome, autoimmune disease (e.g., systemic lupus erythematosus and rheumatoid arthritis), or other chronic diseases. Pregnancy is not a contraindication to vaccination; limited data suggest that developing fetuses are not at risk for adverse events when hepatitis B vaccine is administered to pregnant women (4,68). Available vaccines contain noninfectious hepatitis B surface antigen (HBsAg) and do not pose any risk for infection to the fetus.

# Postexposure Prophylaxis

The need for postexposure prophylaxis should be evaluated immediately after HCP experience any percutaneous, ocular, mucous-membrane, or nonintact-skin exposure to blood or body fluid in the workplace. Decisions to administer postexposure prophylaxis should be based on the HBsAg status of the source and the vaccination history and vaccine-response status of the exposed HCP (Table 4) (72).

# Unvaccinated and Incompletely Vaccinated HCP and Trainees

• Unvaccinated or incompletely vaccinated persons who experience a workplace exposure from persons known to be HBsAg-positive should receive 1 dose of hepatitis B immune globulin (HBIG) (i.e., passive vaccination) as soon as possible after exposure (preferably within 24 hours). The effectiveness of HBIG when administered >7 days after percutaneous or permucosal exposures is unknown (Table 4).
• Hepatitis B vaccine should be administered in the deltoid muscle as soon as possible after exposure; HBIG should be administered at the same time at another injection site. The 3-dose hepatitis B vaccine series should be completed for previously unvaccinated and incompletely vaccinated persons who have needlestick or other percutaneous exposures, regardless of the HBsAg status of the source and whether the status of the source is known. To document protective levels of anti-HBs (>10 mIU/mL), postvaccination testing of persons who received HBIG for postexposure prophylaxis should be performed after anti-HBs from HBIG is no longer detectable (4-6 months after administration).

# Vaccinated HCP and Trainees

• Vaccinated HCP with documented immunity (anti-HBs concentrations of >10 mIU/mL) require no postexposure prophylaxis, serologic testing, or additional vaccination.
• Vaccinated HCP with documented nonresponse to a 3-dose vaccine series should receive 1 dose of HBIG and a second 3-dose vaccine series if the source is HBsAg-positive or known to be at high risk for carrying hepatitis B. If the source is known or determined to be HBsAg-negative, these previously nonresponding HCP should complete the revaccination series and undergo postvaccination testing to ensure that their response status is documented (Table 4).
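The postexposure decisions summarized above and in Table 4 can be sketched as a small decision function. This Python fragment is illustrative only (names are invented); actual postexposure management requires occupational-health evaluation.

```python
def hbv_postexposure_plan(vaccinated: bool, documented_responder: bool,
                          documented_nonresponder: bool,
                          source_hbsag_positive: bool) -> str:
    """Illustrative sketch of the decisions above and in Table 4 (names
    invented; not a substitute for occupational-health review). For
    documented nonresponders, a source known to be at high risk for
    carrying hepatitis B is handled like an HBsAg-positive source."""
    if vaccinated and documented_responder:
        # Documented anti-HBs >=10 mIU/mL: no prophylaxis or testing needed.
        return "no action"
    if vaccinated and documented_nonresponder:
        if source_hbsag_positive:
            return "1 dose of HBIG plus a second 3-dose vaccine series"
        return "complete the revaccination series; postvaccination testing"
    if not vaccinated:
        plan = ["complete the 3-dose vaccine series"]
        if source_hbsag_positive:
            plan.insert(0, "1 dose of HBIG, preferably within 24 hours")
        return "; ".join(plan)
    return "evaluate per Table 4 (e.g., response status unknown)"
```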
# Influenza

# Background

# Epidemiology and Risk Factors

Influenza causes an estimated average of >200,000 hospitalizations and 3,000-49,000 deaths annually in the United States (74-76). The majority of influenza-related severe illnesses and deaths occur among persons with chronic medical conditions, infants and young children, seniors, and pregnant women (74-78). Reducing the risk for influenza among persons at higher risk for complications is a major focus of influenza prevention strategies (77).

# Influenza Transmission in Health-Care Settings

HCP are exposed to patients with influenza in the workplace and are thus at risk for occupationally acquired influenza and for transmitting influenza to patients and other HCP. In a cross-sectional survey of hospital house staff (physicians in training), 37% reported influenza-like illness during September-April, and 9% reported more than one respiratory illness. Length of illness varied (range: 1-10 days; mean: 7 days), as did days of work missed (range: 0-10 days; mean: 0.7 days) (79). Infected HCP who continue to work while ill might transmit influenza to patients, many of whom are at increased risk for severe outcomes from influenza. HCP are therefore recommended for routine annual influenza vaccination (77).

Few randomized trials of the effect of influenza vaccination on illness in HCP have been conducted. In one randomized trial of 427 HCP, influenza vaccination of HCP failed to decrease episodes of respiratory infection or duration of illness but was associated with a 28% decrease in absenteeism (from 1.4 days to 1.0 day) attributable to respiratory infections (80). No laboratory confirmation of influenza was obtained in this study. In another randomized trial among HCP, vaccination was associated with a significantly lower rate of serologic evidence of influenza infection, with a vaccine efficacy rate of 88% for influenza A and 89% for influenza B (p<0.05) (81); however, no significant differences were noted in days of febrile respiratory illness or absenteeism.

Influenza can cause outbreaks of severe respiratory illness among hospitalized persons and long-term-care residents (82-90). Influenza outbreaks in hospitals (86-88) and long-term-care facilities (91) have been associated with low vaccination rates among HCP. One nonrandomized study demonstrated an increase in HCW vaccination rates and a decrease in nosocomially acquired, laboratory-confirmed influenza in a hospital after a mobile cart-based HCP vaccination program was introduced (86). Several randomized controlled studies of the impact of HCP vaccination on morbidity and mortality in long-term-care facilities have been performed (92-95). These studies have demonstrated substantial decreases in all-cause mortality (92-95) and influenza-like illness (92,94,95). However, studies that examine and demonstrate efficacy in preventing more specific outcomes (e.g., laboratory-confirmed influenza illness and mortality) are lacking. Recent systematic reviews suggest that vaccination of HCP in settings in which patients also were vaccinated provided significant reductions in deaths among elderly patients from all causes and deaths from pneumonia but also note that additional randomized controlled trials, as well as examination of more specific outcomes, are warranted (96,97).

Preventing influenza among HCP who might serve as sources of influenza virus transmission provides additional protection to patients at risk for influenza complications. Vaccination of HCP can specifically benefit patients who cannot receive vaccination (e.g., infants aged <6 months or persons with severe allergic reactions to prior influenza vaccination), patients who respond poorly to vaccination (e.g., persons aged >85 years and immunocompromised persons), and persons for whom antiviral treatment is not available (e.g., persons with medical contraindications). Although annual vaccination has long been recommended for HCP and is a high priority for reducing morbidity associated with influenza in health-care settings (98-100), national survey data have demonstrated that the vaccination coverage level during the 2008-09 season was 52.9% (101).

# Considerations Regarding Influenza Vaccination of HCP

Barriers to HCP acceptance of influenza vaccination have included fear of vaccine side effects (particularly influenza-like symptoms), insufficient time or inconvenience, perceived ineffectiveness of the vaccine, perceived low likelihood of contracting influenza, avoidance of medications, and fear of needles (79,102-109). Factors demonstrated to increase vaccine acceptance include a desire for self-protection, previous receipt of influenza vaccine, a desire to protect patients, and perceived effectiveness of vaccine (79,105,106,109-112). Strategies that have demonstrated improvement in HCP vaccination rates have included campaigns to emphasize the benefits of HCP vaccination for staff and patients, vaccination of senior medical staff or opinion leaders, removal of administrative barriers (e.g., costs), provision of vaccine in locations and at times easily accessible by HCP, and monitoring and reporting of HCP influenza vaccination rates (99,113-120). Intranasally administered live attenuated influenza vaccine (LAIV) is an option for healthy, nonpregnant adults aged <50 years who dislike needles. The practice of obtaining signed declinations from HCP offered influenza vaccination has been adopted by some institutions but has not yet been demonstrated to raise coverage rates above 70%-80% (99,115,121-123).
Institutions that require declination statements from HCP who refuse influenza vaccination should educate and counsel these HCP about the benefits of the vaccine. Each health-care facility should develop a comprehensive influenza vaccination strategy that includes targeted education about the disease, including disease risk among HCP and patients, and about the vaccine. In addition, the program should establish easily accessible vaccination sites and inform HCP about their locations and schedules. Facilities that employ HCP should provide influenza vaccine at no cost to personnel (124).

The most effective combination of approaches for achieving high influenza vaccination coverage among HCP likely varies by institution. Hospitals and health-care organizations in the United States traditionally have employed an immunization strategy that includes one or more of the following components: education about influenza, easy access to vaccine, incentives to encourage immunization, organized campaigns, institution of declination policies, and legislative and regulatory efforts (e.g., vaccination requirements) (99,115,121-126). Beginning January 1, 2007, the Joint Commission on Accreditation of Health-Care Organizations required accredited organizations to offer influenza vaccinations to staff, including volunteers and licensed independent practitioners, and to report coverage levels among HCP (127). Standards are available for measuring vaccination coverage among HCP as a measure of program performance within a health-care setting (128). Beginning January 2013, the Centers for Medicare & Medicaid Services will require acute care hospitals to report HCP influenza vaccination as part of its hospital inpatient quality reporting program.*

# Vaccine Effectiveness, Duration of Immunity, and Vaccine Safety

Effectiveness of influenza vaccines varies from year to year and depends on the age and health status of the person receiving the vaccine and the similarity or "match" between the virus or viruses in the vaccine and those in circulation. Vaccine strains are selected for inclusion in the influenza vaccine every year on the basis of international surveillance and scientists' estimations of which types and strains of viruses will circulate in a given year. Annual vaccination is recommended because the predominant circulating influenza viruses typically change from season to season and because immunity declines over time postvaccination (77).

In placebo-controlled studies among adults, the most frequent side effect of vaccination was soreness at the vaccination site (affecting 10%-64% of patients) that lasted <2 days (129,130). These injection-site reactions typically were mild and rarely interfered with the recipient's ability to conduct usual daily activities. The main contraindication to influenza vaccination is a history of anaphylactic hypersensitivity to egg or other components of the vaccine. A history of Guillain-Barré syndrome within 6 weeks following a previous dose of influenza vaccine is considered a precaution for use of influenza vaccines (77).

# Recommendations

# Vaccination

Annual influenza vaccination is recommended for all persons aged >6 months who have no medical contraindication; therefore, vaccination of all HCP who have no contraindications is recommended. The influenza vaccine is evaluated annually, with one or more vaccine strains updated almost every year. In addition, antibody titers decline during the year after vaccination.
Thus, annual vaccination with the current season's formulation is recommended. Annual vaccination is appropriate and safe to begin as early in the season as vaccine is available. HCP should be among the groups considered for prioritized receipt of influenza vaccines when the vaccine supply is limited.

Two types of influenza vaccines are available. LAIV is administered intranasally and is licensed for use in healthy nonpregnant persons aged 2-49 years. The trivalent inactivated vaccine (TIV) is administered as an intramuscular injection and can be given to any person aged >6 months. Both vaccine types contain vaccine virus strains that are selected to stimulate a protective immune response against the wild-type viruses thought to be most likely in circulation during the upcoming season. Use of LAIV for HCP who care for patients housed in protective inpatient environments has been a theoretic concern, but transmission of LAIV in health-care settings has not been reported. LAIV can be used for HCP who work in any setting, except those who care for severely immunocompromised hospitalized persons who require care in a protective environment. HCP who themselves have a condition that confers high risk for influenza complications, who are pregnant, or who are aged >50 years should not receive LAIV and should be administered TIV instead.

An inactivated trivalent vaccine containing 60 mcg of hemagglutinin antigen per influenza vaccine virus strain (Fluzone High-Dose [sanofi pasteur]) is an alternative inactivated vaccine for persons aged >65 years. Persons aged >65 years may be administered any of the standard-dose TIV preparations or Fluzone High-Dose (77). The majority of TIV preparations are administered intramuscularly. An intradermally administered TIV was licensed in May 2011 and is an alternative to other TIV preparations for persons aged 18-64 years (131).

# Use of Antiviral Drugs for Treating Exposed Persons and Controlling Outbreaks

Use of antiviral drugs for chemoprophylaxis or treatment of influenza is an adjunct to (but not a substitute for) vaccination. Oseltamivir or zanamivir are recommended currently for chemoprophylaxis or treatment of influenza (132,133). TIV can be administered to exposed, unvaccinated HCP at the same time as chemoprophylaxis, but LAIV should be avoided because the antiviral medication will prevent the viral replication needed to stimulate a vaccine response (77). Antivirals are used often among patients during outbreaks in closed settings such as long-term-care facilities but also can be administered to unvaccinated HCP during outbreaks, when an exposure to a person with influenza occurs, or after exposure when vaccination is not thought to be protective against the strain to which a vaccinated HCP was exposed. Chemoprophylaxis consists of 1 dose (of either antiviral drug) daily for 10 days, and treatment consists of 1 dose twice daily for 5 days. In many instances of HCP exposure, watchful waiting and early initiation of treatment if symptoms appear is preferred to use of antiviral chemoprophylaxis immediately after exposure. The intensity and duration of the exposure and the underlying health status of the exposed worker are important factors in clinical judgments about whether to provide chemoprophylaxis. If chemoprophylaxis is used, the provider should base the choice of agent on whether the circulating strain or strains of influenza have demonstrated resistance to particular antivirals.
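The chemoprophylaxis and treatment regimens above differ only in doses per day and duration. A minimal illustrative sketch (constants and names are invented for this example):

```python
# Oseltamivir or zanamivir regimens described above (illustrative constants).
CHEMOPROPHYLAXIS = {"doses_per_day": 1, "duration_days": 10}
TREATMENT = {"doses_per_day": 2, "duration_days": 5}

def total_doses(regimen: dict) -> int:
    """Total antiviral doses dispensed for a regimen."""
    return regimen["doses_per_day"] * regimen["duration_days"]

# Both regimens happen to total 10 doses:
assert total_doses(CHEMOPROPHYLAXIS) == total_doses(TREATMENT) == 10
```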
# Program Evaluation

• Health-care administrators should include influenza vaccination coverage among HCP as a measure of quality of care (124).
• Influenza vaccination rates among HCP within facilities should be regularly measured and reported, and ward-, unit-, and specialty-specific coverage rates should be provided to staff and administration (124). Such information might be useful to promote compliance with vaccination policies.

# Measles

# Background

# Epidemiology and Risk Factors

Measles is a highly contagious rash illness that is transmitted by respiratory droplets and airborne spread. Severe complications, which might result in death, include pneumonia and encephalitis. Before the national measles vaccination program was implemented in 1963, almost every person acquired measles before adulthood; an estimated 3-4 million persons in the United States acquired measles each year (134). Approximately 500,000 persons were reported to have had measles annually, of whom 500 died, 48,000 were hospitalized, and another 1,000 had permanent brain damage from measles encephalitis (134). Through a successful 2-dose measles vaccination program (i.e., a first dose at age 12-15 months and a second dose between ages 4-6 years) (135) and better measles control throughout the region of the Americas (136), endemic transmission of measles was interrupted in the United States, and measles was declared eliminated from the country in 2000 (137). However, measles remains widespread in the majority of countries outside the Western Hemisphere, with an estimated 20 million measles cases occurring worldwide (138) and approximately 164,000 related deaths (139). Thus, the United States continues to experience international importations that might lead to transmission among U.S. residents and limited outbreaks, especially in unvaccinated populations (140-143).

During 2001-2008, a total of 557 confirmed measles cases were reported in the United States from 37 states and the District of Columbia (annual median: 56; range: 37 in 2004 to 140 in 2008), representing an annual incidence of less than one case per million population (144). Of the 557 reported case-patients, 126 (23%) were hospitalized (annual median: 16; range: 5-29); of these, at least five were admitted to intensive care. Two deaths were reported, both in 2003 (144). Of the 557 reported case-patients during 2001-2008, a total of 223 (40%) were adults, including 156 (28%) aged 20-39 years and 67 (12%) aged >40 years. Of the 438 measles cases among U.S. residents, 285 (65%) were considered preventable (i.e., occurred among persons who were eligible for vaccination but were unvaccinated) (144). The remaining 153 (35%) cases were considered nonpreventable. Cases were defined as nonpreventable if they occurred among U.S. resident case-patients who had received >1 dose of measles-containing vaccine, who were vaccinated as recommended if traveling internationally, or who were not vaccinated but had other evidence of immunity (i.e., were born before 1957 and therefore presumed immune from natural disease in childhood, had laboratory evidence of immunity, or had documentation of physician-diagnosed disease) or for whom vaccination is not recommended. During 2001-2008, a total of 12.5% (one of eight) of measles cases reported to CDC among HCP occurred in persons born before 1957; the other seven cases occurred among HCP born after 1957.
Measles-mumps-rubella (MMR) vaccination policies have been enforced with variable success in United States health-care facilities over the past decade. Even though medical settings were a primary site of measles transmission during the 1989-1991 measles resurgence (145,146), as of September 2011, only three states (New York, Oklahoma, and Rhode Island) had laws mandating that all hospital personnel have proof of measles immunity and did not allow for religious or philosophic exemptions (147). Vaccine coverage in the United States is high; in 2010, a total of 91% of children aged 19-35 months had received >1 dose of MMR vaccine.

# Measles Transmission and the Costs of Mitigating Measles Exposures in Health-Care Settings

Health-care-associated cases of measles are of public health concern. Because of the severity of measles, infected persons are likely to seek medical care in primary health-care facilities, emergency departments, or hospitals (141,149,150). Medical settings played a prominent role in perpetuating outbreaks of measles transmission during the 1989-1991 measles resurgence (145,146) and were a primary site of measles transmission in a health-care-associated outbreak in 2008 (149). During 2001-2008, a total of 27 reported measles cases were transmitted in U.S. health-care facilities, accounting for 5% of all reported U.S. measles cases.

Because of the greater opportunity for exposure, HCP are at higher risk than the general population for becoming infected with measles. A study conducted in 1996 in medical facilities in a county in Washington state indicated that HCP were 19 times more likely to develop measles than other adults (151). During 2001-2008, in the 23 health-care settings in which measles transmission was reported, eight cases occurred among HCP, six (75%) of whom were unvaccinated or had unknown vaccination status. One health-care provider was hospitalized in an intensive care unit for 6 days because of severe measles complications (142). During a health-care-associated measles outbreak in Arizona in 2008 with 14 cases, six cases were acquired in hospitals, and one was acquired in an outpatient setting. One unvaccinated health-care worker developed measles and infected a hospital emergency room patient who required intensive care following hospital admission for measles (149).

High costs also are involved in evaluating and containing exposures and outbreaks in health-care facilities, as well as substantial disruption of regular hospital routines when control measures are instituted, especially if hospitals do not have readily available data on the measles immunity status of their staff and others included in the facility vaccination program. In 2005 in Indiana, one hospital spent more than $113,000 responding to a measles outbreak (142), and in 2008 in Arizona, two hospitals spent $799,136 responding to and containing cases in their facilities (149). The Arizona outbreak response required rapid review of measles documentation for 14,844 HCP at seven hospitals and emergency vaccination of approximately 4,500 HCP who lacked documentation of measles immunity. Serologic testing at two hospitals among 1,583 HCP without a documented history of vaccination or documented laboratory evidence of measles immunity revealed that 138 (9%) of these persons lacked measles IgG antibodies (149).
# Vaccine Effectiveness, Duration of Immunity and Seroprevalence Studies, and Vaccine Safety

# Vaccine Effectiveness

MMR vaccine is highly effective in preventing measles, with a 1-dose vaccine effectiveness of 95% when the dose is administered on or after age 12 months and a 2-dose vaccine effectiveness of 99% (135).
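These figures follow the standard epidemiologic definition of vaccine effectiveness, which the source does not spell out; as a worked illustration,

$$\mathrm{VE} = \frac{AR_{\mathrm{unvaccinated}} - AR_{\mathrm{vaccinated}}}{AR_{\mathrm{unvaccinated}}} = 1 - \mathrm{RR},$$

where $AR$ denotes the attack rate in each group and $\mathrm{RR}$ the relative risk. Under this definition, a 1-dose effectiveness of 95% implies that vaccinated persons experience 5% of the attack rate of unvaccinated persons in the same exposure setting; for example, a hypothetical unvaccinated attack rate of 40% would correspond to a vaccinated attack rate of approximately 2%.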
# Duration of Immunity and Seroprevalence Studies

Two doses of live measles vaccine are considered to provide long-lasting immunity (135). Although antibody levels decline following vaccination, a study examining neutralizing antibody levels up to 10 years following the second dose of MMR vaccine in children indicates that antibodies remain above the level considered protective (152). Studies among HCP in the United States during the measles resurgence in the late 1980s through early 1990s demonstrated that 4%-10% of all HCP lacked measles IgG antibodies (153-156). During the 2008 Arizona outbreak, of the 1,077 health-care providers born during or after 1957 without documented measles immunity, 121 (11%) were seronegative (149). In a study of measles seroprevalence among 469 newly hired HCP at a hospital in North Carolina who were born before 1957 (and thus considered immune by age) and who could not provide written evidence of immunity to measles, serologic testing indicated that six (1.3%) lacked measles IgG antibodies (157). Other serologic studies of hospital-based HCP indicate that 2%-9% of those born before 1957 lacked antibodies to measles (156,158-160). A survey conducted during 1999-2004 found a seroprevalence of measles antibodies of 95.9% among persons in the U.S. population aged 6-49 years (161). The survey indicated that the lowest prevalence, 92.4%, was among adults born during 1967-1976 (161). A 1999 study of U.S. residents aged >20 years determined that 93% had antibodies to measles virus (162).

# Vaccine Safety

Measles vaccine is administered in combination with the mumps and rubella components as the MMR vaccine in the United States. Monovalent measles vaccine rarely has been used in the United States in the past 2 decades and is no longer available. After decades of use, evidence demonstrates that MMR vaccine has an excellent safety profile (134). The majority of documented adverse events occur in children. In rare circumstances, MMR vaccination of adults has been associated with the following adverse events: anaphylaxis (approximately 1.0-3.5 occurrences per million doses administered) (134), thrombocytopenia from the measles or rubella component (three to four cases per 100,000 doses) (134), and acute arthritis from the rubella component (arthralgia develops among approximately 25% of rubella-susceptible postpubertal females after MMR vaccination, and approximately 10% have acute arthritis-like signs and symptoms) (135). When joint symptoms occur, they generally persist for 1 day to 3 weeks and rarely recur (135). Chronic joint symptoms attributable to the rubella component of the MMR vaccine are reported very rarely, if they occur at all. Evidence does not support an association between MMR vaccination and any of the following: hearing loss, retinopathy, optic neuritis, Guillain-Barré syndrome, type 1 diabetes, Crohn's disease, or autism (135,163-169).

A woman can excrete the rubella vaccine virus in breast milk and transmit the virus to her infant, but the infection remains asymptomatic (135).

Presumptive evidence of immunity to measles for persons who work in health-care facilities includes any of the following:
• written documentation of vaccination with 2 doses of live measles or MMR vaccine administered at least 28 days apart,
• laboratory evidence of immunity,
• laboratory confirmation of disease, or
• birth before 1957.

# Prevaccination Testing

Prevaccination antibody screening before MMR vaccination is not necessary for an employee who does not have adequate presumptive evidence of immunity unless the medical facility considers it cost effective (134,170-172), although no recent studies have been conducted. For HCP who have 2 documented doses of MMR vaccine or other acceptable evidence of immunity to measles, serologic testing for immunity is not recommended. If an HCP who has 2 documented doses of MMR vaccine is tested serologically and determined to have negative or equivocal measles titer results, an additional dose of MMR vaccine is not recommended; such persons should be considered to have presumptive evidence of measles immunity. Documented age-appropriate vaccination supersedes the results of subsequent serologic testing. Because rapid vaccination is necessary to halt disease transmission, serologic screening before vaccination is not recommended during outbreaks of measles.

# Use of Vaccine and Immune Globulin for Treating Exposed Persons and Controlling Outbreaks

Following airborne infection-control precautions and implementing other infection-control measures are important to control the spread of measles but might fail to prevent all nosocomial transmission, because transmission to other susceptible persons might occur before illness is recognized. Persons infected with measles are infectious from 4 days before rash onset through 4 days after rash onset. When a person who is suspected of having measles visits a health-care facility, airborne infection-control precautions should be followed stringently. The patient should be asked immediately to wear a medical mask and should be placed in an airborne-infection isolation room (i.e., a negative air-pressure room) as soon as possible. If an airborne-infection isolation room is not available, the patient should be placed in a private room with the door closed and be asked to wear a mask. If possible, only staff with presumptive evidence of immunity should enter the room of a person with suspected or confirmed measles. Regardless of presumptive immunity status, all staff entering the room should use respiratory protection consistent with airborne infection-control precautions (i.e., an N95 respirator or a respirator with similar effectiveness in preventing airborne transmission) (3,150). Because of the possibility, albeit low (~1%), of measles vaccine failure in HCP exposed to infected patients (173), all HCP should observe airborne precautions in caring for patients with measles. HCP in whom measles occurs should be excluded from work for at least 4 days following rash onset. Contacts with measles-compatible symptoms should be isolated, and appropriate infection-control measures (e.g., rapid vaccination of susceptible contacts) should be implemented to prevent further spread (174).
If measles exposures occur in a health-care facility, all contacts should be evaluated immediately for presumptive evidence of measles immunity. HCP without evidence of immunity should be offered the first dose of MMR vaccine and excluded from work from day 5 through day 21 following exposure (135). HCP without evidence of immunity who are not vaccinated after exposure should be removed from all patient contact and excluded from the facility from day 5 after their first exposure through day 21 after the last exposure, even if they have received postexposure intramuscular immune globulin (0.25 mL/kg [40 mg IgG/kg]) (135). Those with documentation of 1 vaccine dose may remain at work and should receive the second dose.

Case-patient contacts who do not have presumptive evidence of measles immunity should be vaccinated, offered intramuscular immune globulin (0.25 mL/kg [40 mg IgG/kg], the standard dosage for nonimmunocompromised persons) (135), or quarantined until 21 days after their exposure to the case-patient. Contacts with measles-compatible symptoms should be isolated, and appropriate infection-control measures should be implemented to prevent further spread. If immune globulin is administered to an exposed person, observation for signs and symptoms of measles should continue for 28 days after exposure, because immune globulin might prolong the incubation period.

Available data suggest that live measles virus vaccine, if administered within 72 hours of measles exposure, will prevent or modify disease (134). Even if it is too late to provide effective postexposure prophylaxis by administering MMR, the vaccine can provide protection against future exposure to all three infections. Identifying persons who lack evidence of measles immunity during contact investigations provides a good opportunity to offer MMR vaccine to protect against measles as well as mumps and rubella, not only for HCP who are part of an organization's vaccination program but also for patients and visitors. If an exposed person is already incubating measles, MMR vaccination will not exacerbate symptoms. In these circumstances, persons should be advised that a measles-like illness occurring shortly after vaccination could be attributable either to natural infection or to the vaccine strain, and specimens should be submitted for viral strain identification.
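The exclusion window described above reduces to simple date arithmetic. The following minimal sketch (hypothetical function and parameter names; not part of the source recommendations) computes the furlough period for an exposed HCP without evidence of immunity; the same pattern applies to the mumps (day 12-25) and rubella (day 7-23) windows described in later sections:

```python
from datetime import date, timedelta

def measles_furlough_window(first_exposure: date, last_exposure: date,
                            received_ig: bool = False):
    """Work-exclusion window for an exposed HCP without evidence of immunity.

    Exclusion runs from day 5 after the first exposure through day 21 after
    the last exposure, even if immune globulin (IG) was given. Because IG
    might prolong the incubation period, symptom monitoring extends to day 28.
    """
    exclusion_start = first_exposure + timedelta(days=5)
    exclusion_end = last_exposure + timedelta(days=21)
    monitoring_end = last_exposure + timedelta(days=28 if received_ig else 21)
    return exclusion_start, exclusion_end, monitoring_end

# Example: exposures on March 1 and March 3 -> excluded March 6 through March 24.
start, end, monitor = measles_furlough_window(date(2011, 3, 1), date(2011, 3, 3))
print(f"Exclude from work {start} through {end}; monitor through {monitor}")
```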
# Mumps

# Background

# Epidemiology and Risk Factors

Mumps is an acute viral infection characterized by fever and inflammation of the salivary glands (usually parotitis) (175). The spectrum of illness ranges from subclinical infection (20%-40% of cases) to nonspecific respiratory illness, sialadenitis including classic parotitis, deafness, orchitis, and meningoencephalitis; severity increases with age (175). In the prevaccine era, mumps was a common childhood illness, with approximately 186,000 mumps cases reported in the United States per year (176). After the introduction of the Jeryl Lynn strain mumps vaccine in 1967 and the implementation of the 1-dose mumps vaccine policy for children in 1977 (177), reports of mumps cases in the United States declined 99% (178). During 1986-1987, an increase in reported mumps cases occurred, primarily affecting unvaccinated adolescents and young adults. In the late 1980s, sporadic outbreaks continued to occur that affected both unvaccinated and 1-dose vaccinated adolescents and young adults (178).

In 1989, a second dose of MMR vaccine was recommended nationwide for better measles control among school-aged children (179). Historically low rates of mumps followed, with only several hundred reported cases per year in the United States during 2000-2005. In 1998, a national goal to eliminate mumps was set for 2010 (180). However, in 2006, a total of 6,584 mumps cases were reported in the United States, the largest U.S. mumps outbreak in nearly 20 years (181-183). Whereas overall national mumps incidence was 2.2 per 100,000 population, eight states in the Midwest were the most affected, with 2.5-66.1 cases per 100,000 population (183). The highest incidence (31.1 cases per 100,000 population) was among persons aged 18-24 years (e.g., college-aged students), the majority of whom had received 2 doses of mumps-containing vaccine. Of the 4,017 case-patients for whom age and vaccination status were known, 1,786 (44%) were aged ≥25 years (incidence: 7.2 cases per 100,000 persons); of these 1,786 patients, 351 (20%) had received at least 2 doses, 444 (25%) had received 1 dose, 336 (19%) were unvaccinated, and 655 (37%) had unknown vaccination status. Since the 2006 resurgence, two additional large U.S. mumps outbreaks have occurred, both during 2009-2010: one among members of a religious community with cases occurring throughout the northeastern United States (184) and the other in Guam (185).

# Mumps Transmission and the Costs of Mitigating Mumps Exposures in Health-Care Settings

Although health-care-associated transmission of mumps is infrequent, it might be underreported because of the high percentage (~20%-40%) of infected persons who might be asymptomatic (186-189). In a survey of 9,299 adults in different professions conducted in 1968, before vaccine was used routinely, the rate of mumps acquisition was highest among dentists and HCP, with rates of 18% among dentists and 15% among physicians (37% for pediatricians), compared with 9% among primary and secondary school teachers and 2% among university staff members (190). In the postvaccine era, mumps transmission also has been documented in medical settings (191-193). During a Tennessee mumps outbreak during 1986-1987, a total of 17 (12%) of 146 hospitals and three (50%) of six long-term-care facilities reported one or more practices that could contribute to the spread of mumps, including not isolating patients with mumps, assigning susceptible staff to care for patients with mumps, and not immunizing susceptible employees. Health-care-associated transmission resulted in six cases of mumps infection among health-care providers and nine cases among patients (191). In Utah in 1994, two health-care providers in a hospital developed mumps after they had contact with an infected patient (192). During the 2006 outbreak, one health-care facility in Chicago experienced ongoing mumps transmission lasting 4 weeks (193). During the 2006 multistate U.S. outbreak, 144 (8.5%) of 1,705 adult case-patients in Iowa for whom occupation was known were health-care providers (Iowa Department of Public Health, unpublished data, 2006). Whether transmission occurred from patients, coworkers, or persons in the community is unknown. During the 2009-2010 outbreak in the northeastern region of the United States, seven (0.2%) of the 3,400 case-patients were health-care providers, six of whom likely were infected by patients because they had no other known exposure.
Exposures to mumps in health-care settings also can result in added economic costs because of furlough or reassignment of staff members from patient-care duties or closure of wards (194). In 2006, a Kansas hospital spent $98,682 containing a mumps outbreak (195). During a mumps outbreak in Chicago in 2006, one health-care facility spent $262,788 controlling the outbreak (193).

# Vaccine Effectiveness, Duration of Immunity and Seroprevalence Studies, and Vaccine Safety

# Vaccine Effectiveness

MMR vaccine has a 1-dose vaccine effectiveness in preventing mumps of 80%-85% (range: 75%-91%) (175,196-199) and a 2-dose vaccine effectiveness of 79%-95% (199-202). In a study conducted on two Iowa college campuses during the 2006 mumps outbreak among a population that was primarily vaccinated with 2 doses, 2-dose vaccine effectiveness ranged from 79% to 88% (202).

# Duration of Immunity and Seroprevalence Studies

Mumps antibody levels wane over time following the first or second dose of vaccine (203,204), but the correlates of immunity to mumps are poorly understood, and the significance of these waning antibody levels is unclear. A study on a university campus in Nebraska in 2006 indicated lower levels of mumps neutralizing antibodies among students who had been vaccinated with a second MMR dose >15 years previously than among those who had been vaccinated 1-5 years previously, but the difference was not statistically significant (p>0.05) (205). In a 2006 study on a university campus in Kansas, students with mumps were more likely to have received a second dose of MMR vaccine >10 years previously than were their roommates without mumps (206). However, another 2006 study from an Iowa college campus identified no such association (202). During 1999-2004, national seroprevalence of mumps antibodies among persons aged 6-49 years was 90% (95% confidence interval [CI]: 88.8-91.1) (207). In the Nebraska study, 414 (94%) of the 440 participants were seropositive for mumps antibodies (205). A study in Kansas in 2006 indicated that 13% of hospital employees lacked antibodies to mumps virus (195). In a recent study of mumps seroprevalence among 381 newly hired health-care personnel at a hospital in North Carolina who were born before 1957 (and thus considered immune by age) and who could not provide written evidence of immunity to mumps, serologic testing indicated that 14 (3.7%) lacked IgG antibodies to mumps (157).

# Prevaccination Testing

For HCP who do not have adequate presumptive evidence of mumps immunity, prevaccination antibody screening before MMR vaccination is not necessary (135,175). For HCP who have 2 documented doses of MMR vaccine or other acceptable evidence of immunity to mumps, serologic testing for immunity is not recommended. If a health-care provider who has 2 documented doses of MMR vaccine is tested serologically and determined to have negative or equivocal mumps titer results, an additional dose of MMR vaccine is not recommended; such persons should be considered immune to mumps. Documented age-appropriate vaccination supersedes the results of subsequent serologic testing. Likewise, during outbreaks of mumps, serologic screening before vaccination is not recommended because rapid vaccination is necessary to halt disease transmission.
# Controlling Mumps Outbreaks in Health-Care Settings

Placing patients in droplet precautions and implementing other infection-control measures are important to control the spread of mumps but might fail to prevent all nosocomial transmission, because transmission to other susceptible persons might occur before illness is recognized (208). When a person suspected of having mumps visits a health-care facility, only HCP with adequate presumptive evidence of immunity should be exposed to the person, and, in addition to standard precautions, droplet precautions should be followed. The index case-patient should be isolated, and respiratory precautions (gown and gloves) should be used for patient contact. Negative-pressure rooms are not required. The patient should be isolated for 5 days after the onset of parotitis, during which time shedding of virus is likely to occur (209).

** The first dose of mumps-containing vaccine should be administered on or after the first birthday; the second dose should be administered no earlier than 28 days after the first dose.
†† Laboratory evidence of immunity: mumps immunoglobulin G (IgG) in the serum; equivocal results should be considered negative.
§§ The majority of persons born before 1957 are likely to have been infected naturally between birth and 1977, the year that mumps vaccination was recommended for routine use, and may be presumed immune even if they have not had clinically recognized mumps disease.

If mumps exposures occur in a health-care facility, all contacts should be evaluated for evidence of mumps immunity. HCP with no evidence of mumps immunity who are exposed to patients with mumps should be offered the first dose of MMR vaccine as soon as possible; vaccine can be administered at any interval following exposure. These HCP should be excluded from duty from day 12 after the first unprotected exposure through day 25 after the most recent exposure. HCP with documentation of 1 vaccine dose may remain at work and should receive the second dose. HCP with mumps should be excluded from work for 5 days from the onset of parotitis (209).

Antibody response to the mumps component of MMR vaccine generally is believed not to develop soon enough to provide effective prophylaxis after exposure to suspected mumps (191,210), but data are insufficient to rule out a prophylactic effect. Nonetheless, the vaccine is not recommended for prophylactic purposes after exposure. However, identifying persons who lack presumptive evidence of mumps immunity during contact investigations provides a good opportunity to offer MMR vaccine to protect against mumps as well as measles and rubella, not only for HCP who are part of an organization's vaccination program but also for patients and visitors. If an exposed person already is incubating mumps, MMR vaccination will not exacerbate the symptoms. In these circumstances, persons should be advised that a mumps-like illness occurring shortly after vaccination is likely to be attributable to natural infection, and specimens should be submitted for viral strain identification to differentiate between vaccine and wild-type virus. Immune globulin is not routinely used for postexposure protection from mumps because no evidence exists that it is effective (135).

# Rubella

# Background

# Epidemiology and Risk Factors

Rubella (German measles) is a viral disease characterized by rash, low-grade fever, lymphadenopathy, and malaise (211).
Although rubella is considered a benign disease, transient arthralgia and arthritis are observed commonly in infected adults, particularly among postpubertal females. Chronic arthritis has been reported after rubella infection, but such reports are rare, and evidence of an association is weak (212). Other complications that occur infrequently are thrombocytopenia and encephalitis (211). Infection is asymptomatic in 25%-50% of cases (213). Clinical diagnosis of rubella is unreliable and should not be considered in assessing immune status; many rash illnesses might mimic rubella infection, and many rubella infections are unrecognized. The only reliable evidence of previous rubella infection is the presence of serum rubella IgG antibody (211).

Of primary concern are the effects that rubella can have when a pregnant woman becomes infected, especially during the first trimester; infection can result in miscarriages, stillbirths, therapeutic abortions, and congenital rubella syndrome (CRS), a constellation of birth defects that often includes blindness, deafness, mental retardation, and congenital heart defects (211,213). Postnatal rubella is transmitted through direct or droplet contact with nasopharyngeal secretions. The incubation period ranges from 12 to 23 days (214,215). An ill person is most contagious when the rash first appears, but the period of maximal communicability extends from a few days before to 7 days after rash onset (213). Rubella is less contagious than measles.

In the prevaccine era, rubella was an endemic disease globally, with larger epidemics occurring periodically; in the United States, rubella epidemics occurred approximately every 7 years (211). During the 1964-1965 global rubella epidemic, an estimated 12.5 million cases of rubella occurred in the United States, resulting in approximately 2,000 cases of encephalitis, 11,250 fetal deaths attributable to spontaneous or surgical abortions, 2,100 infants who were stillborn or died soon after birth, and 20,000 infants born with CRS. The economic impact of this epidemic in the United States alone was estimated at $1.5 billion in 1965 dollars ($10 billion in 2010 dollars) (216). After the rubella vaccine was licensed in the United States in 1969, reported rubella cases decreased from 57,686 in 1969 to 12,491 in 1976 (216), and CRS cases reported nationwide decreased from 68 in 1970 to 23 in 1976 (217). Declines in rubella age-specific incidence occurred in all age groups, including adolescents and adults, but the greatest declines were among children aged <15 years (216).

As of September 2011, only three states (i.e., New York, Oklahoma, and Rhode Island) had laws mandating that all hospital personnel have proof of rubella immunity and did not allow for religious or philosophical exemptions (147). Additional states had requirements for specific types of facilities or for certain employees within those facilities, but they did not have universal laws mandating proof of rubella immunity for all hospital personnel (147).

# Rubella Transmission and the Costs of Mitigating Rubella Exposures in Health-Care Settings

No documented transmission of rubella to HCP, other hospital staff, or patients in U.S. health-care facilities has occurred since elimination was declared. However, in the decades before elimination, rubella transmission was documented in at least 10 U.S. medical settings (221-231) and led to outbreaks with serious consequences, including pregnancy terminations, disruption of hospital routine, absenteeism from work, expensive containment measures, negative publicity, and the threat of litigation (232).
In these outbreaks, transmission occurred from HCP to susceptible coworkers and patients, as well as from patients to HCP and other patients. No data are available on whether HCP are at increased risk for acquiring rubella compared with other professions.

# Vaccine Effectiveness, Duration of Immunity and Seroprevalence Studies, and Vaccine Safety

# Vaccine Effectiveness

Effectiveness of the RA 27/3 rubella vaccine is 95% (95% CI: 85%-99%) against clinical rubella and >99% against laboratory-confirmed clinical rubella (211,233). Antibody responses to rubella as part of MMR vaccine are equal (i.e., >99%) to those seen after the single-antigen RA 27/3 rubella vaccine (211,234).

# Duration of Immunity and Seroprevalence Studies

In clinical trials, 97%-99% of susceptible persons who received a single dose of the RA 27/3 rubella vaccine at age ≥12 months developed antibody (211,235,236). Two studies have demonstrated that vaccine-induced rubella antibodies might wane after 12-15 years (237,238); however, rubella surveillance data do not indicate that rubella and CRS are increasing among vaccinated persons. National seroprevalence of rubella antibodies among persons aged 6-49 years during 1999-2004 was 91% (239). During 1986-1990, serologic surveys in one hospital indicated that 5% of HCP (including persons born in 1957 or earlier) did not have detectable rubella antibody (240). Earlier studies indicated that up to 14%-19% of U.S. hospital personnel, including young women of childbearing age, lacked detectable rubella antibody (225,241,242). In a recent study of rubella seroprevalence among 477 newly hired HCP at a hospital in North Carolina who were born before 1957 (and thus considered immune by age) and who could not provide written evidence of immunity to rubella, serologic testing revealed that 14 (3.1%) lacked detectable levels of antibody to rubella (157).

Because of the potential for contact with pregnant women in any type of health-care facility, all HCP should have documented presumptive evidence of immunity to rubella. History of disease is not considered adequate evidence of immunity. Because of the theoretic risk to the fetus, women should be counseled to avoid becoming pregnant for 28 days after receipt of a rubella-containing vaccine (243). However, receipt of rubella-containing vaccine during pregnancy should not be a reason to consider termination of pregnancy; data from 18 years of following to term 321 known rubella-susceptible women who were vaccinated within 3 months before or 3 months after conception indicated that none of the 324 infants born to these mothers had malformations compatible with congenital rubella syndrome, although five had evidence of subclinical rubella infection (244). The estimated risk to fetuses for serious malformations attributable to the mother's receipt of RA 27/3 vaccine ranges from zero to 1.6% (135,244). Evidence does not support a link between MMR vaccination and any of the following: hearing loss, retinopathy, optic neuritis, Guillain-Barré syndrome, type 1 diabetes, Crohn's disease, or autism (135,163-169).
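As context for how a bound of this kind can arise (the source cites but does not derive the 0-1.6% figure), a common approximation for the upper 95% confidence limit of a risk when zero events are observed in $n$ independent cases is the "rule of three":

$$\hat{p}_{\text{upper}} \approx \frac{3}{n} = \frac{3}{324} \approx 0.9\%,$$

which is of the same order as the published estimate; the published 0-1.6% range reflects additional considerations beyond this simple bound.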
A woman can excrete the rubella vaccine virus in breast milk and transmit the virus to her infant, but the infection remains asymptomatic (135).

Presumptive evidence of immunity to rubella for persons who work in health-care facilities includes any of the following:
• written documentation of vaccination with 1 dose of live rubella or MMR vaccine,
• laboratory evidence of immunity,
• laboratory confirmation of rubella infection or disease, or
• birth before 1957, except for women of childbearing potential who could become pregnant (although pregnancy in this age group would be exceedingly rare).

Because rubella can occur in some persons born before 1957 and because congenital rubella and congenital rubella syndrome can occur in the offspring of women infected with rubella virus during pregnancy, birth before 1957 is not acceptable evidence of rubella immunity for women who could become pregnant.

# Prevaccination Testing

For HCP who do not have adequate presumptive evidence of rubella immunity, prevaccination antibody screening before MMR vaccination is not necessary unless the medical facility considers it cost effective (135). For HCP who have 1 documented dose of MMR vaccine or other acceptable evidence of immunity to rubella, serologic testing for immunity is not recommended. If a health-care provider who has at least 1 documented dose of rubella-containing vaccine is tested serologically and determined to have negative or equivocal rubella titer results, receipt of an additional dose of MMR vaccine for prevention of rubella is not recommended; such persons should be considered immune to rubella. However, if the provider requires a second dose of measles or mumps vaccine, then a second dose of MMR should be administered. Documented age-appropriate vaccination supersedes the results of subsequent serologic testing. Likewise, during outbreaks of rubella, serologic screening before vaccination is not recommended because rapid vaccination is necessary to halt disease transmission.

# Controlling Rubella Outbreaks

To prevent transmission of rubella in health-care settings, patients suspected of having rubella should be placed in private rooms. In addition to standard precautions, droplet precautions should be followed until 7 days after onset of symptoms. Room doors can remain open, and special ventilation is not required.

Any exposed HCP who do not have adequate presumptive evidence of rubella immunity should be excluded from duty beginning 7 days after exposure to rubella and continuing through either 1) 23 days after the most recent exposure or 2) 7 days after rash appears if the provider develops rubella (213-215). Exposed HCP who do not have adequate presumptive evidence of immunity and who are vaccinated postexposure should be excluded from duty for 23 days after the most recent exposure to rubella, because no evidence exists that postexposure vaccination is effective in preventing rubella infection (244).

Neither rubella-containing vaccine (244) nor immune globulin (IG) (211,244) is effective for postexposure prophylaxis of rubella. Although intramuscular administration of 20 mL of immune globulin within 72 hours of rubella exposure might reduce the risk for rubella, it will not eliminate the risk (135,245); infants with congenital rubella have been born to women who received IG shortly after exposure (213).
In addition, administration of IG after exposure to rubella might modify or suppress symptoms and create an unwarranted sense of security with respect to transmission. If exposure to rubella does not cause infection, postexposure vaccination with MMR vaccine should induce protection against subsequent infection with rubella, as well as measles and mumps. If the exposure results in infection, no evidence indicates that administration of MMR vaccine during the presymptomatic or prodromal stage of illness increases the risk for vaccine-associated adverse events (213).

# Pertussis

# Background

# Epidemiology and Risk Factors

Pertussis is a highly contagious bacterial infection. Secondary attack rates among susceptible household contacts exceed 80% (246,247). Transmission occurs by direct contact with respiratory secretions or large aerosolized droplets from the respiratory tract of infected persons. The incubation period is generally 7-10 days but can be as long as 21 days. The period of communicability starts with the onset of the catarrhal stage and extends into the paroxysmal stage. Symptoms of early pertussis (catarrhal phase) are indistinguishable from those of other upper respiratory infections.

Vaccinated adolescents and adults, whose immunity from childhood vaccinations wanes 5-10 years after the most recent dose of vaccine (usually administered at age 4-6 years), are an important source of pertussis infection for susceptible infants. Infants too young to be vaccinated are at greatest risk for severe pertussis, including hospitalization and death. The disease can be transmitted from adults to close contacts, especially unvaccinated children.

Vaccination coverage among infants and children for diphtheria and tetanus toxoids and acellular pertussis (DTaP) vaccine remains high. In 2010, coverage among children aged 19-35 months who had received ≥4 doses of DTaP, diphtheria and tetanus toxoids and pertussis vaccine (DTP), or diphtheria and tetanus toxoids vaccine (DT) was 84% (21). Among children entering kindergarten for the 2009-2010 school year, DTaP coverage was 93% (148). Vaccination coverage for tetanus toxoid, reduced diphtheria toxoid and acellular pertussis (Tdap) vaccine was 68.7% among adolescents in 2010 and <7% among adults in 2009 (22,248). Tdap vaccination coverage among HCP was 17.0% in 2009 (248).

# Disease in Health-Care Settings and Impact on Health-Care Personnel and Patients

In hospital settings, transmission of pertussis has occurred from hospital visitors to patients, from HCP to patients, and from patients to HCP (249-252). Although of limited size (range: 2-17 patients and 5-13 staff), documented outbreaks were costly and disruptive. In each outbreak, HCP were evaluated for cough illness and required diagnostic testing, prophylactic antibiotics, and exclusion from work. During outbreaks that occur in hospitals, the risk for contracting pertussis among patients or staff is often difficult to quantify because exposure is not well defined. Serologic studies conducted among hospital staff indicate that exposure to pertussis is much more frequent than attack rates of clinical disease suggest (246,249-254). In one outbreak, seroprevalence of pertussis agglutinating antibodies among HCP correlated with the degree of patient contact and was highest among pediatric house staff (82%) and ward nurses (71%) and lowest among nurses with administrative responsibilities (35%) (251).
A model to estimate the cost of vaccinating HCP and the net return from preventing nosocomial pertussis was constructed using probabilistic methods and a hypothetical cohort of 1,000 HCP with direct patient contact followed for 10 years (255). Baseline assumptions, determined from data in the literature, included the incidence of pertussis in HCP, the ratio of identified exposures per HCP case, the symptomatic percentage of seroconfirmed pertussis infections in HCP, the cost of infection-control measures per exposed person, vaccine efficacy, vaccine coverage, employment turnover rate, adverse events, and the cost of vaccine (255). Over a 10-year period, the cost of infection control would be $388,000 without Tdap vaccination of HCP compared with $69,000 with such a program (255). Introduction of a vaccination program would result in net savings as high as $535,000 and a benefit-cost ratio of 2.38 (i.e., for every dollar spent on the vaccination program, the hospital would save $2.38 on control measures) (255).
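The arithmetic behind a benefit-cost ratio is straightforward, although the published model (255) is probabilistic and includes many more parameters than can be reproduced from the two cost figures quoted here. In the following minimal sketch, the program cost is a hypothetical value chosen only so that the ratio matches the cited 2.38; it is not taken from the source:

```python
# 10-year infection-control costs cited from the model (255)
control_cost_without_tdap = 388_000  # no HCP Tdap program
control_cost_with_tdap = 69_000      # with an HCP Tdap program
averted_costs = control_cost_without_tdap - control_cost_with_tdap  # $319,000

# Hypothetical total cost of running the vaccination program itself,
# chosen for illustration so the ratio works out to the cited 2.38.
program_cost = 134_000

benefit_cost_ratio = averted_costs / program_cost  # ~2.38
net_savings = averted_costs - program_cost         # ~$185,000
print(f"Each $1 spent on vaccination saves ${benefit_cost_ratio:.2f} "
      f"in control measures (net savings ${net_savings:,})")
```

Note that the cited net savings of up to $535,000 comes from the full probabilistic model, which also accounts for averted HCP cases, turnover, and other parameters beyond this simple comparison.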
# Vaccine Effectiveness, Duration of Immunity, and Vaccine Safety

A prelicensure immunogenicity and safety study of a vaccine containing acellular pertussis in adolescents and adults estimated vaccine efficacy to be 92% (256). Recent postlicensure studies of Tdap demonstrate vaccine effectiveness of 78% and 66% (257,258). Duration of immunity from vaccination has yet to be evaluated. Data from pre- and postlicensure studies support the safety of Tdap in adolescents and adults (259-263).

Since the 2005 Tdap recommendations for HCP, one study has tried to determine whether postexposure prophylaxis following pertussis exposure is necessary for Tdap-vaccinated HCP (264). During the study period, 116 exposures occurred among 94 HCP. Pertussis infection occurred in 2% of those who received postexposure prophylaxis compared with 10% of those who did not, suggesting a possible benefit of postexposure prophylaxis among Tdap-vaccinated HCP (264). Because Tdap coverage is suboptimal among HCP and the duration of protection afforded by Tdap is unknown, vaccination status does not change the approach to evaluating the need for postexposure prophylaxis in exposed HCP. Postexposure prophylaxis is necessary for HCP in contact with persons at risk for severe disease. Other HCP should either receive postexposure prophylaxis or be monitored for 21 days after pertussis exposure and treated at the onset of signs and symptoms of pertussis. Recommended postexposure prophylaxis antibiotics for HCP exposed to pertussis include azithromycin, clarithromycin, and erythromycin. HCP are not at greater risk for diphtheria or tetanus than the general population.

# Recommendations

# Vaccination

Regardless of age, HCP should receive a single dose of Tdap as soon as feasible if they have not previously received Tdap, regardless of the time since their most recent Td vaccination. Vaccinating HCP with Tdap will protect them against pertussis and is expected to reduce transmission to patients, other HCP, household members, and persons in the community. Tdap is not licensed for multiple administrations; therefore, after receipt of Tdap, HCP should receive Td for future booster vaccination against tetanus and diphtheria. Hospitals and ambulatory-care facilities should provide Tdap for HCP and use approaches that maximize vaccination rates (e.g., education about the benefits of vaccination, convenient access, and the provision of Tdap at no charge).

# Prevaccination Testing

Prevaccination serologic testing is not recommended.

# Demonstrating Immunity

Immunity cannot be demonstrated through serologic testing because serologic correlates of protection are not well established.

# Controlling Pertussis Outbreaks in Health-Care Settings

Prevention of pertussis transmission in health-care settings involves diagnosis and early treatment of clinical cases, droplet isolation of infectious patients who are hospitalized, exclusion from work of HCP who are infectious, and postexposure prophylaxis. Early diagnosis of pertussis, before secondary transmission occurs, is difficult because the disease is highly communicable during the catarrhal stage, when symptoms are still nonspecific. Pertussis should be considered in the differential diagnosis for any patient with an acute cough illness with severe or prolonged paroxysmal cough, particularly if characterized by posttussive vomiting, whoop, or apnea. Nasopharyngeal specimens should be taken, if possible, from the posterior nasopharynx with a calcium alginate or Dacron swab for culture and/or polymerase chain reaction (PCR) assay.

Health-care facilities should maximize efforts to prevent transmission of Bordetella pertussis. Precautions to prevent respiratory droplet transmission or spread by close or direct contact should be employed in the care of patients admitted to the hospital with suspected or confirmed pertussis (265). These precautions should remain in effect until patients are clinically improved and have completed at least 5 days of appropriate antimicrobial therapy. HCP in whom symptoms (i.e., unexplained rhinitis or acute cough) develop after known pertussis exposure might be at risk for transmitting pertussis and should be excluded from work until 5 days after the start of appropriate therapy (3).

Data on the need for postexposure prophylaxis in Tdap-vaccinated HCP are inconclusive (264). Certain vaccinated HCP are still at risk for B. pertussis infection, and Tdap might not preclude the need for postexposure prophylaxis. Postexposure antimicrobial prophylaxis is recommended for all HCP who have unprotected exposure to pertussis and are likely to expose a patient at risk for severe pertussis (e.g., hospitalized neonates and pregnant women). Other HCP should either receive postexposure antimicrobial prophylaxis or be monitored daily for 21 days after pertussis exposure and treated at the onset of signs and symptoms of pertussis.

# Varicella

# Background

# Epidemiology and Risk Factors

Varicella is a highly infectious disease caused by primary infection with varicella-zoster virus (VZV). VZV is transmitted from person to person by direct contact; by inhalation of aerosols from vesicular fluid of skin lesions of varicella or herpes zoster (HZ), a localized, generally painful vesicular rash commonly called shingles; or by infected respiratory tract secretions that also might be aerosolized (266). The average incubation period is 14-16 days after exposure to rash (range: 10-21 days). Infected persons are contagious from an estimated 1-2 days before rash onset until all lesions are crusted, typically 4-7 days after rash onset (266). Varicella secondary attack rates can reach 90% among susceptible contacts. Typically, primary infection with VZV results in lifetime immunity; VZV remains dormant in sensory-nerve ganglia and can reactivate at a later time, causing HZ. Before the U.S.
childhood varicella vaccination program began in 1995, approximately 90% of varicella disease occurred among children aged <15 years (266). During 1997-2009, national varicella vaccine coverage among children aged 19-35 months increased from 27% to 90%, leading to dramatic declines of >85% in varicella incidence, hospitalizations, and deaths (267-269). The decline in disease incidence was greatest among children for whom vaccination was recommended; however, declines occurred in every age group, including infants too young to be vaccinated and adults, indicating reduced communitywide transmission of VZV.

Current incidence of varicella among adults is low (<0.1 per 1,000 population), and adult cases represent <10% of all reported varicella cases (270). National seroprevalence data from 1999-2004 demonstrated that, in the early vaccine era, adults continued to have high immunity to varicella (271). In this study, 98% of persons aged 20-49 years had VZV-specific IgG antibodies. However, with declining likelihood of exposure to VZV, children and adolescents who did not receive 2 doses of varicella vaccine could remain susceptible to VZV infection as they age into adulthood, when varicella can be more severe.

The clinical presentation of varicella has changed since the implementation of the varicella vaccination program, with more than half of varicella cases reported in 2008 occurring among persons who were vaccinated previously, the majority of them children. Varicella disease in vaccinated children (breakthrough varicella) usually has a modified or atypical presentation; the rash is typically mild, with <50 lesions that are more likely to be predominantly maculopapular than vesicular (266). Fever is less common, and the duration of illness is shorter. Nevertheless, breakthrough varicella is infectious. One study indicated that vaccinated children with varicella with <50 lesions were only one third as infectious as unvaccinated children, whereas those with ≥50 lesions were as infectious as unvaccinated children (272). Because the majority of adults are immune and few need vaccination, fewer breakthrough cases have been reported among adults than among children, and breakthrough varicella in adults has tended to be milder than varicella in unvaccinated adults (273,274).

The epidemiology of varicella in tropical and subtropical regions differs from that in the United States. In these regions, a higher proportion of VZV infections are acquired later in life. Persons emigrating from these regions might be more likely to be susceptible to varicella compared with U.S.-born persons and, therefore, are at higher risk for developing varicella if unvaccinated and exposed (275,276).

# Disease in Health-Care Settings and Impact on Health-Care Personnel and Patients

Although relatively rare in the United States since the introduction of varicella vaccine, nosocomial transmission of VZV is well recognized and can be life-threatening to certain patients (277-289). In addition to hospital settings, nosocomial VZV transmission has been reported in long-term-care facilities and a hospital-associated residential facility (290,291). Sources of nosocomial exposure that have resulted in transmission include patients, HCP, and visitors with either varicella or HZ. Both localized and disseminated HZ in immunocompetent as well as immunocompromised patients have been identified as sources of nosocomial transmission of VZV.
Localized HZ has been demonstrated to be much less infectious than varicella; disseminated HZ is considered to be as infectious as varicella (266). Nosocomial transmission has been attributed to delays in the diagnosis or reporting of varicella or HZ and to failures to implement control measures promptly. In hospitals and other health-care settings, airborne transmission of VZV from patients with either varicella or HZ has resulted in varicella in HCP and patients who had no direct contact with the index case-patient (284-288,291). Although all susceptible patients in health-care settings are at risk for severe varicella disease with complications, certain patients without evidence of immunity are at increased risk: pregnant women, premature infants born to susceptible mothers, infants born at <28 weeks' gestation or who weigh <1,000 grams regardless of maternal immune status, and immunocompromised persons of all ages (including persons who are undergoing immunosuppressive therapy, have malignant disease, or are immunodeficient).

VZV exposures among patients and HCP can be disruptive to patient care, time-consuming, and costly even when they do not result in VZV transmission (281,282,292). Studies of VZV exposure in health-care settings have documented that a single provider with unrecognized varicella can result in the exposure of >30 patients and >30 employees (292). Identification of susceptible patients and staff, medical management of susceptible exposed patients at risk for complications of varicella, and furloughing of susceptible exposed HCP are time-consuming and costly (281,282).

With the overall reduction in varicella disease attributable to the success of the vaccination program, the risk for exposure to VZV from varicella cases in health-care settings is likely declining. In addition, an increasing proportion of varicella cases occur in vaccinated persons, who are less contagious. Diagnosis of varicella has become increasingly challenging as a growing proportion of cases occur in vaccinated persons in whom disease is mild and as HCP encounter patients with varicella less frequently. Although not currently routinely recommended for the diagnosis and management of varicella, laboratory testing of suspected varicella cases is likely to become increasingly useful in health-care settings, especially as the positive predictive value of clinical diagnosis declines.

# Vaccine Effectiveness, Duration of Immunity, and Vaccine Safety

# Vaccine Effectiveness

Formal studies to evaluate vaccine efficacy or effectiveness have not been performed among adults. Studies of varicella vaccine effectiveness performed among children indicated good performance of 1 dose for prevention of all varicella (80%-85%) and >95% effectiveness for prevention of moderate and severe disease (266,293). Studies have indicated that a second dose in children produces an improved humoral and cellular immune response that correlates with improved protection against disease (266,294).

Varicella vaccine effectiveness is expected to be lower in adults than in children. Adolescents and adults require 2 doses to achieve seroconversion rates similar to those seen in children after 1 dose (266). A study of adults who received 2 doses of varicella vaccine 4 or 8 weeks apart and were exposed subsequently to varicella in the household estimated an 80% reduction in the expected number of cases (295).
# Duration of Immunity

Serologic correlates of protection against varicella using commercially available assays have not been established for adults (266). In clinical studies, detectable antibody levels have persisted for at least 5 years in 97% of adolescents and adults who were administered 2 doses of varicella vaccine 4-8 weeks apart, but boosts in antibody levels were observed following exposures to varicella, which could account for the long-term persistence of antibodies after vaccination in these studies (295). Studies have demonstrated that whereas 25%-31% of adult vaccine recipients who seroconverted lost detectable antibodies 1-11 years after vaccination (273,296), vaccine-induced VZV-specific T-cell proliferation (a marker for cell-mediated immunity [CMI]) was maintained in 94% of adults 1 and 5 years after vaccination (297). Disease was mild in vaccinated persons who developed varicella after exposure to VZV, even among vaccinees who did not seroconvert or who lost detectable antibody (273,274). Severity of illness and attack rates among vaccinated adults did not increase over time. These studies suggest that VZV-specific CMI affords protection to vaccinated adults, even in the absence of a detectable antibody response.

# Vaccine Safety

The varicella vaccine has an excellent safety profile. In clinical trials, the most common adverse events among adolescents and adults were injection-site complaints (24.4% after the first dose and 32.5% after the second dose) (266,295). Varicella-like rash at the injection site occurred in 3% of vaccine recipients after the first dose and in 1% after the second. A nonlocalized rash occurred in 5.5% of vaccine recipients after the first dose and in 0.9% after the second, with a median of five lesions, peaking at 7-21 days and 0-23 days after vaccination, respectively (295). Data on serious adverse events among adults after varicella vaccination are limited, but the proportion of serious adverse events among all adverse events reported to the Vaccine Adverse Events Reporting System during 1995-2005 was low (5%) among both children and adults (298). Serious adverse events reported among children included pneumonia, hepatitis, HZ (some requiring hospitalization), meningitis with HZ, ataxia, encephalitis, and thrombocytopenic purpura. Not all adverse events reported after varicella vaccination have been laboratory confirmed to be attributable to the vaccine-strain VZV (266,298).

Risk for transmission of vaccine virus was assessed in placebo recipients who were siblings of vaccinated children and among healthy siblings of vaccinated leukemic children (266). The findings suggest that transmission of varicella vaccine virus from healthy persons to susceptible contacts is very rare. The risk might be increased in vaccinees in whom a varicella-like rash develops after vaccination, but this risk also is low, and the benefits of vaccinating HCP without evidence of immunity outweigh this extremely low potential risk. Since implementation of the varicella vaccine program, transmission of vaccine virus has been documented from only eight persons (all of whom had a rash after vaccination), resulting in nine secondary infections among household and long-term-care facility contacts (299). No transmission has been documented from vaccinated HCP.

# Recommendations

# Vaccination

Health-care institutions should ensure that all HCP have evidence of immunity to varicella. This information should be documented and readily available at the work location.
HCP without evidence of immunity to varicella should receive 2 doses of varicella vaccine administered 4-8 weeks apart. If >8 weeks elapse after the first dose, the second dose may be administered without restarting the schedule. Recently vaccinated HCP do not require any restriction of their work activities; however, HCP in whom a vaccine-related rash develops after vaccination should avoid contact with persons without evidence of immunity to varicella who are at risk for severe disease and complications until all lesions resolve (i.e., are crusted over) or, if they develop lesions that do not crust (macules and papules only), until no new lesions appear within a 24-hour period.

Evidence of immunity for HCP includes any of the following (266):
• written documentation of vaccination with 2 doses of varicella vaccine,
• laboratory evidence of immunity† or laboratory confirmation of disease,
• diagnosis or verification of a history of varicella disease by a health-care provider,§ or
• diagnosis or verification of a history of HZ by a health-care provider.

† Commercial assays can be used to assess disease-induced immunity, but they often lack sensitivity to detect vaccine-induced immunity (i.e., they might yield false-negative results).
§ Verification of history or diagnosis of typical disease can be provided by any health-care provider (e.g., a school or occupational clinic nurse, nurse practitioner, physician assistant, or physician). For persons reporting a history of, or presenting with, atypical or mild cases, assessment by a physician or physician designee is recommended, and one of the following should be sought: 1) an epidemiologic link to a typical varicella case or to a laboratory-confirmed case or 2) evidence of laboratory confirmation if testing was performed at the time of acute disease. When such documentation is lacking, persons should not be considered as having a valid history of disease, because other diseases might mimic mild atypical varicella.

In health-care settings, serologic screening before vaccination of personnel without evidence of immunity is likely to be cost effective. Key factors determining cost-effectiveness include the sensitivity and specificity of serologic tests, the nosocomial transmission rate, the seroprevalence of VZV antibody in the personnel population, and policies for managing vaccine recipients who develop a postvaccination rash or who are exposed subsequently to VZV. Institutions may elect to test all unvaccinated HCP, regardless of disease history, because a small proportion of persons with a positive history of disease might be susceptible. For the purpose of screening HCP, a less sensitive and more specific commercial ELISA should be considered. The latex agglutination test can produce false-positive results, and HCP who remained unvaccinated because of false test results subsequently have contracted varicella (289).

Routine testing for varicella immunity after 2 doses of vaccine is not recommended. Available commercial assays are not sensitive enough to detect antibody after vaccination in all instances. Sensitive tests that are not generally available have indicated that 92%-99% of adults develop antibodies after the second dose (266). Seroconversion does not always result in full protection against disease and, given the role of CMI in providing long-term protection, absence of antibodies does not necessarily mean susceptibility. Documented receipt of 2 doses of varicella vaccine supersedes results of subsequent serologic testing.
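An occupational health program could encode the evidence-of-immunity criteria above directly. This minimal sketch (hypothetical record fields; not a validated clinical tool) applies the rule that documented 2-dose vaccination supersedes later serology:

```python
def has_varicella_immunity_evidence(record: dict) -> bool:
    """Return True if an HCP record meets any evidence-of-immunity criterion."""
    if record.get("documented_varicella_vaccine_doses", 0) >= 2:
        # Documented 2-dose vaccination supersedes subsequent serologic results,
        # so a later negative or equivocal titer does not change the answer.
        return True
    return any([
        record.get("laboratory_evidence_of_immunity", False),
        record.get("laboratory_confirmed_disease", False),
        record.get("provider_verified_varicella_history", False),
        record.get("provider_verified_zoster_history", False),
    ])

# Example: 2 documented doses but a later negative titer -> still counts as evidence.
print(has_varicella_immunity_evidence(
    {"documented_varicella_vaccine_doses": 2,
     "laboratory_evidence_of_immunity": False}))  # True
```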
Health-care institutions should establish protocols and recommendations for screening and vaccinating HCP and for management of HCP after exposures in the workplace. Institutions also should consider precautions for HCP in whom rash occurs after vaccination, although they should also consider the possibility of wild-type disease in HCP with recent exposure to varicella or HZ.

A vaccine to prevent HZ is available and recommended for all persons aged ≥60 years without contraindications to vaccination. HZ vaccine is not indicated for HCP for the prevention of nosocomial transmission, but HCP aged ≥60 years may receive the vaccine on the basis of the general recommendation for HZ vaccination to reduce their individual risk for HZ.

# Varicella Control Strategies

Appropriate measures should be implemented to manage cases and control outbreaks (300).

# Patient Care

Only HCP with evidence of immunity to varicella should care for patients who have confirmed or suspected varicella or HZ. Airborne precautions (i.e., negative air-flow rooms) and contact precautions should be employed for all patients with varicella or disseminated HZ and for immunocompromised patients with localized HZ until disseminated infection is ruled out. These precautions should be kept in place until lesions are dry and crusted. If negative air-flow rooms are not available, patients should be isolated in closed rooms and should not have contact with persons without evidence of immunity to varicella. For immunocompetent persons with localized HZ, standard precautions and complete covering of the lesions are recommended.

# Postexposure Management of HCP and Patients

Exposure to VZV is defined as close contact with an infectious person, such as close indoor contact (e.g., in the same room) or face-to-face contact. Experts differ regarding the duration of contact that constitutes an exposure; some suggest 5 minutes and others up to 1 hour, but all agree that it does not include transitory contact (301). All exposed, susceptible patients and HCP should be identified using the criteria for evidence of immunity. For patients who are not immunocompromised or pregnant, birth in the United States before 1980 is an additional criterion of evidence of immunity. Postexposure prophylaxis with vaccination or varicella-zoster immune globulin, depending on immune status, is recommended for exposed HCP and patients without evidence of immunity (266).

HCP who have received 2 doses of vaccine and who are exposed to VZV (varicella, disseminated HZ, or uncovered lesions of localized HZ) should be monitored daily during days 8-21 after exposure for fever, skin lesions, and systemic symptoms suggestive of varicella. HCP can be monitored directly by occupational health programs or infection-control practitioners, or they can be instructed to report fever, headache, or other constitutional symptoms and any atypical skin lesions immediately. HCP should be excluded from the work facility immediately if symptoms occur.

HCP who have received 1 dose of vaccine and who are exposed to VZV (varicella, disseminated HZ, or uncovered lesions of localized HZ), whether in the community or in the health-care setting/workplace, should receive the second dose within 3-5 days after exposure to rash (provided 4 weeks have elapsed since the first dose). After vaccination, management is similar to that for 2-dose vaccine recipients. Those who did not receive a second dose or who received the second dose >5 days after exposure should be excluded from work from day 8 through day 21 after exposure.
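The dose-dependent postexposure steps in this and the following paragraph can be summarized as a simple decision rule. The sketch below (hypothetical names; illustration only) omits the special cases for contraindicated vaccination and covered localized HZ, which are discussed later:

```python
def vzv_postexposure_action(documented_doses: int, days_since_exposure: int) -> str:
    """Summarize the postexposure step for an exposed HCP, keyed to dose count."""
    if documented_doses >= 2:
        return "Monitor daily on days 8-21 after exposure; exclude if symptoms occur."
    if documented_doses == 1:
        if days_since_exposure <= 5:
            return ("Give the second dose (if >=4 weeks since the first), "
                    "then manage as a 2-dose recipient.")
        return "Second dose not given within 5 days: exclude from work days 8-21."
    # Unvaccinated with no other evidence of immunity (see the next paragraph):
    return "Furlough days 8-21 after exposure; vaccinate as soon as possible."

print(vzv_postexposure_action(documented_doses=1, days_since_exposure=3))
```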
Unvaccinated HCP who have no other evidence of immunity and who are exposed to VZV (varicella, disseminated HZ, or uncovered lesions of localized HZ) are potentially infective from days 8-21 after exposure and should be furloughed during this period. They should receive postexposure vaccination as soon as possible. Vaccination within 3-5 days of exposure to rash might modify the disease if infection occurred. Vaccination >5 days postexposure is still indicated because it induces protection against subsequent exposures (if the current exposure did not cause infection). For HCP at risk for severe disease for whom varicella vaccination is contraindicated (e.g., pregnant or immunocompromised HCP), varicella-zoster immune globulin after exposure is recommended. The varicella-zoster immune globulin product currently used in the United States, VariZIG (Cangene Corporation, Winnipeg, Canada), is available under an Investigational New Drug Application Expanded Access protocol; a sample release form is available at http://www.fda.gov/downloads/BiologicsBloodVaccines/SafetyAvailability/UCM176031.pdf. Varicella-zoster immune globulin might prolong the incubation period by a week, thus extending the time during which personnel should not work from 21 to 28 days. In case of an outbreak, HCP without evidence of immunity who have contraindications to vaccination should be excluded from the outbreak setting through 21 days after rash onset of the last identified case-patient because of the risk for severe disease in these groups. If the VZV exposure was to localized HZ with covered lesions, no work restrictions are needed if the exposed HCP had previously received at least 1 dose of vaccine or received the first dose within 3-5 days postexposure. A second dose should be administered at the appropriate interval. HCP should be monitored daily during days 8-21 after exposure for fever, skin lesions, and systemic symptoms suggestive of varicella and excluded from the work facility if symptoms occur. If at least 1 dose was not received, restriction from patient contact is recommended.
# Diseases for Which Vaccination Might Be Indicated in Certain Circumstances
Health-care facilities and other organizations should consider including in their vaccination programs vaccines to prevent meningococcal disease, typhoid fever, and polio for HCP who have certain health conditions or who work in laboratories or regions outside the United States where the risk for work-related exposure exists.
# Meningococcal Disease
# Background
# Epidemiology and Risk Factors
Meningococcal disease is rare among adults in the United States, and incidence has decreased to historic lows; during 1998-2007, the average annual incidence of meningococcal disease was 0.28 (range: 0.26-0.31) cases per 100,000 population among persons aged 25-64 years (302). Routine vaccination with meningococcal conjugate vaccine is recommended by ACIP for adolescents aged 11-18 years, with the primary dose at age 11-12 years and the booster dose at age 16 years. In 2010, coverage with meningococcal conjugate vaccine among persons aged 13-17 years was 62.7% (22). Nosocomial transmission of Neisseria meningitidis is rare, but HCP have become infected after direct contact with respiratory secretions of infected persons (e.g., managing an airway during resuscitation) and in laboratory settings.
HCP can decrease the risk for infection by adhering to precautions to prevent exposure to respiratory droplets (303,304) and by taking antimicrobial chemoprophylaxis if exposed directly to respiratory secretions.
# Vaccine Effectiveness, Duration of Immunity, and Vaccine Safety
Two quadrivalent (A, C, W-135, Y) conjugate meningococcal vaccines (MCV4) are licensed for persons aged ≤55 years (305,306). Both protect against two of the three serogroups that cause the majority of meningococcal disease in the United States and against 75% of disease among adults. Available data indicate that the majority of persons do not have enough circulating functional antibody to be protected >5 years after a single dose of MCV4. Both vaccines had similar safety profiles in clinical trials. Quadrivalent (A, C, W-135, Y) meningococcal polysaccharide vaccine (MPSV4) is available for use in persons aged >55 years. No vaccine for serogroup B meningococcal disease is licensed in the United States.
# Recommendations
# Vaccination
MCV4 is not recommended routinely for all HCP.
# HCP Recommended to Receive Vaccine to Prevent Meningococcal Disease
A 2-dose vaccine series is recommended for HCP with known asplenia or persistent complement component deficiencies, because these conditions increase the risk for meningococcal disease. HCP traveling to countries in which meningococcal disease is hyperendemic or epidemic also are at increased risk for infection and should receive vaccine. Those with known asplenia or persistent complement component deficiencies should receive a 2-dose vaccine series. All other HCP traveling to work in high-risk areas should receive a single dose of MCV4 before travel if they have never received it or if they received it >5 years previously. Clinical microbiologists and research microbiologists who might be exposed routinely to isolates of N. meningitidis should receive a single dose of MCV4 and receive a booster dose every 5 years if they remain at increased risk. HCP aged >55 years who have any of the above risk factors for meningococcal disease should be vaccinated with MPSV4 (305).
# HCP Who May Elect to Receive Vaccine to Prevent Meningococcal Disease
HCP with known HIV infection are likely at increased risk for meningococcal disease and may elect vaccination. If these HCP are vaccinated, they should receive a 2-dose vaccine series (307).
# Booster Doses
HCP who receive the 2-dose MCV4 vaccine series and/or remain in a group at increased risk should receive a booster dose every 5 years (306).
# Postexposure Management of Exposed HCP
Postexposure prophylaxis is advised for all persons who have had intensive, unprotected contact (i.e., without wearing a mask) with infected patients (e.g., via mouth-to-mouth resuscitation, endotracheal intubation, or endotracheal tube management), including HCP who have been vaccinated with either the conjugate or polysaccharide vaccine (3). Antimicrobial prophylaxis can eradicate carriage of N. meningitidis and prevent infections in persons who have unprotected exposure to patients with meningococcal infections (305). Rifampin, ciprofloxacin, and ceftriaxone are effective in eradicating nasopharyngeal carriage of N. meningitidis. In areas of the United States where ciprofloxacin-resistant strains of N. meningitidis have been detected (as of August 30, 2011, only parts of Minnesota and North Dakota), ciprofloxacin should not be used for chemoprophylaxis (308). Azithromycin can be used as an alternative.
Ceftriaxone can be used during pregnancy. Postexposure prophylaxis should be administered within 24 hours of exposure when feasible; postexposure prophylaxis administered >14 days after exposure is of limited or no value (305). HCP not otherwise indicated for vaccination may be recommended to receive meningococcal vaccine in the setting of a community or institutional outbreak of meningococcal disease caused by a serogroup contained in the vaccine.
# Typhoid Fever
# Background
# Epidemiology and Risk Factors
The incidence of typhoid fever declined steadily in the United States during 1900-1960 and has since remained low. During 1999-2006, on average, 237 cases were reported annually to the National Typhoid and Paratyphoid Fever Surveillance System (309). The median age of patients was 22 years, and 54% were male; 79% reported foreign travel during the 30 days before onset of symptoms. Among international travelers, the risk for Salmonella Typhi infection appears to be highest for those who visit friends and relatives in countries in which typhoid fever is endemic and for those who visit (even for a short time) the most highly endemic areas (e.g., the Indian subcontinent) (310). Increasing resistance to fluoroquinolones such as ciprofloxacin, which are used to treat multidrug-resistant S. Typhi, has been seen particularly among travelers to south and southeast Asia (311). Isolates with decreased susceptibility to ciprofloxacin (DCS) do not qualify as resistant according to current Clinical and Laboratory Standards Institute criteria but are associated with poorer clinical outcomes (311,312). Resistance to nalidixic acid, a quinolone, is a marker for DCS and increased from 19% in 1999 to 59% in 2008 (313). Nine isolates resistant to ciprofloxacin also were seen during this period (313). Although overall S. Typhi infections have declined in the United States, increased incidence and antimicrobial resistance, including resistance to fluoroquinolones, have been seen for paratyphoid fever caused by Paratyphi A (314). No vaccines that protect against Paratyphi A infection are available.
# Transmission and Exposure in Health-Care Settings
During 1985-1994, seven cases of laboratory-acquired typhoid fever were reported among persons working in microbiology laboratories, only one of whom had been vaccinated (315). Additionally, S. Typhi might be transmitted nosocomially via the hands of infected persons (315).
# Vaccine Effectiveness, Duration of Immunity, and Vaccine Safety
Two typhoid vaccines are distributed in the United States: oral live-attenuated Ty21a vaccine (one enteric-coated capsule taken on alternate days for a total of four capsules) and the capsular polysaccharide parenteral vaccine (one 0.5-mL intramuscular dose). Both vaccines protect 50%-80% of recipients. To maintain immunity, booster doses of the oral vaccine are required every 5 years, and booster doses of the injected vaccine are required every 2 years. Complication rates are low for both types of S. Typhi vaccines. During 1994-1999, serious adverse events requiring hospitalization occurred in an estimated 0.47 to 1.3 per 100,000 doses, and no deaths occurred (310). However, live-attenuated Ty21a vaccine should not be used among immunocompromised persons, including those infected with HIV (316).
Theoretic concerns have been raised about the immunogenicity of live, attenuated Ty21a vaccine in persons concurrently receiving antimicrobials (including antimalarial chemoprophylaxis), viral vaccines, or immune globulin (317). A third type of vaccine, a parenteral heat-inactivated vaccine associated with higher reactogenicity, was discontinued in 2000 (310,318).
# Vaccination
Microbiologists and others who work frequently with S. Typhi should be vaccinated with either of the two licensed and available vaccines. Booster vaccinations should be administered on schedule according to the manufacturers' recommendations.
# Controlling the Spread of Typhoid Fever
Personal hygiene, particularly hand hygiene before and after all patient contacts, will minimize the risk for transmitting enteric pathogens to patients. However, HCP who contract an acute diarrheal illness accompanied by fever, cramps, or bloody stools are likely to excrete substantial numbers of infective organisms in their feces. Excluding these HCP from care of patients until the illness has been evaluated and treated can prevent transmission (3).
# Poliomyelitis
# Background
# Epidemiology and Risk Factors
In the United States, the last indigenously acquired cases of poliomyelitis caused by wild poliovirus occurred in 1979, and the Americas were certified to be free of indigenous wild poliovirus in 1994 (319,320). With the complete transition from use of oral poliovirus vaccine (OPV) to inactivated poliovirus vaccine (IPV) in 2000, vaccine-associated paralytic poliomyelitis (VAPP) attributable to OPV also has been eliminated (321,322).
# Transmission and Exposure in Health-Care Settings
Poliovirus can be recovered from infected persons, including from pharyngeal specimens, feces, urine, and (rarely) cerebrospinal fluid. HCP and laboratory workers might be exposed if they come into close contact with infected persons (e.g., travelers returning from areas where polio is endemic) or with specimens that contain poliovirus.
# Vaccine Effectiveness, Duration of Immunity, and Vaccine Safety
Both IPV and OPV are highly immunogenic and effective when administered according to their schedules. In studies conducted in the United States, 3 doses of IPV resulted in 100% seroconversion for types 2 and 3 poliovirus and 96%-100% for type 1 (326). Immunity is prolonged and might be lifelong. IPV is well tolerated, and no serious adverse events have been associated with its use. IPV is an inactivated vaccine and does not cause VAPP. IPV is contraindicated in persons with a history of hypersensitivity to any component of the vaccine, including 2-phenoxyethanol, formaldehyde, neomycin, streptomycin, and polymyxin B. OPV is no longer available in the United States.
# Recommendations
# Vaccination
Because the majority of adults born in the United States are likely immune to polio as a result of vaccination during childhood, poliovirus vaccine is not routinely recommended for persons aged ≥18 years. The childhood recommendation for poliovirus vaccine consists of 4 doses at ages 2, 4, and 6-18 months and 4-6 years. However, vaccination is recommended for HCP who are at greater risk for exposure to polioviruses than the general population, including laboratory workers who handle specimens that might contain polioviruses and HCP who have close contact with patients who might be excreting wild polioviruses, including HCP who travel to work in areas where polioviruses are circulating.
Unvaccinated HCP should receive a 3-dose series of IPV, with dose 2 administered 4-8 weeks after dose 1 and dose 3 administered 6-12 months after dose 2. HCP who have previously completed a routine series of poliovirus vaccine and who are at increased risk can receive a lifetime booster dose of IPV if they remain at increased risk for exposure. Available data do not indicate the need for more than a single lifetime booster dose with IPV for adults.
# Controlling the Spread of Poliovirus
Standard precautions always should be practiced when handling biologic specimens. Suspect cases require an immediate investigation, including collection of appropriate laboratory specimens, and control measures. All suspect or confirmed cases should be reported immediately to the local or state health department.
# Other Vaccines Recommended for Adults
Certain vaccines are recommended for adults on the basis of age or other individual risk factors but not because of occupational exposure (327). Vaccine-specific ACIP recommendations should be consulted for details on schedules, indications, contraindications, and precautions for these vaccines. HCP, including persons exposed to sewage, have not been shown to be at increased risk for hepatitis A virus infection because of occupational exposure. Hepatitis A vaccine is recommended for persons with chronic liver disease, international travelers, and certain other groups at increased risk for exposure to hepatitis A.
# Catch-Up and Travel Vaccination
# Catch-Up Programs
Managers of health-care facilities should implement catch-up vaccination programs for HCP who already are employed, in addition to developing policies for achieving high vaccination coverage among newly hired HCP. HCP vaccination records could be reviewed annually during the influenza vaccination season or concurrent with annual TB testing. This strategy could help prevent outbreaks of vaccine-preventable diseases. Because education, especially when combined with other interventions such as reminder/recall systems and low or no out-of-pocket costs, enhances the success of many vaccination programs, informational materials should be available to assist in answering questions from HCP regarding the diseases, vaccines, and toxoids as well as the program or policy being implemented (120,328). Conducting educational workshops or seminars several weeks before the initiation of a catch-up vaccination program might promote acceptance of program goals.
# Travel
Hospital personnel and other HCP who perform research or health-care work in foreign countries might be at increased risk for acquiring certain diseases that can be prevented by vaccines recommended in the United States (e.g., hepatitis B, influenza, MMR, Tdap, poliovirus, varicella, and meningococcal vaccines) and travel-related vaccines (e.g., hepatitis A, Japanese encephalitis, rabies, typhoid, or yellow fever vaccines) (329). Elevated risks for acquiring these diseases might stem from exposure to patients in health-care settings (e.g., poliomyelitis and meningococcal disease) but also might arise from circumstances unrelated to patient care (e.g., high endemicity of hepatitis A or exposure to arthropod-vector diseases [e.g., yellow fever]). All HCP should seek the advice of a health-care provider familiar with travel medicine at least 4-6 weeks before travel to ensure that they are up to date on routine vaccinations and that they receive vaccinations recommended for their destination (329).
Although bacille Calmette-Guérin vaccination is not recommended routinely in the United States, HCP should discuss the potential benefits and other consequences of this vaccination with their health-care provider.
# Work Restrictions
Work restrictions for susceptible HCP (i.e., those with no history of vaccination or with documented lack of immunity) exposed to or infected with certain vaccine-preventable diseases can range from restricting individual HCP from patient contact to complete exclusion from duty (Table 5). A furloughed employee should be considered in the same category as an employee excluded from the facility. Specific recommendations concerning work restrictions in these circumstances have been published previously (3,11). Table 5 footnotes include the following:
• The risk for rubella vaccine-associated malformations in the offspring of women who are pregnant when vaccinated or who become pregnant within 1 month after vaccination is negligible. Such women should be counseled regarding the theoretical basis of concern for the fetus.
• For example, immunocompromised patients or pregnant women.
• HCP who develop acute respiratory symptoms without fever should be considered for evaluation by occupational health to determine the appropriateness of contact with patients and can be allowed to work unless caring for patients in a protective environment; these personnel should be considered for temporary reassignment or exclusion from work for 7 days from symptom onset or until the resolution of all noncough symptoms, whichever is longer. If symptoms such as cough and sneezing are still present, HCP should wear a facemask during patient-care activities. The importance of performing frequent hand hygiene (especially before and after each patient contact) should be reinforced.
# Acknowledgments
The following persons contributed to this report: Rachel J. Wilson, Geoff A. Beckett, MPH, Division of Viral Hepatitis, National Center for HIV/AIDS, Viral Hepatitis, STD, and TB Prevention, CDC; LaDora O. Woods, BS, Carter Consulting, Inc., Atlanta, Georgia.
for their support in typing and preparing the document for publication.
Table 1-1. Respirator recommendations (adapted from Schiager et al. 1981)
Average work shift concentration of radon progeny (WL)* | Respirator recommendation
0 to 0.083 (1/12) | No respirator required
>0.083 to 0.83 | Any air-purifying half-mask respirator equipped with a HEPA filter; any SAR equipped with a half mask and operated in a demand (negative-pressure) mode; or any more protective respirator
>0.83 to 166.0, unknown concentrations, or emergency entry | Any SCBA equipped with a full facepiece and operated in a pressure demand or other positive pressure mode (credit factors 2.9, 10.0); or any SAR equipped with a full facepiece operated in a pressure demand or other positive pressure mode in combination with an auxiliary self-contained breathing apparatus operated in a pressure demand or other positive pressure mode (credit factors 2.9, 10.0)
Emergency escape | Any self-contained self-rescuer (SCSR) (credit factors not applicable)
*As estimated using the sampling techniques described in Appendix IV. HEPA = high-efficiency particulate air; SAR = supplied-air respirator; PAPR = powered air-purifying respirator; NA = not applicable. See the credit factors in Section 7.
# FOREWORD
As Director of the National Institute for Occupational Safety and Health (NIOSH), I am accustomed to making decisions on difficult issues, but few issues have presented the legislative, scientific, and public health dilemmas that accompany recommending criteria to control the exposure of workers to radon progeny in underground mines. The development of this criteria document is subject to the provisions of two legislative mandates. First, the Occupational Safety and Health Act of 1970 requires safe and healthful working conditions for every working person. The Act further requires NIOSH to preserve our human resources by providing medical and other criteria that will ensure, insofar as practicable, that no worker will suffer diminished health, functional capacity, or life expectancy as a result of work experience. The Act also authorizes NIOSH to recommend new criteria to further improve working conditions [Sections 22(c) and (d)]. In addition, the Federal Coal Mine Health and Safety Act of 1969 and the Federal Mine Safety and Health Amendments Act of 1977 require NIOSH to develop and revise recommended occupational safety and health standards for mine workers.
Specifically, the Secretary of Health, Education, and Welfare (now the Secretary of Health and Human Services) is required to consider, "in addition to the attainment of the highest degree of health protection for the miner . . . the latest available scientific data in the field, the technical feasibility of the standards, and experience gained under this and other health statutes" [Title I, Section 101(d)]. These mandates have required NIOSH to weigh its obligation to assure the highest degree of health protection for miners against the technical feasibility of the recommended standard in the development of recommendations for controlling radon progeny exposure in underground mines. The control of exposure to radon progeny presents an unprecedented problem because of the ubiquitous yet variable nature of their presence in mines and the ambient environment. To complicate this matter further, recent reports indicate that an exposure-related health risk may exist at background exposure levels. The full ramifications of this dilemma can easily be appreciated by considering two points. The first is that dilution ventilation (the primary engineering approach to reducing the concentration of radon progeny in mines) is accomplished by the exchange of mine air with air from the outside environment. Obviously, this approach is not a viable option for the total elimination of radon progeny in underground mines because the outside air is also contaminated with radon progeny. In addition, this approach would not be a prudent community environmental public health measure in some situations because it involves releasing an additional burden of radon progeny to the ambient environment and thereby contributing to the background level in the immediate area of the mine. Thus ventilation cannot be used to totally eliminate exposure to radon progeny in mines. The second point to consider in this dilemma is that the variable nature of radon progeny exposure in the ambient environment precludes recommending an annual cumulative exposure limit that includes both occupational and ambient contributions. Because ambient exposure varies, such a recommendation would result in an occupational exposure limit and associated risk that would vary with the locale. This approach is obviously undesirable, for it would lead to a nightmare of confusion and complicated enforcement requirements and would probably result in unequal protection of miners. Data from both human and animal studies clearly demonstrate a direct link between lung cancer and radon exposure. Specific epidemiological studies provide a basis for quantitatively estimating human risk at various exposure levels. Such analyses clearly show that a radon exposure of 4 WLM (4 working level months) per year over a 30-year working lifetime (the current Mine Safety and Health Administration standard) poses a significant and unacceptable risk of lung cancer. This risk must be substantially reduced. In recommending an exposure limit for radon progeny, NIOSH considered not only the results of its own risk assessment and the technical feasibility of the recommended standard, but also the uncertainty of the data available on risk. Uncertainties are inherent in both the risk assessment methods and the scientific data on which the risk assessment is based. This fact must be understood and acknowledged.
Some of the factors involved in these uncertainties include the choice of risk assessment method and model, the measurement methods used for data collection, and risk estimates derived from data that are heavily weighted with higher exposures. The first of these factors in risk uncertainty involves the choice of a risk assessment method and/or model (such as the Cox proportional hazards model used in the NIOSH risk assessment study). NIOSH has attempted to develop a mathematical model that best describes the lung cancer risk in miners exposed to radon progeny. The use of a risk assessment model is merely a practical way to work with a very complex problem. There are modeling approaches other than the one chosen for this study. Each choice would result in a somewhat different description of the relationship between radon progeny exposure and lung cancer risk. NIOSH has attempted to compare the alternatives that are available and applicable. NIOSH scientists have considered the differences that might arise through a review of the available scientific literature and discussions with other scientists who have evaluated this exposure-related lung cancer risk. Although alternative models might yield minimally different quantitative risk estimates, none of them would lead the Institute to a qualitatively different risk assessment (i.e., that exposures to radon progeny at the current standard are associated with excesses of lung cancer). The second factor involved in risk uncertainty is the measurement method used for data collection. This study involves a follow-up period of more than 35 years, more than 3,000 miners, and thousands of measurements. The older data are subject to greater uncertainty than the more recent data because of improvements in the entire measurement process over the course of the study. The third factor involved in risk uncertainty is the process of generating risk estimates at lower exposure levels. One consideration is that such risk estimates are derived from data heavily weighted by higher exposures (note that the annual cumulative exposures of most miners in this study are higher than either the current MSHA standard or the proposed NIOSH recommended exposure limit [REL]). Another consideration is the desirability of placing occupational risk in the context of background exposure risk. However, the latter has not been evaluated and would have to be estimated on the basis of occupational data. We therefore do not believe that it is currently possible to contrast these two types of risks. Nonetheless, EPA has generated some initial information on background exposure risk in A Citizen's Guide to Radon. This document indicates that action should be taken to lower radon progeny levels in homes with measured concentrations of 0.02 WL or greater. NIOSH estimates that this concentration would probably result in a cumulative exposure that is less than 1 WLM but within an order of magnitude of that value. New information is clearly needed on background exposure levels and the hazards associated with such exposure before occupational and nonoccupational risks can be reliably quantified and validly contrasted. Until these data are available, the final target exposure limits cannot be identified for control of this hazard in our total environment. The uncertainties in the data and a recent study commissioned by the Bureau of Mines on the feasibility of controlling radon progeny levels in mines have been weighed along with the available evidence and the obligations of NIOSH.
This process has resulted in an REL of 1 WLM per year. Our own quantitative risk assessment clearly shows that significant health risks are posed by an exposure level of 1 WLM per year over a 30-year working lifetime. NIOSH therefore regards this REL as an upper limit and further recommends that mine operators limit exposure to radon progeny to the lowest levels possible. In addition, NIOSH wishes to emphasize that this recommended standard contains many important provisions in addition to the annual exposure limit. These include recommendations for limits on work shift concentrations of radon progeny, sampling and analytical methods, recordkeeping, medical surveillance, posting of hazard information, respiratory protection, worker education and notification, and sanitation. All of these recommendations help minimize risk. In summary, NIOSH has the legislative, scientific, and public health responsibility to protect the health of miners by developing recommendations that eliminate or minimize occupational risks. Although I am approving the recommended exposure limit of 1 WLM per year, I do not feel that this part of the recommended standard fully satisfies the Institute's commitment to protect the health of all of the Nation's miners. Future research may provide evidence of new and more effective methods for reducing occupational exposures to radon progeny, more reliable risk estimates at low exposure levels, and improved risk assessment methods. If new information demonstrates that a lower exposure limit constitutes both prudent public health and a feasible engineering policy, NIOSH will revise its recommended standard.
# I. RECOMMENDATIONS FOR A RADON PROGENY STANDARD
The National Institute for Occupational Safety and Health (NIOSH) recommends that worker exposure to radon progeny in underground mines be controlled by compliance with this recommended standard, which is designed to protect the health of underground miners over a working lifetime of 30 years. Mine operators should regard the recommended exposure limit for radon progeny as the upper boundary for exposure; they should make every effort to limit radon progeny to the lowest possible concentrations. This recommended standard will be reviewed and revised as necessary. Radon progeny (also known as radon daughters) are the short-lived decay products of radon, an inert gas that is one of the natural decay products of uranium. The short-lived radon progeny (i.e., polonium-218, lead-214, bismuth-214, and polonium-214) are solids and exist in air as free ions or as ions attached to dust particles. The NIOSH recommended exposure limit (REL) is based on (1) evidence that a substantial risk of lung cancer is associated with occupational exposure to radon progeny, and (2) the technical feasibility of reducing exposures. In this document, NIOSH presents recommendations that will protect miners employed year-round at any mine work area for as long as 30 years (the period of time used by MSHA as a miner's working lifetime). The exposure limit contained in this recommended standard is measurable by techniques that are valid, reproducible, and available to industry and government agencies. NIOSH has concluded that current technology is sufficient to achieve compliance with the recommended standard.
Because knowledge of the carcinogenic process is incomplete and no data exist to demonstrate a safe level of exposure to carcinogens, NIOSH maintains that occupational exposure to carcinogens such as radon progeny should be reduced to the lowest level technically achievable. Compliance with this standard does not relieve mine operators from complying with other applicable standards.
# Section 1 - Definitions
(a) Miner. Miners include all mine personnel who are involved with any underground operation (e.g., drilling, blasting, haulage, and maintenance).
(b) Working Level. One working level (WL) is any combination of short-lived radon progeny in 1 liter (L) of air that will ultimately release 1.3 x 10^5 million electron volts (MeV) of alpha energy during decay to lead-210.
(c) Working Level Month. A working level month (WLM) is the product of the radon progeny concentration in WL and the exposure duration in months. For example, if a miner is exposed at a concentration of 0.083 WL for 1 month (170 hours*), then the cumulative exposure for the month is 0.083 WLM. If the cumulative exposure of the same miner is 0.083 WLM for each of 12 consecutive months (2,040 hr), then the cumulative exposure for the year is 1 WLM.
(d) Work Area. A work area is any stope, drift heading, travelway, haulageway, shop, station, lunchroom, or any other underground location where miners work, travel, or congregate.
(e) Average Work Shift Concentration. The average work shift concentration is the average concentration of radon progeny present during a work shift in a given area. This concentration is used to represent the miner's breathing zone exposure to radon progeny.
# Section 2 - Environment (Workplace Air)
(a) Recommended Exposure Limit (REL). Exposure to radon progeny in underground mines shall not exceed 1 WLM per year, and the average work shift concentration shall not exceed 1/12 of 1 WL (or 0.083 WL). The REL of 1 WLM per year is an upper limit of cumulative exposure, and every effort shall be made to reduce exposures to the lowest levels possible.
(b) Sampling and Analysis. Grab samples for radon progeny in the workplace shall be taken and analyzed using working level monitors, the Kusnetz method, or any other method at least equivalent in accuracy, precision, and sensitivity. Sampling and analytical methods are described in Chapter II. Details of the recommended sampling strategy are contained in Appendix IV. The recommended sampling strategy allows the use of grab samples for estimating the average work shift concentration of radon progeny.
# Section 3 - Monitoring and Recording Exposures
(a) Exposure Monitoring. All operators of underground mines shall perform environmental evaluations in all work areas to determine exposures to radon progeny.
(1) An initial environmental evaluation shall be conducted in each work area to determine the average work shift concentration of radon progeny.
(2) Periodic environmental evaluations shall be conducted at intervals (as described in Appendix IV) in each work area. An alternative sampling strategy may be used if the mine operator can demonstrate that it effectively monitors exposure to radon progeny.
*Note that Mine Safety and Health Administration (MSHA) regulations are based on 173 hr per month.
(3) If environmental monitoring in a work area indicates that the average work shift concentration of radon progeny exceeds 1/12 WL (as described in Appendix IV), the mine operator shall prepare an action plan describing the types of engineering controls and work practices that will be implemented to reduce the average work shift concentration in that area.
(b) Exposure Monitoring Records. The mine operator shall determine and record the exposure to radon progeny. Each miner's exposure shall be calculated using monitoring data obtained for the areas in which the miner worked. These records shall include (1) locations, dates, and times of measurements, (2) sampling and analytical methods used, (3) the number, duration, and results of the samples taken, and (4) all items required by Sections 3(b)(2) and (3). All records shall be retained at the mine site or nearest mine office as described in Section 10.
(1) Calculating the Miner's Daily Exposure. The average work shift concentration of radon progeny for each work area shall be used to calculate each miner's daily exposure. If no monitoring has been conducted in a work area on a particular day, the daily average work shift concentration for that area shall be determined by averaging the results obtained on the last day of monitoring with the results from the next day that monitoring is conducted. A miner's exposure (in WLM) for a given area is calculated as follows:
WLM = (WL x T) / 170 hr
where WL is the average work shift concentration of radon progeny, T is the total time (hours) spent in the area, and 170 is the number of hours worked per month. A miner's total cumulative exposure for the year is the sum of the daily exposures (as calculated above) for all work areas in which time was spent during the work shift.
(2) Uranium Mines. Exposure to radon progeny shall be recorded daily for each uranium miner. These records shall include the miner's name, social security number, the time spent in each work area, the estimated exposure to radon progeny for each work area as determined in Section 3(b)(1), and (if applicable) the type of respiratory protection and duration of its use.
(3) Nonuranium Mines. Exposure to radon progeny shall be recorded daily for all miners assigned to work in areas where environmental monitoring for radon progeny is required as described in Appendix IV. These exposure monitoring records shall include the miner's name, social security number, the time the miner has spent in each work area, the estimated exposure to radon progeny for each work area as determined in Section 3(b)(1), and (if applicable) the type of respiratory protection used and the duration of its use.
(4) Respirator Credit. The type of respirator worn and the credit given for wearing it (see Section 7) shall be recorded for each miner. Mine operators shall record both the average work shift concentration of radon progeny and the adjusted exposure concentration calculated by using the respirator credit. The adjusted exposure concentration shall be used to determine the miner's cumulative exposure for compliance with the REL of 1.0 WLM/year.
# Section 4 - Medical Surveillance
(a) General
(1) The mine operator shall institute a medical surveillance program for all miners.
(2) The mine operator shall ensure that all medical examinations and procedures are performed by or under the direction of a licensed physician.
(3) The mine operator shall provide the required medical surveillance at a reasonable time and place without loss of pay or cost to the miners.
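To make the bookkeeping in Section 3(b)(1) concrete, the following is a minimal sketch of the daily and annual exposure calculation using the WLM = (WL x T)/170 formula above. It is illustrative only; the function and variable names are assumptions, not part of the standard:

```python
HOURS_PER_MONTH = 170  # NIOSH basis; MSHA regulations use 173 hr per month

def daily_wlm(avg_shift_wl: float, hours_in_area: float) -> float:
    """Exposure (WLM) accrued in one work area: WLM = (WL x T) / 170."""
    return avg_shift_wl * hours_in_area / HOURS_PER_MONTH

def cumulative_wlm(records) -> float:
    """Sum (area WL, hours) pairs over every area and every shift recorded."""
    return sum(daily_wlm(wl, hours) for wl, hours in records)

# Example: one shift split between 6 hr at 0.083 WL and 2 hr at 0.05 WL.
shift = [(0.083, 6.0), (0.05, 2.0)]
print(round(cumulative_wlm(shift), 4))  # 0.0035 WLM accrued that day
```

Summing such daily values over all shifts of the year gives the cumulative exposure that is compared against the REL of 1 WLM.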
(4) The mine operator shall provide the following information to the physician performing or responsible for the medical surveillance program: a copy of the radon progeny standard, the miner's duration of employment, the miner's cumulative exposure to radon progeny (or an estimate of potential exposure to radon progeny if the miner is a new employee), a description of the miner's duties as they relate to his exposure, and a description of any protective equipment the miner has used or may be required to use.
(5) The mine operator or physician shall counsel tobacco-smoking miners about their increased risk of developing lung cancer from the combined exposure to tobacco smoke and radon progeny. The mine operator or physician shall encourage the miner to participate in a smoking cessation program. The mine operator shall enforce a policy prohibiting smoking at the mine site.
(6) The physician shall provide the mine operator and the miner with a written statement describing any medical conditions found during the preplacement or periodic medical examinations that may increase the miner's health risk when exposed to radon progeny. This written statement shall not reveal specific findings, but shall include any recommended limitations on the miner's exposure to radon progeny or ability to use respirators and other personal protective equipment.
(b) Preplacement Medical Examination. The preplacement medical examination of each miner shall include the following:
(1) A comprehensive medical and work history (including smoking history) that emphasizes the identification of existing medical conditions and attempts to elicit information about previous occupational exposure to radon progeny.
(2) A thorough examination of the miner's respiratory system, including pulmonary function tests. The initial and subsequent pulmonary function tests shall include determination of forced vital capacity (FVC) and forced expiratory volume in 1 second (FEV1) using the current American Thoracic Society (ATS) recommendations on instrumentation, technician training, and interpretation. A prospective miner with symptomatic, spirometric, or radiographic evidence of pulmonary impairment should be counseled about the risks of continued exposure.
(3) A posteroanterior chest X-ray using the current ATS recommendations on instrumentation, technician training, and interpretation.
(4) Other tests deemed appropriate by the physician.
(c) Periodic Medical Examination. The periodic medical examination for each miner shall include the following:
(1) An annual update of medical and work histories (including smoking history).
(2) An evaluation of the miner's respiratory system. Because of the potential for chronic respiratory disease, this evaluation shall include spirometry at intervals determined by the physician. Miners who have spirometric or radiographic evidence or symptoms of pulmonary impairment should be counseled by the physician regarding the risks of continued exposure.
(3) A posteroanterior chest X-ray at intervals determined by the physician using the current ATS recommendations on instrumentation, technician training, and interpretation. Periodic chest X-rays are recommended for monitoring miners exposed to fibrogenic respiratory hazards (e.g., quartz). Ordinarily, chest X-rays may be obtained every 5 years for the first 15 years of employment and every 2 years thereafter, depending on the nature and intensity of exposures and their related health risks.
A recent X-ray obtained for other purposes (e.g., upon hospitalization) may be substituted for the periodic X-ray if it is of acceptable quality.
(4) Other tests deemed appropriate by the physician.
# Section 5 - Posting
All warning signs shall be printed in both English and the predominant language of non-English-reading miners. Miners unable to read the posted signs shall be informed verbally about the hazardous areas of the mine and the instructions printed on the signs.
(a) Readily visible signs containing the following information shall be posted at mine entrances or in work areas that require environmental monitoring for radon progeny as described in Appendix IV:
AUTHORIZED PERSONNEL ONLY
DANGER! POTENTIAL RADIATION HAZARD
RADON PROGENY
(b) If respiratory protection is required, the following statement shall be added in large letters to the sign required in Section 5(a):
RESPIRATORY PROTECTION REQUIRED IN THIS AREA
# Section 6 - Work Practices and Engineering Controls
Effective work practices and engineering controls shall be instituted by the mine operator to reduce the concentration of radon progeny to the lowest technically achievable limit. Because there is no typical mine and each operation has some unique features, the work practices and engineering controls in this section may need to be adapted for use in particular situations.
(a) Work Practices
(1) Ore Extraction and Handling. Examples of effective ore extraction and handling procedures include the following: minimizing the number of ore faces simultaneously exposed, performing retreat mining toward intake air, limiting the underground storage and handling of ore, locating ore transfer points away from ventilation intakes, removing dust spilled from ore cars, minimizing ore spillage by maintaining roadways and carefully loading haulage vehicles, and covering ore until it is moved to the surface.
(2) Blasting. Blasting should be performed at the end of the work shift whenever possible. Miners shall be evacuated from exhaust drifts until environmental sampling confirms that the average work shift concentration of radon progeny does not exceed 1/12 WL. Refer to Section 7 if respiratory protection is required for subsequent reentry.
(3) Worker Rotation. The mine operator shall not use the planned rotation of miners to maintain an individual's exposure below the REL of 1.0 WLM per year. NIOSH acknowledges, however, that some miners may inadvertently be exposed to short-term high concentrations of radon progeny. For example, such exposures may occur when engineering controls fail. To ensure that the miners' cumulative exposure remains below the REL in such circumstances, it may be necessary to transfer them to other jobs or work areas that have lower concentrations of radon progeny. Miners transferred under these circumstances shall retain their pay as prescribed for coal miners under Section 203(b) of the Federal Coal Mine Safety and Health Act of 1977.
(b) Engineering Controls. Mechanical exhaust ventilation used alone or in combination with other engineering controls and work practices can effectively reduce exposures to radon progeny. Ventilation systems discharging outside the mine shall conform with applicable local, State, and Federal air pollution regulations and shall not constitute a hazard to miners or to the general population.
(1) Ductwork shall be kept in good repair to maintain designed airflows.
The effectiveness of mechanical ventilation systems shall be determined periodically and as soon as possible after any significant changes have been made in production or control. A log shall be kept showing the designed airflow and the results of all airflow measurements.
(2) Fans shall be operated continuously in the work areas of an active mine and before the opening of a previously inactive mine or inactive section until environmental sampling confirms that the average work shift concentrations of radon progeny do not exceed 1/12 WL. Refer to Section 7 if respiratory protection is required.
(3) Fresh air shall be provided to miners in dead-end areas near the working faces.
(4) Bulkheads, backfill, and sealants shall be used to control exposures as appropriate. Appendix III provides a general discussion of engineering control methods.
# Section 7 - Respirator Selection and Credit for Respirator Use
(a) General Considerations. NIOSH has determined that a radon progeny exposure limit of 1.0 WLM per year is technically achievable in mines through the use of effective work practices and engineering controls. Over a 30-year working lifetime, this exposure limit will reduce but not eliminate the risk of lung cancer associated with exposure to radon progeny. NIOSH considers respirators to be one of the last options for worker protection. Work practices and engineering controls are more effective means for limiting exposures and providing a safe environment for all workers. Respirator use in underground mines is not always practical for a number of reasons, including the additional physiological burden and safety hazards respirators pose. NIOSH therefore recommends that engineering controls and work practices be used where technically achievable to control the exposure of miners to radon progeny. Compliance with an exposure limit of 1.0 WLM per year requires an average exposure of 1/12 WL throughout the year to ensure that the miner can work for an entire year (i.e., 2,040 hr). For average work shift concentrations above 1/12 WL, NIOSH recommends mandatory respirator use as well as the implementation of engineering controls and work practices to reduce exposure to radon progeny. Occupational exposure to radon progeny above background concentrations has been associated with excess lung cancer risk. Therefore, regardless of the exposure concentration, NIOSH advises the use of respirators to further reduce exposure and decrease the risk of lung cancer. Respiratory protection shall be used by miners (1) when work practices and engineering controls are not adequate to limit average work shift concentrations of radon progeny to 1/12 WL, (2) when entering a mine area where concentrations of radon progeny are unknown, or (3) during emergencies. Use only those respirators approved by NIOSH or the Mine Safety and Health Administration (MSHA).
(b) Respirator Protection Program. Whenever respirators are used, a complete respiratory protection program shall be instituted. This program must follow the recommendations contained in ANSI Z88.2 (published by the American National Standards Institute) and the respirator-use criteria in 30 CFR* 57.5005. The respiratory protection program described in ANSI Z88.2-1969 requires the following:
(1) A written program for respiratory protection that contains standard operating procedures governing the selection and use of respirators.
*Code of Federal Regulations. See CFR in references.
(2) Periodic worker training in the proper use and limitations of respirators.
(3) Evaluation of working conditions in the mine.
(4) An estimate of anticipated exposure.
(5) An estimate of the physical stress that will be placed on the miner. A detailed medical examination of each miner shall be conducted according to the guidelines set forth in Appendix V.
(6) Routine inspection, maintenance, disinfection, proper storage, and evaluation of respirators.
(7) Information concerning the manufacturers' instructions for respirator fit-testing and proper use.
(c) Respirator Selection. NIOSH makes the following recommendations for respirator selection:
(1) A respirator is not required for exposure to average work shift concentrations less than or equal to 1/12 WL.
(2) For exposure to average work shift concentrations greater than 1/12 WL, NIOSH recommends those respirators listed in Table 1-1.
(3) For entry into areas where radon progeny concentrations are unknown or exceed 166 WL, or for emergency entry, NIOSH recommends only the most protective respirators (any full-facepiece, positive-pressure, self-contained breathing apparatus or full-facepiece, positive-pressure, supplied-air respirator and SCBA combination).
These recommendations are based on the fact that radon progeny exist as particulates and that miners are not exposed to hazardous concentrations of nonparticulate contaminants. If protection against nonparticulate contaminants is required, different types of respirators must be selected.
(d) Credit for Respirator Use. When respirators are worn properly, the miner's average work shift exposure can be reduced by a factor that depends on the class of respirator worn. Table 1-1 provides the credit factors for the various classes of respirators. For example, if a miner wears a helmet-type, powered, air-purifying respirator (PAPR) for 65% of the work shift and the radon progeny concentration in the work area is 0.3 WL, then the miner's exposure can be adjusted by dividing 0.3 WL by 2.7, the credit factor for this class of respirator. This results in an adjusted exposure of 0.11 WL for that miner. Respirator credit is discussed in detail in Chapter II.
# Section 8 - Informing Workers of the Hazards of Radon Progeny
(a) Notification of Hazards. The mine operator shall provide all miners with information about workplace hazards before job assignment and at least annually thereafter.
(b) Training
(1) The mine operator shall institute a continuing education program conducted by persons with expertise in occupational safety and health. The purpose of this program is to ensure that all miners have current knowledge of workplace hazards, effective work practices, engineering controls, and the proper use of respirators and other personal protective equipment. This program shall also include a description of the general nature of the environmental and medical surveillance programs and the advantages of participating in them. This information shall be kept on file and be readily available to miners for examination and copying. The mine operator shall maintain a written plan of these training and surveillance programs.
(2) Miners shall be instructed about their responsibilities for following proper work practices and sanitation procedures necessary to protect their health and safety.
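The adjustment arithmetic in Section 7(d) is a single division. The following is a minimal sketch of that calculation, reproducing the worked example from the text; it is illustrative only, and a real implementation would draw the full set of credit factors from Table 1-1 rather than the one value quoted here:

```python
def adjusted_exposure(avg_shift_wl: float, credit_factor: float) -> float:
    """Adjusted work shift exposure concentration (WL) after respirator
    credit: divide the measured average concentration by the credit
    factor for the class of respirator worn (Table 1-1)."""
    return avg_shift_wl / credit_factor

# Example from the text: helmet-type PAPR (credit factor 2.7) worn in a
# work area measured at 0.3 WL yields an adjusted exposure of 0.11 WL.
print(round(adjusted_exposure(0.3, 2.7), 2))  # 0.11
```

The adjusted concentration, not the raw measurement, is then used in the cumulative WLM calculation for compliance with the 1.0 WLM/year REL, as required by Section 3(b)(4).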
# Section 9 - Sanitation
(a) Eating and Drinking. The preparation, storage, dispensing (including vending machines), or consumption of food shall be prohibited in any area where a toxic material is present. The mine operator shall provide facilities so that miners can wash their hands and faces thoroughly with soap or mild detergent and water before eating or drinking.
(b) Smoking. Smoking shall be prohibited in underground work areas.
(c) Toilet Facilities. The mine operator shall provide an adequate number of toilet facilities and encourage the miners to wash their hands thoroughly with soap or mild detergent and water before and after using these facilities.
(d) Change Rooms
(1) The mine operator shall provide clean change rooms for the miners.
(2) The mine operator shall provide storage facilities such as lockers to permit the miners to store street clothing and personal items.
(e) Showers. The mine operator shall provide showers and encourage the miners to shower at the end of the work shift.
(f) Laundering
(1) The mine operator shall provide for the cleaning, laundering, or disposal of contaminated work clothing and equipment.
(2) The mine operator shall ensure that contaminated work clothing or equipment that is to be cleaned, laundered, or disposed of is placed in a closed container to prevent dispersion of dust.
(3) Any person who cleans or launders this contaminated work clothing or equipment must be informed by the operator that it may be contaminated with radioactive materials.
# Section 10 - Recordkeeping Requirements
(a) Record Retention
(1) The mine operator shall retain all records of the monitoring required in Section 3(b).
(2) All monitoring records shall be retained for at least 40 years after termination of employment.
(3) The mine operator shall retain the medical records required by Section 4. These records shall be retained for at least 40 years after termination of employment.
(b) Availability of Records. The miner shall have access to his medical records and be permitted to obtain copies of them. Records shall also be made available to former miners or their representatives and to the designated representatives of the Secretary of Labor and the Secretary of Health and Human Services.
(c) Transfer of Records
(1) Upon termination of employment, the mine operator shall provide the miner with a copy of the records specified in Section 10(a).
(2) Whenever the mine operator transfers ownership of the mine, all records described in this section shall be transferred to the new operator, who shall maintain them as required by this standard.
(3) Whenever a mine operator ceases to do business and there is no successor, the mine operator shall notify the miners of their rights of access to those records at least 3 months before cessation of business.
(4) The Director of NIOSH shall be notified in writing before (a) a mine operator ceases to do business and there is no successor to maintain records, and (b) the mine operator intends to dispose of those records.
(5) No records shall be destroyed until the Director of NIOSH responds in writing to the mine operator.
# II. INTRODUCTION
# A. Scope
Radon is a gas that diffuses continuously from surrounding rock and broken ore into the air of underground mines, where it may accumulate; radon may also be carried into mines through groundwater containing dissolved radon. Radon gas may be inhaled and immediately exhaled without appreciably affecting the respiratory tissues.
However, when attached or unattached radon progeny are inhaled, they may be deposited on the epithelial tissues of the tracheobronchial airways. Alpha radiation may subsequently be emitted into those tissues from polonium-218 and polonium-214, thus posing a cancer risk to miners who inhale radon progeny.

This document presents the criteria and recommendations for an exposure standard that is intended to decrease the risk of lung cancer in miners occupationally exposed to short-lived, alpha-emitting decay products of radon (radon progeny) in underground mines. The REL for radon progeny applies only to the workplace and is not designed to protect the population at large. The REL is intended to (1) protect miners from the development of lung cancer, (2) be measurable by techniques that are valid, reproducible, and available to industry and government agencies, and (3) be technically achievable.

# B. Current Standard

MSHA has established radiation protection standards for workers in underground metal and nonmetal mines. This standard limits a miner's radon progeny exposure to a concentration of 1.0 WL and an annual cumulative exposure of 4 WLM. Each WLM is determined as a 173-hr cumulative, time-weighted exposure. Smoking is prohibited in all areas of a mine where radon progeny exposures must be determined; respiratory protection is required in areas where the concentration of radon progeny exceeds 1.0 WL. According to current MSHA regulations, the exhaust air of underground mines must be sampled to determine the concentration of radon progeny.

# Uranium Mines

If the concentration of radon progeny in the exhaust air of a uranium mine exceeds 0.1 WL, samples representative of a miner's breathing zone must be taken at random times every 2 weeks in each work area (i.e., stopes, drift headings, travelways, haulageways, shops, stations, lunchrooms, or any other place where miners work, travel, or congregate). If concentrations of radon progeny exceed 0.3 WL in a work area, sampling must be done weekly until the concentration has been reduced to 0.3 WL or less for 5 consecutive weeks. Uranium mine operators must calculate, record, and report to MSHA the radon progeny exposure of each underground miner. The records must include the miner's time in each work area and the radon progeny concentration measured in each of those areas.

# Nonuranium Mines

If the concentration of radon progeny in the exhaust air of nonuranium mines exceeds 0.1 WL, and if concentrations are between 0.1 and 0.3 WL in an active working area, samples representative of a worker's breathing zone must be taken at least every 3 months at random times until the concentrations of radon progeny are less than 0.1 WL in that area. Samples must be taken annually thereafter. If the concentration of radon progeny exceeds 0.3 WL in a working area, samples must be taken at least weekly until the concentration has been reduced to 0.3 WL or less for 5 consecutive weeks. Operators of nonuranium mines must calculate, record, and report to MSHA the radon progeny exposures of miners assigned to areas with concentrations of radon progeny exceeding 0.3 WL. The records must include the miner's time in each work area and the radon progeny concentration measured in each of those areas.

# C. Uranium Decay Series

Figure II-1 shows the sequence by which the most abundant isotope of uranium (uranium-238) decays to a radioactively stable isotope of lead (lead-206).
Radon (radon-222) is an inert gas with a radiologic half-life of 3.8 days; it is a product of the natural decay of radium (radium-226). When radon decays, alpha particles and gamma radiation are emitted, and an isotope of polonium (polonium-218) is formed. Polonium-218 and its decay products (lead-214, bismuth-214, and polonium-214) are commonly referred to as short-lived radon progeny because they have half-lives of 27 minutes or less (see Figure II-1). Both polonium-218 and polonium-214 emit alpha particles as they decay. The short-lived progeny are solids and exist in air as free ions (unattached progeny) or as ions adsorbed to dust particles (attached progeny).

Because it is a gas, radon diffuses through rock or soil and into the air of underground mines, where it may accumulate; radon may also be carried into mines through groundwater containing dissolved radon. Radon may be inhaled and immediately exhaled without appreciably affecting the respiratory tissues. However, when the radon progeny (either attached or unattached) are inhaled, they may be deposited in the epithelial tissues of the tracheobronchial airways, where alpha radiation from polonium-218 and polonium-214 may be subsequently emitted. The quantity of mucus in those airways and the efficiency of its clearance (retrograde ciliary action) into the esophagus are important factors that affect the total radiation absorbed at a specific site within the respiratory tract.

Alpha particles are energetic helium nuclei. As they pass through tissue, they dissipate energy by the excitation and ionization of atoms in the tissue; it is this process that damages cells. Because alpha particles travel less than 100 micrometers in tissue, intense ionization occurs close to the site of deposition of the inhaled alpha-emitting radon progeny. Beta particles (electrons) and gamma radiation (shortwave electromagnetic radiation) can also cause ionization in tissues, but they travel farther through tissues and dissipate less energy per unit path length than do alpha particles. The beta particles and gamma radiation emitted by radon progeny make a negligible contribution to the radiation dose in the lung.

# D. Units of Measure

The common unit of radioactivity is the curie (Ci), which is the rate at which the atoms of a radioactive substance decay; 1 Ci equals 3.7 x 10^10 disintegrations per second (dps). The picocurie (pCi) corresponds to 3.7 x 10^-2 dps. The International System of Units (SI) unit of radioactivity is the becquerel (Bq), which is equivalent to 1 dps. Therefore, 1 pCi is equivalent to 0.037 Bq.

When radon gas and radon progeny are inhaled, the radiation exposure is primarily caused by the short-lived radon progeny (polonium-218, lead-214, bismuth-214, and polonium-214, which are deposited in the lung) rather than by the radon gas. Because it was not feasible to routinely measure the individual radon progeny, the U.S. Public Health Service introduced the concept of the working level, or WL. The WL unit represents the amount of alpha radiation emitted from the short-lived radon progeny. One WL is any combination of short-lived radon progeny in 1 liter (L) of air that will ultimately release 1.3 x 10^5 million electron volts (MeV) of alpha energy during decay to lead-210.
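These quantities are related by fixed constants, so converting between them is mechanical. The following sketch (illustrative only) performs the curie-to-becquerel conversion stated above and derives the SI equivalent of 1 WL quoted in the next paragraph directly from the 1.3 x 10^5 MeV/L definition, using the standard physical constant of 1.602 x 10^-13 joule per MeV, which is not stated in the text.

```python
# Illustrative unit conversions for the radiation quantities defined above.
# Constants: 1 Ci = 3.7e10 dps; 1 Bq = 1 dps; 1 MeV = 1.602e-13 J (standard
# physical constant, assumed here rather than taken from the document).

CI_TO_BQ = 3.7e10        # becquerels (dps) per curie
MEV_TO_J = 1.602e-13     # joules per MeV
WL_MEV_PER_L = 1.3e5     # alpha energy per liter of air at 1 WL

def pci_to_bq(pci: float) -> float:
    """Convert picocuries to becquerels (1 pCi = 0.037 Bq)."""
    return pci * 1e-12 * CI_TO_BQ

def wl_to_j_per_m3(wl: float) -> float:
    """Potential alpha energy concentration (J/m^3) for a given WL."""
    return wl * WL_MEV_PER_L * MEV_TO_J * 1000.0   # 1,000 L per m^3

print(pci_to_bq(1.0))        # 0.037 Bq
print(wl_to_j_per_m3(1.0))   # ~2.08e-05 J/m^3, matching the value below
```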
The SI unit of measure for potential alpha energy concentration is joules per cubic meter of air (J/m^3); 1 WL is equal to 2.08 x 10^-5 J/m^3.

The equilibrium between radon gas and radon progeny must be known in order to convert units of radioactivity (Ci or Bq) to a potential alpha energy concentration (WL or J/m^3). The equilibrium factor (F)* is defined as the ratio of the equilibrium-equivalent concentration of the short-lived radon progeny to the actual concentration of radon in air. When the equilibrium factor approaches 1.0, it means that the concentration of radon progeny is increasing relative to the concentration of radon. At complete radioactive equilibrium (F=1.0), the rate of radon progeny decay equals the rate at which the progeny are produced. Thus the radioactivity of the decay products equals the radioactivity of the radon. In underground mines, the equilibrium factor mainly depends on the ventilation rate and the aerosol concentration. Values of F ranging from 0.08 to 0.65 are typical in underground mines. Radioactivity and potential alpha energy concentration values at various equilibria are presented in Table II-1.

The common unit of measure for human exposure to radon progeny is the working level month (WLM). One WLM is defined as the exposure of a worker to radon progeny at a concentration of 1.0 WL for a working period of 1 month (170 hr).+ The SI unit for WLM is joule-hour per cubic meter of air (J-h/m^3); 1 WLM is equal to 3.6 x 10^-3 J-h/m^3.

*F is defined as the quotient of the equilibrium-equivalent radon progeny activity divided by the radon activity.
+Note that MSHA regulations are based on 173 hr per month.

The rad (radiation absorbed dose) is the unit of measure for the absorbed dose of ionizing radiation. One rad corresponds to the energy transfer of 6.24 x 10^7 MeV per gram of any absorbing material. The rem (roentgen equivalent man) is the unit of measure for the dose equivalent of any ionizing radiation in man. One rem is equivalent to one rad multiplied by a radiation quality factor (QF). The radiation QF expresses the relative effectiveness of radiation with differing linear-energy-transfer (LET) values in producing a given biological effect. The radiation QFs for beta particles and gamma radiation are each approximately 1; the radiation QF for alpha radiation varies from 10 to 20. For equal doses of absorbed radiation (rads), the dose equivalent (rems) attributed to alpha particles is therefore 10 to 20 times greater than the dose equivalent attributed to high-energy beta particles or gamma radiation. The SI unit of measure for the dose equivalent is the sievert (Sv). One rem is equal to 0.01 Sv.

# E. Worker Exposure

In 1986, 22,499 workers were employed in 427 metal and nonmetal mines in the United States. In the past few years, the number of underground uranium mines operating in the United States has decreased dramatically, from 300 in 1980 to 16 in 1984. Accordingly, the number of miners employed in these mines has also decreased, from 9,076 in 1979 to 448 in 1986. Table II-2 shows the range in concentrations of airborne radon progeny measured in U.S. underground metal and nonmetal mines from 1976 through 1985. As illustrated in Table II-2, 38 of the 254 operating underground nonuranium mines sampled during fiscal year 1985 contained concentrations of airborne radon progeny equal to or greater than 0.1 WL.
With an estimated average of 55 workers per mine, approximately 2,090 nonuranium miners were at risk of exposure to radon progeny concentrations equal to or greater than 0.1 WL in 1985. The gamma radiation exposures of U.S. uranium miners are generally regarded to be less than the whole-body occupational exposure limit of 5 rem (50 mSv) per year.

# F. Measurement Methods for Airborne Radon Progeny

# 1. Description of Measurement Methods

# a. Grab Sampling Methods

Grab sampling methods for measuring airborne radon progeny involve drawing a known volume of air through a filter and counting the alpha or beta radioactivity on the filter during or after sampling. Grab sampling methods used in underground mines are listed in Table II-5. In one-count grab sampling methods such as those used with instant working-level monitors, the radioactivity is determined over a single counting period using a scintillation counter. In two-count methods, the radioactivity is determined over two counting periods, and the ratio of these two measurements is used to calculate the radon progeny concentrations. In a three-count method, radon progeny concentrations are derived from the relative changes in the measurements taken at three 30-minute intervals.

Critically important factors are the proper calibration of radiation detectors and pumps, filters that precisely fit the equipment, and accurate maintenance of the flow rate during the sampling period. It is also important to prevent the accumulation of radionuclides to avoid contamination of the pump, counting equipment, and filters. Statistical uncertainties associated with the various grab sampling methods for radon progeny are presented in Table II-6. Data indicate that the relative precision of the methods is the same. The major differences are the total time period required for sampling and analysis, the capability of determining exposure concentrations at the work site, and the amount of routine maintenance and calibration required of the instrumentation.

# b. Continuous Monitoring Methods

In continuous monitoring methods, air is sampled continuously, and (as with other methods) the alpha or beta radioactivities are determined over the length of the collection period. Continuous monitoring devices and systems have been described elsewhere. The characteristics and statistical uncertainties for some continuous monitoring methods are presented in Table II-7. The U.S. Bureau of Mines has designed an automated continuous monitoring system in which up to 768 detector stations can be linked to a central control unit. The system was designed to trigger an alarm when airborne radon progeny exceed a specified concentration. Although continuous monitoring methods can provide a rapid estimate of exposure concentrations, placement of the instrumentation in active work areas is difficult, and the measurements may not always be representative of the exposure in the miner's breathing zone.

# c. Personal Dosimeters

Personal dosimeters for radon progeny are intended to automatically record a miner's cumulative exposure regardless of fluctuations in radon progeny concentrations. Thus these devices eliminate the need to document work area location and occupancy time. Although several personal dosimeters have been tested in U.S. uranium mines, none are in routine use in this country because of problems in calibration and lack of precision.
# Passive Dosimeters

Passive dosimeters rely on the natural migration of attached and unattached radon progeny to the detection area of the device without the use of an air pump. Thin plastic foils sensitive to alpha particles are used as detectors. Although passive dosimeters using track etch foils have been studied in underground mines, such devices are still in the developmental stage.

# Active Dosimeters

Active dosimeters use a mechanical pump to draw a known volume of air through a filter. The alpha radiation emitted by the radon progeny collected on the filter is counted and recorded automatically. The following dosimeter detectors have been tested for use under mining conditions: thermoluminescent detectors, electronic detectors, and track etch detectors. Active track etch dosimeters are used for radiation monitoring in all underground mines in France.

# d. Factors to Consider When Selecting Measurement Methods

Concentrations of radon progeny have been reported to vary among the different uranium mines and work areas within each mine. These variations have been attributed to the type of mining process, the grade of ore mined, and the effectiveness of the ventilation to control exposures. Historically, radon progeny exposures in work areas were measured by grab sampling techniques that used the Kusnetz count method or by the instant working-level monitor. More recently, other methods such as continuous monitors and personal dosimeters have also been used in mines. Personal dosimeter methods are clearly more desirable, but they have not been rigorously tested in U.S. mines, and they have been reported to be unreliable for determining exposures over an 8- to 10-hr work shift.

Continuous monitoring methods can rapidly detect changes in radon progeny concentrations and can be equipped with an alarm system that will be activated at preset concentrations. These monitors are often stationed at fixed locations within travelways, haulageways, shops, etc., because of the difficulty of moving and restationing them within active mine areas. Although these monitors do not usually provide adequate data for determining worker exposures, they can signal the occurrence of problems in the ventilation system and identify exposure sources.

NIOSH believes that the use of instant working-level monitors or the Kusnetz count method will provide reliable estimates of exposure to radon progeny. Other methods at least equivalent in accuracy, precision, and sensitivity can be used (see Table II-6). Any method chosen must be capable of meeting the sampling strategy requirements described in Appendix IV.

# G. Respirator Selection and Credit for Respirator Use

# 1. Respirator Selection

Historically, NIOSH has recommended the use of the most protective respirators* when workers are exposed to potential occupational carcinogens. Although cumulative exposure to radon progeny may result in cancer, the use of the most protective respirators may not always be technically feasible or safe in routine underground mining operations. Supplied-air respirators (SARs) that are NIOSH/MSHA-certified provide breathing air from compressors or a cascade system of air-supply tanks and are approved only for use with air lines less than 300 ft long. However, the use of SARs may not be practical in underground mining operations.
The reasons are that it is difficult to provide sufficient quantities of breathing air through air lines over long distances and that the air lines are susceptible to crimping and severing from the movement of mining vehicles and haulage cars on tracks. Furthermore, many underground work areas and passageways in mines are too small and cramped with equipment to accommodate air compressors or large air-supply tanks.

In addition to being cumbersome in underground mines because of their size, self-contained breathing apparatuses (SCBAs) weigh as much as 35 lb, and SARs weigh approximately 6 lb. Thus when SCBAs or SARs are worn for extended periods, their additional weight can also cause increased physiological burden in the form of heat stress. Finally, NIOSH believes that the routine use of SCBAs and SARs may result in increased injuries in underground mining operations. NIOSH is not aware of any studies specifically dealing with injuries or other safety hazards associated with the use of SARs and SCBAs in mines. However, several studies have shown that obstacles introduced into the workplace result in a significantly increased risk of injury from tripping, slipping, or falling. Because mining is currently one of the most dangerous industries in the United States with regard to occupational deaths and injuries (MMWR 1987; BLS 1987), NIOSH believes that this problem would be exacerbated by the routine use of SCBAs and SARs. NIOSH therefore believes that there is sufficient safety and health evidence to recommend against the routine use of SARs and SCBAs for reducing exposure to radon progeny during underground mining operations.

Table I-1 lists the respirators that NIOSH recommends for use against exposure to radon progeny. For average work shift concentrations of radon progeny that exceed 1/12 WL, the recommended respirators include air-purifying respirators with high-efficiency particulate air (HEPA) filters. The HEPA filter media are recommended by NIOSH for use with the air-purifying classes of respirators. These filters are the most efficient type of particulate filter available, and they are less susceptible than others to performance degradation resulting from humid storage and use conditions.

*Either (1) any self-contained breathing apparatus (SCBA) equipped with a full facepiece and operated in a pressure-demand or other positive-pressure mode, or (2) any supplied-air respirator (SAR) equipped with a full facepiece and operated in a pressure-demand or other positive-pressure mode in combination with an auxiliary SCBA operated in a pressure-demand or other positive-pressure mode.

# 2. Credit for Respirator Use

A miner's exposure to radon progeny may be less than the average work shift concentration in an area, depending on the class of respirator worn and the percentage of time the respirator is worn properly. This reduced exposure for miners who wear respirators can be calculated by dividing the average work shift concentration of radon progeny by the credit factor (CF) for that class of respirator (see Table I-1).

The credit factors in Table I-1 were determined by the following equation:

    Pf = 1/CF = (Pw x tw) + (Pn x tn)

where

    Pf = the total penetration of radon progeny into the respirator facepiece
    APF = the assigned protection factor (a complete listing of the APFs for all classes of respirators can be found in the NIOSH Respirator Decision Logic)
    CF = the credit factor
    Pw = the penetration of radon progeny while wearing the respirator (i.e., 1/APF)
    tw = the proportion of time during the work shift that the miner wears the respirator properly
    Pn = the penetration of radon progeny while not wearing the respirator properly (i.e., 100% or 1.0)
    tn = the proportion of time during the work shift that the miner does not wear the respirator properly (i.e., 1.0 - tw)

An unpublished Canadian study evaluated the proportion of time during the work shift that a group of underground uranium miners properly wore their helmet-type, powered, air-purifying respirators; the credit factors in Table I-1 are based on a utilization rate of 65%. If a mine operator can verify to MSHA that respirator utilization is greater than 65%, NIOSH recommends a recalculation of the CFs using this higher utilization rate. However, the highest utilization rate that NIOSH recommends is 90%. The CFs for these extremes of utilization rates are listed in Table I-1.

# III. BASIS FOR THE RECOMMENDED STANDARD

# A. Assessment of Effects

# 1. Human Studies

# a. Association of Radon Exposure with Lung Cancer Mortality

Appendix I contains the report prepared by NIOSH and submitted to MSHA on May 31, 1985, entitled Evaluation of Epidemiologic Studies Examining the Lung Cancer Mortality of Underground Miners. Several of the epidemiologic studies evaluated in that report demonstrate an association between exposure to radon progeny and lung cancer mortality in underground uranium miners. The relationship between exposure to radon progeny and lung cancer mortality has also been observed in workers in underground metal mines, iron ore mines, tin mines, fluorspar mines, gold mines, and zinc/lead mines. In addition, some of these studies demonstrate a direct exposure-response relationship between lifetime cumulative exposure to radon progeny and lung cancer mortality.

Statistically significant standardized mortality ratios (SMRs)* above 400% were observed in three studies in which workers accumulated mean lifetime exposures above 100 WLM. Statistically significant SMRs between 140% and 390% were observed in two other studies in which workers accumulated mean lifetime exposures below 100 WLM and in preliminary findings of a third study in which workers accumulated estimated mean lifetime exposures below 100 WLM.

*The standardized mortality ratio (SMR) is the ratio of the mortality rates of two groups being compared. This ratio is expressed as a percentage and is usually adjusted for age or time differences between the two groups.

# b. Synergistic Effects of Other Substances

Although the literature consistently demonstrates an association between lung cancer incidence and exposure to radon progeny, it is possible that some of the miners studied were exposed to other substances as well. These substances may have acted synergistically with radon progeny to potentiate the effects of the radon exposure. Substances with synergistic potential include arsenic; hexavalent chromium, nickel, and cobalt; serpentine; iron ore dust; and diesel exhaust. Risk analyses were performed on data from epidemiologic studies of U.S. uranium miners and Swedish iron ore mine workers. These analyses indicate that the risk of mortality from lung cancer among miners who are exposed to radon progeny is greater among those who smoke cigarettes than among those who do not smoke.

# c. Relation of Lung Cancer Risk to Cumulative Radon Exposure

Since completion of the NIOSH report (Appendix I), Howe et al.
published a study of surface and underground mine workers who had worked in Canada at the Eldorado Beaverlodge uranium mine between 1948 and 1980. That cohort included 8,487 workers, whose cumulative exposures were stratified into categories. Based on these stratified categories, the risk of death from lung cancer increased linearly with increasing exposure. For the exposure categories of 5 to 24, 25 to 49, and 50 to 99 WLM, the relative risk was elevated, but the difference from the expected risk for an unexposed population was not statistically significant. For all exposure categories above 100 WLM, the relative risk was significantly elevated (p<0.05); note, however, that the first 10 years of followup were excluded from this calculation. For the total cohort, the relative risk coefficient was 3.28% per WLM, and the absolute risk coefficient was 20.8 per 10^6 person-years per WLM (any excess mortality within the 10 years following initial exposure was excluded from this calculation).

Two additional epidemiologic studies that were not included in the 1985 NIOSH report (Appendix I) have also been reported. Pham et al. studied 1,173 iron mine workers in France who were aged 35 to 55 and who had normal chest X-rays at the beginning of the study period in 1975. Thirteen mine workers who had worked underground for a mean of 25.2 years died of lung cancer between 1975 and 1980; only 3.7 deaths were expected from an age-standardized comparison with French males for this same period (SMR = 351; p<0.05). Although exposure records were not available, the authors estimated that some workers may have received lifetime cumulative exposures to radon progeny in the range of 100 to 150 WLM.

Solli et al. observed 318 niobium mine workers in Norway from 1953 through 1981; 77 of these miners were underground workers. This cohort experienced a total of 12 lung cancer deaths, though only 2.96 deaths were expected on the basis of age-specific rates for Norwegian males (SMR = 405; p<0.001). The underground workers experienced 9 lung cancer deaths, whereas only 0.81 were expected on the basis of age-specific rates for Norwegian males (SMR = 1,111; p<0.001). From estimates of total exposure to alpha radiation (based on limited measurements of radon and thoron progeny taken in 1959), the authors determined that the risk of lung cancer increased significantly (p<0.05) with increasing alpha radiation for the exposure categories of 1 to 19, 20 to 79, 80 to 119, and greater than or equal to 120 WLM. The excess absolute risk for these exposed workers was reported to be 50 per 10^6 person-years per WLM.

The epidemiologic studies of lung cancer mortality in mine workers exposed to radon progeny (including those studies discussed in Appendix I) are summarized in Tables III-2 and III-3.

# 2. Animal Studies

# a. Effects of Exposure to Radon Progeny

Chameaud et al. studied the effects of exposure to radon progeny in specific pathogen-free (SPF) Sprague-Dawley rats. A total of 1,800 rats were exposed to radon progeny for 1 to 3 hr per day for 14 to 82 days, yielding an accumulated exposure of 20 to 50 WLM. An additional 600 rats were unexposed. The lung cancer incidence in rats was reported to be directly proportional to their lifetime cumulative exposure to radon progeny. The authors concluded that the amount of radiation needed to double the natural incidence of lung cancer in these rats was 20 WLM. Reduced life spans were not observed for rats in any of the exposure groups.

# b. Relation of Lung Cancer Incidence to Radon Progeny Exposure

Chameaud et al.
determined that the lifetime risk coefficient (uncorrected for life span shortening) for the induction of lung cancers in rats was approximately 140 to 850 x 10^-6 per WLM for exposures ranging from 20 to 4,500 WLM (Table III-4). This is consistent with the lifetime risk coefficient for lung cancer in humans (150 to 450 x 10^-6 per WLM) estimated by the International Commission on Radiological Protection (ICRP). As shown in Table III-4, lung cancer incidence in rats increased as cumulative exposure to radon progeny increased. In contrast, the lifetime risk of lung cancer per unit of exposure (WLM) decreased with increasing exposure. These findings agree with those of the NIOSH risk assessment (Appendix II), in which the lifetime cumulative risk of lung cancer per unit of exposure decreased as cumulative exposure increased in underground uranium mine workers.

# c. Synergistic Effects of Cigarette Smoke

Chameaud et al. studied the ability of radon progeny to initiate lung cancer in groups of 50 SPF Sprague-Dawley rats that were subsequently exposed to cigarette smoke. The chamber concentrations of alpha radiation were 0, 300, and 3,000 WL; these concentrations yielded cumulative dose levels of 0, 100, 500, and 4,000 WLM over a 2-month period for those groups of animals. Treatment groups were exposed to a total of 352 hr of cigarette smoke (9 cigarettes/500 L of air) for 10 to 15 min per day, 4 days per week for 1 year. Exposure to radon progeny alone demonstrated a directly proportional dose-effect relationship for the induction of cancer (500 and 4,000 WLM). However, when a similar period of radon exposure was followed by exposure to cigarette smoke, a dose-related, twofold to fourfold increase occurred in lung cancer incidence. The authors stated that the groups receiving high and medium doses of radon and cigarettes had cancers that were not only larger but were more invasive and metastatic compared with the groups exposed to radon alone. Conversely, neither the cigarette smoke alone nor the low-dose radon progeny exposure (100 WLM) alone induced lung cancer.

In a parallel lifetime study, Chameaud et al. related the sequence of exposure to radon progeny and cigarette smoke to the incidence of lung tumors (cancer incidence was not specified) in groups of 50 SPF Sprague-Dawley rats. One group of rats was exposed to radon progeny only (a cumulative exposure of 4,000 WLM); a second group was exposed first to cigarette smoke and then to radon progeny (4,000 WLM); a third group was exposed to radon progeny (4,000 WLM) and then to cigarette smoke; and a fourth group was exposed to cigarette smoke only. Similar incidences of tumors were observed among the rats exposed to radon progeny only (10 tumors) and those exposed first to cigarette smoke and then to radon progeny (8 tumors). In contrast, when the exposure to radon progeny preceded the exposure to cigarette smoke, the effect was potentiated; that is, 32 rats developed tumors. As stated previously, none of the rats exposed to cigarette smoke only developed lung cancer. No statistical analyses were performed on the results of this study.

# d. Significance of Animal Studies

Life span experiments in animals exposed to radon progeny alone have demonstrated that increasing exposures produce increasing incidences of lung cancer. This finding is similar to those of the epidemiologic studies cited in the preceding section (III.A.1).
Because epidemiologic data are available, these animal data contribute relatively little to the final assessment of risk in humans or to the determination of an REL for exposure to radon progeny. Thus this document discusses only selected animal studies of the carcinogenic potential of radon progeny. Other studies have examined the sequential or concomitant exposures of rats, dogs, and hamsters to substances other than radon progeny (e.g., uranium ore dust, thorium, and tobacco smoke). Several additional studies (critiqued but not described in this document) confirm the adverse health effects of radon progeny on exposed animals. These animal studies generally confirm the risk of lung cancer reported among workers exposed to radon progeny.

# B. Risk Assessment

NIOSH studied the lung cancer risk of uranium miners by using data from a U.S. Public Health Service (USPHS) study of white male uranium mine workers from the Colorado Plateau area (Colorado, Arizona, New Mexico, and Utah). That NIOSH risk assessment is described in a report entitled Quantitative Risk Assessment of Lung Cancer in U.S. Uranium Miners, which is reproduced in Appendix II. Appendix I contains a detailed discussion of the USPHS data.

In the NIOSH risk assessment, data were analyzed for 3,346 workers who had been followed from 1950 through 1982. By 1982, 1,215 workers had died; 256 of these deaths (21.1%) were due to lung cancer. A generalized version of the Cox proportional hazards model was used to estimate the relative risk of death resulting from lung cancer over a 30-year working lifetime at several cumulative exposure values. (The 30-year working lifetime was selected to maintain consistency with the working lifetime commonly described by MSHA.) Relative risk is defined as the ratio of lung cancer mortality in a selected exposed group to lung cancer mortality in a comparison group.

The quantitative risk assessment model presented in Appendix II did not include the length of time since the end of the mining exposure, which is a significant predictor of relative risk. This term was subsequently added to the generalized Cox model, and new parameter estimates were computed. The estimates in Tables III-5 and III-6 are based on this model. The major difference between the two quantitative risk assessment models is that under the new model, the relative risk estimates increase more rapidly during exposure and decrease more rapidly after exposure.

The risk of death resulting from lung cancer increased with increasing lifetime cumulative exposure to radon progeny (Table III-5); this finding is consistent with Appendix II. This direct relationship has been observed in previous epidemiologic studies. As shown in Table III-5, the relative risk of 1.57 at 30 WLM corresponds to an average exposure of 1 WLM per year for a working lifetime of 30 years.

*Estimates are based on a log-relative risk model fitted to age at initial exposure, time since cessation of exposure, and the natural logarithms of the following variables: cumulative mining and background exposure to radon progeny, cumulative cigarette smoking and background smoking, and rate of exposure to radon progeny.

+The approximate 95% confidence limits were calculated by applying the parameters from the quantitative risk assessment model together with their variances and covariances to the lung cancer mortality rates in the Colorado Plateau using an actuarial approach.
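The excess lifetime risk estimates presented later in this section (Table III-6) combine relative risk estimates such as these with age-specific mortality rates in an actuarial (life-table) calculation. The following sketch shows the general shape of such a computation. It is a simplified illustration, not the National Academy of Sciences method itself: the rates are hypothetical placeholders, and a constant relative risk of 1.57 stands in for the exposure-dependent relative risk function actually used in the NIOSH assessment.

```python
# Hedged sketch of a life-table (actuarial) calculation of lifetime lung
# cancer risk. All rates below are hypothetical placeholders; the NIOSH
# assessment used Colorado Plateau white male mortality rates and an
# exposure-dependent relative risk averaged over 5-year age intervals.

def lifetime_lung_cancer_risk(lung_rates, all_cause_rates, rel_risks):
    """Probability of dying of lung cancer over successive 5-year age
    intervals, given annual age-specific death rates per person and the
    relative risk applied to the lung cancer rate in each interval."""
    alive = 1.0   # fraction of the cohort still alive
    risk = 0.0    # cumulative probability of death from lung cancer
    for lung, total, rr in zip(lung_rates, all_cause_rates, rel_risks):
        excess = lung * (rr - 1.0)               # added lung cancer mortality
        p_lung = alive * lung * rr * 5.0         # lung cancer deaths, 5 years
        p_total = alive * (total + excess) * 5.0  # all deaths, 5 years
        risk += p_lung
        alive = max(0.0, alive - p_total)
    return risk

# Hypothetical annual rates per person for ages 40-44 through 80-84:
lung = [0.0002, 0.0005, 0.001, 0.002, 0.003, 0.004, 0.005, 0.005, 0.004]
total = [0.004, 0.006, 0.010, 0.016, 0.025, 0.040, 0.060, 0.090, 0.140]

baseline = lifetime_lung_cancer_risk(lung, total, [1.00] * 9)
exposed = lifetime_lung_cancer_risk(lung, total, [1.57] * 9)
print(f"excess lifetime risk per 1,000: {(exposed - baseline) * 1000:.1f}")
```

The excess risk is the difference between the exposed and baseline lifetime probabilities, which is the definition used for Table III-6 below; the competing all-cause mortality is what keeps a constant relative risk from translating into a proportional lifetime excess.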
In addition to receiving workplace exposures to radon progeny, workers were assumed to have received average environmental exposures of 0.4 WLM/year. This is the value derived in the NIOSH risk assessment that led to the best fit of the model to the data. This assumed value is consistent with estimates of exposure to radon progeny for persons living near ore-bearing lands in the United States. The average nonoccupational exposure to radon progeny from natural geologic sources for persons in the United States is approximately 0.2 WLM/year.

Relative risk modeling is common in the epidemiologic literature (especially in studies with lengthy followup) because the dramatic changes that occur in mortality rates with age and calendar year make absolute risk models extremely complicated. Most relative risk models assume that mortality rates in exposed populations are roughly proportional to the rates in unexposed populations at all ages and calendar periods. Because this assumption often approximates reality over a broad range of ages and calendar periods, the relative risk model can be expressed in a less complex form than absolute risk models (i.e., the relative risk model can be expressed without terms involving age and calendar year).

Excess lifetime risk estimates for lung cancer mortality have been generated (Table III-6) by applying the relative risk estimates in Table III-5 (see also Appendix II) to the lung cancer and all-causes mortality rates for white males in the Colorado Plateau States. Excess risk is defined as the arithmetic difference between the risk of lung cancer mortality in a selected exposed group and the risk of lung cancer mortality in an unexposed comparison group. The estimated excess lung cancer deaths (i.e., excess lifetime risk) in Table III-6 were computed by approximating the average of the exposure-determined relative risk function over 5-year age intervals spanning an entire lifetime. These average relative risks and the corresponding mortality rates for lung cancer and for all causes of death among white males in the Colorado Plateau were used to compute the probability of lung cancer mortality during a lifetime using the National Academy of Sciences actuarial method.

It is important to understand the limitations of these risk estimates when examining the values in Table III-6. These limitations include the following:

- A relatively small portion of the cohort had the observed lower levels of cumulative occupational exposure, so the ability to generate precise point-risk estimates at the lower range of occupational exposure is not as strong. Only 7% of the workers in the cohort had lifetime cumulative exposures below 30 WLM, and only 7 of the 256 lung cancer deaths occurred among workers with lower cumulative exposures.

- The reliability of these excess lifetime risk estimates depends on (1) the accuracy of the original relative risk estimates and (2) the appropriateness of using lung cancer rates for the general white male population in the Colorado Plateau as an estimate of the background lung cancer rate (i.e., that which would occur in populations exposed only to background levels of radon progeny). This cohort contained no unexposed mine workers from which to estimate background lung cancer rates. Although certain limitations exist in using this type of rate, background lung cancer mortality rates from the general U.S. white male population were used to estimate the background rates for the cohort.
- The background lung cancer rates were not corrected for cigarette smoking. However, the relative risk estimates used to calculate the lifetime risk estimates were adjusted for smoking. This implies that the lifetime risk estimates are only appropriate for a population with a pattern of smoking similar to that of the white male population of the Colorado Plateau.

In developing its recommendations, NIOSH has attempted to compare the risk of occupational exposure to radon progeny with the risk of background exposure in homes (i.e., exposure accruing outside the mining environment). The estimated average background exposure to radon progeny is approximately 0.2 WLM per year. On the basis of State vital statistics records, NIOSH estimates that the lifetime risk of lung cancer in the Colorado Plateau, uncorrected for smoking, is approximately 45 lung cancers per 1,000 white males.

Given this value for background lung cancer risk, another question that must be considered is to what level it would be reasonable to control occupational risk. In its benzene decision, the U.S. Supreme Court gave the following example as the basis for evaluating the occupational risk of chemically induced leukemia: An exposure associated with 1 excess death per million exposed persons might pose an acceptable risk, whereas an exposure associated with 1 excess death per 1,000 exposed persons would pose a significant risk that should be reduced. This example is useful, but it cannot be strictly applied in all cases because it was offered as an illustration and not a fixed rule. In the specific case of lung cancer risk associated with radon progeny exposure, the example cannot be strictly applied because the background risk of lung cancer is much greater than the risk of leukemia at background exposure levels and because cigarette smoking is known to create a confounding effect that greatly increases risk. An excess of 1 lung cancer death per 1,000 would probably not be detectable in a general population if data were subject to considerable uncertainty, since it would necessitate differentiating between 46 and 45 deaths per 1,000.

Another important consideration is the technical feasibility of a given exposure limit. As stated earlier, NIOSH has determined that a cumulative exposure limit of 1 WLM per year is achievable (though some argue that it is not feasible on the basis of economics or current technology). NIOSH has found no evidence that any cumulative exposure lower than 1 WLM per year is feasible. As shown in Table III-6, the occupational exposure limit required to reduce expected lifetime risk to 1 excess lung cancer death per 1,000 miners is approximately 0.1 WLM per year. This cumulative exposure would require an occupational exposure concentration that is less than the concentration associated with a cumulative background exposure of 0.4 WLM per year, assuming background exposure is acquired outside the occupational environment. An occupational exposure limit of 0.1 WLM per year would therefore require average mine concentrations lower than the estimated background exposure concentrations.

In view of the preceding factors, the level of uncertainty in available data, and the apparent infeasibility of limiting cumulative exposures to less than 1.0 WLM per year, it does not seem reasonable to recommend exposure limits that would yield only 1 excess lung cancer death per 1,000 miners. NIOSH has determined that an exposure limit of 1 WLM is feasible in some mines and that such an exposure limit would substantially reduce risks from those associated with the current MSHA standard.
An REL of 1 WLM per year is therefore recommended to substantially reduce risk and to stimulate the implementation and development of engineering and mining techniques to reduce exposure. The enforcement of this recommendation combined with additional health and developmental research may facilitate future exposure reductions and thereby reduce lung cancer risk.

# C. Technical Feasibility

In a report to the U.S. Bureau of Mines, Bloomster et al. analyzed the technical feasibility of reducing the airborne concentrations of radon progeny in underground uranium mines. The study included data from 14 underground uranium mines operating during the study period (September 1981 to May 1984). The authors concluded that some mines could operate at an annual exposure standard of 2 WLM by using dilution ventilation alone if it was introduced early in the development of the mine and if no contamination of inlet air was present. An extensive engineering analysis of 2 of the 14 mines indicated that it might be feasible to meet an operating standard of 1 WLM by using dilution ventilation in combination with other control methods such as bulkheading, air filtration, and use of sealants. The authors expressed doubt about the technical feasibility of operating these uranium mines at a standard of 0.5 WLM. Appendix III provides descriptions of engineering control methods that may be useful in underground mines.

# D. Recommendations

Several schemes exist for identifying and classifying a substance as a carcinogen. For example, the National Toxicology Program (NTP), the International Agency for Research on Cancer (IARC), and the Occupational Safety and Health Administration (OSHA) have all considered this problem. NIOSH considers the OSHA classification the most appropriate for use in identifying potential occupational carcinogens and supports the following definition: A "potential occupational carcinogen" is any substance, or combination or mixture of substances, which causes an increased incidence of benign and/or malignant neoplasms, or a substantial decrease in the latency period between exposure and onset of neoplasms in humans or in one or more experimental mammalian species as the result of any oral, respiratory, or dermal exposure, or any other exposure which results in the induction of tumors at a site other than the site of administration. This definition also includes any substance that mammals may potentially metabolize into one or more occupational carcinogens.

The epidemiologic data examined by NIOSH demonstrate that occupational exposure to radon progeny in underground mines has the potential for causing lung cancer in miners (see Appendix I). These human data are supported by a number of studies in which various animal species exposed to radon progeny also developed lung cancer. Furthermore, the NIOSH risk assessment presented in Appendix II, which was based on the human studies, clearly demonstrates that a relationship exists between cumulative radon progeny exposure and the risk of developing lung cancer. The risk assessment shows that as cumulative exposure decreases, the risk of developing cancer decreases. In arriving at an REL, NIOSH attempts to identify the exposure at which no worker will suffer material impairment of health or functional capacity.
In the case of radon progeny, this task is difficult because the NIOSH risk assessment shows that even at an exposure of 0.5 WLM per year (15 WLM for a 30-year cumulative exposure) or below, the risk of developing lung cancer is increased (see Table III-4 and Appendix II). These results indicate that NIOSH should recommend an annual cumulative exposure limit well below 0.5 WLM, but NIOSH must also consider the technical feasibility of the REL. In previous NIOSH recommendations for the control of carcinogens, technical feasibility has often been interpreted as the ability to quantitate exposure; however, those recommendations were intended for use by nonmining industries, where product substitution and engineering and process controls are generally more feasible.

Data from 1984 indicate that 94.4% of the workers in U.S. underground uranium mines accumulated annual radon progeny exposures of less than 2 WLM. Information obtained from the Bureau of Mines indicates that it is now technically feasible to achieve an annual radon progeny exposure of 1.0 WLM. NIOSH therefore recommends that cumulative exposure to radon progeny be limited to 1.0 WLM per year. This recommendation is intended to protect the health of America's underground miners, but it is tempered by the fact that it is not currently feasible to achieve annual exposures lower than 1.0 WLM using work practices and engineering controls. To meet the NIOSH recommendation for an annual cumulative exposure of 1 WLM and to assure that mining is a viable year-long occupation, NIOSH believes that the daily average work shift concentration of radon progeny should not exceed 1/12 WL in any work area (at an average concentration of 1/12 WL, a 170-hr work month yields 1/12 WLM, or approximately 1 WLM over 12 months).

Although adherence to the NIOSH REL will significantly reduce the risk of lung cancer in underground mine workers, it will not eliminate it. No effective medical procedure currently exists to treat lung cancer caused by exposure to radon progeny. Furthermore, it has been demonstrated that exposure to both radon progeny and tobacco smoke results in a combined lung cancer risk that is greater than the risk posed by radon progeny or smoke alone. The interaction between radon progeny and smoking is at a minimum additive, and more likely multiplicative. Cigarette smoking should therefore be emphasized as an even greater detriment to mine workers exposed to radon progeny than it is to the general public. The implementation of smoking cessation programs should reduce the incidence of lung cancer in underground mine workers.

Because radon progeny are ubiquitous, exposure cannot be totally eliminated. For U.S. residents, the average annual nonoccupational exposure to radon progeny from natural geologic sources is approximately 0.2 WLM, and annual occupational exposures may be considerably higher. The REL of 1 WLM per year is not designed for the population at large, and no extrapolation is warranted beyond occupational exposures in underground metal or nonmetal mines. The REL is designed only for the radon progeny exposures of underground metal and nonmetal mine workers.

# IV. RESEARCH NEEDS

The following research is needed to further reduce the risk of lung cancer resulting from occupational exposure to radon progeny.

# A. Epidemiologic Studies

Needs for epidemiologic studies have been identified as follows:

- A need exists for a followup study of the U.S.
miner cohort from the original Public Health Service study to explore the risk of lung cancer among nonsmoking miners exposed to low concentrations of radon progeny. NIOSH is currently resurveying this cohort to update exposure histories and gather additional information on smoking behavior, dietary practices, and tumor cell types.

- A study is needed to determine whether radon gas itself or other contaminants that might be found in uranium mines are associated with increased morbidity and mortality.

- An epidemiologic study of injuries is needed in the mining industry to identify safety-related problems, since this industry has one of the highest injury rates in the United States. Particular emphasis should be given to whether or not respirator use is associated with increased injury and health risks. This investigation should examine slips, trips, falls, and heart attacks.

# B. Engineering Controls and Work Practices

Research should be conducted to develop more effective control technology methods for reducing exposure to radon progeny to less than 1 WLM. A control technology assessment of the uranium mining industry will assist in this effort by examining existing state-of-the-art technologies and work practices and by recommending new methods for controlling exposure to radon progeny.

# C. Respiratory Protection

The following two types of research are recommended for respiratory protection:

- Research should be conducted to determine the extent of gamma radiation emitted from particles trapped on high-efficiency particulate air (HEPA) filters used on air-purifying respirators.

- A study should also be conducted to evaluate the physiological stress placed on miners who must wear respiratory protection. This study should be conducted both in the laboratory and in the mines.

# D. Environmental (Workplace) Monitoring

Studies needed for environmental (workplace) monitoring are as follows:

- Research should be conducted to characterize and evaluate the importance of particle size, unattached fraction, and condensation nuclei concentrations in estimating the bronchial dose of radon progeny. The dose of alpha radiation affecting the bronchial airways depends on the size of the particles to which the progeny are attached. Recent studies have shown that as particle size decreases, the concentration of radon progeny increases.

- Continued research is also needed to determine which factors affect the equilibrium between radon progeny and radon gas in mines. These studies should examine how such information may be used to predict the extent of exposure to radon progeny. Additional development and field testing of personal sampling devices is also needed for more complete determination of a miner's daily exposure.

# V. PUBLIC HEALTH PERSPECTIVE

In developing this document, NIOSH has been challenged to carefully consider all of the Institute's legislative, scientific, and moral responsibilities. NIOSH has been required to review diverse scientific data that are subject to uncertainty and then, in keeping with its mandates, to recommend criteria that will attain the highest level of health protection while accounting for other factors such as technical feasibility and insights gained through research and development. To develop a public health perspective on the risk posed by occupational exposure to radon progeny, NIOSH must weigh a number of factors, as follows.
1. Human and animal data both clearly establish that exposure to radon progeny increases the risk of lung cancer. The human data consist of a number of positive epidemiologic studies, several of which demonstrate an exposure-related health risk that is not accounted for by smoking behavior. The animal studies demonstrate that lung cancer risk increases with exposure in the absence of smoking. It is important to note that smoking by miners appears to greatly exacerbate the risk of lung cancer posed by exposure to radon progeny alone.

2. The NIOSH risk assessment, based on a USPHS study of uranium miners, demonstrated a significant exposure-response relationship. This analysis indicates that exposure to radon progeny at the current MSHA occupational exposure limit of 4 WLM per year over a working lifetime will result in 42 excess lung cancers per 1,000 miners. A miner's working lifetime has been defined as 30 years (MSHA uses 30 years as a miner's working lifetime). Risk declines substantially if a lower annual cumulative exposure is received over the working lifetime.

Any risk assessment is subject to uncertainty because risk assessment models may not reflect risk in a completely reliable way and because the data on which they are based are subject to uncertainties and limitations. The Cox proportional hazards model, which was chosen for the NIOSH risk assessment, is considered one of the strongest analytical approaches for longitudinal epidemiologic data. But it is not clear how accurately the data can be extrapolated to predict risk below the levels of observed exposure. Current biological theory hypothesizes that carcinogenic processes involve an initiation stage that is followed by other stages before an actual malignancy is established. The essential characteristics of all of these stages remain to be delineated. The Cox proportional hazards model is very powerful in describing human risks based on epidemiologic data. Its strength is partly due to the model's ability to accommodate long followup periods during which changes occur in some of the risk factors (e.g., cumulative exposure). Nevertheless, it is not clear how accurately the Cox model or any other risk assessment model predicts the risk from a multistage cancer process when exposures are below levels that have been studied.

The USPHS study on which the NIOSH risk assessment was based is an extensive study, but the risk assessment is subject to uncertainties and limitations because of the nature of the data. For example, more uncertainty is inherent in the risk estimates at the lower range of exposure because a relatively small proportion of this study population received the lowest cumulative exposures. Approximately 7% of the USPHS uranium miners (a group that included 7 lung cancers) had received cumulative occupational exposure levels of 30 WLM or less. At lower cumulative exposure levels, even smaller proportions of the cohort and fewer lung cancers were represented. Thus point estimates at these lower cumulative exposure levels would be more subject to the influence of chance occurrences. In addition, exposure levels are subject to measurement error. The uncertainty of exposures has been estimated to range from a relative standard deviation of 38% to as high as 97% (see Appendix II).

3. Radon progeny and their associated risk are present in our ambient environment as well as in the mining environment and cannot be totally eliminated.
Radon progeny are ubiquitous in that they emanate from all ore-bearing deposits containing elements that decay to produce radon gas. The national average exposure to radon progeny has been estimated to be 0.2 WLM per year. Areas containing large amounts of ore-bearing deposits (e.g., the Colorado Plateau) are likely to have higher-than-average background levels of radon progeny. The NIOSH risk assessment uses a fitted estimate of 0.4 WLM as the background exposure (the average annual cumulative exposure incurred in nonoccupational environments) in the Colorado Plateau. Although no adequate measurement data are available to characterize the level and variability of exposure in the homes of the general population and of miners, it is clear that everyone is exposed and that the degree of exposure depends on each individual's home and work environment.

The limits of concentration detection and the accuracy of the measurement techniques become a potential problem when quantifying the very low radon progeny exposure and concentration levels that may be found in the ambient environment (these levels are generally much lower than those found in underground mining environments). Extrapolation of risk to such low levels would become even more problematic than at higher levels because of the reduced accuracy of measuring such concentration levels.

The elimination of radon progeny from the ambient and mining environments is not possible. The primary engineering method for controlling radon progeny exposure is dilution ventilation. However, because radon progeny are ubiquitous in our ambient environment, no source of air is free of contamination. The only protective equipment that would eliminate exposure to radon progeny is the self-contained breathing apparatus (SCBA). SCBAs are unacceptable for wear in the ambient environment and represent a significant safety hazard if worn extensively in the mining environment.

4. It is valuable to compare the lung cancer risks associated with the miner's occupational and background exposures. No assessment has yet been made of the lung cancer risk associated with background exposures, either for the general U.S. population or for the population living in the Colorado Plateau. Until studies in homes are completed, it will be impossible to directly contrast the risks of occupational and background exposures. Nonetheless, it is important to consider occupational risk in the context of the lung cancer risk experienced by the general population. On the basis of State vital statistics records, NIOSH estimates that the lifetime risk of lung cancer in the Colorado Plateau, uncorrected for smoking, is approximately 45 lung cancers per 1,000 white males. Unfortunately, accurate lung cancer rates are not available for nonsmokers in the Colorado Plateau.

Given this value for the lung cancer risk of background exposure, another question that must be considered is to what level it would be reasonable to control occupational risk. In its benzene decision, the U.S. Supreme Court gave the following example as the basis for evaluating the occupational risk of chemically induced leukemia: An exposure associated with 1 excess death per 1 million exposed persons might pose an acceptable risk, whereas an exposure associated with 1 excess death per 1,000 exposed persons would pose a significant risk that should be reduced. This example is useful, but it cannot be strictly applied in all cases, since it was offered as an illustration and not a fixed rule.
In its benzene decision, the U.S. Supreme Court gave the following example as a basis for evaluating occupational risk of chemically induced leukemia: an exposure associated with 1 excess death per 1 million exposed persons might pose an acceptable risk, whereas an exposure associated with 1 excess death per 1,000 exposed persons would pose a significant risk that should be reduced. This example is useful, but it cannot be strictly applied in all cases, since it was offered as an illustration and not a fixed rule. In the specific case of lung cancer risk associated with radon progeny exposure, the example cannot be strictly applied because the risk of lung cancer is much greater than the risk of leukemia at background exposure levels and because cigarette smoking is known to create a confounding effect that greatly increases risk. An excess of 1 lung cancer death per 1,000 would probably not be detectable in a general population if the data were subject to considerable uncertainty, since it would necessitate differentiating between 46 and 45 deaths per 1,000.

5. The technical feasibility of achieving lower exposure levels is subject to the limitations of available technology. A report commissioned by the Bureau of Mines [Bloomster et al. 1984a, 1984b] has been the primary source for NIOSH's assessment of the technical feasibility of lower exposure levels. On the basis of an extensive engineering analysis of two uranium mines, these investigators indicated that it might be feasible to meet an operating standard of 1 WLM using the best available engineering controls. They expressed doubt about the technical feasibility of operating these uranium mines at a standard of 0.5 WLM. This analysis has been used by NIOSH to define exposure limits that are technically achievable with the best available technology.

These are some of the issues that have been considered by NIOSH in developing a recommended standard to prevent lung cancer associated with exposure to radon progeny. This process of weighing risk from a public health perspective parallels the philosophy of risk presented in the 1985 document entitled Risk Assessment and Risk Management of Toxic Substances: A Report to the Secretary. In the process of developing this public health perspective, NIOSH has performed the following actions:

- NIOSH has identified a public health hazard posed by an occupational exposure. Exposure to radon progeny that occurs in underground mines and the ambient environment has been shown to cause a significant increase in lung cancer among uranium and other underground miners.

- NIOSH has developed recommendations in a manner that is prudent and in concert with the public need. This process was accomplished by complying with the Institute's legislative mandates to attain the highest level of health protection while at the same time considering other factors such as technical feasibility.

- NIOSH has sought appropriate public participation by eliciting external reviews. Reviews were requested from more than 60 individuals or groups, including industry, labor, academia, and government representatives. NIOSH has received and considered the comments from more than 30 of these reviewers.

- NIOSH has communicated risk understandably to both experts and lay persons. NIOSH has expressed risk as relative risk, which most epidemiologists and biostatisticians believe to be the most appropriate mode of expressing human cancer risk. NIOSH has also expressed risk as lifetime excess risk per 1,000 miners, an expression that can easily be interpreted by both experts and lay persons.

- NIOSH has used all currently available information and the most extensive pertinent set of human data to estimate risk. The USPHS study of uranium miners is the most extensive set of data available in the United States on the radon progeny exposure of underground miners. Unlike other analyses, this study used the entire qualifying cohort in the risk assessment, regardless of exposure level.
These investigators felt this was the most valid way to analyze epidemiologic data using the selected model.

- NIOSH has considered alternative recommendations for risk control that are based on viable exposure limits and engineering controls. The considered options ranged from proposing no REL to prohibiting all occupational exposure to radon progeny. The REL presented in this document was chosen after a thorough weighing of the available data and Institute mandates.

- NIOSH has advanced the process of risk assessment and policy development by conducting the most thorough risk assessment possible on underground miners exposed to radon progeny. The NIOSH risk assessment permitted estimation of lung cancer risk at the lower range of the observed exposure levels. The assessment also suggested meaningful research areas such as lung cancer risks at low radon exposure levels, synergistic effects of other exposures such as cigarette smoke, effects of radon progeny on late- versus early-stage cancer, and the need for improved engineering control of exposure.

An extensive and complicated process has been used to develop the recommendations in this criteria document. After weighing the conclusions drawn from available data, the mandates of the Institute, and the public health issues, NIOSH recommends an annual cumulative exposure limit of no more than 1 WLM per year. However, as stated earlier, even this exposure poses a significant risk of lung cancer over a working lifetime. Thus NIOSH further recommends that mine operators regard this REL as an upper limit and that they make every effort to limit radon progeny to the lowest possible concentrations.

In addition, NIOSH wishes to emphasize the fact that this standard contains many important provisions in addition to the annual exposure limit. These include recommendations for limits on work shift concentrations of radon progeny, sampling and analytical methods, recordkeeping, medical surveillance, posting of hazard information, respiratory protection, worker education and notification, and sanitation. All of these recommendations help minimize risk.

NIOSH recognizes its commitment to protect the health of the Nation's miners and will continue to reexamine this complex occupational health issue. Research on new and more effective methods for reducing occupational exposures will improve the available control technologies. Additional data on exposure levels and associated health risks will permit firmer estimates of risk and hence better recommendations. NIOSH will revise its recommended standard as important new data become available.

# REFERENCES

AIF [1984]. Exposure of U.S. underground uranium miners to radon daughters in 1982 as reported by 29 underground uranium mine operations, from a survey made April 24, 1984. Unpublished report submitted to the Mine Safety and Health Administration by the Atomic Industrial Forum.

AIF [1986]. Exposure of U.S. underground uranium miners to radon daughters in 1984 as reported by 20 underground uranium mine operations, from a survey made December 11, 1986.

Borak TB, Franco E, Schiager KJ, Johnson JA, Holub HF. Evaluation of recent developments in radon progeny measurements. In: Gomez M, ed. International Conference: Radiation Hazards in Mining. New York, NY: Society of Mining Engineers of the American Institute of Mining, Metallurgical, and Petroleum Engineers, Inc., pp. 419-425.

Boyd JT, Doll R, Faulds JS, Leiper J [1970]. Cancer of the lung in iron ore (haematite) miners. Br J Ind Med 27:97-105.

Breslin AJ, George AC, Weinstein MS [1969].
Investigation of the radiological characteristics of uranium mine atmospheres. New York, NY: Atomic Energy Commission, New York Operations Office, Health and Safety Laboratory. HASL-220, December 1969.

Brookins DG [1986]. Indoor and soil Rn measurements in the Albuquerque, New Mexico, area. Health Phys 51:529-533.

In fifteen epidemiologic studies, researchers reported excess lung cancer deaths among underground miners who worked in mines where radon progeny were present. In addition, several studies show a dose-response relationship between radon progeny exposure and lung cancer mortality. In two recent studies, investigators report excess lung cancer deaths due to mean cumulative radon progeny exposures below 100 Working Level Months (WLM) (specifically, at 40-90 WLM and 80 WLM). The health risks from other exposures (i.e., arsenic, diesel exhaust, smoking, chromium, nickel, and radiation) in the mining environment can affect lung cancer risks due to radon progeny exposure. Unfortunately, the literature contains limited information about other exposures found in mines. The available information concerning whether cigarette smoke and radon progeny exposures act together in an additive or multiplicative fashion is inconclusive; nevertheless, a combined exposure to radon progeny and cigarette smoke results in a higher risk than exposure to either one alone.

X-ray surveillance and sputum cytology appear to be ineffective in the prevention of radon progeny-induced lung cancers in individual miners; therefore, these techniques are not recommended. Also, at this point, there is insufficient evidence to conclude that there is an association between one specific lung cancer cell type and radon progeny exposure.

According to annual radon progeny exposure records from the Atomic Industrial Forum (AIF) and MSHA, it is technically feasible for the United States mining industry to meet a standard lower than the current annual exposure limit of 4 WLM. Recent engineering research suggests that it is technically feasible for mines to meet a standard as low as 1 WLM. Based upon qualitative analysis of these studies and public health policy, NIOSH recommends that the annual radon progeny permissible exposure limit (PEL) of 4 WLM be lowered. NIOSH wishes to withhold a recommendation for a specific PEL until completion of a NIOSH quantitative risk assessment, which is now in progress.

# II. INTRODUCTION

The National Institute for Occupational Safety and Health (NIOSH) submits this report in response to the Mine Safety and Health Administration's (MSHA) Advance Notice of Proposed Rulemaking (ANPR) concerning radiation standards for metal and nonmetal mines. This report evaluates fifteen epidemiologic studies that examine the lung cancer mortality of underground miners exposed to radon progeny. The fifteen studies are divided into two groups: five primary studies and ten secondary studies. Overall, the ten secondary studies provide additional information about the association between lung cancer mortality and radon progeny exposure, yet have more limitations (in study design, study population size, radon exposure records, thoroughness of follow-up, etc.) than the five primary studies. Recommendations for the medical surveillance of underground miners exposed to radon progeny are included. The United States mining industry's ability to meet a radon progeny exposure standard lower than the present four Working Level Months (WLM), based solely on technical feasibility, is also discussed.

A working level (WL) is a standard measure of the alpha radiation energy in air. This energy can result from the radioactive decay of radon (Rn-222) and thoron (Rn-220) gases. A WL is defined as any combination of short-lived radon decay products (polonium-218, lead-214, bismuth-214, polonium-214) per liter of air that will result in the emission of 1.3 X 10^5 million electron volts (MeV) of alpha energy. NIOSH defines a WLM as an exposure to 1 WL for 170 hours.
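The WLM arithmetic above is simple enough to state as a one-line helper. The following sketch assumes nothing beyond the definition just given; the example concentration (0.3 WL) and the 1,500-hour full-time year (the AIF threshold cited later in this report) are illustrative choices.

```python
# Cumulative exposure (WLM) from a progeny concentration (WL) and hours
# worked, per the NIOSH definition above: 1 WLM = 1 WL for 170 hours.

def wlm_from_wl(concentration_wl: float, hours: float) -> float:
    return concentration_wl * hours / 170.0

print(wlm_from_wl(1.0, 170))    # 1.0 WLM, by definition
print(wlm_from_wl(0.3, 1500))   # ~2.6 WLM for a full-time year at 0.3 WL
```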
For the information of the reader, two appendices and a glossary are included. Appendix A contains data from the Atomic Industrial Forum (AIF), an organization representing the interests of the United States uranium mining industry, and from MSHA on the numbers and radon progeny exposures of underground miners in the United States. Appendix B lists methods currently in use for controlling radon progeny exposures underground. Finally, there is a glossary containing epidemiologic and health physics terms.

# III. EVALUATION OF EPIDEMIOLOGIC EVIDENCE

# A. Introduction

This report examines five primary and ten secondary epidemiologic studies of underground miners. It describes the important points, strengths, and limitations of each study. The five primary epidemiologic studies examined lung cancer mortality among uranium miners in the United States, Czechoslovakia, and Ontario; iron miners in Malmberget, Sweden; and fluorspar miners in Newfoundland. The ten secondary epidemiologic studies examined mortality among iron ore miners in Grangesberg, Gallivare, and Kiruna, Sweden; zinc-lead miners in Sweden; metal and Navajo uranium miners in the United States; tin and iron ore miners in Great Britain; uranium miners in France; and tin miners in Yunnan, China. Finally, two recent studies analyze the interaction between radon progeny exposure and smoking.

This report focuses on the lung cancer experience of these fifteen underground mining cohorts. In general, the study cohorts did not show excess mortality due to cancers other than lung, except for four studies that reported excess stomach cancers and one report of excess skin cancer among underground miners. Excess stomach cancers were reported among underground tin miners in Cornwall, England (standardized mortality ratio (SMR) = 200; p value unspecified by the authors but estimated at p<0.05 from the observed deaths and the Poisson frequency distribution); gold miners in Ontario (SMR = 148, p<0.001); metal miners in the United States (SMR = 149, p<0.01); and iron ore miners in Sweden (SMR = 189, p<0.01). Sevcova et al. (1978) reported excess skin cancers among underground uranium miners in Czechoslovakia (an observed skin cancer incidence of 28.6 versus an expected of 6.3 per 10,000 workers; p<0.05), which they attributed to external alpha radiation from radon progeny. Arsenic is present in the Czechoslovakian uranium mines (arsenic levels unidentified), and the association between arsenic and skin cancer is well documented. The excess mortality from stomach and skin cancers among these cohorts needs further study.

In all five primary epidemiologic studies, the exposure records for the individual miners lack precision. Frequently, an individual miner's exposure was calculated from an annual average radon progeny exposure estimate for a particular mine or mine area; thus, an individual miner's true exposure could vary greatly from the estimated exposure. Of the five primary epidemiologic studies, the Czechoslovakian study has the best records for radon progeny exposure.
The Swedish study has limited exposure records for its cohort (8 years of measurements for 44 years of follow-up), and the miners' mean exposures were about five WLM per year. The lower radon progeny concentrations found in Swedish mines indicate that the potential error due to excursions in concentration was less than in mines in the United States, Newfoundland, and Ontario, where higher concentrations were measured (Table III-2). Overall, the radon progeny exposure records from the United States, Ontario, and Newfoundland have similar limitations (detailed in sections B, D, and F). WL measurements made in uranium mines in the United States and fluorspar mines in Newfoundland fluctuated greatly, reaching unusually high radon progeny concentrations: in the fluorspar mines, a maximum of 200 WL, and in the uranium mines, 3 out of 1,700 mines averaged over 200 WL. NIOSH is currently investigating the variability and quality of the exposure records kept for uranium mines in the United States. Exposure data quality, although important, does not solely determine a study's strength; one should also evaluate the epidemiologic and statistical methods used. This review reports both the attributable and relative risk estimates for lung cancer (see Glossary for definitions) when they are provided by the authors.

# B. Uranium Miners in the United States

# Description

The United States Public Health Service (USPHS) conducted an epidemiologic study examining mortality among underground uranium miners from the Colorado Plateau. Job turnover in the uranium mines was substantial; the majority of miners worked less than 10 years underground (not accounting for gaps in employment). Nevertheless, approximately 33 percent of the cohort worked 10 or more years and 7 percent worked 20 or more years underground in uranium mines (not accounting for gaps in employment). The number of months worked underground ranged from 1 to 370 (over 30 years), with a median of 48 months (4 years). Some miners worked underground in uranium or nonuranium mines before they entered the USPHS study, and before radon progeny levels were recorded. Among these miners, 13.7 percent started mining before 1947. The cohort's early radon progeny exposures probably represented a small proportion of their total lifetime exposures; Lundin et al. (1971) noted that the study group accumulated only 16 percent of their total radon progeny exposure before 1950.

A bias toward overestimating exposure and a narrow sampling strategy were two major influences affecting the miners' exposure records. First, some of the USPHS exposure data records were biased by including disproportionately more measurements from mine areas with high radon progeny levels. Radon progeny samples taken during 1951-1960 were stated to be representative of the mine areas in which miners received exposures. Also, the U.S. Bureau of Mines (USBOM), the New Mexico State Health Department, and the Arizona mine inspector continued to take representative samples after 1960. During 1960-68, however, additional radon progeny samples were collected for control purposes by mine inspectors from Colorado, Utah, and Wyoming. In this case, inspectors sampled disproportionately more mines and mine sections that had high radon progeny levels. This sampling bias also tended to increase estimates for geographic areas of mining (locality, district, or state). Thus, some average annual WL exposure records collected during 1960-68 overestimated the uranium miners' exposure.
Second, there is little exposure data available for some uranium mines, especially small mines. For the entire period 1951-68, nearly 43,000 measurements were available to characterize about 2,500 uranium mines. More samples were usually taken in the larger mines that employed most of the miners. In many mines, however, only one or two samples were ever taken. At the present time, the USPHS exposure data set has 34,120 "average" (undefined by Lundin et al. 1971) annual WL exposure records from 1,706 surface and underground uranium mines, made over a 20-year period. These records consist of "guesstimates," "estimates," "extrapolations," and actual WL measurements (Table III-1).

A later analysis (1981) used the formula for attributable risk to determine that about 80 percent of the deaths due to lung cancer in this cohort were attributable to uranium mining. As of 1971, statistically significant excess cancers were found in all radon progeny exposure categories above 120 WLM; the exposure categories were: less than 120, 120-359, 360-839, 840-1799, 1800-3719, and 3720 and over, in WLM. NIOSH continues to monitor the mortality experience of this cohort, particularly those workers exposed at or below 120 WLM.

The terms "guesstimates," "estimates," and "extrapolations" were defined in this manner by Lundin et al. (1971); NIOSH recognizes the limitations of these definitions, but uses them for consistency with published reports.

# Strengths

This is a large, well-traced, and well-analyzed study; the study cohort is clearly defined. It contains smoking histories and radon progeny exposure records for the same individuals. Although the radon progeny exposure data were measured by different persons, a standard sampling and counting technique was used and the technical quality of the measurements was good.

# Limitations

The major limitations in the exposure data quality are that there were few measurements for small mines (although fewer miners worked in these mines), miners' work histories were self-reported, and many exposures were overestimated during 1960-68. Another limitation is that many miners fell into high radon progeny exposure categories; however, 20 percent of the miners were assigned to the category below 120 WLM. Several reviewers have found that the USPHS study gives lower estimates of risk per WLM for radon progeny exposure than the other four major epidemiologic studies. This may be due to the overestimation of exposure by Lundin et al. (1971) or other factors.

# C. Uranium Miners in Czechoslovakia

# Strengths

One positive feature of this study is the large amount of exposure data available. Radon gas measurements started in 1948, with a minimum mean of 101±8 measurements per mine. Other strengths include the number of workers exposed to low radon progeny levels, a long period of follow-up (average of 26 years by 1975), and the limited exposure to radon progeny from other underground mining (less than 2 percent of the study group members mined nonuranium ores). In addition, Sevc et al. (1984) investigated the hazards from other exposures, such as silica, arsenic, asbestos, chromium, nickel, and cobalt, and concluded that these were not causing the excess lung cancer risk of the uranium miners. Sevc (1970) reported maximum dust levels between 2.0 and 10.0 mg/m3 during 1952-56, and stated that the miners' risk of silicosis was relatively low. Chromium, nickel, and cobalt were present only in trace amounts in mine dusts.
Although arsenic was present in these mines (concentration unspecified), there was no significant difference in lung cancer mortality between two mining areas with comparable radon progeny exposure levels but fiftyfold differences in arsenic concentrations.

# Limitations

The first limitation of the Czechoslovakian study is that the exposure estimates made before 1960 were based on radon gas, rather than direct radon progeny, measurements. A second limitation is that the cohort definition and the epidemiologic methods used by the Czechoslovakian researchers make it difficult to compare their findings with those from the other four primary studies.

The radon gas and progeny equilibrium ratio is necessary to estimate WL concentrations from radon gas measurements correctly. The authors provided insufficient detail about the equilibrium ratio in the Czechoslovakian uranium mines to allow evaluation of the data quality. If Sevc et al. (1976) had equilibrium ratio records or a reliable way to estimate the equilibrium ratio, then using radon gas exposure measurements to estimate WL would not seriously bias their results.

Sevc, Kunz, and associates defined their cohort as men who entered employment in the Czechoslovakian uranium mines in the years 1948-1953 (for Group A miners) and worked underground at least 4 years. It is unclear from the published reports whether the Czechoslovakian miners accumulated their person-years at risk of dying (PYR) from the time they entered the cohort or from their time of first exposure. The cohort's average of 26 years of follow-up by 1975 implies that the PYR were accumulated from a miner's time of first exposure. In most epidemiologic studies, a miner's PYR accumulate after he enters the cohort. The Czechoslovakian method of accumulating PYR makes it difficult to directly compare their lifetable analysis and findings with those from other miner studies. Sevc et al. (1984) also neglected the effect of smoking in their data analysis, although they stated that this would not affect their results because the percentage of cigarette smokers among miners (70 percent) was comparable to that among the general male population of Czechoslovakia.
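To show why the missing equilibrium ratio matters, here is a minimal sketch of the conversion it enables. It assumes the usual convention that 100 pCi/L of radon gas in 100 percent equilibrium with its progeny corresponds to 1 WL (the same convention used for the Kiruna measurements later in this chapter); the example ratios are illustrative, except for 0.22, the mean equilibrium factor reported for the French mines among the secondary studies.

```python
# Progeny concentration (WL) estimated from a radon gas measurement
# (pCi/L) and an assumed gas-progeny equilibrium ratio, using the
# convention that 100 pCi/L at full equilibrium equals 1 WL.

def wl_from_radon_gas(pci_per_liter: float, equilibrium_ratio: float) -> float:
    return pci_per_liter * equilibrium_ratio / 100.0

# The same gas measurement implies very different WL values depending
# on the (often unreported) equilibrium ratio:
for ratio in (0.22, 0.5, 1.0):
    print(ratio, wl_from_radon_gas(100.0, ratio))
```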
# D. Uranium Miners in Ontario

# Strengths

This study's greatest strength lies in the miners' low mean cumulative exposures (40-90 WLM) to radon progeny, exposures much lower than those reported in the United States, Czechoslovakian, and Newfoundland studies (see Table III-2, at the end of this chapter). Another good feature of this study is that the researchers carefully traced uranium miners' work experience in other hard rock mines. Large numbers of uranium miners in Ontario (66 percent of the study cohort) had some hard rock mining experience.

# Limitations

This study has three disadvantages. First, the cohort is severely truncated, with only about 18 years (median value) of follow-up and a median attained age of 39 years by 1977. A short follow-up on a young cohort creates problems because lung cancer is rarely manifested before age 40. Second, thoron progeny and gamma radiation levels vary and can reach substantial levels in some Ontario uranium mines. For example, Cote and Townsend (1981) found that thoron progeny working levels were about half the radon progeny working levels in an Elliot Lake, Ontario, uranium mine. The Kusnetz method is frequently used to measure radon progeny in mines and can discriminate between radon and thoron progeny. When used improperly, however, the Kusnetz method can mistakenly count thoron progeny as radon progeny, so that the true radon progeny exposure may be overestimated. From the limited information in the published reports, it is unclear whether measurement error was introduced by using the Kusnetz method improperly.

There are no epidemiologic data available to estimate the health risks due to thoron progeny. The Advisory Committee on Radiological Protection of the Canadian Atomic Energy Control Board (AECB) reviewed research on microdosimetry which indicated that the main contribution to the WLM from thoron progeny comes from the radioactive decay of long-lived Pb-212 (ThB; half-life = 10.6 hours). Its half-life is long enough for the Pb-212 to translocate from the lungs into other tissue, where it emits much of its alpha energy. Radon progeny have shorter half-lives than Pb-212 and emit most of their alpha energy in the lung. Therefore, the AECB concluded that the risk of lung cancer induction by 1 WLM of thoron progeny is about one third of that for 1 WLM of radon progeny.

Finally, Muller et al. (1985) published limited information about the smoking habits of these miners, and the researchers' present risk estimates are uncorrected for smoking. Out of a group of 57 uranium miners who died of lung cancer, only one was a nonsmoker; the rest smoked. Muller and associates plan to conduct a case-control study of the effects of smoking upon lung cancer risk in miners. Although they stated that correction for smoking will not substantially change their risk estimates, at low levels of radon progeny exposure it is important to take into account the effect of smoking; thus, definitive conclusions regarding this study must await the smoking history analysis.
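A minimal sketch of the AECB weighting just described, assuming (as an illustration only, not a regulatory formula) that a risk-weighted exposure can be formed by discounting thoron WLM by the one-third factor; the example values echo the Elliot Lake observation that thoron progeny levels were about half the radon progeny levels.

```python
# Risk-weighted exposure combining radon and thoron progeny WLM, using
# the AECB estimate that 1 WLM of thoron progeny carries about one third
# the lung cancer risk of 1 WLM of radon progeny. Illustrative only.

THORON_RISK_WEIGHT = 1.0 / 3.0

def risk_weighted_wlm(radon_wlm: float, thoron_wlm: float) -> float:
    return radon_wlm + THORON_RISK_WEIGHT * thoron_wlm

# 2 WLM radon + 1 WLM thoron (thoron at about half the radon level):
print(risk_weighted_wlm(2.0, 1.0))   # 2.33 WLM radon-equivalent
```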
# E. Iron Miners in Malmberget, Sweden

# Strengths

The strengths of this study include the relatively low radon progeny exposures of the miners (mean of 4.8 WLM per year), the long follow-up period, and the stability of the work force. The ascertainment of vital status (99.5 percent) and the confirmation of diagnoses for causes of death were thorough (about 50 percent of all deaths in Sweden are followed by autopsy). In addition, Radford and St. Clair Renard (1984) used case-control methods and environmental measurements to rule out health risks from diesel exhaust, iron ore dust, silica, arsenic, chromium, nickel, and asbestos in the mines.

# Limitations

The major limitations of the iron miners' study were the limited exposure data available for analysis and an unclear cohort definition; there was also a question about how the authors adjusted for lung cancer latency.

Radon gas in the Swedish iron mines was first measured in 1968. That means that for the average 44 years of follow-up, there exist exposure estimates based on actual measurements for only 8 years. The researchers reconstructed past concentrations based on measurements made at each mine level and area during 1968-1972 and on knowledge of the natural and mechanical ventilation used previously. They assumed that mine ventilation systems and radon progeny concentrations during 1968-72 were comparable with those in the past, by analogy with quartz dust levels measured in the mines since the 1930's. The researchers calculated average yearly exposures in WLM for each decade from the average hours per month underground and radon progeny concentrations in each area, weighted by the number of man-hours worked underground.

These crude calculations make tenuous the connection between a given individual miner and a particular radon progeny exposure level. Nonetheless, the iron miners as a group probably received very low average exposures to radon progeny compared to uranium miners. Radford et al. stated: "...we consider that average exposures are probably accurate to ±30 percent"; thus, the true average exposure could be between 56 and 104 WLM (i.e., ±30 percent around the cohort mean of 80 WLM).

Exactly how Radford and St. Clair Renard defined the cohort, and calculated or excluded the PYR, was unclear from the article. To account for a 10-year lung cancer latency, they excluded PYR for lung cancer during the first 10 years after mining was begun. From their description, it is unclear when mining was begun and whether PYR were counted from the beginning of mining, January 1, 1951, or some other date. It is assumed that most of the miners' PYR were excluded from the years prior to 1951, rather than the period 1951-1976 (years when the authors analyzed mortality), and that the mining population was stable. If one makes these assumptions (unstated by the authors), then adjusting for latency by excluding PYR during the first 10 years after the start of mining should produce unbiased SMR calculations. On the other hand, adjustments for latency that incorrectly exclude many PYR lower the expected number of deaths, thereby possibly overestimating the SMR and the risk due to radon progeny. Because of insufficient information, NIOSH is unable to completely evaluate the effect of the 10-year adjustment for latency on the SMR in this study, although it appears to be minor.

# F. Fluorspar Miners in Newfoundland

# Strengths

One strength of this study was the long follow-up period; workers were followed for an average of about 30 years. Also, the researchers obtained smoking history data for 41 percent of the cohort.

# Limitations

There were three principal limitations in this study. First, there was limited exposure data available before 1968 (see above). Second, the study failed to trace large numbers of workers; 591 workers who lacked adequate personal identifying information (name and year of birth) were dropped from the analysis. Third, this study lacks an adequate basis for estimating expected deaths. Lung cancer rate comparisons between the mining population, with its many smokers, and the Newfoundland or Canadian national populations would exaggerate excess deaths due to radon progeny exposure. Morrison et al. (1985) tried to avoid this problem by generating the expected number of deaths among underground workers from a comparison with mortality rates among surface workers (adjusted for age, time period, and disease-specific mortality). A problem with this study design is that the control group may be exposed to radon progeny. Some of the men classified as surface workers (controls) may have received some radiation exposure, by means of either misclassification or unrecorded short periods of working underground. Also, it is difficult to correctly adjust for age, time period, and disease-specific mortality when there are proportionately fewer workers in the control group (surface workers) than in the exposed group (as of 1971, underground workers accounted for 57 percent of the total person-years). The lack of an adequate comparison group is a serious limitation, so risk estimates from this study must be viewed with caution.
# G. Secondary Epidemiologic Studies

The ten epidemiologic studies reviewed herein examine mortality among miner populations in Sweden, the United States, Great Britain, France, and China. Several studies demonstrated elevated radon progeny levels and excess lung cancer deaths among underground miners, but lacked information about radon progeny exposure or levels of other mine carcinogens. Other studies contained severe limitations or biases that also restricted their usefulness. Overall, the ten secondary studies provide additional information about the association between lung cancer mortality and radon progeny exposure, yet have more limitations (in study design, study population size, radon exposure records, thoroughness of follow-up, etc.) than the five primary studies. To be concise, the secondary studies are described in less detail than the primary studies.

# Iron Ore Miners in Grangesberg, Sweden

# Zinc-Lead Miners in Sweden

This case-referent study examined lung cancer mortality during 1956-76 among residents of the parish of Hammar, Sweden, an area with two zinc-lead mines. Twenty-nine subjects who died of lung cancer, including 21 who were underground miners, were matched with three referents who died before or after each case. Some problems with the study were the small number of cases and a failure to match for age or smoking status. Axelson and Sundell (1978) reported a sixteenfold increase (p<0.0001) in lung cancer mortality among the miners versus nonminers. Although they lacked individual information on exposure, they estimated a radon progeny level of about 1 WL in the mines, based on measurements made in the 1970's. These results should be viewed with caution, since the authors demonstrated that age was a confounding factor, yet they did not match cases and referents for age.

# Iron Ore Miners in Kiruna, Sweden

This study examined lung cancer mortality among residents of the Kiruna parish in Northern Sweden, an area containing two underground iron mines. One strength of this study is that migration in the Kiruna area was slight; therefore, nearly all former miners' deaths were registered in Kiruna. From 1950 to 1970 a total of 41 men in Kiruna between the ages of 30 and 74 years died of lung cancer. Thirteen of these were underground miners, and it is possible, although unclear in the report, that 18 were surface workers. One limitation of this study is that the age distribution of underground miners was unrecorded; therefore, proportional mortality was used instead of the lifetable method to calculate the expected mortality. Another limitation is that the expected mortality was not adjusted for smoking status, even though information from family and fellow workers indicated that 12 of the 13 underground miners smoked (8 smoked cigarettes, 4 smoked pipes). Jorgensen (1973) compared the 13 deaths observed among underground miners with expected deaths of 4.47, based on local rates, and 4.21, based on Swedish national rates. In both cases, he reported significantly elevated mortality (p<0.05) among the underground miners. Because this proportional mortality study involved few lung cancer cases (13 for underground miners and 28 for all other men in Kiruna), the results should be viewed with caution. Radon progeny exposure records were unavailable for the underground miners; however, there were measurements of 10-100 pCi/L of radon (about 0.10-1.0 WL at 100 percent equilibrium).
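Several of the p-values in this chapter are described as estimated "from the observed deaths and the Poisson frequency distribution." The sketch below shows that calculation for Jorgensen's Kiruna comparison (13 observed versus 4.47 expected deaths); the helper function and its name are illustrative, and the one-sided Poisson tail is assumed to be the intended test.

```python
# SMR and one-sided Poisson p-value from observed and expected deaths:
# p = P(X >= observed) where X ~ Poisson(mean = expected).
from scipy.stats import poisson

def smr_and_p(observed: int, expected: float) -> tuple[float, float]:
    smr = 100.0 * observed / expected
    p_value = poisson.sf(observed - 1, expected)   # P(X >= observed)
    return smr, p_value

smr, p = smr_and_p(13, 4.47)            # Jorgensen's Kiruna figures
print(f"SMR = {smr:.0f}, p = {p:.5f}")  # SMR ~291, p well below 0.05
```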
# Iron Ore Miners in Kiruna and Gallivare, Sweden

# Metal Miners in the United States

This cohort mortality study involved white male underground metal miners in the United States. The cohort was defined as miners who had completed, at a minimum, their fifteenth year of underground mining experience between January 1, 1937, and December 31, 1948. For the period 1947-1955 there were no radon measurements available; however, a committee of experts estimated that average monthly radon progeny exposures varied from 1 to 10 WLM.

# Tin Miners in Cornwall, Great Britain

# Uranium Miners in France

In 1956, 7,470 radon gas measurements were collected. From 1957 to 1970, about 20-30 radon gas measurements were collected per miner per year; from 1970 to the present, 57-70 per miner per year. The only limitation of these records is that they are based on radon gas, rather than direct radon progeny, measurements. At present, the mean factor of equilibrium in the French mines is 0.22. The miners' average annual radon progeny exposures varied from 2.5 to 4 WLM.

# Tin Miners in Yunnan, China

There were many excess lung cancers at low radon progeny exposures, i.e., an SMR of 436 (p value unspecified by the authors but estimated at p<0.05 from the number of observed deaths and the Poisson frequency distribution) at cumulative exposures below 140 WLM. Arsenic concentrations in ore samples were high, 1.50-3.53 percent. For the years 1950-59, it was estimated that a miner inhaled 1.99-7.43 mg arsenic per year. The authors suggested that the high arsenic content in the ore samples may cause lung cancer.

The strength of this study lies in the large number (12,243) of underground miners studied. One limitation is that the study cohort is ill-defined; the study design mixes aspects of a survey for incidence with a cohort study. Wang et al. (1984) fail to describe when the workers started mining and how many were lost to follow-up, and whether the 12,243 miners worked between 1975-81 or constituted all tin miners who ever worked underground. The major limitation appears when comparing this study with other mining research studies, because Wang and associates handled radon progeny measurement techniques and epidemiologic methods in a different manner. For instance, they did not mention whether their mortality statistics were adjusted for age or smoking status. Their comparison population, male residents of urban Shanghai municipality, has much higher lung cancer rates than males in rural Yunnan province. Therefore, the Shanghai comparison group was inappropriate and may have underestimated these miners' lung cancer risks. Another limitation is that arsenic exposure has been associated with lung cancer among copper smelter and pesticide workers. This research may be most useful for studying the interaction of two carcinogens, arsenic and alpha radiation from radon progeny, rather than for studying radon progeny lung cancer risks alone.

# H. Smoking

The two most thorough studies of the interaction between smoking and radon progeny exposure are those by Whittemore and McMillan (1983), using the U.S. white uranium miners data set, and by Radford and St. Clair Renard (1984), using the Swedish iron miners data set. The major flaw in other studies of the interaction between smoking and radon progeny exposure is an inadequate sample size of miners with both exposure records and smoking histories.
# Uranium Miners in the United States

Whittemore and McMillan (1983) examined lung cancer mortality among the white USPHS uranium miners cohort, based on a mortality follow-up through December 31, 1977. In their analysis, they included nine additional miner lung cancer deaths which occurred after December 31, 1977, for a total of 194 lung cancer cases (see section III.B). For each case, four control subjects were randomly selected from among those white miners born within 8 months of the case and known to survive him, yielding a total of 776 matched controls. A regression analysis of the radon progeny exposure and smoking data for cases and controls revealed that the data fit a multiplicative linear relative risk model, but showed "significantly poor fit" (p<0.01) for the additive linear relative risk model. The data demonstrated a synergistic effect; that is, the combined action of smoking and radon progeny was greater than the sum of the actions of each separately.

Whittemore and McMillan, based on the multiplicative linear relative risk model, suggested that miners who have smoked 20 pack-years of cigarettes (excluding tobacco use within the past 10 years) experience radiation-induced lung cancer rates per WLM that are roughly five times those of nonsmoking miners. (They estimated that B1, the excess relative risk per unit of radon progeny exposure, was 0.31 X 10^-2, and B2, the excess relative risk per unit of cigarette smoke exposure, was 0.51 X 10^-3.)

# Iron Miners from Malmberget, Sweden

# Conclusions Related to the Interaction of Radon Progeny Exposure and Smoking

Studies of white uranium miners in the United States and iron miners in Sweden support different models of risk due to radon progeny and smoking; the first supports a multiplicative model, the second an additive model. That these two studies arrive at different conclusions is not surprising, given the differences between the studies in statistical methods, cumulative exposure levels (the averages differed by a factor of 10), smoking histories, and methods of calculating expected deaths. Based on the presently available information, it is impossible to conclude whether the additive or the multiplicative model is the better description. Nevertheless, present research indicates a higher risk from combined exposure; data from both radiation exposure and smoking histories are essential for an accurate estimation of radiogenic lung cancer risks.
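A numerical sketch of the two competing models may help. The functional forms below follow the linear relative risk models named above; the coefficient values echo the Whittemore and McMillan figures quoted earlier, but the exponent and unit of B2 (taken here as per pack of cigarettes) are reconstructed assumptions, chosen because they reproduce the "roughly five times" figure in the text.

```python
# Multiplicative vs. additive linear relative risk models for combined
# radon progeny (w, in WLM) and smoking (s, here in packs) exposure.
# B1 is the value quoted in the text; B2's unit and exponent are
# assumptions (see lead-in).

B1 = 0.31e-2   # excess relative risk per WLM
B2 = 0.51e-3   # excess relative risk per pack (assumed)

def rr_multiplicative(w: float, s: float) -> float:
    return (1.0 + B1 * w) * (1.0 + B2 * s)

def rr_additive(w: float, s: float) -> float:
    return 1.0 + B1 * w + B2 * s

packs = 20 * 365   # 20 pack-years is roughly 7,300 packs
# Under the multiplicative model, the radiation-induced rate per WLM for
# such a smoker is (1 + B2*s) times the nonsmoker's rate:
print(1.0 + B2 * packs)               # ~4.7, the "roughly five times"
print(rr_multiplicative(120, packs))  # combined RR at 120 WLM
print(rr_additive(120, packs))        # additive model, for comparison
```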
# I. Discussion and Conclusions Related to the Epidemiologic Evaluation

# 1. The Five Primary Epidemiologic Studies

The five primary epidemiologic studies that examine lung cancer mortality among underground miners are the studies of uranium miners in the United States, Czechoslovakia, and Ontario, as well as iron miners in Sweden and fluorspar miners in Newfoundland. Despite the individual limitations of each study, the association of radon progeny exposure and lung cancer was shown to persist for all five studies, using different study populations and methodologies. There was an elevated lung cancer SMR and a dose-response relationship for radon progeny exposure and lung cancer among the five underground miners' cohorts; the higher the estimated radon progeny exposure, the greater the number of excess deaths. Some studies adjusted their mortality figures for the estimated latency of radiogenic lung cancer, yet the association between lung cancer cases and radon progeny exposure remained. Table III-2 is a summary of the observed and expected deaths and the SMR's in the five studies. These studies handled adjustments for latency, lagging dose, smoking history, or age as detailed in the footnotes of Table III-2. As yet, there is no one standard method to adjust person-years, expected deaths, or SMR's, or even agreement that these parameters should be adjusted.

All five studies lacked adequate radon progeny exposure data for individuals because, in general, these data were originally collected for monitoring, not research, purposes. In addition, some studies based the exposure assessment upon radon gas measurements, which must be converted to radon progeny estimates. It is reasonable, however, to extract what information is available from these five studies, rather than eliminate a particular study because of exposure data quality.

The primary studies of iron miners in Sweden and uranium miners in Czechoslovakia searched for other exposures (i.e., mineral ores, radiation, diesel fumes) in the mining environment. The Czechoslovakian uranium mines contained various amounts of arsenic, but only trace amounts of chromium, nickel, and cobalt. Researchers examined lung cancer mortality in two uranium mining localities that had similar radon progeny levels but a fiftyfold difference in arsenic concentrations. They failed to find a significant difference in mortality between the two groups of miners, concluding that arsenic was not affecting the lung cancer rates of underground miners in Czechoslovakia. Arsenic, chromium, and nickel were essentially absent in the Swedish iron mines. There were occasional inclusions of serpentine, but no identifiable asbestos fibers, in dust samples. The Swedish iron mines contained iron ore dust, but Stokinger (1984), after a review of the literature from health reports involving underground iron ore miners, iron and steel workers, foundrymen, welders, workers in the magnetic tape industry, and others, concluded that these studies failed to clearly demonstrate the carcinogenicity of iron oxide dust.

The influence of other types of radiation present in the mines, such as long-lived alpha, beta, and gamma radiations, cannot be determined from these five studies. The miners do not show an excess mortality from leukemia, a disease linked to high gamma radiation exposures. Most of the studies provided insufficient information about diesel fume exposures in the mines, so it is impossible to reach conclusions regarding the effect of diesel fume exposure upon lung cancer risk. In the Swedish iron mines, 70 percent of miners with lung cancer left underground work or died before diesel equipment was introduced in the 1960's; the remaining miners had brief diesel fume exposures immediately before death. Therefore, diesel fume exposure could not account for the excess mortality in the Swedish cohort.

Cigarette smoke appears to be the most important carcinogen common to the five primary studies. The proportion of cigarette smokers among underground miners in the United States, Newfoundland, and Sweden was greater than among the general male population in those countries. The influence of possible carcinogens in mines (in addition to radon progeny) upon lung cancer mortality needs further research.

# 2. The Ten Secondary Epidemiologic Studies

Ten epidemiologic studies were identified by NIOSH as secondary studies, which strengthen the association between excess lung cancer mortality and radon progeny exposure, yet have more limitations (in study design, radon exposure records, follow-up, etc.) than the five primary studies.
The ten epidemiologic studies examined lung cancer mortality among underground iron ore and zinc-lead miners in Sweden, metal and Navajo uranium miners in the United States, tin and iron ore miners in Great Britain, uranium miners in France, and tin miners in China. All ten studies have incomplete radon progeny exposure records. Nevertheless, all reported an elevated lung cancer mortality in underground miners and the presence of radon progeny in the mines. The studies of metal miners in the United States and tin miners in China also found arsenic in the mines; Wang et al. (1984) suggested that the high arsenic content of the ore may be a cause of lung cancer. The study of tin miners in China found an exposure-response relationship between cumulative radon progeny exposure and excess lung cancer mortality, but at the lowest exposure level (less than 140 WLM) still found an unusually high SMR (436). The arsenic exposures of these underground miners may contribute to the high lung cancer SMR; arsenic exposure is associated with lung cancer in copper smelter and arsenical pesticide workers. The study of iron ore miners in Grangesberg, Sweden, estimated an attributable risk of 30-40 cases per 10^6 PY-WLM for miners over the age of 50. This attributable risk estimate is comparable to that reported by Radford and St. Clair Renard (1984) for miners in Malmberget, Sweden, in the same age group.

# 3. The Lowest Cumulative Radon Progeny Exposures Associated with Excess Lung Cancer Mortality

The five primary epidemiologic studies are far from completion, since the cohorts' follow-ups are truncated. For example, the uranium miners in the United States were followed a mean of 19 years (by 1977), while the iron miners in Sweden were followed a mean of 44 years (the Swedish study has the longest follow-up period of the five primary studies). Lung cancer rarely is manifested before age 40, regardless of etiology. Frequently, the initial analyses performed on a cohort lack enough PYR and statistical power to show a statistically significant association between excess lung cancer mortality and low radon progeny exposure levels. Later analyses accumulate additional PYR for the entire cohort and specific subgroups, increasing the ability to detect an effect due to radon progeny. This point is important when determining the lowest radon progeny exposures associated with excess lung cancers. A longer follow-up period, resulting in more PYR and statistical power in a study, may reveal an association between excess lung cancer mortality and radon progeny at lower cumulative exposures.

The study of uranium miners in the United States by Lundin et al. (1971) had an average of about 10 years of follow-up (by 1968) and found excess cancers above 120 WLM. The study of uranium miners in Czechoslovakia found excess mortality above 100 WLM. Two recent studies, of miners in Ontario and Sweden, reported excess cancers at cumulative radon progeny exposure levels of 40-90 WLM and 80 WLM, respectively. Thus, two epidemiologic studies found excess lung cancer mortality associated with radon progeny exposure levels below 100 WLM.

In addition, studies suggest that both radon progeny exposure and smoking are involved in the lung cancer mortality of underground miners; however, the available information does not allow one to state whether radon progeny and smoking interact in an additive or multiplicative fashion.
One estimate is that miners who smoked 20 pack-years of cigarettes have radiation-induced lung cancer rates per WLM that are roughly five times those of nonsmoking miners. Finally, the five primary and ten secondary mining epidemiologic studies all demonstrate excess lung cancer mortality among underground miners working in the presence of radon progeny.

# IV. MEDICAL SCREENING AND SURVEILLANCE OF UNDERGROUND MINERS EXPOSED TO SHORT-LIVED ALPHA PARTICLES

# A. Qualities of Effective Medical Screening and Surveillance

# B. Screening and Lung Cancer Prevention

Alpha radiation-induced lung cancer may be preventable (by limiting exposures to radon and thoron progeny) but not treatable. By the time radiogenic lung cancer is detected among individuals in an exposed work force by routine periodic screening, the affected workers fail to benefit from any further preventive or therapeutic measures.

Available screening tests may detect radiation-induced, premalignant abnormalities in asymptomatic exposed workers years before disease appears. At the current state of knowledge, however, it is unknown whether medical removal of asymptomatic workers with these abnormalities will prevent progression to malignant disease. A recent study by NIOSH tested the within-reader reliability of an expert in sputum cytology and histopathology. The reader reliably detected malignant changes, but frequently read early changes as "premalignant" on one occasion and as "within normal limits" on other occasions.

To date, there is no convincing evidence that routine periodic medical screening of workers exposed to pulmonary carcinogens is an effective means of preventing mortality due to lung cancer in these workers. Coke oven workers are presently the only group of workers covered by a mandatory rule for periodic screening by sputum cytology and chest X-rays. Although the effectiveness of that regulation has not yet been evaluated, such studies are now underway. In addition, NIOSH is currently collecting data on lung cancer rates and the results of sputum cytologic tests for some miners in the USPHS cohort. Also, frequent exposures of underground miners to chest X-rays for screening purposes are not recommended at present.

# CRITERIA FOR DETERMINING THE SUITABILITY OF A CANCER SCREENING TEST FOR USE IN THE WORKPLACE

Coles and Morrison (1980):

1. The test should be "effective" in terms of its validity, reliability, sensitivity, specificity, and operational characteristics (such as predictive value).
2. The test should be "acceptable" to workers in terms of its cost, convenience, accessibility, and lack of morbidity.

Halperin et al. (1984):

1. The test should be "effective" in terms of its validity, reliability, sensitivity, specificity, and operational characteristics (such as predictive value).
2. The test need not be uncomplicated or inexpensive, but its performance and interpretation must be done by competent professionals.
3. The test must be "acceptable" to workers in terms of its cost, convenience, accessibility, and lack of morbidity.
4. The test results must be evaluated by comparison to a suitable population, not necessarily the general population.
5. Action levels and related medical decisions must be determined in advance of screening (based on #1 above).

(Adapted from Coles and Morrison 1980 and Halperin et al. 1984.)

Corresponding criteria for the disease to be screened:

Coles and Morrison (1980):

1. Effective treatment is available if asymptomatic disease is detected.
2. Detectable preclinical phase must be highly prevalent among the screened population.

Halperin et al. (1984):
1. The disease has important individual and public health consequences.
2. Disease need not be treatable, but must be preventable.
3. A detectable preclinical phase (DPCP) must exist and a target population (exhibiting a high prevalence of DPCP) be identified.
4. Follow-up care (diagnostic, treatment, and social services) must be available.
5. Natural history of the disease determines the feasibility and frequency of testing.

(Adapted from Coles and Morrison 1980 and Halperin et al. 1984.)

A review of studies reporting histopathologic associations with radon progeny exposures lacked sufficient information to conclude definitely that only one specific lung cancer cell type was associated with these exposures. In addition, a case-control study using data from the Third National Cancer Survey found that cigarette smoking was significantly associated with all three histologic types of lung cancer; the relationship with small-cell carcinoma was strongest overall (odds ratio = 5.1), whereas those with squamous cell carcinoma and adenocarcinoma were approximately equivalent (odds ratio = 3.1). The issue of histopathologic associations with radon progeny exposures needs further research, especially considering that many underground miners smoked cigarettes.

Both cessation of smoking and reduction of the radon progeny exposures of underground miners will lower their risks for lung cancer.

# C. Recommendations

# 1. Smoking

Since it appears that inhaled radon progeny either add to or multiply the underlying high lung cancer risk in smokers, a smoking cessation program is recommended. The combined effects of a lower (more protective) permissible exposure limit (PEL) and cessation of cigarette smoking would probably provide a significant reduction in lifetime risks.

# 2. Lung Function Tests

A baseline chest X-ray and annual spirometric lung function tests, performed and interpreted according to the criteria of NIOSH or the American Thoracic Society, would be appropriate for medical decision-making concerning job placement, medical removal protection, and disability compensation should work-related respiratory problems develop at a later time.

# 3. X-ray Screening

While chest X-ray screening is not an effective means of preventing death due to occupational lung cancer, examinations at 5-year intervals and industry-wide analyses of the results of such tests may be an effective means of supplementing the primary prevention of other lung diseases, such as the pneumoconioses.

# 4. Radiation Exposure Records

The lifetime radiation exposure record of underground miners should include information about the dose and frequency of medical irradiation. If radiation-exposed workers are routinely screened for lung diseases by baseline and periodic follow-up chest X-rays, they will receive an average of about 0.025 rad of external X-irradiation per examination (where an "examination" consists of a postero-anterior and a lateral exposure). If examinations are conducted every 5 years, the average lung dose would be about 0.005 rad per year. Furthermore, because of the frequency of on-the-job accidents and injuries in underground mining, underground miners may receive considerably more medical X-irradiation over a working lifetime than workers exposed to other sources of ionizing radiation. For each of the following diagnostic examinations, the approximate X-ray dose to the lung is indicated in parentheses: thoracic spine (0.421 rad), ribs (0.324 rad), lumbar spine (0.133 rad), one shoulder (0.039 rad), lumbosacral spine (0.035 rad), and skull (0.002 rad).
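A small bookkeeping sketch of the dose figures above; the one-of-each-examination scenario at the end is illustrative, not a claim about typical miners.

```python
# Annualized lung dose from routine chest screening, plus approximate
# lung doses (rad) for other diagnostic examinations cited in the text.

CHEST_EXAM_RAD = 0.025       # postero-anterior + lateral exposure
SCREEN_INTERVAL_YEARS = 5

print(CHEST_EXAM_RAD / SCREEN_INTERVAL_YEARS)   # 0.005 rad per year

exam_doses = {
    "thoracic spine": 0.421, "ribs": 0.324, "lumbar spine": 0.133,
    "one shoulder": 0.039, "lumbosacral spine": 0.035, "skull": 0.002,
}
# Cumulative lung dose for a miner who, after injuries, receives one of
# each examination over a working lifetime (illustrative scenario):
print(sum(exam_doses.values()))                 # ~0.954 rad
```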
# V. FEASIBILITY OF LOWERING THE STANDARD

This section examines the feasibility of lowering the current radon progeny exposure standard, including differences between current exposures and lower projected standards.

# A. Comparison of Current U.S. Underground Miner Radon Progeny Exposures with Different Standards

The uranium mining industry in the United States has recorded the annual exposures of underground miners, and the Atomic Industrial Forum (AIF) figures are displayed in Appendix Tables A-9 to A-11. There has been some discrepancy between MSHA records and AIF records. Overall, the industry has been successful in controlling exposures to 4 WLM. Furthermore, the percentage of miners exposed to 3 WLM or higher decreased between 1973 and 1982 (Tables A-9 to A-11). There has been substantial mobility among the uranium miners, with many people working for short periods of time at different mines, so the AIF separated the annual exposure data into two sets: "all persons assigned to work underground" and "persons who worked underground 1,500 hours or more," i.e., full time. It appears that, for most underground uranium mine workers, the mining industry already can meet a radon progeny standard below the current level of 4 WLM annual exposure. If the radon progeny exposure standard were set at 1 WLM, approximately one-third of all underground workers and less than two-thirds of the full-time underground workers would be exposed above that limit.

In the case of nonuranium mines in the United States, it is clear they can meet a lower exposure standard, based on the limited data submitted by the mining companies to MSHA (Appendix Table A-5). Of those nonuranium mine workers whose exposures to radon progeny were recorded, the upper limits of exposure varied from 0.16 to 2.20 WL, depending upon the mining industry (Appendix Table A-1). Acknowledging the limitations on the way exposure data are collected (Appendix A, section A.2.a.), the mining companies submitted information to MSHA which suggests that no more than 450 individuals are occasionally exposed to substantial radon progeny levels (i.e., 0.3 WL and above) (Appendix Table A-5). During 1983, these radon progeny-exposed employees were found in only 4 nonuranium mines, out of a total of about 574 U.S. nonuranium metal and nonmetal underground mines. Therefore, it should be feasible to control the radon progeny exposures encountered by these 450 (or fewer) miners.

# B. The Technological Capacity to Further Reduce Exposure

Some of the highest radon progeny exposures are received by people working in the smallest uranium mines, those employing fewer than ten people. These small mines can probably improve their ventilation systems and reduce worker exposures. Currently, many of these small mines are not operating due to the depressed prices for uranium.

There are a variety of techniques besides ventilation that can reduce workers' radiation exposures (Appendix B). In general, these techniques are more costly and less effective than ventilation. Nevertheless, these methods, in addition to ventilation, could be used to decrease the exposures of the relatively small number of uranium workers (46 during 1982). It should be technically feasible for the mining industry to control the radon progeny exposures of these relatively few workers receiving substantial exposures.
Also, an engineering analysis (based on data from only two mines) suggested that it is technically feasible for uranium mines to meet a standard as low as 1 WLM, using control techniques such as ventilation, bulkheads, and backfilling.

# VI. CONCLUSION

Each of the five primary epidemiologic studies contained strengths and limitations. All of the studies rely on incomplete radon progeny exposure estimates to calculate the cumulative exposures of the underground miner cohorts. Nevertheless, they contain sufficient strength to demonstrate an excess lung cancer risk associated with radon progeny exposure. Also, an exposure-response relationship exists between cumulative radon progeny exposure and lung cancer mortality. Statistically significant SMR's above 400 were observed in three studies, where workers accumulated mean exposures above 100 WLM. Statistically significant SMR's between 140 and 390 were observed in two studies where workers accumulated mean exposures below 100 WLM, and in preliminary findings from a third study by Tirmarche et al. (1985), where workers probably accumulated mean exposures below 100 WLM.

NIOSH acknowledges the efforts of various groups to compare attributable and relative risk estimates across different epidemiologic studies. NIOSH can neither validate nor refute these findings. At this point, without access to the raw data and more specific information about epidemiologic methods, NIOSH is unwilling to speculate or make comparisons between the attributable or relative risk estimates in the five primary studies.

Several classifications have been developed for identifying a substance as a carcinogen. "Potential occupational carcinogen" means any substance, or combination or mixture of substances, which causes an increased incidence of benign and/or malignant neoplasms, or a substantial decrease in the latency period between exposure and onset of neoplasms in humans or in one or more experimental mammalian species as the result of any oral, respiratory, or dermal exposure, or any other exposure which results in the induction of tumors at a site other than the site of administration. This definition also includes any substance which is metabolized into one or more potential occupational carcinogens by mammals. Since exposure to radon progeny has been shown to produce lung cancer in underground miners, it meets the OSHA criteria; thus, radon progeny should be considered an occupational carcinogen.

Data on the current radon progeny exposures of uranium, metal, and nonmetal miners suggest that the mining industry, overall, is already capable of meeting a radon progeny standard below the current annual limit of 4 WLM. Recent limited research (based on data from only 2 mines) suggests that, using ventilation, bulkheads, and backfilling, it is technically feasible for mines to meet a standard as low as 1 WLM.

At the present time, there is no effective medical method to prevent or treat lung cancer due to radon progeny exposure. Also, there is insufficient evidence to support an association between a specific lung cancer cell type and radon progeny exposure. Only exposure prevention measures are effective in lowering radon-progeny-induced lung cancer rates. These preventive measures include lowering the radon progeny exposures of underground uranium miners (and perhaps some underground metal and nonmetal miners), especially those that receive annual cumulative exposures near the present limit of 4 WLM.
An additional measure is to encourage miners to stop smoking, because smoking and radon progeny exposure may act multiplicatively, or at least additively, to cause lung cancer.

Finally, a lowering of exposure, especially for the workers currently exposed near 4 WLM, is recommended. Recent information suggests that it is technically feasible to control radon progeny exposures to levels as low as 1 WLM. NIOSH wishes to withhold a recommendation for a specific PEL until completion of a quantitative risk assessment, which is now in progress. In addition, the specific medical recommendations listed in Chapter IV should be implemented.

# A. Current Work Force

# 1. Uranium Miners

# a. Miners in the United States

The number of underground mine workers (including miners and service and support staff) dropped from approximately 5,037 in 1980 to 2,150 in 1982 (Table A-2). The number of underground miners, the group receiving the highest exposures, dropped from 2,760 to 1,275. This decrease was due to a recent fall in the price of uranium and reduced uranium demand. In the mines in the United States there are numerous temporary, short-term workers; in 1978, of all employees who worked underground, only 46 percent worked 1,500 or more hours underground (i.e., full time).

# b. Miners Outside the United States

Czechoslovakia, China, France, Italy, Australia, Canada, and Argentina have underground uranium mines (see Table A-3).

# B. Current Exposures in Mining

[Tables in this section list radon progeny exposures by mine and commodity (e.g., gold, barite, coal, and South African mines) and the exposure of U.S. underground uranium miners to radon daughters in 1979, by job category. In these tables, production employees who also perform maintenance, service, and supervisory duties were classified as production workers; where corporations had widely separated operations under different managers, each was considered a separate operation. "Production" includes production and development miners; "maintenance" includes mechanics and electricians; "service" includes motormen, haulage crews, drift repairmen, station tenders, skip tenders, etc.; "salaried" includes engineers, supervisors, geologists, and ventilation personnel.]

Because many workers in uranium mines in the United States spend only part of the year underground, the overall average exposure figure is somewhat misleading. These workers can receive high exposures, but because they work only for short periods of time, their annual average exposure is low. The average exposure for those miners working full time, that is, over 1,500 hours underground, was higher: 1.45 WLM in 1978. Underground mining exposure records were placed into four general job categories by the AIF, i.e., production, maintenance, service, and salaried. As a group, the production workers who worked more than 1,500 hours underground should have higher exposures than the remaining uranium mining work force. In 1978, the average exposure of these workers was 1.74 WLM (see Table A-7), and in 1979 their average exposure was approximately 1.88 WLM. In contrast, in 1979 and 1980, MSHA inspectors recorded average radon progeny concentrations for underground uranium mining production workers of 0.30 WL or higher, which means that some of these workers could receive 4 WLM or more per year. Cooper estimated that the average annual exposure of full-time underground production workers was about 2.9 WLM during 1979.
The number of workers that receive these high exposure levels may be small; the AIF reported that among full-time underground uranium miners in 1979, only 3 out of 1,711 production workers and 3 out of 488 maintenance workers received more than 4 WLM annually (see Tables A-10 and A-11). Overall, the exposure of most uranium mine workers (including those who spend only part of their time underground) is well below the standard of 4 WLM and on the average may be about 1 WLM (see Table A-7). A relatively small number of workers, primarily full-time underground production and maintenance workers, have exposures above the 4 WLM standard (see Tables A-9 through A-11). The most recent available data, for 1982, showed that only 2 underground employees (0.1 percent) received radon progeny exposures of 4.0-5.0 WLM and 44 employees (1.6 percent) received exposures of 3.0-4.0 WLM. It should be possible to lower radon progeny exposure levels for this relatively small number of miners.

# b. Miners Outside the United States

The exposure of underground uranium miners depends on the quality of the uranium ore body and the ventilation rate. In other countries (except Canada), the uranium ore is frequently of a lower grade than the ore in the United States, so with good ventilation techniques, foreign uranium miners should receive lower exposures than miners in the United States. Recent figures for radiation exposure in underground uranium mines in Canada, France, India, Argentina, and China have been published in the literature (see Table A-3).

The underground uranium miners of Canada had an average annual exposure to radon progeny of 0.74 WLM in 1978. In 1980, the median exposure for miners in three underground mines in Saskatchewan was below 0.6 WLM, and only about three workers in one mine were exposed to 3-4 WLM. In addition, some of these miners had substantial gamma exposure. In the Cluff mine, gamma exposures were as high as 3.5 rem and above, and in the Eldorado and Cluff mines many workers (approximately 60) were exposed to 1-3 rem.

The French uranium miners had average annual radon progeny exposures of 2.0 WLM and 1.4 WLM in 1978 and 1979, respectively. In 1975, the median radon progeny exposure was below 0.10 WL, yet as many as 5.35 percent of the workers were exposed to 0.30 to 0.80 WL, potentially receiving more than 4 WLM annually (see Table A-12). In 1975, there was also a record of gamma exposure in French underground uranium mines. The mean annual dose was 0.49 rem, but some miners received much higher doses: 9.16 percent received 1.0-1.5 rem, 5.3 percent received 1.5-2.5 rem, and 0.65 percent received 2.5-3.0 rem. In the underground uranium mines in France, gamma exposure may constitute a major part of the total radiation.

There is limited information available concerning typical radon progeny exposures in underground uranium mines in India, Argentina, and China. For the mines in India, figures for potential exposure are given by job category. In 1979, the drilling crew received an estimated 2.6 WLM of potential alpha energy exposure, the mucking crew about 2.1 WLM, and "others" about 1.7 WLM (see Table A-13). In Argentina, the average annual radon progeny exposure was about 2.4 WLM during 1980.

# 2. Nonuranium Miners

# a. Hard Rock Miners in the United States

Some of the highest radon progeny exposures are found in the iron, zinc, fluorspar, and bauxite mines (Table A-1).
In 1975, iron miners were exposed to 0.14-0.90 WL, zinc miners to 0.07-1.40 WL, fluorspar miners to 0.30-2.20 WL, and bauxite miners to 0.07-1.40 WL. If these readings are typical, some hard rock miners in the United States, especially those in fluorspar mines, could have radon progeny exposures much higher than 4 WLM. However, recent data submitted by U.S. metal and nonmetal mining companies to MSHA suggest that no more than 450 individuals are occasionally exposed to 0.3 WL (Table A-5). During 1983, only 4 companies, 2 molybdenum, 1 phosphate, and 1 tungsten, submitted individual exposure records for their employees to MSHA. It is possible that the mining companies failed to report additional employees who received radon exposures, but these are the only data available. From these data, one concludes that, except for a few molybdenum, phosphate, and tungsten mines, radon progeny exposure is not a problem in U.S. hard rock mines. Thus, in general, hard rock mines should be able to meet an annual radon progeny standard below 4 WLM.

# b. Hard Rock Miners Outside the United States

Radon progeny exposure levels have been measured in nonuranium mines in Finland, Italy, Norway, South Africa, Sweden, the United Kingdom, and Poland (Tables A-14 to A-16). The most recent figures for all of these countries show annual average radon progeny exposures of 2.6 WLM or less. However, in many of these countries the average potential alpha energy concentrations exceed 0.3 WL, suggesting that individual miners may be exposed to more than 4 WLM per year (if they work full time during the year). Nonuranium miners (especially iron, zinc, lead, copper, or gold miners) in Italy, Poland, South Africa, and Great Britain may be exposed to more than 4 WLM annually. In the United Kingdom, 4 percent of the noncoal miners were exposed to 4 WLM or more; however, many of the miners did not work full 8-hour shifts. If the underground noncoal miners in the United Kingdom worked full 8-hour shifts, as many as 20 percent of the workers could be exposed above 4 WLM/yr. Recent reports for five Chinese tin mines showed radon progeny levels of 0.67 to 1.73 WL during 1978.

It may be most effective to combine some of these techniques, e.g., to use positive pressure ventilation in combination with procedures that decrease the volume of mine air needing ventilation, such as bulkheads or backfilling. Bulkheads could be made more secure against radon gas leaks by maintaining a slight negative pressure behind the bulkhead and painting sealant on nearby exposed rock. Finally, most of the techniques described in Table B-1 and in this chapter will decrease inhalation exposure to alpha radiation from the decay products of radon and thoron gases, but will not affect gamma radiation levels.

# Mechanical Ventilation

Mechanical ventilation is the primary and most successful technique currently in use for reducing exposure to radon decay products. In uranium mines in the United States, during the early 1950's before mechanical ventilation became prevalent, average measurements of 2-200 WL of radon decay products were common. In contrast, during 1979 and 1980, the highest average working level for radon progeny recorded by MSHA was 0.46 WL (Table B-2). Thus, there has been a great decrease in exposure to radon decay products in uranium mines, primarily due to improvement in ventilation. Sweden has also successfully reduced radon progeny levels in nonuranium mines with mechanical ventilation.
The average annual exposure for the nonuranium miners of Sweden was 4.7 WLM in 1970 and, due to ventilation improvements, decreased to 0.7 WLM in 1980.

In the case of uranium miners in the United States, it is not clear whether there could be significant further decreases in exposures to radon decay products with ventilation improvements alone. The few mines that still have relatively high levels may need to use other techniques, besides dilution ventilation, to reduce miners' exposure to radon progeny (Table B-1).

# Other Dust Control Methods

Spraying water and delaying blasting until the end of shifts are two other dust control methods currently in use in most underground uranium mines. Most mines use these methods to control silica dust, but in uranium mines these methods can also help control uranium ore dust. Drilling and blasting are two mining activities that generate high levels of uranium ore dust. Exposure to uranium ore dust alone may be carcinogenic, and high dust or smoke levels may modify the respiratory tract distribution of a miner's exposure to radon progeny (by increasing the proportion of radon progeny attached to respirable and nonrespirable size dust particles). In wet drilling, water sprays from the drill onto the rock while the drill operates, thus decreasing dust levels.

The following entries, excerpted from Table B-1, summarize several control techniques together with their advantages and disadvantages.

Medical removal: If a person approaches or exceeds the lifetime limit on exposure, the person is transferred to another job at a lower exposure level with retention of pay, if available, or is removed from work at full pay if another job is not available. Advantages: protects individual miners against high cumulative exposures. Disadvantages: spreads exposure over a larger number of people. This system works best when used with a reliable bioassay for exposure, which is not available in the case of radon gas or progeny. Medical removal may not be effective if intense, short-term exposure to inhaled alpha radiation is more hazardous than cumulative radiation exposure.

Wet drilling: The drills are equipped with automatic water valves that turn the water and compressed air on simultaneously. (These techniques have been used in mines since the 1930's.) Advantages: the water cuts down on radioactive uranium ore dust. Disadvantages: difficult to set up in areas where water is scarce; the miners using the drill get wet.

End-of-shift blasting: Dynamite blasting at the end of the shift, instead of throughout the day, reduces exposure to dust and smoke; radon gas levels also tend to be high immediately after blasting. Advantages: most miners have less exposure to dust and smoke particles and thus less radiation exposure. Disadvantages: extra production schedule planning is necessary.

Ventilation system reliability: This involves the use of fan maintenance, backup electrical systems, and spare fans to minimize fan shutdowns during working hours.

Positive pressure ventilation: Positive pressure at the rock surface is a barrier to radon flow. One drawback is that high positive pressure in one area may force the radon into nearby low-pressure areas.

Exhaust ventilation: Removes radon, thoron, and daughters, as well as diesel fumes, but it also increases the emission of radon from the surrounding rock by creating a negative pressure.

Ventilation shutdown: Positive pressure ventilation is shut down during times when the mine is inactive, creating a temporary negative pressure. This results in energy savings during the shutdown periods.

The best ventilation method to use depends on the mine topography and production schedule. Ventilation methods may be most effective when used in combination with techniques that cut down on the area needing ventilation, such as bulkheads and backfilling.

Filter respirators: The filter respirator covers the miner's mouth and nose and filters the mine air through fiber filters. Advantages: as a temporary short-term protective measure, the half-mask respirator affords greater than 90 percent efficiency in reducing a miner's exposure to radon daughters attached to dusts, fumes, and mists. Disadvantages: respirators may hinder vision, be warm to use under some working conditions, add significant resistance to the miner's breathing, and require careful maintenance to assure their continued effectiveness. Filter respirators must be carefully fitted to each wearer, using quantitative respirator fit tests. Only MSHA/NIOSH-certified respirators shall be used.
Miners also wet down muck piles and the walls of some tunnels to control dust. These two techniques have been used in some mines since the 1930's. Blasting increases uranium ore dust, and radon gas levels remain high for about an hour afterward. Delaying blasting until the end of the work shift removes the miner from an area with high dust and radon gas levels, and allows the ventilation system to reduce these levels before the miner returns to work.

# Additional Control Methods

Air cleaning equipment, filter respirators, and separate air supplies are seldom used in the underground mining environment. An air cleaning apparatus can remove dust, but it is expensive compared to traditional ventilation methods and is most useful in circumscribed areas. Filter respirators and supplied-air respirators are difficult to use in the mining environment, and their use should be limited to emergency conditions, such as temporary excursions of radon progeny concentrations above 1 WL. Respirators tend to restrict movement and vision, may be too warm to wear, have significant breathing resistance, and require careful maintenance and fitting to assure their continued effectiveness. Only MSHA/NIOSH-certified respirators shall be used.

Another radon progeny control method is robotics or increased automation. Techniques, such as robotics, that minimize the time the miner spends in the high-exposure areas of the mine and in activities such as drilling, blasting, or loading ore will decrease the miner's radiation exposure. Although, at present, robotics has a limited place in the mines, it may be possible in the future to further automate the uranium ore mining process.

# B. Administrative Controls

# Medical Removal Protection

One type of administrative control is a medical removal protection (MRP) program. Under this program, when an individual's exposure approaches or exceeds a certain limit, the person is reassigned to an area with a lower exposure level. The MRP program has been very effective in reducing exposure in the (noncarcinogenic) lead industries. In that case, blood lead levels could be used as a method to biologically monitor a worker's lead exposure. However, MRP has certain drawbacks when used as an administrative control for exposure to a known human carcinogen such as radon progeny in underground uranium mines. First, according to our current knowledge of radiation carcinogenesis, it is prudent public health policy to presume that there is no threshold for radon-progeny-induced cancer, and thus no exposure can be assumed to be safe.
Therefore, the high-exposure individuals who are removed from the job are protected against further radon progeny risk, but the radon progeny exposure (and risk) is spread out over a larger population of workers. Second, at this time, there is no good biological monitoring method for radon progeny exposure, because the primary health effect is a carcinogenic, rather than a toxicologic, response. Routine, periodic sputum cytological examinations and chest X-rays are not effective screening tests for the detection of early reversible signs of lung cancer, and cancer itself may appear only after years of exposure. Finally, respirators (as they are presently designed) are very difficult to use in the underground mining environment.

# Alarm Systems

Another type of administrative control involves the use of alarm systems. This method has been fairly effective in coal mines, where continuous monitors for methane gas have been tied to alarm systems. Reliable continuous monitors for radon progeny are now technically feasible and could be connected to alarm systems, as well as to the control center for the ventilation system. The person who controls the ventilation could increase air movement in mine areas with high radon progeny levels. Also, the continuous monitors might be useful for enforcement purposes, because the MSHA inspector would have a record of excessive radon progeny levels since the last inspection. For recordkeeping and enforcement purposes, the use of data from continuous alarm-monitors would depend heavily on the reliability and validity of these devices, as well as their durability and security from tampering in the mine environment.

# Contract Mining

Many underground uranium miners, especially those that drill, blast, and move ore, are given incentive bonuses for the volume of ore removed. Such a system encourages high productivity from the workers, but any time they spend on safety measures means less time to spend mining ore. The contract mining system also encourages miners to work overtime, thus increasing their cumulative internal and external radiation exposures. In addition, some miners, especially before the reduced demand for uranium, went from mine to mine, working uranium ore one month and gold the next, accumulating radon progeny exposures in both locations. This mobility of the work force makes it harder to monitor and track the miners' total radiation exposure, making it more likely that a miner could receive cumulative exposures in excess of current and future standards. One administrative control would be to modify the contract mining system so that workers would have more incentive to protect their own health on the job. This issue needs further study and discussion, including input from the mining industries, unions, and contract miners.

# GLOSSARY

Absorbed Dose: The amount of energy absorbed from ionizing radiation per unit mass. Absorbed doses are expressed in units of rads or grays, or in prefixed forms of these units such as millirad (mrad, 10^-3 rad), microrad (µrad, 10^-6 rad), etc. The gray (Gy) is equal to 1 joule per kilogram (1 J/kg). The rad is equal to 6.24 x 10^7 MeV per gram, or 100 ergs per gram. One gray = 100 rad.

Additive Relative Risk Model: The relative risk from the combined exposure to radon progeny and smoking equals the sum of the risks from each exposure considered separately.
One example of an additive linear relative risk model is R = 1 + β₁z₁ + β₂z₂, where z₁ and z₂ are the radon progeny and smoking exposures.

Person-Years at Risk (PYR): In a life table analysis, the number of person-years at risk of dying from disease, usually calculated from the time the miner enters the cohort until death or the end of follow-up. Some authors adjust the PYR for an assumed 10-year latent period for lung cancer by subtracting PYR accumulated during the first 10 years after a miner starts to work underground (see Lagging, above).

Potential Alpha Energy Concentration (PAEC): The concentration of alpha energy, which may cause biological damage, released during the radioactive decay of radon or thoron gases and their progeny; it is measured in units called Working Levels (see below).

Proportional Mortality Ratio (PMR): The ratio of two mortality proportions, expressed as a percentage, often adjusted for age or time differences between the two groups being compared.

Prospective: A study characteristic. Disease has not occurred in study groups at the start of a study.

Units of Radioactivity: Curie and Becquerel. 1 curie = 2.22 x 10^12 disintegrations/minute; 1 becquerel (Bq) = 1 disintegration/second; 1 picocurie (pCi) = 2.22 disintegrations/minute.

Radioactive Decay: Disintegration of the nucleus of an unstable nuclide by spontaneous emission of charged particles, photons, or both.

Radon (Rn) or Radon and its Progeny: Specifically refers to the "parent" noble gas (Rn-222) and its short-lived, alpha-radiation-emitting radioactive decay products ("progeny" or "daughters"). Radon is a gas; the radon progeny are radioactive solids.

Rads and rems are comparable (i.e., the quality factor (QF) = 1) when dealing with beta particles and gamma photons. The QF for alpha particles from inhaled radon progeny is generally considered to be in the range of 10 to 20.

Retrospective: A study characteristic. Disease has already occurred in study groups at the start of a study.

Standardization: A procedure to reduce the biasing effect of a confounding variable. A feature of data analysis.

Standardized Mortality Ratio (SMR): The ratio of mortality rates, expressed as a percentage, usually adjusted for age or time differences between the two groups being compared.

Synergism: The combined action of two factors which is greater than the sum of the actions of each of them.

Thoron: A radioactive gas (Rn-220), sometimes found in the presence of radon (Rn-222). Thoron progeny are the solid, short-lived, alpha-radiation-emitting decay products (progeny or daughters) of thoron gas.

Working Level (WL): A standard measure of the alpha radiation energy in air. This energy can come from the radioactive decay of radon (Rn-222) and thoron (Rn-220) gases. The working level is defined as any combination of short-lived radon decay products per liter of air that will result in the emission of 1.3 x 10^5 million electron volts (MeV) of alpha energy.

Working Level Month (WLM): A person exposed to 1 WL for 170 hours is said to have acquired an exposure of one Working Level Month. The Mine Safety and Health Administration defines a Working Level Month as a person's exposure to 1 WL for 173 hours.

Taken from Shapiro (1981). Taken from Monson (1980). + Taken from Thomas et al. (1985).

# I. INTRODUCTION

A report evaluating epidemiologic studies of lung cancer in underground miners was recently sent to the Mine Safety and Health Administration (MSHA) by the National Institute for Occupational Safety and Health (NIOSH). That report concluded that prolonged exposure to radon progeny at the current standard of 4 WLM/year produced an elevated risk of death from lung cancer.
It is the objective of this report to make quantitative risk estimates for various levels of cumulative exposure. In addition, other factors influencing the exposure-risk relationship will be identified and quantified whenever possible.

This report is based upon data collected from a cohort consisting of 3,366 white underground uranium miners working in the Colorado Plateau (located within the states of Colorado, Utah, New Mexico, and Arizona). The actual risk estimates were computed from data on 3,346 members of the cohort. Ten original members were determined to have had no record of underground mining, four were nonwhite, and six had inadequate cigarette smoking information. Entry into the cohort was defined by race, sex, working at least one month in underground uranium mines, volunteering for at least one medical survey between 1950 and 1960, and providing social and occupational data of sufficient detail. NIOSH has now updated the mortality experience of the cohort through December 31, 1982. Lung cancer mortality was defined as any death assigned an International Classification of Diseases (ICD) code of 162 or 163 (same designation in the Sixth through Ninth Revisions). Previous analyses of this cohort reported by Waxweiler et al. and Whittemore and McMillan considered follow-up only through 1977. Table 1 presents a comparison of vital status of the cohort at the end of 1977 and 1982.

# II. PROTOCOL FOR STATISTICAL ANALYSIS

# A. Type of Analysis Used

Much of the epidemiologic work in the past regarding the analysis of mortality in occupational cohorts has involved modified life table analysis. This form of analysis has a strong appeal due to its familiarity and ease of interpretation. It is mathematically straightforward, since person-years at risk are simply divided into a number of strata and age-calendar year specific mortality rates from some reference population are applied to each. The U.S. population is often used as the reference population in such life table analyses. This expected mortality is then compared to the observed mortality via a ratio defined as:

SMR_j = (Σ_i O_ij) / (Σ_i E_ij)

where

SMR_j = standardized mortality ratio for cause j,
O_ij = the observed number of deaths for cause j in stratum i, and
E_ij = the expected number of deaths for cause j in stratum i from reference population rates.

If the total number of observed deaths in all of the strata of interest is large and if the reference population is the appropriate comparison group, this would be the method of choice. No modeling would be needed in such a situation. However, after stratification by age, race, sex, calendar year, other confounders, and finally the exposure of interest, there are seldom enough observed deaths to make rates in these strata reliable. Another problem frequently encountered is a fundamental difference in certain etiologic characteristics between the study population and the reference population. For example, the study group may smoke at substantially different rates than the reference population. Often the occupational study group is "healthier" than the reference population due to selection criteria for employment (Enterline). This is usually referred to as the "healthy worker effect."
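To make the ratio concrete, here is a minimal sketch in Python, using hypothetical strata counts rather than cohort data:

```python
def smr(observed, expected):
    """Standardized mortality ratio for one cause of death:
    SMR_j = sum_i O_ij / sum_i E_ij, summed over strata i."""
    return sum(observed) / sum(expected)

# Hypothetical age-by-calendar-year strata: observed lung cancer deaths and
# expected deaths computed from reference-population rates.
obs = [4, 7, 12]
exp_deaths = [1.1, 2.3, 3.0]
print(100 * smr(obs, exp_deaths))  # SMR expressed as a percentage (~359)
```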
An alternative to the modified life table approach is some form of statistical modeling. Modeling to estimate health risks is necessary when conclusions must be drawn about risk in regions of the exposure-response relationship for which data are too sparse to estimate risk directly. The use of models also permits risk estimates to be simultaneously adjusted for confounders, such as age or co-carcinogenic exposures, as well as interactions between exposure and other risk factors. This flexibility is particularly important in making risk estimates at relatively low cumulative exposures when using the Colorado Plateau data. Most miners in this cohort were exposed to high levels of radon progeny (mean exposure = 834 WLM). Since primary interest in risk estimates is below 120 WLM, based on current exposures, some type of statistical model is essential.

There have been a number of types of models suggested for examination of cause-specific mortality as a function of various risk factors. The two most popular types are the absolute risk model and the relative risk model. The absolute risk model can be written as:

R(t;z) = R₀(t) + R(z,β)

where R(t;z) is the incidence at age t for someone with risk factors z, R₀(t) is the baseline or background incidence at age t, and R(z,β) is the incremental incidence as a function of the risk factors z and the coefficients β, which are estimated from the data. This form of risk model was not used in the risk assessment, since it had been rejected due to poor fit to the U.S. uranium miner data by Lundin et al. In contrast, the relative risk model generally takes the form:

R(t;z) = R₀(t)·R(z,β)

This model assumes that excess risk is proportional to background incidence rates. Relative risk models have become increasingly popular in recent years and were found to provide good fits to the data from earlier follow-ups of the U.S. uranium miners cohort by Lundin et al. and Whittemore and McMillan. This type of model has been selected as the basic analytical method for this report.

# B. The Proportional Hazards Model

A relative risk model which is particularly well suited to longitudinal mortality studies is one proposed by Cox. This model is commonly referred to as the Cox proportional hazards model. A major advantage of this approach over the more common life table method is that it permits the use of internal comparison groups while controlling simultaneously for such confounders as cigarette smoking, age, and year of birth. In addition, time-dependent covariates such as cumulative exposure may be incorporated into the model. This is essential in any longitudinal study where follow-up and the exposure period overlap. Relative risk estimates are based on rate ratios similar to those produced in the modified life table analysis. That is, the Cox model operates in a dynamic framework by considering incidence rates over the entire period of follow-up. The Cox model can be expressed mathematically as:

λ(t;z) = λ₀(t)exp(βz(t))

where λ(t;z) for this study is the age-specific lung cancer mortality rate for a miner with exposure and other risk factors represented by a covariate vector z. The underlying age-specific lung cancer mortality rate for the unexposed is represented by λ₀(t). The function exp(βz) is generally used to model risk of death from the cause of interest, which depends upon the risk factors z and the coefficients β, which are estimated from the data.

# C. Alternative Forms of the Risk Function

Although the exponential or log-linear function exp(βz) is the usual choice of a model for risk, any positive function may be used as long as the risk function is equal to 1.0 when the coefficients are all equal to zero. The most common alternative risk functions are the linear (1 + βz) and the power function (exp(β ln z) = z^β).
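A short sketch of the three candidate risk functions; the coefficients here are placeholders, not fitted values:

```python
import math

def log_linear(z, beta):
    """Log-linear (exponential) risk function: exp(beta * z)."""
    return math.exp(beta * z)

def linear(z, beta):
    """Linear risk function: 1 + beta * z."""
    return 1.0 + beta * z

def power(z, beta):
    """Power risk function: exp(beta * ln z) = z ** beta (requires z > 0)."""
    return z ** beta

# All three equal 1.0 when beta = 0, as required of a relative risk function.
for f in (log_linear, linear, power):
    print(f.__name__, f(50.0, 0.0), f(50.0, 0.01))
```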
All three forms of risk functions were considered in modeling the U.S. uranium miners data.

# D. Results of Model Development

# 1. Identification of Confounders and/or Effect Modifiers

Cumulative exposure, as measured by total WLM for each miner, was the primary exposure variable. Since cigarette smoking is known to have a strong effect upon the risk of lung cancer, cumulative smoking history as measured in pack-years was also included in the model. Another risk factor strongly associated with lung cancer mortality is age. This was tightly controlled by using age as the time dimension t in the model λ(t;z). That is, the age at death of each lung cancer victim was recorded, and all other miners alive and at risk were compared to him at that age. In this way, the cumulative exposure to radon daughters and pack-years of cigarettes were incorporated as time-dependent covariates by calculating their values at each age of death from lung cancer. This assures that proper age-adjusted comparisons were made throughout the period of follow-up.

A number of other variables were examined in developing the appropriate risk model. A list of all potential risk factors considered for inclusion in the model is provided in Table 2. These variables were considered independently as potential confounders in a stepwise fashion (both backward and forward selection procedures) and also as potential effect modifiers by assessing their interaction with cumulative radon daughter exposure.

An attempt was made to compare the fit of each of the three models during the model development stage of the analysis. However, it soon became apparent that the linear model did not fit well over the full range of radon daughter exposures and cumulative smoking levels. In fact, the iterative solution to the likelihood equations would not converge when using the linear model when cumulative exposure and pack-years of smoking were both entered simultaneously (either as linear or linear-quadratic forms). The linear model could only be made to converge when the model was restricted to cumulative exposure below 600 WLM with no other covariates included. The restricted linear model produced a non-significant result in this exposure range and was subsequently eliminated from consideration.

Of the remaining two types of relative risk models (log-linear and power function), the covariates found to be most highly associated with lung cancer incidence rates were cumulative exposure (WLM), cumulative smoking (packs), and age at initial exposure (months). Table 3 illustrates the form and degree of fit, as measured by the likelihood ratio, for these two models. (In Table 3, BGR = background radon exposure = 0.2 WLM/year and BGS = background cigarette smoking = 0.005 packs/day.) The log-linear model required the addition of quadratic terms in cumulative exposure and cigarette smoking to provide an adequate fit. This was not necessary when developing the power function model. As shown in Table 3, the power function model provided the best fit to the data and will be used hereafter in the risk assessment.

Since the power function model involves the natural logarithms of cumulative exposure and cumulative cigarette smoking, zero values of these variables were not permitted. To avoid this, an estimate of cumulative background exposure was added to each miner's cumulative radon daughter and cigarette totals. Based upon estimates of the NCRP (Report No. 77, 1984), 0.2 WLM per year since birth was added to each miner's exposure totals. This is the estimated background exposure in the U.S. and is also the amount used by Whittemore and McMillan in an earlier analysis. In a similar fashion, 0.005 packs per day for each day since birth were added to the cumulative smoking totals, based upon estimates of Hinds and First.
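Under the power function model, these offsets give the relative risk of an exposed smoker versus an unexposed nonsmoker of the same age a simple closed form. A sketch with placeholder coefficients (the report's fitted values are not reproduced here):

```python
# Power-function relative risk with background offsets. BETA_WLM and
# BETA_PACKS are placeholder coefficients, not the fitted NIOSH values.
BETA_WLM, BETA_PACKS = 0.5, 0.4

def relative_risk(wlm, packs, age_years):
    """RR versus an unexposed nonsmoker of the same age, using the offsets
    described in the text: 0.2 WLM/year and 0.005 packs/day since birth."""
    bg_wlm = 0.2 * age_years
    bg_packs = 0.005 * 365.25 * age_years
    return (((wlm + bg_wlm) / bg_wlm) ** BETA_WLM
            * ((packs + bg_packs) / bg_packs) ** BETA_PACKS)

# A 50-year-old miner with 120 WLM and 20 pack-years (20 * 365.25 packs):
print(relative_risk(120.0, 20 * 365.25, 50))
```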
Of particular interest is the joint effect of exposure to radon daughters and cigarette smoking. Therefore, the interaction of radon daughter exposure and cigarette smoking was included in the multiplicative power function model. The results showed a negative, borderline-significant result (β = −0.087, p = 0.058). When a similar analysis was run with mortality data complete only through 1977, there was no indication of a significant negative effect. Therefore, based on more complete follow-up through 1982, the joint effect of radon daughter exposure and cigarette smoking appears to be slightly less than multiplicative but greater than additive. This is similar to the finding of Thomas and McNeill in their grouped data analysis of the five major radon daughter cohorts. It is still consistent with a synergistic effect of radon exposure and cigarette smoking, which is usually defined as a joint effect exceeding the sum of the individual effects.

# 2. Weighting Exposure Over Time

An important consideration in fitting any of these models was the proper time-weighting of exposure. Since most forms of cancer, including lung cancer, have relatively long latency periods between exposure and manifestation of the disease, some weighting of exposure over time is appropriate. The most common weighting scheme is referred to as lagging. This involves elimination of any exposure accumulated in a specified period of years before death from lung cancer. It provides a way of considering only that exposure which had a reasonable chance of causing death from lung cancer; exposures received in the few years immediately prior to death from lung cancer are clearly ineffective in the exposure-response relationship.

In order to investigate the appropriate number of years to lag exposure in this cohort, a series of lags ranging from 0 to 12 years was used. Figure 1 illustrates the results of these trials. It is evident from the improved fit, as measured by the log-likelihood of the model, that a lag of 6 years for cumulative exposure is the best choice for this analysis. Cumulative cigarette smoking was rather insensitive to the amount of lag in the range of 0 to 12 years. Therefore, for the purpose of consistency, cumulative smoking was also lagged 6 years. This contrasts with the lag of 10 years chosen by Whittemore and McMillan for these data and also by Muller et al. for the Canadian data. Their choices were somewhat arbitrary and largely based on knowledge that most cancers involve relatively long latency periods. The implications of a shorter lag will be discussed in a later section of this report.

An issue related to lagging of cumulative exposure and cumulative cigarette smoking is the lack of information on these variables in recent years. Radon daughter exposure was last updated in 1969. However, the absence of current exposure information should have minimal impact upon this analysis, since over 90% of the miners in the cohort had retired from uranium mining for more than one year by 1969. Those few who continued mining were exposed at levels considerably less than those experienced in earlier years. Since cigarette smoking status was also unknown after 1969, all miners still smoking at that time were assumed to have continued at their last recorded smoking rate. NIOSH is currently conducting a survey of radon daughter exposure and cigarette smoking status subsequent to 1969, but this information will not be available for at least another year.
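A minimal sketch of the 6-year lag chosen above, applied to a miner's exposure history; the data structure and function name are illustrative only:

```python
def lagged_cumulative_wlm(annual_wlm, age_at_death, lag_years=6):
    """Cumulative WLM counting only exposure received more than `lag_years`
    before death; `annual_wlm` maps age (years) to WLM received at that age."""
    cutoff = age_at_death - lag_years
    return sum(wlm for age, wlm in annual_wlm.items() if age < cutoff)

# Exposure at ages 52 and 56 falls inside the 6-year lag window and is dropped:
history = {40: 3.0, 45: 2.0, 52: 4.0, 56: 1.0}
print(lagged_cumulative_wlm(history, age_at_death=58))  # 5.0 WLM
```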
The aim of lagging exposure is the elimination of exposure which is not etiologically responsible for lung cancer mortality. An implicit assumption in the use of this technique is that exposure changes from completely effective to completely ineffective at one instant in time. The actual form of this weighting function is illustrated in Figure 2. Because of the biological implausibility of such a situation, Land proposed that the effectiveness of cumulative exposure be linearly phased in over a period of several years. An illustration of such a weighting function is provided in Figure 3. Consequently, we tried various combinations of lagging and linear partial weighting, with the combination illustrated in Figure 3 providing the best fit, i.e., a lag of 4 years followed by linear partial weighting in the period 4-10 years prior to death from lung cancer. This scheme provided a fit essentially the same as that of a simple lag of six years but was chosen over lagging because of its biological plausibility.
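The adopted weighting function can be stated compactly. A sketch, following the description of Figure 3 (lag of 4 years, linear phase-in from 4 to 10 years):

```python
def exposure_weight(years_before_death):
    """Weight applied to exposure received a given number of years before
    death: 0 within 4 years, linearly phased in between 4 and 10 years,
    and 1 beyond 10 years (the scheme illustrated in Figure 3)."""
    if years_before_death <= 4:
        return 0.0
    if years_before_death >= 10:
        return 1.0
    return (years_before_death - 4) / 6.0

for y in (2, 4, 7, 10, 15):
    print(y, exposure_weight(y))  # 0.0, 0.0, 0.5, 1.0, 1.0
```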
# III. INFLUENCE OF TEMPORAL FACTORS

Perhaps the most difficult aspect of producing a valid quantitative risk assessment is dealing with the effects of various time-related factors upon the exposure-risk relationship. One very important temporal influence concerns the two components of cumulative exposure itself. In most longitudinal studies, the quantitative exposure index is some form of cumulative exposure. However, cumulative exposure is actually the product of duration of exposure and intensity or rate of exposure. When one uses cumulative exposure in assessing risk, the implicit assumption is that high exposure rates for short periods of time are etiologically equivalent to low exposures for long periods of time, all else being equal.

# A. Exposure-Rate Effect

A number of investigators have examined the effect of exposure rate in the U.S. uranium miner data. Whittemore and McMillan found no statistically significant effect of exposure rate. Lundin et al., in the 1971 monograph, concluded that there was no significant evidence of an exposure rate effect in the 120-360 WLM cumulative exposure range. These investigators apparently defined exposure rate as the ratio of total cumulative exposure and duration of employment (defined as the period of time between first and last employment in underground uranium mining work histories). For most forms of employment, this is the accepted definition of average exposure rate. However, underground uranium mining is a very sporadic form of employment. The actual time spent underground was often a relatively small fraction of the total employment history. Therefore, exposure rate as defined by cumulative exposure divided by the number of months actually spent underground is often a very different measure than that obtained by using duration of employment in the denominator. Consequently, the effect of exposure rate was re-examined using the actual average exposure rate experienced while underground, eliminating any gaps in employment.

Although earlier analyses using total duration of employment produced negative but non-significant results, the refined definition showed a statistically significant negative exposure rate effect (β = −0.043, p < 0.001), as shown in Table 4. (In Table 4, the background for cumulative radon daughter exposure is BGR = 0.4 WLM/year and the background for cumulative cigarette smoking is BGS = 0.005 packs/day.) This implies that among groups of miners receiving equivalent cumulative exposures, those exposed to lower levels for longer periods of time are at greater risk of lung cancer. Because the coefficient is relatively small, however, an appreciable effect upon risk of lung cancer would not be expected unless rates differed by an order of magnitude; i.e., a miner with exposure received at a rate ten times lower than that of a miner of the same age, smoking habits, and cumulative exposure would have (0.1)^(−0.043) = 1.104, or 10.4%, greater risk of lung cancer.

Because a negative exposure rate effect is very important and potentially controversial, it was examined in more depth. Of particular interest was the possibility that this effect was different at low versus high cumulative exposure levels. Consequently, the homogeneity of this effect across the full exposure range was examined by forming two sub-cohorts: one below the mean exposure (834 WLM) and one above the mean. The interaction of the exposure rate effect with these two strata was then tested. Results showed a significant interaction (β = 0.157, p = 0.019). The direction of the interaction indicated that the exposure rate effect was stronger in the lower cumulative exposure range (0-834 WLM). Specifically, a miner who received total exposure below 834 WLM at a rate one tenth as great as another miner of the same age, smoking status, and cumulative exposure would have a 58 percent greater risk of lung cancer. However, the increased risk would be only 10 percent at the lower exposure rate for miners in the 834-10,000 WLM range.

Although a statistically significant negative exposure-rate effect had not been found previously in this U.S. cohort, there is considerable evidence of such findings in animal studies of high-LET radiation. Raabe et al. reported a strong low dose-rate effect in beagles exposed to internally deposited isotopes of radium and strontium. Risk of bone cancer was as much as ten times as great per unit dose at low rates as compared to the highest rates used. Cross et al. found a negative dose-rate effect for risk of lung tumors in rats exposed to airborne radon daughters. Chameaud et al. found similar results in a French study of Sprague-Dawley rats exposed to inhalation of radon decay products. Hill et al. found that reduced dose rates of fission-spectrum neutrons produced significantly higher neoplastic transformation rates per rad in cell cultures of C3H mouse embryos. Although all of these studies show low dose-rate effects, no study as yet, animal or human, has investigated such effects at the very low dose rates currently found in well-ventilated uranium mines.
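The exposure-rate arithmetic above follows from the power form of the rate term. A sketch reproducing the quoted figures; the combined coefficient of −0.200 for the low-exposure stratum is inferred here from the stated −0.043 main effect and 0.157 interaction, an assumption rather than a reported value:

```python
def rate_effect_rr(rate_ratio, beta):
    """Relative risk for a miner whose average exposure rate differs by
    `rate_ratio` from a comparison miner with the same age, smoking
    habits, and cumulative exposure: RR = rate_ratio ** beta."""
    return rate_ratio ** beta

print(rate_effect_rr(0.1, -0.043))  # ~1.104 -> 10.4% greater risk (Table 4)
# Combined coefficient below 834 WLM, inferred as -0.043 - 0.157 = -0.200:
print(rate_effect_rr(0.1, -0.200))  # ~1.585 -> ~58% greater risk
```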
# B. Calendar Time

It is well known that mortality patterns change over time. Such exogenous risk factors as the prevalence of smoking and alcohol consumption, medical care, and various lifestyle characteristics are all influenced by a changing society. Therefore, the effect of calendar time upon risk estimates, often called the cohort effect, must be controlled. The analysis of the U.S. uranium miners cohort was stratified by decade of birth, so that miners dying of lung cancer were compared only to those members of the cohort at the same age who were born within 10 years of the case. The usual assumption in a stratified analysis is that baseline mortality rates may differ from stratum to stratum but the relative risk is the same across all strata for miners with comparable risk factors. In order to check this assumption, the interaction of cumulative radon daughter exposure and birth decade was examined. Results indicated a statistically significant positive interaction (β = 0.173, p = 0.002). This implies that miners born in later decades are at a greater risk of lung cancer per unit of exposure when compared to miners of the same age born earlier. Since miners born in later decades were exposed at lower exposure rates, this result could be associated with the negative exposure rate effect described earlier.

# C. Multistage Theory of Carcinogenesis

One of the most popular theories for explaining the temporal patterns in mortality studies of carcinogenesis is the multistage model. Originally proposed by Muller and Nordling and later refined by Armitage and Doll, the multistage theory predicts an increase in cancer incidence as a function of time since exposure to some carcinogen. In general, the theory proposes that a malignant tumor arises from a single cell which has undergone a series of heritable changes. The changes may be thought of as distinct stages in the carcinogenic process, each with a low probability of occurrence and a slow progression time in the absence of carcinogenic exposures. A carcinogen may act on any or all of the stages in this process. Carcinogens affecting the first stage are commonly referred to as initiators, while those affecting later stages are called promoters or progressors. Initiators are characterized by long latency periods between initial exposure and death, often exceeding 20 years. Promoters, on the other hand, usually have shorter latent periods, since fewer stages must be transgressed before a malignant cell is produced. It is impossible to prove whether or not the mathematical form of the multistage model actually holds in a given situation. However, a number of its predictions have been verified experimentally by Peto et al. Therefore, if one subscribes to some form of the multistage model, it is possible to predict whether exposure acts at an early or late stage in the carcinogenic process by examining the temporal patterns in the data. Whittemore, Day and Brown, and Brown and Chu have all reported the effect on excess relative risk of age at initial exposure and time since cessation of exposure. By examining these factors, we may better understand the underlying cancer mechanism operative in this cohort.

# D. Age at Initial Exposure

Whittemore considered the multistage model using three exposure scenarios: single exposure at one point in time, continuous exposure at a constant rate, and exposure of varying intensity. When considering the latter category (the usual occupational situation), she found that excess relative risk was a decreasing function of age at initial exposure if an early stage was affected. When a late stage is affected by exposure, however, excess relative risk is an increasing function of age at initial exposure.
Day and Brown predicted the functional relationship between excess relative risk and age at initial exposure for the first four stages of a five-stage process when duration was held constant. Figure 4 illustrates their findings, which are in qualitative agreement with those of Whittemore. Results of the analysis of our data, as illustrated in Table 4, indicate a positive and statistically significant coefficient for age at initial exposure (β = 0.0023, p = 0.003). This implies that miners initially exposed at later ages are at greater risk of lung cancer than those exposed at younger ages, all else being equal. Specifically, a miner with the same radon daughter exposure and smoking history who was initially exposed ten years (120 months) later in age than another miner would have exp(0.0023 x 120) = 1.32, or 32%, higher risk of lung cancer. This result is consistent with the effect of radon daughters occurring at a late stage in the carcinogenic process. A similar age effect was reported by Mancuso et al. in an analysis of cancer risk in the Hanford workers exposed to whole-body radiation. An analysis of age at start of smoking among miners resulted in a negative but non-significant coefficient (β = −0.0016, p = 0.22). This would imply that cigarette smoking in this cohort acted at an early to intermediate stage. It could also be consistent with the hypothesis of Doll and Peto that smoking acts at both early and late stages, which would tend to obscure the predictive ability of age at start of smoking. A plot of the effect of age at initial exposure for both radon daughters and cigarette smoking is given in Figure 5.

# E. Time Since Cessation of Exposure

Day and Brown predicted the effect upon relative risk of time since cessation of exposure when a multistage model is assumed. They found that when exposure begins some time after infancy, excess relative risk increases, peaks, and then decreases with time since termination of exposure when the first stage is affected. When the penultimate (next-to-last) stage is affected, relative risk strictly decreases with time after last exposure. Figure 6 illustrates their predictions for the effect of time since cessation of exposure on the first four stages in a five-stage model with duration of exposure fixed at five years.

In order to investigate the effect of cessation of exposure in this cohort, all miners were identified who had indicated retirement from uranium mining during the course of follow-up. Approximately 95% of the cohort had retired for more than one year prior to 1970. The average time since last exposure was 18.0 years for those miners not dying of lung cancer and 9.9 years for lung cancer cases. The time in months since last exposure was entered as a time-dependent covariable in the original model containing log of exposure, log of smoking, and age at initial exposure. The estimated coefficient of this term was negative and highly significant (β = −0.0056, p < 0.001). Thus a miner's risk of lung cancer declines dramatically with each year outside the mines. Specifically, the model predicts that the risk of lung cancer 10 years after mining uranium is exp(−0.0056 x 120) = 0.511 relative to someone still mining with the same cumulative exposure, smoking history, and age.
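Both temporal effects enter the fitted model through exp(β·t); a sketch verifying the 32% and 0.511 figures quoted above:

```python
import math

B_AGE_START = 0.0023    # per month of age at initial exposure
B_TIME_SINCE = -0.0056  # per month since last radon daughter exposure

# Initial exposure 10 years (120 months) later in life:
print(math.exp(B_AGE_START * 120))   # ~1.32 -> 32% higher risk

# 10 years (120 months) after leaving the mines:
print(math.exp(B_TIME_SINCE * 120))  # ~0.51 relative to a still-active miner
```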
When a similar analysis of time since cessation of cigarette smoking was run, the results were inconclusive. The coefficient was very small and non-significant (β = 0.003, p = 0.75). However, since a relatively small number of miners were ex-smokers (7.7%), there is little power for detection of such an effect even if it actually exists. Figure 7 illustrates the effect of time since last exposure for both radon daughters and cigarette smoking. The implications of these results are essentially the same as those obtained by examination of age at initial exposure. The strong negative effect of time since last exposure implies that radon daughters act at a late stage in the carcinogenic process. The effect of stopping cigarette smoking, while based on a small amount of data, still indicates either an intermediate-stage effect or a combination of early- and late-stage effects.

[Figure 7. Effect of time since last exposure on excess relative risk; x-axis: time since last exposure (years); legend: radon daughter exposure, cigarette smoking.]

# IV. ERRORS IN EXPOSURE DATA AND THEIR EFFECT UPON RISK ASSESSMENT

In animal carcinogenesis studies, exposures or doses are usually known with a high degree of accuracy and precision. However, the same cannot be said of epidemiologic quantitative risk studies. In most epidemiologic studies, the actual dose to target organs can only be estimated by dosimetric modeling. This is seldom attempted in quantitative risk assessments. The dosimetry of radon daughter exposure is very complex, involving such factors as respiration rates, particle size distribution, deposition in the lung, and radon/radon daughter equilibrium. Most risk assessments are modeled as functions of some exposure index, which is the method used in this report. It is the purpose of this section to estimate the magnitude of exposure errors and their effect upon quantitative risk models.

According to Lundin et al., exposures in a given mine and year were estimated in one of four ways:

1. actual measurements
2. interpolation or extrapolation in time
3. geographic area estimation
4. estimates prior to 1950 based upon knowledge of ore bodies, ventilation practices, and earliest measurements.

These methods will subsequently be called Methods 1, 2, 3, and 4. In assessing the error associated with individual exposure determinations, it is first necessary to consider the variability introduced by each of the four methods.

# A. Magnitude of Error in Exposure Data

# Method 1

Table 5 provides a frequency count of white miners working underground from 1950-68 and the mean number of samples taken in each mine visited in those years. The Kusnetz procedure for measuring radon daughters was most often used during the period of study (Johnson and Schiager 1981). This is an area monitoring method based on alpha counts collected on a filter/pump apparatus. The resulting data were generally thought to be of good quality (Lundin et al., 1971). Data from mines in which 5 or more measurements were taken in a given year were analyzed. These data followed a lognormal distribution with little change over the period 1951-1968. Samples taken prior to 1960 were collected largely by the U.S. Public Health Service, while post-1960 sampling was conducted by state mine inspectors. Therefore, data were separated into pre- and post-1960 periods and estimates of the coefficient of variation (CV) were made for each period. Results indicated a slight but non-significant increase in CV's after 1960 (106.6% vs 118.3%).
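Given the lognormal assumption, a CV can be computed directly from the log-scale sample variance. A minimal sketch with hypothetical grab-sample values:

```python
import numpy as np

def lognormal_cv(wl_samples):
    """Coefficient of variation implied by lognormally distributed grab
    samples: CV = sqrt(exp(s2) - 1), where s2 is the sample variance of
    the log measurements."""
    s2 = np.var(np.log(wl_samples), ddof=1)
    return np.sqrt(np.exp(s2) - 1.0)

# Hypothetical WL grab samples from one mine-year (>= 5 measurements):
print(lognormal_cv([0.12, 0.35, 0.08, 0.60, 0.20, 0.15]))
```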
Since the measurements were grab samples taken at different times within each mine, the total pooled CV=112.5% over the period 1951-1968 is assumed to include sampling errors, counting errors, and environmental fluctuations over time. This estimate agrees well with the CV of 110% found in an independent study of U.S. mines in the period 1973-79, when exposure levels were much lower (Schiager et al. 1981). In other studies, however, an average CV of 30% was reported for area samples in Canadian mines (Makepeace and Stocker 1980), while fluctuations of 20-30% around daily means were found for radon measurements in non-uranium Norwegian mines (Berteig and Stranden 1981).

# Method 2

In order to assess the error in interpolating for gaps in sampling of 1 to 3 years, a simulation procedure was used. Mines having the longest periods of continuous annual measurements were identified. Then the even years' averages were omitted and the average of the two adjacent years was substituted. In this way it was possible to compare the observed annual average with the expected average had that year been missing. This strategy was repeated by imposing three-year gaps in the data and again using the average of adjacent years to estimate the three intervening years. The error variance attributable to Method 2 was then calculated by:

    σ̂² = Σi [ln(Oi/Ei)]² / (N − 1)

where
    Oi = actual measurements for intervening years
    Ei = interpolated values estimated by the average of adjacent years.

The resulting CV was 120.8% for 1-year interpolation and 137.3% for 3-year interpolation. Since these results were not significantly different, they were pooled to yield a CV=131.9%.

# Method 3

This method used annual mine averages in the same geographic locality to estimate radon daughter levels in mines for which Methods 1 and 2 could not be used. In order to assess the error associated with this method, four of the uranium mining localities with the greatest number of annual measurements were selected. A simulation procedure similar to that used for Method 2 was employed. Annual averages for selected mines in these localities were omitted for 1 to 4 years. The averages for mines in the nearest district were substituted as the expected radon level if the annual average actually had been missing. The error variance was calculated in the same way as for Method 2. The resulting CV was 148.6% for this method.

# Method 4

No measurements were available in the period prior to 1950. Therefore, the estimates made using knowledge of ore bodies, ventilation, and earliest known measurements in these mines could not be verified. These estimates comprised less than 6% of the 34,120 annual averages used in exposure assessment. In addition, since only 8 percent of the total underground exposure time for the cohort occurred prior to 1950, the influence of these measurements should be minimal. However, since the error for this method was probably the greatest of the methods used, we estimated the overall CV for Method 4 to be 25% greater than that for Method 3, i.e., CV=186%. Table 6 shows the number of annual averages for each of the four methods. Actual measurements comprised only about 10% of the data. In order to obtain an overall estimate of the relative error, a weighted average of the CVs for each method was calculated, based on the number of determinations for each method. The resulting overall CV=137%. The error associated with each miner's cumulative exposure can then be calculated using our estimate of the error in each radon daughter level (WL).
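As an illustration of the Method 2 simulation and the pooling step, the following Python sketch applies the reconstructed log-scale error variance and a count-weighted average of the method CVs. The paired annual averages are hypothetical, and the per-method counts are also hypothetical (Table 6 holds the actual ones); the counts were chosen only so the weighted result lands near the quoted 137%.

```python
# Illustration of the Method 2 error simulation and the pooling of the four
# method CVs; all input data below are hypothetical.
import math

def interpolation_cv(observed, interpolated):
    """CV of interpolation error from actual (O) vs. interpolated (E) values."""
    logs = [math.log(o / e) for o, e in zip(observed, interpolated)]
    s2 = sum(x * x for x in logs) / (len(logs) - 1)  # log-scale error variance
    return math.sqrt(math.exp(s2) - 1.0)             # lognormal CV

O = [1.8, 2.4, 0.9, 3.1, 1.2]  # hypothetical actual annual averages (WL)
E = [1.1, 2.0, 1.6, 2.2, 1.0]  # hypothetical adjacent-year interpolations
print(f"interpolation CV = {interpolation_cv(O, E):.1%}")

cvs = [1.125, 1.319, 1.486, 1.86]     # Methods 1-4, from the text
counts = [3412, 20900, 7761, 2047]    # hypothetical; sums to 34,120
overall = sum(c * n for c, n in zip(cvs, counts)) / sum(counts)
print(f"overall CV = {overall:.0%}")  # ~137%
```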
The total cumulative exposure (WLM) for each miner is obtained from:

    WLM = Σij WLij × UGMONij

where WLij is the estimated exposure for mine i in year j and UGMONij is the number of months spent underground in mine i during year j. The variance of WLM, assuming independence of the WLij, is then:

    Var(WLM) = Σij (UGMONij)² Var(WLij)
             = Σij (UGMONij)² (CV)² (WLij)²

where CV is the coefficient of variation for the estimated exposure WLij. If we substitute our estimate of the overall CV=137% and use total cumulative exposure divided by total months underground (WLM/TOTMON) as an estimate of WLij for each individual miner, the average CV for cumulative exposure (WLM) is 0.97, or a relative standard deviation of 97% of the total WLM for each miner. Since radon daughter measurements were taken in different areas of each mine and often at different times of the day or week, we will assume that the variance in these measurements reflects the variance in exposure levels among individual miners, i.e.,

    σ²ijk = Var(WLij)

where σ²ijk is the variance in the exposure measurement for miner k in mine i and year j.

# B. Effect on Relative Risk Estimation of Exposure Measurement Errors

There appears to be a general impression that errors in exposure measurements usually cause an underestimation of relative risk. Indeed, Bross originally demonstrated that if misclassification was equal in two comparison populations, one would tend to underestimate differences in proportions of diseased persons. Keys and Kihlberg qualified this concept by showing that relative risk is underestimated when misclassification errors are independent of disease and exposure relationships. In general, it has been shown by Copeland et al., among others, that relative risk estimates are biased too low in the presence of nondifferential misclassification (equal misclassification of disease in both exposed and unexposed groups). Little work has been done concerning the effects of errors in continuous measures of exposure upon relative risk estimates obtained from statistical models. It is this situation that is a potential problem for the analysis in this report. Prentice introduced a method for dealing with errors in individual exposure measures when using the Cox proportional hazards model. Prentice, and more recently Hornung, have shown that the direction of bias in relative risk estimation depends upon the error distribution and the shape of the exposure-response model. In general, when the variability in individual exposure errors increases with the level of exposure and the relative risk model is supra-linear (curving upward), relative risk will actually be overestimated when exposure errors are ignored. The popular log-linear or exponential risk function is an example of a model which may often overestimate relative risk in the presence of errors whose magnitude increases with increasing levels of cumulative exposure. As was reported earlier, the log-linear model did not provide the best fit to the data. Instead, the power function model, which involved the logarithms of cumulative exposure and cumulative cigarette smoking, provided a better fit. The effect upon risk estimates using this model was investigated when errors in exposure are lognormal, as indicated in the previous section. Without presenting the statistical details, it is sufficient to say that under these conditions (power function model and lognormal distribution of exposure errors) the effect upon relative risk estimates is negligible.
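The cumulative-exposure calculation and its variance can be illustrated in a few lines; the mine-year records below are hypothetical, and CV = 1.37 is the overall relative error estimated in the previous section.

```python
# Illustrative computation of cumulative exposure (WLM) and its relative
# error using the variance formula above; records are hypothetical.
import math

records = [(2.5, 10), (4.0, 12), (1.2, 8), (6.3, 11)]  # (WL_ij, UGMON_ij)
CV = 1.37  # overall relative error in each annual WL estimate

wlm = sum(wl * months for wl, months in records)
var_wlm = sum((months ** 2) * (CV ** 2) * (wl ** 2) for wl, months in records)
cv_wlm = math.sqrt(var_wlm) / wlm

print(f"WLM = {wlm:.1f}")
print(f"relative error of WLM = {cv_wlm:.0%}")  # cohort average was ~97%
```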
If the exposure measurements were generally higher than those actually experienced by the miners, as mentioned in the 1971 Monograph, relative risk per WLM would be underestimated regardless of the distribution of exposure measurement errors. In summary, the degree of error in individual exposure measurements was quite high, an estimated CV of 97%. If, however, these individual errors were lognormally distributed about the annual average concentration in each mine, the degree of bias in relative risk estimates generated by the power function model would be minimal. Regardless of the form of the error distribution, the relative risks generated by the exposure-response model would be too low if the exposure measurements were systematically too high. Therefore, examination of the pattern of error in the exposure data would suggest that relative risks produced by the power function model are either unbiased or possibly a bit low.

# V. QUANTITATIVE RISK ESTIMATES

The previous sections have outlined the protocol for the risk model development, the selection of an appropriate quantitative risk model, the temporal factors influencing risk estimation, and the magnitude and effect of exposure measurement errors. These are factors requiring careful study before attempting to make valid quantitative risk estimates. In most risk assessments, results are reported relative to some unexposed population. In animal studies, a control group is generally used for this purpose. In life-table analyses, expected mortality is obtained from some standard population, often that of the U.S. The problems inherent in the use of such external referents have been well documented. Although a subcohort of miners unexposed to radon daughters would be ideal for a referent group, there were no unexposed miners in the U.S. cohort. Since the proportional hazards model uses internal comparisons in generating risk estimates, risk projections relative to an unexposed population necessarily involve an extrapolation to zero exposure. In the case of the power function model, a background exposure of 0.2 WLM/year of age was added to every miner's cumulative total. All risk estimates are relative to someone exposed to these background rates. Therefore, quantitative relative risk estimates are somewhat sensitive to the choice of a background exposure rate. One way of checking the appropriateness of the model is to divide cumulative exposure into discrete intervals and calculate lung cancer risks in each interval relative to risks experienced in the lowest interval. In this way, relative risk estimates are free of any exposure-response function. If the risk model then fits the risk estimates in the selected intervals, one would be assured that the model is appropriate for quantitative risk estimation. The cumulative exposure intervals chosen for this analysis were: less than 20 WLM, 20-120, 120-240, 240-480, 480-960, 960-1920, 1920-3720, and greater than 3720 WLM. Risk estimates in each interval are calculated relative to the risk in the interval less than 20 WLM and are plotted at the mean exposure in each interval: 66.6, 179, 351, 698, 1352, 2579, and 5416 WLM, respectively. Figure 8 illustrates how these interval estimates are uniformly lower than those produced by the risk model when using 0.2 WLM/year as a background rate of exposure. The shape of the risk model, however, shows remarkably good agreement with the pattern of relative risk estimates in the selected intervals.
This implies that the quantitative risk model is appropriate exclusive of the intercept. This could be due either to an improper choice of baseline exposure rate or to the fact that all interval estimates are relative to exposure in the lowest interval, 0-20 WLM. If there is some level of excess risk in this interval relative to an actual unexposed population, the interval estimates would be too low. The cumulative exposure of 0.2 WLM/year is an estimate of the background exposure in the overall U.S. population. Exposures near ore-bearing lands are known to be considerably higher than average. Therefore, it is probable that background exposures in the Colorado Plateau area are higher than average U.S. levels.

[Figures 8 and 9: dotted lines and vertical bars represent 95% confidence limits.]

In the interest of using a background more in line with exposures received by persons living in the Colorado Plateau, the background exposure was increased to 0.4 WLM/year. This produced a quantitative risk model that agreed very well with the interval estimates, as can be seen in Figure 9. Using this model, relative risk estimates were calculated for cumulative radon daughter exposures in the range 30 to 120 WLM, corresponding to exposure levels of from one to four WLM/year over a 30-year working lifetime. These estimates range from a relative risk of 1.42 at 30 WLM to 2.07 at 120 WLM compared to someone of the same age and smoking habits with a cumulative lifetime background exposure of 24 WLM and a background exposure rate of 0.4 WLM/year. These estimates (0.9 to 1.4 excess relative risk per 100 WLM) are slightly higher than those reported by Muller et al. for the Ontario miners, but somewhat less than the estimates of Radford and Renard for the Swedish iron miners. Obviously, these estimates are subject to the usual caveats concerning extrapolation from higher cumulative exposures and exposure rates. Because relatively few data are currently available in this cohort below 120 WLM (10 lung cancer deaths out of 709 miners), there may be some doubt that the model used actually is appropriate at these low levels. However, the pattern of relative risk estimates produced in each of the categorized exposure levels would suggest that this model fits the data well in the range of 60 to 6000 WLM.

# Magnitude and Effect of Errors in Exposure Measurements

Analyses of the errors associated with the four methods of estimating uranium mine exposure levels indicated a lognormal distribution of errors with a relative standard deviation, or CV, of 97 percent. Although errors of this magnitude may cause overestimation of relative risk when using the log-linear risk model, the better-fitting power function model is generally insensitive to errors of this type. In fact, if estimated exposure levels were systematically higher than those actually received by the miners, relative risks per unit WLM would be underestimated for these data.

# Quantitative Risk Estimates

Present-day radon daughter exposures are considerably less than those experienced in the past by uranium miners. There is also current interest in low-level exposure to the general population from indoor radon and its decay products. Consequently, the primary cumulative exposure range of interest in risk assessment appears to be below 120 WLM. Although approximately 20 percent of the cumulative exposures in this study were below this level, there have been only 10 lung cancer deaths among this subgroup as of the end of 1982.
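For illustration, a simplified stand-in for the power-function relative-risk model can be written as RR = ((x + b)/b)^β, with the exponent back-solved from the quoted endpoint RR = 1.42 at 30 WLM. This sketch is hypothetical: the published model also includes smoking and an exposure-rate interaction, so it only approximately reproduces the 2.07 quoted at 120 WLM.

```python
# A hypothetical, simplified stand-in for the power-function RR model:
# RR(x) = ((x + b) / b) ** beta, relative to background-only exposure.
# beta is back-solved from the quoted RR = 1.42 at 30 WLM.
import math

b = 24.0  # cumulative background WLM (0.4 WLM/year over 60 years of age)
beta = math.log(1.42) / math.log((30.0 + b) / b)  # ~0.43

def relative_risk(wlm: float) -> float:
    """RR for cumulative exposure wlm (WLM) vs. background exposure only."""
    return ((wlm + b) / b) ** beta

for x in (30, 60, 120):
    print(x, round(relative_risk(x), 2))  # 1.42, ~1.72, ~2.17 (text: 2.07)
```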
Until this cohort is followed to extinction, epidemiologic models such as that produced in this report will be necessary to evaluate the risk of lung cancer mortality at these lower exposures. The model developed for this report provides a very good fit to the data in the range 60 to 6000 WLM. It seems reasonable that predictions based upon this model would be reliable at least for occupational exposure of adult white males. There is little or no mortality data available regarding women and children. The risk estimates provided in Table 7 are presented as an evaluation based upon careful consideration of all factors thought to influence such long-term mortality studies. All of the caveats associated with such evaluations apply to some degree to these results.

¹Exclusive of background exposure.
²Risks are calculated using the exposure-rate interaction model in Table 6 relative to miners of the same age and smoking habits with a cumulative lifetime background exposure of 24 WLM and a background exposure rate of 0.4 WLM/year.

# APPENDIX III

# A. Introduction

This appendix contains examples of engineering control methods that can be used to reduce miners' exposure to radon progeny in underground uranium mines, although the same methods are applicable to other hard rock mines. Many of these control methods have been traditionally used in uranium mines, yet only recently have researchers (primarily from the Bureau of Mines) studied the efficacy of these methods.

# B. Mechanical Ventilation

Mechanical ventilation is the primary and most successful technique for reducing exposure to radon progeny. Average measurements of 2 to 200 working levels (WL) of radon progeny were common in U.S. uranium mines during the early 1950's before mechanical ventilation became prevalent. In contrast, during 1979 and 1980, the average concentrations of radon progeny recorded by MSHA ranged from 0.30 to 0.46 WL in the production areas of 61 underground uranium mines. Thus the concentration of radon progeny in U.S. uranium mines has been greatly decreased, mainly because of improved ventilation. Sweden has also successfully reduced radon progeny concentrations in mines with improved mechanical ventilation; the average annual exposure for nonuranium miners in Sweden decreased from 4.7 working level months (WLM) in 1970 to 0.7 WLM in 1980.

# General Principles

Dilution ventilation in large mines consists of primary and secondary ventilation systems. In the primary system, fresh air is brought into the mine either through separate air shafts or through mine entrances used for miner access and equipment transport. The air can be blown in by a fan located at the surface or drawn in by a fan located inside the mine. Once in the mine, the air is blown or drawn through the main active passageways and then is pushed or drawn out of the mine through special ventilation shafts or openings used to remove ore. The secondary or auxiliary ventilation system provides fresh air to miners working in areas that include stopes and faces where access comes from a single shaft or drift and thus the work area is a dead end. For these areas, the air is often removed through the same shaft that was used to bring in the air. The source of fresh air for the secondary system is provided from the primary air system in the main passageway.
To prevent mixing fresh air and contaminated air in the shaft or drift leading to the dead end, the secondary system usually consists of ductwork with a fan to blow or exhaust fresh air from the main passageway to the face. The contaminated air then passively returns to the main passageway (because of a pressure gradient) through the shaft without contaminating the supply air. The contaminated air at the stope may also be brought back to the main passageway through a second duct and fan system. Once returned to the main passageway, the contaminated air joins the primary exhaust air stream, which is then carried out of the mine.

# Designing a Dilution Ventilation System

Ventilation requirements must be considered when planning and designing the mine. Adding mine ventilation as an afterthought once the mine has been designed or completed is usually more expensive and less efficient. Consideration should be given to the following when designing the ventilation plan for a mine:

- Identify the outline of the ore body that will be mined;
- Determine the rate of emanation of radon from the rock in which the ore occurs;
- Place as much of the primary ventilation system as possible, including entrances and passageways, in barren ground (i.e., ground not containing ore);
- Set up passageways so that a split or parallel system of ventilation can be used;
- Set up the mine so that working faces ventilated in series are minimized;
- Design the mine so that air inlets are located on one side of the ore body and exhaust airways on the opposite side of the ore body;
- Design the mine so that the distances ventilation air travels in the mine are minimized (reduce or eliminate reentrainment and short circuits);
- Design the mine so that adequate volumes of air can be provided without having high pressure drops across air controls in haulage and production areas;
- Design the ventilation system to account for increasing concentrations of radon gas, and therefore radon progeny, since as the mine ages there will be more surface area for gas exchange into the mine; and
- Consider control devices, fans, push-pull systems, and minimizing leaks when designing the system.

The primary ventilation system delivers fresh air for the secondary air system and removes contaminated air from the secondary air system. The design of the primary air system is discussed in the following paragraphs.

# a. Split or Parallel Ventilation Systems

A "split" or "parallel" ventilation system involves providing all or just a few working areas with fresh air that has not been previously used to ventilate other working areas. After the working areas are ventilated, the air is then pushed or drawn back into the primary system, where it is moved out of the mine. By contrast, in a "series" ventilation system, all areas are ventilated by a single continuous air circuit. The advantages of a split or parallel system include the reduction of both the residence time and cumulative air contamination. A series system, on the other hand, has several disadvantages.
In addition to its long residence air times and a cumulative build-up of air contaminants from one area to another, other disadvantages include the following:

- High air velocities, which are often required;
- Higher power costs associated with moving air at high velocities because of increased static pressures, unless additional ventilation shafts are constructed; and
- The potential spread of toxic gases to all areas of the mine in the event of a fire.

However, to keep residence time down in a split or parallel ventilation system, the air velocities to the multiple drifts must be maintained. This will increase the fan and power requirements; additional ventilation shafts may also be necessary.

# b. Control Devices

Sliding door regulators are used to prevent air from passing where miners and equipment need to pass through periodically. The problem with doors is that to be effective they must be closed after being used. Doors must also be well constructed to remain secure with repeated usage; steel doors in substantial frames are most commonly used in Canada.

# c. Pushing Versus Pulling Ventilation Systems

The pressure on the air intake side of a mine is always greater than on the exhaust side regardless of whether a pushing or pulling ventilation system is used. The difference is that the intake side pressure is greater than atmospheric pressure for a pushing system, whereas in an exhaust or pulling system, the pressure on the intake side is below atmospheric pressure. Exhausting (pulling) offers some advantages over a pushing system. For example, forcing air into haulageways and escape areas often requires air locks and other equipment. Exhaust systems draw air from these locations without the need for air locks and remove air from the mine through special airways to exhaust fans.

# Secondary (Auxiliary) Ventilation System

The secondary (auxiliary) ventilation system brings sufficient fresh air to the working area from the primary air system without mixing it with the returning contaminated air from the face.

# a. Use of Ducts

Another advantage of an air-blowing system is that it increases pressure. Measurements of radon gas content in air exhausted from mines have shown that the radon gas emitted into the atmosphere was 20% less with the air-blowing system than with an exhaust system. This indicates that less radon gas diffused into the ventilated areas when an air-blowing system was utilized than when an exhaust system was used.

# c. Exhaust Duct System (Pull System)

In the exhausting or pulling system, contaminated air is drawn from the working face by a duct that runs from the face to the main passageway in the primary air system through the access tunnel. Fresh air is then drawn into the access tunnel toward the work area by the pressure gradient created by removing air. Exhausting (pulling) air from the face instead of blowing (pushing) offers the following advantages:

- Fresh incoming air is maintained in the tunnel used by miners to access the active stope.

# d. Push-Pull System

A push-pull system contains two ducts in the accessway, one for pushing clean air to the face and the other for exhausting air from the face back to the primary air system.
This system has many of the advantages of both the push and the pull systems, including the following:

- The blowing of air that sweeps across and ventilates the active face, thus providing good dilution in work areas;
- The efficient collection of contaminants near the work face; and
- Reduced contamination of air in the access tunnel.

The main disadvantages are the cost and the fact that it occupies more drift area.

# Overpressurization and Mine Pumping

The amount of radon gas diffusing into mine spaces from interstitial rock is dependent on the pressure in the mine space. The lower the atmospheric pressure in the mine space as compared to the pressure in the interstitial rock, the more radon gas will pass from the rock to the mine space. Conversely, the greater the pressure in the mine space as compared to the rock, the less radon gas will seep into the mine space. Overpressurization and mine pumping are two control measures which take advantage of this principle to reduce concentrations of radon gas. In overpressurization, more ventilation air is pushed into mine spaces than is removed. Although Edwards and Bates have stated that "nothing that we have found provides mining companies with sufficient guidelines for applying the overpressurized ventilation system effectively," they conducted a mathematical study of overpressurization and drew the following conclusion: overpressurization does decrease the radon flux. It was estimated that a 2% pressure differential in a sandstone matrix would result in a 50% reduction in radon flux with mine sink lengths of 100 meters or less. A mine sink is an area either in the mine itself or a naturally occurring space or lattice in the matrix where the interstitial air can flow. If the distance between the sink and the mine space approaches 200 meters, the benefit of overpressurization is lost. Because of the dramatic increase in radon gas in the sink area during overpressurization of work areas, no miners should be allowed in those sinks without proper respiratory protection. However, many open spaces that can serve as sinks are filled in and cannot be occupied. The Bureau of Mines is gathering information on the effects of overpressurization in mines. Data from the pressurization of an enclosed chamber in a mine indicated that the radon concentration was 99% lower than the concentration under static conditions and 92% lower than the concentration under controlled ventilation conditions. In a study by Schroeder et al. of mine areas that were pressurized by 10 mm of mercury, the radon flux decreased from 5- to 20-fold as compared to normal ventilation conditions. In mine pumping, a negative pressure is created in the mine space by sealing air intake openings and permitting the exhaust fans to operate. This is done during an off-shift when no miners are in the mine. Because of the negative pressure created in the mine with respect to the surrounding rock, radon is drawn into the mine space from the interstitial rock at a rate higher than would occur under static conditions. The air intakes must be opened well before miners enter the mine to permit the ventilation system to remove the radon gas and radon progeny that have accumulated in the mine spaces. After this accumulation has been removed, the mine spaces should have lower concentrations of radon gas (and therefore radon progeny) when the miners reenter the mine. This is because much of the radon gas in the surrounding interstitial rock has been removed and is not available to diffuse into the mine working areas.
However, monitoring of these areas would be required prior to allowing miners to enter. More studies are needed to determine the effectiveness of this control procedure.

Summers et al. tested the efficacy of this procedure and found that the amount of radon gas escaping through the surrounding rock was insufficient to warrant the uniform use of a wall sealant, provided that all cracks, fissures, and holes were sealed to prevent major leaks.

# Membrane Sealants Used on Bulkheads

# Negative Air Pressure Behind a Bulkhead

A slight negative pressure behind the bulkhead of about 0.03 cm water with respect to active areas will prevent radon gas leaks into fresh ventilation air [Thomas et al. 1981]. Methods suggested for reducing radon emissions from backfilled stopes include: (1) covering the backfill with one meter of clean sand, (2) sealing the surface of the tailings, (3) using a bulkhead to seal the backfilled stope and maintaining a negative pressure behind the bulkhead, and (4) using nonradioactive materials as backfill instead of mill tailings.

# Efficiency

In summary, backfilling with uranium tailings can be as effective as bulkheading in reducing radon progeny emissions, although it is more costly. Because high radon progeny concentrations are emitted from wet backfill and during the backfilling process, backfilling should not be used in active mine areas, and miners should be protected from overexposure during backfilling operations.

# E. Sealants Used on Mine Walls

This section describes sealants used as diffusion barriers against radon gas, including how the sealants are applied and the best materials used as sealants. Also, the effectiveness of sealants for reducing radon emanation and exposure will be discussed.

# G. Automation

Another radon progeny control method is increased automation. Techniques such as robotics that minimize the time the miner spends in the high-exposure areas of the mine and in activities such as drilling, blasting, or loading ore will decrease the miner's radiation exposure. Although robotics presently has a limited place in the mines, it may be possible in the future to further automate the ore mining process.

1. Two different sampling days are randomly selected from each 2-week block of time.

2. The stations within a cluster are to be sampled on the same workdays and work shifts. All stations within a cluster are to be alternately sampled, seven times on each sampling day, each time in independent random order. During the work shift, the seven periods for sampling of the entire cluster shall be equally spaced in time. For example, the three stations A, B, and C could be considered a cluster and sampled as ABC, BCA, ACB, CBA, CAB, BAC, and ACB during seven successive intervals of approximately equal durations. If it is not feasible to sample in this manner, then sampling can be conducted along the most efficient path but with a different, randomly determined starting point on each day (e.g., BCA, BCA,..., BCA during one sampling day and ABC, ABC,..., ABC or CAB, CAB,..., CAB during other sampling days).

3. The estimated average work shift concentration (α̂i) for each sampling day (i = 1,2,...,12) is computed from an analysis of the seven grab samples taken on that day. Formulae for this computation are contained in section G.

4. Whenever α̂i for a particular station exceeds 0.14 WL, then that station shall be resampled the next workday.
[Note: In this case, α̂i = α̂A, and sampling on the "next workday" (day B) is in addition to the two randomly selected sampling days required in a 2-week block of time.]

a. If α̂B (the estimated average work shift concentration on the next workday) is < 0.14 WL, then exposure monitoring shall continue as described starting at section C,1.

b. If α̂B also exceeds 0.14 WL, then: (1) steps shall be taken to reduce the radon progeny concentration in that work area by implementing work practices and engineering controls, (2) respiratory protection shall be required for all miners entering that work area, and (3) grab sampling as described in section C,2 shall be conducted on a consecutive daily basis. Grab sampling shall continue on a consecutive daily basis until the estimated average work shift concentrations on any two consecutive workdays (α̂A and α̂B) are both < 0.10 WL. When α̂A and α̂B are both < 0.10 WL, then the requirements for respiratory protection are waived and exposure monitoring can revert to the schedule described starting at section C,1. This criterion (as discussed in section H,2) serves to provide early confirmation that the corrective steps taken by the mine operator have been effective in limiting the average work shift concentration of radon progeny to a level not exceeding 1.5 times the recommended exposure limit (REL) of 1/12 WL.

# G. Statistical Considerations and Data Analysis Formulae

The following are the statistical notations used in the sampling strategy:

Cij = measured concentration of radon progeny in the jth grab sample taken on the ith sampling day, where j = 1,2,...,7 for each day and i = 1,2,...,12 (2 workdays selected at random from each of six consecutive blocks of time).

CAj = measured concentration of radon progeny in the jth grab sample taken on day A, where j = 1,2,...,7.

CBj = measured concentration of radon progeny in the jth grab sample taken on the next workday following day A, where j = 1,2,...,7.

When the estimated average work shift concentrations are greater than 0.14 WL on two consecutive workdays, substantial evidence exists that the long-term average work shift concentration exceeds 1/12 WL. Therefore, when α̂A and α̂B both exceed 0.14 WL in a work area, NIOSH recommends that radon progeny concentrations be reduced in that work area by implementing work practices and engineering controls, and that the use of respiratory protection be required for all miners entering that work area. These recommendations are also made when the 95% lower confidence limit for the long-term average work shift concentration (LCL) exceeds 1/12 WL (see section D,4).

# Return to Compliance with the REL

The NIOSH sampling strategy uses criteria with approximately 90% confidence for an initial determination that a work area is tentatively back in compliance with the REL. Specifically, estimated average work shift concentrations from two consecutive workdays (i.e., α̂A and α̂B) in which both are < 0.10 WL was chosen as a criterion that demonstrates reasonable evidence that the average radon progeny concentration is being controlled to < 0.125 WL (i.e., 1.5 times the REL). Given the levels of intraday and interday variabilities observed in the Johnson data set, a work area with an average work shift concentration of 0.125 WL (i.e., 50% above the REL of 1/12 WL) has a 0.90 probability that one or both of a pair of consecutive estimated average work shift concentrations will be above 0.10 WL. This "2-day" decision rule limits the magnitude by which a work area's average work shift concentration may exceed 1/12 WL and remain undetected.
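The day-level decision logic of sections C and H can be sketched as follows, assuming the estimated average work shift concentration is simply the arithmetic mean of the seven grab samples (the exact estimator is defined in section G); the sample values are hypothetical.

```python
# Sketch of the day-level decision rules in sections C and H; assumes the
# day estimate is the arithmetic mean of the seven grab samples.
from statistics import mean

ACTION_LEVEL = 0.14  # WL: resample the next workday if exceeded
CLEAR_LEVEL = 0.10   # WL: two consecutive days below this lift controls

def day_average(grab_samples):
    """Estimated average work shift concentration from one day's grab samples."""
    return mean(grab_samples)

def resample_next_workday(alpha_i):
    return alpha_i > ACTION_LEVEL                             # section C,4

def controls_required(alpha_a, alpha_b):
    return alpha_a > ACTION_LEVEL and alpha_b > ACTION_LEVEL  # section C,4,b

def controls_lifted(alpha_1, alpha_2):
    return alpha_1 < CLEAR_LEVEL and alpha_2 < CLEAR_LEVEL    # 2-day rule

day_a = day_average([0.18, 0.12, 0.20, 0.15, 0.16, 0.11, 0.19])  # ~0.159 WL
day_b = day_average([0.17, 0.15, 0.16, 0.18, 0.14, 0.15, 0.16])  # ~0.159 WL
print(resample_next_workday(day_a))     # True: exceeds 0.14 WL
print(controls_required(day_a, day_b))  # True: both days exceed 0.14 WL
```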
This rule also has the advantage of permitting an early return to normal operations after a period of corrective actions to reduce exposure concentrations, at the expense of having less than high confidence that the REL is not being exceeded by more than 50%. However, only a small proportion of time passes until the next sampling day (as specified in the sampling strategy relative to the year), so that the 2-day rule limits the contribution of a temporarily excessive exposure in a work area to a miner's cumulative annual exposure. At a later time, the lower confidence limit criterion for noncompliance determined after 12 randomly selected sampling days (i.e., LCL > 1/12 WL) would be likely to detect a statistically significant increase above the REL if the long-term average work shift concentration were as high as 0.125 WL.

# Less Frequent Exposure Monitoring

The upper confidence limit criterion (i.e., UCL < 1/12 WL) gives 95% confidence that the long-term average work shift concentration is not above 1/12 WL, under the assumption that the α̂i exhibit log-normally distributed random variations. The additional requirement that α̂i and α̂i+1 not exceed 0.14 WL is meant to detect a temporarily or periodically high average work shift concentration (i.e., high α̂i values that are not sustained for the full block of time from which the 12 sampling days were selected). When both of these requirements are met, only 2 randomly selected sampling days are then required per 26-week block of time.

UCL < 0.063 WL gives greater than 95% confidence that the long-term average work shift concentration (α) is < 0.063 WL (i.e., α is no larger than 75% of the REL), under the assumption that the α̂i exhibit log-normally distributed random variations. Under the additional assumption that geometric standard deviations (GSDs) for intraday and interday (log-normal) variability are similar to those reported in Johnson, the criterion that α̂12 (the estimated average work shift concentration on the last of the 12 sampling days) be < 0.033 WL gives 95% confidence that a projected future reference period would have a long-term average work shift concentration < 0.063 WL.

# Cessation of Exposure Monitoring

...reported. However, several recent studies suggest that this is not a practical concern, at least not in healthy individuals. Theoretically, the increased fluctuations in thoracic pressure caused by breathing with a respirator might constitute an increased risk to subjects with a history of spontaneous pneumothorax. Few data are available in this area. While an individual is using a negative-pressure respirator with relatively high resistance during very heavy exercise, the usual maximal-peak negative oral pressure during inhalation is about 15-17 cm of water. Similarly, the usual maximal-peak positive oral pressure during exhalation is about 15-17 cm of water, which might occur with a respirator in a positive-pressure mode, again during very heavy exercise. By comparison, maximal positive pressures such as those during a vigorous cough can generate 200 cm of water pressure. The normal maximal negative pleural pressure at full inspiration is -40 cm of water, and normal subjects can generate -80 to -160 cm of negative water pressure. Thus, while vigorous exercise with a respirator does alter pleural pressures, the risk of barotrauma would seem to be substantially less than that of coughing.
In some asthmatics, an asthmatic attack may be exacerbated or induced by a variety of factors including exercise, cold air, and stress, all of which may be associated with wearing a respirator. While most asthmatics who are able to control their condition should not have problems with respirators, a physician's judgment and a field trial may be needed in selected cases.

# Cardiac Effects

The added work of breathing from respirators is small and could not be detected in several studies. A typical respirator might double the work of breathing (from 3% to 6% of the total oxygen consumption), but this is probably not of clinical significance. In concordance with this view, several other studies indicated that at the same workloads heart rate does not change with the wearing of a respirator. In contrast, the added cardiac stress due to the weight of a heavy respirator may be considerable. A self-contained breathing apparatus (SCBA) may weigh up to 35 pounds. Heavier respirators can reduce maximum external workloads by 20% and similarly increase heart rate at a given submaximal workload. In addition, it should be noted that many uses of SCBA (e.g., for firefighting and hazardous waste site work) also necessitate the wearing of 10-25 pounds of protective clothing.

# Miscellaneous Health Effects

In addition to the health effects (described above) associated with wearing respirators, specific groups of respirator wearers may be affected by the following factors:

(1) Corneal Irritation or Abrasion

Corneal irritation or abrasion might occur with the exposure. This would, of course, be a problem primarily with quarter- and half-face masks, especially with particulate exposures. However, exposures could occur with full-face respirators because of leaks or inadvisable removal of the respirator for any reason. While corneal irritation or abrasion might also occur without contact lenses, their presence is known to substantially increase this risk.

(2) Loss or Misplacement of a Contact Lens

The loss or misplacement of a contact lens by an individual wearing a respirator might prompt the wearer to remove the respirator, thereby resulting in exposure to the hazard as well as to the potential problems noted above.

(3) Eye Irritation from Respirator Airflow

The constant airflow of some respirators, such as powered air-purifying respirators (PAPRs) or continuous-flow air-line respirators, might irritate the eyes of a contact lens wearer.

# B. Suggested Medical Evaluation and Criteria for Respirator Use

The following NIOSH recommendations allow latitude for the physician in determining a medical evaluation for a specific situation. More specific guidelines may become available as knowledge increases regarding human stresses from the complex interactions of worker health status, respirator usage, and job tasks. While some of the following recommendations should be part of any medical evaluation of workers who wear respirators, others are applicable for specific situations.

- A physician should determine fitness to wear a respirator by considering the worker's health, the type of respirator, and the conditions of respirator use.

The recommendation above leaves the final decision of an individual's fitness to wear a respirator to the person who is best qualified to evaluate the multiple clinical and other variables. Much of the clinical and other data could be gathered by other personnel.
It should be emphasized that the clinical examination alone is only one part of the fitness determination. Collaboration with foremen, industrial hygienists, and others may often be needed to better assess the work conditions and other factors that affect an individual's fitness to wear a respirator.

- A medical history and at least a limited physical examination are recommended.

The medical history and physical examination should emphasize the evaluation of the cardiopulmonary system and should elicit any history of respirator use. The history is an important tool in medical diagnosis and can be used to detect most problems that might require further evaluation. Objectives of the physical examination should be to confirm the clinical impression based on the history and to detect important medical conditions (such as hypertension) that may be essentially asymptomatic.

- While chest X-ray and/or spirometry may be medically indicated in some fitness determinations, these should not be routinely performed.

In most cases, the hazardous situations requiring the wearing of respirators will also mandate periodic chest X-rays and/or spirometry for exposed workers. When such information is available, it should be used in the determination of fitness to wear respirators. Data from routine chest X-rays and spirometry are not recommended solely for determining if a respirator should be worn. In most cases, with an essentially normal clinical examination (history and physical), these data are unlikely to influence the respirator fitness determination; additionally, the X-ray would be an unnecessary source of radiation exposure to the worker. Chest X-rays in general do not accurately reflect a person's cardiopulmonary physiologic status, and limited studies suggest that mild to moderate impairment detected by spirometry would not preclude the wearing of respirators in most cases. Thus it is recommended that chest X-rays and/or spirometry be done only when clinically indicated.

- The recommended periodicity of medical fitness determinations varies according to several factors but could be as infrequent as every 5 years.

Federal or other applicable regulations shall be followed regarding the frequency of respirator fitness determinations. The guidelines for most work conditions for which respirators are required are shown in

- The respirator wearer should be observed during a trial period to evaluate potential physiological problems.

In addition to considering the physical effects of wearing respirators, the physician should determine if wearing a given respirator would cause extreme anxiety or a claustrophobic reaction in the individual. This could be done during training while the worker is wearing the respirator and is engaged in some exercise that approximates the actual work situation. Present OSHA regulations state that a worker should be provided the opportunity to wear the respirator "in normal air for a long familiarity period...". This trial period should also be used to evaluate the ability and tolerance of the worker to wear the respirator. This trial period need not be associated with respirator fit testing and should not compromise the effectiveness of the vital fit-testing procedure.

CFR = Code of Federal Regulations. See CFR in references.

- Examining physicians should realize that the main stress of heavy exercise while using a respirator is usually on the cardiovascular system and that heavy respirators (e.g., SCBA) can substantially increase this stress.
Accordingly, physicians may want to consider exercise stress tests with electrocardiographic monitoring when heavy respirators are used, when cardiovascular risk factors are present, or when extremely stressful conditions are expected. Some respirators may weigh up to 35 pounds and may increase workloads by 20 percent. Although a lower activity level could compensate for this added stress, a lower activity level might not always be possible. Physicians should also be aware of other added stresses, such as heavy protective clothing and intense ambient heat, that would increase the worker's cardiac demand. As an extreme example, firefighters who use an SCBA inside burning buildings may work at maximal exercise levels under life-threatening conditions. In such cases, the detection of occult cardiac disease, which might manifest itself during heavy stress, may be important. Some authors have either recommended stress testing or at least its consideration in the fitness determination. Kilbom has recommended stress testing at 5-year intervals for firefighters below age 40 who use SCBA and at 2-year intervals for those aged 40-50. He further suggested that firemen over age 50 not be allowed to wear SCBA. Exercise stress testing has not been recommended for medical screening for coronary artery disease in the general population. It has an estimated sensitivity and specificity of 78% and 69%, respectively, when the disease is defined by coronary angiography. In a recent 6-year prospective study, stress testing to determine the potential for heart attacks indicated a positive predictive value of 27% when the prevalence of disease was 3.5%. While stress testing has limited effectiveness in medical screening, it could detect individuals who may not be able to complete the heavy exercise required in some jobs. A definitive recommendation regarding exercise stress testing cannot be made at this time. Further research may determine whether this is a useful tool in selected circumstances.

- An important concept is that "general work limitations and restrictions identified for other work activities also shall apply for respirator use".

In many cases, if a worker is physically able to do an assigned job while not wearing a respirator, the worker will in most situations not be at increased risk when performing the same job while wearing a respirator.

- Because of the variability in the types of respirators, work conditions, and workers' health status, many employers may wish to designate categories of fitness to wear respirators, thereby excluding some workers from strenuous work situations involving the wearing of respirators.

Depending on the various circumstances, several permissible categories of respirator usage are possible. One conceivable scheme would consist of three overall categories: full respirator use, no respirator use, and limited respirator use including "escape only" respirators. The latter category excludes heavy respirators and strenuous work conditions. Before identifying the conditions that would be used to classify workers into various categories, it is critical that the physician be aware that these conditions have not been validated and are presented only for consideration. The physician should modify the use of these conditions based on actual experience, further research, and individual worker sensitivities.
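As a worked check on the screening statistics quoted earlier in this section, positive predictive value follows from sensitivity, specificity, and prevalence via Bayes' rule. Note the 27% figure was an empirical result from a separate prospective study, not a value derived from the 78%/69% test characteristics.

```python
# Worked example: positive predictive value (PPV) from sensitivity,
# specificity, and prevalence via Bayes' rule.
def ppv(sensitivity: float, specificity: float, prevalence: float) -> float:
    true_pos = sensitivity * prevalence
    false_pos = (1.0 - specificity) * (1.0 - prevalence)
    return true_pos / (true_pos + false_pos)

# ~8% at 3.5% prevalence, illustrating the limited value of population
# screening with these test characteristics.
print(f"PPV = {ppv(0.78, 0.69, 0.035):.1%}")
```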
The physician may also wish to consider the following conditions in selecting or permitting the use of respirators:

- History of spontaneous pneumothorax;
- Claustrophobia/anxiety reaction;
- Use of contact lenses (for some respirators);
- Moderate or severe pulmonary disease;
- Angina pectoris, significant arrhythmias, recent myocardial infarction;
- Symptomatic or uncontrolled hypertension; and
- Advanced age.

Wearing a respirator would probably not play a significant role in causing lung damage such as pneumothorax. However, without good evidence that wearing a respirator would not cause such lung damage, the physician would be prudent to prohibit the individual with a history of spontaneous pneumothorax from wearing a respirator. Moderate lung disease is defined by the Intermountain Thoracic Society as being present when the following conditions exist: a forced expiratory volume in one second (FEV1) divided by the forced vital capacity (FVC) (i.e., FEV1/FVC) of 0.45 to 0.60, or an FVC of 51% to 65% of the predicted FVC value. Similar arbitrary limits could be set for age and hypertension. It would seem more reasonable, however, to combine several risk factors into an overall estimate of fitness to wear respirators under certain conditions. Here the judgment and clinical experience of the physician are needed. Many impaired workers would even be able to work safely while wearing respirators if they could control their own work pace, including having sufficient time to rest.

# C. Conclusion

Individual judgment is needed to determine the factors affecting an individual's fitness to wear a respirator. While many of the preceding guidelines are based on limited evidence, they should provide a useful starting point for a respirator fitness screening program. Further research is needed to validate these and other recommendations currently in use. Of particular interest would be laboratory studies involving physiologically impaired individuals and field studies conducted under actual day-to-day work conditions.

# APPENDICES

# APPENDIX I

# EVALUATION OF EPIDEMIOLOGIC STUDIES EXAMINING THE LUNG CANCER MORTALITY OF UNDERGROUND MINERS

Confounding Bias: A potential attribute of data. In measuring an association between an exposure and a disease, a confounding factor is one that is associated with the exposure and independently is a cause of the disease. Confounding bias can be controlled if information on the confounding factor is present.

Coulomb: The charge flowing past a point of a circuit in one second when there is a current of one ampere in the circuit; also, the aggregate charge carried by 6 x 10^18 electrons.

Electron Volt: The change in potential energy of a particle having a charge equal to the electronic charge (1.60 x 10^-19 coulombs), moving through a potential difference of 1 volt.

Half-Life: The time required for a radioactive substance to decay to one half of its initial activity.

Follow-up Period: The length of time between a person entering an epidemiological study cohort and the present report (or the end of the study).

Incidence Rate: The number of new cases of disease per unit of population per unit of time, e.g., 3/1000/year.

Interaction: The association of one factor (occupation) with disease modified by the effect of another factor (smoking). The measure of association can be the rate or odds ratio.
This follows a nonmultiplicative model (may be additive).

Ionizing Radiation: Any electromagnetic or particulate radiation capable of producing ions, directly or indirectly, in its passage through matter.

Lagging Exposures: Lagging of the cumulative exposure assigned to a miner. Some authors consider that radon progeny exposures are "redundant" if they occur after lung cancer is induced. Some authors believe that cumulative exposures should be lagged by a certain number of years (5 or 10) to exclude redundant exposures occurring during these years. For example, Radford and St. Clair Renard discounted the last 5 years of exposures from the cumulative total WLM assigned to each case of lung cancer in their analysis.

Biologic Latent Period: The time between an increment of exposure and the increase in risk attributable to it.

Epidemiologic Latent Period: The time between first exposure and death in those developing the disease during the study interval.

Linear Hypothesis: The hypothesis that excess risk is proportional to dose.

[Figure 4. Effect of age at initial exposure on a multistage model; x-axis: age at initial exposure.]

# VI. SUMMARY AND CONCLUSIONS

A valid quantitative risk assessment is much more than simply fitting an exposure-response curve to mortality data. This is especially true when considering an epidemiologic risk assessment. There are a great variety of risk factors and temporal effects that may alter the interpretation of the data analysis. This report is an attempt to address such modifying influences in an effort to better understand the underlying cancer mechanisms operative in the cohort of U.S. uranium miners exposed to radon daughters. There were a number of findings which are important in assessing the risk of lung cancer in the U.S. cohort.

# Influence of Cigarette Smoking

The joint effect of cumulative cigarette smoking and radon daughter exposure was found to be intermediate between additive and multiplicative. This would imply a synergistic effect under the usual definition, i.e., an effect exceeding the sum of the individual risks.

# Exposure-Rate Effect

Analysis of these data revealed that modeling cumulative exposure alone may not adequately predict the relative risk of lung cancer from chronic exposure to radon daughters. Miners receiving a given amount of cumulative exposure at lower rates for longer periods of time were at greater risk relative to those with the same cumulative exposure received at higher rates for shorter periods of time. This implies that results extrapolated from historical exposures at high rates may yield conservative results at current lower rates. Indeed, it is possible that the lower risk estimates in the U.S. study, when compared to the four other major radon studies as reported by Thomas et al., may be due to the higher exposure rates received by U.S. miners.

# Late-Stage Carcinogenic Effect

Careful examination of temporal effects implies that exposure to radon daughters acts at a late stage in the carcinogenic process. All temporal factors agreed in this respect. The appropriate lag to remove redundant exposure was a relatively short six years. Older miners at initial exposure were at greater risk than those exposed at younger ages. The relative risk of lung cancer decreases with the length of time after cessation of exposure. Whether or not the mathematical form of the multistage theory of carcinogenesis applies to this cohort, the temporal patterns are worth noting.
# C. Bulkheads

# Description

The second most important control measure used in underground mines today is the construction of bulkheads across inactive stopes or drifts. Bulkheads isolate inactive stopes, prevent the mixture of contaminated air from these stopes with fresh air, and help control the direction of air flow to working areas. Maintaining a negative air pressure behind a bulkhead will prevent leaks; this is important because radon progeny concentrations can exceed 1,000 WL behind a bulkhead. In addition, bulkheads must be strong and flexible enough to maintain an airtight seal during typical mining conditions, such as the ground movement and air shocks from blasting and the impact from accidental contact with mining equipment.

A bulkhead consists of three functional parts: (1) the primary structure, (2) the seal between the primary structure and the rock, and (3) a surface seal on the rock within one meter from the plane of the bulkhead [Summers et al. 1982]. The primary bulkhead structure fills most of the opening in the stope and provides resistance to shocks from blasting or contact with machinery. The primary structure consists of timber or an expanded metal lath covered with a continuous non-porous membrane. The membrane may be attached to, or sprayed upon, the timber in the primary structure. The membrane must not crack or develop holes or leaks during mining activities. The second part of the bulkhead, the seal between the primary structure and the surrounding rock, must resist running water and the air shocks and rock movements due to blasting. The third part of the bulkhead, the seal on the surface of the rock within one meter from the plane of the bulkhead, must be made of a material that adheres to damp rock surfaces and can withstand mining activities.

# 6. Fan Operation

...estimate that 100 bulkheads sealing 12.5 stopes would reduce the overall radon gas emissions into mine air by 2.25 Ci/day, a reduction of 25%. In summary, bulkheads are very effective in reducing radon gas (and thus radon progeny) in mine air. Especially promising are the new bulkheads designed by Summers et al. and further tested by Bloomster et al. These bulkheads may eventually replace the leakier and more flammable polyurethane bulkheads presently being used underground.

# D. Backfilling

In the uranium mining process, large quantities of ore are brought to the surface, leaving voids which may collapse if they are not stabilized. The tailings remaining after the uranium is extracted are often used as backfill. There are three benefits of backfilling stopes: (1) ground stabilization, (2)

Next, the coarse tailings "sand" is mixed with water to form a slurry and pumped into worked-out stopes. Sometimes the slurry is mixed with cement before pumping. After the water in the slurry percolates away, the stope is left filled with densely packed sand or cement. The radon progeny hazard can be increased, at least temporarily, by backfilling. Although the sand has considerably less radium than the ore or host rock, the finely divided sand has a larger surface area and many fine interstices between the grains through which radon gas can move. Therefore, the radon gas emanation rate of the sand is much higher than that of the ore or host rock. During backfilling, agitation of the slurry releases high concentrations of radon gas. Also, high concentrations of radon gas can collect above the sand in the newly filled stope (possibly reaching 65,000-75,000 pCi/l).
Thus the advantage of decreasing the ventilation volume with the backfill must be weighed against the increased emanation rate of the backfill. Mixing the slurry with cement will not prevent this increase in emanation rate, because radon gas can also travel freely through fine pores in the cement, especially water-filled pores. Indeed, radon gas emanates from porous cement, sand, or ore at a higher rate when it is wet than when it is dry, unless the material is overlain with a thick layer of water. During experiments in a mine, Franklin et al. found that backfilling 90% of a stope reduced the total radon progeny emissions from the stope by 85%. A feasibility study estimated that backfilling can be as effective as bulkheading in reducing radon emissions.

# E. Sealants

Pinholes in a sealant coating would not constitute a problem unless there were several thousand visible pinholes per square meter of sealant.

# Effectiveness of Sealants in Reducing Radon Emanation Rates

The effectiveness of sealants depends, in part, upon the porosity of the rock walls. Sealants produce the greatest decrease in radon emanation when applied to sandstone or other porous rock; sealants applied to granite will appear to be less effective because granite provides a natural barrier to radon emanation. Thus results from tests of the effectiveness of sealants vary greatly depending on the porosity of the rock walls, along with the mine ventilation rate, the grade of the uranium ore, and other factors. One study estimated that the overall decrease in radon emissions would be 56% if the same sealant coating were applied to 80% of the mine surfaces. Although sealants were less effective and more costly than bulkheads, the use of sealants is less disruptive to the mining process than the use of bulkheads.

In summary, there are at least seven materials available that make effective mine sealants. These materials can reduce radon emanation from mine walls by 50-75%.

# F. Controlling Radioactive Water Underground

# APPENDIX IV

# GRAB SAMPLING STRATEGY REQUIREMENTS FOR DETERMINATION OF RADON PROGENY EXPOSURES

# A. Introduction

Airborne concentrations of radon progeny must be monitored regularly to provide the basis for their control. Miners' exposures must be limited to no more than 1.0 WLM per year, and the average concentration of radon progeny in any work area must not exceed 1/12 WL during any work shift. The sampling strategy described here was developed after an evaluation of mine sampling data and the typical variability of radon progeny concentrations in underground mines. This strategy will allow the collection of timely and reliable environmental data that can be used as the basis for control of cumulative exposures.

This sampling strategy allows for the determination of the arithmetic average of time-varying concentrations of radon progeny during a work shift in a given work area. The determination is based on an unbiased estimate made from grab samples taken at random intervals throughout the work shift. Random sampling of work shifts during a reference period is also included for determination of a long-term arithmetic average work shift concentration. The formulae needed to calculate the statistical quantities used in this sampling strategy are contained in section G of this appendix. The rationale for the critical decision points used in the sampling strategy is contained in section H.

# B. Definition of Terms and Notations

STATION: A sampling location within a work area that represents the radon progeny concentration to which miners are exposed.
CLUSTER: Two or more stations at which sampling will be conducted during any work shift. The stations in a cluster should be located at different work areas but must be in close proximity to each other so that alternating grab samples can be taken during the same work shift.

BLOCK OF TIME: A period in which two different sampling days are randomly selected.

AVERAGE WORK SHIFT CONCENTRATION: The average concentration of radon progeny in working levels (WL) during a work shift at a given station.

AVERAGE: The arithmetic mean. The same term can be used for the average of several sample results or for the arithmetic mean of a distribution of concentrations that vary during a continuous period of time. In the latter case, the terms "average," "arithmetic average," and "time-weighted average" are synonymous.

âi: Estimated average work shift concentration for day i, where i = 1,2,...,12 and day i is the ith day in a time-ordered sequence of the 12 days that were randomly selected from the reference period.

LCL: 95% one-sided lower confidence limit for the true average work shift concentration.

UCL: 95% one-sided upper confidence limit for the true average work shift concentration.

If the estimated average work shift concentration is < 0.14 WL, then: (a) continue collecting seven grab samples on each of the two randomly selected sampling days in each 2-week block of time, and (b) continue using the criteria given in section C. After 12 weeks of sampling in which no two consecutive sampling days (âi and âi+1) were in excess of 0.14 WL, use the criteria given in section D for assurance, based on 12 days of sampling, that the average work shift concentration of radon progeny is in compliance with the REL; if compliance is verified, less frequent exposure monitoring is permitted.

# D. Criteria for Less Frequent Exposure Monitoring

To determine if less frequent exposure monitoring can be conducted at a specific work area, the following statistical decision criteria must be used.

1. Compute â (the estimated average work shift concentration, using seven grab samples per sampling day) for a work area during the reference period in which 12 sampling days were taken and no two consecutive sampling days (âi and âi+1) were in excess of 0.14 WL. Formulae for this computation are contained in section G.

2. Compute LCL and UCL, the 95% one-sided lower and 95% one-sided upper confidence limits, respectively, for the average work shift concentration during the reference period from which the 12 sampling days were taken. Formulae for these computations are contained in section G; â from section D,1 is a quantity used in the formulae for LCL and UCL.

3. If LCL exceeds 1/12 WL, then: (a) steps shall be taken to reduce the radon progeny concentration in that work area by implementing work practices and engineering controls, (b) respiratory protection shall be required for all miners entering that work area, and (c) grab sampling as described in section C,2 shall be conducted on a consecutive daily basis. Grab sampling shall continue on a consecutive daily basis until the estimated average work shift concentrations on any two consecutive workdays (â1 and â2) are both < 0.10 WL. When â1 and â2 are both < 0.10 WL, the requirements for respiratory protection are waived and exposure monitoring can revert to the schedule described starting at section C,1.
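The formulae of section G are not reproduced in this excerpt. As an illustration of the quantities involved, the sketch below computes â as the mean of the 12 daily estimates and forms one-sided 95% limits using ordinary normal-theory (Student's t) formulas; the document's own section G formulae may differ in detail, so treat this as an assumption-laden sketch rather than the prescribed computation.

```python
import math
from statistics import mean, stdev

T_095_ONE_SIDED = {11: 1.796}  # t critical value for 12 - 1 = 11 df

def daily_mean(grab_samples_wl):
    """a-hat_i: average work shift concentration (WL) for one day,
    estimated from that day's grab samples (seven per day above)."""
    return mean(grab_samples_wl)

def reference_period_limits(daily_means_wl):
    """Return (a-hat, LCL, UCL) for the reference period: the mean of
    the daily estimates and its 95% one-sided confidence limits."""
    n = len(daily_means_wl)
    a_hat = mean(daily_means_wl)
    se = stdev(daily_means_wl) / math.sqrt(n)
    t = T_095_ONE_SIDED[n - 1]
    return a_hat, a_hat - t * se, a_hat + t * se

# Example decision per section D: 12 daily means, all well below 0.14 WL.
days = [0.05, 0.06, 0.04, 0.07, 0.05, 0.06,
        0.05, 0.04, 0.06, 0.05, 0.07, 0.05]
a_hat, lcl, ucl = reference_period_limits(days)
print(ucl < 1 / 12)  # True: eligible for less frequent monitoring
```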
# E. Criteria for Continuation of Less Frequent Exposure Monitoring

After completion of two additional sampling days during the subsequent 26-week period, the data from the last 12 days sampled must be used to compute a new UCL for the period in which the 12 sampling days occurred.

1. Sampling may continue under the less frequent sampling schedule (i.e., 2 days per 26-week block of time) if both of the following results occur at a station: (a) the UCL for the reference period from which the last 12 sampling days were taken is < 1/12 WL, and (b) the estimated average work shift concentrations on the last two of the 12 sampling days (â11 and â12) were both < 0.14 WL. In this case, an updated UCL shall be recomputed after completion of sampling in each subsequent 26-week block of time to determine whether less frequent sampling (i.e., on two days during a 26-week period) should be continued according to the criteria of this part. If either of these conditions is not met, then LCL must be computed from data obtained from the last 12 days sampled (see section E,2, which follows).

2. If LCL for the reference period from which the 12 sampling days were taken at a station exceeds 1/12 WL, then: (a) steps shall be taken to reduce the radon progeny concentration in that work area by implementing work practices and engineering controls, (b) respiratory protection shall be required for all miners entering that work area, and (c) grab sampling as described in section C,2 shall be conducted on a consecutive daily basis. Grab sampling shall continue on a consecutive daily basis until the estimated average work shift concentrations on any two consecutive workdays (â1 and â2) are both < 0.10 WL. When â1 and â2 are both < 0.10 WL, the requirements for respiratory protection are waived and exposure monitoring can revert to the schedule described starting at section C,1.

3. If LCL for the reference period from which the 12 sampling days were randomly taken is < 1/12 WL, but the estimated average work shift concentration determined for either of the last two of the 12 sampling days (â11 or â12) exceeds 0.14 WL, then monitoring at that station shall return to the more frequent sampling schedule (2 days per 2-week block of time). In this case, â11 or â12 becomes â1, and sampling is required on the next workday to obtain â2, as described starting at section C,4,a.

# F. Criteria for Cessation of Exposure Monitoring

Sampling can be discontinued at a station if both of the following results occur at that station: (1) the UCL for the reference period from which 12 sampling days were taken is < 0.063 WL, and (2) the estimated average work shift concentration for the last of the 12 sampling days (â12) is < 0.033 WL. However, sampling should return to the regular schedule, as described starting at section C,1, if an environmental change or a change in mining operations occurs that may alter radon progeny concentrations in that work area.

# APPENDIX V

In recommending medical evaluation criteria for respirator use, one should apply rigorous decision-making principles; tests used should be chosen for operating characteristics such as sensitivity, specificity, and predictive value. Unfortunately, many knowledge gaps exist in this area. The problem is complicated by the large variety of respirators, their conditions of use, and individual differences in the physiologic and psychologic responses to them. For these reasons, the following guidelines are to be considered informed suggestions rather than established NIOSH policy recommendations.
They are intended primarily to assist the physician in developing medical evaluation criteria for respirator use.

# A. Background Information

# 1. Pulmonary Effects

In general, the added inspiratory and expiratory resistances and dead space of most respirators cause an increase in tidal volume and a decrease in respiratory rate and ventilation (including a small decrease in alveolar ventilation). These respirator effects have usually been small, both among healthy individuals and, in limited studies, among individuals with impaired lung function. This generalization is applicable to most respirators when resistances (particularly expiratory resistance) are low. While most studies report minimal physiologic effects during submaximal exercise, the resistances commonly lead to reduced endurance and reduced maximal exercise performance. The dead space of a respirator (reflecting the amount of expired air that must be rebreathed before fresh air is obtained) tends to cause increased ventilation. At least one study has shown substantially increased ventilation with a full-face respirator, a type that can have a large effective dead space. However, the net effect of a respirator's added resistances and dead space is usually a small decrease in ventilation. The potential for adverse effects, particularly decreased cardiac output, from the positive-pressure feature of some respirators has been investigated. Raven et al. found statistically significantly higher systolic and/or diastolic blood pressures during exercise for persons wearing respirators. Arborelius et al. did not find significant differences for persons wearing respirators during exercise.

# 2. Body Temperature Effects

Proper regulation of body temperature is primarily of concern with the closed-circuit SCBA that produces oxygen via an exothermic chemical reaction. Inspired air within these respirators may reach 120°F (49°C), thus depriving the wearer of a minor cooling mechanism and causing discomfort. Obviously this can be more of a problem with heavy exercise and when ambient conditions and/or protective clothing further reduce the body's ability to lose heat. The increase in heart rate because of increasing temperature represents an additional cardiac stress. Closed-circuit breathing units of any type have the potential for causing heat stress, since warm expired gases (after exothermic carbon dioxide removal with or without oxygen addition) are rebreathed. Respirators with large dead spaces also have this potential problem, again because of partial rebreathing of warmed expired air.

# 3. Sensory Effects

Respirators may reduce visual fields, decrease voice clarity and loudness, and decrease hearing ability. Besides the potential for reduced productivity, these effects may result in reduced industrial safety. These factors may also contribute to a general feeling of stress.

# 4. Psychologic Effects

This important topic is discussed in recent reviews by Morgan [Morgan 1983a, 1983b]. There is little doubt that virtually everyone suffers some discomfort when wearing a respirator. The large variability and the subjective nature of the psycho-physiologic aspects of wearing a respirator, however, make studies and specific recommendations difficult. Fit testing obviously serves an important additional function by providing a trial to determine if the wearer can psychologically tolerate the respirator. The great majority of workers can tolerate respirators, and experience in wearing them aids in this tolerance.
However, some individuals are likely to remain psychologically unfit for wearing respirators.

# 5. Local Irritation Effects

Allergic skin reactions may occur occasionally from wearing a respirator, and skin occlusion may cause irritation or exacerbation of preexisting conditions such as pseudofolliculitis barbae. Facial discomfort from the pressure of the mask may occur, particularly when the fit is unsatisfactory.
Table I-1. Respirator recommendations for radon progeny
(Credit factors for respirator use are given in parentheses as: factor at 65% utilization; factor at 90% utilization.)

Average work shift concentration of radon progeny (WL)* and recommended respirators:

0 to 0.083 (1/12):
  No respirator required (NA†; NA)

>0.083 to ≤0.42:
  Any disposable respirator equipped with a HEPA§ filter (2.1; 3.6)
  Any more protective respirator**

>0.42 to ≤0.83:
  Any air-purifying half-mask respirator equipped with a HEPA filter (2.4; 5.3)
  Any SAR†† equipped with a half mask and operated in a demand (negative-pressure) mode (2.4; 5.3)
  Any more protective respirator**

>0.83 to ≤2.08:
  Any PAPR‡ equipped with a hood or helmet and a HEPA filter (2.7; 7.4)
  Any SAR equipped with a hood or helmet and operated in a continuous-flow mode (2.7; 7.4)
  Any more protective respirator**

>2.08 to ≤4.15:
  Any air-purifying, full-facepiece respirator equipped with a HEPA filter (2.8; 8.5)
  Any PAPR equipped with a tight-fitting facepiece and a HEPA filter (2.8; 8.5)
  Any SAR equipped with a full facepiece and operated in a demand (negative-pressure) mode (2.8; 8.5)
  Any SAR equipped with a tight-fitting facepiece and operated in a continuous-flow mode (2.8; 8.5)
  Any self-contained breathing apparatus (SCBA) equipped with a full facepiece and operated in a demand (negative-pressure) mode (2.8; 8.5)
  Any more protective respirator**

>4.15 to ≤83.0:
  Any SAR equipped with a half mask and operated in a pressure-demand or other positive-pressure mode (2.9; 9.9)
  Any more protective respirator**

>83.0 to ≤166.0:
  Any SAR equipped with a full facepiece and operated in a pressure-demand or other positive-pressure mode (2.9; 10.0)
  Any more protective respirator**

>166.0, unknown concentration, or emergency entry:
  Any SCBA equipped with a full facepiece and operated in a pressure-demand or other positive-pressure mode (2.9; 10.0)
  Any SAR equipped with a full facepiece and operated in a pressure-demand or other positive-pressure mode, in combination with an auxiliary SCBA operated in a pressure-demand or other positive-pressure mode (2.9; 10.0)

Emergency escape:
  Any self-contained self-rescuer (SCSR) (NA; NA)

*As estimated using the sampling techniques described in Appendix IV.
†NA = not applicable.
§HEPA = high-efficiency particulate air.
**See the appropriate credit factors for the class selected.
††SAR = supplied-air respirator.
‡PAPR = powered air-purifying respirator.
Adapted from Schiager et al. [1981].

[Table: comparison of lung cancer mortality in the major epidemiologic studies of underground miners. These studies contain limitations in study design, radon progeny exposure records, smoking history information, and follow-up; comparisons between them, especially for purposes of risk assessment, should be made cautiously. Some p-values were estimated from the observed lung cancer deaths and the Poisson frequency distribution; rate ratios depend on lung cancer mortality in the comparison population and are sensitive to error in rates based on a small number of expected deaths.]
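To make the structure of Table I-1 concrete, the sketch below encodes its concentration bands as a simple lookup that returns the least protective acceptable class; any more protective respirator is also acceptable. The function and its abbreviated descriptions are illustrative only and are not part of the recommended standard.

```python
def recommended_respirator(avg_wl):
    """Least protective respirator class recommended by Table I-1 for a
    given average work shift concentration of radon progeny (WL)."""
    bands = [
        (0.083, "No respirator required"),
        (0.42,  "Disposable respirator with HEPA filter"),
        (0.83,  "Half-mask air-purifying respirator with HEPA filter, or "
                "demand-mode SAR with half mask"),
        (2.08,  "PAPR with hood or helmet and HEPA filter, or "
                "continuous-flow SAR with hood or helmet"),
        (4.15,  "Full-facepiece air-purifying respirator with HEPA filter "
                "(or equivalent PAPR, SAR, or demand-mode SCBA)"),
        (83.0,  "Pressure-demand SAR with half mask"),
        (166.0, "Pressure-demand SAR with full facepiece"),
    ]
    for upper_bound, recommendation in bands:
        if avg_wl <= upper_bound:
            return recommendation
    # Above 166 WL, unknown concentrations, or emergency entry
    return ("Pressure-demand SCBA with full facepiece, or pressure-demand "
            "SAR with full facepiece plus auxiliary positive-pressure SCBA")

print(recommended_respirator(0.3))  # disposable-with-HEPA band
```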
# FOREWORD

As Director of the National Institute for Occupational Safety and Health (NIOSH), I am accustomed to making decisions on difficult issues, but few issues have presented the legislative, scientific, and public health dilemmas that accompany recommending criteria to control the exposure of workers to radon progeny in underground mines.

The development of this criteria document is subject to the provisions of two legislative mandates. First, the Occupational Safety and Health Act of 1970 [Public Law (PL) 91-596, which established NIOSH] requires safe and healthful working conditions for every working person. The Act further requires NIOSH to preserve our human resources by providing medical and other criteria that will ensure, insofar as practicable, that no worker will suffer diminished health, functional capacity, or life expectancy as a result of work experience [PL 91-596, Section 6(b)(5)]. The Act also authorizes NIOSH to recommend new criteria to further improve working conditions [Sections 22(c) and (d)]. In addition, the Federal Coal Mine Health and Safety Act of 1969 and the Federal Mine Safety and Health Amendments Act of 1977 require NIOSH to develop and revise recommended occupational safety and health standards for mine workers. Specifically, the Secretary of Health, Education, and Welfare (now the Secretary of Health and Human Services) is required to consider, "in addition to the attainment of the highest degree of health protection for the miner . . . the latest available scientific data in the field, the technical feasibility of the standards, and experience gained under this and other health statutes" [Title I, Section 101(d)]. These mandates have required NIOSH to weigh its obligation to assure the highest degree of health protection for miners against the technical feasibility of the recommended standard in the development of recommendations for controlling radon progeny exposure in underground mines.

The control of exposure to radon progeny presents an unprecedented problem because of the ubiquitous yet variable nature of their presence in mines and the ambient environment. To complicate this matter further, recent reports indicate that an exposure-related health risk may exist at background exposure levels. The full ramifications of this dilemma can easily be appreciated by considering two points. The first is that dilution ventilation (the primary engineering approach to reducing the concentration of radon progeny in mines) is accomplished by the exchange of mine air with air from the outside environment. Obviously, this approach is not a viable option for the total elimination of radon progeny in underground mines because the outside air is also contaminated with radon progeny. In addition, this approach would not be a prudent community environmental public health measure in some situations because it involves releasing an additional burden of radon progeny to the ambient environment and thereby contributing to the background level in the immediate area of the mine.
Thus ventilation cannot be used to totally eliminate exposure to radon progeny in mines. The second point to consider in this dilemma is that the variable nature of radon progeny exposure in the ambient environment precludes recommending an annual cumulative exposure limit that includes both occupational and ambient contributions. Because ambient exposure varies, such a recommendation would result in an occupational exposure limit and associated risk that would vary with the locale. This approach is obviously undesirable, for it would lead to a nightmare of confusion and complicated enforcement requirements and would probably result in unequal protection of miners.

Data from both human and animal studies clearly demonstrate a direct link between lung cancer and radon exposure. Specific epidemiological studies provide a basis for quantitatively estimating human risk at various exposure levels. Such analyses clearly show that a radon exposure of 4 WLM (4 working level months) per year over a 30-year working lifetime (the current Mine Safety and Health Administration [MSHA] standard) poses a significant and unacceptable risk of lung cancer. This risk must be substantially reduced.

In recommending an exposure limit for radon progeny, NIOSH considered not only the results of its own risk assessment and the technical feasibility of the recommended standard, but also the uncertainty of the data available on risk. Uncertainties are inherent in both the risk assessment methods and the scientific data on which the risk assessment is based. This fact must be understood and acknowledged. Some of the factors involved in these uncertainties include the choice of risk assessment method and model, the measurement methods used for data collection, and risk estimates derived from data that are heavily weighted with higher exposures.

The first of these factors in risk uncertainty involves the choice of a risk assessment method and/or model (such as the Cox proportional hazards model used in the NIOSH risk assessment study). NIOSH has attempted to develop a mathematical model that best describes the lung cancer risk in miners exposed to radon progeny. The use of a risk assessment model is merely a practical way to work with a very complex problem. There are modeling approaches other than the one chosen for this study. Each choice would result in a somewhat different description of the relationship between radon progeny exposure and lung cancer risk. NIOSH has attempted to compare the alternatives that are available and applicable. NIOSH scientists have considered the differences that might arise through a review of the available scientific literature and discussions with other scientists who have evaluated this exposure-related lung cancer risk. Although alternative models might yield minimally different quantitative risk estimates, none of them would lead the Institute to a qualitatively different risk assessment (i.e., that exposures to radon progeny at the current standard are associated with excesses of lung cancer).

The second factor involved in risk uncertainty is the measurement method used for data collection. This study involves a follow-up period of more than 35 years, more than 3,000 miners, and thousands of measurements. The older data are subject to greater uncertainty than the more recent data because of improvements in the entire measurement process over the course of the study.

The third factor involved in risk uncertainty is the process of generating risk estimates at lower exposure levels.
One consideration is that such risk estimates are derived from data heavily weighted by higher exposures (note that the annual cumulative exposures of most miners in this study are higher than either the current MSHA standard or the proposed NIOSH recommended exposure limit [REL]). Another consideration is the desirability of placing occupational risk in the context of background exposure risk. However, the latter has not been evaluated and would have to be estimated on the basis of occupational data. We therefore do not believe that it is currently possible to contrast these two types of risks. Nonetheless, EPA has generated some initial information on background exposure risk in A Citizen's Guide to Radon. This document indicates that action should be taken to lower radon progeny levels in homes with measured concentrations of 0.02 WL or greater. NIOSH estimates that this concentration would probably result in a cumulative exposure that is less than 1 WLM but within an order of magnitude of that value. New information is clearly needed on background exposure levels and the hazards associated with such exposure before occupational and nonoccupational risks can be reliably quantified and validly contrasted. Until these data are available, the final target exposure limits cannot be identified for control of this hazard in our total environment.

The uncertainties in the data and a recent study commissioned by the Bureau of Mines on the feasibility of controlling radon progeny levels in mines have been weighed along with the available evidence and the obligations of NIOSH. This process has resulted in an REL of 1 WLM per year. Our own quantitative risk assessment clearly shows that significant health risks are posed by an exposure level of 1 WLM per year over a 30-year working lifetime. NIOSH therefore regards this REL as an upper limit and further recommends that mine operators limit exposure to radon progeny to the lowest levels possible.

In addition, NIOSH wishes to emphasize that this recommended standard contains many important provisions in addition to the annual exposure limit. These include recommendations for limits on work shift concentrations of radon progeny, sampling and analytical methods, recordkeeping, medical surveillance, posting of hazard information, respiratory protection, worker education and notification, and sanitation. All of these recommendations help minimize risk.

In summary, NIOSH has the legislative, scientific, and public health responsibility to protect the health of miners by developing recommendations that eliminate or minimize occupational risks. Although I am approving the recommended exposure limit of 1 WLM per year, I do not feel that this part of the recommended standard fully satisfies the Institute's commitment to protect the health of all of the Nation's miners. Future research may provide evidence of new and more effective methods for reducing occupational exposures to radon progeny, more reliable risk estimates at low exposure levels, and improved risk assessment methods. If new information demonstrates that a lower exposure limit constitutes both prudent public health and a feasible engineering policy, NIOSH will revise its recommended standard.
# I. RECOMMENDATIONS FOR A RADON PROGENY STANDARD

The National Institute for Occupational Safety and Health (NIOSH) recommends that worker exposure to radon progeny in underground mines be controlled by compliance with this recommended standard, which is designed to protect the health of underground miners over a working lifetime of 30 years. Mine operators should regard the recommended exposure limit for radon progeny as the upper boundary for exposure; they should make every effort to limit radon progeny to the lowest possible concentrations. This recommended standard will be reviewed and revised as necessary.

Radon progeny (also known as radon daughters) are the short-lived decay products of radon, an inert gas that is one of the natural decay products of uranium. The short-lived radon progeny (i.e., polonium-218, lead-214, bismuth-214, and polonium-214) are solids and exist in air as free ions or as ions attached to dust particles.

The NIOSH recommended exposure limit (REL) is based on (1) evidence that a substantial risk of lung cancer is associated with occupational exposure to radon progeny, and (2) the technical feasibility of reducing exposures. In this document, NIOSH presents recommendations that will protect miners employed year-round at any mine work area for as long as 30 years (the period of time used by MSHA as a miner's working lifetime). The exposure limit contained in this recommended standard is measurable by techniques that are valid, reproducible, and available to industry and government agencies. NIOSH has concluded that current technology is sufficient to achieve compliance with the recommended standard. Because knowledge of the carcinogenic process is incomplete and no data exist to demonstrate a safe level of exposure to carcinogens, NIOSH maintains that occupational exposure to carcinogens such as radon progeny should be reduced to the lowest level technically achievable. Compliance with this standard does not relieve mine operators from complying with other applicable standards.

# Section 1 - Definitions

# (a) Miner

Miners include all mine personnel who are involved with any underground operation (e.g., drilling, blasting, haulage, and maintenance).

# (b) Working Level

One working level (WL) is any combination of short-lived radon progeny in 1 liter (L) of air that will ultimately release 1.3 x 10^5 million electron volts (MeV) of alpha energy during decay to lead-210.

# (c) Working Level Month

A working level month (WLM) is the product of the radon progeny concentration in WL and the exposure duration in months. For example, if a miner is exposed at a concentration of 0.083 WL for 1 month (170 hours [hr]),* then the cumulative exposure for the month is 0.083 WLM. If the cumulative exposure of the same miner is 0.083 WLM for each of 12 consecutive months (2,040 hr), then the cumulative exposure for the year is 1 WLM.

# (d) Work Area

A work area is any stope, drift heading, travelway, haulageway, shop, station, lunchroom, or any other underground location where miners work, travel, or congregate.

# (e) Average Work Shift Concentration

The average work shift concentration is the average concentration of radon progeny present during a work shift in a given area. This concentration is used to represent the miner's breathing zone exposure to radon progeny.
# Section 2 - Environment (Workplace Air)

# (a) Recommended Exposure Limit (REL)

Exposure to radon progeny in underground mines shall not exceed 1 WLM per year, and the average work shift concentration shall not exceed 1/12 of 1 WL (or 0.083 WL). The REL of 1 WLM per year is an upper limit of cumulative exposure, and every effort shall be made to reduce exposures to the lowest levels possible.

# (b) Sampling and Analysis

Grab samples for radon progeny in the workplace shall be taken and analyzed using working level monitors, the Kusnetz method, or any other method at least equivalent in accuracy, precision, and sensitivity. Sampling and analytical methods are described in Chapter II. Details of the recommended sampling strategy are contained in Appendix IV. The recommended sampling strategy allows the use of grab samples for estimating the average work shift concentration of radon progeny.

# Section 3 - Monitoring and Recording Exposures

# (a) Exposure Monitoring

All operators of underground mines shall perform environmental evaluations in all work areas to determine exposures to radon progeny.

(1) An initial environmental evaluation shall be conducted in each work area to determine the average work shift concentration of radon progeny.

(2) Periodic environmental evaluations shall be conducted at intervals (as described in Appendix IV) in each work area. An alternative sampling strategy may be used if the mine operator can demonstrate that it effectively monitors exposure to radon progeny.

(3) If environmental monitoring in a work area indicates that the average work shift concentration of radon progeny exceeds 1/12 WL (as described in Appendix IV), the mine operator shall prepare an action plan describing the types of engineering controls and work practices that will be implemented to reduce the average work shift concentration in that area.

*Note that Mine Safety and Health Administration (MSHA) regulations are based on 173 hr per month.

# (b) Exposure Monitoring Records

The mine operator shall determine and record the exposure to radon progeny. Each miner's exposure shall be calculated using monitoring data obtained for the areas in which the miner worked. These records shall include (1) locations, dates, and times of measurements, (2) sampling and analytical methods used, (3) the number, duration, and results of the samples taken, and (4) all items required by Sections 3(b)(2) and (3). All records shall be retained at the mine site or nearest mine office as described in Section 10.

(1) Calculating the Miner's Daily Exposure

The average work shift concentration of radon progeny for each work area shall be used to calculate each miner's daily exposure. If no monitoring has been conducted in a work area on a particular day, the daily average work shift concentration for that area shall be determined by averaging the results obtained on the last day of monitoring with the results from the next day that monitoring is conducted. A miner's exposure (in WLM) for a given area is calculated as follows:

WLM = (WL x T) / 170

where WL is the average work shift concentration of radon progeny, T is the total time (in hours) spent in the area, and 170 is the number of hours worked per month. A miner's total cumulative exposure for the year is the sum of the daily exposures (as calculated above) for all work areas in which time was spent during the work shift.
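For illustration, the calculation above can be written out directly. The sketch below is a minimal rendering of the WLM formula and the annual summation; the function names are ours, not the standard's.

```python
def daily_wlm(avg_wl, hours_in_area):
    """Daily exposure (WLM) for one work area: WLM = WL x T / 170,
    where T is hours spent in the area and 170 is hours per month."""
    return avg_wl * hours_in_area / 170.0

def annual_wlm(daily_records):
    """Cumulative annual exposure: the sum of daily exposures over all
    (average WL, hours) records for the year."""
    return sum(daily_wlm(wl, hours) for wl, hours in daily_records)

# Example: a full year at the 1/12-WL ceiling (240 shifts x 8.5 hr
# = 2,040 hr) accumulates exactly the 1.0-WLM annual REL.
year = [(1 / 12, 8.5)] * 240
print(round(annual_wlm(year), 3))  # 1.0
```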
(2) Uranium Mines

Exposure to radon progeny shall be recorded daily for each uranium miner. These records shall include the miner's name, social security number, the time spent in each work area, estimated exposure to radon progeny for each work area as determined in Section 3(b)(3), and (if applicable) the type of respiratory protection and duration of its use.

(3) Nonuranium Mines

Exposure to radon progeny shall be recorded daily for all miners assigned to work in areas where environmental monitoring for radon progeny is required as described in Appendix IV. These exposure monitoring records shall include the miner's name, social security number, the time the miner has spent in each work area, estimated exposure to radon progeny for each work area as determined in Section 3(b)(3), and (if applicable) the type of respiratory protection used and the duration of its use.

(4) Respirator Credit

The type of respirator worn and the credit given for wearing it (see Section 7) shall be recorded for each miner. Mine operators shall record both the average work shift concentration of radon progeny and the adjusted exposure concentration calculated by using the respirator credit. The adjusted exposure concentration shall be used to determine the miner's cumulative exposure for compliance with the REL of 1.0 WLM/year.

# Section 4 - Medical Surveillance

# (a) General

(1) The mine operator shall institute a medical surveillance program for all miners.

(2) The mine operator shall ensure that all medical examinations and procedures are performed by or under the direction of a licensed physician.

(3) The mine operator shall provide the required medical surveillance at a reasonable time and place without loss of pay or cost to the miners.

(4) The mine operator shall provide the following information to the physician performing or responsible for the medical surveillance program: a copy of the radon progeny standard, the miner's duration of employment, the miner's cumulative exposure to radon progeny (or an estimate of potential exposure to radon progeny if the miner is a new employee), a description of the miner's duties as they relate to his exposure, and a description of any protective equipment the miner has used or may be required to use.

(5) The mine operator or physician shall counsel tobacco-smoking miners about their increased risk of developing lung cancer from the combined exposure to tobacco smoke and radon progeny. The mine operator or physician shall encourage the miner to participate in a smoking cessation program. The mine operator shall enforce a policy prohibiting smoking at the mine site.

(6) The physician shall provide the mine operator and the miner with a written statement describing any medical conditions found during the preplacement or periodic medical examinations that may increase the miner's health risk when exposed to radon progeny. This written statement shall not reveal specific findings, but shall include any recommended limitations on the miner's exposure to radon progeny or ability to use respirators and other personal protective equipment.

# (b) Preplacement Medical Examination

The preplacement medical examination of each miner shall include the following:

(1) A comprehensive medical and work history (including smoking history) that emphasizes the identification of existing medical conditions and attempts to elicit information about previous occupational exposure to radon progeny.

(2) A thorough examination of the miner's respiratory system, including pulmonary function tests.
The initial and subsequent pulmonary function tests shall include determination of forced vital capacity (FVC) and forced expiratory volume in 1 second (FEV1) using the current American Thoracic Society (ATS) recommendations on instrumentation, technician training, and interpretation. A prospective miner with symptomatic, spirometric, or radiographic evidence of pulmonary impairment should be counseled about the risks of continued exposure.

(3) A postero-anterior chest X-ray using the current ATS recommendations on instrumentation, technician training, and interpretation.

(4) Other tests deemed appropriate by the physician.

# (c) Periodic Medical Examination

The periodic medical examination for each miner shall include the following:

(1) An annual update of medical and work histories (including smoking history).

(2) An evaluation of the miner's respiratory system. Because of the potential for chronic respiratory disease, this evaluation shall include spirometry at intervals determined by the physician. Miners who have spirometric or radiographic evidence or symptoms of pulmonary impairment should be counseled by the physician regarding the risks of continued exposure.

(3) A postero-anterior chest X-ray at intervals determined by the physician using the current ATS recommendations on instrumentation, technician training, and interpretation. Periodic chest X-rays are recommended for monitoring miners exposed to fibrogenic respiratory hazards (e.g., quartz). Ordinarily, chest X-rays may be obtained every 5 years for the first 15 years of employment and every 2 years thereafter, depending on the nature and intensity of exposures and their related health risks. A recent X-ray obtained for other purposes (e.g., upon hospitalization) may be substituted for the periodic X-ray if it is of acceptable quality.

(4) Other tests deemed appropriate by the physician.

# Section 5 - Posting

All warning signs shall be printed in both English and the predominant language of non-English-reading miners. Miners unable to read the posted signs shall be informed verbally about the hazardous areas of the mine and the instructions printed on the signs.

(a) Readily visible signs containing the following information shall be posted at mine entrances or in work areas that require environmental monitoring for radon progeny as described in Appendix IV:

AUTHORIZED PERSONNEL ONLY
DANGER! POTENTIAL RADIATION HAZARD
RADON PROGENY

(b) If respiratory protection is required, the following statement shall be added in large letters to the sign required in Section 5(a):

RESPIRATORY PROTECTION REQUIRED IN THIS AREA

# Section 6 - Work Practices and Engineering Controls

Effective work practices and engineering controls shall be instituted by the mine operator to reduce the concentration of radon progeny to the lowest technically achievable limit. Since there is no typical mine and each operation has some unique features, the work practices and engineering controls in this section may need to be adapted for use in particular situations.

Examples of effective ore extraction and handling procedures include the following: minimizing the number of ore faces simultaneously exposed, performing retreat mining toward intake air, limiting the underground storage and handling of ore, locating ore transfer points away from ventilation intakes, removing dust spilled from ore cars, minimizing ore spillage by maintaining roadways and carefully loading haulage vehicles, and covering ore until it is moved to the surface.
(2) Blasting

Blasting should be performed at the end of the work shift whenever possible. Miners shall be evacuated from exhaust drifts until environmental sampling confirms that the average work shift concentration of radon progeny does not exceed 1/12 WL. Refer to Section 7 if respiratory protection is required for subsequent reentry.

(3) Worker Rotation

The mine operator shall not use the planned rotation of miners to maintain an individual's exposure below the REL of 1.0 WLM per year. NIOSH acknowledges, however, that some miners may inadvertently be exposed to short-term high concentrations of radon progeny. For example, such exposures may occur when engineering controls fail. To ensure that the miners' cumulative exposure remains below the REL in such circumstances, it may be necessary to transfer them to other jobs or work areas that have lower concentrations of radon progeny. Miners transferred under these circumstances shall retain their pay as prescribed for coal miners under Section 203(b) of the Federal Mine Safety and Health Act of 1977.

# (b) Engineering Controls

Mechanical exhaust ventilation used alone or in combination with other engineering controls and work practices can effectively reduce exposures to radon progeny. Ventilation systems discharging outside the mine shall conform with applicable local, State, and Federal [40 CFR* 61, Subpart B] air pollution regulations and shall not constitute a hazard to miners or to the general population.

(1) Ductwork shall be kept in good repair to maintain designed airflows. The effectiveness of mechanical ventilation systems shall be determined periodically and as soon as possible after any significant changes have been made in production or control. A log shall be kept showing designed airflow and the results of all airflow measurements.

(2) Fans shall be operated continuously in the work areas of an active mine and before the opening of a previously inactive mine or inactive section until environmental sampling confirms that the average work shift concentrations of radon progeny do not exceed 1/12 WL. Refer to Section 7 if respiratory protection is required.

(3) Fresh air shall be provided to miners in dead-end areas near the working faces.

(4) Bulkheads, backfill, and sealants shall be used to control exposures as appropriate. Appendix III provides a general discussion of engineering control methods.

*Code of Federal Regulations. See CFR in references.

# Section 7 - Respirator Selection and Credit for Respirator Use

# (a) General Considerations

NIOSH has determined that a radon progeny exposure limit of 1.0 WLM per year is technically achievable in mines through the use of effective work practices and engineering controls. Over a 30-year working lifetime, this exposure limit will reduce but not eliminate the risk of lung cancer associated with exposure to radon progeny. NIOSH considers respirators to be one of the last options for worker protection. Work practices and engineering controls are more effective means for limiting exposures and providing a safe environment for all workers. Respirator use in underground mines is not always practical for a number of reasons, including the additional physiological burden and safety hazards respirators pose. NIOSH therefore recommends that engineering controls and work practices be used where technically achievable to control the exposure of miners to radon progeny.
Compliance with an exposure limit of 1.0 WLM per year requires an average exposure of 1/12 WL throughout the year to ensure that the miner can work for an entire year (i.e., 2,040 hr). For average work shift concentrations above 1/12 WL, NIOSH recommends mandatory respirator use as well as the implementation of engineering controls and work practices to reduce exposure to radon progeny. Occupational exposure to radon progeny above background concentrations has been associated with excess lung cancer risk. Therefore, regardless of the exposure concentration, NIOSH advises the use of respirators to further reduce exposure and decrease the risk of lung cancer.

Respiratory protection shall be used by miners (1) when work practices and engineering controls are not adequate to limit average work shift concentrations of radon progeny to 1/12 WL, (2) when entering a mine area where concentrations of radon progeny are unknown, or (3) during emergencies. Use only those respirators approved by NIOSH or the Mine Safety and Health Administration (MSHA).

# (b) Respirator Protection Program

Whenever respirators are used, a complete respiratory protection program shall be instituted. This program must follow the recommendations contained in ANSI Z88.2-1969 (published by the American National Standards Institute) and the respirator-use criteria in 30 CFR 57.5005. The respiratory protection program described in ANSI Z88.2-1969 requires the following:

(1) A written program for respiratory protection that contains standard operating procedures governing the selection and use of respirators.

(2) Periodic worker training in the proper use and limitations of respirators.

(3) Evaluation of working conditions in the mine.

(4) An estimate of anticipated exposure.

(5) An estimate of the physical stress that will be placed on the miner. A detailed medical examination of each miner shall be conducted according to the guidelines set forth in Appendix V.

(6) Routine inspection, maintenance, disinfection, proper storage, and evaluation of respirators.

(7) Information concerning the manufacturers' instructions for respirator fit-testing and proper use.

# (c) Respirator Selection

NIOSH makes the following recommendations for respirator selection:

(1) A respirator is not required for exposure to average work shift concentrations less than or equal to 1/12 WL.

(2) For exposure to average work shift concentrations greater than 1/12 WL, NIOSH recommends those respirators listed in Table I-1.

(3) For entry into areas where radon progeny concentrations are unknown or exceed 166 WL, or for emergency entry, NIOSH recommends only the most protective respirators (any full-facepiece, positive-pressure, self-contained breathing apparatus [SCBA] or full-facepiece, positive-pressure, supplied-air respirator and SCBA combination).

These recommendations are based on the fact that radon progeny exist as particulates and that miners are not exposed to hazardous concentrations of nonparticulate contaminants. If protection against nonparticulate contaminants is required, different types of respirators must be selected.

# (d) Credit for Respirator Use

When respirators are worn properly, the miner's average work shift exposure can be reduced by a factor that depends on the class of respirator worn. Table I-1 provides the credit factors for the various classes of respirators. For example, if a miner wears a helmet-type, powered, air-purifying respirator (PAPR) for 65% of the work shift and the radon progeny concentration in the work area is 0.3 WL, then the miner's exposure can be adjusted by dividing 0.3 WL by 2.7, the credit factor for this class of respirator. This results in an adjusted exposure of 0.11 WL for that miner. Respirator credit is discussed in detail in Chapter II.
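The adjustment in the example above is a single division; the sketch below renders it as code. The function name is illustrative, and the credit factor must be taken from Table I-1 for the respirator class and utilization actually achieved.

```python
def adjusted_exposure_wl(avg_wl, credit_factor):
    """Adjusted exposure concentration (WL) after respirator credit:
    the measured average work shift concentration divided by the
    credit factor for the respirator class and utilization."""
    return avg_wl / credit_factor

# Worked example from the text: helmet-type PAPR at 65% utilization
# (credit factor 2.7 per Table I-1) in a 0.3-WL work area.
print(round(adjusted_exposure_wl(0.3, 2.7), 2))  # 0.11 WL
```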
# Section 8 - Informing Workers of the Hazards of Radon Progeny

# (a) Notification of Hazards

The mine operator shall provide all miners with information about workplace hazards before job assignment and at least annually thereafter.

# (b) Training

(1) The mine operator shall institute a continuing education program conducted by persons with expertise in occupational safety and health. The purpose of this program is to ensure that all miners have current knowledge of workplace hazards, effective work practices, engineering controls, and the proper use of respirators and other personal protective equipment. This program shall also include a description of the general nature of the environmental and medical surveillance programs and the advantages of participating in them. This information shall be kept on file and be readily available to miners for examination and copying. The mine operator shall maintain a written plan of these training and surveillance programs.

(2) Miners shall be instructed about their responsibilities for following proper work practices and sanitation procedures necessary to protect their health and safety.

# Section 9 - Sanitation

# (a) Eating and Drinking

The preparation, storage, dispensing (including vending machines), or consumption of food shall be prohibited in any area where a toxic material is present. The mine operator shall provide facilities so that miners can wash their hands and faces thoroughly with soap or mild detergent and water before eating or drinking.

# (b) Smoking

Smoking shall be prohibited in underground work areas.

# (c) Toilet Facilities

The mine operator shall provide an adequate number of toilet facilities and encourage the miners to wash their hands thoroughly with soap or mild detergent and water before and after using these facilities.

# (d) Change Rooms

(1) The mine operator shall provide clean change rooms for the miners.

(2) The mine operator shall provide storage facilities such as lockers to permit the miners to store street clothing and personal items.

# (e) Showers

The mine operator shall provide showers and encourage the miners to shower at the end of the work shift.

# (f) Laundering

(1) The mine operator shall provide for the cleaning, laundering, or disposal of contaminated work clothing and equipment.

(2) The mine operator shall ensure that contaminated work clothing or equipment that is to be cleaned, laundered, or disposed of is placed in a closed container to prevent dispersion of dust.

(3) Any person who cleans or launders this contaminated work clothing or equipment must be informed by the operator that it may be contaminated with radioactive materials.

# Section 10 - Recordkeeping Requirements

# (a) Record Retention

(1) The mine operator shall retain all records of the monitoring required in Section 3(b).

(2) All monitoring records shall be retained for at least 40 years after termination of employment.

(3) The mine operator shall retain the medical records required by Section 4. These records shall be retained for at least 40 years after termination of employment.
# (b) Availability of Records

The miner shall have access to his medical records and be permitted to obtain copies of them. Records shall also be made available to former miners or their representatives, and to the designated representatives of the Secretary of Labor and the Secretary of Health and Human Services.

# (c) Transfer of Records

(1) Upon termination of employment, the mine operator shall provide the miner with a copy of his records specified in Section 10(a).

(2) Whenever the mine operator transfers ownership of the mine, all records described in this section shall be transferred to the new operator, who shall maintain them as required by this standard.

(3) Whenever a mine operator ceases to do business and there is no successor, the mine operator shall notify the miners of their rights of access to those records at least 3 months before cessation of business.

(4) The Director of NIOSH shall be notified in writing before (a) a mine operator ceases to do business and there is no successor to maintain records, and (b) the mine operator intends to dispose of those records.

(5) No records shall be destroyed until the Director of NIOSH responds in writing to the mine operator.

# II. INTRODUCTION

# A. Scope

Radon is a gas that diffuses continuously from surrounding rock and broken ore into the air of underground mines, where it may accumulate; radon may also be carried into mines through groundwater containing dissolved radon [Snihs 1981]. Radon gas may be inhaled and immediately exhaled without appreciably affecting the respiratory tissues. However, when attached or unattached radon progeny are inhaled, they may be deposited on the epithelial tissues of the tracheobronchial airways. Alpha radiation may subsequently be emitted into those tissues from polonium-218 and polonium-214, thus posing a cancer risk to miners who inhale radon progeny.

This document presents the criteria and recommendations for an exposure standard that is intended to decrease the risk of lung cancer in miners occupationally exposed to short-lived, alpha-emitting decay products of radon (radon progeny) in underground mines. The REL for radon progeny applies only to the workplace and is not designed to protect the population at large. The REL is intended to (1) protect miners from the development of lung cancer, (2) be measurable by techniques that are valid, reproducible, and available to industry and government agencies, and (3) be technically achievable.

# B. Current Standard

MSHA has established radiation protection standards for workers in underground metal and nonmetal mines [30 CFR 57.5037 through 57.5047]. This standard limits a miner's radon progeny exposure to a concentration of 1.0 WL and an annual cumulative exposure of 4 WLM. Each WLM is determined as a 173-hr cumulative, time-weighted exposure [30 CFR 57.5040(6)]. Smoking is prohibited in all areas of a mine where radon progeny exposures must be determined; respiratory protection is required in areas where the concentration of radon progeny exceeds 1.0 WL. According to current MSHA regulations, the exhaust air of underground mines must be sampled to determine the concentration of radon progeny.
# Uranium Mines

If the concentration of radon progeny in the exhaust air of a uranium mine exceeds 0.1 WL, samples representative of a miner's breathing zone must be taken at random times every 2 weeks in each work area (i.e., stopes, drift headings, travelways, haulageways, shops, stations, lunchrooms, or any other place where miners work, travel, or congregate). If concentrations of radon progeny exceed 0.3 WL in a work area, sampling must be done weekly until the concentration has been reduced to 0.3 WL or less for 5 consecutive weeks. Uranium mine operators must calculate, record, and report to MSHA the radon progeny exposure of each underground miner. The records must include the miner's time in each work area and the radon progeny concentration measured in each of those areas.

# Nonuranium Mines

If the concentration of radon progeny in the exhaust air of a nonuranium mine exceeds 0.1 WL, and if concentrations are between 0.1 and 0.3 WL in an active working area, samples representative of a worker's breathing zone must be taken at least every 3 months at random times until the concentrations of radon progeny are less than 0.1 WL in that area. Samples must be taken annually thereafter. If the concentration of radon progeny exceeds 0.3 WL in a working area, samples must be taken at least weekly until the concentration has been reduced to 0.3 WL or less for 5 consecutive weeks. Operators of nonuranium mines must calculate, record, and report to MSHA the radon progeny exposures of miners assigned to areas with concentrations of radon progeny exceeding 0.3 WL. The records must include the miner's time in each work area and the radon progeny concentration measured in each of those areas.

# C. Uranium Decay Series

Figure II-1 shows the sequence by which the most abundant isotope of uranium (238U) decays to a radioactively stable isotope of lead (206Pb). Radon (222Rn) is an inert gas with a radiologic half-life of 3.8 days; it is a product of the natural decay of radium (226Ra). When radon decays, alpha particles and gamma radiation are emitted, and an isotope of polonium (218Po) is formed. Polonium-218 (218Po) and its decay products, lead-214 (214Pb), bismuth-214 (214Bi), and polonium-214 (214Po), are commonly referred to as short-lived radon progeny because they have half-lives of 27 minutes or less (see Figure II-1). Both polonium-218 and polonium-214 emit alpha particles as they decay. The short-lived progeny are solids and exist in air as free ions (unattached progeny) or as ions adsorbed to dust particles (attached progeny).

Because it is a gas, radon diffuses through rock or soil and into the air of underground mines, where it may accumulate; radon may also be carried into mines through groundwater containing dissolved radon [Snihs 1981]. Radon may be inhaled and immediately exhaled without appreciably affecting the respiratory tissues. However, when the radon progeny (either attached or unattached) are inhaled, they may be deposited in the epithelial tissues of the tracheobronchial airways, where alpha radiation from polonium-218 and polonium-214 may subsequently be emitted. The quantity of mucus in those airways and the efficiency of its clearance (retrograde ciliary action) into the esophagus are important factors that affect the total radiation absorbed at a specific site within the respiratory tract.

Alpha particles are energetic helium nuclei.
Alpha particles are energetic helium nuclei. As they pass through tissue, they dissipate energy by the excitation and ionization of atoms in the tissue; it is this process that damages cells. Because alpha particles travel less than 100 micrometers in tissue, intense ionization occurs close to the site of deposition of the inhaled alpha-emitting radon progeny. Beta particles (electrons) and gamma radiation (shortwave electromagnetic radiation) can also cause ionization in tissues, but they travel farther through tissues and dissipate less energy per unit path length than do alpha particles [Casarett 1968; Wang et al. 1975; Shapiro 1981]. The beta particles and gamma radiation emitted by radon progeny make a negligible contribution to the radiation dose in the lung [Evans 1969].
# D. Units of Measure
The common unit of radioactivity is the curie (Ci), which expresses the rate at which the atoms of a radioactive substance decay; 1 Ci equals 3.7 × 10¹⁰ disintegrations per second (dps). The picocurie (pCi) corresponds to 3.7 × 10⁻² dps. The International System of Units (SI) unit of radioactivity is the becquerel (Bq), which is equivalent to 1 dps. Therefore, 1 pCi is equivalent to 0.037 Bq.
When radon gas and radon progeny are inhaled, the radiation exposure is caused primarily by the short-lived radon progeny (polonium-218, lead-214, bismuth-214, and polonium-214, which are deposited in the lung) rather than by the radon gas. Because it was not feasible to routinely measure the individual radon progeny, the U.S. Public Health Service introduced the concept of the working level, or WL [Holaday et al. 1957]. The WL unit represents the amount of alpha radiation emitted from the short-lived radon progeny. One WL is any combination of short-lived radon progeny in 1 liter (L) of air that will ultimately release 1.3 × 10⁵ million electron volts (MeV) of alpha energy during decay to lead-210. The SI unit of measure for potential alpha energy concentration is joules per cubic meter of air (J/m³); 1 WL is equal to 2.08 × 10⁻⁵ J/m³ [ICRP 1981].
The equilibrium between radon gas and radon progeny must be known in order to convert units of radioactivity (Ci or Bq) to a potential alpha energy concentration (WL or J/m³). The equilibrium factor (F) is defined as the ratio of the equilibrium-equivalent concentration of the short-lived radon progeny to the actual concentration of radon in air [ICRP 1981]. As the equilibrium factor approaches 1.0, the concentration of radon progeny increases relative to the concentration of radon. At complete radioactive equilibrium (F=1.0), the rate of radon progeny decay equals the rate at which the progeny are produced; thus the radioactivity of the decay products equals the radioactivity of the radon [Shapiro 1981]. In underground mines, the equilibrium factor depends mainly on the ventilation rate and the aerosol concentration [Urban et al. 1985]. Values of F ranging from 0.08 to 0.65 are typical in underground mines [Breslin et al. 1969]. Radioactivity and potential alpha energy concentration values at various equilibria are presented in Table II-1.
The common unit of measure for human exposure to radon progeny is the working level month (WLM). One WLM is defined as the exposure of a worker to radon progeny at a concentration of 1.0 WL for a working period of 1 month (170 hr).* The SI unit for WLM is joule-hours per cubic meter of air (J·h/m³); 1 WLM is equal to 3.6 × 10⁻³ J·h/m³.
*Note that MSHA regulations are based on 173 hr per month.
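Because these unit definitions are purely multiplicative, the conversions can be collected in a few lines of code. The sketch below is illustrative only: the 100 pCi/L correspondence reflects the historical definition of the WL as the progeny in equilibrium with 100 pCi of radon-222 per liter of air (an assumption not spelled out in the text above), and the example concentrations are hypothetical.

```python
# Illustrative only: unit conversions among the radon quantities defined above.
PCI_TO_BQ = 0.037            # 1 pCi = 0.037 Bq
WL_TO_J_PER_M3 = 2.08e-5     # 1 WL = 2.08e-5 J/m3 [ICRP 1981]
WLM_TO_JH_PER_M3 = 3.6e-3    # 1 WLM = 3.6e-3 J-h/m3
HOURS_PER_WLM_MONTH = 170.0  # NIOSH working-month convention (MSHA uses 173)

def progeny_wl(radon_pci_per_liter: float, equilibrium_factor: float) -> float:
    """Potential alpha energy concentration (WL) from a radon gas concentration,
    using the historical correspondence that 1 WL equals the progeny in
    equilibrium with 100 pCi/L of radon-222 (so F = 1.0 at 100 pCi/L gives 1 WL)."""
    return radon_pci_per_liter * equilibrium_factor / 100.0

# Hypothetical example: 200 pCi/L of radon at a typical mine equilibrium of 0.3
wl = progeny_wl(200.0, 0.3)                       # 0.6 WL
wlm_for_full_month = wl * HOURS_PER_WLM_MONTH / 170.0  # 0.6 WLM over a 170-hr month
print(wl, wl * WL_TO_J_PER_M3, wlm_for_full_month)
```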
*F is defined as the quotient of the equilibrium-equivalent radon progeny activity divided by the radon activity.
The rad (radiation absorbed dose) is the unit of measure for the absorbed dose of ionizing radiation. One rad corresponds to the energy transfer of 6.24 × 10⁷ MeV per gram of any absorbing material [Shapiro 1981]. The rem (roentgen equivalent man) is the unit of measure for the dose equivalent of any ionizing radiation in man. One rem is equivalent to one rad multiplied by a radiation quality factor (QF). The radiation QF expresses the relative effectiveness of radiation with differing linear-energy-transfer (LET) values to produce a given biological effect. The radiation QFs for beta particles and gamma radiation are each approximately 1; the radiation QF for alpha radiation varies from 10 to 20 [NCRP 1975; ICRP 1977; NCRP 1984a]. For equal doses of absorbed radiation (rads), the dose equivalent (rems) attributed to alpha particles is therefore 10 to 20 times greater than the dose equivalent attributed to high-energy beta particles or gamma radiation. The SI unit of measure for the dose equivalent is the sievert (Sv); one rem is equal to 0.01 Sv [Shapiro 1981].
# E. Worker Exposure
In 1986, 22,499 workers were employed in 427 metal and nonmetal mines in the United States. In the past few years, the number of underground uranium mines operating in the United States has decreased dramatically, from 300 in 1980 [Federal Register 1986] to 16 in 1984 [MSHA 1986]. Accordingly, the number of miners employed in these mines has also decreased, from 9,076 in 1979 [Cooper 1981] to 1,405 in 1984 [AIF 1984] and to 448 in 1986 [MSHA 1986].
Table II-2 shows the range in concentrations of airborne radon progeny measured in U.S. underground metal and nonmetal mines from 1976 through 1985 [MSHA 1986]. As these data illustrate, 38 of the 254 operating underground nonuranium mines sampled during fiscal year 1985 contained concentrations of airborne radon progeny equal to or greater than 0.1 WL [MSHA 1986]. With an estimated average of 55 workers per mine, approximately 2,090 nonuranium miners were at risk of exposure to radon progeny concentrations equal to or greater than 0.1 WL in 1985 [MSHA 1986].
The annual cumulative radon progeny exposures of miners in 20 U.S. underground uranium mines in 1984 are presented by job category. Of the 1,405 underground uranium miners working in 1984, 400 (28%) had annual cumulative exposures to radon progeny greater than 1.0 WLM [AIF 1986]. The gamma radiation exposures of U.S. uranium miners are generally regarded to be less than the whole-body occupational exposure limit of 5 rem (50 mSv) per year [Breslin et al. 1969; Schiager et al. 1981].
# F. Measurement Methods for Airborne Radon Progeny
# 1. Description of Measurement Methods
# a. Grab Sampling Methods
Grab sampling methods for measuring airborne radon progeny involve drawing a known volume of air through a filter and counting the alpha or beta radioactivity on the filter during or after sampling. Grab sampling methods used in underground mines are listed in Table II-5. In one-count grab sampling methods, such as those used with instant working-level monitors, the radioactivity is determined over a single counting period using a scintillation counter. In two-count methods, the radioactivity is determined over two counting periods, and the ratio of these two measurements is used to calculate the radon progeny concentrations. In a three-count method, radon progeny concentrations are derived from the relative changes in the measurements taken at three 30-minute intervals.
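As an illustration of how a one-count grab sample is reduced to a concentration, the sketch below follows the general form of the Kusnetz approach: the net alpha count rate on the filter is divided by counter efficiency, sampled air volume, and a delay-dependent conversion factor. The numerical conversion factor and sample values here are assumptions for illustration, not the method's published tables.

```python
# Illustrative only: reducing a one-count grab sample to a working level (WL),
# in the general form of the Kusnetz method. The conversion factor below is an
# assumed illustrative value; the actual method uses tabulated factors that
# depend on the delay between sampling and counting. Sample numbers are
# hypothetical.

def grab_sample_wl(net_cpm: float, counter_efficiency: float,
                   sample_volume_liters: float, conversion_factor: float) -> float:
    """Working level estimated from the net alpha count rate on the filter."""
    return net_cpm / (counter_efficiency * sample_volume_liters * conversion_factor)

# 5-min sample at 10 L/min = 50 L of air; 40% counter efficiency;
# assumed conversion factor of 150 for a ~40-min counting delay.
wl = grab_sample_wl(net_cpm=300.0, counter_efficiency=0.40,
                    sample_volume_liters=50.0, conversion_factor=150.0)
print(f"{wl:.2f} WL")  # 300/(0.4*50*150) = 0.10 WL
```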
Critically important factors are the proper calibration of radiation detectors and pumps, the use of filters that precisely fit the equipment, and accurate maintenance of the flow rate during the sampling period. It is also important to prevent the accumulation of radionuclides in order to avoid contamination of the pump, counting equipment, and filters [Schiager et al. 1981].
Statistical uncertainties associated with the various grab sampling methods for radon progeny are presented in Table II-6. The data indicate that the relative precision of the methods is the same. The major differences are the total time period required for sampling and analysis, the capability of determining exposure concentrations at the work site, and the amount of routine maintenance and calibration required of the instrumentation.
# b. Continuous Monitoring Methods
In continuous monitoring methods, air is sampled continuously, and (as with other methods) the alpha or beta radioactivities are determined over the length of the collection period. Continuous monitoring devices and systems have been described elsewhere [Haider and Jacobi 1973; Holmgren 1974; Droullard and Holub 1977; Kawaji et al. 1981; Bigu and Kaldenbach 1984; Sheeran and Franklin 1984; Bigu and Kaldenbach 1985; Droullard and Holub 1985]. The characteristics and statistical uncertainties of some continuous monitoring methods are presented in Table II-7. The U.S. Bureau of Mines has designed an automated continuous monitoring system in which up to 768 detector stations can be linked to a central control unit. The system was designed to trigger an alarm when airborne radon progeny exceed a specified concentration [Sheeran and Franklin 1984]. Although continuous monitoring methods can provide a rapid estimate of exposure concentrations, placement of the instrumentation in active work areas is difficult, and the resulting measurements may not always be representative of the exposure in the miner's breathing zone.
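A monitoring system of the kind described for the Bureau of Mines design reduces, at its core, to polling station readings against a preset alarm concentration. The sketch below is a hypothetical illustration of that logic only; the station names, readings, and threshold are all assumed values, not parameters of the actual system.

```python
# Illustrative only: the core alarm logic of a multi-station continuous
# monitoring system such as the Bureau of Mines design described above.
# Station names, readings, and the alarm threshold are hypothetical.

ALARM_THRESHOLD_WL = 0.3  # assumed preset alarm concentration

def stations_in_alarm(readings: dict[str, float]) -> list[str]:
    """Return the stations whose radon progeny concentration exceeds the preset."""
    return [station for station, wl in readings.items() if wl > ALARM_THRESHOLD_WL]

latest = {"stope 12": 0.08, "haulageway 3": 0.35, "shop": 0.02}
for station in stations_in_alarm(latest):
    print(f"ALARM: {station} at {latest[station]:.2f} WL exceeds "
          f"{ALARM_THRESHOLD_WL:.2f} WL")
```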
# c. Personal Dosimeters
Personal dosimeters for radon progeny are intended to automatically record a miner's cumulative exposure regardless of fluctuations in radon progeny concentrations; these devices thus eliminate the need to document work area location and occupancy time. Although several personal dosimeters have been tested in U.S. uranium mines, none are in routine use in this country because of problems in calibration and lack of precision [Schiager et al. 1981].
# Passive Dosimeters
Passive dosimeters rely on the natural migration of attached and unattached radon progeny to the detection area of the device without the use of an air pump. Thin plastic foils sensitive to alpha particles are used as detectors. Although passive dosimeters using track etch foils have been studied in underground mines [Domanski et al. 1982], such devices are still in the developmental stage [Schiager et al. 1981].
# Active Dosimeters
Active dosimeters use a mechanical pump to draw a known volume of air through a filter. The alpha radiation emitted by the radon progeny collected on the filter is counted and recorded automatically. The following dosimeter detectors have been tested for use under mining conditions: thermoluminescent detectors [McCurdy et al. 1969; White 1971; Phillips et al. 1979; Southwest Research Institute 1980; Grealy et al. 1982], electronic detectors [Durkin 1977], and track etch detectors [Auxier et al. 1971; Zettwoog 1981; Bernhard et al. 1984]. Active track etch dosimeters are used for radiation monitoring in all underground mines in France [Schiager et al. 1981; Bernhard et al. 1984].
# d. Factors to Consider When Selecting Measurement Methods
Concentrations of radon progeny have been reported to vary among different uranium mines and among work areas within each mine [Schiager et al. 1981]. These variations have been attributed to the type of mining process, the grade of ore mined, and the effectiveness of the ventilation used to control exposures. Historically, radon progeny exposures in work areas were measured by grab sampling techniques that used the Kusnetz count method or by the instant working-level monitor. More recently, other methods such as continuous monitors and personal dosimeters have also been used in mines. Personal dosimeter methods are clearly more desirable, but they have not been rigorously tested in U.S. mines, and they have been reported to be unreliable for determining exposures over an 8- to 10-hr work shift [Schiager et al. 1981]. Continuous monitoring methods can rapidly detect changes in radon progeny concentrations and can be equipped with an alarm system that is activated at preset concentrations. These monitors are often stationed at fixed locations within travelways, haulageways, shops, etc., because of the difficulty of moving and restationing them within active mine areas. Although these monitors do not usually provide adequate data for determining worker exposures, they can signal the occurrence of problems in the ventilation system and identify exposure sources. NIOSH believes that the use of instant working-level monitors or the Kusnetz count method will provide reliable estimates of exposure to radon progeny. Other methods at least equivalent in accuracy, precision, and sensitivity can be used (see Table II-6). Any method chosen must be capable of meeting the sampling strategy requirements described in Appendix IV.
# G. Respirator Selection and Credit for Respirator Use
# 1. Respirator Selection
Historically, NIOSH has recommended the use of the most protective respirators* when workers are exposed to potential occupational carcinogens [NIOSH 1987]. Although cumulative exposure to radon progeny may result in cancer, the use of the most protective respirators may not always be technically feasible or safe in routine underground mining operations. Supplied-air respirators (SARs) that are NIOSH/MSHA-certified provide breathing air from compressors or a cascade system of air-supply tanks and are approved only for use with air lines less than 300 ft long. However, the use of SARs may not be practical in underground mining operations: it is difficult to provide sufficient quantities of breathing air through air lines over long distances, and the air lines are susceptible to crimping and severing from the movement of mining vehicles and haulage cars on tracks. Furthermore, many underground work areas and passageways in mines are too small and cramped with equipment to accommodate air compressors or large air-supply tanks. In addition to being cumbersome in underground mines because of their size, self-contained breathing apparatuses (SCBAs) weigh as much as 35 lb, and SARs weigh approximately 6 lb.
Thus when SCBAs or SARs are worn for extended periods, their additional weight can also cause increased physiological burden in the form of heat stress [White and Ronk 1984a, 1984b; White and Hodous 1987; White et al. 1987]. Finally, NIOSH believes that the routine use of SCBAs and SARs may result in increased injuries in underground mining operations. NIOSH is not aware of any studies specifically dealing with injuries or other safety hazards associated with the use of SARs and SCBAs in mines. However, several studies have shown that obstacles introduced into the workplace result in a significantly increased risk of injury from tripping, slipping, or falling [National Safety Council 1981; Szymusiak and Ryan 1982a, 1982b]. Because mining is currently one of the most dangerous industries in the United States with regard to occupational deaths and injuries [MMWR 1987; BLS 1987], NIOSH believes that this problem would be exacerbated by the routine use of SCBAs and SARs. NIOSH therefore believes that there is sufficient safety and health evidence to recommend against the routine use of SARs and SCBAs for reducing exposure to radon progeny during underground mining operations.
Table I-1 lists the respirators that NIOSH recommends for use against exposure to radon progeny. For average work shift concentrations of radon progeny that exceed 1/12 WL, the recommended respirators include air-purifying respirators with high-efficiency particulate air (HEPA) filters. The HEPA filter media are recommended by NIOSH for use with the air-purifying classes of respirators. These filters are the most efficient type of particulate filter available, and they are less susceptible than others to performance degradation resulting from humid storage and use conditions [Stevens and Moyer 1987].
*Either (1) any self-contained breathing apparatus (SCBA) equipped with a full facepiece and operated in a pressure-demand or other positive-pressure mode, or (2) any supplied-air respirator (SAR) equipped with a full facepiece and operated in a pressure-demand or other positive-pressure mode in combination with an auxiliary SCBA operated in a pressure-demand or other positive-pressure mode.
# 2. Credit for Respirator Use
A miner's exposure to radon progeny may be less than the average work shift concentration in an area, depending on the class of respirator worn and the percentage of time the respirator is worn properly. This reduced exposure for miners who wear respirators can be calculated by dividing the average work shift concentration of radon progeny by the credit factor (CF) for that class of respirator (see Table I-1). The credit factors in Table I-1 were determined by the following equation:
Pf = 1/CF = (Pw × tw) + (Pn × tn)
where
Pf = the total penetration of radon progeny into the respirator facepiece
CF = the credit factor
Pw = the penetration of radon progeny while wearing the respirator (i.e., 1/APF, where APF is the assigned protection factor; a complete listing of the APFs for all classes of respirators can be found in the NIOSH Respirator Decision Logic [NIOSH 1987])
tw = the proportion of time during the work shift that the miner wears the respirator properly
Pn = the penetration of radon progeny while not wearing the respirator properly (i.e., 100% or 1.0)
tn = the proportion of time during the work shift that the miner does not wear the respirator properly (i.e., 1.0 - tw)
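Under these definitions, the credit factor follows directly from the assigned protection factor and the wear-time fraction. The sketch below implements the equation above; the APF and area concentration are hypothetical example values, and the 65% and 90% utilization rates are the bounds discussed in the text that follows.

```python
# Illustrative only: credit factor (CF) from the equation above,
# Pf = 1/CF = Pw*tw + Pn*tn, with Pw = 1/APF, Pn = 1.0, and tn = 1 - tw.
# The APF and the 0.30-WL area concentration are hypothetical example values.

def credit_factor(apf: float, wear_fraction: float) -> float:
    total_penetration = wear_fraction / apf + (1.0 - wear_fraction) * 1.0
    return 1.0 / total_penetration

APF = 25.0  # hypothetical assigned protection factor for a respirator class
for tw in (0.65, 0.90):  # utilization bounds discussed in the text
    cf = credit_factor(APF, tw)
    effective_wl = 0.30 / cf  # miner's effective exposure in a 0.30-WL area
    print(f"tw = {tw:.2f}: CF = {cf:.2f}, effective exposure = {effective_wl:.3f} WL")
```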
An unpublished Canadian study evaluated the proportion of time during the work shift that a group of underground uranium miners properly wore their helmet-type, powered, air-purifying respirators [Linauskas and Kalos 1984; Kalos 1986]. If a mine operator can verify to MSHA that respirator utilization is greater than 65%, NIOSH recommends recalculating the CFs using this higher utilization rate. However, the highest utilization rate that NIOSH recommends is 90%. The CFs for these two utilization rates are listed in Table I-1.
# III. BASIS FOR THE RECOMMENDED STANDARD
# A. Assessment of Effects
# 1. Human Studies
# a. Association of Radon Exposure with Lung Cancer Mortality
Appendix I contains the report prepared by NIOSH and submitted to MSHA on May 31, 1985, entitled Evaluation of Epidemiologic Studies Examining the Lung Cancer Mortality of Underground Miners. Several of the epidemiologic studies evaluated in that report demonstrate an association between exposure to radon progeny and lung cancer mortality in underground uranium miners [Lundin et al. 1971; Sevc et al. 1976; Kunz et al. 1978; Waxweiler et al. 1981; Placek et al. 1983; Samet et al. 1984; Muller et al. 1985; Tirmarche et al. 1985]. The relationship between exposure to radon progeny and lung cancer mortality has also been observed in workers in underground metal mines [Wagoner et al. 1963], iron ore mines [Boyd et al. 1970; Jorgensen 1973 (updated in 1984); Damber and Larsson 1982; Edling and Axelson 1983; Radford and Renard 1984], tin mines [Fox et al. 1981; Jingyuan et al. 1981; Wang et al. 1984], fluorspar mines [Morrison et al. 1985], gold mines [Muller et al. 1985], and zinc/lead mines [Axelson and Sundell 1978]. In addition, some of these studies demonstrate a direct exposure-response relationship between lifetime cumulative exposure to radon progeny and lung cancer mortality [Lundin et al. 1971; Sevc et al. 1976; Kunz et al. 1978; Morrison et al. 1985; Muller et al. 1985].
Statistically significant standardized mortality ratios (SMRs)* above 400% were observed in three studies in which workers accumulated mean lifetime exposures above 100 WLM [Sevc et al. 1976; Kunz et al. 1978; Waxweiler et al. 1981; Morrison et al. 1985]. Statistically significant SMRs between 140% and 390% were observed in two other studies in which workers accumulated mean lifetime exposures below 100 WLM [Radford and Renard 1984; Muller et al. 1985] and in preliminary findings of a third study in which workers accumulated estimated mean lifetime exposures below 100 WLM [Tirmarche et al. 1985].
*The standardized mortality ratio (SMR) is the ratio of the mortality rates of two groups being compared. This ratio is expressed as a percentage and is usually adjusted for age or time differences between the two groups.
# b. Synergistic Effects of Other Substances
Although the literature consistently demonstrates an association between lung cancer incidence and exposure to radon progeny, it is possible that some of the miners studied were exposed to other substances as well. These substances may have acted synergistically
with radon progeny to potentiate the effects of the radon exposure [Doull et al. 1980]. Substances with synergistic potential include arsenic [NIOSH 1975a; Wang et al. 1984; Sevc et al. 1984]; hexavalent chromium, nickel, and cobalt [NIOSH 1975b; NIOSH 1977; Sevc et al. 1984]; serpentine [Radford and Renard 1984]; iron ore dust [Boyd et al. 1970; Jorgensen 1973, 1984; Damber and Larsson 1982; Edling and Axelson 1983; Pham et al. 1983; Radford and Renard 1984]; and diesel exhaust [Wagoner et al. 1963; Boyd et al. 1970; Waxweiler et al. 1981; Fox et al. 1981; Damber and Larsson 1982; Edling and Axelson 1983; Jorgensen 1984; Sevc et al. 1984; Muller et al. 1985; Morrison et al. 1985; Tirmarche et al. 1985].
Risk analyses were performed on data from epidemiologic studies of U.S. uranium miners [Whittemore and McMillan 1983; Appendix II] and Swedish iron ore mine workers [Radford and Renard 1984]. These analyses indicate that the risk of mortality from lung cancer among miners who are exposed to radon progeny is greater among those who smoke cigarettes than among those who do not smoke.
# c. Relation of Lung Cancer Risk to Cumulative Radon Exposure
Since completion of the NIOSH report (Appendix I), Howe et al. [1986] published a study of surface and underground mine workers who had worked in Canada at the Eldorado Beaverlodge uranium mine between 1948 and 1980. That cohort included 8,487 workers whose cumulative exposures were stratified into categories; the risk of death from lung cancer increased linearly with increasing exposure. For the exposure categories of 5 to 24, 25 to 49, and 50 to 99 WLM, the relative risk was elevated, but the difference from the expected risk for an unexposed population was not statistically significant. For all exposure categories above 100 WLM, the relative risk was significantly elevated (p<0.05); note, however, that the first 10 years of followup were excluded from this calculation. For the total cohort, the relative risk coefficient was 3.28% per WLM, and the absolute risk coefficient was 20.8 per 10⁶ person-years per WLM (any excess mortality within the 10 years following initial exposure was excluded from this calculation).
Two additional epidemiologic studies that were not included in the 1985 NIOSH report (Appendix I) have also been reported. Pham et al. [1983] studied 1,173 iron mine workers in France who were aged 35 to 55 and who had normal chest X-rays at the beginning of the study period in 1975. Thirteen mine workers who had worked underground for a mean of 25.2 years died of lung cancer between 1975 and 1980; only 3.7 deaths were expected from an age-standardized comparison with French males for this same period (SMR = 351; p<0.05). Although exposure records were not available, the authors estimated that some workers may have received lifetime cumulative exposures to radon progeny in the range of 100 to 150 WLM.
Solli et al. [1985] observed 318 niobium mine workers in Norway from 1953 through 1981; 77 of these miners were underground workers. This cohort experienced a total of 12 lung cancer deaths, though only 2.96 deaths were expected on the basis of age-specific rates for Norwegian males (SMR = 405; p<0.001). The underground workers experienced 9 lung cancer deaths, whereas only 0.81 were expected on the basis of age-specific rates for Norwegian males (SMR = 1,111; p<0.001).
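The SMR arithmetic used in these studies is simple enough to verify directly. The sketch below reproduces the Solli et al. [1985] figures quoted above; the observed and expected counts come from the text, and the significance tests are not reproduced here.

```python
# Illustrative only: the standardized mortality ratio (SMR) as defined above,
# SMR = 100 * (observed deaths) / (expected deaths). The values are the
# Solli et al. [1985] niobium-miner figures quoted in the text.

def smr(observed: float, expected: float) -> float:
    return 100.0 * observed / expected

print(round(smr(12, 2.96)))  # whole cohort: ~405
print(round(smr(9, 0.81)))   # underground workers: ~1,111
```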
From estimates of total exposure to alpha radiation (based on limited measurements of radon and thoron progeny taken in 1959), the authors determined that the risk of lung cancer increased significantly (p<0.05) with increasing alpha radiation for the exposure categories of 1 to 19, 20 to 79, 80 to 119, and greater than or equal to 120 WLM. The excess absolute risk for those exposed workers was reported to be 50 per 10⁶ person-years per WLM.
The epidemiologic studies of lung cancer mortality in mine workers exposed to radon progeny (including those studies discussed in Appendix I) are summarized in Tables III-2 and III-3.
# 2. Animal Studies
# a. Effects of Exposure to Radon Progeny
Chameaud et al. [1984a] studied the effects of exposure to radon progeny in specific pathogen-free (SPF) Sprague-Dawley rats. A total of 1,800 rats were exposed to radon progeny for 1 to 3 hr per day for 14 to 82 days, yielding an accumulated exposure of 20 to 50 WLM. An additional 600 rats were unexposed. The lung cancer incidence in rats was reported to be directly proportional to their lifetime cumulative exposure to radon progeny. The authors concluded that the amount of radiation needed to double the natural incidence of lung cancer in these rats was 20 WLM. Reduced life spans were not observed for rats in any of the exposure groups.
# b. Relation of Lung Cancer Incidence to Radon Progeny Exposure
Chameaud et al. [1981, 1984a] determined that the lifetime risk coefficient (uncorrected for life span shortening) for the induction of lung cancers in rats was approximately 140 to 850 × 10⁻⁶ per WLM for exposures ranging from 20 to 4,500 WLM (Table III-4). This is consistent with the lifetime risk coefficient for lung cancer in humans (150 to 450 × 10⁻⁶ per WLM) estimated by the International Commission on Radiological Protection (ICRP) [ICRP 1981]. As shown in Table III-4, lung cancer incidence in rats increased as cumulative exposure to radon progeny increased. In contrast, the lifetime risk of lung cancer per unit of exposure (WLM) decreased with increasing exposure. These findings agree with those of the NIOSH risk assessment (Appendix II), in which the lifetime cumulative risk of lung cancer per unit of exposure decreased as cumulative exposure increased in underground uranium mine workers.
# c. Synergistic Effects of Cigarette Smoke
Chameaud et al. [1980, 1982] studied the ability of radon progeny to initiate lung cancer in groups of 50 SPF Sprague-Dawley rats that were subsequently exposed to cigarette smoke. The chamber concentrations of alpha radiation were 0, 300, and 3,000 WL; these exposures yielded cumulative dose levels of 0, 100, 500, and 4,000 WLM over a 2-month period for the groups of animals. Treatment groups were exposed to a total of 352 hr of cigarette smoke (9 cigarettes/500 L of air for 10 to 15 min per day, 4 days per week for 1 year). Exposure to radon progeny alone demonstrated a directly proportional dose-effect relationship for the induction of cancer (500 and 4,000 WLM). However, when a similar period of radon exposure was followed by exposure to cigarette smoke, a dose-related, twofold to fourfold increase occurred in lung cancer incidence. The authors stated that the groups receiving high and medium doses of radon and cigarettes had cancers that were not only larger but more invasive and metastatic compared with the groups exposed to radon alone.
Conversely, neither the cigarette smoke alone nor the low-dose radon progeny exposure (100 WLM) alone induced lung cancer.
In a parallel lifetime study, Chameaud et al. [1981] related the sequence of exposure to radon progeny and cigarette smoke to the incidence of lung tumors (cancer incidence was not specified) in groups of 50 SPF Sprague-Dawley rats. One group of rats was exposed to radon progeny only (a cumulative exposure of 4,000 WLM); a second group was exposed first to cigarette smoke and then to radon progeny (4,000 WLM); a third group was exposed to radon progeny (4,000 WLM) and then to cigarette smoke; and a fourth group was exposed to cigarette smoke only. Similar incidences of tumors were observed among the rats exposed to radon progeny only (10 tumors) and those exposed first to cigarette smoke and then to radon progeny (8 tumors). In contrast, when the exposure to radon progeny preceded the exposure to cigarette smoke, the effect was potentiated: 32 rats developed tumors. As stated previously, none of the rats exposed to cigarette smoke only developed lung cancer. No statistical analyses were performed on the results of this study.
# d. Significance of Animal Studies
Life span experiments in animals exposed to radon progeny alone have demonstrated that increasing exposures produce increasing incidences of lung cancer. This finding is similar to those of the epidemiologic studies cited in the preceding section (III.A.1). Because epidemiologic data are available, these animal data contribute relatively little to the final assessment of risk in humans or to the determination of an REL for exposure to radon progeny. Thus this document discusses only selected animal studies of the carcinogenic potential of radon progeny. Other studies have examined the sequential or concomitant exposures of rats, dogs, and hamsters to substances other than radon progeny (e.g., uranium ore dust, thorium, and tobacco smoke). Several additional studies (critiqued but not described in this document) confirm the adverse health effects of radon progeny on exposed animals [Chameaud et al. 1974, 1984b; Filipy et al. 1977a, 1977b; Gaven et al. 1977; PNL 1978; Cross et al. 1981, 1982a, 1982b, 1983, 1984; Cross 1984]. These animal studies generally confirm the risk of lung cancer reported among workers exposed to radon progeny.
# B. Risk Assessment
NIOSH studied the lung cancer risk of uranium miners by using data from a U.S. Public Health Service (USPHS) study [Lundin et al. 1971] of white male uranium mine workers from the Colorado Plateau area (Colorado, Arizona, New Mexico, and Utah). That NIOSH risk assessment is described in a report entitled Quantitative Risk Assessment of Lung Cancer in U.S. Uranium Miners, which is reproduced in Appendix II. Appendix I contains a detailed discussion of the USPHS data. In the NIOSH risk assessment, data were analyzed for 3,346 workers who had been followed from 1950 through 1982. By 1982, 1,215 workers had died; 256 of these deaths (21.1%) were due to lung cancer. A generalized version of the Cox proportional hazards model was used to estimate the relative risk of death resulting from lung cancer over a 30-year working lifetime at several cumulative exposure values. (The 30-year working lifetime was selected to maintain consistency with the working lifetime commonly described by MSHA.)
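To make the form of such a model concrete, the sketch below evaluates a log-relative-risk function of the general shape described in the footnote that follows (fitted to age at initial exposure, time since cessation of exposure, and logarithms of cumulative exposure, smoking, and exposure rate). The coefficient and the simplifications are hypothetical placeholders, so the outputs will not match the fitted estimates in Table III-5.

```python
import math

# Illustrative only: the general shape of a log-relative-risk model of the kind
# described below. The coefficient and background value are hypothetical
# placeholders, NOT the fitted NIOSH parameters; terms for age at first
# exposure, time since cessation, smoking, and exposure rate are omitted.

def relative_risk(cumulative_wlm: float,
                  background_wlm: float = 12.0,  # 0.4 WLM/yr x 30 yr, per the text
                  beta: float = 0.15) -> float:  # hypothetical coefficient
    """RR = exp(beta * ln((occupational + background) / background)),
    normalized so that background exposure alone gives RR = 1."""
    return math.exp(beta * math.log((cumulative_wlm + background_wlm) / background_wlm))

for wlm in (30, 60, 120, 480):  # hypothetical 30-year cumulative exposures
    print(wlm, round(relative_risk(wlm), 2))
```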
Relative risk is defined as the ratio of lung cancer mortality in a selected exposed group to lung cancer mortality in a comparison group. The quantitative risk assessment model presented in Appendix II did not include the length of time since the end of the mining exposure, which is a significant predictor of relative risk. This term was subsequently added to the generalized Cox model, and new parameter estimates were computed. The estimates in Tables III-5 and III-6 are based on this model. The major difference between the two quantitative risk assessment models is that under the new model, the relative risk estimates increase more rapidly during exposure and decrease more rapidly after exposure.
The risk of death resulting from lung cancer increased with increasing lifetime cumulative exposure to radon progeny (Table III-5); this finding is consistent with Appendix II. This direct relationship has been observed in previous epidemiologic studies [Lundin et al. 1971; Sevc et al. 1976; Kunz et al. 1978; Morrison et al. 1985; Muller et al. 1985]. As shown in Table III-5, the relative risk of 1.57 at 30 WLM corresponds to an average exposure of 1 WLM per year for a working lifetime of 30 years.
*Estimates are based on a log-relative risk model fitted to age at initial exposure, time since cessation of exposure, and the natural logarithms of the following variables: cumulative mining and background exposure to radon progeny; cumulative cigarette smoking and background smoking; and rate of exposure to radon progeny.
+The approximate 95% confidence limits were calculated by applying the parameters from the quantitative risk assessment model, together with their variances and covariances, to the lung cancer mortality rates in the Colorado Plateau using an actuarial approach.
In addition to receiving workplace exposures to radon progeny, workers were assumed to have received average environmental exposures of 0.4 WLM/year. This is the value derived in the NIOSH risk assessment that led to the best fit of the model to the data. This assumed value is consistent with estimates of exposure to radon progeny for persons living near ore-bearing lands in the United States [NCRP 1975; Brookins 1986]. The average nonoccupational exposure to radon progeny from natural geologic sources for persons in the United States is approximately 0.2 WLM/year [NCRP 1984b].
Relative risk modeling is common in the epidemiologic literature (especially in studies with lengthy followup) because the dramatic changes that occur in mortality rates with age and calendar year make absolute risk models extremely complicated. Most relative risk models assume that mortality rates in exposed populations are roughly proportional to the rates in unexposed populations at all ages and calendar periods. Because this assumption often approximates reality over a broad range of ages and calendar periods, the relative risk model can be expressed in a less complex form than absolute risk models (i.e., without terms involving age and calendar year).
Excess lifetime risk estimates for lung cancer mortality have been generated (Table III-6) by applying the relative risk estimates in Table III-5 (see also Appendix II) to the lung cancer and all-causes mortality rates for white males in the Colorado Plateau States. Excess risk is defined as the arithmetic difference between the risk of lung cancer mortality in a selected exposed group and the risk of lung cancer mortality in an unexposed comparison group.
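The actuarial computation described next can be sketched compactly: age-specific lung cancer rates are multiplied by the exposure-determined relative risk, survival is tracked with all-cause mortality, and the lifetime probabilities with and without exposure are differenced. The rates, relative risk, and age range below are hypothetical placeholders, not the Colorado Plateau values used by NIOSH.

```python
# Illustrative only: a minimal life-table (actuarial) calculation of excess
# lifetime lung cancer risk, in the spirit of the NAS method described below.
# All rates and the relative risk are hypothetical placeholders.

def lifetime_lung_cancer_risk(ages, all_cause_rate, lung_rate, rr_by_age):
    surviving = 1.0
    lifetime_risk = 0.0
    for age in ages:
        hazard = lung_rate(age) * rr_by_age(age)  # exposure-adjusted lung cancer hazard
        lifetime_risk += surviving * hazard * 5.0  # 5-year age intervals
        surviving *= 1.0 - all_cause_rate(age) * 5.0
    return lifetime_risk

ages = range(20, 85, 5)
all_cause = lambda a: 0.0005 * 1.09 ** (a - 20)  # hypothetical annual rate
lung = lambda a: 0.00002 * 1.10 ** (a - 20)      # hypothetical annual rate

exposed = lifetime_lung_cancer_risk(ages, all_cause, lung, lambda a: 1.5)
baseline = lifetime_lung_cancer_risk(ages, all_cause, lung, lambda a: 1.0)
print(f"excess lifetime risk: {(exposed - baseline) * 1000:.1f} per 1,000")
```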
The estimated excess lung cancer deaths (i.e., excess lifetime risk) in Table III-6 were computed by approximating the average of the exposure-determined relative risk function over 5-year age intervals spanning an entire lifetime. These average relative risks and the corresponding mortality rates for lung cancer and for all causes of death among white males in the Colorado Plateau were used to compute the probability of lung cancer mortality during a lifetime using the National Academy of Sciences actuarial method [NAS 1987].
It is important to understand the limitations of these risk estimates when examining the values in Table III-6. These limitations include the following:
• A relatively small portion of the cohort had the observed lower levels of cumulative occupational exposure, so the ability to generate precise point-risk estimates at the lower range of occupational exposure is not as strong. Only 7% of the workers in the cohort had lifetime cumulative exposures below 30 WLM, and only 7 of the 256 lung cancer deaths occurred among workers with these lower cumulative exposures.
• The reliability of these excess lifetime risk estimates depends on (1) the accuracy of the original relative risk estimates and (2) the appropriateness of using lung cancer rates for the general white male population in the Colorado Plateau as an estimate of the background lung cancer rate (i.e., the rate that would occur in populations exposed only to background levels of radon progeny). This cohort contained no unexposed mine workers from which to estimate background lung cancer rates. Although certain limitations exist in using this type of rate [Monson 1980], background lung cancer mortality rates from the general U.S. white male population were used to estimate the background rates for the cohort.
• The background lung cancer rates were not corrected for cigarette smoking. However, the relative risk estimates used to calculate the lifetime risk estimates were adjusted for smoking. This implies that the lifetime risk estimates are appropriate only for a population with a pattern of smoking similar to that of the white male population of the Colorado Plateau.
In developing its recommendations, NIOSH has attempted to compare the risk of occupational exposure to radon progeny with the risk of background exposure in homes (i.e., exposure accruing outside the mining environment). The estimated average background exposure to radon progeny is approximately 0.4 WLM per year in the Colorado Plateau (the fitted estimate described above). On the basis of State vital statistics records, NIOSH estimates that the corresponding lifetime risk of lung cancer in the Colorado Plateau, uncorrected for smoking, is approximately 45 lung cancers per 1,000 white males.
Given this value for background lung cancer risk, another question that must be considered is to what level it would be reasonable to control occupational risk. In its benzene decision, the U.S. Supreme Court gave the following example as the basis for evaluating the occupational risk of chemically induced leukemia: an exposure associated with 1 excess death per million exposed persons might pose an acceptable risk, whereas an exposure associated with 1 excess death per 1,000 exposed persons would pose a significant risk that should be reduced. This example is useful, but it cannot be strictly applied in all cases because it was offered as an illustration and not a fixed rule. In the specific case of lung cancer risk associated with radon progeny exposure, the example cannot be strictly applied because the background risk of lung cancer is much greater than the risk of leukemia at background exposure levels and because cigarette smoking is known to create a confounding effect that greatly increases risk.
An excess of 1 lung cancer death per 1,000 would probably not be detectable in a general population if the data were subject to considerable uncertainty, since it would necessitate differentiating between 46 and 45 deaths per 1,000.
Another important consideration is the technical feasibility of a given exposure limit. As stated earlier, NIOSH has determined that a cumulative exposure limit of 1 WLM per year is achievable (though some argue that it is not feasible on the basis of economics or current technology). NIOSH has found no evidence that any cumulative exposure limit lower than 1 WLM per year is feasible. As shown in Table III-6, the occupational exposure limit required to reduce expected lifetime risk to 1 excess lung cancer death per 1,000 miners is approximately 0.1 WLM per year. This cumulative exposure would require an occupational exposure concentration that is less than the concentration associated with a cumulative background exposure of 0.4 WLM per year, assuming that background exposure is acquired outside the occupational environment. An occupational exposure limit of 0.1 WLM per year would therefore require average mine concentrations lower than the estimated background exposure concentrations.
In view of the preceding factors, the level of uncertainty in the available data, and the apparent infeasibility of limiting cumulative exposures to less than 1.0 WLM per year, it does not seem reasonable to recommend exposure limits that would yield only 1 excess lung cancer death per 1,000 miners. NIOSH has determined that an exposure limit of 1 WLM is feasible in some mines and that such an exposure limit would substantially reduce risks from those associated with the current MSHA standard. An REL of 1 WLM per year is therefore recommended to substantially reduce risk and to stimulate the implementation and development of engineering and mining techniques to reduce exposure. The enforcement of this recommendation, combined with additional health and developmental research, may facilitate future exposure reductions and thereby further reduce lung cancer risk.
# C. Technical Feasibility
In a report to the U.S. Bureau of Mines, Bloomster et al. [1984a] analyzed the technical feasibility of reducing the airborne concentrations of radon progeny in underground uranium mines. The study included data from 14 underground uranium mines operating during the study period (September 1981 to May 1984). The authors concluded that some mines could operate at an annual exposure standard of 2 WLM by using dilution ventilation alone if it was introduced early in the development of the mine and if no contamination of inlet air was present. An extensive engineering analysis of 2 of the 14 mines indicated that it might be feasible to meet an operating standard of 1 WLM by using dilution ventilation in combination with other control methods such as bulkheading, air filtration, and the use of sealants. The authors expressed doubt about the technical feasibility of operating these uranium mines at a standard of 0.5 WLM. Appendix III provides descriptions of engineering control methods that may be useful in underground mines.
# D. Recommendations
Several schemes exist for identifying and classifying a substance as a carcinogen.
For example, the National Toxicology Program (NTP) [NTP 1984], the International Agency for Research on Cancer (IARC) [WHO 1979], and the Occupational Safety and Health Administration (OSHA) [29 CFR 1990, Identification, Classification, and Regulation of Potential Occupational Carcinogens (also known as "The OSHA Cancer Policy")] have all considered this problem. NIOSH considers the OSHA classification the most appropriate for use in identifying potential occupational carcinogens and supports the following definition:
A "potential occupational carcinogen" is any substance, or combination or mixture of substances, which causes an increased incidence of benign and/or malignant neoplasms, or a substantial decrease in the latency period between exposure and onset of neoplasms in humans or in one or more experimental mammalian species as the result of any oral, respiratory, or dermal exposure, or any other exposure which results in the induction of tumors at a site other than the site of administration [29 CFR 1990.103].
This definition also includes any substance that mammals may potentially metabolize into one or more occupational carcinogens.
The epidemiologic data examined by NIOSH demonstrate that occupational exposure to radon progeny in underground mines has the potential for causing lung cancer in miners (see Appendix I) [Pham et al. 1983; Solli et al. 1985; Howe et al. 1986]. These human data are supported by a number of studies in which various animal species exposed to radon progeny also developed lung cancer [Chameaud et al. 1974, 1980, 1981, 1982, 1984a, 1984b; Filipy et al. 1977a, 1977b; Gaven et al. 1977; PNL 1978; Cross et al. 1981, 1982a, 1982b, 1983, 1984; Cross 1984]. Furthermore, the NIOSH risk assessment presented in Appendix II, which was based on the human studies, clearly demonstrates that a relationship exists between cumulative radon progeny exposure and the risk of developing lung cancer. The risk assessment shows that as cumulative exposure decreases, the risk of developing cancer decreases.
In arriving at an REL, NIOSH attempts to identify the exposure at which no worker will suffer material impairment of health or functional capacity. In the case of radon progeny, this task is difficult because the NIOSH risk assessment shows that even at an exposure of 0.5 WLM per year (15 WLM for a 30-year cumulative exposure) or below, the risk of developing lung cancer is increased (see Table III-4 and Appendix II). These results indicate that NIOSH should recommend an annual cumulative exposure limit well below 0.5 WLM, but NIOSH must also consider the technical feasibility of the REL. In previous NIOSH recommendations for the control of carcinogens, technical feasibility has often been interpreted as the ability to quantitate exposure; however, those recommendations were intended for use by nonmining industries, where product substitution and engineering and process controls are generally more feasible. Data from 1984 indicate that 94.4% of the workers in U.S. underground uranium mines accumulated annual radon progeny exposures of less than 2 WLM [AIF 1986]. Information obtained from the Bureau of Mines [Bloomster et al. 1984a] indicates that it is now technically feasible to achieve an annual radon progeny exposure of 1.0 WLM.
NIOSH therefore recommends that cumulative exposure to radon progeny be limited to 1.0 WLM per year. This recommendation is intended to protect the health of America's underground miners, but it is tempered by the fact that it is not currently feasible to achieve annual exposures lower than 1.0 WLM using work practices and engineering controls. To meet the NIOSH recommendation for an annual cumulative exposure of 1 WLM and to assure that mining is a viable year-long occupation, NIOSH believes that the daily average work shift concentration of radon progeny should not exceed 1/12 WL in any work area (at a constant 1/12 WL, a miner who works 170 hr per month for 12 months accumulates (1/12 WL × 170 hr × 12)/170 hr = 1 WLM).
Although adherence to the NIOSH REL will significantly reduce the risk of lung cancer in underground mine workers, it will not eliminate that risk. No effective medical procedure currently exists to treat lung cancer caused by exposure to radon progeny. Furthermore, it has been demonstrated that exposure to both radon progeny and tobacco smoke results in a combined lung cancer risk that is greater than the risk posed by radon progeny or smoke alone; the interaction between radon progeny and smoking is at a minimum additive, and more likely multiplicative. Cigarette smoking should therefore be emphasized as an even greater detriment to mine workers exposed to radon progeny than it is to the general public. The implementation of smoking cessation programs should reduce the incidence of lung cancer in underground mine workers.
Because radon progeny are ubiquitous, exposure cannot be totally eliminated. For U.S. residents, the average annual nonoccupational exposure to radon progeny from natural geologic sources is approximately 0.2 WLM, and annual occupational exposures may be considerably higher. The REL of 1 WLM per year is not designed for the population at large, and no extrapolation is warranted beyond occupational exposures in underground metal or nonmetal mines. The REL is designed only for the radon progeny exposures of underground metal and nonmetal mine workers, as applicable.
# IV. RESEARCH NEEDS
The following research is needed to further reduce the risk of lung cancer development from occupational exposure to radon progeny.
# A. Epidemiologic Studies
Needs for epidemiologic studies have been identified as follows:
• A followup study of the U.S. miner cohort from the original Public Health Service study is needed to explore the risk of lung cancer among nonsmoking miners exposed to low concentrations of radon progeny. NIOSH is currently resurveying this cohort to update exposure histories and gather additional information on smoking behavior, dietary practices, and tumor cell types.
• A study is needed to determine whether radon gas itself or other contaminants that might be found in uranium mines are associated with increased morbidity and mortality.
• An epidemiologic study of injuries in the mining industry is needed to identify safety-related problems, since this industry has one of the highest injury rates in the United States. Particular emphasis should be given to whether respirator use is associated with increased injury and health risks. This investigation should examine slips, trips, falls, and heart attacks.
# B. Engineering Controls and Work Practices
Research should be conducted to develop more effective control technology methods for reducing exposure to radon progeny to less than 1 WLM.
A control technology assessment of the uranium mining industry will assist in this effort by examining existing state-of-the-art technologies and work practices and by recommending new methods for controlling exposure to radon progeny.
# C. Respiratory Protection
The following two types of research are recommended for respiratory protection:
• Research should be conducted to determine the extent of gamma radiation emitted from particles trapped on high-efficiency particulate air (HEPA) filters used on air-purifying respirators.
• A study should also be conducted to evaluate the physiological stress placed on miners who must wear respiratory protection. This study should be conducted both in the laboratory and in the mines.
# D. Environmental (Workplace) Monitoring
Studies needed for environmental (workplace) monitoring are as follows:
• Research should be conducted to characterize and evaluate the importance of particle size, unattached fraction, and condensation nuclei concentrations in estimating the bronchial dose of radon progeny. The dose of alpha radiation affecting the bronchial airways depends on the size of the particles to which the progeny are attached. Recent studies have shown that as particle size decreases, the concentration of radon progeny increases.
• Continued research is also needed to determine which factors affect the equilibrium between radon progeny and radon gas in mines. These studies should examine how such information may be used to predict the extent of exposure to radon progeny. Additional development and field testing of personal sampling devices is also needed for more complete determination of a miner's daily exposure.
# V. PUBLIC HEALTH PERSPECTIVE
In developing this document, NIOSH has been challenged to carefully consider all of the Institute's legislative, scientific, and moral responsibilities. NIOSH has been required to review diverse scientific data that are subject to uncertainty and then, in keeping with its mandates, to recommend criteria that will attain the highest level of health protection while accounting for other factors such as technical feasibility and insights gained through research and development. To develop a public health perspective on the risk posed by occupational exposure to radon progeny, NIOSH must weigh a number of factors, as follows.
1. Human and animal data both clearly establish that exposure to radon progeny increases the risk of lung cancer. The human data consist of a number of positive epidemiologic studies, several of which demonstrate an exposure-related health risk that is not accounted for by smoking behavior. The animal studies demonstrate that lung cancer risk increases with exposure in the absence of smoking. It is important to note that smoking by miners appears to greatly exacerbate the risk of lung cancer posed by exposure to radon progeny alone.
2. The NIOSH risk assessment, based on a USPHS study of uranium miners [Lundin et al. 1971], demonstrated a significant exposure-response relationship. This analysis indicates that exposure to radon progeny at the current MSHA occupational exposure limit of 4 WLM per year over a working lifetime will result in 42 excess lung cancers per thousand miners. A miner's working lifetime has been defined as 30 years, consistent with the working lifetime used by MSHA. Risk declines substantially if a lower annual cumulative exposure is received over the working lifetime.
Any risk assessment is subject to uncertainty because risk assessment models may not reflect risk in a completely reliable way and because the data on which they are based are subject to uncertainties and limitations. The Cox proportional hazards model, which was chosen for the NIOSH risk assessment, is considered one of the strongest analytical approaches for longitudinal epidemiologic data, but it is not clear how accurately the data can be extrapolated to predict risk below the levels of observed exposure. Current biological theory hypothesizes that carcinogenic processes involve an initiation stage that is followed by other stages before an actual malignancy is established; the essential characteristics of all of these stages remain to be delineated. The Cox proportional hazards model is very powerful in describing human risks based on epidemiologic data. Its strength is partly due to the model's ability to accommodate long followup periods during which changes occur in some of the risk factors (e.g., cumulative exposure). Nevertheless, it is not clear how accurately the Cox model or any other risk assessment model predicts the risk from a multistage cancer process when exposures are below the levels that have been studied.
The USPHS study [Lundin et al. 1971] on which the NIOSH risk assessment was based is an extensive study, but the risk assessment is subject to uncertainties and limitations because of the nature of the data. For example, more uncertainty is inherent in the risk estimates at the lower range of exposure because a relatively small proportion of this study population received the lowest cumulative exposures. Approximately 7% of the USPHS uranium miners (a group that included 7 lung cancers) had received cumulative occupational exposures of 30 WLM or less. At lower cumulative exposure levels, even smaller proportions of the cohort and fewer lung cancers were represented. Thus point estimates at these lower cumulative exposure levels would be more subject to the influence of chance occurrences. In addition, exposure levels are subject to measurement error: the uncertainty of exposures has been estimated to range from a relative standard deviation of 38% [Schiager et al. 1981] to as high as 97% (see Appendix II).
3. Radon progeny and their associated risk are present in our ambient environment as well as in the mining environment and cannot be totally eliminated. Radon progeny are ubiquitous in that they emanate from all ore-bearing deposits containing elements that decay to produce radon gas. The national average exposure to radon progeny has been estimated to be 0.2 WLM [NCRP 1975]. Areas containing large amounts of ore-bearing deposits (e.g., the Colorado Plateau) are likely to have higher-than-average background levels of radon progeny [NCRP 1975]. The NIOSH risk assessment uses a fitted estimate of 0.4 WLM as the background exposure (the average annual cumulative exposure incurred in nonoccupational environments) in the Colorado Plateau. Although no adequate measurement data are available to characterize the level and variability of exposure in the homes of the general population and of miners, it is clear that everyone is exposed and that the degree of exposure depends on each individual's home and work environment.
The limits of concentration detection and the accuracy of the measurement techniques become a potential problem when quantifying the very low radon progeny exposures and concentrations that may be found in the ambient environment (these levels are generally much lower than those found in underground mining environments). Extrapolation of risk to such low levels would be even more problematic than at higher levels because of the reduced accuracy of measuring such concentrations.
The elimination of radon progeny from the ambient and mining environments is not possible. The primary engineering method for controlling radon progeny exposure is dilution ventilation; however, if radon progeny are ubiquitous in our ambient environment, no source of air is free of contamination. The only protective equipment that would eliminate exposure to radon progeny is the self-contained breathing apparatus (SCBA). SCBAs are unacceptable for wear in the ambient environment and represent a significant safety hazard if worn extensively in the mining environment.
4. It is valuable to compare the lung cancer risks associated with the miner's occupational and background exposures. No assessment has yet been made of the lung cancer risk associated with background exposures, either for the general U.S. population or for the population living in the Colorado Plateau. Until studies in homes are completed, it will be impossible to directly contrast the risks of occupational and background exposures. Nonetheless, it is important to consider occupational risk in the context of the lung cancer risk experienced by the general population. On the basis of State vital statistics records, NIOSH estimates that the lifetime risk of lung cancer in the Colorado Plateau, uncorrected for smoking, is approximately 45 lung cancers per 1,000 white males. Unfortunately, accurate lung cancer rates are not available for nonsmokers in the Colorado Plateau.
Given this value for the lung cancer risk of background exposure, another question that must be considered is to what level it would be reasonable to control occupational risk. In its benzene decision, the U.S. Supreme Court gave the following example as the basis for evaluating the occupational risk of chemically induced leukemia: an exposure associated with 1 excess death per 1 million exposed persons might pose an acceptable risk, whereas an exposure associated with 1 excess death per 1,000 exposed persons would pose a significant risk that should be reduced. This example is useful, but it cannot be strictly applied in all cases, since it was offered as an illustration and not a fixed rule. In the specific case of lung cancer risk associated with radon progeny exposure, the example cannot be strictly applied because the background risk of lung cancer is much greater than the risk of leukemia at background exposure levels and because cigarette smoking is known to create a confounding effect that greatly increases risk. An excess of 1 lung cancer death per 1,000 would probably not be detectable in a general population if the data were subject to considerable uncertainty, since it would necessitate differentiating between 46 and 45 deaths per 1,000.
5. The technical feasibility of achieving lower exposure levels is subject to the limitations of available technology. A report commissioned by the Bureau of Mines [Bloomster et al. 1984a, 1984b] has been the primary source for NIOSH's assessment of the technical feasibility of lower exposure levels.
On the basis of an extensive engineering analysis of two uranium mines, these investigators indicated that it might be feasible to meet an operating standard of 1 WLM using the best available engineering controls. They expressed doubt about the technical feasibility of operating these uranium mines at a standard of 0.5 WLM. This analysis has been used by NIOSH to define exposure limits that are technically achievable with the best available technology.

These are some of the issues that have been considered by NIOSH in developing a recommended standard to prevent lung cancer associated with exposure to radon progeny. This process of weighing risk from a public health perspective parallels the philosophy of risk presented in the 1985 document entitled Risk Assessment and Risk Management of Toxic Substances: A Report to the Secretary [CCERP 1985]. In the process of developing this public health perspective, NIOSH has performed the following actions:

• NIOSH has identified a public health hazard posed by an occupational exposure. Exposure to radon progeny, which occurs in underground mines and in the ambient environment, has been shown to cause a significant increase in lung cancer among uranium and other underground miners.

• NIOSH has developed recommendations in a manner that is prudent and in concert with the public need. This process was accomplished by complying with the Institute's legislative mandates to attain the highest level of health protection while also considering other factors such as technical feasibility.

• NIOSH has sought appropriate public participation by eliciting external reviews. Reviews were requested from more than 60 individuals or groups, including industry, labor, academia, and government representatives. NIOSH has received and considered the comments from more than 30 of these reviewers.

• NIOSH has communicated risk understandably to both experts and lay persons. NIOSH has expressed risk as relative risk, which most epidemiologists and biostatisticians believe to be the most appropriate mode of expressing human cancer risk. NIOSH has also expressed risk as lifetime excess risk per 1,000 miners, an expression that can easily be interpreted by both experts and lay persons.

• NIOSH has used all currently available information and the most extensive pertinent set of human data to estimate risk. The USPHS study of uranium miners [Lundin et al. 1971] is the most extensive set of data available in the United States on the radon progeny exposure of underground miners. Unlike other analyses, the NIOSH risk assessment used the entire qualifying cohort, regardless of exposure level; the investigators considered this the most valid way to analyze the epidemiologic data using the selected model.

• NIOSH has considered alternative recommendations for risk control that are based on viable exposure limits and engineering controls. The options considered ranged from proposing no REL to prohibiting all occupational exposure to radon progeny. The REL presented in this document was chosen after a thorough weighing of the available data and Institute mandates.

• NIOSH has advanced the process of risk assessment and policy development by conducting the most thorough risk assessment possible on underground miners exposed to radon progeny. The NIOSH risk assessment permitted estimation of lung cancer risk at the lower range of the observed exposure levels.
The assessment also suggested meaningful research areas, such as lung cancer risks at low radon exposure levels, synergistic effects of other exposures such as cigarette smoke, effects of radon progeny on late- versus early-stage cancer, and the need for improved engineering control of exposure.

An extensive and complicated process has been used to develop the recommendations in this criteria document. After weighing the conclusions drawn from available data, the mandates of the Institute, and the public health issues, NIOSH recommends an annual cumulative exposure limit of no more than 1 WLM. However, as stated earlier, even this exposure poses a significant risk of lung cancer over a working lifetime. Thus NIOSH further recommends that mine operators regard this REL as an upper limit and that they make every effort to limit radon progeny to the lowest possible concentrations. In addition, NIOSH wishes to emphasize that this standard contains many important provisions in addition to the annual exposure limit. These include recommendations for limits on work-shift concentrations of radon progeny, sampling and analytical methods, recordkeeping, medical surveillance, posting of hazard information, respiratory protection, worker education and notification, and sanitation. All of these recommendations help minimize risk.

NIOSH recognizes its commitment to protect the health of the Nation's miners and will continue to reexamine this complex occupational health issue. Research on new and more effective methods for reducing occupational exposures will improve the available control technologies. Additional data on exposure levels and associated health risks will permit firmer estimates of risk and hence better recommendations. NIOSH will revise its recommended standard as important new data become available.

# REFERENCES

AIF [1984]. Exposure of U.S. underground uranium miners to radon daughters in 1982 as reported by 29 underground uranium mine operations, from a survey made April 24, 1984. Unpublished report submitted to Mine Safety and Health Administration by Atomic Industrial Forum.

AIF [1986]. Exposure of U.S. underground uranium miners to radon daughters in 1984 as reported by 20 underground uranium mine operations, from a survey made December 11, 1986.

Borak TB, Franco E, Schiager KJ, Johnson JA, Holub HF [1981]. Evaluation of recent developments in radon progeny measurements. In: Gomez M, ed. International Conference: Radiation Hazards in Mining. New York, NY: Society of Mining Engineers of American Institute of Mining, Metallurgical, and Petroleum Engineers, Inc., pp. 419-425.

Boyd JT, Doll R, Faulds JS, Leiper J [1970]. Cancer of the lung in iron ore (haematite) miners. Br J Ind Med 27:97-105.

Breslin AJ, George AC, Weinstein MS [1969]. Investigation of the radiological characteristics of uranium mine atmospheres. New York, NY: Atomic Energy Commission, New York Operations Office, Health and Safety Laboratory. HASL-220, December 1969.

Brookins DG [1986]. Indoor and soil Rn measurements in the Albuquerque, New Mexico, area. Health Phys 51:529-533.

In fifteen epidemiologic studies, researchers reported excess lung cancer deaths among underground miners who worked in mines where radon progeny were present. In addition, several studies show a dose-response relationship between radon progeny exposure and lung cancer mortality.
In two recent studies, investigators reported excess lung cancer deaths at mean cumulative radon progeny exposures below 100 Working Level Months (WLM) (specifically, at 40-90 WLM and 80 WLM). The health risks from other exposures in the mining environment (i.e., arsenic, diesel exhaust, smoking, chromium, nickel, and radiation) can affect the lung cancer risks due to radon progeny exposure. Unfortunately, the literature contains limited information about other exposures found in mines. The available information concerning whether cigarette smoke and radon progeny exposures act together in an additive or multiplicative fashion is inconclusive; nevertheless, a combined exposure to radon progeny and cigarette smoke results in a higher risk than exposure to either one alone.

X-ray surveillance and sputum cytology appear to be ineffective in preventing radon progeny-induced lung cancers in individual miners; therefore, these techniques are not recommended. Also, at this point, there is insufficient evidence to conclude that there is an association between one specific lung cancer cell type and radon progeny exposure.

According to annual radon progeny exposure records from the Atomic Industrial Forum (AIF) and MSHA, it is technically feasible for the United States mining industry to meet a standard lower than the current annual exposure limit of 4 WLM. Recent engineering research suggests that it is technically feasible for mines to meet a standard as low as 1 WLM. Based upon qualitative analysis of these studies and public health policy, NIOSH recommends that the annual radon progeny permissible exposure limit (PEL) of 4 WLM be lowered. NIOSH wishes to withhold a recommendation for a specific PEL until completion of a NIOSH quantitative risk assessment, which is now in progress.

# II. INTRODUCTION

The National Institute for Occupational Safety and Health (NIOSH) submits this report in response to the Mine Safety and Health Administration's (MSHA) Advance Notice of Proposed Rulemaking (ANPR) concerning radiation standards for metal and nonmetal mines. This report evaluates fifteen epidemiologic studies that examine the lung cancer mortality of underground miners exposed to radon progeny. The fifteen studies are divided into two groups: five primary studies and ten secondary studies. Overall, the ten secondary studies provide additional information about the association between lung cancer mortality and radon progeny exposure, yet they have more limitations (in study design, study population size, radon exposure records, thoroughness of follow-up, etc.) than the five primary studies. Recommendations for the medical surveillance of underground miners exposed to radon progeny are included. The ability of the United States mining industry to meet a radon progeny exposure standard lower than the present 4 Working Level Months (WLM), based solely on technical feasibility, is also discussed.

A working level (WL) is a standard measure of the alpha radiation energy in air. This energy can result from the radioactive decay of radon (Rn-222) and thoron (Rn-220) gases. A WL is defined as any combination of short-lived radon decay products (polonium-218, lead-214, bismuth-214, polonium-214) per liter of air that will result in the emission of 1.3 × 10⁵ million electron volts (MeV) of alpha energy [1]. NIOSH defines a WLM as an exposure to 1 WL for 170 hours.

For the information of the reader, two appendices and a glossary are included.
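As a concrete illustration of the WL and WLM units defined above, the following minimal sketch (Python) applies the 170-hour convention; the 1,500-hour year reflects the full-time threshold used later in this report, and the 0.3-WL concentration is an arbitrary example, not a measured value.

```python
# WLM bookkeeping as defined above: exposure to 1 WL for 170 hours = 1 WLM,
# so cumulative exposure in WLM = (average WL) * (hours exposed) / 170.
def working_level_months(avg_wl: float, hours: float) -> float:
    """Cumulative radon progeny exposure in working level months."""
    return avg_wl * hours / 170.0

# Example: a full-time miner (1,500 h/yr underground, the full-time
# threshold used in the AIF data) at an average concentration of 0.3 WL.
print(f"{working_level_months(0.3, 1500):.2f} WLM per year")  # ~2.65 WLM
```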
Appendix A contains data from the Atomic Industrial Forum (AIF), an organization representing the interests of the United States uranium mining industry, and from MSHA on the numbers and radon progeny exposures of underground miners in the United States. Appendix B lists methods currently in use for controlling radon progeny exposures underground. Finally, there is a glossary containing epidemiologic and health physics terms.

# III. EVALUATION OF EPIDEMIOLOGIC EVIDENCE

# A. Introduction

This report examines five primary and ten secondary epidemiologic studies of underground miners. It describes the important points, strengths, and limitations of each study. The five primary epidemiologic studies examined lung cancer mortality among uranium miners in the United States, Czechoslovakia, and Ontario; iron miners in Malmberget, Sweden; and fluorspar miners in Newfoundland. The ten secondary epidemiologic studies examined mortality among iron ore miners in Grangesberg, Gallivare, and Kiruna, Sweden; zinc-lead miners in Sweden; metal and Navajo uranium miners in the United States; tin and iron ore miners in Great Britain; uranium miners in France; and tin miners in Yunnan, China. Finally, two recent studies analyze the interaction between radon progeny exposure and smoking.

This report focuses on the lung cancer experience of these fifteen underground mining cohorts. In general, the study cohorts did not show excess mortality due to cancers other than lung cancer, except for four studies that reported excess stomach cancers and one report of excess skin cancer among underground miners. Excess stomach cancers were reported among underground tin miners in Cornwall, England (standardized mortality ratio (SMR) = 200; p value unspecified by the authors but estimated at p<0.05 from the observed deaths and the Poisson frequency distribution) [2]; gold miners in Ontario (SMR = 148, p<0.001) [3]; metal miners in the United States (SMR = 149, p<0.01) [4]; and iron ore miners in Sweden (SMR = 189, p<0.01) [5]. Sevcova et al. (1978) [6] reported excess skin cancers among underground uranium miners in Czechoslovakia (an observed skin cancer incidence of 28.6 versus an expected of 6.3 per 10,000 workers; p<0.05) that they attributed to external alpha radiation from radon progeny. Arsenic is present in the Czechoslovakian uranium mines (arsenic levels unidentified) [7], and the association between arsenic and skin cancer is well documented [8,9]. The excess mortality from stomach and skin cancers among these cohorts needs further study.

In all five primary epidemiologic studies, the exposure records for the individual miners lack precision. Frequently, an individual miner's exposure was calculated from an annual average radon progeny exposure estimate for a particular mine or mine area; thus, an individual miner's true exposure could vary greatly from the estimated exposure. Of the five primary epidemiologic studies, the Czechoslovakian study has the best records for radon progeny exposure [10]. The Swedish study has limited exposure records for its cohort (8 years of measurements for 44 years of follow-up), and the miners' mean exposures were about 5 WLM per year [5]. The lower radon progeny concentrations found in Swedish mines indicate that the potential error due to excursions in concentration was less than in mines in the United States, Newfoundland, and Ontario, where higher concentrations were measured (Table III-2).
Overall, the radon progeny exposure records from the United States, Ontario, and Newfoundland have similar limitations (detailed in sections B, D, and F). WL measurements made in uranium mines in the United States and fluorspar mines in Newfoundland fluctuated greatly, reaching unusually high radon progeny concentrations: in the fluorspar mines, a maximum of 200 WL [11], and in the uranium mines, 3 out of 1,700 mines averaged over 200 WL [12]. NIOSH is currently investigating the variability and quality of the exposure records kept for uranium mines in the United States. Exposure data quality, although important, does not solely determine a study's strength; one should also evaluate the epidemiologic and statistical methods used. This review reports both the attributable and relative risk estimates for lung cancer (see Glossary for definitions) when they are provided by the authors [3,5,13,14].

# B. Uranium Miners in the United States

# Description

The United States Public Health Service (USPHS) conducted an epidemiologic study examining mortality among underground uranium miners from the Colorado Plateau [12,15]. Job turnover in the uranium mines was substantial; the majority of miners worked less than 10 years underground (not accounting for gaps in employment) [14]. Nevertheless, approximately 33 percent of the cohort worked 10 or more years and 7 percent worked 20 or more years underground in uranium mines (not accounting for gaps in employment) [15]. The number of months worked underground ranged from 1 to 370 (over 30 years), with a median of 48 months (4 years). Some miners worked underground in uranium or nonuranium mines before they entered the USPHS study, and before radon progeny levels were recorded. Among these miners, 13.7 percent started mining before 1947 [15]. The cohort's early radon progeny exposures probably represented a small proportion of their total lifetime exposures; Lundin et al. (1971) noted that the study group accumulated only 16 percent of their total radon progeny exposure before 1950 [12].

A bias toward overestimating exposure and a narrow sampling strategy were two major influences affecting the miners' exposure records. First, some of the USPHS exposure data records were biased by including disproportionately more measurements from mine areas with high radon progeny levels. Radon progeny samples taken during 1951-1960 were stated to be representative of the mine areas in which miners received exposures. Also, the U.S. Bureau of Mines (USBOM), the New Mexico State Health Department, and the Arizona mine inspector continued to take representative samples after 1960 [12]. During 1960-68, however, additional radon progeny samples were collected for control purposes by mine inspectors from Colorado, Utah, and Wyoming [12]. In this case, inspectors sampled disproportionately more mines and mine sections that had high radon progeny levels. This sampling bias also tended to increase estimates for geographic areas of mining (locality, district, or state) [12]. Thus, some average annual WL exposure records collected during 1960-68 overestimated the uranium miners' exposure. Second, there is little exposure data available for some uranium mines, especially small mines. For the entire period 1951-68, nearly 43,000 measurements were available to characterize about 2,500 uranium mines. More samples were usually taken in the larger mines that employed most of the miners. In many mines, however, only one or two samples were ever taken [12].
At the present time, the USPHS exposure data set has 34,120 "average" (undefined by Lundin et al. 1971) annual WL exposure records from 1,706 surface and underground uranium mines, made over a 20-year period [12,17]. These records consist of "guesstimates," "estimates," "extrapolations," and actual WL measurements (Table III-1). The authors of the 1981 follow-up used the formula for attributable risk to determine that about 80 percent of the deaths due to lung cancer in this cohort were attributable to uranium mining [15] (with observed deaths O and expected deaths E, the fraction attributable to the exposure is (O − E)/O, so an 80-percent attributable fraction corresponds to about five times the expected mortality). As of 1971, statistically significant excess cancers were found in all radon progeny exposure categories above 120 WLM [12]; the exposure categories (in WLM) were: less than 120, 120-359, 360-839, 840-1799, 1800-3719, and 3720 and over. NIOSH continues to monitor the mortality experience of this cohort, particularly those workers exposed at or below 120 WLM. The terms "guesstimates," "estimates," and "extrapolations" were defined in this manner by Lundin et al. (1971) [12]; NIOSH recognizes the limitations of these definitions, but uses them for consistency with published reports.

# Strengths

This is a large, well-traced, and well-analyzed study; the study cohort is clearly defined. It contains smoking histories and radon progeny exposure records for the same individuals. Although the radon progeny exposure data were measured by different persons, a standard sampling and counting technique was used and the technical quality of the measurements was good [12].

# Limitations

The major limitations in the exposure data quality are that there were few measurements for small mines (although fewer miners worked in these mines), miners' work histories were self-reported, and many exposures were overestimated during 1960-68 [12]. Another limitation is that many miners fell into high radon progeny exposure categories; however, 20 percent of the miners were assigned to the category below 120 WLM [18]. Several reviewers have found that the USPHS study gives lower estimates of risk per WLM for radon progeny exposure than the other four major epidemiologic studies [19,20,21]. This may be due to the overestimation of exposure by Lundin et al. (1971) [12] or to other factors.

# C. Uranium Miners in Czechoslovakia

# Strengths

One positive feature of this study is the large amount of exposure data available. Radon gas measurements started in 1948, with a minimum mean of 101+8 measurements per mine [10]. Other strengths include the number of workers exposed to low radon progeny levels, a long period of follow-up (average of 26 years by 1975) [24], and the limited exposure to radon progeny from other underground mining (less than 2 percent of the study group members mined nonuranium ores) [10]. In addition, Sevc et al. (1984) investigated the hazards from other exposures, such as silica, arsenic, asbestos, chromium, nickel, and cobalt, and concluded that these were not causing the excess lung cancer risk of the uranium miners [7]. Sevc (1970) reported maximum dust levels between 2.0 and 10.0 mg/m³ during 1952-56, and stated that the miners' risk of silicosis was relatively low [26]. Chromium, nickel, and cobalt were present only in trace amounts in mine dusts. Although arsenic was present in these mines (concentration unspecified), there was no significant difference in lung cancer mortality between two mining areas with comparable radon progeny exposure levels but fiftyfold differences in arsenic concentrations [27,28,29,30,31,32].
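A key caveat to these radon gas measurements, taken up under Limitations below, is that converting gas concentrations to working levels requires the radon-progeny equilibrium ratio. The following minimal sketch (Python) shows the conversion, using the convention that 100 pCi/L of radon at 100 percent equilibrium corresponds to about 1 WL (cited later in this chapter) and, as an example, the 0.22 mean equilibrium factor reported for the French mines.

```python
# Radon gas -> working level conversion: at 100% equilibrium, 100 pCi/L of
# radon corresponds to about 1 WL; an equilibrium factor F < 1 scales that down.
def radon_gas_to_wl(radon_pci_per_liter: float, equilibrium_factor: float) -> float:
    return (radon_pci_per_liter / 100.0) * equilibrium_factor

# Example: 100 pCi/L of radon gas at the 0.22 equilibrium factor reported
# for the French mines corresponds to about 0.22 WL.
print(radon_gas_to_wl(100.0, 0.22))
```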
# Limitations

One limitation of the Czechoslovakian study is that the exposure estimates made before 1960 were based on radon gas, rather than direct radon progeny, measurements. A second limitation is that the cohort definition and the epidemiologic methods used by the Czechoslovakian researchers make it difficult to compare their findings with those from the other four primary studies.

The radon gas and progeny equilibrium ratio is necessary to correctly estimate WL concentrations from radon gas measurements. The authors provided insufficient detail about the equilibrium ratio in the Czechoslovakian uranium mines to allow evaluation of the data quality [10]. If Sevc et al. (1976) had equilibrium ratio records or a reliable way to estimate the equilibrium ratio, then using radon gas exposure measurements to estimate WL would not seriously bias their results.

Sevc, Kunz, and associates defined their cohort as men who entered employment in the Czechoslovakian uranium mines in the years 1948-1953 (for Group A miners) and worked underground at least 4 years [22]. It is unclear from the published reports whether the Czechoslovakian miners accumulated their person-years at risk of dying (PYR) from the time they entered the cohort or from their time of first exposure. The cohort's average of 26 years of follow-up by 1975 [25] implies that the PYR were accumulated from a miner's time of first exposure [33]. In most epidemiologic studies, a miner's PYR accumulate after he enters the cohort. The Czechoslovakian method of accumulating PYR makes it difficult to directly compare their lifetable analysis and findings with those from other miner studies. Sevc et al. (1984) also neglected the effect of smoking in their data analysis, although they stated that this would not affect their results because the percentage of cigarette smokers among miners (70 percent) was comparable to that among the general male population of Czechoslovakia [7].

# D. Uranium Miners in Ontario

# Strengths

This study's greatest strength lies in the miners' low mean cumulative exposures (40-90 WLM) to radon progeny, exposures much lower than those reported in the United States, Czechoslovakian, and Newfoundland studies (see Table III-2, at the end of this chapter). Another good feature of this study is that the researchers carefully traced the uranium miners' work experience in other hard rock mines. Large numbers of uranium miners in Ontario (66 percent of the study cohort) had some hard rock mining experience.

# Limitations

This study has three disadvantages. First, the cohort is severely truncated, with only about 18 years (median value) of follow-up and a median attained age of 39 years by 1977 [34]. A short follow-up on a young cohort creates problems because lung cancer is rarely manifested before age 40 [20,21]. Second, thoron progeny and gamma radiation levels vary and can reach substantial levels in some Ontario uranium mines [35,36,37]. For example, Cote and Townsend (1981) found that thoron progeny working levels were about half the radon progeny working levels in an Elliot Lake, Ontario, uranium mine [37]. The Kusnetz method is frequently used to measure radon progeny in mines and can discriminate between radon and thoron progeny. When used improperly, however, the Kusnetz method can mistakenly count thoron progeny as radon progeny, so that the true radon progeny exposure may be overestimated [37].
From the limited information in the published reports [3,34], it is unclear whether measurement error was introduced by using the Kusnetz method improperly. There are no epidemiologic data available to estimate the health risks due to thoron progeny. The Advisory Committee on Radiological Protection of the Canadian Atomic Energy Control Board (AECB) reviewed research on microdosimetry which indicated that the main contribution to the WLM from thoron progeny comes from the radioactive decay of long-lived Pb-212 (ThB; half-life = 10.6 hours). Its half-life is long enough for the Pb-212 to translocate from the lungs into other tissue, where it emits much of its alpha energy. Radon progeny have shorter half-lives than Pb-212 and emit most of their alpha energy in the lung. Therefore, the AECB concluded that the risk of lung cancer induction by 1 WLM of thoron progeny is about one third of that for 1 WLM of radon progeny [38].

Finally, Muller et al. (1985) published limited information about the smoking habits of these miners, and the researchers' present risk estimates are uncorrected for smoking [3]. Out of a group of 57 uranium miners who died of lung cancer, only one was a nonsmoker; the rest smoked [39]. Muller and associates plan to conduct a case-control study of the effects of smoking upon lung cancer risk in miners. Although they stated that correction for smoking will not substantially change their risk estimates [3], at low levels of radon progeny exposure it is important to take into account the effect of smoking; thus, definitive conclusions regarding this study must await the smoking history analysis.

# E. Iron Miners in Malmberget, Sweden

# Strengths

The strengths of this study include the relatively low radon progeny exposures of the miners (mean of 4.8 WLM per year), the long follow-up period, and the stability of the work force. The ascertainment of vital status (99.5 percent) and the confirmation of diagnoses for causes of death were thorough (about 50 percent of all deaths in Sweden are followed by autopsy). In addition, Radford and St. Clair Renard (1984) used case-control methods and environmental measurements to rule out health risks from diesel exhaust, iron ore dust, silica, arsenic, chromium, nickel, and asbestos in the mines [5].

# Limitations

The major limitations of the iron miners' study were the limited exposure data available for analysis and an unclear cohort definition; there was also a question about how the authors adjusted for lung cancer latency. Radon gas in the Swedish iron mines was first measured in 1968. That means that for the average 44 years of follow-up, there exist exposure estimates based on actual measurements for only 8 years. The researchers reconstructed past concentrations based on measurements made at each mine level and area during 1968-1972 and on knowledge of the natural and mechanical ventilation used previously. They assumed that mine ventilation systems and radon progeny concentrations during 1968-72 were comparable with those in the past, by analogy with quartz dust levels measured in the mines since the 1930's [5]. The researchers calculated average yearly exposures in WLM for each decade from the average hours per month underground and the radon progeny concentrations in each area, weighted by the number of man-hours worked underground [5]. These crude calculations make the connection between a given individual miner and a particular radon progeny exposure level tenuous.
Nonetheless, the iron miners as a group probably received very low average exposures to radon progeny compared to uranium miners [5,19]. Radford et al. stated: "...we consider that average exposures are probably accurate to ±30 percent" [5]; thus, with a mean cumulative exposure of about 80 WLM, the true average exposure could be between 56 and 104 WLM.

Exactly how Radford and St. Clair Renard defined the cohort, and calculated or excluded the PYR, was unclear from the article. To account for a 10-year lung cancer latency, they excluded PYR for lung cancer during the first 10 years after mining was begun [5]. From their description, it is unclear when mining was begun and whether PYR were counted from the beginning of mining, from January 1, 1951, or from some other date. It is assumed that most of the miners' PYR were excluded from the years prior to 1951, rather than from the period 1951-1976 (the years when the authors analyzed mortality), and that the mining population was stable. If one makes these assumptions (unstated by the authors), then adjusting for latency by excluding PYR during the first 10 years after the start of mining should produce unbiased SMR calculations. On the other hand, adjustments for latency that incorrectly exclude many PYR lower the expected number of deaths, thereby possibly overestimating the SMR and the risk due to radon progeny. Because of insufficient information, NIOSH is unable to completely evaluate the effect of the 10-year adjustment for latency on the SMR in this study, although it appears to be minor.

# F. Fluorspar Miners in Newfoundland

# Strengths

One strength of this study was the long follow-up period; workers were followed for an average of about 30 years [11,19]. Also, the researchers obtained smoking history data for 41 percent of the cohort [11].

# Limitations

There were three principal limitations in this study. First, there were limited exposure data available before 1968 (see above). Second, the study failed to trace large numbers of workers; 591 workers who lacked adequate personal identifying information (name and year of birth) were dropped from the analysis. Third, this study lacks an adequate basis for estimating expected deaths. Lung cancer rate comparisons between the mining population, with its many smokers, and the Newfoundland or Canadian national populations would exaggerate the excess deaths due to radon progeny exposure. Morrison et al. (1985) tried to avoid this problem by generating the expected number of deaths among underground workers from a comparison with mortality rates among surface workers (adjusted for age, time period, and disease-specific mortality) [11]. A problem with this study design is that the control group may have been exposed to radon progeny. Some of the men classified as surface workers (controls) may have received some radiation exposure, by means of either misclassification or unrecorded short periods of working underground. Also, it is difficult to correctly adjust for age, time period, and disease-specific mortality when there are proportionately fewer workers in the control group (surface workers) than in the exposed group (as of 1971, underground workers accounted for 57 percent of the total person-years [40]). The lack of an adequate comparison group is a serious limitation, so risk estimates from this study must be viewed with caution.

# G. Secondary Epidemiologic Studies

The ten epidemiologic studies reviewed herein examine mortality among miner populations in Sweden, the United States, Great Britain, France, and China.
Several studies demonstrated elevated radon progeny levels and excess lung cancer deaths among underground miners but lacked information about radon progeny exposures or about levels of other mine carcinogens. Other studies contained severe limitations or biases that also restricted their usefulness. Overall, the ten secondary studies provide additional information about the association between lung cancer mortality and radon progeny exposure, yet they have more limitations (in study design, study population size, radon exposure records, thoroughness of follow-up, etc.) than the five primary studies. To be concise, the secondary studies are described in less detail than the primary studies.

# 1. Iron Ore Miners in Grangesberg, Sweden

# 2. Zinc-Lead Miners in Sweden

This case-referent study examined lung cancer mortality during 1956-76 among residents of the parish of Hammar, Sweden, an area with two zinc-lead mines [42]. Twenty-nine subjects who died of lung cancer, including 21 who were underground miners, were matched with three referents who died before or after each case. Some problems with the study were the small number of cases and a failure to match for age or smoking status. Axelson and Sundell (1978) reported a sixteenfold increase (p<0.0001) in lung cancer mortality among the miners versus nonminers. Although they lacked individual information on exposure, they estimated a radon progeny level of about 1 WL in the mines, based on measurements made in the 1970's [42]. These results should be viewed with caution, since the authors demonstrated that age was a confounding factor yet did not match cases and referents for age.

# 3. Iron Ore Miners in Kiruna, Sweden

This study examined lung cancer mortality among residents of the Kiruna parish in northern Sweden, an area containing two underground iron mines [43]. One strength of this study is that migration in the Kiruna area was slight; therefore, nearly all former miners' deaths were registered in Kiruna. From 1950 to 1970, a total of 41 men in Kiruna between the ages of 30 and 74 years died of lung cancer. Thirteen of these were underground miners, and it is possible, although unclear in the report, that 18 were surface workers. One limitation of this study is that the age distribution of underground miners was unrecorded; therefore, proportional mortality was used instead of the lifetable method to calculate the expected mortality. Another limitation is that the expected mortality was not adjusted for smoking status, although information from family and fellow workers indicated that 12 of the 13 underground miners smoked (8 smoked cigarettes, 4 smoked pipes). Jorgensen (1973) compared the 13 deaths observed among underground miners with expected deaths of 4.47, based on local rates, and 4.21, based on Swedish national rates. In both cases, he reported significantly elevated mortality (p<0.05) among the underground miners [43]. Because this proportional mortality study involved few lung cancer cases (13 for underground miners and 28 for all other men in Kiruna), the results should be viewed with caution. Radon progeny exposure records were unavailable for the underground miners; however, there were measurements of 10-100 pCi/L radon progeny (about 0.10-1.0 WL at 100 percent equilibrium).

# 4. Iron Ore Miners in Kiruna and Gallivare, Sweden

# 5. Metal Miners in the United States

This cohort mortality study involved white male underground metal miners in the United States.
The cohort was defined as miners who had completed, at a minimum, their fifteenth year of underground mining experience between January 1, 1937, and December 31, 1948. For the period 1947-1955 there were no radon measurements available; however, a committee of experts estimated that average monthly radon progeny exposures varied from 1 to 10 WLM.

# Tin Miners in Cornwall, Great Britain

# Uranium Miners in France

In 1956, 7,470 radon gas measurements were collected. From 1957 to 1970, about 20-30 radon gas measurements were collected per miner per year; from 1970 to the present, 57-70 per miner per year. The only limitation of these records is that they are based on radon gas, rather than direct radon progeny, measurements. At present, the mean factor of equilibrium in the French mines is 0.22. The miners' average annual radon progeny exposures varied from 2.5 to 4 WLM.

# Tin Miners in Yunnan, China

There were many excess lung cancers at low radon progeny exposures, i.e., an SMR of 436 (p value unspecified by the authors but estimated at p<0.05 from the number of observed deaths and the Poisson frequency distribution) at cumulative exposures below 140 WLM. Arsenic concentrations in ore samples were high, 1.50-3.53 percent [49]. For the years 1950-59, it was estimated that a miner inhaled 1.99-7.43 mg of arsenic per year [48]. The authors suggested that the high arsenic content in the ore samples may cause lung cancer [49]. The strength of this study lies in the large number (12,243) of underground miners studied. One limitation is that the study cohort is ill-defined; the study design mixes aspects of an incidence survey with a cohort study. Wang et al. (1984) [49] fail to describe when the workers started mining and how many were lost to follow-up, or whether the 12,243 miners worked between 1975-81 or constituted all tin miners who ever worked underground. The major limitation appears when comparing these studies with other mining research studies, because Wang and associates handled radon progeny measurement techniques and epidemiologic methods in a different manner. For instance, they did not mention whether their mortality statistics were adjusted for age or smoking status. Their comparison population, male residents of the urban Shanghai municipality, has much higher lung cancer rates than males in rural Yunnan province [50]; therefore, the Shanghai comparison group was inappropriate and may have underestimated these miners' lung cancer risks. Another limitation is that arsenic exposure has been associated with lung cancer among copper smelter and pesticide workers [8,9]. This research may be most useful for studying the interaction of two carcinogens, arsenic and alpha radiation from radon progeny, rather than for studying radon progeny lung cancer risks alone.

# H. Smoking

The two most thorough studies of the interaction between smoking and radon progeny exposure are those by Whittemore and McMillan (1983), using the U.S. white uranium miners data set [14], and by Radford and St. Clair Renard (1984), using the Swedish iron miners data set [5]. The major flaw in other studies of the interaction between smoking and radon progeny exposure [13,16,42,51] is an inadequate sample size of miners with both exposure records and smoking histories.

# 1. Uranium Miners in the United States

Whittemore and McMillan (1983) examined lung cancer mortality among the white USPHS uranium miners cohort, based on a mortality follow-up through December 31, 1977.
In their analysis, they included nine additional miner lung cancer deaths that occurred after December 31, 1977, for a total of 194 lung cancer cases [14] (see section III.B). For each case, four control subjects were randomly selected from among those white miners born within 8 months of the case and known to survive him, yielding a total of 776 matched controls [14]. A regression analysis of the radon progeny exposure and smoking data for cases and controls revealed that the data fit a multiplicative linear relative risk model, R = (1 + B1·WLM)(1 + B2·PKS), but showed a "significantly poor fit" (p<0.01) for the additive linear relative risk model, R = 1 + B1·WLM + B2·PKS [14]. The data demonstrated a synergistic effect; that is, the combined action of smoking and radon progeny was greater than the sum of the actions of each separately [14].

Whittemore and McMillan, based on the multiplicative linear relative risk model, suggested that miners who have smoked 20 pack-years of cigarettes (excluding tobacco use within the past 10 years) experience radiation-induced lung cancer rates per WLM that are roughly five times those of nonsmoking miners. (They estimated that B1, the excess relative risk per unit of radon progeny exposure, was 0.31 × 10⁻², and that B2, the excess relative risk per unit of cigarette smoke exposure, was 0.51 × 10⁻³.)

# 2. Iron Miners from Malmberget, Sweden

# 3. Conclusions Related to the Interaction of Radon Progeny Exposure and Smoking

Studies of white uranium miners in the United States [14] and iron miners in Sweden [5] support different models of risk due to radon progeny and smoking; the first supports a multiplicative model, the second an additive model. That these two studies arrive at different conclusions is not surprising, given the differences between the studies in statistical methods, cumulative exposure levels (the averages differed by a factor of 10), smoking histories, and methods of calculating expected deaths. Based on the presently available information, it is impossible to conclude whether the additive or the multiplicative model is better. Nevertheless, present research indicates a higher risk from combined exposure; data from both radiation exposure and smoking histories are essential for an accurate estimation of radiogenic lung cancer risks.

# I. Discussion and Conclusions Related to the Epidemiologic Evaluation

1. The Five Primary Epidemiologic Studies

The five primary epidemiologic studies that examine lung cancer mortality among underground miners are the studies of uranium miners in the United States, Czechoslovakia, and Ontario, as well as iron miners in Sweden and fluorspar miners in Newfoundland. Despite the individual limitations of each study, the association of radon progeny exposure and lung cancer was shown to persist in all five studies, using different study populations and methodologies. There was an elevated lung cancer SMR and a dose-response relationship for radon progeny exposure and lung cancer among the five underground miners' cohorts: the higher the estimated radon progeny exposure, the greater the number of excess deaths. Some studies [3,5,14] adjusted their mortality figures for the estimated latency of radiogenic lung cancer, yet the association between lung cancer cases and radon progeny exposure remained. Table III-2 is a summary of the observed and expected deaths and the SMR's in the five studies.
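For readers who wish to experiment with the two interaction models described above, here is a minimal sketch (Python). B1 and B2 are the values quoted above; treating PKS as a count of packs of cigarettes (so that 20 pack-years is roughly 7,300 packs) is our assumption, adopted because it reproduces the "roughly five times" figure, and should not be read as Whittemore and McMillan's exact parameterization.

```python
# Additive and multiplicative linear relative risk models described above.
# The PKS-in-packs unit (20 pack-years ~ 7,300 packs) is an assumption for
# illustration, not Whittemore and McMillan's exact parameterization.
B1 = 0.31e-2   # excess relative risk per WLM of radon progeny exposure
B2 = 0.51e-3   # excess relative risk per unit (pack) of cigarette smoke

def rr_multiplicative(wlm: float, pks: float) -> float:
    return (1.0 + B1 * wlm) * (1.0 + B2 * pks)

def rr_additive(wlm: float, pks: float) -> float:
    return 1.0 + B1 * wlm + B2 * pks

pks = 20 * 365   # ~7,300 packs for 20 pack-years
# Under the multiplicative model, the radiation-induced excess per WLM for a
# smoker is scaled by (1 + B2*PKS): about 4.7 here, i.e., "roughly five."
print(1.0 + B2 * pks)                 # ~4.7
print(rr_multiplicative(100, pks))    # ~6.2 total relative risk at 100 WLM
print(rr_additive(100, pks))          # ~5.0 under the additive model
```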
These studies handled adjustments for latency, lagging of dose, smoking history, or age as detailed in the footnotes to Table III-2. As yet, there is no one standard method for adjusting person-years, expected deaths, or SMR's, nor even agreement that these parameters should be adjusted. All five studies [3,5,10,11,12] lacked adequate radon progeny exposure data for individuals because, in general, these data were originally collected for monitoring, not research, purposes. In addition, some studies [5,10] based the exposure assessment upon radon gas measurements, which must be converted to radon progeny estimates. It is reasonable, however, to extract what information is available from these five studies, rather than eliminate a particular study because of exposure data quality.

The primary studies of iron miners in Sweden and uranium miners in Czechoslovakia searched for other exposures [9,53,54] (i.e., mineral ores, radiation, diesel fumes) in the mining environment. The Czechoslovakian uranium mines contained various amounts of arsenic, but only trace amounts of chromium, nickel, and cobalt [7,9,53,54]. Researchers examined lung cancer mortality in two uranium mining localities that had similar radon progeny levels but a fiftyfold difference in arsenic concentrations. They failed to find a significant difference in mortality between the two groups of miners [27,28,29,30,31], concluding that arsenic was not affecting the lung cancer rates of underground miners in Czechoslovakia. Arsenic, chromium, and nickel were essentially absent in the Swedish iron mines. There were occasional inclusions of serpentine, but no identifiable asbestos fibers, in dust samples [5]. The Swedish iron mines contained iron ore dust, but Stokinger (1984), after a review of the literature on health reports involving underground iron ore miners, iron and steel workers, foundrymen, welders, workers in the magnetic tape industry, and others, concluded that these studies failed to clearly demonstrate the carcinogenicity of iron oxide dust [55].

The influence of other types of radiation present in the mines, such as long-lived alpha, beta, and gamma radiations, cannot be determined from these five studies. The miners do not show an excess mortality from leukemia, a disease linked to high gamma radiation exposures [1,3,15]. Most of the studies provided insufficient information about diesel fume exposures in the mines, so it is impossible to reach conclusions regarding the effect of diesel fume exposure upon lung cancer risk. In the Swedish iron mines, 70 percent of miners with lung cancer left underground work or died before diesel equipment was introduced in the 1960's; the remaining miners had brief diesel fume exposures immediately before death [5]. Therefore, diesel fume exposure could not account for the excess mortality in the Swedish cohort [5]. Cigarette smoke appears to be the most important carcinogen common to the five primary studies. The proportion of cigarette smokers among underground miners in the United States, Newfoundland, and Sweden was greater than among the general male population in those countries [5,12,40]. The influence of possible carcinogens in mines (in addition to radon progeny) upon lung cancer mortality needs further research.
2. The Ten Secondary Epidemiologic Studies

Ten epidemiologic studies were identified by NIOSH as secondary studies; these strengthen the association between excess lung cancer mortality and radon progeny exposure, yet have more limitations (in study design, radon exposure records, follow-up, etc.) than the five primary studies. The ten epidemiologic studies examined lung cancer mortality among underground iron ore and zinc-lead miners in Sweden, metal and Navajo uranium miners in the United States, tin and iron ore miners in Great Britain, uranium miners in France, and tin miners in China. All ten studies have incomplete radon progeny exposure records. Nevertheless, all reported an elevated lung cancer mortality in underground miners and the presence of radon progeny in the mines. The studies of metal miners in the United States [4] and tin miners in China [49] also found arsenic in the mines; Wang et al. (1984) suggested that the high arsenic content of the ore may be a cause of lung cancer [8,9,49]. The study of tin miners in China found an exposure-response relationship between cumulative radon progeny exposure and excess lung cancer mortality, but at the lowest exposure level (less than 140 WLM) it still found an unusually high SMR (436) [49]. The arsenic exposures of these underground miners may contribute to the high lung cancer SMR; arsenic exposure is associated with lung cancer in copper smelter and arsenical pesticide workers [8,9]. The study of iron ore miners in Grangesberg, Sweden, estimated an attributable risk of 30-40 cases per 10⁶ PY-WLM for miners over the age of 50 [13]. This attributable risk estimate is comparable to that reported by Radford and St. Clair Renard (1984) for miners in Malmberget, Sweden, in the same age group [5].

3. The Lowest Cumulative Radon Progeny Exposures Associated with Excess Lung Cancer Mortality

The five primary epidemiologic studies are far from completion, since the cohorts' follow-ups are truncated. For example, the uranium miners in the United States were followed a mean of 19 years (by 1977), while the iron miners in Sweden were followed a mean of 44 years (the Swedish study has the longest follow-up period of the five primary studies). Lung cancer rarely is manifested before age 40, regardless of etiology [20,35]. Frequently, the initial analyses performed on a cohort lack enough PYR and statistical power to show a statistically significant association between excess lung cancer mortality and low radon progeny exposure levels. Later analyses accumulate additional PYR for the entire cohort and specific subgroups, increasing the ability to detect an effect due to radon progeny. This point is important when determining the lowest radon progeny exposures associated with excess lung cancers. A longer follow-up period, resulting in more PYR and statistical power in a study, may reveal an association between excess lung cancer mortality and radon progeny at lower cumulative exposures. The study of uranium miners in the United States by Lundin et al. (1971) [12] had an average of about 10 years of follow-up (by 1968) and found excess cancers above 120 WLM. The study of uranium miners in Czechoslovakia found excess mortality above 100 WLM [10,24]. Two recent studies, of miners in Ontario and Sweden, reported excess cancers at cumulative radon progeny exposure levels of 40-90 WLM and 80 WLM, respectively [3,5]. Thus, two epidemiologic studies found excess lung cancer mortality associated with radon progeny exposure levels below 100 WLM.
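Several of the p values quoted in this chapter were not reported by the original authors and were estimated from the observed deaths and the Poisson frequency distribution. A minimal sketch of that calculation (Python, using scipy), applied to the Kiruna figures reported earlier (13 observed versus 4.47 expected deaths):

```python
# SMR and one-sided Poisson p-value from observed and expected death counts,
# using the Kiruna proportional-mortality figures quoted earlier.
from scipy.stats import poisson

observed, expected = 13, 4.47
smr = 100.0 * observed / expected                   # ~291
p_one_sided = poisson.sf(observed - 1, expected)    # P(X >= 13 | mean 4.47)
print(f"SMR = {smr:.0f}, one-sided Poisson p = {p_one_sided:.4f}")  # p < 0.05
```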
In addition, studies suggest that both radon progeny exposure and smoking are involved in the lung cancer mortality of underground miners; however, the available information does not allow one to state whether radon progeny and smoking interact in an additive or multiplicative fashion [5,14]. One estimate is that miners who smoked 20 pack-years of cigarettes have radiation-induced lung cancer rates per WLM that are roughly five times those of nonsmoking miners [14]. Finally, the five primary and ten secondary mining epidemiologic studies all demonstrate excess lung cancer mortality among underground miners working in the presence of radon progeny.

# IV. MEDICAL SCREENING AND SURVEILLANCE OF UNDERGROUND MINERS EXPOSED TO SHORT-LIVED ALPHA PARTICLES

# A. Qualities of Effective Medical Screening and Surveillance

# B. Screening and Lung Cancer Prevention

Alpha radiation-induced lung cancer may be preventable (by limiting exposures to radon and thoron progeny), but it is not treatable. By the time radiogenic lung cancer is detected among individuals in an exposed work force by routine periodic screening, the affected workers can no longer benefit from further preventive or therapeutic measures.

Available screening tests may detect radiation-induced, premalignant abnormalities in asymptomatic exposed workers years before disease appears. At the current state of knowledge, however, it is unknown whether medical removal of asymptomatic workers with these abnormalities will prevent progression to malignant disease. A recent study by NIOSH tested the within-reader reliability of an expert in sputum cytology and histopathology. The reader reliably detected malignant changes, but frequently read early changes as "premalignant" on one occasion and as "within normal limits" on other occasions [65].

To date, there is no convincing evidence that routine periodic medical screening of workers exposed to pulmonary carcinogens is an effective means of preventing mortality due to lung cancer in these workers. Coke oven workers are presently the only group of workers covered by a mandatory rule for periodic screening by sputum cytology and chest X-rays. Although the effectiveness of that regulation has not yet been evaluated, such studies are now underway. In addition, NIOSH is currently collecting data on lung cancer rates and the results of sputum cytologic tests for some miners in the USPHS cohort [66]. Also, frequent exposures of underground miners to chest X-rays for screening purposes are not recommended at present.

# CRITERIA FOR DETERMINING THE SUITABILITY OF A CANCER SCREENING TEST FOR USE IN THE WORKPLACE

Criteria for the screening test:

Cole and Morrison (1980) [61]:
1. The test should be "effective" in terms of its validity, reliability, sensitivity, specificity, and operational characteristics (such as predictive value).
2. The test should be "acceptable" to workers in terms of its cost, convenience, accessibility, and lack of morbidity.

Halperin et al. (1984) [62]:
1. The test should be "effective" in terms of its validity, reliability, sensitivity, specificity, and operational characteristics (such as predictive value).
2. The test need not be uncomplicated or inexpensive, but its performance and interpretation must be done by competent professionals.
3. The test must be "acceptable" to workers in terms of its cost, convenience, accessibility, and lack of morbidity.
4. The test results must be evaluated by comparison to a suitable population, not necessarily the general population.
5. Action levels and related medical decisions must be determined in advance of screening (based on criterion 1 above).

Adapted from [61,62].

Criteria for the disease:

Cole and Morrison (1980) [61]:
2. Effective treatment is available if asymptomatic disease is detected.
3. A detectable preclinical phase must be highly prevalent among the screened population.

Halperin et al. (1984) [62]:
1. The disease has important individual and public health consequences.
2. The disease need not be treatable, but it must be preventable.
3. A detectable preclinical phase (DPCP) must exist, and a target population (exhibiting a high prevalence of DPCP) must be identified.
4. Follow-up care (diagnostic, treatment, and social services) must be available.
5. The natural history of the disease determines the feasibility and frequency of testing.

Adapted from [61,62].

A review of studies reporting histopathologic associations with radon progeny exposures lacked sufficient information to conclude definitively that only one specific lung cancer cell type was associated with these exposures [67]. In addition, a case-control study using data from the Third National Cancer Survey found that cigarette smoking was significantly associated with all three histologic types of lung cancer [68]; the relationship with small-cell carcinoma was strongest overall (odds ratio = 5.1), whereas those with squamous cell carcinoma and adenocarcinoma were approximately equivalent (odds ratio = 3.1). The issue of histopathologic associations with radon progeny exposures needs further research, especially considering that many underground miners smoked cigarettes. Both cessation of smoking [69] and reduction of the radon progeny exposures of underground miners will lower their risks for lung cancer.

# C. Recommendations

1. Smoking

Since it appears that inhaled radon progeny either add to or multiply the underlying high lung cancer risk in smokers, a smoking cessation program is recommended. The combined effects of a lower (more protective) permissible exposure limit (PEL) and cessation of cigarette smoking [69] would probably provide a significant reduction in lifetime risks.

2. Lung Function Tests

A baseline chest X-ray and annual spirometric lung function tests, performed and interpreted according to the criteria of NIOSH or the American Thoracic Society, would be appropriate for medical decision-making concerning job placement, medical removal protection, and disability compensation should work-related respiratory problems develop at a later time.

3. X-ray Screening

While chest X-ray screening is not an effective means of preventing death due to occupational lung cancer, examination at 5-year intervals and industry-wide analyses of the results of such tests may be an effective means of supplementing the primary prevention of other lung diseases, such as the pneumoconioses.

4. Radiation Exposure Records

The lifetime radiation exposure record of underground miners should include information about the dose and frequency of medical irradiation. If radiation-exposed workers are routinely screened for lung diseases by baseline and periodic follow-up chest X-rays, they will receive an average of about 0.025 rad of external X-irradiation per examination (where an "examination" consists of a postero-anterior and a lateral exposure) [70]. If examinations are conducted every 5 years, the average lung dose would be about 0.005 rad per year.
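The arithmetic behind these screening-dose figures, with an assumed 30-year working lifetime added purely for illustration:

```python
# Screening dose bookkeeping from the figures above: ~0.025 rad to the lung
# per two-view chest examination, one examination every 5 years.
DOSE_PER_EXAM_RAD = 0.025
EXAM_INTERVAL_YEARS = 5

annual_dose = DOSE_PER_EXAM_RAD / EXAM_INTERVAL_YEARS
print(annual_dose)        # 0.005 rad per year
print(annual_dose * 30)   # ~0.15 rad over an assumed 30-year working lifetime
```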
Furthermore, because of the frequency of on-the-job accidents and injuries in underground mining [15], underground miners may receive considerably more medical X-irradiation over a working lifetime than workers exposed to other sources of ionizing radiation. For each of the following diagnostic examinations, the approximate X-ray dose to the lung is indicated in parentheses: thoracic spine (0.421 rad), ribs (0.324 rad), lumbar spine (0.133 rad), one shoulder (0.039 rad), lumbosacral spine (0.035 rad), and skull (0.002 rad).

# V. FEASIBILITY OF LOWERING THE STANDARD

This section examines the feasibility of lowering the current radon progeny exposure standard, including differences between current exposures and lower projected standards.

# A. Comparison of Current U.S. Underground Miner Radon Progeny Exposures with Different Standards

The uranium mining industry in the United States has recorded the annual exposures of underground miners, and the Atomic Industrial Forum (AIF) figures are displayed in Appendix Tables A-9 to A-11. There has been some discrepancy between MSHA records and AIF records [73]. Overall, the industry has been successful in controlling exposures to 4 WLM. Furthermore, the percentage of miners exposed to 3 WLM or higher decreased between 1973 and 1982 (Tables A-9 to A-11, [74]). There has been substantial mobility among the uranium miners, with many people working for short periods of time at different mines, so the AIF separated the annual exposure data into two sets: "all persons assigned to work underground" and "persons who worked underground 1,500 hours or more," i.e., full time. It appears that, for most underground uranium mine workers, the mining industry already can meet a radon progeny standard below the current level of 4 WLM annual exposure. If the radon progeny exposure standard were set at 1 WLM, approximately one-third of all underground workers and less than two-thirds of the full-time underground workers would be exposed above that limit.

In the case of nonuranium mines in the United States, it is clear they can meet a lower exposure standard, based on the limited data submitted by the mining companies to MSHA (Appendix Table A-5). Of those nonuranium mine workers whose exposures to radon progeny were recorded, the upper limits of exposure varied from 0.16 to 2.20 WL, depending upon the mining industry (Appendix Table A-1). Acknowledging the limitations on the way exposure data are collected (Appendix A, section A.2.a.), the mining companies submitted information to MSHA which suggests that no more than 450 individuals are occasionally exposed to substantial radon progeny levels (i.e., 0.3 WL and above) (Appendix Table A-5). During 1983, these radon progeny-exposed employees were found in only 4 nonuranium mines, out of a total of about 574 U.S. nonuranium metal and nonmetal underground mines. Therefore, it should be feasible to control the radon progeny exposures encountered by these 450 (or fewer) miners.

# B. The Technological Capacity to Further Reduce Exposure

Some of the highest radon progeny exposures are received by people working in the smallest uranium mines, those employing fewer than ten people [73]. These small mines can probably improve their ventilation systems and reduce worker exposures. Currently, many of these small mines are not operating because of depressed prices for uranium. There are a variety of techniques besides ventilation that can reduce workers' radiation exposures (Appendix B).
In general, these techniques are more costly and less effective than ventilation. Nevertheless, these methods, in addition to ventilation, could be used to decrease the exposures of the relatively small number of uranium workers (46 during 1982) who received substantial exposures. It should be technically feasible for the mining industry to control the radon progeny exposures of these relatively few workers. Also, an engineering analysis (based on data from only two mines) suggested that it is technically feasible for uranium mines to meet a standard as low as 1 WLM, using control techniques such as ventilation, bulkheads, and backfilling.

# VI. CONCLUSION
Each of the five primary epidemiologic studies contained strengths and limitations. All of the studies [3,5,10,11,12] rely on incomplete radon progeny exposure estimates to calculate the cumulative exposures of the underground miner cohorts. Nevertheless, they contain sufficient strength to demonstrate an excess lung cancer risk associated with radon progeny exposure. Also, an exposure-response relationship exists between cumulative radon progeny exposure and lung cancer mortality [3,10,11,12,24]. Statistically significant SMR's above 400 were observed in three studies, where workers accumulated mean exposures above 100 WLM [10,11,15,24]. Statistically significant SMR's between 140 and 390 were observed in two studies [3,5], where workers accumulated mean exposures below 100 WLM, and in preliminary findings from a third study by Tirmarche et al. (1985), where workers probably accumulated mean exposures below 100 WLM [47].

NIOSH acknowledges the efforts of various groups [19,21,35,77] to compare attributable and relative risk estimates across different epidemiologic studies. NIOSH can neither validate nor refute these findings. At this point, without access to the raw data and more specific information about epidemiologic methods, NIOSH is unwilling to speculate or make comparisons between the attributable or relative risk estimates in the five primary studies.

Several classifications have been developed by national organizations for identifying a substance as a carcinogen. Under the OSHA definition, a "potential occupational carcinogen" is any substance, or combination or mixture of substances, which causes an increased incidence of benign and/or malignant neoplasms, or a substantial decrease in the latency period between exposure and onset of neoplasms, in humans or in one or more experimental mammalian species as the result of any oral, respiratory, or dermal exposure, or any other exposure which results in the induction of tumors at a site other than the site of administration. This definition also includes any substance which is metabolized into one or more potential occupational carcinogens by mammals. Since exposure to radon progeny has been shown to produce lung cancer in underground miners, it meets the OSHA criteria; thus, radon progeny should be considered an occupational carcinogen.

Data on the current radon progeny exposures of uranium, metal, and nonmetal miners suggest that the mining industry, overall, is already capable of meeting a radon progeny standard below the current annual limit of 4 WLM. Recent limited research (based on data from only 2 mines) suggests that, using ventilation, bulkheads, and backfilling, it is technically feasible for mines to meet a standard as low as 1 WLM. At the present time, there is no effective medical method to prevent or treat lung cancer due to radon progeny exposure.
Also, there is insufficient evidence to support an association between a specific lung cancer cell type and radon progeny exposure [67,68]. Only exposure prevention measures are effective in lowering radon-progeny-induced lung cancer rates. These preventive measures include lowering the radon progeny exposures of underground uranium miners (and perhaps some underground metal and nonmetal miners), especially those who receive annual cumulative exposures near the present limit of 4 WLM. An additional measure is to encourage miners to stop smoking, because smoking and radon progeny exposure may act multiplicatively, or at least additively, to cause lung cancer.

Finally, a lowering of exposure, especially for the workers currently exposed near 4 WLM, is recommended. Recent information suggests that it is technically feasible to control radon progeny exposures to levels as low as 1 WLM. NIOSH wishes to withhold a recommendation for a specific PEL until completion of a quantitative risk assessment, which is now in progress. In addition, the specific medical recommendations listed in Chapter IV should be implemented.

# A. Current Work Force

# Uranium Miners

a. Miners in the United States

The number of underground mine workers (including miners and service and support staff) dropped from approximately 5,037 in 1980 to 2,150 in 1982 (Table A-2). The number of underground miners, the group receiving the highest exposures, dropped from 2,760 to 1,275. This decrease was due to a recent fall in the price of uranium and reduced uranium demand. In the mines in the United States there are numerous temporary, short-term workers; in 1978, of all employees who worked underground, only 46 percent worked 1,500 or more hours underground (i.e., full time).

b. Miners Outside the United States

Czechoslovakia, China, France, Italy, Australia, Canada, and Argentina have underground uranium mines (see Table A-3).

# B. Current Exposures in Mining

[Appendix tables not reproduced here: radon daughter exposure measurements for individual nonuranium mines (e.g., Stanley Equity Gold Inc., Leadville Unit (Asarco), Revenue-Virginius (Ranchers)) and for foreign mines (e.g., South Africa, 1973), and the exposure of U.S. underground uranium miners to radon daughters in 1979, by job category. In mines where production employees also perform maintenance, service, and supervisory duties, such employees were classified as production workers; where corporations had widely separated operations under different managers, each was considered a separate operation. Production includes production and development miners; maintenance includes mechanics and electricians; service includes motormen, haulage crews, drift repairmen, station tenders, and skip tenders; salaried includes engineers, supervisors, geologists, and ventilation personnel.]

# Uranium Miners

a. Miners in the United States

For the numerous temporary, short-term workers in uranium mines in the United States, the recorded annual average exposure is somewhat misleading: these workers can receive high exposure rates, but because they work only for short periods of time, their annual average exposure is low. The average exposure for those miners working full time, that is, over 1,500 hours underground, was higher: 1.45 WLM in 1978. Underground mining exposure records were placed into four general job categories by the AIF, i.e., production, maintenance, service, and salaried.
As a group, the production workers who worked more than 1,500 hours underground should have higher exposures than the remaining uranium mining work force. In 1978, the average exposure of these workers was 1.74 WLM (see Table A-7), and in 1979 their average exposure was approximately 1.88 WLM [57]. In contrast, in 1979 and 1980, MSHA inspectors recorded average radon progeny concentrations for underground uranium mining production workers of 0.30 WL or higher, which means that some of these workers could receive 4 WLM or more per year. Cooper estimated that the average annual exposure of full-time underground production workers was about 2.9 WLM during 1979 [57]. The number of workers that receive these high exposure levels may be small; the AIF reported that among full-time underground uranium miners in 1979, only 3 out of 1,711 production workers and 3 out of 488 maintenance workers received more than 4 WLM annually (see Tables A-10 and A-11). Overall, most uranium mine workers (including those who spend only part of their time underground) have exposures well below the standard of 4 WLM, on average about 1 WLM [82] (see Table A-7). A relatively small number of workers, primarily full-time underground production and maintenance workers, have exposures above the 4 WLM standard (see Tables A-9 through A-11). The most recent available data, for 1982, showed that only 2 underground employees (0.1 percent) received radon progeny exposures of 4.0-5.0 WLM and 44 employees (1.6 percent) received exposures of 3.0-4.0 WLM [58]. It should be possible to lower radon progeny exposure levels for this relatively small number of miners.

b. Miners Outside the United States

The exposure of underground uranium miners depends on the quality of the uranium ore body and the ventilation rate. In other countries (excepting Canada), the uranium ore is frequently of a lower grade than the ore in the United States, so with good ventilation techniques, foreign uranium miners should receive lower exposures than miners in the United States. Recent figures for radiation exposure in underground uranium mines in Canada, France, India, Argentina, and China have been published in the literature (see Table A-3) [82]. The underground uranium miners of Canada had an average annual exposure to radon progeny of 0.74 WLM in 1978. In 1980, the median exposure for miners in three underground mines in Saskatchewan was below 0.6 WLM, and only about three workers in one mine were exposed to 3-4 WLM. In addition, some of these miners had substantial gamma exposure: in the Cluff mine, gamma exposures ranged as high as 3.5 rem and above, and in the Eldorado and Cluff mines many workers (approximately 60) were exposed to 1-3 rem [90]. The French uranium miners had average annual radon progeny exposures of 2.0 WLM and 1.4 WLM in 1978 and 1979, respectively [82]. In 1975, the median radon progeny exposure was below 0.10 WL, yet as many as 5.35 percent of the workers were exposed to 0.30 to 0.80 WL, potentially receiving more than 4 WLM annually (see Table A-12) [91]. In 1975, there was also a record of gamma exposure in French underground uranium mines. The mean annual dose was 0.49 rem, but some miners received much higher doses; 9.16 percent received 1.0-1.5 rem, 5.3 percent received 1.5-2.5 rem, and 0.65 percent received 2.5-3.0 rem [91]. In the underground uranium mines in France, gamma exposure may constitute a major part of the total radiation.
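The conversions between WL concentrations and annual WLM quoted in this section follow from the definition of the working level month (1 WL for 173 hours, under the MSHA definition given in the Glossary). A minimal sketch, assuming a hypothetical 2,000-hour underground work year:

```python
# Annual cumulative exposure (WLM) implied by a constant radon progeny
# concentration (WL), using the MSHA definition: 1 WLM = 1 WL x 173 hours.

HOURS_PER_WLM = 173.0

def annual_wlm(concentration_wl: float, hours_underground: float) -> float:
    """Cumulative exposure in WLM for time spent at a constant WL."""
    return concentration_wl * hours_underground / HOURS_PER_WLM

# A hypothetical full-time miner (~2,000 hours underground per year):
for wl in (0.10, 0.30, 0.80):
    print(f"{wl:.2f} WL -> {annual_wlm(wl, 2000):.1f} WLM/year")
# 0.10 WL -> 1.2 WLM/year
# 0.30 WL -> 3.5 WLM/year  (approaching the 4 WLM annual limit)
# 0.80 WL -> 9.2 WLM/year  (well above the 4 WLM annual limit)
```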
There is limited information available concerning typical radon progeny exposures in underground uranium mines in India, Argentina, and China [82]. For the mines in India, figures for potential exposure are given by job category: in 1979, the drilling crew received an estimated 2.6 WLM of potential alpha energy exposure, the mucking crew about 2.1 WLM, and "others" about 1.7 WLM (see Table A-13) [82]. In Argentina, the average annual radon progeny exposure was about 2.4 WLM during 1980.

# Nonuranium Miners

a. Hard Rock Miners in the United States

Some of the highest radon progeny exposures are found in the iron, zinc, fluorspar, and bauxite mines (Table A-1). In 1975, iron miners were exposed to 0.14-0.90 WL, zinc miners to 0.07-1.40 WL, fluorspar miners to 0.30-2.20 WL, and bauxite miners to 0.07-1.40 WL [83,84]. If these readings are typical, some hard rock miners in the United States, especially those in fluorspar mines, could have radon progeny exposures much higher than 4 WLM. However, recent data submitted by U.S. metal and nonmetal mining companies to MSHA suggest that no more than 450 individuals are occasionally exposed to 0.3 WL (Table A-5). During 1983, only 4 companies (2 molybdenum, 1 phosphate, and 1 tungsten) submitted individual exposure records for their employees to MSHA. It is possible that the mining companies failed to report additional employees who received radon exposures, but this is the only data available. From these data, one concludes that, except for a few molybdenum, phosphate, and tungsten mines, radon progeny exposure is not a problem in U.S. hard rock mines. Thus, in general, hard rock mines should be able to meet an annual radon progeny standard below 4 WLM.

b. Hard Rock Miners Outside the United States

Radon progeny exposure levels have been measured in nonuranium mines in Finland, Italy, Norway, South Africa, Sweden, the United Kingdom, and Poland (Tables A-14 to A-16) [82,81]. The most recent figures for all of these countries show annual average radon progeny exposures of 2.6 WLM or less. However, in many of these countries the average potential alpha energy concentrations exceed 0.3 WL, suggesting that individual miners may be exposed to more than 4 WLM per year (if they work full time during the year). Nonuranium miners (especially iron, zinc, lead, copper, or gold miners) in Italy, Poland, South Africa, and Great Britain may be exposed to more than 4 WLM annually [82]. In the United Kingdom, 4 percent of the noncoal miners were exposed to 4 WLM or more; however, many of the miners did not work full 8-hour shifts. If the underground noncoal miners in the United Kingdom worked full 8-hour shifts, as many as 20 percent of the workers could be exposed above 4 WLM/yr [81]. Recent reports for five Chinese tin mines showed radon progeny levels of 0.67 to 1.73 WL during 1978 [40].

It may be most effective to combine some of these techniques, e.g., to use positive pressure ventilation in combination with procedures that decrease the volume of mine air needing ventilation, such as bulkheads or backfilling. Bulkheads can be made more secure against radon gas leaks by maintaining a slight negative pressure behind the bulkhead and painting sealant on nearby exposed rock. Finally, most of the techniques described in Table B-1 and in this chapter will decrease inhalation exposure to alpha radiation from the decay products of radon and thoron gases but will not affect gamma radiation levels.
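The next section reviews mechanical ventilation, the dominant control technique. As background, a minimal well-mixed box model (a textbook simplification, not taken from this document; all numbers below are illustrative assumptions) shows why dilution ventilation is so effective: at steady state, the concentration equals the source rate divided by the total removal rate.

```python
# Steady-state radon concentration in a well-mixed mine volume:
#   C = S / (Q + lambda * V)
# where S = radon emanation rate (pCi/h), Q = ventilation airflow (L/h),
# V = mine air volume (L), and lambda = radon decay constant (per hour).

RADON_DECAY_PER_H = 0.00755  # from the Rn-222 half-life of about 3.8 days

def steady_state_conc(emanation_pci_per_h: float,
                      airflow_l_per_h: float,
                      volume_l: float) -> float:
    """Radon concentration (pCi/L) for a well-mixed volume at steady state."""
    return emanation_pci_per_h / (airflow_l_per_h + RADON_DECAY_PER_H * volume_l)

# Doubling the airflow roughly halves the steady-state concentration,
# because removal by ventilation dominates removal by radioactive decay:
S, V = 1.0e9, 1.0e8                  # hypothetical source rate and volume
for Q in (1.0e7, 2.0e7, 4.0e7):      # hypothetical airflows (L/h)
    print(f"Q = {Q:.0e} L/h -> C = {steady_state_conc(S, Q, V):.1f} pCi/L")
```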
# Mechanical Ventilation
Mechanical ventilation is the primary and most successful technique currently in use for reducing exposure to radon decay products. In uranium mines in the United States during the early 1950's, before mechanical ventilation became prevalent, average measurements of 2-200 WL of radon decay products were common [11]. In contrast, during 1979 and 1980, the highest average working level for radon progeny recorded by MSHA was 0.46 WL (Table B-2). Thus, there has been a great decrease in exposure to radon decay products in uranium mines, primarily due to improvement in ventilation. Sweden has also successfully reduced radon progeny levels in nonuranium mines with mechanical ventilation: the average annual exposure for the nonuranium miners of Sweden was 4.7 WLM in 1970 and, due to ventilation improvements, decreased to 0.7 WLM in 1980 [95]. In the case of uranium miners in the United States, it is not clear whether there could be significant further decreases in exposures to radon decay products with ventilation improvements alone. The few mines with the highest exposures may need to use other techniques, besides dilution ventilation, to reduce miners' exposure to radon progeny (Table B-1).

Table B-1 (excerpt). Techniques for reducing radon progeny exposures:

Medical removal protection: If a person approaches or exceeds the lifetime limit on exposure, they are transferred to another job at a lower exposure level with retention of pay, if available, or are removed from work at full pay if another job is not available. Advantages: protects individual miners against high cumulative exposures. Disadvantages: spreads exposure over a larger number of people; the system works best when used with a reliable bioassay for exposure, which is not available in the case of radon gas or progeny; and medical removal may not be effective if intense, short-term exposure to inhaled alpha radiation is more hazardous than cumulative radiation exposure.

Wet drilling: The drills are equipped with automatic water valves that turn the water and compressed air on simultaneously. (These techniques have been used in mines since the 1930's.) Advantages: the water cuts down on radioactive uranium ore dust. Disadvantages: difficult to set up in areas where water is scarce, and the miners using the drill get wet.

End-of-shift blasting: Dynamite blasting at the end of the shift, instead of throughout the day, reduces exposure to dust and smoke; radon gas levels also tend to be high immediately after blasting [93]. Advantages: most miners have less exposure to dust and smoke particles and thus less radiation exposure. Disadvantages: extra production schedule planning is necessary.

Ventilation system reliability and positive pressure: This involves the use of fan maintenance, backup electrical systems, and spare fans to minimize fan shutdowns during working hours. Positive pressure at the rock surface is a barrier to radon flow; one drawback is that high positive pressure in one area may force the radon into nearby low-pressure areas [92,93]. Exhaust ventilation removes radon, thoron, and daughters, as well as diesel fumes, but it also increases the emission of radon from the surrounding rock by creating a negative pressure. Positive pressure ventilation may be shut down during times when the mine is inactive, creating a temporary negative pressure; this results in energy savings during the shutdown periods. The best ventilation method to use depends on the mine topography and production schedule, and ventilation methods may be most effective when used in combination with techniques that cut down on the area needing ventilation, such as bulkheads and backfilling [92,93].

Filter respirators: The filter respirator covers the miner's mouth and nose and filters the mine air through fiber filters [94]. Advantages: as a temporary, short-term protective measure, the half-mask respirator affords approximately greater than 90 percent efficiency in reducing a miner's exposure to radon daughters attached to dusts, fumes, and mists. Disadvantages: the respirators may hinder vision, be warm to use under some working conditions, add significant resistance to the miner's breathing, and require careful maintenance to assure their continued effectiveness; filter respirators must be carefully fitted to each wearer, using quantitative respirator fit tests. Only MSHA/NIOSH-certified respirators shall be used.

# Other Dust Control Methods
Spraying water and delaying blasting until the end of shifts are two other dust control methods currently in use in most underground uranium mines. Most mines use these methods to control silica dust, but in uranium mines these methods can also help control uranium ore dust. Drilling and blasting are two mining activities that generate high levels of uranium ore dust. Exposure to uranium ore dust alone may be carcinogenic, and high dust or smoke levels may modify the respiratory tract distribution of a miner's exposure to radon progeny (by increasing the proportion of radon progeny attached to respirable and nonrespirable size dust particles). In wet drilling, water sprays from the drill onto
the rock while the drill operates, thus decreasing dust levels. Miners also wet down muck piles and the walls of some tunnels to control dust. These two techniques have been used in some mines since the 1930's. Blasting increases uranium ore dust, and radon gas levels remain high for about an hour afterwards [93]. Delaying blasting until the end of the work shift removes the miner from an area with high dust and radon gas levels and allows the ventilation system to reduce these levels before the miner returns to work.

# Additional Control Methods
Air cleaning equipment, filter respirators, and separate air supplies are seldom used in the underground mining environment. An air cleaning apparatus can remove dust, but it is expensive compared to traditional ventilation methods and is most useful in circumscribed areas [92]. Filter respirators and supplied-air respirators are difficult to use in the mining environment, and their use should be limited to emergency conditions, such as temporary excursions of radon progeny concentrations above 1 WL. Respirators tend to restrict movement and vision, may be too warm to wear, have significant breathing resistance, and require careful maintenance and fitting to assure their continued effectiveness. Only MSHA/NIOSH-certified respirators shall be used. Another radon progeny control method is robotics, or increased automation. Techniques, such as robotics, that minimize the time the miner spends in high-exposure areas of the mine and in activities such as drilling, blasting, or loading ore will decrease the miner's radiation exposure. Although robotics at present has a limited place in the mines, it may be possible in the future to further automate the uranium ore mining process.

# B. Administrative Controls

# Medical Removal Protection
One type of administrative control is a medical removal protection (MRP) program. Under this program, when an individual's exposure approaches or exceeds a certain limit, the person is reassigned to an area with a lower exposure level.
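A minimal sketch of such a removal trigger follows (both the action fraction and the function name are hypothetical; the document does not specify numeric action levels for MRP):

```python
# Hypothetical medical-removal check: flag a miner for reassignment when
# cumulative exposure for the year approaches a stated action level.

ANNUAL_LIMIT_WLM = 4.0   # current annual standard discussed in the text
ACTION_FRACTION = 0.75   # hypothetical "approaching the limit" trigger

def needs_reassignment(wlm_to_date: float) -> bool:
    """True if cumulative exposure this year has reached the action level."""
    return wlm_to_date >= ACTION_FRACTION * ANNUAL_LIMIT_WLM

for exposure in (1.5, 3.0, 4.2):
    status = "reassign" if needs_reassignment(exposure) else "ok"
    print(f"{exposure} WLM -> {status}")
```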
The MRP program has been very effective in reducing exposure in the (noncarcinogenic) lead industries [96]. In that case, blood lead levels could be used to biologically monitor a worker's lead exposure. However, MRP has certain drawbacks when used as an administrative control for exposure to a known human carcinogen such as radon progeny in underground uranium mines. First, according to our current knowledge of radiation carcinogenesis, it is prudent public health policy to presume that there is no threshold for radon-progeny-induced cancer, and thus no exposure can be assumed to be safe. Therefore, the high-exposure individuals who are removed from the job are protected against further radon progeny risk, but the radon progeny exposure (and risk) is spread out over a larger population of workers. Second, at this time, there is no good biological monitoring method for radon progeny exposure, because the primary health effect is a carcinogenic, rather than a toxicologic, response. Routine, periodic sputum cytological examinations and chest X-rays are not effective screening tests for the detection of early reversible signs of lung cancer, and cancer itself may appear only after years of exposure. Finally, respirators (as they are presently designed) are very difficult to use in the underground mining environment.

# Alarm Systems
Another type of administrative control involves the use of alarm systems. This method has been fairly effective in coal mines, where continuous monitors for methane gas have been tied to alarm systems. Reliable continuous monitors for radon progeny are now technically feasible (see [89]) and could be connected to alarm systems, as well as to the control center for the ventilation system. The person who controls the ventilation could increase air movement in mine areas with high radon progeny levels. Also, the continuous monitors might be useful for enforcement purposes, because the MSHA inspector would have a record of excessive radon progeny measurements since the last inspection. For recordkeeping and enforcement purposes, the use of data from continuous alarm-monitors would depend heavily on the reliability and validity of these devices, as well as their durability and security from tampering in the mine environment.

# Contract Mining
Many underground uranium miners, especially those that drill, blast, and move ore, are given incentive bonuses for the volume of ore removed. Such a system encourages high productivity from the workers, but any time they spend on safety measures means less time to spend mining ore. The contract mining system also encourages miners to work overtime, thus increasing their cumulative internal and external radiation exposures. In addition, some miners, especially before the reduced demand for uranium, went from mine to mine, working uranium ore one month and gold the next, accumulating radon progeny exposures in both locations. This mobility of the work force makes it harder to monitor and track the miners' total radiation exposure, making it more likely that a miner could receive cumulative exposures in excess of current and future standards. One administrative control is to modify the contract mining system so that workers would have more incentive to protect their own health on the job. This issue needs further study and discussion, including input from the mining industries, unions, and contract miners.

# GLOSSARY
Absorbed Dose: The amount of energy absorbed by ionizing radiation per unit mass.
Absorbed doses are expressed in units of rads or grays, or in prefixed forms of these units such as millirad (mrad, 10^-3 rad), microrad (urad, 10^-6 rad), etc.* The gray (Gy) is equal to 1 joule per kilogram (1 J/kg). The rad is equal to 6.24 x 10^7 MeV per gram, or 100 ergs per gram. One gray = 100 rad.

Additive Relative Risk Model: The relative risk from the combined exposure to radon progeny and smoking equals the sum of the risks from each exposure considered separately. One example of an additive linear relative risk model is R = 1 + B1z1 + B2z2, where z1 and z2 are the two exposures.

Person-Years at Risk (PYR): In a life table analysis, the number of person-years at risk of dying from disease, usually calculated from the time the miner enters the cohort until death or the end of follow-up. Some authors adjust the PYR for an assumed 10-year latent period for lung cancer by subtracting PYR accumulated during the first 10 years after a miner starts to work underground (see Lagging, above).

Potential Alpha Energy Concentration (PAEC): The concentration of alpha energy, released during the radioactive decay of radon or thoron gases and their progeny, that may cause biological damage; it is measured in units called Working Levels (see below).

Proportional Mortality Ratio (PMR): The ratio of two mortality proportions, expressed as a percentage, often adjusted for age or time differences between the two groups being compared.*

Prospective: A study characteristic. Disease has not occurred in study groups at the start of a study.**

Units of Radioactivity: Curie and becquerel. 1 curie (Ci) = 2.22 x 10^12 disintegrations/minute; 1 becquerel (Bq) = 1 disintegration/second; 1 picocurie (pCi) = 2.22 disintegrations/minute.

Radioactive Decay: Disintegration of the nucleus of an unstable nuclide by spontaneous emission of charged particles, photons, or both.

Radon (Rn) or Radon and its Progeny: Specifically refers to the "parent" noble gas (Rn-222) and its short-lived, alpha-radiation-emitting radioactive decay products ("progeny" or "daughters"). Radon is a gas; the radon progeny are radioactive solids.

Rads and Rems: Rads and rems are comparable (i.e., the quality factor (QF) = 1) when dealing with beta particles and gamma photons. The QF for alpha particles from inhaled radon progeny is generally considered to be in the range of 10 to 20.

Retrospective: A study characteristic. Disease has already occurred in study groups at the start of a study.**

Standardization: A procedure to reduce the biasing effect of a confounding variable. A feature of data analysis.**

Standardized Mortality Ratio (SMR): The ratio of mortality rates, expressed as a percentage, usually adjusted for age or time differences between the two groups being compared.**

Synergism: The combined action of two factors which is greater than the sum of the actions of each of them.

Thoron: A radioactive gas (Rn-220), sometimes found in the presence of radon (Rn-222). Thoron progeny are the solid, short-lived, alpha-radiation-emitting decay products (progeny or daughters) of thoron gas.

Working Level (WL): A standard measure of the alpha radiation energy in air. This energy can come from the radioactive decay of radon (Rn-222) and thoron (Rn-220) gases. The working level is defined as any combination of short-lived radon decay products per liter of air that will result in the emission of 1.3 x 10^5 million electron volts (MeV) of alpha energy.

Working Level Month (WLM): A person exposed to 1 WL for 170 hours is said to have acquired an exposure of one working level month. The Mine Safety and Health Administration defines a working level month as a person's exposure to 1 WL for 173 hours.
* Taken from Shapiro (1981) [1].
** Taken from Monson (1980) [98].
+ Taken from Thomas et al. (1985) [99].

# I. INTRODUCTION
A report evaluating epidemiologic studies of lung cancer in underground miners was recently sent to the Mine Safety and Health Administration (MSHA) by the National Institute for Occupational Safety and Health (NIOSH). That report concluded that prolonged exposure to radon progeny at the current standard of 4 WLM/year produced an elevated risk of death from lung cancer. It is the objective of this report to make quantitative risk estimates for various levels of cumulative exposure. In addition, other factors influencing the exposure-risk relationship will be identified and quantified whenever possible. This report is based upon data collected from a cohort consisting of 3,366 white underground uranium miners working in the Colorado Plateau (located within the states of Colorado, Utah, New Mexico, and Arizona). The actual risk estimates were computed from data on 3,346 members of the cohort: ten original members were determined to have had no record of underground mining, four were nonwhite, and six had inadequate cigarette smoking information. Entry into the cohort was defined by race, sex, working at least one month in underground uranium mines, volunteering for at least one medical survey between 1950 and 1960, and providing social and occupational data of sufficient detail [Lundin et al. 1971]. NIOSH has now updated the mortality experience of the cohort through December 31, 1982. Lung cancer deaths were defined as those assigned an International Classification of Diseases (ICD) code of 162 or 163 (the same designation in the Sixth through Ninth Revisions). Previous analyses of this cohort, reported by Waxweiler et al. [1981] and Whittemore and McMillan [1983], considered follow-up only through 1977. Table 1 presents a comparison of the vital status of the cohort at the end of 1977 and 1982.

# II. PROTOCOL FOR STATISTICAL ANALYSIS

# A. Type of Analysis Used
Much of the epidemiologic work in the past regarding the analysis of mortality in occupational cohorts has involved modified life table analysis. This form of analysis has a strong appeal due to its familiarity and ease of interpretation. It is mathematically straightforward, since person-years at risk are simply divided into a number of strata, and age- and calendar-year-specific mortality rates from some reference population are applied to each. The U.S. population is often used as the reference population in such life table analyses. This expected mortality is then compared to the observed mortality via a ratio defined as:

    SMR_j = ( Σ_i O_ij ) / ( Σ_i E_ij )

where SMR_j = the standardized mortality ratio for cause j, O_ij = the observed number of deaths for cause j in stratum i, and E_ij = the expected number of deaths for cause j in stratum i based on reference population rates. If the total number of observed deaths in all of the strata of interest is large and if the reference population is the appropriate comparison group, this would be the method of choice; no modeling would be needed in such a situation. However, after stratification by age, race, sex, calendar year, other confounders, and finally the exposure of interest, there are seldom enough observed deaths to make rates in these strata reliable. Another problem frequently encountered is a fundamental difference in certain etiologic characteristics between the study population and the reference population.
For example, the study group may smoke at substantially different rates than the reference population. Often the occupational study group is "healthier" than the reference population due to selection criteria for employment (Enterline [1976]); this is usually referred to as the "healthy worker effect." An alternative to use of the modified life table approach is some form of statistical modeling. Modeling to estimate health risks is necessary when conclusions must be drawn about risk in regions of the exposure-response relationship for which data are too sparse to estimate risk directly. The use of models also permits risk estimates to be simultaneously adjusted for confounders, such as age or co-carcinogenic exposures, as well as for interactions between exposure and other risk factors. This flexibility is particularly important in making risk estimates at relatively low cumulative exposures when using the Colorado Plateau data: most miners in this cohort were exposed to high levels of radon progeny (mean exposure = 834 WLM). Since primary interest is in risk estimates below 120 WLM, based on current exposures, some type of statistical model is essential. A number of types of models have been suggested for examination of cause-specific mortality as a function of various risk factors. The two most popular types are the absolute risk model and the relative risk model. The absolute risk model can be written as:

    R(t;z) = R0(t) + R(z,β)

where R(t;z) is the incidence at age t for someone with risk factors z, R0(t) is the baseline or background incidence at age t, and R(z,β) is the incremental incidence as a function of the risk factors z and coefficients β, which are estimated from the data. This form of risk model was not used in the risk assessment, since it had been rejected due to poor fit to the U.S. uranium miner data by Lundin et al. [1979]. In contrast, the relative risk model generally takes the form:

    R(t;z) = R0(t) R(z,β)

This model assumes that excess risk is proportional to background incidence rates. Relative risk models have become increasingly popular in recent years and were found to provide good fits to the data from earlier follow-ups of the U.S. uranium miners cohort by Lundin et al. [1979] and Whittemore and McMillan [1983]. This type of model has been selected as the basic analytical method for this report.

# B. The Proportional Hazards Model
A relative risk model which is particularly well suited to longitudinal mortality studies is one proposed by Cox [1972], commonly referred to as the Cox proportional hazards model. A major advantage of this approach over the more common life table method is that it permits the use of internal comparison groups while controlling simultaneously for such confounders as cigarette smoking, age, and year of birth. In addition, time-dependent covariates such as cumulative exposure may be incorporated into the model; this is essential in any longitudinal study where follow-up and the exposure period overlap. Relative risk estimates are based on rate ratios similar to those produced in the modified life table analysis. That is, the Cox model operates in a dynamic framework by considering incidence rates over the entire period of follow-up. The Cox model can be expressed mathematically as:

    λ(t;z) = λ0(t) exp(βz(t))

where λ(t;z) for this study is the age-specific lung cancer mortality rate for a miner with exposure and other risk factors represented by a covariate vector z.
The underlying age-specific lung cancer mortality rate for the unexposed is represented by λ0(t). The function exp(βz) is generally used to model risk of death from the cause of interest, which depends upon the risk factors z and the coefficients β estimated from the data.

# C. Alternative Forms of the Risk Function
Although the exponential or log-linear function exp(βz) is the usual choice of a model for risk, any positive function may be used as long as the risk function is equal to 1.0 when the coefficients are all equal to zero. The most common alternative risk functions are the linear (1 + βz) and the power function (exp(β ln z) = z^β). All three forms of risk function were considered in modeling the U.S. uranium miners data.

# D. Results of Model Development

# Identification of Confounders and/or Effect Modifiers
Cumulative exposure as measured by total WLM for each miner was the primary exposure variable. Since cigarette smoking is known to have a strong effect upon the risk of lung cancer, cumulative smoking history as measured in pack-years was also included in the model. Another risk factor strongly associated with lung cancer mortality is age. This was tightly controlled by using age as the time dimension t in the model λ(t;z). That is, the age at death of each lung cancer victim was recorded, and all other miners alive and at risk were compared to him at that age. In this way, the cumulative exposure to radon daughters and pack-years of cigarettes were incorporated as time-dependent covariates by calculating their values at each age of death from lung cancer. This assures that proper age-adjusted comparisons were made throughout the period of follow-up. A number of other variables were examined in developing the appropriate risk model. A list of all potential risk factors considered for inclusion in the model is provided in Table 2. These variables were considered independently as potential confounders in a stepwise fashion (both backward and forward selection procedures) and also as potential effect modifiers, by assessing their interaction with cumulative radon daughter exposure. An attempt was made to compare the fit of each of the three models during the model development stage of the analysis. However, it soon became apparent that the linear model did not fit well over the full range of radon daughter exposures and cumulative smoking levels. In fact, the iterative solution to the likelihood equations would not converge when using the linear model if cumulative exposure and pack-years of smoking were both entered simultaneously (either as linear or linear-quadratic forms). The linear model could only be made to converge when restricted to cumulative exposures below 600 WLM with no other covariates included. The restricted linear model produced a non-significant result in this exposure range and was subsequently eliminated from consideration. Of the remaining two types of relative risk models (log-linear and power function), the covariates found to be most highly associated with lung cancer incidence rates were cumulative exposure (WLM), cumulative smoking (packs), and age at initial exposure (months). Table 3 illustrates the form and degree of fit, as measured by the likelihood ratio, for these two models. (Table 3 note: BGR = background radon exposure = 0.2 WLM/year; BGS = background cigarette smoking = 0.005 packs/day.)
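To make the three candidate risk functions of section C concrete, a small sketch follows (the coefficient values are arbitrary placeholders; only the functional forms, and the background offsets in the Table 3 note, are taken from the text):

```python
import math

# The three relative-risk functions considered in the text; each equals
# 1.0 when the coefficient beta is zero (z is a covariate such as WLM).

def log_linear(z: float, beta: float) -> float:
    return math.exp(beta * z)      # exponential / log-linear form

def linear(z: float, beta: float) -> float:
    return 1.0 + beta * z          # linear excess relative risk

def power(z: float, beta: float) -> float:
    return z ** beta               # exp(beta * ln z) = z**beta

# The power function requires z > 0, so background exposure is added
# first, as described in the text (0.2 WLM per year since birth):
age_years, wlm_mining = 50, 120.0
z = wlm_mining + 0.2 * age_years   # cumulative WLM including background
print(power(z, 0.5))               # 0.5 is an arbitrary placeholder beta
```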
The log-linear model required the addition of quadratic terms in cumulative exposure and cigarette smoking to provide an adequate fit. This was not necessary when developing the power function model. As shown in Table 3, the power function model provided the best fit to the data and will be used hereafter in the risk assessment. Since the power function model involves the natural logarithms of cumulative exposure and cumulative cigarette smoking, zero values of these variables were not permitted. To avoid this, an estimate of cumulative background exposure was added to each miner's cumulative radon daughter and cigarette totals. Based upon estimates of the NCRP (Report No. 77, 1984), 0.2 WLM for each year since birth was added to each miner's exposure totals. This is the estimated background exposure in the U.S. and is also the amount used by Whittemore and McMillan [1983] in an earlier analysis. In a similar fashion, 0.005 packs per day for each day since birth were added to the cumulative smoking totals, based upon estimates of Hinds and First [1975]. Of particular interest is the joint effect of exposure to radon daughters and cigarette smoking. Therefore, the interaction of radon daughter exposure and cigarette smoking was included in the multiplicative power function model. The results showed a negative, borderline-significant interaction (β = -0.087, p = 0.058). When a similar analysis was run with mortality data complete only through 1977, there was no indication of a significant negative effect. Therefore, based on more complete follow-up through 1982, the joint effect of radon daughter exposure and cigarette smoking appears to be slightly less than multiplicative but greater than additive. This is similar to the finding of Thomas and McNeill [1985] in their grouped data analysis of the five major radon daughter cohorts. It is still consistent with a synergistic effect of radon exposure and cigarette smoking, which is usually defined as a joint effect exceeding the sum of the individual effects.

# Weighting Exposure Over Time
An important consideration in fitting any of these models was the proper time-weighting of exposure. Since most forms of cancer, including lung cancer, have relatively long latency periods between exposure and manifestation of the disease, some weighting of exposure over time is appropriate. The most common weighting scheme is referred to as lagging. This involves elimination of any exposure accumulated in a specified period of years before death from lung cancer. This provides a way of considering only that exposure that had a reasonable chance of causing death from lung cancer; exposures received in the few years immediately prior to death from lung cancer are clearly ineffective in the exposure-response relationship. In order to investigate the appropriate number of years to lag exposure in this cohort, a series of lags ranging from 0 to 12 years was used. Figure 1 illustrates the results of these trials. It is evident from the improved fit, as measured by the log-likelihood of the model, that a lag of 6 years for cumulative exposure is the best choice for this analysis. Cumulative cigarette smoking was rather insensitive to the amount of lag in the range of 0 to 12 years; therefore, for consistency, cumulative smoking was also lagged 6 years. This contrasts with the lag of 10 years chosen by Whittemore and McMillan [1983] for these data and also by Muller et al. for the Canadian data.
Their choices were somewhat arbitrary and largely based on knowledge that most cancers involve relatively long latency periods. The implications of a shorter lag will be discussed in a later section of this report. An issue related to lagging of cumulative exposure and cumulative cigarette smoking is the lack of information on these variables in recent years. Radon daughter exposure was last updated in 1969. However, the absence of current exposure information should have minimal impact upon this analysis, since over 90% of the miners in the cohort had retired from uranium mining for more than one year by 1969. Those few who continued mining were exposed at levels considerably less than those experienced in earlier years. Since cigarette smoking status was also unknown after 1969, all miners still smoking at that time were assumed to continue at their last recorded smoking rate. NIOSH is currently conducting a survey of radon daughter exposure and cigarette smoking status subsequent to 1969, but this information will not be available for at least another year. The aim of lagging exposure is the elimination of exposure which is not etiologically responsible for lung cancer mortality. An implicit assumption in the use of this technique is that exposure changes from completely effective to completely ineffective at one instant in time. The actual form of this weighting function is illustrated in Figure 2. Because of the biological implausibility of such a situation, Land [1976] proposed that the effectiveness of cumulative exposure be linearly phased in over a period of several years. An illustration of such a weighting function is provided in Figure 3. Consequently, we tried various combinations of lagging and linear partial weighting, with the combination illustrated in Figure 3 providing the best fit, i.e., a lag of 4 years followed by linear partial weighting in the period 4-10 years prior to death from lung cancer. This scheme provided a fit essentially the same as that of a simple lag of six years but was chosen over lagging because of its biological plausibility.

# III. INFLUENCE OF TEMPORAL FACTORS
Perhaps the most difficult aspect of producing a valid quantitative risk assessment is dealing with the effects of various time-related factors upon the exposure-risk relationship. One very important temporal influence concerns the two components of cumulative exposure itself.

# A. Exposure-Rate Effect
In most longitudinal studies the quantitative exposure index is some form of cumulative exposure. However, cumulative exposure is actually the product of duration of exposure and intensity or rate of exposure. When one uses cumulative exposure in assessing risk, the implicit assumption is that high exposure rates for short periods of time are equivalent etiologically to low exposures for long periods of time, all else being equal. A number of investigators have examined the effect of exposure rate in the U.S. uranium miner data. Whittemore and McMillan [1983] found no statistically significant effect of exposure rate. Lundin et al. in the 1971 monograph concluded that there was no significant evidence of an exposure-rate effect in the 120-360 WLM cumulative exposure range. These investigators apparently defined exposure rate as the ratio of total cumulative exposure and duration of employment (defined as the period of time between first and last employment in underground uranium mining work histories). For most forms of employment, this is the accepted definition of average exposure rate.
However, underground uranium mining is a very sporadic form of employment. The actual time spent underground was often a relatively small fraction of the total employment history. Therefore, exposure rate as defined by cumulative exposure divided by the number of months actually spent underground is often a very different measure than that obtained by using duration of employment in the denominator. Consequently, the effect of exposure rate was re-examined using the actual average exposure rate experienced while underground, eliminating any gaps in employment. Although earlier analyses using total duration of employment produced negative but non-significant results, the refined definition showed a statistically significant negative exposure-rate effect (β = -0.043, p < 0.001), as shown in Table 4. (Table 4 note: background for cumulative radon daughter exposure, BGR = 0.4 WLM/year; background for cumulative cigarette smoking, BGS = 0.005 packs/day.) This implies that among groups of miners receiving equivalent cumulative exposures, those exposed to lower levels for longer periods of time are at greater risk of lung cancer. Because the coefficient is relatively small, however, an appreciable effect upon risk of lung cancer would not be expected unless rates were different by an order of magnitude; i.e., a miner with exposure received at a rate ten times lower than a miner of the same age, smoking habits, and cumulative exposure would have (0.1)^-0.043 = 1.104, or 10.4%, greater risk of lung cancer. Because a negative exposure-rate effect is very important and potentially controversial, it was examined in more depth. Of particular interest was the possibility that this effect was different at low versus high cumulative exposure levels. Consequently, the homogeneity of this effect across the full exposure range was examined by forming two sub-cohorts: one below the mean exposure (834 WLM) and one above the mean. The interaction of the exposure-rate effect with these two strata was then tested. Results showed a significant interaction (β = 0.157, p = 0.019). The direction of the interaction indicated that the exposure-rate effect was stronger in the lower cumulative exposure range (0-834 WLM). Specifically, a miner who received total exposure below 834 WLM at a rate one-tenth as great as another miner of the same age, smoking status, and cumulative exposure would have a 58 percent greater risk of lung cancer. However, the increased risk would be only 10 percent at the lower exposure rate for miners in the 834-10,000 WLM range. Although a statistically significant negative exposure-rate effect had not been found previously in this U.S. cohort, there is considerable evidence of such findings in animal studies of high-LET radiation. Raabe et al. [1983] reported a strong low dose-rate effect in beagles exposed to internally deposited isotopes of radium and strontium: risk of bone cancer was as much as ten times as great per unit dose for low rates as compared to the highest rates used. Cross et al. [1980] found a negative dose-rate effect for risk of lung tumors in rats exposed to airborne radon daughters. Chameaud et al. [1981] found similar results in a French study of Sprague-Dawley rats exposed to inhalation of radon decay products. Hill et al. [1982] found that reduced dose rates of fission-spectrum neutrons produced significantly higher neoplastic transformation rates per rad in cell cultures of C3H mouse embryos.
Although all of these studies show low dose-rate effects, no study as yet, animal or human, has investigated such effects at the very low dose rates currently found in well-ventilated uranium mines.

# B. Calendar Time
It is well known that mortality patterns change over time. Such exogenous risk factors as the prevalence of smoking and alcohol consumption, medical care, and various lifestyle characteristics are all influenced by a changing society. Therefore, the effect of calendar time upon risk estimates, often called the cohort effect, must be controlled. The analysis of the U.S. uranium miners cohort was stratified by decade of birth, so that miners dying of lung cancer were compared only to those members of the cohort at the same age who were born within 10 years of the case. The usual assumption in a stratified analysis is that baseline mortality rates may be different from stratum to stratum, but the relative risk is the same across all strata for miners with comparable risk factors. In order to check this assumption, the interaction of cumulative radon daughter exposure and birth decade was examined. Results indicated a statistically significant positive interaction (β = 0.173, p = 0.002). This implies that miners born in later decades are at a greater risk of lung cancer per unit of exposure when compared to miners of the same age born earlier. Since miners born in later decades were exposed at lower exposure rates, this result could be associated with the negative exposure-rate effect described earlier.

# C. Multistage Theory of Carcinogenesis
One of the most popular theories for explaining the temporal patterns in mortality studies of carcinogenesis is the multistage model. Originally proposed by Muller [1951] and Nordling [1953] and later refined by Armitage and Doll [1961], the multistage theory predicts an increase in cancer incidence as a function of time since exposure to some carcinogen. In general, the theory proposes that a malignant tumor arises from a single cell which has undergone a series of heritable changes. The changes may be thought of as distinct stages in the carcinogenic process, each with a low probability of occurrence and a slow progression time in the absence of carcinogenic exposures. A carcinogen may act on any or all of the stages in this process. Carcinogens affecting the first stage are commonly referred to as initiators, while those affecting later stages are called promoters or progressors. Initiators are characterized by long latency periods between initial exposure and death, often exceeding 20 years. Promoters, on the other hand, usually have shorter latent periods, since fewer stages must be transgressed before a malignant cell is produced. It is impossible to prove whether or not the mathematical form of the multistage model actually holds in a given situation. However, a number of its predictions have been verified experimentally by Peto et al. [1975]. Therefore, if one subscribes to some form of the multistage model, it is possible to predict whether exposure acts at an early or late stage in the carcinogenic process by examining the temporal patterns in the data. Whittemore [1977], Day and Brown [1980], and Brown and Chu [1983] have all reported the effect on excess relative risk of age at initial exposure and time since cessation of exposure. By examining these factors, we may better understand the underlying cancer mechanism operative in this cohort.
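As a numerical illustration of the multistage idea, the classical Armitage-Doll approximation (a standard textbook result, not derived in this document) has incidence for a k-stage process rising as a power of age; the constants below are purely illustrative:

```python
# Armitage-Doll multistage approximation: for a k-stage process with small,
# constant transition rates, incidence at age t grows as I(t) ~ c * t**(k-1).
# Both c and k are arbitrary illustrative values, not fitted to this cohort.

def incidence(t: float, k: int, c: float = 1e-10) -> float:
    """Approximate incidence rate at age t for a k-stage process."""
    return c * t ** (k - 1)

# A hypothetical 5-stage process: incidence rises steeply with age.
for age in (40, 50, 60, 70):
    print(age, f"{incidence(age, k=5):.2e}")
```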
# D. Age at Initial Exposure
Whittemore [1977] considered the multistage model using three exposure scenarios: single exposure at one point in time, continuous exposure at a constant rate, and exposure of varying intensity. When considering the latter category (the usual occupational situation), she found that excess relative risk was a decreasing function of age at initial exposure if an early stage was affected. When a late stage is affected by exposure, however, excess relative risk is an increasing function of age at initial exposure. Day and Brown [1980] predicted the functional relationship between excess relative risk and age at initial exposure for the first four stages of a five-stage process when duration was held constant. Figure 4 illustrates their findings, which are in qualitative agreement with those of Whittemore. Results of the analysis of our data, as illustrated in Table 4, indicate a positive and statistically significant coefficient for age at initial exposure (β = 0.0023, p = 0.003). This implies that miners initially exposed at later ages are at greater risk of lung cancer than those exposed at younger ages, all else being equal. Specifically, a miner with the same radon daughter exposure and smoking history who was initially exposed ten years (120 months) later in age than another miner would have exp(0.0023 x 120) = 1.32, or 32%, higher risk of lung cancer. This result is consistent with the effect of radon daughters occurring at a late stage in the carcinogenic process. A similar age effect was reported by Mancuso et al. [1977] in an analysis of cancer risk in the Hanford workers exposed to whole-body radiation. An analysis of age at start of smoking among miners resulted in a negative but non-significant coefficient (β = -0.0016, p = 0.22). This would imply that cigarette smoking in this cohort acted at an early to intermediate stage. It could also be consistent with the hypothesis of Doll and Peto [1978] that smoking acts at both early and late stages, which would tend to obscure the predictive ability of age at start of smoking. A plot of the effect of age at initial exposure for both radon daughters and cigarette smoking is given in Figure 5.

# E. Time Since Cessation of Exposure
Day and Brown [1980] predicted the effect upon relative risk of time since cessation of exposure when a multistage model is assumed. They found that when exposure begins some time after infancy, excess relative risk increases, peaks, and then decreases with time since termination of exposure when the first stage is affected. When the penultimate (next-to-last) stage is affected, relative risk strictly decreases with time after last exposure. Figure 6 illustrates their predictions for the effect of time since cessation of exposure on the first four stages in a five-stage model with duration of exposure fixed at five years. In order to investigate the effect of cessation of exposure in this cohort, all miners were identified who had indicated retirement from uranium mining during the course of follow-up. Approximately 95% of the cohort had retired for more than one year prior to 1970. The average time since last exposure was 18.0 years for those miners not dying of lung cancer and 9.9 years for lung cancer cases. The time in months since last exposure was entered as a time-dependent covariable in the original model containing log of exposure, log of smoking, and age at initial exposure. The estimated coefficient of this term was negative and highly significant (β = -0.0056, p < 0.001).
Thus a miner's risk of dying from lung cancer decreases dramatically with each year outside the mines. Specifically, the model predicts that the risk of lung cancer 10 years after mining uranium is exp(-0.0056 x 120) = 0.511 relative to someone still mining with the same cumulative exposure, smoking history, and age. When a similar analysis of time since cessation of cigarette smoking was run, the results were inconclusive. The coefficient was very small and non-significant (β = 0.003, p = 0.75). However, since a relatively small number of miners were ex-smokers (7.7%), there is little power for detection of such an effect even if it actually exists. Figure 7 illustrates the effect of time since last exposure for both radon daughters and cigarette smoking. The implications of these results are essentially the same as those obtained by examination of age at initial exposure. The strong negative effect of time since last exposure implies that radon daughters act at a late stage in the carcinogenic process. The effect of stopping cigarette smoking, while based on a small amount of data, still indicates either an intermediate-stage effect or a combination of early- and late-stage effects.

# IV. ERRORS IN EXPOSURE DATA AND THEIR EFFECT UPON RISK ASSESSMENT
In animal carcinogenesis studies, exposures or doses are usually known with a high degree of accuracy and precision. However, the same cannot be said regarding epidemiologic quantitative risk studies. In most epidemiologic studies, the actual dose to target organs can only be estimated by dosimetric modeling. This is seldom attempted in quantitative risk assessments. The dosimetry of radon daughter exposure is very complex, involving such factors as respiration rates, particle size distribution, deposition in the lung, and radon/radon daughter equilibrium. Most risk assessments are modeled as functions of some exposure index, which is the method used in this report. It is the purpose of this section to estimate the magnitude of exposure errors and their effect upon quantitative risk models. According to Lundin et al. [1971], exposures in a given mine and year were estimated in one of four ways:
1. actual measurements
2. interpolation or extrapolation in time
3. geographic area estimation
4. estimates prior to 1950 based upon knowledge of ore bodies, ventilation practices, and earliest measurements.
These methods will subsequently be called Methods 1, 2, 3, and 4. In assessing the error associated with individual exposure determinations, it is first necessary to consider the variability introduced by each of the four methods.

# A. Magnitude of Error in Exposure Data

# Method 1
Table 5 provides a frequency count of white miners working underground from 1950-68 and the mean number of samples taken in each mine visited in those years. The Kusnetz procedure for measuring radon daughters was most often used during the period of study (Johnson and Schiager 1981). This is an area monitoring method based on alpha counts collected on a filter/pump apparatus. The resulting data were generally thought to be of good quality (Lundin et al. 1971). Data from mines in which 5 or more measurements were taken in a given year were analyzed. These data followed a lognormal distribution with little change over the period 1951-1968.
Samples taken prior to 1960 were collected largely by the U.S. Public Health Service, while post-1960 sampling was conducted by state mine inspectors. Therefore, the data were separated into pre- and post-1960 periods, and estimates of the coefficient of variation (CV) were made for each period. Results indicated a slight but non-significant increase in CVs after 1960 (106.6% vs 118.3%). Since the measurements were grab samples taken at different times within each mine, the total pooled CV=112.5% over the period 1951-1968 is assumed to include sampling errors, counting errors, and environmental fluctuations over time. This estimate agrees well with the CV of 110% found in an independent study of U.S. mines in the period 1973-79, when exposure levels were much lower [Schiager et al. 1981]. In other studies, however, an average CV of 30% was reported for area samples in Canadian mines [Makepeace and Stocker 1980], while fluctuations of 20-30% around daily means were found for radon measurements in non-uranium Norwegian mines [Berteig and Stranden 1981].

# Method 2

In order to assess the error in interpolating across gaps in sampling of 1 to 3 years, a simulation procedure was used (a sketch of this procedure follows this subsection). Mines having the longest periods of continuous annual measurements were identified. The even years' averages were then omitted, and the average of the two adjacent years was substituted. In this way it was possible to compare the observed annual average with the expected average had that year been missing. This strategy was repeated by imposing three-year gaps in the data and again using the average of the adjacent years to estimate the three intervening years. The error variance attributable to Method 2 was then calculated by:

σ² = Σ_i [log(O_i/E_i)]² / (N−1)

where O_i = actual measurement for an intervening year, and E_i = interpolated value estimated by the average of the adjacent years. The resulting CV was 120.8% for 1-year interpolation and 137.3% for 3-year interpolation. Since these results were not significantly different, they were pooled to yield a CV=131.9%.

# Method 3

This method used annual mine averages in the same geographic locality to estimate radon daughter levels in mines for which Methods 1 and 2 could not be used. In order to assess the error associated with this method, four of the uranium mining localities with the greatest number of annual measurements were selected. A simulation procedure similar to that used for Method 2 was employed. Annual averages for selected mines in these localities were omitted for 1 to 4 years. The averages for mines in the nearest district were substituted as the expected radon level had the annual average actually been missing. The error variance was calculated in the same way as for Method 2. The resulting CV was 148.6% for this method.

# Method 4

No measurements were available for the period prior to 1950. Therefore, the estimates made using knowledge of ore bodies, ventilation, and the earliest known measurements in these mines could not be verified. These estimates comprised less than 6% of the 34,120 annual averages used in exposure assessment. In addition, since only 8 percent of the total underground exposure time for the cohort occurred prior to 1950, the influence of these estimates should be minimal. However, since the error for this method was probably the greatest of the four methods, we estimated the overall CV for Method 4 to be 25% greater than that for Method 3, i.e., CV=186%. Table 6 shows the number of annual averages for each of the four methods.
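The leave-one-out interpolation check described for Methods 2 and 3 can be sketched as follows (hypothetical annual averages; the log-ratio error-variance formula above is assumed):

```python
import numpy as np

# Hypothetical annual average radon daughter levels (WL) for one mine with
# continuous measurements; the real series come from the mines described above.
years = np.arange(1951, 1961)
wl = np.array([4.0, 3.1, 3.5, 2.8, 2.2, 2.5, 1.9, 1.6, 1.8, 1.3])

# Simulate 1-year gaps: drop each even year and interpolate from its neighbors.
obs, est = [], []
for k in range(1, len(wl) - 1):
    if years[k] % 2 == 0:
        obs.append(wl[k])
        est.append(0.5 * (wl[k - 1] + wl[k + 1]))
obs, est = np.array(obs), np.array(est)

# Error variance on the log scale, then the implied lognormal CV.
sigma2 = np.sum(np.log(obs / est) ** 2) / (len(obs) - 1)
cv = np.sqrt(np.exp(sigma2) - 1.0)
print(f"interpolation-error CV = {100 * cv:.1f}%")
```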
Actual measurements comprised only about 10% of the data. In order to obtain an overall estimate of the relative error, a weighted average of the CVs for the four methods was calculated, with weights based on the number of determinations made by each method. The resulting overall CV=137%. The error associated with each miner's cumulative exposure can then be calculated using our estimate of the error in each radon daughter level (WL). The total cumulative exposure (WLM) for each miner is obtained from:

WLM = Σ_{i,j} (WL_ij)(UGMON_ij)

where WL_ij is the estimated exposure for mine i in year j, and UGMON_ij is the number of months spent underground in mine i during year j. The variance of WLM, assuming independence of the WL_ij, is then:

Var(WLM) = Σ_{i,j} (UGMON_ij)² Var(WL_ij) = Σ_{i,j} (UGMON_ij)² (CV)² (WL_ij)²

where CV is the coefficient of variation for the estimated exposure WL_ij. If we substitute our estimate of the overall CV=137% and use total cumulative exposure divided by total months underground (WLM/TOTMON) as an estimate of WL_ij for each individual miner, the average CV for cumulative exposure (WLM) is 0.97, i.e., a relative standard deviation of 97% of the total WLM for each miner. Since radon daughter measurements were taken in different areas of each mine and often at different times of the day or week, we assume that the variance in these measurements reflects the variance in exposure levels among individual miners, i.e., Var(WL_ij) = σ²_ijk, where σ²_ijk is the variance of the exposure of miner k in mine i and year j.

# B. Effect of Exposure Measurement Errors on Relative Risk Estimation

There appears to be a general impression that errors in exposure measurements usually cause an underestimation of relative risk. Indeed, Bross [1954] originally demonstrated that if misclassification is equal in two comparison populations, one will tend to underestimate differences in the proportions of diseased persons. Keys and Kihlberg [1963] qualified this concept by showing that relative risk is underestimated when misclassification errors are independent of disease and exposure relationships. In general, it has been shown by Copeland et al. [1977], among others, that relative risk estimates are biased too low in the presence of nondifferential misclassification (equal misclassification of disease in both exposed and unexposed groups). Little work has been done concerning the effects of errors in continuous measures of exposure upon relative risk estimates obtained from statistical models. It is this situation that is a potential problem for the analysis in this report. Prentice [1982] introduced a method for dealing with errors in individual exposure measures when using the Cox proportional hazards model. Prentice, and more recently Hornung [1985], have shown that the direction of bias in relative risk estimation depends upon the error distribution and the shape of the exposure-response model. In general, when the variability in individual exposure errors increases with the level of exposure and the relative risk model is supra-linear (curving upward), relative risk will actually be overestimated when exposure errors are ignored. The popular log-linear or exponential risk function is an example of a model that may often overestimate relative risk in the presence of errors whose magnitude increases with increasing levels of cumulative exposure.
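A toy Monte Carlo sketch of the direction of this bias (all values are illustrative and are not fitted to the cohort data): with multiplicative lognormal errors, the exponential model's inflation grows with the exposure level, while the power function's inflation factor, E[ε^b] = exp(b²σ²/2), is the same at every level, so risk ratios between exposure levels are preserved:

```python
import numpy as np

rng = np.random.default_rng(0)
sigma = 0.5                      # log-scale error SD (illustrative)
n = 500_000
x = np.array([100.0, 500.0])     # two true cumulative exposures (WLM)

# Multiplicative lognormal measurement error with median 1.
eps = rng.lognormal(0.0, sigma, size=(n, 1))
x_hat = x * eps

beta, b = 0.001, 0.5             # illustrative coefficients, not fitted values

# Exponential (log-linear) model: the inflation factor grows with exposure.
print(np.exp(beta * x_hat).mean(axis=0) / np.exp(beta * x))

# Power-function model: the inflation factor is identical at both exposure
# levels, so relative risks (ratios between levels) are undistorted.
print((x_hat ** b).mean(axis=0) / x ** b)
```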
As was reported earlier, the log-linear model did not provide the best fit to the data. Instead, the power function model, which involves the logarithms of cumulative exposure and cumulative cigarette smoking, provided a better fit. The effect upon risk estimates using this model was investigated for the case in which errors in exposure are lognormal, as indicated in the previous section. Without presenting the statistical details, it is sufficient to say that under these conditions (power function model and lognormal distribution of exposure errors) the effect upon relative risk estimates is negligible. If the exposure measurements were generally higher than those actually experienced by the miners, as mentioned in the 1971 Monograph, relative risk per WLM would be underestimated regardless of the distribution of exposure measurement errors. In summary, the degree of error in individual exposure measurements was quite high, an estimated CV of 97%. If, however, these individual errors were lognormally distributed about the annual average concentration in each mine, the degree of bias in relative risk estimates generated by the power function model would be minimal. Regardless of the form of the error distribution, the relative risks generated by the exposure-response model would be too low if the exposure measurements were systematically too high. Therefore, examination of the pattern of error in the exposure data suggests that the relative risks produced by the power function model are either unbiased or possibly somewhat low.

# V. QUANTITATIVE RISK ESTIMATES

The previous sections have outlined the protocol for risk model development, the selection of an appropriate quantitative risk model, the temporal factors influencing risk estimation, and the magnitude and effect of exposure measurement errors. These are factors requiring careful study before valid quantitative risk estimates can be made. In most risk assessments, results are reported relative to some unexposed population. In animal studies, a control group is generally used for this purpose. In life table analyses, expected mortality is obtained from some standard population, often that of the U.S. The problems inherent in the use of such external referents have been well documented [Enterline 1976]. Although a subcohort of miners unexposed to radon daughters would be the ideal referent group, there were no unexposed miners in the U.S. cohort. Since the proportional hazards model uses internal comparisons in generating risk estimates, risk projections relative to an unexposed population necessarily involve an extrapolation to zero exposure. In the case of the power function model, a background exposure of 0.2 WLM per year of age was added to every miner's cumulative total. All risk estimates are relative to someone exposed at this background rate. Therefore, quantitative relative risk estimates are somewhat sensitive to the choice of a background exposure rate. One way of checking the appropriateness of the model is to divide cumulative exposure into discrete intervals and calculate lung cancer risks in each interval relative to the risk experienced in the lowest interval. In this way, relative risk estimates are free of any assumed exposure-response function. If the risk model then fits the risk estimates in the selected intervals, one can be assured that the model is appropriate for quantitative risk estimation. The cumulative exposure intervals chosen for this analysis were: less than 20 WLM, 20-120, 120-240, 240-480, 480-960, 960-1920, 1920-3720, and greater than 3720 WLM.
Risk estimates in each interval are calculated relative to the risk in the interval below 20 WLM and are plotted at the mean exposure in each interval: 66.6, 179, 351, 698, 1352, 2579, and 5416 WLM, respectively. Figure 8 illustrates how these interval estimates are uniformly lower than those produced by the risk model when 0.2 WLM/year is used as the background rate of exposure. The shape of the risk model, however, shows remarkably good agreement with the pattern of relative risk estimates in the selected intervals. This implies that the quantitative risk model is appropriate, exclusive of the intercept. The discrepancy could be due either to an improper choice of baseline exposure rate or to the fact that all interval estimates are relative to exposure in the lowest interval, 0-20 WLM. If there is some excess risk in this interval relative to a truly unexposed population, the interval estimates will be too low. The value of 0.2 WLM per year of age is an estimate of the background exposure rate in the overall U.S. population [NCRP Report No. 77, 1984]. Exposures near ore-bearing lands are known to be considerably higher than average [NCRP Report No. 45, 1975]. Therefore, it is probable that background exposures in the Colorado Plateau area are higher than average U.S. levels.

[Note to Figures 8 and 9: dotted lines and vertical bars represent 95% confidence limits.]

In the interest of using a background more in line with the exposures received by persons living in the Colorado Plateau, the background exposure rate was increased to 0.4 WLM/year. This produced a quantitative risk model that agreed very well with the interval estimates, as can be seen in Figure 9. Using this model, relative risk estimates were calculated for cumulative radon daughter exposures in the range of 30 to 120 WLM, corresponding to exposure rates of one to four WLM/year over a 30-year working lifetime. These estimates range from a relative risk of 1.42 at 30 WLM to 2.07 at 120 WLM, compared with someone of the same age and smoking habits having a cumulative lifetime background exposure of 24 WLM and a background exposure rate of 0.4 WLM/year. These estimates (0.9 to 1.4 excess relative risk per 100 WLM) are slightly higher than those reported by Muller et al. [1983] for the Ontario miners, but somewhat less than the estimates of Radford and Renard [1984] for the Swedish iron miners. Obviously, these estimates are subject to the usual caveats concerning extrapolation from higher cumulative exposures and exposure rates. Because relatively few data are currently available in this cohort below 120 WLM (10 lung cancer deaths among 709 miners), there may be some doubt that the model used is actually appropriate at these low levels. However, the pattern of relative risk estimates produced in each of the categorized exposure levels suggests that this model fits the data well in the range of 60 to 6000 WLM.
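The sensitivity to the assumed background rate can be illustrated with a simplified power-function form. The exponent b=0.43 and the 60 years of age (implied by the quoted 24 WLM lifetime background at 0.4 WLM per year of age) are illustrative choices that roughly reproduce the 1.42 estimate at 30 WLM; the fitted model also contains smoking and exposure-rate terms, so it will not match these numbers exactly:

```python
# Simplified power-function relative risk, RR = ((E + B) / B) ** b, where E is
# occupational cumulative exposure (WLM) and B = (background rate) x (age).
def relative_risk(e_wlm, background_wlm_per_year, age_years=60, b=0.43):
    background = background_wlm_per_year * age_years
    return ((e_wlm + background) / background) ** b

for e in (30, 60, 120):
    print(e, "WLM:",
          round(relative_risk(e, 0.2), 2), "(at 0.2 WLM/yr)",
          round(relative_risk(e, 0.4), 2), "(at 0.4 WLM/yr)")
```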
# Magnitude and Effect of Errors in Exposure Measurements

Analyses of the errors associated with the four methods of estimating uranium mine exposure levels indicated a lognormal distribution of errors with a relative standard deviation, or CV, of 97 percent. Although errors of this magnitude may cause overestimation of relative risk when the log-linear risk model is used, the better-fitting power function model is generally insensitive to errors of this type. In fact, if estimated exposure levels were systematically higher than those actually received by the miners [Lundin et al. 1971], relative risks per unit WLM would be underestimated for these data.

# Quantitative Risk Estimates

Present-day radon daughter exposures are considerably less than those experienced in the past by uranium miners. There is also current interest in low-level exposure of the general population to indoor radon and its decay products. Consequently, the primary cumulative exposure range of interest in risk assessment appears to be below 120 WLM. Although approximately 20 percent of the cumulative exposures in this study were below this level, there had been only 10 lung cancer deaths among this subgroup as of the end of 1982. Until this cohort is followed to extinction, epidemiologic models such as the one produced in this report will be necessary to evaluate the risk of lung cancer mortality at these lower exposures. The model developed for this report provides a very good fit to the data in the range of 60 to 6000 WLM. It seems reasonable that predictions based upon this model would be reliable, at least for occupational exposure of adult white males. Little or no mortality data are available regarding women and children. The risk estimates provided in Table 7 are presented as an evaluation based upon careful consideration of all factors thought to influence such long-term mortality studies. All of the caveats associated with such evaluations apply to some degree to these results.

¹Exclusive of background exposure.
²Risks are calculated using the exposure-rate interaction model in Table 6, relative to miners of the same age and smoking habits with a cumulative lifetime background exposure of 24 WLM and a background exposure rate of 0.4 WLM/year.

# APPENDIX III

# A. Introduction

This appendix contains examples of engineering control methods that can be used to reduce miners' exposure to radon progeny in underground uranium mines; the same methods are applicable to other hard rock mines. Many of these control methods have traditionally been used in uranium mines, yet only recently have researchers (primarily from the Bureau of Mines) studied the efficacy of these methods [Bates and Franklin 1977; Bloomster et al. 1984a, 1984b; Franklin et al. 1975a, 1975b, 1977, 1981, 1982; Steinhausler et al. 1981].

# B. Mechanical Ventilation

Mechanical ventilation is the primary and most successful technique for reducing exposure to radon progeny. Average measurements of 2 to 200 working levels (WL) of radon progeny were common in U.S. uranium mines during the early 1950s, before mechanical ventilation became prevalent [Lundin et al. 1971]. In contrast, during 1979 and 1980, the average concentrations of radon progeny recorded by MSHA ranged from 0.30 to 0.46 WL in the production areas of 61 underground uranium mines [Cooper 1981]. Thus the concentration of radon progeny in U.S. uranium mines has been greatly decreased, mainly because of improved ventilation. Sweden has also successfully reduced radon progeny concentrations in mines with improved mechanical ventilation; the average annual exposure for nonuranium miners in Sweden decreased from 4.7 working level months (WLM) in 1970 to 0.7 WLM in 1980 [Snihs 1981].
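For orientation, the sketch below converts these concentrations into annual exposures, using the conventional definition that 1 WLM is exposure to 1 WL for 170 hours; the 2,000 hours/year schedule is an illustrative assumption, not a figure from this report:

```python
# Cumulative exposure (WLM) = concentration (WL) x hours exposed / 170 h.
def annual_wlm(concentration_wl, hours_per_year=2000):
    return concentration_wl * hours_per_year / 170.0

# At the 1979-80 average production-area levels reported by MSHA:
print(round(annual_wlm(0.30), 1))  # ~3.5 WLM/year
print(round(annual_wlm(0.46), 1))  # ~5.4 WLM/year
```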
# 1. General Principles

Dilution ventilation in large mines consists of primary and secondary ventilation systems. In the primary system, fresh air is brought into the mine either through separate air shafts or through mine entrances used for miner access and equipment transport. The air can be blown in by a fan located at the surface or drawn in by a fan located inside the mine. Once in the mine, the air is blown or drawn through the main active passageways and then is pushed or drawn out of the mine through special ventilation shafts or through openings used to remove ore. The secondary or auxiliary ventilation system provides fresh air to miners working in areas, such as stopes and faces, where access comes from a single shaft or drift and the work area is therefore a dead end. For these areas, the air is often removed through the same shaft that was used to bring in the air. Fresh air for the secondary system is supplied by the primary air system in the main passageway. To prevent the mixing of fresh and contaminated air in the shaft or drift leading to the dead end, the secondary system usually consists of ductwork with a fan to blow or exhaust fresh air from the main passageway to the face. The contaminated air then passively returns to the main passageway (because of a pressure gradient) through the shaft without contaminating the supply air. The contaminated air at the stope may also be brought back to the main passageway through a second duct and fan system. Once returned to the main passageway, the contaminated air joins the primary exhaust air stream, which is then carried out of the mine.

# 2. Designing a Dilution Ventilation System

Ventilation requirements must be considered when planning and designing the mine. Adding mine ventilation as an afterthought, once the mine has been designed or completed, is usually more expensive and less efficient. Consideration should be given to the following when designing the ventilation plan for a mine [Ferdinand and Cleveland 1984; Bossard et al. 1983]:

• Identify the outline of the ore body that will be mined;
• Determine the rate of emanation of radon from the rock in which the ore occurs;
• Place as much of the primary ventilation system as possible, including entrances and passageways, in barren ground (i.e., ground not containing ore);
• Set up passageways so that a split or parallel system of ventilation can be used;
• Set up the mine so that the number of working faces ventilated in series is minimized;
• Design the mine so that air inlets are located on one side of the ore body and exhaust airways on the opposite side of the ore body;
• Design the mine so that the distances ventilation air travels in the mine are minimized (reduce or eliminate reentrainment and short circuits);
• Design the mine so that adequate volumes of air can be provided without high pressure drops across air controls in haulage and production areas;
• Design the ventilation system to account for increasing concentrations of radon gas (and therefore radon progeny), since as the mine ages there will be more surface area for gas exchange into the mine; and
• Consider control devices, fans, push-pull systems, and the minimization of leaks when designing the system.

# 3. Primary Ventilation System

The primary ventilation system delivers fresh air for the secondary air system and removes contaminated air from the secondary air system. The design of the primary air system is discussed in the following paragraphs.

# a. Split or Parallel Ventilation Systems

A "split" or "parallel" ventilation system provides all or just a few working areas with fresh air that has not previously been used to ventilate other working areas.
After the working areas are ventilated, the air is pushed or drawn back into the primary system, where it is moved out of the mine. By contrast, in a "series" ventilation system, all areas are ventilated by a single continuous air circuit. The advantages of a split or parallel system include the reduction of both the residence time and the cumulative contamination of the air [Ferdinand and Cleveland 1984]. A series system, on the other hand, has several disadvantages. In addition to its long residence times and the cumulative build-up of air contaminants from one area to another, its disadvantages include the following:

• the high air velocities that are often required;
• higher power costs associated with moving air at high velocities because of increased static pressures, unless additional ventilation shafts are constructed [Rock et al. 1971]; and
• the potential spread of toxic gases to all areas of the mine in the event of a fire.

However, to keep residence time down in a split or parallel ventilation system, the air velocities to the multiple drifts must be maintained. This will increase the fan and power requirements; additional ventilation shafts may also be necessary.

# b. Control Devices

Sliding door regulators are used to prevent air from passing through openings where miners and equipment must pass periodically. The problem with doors is that, to be effective, they must be closed after being used. Doors must also be well constructed to remain secure with repeated use; steel doors in substantial frames are most commonly used in Canada [Rock and Walker 1970].

# c. Pushing Versus Pulling Ventilation Systems

The pressure on the air intake side of a mine is always greater than on the exhaust side, regardless of whether a pushing or pulling ventilation system is used. The difference is that the intake-side pressure is greater than atmospheric pressure in a pushing system, whereas in an exhaust or pulling system, the pressure on the intake side is below atmospheric pressure. Exhausting (pulling) offers some advantages over a pushing system. For example, forcing air into haulageways and escape areas often requires air locks and other equipment. Exhaust systems draw air from these locations without the need for air locks and remove air from the mine through special airways to exhaust fans.

# 4. Secondary (Auxiliary) Ventilation System

The secondary (auxiliary) ventilation system brings sufficient fresh air to the working area from the primary air system without mixing it with the contaminated air returning from the face.

# a. Use of Ducts

# b. Blowing Duct System (Push System)

Another advantage of an air-blowing system is that it increases pressure. Measurements of the radon gas content of air exhausted from mines have shown that the radon gas emitted into the atmosphere was 20% less with an air-blowing system than with an exhaust system [Franklin 1981]. This indicates that less radon gas diffused into the ventilated areas when an air-blowing system was used than when an exhaust system was used.

# c. Exhaust Duct System (Pull System)

In the exhausting or pulling system, contaminated air is drawn from the working face by a duct that runs from the face, through the access tunnel, to the main passageway of the primary air system. Fresh air is then drawn into the access tunnel toward the work area by the pressure gradient created by the removal of air.
Exhausting (pulling) air from the face instead of blowing (pushing) offers the following advantages:

• Fresh incoming air is maintained in the tunnel used by miners to access the active stope, and

# d. Push-Pull System

A push-pull system contains two ducts in the accessway, one for pushing clean air to the face and the other for exhausting air from the face back to the primary air system. This system has many of the advantages of both the push and the pull systems, including the following:

• blowing air that sweeps across and ventilates the active face, thus providing good dilution in work areas;
• efficient collection of contaminants near the work face; and
• reduced contamination of the air in the access tunnel.

The main disadvantages are the cost and the additional drift area the system occupies [Rock et al. 1971].

# 5. Overpressurization and Mine Pumping

The amount of radon gas diffusing into mine spaces from interstitial rock depends on the pressure in the mine space. The lower the atmospheric pressure in the mine space compared with the pressure in the interstitial rock, the more radon gas will pass from the rock into the mine space. Conversely, the greater the pressure in the mine space compared with the rock, the less radon gas will seep into the mine space. Overpressurization and mine pumping are two control measures that take advantage of this principle to reduce concentrations of radon gas. In overpressurization, more ventilation air is pushed into mine spaces than is removed. Although Edwards and Bates [1980] stated that "nothing that we have found provides mining companies with sufficient guidelines for applying the overpressurized ventilation system effectively," they conducted a mathematical study of overpressurization and concluded that overpressurization does decrease the radon flux. They estimated that a 2% pressure differential in a sandstone matrix would result in a 50% reduction in radon flux with mine sink lengths of 100 meters or less. A mine sink is an area, either in the mine itself or in a naturally occurring space or lattice in the matrix, where the interstitial air can flow. If the distance between the sink and the mine space approaches 200 meters, the benefit of overpressurization is lost. Because of the dramatic increase in radon gas in the sink area during overpressurization of work areas, no miners should be allowed in those sinks without proper respiratory protection. However, many open spaces that can serve as sinks are filled in and cannot be occupied. The Bureau of Mines is gathering information on the effects of overpressurization in mines. Data from the pressurization of an enclosed chamber in a mine indicated that the radon concentration was 99% lower than the concentration under static conditions and 92% lower than the concentration under controlled ventilation conditions [Bates and Franklin 1977]. In a study by Schroeder et al. [1966] of mine areas pressurized by 10 mm of mercury, the radon flux decreased 5- to 20-fold compared with normal ventilation conditions. In mine pumping, a negative pressure is created in the mine space by sealing the air intake openings and permitting the exhaust fans to operate. This is done during an off-shift when no miners are in the mine. Because of the negative pressure created in the mine with respect to the surrounding rock, radon is drawn into the mine space from the interstitial rock at a rate higher than would occur under static conditions.
The air intakes must be opened well before miners enter the mine to permit the ventilation system to remove the radon gas and radon progeny that have accumulated in the mine spaces. After this accumulation has been removed, the mine spaces should have lower concentrations of radon gas (and therefore radon progeny) when the miners reenter the mine, because much of the radon gas in the surrounding interstitial rock has been removed and is not available to diffuse into the working areas. However, monitoring of these areas would be required before miners are allowed to enter. More studies are needed to determine the effectiveness of this control procedure [Bates and Franklin 1977].

# Membrane Sealants Used on Bulkheads

Summers et al. [1982] tested the efficacy of this approach and found that the amount of radon gas escaping through the surrounding rock was insufficient to warrant the uniform use of a wall sealant, provided that all cracks, fissures, and holes were sealed to prevent major leaks.

# Negative Air Pressure Behind a Bulkhead

A slight negative pressure behind the bulkhead, about 0.03 cm of water with respect to active areas, will prevent radon gas leaks into the fresh ventilation air [Thomas et al. 1981].

Measures for reducing radon emissions from backfilled stopes include: (1) covering the backfill with one meter of clean sand, (2) sealing the surface of the tailings, (3) using a bulkhead to seal the backfilled stope and maintaining a negative pressure behind the bulkhead, and (4) using nonradioactive materials as backfill instead of mill tailings.

# Efficiency

In summary, backfilling with uranium tailings can be as effective as bulkheading in reducing radon progeny emissions, although it is more costly. Because high radon progeny concentrations are emitted from wet backfill and during the backfilling process, backfilling should not be used in active mine areas, and miners should be protected from overexposure during backfilling operations.

# E. Sealants Used on Mine Walls

This section describes sealants used as diffusion barriers against radon gas, including how the sealants are applied and the materials that serve best as sealants. The effectiveness of sealants in reducing radon emanation and exposure is also discussed.

# G. Automation

Another radon progeny control method is increased automation. Techniques such as robotics that minimize the time the miner spends in high-exposure areas of the mine and in activities such as drilling, blasting, or loading ore will decrease the miner's radiation exposure. Although robotics at present has a limited place in mines, it may be possible in the future to further automate the ore mining process.

1. Two different sampling days are randomly selected from each 2-week block of time.

2. The stations within a cluster are to be sampled on the same workdays and work shifts. All stations within a cluster are to be sampled alternately, seven times on each sampling day, each time in independent random order. During the work shift, the seven sampling periods for the entire cluster shall be equally spaced in time. For example, the three stations A, B, and C could be considered a cluster and sampled as ABC, BCA, ACB, CBA, CAB, BAC, and ACB during seven successive intervals of approximately equal duration.
If it is not feasible to sample in this manner, then sampling can be conducted along the most efficient path, but with a different, randomly determined starting point on each day (e.g., BCA, BCA, ..., BCA during one sampling day, and ABC, ABC, ..., ABC or CAB, CAB, ..., CAB during other sampling days).

3. The estimated average work shift concentration (α_i) for each sampling day (i = 1, 2, ..., 12) is computed from an analysis of the seven grab samples taken on that day. Formulae for this computation are contained in section G.

4. Whenever α_i for a particular station exceeds 0.14 WL, that station shall be resampled on the next workday. [Note: In this case, α_i = α_A, and sampling on the "next workday" (day B) is in addition to the two randomly selected sampling days required in a 2-week block of time.]

a. If α_B (the estimated average work shift concentration on the next workday) is < 0.14 WL, then exposure monitoring shall continue as described starting at section C,1.

b. If α_B also exceeds 0.14 WL, then: (1) steps shall be taken to reduce the radon progeny concentration in that work area by implementing work practices and engineering controls, (2) respiratory protection shall be required for all miners entering that work area, and (3) grab sampling as described in section C,2 shall be conducted on a consecutive daily basis. Grab sampling shall continue on a consecutive daily basis until the estimated average work shift concentrations on two consecutive workdays (α_A and α_B) are both < 0.10 WL. When α_A and α_B are both < 0.10 WL, the requirements for respiratory protection are waived, and exposure monitoring can revert to the schedule described starting at section C,1. [Note: A new reference period shall begin at this time, requiring 12 randomly selected sampling days, the first of which is to be coded as i = 1.] This criterion (as discussed in section H,2) provides early confirmation that the corrective steps taken by the mine operator have been effective in limiting the average work shift concentration of radon progeny to a level not exceeding 1.5 times the recommended exposure limit (REL) of 1/12 WL.

# G. Statistical Considerations and Data Analysis Formulae

The following statistical notations are used in the sampling strategy:

C_ij = measured concentration of radon progeny in the jth grab sample taken on the ith sampling day, where j = 1, 2, ..., 7 for each day and i = 1, 2, ..., 12 (2 workdays selected at random from each of six consecutive blocks of time).

C_Aj = measured concentration of radon progeny in the jth grab sample taken on day A, where j = 1, 2, ..., 7.

C_Bj = measured concentration of radon progeny in the jth grab sample taken on the next workday following day A, where j = 1, 2, ..., 7.

When the estimated average work shift concentration is greater than 0.14 WL on two consecutive workdays, substantial evidence exists that the long-term average work shift concentration exceeds 1/12 WL. Therefore, when α_A and α_B both exceed 0.14 WL in a work area, NIOSH recommends that radon progeny concentrations be reduced in that work area by implementing work practices and engineering controls, and that the use of respiratory protection be required for all miners entering that work area. These recommendations are also made when the 95% lower confidence limit for the long-term average work shift concentration (LCL) exceeds 1/12 WL (see section D,4).

# Return to Compliance with the REL

The NIOSH sampling strategy uses criteria with approximately 90% confidence for an initial determination that a work area is tentatively back in compliance with the REL. Specifically, estimated average work shift concentrations from two consecutive workdays (i.e., α_A and α_B) that are both < 0.10 WL were chosen as a criterion demonstrating reasonable evidence that the average radon progeny concentration is being controlled to < 0.125 WL (i.e., 1.5 times the REL). A sketch of this day-level decision logic follows.
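Function and variable names below are ours; α_i is taken here as the arithmetic mean of the day's seven grab samples, whereas the actual estimator is defined by the section G formulae, which are not shown here:

```python
# Sketch of the grab-sample decision rules from item 4 of the sampling strategy.
ACTION_LEVEL = 0.14   # WL; exceeding this triggers resampling the next workday
CLEARANCE = 0.10      # WL; two consecutive days below this restore normal monitoring

def day_average(grab_samples_wl):
    """Estimated average work shift concentration for one sampling day
    (assumed here to be the mean of the day's seven grab samples)."""
    assert len(grab_samples_wl) == 7, "seven grab samples per sampling day"
    return sum(grab_samples_wl) / 7.0

def evaluate(alpha_a, alpha_b):
    """Apply the two-day rules to consecutive daily estimates."""
    if alpha_a > ACTION_LEVEL and alpha_b > ACTION_LEVEL:
        return "controls + respirators + daily sampling"    # item 4b
    if alpha_a < CLEARANCE and alpha_b < CLEARANCE:
        return "waive respirators; resume routine schedule"  # return to compliance
    return "continue monitoring"

print(evaluate(day_average([0.18, 0.15, 0.20, 0.16, 0.17, 0.19, 0.15]), 0.16))
```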
Given the levels of intraday and interday variability observed in the Johnson [1978] data set, a work area with an average work shift concentration of 0.125 WL (i.e., 50% above the REL of 1/12 WL) has a 0.90 probability of having one or both of a pair of consecutive estimated average work shift concentrations above 0.10 WL. This "2-day" decision rule limits the amount by which a work area's average work shift concentration may exceed 1/12 WL and remain undetected. The rule also has the advantage of permitting an early return to normal operations after a period of corrective actions to reduce exposure concentrations, at the expense of having less than high confidence that the REL is not being exceeded by more than 50%. However, only a small proportion of time passes until the next sampling day (as specified in the sampling strategy relative to the year), so the 2-day rule limits the contribution of a temporarily excessive exposure in a work area to a miner's cumulative annual exposure. At a later time, the lower confidence limit criterion for noncompliance determined after 12 randomly selected sampling days (i.e., LCL > 1/12 WL) would be likely to detect a statistically significant increase above the REL if the long-term average work shift concentration were as high as 0.125 WL.

# Less Frequent Exposure Monitoring

The upper confidence limit criterion (i.e., UCL < 1/12 WL) gives 95% confidence that the long-term average work shift concentration is not above 1/12 WL, under the assumption that the α_i exhibit lognormally distributed random variation. The additional requirement that α_i and α_{i+1} not exceed 0.14 WL is meant to detect a temporarily or periodically high average work shift concentration (i.e., high values of α_i that are not sustained for the full block of time from which the 12 sampling days were selected). When both of these requirements are met, only 2 randomly selected sampling days are required per 26-week block of time.

# Cessation of Exposure Monitoring

UCL < 0.063 WL gives greater than 95% confidence that the long-term average work shift concentration (α) is < 0.063 WL (i.e., α is no larger than 75% of the REL), under the assumption that the α_i exhibit lognormally distributed random variation. Under the additional assumption that the geometric standard deviations (GSDs) for intraday and interday (lognormal) variability are similar to those reported in Johnson [1978], the criterion that α_12 (the estimated average work shift concentration on the last of the 12 sampling days) be < 0.033 WL gives 95% confidence that a projected future reference period would have a long-term average work shift concentration < 0.063 WL.

reported [Meyer et al. 1975]. However, several recent studies suggest that this is not a practical concern, at least not in healthy individuals [Bjurstedt et al. 1979; Arborelius et al. 1983; Dahlback and Balldin 1984]. Theoretically, the increased fluctuations in thoracic pressure caused by breathing with a respirator might constitute an increased risk to subjects with a history of spontaneous pneumothorax. Few data are available in this area.
While an individual is using a negative-pressure respirator with relatively high resistance during very heavy exercise, the usual maximal peak negative oral pressure during inhalation is about 15-17 cm of water [Dahlback and Balldin 1984]. Similarly, the usual maximal peak positive oral pressure during exhalation is about 15-17 cm of water, which might occur with a respirator in a positive-pressure mode, again during very heavy exercise [Dahlback and Balldin 1984]. By comparison, maximal positive pressures such as those during a vigorous cough can reach 200 cm of water [Black and Hyatt 1969]. The normal maximal negative pleural pressure at full inspiration is -40 cm of water [Bates et al. 1971], and normal subjects can generate -80 to -160 cm of negative water pressure [Black and Hyatt 1969]. Thus, while vigorous exercise with a respirator does alter pleural pressures, the risk of barotrauma would seem to be substantially less than that of coughing. In some asthmatics, an asthmatic attack may be exacerbated or induced by a variety of factors, including exercise, cold air, and stress, all of which may be associated with wearing a respirator. While most asthmatics who are able to control their condition should not have problems with respirators, a physician's judgment and a field trial may be needed in selected cases.

# Cardiac Effects

The added work of breathing from respirators is small and could not be detected in several studies [Gee et al. 1968; Hodous et al. 1983]. A typical respirator might double the work of breathing (from 3% to 6% of total oxygen consumption), but this is probably not of clinical significance [Gee et al. 1968]. In concordance with this view, several other studies indicated that, at the same workloads, heart rate does not change with the wearing of a respirator [Raven et al. 1982; Harber et al. 1982; Hodous et al. 1983; Arborelius et al. 1983; Petsonk et al. 1983]. In contrast, the added cardiac stress due to the weight of a heavy respirator may be considerable. A self-contained breathing apparatus (SCBA) may weigh up to 35 pounds. Heavier respirators can reduce maximum external workloads by 20% and similarly increase heart rate at a given submaximal workload [Raven et al. 1977]. In addition, many uses of SCBA (e.g., for firefighting and hazardous waste site work) also necessitate the wearing of 10-25 pounds of protective clothing.

# Miscellaneous Health Effects

In addition to the health effects (described above) associated with wearing respirators, specific groups of respirator wearers may be affected by the following factors:

# (1) Corneal Irritation or Abrasion

Corneal irritation or abrasion might occur with exposure. This would be a problem primarily with quarter- and half-face masks, especially with particulate exposures. However, exposures could also occur with full-face respirators because of leaks or ill-advised removal of the respirator for any reason. While corneal irritation or abrasion might also occur without contact lenses, their presence is known to increase this risk substantially.

# (2) Loss or Misplacement of a Contact Lens

The loss or misplacement of a contact lens by an individual wearing a respirator might prompt the wearer to remove the respirator, thereby resulting in exposure to the hazard as well as to the potential problems noted above.
# (3) Eye Irritation from Respirator Airflow

The constant airflow of some respirators, such as powered air-purifying respirators (PAPRs) or continuous-flow air-line respirators, might irritate the eyes of a contact lens wearer.

# B. Suggested Medical Evaluation and Criteria for Respirator Use

The following NIOSH recommendations allow latitude for the physician in determining a medical evaluation for a specific situation. More specific guidelines may become available as knowledge increases regarding human stresses from the complex interactions of worker health status, respirator usage, and job tasks. While some of the following recommendations should be part of any medical evaluation of workers who wear respirators, others are applicable to specific situations.

• A physician should determine fitness to wear a respirator by considering the worker's health, the type of respirator, and the conditions of respirator use.

This recommendation leaves the final decision on an individual's fitness to wear a respirator to the person best qualified to evaluate the multiple clinical and other variables. Much of the clinical and other data could be gathered by other personnel. It should be emphasized that the clinical examination alone is only one part of the fitness determination. Collaboration with foremen, industrial hygienists, and others may often be needed to better assess the work conditions and other factors that affect an individual's fitness to wear a respirator.

• A medical history and at least a limited physical examination are recommended.

The medical history and physical examination should emphasize evaluation of the cardiopulmonary system and should elicit any history of respirator use. The history is an important tool in medical diagnosis and can be used to detect most problems that might require further evaluation. The objectives of the physical examination should be to confirm the clinical impression based on the history and to detect important medical conditions (such as hypertension) that may be essentially asymptomatic.

• While a chest X-ray and/or spirometry may be medically indicated in some fitness determinations, these should not be performed routinely.

In most cases, the hazardous situations requiring the wearing of respirators will also mandate periodic chest X-rays and/or spirometry for exposed workers. When such information is available, it should be used in the determination of fitness to wear respirators. Data from routine chest X-rays and spirometry are not recommended solely for determining whether a respirator should be worn. In most cases, with an essentially normal clinical examination (history and physical), these data are unlikely to influence the respirator fitness determination; additionally, the X-ray would be an unnecessary source of radiation exposure to the worker. Chest X-rays in general do not accurately reflect a person's cardiopulmonary physiologic status, and limited studies suggest that mild to moderate impairment detected by spirometry would not preclude the wearing of respirators in most cases. Thus it is recommended that chest X-rays and/or spirometry be done only when clinically indicated.

• The recommended periodicity of medical fitness determinations varies according to several factors but could be as infrequent as every 5 years.

Federal or other applicable regulations shall be followed regarding the frequency of respirator fitness determinations.
The guidelines for most work conditions for which respirators are required are shown in

• The respirator wearer should be observed during a trial period to evaluate potential physiological problems.

In addition to considering the physical effects of wearing respirators, the physician should determine whether wearing a given respirator would cause extreme anxiety or a claustrophobic reaction in the individual. This could be done during training, while the worker is wearing the respirator and engaged in some exercise that approximates the actual work situation. Present OSHA regulations state that a worker should be provided the opportunity to wear the respirator "in normal air for a long familiarity period..." [29 CFR 1910.134(e)(5)].* This trial period should also be used to evaluate the ability and tolerance of the worker to wear the respirator [Harber 1984]. The trial period need not be associated with respirator fit testing and should not compromise the effectiveness of the vital fit testing procedure.

*CFR = Code of Federal Regulations. See CFR in references.

• Examining physicians should realize that the main stress of heavy exercise while using a respirator is usually on the cardiovascular system and that heavy respirators (e.g., SCBA) can substantially increase this stress.

Accordingly, physicians may want to consider exercise stress tests with electrocardiographic monitoring when heavy respirators are used, when cardiovascular risk factors are present, or when extremely stressful conditions are expected. Some respirators may weigh up to 35 pounds and may increase workloads by 20 percent. Although a lower activity level could compensate for this added stress [Manning and Griggs 1983], a lower activity level might not always be possible. Physicians should also be aware of other added stresses, such as heavy protective clothing and intense ambient heat, that increase the worker's cardiac demand. As an extreme example, firefighters who use SCBA inside burning buildings may work at maximal exercise levels under life-threatening conditions. In such cases, the detection of occult cardiac disease, which might manifest itself during heavy stress, may be important. Some authors have either recommended stress testing [Kilbom 1980] or at least its consideration in the fitness determination [ANSI 1984]. Kilbom [1980] has recommended stress testing at 5-year intervals for firefighters below age 40 who use SCBA, and at 2-year intervals for those aged 40-50. He further suggested that firefighters over age 50 not be allowed to wear SCBA. Exercise stress testing has not been recommended for medical screening for coronary artery disease in the general population [Weiner et al. 1979; Epstein 1979]. It has an estimated sensitivity of 78% and specificity of 69% when the disease is defined by coronary angiography [Weiner et al. 1979; Nicklin and Balaban 1984]. In a recent 6-year prospective study, stress testing to predict heart attacks showed a positive predictive value of 27% when the prevalence of disease was 3.5% [Giagnoni et al. 1983; Folli 1984]. While stress testing has limited effectiveness in medical screening, it could detect individuals who may not be able to complete the heavy exercise required in some jobs. A definitive recommendation regarding exercise stress testing cannot be made at this time. Further research may determine whether it is a useful tool in selected circumstances.
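To show why screening performance depends so strongly on prevalence, the sketch below applies Bayes' rule to the operating characteristics quoted above; note that the 27% predictive value from Giagnoni et al. was observed prospectively for a different endpoint and is not derived from this arithmetic:

```python
def positive_predictive_value(sensitivity, specificity, prevalence):
    """Bayes' rule: P(disease | positive test)."""
    true_pos = sensitivity * prevalence
    false_pos = (1.0 - specificity) * (1.0 - prevalence)
    return true_pos / (true_pos + false_pos)

# Sensitivity 78%, specificity 69% (angiographically defined disease), and
# prevalence 3.5%: the theoretical PPV is only about 8%.
print(round(positive_predictive_value(0.78, 0.69, 0.035), 3))
```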
• An important concept is that "general work limitations and restrictions identified for other work activities also shall apply for respirator use" [ANSI 1984].

In many cases, if a worker is physically able to do an assigned job while not wearing a respirator, the worker will in most situations not be at increased risk when performing the same job while wearing a respirator.

• Because of the variability in the types of respirators, work conditions, and workers' health status, many employers may wish to designate categories of fitness to wear respirators, thereby excluding some workers from strenuous work situations involving the wearing of respirators.

Depending on the various circumstances, several permissible categories of respirator usage are possible. One conceivable scheme would consist of three overall categories: full respirator use, no respirator use, and limited respirator use (including "escape only" respirators). The last category excludes heavy respirators and strenuous work conditions. Before using the following conditions to classify workers into the various categories, it is critical that the physician be aware that these conditions have not been validated and are presented only for consideration. The physician should modify the use of these conditions based on actual experience, further research, and individual worker sensitivities. The physician may also wish to consider the following conditions in selecting or permitting the use of respirators:

- History of spontaneous pneumothorax;
- Claustrophobia/anxiety reaction;
- Use of contact lenses (for some respirators);
- Moderate or severe pulmonary disease;
- Angina pectoris, significant arrhythmias, recent myocardial infarction;
- Symptomatic or uncontrolled hypertension; and
- Advanced age.

Wearing a respirator would probably not play a significant role in causing lung damage such as pneumothorax. However, without good evidence that wearing a respirator would not cause such lung damage, the physician would be prudent to prohibit an individual with a history of spontaneous pneumothorax from wearing a respirator. Moderate lung disease is defined by the Intermountain Thoracic Society [Kanner and Morris 1975] as a ratio of forced expiratory volume in one second (FEV1) to forced vital capacity (FVC), i.e., FEV1/FVC, of 0.45 to 0.60, or an FVC of 51% to 65% of the predicted FVC value. Similar arbitrary limits could be set for age and hypertension. It would seem more reasonable, however, to combine several risk factors into an overall estimate of fitness to wear respirators under certain conditions. Here the judgment and clinical experience of the physician are needed. Many impaired workers would even be able to work safely while wearing respirators if they could control their own work pace, including having sufficient time to rest.

# C. Conclusion

Individual judgment is needed to determine the factors affecting an individual's fitness to wear a respirator. While many of the preceding guidelines are based on limited evidence, they should provide a useful starting point for a respirator fitness screening program. Further research is needed to validate these and other recommendations currently in use. Of particular interest would be laboratory studies involving physiologically impaired individuals and field studies conducted under actual day-to-day work conditions.
# APPENDICES

# APPENDIX I

# EVALUATION OF EPIDEMIOLOGIC STUDIES EXAMINING THE LUNG CANCER MORTALITY OF UNDERGROUND MINERS

Confounding Bias: A potential attribute of data. In measuring an association between an exposure and a disease, a confounding factor is one that is associated with the exposure and is independently a cause of the disease. Confounding bias can be controlled if information on the confounding factor is available.

Coulomb: The charge flowing past a point of a circuit in one second when there is a current of one ampere in the circuit; also, the aggregate charge carried by 6 × 10^18 electrons.

Electron Volt: The change in potential energy of a particle having a charge equal to the electronic charge (1.60 × 10^-19 coulombs) when moving through a potential difference of 1 volt.

Half-Life: The time required for a radioactive substance to decay to one half of its initial activity.

Follow-up Period: The length of time between a person's entry into an epidemiologic study cohort and the present report (or the end of the study).

Incidence Rate: The number of new cases of disease per unit of population per unit of time, e.g., 3/1,000/year.

Interaction: The association of one factor (occupation) with disease, modified by the effect of another factor (smoking). The measure of association can be the rate ratio or the odds ratio. This follows a nonmultiplicative model (which may be additive).

Ionizing Radiation: Any electromagnetic or particulate radiation capable of producing ions, directly or indirectly, in its passage through matter.

Lagging Exposures: Lagging of the cumulative exposure assigned to a miner. Some authors consider radon progeny exposures "redundant" if they occur after lung cancer has been induced, and believe that cumulative exposures should therefore be lagged by a certain number of years (5 or 10) to exclude redundant exposures occurring during those years. For example, Radford and St. Clair Renard [5] discounted the last 5 years of exposure from the cumulative total WLM assigned to each case of lung cancer in their analysis.

Biologic Latent Period: The time between an increment of exposure and the increase in risk attributable to it.

Epidemiologic Latent Period: The time between first exposure and death in those developing the disease during the study interval.

Linear Hypothesis: The hypothesis that excess risk is proportional to dose.

[Figure 4. Effect of age at initial exposure on a multistage model; horizontal axis: age at initial exposure.]

# VI. SUMMARY AND CONCLUSIONS

A valid quantitative risk assessment is much more than simply fitting an exposure-response curve to mortality data. This is especially true of an epidemiologic risk assessment. A great variety of risk factors and temporal effects may alter the interpretation of the data analysis. This report attempts to address such modifying influences in an effort to better understand the underlying cancer mechanisms operative in the cohort of U.S. uranium miners exposed to radon daughters. There were a number of findings important to the assessment of lung cancer risk in the U.S. cohort.

# 1. Influence of Cigarette Smoking

The joint effect of cumulative cigarette smoking and cumulative radon daughter exposure was found to be intermediate between additive and multiplicative. This would imply a synergistic effect under the usual definition, i.e., a joint effect exceeding the sum of the individual risks.
# 2. Exposure-Rate Effect

Analysis of these data revealed that modeling cumulative exposure alone may not adequately predict the relative risk of lung cancer from chronic exposure to radon daughters. Miners receiving a given amount of cumulative exposure at lower rates for longer periods of time were at greater risk than miners receiving the same cumulative exposure at higher rates for shorter periods of time. This implies that results extrapolated from historical exposures at high rates may yield conservative results at current lower rates. Indeed, it is possible that the lower risk estimates in the U.S. study, compared with the four other major radon studies reported by Thomas et al. [1985], may be due to the higher exposure rates received by U.S. miners.

# 3. Late-Stage Carcinogenic Effect

Careful examination of temporal effects implies that exposure to radon daughters acts at a late stage in the carcinogenic process. All temporal factors agreed in this respect. The appropriate lag to remove redundant exposure was a relatively short six years. Miners who were older at initial exposure were at greater risk than those first exposed at younger ages. The relative risk of lung cancer decreases with the length of time after cessation of exposure. Whether or not the mathematical form of the multistage theory of carcinogenesis applies to this cohort, the temporal patterns are worth noting.

# C. Bulkheads

# Description

The second most important control measure used in underground mines today is the construction of bulkheads across inactive stopes or drifts [Bates and Franklin 1977]. Bulkheads isolate inactive stopes, prevent the mixing of contaminated air from these stopes with fresh air, and help control the direction of air flow to working areas. Maintaining a negative air pressure behind a bulkhead will prevent leaks [Franklin 1981]; this is important because radon progeny concentrations can exceed 1,000 WL behind a bulkhead [Bates and Franklin 1977]. In addition, bulkheads must be strong and flexible enough to maintain an airtight seal under typical mining conditions, such as ground movement, air shocks from blasting, and the impact of accidental contact with mining equipment.

A bulkhead consists of three functional parts: (1) the primary structure, (2) the seal between the primary structure and the rock, and (3) a surface seal on the rock within one meter of the plane of the bulkhead [Summers et al. 1982]. The primary bulkhead structure fills most of the opening in the stope and provides resistance to shocks from blasting or contact with machinery. The primary structure consists of timber or an expanded metal lath covered with a continuous nonporous membrane. The membrane may be attached to, or sprayed upon, the timber in the primary structure; it must not crack or develop holes or leaks during mining activities [Franklin 1981; Summers et al. 1982]. The second part of the bulkhead, the seal between the primary structure and the surrounding rock, must resist running water as well as the air shocks and rock movements due to blasting. The third part of the bulkhead, the seal on the surface of the rock within one meter of the plane of the bulkhead, must be made of a material that adheres to damp rock surfaces and can withstand mining activities.

# 6. Fan Operation

estimate that 100 bulkheads sealing 12.5 stopes would reduce the overall radon gas emissions into mine air by 2.25 Ci/day, a reduction of 25%.
In summary, bulkheads are very effective in reducing radon gas (and thus radon progeny) in mine air. Especially promising are the new bulkheads designed by Summers et al. [1982] and further tested by Bloomster et al. [1984b]. These bulkheads may eventually replace the leakier and more flammable polyurethane bulkheads presently being used underground.

# D. Backfilling

In the uranium mining process, large quantities of ore are brought to the surface, leaving voids which may collapse if they are not stabilized. The tailings remaining after the uranium is extracted are often used as backfill. There are three benefits of backfilling stopes: (1) ground stabilization, (2) a decrease in the volume of the mine that must be ventilated, and (3) disposal of the tailings. The coarse tailings "sand" is mixed with water to form a slurry and pumped into worked-out stopes. Sometimes the slurry is mixed with cement before pumping. After the water in the slurry percolates away, the stope is left filled with densely packed sand or cement.

The radon progeny hazard can be increased, at least temporarily, by backfilling. Although the sand has considerably less radium than the ore or host rock, the finely divided sand has a larger surface area and many fine interstices between the grains through which radon gas can move. Therefore, the radon gas emanation rate of the sand is much higher than that of the ore or host rock [Raghavayya and Khan 1973; Thompkins 1982]. During backfilling, agitation of the slurry releases high concentrations of radon gas [Bates and Franklin 1977]. Also, high concentrations of radon gas can collect above the sand in the newly filled stope (possibly reaching 65,000-75,000 pCi/l). Thus the advantage of decreasing the ventilation volume with the backfill must be weighed against the increased emanation rate of the backfill [Bates and Franklin 1977]. Mixing the slurry with cement will not prevent this increase in emanation rate because the radon gas can also travel freely through fine pores in the cement, especially water-filled pores. Indeed, radon gas emanates from porous cement, sand, or ore at a higher rate when it is wet than when it is dry, unless the material is overlain with a thick layer of water. During experiments in a mine, Franklin et al. [1981] found that backfilling 90% of a stope reduced the total radon progeny emissions from the stope by 85%. A feasibility study estimated that backfilling can be as effective as bulkheading in reducing radon emissions.

# E. Sealants

Pinholes in a sealant coating were not found to present a significant problem unless there were several thousand visible pinholes per square meter of sealant.

# Effectiveness of Sealants in Reducing Radon Emanation Rates

The effectiveness of sealants depends, in part, upon the porosity of the rock walls. Sealants produce the greatest decrease in radon emanation when applied to sandstone or other porous rock; sealants applied to granite will appear to be less effective because granite provides a natural barrier to radon emanation [Lindsay et al. 1981a]. Thus results from tests of the effectiveness of sealants vary greatly depending on the porosity of the rock walls, along with the mine ventilation rate, the grade of the uranium ore, and other factors. One feasibility study estimated that the overall decrease in radon emissions would be 56% if the same sealant coating were applied to 80% of the mine surfaces. Although sealants were less effective and more costly than bulkheads, the use of sealants is less disruptive to the mining process than the use of bulkheads [Bloomster et al. 1984b].

In summary, there are at least seven materials available that make effective mine sealants. These materials can reduce radon emanation from mine walls by 50-75%.
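The 56% figure is consistent with simple area-weighted scaling of the per-surface reduction; a minimal sketch, in which the 70% per-surface reduction is an assumed value chosen from within the 50-75% range quoted above:

```python
# Area-weighted estimate of the overall radon-emission reduction from a
# sealant coating. The 70% per-surface reduction is an assumed value
# (inside the 50-75% range quoted in the text); coating 80% of surfaces
# with it reproduces the 56% overall figure cited above.

coverage = 0.80               # fraction of mine surfaces coated
per_surface_reduction = 0.70  # fractional reduction on coated surfaces

overall_reduction = coverage * per_surface_reduction
print(f"Overall reduction in radon emissions: {overall_reduction:.0%}")
# -> Overall reduction in radon emissions: 56%
```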
# F. Controlling Radioactive Water Underground

# APPENDIX IV

# GRAB SAMPLING STRATEGY REQUIREMENTS FOR DETERMINATION OF RADON PROGENY EXPOSURES

# A. Introduction

Airborne concentrations of radon progeny must be monitored regularly to provide the basis for their control. Miners' exposures must be limited to no more than 1.0 WLM per year, and the average concentration of radon progeny in any work area must not exceed 1/12 WL during any work shift. The sampling strategy described here was developed after an evaluation of mine sampling data and the typical variability of radon progeny concentrations in underground mines. This strategy will allow the collection of timely and reliable environmental data that can be used as the basis for control of cumulative exposures.

This sampling strategy allows for the determination of the arithmetic average of time-varying concentrations of radon progeny during a work shift in a given work area. The determination is based on an unbiased estimate made from grab samples taken at random intervals throughout the work shift. Random sampling of work shifts during a reference period is also included for determination of a long-term arithmetic average work shift concentration. The formulae needed to calculate the statistical quantities used in this sampling strategy are contained in section G of this appendix. The rationale for the critical decision points used in the sampling strategy is contained in section H.

# B. Definition of Terms and Notations

STATION: A sampling location within a work area that represents the radon progeny concentration to which miners are exposed.

CLUSTER: Two or more stations at which sampling will be conducted during any work shift. The stations in a cluster should be located at different work areas but must be in close proximity to each other so that alternating grab samples could be taken during the same work shift.

BLOCK OF TIME: A period in which two different sampling days are randomly selected.

AVERAGE WORK SHIFT CONCENTRATION: The average concentration of radon progeny in working levels (WL) during a work shift at a given station.

AVERAGE: The arithmetic mean. The same term can be used for the average of several sample results or for the arithmetic mean of a distribution of concentrations that vary during a continuous period of time. In the latter case, the terms "average," "arithmetic average," and "time-weighted average" are synonymous.

αi: Average work shift concentration for day i, where i = 1,2,...,12 and day i is the ith day in a time-ordered sequence of the 12 days that were randomly selected from the reference period.

LCL: 95% one-sided lower confidence limit for α.

UCL: 95% one-sided upper confidence limit for α.

5. If α̂i is < 0.14 WL, then: (a) continue collecting seven grab samples on each of the two randomly selected sampling days in each 2-week block of time, and (b) continue using the criteria given in section C. After 12 weeks of sampling in which no two consecutive sampling days (α̂i and α̂i+1) were in excess of 0.14 WL, use the criteria given in section D for assurance, based on 12 days of sampling, that the average work shift concentration of radon progeny is in compliance with the REL, which, if verified, will result in less frequent exposure monitoring requirements.
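The official computational formulae live in section G, which is not reproduced in this excerpt. The following is a minimal sketch, assuming the daily estimate α̂ is the plain arithmetic mean of the seven random grab samples, of how a day's estimate and the 0.14 WL consecutive-day trigger from section C might be computed; all names are illustrative.

```python
# Minimal sketch of the section C screening logic. Assumes the daily
# estimate (alpha-hat) is the arithmetic mean of the seven grab samples;
# the official formulae are in section G, not reproduced in this excerpt.

def daily_estimate(grab_samples_wl: list[float]) -> float:
    """Estimated average work shift concentration (WL) from grab samples."""
    return sum(grab_samples_wl) / len(grab_samples_wl)

def consecutive_day_trigger(daily_estimates: list[float],
                            threshold_wl: float = 0.14) -> bool:
    """True if any two consecutive sampling days both exceed 0.14 WL."""
    return any(a > threshold_wl and b > threshold_wl
               for a, b in zip(daily_estimates, daily_estimates[1:]))

# Example: one sampling day with seven random grab samples (WL).
day = [0.05, 0.08, 0.06, 0.11, 0.07, 0.09, 0.06]
print(f"alpha-hat for the day: {daily_estimate(day):.3f} WL")
print("trigger:", consecutive_day_trigger([0.09, 0.16, 0.15, 0.08]))  # True
```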
# D. Criteria for Less Frequent Exposure Monitoring

To determine if less frequent exposure monitoring can be conducted at a specific work area, the following statistical decision criteria must be used:

1. Compute α̂ (the estimated average work shift concentration using seven grab samples per sampling day) for a work area during the reference period in which 12 sampling days were taken and no two consecutive sampling days (α̂i and α̂i+1) were in excess of 0.14 WL. Formulae for this computation are contained in section G.

2. Compute LCL and UCL, the 95% one-sided lower and 95% one-sided upper confidence limits, respectively, for the average work shift concentration during the reference period from which the 12 sampling days were taken. Formulae for these computations are contained in section G; α̂ from section D,1 is a quantity used in the formulae for LCL and UCL.

3. If UCL is < 1/12 WL, then the block length can be increased: sampling may be conducted on two randomly selected days during each 26-week block of time, as described in section E.

4. If LCL exceeds 1/12 WL, then: (a) steps shall be taken to reduce the radon progeny concentration in that work area by implementing work practices and engineering controls, (b) respiratory protection shall be required for all miners entering that work area, and (c) grab sampling as described in section C,2 shall be conducted on a consecutive daily basis. Grab sampling shall continue on a consecutive daily basis until the estimated average work shift concentrations on any two consecutive workdays (α̂1 and α̂2) are both < 0.10 WL. When α̂1 and α̂2 are both < 0.10 WL, then the requirements for respiratory protection are waived and exposure monitoring can revert to the schedule described starting at section C,1. [Note: A new reference period shall begin at this time, requiring 12 randomly selected sampling days, the first of which is to be coded as i = 1.]
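Sections D and E repeatedly call for LCL and UCL over the 12-day reference period. Section G's formulae are not reproduced in this excerpt; as a stand-in, the sketch below uses standard one-sided 95% confidence limits based on Student's t applied to the 12 daily estimates, which is an assumption rather than the official method.

```python
# Sketch of the LCL/UCL computation for the 12-day reference period.
# One-sided 95% limits via Student's t on the daily estimates are used
# as a stand-in for the (unreproduced) section G formulae.

import math
from statistics import mean, stdev
from scipy.stats import t  # SciPy assumed available

def one_sided_limits(daily_estimates: list[float]) -> tuple[float, float]:
    n = len(daily_estimates)              # normally 12 sampling days
    xbar = mean(daily_estimates)          # overall alpha-hat
    se = stdev(daily_estimates) / math.sqrt(n)
    t95 = t.ppf(0.95, df=n - 1)           # one-sided 95% critical value
    return xbar - t95 * se, xbar + t95 * se  # (LCL, UCL)

days = [0.05, 0.07, 0.06, 0.09, 0.04, 0.08, 0.06, 0.07, 0.05, 0.06, 0.08, 0.07]
lcl, ucl = one_sided_limits(days)
print(f"LCL = {lcl:.3f} WL, UCL = {ucl:.3f} WL, REL = {1/12:.3f} WL")
```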
# E. Criteria for Continuation of Less Frequent Exposure Monitoring

After completion of two additional sampling days during the subsequent 26-week period, the data from the last 12 days sampled must be used to compute a new UCL for the period in which the 12 sampling days occurred.

1. Sampling may continue under the less frequent sampling schedule (i.e., 2 days per 26-week block of time) if both of the following results occur at a station: (a) UCL for the reference period from which the last 12 sampling days were taken is < 1/12 WL, and (b) the estimated average work shift concentrations on the last two of the 12 sampling days (α̂11 and α̂12) were both < 0.14 WL. In this case, an updated UCL shall be recomputed after completion of sampling in each subsequent 26-week block of time to determine if less frequent sampling (i.e., on two days during a 26-week period) should be continued according to the criteria of this part. If either of these conditions is not met, then LCL must be computed from data obtained from the last 12 days sampled (see section E,2, which follows).

2. If LCL for the reference period from which the 12 sampling days were taken at a station exceeds 1/12 WL, then: (a) steps shall be taken to reduce the radon progeny concentration in that work area by implementing work practices and engineering controls, (b) respiratory protection shall be required for all miners entering that work area, and (c) grab sampling as described in section C,2 shall be conducted on a consecutive daily basis. Grab sampling shall continue on a consecutive daily basis until the estimated average work shift concentrations on any two consecutive workdays (α̂1 and α̂2) are both < 0.10 WL. When α̂1 and α̂2 are both < 0.10 WL, then the requirements for respiratory protection are waived and exposure monitoring can revert to the schedule described starting at section C,1.

3. If LCL for the reference period from which the 12 sampling days were randomly taken is < 1/12 WL, but the estimated average work shift concentration determined for either of the last two of the 12 sampling days (α̂11 or α̂12) exceeds 0.14 WL, then monitoring at that station shall return to the more frequent sampling schedule (2 days per 2-week block of time). In this case, α̂11 or α̂12 becomes α̂1, and sampling is required on the next workday to obtain α̂2, as described starting at section C,4,a.

# F. Criteria for Cessation of Exposure Monitoring

Sampling can be discontinued at a station if both of the following results occur at that station: (1) UCL for the reference period from which 12 sampling days were taken is < 0.063 WL, and (2) the estimated average work shift concentration for the last of the 12 sampling days (α̂12) is < 0.033 WL. However, sampling should return to the regular schedule, as described starting at section C,1, if an environmental change or a change in mining operations occurs that may alter radon progeny concentrations in that work area.
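Sections C through F together form a small decision procedure over the monitoring schedule. The condensed sketch below shows the schedule transitions; the thresholds come from the text, but the function, state names, and the ordering of the checks are illustrative simplifications, and the full criteria above remain authoritative.

```python
# Condensed sketch of the monitoring-schedule transitions in sections C-F.
# Thresholds are from the text; names and structure are illustrative only.

REL_WL = 1.0 / 12.0  # 1/12 WL work-shift limit

def next_schedule(schedule: str, lcl: float, ucl: float,
                  last_two_days: tuple[float, float]) -> str:
    """Return the next monitoring schedule for a station."""
    if lcl > REL_WL:
        # Sections D,4 and E,2: controls, respirators, daily sampling.
        return "daily (respirators required)"
    if ucl < 0.063 and last_two_days[-1] < 0.033:
        return "discontinued"                     # section F
    if ucl < REL_WL and max(last_two_days) < 0.14:
        return "2 days per 26-week block"         # sections D,3 and E,1
    if max(last_two_days) > 0.14:
        return "2 days per 2-week block"          # section E,3
    return schedule                                # no change

print(next_schedule("2 days per 2-week block",
                    lcl=0.02, ucl=0.07, last_two_days=(0.06, 0.05)))
# -> 2 days per 26-week block
```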
# APPENDIX V

In recommending medical evaluation criteria for respirator use, one should apply rigorous decision-making principles [Halperin et al. 1986]; tests used should be chosen for operating characteristics such as sensitivity, specificity, and predictive value. Unfortunately, many knowledge gaps exist in this area. The problem is complicated by the large variety of respirators, their conditions of use, and individual differences in the physiologic and psychologic responses to them. For these reasons, the following guidelines are to be considered as informed suggestions rather than established NIOSH policy recommendations. They are intended primarily to assist the physician in developing medical evaluation criteria for respirator use.

# A. Background Information

# 1. Pulmonary Effects

In general, the added inspiratory and expiratory resistances and dead space of most respirators cause an increase in tidal volume and a decrease in respiratory rate and ventilation (including a small decrease in alveolar ventilation). These respirator effects have usually been small both among healthy individuals and, in limited studies, among individuals with impaired lung function [Gee et al. 1968; Altose et al. 1977; Raven et al. 1981; Hodous et al. 1983; Hodous et al. 1986]. This generalization is applicable to most respirators when resistances (particularly expiratory resistance) are low [Bentley et al. 1973; Love et al. 1977]. While most studies report minimal physiologic effects during submaximal exercise, the resistances commonly lead to reduced endurance and reduced maximal exercise performance [Craig et al. 1970; Raven et al. 1977; Stemler and Craig 1977; Myhre et al. 1979; Deno et al. 1981]. The dead space of a respirator (reflecting the amount of expired air that must be rebreathed before fresh air is obtained) tends to cause increased ventilation. At least one study has shown substantially increased ventilation with a full-face respirator, a type that can have a large effective dead space [James et al. 1984]. However, the net effect of a respirator's added resistances and dead space is usually a small decrease in ventilation [Craig et al. 1970; Hermansen et al. 1972; Raven et al. 1977; Stemler and Craig 1977; Deno et al. 1981].

# 2. Cardiovascular Effects

The potential for adverse effects, particularly decreased cardiac output, from the positive pressure feature of some respirators has been investigated. Raven et al. [1982] found significantly higher systolic and/or diastolic blood pressures during exercise for persons wearing respirators. Arborelius et al. [1983] did not find significant differences for persons wearing respirators during exercise.

# 3. Body Temperature Effects

Proper regulation of body temperature is primarily of concern with the closed-circuit SCBA that produces oxygen via an exothermic chemical reaction. Inspired air within these respirators may reach 120°F (49°C), thus depriving the wearer of a minor cooling mechanism and causing discomfort. Obviously this can be more of a problem with heavy exercise and when ambient conditions and/or protective clothing further reduce the body's ability to lose heat. The increase in heart rate because of increasing temperature represents an additional cardiac stress. Closed-circuit breathing units of any type have the potential for causing heat stress since warm expired gases (after exothermic carbon dioxide removal with or without oxygen addition) are rebreathed. Respirators with large dead spaces also have this potential problem, again because of partial rebreathing of warmed expired air [James et al. 1984].

# 4. Sensory Effects

Respirators may reduce visual fields, decrease voice clarity and loudness, and decrease hearing ability. Besides the potential for reduced productivity, these effects may result in reduced industrial safety. These factors may also contribute to a general feeling of stress [Morgan 1983a].

# 5. Psychologic Effects

This important topic is discussed in recent reviews by Morgan [Morgan 1983a, 1983b]. There is little doubt that virtually everyone suffers some discomfort when wearing a respirator. The large variability and the subjective nature of the psycho-physiologic aspects of wearing a respirator, however, make studies and specific recommendations difficult. Fit testing obviously serves an important additional function by providing a trial to determine if the wearer can psychologically tolerate the respirator. The great majority of workers can tolerate respirators, and experience in wearing them aids in this tolerance [Morgan 1983b]. However, some individuals are likely to remain psychologically unfit for wearing respirators.

# 6. Local Irritation Effects

Allergic skin reactions may occur occasionally from wearing a respirator, and skin occlusion may cause irritation or exacerbation of preexisting conditions such as pseudofolliculitis barbae. Facial discomfort from the pressure of the mask may occur, particularly when the fit is unsatisfactory.
CDC encourages organizations that have previously relied on copying and posting PDFs of the schedules to their websites to instead use a safer method to consistently display current schedules. This form of "content syndication" ensures that the most current and accurate immunization schedule information is on each organization's website. This one-time step assures that your website displays current yearly schedules as soon as they are published or revised.

To place the schedules on a website, organizations simply include two lines of CDC-furnished computer code on their Web page. Each organization's Web developer places the code into their existing website; the code automatically loads the current CDC schedule and footnotes. The schedule is visible within the organization's Web page, and all other images and Web navigation display unchanged. Any CDC revisions or updates will automatically and immediately be reflected on the organization's Web page. This form of content syndication also gives organizations the ability to offer a PDF of each schedule on their website. Staff members and Web visitors can print as well as view immunization schedules and be confident they have the most current versions.

Instructions for copying and placing syndication code are available at http://www.cdc.gov/vaccines/schedules/syndicate.html. CDC offers technical assistance for organizations implementing this form of content syndication. For assistance, readers can complete the e-mail form on the NCIRD Web support page (http://www.cdc.gov/vaccines/web-support.html), and an NCIRD Web team staff member will contact them and provide assistance.

Each year, the Advisory Committee on Immunization Practices (ACIP) reviews the current recommended immunization schedules for persons aged 0 through 18 years to ensure that the schedule reflects current recommendations for licensed vaccines. In October 2012, ACIP approved the recommended immunization schedules for persons aged 0 through 18 years for 2013, which includes several changes from 2012.

# Advisory Committee on Immunization Practices (ACIP) Recommended Immunization Schedule for Persons Aged 0 Through 18 Years - United States, 2013

Health-care providers are advised to use both the recommended schedule and the catch-up schedule (Figures 1 and 2) in combination with their footnotes (pages 6-8) and not as stand-alones. For guidance on the use of all the vaccines in the schedules, including contraindications and precautions to use of a vaccine, providers are referred to the respective ACIP vaccine recommendations. Printable versions of the regular and catch-up schedules are available at http://www.cdc.gov/vaccines/schedules in various formats, including landscape and pocket-sized, in regular paper or laminated versions. A "parent friendly" regular schedule is available at http://www.cdc.gov/vaccines/schedules/easy-to-read/child.html#print.

For 2013, several new references and links to additional information have been added, including one for travel vaccine requirements and recommendations (1). New references also are provided for vaccination of persons with primary and secondary immunodeficiencies. Changes to the previous schedules (2) include the following:

- Figure 1, "Recommended immunization schedule for persons aged 0 through 18 years," replaces "Recommended immunization schedule for persons aged 0 through 6 years" and "Recommended immunization schedule for persons aged 7 through 18 years."
- Wording was added to bars to represent the respective vaccine dose numbers in the series.
- The meningococcal conjugate vaccine (MCV4) purple bar was extended to age 6 weeks, to reflect licensure of Hib-MenCY vaccine.

These recommendations must be read with the footnotes that follow. For those who fall behind or start late, provide catch-up vaccination at the earliest opportunity as indicated by the green bars in Figure 1. To determine minimum intervals between doses, see the catch-up schedule (Figure 2). School entry and adolescent vaccine age groups are in bold.

The figure below provides catch-up schedules and minimum intervals between doses for children whose vaccinations have been delayed. A vaccine series does not need to be restarted, regardless of the time that has elapsed between doses. Use the section appropriate for the child's age. Always use this table in conjunction with Figure 1 and the footnotes that follow.

# Footnotes: Recommended Immunization Schedule for Persons Aged 0 Through 18 Years - United States, 2013

Additional guidance for use of the vaccines described in this publication is available at http://www.cdc.gov/vaccines/pubs/acip-list.htm.

# Hepatitis B (HepB) vaccine. (Minimum age: birth)

Routine vaccination: At birth

- Administer monovalent HepB vaccine to all newborns before hospital discharge.
- For infants born to hepatitis B surface antigen (HBsAg)-positive mothers, administer HepB vaccine and 0.5 mL of hepatitis B immune globulin (HBIG) within 12 hours of birth. These infants should be tested for HBsAg and antibody to HBsAg (anti-HBs) 1 to 2 months after completion of the HepB series, at age 9 through 18 months (preferably at the next well-child visit).
- If mother's HBsAg status is unknown, within 12 hours of birth administer HepB vaccine to all infants regardless of birth weight. For infants weighing <2,000 grams, administer HBIG in addition to HepB within 12 hours of birth. Determine mother's HBsAg status as soon as possible and, if she is HBsAg-positive, also administer HBIG for infants weighing ≥2,000 grams (no later than age 1 week).

# Doses following the birth dose

# Rotavirus (RV) vaccines. (Minimum age: 6 weeks for both RV-1 [Rotarix] and RV-5 [RotaTeq])

Routine vaccination:

- Administer a series of RV vaccine to all infants as follows:
1. If RV-1 is used, administer a 2-dose series at 2 and 4 months of age.
2. If RV-5 is used, administer a 3-dose series at ages 2, 4, and 6 months.
3. If any dose in the series was RV-5, or the vaccine product is unknown for any dose in the series, a total of 3 doses of RV vaccine should be administered.

Catch-up vaccination:

- The maximum age for the first dose in the series is 14 weeks, 6 days.
- Vaccination should not be initiated for infants aged 15 weeks 0 days or older.
- The maximum age for the final dose in the series is 8 months, 0 days.
- If RV-1 (Rotarix) is administered for the first and second doses, a third dose is not indicated.
- For other catch-up issues, see Figure 2.
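The rotavirus age windows above lend themselves to simple date arithmetic. A minimal sketch follows; the function name is illustrative, and dateutil's relativedelta is assumed available for calendar-month arithmetic.

```python
# Sketch of the rotavirus dosing windows from the footnote above:
# first dose no later than 14 weeks 6 days of age (do not start at
# 15 weeks 0 days or older); final dose by 8 months 0 days of age.

from datetime import date, timedelta
from dateutil.relativedelta import relativedelta  # assumed available

def rv_windows(dob: date) -> dict[str, date]:
    return {
        "last_day_to_start_series": dob + timedelta(weeks=14, days=6),
        "last_day_for_final_dose": dob + relativedelta(months=8),
    }

windows = rv_windows(date(2013, 1, 15))
for label, d in windows.items():
    print(f"{label}: {d:%Y-%m-%d}")
```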
# Measles, mumps, and rubella (MMR) vaccine

- Administer 2 doses of MMR vaccine to children aged 12 months and older, before departure from the United States for international travel. The first dose should be administered on or after age 12 months and the second dose at least 4 weeks later.

# Diphtheria and tetanus toxoids and acellular pertussis (DTaP) vaccine

# Catch-up vaccination:

- Ensure that all school-aged children and adolescents have had 2 doses of MMR vaccine; the minimum interval between the 2 doses is 4 weeks.

# Human papillomavirus (HPV) vaccines

Routine vaccination:

- Administer a 3-dose series of HPV vaccine on a schedule of 0, 1-2, and 6 months to all adolescents aged 11-12 years. Either HPV4 or HPV2 may be used for females, and only HPV4 may be used for males.
- The vaccine series can be started beginning at age 9 years.
- Administer the second dose 1 to 2 months after the first dose and the third dose 6 months after the first dose (at least 24 weeks after the first dose).

Catch-up vaccination:

- Administer the vaccine series to females (either HPV2 or HPV4) and males (HPV4) at age 13 through 18 years if not previously vaccinated.

# Advisory Committee on Immunization Practices (ACIP) Recommended Adult Immunization Schedule - United States, 2013

The Advisory Committee on Immunization Practices (ACIP) annually reviews and updates the adult immunization schedule, which is designed to provide vaccine providers with a summary of existing ACIP recommendations regarding the routine use of vaccines for adults (Figures 1 and 2). The adult schedule also includes a table summarizing the primary contraindications and precautions for routinely recommended vaccines (Table). In October 2012, ACIP approved the adult immunization schedule for 2013. This schedule also incorporates changes to vaccine recommendations voted on by ACIP at its October 24-25, 2012 meeting.

# Additional Vaccine Information

The primary updates include adding information for the first time on the use of 13-valent pneumococcal conjugate vaccine (PCV13) and the timing of administration of PCV13 relative to the 23-valent pneumococcal polysaccharide vaccine (PPSV23) in adults (4). PCV13 is recommended for adults aged 19 years and older with immunocompromising conditions (including chronic renal failure and nephrotic syndrome), functional or anatomic asplenia, cerebrospinal fluid leaks, or cochlear implants. The schedule also clarifies which adults need 1 or 2 doses of PPSV23 before age 65 years. Other changes to the PPSV23 footnote include adding information regarding recommendations for vaccination when vaccination status is unknown.

For tetanus, diphtheria, and acellular pertussis (Tdap) vaccine, recommendations have been expanded to include routine vaccination of adults aged 65 years and older and for pregnant women to receive Tdap vaccine with each pregnancy. The ideal timing of Tdap vaccination during pregnancy is during 27-36 weeks' gestation. This recommendation was made to increase the likelihood of optimal protection for the pregnant woman and her infant during the first few months of the infant's life, when the child is too young for vaccination but at highest risk for severe illness and death from pertussis (5,6).

Manufacturers of the live, attenuated influenza vaccine (LAIV) have obtained Food and Drug Administration (FDA) approval for a quadrivalent influenza vaccine that contains one influenza A (H3N2), one influenza A (H1N1), and two influenza B vaccine virus strains, one from each lineage of circulating influenza B viruses. In approximately half of the recent influenza seasons, the trivalent influenza vaccine has included an influenza B vaccine virus from the lineage different from the predominant circulating influenza B strains (7). Inclusion of both lineages of influenza B virus is intended to increase the likelihood that the vaccine provides cross-reactive antibody against a higher proportion of circulating influenza B viruses. For LAIV, beginning with the 2013-14 season, it is expected that only the quadrivalent formulation will be available and manufacture of the trivalent formulation will cease. It is possible that quadrivalent inactivated influenza vaccine formulations might be available for the 2013-14 season as well. Because a mix of quadrivalent and trivalent influenza vaccines might be available in 2013-14, the abbreviation for inactivated influenza vaccine has been changed from trivalent inactivated influenza vaccine (TIV) to inactivated influenza vaccine (IIV).
The abbreviation for LAIV remains unchanged.

Minor wording changes, clarifications, or simplifications have been made to footnotes for measles, mumps, rubella vaccine (MMR), human papillomavirus vaccine (HPV), zoster vaccine, and hepatitis A and hepatitis B vaccines. A correction has been made to Figure 1 for MMR vaccine: the bar that indicated the vaccine might be used in certain situations by persons born before 1957 has been removed. Persons born before 1957 are considered immune, and routine vaccination is not recommended. Considerations for the possible use of MMR vaccine in outbreak situations are included in the 2011 MMWR publication on vaccination of health-care personnel (8). In addition, a correction was made to Figure 2 for PPSV23. This vaccine is indicated for men who have sex with men if they have another risk factor (e.g., age or underlying condition); the bar has been changed from yellow to purple to more accurately reflect the recommendation.

Vaccine providers are reminded to consult the full ACIP vaccine recommendations if they have questions and to bear in mind that additional updates might be made for specific vaccines during the year between updates to the adult schedule. Printable versions of the 2013 adult immunization schedule and other information are available at http://www.cdc.gov/vaccines/schedules/hcp/adult.html. Information about adult vaccination is available at http://www.cdc.gov/vaccines/default.htm. ACIP statements and information for specific vaccines are available at http://www.cdc.gov/vaccines/pubs/acip-list.htm. Adverse events from vaccination should be reported at http://www.vaers.hhs.gov or by telephone, 800-822-7967.

This schedule has been approved by the American Academy of Family Physicians, the American College of Physicians, the American College of Obstetricians and Gynecologists, and the American College of Nurse-Midwives. The adult immunization schedule is published in the Annals of Internal Medicine at the same time that it is published in MMWR.

# Changes for 2013

# Footnotes

- Information was added to footnote #1 to direct readers to additional information regarding recommendations for vaccination when vaccination status is unknown.
- The influenza vaccination footnote (#2) now uses the abbreviation IIV for inactivated influenza vaccine and drops the abbreviation TIV for trivalent inactivated vaccine. For the 2013-14 influenza season, it is expected that LAIV will be available only in a quadrivalent formulation; IIV might be available in both trivalent and quadrivalent formulations.
- The tetanus, diphtheria, and acellular pertussis (Td/Tdap) vaccination footnote (#3) is updated to include the recommendation to vaccinate pregnant women with Tdap during each pregnancy, regardless of the interval since prior Td/Tdap vaccination, and to include the recommendation for all other adults, including persons aged 65 years and older, to receive 1 dose of Tdap vaccine.
- The varicella (#4) and HPV (#5) footnotes were simplified; no changes in recommendations were made. Additional information was added to the HPV footnote regarding HPV vaccination and pregnancy.
- The zoster footnote (#6) was changed to clarify that ACIP recommends vaccination of persons beginning at age 60 years both for persons with and without underlying health conditions for whom the vaccine is not contraindicated.
- The measles, mumps, rubella (MMR) vaccine footnote (#7) was modified to reflect the new recommendation that a provider diagnosis of measles, mumps, or rubella is not considered acceptable evidence of immunity. Previously, a provider diagnosis of measles or mumps, but not rubella, was considered acceptable evidence of immunity.
- Information was added to the pneumococcal polysaccharide (PPSV23) vaccination footnote (#8) and PPSV23 revaccination footnote (#9) to clarify that persons with certain medical conditions are recommended to receive 2 doses of PPSV23 before age 65 years. In addition, even those who receive 2 doses of PPSV23 before age 65 years are recommended to receive PPSV23 at age 65 years, as long as it has been 5 years since the most recent dose. The PPSV23 footnote refers to footnote #10 for pneumococcal conjugate 13-valent vaccine (PCV13) regarding the timing of PCV13 vaccine relative to PPSV23 for those persons recommended to be vaccinated with both pneumococcal vaccines.
- A new footnote (#10) was added for PCV13 vaccine. This vaccine is recommended for adults aged 19 years and older with immunocompromising conditions (including chronic renal failure and nephrotic syndrome), functional or anatomic asplenia, cerebrospinal fluid leaks, or cochlear implants. Those not previously vaccinated with PCV13 or PPSV23 should receive a single dose of PCV13, followed by a dose of PPSV23 at least 8 weeks later. Those previously vaccinated with PPSV23 should be vaccinated with PCV13 one year or more after PPSV23 vaccination (4).
- The hepatitis A vaccine footnote (#12) was updated to clarify that vaccination is recommended for persons with a history of either injection or noninjection illicit drug use.
- The hepatitis B vaccine footnote (#13) includes minor wording changes and adds information on the vaccine schedule for the hepatitis B vaccine series for the Recombivax HB vaccine. The dosing schedules for other hepatitis B vaccines were included in prior years' schedules.

# Figures

- For Figure 1, the bar for Tdap/Td for persons aged 65 years and older has been changed to solid yellow because all adults, including those 65 years and older, are now recommended to receive one dose of Tdap vaccine (5).
- The bar for MMR vaccine for persons born before 1957 has been removed. MMR vaccine is not recommended routinely for persons born before 1957. Considerations for vaccination in measles or mumps outbreak settings are discussed in the ACIP recommendations for health-care personnel (8).
- A new row for PCV13 vaccine has been added.
- For Figure 2, the recommendation for Tdap vaccination with each pregnancy is included, with a single dose of Tdap recommended for all other groups (6).
- A correction was made to change the color for PPSV23 from yellow to purple for men who have sex with men (MSM). PPSV23 is recommended for MSM who have another risk factor such as age group or medical condition.
- A row for PCV13 was added (4).

# Contraindications and Precautions Table

- The inactivated influenza vaccine precautions were updated to indicate that persons who experience only hives with exposure to eggs should receive IIV rather than LAIV.
- Pregnancy was removed as a precaution for hepatitis A vaccine. This is an inactivated vaccine and, similar to hepatitis B vaccines, is recommended if another high-risk condition or other indication is present.
- Language was clarified regarding the precaution for use of antiviral medications and vaccination with varicella or zoster vaccines.
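Footnote #10's PCV13/PPSV23 sequencing rule reduces to a small amount of interval logic. The sketch below is illustrative only; the function and field names are not CDC's, and the footnote text above remains the authoritative statement.

```python
# Sketch of the PCV13/PPSV23 sequencing rule from footnote #10:
# no prior PPSV23 -> PCV13 first, then PPSV23 >= 8 weeks later;
# prior PPSV23 -> PCV13 one year or more after the PPSV23 dose.

from datetime import date, timedelta

def pcv13_guidance(ppsv23_date: date | None, today: date) -> str:
    if ppsv23_date is None:
        return "Give PCV13 now; give PPSV23 >= 8 weeks after PCV13."
    earliest_pcv13 = ppsv23_date + timedelta(days=365)
    if today >= earliest_pcv13:
        return "Give PCV13 now (>= 1 year since PPSV23)."
    return f"Defer PCV13 until {earliest_pcv13:%Y-%m-%d} (1 year after PPSV23)."

print(pcv13_guidance(None, date(2013, 3, 1)))
print(pcv13_guidance(date(2012, 9, 1), date(2013, 3, 1)))
```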
# ACIP Adult Immunization Work Group

# Human papillomavirus (HPV) vaccination

- Two vaccines are licensed for use in females, bivalent HPV vaccine (HPV2) and quadrivalent HPV vaccine (HPV4), and one HPV vaccine for use in males (HPV4).
- For females, either HPV4 or HPV2 is recommended in a 3-dose series for routine vaccination at age 11 or 12 years, and for those aged 13 through 26 years, if not previously vaccinated.
- For males, HPV4 is recommended in a 3-dose series for routine vaccination at age 11 or 12 years, and for those aged 13 through 21 years, if not previously vaccinated. Males aged 22 through 26 years may be vaccinated.
- HPV4 is recommended for men who have sex with men (MSM) through age 26 years for those who did not get any or all doses when they were younger.
- Vaccination is recommended for immunocompromised persons (including those with HIV infection) through age 26 years for those who did not get any or all doses when they were younger.
- A complete series for either HPV4 or HPV2 consists of 3 doses. The second dose should be administered 1-2 months after the first dose; the third dose should be administered 6 months after the first dose (at least 24 weeks after the first dose).
- HPV vaccines are not recommended for use in pregnant women. However, pregnancy testing is not needed before vaccination. If a woman is found to be pregnant after initiating the vaccination series, no intervention is needed; the remainder of the 3-dose series should be delayed until completion of pregnancy.
- Although HPV vaccination is not specifically recommended for health-care personnel (HCP) based on their occupation, HCP should receive the HPV vaccine as recommended (see above).

# Zoster vaccination

- A single dose of zoster vaccine is recommended for adults aged 60 years and older regardless of whether they report a prior episode of herpes zoster. Although the vaccine is licensed by the Food and Drug Administration (FDA) for use among, and can be administered to, persons aged 50 years and older, ACIP recommends that vaccination begin at age 60 years.
- Persons aged 60 years and older with chronic medical conditions may be vaccinated unless their condition constitutes a contraindication, such as pregnancy or severe immunodeficiency.
- Although zoster vaccination is not specifically recommended for HCP, they should receive the vaccine if they are in the recommended age group.

# Measles, mumps, rubella (MMR) vaccination

- Adults born before 1957 generally are considered immune to measles and mumps.
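The HPV dose-timing rules quoted in the vaccination section above (second dose 1-2 months after the first; third dose 6 months after the first, and at least 24 weeks after it) can be checked mechanically. A minimal sketch follows, with illustrative names and the simplifying assumption that "month" means calendar month.

```python
# Sketch of an HPV 3-dose series timing check based on the bullets above.
# Names are illustrative; the schedule text is authoritative.

from datetime import date, timedelta
from dateutil.relativedelta import relativedelta  # assumed available

def check_hpv_series(d1: date, d2: date, d3: date) -> list[str]:
    problems = []
    if not d1 + relativedelta(months=1) <= d2 <= d1 + relativedelta(months=2):
        problems.append("dose 2 is not 1-2 months after dose 1")
    if d3 < d1 + timedelta(weeks=24):
        problems.append("dose 3 is earlier than 24 weeks after dose 1")
    return problems

issues = check_hpv_series(date(2013, 1, 10), date(2013, 2, 20), date(2013, 7, 15))
print(issues or "series timing consistent with the schedule")
```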
Receipt o f specific antivirals (i.e., acyclovir, famciclovir, or valacyclovir) 24 hours before vaccination; avoid use o f these antiviral drugs for 14 days after vaccination. # Human papillomavirus (HPV) Severe allergic reaction (e.g., anaphylaxis) after a previous dose or to a vaccine component. Moderate or severe acute illness with or w ithout fever. Pregnancy. # Zoster Severe allergic reaction (e.g., anaphylaxis) to a vaccine component. Known severe im munodeficiency (e.g., from hematologic and solid tumors, receipt o f chemotherapy, or long-term immunosuppressive therapy5 or patients with HIV infection who are severely immunocompromised). Pregnancy. Moderate or severe acute illness with or w ithout fever. Receipt o f specific antivirals (i.e., acyclovir, famciclovir, or valacyclovir) 24 hours before vaccination; avoid use o f these antiviral drugs for 14 days after vaccination. Measles, mumps, rubella (MMR)3 Severe allergic reaction (e.g., anaphylaxis) after a previous dose or to a vaccine component. Known severe im munodeficiency (e.g., from hematologic and solid tumors, receipt of chemotherapy, congenital immunodeficiency, or long-term immunosuppressive therapy5 or patients with HIV infection who are severely immunocompromised). Pregnancy. Moderate or severe acute illness with or w ithout fever. Recent (within 11 months) receipt o f antibody-containing blood product (specific interval depends on product).6,7 History o f throm bocytopenia or throm bocytopenic purpura. Need for tuberculin skin testing.8 See footnotes on page 18. # Supplem ent TABLE. (Continued) Contraindications and precautions to commonly used vaccines in adults1*"* # Vaccine Contraindications Precautions Pneumococcal polysaccharide (PPSV) Severe allergic reaction (e.g., anaphylaxis) after a previous dose or to a vaccine component. Moderate or severe acute illness with or w ithout fever. Pneumococcal conjugate (PCV13) Severe allergic reaction (e.g., anaphylaxis) after a previous dose or to a vaccine component, including to any vaccine containing diphtheria toxoid. Moderate or severe acute illness with or w ithout fever. Meningococcal, conjugate, (MCV4); meningococcal, polysaccharide (MPSV4) Severe allergic reaction (e.g., anaphylaxis) after a previous dose or to a vaccine component. Moderate or severe acute illness with or w ithout fever. Hepatitis A (HepA) Severe allergic reaction (e.g., anaphylaxis) after a previous dose or to a vaccine component. Moderate or severe acute illness with or w ithout fever. Hepatitis B (HepB) Severe allergic reaction (e.g., anaphylaxis) after a previous dose or to a vaccine component. Moderate or severe acute illness with or w ithout fever. 1. Vaccine package inserts and the full ACIP recommendations for these vaccines should be consulted for additional information on vaccine-related contraindications and precautions and for more information on vaccine excipients. Events or conditions listed as precautions should be reviewed carefully. Benefits o f and risks for administering a specific vaccine to a person under these circumstances should be considered. If the risk from the vaccine is believed to outweigh the benefit, the vaccine should not be administered. If the benefit o f vaccination is believed to outweigh the risk, the vaccine should be administered. A contraindication is a condition in a recipient that increases the chance o f a serious adverse reaction. Therefore, a vaccine should not be administered when a contraindication is present. 6. 
6. Vaccine should be deferred for the appropriate interval if replacement immune globulin products are being administered.

These recommendations must be read with the footnotes that follow.

# Pneumococcal conjugate (PCV13) vaccination

- Although PCV13 is licensed by the FDA for use among, and can be administered to, persons aged 50 years and older, ACIP recommends PCV13 for adults aged 19 years and older with the specific medical conditions noted above.

# Meningococcal vaccination

- Administer 2 doses of meningococcal conjugate vaccine quadrivalent (MCV4) at least 2 months apart to adults with functional asplenia or persistent complement component deficiencies.
- HIV-infected persons who are vaccinated also should receive 2 doses.
- Administer a single dose of meningococcal vaccine to microbiologists routinely exposed to isolates of Neisseria meningitidis, military recruits, and persons who travel to or live in countries in which meningococcal disease is hyperendemic or epidemic.
- First-year college students up through age 21 years who are living in residence halls should be vaccinated if they have not received a dose on or after their 16th birthday.
- MCV4 is preferred for adults with any of the preceding indications who are aged 55 years and younger; meningococcal polysaccharide vaccine (MPSV4) is preferred for adults aged 56 years and older.
- Revaccination with MCV4 every 5 years is recommended for adults previously vaccinated with MCV4 or MPSV4 who remain at increased risk for infection (e.g., adults with anatomic or functional asplenia or persistent complement component deficiencies).

# Hepatitis A vaccination
# CONTENTS C D C encourages organizations that have previously relied on copying and posting PDFs o f the schedules to their websites to instead use a safer m ethod to consistently display current schedules. This form o f "content syndication" ensures that the most current and accurate immunization schedule information is on each organization's website. This one-time step assures that your website displays current yearly schedules as soon as they are published, or revised. To place the schedules on a website, organizations simply include two lines o f CDC-furnished com puter code on their Web page. Each organization's Web developer places the code into their existing website; the code automatically loads the current C D C schedule and footnotes. T he schedule is visible within the organization's Web page, and all other images and Web navigation display unchanged. Any C D C revisions or updates will automatically and immediately be reflected on the organization's Web page. This form o f content syndication also gives organizations the ability to offer a PDF o f each schedule on their website. Staff members and Web visitors can print as well as view im m unization schedules and be confident they have the most current versions. Instructions for copying and placing syndication code are available at http://www.cdc.gov/vaccines/ schedules/syndicate.html. C D C offers technical assistance for organizations implementing this form o f content syndication. For assistance, readers can complete the e-mail form on the N C IR D Web support page (http://www.cdc.gov/vaccines/web-support.html), and a N CIRD Web team staff member will contact them and provide assistance. Each year, the Advisory Committee on Immunization Practices (ACIP) reviews the current recom m ended im m unization schedules for persons aged 0 through 18 years to ensure that the schedule reflects current recommendations for licensed vaccines. In October 2012, ACIP approved the recommended immunization schedules for persons aged 0 through 18 years for 2013, which includes several changes from 2012. # Advisory C o m m itte e on Im m u n iza tio n Practices (ACIP) R ecom m ended Im m u n iza tio n Schedule fo r Persons Health-care providers are advised to use both the recommended schedule and the catch-up schedule (Figures 1 and 2) in com bination w ith their footnotes (pages 6-8) and not as stand-alones. For guidance on the use o f all the vaccines in the schedules, including contraindications and precautions to use o f a vaccine, providers are referred to the respective ACIP vaccine recommendations. Printable versions o f the regular and catch-up schedules are available at http://www.cdc.gov/vaccines/schedules in various formats, including landscape and pocket-sized, in regular paper or laminated versions. A "parent friendly" regular schedule is available at http://www.cdc.gov/vaccines/schedules/easy-toread/child.htm l#print. For 2013, several new references and links to additional inform ation have been added, including one for travel vaccine requirements and recommendations (1). New references also are provided for vaccination o f persons w ith prim ary and secondary im m unodeficiencies. Changes to the previous schedules (2) include the following: • Figure 1, "Recommended immunization schedule for persons aged 0 through 18 years" replaces "Recommended immunization schedule for persons aged 0 through 6 years" and "Recommended immunization schedule for persons aged 7 through 18 years." 
-W ording was added to bars to represent the respective vaccine dose numbers in the series. -T he meningococcal conjugate vaccine (MCV4) purple bar was extended to age 6 weeks, to reflect licensure of Hib-M enCY vaccine. These recom m endationsmu st be read with the footnotes that foilow. For those w h o fa ll behind or start late, provide catch-up va ccination at the earliest opportunity as indica ted by the greet bars in Figure 1. To determine minimum intesvals between doses,see the catch-up schedule (Figute 2). School entry and adolescent vaccine age groups are in bold. The figure below provides catch-up schedules and minimum intervals between doses for children whose vaccinations have? been delayed. A vaccine series does not need to be restarted, regfrdl ess o fth e tim e that has elapsed between doces. Use the sectior appropriate for thechild'sage. Alwcys use this table in co n ju n ition wieh Figure 1 and the footnotes that follow. # Footnotes: Recommended Immunization Schedule for Persons Aged 0 Through 18 Years -United States, 2013 # Additional guidance for use o f the vaccines described in this publication is available at http://www.cdc.gov/vaccines/pubs/acip-list.htm # Hepatitis B (HepB) vaccine. (Minimum age: birth) Routine vaccination: At birth • Administer monovalent HepB vaccine to all newborns before hospital discharge. • For infants born to hepatitis B surface antigen (HBsAg)-positive mothers, administer HepB vaccine and 0.5 mL o f hepatitis B im mune globulin (HBIG) w ithin 12 hours o f birth. These infants should be tested for HBsAg and antibody to HBsAg (anti-HBs) 1 to 2 months after completion o f the HepB series, at age 9 through 18 months (preferably at the next well-child visit). • If mother's HBsAg status is unknown, w ithin 12 hours o f birth administer HepB vaccine to all infants regardless o f birth weight. For infants weigh ing <2,000 grams, administer HBIG in addition to HepB within 12 hours of birth. Determine mother's HBsAg status as soon as possible and, if she is HBsAg-positive, also administer HBIG for infants weighing >2,000 grams (no later than age 1 week). # Doses following the birth dose # Rotavirus (RV) vaccines. (Minimum age: 6 weeks for both RV-1 [Rotarix] and RV-5 [RotaTeq]). Routine vaccination: • Administer a series of RV vaccine to all infants as follows: 1. If RV-1 is used, administer a 2-dose series at 2 and 4 months o f age. 2. If RV-5 is used, administer a 3-dose series at ages 2, 4, and 6 months. 3. If any dose in series was RV-5 or vaccine product is unknown for any dose in the series, a total o f 3 doses o f RV vaccine should be administered. # Catch-up vaccination: • The maximum age for the first dose in the series is 14 weeks, 6 days. • Vaccination should not be initiated for infants aged 15 weeks 0 days or older. • The maximum age for the final dose in the series is 8 months, 0 days. • If RV-1(Rotarix) is administered for the first and second doses, a third dose is not indicated. • For other catch-up issues, see Figure 2. S u p p le m e n t Please note: An erratum has been published for this issue. To view the erratum, please click here. • Administer 2 doses o f MMR vaccine to children aged 12 months and older, before departure from the United States for international travel. The first dose should be administered on or after age 12 months and the second dose at least 4 weeks later. 
# Diphtheria and tetanus toxoids and acellular pertussis ( # Catch-up vaccination: • Ensure that all school-aged children and adolescents have had 2 doses of MMR vaccine; the minimum interval between the 2 doses is 4 weeks. months to all adolescents aged 11-12 years. Either HPV4 or HPV2 may be used for females, and only HPV4 may be used for males. • The vaccine series can be started beginning at age 9 years. • Administer the second dose 1 to 2 months after the first dose and the third dose 6 months after the first dose (at least 24 weeks after the first dose). # Catch-up vaccination: • Administer the vaccine series to females (either HPV2 or HPV4) and males (HPV4) at age 13 through 18 years if not previously vaccinated. Please note: An erratum has been published for this issue. To view the erratum, please click here. The Advisory Committee on Immunization Practices (ACIP) annually reviews and updates the adult immunization schedule, which is designed to provide vaccine providers with a summary o f existing ACIP recommendations regarding the routine use o f vaccines for adults (Figures 1 and 2). The adult schedule also includes a table summarizing the primary contraindications and precautions for routinely recommended vaccines (Table ). In October 2012, ACIP approved the adult immunization schedule for 2013. This schedule also incorporates changes to vaccine recommendations voted on by ACIP at its October 24-25, 2012 meeting. # Additional Vaccine Information The primary updates include adding information for the first time on the use o f 13-valent pneumococcal conjugate vaccine (PCV13) and the timing of administration ofPCV13 relative to the 23-valent pneumococcal polysaccharide vaccine (PPSV23) in adults (4). PCV13 is recommended for adults aged 19 years and older with immunocompromising conditions (including chronic renal failure and nephrotic syndrome), functional or anatomic asplenia, cerebrospinal fluid leaks, or cochlear implants. The schedule also clarifies which adults need 1 or 2 doses of PPSV23 before age 65 years. O ther changes to the PPSV23 footnote include adding information regarding recommendations for vaccination when vaccination status is unknown. For tetanus, diphtheria, and acellular pertussis (Tdap) vaccine, recom m endations have been expanded to include routine vaccination o f adults aged 65 years and older and for pregnant women to receive Tdap vaccine with each pregnancy. T he ideal tim ing o f Tdap vaccination during pregnancy is during 27-36 weeks' gestation. This recom m endation was made to increase the likelihood o f optimal protection for the pregnant woman and her infant during the first few m onths of the infant's life, when the child is too young for vaccination but at highest risk for severe illness and death from pertussis (5,6). M anufacturers o f the live, attenuated influenza vaccine (LAIV) have obtained Food and Drug Administration (FDA) approval for a quadrivalent influenza vaccine that contains one influenza A (H 3N 2), one influenza A (H 1N 1) and two influenza B vaccine virus strains, one from each lineage o f circulating influenza B viruses. In approxim ately h alf of the recent influenza seasons, the trivalent influenza vaccine has included an influenza B vaccine virus from the lineage different from the predom inant circulating influenza B strains (7). 
Inclusion o f both lineages o f influenza B virus is intended to increase the likelihood that the vaccine provides cross reactive antibody against a higher proportion o f circulating influenza B viruses. For LAIV, beginning with the 2013-14 season, it is expected that only the quadrivalent formulation will be available and manufacture o f the trivalent formulation will cease. It is possible that quadrivalent inactivated influenza vaccine formulations might be available for the 2013-14 season as well. Because a mix o f quadrivalent and trivalent influenza vaccines might be available in 2013-14, the abbreviation for inactivated influenza vaccine has been changed from trivalent inactivated influenza vaccine (TIV) to inactivated influenza vaccine (IIV). T he abbreviation for LAIV remains unchanged. M inor wording changes, clarifications, or simplifications have been made to footnotes for measles, m um ps, rubella vaccine (MMR), hum an papillomavirus vaccine (HPV), zoster vaccine, and hepatitis A and hepatitis B vaccines. A correction has been made to Figure 1 for M M R vaccine: the bar that indicated the vaccine might be used in certain situations by persons born before 1957 has been removed. Persons born before 1957 are considered im mune, and routine vaccination is not recommended. Considerations for the possible use of M M R vaccine in outbreak situations are included in the 2011 M M W R publication on vaccination o f health-care personnel (8). In addition, a correction was made to Figure 2 for PPSV23. This vaccine is indicated for men who have sex with men if they have another risk factor (e.g., age or underlying condition); the bar has been changed from yellow to purple to more accurately reflect the recommendation. Vaccine providers are reminded to consult the full ACIP vaccine recommendations if they have questions and to bear in m ind that additional updates might be made for specific vaccines during the year between updates to the adult schedule. Printable versions o f the 2013 adult im m unization schedule and other inform ation is available at http://www.cdc.gov/ vaccines/schedules/hcp/adult.html. Inform ation about adult vaccination is available at http://www.cdc.gov/vaccines/default. h tm . ACIP statements and inform ation for specific vaccines is available at http://www .cdc.gov/vaccines/pubs/acip-list. h tm . Adverse events from vaccination should be reported at http://www.vaers.hhs.gov or by telephone, 800-822-7967. This schedule has been approved by the American Academy o f Family Physicians, the American College o f Physicians, the American College o f Obstetricians and Gynecologists, and the American College o f Nurse-Midwives. T he adult im m unization schedule is published in the Annals o f Internal Medicine at the same time that it is published in M M W R. # Changes for 2013 # Footnotes • Inform ation was added to footnote #1 to direct readers to additional inform ation regarding recommendations for vaccination when vaccination status is unknown. • T he influenza vaccination footnote (#2) now uses the abbreviation IIV for inactivated influenza vaccine and drops the abbreviation T IV for trivalent inactivated vaccine (TIV). For the 2013-14 influenza season, it is expected that the LAIV will be available only in a quadrivalent formulation; IIV might be available in both trivalent and quadrivalent formulations. 
• The tetanus, diphtheria, and acellular pertussis (Td/Tdap) vaccination footnote (#3) is updated to include the recommendation to vaccinate pregnant women with Tdap during each pregnancy, regardless of the interval since prior Td/Tdap vaccination and to include the recommendation for all other adults, including persons aged 65 years and older, to receive 1 dose of Tdap vaccine. • T he varicella (#4) and H PV (#5) footnotes were simplified; no changes in recommendations were made. Additional inform ation was added to the H P V footnote regarding H PV vaccination and pregnancy. • T he zoster footnote (#6) was changed to clarify that ACIP recommends vaccination o f persons beginning at age 60 years both for persons with and w ithout underlying health conditions for whom the vaccine is not contraindicated. • The measles, mumps, rubella (MMR) vaccine footnote (#7) was modified to reflect the new recommendation that a provider diagnosis of measles, mumps, or rubella is not considered acceptable evidence of immunity. Previously, a provider diagnosis of measles or mumps, but not rubella, was considered acceptable evidence of immunity. • Inform ation was added to the pneumococcal polysaccharide (PPSV23) vaccination footnote (#8) and PPSV23 revaccination footnote (#9) to clarify that persons with certain medical conditions are recommended to receive 2 doses o f PPSV23 before age 65 years. In addition, even those who receive 2 doses of PPSV23 before age 65 years are recommended to receive PPSV23 at age 65 years, as long as it has been 5 years since the most recent dose. T he PPSV23 footnote refers to footnote #10 for pneumococcal conjugate 13-valent vaccine (PCV13) regarding the timing of PCV13 vaccine relative to PPSV23 for those persons recommended to be vaccinated with both pneumococcal vaccines. • A new footnote (#10) was added for PCV13 vaccine. This vaccine is recommended for adults aged 19 years and older with im m unocomprom ising conditions (including chronic renal failure and nephrotic syndrome), functional or anatomic asplenia, cerebrospinal fluid leaks, or cochlear implants. Those not previously vaccinated with PCV13 or PPSV23 should receive a single dose o f PCV13, followed by a dose o f PPSV23 at least 8 weeks later. Those previously vaccinated with PPSV23 should be vaccinated with PCV13 one year or more after PPSV23 vaccination (4). • The hepatitis A vaccine footnote (#12) was updated to clarify that vaccination is recommended for persons with a history o f either injection or noninjection illicit drug use. • The hepatitis B vaccine footnote (#13) includes minor wording changes and adds information on the vaccine schedule for hepatitis B vaccine series for the Recombivax HB vaccine. The dosing schedules for other hepatitis B vaccines were included in prior years' schedules. # Figures • For figure 1, the bar for T dap/T d for persons aged 65 years and older has been changed to solid yellow because all adults, including those 65 years and older, are now recommended to receive one dose of Tdap vaccine (5). • T he bar for M M R vaccine for persons born before 1957 has been removed. M M R vaccine is not recommended routinely for persons born before 1957. Considerations for vaccination in measles or mumps outbreak settings are discussed in the ACIP recommendations for health care personnel (8). • A new row for PCV13 vaccine has been added. 
• For Figure 2, the recommendation for Tdap vaccination with each pregnancy is included, with a single dose of Tdap recommended for all other groups (6).
• A correction was made to change the color for PPSV23 from yellow to purple for men who have sex with men (MSM). PPSV23 is recommended for MSM who have another risk factor such as age group or medical condition.
• A row for PCV13 was added (4).

# Contraindications and Precautions Table
• The inactivated influenza vaccine precautions were updated to indicate that persons who experience only hives with exposure to eggs should receive IIV rather than LAIV.
• Pregnancy was removed as a precaution for hepatitis A vaccine. This is an inactivated vaccine and, similar to hepatitis B vaccines, is recommended if another high-risk condition or other indication is present.
• Language was clarified regarding the precaution for use of antiviral medications and vaccination with varicella or zoster vaccines.

# ACIP Adult Immunization Work Group

# Human papillomavirus (HPV) vaccination
• Two vaccines are licensed for use in females, bivalent HPV vaccine (HPV2) and quadrivalent HPV vaccine (HPV4), and one HPV vaccine for use in males (HPV4).
• For females, either HPV4 or HPV2 is recommended in a 3-dose series for routine vaccination at age 11 or 12 years and for those aged 13 through 26 years, if not previously vaccinated.
• For males, HPV4 is recommended in a 3-dose series for routine vaccination at age 11 or 12 years and for those aged 13 through 21 years, if not previously vaccinated. Males aged 22 through 26 years may be vaccinated.
• HPV4 is recommended for men who have sex with men (MSM) through age 26 years for those who did not get any or all doses when they were younger.
• Vaccination is recommended for immunocompromised persons (including those with HIV infection) through age 26 years for those who did not get any or all doses when they were younger.
• A complete series for either HPV4 or HPV2 consists of 3 doses. The second dose should be administered 1-2 months after the first dose; the third dose should be administered 6 months after the first dose (at least 24 weeks after the first dose).
• HPV vaccines are not recommended for use in pregnant women. However, pregnancy testing is not needed before vaccination. If a woman is found to be pregnant after initiating the vaccination series, no intervention is needed; the remainder of the 3-dose series should be delayed until completion of pregnancy.
• Although HPV vaccination is not specifically recommended for health-care personnel (HCP) based on their occupation, HCP should receive the HPV vaccine as recommended (see above).

# Zoster vaccination
• A single dose of zoster vaccine is recommended for adults aged 60 years and older regardless of whether they report a prior episode of herpes zoster. Although the vaccine is licensed by the Food and Drug Administration (FDA) for use among, and can be administered to, persons aged 50 years and older, ACIP recommends that vaccination begin at age 60 years.
• Persons aged 60 years and older with chronic medical conditions may be vaccinated unless their condition constitutes a contraindication, such as pregnancy or severe immunodeficiency.
• Although zoster vaccination is not specifically recommended for HCP, they should receive the vaccine if they are in the recommended age group.

# Measles, mumps, rubella (MMR) vaccination
• Adults born before 1957 generally are considered immune to measles and mumps.
The second dose should be administered 1 month after the first dose; the third dose should be given at least 2 months after the second dose (and at least 4 months after the first dose). If the combined hepatitis A and hepatitis B vaccine (Twinrix) is used, give 3 doses at 0, 1, and 6 months; alternatively, a 4-dose Twinrix schedule, administered on days 0, 7, and 21-30, followed by a booster dose at month 12, may be used.
• Adult patients receiving hemodialysis or with other immunocompromising conditions should receive 1 dose of 40 µg/mL (Recombivax HB) administered on a 3-dose schedule at 0, 1, and 6 months or 2 doses of 20 µg/mL (Engerix-B) administered simultaneously on a 4-dose schedule at 0, 1, 2, and 6 months.

# TABLE. Contraindications and precautions to commonly used vaccines in adults

Varicella
Precautions: Recent (within 11 months) receipt of antibody-containing blood product (specific interval depends on product). Moderate or severe acute illness with or without fever. Receipt of specific antivirals (i.e., acyclovir, famciclovir, or valacyclovir) 24 hours before vaccination; avoid use of these antiviral drugs for 14 days after vaccination.

Human papillomavirus (HPV)
Contraindications: Severe allergic reaction (e.g., anaphylaxis) after a previous dose or to a vaccine component.
Precautions: Moderate or severe acute illness with or without fever. Pregnancy.

Zoster
Contraindications: Severe allergic reaction (e.g., anaphylaxis) to a vaccine component. Known severe immunodeficiency (e.g., from hematologic and solid tumors, receipt of chemotherapy, or long-term immunosuppressive therapy, or patients with HIV infection who are severely immunocompromised). Pregnancy.
Precautions: Moderate or severe acute illness with or without fever. Receipt of specific antivirals (i.e., acyclovir, famciclovir, or valacyclovir) 24 hours before vaccination; avoid use of these antiviral drugs for 14 days after vaccination.

Measles, mumps, rubella (MMR)
Contraindications: Severe allergic reaction (e.g., anaphylaxis) after a previous dose or to a vaccine component. Known severe immunodeficiency (e.g., from hematologic and solid tumors, receipt of chemotherapy, congenital immunodeficiency, or long-term immunosuppressive therapy, or patients with HIV infection who are severely immunocompromised). Pregnancy.
Precautions: Moderate or severe acute illness with or without fever. Recent (within 11 months) receipt of antibody-containing blood product (specific interval depends on product). History of thrombocytopenia or thrombocytopenic purpura. Need for tuberculin skin testing.

Pneumococcal polysaccharide (PPSV)
Contraindications: Severe allergic reaction (e.g., anaphylaxis) after a previous dose or to a vaccine component.
Precautions: Moderate or severe acute illness with or without fever.

Pneumococcal conjugate (PCV13)
Contraindications: Severe allergic reaction (e.g., anaphylaxis) after a previous dose or to a vaccine component, including to any vaccine containing diphtheria toxoid.
Precautions: Moderate or severe acute illness with or without fever.

Meningococcal, conjugate (MCV4); meningococcal, polysaccharide (MPSV4)
Contraindications: Severe allergic reaction (e.g., anaphylaxis) after a previous dose or to a vaccine component.
Precautions: Moderate or severe acute illness with or without fever.

Hepatitis A (HepA)
Contraindications: Severe allergic reaction (e.g., anaphylaxis) after a previous dose or to a vaccine component.
Precautions: Moderate or severe acute illness with or without fever.
Hepatitis B (HepB)
Contraindications: Severe allergic reaction (e.g., anaphylaxis) after a previous dose or to a vaccine component.
Precautions: Moderate or severe acute illness with or without fever.

Table footnotes:
1. Vaccine package inserts and the full ACIP recommendations for these vaccines should be consulted for additional information on vaccine-related contraindications and precautions and for more information on vaccine excipients. Events or conditions listed as precautions should be reviewed carefully. Benefits of and risks for administering a specific vaccine to a person under these circumstances should be considered. If the risk from the vaccine is believed to outweigh the benefit, the vaccine should not be administered. If the benefit of vaccination is believed to outweigh the risk, the vaccine should be administered. A contraindication is a condition in a recipient that increases the chance of a serious adverse reaction; therefore, a vaccine should not be administered when a contraindication is present.
6. Vaccine should be deferred for the appropriate interval if replacement immune globulin products are being administered.

• Although PCV13 is licensed by FDA for use among, and can be administered to, persons aged 50 years and older, ACIP recommends PCV13 for adults aged 19 years and older with the specific medical conditions noted above.

# Meningococcal vaccination
• Administer 2 doses of meningococcal conjugate vaccine quadrivalent (MCV4) at least 2 months apart to adults with functional asplenia or persistent complement component deficiencies.
• HIV-infected persons who are vaccinated also should receive 2 doses.
• Administer a single dose of meningococcal vaccine to microbiologists routinely exposed to isolates of Neisseria meningitidis, military recruits, and persons who travel to or live in countries in which meningococcal disease is hyperendemic or epidemic.
• First-year college students up through age 21 years who are living in residence halls should be vaccinated if they have not received a dose on or after their 16th birthday.
• MCV4 is preferred for adults with any of the preceding indications who are aged 55 years and younger; meningococcal polysaccharide vaccine (MPSV4) is preferred for adults aged 56 years and older.
• Revaccination with MCV4 every 5 years is recommended for adults previously vaccinated with MCV4 or MPSV4 who remain at increased risk for infection (e.g., adults with anatomic or functional asplenia or persistent complement component deficiencies).

# Hepatitis A vaccination
# Introduction

Public health surveillance is the ongoing, systematic collection, analysis, interpretation, and dissemination of data about a health-related event for use in public health action to reduce morbidity and mortality and to improve health (1). Surveillance serves at least eight public health functions. These include supporting case detection and public health interventions, estimating the impact of a disease or injury, portraying the natural history of a health condition, determining the distribution and spread of illness, generating hypotheses and stimulating research, evaluating prevention and control measures, and facilitating planning (2). Another important public health function of surveillance is outbreak detection (i.e., identifying an increase in the frequency of disease above the background occurrence of the disease). Outbreaks typically have been recognized either based on accumulated case reports of reportable diseases or by clinicians and laboratorians who alert public health officials about clusters of diseases. Because of the threat of terrorism and the increasing availability of electronic health data, enhancements are being made to existing surveillance systems, and new surveillance systems have been developed and implemented in public health jurisdictions with the goal of early and complete detection of outbreaks (3). The usefulness of surveillance systems for early detection of and response to outbreaks has not been established, and substantial costs can be incurred in developing, enhancing, and managing these surveillance systems and investigating false alarms (4). Measurement of the performance of public health surveillance systems for outbreak detection is needed to establish the relative value of different approaches and to provide the information needed to improve their efficacy for detection of outbreaks at the earliest stages.

This report supplements existing CDC guidelines for evaluating public health surveillance systems (1). Specifically, the report provides a framework to evaluate timeliness for outbreak detection and the balance among sensitivity, predictive value positive (PVP), and predictive value negative (PVN) for detecting outbreaks. This framework also encourages detailed description of a system's design and operations and of its experience with outbreak detection. The framework is best applied to systems that have data to demonstrate the attributes of the system under consideration. Nonetheless, this framework also can be applied to systems that are in early stages of development or in the planning phase by using citations from the published literature to support conclusions. Ideally, the evaluation should compare the performance of the surveillance system under scrutiny to alternative surveillance systems and produce an assessment of the relative usefulness for early detection of outbreaks.
# Background

Early detection of outbreaks can be achieved in three ways: 1) by timely and complete receipt, review, and investigation of disease case reports, including the prompt recognition and reporting to or consultation with health departments by physicians, health-care facilities, and laboratories, consistent with disease-reporting laws or regulations; 2) by improving the ability to recognize patterns indicative of a possible outbreak early in its course, such as through analytic tools that improve the predictive value of data at an early stage of an outbreak or by lowering the threshold for investigating possible outbreaks; and 3) through receipt of new types of data that can signify an outbreak earlier in its course. These new types of data might include health-care product purchases, absences from work or school, symptoms presented to a health-care provider, or laboratory test orders (5).

# Disease Case Reports

The foundation of communicable disease surveillance in the United States is the state and local application of the reportable disease surveillance system known as the National Notifiable Disease Surveillance System (NNDSS), which includes the listing of diseases and laboratory findings of public health interest, the publication of case definitions for their surveillance, and a system for passing case reports from local to state to CDC. This process works best where two-way communication exists between public health agencies and the clinical community: clinicians and laboratories report cases and clusters of reportable and unusual diseases, and health departments consult on case diagnosis and management, alerts, surveillance summaries, and clinical and public health recommendations and policies. Faster, more specific, and more affordable diagnostic methods and decision-support tools for diseases with substantial outbreak potential could improve the timely recognition of reportable diseases. Ongoing health-care provider and laboratory outreach, education, and 24-hour access to public health professionals are needed to enhance reporting of urgent health threats. Electronic laboratory reporting (i.e., the automated transfer of designated data from a laboratory database to a public health data repository using a defined message structure) also will improve the timeliness and completeness of reporting notifiable conditions (6-8) and can serve as a model for electronic reporting of a wider range of clinical information. A comprehensive surveillance effort supports timely investigation (i.e., tracking of cases once an outbreak has been recognized) and data needs for managing the public health response to an outbreak or terrorist event.

# Pattern Recognition

Statistical tools for pattern recognition and aberration detection can be applied to screen data for patterns warranting further public health investigation and to enhance recognition of subtle or obscure outbreak patterns (9). Automated analysis and visualization tools can lessen the need for frequent and intensive manual analysis of surveillance data.

# New Data Types

Many new surveillance systems, loosely termed syndromic surveillance systems, use data that are not diagnostic of a disease but that might indicate the early stages of an outbreak. The scope of this framework is broader than these novel systems, yet the wide-ranging definitions and expectations of syndromic surveillance require clarification.
Syndromic surveillance for early outbreak detection is an investigational approach in which health department staff, assisted by automated data acquisition and generation of statistical signals, monitor disease indicators continually (real-time) or at least daily (near real-time) to detect outbreaks of diseases earlier and more completely than might otherwise be possible with traditional public health methods (e.g., by reportable disease surveillance and telephone consultation). The distinguishing characteristic of syndromic surveillance is the use of indicator data types. For example, a laboratory is a data source that can support traditional disease case reporting by submitting reports of confirmatory laboratory results for notifiable conditions; however, test requests are a type of laboratory data that might be used as an outbreak indicator by tracking excess volume of test requests for diseases that typically cause outbreaks. New data types have been used by public health to enhance surveillance, reflecting events that might precede a clinical diagnosis (e.g., patients' chief complaints in emergency departments, clinical impressions on ambulance log sheets, prescriptions filled, retail drug and product purchases, school or work absenteeism, and constellations of medical signs and symptoms in persons seen in various clinical settings).

Outbreak detection is the overriding purpose of syndromic surveillance for terrorism preparedness. Enhanced case finding and monitoring of the course and population characteristics of a recognized outbreak also are potential benefits of syndromic surveillance (4). A manual syndromic surveillance system was used to detect additional anthrax cases in the fall of 2001 once the outbreak was recognized (10). Complicating the understanding of syndromic surveillance is the fact that syndromes also have been used for case detection and management of diseases when the condition is infrequent and the syndrome is relatively specific for the condition of interest. Acute flaccid paralysis is a syndromic marker for poliomyelitis and is used to detect single cases of suspected polio in a timely way to initiate investigation and control measures. In this case, the syndrome is relatively uncommon and serious and serves as a proxy for polio (11). Syndromes also have been used effectively for surveillance in resource-poor settings for sexually transmitted disease detection and control where laboratory confirmation is not possible or practical (12). However, syndromic surveillance for terrorism is not intended for early detection of single cases or limited outbreaks because the early clinical manifestations of diseases that might be caused by terrorism are common and nonspecific (13).

# Framework

This framework is intended to support the evaluation of all public health surveillance systems for the timely detection of outbreaks. The framework is organized into four categories: system description, outbreak detection, experience, and conclusions and recommendations. A comprehensive evaluation will address all four categories.

# A. System Description

1. Purpose. The purpose(s) of the system should be explicitly and clearly described and should include the intended uses of the system. The evaluation methods might be prioritized differently for different purposes. For example, if terrorism is expected to be rare, reassurance might be the primary purpose of the terrorism surveillance system.
However, for reassurance to be credible, negative results must be accurate, and the system should have a demonstrated ability to detect outbreaks of the kind and size being dismissed. The description of purpose should include the indications for implementing the system; whether the system is designed for short-term, high-risk situations or long-term, continuous use; the context in which the system operates (whether it stands alone or augments data from other surveillance systems); what type of outbreaks the system is intended to detect; and what secondary functional value is desired. Designers of the system should specify the desired sensitivity and specificity of the system and whether it is intended to capture small or large events.

2. Stakeholders. The stakeholders of the system should be listed. Stakeholders include those who provide data for the system and those who use the information generated by the system (e.g., public health practitioners; health-care providers; other health-related data providers; public safety officials; government officials at local, state, and federal levels; community residents; nongovernmental organizations; and commercial systems developers). The stakeholders might vary among different systems and might change as conditions change. Listing stakeholders helps define whom the system is intended to serve and provides context for the evaluation results.

3. Operation. All aspects of the operation of the syndromic surveillance system should be described in detail to allow stakeholders to validate the description of the system and other interested parties to understand the complexity and resources needed to operate such a system. Detailed system description also will facilitate evaluation by highlighting variations in system operation that are relevant to variations in system performance (Figure 1). Such a conceptual model can facilitate the description of the system. The description of the surveillance process should address 1) systemwide characteristics (data flow), including data and transmission standards to facilitate interoperability and data sharing between information systems, security, privacy, and confidentiality; 2) data sources (used broadly in this framework to include the data-producing facility, the data type, and the data format); 3) data processing before analysis (the data collation, filtering, transformation, and routing functions required for public health to use the data, including the classification and assigning of syndromes); 4) statistical analysis (tools for automated screening of data for potential outbreaks); and 5) epidemiologic analysis, interpretation, and investigation (the rules, procedures, and tools that support decision-making in response to a system signal, including adequate staffing with trained epidemiologists who can review, explore, and interpret the data in a timely manner).
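The syndrome classification step noted in item 3) above is, at its simplest, a mapping from free-text chief complaints to syndrome categories. The following is a minimal sketch of the idea; the keyword table, category names, and function are hypothetical illustrations, not validated surveillance case definitions, which in practice are built from standardized code sets and tested against clinical data.

```python
# Minimal sketch of assigning free-text chief complaints to syndrome
# categories. The keyword table and categories are hypothetical
# illustrations, not validated surveillance case definitions.
SYNDROME_KEYWORDS = {
    "respiratory": ["cough", "shortness of breath", "wheezing"],
    "gastrointestinal": ["vomiting", "diarrhea", "nausea", "abdominal pain"],
    "febrile": ["fever", "chills"],
}

def assign_syndromes(chief_complaint: str) -> list[str]:
    """Return every syndrome category whose keywords appear in the text."""
    text = chief_complaint.lower()
    return [
        syndrome
        for syndrome, keywords in SYNDROME_KEYWORDS.items()
        if any(keyword in text for keyword in keywords)
    ]

# A single complaint can map to more than one syndrome category.
print(assign_syndromes("Fever and productive cough x 3 days"))
# ['respiratory', 'febrile']
```

A production implementation of this step would also need to handle misspellings, negation (e.g., "no fever"), and coded data such as ICD diagnoses.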
# B. Outbreak Detection

The ability of a system to reliably detect an outbreak at the earliest possible stage depends on the timely capture and processing of the data produced by transactions of health behaviors (e.g., over-the-counter pharmaceutical sales, emergency department visits, and nurse call-line volume) or health-care activities (e.g., laboratory test volume and triage categorization of chief complaint) that might indicate an outbreak; the validity of the data for measuring the conditions of interest at the earliest stage of illness and the quality of those data; and the detection methods applied to these processed surveillance data to distinguish expected events from those indicative of an outbreak.

1. Timeliness. The timeliness of surveillance approaches for outbreak detection is measured by the lapse of time from exposure to the disease agent to the initiation of a public health intervention. A timeline with interim milestones is proposed to improve the specificity of timeliness measures (Figure 3). Although measuring all of the time points that define the intervals might be impractical or inexact in an applied outbreak setting, measuring intervals in a consistent way can be used to compare alternative outbreak-detection approaches and specific surveillance systems.
- Onset of exposure: By anchoring the timeline on exposure, the timeliness advantage of different data sources can be assessed and compared. Exposure can most easily be estimated in a point-source outbreak. Time of exposure is often inferred from knowledge of the agent (e.g., incubation period) and the epidemiology of the outbreak.
- Onset of symptoms: The interval to symptom onset in each case is defined by the incubation period for the agent. Time of symptom onset might be estimated using case interviews or existing knowledge of the agent and the time of exposure. The incubation period might vary according to host factors and the route and dose of the exposure.
- Onset of behavior: Following symptom onset, several health behaviors can occur (e.g., purchasing over-the-counter medication from a store, calling in sick to work, or visiting an urgent-care center). When an affected person interacts with the health-care system, a variety of health-care provider behaviors might be performed (e.g., ordering of a laboratory test and admission to a hospital). The selection of data sources for a system has a strong influence on timeliness. Some of those experiencing symptoms will initiate a health behavior or stimulate a health-care provider behavior that is a necessary step to being captured in the surveillance system.
- Capture of data: The timing of the capture of a behavior by the data-providing facility varies by data type and can be influenced by system design. A retail purchase might be entered in an electronic database at the moment the transaction is completed, or a record might not be generated in a clinical setting until hours after health care was sought.
- Completion of data processing: Time is required for the facility providing the data to process the data and produce the files needed for public health. Records might be transmitted to a central repository only periodically (e.g., weekly). Data form can influence processing time (e.g., transcription from paper to electronic form and coding of text-based data), and data manipulations needed to de-identify data and prepare necessary files can affect processing time.
- Capture of data in the public health surveillance system: The time required to transfer data from the data-providing facility to the public health entity varies according to the frequency established for routine data transmission and by the data transmission method (e.g., Internet, mail, or courier).
- Application of pattern recognition tools/algorithms: Before analytic tools can be applied to the data in the surveillance system, certain processing steps are necessary (e.g., categorization into syndrome categories, application of case definitions, and data transformations).
- Generation of automated alert: The detection algorithm's alerting interval is a product of how often the algorithm is run and a report generated and the capacity of the algorithm to filter noise and detect an aberration as early as possible in the course of the outbreak.
- Initiation of public health investigation: The initiation of a public health investigation occurs when a decision is made to acquire additional data. Analysis and judgment are applied by public health practitioners to the processed surveillance data and other available information to decide whether new data collection is warranted to confirm the existence of an outbreak. The challenge of interpreting data from multiple surveillance systems could diminish potential advantages in timeliness. The focus on outbreak detection allows investigations of potential outbreaks to proceed before a specific clinical diagnosis is obtained.
- Initiation of public health intervention: When an outbreak of public health significance is confirmed, interventions can be implemented to control the severity of disease and prevent further spread. Interventions might be of a general nature directed to the recognition of an outbreak (e.g., applying respiratory infection precautions and obtaining clinical specimens for diagnosis) or can be specific to the diagnosis (e.g., antibiotic prophylaxis or vaccination).
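Because each milestone is an observable (or estimable) point in time, the interim intervals can be computed mechanically once the events are timestamped. A minimal sketch follows; the milestone names mirror the timeline above, and all timestamps are hypothetical illustrations.

```python
# Minimal sketch of computing interim intervals along the outbreak-
# detection timeline. All timestamps are hypothetical illustrations.
from datetime import datetime

milestones = {
    "onset of exposure": datetime(2004, 5, 1, 12, 0),
    "onset of symptoms": datetime(2004, 5, 3, 8, 0),
    "onset of behavior": datetime(2004, 5, 3, 18, 0),
    "capture of data": datetime(2004, 5, 3, 18, 5),
    "capture in public health system": datetime(2004, 5, 4, 6, 0),
    "generation of automated alert": datetime(2004, 5, 4, 9, 0),
    "initiation of investigation": datetime(2004, 5, 4, 15, 0),
    "initiation of intervention": datetime(2004, 5, 5, 10, 0),
}

# Report each step interval and the cumulative lapse from exposure,
# which is the overall timeliness measure proposed in the text.
events = list(milestones.items())
for (prev_name, prev_time), (name, time) in zip(events, events[1:]):
    print(f"{prev_name} -> {name}: {time - prev_time} "
          f"(cumulative: {time - events[0][1]})")
```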
2. Validity. Measuring the validity of a system for outbreak detection requires an operational definition of an outbreak. Although a statistical deviation from a baseline rate can be useful for triggering further investigation, it is not sufficient for defining an outbreak. In practice, the confirmation of an outbreak is a judgment that depends on past experience with the condition, the severity of the condition, the communicability of the condition, confidence in the diagnosis of the condition, public health concern about outbreaks at the time, having options for effective prevention or control, and the resources required and available to respond. Operationally, an outbreak is defined by the affected public health jurisdiction when the occurrence of a condition has changed sufficiently to warrant public health attention. The validity of a surveillance system for outbreak detection varies according to the outbreak scenario and surveillance system factors. These factors can confound the comparison of systems and must be carefully described in the evaluation. For example, the minimum size of an outbreak that can be detected by a system cannot be objectively compared among systems unless they are identical or differences are accounted for in several ways.
- Case definitions: Establish the specificity and sensitivity for the condition of interest on the basis of the data source, data type, and response criteria.
- Baseline estimation: Determine the stability of the background occurrence of cases. Estimations are affected by factors such as population size and geographic distribution. The performance of detection algorithms will vary by the quality, duration, and inherent variability of baseline data.
- Reporting delays: Result in incomplete data, introducing bias that will diminish the performance of detection algorithms.
- Data characteristics: Include underlying patterns in the data (e.g., seasonal variation) and systematic errors inherent in the data (e.g., product sales that influence purchasing behaviors unrelated to illness).
- Outbreak characteristics: Result from agent, host, and environmental factors that affect the epidemiology of the outbreak. For example, a large aerosol exposure with an agent causing serious disease in a highly susceptible population will have different detection potential than an outbreak of similar size spread person-to-person over a longer time and a more dispersed distribution.
- Statistical analysis: Defines how data are screened for outbreak detection. Detection algorithms have different performance characteristics under different outbreak conditions.
- Epidemiologic analysis, interpretation, and investigation: The procedures, resources, and tools for analysis, interpretation, and response can substantially affect the ability to detect and respond to outbreaks.

# Validation Approaches

Different approaches to outbreak detection need to be evaluated under the same conditions to isolate the unique features of the system (e.g., data type) from the outbreak characteristics and the health department capacity. The data needed to evaluate and compare the performance of surveillance systems for early outbreak detection can be obtained from naturally occurring outbreaks or through simulation. Controlled comparisons of surveillance systems for detection of deliberately induced outbreaks will be difficult because of the infrequency of such outbreaks and the diversity of systems and outbreak settings. However, understanding of the value of different surveillance approaches to early detection will increase as descriptions of their experience with detecting and missing naturally occurring outbreaks accumulate. Accumulation of experience descriptions is made more difficult by the lack of standard methods for measuring outbreak detection successes and failures across systems and by the diversity of surveillance system and outbreak factors that influence performance. Standardized classification of system and outbreak factors will enable comparison of experiences across systems. Pending the development of classification standards, descriptive evaluation should include as much detail as possible. Proxy outbreak scenarios reflect the types of naturally occurring outbreaks that should not be missed to instill confidence in the ability of these systems to detect outbreaks caused by terrorism. Examples of proxy events or outbreaks include seasonal events (e.g., increases in influenza, norovirus gastroenteritis, and other infectious respiratory agents) and community outbreaks (e.g., foodborne disease, waterborne disease, hepatitis A, child-care-associated shigellosis, legionellosis, and coccidioidomycosis and histoplasmosis in areas where the diseases are endemic). The measurement of outbreaks detected, false alarms, and outbreaks missed or detected late should be designed as a routine part of any system workflow and conducted with minimal effort or complexity. Routine reporting should be automated where possible.
Relevant information needs include: the number of statistical aberrations detected at a set threshold in a defined period of time (e.g., frequency per month at a given p-value); the action taken as a result of the signals (e.g., review for data errors, in-depth follow-up analysis of the specific conditions within the syndrome category, manual epidemiologic analysis to characterize a signal, examining data from other systems, and increasing the frequency of reporting from affected sites); resources directed to the follow-up of the alert; the public health response that resulted (e.g., an alert to clinicians, timely dissemination of information to other health entities, a vaccination campaign, or no further response); documentation of how every recognized outbreak in the jurisdiction was detected; an assessment of the value of the follow-up effort (e.g., whether the effort was an appropriate application of public health resources); a detailed description of the agent, host, and environmental conditions of the outbreak; and the number of outbreaks detected only late in their course or in retrospect.

To evaluate the relative value of different methods for outbreak detection, a direct comparison approach is needed. For example, if a health department detects a substantial number of its outbreaks through telephone consultations, then a phone-call tracking system might produce the data needed to compare telephone consults with other approaches for early detection of outbreaks. As an alternative to naturally occurring outbreaks, simulations can allow for the control and modification of agent, host, and environmental factors to study system performance across a range of common scenarios. However, simulations are limited in their ability to mimic the diversity and unpredictability of real-life events. Whenever possible, simulated outbreaks should be superimposed on historical trend data. To evaluate detection algorithms comparatively, a shared challenge problem and data set would be helpful. Simulation is limited by the availability of well-documented outbreak scenarios (e.g., organism or agent characteristics, transmission characteristics, and population characteristics). Simulations should incorporate data for each of the factors described previously. Multiple simulation runs should be used to test algorithm performance in different outbreak scenarios, allowing for generation of operating characteristic curves that reflect performance in a range of conditions.

Focused studies to validate the performance of limited aspects of systems (e.g., data sources, case definitions, statistical methods, and timeliness of reporting) can provide indirect evidence of system performance. Component studies also can test assumptions about outbreak scenarios and support better data simulation. Syndrome case definitions for certain specific data sources need to be validated. Component validation studies should emphasize outbreak detection over case detection. These studies contain explicit hypotheses and research questions and should be shared in a manner that advances the development of outbreak detection systems without unnecessary duplication.
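As a concrete illustration of the simulation approach, the sketch below superimposes a hypothetical, linearly growing outbreak on simulated baseline counts and records when a simple aberration rule first signals. The baseline model, outbreak shape, and mean-plus-3-standard-deviations rule are illustrative assumptions only; an actual evaluation would superimpose the outbreak on historical data and run the system's own detection algorithm.

```python
# Minimal sketch: superimpose a simulated outbreak on simulated baseline
# counts and measure detection delay for a simple aberration rule.
# Baseline model, outbreak shape, and threshold are illustrative only.
import random
import statistics

random.seed(42)
observed = [random.gauss(100, 10) for _ in range(120)]  # daily counts

outbreak_start = 100
for day in range(outbreak_start, len(observed)):
    # Linearly growing excess of 8 additional cases per outbreak day.
    observed[day] += 8 * (day - outbreak_start + 1)

def first_signal(series, start, window=28, z=3.0):
    """First index >= start where the count exceeds the mean plus
    z standard deviations of the preceding `window` days."""
    for t in range(max(start, window), len(series)):
        history = series[t - window:t]
        threshold = statistics.mean(history) + z * statistics.stdev(history)
        if series[t] > threshold:
            return t
    return None

detected = first_signal(observed, start=outbreak_start)
if detected is None:
    print("Simulated outbreak was never signaled")
else:
    print(f"Outbreak began day {outbreak_start}; first signal day {detected} "
          f"(delay: {detected - outbreak_start} days)")
```

Repeating such runs across many random seeds, outbreak sizes, and growth rates is one way to generate the operating characteristic curves described above.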
# Statistical Assessment of Validity

Surveillance systems must balance the risk for an outbreak, the value of early intervention, and the finite resources for investigation. Perceived high risk and high value of timely detection support high sensitivity and low thresholds for investigation. A low threshold can prompt resource-intensive investigations and occupy vital staff, and a high threshold might delay detection and intervention. The perceived threat of an outbreak, the community value attached to early detection, and the investigation resources available might vary over time. As a result, specifying a fixed relation between optimal sensitivity and predictive value for purposes of evaluation might be difficult.

Sensitivity, PVP, and PVN are closely linked and are considered together in this framework. Sensitivity is the percentage of outbreaks occurring in the jurisdiction that are detected by the system. PVP reflects the probability that a system signal is an outbreak. PVN reflects the probability that no outbreak is occurring when the system does not yield a signal. The calculation of sensitivity and predictive value is described in detail in the updated guidelines for evaluating public health surveillance systems (1). Measurement of sensitivity requires an alternative data source of high quality (e.g., a "gold" standard) to confirm outbreaks in the population that were missed by the surveillance system. Sensitivity for outbreak detection could be assessed through capture-recapture techniques with two independent data sources (14). The high costs associated with responding to false alarms and with delayed response to outbreaks demand efforts to quantify and limit the impact of both. As long as the likelihood of terrorism is extremely low, PVP will remain near zero, and a certain level of nonterrorism signals will be a necessary part of conducting surveillance for the detection of terrorism. Better performance can be achieved in one attribute (e.g., sensitivity) without a performance decrement in another (e.g., PVP) by changing the system (e.g., adding a data type or applying a better detection algorithm). Improving sensitivity by lowering the cut-off for signaling an outbreak will reduce PVP. Sensitivity and PVP for these surveillance systems will ultimately be calibrated in each system to balance the secondary benefits (e.g., detection of naturally occurring outbreaks, disease case finding and management, reassurance of no outbreak during periods of heightened risk, and a stronger reporting and consultation relation between public health and clinical medicine) with the locally acceptable level of false alarms.
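A worked example may help fix these definitions. Assuming hypothetical one-year counts, cross-tabulating system signals against outbreaks confirmed by an independent gold standard:

```python
# Worked sketch of sensitivity, PVP, and PVN for outbreak detection.
# All counts are hypothetical; the "no signal, no outbreak" cell is
# counted over analysis intervals (e.g., days) judged against a gold
# standard.
true_positives = 8     # signals that corresponded to confirmed outbreaks
false_positives = 40   # signals with no outbreak found (false alarms)
false_negatives = 2    # confirmed outbreaks the system never signaled
true_negatives = 315   # intervals with no signal and no outbreak

sensitivity = true_positives / (true_positives + false_negatives)
pvp = true_positives / (true_positives + false_positives)
pvn = true_negatives / (true_negatives + false_negatives)

print(f"Sensitivity: {sensitivity:.0%}")  # 80% of outbreaks detected
print(f"PVP: {pvp:.1%}")   # 16.7% of signals were true outbreaks
print(f"PVN: {pvn:.1%}")   # 99.4% of quiet intervals were truly quiet
```

Lowering the signaling threshold in this example would move outbreaks from the false-negative to the true-positive cell at the price of more false positives, illustrating the sensitivity/PVP trade-off described above.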
# Data Quality

The validity of syndromic surveillance system data depends on data quality. Error-prone systems and data prone to inaccurate measurement can negatively affect detection of unusual trends. Although data quality might be a less critical problem for screening common, nonspecific indicators for statistical aberrations, quality should be evaluated and improved to the extent possible. Measuring data quality depends on a standard (e.g., medical record review or fabricated test data with values known to the evaluator). The updated guidelines for evaluating public health surveillance systems (1) describe data quality in additional detail.
- Representativeness: When case ascertainment within a population is incomplete (e.g., in a sentinel system or a statistically based sample), representativeness reflects whether a system accurately describes the distribution of cases by time, place, and person. Geographic representativeness is particularly important for detecting outbreaks of infectious diseases.
- Completeness of data: The frequency of unknown or blank responses to data items in the system can be used to measure the level of completeness. For systems that update data from previous transmissions, time should be factored into measurement by indicating the percentage of records that are complete (i.e., all variables are captured for a record) on initial report and within an appropriate interval (e.g., 48 hours) of submission. Sites with substantial reporting delays can be flagged for reliability concerns and targeted for improvement. Incomplete data can require follow-up before analysis, with associated decreases in timeliness and increases in cost. When multiple data providers contribute to a common data store for statistical analysis, the percentage of reporting sources that submit their data on a routine interval (e.g., every 24 hours) conveys the completeness of the aggregate database for routine analysis. Evaluation of completeness should include a description of the problems experienced with manual data management (e.g., coding errors or loss of data) and the problems with automated data management (e.g., programming errors or inappropriate filtering of data).
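The completeness measures above can be computed directly from the record stream. A minimal sketch follows, assuming hypothetical records that carry a submission timestamp and the time at which all variables were finally captured:

```python
# Minimal sketch of two completeness measures: the percentage of records
# complete on initial report, and the percentage complete within an
# appropriate interval (48 hours here). Records are hypothetical.
from datetime import datetime, timedelta

records = [
    # (complete_on_initial_report, time_submitted, time_fully_complete)
    (True,  datetime(2004, 5, 1, 8), datetime(2004, 5, 1, 8)),
    (False, datetime(2004, 5, 1, 9), datetime(2004, 5, 2, 10)),
    (False, datetime(2004, 5, 1, 9), datetime(2004, 5, 5, 16)),
    (True,  datetime(2004, 5, 2, 7), datetime(2004, 5, 2, 7)),
]

total = len(records)
complete_initial = sum(1 for initial, _, _ in records if initial)
complete_48h = sum(
    1 for _, submitted, completed in records
    if completed - submitted <= timedelta(hours=48)
)

print(f"Complete on initial report: {complete_initial / total:.0%}")  # 50%
print(f"Complete within 48 hours:   {complete_48h / total:.0%}")      # 75%
```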
# C. System Experience

The performance attributes described in this section convey the experience that has accrued in using the system.

1. System usefulness. A surveillance system is useful for outbreak detection to the extent that it contributes to the early detection of outbreaks of public health significance that leads to an effective intervention. An assessment of usefulness goes beyond detection to address the impact or value added by its application. Measurement of usefulness is inexact. As with validity, measurement will benefit from common terminology and standard data elements. In the interim, detailed efforts to describe and illustrate the consequences of early detection efforts will improve understanding of their usefulness. Evaluation should begin with a review of the objectives of the system and should consider the priorities. To the extent possible, usefulness should be described by the disease prevention and control actions taken as a result of the analysis and interpretation of the data from the system. The impact of the surveillance system should be contrasted with other mechanisms available for outbreak detection. An assessment of usefulness should list the outbreaks detected and the role that different methods played in the identification of each one. Examples of how the system has been used to detect or track health problems other than outbreaks in the community should be included. The public health response to the outbreaks and health problems detected should be described, as well as how data from new or modified surveillance systems support inferences about disease patterns that would not be possible without them. Surveillance systems for early outbreak detection are sometimes justified by the reassurance they provide when aberrant patterns are not apparent during a heightened-risk period or when the incidence of cases declines during an outbreak. When community reassurance is claimed as a benefit of the surveillance system, reassurance should be defined and the measurement quantified (e.g., number of phone calls from the public on a health department hotline, successful press conferences, satisfaction of public health decision-makers, or resources to institutionalize the new surveillance system). A description should include who is reassured and of what they are reassured, and reassurance should be evaluated for validity by estimating the PVN.

2. Flexibility. The flexibility of a surveillance system refers to the system's ability to change as needs change. The adaptation to changing detection needs or operating conditions should occur with minimal additional time, personnel, or other resources. Flexibility generally improves when more data processing is handled centrally rather than distributed to individual data-providing facilities, because fewer system and operator behavior changes are needed. Flexibility should address the ability of the system to apply evolving data standards and code sets as reflected in Public Health Information Network (PHIN) standards. Flexibility includes the adaptability of the system to shift from outbreak detection to outbreak management. The flexibility of the system to meet changing detection needs can include the ability to add unique data to refine signal detection, to capture exposure and other data relevant to managing an outbreak, to add data providers to increase population coverage and detect or track low-frequency events, to modify case definitions (the aggregation of codes into syndrome groupings), to improve the detection algorithm to filter random variations in trends more efficiently, and to adjust the detection threshold. Flexibility also can be reflected by the ability of the system to detect and monitor naturally occurring outbreaks in the absence of terrorism. System flexibility is needed to balance the risk for an outbreak, the value of early intervention, and the resources for investigation as understanding of these factors changes.

3. System acceptability. As with the routine evaluation of public health surveillance systems (1), the acceptability of a surveillance system for early outbreak detection is reflected by the willingness of participants and stakeholders to contribute to the data collection and analysis. This concept includes the authority and willingness to share electronic health data and should include an assessment of the legal basis for the collection of prediagnosis data and the implications of privacy laws (e.g., the Health Insurance Portability and Accountability Act Privacy Rule) (15). All states have broad disease-reporting laws that require reporting of diseases of public health importance, and many of these laws appear compatible with the authority to receive syndromic surveillance data (16). The authority to require reporting of indicator data for persons who lack evidence of a reportable condition and in the absence of an emergency is less clear and needs to be verified by jurisdictions. Acceptability can vary over time as the threat level, perceived value of early detection, support for the methods of surveillance, and resources fluctuate. Acceptability of a system can be inferred from the extent of its adoption. Acceptability is reflected by the participation rate of potential reporting sources, by the completeness of data reporting, and by the timeliness of person-dependent steps in the system (e.g., manual data entry from emergency department logs as distinguished from electronic data from the normal clinical workflow).

4. Portability. The portability of a surveillance system addresses how well the system could be duplicated in another setting. Adherence to PHIN standards can enhance portability by reducing variability in the application of information technology between sites. Reliance on person-dependent steps, including judgment and action criteria (e.g., for analysis and interpretation), should be fully documented to improve system portability.
Portability also is influenced by the simplicity of the system. Examples should be provided of the deployment of similar systems in other settings, and the experience of those efforts should be described. In the absence of examples, features of the system that might support or detract from portability should be described.

5. System stability. The stability of a surveillance system refers to its resilience to system changes (e.g., a change in coding from the International Classification of Diseases, Ninth Revision [ICD-9] to ICD-10). Stability can be demonstrated by the duration and consistent operation of the system. System stability is distinguished from the reliability of data elements within the system; the consistent representation of the condition under surveillance (reliability) is an aspect of data quality. Stability can be measured by the frequency of system outages or downtime for servicing during periods of need, including downtime of data providers, the frequency of personnel deficiencies from staff turnover, and budget constraints. Ongoing support by system designers and evolving software updates might improve system stability. Stability also can be reflected in the extent of control over costs and system changes that the sponsoring agency maintains.

6. System costs. Cost is a vital factor in assessing the relative value of surveillance for terrorism preparedness. Cost-effectiveness analyses and data modeling are needed under a range of scenarios to estimate the value of innovations in surveillance for outbreak detection and terrorism preparedness (17). Improved methods of measuring cost and impact are needed. Costs borne by data providers should be noted; however, the cost perspective should be that of the community (societal perspective) to account for costs of prevention and treatment borne by the community. Direct costs include the fees paid for software and data; the personnel salary and support expenses (e.g., training, equipment support, and travel); and other resources needed to operate the system and produce information for public health decisions (e.g., office supplies, Internet and telephone lines, and other communication equipment). Fixed costs for running the system should be differentiated from the variable costs of responding to system alarms. Variable costs include the cost of follow-up activities (e.g., for diagnosis, case management, or community interventions). The cost of responding to false alarms represents a variable but inherent inefficiency of an early detection system that should be accounted for in the evaluation. Similarly, variable costs include the financial and public health costs of missing outbreaks entirely or recognizing them late. Costs vary because the sensitivity and timeliness of the detection methods can be modified according to changes in tolerance for missing outbreaks and for responding to false alarms. Similarly, the threshold and methods for investigating system alarms can vary with the perceived risk and need to respond. Costs from public health response to false alarms with traditional surveillance systems need to be measured in a comparable way when assessing the relative value of new surveillance methods. Cost savings should be estimated by assessing the impact of prevention and control efforts (e.g., health-care costs and productivity losses averted). Questions to answer include the following:
- How many investigations were initiated as a result of these data?
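The fixed/variable cost distinction above can be made concrete with a back-of-the-envelope calculation; all figures in the following sketch are hypothetical and would be replaced by a jurisdiction's own accounting:

```python
# Minimal sketch separating fixed operating costs from the variable
# costs of investigating alarms. All dollar figures are hypothetical.
fixed_annual_cost = 150_000.0      # software, data fees, core staffing
cost_per_alarm_followup = 1_200.0  # staff time to review one alarm
alarms_per_year = 120              # a function of the detection threshold

variable_cost = cost_per_alarm_followup * alarms_per_year
total_cost = fixed_annual_cost + variable_cost
print(f"Variable (alarm follow-up) cost: ${variable_cost:,.0f}")  # $144,000
print(f"Total annual cost: ${total_cost:,.0f}")                   # $294,000
```

Because the alarm rate scales with the detection threshold, the variable component is where tolerance for false alarms translates directly into cost.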
# D. Conclusions and Recommendations for Use and Improvement of Systems for Early Outbreak Detection

The evaluation should be summarized to convey the strengths and weaknesses of the system under scrutiny. Summarizing and reporting evaluation findings should facilitate the comparison of systems for those making decisions about new or existing surveillance methods. These conclusions should be validated among stakeholders of the system and modified accordingly. Recommendations should address adoption, continuation, or modification of the surveillance system so that it can better achieve its intended purposes. Recommendations should be disseminated widely and actively interpreted for all appropriate audiences. An Institute of Medicine study concluded that although innovative surveillance methods might be increasingly helpful in the detection and monitoring of outbreaks, a balance is needed between strengthening proven approaches (e.g., diagnosis of infectious illness and strengthening the liaison between clinical-care providers and health departments) and the exploration and evaluation of new approaches (17). Guidance for the evaluation of surveillance systems for outbreak detection is ongoing. Many advances are needed in the understanding of system and outbreak characteristics to improve performance metrics. For example, research is needed to understand the personal health and clinical health-care behaviors that might serve as early indicators of priority diseases; analytic methods are needed to improve pattern recognition and to integrate multiple streams of data; a shared vocabulary is needed for describing outbreak conditions, managing text-based information, and supporting case definitions; and evaluation research is needed, including cost-effectiveness studies of different surveillance models for early detection, both in real-life comparisons and in simulated data environments, to characterize the size and nature of epidemics that can be detected through innovative surveillance approaches. Pending more robust measures of system performance, the goal of this framework is to improve public health surveillance systems for early outbreak detection by providing practical guidance for evaluation.

# Appendix. Operations Checklist

❑ Indicate the frequency of editing and updating the electronic file
❑ Indicate how incomplete records are handled in analysis and reports
❑ Describe how data archiving and disposal are managed
❑ Describe how new data sources or necessary changes in data sources are identified and incorporated in the system

# D. Statistical Analysis
❑ Describe how the health outcome baseline is established:
  ❑ Describe the population under surveillance
  ❑ Describe the source, the criteria, and the methods for establishing the background frequencies used to detect aberrations
  ❑ Indicate how much baseline data are managed in the analysis database
❑ Describe the analytic methods used in automated analyses (i.e., aberration detection):
  ❑ Describe in mathematical and statistical detail the algorithms intended to signal an event requiring further investigation
  ❑ Describe adaptations in analytic methods to account for different outbreak patterns that might be anticipated in different data sources and types and for different outbreak scenarios
  ❑ Indicate how reporting delays are corrected for in the analysis
❑ Describe the method of adjusting results for potential confounding factors
❑ Describe how the system adapts over time and the empirical basis for modifications in the methods
❑ Describe the detection process:
  ❑ The frequency of data analysis
  ❑ How an alarm is generated
  ❑ Where the alarm goes
  ❑ The type of alarms generated by the system
  ❑ What is done to ensure that signals are not being missed
❑ Describe the report generation process:
  ❑ What routine reports are generated
  ❑ Whether data are presented graphically or in tables
  ❑ Whether data can be manipulated to get a specific

# E. Epidemiologic Analysis, Interpretation, and Investigation
❑ Describe the process for managing system alarms:
  ❑ Describe the special procedures instituted when an alarm is generated (e.g., review for data errors, in-depth manual analysis of the specific conditions within the syndrome category, manual epidemiologic analysis to identify subgroups responsible for an alarm, examining data from other systems, and increasing the frequency of reporting from affected sites)
  ❑ Estimate the person-hours devoted to review and analysis each day and the interval at which data are analyzed
  ❑ Indicate documented procedures for managing system alarms
  ❑ Indicate the communication method by which staff are alerted to alarms (e.g., whether they are paged at home or receive an automated e-mail)
  ❑ Indicate the expectations and schedule for staff to actively check the system, including nights and weekends
  ❑ Indicate the response options to an alarm and the factors that influence the choice (e.g., wait for an alarm in another system, initiate an onsite investigation, or alert clinicians to gather information)
❑ Describe the process for identifying cases for investigation when the data analyzed routinely are unidentified
❑ Describe how independent data types are integrated in the analysis for improved decision-making
❑ Describe the rules, procedures, and tools for communication:
  ❑ Indicate the mechanisms used and content guidance provided for sharing results with 1) reporting sources, 2) the response community, and 3) the public
  ❑ Describe how decisions are made for sending urgent communications and the methods for sending urgent communications
  ❑ Indicate whether receipt of a communication is acknowledged and how unacknowledged receipt is managed
  ❑ Indicate how often urgent communications and routine reports are sent
❑ Describe the protocol for conducting surveillance during outbreak management, if one exists:
  ❑ Indicate how often data will be updated and analyzed
  ❑ Describe how the system can be modified or customized to meet special data needs
  ❑ Describe how the system will monitor the impact of prevention and control measures
❑ Describe how and how often system components are tested for operational readiness (e.g., "spiked" data or modeling exercises)

# A. System-wide Issues
❑ Describe the political, administrative, and geographic context for the system
❑ Provide a process model that describes the data flow of the system
depar depar depar depar department of health and human ser tment of health and human ser tment of health and human ser tment of health and human ser tment of health and human services vices vices vices vices# Introduction Public health surveillance is the ongoing, systematic collection, analysis, interpretation, and dissemination of data about a health-related event for use in public health action to reduce morbidity and mortality and to improve health (1). Surveillance serves at least eight public health functions. These include supporting case detection and public health interventions, estimating the impact of a disease or injury, portraying the natural history of a health condition, determining the distribution and spread of illness, generating hypotheses and stimulating research, evaluating prevention and control measures, and facilitating planning (2). Another important public health function of surveillance is outbreak detection (i.e., identifying an increase in frequency of disease above the background occurrence of the disease). Outbreaks typically have been recognized either based on accumulated case reports of reportable diseases or by clinicians and laboratorians who alert public health officials about clusters of diseases. Because of the threat of terrorism and the increasing availability of electronic health data, enhancements are being made to existing surveillance systems, and new surveillance systems have been developed and implemented in public health jurisdictions with the goal of early and complete detection of outbreaks (3). The usefulness of surveillance systems for early detection and response to outbreaks has not been established, and substantial costs can be incurred in developing or enhancing and managing these surveillance systems and investigating false alarms (4). The measurement of the performance of public health surveillance systems for outbreak detection is needed to establish the relative value of different approaches and to provide information needed to improve their efficacy for detection of outbreaks at the earliest stages. This report supplements existing CDC guidelines for evaluating public health surveillance systems (1). Specifically, the report provides a framework to evaluate timeliness for outbreak detection and the balance among sensitivity, predictive value positive (PVP), and predictive value negative (PVN) for detecting outbreaks. This framework also encourages detailed description of system design and operations and of their experience with outbreak detection. The framework is best applied to systems that have data to demonstrate the attributes of the system under consideration. Nonetheless, this framework also can be applied to systems that are in early stages of development or in the planning phase by using citations from the published literature to support conclusions. Ideally, the evaluation should compare the performance of the surveillance system under scrutiny to alternative surveillance systems and produce an assessment of the relative usefulness for early detection of outbreaks. 
# Background

Early detection of outbreaks can be achieved in three ways: 1) by timely and complete receipt, review, and investigation of disease case reports, including prompt recognition and reporting to or consultation with health departments by physicians, health-care facilities, and laboratories, consistent with disease reporting laws or regulations; 2) by improving the ability to recognize patterns indicative of a possible outbreak early in its course, such as through analytic tools that improve the predictive value of data at an early stage of an outbreak or by lowering the threshold for investigating possible outbreaks; and 3) through receipt of new types of data that can signify an outbreak earlier in its course. These new types of data might include health-care product purchases, absences from work or school, symptoms presented to a health-care provider, or laboratory test orders (5).

# Disease Case Reports

The foundation of communicable disease surveillance in the United States is the state and local application of the reportable disease surveillance system known as the National Notifiable Disease Surveillance System (NNDSS), which includes the listing of diseases and laboratory findings of public health interest, the publication of case definitions for their surveillance, and a system for passing case reports from local to state levels and on to CDC. This process works best when two-way communication occurs between public health agencies and the clinical community: clinicians and laboratories report cases and clusters of reportable and unusual diseases, and health departments consult on case diagnosis and management, alerts, surveillance summaries, and clinical and public health recommendations and policies. Faster, more specific, and more affordable diagnostic methods and decision-support tools for diseases with substantial outbreak potential could improve the timely recognition of reportable diseases. Ongoing health-care provider and laboratory outreach, education, and 24-hour access to public health professionals are needed to enhance reporting of urgent health threats. Electronic laboratory reporting (i.e., the automated transfer of designated data from a laboratory database to a public health data repository using a defined message structure) also will improve the timeliness and completeness of reporting of notifiable conditions (6-8) and can serve as a model for electronic reporting of a wider range of clinical information. A comprehensive surveillance effort supports timely investigation (i.e., tracking of cases once an outbreak has been recognized) and the data needs for managing the public health response to an outbreak or terrorist event.

# Pattern Recognition

Statistical tools for pattern recognition and aberration detection can be applied to screen data for patterns warranting further public health investigation and to enhance recognition of subtle or obscure outbreak patterns (9). Automated analysis and visualization tools can lessen the need for frequent and intensive manual analysis of surveillance data.

# New Data Types

Many new surveillance systems, loosely termed syndromic surveillance systems, use data that are not diagnostic of a disease but that might indicate the early stages of an outbreak. The scope of this framework is broader than these novel systems, yet the wide-ranging definitions and expectations of syndromic surveillance require clarification.
Syndromic surveillance for early outbreak detection is an investigational approach in which health department staff, assisted by automated data acquisition and generation of statistical signals, monitor disease indicators continually (real-time) or at least daily (near real-time) to detect outbreaks of disease earlier and more completely than might otherwise be possible with traditional public health methods (e.g., reportable disease surveillance and telephone consultation). The distinguishing characteristic of syndromic surveillance is the use of indicator data types. For example, a laboratory is a data source that can support traditional disease case reporting by submitting reports of confirmatory laboratory results for notifiable conditions; however, test requests are a type of laboratory data that might be used as an outbreak indicator by tracking excess volume of test requests for diseases that typically cause outbreaks. New data types have been used by public health to enhance surveillance, reflecting events that might precede a clinical diagnosis (e.g., patients' chief complaints in emergency departments, clinical impressions on ambulance log sheets, prescriptions filled, retail drug and product purchases, school or work absenteeism, and constellations of medical signs and symptoms in persons seen in various clinical settings).

Outbreak detection is the overriding purpose of syndromic surveillance for terrorism preparedness. Enhanced case-finding and monitoring of the course and population characteristics of a recognized outbreak also are potential benefits of syndromic surveillance (4). A manual syndromic surveillance system was used to detect additional anthrax cases in the fall of 2001 once the outbreak was recognized (10).

Understanding of syndromic surveillance is complicated by the fact that syndromes also have been used for case detection and management when the condition is infrequent and the syndrome is relatively specific for the condition of interest. Acute flaccid paralysis is a syndromic marker for poliomyelitis and is used to detect single cases of suspected polio in a timely way to initiate investigation and control measures. In this case, the syndrome is relatively uncommon and serious and serves as a proxy for polio (11). Syndromes also have been used effectively for surveillance in resource-poor settings for sexually transmitted disease detection and control where laboratory confirmation is not possible or practical (12). However, syndromic surveillance for terrorism is not intended for early detection of single cases or limited outbreaks, because the early clinical manifestations of diseases that might be caused by terrorism are common and nonspecific (13).

# Framework

This framework is intended to support the evaluation of all public health surveillance systems for the timely detection of outbreaks. The framework is organized into four categories: system description, outbreak detection, experience, and conclusions and recommendations. A comprehensive evaluation will address all four categories.

# A. System Description

1. Purpose. The purpose(s) of the system should be explicitly and clearly described and should include the intended uses of the system. The evaluation methods might be prioritized differently for different purposes. For example, if terrorism is expected to be rare, reassurance might be the primary purpose of the terrorism surveillance system.
However, for reassurance to be credible, negative results must be accurate, and the system should have a demonstrated ability to detect outbreaks of the kind and size being dismissed. The description of purpose should include the indications for implementing the system; whether the system is designed for short-term, high-risk situations or long-term, continuous use; the context in which the system operates (whether it stands alone or augments data from other surveillance systems); what types of outbreaks the system is intended to detect; and what secondary functional value is desired. Designers of the system should specify the desired sensitivity and specificity of the system and whether it is intended to capture small or large events.

2. Stakeholders. The stakeholders of the system should be listed. Stakeholders include those who provide data for the system and those who use the information generated by the system (e.g., public health practitioners; health-care providers; other health-related data providers; public safety officials; government officials at local, state, and federal levels; community residents; nongovernmental organizations; and commercial systems developers). The stakeholders might vary among different systems and might change as conditions change. Listing stakeholders helps define whom the system is intended to serve and provides context for the evaluation results.

3. Operation. All aspects of the operation of the syndromic surveillance system should be described in detail to allow stakeholders to validate the description of the system and other interested parties to understand the complexity and resources needed to operate such a system. Detailed system description also will facilitate evaluation by highlighting variations in system operation that are relevant to variations in system performance (Figure 1). Such a conceptual model can facilitate the description of the system. The description of the surveillance process should address 1) systemwide characteristics (data flow [Figure 2]), including data and transmission standards to facilitate interoperability and data sharing between information systems, security, privacy, and confidentiality; 2) data sources (used broadly in this framework to include the data-producing facility [i.e., the entity sharing data with the public health surveillance system], the data type [e.g., chief complaint, discharge diagnosis, or laboratory test order], and the data format [e.g., electronic or paper, text descriptions of events or illnesses, or structured data recorded or stored in standardized format]); 3) data processing before analysis (the data collation, filtering, transformation, and routing functions required for public health to use the data, including the classification and assigning of syndromes); 4) statistical analysis (tools for automated screening of data for potential outbreaks); and 5) epidemiologic analysis, interpretation, and investigation (the rules, procedures, and tools that support decision-making in response to a system signal, including adequate staffing with trained epidemiologists who can review, explore, and interpret the data in a timely manner).
# B. Outbreak Detection

The ability of a system to reliably detect an outbreak at the earliest possible stage depends on the timely capture and processing of the data produced by transactions of health behaviors (e.g., over-the-counter pharmaceutical sales, emergency department visits, and nurse call-line volume) or health-care activities (e.g., laboratory test volume and triage categorization of chief complaints) that might indicate an outbreak; the validity of the data for measuring the conditions of interest at the earliest stage of illness and the quality of those data; and the detection methods applied to these processed surveillance data to distinguish expected events from those indicative of an outbreak.

1. Timeliness. The timeliness of surveillance approaches for outbreak detection is measured by the lapse of time from exposure to the disease agent to the initiation of a public health intervention. A timeline with interim milestones is proposed to improve the specificity of timeliness measures (Figure 3). Although measuring all of the time points that define the intervals might be impractical or inexact in an applied outbreak setting, measuring intervals in a consistent way can be used to compare alternative outbreak-detection approaches and specific surveillance systems.

• Onset of exposure: By anchoring the timeline on exposure, the timeliness advantage of different data sources can be assessed and compared. Exposure can most easily be estimated in a point-source outbreak. Time of exposure is often inferred from knowledge of the agent (e.g., incubation period) and the epidemiology of the outbreak.

• Onset of symptoms: The interval to symptom onset in each case is defined by the incubation period for the agent. Time of symptom onset might be estimated using case interviews or existing knowledge of the agent and the time of exposure. The incubation period might vary according to host factors and the route and dose of the exposure.

• Onset of behavior: Following symptom onset, several health behaviors can occur (e.g., purchasing over-the-counter medication from a store, calling in sick to work, or visiting an urgent-care center). When an affected person interacts with the health-care system, a variety of health-care provider behaviors might be performed (e.g., ordering of a laboratory test and admission to a hospital). The selection of data sources for a system has a strong influence on timeliness. Some of those experiencing symptoms will initiate a health behavior or stimulate a health-care provider behavior that is a necessary step to being captured in the surveillance system.

• Capture of data: The timing of the capture of a behavior by the data-providing facility varies by data type and can be influenced by system design. A retail purchase might be entered in an electronic database at the moment the transaction is completed, whereas a record might not be generated in a clinical setting until hours after health care was sought.

• Completion of data processing: Time is required for the facility providing the data to process the data and produce the files needed for public health. Records might be transmitted to a central repository only periodically (e.g., weekly). Data form can influence processing time (e.g., transcription from paper to electronic form and coding of text-based data), and data manipulations needed to de-identify data and prepare necessary files also can affect processing time.
• Capture of data in the public health surveillance system: The time required to transfer data from the data-providing facility to the public health entity varies according to the frequency established for routine data transmission and the data transmission method (e.g., Internet, mail, or courier).

• Application of pattern recognition tools/algorithms: Before analytic tools can be applied to the data in the surveillance system, certain processing steps are necessary (e.g., categorization into syndrome categories, application of case definitions, and data transformations).

• Generation of automated alert: The detection algorithm's alerting interval is a product of how often the algorithm is run and a report generated, and of the capacity of the algorithm to filter noise and detect an aberration as early as possible in the course of the outbreak.

• Initiation of public health investigation: The initiation of a public health investigation occurs when a decision is made to acquire additional data. Analysis and judgment are applied by public health practitioners to the processed surveillance data and other available information to decide whether new data collection is warranted to confirm the existence of an outbreak. The challenge of interpreting data from multiple surveillance systems could diminish potential advantages in timeliness. The focus on outbreak detection allows investigations of potential outbreaks to proceed before a specific clinical diagnosis is obtained.

• Initiation of public health intervention: When an outbreak of public health significance is confirmed, interventions can be implemented to control the severity of disease and prevent further spread. Interventions might be of a general nature directed to the recognition of an outbreak (e.g., apply respiratory infection precautions and obtain clinical specimens for diagnosis) or can be specific to the diagnosis (e.g., antibiotic prophylaxis or vaccination).
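To make these interval measures concrete, the following is a minimal sketch (in Python) of how the elapsed time between consecutive milestones might be computed so that alternative systems can be compared milestone by milestone. The milestone names and timestamps are illustrative assumptions chosen to mirror the timeline above, not data from any actual system.

```python
from datetime import datetime

# Hypothetical milestone timestamps for one outbreak, following the
# proposed timeline (all values are illustrative, not real data).
milestones = {
    "exposure": datetime(2004, 6, 1, 12, 0),
    "symptom_onset": datetime(2004, 6, 3, 8, 0),
    "care_seeking": datetime(2004, 6, 4, 19, 30),      # health behavior
    "data_capture": datetime(2004, 6, 4, 21, 0),       # record created at facility
    "receipt_by_health_dept": datetime(2004, 6, 5, 6, 0),
    "automated_alert": datetime(2004, 6, 5, 9, 0),
    "investigation_start": datetime(2004, 6, 5, 14, 0),
    "intervention_start": datetime(2004, 6, 6, 10, 0),
}

# Report each interval in hours so that alternative systems can be
# compared step by step rather than only end to end.
ordered = list(milestones.items())
for (prev_name, prev_t), (name, t) in zip(ordered, ordered[1:]):
    hours = (t - prev_t).total_seconds() / 3600
    print(f"{prev_name} -> {name}: {hours:.1f} h")

total = (ordered[-1][1] - ordered[0][1]).total_seconds() / 3600
print(f"exposure -> intervention: {total:.1f} h")
```

Reporting each interval separately, rather than only the end-to-end lapse, helps isolate which step (e.g., data transmission versus analysis) accounts for delays in a given system.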
2. Validity. Measuring the validity of a system for outbreak detection requires an operational definition of an outbreak. Although a statistical deviation from a baseline rate can be useful for triggering further investigation, it is not sufficient for defining an outbreak. In practice, the confirmation of an outbreak is a judgment that depends on past experience with the condition, the severity of the condition, the communicability of the condition, confidence in the diagnosis of the condition, public health concern about outbreaks at the time, the availability of options for effective prevention or control, and the resources required and available to respond. Operationally, an outbreak is defined by the affected public health jurisdiction when the occurrence of a condition has changed sufficiently to warrant public health attention.

The validity of a surveillance system for outbreak detection varies according to outbreak scenario and surveillance system factors. These factors can confound the comparison of systems and must be carefully described in the evaluation. For example, the minimum size of an outbreak that can be detected by a system cannot be objectively compared among systems unless the systems are identical or the differences are accounted for. The relevant factors include the following:

• Case definitions: Establish the specificity and sensitivity for the condition of interest on the basis of the data source, data type, and response criteria.

• Baseline estimation: Determines the stability of the background occurrence of cases. Estimations are affected by factors such as population size and geographic distribution. The performance of detection algorithms will vary by the quality, duration, and inherent variability of baseline data.

• Reporting delays: Result in incomplete data, introducing bias that will diminish the performance of detection algorithms.

• Data characteristics: Include underlying patterns in the data (e.g., seasonal variation) and systematic errors inherent in the data (e.g., product sales that influence purchasing behaviors unrelated to illness).

• Outbreak characteristics: Result from agent, host, and environmental factors that affect the epidemiology of the outbreak. For example, a large aerosol exposure to an agent causing serious disease in a highly susceptible population will have different detection potential than an outbreak of similar size spread person-to-person over a longer time and a more dispersed distribution.

• Statistical analysis: Defines how data are screened for outbreak detection. Detection algorithms have different performance characteristics under different outbreak conditions.

• Epidemiologic analysis, interpretation, and investigation: The procedures, resources, and tools for analysis, interpretation, and response can substantially affect the ability to detect and respond to outbreaks.

# Validation Approaches

Different approaches to outbreak detection need to be evaluated under the same conditions to isolate the unique features of the system (e.g., data type) from the outbreak characteristics and the health department capacity. The data needed to evaluate and compare the performance of surveillance systems for early outbreak detection can be obtained from naturally occurring outbreaks or through simulation. Controlled comparisons of surveillance systems for detection of deliberately induced outbreaks will be difficult because of the infrequency of such outbreaks and the diversity of systems and outbreak settings. However, understanding of the value of different surveillance approaches to early detection will increase as descriptions of their experience with detecting and missing naturally occurring outbreaks accumulate. Accumulation of such descriptions is made more difficult by the lack of standard methods for measuring outbreak detection successes and failures across systems and by the diversity of surveillance system and outbreak factors that influence performance. Standardized classification of system and outbreak factors will enable comparison of experiences across systems. Pending the development of classification standards, descriptive evaluation should include as much detail as possible.

Proxy outbreak scenarios reflect the types of naturally occurring outbreaks that should not be missed if these systems are to instill confidence in their ability to detect outbreaks caused by terrorism. Examples of proxy events or outbreaks include seasonal events (e.g., increases in influenza, norovirus gastroenteritis, and other infectious respiratory agents) and community outbreaks (e.g., foodborne and waterborne outbreaks, hepatitis A, child-care-associated shigellosis, legionellosis, and coccidioidomycosis and histoplasmosis in areas where the diseases are endemic).

The measurement of outbreaks detected, false alarms, and outbreaks missed or detected late should be designed as a routine part of any system workflow and conducted with minimal effort or complexity. Routine reporting should be automated where possible.
Relevant information needs include the following:

• the number of statistical aberrations detected at a set threshold in a defined period (e.g., frequency per month at a given p-value);

• the actions taken as a result of the signals (e.g., review for data errors, in-depth follow-up analysis of the specific conditions within the syndrome category, manual epidemiologic analysis to characterize a signal, examination of data from other systems, and increased frequency of reporting from affected sites);

• the resources directed to the follow-up of the alert;

• the public health response that resulted (e.g., an alert to clinicians, timely dissemination of information to other health entities, a vaccination campaign, or no further response);

• documentation of how every recognized outbreak in the jurisdiction was detected;

• an assessment of the value of the follow-up effort (e.g., whether the effort was an appropriate application of public health resources);

• a detailed description of the agent, host, and environmental conditions of the outbreak; and

• the number of outbreaks detected only late in their course or in retrospect.

To evaluate the relative value of different methods for outbreak detection, a direct comparison approach is needed. For example, if a health department detects a substantial number of its outbreaks through telephone consultations, then a phone call tracking system might produce the data needed to compare telephone consults with other approaches for early detection of outbreaks.

As an alternative to naturally occurring outbreaks, simulations can allow for the control and modification of agent, host, and environmental factors to study system performance across a range of common scenarios. However, simulations are limited in their ability to mimic the diversity and unpredictability of real-life events. Whenever possible, simulated outbreaks should be superimposed on historical trend data. To evaluate detection algorithms comparatively, a shared challenge problem and data set would be helpful. Simulation is limited by the availability of well-documented outbreak scenarios (e.g., organism or agent characteristics, transmission characteristics, and population characteristics). Simulations should incorporate data for each of the factors described previously. Multiple simulation runs should be used to test algorithm performance in different outbreak scenarios, allowing for generation of operating characteristic curves that reflect performance in a range of conditions.
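As an illustration of superimposing a simulated outbreak on trend data, the following is a minimal sketch (in Python) that injects excess cases into a synthetic baseline and records false alarms and detection delay for a simple threshold detector. The baseline counts, epidemic curve, window length, and threshold are all illustrative assumptions, not recommended values, and a synthetic baseline stands in here for the historical data that would be preferred in practice.

```python
import random
import statistics

random.seed(1)  # reproducible illustration

# Hypothetical baseline: 120 days of daily counts for one syndrome.
observed = [random.randint(20, 30) for _ in range(120)]

# Superimpose a simulated point-source outbreak beginning on day 90
# (an illustrative one-week epidemic curve of excess cases).
OUTBREAK_DAY = 90
for offset, extra in enumerate([3, 6, 10, 14, 10, 6, 3]):
    observed[OUTBREAK_DAY + offset] += extra

def is_signal(series, day, window=28, z=3.0):
    """Signal when the day's count exceeds mean + z*SD of the prior window."""
    hist = series[day - window:day]
    mu, sd = statistics.mean(hist), statistics.stdev(hist)
    return sd > 0 and series[day] > mu + z * sd

# Days 0-27 are reserved as the first baseline window.
signals = [d for d in range(28, len(observed)) if is_signal(observed, d)]
false_alarms = [d for d in signals if d < OUTBREAK_DAY]
detected = next((d for d in signals if d >= OUTBREAK_DAY), None)

print("false alarms before outbreak:", false_alarms)
if detected is None:
    print("outbreak missed")
else:
    print(f"outbreak detected on day {detected} "
          f"(delay {detected - OUTBREAK_DAY} days)")
```

Repeating such runs across many random seeds and outbreak scenarios is one way to generate the operating characteristic curves described above.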
Focused studies to validate the performance of limited aspects of systems (e.g., data sources, case definitions, statistical methods, and timeliness of reporting) can provide indirect evidence of system performance. Component studies also can test assumptions about outbreak scenarios and support better data simulation. Syndrome case definitions for specific data sources need to be validated. Component validation studies should emphasize outbreak detection over case detection. These studies contain explicit hypotheses and research questions and should be shared in a manner that advances the development of outbreak detection systems without unnecessary duplication.

# Statistical Assessment of Validity

Surveillance systems must balance the risk for an outbreak, the value of early intervention, and the finite resources for investigation. Perceived high risk and high value of timely detection support high sensitivity and low thresholds for investigation. A low threshold can prompt resource-intensive investigations and occupy vital staff, whereas a high threshold might delay detection and intervention. The perceived threat of an outbreak, the community value attached to early detection, and the investigation resources available might vary over time. As a result, specifying a fixed relation between optimal sensitivity and predictive value for purposes of evaluation might be difficult.

Sensitivity, PVP, and PVN are closely linked and are considered together in this framework. Sensitivity is the percentage of outbreaks occurring in the jurisdiction that are detected by the system. PVP reflects the probability that a system signal is an outbreak. PVN reflects the probability that no outbreak is occurring when the system does not yield a signal. The calculation of sensitivity and predictive value is described in detail in the updated guidelines for evaluating public health surveillance systems (1). Measurement of sensitivity requires an alternative data source of high quality (e.g., a "gold" standard) to identify outbreaks in the population that were missed by the surveillance system. Sensitivity for outbreak detection also could be assessed through capture-recapture techniques with two independent data sources (14).

The high costs associated with responding to false alarms and with delayed response to outbreaks demand efforts to quantify and limit the impact of both. As long as the likelihood of terrorism is extremely low, PVP will remain near zero, and a certain level of nonterrorism signals will be a necessary part of conducting surveillance for the detection of terrorism. Better performance can be achieved in one attribute (e.g., sensitivity) without a performance decrement in another (e.g., PVP) only by changing the system (e.g., adding a data type or applying a better detection algorithm); in contrast, improving sensitivity simply by lowering the cut-off for signaling an outbreak will reduce PVP. Sensitivity and PVP for these surveillance systems will ultimately be calibrated in each system to balance the secondary benefits (e.g., detection of naturally occurring outbreaks, disease case finding and management, reassurance of no outbreak during periods of heightened risk, and a stronger reporting and consultation relation between public health and clinical medicine) with the locally acceptable level of false alarms.
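The following is a minimal sketch (in Python; all tallies are hypothetical, chosen only to show the arithmetic) of computing these three attributes from day-level counts of signals and confirmed outbreaks, together with a Lincoln-Petersen capture-recapture estimate of the total number of outbreaks from two independent detection sources, as suggested above.

```python
# Hypothetical classification of 365 analysis days (illustrative counts only).
tp = 12   # days with a signal while a confirmed outbreak was ongoing
fp = 48   # days with a signal but no outbreak (false alarms)
fn = 8    # outbreak days on which the system stayed silent
tn = 297  # outbreak-free days with no signal

sensitivity = tp / (tp + fn)   # share of outbreak days that produced a signal
pvp = tp / (tp + fp)           # share of signals reflecting a real outbreak
pvn = tn / (tn + fn)           # share of silent days truly outbreak-free
print(f"sensitivity={sensitivity:.2f}  PVP={pvp:.2f}  PVN={pvn:.2f}")

# Capture-recapture (Lincoln-Petersen) estimate of total outbreaks when no
# gold standard is available: compare two independent detection sources.
n1 = 6   # outbreaks detected by the syndromic surveillance system
n2 = 7   # outbreaks detected by reportable-disease surveillance
m = 4    # outbreaks detected by both sources
estimated_total = n1 * n2 / m
print(f"estimated total outbreaks: {estimated_total:.1f}")
print(f"estimated outbreak-level sensitivity: {n1 / estimated_total:.2f}")
```

Note that sensitivity computed from day-level counts differs from the outbreak-level sensitivity defined above; the analysis unit and the confirmation standard must be fixed before such tallies can be compared across systems.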
# Data Quality

The validity of syndromic surveillance system data depends on data quality. Error-prone systems and data prone to inaccurate measurement can negatively affect detection of unusual trends. Although data quality might be a less critical problem when screening common, nonspecific indicators for statistical aberrations, quality should be evaluated and improved to the extent possible. Measuring data quality depends on a standard (e.g., medical record review or fabricated test data with values known to the evaluator). The updated guidelines for evaluating public health surveillance systems (1) describe data quality in additional detail.

• Representativeness: When case ascertainment within a population is incomplete (e.g., in a sentinel system or a statistically based sample), representativeness reflects whether a system accurately describes the distribution of cases by time, place, and person. Geographic representativeness is particularly important for detecting outbreaks of infectious diseases.

• Completeness of data: The frequency of unknown or blank responses to data items in the system can be used to measure the level of completeness. For systems that update data from previous transmissions, time should be factored into the measurement by indicating the percentage of records that are complete (i.e., all variables are captured for a record) on initial report and within an appropriate interval (e.g., 48 hours) of submission. Sites with substantial reporting delays can be flagged for reliability concerns and targeted for improvement. Incomplete data can require follow-up before analysis, with associated decreases in timeliness and increases in cost. When multiple data providers contribute to a common data store for statistical analysis, the percentage of reporting sources that submit their data on a routine interval (e.g., every 24 hours) conveys the completeness of the aggregate database for routine analysis. Evaluation of completeness should include a description of the problems experienced with manual data management (e.g., coding errors or loss of data) and with automated data management (e.g., programming errors or inappropriate filtering of data).
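A minimal sketch of the completeness metrics follows (in Python). The field names, records, and 48-hour interval are illustrative assumptions, and a single lag field stands in, for simplicity, for the comparison of initial and updated record snapshots described above.

```python
# Hypothetical incoming records: required fields (None = blank) plus the
# lag, in hours, from initial report to the latest update received.
records = [
    {"age": 34,   "zip": "30333", "syndrome": "GI",   "lag_hours": 6},
    {"age": None, "zip": "30345", "syndrome": "resp", "lag_hours": 30},
    {"age": 58,   "zip": None,    "syndrome": None,   "lag_hours": 70},
    {"age": 41,   "zip": "30307", "syndrome": "rash", "lag_hours": 12},
]
REQUIRED = ("age", "zip", "syndrome")

def is_complete(record):
    """A record is complete when every required variable is captured."""
    return all(record[field] is not None for field in REQUIRED)

complete = [r for r in records if is_complete(r)]
within_48h = [r for r in complete if r["lag_hours"] <= 48]

print(f"complete records: {len(complete) / len(records):.0%}")
print(f"complete within 48 h: {len(within_48h) / len(records):.0%}")
```

The same tallies, computed per reporting source, can identify the sites with substantial reporting delays that should be flagged for improvement.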
# C. System Experience

The performance attributes described in this section convey the experience that has accrued in using the system.

1. System usefulness. A surveillance system is useful for outbreak detection to the extent that it contributes to the early detection of outbreaks of public health significance and leads to effective intervention. An assessment of usefulness goes beyond detection to address the impact, or value added, of the system's application. Measurement of usefulness is inexact. As with validity, measurement will benefit from common terminology and standard data elements. In the interim, detailed efforts to describe and illustrate the consequences of early detection efforts will improve understanding of their usefulness.

Evaluation should begin with a review of the objectives of the system and should consider their priorities. To the extent possible, usefulness should be described by the disease prevention and control actions taken as a result of the analysis and interpretation of the data from the system. The impact of the surveillance system should be contrasted with other mechanisms available for outbreak detection. An assessment of usefulness should list the outbreaks detected and the role that different methods played in the identification of each one. Examples of how the system has been used to detect or track health problems other than outbreaks in the community should be included. The public health response to the outbreaks and health problems detected should be described, as should how data from new or modified surveillance systems support inferences about disease patterns that would not be possible without them.

Surveillance systems for early outbreak detection are sometimes justified by the reassurance they provide when aberrant patterns are not apparent during a heightened-risk period or when the incidence of cases declines during an outbreak. When community reassurance is claimed as a benefit of the surveillance system, reassurance should be defined and its measurement quantified (e.g., number of phone calls from the public on a health department hotline, successful press conferences, satisfaction of public health decision-makers, or resources to institutionalize the new surveillance system). A description should include who is reassured and of what they are reassured, and reassurance should be evaluated for validity by estimating the PVN.

2. Flexibility. The flexibility of a surveillance system refers to the system's ability to change as needs change. The adaptation to changing detection needs or operating conditions should occur with minimal additional time, personnel, or other resources. Flexibility generally improves the more data processing is handled centrally rather than distributed to individual data-providing facilities, because fewer system and operator behavior changes are needed. Flexibility should address the ability of the system to apply evolving data standards and code sets as reflected in Public Health Information Network (PHIN) standards (http://www.cdc.gov/phin). Flexibility includes the adaptability of the system to shift from outbreak detection to outbreak management. The flexibility of the system to meet changing detection needs can include the ability to add unique data to refine signal detection, to capture exposure and other data relevant to managing an outbreak, to add data providers to increase population coverage and detect or track low-frequency events, to modify case definitions (the aggregation of codes into syndrome groupings), to improve the detection algorithm to filter random variations in trends more efficiently, and to adjust the detection threshold. Flexibility also can be reflected by the ability of the system to detect and monitor naturally occurring outbreaks in the absence of terrorism. System flexibility is needed to balance the risk for an outbreak, the value of early intervention, and the resources for investigation as understanding of these factors changes.

3. System acceptability. As with the routine evaluation of public health surveillance systems (1), the acceptability of a surveillance system for early outbreak detection is reflected by the willingness of participants and stakeholders to contribute to the data collection and analysis. This concept includes the authority and willingness to share electronic health data and should include an assessment of the legal basis for the collection of prediagnosis data and the implications of privacy laws (e.g., the Health Insurance Portability and Accountability Act Privacy Rule) (15). All states have broad disease-reporting laws that require reporting of diseases of public health importance, and many of these laws appear compatible with the authority to receive syndromic surveillance data (16). The authority to require reporting of indicator data for persons who lack evidence of a reportable condition, and in the absence of an emergency, is less clear and needs to be verified by jurisdictions. Acceptability can vary over time as the threat level, perceived value of early detection, support for the methods of surveillance, and resources fluctuate. Acceptability of a system can be inferred from the extent of its adoption. Acceptability is reflected by the participation rate of potential reporting sources, by the completeness of data reporting, and by the timeliness of person-dependent steps in the system (e.g., manual data entry from emergency department logs, as distinguished from electronic data from the normal clinical workflow).

4. Portability. The portability of a surveillance system addresses how well the system could be duplicated in another setting. Adherence to PHIN standards can enhance portability by reducing variability in the application of information technology between sites.
Reliance on person-dependent steps, including judgment and action criteria (e.g., for analysis and interpretation), should be fully documented to improve system portability. Portability also is influenced by the simplicity of the system. Examples should be provided of the deployment of similar systems in other settings, and the experience of those efforts should be described. In the absence of examples, features of the system that might support or detract from portability should be described.

5. System stability. The stability of a surveillance system refers to its resilience to system changes (e.g., a change in coding from the International Classification of Diseases, Ninth Revision [ICD-9] to ICD-10). Stability can be demonstrated by the duration and consistent operation of the system. System stability is distinguished from the reliability of data elements within the system; the consistent representation of the condition under surveillance (reliability) is an aspect of data quality. Stability can be measured by the frequency of system outages or downtime for servicing during periods of need, including downtime of data providers, the frequency of personnel deficiencies from staff turnover, and budget constraints. Ongoing support by system designers and evolving software updates might improve system stability. Stability also can be reflected in the extent of control over costs and system changes that the sponsoring agency maintains.

6. System costs. Cost is a vital factor in assessing the relative value of surveillance for terrorism preparedness. Cost-effectiveness analyses and data modeling are needed under a range of scenarios to estimate the value of innovations in surveillance for outbreak detection and terrorism preparedness (17). Improved methods of measuring cost and impact are needed. Costs borne by data providers should be noted; however, the cost perspective should be that of the community (societal perspective) to account for costs of prevention and treatment borne by the community. Direct costs include the fees paid for software and data; the personnel salary and support expenses (e.g., training, equipment support, and travel); and the other resources needed to operate the system and produce information for public health decisions (e.g., office supplies, Internet and telephone lines, and other communication equipment).

Fixed costs for running the system should be differentiated from the variable costs of responding to system alarms. Variable costs include the cost of follow-up activities (e.g., for diagnosis, case management, or community interventions). The cost of responding to false alarms represents a variable but inherent inefficiency of an early detection system that should be accounted for in the evaluation. Similarly, variable costs include the financial and public health costs of missing outbreaks entirely or recognizing them late. Costs vary because the sensitivity and timeliness of the detection methods can be modified according to changes in tolerance for missing outbreaks and for responding to false alarms. Similarly, the threshold and methods for investigating system alarms can vary with the perceived risk and need to respond. Costs from public health responses to false alarms with traditional surveillance systems need to be measured in a comparable way when assessing the relative value of new surveillance methods.
Cost savings should be estimated by assessing the impact of prevention and control efforts (e.g., health-care costs and productivity losses averted). Questions to answer include the following:

• How many investigations were initiated as a result of these data?

# D. Conclusions and Recommendations for Use and Improvement of Systems for Early Outbreak Detection

The evaluation should be summarized to convey the strengths and weaknesses of the system under scrutiny. Summarizing and reporting evaluation findings should facilitate the comparison of systems for those making decisions about new or existing surveillance methods. These conclusions should be validated among stakeholders of the system and modified accordingly. Recommendations should address adoption, continuation, or modification of the surveillance system so that it can better achieve its intended purposes. Recommendations should be disseminated widely and actively interpreted for all appropriate audiences.

An Institute of Medicine study concluded that although innovative surveillance methods might be increasingly helpful in the detection and monitoring of outbreaks, a balance is needed between strengthening proven approaches (e.g., diagnosis of infectious illness and strengthening the liaison between clinical-care providers and health departments) and the exploration and evaluation of new approaches (17).

Guidance for the evaluation of surveillance systems for outbreak detection is ongoing. Many advances are needed in the understanding of system and outbreak characteristics to improve performance metrics. For example, research is needed to understand the personal health and clinical health-care behaviors that might serve as early indicators of priority diseases; analytic methods are needed to improve pattern recognition and to integrate multiple streams of data; a shared vocabulary is needed for describing outbreak conditions, managing text-based information, and supporting case definitions; and evaluation research is needed, including cost-effectiveness studies of different surveillance models for early detection, both in real-life comparisons and in simulated data environments, to characterize the size and nature of epidemics that can be detected through innovative surveillance approaches. Pending more robust measures of system performance, the goal of this framework is to improve public health surveillance systems for early outbreak detection by providing practical guidance for evaluation.

# Appendix. Operations Checklist

❑ Indicate the frequency of editing and updating the electronic file
❑ Indicate how incomplete records are handled in analysis and reports
❑ Describe how data archiving and disposal are managed
❑ Describe how new data sources or necessary changes in data sources are identified and incorporated in the system
# A. System-wide Issues

❑ Describe the political, administrative, and geographic context for the system
❑ Provide a process model that describes the data flow of the system

# D. Statistical Analysis

❑ Describe how the health outcome baseline is established:
  ❑ Describe the population under surveillance
  ❑ Describe the source, the criteria, and the methods for establishing the background frequencies used to detect aberrations
  ❑ Indicate how much baseline data are managed in the analysis database
❑ Describe the analytic methods used in automated analyses (i.e., aberration detection):
  ❑ Describe in mathematical and statistical detail the algorithms intended to signal an event requiring further investigation
  ❑ Describe adaptations in analytic methods to account for different outbreak patterns that might be anticipated in different data sources and types and for different outbreak scenarios
  ❑ Indicate how reporting delays are corrected for in the analysis
  ❑ Describe the method of adjusting results for potential confounding factors
  ❑ Describe how the system adapts over time and the empirical basis for modifications in the methods
❑ Describe the detection process:
  ❑ The frequency of data analysis
  ❑ How an alarm is generated
  ❑ Where the alarm goes
  ❑ The types of alarms generated by the system
  ❑ What is done to ensure that signals are not being missed
❑ Describe the report generation process:
  ❑ What routine reports are generated
  ❑ Whether data are presented graphically or in tables
  ❑ Whether data can be manipulated to generate a specific report

# E. Epidemiologic Analysis, Interpretation, and Investigation

❑ Describe the process for managing system alarms:
  ❑ Describe the special procedures instituted when an alarm is generated (e.g., review for data errors, in-depth manual analysis of the specific conditions within the syndrome category, manual epidemiologic analysis to identify subgroups responsible for an alarm, examination of data from other systems, and increased frequency of reporting from affected sites)
  ❑ Estimate the person-hours devoted to review and analysis each day and the interval at which data are analyzed
  ❑ Indicate the documented procedures for managing system alarms
  ❑ Indicate the communication method by which staff are alerted to alarms (e.g., whether they are paged at home or receive an automated e-mail)
  ❑ Indicate the expectations and schedule for staff to actively check the system, including nights and weekends
  ❑ Indicate the response options to an alarm and the factors that influence the choice (e.g., wait for an alarm in another system, initiate an onsite investigation, or alert clinicians to gather information)
  ❑ Describe the process for identifying cases for investigation when the data analyzed routinely are unidentified
  ❑ Describe how independent data types are integrated in the analysis for improved decision making
❑ Describe the rules, procedures, and tools for communication:
  ❑ Indicate the mechanisms used and the content guidance provided for sharing results with 1) reporting sources, 2) the response community, and 3) the public
  ❑ Describe how decisions are made for sending urgent communications and the methods for sending them
  ❑ Indicate whether receipt of a communication is acknowledged and how unacknowledged receipt is managed
  ❑ Indicate how often urgent communications and routine reports are sent
❑ Describe the protocol, if one exists, for conducting surveillance during outbreak management:
  ❑ Indicate how often data will be updated and analyzed
  ❑ Describe how the system can be modified or customized to meet special data needs
  ❑ Describe how the system will monitor the impact of prevention and control measures
❑ Describe how and how often system components are tested for operational readiness (e.g., "spiked" data or modeling exercises)
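As one way to address the checklist item calling for a mathematical and statistical description of signaling algorithms, the following is a minimal sketch (in Python) of an upper CUSUM detector, one of the general types of aberration-detection methods used in this setting. The baseline window, allowance k, and decision threshold h are illustrative tuning assumptions, not recommended values.

```python
import statistics

def cusum_signals(counts, baseline_window=28, k=0.5, h=4.0):
    """Flag days on which an upper CUSUM of standardized counts exceeds h.

    k is the allowance (in SD units) subtracted each day; h is the
    decision threshold. Both are tuning choices that trade sensitivity
    against PVP, as discussed under Statistical Assessment of Validity.
    """
    signals, s = [], 0.0
    for day in range(baseline_window, len(counts)):
        hist = counts[day - baseline_window:day]
        mu, sd = statistics.mean(hist), statistics.stdev(hist)
        if sd == 0:
            continue
        z = (counts[day] - mu) / sd
        s = max(0.0, s + z - k)   # accumulate only upward deviations
        if s > h:
            signals.append(day)
            s = 0.0               # reset after signaling
    return signals

# Illustrative daily counts with a modest sustained increase at the end.
counts = [25, 24, 27, 23, 26, 25, 24, 26] * 4 + [29, 31, 30, 33, 32]
print(cusum_signals(counts))
```

Raising h or k lowers sensitivity but improves PVP; documenting these parameters, and the empirical basis for changing them, satisfies several of the Statistical Analysis items above.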
Background: The American Thoracic Society, U.S. Centers for Disease Control and Prevention, European Respiratory Society, and Infectious Diseases Society of America jointly sponsored this new practice guideline on the treatment of drug-resistant tuberculosis (DR-TB). The document includes recommendations on the treatment of multidrug-resistant TB (MDR-TB) as well as isoniazid-resistant but rifampin-susceptible TB.

Methods: Published systematic reviews, meta-analyses, and a new individual patient data meta-analysis from 12,030 patients, in 50 studies, across 25 countries with confirmed pulmonary rifampin-resistant TB were used for this guideline. Meta-analytic approaches included propensity score matching to reduce confounding. Each recommendation was discussed by an expert committee screened for conflicts of interest, according to the Grading of Recommendations, Assessment, Development, and Evaluation (GRADE) methodology.

Results: Twenty-one Population, Intervention, Comparator, and Outcomes questions were addressed, generating 25 GRADE-based recommendations. Certainty in the evidence was judged to be very low, because the data came from observational studies with significant loss to follow-up and imbalance in background regimens between comparator groups. Good practices in the management of MDR-TB are described. On the basis of the evidence review, a clinical strategy tool for building a treatment regimen for MDR-TB is also provided.

Conclusions: New recommendations are made for the choice and number of drugs in a regimen, the duration of intensive and continuation phases, and the role of injectable drugs for MDR-TB. On the basis of these recommendations, an effective all-oral regimen for MDR-TB can be assembled. Recommendations are also provided on the role of surgery in treatment of MDR-TB and for treatment of contacts exposed to MDR-TB and treatment of isoniazid-resistant TB.

# Overview

Treatment of tuberculosis (TB), regardless of the results of drug susceptibility testing (DST), is focused on both curing the individual patient and minimizing the transmission of Mycobacterium tuberculosis to other persons. Thus, effective treatment of TB has benefits for both the individual patient and the community in which the patient resides. However, notable complexities need to be addressed to successfully treat disease resulting from drug-resistant M. tuberculosis isolates compared with treatment of drug-susceptible TB disease, including additional molecular and phenotypic diagnostic tests to determine drug susceptibility; the use of second-line drugs, which have toxicities that increase harms that must be balanced with their benefits; and prolonged treatment durations.

The new recommendations provided in this guideline are for the treatment of drug-resistant TB (DR-TB), including multidrug-resistant TB (MDR-TB) and isoniazid-resistant TB, and are intended to help providers identify the therapeutic options associated with improved outcomes (i.e., greater treatment success, fewer adverse events, and fewer deaths), in the context of individual patient values and preferences. Worthy of emphasis, the committee recommends that only drugs to which the patient's M. tuberculosis isolate has documented, or high likelihood of, susceptibility be included in an effective treatment regimen, noted as an ungraded good practice statement and consistent with ongoing stewardship efforts for the optimal use of antibiotics (1). Drugs known to be ineffective on the basis of in vitro growth-based or molecular DST should not be used.
The following alphabetically listed drugs and drug classes were considered for inclusion in treatment regimens: amoxicillin/clavulanate, bedaquiline, carbapenem with clavulanic acid, clofazimine, cycloserine, delamanid, ethambutol, ethionamide, fluoroquinolones, injectable agents, linezolid, macrolides, p-aminosalicylic acid, and pyrazinamide. Of note, pretomanid in combination with bedaquiline and linezolid was recently approved by the U.S. Food and Drug Administration (FDA) for the treatment of a specific limited population of adults with pulmonary extensively drug-resistant TB (XDR-TB) or treatment-intolerant or nonresponsive MDR-TB; however, the preparation and completion of these guidelines predated this approval (2). For each drug or drug class, the following Population, Intervention, Comparator, and Outcomes (PICO) question was addressed: In patients with MDR-TB, are outcomes safely improved when regimens include the following individual drugs or drug classes compared with regimens that do not include them?

The recommendations in this practice guideline were supported by scientific evidence, including the results of a propensity score (PS)-matched individual patient data meta-analysis (IPDMA) conducted using a database of more than 12,000 patient records from 25 countries in support of these guidelines (see APPENDIX A: METHODOLOGY in the online supplement) (3). We used the Grading of Recommendations, Assessment, Development, and Evaluation (GRADE) approach to appraise the quality of evidence and to formulate, write, and grade most recommendations (4, 5). Although published data on costs are noted in these guidelines, the committee did not conduct formal cost-effectiveness analyses of the treatments reviewed to determine a recommendation.

The treatment of DR-TB can be complicated and thus is necessarily preceded and accompanied by important components of care relating to access to TB experts, microbiological and molecular diagnosis, education, monitoring and follow-up, and global patient-centered strategies. The writing committee considered that these topics are crucial but do not require formal and extensive evidence appraisal in the context of the present guidelines. Following GRADE guidance, it was thus decided that these practices would be addressed in good practice statements. Below, six ungraded good practice statements as well as 25 GRADE-based recommendations addressing 21 PICO questions (Table 1) are listed. Questions were selected according to their importance to clinical practice, as determined by the guideline panel, expert advisors, and patient advocates. The implications of the strength of recommendation, conditional or strong, for patients, clinicians, and policy makers are described in APPENDIX A and shown in Table 2. Detailed and referenced information on treatment of MDR-TB is available below, including a summary of the evidence and the benefits, harms, and additional considerations of each practice or recommendation.

# Summary of Good Practices

For patients being evaluated and treated for any form of drug-resistant TB, the following six ungraded good practice statements are emphasized, as the writing committee had high confidence in their net benefit:

1. Consultation should be requested with a TB expert when there is suspicion or confirmation of DR-TB. In the United States, TB experts can be found through CDC-supported TB Centers of Excellence for Training, Education, and Medical Consultation (https://www.cdc.gov/tb/education/rtmc/default.htm), through local health department TB control programs (https://www.cdc.gov/tb/links/tboffices.htm), and through international MDR-TB expert groups such as the Global TB Network (6).
2. Molecular DSTs should be obtained for rapid detection of mutations associated with resistance. When rifampin resistance is detected, additional DST should be performed immediately for first-line drugs, fluoroquinolones, and aminoglycosides. Resistance to fluoroquinolones should be excluded whenever isoniazid resistance is found.

3. Regimens should include only drugs to which the patient's M. tuberculosis isolate has documented or high likelihood of susceptibility (hereafter defined as effective). Drugs known to be ineffective on the basis of in vitro growth-based or molecular resistance should NOT be used. This recommendation applies to all drugs and treatment regimens discussed in this practice guideline, unless reliable methods of testing susceptibility for a drug have yet to be developed.

Table 1. Questions Regarding the Treatment of Drug-Resistant Tuberculosis Selected by the Guideline Writing Committee

Number of effective drugs in a regimen for MDR-TB
1. Should patients with MDR-TB be prescribed five effective drugs vs. more or fewer agents during the intensive and continuation phases of treatment?

Duration of intensive and continuation phases of treatment for MDR-TB
2. Should patients with MDR-TB undergoing intensive-phase treatment be treated for ≥6 mo after culture conversion or <6 mo after culture conversion?
3. Should patients with MDR-TB undergoing continuation-phase treatment be treated for ≥18 mo after culture conversion or <18 mo after culture conversion?

Drugs and drug classes for the treatment of MDR-TB
4. In patients with MDR-TB, are outcomes safely improved when regimens include amoxicillin/clavulanate compared with regimens that do not include amoxicillin/clavulanate?
5. In patients with MDR-TB, are outcomes safely improved when regimens include bedaquiline compared with regimens that do not include bedaquiline?
6. In patients with MDR-TB, are outcomes safely improved when regimens include carbapenems with clavulanic acid compared with regimens that do not include them?
7. In patients with MDR-TB, are outcomes safely improved when regimens include clofazimine compared with regimens that do not include clofazimine?
8. In patients with MDR-TB, are outcomes safely improved when regimens include cycloserine compared with regimens that do not include cycloserine?
9. In patients with MDR-TB, are outcomes safely improved when regimens include delamanid compared with regimens that do not include delamanid?
10. In patients with MDR-TB, are outcomes safely improved when regimens include ethambutol compared with regimens that do not include ethambutol?
11. In patients with MDR-TB, are outcomes safely improved when regimens include ethionamide/prothionamide compared with regimens that do not include ethionamide/prothionamide?
12. In patients with MDR-TB, are outcomes safely improved when regimens include fluoroquinolones compared with regimens that do not include fluoroquinolones?
13. In patients with MDR-TB, are outcomes safely improved when regimens include an injectable compared with regimens that do not include an injectable?
14. In patients with MDR-TB, are outcomes safely improved when regimens include linezolid compared with regimens that do not include linezolid?
15. In patients with MDR-TB, are outcomes safely improved when regimens include macrolides compared with regimens that do not include macrolides?
16. In patients with MDR-TB, are outcomes safely improved when regimens include p-aminosalicylic acid compared with regimens that do not include p-aminosalicylic acid?
17. In patients with MDR-TB, are outcomes safely improved when regimens include pyrazinamide compared with regimens that do not include pyrazinamide?

Use of a standardized, shorter-course regimen of ≤12 mo for the treatment of MDR-TB
18. In patients with MDR-TB, does treatment with a standardized MDR-TB regimen for ≤12 mo lead to better outcomes than treatment with an MDR-TB regimen for 18-24 mo?

Treatment of isoniazid-resistant, rifampin-susceptible TB
19a. Should patients with isoniazid-resistant TB be treated with a regimen composed of a fluoroquinolone, rifampin, ethambutol, and pyrazinamide for 6 mo compared with rifampin, ethambutol, and pyrazinamide (without a fluoroquinolone) for 6 mo?
19b. Should patients with isoniazid-resistant TB be treated with a regimen composed of a fluoroquinolone, rifampin, and ethambutol for 6 mo and pyrazinamide for the first 2 mo compared with a regimen composed of a fluoroquinolone, rifampin, ethambutol, and pyrazinamide for 6 mo?

Surgery as adjunctive therapy for MDR-TB
20. Among patients with MDR/XDR-TB receiving antimicrobial therapy, does lung resection surgery (i.e., lobectomy or pneumonectomy) lead to better outcomes than no surgery?

Management of contacts exposed to an infectious patient with MDR-TB
21. Should contacts exposed to an infectious patient with MDR-TB be offered LTBI treatment vs. followed with observation alone?
4. Treatment response should be monitored clinically, radiographically, and bacteriologically, with cultures obtained at least monthly for pulmonary TB. When cultures remain positive after 3 months of treatment, susceptibility tests for drugs should be repeated. Weight and other measures of clinical response should be recorded monthly.

5. Patients should be educated and asked about adverse effects at each visit. Adverse effects should be investigated and ameliorated.

6. Patient-centered case management helps patients understand their diagnoses, understand and participate in their treatment, and discuss potential barriers to treatment. Patient-centered strategies and interventions should be used to minimize barriers to treatment.

# Summary of Recommendations

For the selection of an effective MDR-TB treatment regimen and duration of MDR-TB treatment:

1. We suggest using at least five drugs in the intensive phase of treatment and four drugs in the continuation phase of treatment (conditional recommendation, very low certainty in the evidence).

2. We suggest an intensive-phase duration of treatment of between 5 and 7 months after culture conversion (conditional recommendation, very low certainty in the evidence).

3. We suggest a total treatment duration of between 15 and 21 months after culture conversion (conditional recommendation, very low certainty in the evidence).

4. In patients with pre-XDR-TB and XDR-TB, which are both subsets of MDR-TB, we suggest a total treatment duration of between 15 and 24 months after culture conversion (conditional recommendation, very low certainty in the evidence).

For the selection of oral drugs for MDR-TB treatment (in order of strength of recommendation):

5. We recommend including a later-generation fluoroquinolone (levofloxacin or moxifloxacin) (strong recommendation, low certainty of evidence).
We recommend including bedaquiline (strong recommendation, very low certainty in the evidence). 7. We suggest including linezolid (conditional recommendation, very low certainty in the evidence). 8. We suggest including clofazimine (conditional recommendation, very low certainty of evidence). 9. We suggest including cycloserine (conditional recommendation, very low certainty in the evidence). 10. We suggest including ethambutol only when other more effective drugs cannot be assembled to achieve a total of five drugs in the regimen (conditional recommendation, very low certainty in the evidence). 11. We suggest including pyrazinamide in a regimen for treatment of patients with MDR-TB or with isoniazid-resistant TB, when the M. tuberculosis isolate has not been found resistant to pyrazinamide (conditional recommendation, very low certainty in the evidence). 12. The guideline panel was unable to make a clinical recommendation for or against delamanid because of the absence of data in the PS-matched IPDMA conducted for this practice guideline. We make a research recommendation for the conduct of randomized clinical trials and cohort studies evaluating the efficacy, safety, and tolerability of delamanid in combination with other oral agents. Until additional data are available, the guideline panel concurs with the conditional recommendation of the 2019 WHO Consolidated Guidelines on Drug-Resistant Tuberculosis Treatment that delamanid may be included in the treatment of patients with MDR/rifampin-resistant (RR)-TB aged >3 years on longer regimens (7).

Implications of strong and conditional recommendations:
For patients. Strong recommendation: the overwhelming majority of individuals in this situation would want the recommended course of action, and only a small minority would not. Conditional recommendation: the majority of individuals in this situation would want the suggested course of action, but a sizeable minority would not.
For clinicians. Strong recommendation: the overwhelming majority of individuals should receive the recommended course of action; adherence to this recommendation according to the guideline could be used as a quality criterion or performance indicator; formal decision aids are not likely to be needed to help individuals make decisions consistent with their values and preferences. Conditional recommendation: different choices will be appropriate for different patients, and you must help each patient arrive at a management decision consistent with her or his values and preferences; decision aids may be useful to help individuals make decisions consistent with their values and preferences; clinicians should expect to spend more time with patients when working toward a decision.
For policy makers. Strong recommendation: the recommendation can be adapted as policy in most situations, including for use as performance indicators.

For selected oral drugs previously included in regimens for the treatment of MDR-TB: 13. We recommend NOT including amoxicillin-clavulanate, with the exception of when the patient is receiving a carbapenem, wherein the inclusion of clavulanate is necessary (strong recommendation, very low certainty in the evidence). 14. We recommend NOT including the macrolides azithromycin and clarithromycin (strong recommendation, very low certainty in the evidence). 15. We suggest NOT including ethionamide/prothionamide if more effective drugs are available to construct a regimen with at least five effective drugs (conditional recommendation, very low certainty in the evidence). 16.
We suggest NOT including p-aminosalicylic acid in a regimen if more effective drugs are available to construct a regimen with at least five effective drugs (conditional recommendation, very low certainty in the evidence). For the selection of drugs administered through injection when needed to compose an effective treatment regimen for MDR-TB: 17. We suggest including amikacin or streptomycin when susceptibility to these drugs is confirmed (conditional recommendation, very low certainty of evidence). 18. We suggest including a carbapenem (always to be used with amoxicillin-clavulanic acid) (conditional recommendation, very low certainty of evidence). 19. We suggest NOT including kanamycin or capreomycin (conditional recommendation, very low certainty in the evidence). A summary of the recommendations on drugs, the certainty in the evidence, and the relative risks of success and death is provided in Figure 1. Additional details and other outcomes of interest are provided in the section on Drugs and Drug Classes. For the use of the WHO-recommended standardized shorter-course 9- to 12-month regimen for MDR-TB: 20. The shorter-course regimen is standardized with the use of kanamycin (which the committee recommends against using) and includes drugs for which there is documented or high likelihood of resistance (e.g., isoniazid, ethionamide, pyrazinamide). Although the STREAM (Standard Treatment Regimen of Anti-Tuberculosis Drugs for Patients with MDR-TB) Stage 1 randomized trial found the shorter-course regimen to be noninferior to longer injectable-containing regimens with respect to the primary efficacy outcome (8), the guideline committee cannot make a recommendation either for or against this standardized shorter-course regimen, compared with longer individualized all-oral regimens that can be composed in accordance with the recommendations in this practice guideline. We make a research recommendation for the conduct of randomized clinical trials evaluating the efficacy, safety, and tolerability of modified shorter-course regimens that include newer oral agents, exclude injectables, and include drugs for which susceptibility is documented or highly likely. For the role of surgery in the treatment of MDR-TB: 21. We suggest elective partial lung resection (e.g., lobectomy or wedge resection), rather than medical therapy alone, for adults with MDR-TB receiving antimicrobial-based therapy (conditional recommendation, very low certainty in the evidence). The writing committee believes this option would be beneficial for patients for whom clinical judgment, supported by bacteriological and radiographic data, suggests a strong risk of treatment failure or relapse with medical therapy alone. 22. We suggest medical therapy alone, rather than including elective total lung resection (pneumonectomy), for adults with MDR-TB receiving antimicrobial therapy (conditional recommendation, very low certainty in the evidence). For the treatment of isoniazid-resistant TB: 23. We suggest adding a later-generation fluoroquinolone to a 6-month regimen of daily rifampin, ethambutol, and pyrazinamide for patients with isoniazid-resistant TB (conditional recommendation, very low certainty in the evidence). 24.
In patients with isoniazid-resistant TB treated with a daily regimen of a later-generation fluoroquinolone, rifampin, ethambutol, and pyrazinamide, we suggest that the duration of pyrazinamide can be shortened to 2 months in selected situations (i.e., noncavitary and lower-burden disease or toxicity from pyrazinamide) (conditional recommendation, very low certainty in the evidence). For the management of contacts to patients with MDR-TB: 25. We suggest offering treatment for latent TB infection (LTBI) for contacts to patients with MDR-TB versus following with observation alone (conditional recommendation, very low certainty in the evidence). We suggest 6 to 12 months of treatment with a later-generation fluoroquinolone alone or with a second drug, on the basis of drug susceptibility of the source-case M. tuberculosis isolate. On the basis of evidence of increased toxicity, adverse events, and discontinuations, pyrazinamide should not be routinely used as the second drug. In this guideline, we provide new recommendations for treatment of MDR-TB and for treatment of isoniazid-resistant TB. On the basis of the evidence review conducted for this guideline, a clinical strategy tool for building a treatment regimen for MDR-TB is provided.

# Introduction
The American Thoracic Society (ATS), U.S. Centers for Disease Control and Prevention, European Respiratory Society (ERS), and Infectious Diseases Society of America (IDSA) have jointly developed these Drug-Resistant Tuberculosis guidelines. These new recommendations were developed from evidence appraised using GRADE methodology, which rates the certainty in the evidence (also known as the quality of evidence) and incorporates patient values and costs as well as judgments about trade-offs between benefits and harms (4,5). A carefully selected panel of experts, screened for conflicts of interest, including specialists in pulmonary medicine, infectious diseases, pediatrics, primary care, public health, epidemiology, economics, pharmacokinetics, microbiology, systematic review methodology, and patient advocacy, was assembled to assess the evidence supporting each recommendation. In contrast to prior analytic approaches, wherein systematic reviews of aggregate data were used for decision-making, a new PS-matched individual patient data meta-analysis (IPDMA) of 12,030 patients with confirmed pulmonary RR-TB, drawn from 50 studies in 25 countries, was conducted for this guideline (3). Given the paucity of high-quality randomized controlled trials conducted in DR-TB, individual data from observational studies represent the next best level of evidence for analyses. Nonetheless, the writing committee noted that observational data are prone to bias and confounding. The 21 PICO questions and associated recommendations are summarized below, all appraised using GRADE methodology (see APPENDIX B). Questions were selected according to their importance to clinical practice, as determined by the guideline panel, expert advisors, and patient advocates. The implications of the strength of recommendation, conditional or strong, for patients, clinicians, and policy makers are described in APPENDIX A. On the basis of the GRADE methodology framework, all recommendations in these guidelines are based on very low certainty in the evidence. The writing committee selected death, treatment success, and serious adverse effects as the endpoints of critical importance on which to generate recommendations.
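Conceptually, the PS-matched comparisons that underlie these recommendations pair each patient who received a given exposure (a drug or a duration) with an otherwise similar unexposed patient. The following is a minimal, hypothetical sketch of 1:1 nearest-neighbor propensity-score matching; the covariates, data, caliper, and library choices are illustrative assumptions and not the actual IPDMA code.

```python
# Hypothetical sketch of 1:1 nearest-neighbor propensity-score (PS) matching.
# All data are simulated; covariates loosely echo those named in the guideline.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 200
X = np.column_stack([
    rng.normal(45, 15, n),       # age
    rng.integers(0, 2, n),       # sex
    rng.integers(0, 2, n),       # HIV coinfection
    rng.integers(0, 2, n),       # cavitation on chest radiograph
])
treated = rng.integers(0, 2, n)  # e.g., received the drug of interest

# Fit the PS model: probability of treatment given baseline covariates.
ps = LogisticRegression().fit(X, treated).predict_proba(X)[:, 1]

# Greedy 1:1 nearest-neighbor matching on the PS, within a caliper.
caliper = 0.05
controls = {i for i in range(n) if treated[i] == 0}
pairs = []
for i in np.where(treated == 1)[0]:
    if not controls:
        break
    j = min(controls, key=lambda c: abs(ps[i] - ps[c]))
    if abs(ps[i] - ps[j]) <= caliper:
        pairs.append((i, j))
        controls.remove(j)

print(f"Matched {len(pairs)} treated/control pairs")
# Outcomes (success, death) would then be compared within the matched cohort.
```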
Our meta-analytic approaches included PS matching to reduce confounding (on the basis of individual-level covariates of age, sex, HIV coinfection, acid-fast bacilli smear results, cavitation on chest radiographs, history of TB treatment with first-line or second-line TB drugs, and number of possibly effective drugs in the regimen, among other variables), described in detail in the PS-matched IPDMA publication and in APPENDIX A (3). However, the risk of bias remained serious, because the average loss to follow-up across included studies was 10% to 20%. In addition, despite these efforts, there was a large residual imbalance in background regimens used in experimental and control groups. These guidelines are intended for settings in which treatment is individualized and where mycobacterial cultures, molecular (genotypic) and culture-based (phenotypic) DSTs, and radiographic facilities are available (9,10). Of note, published data on costs are referenced in these guidelines, but the committee did not conduct formal cost-effectiveness analyses of the treatments and interventions reviewed in relation to determining a recommendation. In these guidelines, MDR-TB is defined specifically as resistance to at least isoniazid and rifampin, the two most important first-line drugs. XDR-TB is a subset of MDR-TB with additional resistance to a fluoroquinolone and a second-line injectable agent. Because XDR-TB evolves from MDR-TB in two steps, the term "pre-XDR-TB" was introduced to identify MDR-TB with additional resistance to either one but not both of these classes of drugs. In these guidelines, we also provide recommendations for the treatment of isoniazid-resistant TB.

# Good Practices for Treating DR-TB
The fundamentals of TB care, regardless of the treatment selected, rest on ensuring timely diagnosis and initiation of appropriate therapy, with ongoing support and management to achieve successful treatment completion and cure. The responsibility for successful treatment of TB is placed primarily on the provider or program initiating therapy rather than on the patient (11). Nevertheless, a patient-centered approach, described more fully in the section on case management below, requires the involvement of the patient in decision-making. We recommend seeking consultation with an expert in TB when there is suspicion for or confirmation of DR-TB (ungraded good practice statement). In the United States, DR-TB experts can be found through CDC-supported TB Centers of Excellence for Training, Education, and Medical Consultation (cdc.gov/tb/education/rtmc/default.htm), through local health department TB Control Programs (cdc.gov/tb/links/tboffices.htm), and through international MDR-TB expert groups such as the British Thoracic Society MDR-TB Clinical Advisory Service (brit-thoracic.org.uk/) and the Global TB Network (6). Additional good practices in the treatment of patients in need of evaluation for DR-TB include the following:

# Diagnosing TB and Identification of Drug Resistance
The potential for drug resistance is considered in every patient. An aggressive effort is made to collect biological specimens for detection of M. tuberculosis and for drug resistance. A rapid test for at least rifampin resistance should ideally be done for every patient, but especially for those at risk of drug resistance.
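To make the molecular testing discussed in this section concrete, the toy sketch below maps detected resistance-associated genes to the drugs they typically compromise. The gene-to-drug table is deliberately simplified and illustrative; real-world interpretation (e.g., by CDC's MDDR service, described below) relies on curated catalogs of specific mutations, not gene names alone.

```python
# Toy illustration of interpreting molecular DST results: resistance-associated
# genes detected by sequencing are mapped to the drugs they compromise.
GENE_TO_DRUGS = {
    "rpoB": ["rifampin"],
    "katG": ["isoniazid"],
    "inhA promoter": ["isoniazid", "ethionamide"],
    "gyrA": ["fluoroquinolones"],
    "rrs": ["amikacin", "kanamycin", "capreomycin"],
    "pncA": ["pyrazinamide"],
    "embB": ["ethambutol"],
}

def inferred_resistance(detected_genes: list) -> set:
    """Return the drugs compromised by the detected mutated genes."""
    drugs = set()
    for gene in detected_genes:
        drugs.update(GENE_TO_DRUGS.get(gene, []))
    return drugs

# Example: mutations in rpoB and katG imply MDR-TB (rifampin + isoniazid),
# which per good practice should immediately trigger fluoroquinolone and
# aminoglycoside DST.
print(sorted(inferred_resistance(["rpoB", "katG"])))  # ['isoniazid', 'rifampin']
```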
The concern for possible resistance is heightened for patients from areas of the world with at least a moderate incidence of TB in general (>20/100,000) and a high primary MDR-TB prevalence (>2%) (12). Individuals who have or recently had close contact with a patient with infectious DR-TB, especially when the contact is a young child or has HIV infection, are at risk of developing DR-TB. Molecular methods, and more recently whole-genome sequencing (WGS), are increasingly available and can provide information on resistance to all first-line and many second-line drugs. Many public health laboratories provide molecular tests, and WGS is available in selected laboratories. These tests can be used to guide initial therapeutic decisions and contribute to population-level control of DR-TB. Providers should be familiar with phenotypic and genotypic laboratory services available in their locale. In the United States, CDC's Division of Tuberculosis Elimination Laboratory Branch provides testing services for both clinical specimens and isolates of M. tuberculosis (cdc.gov/tb/topic/laboratory/default.htm). CDC's Molecular Detection of Drug Resistance (MDDR) service rapidly identifies DR-TB. This service uses DNA sequencing for detection of mutations most frequently associated with resistance to first-line (e.g., rifampin, isoniazid, ethambutol, and pyrazinamide) as well as second-line drugs (the MDDR Service Request Form is available at https://www.cdc.gov/tb/topic/laboratory/MDDRsubmissionform.pdf). Recently published ATS/CDC/IDSA Official Practice Guidelines for the diagnosis of TB provide additional details on the optimal use of diagnostic tools and algorithms (13).

# Treatment and Monitoring of DR-TB
Regimens should only include drugs to which the patient's isolate has documented or high likelihood of susceptibility. Drugs known to be ineffective, on the basis of in vitro resistance or clinical and epidemiological information (i.e., resistance in the index case or high population prevalence of resistance), should not be used, even when resistance is present in only a small percentage of the mycobacteria in the population. If at least 1% of organisms in a solid media culture exhibit resistance to a drug (the current standard laboratory definition of drug resistance) (14), using that drug in a regimen will increase the risk for poor treatment outcomes, and the isolate will eventually exhibit 100% resistance to the drug. Drugs should be selected based on their efficacy and the likelihood that patients will be able to tolerate them without significant toxicity. Treatment response is monitored clinically (decrease in cough and systemic symptoms and increase in weight), radiographically, and bacteriologically (15-18). If sputum cultures remain positive after 3 months of treatment, or if there is bacteriological reversion from negative to positive at any time, DST should be repeated (11). Patients should be asked about the clinical response at each visit and weight recorded monthly. Monthly cultures help to identify early evidence of failure (19). Most persons have difficulty taking one or more of the drugs used to treat MDR-TB. Patients should be educated about adverse effects, and all adverse effects should be investigated and ameliorated. Some adverse effects are difficult to tolerate but do not put patients at risk for serious short- or long-term damage to organ systems and can be managed with symptom-specific ancillary medication and supportive care.
Nausea with vomiting is common and is not always an indication to discontinue therapy permanently. New-onset vomiting may indicate liver toxicity or, in children especially, increased intracranial pressure. If drug-induced liver toxicity (and increased intracranial pressure) is excluded, vomiting can be managed by changing the dosing schedule, giving medications with a small snack (noting that this may affect plasma concentrations of the drug), or premedicating adult patients with an antiemetic (noting that some antiemetics prolong the QT interval) before the dose. Patients may note fatigue or describe myalgia or arthralgia, but these symptoms are not typically treatment limiting. Although low-grade adverse effects can resolve gradually over time, all adverse effects must be addressed diligently, recognizing their negative impact on patients' quality of life. The Curry International TB Center's guides, Drug Resistant TB: Clinician's Survival Guide and the Nursing Guide for Managing Side Effects to Drug-Resistant TB Treatment, in addition to the World Health Organization (WHO) companion handbook, are good resources to assist in evaluation and management of patients (15,16,18).

# Infection Control and DR-TB
Three main strategies will reduce the transmission of DR-TB: rapid diagnosis, prompt appropriate treatment, and improved airborne infection control (11,13,20,21). Rapid molecular DST and conventional phenotypic culture-based DST are almost universally available in the United States, Europe, and low-incidence, high-resource countries (22). Targeted active case finding combined with rapid diagnostics leading to effective therapy is a strategy endorsed by WHO (13,23,24). Treatment delays have been associated with increased transmission. A systematic review and meta-analysis of patient-related risk factors for transmission of M. tuberculosis found that treatment initiation delays of 28 to 30 days were significantly associated with increased transmission to contacts (25). Effective therapy renders patients with TB, even those with DR-TB, rapidly noninfectious (24,26). The rapid reduction in infectiousness even in the setting of MDR-TB makes outpatient therapy possible, but directly observed therapy (DOT) applied through patient-centered approaches plays an important role in this regard (11,23). Infection control measures such as administrative and environmental controls, and personal protective equipment, listed in order of priority, are important for preventing the transmission of M. tuberculosis regardless of drug susceptibility. Every healthcare facility should have these measures in place per CDC guidelines (27). Furthermore, patients should be educated about the importance of infection control measures, such as the value of good ventilation, open windows, and the risks of exposure for children <5 years of age and immunocompromised individuals (21).

# Case Management for DR-TB
Case management is a collaborative process that entails engaging with patients; comprehensively assessing, monitoring, and attending to patients' physical, psychological, social, material, and informational needs; care planning; medication management; facilitating access to services; and functioning as patient advocate/agent (28-30). Commonly, case management is used in community and public health settings to coordinate services for patients with chronic and complex health conditions and to attain good-quality, cost-effective outcomes.
The practice of case management has long been considered an important component of care for patients with TB (20,31,32), and a patient-centered (or family-centered in case of children) approach is preferred (7,11,33). Patient-centered case management helps patients understand their diagnosis and treatment and participate in treatment selection and promotes communication about factors that matter to the patient (33-35). Importantly, a patient-centered approach includes discussions with the patient to identify potential barriers to care and the selection of strategies and interventions to address and minimize these barriers (7,11,33,35-37). Case management tools for drug-resistant TB include a drug-o-gram, which organizes clinical details in a format that aligns test results with treatment, as well as laboratory, bacteriology, and other toxicity monitoring flow sheets (a minimal schematic of a drug-o-gram appears at the end of this section). This format is a visible representation of pertinent clinical parameters being tracked (16,38). A monitoring checklist or care plan can also help the case manager ensure timely drug toxicity monitoring and provision of examinations required to assess a patient's response to treatment. Case management interventions that have demonstrated favorable outcomes include the provision of patient education and counseling related to diagnosis, treatment, and adherence, as well as the use of treatment adherence interventions alongside suitable patient-centered administration options. For example, home or community-based DOT has been shown to be preferred and associated with greater likelihood of treatment success compared with health facility-based DOT or self-administered therapy (11,39). Enhancing treatment completion through the use of patient-centered case management strategies, including DOT, also aims to reduce risk of acquisition of drug resistance, which aligns with international efforts in antibiotic stewardship (1). Recent WHO guidelines evaluated various adherence interventions and outcomes of TB treatment through a systematic review of clinical trials and observational studies, identifying data published through 2018 from 129 studies for quantitative analysis (39). Another meta-analysis of 22 randomized controlled trials of DOT and other interventions to improve adherence reported significant increases in cure with DOT (18%) and with patient education and counseling (16%). Compared with the comparison groups, loss to follow-up was 49% lower with DOT, 26% lower with financial incentives, and 13% lower with patient education and counseling. However, there was no significant reduction in mortality (40). The use of video-enabled electronic devices to conduct DOT is expanding rapidly (41,42). Electronic methods for DOT have the potential to improve TB treatment outcomes and extend public health support to patients with TB when face-to-face DOT is not feasible. Pilot studies have reported that patients find electronic methods for DOT to be both acceptable and more convenient than traditional in-person DOT. These studies also reported good adherence, fewer unobserved doses, and high satisfaction among study participants (43-46). Further evidence is needed to validate electronically observed therapy under more diverse conditions, with larger cross-sections of patient subgroups, including children, and to determine to what degree these methods impact treatment outcomes for patients with DR-TB (39,47-49).
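As a minimal illustration of the drug-o-gram concept described above, the sketch below prints one row per month of treatment, aligning the regimen with bacteriology and weight so response can be read at a glance. All field names, drug abbreviations, and values are invented for illustration; a real drug-o-gram also tracks doses, DST results, and toxicity labs.

```python
# Hypothetical drug-o-gram layout: one row per treatment month, aligning the
# regimen with smear, culture, and weight. All values are illustrative.
rows = [
    # month, drugs (abbreviated), smear, culture, weight_kg
    (0, "BDQ LZD LFX CFZ CS", "3+", "positive", 52.0),
    (1, "BDQ LZD LFX CFZ CS", "1+", "positive", 53.5),
    (2, "BDQ LZD LFX CFZ CS", "neg", "negative", 55.0),
    (3, "BDQ LZD LFX CFZ CS", "neg", "negative", 56.2),
]

header = f"{'Mo':>2}  {'Regimen':<20}{'Smear':<7}{'Culture':<10}{'Wt(kg)':>6}"
print(header)
print("-" * len(header))
for month, drugs, smear, culture, weight in rows:
    print(f"{month:>2}  {drugs:<20}{smear:<7}{culture:<10}{weight:>6.1f}")
```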
# Treatment of MDR-TB, Number of Drugs, and Duration of Treatment Phases
# Number of Drugs in the Regimen
Until the recent 2019 WHO Consolidated Guidelines on Drug-Resistant Tuberculosis Treatment (7), WHO had recommended at least five drugs in the intensive phase of treatment, defined by the use of a second-line injectable agent (23). The recent WHO change to recommending at least four effective drugs at initiation of treatment is graded as a conditional recommendation with very low certainty in the estimates of effect (7). Of note, both our guideline committee and the 2019 WHO guidelines promote the use of newer or repurposed oral agents with greater efficacy and deemphasize the use of injectable agents (7). Given these changes and that an injectable drug is no longer obligatory, the intensive phase can no longer be defined by the inclusion of injectables. In this guideline, we define the intensive phase to be the initiation phase of treatment with at least five effective drugs. These recommendations do not apply to the WHO-endorsed shorter-course 9- to 12-month (Bangladesh) regimen, which combines seven drugs for 4 months or until sputum smear conversion, whichever is the longer period, followed by four drugs in the continuation phase (23). Summary of the evidence. Procedures and methodology to assemble and rank the certainty in the evidence are reported in APPENDIX A, with evidence profiles for PICO questions reported in APPENDIX B. For the analysis of number of effective drugs, we counted drugs with published evidence from randomized trials of effectiveness (3). Hence, we counted ethambutol, pyrazinamide, all injectables and fluoroquinolones, ethionamide/prothionamide, cycloserine/terizidone, and p-aminosalicylic acid on the basis of DST showing susceptibility, and we counted clofazimine, linezolid, carbapenems, bedaquiline, and delamanid if susceptible or if no DST was available for that drug (this counting rule is illustrated in the sketch below). We did not count amoxicillin/clavulanate (in the absence of a carbapenem) and macrolides as effective drugs (see PICO Questions 4 and 15). The intensive phase was defined by the use of an injectable agent (other than imipenem-cilastatin or meropenem). Culture conversion was not an outcome measure. Instead, the analysis assessed the association of the number of possibly effective drugs included in the first 2 weeks of the intensive phase with the two final treatment outcomes: 1) "treatment success," which includes both cure and treatment completed; and 2) mortality. For the intensive phase of treatment, final treatment outcomes were compared between those treated with five or more (n = 2,527) effective drugs and those treated with three or four effective drugs (n = 5,923). Benefits. In our PS-matched IPDMA, using zero to two effective drugs as the reference value, treatment success was most likely with regimens for MDR-TB containing five effective drugs in the intensive phase (Table 3). Mortality was also significantly reduced for those taking five or six effective drugs. The number of drugs used in the continuation phase was also evaluated (Table 4). In the continuation phase, four drugs gave the greatest success (adjusted odds ratio [aOR], 2.3) and four or more drugs the greatest reduction in mortality.
Notes to Tables 3 and 4. Definition of abbreviations: aOR = adjusted odds ratio; CI = confidence interval. *World Health Organization definitions: fail = treatment terminated or need for permanent regimen change of at least two antituberculosis drugs because of lack of conversion by the end of the intensive phase; bacteriological reversion in the continuation phase after conversion to negative; evidence of additional acquired resistance to fluoroquinolones or second-line injectable drugs; or adverse drug reactions. †Relapse was defined as a positive bacteriological culture in the 12 months after treatment completion.
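The drug-counting rule described in the Summary of the evidence above can be expressed directly in code. The sketch below is an illustrative rendering of that rule, not an official tool; the drug-name spellings and the DST encoding ("S", "R", or None for not tested) are assumptions.

```python
# Sketch of the IPDMA rule for counting possibly effective drugs in a regimen.
from typing import Optional

DST_REQUIRED = {  # counted as effective only if DST shows susceptibility
    "ethambutol", "pyrazinamide", "amikacin", "streptomycin", "kanamycin",
    "capreomycin", "levofloxacin", "moxifloxacin", "ethionamide",
    "prothionamide", "cycloserine", "terizidone", "p-aminosalicylic acid",
}
DST_OPTIONAL = {  # counted if susceptible or if no DST result exists
    "clofazimine", "linezolid", "carbapenem", "bedaquiline", "delamanid",
}
NEVER_COUNTED = {"amoxicillin/clavulanate", "azithromycin", "clarithromycin"}

def effective_drug_count(regimen: dict) -> int:
    """regimen maps drug name -> DST result: 'S', 'R', or None (not tested)."""
    count = 0
    for drug, dst in regimen.items():
        if drug in NEVER_COUNTED:
            continue  # never counted as effective (PICO Questions 4 and 15)
        if drug in DST_REQUIRED and dst == "S":
            count += 1
        elif drug in DST_OPTIONAL and dst in ("S", None):
            count += 1
    return count

regimen = {"bedaquiline": None, "linezolid": None, "levofloxacin": "S",
           "clofazimine": None, "cycloserine": "S", "amoxicillin/clavulanate": None}
print(effective_drug_count(regimen))  # 5 -> meets the intensive-phase target
```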
PICO Question 1: Should patients with MDR-TB be prescribed five effective drugs versus more or fewer agents during the intensive and continuation phases of treatment?
Recommendation 1a: We suggest using at least five drugs in the intensive phase of treatment of MDR-TB (conditional recommendation, very low certainty of evidence).
Recommendation 1b: We suggest using at least four drugs in the continuation phase of treatment of MDR-TB (conditional recommendation, very low certainty of evidence).
Few significant differences were found in patients with pre-XDR-TB or XDR-TB. Subgroup analyses in pre-XDR-TB and XDR-TB did not suggest that a different number of effective drugs for the intensive and continuation phases was required to achieve good outcomes. Harms. The recommendation places a higher value on reduced morbidity and greater treatment success than on the as yet unmeasured avoidance of the summative or synergistic toxicity across drugs in the treatment regimen. Additional considerations. Each drug was counted as equivalent in effectiveness, representing the average effectiveness of the most widely used MDR-TB drugs, for the purpose of adjusting for the number of other effective drugs in the regimen. A regimen composition approach with suggested ranking of drugs is provided in the section BUILDING A TREATMENT REGIMEN FOR MDR-TB. The smallest number of drugs to achieve sputum conversion and ensure a cure without relapse minimizes problems with drug-drug interactions, reduces adverse effects, and is cost-effective. Injectable drugs are problematic in terms of route of administration, patient preference, significant adverse effects such as hearing loss or vertigo, and costs of administration and monitoring blood levels. On the basis of these limitations, combined with evidence of low efficacy for injectable agents, the use of injectables in defining the intensive phase of treatment is no longer appropriate. WHO has recategorized injectables to a lower grouping and recommended that injectables be used only when oral medicines from higher categories cannot be used (7). The South African National Department of Health also has endorsed replacing the injectable agents in the shorter-course MDR-TB treatment regimen in adults and children >12 years of age with bedaquiline, on the basis of a retrospective observational study of improved mortality with the addition of bedaquiline (50). In conjunction, experts from the Sentinel Project on Pediatric Drug-Resistant Tuberculosis have also called for replacing the injectable drug in children <12 years of age with an alternative effective drug (51). Conclusions. At least five drugs should be used in the intensive phase of treatment and four drugs in the continuation phase of treatment of MDR-TB (conditional recommendation, very low certainty of evidence).
Drugs of poor or doubtful efficacy should not be added to a regimen purely to ensure that the recommended number of drugs is obtained. Research needs. Randomized controlled trials with fewer but more effective and safer drugs should be undertaken (e.g., TB-PRACTECAL, ClinicalTrials.gov identifier NCT02589782). Randomized trials comparing regimens with and without injectable agents are underway (e.g., STREAM-stage 2, NCT02409290; Evaluating Newly Approved Drugs for Multidrug-resistant TB: endTB, NCT02754765). The effect of duration of individual drugs in the intensive phase on culture conversion as well as treatment cure without relapse should be explored (52).

# Duration of Intensive and Continuation Phases in Treating MDR-TB
PICO Question 2: Should patients with MDR-TB undergoing intensive-phase treatment be treated for >6 months after culture conversion or <6 months after culture conversion?
Recommendation 2: In patients with MDR-TB, we suggest an intensive-phase duration of treatment of between 5 and 7 months after culture conversion (conditional recommendation, very low certainty of evidence).
Treatment of both drug-susceptible and MDR-TB has typically been divided across two phases, with an initial "intensive" phase that contains more drugs than the subsequent "continuation" phase. For MDR-TB, this intensive phase has historically been characterized by the use of an aminoglycoside (amikacin or kanamycin) or a polypeptide (capreomycin) delivered parenterally (53). This approach has been intended to provide greater bactericidal activity during the time when the bacillary burden is highest, while reducing the number of drugs in the continuation phase to lower the risk of toxicity and intolerability caused by the multidrug regimen at a phase when the microbial burden has diminished. The 6-month duration of bedaquiline treatment, in part, was developed by analogy, and clinical trials are underway to determine whether this drug may replace aminoglycosides in terms of an initial intensive phase of treatment. The duration of the intensive phase has never been examined through a randomized, controlled clinical trial. Rather, it has been defined by a combination of practicality, the clinical experiences of MDR-TB experts, and, most recently, an IPDMA that resulted in the WHO practice guideline recommendations of 2011 that an intensive phase of at least 8 months' duration be used (54). This represented a departure from the prior recommendations in 2008 that the injectable agent should be continued for at least 6 months and at least 4 months after the patient first becomes and remains smear or culture negative (55). This is similar to current expert guidance provided by the Curry International TB Center Drug Resistant TB: Clinician's Survival Guide, namely "Intensive phase: recommend at least 6 months beyond culture conversion for the use of injectable agent" (16). The PS-matched IPDMA completed as part of the present guideline development process includes cohorts that were treated according to this range of durations of the intensive phase and represents the available evidence base for our recommendation. Our analyses and recommendations for the duration of intensive and continuation phases of therapy are anchored to the timing of culture conversion, as this approach factors in that treatment response may vary by patient, resistance patterns, and regimen composition and potency, among other factors.
Summary of the evidence. Procedures and methodology to assemble and rank the certainty in the evidence are reported in APPENDIX A, with evidence profiles for PICO questions reported in APPENDIX B. The association between treatment duration of the intensive phase (defined by the use of an injectable agent) after culture conversion and outcomes (success, failure, relapse) was examined among 4,122 (49.3%) subjects from 29 studies who had sufficient information to be included in this analysis (3). This total reflected exclusion of studies without a reported intensive-phase duration (n = 942 patients) or time to culture conversion (n = 1,880 patients), as well as 955 (11.4%) patients in the included studies but for whom intensive-phase duration or time to conversion was not known. Patients taking the 9- to 12-month standardized shorter-course regimen were excluded (see PICO Question 18). An additional 464 (5.0%) patients were excluded for conversion after the end of the intensive phase or after 14.3 months, or for an intensive phase exceeding 24.3 months. Among the 4,122 patients included, 3,303 (80.1%) had MDR-TB without resistance to fluoroquinolone or second-line injectable. The remainder had pre-XDR or XDR-TB. To reflect meaningful variations in the duration of the intensive regimen after conversion, stratified analyses were conducted: strata were 0 to 1.0 months, 1.01 to 3.0 months, 3.01 to 5.0 months, 5.01 to 7.0 months, and 7.1 to 15.0 months (see Table 5). The largest proportion (28.6%) received treatment for 5.01 to 7 (mean, 5.9) months after conversion. Time to conversion was roughly inversely proportional to duration of intensive phase, suggesting that the tendency in the included studies was to treat for a total intensive-phase duration of 5 to 8 months. Benefits. The patients who received treatment for 5.01 to 7 months after conversion experienced a 3.3-fold increase in adjusted odds of treatment success (95% confidence interval [CI], 2.1-5.2) compared with the reference group (0-1.0 months of intensive phase after conversion). Some variability was observed in distribution of intensive-phase duration after culture conversion by baseline characteristics: <10.0% of patients who received longer durations of intensive phase were HIV-coinfected, compared with >15% in the 0.0- to 3.0-month interval groups. The number of (effective) drugs was greater with longer durations than with shorter ones. Patients with MDR-TB who received 5.01 to 7.0 months of intensive-phase treatment after conversion had the highest odds of success in adjusted PS-matched analysis; although other, shorter intervals showed improvement over the reference, <1.0 month of postconversion treatment, the benefits of 5 to 7 months of treatment after conversion were more pronounced (Table 6). The effect estimates were also better in the MDR-only (aOR, 2.0; 95% CI, 1.1-3.4) and pre-XDR (aOR, 1.5; 95% CI, 0.6-3.7) subgroups. Harms. Detailed data on duration-related toxicities were not available through our PS-matched IPDMA. On the basis of the clinical observations and experiences of the MDR-TB experts on the guideline committee, the intensive phase should not be prolonged beyond that considered necessary to optimize treatment outcomes. Additional considerations. No information is available in the datasets on how duration was selected.
Duration selected may reflect interim response to treatment or may reflect a planned duration that is not conditioned on treatment response; the latter is suggested by the observation that duration was inversely related to time to conversion except in the 7.01- to 15-month interval. Analyses were adjusted for possible baseline confounders, relying on PS matching, to reduce the bias introduced. However, the possibility of unmeasured confounding by indication, in particular by time-varying characteristics (such as toxicity, microbiological, radiographic, or clinical results), cannot be ruled out. Such confounding by indication would likely result in an underestimate of the benefit of a longer intensive phase after conversion: patients in the 7.01- to 15-month interval had slower conversion and worse outcomes than those in the 5.01- to 7.0-month interval. Last, the optimal total duration of treatment for MDR-TB using injectable-free, all-oral regimens cannot be determined from these datasets, but clinical trials evaluating newer drugs and all-oral regimens for MDR-TB are underway (56). Conclusions. We suggest an intensive-phase treatment of between 5 and 7 months after culture conversion in patients with MDR-TB (conditional recommendation, very low certainty in the evidence). The clinical context, extent of disease, and response to treatment, among other factors, will play a role in choosing a final duration from within the recommended range. There were limited data in pre-XDR-TB and XDR-TB. Subgroup analyses in pre-XDR and XDR-TB did not suggest that a different duration for the intensive phase would be required to achieve good outcomes. This intensive-phase duration recommendation does not apply to the 9- to 12-month shorter-course regimen addressed in PICO 18. Research needs. Further research is urgently needed to define the optimal durations of treatment using newer drugs and all-oral regimens. Research defining an optimal duration that prevents death or loss to follow-up is also needed. In the short term, further research on datasets that include longitudinal observations that can inform choice of regimen duration would be important to reducing the uncertainty around the present recommendations. In the long term, randomized controlled trials evaluating various intensive phases and durations are needed. Research on stratified medicine approaches that use measures of burden of disease and consider subgroups in the selection of the optimal duration may allow for greater precision and better inform decision-making around the balance of benefits and harms of durations for individual patients (57). Summary of the evidence. Procedures and methodology to assemble and rank the certainty in the evidence are reported in APPENDIX A, with evidence profiles for PICO questions reported in APPENDIX B. The association between total duration of treatment after culture conversion and outcomes (success, failure, relapse) was examined in 4,691 observations from 32 studies (3). We excluded 18 studies (n = 2,615) that did not report time to culture conversion and 798 patients from included studies because endpoints were missing. Last, 259 patients were excluded for outlier values in total duration or culture conversion; patients whose outcome was loss to follow-up or death were excluded. Among the 4,691 included, 3,034 had MDR-TB. The remainder had pre-XDR or XDR-TB.
To reflect meaningful variations in the duration of treatment after conversion, stratified analyses were conducted (strata are shown in Table 7). Time to conversion was inversely related to duration of treatment after conversion (Table 7). Benefits. Among all resistance groups, durations of 15.01 to 21 months after conversion outperformed the reference (12.01-15 mo); intervals of 15.01 to 18 (aOR, 2.1; 95% CI, 1.4-3.1), 18.01 to 21 (aOR, 1.6; 95% CI, 1.1-2.3), and 21.01 to 24 (aOR, 1.2; 95% CI, 0.9-1.8) months were indistinguishable from each other (Table 8). In the MDR-only subgroup, the duration of 15.01 to 18.0 months was associated with similar to slightly improved outcomes compared with the reference (aOR, 1.8; 95% CI, 1.0-3.0) (data not shown). Results for the pre-XDR-TB subgroup supported the notion that longer intervals were associated with success; effects were statistically significant, with confidence intervals that were wide and overlapping across durations of 15.01 to 24 months (data not shown).
PICO Question 3: Should patients with MDR-TB undergoing continuation-phase treatment be treated for >18 months after culture conversion or <18 months after culture conversion?
Recommendation 3a: In patients with MDR-TB, we suggest a total treatment duration of between 15 and 21 months after culture conversion (conditional recommendation, very low certainty in the evidence).
Recommendation 3b: In patients with pre-XDR-TB and XDR-TB, we suggest a total treatment duration of between 15 and 24 months after culture conversion (conditional recommendation, very low certainty in the evidence).
Harms. Extending treatment longer than necessary can engender additional toxicity and costs to patients and health systems. For this reason, recommendations are for the minimum duration found to have a significant treatment advantage, and different recommendations are made for the MDR-only and XDR/pre-XDR subgroups. Additional considerations. No information on how duration was selected is available from the data. Duration selection may reflect interim response to treatment or may reflect a planned duration that is not conditioned on treatment response; the latter is suggested by the observation that duration was inversely related to time to conversion except in the longest interval. Analyses were adjusted for all possible baseline confounders, relying on PS matching, to minimize the bias introduced. Nevertheless, the possibility of unmeasured confounding by indication, in particular by time-varying characteristics, cannot be ruled out. Such confounding by indication would likely result in an underestimate of the benefit of a longer treatment duration after conversion. No significant difference in outcomes was observed across durations for the XDR-TB subgroup, likely because of small numbers and the aforementioned potential bias. Conclusions. We suggest a total treatment duration of at least 15 and up to 21 months after culture conversion for patients with MDR-TB only. In patients with pre-XDR-TB and XDR-TB, we suggest a total treatment duration of between 15 and 24 months after culture conversion (conditional recommendations, very low certainty in the evidence). The clinical context and extent of disease will be relevant for choosing a final duration from within the recommended range.
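To show how Recommendations 2 through 4 translate into calendar terms for an individual patient, the sketch below computes the recommended treatment windows from a documented culture-conversion date. The conversion date, the resistance flag, and the month-arithmetic helper are illustrative assumptions; the clinical context chooses the final duration from within each range.

```python
# Sketch: recommended duration windows keyed to the culture-conversion date.
from datetime import date

def add_months(d: date, months: int) -> date:
    m = d.month - 1 + months
    # Cap the day at 28 to avoid invalid dates (e.g., Jan 31 + 1 month).
    return date(d.year + m // 12, m % 12 + 1, min(d.day, 28))

culture_conversion = date(2020, 4, 15)  # hypothetical documented conversion
xdr_or_pre_xdr = False                  # pre-XDR/XDR widens the total window

# Intensive phase: 5-7 months after conversion (Recommendation 2).
intensive_end = (add_months(culture_conversion, 5),
                 add_months(culture_conversion, 7))
# Total duration: 15-21 months for MDR-TB, 15-24 for pre-XDR/XDR-TB
# (Recommendations 3 and 4).
lo, hi = (15, 24) if xdr_or_pre_xdr else (15, 21)
total_end = (add_months(culture_conversion, lo),
             add_months(culture_conversion, hi))

print(f"Intensive phase ends between {intensive_end[0]} and {intensive_end[1]}")
print(f"Total treatment ends between {total_end[0]} and {total_end[1]}")
```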
Research needs. The optimal total duration of regimens composed of only oral drugs urgently needs further research. Defining optimal total durations that prevent death or loss to follow-up also needs study. In the short term, further research on datasets that include longitudinal observations that can inform choice of regimen duration would be very important to reducing the uncertainty around the present recommendation. In the long term, randomized controlled trials comparing different intensive phases and durations are needed. As with the duration of the intensive phase, research is needed on stratified medicine approaches that consider measures of burden of disease and subgroups in the selection of the optimal total duration, allowing for greater precision in selecting and individualizing regimen duration (57).

# Drugs and Drug Classes
These guidelines are intended for settings in which treatment is individualized based on DST results and clinical and epidemiological factors. Individualized treatment regimens should only include drugs to which the patient's isolate has documented or high likelihood of susceptibility. Treatment regimens should favor medications that are associated with improved outcomes, as identified in our PS-matched IPDMA, that limit toxicity, and that incorporate patient preferences. The following alphabetically listed drugs and drug classes were considered for inclusion in treatment regimens: amoxicillin/clavulanate, bedaquiline, carbapenem with clavulanic acid, clofazimine, cycloserine, delamanid, ethambutol, ethionamide, fluoroquinolones, injectable agents, linezolid, macrolides, p-aminosalicylic acid, and pyrazinamide. For each drug or drug class, the following PICO was addressed: In patients with MDR-TB, are outcomes safely improved when regimens include the following individual drugs or drug classes compared with regimens that do not include them?

# Amoxicillin/Clavulanate
Amoxicillin-clavulanate, consisting of the β-lactam antibiotic amoxicillin and the β-lactamase inhibitor potassium clavulanate, is considered safe and effective for numerous bacterial infections. Although M. tuberculosis has an impenetrable cell wall and produces a β-lactamase (58), amoxicillin-clavulanate has been used to treat TB. It is viewed as a "salvage" agent when few other options are available (63,64), and the combination has been reported to be efficacious, safe, and tolerable when added to linezolid and other drugs (65-69). See PICO Question 6 for the evaluation of carbapenems with amoxicillin-clavulanate.
PICO Question 4-Amoxicillin/Clavulanate: In patients with MDR-TB, are outcomes safely improved when regimens include amoxicillin/clavulanate compared with regimens that do not include amoxicillin/clavulanate?
Recommendation 4: We recommend NOT including amoxicillin-clavulanate in a treatment regimen for patients with MDR-TB, with the exception of when the patient is receiving a carbapenem, wherein the inclusion of clavulanate is necessary (strong recommendation, very low certainty in the evidence). Our recommendation against the use of amoxicillin-clavulanate (except to provide clavulanate when using a carbapenem) in MDR-TB treatment is strong despite the evidence being judged to be of very low certainty because we viewed the increased mortality and decreased likelihood of treatment success associated with the use of this drug as having a notably unfavorable balance of benefits to potential harms.
Summary of the evidence. Procedures and methodology to assemble and rank the certainty in the evidence are reported in APPENDIX A, with evidence profiles for PICO questions reported in APPENDIX B.
Our PS-matched IPDMA showed that patients who received amoxicillin-clavulanate were more likely to have been treated with second-line drugs and to be resistant to fluoroquinolones or any second-line injectable (3). They were more likely to have received a later-generation fluoroquinolone (72%), capreomycin (58%), linezolid (25%), and bedaquiline (8%). In adjusted analyses, patients who received amoxicillin-clavulanate were less likely to have treatment success (aOR, 0.6; 95% CI, 0.5-0.8) and more likely to die (aOR, 1.7; 95% CI, 1.3-2.1) compared with patients who did not receive amoxicillin-clavulanate. Benefits. The addition of amoxicillin-clavulanate (without coadministration with carbapenems) to a regimen for MDR-TB does not appear to provide benefit. Harms. Patients who received amoxicillin-clavulanate were less likely to have treatment success and more likely to die than patients who did not receive amoxicillin-clavulanate. Data on adverse effects were not collected systematically across studies in our PS-matched IPDMA or in most trials. A published systematic review identified diarrhea and candidiasis as key adverse effects associated with amoxicillin-clavulanic acid use (70). Additional considerations. Clavulanic acid is only available as a coformulation with amoxicillin. Therefore, amoxicillin-clavulanate must be given whenever carbapenems are included in an MDR-TB regimen (see PICO Question 6 on carbapenems with clavulanate). Conclusions. Patients who received amoxicillin-clavulanate were less likely to achieve treatment success and more likely to die than patients who did not receive amoxicillin-clavulanate. These data suggest amoxicillin-clavulanate should not be used in MDR-TB treatment, except to provide clavulanate when using a carbapenem (see PICO Question 6 on carbapenems with clavulanate). Our recommendation against the use of amoxicillin-clavulanate (except to provide clavulanate when using a carbapenem) in MDR-TB treatment is strong despite the evidence being judged to be of very low certainty because we viewed the increased mortality and decreased likelihood of treatment success associated with the use of this drug as having a notably unfavorable balance of benefits to potential harms. Research needs. The development of a clavulanic acid formulation without amoxicillin, for use in combination with carbapenems, would be helpful and would avoid unnecessary toxicities from amoxicillin as well as adhere to international efforts in promoting antimicrobial stewardship (1).

# Bedaquiline
Bedaquiline, a diarylquinoline approved by FDA in 2013, is the first drug with a novel mechanism of action against M. tuberculosis to be approved by FDA in >40 years (71,72). Bedaquiline is bactericidal to nonreplicating and actively replicating mycobacteria, through ATP synthase inhibition, and has bactericidal and sterilizing activity in the murine model of TB infection (73). No cross-resistance has been found between bedaquiline and the following: isoniazid, rifampin, ethambutol, pyrazinamide, streptomycin, amikacin, or moxifloxacin. There have been a few reports of cross-resistance with clofazimine (74,75).
PICO Question 5-Bedaquiline: In patients with MDR-TB, are outcomes safely improved when regimens include bedaquiline compared with regimens that do not include bedaquiline?
Recommendation 5: We recommend including bedaquiline in a regimen for the treatment of patients with MDR-TB (strong recommendation, very low certainty in the evidence).
Our recommendation for the use of bedaquiline is strong despite very low certainty in the evidence because we viewed the significant reduction in mortality, improved treatment success, and relatively few adverse effects associated with MDR-TB treatment including bedaquiline (compared with no bedaquiline) as having a particularly favorable balance of benefits over harms. Bedaquiline is customarily used as part of combination therapy (minimum four-drug therapy) for adults aged >18 years with a diagnosis of pulmonary MDR-TB when an effective treatment regimen cannot otherwise be provided (e.g., mycobacterial isolates show a complicated drug resistance pattern, drug intolerance, or drug-drug interactions) (76). The recommended dose of bedaquiline for the treatment of pulmonary MDR-TB in adults is 400 mg administered orally once daily for 2 weeks, followed by 200 mg administered orally three times weekly, for an entire treatment duration of 24 weeks (72,76-78). Bedaquiline has recently been identified as the key drug in assembling an all-oral, injectable-free drug regimen in South Africa (79). Summary of the evidence. Procedures and methodology to assemble and rank the certainty in the evidence are reported in APPENDIX A, with evidence profiles for PICO questions reported in APPENDIX B. Our PS-matched IPDMA included 411 patients who received bedaquiline-containing regimens, for whom it was assumed that there was no bedaquiline resistance, and 10,932 patients who did not receive bedaquiline (3). Patients who received bedaquiline-containing regimens were more likely to have cavitary disease (82% vs. 62%), to have been treated with second-line drugs (34% vs. 14%), to be infected with an organism resistant to fluoroquinolones (57% vs. 21%) or to any second-line injectable (58% vs. 23%), and to have XDR-TB (29% vs. 13%). They were also more likely to have received a later-generation fluoroquinolone (69% vs. 54%), linezolid (64% vs. 4%), or clofazimine (48% vs. 4%). Treatment success (cure and completion) was slightly greater (70% vs. 60%; P = 0.001) with bedaquiline, whereas failure/relapse (6% vs. 9%), death (10% vs. 15%), and loss to follow-up (14% vs. 16%) were less frequent in PS-adjusted analyses. Benefits. Our PS-matched IPDMA showed that patients treated with bedaquiline-containing regimens were more likely to have treatment success (aOR, 2.0; 95% CI, 1.4-2.9), less likely to experience failure/relapse, and less likely to die (aOR, 0.4; 95% CI, 0.3-0.5; absolute risk reduction, 7.2%). Results were similar when only patients from high-income countries treated with bedaquiline-containing regimens were included and when comparing only patients with XDR-TB who were and were not treated with bedaquiline. A recent large program-based observational multinational study confirmed a high treatment success rate (76.9%) with bedaquiline-containing regimens, with a low proportion (5.8%) of interruptions owing to adverse events (80). In our IPDMA, using PS-matched pairs comparing the effects of bedaquiline to those of clofazimine, statistically significant differences for success versus failure/relapse favoring bedaquiline use were observed (aOR, 2.1; 95% CI, 1.1-4.1) (using >170 PS-matched pairs), but statistically significant findings were not observed in mortality or when restricting analyses to high-income countries.
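As a back-of-envelope reading of the mortality benefit reported above (absolute risk reduction of 7.2% for death), the corresponding number needed to treat is

\[
\mathrm{NNT} = \frac{1}{\mathrm{ARR}} = \frac{1}{0.072} \approx 14
\]

that is, on these matched observational data, roughly one death is averted for every 14 patients treated with a bedaquiline-containing regimen. This arithmetic is ours, offered for orientation only; it is not an estimate reported by the IPDMA itself.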
PS-matched pairs analyses comparing the effects of various combinations of drugs used with bedaquiline all noted improved outcomes. When comparisons of effects of bedaquiline and linezolid with those of no bedaquiline or linezolid were performed, the combination of bedaquiline and linezolid was associated with an aOR of 2.7 (95% CI, 1.5-4.9) for success versus failure/relapse and with an aOR of 0.3 (95% CI, 0.2-0.4) for death versus success/failure/relapse. Similarly, when PS-matched pair analyses comparing the effects of bedaquiline and clofazimine to those of no bedaquiline or clofazimine were performed, the combination of bedaquiline and clofazimine was associated with an aOR of 5.0 (95% CI, 2.4-10.6) for success versus failure/relapse, and with an aOR of 0.3 (95% CI, 0.2-0.5) for death versus success/failure/relapse. Of note, patients to whom bedaquiline was administered tended to have more cavitary disease or to have pre-XDR or XDR-TB. Prescription of bedaquiline was associated with greater success and less death than clofazimine, but the greatest success and least mortality were found when bedaquiline was administered together with linezolid or clofazimine. Finally, a recent retrospective routine-care observational study in South Africa showed that a bedaquiline-containing regimen was associated with significantly lower mortality (128 deaths among 1,016 patients receiving bedaquiline compared with 4,612 deaths among 18,601 patients on the standard regimens). Bedaquiline was associated with a reduction in the risk of all-cause mortality for patients with MDR- or RR-TB (hazard ratio [HR], 0.35; 95% CI, 0.28-0.46) and XDR-TB (HR, 0.26; 95% CI, 0.18-0.38) compared with standard regimens (50). Harms. In a review of cases treated with bedaquiline, only 44 of 1,266 (3.5%) cases with information available discontinued bedaquiline because of adverse events. Only 8 of 875 (0.9%) discontinued bedaquiline because of QT interval prolongation (two restarted the drug after resolution of the acute episode without further problems) (81). Additional considerations. When bedaquiline is included in the regimen, most experts obtain ECGs after the initial 2 weeks of therapy and then at monthly intervals to monitor for QT interval prolongation. Serum electrolytes, including calcium, magnesium, and potassium, are also monitored. The 2013 CDC guidelines on the use of bedaquiline note that there is insufficient evidence to provide guidance on the use of bedaquiline in children but that its use can be considered on a case-by-case basis given the high mortality and limited treatment options for MDR-TB (71). More recently, adolescents >10 years old and weighing >34 kg have been safely treated off-label with the recommended adult dose of bedaquiline (82). The Sentinel Project on Pediatric Drug-Resistant Tuberculosis has recommended that children >12 years of age and >31 kg body weight receive bedaquiline at the same dose as adults, and children >6 years with 16 to 30 kg body weight could receive half the adult bedaquiline dose for the same indications (7,17,83).
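The labeled adult dosing and the expert monitoring practice described above can be laid out as a simple schedule. The sketch below is illustrative only: the start date, the Mon/Wed/Fri choice for the three-times-weekly doses, and the approximation of "monthly" ECGs as every 4 weeks are all assumptions, not prescriptive guidance.

```python
# Sketch: bedaquiline 400 mg once daily for 2 weeks, then 200 mg three times
# weekly through week 24, with ECG checks at week 2 and then ~monthly.
from datetime import date, timedelta

start = date(2020, 1, 6)  # hypothetical start date (a Monday)
doses = []
for day in range(14):                      # weeks 1-2: 400 mg once daily
    doses.append((start + timedelta(days=day), 400))
for day in range(14, 24 * 7):              # weeks 3-24: 200 mg Mon/Wed/Fri
    d = start + timedelta(days=day)
    if d.weekday() in (0, 2, 4):           # Monday, Wednesday, Friday
        doses.append((d, 200))

ecg_checks = [start + timedelta(weeks=2)] + \
             [start + timedelta(weeks=w) for w in range(6, 25, 4)]

print(f"{len(doses)} doses, total {sum(mg for _, mg in doses) / 1000:.1f} g")
print("ECG checks:", ", ".join(d.isoformat() for d in ecg_checks))
```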
Research needs. Further research is needed to elucidate the potential synergy of bedaquiline with other agents. Studies underway are evaluating the use of bedaquiline together with linezolid, clofazimine, or nitroimidazoles (i.e., pretomanid and delamanid). Research is also needed on the safety, tolerability, and efficacy of bedaquiline-based shorter-course regimens, as well as on the use of bedaquiline for durations >24 weeks, an approach currently considered when effective treatment cannot otherwise be provided (71,84). Finally, research is urgently needed on the risk factors and any interventions (e.g., the selection of companion drugs) that influence acquisition of bedaquiline resistance.

# Carbapenems with Clavulanic Acid

The combination of carbapenems and clavulanate, a β-lactamase inhibitor, has been shown to have in vitro bactericidal activity (65,85,86). Because clavulanate is not available by itself, the combination drug amoxicillin-clavulanate must be given with the carbapenems. Carbapenems have been used primarily for MDR- and XDR-TB, and a recent systematic review found that carbapenems are safe and likely to be effective (65). Carbapenem drugs have recently been included in WHO guidelines for the treatment of DR-TB (23). Summary of the evidence. Procedures and methodology to assemble and rank the certainty in the evidence are reported in APPENDIX A, with evidence profiles for PICO questions reported in APPENDIX B. Our PS-matched IPDMA compared death, treatment success, and culture conversion among 169 individuals who received carbapenems to 9,535 individuals from centers that did not use any of the drugs previously classified by WHO as "Group 5" drugs (3). In addition, we considered adverse events reported in a recent review (65), an IPDMA, and a systematic review that summarize the available evidence on carbapenems from five primary studies (all observational studies) (23, 65-67, 87-92). The reviewed evidence showed that there was concomitant use of linezolid and bedaquiline in large proportions of the patient population receiving carbapenems, suggestive of confounding, which was considered in the review of the efficacy results. Benefits. In our PS-matched IPDMA, inclusion of carbapenems in the treatment regimen had no effect on the risk of death (aOR, 1.0; 95% CI, 0.5-1.7) or culture conversion (aOR, 2.3; 95% CI, 0.8-6.9). However, treatment success was more likely among patients treated with regimens including a carbapenem compared with those treated with regimens not including a carbapenem (aOR, 4.0; 95% CI, 1.7-9.1). Harms. In our meta-analysis, the rate of treatment discontinuation due to an adverse event was lower among patients treated with regimens including a carbapenem than among those treated with regimens not including a carbapenem, although this difference was not statistically significant (relative risk, 0.82; 95% CI, 0.23-2.94). In the published literature, treatment discontinuation due to adverse events has been reported in 0 to 3% of patients, with minor adverse events occurring in 5% to 6% of patients (65). Additional considerations.
Of note, clavulanic acid is only available as a coformulation with amoxicillin. Therefore, whenever carbapenems are included in an MDR-TB regimen, amoxicillin-clavulanate must be given with each daily dose of carbapenem, dosed to provide 125 mg of clavulanate per carbapenem dose. DST is currently not available for carbapenems. When used, imipenem-cilastatin/clavulanate or meropenem/clavulanate have been administered intravenously in a hospital setting, with multiple injections required daily (67). There is no clear evidence of whether imipenem-cilastatin or meropenem is more efficacious (66). Ertapenem belongs to the same family (65,93); because of its longer half-life and the possibility of administering it once a day intramuscularly or intravenously, ertapenem may be useful when a patient treated with meropenem or imipenem-cilastatin intravenously during hospitalization is discharged and needs to continue carbapenem-based treatment as an outpatient (92). Long-term intravenous administration requires long-term venous access through an indwelling catheter, which carries risks of infection, thrombosis, and thromboembolism. Data on the use of a carbapenem combined with amoxicillin-clavulanate in children are lacking; only two adolescents are included in one review (65). However, efficacy has been shown in adults, and both carbapenems and amoxicillin-clavulanate have been used safely in children for other purposes (although there are no long-term treatment safety data in young children); therefore, this combination could be used in children with MDR-TB when there is no other option to build an effective regimen. Conclusions. In our PS-matched IPDMA, inclusion of carbapenems with clavulanic acid was associated with an increase in treatment success when compared with the control group not receiving carbapenems with clavulanic acid. Carbapenems with clavulanic acid can be included in a regimen to achieve a total of five effective drugs for the treatment of patients with MDR-TB. Research needs. Randomized, controlled clinical trials that confirm the role of carbapenems used for different durations of time within different regimens are needed. A comparative evaluation of the safety, tolerability, and efficacy of the different agents is also necessary. Because of the high cost of these drugs, economic analyses will also be useful (65). The development of oral formulations of carbapenems is underway, which, if proven safe and effective, would significantly enhance the feasibility of using these agents. Additional advances in rapid DST for carbapenems would enhance potential scale-up of use for this class of drugs.

# Clofazimine

Clofazimine, a fat-soluble riminophenazine dye, has shown in vitro and in vivo activity as a sterilizing drug to treat MDR-TB (94-96). Clofazimine has been used as a leprosy drug and was first developed in the 1950s because of in vitro and in vivo activity against M. tuberculosis (94,95,97). Although the exact mechanism of action is not known, it is a prodrug that appears to have both antimycobacterial and antiinflammatory properties (98). The published clinical evidence on the safety and efficacy of clofazimine for treatment of TB is modest (99). However, interest in the drug has increased since WHO endorsed the new shorter-course regimen (23,100), which includes clofazimine (101-108). Summary of the evidence.
Procedures and methodology to assemble and rank the certainty in the evidence are reported in APPENDIX A, with evidence profiles for PICO questions reported in APPENDIX B. Our PS-matched IPDMA compared death, treatment success, and culture conversion among 639 individuals who received clofazimine to 7,398 individuals from centers that did not use any of the drugs previously classified by WHO as "Group 5" drugs (3). In addition, we considered adverse events reported in a published systematic review of nine observational studies (six MDR-TB and three XDR-TB) including 599 patients treated with clofazimine (427 from a single cohort study from Bangladesh) (95) and in two studies published after the systematic review: a trial at six specialty hospitals in China that randomized 105 patients to 21 months of treatment with an optimized background regimen with or without clofazimine (100 mg/d) (94), and a retrospective cohort study from Brazil that compared outcomes in patients with MDR-TB treated with clofazimine (100 mg/d)-containing regimens versus pyrazinamide-containing regimens (109). Benefits. Our PS-matched IPDMA showed that treatment success was more likely with regimens containing clofazimine compared with regimens that did not (aOR, 1.5; 95% CI, 1.1-2.1). Addition of clofazimine was associated with an aOR of 0.8 (95% CI, 0.6-1.0) for the outcome of death and with an aOR of 1.1 (95% CI, 0.6-1.8) for culture conversion within 6 months. As noted in PICO Question 5 on bedaquiline, when PS-matched pair analyses were performed comparing the effects of bedaquiline and clofazimine to those of no bedaquiline or clofazimine, the combination of bedaquiline and clofazimine was associated with an aOR of 5.0 (95% CI, 2.4-10.6) for success versus failure/relapse and with an aOR of 0.3 (95% CI, 0.2-0.5) for death versus success/failure/relapse. Harms. In our PS-matched IPDMA, 2 of 81 (2.5%) patients treated with regimens including clofazimine had treatment discontinued because of adverse events. Brownish skin pigmentation has been observed in 75% to 100%, ichthyosis in 8% to 20%, gastrointestinal intolerance in 40% to 50%, and neurological disturbances in up to 13% of patients (94,95,109). Clofazimine was judged to have small to moderate desirable effects (on treatment success, mortality, and culture conversion) and small undesirable effects (low risk of serious adverse events requiring treatment discontinuation). However, there is important uncertainty about how patients value skin discoloration. Some patients perceive skin discoloration as significant, which may become a key factor in the acceptability of this drug in some settings. Recent publications have noted the potential QT interval-prolonging effects of clofazimine when used in combination with bedaquiline and delamanid (110-112). Additional considerations. Access is a limitation to expanded use of clofazimine for treatment of MDR-TB, especially in the United States and Europe, where clofazimine lacks a TB indication. In the United States, clofazimine is currently only available under an investigational new drug protocol administered by FDA, which can be a burdensome procedure. Significant improvements in the mechanisms for accessing clofazimine are necessary for more expanded use of this drug. Quality-assured clofazimine is available through the Global Drug Facility, although currently in limited quantities.
Although our PS-matched IPDMA had limited pediatric data, the efficacy of clofazimine in children is believed by experts to be the same as in adults. A recent IPDMA in children with MDR-TB did not show a benefit of using clofazimine, which might be related to selection of cases and the very small number (23 of 641) receiving clofazimine (113). Moreover, children are included in the WHO shorter-course regimen for MDR-TB treatment, which includes clofazimine, albeit on the basis of limited data. Dosing clofazimine in children is challenging because of the lack of child-friendly formulations and the lack of pharmacokinetic data. The currently recommended dose for clofazimine in children varies from 2 to 5 mg/kg daily. Skin discoloration is common; in some patients it improves during treatment, and in others it resolves rapidly after discontinuation of treatment. Ichthyosis is less common and improves with intensive efforts at applying lubricants during treatment. As noted, QT interval prolongation may occur and is especially important to monitor by ECG if clofazimine is given together with other QT interval-prolonging medications (68, 83, 110-112, 114, 115). Clofazimine may have cross-resistance with bedaquiline, which may need to be considered when building a regimen for MDR-TB (75). Conclusions. In our PS-matched IPDMA, inclusion of clofazimine was associated with an increase in treatment success when compared with the control group not receiving clofazimine. Clofazimine can be included in a regimen to achieve a total of five effective drugs for the treatment of patients with MDR-TB. Research needs. The value of using loading doses and the optimal dosing for clofazimine require additional research. Pharmacokinetic data in adults and children are needed.

# Cycloserine

Cycloserine is an oral bacteriostatic drug that has been part of the backbone regimen for MDR-TB treatment in the past. Cycloserine is a broad-spectrum antibiotic that inhibits cell wall synthesis (116,117). Terizidone is a structural analog that is a combination of two cycloserine molecules; the two appear to be used interchangeably by experts, although terizidone is currently not available in the United States (118). Although some advantages of cycloserine include the absence of cross-resistance to other drugs and reasonable gastrointestinal tolerability, a significant drawback relates to psychological side effects, which occasionally necessitate discontinuation of the drug (16). Summary of the evidence. Previous studies showed an increase in the likelihood of treatment success when cycloserine was included in the MDR-TB regimen (marginally statistically significant) (96). Procedures and methodology to assemble and rank the certainty in the evidence are reported in APPENDIX A, with evidence profiles for PICO questions reported in APPENDIX B. Benefits. Our PS-matched IPDMA showed that inclusion of cycloserine was associated with an increase in treatment success (aOR, 1.5; 95% CI, 1.4-1.7) and a decrease in mortality (aOR, 0.6; 95% CI, 0.5-0.6) when compared with the control group. A published IPDMA of children with MDR-TB showed that, for successful treatment, cycloserine as a single drug had an aOR of 1.7 (95% CI, 0.9-3.0; 641 children with MDR-TB, of whom 356 received cycloserine/terizidone) (113).
Limited pharmacokinetic data are available on cycloserine (or terizidone) in children, but a recent study of 25 children (only 5 to <12 yr of age) receiving a median of 14.3 mg/kg (range, 10.0-18.0 mg/kg) showed that the maximum concentration of the drug achieved in the plasma after dose administration was similar to the adult target maximum concentration of 20 to 35 mg/L (119). The recommended dose for children of 10 to 20 mg/kg/d seems adequate. An advantage of cycloserine in children, as in adults, with miliary and/or central nervous system (CNS) TB is that it penetrates the CNS well. Cycloserine may interfere with the pharmacokinetics and absorption of isoniazid and ethionamide/prothionamide; therefore, it may be advisable to give it separately from these drugs if used together in the same regimen (117). Harms. Cycloserine has CNS adverse effects, which are reported to occur in about 20% to 30% of adults (120). A meta-analysis of published articles identified adverse events in 201 of 2,164 patients across all studies (118). The average weighted proportion of patients receiving cycloserine who discontinued treatment owing to adverse effects, pooled across the studies, was 9.1% (95% CI, 6.4-11.7%). The average weighted proportion of patients with psychiatric adverse effects was 5.7% (95% CI, 3.7-7.6%) (118). Limited data exist on cycloserine adverse effects in children. In older pediatric studies, no adverse effects were reported with the use of cycloserine (121-123). In a systematic review of outcomes of MDR-TB in children, 6 of 182 (3.3%) children had adverse effects attributed to cycloserine, which included depression, anxiety, hallucinations, transitory psychosis, and blurred vision (124). Additional considerations. Lower-than-recommended serum cycloserine concentrations and delayed absorption have been reported (125). Some experts obtain peak concentrations within the first 1 to 2 weeks of therapy and continue to monitor serially. Because of specific technical challenges related to cycloserine DST, poor accuracy of testing in liquid media, and poor intrinsic reproducibility of results, few laboratories perform DST for cycloserine (16). Conclusions. In our PS-matched IPDMA, inclusion of cycloserine was associated with a decrease in mortality and an increase in treatment success when compared with the control group not receiving cycloserine. Cycloserine can be included in a regimen to achieve a total of five effective drugs for the treatment of patients with MDR-TB. Research needs. Studies are needed to determine the effect of cycloserine on the absorption of isoniazid and ethionamide.

PICO Question 8—Cycloserine: In patients with MDR-TB, are outcomes safely improved when regimens include cycloserine compared with regimens that do not include cycloserine? Recommendation 8: We suggest including cycloserine in a regimen for treatment of patients with MDR-TB (conditional recommendation, very low certainty in the evidence).

# Delamanid

Delamanid is a nitro-dihydro-imidazooxazole derivative that was approved for the treatment of MDR-TB by the European Medicines Agency in 2013 but has not yet received FDA approval. In 2014, WHO issued interim policy guidance on the use of delamanid for the treatment of MDR-TB on the basis of phase 2b clinical trial data. The interim policy guidance stated that "delamanid may be added to an MDR-TB regimen in adult patients with pulmonary TB conditional upon: i) careful selection of patients likely to benefit; ii) patient informed consent; iii) adherence to WHO recommendations in designing a longer MDR-TB regimen; iv) close monitoring of clinical treatment response; and v) active TB drug-safety monitoring and management (aDSM)" (126).
WHO issued an updated position statement on the use of delamanid for MDR-TB in 2018 on the basis of final results from the phase 3 randomized controlled trial, Trial 213 (127). Summary of the evidence. Delamanid data were not available as part of the PS-matched IPDMA completed for this guideline development process; however, the 2019 WHO Consolidated Guidelines on Drug-Resistant Tuberculosis Treatment, wherein individual-level data on delamanid were obtained for analyses, provide practice guidance on the use of delamanid for treatment of MDR-TB (7,128). Several recent reports, clinical trials, and cohort studies report delamanid-containing regimens achieving success rates of 77% to 84% in the absence of major adverse events (128-131). In the United States, through a compassionate use program, delamanid has recently been successfully accessed and used in the care of patients with MDR-TB (132). Regarding use of delamanid in children, only a small number of children (>6 yr of age) have had documented access to the drug, also by compassionate use (133,134). On the basis of review of the data available in the IPDMA conducted by WHO, delamanid has been included in "Group C" of the WHO guidelines, corresponding to drugs that can be added "to complete the regimen and when medicines from Groups A and B cannot be used." WHO has recommended that "delamanid may be included in the treatment of MDR/RR-TB patients aged 3 years or more on longer regimens" (7,135). The guideline writing committee concurs with the updated 2019 WHO guidance (7).

PICO Question 9—Delamanid: In patients with MDR-TB, are outcomes safely improved when regimens include delamanid compared with regimens that do not include delamanid? Recommendation 9: The guideline panel was unable to make a clinical recommendation for or against delamanid because of the absence of data in the PS-matched IPDMA conducted for this practice guideline. We make a research recommendation for the conduct of randomized clinical trials and cohort studies evaluating the efficacy, safety, and tolerability of delamanid in combination with other oral agents. Until additional data are available, the guideline panel concurs with the conditional recommendation of the 2019 WHO Consolidated Guidelines on Drug-Resistant Tuberculosis Treatment that delamanid may be included in the treatment of patients with MDR/RR-TB aged >3 years on longer regimens (7).

As noted for other drugs discussed in this practice guideline, QT interval prolongation may occur with delamanid use, and routine ECG monitoring is recommended. A recent evaluation of the cardiac safety of delamanid and bedaquiline given together as part of multidrug therapy for MDR-TB concluded that the combined effect on the corrected QT interval (QTc) using the Fridericia formula (QTcF) is clinically modest and no more than additive (ClinicalTrials.gov Identifier: NCT02583048), with mean change in QTcF from baseline of 11.9 milliseconds (95.1% CI, 7.4-16.5 ms) in the bedaquiline arm, 8.6 milliseconds (95.1% CI, 4.0-13.2 ms) in the delamanid arm, and 20.7 milliseconds (95.1% CI, 16.1-25.4 ms) in the combined bedaquiline and delamanid arm (136).
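For readers unfamiliar with the Fridericia correction referenced above, a standard statement of the formula (general background; it is not spelled out in the cited trial report) is:

$$\mathrm{QTcF} = \frac{\mathrm{QT}}{\mathrm{RR}^{1/3}},$$

where QT is the measured interval and RR is the interval between successive R waves in seconds. At a heart rate of 60 beats per minute, RR = 1 second and QTcF equals the uncorrected QT; at faster heart rates the cube-root term shrinks RR below 1 and the correction lengthens the reported interval.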
Conclusions. The guideline panel was unable to make a recommendation for or against delamanid because of the absence of data in the IPDMA conducted for this practice guideline. In 2018, WHO updated its IPDMA with additional data and recommended that delamanid be included in the third tier of drugs, Group C, and that the drug be used in the treatment of patients with MDR/RR-TB aged >3 years on longer regimens. The guideline writing committee agrees with the updated 2019 WHO Consolidated Guidelines on Drug-Resistant Tuberculosis Treatment that delamanid may be included in the treatment of patients with MDR/RR-TB aged >3 years on longer regimens (7). We make a research recommendation for the conduct of randomized clinical trials and cohort studies evaluating the efficacy, safety, and tolerability of all-oral, shortened regimens inclusive of delamanid in combination with other oral agents. Research needs. The companion drugs with which to combine delamanid for optimal efficacy, safety, and tolerability remain uncertain. Randomized clinical trials and cohort studies evaluating all-oral, shortened regimens inclusive of delamanid in combination with other oral agents are urgently needed. The endTB trial is a phase 3 multicountry randomized clinical trial evaluating the efficacy and safety of five new, all-oral, shortened regimens including combinations of bedaquiline and delamanid (ClinicalTrials.gov Identifier: NCT02754765). Recent cohort data are also emerging on the use of delamanid and bedaquiline in combination, which to date has been generally reserved for severe cases with extensive resistance and when other options are not feasible (137-140). The Nix-TB (A Phase 3 Study Assessing the Safety and Efficacy of Bedaquiline Plus PA-824 Plus Linezolid in Subjects with Drug-Resistant Pulmonary Tuberculosis) trial (ClinicalTrials.gov Identifier: NCT02333799) evaluated an all-oral 6-month regimen comprising bedaquiline, pretomanid (a member of the nitroimidazooxazine class of compounds), and linezolid for the treatment of either XDR-TB or treatment-intolerant or nonresponsive MDR-TB. FDA granted priority review of the new drug application for pretomanid in March 2019, the Antimicrobial Drugs Advisory Committee discussed pretomanid in June 2019, and in August 2019 FDA approved pretomanid in combination with bedaquiline and linezolid for the treatment of a specific limited population of adults with pulmonary XDR-TB or treatment-intolerant or nonresponsive MDR-TB (2,141). Given FDA approval, additional trials that include diverse patient populations will be essential to understand optimal use of the regimen, in addition to head-to-head comparisons of pretomanid with delamanid to determine whether these drugs can be used interchangeably.

# Ethambutol

Ethambutol is an ethylenediamine that inhibits arabinosyl transferases, which contribute to M. tuberculosis cell wall synthesis (142). The drug is included in standard regimens for treatment of drug-susceptible TB and is commonly used in regimens for MDR-TB (11,16,23). The published evidence available on its safety and efficacy is modest, and its use as part of the first-line drug-susceptible TB regimen rests largely on its ability to prevent the emergence of resistance to the other drugs in the regimen rather than on its own sterilizing activity (143). Ethambutol has been demonstrated to have modest sterilizing activity as part of combination regimens when used against ethambutol-susceptible isolates for treatment of drug-susceptible TB.
The drug is not effective when used against ethambutol-resistant isolates, and such use is not recommended (54,96). Summary of the evidence. Procedures and methodology to assemble and rank the certainty in the evidence are reported in APPENDIX A, with evidence profiles for PICO questions reported in APPENDIX B. Our PS-matched IPDMA compared outcomes among 3,002 individuals with isolates susceptible to ethambutol who received the drug with outcomes among 667 individuals with isolates susceptible to ethambutol who did not receive the drug (3). Receipt of ethambutol was associated with an aOR of 0.9 (95% CI, 0.7-1.1) for the outcome of treatment success and an aOR of 1.0 (95% CI, 0.9-1.2) for the outcome of death. However, the groups were not balanced with regard to other effective drugs; in particular, the patients not receiving ethambutol were more likely to receive linezolid. Thus, the ability to detect an effect of ethambutol might have been obscured. In a previous IPDMA, before linezolid and bedaquiline use were common, administration of ethambutol among patients with susceptible isolates was associated with an aOR of 1.7 (95% CI, 1.2-2.4) for cure/completion versus failure/relapse and an aOR of 1.6 (95% CI, 1.1-2.4) for cure/completion versus failure/relapse/death, compared with patients receiving ethambutol whose isolates were resistant in vitro (144). Benefits. Our PS-matched IPDMA showed that treatment success was not significantly more likely with regimens containing ethambutol. However, a number of earlier studies indicate that ethambutol is associated with increased success when used to treat patients whose isolates are susceptible to the drug (143). Moreover, these earlier studies determined that the effectiveness of ethambutol is directly related to dose, and that a dose of 25 mg/kg was more effective than a dose of 15 mg/kg (143). Dosages were not available for analysis in our PS-matched IPDMA. Finally, our PS-matched IPDMA did not address a key attribute of ethambutol, the prevention of the emergence of resistance, which is a substantial concern among patients with MDR-TB (145).

PICO Question 10—Ethambutol: In patients with MDR-TB, are outcomes safely improved when regimens include ethambutol compared with regimens that do not include ethambutol? Recommendation 10: We suggest including ethambutol in a regimen for treatment of patients with MDR-TB only when more effective drugs cannot be assembled to achieve a total of five effective drugs in the regimen (conditional recommendation, very low certainty in the evidence).

Harms. In a recent review of the tolerability of TB drugs, ethambutol was associated with serious adverse events in 6 of 1,325 (0.5%) patients (23). This is consistent with previous reports of ethambutol toxicity, in which optic neuropathy (including optic neuritis and retrobulbar neuritis) was attributed to ethambutol, manifesting as symptoms of decreased visual acuity, scotomata, color blindness, or visual defects. These toxic effects are dose dependent and, although serious, are generally reversible if recognized promptly and the drug is discontinued or the dose reduced. Additional considerations. When ethambutol is used, some experts recommend using the higher dose of 25 mg/kg, which is associated with increased efficacy but also slightly greater ocular toxicity.
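As a concrete illustration of the weight-based dose comparison under discussion (the 60-kg body weight is hypothetical; the mg/kg doses are those cited in the text):

$$60\ \mathrm{kg} \times 25\ \mathrm{mg/kg} = 1{,}500\ \mathrm{mg/d}, \qquad 60\ \mathrm{kg} \times 15\ \mathrm{mg/kg} = 900\ \mathrm{mg/d},$$

so moving from the lower to the higher dosing strategy raises the daily dose by two-thirds for the same patient, which is why the trade-off against dose-dependent ocular toxicity matters.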
Other experts recommend using 15 to 20 mg/kg, counting the drug predominantly as providing protection against the acquisition of additional resistance. All patients receiving ethambutol as part of an MDR-TB treatment regimen should be monitored monthly for signs of ocular toxicity, particularly visual impairment, and if this is detected, ethambutol should be discontinued. If optic neuritis occurs in patients taking both ethambutol and linezolid, both drugs must be stopped. Many patients may be rechallenged successfully with linezolid once vision normalizes, but rechallenge with ethambutol is not recommended. The efficacy of ethambutol is not expected to be different in children compared with adults. A dose of 25 mg/kg is recommended by experts for children with MDR-TB. Two reviews on the safety of ethambutol relating to vision have shown low risk of ocular toxicity (143,146). However, other drugs, such as linezolid, may also cause optic neuritis, and this should be taken into consideration when treating MDR-TB in children, especially as screening for visual loss is difficult in young children. Conclusions. We suggest including ethambutol in a regimen for treatment of patients with MDR-TB only when more effective drugs cannot be used to achieve a total of five effective drugs in the regimen (conditional recommendation, very low certainty in the evidence). The committee was divided on the value of recommending ethambutol, given that the drug was not associated with any benefit in terms of treatment success or mortality. Concerns about the comparability of IPDMA patient groups treated with and without ethambutol were noted. Unlike ethionamide/prothionamide, for which there are substantial undesirable effects, ethambutol had small undesirable effects (low risk of serious adverse events requiring treatment discontinuation). Finally, our PS-matched IPDMA did not address the prevention of emergence of resistance, a recognized attribute of ethambutol in the treatment of drug-susceptible TB, although the applicability of this attribute to MDR-TB regimens is unknown. Overall, the committee suggested that ethambutol be used only when more effective drugs cannot be assembled to achieve the necessary five effective drugs in the regimen. Research needs. More evidence is needed on the beneficial role that ethambutol may play in preventing emergence of drug resistance in MDR-TB, including resistance to newer oral drugs. More research is also needed to improve the reliability of DST for ethambutol in different settings.

# Ethionamide and Prothionamide

Ethionamide and prothionamide are derivatives of isonicotinic acid, somewhat similar in structure to isoniazid. Ethionamide is a prodrug and requires activation, after which it inhibits mycobacterial fatty acid synthesis that is necessary for cell wall synthesis and repair. Clinical studies indicate efficacy of these drugs, and they have been included in regimens for treatment of MDR-TB and for treatment of TB meningitis in adults and children (16,23,147,148). Summary of evidence. Procedures and methodology to assemble and rank the certainty in the evidence are reported in APPENDIX A, with evidence profiles for PICO questions reported in APPENDIX B. Our PS-matched IPDMA showed that ethionamide/prothionamide was not associated with any benefit, even when the isolate was susceptible by phenotypic DST (3). Previous studies have shown, however, an increase in the likelihood of treatment success when ethionamide was included in the MDR-TB regimen (23,149).
These different results can be explained by the inclusion of newer and more effective drugs, improving outcomes in the comparator groups and potentially biasing the observed effect against ethionamide; fewer patients in the ethionamide/prothionamide group, compared with control subjects, received a newer-generation fluoroquinolone (51% vs. 76%), amikacin (17% vs. 35%), or linezolid (5% vs. 18%), and more patients in the ethionamide/prothionamide group received kanamycin (51% vs. 12%) and capreomycin (27% vs. 16%). Benefits. No benefits were identified for inclusion of ethionamide/prothionamide in regimens, even when isolates were susceptible by phenotypic DST. A pediatric IPDMA of MDR-TB cases also did not show benefit from using ethionamide/prothionamide; however, the majority of children (590 of 641) in this study did receive ethionamide or prothionamide, potentially limiting the ability to discern a benefit (113). Harms. The potential of ethionamide to cause adverse events can also limit its tolerability (120). A review of studies comparing regimens containing ethionamide or prothionamide (all studies before 1970) showed that adverse effects leading to discontinuation of treatment were equally frequent with ethionamide (11.3%; range, 6-42%) and prothionamide (11.9%; range, 6-40%). Adverse effects, when reported, included abnormal liver function tests, gastrointestinal intolerance, endocrine dysfunction, and hypothyroidism, the latter occasionally requiring treatment with thyroxine (147). Hypothyroidism is a common adverse effect of ethionamide (experienced by approximately 20%) (150,151) and is particularly noted when the drug is used with p-aminosalicylic acid and in HIV-infected children (152). Gastrointestinal disturbance is common but usually resolves within the first 2 weeks of treatment in children. Additional considerations. When ethionamide/prothionamide is included in a regimen for which five other effective drugs cannot be assembled, experts recommend dose escalation (drug ramping) at the time of treatment initiation, as well as monitoring of thyroid-stimulating hormone for evidence of hypothyroidism requiring replacement (16). Of note, in the presence of an inhA mutation, many isolates will show cross-resistance between isoniazid and ethionamide/prothionamide (153).

PICO Question 11—Ethionamide/prothionamide: In patients with MDR-TB, are outcomes safely improved when regimens include ethionamide/prothionamide compared with regimens that do not include ethionamide/prothionamide? Recommendation 11: We suggest NOT including ethionamide/prothionamide in a treatment regimen for patients with MDR-TB if newer and more effective drugs are available to construct a regimen with at least five effective drugs (conditional recommendation, very low certainty in the evidence).

Conclusions. When the individualized treatment regimen for patients with MDR-TB contains newer-generation, more-effective drugs, the addition of ethionamide/prothionamide does not appear to provide benefit. Research needs. Research efforts are underway to identify potential boosters of potency for ethionamide, which, if successful when coadministered, may result in an improved therapeutic index and an overall better risk-benefit ratio for use (154).

# Fluoroquinolones: Levofloxacin, Moxifloxacin, Ciprofloxacin, and Ofloxacin

The fluoroquinolones are a family of chemically related drugs characterized by a common core dual-ring structure (155).
Ofloxacin, then levofloxacin, then moxifloxacin sequentially improved on the earlier generations' spectrum of activity, including mycobacteria, and their antimycobacterial action increased, as evidenced by lower minimum inhibitory concentrations (MICs) and increasing success in clinical use (156,157). Physicians began using these drugs to treat MDR-TB on the basis of in vitro data, with subsequent case series and observational studies showing efficacy (158), although none of the fluoroquinolones are currently indicated by regulatory authorities for the treatment of TB. In general, these drugs are well absorbed orally, have favorable pharmacological profiles for once-daily dosing, are generally well tolerated, and are now available in generic formulations (155). Summary of the evidence. Procedures and methodology to assemble and rank the certainty in the evidence are reported in APPENDIX A, with evidence profiles for PICO questions reported in APPENDIX B. In our PS-matched IPDMA, ofloxacin was used most commonly (n = 4,020), followed by levofloxacin (n = 3,872), moxifloxacin (n = 2,132), and ciprofloxacin (n = 431); 734 patients did not receive a fluoroquinolone, forming the comparison group for subsequent analyses, and 828 patients received two or more quinolones and were excluded (3). The groups were similar in age, sex, proportion AFB smear positive, proportion with cavitary disease on chest radiograph, and prior treatment with first-line drugs. However, the no-quinolone group had substantially more HIV coinfection (44% vs. 13-28% across the different quinolones), more past treatment with second-line drugs (52% vs. 8-25%), more quinolone resistance (86% vs. 6-33%), more resistance to second-line injectables (74% vs. 12-34%), and more XDR-TB (73% vs. 4-19%). In terms of treatment, the no-fluoroquinolone group received less amikacin (7% vs. 18-43%) and kanamycin (13% vs. 26-65%) and more capreomycin (66% vs. 6-34%). Therefore, the median number of effective drugs (intensive phase) was lower in the no-fluoroquinolone group (2.1) than in the other groups (3.3-4.0). Thus, treatment success was lower in the no-fluoroquinolone group (35% vs. 55-68%), and failure/relapse was higher than but not greatly different from the others (13% vs. 2-10%); mortality, however, was much higher (40% vs. 11-15%). Benefits. In our PS-matched IPDMA, among patients with susceptible isolates, levofloxacin-containing regimens compared with no quinolone were associated with significantly more treatment successes (aOR, 4.2; 95% CI, 3.3-5.4) and significantly fewer deaths (aOR, 0.6; 95% CI, 0.5-0.7). Moxifloxacin, compared with no quinolone, was also associated with significantly more treatment successes (aOR, 3.8; 95% CI, 2.8-5.2) and significantly fewer deaths (aOR, 0.5; 95% CI, 0.4-0.6). In the subgroup with resistance to an injectable drug(s), levofloxacin or moxifloxacin was associated with a significant improvement in treatment success (aOR, 1.8; 95% CI, 1.2-2.8) and reduction in death (aOR, 0.6; 95% CI, 0.4-0.8), although the corresponding adjusted risk differences were not statistically significant. In pairwise comparisons, both levofloxacin and moxifloxacin were associated with significantly better treatment outcomes than ofloxacin. The aORs of death were lower for the two later-generation quinolones when compared with ofloxacin (levofloxacin: aOR, 0.8; 95% CI, 0.6-0.9; moxifloxacin: aOR, 0.8; 95% CI, 0.6-1.0) (data not shown). Ofloxacin and ciprofloxacin are considered inferior quinolones against M.
tuberculosis (144,149,159,160). Levofloxacin and moxifloxacin did not differ significantly from each other. In a recent IPDMA describing treatment outcomes in children treated for MDR-TB (113), new and repurposed TB drugs, including late-generation fluoroquinolones, were not used enough to adequately evaluate efficacy, but most experts and emerging evidence suggest that the efficacy of fluoroquinolones noted in adults should be similar in children (83,161). Harms. In our PS-matched IPDMA, grade 3 adverse events were recorded systematically in a subset of 1,962 patients from a cohort study of MDR-TB at 27 sites in nine countries. Permanent discontinuation of fluoroquinolones due to adverse events was uncommon. Among 150 patients treated with levofloxacin, the drug was stopped permanently because of adverse events in 6 (4.0%). Among 398 patients treated with moxifloxacin, the drug was stopped permanently in 14 (3.5%) patients. Among 1,167 patients treated with ofloxacin, the drug was stopped permanently in 56 (4.8%) patients. An analysis of 56 clinical trials comparing quinolones against placebo or against other antimicrobial agents found generally similar adverse event profiles (3). Seven studies reported more frequent adverse events and six studies reported fewer adverse events in fluoroquinolone-treated patients. The most frequent adverse effects reported are those of the gastrointestinal tract in 3% to 17% and the CNS in 0.9% to 11% of patients (155,162). Allergic and other hypersensitivity reactions and other skin reactions occur in 0.5% to 2.8% of patients. Other adverse effects that occur in >1% of patients are cardiac (QT interval prolongation) and endocrine (hypoglycemia). Recently, FDA strengthened warnings in the prescribing information for the entire class of fluoroquinolones on risks of severe hypoglycemia, certain mental health side effects, and tendonitis, as well as risks of ruptures or tears in the aorta (163). Safety concerns persist for long-term pediatric use of fluoroquinolones, especially regarding arthropathy. However, several long-term prospective and retrospective studies in children have confirmed that severe adverse effects with the fluoroquinolones are rare, including musculoskeletal, neurological, and QT interval prolongation adverse effects (164-166). Additional considerations. As later-generation fluoroquinolones have become generic, their cost has decreased greatly and their use has expanded to many different indications, so procurement and availability have not been problematic, but resistance is more common than among aminoglycosides. Some foods, beverages, and antacids with high content of divalent or trivalent cations can reduce fluoroquinolone absorption (16, 167-169). When feasible, taking fluoroquinolones on an empty stomach minimizes the potential for delays in or reduction of fluoroquinolone absorption. Moxifloxacin and, to a lesser extent, levofloxacin prolong the QT interval. Moxifloxacin may necessitate ECG monitoring, especially if patients have a baseline QTc >500 milliseconds or take other QT-prolonging drugs. Pharmacokinetic modeling studies for levofloxacin and moxifloxacin use in children are ongoing: recent data suggest that, for children, a levofloxacin dose of at least 15 to 20 mg/kg daily (170,171) and a moxifloxacin dose of 10 to 15 mg/kg/d (based on data showing exposure that was too low at 7.5-10 mg/kg) are effective (172).
Although a modeling paper suggests a dose of 25 mg/kg/d for infants up to 3 months of age and 20 mg/kg/d for toddlers, safety at these doses has not been verified (173). Conclusions. In our PS-matched IPDMA, patients treated with moxifloxacin and levofloxacin had better outcomes than patients not treated with any fluoroquinolone, or compared with patients treated with ofloxacin, after adjustment for numerous covariates that in themselves were strong determinants of outcome, such as clinical characteristics, extent of drug resistance, and the number of other effective drugs in the treatment regimen. Moxifloxacin or levofloxacin should be included in a regimen to achieve a total of five effective drugs for the treatment of patients with MDR-TB. Our recommendation for the use of moxifloxacin or levofloxacin is strong despite very low certainty in the evidence, because we viewed the significant reduction in mortality, improved treatment success, and relatively few adverse effects associated with MDR-TB treatment that includes these later-generation fluoroquinolones (compared with no fluoroquinolones) as having a particularly favorable balance of benefits over harms. Research needs. Ongoing studies are evaluating optimized fluoroquinolone doses (ClinicalTrials.gov Identifier: NCT01918397) and dosing regimens, as preliminary work suggests these drugs may be more effective at higher doses (174). The activity, safety, and tolerability of using higher doses of fluoroquinolones against isolates with modestly elevated MICs should also be explored. Modest increases in toxicity can be minimized and managed by clinicians and may still be preferable to treating MDR-TB without quinolones. Higher doses may also limit acquired resistance (175,176).

PICO Question 12—Fluoroquinolones: In patients with MDR-TB, are outcomes safely improved when regimens include fluoroquinolones compared with regimens that do not include fluoroquinolones? Recommendation 12: We recommend including moxifloxacin or levofloxacin in a regimen for treatment of patients with MDR-TB (strong recommendation, low certainty in the evidence). Our recommendation for the use of moxifloxacin or levofloxacin is strong despite very low certainty in the evidence because we viewed the significant reduction in mortality, improved treatment success, and relatively few adverse effects associated with MDR-TB treatment that includes these later-generation fluoroquinolones (compared with no fluoroquinolones) as having a particularly favorable balance of benefits over harms.

# Injectables: Amikacin, Capreomycin, Kanamycin, and Streptomycin

The term "injectable drugs" in general encompasses four drugs: the aminoglycoside antibiotics streptomycin, amikacin, and kanamycin, and the cyclic polypeptide antibiotic capreomycin (155,162,177). Aminoglycosides are highly cationic and water soluble but insoluble in organic solvents and hydrophobic environments, explaining much about their pharmacology: limited ability to cross lipid membranes, poor absorption from the gastrointestinal tract, and poor penetration into the CNS. Consequently, they are administered parenterally through slow intravenous infusion or through intramuscular injection (177). Streptomycin's core ring structure differs from that of all other aminoglycosides, explaining in part why cross-resistance between streptomycin and other aminoglycosides is uncommon (155,177). These drugs block protein synthesis at the ribosomal level by binding to a highly conserved nucleotide sequence in the prokaryotic 16S ribosomal subunit, the mRNA decoding region, where translation of mRNA codon to aminoacyl-transfer RNA anticodon normally takes place (155,177). Aminoglycosides share three important characteristics: 1) concentration-dependent killing, 2) a postantibiotic effect, and 3) synergism with other antibacterial drugs (155). These drugs kill bacteria in proportion to drug concentration, so a single daily dose is more effective than divided doses or a continuous infusion. Moreover, antibacterial activity continues many hours after serum levels become undetectable, the so-called "postantibiotic" effect (155).
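These pharmacodynamic properties are often summarized quantitatively. As a hedged illustration (the numeric target below is a commonly cited benchmark from the general aminoglycoside pharmacodynamics literature, not a figure taken from this guideline), concentration-dependent killing means that efficacy tracks the ratio of the peak serum concentration to the minimum inhibitory concentration:

$$\text{efficacy} \propto \frac{C_{\max}}{\mathrm{MIC}}, \qquad \text{with a commonly cited target of } \frac{C_{\max}}{\mathrm{MIC}} \ge 8\text{--}10.$$

This is why a single large daily dose, which maximizes the peak, outperforms the same total dose divided, while the postantibiotic effect permits trough levels to fall below detection between doses without loss of activity.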
PICO Question 13—Injectables: In patients with MDR-TB, are outcomes safely improved when regimens include an injectable compared with regimens that do not include an injectable? Recommendation 13: We suggest including amikacin or streptomycin in a regimen for treatment of patients with MDR-TB when susceptibility to these drugs is confirmed (conditional recommendation, very low certainty in the evidence). Because of their toxicity, these drugs should be used when more effective or less toxic therapies cannot otherwise be assembled to achieve a total of five effective drugs. We suggest NOT including kanamycin or capreomycin (conditional recommendation, very low certainty in the evidence).

Summary of the evidence. Procedures and methodology to assemble and rank the certainty in the evidence are reported in APPENDIX A, with evidence profiles for PICO questions reported in APPENDIX B. In our PS-matched IPDMA, treatment outcomes for 613 patients who did not receive an injectable drug were compared with those of 1,554 patients treated with streptomycin, 4,330 treated with kanamycin, 2,275 treated with amikacin, and 2,401 treated with capreomycin. Patients who received two or more injectable drugs (n = 875) were excluded, because the outcome could not be ascribed to one of the drugs (3). The no-injectable comparison group had notably fewer AFB smear-positive patients and patients with cavitary disease. On the other hand, patients in this group had more past treatment with second-line drugs; more resistance to quinolones and injectables and more XDR-TB; and much more treatment with linezolid. In these respects (more past treatment with second-line drugs, worse pretreatment drug resistance), the capreomycin group was similar to the no-injectable group, in contrast to the other three drug groups. Analysis was based on patients whose isolates were susceptible to the drug they received, with exceptions noted below. Benefits. INJECTABLE DRUGS COMPARED WITH NO INJECTABLE DRUG. Compared with 613 patients treated without an injectable drug, 1,554 streptomycin-treated patients had increased treatment success (aOR, 1.5; 95% CI, 1.1-2.1). In the subgroup with quinolone resistance (17.8% of the total), streptomycin-treated patients had increased treatment success (aOR, 3.0; 95% CI, 1.3-6.6). Compared with 613 patients not treated with injectable drugs, 2,275 amikacin-treated patients had increased treatment success (aOR, 2.0; 95% CI, 1.5-2.6). In the subgroup with quinolone resistance, amikacin was also associated with increased treatment success (aOR, 3.0; 95% CI, 1.6-5.6). Among patients with XDR-TB, amikacin was associated with reduced deaths (aOR, 0.4; 95% CI, 0.2-0.8). In contrast, neither kanamycin nor capreomycin was associated with any benefit on treatment success or death versus no injectable drug at all. To the contrary, kanamycin treatment was associated with fewer treatment successes (aOR, 0.5; 95% CI, 0.4-0.6).
Capreomycin was associated with an increased risk of death (aOR, 1.4; 95% CI, 1.1-1.7). In XDR-TB, capreomycin was associated with increased deaths (aOR, 3.4; 95% CI, 2.7-4.3) when compared with regimens with no injectable drug. In a pediatric IPDMA, the use of second-line injectable agents (amikacin, kanamycin, capreomycin) in children with confirmed MDR-TB was associated with more treatment success compared with those not receiving second-line injectable agents (aOR, 2.94; 95% CI, 1.05-8.28; P = 0.041) (113). However, a high proportion of children with less-severe disease who received no second-line injectable agents still did well; therefore, children may be able to be spared from injectables and the associated toxicities if newer, more-effective drugs can be included in an all-oral regimen (113). INJECTABLE DRUGS COMPARED AGAINST EACH OTHER. Compared with streptomycin-treated patients, patients treated with amikacin had increased treatment success (aOR, 1.7; 95% CI, 1.3-2.2) but with no significant impact on death (aOR, 1.0; 95% CI, 0.8-1.2). Kanamycin and capreomycin were both significantly inferior to streptomycin in every respect. Amikacin was superior to kanamycin and capreomycin in every respect, with higher treatment success rates, lower death rates, or both. In the quinolone-resistant subgroup, on the other hand, use of amikacin did not have a significant effect on treatment success (aOR, 1.7; 95% CI, 0.9-3.4) or death (aOR, 1.2; 95% CI, 0.7-2.0) compared with use of streptomycin. Harms. All aminoglycosides and capreomycin share important toxicities, especially nephrotoxicity, ototoxicity, and electrolyte disturbances, as well as other less common toxicities (155,162,177). The ototoxicity can be vestibular, resulting in loss of balance, or cochlear, resulting in hearing loss. Because these drugs date back to the early years of antibiotic discovery and development, there is vast published experience with their toxicities. In the treatment of MDR-TB, the risk of toxicity is substantial, because the duration of treatment is many months. Ototoxicity may be severe and irreversible but with close monitoring can be minimized or prevented. Nephrotoxicity is often reversible when identified early and addressed appropriately; however, ototoxicity can progress even after the drug is stopped. Monitoring (measuring drug levels; monthly high-quality audiometry, electrolytes, and serum creatinine) and skilled management can prevent or mitigate these effects and are part of standard practice for MDR-TB experts. Risk of hearing loss increases with increasing duration of treatment. For aminoglycosides in general, the estimated frequency of nephrotoxicity is 5% to 15% and of ototoxicity 2% to 14%, including 2% to 10% cochlear and 3% to 14% vestibular (155). In a survey of clinical trials performed between 1975 and 1982 and totaling approximately 10,000 patients, the incidence of amikacin nephrotoxicity was 8.7% (178). Cumulative dose and duration predicted toxicity; older age, dehydration/hypovolemia/hypotension, prior aminoglycoside treatment, coexisting hepatic or renal disease, and concomitant medications were important as well (155). Significant hearing impairment exceeds 50% in some reported series, renal dysfunction approaches 50%, and vestibular dysfunction as high as 20% has been reported (155,162,177). At the other extreme, series have been published with no significant or permanent toxicity, including with longer-term use (155,162,177).
Direct comparisons of drug toxicity for the four drugs used in TB are scarce. Experience suggests streptomycin may be more ototoxic, but that may be a consequence of the far longer and wider use of streptomycin. Observational studies suggest amikacin may be more ototoxic than the other drugs (179-181). In children, ototoxicity (hearing loss) has been documented in up to 24% of cases in a retrospective study, which also carries serious implications for the development of normal speech (182). The pain of intramuscular injections can be safely reduced by adding lidocaine to amikacin injections without interfering with pharmacokinetics (183). The dose range of amikacin should be 15 to 20 mg/kg as a single daily dose. Additional considerations. When injectables are used, serum creatinine, electrolyte measurements, clinical assessment for vertigo and tinnitus, high-quality audiometry (including hearing frequencies of 6,000-8,000 Hz, as high-frequency hearing loss is seen initially), and clinical examinations should be conducted at least monthly, or more frequently if adverse effects occur. Limited data suggest there may be a genetic predisposition for hearing loss associated with specific mitochondrial gene mutations (184). Although routine genetic testing is not currently suggested, the provider should be aware of these genetic mutations and the risks of ototoxicity (185,186). Because bactericidal activity is concentration dependent, high peak levels and single daily dosing are preferred to divided doses. Because of the postantibiotic effect, toxicity can be minimized by allowing trough levels between doses to remain below detectable levels for many hours. Some experts take advantage of these pharmacokinetic and pharmacodynamic properties with thrice-weekly dosing, especially after conversion of cultures to negative, decreasing healthcare system demands and the requirement for daily injections (16,177). Intramuscular injections are painful, and 6 months of injections can be traumatic, especially to younger patients. Patients' values may differ sharply from providers' in this respect and should be considered when determining whether to use injectable agents. Conclusions. In our PS-matched IPDMA, the use of amikacin and streptomycin when the patient's isolate was susceptible to these drugs was associated with an increase in treatment success when compared with the control group not receiving these injectables. However, because of their toxicity and modest efficacy compared with other drugs, which also are less toxic, these drugs should be reserved for when more-effective or less-toxic therapies cannot be assembled to achieve a total of five effective drugs. In our analyses, amikacin and streptomycin had similar aORs for treatment success in the minority of patients with fluoroquinolone resistance. Kanamycin and capreomycin were ineffective. We recommend against using kanamycin or capreomycin. As is the case for adults, the use of amikacin and streptomycin in children should also be reserved for when more-effective or less-toxic therapies cannot be assembled to achieve a total of five effective drugs. Research needs. N-acetylcysteine, a thiol-containing antioxidant, may limit the severity and irreversibility of aminoglycoside-induced ototoxicity (187,188), warranting additional research into this and other otoprotective measures that may improve the balance of benefits and harms for using these injectable agents.
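As a concrete illustration of the weight-based, once-daily amikacin dosing described above, a minimal sketch follows. It is illustrative only: the function name, rounding rule, and example weights are hypothetical, and actual dosing is individualized with therapeutic drug monitoring; the 15-20 mg/kg range is the one cited in the text.

```python
# Minimal sketch of weight-based once-daily amikacin dosing (15-20 mg/kg).

def amikacin_daily_dose_mg(weight_kg: float, mg_per_kg: float = 15.0) -> int:
    """Return a once-daily amikacin dose in mg for the given body weight."""
    if not 15.0 <= mg_per_kg <= 20.0:
        raise ValueError("guideline text cites a 15-20 mg/kg range")
    # Single daily dosing exploits the concentration-dependent killing and
    # postantibiotic effect discussed in this section.
    return round(weight_kg * mg_per_kg / 50) * 50  # round to nearest 50 mg

if __name__ == "__main__":
    for w in (40, 60, 80):  # hypothetical body weights in kg
        print(w, "kg ->", amikacin_daily_dose_mg(w), "mg once daily")
```

Running the sketch prints 600, 900, and 1,200 mg for 40-, 60-, and 80-kg patients at the low end of the range, which shows why peak (and trough) monitoring rather than a single fixed adult dose is standard practice.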
# Linezolid

Linezolid is an oxazolidinone antibiotic that inhibits bacterial protein synthesis by preventing the fusion of the 30S and 50S ribosomal subunits (189). It also binds to human mitochondrial ribosomes and inhibits protein synthesis, which is the mechanism of its toxicity in clinical use (190,191). Linezolid was initially used off-label, in the absence of consistent scientific evidence, as part of the regimen for difficult-to-treat cases of MDR- and XDR-TB (192). The first large retrospective observational study suggested linezolid was effective, but with frequent and often severe adverse events (192). The same study suggested, for the first time, that reducing the daily dose from 1,200 mg to 600 mg per day might be associated with fewer adverse events and improved tolerability. Several systematic reviews, IPDMAs, and one controlled clinical trial have been published on linezolid for treatment of MDR-TB (87, 193-196). The clinical trial confirmed previous observational findings and, in particular, the effectiveness of linezolid and its potential toxicity (195). A meta-analysis of 121 patients with MDR-TB from 11 countries treated with linezolid confirmed linezolid effectiveness (culture conversion, 93.5%; treatment success, 81.8%) and that the 600-mg daily dose was safer than the 1,200-mg dose (46.7% adverse events vs. 74.5%, respectively) without lowering its effectiveness (196). The clinical trial confirmed the efficacy, safety, and tolerability of linezolid in patients with XDR-TB (194). There are insufficient data regarding the effectiveness of initiating treatment with doses <600 mg daily to recommend lower doses. Recently, the importance of therapeutic drug monitoring to reduce adverse events potentially due to linezolid has been emphasized (197,198). Summary of the evidence. Linezolid was used in regimens for MDR- and XDR-TB across 38 studies (3). The initial dose of linezolid was 1,200 mg/d for 91 patients in five studies, 600 mg/d for 784 patients in 28 studies, and 300 mg/d for 99 patients in five studies (3). Procedures and methodology to assemble and rank the certainty in the evidence are reported in APPENDIX A, with evidence profiles for PICO questions reported in APPENDIX B. Benefits. In our PS-matched IPDMA, patients who received linezolid-containing regimens were more likely to achieve treatment success (aOR, 3.4; 95% CI, 2.6-4.5) and to have a lower rate of death (aOR, 0.3; 95% CI, 0.2-0.3) than those who did not receive linezolid. The effect on treatment success was more pronounced when only studies from high-income countries were included (aOR, 3.9; 95% CI, 2.6-5.8). The greatest impact was found in patients with XDR-TB, in whom the aOR for successful treatment versus failure or relapse was 6.3 (95% CI, 3.9-10.1) and the aOR for death was 0.1 (95% CI, 0.1-0.2). When PS-matched pairs analyses were performed comparing the effects of bedaquiline and linezolid with those of no bedaquiline or linezolid, the combination of bedaquiline and linezolid was associated with an aOR of 2.7 (95% CI, 1.5-4.9) for success versus failure/relapse and of 0.3 (95% CI, 0.2-0.4) for death versus success/failure/relapse. The efficacy of linezolid in children with TB has been shown in two studies of children <18 years of age, albeit with few patients (199,200). Harms.
Harms. Adverse effects associated with linezolid in patients with TB include neurotoxicity (i.e., peripheral neuropathy and optic neuritis), myelosuppression, hyperlactatemia, and diarrhea, all of which are presumably secondary to the inhibition of mitochondrial protein synthesis (190,201). A published systematic review of 12 studies conducted in 11 countries reported an adverse event rate of 58.9% (hematologic, neurologic, and gastrointestinal), predominantly in individuals treated with dosages >600 mg/d (196). Hematologic toxicity can occur quickly after starting treatment and can involve any cell line. Neurotoxicity, including optic neuritis and peripheral neuropathy, occurs later, usually after 12 to 20 weeks of treatment. Toxicity has been associated with trough levels >2.0 μg/ml (202-204). Adverse effects, especially myelosuppression, are also noted to be common at the currently recommended dose of 10 mg/kg twice daily in children <10 years of age.

PICO Question 14-Linezolid: In patients with MDR-TB, are outcomes safely improved when regimens include linezolid compared with regimens that do not include linezolid?
Recommendation 14: We suggest including linezolid in a regimen for the treatment of patients with MDR-TB (conditional recommendation, very low certainty in the evidence). This is a conditional recommendation despite linezolid-containing regimens showing a large reduction in mortality and improved treatment success, similar to bedaquiline and later-generation fluoroquinolones, because linezolid had more adverse effects and its balance of benefits and harms was less favorable compared with those drugs.

Additional considerations. Although use of linezolid for 28 days or less, as FDA approved for non-TB indications, is associated with an acceptable adverse-effect profile (205), data on the longer-term use necessary for MDR-TB are limited (3,195,200). Strict clinical monitoring for potential toxicity (in particular peripheral neuropathy, optic neuritis, anemia, and leukopenia) is necessary because of the risk of adverse events associated with long-term use of linezolid (16). If optic neuritis occurs, many patients may be rechallenged successfully with linezolid once vision normalizes. Assessment for visual toxicity must continue after restarting linezolid. Some patients can be rechallenged with the full dose; others avoid recurrent visual toxicity with a reduced linezolid dose of 300 mg daily (195). Linezolid should generally not be administered to patients taking serotonergic agents, such as monoamine oxidase inhibitors, because of the potential for serious CNS reactions, such as serotonin syndrome. Because monoamine oxidase type A deaminates serotonin, and selective serotonin reuptake inhibitors potentiate the action of serotonin by inhibiting its neuronal reuptake, administration of linezolid concurrently with a selective serotonin reuptake inhibitor can lead to serious reactions, such as serotonin syndrome or neuroleptic malignant syndrome-like reactions (16). One randomized clinical trial demonstrated that lowering the dose from 600 mg/d to 300 mg/d after culture conversion reduced toxicity (195). For children, one modeling study reported that a linezolid dose of 15 mg/kg once daily in full-term neonates and young infants achieved the efficacy target in >90%, with <10% reaching a linezolid area under the concentration-versus-time curve from 0 to 24 hours (AUC0-24) associated with toxicity (173). On the basis of modeling of pharmacokinetic data from 48 children, WHO and the Sentinel Project recommend pediatric doses of linezolid of 15 mg/kg once daily for children weighing <15 kg and 10 to 12 mg/kg once daily for those weighing >15 kg (7,17). It is common practice for patients taking linezolid to be prescribed vitamin B6 (16).
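The weight bands quoted above translate directly into a small dose-range helper. This is a sketch under the stated WHO/Sentinel Project bands only; the handling of a weight of exactly 15 kg and the function name are assumptions of the example, and actual pediatric dosing should be set with expert consultation and, where available, serum concentrations.

```python
def pediatric_linezolid_band_mg_per_kg(weight_kg: float) -> tuple[float, float]:
    """Return the (low, high) once-daily mg/kg range from the bands quoted above:
    15 mg/kg for children <15 kg and 10-12 mg/kg for heavier children.
    Assigning exactly 15 kg to the heavier band is an assumption of this sketch."""
    if weight_kg < 15.0:
        return (15.0, 15.0)
    return (10.0, 12.0)

weight = 12.0  # hypothetical patient weight in kg
low, high = pediatric_linezolid_band_mg_per_kg(weight)
print(f"{weight * low:.0f}-{weight * high:.0f} mg once daily")  # 180-180 mg
```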
Conclusions. In our PS-matched IPDMA, patients who received linezolid-containing regimens were more likely to achieve treatment success and had a lower rate of death than those who did not receive linezolid. We suggest including linezolid in a regimen for the treatment of patients with MDR-TB. This is a conditional recommendation despite linezolid-containing regimens showing a large reduction in mortality and improved treatment success, similar to bedaquiline and later-generation fluoroquinolones, because linezolid had more adverse effects and its balance of benefits and harms was less favorable compared with those drugs. Research needs. Clinical trials of combinations of new chemical entities plus linezolid, including administration at different doses and durations to optimize therapeutic effect while minimizing toxicity, are underway (Nix-TB, ClinicalTrials.gov identifier: NCT02333799; ZeNix, ClinicalTrials.gov identifier: NCT03086486). Further pharmacokinetic and safety data are needed to establish optimal linezolid dosing in adults and children.

# Macrolides: Azithromycin and Clarithromycin

The macrolides azithromycin and clarithromycin have an unclear efficacy and role in the treatment of MDR-TB (23). Macrolides are commonly used to treat upper and lower respiratory tract infections and have an essential role in the treatment of nontuberculous mycobacterial disease (206). They are believed to have immunomodulatory and antiinflammatory effects. M. tuberculosis has intrinsic, inducible resistance to clarithromycin (207,208), and in vivo murine TB models confirm the lack of activity of macrolides (209). Clarithromycin may increase linezolid serum exposure when used in combination (210), prompting some consideration of potential synergy between macrolides and other MDR-TB drugs (211). WHO does not recommend the use of macrolides to treat MDR-TB (23).

PICO Question 15-Macrolides: In patients with MDR-TB, are outcomes safely improved when regimens include macrolides compared with regimens that do not include macrolides?
Recommendation 15: We recommend NOT including the macrolides azithromycin and clarithromycin in a treatment regimen for patients with MDR-TB (strong recommendation, very low certainty in the evidence). Our recommendation against the use of the macrolides azithromycin and clarithromycin in MDR-TB treatment is strong despite the evidence being judged to be of very low certainty because we viewed the increased mortality and decreased likelihood of treatment success associated with the use of this drug class as having a notably unfavorable balance of benefits to potential harms.

Summary of the evidence. Procedures and methodology to assemble and rank the certainty in the evidence are reported in APPENDIX A, with evidence profiles for PICO questions reported in APPENDIX B. In our PS-matched IPDMA, patients who received macrolides (n = 1,067) were more likely to have been treated with second-line drugs and to have resistance to fluoroquinolones or to any second-line injectable (3).
Patients who received macrolides were also more likely to receive later-generation fluoroquinolones, capreomycin, and linezolid. In adjusted analyses, patients who received macrolides were less likely to achieve treatment success (aOR, 0.6; 95% CI, 0.5-0.8) and had a higher rate of death (aOR, 1.6; 95% CI, 1.2-2.0) than patients who did not receive macrolides. Benefits. The available evidence does not support the use of macrolides in the treatment of MDR-TB. Harms. In our PS-matched IPDMA, patients who received macrolides were less likely to achieve treatment success (aOR, 0.6; 95% CI, 0.5-0.8) and had a higher rate of death (aOR, 1.6; 95% CI, 1.2-2.0) than patients who did not receive macrolides. Conclusions. Macrolides, specifically azithromycin and clarithromycin, should not be included in a regimen for the treatment of patients with MDR-TB. Our recommendation against their use is strong despite the evidence being judged to be of very low certainty because we viewed the increased mortality and decreased likelihood of treatment success associated with this drug class as having a notably unfavorable balance of benefits to potential harms. Research needs. Further research may be warranted on newer-generation macrolides and to elucidate whether there is potential synergy of macrolides with linezolid or other second-line agents.

# p-Aminosalicylic Acid

PICO Question 16-p-Aminosalicylic Acid: In patients with MDR-TB, are outcomes safely improved when regimens include p-aminosalicylic acid compared with regimens that do not include p-aminosalicylic acid?
Recommendation 16: We suggest NOT including p-aminosalicylic acid in a treatment regimen for patients with MDR-TB (conditional recommendation, very low certainty in the evidence). When the individualized treatment regimen for patients with MDR-TB contains newer-generation, more-effective drugs, the addition of p-aminosalicylic acid does not appear to provide a benefit.

One of the first agents found to be effective against TB (212), p-aminosalicylic acid has been widely used clinically, although its precise mode of action remains uncertain (213). With the discovery of other, more potent drugs, including rifampin, p-aminosalicylic acid, initially used in combination with streptomycin, was no longer included in first-line regimens. It is now used as part of treatment regimens for MDR- and XDR-TB, although its benefits are not clear and toxicity limits its use. Current guidance recommends that p-aminosalicylic acid be used to compose a regimen when five effective drugs cannot otherwise be assembled (16,23). Summary of the evidence. Procedures and methodology to assemble and rank the certainty in the evidence are reported in APPENDIX A, with evidence profiles for PICO questions reported in APPENDIX B. In our PS-matched IPDMA, p-aminosalicylic acid was not associated with any benefit on treatment success (aOR, 0.8; 95% CI, 0.7-1.0) and was associated with increased death (aOR, 1.2; 95% CI, 1.1-1.4) (3). The potential to cause adverse events (12.2% in a previous meta-analysis) has also limited the tolerability of p-aminosalicylic acid (23). Benefits. No association with any benefit on treatment success was identified for p-aminosalicylic acid in our PS-matched IPDMA. In an IPDMA of children with MDR-TB, no benefit was identified for the inclusion of p-aminosalicylic acid in regimens (aOR, 0.75; 95% CI, 0.25-1.96; P = 0.483) (113). Harms. Gastrointestinal distress is common with p-aminosalicylic acid but is reported to occur less often with the PASER formulation than with older preparations (16).
Rare hepatotoxicity and thrombocytopenia have been reported, as has reversible hypothyroidism, particularly when the drug is used concomitantly with ethionamide (120). In the setting of hypothyroidism, some experts provide thyroid replacement therapy rather than discontinuing p-aminosalicylic acid. Clinical experience has shown p-aminosalicylic acid to be better tolerated with respect to gastrointestinal disturbance in children than in adults, although hypothyroidism also remains common in children. Additional considerations. Indirect comparison suggests that ethionamide may be a better choice than p-aminosalicylic acid if a fifth drug is needed to construct a regimen with at least five effective drugs. Older studies showed p-aminosalicylic acid to be a good companion drug for protecting other drugs from the development of resistance, as well as efficacious in combination with first-line drugs (214,215). Pediatric DR-TB experts have recently suggested that, in children, p-aminosalicylic acid may replace an injectable agent in the absence of better drugs (83). When p-aminosalicylic acid is used, experts recommend monitoring thyroid-stimulating hormone, electrolytes, blood counts, and liver function tests. Conclusions. When the individualized treatment regimen for patients with MDR-TB contains newer-generation, more-effective drugs, the addition of p-aminosalicylic acid does not appear to provide benefit. Research needs. Given the limited armamentarium of TB drugs available for treating MDR-TB, some experts have advocated evaluation of dose-optimized p-aminosalicylic acid to improve efficacy while minimizing toxicity (216). Research on whether p-aminosalicylic acid, among other agents, may offer protection against the acquisition of resistance to other drugs in the regimen would also be valuable.

# Pyrazinamide

Pyrazinamide, a nicotinamide analog, is a prodrug that is converted in vivo into pyrazinoic acid, which interferes with mycobacterial fatty acid synthase. Pyrazinamide has demonstrated effectiveness against M. tuberculosis; it is included in standard regimens for the treatment of drug-susceptible TB and is also used in regimens for MDR-TB (11,16,23). However, recent population-based studies conducted as part of multicountry surveillance activities have shown that pyrazinamide resistance is highly associated with rifampin resistance (217,218). This finding, in conjunction with evidence that pyrazinamide efficacy is reduced in the setting of pncA gene mutations (219-222), underscores the importance of documenting drug susceptibility to pyrazinamide by WGS, molecular tests, or traditional DST if the drug is to be included as part of a regimen for MDR-TB. Pyrazinamide has substantial sterilizing activity as part of combination regimens and has allowed treatment shortening in drug-susceptible TB (223,224). Higher doses have been more effective in animal models and phase 2A studies, but doses of 40 to 70 mg/kg were found to be too toxic to pursue further in human studies (224). Summary of the evidence. Procedures and methodology to assemble and rank the certainty in the evidence are reported in APPENDIX A, with evidence profiles for PICO questions reported in APPENDIX B. Our PS-matched IPDMA compared outcomes in 1,986 individuals with pyrazinamide-susceptible isolates who received the drug with outcomes in 307 individuals with pyrazinamide-susceptible isolates who did not receive the drug (3).
However, the groups were not balanced with regard to other effective drugs; in particular, the patients not receiving pyrazinamide were more likely to receive linezolid and a later-generation fluoroquinolone. Thus, the ability to detect an effect of pyrazinamide might have been partially obscured. In a previous IPDMA, conducted before linezolid and bedaquiline were commonly used, administration of pyrazinamide to patients with susceptible isolates was associated with an aOR of 1.9 (95% CI, 1.3-2.9) for cure/completion versus failure/relapse and an aOR of 1.6 (95% CI, 1.3-2.1) for cure/completion versus failure/relapse/death, compared with patients receiving pyrazinamide whose isolates were resistant in vitro (6). Benefits. In our PS-matched IPDMA, treatment success was significantly less likely with regimens containing pyrazinamide (aOR, 0.7; 95% CI, 0.5-0.9), but death was also significantly less frequent (aOR, 0.7; 95% CI, 0.6-0.8). This paradox may be due to confounding, as patients who did not receive pyrazinamide were substantially more likely to receive linezolid. Moreover, our PS-matched IPDMA did not assess the potential ability of pyrazinamide to contribute to treatment shortening, as has been achieved in drug-susceptible TB.
Because pyrazinamide is associated with increased success only when used to treat patients whose isolates are susceptible to the drug (144), the decision to include pyrazinamide in a regimen should, whenever feasible, be based on pyrazinamide susceptibility results. In an IPDMA of children with MDR-TB, the addition of pyrazinamide to a regimen showed no benefit in the treatment of confirmed MDR-TB cases (aOR, 1.63; 95% CI, 0.41-6.56; P = 0.484) (113). Pyrazinamide resistance was, however, not tested, and improved selection of cases on the basis of resistance might change this outcome. Harms. Pyrazinamide has been used extensively in the treatment of TB, and its toxicities are well documented (225). The most common is gastrointestinal upset or intolerance. In a recent review of the tolerability of TB drugs, pyrazinamide was associated with serious adverse events in 56 of 2,023 (2.8%) patients (226), consistent with previous reports of pyrazinamide toxicity (11,225,227). Hepatic enzyme elevations are common with pyrazinamide, and significant hepatotoxicity, although less common, can occur. Modest elevations of serum uric acid are also expected, although their clinical significance is unclear. Nongouty polyarthralgias and hypersensitivity reactions can occur. Flares of clinical gout can also occur, especially in those with a history of gouty arthritis. Additional considerations. All patients receiving pyrazinamide as part of an MDR-TB treatment regimen should be monitored carefully for signs or symptoms of hepatotoxicity and should have pyrazinamide held, or the dose decreased, if such toxicity is detected. Isolated increases in uric acid without symptoms of gout are common and are not an indication to discontinue the drug. Recent population-based studies have found that pyrazinamide resistance is common in the setting of MDR-TB, with some regional variability, suggesting that pyrazinamide susceptibility should be confirmed, or at least strongly suspected, if the drug is included in the regimen (217,228,229). Although there are known challenges in accurately determining phenotypic DST for pyrazinamide, recent highly predictive DNA sequencing techniques show significant promise for newer genomic approaches (230). When pyrazinamide is included in the regimen, most experts use doses of 25 to 40 mg/kg/d orally. The recommended dose in children is 30 to 40 mg/kg daily. Conclusions. We suggest including pyrazinamide in a regimen for the treatment of patients with MDR-TB when the M. tuberculosis isolate has not been found resistant to pyrazinamide. Research needs. Pyrazinamide is being evaluated as part of novel treatment regimens for both drug-susceptible and MDR-TB in multiple clinical trials (56). Development of a reliable, simple molecular test for pyrazinamide susceptibility is a critically important research need. Dose-optimization studies for pyrazinamide are warranted, as higher doses may be more efficacious but also more toxic.

# Building a Treatment Regimen for MDR-TB

The guideline committee proposes a clinical strategy tool for building a treatment regimen for MDR-TB (Table 10). The clinical strategy tool incorporates the evidence-based review of the individual drugs, with consideration of the balance of benefits and harms for each drug, the experience of the MDR-TB experts on the committee, and the perspectives of patients.
This clinical strategy tool encourages the building of all-oral regimens of five effective drugs (drugs to which the isolate is susceptible or has a low likelihood of resistance) for the treatment of MDR-TB. In our PS-matched IPDMA, significant favorable synergies, with improved treatment success and reduced mortality, were identified when bedaquiline was used in combination with linezolid or clofazimine. As noted, amikacin and streptomycin show modest effectiveness when the patient's isolate is susceptible to these drugs; however, because of their significant toxicities, they should be reserved for when a more-effective or less-toxic regimen cannot otherwise be assembled. The final choice of drugs and drug classes is contingent on many factors, including patient preferences, the harms and benefits associated with the agents, the capacity to appropriately monitor for significant adverse effects, drug-drug interactions, comorbidities, and drug availability. Final regimen development, therefore, is individualized and may differ substantially from the approach described in Table 10. Doses of the drugs for treating adults and children with MDR-TB are provided in Table 9, modified and updated from the 2016 ATS/CDC/IDSA Treatment of Drug-Susceptible TB Practice Guidelines (11).

Table 9 footnotes:
† Levofloxacin doses of up to 1,250 mg have been used safely when needed to achieve therapeutic concentrations. A recent population pharmacokinetic study in South African children found that higher levofloxacin doses, from 18 mg/kg/d for younger children up to 40 mg/kg/d for older children, may be required to achieve adult-equivalent exposures (170).
‡ Higher moxifloxacin doses have been used safely when the isolate is resistant to ofloxacin and the minimum inhibitory concentration for levofloxacin or moxifloxacin suggests higher doses may overcome resistance. Higher doses are also used in cases of malabsorption.
§ Cycloserine doses can be divided if needed (typically twice daily). Doses >750 mg are difficult for many patients to tolerate.
|| The cycloserine dose may be lowered if serum concentrations exceed 35 μg/ml, even if the patient is not experiencing toxicity, to prevent central nervous system toxicity.
¶ Modified from the adult intermittent dose of 25 mg/kg, accounting for the larger total body water content and faster clearance of injectable drugs in most children. Dosing can be guided by serum concentrations.
** Ethionamide can be given at bedtime or with a meal to reduce nausea. Experienced clinicians suggest starting with 250 mg once daily and gradually increasing the dose over 1 week. Serum concentrations may be useful in determining the appropriate dose. Few patients tolerate 500 mg twice daily.
†† Studies are ongoing evaluating meropenem at higher doses (ClinicalTrials.gov identifiers: NCT03174184 and NCT02349841).
‡‡ Some experts prescribe p-aminosalicylic acid at 6 g, and up to 12 g, administered once daily (16,216).
§§ For children, some experts prescribe p-aminosalicylic acid at 200 mg/kg administered once daily (216).
|||| Isoniazid is tested at two concentrations. Some experts use these results (or resistance conferred through mutations in inhA) to select a higher dose when the isolate tests resistant at the lower concentration and susceptible at the higher concentration. The higher dose may achieve in vivo concentrations sufficiently high to overcome low-level resistance (16,216).

# Role of Therapeutic Drug Monitoring in Treatment of MDR-TB

Specific pharmacokinetic (PK)/pharmacodynamic (PD) targets, especially the area under the concentration-versus-time curve over 24 hours divided by the MIC (AUC0-24/MIC), are increasingly recognized as playing an important role in determining efficacy (231). This has been demonstrated most clearly in hollow-fiber systems in vitro, which isolate the activity of the drug against the organism (232,233). Further evidence has been provided by animal models (mouse, rabbit, and nonhuman primate) and by human clinical trials (234-237). Drug levels are usually chosen to be four to five times greater than the MIC. Because of the complexities of clinical disease, it is more challenging to isolate the PK/PD contribution of individual drugs, but studies have nonetheless been informative about the relationships between efficacy and drug exposure (238-240). In human TB, a combination of drugs is used, and each patient has his or her own duration of disease, host genetics, and particular strain of M. tuberculosis. The general term for these PK/PD data is "exposure-response" data (i.e., for a given amount of drug exposure, how much response can be expected?). Much of the published data focus on the first-line TB drugs, with some data emerging for second-line drugs (241-243). In clinical practice, the actual MIC for each drug often is not available. Epidemiological cutoff values or "critical concentrations" that separate wild-type from more resistant isolates can be used in selecting drugs to include in a regimen (244). These in vitro cutoffs are based on patterns of susceptibility compared with concentrations achievable in humans. An organism is not simply "susceptible" as an inherent property; it is susceptible to inhibition or killing by specific, tested concentrations of the drugs. Individual MIC values might be preferred; in practice, however, there are technical and financial barriers to such individualized data. With standardized dosing, clinical experience over the past three decades clearly shows that some patients have low drug concentrations, leading to clinical failures (11,231,241,242,244).
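The exposure-response quantities named above reduce to simple ratios once an exposure and an MIC are in hand. The numbers below are invented for illustration; actual target ratios are drug specific and must come from the cited PK/PD literature.

```python
# Illustrative exposure-response arithmetic only; all values are hypothetical.
auc_0_24 = 120.0   # measured AUC(0-24), mg*h/L (hypothetical)
peak = 5.5         # measured peak concentration, mg/L (hypothetical)
mic = 1.0          # MIC of the patient's isolate, mg/L (hypothetical)

print(f"AUC(0-24)/MIC = {auc_0_24 / mic:.0f}")
# The text notes drug levels are usually chosen to be 4-5x the MIC:
print(f"peak/MIC = {peak / mic:.1f} (compare with the 4-5x rule of thumb)")
```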
Many MDR-TB experts use therapeutic drug monitoring (TDM) to identify patients with problems with drug absorption, thereby informing dose adjustments. Individuals who have a poor response to TB treatment despite adherence may benefit from TDM (11). Patients with TB who have gastrointestinal conditions that increase the risk of malabsorption, concurrent HIV infection, impaired renal clearance, or diabetes should be prioritized for TDM (11). Furthermore, some experts use TDM early in treatment for all patients being treated for MDR- or XDR-TB, rather than waiting for a poor response. TDM should be used and interpreted in consultation with an expert in MDR-TB. TDM also provides patient-specific information that may help limit toxicity due to certain drugs, including the injectable drugs, cycloserine, and linezolid (245-247). In particular, linezolid toxicity is associated with elevated trough values (202-204). "Target" ranges for the injectable drugs and cycloserine have been proposed, and research continues on refining them. Fluoroquinolones display concentration-dependent efficacy, and higher doses of moxifloxacin and levofloxacin are being studied (174,248). Currently, avoiding low serum concentrations seems advisable. A common clinical practice is to collect samples at 2 and 6 hours after drug administration to measure concentrations that may distinguish normal drug absorption (a 2-h value within the normal range) from delayed absorption (a 6-h value greater than the 2-h value, approaching the normal range) and malabsorption (both values below the normal range); a sketch of this classification follows. This approach also works for injectable drugs. Furthermore, trough values for linezolid can be helpful (11,231,241,242,244).
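A minimal sketch of the 2-hour/6-hour classification just described, assuming a drug-specific published reference range is supplied by the caller; the "approaching the normal range" cutoff (80% of the lower bound here) and the function name are invented stand-ins for clinical judgment, not part of the guideline.

```python
def classify_absorption(c2_h: float, c6_h: float,
                        normal_low: float, normal_high: float) -> str:
    """Classify paired 2-h and 6-h post-dose concentrations per the practice above.

    normal_low/normal_high: the drug's published TDM reference range.
    """
    if normal_low <= c2_h <= normal_high:
        return "normal absorption (2-h value within the normal range)"
    if c2_h < normal_low and c6_h > c2_h and c6_h >= 0.8 * normal_low:
        # 6-h value rising toward the range; the 0.8 factor is an assumption.
        return "delayed absorption (6-h value approaching the normal range)"
    if c2_h < normal_low and c6_h < normal_low:
        return "possible malabsorption (both values below the normal range)"
    return "indeterminate; repeat sampling and expert interpretation advised"
```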
Although it is clearly possible to cure patients without TDM, and even in the setting of lower serum drug concentrations, the available data suggest that the probability of cure decreases with decreasing drug concentrations (231-241). Given the incomplete information currently available, many expert clinicians use in vitro susceptibility data and TDM as available tools for optimizing the treatment of patients with MDR- and XDR-TB. For all patients, but in particular those receiving outpatient treatment, TDM should be coordinated and conducted using patient-centered approaches.

Table 10. Clinical Strategy to Build an Individualized Treatment Regimen for MDR-TB

• Build a regimen using five or more drugs to which the isolate is susceptible (or has a low likelihood of resistance), preferably with drugs that have not been used to treat the patient previously.
• Choice of drugs is contingent on the capacity to appropriately monitor for significant adverse effects, patient comorbidities, and preferences/values (choices are therefore subject to program and patient safety limitations).
• In children with TB disease who are contacts of infectious MDR-TB source cases, the source case's isolate DST result should be used if an isolate is not obtained from the child.
• TB expert medical consultation is recommended (ungraded good practice statement).

Step 1: Choose one later-generation fluoroquinolone: levofloxacin, moxifloxacin
Step 2: Choose both of these prioritized drugs: bedaquiline, linezolid
Step 3: Choose both of these prioritized drugs: clofazimine, cycloserine/terizidone
Step 4: If a regimen cannot be assembled with five effective oral drugs, and the isolate is susceptible, use one of these injectable agents*: amikacin, streptomycin
Step 5: If needed, or if oral agents are preferred over the injectable agents in Step 4, use the following drugs†: delamanid‡, pyrazinamide, ethambutol
Step 6: If options are limited and a regimen of five effective drugs cannot be assembled, consider use of the following drugs: ethionamide or prothionamide§, imipenem-cilastatin/clavulanate or meropenem/clavulanate||, p-aminosalicylic acid¶, high-dose isoniazid**

The following drugs are no longer recommended for inclusion in MDR-TB regimens: capreomycin and kanamycin; amoxicillin/clavulanate (when used without a carbapenem); azithromycin and clarithromycin.

Definition of abbreviations: DST = drug susceptibility testing; INH = isoniazid; IPDMA = individual patient data meta-analysis; MDR = multidrug-resistant; PS = propensity score; TB = tuberculosis.
*Amikacin and streptomycin should be used only when the patient's isolate is susceptible to these drugs. Because of their toxicity, these drugs should be reserved for when more-effective or less-toxic therapies cannot be assembled to achieve a total of five effective drugs.
†Patient preferences regarding the harms and benefits associated with injectables (the use of which is no longer obligatory), the capacity to appropriately monitor for significant adverse effects, drug-drug interactions, and patient comorbidities should be considered in selecting Step 5 agents over injectables. Ethambutol and pyrazinamide had mixed/marginal performance on outcomes assessed in our PS-matched IPDMA; however, some experts may prefer these drugs over injectable agents to build a regimen of at least five effective oral drugs. Use pyrazinamide and ethambutol only when the isolate is documented as susceptible.
‡Data on dosing and safety of delamanid are available for children >3 years of age.
§Mutations in the inhA region of the Mycobacterium tuberculosis genome can confer resistance to ethionamide/prothionamide as well as to INH. In this situation, ethionamide/prothionamide may not be a good choice unless the isolate is shown to be susceptible by in vitro testing.
||Divided daily intravenous dosing limits feasibility. Optimal duration of use is not defined.
¶Fair/poor tolerability and low performance. Adverse effects are reported to be less common in children.
**Not reviewed in our PS-matched IPDMA; high-dose isoniazid can be considered despite low-level isoniazid resistance but not with high-level INH resistance.
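The step order in Table 10 can be read as a greedy selection loop, sketched below. The drug strings, the per-step caps, and the greedy strategy are simplifications of the table; the table's real qualifiers (susceptibility, monitoring capacity, comorbidities, interactions, patient preferences) are exactly what this sketch leaves to the clinician when populating `usable`.

```python
# Table 10 steps as (candidate drugs, maximum number to take from the step).
STEPS = [
    (["levofloxacin or moxifloxacin"], 1),                    # Step 1
    (["bedaquiline", "linezolid"], 2),                        # Step 2
    (["clofazimine", "cycloserine/terizidone"], 2),           # Step 3
    (["amikacin or streptomycin (isolate susceptible)"], 1),  # Step 4
    (["delamanid", "pyrazinamide (susceptible)",
      "ethambutol (susceptible)"], 3),                        # Step 5
    (["ethionamide or prothionamide", "carbapenem + clavulanate",
      "p-aminosalicylic acid", "high-dose isoniazid"], 4),    # Step 6
]

def build_regimen(usable: set[str], target: int = 5) -> list[str]:
    """Greedily assemble up to `target` effective drugs in Table 10 step order.

    `usable` holds the drugs judged effective and tolerable for this patient;
    deciding that set is the clinical task this sketch deliberately omits.
    """
    regimen: list[str] = []
    for candidates, cap in STEPS:
        taken = 0
        for drug in candidates:
            if len(regimen) >= target:
                return regimen
            if drug in usable and taken < cap:
                regimen.append(drug)
                taken += 1
    return regimen  # caller must check whether five drugs were actually reached

print(build_regimen({"levofloxacin or moxifloxacin", "bedaquiline",
                     "linezolid", "clofazimine", "cycloserine/terizidone"}))
```

With the preferred all-oral drugs usable, the loop stops after Steps 1-3 with the five-drug oral core, mirroring the table's intent that injectables enter only when that core cannot be completed.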
# Shorter-Course, Standardized, 9- to 12-Month Regimen for MDR-TB

A randomized, phase 3, noninferiority trial, STREAM Stage 1 (ClinicalTrials.gov identifier: NCT02409290), was recently conducted to assess a shorter-course regimen composed of existing drugs for MDR-TB (8). This regimen had achieved success rates of 83% (95% CI, 71.0-90.3%) in cohort studies (249,250). The shorter-course regimen is standardized and composed of kanamycin, moxifloxacin (in place of gatifloxacin, the fluoroquinolone originally used in the "Bangladesh" regimen), prothionamide, clofazimine, pyrazinamide, high-dose isoniazid, and ethambutol for an initial period (4-6 mo), followed by moxifloxacin, clofazimine, pyrazinamide, and ethambutol for the continuation phase (5 mo) (23). The total duration of therapy is 9 to 11 months, compared with 18 to 24 months for an individualized regimen. The medication costs of the shorter regimen are believed to be lower than those of conventional regimens; a formal economic evaluation is underway as part of the STREAM trials (251).

# Summary of the Evidence

In the STREAM Stage 1 clinical trial, of 424 participants who underwent randomization, 383 were included in the modified intention-to-treat population (8). Favorable status was reported in 79.8% of participants in the long-regimen group and in 78.8% of those in the short-regimen group, a difference, with adjustment for HIV status, of 1.0 percentage point (95% CI, -7.5 to 9.5). The results with respect to noninferiority were consistent among the 321 participants in the per-protocol population. Procedures and methodology to assemble and rank the certainty in the evidence are reported in APPENDIX A, with evidence profiles for PICO questions reported in APPENDIX B. In our PS-matched IPDMA of 12,030 patients (n = 50 studies), 169 patients (n = 33 studies) were eligible for analysis of death and 1,369 (n = 33 studies) for analysis of treatment success once the criteria used for the shorter-course regimen were applied (249). Data were available from 532 individuals (n = 3 studies) receiving the shorter regimen for the analysis of death and from 498 (n = 3 studies) for the analysis of treatment success. After PS matching to adjust for age, sex, HIV status, smear status, past TB treatment with first-line drugs, and number of effective drugs, there were no statistically significant associations of the shorter regimen with treatment success (aOR, 0.5; 95% CI, 0.02-13) or death (aOR, 1.7; 95% CI, 0.6-4.6) compared with individualized regimens.

# Benefits

The shorter regimen allows a significantly shorter duration of therapy than an individualized regimen and consequently a lower pill burden, lower medication costs, and lower associated provider administration costs. The burden on patients of lost productivity, lost wages, and out-of-pocket costs is also considerably less with a shorter regimen. Preliminary results of the STREAM Stage 1 trial documented a reduction in costs to patients owing to less time away from work, fewer clinic visits, and less spending on supplementary food (252).

# Harms

In the STREAM Stage 1 clinical trial, an adverse event of grade 3 or higher occurred in 45.4% of participants in the long-regimen group and in 48.2% in the short-regimen group (8). Prolongation of either the QT interval or the QTc to 500 milliseconds or more occurred in 11.0% of participants in the short-regimen group, compared with 6.4% in the long-regimen group (P = 0.14). Death occurred in 8.5% of participants in the short-regimen group and in 6.4% in the long-regimen group, and acquired resistance to fluoroquinolones or aminoglycosides occurred in 3.3% and 2.3%, respectively. Among the participants who had HIV coinfection at baseline, 18 of 103 (17.5%) in the short-regimen group died, compared with 4 of 50 (8.0%) in the long-regimen group (HR in a post hoc analysis, 2.23; 95% CI, 0.76-6.60) (8).

PICO Question 18-Shorter-course, standardized regimen: In patients with MDR-TB, does treatment with a standardized MDR-TB regimen for <12 months lead to better outcomes than treatment with an MDR-TB regimen for 18-24 months?
Recommendation 18: The shorter-course regimen is standardized with the use of kanamycin (which the committee recommends against using) and includes drugs for which there is documented or high likelihood of resistance (e.g., isoniazid, ethionamide, pyrazinamide). Although the STREAM Stage 1 randomized trial found the shorter-course regimen to be noninferior to longer injectable-containing regimens with respect to the primary efficacy outcome (8), the guideline committee cannot make a recommendation either for or against this standardized shorter-course regimen, compared with longer individualized all-oral regimens that can be composed in accordance with the recommendations in this practice guideline. We make a research recommendation for the conduct of randomized clinical trials evaluating the efficacy, safety, and tolerability of modified shorter-course regimens that include newer oral agents, exclude injectables, and include drugs for which susceptibility is documented or highly likely.
Similarly, in a recently published IPDMA, individuals who received the shorter regimen had less treatment success and a higher rate of death (249). In that IPDMA, other adverse effects of the shorter regimen did not differ significantly, including deafness and ototoxicity (relative risk, 1.5; 95% CI, 0.6-4.0), liver injury (relative risk, 2.2; 95% CI, 0.5-10.3), hepatitis (relative risk, 2.5; 95% CI, 0.3-21.2), and renal impairment (relative risk, 4.5; 95% CI, 0.6-35.2). One of the most concerning adverse effects of the shorter regimen is hearing loss, reported in 7.1% in the African study and in 0% to 23% in the meta-analysis (249,250). The STREAM Stage 1 trial found that ear and labyrinth disorders occurred in 7.4% of participants receiving the shorter regimen, compared with 5.7% with the longer regimen, a difference that was not statistically significant (8); the frequency may have been lower than in cohort studies because hearing loss was not monitored by audiometry in STREAM Stage 1.

# Additional Considerations

When the WHO eligibility criteria for the shorter-course, standardized regimen were applied to the population included in our PS-matched IPDMA (7,253), >15% of individuals would have been eligible for the standardized, shorter regimen. In Europe, patient eligibility for the shorter-course regimen has ranged from 7.9% (48 of 612 new cases) in a study performed at TB reference centers (254) to 16.9% in a surveillance-based study performed by the European Centre for Disease Prevention and Control (ECDC) (93,107,255). In the United States, >15% of patients with MDR-TB would be eligible for the shorter-course MDR-TB regimen (256). Data from California showed either 14.6% or 20.5% eligibility, depending on whether high-dose isoniazid or ethionamide would be determined effective on the basis of katG or inhA mutations, respectively (257). The availability of molecular and growth-based DST is key to determining potential eligibility for the shorter-course regimen. The combined use of both modalities can better inform drug resistance and potential treatment regimens. High-dose isoniazid is likely to be active against organisms with low-level isoniazid resistance, which is commonly associated with a mutation in the inhA gene that also confers resistance to ethionamide (257). Ethionamide is more likely to be active against M. tuberculosis organisms with high-level resistance to isoniazid, which is associated with a mutation in katG and less commonly accompanied by resistance to ethionamide (257); a sketch of this logic follows. Global data have shown that katG mutations are present in 64.3% of the cases tested and inhA mutations in 19.2%, with some regional differences (258).
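The mutation-to-drug tendencies in the preceding sentences can be written out explicitly. This sketch encodes only the stated tendencies; real eligibility decisions require phenotypic DST and expert review, and the function name and output format are assumptions of the example.

```python
def likely_active_inh_companions(katg_mutation: bool, inha_mutation: bool) -> dict[str, bool]:
    """Encode the tendencies described above: inhA -> low-level INH resistance
    with ethionamide cross-resistance (high-dose INH may remain useful);
    katG -> high-level INH resistance (ethionamide more likely active)."""
    return {
        "high-dose isoniazid": inha_mutation and not katg_mutation,
        "ethionamide": katg_mutation and not inha_mutation,
    }

print(likely_active_inh_companions(katg_mutation=True, inha_mutation=False))
# {'high-dose isoniazid': False, 'ethionamide': True}
```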
The shorter regimen includes both ethionamide and high-dose isoniazid and is therefore likely to be effective against both of these common MDR-TB resistance patterns (257,259). There is uncertainty regarding the diagnostic accuracy and reproducibility of growth-based DST assays for ethambutol and pyrazinamide (260). For ethambutol, the ECDC has reported that growth-based DST performed in the Mycobacteria Growth Indicator Tube (MGIT) system is reliable (107), whereas Model Performance Evaluation Program data have identified variability in detecting ethambutol resistance, with the majority of laboratories disagreeing on one to several ethambutol-resistant strains (261). Alternatively, molecular testing for pncA mutations may be a more reliable method for determining pyrazinamide resistance, because on average 85% of TB strains resistant to pyrazinamide will have such a mutation (217,221,222). In resource-limited settings, testing all the drugs composing the shorter-course regimen may not be possible (106,108). In contrast, in Europe, DST for ethambutol (93,107) and pyrazinamide (106) is well studied, but katG and inhA genetic mutations are not. In the United States, growth-based DST data are available for ethambutol, pyrazinamide, high- and low-level isoniazid, and ethionamide, and molecular testing is available through some state public health laboratories and through the CDC's MDDR service (257). Therefore, considering patients with either katG or inhA (but not both) mutations to be eligible for the shorter regimen might be rational. Last, cross-resistance between ofloxacin and moxifloxacin is likely not complete. Data from the ECDC showed that 81% of M. tuberculosis isolates that were resistant to ofloxacin were also resistant to moxifloxacin; in other settings (e.g., Bangladesh and Pakistan), cross-resistance as low as 7% has been reported (107).

# Conclusions

The shorter-course regimen was judged by the guideline committee to have minimal desirable effects (on treatment success, mortality, and culture conversion) and small to moderate undesirable effects (adverse events, limited applicability, and the use of kanamycin as part of the standardized regimen), and it includes drugs for which there is documented or a high likelihood of resistance (e.g., isoniazid, ethionamide, and pyrazinamide). Although the STREAM Stage 1 randomized trial found the shorter-course regimen to be noninferior to a long regimen with respect to the primary efficacy outcome (8), the guideline committee cannot make a recommendation either for or against this standardized shorter-course regimen compared with longer individualized regimens.
We instead make a research recommendation for the conduct of randomized clinical trials evaluating the efficacy, safety, and tolerability of modified shorter-course regimens that include newer oral agents, exclude injectables, and include drugs for which susceptibility is confirmed or deemed to be highly likely. If this shorter-course regimen is used, we recommend obtaining DST for all medications in the regimen, with the exception of clofazimine, for which reliable testing is not available, and we recommend careful monitoring for side effects, including high-quality audiometry, monthly microbiologic monitoring, and close case management, especially in persons with HIV.

# Research Needs

We make a research recommendation for the conduct of randomized clinical trials evaluating the efficacy, safety, and tolerability of modified shorter-course regimens that include newer oral agents, exclude injectables, and include drugs for which susceptibility is confirmed or deemed to be highly likely. Further research is needed on medications such as linezolid, bedaquiline, and other agents currently in clinical trials as substitutions in the regimen if patients experience adverse effects or if resistance develops to any of the medications in the regimen (262). Research on modified shorter-course regimens for pediatric patients and for individuals living with HIV is also needed. Until more data regarding the use and outcomes of the shorter-course regimens in patients with HIV are available, this treatment approach should preferentially be considered only within a research study. Current trials are underway to help answer some of these questions (262).

# Role of Surgery in MDR-TB

Surgery was one of the first therapeutic approaches to TB. It was replaced by chemotherapy between 1960 and 1975. However, several scientific societies and national and international organizations suggest consideration of surgery as adjunctive therapy for MDR-TB, on the basis of the results of retrospective observational studies. Selected indications include failure of drug therapy, relapse, localized (e.g., an isolated cavity) or extensive pulmonary TB, and clinical complications (e.g., hemoptysis or empyema) (23, 263-270).

PICO Question 19-Surgery for MDR-TB: Should elective lung resection surgery (i.e., lobectomy or pneumonectomy) be used as an adjunctive therapeutic option in combination with antimicrobial therapy, versus medical therapy alone, for adults with MDR-TB?
Recommendation 19a: We suggest elective partial lung resection (e.g., lobectomy or wedge resection), rather than medical therapy alone, for adults with MDR-TB receiving antimicrobial-based therapy (conditional recommendation, very low certainty in the evidence). The writing committee believes this option would be beneficial for patients for whom clinical judgement, supported by bacteriological and radiographic data, suggests a strong risk of treatment failure or relapse with medical therapy alone.
Recommendation 19b: We suggest medical therapy alone, rather than including elective total lung resection (pneumonectomy), for adults with MDR-TB receiving antimicrobial therapy (conditional recommendation, very low certainty in the evidence).

# Summary of the Evidence

Systematic reviews and meta-analyses have been performed on the role of surgery in patients with MDR- and XDR-TB (263,271,272). The main limitation of systematic reviews of surgery in MDR- and XDR-TB that summarize results of observational studies combining study-level data is the tremendous variability in patient characteristics, background chemotherapy regimens, and types of surgical procedures (263,271,272). An IPDMA of surgery in MDR-TB was designed to address those shortcomings (271). Procedures and methodology to assemble and rank the certainty in the evidence are reported in APPENDIX A, with evidence profiles for PICO questions reported in APPENDIX B. The IPDMA identified 67 cohort studies, of which 45 could not be used because either individual patient data were not available or the surgical status of patients was not known. Twenty-six studies comprising 6,431 patients with MDR-TB were included (271).
We used data from this IPDMA to generate the evidence profiles. Despite the analytic advantages of an IPDMA, which allows adjustment for baseline imbalances in many prognostic factors, there was substantial residual risk of bias in the results. There is no information about the impact of surgery on adverse effects or quality of life.

# Benefits

Patients who underwent partial lung resection had a higher probability of treatment success, as opposed to treatment failure, relapse, or death (aOR, 3.0; 95% CI, 1.5-5.9). However, the estimate is very uncertain because of the limitations of the individual studies and the lack of precision in the results.

# Harms

In the published IPDMA, a substantially higher proportion of patients who had a pneumonectomy died (8.5%) compared with those who had a partial resection (2.2%), but the authors could not establish whether patients died as a result of surgical complications or of their TB (271). The estimates of the effects of pneumonectomy on risk of death (aOR, 1.8; 95% CI, 0.6-5.1) and treatment success (aOR, 0.8; 95% CI, 0.1-6.0) were not statistically significant. As with partial lung resection, both estimates are very uncertain. Treatment success in patients with XDR-TB was lower among those who underwent surgery than among those who did not (aOR, 0.4; 95% CI, 0.2-0.9), an effect that was heterogeneous across studies and may also be confounded by factors that predisposed these patients to poor outcomes (271). Alternatively, patients who were healthy enough to withstand surgery may have had intrinsically lower mortality, so it is difficult to predict the direction of potential bias.

# Additional Considerations

In the MDR-TB treatment guidelines updated in 2016, WHO recommended elective partial surgery as an intervention complementary to chemotherapy in specifically selected individuals (23). The committee acknowledged significant uncertainty, with a paucity of publications and data regarding patient preferences, acceptability to surgical personnel, cost, and the feasibility of performing elective surgery as an intervention for MDR-TB.
# Conclusions

On the basis of the limited evidence available, there appears to be a net benefit from elective partial lung resection (e.g., lobectomy or wedge resection) when offered together with a recommended MDR-TB regimen, compared with medical therapy alone. Committee members believed that this therapeutic option would probably be most beneficial when clinical judgement, supported by bacteriological and radiographic data, suggests a strong risk of relapse or treatment failure with a medical regimen alone. We found no currently available evidence that pneumonectomy would be beneficial for patients with MDR-TB receiving a background drug regimen.

# Research Needs

Rigorously designed and conducted studies, ideally well-done randomized trials that measure and report benefits as well as adverse effects and quality of life, are needed to clarify the role of surgery in the management of patients with MDR-TB. The following specific issues need to be addressed: the optimal timing of surgery, the optimal drug regimens and their duration before and after surgery, the role of surgery in special populations and in patients with comorbid conditions (e.g., those living with HIV), the optimal surgical approaches, the optimal infection control measures to be implemented perioperatively, and the role of pulmonary rehabilitation (263,267,270,273).

# Treatment of Isoniazid-Resistant TB

Isoniazid is an important first-line agent for the treatment of TB, possessing potent early bactericidal activity against M. tuberculosis. However, monoresistance to isoniazid is frequent worldwide, with an estimated prevalence of 8% (range, 5-11%) of TB cases (12). A recent systematic review and meta-analysis compared treatment outcomes of isoniazid-resistant TB with those of drug-susceptible TB and found that treatment of isoniazid-resistant TB with first-line drugs resulted in suboptimal outcomes, with higher rates of treatment failure (11% vs. 1%) and relapse (10% vs. 5%) (274). In addition, the study found that standardized empirical treatment of new isoniazid-resistant TB cases may be contributing to higher rates of acquired drug resistance (8% vs. 0.3%). Prior ATS, CDC, and IDSA guidelines recommended treatment with a standard four-drug regimen (isoniazid, rifampin, pyrazinamide, and ethambutol) for 6 months, with discontinuation of isoniazid once the results of DST were known and isoniazid resistance was found (275).

PICO Question 20-Treatment of isoniazid-resistant TB:
PICO Question 20a: Should patients with isoniazid-resistant TB be treated with a regimen composed of a fluoroquinolone, rifampin, ethambutol, and pyrazinamide for 6 months compared with rifampin, ethambutol, and pyrazinamide (without a fluoroquinolone) for 6 months?
PICO Question 20b: Should patients with isoniazid-resistant TB be treated with a regimen composed of a fluoroquinolone, rifampin, and ethambutol for 6 months and pyrazinamide for the first 2 months compared with a regimen composed of a fluoroquinolone, rifampin, ethambutol, and pyrazinamide for 6 months?
Recommendation 20a: We suggest adding a later-generation fluoroquinolone to a 6-month regimen of daily rifampin, ethambutol, and pyrazinamide for patients with isoniazid-resistant TB (conditional recommendation, very low certainty in the evidence).
Recommendation 20b: In patients with isoniazid-resistant TB treated with a daily regimen of a later-generation fluoroquinolone, rifampin, ethambutol, and pyrazinamide, we suggest that the duration of pyrazinamide can be shortened to 2 months in selected situations (i.e., noncavitary and lower-burden disease or toxicity from pyrazinamide) (conditional recommendation, very low certainty in the evidence).

# Summary of the Evidence

Procedures and methodology to assemble and rank the certainty in the evidence are reported in APPENDIX A, with evidence profiles for PICO questions reported in APPENDIX B. An IPDMA of 33 datasets with 6,424 patients, of whom 3,923 patients in 23 studies received regimens relevant to isoniazid-resistant TB, was used as the evidence base (276). Regimens of interest for our analyses (all with or without isoniazid) were 1) rifampin, ethambutol, and pyrazinamide; and 2) rifampin, ethambutol, and pyrazinamide plus a fluoroquinolone. For these analyses, isoniazid-resistant TB was defined on the basis of phenotypic resistance to isoniazid and susceptibility to rifampin, with or without additional resistance to pyrazinamide, ethambutol, or streptomycin. Critical concentrations of 0.1 μg/ml or 0.2 μg/ml were used in 21 of 23 centers for the definition of resistance to isoniazid (276).
For PICO Question 20a, we found few studies that used 6 months of rifampin, ethambutol, and pyrazinamide plus a fluoroquinolone, but several that used this regimen for 8 to 9 months; hence, we combined durations of 6 and >6 months for this PICO. For the comparator of rifampin, ethambutol, and pyrazinamide with or without isoniazid, we similarly combined regimens of 6 and >6 months' duration, after comparison between these durations showed that outcomes were not significantly different. For PICO Question 20b, few patients received regimens that included a fluoroquinolone, rifampin, ethambutol, and a shorter duration of pyrazinamide. A total of 118 patients who received 1 to 3 months of pyrazinamide in conjunction with 6 or >6 months of a rifampin-, ethambutol-, and fluoroquinolone-containing regimen were available for analysis.

# Benefits

Compared with 6 months of daily rifampin, ethambutol, and pyrazinamide (with or without isoniazid), adding a fluoroquinolone to this regimen was associated with significantly greater treatment success (aOR, 2.8; 95% CI, 1.1-7.3) but with no significant effect on mortality (aOR, 0.7; 95% CI, 0.4-1.1) or acquired rifampin resistance (aOR, 0.1; 95% CI, 0.0-1.2). When we evaluated the impact of shortening the duration of pyrazinamide (to 1-3 mo) in a regimen containing a fluoroquinolone, treatment success was very high, with 117 of 118 patients achieving treatment success. On the other hand, comparisons of shorter pyrazinamide regimens with regimens including both a fluoroquinolone and pyrazinamide for >6 months did not show significantly different results (aOR, 5.2; 95% CI, 0.6-46.7). Similar results were obtained when the comparisons were restricted to patients receiving later-generation fluoroquinolones, namely moxifloxacin, levofloxacin, and gatifloxacin (data not shown).

# Harms

The outcome of adverse events from TB drugs was intended to be assessed in the IPDMA but could not be analyzed, because these outcomes were either not reported or were reported with very different definitions. The adverse events associated with TB drugs, especially pyrazinamide, are well established (227,277).

# Additional Considerations

We note that the estimates of effect from the IPDMA were imprecise because of the small number of patients who received the regimens of interest. Given that pyrazinamide is the most toxic of the present first-line drugs, a key potential advantage of adding a fluoroquinolone is the ability to shorten the duration of pyrazinamide to the initial 2 months of treatment. Although the IPDMA included only 118 patients receiving fluoroquinolone-containing regimens with shorter durations of pyrazinamide, treatment success in this group was very high (117 of 118). On the basis of the efficacy signals seen and the known toxicities of prolonged pyrazinamide, the committee viewed the balance of benefits and harms as favoring a shortened duration of pyrazinamide, when a later-generation fluoroquinolone is included in the regimen, for patients in whom pyrazinamide toxicity is anticipated or experienced or who have noncavitary, lower-burden disease; a sketch of this decision rule follows.
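Recommendation 20 reduces to a small decision rule, sketched here; the function name and dictionary layout are invented, and the rule intentionally omits the clinical judgment the text calls for.

```python
def isoniazid_resistant_tb_regimen(noncavitary_low_burden: bool,
                                   pza_toxicity: bool) -> dict[str, str]:
    """Daily rifampin, ethambutol, and a later-generation fluoroquinolone for
    6 months; pyrazinamide shortened to 2 months in the selected situations
    named above, otherwise given for the full 6 months."""
    pza_months = 2 if (noncavitary_low_burden or pza_toxicity) else 6
    return {
        "rifampin": "6 months",
        "ethambutol": "6 months",
        "later-generation fluoroquinolone": "6 months",
        "pyrazinamide": f"{pza_months} months",
    }

print(isoniazid_resistant_tb_regimen(noncavitary_low_burden=True, pza_toxicity=False))
```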
Finally, the plasma peak concentration of, and exposure to, moxifloxacin have been shown to decrease by approximately 30% when it is coadministered with rifampin (278,279). The impact of these decreased exposures on outcomes has not been established. However, some experts opt to use levofloxacin when a fluoroquinolone is combined with rifampin.

# Conclusions

We conclude that in patients with isoniazid-resistant TB, the addition of a later-generation fluoroquinolone to 6 months of daily rifampin, ethambutol, and pyrazinamide improves treatment success. In patients in whom toxicity from pyrazinamide is anticipated or experienced, or in patients with active TB with a lower burden of disease (i.e., noncavitary), the committee viewed the balance of benefits and harms as favoring a shortened duration of pyrazinamide when a later-generation fluoroquinolone is included in the regimen, acknowledging that the certainty in the evidence is very low and more research is needed.

# Research Needs

All but 2 of the 23 studies included in the IPDMA were observational. Moreover, the analyzed population included only 37 children, 119 patients with diabetes, and 249 patients with HIV infection. Given the burden of isoniazid-resistant TB worldwide, clinical trials that include these special populations and evaluate new regimens, including the efficacy, safety, and tolerability of shorter versus longer durations of pyrazinamide, are urgently needed.

# Treatment of MDR-TB in Special Situations

# HIV Infection

Among patients with HIV and TB, rifampin resistance has been identified more often than among those without HIV (280,281). Patients with MDR-TB and HIV have up to a fourfold higher risk of mortality than patients with MDR-TB without HIV (282). Low CD4 cell counts (e.g., <50 cells/μl) in patients with HIV and MDR-TB correlate further with higher mortality (283,284). Numerous practice guidelines recommend HIV testing of people with suspected or confirmed TB (11,20,34,285), regardless of drug resistance. Because of their high risk for mortality early in the course of TB disease, people with HIV in whom TB is suspected are recommended to receive rapid testing for TB using nucleic acid amplification tests coupled with molecular diagnostic DST for rifampin (with or without isoniazid) (13,20). For patients with HIV receiving therapy for drug-susceptible TB, studies have found significantly lower mortality among patients receiving concurrent antiretroviral therapy (ART) than among those not receiving ART (286-288). U.S. practice guidelines recommend starting ART within 2 weeks of initiating treatment for drug-susceptible non-CNS TB for patients with CD4 cell counts <50 cells/μl and by 8 to 12 weeks for patients with higher CD4 cell counts (11,289); a sketch of this timing logic follows at the end of this paragraph. Although the optimal timing of ART initiation to reduce mortality has not yet been adequately determined for patients with MDR-TB, lower mortality has been shown in multiple studies among patients with MDR-TB receiving concurrent ART compared with those not receiving ART (283, 290-297), especially among patients with TB with CD4 counts <50 cells/μl (283,292). The decreased mortality seen in patients treated for MDR-TB who concurrently receive ART, especially among those with CD4 cell counts <50 cells/μl, supports an ART management approach similar to that recommended for drug-susceptible TB.
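The CD4-based timing cited above for drug-susceptible, non-CNS TB, together with the CNS caveat discussed in the next paragraph, can be sketched as follows. This encodes the cited guidance, not MDR-TB-specific evidence; the function name is an assumption, and the thresholds are exactly those quoted in the text.

```python
def art_start_window(cd4_cells_per_ul: float, cns_tb: bool) -> str:
    """Sketch of the ART timing cited above for drug-susceptible TB; optimal
    timing in MDR-TB is explicitly unresolved in the text."""
    if cns_tb:
        return "consider delayed ART (about 8 weeks is cited for CNS TB) with expert input"
    if cd4_cells_per_ul < 50:
        return "start ART within 2 weeks of initiating TB treatment"
    return "start ART by 8 to 12 weeks of TB treatment"

print(art_start_window(cd4_cells_per_ul=35, cns_tb=False))
```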
MDR-TB of the CNS in patients with HIV presents additional challenges. TB meningitis with isoniazid-resistant TB, RR-TB, or MDR-TB can be associated with higher mortality compared with drug-susceptible disease (298-300). Starting ART early in patients with TB is associated with a higher incidence of immune reconstitution inflammatory syndrome (286,288,301), and this can be even more problematic in CNS disease. A recent study in Vietnam of patients with advanced AIDS found a higher incidence of potentially life-threatening (grade 4) adverse events among patients with TB meningitis starting ART early compared with those delaying ART until after 2 months of standardized first-line TB treatment and did not show a survival benefit of starting ART early (302). It has been recommended to delay starting ART by 8 weeks in patients with CNS TB and HIV (11); however, there is a paucity of data in patients with MDR-TB of the CNS. The optimal approach for initiation of ART in patients with this medical condition remains uncertain, and close clinical monitoring is warranted. Drug interactions between antiretroviral and anti-TB agents are common in the management of patients with HIV and TB, particularly with rifamycins. Because the rifamycins (with the possible exception of rifabutin) are not used for treatment of MDR-TB, interactions of other TB medication classes with ART should be considered. Bedaquiline and/or delamanid might be considered for use in patients with HIV. Although efavirenz can produce a decrease in serum bedaquiline concentrations and this combination is avoided, other ART drugs, including the protease inhibitors and cobicistat, can result in increased serum bedaquiline levels. The current WHO recommendations for the use of delamanid apply to patients living with HIV (7). Drug-drug interaction studies in healthy volunteers of delamanid with tenofovir, efavirenz, and lopinavir/ritonavir show that no dose adjustments were needed (303). Nonetheless, when delamanid is included in the regimen for MDR-TB in HIV-infected patients, the design of their ART regimens should be developed in consultation with HIV and ART experts. A thorough review of all patients' medications should be performed, in consultation with MDR-TB experts, to select individualized regimens with less potential for overlapping ART/TB drug toxicities (Table 11; of note, increases in serum creatinine via decreased renal tubular creatinine excretion are commonly seen with cobicistat and dolutegravir usage and are not a toxicity, and indirect hyperbilirubinemia is expected with atazanavir and indinavir and is not a toxicity). Useful webpages regarding drug interactions (TB/HIV and other) are available from AIDSinfo, CDC, the University of California San Francisco (http://hivinsite.ucsf.edu/insite?page=ar-00-02), the University of Liverpool, and Indiana University. The management of MDR-TB is more complex among patients with HIV infection. The higher pill burden of combined ART with expanded TB drug therapy, potential drug-drug interactions, the management of immune reconstitution inflammatory syndrome, and other concurrent HIV-associated opportunistic diseases all pose unique challenges in the care of these patients. The management of patients with HIV and MDR-TB can best be performed by a multidisciplinary care team composed of health providers experienced in MDR-TB, HIV, and public health case management (304,305).

# Children

On the basis of recent modeling studies, it is estimated that there are about 1 million incident cases of TB in children annually and 230,000 deaths caused by the disease (306,307). About 35,000 cases of MDR- and XDR-TB occur in children annually (308,309). Our PS-matched IPDMA did not include sufficient numbers of children to allow the formulation of GRADE-based recommendations. Nonetheless, on the basis of a recent IPDMA of 975 children with MDR-TB from 18 countries, recent pharmacokinetic studies in children, and several observational studies showing good outcomes, the recommendations noted on choice of drugs, composition of regimens, and durations of treatment for adults can also be applied to children with MDR-TB (113,119,124,161,165,170,200,310).
There are some special considerations for formulating effective regimens for children of various ages and adolescents. The bacterial burden in young children with TB is much smaller than that in most adults with TB. As a result, most drug resistance in children was present when the organism was inhaled (primary resistance), and further development of resistance while on therapy (secondary resistance) is much less common in children. However, the paucibacillary nature of childhood TB also makes microbiologic confirmation much more difficult. The only way to determine drug susceptibility in cases that meet clinical definitions of TB disease (i.e., microbiologic confirmation is not available) is by linking the child to a specific source case for whom the drug susceptibilities of the organism are known. Linking the child and a specific case is more feasible in low-burden settings to which these guidelines apply but can be difficult in high-burden settings where there may be more than one possible source case. Also, standard definitions of relapse and treatment failure for pediatric TB trials are inconsistent because the low burden of organisms in children makes microbiologic confirmation of these outcomes difficult. There are several technical factors that can affect the outcome of treatment for MDR- and XDR-TB in children. The two age extremes of childhood have been somewhat neglected in studies of MDR- and XDR-TB. Little is known about the pharmacokinetics, safety, and tolerability of the drugs used to treat drug-resistant TB in neonates, infants, and toddlers (115). Children, especially those <2 years of age, are more prone to developing disseminated TB, including meningitis. Drugs that penetrate well into the CSF, such as linezolid, might have an advantage over drugs that appear to have less penetration, such as ethambutol and bedaquiline. Adolescents can develop TB similar to that found either in adults or in young children. However, adolescent patients often have been excluded from TB treatment trials. Most of the oral drugs used to treat MDR- and XDR-TB are not licensed for children. Although the Global Drug Facility has pediatric dispersible tablets available, these formulations are not registered with the FDA or the European Medicines Agency, which has limited the commercial availability of child-friendly dosage forms. As a result, for younger children, the medication often has to be crushed, put into suspension, or given from opened capsules, and the pharmacokinetics and pharmacodynamics of these various preparations are unknown. HIV-infected children may be exposed to lower concentrations of certain orally administered drugs than HIV-uninfected children given the same bodyweight dose. Unfortunately, the pharmacokinetics in HIV-infected children of drugs used to treat MDR- and XDR-TB are largely unstudied. Children generally have a more difficult time tolerating injectable medications because of pain and the fact that many children with TB are malnourished and have diminished muscle mass. With the recent development of new oral drugs, it is hoped that injectable drugs can be avoided in children whenever possible. Fortunately, children generally tolerate the oral TB drugs better than adults, with fewer serious adverse events resulting in fewer breaks in therapy. Also, most children with TB have not developed the common chronic diseases of adulthood and will not suffer complications of them during treatment.
However, drug adverse effects can be difficult to assess in children and likely are underreported. In general, the same schedules used to monitor adverse events and laboratory abnormalities in adult patients treated for MDR- or XDR-TB also should be used for children.

# Outcomes of MDR-TB treatment in children.

A recently published systematic review (33 studies) and IPDMA (28 of the studies) described treatment outcomes for 975 children with MDR-TB using random-effects multivariable logistic regression adjusted for age, sex, HIV infection, malnutrition, severe extrapulmonary disease, and severe pulmonary disease on chest radiograph (113). Overall, 78% had a successful treatment outcome, including 75% of the microbiologically confirmed cases. However, treatment was successful in only 56% of HIV-infected children who did not also receive ART during TB treatment compared with an 82% success rate in those also treated with ART. In children with confirmed MDR-TB, the use of injectable agents and high-dose isoniazid was associated with treatment success. Unfortunately, limitations of this study included that the vast majority of patients came from one site (Cape Town, South Africa), difficulty in estimating the treatment effects of individual drugs within multidrug regimens, the availability of only observational cohort studies, and that treatment decisions were based on the clinician's perception of illness, with resulting potential for bias.

Conclusions. Excellent treatment outcomes have been demonstrated in both trials and extensive clinical experience for children with MDR- and XDR-TB using individualized treatment regimens with the currently available drugs. Microbiologic cure and probable cure rates in children can reach 80% to 90% with early recognition of drug resistance and adequate treatment (311). The greatest difficulties have been recognizing that the child has MDR-TB and the ability of the child to tolerate injectable drugs; fortunately, with expanded knowledge of and experience with the newer oral drugs, such as bedaquiline and delamanid, pediatricians with expertise in MDR-TB believe that the majority of children with MDR-TB likely can be cured with an all-oral drug regimen.

# Pregnant Women

Untreated MDR-TB during pregnancy can be associated with adverse maternal and fetal outcomes. Crucial gaps exist in the literature on treatment of MDR-TB in pregnant women, including the effectiveness, safety, and tolerability of available treatment regimens, as well as the timing and duration of second-line drugs. In support of these guidelines, we conducted a systematic literature review with a focus on pregnant women with MDR-TB and included all original research reporting MDR-TB treatment outcomes during pregnancy, including case reports and case series, in all languages. We excluded animal studies, review articles, letters to the editor, and articles without documentation of MDR-TB treatment during pregnancy. Full-text review and data extraction were conducted by three reviewers. Our initial search yielded 280 publications, of which 16 met inclusion criteria for full-text review (312-327); however, 3 publications were eventually excluded because of lack of data on medications or outcomes (315,325,327). The remaining 13 articles were observational case reviews without any comparison groups.
Treatment regimens reported were individualized according to drug susceptibility and tolerability of drugs.

Summary of the evidence. Of the 65 pregnant women for whom MDR-TB treatment outcome data were available, 49% (n = 32) were cured and 20% (n = 13) completed treatment, for a treatment success proportion of 69%. Fourteen percent (n = 9) of the women died. Treatment failure was reported in 9% (n = 6), and 3% (n = 2) were lost to follow-up. Across these studies, four women were still receiving treatment at the time of publication. Fetal outcomes included 78.5% (n = 51) healthy births, with eight children born premature or with low birth weight. Medical abortions were obtained by 12% of the patients (n = 8), and 3% (n = 2) had spontaneous abortions. Stillbirth was reported for one child (1.5%), 3% (n = 2) of children were born with HIV, and 1.5% (n = 1) had TB/HIV coinfection.

Conclusions. On the basis of the limited data available, we conclude that there is evidence to support treatment of MDR-TB during pregnancy, including the prescription of second-line drugs. Most of the second-line drugs are pregnancy category C per the U.S. Food and Drug Administration, with the exception of bedaquiline and meropenem, which are classified as category B (according to the previous FDA letter-based classification system, currently undergoing revision), and the aminoglycosides, which are category D (71,320,329). Despite the low cure rates reported in the literature, we believe that the benefits of treatment to mother, child, and the community outweigh the harms. There is no evidence to support one particular regimen; however, most MDR-TB experts avoid aminoglycosides and ethionamide in pregnant women if alternative agents can be used for an effective treatment regimen.

Research needs. Global registries aimed at collecting data on the efficacy, safety, and tolerability of MDR-TB regimens in pregnant women, as well as on associated maternal and fetal outcomes, are urgently needed. Furthermore, given the significant morbidity and mortality associated with MDR-TB, a reconsideration of the current approach of assumed exclusion of pregnant women from MDR-TB clinical trials is warranted. A recent consensus statement from an international expert panel advocated for allowing pregnant and lactating women to remain eligible for phase III MDR-TB trials, unless there is a compelling reason for exclusion (330).

# Treatment of Contacts Exposed to MDR-TB

In 1992 and 2000, ATS and CDC advised either 1) no treatment for MDR LTBI for persons not at high risk for progression to TB disease, but provision of clinical follow-up for TB disease signs and symptoms, or 2) 6 to 12 months of treatment with two or more medications to which the isolate of the source case is susceptible (331,332).

PICO Question 21 (Treatment of Contacts Exposed to MDR-TB): Should contacts exposed to an infectious patient with MDR-TB be offered LTBI treatment versus followed with observation alone?

Recommendation 21: For contacts with presumed MDR LTBI due to exposure to an infectious patient with MDR-TB, we suggest offering treatment for LTBI (conditional recommendation, very low certainty in the evidence). We suggest 6 to 12 months of treatment with a later-generation fluoroquinolone alone or with a second drug, on the basis of drug susceptibility of the source-case M. tuberculosis isolate. On the basis of evidence of increased toxicity, adverse events, and discontinuations, pyrazinamide should not be routinely used as the second drug.

# Summary of the Evidence

Procedures and methodology to assemble and rank the certainty in the evidence are reported in APPENDIX A, with evidence profiles for PICO questions reported in APPENDIX B. A systematic review of 21 published observational studies that examined outcomes of TB incidence, treatment completion, adverse effects, and cost-effectiveness was used as the evidence (333). Six articles compared TB incidence among contacts receiving MDR LTBI treatment versus untreated contacts; 10 presented TB incidence only for contacts who received MDR LTBI treatment, and 5 presented TB incidence only for untreated contacts.
# Benefits

Using data from five non-registry-matched comparison studies included in the systematic review of 21 published observational studies, MDR-TB occurred in 2 of 190 (1.1%) patients treated for MDR LTBI, compared with 18 of 126 (14.3%) of those who received no MDR LTBI treatment (290). The estimated MDR-TB incidence reduction was 90% (9-99%), using a negative binomial model controlling for person-time and overdispersion (333); a crude version of this arithmetic is worked through in the sketch after the Conclusions below.

# Harms

From 11 studies with data by regimen on treatment discontinuation due to adverse effects, there was high (51%) treatment discontinuation among patients taking pyrazinamide-containing regimens (333). About one-third of patients taking fluoroquinolone-containing regimens, without pyrazinamide, had adverse effects, but only 2% discontinued treatment. For children <15 years of age, regardless of regimen, treatment discontinuation due to adverse effects was substantially lower (5% vs. 33%) (333).

# Additional Considerations

Modeled cost-effectiveness studies have shown the greatest benefit with fluoroquinolone-based MDR LTBI treatment. In one study, the most cost-effective regimen was fluoroquinolone/ethambutol, followed by fluoroquinolone alone, then by pyrazinamide/ethambutol (333). A pyrazinamide/fluoroquinolone regimen was particularly toxic, as measured by treatment discontinuation, and prevented about half as many TB cases as the most cost-effective option (333). An expert panel that recently generated a policy brief on this topic acknowledged that further evidence is urgently needed but still endorsed the immediate implementation of postexposure management of household contacts of patients with MDR-TB, describing the approach as effective, feasible, and cost-efficient (334). The Curry International TB Center Drug Resistant TB: Clinician's Survival Guide suggests an LTBI regimen of levofloxacin or moxifloxacin alone or combined with a second medication to which the isolate of the infectious source patient is susceptible (16). Among those with presumed MDR LTBI, MDR LTBI treatment is probably effective in preventing progression to TB disease. In systematic reviews of studies reporting incidence for both treated and untreated contacts, TB incidence was significantly lower with MDR LTBI treatment (333,335-339). In studies that published outcomes by regimen, there were high treatment discontinuation rates due to adverse effects and toxicity in persons taking pyrazinamide-containing MDR LTBI regimens (336,337,339-342). There were low treatment discontinuation rates due to adverse effects and toxicity in persons taking fluoroquinolone-containing MDR LTBI regimens. Finally, children are at high risk for developing MDR-TB if infected with MDR LTBI and generally experience fewer adverse effects than adults. Given the uncertainties above, many TB experts and programs provide extended follow-up after treatment completion for patients treated for presumed MDR LTBI.

# Conclusions

For contacts with presumed MDR LTBI due to exposure to an infectious patient with MDR-TB, we suggest offering treatment for LTBI versus following with observation alone. For treatment of MDR LTBI, we suggest 6 to 12 months' treatment with a fluoroquinolone alone or with a second drug, on the basis of source-case isolate DST. On the basis of evidence of increased toxicity, adverse events, and discontinuations, pyrazinamide should not be routinely used as the second drug.
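As forward-referenced in the Benefits discussion above, the arithmetic below reproduces the crude (unadjusted) incidence contrast from the counts given in the text. It deliberately ignores person-time and overdispersion, which is why the published model-based estimate of 90% (9-99%) differs slightly from this naive figure.

```python
# Crude check of the incidence figures reported in the Benefits discussion.
treated_cases, treated_n = 2, 190        # MDR-TB among contacts treated for MDR LTBI
untreated_cases, untreated_n = 18, 126   # MDR-TB among untreated contacts

risk_treated = treated_cases / treated_n          # ~1.1%
risk_untreated = untreated_cases / untreated_n    # ~14.3%
crude_reduction = 1 - risk_treated / risk_untreated

print(f"treated {risk_treated:.1%}, untreated {risk_untreated:.1%}, "
      f"crude reduction {crude_reduction:.0%}")   # ~93% before model adjustment
```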
In lieu of fluoroquinolone-based treatment, there are few data for the use of other second-line medications, and, because of toxicity, they are not recommended by experts. For contacts to fluoroquinolone-resistant, pre-XDR-TB, pyrazinamide/ethambutol may be an effective option if source-case isolate DST shows susceptibility to these drugs. In children, TB drugs are generally better tolerated, and levofloxacin is preferred because of the availability of an oral suspension formulation.

# Research Needs

Results are awaited from ongoing randomized trials of MDR LTBI treatment in contacts, including a comparison of 6 months of daily delamanid versus 6 months of isoniazid. In addition, more information is needed on the cost-effectiveness of treating contacts with presumed DR-TB infection using these newer MDR LTBI regimens.

# Summary of Key Differences between ATS/CDC/ERS/IDSA and WHO 2019 Consolidated Guidelines on Drug-Resistant Tuberculosis Treatment

The 2019 WHO Consolidated Guidelines on Drug-Resistant Tuberculosis Treatment were based on a modified set of data that expanded on the initial individual patient dataset used for these ATS/CDC/ERS/IDSA guidelines (3). For the 2019 WHO update, data from 625 patients were included by WHO from eight other datasets received in 2018 (Australia, Belarus, Brazil, France, Latvia, Republic of Korea, Russian Federation, and the EndTB project), as well as a sample of 3,626 patients receiving treatment in South Africa, including 1,210 cases started on bedaquiline in 2015. With these newer data added, 3,367 records were removed from the WHO individual patient dataset because of incomplete DST documentation (7). Despite the changes in the datasets described, WHO and ATS/CDC/ERS/IDSA recommendations are largely concordant, as they were derived concurrently, using a similar approach (GRADE methodology and a multidisciplinary Guideline Development Group), and were informed by individual patient data meta-analyses that overlapped substantially. The main differences are highlighted below. This official clinical practice guideline was prepared by an ad hoc subcommittee of the ATS, CDC, ERS, and IDSA.
Background: The American Thoracic Society (ATS), U.S. Centers for Disease Control and Prevention (CDC), European Respiratory Society (ERS), and Infectious Diseases Society of America (IDSA) jointly sponsored this new practice guideline on the treatment of drug-resistant tuberculosis (DR-TB). The document includes recommendations on the treatment of multidrug-resistant TB (MDR-TB) as well as isoniazid-resistant but rifampin-susceptible TB.

Methods: Published systematic reviews, meta-analyses, and a new individual patient data meta-analysis from 12,030 patients, in 50 studies, across 25 countries with confirmed pulmonary rifampin-resistant TB were used for this guideline. Meta-analytic approaches included propensity score matching to reduce confounding. Each recommendation was discussed by an expert committee, screened for conflicts of interest, according to the Grading of Recommendations, Assessment, Development, and Evaluation (GRADE) methodology.

Results: Twenty-one Population, Intervention, Comparator, and Outcomes questions were addressed, generating 25 GRADE-based recommendations. Certainty in the evidence was judged to be very low, because the data came from observational studies with significant loss to follow-up and imbalance in background regimens between comparator groups. Good practices in the management of MDR-TB are described. On the basis of the evidence review, a clinical strategy tool for building a treatment regimen for MDR-TB is also provided.

Conclusions: New recommendations are made for the choice and number of drugs in a regimen, the duration of intensive and continuation phases, and the role of injectable drugs for MDR-TB. On the basis of these recommendations, an effective all-oral regimen for MDR-TB can be assembled. Recommendations are also provided on the role of surgery in treatment of MDR-TB and for treatment of contacts exposed to MDR-TB and treatment of isoniazid-resistant TB.

# Overview

Treatment of tuberculosis (TB), regardless of the results of drug susceptibility testing (DST), is focused on both curing the individual patient and minimizing the transmission of Mycobacterium tuberculosis to other persons. Thus, effective treatment of TB has benefits for both the individual patient and the community in which the patient resides. However, notable complexities need to be addressed to successfully treat disease resulting from drug-resistant M. tuberculosis isolates compared with treatment of drug-susceptible TB disease, including additional molecular and phenotypic diagnostic tests to determine drug susceptibility; the use of second-line drugs, which have toxicities that increase harms that must be balanced with their benefits; and prolonged treatment durations. The new recommendations provided in this guideline are for the treatment of drug-resistant TB (DR-TB), including multidrug-resistant TB (MDR-TB) and isoniazid-resistant TB, and are intended to help providers identify the therapeutic options associated with improved outcomes (i.e., greater treatment success, fewer adverse events, and fewer deaths) in the context of individual patient values and preferences. Worthy of emphasis, the committee recommends that only drugs to which the patient's M. tuberculosis isolate has documented, or high likelihood of, susceptibility be included in an effective treatment regimen, noted as an ungraded good practice statement, and consistent with ongoing stewardship efforts for the optimal use of antibiotics (1). Drugs known to be ineffective on the basis of in vitro growth-based or molecular DST should not be used.
The following alphabetically listed drugs and drug classes were considered for inclusion in treatment regimens: amoxicillin/clavulanate, bedaquiline, carbapenem with clavulanic acid, clofazimine, cycloserine, delamanid, ethambutol, ethionamide, fluoroquinolones, injectable agents, linezolid, macrolides, p-aminosalicylic acid, and pyrazinamide. Of note, pretomanid in combination with bedaquiline and linezolid was recently approved by the U.S. Food and Drug Administration (FDA) for the treatment of a specific limited population of adults with pulmonary extensively drug-resistant TB (XDR-TB) or treatment-intolerant or nonresponsive MDR-TB; however, the preparation and completion of these guidelines predated this approval (2). For each drug or drug class, the following Population, Intervention, Comparator, and Outcomes (PICO) question was addressed: In patients with MDR-TB, are outcomes safely improved when regimens include the following individual drugs or drug classes compared with regimens that do not include them? The recommendations in this practice guideline were supported by scientific evidence, including the results of a propensity score (PS)-matched individual patient data meta-analysis (IPDMA) conducted using a database of more than 12,000 patient records from 25 countries in support of these guidelines (see APPENDIX A: METHODOLOGY in the online supplement) (3). We used the Grading of Recommendations, Assessment, Development, and Evaluation (GRADE) approach to appraise the quality of evidence and to formulate, write, and grade most recommendations (4,5). Although published data on costs are noted in these guidelines, the committee did not conduct formal cost-effectiveness analyses of the treatments reviewed to determine a recommendation. The treatment of DR-TB can be complicated and thus is necessarily preceded and accompanied by important components of care relating to access to TB experts, microbiological and molecular diagnosis, education, monitoring and follow-up, and global patient-centered strategies. The writing committee considered that these topics are crucial but do not require formal and extensive evidence appraisal in the context of the present guidelines. Following GRADE guidance, it was thus decided that these practices would be addressed in good practice statements. Below, six ungraded good practice statements as well as 25 GRADE-based recommendations addressing 21 PICO questions (Table 1) are listed. Questions were selected according to their importance to clinical practice, as determined by the guideline panel, expert advisors, and patient advocates. The implications of the strength of recommendation, conditional or strong, for patients, clinicians, and policy makers are described in APPENDIX A and shown in Table 2. Detailed and referenced information on treatment of MDR-TB is available below, including a summary of the evidence, and the benefits, harms, and additional considerations of each practice or recommendation.

# Summary of Good Practices

For patients being evaluated and treated for any form of drug-resistant TB, the following six ungraded good practice statements are emphasized, as the writing committee had high confidence in their net benefit:

1. Consultation should be requested with a TB expert when there is suspicion of or confirmation of DR-TB. In the United States, TB experts can be found through CDC-supported TB Centers of Excellence for Training, Education, and Medical Consultation (http://www.cdc.gov/tb/education/rtmc/default.htm), through local health department TB control programs (https://www.cdc.gov/tb/links/tboffices.htm), and through international MDR-TB expert groups such as the Global TB Network (6).
2. Molecular DSTs should be obtained for rapid detection of mutations associated with resistance. When rifampin resistance is detected, additional DST should be performed immediately for first-line drugs, fluoroquinolones, and aminoglycosides. Resistance to fluoroquinolones should be excluded whenever isoniazid resistance is found.

3. Regimens should include only drugs to which the patient's M. tuberculosis isolate has documented or high likelihood of susceptibility (hereafter defined as effective). Drugs known to be ineffective based on in vitro growth-based or molecular resistance should NOT be used. This recommendation applies to all drugs and treatment regimens discussed in this practice guideline, unless reliable methods of testing susceptibility for a drug have yet to be developed.
4. Treatment response should be monitored clinically, radiographically, and bacteriologically, with cultures obtained at least monthly for pulmonary TB. When cultures remain positive after 3 months of treatment, susceptibility tests for drugs should be repeated. Weight and other measures of clinical response should be recorded monthly.

5. Patients should be educated and asked about adverse effects at each visit. Adverse effects should be investigated and ameliorated.

6. Patient-centered case management helps patients understand their diagnoses, understand and participate in their treatment, and discuss potential barriers to treatment. Patient-centered strategies and interventions should be used to minimize barriers to treatment.

Table 1. Questions Regarding the Treatment of Drug-Resistant Tuberculosis Selected by the Guideline Writing Committee

Number of effective drugs in a regimen for MDR-TB
1. Should patients with MDR-TB be prescribed five effective drugs vs. more or fewer agents during the intensive and continuation phases of treatment?

Duration of intensive and continuation phases of treatment for MDR-TB
2. Should patients with MDR-TB undergoing intensive-phase treatment be treated for >6 mo after culture conversion or <6 mo after culture conversion?
3. Should patients with MDR-TB undergoing continuation-phase treatment be treated for >18 mo after culture conversion or <18 mo after culture conversion?

Drugs and drug classes for the treatment of MDR-TB
4. In patients with MDR-TB, are outcomes safely improved when regimens include amoxicillin/clavulanate compared with regimens that do not include amoxicillin/clavulanate?
5. In patients with MDR-TB, are outcomes safely improved when regimens include bedaquiline compared with regimens that do not include bedaquiline?
6. In patients with MDR-TB, are outcomes safely improved when regimens include carbapenems with clavulanic acid compared with regimens that do not include them?
7. In patients with MDR-TB, are outcomes safely improved when regimens include clofazimine compared with regimens that do not include clofazimine?
8. In patients with MDR-TB, are outcomes safely improved when regimens include cycloserine compared with regimens that do not include cycloserine?
9. In patients with MDR-TB, are outcomes safely improved when regimens include delamanid compared with regimens that do not include delamanid?
10. In patients with MDR-TB, are outcomes safely improved when regimens include ethambutol compared with regimens that do not include ethambutol?
11. In patients with MDR-TB, are outcomes safely improved when regimens include ethionamide/prothionamide compared with regimens that do not include ethionamide/prothionamide?
12. In patients with MDR-TB, are outcomes safely improved when regimens include fluoroquinolones compared with regimens that do not include fluoroquinolones?
13. In patients with MDR-TB, are outcomes safely improved when regimens include an injectable compared with regimens that do not include an injectable?
14. In patients with MDR-TB, are outcomes safely improved when regimens include linezolid compared with regimens that do not include linezolid?
15. In patients with MDR-TB, are outcomes safely improved when regimens include macrolides compared with regimens that do not include macrolides?
16. In patients with MDR-TB, are outcomes safely improved when regimens include p-aminosalicylic acid compared with regimens that do not include p-aminosalicylic acid?
17. In patients with MDR-TB, are outcomes safely improved when regimens include pyrazinamide compared with regimens that do not include pyrazinamide?

Use of a standardized, shorter-course regimen of <12 mo for the treatment of MDR-TB
18. In patients with MDR-TB, does treatment with a standardized MDR-TB regimen for <12 mo lead to better outcomes than treatment with an MDR-TB regimen for 18-24 mo?

Treatment of isoniazid-resistant, rifampin-susceptible TB
19a. Should patients with isoniazid-resistant TB be treated with a regimen composed of a fluoroquinolone, rifampin, ethambutol, and pyrazinamide for 6 mo compared with rifampin, ethambutol, and pyrazinamide (without a fluoroquinolone) for 6 mo?
19b. Should patients with isoniazid-resistant TB be treated with a regimen composed of a fluoroquinolone, rifampin, and ethambutol for 6 mo and pyrazinamide for the first 2 mo compared with a regimen composed of a fluoroquinolone, rifampin, ethambutol, and pyrazinamide for 6 mo?

Surgery as adjunctive therapy for MDR-TB
20. Among patients with MDR/XDR-TB receiving antimicrobial therapy, does lung resection surgery (i.e., lobectomy or pneumonectomy) lead to better outcomes than no surgery?

Management of contacts exposed to an infectious patient with MDR-TB
21. Should contacts exposed to an infectious patient with MDR-TB be offered LTBI treatment vs. followed with observation alone?

# Summary of Recommendations

For the selection of an effective MDR-TB treatment regimen and duration of MDR-TB treatment:

1. We suggest using at least five drugs in the intensive phase of treatment and four drugs in the continuation phase of treatment (conditional recommendation, very low certainty in the evidence).
2. We suggest an intensive-phase duration of treatment of between 5 and 7 months after culture conversion (conditional recommendation, very low certainty in the evidence).
3. We suggest a total treatment duration of between 15 and 21 months after culture conversion (conditional recommendation, very low certainty in the evidence).
4. In patients with pre-XDR-TB and XDR-TB, which are both subsets of MDR-TB, we suggest a total treatment duration of between 15 and 24 months after culture conversion (conditional recommendation, very low certainty in the evidence).

For the selection of oral drugs for MDR-TB treatment (in order of strength of recommendation):
5. We recommend including a later-generation fluoroquinolone (levofloxacin or moxifloxacin) (strong recommendation, low certainty of evidence).
6. We recommend including bedaquiline (strong recommendation, very low certainty in the evidence).
7. We suggest including linezolid (conditional recommendation, very low certainty in the evidence).
8. We suggest including clofazimine (conditional recommendation, very low certainty of evidence).
9. We suggest including cycloserine (conditional recommendation, very low certainty in the evidence).
10. We suggest including ethambutol only when other more effective drugs cannot be assembled to achieve a total of five drugs in the regimen (conditional recommendation, very low certainty in the evidence).
11. We suggest including pyrazinamide in a regimen for treatment of patients with MDR-TB or with isoniazid-resistant TB, when the M. tuberculosis isolate has not been found resistant to pyrazinamide (conditional recommendation, very low certainty in the evidence).
12. The guideline panel was unable to make a clinical recommendation for or against delamanid because of the absence of data in the PS-matched IPDMA conducted for this practice guideline. We make a research recommendation for the conduct of randomized clinical trials and cohort studies evaluating the efficacy, safety, and tolerability of delamanid in combination with other oral agents. Until additional data are available, the guideline panel concurs with the conditional recommendation of the 2019 WHO Consolidated Guidelines on Drug-Resistant Tuberculosis Treatment that delamanid may be included in the treatment of patients with MDR/rifampin-resistant (RR)-TB aged ≥3 years on longer regimens (7).

Table 2. Implications of Strong and Conditional Recommendations

For patients: With a strong recommendation, the overwhelming majority of individuals in this situation would want the recommended course of action, and only a small minority would not. With a conditional recommendation, the majority of individuals in this situation would want the suggested course of action, but a sizeable minority would not.

For clinicians: With a strong recommendation, the overwhelming majority of individuals should receive the recommended course of action; adherence to this recommendation according to the guideline could be used as a quality criterion or performance indicator; and formal decision aids are not likely to be needed to help individuals make decisions consistent with their values and preferences. With a conditional recommendation, different choices will be appropriate for different patients, and you must help each patient arrive at a management decision consistent with her or his values and preferences; decision aids may be useful to help individuals make decisions consistent with their values and preferences; and clinicians should expect to spend more time with patients when working toward a decision.

For policy makers: With a strong recommendation, the recommendation can be adapted as policy in most situations, including for use as performance indicators.

For selected oral drugs previously included in regimens for the treatment of MDR-TB:

13. We recommend NOT including amoxicillin-clavulanate, with the exception of when the patient is receiving a carbapenem, wherein the inclusion of clavulanate is necessary (strong recommendation, very low certainty in the evidence).
14. We recommend NOT including the macrolides azithromycin and clarithromycin (strong recommendation, very low certainty in the evidence).
15. We suggest NOT including ethionamide/prothionamide if more effective drugs are available to construct a regimen with at least five effective drugs (conditional recommendation, very low certainty in the evidence).
16. We suggest NOT including p-aminosalicylic acid in a regimen if more effective drugs are available to construct a regimen with at least five effective drugs (conditional recommendation, very low certainty in the evidence).

For the selection of drugs administered through injection when needed to compose an effective treatment regimen for MDR-TB:

17. We suggest including amikacin or streptomycin when susceptibility to these drugs is confirmed (conditional recommendation, very low certainty of evidence).
18. We suggest including a carbapenem (always to be used with amoxicillin-clavulanic acid) (conditional recommendation, very low certainty of evidence).
19. We suggest NOT including kanamycin or capreomycin (conditional recommendation, very low certainty in the evidence).

A summary of the recommendations on drugs, the certainty in the evidence, and the relative risks of success and death is provided in Figure 1. Additional details and other outcomes of interest are provided in the section on Drugs and Drug Classes.

For the use of the WHO-recommended standardized shorter-course 9- to 12-month regimen for MDR-TB:

20. The shorter-course regimen is standardized with the use of kanamycin (which the committee recommends against using) and includes drugs for which there is documented or high likelihood of resistance (e.g., isoniazid, ethionamide, pyrazinamide). Although the STREAM (Standard Treatment Regimen of Anti-Tuberculosis Drugs for Patients with MDR-TB) Stage 1 randomized trial found the shorter-course regimen to be noninferior to longer injectable-containing regimens with respect to the primary efficacy outcome (8), the guideline committee cannot make a recommendation either for or against this standardized shorter-course regimen, compared with longer individualized all-oral regimens that can be composed in accordance with the recommendations in this practice guideline. We make a research recommendation for the conduct of randomized clinical trials evaluating the efficacy, safety, and tolerability of modified shorter-course regimens that include newer oral agents, exclude injectables, and include drugs for which susceptibility is documented or highly likely.

For the role of surgery in the treatment of MDR-TB:

21. We suggest elective partial lung resection (e.g., lobectomy or wedge resection), rather than medical therapy alone, for adults with MDR-TB receiving antimicrobial-based therapy (conditional recommendation, very low certainty in the evidence). The writing committee believes this option would be beneficial for patients for whom clinical judgment, supported by bacteriological and radiographic data, suggests a strong risk of treatment failure or relapse with medical therapy alone.
22. We suggest medical therapy alone, rather than including elective total lung resection (pneumonectomy), for adults with MDR-TB receiving antimicrobial therapy (conditional recommendation, very low certainty in the evidence).

For the treatment of isoniazid-resistant TB:

23. We suggest adding a later-generation fluoroquinolone to a 6-month regimen of daily rifampin, ethambutol, and pyrazinamide for patients with isoniazid-resistant TB (conditional recommendation, very low certainty in the evidence).
24. In patients with isoniazid-resistant TB treated with a daily regimen of a later-generation fluoroquinolone, rifampin, ethambutol, and pyrazinamide, we suggest that the duration of pyrazinamide can be shortened to 2 months in selected situations (i.e., noncavitary and lower-burden disease or toxicity from pyrazinamide) (conditional recommendation, very low certainty in the evidence).

For the management of contacts to patients with MDR-TB:

25. We suggest offering treatment for latent TB infection (LTBI) for contacts to patients with MDR-TB versus following with observation alone (conditional recommendation, very low certainty in the evidence). We suggest 6 to 12 months of treatment with a later-generation fluoroquinolone alone or with a second drug, on the basis of drug susceptibility of the source-case M. tuberculosis isolate. On the basis of evidence of increased toxicity, adverse events, and discontinuations, pyrazinamide should not be routinely used as the second drug.

In this guideline, we provide new recommendations for treatment of MDR-TB and for treatment of isoniazid-resistant TB. On the basis of the evidence review conducted for this guideline, a clinical strategy tool for building a treatment regimen for MDR-TB is provided.

# Introduction

The American Thoracic Society (ATS), U.S. Centers for Disease Control and Prevention, European Respiratory Society (ERS), and Infectious Diseases Society of America (IDSA) have jointly developed these Drug-Resistant Tuberculosis guidelines. These new recommendations are based on the certainty in the evidence (also known as the quality of evidence), appraised using GRADE methodology, which incorporates patient values and costs as well as judgments about trade-offs between benefits and harms (4,5). A carefully selected panel of experts, screened for conflicts of interest, including specialists in pulmonary medicine, infectious diseases, pediatrics, primary care, public health, epidemiology, economics, pharmacokinetics, microbiology, systematic review methodology, and patient advocacy, was assembled to assess the evidence supporting each recommendation. In contrast to prior analytic approaches, wherein systematic reviews of aggregate data were used for decision-making, a new PS-matched IPDMA of 12,030 patients, in 50 studies, from 25 countries with confirmed pulmonary RR-TB was conducted for this guideline (3). Given the paucity of high-quality randomized controlled trials conducted in DR-TB, individual data from observational studies represent the next best level of evidence for analyses. Nonetheless, the writing committee noted that observational data are prone to bias and confounding. The 21 PICO questions and associated recommendations are summarized below, all appraised using GRADE methodology (see APPENDIX B). Questions were selected according to their importance to clinical practice, as determined by the guideline panel, expert advisors, and patient advocates. The implications of the strength of recommendation, conditional or strong, for patients, clinicians, and policy makers are described in APPENDIX A. On the basis of the GRADE methodology framework, all recommendations in these guidelines are based on very low certainty in the evidence. The writing committee selected death, treatment success, and serious adverse effects as the endpoints of critical importance on which to generate recommendations.
Our meta-analytic approaches included PS matching to reduce confounding (on the basis of individual-level covariates of age, sex, HIV coinfection, acid-fast bacilli [AFB] smear results, cavitation on chest radiographs, history of TB treatment with first-line or second-line TB drugs, and number of possibly effective drugs in the regimen, among other variables), described in detail in the PS-matched IPDMA publication and in APPENDIX A (3). However, the risk of bias remained serious, because the average loss to follow-up across included studies was 10% to 20%. In addition, despite these efforts, there was a large residual imbalance in background regimens used in experimental and control groups. These guidelines are intended for settings in which treatment is individualized and where mycobacterial cultures, molecular (genotypic) and culture-based (phenotypic) DSTs, and radiographic facilities are available (9,10). Of note, published data on costs are referenced in these guidelines, but the committee did not conduct formal cost-effectiveness analyses of the treatments and interventions reviewed in relation to determining a recommendation. In these guidelines, MDR-TB is defined specifically as resistance to at least isoniazid and rifampin, the two most important first-line drugs. XDR-TB is a subset of MDR-TB with additional resistance to a fluoroquinolone and a second-line injectable agent. Because XDR-TB evolves from MDR-TB in two steps, the term "pre-XDR-TB" was introduced to identify MDR-TB with additional resistance to either one but not both of these classes of drugs. In these guidelines, we also provide recommendations for the treatment of isoniazid-resistant TB.

# Good Practices for Treating DR-TB

The fundamentals of TB care, regardless of the treatment selected, rest on ensuring timely diagnosis and initiation of appropriate therapy, with ongoing support and management to achieve successful treatment completion and cure. The responsibility for successful treatment of TB is placed primarily on the provider or program initiating therapy rather than on the patient (11). Nevertheless, a patient-centered approach, described more fully in the section on case management below, requires the involvement of the patient in decision-making. We recommend seeking consultation with an expert in TB when there is suspicion for or confirmation of DR-TB (ungraded good practice statement). In the United States, DR-TB experts can be found through CDC-supported TB Centers of Excellence for Training, Education, and Medical Consultation (http://www.cdc.gov/tb/education/rtmc/default.htm), through local health department TB Control Programs (https://www.cdc.gov/tb/links/tboffices.htm), and through international MDR-TB expert groups such as the British Thoracic Society MDR-TB Clinical Advisory Service (http://mdrtb.brit-thoracic.org.uk/) and the Global TB Network (6). Additional good practices in the treatment of patients in need of evaluation for DR-TB include the following:

# Diagnosing TB and Identification of Drug Resistance

The potential for drug resistance is considered in every patient. An aggressive effort is made to collect biological specimens for detection of M. tuberculosis and for drug resistance testing. A rapid test for at least rifampin resistance should ideally be done for every patient, but especially for those at risk of drug resistance.
The concern for possible resistance is heightened for patients from areas of the world with at least a moderate incidence of TB in general (>20/100,000) and a high primary MDR-TB prevalence (>2%) (12). Individuals who have or recently had close contact with a patient with infectious DR-TB, especially when the contact is a young child or has HIV infection, are at risk of developing DR-TB. Molecular methods, and more recently whole-genome sequencing (WGS), are increasingly available and can provide information on resistance to all first-line and many second-line drugs. Many public health laboratories provide molecular tests, and WGS is available in selected laboratories. These tests can be used to guide initial therapeutic decisions and contribute to population-level control of DR-TB. Providers should be familiar with phenotypic and genotypic laboratory services available in their locale. In the United States, CDC's Division of Tuberculosis Elimination Laboratory Branch provides testing services for both clinical specimens and isolates of M. tuberculosis (https://www.cdc.gov/tb/topic/laboratory/default.htm). CDC's Molecular Detection of Drug Resistance (MDDR) service serves to rapidly identify DR-TB. This service uses DNA sequencing for detection of mutations most frequently associated with resistance to both first-line (e.g., rifampin, isoniazid, ethambutol, and pyrazinamide) and second-line drugs (the MDDR Service Request Form is available here: https://www.cdc.gov/tb/topic/laboratory/MDDRsubmissionform.pdf). Recently published ATS/CDC/IDSA Official Practice Guidelines for the diagnosis of TB provide additional details on the optimal use of diagnostic tools and algorithms (13).

# Treatment and Monitoring of DR-TB

Regimens should only include drugs to which the patient's isolate has documented or high likelihood of susceptibility. Drugs known to be ineffective, on the basis of in vitro resistance or clinical and epidemiological information (i.e., resistance in the index case or high population prevalence of resistance), should not be used, even when resistance is present in only a small percentage of the mycobacteria in the population. If at least 1% of organisms in a solid-media culture exhibit resistance to a drug (the current standard laboratory definition of drug resistance) (14), using that drug in a regimen will increase the risk for poor treatment outcomes, and the isolate will eventually exhibit 100% resistance to the drug. Drugs should be selected based on their efficacy and the likelihood that patients will be able to tolerate them without significant toxicity. Treatment response is monitored clinically (decrease in cough and systemic symptoms and increase in weight), radiographically, and bacteriologically (15-18). If sputum cultures remain positive after 3 months of treatment, or if there is bacteriological reversion from negative to positive at any time, DST should be repeated (11). Patients should be asked about the clinical response at each visit and weight recorded monthly. Monthly cultures help to identify early evidence of failure (19). Most persons have difficulty taking one or more of the drugs used to treat MDR-TB. Patients should be educated about adverse effects, and all adverse effects should be investigated and ameliorated. Some adverse effects are difficult to tolerate but do not put patients at risk for serious short- or long-term damage to organ systems and can be managed with symptom-specific ancillary medication and supportive care.
Nausea with vomiting is common and is not always an indication to discontinue therapy permanently. New-onset vomiting may indicate liver toxicity or, in children especially, increased intracranial pressure. If drug-induced liver toxicity (and increased intracranial pressure) is excluded, vomiting can be managed by changing the dosing schedule, giving medications with a small snack (noting that this may affect plasma concentrations of the drug), or premedicating adult patients with an antiemetic (noting that some prolong the QT interval) before the dose. Patients may note fatigue or describe myalgia or arthralgia, but these symptoms are not typically treatment limiting. Although low-grade adverse effects can resolve gradually over time, recognizing the negative impact on patients' quality of life, all adverse effects must be addressed diligently. The Curry International TB Center's guides, Drug Resistant TB: Clinician's Survival Guide and the Nursing Guide for Managing Side Effects to Drug-Resistant TB Treatment, in addition to the World Health Organization (WHO) companion handbook, are good resources to assist in evaluation and management of patients (15,16,18).

# Infection Control and DR-TB

Three main strategies will reduce the transmission of DR-TB: rapid diagnosis, prompt appropriate treatment, and improved airborne infection control (11,13,20,21). Rapid molecular DST and conventional phenotypic culture-based DST are almost universally available in the United States, Europe, and low-incidence, high-resource countries (22). Targeted active case finding combined with rapid diagnostics leading to effective therapy is a strategy endorsed by WHO (13,23,24). Treatment delays have been associated with increased transmission. A systematic review and meta-analysis looking at patient-related risk factors for transmission of M. tuberculosis found that treatment initiation delays of 28 to 30 days were significantly associated with increased transmission to contacts (25). Effective therapy renders patients with TB, even those with DR-TB, rapidly noninfectious (24,26). The rapid reduction in infectiousness even in the setting of MDR-TB makes outpatient therapy possible, but directly observed therapy (DOT) applied through patient-centered approaches plays an important role in this regard (11,23). Infection control measures such as administrative and environmental controls, and personal protective equipment, listed in order of priority, are important for preventing the transmission of M. tuberculosis regardless of drug susceptibility. Every healthcare facility should have these measures in place per CDC guidelines (27). Furthermore, patients should be educated about the importance of infection control measures, such as the value of good ventilation, open windows, and the risks of exposure for children <5 years of age and immunocompromised individuals (21).

# Case Management for DR-TB

Case management is a collaborative process that entails engaging with patients; comprehensively assessing, monitoring, and attending to patients' physical, psychological, social, material, and informational needs; care planning; medication management; facilitating access to services; and functioning as patient advocate/agent (28-30). Commonly, case management is used in community and public health settings to coordinate services for patients with chronic and complex health conditions and to attain good-quality, cost-effective outcomes.
The practice of case management has long been considered an important component of care for patients with TB (20,31,32), and a patient-centered (or family-centered in the case of children) approach is preferred (7,11,33). Patient-centered case management helps patients understand their diagnosis and treatment and participate in treatment selection and promotes communication about factors that matter to the patient (33-35). Importantly, a patient-centered approach includes discussions with the patient to identify potential barriers to care and the selection of strategies and interventions to address and minimize these barriers (7,11,33,35-37). Case management tools for drug-resistant TB include a drug-o-gram, which organizes clinical details in a format that aligns test results with treatment, as well as laboratory, bacteriology, and other toxicity monitoring flow sheets. This format is a visible representation of the pertinent clinical parameters being tracked (16,38); one possible data layout is sketched at the end of this section. A monitoring checklist or care plan can also help the case manager ensure timely drug toxicity monitoring and provision of examinations required to assess a patient's response to treatment. Case management interventions that have demonstrated favorable outcomes include the provision of patient education and counseling related to diagnosis, treatment, and adherence, as well as the use of treatment adherence interventions alongside suitable patient-centered administration options. For example, home- or community-based DOT has been shown to be preferred and to be associated with a greater likelihood of treatment success compared with health facility-based DOT or self-administered therapy (11,39). Enhancing treatment completion through the use of patient-centered case management strategies, including DOT, also aims to reduce the risk of acquisition of drug resistance, which aligns with international efforts in antibiotic stewardship (1). Recent WHO guidelines evaluated various adherence interventions and outcomes of TB treatment through a systematic review of clinical trials and observational studies, identifying data from 129 studies published through 2018 for quantitative analysis (39). Another meta-analysis of 22 randomized controlled trials of DOT and other interventions to improve adherence reported significant increases in cure with DOT (18%) and with patient education and counseling (16%). Compared with their respective comparison groups, loss to follow-up was 49% lower with DOT, 26% lower with financial incentives, and 13% lower with patient education and counseling. However, there was no significant reduction in mortality (40). The use of video-enabled electronic devices to conduct DOT is expanding rapidly (41,42). Electronic methods for DOT have the potential to improve TB treatment outcomes and extend public health support to patients with TB when face-to-face DOT is not feasible. Pilot studies have reported that patients find electronic methods for DOT to be both acceptable and more convenient than traditional in-person DOT. These studies also reported good adherence, fewer unobserved doses, and high satisfaction among study participants (43-46). Further evidence is needed to validate electronically observed therapy under more diverse conditions, with larger cross-sections of patient subgroups, including children, and to determine to what degree these methods impact treatment outcomes for patients with DR-TB (39,47-49).
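As a rough illustration of the drug-o-gram idea referenced above, the sketch below models one monthly row that aligns doses, bacteriology, weight, laboratory values, and adverse effects. The field names and structure are hypothetical and are not taken from any published template.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import Optional

@dataclass
class DrugOGramRow:
    """One monthly row of a hypothetical drug-o-gram, aligning treatment
    with the monitoring parameters named in the text."""
    month: date
    doses: dict = field(default_factory=dict)            # e.g., {"linezolid": "600 mg daily"}
    smear: Optional[str] = None                          # AFB smear result
    culture: Optional[str] = None                        # culture result (at least monthly)
    weight_kg: Optional[float] = None                    # recorded monthly
    labs: dict = field(default_factory=dict)             # toxicity monitoring, e.g., {"ALT (U/L)": 31}
    adverse_effects: list = field(default_factory=list)  # elicited at each visit

row = DrugOGramRow(month=date(2024, 3, 1),
                   doses={"bedaquiline": "200 mg thrice weekly"},
                   culture="negative", weight_kg=58.2,
                   adverse_effects=["nausea"])
print(row)
```

Laying the rows side by side, one per month, reproduces the at-a-glance alignment of test results with treatment that the text describes.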
# Treatment of MDR-TB, Number of Drugs, and Duration of Treatment Phases

# Number of Drugs in the Regimen

Until the recent 2019 WHO Consolidated Guidelines on Drug-Resistant Tuberculosis Treatment (7), WHO had recommended at least five drugs in the intensive phase of treatment, with the intensive phase defined by the use of a second-line injectable agent (23). The recent WHO change to recommending at least four effective drugs at initiation of treatment is graded as a conditional recommendation with very low certainty in the estimates of effect (7). Of note, both our guideline committee and the 2019 WHO guidelines promote the use of newer or repurposed oral agents with greater efficacy and deemphasize the use of injectable agents (7). Given these changes, and because an injectable drug is no longer obligatory, the intensive phase can no longer be defined by the inclusion of injectables. In this guideline, we define the intensive phase as the initiation phase of treatment with at least five effective drugs. These recommendations do not apply to the WHO-endorsed shorter-course 9- to 12-month (Bangladesh) regimen, which combines seven drugs for 4 months or until sputum smear conversion, whichever is the longer period, followed by four drugs in the continuation phase (23).

Summary of the evidence. Procedures and methodology to assemble and rank the certainty in the evidence are reported in APPENDIX A, with evidence profiles for PICO questions reported in APPENDIX B. For the analysis of the number of effective drugs, we counted drugs with published evidence of effectiveness from randomized trials (3). Hence, we counted ethambutol, pyrazinamide, all injectables and fluoroquinolones, ethionamide/prothionamide, cycloserine/terizidone, and p-aminosalicylic acid on the basis of DST showing susceptibility, and we counted clofazimine, linezolid, carbapenems, bedaquiline, and delamanid if the isolate was susceptible or no DST was available for that drug (this counting rule is sketched schematically below). We did not count amoxicillin/clavulanate (in the absence of a carbapenem) or macrolides as effective drugs (see PICO Questions 4 and 15). The intensive phase was defined by the use of an injectable agent (other than imipenem-cilastatin or meropenem). Culture conversion was not an outcome measure. Instead, the analysis assessed the association of the number of possibly effective drugs included in the first 2 weeks of the intensive phase with the two final treatment outcomes: 1) "treatment success," which includes both cure and treatment completed; and 2) mortality. For the intensive phase of treatment, final treatment outcomes were compared between those treated with five or more (n = 2,527) effective drugs and those treated with three or four (n = 5,923) effective drugs.

Benefits. In our PS-matched IPDMA, using zero to two effective drugs as the reference value, treatment success was most likely with regimens for MDR-TB containing five effective drugs in the intensive phase (Table 3). Mortality was also significantly reduced for those taking five or six effective drugs. The number of drugs used in the continuation phase was also evaluated (Table 4). In the continuation phase, four drugs gave the greatest success (adjusted odds ratio [aOR], 2.3), and four or more drugs the greatest reduction in mortality. Few significant differences were found in patients with pre-XDR-TB or XDR-TB. Subgroup analyses in pre-XDR-TB and XDR-TB did not suggest that a different number of effective drugs for the intensive and continuation phases was required to achieve good outcomes.

Definitions of abbreviations for Tables 3 and 4: aOR = adjusted odds ratio; CI = confidence interval.
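The effective-drug counting rule described above can be written out schematically as follows. This is an illustrative sketch only (drug lists are abbreviated), not analysis code from the IPDMA.

```python
# DST results: "S" (susceptible), "R" (resistant), or None (not tested).
COUNT_IF_DST_SUSCEPTIBLE = {
    "ethambutol", "pyrazinamide",
    "amikacin", "capreomycin",            # injectables (all handled this way)
    "levofloxacin", "moxifloxacin",       # fluoroquinolones
    "ethionamide", "prothionamide",
    "cycloserine", "terizidone",
    "p-aminosalicylic acid",
}
COUNT_IF_SUSCEPTIBLE_OR_UNTESTED = {
    "clofazimine", "linezolid", "carbapenem", "bedaquiline", "delamanid",
}
# Macrolides, and amoxicillin/clavulanate without a carbapenem, were never counted.

def counts_as_effective(drug, dst):
    if drug in COUNT_IF_DST_SUSCEPTIBLE:
        return dst == "S"
    if drug in COUNT_IF_SUSCEPTIBLE_OR_UNTESTED:
        return dst in ("S", None)
    return False

regimen = {"bedaquiline": None, "linezolid": "S", "cycloserine": "S", "ethambutol": "R"}
n_effective = sum(counts_as_effective(d, r) for d, r in regimen.items())
print(n_effective)  # 3 -> two more effective drugs needed for a five-drug intensive phase
```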
*World Health Organization definitions: fail = treatment terminated or need for permanent regimen change of at least two antituberculosis drugs because of lack of conversion by the end of the intensive phase, bacteriological reversion in the continuation phase after conversion to negative, evidence of additional acquired resistance to fluoroquinolones or second-line injectable drugs, or adverse drug reactions.
†Relapse was defined as a positive bacteriological culture in the 12 months after treatment completion.

PICO Question 1: Should patients with MDR-TB be prescribed five effective drugs versus more or fewer agents during the intensive and continuation phases of treatment?
Recommendation 1a: We suggest using at least five drugs in the intensive phase of treatment of MDR-TB (conditional recommendation, very low certainty of evidence).
Recommendation 1b: We suggest using at least four drugs in the continuation phase of treatment of MDR-TB (conditional recommendation, very low certainty of evidence).

Harms. The recommendation places a higher value on reduced morbidity and greater treatment success than on the as yet unmeasured avoidance of summative or synergistic toxicity across drugs in the treatment regimen.

Additional considerations. Each drug was counted as equivalent in effectiveness, representing the average effectiveness of the most widely used MDR-TB drugs, for the purpose of adjusting for the number of other effective drugs in the regimen. A regimen composition approach with a suggested ranking of drugs is provided in the section BUILDING A TREATMENT REGIMEN FOR MDR-TB. The smallest number of drugs that achieves sputum conversion and ensures cure without relapse minimizes problems with drug-drug interactions, reduces adverse effects, and is cost effective. Injectable drugs are problematic in terms of route of administration, patient preference, significant adverse effects such as hearing loss or vertigo, and the costs of administration and of monitoring blood levels. On the basis of these limitations, combined with evidence of low efficacy for injectable agents, the use of injectables to define the intensive phase of treatment is no longer appropriate. WHO has recategorized the injectables to a lower grouping and recommended that they be used only when oral medicines from higher categories cannot be used (7). The South African National Department of Health has also endorsed replacing the injectable agents in the shorter-course MDR-TB treatment regimen in adults and children >12 years of age with bedaquiline, on the basis of a retrospective observational study showing improved mortality with the addition of bedaquiline (50). In conjunction, experts from the Sentinel Project on Pediatric Drug-Resistant Tuberculosis (http://sentinel-project.org) have also called for replacing the injectable drug in children <12 years of age with an alternative effective drug (51).

Conclusions. At least five drugs should be used in the intensive phase of treatment and four drugs in the continuation phase of treatment of MDR-TB (conditional recommendation, very low certainty of evidence).
Drugs of poor or doubtful efficacy should not be added to a regimen purely to ensure that the recommended number of drugs is reached.

Research needs. Randomized controlled trials with fewer but more effective and safer drugs should be undertaken (e.g., TB-PRACTECAL [Pragmatic Clinical Trial for a More Effective Concise and Less Toxic MDR-TB Treatment Regimen], ClinicalTrials.gov identifier NCT02589782). Randomized trials comparing regimens with and without injectable agents are underway (e.g., STREAM-stage 2, NCT02409290; Evaluating Newly Approved Drugs for Multidrug-resistant TB: endTB, NCT02754765). The effect of the duration of individual drugs in the intensive phase on culture conversion, as well as on treatment cure without relapse, should be explored (52).

# Duration of Intensive and Continuation Phases in Treating MDR-TB

PICO Question 2: Should patients with MDR-TB undergoing intensive-phase treatment be treated for >6 months after culture conversion or <6 months after culture conversion?
Recommendation 2: In patients with MDR-TB, we suggest an intensive-phase duration of treatment of between 5 and 7 months after culture conversion (conditional recommendation, very low certainty of evidence).

Treatment of both drug-susceptible and MDR-TB has typically been divided into two phases, with an initial "intensive" phase that contains more drugs than the subsequent "continuation" phase. For MDR-TB, this intensive phase has historically been characterized by the use of an aminoglycoside (amikacin or kanamycin) or a polypeptide (capreomycin) delivered parenterally (53). This approach has been intended to provide greater bactericidal activity during the time when the bacillary burden is highest, while reducing the number of drugs in the continuation phase limits the toxicity and intolerability caused by the multidrug regimen once the microbial burden has diminished. The 6-month duration of bedaquiline treatment was, in part, developed by analogy, and clinical trials are underway to determine whether this drug may replace aminoglycosides as initial intensive-phase treatment. The duration of the intensive phase has never been examined in a randomized, controlled clinical trial. Rather, it has been defined by a combination of practicality, the clinical experiences of MDR-TB experts, and, most recently, an IPDMA that resulted in the 2011 WHO practice guideline recommendation that an intensive phase of at least 8 months' duration be used (54). This represented a departure from the prior recommendation, in 2008, that the injectable agent be continued for at least 6 months and for at least 4 months after the patient first becomes and remains smear or culture negative (55). This is similar to current expert guidance provided by the Curry International TB Center Drug Resistant TB: Clinician's Survival Guide, namely "Intensive phase: recommend at least 6 months beyond culture conversion for the use of injectable agent" (16). The PS-matched IPDMA completed as part of the present guideline development process includes cohorts that were treated according to this range of intensive-phase durations and represents the available evidence base for our recommendation. Our analyses and recommendations for the duration of the intensive and continuation phases of therapy are anchored to the timing of culture conversion, because this approach accounts for the fact that treatment response may vary by patient, resistance pattern, and regimen composition and potency, among other factors.
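Operationally, anchoring the intensive-phase duration to culture conversion is simple date arithmetic. The sketch below is illustrative only; the dates and the 30.4-day "month" convention are assumptions made for the example, not values taken from the guideline.

```python
from datetime import date

def months_between(start, end):
    # Approximate calendar months using a 30.4-day average month.
    return (end - start).days / 30.4

culture_conversion = date(2024, 2, 10)   # first of the consistently negative cultures
planned_intensive_end = date(2024, 8, 5)

months_after_conversion = months_between(culture_conversion, planned_intensive_end)
print(f"{months_after_conversion:.1f} months of intensive phase after conversion")  # ~5.8
print("within suggested 5-7 month window:", 5.0 <= months_after_conversion <= 7.0)  # True
```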
Summary of the evidence. Procedures and methodology to assemble and rank the certainty in the evidence are reported in APPENDIX A, with evidence profiles for PICO questions reported in APPENDIX B. The association between the duration of the intensive phase (defined by the use of an injectable agent) after culture conversion and outcomes (success, failure, relapse) was examined among 4,122 (49.3%) subjects from 29 studies who had sufficient information to be included in this analysis (3). This total reflected the exclusion of studies without a reported intensive-phase duration (n = 942 [11.3%] patients) or time to culture conversion (n = 1,880 [22.5%] patients), as well as 955 (11.4%) patients in the included studies for whom intensive-phase duration or time to conversion was not known. Patients taking the 9- to 12-month standardized shorter-course regimen were excluded (see PICO Question 18). An additional 464 (5.0%) patients were excluded for conversion after the end of the intensive phase or after 14.3 months, or for an intensive phase exceeding 24.3 months. Among the 4,122 patients included, 3,303 (80.1%) had MDR-TB without resistance to a fluoroquinolone or second-line injectable; the remainder had pre-XDR- or XDR-TB. To reflect meaningful variations in the duration of the intensive regimen after conversion, stratified analyses were conducted: the strata were 0 to 1.0 months, 1.01 to 3.0 months, 3.01 to 5.0 months, 5.01 to 7.0 months, and 7.1 to 15.0 months (see Table 5). The largest proportion (28.6% [1,179]) received treatment for 5.01 to 7 (mean, 5.9) months after conversion. Time to conversion was roughly inversely proportional to the duration of the intensive phase, suggesting that the tendency in the included studies was to treat for a total intensive-phase duration of 5 to 8 months.

Benefits. The patients who received treatment for 5.01 to 7 months after conversion experienced a 3.3-fold increase in the adjusted odds of treatment success (95% confidence interval [CI], 2.1-5.2) compared with the reference group (0-1.0 months of intensive phase after conversion). Some variability was observed in the distribution of intensive-phase duration after culture conversion by baseline characteristics: <10.0% of patients who received longer durations of the intensive phase were HIV-coinfected, compared with >15% in the 0.0 to 3.0-month interval groups. The number of (effective) drugs was greater with longer durations than with shorter ones. Patients with MDR-TB who received 5.01 to 7.0 months of intensive-phase treatment after conversion had the highest odds of success in the adjusted PS-matched analysis; although other, shorter intervals showed improvement over the reference (<1.0 month of postconversion treatment), the benefits of 5 to 7 months of treatment after conversion were more pronounced (Table 6). The effect estimates were also better in the MDR-only (aOR, 2.0; 95% CI, 1.1-3.4) and pre-XDR (aOR, 1.5; 95% CI, 0.6-3.7) subgroups.

Harms. Detailed data on duration-related toxicities were not available through our PS-matched IPDMA. On the basis of the clinical observations and experiences of the MDR-TB experts on the guideline committee, the intensive phase should not be prolonged beyond that considered necessary to optimize treatment outcomes.

Additional considerations. No information is available in the datasets on how duration was selected.
The duration selected may reflect interim response to treatment or may reflect a planned duration that is not conditioned on treatment response; the latter is suggested by the observation that duration was inversely related to time to conversion except in the 7.01- to 15-month interval. Analyses were adjusted for possible baseline confounders, relying on PS matching, to reduce the bias introduced. However, the possibility of unmeasured confounding by indication, in particular by time-varying characteristics (such as toxicity or microbiological, radiographic, or clinical results), cannot be ruled out. Such confounding by indication would likely result in an underestimate of the benefit of a longer intensive phase after conversion: patients in the 7.01- to 15-month interval had slower conversion and more poor outcomes than those in the 5.01- to 7.0-month interval. Last, the optimal total duration of treatment for MDR-TB using injectable-free, all-oral regimens cannot be determined from these datasets, but clinical trials evaluating newer drugs and all-oral regimens for MDR-TB are underway (56).

Conclusions. We suggest an intensive-phase treatment duration of between 5 and 7 months after culture conversion in patients with MDR-TB (conditional recommendation, very low certainty in the evidence). The clinical context, extent of disease, and response to treatment, among other factors, will play a role in choosing a final duration from within the recommended range. Data were limited in pre-XDR-TB and XDR-TB; subgroup analyses in pre-XDR- and XDR-TB did not suggest that a different duration of the intensive phase would be required to achieve good outcomes. This intensive-phase duration recommendation does not apply to the 9- to 12-month shorter-course regimen addressed in PICO Question 18.

Research needs. Further research is urgently needed to define the optimal durations of treatment using newer drugs and all-oral regimens. Research defining optimal durations that prevent death or loss to follow-up is also needed. In the short term, further research on datasets that include longitudinal observations that can inform the choice of regimen duration would be important to reducing the uncertainty around the present recommendations. In the long term, randomized controlled trials evaluating various intensive phases and durations are needed. Research on stratified medicine approaches that use measures of burden of disease and consider subgroups in the selection of the optimal duration may allow for greater precision and better inform decision-making around the balance of benefits and harms of durations for individual patients (57).

Summary of the evidence. Procedures and methodology to assemble and rank the certainty in the evidence are reported in APPENDIX A, with evidence profiles for PICO questions reported in APPENDIX B. The association between total duration of treatment after culture conversion and outcomes (success, failure, relapse) was examined among 4,691 observations from 32 studies (3). We excluded 18 studies (n = 2,615) that did not report time to culture conversion and 798 patients from included studies because endpoints were missing. Last, 259 patients were excluded for outlier values in total duration or culture conversion; patients whose outcome was loss to follow-up or death were also excluded. Among the 4,691 patients included, 3,034 had MDR-TB; the remainder had pre-XDR- or XDR-TB.
To reflect meaningful variations in the duration of treatment after conversion, stratified analyses were conducted (Table 7); time to conversion was inversely related to duration of treatment after conversion.

Benefits. Among all resistance groups, durations of 15.01 to 21 months after conversion outperformed the reference (12.01-15 mo); intervals of 15.01 to 18 (aOR, 2.1; 95% CI, 1.4-3.1), 18.01 to 21 (aOR, 1.6; 95% CI, 1.1-2.3), and 21.01 to 24 (aOR, 1.2; 95% CI, 0.9-1.8) months were indistinguishable from each other (Table 8). In the MDR-only subgroup, a duration of 15.01 to 18.0 months was associated with similar to slightly improved outcomes compared with the reference (aOR, 1.8; 95% CI, 1.0-3.0) (data not shown). Results for the pre-XDR-TB subgroup supported the notion that longer intervals were associated with success; effects were statistically significant, with confidence intervals that were wide and overlapping across durations of 15.01 to 24 months (data not shown).

PICO Question 3: Should patients with MDR-TB undergoing continuation-phase treatment be treated for >18 months after culture conversion or <18 months after culture conversion?
Recommendation 3a: In patients with MDR-TB, we suggest a total treatment duration of between 15 and 21 months after culture conversion (conditional recommendation, very low certainty in the evidence).
Recommendation 3b: In patients with pre-XDR-TB and XDR-TB, we suggest a total treatment duration of between 15 and 24 months after culture conversion (conditional recommendation, very low certainty in the evidence).

Harms. Extending treatment longer than necessary can engender additional toxicity and costs to patients and health systems. For this reason, the recommendations are for the minimum duration found to have a significant treatment advantage, and different recommendations are made for the MDR-only and pre-XDR/XDR subgroups.

Additional considerations. No information on how duration was selected is available from the data. Duration selection may reflect interim response to treatment or may reflect a planned duration that is not conditioned on treatment response; the latter is suggested by the observation that duration was inversely related to time to conversion except in the longest interval. Analyses were adjusted for all possible baseline confounders, relying on PS matching, to minimize the bias introduced. Nevertheless, the possibility of unmeasured confounding by indication, in particular by time-varying characteristics, cannot be ruled out. Such confounding by indication would likely result in an underestimate of the benefit of a longer treatment duration after conversion. No significant difference in outcomes was observed across durations for the XDR-TB subgroup, likely because of small numbers and the aforementioned potential bias.

Conclusions. We suggest a total treatment duration of at least 15 and up to 21 months after culture conversion for patients with MDR-TB only. In patients with pre-XDR-TB and XDR-TB, we suggest a total treatment duration of between 15 and 24 months after culture conversion (conditional recommendations, very low certainty in the evidence). The clinical context and extent of disease will be relevant for choosing a final duration from within the recommended range.

Research needs. The optimal total duration of regimens composed of only oral drugs urgently needs further research. Defining optimal total durations that prevent death or loss to follow-up also needs study.
In the short term, further research on datasets that include longitudinal observations that can inform the choice of regimen duration would be very important to reducing the uncertainty around the present recommendation. In the long term, randomized controlled trials comparing different intensive phases and durations are needed. As with the duration of the intensive phase, research is needed on stratified medicine approaches that consider measures of burden of disease and subgroups in the selection of the optimal total duration; such approaches would allow for greater precision in selecting and individualizing regimen duration (57).

# Drugs and Drug Classes

These guidelines are intended for settings in which treatment is individualized on the basis of DST results and clinical and epidemiological factors. Individualized treatment regimens should include only drugs to which the patient's isolate has documented susceptibility or a high likelihood of susceptibility. Treatment regimens should favor medications that are associated with improved outcomes, as identified in our PS-matched IPDMA, that limit toxicity, and that incorporate patient preferences. The following alphabetically listed drugs and drug classes were considered for inclusion in treatment regimens: amoxicillin/clavulanate, bedaquiline, carbapenem with clavulanic acid, clofazimine, cycloserine, delamanid, ethambutol, ethionamide, fluoroquinolones, injectable agents, linezolid, macrolides, p-aminosalicylic acid, and pyrazinamide. For each drug or drug class, the following PICO question was addressed: In patients with MDR-TB, are outcomes safely improved when regimens include the following individual drugs or drug classes compared with regimens that do not include them?

# Amoxicillin/Clavulanate

Amoxicillin-clavulanate, consisting of the β-lactam antibiotic amoxicillin and the β-lactamase inhibitor potassium clavulanate, is considered safe and effective for numerous bacterial infections. Although M. tuberculosis has a relatively impermeable cell wall and produces a β-lactamase (58), amoxicillin-clavulanate has been used to treat TB. It is viewed as a "salvage" agent when few other treatment options exist.

PICO Question 4-Amoxicillin/Clavulanate: In patients with MDR-TB, are outcomes safely improved when regimens include amoxicillin/clavulanate compared with regimens that do not include amoxicillin/clavulanate?
Recommendation 4: We recommend NOT including amoxicillin-clavulanate in a treatment regimen for patients with MDR-TB, with the exception of when the patient is receiving a carbapenem, wherein the inclusion of clavulanate is necessary (strong recommendation, very low certainty in the evidence).
Our recommendation against the use of amoxicillin-clavulanate (except to provide clavulanate when using a carbapenem) in MDR-TB treatment is strong despite the evidence being judged to be of very low certainty because we viewed the increased mortality and decreased likelihood of treatment success associated with the use of this drug as having a notably unfavorable balance of benefits to potential harms.

Carbapenems combined with clavulanate have shown in vitro activity against M. tuberculosis (63,64), and the combination has been reported to be efficacious, safe, and tolerable when added to linezolid and other drugs (65)(66)(67)(68)(69). See PICO Question 6 for the evaluation of carbapenems with amoxicillin-clavulanate.

Summary of the evidence. Procedures and methodology to assemble and rank the certainty in the evidence are reported in APPENDIX A, with evidence profiles for PICO questions reported in APPENDIX B.
Our PS-matched IPDMA showed that patients who received amoxicillin-clavulanate were more likely to have been treated with second-line drugs and to have isolates resistant to fluoroquinolones or any second-line injectable (3). They were also more likely to have received a later-generation fluoroquinolone (72%), capreomycin (58%), linezolid (25%), or bedaquiline (8%). In adjusted analyses, patients who received amoxicillin-clavulanate were less likely to have treatment success (aOR, 0.6; 95% CI, 0.5-0.8) and more likely to die (aOR, 1.7; 95% CI, 1.3-2.1) than patients who did not receive amoxicillin-clavulanate.

Benefits. The addition of amoxicillin-clavulanate (without coadministration of a carbapenem) to a regimen for MDR-TB does not appear to provide benefit.

Harms. Patients who received amoxicillin-clavulanate were less likely to have treatment success and more likely to die than patients who did not receive amoxicillin-clavulanate. Data on adverse effects were not collected systematically across studies in our PS-matched IPDMA or in most trials. A published systematic review identified diarrhea and candidiasis as key adverse effects associated with amoxicillin-clavulanic acid use (70).

Additional considerations. Clavulanic acid is available only as a coformulation with amoxicillin. Therefore, amoxicillin-clavulanate must be given whenever carbapenems are included in an MDR-TB regimen (see PICO Question 6 on carbapenems with clavulanate).

Conclusions. Patients who received amoxicillin-clavulanate were less likely to achieve treatment success and more likely to die than patients who did not receive amoxicillin-clavulanate. These data suggest that amoxicillin-clavulanate should not be used in MDR-TB treatment, except to provide clavulanate when using a carbapenem (see PICO Question 6 on carbapenems with clavulanate). Our recommendation against the use of amoxicillin-clavulanate (except to provide clavulanate when using a carbapenem) in MDR-TB treatment is strong despite the evidence being judged to be of very low certainty because we viewed the increased mortality and decreased likelihood of treatment success associated with the use of this drug as having a notably unfavorable balance of benefits to potential harms.

Research needs. The development of a clavulanic acid formulation without amoxicillin, for use in combination with carbapenems, would be helpful: it would avoid unnecessary toxicity from amoxicillin and would support international efforts to promote antimicrobial stewardship (1).

# Bedaquiline

Bedaquiline, a diarylquinoline approved by FDA in 2013, is the first drug with a novel mechanism of action against M. tuberculosis to have been approved by FDA in >40 years (71,72). Bedaquiline is bactericidal to both nonreplicating and actively replicating mycobacteria through ATP synthase inhibition and has bactericidal and sterilizing activity in the murine model of TB infection (73). No cross-resistance has been found between bedaquiline and isoniazid, rifampin, ethambutol, pyrazinamide, streptomycin, amikacin, or moxifloxacin, although there have been a few reports of cross-resistance with clofazimine (74,75).

PICO Question 5-Bedaquiline: In patients with MDR-TB, are outcomes safely improved when regimens include bedaquiline compared with regimens that do not include bedaquiline?
Recommendation 5: We recommend including bedaquiline in a regimen for the treatment of patients with MDR-TB (strong recommendation, very low certainty in the evidence).
Our recommendation for the use of bedaquiline is strong despite very low certainty in the evidence because we viewed the significant reduction in mortality, improved treatment success, and relatively few adverse effects associated with MDR-TB treatment including bedaquiline (compared with no bedaquiline) as having a particularly favorable balance of benefits over harms.
Bedaquiline is customarily used as part of combination therapy (minimum four-drug therapy) for adults aged >18 years with a diagnosis of pulmonary MDR-TB when an effective treatment regimen cannot otherwise be provided (e.g., when mycobacterial isolates show a complicated drug resistance pattern, or because of drug intolerance or drug-drug interactions) (76). The recommended dose of bedaquiline for the treatment of pulmonary MDR-TB in adults is 400 mg administered orally once daily for 2 weeks, followed by 200 mg administered orally three times weekly, for a total treatment duration of 24 weeks (72,(76)(77)(78). Bedaquiline has recently been identified as the key drug in assembling an all-oral, injectable-free drug regimen in South Africa (79).

Summary of the evidence. Procedures and methodology to assemble and rank the certainty in the evidence are reported in APPENDIX A, with evidence profiles for PICO questions reported in APPENDIX B. Our PS-matched IPDMA included 411 patients who received bedaquiline-containing regimens, for whom it was assumed that there was no bedaquiline resistance, and 10,932 patients who did not receive bedaquiline (3). Patients who received bedaquiline-containing regimens were more likely to have cavitary disease (82% vs. 62%), to have been treated with second-line drugs (34% vs. 14%), to be infected with an organism resistant to fluoroquinolones (57% vs. 21%) or to any second-line injectable (58% vs. 23%), and to have XDR-TB (29% vs. 13%). They were also more likely to have received a later-generation fluoroquinolone (69% vs. 54%), linezolid (64% vs. 4%), or clofazimine (48% vs. 4%). Treatment success (cure and completion) was greater (70% vs. 60%; P = 0.001) with bedaquiline, whereas failure/relapse (6% vs. 9%), death (10% vs. 15%), and loss to follow-up (14% vs. 16%) were less frequent in PS-adjusted analyses.

Benefits. Our PS-matched IPDMA showed that patients treated with bedaquiline-containing regimens were more likely to have treatment success (aOR, 2.0; 95% CI, 1.4-2.9), less likely to experience failure/relapse, and less likely to die (aOR, 0.4; 95% CI, 0.3-0.5; absolute risk reduction, 7.2%). Results were similar when only patients from high-income countries treated with bedaquiline-containing regimens were included and when comparing only patients with XDR-TB who were and were not treated with bedaquiline. A recent large program-based observational multinational study confirmed a high treatment success rate (76.9%) with bedaquiline-containing regimens, with a low proportion (5.8%) of interruptions owing to adverse events (80). In our IPDMA, in PS-matched pairs comparing the effects of bedaquiline with those of clofazimine (using >170 PS-matched pairs), statistically significant differences in success versus failure/relapse favoring bedaquiline were observed (aOR, 2.1; 95% CI, 1.1-4.1), but statistically significant findings were not observed for mortality or when restricting analyses to high-income countries.
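As a concrete illustration of the adult dosing described above (400 mg once daily for 2 weeks, then 200 mg three times weekly through week 24), the following sketch tallies the doses that schedule implies. It is arithmetic only, not dosing guidance.

```python
# Dose tally implied by the labeled adult schedule cited above (72, 76-78):
# 400 mg once daily on days 1-14, then 200 mg three times weekly, weeks 3-24.
loading_doses = 14            # 400-mg daily doses in weeks 1-2
continuation_doses = 22 * 3   # 200-mg doses, three per week for 22 weeks
total_grams = (loading_doses * 400 + continuation_doses * 200) / 1000

print(f"{loading_doses} loading doses + {continuation_doses} continuation doses")
print(f"cumulative bedaquiline over 24 weeks: {total_grams:.1f} g")  # 18.8 g
```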
PS-matched pair analyses comparing the effects of various combinations of drugs used with bedaquiline all noted improved outcomes. When the effects of bedaquiline and linezolid were compared with those of neither bedaquiline nor linezolid, the combination of bedaquiline and linezolid was associated with an aOR of 2.7 (95% CI, 1.5-4.9) for success versus failure/relapse and with an aOR of 0.3 (95% CI, 0.2-0.4) for death versus success/failure/relapse. Similarly, when PS-matched pair analyses compared the effects of bedaquiline and clofazimine with those of neither bedaquiline nor clofazimine, the combination of bedaquiline and clofazimine was associated with an aOR of 5.0 (95% CI, 2.4-10.6) for success versus failure/relapse and with an aOR of 0.3 (95% CI, 0.2-0.5) for death versus success/failure/relapse. Of note, patients to whom bedaquiline was administered tended to have more cavitary disease or to have pre-XDR- or XDR-TB. Prescription of bedaquiline was associated with greater success and less death than clofazimine, but the greatest success and least mortality were found when bedaquiline was administered together with linezolid or clofazimine. Finally, a recent retrospective routine-care observational study in South Africa showed that a bedaquiline-containing regimen was associated with significantly lower mortality (128 [12.6%] deaths among 1,016 patients receiving bedaquiline compared with 4,612 [24.8%] deaths among 18,601 patients on standard regimens). Bedaquiline was associated with a reduction in the risk of all-cause mortality for patients with MDR- or RR-TB (hazard ratio [HR], 0.35; 95% CI, 0.28-0.46) and XDR-TB (HR, 0.26; 95% CI, 0.18-0.38) compared with standard regimens (50).

Harms. In a review of cases treated with bedaquiline, only 44 of 1,266 (3.5%) cases with information available discontinued bedaquiline because of adverse events. Only 8 of 875 (0.9%) discontinued bedaquiline because of QT interval prolongation (two restarted the drug after resolution of the acute episode without further problems) (81).

Additional considerations. When bedaquiline is included in the regimen, most experts obtain ECGs after the initial 2 weeks of therapy and then at monthly intervals to monitor for QT interval prolongation. Serum electrolytes, including calcium, magnesium, and potassium, are also monitored. The 2013 CDC guidelines on the use of bedaquiline note that there is insufficient evidence to provide guidance on the use of bedaquiline in children but that its use can be considered on a case-by-case basis given the high mortality and limited treatment options for MDR-TB (71). More recently, adolescents >10 years old and weighing >34 kg have been safely treated off-label with the recommended adult dose of bedaquiline (82). The Sentinel Project on Pediatric Drug-Resistant Tuberculosis has recommended that children >12 years of age and >31 kg body weight receive bedaquiline at the same dose as adults and that children >6 years of age with 16 to 30 kg body weight could receive half the adult bedaquiline dose for the same indications (7,17,83).

Conclusions. In our PS-matched IPDMA, bedaquiline-containing regimens were more likely to achieve treatment success and had a lower rate of death than regimens that did not include bedaquiline. Bedaquiline should be included in a regimen to achieve a total of five effective drugs for the treatment of patients with MDR-TB.
Our recommendation for the use of bedaquiline is strong despite very low certainty in the evidence because we viewed the significant reduction in mortality, improved treatment success, and relatively few adverse effects associated with MDR-TB treatment including bedaquiline (compared with no bedaquiline) as having a particularly favorable balance of benefits over harms.

Research needs. Further research is needed to elucidate the potential synergy of bedaquiline with other agents. Studies underway are evaluating the use of bedaquiline together with linezolid, clofazimine, or nitroimidazoles (i.e., pretomanid and delamanid). Research is also needed on the safety, tolerability, and efficacy of bedaquiline-based shorter-course regimens, as well as on the use of bedaquiline for durations >24 weeks, an approach currently considered when effective treatment cannot otherwise be provided (71,84). Finally, research is urgently needed on the risk factors and any interventions (e.g., the selection of companion drugs) that influence acquisition of bedaquiline resistance.

# Carbapenems with Clavulanic Acid

The combination of carbapenems and clavulanate, a β-lactamase inhibitor, has been shown to have in vitro bactericidal activity (65,85,86). Because clavulanate is not available by itself, the combination drug amoxicillin-clavulanate must be given with the carbapenems. Carbapenems have been used primarily for MDR- and XDR-TB, and a recent systematic review found that carbapenems are safe and likely to be effective (65). Carbapenem drugs have recently been included in WHO guidelines for the treatment of DR-TB (23).

Summary of the evidence. Procedures and methodology to assemble and rank the certainty in the evidence are reported in APPENDIX A, with evidence profiles for PICO questions reported in APPENDIX B. Our PS-matched IPDMA compared death, treatment success, and culture conversion among 169 individuals who received carbapenems with 9,535 individuals from centers that did not use any of the drugs previously classified by WHO as "Group 5" drugs (3). In addition, we considered adverse events reported in a recent review (65) and in an IPDMA and systematic review that summarize the available evidence on carbapenems from five primary studies (all observational studies) (23,(65)(66)(67)(87)(88)(89)(90)(91)(92). The reviewed evidence showed concomitant use of linezolid and bedaquiline in large proportions of the patient population receiving carbapenems, suggestive of confounding, which was considered in the review of the efficacy results.

Benefits. In our PS-matched IPDMA, inclusion of carbapenems in the treatment regimen had no effect on the risk of death (aOR, 1.0; 95% CI, 0.5-1.7) or culture conversion (aOR, 2.3; 95% CI, 0.8-6.9). However, treatment success was more likely among patients treated with regimens including a carbapenem than among those treated with regimens not including a carbapenem (aOR, 4.0; 95% CI, 1.7-9.1).

Harms. In our meta-analysis, the rate of treatment discontinuation due to an adverse event was lower among patients treated with regimens including a carbapenem than among those treated with regimens not including a carbapenem, although this difference was not statistically significant (relative risk, 0.82; 95% CI, 0.23-2.94). In the published literature, treatment discontinuation due to adverse events has been reported in 0 to 3% of patients, with minor adverse events occurring in 5% to 6% of patients (65).

Additional considerations.
Of note, clavulanic acid is available only as a coformulation with amoxicillin. Therefore, whenever carbapenems are included in an MDR-TB regimen, amoxicillin-clavulanate must be given with each daily dose of carbapenem to provide 125 mg of clavulanate. DST is currently not available for carbapenems. When used, imipenem-cilastatin/clavulanate or meropenem/clavulanate has been administered intravenously in a hospital setting, with multiple injections required daily (67). There is no clear evidence of whether imipenem-cilastatin or meropenem is more efficacious (66). Ertapenem belongs to the same drug family (65,93); because of its longer half-life and the option of once-daily intramuscular or intravenous administration, ertapenem may be useful when a patient treated with intravenous meropenem or imipenem-cilastatin during hospitalization is discharged and needs to continue carbapenem-based treatment as an outpatient (92). Long-term intravenous administration requires long-term venous access through an indwelling catheter, which carries risks of infection, thrombosis, and thromboembolism. Data on the use of a carbapenem combined with amoxicillin-clavulanate in children are lacking; only two adolescents are included in one review (65). However, efficacy has been shown in adults, and both carbapenems and amoxicillin-clavulanate have been used safely in children for other purposes (although there are no long-term treatment safety data in young children); therefore, this combination could be used in children with MDR-TB when there is no other option to build an effective regimen.

Conclusions. In our PS-matched IPDMA, inclusion of carbapenems with clavulanic acid was associated with an increase in treatment success when compared with the control group not receiving carbapenems with clavulanic acid. Carbapenems with clavulanic acid can be included in a regimen to achieve a total of five effective drugs for the treatment of patients with MDR-TB.

Research needs. Randomized, controlled clinical trials that confirm the role of carbapenems used for different durations within different regimens are needed. A comparative evaluation of the safety, tolerability, and efficacy of the different agents is also necessary. Because of the high cost of these drugs, economic analyses will also be useful (65). The development of oral formulations of carbapenems is underway; if proven safe and effective, oral formulations would significantly enhance the feasibility of using these agents. Additional advances in rapid DST for carbapenems would enhance the potential for scale-up of use of this class of drugs.

# Clofazimine

Clofazimine, a fat-soluble riminophenazine dye, has shown in vitro and in vivo activity as a sterilizing drug for the treatment of MDR-TB (94)(95)(96). Clofazimine has been used as a leprosy drug and was first developed in the 1950s because of its in vitro and in vivo activity against M. tuberculosis (94, 95, 97). Although the exact mechanism of action is not known, it is a prodrug that appears to have both antimycobacterial and antiinflammatory properties (98). The published clinical evidence on the safety and efficacy of clofazimine for the treatment of TB is modest (99). However, interest in the drug has increased since WHO endorsed the new shorter-course regimen (23, 100), which includes clofazimine (101)(102)(103)(104)(105)(106)(107)(108).

Summary of the evidence.
Procedures and methodology to assemble and rank the certainty in the evidence are reported in APPENDIX A, with evidence profiles for PICO questions reported in APPENDIX B. Our PS-matched IPDMA compared death, treatment success, and culture conversion among 639 individuals who received clofazimine with 7,398 individuals from centers that did not use any of the drugs previously classified by WHO as "Group 5" drugs (3). In addition, we considered adverse events reported in a published systematic review of nine observational studies (six MDR-TB and three XDR-TB) including 599 patients treated with clofazimine (427 from a single cohort study from Bangladesh) (95) and in two studies published after the systematic review: a trial at six specialty hospitals in China that randomized 105 patients to 21 months of treatment with an optimized background regimen with or without clofazimine (100 mg/d) (94), and a retrospective cohort study from Brazil that compared outcomes in patients with MDR-TB treated with clofazimine (100 mg/d)-containing regimens versus pyrazinamide-containing regimens (109).

Benefits. Our PS-matched IPDMA showed that treatment success was more likely with regimens containing clofazimine than with regimens that did not (aOR, 1.5; 95% CI, 1.1-2.1). Addition of clofazimine was associated with an aOR of 0.8 (95% CI, 0.6-1.0) for the outcome of death and an aOR of 1.1 (95% CI, 0.6-1.8) for culture conversion within 6 months. As noted in PICO Question 5 on bedaquiline, when PS-matched pair analyses were performed comparing the effects of bedaquiline and clofazimine with those of neither bedaquiline nor clofazimine, the combination of bedaquiline and clofazimine was associated with an aOR of 5.0 (95% CI, 2.4-10.6) for success versus failure/relapse and with an aOR of 0.3 (95% CI, 0.2-0.5) for death versus success/failure/relapse.

Harms. In our PS-matched IPDMA, 2 of 81 (2.5%) patients treated with regimens including clofazimine had treatment discontinued because of adverse events. Brownish skin pigmentation has been observed in 75% to 100%, ichthyosis in 8% to 20%, gastrointestinal intolerance in 40% to 50%, and neurological disturbances in up to 13% of patients (94,95,109). Clofazimine was judged to have small to moderate desirable effects (on treatment success, mortality, and culture conversion) and small undesirable effects (low risk of serious adverse events requiring treatment discontinuation). However, there is important uncertainty about how patients value skin discoloration. Some patients perceive skin discoloration as significant, and this may become a key factor in the acceptability of this drug in some settings. Recent publications have noted the potential QT interval-prolonging effects of clofazimine when used in combination with bedaquiline and delamanid (110)(111)(112).

Additional considerations. Access is a limitation to expanded use of clofazimine for the treatment of MDR-TB, especially in the United States and Europe, where clofazimine lacks a TB indication. In the United States, clofazimine is currently available only under an investigational new drug protocol administered by FDA, which can be a burdensome procedure. Significant improvements in the mechanisms for accessing clofazimine are necessary for more expanded use of this drug. Quality-assured clofazimine is available through the Global Drug Facility, although currently in limited quantities.
Although our PS-matched IPDMA had limited pediatric data, the efficacy of clofazimine in children is believed by experts to be the same as in adults. A recent IPDMA in children with MDR-TB did not show a benefit of clofazimine, which might be related to the selection of cases and the very small number (23 of 641) receiving clofazimine (113). Moreover, children are included in the WHO shorter-course regimen for MDR-TB treatment, which includes clofazimine, albeit on the basis of limited data. Dosing clofazimine in children is challenging because of the lack of child-friendly formulations and the lack of pharmacokinetic data. The currently recommended dose of clofazimine in children varies from 2 to 5 mg/kg daily. Skin discoloration is common; in some patients it improves during treatment, and in others it resolves rapidly after treatment is discontinued. Ichthyosis is less common and improves with intensive efforts at applying lubricants during treatment. As noted, QT interval prolongation may occur and is especially important to monitor by ECG if clofazimine is given together with other QT interval-prolonging medications (68, 83, 110-112, 114, 115). Clofazimine may have cross-resistance with bedaquiline, which may need to be considered when building a regimen for MDR-TB (75).

Conclusions. In our PS-matched IPDMA, inclusion of clofazimine was associated with an increase in treatment success when compared with the control group not receiving clofazimine. Clofazimine can be included in a regimen to achieve a total of five effective drugs for the treatment of patients with MDR-TB.

Research needs. The value of loading doses and the optimal dosing of clofazimine require additional research. Pharmacokinetic data in adults and children are needed.

# Cycloserine

Cycloserine is an oral bacteriostatic drug that has been part of the backbone regimen for MDR-TB treatment in the past. Cycloserine is a broad-spectrum antibiotic that inhibits cell wall synthesis (116,117). Terizidone, a structural analog that combines two cycloserine molecules, appears to be used interchangeably by experts, although it is currently not available in the United States (118). Although some advantages of cycloserine include the absence of cross-resistance to other drugs and reasonable gastrointestinal tolerability, a significant drawback is its psychological side effects, which occasionally necessitate discontinuation of the drug (16).

PICO Question 8-Cycloserine: In patients with MDR-TB, are outcomes safely improved when regimens include cycloserine compared with regimens that do not include cycloserine?
Recommendation 8: We suggest including cycloserine in a regimen for the treatment of patients with MDR-TB (conditional recommendation, very low certainty in the evidence).

Summary of the evidence. Previous studies showed an increase in the likelihood of treatment success when cycloserine was included in the MDR-TB regimen (marginally statistically significant) (96). Procedures and methodology to assemble and rank the certainty in the evidence are reported in APPENDIX A, with evidence profiles for PICO questions reported in APPENDIX B.

Benefits. Our PS-matched IPDMA showed that inclusion of cycloserine was associated with an increase in treatment success (aOR, 1.5; 95% CI, 1.4-1.7) and a decrease in mortality (aOR, 0.6; 95% CI, 0.5-0.6) when compared with the control group. A published IPDMA of children with MDR-TB showed that cycloserine as a single drug was associated with an aOR for successful treatment of 1.7 (95% CI, 0.9-3.0; 641 children with MDR-TB, of whom 356 [56%] received cycloserine/terizidone) (113).
Limited pharmacokinetic data are available on cycloserine (or terizidone) in children, but a recent study of 25 children (only 5 <12 yr of age) receiving a median dose of 14.3 mg/kg (range, 10.0-18.0 mg/kg) showed that the maximum plasma concentration achieved after dose administration was similar to the adult target maximum concentration of 20 to 35 mg/L (119). The recommended dose for children of 10 to 20 mg/kg/d seems adequate. An advantage of cycloserine in children, as in adults, with miliary and/or central nervous system (CNS) TB is that it penetrates the CNS well. Cycloserine may interfere with the pharmacokinetics and absorption of isoniazid and ethionamide/prothionamide; therefore, it may be advisable to give it separately from these drugs if they are used together in the same regimen (117).

Harms. Cycloserine has CNS adverse effects, which are reported to occur in about 20% to 30% of adults (120). A meta-analysis of published articles identified adverse events in 201 of 2,164 patients across all studies (118). The average weighted proportion of patients receiving cycloserine who discontinued treatment owing to adverse effects, pooled across the studies, was 9.1% (95% CI, 6.4-11.7%). The average weighted proportion of patients with psychiatric adverse effects was 5.7% (95% CI, 3.7-7.6%) (118). Limited data exist on cycloserine adverse effects in children. In older pediatric studies, no adverse effects were reported with the use of cycloserine (121)(122)(123). In a systematic review of outcomes of MDR-TB in children, 6 of 182 (3.3%) children had adverse effects attributed to cycloserine, which included depression, anxiety, hallucinations, transitory psychosis, and blurred vision (124).

Additional considerations. Lower-than-recommended serum cycloserine concentrations and delayed absorption have been reported (125). Some experts obtain peak concentrations within the first 1 to 2 weeks of therapy and continue to monitor serially. Because of specific technical challenges related to cycloserine DST, the poor accuracy of testing in liquid media, and the poor intrinsic reproducibility of results, few laboratories perform DST for cycloserine (16).

Conclusions. In our PS-matched IPDMA, inclusion of cycloserine was associated with a decrease in mortality and an increase in treatment success when compared with the control group not receiving cycloserine. Cycloserine can be included in a regimen to achieve a total of five effective drugs for the treatment of patients with MDR-TB.

Research needs. Studies are needed to determine the effect of cycloserine on the absorption of isoniazid and ethionamide.

# Delamanid

Delamanid is a nitro-dihydro-imidazooxazole derivative that was approved for the treatment of MDR-TB by the European Medicines Agency in 2013 but has not yet received FDA approval. In 2014, WHO issued interim policy guidance on the use of delamanid for the treatment of MDR-TB on the basis of phase 2b clinical trial data. The interim policy guidance stated that "delamanid may be added to an MDR-TB regimen in adult patients with pulmonary TB conditional upon: i) careful selection of patients likely to benefit; ii) patient informed consent; iii) adherence to WHO recommendations in designing a longer MDR-TB regimen; iv) close monitoring of clinical treatment response; and v) active TB drug-safety monitoring and management (aDSM)" (126).
WHO issued an updated position statement on the use of delamanid for MDR-TB in 2018 on the basis of the final results of the phase 3 randomized controlled trial, Trial 213 (127).

Summary of the evidence. Delamanid data were not available as part of the PS-matched IPDMA completed for this guideline development process; however, the 2019 WHO Consolidated Guidelines on Drug-Resistant Tuberculosis Treatment, for which individual-level data on delamanid were obtained for analyses, provide practice guidance on the use of delamanid for the treatment of MDR-TB (7,128). Several recent reports, clinical trials, and cohort studies describe delamanid-containing regimens achieving success rates of 77% to 84% in the absence of major adverse events (128)(129)(130)(131). In the United States, through a compassionate use program, delamanid has recently been successfully accessed and used in the care of patients with MDR-TB (132). Regarding the use of delamanid in children, only a small number of children (>6 yr of age) have had documented access to the drug, also through compassionate use (133,134). On the basis of its review of the data available in the IPDMA conducted by WHO, delamanid has been included in "Group C" of the WHO guidelines, corresponding to drugs that can be added "to complete the regimen and when medicines from Groups A and B cannot be used." WHO has recommended that "delamanid may be included in the treatment of MDR/RR-TB patients aged 3 years or more on longer regimens" (7,135). The guideline writing committee concurs with the updated 2019 WHO guidance (7).

PICO Question 9-Delamanid: In patients with MDR-TB, are outcomes safely improved when regimens include delamanid compared with regimens that do not include delamanid?
Recommendation 9: The guideline panel was unable to make a clinical recommendation for or against delamanid because of the absence of data in the PS-matched IPDMA conducted for this practice guideline. We make a research recommendation for the conduct of randomized clinical trials and cohort studies evaluating the efficacy, safety, and tolerability of delamanid in combination with other oral agents. Until additional data are available, the guideline panel concurs with the conditional recommendation of the 2019 WHO Consolidated Guidelines on Drug-Resistant Tuberculosis Treatment that delamanid may be included in the treatment of patients with MDR/RR-TB aged >3 years on longer regimens (7).

As noted for other drugs discussed in this practice guideline, QT interval prolongation may occur with delamanid use, and routine ECG monitoring is recommended. A recent evaluation of the cardiac safety of delamanid and bedaquiline given together as part of multidrug therapy for MDR-TB concluded that the combined effect on the corrected QT interval (QTc) using the Fridericia formula (QTcF) is clinically modest and no more than additive (ClinicalTrials.gov Identifier: NCT02583048), with a mean change in QTcF from baseline of 11.9 milliseconds (95.1% CI, 7.4-16.5 ms) in the bedaquiline arm, 8.6 milliseconds (95.1% CI, 4.0-13.2 ms) in the delamanid arm, and 20.7 milliseconds (95.1% CI, 16.1-25.4 ms) in the combined bedaquiline and delamanid arm (136).
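For reference, the Fridericia correction cited above normalizes the measured QT interval to heart rate by dividing by the cube root of the RR interval, with both intervals expressed in seconds:

\[
\mathrm{QTcF} = \frac{\mathrm{QT}}{\mathrm{RR}^{1/3}}, \qquad \mathrm{RR} = \frac{60}{\text{heart rate (beats/min)}}
\]

For example, a measured QT of 400 ms at 75 beats/min (RR = 0.8 s) gives QTcF ≈ 400/0.928 ≈ 431 ms.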
Conclusions. The guideline panel was unable to make a recommendation for or against delamanid because of the absence of data in the IPDMA conducted for this practice guideline. In 2018, WHO updated its IPDMA with additional data and recommended that delamanid be included in the third tier of drugs, Group C, and that the drug be used in the treatment of patients with MDR/RR-TB aged >3 years on longer regimens. The guideline writing committee agrees with the updated 2019 WHO Consolidated Guidelines on Drug-Resistant Tuberculosis Treatment that delamanid may be included in the treatment of patients with MDR/RR-TB aged >3 years on longer regimens (7). We make a research recommendation for the conduct of randomized clinical trials and cohort studies evaluating the efficacy, safety, and tolerability of all-oral, shortened regimens inclusive of delamanid in combination with other oral agents.

Research needs. The companion drugs with which to combine delamanid for optimal efficacy, safety, and tolerability remain uncertain. Randomized clinical trials and cohort studies evaluating all-oral, shortened regimens inclusive of delamanid in combination with other oral agents are urgently needed. The endTB trial is a phase 3 multicountry randomized clinical trial evaluating the efficacy and safety of five new, all-oral, shortened regimens including combinations of bedaquiline and delamanid (ClinicalTrials.gov Identifier: NCT02754765). Recent cohort data are also emerging on the use of delamanid and bedaquiline in combination, which to date has generally been reserved for severe cases with extensive resistance and when other options are not feasible (137)(138)(139)(140). The Nix-TB (A Phase 3 Study Assessing the Safety and Efficacy of Bedaquiline Plus PA-824 Plus Linezolid in Subjects with Drug-Resistant Pulmonary Tuberculosis) trial (ClinicalTrials.gov Identifier: NCT02333799) evaluated an all-oral 6-month regimen comprising bedaquiline, pretomanid (a member of the nitroimidazooxazine class of compounds), and linezolid for the treatment of either XDR-TB or treatment-intolerant or nonresponsive MDR-TB. FDA granted priority review of the new drug application for pretomanid in March 2019, the Antimicrobial Drugs Advisory Committee discussed pretomanid in June 2019, and in August 2019 FDA approved pretomanid in combination with bedaquiline and linezolid for the treatment of a specific limited population of adults with pulmonary XDR-TB or treatment-intolerant or nonresponsive MDR-TB (2,141). Given FDA approval, additional trials that include diverse patient populations will be essential to understand the optimal use of this regimen, in addition to head-to-head comparisons of pretomanid with delamanid to determine whether these drugs can be used interchangeably.

# Ethambutol

Ethambutol is an ethylenediamine that inhibits arabinosyl transferases, which contribute to M. tuberculosis cell wall synthesis (142). The drug is included in standard regimens for the treatment of drug-susceptible TB and is commonly used in regimens for MDR-TB (11,16,23). The published evidence on its safety and efficacy is modest, and its use as part of the first-line regimen for drug-susceptible TB rests largely on its ability to prevent the emergence of resistance to the other drugs in the regimen rather than on its own sterilizing activity (143). Ethambutol has demonstrated modest sterilizing activity as part of combination regimens when used against ethambutol-susceptible isolates for the treatment of drug-susceptible TB.
The drug is not effective against ethambutol-resistant isolates, and such use is not recommended (54,96). Summary of the evidence. Procedures and methodology to assemble and rank the certainty in the evidence are reported in APPENDIX A, with evidence profiles for PICO questions reported in APPENDIX B. Our PS-matched IPDMA compared outcomes among 3,002 individuals with isolates susceptible to ethambutol who received the drug with 667 individuals with isolates susceptible to ethambutol who did not receive the drug (3). Receipt of ethambutol was associated with an aOR of 0.9 (95% CI, 0.7-1.1) for the outcome of treatment success and an aOR of 1.0 (95% CI, 0.9-1.2) for the outcome of death. However, the groups were not balanced with regard to other effective drugs; in particular, the patients not receiving ethambutol were more likely to receive linezolid. Thus, the ability to detect an effect of ethambutol might have been obscured. In a previous IPDMA, conducted before linezolid and bedaquiline use were common, administration of ethambutol among patients with susceptible isolates was associated with an aOR of 1.7 (95% CI, 1.2-2.4) for cure/complete versus failure/relapse and an aOR of 1.6 (95% CI, 1.1-2.4) for cure/complete versus failure/relapse/death, compared with patients receiving ethambutol whose isolates were resistant in vitro (144). Benefits. Our PS-matched IPDMA showed that treatment success was not significantly more likely with regimens containing ethambutol. However, a number of earlier studies indicate that ethambutol is associated with increased success when used to treat patients whose isolates are susceptible to the drug (143). Moreover, these earlier studies determined that the effectiveness of ethambutol is directly related to dose and that a dose of 25 mg/kg was more effective than a dose of 15 mg/kg (143). Dosages were not available for analysis in our PS-matched IPDMA. Finally, our PS-matched IPDMA did not address a key attribute of ethambutol, the prevention of the emergence of resistance, which is a substantial concern among patients with MDR-TB (145).

PICO Question 10-Ethambutol: In patients with MDR-TB, are outcomes safely improved when regimens include ethambutol compared with regimens that do not include ethambutol? Recommendation 10: We suggest including ethambutol in a regimen for treatment of patients with MDR-TB only when more effective drugs cannot be assembled to achieve a total of five effective drugs in the regimen (conditional recommendation, very low certainty in the evidence).

Harms. In a recent review of the tolerability of TB drugs, ethambutol was associated with serious adverse events in 6 of 1,325 (0.5%) patients (23). This is consistent with previous reports of ethambutol toxicity, in which optic neuropathy (including optic neuritis and retrobulbar neuritis) was attributed to ethambutol, manifesting as decreased visual acuity, scotomata, color blindness, or visual defects. These toxic effects are dose dependent and, although serious, are generally reversible if recognized promptly and the drug is discontinued or the dose reduced. Additional considerations. When ethambutol is used, some experts recommend the higher dose of 25 mg/kg, which is associated with increased efficacy but also slightly greater ocular toxicity.
Other experts recommend using 15 to 20 mg/kg, counting the drug predominantly as providing protection against the acquisition of additional resistance. All patients receiving ethambutol as part of an MDR-TB treatment regimen should be monitored monthly for signs of ocular toxicity, particularly visual impairment; if such toxicity is detected, ethambutol should be discontinued. If optic neuritis occurs in patients taking both ethambutol and linezolid, both drugs must be stopped. Many patients may be rechallenged successfully with linezolid once vision normalizes, but rechallenge with ethambutol is not recommended. The efficacy of ethambutol is not expected to differ in children compared with adults. A dose of 25 mg/kg is recommended by experts for children with MDR-TB. Two reviews of the ocular safety of ethambutol have shown a low risk of ocular toxicity (143,146). However, other drugs, such as linezolid, may also cause optic neuritis, and this should be taken into consideration when treating MDR-TB in children, especially because screening for visual loss is difficult in young children. Conclusions. We suggest including ethambutol in a regimen for treatment of patients with MDR-TB only when more effective drugs cannot be used to achieve a total of five effective drugs in the regimen (conditional recommendation, very low certainty in the evidence). The committee was divided on the value of recommending ethambutol, given that the drug was not associated with any benefit in terms of treatment success or mortality. Concerns about the comparability of IPDMA patient groups treated with and without ethambutol were noted. Unlike ethionamide/prothionamide, which has substantial undesirable effects, ethambutol had small undesirable effects (a low risk of serious adverse events requiring treatment discontinuation). Finally, our PS-matched IPDMA did not address the prevention of emergence of resistance, a recognized attribute of ethambutol in the treatment of drug-susceptible TB, although the applicability of this attribute to MDR-TB regimens is unknown. Overall, the committee suggested that ethambutol be used only when more effective drugs cannot be assembled to achieve the necessary five effective drugs in the regimen. Research needs. More evidence is needed on the beneficial role that ethambutol may play in preventing emergence of drug resistance in MDR-TB, including resistance to newer oral drugs. More research is also needed to improve the reliability of DST for ethambutol in different settings.

# Ethionamide and Prothionamide

Ethionamide and prothionamide are derivatives of isonicotinic acid, somewhat similar in structure to isoniazid. Ethionamide is a prodrug that requires activation, after which it inhibits the mycobacterial fatty acid synthesis necessary for cell wall synthesis and repair. Clinical studies indicate efficacy of these drugs, and they have been included in regimens for treatment of MDR-TB and for treatment of TB meningitis in adults and children (16,23,147,148). Summary of the evidence. Procedures and methodology to assemble and rank the certainty in the evidence are reported in APPENDIX A, with evidence profiles for PICO questions reported in APPENDIX B. Our PS-matched IPDMA showed that ethionamide/prothionamide was not associated with any benefit, even when the isolate was susceptible by phenotypic DST (3). Previous studies have shown, however, an increase in the likelihood of treatment success when ethionamide was included in the MDR-TB regimen (23,149).
These different results can be explained by the inclusion of newer and more effective drugs, which improved outcomes in the comparator groups and potentially biased the observed effect against ethionamide: fewer patients in the ethionamide/prothionamide group, compared with control subjects, received newer-generation fluoroquinolones (51% vs. 76%), amikacin (17% vs. 35%), and linezolid (5% vs. 18%), and more patients in the ethionamide/prothionamide group received kanamycin (51% vs. 12%) and capreomycin (27% vs. 16%). Benefits. No benefits were identified for inclusion of ethionamide/prothionamide in regimens, even when isolates were susceptible by phenotypic DST. A pediatric IPDMA of MDR-TB cases also did not show benefit from ethionamide/prothionamide; however, the majority of children (590 of 641) in this study did receive ethionamide or prothionamide, potentially limiting the ability to discern a benefit (113). Harms. The potential of ethionamide to cause adverse events can also limit its tolerability (120). A review of studies comparing regimens containing ethionamide or prothionamide (all studies before 1970) showed that adverse effects leading to discontinuation of treatment were equally frequent with ethionamide (11.3%; range, 6-42%) and prothionamide (11.9%; range, 6-40%). Adverse effects, when reported, included abnormal liver function tests, gastrointestinal intolerance, endocrine dysfunction, and hypothyroidism, the latter occasionally requiring treatment with thyroxine (147). Hypothyroidism is a common adverse effect of ethionamide (experienced by approximately 20% of patients) (150,151) and is particularly noted when the drug is used with p-aminosalicylic acid and in HIV-infected children (152). Gastrointestinal disturbance is common but usually resolves within the first 2 weeks of treatment in children. Additional considerations. When ethionamide/prothionamide is included in a regimen for which five other effective drugs cannot be assembled, experts recommend dose escalation (drug ramping) at the time of treatment initiation, as well as monitoring of thyroid-stimulating hormone for evidence of hypothyroidism requiring replacement (16). Of note, in the presence of an inhA mutation, many isolates will show cross-resistance between isoniazid and ethionamide/prothionamide (153).

PICO Question 11-Ethionamide/prothionamide: In patients with MDR-TB, are outcomes safely improved when regimens include ethionamide/prothionamide compared with regimens that do not include ethionamide/prothionamide? Recommendation 11: We suggest NOT including ethionamide/prothionamide in a treatment regimen for patients with MDR-TB if newer and more effective drugs are available to construct a regimen with at least five effective drugs (conditional recommendation, very low certainty in the evidence).

Conclusions. When the individualized treatment regimen for patients with MDR-TB contains newer-generation, more-effective drugs, the addition of ethionamide/prothionamide does not appear to provide benefit. Research needs. Research efforts are underway to identify potential boosters of ethionamide potency, which, if successfully coadministered, may result in an improved therapeutic index and an overall better risk-benefit ratio (154).

# Fluoroquinolones: Levofloxacin, Moxifloxacin, Ciprofloxacin, and Ofloxacin

The fluoroquinolones are a family of chemically related drugs characterized by a common core dual-ring structure (155).
Each successive generation, from ofloxacin to levofloxacin to moxifloxacin, broadened the spectrum of activity to include mycobacteria and increased antimycobacterial potency, as evidenced by lower minimum inhibitory concentrations (MICs) and increasing success in clinical use (156,157). Physicians began using these drugs to treat MDR-TB on the basis of in vitro data, with subsequent case series and observational studies showing efficacy (158), although none of the fluoroquinolones is currently indicated by regulatory authorities for the treatment of TB. In general, these drugs are well absorbed orally, have favorable pharmacological profiles for once-daily dosing, are generally well tolerated, and are now available in generic formulations (155). Summary of the evidence. Procedures and methodology to assemble and rank the certainty in the evidence are reported in APPENDIX A, with evidence profiles for PICO questions reported in APPENDIX B. In our PS-matched IPDMA, ofloxacin was used most commonly (n = 4,020), followed by levofloxacin (n = 3,872), moxifloxacin (n = 2,132), and ciprofloxacin (n = 431); 734 patients did not receive a fluoroquinolone and formed the comparison group for subsequent analyses, and 828 patients who received two or more quinolones were excluded (3). The groups were similar in age, sex, proportion AFB smear positive, proportion with cavitary disease on chest radiograph, and prior treatment with first-line drugs. However, the no-quinolone group had substantially more HIV coinfection (44% vs. 13-28% across the different quinolone groups), more past treatment with second-line drugs (52% vs. 8-25%), more quinolone resistance (86% vs. 6-33%), more second-line injectable resistance (74% vs. 12-34%), and more XDR-TB (73% vs. 4-19%). In terms of treatment, the no-fluoroquinolone group received less amikacin (7% vs. 18-43%) and kanamycin (13% vs. 26-65%) and more capreomycin (66% vs. 6-34%). Therefore, the median number of effective drugs (intensive phase) was lower in the no-fluoroquinolone group (2.1) than in the other groups (3.3-4.0). Treatment success was correspondingly lower in the no-fluoroquinolone group (35% vs. 55-68%), and failure/relapse was higher than, but not greatly different from, that in the other groups (13% vs. 2-10%); mortality, however, was much higher (40% vs. 11-15%). Benefits. In our PS-matched IPDMA, among patients with susceptible isolates, levofloxacin-containing regimens compared with no quinolone were associated with significantly more treatment successes (aOR, 4.2; 95% CI, 3.3-5.4) and significantly fewer deaths (aOR, 0.6; 95% CI, 0.5-0.7). Moxifloxacin, compared with no quinolone, was also associated with significantly more treatment successes (aOR, 3.8; 95% CI, 2.8-5.2) and significantly fewer deaths (aOR, 0.5; 95% CI, 0.4-0.6). In the subgroup with resistance to an injectable drug(s), levofloxacin or moxifloxacin was associated with a significant improvement in treatment success (aOR, 1.8; 95% CI, 1.2-2.8) and a reduction in death (aOR, 0.6; 95% CI, 0.4-0.8), although the corresponding adjusted risk differences were not statistically significant. In pairwise comparisons, both levofloxacin and moxifloxacin were associated with significantly better treatment outcomes than ofloxacin. The aORs of death were lower for the two later-generation quinolones when compared with ofloxacin (levofloxacin: aOR, 0.8; 95% CI, 0.6-0.9; moxifloxacin: aOR, 0.8; 95% CI, 0.6-1.0) (data not shown). Ofloxacin and ciprofloxacin are considered inferior quinolones against M. tuberculosis (144,149,159,160).
Levofloxacin and moxifloxacin did not differ significantly from each other. In a recent IPDMA describing treatment outcomes in children treated for MDR-TB (113), new and repurposed TB drugs, including later-generation fluoroquinolones, were not used enough to adequately evaluate efficacy, but most experts and emerging evidence suggest that the efficacy of fluoroquinolones noted in adults should be similar in children (83,161). Harms. In our PS-matched IPDMA, grade 3 adverse events were recorded systematically in a subset of 1,962 patients from a cohort study of MDR-TB at 27 sites in nine countries. Permanent discontinuation of fluoroquinolones because of adverse events was uncommon. Among 150 patients treated with levofloxacin, the drug was stopped permanently because of adverse events in 6 (4.0%). Among 398 patients treated with moxifloxacin, the drug was stopped permanently in 14 (3.5%). Among 1,167 patients treated with ofloxacin, the drug was stopped permanently in 56 (4.8%). An analysis of 56 clinical trials comparing quinolones against placebo or against other antimicrobial agents found generally similar adverse event profiles (3). Seven studies reported more frequent adverse events, and six studies reported fewer adverse events, in fluoroquinolone-treated patients. The most frequent adverse effects reported involve the gastrointestinal tract in 3% to 17% of patients and the CNS in 0.9% to 11% of patients (155,162). Allergic and other hypersensitivity reactions and other skin reactions occur in 0.5% to 2.8% of patients. Other adverse effects that occur in <1% of patients are cardiac (QT interval prolongation) and endocrine (hypoglycemia). Recently, FDA strengthened warnings in the prescribing information for the entire fluoroquinolone class regarding risks of severe hypoglycemia, certain mental health side effects, and tendonitis, as well as risks of ruptures or tears in the aorta (163). Safety concerns persist for long-term pediatric use of fluoroquinolones, especially regarding arthropathy. However, several long-term prospective and retrospective studies in children have confirmed that severe adverse effects of the fluoroquinolones, including musculoskeletal, neurological, and QT-interval prolongation effects, are rare (164-166). Additional considerations. As later-generation fluoroquinolones have become generic, their cost has decreased greatly and their use has expanded to many different indications, so procurement and availability have not been problematic; fluoroquinolone resistance, however, is more common than resistance to the aminoglycosides. Some foods, beverages, and antacids with high content of divalent or trivalent cations can reduce fluoroquinolone absorption (16,167-169). When feasible, taking fluoroquinolones on an empty stomach minimizes the potential for delayed or reduced fluoroquinolone absorption. Moxifloxacin and, to a lesser extent, levofloxacin prolong the QT interval. Moxifloxacin may necessitate ECG monitoring, especially if patients have a baseline QTc >500 milliseconds or take other QT-prolonging drugs. Pharmacokinetic modeling studies for levofloxacin and moxifloxacin use in children are ongoing: recent data suggest that, for children, a levofloxacin dose of at least 15 to 20 mg/kg daily (170,171) and a moxifloxacin dose of 10 to 15 mg/kg/d are effective (based on data showing exposures that were too low at 7.5-10 mg/kg) (172).
Although a modeling paper suggests a dose of 25 mg/kg/d for infants up to 3 months of age and 20 mg/kg/d for toddlers, safety at these doses has not been verified (173). Conclusions. In our PS-matched IPDMA, patients treated with moxifloxacin and levofloxacin had better outcomes than patients not treated with any fluoroquinolone, or than patients treated with ofloxacin, after adjustment for numerous covariates that were themselves strong determinants of outcome, such as clinical characteristics, extent of drug resistance, and number of other effective drugs in the treatment regimen. Moxifloxacin or levofloxacin should be included in a regimen to achieve a total of five effective drugs for the treatment of patients with MDR-TB. Our recommendation for the use of moxifloxacin or levofloxacin is strong despite very low certainty in the evidence, because we viewed the significant reduction in mortality, improved treatment success, and relatively few adverse effects associated with MDR-TB treatment that includes these later-generation fluoroquinolones (compared with no fluoroquinolones) as having a particularly favorable balance of benefits over harms. Research needs. Ongoing studies are evaluating fluoroquinolone dose optimization (ClinicalTrials.gov Identifier: NCT01918397) and dosing regimens, as preliminary work suggests these drugs may be more effective at higher doses (174). The activity, safety, and tolerability of higher doses of fluoroquinolones against isolates with modestly elevated MICs should also be explored. Modest increases in toxicity can be minimized and managed by clinicians and may still be preferable to treating MDR-TB without quinolones. Higher doses may also limit acquired resistance (175,176).

# Injectables: Amikacin, Capreomycin, Kanamycin, and Streptomycin

The term "injectable drugs" in general encompasses four drugs: the aminoglycoside antibiotics streptomycin, amikacin, and kanamycin, and the cyclic polypeptide antibiotic capreomycin (155,162,177). Aminoglycosides are highly cationic and water soluble but insoluble in organic solvents and hydrophobic environments, which explains much about their pharmacology: limited ability to cross lipid membranes, poor absorption from the gastrointestinal tract, and poor penetration into the CNS. Consequently, they are administered parenterally, through slow intravenous infusion or intramuscular injection (177). Streptomycin's core ring structure differs from that of all other aminoglycosides, explaining in part why cross-resistance between streptomycin and other aminoglycosides is uncommon (155,177). These drugs block protein synthesis at the ribosomal level by binding to a highly conserved nucleotide sequence in the prokaryotic 16S ribosomal subunit, the mRNA decoding region, where translation of mRNA codon to aminoacyl-transfer RNA anticodon normally takes place (155,177). Aminoglycosides share three important characteristics: 1) concentration-dependent killing, 2) a postantibiotic effect, and 3) synergism with other antibacterial drugs (155). These drugs kill bacteria in proportion to drug concentration, so a single daily dose is more effective than divided doses or a continuous infusion. Moreover, antibacterial activity continues for many hours after serum levels become undetectable, the so-called "postantibiotic" effect (155). Summary of the evidence. Procedures and methodology to assemble and rank the certainty in the evidence are reported in APPENDIX A, with evidence profiles for PICO questions reported in APPENDIX B.
In our PS-matched IPDMA, treatment outcomes for 613 patients who did not receive an injectable drug were compared with those of 1,554 patients treated with streptomycin, 4,330 treated with kanamycin, 2,275 treated with amikacin, and 2,401 treated with capreomycin. Patients who received two or more injectable drugs (n = 875) were excluded, because the outcome could not be ascribed to one of the drugs (3).

PICO Question 12-Fluoroquinolones: In patients with MDR-TB, are outcomes safely improved when regimens include fluoroquinolones compared with regimens that do not include fluoroquinolones? Recommendation 12: We recommend including moxifloxacin or levofloxacin in a regimen for treatment of patients with MDR-TB (strong recommendation, low certainty in the evidence). Our recommendation for the use of moxifloxacin or levofloxacin is strong despite very low certainty in the evidence because we viewed the significant reduction in mortality, improved treatment success, and relatively few adverse effects associated with MDR-TB treatment that includes these later-generation fluoroquinolones (compared with no fluoroquinolones) as having a particularly favorable balance of benefits over harms.

PICO Question 13-Injectables: In patients with MDR-TB, are outcomes safely improved when regimens include an injectable compared with regimens that do not include an injectable? Recommendation 13: We suggest including amikacin or streptomycin in a regimen for treatment of patients with MDR-TB when susceptibility to these drugs is confirmed (conditional recommendation, very low certainty in the evidence). Because of their toxicity, these drugs should be used when more effective or less toxic therapies cannot otherwise be assembled to achieve a total of five effective drugs. We suggest NOT including kanamycin or capreomycin (conditional recommendation, very low certainty in the evidence).

The no-injectable comparison group had notably fewer AFB smear-positive patients and fewer patients with cavitary disease. On the other hand, patients in this group had more past treatment with second-line drugs; more resistance to quinolones and injectables and more XDR-TB; and much more treatment with linezolid. In these respects (more past treatment with second-line drugs, worse pretreatment drug resistance), the capreomycin group resembled the no-injectable group more than the groups treated with the other three drugs did. Analysis was based on patients whose isolates were susceptible to the drug they received, with exceptions noted below. Benefits. INJECTABLE DRUGS COMPARED WITH NO INJECTABLE DRUG. Compared with 613 patients treated without an injectable drug, 1,554 streptomycin-treated patients had increased treatment success (aOR, 1.5; 95% CI, 1.1-2.1). In the subgroup with quinolone resistance (17.8% of the total), streptomycin-treated patients had increased treatment success (aOR, 3.0; 95% CI, 1.3-6.6). Compared with 613 patients not treated with injectable drugs, 2,275 amikacin-treated patients had increased treatment success (aOR, 2.0; 95% CI, 1.5-2.6). In the subgroup with quinolone resistance, amikacin was also associated with increased treatment success (aOR, 3.0; 95% CI, 1.6-5.6). Among patients with XDR-TB, amikacin was associated with reduced deaths (aOR, 0.4; 95% CI, 0.2-0.8). In contrast, neither kanamycin nor capreomycin was associated with any benefit on treatment success or death versus no injectable drug at all. To the contrary, kanamycin treatment was associated with fewer treatment successes (aOR, 0.5; 95% CI, 0.4-0.6).
Capreomycin was associated with an increased risk of death (aOR, 1.4; 95% CI, 1.1-1.7). In XDR-TB, capreomycin was associated with increased deaths (aOR, 3.4; 95% CI, 2.7-4.3) when compared with regimens with no injectable drug. In a pediatric IPDMA, the use of second-line injectable agents (amikacin, kanamycin, capreomycin) in children with confirmed MDR-TB was associated with more treatment success compared with no second-line injectable agents (aOR, 2.94; 95% CI, 1.05-8.28; P = 0.041) (113). However, a high proportion of children with less severe disease who received no second-line injectable agents still did well; therefore, children may be spared injectables and the associated toxicities if newer, more effective drugs can be included in an all-oral regimen (113). INJECTABLE DRUGS COMPARED AGAINST EACH OTHER. Compared with streptomycin-treated patients, patients treated with amikacin had increased treatment success (aOR, 1.7; 95% CI, 1.3-2.2) but no significant difference in death (aOR, 1.0; 95% CI, 0.8-1.2). Kanamycin and capreomycin were both significantly inferior to streptomycin in every respect. Amikacin was superior to kanamycin and capreomycin in every respect, with higher treatment success rates, lower death rates, or both. In the quinolone-resistant subgroup, on the other hand, use of amikacin did not have a significant effect on treatment success (aOR, 1.7; 95% CI, 0.9-3.4) or death (aOR, 1.2; 95% CI, 0.7-2.0) compared with use of streptomycin. Harms. All aminoglycosides and capreomycin share important toxicities, especially nephrotoxicity, ototoxicity, and electrolyte disturbances, as well as other less common toxicities (155,162,177). The ototoxicity can be vestibular, resulting in loss of balance, or cochlear, resulting in hearing loss. Because these drugs date back to the early years of antibiotic discovery and development, there is vast published experience with their toxicities. In the treatment of MDR-TB, the risk of toxicity is substantial, because treatment lasts many months. Ototoxicity may be severe and irreversible, but with close monitoring it can be minimized or prevented. Nephrotoxicity is often reversible when identified early and addressed appropriately; ototoxicity, however, can progress even after the drug is stopped. Monitoring (measuring drug levels; monthly high-quality audiometry, electrolytes, and serum creatinine) and skilled management can prevent or mitigate these effects and are part of standard practice for MDR-TB experts. The risk of hearing loss increases with increasing duration of treatment. For aminoglycosides in general, the estimated frequency of nephrotoxicity is 5% to 15%, and that of ototoxicity is 2% to 14%, including 2% to 10% cochlear and 3% to 14% vestibular (155). In a survey of clinical trials performed between 1975 and 1982, totaling approximately 10,000 patients, the incidence of amikacin nephrotoxicity was 8.7% (178). Cumulative dose and duration predicted toxicity; older age, dehydration/hypovolemia/hypotension, prior aminoglycoside treatment, coexisting hepatic or renal disease, and concomitant medications were important as well (155). Significant hearing impairment exceeds 50% in some reported series, renal dysfunction approaches 50%, and vestibular dysfunction as high as 20% has been reported (155,162,177). At the other extreme, series have been published reporting no significant or permanent toxicity, including with longer-term use (155,162,177).
Direct comparisons of toxicity among the four injectable drugs used in TB are scarce. Experience suggests streptomycin may be more ototoxic, but that may be a consequence of the far longer and wider use of streptomycin. Observational studies suggest amikacin may be more ototoxic than the other drugs (179-181). In children, ototoxicity (hearing loss) has been documented in up to 24% of cases in a retrospective study, a finding with serious implications for the development of normal speech (182). The pain of intramuscular injections can be safely reduced by adding lidocaine to amikacin injections without interference with pharmacokinetics (183). The recommended dose range of amikacin is 15 to 20 mg/kg as a single daily dose. Additional considerations. When injectables are used, serum creatinine and electrolyte measurements, clinical assessment for vertigo and tinnitus, high-quality audiometry (including hearing frequencies of 6,000-8,000 Hz, because high-frequency hearing loss is seen first), and clinical examinations should be conducted at least monthly, or more frequently if adverse effects occur. Limited data suggest there may be a genetic predisposition for hearing loss associated with specific mitochondrial gene mutations (184). Although routine genetic testing is not currently suggested, the provider should be aware of these genetic mutations and the risks of ototoxicity (185,186). Because bactericidal activity is concentration dependent, high peak levels and single daily dosing are preferred to divided doses. Because of the postantibiotic effect, toxicity can be minimized by allowing trough levels between doses to remain below detectable levels for many hours. Some experts take advantage of these pharmacokinetic and pharmacodynamic properties with thrice-weekly dosing, especially after conversion of cultures to negative, decreasing healthcare system demands and the requirement for daily injections (16,177). Intramuscular injections are painful, and 6 months of injections can be traumatic, especially to younger patients. Patients' values may differ sharply from providers' in this respect and should be considered when determining whether to use injectable agents. Conclusions. In our PS-matched IPDMA, the use of amikacin and streptomycin, when the patient's isolate was susceptible to these drugs, was associated with an increase in treatment success compared with the control group not receiving these injectables. However, because of their toxicity and modest efficacy compared with other, less toxic drugs, these drugs should be reserved for when more effective or less toxic therapies cannot be assembled to achieve a total of five effective drugs. In our analyses, amikacin and streptomycin had similar aORs for treatment success in the minority of patients with fluoroquinolone resistance. Kanamycin and capreomycin were ineffective. We recommend against using kanamycin or capreomycin. As is the case for adults, the use of amikacin and streptomycin in children should also be reserved for when more effective or less toxic therapies cannot be assembled to achieve a total of five effective drugs. Research needs. N-acetylcysteine, a thiol-containing antioxidant, may limit the severity and irreversibility of aminoglycoside-induced ototoxicity (187,188), warranting additional research into this and other otoprotective measures that may improve the balance of benefits and harms of these injectable agents.
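As a concrete illustration of the weight-based arithmetic behind the amikacin dosing noted above, the following minimal Python sketch computes the once-daily dose range from the 15 to 20 mg/kg figure cited in the text. The function name and the absence of rounding are illustrative assumptions; this is a sketch, not clinical software:

```python
def amikacin_once_daily_range_mg(weight_kg: float) -> tuple[float, float]:
    """Return the (low, high) once-daily amikacin dose in mg for a given
    body weight, using the 15-20 mg/kg single-daily-dose range cited in
    the text. Illustrative only: actual dosing is individualized and
    guided by serum drug concentrations and toxicity monitoring."""
    if weight_kg <= 0:
        raise ValueError("weight must be positive")
    return (15.0 * weight_kg, 20.0 * weight_kg)

# Example: a 60-kg adult -> (900.0, 1200.0) mg as a single daily dose.
print(amikacin_once_daily_range_mg(60))
```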
# Linezolid

Linezolid is an oxazolidinone antibiotic that inhibits bacterial protein synthesis by preventing the fusion of the 30S and 50S ribosomal subunits (189). It also binds to human mitochondria and inhibits mitochondrial protein synthesis, the mechanism of its toxicity in clinical use (190,191). Linezolid was initially used off-label, in the absence of consistent scientific evidence, as part of the regimen for difficult-to-treat cases of MDR- and XDR-TB (192). The first large retrospective observational study suggested linezolid was effective, but with frequent and often severe adverse events (192). The same study suggested, for the first time, that reducing the daily dose from 1,200 mg to 600 mg might be associated with fewer adverse events and improved tolerability. Several systematic reviews, IPDMAs, and one controlled clinical trial have been published on linezolid for treatment of MDR-TB (87,193-196). The clinical trial confirmed previous observational findings, in particular the effectiveness of linezolid and its potential toxicity (195). A meta-analysis of 121 patients with MDR-TB from 11 countries treated with linezolid confirmed linezolid's effectiveness (culture conversion, 93.5%; treatment success, 81.8%) and showed that the 600-mg daily dose was safer than the 1,200-mg dose (adverse events in 46.7% vs. 74.5%, respectively) without loss of effectiveness (196). A clinical trial also confirmed the efficacy, safety, and tolerability of linezolid in patients with XDR-TB (194). There are insufficient data on the effectiveness of initiating treatment with doses <600 mg daily to recommend lower starting doses. Recently, the importance of therapeutic drug monitoring to reduce adverse events potentially due to linezolid has been emphasized (197,198). Summary of the evidence. Linezolid was used in regimens for MDR- and XDR-TB across 38 studies (3). The initial dose of linezolid was 1,200 mg/d for 91 patients in five studies, 600 mg/d for 784 patients in 28 studies, and 300 mg/d for 99 patients in five studies (3). Procedures and methodology to assemble and rank the certainty in the evidence are reported in APPENDIX A, with evidence profiles for PICO questions reported in APPENDIX B. Benefits. In our PS-matched IPDMA, patients who received linezolid-containing regimens were more likely to achieve treatment success (aOR, 3.4; 95% CI, 2.6-4.5) and had a lower rate of death (aOR, 0.3; 95% CI, 0.2-0.3) than those who did not receive linezolid. The effect on treatment success was more pronounced when only studies from high-income countries were included (aOR, 3.9; 95% CI, 2.6-5.8). The greatest impact was found in patients with XDR-TB, in whom the aOR for successful treatment versus failure or relapse was 6.3 (95% CI, 3.9-10.1) and the aOR for death was 0.1 (95% CI, 0.1-0.2). When PS-matched pairs analyses were performed comparing the effects of bedaquiline and linezolid with those of no bedaquiline or linezolid, the combination of bedaquiline and linezolid was associated with an aOR of 2.7 (95% CI, 1.5-4.9) for success versus failure/relapse and of 0.3 (95% CI, 0.2-0.4) for death versus success/failure/relapse. The efficacy of linezolid in children with TB has been shown in two studies of children <18 years of age, albeit with few patients (199,200).
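A brief note on reading the effect estimates used throughout this section: an adjusted odds ratio (aOR) compares the odds of an outcome between patients who did and did not receive a drug, after adjustment for covariates. In unadjusted form, with $a$ successes and $b$ failures among treated patients and $c$ successes and $d$ failures among comparison patients:

$$\mathrm{OR} = \frac{a/b}{c/d}$$

Thus an aOR above 1 for treatment success (such as the 3.4 for linezolid above) indicates higher adjusted odds of success with the drug, whereas an aOR below 1 for death (such as 0.3) indicates lower adjusted odds of death.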
Harms. Adverse effects associated with linezolid in patients with TB include neurotoxicity (i.e., peripheral neuropathy and optic neuritis), myelosuppression, hyperlactatemia, and diarrhea, all of which are presumably secondary to the inhibition of mitochondrial protein synthesis (190,201). A published systematic review of 12 studies conducted in 11 countries reported an adverse event rate of 58.9% (hematological, neurological, and gastrointestinal), noted predominantly in individuals treated with dosages >600 mg/d (196). Hematological toxicity can occur quickly after starting treatment and can involve any cell line. Neurotoxicity, including optic neuritis and peripheral neuropathy, occurs later, usually after 12 to 20 weeks of treatment. Toxicity has been associated with trough levels >2.0 μg/ml (202-204).

PICO Question 14-Linezolid: In patients with MDR-TB, are outcomes safely improved when regimens include linezolid compared with regimens that do not include linezolid? Recommendation 14: We suggest including linezolid in a regimen for the treatment of patients with MDR-TB (conditional recommendation, very low certainty in the evidence). This is a conditional recommendation despite linezolid-containing regimens showing a large reduction in mortality and improved treatment success, similar to bedaquiline and later-generation fluoroquinolones, because linezolid had more adverse effects and the balance of benefits and harms was less favorable compared with those drugs.

Adverse effects, especially myelosuppression, are also noted to be common at the currently recommended dose of 10 mg/kg twice daily in children <10 years of age. Additional considerations. Although use of linezolid for the FDA-approved 28 days or less for non-TB indications is associated with an acceptable adverse effect profile (205), data on the longer-term use necessary for MDR-TB are limited (3,195,200). Strict clinical monitoring for potential toxicity (in particular peripheral neuropathy, optic neuritis, anemia, and leukopenia) is necessary because of the risk of adverse events associated with long-term use of linezolid (16). If optic neuritis occurs, many patients may be rechallenged successfully with linezolid once vision normalizes. Assessment for visual toxicity must continue after restarting linezolid. Some patients can be rechallenged with the full dose; others can avoid recurrent visual toxicity with a reduced linezolid dose of 300 mg daily (195). Linezolid should generally not be administered to patients taking serotonergic agents, such as monoamine oxidase inhibitors, because of the potential for serious CNS reactions, such as serotonin syndrome. Because monoamine oxidase type A deaminates serotonin, and selective serotonin reuptake inhibitors potentiate the action of serotonin by inhibiting its neuronal reuptake, administration of linezolid concurrently with a selective serotonin reuptake inhibitor can lead to serious reactions, such as serotonin syndrome or neuroleptic malignant syndrome-like reactions (16). One randomized clinical trial demonstrated that lowering the dose from 600 mg/d to 300 mg/d after culture conversion reduced toxicity (195).
For children, one modeling study reported that a linezolid dose of 15 mg/kg in full-term neonates and infants aged <3 months and 10 mg/kg in toddlers, administered once daily, achieved a cumulative fraction of response of >90%, with <10% achieving a linezolid area under the concentration-versus-time curve from 0 to 24 hours (AUC0-24) associated with toxicity (173). On the basis of modeling of pharmacokinetic data from 48 children, WHO and the Sentinel Project recommend pediatric doses of linezolid of 15 mg/kg once daily for children <15 kg and 10 to 12 mg/kg once daily for those weighing >15 kg (7,17). It is common practice for patients taking linezolid to be prescribed vitamin B6 (16). Conclusions. In our PS-matched IPDMA, patients who received linezolid-containing regimens were more likely to achieve treatment success and had a lower rate of death than those who did not receive linezolid. We suggest including linezolid in a regimen for the treatment of patients with MDR-TB. This is a conditional recommendation despite linezolid-containing regimens showing a large reduction in mortality and improved treatment success, similar to bedaquiline and later-generation fluoroquinolones, because linezolid had more adverse effects and the balance of benefits and harms was less favorable compared with those drugs. Research needs. Clinical trials of combinations of new chemical entities plus linezolid, including administration at different doses and durations to optimize its therapeutic effect while minimizing toxicity, are underway (Nix-TB, ClinicalTrials.gov Identifier: NCT02333799; ZeNix [Safety and Efficacy of Various Doses and Treatment Durations of Linezolid Plus Bedaquiline and Pretomanid in Participants with Pulmonary TB, XDR-TB, Pre-XDR-TB or Non-responsive/Intolerant MDR-TB], NCT03086486). Further linezolid pharmacokinetic and safety data are needed to determine optimal dosing in adults and children.

# Macrolides: Azithromycin and Clarithromycin

The macrolides azithromycin and clarithromycin have unclear efficacy and an unclear role in the treatment of MDR-TB (23). Macrolides are commonly used to treat upper and lower respiratory tract infections and have an essential role in the treatment of nontuberculous mycobacteria (206). They are believed to have immunomodulatory and antiinflammatory effects. M. tuberculosis has intrinsic, inducible resistance to clarithromycin (207,208), and in vivo murine TB models confirm the lack of activity of macrolides (209). Clarithromycin may increase linezolid serum exposure when used in combination (210), prompting some consideration of potential synergy between macrolides and other MDR-TB drugs (211). WHO does not recommend use of macrolides to treat MDR-TB (23). Summary of the evidence. Procedures and methodology to assemble and rank the certainty in the evidence are reported in APPENDIX A, with evidence profiles for PICO questions reported in APPENDIX B. In our PS-matched IPDMA, patients who received macrolides (n = 1,067) were more likely to have been treated with second-line drugs and to have resistance to fluoroquinolones or any second-line injectable (3).

PICO Question 15-Macrolides: In patients with MDR-TB, are outcomes safely improved when regimens include macrolides compared with regimens that do not include macrolides? Recommendation 15: We recommend NOT including the macrolides azithromycin and clarithromycin in a treatment regimen for patients with MDR-TB (strong recommendation, very low certainty in the evidence). Our recommendation against the use of the macrolides azithromycin and clarithromycin in MDR-TB treatment is strong despite the evidence being judged to be of very low certainty because we viewed the increased mortality and decreased likelihood of treatment success associated with the use of this drug class as having a notably unfavorable balance of benefits to potential harms.

PICO Question 16-p-Aminosalicylic Acid: In patients with MDR-TB, are outcomes safely improved when regimens include p-aminosalicylic acid compared with regimens that do not include p-aminosalicylic acid? Recommendation 16: We suggest NOT including p-aminosalicylic acid in a treatment regimen for patients with MDR-TB (conditional recommendation, very low certainty in the evidence). When the individualized treatment regimen for patients with MDR-TB contains newer-generation, more-effective drugs, the addition of p-aminosalicylic acid does not appear to provide a benefit.
Patients who received macrolides were also more likely to receive later-generation fluoroquinolones, capreomycin, and linezolid. In adjusted analyses, patients who received macrolides were less likely to achieve treatment success (aOR, 0.6; 95% CI, 0.5-0.8) and had a higher rate of death (aOR, 1.6; 95% CI, 1.2-2.0) than patients who did not receive macrolides. Benefits. The available evidence does not support the use of macrolides in the treatment of MDR-TB. Harms. In our PS-matched IPDMA, patients who received macrolides were less likely to achieve treatment success (aOR, 0.6; 95% CI, 0.5-0.8) and had a higher rate of death (aOR, 1.6; 95% CI, 1.2-2.0) than patients who did not receive macrolides. Conclusions. Macrolides, specifically azithromycin and clarithromycin, should not be included in a regimen for the treatment of patients with MDR-TB. Our recommendation against the use of the macrolides azithromycin and clarithromycin in MDR-TB treatment is strong despite the evidence being judged to be of very low certainty because we viewed the increased mortality and decreased likelihood of treatment success associated with the use of this drug class as having a notably unfavorable balance of benefits to potential harms. Research needs. Further research may be warranted on newer-generation macrolides and to elucidate whether there is potential synergy of macrolides with linezolid or other second-line agents.

# p-Aminosalicylic Acid

One of the first agents found to be effective against TB (212), p-aminosalicylic acid has been widely used clinically, although its precise mode of action remains uncertain (213). With the discovery of more potent drugs, including rifampin, p-aminosalicylic acid, initially used in combination with streptomycin, was no longer included in first-line regimens. It is now used as part of treatment regimens for MDR- and XDR-TB, although its benefits are not clear and toxicity limits its use. Current guidance recommends that p-aminosalicylic acid be used to compose a regimen when five effective drugs cannot otherwise be assembled (16,23). Summary of the evidence. Procedures and methodology to assemble and rank the certainty in the evidence are reported in APPENDIX A, with evidence profiles for PICO questions reported in APPENDIX B. In our PS-matched IPDMA, p-aminosalicylic acid was not associated with any benefit on treatment success (aOR, 0.8; 95% CI, 0.7-1.0) and was associated with increased death (aOR, 1.2; 95% CI, 1.1-1.4) (3).
The potential to cause adverse events (12.2% in a previous meta-analysis) has also limited the tolerability of p-aminosalicylic acid (23). Benefits. No association with any benefit on treatment success was identified for p-aminosalicylic acid in our PS-matched IPDMA. In an IPDMA of children with MDR-TB, no benefit was identified for the inclusion of p-aminosalicylic acid in regimens (aOR, 0.75; 95% CI, 0.25-1.96; P = 0.483) (113). Harms. Gastrointestinal distress is common with p-aminosalicylic acid but is reported to occur less often with the PASER formulation than with older preparations (16). Rare hepatotoxicity and thrombocytopenia have been reported, as has reversible hypothyroidism, particularly when the drug is used concomitantly with ethionamide (120). In the setting of hypothyroidism, some experts provide thyroid replacement therapy rather than discontinuing p-aminosalicylic acid. Clinical experience has shown p-aminosalicylic acid to be better tolerated with respect to gastrointestinal disturbance in children than in adults, although hypothyroidism remains common in children as well. Additional considerations. Indirect comparison suggests that ethionamide may be better than p-aminosalicylic acid if a fifth drug is needed to construct a regimen with at least five effective drugs. Older studies have shown p-aminosalicylic acid to be a good companion drug for protecting other drugs from the development of resistance and have also shown efficacy in combination with first-line drugs (214,215). Pediatric DR-TB experts have recently suggested that in children p-aminosalicylic acid may replace an injectable agent in the absence of better drugs (83). When p-aminosalicylic acid is used, experts recommend monitoring thyroid-stimulating hormone, electrolytes, blood counts, and liver function tests. Conclusions. When the individualized treatment regimen for patients with MDR-TB contains newer-generation, more-effective drugs, the addition of p-aminosalicylic acid does not appear to provide benefit. Research needs. Given the limited armament of TB drugs available for treating MDR-TB, some experts have advocated for the evaluation of dose-optimized p-aminosalicylic acid to improve efficacy while minimizing toxicity (216). Research on whether p-aminosalicylic acid, among other agents, may offer protection against the acquisition of resistance to other drugs in the regimen would also be valuable.

# Pyrazinamide

Pyrazinamide, a nicotinamide analog, is a prodrug that is converted in vivo into pyrazinoic acid, which interferes with mycobacterial fatty acid synthase. Pyrazinamide has demonstrated effectiveness against M. tuberculosis; it is included in standard regimens for treatment of drug-susceptible TB and is also used in regimens for MDR-TB (11,16,23). However, recent population-based studies conducted as part of multicountry surveillance activities have shown that pyrazinamide resistance is highly associated with rifampin resistance (217,218). This finding, in conjunction with evidence that pyrazinamide efficacy is reduced in the setting of pncA gene mutations (219-222), underscores the importance of documenting drug susceptibility to pyrazinamide by WGS, molecular tests, or traditional DST if the drug is to be included as part of a regimen for MDR-TB. Pyrazinamide has demonstrated substantial sterilizing activity as part of combination regimens and has allowed for treatment shortening in drug-susceptible TB (223,224).
Higher doses have been shown to be more effective in animal models and phase 2A studies, but doses of 40 to 70 mg/kg were found to be too toxic to be pursued further in human studies (224). Summary of the evidence. Procedures and methodology to assemble and rank the certainty in the evidence are reported in APPENDIX A, with evidence profiles for PICO questions reported in APPENDIX B. Our PS-matched IPDMA compared outcomes in 1,986 individuals with isolates susceptible to pyrazinamide who received the drug with 307 individuals with isolates susceptible to pyrazinamide who did not receive the drug (3). However, the groups were not balanced with regard to other effective drugs; in particular, the patients not receiving pyrazinamide were more likely to receive linezolid and a later-generation fluoroquinolone. Thus, the ability to detect an effect of pyrazinamide might have been partially obscured. In a previous IPDMA, conducted before linezolid and bedaquiline were more commonly used, administration of pyrazinamide among patients with susceptible isolates was associated with an aOR of 1.9 (95% CI, 1.3-2.9) for cure/complete versus failure/relapse and an aOR of 1.6 (95% CI, 1.3-2.1) for cure/complete versus failure/relapse/death, compared with patients receiving pyrazinamide whose isolates were resistant in vitro (6). Benefits. In our PS-matched IPDMA, treatment success was significantly less likely with regimens containing pyrazinamide (aOR, 0.7; 95% CI, 0.5-0.9), but death was also significantly less frequent (aOR, 0.7; 95% CI, 0.6-0.8). This paradox may be due to confounding in our PS-matched IPDMA, as patients who did not receive pyrazinamide were substantially more likely to receive linezolid. Moreover, our PS-matched IPDMA did not assess the potential ability of pyrazinamide to contribute to treatment shortening, as has been achieved in drug-susceptible TB. Because pyrazinamide is associated with increased success only when used to treat patients whose isolates are susceptible to the drug (144), whenever feasible, the decision to include pyrazinamide in a regimen should be based on pyrazinamide susceptibility results. In an IPDMA of children with MDR-TB, the addition of pyrazinamide to a regimen showed no benefit in the treatment of confirmed MDR-TB cases (aOR, 1.63; 95% CI, 0.41-6.56; P = 0.484) (113). Pyrazinamide resistance was, however, not tested, and improved selection of cases on the basis of resistance might change this outcome. Harms. Pyrazinamide has been used extensively in the treatment of TB, and its toxicities are well documented (225). The most common is gastrointestinal upset or intolerance. In a recent review of the tolerability of TB drugs, pyrazinamide was associated with serious adverse events in 56 of 2,023 (2.8%) patients (226). This is consistent with previous reports of pyrazinamide toxicity (11,225,227). Hepatic enzyme elevations are common with pyrazinamide, and significant hepatotoxicity, although less common, can occur. Modestly elevated serum uric acid levels are also expected, although the clinical significance of this finding is unclear. Nongouty polyarthralgias and hypersensitivity reactions can occur. Flares of clinical gout can also occur, especially in those with a history of gouty arthritis. Additional considerations. All patients receiving pyrazinamide as part of an MDR-TB treatment regimen should be monitored carefully for signs or symptoms of hepatotoxicity and should have their pyrazinamide dose held or decreased if such toxicity is detected. Isolated increases in uric acid without symptoms of gout are common and are not an indication to discontinue the drug. Recent population-based studies have found that pyrazinamide resistance is common in the setting of MDR-TB, with some regional variability, suggesting that pyrazinamide susceptibility should be confirmed if the drug is included in the regimen (217,228,229). Although there are known challenges related to accurately determining phenotypic DST for pyrazinamide, recent highly predictive DNA sequencing techniques show significant promise for newer genomic approaches (230). When pyrazinamide is included in the regimen, most experts use doses of 25 to 40 mg/kg/d orally. The recommended dose of pyrazinamide in children is 30 to 40 mg/kg daily. Conclusions. We suggest including pyrazinamide in a regimen for the treatment of patients with MDR-TB when the M. tuberculosis isolate has not been found resistant to pyrazinamide. Research needs. Pyrazinamide is being evaluated as part of novel treatment regimens for both DS- and MDR-TB in multiple clinical trials (56). Development of a reliable, simple molecular test for pyrazinamide susceptibility is a critically important research need. Dose optimization studies for pyrazinamide are warranted, as higher doses may be more efficacious but also more toxic.

Table 9 footnotes (dosing notes recovered from the table, which lists doses for adults and children):
† Levofloxacin doses of up to 1,250 mg have been used safely when needed to achieve therapeutic concentrations. A recent population pharmacokinetic study in South African children found that higher levofloxacin doses, from 18 mg/kg/d for younger children up to 40 mg/kg/d for older children, may be required to achieve adult-equivalent exposures (170).
‡ Higher moxifloxacin doses have been used safely when the isolate is resistant to ofloxacin and the minimum inhibitory concentration for levofloxacin or moxifloxacin suggests higher doses may overcome resistance. Higher doses are also used in cases of malabsorption.
§ Cycloserine doses can be divided if needed (typically twice daily). Doses >750 mg are difficult for many patients to tolerate.
‖ Cycloserine dose may be lowered if serum concentrations exceed 35 μg/ml, even if the patient is not experiencing toxicity, to prevent central nervous system toxicity.
¶ Modified from the adult intermittent dose of 25 mg/kg, accounting for the larger total body water content and faster clearance of injectable drugs in most children. Dosing can be guided by serum concentrations.
** Ethionamide can be given at bedtime or with a meal to reduce nausea. Experienced clinicians suggest starting with 250 mg once daily and gradually increasing the dose over 1 week. Serum concentrations may be useful in determining the appropriate dose. Few patients tolerate 500 mg twice daily.
†† Studies are ongoing evaluating meropenem at higher doses (ClinicalTrials.gov Identifiers: NCT03174184 and NCT02349841).
‡‡ Some experts prescribe p-aminosalicylic acid at 6 g, and up to 12 g, administered once daily (16,216).
§§ For children, some experts prescribe p-aminosalicylic acid at 200 mg/kg administered once daily (216).
‖‖ Isoniazid is tested at two concentrations. Some experts use these results (or resistance conferred through mutations in inhA) to select a higher dose when the isolate tests resistant at the lower concentration and susceptible at the higher concentration. The higher dose may achieve in vivo concentrations sufficient to overcome low-level resistance (16,216).
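Several of the doses above are weight based; as a worked illustration of that arithmetic, the short Python sketch below encodes a small subset of the mg/kg/d figures quoted in the surrounding text and footnotes (pyrazinamide 25-40 mg/kg/d for adults and 30-40 mg/kg/d for children; p-aminosalicylic acid 200 mg/kg once daily for children). This is a minimal sketch under those stated figures, not a formulary; the dictionary, function name, and drug labels are illustrative assumptions:

```python
# Weight-based daily dose ranges (mg/kg/d) quoted in the surrounding text
# and Table 9 footnotes; illustrative subset only, not a complete formulary.
MG_PER_KG_PER_DAY = {
    "pyrazinamide (adult)": (25, 40),
    "pyrazinamide (child)": (30, 40),
    "p-aminosalicylic acid (child, once daily)": (200, 200),
}

def daily_dose_range_mg(drug: str, weight_kg: float) -> tuple[float, float]:
    """Return the (low, high) total daily dose in mg for the listed drugs."""
    low, high = MG_PER_KG_PER_DAY[drug]
    return (low * weight_kg, high * weight_kg)

# Example: pyrazinamide for a 50-kg adult -> (1250.0, 2000.0) mg/d.
print(daily_dose_range_mg("pyrazinamide (adult)", 50))
```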
# Building a Treatment Regimen for MDR-TB

The guideline committee proposes a clinical strategy tool for building a treatment regimen for MDR-TB (Table 10). The clinical strategy tool incorporates the evidence-based review of the individual drugs, with consideration of the balance of benefits and harms for each drug, the experience of the MDR-TB experts on the committee, and the perspectives of patients. This clinical strategy tool encourages the building of all-oral regimens with five effective drugs (drugs to which the isolate is susceptible or has a low likelihood of resistance) for the treatment of MDR-TB. In our PS-matched IPDMA, significant favorable synergies were identified, with improved treatment success and reduced mortality, when bedaquiline was used in combination with linezolid or clofazimine. As noted, amikacin and streptomycin show modest effectiveness when the patient's isolate is susceptible to these drugs; however, because of their significant toxicities, aminoglycosides should be reserved for when a more effective or less toxic regimen cannot otherwise be assembled. The final choice of drugs and drug classes is contingent on many factors, including patient preferences, the harms and benefits associated with individual agents, the capacity to appropriately monitor for significant adverse effects, consideration of drug-drug interactions, comorbidities, and drug availability. Final regimen development, therefore, is individualized and may differ substantially from the approach described in Table 10. Doses of the drugs for treating adults and children with MDR-TB are provided in Table 9, modified and updated from the 2016 ATS/CDC/IDSA Treatment of Drug-Susceptible TB Practice Guidelines (11).

# Role of Therapeutic Drug Monitoring in Treatment of MDR-TB

Specific pharmacokinetic (PK)/pharmacodynamic (PD) targets, especially the area under the concentration-versus-time curve over 24 hours divided by the MIC (AUC0-24/MIC), are increasingly recognized as playing an important role in determining efficacy (231). This has been demonstrated most clearly in hollow fiber systems in vitro that isolate the activity of the drug against the organism (232,233). Further evidence has been provided by animal models (mouse, rabbit, nonhuman primate) and by human clinical trials (234-237). Target drug levels are usually chosen to be four to five times greater than the MIC. Because of the complexities of clinical disease, it is more challenging to isolate the PK/PD contribution of individual drugs, but studies have nonetheless been informative on the relationships between efficacy and drug exposure (238-240). In human TB, a combination of drugs is used, and each patient has his or her own duration of disease, host genetics, and particular strain of M. tuberculosis. The general term for these PK/PD data is "exposure-response" data (i.e., for a given amount of drug exposure, how much response can you expect?). Much of the published data focus on the first-line TB drugs, with some data emerging for second-line drugs (241-243). In clinical practice, the actual MIC for each drug often is not available. Epidemiological cut-off values or "critical concentrations" that separate wild-type from more resistant isolates can be used for selecting which drugs to include in a regimen (244). These in vitro cut-offs are based on patterns of susceptibility compared with achievable concentrations in humans.
An organism is not just "susceptible" as an inherent property; it is susceptible to inhibition or killing by specific, tested concentrations of the drugs. Individual MIC values might be preferred; however, in practice, there are technical and financial barriers to such individualized data. Following standardized dosing, clinical experience over the past 3 decades clearly shows that some patients have low drug concentrations, leading to clinical failures (11,231,241,242,244). Many MDR-TB experts use therapeutic drug monitoring (TDM) to identify patients with problems with drug absorption, thereby informing dose adjustments. Individuals who have a poor response to TB treatment despite adherence may benefit from TDM (11). Patients with TB with gastrointestinal problems that increase the risk of malabsorption, concurrent HIV infection, impaired renal clearance, or diabetes should be prioritized for TDM (11). Furthermore, some experts use TDM for all patients being treated for MDR- or XDR-TB early in treatment, rather than waiting for a poor response. TDM should be used and interpreted in consultation with an expert in MDR-TB. TDM also provides patient-specific information that may help limit toxicity due to certain drugs, including the injectable drugs, cycloserine, and linezolid (245-247). In particular, linezolid toxicity is associated with elevated trough values (202-204). "Target" ranges for the injectable drugs and cycloserine have been proposed, and research continues on refining these. Fluoroquinolones display concentration-dependent efficacy, and higher doses of moxifloxacin and levofloxacin are being studied (174,248).

Table 10. Clinical Strategy to Build an Individualized Treatment Regimen for MDR-TB

- Build a regimen using five or more drugs to which the isolate is susceptible (or has low likelihood of resistance), preferably with drugs that have not been used to treat the patient previously.
- Choice of drugs is contingent on the capacity to appropriately monitor for significant adverse effects, patient comorbidities, and preferences/values (choices are therefore subject to program and patient safety limitations).
- In children with TB disease who are contacts of infectious MDR-TB source cases, the source case's isolate DST result should be used if an isolate is not obtained from the child.
- TB expert medical consultation is recommended (ungraded good practice statement).
Step 1: Choose one later-generation fluoroquinolone
- Levofloxacin
- Moxifloxacin

Step 2: Choose both of these prioritized drugs
- Bedaquiline
- Linezolid

Step 3: Choose both of these prioritized drugs
- Clofazimine
- Cycloserine/terizidone

Step 4: If a regimen cannot be assembled with five effective oral drugs, and the isolate is susceptible, use one of these injectable agents*
- Amikacin
- Streptomycin

Step 5: If needed, or if oral agents are preferred over the injectable agents in Step 4, use the following drugs†
- Delamanid‡
- Pyrazinamide
- Ethambutol

Step 6: If options are limited and a regimen of five effective drugs cannot be assembled, consider use of the following drugs
- Ethionamide or prothionamide§
- Imipenem-cilastatin/clavulanate or meropenem/clavulanate‖
- p-Aminosalicylic acid¶
- High-dose isoniazid**

The following drugs are no longer recommended for inclusion in MDR-TB regimens:
- Capreomycin and kanamycin
- Amoxicillin/clavulanate (when used without a carbapenem)
- Azithromycin and clarithromycin

Definition of abbreviations: DST = drug susceptibility testing; INH = isoniazid; IPDMA = individual patient data meta-analysis; MDR = multidrug-resistant; PS = propensity score; TB = tuberculosis.
*Amikacin and streptomycin should be used only when the patient's isolate is susceptible to these drugs. Because of their toxicity, these drugs should be reserved for when more-effective or less-toxic therapies cannot be assembled to achieve a total of five effective drugs.
†Patient preferences in terms of the harms and benefits associated with injectables (the use of which is no longer obligatory), the capacity to appropriately monitor for significant adverse effects, consideration of drug-drug interactions, and patient comorbidities should be considered in selecting Step 5 agents over injectables. Ethambutol and pyrazinamide had mixed/marginal performance on outcomes assessed in our PS-matched IPDMA; however, some experts may prefer these drugs over injectable agents to build a regimen of at least five effective oral drugs. Use pyrazinamide and ethambutol only when the isolate is documented as susceptible.
‡Data on dosing and safety of delamanid are available in children >3 years of age.
§Mutations in the inhA region of the Mycobacterium tuberculosis genome can confer resistance to ethionamide/prothionamide as well as to INH. In this situation, ethionamide/prothionamide may not be a good choice unless the isolate is shown to be susceptible by in vitro testing.
‖Divided daily intravenous dosing limits feasibility. Optimal duration of use not defined.
¶Fair/poor tolerability and low performance. Adverse effects are reported to be less common in children.
**Data not reviewed in our PS-matched IPDMA, but high-dose isoniazid can be considered despite low-level isoniazid resistance, though not with high-level INH resistance.

Currently, avoiding low serum concentrations seems advisable. A common clinical practice is to collect samples at 2 and 6 hours after drug administration to measure concentrations that may distinguish normal drug absorption (a 2-h value within the normal range) from delayed absorption (a 6-h value greater than the 2-h value, approaching the normal range) and malabsorption (both values below the normal range). This approach also works for injectable drugs. Furthermore, trough values for linezolid can be helpful (11,231,241,242,244).
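The 2- and 6-hour sampling rule just described is essentially a small decision procedure; a minimal sketch follows, with a hypothetical per-drug reference range standing in for the drug-specific normal ranges that real TDM interpretation requires.

```python
# Minimal sketch of the 2- and 6-hour sampling heuristic described above.
# "normal_low"/"normal_high" are a hypothetical reference interval (mcg/ml);
# real interpretation requires drug-specific targets and expert consultation.

def classify_absorption(c2: float, c6: float,
                        normal_low: float, normal_high: float) -> str:
    if normal_low <= c2 <= normal_high:
        return "normal absorption (2-h value within the normal range)"
    if c6 > c2 and c6 >= normal_low:
        # "approaching the normal range" is approximated here as reaching
        # at least the lower bound by 6 hours.
        return "delayed absorption (6-h value exceeds 2-h value)"
    if c2 < normal_low and c6 < normal_low:
        return "malabsorption (both values below the normal range)"
    return "indeterminate: repeat sampling / expert review"

print(classify_absorption(c2=1.0, c6=3.8, normal_low=3.0, normal_high=5.0))
# -> delayed absorption (6-h value exceeds 2-h value)
```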
Although it is clearly possible to cure patients without TDM, even in the setting of lower serum drug concentrations, the available data suggest that the probability of cure decreases with decreasing drug concentrations (231-241). Given the incomplete information currently available, many expert clinicians use in vitro susceptibility data and TDM as available tools for optimizing the treatment of patients with MDR- and XDR-TB. For all patients, but in particular those receiving outpatient treatment, TDM should be coordinated and conducted using patient-centered approaches.

# Shorter-Course, Standardized, 9- to 12-Month Regimen for MDR-TB
A randomized, phase 3, noninferiority trial, STREAM Stage 1 (ClinicalTrials.gov identifier: NCT02409290), was recently conducted to assess a shorter-course regimen composed of existing drugs for MDR-TB (8). This regimen had achieved success rates of 83% (95% CI, 71.0-90.3%) in cohort studies (249,250). The shorter-course regimen is standardized and composed of kanamycin, moxifloxacin (in place of gatifloxacin, the fluoroquinolone originally used in the "Bangladesh" regimen), prothionamide, clofazimine, pyrazinamide, high-dose isoniazid, and ethambutol for an initial period (4-6 mo), followed by moxifloxacin, clofazimine, pyrazinamide, and ethambutol for the continuation phase (5 mo) (23). The total duration of therapy is between 9 and 11 months, compared with 18 to 24 months of therapy for an individualized regimen. The medication costs of the shorter regimen are believed to be less than those of conventional regimens; a formal economic evaluation is underway as part of the STREAM trials (251).

# Summary of the Evidence
In the STREAM Stage 1 clinical trial, of 424 participants who underwent randomization, 383 were included in the modified intention-to-treat population (8). Favorable status was reported in 79.8% of participants in the long-regimen group and in 78.8% of those in the short-regimen group, a difference, with adjustment for HIV status, of 1.0 percentage point (95% CI, -7.5 to 9.5). The results with respect to noninferiority were consistent among the 321 participants in the per-protocol population. Procedures and methodology used to assemble and rank the certainty in the evidence are reported in APPENDIX A, with evidence profiles for PICO questions reported in APPENDIX B. In our PS-matched IPDMA of 12,030 patients (n = 50 studies), once the criteria used for the shorter-course regimen were applied, 169 patients (n = 33 studies) were eligible for analysis of death and 1,369 (n = 33 studies) for analysis of treatment success (249). There were data from 532 individuals (n = 3 studies) receiving the shorter regimen available for analysis of death and 498 (n = 3 studies) for analysis of treatment success. After using PS matching to adjust for age, sex, HIV, smear status, past TB treatment with first-line drugs, and number of effective drugs, there were no statistically significant associations of the shorter regimen with treatment success (aOR, 0.5; 95% CI, 0.02-13) or death (aOR, 1.7; 95% CI, 0.6-4.6), compared with individualized regimens.

# Benefits
The shorter regimen allows for a significantly shorter duration of therapy compared with the individualized regimen and, consequently, less pill burden, lower medication cost, and lower associated provider administration costs. The burden on patients of lost productivity or lost wages and out-of-pocket costs is considerably less with a shorter regimen.
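As a compact check of the noninferiority logic applied to the efficacy result above: assuming the 10-percentage-point margin prespecified in the STREAM Stage 1 design (a detail not restated in this excerpt), noninferiority holds because the upper bound of the adjusted 95% CI for the long-minus-short difference lies below the margin.

```python
# Worked check of the result quoted above: favorable outcomes of 79.8% (long)
# vs. 78.8% (short), adjusted long-minus-short difference 1.0 percentage point
# (95% CI, -7.5 to 9.5). The 10-pp noninferiority margin is an assumption from
# the STREAM Stage 1 design, not stated in this excerpt.

margin = 10.0                   # noninferiority margin, percentage points
ci_lower, ci_upper = -7.5, 9.5  # HIV-adjusted 95% CI for the difference

# The short regimen is noninferior if the whole CI sits below the margin.
print("noninferior" if ci_upper < margin else "not shown noninferior")  # noninferior
```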
From preliminary results of the STREAM Stage 1 trial, there was a documented reduction in costs to patients owing to less time away from work, fewer clinic visits, and less spending on supplementary food (252).

# Harms
In the STREAM Stage 1 clinical trial, an adverse event of grade 3 or higher occurred in 45.4% of participants in the long-regimen group and in 48.2% in the short-regimen group (8). Prolongation of either the QT interval or the QTc to 500 milliseconds or more occurred in 11.0% of participants in the short-regimen group, compared with 6.4% in the long-regimen group (P = 0.14). Death occurred in 8.5% of participants in the short-regimen group and in 6.4% in the long-regimen group, and acquired resistance to fluoroquinolones or aminoglycosides occurred in 3.3% and 2.3%, respectively. Among the participants who had HIV coinfection at baseline, 18 of 103 (17.5%) in the short-regimen group died, compared with 4 of 50 (8.0%) in the long-regimen group (HR in a post hoc analysis, 2.23; 95% CI, 0.76-6.60) (8). Similarly, in a recently published IPDMA, individuals who received the shorter regimen had less treatment success and more deaths (249). In the IPDMA, differences in other adverse effects with the shorter regimen were not statistically significant, including deafness and ototoxicity (relative risk, 1.5; 95% CI, 0.6-4.0), liver injury (relative risk, 2.2; 95% CI, 0.5-10.3), hepatitis (relative risk, 2.5; 95% CI, 0.3-21.2), and renal impairment (relative risk, 4.5; 95% CI, 0.6-35.2). One of the most concerning adverse effects of the shorter regimen is hearing loss, reported in 7.1% in the African study and in between 0% and 23% in the meta-analysis (249,250). The STREAM Stage 1 trial found that ear and labyrinth disorders occurred in 7.4% of participants on the shorter regimen, compared with 5.7% with the longer regimen, a difference that was not statistically significant (8); the frequency may have been lower than in cohort studies because hearing loss was not monitored by audiometry in STREAM Stage 1.

PICO Question 18 (shorter-course, standardized regimen): In patients with MDR-TB, does treatment with a standardized MDR-TB regimen for <12 months lead to better outcomes than treatment with an MDR-TB regimen for 18-24 months?

Recommendation 18: The shorter-course regimen is standardized with the use of kanamycin (which the committee recommends against using) and includes drugs for which there is documented or high likelihood of resistance (e.g., isoniazid, ethionamide, pyrazinamide). Although the STREAM Stage 1 randomized trial found the shorter-course regimen to be noninferior to longer injectable-containing regimens with respect to the primary efficacy outcome (8), the guideline committee cannot make a recommendation either for or against this standardized shorter-course regimen, compared with longer individualized all-oral regimens that can be composed in accordance with the recommendations in this practice guideline. We make a research recommendation for the conduct of randomized clinical trials evaluating the efficacy, safety, and tolerability of modified shorter-course regimens that include newer oral agents, exclude injectables, and include drugs for which susceptibility is documented or highly likely.
# Additional Considerations
When the WHO eligibility criteria for the shorter-course, standardized regimen were applied to the population included in our PS-matched IPDMA (7,253), >15% of individuals would have been eligible for the standardized, shorter regimen. In Europe, patient eligibility for the shorter-course regimen has ranged from 7.9% (48 of 612 new cases) in a study performed at TB reference centers (254) to 16.9% in a surveillance-based study performed by the European Centre for Disease Prevention and Control (ECDC) (93,107,255). In the United States, >15% of patients with MDR-TB would be eligible for the shorter-course MDR-TB regimen (256). Data from California showed either 14.6% or 20.5% eligibility, depending on whether high-dose isoniazid or ethionamide would be determined effective on the basis of katG or inhA mutations, respectively (257). The availability of molecular and growth-based DST is key to determining potential eligibility for the shorter-course regimen. The combined use of both modalities can better inform drug resistance and potential treatment regimens. High-dose isoniazid is likely to be active against organisms with low-level isoniazid resistance, which is commonly associated with a mutation in the inhA gene that also confers resistance to ethionamide (257). Ethionamide is more likely to be active against M. tuberculosis organisms with high-level resistance to isoniazid, which is associated with a mutation in katG and less commonly accompanied by resistance to ethionamide (257). Global data have shown that katG mutations are present in 64.3% of the cases tested and inhA mutations in 19.2%, with some regional differences (258). The shorter regimen includes both ethionamide and high-dose isoniazid and therefore is likely to be effective against both of these common MDR-TB resistance patterns (257,259). There is uncertainty regarding the diagnostic accuracy and reproducibility of ethambutol and pyrazinamide growth-based DST assays (260). For ethambutol, the ECDC has reported that growth-based DST performed on the Mycobacteria Growth Indicator Tube (MGIT) is reliable (107), whereas Model Performance Evaluation Program data have identified variability in detecting ethambutol resistance, with the majority of laboratories disagreeing on one to several ethambutol-resistant strains (261). Alternatively, molecular testing for pncA mutations as a method for determining pyrazinamide resistance may be more reliable, because on average 85% of TB strains resistant to pyrazinamide will have such a mutation (217,221,222). In resource-limited settings, testing all the drugs composing the shorter-course regimen may not be possible (106,108). In contrast, in Europe, DST for ethambutol (93,107) and pyrazinamide (106) is well studied, but katG and inhA genetic mutations are not routinely examined. In the United States, growth-based DST data are available for ethambutol, pyrazinamide, high- and low-level isoniazid, and ethionamide, and molecular testing is available through some state public health laboratories and through the CDC's MDDR service (257). Therefore, considering patients with either katG or inhA (but not both) mutations to be eligible for the shorter regimen might be rational. Last, cross-resistance between ofloxacin and moxifloxacin is likely not complete. Data from the ECDC showed that 81% of M. tuberculosis isolates that were resistant to ofloxacin were also resistant to moxifloxacin. However, other settings (e.g., Bangladesh and Pakistan) have shown cross-resistance as low as 7% (107).
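The katG/inhA reasoning above amounts to a simple lookup; the sketch below encodes it, with the caveat that real eligibility decisions require complete molecular and growth-based DST and expert consultation.

```python
# Minimal sketch of the mutation-based reasoning above: inhA mutations typically
# confer low-level isoniazid resistance plus ethionamide cross-resistance, while
# katG mutations typically confer high-level isoniazid resistance that is less
# commonly accompanied by ethionamide resistance. Illustrative only.

def likely_active_agents(katg_mutation: bool, inha_mutation: bool) -> list[str]:
    agents = []
    if inha_mutation and not katg_mutation:
        agents.append("high-dose isoniazid")   # low-level INH resistance expected
    if katg_mutation and not inha_mutation:
        agents.append("ethionamide")           # ethionamide more likely active
    return agents or ["neither predicted active; confirm with in vitro DST"]

print(likely_active_agents(katg_mutation=False, inha_mutation=True))
# -> ['high-dose isoniazid'], consistent with the "either katG or inhA
#    (but not both)" eligibility rationale quoted above
```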
# Conclusions
The shorter-course regimen was judged by the guideline committee to have minimal desirable effects (on treatment success, mortality, and culture conversion) and small to moderate undesirable effects (adverse events, limited applicability, and the use of kanamycin as part of the standardized regimen), and it includes drugs for which there is documented or high likelihood of resistance (e.g., isoniazid, ethionamide, and pyrazinamide). Although the STREAM Stage 1 randomized trial found the shorter-course regimen to be noninferior to a long regimen with respect to the primary efficacy outcome (8), the guideline committee cannot make a recommendation either for or against this standardized shorter-course regimen compared with longer individualized regimens. We instead make a research recommendation for the conduct of randomized clinical trials evaluating the efficacy, safety, and tolerability of modified shorter-course regimens that include newer oral agents, exclude injectables, and include drugs for which susceptibility is confirmed or deemed to be highly likely. If this shorter-course regimen is used, we recommend obtaining DST for all medications in the regimen, with the exception of clofazimine, for which reliable testing is not available, and we recommend careful side effect monitoring, including high-quality audiometry, monthly microbiologic monitoring, and close case management, especially in persons with HIV.

# Research Needs
We make a research recommendation for the conduct of randomized clinical trials evaluating the efficacy, safety, and tolerability of modified shorter-course regimens that include newer oral agents, exclude injectables, and include drugs for which susceptibility is confirmed or deemed to be highly likely. Further research is needed on medications such as linezolid, bedaquiline, and other medications currently in clinical trials as substitutions in the regimen if patients experience adverse effects or if resistance develops to any of the medications in the regimen (262). Research on modified shorter-course regimens for pediatric patients and for individuals living with HIV is also needed. Until more data regarding the use and outcomes of the shorter-course regimens in patients with HIV are available, this treatment approach should preferentially be considered only within a research study. Current trials are underway to help answer some of these questions (262).

PICO Question 19 (surgery for MDR-TB): Should elective lung resection surgery (i.e., lobectomy or pneumonectomy) be used as an adjunctive therapeutic option in combination with antimicrobial therapy, versus medical therapy alone, for adults with MDR-TB?

Recommendation 19a: We suggest elective partial lung resection (e.g., lobectomy or wedge resection), rather than medical therapy alone, for adults with MDR-TB receiving antimicrobial-based therapy (conditional recommendation, very low certainty in the evidence). The writing committee believes this option would be beneficial for patients for whom clinical judgement, supported by bacteriological and radiographic data, suggests a strong risk of treatment failure or relapse with medical therapy alone.

Recommendation 19b: We suggest medical therapy alone, rather than including elective total lung resection (pneumonectomy), for adults with MDR-TB receiving antimicrobial therapy (conditional recommendation, very low certainty in the evidence).
# Role of Surgery in MDR-TB
Surgery was one of the first therapeutic approaches for treating TB, but it was largely replaced by chemotherapy between 1960 and 1975. Nonetheless, several scientific societies and national and international organizations suggest consideration of surgery as an adjunctive therapy for MDR-TB, on the basis of the results of observational retrospective studies. Selected indications are highlighted, including failure of drug therapy, relapse, localized (e.g., an isolated cavity) or extensive pulmonary TB, and clinical complications (e.g., hemoptysis or empyema) (23,263-270).

# Summary of the Evidence
Systematic reviews and meta-analyses have been performed on the role of surgery in patients with MDR- and XDR-TB (263,271,272). The main limitation of systematic reviews of surgery in MDR- and XDR-TB that summarize results of observational studies combining study-level data is the tremendous variability in patient characteristics, background chemotherapy regimens, and types of surgical procedures (263,271,272). An IPDMA of surgery in MDR-TB was designed to address those shortcomings (271). Procedures and methodology used to assemble and rank the certainty in the evidence are reported in APPENDIX A, with evidence profiles for PICO questions reported in APPENDIX B. The IPDMA identified 67 cohort studies, of which 45 could not be used because either individual patient data were not available or the surgical status of patients was not known. Twenty-six studies comprising 6,431 patients with MDR-TB were included (271). We used data from this IPDMA to generate the evidence profiles. Despite the analytic advantages of an IPDMA, which allows for adjustment for baseline imbalances in many prognostic factors, there was substantial residual risk of bias in the results. There is no information about the impact of surgery on adverse effects or quality of life.

# Benefits
Patients with partial lung resection had a higher probability of treatment success, as opposed to treatment failure, relapse, or death (aOR, 3.0; 95% CI, 1.5-5.9). However, the estimate is very uncertain because of the limitations of individual studies and the lack of precision in the results.

# Harms
In the published IPDMA, a substantially higher proportion of patients who had a pneumonectomy died (8.5%) compared with those who had a partial resection (2.2%), but the authors could not establish whether patients died as a result of surgical complications or of their TB (271). The estimates of the effects of pneumonectomy on risk of death (aOR, 1.8; 95% CI, 0.6-5.1) and treatment success (aOR, 0.8; 95% CI, 0.1-6.0) were not statistically significant. As for partial lung resection, both estimates are very uncertain. Treatment success in patients with XDR-TB was noted to be lower when patients underwent surgery compared with patients who did not (aOR, 0.4; 95% CI, 0.2-0.9),
an effect that was heterogeneous across studies and may also have been confounded by factors that predisposed these patients to poor outcomes (271). Alternatively, patients who were healthy enough to withstand surgery may have intrinsically lower mortality, so it is difficult to predict the direction of potential bias.

# Additional Considerations
In the MDR-TB treatment guidelines updated in 2016, WHO recommended elective partial surgery as an intervention complementary to chemotherapy in specifically selected individuals (23). The committee acknowledged that there is significant uncertainty, with a paucity of publications and data regarding patient preferences, acceptability by surgical personnel, cost, and feasibility of performing elective surgery as an intervention for MDR-TB.

# Conclusions
On the basis of the limited evidence that is available, there appears to be a net benefit from elective partial lung resection (e.g., lobectomy or wedge resection) when offered together with a recommended MDR-TB regimen, compared with medical therapy alone. Committee members believed that this therapeutic option would probably be most beneficial when clinical judgement, supported by bacteriological and radiographic data, suggests a strong risk of relapse or treatment failure with a medical regimen alone. We found no currently available evidence that pneumonectomy would be beneficial for patients with MDR-TB receiving a background drug regimen.

# Research Needs
Rigorously designed and conducted studies, ideally well-done randomized trials that measure and report benefits as well as adverse effects and quality of life, are needed to clarify the role of surgery in the management of patients with MDR-TB. The following specific issues need to be addressed: the optimal timing for surgery, the optimal drug regimens and their duration before and after surgery, the role of surgery in special populations and patients with comorbid conditions (e.g., those living with HIV), the optimal surgical approaches, the optimal infection control measures to be implemented perioperatively, and the role of pulmonary rehabilitation (263,267,270,273).

PICO Question 20 (treatment of isoniazid-resistant TB):
PICO Question 20a: Should patients with isoniazid-resistant TB be treated with a regimen composed of a fluoroquinolone, rifampin, ethambutol, and pyrazinamide for 6 months compared with rifampin, ethambutol, and pyrazinamide (without a fluoroquinolone) for 6 months?
PICO Question 20b: Should patients with isoniazid-resistant TB be treated with a regimen composed of a fluoroquinolone, rifampin, and ethambutol for 6 months plus pyrazinamide for the first 2 months compared with a regimen composed of a fluoroquinolone, rifampin, ethambutol, and pyrazinamide for 6 months?

Recommendation 20a: We suggest adding a later-generation fluoroquinolone to a 6-month regimen of daily rifampin, ethambutol, and pyrazinamide for patients with isoniazid-resistant TB (conditional recommendation, very low certainty in the evidence).
Recommendation 20b: In patients with isoniazid-resistant TB treated with a daily regimen of a later-generation fluoroquinolone, rifampin, ethambutol, and pyrazinamide, we suggest that the duration of pyrazinamide can be shortened to 2 months in selected situations (i.e., noncavitary and lower-burden disease, or toxicity from pyrazinamide) (conditional recommendation, very low certainty in the evidence).

# Treatment of Isoniazid-Resistant TB
Isoniazid is an important first-line agent for the treatment of TB, possessing potent early bactericidal activity against M. tuberculosis. However, monoresistance to isoniazid is frequent worldwide, with an estimated prevalence of 8% (range, 5-11%) of TB cases (12). A recent systematic review and meta-analysis compared the treatment outcomes of isoniazid-resistant TB with those of drug-susceptible TB and found that treatment of isoniazid-resistant TB with first-line drugs resulted in suboptimal outcomes, with higher treatment failure (11% vs. 1%) and relapse (10% vs. 5%) (274).
In addition, the study found that standardized empirical treatment of new isoniazid-resistant TB cases may be contributing to higher rates of acquired drug resistance (8% vs. 0.3%). Prior ATS, CDC, and IDSA guidelines recommended treatment with a standard four-drug regimen (isoniazid, rifampin, pyrazinamide, and ethambutol) for 6 months, with discontinuation of isoniazid once the results of DST were known if isoniazid resistance was found (275).

# Summary of the Evidence
Procedures and methodology used to assemble and rank the certainty in the evidence are reported in APPENDIX A, with evidence profiles for PICO questions reported in APPENDIX B. An IPDMA of 33 datasets with 6,424 patients, of whom 3,923 patients in 23 studies received regimens related to isoniazid-resistant TB, was used as the evidence (276). Regimens of interest for our analyses (all with or without isoniazid) were 1) rifampin, ethambutol, and pyrazinamide; and 2) rifampin, ethambutol, and pyrazinamide plus a fluoroquinolone. For these analyses, isoniazid-resistant TB was defined on the basis of phenotypic resistance to isoniazid and susceptibility to rifampicin, with or without additional resistance to pyrazinamide, ethambutol, or streptomycin. Critical concentrations of 0.1 μg/ml or 0.2 μg/ml were used in 21 of 23 centers for the definition of resistance to isoniazid (276). For PICO Question 20a, we found few studies that used regimens of 6 months of rifampin, ethambutol, and pyrazinamide plus a fluoroquinolone, but several that used this regimen for 8 to 9 months; hence, we combined durations of 6 and >6 months for this PICO. For the comparator of rifampin, ethambutol, and pyrazinamide with or without isoniazid, we similarly combined regimens of 6 and >6 months' duration, after comparison between these durations showed that outcomes were not significantly different. For PICO Question 20b, few patients received regimens that included a fluoroquinolone, rifampin, ethambutol, and a shorter duration of pyrazinamide. A total of 118 patients who received 1 to 3 months of pyrazinamide in conjunction with 6 or >6 months of a rifampin, ethambutol, and fluoroquinolone-containing regimen were available for analyses.

# Benefits
Compared with 6 months of daily rifampin, ethambutol, and pyrazinamide (with or without isoniazid), adding a fluoroquinolone to this regimen was associated with significantly greater treatment success (aOR, 2.8; 95% CI, 1.1-7.3) but with no significant effect on mortality (aOR, 0.7; 95% CI, 0.4-1.1) or acquired rifampicin resistance (aOR, 0.1; 95% CI, 0.0-1.2). When evaluating the impact of shortening the duration of pyrazinamide (to 1-3 mo) in a regimen that contains a fluoroquinolone, treatment success was very high, with 117 of 118 patients achieving treatment success. On the other hand, comparisons of shorter pyrazinamide regimens with regimens including both a fluoroquinolone and pyrazinamide for >6 months did not show significantly different results (aOR, 5.2; 95% CI, 0.6-46.7). Similar results were obtained when the comparisons were restricted to patients receiving later-generation fluoroquinolones, namely moxifloxacin, levofloxacin, and gatifloxacin (data not shown).

# Harms
The outcome of adverse events from TB drugs was intended to be assessed in the IPDMA but could not be analyzed because these outcomes were either not reported or were reported with very different definitions. The adverse events associated with TB drugs, especially pyrazinamide, are well established (227,277).
# Additional Considerations
We note that the estimates of effect from the IPDMA were imprecise because of the small number of patients who received the regimens of interest. Given that pyrazinamide is the most toxic of the present first-line drugs, a key potential advantage of adding a fluoroquinolone would be to shorten the duration of pyrazinamide to the initial 2 months of treatment. Although the IPDMA included only 118 patients receiving fluoroquinolone-containing regimens with shorter durations of pyrazinamide, treatment success was very high in this group (117 of 118). On the basis of the efficacy signals seen and the known toxicities of prolonged pyrazinamide, the committee viewed the balance of benefits and harms to favor shortening the duration of pyrazinamide, when a later-generation fluoroquinolone is included in the regimen, in patients in whom pyrazinamide toxicity is anticipated or experienced or when the patient has noncavitary, lower-burden disease. Finally, plasma peak concentration of and exposure to moxifloxacin have been shown to decrease by approximately 30% when coadministered with rifampin (278,279). The impact of these decreased exposures on outcomes has not been established. However, some experts opt to use levofloxacin when a fluoroquinolone is combined with rifampin.

# Conclusions
We conclude that in patients with isoniazid-resistant TB, the addition of a later-generation fluoroquinolone to 6 months of daily rifampin, ethambutol, and pyrazinamide improves treatment success rates. In patients in whom toxicity from pyrazinamide is anticipated or experienced, or in patients with active TB with a lower burden of disease (i.e., noncavitary), the committee viewed the balance of benefits and harms to favor shortening the duration of pyrazinamide when a later-generation fluoroquinolone is included in the regimen, acknowledging that the certainty in the evidence is very low and that more research is needed.

# Research Needs
All but 2 of the 23 studies included in the IPDMA were observational. Moreover, the analyzed population included only 37 children, 119 patients with diabetes, and 249 patients with HIV infection. Given the burden of isoniazid-resistant TB worldwide, clinical trials that include these special populations and evaluate new regimens, including evaluation of the efficacy, safety, and tolerability of shorter versus longer durations of pyrazinamide, are urgently needed.

# Treatment of MDR-TB in Special Situations

# HIV Infection
Among patients with HIV and TB, rifampin resistance has been identified more often than among those without HIV (280,281). Patients with MDR-TB and HIV have up to a fourfold higher risk of mortality compared with patients with MDR-TB without HIV (282). Low CD4 cell counts (e.g., <50 cells/μl) in patients with HIV and MDR-TB further correlate with higher mortality (283,284). Numerous practice guidelines recommend HIV testing of people with suspected or confirmed TB (11,20,34,285), regardless of drug resistance.
Because of their high risk for mortality early in the course of TB disease, people with HIV in whom TB is suspected are recommended to receive rapid testing for TB using nucleic acid amplification tests coupled with molecular diagnostic DST for rifampin (with or without isoniazid) (13,20). For patients with HIV receiving therapy for drug-susceptible TB, studies found significantly lower mortality among patients receiving concurrent antiretroviral therapy (ART) compared with those not receiving ART (286-288). U.S. practice guidelines recommend starting ART within 2 weeks of initiating treatment for drug-susceptible non-CNS TB for patients with CD4 cell counts <50 cells/μl and by 8 to 12 weeks for patients with higher CD4 cell counts (11,289). Although the optimal timing for starting ART to reduce patient mortality has not yet been adequately determined for patients with MDR-TB, lower mortality rates have been shown in multiple studies among patients with MDR-TB receiving concurrent ART compared with those not receiving ART (283,290-297), especially among patients with TB with CD4 counts <50 cells/μl (283,292). The decreased mortality seen in patients treated for MDR-TB who concurrently receive ART, especially among those with CD4 cell counts <50 cells/μl, supports an ART management approach similar to that recommended for drug-susceptible TB. MDR-TB of the CNS in patients with HIV presents additional challenges. TB meningitis with isoniazid-resistant TB, RR-TB, or MDR-TB can be associated with higher mortality compared with drug-susceptible disease (298-300). Starting ART early in patients with TB is associated with a higher incidence of immune reconstitution inflammatory syndrome (286,288,301), and this can be even more problematic in CNS disease. A recent study in Vietnam of patients with advanced AIDS found a higher incidence of potentially life-threatening (grade 4) adverse events among patients with TB meningitis starting ART early compared with those delaying ART until after 2 months of standardized first-line TB treatment, and it did not show a survival benefit of starting ART early (302). It has been recommended to delay starting ART by 8 weeks in patients with CNS TB and HIV (11); however, there is a paucity of data in patients with MDR-TB of the CNS. The optimal approach to initiation of ART in patients with this condition remains uncertain, and close clinical monitoring is warranted. Drug interactions between antiretroviral and anti-TB agents are common in the management of patients with HIV and TB, particularly with the rifamycins. Because the rifamycins (with the possible exception of rifabutin) are not used for treatment of MDR-TB, interactions of other TB medication classes with ART should be considered. Bedaquiline and/or delamanid might be considered for use in patients with HIV. Although efavirenz can decrease serum bedaquiline concentrations, and this combination should be avoided, other ART drugs, including the protease inhibitors and cobicistat, can result in increased serum bedaquiline levels. The current WHO recommendations for the use of delamanid apply to patients living with HIV (7). Drug-drug interaction studies in healthy volunteers of delamanid with tenofovir, efavirenz, and lopinavir/ritonavir showed that no dose adjustments were needed (303).
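The ART-timing guidance summarized earlier in this section reduces to a small decision rule; a minimal sketch, for illustration only, follows (as noted, timing in MDR-TB of the CNS remains uncertain and needs expert input).

```python
# Minimal sketch of the ART-timing guidance summarized above for non-CNS TB
# (start within 2 weeks if CD4 <50 cells/ul, otherwise by 8-12 weeks), with a
# delay of about 8 weeks suggested for CNS disease. Illustrative only.

def art_start_window(cd4_cells_per_ul: int, cns_tb: bool) -> str:
    if cns_tb:
        return "consider delaying ART ~8 weeks; monitor closely (data limited)"
    if cd4_cells_per_ul < 50:
        return "start ART within 2 weeks of TB treatment initiation"
    return "start ART by 8-12 weeks of TB treatment initiation"

print(art_start_window(cd4_cells_per_ul=30, cns_tb=False))
# -> start ART within 2 weeks of TB treatment initiation
```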
Nonetheless, when delamanid is included in the regimen for MDR-TB in HIV-infected patients, the design of their ART regimens should be developed in consultation with HIV and ART experts. A thorough review of all patients' medications should be performed, in consultation with MDR-TB experts, to select individualized regimens with less potential for overlapping ART/TB drug toxicities (Table 11). (Notes to Table 11: increases in serum creatinine via decreased renal tubular excretion of creatinine are commonly seen with cobicistat and dolutegravir use and are not a toxicity; indirect hyperbilirubinemia is expected with atazanavir and indinavir and is not a toxicity.) Useful webpages regarding drug interactions (TB/HIV and other) are available from AIDSinfo (https://aidsinfo.nih.gov/guidelines), CDC (http://www.cdc.gov/tb/publications/guidelines/TB_HIV_Drugs/default.htm), University of California San Francisco (http://hivinsite.ucsf.edu/insite?page=ar-00-02), University of Liverpool (http://www.hiv-druginteractions.org/), and Indiana University (http://medicine.iupui.edu/clinpharm/ddis/). The management of MDR-TB is more complex among patients with HIV infection. The higher pill burden of combined ART with expanded TB drug therapy, potential drug-drug interactions, the management of immune reconstitution inflammatory syndrome, and other concurrent HIV-associated opportunistic diseases all pose unique challenges in the care of these patients. The management of patients with HIV and MDR-TB is best performed by a multidisciplinary care team composed of health providers experienced in MDR-TB, HIV, and public health case management (304,305).

# Children
On the basis of recent modeling studies, it is estimated that there are about 1 million incident cases of TB in children annually and 230,000 deaths caused by the disease (306,307). About 35,000 cases of MDR- and XDR-TB occur in children annually (308,309). Our PS-matched IPDMA did not include sufficient numbers of children to allow the formulation of GRADE-based recommendations. Nonetheless, on the basis of a recent IPDMA of 975 children with MDR-TB from 18 countries, recent pharmacokinetic studies in children, and several observational studies showing good outcomes, the recommendations on choice of drugs, composition of regimens, and durations of treatment for adults can also be applied to children with MDR-TB (113,119,124,161,165,170,200,310). There are some special considerations in formulating effective regimens for children of various ages and for adolescents. The bacterial burden in young children with TB is much smaller than that in most adults with TB. As a result, most drug resistance in children was present when the organism was inhaled (primary resistance), and further development of resistance while on therapy (secondary resistance) is much less common in children. However, the paucibacillary nature of childhood TB also makes microbiologic confirmation much more difficult. The only way to determine drug susceptibility in cases that meet clinical definitions of TB disease (i.e., when microbiologic confirmation is not available) is by linking the child to a specific source case for whom the drug susceptibilities of the organism are known. Linking the child to a specific case is more feasible in the low-burden settings to which these guidelines apply but can be difficult in high-burden settings where there may be more than one possible source case. Also, standard definitions of relapse and treatment failure for pediatric TB trials are inconsistent because the low burden of organisms in children makes microbiologic confirmation of these outcomes difficult. Several technical factors can affect the outcome of treatment for MDR- and XDR-TB in children. The two age extremes of childhood have been somewhat neglected in studies of MDR- and XDR-TB. Little is known about the pharmacokinetics, safety, and tolerability of the drugs used to treat drug-resistant TB in neonates, infants, and toddlers (115). Children, especially those <2 years of age, are more prone to developing disseminated TB, including meningitis. Drugs that penetrate well into the CSF, such as linezolid, might have an advantage over drugs that appear to have less penetration, such as ethambutol and bedaquiline.
Adolescents can develop TB similar to that found either in adults or in young children. However, adolescent patients often have been excluded from TB treatment trials. Most of the oral drugs used to treat MDR- and XDR-TB are not licensed for children. Although the Global Drug Facility has pediatric dispersible tablets available (http://stoptb.org/gdf/drugsupply/drugs_available.asp), these formulations are not registered with the FDA or the European Medicines Agency, which has limited the commercial availability of child-friendly dosage forms. As a result, for younger children, the medication often has to be crushed or put into suspension or into capsules once opened, and the pharmacokinetics and pharmacodynamics of these various preparations are unknown. HIV-infected children may be exposed to lower concentrations of certain orally administered drugs than HIV-uninfected children given the same bodyweight dose. Unfortunately, the pharmacokinetics in HIV-infected children of drugs used to treat MDR- and XDR-TB are largely unstudied. Children generally have a more difficult time tolerating injectable medications because of pain and the fact that many children with TB are malnourished and have diminished muscle mass. With the recent development of new oral drugs, it is hoped that injectable drugs can be avoided in children whenever possible. Fortunately, children generally tolerate the oral TB drugs better than adults, with fewer serious adverse events resulting in fewer breaks in therapy. Also, most children with TB have not developed the common chronic diseases of adulthood and will not suffer complications of them during treatment. However, drug adverse effects can be difficult to assess in children and likely are underreported. In general, the same schedules used to monitor adverse events and laboratory abnormalities in adult patients treated for MDR- or XDR-TB should also be used for children.

# Outcomes of MDR-TB treatment in children.
A recently published systematic review (33 studies) and IPDMA (28 of the studies) described treatment outcomes for 975 children with MDR-TB using random-effects multivariate logistic regression adjusted for age, sex, HIV infection, malnutrition, severe extrapulmonary disease, and severe pulmonary disease on chest radiograph (113). Overall, 78% had a successful treatment outcome, including 75% of the microbiologically confirmed cases. However, treatment was successful in only 56% of HIV-infected children who did not also receive ART during TB treatment, compared with an 82% success rate in those also treated with ART. In children with confirmed MDR-TB, the use of injectable agents and high-dose isoniazid was associated with treatment success. Unfortunately, limitations of this study included that the vast majority of patients came from one site (Cape Town, South Africa), difficulty in estimating the treatment effects of individual drugs within multidrug regimens, the availability of only observational cohort studies, and the fact that treatment decisions were based on the clinician's perception of illness, with resulting potential for bias.

Conclusions. Excellent treatment outcomes have been demonstrated in both trials and extensive clinical experience for children with MDR- and XDR-TB using individualized treatment regimens with the currently available drugs. Microbiologic cure and probable cure rates in children can reach 80% to 90% with early recognition of drug resistance and adequate treatment (311).
The greatest difficulties have been recognizing that the child has MDR-TB and the limited ability of children to tolerate injectable drugs. Fortunately, with expanded knowledge of and experience with the newer oral drugs, such as bedaquiline and delamanid, pediatricians with expertise in MDR-TB believe that the majority of children with MDR-TB can likely be cured with an all-oral drug regimen.

# Pregnant Women
Untreated MDR-TB during pregnancy can be associated with adverse maternal and fetal outcomes. Crucial gaps exist in the literature on treatment of MDR-TB in pregnant women, including the effectiveness, safety, and tolerability of available treatment regimens, as well as the timing and duration of second-line drugs. In support of these guidelines, we conducted a systematic literature review with a focus on pregnant women with MDR-TB and included all original research reporting MDR-TB treatment outcomes during pregnancy, including case reports and case series, in all languages. We excluded animal studies, review articles, letters to the editor, and articles without documentation of MDR-TB treatment during pregnancy. Full-text review and data extraction were conducted by three reviewers. Our initial search yielded 280 publications, of which 16 met inclusion criteria for full-text review (312-327); however, 3 publications were subsequently excluded because of lack of data on medications or outcomes (315,325,327). The remaining 13 articles were observational case reviews without any comparison groups. Treatment regimens reported were individualized according to drug susceptibility and the tolerability of drugs.

Summary of the evidence. Of the 65 pregnant women for whom MDR-TB treatment outcome data were available, 49% (n = 32) were cured and 20% (n = 13) completed treatment, for a treatment success proportion of 69%. Fourteen percent (n = 9) of the women died. Treatment failure was reported in 9% (n = 6), and 3% (n = 2) were lost to follow-up. Across these studies, four women were still receiving treatment at the time of publication. Fetal outcomes included 78.5% (n = 51) healthy births, with eight children born premature or having low birth weight. Medical abortions were obtained by 12% of the patients (n = 8), and 3% (n = 2) underwent spontaneous abortions. Stillbirth was reported for one child (1.5%), 3% (n = 2) of children were born with HIV, and 1.5% (n = 1) had TB/HIV coinfection.

Conclusions. On the basis of the limited data available, we conclude that there is evidence to support treatment of MDR-TB during pregnancy, including the prescription of second-line drugs. Most of the second-line drugs are pregnancy category C per the U.S. Food and Drug Administration, with the exception of bedaquiline and meropenem, which are classified as category B (according to the previous FDA letter-based classification system, currently undergoing revision [328]), and the aminoglycosides, which are category D (71,320,329). Despite the low cure rates reported in the literature, we believe that the benefits of treatment to mother, child, and the community outweigh the harms. There is no evidence to support one particular regimen; however, most MDR-TB experts avoid aminoglycosides and ethionamide in pregnant women if alternative agents can be used to build an effective treatment regimen.

Research needs.
Global registries aimed at collecting data on the efficacy, safety, and tolerability of MDR-TB regimens in pregnant women, and on the associated maternal and fetal outcomes, are urgently needed. Furthermore, given the significant morbidity and mortality associated with MDR-TB, a reconsideration of the current approach of assumed exclusion of pregnant women from MDR-TB clinical trials is warranted. A recent consensus statement from an international expert panel advocated allowing pregnant and lactating women to remain eligible for phase III MDR-TB trials unless there is a compelling reason for exclusion (330).

# Treatment of Contacts Exposed to MDR-TB
In 1992 and 2000, ATS and CDC advised: 1) no treatment for MDR LTBI for persons not at high risk for progression to TB disease, but provision of clinical follow-up for signs and symptoms of TB disease; and 2) 6 to 12 months of treatment with two or more medications to which the isolate of the source case is susceptible (331,332).

PICO Question 21 (treatment of contacts exposed to MDR-TB): Should contacts exposed to an infectious patient with MDR-TB be offered LTBI treatment versus followed with observation alone?

Recommendation 21: For contacts with presumed MDR LTBI due to exposure to an infectious patient with MDR-TB, we suggest offering treatment for LTBI (conditional recommendation, very low certainty in the evidence). We suggest 6 to 12 months of treatment with a later-generation fluoroquinolone alone or with a second drug, on the basis of the drug susceptibility of the source-case M. tuberculosis isolate. On the basis of evidence of increased toxicity, adverse events, and discontinuations, pyrazinamide should not be routinely used as the second drug.

# Summary of the Evidence
Procedures and methodology used to assemble and rank the certainty in the evidence are reported in APPENDIX A, with evidence profiles for PICO questions reported in APPENDIX B. A systematic review of 21 published observational studies that examined the outcomes of TB incidence, treatment completion, adverse effects, and cost-effectiveness was used as the evidence (333). Six articles compared TB incidence among contacts receiving MDR LTBI treatment versus untreated contacts; 10 presented TB incidence only for contacts who received MDR LTBI treatment, and 5 presented TB incidence only for untreated contacts.

# Benefits
Using data from five non-registry-matched comparison studies included in the systematic review of 21 published observational studies, MDR-TB occurred in 2 of 190 (1.1%) contacts treated for MDR LTBI, compared with 18 of 126 (14.3%) of those who received no MDR LTBI treatment (290). The estimated MDR-TB incidence reduction was 90% (9-99%), using a negative binomial model controlling for person-time and overdispersion (333).

# Harms
In 11 studies with data by regimen on treatment discontinuation due to adverse effects, there was high (51%) treatment discontinuation among patients taking pyrazinamide-containing regimens (333). About one-third of patients taking fluoroquinolone-containing regimens without pyrazinamide had adverse effects, but only 2% discontinued treatment. For children <15 years of age, regardless of regimen, treatment discontinuation due to adverse effects was substantially less common (5% vs. 33%) (333).

# Additional Considerations
Modeled cost-effectiveness studies have shown the greatest benefit with fluoroquinolone-based MDR LTBI treatment. In one study, the most cost-effective regimen was fluoroquinolone/ethambutol, followed by a fluoroquinolone alone, then by pyrazinamide/ethambutol (333). A pyrazinamide/fluoroquinolone regimen was particularly toxic, as measured by treatment discontinuation, and prevented about half as many TB cases as the most cost-effective option (333). An expert panel recently generated a policy brief on this topic; it acknowledged that further evidence is urgently needed but still endorsed the immediate implementation of postexposure management of household contacts of patients with MDR-TB, describing the approach as effective, feasible, and cost-efficient (334).
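As a crude arithmetic check of the incidence figures under Benefits above (the published 90% estimate additionally controlled for person-time and overdispersion in a negative binomial model):

```python
# Worked arithmetic for the figures above: 2/190 treated vs. 18/126 untreated
# contacts. The crude relative reduction approximates the modeled 90% estimate.

treated = 2 / 190      # ~1.1% MDR-TB incidence with MDR LTBI treatment
untreated = 18 / 126   # ~14.3% incidence without treatment
reduction = 1 - treated / untreated
print(f"{reduction:.0%}")  # ~93% crude reduction
```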
The Curry International TB Center's Drug Resistant TB: Clinician's Survival Guide suggests an LTBI regimen of levofloxacin or moxifloxacin, alone or combined with a second medication to which the isolate of the infectious source patient is susceptible (16). Among those with presumed MDR LTBI, treatment of MDR LTBI is probably effective in preventing TB. In systematic reviews of studies reporting incidence for both treated and untreated contacts, TB incidence was significantly lower with MDR LTBI treatment (333,335-339). In studies that published outcomes by regimen, there were high treatment discontinuation rates due to adverse effects and toxicity among persons taking pyrazinamide-containing MDR LTBI regimens (336,337,339-342) and low discontinuation rates among persons taking fluoroquinolone-containing MDR LTBI regimens. Finally, children are at high risk for developing MDR-TB if infected with MDR LTBI and generally experience fewer adverse effects than adults. Given the uncertainties above, many TB experts and programs provide extended post-treatment-completion follow-up for patients treated for presumed MDR LTBI.

# Conclusions
For contacts with presumed MDR LTBI due to exposure to an infectious patient with MDR-TB, we suggest offering treatment for LTBI rather than following with observation alone. For treatment of MDR LTBI, we suggest 6 to 12 months of treatment with a fluoroquinolone alone or with a second drug, on the basis of source-case isolate DST. On the basis of evidence of increased toxicity, adverse events, and discontinuations, pyrazinamide should not be routinely used as the second drug. In lieu of fluoroquinolone-based treatment, there are few data on the use of other second-line medications, and, because of toxicity, they are not recommended by experts. For contacts of patients with fluoroquinolone-resistant (pre-XDR) TB, pyrazinamide/ethambutol may be an effective option if source-case isolate DST shows susceptibility to these drugs. In children, TB drugs are generally better tolerated, and levofloxacin is preferred because of the availability of an oral suspension formulation.

# Research Needs
Ongoing randomized trials are evaluating regimens for presumed MDR LTBI, including a comparison of 6 months of daily delamanid versus 6 months of isoniazid. In addition, more information is needed on the cost-effectiveness of treating contacts with presumed DR-TB infection using these newer MDR LTBI regimens.

# Summary of Key Differences between ATS/CDC/ERS/IDSA and WHO 2019 Consolidated Guidelines on Drug-Resistant Tuberculosis Treatment
The 2019 WHO Consolidated Guidelines on Drug-Resistant Tuberculosis Treatment were based on a modified dataset that expanded on the initial individual patient dataset used for these ATS/CDC/ERS/IDSA guidelines (3). For the 2019 WHO update, WHO included data from 625 patients in eight additional datasets received in 2018 (Australia, Belarus, Brazil, France, Latvia, Republic of Korea, Russian Federation, and the EndTB project), as well as a sample of 3,626 patients receiving treatment in South Africa, including 1,210 patients started on bedaquiline in 2015. With these newer data added, 3,367 records were removed from the WHO individual patient dataset because of incomplete DST documentation (7).
Despite the changes in the datasets described, the WHO and ATS/CDC/ERS/IDSA recommendations are largely concordant, as they were derived concurrently, using a similar approach (GRADE methodology and a multidisciplinary Guideline Development Group), and were informed by individual patient data meta-analyses that overlapped substantially. The main differences are highlighted below. This official clinical practice guideline was prepared by an ad hoc subcommittee of the ATS, CDC, ERS, and IDSA. Members of the subcommittee are as follows:
For sale by the Superintendent of Documents, U.S. Government Printing Office, Washington, D.C. 20402. HEW Publication No. (NIOSH) 76-138.

PREFACE
The Occupational Safety and Health Act of 1970 emphasizes the need for standards to protect the health and safety of workers exposed to an ever-increasing number of potential hazards at their workplace. The National Institute for Occupational Safety and Health has projected a formal system of research, with priorities determined on the basis of specified indices, to provide relevant data from which valid criteria for effective standards can be derived. Recommended standards for occupational exposure, which are the result of this work, are based on the health effects of exposure. The Secretary of Labor will weigh these recommendations along with other considerations, such as feasibility and means of implementation, in developing regulatory standards. It is intended to present successive reports as research and epidemiologic studies are completed and as sampling and analytical methods are developed. Criteria and standards will be reviewed periodically to ensure continuing protection of the worker. I am pleased to acknowledge the contributions to this report on methylene chloride by members of my staff and the valuable constructive comments by the Review Consultants on methylene chloride, by the ad hoc committees of the American Industrial Hygiene Association and the American Academy of Occupational Medicine, by Robert B. O'Connor, M.D., NIOSH consultant in occupational medicine, and by William M. Pierce on respiratory protection and work practices. The NIOSH recommendations for standards are not necessarily a consensus of all the consultants and professional societies that reviewed this criteria document on methylene chloride. Lists of the NIOSH Review Committee members and of the Review Consultants appear on the following pages.

# CRITERIA DOCUMENT: RECOMMENDATIONS FOR AN OCCUPATIONAL EXPOSURE STANDARD FOR METHYLENE CHLORIDE
"Occupational exposure to methylene chloride" is defined as exposure above one-half the daily time-weighted average (TWA) exposure limit, except when there is also exposure to carbon monoxide (CO) at more than 9 ppm. Because the toxicities of CO and methylene chloride are additive, the appropriate environmental limit and action level of methylene chloride must be reduced in the presence of CO. When CO levels are more than 9 ppm, "occupational exposure to methylene chloride" shall be determined from the

(3) Only appropriate respirators as described in Table 1 shall be used:
- Less than or equal to 3,750 ppm (50X): 1) any supplied-air respirator with full facepiece, helmet, or hood; 2) any self-contained breathing apparatus with full facepiece.
- 3,750 ppm or more: 1) self-contained breathing apparatus, pressure-demand mode (positive pressure), with full facepiece; 2) combination supplied-air respirator, pressure-demand mode, with auxiliary self-contained air supply.
- Evacuation or escape (no concentration limit): 1) any escape-type, self-contained breathing apparatus; 2) any escape-type gas mask providing protection against methylene chloride.

(C) The employer shall provide respirators in accordance with (E). Respirators specified in Table 1 shall be used. However, records which form the basis for concluding that exposures are below one-half the limit shall be maintained, and exposure surveys shall be made when any process change indicates the need for reevaluation or at the discretion of the compliance officer.
(b) Where exposure concentrations have not been determined, they shall be determined within 6 months of the promulgation of a standard incorporating these recommendations.

(c) Where it has been determined that environmental concentrations result in TWA workday exposures above one-half the limit, employers shall maintain records of environmental exposures to methylene chloride based upon the following sampling and recording schedules:

(1) Samples shall be collected at least every 6 months in accordance with Appendix I for the evaluation of the work environment with respect to the recommended standard.

(2) Environmental samples shall be taken when a new process is installed or when process changes are made which may cause an increase in environmental concentrations. Increased production, relocation of existing operations, and increased overtime also require resampling.

(3) In all monitoring, samples shall be collected which are representative of breathing-zone exposures characteristic of each job or specific operation in each work area.

The method of analysis was not given. [58] The hands and arms of the chemist were also exposed to liquid methylene chloride when the salt was removed from the distillation apparatus.

A disadvantage of the method is the indirect system of measurement, requiring collection and desorption prior to analysis.

The Czechoslovak Committee of MAC listed standards for Hungary (6 ppm), Great Britain (500 ppm), and the GDR (141 ppm). [133] The Czechoslovak Committee considered that contamination of methylene chloride with methyl chloride was the reason for the low USSR standard. The 1969 ANSI Z-37 standard [5] was adopted as the federal standard, 29 CFR 1910.1000, Table G2. This standard is a TWA of 500 ppm for an 8-hour day, a ceiling concentration of 1,000 ppm, and a maximum peak of 2,000 ppm for no more than 5 minutes in any 2 hours.

# Basis for Recommended Environmental Limit

Human male subjects (nonsmokers) exposed experimentally to methylene chloride for 7.5 hours daily on 5 consecutive days attained average peak COHb percentages of 2.9% at 50 ppm methylene chloride, 5.7% at 100 ppm methylene chloride, and 9.6% at 250 ppm methylene chloride. In each case, the peaks were attained on the 5th day of exposure.

(1) The date and time of sample collection. (2) Sampling duration. (3) Total sample volume. (4) Location of sampling.

[Material Safety Data Sheet: only the form headings survive here: Product Identification; Unusual Fire and Explosion Hazard; Health Hazard Information (health hazard data; routes of exposure: inhalation, skin contact, skin absorption, eye contact, ingestion; effects of overexposure: acute and chronic; emergency and first aid).]
# Disclosure of Relationship

CDC, our planners, and our content experts wish to disclose they have no financial interest or other relationships with the manufacturers of commercial products, suppliers of commercial services, or commercial supporters. Planners have reviewed content to ensure there is no bias. This document will not include any discussion of the unlabeled use of a product or a product under investigational use, with the exception that some of the recommendations in this document might be inconsistent with package labeling.

# Introduction

Unintended pregnancy rates remain high in the United States; approximately 45% of all pregnancies are unintended, with higher proportions among adolescent and young women, women who are racial/ethnic minorities, and women with lower levels of education and income (1). Unintended pregnancies increase the risk for poor maternal and infant outcomes (2) and, in 2010, resulted in U.S. government health care expenditures of $21 billion (3). Approximately half of unintended pregnancies are among women who were not using contraception at the time they became pregnant; the other half are among women who became pregnant despite reported use of contraception (4). Strategies to prevent unintended pregnancy include assisting women at risk for unintended pregnancy and their partners with choosing appropriate contraceptive methods and helping them use methods correctly and consistently to prevent pregnancy.

In 2013, CDC published the first U.S. Selected Practice Recommendations for Contraceptive Use (U.S. SPR), adapted from global guidance developed by the World Health Organization (WHO SPR), which provided evidence-based guidance on how to use contraceptive methods safely and effectively once they are deemed to be medically appropriate. U.S. SPR is a companion document to U.S. Medical Eligibility Criteria for Contraceptive Use (U.S. MEC) (https://www.cdc.gov/reproductivehealth/unintendedpregnancy/usmec.htm), which provides recommendations on safe use of contraceptive methods for women with various medical conditions and other characteristics (5). WHO intended for the global guidance to be used by local or national policy makers, family planning program managers, and the scientific community as a reference when they develop family planning guidance at the country or program level. During 2012-2013, CDC went through a formal process to adapt the global guidance for best implementation in the United States, which included rigorous identification and critical appraisal of the scientific evidence through systematic reviews and input from national experts on how to translate that evidence into recommendations for U.S. health care providers (6). At that time, CDC committed to keeping this guidance up to date and based on the best available evidence, with full review every few years (6).

This document updates the 2013 U.S. SPR (6) with new evidence and input from experts. Major updates include 1) revised recommendations for starting regular contraception after the use of emergency contraceptive pills and 2) new recommendations for the use of medications to ease insertion of intrauterine devices (IUDs). Recommendations are provided for health care providers on the safe and effective use of contraceptive methods and address provision of contraceptive methods and management of side effects and other problems with contraceptive method use, within the framework of removing unnecessary medical barriers to accessing and using contraception.
These recommendations are meant to serve as a source of clinical guidance for health care providers; health care providers should always consider the individual clinical circumstances of each person seeking family planning services. This report is not intended to be a substitute for professional medical advice for individual patients, who should seek advice from their health care providers when considering family planning options.

# Summary of Changes from the 2013 U.S. SPR

Updated Recommendations

Recommendations have been updated regarding when to start regular contraception after ulipristal acetate (UPA) emergency contraceptive pills:

- Advise the woman to start or resume hormonal contraception no sooner than 5 days after use of UPA, and provide or prescribe the regular contraceptive method as needed. For methods requiring a visit to a health care provider, such as depot medroxyprogesterone acetate (DMPA), implants, and IUDs, starting the method at the time of UPA use may be considered; the risk that the regular contraceptive method might decrease the effectiveness of UPA must be weighed against the risk of not starting a regular hormonal contraceptive method.
- The woman needs to abstain from sexual intercourse or use barrier contraception for the next 7 days after starting or resuming regular contraception or until her next menses, whichever comes first.
- Any nonhormonal contraceptive method can be started immediately after the use of UPA.
- Advise the woman to have a pregnancy test if she does not have a withdrawal bleed within 3 weeks.

# New Recommendations

New recommendations have been made for medications to ease IUD insertion:

- Misoprostol is not recommended for routine use before IUD insertion. Misoprostol might be helpful in select circumstances (e.g., in women with a recent failed insertion).
- Paracervical block with lidocaine might reduce patient pain during IUD insertion.

# Methods

Since publication of the 2013 U.S. SPR, CDC has monitored the literature for new evidence relevant to the recommendations through the WHO/CDC continuous identification of research evidence (CIRE) system (7). This system identifies new evidence as it is published and allows WHO and CDC to update systematic reviews and facilitate updates to recommendations as new evidence warrants. Automated searches are run in PubMed weekly, and the results are reviewed. Abstracts that meet specific criteria are added to the web-based CIRE system, which facilitates coordination and peer review of systematic reviews for both WHO and CDC. In 2014, CDC reviewed all of the existing recommendations in the 2013 U.S. SPR for new evidence identified by CIRE that had the potential to lead to a changed recommendation.

During August 27-28, 2014, CDC held a meeting in Atlanta, Georgia, of 11 family planning experts and representatives from partner organizations to solicit their input on the scope of and process for updating both the 2010 U.S. MEC and the 2013 U.S. SPR. The participants were experts in family planning and represented different provider types and organizations that represent health care providers. A list of participants is provided at the end of this report. The meeting addressed topics to be included in the update of U.S. SPR based on new scientific evidence published since 2013 (identified through the CIRE system), topics addressed at a 2014 WHO meeting to update global guidance, and suggestions CDC received from providers for the addition of recommendations not included in the 2013 U.S. SPR (e.g., from provider feedback through e-mail, public inquiry, and questions received at conferences).
CDC identified one topic to consider adding to the guidance: the use of medications to ease IUD insertion (evidence question: "Among women of reproductive age, does use of medications before IUD insertion improve the safety or effectiveness of the procedure or affect patient outcomes compared with nonuse of these medications?"). CDC also identified one topic for which new evidence warranted a review of an existing recommendation: initiation of regular contraception after emergency contraceptive pills (evidence question: "Does ulipristal acetate for emergency contraception interact with regular use of hormonal contraception, leading to decreased effectiveness of either contraceptive method?"). CDC determined that all other recommendations in the 2013 U.S. SPR were up to date and consistent with the current body of evidence for that recommendation. In preparation for a subsequent expert meeting in August 2015, CDC conducted systematic reviews of the scientific evidence for these topics.

# Maintaining Updated Guidance

As with any evidence-based guidance document, a key challenge is keeping the recommendations up to date as new scientific evidence becomes available. Working with WHO, CDC uses the CIRE system to ensure that WHO and CDC guidance is based on the best available evidence and that a mechanism is in place to update guidance when new evidence becomes available (7). CDC will continue to work with WHO to identify and assess all new relevant evidence and determine whether changes in the recommendations are warranted. In most cases, U.S. SPR will follow any updates in the WHO guidance, which typically occur every 5 years (or sooner if warranted by new data). In addition, CDC will review any interim WHO updates for their application in the United States. CDC also will identify and assess any new literature for the recommendations that are not included in the WHO guidance and will completely review U.S. SPR every 5 years. Updates to the guidance can be found on the U.S. SPR website (https://www.cdc.gov/reproductivehealth/UnintendedPregnancy/USSPR.htm).

# How To Use This Document

The recommendations in this report are intended to help health care providers address issues related to use of contraceptives, such as how to help a woman initiate use of a contraceptive method, which examinations and tests are needed before initiating use of a contraceptive method, what regular follow-up is needed, and how to address problems that often arise during use, including missed pills and side effects such as unscheduled bleeding. Each recommendation addresses what a woman or health care provider can do in specific situations. For situations in which certain groups of women might be medically ineligible to follow the recommendations, comments and reference to U.S. MEC are provided (5). The full U.S. MEC recommendations and the evidence supporting those recommendations were updated in 2016 (5) and are summarized (Appendix A).

The information in this document is organized by contraceptive method, and the methods generally are presented in order of effectiveness, from highest to lowest. However, the recommendations are not intended to provide guidance on every aspect of provision and management of contraceptive method use. Instead, they incorporate the best available evidence to address specific issues regarding common, yet sometimes complex, clinical issues.
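Looking back at the UPA recommendations in the Summary of Changes, the timing rules reduce to simple date arithmetic. The sketch below is a minimal illustration of that arithmetic only; the function and field names are ours, and the clinical caveats in the recommendations (e.g., back-up only until the next menses if that comes sooner) still apply.

```python
from datetime import date, timedelta

def upa_follow_up(upa_date: date, method_start: date) -> dict:
    """Date arithmetic for the UPA recommendations summarized above:
    start/resume hormonal contraception no sooner than 5 days after UPA,
    use back-up for 7 days after (re)starting (or until the next menses,
    whichever comes first), and advise a pregnancy test if there is no
    withdrawal bleed within 3 weeks of UPA use."""
    earliest_hormonal_start = upa_date + timedelta(days=5)
    return {
        "earliest_hormonal_start": earliest_hormonal_start,
        "start_is_too_early": method_start < earliest_hormonal_start,
        "backup_needed_through": method_start + timedelta(days=7),
        "pregnancy_test_if_no_bleed_by": upa_date + timedelta(days=21),
    }

# Example: UPA taken June 1; hormonal method may start June 6 or later.
print(upa_follow_up(date(2016, 6, 1), date(2016, 6, 6)))
```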
Each contraceptive method section generally includes information about initiation of the method, regular follow-up, and management of problems with use (e.g., usage errors and side effects). Each section first provides the recommendation and then includes comments and a brief summary of the scientific evidence on which the recommendation is based. The level of evidence from the systematic reviews for each evidence summary is provided based on the U.S. Preventive Services Task Force system, which includes ratings for study design (I: randomized controlled trials; II-1: controlled trials without randomization; II-2: observational studies; and II-3: multiple time series or descriptive studies), ratings for internal validity (good, fair, or poor), and categorization of the evidence as direct or indirect for the specific review question (10).

Recommendations in this document are provided for permanent methods of contraception, such as vasectomy and female sterilization, as well as for reversible methods of contraception, including the copper-containing intrauterine device (Cu-IUD); levonorgestrel-releasing IUDs (LNG-IUDs); the etonogestrel implant; progestin-only injectables; progestin-only pills (POPs); combined hormonal contraceptive methods that contain both estrogen and a progestin, including combined oral contraceptives (COCs), a transdermal contraceptive patch, and a vaginal contraceptive ring; and the standard days method (SDM). Recommendations also are provided for emergency use of the Cu-IUD and emergency contraceptive pills (ECPs).

For each contraceptive method, recommendations are provided on the timing for initiation of the method and indications for when and for how long additional contraception, or a back-up method, is needed. Many of these recommendations include guidance that a woman can start a contraceptive method at any time during her menstrual cycle if it is reasonably certain that she is not pregnant. Guidance for health care providers on how to be reasonably certain that a woman is not pregnant also is provided.

For each contraceptive method, recommendations include the examinations and tests needed before initiation of the method. These recommendations apply to persons who are presumed to be healthy. Those with known medical problems or other special conditions might need additional examinations or tests before being determined to be appropriate candidates for a particular method of contraception. U.S. MEC might be useful in such circumstances (5). Most women need no or very few examinations or tests before initiating a contraceptive method, although they might be needed to address other noncontraceptive health needs (12). Any additional screening needed for preventive health care can be performed at the time of contraception initiation, and initiation should not be delayed for test results. The following classification system was developed by WHO and adopted by CDC to categorize the applicability of the various examinations or tests before initiation of contraceptive methods (13):

Class A: These tests and examinations are essential and mandatory in all circumstances for safe and effective use of the contraceptive method.

Class B: These tests and examinations contribute substantially to safe and effective use, although implementation can be considered within the public health context, service context, or both. The risk for not performing an examination or test should be balanced against the benefits of making the contraceptive method available.
Class C: These tests and examinations do not contribute substantially to safe and effective use of the contraceptive method.

These classifications focus on the relation of the examinations or tests to safe initiation of a contraceptive method. They are not intended to address the appropriateness of these examinations or tests in other circumstances. For example, some of the examinations or tests that are not deemed necessary for safe and effective contraceptive use might be appropriate for good preventive health care or for diagnosing or assessing suspected medical conditions. Systematic reviews were conducted for several different types of examinations and tests to assess whether a screening test was associated with safe use of contraceptive methods. Because no single convention exists for screening panels for certain diseases, including diabetes, lipid disorders, and liver diseases, the search strategies included broad terms for the tests and diseases of interest.

Summary charts and clinical algorithms that summarize the guidance for the various contraceptive methods have been developed for many of the recommendations, including when to start using specific contraceptive methods (Appendix B), examinations and tests needed before initiating the various contraceptive methods (Appendix C), routine follow-up after initiating contraception (Appendix D), management of bleeding irregularities (Appendix E), and management of IUDs when users are found to have pelvic inflammatory disease (PID) (Appendix F). These summaries might be helpful to health care providers when managing family planning patients. Additional tools are available on the U.S. SPR website (https://www.cdc.gov/reproductivehealth/UnintendedPregnancy/USSPR.htm).

# Contraceptive Method Choice

Many elements need to be considered individually by a woman, man, or couple when choosing the most appropriate contraceptive method. Some of these elements include safety, effectiveness, availability (including accessibility and affordability), and acceptability. Although most contraceptive methods are safe for use by most women, U.S. MEC provides recommendations on the safety of specific contraceptive methods for women with certain characteristics and medical conditions (5); a U.S. MEC summary (Appendix A) and the categories of medical eligibility criteria for contraceptive use (Box 1) are provided. Voluntary informed choice of contraceptive methods is an essential guiding principle, and contraceptive counseling, where applicable, might be an important contributor to the successful use of contraceptive methods.

Contraceptive method effectiveness is critically important in minimizing the risk for unintended pregnancy, particularly among women for whom an unintended pregnancy would pose additional health risks. The effectiveness of contraceptive methods depends both on the inherent effectiveness of the method itself and on how consistently and correctly it is used (Figure 1). Both consistent and correct use can vary greatly with characteristics such as age, income, desire to prevent or delay pregnancy, and culture. Methods that depend on consistent and correct use by clients have a wide range of effectiveness between typical use (actual use, including incorrect or inconsistent use) and perfect use (correct and consistent use according to directions) (14). IUDs and implants are considered long-acting, reversible contraception (LARC); these methods are highly effective because they do not depend on regular compliance from the user.
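To make the typical-use figures concrete, the sketch below converts a per-year failure rate (pregnancies per 100 women in a year, as in the tiers of Figure 1) into the probability of at least one unintended pregnancy over several years. It is our own illustration, assuming a constant, independent annual rate, which is a simplification; the specific rates used are illustrative values within the tiers shown in Figure 1.

```python
def cumulative_pregnancy_risk(failures_per_100_woman_years: float,
                              years: int) -> float:
    """P(at least one pregnancy within `years`) assuming a constant
    annual failure rate; an illustrative simplification only."""
    annual = failures_per_100_woman_years / 100.0
    return 1.0 - (1.0 - annual) ** years

# Illustrative typical-use rates within the Figure 1 tiers, over 5 years:
print(round(cumulative_pregnancy_risk(0.8, 5), 2))   # <1 per 100 tier: ~0.04
print(round(cumulative_pregnancy_risk(9.0, 5), 2))   # 6-12 per 100 tier: ~0.38
print(round(cumulative_pregnancy_risk(18.0, 5), 2))  # >=18 per 100 tier: ~0.63
```

The gap between tiers compounds over time, which is one way to see why LARC methods, whose effectiveness does not depend on day-to-day compliance, are emphasized.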
LARC methods are appropriate for most women, including adolescents and nulliparous women. All women should be counseled about the full range and effectiveness of contraceptive options for which they are medically eligible so that they can identify the optimal method.

In choosing a method of contraception, dual protection from the simultaneous risk for human immunodeficiency virus (HIV) and other sexually transmitted diseases (STDs) also should be considered. Although hormonal contraceptives and IUDs are highly effective at preventing pregnancy, they do not protect against STDs, including HIV. Consistent and correct use of the male latex condom reduces the risk for HIV infection and other STDs, including chlamydial infection, gonococcal infection, and trichomoniasis (15). Although evidence is limited, use of female condoms can provide protection from acquisition and transmission of STDs (15). All patients, regardless of contraceptive choice, should be counseled about the use of condoms and the risk for STDs, including HIV infection (15). Additional information about prevention and treatment of STDs is available from the CDC Sexually Transmitted Diseases Treatment Guidelines (https://www.cdc.gov/std/treatment) (15).

Women, men, and couples have increasing numbers of safe and effective choices for contraceptive methods, including LARC methods such as IUDs and implants, to reduce the risk for unintended pregnancy. However, with these expanded options comes the need for evidence-based guidance to help health care providers offer quality family planning care to their patients, including assistance in choosing the most appropriate contraceptive method for individual circumstances and using that method correctly, consistently, and continuously to maximize effectiveness. Removing unnecessary barriers can help patients access and successfully use contraceptive methods. Several medical barriers to initiating and continuing contraceptive methods might exist, such as unnecessary screening examinations and tests before starting the method (e.g., a pelvic examination before initiation of COCs), inability to receive the contraceptive on the same day as the visit (e.g., waiting for test results that might not be needed or waiting until the woman's next menstrual cycle to start use), and difficulty obtaining continued contraceptive supplies (e.g., restrictions on number of pill packs dispensed at one time). Removing unnecessary steps, such as providing prophylactic antibiotics at the time of IUD insertion or requiring unnecessary follow-up procedures, also can help patients access and successfully use contraception.

BOX 1. Categories of medical eligibility criteria for contraceptive use
- U.S. MEC 1 = A condition for which there is no restriction for the use of the contraceptive method.
- U.S. MEC 2 = A condition for which the advantages of using the method generally outweigh the theoretical or proven risks.
- U.S. MEC 3 = A condition for which the theoretical or proven risks usually outweigh the advantages of using the method.
- U.S. MEC 4 = A condition that represents an unacceptable health risk if the contraceptive method is used.

# How To Be Reasonably Certain that a Woman Is Not Pregnant

In most cases, a detailed history provides the most accurate assessment of pregnancy risk in a woman who is about to start using a contraceptive method. Several criteria for assessing pregnancy risk are listed in the recommendation that follows. These criteria are highly accurate (i.e., a negative predictive value of 99%-100%) in ruling out pregnancy among women who are not pregnant (16)(17)(18)(19). Therefore, CDC recommends that health care providers use these criteria to assess pregnancy status in a woman who is about to start using contraceptives (Box 2).

- If a woman meets one of these criteria (and therefore the health care provider can be reasonably certain that she is not pregnant), a urine pregnancy test might be considered in addition to these criteria (based on clinical judgment), bearing in mind the limitations of the accuracy of pregnancy testing. If a woman does not meet any of these criteria, then the health care provider cannot be reasonably certain that she is not pregnant, even with a negative pregnancy test.
- Routine pregnancy testing for every woman is not necessary. On the basis of clinical judgment, health care providers might consider the addition of a urine pregnancy test; however, they should be aware of the limitations, including accuracy of the test relative to the time of last sexual intercourse, recent delivery, or spontaneous or induced abortion.
- If a woman has had recent (i.e., within the last 5 days) unprotected sexual intercourse, consider offering emergency contraception (either a Cu-IUD or ECPs) if pregnancy is not desired.

Comments and Evidence Summary. The criteria for determining whether a woman is pregnant depend on the assurance that she has not ovulated within a certain amount of time after her last menses, spontaneous or induced abortion, or delivery. Among menstruating women, the timing of ovulation can vary widely. During an average 28-day cycle, ovulation generally occurs during days 9-20 (20). In addition, the likelihood of ovulation is low from days 1-7 of the menstrual cycle (21). After a spontaneous or an induced abortion, ovulation can occur within 2-3 weeks and has been found to occur as early as 8-13 days after the end of the pregnancy. Therefore, the likelihood of ovulation is low ≤7 days after an abortion (22)(23)(24). A systematic review reported that the mean day of first ovulation among postpartum nonlactating women occurred 45-94 days after delivery (25). In one study, the earliest ovulation was reported at 25 days after delivery. Among women who are within 6 months postpartum, are fully or nearly fully breastfeeding (exclusively breastfeeding or the vast majority of feeds are breastfeeds), and are amenorrheic, the risk for pregnancy is <2% (26,27).

[Figure 1. Effectiveness of family planning methods. Methods are grouped by pregnancies per 100 women in a year of typical use: most effective (fewer than 1), moderately effective (6-12), and least effective (18 or more), with notes on how to make each method most effective (vasectomy and hysteroscopic sterilization: little or nothing to do after the procedure, but use another method for the first 3 months; injectable: get repeat injections on time; pills: take a pill each day). Condoms should always be used to reduce the risk of sexually transmitted infections. Other methods: the lactational amenorrhea method (LAM) is a highly effective, temporary method of contraception; emergency contraceptive pills or a copper IUD after unprotected intercourse substantially reduces risk of pregnancy.]
Although pregnancy tests often are performed before initiating contraception, the accuracy of qualitative urine pregnancy tests varies depending on the timing of the test relative to missed menses, recent sexual intercourse, or recent pregnancy. The sensitivity of a pregnancy test is defined as the concentration of human chorionic gonadotropin (hCG) at which 95% of tests are positive. Most qualitative pregnancy tests approved by the U.S. Food and Drug Administration (FDA) report a sensitivity of 20-25 mIU/mL in urine (28)(29)(30)(31). However, pregnancy detection rates can vary widely because of differences in test sensitivity and the timing of testing relative to missed menses (30,32). Some studies have shown that an additional 11 days past the day of expected menses are needed to detect 100% of pregnancies using qualitative tests (29). In addition, pregnancy tests cannot detect a pregnancy resulting from recent sexual intercourse. Qualitative tests also might have positive results for several weeks after termination of pregnancy because hCG can be present for several weeks after delivery or abortion (spontaneous or induced) (33)(34)(35).

For contraceptive methods other than IUDs, the benefits of starting to use a contraceptive method likely exceed any risk, even in situations in which the health care provider is uncertain whether the woman is pregnant. Therefore, the health care provider can consider having patients start using contraceptive methods other than IUDs at any time, with a follow-up pregnancy test in 2-4 weeks. The risks of not starting to use contraception should be weighed against the risks of initiating contraception use in a woman who might be already pregnant. Most studies have shown no increased risk for adverse outcomes, including congenital anomalies or neonatal or infant death, among infants exposed in utero to COCs (36)(37)(38). Studies also have shown no increased risk for neonatal or infant death or developmental abnormalities among infants exposed in utero to DMPA (37,39,40).

In contrast, for women who want to begin using an IUD (Cu-IUD or LNG-IUD), in situations in which the health care provider is uncertain whether the woman is pregnant, the woman should be provided with another contraceptive method to use until the health care provider is reasonably certain that she is not pregnant and can insert the IUD. Pregnancies among women with IUDs are at higher risk for complications such as spontaneous abortion, septic abortion, preterm delivery, and chorioamnionitis (41).

A systematic review identified four analyses of data from three diagnostic accuracy studies that evaluated the performance of the listed criteria (Box 2) through use of a pregnancy checklist compared with a urine pregnancy test conducted concurrently (42). The performance of the checklist to diagnose or exclude pregnancy varied, with sensitivity of 55%-100% and specificity of 39%-89%. The negative predictive value was consistent across studies at 99%-100%; the pregnancy checklist correctly ruled out women who were not pregnant. One of the studies assessed the added usefulness of signs and symptoms of pregnancy and found that these criteria did not substantially improve the performance of the pregnancy checklist, although the number of women with signs and symptoms was small (16) (Level of evidence: Diagnostic accuracy studies, fair, direct).
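The checklist's reported performance (sensitivity 55%-100%, specificity 39%-89%, negative predictive value 99%-100%) follows largely from the low prevalence of pregnancy among women being screened. A small worked example (our own, for illustration only) shows why the negative predictive value stays high even when sensitivity is modest:

```python
def negative_predictive_value(sensitivity: float, specificity: float,
                              prevalence: float) -> float:
    """NPV = true negatives / all negative results, via Bayes' rule."""
    true_neg = specificity * (1.0 - prevalence)
    false_neg = (1.0 - sensitivity) * prevalence
    return true_neg / (true_neg + false_neg)

# Assuming 2% of screened women are actually pregnant, even the weakest
# reported checklist performance still rules out pregnancy well:
print(round(negative_predictive_value(0.55, 0.39, 0.02), 3))  # ~0.977
print(round(negative_predictive_value(1.00, 0.89, 0.02), 3))  # 1.0
```

The 2% prevalence here is an assumed figure for illustration; in the actual studies, prevalence and test performance varied, which is why the observed NPV clustered at 99%-100%.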
# Intrauterine Contraception

Four IUDs are available in the United States, the copper-containing IUD and three levonorgestrel-releasing IUDs (containing a total of either 13.5 mg or 52 mg levonorgestrel). Fewer than 1 woman out of 100 becomes pregnant in the first year of using IUDs (with typical use) (14). IUDs are long-acting, are reversible, and can be used by women of all ages, including adolescents, and by parous and nulliparous women. IUDs do not protect against STDs; consistent and correct use of male latex condoms reduces the risk for STDs, including HIV.

# BOX 2. How to be reasonably certain that a woman is not pregnant

A health care provider can be reasonably certain that a woman is not pregnant if she has no symptoms or signs of pregnancy and meets any one of the following criteria:

- is ≤7 days after the start of normal menses
- has not had sexual intercourse since the start of last normal menses
- has been correctly and consistently using a reliable method of contraception
- is ≤7 days after spontaneous or induced abortion
- is within 4 weeks postpartum
- is fully or nearly fully breastfeeding (exclusively breastfeeding or the vast majority of feeds are breastfeeds), amenorrheic, and <6 months postpartum

# Initiation of Cu-IUDs

Timing
- The Cu-IUD can be inserted at any time if it is reasonably certain that the woman is not pregnant (Box 2).
- The Cu-IUD also can be inserted within 5 days of the first act of unprotected sexual intercourse as an emergency contraceptive. If the day of ovulation can be estimated, the Cu-IUD also can be inserted >5 days after sexual intercourse, as long as insertion does not occur >5 days after ovulation.

# Need for Back-Up Contraception
- No additional contraceptive protection is needed after Cu-IUD insertion.

# Special Considerations

Amenorrhea (Not Postpartum)
- Timing: The Cu-IUD can be inserted at any time if it is reasonably certain that the woman is not pregnant (Box 2).
- Need for back-up contraception: No additional contraceptive protection is needed.

# Postpartum (Including After Cesarean Delivery)
- Timing: The Cu-IUD can be inserted at any time postpartum, including immediately postpartum (U.S. MEC 1 or 2) (Box 1), if it is reasonably certain that the woman is not pregnant (Box 2). The Cu-IUD should not be inserted in a woman with postpartum sepsis (e.g., chorioamnionitis or endometritis) (U.S. MEC 4).
- Need for back-up contraception: No additional contraceptive protection is needed.

# Postabortion (Spontaneous or Induced)
- Timing: The Cu-IUD can be inserted within the first 7 days, including immediately postabortion (U.S. MEC 1 for first-trimester abortion and U.S. MEC 2 for second-trimester abortion). The Cu-IUD should not be inserted immediately after a septic abortion (U.S. MEC 4).
- Need for back-up contraception: No additional contraceptive protection is needed.

# Switching from Another Contraceptive Method
- Timing: The Cu-IUD can be inserted immediately if it is reasonably certain that the woman is not pregnant (Box 2). Waiting for her next menstrual cycle is unnecessary.
- Need for back-up contraception: No additional contraceptive protection is needed.

Comments and Evidence Summary. In situations in which the health care provider is not reasonably certain that the woman is not pregnant, the woman should be provided with another contraceptive method to use until the health care provider can be reasonably certain that she is not pregnant and can insert the Cu-IUD.
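Box 2 above is effectively a disjunction: meeting any one criterion suffices, provided there are no symptoms or signs of pregnancy. A minimal sketch of that logic follows; the field names are ours, and the clinical-judgment caveats in the surrounding text still apply.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class PatientHistory:
    days_since_menses_start: Optional[int] = None
    intercourse_since_last_menses: bool = True
    consistent_reliable_contraception: bool = False
    days_since_abortion: Optional[int] = None
    weeks_postpartum: Optional[int] = None
    fully_breastfeeding_and_amenorrheic: bool = False
    months_postpartum: Optional[int] = None

def reasonably_certain_not_pregnant(h: PatientHistory) -> bool:
    """Any one Box 2 criterion met (absent signs/symptoms of pregnancy)."""
    return any([
        h.days_since_menses_start is not None and h.days_since_menses_start <= 7,
        not h.intercourse_since_last_menses,
        h.consistent_reliable_contraception,
        h.days_since_abortion is not None and h.days_since_abortion <= 7,
        h.weeks_postpartum is not None and h.weeks_postpartum <= 4,
        h.fully_breastfeeding_and_amenorrheic
        and h.months_postpartum is not None and h.months_postpartum < 6,
    ])

# Example: day 5 of a normal menstrual cycle -> criterion met.
print(reasonably_certain_not_pregnant(PatientHistory(days_since_menses_start=5)))
```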
A systematic review identified eight studies that suggested that timing of Cu-IUD insertion in relation to the menstrual cycle in non-postpartum women had little effect on long-term outcomes (rates of continuation, removal, expulsion, or pregnancy) or on short-term outcomes (pain at insertion, bleeding at insertion, or immediate expulsion) (43) (Level of evidence: II-2, fair, direct).

# Initiation of LNG-IUDs

Timing of LNG-IUD Insertion
- The LNG-IUD can be inserted at any time if it is reasonably certain that the woman is not pregnant (Box 2).

Need for Back-Up Contraception
- If the LNG-IUD is inserted within the first 7 days since menstrual bleeding started, no additional contraceptive protection is needed.
- If the LNG-IUD is inserted >7 days since menstrual bleeding started, the woman needs to abstain from sexual intercourse or use additional contraceptive protection for the next 7 days.

# Special Considerations

Amenorrhea (Not Postpartum)
- Timing: The LNG-IUD can be inserted at any time if it is reasonably certain that the woman is not pregnant (Box 2).
- Need for back-up contraception: The woman needs to abstain from sexual intercourse or use additional contraceptive protection for the next 7 days.

# Postpartum (Including After Cesarean Delivery)

# Switching from Another Contraceptive Method
- Timing: The LNG-IUD can be inserted immediately if it is reasonably certain that the woman is not pregnant (Box 2). Waiting for her next menstrual cycle is unnecessary.
- Need for back-up contraception: If it has been >7 days since menstrual bleeding began, the woman needs to abstain from sexual intercourse or use additional contraceptive protection for the next 7 days.
- Switching from a Cu-IUD: If the woman has had sexual intercourse since the start of her current menstrual cycle and it has been >5 days since menstrual bleeding started, theoretically, residual sperm might be in the genital tract, which could lead to fertilization if ovulation occurs. A health care provider may consider providing any type of ECPs at the time of LNG-IUD insertion.

Comments and Evidence Summary. In situations in which the health care provider is uncertain whether the woman might be pregnant, the woman should be provided with another contraceptive method to use until the health care provider can be reasonably certain that she is not pregnant and can insert the LNG-IUD. If a woman needs to use additional contraceptive protection when switching to an LNG-IUD from another contraceptive method, consider continuing her previous method for 7 days after LNG-IUD insertion. No direct evidence was found regarding the effects of inserting LNG-IUDs on different days of the cycle on short- or long-term outcomes (43).

# Examinations and Tests Needed Before Initiation of a Cu-IUD or an LNG-IUD

Among healthy women, few examinations or tests are needed before initiation of an IUD (Table 1). Bimanual examination and cervical inspection are necessary before IUD insertion. A baseline weight and BMI measurement might be useful for monitoring IUD users over time. If a woman has not been screened for STDs according to STD screening guidelines, screening can be performed at the time of insertion. Women with known medical problems or other special conditions might need additional examinations or tests before being determined to be appropriate candidates for a particular method of contraception. U.S. MEC might be useful in such circumstances (5).

Comments and Evidence Summary. Weight (BMI): Obese women can use IUDs (U.S. MEC 1) (5); therefore, screening for obesity is not necessary for the safe initiation of IUDs.
However, measuring weight and calculating BMI (weight [kg] / height [m]²) at baseline might be helpful for monitoring any changes and counseling women who might be concerned about weight change perceived to be associated with their contraceptive method.

Footnotes to Table 1:
- Class A: essential and mandatory in all circumstances for safe and effective use of the contraceptive method. Class B: contributes substantially to safe and effective use, but implementation may be considered within the public health and/or service context; the risk of not performing an examination or test should be balanced against the benefits of making the contraceptive method available. Class C: does not contribute substantially to safe and effective use of the contraceptive method.
- † Weight (BMI) measurement is not needed to determine medical eligibility for any methods of contraception because all methods can be used (U.S. MEC 1) or generally can be used (U.S. MEC 2) among obese women (Box 1). However, measuring weight and calculating BMI at baseline might be helpful for monitoring any changes and counseling women who might be concerned about weight change perceived to be associated with their contraceptive method.
- § Most women do not require additional STD screening at the time of IUD insertion. If a woman with risk factors for STDs has not been screened for gonorrhea and chlamydia according to CDC's STD Treatment Guidelines (https://www.cdc.gov/std/treatment), screening can be performed at the time of IUD insertion, and insertion should not be delayed. Women with current purulent cervicitis or chlamydial infection or gonococcal infection should not undergo IUD insertion (U.S. MEC 4).

Bimanual examination and cervical inspection: Bimanual examination and cervical inspection are necessary before IUD insertion to assess uterine size and position and to detect any cervical or uterine abnormalities that might indicate infection or otherwise prevent IUD insertion (44,45).

STDs: Women should be routinely screened for chlamydial infection and gonorrhea according to national screening guidelines. The CDC Sexually Transmitted Diseases Treatment Guidelines provide information on screening eligibility, timing, and frequency of screening and on screening for persons with risk factors (15) (https://www.cdc.gov/std/treatment). If STD screening guidelines have been followed, most women do not need additional STD screening at the time of IUD insertion, and insertion should not be delayed. If a woman with risk factors for STDs has not been screened for gonorrhea and chlamydia according to CDC STD treatment guidelines, screening can be performed at the time of IUD insertion, and insertion should not be delayed. Women with current purulent cervicitis or chlamydial infection or gonococcal infection should not undergo IUD insertion (U.S. MEC 4). A systematic review identified two studies that demonstrated no differences in PID rates among women who screened positive for gonorrhea or chlamydia and underwent concurrent IUD insertion compared with women who screened positive and initiated other contraceptive methods (46). Indirect evidence demonstrates that women who undergo same-day STD screening and IUD insertion have similar PID rates compared with women who have delayed IUD insertion. Women who undergo same-day STD screening and IUD insertion have low incidence rates of PID. Algorithms for predicting PID among women with risk factors for STDs have poor predictive value.
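The BMI calculation referenced above is weight in kilograms divided by height in meters squared. A one-line helper for baseline documentation (illustrative only; as noted, BMI is not an eligibility screen for IUDs):

```python
def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index = weight (kg) / height (m)^2."""
    return weight_kg / height_m ** 2

print(round(bmi(70.0, 1.65), 1))  # 25.7
```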
Risk for PID among women with risk factors for STDs is low (15,47-57). Although women with STDs at the time of IUD insertion have a higher risk for PID, the overall rate of PID among all IUD users is low (51,54).

Hemoglobin: Women with iron-deficiency anemia can use the LNG-IUD (U.S. MEC 1) (5); therefore, screening for anemia is not necessary for safe initiation of the LNG-IUD. Women with iron-deficiency anemia generally can use Cu-IUDs (U.S. MEC 2) (5). Measurement of hemoglobin before initiation of Cu-IUDs is not necessary because of the minimal change in hemoglobin among women with and without anemia using Cu-IUDs. A systematic review identified four studies that provided direct evidence for changes in hemoglobin among women with anemia who received Cu-IUDs (58). Evidence from one randomized trial (59) and one prospective cohort study (60) showed no significant changes in hemoglobin among Cu-IUD users with anemia, whereas two prospective cohort studies (61,62) showed a statistically significant decrease in hemoglobin levels during 12 months of follow-up; however, the magnitude of the decrease was small and most likely not clinically significant. The systematic review also identified 21 studies that provided indirect evidence by examining changes in hemoglobin among healthy women receiving Cu-IUDs (63-83), which generally showed no clinically significant changes in hemoglobin levels with up to 5 years of follow-up (Level of evidence: I to II-2, fair, direct).

Lipids: Screening for dyslipidemias is not necessary for the safe initiation of the Cu-IUD or LNG-IUD because of the low prevalence of undiagnosed disease in women of reproductive age and the low likelihood of clinically significant changes with use of hormonal contraceptives. A systematic review did not identify any evidence regarding outcomes among women who were screened versus not screened with lipid measurement before initiation of hormonal contraceptives (57). During 2009-2012, among women aged 20-44 years in the United States, 7.6% had high cholesterol, defined as total serum cholesterol ≥240 mg/dL (84). During 1999-2008, the prevalence of undiagnosed hypercholesterolemia among women aged 20-44 years was approximately 2% (85). Studies have shown mixed results about the effects of hormonal methods on lipid levels among both healthy women and women with baseline lipid abnormalities, and the clinical significance of these changes is unclear (86)(87)(88)(89).

Liver enzymes: Women with liver disease can use the Cu-IUD (U.S. MEC 1) (5); therefore, screening for liver disease is not necessary for the safe initiation of the Cu-IUD. Although women with certain liver diseases generally should not use the LNG-IUD (U.S. MEC 3) (5), screening for liver disease before initiation of the LNG-IUD is not necessary because of the low prevalence of these conditions and the high likelihood that women with liver disease already would have had the condition diagnosed. A systematic review did not identify any evidence regarding outcomes among women who were screened versus not screened with liver enzyme tests before initiation of hormonal contraceptive use (57). In 2012, among U.S. women, the percentage with liver disease (not further specified) was 1.3% (90). In 2013, the incidence of acute hepatitis A, B, or C was ≤1 per 100,000 U.S. population (91). During 2002-2011, the incidence of liver carcinoma among U.S. women was approximately 3.7 per 100,000 population (92).
Because estrogen and progestins are metabolized in the liver, the use of hormonal contraceptives among women with liver disease might, theoretically, be a concern. The use of hormonal contraceptives, specifically COCs and POPs, does not affect disease progression or severity in women with hepatitis, cirrhosis, or benign focal nodular hyperplasia (93,94), although evidence is limited, and no evidence exists for the LNG-IUD.

Clinical breast examination: Women with breast disease can use the Cu-IUD (U.S. MEC 1) (5); therefore, screening for breast disease is not necessary for the safe initiation of the Cu-IUD. Although women with current breast cancer should not use the LNG-IUD (U.S. MEC 4) (5), screening asymptomatic women with a clinical breast examination before inserting an IUD is not necessary because of the low prevalence of breast cancer among women of reproductive age. A systematic review did not identify any evidence regarding outcomes among women who were screened versus not screened with a breast examination before initiation of hormonal contraceptives (95). The incidence of breast cancer among women of reproductive age in the United States is low. In 2012, the incidence of breast cancer among women aged 20-49 years was approximately 70.7 per 100,000 women (96).

Cervical cytology: Although women with cervical cancer should not undergo IUD insertion (U.S. MEC 4) (5), screening asymptomatic women with cervical cytology before IUD insertion is not necessary because of the high rates of cervical screening, low incidence of cervical cancer in the United States, and high likelihood that a woman with cervical cancer already would have had the condition diagnosed. A systematic review did not identify any evidence regarding outcomes among women who were screened versus not screened with cervical cytology before initiation of IUDs (57). Cervical cancer is rare in the United States, with an incidence rate of 9.8 per 100,000 women during 2012 (96). The incidence and mortality rates from cervical cancer have declined dramatically in the United States, largely because of cervical cytology screening (97). Overall screening rates for cervical cancer in the United States are high; in 2013, among women aged 18-44 years, approximately 77% reported having cervical cytology screening within the last 3 years (98).

HIV screening: Women with HIV infection can use (U.S. MEC 1) or generally can use (U.S. MEC 2) IUDs (5). Therefore, HIV screening is not necessary before IUD insertion. A systematic review did not identify any evidence regarding outcomes among women who were screened versus not screened for HIV infection before IUD insertion (57). Limited evidence suggests that IUDs are not associated with disease progression, increased infection, or other adverse health effects among women with HIV infection (99)(100)(101)(102)(103)(104)(105)(106)(107)(108)(109)(110)(111)(112)(113)(114).

Other screening: Women with hypertension, diabetes, or thrombogenic mutations can use (U.S. MEC 1) or generally can use (U.S. MEC 2) IUDs (5). Therefore, screening for these conditions is not necessary for the safe initiation of IUDs.

# Provision of Medications to Ease IUD Insertion

- Misoprostol is not recommended for routine use before IUD insertion. Misoprostol might be helpful in select circumstances (e.g., in women with a recent failed insertion).
- Paracervical block with lidocaine might reduce patient pain during IUD insertion.

Comments and Evidence Summary.
Potential barriers to IUD use include anticipated pain with insertion and provider concerns about difficult insertion. Identifying effective approaches to ease IUD insertion might increase IUD initiation. Evidence for misoprostol from two systematic reviews, including a total of 10 randomized controlled trials, suggests that misoprostol does not improve provider ease of insertion, reduce the need for adjunctive insertion measures, or improve insertion success (Level of evidence: I, good to fair, direct) and might increase patient pain and side effects (Level of evidence: I, high quality) (115,116). However, one randomized controlled trial examined women with a recent failed IUD insertion and found significantly higher insertion success with the second insertion attempt among women pretreated with misoprostol versus placebo (Level of evidence: I, good, direct) (117).

Limited evidence for paracervical block with lidocaine from one systematic review suggests that it might reduce patient pain (115). In this review, two randomized controlled trials found significantly reduced pain at either tenaculum placement or IUD insertion among women receiving paracervical block with 1% lidocaine 3-5 minutes before IUD insertion (118,119). Neither trial found differences in side effects among women receiving paracervical block compared with controls (Level of evidence: I, moderate to low quality) (118,119). Limited evidence on nonsteroidal anti-inflammatory drugs (NSAIDs) and nitric oxide donors generally suggested no positive effect; evidence on lidocaine with administration other than paracervical block was limited and inconclusive (Level of evidence for provider ease of insertion: I, good to poor, direct; Level of evidence for need for adjunctive insertion measures: I, fair, direct; Level of evidence for patient pain: I, high to low quality; Level of evidence for side effects: I, high to low quality) (115,116).

# Provision of Prophylactic Antibiotics at the Time of IUD Insertion

- Prophylactic antibiotics are generally not recommended for Cu-IUD or LNG-IUD insertion.

Comments and Evidence Summary. Theoretically, IUD insertion could induce bacterial spread and lead to complications such as PID or infective endocarditis. A meta-analysis was conducted of randomized controlled trials examining antibiotic prophylaxis versus placebo or no treatment for IUD insertion (120). Use of prophylaxis reduced the frequency of unscheduled return visits but did not significantly reduce the incidence of PID or premature IUD discontinuation. Although the risk for PID was higher within the first 20 days after insertion, the incidence of PID was low among all women who had IUDs inserted (51). In addition, the American Heart Association recommends that the use of prophylactic antibiotics solely to prevent infective endocarditis is not needed for genitourinary procedures (121). Studies have not demonstrated a conclusive link between genitourinary procedures and infective endocarditis or a preventive benefit of prophylactic antibiotics during such procedures (121).

# Routine Follow-Up After IUD Insertion

These recommendations address when routine follow-up is needed for safe and effective continued use of contraception for healthy women. The recommendations refer to general situations and might vary for different users and different situations.
Specific populations who might benefit from more frequent follow-up visits include adolescents, persons with certain medical conditions or characteristics, and persons with multiple medical conditions. - Advise the woman to return at any time to discuss side effects or other problems, if she wants to change the method being used, and when it is time to remove or replace the contraceptive method. No routine follow-up visit is required. - At other routine visits, health care providers who see IUD users should do the following: -Assess the woman's satisfaction with her contraceptive method and whether she has any concerns about method use. -Assess any changes in health status, including medications, that would change the appropriateness of the IUD for safe and effective continued use on the basis of U.S. MEC (e.g., category 3 and 4 conditions and characteristics). -Consider performing an examination to check for the presence of the IUD strings. -Consider assessing weight changes and counseling women who are concerned about weight changes perceived to be associated with their contraceptive method. Comments and Evidence Summary. Evidence from a systematic review about the effect of a specific follow-up visit schedule on IUD continuation is very limited and of poor quality. The evidence did not suggest that greater frequency of visits or earlier timing of the first follow-up visit after insertion improves continuation of use (122) (Level of evidence: II-2, poor, direct). Evidence from four studies from a systematic review on the incidence of PID among IUD initiators, or IUD removal as a result of PID, suggested that the incidence of PID did not differ between women using Cu-IUDs and those using DMPA, COCs, or LNG-IUDs (123) (Level of evidence: I to II-2, good, indirect). Evidence on the timing of PID after IUD insertion is mixed. Although the rate of PID generally was low, the largest study suggested that the rate of PID was significantly higher in the first 20 days after insertion (51) (Level of evidence: I to II-3, good to poor, indirect). # Bleeding Irregularities with Cu-IUD Use - Before Cu-IUD insertion, provide counseling about potential changes in bleeding patterns during Cu-IUD use. Unscheduled spotting or light bleeding, as well as heavy or prolonged bleeding, is common during the first 3-6 months of Cu-IUD use, is generally not harmful, and decreases with continued Cu-IUD use. - If clinically indicated, consider an underlying gynecological problem, such as Cu-IUD displacement, an STD, pregnancy, or new pathologic uterine conditions (e.g., polyps or fibroids), especially in women who have already been using the Cu-IUD for a few months or longer and who have developed a new onset of heavy or prolonged bleeding. If an underlying gynecological problem is found, treat the condition or refer for care. - If an underlying gynecological problem is not found and the woman requests treatment, the following treatment option can be considered during days of bleeding: -NSAIDs for short-term treatment (5-7 days) - If bleeding persists and the woman finds it unacceptable, counsel her on alternative contraceptive methods, and offer another method if it is desired. Comments and Evidence Summary. During contraceptive counseling and before insertion of the Cu-IUD, information about common side effects such as unscheduled spotting or light bleeding or heavy or prolonged menstrual bleeding, especially during the first 3-6 months of use, should be discussed (64).
These bleeding irregularities are generally not harmful. Enhanced counseling about expected bleeding patterns and reassurance that bleeding irregularities are generally not harmful have been shown to reduce method discontinuation in clinical trials with other contraceptives (i.e., DMPA) (124,125). Evidence is limited on specific drugs, doses, and durations of use for effective treatments for bleeding irregularities with Cu-IUD use. Therefore, although this report includes general recommendations for treatments to consider, evidence for specific regimens is lacking. A systematic review identified 11 studies that examined various therapeutic treatments for heavy menstrual bleeding, prolonged menstrual bleeding, or both among women using Cu-IUDs (126). Nine studies examined the use of various oral NSAIDs for the treatment of heavy or prolonged menstrual bleeding among Cu-IUD users and compared them with either a placebo or a baseline cycle. Three of these trials examined the use of indomethacin (127)(128)(129), three examined mefenamic acid (130)(131)(132), and three examined flufenamic acid (127,128,133). Other NSAIDs used in the reported trials included alclofenac (127,128), suprofen (134), and diclofenac sodium (135). All but one NSAID study (131) demonstrated statistically significant or notable reductions in mean total menstrual blood loss with NSAID use. One study among 19 Cu-IUD users with heavy bleeding suggested that treatment with oral tranexamic acid can significantly reduce mean blood loss during treatment compared with placebo (135). Data regarding the overall safety of tranexamic acid are limited; an FDA warning states that tranexamic acid is contraindicated in women with active thromboembolic disease or with a history or intrinsic risk for thrombosis or thromboembolism (136,137). Treatment with aspirin demonstrated no statistically significant change in mean blood loss among women whose pretreatment menstrual blood loss was >80 mL or 60-80 mL; treatment resulted in a significant increase in blood loss among women whose pretreatment menstrual blood loss was <60 mL (138). One study examined the use of a synthetic form of vasopressin, intranasal desmopressin (300 µg/day), for the first 5 days of menses for three treatment cycles and found a significant reduction in mean blood loss compared with baseline (130) (Level of evidence: I to II-3, poor to fair, direct). Only one small study examined treatment of spotting with three separate NSAIDs and did not observe improvements in spotting in any of the groups (127) (Level of evidence: I, poor, direct). # Bleeding Irregularities (Including Amenorrhea) with LNG-IUD Use - Before LNG-IUD insertion, provide counseling about potential changes in bleeding patterns during LNG-IUD use. Unscheduled spotting or light bleeding is expected during the first 3-6 months of LNG-IUD use, is generally not harmful, and decreases with continued LNG-IUD use. Over time, bleeding generally decreases with LNG-IUD use, and many women experience only light menstrual bleeding or amenorrhea. Heavy or prolonged bleeding, either unscheduled or menstrual, is uncommon during LNG-IUD use. # Irregular Bleeding (Spotting, Light Bleeding, or Heavy or Prolonged Bleeding) - If clinically indicated, consider an underlying gynecological problem, such as LNG-IUD displacement, an STD, pregnancy, or new pathologic uterine conditions (e.g., polyps or fibroids). If an underlying gynecological problem is found, treat the condition or refer for care.
- If bleeding persists and the woman finds it unacceptable, counsel her on alternative contraceptive methods, and offer another method if it is desired. # Amenorrhea - Amenorrhea does not require any medical treatment. Provide reassurance. -If a woman's regular bleeding pattern changes abruptly to amenorrhea, consider ruling out pregnancy if clinically indicated. - If amenorrhea persists and the woman finds it unacceptable, counsel her on alternative contraceptive methods, and offer another method if it is desired. Comments and Evidence Summary. During contraceptive counseling and before insertion of the LNG-IUD, information about common side effects such as unscheduled spotting or light bleeding, especially during the first 3-6 months of use, should be discussed. Approximately half of LNG-IUD users are likely to experience amenorrhea or oligomenorrhea by 2 years of use (139). These bleeding irregularities are generally not harmful. Enhanced counseling about expected bleeding patterns and reassurance that bleeding irregularities are generally not harmful have been shown to reduce method discontinuation in clinical trials with other hormonal contraceptives (i.e., DMPA) (124,125). No direct evidence was found regarding therapeutic treatments for bleeding irregularities during LNG-IUD use. # Management of the IUD when a Cu-IUD or an LNG-IUD User Is Found To Have PID - Treat the PID according to the CDC Sexually Transmitted Diseases Treatment Guidelines (15). - Provide comprehensive management for STDs, including counseling about condom use. - The IUD does not need to be removed immediately if the woman needs ongoing contraception. - Reassess the woman in 48-72 hours. If no clinical improvement occurs, continue antibiotics and consider removal of the IUD. - If the woman wants to discontinue use, remove the IUD sometime after antibiotics have been started to avoid the potential risk for bacterial spread resulting from the removal procedure. - If the IUD is removed, consider ECPs if appropriate. Counsel the woman on alternative contraceptive methods, and offer another method if it is desired. - A summary of IUD management in women with PID is provided (Appendix F). Comments and Evidence Summary. Treatment outcomes do not generally differ between women with PID who retain the IUD and those who have the IUD removed; however, appropriate antibiotic treatment and close clinical follow-up are necessary. A systematic review identified four studies that included women using copper or nonhormonal IUDs who developed PID and compared outcomes between women who had the IUD removed and those who did not (140). One randomized trial showed that women who had the IUD removed had longer hospitalizations than those who retained it, although no differences in PID recurrences or subsequent pregnancies were observed (141). Another randomized trial showed no differences in laboratory findings among women who had the IUD removed compared with those who did not (142). One prospective cohort study showed no differences in clinical or laboratory findings during hospitalization; however, the IUD removal group had longer hospitalizations (143). One randomized trial showed that the rate of recovery for most clinical signs and symptoms was higher among women who had the IUD removed than among women who did not (144). No evidence was found regarding women using LNG-IUDs (Level of evidence: I to II-2, fair, direct). # Management of the IUD when a Cu-IUD or an LNG-IUD User Is Found To Be Pregnant - Evaluate for possible ectopic pregnancy.
- Advise the woman that she has an increased risk for spontaneous abortion (including septic abortion that might be life threatening) and for preterm delivery if the IUD is left in place. The removal of the IUD reduces these risks but might not decrease the risk to the baseline level of a pregnancy without an IUD. -If she does not want to continue the pregnancy, counsel her about options. -If she wants to continue the pregnancy, advise her to seek care promptly if she has heavy bleeding, cramping, pain, abnormal vaginal discharge, or fever. # IUD Strings Are Visible or Can Be Retrieved Safely from the Cervical Canal - Advise the woman that the IUD should be removed as soon as possible. -If the IUD is to be removed, remove it by pulling on the strings gently. -Advise the woman that she should return promptly if she has heavy bleeding, cramping, pain, abnormal vaginal discharge, or fever. - If she chooses to keep the IUD, advise her to seek care promptly if she has heavy bleeding, cramping, pain, abnormal vaginal discharge, or fever. # IUD Strings Are Not Visible and Cannot Be Safely Retrieved - If ultrasonography is available, consider performing or referring for ultrasound examination to determine the location of the IUD. If the IUD cannot be located, it might have been expelled or have perforated the uterine wall. - If ultrasonography is not possible or the IUD is determined by ultrasound to be inside the uterus, advise the woman to seek care promptly if she has heavy bleeding, cramping, pain, abnormal vaginal discharge, or fever. Comments and Evidence Summary. Removing the IUD improves the pregnancy outcome if the IUD strings are visible or the device can be retrieved safely from the cervical canal. Risks for spontaneous abortion, preterm delivery, and infection are substantial if the IUD is left in place. Theoretically, the fetus might be affected by hormonal exposure from an LNG-IUD. However, whether this exposure increases the risk for fetal abnormalities is unknown. A systematic review identified nine studies suggesting that women who did not remove their IUDs during pregnancy were at greater risk for adverse pregnancy outcomes (including spontaneous abortion, septic abortion, preterm delivery, and chorioamnionitis) compared with women who had their IUDs removed or who did not have an IUD (41). Cu-IUD removal decreased risks but not to the baseline risk for pregnancies without an IUD. One case series examined LNG-IUDs; when the LNG-IUD was not removed, 8 out of 10 pregnancies ended in spontaneous abortion (Level of evidence: II-2, fair, direct). # Implants The etonogestrel implant, a single rod with 68 mg of etonogestrel, is available in the United States. Fewer than 1 woman out of 100 becomes pregnant in the first year of use of the etonogestrel implant with typical use (14). The implant is long acting, is reversible, and can be used by women of all ages, including adolescents. The implant does not protect against STDs; consistent and correct use of male latex condoms reduces the risk for STDs, including HIV. # Initiation of Implants Timing - The implant can be inserted at any time if it is reasonably certain that the woman is not pregnant (Box 2). # Need for Back-Up Contraception - If the implant is inserted within the first 5 days since menstrual bleeding started, no additional contraceptive protection is needed.
- If the implant is inserted >5 days since menstrual bleeding started, the woman needs to abstain from sexual intercourse or use additional contraceptive protection for the next 7 days. # Special Considerations Amenorrhea (Not Postpartum) - Timing: The implant can be inserted at any time if it is reasonably certain that the woman is not pregnant (Box 2). - Need for back-up contraception: The woman needs to abstain from sexual intercourse or use additional contraceptive protection for the next 7 days. # Postpartum (Breastfeeding) # Postabortion (Spontaneous or Induced) - Timing: The implant can be inserted within the first 7 days, including immediately after the abortion (U.S. MEC 1). - Need for back-up contraception: The woman needs to abstain from sexual intercourse or use additional contraceptive protection for the next 7 days unless the implant is placed at the time of a surgical abortion. # Switching from Another Contraceptive Method - Timing: The implant can be inserted immediately if it is reasonably certain that the woman is not pregnant (Box 2). Waiting for her next menstrual cycle is unnecessary. - Need for back-up contraception: If it has been >5 days since menstrual bleeding started, the woman needs to abstain from sexual intercourse or use additional contraceptive protection for the next 7 days after insertion. - Switching from an IUD: If the woman has had sexual intercourse since the start of her current menstrual cycle and it has been >5 days since menstrual bleeding started, theoretically, residual sperm might be in the genital tract, which could lead to fertilization if ovulation occurs. A health care provider may consider any of the following options: -Advise the woman to retain the IUD for at least 7 days after the implant is inserted and return for IUD removal. -Advise the woman to abstain from sexual intercourse or use barrier contraception for 7 days before removing the IUD and switching to the new method. -If the woman cannot return for IUD removal and has not abstained from sexual intercourse or used barrier contraception for 7 days, advise the woman to use ECPs (with the exception of UPA) at the time of IUD removal. Comments and Evidence Summary. In situations in which the health care provider is uncertain whether the woman might be pregnant, the benefits of starting the implant likely exceed any risk; therefore, starting the implant should be considered at any time, with a follow-up pregnancy test in 2-4 weeks. If a woman needs to use additional contraceptive protection when switching to an implant from another contraceptive method, consider continuing her previous method for 7 days after implant insertion. No direct evidence was found regarding the effects of starting the etonogestrel implant at different times of the cycle. # Examinations and Tests Needed Before Implant Insertion Among healthy women, no examinations or tests are needed before initiation of an implant, although a baseline weight and BMI measurement might be useful for monitoring implant users over time (Table 2). Women with known medical problems or other special conditions might need additional examinations or tests before being determined to be appropriate candidates for a particular method of contraception. U.S. MEC might be useful in such circumstances (5). Comments and Evidence Summary. Weight (BMI): Obese women can use implants (U.S. MEC 1) (5); therefore, screening for obesity is not necessary for the safe initiation of implants. 
However, measuring weight and calculating BMI at baseline might be helpful for monitoring any changes and counseling women who might be concerned about weight change perceived to be associated with their contraceptive method. Bimanual examination and cervical inspection: A pelvic examination is not necessary before initiation of implants because it would not facilitate detection of conditions for which implant use would be unsafe. Women with current breast cancer should not use implants (U.S. MEC 4); women with certain liver diseases generally should not use implants (U.S. MEC 3) (5). However, none of these conditions are likely to be detected by pelvic examination (145). A systematic review identified two case-control studies that compared delayed and immediate pelvic examination before initiation of hormonal contraceptives, specifically oral contraceptives or DMPA (95). No differences in risk factors for cervical neoplasia, incidence of STDs, incidence of abnormal Papanicolaou smears, or incidence of abnormal wet mounts were observed. No evidence was found regarding implants (Level of evidence: II-2, fair, direct). Lipids: Screening for dyslipidemias is not necessary for the safe initiation of implants because of the low prevalence of undiagnosed disease in women of reproductive age and the low likelihood of clinically significant changes with use of hormonal contraceptives. A systematic review did not identify any evidence regarding outcomes among women who were screened versus not screened with lipid measurement before initiation of hormonal contraceptives (57). During 2009-2012 among women aged 20-44 years in the United States, 7.6% had high cholesterol, defined as total serum cholesterol ≥240 mg/dL (84). During 1999-2008, the prevalence of undiagnosed hypercholesterolemia among women aged 20-44 years was approximately 2% (85). Studies have shown mixed results regarding the effects of hormonal methods on lipid levels among both healthy women and women with baseline lipid abnormalities, and the clinical significance of these changes is unclear (86)(87)(88)(89). Liver enzymes: Although women with certain liver diseases generally should not use implants (U.S. MEC 3) (5), screening for liver disease before initiation of implants is not necessary because of the low prevalence of these conditions and the high likelihood that women with liver disease already would have had the condition diagnosed. A systematic review did not identify any evidence regarding outcomes among women who were screened versus not screened with liver enzyme tests before initiation of hormonal contraceptives (57). In 2012, the percentage of U.S. women with liver disease (not further specified) was 1.3% (90). In 2013, the incidence of acute hepatitis A, B, or C was ≤1 per 100,000 U.S. population (91). During 2002-2011, the incidence of liver carcinoma among U.S. women was approximately 3.7 per 100,000 population (92). Because estrogen and progestins are metabolized in the liver, the use of hormonal contraceptives among women with liver disease might, theoretically, be a concern. The use of hormonal contraceptives, specifically COCs and POPs, does not affect disease progression or severity in women with hepatitis, cirrhosis, or benign focal nodular hyperplasia (93,94), although evidence is limited, and no evidence exists for implants. Clinical breast examination: Although women with current breast cancer should not use implants (U.S.
MEC 4) (5), screening asymptomatic women with a clinical breast examination before initiation of implants is not necessary because of the low prevalence of breast cancer among women of reproductive age (15-49 years). A systematic review did not identify any evidence regarding outcomes among women who were screened versus not screened with a breast examination before initiation of hormonal contraceptives (95). The incidence of breast cancer among women of reproductive age in the United States is low. In 2012, the incidence of breast cancer among women aged 20-49 years was approximately 70.7 per 100,000 women (96). Other screening: Women with hypertension, diabetes, anemia, thrombogenic mutations, cervical intraepithelial neoplasia, cervical cancer, STDs, or HIV infection can use (U.S. MEC 1) or generally can use (U.S. MEC 2) implants (5); therefore, screening for these conditions is not necessary for the safe initiation of implants.
Footnotes to Table 2: Class A: essential and mandatory in all circumstances for safe and effective use of the contraceptive method. Class B: contributes substantially to safe and effective use, but implementation may be considered within the public health and/or service context; the risk of not performing an examination or test should be balanced against the benefits of making the contraceptive method available. Class C: does not contribute substantially to safe and effective use of the contraceptive method. † Weight (BMI) measurement is not needed to determine medical eligibility for any methods of contraception because all methods can be used (U.S. MEC 1) or generally can be used (U.S. MEC 2) among obese women (Box 1). However, measuring weight and calculating BMI at baseline might be helpful for monitoring any changes and counseling women who might be concerned about weight change perceived to be associated with their contraceptive method.
# Routine Follow-Up After Implant Insertion These recommendations address when routine follow-up is needed for safe and effective continued use of contraception for healthy women. The recommendations refer to general situations and might vary for different users and different situations. Specific populations who might benefit from more frequent follow-up visits include adolescents, those with certain medical conditions or characteristics, and those with multiple medical conditions. - Advise the woman to return at any time to discuss side effects or other problems, if she wants to change the method being used, and when it is time to remove or replace the contraceptive method. No routine follow-up visit is required. - At other routine visits, health care providers seeing implant users should do the following: -Assess the woman's satisfaction with her contraceptive method and whether she has any concerns about method use. -Assess any changes in health status, including medications, that would change the appropriateness of the implant for safe and effective continued use based on U.S. MEC (e.g., category 3 and 4 conditions and characteristics). -Consider assessing weight changes and counseling women who are concerned about weight change perceived to be associated with their contraceptive method. Comments and Evidence Summary. A systematic review did not identify any evidence regarding whether a routine follow-up visit after initiating an implant improves correct or continued use (122).
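Because several of the preceding recommendations rely on a baseline weight and BMI measurement for monitoring change over time, a worked illustration may help. The following is a minimal sketch, not clinical software; the function names are hypothetical, inputs are assumed to be metric, and the >5% early-weight-gain threshold is taken from the DMPA follow-up evidence discussed later in this report.

```python
def bmi(weight_kg: float, height_m: float) -> float:
    """BMI = weight in kilograms divided by the square of height in meters."""
    return weight_kg / (height_m ** 2)

def early_weight_gainer(baseline_kg: float, current_kg: float,
                        threshold: float = 0.05) -> bool:
    """True if weight gain exceeds the given fraction of baseline body weight.

    The 5% default mirrors the definition of "early weight gainers"
    (>5% of baseline body weight within 6 months) used in the DMPA
    follow-up evidence summarized later in this report.
    """
    return (current_kg - baseline_kg) > threshold * baseline_kg

# Example: baseline 70 kg at 1.65 m; 74 kg at a later visit
print(round(bmi(70, 1.65), 1))      # 25.7
print(early_weight_gainer(70, 74))  # True: 4 kg gain exceeds 3.5 kg (5% of 70 kg)
```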
# Bleeding Irregularities (Including Amenorrhea) During Implant Use - Before implant insertion, provide counseling about potential changes in bleeding patterns during implant use. Unscheduled spotting or light bleeding is common with implant use, and some women experience amenorrhea. These bleeding changes are generally not harmful and might or might not decrease with continued implant use. Heavy or prolonged bleeding, unscheduled or menstrual, is uncommon during implant use. # Irregular Bleeding (Spotting, Light Bleeding, or Heavy or Prolonged Bleeding) - If clinically indicated, consider an underlying gynecological problem, such as interactions with other medications, an STD, pregnancy, or new pathologic uterine conditions (e.g., polyps or fibroids). If an underlying gynecological problem is found, treat the condition or refer for care. - If an underlying gynecologic problem is not found and the woman wants treatment, the following treatment options during days of bleeding can be considered: -NSAIDs for short-term treatment (5-7 days) -Hormonal treatment (if medically eligible) with low-dose COCs or estrogen for short-term treatment (10-20 days) - If irregular bleeding persists and the woman finds it unacceptable, counsel her on alternative contraceptive methods, and offer another method if it is desired. # Amenorrhea - Amenorrhea does not require any medical treatment. Provide reassurance. -If a woman's regular bleeding pattern changes abruptly to amenorrhea, consider ruling out pregnancy if clinically indicated. - If amenorrhea persists and the woman finds it unacceptable, counsel her on alternative contraceptive methods, and offer another method if it is desired. Comments and Evidence Summary. During contraceptive counseling and before insertion of the implant, information about common side effects, such as unscheduled spotting or light bleeding and amenorrhea, especially during the first year of use, should be discussed. A pooled analysis of data from 11 clinical trials indicates that a significant proportion of etonogestrel implant users had relatively little bleeding: 22% of women experienced amenorrhea and 34% experienced infrequent spotting, although 7% reported frequent bleeding and 18% reported prolonged bleeding (146). Unscheduled bleeding or amenorrhea is generally not harmful. Enhanced counseling about expected bleeding patterns and reassurance that bleeding irregularities are generally not harmful have been shown to reduce discontinuation in clinical trials with other hormonal contraceptives (i.e., DMPA) (124,125). A systematic review and four newly published studies examined several medications for the treatment of bleeding irregularities, primarily with levonorgestrel contraceptive implants (147)(148)(149)(150)(151). Two small studies found significant cessation of bleeding within 7 days of start of treatment among women taking oral celecoxib (200 mg) daily for 5 days or oral mefenamic acid (500 mg) 3 times daily for 5 days compared with placebo (149,150). Differences in bleeding cessation were not found among women with etonogestrel implants taking mifepristone but were found when women with the implants combined mifepristone with either ethinyl estradiol or doxycycline (151,152). Doxycycline alone or in combination with ethinyl estradiol did not improve bleeding cessation among etonogestrel implant users (151). Among LNG implant users, mifepristone reduced the number of bleeding or spotting days but only after 6 months of treatment (153).
Evidence also suggests that estrogen (154)(155)(156), daily COCs (154), LNG pills (155), tamoxifen (157), or tranexamic acid (158) can reduce the number of bleeding or spotting days during treatment among LNG implant users. In one small study, vitamin E was found to significantly reduce the mean number of bleeding days after the first treatment cycle; however, another larger study reported no significant differences in length of bleeding and spotting episodes with vitamin E treatment (159,160). Use of aspirin did not result in a significant difference in median length of bleeding or bleeding and spotting episodes after treatment (159). One study among implant users reported a reduction in number of bleeding days after initiating ibuprofen; however, another trial did not demonstrate any significant differences in the number of spotting and bleeding episodes with ibuprofen compared with placebo (148,155). # Injectables Progestin-only injectable contraceptives (DMPA, 150 mg intramuscularly or 104 mg subcutaneously) are available in the United States; the only difference between these two formulations is the route of administration. Approximately 6 out of 100 women will become pregnant in the first year of use of DMPA with typical use (14). DMPA is reversible and can be used by women of all ages, including adolescents. DMPA does not protect against STDs; consistent and correct use of male latex condoms reduces the risk for STDs, including HIV. # Initiation of Injectables Timing - The first DMPA injection can be given at any time if it is reasonably certain that the woman is not pregnant (Box 2). # Need for Back-Up Contraception - If DMPA is started within the first 7 days since menstrual bleeding started, no additional contraceptive protection is needed. - If DMPA is started >7 days since menstrual bleeding started, the woman needs to abstain from sexual intercourse or use additional contraceptive protection for the next 7 days. # Special Considerations Amenorrhea (Not Postpartum) - Timing: The first DMPA injection can be given at any time if it is reasonably certain that the woman is not pregnant (Box 2). - Need for back-up contraception: The woman needs to abstain from sexual intercourse or use additional contraceptive protection for the next 7 days. # Postpartum (Breastfeeding) # Postabortion (Spontaneous or Induced) - Timing: The first DMPA injection can be given within the first 7 days, including immediately after the abortion (U.S. MEC 1). - Need for back-up contraception: The woman needs to abstain from sexual intercourse or use additional contraceptive protection for the next 7 days unless the injection is given at the time of a surgical abortion. # Switching from Another Contraceptive Method - Timing: The first DMPA injection can be given immediately if it is reasonably certain that the woman is not pregnant (Box 2). Waiting for her next menstrual cycle is unnecessary. - Need for back-up contraception: If it has been >7 days since menstrual bleeding started, the woman needs to abstain from sexual intercourse or use additional contraceptive protection for the next 7 days. - Switching from an IUD: If the woman has had sexual intercourse since the start of her current menstrual cycle and it has been >5 days since menstrual bleeding started, theoretically, residual sperm might be in the genital tract, which could lead to fertilization if ovulation occurs.
A health care provider may consider any of the following options: -Advise the woman to retain the IUD for at least 7 days after the injection and return for IUD removal. -Advise the woman to abstain from sexual intercourse or use barrier contraception for 7 days before removing the IUD and switching to the new method. -If the woman cannot return for IUD removal and has not abstained from sexual intercourse or used barrier contraception for 7 days, advise the woman to use ECPs (with the exception of UPA) at the time of IUD removal. Comments and Evidence Summary. In situations in which the health care provider is uncertain whether the woman might be pregnant, the benefits of starting DMPA likely exceed any risk; therefore, starting DMPA should be considered at any time, with a follow-up pregnancy test in 2-4 weeks. If a woman needs to use additional contraceptive protection when switching to DMPA from another contraceptive method, consider continuing her previous method for 7 days after DMPA injection. A systematic review identified eight articles examining DMPA initiation on different days of the menstrual cycle (161). Evidence from two studies with small sample sizes indicated that DMPA injections given up to day 7 of the menstrual cycle inhibited ovulation; when DMPA was administered after day 7, ovulation occurred in some women. Cervical mucus was of poor quality (i.e., not favorable for sperm penetration) in 90% of women within 24 hours of the injection (Level of evidence: II-2, fair) (162)(163)(164). Studies found that use of another contraceptive method until DMPA could be initiated (bridging option) did not help women initiate DMPA and was associated with more unintended pregnancies than immediate receipt of DMPA (165-169) (Level of evidence: I to II-3, fair to poor, indirect). # Examinations and Tests Needed Before Initiation of an Injectable Among healthy women, no examinations or tests are needed before initiation of DMPA, although a baseline weight and BMI measurement might be useful to monitor DMPA users over time (Table 3). Women with known medical problems or other special conditions might need additional examinations or tests before being determined to be appropriate candidates for a particular method of contraception. U.S. MEC might be useful in such circumstances (5). Comments and Evidence Summary. Weight (BMI): Obese women can use (U.S. MEC 1) or generally can use (U.S. MEC 2) DMPA (5); therefore, screening for obesity is not necessary for the safe initiation of DMPA. However, measuring weight and calculating BMI at baseline might be helpful for monitoring any changes and counseling women who might be concerned about weight change perceived to be associated with their contraceptive method. (See guidance on follow-up for DMPA users for evidence on weight gain with DMPA use). Bimanual examination and cervical inspection: Pelvic examination is not necessary before initiation of DMPA because it does not facilitate detection of conditions for which DMPA would be unsafe. Although women with current breast cancer should not use DMPA (U.S. MEC 4), and women with severe hypertension, heart disease, vascular disease, or certain liver diseases generally should not use DMPA (U.S. MEC 3) (5), none of these conditions are likely to be detected by pelvic examination (145). A systematic review identified two case-control studies that compared delayed versus immediate pelvic examination before initiation of hormonal contraceptives, specifically oral contraceptives or DMPA (95).
No differences in risk factors for cervical neoplasia, incidence of STDs, incidence of abnormal Papanicolaou smears, or incidence of abnormal wet mounts were observed (Level of evidence: II-2, fair, direct). Blood pressure: Women with hypertension generally can use DMPA (U.S. MEC 2), with the exception of women with severe hypertension or vascular disease, who generally should not use DMPA (U.S. MEC 3) (5). Screening for hypertension before initiation of DMPA is not necessary because of the low prevalence of undiagnosed severe hypertension and the high likelihood that women with these conditions already would have had them diagnosed. A systematic review did not identify any evidence regarding outcomes among women who were screened versus not screened with a blood pressure measurement before initiation of progestin-only contraceptives (170). The prevalence of undiagnosed hypertension among women of reproductive age is low. During 2009-2012 among women aged 20-44 years in the United States, the prevalence of hypertension was 8.7% (84). During 1999-2008, the percentage of women aged 20-44 years with undiagnosed hypertension was 1.9% (85). Glucose: Although women with complicated diabetes generally should not use DMPA (U.S. MEC 3) (5), screening for diabetes before initiation of DMPA is not necessary because of the low prevalence of undiagnosed diabetes and the high likelihood that women with complicated diabetes would already have had the condition diagnosed. A systematic review did not identify any evidence regarding outcomes among women who were screened versus not screened with glucose measurement before initiation of hormonal contraceptives (57). The prevalence of diabetes among women of reproductive age is low. During 2009-2012 among women aged 20-44 years in the United States, the prevalence of diabetes was 3.3% (84). During 1999-2008, the percentage of women aged 20-44 years with undiagnosed diabetes was 0.5% (85). Although hormonal contraceptives can have some adverse effects on glucose metabolism in healthy and diabetic women, the overall clinical effect is minimal (171)(172)(173)(174)(175)(176)(177). Lipids: Screening for dyslipidemias is not necessary for the safe initiation of injectables because of the low prevalence of undiagnosed disease in women of reproductive age and the low likelihood of clinically significant changes with use of hormonal contraceptives. A systematic review did not identify any evidence regarding outcomes among women who were screened versus not screened with lipid measurement before initiation of hormonal contraceptives (57). During 2009-2012 among women aged 20-44 years in the United States, 7.6% had high cholesterol, defined as total serum cholesterol ≥240 mg/dL (84). During 1999-2008, the prevalence of undiagnosed hypercholesterolemia among women aged 20-44 years was approximately 2% (85). Studies have shown mixed results about the effects of hormonal methods on lipid levels among both healthy women and women with baseline lipid abnormalities, and the clinical significance of these changes is unclear (86)(87)(88)(89). Liver enzymes: Although women with certain liver diseases generally should not use DMPA (U.S. MEC 3) (5), screening for liver disease before initiation of DMPA is not necessary because of the low prevalence of these conditions and the high likelihood that women with liver disease already would have had the condition diagnosed. 
A systematic review did not identify any evidence regarding outcomes among women who were screened versus not screened with liver enzyme tests before initiation of hormonal contraceptives (57). In 2012, among U.S. women, the percentage with liver disease (not further specified) was 1.3% (90). In 2013, the incidence of acute hepatitis A, B, or C was ≤1 per 100,000 U.S. population (91). During 2002-2011, the incidence of liver carcinoma among U.S. women was approximately 3.7 per 100,000 population (92). Because estrogen and progestins are metabolized in the liver, the use of hormonal contraceptives among women with liver disease might, theoretically, be a concern. The use of hormonal contraceptives, specifically COCs and POPs, does not affect disease progression or severity in women with hepatitis, cirrhosis, or benign focal nodular hyperplasia (93,94), although evidence is limited, and no evidence exists for DMPA.
Footnotes to Table 3: Class A: essential and mandatory in all circumstances for safe and effective use of the contraceptive method. Class B: contributes substantially to safe and effective use, but implementation may be considered within the public health and/or service context; the risk of not performing an examination or test should be balanced against the benefits of making the contraceptive method available. Class C: does not contribute substantially to safe and effective use of the contraceptive method. † Weight (BMI) measurement is not needed to determine medical eligibility for any methods of contraception because all methods can be used (U.S. MEC 1) or generally can be used (U.S. MEC 2) among obese women (Box 1). However, measuring weight and calculating BMI at baseline might be helpful for monitoring any changes and counseling women who might be concerned about weight change perceived to be associated with their contraceptive method.
Clinical breast examination: Although women with current breast cancer should not use DMPA (U.S. MEC 4) (5), screening asymptomatic women with a clinical breast examination before initiating DMPA is not necessary because of the low prevalence of breast cancer among women of reproductive age. A systematic review did not identify any evidence regarding outcomes among women who were screened versus not screened with a clinical breast examination before initiation of hormonal contraceptives (95). The incidence of breast cancer among women of reproductive age in the United States is low. In 2012, the incidence of breast cancer among women aged 20-49 years was approximately 70.7 per 100,000 women (96). Other screening: Women with anemia, thrombogenic mutations, cervical intraepithelial neoplasia, cervical cancer, HIV infection, or other STDs can use (U.S. MEC 1) or generally can use (U.S. MEC 2) DMPA (5); therefore, screening for these conditions is not necessary for the safe initiation of DMPA. # Routine Follow-Up After Injectable Initiation These recommendations address when routine follow-up is recommended for safe and effective continued use of contraception for healthy women. The recommendations refer to general situations and might vary for different users and different situations. Specific populations who might benefit from more frequent follow-up visits include adolescents, those with certain medical conditions or characteristics, and those with multiple medical conditions. - Advise the woman to return at any time to discuss side effects or other problems, if she wants to change the method being used, and when it is time for reinjection.
No routine follow-up visit is required. - At other routine visits, health care providers seeing injectable users should do the following: -Assess the woman's satisfaction with her contraceptive method and whether she has any concerns about method use. -Assess any changes in health status, including medications, that would change the appropriateness of the injectable for safe and effective continued use based on U.S. MEC (e.g., category 3 and 4 conditions and characteristics). -Consider assessing weight changes and counseling women who are concerned about weight change perceived to be associated with their contraceptive method. Comments and Evidence Summary. Although no evidence exists regarding whether a routine follow-up visit after initiating DMPA improves correct or continued use, monitoring weight or BMI change over time is important for DMPA users. A systematic review identified a limited body of evidence that examined whether weight gain in the few months after DMPA initiation predicted future weight gain (123). Two studies found significant differences in weight gain or BMI at follow-up periods ranging from 12 to 36 months between early weight gainers (i.e., those who gained >5% of their baseline body weight within 6 months after initiation) and those who were not early weight gainers (178,179). The differences between groups were more pronounced at 18, 24, and 36 months than at 12 months. One study found that most adolescent DMPA users who had gained >5% of their baseline weight by 3 months gained even more weight by 12 months (180) (Level of evidence: II-2, fair, to II-3, fair, direct). # Timing of Repeat Injections Reinjection Interval - Provide repeat DMPA injections every 3 months (13 weeks). # Special Considerations Early Injection - The repeat DMPA injection can be given early when necessary. # Late Injection - The repeat DMPA injection can be given up to 2 weeks late (15 weeks from the last injection) without requiring additional contraceptive protection. - If the woman is >2 weeks late (>15 weeks from the last injection) for a repeat DMPA injection, she can have the injection if it is reasonably certain that she is not pregnant (Box 2). She needs to abstain from sexual intercourse or use additional contraceptive protection for the next 7 days. She might consider the use of emergency contraception (with the exception of UPA) if appropriate. Comments and Evidence Summary. No time limits exist for early injections; injections can be given when necessary (e.g., when a woman cannot return at the routine interval). WHO has extended the time that a woman can have a late reinjection (i.e., grace period) for DMPA use from 2 weeks to 4 weeks on the basis of data from one study showing low pregnancy rates through 4 weeks; however, the CDC expert group did not consider the data to be generalizable to the United States because a large proportion of women in the study were breastfeeding. Therefore, U.S. SPR recommends a grace period of 2 weeks. A systematic review identified 12 studies evaluating time to pregnancy or ovulation after the last injection of DMPA (181). Although pregnancy rates were low during the 2-week interval following the reinjection date and for 4 weeks following the reinjection date, data were sparse, and one study included a large proportion of breastfeeding women (182)(183)(184). 
Studies also indicated a wide variation in time to ovulation after the last DMPA injection, with the majority ranging from 15 to 49 weeks from the last injection (185)(186)(187)(188)(189)(190)(191)(192)(193). # Bleeding Irregularities (Including Amenorrhea) During DMPA Use # Unscheduled Spotting or Light Bleeding - If clinically indicated, consider an underlying gynecological problem, such as interactions with other medications, an STD, pregnancy, or new pathologic uterine conditions (e.g., polyps or fibroids). If an underlying gynecological problem is found, treat the condition or refer for care. - If an underlying gynecologic problem is not found and the woman wants treatment, the following treatment option during days of bleeding can be considered: -NSAIDs for short-term treatment (5-7 days) - If unscheduled spotting or light bleeding persists and the woman finds it unacceptable, counsel her on alternative contraceptive methods, and offer another method if it is desired. # Heavy or Prolonged Bleeding - If clinically indicated, consider an underlying gynecological problem, such as interactions with other medications, an STD, pregnancy, or new pathologic uterine conditions (e.g., polyps or fibroids). If an underlying gynecologic problem is identified, treat the condition or refer for care. - If an underlying gynecologic problem is not found and the woman wants treatment, the following treatment options during days of bleeding can be considered: -NSAIDs for short-term treatment (5-7 days) -Hormonal treatment (if medically eligible) with low-dose COCs or estrogen for short-term treatment (10-20 days) - If heavy or prolonged bleeding persists and the woman finds it unacceptable, counsel her on alternative contraceptive methods, and offer another method if it is desired. # Amenorrhea - Amenorrhea does not require any medical treatment. Provide reassurance. -If a woman's regular bleeding pattern changes abruptly to amenorrhea, consider ruling out pregnancy if clinically indicated. - If amenorrhea persists and the woman finds it unacceptable, counsel her on alternative contraceptive methods, and offer another method if it is desired. Comments and Evidence Summary. During contraceptive counseling and before initiation of DMPA, information about common side effects such as irregular bleeding should be discussed. Unscheduled bleeding or spotting is common with DMPA use (194). In addition, amenorrhea is common after ≥1 year of continuous use (194,195). These bleeding irregularities are generally not harmful. Enhanced counseling among DMPA users detailing expected bleeding patterns and reassurance that these irregularities generally are not harmful have been shown to reduce DMPA discontinuation in clinical trials (124,125). A systematic review, as well as two additional studies, examined the treatment of bleeding irregularities during DMPA use (195)(196)(197). Two small studies found significant cessation of bleeding within 7 days of starting treatment among women taking valdecoxib for 5 days or mefenamic acid for 5 days compared with placebo (198,199). Treatment with ethinyl estradiol was found to stop bleeding better than placebo during the treatment period, although rates of discontinuation were high and safety outcomes were not examined (200). In one small study among DMPA users who had been experiencing amenorrhea for 2 months, treatment with COCs was found to alleviate amenorrhea better than placebo (201). No studies examined the effects of aspirin on bleeding irregularities among DMPA users.
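To restate the injectables timing guidance in operational terms: the reinjection recommendations above (a routine 13-week interval, a 2-week grace period through 15 weeks, and pregnancy assessment plus 7 days of back-up protection beyond that) amount to a simple decision rule. The following is a minimal illustrative sketch, not clinical software; the function name and returned messages are hypothetical.

```python
from datetime import date, timedelta

ROUTINE_INTERVAL = timedelta(weeks=13)  # repeat DMPA injections every 3 months (13 weeks)
GRACE_LIMIT = timedelta(weeks=15)       # up to 2 weeks late (15 weeks) needs no back-up

def dmpa_reinjection_guidance(last_injection: date, visit: date) -> str:
    """Map time since the last DMPA injection to the timing guidance above."""
    elapsed = visit - last_injection
    if elapsed <= ROUTINE_INTERVAL:
        return "On time; early injection is acceptable when necessary."
    if elapsed <= GRACE_LIMIT:
        return ("Late but within the 2-week grace period: reinject without "
                "additional contraceptive protection.")
    return (">15 weeks since last injection: reinject only if reasonably certain "
            "she is not pregnant; advise abstinence or back-up contraception for "
            "7 days, and consider emergency contraception (except UPA) if appropriate.")

# Example: last injection on January 6; visit on April 28 (about 16 weeks later)
print(dmpa_reinjection_guidance(date(2016, 1, 6), date(2016, 4, 28)))
```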
# Combined Hormonal Contraceptives Combined hormonal contraceptives contain both estrogen and a progestin and include 1) COCs (various formulations), 2) a transdermal contraceptive patch (which releases 150 µg of norelgestromin and 20 µg ethinyl estradiol daily), and 3) a vaginal contraceptive ring (which releases 120 µg etonogestrel and 15 µg ethinyl estradiol daily). Approximately 9 out of 100 women become pregnant in the first year of use with combined hormonal contraceptives with typical use (14). These methods are reversible and can be used by women of all ages. Combined hormonal contraceptives are generally used for 21-24 consecutive days, followed by 4-7 hormone-free days (either no use or placebo pills). These methods are sometimes used for an extended period with infrequent or no hormone-free days. Combined hormonal contraceptives do not protect against STDs; consistent and correct use of male latex condoms reduces the risk for STDs, including HIV. # Initiation of Combined Hormonal Contraceptives Timing - Combined hormonal contraceptives can be initiated at any time if it is reasonably certain that the woman is not pregnant (Box 2). # Need for Back-Up Contraception - If combined hormonal contraceptives are started within the first 5 days since menstrual bleeding started, no additional contraceptive protection is needed. - If combined hormonal contraceptives are started >5 days since menstrual bleeding started, the woman needs to abstain from sexual intercourse or use additional contraceptive protection for the next 7 days. # Special Considerations Amenorrhea (Not Postpartum) - Timing: Combined hormonal contraceptives can be started at any time if it is reasonably certain that the woman is not pregnant (Box 2). - Need for back-up contraception: The woman needs to abstain from sexual intercourse or use additional contraceptive protection for the next 7 days. # Postpartum (Breastfeeding) - Timing: Combined hormonal contraceptives can be started when the woman is medically eligible to use the method (5) and if it is reasonably certain that she is not pregnant. # Postabortion (Spontaneous or Induced) - Timing: Combined hormonal contraceptives can be started within the first 7 days following first-trimester or second-trimester abortion, including immediately postabortion (U.S. MEC 1). - Need for back-up contraception: The woman needs to abstain from sexual intercourse or use additional contraceptive protection for the next 7 days unless combined hormonal contraceptives are started at the time of a surgical abortion. # Switching from Another Contraceptive Method - Timing: Combined hormonal contraceptives can be started immediately if it is reasonably certain that the woman is not pregnant (Box 2). Waiting for her next menstrual cycle is unnecessary. - Need for back-up contraception: If it has been >5 days since menstrual bleeding started, she needs to abstain from sexual intercourse or use additional contraceptive protection for the next 7 days. - Switching from an IUD: If the woman has had sexual intercourse since the start of her current menstrual cycle and it has been >5 days since menstrual bleeding started, theoretically, residual sperm might be in the genital tract, which could lead to fertilization if ovulation occurs.
A health care provider may consider any of the following options: -Advise the woman to retain the IUD for at least 7 days after combined hormonal contraceptives are initiated and return for IUD removal. -Advise the woman to abstain from sexual intercourse or use barrier contraception for 7 days before removing the IUD and switching to the new method. -If the woman cannot return for IUD removal and has not abstained from sexual intercourse or used barrier contraception for 7 days, advise the woman to use ECPs at the time of IUD removal. Combined hormonal contraceptives can be started immediately after use of ECPs (with the exception of UPA). Combined hormonal contraceptives can be started no sooner than 5 days after use of UPA. Comments and Evidence Summary. In situations in which the health care provider is uncertain whether the woman might be pregnant, the benefits of starting combined hormonal contraceptives likely exceed any risk; therefore, starting combined hormonal contraceptives should be considered at any time, with a follow-up pregnancy test in 2-4 weeks. If a woman needs to use additional contraceptive protection when switching to combined hormonal contraceptives from another contraceptive method, consider continuing her previous method for 7 days after starting combined hormonal contraceptives. A systematic review of 18 studies examined the effects of starting combined hormonal contraceptives on different days of the menstrual cycle (202). Overall, the evidence suggested that pregnancy rates did not differ by the timing of combined hormonal contraceptive initiation (169,(203)(204)(205) (Level of evidence: I to II-3, fair, indirect). The more follicular activity that occurred before starting COCs, the more likely ovulation was to occur; however, no ovulations occurred when COCs were started at a follicle diameter of 10 mm (mean cycle day 7.6) or when the ring was started at 13 mm (median cycle day 11) (206-215) (Level of evidence: I to II-3, fair, indirect). Bleeding patterns and other side effects did not vary with the timing of combined hormonal contraceptive initiation (204,205,(216)(217)(218)(219)(220) (Level of evidence: I to II-2, good to poor, direct). Although continuation rates of combined hormonal contraceptives were initially improved by the "quick start" approach (i.e., starting on the day of the visit), the advantage disappeared over time (203,204,(216)(217)(218)(219)(220)(221) (Level of evidence: I to II-2, good to poor, direct). # Examinations and Tests Needed Before Initiation of Combined Hormonal Contraceptives Among healthy women, few examinations or tests are needed before initiation of combined hormonal contraceptives (Table 4). Blood pressure should be measured before initiation of combined hormonal contraceptives. Baseline weight and BMI measurements might be useful for monitoring combined hormonal contraceptive users over time. Women with known medical problems or other special conditions might need additional examinations or tests before being determined to be appropriate candidates for a particular method of contraception. U.S. MEC might be useful in such circumstances (5). Comments and Evidence Summary. Blood pressure: Women who have more severe hypertension (systolic pressure of ≥160 mm Hg or diastolic pressure of ≥100 mm Hg) or vascular disease should not use combined hormonal contraceptives (U.S.
MEC 4), and women who have less severe hypertension (systolic pressure of 140-159 mm Hg or diastolic pressure of 90-99 mm Hg) or adequately controlled hypertension generally should not use combined hormonal contraceptives (U.S. MEC 3) (5). Therefore, blood pressure should be evaluated before initiating combined hormonal contraceptives. In instances in which blood pressure cannot be measured by a provider, blood pressure measured in other settings can be reported by the woman to her provider. Evidence suggests that cardiovascular outcomes are worse among women who did not have their blood pressure measured before initiating COCs. A systematic review identified six articles from three studies that reported cardiovascular outcomes among women who had blood pressure measurements and women who did not have blood pressure measurements before initiating COCs (170). Three case-control studies showed that women who did not have blood pressure measurements before initiating COCs had a higher risk for acute myocardial infarction than women who did have blood pressure measurements (222)(223)(224). Two case-control studies showed that women who did not have blood pressure measurements before initiating COCs had a higher risk for ischemic stroke than women who did have blood pressure measurements (225,226). One case-control study showed no difference in the risk for hemorrhagic stroke among women who initiated COCs regardless of whether their blood pressure was measured (227). Studies that examined hormonal contraceptive methods other than COCs were not identified (Level of evidence: II-2, fair, direct). Weight (BMI): Obese women generally can use combined hormonal contraceptives (U.S. MEC 2) (5); therefore, screening for obesity is not necessary for the safe initiation of combined hormonal contraceptives. However, measuring weight and calculating BMI at baseline might be helpful for monitoring any changes and counseling women who might be concerned about weight change perceived to be associated with their contraceptive method. Bimanual examination and cervical inspection: Pelvic examination is not necessary before initiation of combined hormonal contraceptives because it does not facilitate detection of conditions for which hormonal contraceptives would be unsafe. Women with certain conditions such as current breast cancer, severe hypertension or vascular disease, heart disease, migraine headaches with aura, and certain liver diseases, as well as women aged ≥35 years who smoke ≥15 cigarettes per day, should not use (U.S. MEC 4) or generally should not use (U.S. MEC 3) combined hormonal contraceptives (5); however, none of these conditions are likely to be detected by pelvic examination (145). A systematic review identified two case-control studies that compared delayed and immediate pelvic examination before initiation of hormonal contraceptives, specifically oral contraceptives or DMPA (95). No differences in risk factors for cervical neoplasia, incidence of STDs, incidence of abnormal Papanicolaou smears, or incidence of abnormal wet mounts were found (Level of evidence: II-2, fair, direct). Glucose: Although women with complicated diabetes should not use (U.S. MEC 4) or generally should not use (U.S.
MEC 3) combined hormonal contraceptives, depending on the severity of the condition (5), screening for diabetes before initiation of hormonal contraceptives is not necessary because of the low prevalence of undiagnosed diabetes and the high likelihood that women with complicated diabetes already would have had the condition diagnosed. A systematic review did not identify any evidence regarding outcomes among women who were screened versus not screened with glucose measurement before initiation of hormonal contraceptives (57). The prevalence of diabetes among women of reproductive age is low. During 2009-2012 among women aged 20-44 years in the United States, the prevalence of diabetes was 3.3% (84). During 1999-2008, the percentage of women aged 20-44 years with undiagnosed diabetes was 0.5% (85). Although hormonal contraceptives can have some adverse effects on glucose metabolism in healthy and diabetic women, the overall clinical effect is minimal (171)(172)(173)(174)(175)(176)(177).

Lipids: Screening for dyslipidemias is not necessary for the safe initiation of combined hormonal contraceptives because of the low prevalence of undiagnosed disease in women of reproductive age and the low likelihood of clinically significant changes with use of hormonal contraceptives. A systematic review did not identify any evidence regarding outcomes among women who were screened versus not screened with lipid measurement before initiation of hormonal contraceptives (57). During 2009-2012 among women aged 20-44 years in the United States, 7.6% had high cholesterol, defined as total serum cholesterol ≥240 mg/dL (84). During 1999-2008, the prevalence of undiagnosed hypercholesterolemia among women aged 20-44 years was approximately 2% (85). A systematic review identified few studies, all of poor quality, that suggest that women with known dyslipidemias using combined hormonal contraceptives might be at increased risk for myocardial infarction, cerebrovascular accident, or venous thromboembolism compared with women without dyslipidemias; no studies were identified that examined risk for pancreatitis among women with known dyslipidemias using combined hormonal contraceptives (89). Studies have shown mixed results regarding the effects of hormonal contraceptives on lipid levels among both healthy women and women with baseline lipid abnormalities, and the clinical significance of these changes is unclear (86)(87)(88)(89).

Table 4 footnotes:
* Class A: essential and mandatory in all circumstances for safe and effective use of the contraceptive method. Class B: contributes substantially to safe and effective use, but implementation may be considered within the public health and/or service context; the risk of not performing an examination or test should be balanced against the benefits of making the contraceptive method available. Class C: does not contribute substantially to safe and effective use of the contraceptive method.
† In instances in which blood pressure cannot be measured by a provider, blood pressure measured in other settings can be reported by the woman to her provider.
§ Weight (BMI) measurement is not needed to determine medical eligibility for any methods of contraception because all methods can be used (U.S. MEC 1) or generally can be used (U.S. MEC 2) among obese women (Box 1). However, measuring weight and calculating BMI at baseline might be helpful for monitoring any changes and counseling women who might be concerned about weight change perceived to be associated with their contraceptive method.
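The blood pressure screening rules above amount to a small decision table: severe hypertension or vascular disease is U.S. MEC 4, and less severe or adequately controlled hypertension is U.S. MEC 3. The following Python sketch is purely illustrative and is not part of the recommendations; the function name is hypothetical, and it deliberately simplifies to measured pressure alone (it does not model vascular disease or adequately controlled hypertension, which also affect the category).

```python
# Illustrative sketch only; not a clinical tool. Encodes the U.S. MEC
# blood pressure categories for combined hormonal contraceptives (CHCs)
# summarized above. Function name is hypothetical; vascular disease and
# adequately controlled hypertension (also U.S. MEC 3) are not modeled.

def chc_mec_category_from_bp(systolic: int, diastolic: int) -> int:
    """Return the U.S. MEC category for CHC use from blood pressure alone."""
    if systolic >= 160 or diastolic >= 100:
        return 4  # should not use CHCs
    if systolic >= 140 or diastolic >= 90:
        return 3  # generally should not use CHCs
    return 1      # no restriction on the basis of blood pressure alone

assert chc_mec_category_from_bp(165, 95) == 4
assert chc_mec_category_from_bp(150, 85) == 3
assert chc_mec_category_from_bp(118, 76) == 1
```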
Liver enzymes: Although women with certain liver diseases should not use (U.S. MEC 4) or generally should not use (U.S. MEC 3) combined hormonal contraceptives (5), screening for liver disease before initiation of combined hormonal contraceptives is not necessary because of the low prevalence of these conditions and the high likelihood that women with liver disease already would have had the condition diagnosed. A systematic review did not identify any evidence regarding outcomes among women who were screened versus not screened with liver enzyme tests before initiation of hormonal contraceptives (57). In 2012, among U.S. women, the percentage with liver disease (not further specified) was 1.3% (90). In 2013, the incidence of acute hepatitis A, B, or C was ≤1 per 100,000 U.S. population (91). During 2002-2011, the incidence of liver carcinoma among U.S. women was approximately 3.7 per 100,000 population (92). Because estrogen and progestins are metabolized in the liver, the use of hormonal contraceptives among women with liver disease might, theoretically, be a concern. The use of hormonal contraceptives, specifically COCs and POPs, does not affect disease progression or severity in women with hepatitis, cirrhosis, or benign focal nodular hyperplasia (93,94), although evidence is limited; no evidence exists for other types of combined hormonal contraceptives. Thrombogenic mutations: Women with thrombogenic mutations should not use combined hormonal contraceptives (U.S. MEC 4) (5) because of the increased risk for venous thromboembolism (228). However, studies have shown that universal screening for thrombogenic mutations before initiating COCs is not cost-effective because of the rarity of the conditions and the high cost of screening (229)(230)(231). Clinical breast examination: Although women with current breast cancer should not use combined hormonal contraceptives (U.S. MEC 4) (5), screening asymptomatic women with a clinical breast examination before initiating combined hormonal contraceptives is not necessary because of the low prevalence of breast cancer among women of reproductive age. A systematic review did not identify any evidence regarding outcomes among women who were screened versus not screened with a breast examination before initiation of hormonal contraceptives (95). The incidence of breast cancer among women of reproductive age in the United States is low. In 2012, the incidence of breast cancer among women aged 20-49 years was approximately 70.7 per 100,000 women (96). Other screening: Women with anemia, cervical intraepithelial neoplasia, cervical cancer, HIV infection, or other STDs can use (U.S. MEC 1) or generally can use (U.S. MEC 2) combined hormonal contraceptives (5); therefore, screening for these conditions is not necessary for the safe initiation of combined hormonal contraceptives. # Number of Pill Packs that Should Be Provided at Initial and Return Visits - At the initial and return visits, provide or prescribe up to a 1-year supply of COCs (e.g., 13 28-day pill packs), depending on the woman's preferences and anticipated use. - A woman should be able to obtain COCs easily in the amount and at the time she needs them. Comments and Evidence Summary. The more pill packs given up to 13 cycles, the higher the continuation rates. Restricting the number of pill packs distributed or prescribed can result in unwanted discontinuation of the method and increased risk for pregnancy. 
A systematic review of the evidence suggested that providing a greater number of pill packs was associated with increased continuation (232). Studies that compared provision of one versus 12 packs, one versus 12 or 13 packs, or three versus seven packs found increased continuation of pill use among women provided with more pill packs (233)(234)(235). However, one study found no difference in continuation when patients were provided one and then three packs versus four packs all at once (236). In addition to continuation, a greater number of pill packs provided was associated with fewer pregnancy tests, fewer pregnancies, and lower cost per client. However, a greater number of pill packs (i.e., 13 packs versus three packs) also was associated with increased pill wastage in one study (234) (Level of evidence: I to II-2, fair, direct).

# Routine Follow-Up After Combined Hormonal Contraceptive Initiation

These recommendations address when routine follow-up is recommended for safe and effective continued use of contraception for healthy women. The recommendations refer to general situations and might vary for different users and different situations. Specific populations who might benefit from more frequent follow-up visits include adolescents, those with certain medical conditions or characteristics, and those with multiple medical conditions.
- Advise the woman to return at any time to discuss side effects or other problems or if she wants to change the method being used. No routine follow-up visit is required.
- At other routine visits, health care providers seeing combined hormonal contraceptive users should do the following:
  - Assess the woman's satisfaction with her contraceptive method and whether she has any concerns about method use.
  - Assess any changes in health status, including medications, that would change the appropriateness of combined hormonal contraceptives for safe and effective continued use based on U.S. MEC (e.g., category 3 and 4 conditions and characteristics).
  - Assess blood pressure.
  - Consider assessing weight changes and counseling women who are concerned about weight change perceived to be associated with their contraceptive method.

Comments and Evidence Summary. No evidence exists regarding whether a routine follow-up visit after initiating combined hormonal contraceptives improves correct or continued use. Monitoring blood pressure is important for combined hormonal contraceptive users. Health care providers might consider recommending that women obtain blood pressure measurements in other settings. A systematic review identified five studies that examined the incidence of hypertension among women who began using a COC versus those who started a nonhormonal method of contraception or a placebo (123). Few women developed hypertension after initiating COCs, and studies examining increases in blood pressure after COC initiation found mixed results. No studies were identified that examined changes in blood pressure among patch or vaginal ring users (Level of evidence: I, fair, to II-2, fair, indirect).

# Late or Missed Doses and Side Effects from Combined Hormonal Contraceptive Use

For the following recommendations, a dose is considered late when <24 hours have elapsed since the dose should have been taken. A dose is considered missed if ≥24 hours have elapsed since the dose should have been taken. For example, if a COC pill was supposed to have been taken on Monday at 9:00 a.m.
and is taken at 11:00 a.m., the pill is late; however, by Tuesday morning at 11:00 a.m., Monday's 9:00 a.m. pill has been missed and Tuesday's 9:00 a.m. pill is late. For COCs, the recommendations only apply to late or missed hormonally active pills and not to placebo pills. Recommendations are provided for late or missed pills (Figure 2), the patch (Figure 3), and the ring (Figure 4).

Comments and Evidence Summary. Inconsistent or incorrect use of combined hormonal contraceptives is a major cause of combined hormonal contraceptive failure. Extending the hormone-free interval is considered to be a particularly risky time to miss combined hormonal contraceptives. Seven days of continuous combined hormonal contraceptive use is deemed necessary to reliably prevent ovulation. The recommendations reflect a balance between simplicity and precision of science. Women who frequently miss COCs or experience other usage errors with the combined hormonal patch or the combined vaginal ring should consider an alternative contraceptive method that is less dependent on the user to be effective (e.g., IUD, implant, or injectable).

A systematic review identified 36 studies that examined measures of contraceptive effectiveness of combined hormonal contraceptives during cycles with extended hormone-free intervals, shortened hormone-free intervals, or deliberate nonadherence on days not adjacent to the hormone-free interval (237). Most of the studies examined COCs (215, …), two examined the combined hormonal patch (259,266), and six examined the combined vaginal ring (211,(267)(268)(269)(270)(271). No direct evidence on the effect of missed pills on the risk for pregnancy was found. Studies of women deliberately extending the hormone-free interval up to 14 days found wide variability in the amount of follicular development and occurrence of ovulation (241,244,246,247,249,250,(252)(253)(254)(255); in general, the risk for ovulation was low, and among women who did ovulate, cycles were usually abnormal. In studies of women who deliberately missed pills on various days during the cycle not adjacent to the hormone-free interval, ovulation occurred infrequently (239,(245)(246)(247)255,256,258,259). Studies comparing 7-day hormone-free intervals with shorter hormone-free intervals found lower rates of pregnancy (238,242,251,257) and significantly greater suppression of ovulation (240,250,(261)(262)(263)265) among women with shorter intervals in all but one study (260), which found no difference. Two studies that compared 30-µg ethinyl estradiol pills with 20-µg ethinyl estradiol pills showed more follicular activity when 20-µg ethinyl estradiol pills were missed (241,244). In studies examining the combined vaginal ring, three studies found that nondeliberate extension of the hormone-free interval for 24 to <48 hours from the scheduled period did not increase the risk for pregnancy (267,268,270); one study found that ring insertion after a deliberately extended hormone-free interval that allowed a 13-mm follicle to develop interrupted ovarian function and further follicular growth (211); and one study found that inhibition of ovulation was maintained after deliberately forgetting to remove the ring for up to 2 weeks after normal ring use (271).
In studies examining the combined hormonal patch, one study found that missing 1-3 consecutive days before patch replacement (either wearing one patch 3 days longer before replacement or going 3 days without a patch before replacing the next patch) on days not adjacent to the patch-free interval resulted in little follicular activity and low risk for ovulation (259), and one pharmacokinetic study found that serum levels of ethinyl estradiol and the progestin norelgestromin remained within reference ranges after extending patch wear for 3 days (266). No studies were found on extending the patch-free interval. In studies that provide indirect evidence on the effects of missed combined hormonal contraception on surrogate measures of pregnancy, how differences in surrogate measures correspond to pregnancy risk is unclear (Level of evidence: I, good, indirect to II-3, poor, direct).

# FIGURE 2. Recommended actions after late or missed combined oral contraceptives

If one hormonal pill is late (<24 hours since a pill should have been taken) or one hormonal pill has been missed (24 to <48 hours since a pill should have been taken):
- Take the late or missed pill as soon as possible.
- Continue taking the remaining pills at the usual time (even if it means taking two pills on the same day).
- No additional contraceptive protection is needed.
- Emergency contraception is not usually needed but can be considered (with the exception of UPA) if hormonal pills were missed earlier in the cycle or in the last week of the previous cycle.

If two or more consecutive hormonal pills have been missed (≥48 hours since a pill should have been taken):
- Take the most recent missed pill as soon as possible. (Any other missed pills should be discarded.)
- Continue taking the remaining pills at the usual time (even if it means taking two pills on the same day).
- Use back-up contraception (e.g., condoms) or avoid sexual intercourse until hormonal pills have been taken for 7 consecutive days.
- If pills were missed in the last week of hormonal pills (e.g., days 15-21 for 28-day pill packs), omit the hormone-free interval by finishing the hormonal pills in the current pack and starting a new pack the next day; if unable to start a new pack immediately, use back-up contraception (e.g., condoms) or avoid sexual intercourse until hormonal pills from a new pack have been taken for 7 consecutive days.
- Emergency contraception should be considered (with the exception of UPA) if hormonal pills were missed during the first week and unprotected sexual intercourse occurred in the previous 5 days.
- Emergency contraception may also be considered (with the exception of UPA) at other times as appropriate.

# Vomiting or Severe Diarrhea While Using COCs

Certain steps should be taken by women who experience vomiting or severe diarrhea while using COCs (Figure 5).

Comments and Evidence Summary. Theoretically, the contraceptive effectiveness of COCs might be decreased because of vomiting or severe diarrhea. Because of the lack of evidence that addresses vomiting or severe diarrhea while using COCs, these recommendations are based on the recommendations for missed COCs. No evidence was found on the effects of vomiting or diarrhea on measures of contraceptive effectiveness including pregnancy, follicular development, hormone levels, or cervical mucus quality.
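The late/missed definitions and the actions in Figure 2 reduce to a simple classification on hours elapsed since the scheduled dose. The sketch below is illustrative only (names are hypothetical, and it omits the hormone-free interval and emergency contraception caveats carried by the full figure); it mirrors the worked Monday/Tuesday example above.

```python
# Illustrative sketch of the timing definitions and Figure 2 actions above.
# Hypothetical names; not a clinical tool. The full figure also covers the
# hormone-free interval and emergency contraception considerations.

def coc_dose_status(hours_since_due: float) -> str:
    """<24 h = late; 24 to <48 h = one missed pill; >=48 h = two or more missed."""
    if hours_since_due < 24:
        return "late"
    return "one_missed" if hours_since_due < 48 else "two_or_more_missed"

ACTIONS = {
    "late": "Take the pill now; continue remaining pills at the usual time; "
            "no additional contraceptive protection is needed.",
    "one_missed": "Take the missed pill now (two pills in one day is fine); "
                  "no additional contraceptive protection is needed.",
    "two_or_more_missed": "Take the most recent missed pill, discard the rest, "
                          "and use back-up contraception or abstain until pills "
                          "have been taken for 7 consecutive days.",
}

# Monday 9:00 a.m. pill taken Tuesday 11:00 a.m. -> 26 hours -> one missed pill.
print(ACTIONS[coc_dose_status(26.0)])
```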
# FIGURE 4. Recommended actions after delayed insertion or reinsertion* with combined vaginal ring

Abbreviation: UPA = ulipristal acetate.
* If removal takes place but the woman is unsure of how long the ring has been removed, consider the ring to have been removed for ≥48 hours since a ring should have been inserted or reinserted.

Delayed insertion of a new ring or delayed reinsertion of a current ring for <48 hours since a ring should have been inserted:
- Insert the ring as soon as possible.
- Keep the ring in until the scheduled ring removal day.
- No additional contraceptive protection is needed.
- Emergency contraception is not usually needed but can be considered (with the exception of UPA) if delayed insertion or reinsertion occurred earlier in the cycle or in the last week of the previous cycle.

Delayed insertion of a new ring or delayed reinsertion for ≥48 hours since a ring should have been inserted:
- Insert the ring as soon as possible.
- Keep the ring in until the scheduled ring removal day.
- Use back-up contraception (e.g., condoms) or avoid sexual intercourse until a ring has been worn for 7 consecutive days.
- Emergency contraception may also be considered (with the exception of UPA) at other times as appropriate.

# Unscheduled Bleeding with Extended or Continuous Use of Combined Hormonal Contraceptives

- Before initiation of combined hormonal contraceptives, provide counseling about potential changes in bleeding patterns during extended or continuous combined hormonal contraceptive use. (Extended contraceptive use is defined as a planned hormone-free interval after at least two contiguous cycles. Continuous contraceptive use is defined as uninterrupted use of hormonal contraception without a hormone-free interval.) (272)
- Unscheduled spotting or bleeding is common during the first 3-6 months of extended or continuous combined hormonal contraceptive use. It is generally not harmful and decreases with continued combined hormonal contraceptive use.
- If clinically indicated, consider an underlying gynecological problem, such as inconsistent use, interactions with other medications, cigarette smoking, an STD, pregnancy, or new pathologic uterine conditions (e.g., polyps or fibroids). If an underlying gynecological problem is found, treat the condition or refer for care.
- If an underlying gynecological problem is not found and the woman wants treatment, the following treatment option can be considered: advise the woman to discontinue combined hormonal contraceptive use (i.e., a hormone-free interval) for 3-4 consecutive days; a hormone-free interval is not recommended during the first 21 days of using the continuous or extended combined hormonal contraceptive method. A hormone-free interval also is not recommended more than once per month because contraceptive effectiveness might be reduced.
- If unscheduled spotting or bleeding persists and the woman finds it unacceptable, counsel her on alternative contraceptive methods, and offer another method if it is desired.

Comments and Evidence Summary. During contraceptive counseling and before initiating extended or continuous combined hormonal contraceptives, information about common side effects such as unscheduled spotting or bleeding, especially during the first 3-6 months of use, should be discussed (273). These bleeding irregularities are generally not harmful and usually improve with persistent use of the hormonal method.
To avoid unscheduled spotting or bleeding, counseling should emphasize the importance of correct use and timing; for users of contraceptive pills, emphasize consistent pill use. Enhanced counseling about expected bleeding patterns and reassurance that bleeding irregularities are generally not harmful has been shown to reduce method discontinuation in clinical trials with DMPA (124,125,274). A systematic review identified three studies with small study populations that addressed treatments for unscheduled bleeding among women using extended or continuous combined hormonal contraceptives (275). In two separate randomized clinical trials in which women were taking either contraceptive pills or using the contraceptive ring continuously for 168 days, women assigned to a hormone-free interval of 3 or 4 days reported improved bleeding. Although they noted an initial increase in flow, this was followed by an abrupt decrease 7-8 days later, with eventual cessation of flow 11-12 days later. These findings were compared with women who continued to use their method without a hormone-free interval, in which a greater proportion reported either treatment failure or fewer days of amenorrhea (276,277). In another randomized trial of 66 women with unscheduled bleeding among women using 84 days of hormonally active contraceptive pills, oral doxycycline (100 mg twice daily) initiated the first day of bleeding and taken for 5 days did not result in any improvement in bleeding compared with placebo (278) (Level of evidence: I, fair, direct).

# FIGURE 5. Recommended actions for vomiting or diarrhea while using combined oral contraceptives

Vomiting or diarrhea (for any reason, for any duration) that occurs within 24 hours after taking a hormonal pill:
- Taking another hormonal pill (redose) is unnecessary.
- Continue taking pills daily at the usual time (if possible, despite discomfort).
- No additional contraceptive protection is needed.
- Emergency contraception is not usually needed but can be considered (with the exception of UPA) as appropriate.

Vomiting or diarrhea, for any reason, continuing for 24 to <48 hours after taking any hormonal pill:
- Continue taking pills daily at the usual time (if possible, despite discomfort).
- No additional contraceptive protection is needed.
- Emergency contraception is not usually needed but can be considered (with the exception of UPA) as appropriate.

Vomiting or diarrhea, for any reason, continuing for ≥48 hours after taking any hormonal pill:
- Continue taking pills daily at the usual time (if possible, despite discomfort).
- Use back-up contraception (e.g., condoms) or avoid sexual intercourse until hormonal pills have been taken for 7 consecutive days after vomiting or diarrhea has resolved.

# Progestin-Only Pills

POPs contain only a progestin and no estrogen and are available in the United States. Approximately 9 out of 100 women become pregnant in the first year of typical use of POPs (14). POPs are reversible and can be used by women of all ages. POPs do not protect against STDs; consistent and correct use of male latex condoms reduces the risk for STDs, including HIV.

# Initiation of POPs

Timing
- POPs can be started at any time if it is reasonably certain that the woman is not pregnant (Box 2).

# Need for Back-Up Contraception

- If POPs are started within the first 5 days since menstrual bleeding started, no additional contraceptive protection is needed.
- If POPs are started >5 days since menstrual bleeding started, the woman needs to abstain from sexual intercourse or use additional contraceptive protection for the next 2 days.

# Special Considerations

Amenorrhea (Not Postpartum)
- Timing: POPs can be started at any time if it is reasonably certain that the woman is not pregnant (Box 2).
- Need for back-up contraception: The woman needs to abstain from sexual intercourse or use additional contraceptive protection for the next 2 days.

# Postpartum (Breastfeeding)

- Timing: POPs can be started at any time, including immediately postpartum (U.S. MEC 2 if <1 month postpartum; U.S. MEC 1 if ≥1 month postpartum), if it is reasonably certain that the woman is not pregnant (Box 2).
- Need for back-up contraception: If it has been >5 days since menstrual bleeding started, she needs to abstain from sexual intercourse or use additional contraceptive protection for the next 2 days.

# Postpartum (Not Breastfeeding)

- Timing: POPs can be started at any time, including immediately postpartum (U.S. MEC 1), if it is reasonably certain that the woman is not pregnant (Box 2).
- Need for back-up contraception: If it has been >5 days since menstrual bleeding started, she needs to abstain from sexual intercourse or use additional contraceptive protection for the next 2 days.

# Postabortion (Spontaneous or Induced)

- Timing: POPs can be started within the first 7 days, including immediately postabortion (U.S. MEC 1).
- Need for back-up contraception: The woman needs to abstain from sexual intercourse or use additional contraceptive protection for the next 2 days unless POPs are started at the time of a surgical abortion.

# Switching from Another Contraceptive Method

- Timing: POPs can be started immediately if it is reasonably certain that the woman is not pregnant (Box 2). Waiting for her next menstrual cycle is unnecessary.
- Need for back-up contraception: If it has been >5 days since menstrual bleeding started, the woman needs to abstain from sexual intercourse or use additional contraceptive protection for the next 2 days.
- Switching from an IUD: If the woman has had sexual intercourse since the start of her current menstrual cycle and it has been >5 days since menstrual bleeding started, theoretically, residual sperm might be in the genital tract, which could lead to fertilization if ovulation occurs. A health care provider may consider any of the following options:
  - Advise the woman to retain the IUD for at least 2 days after POPs are initiated and return for IUD removal.
  - Advise the woman to abstain from sexual intercourse or use barrier contraception for 7 days before removing the IUD and switching to the new method.
  - If the woman cannot return for IUD removal and has not abstained from sexual intercourse or used barrier contraception for 7 days, advise the woman to use ECPs at the time of IUD removal. POPs can be started immediately after use of ECPs (with the exception of UPA). POPs can be started no sooner than 5 days after use of UPA.

Comments and Evidence Summary. In situations in which the health care provider is uncertain whether the woman might be pregnant, the benefits of starting POPs likely exceed any risk; therefore, starting POPs should be considered at any time, with a follow-up pregnancy test in 2-4 weeks. Unlike COCs, POPs inhibit ovulation in about half of cycles, although the rates vary widely by individual (279). Peak serum steroid levels are reached about 2 hours after administration, followed by rapid distribution and elimination, such that by 24 hours after administration, serum steroid levels are near baseline (279). Therefore, taking POPs at approximately the same time each day is important. An estimated 48 hours of POP use has been deemed necessary to achieve the contraceptive effects on cervical mucus (279). If a woman needs to use additional contraceptive protection when switching to POPs from another contraceptive method, consider continuing her previous method for 2 days after starting POPs.
No direct evidence was found regarding the effects of starting POPs at different times of the cycle.

# Examinations and Tests Needed Before Initiation of POPs

Among healthy women, no examinations or tests are needed before initiation of POPs, although a baseline weight and BMI measurement might be useful for monitoring POP users over time (Table 5). Women with known medical problems or other special conditions might need additional examinations or tests before being determined to be appropriate candidates for a particular method of contraception. The U.S. MEC might be useful in such circumstances (5).

Comments and Evidence Summary. Weight (BMI): Obese women can use POPs (U.S. MEC 1) (5); therefore, screening for obesity is not necessary for the safe initiation of POPs. However, measuring weight and calculating BMI at baseline might be helpful for monitoring any changes and counseling women who might be concerned about weight change perceived to be associated with their contraceptive method.

Bimanual examination and cervical inspection: Pelvic examination is not necessary before initiation of POPs because it does not facilitate detection of conditions for which POPs would be unsafe. Women with current breast cancer should not use POPs (U.S. MEC 4), and women with certain liver diseases generally should not use POPs (U.S. MEC 3) (5); however, neither of these conditions is likely to be detected by pelvic examination (145). A systematic review identified two case-control studies that compared delayed versus immediate pelvic examination before initiation of hormonal contraceptives, specifically oral contraceptives or DMPA (95). No differences in risk factors for cervical neoplasia, incidence of STDs, incidence of abnormal Papanicolaou smears, or incidence of abnormal findings from wet mounts were observed (Level of evidence: II-2, fair, direct).

Lipids: Screening for dyslipidemias is not necessary for the safe initiation of POPs because of the low prevalence of undiagnosed disease in women of reproductive age and the low likelihood of clinically significant changes with use of hormonal contraceptives. A systematic review did not identify any evidence regarding outcomes among women who were screened versus not screened with lipid measurement before initiation of hormonal contraceptives (57). During 2009-2012 among women aged 20-44 years in the United States, 7.6% had high cholesterol, defined as total serum cholesterol ≥240 mg/dL (84). During 1999-2008, the prevalence of undiagnosed hypercholesterolemia among women aged 20-44 years was approximately 2% (85). Studies have shown mixed results about the effects of hormonal methods on lipid levels among both healthy women and women with baseline lipid abnormalities, and the clinical significance of these changes is unclear (86)(87)(88)(89).

Liver enzymes: Although women with certain liver diseases generally should not use POPs (U.S. MEC 3) (5), screening for liver disease before initiation of POPs is not necessary because of the low prevalence of these conditions and the high likelihood that women with liver disease already would have had the condition diagnosed. A systematic review did not identify any evidence regarding outcomes among women who were screened versus not screened with liver enzyme tests before initiation of hormonal contraceptives (57). In 2012, among U.S. women, the percentage with liver disease (not further specified) was 1.3% (90). In 2013, the incidence of acute hepatitis A, B, or C was ≤1 per 100,000 U.S. population (91). During 2002-2011, the incidence of liver carcinoma among U.S. women was approximately 3.7 per 100,000 population (92). Because estrogen and progestins are metabolized in the liver, the use of hormonal contraceptives among women with liver disease might, theoretically, be a concern. The use of hormonal contraceptives, specifically COCs and POPs, does not affect disease progression or severity in women with hepatitis, cirrhosis, or benign focal nodular hyperplasia (93,94).

Clinical breast examination: Although women with current breast cancer should not use POPs (U.S. MEC 4) (5), screening asymptomatic women with a clinical breast examination before initiating POPs is not necessary because of the low prevalence of breast cancer among women of reproductive age. A systematic review did not identify any evidence regarding outcomes among women who were screened versus not screened with a clinical breast examination before initiation of hormonal contraceptives (95). The incidence of breast cancer among women of reproductive age in the United States is low. In 2012, the incidence of breast cancer among women aged 20-49 years was approximately 70.7 per 100,000 women (96).

Other screening: Women with hypertension, diabetes, anemia, thrombogenic mutations, cervical intraepithelial neoplasia, cervical cancer, STDs, or HIV infection can use (U.S. MEC 1) or generally can use (U.S. MEC 2) POPs (5); therefore, screening for these conditions is not necessary for the safe initiation of POPs.

Table 5 footnotes:
* Class A: essential and mandatory in all circumstances for safe and effective use of the contraceptive method. Class B: contributes substantially to safe and effective use, but implementation may be considered within the public health and/or service context; the risk of not performing an examination or test should be balanced against the benefits of making the contraceptive method available. Class C: does not contribute substantially to safe and effective use of the contraceptive method.
† Weight (BMI) measurement is not needed to determine medical eligibility for any methods of contraception because all methods can be used (U.S. MEC 1) or generally can be used (U.S. MEC 2) among obese women (Box 1). However, measuring weight and calculating BMI at baseline might be helpful for monitoring any changes and counseling women who might be concerned about weight change perceived to be associated with their contraceptive method.

# Number of Pill Packs that Should Be Provided at Initial and Return Visits

- At the initial and return visit, provide or prescribe up to a 1-year supply of POPs (e.g., 13 28-day pill packs), depending on the woman's preferences and anticipated use.
- A woman should be able to obtain POPs easily in the amount and at the time she needs them.

Comments and Evidence Summary. The more pill packs given up to 13 cycles, the higher the continuation rates. Restricting the number of pill packs distributed or prescribed can result in unwanted discontinuation of the method and increased risk for pregnancy. A systematic review of the evidence suggested that providing a greater number of pill packs was associated with increased continuation (232). Studies that compared provision of one versus 12 packs, one versus 12 or 13 packs, or three versus seven packs found increased continuation of pill use among women provided with more pill packs (233)(234)(235).
However, one study found no difference in continuation when patients were provided one and then three packs versus four packs all at once (236). In addition to continuation, a greater number of pill packs provided was associated with fewer pregnancy tests, fewer pregnancies, and lower cost per client. However, a greater number of pill packs (13 packs versus three packs) also was associated with increased pill wastage in one study (234) (Level of evidence: I to II-2, fair, direct).

# Routine Follow-Up After POP Initiation

These recommendations address when routine follow-up is recommended for safe and effective continued use of contraception for healthy women. The recommendations refer to general situations and might vary for different users and different situations. Specific populations who might benefit from more frequent follow-up visits include adolescents, those with certain medical conditions or characteristics, and those with multiple medical conditions.
- Advise the woman to return at any time to discuss side effects or other problems or if she wants to change the method being used. No routine follow-up visit is required.
- At other routine visits, health care providers seeing POP users should do the following:
  - Assess the woman's satisfaction with her contraceptive method and whether she has any concerns about method use.
  - Assess any changes in health status, including medications, that would change the appropriateness of POPs for safe and effective continued use based on U.S. MEC (e.g., category 3 and 4 conditions and characteristics).
  - Consider assessing weight changes and counseling women who are concerned about weight change perceived to be associated with their contraceptive method.

Comments and Evidence Summary. No evidence was found regarding whether a routine follow-up visit after initiating POPs improves correct and continued use.

# Missed POPs

For the following recommendations, a dose is considered missed if it has been >3 hours since it should have been taken.
- Take one pill as soon as possible.
- Continue taking pills daily, one each day, at the same time each day, even if it means taking two pills on the same day.
- Use back-up contraception (e.g., condoms) or avoid sexual intercourse until pills have been taken correctly, on time, for 2 consecutive days.
- Emergency contraception should be considered (with the exception of UPA) if the woman has had unprotected sexual intercourse.

Comments and Evidence Summary. Inconsistent or incorrect use of oral contraceptive pills is a major reason for oral contraceptive failure. Unlike COCs, POPs inhibit ovulation in about half of cycles, although this rate varies widely by individual (279). Peak serum steroid levels are reached about 2 hours after administration, followed by rapid distribution and elimination, such that by 24 hours after administration, serum steroid levels are near baseline (279). Therefore, taking POPs at approximately the same time each day is important. An estimated 48 hours of POP use was deemed necessary to achieve the contraceptive effects on cervical mucus (279). Women who frequently miss POPs should consider an alternative contraceptive method that is less dependent on the user to be effective (e.g., IUD, implant, or injectable). No evidence was found regarding the effects of missed POPs available in the United States on measures of contraceptive effectiveness including pregnancy, follicular development, hormone levels, or cervical mucus quality.
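Because POPs are far more time-sensitive than COCs, the missed-dose threshold differs by nearly an order of magnitude (>3 hours versus ≥24 hours). A minimal comparative sketch follows; it is illustrative only, with hypothetical names, and does not replace the full recommendations above.

```python
# Illustrative sketch contrasting the missed-dose thresholds above.
# Hypothetical names; not a clinical tool.

def dose_missed(hours_overdue: float, pill_type: str) -> bool:
    """POPs count as missed when >3 h overdue; COC pills at >=24 h."""
    if pill_type == "POP":
        return hours_overdue > 3
    return hours_overdue >= 24  # COC

# A POP taken 4 hours late is already a missed dose (back-up contraception is
# needed until pills are taken on time for 2 consecutive days); a COC pill
# taken 4 hours late is merely late.
assert dose_missed(4, "POP") and not dose_missed(4, "COC")
```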
# Vomiting or Diarrhea (for Any Reason or Duration) that Occurs Within 3 Hours After Taking a Pill

- Take another pill as soon as possible (if possible, despite discomfort).
- Continue taking pills daily, one each day, at the same time each day.
- Use back-up contraception (e.g., condoms) or avoid sexual intercourse until 2 days after vomiting or diarrhea has resolved.
- Emergency contraception should be considered (with the exception of UPA) if the woman has had unprotected sexual intercourse.

Comments and Evidence Summary. Theoretically, the contraceptive effectiveness of POPs might be decreased because of vomiting or severe diarrhea. Because of the lack of evidence to address this question, these recommendations are based on the recommendations for missed POPs. No evidence was found regarding the effects of vomiting or diarrhea on measures of contraceptive effectiveness, including pregnancy, follicular development, hormone levels, or cervical mucus quality.

# Standard Days Method

SDM is a method based on fertility awareness; users must avoid unprotected sexual intercourse on days 8-19 of the menstrual cycle (280). Approximately 5 out of 100 women become pregnant in the first year with perfect (i.e., correct and consistent) use of SDM (280); effectiveness based on typical use is not available for this method but is expected to be lower than that for perfect use. SDM is reversible and can be used by women of all ages. SDM does not protect against STDs; consistent and correct use of male latex condoms reduces the risk for STDs, including HIV.

# Use of SDM Among Women with Various Durations of the Menstrual Cycle

Menstrual Cycles of 26-32 Days
- The woman may use the method.
- Provide a barrier method of contraception for protection on days 8-19 if she wants one.
- If she has unprotected sexual intercourse during days 8-19, consider the use of emergency contraception if appropriate.

# Two or More Cycles of <26 or >32 Days Within Any 1 Year of SDM Use

- Advise the woman that the method might not be appropriate for her because of a higher risk for pregnancy. Help her consider another method.

Comments and Evidence Summary. The probability of pregnancy is increased when the menstrual cycle is outside the range of 26-32 days, even if unprotected sexual intercourse is avoided on days 8-19. A study of 7,600 menstrual cycles, including information on cycle length and signs of ovulation, concluded that the theoretical effectiveness of SDM is greatest for women with cycles of 26-32 days, that the method is still effective for women who occasionally have a cycle outside this range, and that it is less effective for women who consistently have cycles outside this range. Information from daily hormonal measurements shows that the timing of the 6-day fertile window varies greatly, even among women with regular cycles (21,281,282).

# Emergency Contraception

Emergency contraception consists of methods that can be used by women after sexual intercourse to prevent pregnancy. Emergency contraception methods have varying ranges of effectiveness depending on the method and timing of administration. Four options are available in the United States: the Cu-IUD and three types of ECPs.
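Returning briefly to the Standard Days Method described above: its rules are mechanical enough to state in a few lines of code. The sketch below is illustrative only (names are hypothetical) and encodes the fixed days 8-19 fertile window and the 26-32 day cycle-length screen.

```python
# Illustrative sketch of the Standard Days Method rules above.
# Hypothetical names; not a clinical tool.

def sdm_requires_protection(cycle_day: int) -> bool:
    """Days 8-19 of the cycle are the SDM fertile window."""
    return 8 <= cycle_day <= 19

def sdm_still_appropriate(cycle_lengths_past_year: list[int]) -> bool:
    """Two or more cycles of <26 or >32 days in a year argue for another method."""
    outside = sum(1 for days in cycle_lengths_past_year if days < 26 or days > 32)
    return outside < 2

assert sdm_requires_protection(10) and not sdm_requires_protection(21)
assert not sdm_still_appropriate([25, 28, 33, 29])  # two cycles out of range
```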
# Types of Emergency Contraception

Intrauterine Device
- Cu-IUD

ECPs
- UPA in a single dose (30 mg)
- Levonorgestrel in a single dose (1.5 mg) or as a split dose (1 dose of 0.75 mg of levonorgestrel followed by a second dose of 0.75 mg of levonorgestrel 12 hours later)
- Combined estrogen and progestin in 2 doses (Yuzpe regimen: 1 dose of 100 µg of ethinyl estradiol plus 0.50 mg of levonorgestrel followed by a second dose of 100 µg of ethinyl estradiol plus 0.50 mg of levonorgestrel 12 hours later)

# Initiation of Emergency Contraception

Timing

Cu-IUD
- The Cu-IUD can be inserted within 5 days of the first act of unprotected sexual intercourse as an emergency contraceptive.
- In addition, when the day of ovulation can be estimated, the Cu-IUD can be inserted beyond 5 days after sexual intercourse, as long as insertion does not occur >5 days after ovulation.

ECPs
- ECPs should be taken as soon as possible within 5 days of unprotected sexual intercourse.

Comments and Evidence Summary. Cu-IUDs are highly effective as emergency contraception (283) and can be continued as regular contraception. UPA and levonorgestrel ECPs have similar effectiveness when taken within 3 days after unprotected sexual intercourse; however, UPA has been shown to be more effective than the levonorgestrel formulation 3-5 days after unprotected sexual intercourse (284). The combined estrogen and progestin regimen is less effective than UPA or levonorgestrel and also is associated with more frequent occurrence of side effects (nausea and vomiting) (285). The levonorgestrel formulation might be less effective than UPA among obese women (286). Two studies of UPA use found consistent decreases in pregnancy rates when administered within 120 hours of unprotected sexual intercourse (284,287). Five studies found that the levonorgestrel and combined regimens decreased risk for pregnancy through the fifth day after unprotected sexual intercourse; however, rates of pregnancy were slightly higher when ECPs were taken after 3 days (288)(289)(290)(291)(292). A meta-analysis of levonorgestrel ECPs found that pregnancy rates were low when administered within 4 days after unprotected sexual intercourse but increased at 4-5 days (293) (Level of evidence: I to II-2, good to poor, direct).

# Advance Provision of ECPs

- An advance supply of ECPs may be provided so that ECPs will be available when needed and can be taken as soon as possible after unprotected sexual intercourse.

Comments and Evidence Summary. A systematic review identified 17 studies that reported on safety or effectiveness of advance ECPs in adult or adolescent women (294). Any use of ECPs was two to seven times greater among women who received an advance supply of ECPs. However, a summary estimate (relative risk = 0.97; 95% confidence interval = 0.77-1.22) of five randomized controlled trials did not indicate a significant reduction in unintended pregnancies at 12 months with advance provision of ECPs. In the majority of studies among adults or adolescents, patterns of regular contraceptive use, pregnancy rates, and incidence of STDs did not vary between those who received advance ECPs and those who did not. Although available evidence supports the safety of advance provision of ECPs, effectiveness of advance provision of ECPs in reducing pregnancy rates at the population level has not been demonstrated (Level of evidence: I to II-3, good to poor, direct).
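As a compact summary of the four U.S. options and their shared 5-day window, consider the sketch below. It is illustrative only: the names and structure are hypothetical, and real clinical selection involves factors (e.g., body weight, access, plans for ongoing contraception) that it deliberately ignores.

```python
# Illustrative summary of the U.S. emergency contraception options above.
# Hypothetical names; not a clinical tool.

EC_OPTIONS = {
    "Cu-IUD": "insert within 5 days of unprotected intercourse (or, when "
              "ovulation can be estimated, up to 5 days after ovulation); "
              "can be continued as regular contraception",
    "UPA": "single 30 mg dose, as soon as possible within 5 days; more "
           "effective than levonorgestrel 3-5 days after intercourse",
    "levonorgestrel": "1.5 mg single dose, or 0.75 mg x 2 taken 12 hours "
                      "apart, as soon as possible within 5 days",
    "combined (Yuzpe)": "100 ug ethinyl estradiol + 0.50 mg levonorgestrel, "
                        "repeated after 12 hours; less effective, more nausea",
}

def ec_within_window(hours_since_intercourse: float) -> bool:
    """All four options are taken or inserted as soon as possible within 5 days."""
    return hours_since_intercourse <= 120
```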
# Initiation of Regular Contraception After ECPs

# UPA

- Advise the woman to start or resume hormonal contraception no sooner than 5 days after use of UPA, and provide or prescribe the regular contraceptive method as needed. For methods requiring a visit to a health care provider, such as DMPA, implants, and IUDs, starting the method at the time of UPA use may be considered; the risk that the regular contraceptive method might decrease the effectiveness of UPA must be weighed against the risk of not starting a regular hormonal contraceptive method.
- The woman needs to abstain from sexual intercourse or use barrier contraception for the next 7 days after starting or resuming regular contraception or until her next menses, whichever comes first.
- Any nonhormonal contraceptive method can be started immediately after the use of UPA.
- Advise the woman to have a pregnancy test if she does not have a withdrawal bleed within 3 weeks.

# Levonorgestrel and Combined Estrogen and Progestin ECPs

- Any regular contraceptive method can be started immediately after the use of levonorgestrel or combined estrogen and progestin ECPs.
- The woman needs to abstain from sexual intercourse or use barrier contraception for 7 days.
- Advise the woman to have a pregnancy test if she does not have a withdrawal bleed within 3 weeks.

Comments and Evidence Summary. The resumption or initiation of regular hormonal contraception after ECP use involves consideration of the risk for pregnancy if ECPs fail and the risks for unintended pregnancy if contraception initiation is delayed until the subsequent menstrual cycle. A health care provider may provide or prescribe pills, the patch, or the ring for a woman to start no sooner than 5 days after use of UPA. For methods requiring a visit to a health care provider, such as DMPA, implants, and IUDs, starting the method at the time of UPA use may be considered; the risk that the regular contraceptive method might decrease the effectiveness of UPA must be weighed against the risk of not starting a regular hormonal contraceptive method. Data on when a woman can start regular contraception after ECPs are limited to pharmacodynamic data and expert opinion (295)(296)(297). One pharmacodynamic study, in which women were randomly assigned mid-cycle to either UPA or placebo followed by a 21-day course of combined hormonal contraception, found no difference between the UPA and placebo groups in the time for women's ovaries to reach quiescence as assessed by ultrasound and serum estradiol (296); this finding suggests that UPA did not have an effect on the combined hormonal contraception. In another pharmacodynamic study with a crossover design, women were randomly assigned to one of three groups: 1) UPA followed by desogestrel for 20 days started 1 day later; 2) UPA plus placebo; or 3) placebo plus desogestrel for 20 days (295). Among women taking UPA followed by desogestrel, a higher incidence of ovulation in the first 5 days was found compared with UPA alone (45% versus 3%, respectively), suggesting that desogestrel might decrease the effectiveness of UPA. No concern exists that administering combined estrogen and progestin or levonorgestrel formulations of ECPs concurrently with systemic hormonal contraception decreases the effectiveness of either emergency or regular contraceptive methods because these formulations do not have antiprogestin properties like UPA.
If a woman is planning to initiate contraception after the next menstrual bleeding after ECP use, the cycle in which ECPs are used might be shortened, prolonged, or involve unscheduled bleeding.

# Prevention and Management of Nausea and Vomiting with ECP Use

# Nausea and Vomiting

- Levonorgestrel and UPA ECPs cause less nausea and vomiting than combined estrogen and progestin ECPs.
- Routine use of antiemetics before taking ECPs is not recommended. Pretreatment with antiemetics may be considered depending on availability and clinical judgment.

# Vomiting Within 3 Hours of Taking ECPs

- Another dose of ECP should be taken as soon as possible. Use of an antiemetic should be considered.

Comments and Evidence Summary. Many women do not experience nausea or vomiting when taking ECPs, and predicting which women will experience nausea or vomiting is difficult. Although routine use of antiemetics before taking ECPs is not recommended, antiemetics are effective in some women and can be offered when appropriate. Health care providers who are deciding whether to offer antiemetics to women taking ECPs should consider the following: 1) women taking combined estrogen and progestin ECPs are more likely to experience nausea and vomiting than those who take levonorgestrel or UPA ECPs; 2) evidence indicates that antiemetics reduce the occurrence of nausea and vomiting in women taking combined estrogen and progestin ECPs; and 3) women who take antiemetics might experience other side effects from the antiemetics. A systematic review examined incidence of nausea and vomiting with different ECP regimens and effectiveness of antinausea drugs in reducing nausea and vomiting with ECP use (298). The levonorgestrel regimen was associated with significantly less nausea than a nonstandard dose of UPA (50 mg) and the standard combined estrogen and progestin regimen (299)(300)(301). Use of the split-dose levonorgestrel showed no differences in nausea and vomiting compared with the single-dose levonorgestrel (288,290,292,302) (Level of evidence: I, good-fair, indirect). In two trials, the antinausea drugs meclizine and metoclopramide, taken before combined estrogen and progestin ECPs, reduced the severity of nausea (303,304); significantly less vomiting occurred with meclizine but not metoclopramide (Level of evidence: I, good-fair, direct). No direct evidence was found regarding the effects of vomiting after taking ECPs.

# Female Sterilization

Laparoscopic, abdominal, and hysteroscopic methods of female sterilization are available in the United States, and some of these procedures can be performed in an outpatient procedure or office setting. Fewer than 1 out of 100 women become pregnant in the first year after female sterilization (14). Because these methods are intended to be irreversible, all women should be appropriately counseled about the permanency of sterilization and the availability of highly effective, long-acting, reversible methods of contraception. Female sterilization does not protect against STDs; consistent and correct use of male latex condoms reduces the risk for STDs, including HIV.

# When Hysteroscopic Sterilization is Reliable for Contraception

- Before a woman can rely on hysteroscopic sterilization for contraception, a hysterosalpingogram (HSG) must be performed 3 months after the sterilization procedure to confirm bilateral tubal occlusion.
- The woman should be advised that she needs to abstain from sexual intercourse or use additional contraceptive protection until she has confirmed bilateral tubal occlusion.

# When Laparoscopic and Abdominal Approaches are Reliable for Contraception

- A woman can rely on sterilization for contraception immediately after laparoscopic and abdominal approaches. No additional contraceptive protection is needed.

Comments and Evidence Summary. HSG confirmation is necessary to confirm bilateral tubal occlusion after hysteroscopic sterilization. The inserts for the hysteroscopic sterilization system available in the United States are placed bilaterally into the fallopian tubes and require 3 months for adequate fibrosis and scarring leading to bilateral tubal occlusion. After hysteroscopic sterilization, advise the woman to correctly and consistently use an effective method of contraception while awaiting confirmation. If compliance with another method might be a problem, a woman and her health care provider may consider DMPA injection at the time of sterilization to ensure adequate contraception for 3 months. In contrast to laparoscopic and abdominal sterilization, pregnancy risk beyond 7 years of follow-up has not been studied among women who received hysteroscopic sterilization. Pregnancy risk with at least 10 years of follow-up has been studied among women who received laparoscopic and abdominal sterilizations (305,306). Although these methods are highly effective, pregnancies can occur many years after the procedure, and the risk for pregnancy is higher among younger women (306,307). A systematic review was conducted to identify studies that reported whether pregnancies occurred after hysteroscopic sterilization (308). The 24 studies identified found that very few pregnancies occurred among women with confirmed bilateral tubal occlusion; however, few studies included long-term follow-up, and none followed women for >7 years. Among women who had successful bilateral placement, most pregnancies that occurred after hysteroscopic sterilization were in women who did not have confirmed bilateral tubal occlusion at 3 months, either because of lack of follow-up or misinterpretation of HSG results (309)(310)(311). Some pregnancies occurred within 3 months of placement, including among women who were already pregnant at the time of the procedure, women who did not use alternative contraception, or women who had failures of alternative contraception (310)(311)(312)(313)(314)(315). Although these studies generally demonstrated high rates of bilateral placement, some pregnancies occurred as a result of lack of bilateral placement identified on later imaging (310,311,(313)(314)(315)(316). Most pregnancies occurred after deviations from FDA directions, which include placement in the early follicular phase of the menstrual cycle, imaging at 3 months to document proper placement, and use of effective alternative contraception until documented occlusion (Level of evidence: II-3, fair, direct).

# Male Sterilization

Male sterilization, or vasectomy, is one of the few contraceptive methods available to men and can be performed in an outpatient procedure or office setting. Fewer than 1 woman out of 100 becomes pregnant in the first year after her male partner undergoes sterilization (14).
Because male sterilization is intended to be irreversible, all men should be appropriately counseled about the permanency of sterilization and the availability of highly effective, long-acting, reversible methods of contraception for women. Male sterilization does not protect against STDs; consistent and correct use of male latex condoms reduces the risk for STDs, including HIV.

# When Vasectomy is Reliable for Contraception

- A semen analysis should be performed 8-16 weeks after a vasectomy to ensure the procedure was successful.
- The man should be advised that he should use additional contraceptive protection or abstain from sexual intercourse until he has confirmation of vasectomy success by postvasectomy semen analysis.

# Other Postprocedure Recommendations

- The man should refrain from ejaculation for approximately 1 week after the vasectomy to allow for healing of surgical sites and, after certain methods of vasectomy, occlusion of the vas.

Comments and Evidence Summary. The Vasectomy Guideline Panel of the American Urological Association performed a systematic review of key issues concerning the practice of vasectomy (317). All English-language publications on vasectomy published during 1949-2011 were reviewed. For more information, see the American Urological Association Vasectomy Guidelines (/education/clinical-guidance/Vasectomy.pdf). Motile sperm disappear within a few weeks after vasectomy (318)(319)(320)(321). The time to azoospermia varies widely in different studies; however, by 12 weeks after the vasectomy, 80% of men have azoospermia, and almost all others have rare nonmotile sperm (defined as ≤100,000 nonmotile sperm per milliliter) (317). The number of ejaculations after vasectomy is not a reliable indicator of when azoospermia or rare nonmotile sperm will be achieved (317). Once azoospermia or rare nonmotile sperm has been achieved, patients can rely on the vasectomy for contraception, although not with 100% certainty. The risk for pregnancy after a man has achieved postvasectomy azoospermia is approximately one in 2,000 (322)(323)(324)(325)(326). A median of 78% (range 33%-100%) of men return for a single postvasectomy semen analysis (317). In the largest cohorts that appear typical of North American vasectomy practice, approximately two thirds of men (55%-71%) return for at least one postvasectomy semen analysis (322,(327)(328)(329)(330)(331). Assigning men an appointment after their vasectomy might improve compliance with follow-up (332).

# When Women Can Stop Using Contraceptives

- Contraceptive protection is still needed for women aged >44 years who want to avoid pregnancy.

Comments and Evidence Summary. The age at which a woman is no longer at risk for pregnancy is not known. Although uncommon, spontaneous pregnancies occur among women aged >44 years. Both the American College of Obstetricians and Gynecologists and the North American Menopause Society recommend that women continue contraceptive use until menopause or age 50-55 years (333,334). The median age of menopause is approximately 51 years in North America (333) but can vary from ages 40-60 years (335). The median age of definitive loss of natural fertility is 41 years but can range up to age 51 years (336,337). No reliable laboratory tests are available to confirm definitive loss of fertility in a woman. The assessment of follicle-stimulating hormone levels to determine when a woman is no longer fertile might not be accurate (333).
Health care providers should consider the risks for becoming pregnant in a woman of advanced reproductive age, as well as any risks of continuing contraception until menopause. Pregnancies among women of advanced reproductive age are at higher risk for maternal complications, such as hemorrhage, venous thromboembolism, and death, and fetal complications, such as spontaneous abortion, stillbirth, and congenital anomalies (338)(339)(340). Risks associated with continuing contraception, in particular risks for acute cardiovascular events (venous thromboembolism, myocardial infarction, or stroke) or breast cancer, also are important to consider. U.S. MEC states that on the basis of age alone, women aged >45 years can use POPs, implants, the LNG-IUD, or the Cu-IUD (U.S. MEC 1) (5). Women aged >45 years generally can use combined hormonal contraceptives and DMPA (U.S. MEC 2) (5). However, women in this age group might have chronic conditions or other risk factors that can render use of hormonal contraceptive methods unsafe; U.S. MEC might be helpful in guiding the safe use of contraceptives in these women. In two studies, the incidence of venous thromboembolism was higher among oral contraceptive users aged ≥45 years compared with younger oral contraceptive users (341)(342)(343); however, an interaction between hormonal contraception and increased age compared with baseline risk was not demonstrated (341,342) or was not examined (343). The relative risk for myocardial infarction was higher among all oral contraceptive users than in nonusers, although a trend of increased relative risk with increasing age was not demonstrated (344,345). No studies were found regarding the risk for stroke in COC users aged ≥45 years (Level of evidence: II-2, good to poor, direct). A pooled analysis by the Collaborative Group on Hormonal Factors and Breast Cancer in 1996 (346) found small increased relative risks for breast cancer among women aged ≥45 years whose last use of combined hormonal contraceptives was <5 years previously and for those whose last use was 5-9 years previously. Seven more recent studies suggested small but nonsignificant increased relative risks for breast carcinoma in situ or breast cancer among women who had used oral contraceptives or DMPA when they were aged ≥40 years compared with those who had never used either method (347-353) (Level of evidence: II-2, fair, direct). # Conclusion Most women can start most contraceptive methods at any time, and few examinations or tests, if any, are needed before starting a contraceptive method. Routine follow-up for most women includes assessment of satisfaction with the contraceptive method, concerns about method use, and changes in health status or medications that could affect medical eligibility for continued use of the method. Because changes in bleeding patterns are one of the major reasons for discontinuation of contraception, recommendations are provided for the management of bleeding irregularities with various contraceptive methods. In addition, because women and health care providers can be confused about the procedures for missed pills and dosing errors with the contraceptive patch and ring, the instructions are streamlined for easier use. ECPs and emergency use of the Cu-IUD are important options for women, and recommendations on using these methods, as well as starting regular contraception after use of emergency contraception, are provided.
Male and female sterilization are highly effective methods of contraception for men, women, and couples who have completed childbearing; for men undergoing vasectomy and women undergoing a hysteroscopic sterilization procedure, additional contraceptive protection is needed until the success of the procedure can be confirmed. CDC is committed to working with partners at the federal, national, and local levels to disseminate, implement, and evaluate U.S. SPR recommendations so that the information reaches health care providers. Strategies for dissemination and implementation include collaborating with other federal agencies and professional and service organizations to widely distribute the recommendations through presentations, electronic distribution, newsletters, and other publications; development of provider tools and job aids to assist providers in implementing the new recommendations; and training activities for students, as well as for continuing education. CDC conducts surveys of family planning health care providers to assess attitudes and practices related to contraceptive use. Results from these surveys will assist CDC in evaluating the impact of these recommendations on the provision of contraceptives in the United States. Finally, CDC will continually monitor new scientific evidence and will update these recommendations as warranted by new evidence. Updates to the recommendations, as well as provider tools and other resources, are available on the CDC U.S. SPR website: http://www.cdc.gov/reproductivehealth/UnintendedPregnancy/USSPR.htm. # Handling Conflict of Interest To promote transparency, all participants were asked to disclose any potential conflicts of interest to CDC prior to the expert meeting and to report any potential conflicts of interest during the introductory portion of the expert meeting. All potential conflicts of interest are listed above. No participants were excluded from discussion based on potential conflicts of interest. One presenter was an employee of a pharmaceutical company and participated by teleconference; after the presentation and questions related to the presentation, the presenter was excused from the discussion. CDC staff who ultimately decided and developed these recommendations have no financial interests or other relationships with the manufacturers of commercial products, suppliers of commercial services, or commercial supporters relevant to these recommendations. Refer to the 2016 U.S. MEC for clarifications to the numeric categories, as well as for summaries of the evidence and additional comments. Hormonal contraceptives and intrauterine devices do not protect against sexually transmitted diseases (STDs), including human immunodeficiency virus (HIV), and women using these methods should be counseled that consistent and correct use of the male latex condom reduces the risk for transmission of HIV and other STDs. Use of female condoms can provide protection from transmission of STDs, although data are limited. Health care providers can use the summary table as a quick reference guide to the classifications for hormonal contraceptive methods and intrauterine contraception to compare classifications across these methods (Box A1) (Table A1). BOX A1. Categories for classifying hormonal contraceptives and intrauterine devices 1 = A condition for which there is no restriction for the use of the contraceptive method. 2 = A condition for which the advantages of using the method generally outweigh the theoretical or proven risks.
3 = A condition for which the theoretical or proven risks usually outweigh the advantages of using the method. 4 = A condition that represents an unacceptable health risk if the contraceptive method is used. * In instances in which blood pressure cannot be measured by a provider, blood pressure measured in other settings can be reported by the woman to her provider. † Weight (BMI) measurement is not needed to determine medical eligibility for any methods of contraception because all methods can be used (U.S. MEC 1) or generally can be used (U.S. MEC 2) among obese women (Box 1). However, measuring weight and calculating BMI at baseline might be helpful for monitoring any changes and counseling women who might be concerned about weight change perceived to be associated with their contraceptive method. § A bimanual examination (not cervical inspection) is needed for diaphragm fitting. ¶ Most women do not require additional STD screening at the time of IUD insertion. If a woman with risk factors for STDs has not been screened for gonorrhea and chlamydia according to CDC's STD Treatment Guidelines (http://www.cdc.gov/std/treatment), screening can be performed at the time of IUD insertion, and insertion should not be delayed. Women with current purulent cervicitis or chlamydial infection or gonococcal infection should not undergo IUD insertion (U.S. MEC 4). The examinations or tests noted apply to women who are presumed to be healthy (Table C1). Those with known medical problems or other special conditions might need additional examinations or tests before being determined to be appropriate candidates for a particular method of contraception. The 2016 U.S. Medical Eligibility Criteria for Contraceptive Use (U.S. MEC) might be useful in such circumstances (1). The following classification was considered useful in differentiating the applicability of the various examinations or tests: - Class A: essential and mandatory in all circumstances for safe and effective use of the contraceptive method. - Class B: contributes substantially to safe and effective use, but implementation may be considered within the public health and/or service context; risk of not performing an examination or test should be balanced against the benefits of making the contraceptive method available. - Class C: does not contribute substantially to safe and effective use of the contraceptive method. These classifications focus on the relationship of the examinations or tests to safe initiation of a contraceptive method. They are not intended to address the appropriateness of these examinations or tests in other circumstances. For example, some of the examinations or tests that are not deemed necessary for safe and effective contraceptive use might be appropriate for good preventive health care or for diagnosing or assessing suspected medical conditions. Any additional screening needed for preventive health care can be performed at the time of contraception initiation, and initiation should not be delayed for test results. No examinations or tests are needed before initiating condoms or spermicides. A bimanual examination is necessary for diaphragm fitting. A bimanual examination and cervical inspection are needed for cervical cap fitting.
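The classification and the barrier-method requirements above reduce to a small lookup. The following is a minimal sketch, illustrative only; the enum and dictionary names are not part of the guidance, and "necessary" in the text is mapped to Class A here as an assumption:

```python
from enum import Enum

class ExamClass(Enum):
    A = "essential and mandatory in all circumstances"
    B = "contributes substantially; weigh omission risk against access"
    C = "does not contribute substantially to safe and effective use"

# Examinations needed before initiating each barrier method, per the text above.
BARRIER_METHOD_EXAMS = {
    "condoms": {},        # no examinations or tests needed
    "spermicides": {},    # no examinations or tests needed
    "diaphragm": {"bimanual examination": ExamClass.A},
    "cervical cap": {"bimanual examination": ExamClass.A,
                     "cervical inspection": ExamClass.A},
}
```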
Disclosure of Relationship CDC, our planners, and our content experts wish to disclose they have no financial interest or other relationships with the manufacturers of commercial products, suppliers of commercial services, or commercial supporters. Planners have reviewed content to ensure there is no bias. This document will not include any discussion of the unlabeled use of a product or a product under investigational use, with the exception that some of the recommendations in this document might be inconsistent with package labeling. # Introduction Unintended pregnancy rates remain high in the United States; approximately 45% of all pregnancies are unintended, with higher proportions among adolescent and young women, women who are racial/ethnic minorities, and women with lower levels of education and income (1). Unintended pregnancies increase the risk for poor maternal and infant outcomes (2) and in 2010, resulted in U.S. government health care expenditures of $21 billion (3). Approximately half of unintended pregnancies are among women who were not using contraception at the time they became pregnant; the other half are among women who became pregnant despite reported use of contraception (4). Strategies to prevent unintended pregnancy include assisting women at risk for unintended pregnancy and their partners with choosing appropriate contraceptive methods and helping them use methods correctly and consistently to prevent pregnancy. In 2013, CDC published the first U.S. Selected Practice Recommendations for Contraceptive Use (U.S. SPR), adapted from global guidance developed by the World Health Organization (WHO SPR), which provided evidence-based guidance on how to use contraceptive methods safely and effectively once they are deemed to be medically appropriate. U.S. SPR is a companion document to U.S. Medical Eligibility Criteria for Contraceptive Use (U.S. MEC) (http://www.cdc.gov/reproductivehealth/unintendedpregnancy/usmec.htm), which provides recommendations on safe use of contraceptive methods for women with various medical conditions and other characteristics (5). WHO intended for the global guidance to be used by local or national policy makers, family planning program managers, and the scientific community as a reference when they develop family planning guidance at the country or program level. During 2012-2013, CDC went through a formal process to adapt the global guidance for best implementation in the United States, which included rigorous identification and critical appraisal of the scientific evidence through systematic reviews, and input from national experts on how to translate that evidence into recommendations for U.S. health care providers (6). At that time, CDC committed to keeping this guidance up to date and based on the best available evidence, with full review every few years (6). This document updates the 2013 U.S. SPR (6) with new evidence and input from experts. Major updates include 1) revised recommendations for starting regular contraception after the use of emergency contraceptive pills and 2) new recommendations for the use of medications to ease insertion of intrauterine devices (IUDs). Recommendations are provided for health care providers on the safe and effective use of contraceptive methods and address provision of contraceptive methods and management of side effects and other problems with contraceptive method use, within the framework of removing unnecessary medical barriers to accessing and using contraception.
These recommendations are meant to serve as a source of clinical guidance for health care providers; health care providers should always consider the individual clinical circumstances of each person seeking family planning services. This report is not intended to be a substitute for professional medical advice for individual patients, who should seek advice from their health care providers when considering family planning options. # Summary of Changes from the 2013 U.S. SPR Updated Recommendations Recommendations have been updated regarding when to start regular contraception after ulipristal acetate (UPA) emergency contraceptive pills: • Advise the woman to start or resume hormonal contraception no sooner than 5 days after use of UPA, and provide or prescribe the regular contraceptive method as needed. For methods requiring a visit to a health care provider, such as depot medroxyprogesterone acetate (DMPA), implants, and IUDs, starting the method at the time of UPA use may be considered; the risk that the regular contraceptive method might decrease the effectiveness of UPA must be weighed against the risk of not starting a regular hormonal contraceptive method. • The woman needs to abstain from sexual intercourse or use barrier contraception for the next 7 days after starting or resuming regular contraception or until her next menses, whichever comes first. • Any nonhormonal contraceptive method can be started immediately after the use of UPA. • Advise the woman to have a pregnancy test if she does not have a withdrawal bleed within 3 weeks. # New Recommendations New recommendations have been made for medications to ease IUD insertion: • Misoprostol is not recommended for routine use before IUD insertion. Misoprostol might be helpful in select circumstances (e.g., in women with a recent failed insertion). • Paracervical block with lidocaine might reduce patient pain during IUD insertion. # Methods Since publication of the 2013 U.S. SPR, CDC has monitored the literature for new evidence relevant to the recommendations through the WHO/CDC continuous identification of research evidence (CIRE) system (7). This system identifies new evidence as it is published and allows WHO and CDC to update systematic reviews and facilitate updates to recommendations as new evidence warrants. Automated searches are run in PubMed weekly, and the results are reviewed. Abstracts that meet specific criteria are added to the web-based CIRE system, which facilitates coordination and peer review of systematic reviews for both WHO and CDC. In 2014, CDC reviewed all of the existing recommendations in the 2013 U.S. SPR for new evidence identified by CIRE that had the potential to lead to a changed recommendation. During August 27-28, 2014, CDC held a meeting in Atlanta, Georgia, of 11 family planning experts and representatives from partner organizations to solicit their input on the scope of and process for updating both the 2010 U.S. MEC and the 2013 U.S. SPR. The participants were experts in family planning and represented different provider types and organizations that represent health care providers. A list of participants is provided at the end of this report. The meeting addressed topics to be considered in the update of U.S. SPR on the basis of new scientific evidence published since 2013 (identified through the CIRE system), topics addressed at a 2014 WHO meeting to update global guidance, and suggestions CDC received from providers for the addition of recommendations not included in the 2013 U.S.
SPR (e.g., from provider feedback through e-mail, public inquiry, and questions received at conferences). CDC identified one topic to consider adding to the guidance: the use of medications to ease IUD insertion (evidence question: "Among women of reproductive age, does use of medications before IUD insertion improve the safety or effectiveness of the procedure [ease of insertion, need for adjunctive insertion measures, or insertion success] or affect patient outcomes [pain or side effects] compared with nonuse of these medications?"). CDC also identified one topic for which new evidence warranted a review of an existing recommendation: initiation of regular contraception after emergency contraceptive pills (evidence question: "Does ulipristal acetate for emergency contraception interact with regular use of hormonal contraception leading to decreased effectiveness of either contraceptive method?"). CDC determined that all other recommendations in the 2013 U.S. SPR were up to date and consistent with the current body of evidence for that recommendation. In preparation for a subsequent expert meeting held August 26-28, 2015, to review the scientific evidence for these topics, CDC conducted systematic reviews of the literature. # Maintaining Updated Guidance As with any evidence-based guidance document, a key challenge is keeping the recommendations up to date as new scientific evidence becomes available. Working with WHO, CDC uses the CIRE system to ensure that WHO and CDC guidance is based on the best available evidence and that a mechanism is in place to update guidance when new evidence becomes available (7). CDC will continue to work with WHO to identify and assess all new relevant evidence and determine whether changes in the recommendations are warranted. In most cases, U.S. SPR will follow any updates in the WHO guidance, which typically occur every 5 years (or sooner if warranted by new data). In addition, CDC will review any interim WHO updates for their application in the United States. CDC also will identify and assess any new literature for the recommendations that are not included in the WHO guidance and will completely review U.S. SPR every 5 years. Updates to the guidance can be found on the U.S. SPR website (http://www.cdc.gov/reproductivehealth/UnintendedPregnancy/USSPR.htm). # How To Use This Document The recommendations in this report are intended to help health care providers address issues related to use of contraceptives, such as how to help a woman initiate use of a contraceptive method, which examinations and tests are needed before initiating use of a contraceptive method, what regular follow-up is needed, and how to address problems that often arise during use, including missed pills and side effects such as unscheduled bleeding. Each recommendation addresses what a woman or health care provider can do in specific situations. For situations in which certain groups of women might be medically ineligible to follow the recommendations, comments and reference to U.S. MEC are provided (5). The full U.S. MEC recommendations and the evidence supporting those recommendations were updated in 2016 (5) and are summarized (Appendix A). The information in this document is organized by contraceptive method, and the methods generally are presented in order of effectiveness, from highest to lowest. However, the recommendations are not intended to provide guidance on every aspect of provision and management of contraceptive method use.
Instead, they incorporate the best available evidence to address specific issues regarding common, yet sometimes complex, clinical issues. Each contraceptive method section generally includes information about initiation of the method, regular follow-up, and management of problems with use (e.g., usage errors and side effects). Each section first provides the recommendation and then includes comments and a brief summary of the scientific evidence on which the recommendation is based. The level of evidence from the systematic reviews for each evidence summary is provided based on the U.S. Preventive Services Task Force system, which includes ratings for study design (I: randomized controlled trials; II-1: controlled trials without randomization; II-2: observational studies; and II-3: multiple time series or descriptive studies), ratings for internal validity (good, fair, or poor), and categorization of the evidence as direct or indirect for the specific review question (10). Recommendations in this document are provided for permanent methods of contraception, such as vasectomy and female sterilization, as well as for reversible methods of contraception, including the copper-containing intrauterine device (Cu-IUD); levonorgestrel-releasing IUDs (LNG-IUDs); the etonogestrel implant; progestin-only injectables; progestin-only pills (POPs); combined hormonal contraceptive methods that contain both estrogen and a progestin, including combined oral contraceptives (COCs), a transdermal contraceptive patch, and a vaginal contraceptive ring; and the standard days method (SDM). Recommendations also are provided for emergency use of the Cu-IUD and emergency contraceptive pills (ECPs). For each contraceptive method, recommendations are provided on the timing for initiation of the method and indications for when and for how long additional contraception, or a back-up method, is needed. Many of these recommendations include guidance that a woman can start a contraceptive method at any time during her menstrual cycle if it is reasonably certain that she is not pregnant. Guidance for health care providers on how to be reasonably certain that a woman is not pregnant also is provided. For each contraceptive method, recommendations include the examinations and tests needed before initiation of the method. These recommendations apply to persons who are presumed to be healthy. Those with known medical problems or other special conditions might need additional examinations or tests before being determined to be appropriate candidates for a particular method of contraception. U.S. MEC might be useful in such circumstances (5). Most women need no or very few examinations or tests before initiating a contraceptive method, although they might be needed to address other noncontraceptive health needs (12). Any additional screening needed for preventive health care can be performed at the time of contraception initiation, and initiation should not be delayed for test results. The following classification system was developed by WHO and adopted by CDC to categorize the applicability of the various examinations or tests before initiation of contraceptive methods (13): Class A: These tests and examinations are essential and mandatory in all circumstances for safe and effective use of the contraceptive method. Class B: These tests and examinations contribute substantially to safe and effective use, although implementation can be considered within the public health context, service context, or both.
The risk for not performing an examination or test should be balanced against the benefits of making the contraceptive method available. Class C: These tests and examinations do not contribute substantially to safe and effective use of the contraceptive method. These classifications focus on the relation of the examinations or tests to safe initiation of a contraceptive method. They are not intended to address the appropriateness of these examinations or tests in other circumstances. For example, some of the examinations or tests that are not deemed necessary for safe and effective contraceptive use might be appropriate for good preventive health care or for diagnosing or assessing suspected medical conditions. Systematic reviews were conducted for several different types of examinations and tests to assess whether a screening test was associated with safe use of contraceptive methods. Because no single convention exists for screening panels for certain diseases, including diabetes, lipid disorders, and liver diseases, the search strategies included broad terms for the tests and diseases of interest. Summary charts and clinical algorithms that summarize the guidance for the various contraceptive methods have been developed for many of the recommendations, including when to start using specific contraceptive methods (Appendix B), examinations and tests needed before initiating the various contraceptive methods (Appendix C), routine follow-up after initiating contraception (Appendix D), management of bleeding irregularities (Appendix E), and management of IUDs when users are found to have pelvic inflammatory disease (PID) (Appendix F). These summaries might be helpful to health care providers when managing family planning patients. Additional tools are available on the U.S. SPR website (http://www.cdc.gov/reproductivehealth/UnintendedPregnancy/USSPR.htm). # Contraceptive Method Choice Many elements need to be considered individually by a woman, man, or couple when choosing the most appropriate contraceptive method. Some of these elements include safety, effectiveness, availability (including accessibility and affordability), and acceptability. Although most contraceptive methods are safe for use by most women, U.S. MEC provides recommendations on the safety of specific contraceptive methods for women with certain characteristics and medical conditions (5); a U.S. MEC summary (Appendix A) and the categories of medical eligibility criteria for contraceptive use (Box 1) are provided. Voluntary informed choice of contraceptive methods is an essential guiding principle, and contraceptive counseling, where applicable, might be an important contributor to the successful use of contraceptive methods. Contraceptive method effectiveness is critically important in minimizing the risk for unintended pregnancy, particularly among women for whom an unintended pregnancy would pose additional health risks. The effectiveness of contraceptive methods depends both on the inherent effectiveness of the method itself and on how consistently and correctly it is used (Figure 1). Both consistent and correct use can vary greatly with characteristics such as age, income, desire to prevent or delay pregnancy, and culture. Methods that depend on consistent and correct use by clients have a wide range of effectiveness between typical use (actual use, including incorrect or inconsistent use) and perfect use (correct and consistent use according to directions) (14).
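The gap between perfect and typical use also compounds over time. As a worked illustration only (assuming a constant annual typical-use failure rate f and independence across years, neither of which the cited data establish), the probability of at least one unintended pregnancy within n years is

\[ P(n) = 1 - (1 - f)^{n} \]

so a method with 9 pregnancies per 100 women in the first year of typical use gives \(1 - 0.91^{3} \approx 0.25\) over 3 years, whereas a method with fewer than 1 pregnancy per 100 women per year (e.g., \(f = 0.005\)) gives \(1 - 0.995^{3} \approx 0.015\).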
IUDs and implants are considered long-acting, reversible contraception (LARC); these methods are highly effective because they do not depend on regular compliance from the user. LARC methods are appropriate for most women, including adolescents and nulliparous women. All women should be counseled about the full range and effectiveness of contraceptive options for which they are medically eligible so that they can identify the optimal method. In choosing a method of contraception, dual protection from the simultaneous risk for human immunodeficiency virus (HIV) and other sexually transmitted diseases (STDs) also should be considered. Although hormonal contraceptives and IUDs are highly effective at preventing pregnancy, they do not protect against STDs, including HIV. Consistent and correct use of the male latex condom reduces the risk for HIV infection and other STDs, including chlamydial infection, gonococcal infection, and trichomoniasis (15). Although evidence is limited, use of female condoms can provide protection from acquisition and transmission of STDs (15). All patients, regardless of contraceptive choice, should be counseled about the use of condoms and the risk for STDs, including HIV infection (15). Additional information about prevention and treatment of STDs is available from the CDC Sexually Transmitted Diseases Treatment Guidelines (http://www.cdc.gov/std/treatment) (15). Women, men, and couples have increasing numbers of safe and effective choices for contraceptive methods, including LARC methods such as IUDs and implants, to reduce the risk for unintended pregnancy. However, with these expanded options comes the need for evidence-based guidance to help health care providers offer quality family planning care to their patients, including assistance in choosing the most appropriate contraceptive method for individual circumstances and using that method correctly, consistently, and continuously to maximize effectiveness. Removing unnecessary barriers can help patients access and successfully use contraceptive methods. Several medical barriers to initiating and continuing contraceptive methods might exist, such as unnecessary screening examinations and tests before starting the method (e.g., a pelvic examination before initiation of COCs), inability to receive the contraceptive on the same day as the visit (e.g., waiting for test results that might not be needed or waiting until the woman's next menstrual cycle to start use), and difficulty obtaining continued contraceptive supplies (e.g., restrictions on number of pill packs dispensed at one time). Removing unnecessary steps, such as providing prophylactic antibiotics at the time of IUD insertion or requiring unnecessary follow-up procedures, also can help patients access and successfully use contraception. # How To Be Reasonably Certain that a Woman Is Not Pregnant In most cases, a detailed history provides the most accurate assessment of pregnancy risk in a woman who is about to start using a contraceptive method. Several criteria for assessing pregnancy risk are listed in the recommendation that follows. These criteria are highly accurate (i.e., a negative predictive value of 99%-100%) in ruling out pregnancy among women who are not pregnant (16)(17)(18)(19). Therefore, CDC recommends that health care providers use these criteria to assess pregnancy status in a woman who is about to start using contraceptives (Box 2).
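Because the Box 2 criteria (reproduced later in this section) operate as a simple any-of rule over the woman's history, they can be sketched in a few lines. The following is a minimal sketch with hypothetical field names, illustrative only and not a clinical tool; clinical judgment and the limitations of pregnancy testing discussed below still apply.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class PregnancyRiskHistory:
    """Hypothetical intake fields mirroring the Box 2 criteria."""
    pregnancy_signs_or_symptoms: bool
    days_since_start_of_normal_menses: Optional[int]
    intercourse_since_last_normal_menses: bool
    consistent_reliable_contraception: bool
    days_since_abortion: Optional[int]   # spontaneous or induced
    weeks_postpartum: Optional[int]
    fully_breastfeeding_amenorrheic_under_6_months: bool

def reasonably_certain_not_pregnant(h: PregnancyRiskHistory) -> bool:
    """True if the woman has no signs or symptoms of pregnancy and
    meets any one of the Box 2 criteria."""
    if h.pregnancy_signs_or_symptoms:
        return False
    return any([
        h.days_since_start_of_normal_menses is not None
            and h.days_since_start_of_normal_menses <= 7,
        not h.intercourse_since_last_normal_menses,
        h.consistent_reliable_contraception,
        h.days_since_abortion is not None and h.days_since_abortion <= 7,
        h.weeks_postpartum is not None and h.weeks_postpartum < 4,
        h.fully_breastfeeding_amenorrheic_under_6_months,
    ])
```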
If a woman meets one of these criteria (and therefore the health care provider can be reasonably certain that she is not pregnant), a urine pregnancy test might be considered in addition to these criteria (based on clinical judgment), bearing in mind the limitations of the accuracy of pregnancy testing. If a woman does not meet any of these criteria, then the health care provider cannot be reasonably certain that she is not pregnant, even with a negative pregnancy test. Routine pregnancy testing for every woman is not necessary. On the basis of clinical judgment, health care providers might consider the addition of a urine pregnancy test; however, they should be aware of the limitations, including accuracy of the test relative to the time of last sexual intercourse, recent delivery, or spontaneous or induced abortion. If a woman has had recent (i.e., within the last 5 days) unprotected sexual intercourse, consider offering emergency contraception (either a Cu-IUD or ECPs) if pregnancy is not desired. BOX 1. Categories of medical eligibility criteria for contraceptive use • U.S. MEC 1 = A condition for which there is no restriction for the use of the contraceptive method. • U.S. MEC 2 = A condition for which the advantages of using the method generally outweigh the theoretical or proven risks. • U.S. MEC 3 = A condition for which the theoretical or proven risks usually outweigh the advantages of using the method. • U.S. MEC 4 = A condition that represents an unacceptable health risk if the contraceptive method is used. FIGURE 1. Comparing effectiveness of family planning methods, ranging from the most effective methods (fewer than 1 pregnancy per 100 women in a year), through methods with 6-12 pregnancies per 100 women in a year (e.g., injectables and pills), to the least effective methods (18 or more pregnancies per 100 women in a year; e.g., spermicides). The figure also notes how to make each method most effective (e.g., use another method for the first 3 months after vasectomy or hysteroscopic sterilization, get repeat injections on time, take a pill each day), that condoms should always be used to reduce the risk of sexually transmitted infections, that the lactational amenorrhea method (LAM) is a highly effective, temporary method of contraception, and that emergency contraceptive pills or a copper IUD after unprotected intercourse substantially reduces the risk of pregnancy. Comments and Evidence Summary. The criteria for determining whether a woman is pregnant depend on the assurance that she has not ovulated within a certain amount of time after her last menses, spontaneous or induced abortion, or delivery. Among menstruating women, the timing of ovulation can vary widely. During an average 28-day cycle, ovulation generally occurs during days 9-20 (20). In addition, the likelihood of ovulation is low from days 1-7 of the menstrual cycle (21). After a spontaneous or an induced abortion, ovulation can occur within 2-3 weeks and has been found to occur as early as 8-13 days after the end of the pregnancy. Therefore, the likelihood of ovulation is low ≤7 days after an abortion (22)(23)(24). A systematic review reported that the mean day of first ovulation among postpartum nonlactating women occurred 45-94 days after delivery (25). In one study, the earliest ovulation was reported at 25 days after delivery.
Among women who are within 6 months postpartum, are fully or nearly fully breastfeeding (exclusively breastfeeding or the vast majority [≥85%] of feeds are breastfeeds), and are amenorrheic, the risk for pregnancy is <2% (26,27). Although pregnancy tests often are performed before initiating contraception, the accuracy of qualitative urine pregnancy tests varies depending on the timing of the test relative to missed menses, recent sexual intercourse, or recent pregnancy. The sensitivity of a pregnancy test is defined as the concentration of human chorionic gonadotropin (hCG) at which 95% of tests are positive. Most qualitative pregnancy tests approved by the U.S. Food and Drug Administration (FDA) report a sensitivity of 20-25 mIU/mL in urine (28)(29)(30)(31). However, pregnancy detection rates can vary widely because of differences in test sensitivity and the timing of testing relative to missed menses (30,32). Some studies have shown that an additional 11 days past the day of expected menses are needed to detect 100% of pregnancies using qualitative tests (29). In addition, pregnancy tests cannot detect a pregnancy resulting from recent sexual intercourse. Qualitative tests also might have positive results for several weeks after termination of pregnancy because hCG can be present for several weeks after delivery or abortion (spontaneous or induced) (33)(34)(35). For contraceptive methods other than IUDs, the benefits of starting to use a contraceptive method likely exceed any risk, even in situations in which the health care provider is uncertain whether the woman is pregnant. Therefore, the health care provider can consider having patients start using contraceptive methods other than IUDs at any time, with a follow-up pregnancy test in 2-4 weeks. The risks of not starting to use contraception should be weighed against the risks of initiating contraception use in a woman who might be already pregnant. Most studies have shown no increased risk for adverse outcomes, including congenital anomalies or neonatal or infant death, among infants exposed in utero to COCs (36)(37)(38). Studies also have shown no increased risk for neonatal or infant death or developmental abnormalities among infants exposed in utero to DMPA (37,39,40). In contrast, for women who want to begin using an IUD (Cu-IUD or LNG-IUD), in situations in which the health care provider is uncertain whether the woman is pregnant, the woman should be provided with another contraceptive method to use until the health care provider is reasonably certain that she is not pregnant and can insert the IUD. Pregnancies among women with IUDs are at higher risk for complications such as spontaneous abortion, septic abortion, preterm delivery, and chorioamnionitis (41). A systematic review identified four analyses of data from three diagnostic accuracy studies that evaluated the performance of the listed criteria (Box 2) through use of a pregnancy checklist compared with a urine pregnancy test conducted concurrently (42). The performance of the checklist to diagnose or exclude pregnancy varied, with sensitivity of 55%-100% and specificity of 39%-89%. The negative predictive value was consistent across studies at 99%-100%; the pregnancy checklist correctly ruled out women who were not pregnant. 
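The consistently high negative predictive value, despite wide variation in sensitivity and specificity across studies, is what one would expect when the prevalence of unrecognized pregnancy is low. As an illustration with hypothetical values chosen inside the reported ranges (sensitivity Se = 0.90, specificity Sp = 0.60) and an assumed pregnancy prevalence of p = 0.05:

\[ \mathrm{NPV} = \frac{\mathrm{Sp}\,(1-p)}{\mathrm{Sp}\,(1-p) + (1-\mathrm{Se})\,p} = \frac{0.60 \times 0.95}{0.60 \times 0.95 + 0.10 \times 0.05} \approx 0.99 \]

Even modest specificity therefore yields a checklist that reliably rules out pregnancy in this population.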
One of the studies assessed the added usefulness of signs and symptoms of pregnancy and found that these criteria did not substantially improve the performance of the pregnancy checklist, although the number of women with signs and symptoms was small (16) (Level of evidence: Diagnostic accuracy studies, fair, direct). # Intrauterine Contraception Four IUDs are available in the United States, the copper-containing IUD and three levonorgestrel-releasing IUDs (containing a total of either 13.5 mg or 52 mg levonorgestrel). Fewer than 1 woman out of 100 becomes pregnant in the first year of using IUDs (with typical use) (14). IUDs are long-acting, are reversible, and can be used by women of all ages, including adolescents, and by parous and nulliparous women. IUDs do not protect against STDs; consistent and correct use of male latex condoms reduces the risk for STDs, including HIV. # BOX 2. How to be reasonably certain that a woman is not pregnant A health care provider can be reasonably certain that a woman is not pregnant if she has no symptoms or signs of pregnancy and meets any one of the following criteria: • is ≤7 days after the start of normal menses • has not had sexual intercourse since the start of last normal menses • has been correctly and consistently using a reliable method of contraception • is ≤7 days after spontaneous or induced abortion • is within 4 weeks postpartum • is fully or nearly fully breastfeeding (exclusively breastfeeding or the vast majority [≥85%] of feeds are breastfeeds), amenorrheic, and <6 months postpartum # Initiation of Cu-IUDs Timing • The Cu-IUD can be inserted at any time if it is reasonably certain that the woman is not pregnant (Box 2). • The Cu-IUD also can be inserted within 5 days of the first act of unprotected sexual intercourse as an emergency contraceptive. If the day of ovulation can be estimated, the Cu-IUD also can be inserted >5 days after sexual intercourse as long as insertion does not occur >5 days after ovulation. # Need for Back-Up Contraception • No additional contraceptive protection is needed after Cu-IUD insertion. # Special Considerations Amenorrhea (Not Postpartum) • Timing: The Cu-IUD can be inserted at any time if it is reasonably certain that the woman is not pregnant (Box 2). • Need for back-up contraception: No additional contraceptive protection is needed. # Postpartum (Including After Cesarean Delivery) • Timing: The Cu-IUD can be inserted at any time postpartum, including immediately postpartum (U.S. MEC 1 or 2) (Box 1), if it is reasonably certain that the woman is not pregnant (Box 2). The Cu-IUD should not be inserted in a woman with postpartum sepsis (e.g., chorioamnionitis or endometritis) (U.S. MEC 4). • Need for back-up contraception: No additional contraceptive protection is needed. # Postabortion (Spontaneous or Induced) • Timing: The Cu-IUD can be inserted within the first 7 days, including immediately postabortion (U.S. MEC 1 for first-trimester abortion and U.S. MEC 2 for second-trimester abortion). The Cu-IUD should not be inserted immediately after a septic abortion (U.S. MEC 4). • Need for back-up contraception: No additional contraceptive protection is needed. # Switching from Another Contraceptive Method • Timing: The Cu-IUD can be inserted immediately if it is reasonably certain that the woman is not pregnant (Box 2). Waiting for her next menstrual cycle is unnecessary. • Need for back-up contraception: No additional contraceptive protection is needed. Comments and Evidence Summary.
In situations in which the health care provider is not reasonably certain that the woman is not pregnant, the woman should be provided with another contraceptive method to use until the health care provider can be reasonably certain that she is not pregnant and can insert the Cu-IUD. A systematic review identified eight studies that suggested that timing of Cu-IUD insertion in relation to the menstrual cycle in non-postpartum women had little effect on long-term outcomes (rates of continuation, removal, expulsion, or pregnancy) or on short-term outcomes (pain at insertion, bleeding at insertion, or immediate expulsion) (43) (Level of evidence: II-2, fair, direct). # Initiation of LNG-IUDs # Timing of LNG-IUD Insertion • The LNG-IUD can be inserted at any time if it is reasonably certain that the woman is not pregnant (Box 2). # Need for Back-Up Contraception • If the LNG-IUD is inserted within the first 7 days since menstrual bleeding started, no additional contraceptive protection is needed. • If the LNG-IUD is inserted >7 days since menstrual bleeding started, the woman needs to abstain from sexual intercourse or use additional contraceptive protection for the next 7 days. # Special Considerations Amenorrhea (Not Postpartum) • Timing: The LNG-IUD can be inserted at any time if it is reasonably certain that the woman is not pregnant (Box 2). • Need for back-up contraception: The woman needs to abstain from sexual intercourse or use additional contraceptive protection for the next 7 days. # Postpartum (Including After Cesarean Delivery) • # Switching from Another Contraceptive Method • Timing: The LNG-IUD can be inserted immediately if it is reasonably certain that the woman is not pregnant (Box 2). Waiting for her next menstrual cycle is unnecessary. • Need for back-up contraception: If it has been >7 days since menstrual bleeding began, the woman needs to abstain from sexual intercourse or use additional contraceptive protection for the next 7 days. • Switching from a Cu-IUD: If the woman has had sexual intercourse since the start of her current menstrual cycle and it has been >5 days since menstrual bleeding started, theoretically, residual sperm might be in the genital tract, which could lead to fertilization if ovulation occurs. A health care provider may consider providing any type of ECPs at the time of LNG-IUD insertion. Comments and Evidence Summary. In situations in which the health care provider is uncertain whether the woman might be pregnant, the woman should be provided with another contraceptive method to use until the health care provider can be reasonably certain that she is not pregnant and can insert the LNG-IUD. If a woman needs to use additional contraceptive protection when switching to an LNG-IUD from another contraceptive method, consider continuing her previous method for 7 days after LNG-IUD insertion. No direct evidence was found regarding the effects of inserting LNG-IUDs on different days of the cycle on short- or long-term outcomes (43). # Examinations and Tests Needed Before Initiation of a Cu-IUD or an LNG-IUD Among healthy women, few examinations or tests are needed before initiation of an IUD (Table 1). Bimanual examination and cervical inspection are necessary before IUD insertion. A baseline weight and BMI measurement might be useful for monitoring IUD users over time. If a woman has not been screened for STDs according to STD screening guidelines, screening can be performed at the time of insertion.
Women with known medical problems or other special conditions might need additional examinations or tests before being determined to be appropriate candidates for a particular method of contraception. U.S. MEC might be useful in such circumstances (5). * Class A: essential and mandatory in all circumstances for safe and effective use of the contraceptive method. Class B: contributes substantially to safe and effective use, but implementation may be considered within the public health and/or service context; the risk of not performing an examination or test should be balanced against the benefits of making the contraceptive method available. Class C: does not contribute substantially to safe and effective use of the contraceptive method. † Weight (BMI) measurement is not needed to determine medical eligibility for any methods of contraception because all methods can be used (U.S. MEC 1) or generally can be used (U.S. MEC 2) among obese women (Box 1). However, measuring weight and calculating BMI at baseline might be helpful for monitoring any changes and counseling women who might be concerned about weight change perceived to be associated with their contraceptive method. § Most women do not require additional STD screening at the time of IUD insertion. If a woman with risk factors for STDs has not been screened for gonorrhea and chlamydia according to CDC's STD Treatment Guidelines (http://www.cdc.gov/std/treatment), screening can be performed at the time of IUD insertion, and insertion should not be delayed. Women with current purulent cervicitis or chlamydial infection or gonococcal infection should not undergo IUD insertion (U.S. MEC 4). Comments and Evidence Summary. Weight (BMI): Obese women can use IUDs (U.S. MEC 1) (5); therefore, screening for obesity is not necessary for the safe initiation of IUDs. However, measuring weight and calculating BMI (weight [kg] / height [m]²) at baseline might be helpful for monitoring any changes and counseling women who might be concerned about weight change perceived to be associated with their contraceptive method. Bimanual examination and cervical inspection: Bimanual examination and cervical inspection are necessary before IUD insertion to assess uterine size and position and to detect any cervical or uterine abnormalities that might indicate infection or otherwise prevent IUD insertion (44,45). STDs: Women should be routinely screened for chlamydial infection and gonorrhea according to national screening guidelines. The CDC Sexually Transmitted Diseases Treatment Guidelines provide information on screening eligibility, timing, and frequency of screening and on screening for persons with risk factors (15) (http://www.cdc.gov/std/treatment). If STD screening guidelines have been followed, most women do not need additional STD screening at the time of IUD insertion, and insertion should not be delayed. If a woman with risk factors for STDs has not been screened for gonorrhea and chlamydia according to CDC STD treatment guidelines, screening can be performed at the time of IUD insertion, and insertion should not be delayed. Women with current purulent cervicitis or chlamydial infection or gonococcal infection should not undergo IUD insertion (U.S. MEC 4). A systematic review identified two studies that demonstrated no differences in PID rates among women who screened positive for gonorrhea or chlamydia and underwent concurrent IUD insertion compared with women who screened positive and initiated other contraceptive methods (46).
Indirect evidence demonstrates that women who undergo same-day STD screening and IUD insertion have similar PID rates compared with women who have delayed IUD insertion. Women who undergo same-day STD screening and IUD insertion have low incidence rates of PID. Algorithms for predicting PID among women with risk factors for STDs have poor predictive value. Risk for PID among women with risk factors for STDs is low (15)(47)(48)(49)(50)(51)(52)(53)(54)(55)(56)(57). Although women with STDs at the time of IUD insertion have a higher risk for PID, the overall rate of PID among all IUD users is low (51,54). Hemoglobin: Women with iron-deficiency anemia can use the LNG-IUD (U.S. MEC 1) (5); therefore, screening for anemia is not necessary for safe initiation of the LNG-IUD. Women with iron-deficiency anemia generally can use Cu-IUDs (U.S. MEC 2) (5). Measurement of hemoglobin before initiation of Cu-IUDs is not necessary because of the minimal change in hemoglobin among women with and without anemia using Cu-IUDs. A systematic review identified four studies that provided direct evidence for changes in hemoglobin among women with anemia who received Cu-IUDs (58). Evidence from one randomized trial (59) and one prospective cohort study (60) showed no significant changes in hemoglobin among Cu-IUD users with anemia, whereas two prospective cohort studies (61,62) showed a statistically significant decrease in hemoglobin levels during 12 months of follow-up; however, the magnitude of the decrease was small and most likely not clinically significant. The systematic review also identified 21 studies that provided indirect evidence by examining changes in hemoglobin among healthy women receiving Cu-IUDs (63-83), which generally showed no clinically significant changes in hemoglobin levels with up to 5 years of follow-up (Level of evidence: I to II-2, fair, direct). Lipids: Screening for dyslipidemias is not necessary for the safe initiation of Cu-IUD or LNG-IUD because of the low prevalence of undiagnosed disease in women of reproductive age and the low likelihood of clinically significant changes with use of hormonal contraceptives. A systematic review did not identify any evidence regarding outcomes among women who were screened versus not screened with lipid measurement before initiation of hormonal contraceptives (57). During 2009-2012 among women aged 20-44 years in the United States, 7.6% had high cholesterol, defined as total serum cholesterol ≥240 mg/dL (84). During 1999-2008, the prevalence of undiagnosed hypercholesterolemia among women aged 20-44 years was approximately 2% (85). Studies have shown mixed results about the effects of hormonal methods on lipid levels among both healthy women and women with baseline lipid abnormalities, and the clinical significance of these changes is unclear (86)(87)(88)(89). Liver enzymes: Women with liver disease can use the Cu-IUD (U.S. MEC 1) (5); therefore, screening for liver disease is not necessary for the safe initiation of the Cu-IUD. Although women with certain liver diseases generally should not use the LNG-IUD (U.S. MEC 3) (5), screening for liver disease before initiation of the LNG-IUD is not necessary because of the low prevalence of these conditions and the high likelihood that women with liver disease already would have had the condition diagnosed. A systematic review did not identify any evidence regarding outcomes among women who were screened versus not screened with liver enzyme tests before initiation of hormonal contraceptive use (57).
In 2012, among U.S. women, the percentage with liver disease (not further specified) was 1.3% (90). In 2013, the incidence of acute hepatitis A, B, or C was ≤1 per 100,000 U.S. population (91). During 2002-2011, the incidence of liver carcinoma among U.S. women was approximately 3.7 per 100,000 population (92). Because estrogen and progestins are metabolized in the liver, the use of hormonal contraceptives among women with liver disease might, theoretically, be a concern. The use of hormonal contraceptives, specifically COCs and POPs, does not affect disease progression or severity in women with hepatitis, cirrhosis, or benign focal nodular hyperplasia (93,94), although evidence is limited, and no evidence exists for the LNG-IUD. Clinical breast examination: Women with breast disease can use the Cu-IUD (U.S. MEC 1) (5); therefore, screening for breast disease is not necessary for the safe initiation of the Cu-IUD. Although women with current breast cancer should not use the LNG-IUD (U.S. MEC 4) (5), screening asymptomatic women with a clinical breast examination before inserting an IUD is not necessary because of the low prevalence of breast cancer among women of reproductive age. A systematic review did not identify any evidence regarding outcomes among women who were screened versus not screened with a breast examination before initiation of hormonal contraceptives (95). The incidence of breast cancer among women of reproductive age in the United States is low. In 2012, the incidence of breast cancer among women aged 20-49 years was approximately 70.7 per 100,000 women (96). Cervical cytology: Although women with cervical cancer should not undergo IUD insertion (U.S. MEC 4) (5), screening asymptomatic women with cervical cytology before IUD insertion is not necessary because of the high rates of cervical screening, low incidence of cervical cancer in the United States, and high likelihood that a woman with cervical cancer already would have had the condition diagnosed. A systematic review did not identify any evidence regarding outcomes among women who were screened versus not screened with cervical cytology before initiation of IUDs (57). Cervical cancer is rare in the United States, with an incidence rate of 9.8 per 100,000 women during 2012 (96). The incidence and mortality rates from cervical cancer have declined dramatically in the United States, largely because of cervical cytology screening (97). Overall screening rates for cervical cancer in the United States are high; in 2013 among women aged 18-44 years, approximately 77% reported having cervical cytology screening within the last 3 years (98). HIV screening: Women with HIV infection can use (U.S. MEC 1) or generally can use (U.S. MEC 2) IUDs (5). Therefore, HIV screening is not necessary before IUD insertion. A systematic review did not identify any evidence regarding outcomes among women who were screened versus not screened for HIV infection before IUD insertion (57). Limited evidence suggests that IUDs are not associated with disease progression, increased infection, or other adverse health effects among women with HIV infection (99)(100)(101)(102)(103)(104)(105)(106)(107)(108)(109)(110)(111)(112)(113)(114). Other screening: Women with hypertension, diabetes, or thrombogenic mutations can use (U.S. MEC 1) or generally can use (U.S. MEC 2) IUDs (5). Therefore, screening for these conditions is not necessary for the safe initiation of IUDs. 
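Taken together, the evidence summaries above reduce Table 1 to a short lookup: only the bimanual examination and cervical inspection are essential, and no laboratory screening is required solely to initiate an IUD. A minimal sketch follows; the dictionary values paraphrase the text above, and the names are illustrative rather than part of the guidance.

```python
# Examinations and tests before Cu-IUD or LNG-IUD initiation in healthy women,
# paraphrasing the evidence summaries above. "Class A" = essential and
# mandatory; "Class C" = does not contribute substantially to safe use.
IUD_SCREENING = {
    "bimanual examination and cervical inspection": "Class A",
    "weight (BMI)": "not needed for eligibility; optional baseline for counseling",
    "STD screening": "follow CDC guidelines; may be done at insertion without delay",
    "hemoglobin": "Class C",                  # minimal hemoglobin change with Cu-IUDs
    "lipids": "Class C",                      # low prevalence of undiagnosed disease
    "liver enzymes": "Class C",               # low prevalence; usually already diagnosed
    "clinical breast examination": "Class C", # breast cancer rare at reproductive age
    "cervical cytology": "Class C",           # high screening coverage; cancer rare
    "HIV screening": "Class C",               # IUDs safe with HIV (U.S. MEC 1/2)
    "other (hypertension, diabetes, thrombogenic mutations)": "Class C",
}
```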
# Provision of Medications to Ease IUD Insertion • Misoprostol is not recommended for routine use before IUD insertion. Misoprostol might be helpful in select circumstances (e.g., in women with a recent failed insertion). • Paracervical block with lidocaine might reduce patient pain during IUD insertion. Comments and Evidence Summary. Potential barriers to IUD use include anticipated pain with insertion and provider concerns about difficult insertion. Identifying effective approaches to ease IUD insertion might increase IUD initiation. Evidence for misoprostol from two systematic reviews, including a total of 10 randomized controlled trials, suggests that misoprostol does not improve provider ease of insertion, reduce the need for adjunctive insertion measures, or improve insertion success (Level of evidence: I, good to fair, direct) and might increase patient pain and side effects (Level of evidence: I, high quality) (115,116). However, one randomized controlled trial examined women with a recent failed IUD insertion and found significantly higher insertion success with a second insertion attempt among women pretreated with misoprostol versus placebo (Level of evidence: I, good, direct) (117). Limited evidence for paracervical block with lidocaine from one systematic review suggests that it might reduce patient pain (115). In this review, two randomized controlled trials found significantly reduced pain at either tenaculum placement or IUD insertion among women receiving paracervical block with 1% lidocaine 3-5 minutes before IUD insertion (118,119). Neither trial found differences in side effects among women receiving paracervical block compared with controls (Level of evidence: I, moderate to low quality) (118,119). Limited evidence on nonsteroidal anti-inflammatory drugs (NSAIDs) and nitric oxide donors generally suggested no positive effect; evidence on lidocaine with administration other than paracervical block was limited and inconclusive (Level of evidence for provider ease of insertion: I, good to poor, direct; Level of evidence for need for adjunctive insertion measures: I, fair, direct; Level of evidence for patient pain: I, high to low quality; Level of evidence for side effects: I, high to low quality) (115,116). # Provision of Prophylactic Antibiotics at the Time of IUD Insertion • Prophylactic antibiotics are generally not recommended for Cu-IUD or LNG-IUD insertion. Comments and Evidence Summary. Theoretically, IUD insertion could induce bacterial spread and lead to complications such as PID or infective endocarditis. A meta-analysis was conducted of randomized controlled trials examining antibiotic prophylaxis versus placebo or no treatment for IUD insertion (120). Use of prophylaxis reduced the frequency of unscheduled return visits but did not significantly reduce the incidence of PID or premature IUD discontinuation. Although the risk for PID was higher within the first 20 days after insertion, the incidence of PID was low among all women who had IUDs inserted (51). In addition, the American Heart Association recommends that the use of prophylactic antibiotics solely to prevent infective endocarditis is not needed for genitourinary procedures (121). Studies have not demonstrated a conclusive link between genitourinary procedures and infective endocarditis or a preventive benefit of prophylactic antibiotics during such procedures (121).
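The adjunct recommendations in the two sections above amount to a short decision rule. A minimal sketch, with a hypothetical function name and return structure, illustrative only:

```python
def preinsertion_adjuncts(recent_failed_insertion: bool) -> dict:
    """Sketch of the pre-insertion adjunct recommendations summarized above;
    not a clinical decision tool."""
    return {
        # Not recommended routinely; might be helpful in select circumstances
        # such as a recent failed insertion.
        "misoprostol": "consider" if recent_failed_insertion else "not recommended",
        # 1% lidocaine given 3-5 minutes before insertion might reduce pain.
        "paracervical lidocaine block": "consider for pain reduction",
        # Generally not recommended for Cu-IUD or LNG-IUD insertion.
        "prophylactic antibiotics": "not recommended",
    }

print(preinsertion_adjuncts(recent_failed_insertion=True))
```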
# Routine Follow-Up After IUD Insertion
These recommendations address when routine follow-up is needed for safe and effective continued use of contraception for healthy women. The recommendations refer to general situations and might vary for different users and different situations. Specific populations who might benefit from more frequent follow-up visits include adolescents, persons with certain medical conditions or characteristics, and persons with multiple medical conditions.
• Advise the woman to return at any time to discuss side effects or other problems, if she wants to change the method being used, and when it is time to remove or replace the contraceptive method. No routine follow-up visit is required.
• At other routine visits, health care providers who see IUD users should do the following:
-Assess the woman's satisfaction with her contraceptive method and whether she has any concerns about method use.
-Assess any changes in health status, including medications, that would change the appropriateness of the IUD for safe and effective continued use on the basis of U.S. MEC (e.g., category 3 and 4 conditions and characteristics).
-Consider performing an examination to check for the presence of the IUD strings.
-Consider assessing weight changes and counseling women who are concerned about weight changes perceived to be associated with their contraceptive method.
Comments and Evidence Summary. Evidence from a systematic review about the effect of a specific follow-up visit schedule on IUD continuation is very limited and of poor quality. The evidence did not suggest that greater frequency of visits or earlier timing of the first follow-up visit after insertion improves continuation of use (122) (Level of evidence: II-2, poor, direct). Evidence from four studies from a systematic review on the incidence of PID among IUD initiators, or IUD removal as a result of PID, suggested that the incidence of PID did not differ between women using Cu-IUDs and those using DMPA, COCs, or LNG-IUDs (123) (Level of evidence: I to II-2, good, indirect). Evidence on the timing of PID after IUD insertion is mixed. Although the rate of PID generally was low, the largest study suggested that the rate of PID was significantly higher in the first 20 days after insertion (51) (Level of evidence: I to II-3, good to poor, indirect).
# Bleeding Irregularities with Cu-IUD Use
• Before Cu-IUD insertion, provide counseling about potential changes in bleeding patterns during Cu-IUD use. Unscheduled spotting or light bleeding, as well as heavy or prolonged bleeding, is common during the first 3-6 months of Cu-IUD use, is generally not harmful, and decreases with continued Cu-IUD use.
• If clinically indicated, consider an underlying gynecological problem, such as Cu-IUD displacement, an STD, pregnancy, or new pathologic uterine conditions (e.g., polyps or fibroids), especially in women who have already been using the Cu-IUD for a few months or longer and who have developed a new onset of heavy or prolonged bleeding. If an underlying gynecological problem is found, treat the condition or refer for care.
• If an underlying gynecological problem is not found and the woman requests treatment, the following treatment option can be considered during days of bleeding:
-NSAIDs for short-term treatment (5-7 days)
• If bleeding persists and the woman finds it unacceptable, counsel her on alternative contraceptive methods, and offer another method if it is desired.
Comments and Evidence Summary.
During contraceptive counseling and before insertion of the Cu-IUD, information about common side effects such as unscheduled spotting or light bleeding or heavy or prolonged menstrual bleeding, especially during the first 3-6 months of use, should be discussed (64). These bleeding irregularities are generally not harmful. Enhanced counseling about expected bleeding patterns and reassurance that bleeding irregularities are generally not harmful has been shown to reduce method discontinuation in clinical trials with other contraceptives (i.e., DMPA) (124,125).
Evidence is limited on specific drugs, doses, and durations of use for effective treatments for bleeding irregularities with Cu-IUD use. Therefore, although this report includes general recommendations for treatments to consider, evidence for specific regimens is lacking. A systematic review identified 11 studies that examined various therapeutic treatments for heavy menstrual bleeding, prolonged menstrual bleeding, or both among women using Cu-IUDs (126). Nine studies examined the use of various oral NSAIDs for the treatment of heavy or prolonged menstrual bleeding among Cu-IUD users and compared them with either a placebo or a baseline cycle. Three of these trials examined the use of indomethacin (127)(128)(129), three examined mefenamic acid (130)(131)(132), and three examined flufenamic acid (127,128,133). Other NSAIDs used in the reported trials included alclofenac (127,128), suprofen (134), and diclofenac sodium (135). All but one NSAID study (131) demonstrated statistically significant or notable reductions in mean total menstrual blood loss with NSAID use. One study among 19 Cu-IUD users with heavy bleeding suggested that treatment with oral tranexamic acid can significantly reduce mean blood loss during treatment compared with placebo (135). Data regarding the overall safety of tranexamic acid are limited; an FDA warning states that tranexamic acid is contraindicated in women with active thromboembolic disease or with a history or intrinsic risk for thrombosis or thromboembolism (136,137). Treatment with aspirin demonstrated no statistically significant change in mean blood loss among women whose pretreatment menstrual blood loss was >80 mL or 60-80 mL; treatment resulted in a significant increase among women whose pretreatment menstrual blood loss was <60 mL (138). One study examined the use of a synthetic form of vasopressin, intranasal desmopressin (300 µg/day), for the first 5 days of menses for three treatment cycles and found a significant reduction in mean blood loss compared with baseline (130) (Level of evidence: I to II-3, poor to fair, direct). Only one small study examined treatment of spotting with three separate NSAIDs and did not observe improvements in spotting in any of the groups (127) (Level of evidence: I, poor, direct).
# Bleeding Irregularities (Including Amenorrhea) with LNG-IUD Use
• Before LNG-IUD insertion, provide counseling about potential changes in bleeding patterns during LNG-IUD use. Unscheduled spotting or light bleeding is expected during the first 3-6 months of LNG-IUD use, is generally not harmful, and decreases with continued LNG-IUD use. Over time, bleeding generally decreases with LNG-IUD use, and many women experience only light menstrual bleeding or amenorrhea. Heavy or prolonged bleeding, either unscheduled or menstrual, is uncommon during LNG-IUD use.
# Irregular Bleeding (Spotting, Light Bleeding, or Heavy or Prolonged Bleeding)
• If clinically indicated, consider an underlying gynecological problem, such as LNG-IUD displacement, an STD, pregnancy, or new pathologic uterine conditions (e.g., polyps or fibroids). If an underlying gynecological problem is found, treat the condition or refer for care.
• If bleeding persists and the woman finds it unacceptable, counsel her on alternative contraceptive methods, and offer another method if it is desired.
# Amenorrhea
• Amenorrhea does not require any medical treatment. Provide reassurance.
-If a woman's regular bleeding pattern changes abruptly to amenorrhea, consider ruling out pregnancy if clinically indicated.
• If amenorrhea persists and the woman finds it unacceptable, counsel her on alternative contraceptive methods, and offer another method if it is desired.
Comments and Evidence Summary. During contraceptive counseling and before insertion of the LNG-IUD, information about common side effects such as unscheduled spotting or light bleeding, especially during the first 3-6 months of use, should be discussed. Approximately half of LNG-IUD users are likely to experience amenorrhea or oligomenorrhea by 2 years of use (139). These bleeding irregularities are generally not harmful. Enhanced counseling about expected bleeding patterns and reassurance that bleeding irregularities are generally not harmful has been shown to reduce method discontinuation in clinical trials with other hormonal contraceptives (i.e., DMPA) (124,125). No direct evidence was found regarding therapeutic treatments for bleeding irregularities during LNG-IUD use.
# Management of the IUD when a Cu-IUD or an LNG-IUD User Is Found To Have PID
• Treat the PID according to the CDC Sexually Transmitted Diseases Treatment Guidelines (15).
• Provide comprehensive management for STDs, including counseling about condom use.
• The IUD does not need to be removed immediately if the woman needs ongoing contraception.
• Reassess the woman in 48-72 hours. If no clinical improvement occurs, continue antibiotics and consider removal of the IUD.
• If the woman wants to discontinue use, remove the IUD sometime after antibiotics have been started to avoid the potential risk for bacterial spread resulting from the removal procedure.
• If the IUD is removed, consider ECPs if appropriate. Counsel the woman on alternative contraceptive methods, and offer another method if it is desired.
• A summary of IUD management in women with PID is provided (Appendix F).
Comments and Evidence Summary. Treatment outcomes do not generally differ between women with PID who retain the IUD and those who have the IUD removed; however, appropriate antibiotic treatment and close clinical follow-up are necessary. A systematic review identified four studies that included women using copper or nonhormonal IUDs who developed PID and compared outcomes between women who had the IUD removed and those who did not (140). One randomized trial showed that women with IUDs removed had longer hospitalizations than those who did not, although no differences in PID recurrences or subsequent pregnancies were observed (141). Another randomized trial showed no differences in laboratory findings among women who removed the IUD compared with those who did not (142). One prospective cohort study showed no differences in clinical or laboratory findings during hospitalization; however, the IUD removal group had longer hospitalizations (143).
One randomized trial showed that the rate of recovery for most clinical signs and symptoms was higher among women who had the IUD removed than among women who did not (144). No evidence was found regarding women using LNG-IUDs (Level of evidence: I to II-2, fair, direct).
# Management of the IUD when a Cu-IUD or an LNG-IUD User Is Found To Be Pregnant
• Evaluate for possible ectopic pregnancy.
• Advise the woman that she has an increased risk for spontaneous abortion (including septic abortion that might be life threatening) and for preterm delivery if the IUD is left in place. The removal of the IUD reduces these risks but might not decrease the risk to the baseline level of a pregnancy without an IUD.
-If she does not want to continue the pregnancy, counsel her about options.
-If she wants to continue the pregnancy, advise her to seek care promptly if she has heavy bleeding, cramping, pain, abnormal vaginal discharge, or fever.
# IUD Strings Are Visible or Can Be Retrieved Safely from the Cervical Canal
• Advise the woman that the IUD should be removed as soon as possible.
-If the IUD is to be removed, remove it by pulling on the strings gently.
-Advise the woman that she should return promptly if she has heavy bleeding, cramping, pain, abnormal vaginal discharge, or fever.
• If she chooses to keep the IUD, advise her to seek care promptly if she has heavy bleeding, cramping, pain, abnormal vaginal discharge, or fever.
# IUD Strings Are Not Visible and Cannot Be Safely Retrieved
• If ultrasonography is available, consider performing or referring for ultrasound examination to determine the location of the IUD. If the IUD cannot be located, it might have been expelled or have perforated the uterine wall.
• If ultrasonography is not possible or the IUD is determined by ultrasound to be inside the uterus, advise the woman to seek care promptly if she has heavy bleeding, cramping, pain, abnormal vaginal discharge, or fever.
Comments and Evidence Summary. Removing the IUD improves the pregnancy outcome if the IUD strings are visible or the device can be retrieved safely from the cervical canal. Risks for spontaneous abortion, preterm delivery, and infection are substantial if the IUD is left in place. Theoretically, the fetus might be affected by hormonal exposure from an LNG-IUD. However, whether this exposure increases the risk for fetal abnormalities is unknown. A systematic review identified nine studies suggesting that women who did not remove their IUDs during pregnancy were at greater risk for adverse pregnancy outcomes (including spontaneous abortion, septic abortion, preterm delivery, and chorioamnionitis) compared with women who had their IUDs removed or who did not have an IUD (41). Cu-IUD removal decreased risks but not to the baseline risk for pregnancies without an IUD. One case series examined LNG-IUDs. When they were not removed, 8 out of 10 pregnancies ended in spontaneous abortion (Level of evidence: II-2, fair, direct).
# Implants
The etonogestrel implant, a single rod with 68 mg of etonogestrel, is available in the United States. Fewer than 1 woman out of 100 becomes pregnant in the first year of use of the etonogestrel implant with typical use (14). The implant is long acting, is reversible, and can be used by women of all ages, including adolescents. The implant does not protect against STDs; consistent and correct use of male latex condoms reduces the risk for STDs, including HIV.
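For context, the first-year typical-use pregnancy rates quoted in this report's method overviews (the implant here; DMPA and combined hormonal contraceptives later in this section, all from reference 14) can be compared directly on a per-1,000-users basis. The following is a minimal, purely illustrative Python sketch of that arithmetic; it is not part of the recommendations, and the implant figure is an upper bound ("fewer than 1 out of 100").

```python
# Illustrative arithmetic only, using first-year typical-use pregnancy
# rates quoted in this report (reference 14), per 100 women.
typical_use_pregnancies_per_100 = {
    "etonogestrel implant": 1,            # upper bound: fewer than 1 per 100
    "DMPA injectable": 6,                 # quoted later in this section
    "combined hormonal contraceptives": 9,  # quoted later in this section
}

for method, per_100 in typical_use_pregnancies_per_100.items():
    # Scale to expected pregnancies among 1,000 women in the first year.
    print(f"{method}: roughly {per_100 * 10} per 1,000 users in year 1")
```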
# Initiation of Implants
Timing
• The implant can be inserted at any time if it is reasonably certain that the woman is not pregnant (Box 2).
# Need for Back-Up Contraception
• If the implant is inserted within the first 5 days since menstrual bleeding started, no additional contraceptive protection is needed.
• If the implant is inserted >5 days since menstrual bleeding started, the woman needs to abstain from sexual intercourse or use additional contraceptive protection for the next 7 days.
# Special Considerations
Amenorrhea (Not Postpartum)
• Timing: The implant can be inserted at any time if it is reasonably certain that the woman is not pregnant (Box 2).
• Need for back-up contraception: The woman needs to abstain from sexual intercourse or use additional contraceptive protection for the next 7 days.
# Postpartum (Breastfeeding)
•
# Postabortion (Spontaneous or Induced)
• Timing: The implant can be inserted within the first 7 days, including immediately after the abortion (U.S. MEC 1).
• Need for back-up contraception: The woman needs to abstain from sexual intercourse or use additional contraceptive protection for the next 7 days unless the implant is placed at the time of a surgical abortion.
# Switching from Another Contraceptive Method
• Timing: The implant can be inserted immediately if it is reasonably certain that the woman is not pregnant (Box 2). Waiting for her next menstrual cycle is unnecessary.
• Need for back-up contraception: If it has been >5 days since menstrual bleeding started, the woman needs to abstain from sexual intercourse or use additional contraceptive protection for the next 7 days after insertion.
• Switching from an IUD: If the woman has had sexual intercourse since the start of her current menstrual cycle and it has been >5 days since menstrual bleeding started, theoretically, residual sperm might be in the genital tract, which could lead to fertilization if ovulation occurs. A health care provider may consider any of the following options:
-Advise the woman to retain the IUD for at least 7 days after the implant is inserted and return for IUD removal.
-Advise the woman to abstain from sexual intercourse or use barrier contraception for 7 days before removing the IUD and switching to the new method.
-If the woman cannot return for IUD removal and has not abstained from sexual intercourse or used barrier contraception for 7 days, advise the woman to use ECPs (with the exception of UPA) at the time of IUD removal.
Comments and Evidence Summary. In situations in which the health care provider is uncertain whether the woman might be pregnant, the benefits of starting the implant likely exceed any risk; therefore, starting the implant should be considered at any time, with a follow-up pregnancy test in 2-4 weeks. If a woman needs to use additional contraceptive protection when switching to an implant from another contraceptive method, consider continuing her previous method for 7 days after implant insertion. No direct evidence was found regarding the effects of starting the etonogestrel implant at different times of the cycle.
# Examinations and Tests Needed Before Implant Insertion
Among healthy women, no examinations or tests are needed before initiation of an implant, although a baseline weight and BMI measurement might be useful for monitoring implant users over time (Table 2).
Women with known medical problems or other special conditions might need additional examinations or tests before being determined to be appropriate candidates for a particular method of contraception. U.S. MEC might be useful in such circumstances (5).
Comments and Evidence Summary. Weight (BMI): Obese women can use implants (U.S. MEC 1) (5); therefore, screening for obesity is not necessary for the safe initiation of implants. However, measuring weight and calculating BMI at baseline might be helpful for monitoring any changes and counseling women who might be concerned about weight change perceived to be associated with their contraceptive method.
Bimanual examination and cervical inspection: A pelvic examination is not necessary before initiation of implants because it would not facilitate detection of conditions for which implant use would be unsafe. Women with current breast cancer should not use implants (U.S. MEC 4); women with certain liver diseases generally should not use implants (U.S. MEC 3) (5). However, none of these conditions are likely to be detected by pelvic examination (145). A systematic review identified two case-control studies that compared delayed and immediate pelvic examination before initiation of hormonal contraceptives, specifically oral contraceptives or DMPA (95). No differences in risk factors for cervical neoplasia, incidence of STDs, incidence of abnormal Papanicolaou smears, or incidence of abnormal wet mounts were observed. No evidence was found regarding implants (Level of evidence: II-2, fair, direct).
Lipids: Screening for dyslipidemias is not necessary for the safe initiation of implants because of the low prevalence of undiagnosed disease in women of reproductive age and the low likelihood of clinically significant changes with use of hormonal contraceptives. A systematic review did not identify any evidence regarding outcomes among women who were screened versus not screened with lipid measurement before initiation of hormonal contraceptives (57). During 2009-2012 among women aged 20-44 years in the United States, 7.6% had high cholesterol, defined as total serum cholesterol ≥240 mg/dL (84). During 1999-2008, the prevalence of undiagnosed hypercholesterolemia among women aged 20-44 years was approximately 2% (85). Studies have shown mixed results regarding the effects of hormonal methods on lipid levels among both healthy women and women with baseline lipid abnormalities, and the clinical significance of these changes is unclear (86)(87)(88)(89).
Liver enzymes: Although women with certain liver diseases generally should not use implants (U.S. MEC 3) (5), screening for liver disease before initiation of implants is not necessary because of the low prevalence of these conditions and the high likelihood that women with liver disease already would have had the condition diagnosed. A systematic review did not identify any evidence regarding outcomes among women who were screened versus not screened with liver enzyme tests before initiation of hormonal contraceptives (57). In 2012, the percentage of U.S. women with liver disease (not further specified) was 1.3% (90). In 2013, the incidence of acute hepatitis A, B, or C was ≤1 per 100,000 U.S. population (91). During 2002-2011, the incidence of liver carcinoma among U.S. women was approximately 3.7 per 100,000 population (92). Because estrogen and progestins are metabolized in the liver, the use of hormonal contraceptives among women with liver disease might, theoretically, be a concern.
The use of hormonal contraceptives, specifically COCs and POPs, does not affect disease progression or severity in women with hepatitis, cirrhosis, or benign focal nodular hyperplasia (93,94), although evidence is limited and no evidence exists for implants.
Clinical breast examination: Although women with current breast cancer should not use implants (U.S. MEC 4) (5), screening asymptomatic women with a clinical breast examination before initiation of implants is not necessary because of the low prevalence of breast cancer among women of reproductive age (15-49 years). A systematic review did not identify any evidence regarding outcomes among women who were screened versus not screened with a breast examination before initiation of hormonal contraceptives (95). The incidence of breast cancer among women of reproductive age in the United States is low. In 2012, the incidence of breast cancer among women aged 20-49 years was approximately 70.7 per 100,000 women (96).
* Class A: essential and mandatory in all circumstances for safe and effective use of the contraceptive method. Class B: contributes substantially to safe and effective use, but implementation may be considered within the public health and/or service context; the risk of not performing an examination or test should be balanced against the benefits of making the contraceptive method available. Class C: does not contribute substantially to safe and effective use of the contraceptive method.
† Weight (BMI) measurement is not needed to determine medical eligibility for any methods of contraception because all methods can be used (U.S. MEC 1) or generally can be used (U.S. MEC 2) among obese women (Box 1). However, measuring weight and calculating BMI at baseline might be helpful for monitoring any changes and counseling women who might be concerned about weight change perceived to be associated with their contraceptive method.
Other screening: Women with hypertension, diabetes, anemia, thrombogenic mutations, cervical intraepithelial neoplasia, cervical cancer, STDs, or HIV infection can use (U.S. MEC 1) or generally can use (U.S. MEC 2) implants (5); therefore, screening for these conditions is not necessary for the safe initiation of implants.
# Routine Follow-Up After Implant Insertion
These recommendations address when routine follow-up is needed for safe and effective continued use of contraception for healthy women. The recommendations refer to general situations and might vary for different users and different situations. Specific populations who might benefit from more frequent follow-up visits include adolescents, those with certain medical conditions or characteristics, and those with multiple medical conditions.
• Advise the woman to return at any time to discuss side effects or other problems, if she wants to change the method being used, and when it is time to remove or replace the contraceptive method. No routine follow-up visit is required.
• At other routine visits, health care providers seeing implant users should do the following:
-Assess the woman's satisfaction with her contraceptive method and whether she has any concerns about method use.
-Assess any changes in health status, including medications, that would change the appropriateness of the implant for safe and effective continued use based on U.S. MEC (e.g., category 3 and 4 conditions and characteristics).
-Consider assessing weight changes and counseling women who are concerned about weight change perceived to be associated with their contraceptive method.
Comments and Evidence Summary. A systematic review did not identify any evidence regarding whether a routine follow-up visit after initiating an implant improves correct or continued use (122).
# Bleeding Irregularities (Including Amenorrhea) During Implant Use
• Before implant insertion, provide counseling about potential changes in bleeding patterns during implant use. Unscheduled spotting or light bleeding is common with implant use, and some women experience amenorrhea. These bleeding changes are generally not harmful and might or might not decrease with continued implant use. Heavy or prolonged bleeding, unscheduled or menstrual, is uncommon during implant use.
# Irregular Bleeding (Spotting, Light Bleeding, or Heavy or Prolonged Bleeding)
• If clinically indicated, consider an underlying gynecological problem, such as interactions with other medications, an STD, pregnancy, or new pathologic uterine conditions (e.g., polyps or fibroids). If an underlying gynecological problem is found, treat the condition or refer for care.
• If an underlying gynecologic problem is not found and the woman wants treatment, the following treatment options during days of bleeding can be considered:
-NSAIDs for short-term treatment (5-7 days)
-Hormonal treatment (if medically eligible) with low-dose COCs or estrogen for short-term treatment (10-20 days)
• If irregular bleeding persists and the woman finds it unacceptable, counsel her on alternative methods, and offer another method if it is desired.
# Amenorrhea
• Amenorrhea does not require any medical treatment. Provide reassurance.
-If a woman's regular bleeding pattern changes abruptly to amenorrhea, consider ruling out pregnancy if clinically indicated.
• If amenorrhea persists and the woman finds it unacceptable, counsel her on alternative contraceptive methods, and offer another method if it is desired.
Comments and Evidence Summary. During contraceptive counseling and before insertion of the implant, information about common side effects, such as unscheduled spotting or light bleeding and amenorrhea, especially during the first year of use, should be discussed. A pooled analysis of data from 11 clinical trials indicates that a significant proportion of etonogestrel implant users had relatively little bleeding: 22% of women experienced amenorrhea and 34% experienced infrequent spotting, although 7% reported frequent bleeding and 18% reported prolonged bleeding (146). Unscheduled bleeding or amenorrhea is generally not harmful. Enhanced counseling about expected bleeding patterns and reassurance that bleeding irregularities are generally not harmful has been shown to reduce discontinuation in clinical trials with other hormonal contraceptives (i.e., DMPA) (124,125). A systematic review and four newly published studies examined several medications for the treatment of bleeding irregularities with primarily levonorgestrel contraceptive implants (147)(148)(149)(150)(151). Two small studies found significant cessation of bleeding within 7 days of start of treatment among women taking oral celecoxib (200 mg) daily for 5 days or oral mefenamic acid (500 mg) 3 times daily for 5 days compared with placebo (149,150).
Differences in bleeding cessation were not found among women with etonogestrel implants taking mifepristone alone but were found when women with the implants combined mifepristone with either ethinyl estradiol or doxycycline (151,152). Doxycycline alone or in combination with ethinyl estradiol did not improve bleeding cessation among etonogestrel implant users (151). Among LNG implant users, mifepristone reduced the number of bleeding or spotting days but only after 6 months of treatment (153). Evidence also suggests that estrogen (154)(155)(156), daily COCs (154), LNG pills (155), tamoxifen (157), or tranexamic acid (158) can reduce the number of bleeding or spotting days during treatment among LNG implant users. In one small study, vitamin E was found to significantly reduce the mean number of bleeding days after the first treatment cycle; however, another larger study reported no significant differences in length of bleeding and spotting episodes with vitamin E treatment (159,160). Use of aspirin did not result in a significant difference in median length of bleeding or bleeding and spotting episodes after treatment (159). One study among implant users reported a reduction in number of bleeding days after initiating ibuprofen; however, another trial did not demonstrate any significant differences in the number of spotting and bleeding episodes with ibuprofen compared with placebo (148,155).
# Injectables
Progestin-only injectable contraceptives (DMPA, 150 mg intramuscularly or 104 mg subcutaneously) are available in the United States; the only difference between these two formulations is the route of administration. Approximately 6 out of 100 women will become pregnant in the first year of use of DMPA with typical use (14). DMPA is reversible and can be used by women of all ages, including adolescents. DMPA does not protect against STDs; consistent and correct use of male latex condoms reduces the risk for STDs, including HIV.
# Initiation of Injectables
Timing
• The first DMPA injection can be given at any time if it is reasonably certain that the woman is not pregnant (Box 2).
# Need for Back-Up Contraception
• If DMPA is started within the first 7 days since menstrual bleeding started, no additional contraceptive protection is needed.
• If DMPA is started >7 days since menstrual bleeding started, the woman needs to abstain from sexual intercourse or use additional contraceptive protection for the next 7 days.
# Special Considerations
Amenorrhea (Not Postpartum)
• Timing: The first DMPA injection can be given at any time if it is reasonably certain that the woman is not pregnant (Box 2).
• Need for back-up contraception: The woman needs to abstain from sexual intercourse or use additional contraceptive protection for the next 7 days.
# Postpartum (Breastfeeding)
•
# Postabortion (Spontaneous or Induced)
• Timing: The first DMPA injection can be given within the first 7 days, including immediately after the abortion (U.S. MEC 1).
• Need for back-up contraception: The woman needs to abstain from sexual intercourse or use additional contraceptive protection for the next 7 days unless the injection is given at the time of a surgical abortion.
# Switching from Another Contraceptive Method
• Timing: The first DMPA injection can be given immediately if it is reasonably certain that the woman is not pregnant (Box 2). Waiting for her next menstrual cycle is unnecessary.
• Need for back-up contraception: If it has been >7 days since menstrual bleeding started, the woman needs to abstain from sexual intercourse or use additional contraceptive protection for the next 7 days.
• Switching from an IUD: If the woman has had sexual intercourse since the start of her current menstrual cycle and it has been >5 days since menstrual bleeding started, theoretically, residual sperm might be in the genital tract, which could lead to fertilization if ovulation occurs. A health care provider may consider any of the following options:
-Advise the woman to retain the IUD for at least 7 days after the injection and return for IUD removal.
-Advise the woman to abstain from sexual intercourse or use barrier contraception for 7 days before removing the IUD and switching to the new method.
-If the woman cannot return for IUD removal and has not abstained from sexual intercourse or used barrier contraception for 7 days, advise the woman to use ECPs (with the exception of UPA) at the time of IUD removal.
Comments and Evidence Summary. In situations in which the health care provider is uncertain whether the woman might be pregnant, the benefits of starting DMPA likely exceed any risk; therefore, starting DMPA should be considered at any time, with a follow-up pregnancy test in 2-4 weeks. If a woman needs to use additional contraceptive protection when switching to DMPA from another contraceptive method, consider continuing her previous method for 7 days after DMPA injection. A systematic review identified eight articles examining DMPA initiation on different days of the menstrual cycle (161). Evidence from two studies with small sample sizes indicated that DMPA injections given up to day 7 of the menstrual cycle inhibited ovulation; when DMPA was administered after day 7, ovulation occurred in some women. Cervical mucus was of poor quality (i.e., not favorable for sperm penetration) in 90% of women within 24 hours of the injection (Level of evidence: II-2, fair) (162)(163)(164). Studies found that use of another contraceptive method until DMPA could be initiated (bridging option) did not help women initiate DMPA and was associated with more unintended pregnancies than immediate receipt of DMPA (165-169) (Level of evidence: I to II-3, fair to poor, indirect).
# Examinations and Tests Needed Before Initiation of an Injectable
Among healthy women, no examinations or tests are needed before initiation of DMPA, although a baseline weight and BMI measurement might be useful to monitor DMPA users over time (Table 3). Women with known medical problems or other special conditions might need additional examinations or tests before being determined to be appropriate candidates for a particular method of contraception. U.S. MEC might be useful in such circumstances (5).
Comments and Evidence Summary. Weight (BMI): Obese women can use (U.S. MEC 1) or generally can use (U.S. MEC 2) DMPA (5); therefore, screening for obesity is not necessary for the safe initiation of DMPA. However, measuring weight and calculating BMI at baseline might be helpful for monitoring any changes and counseling women who might be concerned about weight change perceived to be associated with their contraceptive method. (See guidance on follow-up for DMPA users for evidence on weight gain with DMPA use.)
Bimanual examination and cervical inspection: Pelvic examination is not necessary before initiation of DMPA because it does not facilitate detection of conditions for which DMPA would be unsafe.
Although women with current breast cancer should not use DMPA (U.S. MEC 4), and women with severe hypertension, heart disease, vascular disease, or certain liver diseases generally should not use DMPA (U.S. MEC 3) (5), none of these conditions are likely to be detected by pelvic examination (145). A systematic review identified two case-control studies that compared delayed versus immediate pelvic examination before initiation of hormonal contraceptives, specifically oral contraceptives or DMPA (95). No differences in risk factors for cervical neoplasia, incidence of STDs, incidence of abnormal Papanicolaou smears, or incidence of abnormal wet mounts were observed (Level of evidence: II-2, fair, direct).
Blood pressure: Women with hypertension generally can use DMPA (U.S. MEC 2), with the exception of women with severe hypertension or vascular disease, who generally should not use DMPA (U.S. MEC 3) (5). Screening for hypertension before initiation of DMPA is not necessary because of the low prevalence of undiagnosed severe hypertension and the high likelihood that women with these conditions already would have had them diagnosed. A systematic review did not identify any evidence regarding outcomes among women who were screened versus not screened with a blood pressure measurement before initiation of progestin-only contraceptives (170). The prevalence of undiagnosed hypertension among women of reproductive age is low. During 2009-2012 among women aged 20-44 years in the United States, the prevalence of hypertension was 8.7% (84). During 1999-2008, the percentage of women aged 20-44 years with undiagnosed hypertension was 1.9% (85).
Glucose: Although women with complicated diabetes generally should not use DMPA (U.S. MEC 3) (5), screening for diabetes before initiation of DMPA is not necessary because of the low prevalence of undiagnosed diabetes and the high likelihood that women with complicated diabetes would already have had the condition diagnosed. A systematic review did not identify any evidence regarding outcomes among women who were screened versus not screened with glucose measurement before initiation of hormonal contraceptives (57). The prevalence of diabetes among women of reproductive age is low. During 2009-2012 among women aged 20-44 years in the United States, the prevalence of diabetes was 3.3% (84). During 1999-2008, the percentage of women aged 20-44 years with undiagnosed diabetes was 0.5% (85). Although hormonal contraceptives can have some adverse effects on glucose metabolism in healthy and diabetic women, the overall clinical effect is minimal (171)(172)(173)(174)(175)(176)(177).
Lipids: Screening for dyslipidemias is not necessary for the safe initiation of injectables because of the low prevalence of undiagnosed disease in women of reproductive age and the low likelihood of clinically significant changes with use of hormonal contraceptives. A systematic review did not identify any evidence regarding outcomes among women who were screened versus not screened with lipid measurement before initiation of hormonal contraceptives (57). During 2009-2012 among women aged 20-44 years in the United States, 7.6% had high cholesterol, defined as total serum cholesterol ≥240 mg/dL (84). During 1999-2008, the prevalence of undiagnosed hypercholesterolemia among women aged 20-44 years was approximately 2% (85).
Studies have shown mixed results about the effects of hormonal methods on lipid levels among both healthy women and women with baseline lipid abnormalities, and the clinical significance of these changes is unclear (86)(87)(88)(89).
Liver enzymes: Although women with certain liver diseases generally should not use DMPA (U.S. MEC 3) (5), screening for liver disease before initiation of DMPA is not necessary because of the low prevalence of these conditions and the high likelihood that women with liver disease already would have had the condition diagnosed. A systematic review did not identify any evidence regarding outcomes among women who were screened versus not screened with liver enzyme tests before initiation of hormonal contraceptives (57). In 2012, among U.S. women, the percentage with liver disease (not further specified) was 1.3% (90). In 2013, the incidence of acute hepatitis A, B, or C was ≤1 per 100,000 U.S. population (91). During 2002-2011, the incidence of liver carcinoma among U.S. women was approximately 3.7 per 100,000 population (92). Because estrogen and progestins are metabolized in the liver, the use of hormonal contraceptives among women with liver disease might, theoretically, be a concern. The use of hormonal contraceptives, specifically COCs and POPs, does not affect disease progression or severity in women with hepatitis, cirrhosis, or benign focal nodular hyperplasia (93,94), although evidence is limited and no evidence exists for DMPA.
Clinical breast examination: Although women with current breast cancer should not use DMPA (U.S. MEC 4) (5), screening asymptomatic women with a clinical breast examination before initiating DMPA is not necessary because of the low prevalence of breast cancer among women of reproductive age. A systematic review did not identify any evidence regarding outcomes among women who were screened versus not screened with a clinical breast examination before initiation of hormonal contraceptives (95). The incidence of breast cancer among women of reproductive age in the United States is low. In 2012, the incidence of breast cancer among women aged 20-49 years was approximately 70.7 per 100,000 women (96).
Other screening: Women with anemia, thrombogenic mutations, cervical intraepithelial neoplasia, cervical cancer, HIV infection, or other STDs can use (U.S. MEC 1) or generally can use (U.S. MEC 2) DMPA (5); therefore, screening for these conditions is not necessary for the safe initiation of DMPA.
* Class A: essential and mandatory in all circumstances for safe and effective use of the contraceptive method. Class B: contributes substantially to safe and effective use, but implementation may be considered within the public health and/or service context; the risk of not performing an examination or test should be balanced against the benefits of making the contraceptive method available. Class C: does not contribute substantially to safe and effective use of the contraceptive method.
† Weight (BMI) measurement is not needed to determine medical eligibility for any methods of contraception because all methods can be used (U.S. MEC 1) or generally can be used (U.S. MEC 2) among obese women (Box 1). However, measuring weight and calculating BMI at baseline might be helpful for monitoring any changes and counseling women who might be concerned about weight change perceived to be associated with their contraceptive method.
# Routine Follow-Up After Injectable Initiation
These recommendations address when routine follow-up is recommended for safe and effective continued use of contraception for healthy women. The recommendations refer to general situations and might vary for different users and different situations. Specific populations who might benefit from more frequent follow-up visits include adolescents, those with certain medical conditions or characteristics, and those with multiple medical conditions.
• Advise the woman to return at any time to discuss side effects or other problems, if she wants to change the method being used, and when it is time for reinjection. No routine follow-up visit is required.
• At other routine visits, health care providers seeing injectable users should do the following:
-Assess the woman's satisfaction with her contraceptive method and whether she has any concerns about method use.
-Assess any changes in health status, including medications, that would change the appropriateness of the injectable for safe and effective continued use based on U.S. MEC (e.g., category 3 and 4 conditions and characteristics).
-Consider assessing weight changes and counseling women who are concerned about weight change perceived to be associated with their contraceptive method.
Comments and Evidence Summary. Although no evidence exists regarding whether a routine follow-up visit after initiating DMPA improves correct or continued use, monitoring weight or BMI change over time is important for DMPA users. A systematic review identified a limited body of evidence that examined whether weight gain in the few months after DMPA initiation predicted future weight gain (123). Two studies found significant differences in weight gain or BMI at follow-up periods ranging from 12 to 36 months between early weight gainers (i.e., those who gained >5% of their baseline body weight within 6 months after initiation) and those who were not early weight gainers (178,179). The differences between groups were more pronounced at 18, 24, and 36 months than at 12 months. One study found that most adolescent DMPA users who had gained >5% of their baseline weight by 3 months gained even more weight by 12 months (180) (Level of evidence: II-2, fair, to II-3, fair, direct).
# Timing of Repeat Injections
Reinjection Interval
• Provide repeat DMPA injections every 3 months (13 weeks).
# Special Considerations
Early Injection
• The repeat DMPA injection can be given early when necessary.
# Late Injection
• The repeat DMPA injection can be given up to 2 weeks late (15 weeks from the last injection) without requiring additional contraceptive protection.
• If the woman is >2 weeks late (>15 weeks from the last injection) for a repeat DMPA injection, she can have the injection if it is reasonably certain that she is not pregnant (Box 2). She needs to abstain from sexual intercourse or use additional contraceptive protection for the next 7 days. She might consider the use of emergency contraception (with the exception of UPA) if appropriate.
Comments and Evidence Summary. No time limits exist for early injections; injections can be given when necessary (e.g., when a woman cannot return at the routine interval).
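As a purely illustrative aid (not clinical software), the reinjection-interval logic above lends itself to a simple date check. The minimal Python sketch below, with hypothetical helper names, encodes the 13-week routine interval and the 2-week grace period described in these recommendations:

```python
from datetime import date, timedelta

# Rules encoded from the recommendations above: routine reinjection every
# 13 weeks; early injection is acceptable; up to 2 weeks late (through
# 15 weeks) requires no back-up; beyond 15 weeks, inject only if it is
# reasonably certain the woman is not pregnant (Box 2), with 7 days of
# abstinence or back-up contraception, and consider EC (except UPA).
ROUTINE_INTERVAL = timedelta(weeks=13)
GRACE_PERIOD = timedelta(weeks=2)

def repeat_injection_guidance(last_injection: date, today: date) -> str:
    elapsed = today - last_injection
    if elapsed <= ROUTINE_INTERVAL:
        return "On time (early injection acceptable): inject today."
    if elapsed <= ROUTINE_INTERVAL + GRACE_PERIOD:
        return "Within 2-week grace period: inject today; no back-up needed."
    return ("More than 15 weeks since last injection: inject only if "
            "reasonably certain the woman is not pregnant (Box 2); advise "
            "abstinence or back-up contraception for 7 days; consider EC "
            "(except UPA) if appropriate.")

# Example: a last injection on March 1 and a return visit on June 20
# (about 15.9 weeks later) falls into the third branch.
print(repeat_injection_guidance(date(2016, 3, 1), date(2016, 6, 20)))
```

The sketch deliberately mirrors the U.S. SPR 2-week grace period rather than the 4-week WHO grace period discussed next, since that is the recommendation adopted here.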
WHO has extended the time that a woman can have a late reinjection (i.e., grace period) for DMPA use from 2 weeks to 4 weeks on the basis of data from one study showing low pregnancy rates through 4 weeks; however, the CDC expert group did not consider the data to be generalizable to the United States because a large proportion of women in the study were breastfeeding. Therefore, U.S. SPR recommends a grace period of 2 weeks. A systematic review identified 12 studies evaluating time to pregnancy or ovulation after the last injection of DMPA (181). Although pregnancy rates were low during the 2-week interval following the reinjection date and for 4 weeks following the reinjection date, data were sparse, and one study included a large proportion of breastfeeding women (182)(183)(184). Studies also indicated a wide variation in time to ovulation after the last DMPA injection, with the majority ranging from 15 to 49 weeks from the last injection (185)(186)(187)(188)(189)(190)(191)(192)(193).
# Unscheduled Spotting or Light Bleeding
• If clinically indicated, consider an underlying gynecological problem, such as interactions with other medications, an STD, pregnancy, or new pathologic uterine conditions (e.g., polyps or fibroids). If an underlying gynecological problem is found, treat the condition or refer for care.
• If an underlying gynecologic problem is not found and the woman wants treatment, the following treatment option during days of bleeding can be considered:
-NSAIDs for short-term treatment (5-7 days)
• If unscheduled spotting or light bleeding persists and the woman finds it unacceptable, counsel her on alternative contraceptive methods, and offer another method if it is desired.
# Heavy or Prolonged Bleeding
• If clinically indicated, consider an underlying gynecological problem, such as interactions with other medications, an STD, pregnancy, or new pathologic uterine conditions (such as fibroids or polyps). If an underlying gynecologic problem is identified, treat the condition or refer for care.
• If an underlying gynecologic problem is not found and the woman wants treatment, the following treatment options during days of bleeding can be considered:
-NSAIDs for short-term treatment (5-7 days)
-Hormonal treatment (if medically eligible) with low-dose COCs or estrogen for short-term treatment (10-20 days)
• If heavy or prolonged bleeding persists and the woman finds it unacceptable, counsel her on alternative contraceptive methods, and offer another method if it is desired.
# Amenorrhea
• Amenorrhea does not require any medical treatment. Provide reassurance.
-If a woman's regular bleeding pattern changes abruptly to amenorrhea, consider ruling out pregnancy if clinically indicated.
• If amenorrhea persists and the woman finds it unacceptable, counsel her on alternative contraceptive methods, and offer another method if it is desired.
Comments and Evidence Summary. During contraceptive counseling and before initiation of DMPA, information about common side effects such as irregular bleeding should be discussed. Unscheduled bleeding or spotting is common with DMPA use (194). In addition, amenorrhea is common after ≥1 year of continuous use (194,195). These bleeding irregularities are generally not harmful. Enhanced counseling among DMPA users detailing expected bleeding patterns and reassurance that these irregularities generally are not harmful has been shown to reduce DMPA discontinuation in clinical trials (124,125).
A systematic review, as well as two additional studies, examined the treatment of bleeding irregularities during DMPA use (195)(196)(197). Two small studies found significant cessation of bleeding within 7 days of starting treatment among women taking valdecoxib for 5 days or mefenamic acid for 5 days compared with placebo (198,199). Treatment with ethinyl estradiol was found to stop bleeding better than placebo during the treatment period, although rates of discontinuation were high and safety outcomes were not examined (200). In one small study among DMPA users who had been experiencing amenorrhea for 2 months, treatment with COCs was found to alleviate amenorrhea better than placebo (201). No studies examined the effects of aspirin on bleeding irregularities among DMPA users.
# Combined Hormonal Contraceptives
Combined hormonal contraceptives contain both estrogen and a progestin and include 1) COCs (various formulations), 2) a transdermal contraceptive patch (which releases 150 µg of norelgestromin and 20 µg ethinyl estradiol daily), and 3) a vaginal contraceptive ring (which releases 120 µg etonogestrel and 15 µg ethinyl estradiol daily). Approximately 9 out of 100 women become pregnant in the first year of use with combined hormonal contraceptives with typical use (14). These methods are reversible and can be used by women of all ages. Combined hormonal contraceptives are generally used for 21-24 consecutive days, followed by 4-7 hormone-free days (either no use or placebo pills). These methods are sometimes used for an extended period with infrequent or no hormone-free days. Combined hormonal contraceptives do not protect against STDs; consistent and correct use of male latex condoms reduces the risk for STDs, including HIV.
# Initiation of Combined Hormonal Contraceptives
Timing
• Combined hormonal contraceptives can be initiated at any time if it is reasonably certain that the woman is not pregnant (Box 2).
# Need for Back-Up Contraception
• If combined hormonal contraceptives are started within the first 5 days since menstrual bleeding started, no additional contraceptive protection is needed.
• If combined hormonal contraceptives are started >5 days since menstrual bleeding started, the woman needs to abstain from sexual intercourse or use additional contraceptive protection for the next 7 days.
# Special Considerations
Amenorrhea (Not Postpartum)
• Timing: Combined hormonal contraceptives can be started at any time if it is reasonably certain that the woman is not pregnant (Box 2).
• Need for back-up contraception: The woman needs to abstain from sexual intercourse or use additional contraceptive protection for the next 7 days.
# Postpartum (Breastfeeding)
• Timing: Combined hormonal contraceptives can be started when the woman is medically eligible to use the method (5) and if it is reasonably certain that she is not pregnant.
# Postabortion (Spontaneous or Induced)
• Timing: Combined hormonal contraceptives can be started within the first 7 days following first-trimester or second-trimester abortion, including immediately postabortion (U.S. MEC 1).
• Need for back-up contraception: The woman needs to abstain from sexual intercourse or use additional contraceptive protection for the next 7 days unless combined hormonal contraceptives are started at the time of a surgical abortion.
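Before turning to switching considerations, the method-specific back-up windows stated in this report (first 5 days for the implant and combined hormonal contraceptives; first 7 days for DMPA) can be summarized in one place. The following minimal Python sketch is purely illustrative, with hypothetical names, and is not clinical software:

```python
# Back-up windows from this report: if the method is started within the
# stated number of days since menstrual bleeding started, no additional
# contraceptive protection is needed; otherwise, advise abstinence or
# back-up contraception for the next 7 days.
NO_BACKUP_WINDOW_DAYS = {
    "etonogestrel implant": 5,
    "DMPA injectable": 7,
    "combined hormonal contraceptives": 5,
}

def needs_backup(method: str, days_since_bleeding_started: int) -> bool:
    """True if 7 days of abstinence or back-up contraception is advised."""
    return days_since_bleeding_started > NO_BACKUP_WINDOW_DAYS[method]

# Example: starting the ring on cycle day 9 -> back-up advised for 7 days.
print(needs_backup("combined hormonal contraceptives", 9))  # -> True
```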
# Switching from Another Contraceptive Method
• Timing: Combined hormonal contraceptives can be started immediately if it is reasonably certain that the woman is not pregnant (Box 2). Waiting for her next menstrual cycle is unnecessary.
• Need for back-up contraception: If it has been >5 days since menstrual bleeding started, she needs to abstain from sexual intercourse or use additional contraceptive protection for the next 7 days.
• Switching from an IUD: If the woman has had sexual intercourse since the start of her current menstrual cycle and it has been >5 days since menstrual bleeding started, theoretically, residual sperm might be in the genital tract, which could lead to fertilization if ovulation occurs. A health care provider may consider any of the following options:
-Advise the woman to retain the IUD for at least 7 days after combined hormonal contraceptives are initiated and return for IUD removal.
-Advise the woman to abstain from sexual intercourse or use barrier contraception for 7 days before removing the IUD and switching to the new method.
-If the woman cannot return for IUD removal and has not abstained from sexual intercourse or used barrier contraception for 7 days, advise the woman to use ECPs at the time of IUD removal. Combined hormonal contraceptives can be started immediately after use of ECPs (with the exception of UPA). Combined hormonal contraceptives can be started no sooner than 5 days after use of UPA.
Comments and Evidence Summary. In situations in which the health care provider is uncertain whether the woman might be pregnant, the benefits of starting combined hormonal contraceptives likely exceed any risk; therefore, starting combined hormonal contraceptives should be considered at any time, with a follow-up pregnancy test in 2-4 weeks. If a woman needs to use additional contraceptive protection when switching to combined hormonal contraceptives from another contraceptive method, consider continuing her previous method for 7 days after starting combined hormonal contraceptives. A systematic review of 18 studies examined the effects of starting combined hormonal contraceptives on different days of the menstrual cycle (202). Overall, the evidence suggested that pregnancy rates did not differ by the timing of combined hormonal contraceptive initiation (169,203-205) (Level of evidence: I to II-3, fair, indirect). The more follicular activity that occurred before starting COCs, the more likely ovulation was to occur; however, no ovulations occurred when COCs were started at a follicle diameter of 10 mm (mean cycle day 7.6) or when the ring was started at 13 mm (median cycle day 11) (206-215) (Level of evidence: I to II-3, fair, indirect). Bleeding patterns and other side effects did not vary with the timing of combined hormonal contraceptive initiation (204,205,216-220) (Level of evidence: I to II-2, good to poor, direct). Although continuation rates of combined hormonal contraceptives were initially improved by the "quick start" approach (i.e., starting on the day of the visit), the advantage disappeared over time (203,204,216-221) (Level of evidence: I to II-2, good to poor, direct).
# Examinations and Tests Needed Before Initiation of Combined Hormonal Contraceptives
Among healthy women, few examinations or tests are needed before initiation of combined hormonal contraceptives (Table 4). Blood pressure should be measured before initiation of combined hormonal contraceptives.
Baseline weight and BMI measurements might be useful for monitoring combined hormonal contraceptive users over time. Women with known medical problems or other special conditions might need additional examinations or tests before being determined to be appropriate candidates for a particular method of contraception. U.S. MEC might be useful in such circumstances (5).
Comments and Evidence Summary. Blood pressure: Women who have more severe hypertension (systolic pressure of ≥160 mm Hg or diastolic pressure of ≥100 mm Hg) or vascular disease should not use combined hormonal contraceptives (U.S. MEC 4), and women who have less severe hypertension (systolic pressure of 140-159 mm Hg or diastolic pressure of 90-99 mm Hg) or adequately controlled hypertension generally should not use combined hormonal contraceptives (U.S. MEC 3) (5). Therefore, blood pressure should be evaluated before initiating combined hormonal contraceptives. In instances in which blood pressure cannot be measured by a provider, blood pressure measured in other settings can be reported by the woman to her provider. Evidence suggests that cardiovascular outcomes are worse among women who did not have their blood pressure measured before initiating COCs. A systematic review identified six articles from three studies that reported cardiovascular outcomes among women who had blood pressure measurements and women who did not have blood pressure measurements before initiating COCs (170). Three case-control studies showed that women who did not have blood pressure measurements before initiating COCs had a higher risk for acute myocardial infarction than women who did have blood pressure measurements (222)(223)(224). Two case-control studies showed that women who did not have blood pressure measurements before initiating COCs had a higher risk for ischemic stroke than women who did have blood pressure measurements (225,226). One case-control study showed no difference in the risk for hemorrhagic stroke among women who initiated COCs regardless of whether their blood pressure was measured (227). Studies that examined hormonal contraceptive methods other than COCs were not identified (Level of evidence: II-2, fair, direct).
Weight (BMI): Obese women generally can use combined hormonal contraceptives (U.S. MEC 2) (5); therefore, screening for obesity is not necessary for the safe initiation of combined hormonal contraceptives. However, measuring weight and calculating BMI at baseline might be helpful for monitoring any changes and counseling women who might be concerned about weight change perceived to be associated with their contraceptive method.
Bimanual examination and cervical inspection: Pelvic examination is not necessary before initiation of combined hormonal contraceptives because it does not facilitate detection of conditions for which hormonal contraceptives would be unsafe. Women with certain conditions such as current breast cancer, severe hypertension or vascular disease, heart disease, migraine headaches with aura, and certain liver diseases, as well as women aged ≥35 years who smoke ≥15 cigarettes per day, should not use (U.S. MEC 4) or generally should not use (U.S. MEC 3) combined hormonal contraceptives (5); however, none of these conditions are likely to be detected by pelvic examination (145). A systematic review identified two case-control studies that compared delayed and immediate pelvic examination before initiation of hormonal contraceptives, specifically oral contraceptives or DMPA (95).
No differences in risk factors for cervical neoplasia, incidence of STDs, incidence of abnormal Papanicolaou smears, or incidence of abnormal wet mounts were found (Level of evidence: II-2, fair, direct).
Glucose: Although women with complicated diabetes should not use (U.S. MEC 4) or generally should not use (U.S. MEC 3) combined hormonal contraceptives, depending on the severity of the condition (5), screening for diabetes before initiation of hormonal contraceptives is not necessary because of the low prevalence of undiagnosed diabetes and the high likelihood that women with complicated diabetes already would have had the condition diagnosed. A systematic review did not identify any evidence regarding outcomes among women who were screened versus not screened with glucose measurement before initiation of hormonal contraceptives (57). The prevalence of diabetes among women of reproductive age is low. During 2009-2012 among women aged 20-44 years in the United States, the prevalence of diabetes was 3.3% (84). During 1999-2008, the percentage of women aged 20-44 years with undiagnosed diabetes was 0.5% (85). Although hormonal contraceptives can have some adverse effects on glucose metabolism in healthy and diabetic women, the overall clinical effect is minimal (171)(172)(173)(174)(175)(176)(177).
Lipids: Screening for dyslipidemias is not necessary for the safe initiation of combined hormonal contraceptives because of the low prevalence of undiagnosed disease in women of reproductive age and the low likelihood of clinically significant changes with use of hormonal contraceptives. A systematic review did not identify any evidence regarding outcomes among women who were screened versus not screened with lipid measurement before initiation of hormonal contraceptives (57). During 2009-2012 among women aged 20-44 years in the United States, 7.6% had high cholesterol, defined as total serum cholesterol ≥240 mg/dL (84). During 1999-2008, the prevalence of undiagnosed hypercholesterolemia among women aged 20-44 years was approximately 2% (85). A systematic review identified few studies, all of poor quality, that suggest that women with known dyslipidemias using combined hormonal contraceptives might be at increased risk for myocardial infarction, cerebrovascular accident, or venous thromboembolism compared with women without dyslipidemias; no studies were identified that examined risk for pancreatitis among women with known dyslipidemias using combined hormonal contraceptives (89). Studies have shown mixed results regarding the effects of hormonal contraceptives on lipid levels among both healthy women and women with baseline lipid abnormalities, and the clinical significance of these changes is unclear (86)(87)(88)(89).
Footnotes to Table 4:
* Class A: essential and mandatory in all circumstances for safe and effective use of the contraceptive method. Class B: contributes substantially to safe and effective use, but implementation may be considered within the public health and/or service context; the risk of not performing an examination or test should be balanced against the benefits of making the contraceptive method available. Class C: does not contribute substantially to safe and effective use of the contraceptive method.
† In instances in which blood pressure cannot be measured by a provider, blood pressure measured in other settings can be reported by the woman to her provider.
§ Weight (BMI) measurement is not needed to determine medical eligibility for any methods of contraception because all methods can be used (U.S. MEC 1) or generally can be used (U.S. MEC 2) among obese women (Box 1). However, measuring weight and calculating BMI at baseline might be helpful for monitoring any changes and counseling women who might be concerned about weight change perceived to be associated with their contraceptive method.
Liver enzymes: Although women with certain liver diseases should not use (U.S. MEC 4) or generally should not use (U.S. MEC 3) combined hormonal contraceptives (5), screening for liver disease before initiation of combined hormonal contraceptives is not necessary because of the low prevalence of these conditions and the high likelihood that women with liver disease already would have had the condition diagnosed. A systematic review did not identify any evidence regarding outcomes among women who were screened versus not screened with liver enzyme tests before initiation of hormonal contraceptives (57). In 2012, among U.S. women, the percentage with liver disease (not further specified) was 1.3% (90). In 2013, the incidence of acute hepatitis A, B, or C was ≤1 per 100,000 U.S. population (91). During 2002-2011, the incidence of liver carcinoma among U.S. women was approximately 3.7 per 100,000 population (92). Because estrogen and progestins are metabolized in the liver, the use of hormonal contraceptives among women with liver disease might, theoretically, be a concern. The use of hormonal contraceptives, specifically COCs and POPs, does not affect disease progression or severity in women with hepatitis, cirrhosis, or benign focal nodular hyperplasia (93,94), although evidence is limited; no evidence exists for other types of combined hormonal contraceptives.
Thrombogenic mutations: Women with thrombogenic mutations should not use combined hormonal contraceptives (U.S. MEC 4) (5) because of the increased risk for venous thromboembolism (228). However, studies have shown that universal screening for thrombogenic mutations before initiating COCs is not cost-effective because of the rarity of the conditions and the high cost of screening (229)(230)(231).
Clinical breast examination: Although women with current breast cancer should not use combined hormonal contraceptives (U.S. MEC 4) (5), screening asymptomatic women with a clinical breast examination before initiating combined hormonal contraceptives is not necessary because of the low prevalence of breast cancer among women of reproductive age. A systematic review did not identify any evidence regarding outcomes among women who were screened versus not screened with a breast examination before initiation of hormonal contraceptives (95). The incidence of breast cancer among women of reproductive age in the United States is low. In 2012, the incidence of breast cancer among women aged 20-49 years was approximately 70.7 per 100,000 women (96).
Other screening: Women with anemia, cervical intraepithelial neoplasia, cervical cancer, HIV infection, or other STDs can use (U.S. MEC 1) or generally can use (U.S. MEC 2) combined hormonal contraceptives (5); therefore, screening for these conditions is not necessary for the safe initiation of combined hormonal contraceptives.
# Number of Pill Packs that Should Be Provided at Initial and Return Visits
• At the initial and return visits, provide or prescribe up to a 1-year supply of COCs (e.g., 13 28-day pill packs), depending on the woman's preferences and anticipated use.
• A woman should be able to obtain COCs easily in the amount and at the time she needs them.
Comments and Evidence Summary. Providing more pill packs, up to 13 cycles, is associated with higher continuation rates. Restricting the number of pill packs distributed or prescribed can result in unwanted discontinuation of the method and increased risk for pregnancy. A systematic review of the evidence suggested that providing a greater number of pill packs was associated with increased continuation (232). Studies that compared provision of one versus 12 packs, one versus 12 or 13 packs, or three versus seven packs found increased continuation of pill use among women provided with more pill packs (233)(234)(235). However, one study found no difference in continuation when patients were provided one and then three packs versus four packs all at once (236). In addition to continuation, a greater number of pill packs provided was associated with fewer pregnancy tests, fewer pregnancies, and lower cost per client. However, a greater number of pill packs (i.e., 13 packs versus three packs) also was associated with increased pill wastage in one study (234) (Level of evidence: I to II-2, fair, direct).
# Routine Follow-Up After Combined Hormonal Contraceptive Initiation
These recommendations address when routine follow-up is recommended for safe and effective continued use of contraception for healthy women. The recommendations refer to general situations and might vary for different users and different situations. Specific populations who might benefit from more frequent follow-up visits include adolescents, those with certain medical conditions or characteristics, and those with multiple medical conditions.
• Advise the woman to return at any time to discuss side effects or other problems or if she wants to change the method being used. No routine follow-up visit is required.
• At other routine visits, health care providers seeing combined hormonal contraceptive users should do the following:
-Assess the woman's satisfaction with her contraceptive method and whether she has any concerns about method use.
-Assess any changes in health status, including medications, that would change the appropriateness of combined hormonal contraceptives for safe and effective continued use based on U.S. MEC (e.g., category 3 and 4 conditions and characteristics).
-Assess blood pressure.
-Consider assessing weight changes and counseling women who are concerned about weight change perceived to be associated with their contraceptive method.
Comments and Evidence Summary. No evidence exists regarding whether a routine follow-up visit after initiating combined hormonal contraceptives improves correct or continued use. Monitoring blood pressure is important for combined hormonal contraceptive users. Health care providers might consider recommending that women obtain blood pressure measurements in other settings. A systematic review identified five studies that examined the incidence of hypertension among women who began using a COC versus those who started a nonhormonal method of contraception or a placebo (123). Few women developed hypertension after initiating COCs, and studies examining increases in blood pressure after COC initiation found mixed results. No studies were identified that examined changes in blood pressure among patch or vaginal ring users (Level of evidence: I, fair, to II-2, fair, indirect).
# Late or Missed Doses and Side Effects from Combined Hormonal Contraceptive Use
For the following recommendations, a dose is considered late when <24 hours have elapsed since the dose should have been taken.
A dose is considered missed if ≥24 hours have elapsed since the dose should have been taken. For example, if a COC pill was supposed to have been taken on Monday at 9:00 a.m. and is taken at 11:00 a.m., the pill is late; however, by Tuesday morning at 11:00 a.m., Monday's 9:00 a.m. pill has been missed and Tuesday's 9:00 a.m. pill is late. For COCs, the recommendations only apply to late or missed hormonally active pills and not to placebo pills. Recommendations are provided for late or missed pills (Figure 2), the patch (Figure 3), and the ring (Figure 4).
Comments and Evidence Summary. Inconsistent or incorrect use of combined hormonal contraceptives is a major cause of combined hormonal contraceptive failure. Extending the hormone-free interval is considered to be a particularly risky time to miss combined hormonal contraceptives. Seven days of continuous combined hormonal contraceptive use is deemed necessary to reliably prevent ovulation. The recommendations reflect a balance between simplicity and precision of science. Women who frequently miss COCs or experience other usage errors with the combined hormonal patch or combined vaginal ring should consider an alternative contraceptive method that is less dependent on the user to be effective (e.g., IUD, implant, or injectable). A systematic review identified 36 studies that examined measures of contraceptive effectiveness of combined hormonal contraceptives during cycles with extended hormone-free intervals, shortened hormone-free intervals, or deliberate nonadherence on days not adjacent to the hormone-free interval (237). Most of the studies examined COCs (215, among others); two examined the combined hormonal patch (259,266), and six examined the combined vaginal ring (211,(267)(268)(269)(270)(271). No direct evidence on the effect of missed pills on the risk for pregnancy was found. Studies of women deliberately extending the hormone-free interval up to 14 days found wide variability in the amount of follicular development and occurrence of ovulation (241,244,246,247,249,250,(252)(253)(254)(255); in general, the risk for ovulation was low, and among women who did ovulate, cycles were usually abnormal. In studies of women who deliberately missed pills on various days during the cycle not adjacent to the hormone-free interval, ovulation occurred infrequently (239,(245)(246)(247)255,256,258,259). Studies comparing 7-day hormone-free intervals with shorter hormone-free intervals found lower rates of pregnancy (238,242,251,257) and significantly greater suppression of ovulation (240,250,(261)(262)(263)265) among women with shorter intervals in all but one study (260), which found no difference. Two studies that compared 30-µg ethinyl estradiol pills with 20-µg ethinyl estradiol pills showed more follicular activity when 20-µg ethinyl estradiol pills were missed (241,244). In studies examining the combined vaginal ring, three studies found that nondeliberate extension of the hormone-free interval for 24 to <48 hours from the scheduled period did not increase the risk for pregnancy (267,268,270); one study found that ring insertion after a deliberately extended hormone-free interval that allowed a 13-mm follicle to develop interrupted ovarian function and further follicular growth (211); and one study found that inhibition of ovulation was maintained after deliberately forgetting to remove the ring for up to 2 weeks after normal ring use (271).
In studies examining the combined hormonal patch, one study found that missing 1-3 consecutive days before patch replacement (either wearing one patch 3 days longer before replacement or going 3 days without a patch before replacing the next patch) on days not adjacent to the patch-free interval resulted in little follicular activity and low risk for ovulation (259), and one pharmacokinetic study found that serum levels of ethinyl estradiol and progestin norelgestromin remained within reference ranges after extending patch wear for 3 days (266). No studies were found on extending the patch-free interval. In studies that provide indirect evidence on the effects of missed combined hormonal contraception on surrogate measures of pregnancy, how differences in surrogate measures correspond to pregnancy risk is unclear (Level of evidence: I, good, indirect to II-3, poor, direct).
# FIGURE 2. Recommended actions after late or missed doses of combined oral contraceptives
If one hormonal pill is late (<24 hours since a pill should have been taken) or if one hormonal pill has been missed (24 to <48 hours since a pill should have been taken):
• Take the late or missed pill as soon as possible.
• Continue taking the remaining pills at the usual time (even if it means taking two pills on the same day).
• No additional contraceptive protection is needed.
• Emergency contraception is not usually needed but can be considered (with the exception of UPA) if hormonal pills were missed earlier in the cycle or in the last week of the previous cycle.
If two or more consecutive hormonal pills have been missed (≥48 hours since a pill should have been taken):
• Take the most recent missed pill as soon as possible. (Any other missed pills should be discarded.)
• Continue taking the remaining pills at the usual time (even if it means taking two pills on the same day).
• Use back-up contraception (e.g., condoms) or avoid sexual intercourse until hormonal pills have been taken for 7 consecutive days.
• If pills were missed in the last week of hormonal pills (e.g., days 15-21 for 28-day pill packs):
-Omit the hormone-free interval by finishing the hormonal pills in the current pack and starting a new pack the next day.
-If unable to start a new pack immediately, use back-up contraception (e.g., condoms) or avoid sexual intercourse until hormonal pills from a new pack have been taken for 7 consecutive days.
• Emergency contraception should be considered (with the exception of UPA) if hormonal pills were missed during the first week and unprotected sexual intercourse occurred in the previous 5 days.
• Emergency contraception may also be considered (with the exception of UPA) at other times as appropriate.
Abbreviation: UPA = ulipristal acetate.
# Vomiting or Severe Diarrhea While Using COCs
Certain steps should be taken by women who experience vomiting or severe diarrhea while using COCs (Figure 5).
Comments and Evidence Summary. Theoretically, the contraceptive effectiveness of COCs might be decreased because of vomiting or severe diarrhea. Because of the lack of evidence that addresses vomiting or severe diarrhea while using COCs, these recommendations are based on the recommendations for missed COCs. No evidence was found on the effects of vomiting or diarrhea on measures of contraceptive effectiveness including pregnancy, follicular development, hormone levels, or cervical mucus quality.
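The late and missed-dose definitions above, together with the actions in Figure 2, amount to a small decision rule keyed to hours elapsed since the scheduled dose. The following Python sketch is illustrative only; the function names and action strings are ours, not part of the recommendations.

```python
def classify_coc_dose(hours_since_due: float) -> str:
    """Classify a COC dose using the thresholds stated in the text."""
    if hours_since_due < 24:
        return "late"                 # <24 h: one pill late
    if hours_since_due < 48:
        return "missed one pill"      # 24 to <48 h: one pill missed
    return "missed two or more"       # >=48 h: two or more consecutive pills missed


def recommended_actions(hours_since_due: float) -> list[str]:
    """Map the classification to the actions summarized in Figure 2."""
    if classify_coc_dose(hours_since_due) in ("late", "missed one pill"):
        return [
            "Take the late or missed pill as soon as possible",
            "Continue remaining pills at the usual time",
            "No additional contraceptive protection is needed",
        ]
    return [
        "Take the most recent missed pill as soon as possible; discard the rest",
        "Continue remaining pills at the usual time",
        "Use back-up contraception or avoid intercourse until pills have been "
        "taken for 7 consecutive days",
        "If pills were missed in the last week of hormonal pills, omit the "
        "hormone-free interval",
    ]


# Worked example from the text: Monday's 9:00 a.m. pill taken Tuesday at
# 11:00 a.m. is 26 hours overdue, so one pill has been missed.
assert classify_coc_dose(2) == "late"
assert classify_coc_dose(26) == "missed one pill"
assert classify_coc_dose(50) == "missed two or more"
```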
# FIGURE 4. Recommended actions after delayed insertion or reinsertion* with combined vaginal ring
Delayed insertion of a new ring or delayed reinsertion of a current ring for <48 hours since a ring should have been inserted:
• Insert ring as soon as possible.
• Keep the ring in until the scheduled ring removal day.
• No additional contraceptive protection is needed.
• Emergency contraception is not usually needed but can be considered (with the exception of UPA) if delayed insertion or reinsertion occurred earlier in the cycle or in the last week of the previous cycle.
Delayed insertion of a new ring or delayed reinsertion for ≥48 hours since a ring should have been inserted:
• Insert ring as soon as possible.
• Keep the ring in until the scheduled ring removal day.
• Use back-up contraception (e.g., condoms) or avoid sexual intercourse until a ring has been worn for 7 consecutive days.
• Emergency contraception may also be considered (with the exception of UPA) at other times as appropriate.
Abbreviation: UPA = ulipristal acetate.
* If removal takes place but the woman is unsure of how long the ring has been removed, consider the ring to have been removed for ≥48 hours since a ring should have been inserted or reinserted.
# Unscheduled Bleeding with Extended or Continuous Use of Combined Hormonal Contraceptives
• Before initiation of combined hormonal contraceptives, provide counseling about potential changes in bleeding patterns during extended or continuous combined hormonal contraceptive use. (Extended contraceptive use is defined as a planned hormone-free interval after at least two contiguous cycles. Continuous contraceptive use is defined as uninterrupted use of hormonal contraception without a hormone-free interval) (272).
• Unscheduled spotting or bleeding is common during the first 3-6 months of extended or continuous combined hormonal contraceptive use. It is generally not harmful and decreases with continued combined hormonal contraceptive use.
• If clinically indicated, consider an underlying gynecological problem, such as inconsistent use, interactions with other medications, cigarette smoking, an STD, pregnancy, or new pathologic uterine conditions (e.g., polyps or fibroids). If an underlying gynecological problem is found, treat the condition or refer for care.
• If an underlying gynecological problem is not found and the woman wants treatment, the following treatment option can be considered:
-Advise the woman to discontinue combined hormonal contraceptive use (i.e., a hormone-free interval) for 3-4 consecutive days; a hormone-free interval is not recommended during the first 21 days of using the continuous or extended combined hormonal contraceptive method. A hormone-free interval also is not recommended more than once per month because contraceptive effectiveness might be reduced.
• If unscheduled spotting or bleeding persists and the woman finds it unacceptable, counsel her on alternative contraceptive methods, and offer another method if it is desired.
Comments and Evidence Summary. During contraceptive counseling and before initiating extended or continuous combined hormonal contraceptives, information about common side effects such as unscheduled spotting or bleeding, especially during the first 3-6 months of use, should be discussed (273). These bleeding irregularities are generally not harmful and usually improve with persistent use of the hormonal method.
To avoid unscheduled spotting or bleeding, counseling should emphasize the importance of correct use and timing; for users of contraceptive pills, emphasize consistent pill use. Enhanced counseling about expected bleeding patterns and reassurance that bleeding irregularities are generally not harmful have been shown to reduce method discontinuation in clinical trials with DMPA (124,125,274). A systematic review identified three studies with small study populations that addressed treatments for unscheduled bleeding among women using extended or continuous combined hormonal contraceptives (275). In two separate randomized clinical trials in which women were taking either contraceptive pills or using the contraceptive ring continuously for 168 days, women assigned to a hormone-free interval of 3 or 4 days reported improved bleeding. Although they noted an initial increase in flow, this was followed by an abrupt decrease 7-8 days later, with eventual cessation of flow 11-12 days later. These findings were compared with women who continued to use their method without a hormone-free interval, in which a greater proportion reported either treatment failure or fewer days of amenorrhea (276,277). In another randomized trial, among 66 women with unscheduled bleeding while using 84 days of hormonally active contraceptive pills, oral doxycycline (100 mg twice daily) initiated the first day of bleeding and taken for 5 days did not result in any improvement in bleeding compared with placebo (278) (Level of evidence: I, fair, direct).
# FIGURE 5. Recommended actions after vomiting or diarrhea while using combined oral contraceptives
If vomiting or diarrhea (for any reason, for any duration) occurs within 24 hours after taking a hormonal pill:
• Taking another hormonal pill (redose) is unnecessary.
• Continue taking pills daily at the usual time (if possible, despite discomfort).
• No additional contraceptive protection is needed.
• Emergency contraception is not usually needed but can be considered (with the exception of UPA) as appropriate.
If vomiting or diarrhea, for any reason, continues for 24 hours or more (24 to <48 hours or ≥48 hours) after taking any hormonal pill:
• Continue taking pills daily at the usual time (if possible, despite discomfort).
• Use back-up contraception (e.g., condoms) or avoid sexual intercourse until hormonal pills have been taken for 7 consecutive days after vomiting or diarrhea has resolved.
Abbreviation: UPA = ulipristal acetate.
# Progestin-Only Pills
POPs contain only a progestin and no estrogen and are available in the United States. Approximately 9 out of 100 women become pregnant in the first year of use with POPs with typical use (14). POPs are reversible and can be used by women of all ages. POPs do not protect against STDs; consistent and correct use of male latex condoms reduces the risk for STDs, including HIV.
# Initiation of POPs
Timing
• POPs can be started at any time if it is reasonably certain that the woman is not pregnant (Box 2).
# Need for Back-Up Contraception
• If POPs are started within the first 5 days since menstrual bleeding started, no additional contraceptive protection is needed.
• If POPs are started >5 days since menstrual bleeding started, the woman needs to abstain from sexual intercourse or use additional contraceptive protection for the next 2 days.
# Special Considerations
Amenorrhea (Not Postpartum)
• Timing: POPs can be started at any time if it is reasonably certain that the woman is not pregnant (Box 2).
• Need for back-up contraception: The woman needs to abstain from sexual intercourse or use additional contraceptive protection for the next 2 days.
# Postpartum (Breastfeeding)
• Timing: POPs can be started at any time, including immediately postpartum (U.S. MEC 2 if <1 month postpartum; U.S. MEC 1 if ≥1 month postpartum), if it is reasonably certain that the woman is not pregnant (Box 2).
• Need for back-up contraception: If the woman is <6 months postpartum, amenorrheic, and fully or nearly fully breastfeeding (exclusively breastfeeding or the vast majority [≥85%] of feeds are breastfeeds) (27), no additional contraceptive protection is needed. Otherwise, if a woman is ≥21 days postpartum and her menstrual cycles have not returned, she needs to abstain from sexual intercourse or use additional contraceptive protection for the next 2 days. If her menstrual cycles have returned and it has been >5 days since menstrual bleeding started, she needs to abstain from sexual intercourse or use additional contraceptive protection for the next 2 days.
# Postpartum (Not Breastfeeding)
• Timing: POPs can be started at any time, including immediately postpartum (U.S. MEC 1), if it is reasonably certain that the woman is not pregnant (Box 2).
• Need for back-up contraception: If a woman is <21 days postpartum, no additional contraceptive protection is needed. A woman who is ≥21 days postpartum and whose menstrual cycles have not returned needs to abstain from sexual intercourse or use additional contraceptive protection for the next 2 days. If her menstrual cycles have returned and it has been >5 days since menstrual bleeding started, she needs to abstain from sexual intercourse or use additional contraceptive protection for the next 2 days.
# Postabortion (Spontaneous or Induced)
• Timing: POPs can be started within the first 7 days, including immediately postabortion (U.S. MEC 1).
• Need for back-up contraception: The woman needs to abstain from sexual intercourse or use additional contraceptive protection for the next 2 days unless POPs are started at the time of a surgical abortion.
# Switching from Another Contraceptive Method
• Timing: POPs can be started immediately if it is reasonably certain that the woman is not pregnant (Box 2). Waiting for her next menstrual cycle is unnecessary.
• Need for back-up contraception: If it has been >5 days since menstrual bleeding started, the woman needs to abstain from sexual intercourse or use additional contraceptive protection for the next 2 days.
• Switching from an IUD: If the woman has had sexual intercourse since the start of her current menstrual cycle and it has been >5 days since menstrual bleeding started, theoretically, residual sperm might be in the genital tract, which could lead to fertilization if ovulation occurs. A health care provider may consider any of the following options:
-Advise the woman to retain the IUD for at least 2 days after POPs are initiated and return for IUD removal.
-Advise the woman to abstain from sexual intercourse or use barrier contraception for 7 days before removing the IUD and switching to the new method.
-If the woman cannot return for IUD removal and has not abstained from sexual intercourse or used barrier contraception for 7 days, advise the woman to use ECPs at the time of IUD removal. POPs can be started immediately after use of ECPs (with the exception of UPA). POPs can be started no sooner than 5 days after use of UPA.
Comments and Evidence Summary.
In situations in which the health care provider is uncertain whether the woman might be pregnant, the benefits of starting POPs likely exceed any risk; therefore, starting POPs should be considered at any time, with a follow-up pregnancy test in 2-4 weeks. Unlike COCs, POPs inhibit ovulation in about half of cycles, although the rates vary widely by individual (279). Peak serum steroid levels are reached about 2 hours after administration, followed by rapid distribution and elimination, such that by 24 hours after administration, serum steroid levels are near baseline (279). Therefore, taking POPs at approximately the same time each day is important. An estimated 48 hours of POP use has been deemed necessary to achieve the contraceptive effects on cervical mucus (279). If a woman needs to use additional contraceptive protection when switching to POPs from another contraceptive method, consider continuing her previous method for 2 days after starting POPs. No direct evidence was found regarding the effects of starting POPs at different times of the cycle.
# Examinations and Tests Needed Before Initiation of POPs
Among healthy women, no examinations or tests are needed before initiation of POPs, although a baseline weight and BMI measurement might be useful for monitoring POP users over time (Table 5). Women with known medical problems or other special conditions might need additional examinations or tests before being determined to be appropriate candidates for a particular method of contraception. The U.S. MEC might be useful in such circumstances (5).
Comments and Evidence Summary. Weight (BMI): Obese women can use POPs (U.S. MEC 1) (5); therefore, screening for obesity is not necessary for the safe initiation of POPs. However, measuring weight and calculating BMI at baseline might be helpful for monitoring any changes and counseling women who might be concerned about weight change perceived to be associated with their contraceptive method.
Bimanual examination and cervical inspection: Pelvic examination is not necessary before initiation of POPs because it does not facilitate detection of conditions for which POPs would be unsafe. Women with current breast cancer should not use POPs (U.S. MEC 4), and women with certain liver diseases generally should not use POPs (U.S. MEC 3) (5); however, neither of these conditions is likely to be detected by pelvic examination (145). A systematic review identified two case-control studies that compared delayed versus immediate pelvic examination before initiation of hormonal contraceptives, specifically oral contraceptives or DMPA (95). No differences in risk factors for cervical neoplasia, incidence of STDs, incidence of abnormal Papanicolaou smears, or incidence of abnormal findings from wet mounts were observed (Level of evidence: II-2, fair, direct).
Lipids: Screening for dyslipidemias is not necessary for the safe initiation of POPs because of the low prevalence of undiagnosed disease in women of reproductive age and the low likelihood of clinically significant changes with use of hormonal contraceptives. A systematic review did not identify any evidence regarding outcomes among women who were screened versus not screened with lipid measurement before initiation of hormonal contraceptives (57). During 2009-2012 among women aged 20-44 years in the United States, 7.6% had high cholesterol, defined as total serum cholesterol ≥240 mg/dL (84).
During 1999-2008, the prevalence of undiagnosed hypercholesterolemia among women aged 20-44 years was approximately 2% (85). Studies have shown mixed results about the effects of hormonal methods on lipid levels among both healthy women and women with baseline lipid abnormalities, and the clinical significance of these changes is unclear (86)(87)(88)(89).
Footnotes to Table 5:
* Class A: essential and mandatory in all circumstances for safe and effective use of the contraceptive method. Class B: contributes substantially to safe and effective use, but implementation may be considered within the public health and/or service context; the risk of not performing an examination or test should be balanced against the benefits of making the contraceptive method available. Class C: does not contribute substantially to safe and effective use of the contraceptive method.
† Weight (BMI) measurement is not needed to determine medical eligibility for any methods of contraception because all methods can be used (U.S. MEC 1) or generally can be used (U.S. MEC 2) among obese women (Box 1). However, measuring weight and calculating BMI at baseline might be helpful for monitoring any changes and counseling women who might be concerned about weight change perceived to be associated with their contraceptive method.
Liver enzymes: Although women with certain liver diseases generally should not use POPs (U.S. MEC 3) (5), screening for liver disease before initiation of POPs is not necessary because of the low prevalence of these conditions and the high likelihood that women with liver disease already would have had the condition diagnosed. A systematic review did not identify any evidence regarding outcomes among women who were screened versus not screened with liver enzyme tests before initiation of hormonal contraceptives (57). In 2012, among U.S. women, the percentage with liver disease (not further specified) was 1.3% (90). In 2013, the incidence of acute hepatitis A, B, or C was ≤1 per 100,000 U.S. population (91). During 2002-2011, the incidence of liver carcinoma among U.S. women was approximately 3.7 per 100,000 population (92). Because estrogen and progestins are metabolized in the liver, the use of hormonal contraceptives among women with liver disease might, theoretically, be a concern. The use of hormonal contraceptives, specifically COCs and POPs, does not affect disease progression or severity in women with hepatitis, cirrhosis, or benign focal nodular hyperplasia (93,94).
Clinical breast examination: Although women with current breast cancer should not use POPs (U.S. MEC 4) (5), screening asymptomatic women with a clinical breast examination before initiating POPs is not necessary because of the low prevalence of breast cancer among women of reproductive age. A systematic review did not identify any evidence regarding outcomes among women who were screened versus not screened with a clinical breast examination before initiation of hormonal contraceptives (95). The incidence of breast cancer among women of reproductive age in the United States is low. In 2012, the incidence of breast cancer among women aged 20-49 years was approximately 70.7 per 100,000 women (96).
Other screening: Women with hypertension, diabetes, anemia, thrombogenic mutations, cervical intraepithelial neoplasia, cervical cancer, STDs, or HIV infection can use (U.S. MEC 1) or generally can use (U.S. MEC 2) POPs (5); therefore, screening for these conditions is not necessary for the safe initiation of POPs.
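A practical difference between the two pill types, summarized from the initiation rules above, is the length of the back-up window: 7 days for combined hormonal contraceptives versus 2 days for POPs, with no back-up needed when either method is started within the first 5 days since menstrual bleeding began. The sketch below is a hypothetical illustration (the dictionary keys and function name are ours) and deliberately ignores the postpartum, amenorrhea, and post-UPA special cases described in the text.

```python
# Back-up (or abstinence) days required at initiation, per the text:
# combined hormonal contraceptives need 7 days; progestin-only pills need 2.
BACKUP_DAYS = {"combined hormonal": 7, "progestin-only pill": 2}


def backup_days_at_initiation(method: str, days_since_menses_started: int) -> int:
    """Return days of back-up protection needed when starting the method."""
    if days_since_menses_started <= 5:
        return 0  # started within the first 5 days since menstrual bleeding began
    return BACKUP_DAYS[method]


assert backup_days_at_initiation("combined hormonal", 3) == 0
assert backup_days_at_initiation("combined hormonal", 10) == 7
assert backup_days_at_initiation("progestin-only pill", 10) == 2
```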
# Number of Pill Packs that Should Be Provided at Initial and Return Visits
• At the initial and return visits, provide or prescribe up to a 1-year supply of POPs (e.g., 13 28-day pill packs), depending on the woman's preferences and anticipated use.
• A woman should be able to obtain POPs easily in the amount and at the time she needs them.
Comments and Evidence Summary. Providing more pill packs, up to 13 cycles, is associated with higher continuation rates. Restricting the number of pill packs distributed or prescribed can result in unwanted discontinuation of the method and increased risk for pregnancy. A systematic review of the evidence suggested that providing a greater number of pill packs was associated with increased continuation (232). Studies that compared provision of one versus 12 packs, one versus 12 or 13 packs, or three versus seven packs found increased continuation of pill use among women provided with more pill packs (233)(234)(235). However, one study found no difference in continuation when patients were provided one and then three packs versus four packs all at once (236). In addition to continuation, a greater number of pill packs provided was associated with fewer pregnancy tests, fewer pregnancies, and lower cost per client. However, a greater number of pill packs (13 packs versus three packs) also was associated with increased pill wastage in one study (234) (Level of evidence: I to II-2, fair, direct).
# Routine Follow-Up After POP Initiation
These recommendations address when routine follow-up is recommended for safe and effective continued use of contraception for healthy women. The recommendations refer to general situations and might vary for different users and different situations. Specific populations who might benefit from more frequent follow-up visits include adolescents, those with certain medical conditions or characteristics, and those with multiple medical conditions.
• Advise the woman to return at any time to discuss side effects or other problems or if she wants to change the method being used. No routine follow-up visit is required.
• At other routine visits, health care providers seeing POP users should do the following:
-Assess the woman's satisfaction with her contraceptive method and whether she has any concerns about method use.
-Assess any changes in health status, including medications, that would change the appropriateness of POPs for safe and effective continued use based on U.S. MEC (e.g., category 3 and 4 conditions and characteristics).
-Consider assessing weight changes and counseling women who are concerned about weight change perceived to be associated with their contraceptive method.
Comments and Evidence Summary. No evidence was found regarding whether a routine follow-up visit after initiating POPs improves correct and continued use.
# Missed POPs
For the following recommendations, a dose is considered missed if it has been >3 hours since it should have been taken.
• Take one pill as soon as possible.
• Continue taking pills daily, one each day, at the same time each day, even if it means taking two pills on the same day.
• Use back-up contraception (e.g., condoms) or avoid sexual intercourse until pills have been taken correctly, on time, for 2 consecutive days.
• Emergency contraception should be considered (with the exception of UPA) if the woman has had unprotected sexual intercourse.
Comments and Evidence Summary. Inconsistent or incorrect use of oral contraceptive pills is a major reason for oral contraceptive failure.
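The >3-hour threshold and the actions above are similarly mechanical. As a minimal illustrative sketch (function names and strings are ours, not part of the recommendations):

```python
def pop_dose_missed(hours_since_scheduled: float) -> bool:
    """A POP is missed once >3 hours have elapsed past its scheduled time."""
    return hours_since_scheduled > 3


def missed_pop_actions() -> list[str]:
    """Recommended actions for a missed POP, restating the bullets above."""
    return [
        "Take one pill as soon as possible",
        "Continue taking pills daily at the same time each day",
        "Use back-up contraception or avoid intercourse until pills have been "
        "taken correctly, on time, for 2 consecutive days",
        "Consider emergency contraception (except UPA) if unprotected "
        "intercourse occurred",
    ]


assert not pop_dose_missed(2.5)  # within the 3-hour window: not missed
assert pop_dose_missed(4)        # >3 hours: missed
```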
Unlike COCs, POPs inhibit ovulation in about half of cycles, although this rate varies widely by individual (279). Peak serum steroid levels are reached about 2 hours after administration, followed by rapid distribution and elimination, such that by 24 hours after administration, serum steroid levels are near baseline (279). Therefore, taking POPs at approximately the same time each day is important. An estimated 48 hours of POP use was deemed necessary to achieve the contraceptive effects on cervical mucus (279). Women who frequently miss POPs should consider an alternative contraceptive method that is less dependent on the user to be effective (e.g., IUD, implant, or injectable). No evidence was found regarding the effects of missed POPs available in the United States on measures of contraceptive effectiveness including pregnancy, follicular development, hormone levels, or cervical mucus quality. # Vomiting or Diarrhea (for any Reason or Duration) that Occurs Within 3 Hours After Taking a Pill • Take another pill as soon as possible (if possible, despite discomfort). • Continue taking pills daily, one each day, at the same time each day. • Use back-up contraception (e.g., condoms) or avoid sexual intercourse until 2 days after vomiting or diarrhea has resolved. • Emergency contraception should be considered (with the exception of UPA) if the woman has had unprotected sexual intercourse. Comments and Evidence Summary. Theoretically, the contraceptive effectiveness of POPs might be decreased because of vomiting or severe diarrhea. Because of the lack of evidence to address this question, these recommendations are based on the recommendations for missed POPs. No evidence was found regarding the effects of vomiting or diarrhea on measures of contraceptive effectiveness, including pregnancy, follicular development, hormone levels, or cervical mucus quality. # Standard Days Method SDM is a method based on fertility awareness; users must avoid unprotected sexual intercourse on days 8-19 of the menstrual cycle (280). Approximately 5 out of 100 women become pregnant in the first year of use with perfect (i.e., correct and consistent) use of SDM (280); effectiveness based on typical use is not available for this method but is expected to be lower than that for perfect use. SDM is reversible and can be used by women of all ages. SDM does not protect against STDs; consistent and correct use of male latex condoms reduces the risk for STDs, including HIV. # Use of SDM Among Women with Various Durations of the Menstrual Cycle Menstrual Cycles of 26-32 Days • The woman may use the method. • Provide a barrier method of contraception for protection on days 8-19 if she wants one. • If she has unprotected sexual intercourse during days 8-19, consider the use of emergency contraception if appropriate. # Two or More Cycles of <26 or >32 Days Within Any 1 Year of SDM Use • Advise the woman that the method might not be appropriate for her because of a higher risk for pregnancy. Help her consider another method. Comments and Evidence Summary. The probability of pregnancy is increased when the menstrual cycle is outside the range of 26-32 days, even if unprotected sexual intercourse is avoided on days 8-19. 
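The SDM rules above reduce to two checks: the fixed fertile window of days 8-19, and the requirement that cycle lengths stay within 26-32 days (with two or more out-of-range cycles in a year suggesting the method is inappropriate). A minimal illustrative sketch, with names of our choosing:

```python
FERTILE_WINDOW = range(8, 20)  # cycle days 8-19 inclusive


def needs_protection(cycle_day: int) -> bool:
    """True if unprotected intercourse should be avoided on this cycle day."""
    return cycle_day in FERTILE_WINDOW


def sdm_appropriate(cycle_lengths_past_year: list[int]) -> bool:
    """False if 2 or more cycles in the past year fell outside 26-32 days."""
    out_of_range = sum(1 for n in cycle_lengths_past_year if n < 26 or n > 32)
    return out_of_range < 2


assert needs_protection(8) and needs_protection(19)
assert not needs_protection(20)
assert sdm_appropriate([28, 27, 30, 31])
assert not sdm_appropriate([25, 33, 28])
```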
A study of 7,600 menstrual cycles, including information on cycle length and signs of ovulation, concluded that the theoretical effectiveness of SDM is greatest for women with cycles of 26-32 days, that the method is still effective for women who occasionally have a cycle outside this range, and that it is less effective for women who consistently have cycles outside this range. Information from daily hormonal measurements shows that the timing of the 6-day fertile window varies greatly, even among women with regular cycles (21,281,282). # Emergency Contraception Emergency contraception consists of methods that can be used by women after sexual intercourse to prevent pregnancy. Emergency contraception methods have varying ranges of effectiveness depending on the method and timing of administration. Four options are available in the United States: the Cu-IUD and three types of ECPs. # Types of Emergency Contraception Intrauterine Device • Cu-IUD # ECPs • UPA in a single dose (30 mg) • Levonorgestrel in a single dose (1.5 mg) or as a split dose (1 dose of 0.75 mg of levonorgestrel followed by a second dose of 0.75 mg of levonorgestrel 12 hours later) • Combined estrogen and progestin in 2 doses (Yuzpe regimen: 1 dose of 100 µg of ethinyl estradiol plus 0.50 mg of levonorgestrel followed by a second dose of 100 µg of ethinyl estradiol plus 0.50 mg of levonorgestrel 12 hours later) # Initiation of Emergency Contraception Timing Cu-IUD • The Cu-IUD can be inserted within 5 days of the first act of unprotected sexual intercourse as an emergency contraceptive. • In addition, when the day of ovulation can be estimated, the Cu-IUD can be inserted beyond 5 days after sexual intercourse, as long as insertion does not occur >5 days after ovulation. # ECPs • ECPs should be taken as soon as possible within 5 days of unprotected sexual intercourse. Comments and Evidence Summary. Cu-IUDs are highly effective as emergency contraception (283) and can be continued as regular contraception. UPA and levonorgestrel ECPs have similar effectiveness when taken within 3 days after unprotected sexual intercourse; however, UPA has been shown to be more effective than the levonorgestrel formulation 3-5 days after unprotected sexual intercourse (284). The combined estrogen and progestin regimen is less effective than UPA or levonorgestrel and also is associated with more frequent occurrence of side effects (nausea and vomiting) (285). The levonorgestrel formulation might be less effective than UPA among obese women (286). Two studies of UPA use found consistent decreases in pregnancy rates when administered within 120 hours of unprotected sexual intercourse (284,287). Five studies found that the levonorgestrel and combined regimens decreased risk for pregnancy through the fifth day after unprotected sexual intercourse; however, rates of pregnancy were slightly higher when ECPs were taken after 3 days (288)(289)(290)(291)(292). A meta-analysis of levonorgestrel ECPs found that pregnancy rates were low when administered within 4 days after unprotected sexual intercourse but increased at 4-5 days (293) (Level of evidence: I to II-2, good to poor, direct). # Advance Provision of ECPs • An advance supply of ECPs may be provided so that ECPs will be available when needed and can be taken as soon as possible after unprotected sexual intercourse. Comments and Evidence Summary. A systematic review identified 17 studies that reported on safety or effectiveness of advance ECPs in adult or adolescent women (294). 
Any use of ECPs was two to seven times greater among women who received an advance supply of ECPs. However, a summary estimate (relative risk = 0.97; 95% confidence interval = 0.77-1.22) of five randomized controlled trials did not indicate a significant reduction in unintended pregnancies at 12 months with advance provision of ECPs. In the majority of studies among adults or adolescents, patterns of regular contraceptive use, pregnancy rates, and incidence of STDs did not vary between those who received advance ECPs and those who did not. Although available evidence supports the safety of advance provision of ECPs, effectiveness of advance provision of ECPs in reducing pregnancy rates at the population level has not been demonstrated (Level of evidence: I to II-3, good to poor, direct).
# Initiation of Regular Contraception After ECPs
# UPA
• Advise the woman to start or resume hormonal contraception no sooner than 5 days after use of UPA, and provide or prescribe the regular contraceptive method as needed. For methods requiring a visit to a health care provider, such as DMPA, implants, and IUDs, starting the method at the time of UPA use may be considered; the risk that the regular contraceptive method might decrease the effectiveness of UPA must be weighed against the risk of not starting a regular hormonal contraceptive method.
• The woman needs to abstain from sexual intercourse or use barrier contraception for the next 7 days after starting or resuming regular contraception or until her next menses, whichever comes first.
• Any nonhormonal contraceptive method can be started immediately after the use of UPA.
• Advise the woman to have a pregnancy test if she does not have a withdrawal bleed within 3 weeks.
# Levonorgestrel and Combined Estrogen and Progestin ECPs
• Any regular contraceptive method can be started immediately after the use of levonorgestrel or combined estrogen and progestin ECPs.
• The woman needs to abstain from sexual intercourse or use barrier contraception for 7 days.
• Advise the woman to have a pregnancy test if she does not have a withdrawal bleed within 3 weeks.
Comments and Evidence Summary. The resumption or initiation of regular hormonal contraception after ECP use involves consideration of the risk for pregnancy if ECPs fail and the risks for unintended pregnancy if contraception initiation is delayed until the subsequent menstrual cycle. A health care provider may provide or prescribe pills, the patch, or the ring for a woman to start no sooner than 5 days after use of UPA. For methods requiring a visit to a health care provider, such as DMPA, implants, and IUDs, starting the method at the time of UPA use may be considered; the risk that the regular contraceptive method might decrease the effectiveness of UPA must be weighed against the risk of not starting a regular hormonal contraceptive method. Data on when a woman can start regular contraception after ECPs are limited to pharmacodynamic data and expert opinion (295)(296)(297). One pharmacodynamic study, in which women were randomly assigned mid-cycle to either UPA or placebo followed by a 21-day course of combined hormonal contraception, found no difference between the UPA and placebo groups in the time for women's ovaries to reach quiescence, as assessed by ultrasound and serum estradiol (296); this finding suggests that UPA did not have an effect on the combined hormonal contraception.
In another pharmacodynamic study with a crossover design, women were randomly assigned to one of three groups: 1) UPA followed by desogestrel for 20 days started 1 day later; 2) UPA plus placebo; or 3) placebo plus desogestrel for 20 days (295). Among women taking UPA followed by desogestrel, a higher incidence of ovulation in the first 5 days was found compared with UPA alone (45% versus 3%, respectively), suggesting desogestrel might decrease the effectiveness of UPA. No concern exists that administering combined estrogen and progestin or levonorgestrel formulations of ECPs concurrently with systemic hormonal contraception decreases the effectiveness of either emergency or regular contraceptive methods because these formulations do not have antiprogestin properties like UPA. If a woman is planning to initiate contraception after the next menstrual bleeding after ECP use, the cycle in which ECPs are used might be shortened, prolonged, or involve unscheduled bleeding.
# Prevention and Management of Nausea and Vomiting with ECP Use
# Nausea and Vomiting
• Levonorgestrel and UPA ECPs cause less nausea and vomiting than combined estrogen and progestin ECPs.
• Routine use of antiemetics before taking ECPs is not recommended. Pretreatment with antiemetics may be considered depending on availability and clinical judgment.
# Vomiting Within 3 Hours of Taking ECPs
• Another dose of ECP should be taken as soon as possible. Use of an antiemetic should be considered.
Comments and Evidence Summary. Many women do not experience nausea or vomiting when taking ECPs, and predicting which women will experience nausea or vomiting is difficult. Although routine use of antiemetics before taking ECPs is not recommended, antiemetics are effective in some women and can be offered when appropriate. Health care providers who are deciding whether to offer antiemetics to women taking ECPs should consider the following: 1) women taking combined estrogen and progestin ECPs are more likely to experience nausea and vomiting than those who take levonorgestrel or UPA ECPs; 2) evidence indicates that antiemetics reduce the occurrence of nausea and vomiting in women taking combined estrogen and progestin ECPs; and 3) women who take antiemetics might experience other side effects from the antiemetics. A systematic review examined incidence of nausea and vomiting with different ECP regimens and effectiveness of antinausea drugs in reducing nausea and vomiting with ECP use (298). The levonorgestrel regimen was associated with significantly less nausea than a nonstandard dose of UPA (50 mg) and the standard combined estrogen and progestin regimen (299)(300)(301). Use of the split-dose levonorgestrel showed no differences in nausea and vomiting compared with the single-dose levonorgestrel (288,290,292,302) (Level of evidence: I, good-fair, indirect). In two trials, the antinausea drugs meclizine and metoclopramide, taken before combined estrogen and progestin ECPs, reduced the severity of nausea (303,304). Significantly less vomiting occurred with meclizine but not metoclopramide (Level of evidence: I, good-fair, direct). No direct evidence was found regarding the effects of vomiting after taking ECPs.
# Female Sterilization
Laparoscopic, abdominal, and hysteroscopic methods of female sterilization are available in the United States, and some of these procedures can be performed in an outpatient procedure or office setting. Fewer than 1 out of 100 women become pregnant in the first year after female sterilization (14).
Because these methods are intended to be irreversible, all women should be appropriately counseled about the permanency of sterilization and the availability of highly effective, long-acting, reversible methods of contraception. Female sterilization does not protect against STDs; consistent and correct use of male latex condoms reduces the risk for STDs, including HIV.
# When Hysteroscopic Sterilization is Reliable for Contraception
• Before a woman can rely on hysteroscopic sterilization for contraception, a hysterosalpingogram (HSG) must be performed 3 months after the sterilization procedure to confirm bilateral tubal occlusion.
• The woman should be advised that she needs to abstain from sexual intercourse or use additional contraceptive protection until she has confirmed bilateral tubal occlusion.
# When Laparoscopic and Abdominal Approaches are Reliable for Contraception
• A woman can rely on sterilization for contraception immediately after laparoscopic and abdominal approaches. No additional contraceptive protection is needed.
Comments and Evidence Summary. HSG is necessary to confirm bilateral tubal occlusion after hysteroscopic sterilization. The inserts for the hysteroscopic sterilization system available in the United States are placed bilaterally into the fallopian tubes and require 3 months for adequate fibrosis and scarring leading to bilateral tubal occlusion. After hysteroscopic sterilization, advise the woman to correctly and consistently use an effective method of contraception while awaiting confirmation. If compliance with another method might be a problem, a woman and her health care provider may consider DMPA injection at the time of sterilization to ensure adequate contraception for 3 months. Unlike laparoscopic and abdominal sterilizations, pregnancy risk beyond 7 years of follow-up has not been studied among women who received hysteroscopic sterilization. Pregnancy risk with at least 10 years of follow-up has been studied among women who received laparoscopic and abdominal sterilizations (305,306). Although these methods are highly effective, pregnancies can occur many years after the procedure, and the risk for pregnancy is higher among younger women (306,307). A systematic review was conducted to identify studies that reported whether pregnancies occurred after hysteroscopic sterilization (308). Twenty-four such studies were identified; they found that very few pregnancies occurred among women with confirmed bilateral tubal occlusion, although few studies included long-term follow-up, and none followed women for >7 years. Among women who had successful bilateral placement, most pregnancies that occurred after hysteroscopic sterilization were in women who did not have confirmed bilateral tubal occlusion at 3 months, either because of lack of follow-up or misinterpretation of HSG results (309)(310)(311). Some pregnancies occurred within 3 months of placement, including among women who were already pregnant at the time of the procedure, women who did not use alternative contraception, or women who had failures of alternative contraception (310)(311)(312)(313)(314)(315). Although these studies generally demonstrated high rates of bilateral placement, some pregnancies occurred as a result of lack of bilateral placement identified on later imaging (310,311,(313)(314)(315)(316).
Most pregnancies occurred after deviations from FDA directions, which include placement in the early follicular phase of the menstrual cycle, imaging at 3 months to document proper placement, and use of effective alternative contraception until documented occlusion (Level of evidence: II-3, fair, direct).
# Male Sterilization
Male sterilization, or vasectomy, is one of the few contraceptive methods available to men and can be performed in an outpatient procedure or office setting. Fewer than 1 woman out of 100 becomes pregnant in the first year after her male partner undergoes sterilization (14). Because male sterilization is intended to be irreversible, all men should be appropriately counseled about the permanency of sterilization and the availability of highly effective, long-acting, reversible methods of contraception for women. Male sterilization does not protect against STDs; consistent and correct use of male latex condoms reduces the risk for STDs, including HIV.
# When Vasectomy is Reliable for Contraception
• A semen analysis should be performed 8-16 weeks after a vasectomy to ensure the procedure was successful.
• The man should be advised that he should use additional contraceptive protection or abstain from sexual intercourse until he has confirmation of vasectomy success by postvasectomy semen analysis.
# Other Postprocedure Recommendations
• The man should refrain from ejaculation for approximately 1 week after the vasectomy to allow for healing of surgical sites and, after certain methods of vasectomy, occlusion of the vas.
Comments and Evidence Summary. The Vasectomy Guideline Panel of the American Urological Association performed a systematic review of key issues concerning the practice of vasectomy (317). All English-language publications on vasectomy published during 1949-2011 were reviewed. For more information, see the American Urological Association Vasectomy Guidelines (https://www.auanet.org/common/pdf/education/clinical-guidance/Vasectomy.pdf). Motile sperm disappear within a few weeks after vasectomy (318)(319)(320)(321). The time to azoospermia varies widely in different studies; however, by 12 weeks after the vasectomy, 80% of men have azoospermia, and almost all others have rare nonmotile sperm (defined as ≤100,000 nonmotile sperm per milliliter) (317). The number of ejaculations after vasectomy is not a reliable indicator of when azoospermia or rare nonmotile sperm will be achieved (317). Once azoospermia or rare nonmotile sperm has been achieved, patients can rely on the vasectomy for contraception, although not with 100% certainty. The risk for pregnancy after a man has achieved postvasectomy azoospermia is approximately one in 2,000 (322)(323)(324)(325)(326). A median of 78% (range 33%-100%) of men return for a single postvasectomy semen analysis (317). In the largest cohorts that appear typical of North American vasectomy practice, approximately two thirds of men (55%-71%) return for at least one postvasectomy semen analysis (322,(327)(328)(329)(330)(331). Assigning men an appointment after their vasectomy might improve compliance with follow-up (332).
# When Women Can Stop Using Contraceptives
• Contraceptive protection is still needed for women aged >44 years who want to avoid pregnancy.
Comments and Evidence Summary. The age at which a woman is no longer at risk for pregnancy is not known. Although uncommon, spontaneous pregnancies occur among women aged >44 years.
Both the American College of Obstetricians and Gynecologists and the North American Menopause Society recommend that women continue contraceptive use until menopause or age 50-55 years (333,334). The median age of menopause is approximately 51 years in North America (333) but can range from 40 to 60 years (335). The median age of definitive loss of natural fertility is 41 years but can range up to age 51 years (336,337). No reliable laboratory tests are available to confirm definitive loss of fertility in a woman. The assessment of follicle-stimulating hormone levels to determine when a woman is no longer fertile might not be accurate (333). Health care providers should consider the risks for becoming pregnant in a woman of advanced reproductive age, as well as any risks of continuing contraception until menopause. Pregnancies among women of advanced reproductive age are at higher risk for maternal complications, such as hemorrhage, venous thromboembolism, and death, and for fetal complications, such as spontaneous abortion, stillbirth, and congenital anomalies (338-340). Risks associated with continuing contraception, in particular risks for acute cardiovascular events (venous thromboembolism, myocardial infarction, or stroke) or breast cancer, also are important to consider. U.S. MEC states that on the basis of age alone, women aged >45 years can use POPs, implants, the LNG-IUD, or the Cu-IUD (U.S. MEC 1) (5). Women aged >45 years generally can use combined hormonal contraceptives and DMPA (U.S. MEC 2) (5). However, women in this age group might have chronic conditions or other risk factors that might render use of hormonal contraceptive methods unsafe; U.S. MEC might be helpful in guiding the safe use of contraceptives in these women. In two studies, the incidence of venous thromboembolism was higher among oral contraceptive users aged ≥45 years than among younger oral contraceptive users (341-343); however, an interaction between hormonal contraception and increased age compared with baseline risk was either not demonstrated (341,342) or not examined (343). The relative risk for myocardial infarction was higher among all oral contraceptive users than among nonusers, although a trend of increased relative risk with increasing age was not demonstrated (344,345). No studies were found regarding the risk for stroke in COC users aged ≥45 years (Level of evidence: II-2, good to poor, direct). A pooled analysis by the Collaborative Group on Hormonal Factors in Breast Cancer in 1996 (346) found small increased relative risks for breast cancer among women aged ≥45 years whose last use of combined hormonal contraceptives was <5 years previously and for those whose last use was 5-9 years previously. Seven more recent studies suggested small but nonsignificant increased relative risks for breast carcinoma in situ or breast cancer among women who had used oral contraceptives or DMPA when they were aged ≥40 years compared with those who had never used either method (347-353) (Level of evidence: II-2, fair, direct). # Conclusion Most women can start most contraceptive methods at any time, and few examinations or tests, if any, are needed before starting a contraceptive method. Routine follow-up for most women includes assessment of satisfaction with the contraceptive method, concerns about method use, and changes in health status or medications that could affect medical eligibility for continued use of the method.
Because changes in bleeding patterns are one of the major reasons for discontinuation of contraception, recommendations are provided for the management of bleeding irregularities with various contraceptive methods. In addition, because women and health care providers can be confused about the procedures for missed pills and for dosing errors with the contraceptive patch and ring, the instructions have been streamlined for easier use. ECPs and emergency use of the Cu-IUD are important options for women, and recommendations are provided on using these methods, as well as on starting regular contraception after use of emergency contraception. Male and female sterilization are highly effective methods of contraception for men, women, and couples who have completed childbearing; for men undergoing vasectomy and women undergoing a hysteroscopic sterilization procedure, additional contraceptive protection is needed until the success of the procedure can be confirmed. CDC is committed to working with partners at the federal, national, and local levels to disseminate, implement, and evaluate the U.S. SPR recommendations so that the information reaches health care providers. Strategies for dissemination and implementation include collaborating with other federal agencies and professional and service organizations to distribute the recommendations widely through presentations, electronic distribution, newsletters, and other publications; developing provider tools and job aids to assist providers in implementing the new recommendations; and training activities for students, as well as for continuing education. CDC conducts surveys of family planning health care providers to assess attitudes and practices related to contraceptive use. Results from these surveys will assist CDC in evaluating the impact of these recommendations on the provision of contraceptives in the United States. Finally, CDC will continually monitor new scientific evidence and will update these recommendations as warranted by new evidence. Updates to the recommendations, as well as provider tools and other resources, are available on the CDC U.S. SPR website: http://www.cdc.gov/reproductivehealth/UnintendedPregnancy/USSPR.htm. # Handling Conflict of Interest To promote transparency, all participants were asked to disclose any potential conflicts of interest to CDC before the expert meeting and to report any potential conflicts of interest during the introductory portion of the expert meeting. All potential conflicts of interest are listed above. No participants were excluded from discussion on the basis of potential conflicts of interest. One presenter was an employee of a pharmaceutical company and participated by teleconference; after the presentation and questions related to the presentation, the presenter was excused from the discussion. CDC staff who ultimately decided on and developed these recommendations have no financial interests or other relationships with the manufacturers of commercial products, suppliers of commercial services, or commercial supporters relevant to these recommendations. Refer to U.S. MEC for clarifications to the numeric categories, as well as for summaries of the evidence and additional comments. Hormonal contraceptives and intrauterine devices do not protect against sexually transmitted diseases (STDs), including human immunodeficiency virus (HIV), and women using these methods should be counseled that consistent and correct use of the male latex condom reduces the risk for transmission of HIV and other STDs.
Use of female condoms can provide protection from transmission of STDs, although data are limited. Health care providers can use the summary table as a quick reference guide to the classifications for hormonal contraceptive methods and intrauterine contraception and to compare classifications across these methods (Box A1) (Table A1). BOX A1. Categories for classifying hormonal contraceptives and intrauterine devices 1 = A condition for which there is no restriction for the use of the contraceptive method. 2 = A condition for which the advantages of using the method generally outweigh the theoretical or proven risks. 3 = A condition for which the theoretical or proven risks usually outweigh the advantages of using the method. 4 = A condition that represents an unacceptable health risk if the contraceptive method is used. * In instances in which blood pressure cannot be measured by a provider, blood pressure measured in other settings can be reported by the woman to her provider. † Weight (BMI) measurement is not needed to determine medical eligibility for any method of contraception because all methods can be used (U.S. MEC 1) or generally can be used (U.S. MEC 2) among obese women (Box 1). However, measuring weight and calculating BMI at baseline might be helpful for monitoring any changes and for counseling women who might be concerned about weight change perceived to be associated with their contraceptive method. § A bimanual examination (not cervical inspection) is needed for diaphragm fitting. ¶ Most women do not require additional STD screening at the time of IUD insertion. If a woman with risk factors for STDs has not been screened for gonorrhea and chlamydia according to CDC's STD Treatment Guidelines (http://www.cdc.gov/std/treatment), screening can be performed at the time of IUD insertion, and insertion should not be delayed. Women with current purulent cervicitis or chlamydial or gonococcal infection should not undergo IUD insertion (U.S. MEC 4). The examinations or tests noted apply to women who are presumed to be healthy (Table C1). Those with known medical problems or other special conditions might need additional examinations or tests before being determined to be appropriate candidates for a particular method of contraception. The 2016 U.S. Medical Eligibility Criteria for Contraceptive Use (U.S. MEC) might be useful in such circumstances (1). The following classification was considered useful in differentiating the applicability of the various examinations or tests: • Class A: essential and mandatory in all circumstances for safe and effective use of the contraceptive method. • Class B: contributes substantially to safe and effective use, but implementation may be considered within the public health and/or service context; the risk of not performing an examination or test should be balanced against the benefits of making the contraceptive method available. • Class C: does not contribute substantially to safe and effective use of the contraceptive method. These classifications focus on the relationship of the examinations or tests to safe initiation of a contraceptive method. They are not intended to address the appropriateness of these examinations or tests in other circumstances. For example, some of the examinations or tests that are not deemed necessary for safe and effective contraceptive use might be appropriate for good preventive health care or for diagnosing or assessing suspected medical conditions.
Any additional screening needed for preventive health care can be performed at the time of contraception initiation, and initiation should not be delayed for test results. No examinations or tests are needed before initiating condoms or spermicides. A bimanual examination is necessary for diaphragm fitting. A bimanual examination and cervical inspection are needed for cervical cap fitting. # Acknowledgments # U.S. Selected Practice Recommendations for Contraceptive Use # Ovarian cancer This condition is associated with increased risk for adverse health events as a result of pregnancy. # Respiratory Conditions # Cystic fibrosis This condition is associated with increased risk for adverse health events as a result of pregnancy. # Sickle cell disease This condition is associated with increased risk for adverse health events as a result of pregnancy. # Appendix D Routine Follow-Up After Contraceptive Initiation # General follow-up Advise women to return at any time to discuss side effects or other problems or if they want to change the method. Advise women using IUDs, implants, or injectables when the IUD or implant needs to be removed or when a reinjection is needed. No routine follow-up visit is required. # Other routine visits Assess the woman's satisfaction with her current method and whether she has any concerns about method use. These recommendations address when routine follow-up is recommended for safe and effective continued use of contraception for healthy women (Table D1). The recommendations refer to general situations and might vary for different users and different situations. Specific populations who might benefit from frequent follow-up visits include adolescents, those with certain medical conditions or characteristics, and those with multiple medical conditions. # Appendix E Management of Women with Bleeding Irregularities While Using Contraception* # Cu-IUD users For unscheduled spotting or light bleeding or for heavy or prolonged bleeding: continue the IUD; if bleeding persists, or if the woman requests it, medical treatment can be considered. Options shown in the management algorithm include reassessment in 24-48 hours, removal of the IUD after beginning antibiotics, continuation of antibiotics with consideration of IUD removal, offering another contraceptive method, and offering emergency contraception.
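To make the numeric U.S. MEC categories (Box A1) and the Class A/B/C examination classes easier to apply consistently, here is a minimal Python sketch; the enum and function names are illustrative and are not part of any CDC tool.

```python
from enum import Enum, IntEnum

class USMECCategory(IntEnum):
    """U.S. MEC categories for hormonal contraceptives and IUDs (Box A1)."""
    NO_RESTRICTION = 1        # no restriction for use of the method
    ADVANTAGES_OUTWEIGH = 2   # advantages generally outweigh the risks
    RISKS_OUTWEIGH = 3        # risks usually outweigh the advantages
    UNACCEPTABLE_RISK = 4     # unacceptable health risk if the method is used

class ExamClass(Enum):
    """Classes for pre-initiation examinations and tests (Appendix C)."""
    A = "essential and mandatory in all circumstances"
    B = "contributes substantially; weigh omission risk against access"
    C = "does not contribute substantially to safe and effective use"

def method_generally_usable(category: USMECCategory) -> bool:
    # Categories 1 and 2 are generally compatible with use of the method;
    # category 3 calls for clinical judgment, and category 4 contraindicates use.
    return category <= USMECCategory.ADVANTAGES_OUTWEIGH

# Example from the text: age >45 alone is U.S. MEC 1 for the Cu-IUD.
assert method_generally_usable(USMECCategory(1))
```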
# Footnotes (Figures 1 and 2) The influenza vaccination footnote (#1) is revised and shortened to reflect a recommendation for vaccination of all persons aged 6 months and older, including all adults. The high-dose influenza vaccine (Fluzone), licensed in 2010 for adults aged 65 years and older, is mentioned as an option for this age group. The Td/Tdap vaccination footnote (#2) has language added to indicate that persons aged 65 years and older who have close contact with an infant aged less than 12 months should get vaccinated with Tdap; the additional language notes that all persons aged 65 years and older may get vaccinated with Tdap. Also added is the recommendation to administer Tdap regardless of interval since the most recent Td-containing vaccine. The HPV vaccination footnote (#4) has language added to the introductory sentences to indicate that either quadrivalent vaccine or bivalent vaccine is recommended for females. The MMR vaccination footnote (#6) has been revised mainly by consolidating common language that previously had been part of each of the three vaccine component sections into one introductory statement. The revaccination with PPSV footnote (#8) clarifies that one-time revaccination after 5 years only applies to persons with indicated chronic conditions who are aged 19 through 64 years. The meningococcal vaccination footnote (#9) has language added to indicate that a 2-dose series of meningococcal conjugate vaccine is recommended for adults with anatomic or functional asplenia, or persistent complement component deficiencies, as well as adults with human immunodeficiency virus (HIV) infection who are vaccinated. Language has been added that a single dose of meningococcal vaccine is still recommended for those with other indications. Also, language has been added to clarify that meningococcal conjugate vaccine (MCV4) is a quadrivalent vaccine. The language for the selected conditions for the Hib footnote (#12) has been shortened to clarify which persons at high risk may receive 1 dose of Hib vaccine. The recommended adult immunization schedule has been approved by the Advisory Committee on Immunization Practices, the American Academy of Family Physicians, the American College of Obstetricians and Gynecologists, and the American College of Physicians. # Suggested citation: Centers for Disease Control and Prevention. Recommended adult immunization schedule---United States, 2011. MMWR 2011;60(4). # NOTE: The above recommendations must be read along with the footnotes on pages 3--4 of this schedule. # Influenza vaccination Annual vaccination against influenza is recommended for all persons aged 6 months and older, including all adults. Healthy, nonpregnant adults aged less than 50 years without high-risk medical conditions can receive either intranasally administered live, attenuated influenza vaccine (FluMist) or inactivated vaccine. Other persons should receive the inactivated vaccine. Adults aged 65 years and older can receive the standard influenza vaccine or the high-dose (Fluzone) influenza vaccine. Additional information about influenza vaccination is available at http://www.cdc.gov/vaccines/vpd-vac/flu/default.htm.
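As a worked illustration of the age- and risk-based choices in the influenza footnote, the sketch below encodes the eligibility logic in Python; the function and flag names are hypothetical, and "high_risk" stands in for the footnote's "high-risk medical conditions."

```python
def influenza_vaccine_options(age: int, pregnant: bool, high_risk: bool) -> set[str]:
    """Formulations an adult (aged 19 years and older) may receive per this footnote."""
    options = {"inactivated vaccine (standard dose)"}
    if age >= 65:
        # Adults 65 and older may receive either the standard or high-dose product.
        options.add("inactivated vaccine (high-dose, Fluzone)")
    elif age < 50 and not pregnant and not high_risk:
        # Healthy, nonpregnant adults under 50 may receive LAIV instead.
        options.add("live attenuated intranasal vaccine (FluMist)")
    return options

# A healthy, nonpregnant 30-year-old may take either LAIV or inactivated vaccine.
print(sorted(influenza_vaccine_options(30, pregnant=False, high_risk=False)))
```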
# Tetanus, diphtheria, and acellular pertussis (Td/Tdap) vaccination Administer a one-time dose of Tdap to adults aged less than 65 years who have not received Tdap previously or for whom vaccine status is unknown to replace one of the 10-year Td boosters, and as soon as feasible to all 1) postpartum women, 2) close contacts of infants younger than age 12 months (e.g., grandparents and child-care providers), and 3) health-care personnel with direct patient contact. Adults aged 65 years and older who have not previously received Tdap and who have close contact with an infant aged less than 12 months also should be vaccinated. Other adults aged 65 years and older may receive Tdap. Tdap can be administered regardless of interval since the most recent tetanus or diphtheria-containing vaccine. Adults with uncertain or incomplete history of completing a 3-dose primary vaccination series with Td-containing vaccines should begin or complete a primary vaccination series. For unvaccinated adults, administer the first 2 doses at least 4 weeks apart and the third dose 6--12 months after the second. If incompletely vaccinated (i.e., less than 3 doses), administer remaining doses. Substitute a one-time dose of Tdap for one of the doses of Td, either in the primary series or for the routine booster, whichever comes first. If a woman is pregnant and received the most recent Td vaccination 10 or more years previously, administer Td during the second or third trimester. If the woman received the most recent Td vaccination less than 10 years previously, administer Tdap during the immediate postpartum period. At the clinician's discretion, Td may be deferred during pregnancy and Tdap substituted in the immediate postpartum period, or Tdap may be administered instead of Td to a pregnant woman after an informed discussion with the woman. The ACIP statement for recommendations for administering Td as prophylaxis in wound management is available at http://www.cdc.gov/vaccines/pubs/acip-list.htm. # Varicella vaccination All adults without evidence of immunity to varicella should receive 2 doses of single-antigen varicella vaccine if not previously vaccinated or a second dose if they have received only 1 dose, unless they have a medical contraindication. Special consideration should be given to those who 1) have close contact with persons at high risk for severe disease (e.g., health-care personnel and family contacts of persons with immunocompromising conditions) or 2) are at high risk for exposure or transmission (e.g., teachers; child-care employees; residents and staff members of institutional settings, including correctional institutions; college students; military personnel; adolescents and adults living in households with children; nonpregnant women of childbearing age; and international travelers).
Evidence of immunity to varicella in adults includes any of the following: 1) documentation of 2 doses of varicella vaccine at least 4 weeks apart; 2) U.S. birth before 1980 (although for health-care personnel and pregnant women, birth before 1980 should not be considered evidence of immunity); 3) history of varicella based on diagnosis or verification of varicella by a health-care provider (for a patient reporting a history of or having an atypical case, a mild case, or both, health-care providers should seek either an epidemiologic link to a typical varicella case or to a laboratory-confirmed case, or evidence of laboratory confirmation if testing was performed at the time of acute disease); 4) history of herpes zoster based on diagnosis or verification of herpes zoster by a health-care provider; or 5) laboratory evidence of immunity or laboratory confirmation of disease. Pregnant women should be assessed for evidence of varicella immunity. Women who do not have evidence of immunity should receive the first dose of varicella vaccine upon completion or termination of pregnancy and before discharge from the health-care facility. The second dose should be administered 4--8 weeks after the first dose. # Human papillomavirus (HPV) vaccination HPV vaccination with either quadrivalent (HPV4) vaccine or bivalent (HPV2) vaccine is recommended for females at age 11 or 12 years and as catch-up vaccination for females aged 13 through 26 years. Ideally, vaccine should be administered before potential exposure to HPV through sexual activity; however, females who are sexually active should still be vaccinated consistent with age-based recommendations. Sexually active females who have not been infected with any of the four HPV vaccine types (types 6, 11, 16, and 18, all of which HPV4 prevents) or either of the two HPV vaccine types (types 16 and 18, both of which HPV2 prevents) receive the full benefit of the vaccination. Vaccination is less beneficial for females who have already been infected with one or more of the HPV vaccine types. HPV4 or HPV2 can be administered to persons with a history of genital warts, an abnormal Papanicolaou test, or a positive HPV DNA test, because these conditions are not evidence of previous infection with all vaccine HPV types. HPV4 may be administered to males aged 9 through 26 years to reduce their likelihood of genital warts. HPV4 would be most effective when administered before exposure to HPV through sexual contact. A complete series for either HPV4 or HPV2 consists of 3 doses. The second dose should be administered 1--2 months after the first dose; the third dose should be administered 6 months after the first dose. Although HPV vaccination is not specifically recommended for persons with the medical indications described in Figure 2, "Vaccines that might be indicated for adults based on medical and other indications," it may be administered to these persons because the HPV vaccine is not a live-virus vaccine. However, the immune response and vaccine efficacy might be less for persons with the medical indications described in Figure 2 than for persons who do not have the medical indications described or who are immunocompetent. # Herpes zoster vaccination A single dose of zoster vaccine is recommended for adults aged 60 years and older regardless of whether they report a previous episode of herpes zoster. Persons with chronic medical conditions may be vaccinated unless their condition constitutes a contraindication.
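The varicella evidence-of-immunity criteria above amount to a five-way disjunction with one exception for health-care personnel and pregnant women; the minimal Python sketch below makes that logic explicit (the parameter names are illustrative, not from the schedule).

```python
def varicella_immunity(
    two_documented_doses: bool,       # doses at least 4 weeks apart
    us_born_before_1980: bool,
    hcp_or_pregnant: bool,            # exception to the birth-year criterion
    provider_verified_varicella: bool,
    provider_verified_zoster: bool,
    lab_evidence: bool,               # immunity or confirmation of disease
) -> bool:
    """Evidence of immunity to varicella in adults, per the footnote."""
    return (
        two_documented_doses
        or (us_born_before_1980 and not hcp_or_pregnant)
        or provider_verified_varicella
        or provider_verified_zoster
        or lab_evidence
    )

# A pregnant woman born in the U.S. in 1975, with no other evidence, is not
# considered immune, because the birth-year criterion does not apply to her.
assert not varicella_immunity(False, True, True, False, False, False)
```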
# Measles, mumps, rubella (MMR) vaccination Adults born before 1957 generally are considered immune to measles and mumps. All adults born in 1957 or later should have documentation of 1 or more doses of MMR vaccine unless they have a medical contraindication to the vaccine, laboratory evidence of immunity to each of the three diseases, or documentation of provider-diagnosed measles or mumps disease. For rubella, documentation of provider-diagnosed disease is not considered acceptable evidence of immunity. Measles component: A second dose of MMR vaccine, administered a minimum of 28 days after the first dose, is recommended for adults who 1) have been recently exposed to measles or are in an outbreak setting; 2) are students in postsecondary educational institutions; 3) work in a health-care facility; or 4) plan to travel internationally. Persons who received inactivated (killed) measles vaccine or measles vaccine of unknown type during 1963--1967 should be revaccinated with 2 doses of MMR vaccine. Mumps component: A second dose of MMR vaccine, administered a minimum of 28 days after the first dose, is recommended for adults who 1) live in a community experiencing a mumps outbreak and are in an affected age group; 2) are students in postsecondary educational institutions; 3) work in a health-care facility; or 4) plan to travel internationally. Persons vaccinated before 1979 with either killed mumps vaccine or mumps vaccine of unknown type who are at high risk for mumps infection (e.g., persons working in a health-care facility) should be revaccinated with 2 doses of MMR vaccine. Rubella component: For women of childbearing age, regardless of birth year, rubella immunity should be determined. If there is no evidence of immunity, women who are not pregnant should be vaccinated. Pregnant women who do not have evidence of immunity should receive MMR vaccine upon completion or termination of pregnancy and before discharge from the health-care facility. Health-care personnel born before 1957: For unvaccinated health-care personnel born before 1957 who lack laboratory evidence of measles, mumps, and/or rubella immunity or laboratory confirmation of disease, health-care facilities should 1) consider routinely vaccinating personnel with 2 doses of MMR vaccine at the appropriate interval (for measles and mumps) and 1 dose of MMR vaccine (for rubella), and 2) recommend 2 doses of MMR vaccine at the appropriate interval during an outbreak of measles or mumps, and 1 dose during an outbreak of rubella. Complete information about evidence of immunity is available at http://www.cdc.gov/vaccines/recs/provisional/default.htm. # Pneumococcal polysaccharide (PPSV) vaccination Vaccinate all persons with the following indications: Medical: Chronic lung disease (including asthma); chronic cardiovascular diseases; diabetes mellitus; chronic liver diseases; cirrhosis; chronic alcoholism; functional or anatomic asplenia (e.g., sickle cell disease or splenectomy [if elective splenectomy is planned, vaccinate at least 2 weeks before surgery]); immunocompromising conditions (including chronic renal failure or nephrotic syndrome); and cochlear implants and cerebrospinal fluid leaks. Vaccinate as close to HIV diagnosis as possible. Other: Residents of nursing homes or long-term care facilities and persons who smoke cigarettes. Routine use of PPSV is not recommended for American Indians/Alaska Natives or persons aged less than 65 years unless they have underlying medical conditions that are PPSV indications.
However, public health authorities may consider recommending PPSV for American Indians/Alaska Natives and persons aged 50 through 64 years who are living in areas where the risk for invasive pneumococcal disease is increased. # Revaccination with PPSV One-time revaccination after 5 years is recommended for persons aged 19 through 64 years with chronic renal failure or nephrotic syndrome; functional or anatomic asplenia (e.g., sickle cell disease or splenectomy); and for persons with immunocompromising conditions. For persons aged 65 years and older, one-time revaccination is recommended if they were vaccinated 5 or more years previously and were aged less than 65 years at the time of primary vaccination. # Meningococcal vaccination Meningococcal vaccine should be administered to persons with the following indications: Medical: A 2-dose series of meningococcal conjugate vaccine is recommended for adults with anatomic or functional asplenia, or persistent complement component deficiencies. Adults with HIV infection who are vaccinated should also receive a routine 2-dose series. The 2 doses should be administered at 0 and 2 months. Other: A single dose of meningococcal vaccine is recommended for unvaccinated first-year college students living in dormitories; microbiologists routinely exposed to isolates of Neisseria meningitidis; military recruits; and persons who travel to or live in countries in which meningococcal disease is hyperendemic or epidemic (e.g., the "meningitis belt" of sub-Saharan Africa during the dry season [December through June]), particularly if their contact with local populations will be prolonged. Vaccination is required by the government of Saudi Arabia for all travelers to Mecca during the annual Hajj. Meningococcal conjugate vaccine, quadrivalent (MCV4), is preferred for adults with any of the preceding indications who are aged 55 years and younger; meningococcal polysaccharide vaccine (MPSV4) is preferred for adults aged 56 years and older. Revaccination with MCV4 every 5 years is recommended for adults previously vaccinated with MCV4 or MPSV4 who remain at increased risk for infection (e.g., adults with anatomic or functional asplenia, or persistent complement component deficiencies). # Hepatitis A vaccination Vaccinate persons with any of the following indications and any person seeking protection from hepatitis A virus (HAV) infection: Behavioral: Men who have sex with men and persons who use injection drugs. Occupational: Persons working with HAV-infected primates or with HAV in a research laboratory setting. Medical: Persons with chronic liver disease and persons who receive clotting factor concentrates. Other: Persons traveling to or working in countries that have high or intermediate endemicity of hepatitis A (a list of countries is available at http://wwwn.cdc.gov/travel/contentdiseases.aspx). Unvaccinated persons who anticipate close personal contact (e.g., household or regular babysitting) with an international adoptee during the first 60 days after arrival in the United States from a country with high or intermediate endemicity should be vaccinated. The first dose of the 2-dose hepatitis A vaccine series should be administered as soon as adoption is planned, ideally 2 or more weeks before the arrival of the adoptee. Single-antigen vaccine formulations should be administered in a 2-dose schedule at either 0 and 6--12 months (Havrix), or 0 and 6--18 months (Vaqta).
If the combined hepatitis A and hepatitis B vaccine (Twinrix) is used, administer 3 doses at 0, 1, and 6 months; alternatively, a 4-dose schedule may be used, administered on days 0, 7, and 21--30, followed by a booster dose at month 12. # Hepatitis B vaccination Vaccinate persons with any of the following indications and any person seeking protection from hepatitis B virus (HBV) infection: Behavioral: Sexually active persons who are not in a long-term, mutually monogamous relationship (e.g., persons with more than one sex partner during the previous 6 months); persons seeking evaluation or treatment for a sexually transmitted disease (STD); current or recent injection-drug users; and men who have sex with men. Occupational: Health-care personnel and public-safety workers who are exposed to blood or other potentially infectious body fluids. Medical: Persons with end-stage renal disease, including patients receiving hemodialysis; persons with HIV infection; and persons with chronic liver disease. Other: Household contacts and sex partners of persons with chronic HBV infection; clients and staff members of institutions for persons with developmental disabilities; and international travelers to countries with high or intermediate prevalence of chronic HBV infection (a list of countries is available at http://wwwn.cdc.gov/travel/contentdiseases.aspx). Hepatitis B vaccination is recommended for all adults in the following settings: STD treatment facilities; HIV testing and treatment facilities; facilities providing drug-abuse treatment and prevention services; health-care settings targeting services to injection-drug users or men who have sex with men; correctional facilities; end-stage renal disease programs and facilities for chronic hemodialysis patients; and institutions and nonresidential day-care facilities for persons with developmental disabilities. Administer missing doses to complete a 3-dose series of hepatitis B vaccine to those persons not vaccinated or not completely vaccinated. The second dose should be administered 1 month after the first dose; the third dose should be given at least 2 months after the second dose (and at least 4 months after the first dose). If the combined hepatitis A and hepatitis B vaccine (Twinrix) is used, administer 3 doses at 0, 1, and 6 months; alternatively, a 4-dose Twinrix schedule may be used, administered on days 0, 7, and 21--30, followed by a booster dose at month 12. Adult patients receiving hemodialysis or with other immunocompromising conditions should receive 1 dose of 40 µg/mL (Recombivax HB) administered on a 3-dose schedule or 2 doses of 20 µg/mL (Engerix-B) administered simultaneously on a 4-dose schedule at 0, 1, 2, and 6 months. # Selected conditions for which Haemophilus influenzae type b (Hib) vaccine may be used One dose of Hib vaccine should be considered for persons who have sickle cell disease, leukemia, or HIV infection, or who have had a splenectomy, if they have not previously received Hib vaccine. # Immunocompromising conditions Inactivated vaccines generally are acceptable (e.g., pneumococcal, meningococcal, and inactivated influenza vaccines), and live vaccines generally are avoided in persons with immune deficiencies or immunocompromising conditions. Information on specific conditions is available at http://www.cdc.gov/vaccines/pubs/acip-list.htm.
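The hepatitis B catch-up intervals above are three simultaneous constraints, so they lend themselves to a simple check. The sketch below verifies them in Python, approximating the footnote's calendar months as 28-day minimum intervals; that approximation is an assumption made for the sketch, not part of the schedule, and the function name is invented.

```python
from datetime import date, timedelta

def hepb_intervals_ok(d1: date, d2: date, d3: date) -> bool:
    """Check minimum spacing for a 3-dose hepatitis B series: dose 2 at least
    1 month after dose 1; dose 3 at least 2 months after dose 2 and at least
    4 months after dose 1 (months approximated here as 28 days)."""
    month = timedelta(days=28)  # crude stand-in for a calendar month
    return (d2 - d1 >= month
            and d3 - d2 >= 2 * month
            and d3 - d1 >= 4 * month)

# The standard 0-, 1-, and 6-month schedule satisfies all three constraints.
assert hepb_intervals_ok(date(2011, 1, 1), date(2011, 2, 1), date(2011, 7, 1))
```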
These schedules indicate the recommended age groups and medical indications for which administration of currently licensed vaccines is commonly indicated for adults aged 19 years and older, as of January 1, 2011. For all vaccines recommended on the adult immunization schedule, a vaccine series does not need to be restarted, regardless of the time that has elapsed between doses. Licensed combination vaccines may be used whenever any components of the combination are indicated and when the vaccine's other components are not contraindicated. For detailed recommendations on all vaccines, including those used primarily for travelers or that are issued during the year, consult the manufacturers' package inserts and the complete statements from the Advisory Committee on Immunization Practices (http://www.cdc.gov/vaccines/pubs/acip-list.htm). Report all clinically significant postvaccination reactions to the Vaccine Adverse Event Reporting System (VAERS). Reporting forms and instructions on filing a VAERS report are available at http://www.vaers.hhs.gov or by telephone, 800-822-7967. Information on how to file a Vaccine Injury Compensation Program claim is available at http://www.hrsa.gov/vaccinecompensation or by telephone, 800-338-2382. Information about filing a claim for vaccine injury is available through the U.S. Court of Federal Claims, 717 Madison Place, N.W., Washington, D.C. 20005; telephone, 202-357-6400. Additional information about the vaccines in this schedule, extent of available data, and contraindications for vaccination also is available at http://www.cdc.gov/vaccines or from the CDC-INFO Contact Center at 800-CDC-INFO (800-232-4636), in English and Spanish, 24 hours a day, 7 days a week.
The Occupational Safety and Health Act of 1970 emphasizes the need for standards to protect the health and provide for the safety of workers occupationally exposed to an ever-increasing number of potential hazards. The Division of Criteria Documentation and Standards Development, National Institute for Occupational Safety and Health (NIOSH), had primary responsibility for the development of the criteria and recommended standard for nitriles. Sonia Berg of this Division served as criteria manager. Equitable Environmental Health, Inc. (EEH) developed the basic information for consideration by NIOSH staff and consultants under contract CDC 210-77-0148. The Division review of this document was provided by Jon R. May, Ph.D. (Chairman), and J. # Ceiling (b) Sampling and Analysis Workplace air samples shall be collected and analyzed for nitriles as described in Appendix I or by any method shown to be at least equivalent in accuracy, precision, and sensitivity. # Section 2 - Medical Medical surveillance shall be made available as specified below to all employees subject to exposure to the compounds covered by this standard. difference in area is to be expected. # Documentation of # (e) Measurement of Area The area of the sample peak is measured by an electronic integrator or some other suitable form of area measurement, and preliminary results are read from a standard curve prepared as discussed in the following sections. # Determination of Desorption Efficiency (a) Importance The # (c) Gas Chromatographic Conditions The typical operating conditions for the gas chromatograph are: (1) 50 ml/minute (60 psig) nitrogen carrier gas flow. (2) 65 ml/minute (24 psig) hydrogen gas flow to detector. (3) 500 ml/minute (50 psig) airflow to detector. Keep the patient comfortably warm but not hot. # MATERIAL SAFETY DATA SHEET # PRODUCT IDENTIFICATION # V. HEALTH HAZARD INFORMATION Health hazard data. Routes of exposure: inhalation, skin contact, skin absorption, eye contact, ingestion. Effects of overexposure. # XII. TABLES AND FIGURE
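The gas chromatographic operating conditions quoted earlier in this fragment are, in effect, a small instrument configuration. As a purely illustrative sketch (the dataclass and key names are invented here and are not part of the NIOSH method), they can be recorded in typed form:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class GasSetting:
    flow_ml_per_min: int
    pressure_psig: int

# Typical operating conditions quoted in the method text.
GC_CONDITIONS = {
    "nitrogen_carrier": GasSetting(flow_ml_per_min=50, pressure_psig=60),
    "hydrogen_to_detector": GasSetting(flow_ml_per_min=65, pressure_psig=24),
    "air_to_detector": GasSetting(flow_ml_per_min=500, pressure_psig=50),
}

for gas, setting in GC_CONDITIONS.items():
    print(f"{gas}: {setting.flow_ml_per_min} ml/minute at {setting.pressure_psig} psig")
```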
Synopsis: The Centers for Disease Control and Prevention has developed guidelines for determining HIV seroprevalence among patients seeking medical care at acute-care hospitals. The guidelines enable hospital staff members to perform a simple, rapid, and inexpensive survey to determine seroprevalence among the patient population, protecting the anonymity of those who are tested. The guidelines are based on national experience with large-scale anonymous, unlinked HIV serosurveys. The data from a rapid assessment survey are particularly useful for evaluating the need to provide routine, voluntary HIV counseling and testing and treatment for HIV infection. Beyond that, such data can be used in targeting education efforts, in reinforcing the use of appropriate universal precautions, in resource allocation, and in determining the need for further studies of HIV infection among the population in the hospital catchment area.

PATIENTS WITH AIDS have been admitted to more than 90 percent of the metropolitan area hospitals in the United States and to about 40 percent of the rural area hospitals (1). In addition, a large number of persons with undiagnosed HIV infection have been hospitalized (2). In most hospitals, however, the number and proportion of patients infected with HIV are unknown. Hospital seroprevalence data have many uses:
- evaluating the need to provide routine, voluntary HIV counseling and testing services as recommended by the Centers for Disease Control and Prevention (CDC) (3),
- evaluating the need for treatment services for HIV infection and associated conditions,
- targeting education efforts,
- reinforcing the use of appropriate universal precautions by health care workers,
- assisting in making decisions regarding resource allocation, and
- determining the need for further epidemiologic studies about demographic and behavioral characteristics in the hospital's catchment population.

Based on the experience of the 39 U.S. acute-care hospitals participating in the Sentinel Hospital Surveillance System for HIV Infection (2,4,5), CDC has developed and field-tested generic guidelines (6,7), approved by a CDC institutional review board for protection of human subjects. Under the guidelines, hospital personnel can perform a simple, rapid, and inexpensive survey to determine the HIV seroprevalence among the patient population and protect the anonymity of those who are tested. The guidelines provide for anonymous and unlinked testing of residual blood specimens collected for routine diagnostic purposes. Tested patients cannot be identified. This approach prevents bias from self-deferral, such as when persons at high risk for HIV infection refuse to participate in a voluntary study (8). The guidelines are flexible, so they can easily be adapted to the special needs or interests of a particular hospital. This is not to imply that hospitals must or should conduct such surveys. Our purpose is to assist those hospitals that wish to determine their HIV seroprevalence. The guidelines provide help in addressing important issues involved in performing such a survey, including anonymity of patients, scientific validity, and generalizability of results. The results from a hospital survey can apply to its treated population but not necessarily to the general population of its catchment area. It is unclear how frequently these surveys should be performed.
Staffs at some hospitals may want to conduct a single survey; others might want to repeat the survey annually or less frequently.

# Survey Design
For reasons of ethics and simplicity, exhaustive efforts must be made to destroy any possible link between patients and their blood specimens prior to HIV serologic testing, to avoid inadvertent identification of patients whose blood is tested. Data obtained by CDC's Sentinel Hospital Surveillance System for HIV infection (2,5) show an age distribution similar to that of AIDS patients overall, with the highest seroprevalence rates in the 25-44-year-old age group. To obtain maximal precision of seroprevalence estimates with a minimal sample size, we suggest sampling patients ages 15-54. Since the seroprevalence in patients younger or older than this age category is likely to be very low, sampling children and the elderly would add little information and could increase the chance of inadvertent identification of a patient who was tested, particularly in hospitals where the overall seroprevalence is low (less than 1 percent). These seroprevalence surveys can be conducted and coordinated by many different persons in a hospital but require knowledge of routine procedures in the hospital's chemistry and hematology laboratories. Possible principal investigators include laboratory directors or infectious disease physicians collaborating with laboratory personnel. Alternatively, infection control personnel can be principal investigators and should collaborate with both laboratory and infectious disease personnel in conducting these surveys. Principal investigators should have knowledge of research study design, issues about confidentiality, and data analysis. In addition, personnel will need to be knowledgeable about laboratory practices, specimen handling, data collection, data management, and HIV-antibody testing.

Demographic data. The distribution of AIDS cases and HIV infection varies widely by geographic area, sex, age, and race (9,10). In this protocol, for simplicity, "race" is used to include ethnicity (for example, Hispanic) as a mutually exclusive race category, even though members of the ethnic group may be of various races. The rates are generally higher among men compared with women and among blacks and Hispanics compared with whites. They are highest among people ages 25 to 44. Therefore, we suggest that data about age group, sex, and race be included. (Specific year of age in combination with the other variables might compromise anonymity; precise year of age is not very useful for the analysis and should not be collected.) Some hospitals may have the ability to collect additional data, such as information regarding a patient's known HIV status or HIV-related disease. This information could be used to help estimate the number of unrecognized or unsuspected HIV infections in the overall patient population. Collected data, considered together, must not, however, allow linkage of HIV test results to specific people.

Sample size. For this survey, the recommended sample size is 1,000 patients. With this sample size, if there is only 1 seropositive out of the 1,000 tested, the approximate 95 percent confidence interval for the true seroprevalence is 0.002 percent to 0.56 percent. In a sample with a 1 percent seroprevalence rate (10 seropositive specimens), the equivalent interval would be 0.48 percent to 1.84 percent. If more precise results are needed, larger sample sizes would be required.
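The interval estimates quoted above can be checked numerically. The figures are consistent with exact Poisson confidence limits, which can be computed from chi-square quantiles. The following is a minimal Python sketch, not part of the published protocol; it assumes scipy is available and that table 3 tabulates exact Poisson limits:

```python
# Sketch: exact Poisson 95% confidence limits for a seroprevalence
# estimate, expressed in percent, via the standard chi-square
# relationship for Poisson confidence limits.
from scipy.stats import chi2

def poisson_ci_percent(x, n, level=0.95):
    """CI (in percent) for seroprevalence with x positives out of n tested."""
    alpha = 1 - level
    lower = 0.0 if x == 0 else 0.5 * chi2.ppf(alpha / 2, 2 * x)
    upper = 0.5 * chi2.ppf(1 - alpha / 2, 2 * (x + 1))
    return 100 * lower / n, 100 * upper / n

print(poisson_ci_percent(1, 1000))   # approx. (0.0025, 0.557): "0.002 to 0.56 percent"
print(poisson_ci_percent(10, 1000))  # approx. (0.48, 1.84), as quoted above
```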
Stratification of the sample. To generalize the results of the sample to a defined hospital-patient subgroup (hospital inpatients or a particular patient service, for example), the sample must be representative of the patient population. Blood specimens collected sequentially in a short period, such as 1-2 days, may not constitute a representative sample, however. To correct for possible biases, the best sample includes stratification by age and sex to ensure that the sample reflects the typical age and sex distribution of patients ages 15-54 at the hospital. The stratification also simplifies the avoidance of duplicate specimens, data management, and analysis of data by age and sex. To obtain the distribution by age and sex of an average patient population ages 15-54 in a specific hospital, discharge data from a representative period, obtained from the hospital medical records or billing department, can be separated by the age and sex of all patients in that age category into eight strata: ages 15-24, 25-34, 35-44, and 45-54 for each sex. In general, the discharge data of the last calendar year should yield representative data for an average patient population. Changes like the opening of a new trauma care unit in the middle of the year, however, may have a major impact on the patient distribution. In such a case, one has to be sure that the population from which the age and sex distribution is taken is representative of the actual patient population (for example, discharge data of the last 3 months). The proportion of each group is calculated by dividing the number of discharges in that group by the total number of discharges of patients ages 15-54. An example of this procedure for an average hospital is given in table 1. To obtain the number of specimens per age-and-sex stratum that should be included in the survey sample, multiply the total sample size (1,000) by the proportion of the appropriate group (table 2); a sketch of this allocation follows below. Although this procedure is not a randomized sampling procedure, it should provide a good approximation of the patient population because the numbers per stratum are calculated in advance. If different populations, such as outpatients and inpatients, are included in the survey, collection of separate samples of 1,000 specimens for each population is recommended.

Sampling problems. Laboratory or hospital procedures can also lead to a sampling bias. For example, if some specimens of a particular patient group, such as obstetric patients, are sequestered for other purposes like special studies, those specimens may not be available for this survey. When designing the survey and interpreting results, investigators should determine whether such procedures are practiced in their laboratories. In some hospitals, the total sample could be selected in less than 1 week, which can itself introduce bias. For example, if a large group of patients is admitted to the hospital on a certain day of the week for a specific procedure like elective surgery, these patients could be either over- or underrepresented in the sample, depending on whether specimens drawn on that particular day were included or excluded from the sample. One way to compensate for such a bias is to select only every second or third available specimen to ensure that sampling occurs over a sufficiently long period.

# Sampling Procedure
To perform this survey, investigators need access to residual blood specimens and corresponding demographic data.
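The allocation arithmetic described under "Stratification of the sample" reduces to computing each stratum's share of discharges and scaling by the total sample size. A minimal Python sketch follows; the discharge counts are hypothetical illustrations, not the values in table 1:

```python
# Sketch: allocate the 1,000-specimen sample across the eight age-sex
# strata in proportion to discharges. The counts below are made up.
discharges = {
    ("M", "15-24"): 600, ("M", "25-34"): 820,
    ("M", "35-44"): 700, ("M", "45-54"): 540,
    ("F", "15-24"): 900, ("F", "25-34"): 1100,
    ("F", "35-44"): 760, ("F", "45-54"): 580,
}
total = sum(discharges.values())
sample_size = 1000

# Rounding may leave the grand total a specimen or two off 1,000;
# adjust the largest stratum if an exact total is needed.
allocation = {stratum: round(sample_size * count / total)
              for stratum, count in discharges.items()}
print(allocation)  # target number of specimens to select per stratum
```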
The most efficient way to perform the survey is to collect blood specimens and the corresponding demographic data at one central laboratory. The chemistry laboratory, for example, might be the most likely place to obtain leftover serum or plasma without additional procedures like centrifugation of heparinized blood and to have specimens from patients that meet the survey population requirements. In addition, personnel at the collection laboratory will need access to patient data on age group, sex, race, and clinical service. If hospitals have different laboratories for different units, investigators could either perform separate surveys of the patients of the different units or stratify the sample according to the overall patient distribution in the different units. Specimens could then be collected consecutively for each age group and sex stratum in each laboratory. [Chart: The sequence of serodiagnosis procedures for human immunodeficiency virus using enzyme immunoassay (EIA) tests and the use of the Western blot assay for HIV-1 has been published (7).]

Data collection form. The form used to collect sample data is designed to enable personnel to collect the data, conduct the stratified sampling while avoiding duplicate testing of the same person, and protect the anonymity of patients being selected. Sampling with this form does not require the use of a computer; however, a computerized data base can easily be developed from it. Before the beginning of the survey, investigators must clearly define the information they want to collect. The recommended variables are sex, age group, race, and clinical service. Specific categories are given for sex and age group. The race variable has two generally useful fixed categories of white and black patients. Another race-ethnicity (Hispanic, for example) may be specified if the proportion of this group is at least 10 percent of the patient population. No particular race or ethnic group should be used if it represents less than 10 percent of the patient population; all such groups should be combined and classified as "other." For the service unit, there are four proposed categories: surgery, internal medicine, obstetrics, and psychiatry. Other units, if essential to note, should be categorized in advance. In addition, it could be indicated whether a patient is treated as an inpatient, as an outpatient, or in the emergency room. If additional information is of special importance to a particular hospital, it may be collected and specified. It is especially important to specify such categories in advance to be sure that the meaning is unequivocal and that the data collection is systematic. Any additional data should be essential to interpreting and using the results of the survey and should not allow for the identification of a person tested in the survey. Additional information, such as a specific diagnosis, might allow for inadvertent identification if the diagnosis is rare. Therefore, investigators must be cautious in collecting any additional information.

Avoiding duplicate sampling. Besides the survey number, which may be preprinted on the data collection forms, the last 4 digits of the patient's hospital identification number (usually 7 digits) can be recorded temporarily on a data collection form to avoid duplicate sampling. These segments of the patient numbers must be removed and destroyed before the specimens are serologically tested. Each specimen should be labeled with the corresponding survey number, and prepared labels attached to the data collection forms.
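The duplicate-avoidance scheme just described, combined with the stratum targets, amounts to a filtering loop over residual specimens; the step-by-step manual procedure is given in the next subsection. A schematic Python sketch follows. The record fields (patient_id, volume_ml, age, sex) are illustrative assumptions, and allocation is the per-stratum target from the earlier sketch; note that the last-4-digit patient numbers exist only during sampling and are discarded before any testing:

```python
# Sketch: select specimens into the eight strata, skipping low-volume
# specimens, out-of-range ages, duplicate patients, and full strata.
def stratum_of(record):
    """Return a (sex, age-group) key, or None if outside ages 15-54."""
    for lo, hi in [(15, 24), (25, 34), (35, 44), (45, 54)]:
        if lo <= record["age"] <= hi:
            return (record["sex"], f"{lo}-{hi}")
    return None

def select_specimens(residual_specimens, allocation):
    selected = []
    seen = {stratum: set() for stratum in allocation}  # last-4 digits, per stratum
    for rec in residual_specimens:
        stratum = stratum_of(rec)
        if rec["volume_ml"] < 0.2 or stratum is None:
            continue                               # insufficient volume or out of range
        last4 = rec["patient_id"][-4:]
        if last4 in seen[stratum]:
            continue                               # duplicate patient in this stratum
        if len(seen[stratum]) >= allocation[stratum]:
            continue                               # stratum target already reached
        seen[stratum].add(last4)
        selected.append({"survey_no": len(selected) + 1,
                         "sex": rec["sex"],
                         "age_group": stratum[1]})  # race/service recorded similarly
    del seen  # destroy the patient-number link before HIV testing
    return selected
```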
Collection of specimens and corresponding data. When the patients and the laboratory site where specimens will be collected are identified and the total number of patients in each of the eight age and sex strata is determined, data collection forms should be prepared. For example, if all specimens can be collected in one laboratory site and the total number of specimens for men ages 25-34 is 82, four data collection forms (for 25 specimens each) could be labeled for this age-sex stratum and stapled together. The last number for this stratum should be highlighted to make it easy to recognize when the total number for this particular stratum is achieved. The specimen selectors in the laboratory can then take the following steps:
1. Check each specimen after routine testing to determine whether there is enough serum (or plasma) left over. A volume of 0.5 milliliter (ml) or more would be ideal and would allow for extra tests for indeterminate specimens. A minimum of 0.2 ml is enough for the routine HIV serology, however. If there is less than 0.2 ml available, the specimen should be excluded.
2. If there is at least 0.2 ml available, check whether the specimen fits into one of the eight sex-age strata (ages 15-54 years). If not, exclude it and check the next specimen. If yes, check whether there is already a specimen included in the corresponding age-sex stratum with the same patient number (last four digits). If yes, exclude it and check the next specimen.
3. If not already sampled (as indicated by the last 4 digits of the patient number), the specimen should be selected and the last four digits of the patient number and demographic data recorded on the form. Such data may need to be located through the laboratory's computer system or at another place in the hospital with access to patient demographics. A sample of 0.5 ml (at least 0.2 ml, see step 1) should be taken, and the corresponding label, which is attached to the data collection form, should be placed on the tube. No other labels should be attached to the tube holding the sample. Persons aliquoting specimens should wear gloves and a laboratory coat. Some laboratory workers may also prefer to work in a hood, behind a plastic shield, or to wear safety glasses (11).
4. When all forms for a particular stratum are filled (and, thus, the total number of specimens in this stratum is achieved), continue to collect specimens only in the incomplete strata.
5. When the forms of all eight strata are full and before submitting the specimens for HIV testing, cut off and destroy the columns with the last four digits of the patient number.

# Specimen Storage
After the aliquots are collected and the tubes are labeled with their survey numbers, the specimens should be stored in a freezer (at -20°C or colder) until the total sample size is achieved and the serum (or plasma) specimens are tested for HIV antibodies. If the specimens cannot be frozen the day they are collected, they may be temporarily stored in a refrigerator (4°-8°C) for a maximum of 5 days.

# Guidelines for HIV Antibody Testing
The specimens should be tested (see chart) using an FDA-licensed HIV-1 enzyme immunoassay (EIA) kit. If the specimen is nonreactive in the initial test, the specimen is classified as negative. If the specimen is reactive in the initial test, the EIA should be repeated in duplicate, using fresh aliquots of the same specimen. If at least one of the repeated EIAs is reactive, the specimen is classified as "EIA repeatedly reactive" and should be tested with a supplemental test.
The recommended supplemental test is an FDA-licensed Western blot kit. Guidelines for the interpretation and use of the Western blot assay have been published by the Association of State and Territorial Public Health Laboratory Directors and the Centers for Disease Control and Prevention (12). HIV antibody results can be recorded on the data collection form or entered into a computer file. The laboratory technician who performed the HIV antibody testing, but who did not select the specimens to be tested, may enter the HIV antibody results into the appropriate data base. The person(s) who selected the specimens should not see individual test results or the completed data collection forms, as a further protection of the anonymity of the survey.

# Data Management
A sequence of preprinted survey numbers on data collection forms and preassigned sticker labels facilitates recording of the laboratory results directly on the data collection forms. If desired, data can be transferred to a computer (with a data management package such as Epi Info). The survey can be conducted without the use of a computer, however. If a computer is used to avoid duplicate sampling of the same patient, a separate temporary file, containing only the age-sex strata and the last 4 digits of the patients' numbers (and not the survey numbers), should be used during sampling to ensure that no link between patient number and HIV test result is possible. After the sampling is completed and before the HIV tests are performed, the data file with the last 4 digits of the patient number must be deleted. If a computer is used for data analysis, a permanent file with the survey number (but without the last 4 digits of the patient number) and demographic information should be used for entering HIV antibody test results. The permanent data file should not contain any information that links personal identifiers, such as patient numbers, with HIV test results. [Table 4 note: The 95 percent confidence interval is 7.60, 24.43, using the formula in table 3. The overall seroprevalence of this population would be 5.2 percent (52 × 100 percent, divided by 1,000). An approximate 95 percent confidence interval for the seroprevalence estimate, using the formula given below for samples with more than 20 positive specimens, would be 3.97, 6.82.] The principal investigator can use the data base for analysis of aggregated data but should not report line-by-line results. The principal investigator must be responsible for ensuring that inadvertent linkage of HIV antibody test results to a specific patient does not occur. For further protection of the anonymity of the patients, the person(s) collecting the specimens and checking patient records should not see line-by-line HIV test result data. The specimen collector(s) may, however, see grouped summary data.

# Interpretation of the Results
The estimate of seroprevalence in the patient population is x/n, where x is the total number of seropositive patients observed (summed over all strata) and n is the total sample size (number tested). (Multiplying by 100 will express this seroprevalence in percent positive.) Estimates of seroprevalence in a subgroup may also be obtained by dividing the number of seropositives from that subgroup by the number tested in that subgroup. For example, separate estimates of seroprevalence for men and women may be calculated. Approximate confidence limits for the seroprevalence can be obtained with the Poisson distribution.
Thus, confidence limits for seroprevalence in the individual strata or groups of strata may be calculated by using the procedures described in table 3 if 20 or fewer HIV-seropositive specimens are observed. If more than 20 HIV-seropositive specimens are observed, regardless of the size of the sample, the 95 percent confidence interval is calculated by using the following equation:

(x + 1.92 ± 1.96√(x + 0.96)) / n × 100 percent

where x = observed number of positive specimens and n = sample size. As an example, suppose that 21 people are found to be seropositive out of a sample of 275 (seroprevalence = 7.6 percent). Then the 95 percent confidence interval for the seroprevalence is

(21 + 1.92 ± 1.96√(21 + 0.96)) / 275 × 100 percent

In this case, the 95 percent confidence interval is 4.99 percent, 11.67 percent. An example of the interpretation of results obtained by a survey following this protocol is given in table 4.

# Personnel Time
The time required to conduct a rapid-assessment HIV seroprevalence survey depends on familiarity with laboratory-based studies, the size of the hospital, and the number of laboratory specimens available. In addition to the principal investigator's time, the equivalent of one person working full time for 1 to 2 months is needed to complete the specimen collection and handling and the data management. In addition, time is needed for a technician to perform the HIV-antibody testing.
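The large-sample formula quoted in the interpretation section is easy to verify programmatically. A minimal Python sketch, not part of the published protocol, that reproduces the worked figures:

```python
# Sketch: approximate 95% CI (in percent) when more than 20 positive
# specimens are observed, per the formula quoted above.
import math

def seroprevalence_ci_percent(x, n):
    """x = observed positives (x > 20), n = sample size."""
    half_width = 1.96 * math.sqrt(x + 0.96)
    lower = (x + 1.92 - half_width) / n * 100
    upper = (x + 1.92 + half_width) / n * 100
    return lower, upper

print(seroprevalence_ci_percent(21, 275))   # approx. (4.99, 11.67), as in the text
print(seroprevalence_ci_percent(52, 1000))  # approx. (3.97, 6.82), the table 4 example
```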
Although the environment serves as a reservoir for a variety of microorganisms, it is rarely implicated in disease transmission except in the immunocompromised population. Inadvertent exposures to environmental opportunistic pathogens (e.g., Aspergillus spp. and Legionella spp.) or airborne pathogens (e.g., Mycobacterium tuberculosis and varicella-zoster virus) may result in infections with significant morbidity and/or mortality. Lack of adherence to established standards and guidance (e.g., water quality in dialysis, proper ventilation for specialized care areas such as operating rooms, and proper use of disinfectants) can result in adverse patient outcomes in health-care facilities. The objective is to develop an environmental infection-control guideline that reviews and reaffirms strategies for the prevention of environmentally mediated infections, particularly among health-care workers and immunocompromised patients. The recommendations are evidence-based whenever possible. The contributors to this guideline reviewed predominantly English-language articles identified from MEDLINE literature searches, bibliographies from published articles, and infection-control textbooks. Articles dealing with outbreaks of infection due to environmental opportunistic microorganisms and epidemiologic or laboratory experimental studies were reviewed. Current editions of guidelines and standards from relevant organizations were also consulted.

# Outcome Measures
Infections caused by the microorganisms described in this guideline are rare events, and the effect of these recommendations on infection rates in a facility may not be readily measurable. Therefore, the following steps to measure performance are suggested to evaluate these recommendations:
1. Document whether infection-control personnel are actively involved in all phases of a health-care facility's demolition, construction, and renovation. Activities should include performing a risk assessment of the necessary types of construction barriers, and daily monitoring and documenting of the presence of negative airflow within the construction zone or renovation area.
2. Monitor and document daily the negative airflow in airborne infection isolation (AII) rooms and the positive airflow in protective environment (PE) rooms, especially when patients are in these rooms.
3. Perform assays at least once a month by using standard quantitative methods for endotoxin in water used to reprocess hemodialyzers, and for heterotrophic, mesophilic bacteria in water used to prepare dialysate and for hemodialyzer reprocessing.

# Executive Summary
The Guidelines for Environmental Infection Control in Health-Care Facilities is a compilation of recommendations for the prevention and control of infectious diseases that are associated with health-care environments. This document
a. revises multiple sections from previous editions of the Centers for Disease Control and Prevention document titled Guideline for Handwashing and Hospital Environmental Control; 1, 2
b. incorporates discussions of air and water environmental concerns from CDC's Guideline for the Prevention of Nosocomial Pneumonia; 3
c. consolidates relevant environmental infection-control measures from other CDC guidelines; and
d. includes two topics not addressed in previous CDC guidelines: infection-control concerns related to animals in health-care facilities and water quality in hemodialysis settings.
Part I of this report, Background Information: Environmental Infection Control in Health-Care Facilities, provides a comprehensive review of the scientific literature. Attention is given to engineering and infection-control concerns during construction, demolition, renovation, and repairs of health-care facilities. Use of an infection-control risk assessment is strongly supported before the start of these or any other activities expected to generate dust or water aerosols. Also reviewed in Part I are infection-control measures used to recover from catastrophic events (e.g., flooding, sewage spills, loss of electricity and ventilation, and disruption of the water supply) and the limited effects of environmental surfaces, laundry, plants, animals, medical wastes, cloth furnishings, and carpeting on disease transmission in health-care facilities. Part II of this guideline, Recommendations for Environmental Infection Control in Health-Care Facilities, outlines environmental infection control in health-care facilities, describing measures for preventing infections associated with air, water, and other elements of the environment. These recommendations represent the views of different divisions within CDC's National Center for Infectious Diseases (NCID) (e.g., the Division of Healthcare Quality Promotion and the Division of Bacterial and Mycotic Diseases) and the consensus of the Healthcare Infection Control Practices Advisory Committee (HICPAC), a 12-member group that advises CDC on concerns related to the surveillance, prevention, and control of health-care associated infections, primarily in U.S. health-care facilities. 10 In 1999, HICPAC's infection-control focus was expanded from acute-care hospitals to all venues where health care is provided (e.g., outpatient surgical centers, urgent care centers, clinics, outpatient dialysis centers, physicians' offices, and skilled nursing facilities). The topics addressed in this guideline are applicable to the majority of health-care venues in the United States. This document is intended for use primarily by infection-control professionals (ICPs), epidemiologists, employee health and safety personnel, information system specialists, administrators, engineers, facility managers, environmental service professionals, and architects for health-care facilities. Key recommendations include
a. infection-control impact of ventilation-system and water-system performance;
b. establishment of a multidisciplinary team to conduct an infection-control risk assessment;
c. use of dust-control procedures and barriers during construction, repair, renovation, or demolition;
d. environmental infection-control measures for special care areas with patients at high risk;
e. use of airborne-particle sampling to monitor the effectiveness of air filtration and dust-control measures;
f. procedures to prevent airborne contamination in operating rooms when infectious tuberculosis patients require surgery;
g. guidance regarding appropriate indications for routine culturing of water as part of a comprehensive control program for legionellae;
h. guidance for recovering from water-system disruptions, water leaks, and natural disasters;
i. infection-control concepts for equipment that uses water from main lines;
j. environmental surface cleaning and disinfection strategies with respect to antibiotic-resistant microorganisms;
k. infection-control procedures for health-care laundry;
l. use of animals in health care for activities and therapy;
m.
managing the presence of service animals in health-care facilities;
n. infection-control strategies for when animals receive treatment in human health-care facilities; and
o. a call to reinstate the practice of inactivating amplified cultures and stocks of microorganisms on-site during medical waste treatment.

Whenever possible, the recommendations in Part II are based on data from well-designed scientific studies. However, certain of these studies were conducted by using narrowly defined patient populations or for specific health-care settings (e.g., hospitals versus long-term care facilities), making generalization of findings potentially problematic. Construction standards for hospitals or other health-care facilities may not apply to residential home-care units. Similarly, infection-control measures indicated for immunosuppressed patient care are usually not necessary in those facilities where such patients are not present. Other recommendations were derived from knowledge gained during infectious disease investigations in health-care facilities, where successful termination of the outbreak was often the result of multiple interventions, the majority of which cannot be independently and rigorously evaluated. This is especially true for construction situations involving air or water. Other recommendations are derived from empiric engineering concepts and may reflect an industry standard rather than an evidence-based conclusion. Where recommendations refer to guidance from the American Institute of Architects (AIA) (AIA guidance has been superseded by that of the Facilities Guidelines Institute), the statements reflect standards intended for new construction or renovation. Existing structures and engineered systems are expected to be in continued compliance with the standards in effect at the time of construction or renovation. Also, in the absence of scientific confirmation, certain infection-control recommendations that cannot be rigorously evaluated are based on a strong theoretical rationale and suggestive evidence. Finally, certain recommendations are derived from existing federal regulations. The references and the appendices comprise Parts III and IV of this document, respectively.

Infections caused by the microorganisms described in these guidelines are rare events, and the effect of these recommendations on infection rates in a facility may not be readily measurable. Therefore, the following steps to measure performance are suggested to evaluate these recommendations (Box 1):

Box 1. Environmental infection control: performance measures
1. Document whether infection-control personnel are actively involved in all phases of a health-care facility's demolition, construction, and renovation. Activities should include performing a risk assessment of the necessary types of construction barriers, and daily monitoring and documenting of the presence of negative airflow within the construction zone or renovation area.
2. Monitor and document daily the negative airflow in airborne infection isolation (AII) rooms and the positive airflow in protective environment (PE) rooms, especially when patients are in these rooms.
3. Perform assays at least once a month by using standard quantitative methods for endotoxin in water used to reprocess hemodialyzers, and for heterotrophic and mesophilic bacteria in water used to prepare dialysate and for hemodialyzer reprocessing.
4.
4. Evaluate possible environmental sources (e.g., water, laboratory solutions, or reagents) of specimen contamination when nontuberculous mycobacteria (NTM) of unlikely clinical importance are isolated from clinical cultures. If environmental contamination is found, eliminate the probable mechanisms.
5. Document policies to identify and respond to water damage. Such policies should result in either repair and drying of wet structural or porous materials within 72 hours, or removal of the wet material if drying is unlikely within 72 hours.
Topics outside the scope of this document include a. noninfectious adverse events (e.g., sick building syndrome); b. environmental concerns in the home; c. home health care; d. bioterrorism; and e. health-care associated foodborne illness. This document includes only limited discussion of handwashing/hand hygiene; standard precautions; and infection-control measures used to prevent instrument or equipment contamination during patient care (e.g., preventing waterborne contamination of nebulizers or ventilator humidifiers). These topics are mentioned only if they are important in minimizing the transfer of pathogens to and from persons or equipment and the environment. Although the document discusses principles of cleaning and disinfection as they are applied to maintenance of environmental surfaces, the full discussion of sterilization and disinfection of medical instruments and direct patient-care devices is deferred for inclusion in the Guideline for Disinfection and Sterilization in Health-Care Facilities.
# Part I. Background Information: Environmental Infection Control in Health-Care Facilities
# A. Introduction
The health-care environment contains a diverse population of microorganisms, but only a few are significant pathogens for susceptible humans. Microorganisms are present in great numbers in moist, organic environments, but some also can persist under dry conditions. Although pathogenic microorganisms can be detected in air and water and on fomites, assessing their role in causing infection and disease is difficult. 11 Only a few reports clearly delineate a "cause and effect" with respect to the environment and, in particular, housekeeping surfaces. Eight criteria are used to evaluate the strength of evidence for an environmental source or means of transmission of infectious agents (Box 2). 11,12 Applying these criteria to disease investigations allows scientists to assess the contribution of the environment to disease transmission. An example of this application is the identification of a pathogen (e.g., vancomycin-resistant enterococci [VRE]) on an environmental surface during an outbreak. The presence of the pathogen does not establish its causal role; its transmission from source to host could be through indirect means (e.g., via hand transferral). 11 The surface, therefore, would be considered one of a number of potential reservoirs for the pathogen, but not the "de facto" source of exposure. An understanding of how infection occurs after exposure, based on the principles of the "chain of infection," is also important in evaluating the contribution of the environment to health-care associated disease. 13 All of the components of the "chain" must be operational for infection to occur (Box 3).
Box 2. Eight criteria for evaluating the strength of evidence for environmental sources of infection*+
1. The organism can survive after inoculation onto the fomite.
2. The organism can be cultured from in-use fomites.
3. The organism can proliferate in or on the fomite.
4. Some measure of acquisition of infection cannot be explained by other recognized modes of transmission.
5. Retrospective case-control studies show an association between exposure to the fomite and infection.
6. Prospective case-control studies may be possible when more than one similar type of fomite is in use.
7. Prospective studies allocating exposure to the fomite to a subset of patients show an association between exposure and infection.
8. Decontamination of the fomite results in the elimination of infection transmission.
The presence of a susceptible host is one of the components of the "chain of infection" that underscores the importance of the health-care environment and opportunistic pathogens on fomites and in air and water. As a result of advances in medical technology and therapies (e.g., cytotoxic chemotherapy and transplantation medicine), more patients are becoming immunocompromised in the course of treatment and are therefore at increased risk for acquiring health-care associated opportunistic infections. Trends in health-care delivery (e.g., early discharge of patients from acute-care facilities) also are changing the distribution of patient populations and increasing the number of immunocompromised persons in nonacute-care hospitals. According to the American Hospital Association (AHA), in 1998, the number of hospitals in the United States totaled 6,021; these hospitals had a total of 1,013,000 beds, 14 representing a 5.5% decrease in the number of acute-care facilities and a 10.2% decrease in the number of beds over the 5-year period 1994-1998. 14 In addition, the total average daily number of patients receiving care in U.S. acute-care hospitals in 1998 was 662,000 (65.4% of capacity), 36.5% less than the 1978 average of 1,042,000. 14 As the number of acute-care hospitals declines, the length of stay in these facilities is concurrently decreasing, particularly for immunocompetent patients. Those patients remaining in acute-care facilities are likely to be those requiring extensive medical interventions, who are therefore at high risk for opportunistic infection. The growing population of severely immunocompromised patients is at odds with demands on the health-care industry to remain viable in the marketplace; to incorporate modern equipment, new diagnostic procedures, and new treatments; and to construct new facilities. Increasing numbers of health-care facilities are likely to be faced with construction in the near future as hospitals consolidate to reduce costs, defer care to ambulatory centers and satellite clinics, and try to create more "home-like" acute-care settings. In 1998, approximately 75% of health-care associated construction projects focused on renovation of existing outpatient facilities or the building of such facilities; 15 the number of projects associated with outpatient health care rose by 17% from 1998 through 1999. 16 An aging population is also creating increasing demand for assisted-living facilities and skilled nursing centers. Construction of assisted-living facilities in 1998 increased 49% from the previous year, with 138 projects completed at a cost of $703 million. 16 Overall, from 1998 to 1999, health-care associated construction costs increased by 28.5%, from $11.56 billion to $14.86 billion. 16 Environmental disturbances associated with construction activities near health-care facilities pose airborne and waterborne disease risks for the substantial number of patients who are at risk for health-care associated opportunistic infections.
The increasing age of hospitals and other health-care facilities is also generating an ongoing need for repair and remediation work (e.g., installing wiring for new information systems, removing old sinks, and repairing elevator shafts) that can introduce or increase contamination of the air and water in patient-care environments. Aging equipment, deferred maintenance, and natural disasters provide additional mechanisms for the entry of environmental pathogens into high-risk patient-care areas. Architects, engineers, construction contractors, environmental health scientists, and industrial hygienists historically have directed the design and function of hospitals' physical plants. Increasingly, however, because of the growth in the number of susceptible patients and the increase in construction projects, the involvement of hospital epidemiologists and infection-control professionals is required. These experts help make plans for building, maintaining, and renovating health-care facilities to ensure that the adverse impact of the environment on the incidence of health-care associated infections is minimal. The following are examples of adverse outcomes that could have been prevented had such experts been involved in the planning process:
- transmission of infections caused by Mycobacterium tuberculosis, varicella-zoster virus (VZV), and measles (i.e., rubeola) facilitated by inappropriate air-handling systems in health-care facilities; 6
- disease outbreaks caused by Aspergillus spp., Mucoraceae, 20 and Penicillium spp. associated with the absence of environmental controls during periods of health-care facility-associated construction; 21
- infections and/or colonizations of patients and staff with vancomycin-resistant Enterococcus faecium and Clostridium difficile acquired indirectly from contact with organisms present on environmental surfaces in health-care facilities; and
- outbreaks and pseudoepidemics of legionellae, 26,27 Pseudomonas aeruginosa, 28-30 and the nontuberculous mycobacteria (NTM) 31,32 linked to water and aqueous solutions used in health-care facilities.
The purpose of this guideline is to provide useful information for both health-care professionals and engineers in efforts to provide a safe environment in which quality health care may be provided to patients. The recommendations herein provide guidance to minimize the risk for and prevent transmission of pathogens in the indoor environment.
# B. Key Terms Used in this Guideline
Although Appendix A provides definitions for terms discussed in Part I, several terms that pertain to specific patient-care areas and patients who are at risk for health-care associated opportunistic infections are presented here. Specific engineering parameters for these care areas are discussed more fully in the text. Airborne Infection Isolation (AII) refers to the isolation of patients infected with organisms spread via airborne droplet nuclei <5 μm in diameter. This isolation area receives numerous air changes per hour (ACH) (≥12 ACH for new construction as of 2001; ≥6 ACH for construction before 2001) and is under negative pressure, such that the direction of the airflow is from the outside adjacent space (e.g., corridor) into the room. The air in an AII room is preferably exhausted to the outside, but it may be recirculated provided that the return air is filtered through a high-efficiency particulate air (HEPA) filter. The use of personal respiratory protection is also indicated for persons entering these rooms.
Protective Environment (PE) is a specialized patient-care area, usually in a hospital, with a positive airflow relative to the corridor (i.e., air flows from the room to the outside adjacent space). The combination of HEPA filtration, high numbers of air changes per hour (≥12 ACH), and minimal leakage of air into the room creates an environment that can safely accommodate patients who have undergone allogeneic hematopoietic stem cell transplant (HSCT). Immunocompromised patients are those whose immune mechanisms are deficient because of immunologic disorders (e.g., human immunodeficiency virus infection, congenital immune deficiency syndrome, or chronic diseases) or immunosuppressive therapy (e.g., radiation, cytotoxic chemotherapy, anti-rejection medication, and steroids). Immunocompromised patients who are identified as high-risk patients have the greatest risk of infection caused by airborne or waterborne microorganisms. Patients in this subset include those who are severely neutropenic for prolonged periods of time (i.e., an absolute neutrophil count of ≤500 cells/μL), allogeneic HSCT patients, and those who have received intensive chemotherapy (e.g., childhood acute myelogenous leukemia patients).
# C. Air
# 1. Modes of Transmission of Airborne Diseases
A variety of airborne infections in susceptible hosts can result from exposures to clinically significant microorganisms released into the air when environmental reservoirs (i.e., soil, water, dust, and decaying organic matter) are disturbed. Once these materials are brought indoors into a health-care facility by any of a number of vehicles (e.g., people, air currents, water, construction materials, and equipment), the attendant microorganisms can proliferate in various indoor ecological niches and, if subsequently dispersed into the air, serve as a source for airborne health-care associated infections. Respiratory infections can be acquired from exposure to pathogens contained either in droplets or droplet nuclei. Exposure to microorganisms in droplets (e.g., through aerosolized oral and nasal secretions from infected patients 33) constitutes a form of direct contact transmission. When droplets are produced during a sneeze or cough, a cloud of infectious particles >5 μm in size is expelled, resulting in the potential exposure of susceptible persons within 3 feet of the source person. 6 Examples of pathogens spread in this manner are influenza virus, rhinoviruses, adenoviruses, and respiratory syncytial virus (RSV). Because these agents primarily are transmitted directly and because the droplets tend to fall out of the air quickly, measures to control air flow in a health-care facility (e.g., use of negative-pressure rooms) generally are not indicated for preventing the spread of diseases caused by these agents. Strategies to control the spread of these diseases are outlined in another guideline. 3 The spread of airborne infectious diseases via droplet nuclei is a form of indirect transmission. 34 Droplet nuclei are the residuals of droplets that, when suspended in air, subsequently dry and produce particles ranging in size from 1-5 μm. These particles can a. contain potentially viable microorganisms, b. be protected by a coat of dry secretions, c. remain suspended indefinitely in air, and d. be transported over long distances. The microorganisms in droplet nuclei persist in favorable conditions (e.g., a dry, cool atmosphere with little or no direct exposure to sunlight or other sources of radiation).
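The buoyancy of particles in this size range can be illustrated with Stokes' law, which gives the terminal settling velocity of a small sphere in still air. The short sketch below is illustrative only; the unit particle density and the air viscosity are standard textbook assumptions, not figures from this guideline:

```python
import math

def stokes_settling_velocity(diameter_um, particle_density=1000.0):
    """Terminal settling velocity (cm/s) of a small sphere in still air.

    Stokes' law: v = rho * g * d^2 / (18 * mu).
    Assumes unit-density (1000 kg/m^3) particles and air viscosity
    mu = 1.8e-5 Pa*s at room temperature; illustrative values only.
    """
    g = 9.81                # gravitational acceleration, m/s^2
    mu = 1.8e-5             # dynamic viscosity of air, Pa*s
    d = diameter_um * 1e-6  # diameter converted to meters
    v = particle_density * g * d**2 / (18.0 * mu)  # m/s
    return v * 100.0        # convert to cm/s

for d in (1, 3, 5):
    v = stokes_settling_velocity(d)
    print(f"{d} um particle: {v:.4f} cm/s (~{v * 36:.1f} m/h)")
```

Under these assumptions, a 3 μm particle settles on the order of 1 meter per hour, consistent with the settling velocity cited below for Aspergillus fumigatus spores; even gentle room convection exceeds this, so such particles travel essentially wherever the airstream goes.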
Pathogenic microorganisms that can be spread via droplet nuclei include Mycobacterium tuberculosis, VZV, measles virus (i.e., rubeola), and smallpox virus (i.e., variola major). 6 Several environmental pathogens have life-cycle forms that are similar in size to droplet nuclei and may exhibit similar behavior in the air. The spores of Aspergillus fumigatus have a diameter of 2-3.5 μm, with a settling velocity estimated at 0.03 cm/second (or about 1 meter/hour) in still air. With this enhanced buoyancy, the spores, which resist desiccation, can remain airborne indefinitely in air currents and travel far from their source. 35
# 2. Airborne Infectious Diseases in Health-Care Facilities
# a. Aspergillosis and Other Fungal Diseases
Aspergillosis is caused by molds belonging to the genus Aspergillus. Aspergillus spp. are prototype health-care acquired pathogens associated with dusty or moist environmental conditions. Clinical and epidemiologic aspects of aspergillosis (Table 1) are discussed extensively in another guideline. 3
# Table 1. Clinical and epidemiologic characteristics of aspergillosis
Modes of transmission
- Airborne transmission of fungal spores; direct inhalation; direct inoculation from environmental sources (rare)
Supplies and equipment can become contaminated with Aspergillus spp. spores and serve as sources of infection if stored in dusty areas. 57 Most cases of aspergillosis are caused by Aspergillus fumigatus, a thermotolerant/thermophilic fungus capable of growing over a temperature range of 53.6°F-127.4°F (12°C-53°C); optimal growth occurs at approximately 104°F (40°C), a temperature inhibitory to most other saprophytic fungi. 99 It can use cellulose or sugars as carbon sources; because its respiratory process requires an ample supply of carbon, decomposing organic matter is an ideal substrate. (For AIA guidance on Aspergillus spp., see Table 2.) Other opportunistic fungi that have occasionally been linked with health-care associated infections are members of the order Mucorales (e.g., Rhizopus spp.) and miscellaneous moniliaceous molds (e.g., Fusarium spp. and Penicillium spp.) (Table 2). Many of these fungi can proliferate in moist environments (e.g., water-damaged wood and building materials). Some fungi (e.g., Fusarium spp. and Pseudoallescheria spp.) also can be airborne pathogens. 100 As with aspergillosis, a major risk factor for disease caused by any of these pathogens is the host's severe immunosuppression from either underlying disease or immunosuppressive therapy. 101,102
# Table 2. Environmental fungal pathogens: entry into and contamination of the health-care facility
Fungal pathogen: implicated environmental vehicle(s)
Aspergillus spp.
- Improperly functioning ventilation systems 20,46,47,97,98,103,104
- Air filters+ 17,18
- Air filter frames 17,18
- Window air conditioners 96
- Backflow of contaminated air 107
- Air exhaust contamination+ 104
- False ceilings 48,57,97,108
- Fibrous insulation and perforated metal ceilings 66
- Acoustic ceiling tiles, plasterboard 18,109
- Fireproofing material 48,49
- Damp wood building materials 49
- Opening doors to construction site 110
- Construction 69
- Open windows 20,108,111
- Air filters 105
- Topical anesthetic 117
Acremonium spp.
- Air filters 105
Cladosporium spp.
- Air filters 105
Sporothrix cyanescens
- Construction (pseudoepidemic) 118
+ The American Institute of Architects (AIA) standards stipulate that for new or renovated construction
o exhaust outlets are to be placed >25 feet from air intake systems,
o the bottom of outdoor air intakes for HVAC systems should be 6 feet above ground or 3 feet above roof level, and
o exhaust outlets from contaminated areas are to be situated above roof level and arranged to minimize recirculation of exhausted air back into the building. 120
Pigeons, their droppings, and their roosts are associated with spread of Aspergillus, Cryptococcus, and Histoplasma spp. At least three outbreaks have been linked to contamination of filtering systems from bird droppings. 98,103,104 Pigeon mites may gain access into a health-care facility through the ventilation system. 119
Infections due to Cryptococcus neoformans, Histoplasma capsulatum, or Coccidioides immitis can occur in health-care settings if nearby ground is disturbed and a malfunction of the facility's air-intake components allows these pathogens to enter the ventilation system. C. neoformans is a yeast, usually 4-8 μm in size. However, viable particles of <2 μm diameter (and thus permissive to alveolar deposition) have been found in soil contaminated with bird droppings, particularly from pigeons. 98,103,104,121 H. capsulatum, with infectious microconidia ranging in size from 2-5 μm, is endemic in the soil of the central river valleys of the United States. Substantial numbers of these infectious particles have been associated with chicken coops and the roosts of blackbirds. 98,103,104,122 Several outbreaks of histoplasmosis have been associated with disruption of the environment; construction activities in an endemic area may be a potential risk factor for health-care acquired airborne infection. 123,124 C. immitis, with arthrospores of 3-5 μm diameter, has similar potential, especially in the endemic southwestern United States and during seasons of drought followed by heavy rainfall. After the 1994 earthquake centered near Northridge, California, the incidence of coccidioidomycosis in the surrounding area exceeded the historical norm. 125 Emerging evidence suggests that Pneumocystis carinii, now classified as a fungus, may be spread via airborne, person-to-person transmission. 126 Controlled studies in animals first demonstrated that P. carinii could be spread through the air. 127 More recent studies in health-care settings have detected nucleic acids of P. carinii in air samples from areas frequented or occupied by P. carinii-infected patients but not in control areas that are not occupied by these patients. 128,129 Clusters of cases have been identified among immunocompromised patients who had contact with a source patient and with each other. Recent studies have examined the presence of P. carinii DNA in oropharyngeal washings and the nares of infected patients, their direct contacts, and persons with no direct contact. 130,131 Molecular analysis of the DNA by polymerase chain reaction (PCR) provides evidence for airborne transmission of P. carinii from infected patients to direct contacts, but immunocompetent contacts tend to become transiently colonized rather than infected. 131 The role of colonized persons in the spread of P. carinii pneumonia (PCP) remains to be determined.
At present, specific modifications to ventilation systems to control the spread of PCP in a health-care facility are not indicated. Current recommendations outline isolation procedures to minimize or eliminate contact of immunocompromised patients not on PCP prophylaxis with PCP-infected patients. 6,132
# b. Tuberculosis and Other Bacterial Diseases
The bacterium most commonly associated with airborne transmission is Mycobacterium tuberculosis. A comprehensive review of the microbiology and epidemiology of M. tuberculosis and guidelines for tuberculosis (TB) infection control have been published. 4,133,134 A summary of the clinical and epidemiologic information from these materials is provided in this guideline (Table 3).
# Table 3. Clinical and epidemiologic characteristics of Mycobacterium tuberculosis*
Factors affecting severity and outcomes
- Concentration of droplet nuclei in air, duration of exposure
- Age at infection
- Immunosuppression due to therapy or disease, underlying chronic medical conditions, history of malignancies or lesions of the lungs
Occurrence
- Worldwide; incidence in the United States is 5.6 cases/100,000 population (2001) 136
Mortality rate
- 930 deaths in the United States (1999) 136
Chemoprophylaxis/treatment
- Treatment of latent infection includes isoniazid (INH) or rifampin (RIF) 4,134
- Directly observed therapy (DOT) for active cases as indicated: INH, RIF, pyrazinamide (PZA), ethambutol (EMB), and streptomycin (SM) in various combinations determined by prevalent levels of specific resistance 4,134
- Consult therapy guidelines for specific treatment indications 139
* Material in this table is compiled from references 4, 133-141.
M. tuberculosis is carried by droplet nuclei generated when persons (primarily adults and adolescents) who have pulmonary or laryngeal TB sneeze, cough, speak, or sing; 139 normal air currents can keep these particles airborne for prolonged periods and spread them throughout a room or building. 142 However, transmission of TB has also occurred from mycobacteria aerosolized during the provision of care (e.g., wound/lesion care or handling of infectious peritoneal dialysis fluid) for extrapulmonary TB patients. 135,140 Gram-positive cocci (i.e., Staphylococcus aureus and group A beta-hemolytic streptococci), also important health-care associated pathogens, are resistant to inactivation by drying and can persist in the environment and on environmental surfaces for extended periods. These organisms can be shed from heavily colonized persons and discharged into the air. Airborne dispersal of S. aureus is directly associated with the concentration of the bacterium in the anterior nares. 143 Approximately 10% of healthy carriers will disseminate S. aureus into the air, and some persons become more effective disseminators of S. aureus than others. The dispersal of S. aureus into air can be exacerbated by concurrent viral upper respiratory infection, thereby turning a carrier into a "cloud shedder." 149 Outbreaks of surgical site infections (SSIs) caused by group A beta-hemolytic streptococci have been traced to airborne transmission from colonized operating-room personnel to patients. In these situations, the strain causing the outbreak was recovered from the air in the operating room 150,151,154 or on settle plates in a room in which the carrier exercised. S. aureus and group A streptococci have not been linked to airborne transmission outside of operating rooms, burn units, and neonatal nurseries. 155,156
Transmission of these agents occurs primarily via contact and droplets. Other gram-positive bacteria linked to airborne transmission include Bacillus spp., which are capable of sporulation as environmental conditions become less favorable to support their growth. Outbreaks and pseudo-outbreaks have been attributed to Bacillus cereus in maternity, pediatric, intensive care, and bronchoscopy units; many of these episodes were secondary to environmental contamination. Gram-negative bacteria rarely are associated with episodes of airborne transmission, because they generally require moist environments for persistence and growth. The main exception is Acinetobacter spp., which can withstand the inactivating effects of drying. In one epidemiologic investigation of bloodstream infections among pediatric patients, identical Acinetobacter spp. were cultured from the patients, the air, and room air conditioners in a nursery. 161 Aerosols generated from showers and faucets may potentially contain legionellae and other gram-negative waterborne bacteria (e.g., Pseudomonas aeruginosa). Exposure to these organisms is through direct inhalation. However, because water is the source of the organisms and exposure occurs in the vicinity of the aerosol, the diseases associated with such aerosols and the prevention measures used to curtail their spread are discussed in another section of the guideline (see Part I: Water).
# c. Airborne Viral Diseases
Some human viruses are transmitted from person to person via droplet aerosols, but very few viruses are consistently airborne in transmission (i.e., routinely suspended in an infective state in air and capable of spreading great distances), and health-care associated outbreaks of airborne viral disease are limited to a few agents. Consequently, infection-control measures used to prevent spread of these viral diseases in health-care facilities primarily involve patient isolation, vaccination of susceptible persons, and antiviral therapy as appropriate, rather than measures to control air flow or quality. 6 Infections caused by VZV frequently are described in health-care facilities. Health-care associated airborne outbreaks of VZV infections from patients with primary infection and disseminated zoster have been documented; patients with localized zoster have, on rare occasions, also served as source patients for outbreaks in health-care facilities. VZV infection can be prevented by vaccination, although patients who develop a rash within 6 weeks of receiving varicella vaccine or who develop breakthrough varicella following exposure should be considered contagious. 167 Viruses whose major mode of transmission is via droplet contact rarely have caused clusters of infections in group settings through airborne routes. The factors facilitating airborne distribution of these viruses in an infective state are unknown, but a presumed requirement is a source patient in the early stage of infection who is shedding large numbers of viral particles into the air. Airborne transmission of measles has been documented in health-care facilities. In addition, institutional outbreaks of influenza virus infections have occurred predominantly in nursing homes, and less frequently in medical and neonatal intensive care units, chronic-care areas, HSCT units, and pediatric wards.
Some evidence supports airborne transmission of influenza viruses by droplet nuclei, 181,182 and case clusters in pediatric wards suggest that droplet nuclei may play a role in transmitting certain respiratory pathogens (e.g., adenoviruses and respiratory syncytial virus [RSV]). 177,183,184 Some evidence also supports airborne transmission of enteric viruses. An outbreak of a Norwalk-like virus infection involving more than 600 staff personnel over a 3-week period was investigated in a Toronto, Ontario, hospital in 1985; common sources (e.g., food and water) were ruled out during the investigation, leaving airborne spread as the most likely mode of transmission. 185 Smallpox virus, a potential agent of bioterrorism, is spread predominantly via direct contact with infectious droplets, but it also can be associated with airborne transmission. 186,187 A German hospital study from 1970 documented the ability of this virus to spread over considerable distances and cause infection at low doses in a well-vaccinated population; factors potentially facilitating transmission in this situation included a patient with cough and an extensive rash, indoor air with low relative humidity, and faulty ventilation patterns resulting from hospital design (e.g., open windows). 188 Smallpox patients with extensive rash are more likely to have lesions present on mucous membranes and therefore have greater potential to disseminate virus into the air. 188 In addition to the smallpox transmission in Germany, two cases of laboratory-acquired smallpox virus infection in the United Kingdom in 1978 also were thought to be caused by airborne transmission. 189
Ebola Virus Disease Update: The recommendations in this guideline for Ebola have been superseded by these CDC documents:
- Infection Prevention and Control Recommendations for Hospitalized Patients with Known or Suspected Ebola Virus Disease in U.S. Hospitals
- Interim Guidance for Environmental Infection Control in Hospitals for Ebola Virus
See CDC's Ebola Virus Disease website for current information on how Ebola virus is transmitted.
Airborne transmission may play a role in the natural spread of hantaviruses and certain hemorrhagic fever viruses (e.g., Ebola, Marburg, and Lassa), but evidence for airborne spread of these agents in health-care facilities is inconclusive. 190 Although hantaviruses can be transmitted when aerosolized from rodent excreta, 191,192 person-to-person spread of hantavirus infection from source patients has not occurred in health-care facilities. Nevertheless, health-care workers are advised to contain potentially infectious aerosols and to wear National Institute for Occupational Safety and Health (NIOSH) approved respiratory protection when working with this agent in laboratories or autopsy suites. 196 Lassa virus transmission via aerosols has been demonstrated in the laboratory and incriminated in health-care associated infections in Africa, but airborne spread of this agent in hospitals in developed nations likely is inefficient. 200,201 Yellow fever is considered to be a viral hemorrhagic fever agent with high aerosol infectivity potential, but health-care associated transmission of this virus has not been described. 202 Viral hemorrhagic fever diseases primarily occur after direct exposure to infected blood and body fluids, and the use of standard and droplet precautions prevents transmission early in the course of these illnesses. 203,204
However, whether these viruses can persist in droplet nuclei that might remain after droplet production from coughs or vomiting in the latter stages of illness is unknown. 205 Although the use of a negative-pressure room is not required during the early stages of illness, its use might be prudent at the time of hospitalization to avoid the need for subsequent patient transfer. Current CDC guidelines recommend negative-pressure rooms with anterooms for patients with hemorrhagic fever and the use of HEPA respirators by persons entering these rooms when the patient has prominent cough, vomiting, diarrhea, or hemorrhage. 6,203 Face shields or goggles will help to prevent mucous-membrane exposure to potentially aerosolized infectious material in these situations. If an anteroom is not available, portable, industrial-grade HEPA filter units can be used to provide the equivalent of additional ACH.
# Table 4. Microorganisms associated with airborne transmission*
Numerous reports in health-care facilities
- Bacteria: Mycobacterium tuberculosis+
- Viruses: Measles (rubeola) virus, varicella-zoster virus
Occasional reports in health-care facilities (atypical)
- Fungi: Acremonium spp., 105,206 Fusarium spp., 102 Pseudoallescheria boydii, 100 Scedosporium spp., 116 Sporothrix cyanescens ¶ 118
- Bacteria: Acinetobacter spp., 161 Bacillus spp., ¶ 160,207 Brucella spp., Staphylococcus aureus, 148,156 group A Streptococcus 151
- Viruses: Smallpox virus (variola), § 188,189 influenza viruses, 181,182 respiratory syncytial virus, 183 adenoviruses, 184 Norwalk-like virus 185
No reports in health-care facilities; known to be airborne outside
- Fungi: Coccidioides immitis, 125 Cryptococcus spp., 121 Histoplasma capsulatum 124
- Bacteria: Coxiella burnetii (Q fever) 212
- Viruses: Hantaviruses, 193,195 Lassa virus, 205 Marburg virus, 205 Ebola virus, † 205 Crimean-Congo virus 205
Under investigation
- Fungi: Pneumocystis carinii 131; Bacteria: n/a; Viruses: n/a
* This list excludes microorganisms transmitted from aerosols derived from water.
# 3. Heating, Ventilation, and Air Conditioning Systems in Health-Care Facilities
# a. Basic Components and Operations
Heating, ventilation, and air conditioning (HVAC) systems in health-care facilities are designed to maintain the indoor air temperature and humidity at comfortable levels for staff, patients, and visitors; control odors; remove contaminated air; facilitate the air-handling requirements to protect susceptible staff and patients from airborne health-care associated pathogens; and minimize the risk for transmission of airborne pathogens from infected patients. 35,120 An HVAC system includes an outside air inlet or intake; filters; humidity modification mechanisms (i.e., humidity control in summer, humidification in winter); heating and cooling equipment; fans; ductwork; air exhaust or out-takes; and registers, diffusers, or grilles for proper distribution of the air (Figure 1). 213,214 Decreased performance of health-care facility HVAC systems, filter inefficiencies, improper installation, and poor maintenance can contribute to the spread of health-care associated airborne infections. The American Institute of Architects (AIA) has published guidelines for the design and construction of new health-care facilities and for renovation of existing facilities. These AIA guidelines address indoor air-quality standards (e.g., ventilation rates, temperature levels, humidity levels, pressure relationships, and minimum air changes per hour) specific to each zone or area in health-care facilities (e.g., operating rooms, laboratories, diagnostic areas, patient-care areas, and support departments). 120
More than 40 state agencies that license health-care facilities have either incorporated or adopted by reference these guidelines into their state standards. JCAHO, through its surveys, ensures that facilities are in compliance with the ventilation guidelines of this standard for new construction and renovation.
# Figure 1. Diagram of a ventilation system*
Outdoor air and recirculated air pass through air cleaners (e.g., filter banks) designed to reduce the concentration of airborne contaminants. Air is conditioned for temperature and humidity before it enters the occupied space as supply air. Infiltration is air leakage inward through cracks and interstitial spaces of walls, floors, and ceilings. Exfiltration is air leakage outward through these same cracks and spaces. Return air is largely exhausted from the system, but a portion is recirculated with fresh, incoming air.
* Used with permission of the publisher of reference 214 (ASHRAE).
Engineering controls to contain or prevent the spread of airborne contaminants center on local exhaust ventilation, general ventilation, and air cleaning. 4 General ventilation encompasses a. dilution and removal of contaminants via well-mixed air distribution of filtered air, b. directing contaminants toward exhaust registers and grilles via uniform, non-mixed airflow patterns, c. pressurization of individual spaces relative to all other spaces, and d. pressurization of buildings relative to the outdoors and other attached buildings. A centralized HVAC system operates as follows. Outdoor air enters the system, where low-efficiency or "roughing" filters remove large particulate matter and many microorganisms. The air enters the distribution system for conditioning to appropriate temperature and humidity levels, passes through an additional bank of filters for further cleaning, and is delivered to each zone of the building. After the conditioned air is distributed to the designated space, it is withdrawn through a return duct system and delivered back to the HVAC unit. A portion of this "return air" is exhausted to the outside, while the remainder is mixed with outdoor air for dilution and filtered for removal of contaminants. 215 Air from toilet rooms or other soiled areas is usually exhausted directly to the atmosphere through a separate duct exhaust system. Air from rooms housing tuberculosis patients is exhausted to the outside if possible, or passed through a HEPA filter before recirculation. Ultraviolet germicidal irradiation (UVGI) can be used as an adjunct air-cleaning measure, but it cannot replace HEPA filtration.
# b. Filtration
i. Filter Types and Methods of Filtration
Filtration, the physical removal of particulates from air, is the first step in achieving acceptable indoor air quality and is the primary means of cleaning the air. Five methods of filtration can be used (Table 5). During filtration, outdoor air passes through two filter beds or banks (with efficiencies of 20%-40% and ≥90%, respectively) for effective removal of particles 1-5 μm in diameter. 35,120 The low-to-medium efficiency filters in the first bank have low resistance to airflow, but this feature allows some small particulates to pass onto heating and air conditioning coils and into the indoor environment. 35 Incoming air is mixed with recirculated air and reconditioned for temperature and humidity before being filtered by the second bank of filters. The performance of filters with ≤90% efficiency is measured using either the dust-spot test or the weight-arrestance test. 35,216
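The combined effect of two filter banks in series follows directly from multiplying their penetrations (1 minus efficiency). The brief sketch below uses the representative bank efficiencies quoted above, treating them as single-pass removal fractions for particles in the 1-5 μm range; this is an illustration of the arithmetic, not a statement of rated system performance:

```python
def series_efficiency(*efficiencies):
    """Overall single-pass removal efficiency of filter banks in series.

    Penetration through each bank is (1 - efficiency); the total
    penetration is the product of the individual penetrations.
    """
    penetration = 1.0
    for e in efficiencies:
        penetration *= (1.0 - e)
    return 1.0 - penetration

# First bank 20%-40% efficient, second bank >=90% efficient (see text).
print(f"{series_efficiency(0.20, 0.90):.1%}")  # 92.0% overall
print(f"{series_efficiency(0.40, 0.90):.1%}")  # 94.0% overall
```

Because penetrations multiply, most of the removal is contributed by the second, high-efficiency bank; the first bank's principal role is to protect the coils and extend the life of the downstream filters.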
The second filter bank usually consists of high-efficiency filters. This filtration system is adequate for most patient-care areas in ambulatory-care facilities and hospitals, including the operating room environment and areas providing central services. 120 Nursing facilities use 90% dust-spot efficient filters as the second bank of filters, 120 whereas a HEPA filter bank may be indicated for special-care areas of hospitals. HEPA filters are at least 99.97% efficient for removing particles ≥0.3 μm in diameter. (As a reference, Aspergillus spores are 2.5-3.0 μm in diameter.) Examples of care areas where HEPA filters are used include PE rooms and those operating rooms designated for orthopedic implant procedures. 35 Maintenance costs associated with HEPA filters are high compared with those of other types of filters, but use of in-line disposable prefilters can increase the life of a HEPA filter by approximately 25%. Alternatively, if a disposable prefilter is followed by a filter that is 90% efficient, the life of the HEPA filter can be extended ninefold. This concept, called progressive filtration, allows HEPA filters in special-care areas to be used for 10 years. 213 Although progressive filtering will extend the mechanical ability of the HEPA filter, these filters may absorb chemicals in the environment and later desorb them, thereby necessitating a more frequent replacement program. HEPA filter efficiency is monitored with the dioctylphthalate (DOP) particle test using particles that are 0.3 μm in diameter. 218 HEPA filters are usually framed with metal, although some older versions have wood frames. A metal frame has no advantage over a properly fitted wood frame with respect to performance, but wood can compromise the air quality if it becomes and remains wet, allowing the growth of fungi and bacteria. Hospitals are therefore advised to phase out water-damaged or spent wood-framed filter units and replace them with metal-framed HEPA filters. HEPA filters are usually fixed into the HVAC system; however, portable, industrial-grade HEPA units are available that can filter air at the rate of 300-800 ft³/min. Portable HEPA filters are used to a. temporarily recirculate air in rooms with no general ventilation, b. augment systems that cannot provide adequate airflow, and c. provide increased effectiveness in airflow. 4 Portable HEPA units are useful engineering controls that help clean the air when the central HVAC system is undergoing repairs, 219 but these units do not satisfy fresh-air requirements. 214 The effectiveness of the portable unit for particle removal is dependent on a. the configuration of the room, b. the furniture and persons in the room, c. the placement of the unit relative to the contents and layout of the room, and d. the location of the supply and exhaust registers or grilles. If portable, industrial-grade units are used, they should be capable of recirculating all or nearly all of the room air through the HEPA filter, and the unit should be designed to achieve the equivalent of ≥12 ACH. 4 (An average room has approximately 1,600 ft³ of airspace.) The hospital engineering department should be contacted to provide ACH information in the event that a portable HEPA filter unit is necessary to augment the existing fixed HVAC system for air cleaning.
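Converting between a unit's airflow rating and equivalent ACH is simple arithmetic: ACH = (airflow in ft³/min × 60) / room volume in ft³. The sketch below uses the room volume and ACH target from the text above; the removal-time estimate t = ln(C0/C) / ACH is a standard well-mixed dilution model, assumed here for illustration rather than taken from this guideline:

```python
import math

def equivalent_ach(airflow_cfm, room_volume_ft3):
    """Equivalent air changes per hour delivered by a recirculating unit."""
    return airflow_cfm * 60.0 / room_volume_ft3

def minutes_for_removal(ach, fraction_removed=0.99):
    """Minutes to reach a given removal fraction, assuming perfectly
    mixed air and no ongoing particle generation (idealized model)."""
    return 60.0 * math.log(1.0 / (1.0 - fraction_removed)) / ach

room = 1600.0                  # ft^3, the "average room" cited in the text
for cfm in (300, 400, 800):    # portable units are rated 300-800 ft^3/min
    ach = equivalent_ach(cfm, room)
    status = "meets" if ach >= 12 else "below"
    print(f"{cfm} CFM -> {ach:.1f} ACH ({status} the 12 ACH target); "
          f"99% removal in ~{minutes_for_removal(ach):.0f} min")
```

Under this idealized model, a 400 ft³/min unit in a 1,600 ft³ room delivers 15 ACH and removes 99% of airborne particles in roughly 18 minutes; real rooms with imperfect mixing will take longer.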
ii. Filter Maintenance
The efficiency of the filtration system is dependent on the density of the filters, which can create a drop in pressure unless compensated for by stronger and more efficient fans that maintain air flow. For optimal performance, filters require monitoring and replacement in accordance with the manufacturer's recommendations and standard preventive maintenance practices. 220 Upon removal, spent filters can be bagged and discarded with the routine solid waste, regardless of their patient-care area location. 221 Excess accumulation of dust and particulates increases filter efficiency but requires more pressure to push the air through. The pressure differential across filters is measured by use of manometers or other gauges. A pressure reading that exceeds specifications indicates the need to change the filter. Filters also require regular inspection for other potential causes of decreased performance. Gaps in and around filter banks and heavy soil and debris upstream of poorly maintained filters have been implicated in health-care associated outbreaks of aspergillosis, especially when accompanied by construction activities at the facility. 17,18,106,222
# c. Ultraviolet Germicidal Irradiation (UVGI)
As a supplemental air-cleaning measure, UVGI is effective in reducing the transmission of airborne bacterial and viral infections in hospitals, military housing, and classrooms, but it has only a minimal inactivating effect on fungal spores. UVGI is also used in air-handling units to prevent or limit the growth of vegetative bacteria and fungi. Most commercially available UV lamps used for germicidal purposes are low-pressure mercury vapor lamps that emit radiant energy predominantly at a wavelength of 253.7 nm. 229,230 Two systems of UVGI have been used in health-care settings: duct irradiation and upper-room air irradiation. In duct irradiation systems, UV lamps are placed inside ducts that remove air from rooms to disinfect the air before it is recirculated. When properly designed, installed, and maintained, high levels of UVGI can be attained in the ducts with little or no exposure of persons in the rooms. 231,232 In upper-room air irradiation, UV lamps are either suspended from the ceiling or mounted on the wall. 4 Upper-air UVGI units have two basic designs: a. a "pan" fixture with UVGI unshielded above the unit to direct the irradiation upward, and b. a fixture with a series of parallel plates that columnize the irradiation outward while preventing the light from reaching the eyes of the room's occupants. The germicidal effect is dependent on air mixing via convection between the room's irradiated upper zone and the lower patient-care zones. 233,234 Bacterial inactivation studies using BCG mycobacteria and Serratia marcescens have estimated the effect of UVGI as equivalent to 10 ACH-39 ACH. 235,236 Another study, however, suggests that UVGI may result in fewer equivalent ACH in the patient-care zone, especially if the mixing of air between zones is insufficient. 234 The use of fans or HVAC systems to generate air movement may increase the effectiveness of UVGI if airborne microorganisms are exposed to the light energy for a sufficient length of time. 233,235 The optimal relationship between ventilation and UVGI is not known. Because the clinical effectiveness of UV systems may vary, UVGI is not recommended for air management prior to air recirculation from airborne isolation rooms.
It is also not recommended as a substitute for HEPA filtration, local exhaust of air to the outside, or negative pressure. 4 The use of UV lamps and HEPA filtration in a single unit offers only minimal infection-control benefits over those provided by the use of a HEPA filter alone. 240 Duct systems with UVGI are not recommended as a substitute for HEPA filters if the air from isolation rooms must be recirculated to other areas of the facility. 4 Regular maintenance of UVGI systems is crucial and usually consists of keeping the bulbs free of dust and replacing old bulbs as necessary. Safety issues associated with the use of UVGI systems are described in other guidelines. 4
# d. Conditioned Air in Occupied Spaces
Temperature and humidity are two essential components of conditioned air. After outside air passes through a low- or medium-efficiency filter, the air undergoes conditioning for temperature and humidity control before it passes through high-efficiency or HEPA filtration.
i. Temperature
HVAC systems in health-care facilities are often single-duct or dual-duct systems. 35,241 A single-duct system distributes cooled air (55°F [13°C]) throughout the building and uses thermostatically controlled reheat boxes located in the terminal ductwork to warm the air for individual or multiple rooms. The dual-duct system consists of parallel ducts, one with a cold air stream and the other with a hot air stream. A mixing box in each room or group of rooms mixes the two air streams to achieve the desired temperature. Temperature standards are given as either a single temperature or a range, depending on the specific health-care zone. Cool temperature standards (68°F-73°F [20°C-23°C]) usually are associated with operating rooms, clean workrooms, and endoscopy suites. 120 A warmer temperature (75°F [24°C]) is needed in areas requiring greater degrees of patient comfort. Most other zones use a temperature range of 70°F-75°F (21°C-24°C). 120 Temperatures outside of these ranges may be needed occasionally in limited areas, depending on individual circumstances during patient care (e.g., cooler temperatures in operating rooms during specialized operations).
ii. Humidity
Four measures of humidity are used to quantify different physical properties of the mixture of water vapor and air. The most common of these is relative humidity, which is the ratio of the amount of water vapor in the air to the amount of water vapor the air can hold at that temperature. 242 The other measures of humidity are specific humidity, dew point, and vapor pressure. 242 Relative humidity measures the percentage of saturation: at 100% relative humidity, the air is saturated. For most areas within health-care facilities, the designated comfort range is 30%-60% relative humidity. 120,214 Relative humidity levels >60%, in addition to being perceived as uncomfortable, promote fungal growth. 243 Humidity levels can be manipulated by either of two mechanisms. 244 In a water-wash unit, water is sprayed and drops are taken up by the filtered air; additional heating or cooling of this air sets the humidity levels. The second mechanism is by means of water vapor created from steam and added to filtered air in humidifying boxes. Reservoir-type humidifiers are not allowed in health-care facilities as per AIA guidelines and many state codes. 120 Cool-mist humidifiers should be avoided, because they can disseminate aerosols containing allergens and microorganisms. 245 Additionally, the small, personal-use versions of this equipment can be difficult to clean.
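The relationship between temperature and relative humidity explains why HVAC systems humidify in winter: warming air raises its saturation capacity, so the same moisture content yields a much lower relative humidity. The brief sketch below makes this concrete using the Magnus approximation for saturation vapor pressure; the formula and its coefficients are standard psychrometrics, not figures from this guideline:

```python
import math

def saturation_vapor_pressure(temp_c):
    """Saturation vapor pressure of water (hPa) via the Magnus formula."""
    return 6.112 * math.exp(17.62 * temp_c / (243.12 + temp_c))

def rh_after_heating(rh_outdoor, t_outdoor_c, t_indoor_c):
    """Relative humidity after heating air without adding moisture.

    The actual vapor pressure is unchanged; only the saturation
    capacity rises with temperature.
    """
    vapor_pressure = (rh_outdoor / 100.0) * saturation_vapor_pressure(t_outdoor_c)
    return 100.0 * vapor_pressure / saturation_vapor_pressure(t_indoor_c)

# Example: 0 C outdoor air at 80% RH heated to a 22 C occupied space.
rh = rh_after_heating(80.0, 0.0, 22.0)
print(f"Indoor RH after heating: {rh:.0f}%")   # ~19%, well below 30%
```

Heated without added moisture, typical winter air falls well below the 30%-60% comfort range cited above; conversely, cooling humid summer air pushes it toward saturation, which is why conditioning involves humidification in winter and humidity control in summer.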
iii. Ventilation
The control of air pollutants (e.g., microorganisms, dust, chemicals, and smoke) at the source is the most effective way to maintain clean air. The second most effective means of controlling indoor air pollution is through ventilation. Ventilation rates are voluntary unless a state or local government specifies a standard in health-care licensing or health department requirements. These standards typically apply only to the design of a facility, rather than to its operation. 220,246 Health-care facilities without specific ventilation standards should follow the AIA guideline specific to the year in which the building was built 120,214,241 or the ANSI/ASHRAE Standard 62, Ventilation for Acceptable Indoor Air Quality. Ventilation guidelines are defined in terms of air volume per minute per occupant and are based on the assumption that occupants and their activities are responsible for most of the contaminants in the conditioned space. 215 Most ventilation rates for health-care facilities are expressed as room ACH. Peak efficiency for particle removal in the air space occurs at 12 ACH-15 ACH. 35,247,248 Ventilation rates vary among the different patient-care areas of a health-care facility (Appendix B). 120 Health-care facilities generally use recirculated air. 35,120,241,249,250 Fans create sufficient positive pressure to force air through the building duct work and adequate negative pressure to evacuate air from the conditioned space into the return duct work and/or exhaust, thereby completing the circuit in a sealed system (Figure 1). However, because gaseous contaminants tend to accumulate as the air recirculates, a percentage of the recirculated air is exhausted to the outside and replaced by fresh outdoor air. In hospitals, the delivery of filtered air to an occupied space is an engineered system design issue, the full discussion of which is beyond the scope of this document. Hospitals with areas not served by central HVAC systems often use through-the-wall or fan-coil air conditioning units as the sole source of room ventilation. AIA guidelines for newly installed systems stipulate that through-the-wall fan-coil units be equipped with permanent (i.e., cleanable) or replaceable filters with a minimum efficiency of 68% weight arrestance. 120 These units may be used only as recirculating units; all outdoor air requirements must be met by a separate central air-handling system with proper filtration, with a minimum of two outside air changes in general patient rooms (D. Erickson, ASHE, 2000). 120 If a patient room is equipped with an individual through-the-wall fan-coil unit, the room should not be used as either AII or PE. 120 These requirements, although directed to new HVAC installations, also are appropriate for existing settings. Non-central air-handling systems are prone to problems associated with excess condensation accumulating in drip pans and improper filter maintenance; health-care facilities should clean or replace the filters in these units on a regular basis while the patient is out of the room. Laminar airflow ventilation systems are designed to move air in a single pass, usually through a bank of HEPA filters either along a wall or in the ceiling, in a one-way direction through a clean zone with parallel streamlines. Laminar airflow can be directed vertically or horizontally; the unidirectional system optimizes airflow and minimizes air turbulence. 63,241
Delivery of air at a rate of 0.5 meters per second (90 ± 20 ft/min) helps to minimize opportunities for microorganism proliferation. 63,251,252 Laminar airflow systems have been used in PE to help reduce the risk for health-care associated airborne infections (e.g., aspergillosis) in high-risk patients. 63,93,253,254 However, data that demonstrate a survival benefit for patients in PE with laminar airflow are lacking. Given the high cost of installation and the apparent lack of benefit, the value of laminar airflow in this setting is questionable. 9,37 Few data support the use of laminar airflow systems elsewhere in a hospital. 255
iv. Pressurization
Positive and negative pressures refer to a pressure differential between two adjacent air spaces (e.g., rooms and hallways). Air flows away from areas or rooms with positive pressure (pressurized), while air flows into areas with negative pressure (depressurized). AII rooms are set at negative pressure to prevent airborne microorganisms in the room from entering hallways and corridors. PE rooms housing severely neutropenic patients are set at positive pressure to keep airborne pathogens in adjacent spaces or corridors from coming into and contaminating the airspace occupied by such high-risk patients. Self-closing doors are mandatory for both of these areas to help maintain the correct pressure differential. 4,6,120 Older health-care facilities may have variable-pressure rooms (i.e., rooms in which the ventilation can be manually switched between positive and negative pressure). These rooms are no longer permitted in the construction of new facilities or in renovated areas of the facility, 120 and their use in existing facilities has been discouraged because of difficulties in assuring the proper pressure differential, especially for the negative-pressure setting, and because of the potential for error associated with switching the pressure differentials for the room. Continued use of existing variable-pressure rooms depends on a partnership between engineering and infection control. Both positive- and negative-pressure rooms should be maintained according to specific engineering specifications (Table 6). Health-care professionals (e.g., infection-control staff and hospital epidemiologists) must perform a risk assessment to determine the appropriate number of AII rooms (negative pressure) and/or PE rooms (positive pressure) to serve the patient population. The AIA guidelines require a certain number of AII rooms as a minimum, and it is important to refer to the edition under which the building was built for appropriate guidance. 120 In large health-care facilities with central HVAC systems, sealed windows help to ensure the efficient operation of the system, especially with respect to creating and maintaining pressure differentials. Sealing the windows in PE areas helps minimize the risk of airborne contamination from the outside. One outbreak of aspergillosis among immunosuppressed patients in a hospital was attributed in part to an open window in the unit during a time when both construction and a fire happened nearby; sealing the window prevented further entry of fungal spores into the unit from the outside air. 111 Additionally, all emergency exits (e.g., fire escapes and emergency doors) in PE wards should be kept closed (except during emergencies) and equipped with alarms.
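Because pressure differentials are what keep AII and PE rooms working, the daily monitoring and documentation called for in Box 1 (performance measure 2) lends itself to a simple automated check. The sketch below is a hypothetical illustration only: the room names, field names, and the 0.01 inch-of-water-column action threshold are assumptions made for the example, not specifications taken from this guideline or from Table 6:

```python
from dataclasses import dataclass

@dataclass
class RoomReading:
    room: str
    room_type: str      # "AII" or "PE"
    dp_in_wc: float     # pressure in room minus corridor, inches w.c.

# Hypothetical action threshold; consult the facility's engineering
# specifications (Table 6 in this guideline) for the actual values.
MIN_DIFFERENTIAL = 0.01

def check(reading: RoomReading) -> str:
    """Flag a room whose airflow direction or magnitude is out of spec.

    AII rooms must be negative relative to the corridor; PE rooms
    must be positive. Readings are logged daily per Box 1, measure 2.
    """
    expected_sign = -1 if reading.room_type == "AII" else 1
    ok = expected_sign * reading.dp_in_wc >= MIN_DIFFERENTIAL
    return f"{reading.room}: {'OK' if ok else 'OUT OF SPEC -- notify engineering'}"

for r in (RoomReading("4-West 12", "AII", -0.013),
          RoomReading("BMT 3", "PE", 0.004)):
    print(check(r))
```

A numerical log of this kind complements, rather than replaces, the qualitative smoke-tube and flutter-strip checks described in the maintenance section that follows.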
# e. Infection Control Impact of HVAC System Maintenance and Repair
A failure or malfunction of any component of the HVAC system may subject patients and staff to discomfort and exposure to airborne contaminants. Only limited information is available from formal studies on the infection-control implications of a complete air-handling system failure or shutdown for maintenance. Most experience has been derived from infectious disease outbreaks and adverse outcomes among high-risk patients when HVAC systems are poorly maintained. (See Table 7 for potential ventilation hazards, consequences, and correction measures.) AIA guidelines prohibit U.S. hospitals and surgical centers from shutting down their HVAC systems for purposes other than required maintenance, filter changes, and construction. 120 Airflow can be reduced; however, sufficient supply, return, and exhaust must be provided to maintain required pressure relationships when the space is not occupied. Maintaining these relationships can be accomplished with special drives on the air-handling units (i.e., a variable air ventilation system). Microorganisms proliferate wherever air, dust, and water are present, and air-handling systems can be ideal environments for microbial growth. 35 Properly engineered HVAC systems require routine maintenance and monitoring to provide acceptable indoor air quality efficiently and to minimize conditions that favor the proliferation of health-care associated pathogens. 35,249 Performance monitoring of the system includes determining pressure differentials across filters, regular inspection of system filters, DOP testing of HEPA filters, testing of low- or medium-efficiency filters, and manometer tests for positive- and negative-pressure areas in accordance with nationally recognized standards, guidelines, and manufacturers' recommendations. The use of hand-held, calibrated equipment that can provide a numerical reading on a daily basis is preferred for engineering purposes (A. Streifel, University of Minnesota, 2000). 256 Several methods that provide a visual, qualitative measure of pressure differentials (i.e., airflow direction) include smoke-tube tests or placing flutter strips, ping-pong balls, or tissue in the air stream. Preventive filter and duct maintenance (e.g., cleaning ductwork vents, replacing filters as needed, and properly disposing of spent filters into plastic bags immediately upon removal) is important for preventing potential exposures of patients and staff during HVAC system shutdown. The frequency of filter inspection and the parameters of this inspection are established by each facility to meet its unique needs. Ductwork in older health-care facilities may have insulation on the interior surfaces that can trap contaminants. This insulation material tends to break down over time and be discharged from the HVAC system. Additionally, a malfunction of the air-intake system can overburden the filtering system and permit aerosolization of fungal pathogens. Keeping the intakes free from bird droppings, especially those from pigeons, helps to minimize the concentration of fungal spores entering from the outside. 98 Accumulation of dust and moisture within HVAC systems increases the risk for spread of health-care associated environmental fungi and bacteria. Clusters of infections caused by Aspergillus spp., P. aeruginosa, S. aureus, and Acinetobacter spp. have been linked to poorly maintained and/or malfunctioning air-conditioning systems. 68,161,257,258
68,161,257,258 Efforts to limit excess humidity and moisture in the infrastructure and on air-stream surfaces in the HVAC system can minimize the proliferation and dispersion of fungal spores and waterborne bacteria throughout indoor air. Within the HVAC system, water is present in water-wash units, humidifying boxes, or cooling units. The dual-duct system may also create conditions of high humidity and excess moisture that favor fungal growth in drain pans as well as in fibrous insulation material that becomes damp as a result of the humid air passing over the hot stream and condensing. If moisture is present in the HVAC system, periods of stagnation should be avoided. Bursts of organisms can be released upon system start-up, increasing the risk of airborne infection. 206 Proper engineering of the HVAC system is critical to preventing dispersal of airborne organisms. In one hospital, endophthalmitis caused by Acremonium kiliense infection following cataract extraction in an ambulatory surgical center was traced to aerosols derived from the humidifier water in the ventilation system. 206 The organism proliferated because the ventilation system was turned off routinely when the center was not in operation; the air was filtered before humidification, but not afterwards. Most health-care facilities have contingency plans in case of disruption of HVAC services. These plans include back-up power generators that maintain the ventilation system in high-risk areas (e.g., operating rooms, intensive-care units, negative-and positive-pressure rooms, transplantation units, and oncology units). Alternative generators are required to engage within 10 seconds of a loss of main power. If the ventilation system is out of service, rendering indoor air stagnant, sufficient time must be allowed to clean the air and re-establish the appropriate number of ACH once the HVAC system begins to function again. Air filters may also need to be changed, because reactivation of the system can dislodge substantial amounts of dust and create a transient burst of fungal spores. Duct cleaning in health-care facilities has benefits in terms of system performance, but its usefulness for infection control has not been conclusively determined. Duct cleaning typically involves using specialized tools to dislodge dirt and a high-powered vacuum cleaner to clean out debris. 263 Some duct-cleaning services also apply chemical biocides or sealants to the inside surfaces of ducts to minimize fungal growth and prevent the release of particulate matter. The U.S. Environmental Protection Agency (EPA), however, has concerns with the use of sanitizers and/or disinfectants to treat the surfaces of ductwork, because the label indications for most of these products may not specifically include the use of the product in HVAC systems. 264 Further, EPA has not evaluated the potency of disinfectants in such applications, nor has the agency examined the potential attendant health and safety risks. The EPA recommends that companies use only those chemical biocides that are registered for use in HVAC systems. 264 Although infrequent cleaning of the exhaust ducts in AII areas has been documented as a cause of diminishing negative pressure and a decrease in the air exchange rates, 214 no data indicate that duct cleaning, beyond what is recommended for optimal performance, improves indoor air quality or reduces the risk of infection. Exhaust return systems should be cleaned as part of routine system maintenance. 
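When a ventilation system is restarted after a shutdown, the waiting time before re-occupying a space can be estimated from the air-change rate. The sketch below is a minimal illustration assuming the standard perfect-mixing dilution model (the same model underlying published ACH removal-time tables); the removal fractions and ACH values shown are examples, not requirements.

```python
import math

def purge_time_minutes(ach: float, removal_fraction: float) -> float:
    """Minutes to achieve the given fractional removal of airborne
    contaminants, assuming perfect mixing and no ongoing generation."""
    remaining = 1.0 - removal_fraction
    return math.log(1.0 / remaining) / ach * 60.0

# Example: at 12 ACH, ~99.9% removal takes about 35 minutes.
for ach in (6, 12, 15):
    t = purge_time_minutes(ach, 0.999)
    print(f"{ach:>2} ACH -> {t:5.1f} min for 99.9% removal")
```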
Duct cleaning has not been shown to prevent any health problems, 265 and EPA studies indicate that airborne particulate levels do not increase as a result of dirty air ducts, nor do they diminish after cleaning, presumably because much of the dirt inside air ducts adheres to duct surfaces and does not enter the conditioned space. 265 Additional research is needed to determine if air-duct contamination can significantly increase the airborne infection risk in general areas of health-care facilities.

# Construction, Renovation, Remediation, Repair, and Demolition

# a. General Information

Environmental disturbances caused by construction and/or renovation and repair activities (e.g., disruption of the above-ceiling area, running cables through the ceiling, and structural repairs) in and near health-care facilities markedly increase the airborne Aspergillus spp. spore counts in the indoor air of such facilities, thereby increasing the risk for health-care associated aspergillosis among high-risk patients. Although a single case of health-care associated aspergillosis is often difficult to link to a specific environmental exposure, the occurrence of temporally clustered cases increases the likelihood that an environmental source within the facility may be identified and corrected.

Construction, renovation, repair, and demolition activities in health-care facilities require substantial planning and coordination to minimize the risk for airborne infection both during projects and after their completion. Several organizations and experts have endorsed a multi-disciplinary team approach (Box 4) to coordinate the various stages of construction activities (e.g., project inception, project implementation, final walk-through, and completion). 120,249,250, Environmental services, employee health, engineering, and infection control must be represented in construction planning, and design meetings should be convened with architects and design engineers. The number of members and disciplines represented is a function of the complexity of a project. Smaller, less complex projects and maintenance may require a minimal number of members beyond the core representation from engineering, infection control, environmental services, and the directors of the specialized departments.

# Box 4. Suggested members and functions of a multi-disciplinary coordination team for construction, renovation, repair, and demolition projects

# Members
- Infection-control personnel, including hospital epidemiologists
- Laboratory personnel
- Facility administrators or their designated representatives, facility managers
- Director of engineering
- Risk-management personnel
- Directors of specialized programs (e.g., transplantation, oncology, and ICU programs)
- Employee safety personnel, industrial hygienists, and regulatory affairs personnel
- Environmental services personnel
- Information systems personnel
- Construction administrators or their designated representatives
- Architects, design engineers, project managers, and contractors

# Functions and responsibilities
- Coordinate members' input in developing a comprehensive project management plan.
- Conduct a risk assessment of the project to determine potential hazards to susceptible patients.
- Prevent unnecessary exposures of patients, visitors, and staff to infectious agents.
- Oversee all infection-control aspects of construction activities.
- Establish site-specific infection-control protocols for specialized areas.
- Provide education about the infection-control impact of construction to staff and construction workers.
- Ensure compliance with technical standards, contract provisions, and regulations.
- Establish a mechanism to address and correct problems quickly.
- Develop contingency plans for emergency response to power failures, water supply disruptions, and fires.
- Provide a water-damage management plan (including drying protocols) for handling water intrusion from floods, leaks, and condensation.
- Develop a plan for structural maintenance.

Education of maintenance and construction workers, health-care staff caring for high-risk patients, and persons responsible for controlling indoor air quality heightens awareness that minimizing dust and moisture intrusion from construction sites into high-risk patient-care areas helps to maintain a safe environment. 120,250,271, Visual and printed educational materials should be provided in the language spoken by the workers. Staff and construction workers also need to be aware of the potentially catastrophic consequences of dust and moisture intrusion when an HVAC system or water system fails during construction or repair; action plans to deal quickly with these emergencies should be developed in advance and kept on file. Incorporation of specific standards into construction contracts may help to prevent departures from recommended practices as projects progress. Establishing specific lines of communication is important to address problems (e.g., dust control, indoor air quality, noise levels, and vibrations), resolve complaints, and keep projects moving toward completion. Health-care facility staff should develop a mechanism to monitor worker adherence to infection-control guidelines on a daily basis in and around the construction site for the duration of the project.

# b. Preliminary Considerations

The three major topics to consider before initiating any construction or repair activity are as follows: a. design and function of the new structure or area; b. assessment of environmental risks for airborne disease and opportunities for prevention; and c. measures to contain dust and moisture during construction or repairs. A checklist of design and function considerations can help to ensure that a planned structure or area can be easily serviced and maintained for environmental infection control (Box 5). 17,250,273, Specifications for the construction, renovation, remodeling, and maintenance of health-care facilities are outlined in the AIA document, Guidelines for Design and Construction of Hospitals and Health Care Facilities. 120,275

# Box 5. Construction design and function considerations for environmental infection control

Proactive strategies can help prevent environmentally mediated airborne infections in health-care facilities during demolition, construction, and renovation. The potential presence of dust and moisture and their contribution to health-care associated infections must be critically evaluated early in the planning of any demolition, construction, renovation, and repairs. 120,250,251,273,274, Consideration must extend beyond dust generated by major projects to include dust that can become airborne if disturbed during routine maintenance and minor renovation activities (e.g., exposure of ceiling spaces for inspection; installation of conduits, cable, or sprinkler systems; rewiring; and structural repairs or replacement).
273,276,277 Other projects that can compromise indoor air quality include construction and repair jobs that inadvertently allow substantial amounts of raw, unfiltered outdoor air to enter the facility (e.g., repair of elevators and elevator shafts) and activities that dampen any structure, area, or item made of porous materials or characterized by cracks and crevices (e.g., sink cabinets in need of repair, carpets, ceilings, floors, walls, vinyl wall coverings, upholstery, drapes, and countertops). 18,273,277 Molds grow and proliferate on these surfaces when they become and remain wet. 21,120,250,266,270,272,280 Scrubbable materials are preferred for use in patient-care areas.

Containment measures for dust and/or moisture control are dictated by the location of the construction site. Outdoor demolition and construction require actions to keep dust and moisture out of the facility (e.g., sealing windows and vents and keeping doors closed or sealed). Containment of dust and moisture generated from construction inside a facility requires barrier structures (either pre-fabricated or constructed of more durable materials as needed) and engineering controls to clean the air in and around the construction or repair site.

# c. Infection-Control Risk Assessment

An infection-control risk assessment (ICRA) conducted before initiating repairs, demolition, construction, or renovation activities can identify potential exposures of susceptible patients to dust and moisture and determine the need for dust and moisture containment measures. This assessment centers on the type and extent of the construction or repairs in the work area but may also need to include adjacent patient-care areas, supply storage, and areas on levels above and below the proposed project. An example of designing an ICRA as a matrix, the policy for performing an ICRA and implementing its results, and a sample permit form that streamlines the communication process are available. 281 Knowledge of the air flow patterns and pressure differentials helps minimize or eliminate the inadvertent dispersion of dust that could contaminate air space, patient-care items, and surfaces. 57,282,283 A recent aspergillosis outbreak among oncology patients was attributed to depressurization of the building housing the HSCT unit while construction was underway in an adjacent building. Pressure readings in the affected building (including 12 of 25 HSCT-patient rooms) ranged from 0.1 Pa to 5.8 Pa. Unfiltered outdoor air flowed into the building through doors and windows, exposing patients in the HSCT unit to fungal spores. 283

During long-term projects, providing temporary essential services (e.g., toilet facilities) and conveniences (e.g., vending machines) to construction workers within the site will help to minimize traffic in and out of the area. The type of barrier systems necessary for the scope of the project must be defined. 12,120,250,279,284 Depending on the location and extent of the construction, patients may need to be relocated to other areas in the facility not affected by construction dust. 51,285 Such relocation might be especially prudent when construction takes place within units housing immunocompromised patients (e.g., severely neutropenic patients and patients on corticosteroid therapy). Advance assessment of high-risk locations and planning for the possible transport of patients to other departments can minimize delays and waiting time in hallways.
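The matrix-based ICRA cited above is often implemented as a lookup from construction activity type and patient risk group to a class of required precautions. The sketch below is a hypothetical, simplified illustration of that structure; the activity types (A-D), risk groups (1-4), and class assignments are assumptions modeled on commonly published ICRA matrices, not values from this guideline.

```python
# Hypothetical, simplified ICRA matrix: activity type (A = inspection ... D = major
# demolition/construction) x patient risk group (1 = lowest ... 4 = highest,
# e.g., PE/oncology/transplant units) -> precaution class (I-IV).
# Assignments are illustrative; each facility derives its own from the ICRA.

ICRA_MATRIX = {
    ("A", 1): "I",   ("A", 2): "I",   ("A", 3): "I",   ("A", 4): "II",
    ("B", 1): "I",   ("B", 2): "II",  ("B", 3): "II",  ("B", 4): "III",
    ("C", 1): "II",  ("C", 2): "III", ("C", 3): "III", ("C", 4): "IV",
    ("D", 1): "III", ("D", 2): "IV",  ("D", 3): "IV",  ("D", 4): "IV",
}

def required_precautions(activity: str, risk_group: int) -> str:
    """Look up the precaution class for a planned project."""
    return ICRA_MATRIX[(activity.upper(), risk_group)]

# Example: above-ceiling work (type C) adjacent to an HSCT unit (risk group 4)
# would call for the most stringent containment (class IV: rigid barriers,
# negative pressure, HEPA-filtered exhaust, anteroom, daily monitoring).
print(required_precautions("C", 4))  # -> "IV"
```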
51 Although hospitals have provided immunocompromised patients with some form of respiratory protection for use outside their rooms, the issue is complex and remains unresolved until more research can be done. Previous guidance on this issue has been inconsistent. 9 Protective respirators (i.e., N95) were well tolerated by patients when used to prevent further cases of construction-related aspergillosis in a recent outbreak. 283 The routine use of the N95 respirator by patients, however, has not been evaluated for preventing exposure to fungal spores during periods of non-construction. Although health-care workers who would be using the N95 respirator for personal respiratory protection must be fit-tested, there is no indication that either patients or visitors should undergo fit-testing.

Surveillance activities should augment preventive strategies during construction projects. 3,4,20,110,286,287 By determining baseline levels of health-care acquired airborne and waterborne infections, infection-control staff can monitor changes in infection rates and patterns during and immediately after construction, renovations, or repairs. 3

# d. Air Sampling

Air sampling in health-care facilities may be conducted both during periods of construction and on a periodic basis to determine indoor air quality, efficacy of dust-control measures, or air-handling system performance via parametric monitoring. Parametric monitoring consists of measuring the physical performance of the air-handling system; periodic assessment of the system (e.g., air flow direction and pressure, ACH, and filter efficiency) can give assurance of proper ventilation, especially for special care areas and operating rooms. 288

Air sampling is used to detect aerosols (i.e., particles or microorganisms). Particulate sampling (i.e., total numbers and size range of particulates) is a practical method for evaluating the infection-control performance of the HVAC system, with an emphasis on filter efficiency in removing respirable particles (<5 μm in diameter) or larger particles from the air. Particle size is reported in terms of the mass median aerodynamic diameter (MMAD), whereas the count median aerodynamic diameter (CMAD) is useful with respect to particle concentrations.

Particle counts in a given air space within the health-care facility should be evaluated against counts obtained in a comparison area. Particle counts indoors are commonly compared with the particulate levels of the outdoor air. This approach determines the "rank order" of air quality from "dirty" (i.e., the outdoor air) to "clean" (i.e., air filtered through high-efficiency filters) to "cleanest" (i.e., HEPA-filtered air). 288 Comparisons from one indoor area to another may also provide useful information about the magnitude of an indoor air-quality problem. Making rank-order comparisons between clean, highly filtered areas and dirty areas and/or outdoors is one way to interpret sampling results in the absence of air quality and action level standards. 35,289 In addition to verifying filter performance, particle counts can help determine if barriers and efforts to control dust dispersion from construction are effective. This type of monitoring is helpful when performed at various times and barrier perimeter locations during the project. Gaps or breaks in the barriers' joints or seals can then be identified and repaired.
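Rank-order interpretation of particle counts reduces to simple ratios against a reference location. The following minimal sketch illustrates the comparison; the counts, locations, and the choice of the 0.5-μm channel are hypothetical examples.

```python
# Minimal sketch: rank-order comparison of particle counts (particles/ft^3,
# >= 0.5 um channel) against outdoor air as the "dirty" reference.
# All numbers are hypothetical.

counts = {
    "outdoors": 2_000_000,
    "corridor (high-efficiency filters)": 150_000,
    "PE room (HEPA)": 1_200,
    "construction barrier perimeter": 600_000,  # elevated: suggests a barrier leak
}

reference = counts["outdoors"]
for location, n in sorted(counts.items(), key=lambda kv: kv[1]):
    print(f"{location:38s} {n:>10,d}  ({n / reference:.4%} of outdoor level)")
```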
The American Conference of Governmental Industrial Hygienists (ACGIH) has set a threshold limit value-time weighted average (TLV®-TWA) of 10 mg/m³ for nuisance dust that contains no asbestos and <1% crystalline silica. 290 Alternatively, OSHA has set permissible exposure limits (PELs) for inert or nuisance dust as follows: respirable fraction at 5 mg/m³ and total dust at 15 mg/m³. 291 Although these standards are not measures of a bioaerosol, they are used for indoor air quality assessment in occupational settings and may be useful criteria in construction areas. Application of ACGIH guidance to health-care settings has not been standardized, but particulate counts in health-care facilities are likely to be well below this threshold value and approaching cleanroom standards in certain care areas (e.g., operating rooms). 100 Particle counters and anemometers are used in particulate evaluation. The anemometer measures air flow velocity, which can be used to determine sample volumes. Particulate sampling usually does not require microbiology laboratory services for the reporting of results.

Microbiologic sampling of air in health-care facilities remains controversial because of currently unresolved technical limitations and the need for substantial laboratory support (Box 6). Infection-control professionals, laboratorians, and engineers should determine if microbiologic and/or particle sampling is warranted and assess proposed methods for sampling. The most significant technical limitation of air sampling for airborne fungal agents is the lack of standards linking fungal spore levels with infection rates. Despite this limitation, several health-care institutions have opted to use microbiologic sampling when construction projects are anticipated and/or underway in efforts to assess the safety of the environment for immunocompromised patients. 35,289 Microbiologic air sampling should be limited to assays for airborne fungi; of those, the thermotolerant fungi (i.e., those capable of growing at 95°F-98.6°F [35°C-37°C]) are of particular concern because of their pathogenicity in immunocompromised hosts. 35 Use of selective media (e.g., Sabouraud dextrose agar and inhibitory mold agar) helps with the initial identification of recovered organisms.

Microbiologic sampling for fungal spores performed as part of various airborne disease outbreak investigations has also been problematic. 18,49,106,111,112,289 The precise source of a fungus is often difficult to trace with certainty, and sampling conducted after exposure may neither reflect the circumstances that were linked to infection nor distinguish between health-care acquired and community-acquired infections. Because fungal strains may fluctuate rapidly in the environment, health-care acquired Aspergillus spp. infection cannot be confirmed or excluded if the infecting strain is not found in the health-care setting. 287 Sensitive molecular typing methods (e.g., randomly amplified polymorphic DNA [RAPD] techniques and a more recent DNA fingerprinting technique that detects restriction fragment length polymorphisms in fungal genomic DNA) to identify strain differences among Aspergillus spp., however, are increasingly being used in epidemiologic investigations of health-care acquired fungal infection (A. Streifel, University of Minnesota, 2000). 68,110,286,287, During case cluster evaluation, microbiologic sampling may provide an isolate from the environment for molecular typing and comparison with patient isolates.
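For context, the OSHA PELs above are 8-hour time-weighted averages, which can be computed from a series of timed dust measurements. The sketch below is a minimal illustration; the sampled concentrations and durations are invented.

```python
# Minimal sketch: compute an 8-hour time-weighted average (TWA) dust
# concentration and compare it with the OSHA total nuisance-dust PEL cited
# above. The sample data are hypothetical.

samples = [  # (duration_hours, concentration_mg_per_m3) for total dust
    (2.0, 3.5),
    (4.0, 1.2),
    (2.0, 6.0),
]

PEL_TOTAL = 15.0  # mg/m^3, OSHA PEL for total nuisance dust

twa = sum(hours * conc for hours, conc in samples) / 8.0  # 8-hour TWA
print(f"8-h TWA = {twa:.2f} mg/m^3 "
      f"({'within' if twa <= PEL_TOTAL else 'exceeds'} the 15 mg/m^3 PEL)")
```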
Therefore, it may be prudent for the clinical laboratory to save Aspergillus spp. isolated from colonizations and invasive disease cases among patients in PE, oncology, and transplant services for these purposes.

Sedimentation methods using settle plates and volumetric sampling methods using solid impactors are commonly employed when sampling air for bacteria and fungi. Settle plates have been used by numerous investigators to detect airborne bacteria or to measure air quality during medical procedures (e.g., surgery). 17,60,97,151,161,287 Settle plates, because they rely on gravity during sampling, tend to select for larger particles and lack sensitivity for respirable particles (e.g., individual fungal spores), especially in highly filtered environments. Therefore, they are considered impractical for general use. 35,289, Settle plates, however, may detect fungi aerosolized during medical procedures (e.g., during wound dressing changes), as described in a recent outbreak of aspergillosis among liver transplant patients. 302

The use of slit or sieve impactor samplers capable of collecting large volumes of air in short periods of time is needed to detect low numbers of fungal spores in highly filtered areas. 35,289 In some outbreaks, aspergillosis cases have occurred when fungal spore concentrations in PE ambient air were as low as 0.9-2.2 colony-forming units per cubic meter (CFU/m³) of air. 18,94 On the basis of the expected spore counts in the ambient air and the performance parameters of various types of volumetric air samplers, investigators of a recent aspergillosis outbreak have suggested that an air volume of at least 1,000 L (1 m³) should be considered when sampling highly filtered areas. 283 Investigators have also suggested limits of 15 CFU/m³ for gross colony counts of fungal organisms and <0.1 CFU/m³ for Aspergillus fumigatus and other potentially opportunistic fungi in heavily filtered areas (≥12 ACH and filtration of ≥99.97% efficiency). 120 No correlation of these values with health-care associated fungal infection rates has been reported.

Air sampling in health-care facilities, whether used to monitor air quality during construction, to verify filter efficiency, or to commission new space prior to occupancy, requires careful notation of the circumstances of sampling. Most air sampling is performed under undisturbed conditions. However, when the air is sampled during or after human activity (e.g., walking and vacuuming), a higher number of airborne microorganisms likely is detected. 297 The contribution of human activity to the significance of air sampling and its impact on health-care associated infection rates remain to be defined. Comparing microbiologic sampling results from a target area (e.g., an area of construction) to those from an unaffected location in the facility can provide information about the distribution and concentration of potential airborne pathogens. A comparison of microbial species densities in outdoor air versus indoor air has been used to help pinpoint fungal spore bursts. Fungal spore densities in outdoor air are variable, although the degree of variation with the seasons appears to be more dramatic in the United States than in Europe. 92,287,303

Particulate and microbiologic air sampling have been used when commissioning new HVAC system installations; however, such sampling is particularly important for newly constructed or renovated PE or operating rooms.
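The volumetric sampling arithmetic above is straightforward: sampled volume is flow rate times run time, and concentration is colonies recovered divided by that volume. A minimal sketch follows, with a hypothetical sampler flow rate of 100 L/min and invented colony counts.

```python
# Minimal sketch: convert impactor-sampler results to CFU/m^3 and compare with
# the suggested limits cited above (15 CFU/m^3 gross fungal counts, <0.1 CFU/m^3
# A. fumigatus in heavily filtered areas). Flow rate and counts are hypothetical.

FLOW_L_PER_MIN = 100.0   # assumed sampler flow rate
RUN_MINUTES = 10.0       # 100 L/min x 10 min = 1,000 L = 1 m^3 sampled

volume_m3 = FLOW_L_PER_MIN * RUN_MINUTES / 1000.0

gross_colonies = 4       # all fungi recovered on the plate (hypothetical)
fumigatus_colonies = 0   # A. fumigatus colonies (hypothetical)

print(f"Sampled volume: {volume_m3:.1f} m^3")
print(f"Gross fungi: {gross_colonies / volume_m3:.1f} CFU/m^3 (suggested limit: 15)")
print(f"A. fumigatus: {fumigatus_colonies / volume_m3:.2f} CFU/m^3 (suggested limit: <0.1)")
```

Note that demonstrating <0.1 CFU/m³ requires sampling at least 10 m³ of air without recovering a single colony, which is one reason large-volume samplers are needed in highly filtered areas.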
Particulate sampling is used as part of a battery of tests to determine if a new HVAC system is performing to specifications for filtration and the proper number of ACH. 268,288,304 Microbiologic air sampling, however, remains controversial in this application, because no standards for comparison purposes have been determined. If performed, sampling should be limited to determining the density of fungal spores per unit volume of air space. High numbers of spores may indicate contamination of air-handling system components prior to installation or a system deficiency when culture results are compared with known filter efficiencies and rates of air exchange.

# e. External Demolition and Construction

External demolition, planned building implosions, and dirt excavation generate considerable dust and debris that can contain airborne microorganisms. In one study, peak concentrations in outdoor air at grade level and HVAC intakes during site excavation averaged 20,000 CFU/m³ for all fungi and 500 CFU/m³ for Aspergillus fumigatus, compared with 19 CFU/m³ and 4 CFU/m³, respectively, in the absence of construction. 280 Many health-care institutions are located in large, urban areas; building implosions are becoming a more frequent concern. Infection-control risk assessment teams, particularly those in facilities located in urban renewal areas, would benefit by developing risk management strategies for external demolition and construction as a standing policy. In light of the events of 11 September 2001, it may be necessary for the team to identify those dust exclusion measures that can be implemented rapidly in response to emergency situations (Table 8). Issues to be reviewed prior to demolition include a. proximity of the air intake system to the work site; b. adequacy of window seals and door seals; c. proximity of areas frequented by immunocompromised patients; and d. location of the underground utilities (D. Erickson, ASHE, 2000). 120,250,273,276,277,280,305

Minimizing the entry of outside dust into the HVAC system is crucial in reducing the risk for airborne contaminants. Facility engineers should be consulted about the potential impact of shutting down the system or increasing the filtration. Selected air handlers, especially those located close to excavation sites, may have to be shut off temporarily to keep from overloading the system with dust and debris. Care is needed to avoid significant facility-wide reductions in pressure differentials that may cause the building to become negatively pressured relative to the outside. To prevent excessive particulate overload and subsequent reductions in effectiveness of intake air systems that cannot be shut off temporarily, air filters must be inspected frequently for proper installation and function. Excessive dust penetration can be avoided if recirculated air is maximally utilized while outdoor air intakes are shut down. Scheduling demolition and excavation during the winter, when Aspergillus spp. spores may be present in lower numbers, can help, although seasonal variations in spore density differ around the world. 92,287,303 Dust control can be managed by misting the dirt and debris during heavy dust-generating activities. To decrease the amount of aerosols from excavation and demolition projects, nearby windows, especially in areas housing immunocompromised patients, can be sealed and window and door frames caulked or weatherstripped to prevent dust intrusion.
50,301,306 Monitoring for adherence to these control measures throughout demolition or excavation is crucial. Diverting pedestrian traffic away from the construction sites decreases the amount of dust tracked back into the health-care facility and minimizes exposure of high-risk patients to environmental pathogens. Additionally, closing entrances near construction or demolition sites might be beneficial; if this is not practical, creating an air lock (i.e., pressurizing the entry way) is another option.

# f. Internal Demolition, Construction, Renovations, and Repairs

The focus of a properly implemented infection-control program during interior construction and repairs is containment of dust and moisture. This objective is achieved by a. educating construction workers about the importance of control measures; b. preparing the site; c. notifying and issuing advisories for staff, patients, and visitors; d. moving staff and patients and relocating patients as needed; e. issuing standards of practice and precautions during activities and maintenance; f. monitoring for adherence to control measures during construction and providing prompt feedback about lapses in control; g. monitoring HVAC performance; h. implementing daily clean-up, terminal cleaning, and removal of debris upon completion; and i. ensuring the integrity of the water system during and after construction. These activities should be coordinated with engineering staff and infection-control professionals.

Physical barriers capable of containing smoke and dust will confine dispersed fungal spores to the construction zone. 279,284,307,308 The specific type of physical barrier required depends on the project's scope and duration and on local fire codes. Short-term projects that result in minimal dust dispersion (e.g., installation of new cables or wiring above ceiling tiles) require only portable plastic enclosures with negative pressure and HEPA filtration of the exhaust air from the enclosed work area. The placement of a portable industrial-grade HEPA filter device capable of a filtration rate of 300-800 ft³/min adjacent to the work area will help to remove fungal spores, but its efficacy is dependent on the supplied ACH and the size of the area. If the project is extensive but short-term, dust-abatement, fire-resistant plastic curtains (e.g., Visqueen®) may be adequate. These should be completely airtight and sealed from ceiling to floor with overlapping curtains; 276,277,309 holes, tears, or other perforations should be repaired promptly with tape. A portable, industrial-grade HEPA filter unit on continuous operation is needed within the contained area, with the filtered air exhausted to the outside of the work zone. Patients should not remain in the room when dust-generating activities are performed. Tools to assist the decision-making process regarding selection of barriers based on an ICRA approach are available. 281

More elaborate barriers are indicated for long-term projects that generate moderate to large amounts of dust. These barrier structures typically consist of rigid, noncombustible walls constructed from sheet rock, drywall, plywood, or plaster board and covered with sheet plastic (e.g., Visqueen®). Barrier requirements to prevent the intrusion of dust into patient-care areas include a. installing a plastic dust-abatement curtain before construction of the rigid barrier; b. sealing and taping all joint edges, including the top and bottom; c.
extending the barrier from floor to floor, which takes into account the space above the finished, lay-down ceiling; and d. fitting or sealing any temporary doors connecting the construction zone to the adjacent area. (See Box 7 for a list of the various construction and repair activities that require the use of some type of barrier.)

Dust and moisture abatement and control rely primarily on the impermeable barrier containment approach; as construction continues, numerous opportunities can lead to dispersion of dust to other areas of the health-care facility. Infection-control measures that augment the use of barrier containment should be undertaken (Table 9).

Dust-control measures for clinical laboratories are an essential part of the infection-control strategy during hospital construction or renovation. Use of plastic or solid barriers may be needed if the ICRA determines that air flow from construction areas may introduce airborne contaminants into the laboratory space. In one facility, pseudofungemia clusters attributed to Aspergillus spp. and Penicillium spp. were linked to improper air flow patterns and construction projects adjacent to the laboratory; intrusion of dust and spores into a biological safety cabinet from construction activity immediately next to the cabinet resulted in a cluster of cultures contaminated with Aspergillus niger. 310,311 Reportedly, no barrier containment was used, and the HEPA filtration system was overloaded with dust. In addition, an outbreak of pseudobacteremia caused by Bacillus spp. occurred in another hospital during construction above a storage area for blood culture bottles. 207 Airborne spread of Bacillus spp. spores resulted in contamination of the bottles' plastic lids, which were not disinfected or handled with proper aseptic technique prior to collection of blood samples.

1. Monitor the construction area daily for compliance with the infection-control plan.
2. Protective outer clothing for construction workers should be removed before entering clean areas.
3. Use mats with tacky surfaces within the construction zone at the entry; cover sufficient area so that both feet make contact with the mat while walking through the entry.
4. Construct an anteroom as needed where coveralls can be donned and removed.
5. Clean the construction zone and all areas used by construction workers with a wet mop.
6. If the area is carpeted, vacuum daily with a HEPA-filter-equipped vacuum.
7. Provide temporary essential services (e.g., toilets) and worker conveniences (e.g., vending machines) in the construction zone as appropriate.
8. Damp-wipe tools if removed from the construction zone or left in the area.
9. Ensure that construction barriers remain well sealed; use particle sampling as needed.
10. Ensure that the clinical laboratory is free from dust contamination.

Upon completion of the project, flush the main water system to clear dust-contaminated lines.

# Environmental Infection-Control Measures for Special Health-Care Settings

Areas in health-care facilities that require special ventilation include a. operating rooms; b. PE rooms used by high-risk, immunocompromised patients; and c. AII rooms for isolation of patients with airborne infections (e.g., those caused by M. tuberculosis, VZV, or measles virus). The number of rooms required for PE and AII is determined by a risk assessment of the health-care facility.
6 Continuous, visual monitoring of air flow direction is required for new or renovated pressurized rooms. 120,256

# a. Protective Environments (PE)

Although the exact configuration and specifications of PEs might differ among hospitals, these care areas for high-risk, immunocompromised patients are designed to minimize fungal spore counts in air by maintaining a. filtration of incoming air by using central or point-of-use HEPA filters; b. directed room air flow; c. positive room air pressure of 2.5 Pa relative to the corridor; d. well-sealed rooms; and e. ≥12 ACH. 44,120,251,254, Air flow rates must be adjusted accordingly to ensure sufficient ACH, and these rates vary depending on certain factors (e.g., room air leakage area). For example, to provide ≥12 ACH in a typical patient room with 0.5 sq. ft. of air leakage, the air flow rate will be minimally 125 cubic feet/min (cfm). 320,321 Higher air flow rates may be needed. A general ventilation diagram for a positive-pressure room is given in Figure 2. Directed room air flow in PE rooms is not laminar; parallel air streams are not generated. Studies attempting to demonstrate patient benefit from laminar air flow in a PE setting are equivocal. 316,318,319,322-327 Air flow direction at the entrances to these areas should be maintained and verified, preferably on a daily basis, using either a visual means of indication (e.g., smoke tubes and flutter strips) or manometers. Permanent installation of a visual monitoring device is indicated for new PE construction and renovation. 120

Facility service structures can interfere with the proper unidirectional air flow from the patients' rooms to the adjacent corridor. In one outbreak investigation, Aspergillus spp. infections in a critical care unit may have been associated with a pneumatic specimen transport system, a textile disposal duct system, and central vacuum lines for housekeeping, all of which disrupted proper air flow from the patients' rooms to the outside and allowed entry of fungal spores into the unit (M. McNeil, CDC, 2000). The use of surface fungicide treatments is becoming more common, especially for building materials. 329 Copper-based compounds have demonstrated anti-fungal activity and are often applied to wood or paint. Copper-8-quinolinolate was used on environmental surfaces contaminated with Aspergillus spp. to control one reported outbreak of aspergillosis. 310 The compound was also incorporated into the fireproofing material of a newly constructed hospital to help decrease the environmental spore burden. 316

# b. Airborne Infection Isolation (AII)

Acute-care inpatient facilities need at least one room equipped to house patients with airborne infectious disease. Every health-care facility, including ambulatory and long-term care facilities, should undertake an ICRA to identify the need for AII areas. Once the need is established, the appropriate ventilation equipment can be identified. Air handling systems for this purpose need not be restricted to central systems. Guidelines for the prevention of health-care acquired TB have been published in response to multiple reports of health-care associated transmission of multi-drug resistant strains. 4,330 In reports documenting health-care acquired TB, investigators have noted a failure to comply fully with prevention measures in established guidelines. 331-345 These gaps highlight the importance of prompt recognition of the disease, isolation of patients, proper treatment, and engineering controls.
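The relationship between supply air flow and ACH discussed in the PE section above is simple arithmetic: ACH = 60 × Q / V, where Q is the supply flow in cfm and V is the room volume in cubic feet. The sketch below uses an assumed room size; note that the guideline's 125-cfm figure also accounts for pressurization against the stated 0.5 sq. ft. of leakage, which this simple dilution calculation ignores.

```python
# Minimal sketch: air changes per hour (ACH) from supply flow and room volume.
# ACH = 60 * Q / V, with Q in cubic feet per minute and V in cubic feet.
# Room dimensions are assumed for illustration.

def ach(flow_cfm: float, room_volume_ft3: float) -> float:
    return 60.0 * flow_cfm / room_volume_ft3

def required_flow_cfm(target_ach: float, room_volume_ft3: float) -> float:
    return target_ach * room_volume_ft3 / 60.0

room_volume = 12 * 15 * 9.0  # assumed 12 x 15 ft room with a 9-ft ceiling = 1,620 ft^3

print(f"Flow for 12 ACH: {required_flow_cfm(12, room_volume):.0f} cfm")  # -> 324 cfm
print(f"ACH at 125 cfm:  {ach(125, room_volume):.1f}")                   # -> ~4.6
```

Under this dilution model, 125 cfm yields 12 ACH only for a room of about 625 ft³; larger rooms need proportionally more supply air, which is consistent with the guideline's note that higher air flow rates may be needed.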
AII rooms are also appropriate for the care and management of smallpox patients. 6 Environmental infection control with respect to smallpox is currently being revisited (see Appendix E). Salient features of engineering controls for AII areas include a. use of negative pressure rooms with close monitoring of air flow direction using manometers or temporary or installed visual indicators placed in the room with the door closed; b. minimum 6 ACH for existing facilities, ≥12 ACH for areas under renovation or for new construction; and c. air from negative pressure rooms and treatment rooms exhausted directly to the outside if possible. 4,120,248 As with PE, airflow rates need to be determined to ensure the proper number of ACH. 320,321 AII rooms can be constructed either with (Figure 3) or without (Figure 4) an anteroom. When the recirculation of air from AII rooms is unavoidable, HEPA filters should be installed in the exhaust duct leading from the room to the general ventilation system. In addition to UVGI fixtures in the room, UVGI can be placed in the ducts as an adjunct measure to HEPA filtration, but it cannot replace the HEPA filter. 4

One of the components of airborne infection isolation is respiratory protection for health-care workers and visitors when entering AII rooms. 4,6,347 Recommendations for the type of respiratory protection depend on the patient's airborne infection (indicating the need for AII) and the risk of infection to persons entering the AII room. A more in-depth discussion of respiratory protection in this instance is presented in the current isolation guideline; 6 a revision of this guideline is in development. Cough-inducing procedures (e.g., endotracheal intubation and suctioning of known or suspected TB patients, diagnostic sputum induction, aerosol treatments, and bronchoscopy) require similar precautions.

Additional engineering measures are necessary for the management of patients requiring PE (i.e., allogeneic HSCT patients) who concurrently have airborne infection. For this type of patient treatment, an anteroom (Figure 4) is required in new construction and renovation as per AIA guidelines. 120 The pressure differential of an anteroom can be positive or negative relative to the patient in the room. 120 An anteroom can act as an airlock (Figure 4). If the anteroom is positive relative to the air space in the patient's room, staff members do not have to mask prior to entry into the anteroom if air is directly exhausted to the outside and a minimum of 10 ACH is maintained (Figure 4, top diagram). 120 When an anteroom is negative relative to both the AII room and the corridor, health-care workers must mask prior to entering the anteroom (Figure 4, bottom diagram). If an AII room with an anteroom is not available, use of a portable, industrial-grade HEPA filter unit may help to increase the number of ACH while facilitating the removal of fungal spores; however, a fresh air source must be present to achieve the proper air exchange rate. Incoming ambient air should receive HEPA filtration.

# c. Operating Rooms

Operating room air may contain microorganisms, dust, aerosols, lint, squamous epithelial skin cells, and respiratory droplets. The microbial level in operating room air is directly proportional to the number of people moving about in the room. 351 One study documented lower infection rates with coagulase-negative staphylococci among patients when operating room traffic during the surgical procedure was limited.
352 Therefore, efforts should be made to minimize personnel traffic during operations. Outbreaks of SSIs caused by group A beta-hemolytic streptococci have been traced to airborne transmission from colonized operating-room personnel to patients. Several potential health-care associated pathogens (e.g., Staphylococcus aureus and Staphylococcus epidermidis) and drug-resistant organisms have also been recovered from areas adjacent to the surgical field, 353 but the extent to which the presence of bacteria near the surgical field influences the development of postoperative SSIs is not clear. 354

Proper ventilation, humidity (<68%), and temperature control in the operating room are important not only for the comfort of surgical personnel and patients but also for preventing environmental conditions that encourage the growth and transmission of microorganisms. 355 Operating rooms should be maintained at positive pressure with respect to corridors and adjacent areas. 356 Operating rooms typically do not have a variable air handling system. Variable air handling systems are permitted for use in operating rooms only if they continue to provide a positive pressure with respect to the corridors and adjacent areas and the proper ACH are maintained when the room is occupied. Conventional operating-room ventilation systems produce a minimum of about 15 ACH of filtered air for thermal control, three (20%) of which must be fresh air. 120,357,358 Air should be introduced at the ceiling and exhausted near the floor. 357,359 Laminar airflow and UVGI have been suggested as adjunct measures to reduce SSI risk for certain operations. Laminar airflow is designed to move particle-free air over the aseptic operating field at a uniform velocity (0.3-0.5 m/sec), sweeping away particles in its path. This air flow can be directed vertically or horizontally, and recirculated air is passed through a HEPA filter. Neither laminar airflow nor UV light, however, has been conclusively shown to decrease overall SSI risk. 356,

Elective surgery on infectious TB patients should be postponed until such patients have received adequate drug therapy. The use of general anesthesia in TB patients poses infection-control challenges because intubation can induce coughing, and the anesthesia breathing circuit apparatus potentially can become contaminated. 371 Although operating room suites at 15 ACH exceed the air exchange rate required for AII rooms, their positive pressurization relative to adjacent areas does not prevent transmission of TB to operating-room personnel. If feasible, intubation and extubation of the TB surgical patient should be performed in an AII room. AIA currently does not recommend changing pressure from positive to negative or setting it to neutral; most facilities lack the capability to do so. 120 When emergency surgery is indicated for a suspected or diagnosed infectious TB patient, taking specific infection-control measures is prudent (Box 8). The placement of portable HEPA filter units in the operating room must be carefully evaluated for potential disruptions in normal air flow. The portable unit should be turned off while the surgical procedure is underway and turned on following extubation. Portable HEPA filter units previously placed in construction areas may be used in subsequent patient care, provided that all internal and external surfaces are cleaned, the filter's performance is verified with appropriate particle testing, and the filter is changed, if needed.
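The 15-ACH/20%-fresh-air requirement above translates directly into total and outdoor-air supply targets for a given room size. A minimal sketch follows, with an assumed operating-room volume:

```python
# Minimal sketch: total and fresh-air supply for a conventional operating room
# at 15 ACH with 20% (3 of 15 air changes) outdoor air. OR dimensions assumed.

OR_VOLUME_FT3 = 20 * 20 * 10.0  # assumed 20 x 20 ft OR with a 10-ft ceiling

total_ach, fresh_ach = 15, 3
total_cfm = total_ach * OR_VOLUME_FT3 / 60.0
fresh_cfm = fresh_ach * OR_VOLUME_FT3 / 60.0

print(f"Total supply: {total_cfm:.0f} cfm; outdoor air: {fresh_cfm:.0f} cfm")
# -> Total supply: 1000 cfm; outdoor air: 200 cfm
```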
# Other Aerosol Hazards in Health-Care Facilities

In addition to infectious bioaerosols, several crucial non-infectious, indoor air-quality issues must be addressed by health-care facilities. The presence of sensitizing and allergenic agents and irritants in the workplace (e.g., ethylene oxide, glutaraldehyde, formaldehyde, hexachlorophene, and latex allergens 375 ) is increasing. Asthma and dermatologic and systemic reactions often result from exposure to these chemicals. Anesthetic gases and aerosolized medications (e.g., ribavirin, pentamidine, and aminoglycosides) represent some of the emerging potentially hazardous exposures to health-care workers. Containment of the aerosol at the source is the first level of engineering control, but personal protective equipment (e.g., masks, respirators, and glove liners) that distances the worker from the hazard also may be needed.

Laser plumes and surgical smoke represent another potential risk for health-care workers. Lasers transfer electromagnetic energy into tissues, resulting in the release of a heated plume that includes particles, gases, tissue debris, and offensive smells. One concern is that aerosolized infectious material in the laser plume might reach the nasal mucosa of surgeons and adjacent personnel. Although some viruses (i.e., varicella-zoster virus, pseudorabies virus, and herpes simplex virus) do not aerosolize efficiently, 379,380 other viruses and bacteria (e.g., human papillomavirus [HPV], HIV, coagulase-negative Staphylococcus, Corynebacterium spp., and Neisseria spp.) have been detected in laser plumes. The presence of an infectious agent in a laser plume may not, however, be sufficient to cause disease from airborne exposure, especially if the normal mode of transmission for the agent is not airborne. No evidence indicates that HIV or hepatitis B virus (HBV) has been transmitted via aerosolization and inhalation. 388 Although continuing studies are needed to fully evaluate the risk of laser plumes to surgical personnel, the prevention measures in these other guidelines should be followed: a. NIOSH recommendations, 378 b. the Recommended Practices for Laser Safety in Practice Settings developed by the Association of periOperative Registered Nurses (AORN), 389 c. the assessments of ECRI, 390-392 and d. the ANSI standard. 393 These guidelines recommend the use of a. respirators (N95 or N100) or full face shields and masks, 260 b. central wall-suction units with in-line filters to collect particulate matter from minimal plumes, and c. dedicated mechanical smoke exhaust systems with a high-efficiency filter to remove large amounts of laser plume. Although transmission of TB has occurred as a result of abscess management practices that lacked airborne particulate control measures and respiratory protection, use of a smoke evacuator or needle aspirator and a high degree of clinical awareness can help protect health-care workers when excising and draining an extrapulmonary TB abscess. 137

# D. Water

# 1. Modes of Transmission of Waterborne Diseases

Moist environments and aqueous solutions in health-care settings have the potential to serve as reservoirs for waterborne microorganisms. Under favorable environmental circumstances (e.g., warm temperature and the presence of a source of nutrition), many bacterial and some protozoal microorganisms can either proliferate in active growth or remain for long periods in highly stable, environmentally resistant (yet infectious) forms.
Modes of transmission for waterborne infections include direct contact; ingestion of water; indirect-contact transmission; 6 inhalation of aerosols dispersed from water sources; 3 and aspiration of contaminated water. The first three modes of transmission are commonly associated with infections caused by gram-negative bacteria and nontuberculous mycobacteria (NTM). Aerosols generated from water sources contaminated with Legionella spp. often serve as the vehicle for introducing legionellae to the respiratory tract. 394

# 2. Waterborne Infectious Diseases in Health-Care Facilities

# a. Legionellosis

Legionellosis is a collective term describing infection produced by Legionella spp., whereas Legionnaires disease is a multi-system illness with pneumonia. 395 The clinical and epidemiologic aspects of these diseases (Table 11) are discussed extensively in another guideline. 3 Although Legionnaires disease is a respiratory infection, infection-control measures intended to prevent health-care-associated cases center on the quality of water, the principal reservoir for Legionella spp.

# Table 11. Clinical and epidemiologic characteristics of legionellosis/Legionnaires disease

# Modes of transmission
- Aspiration of water, direct inhalation of water aerosols. 3,400

# Causative agent
- Legionella pneumophila (90% of infections); L. micdadei, L. bozemanii, L. dumoffii, L. longbeachae (14 additional species can cause infection in humans).

# Source of exposure
- Exposure to environmental sources of Legionella spp. (i.e., water or water aerosols). 31,33,

# Clinical syndromes and diseases
Two distinct illnesses:
- Pontiac fever; and
- progressive pneumonia that may be accompanied by cardiac, renal, and gastrointestinal involvement. 3

# Patient populations at greatest risk
- Immunosuppressed patients (e.g., transplant patients, cancer patients, and patients receiving corticosteroid therapy);
- immunocompromised patients (e.g., surgical patients, patients with underlying chronic lung disease, and dialysis patients);
- elderly persons; and
- patients who smoke.

# Occurrence
- Proportion of community-acquired pneumonia caused by Legionella spp. ranges from 1%-5%; estimated annual incidence among the general population is 8,000-18,000 cases in the United States; the incidence of health-care-associated pneumonia (0%-14%) may be underestimated if appropriate laboratory diagnostic methods are unavailable. 396,397,

# Mortality rate
- Mortality declined markedly during 1980-1998, from 34% to 12% for all cases; the mortality rate is higher among persons with health-care associated pneumonia than among community-acquired pneumonia patients (14% versus 10%). 445

Legionella spp. are commonly found in various natural and man-made aquatic environments 446,447 and can enter health-care facility water systems in low or undetectable numbers. 448,449 Cooling towers, evaporative condensers, heated potable water distribution systems, and locally produced distilled water can provide environments for multiplication of legionellae. In several hospital outbreaks, patients have been infected through exposure to contaminated aerosols generated by cooling towers, showers, faucets, respiratory therapy equipment, and room-air humidifiers.
455 Factors that enhance colonization and amplification of legionellae in man-made water environments include a. temperatures of 77°F-107.6°F (25°C-42°C), b. stagnation, 461 c. scale and sediment, 462 and d. presence of certain free-living aquatic amoebae that can support intracellular growth of legionellae. 462,463 The bacteria multiply within single-cell protozoa in the environment and within alveolar macrophages in humans.

# b. Other Gram-Negative Bacterial Infections

Other gram-negative bacteria present in potable water also can cause health-care associated infections. Clinically important, opportunistic organisms in tap water include Pseudomonas aeruginosa, Pseudomonas spp., Burkholderia cepacia, Ralstonia pickettii, Stenotrophomonas maltophilia, and Sphingomonas spp. (Tables 12 and 13). Immunocompromised patients are at greatest risk of developing infection. Medical conditions associated with these bacterial agents range from colonization of the respiratory and urinary tracts to deep, disseminated infections that can result in pneumonia and bacteremia. Colonization by any of these organisms often precedes the development of infection. The use of tap water in medical care (e.g., in direct patient care, as a diluent for solutions, as a water source for medical instruments and equipment, and during the final stages of instrument disinfection) therefore presents a potential risk for exposure. Colonized patients also can serve as a source of contamination, particularly for moist environments of medical equipment (e.g., ventilators).

In addition to Legionella spp., Pseudomonas aeruginosa and Pseudomonas spp. are among the most clinically relevant, gram-negative, health-care associated pathogens identified from water. These and other gram-negative, non-fermentative bacteria have minimal nutritional requirements (i.e., these organisms can grow in distilled water) and can tolerate a variety of physical conditions. These attributes are critical to the success of these organisms as health-care associated pathogens. Measures to prevent the spread of these organisms and other waterborne, gram-negative bacteria include hand hygiene, glove use, barrier precautions, and eliminating potentially contaminated environmental reservoirs. 464,465

# Table 12. Pseudomonas aeruginosa infections

# Modes of transmission
- Direct contact with water, aerosols; aspiration of water and inhalation of water aerosols; and indirect transfer from moist environmental surfaces via hands of health-care workers. 28,

# Clinical syndromes and diseases
- Septicemia, pneumonia (particularly ventilator-associated), chronic respiratory infections among cystic fibrosis patients, urinary tract infections, skin and soft-tissue infections (e.g., tissue necrosis and hemorrhage), burn-wound infections, folliculitis, endocarditis, central nervous system infections (e.g., meningitis and abscess), eye infections, and bone and joint infections.

# Environmental sources of pseudomonads in health-care settings
- Potable (tap) water, distilled water, antiseptic solutions contaminated with tap water, sinks, hydrotherapy pools, whirlpools and whirlpool spas, water baths, lithotripsy therapy tanks, dialysis water, eyewash stations, flower vases, and endoscopes with residual moisture in the channels. 28,29,466,468,
# Environmental sources of pseudomonads in the community
- Fomites (e.g., drug injection equipment stored in contaminated water). 494,495

# Patient populations at greatest risk
- Intensive care unit (ICU) patients (including neonatal ICU), transplant patients (organ and hematopoietic stem cell), neutropenic patients, burn therapy and hydrotherapy patients, patients with malignancies, cystic fibrosis patients, patients with underlying medical conditions, and dialysis patients. 28,466,467,472,477,493,506-508,511,512,521-526

# Table 13. Other gram-negative bacteria associated with water and moist environments

# Burkholderia cepacia
- Distilled water 527
- Contaminated solutions and disinfectants 528,529
- Dialysis machines 527
- Nebulizers
- Water baths 533
- Intrinsically contaminated mouthwash 534 (This report describes contamination occurring during manufacture prior to use by the health-care facility staff. All other entries reflect extrinsic sources of contamination.)
- Ventilator temperature probes 535

# Stenotrophomonas maltophilia, Sphingomonas spp.
- Distilled water 536,537
- Contaminated solutions and disinfectants 529
- Dialysis machines 527
- Nebulizers
- Water 538
- Ventilator temperature probes 539

# Ralstonia pickettii
- Fentanyl solutions 540
- Chlorhexidine 541
- Distilled water 541
- Contaminated respiratory therapy solution 541,542

# Serratia marcescens
- Potable water 543
- Contaminated antiseptics (i.e., benzalkonium chloride and chlorhexidine)
- Contaminated disinfectants (i.e., quaternary ammonium compounds and glutaraldehyde) 547,548

# Acinetobacter spp.
- Medical equipment that collects moisture (e.g., mechanical ventilators, cool mist humidifiers, vaporizers, and mist tents)
- Room humidifiers 553,555
- Environmental surfaces

# Enterobacter spp.
- Humidifier water 565
- Intravenous fluids
- Unsterilized cotton swabs 573
- Ventilators 565,569
- Rubber piping on a suctioning machine 565,569
- Blood gas analyzers 570

Two additional gram-negative bacterial pathogens that can proliferate in moist environments are Acinetobacter spp. and Enterobacter spp. 571,572 Members of both genera are responsible for health-care associated episodes of colonization, bloodstream infections, pneumonia, and urinary tract infections among medically compromised patients, especially those in ICUs and burn therapy units. 566, Infections caused by Acinetobacter spp. represent a significant clinical problem. Average infection rates are higher from July through October compared with rates from November through June. 584 Mortality rates associated with Acinetobacter bacteremia are 17%-52%, and rates as high as 71% have been reported for pneumonia caused by infection with either Acinetobacter spp. or Pseudomonas spp. Multidrug resistance, especially to third-generation cephalosporins for Enterobacter spp., contributes to increased morbidity and mortality. 569,572

Patients and health-care workers contribute significantly to the environmental contamination of surfaces and equipment with Acinetobacter spp. and Enterobacter spp., especially in intensive care areas, because of the nature of the medical equipment (e.g., ventilators) and the moisture associated with this equipment. 549,571,572,585 Hand carriage and hand transfer are commonly associated with health-care-associated transmission of these organisms and of S. marcescens. 586 Enterobacter spp. are primarily spread in this manner among patients by the hands of health-care workers. 567,587 Acinetobacter spp.
have been isolated from the hands of 4%-33% of health-care workers in some studies, and transfer of an epidemic strain of Acinetobacter from patients' skin to health-care workers' hands has been demonstrated experimentally. 591 Acinetobacter infections and outbreaks have also been attributed to medical equipment and materials (e.g., ventilators, cool mist humidifiers, vaporizers, and mist tents) that may have contact with water of uncertain quality (e.g., rinsing a ventilator circuit in tap water). Strict adherence to hand hygiene helps prevent the spread of both Acinetobacter spp. and Enterobacter spp. 577,592

Acinetobacter spp. have also been detected on dry environmental surfaces (e.g., bed rails, counters, sinks, bed cupboards, bedding, floors, telephones, and medical charts) in the vicinity of colonized or infected patients; such contamination is especially problematic for surfaces that are frequently touched. In two studies, the survival periods of Acinetobacter baumannii and Acinetobacter calcoaceticus on dry surfaces approximated that of S. aureus (e.g., 26-27 days). 593,594 Because Acinetobacter spp. may come from numerous sources at any given time, laboratory investigation of health-care associated Acinetobacter infections should involve techniques to determine biotype, antibiotype, plasmid profile, and genomic fingerprinting (i.e., macrorestriction analysis) to accurately identify sources and modes of transmission of the organism(s). 595

# c. Infections and Pseudo-Infections Due to Nontuberculous Mycobacteria

NTM are acid-fast bacilli (AFB) commonly found in potable water. NTM include both saprophytic and opportunistic organisms. Many NTM are of low pathogenicity, and some measure of host impairment is necessary for clinical disease to develop. 596 The four most common forms of human disease associated with NTM are pulmonary disease in adults; cervical lymph node disease in children; skin, soft tissue, and bone infections; and disseminated disease in immunocompromised patients. 596,597 Person-to-person acquisition of NTM infection, especially among immunocompetent persons, does not appear to occur, and close contacts of patients are not readily infected, despite the high numbers of organisms harbored by such patients. 596,

NTM are spread via all modes of transmission associated with water. In addition to health-care associated outbreaks of clinical disease, NTM can colonize patients in health-care facilities through consumption of contaminated water or ice or through inhalation of aerosols. Colonization following NTM exposure, particularly of the respiratory tract, occurs when a patient's local defense mechanisms are impaired; overt clinical disease, however, does not develop. 606

Format Change: The format of this section was changed to improve readability and accessibility. The content is unchanged.

# Table 14a. Infections or colonizations
Pathogens and the vehicles associated with infections or colonizations:

# Mycobacterium abscessus
- Inadequately sterilized medical instruments 613

# Mycobacterium avium complex (MAC)
- Potable water

# Mycobacterium chelonae
- Dialysis, reprocessed dialyzers 31,32
- Inadequately-sterilized medical instruments, jet injectors 617,618
- Contaminated solutions 619,620
- Hydrotherapy tanks 621

# Mycobacterium fortuitum
- Aerosols from showers or other water sources 605,606
- Ice 602
- Inadequately sterilized medical instruments 603
- Hydrotherapy tanks 622

# Mycobacterium marinum
- Hydrotherapy tanks 623

# Mycobacterium ulcerans
- Potable water 624
- Laboratory solution (intrinsically contaminated) 625
- Potable water ingestion prior to sputum specimen collection 626

# Mycobacterium kansasii
- Potable water 627

# Mycobacterium terrae
- Potable water 608

# Mycobacterium xenopi
- Potable water 609,612,627

NTM can be isolated from both natural and man-made environments. Numerous studies have identified NTM in potable water supplies. 632 Some NTM species (e.g., Mycobacterium xenopi) can survive in water at 113°F (45°C) and can be isolated from hot water taps, which can pose a problem for hospitals that lower the temperature of their hot water systems. 627 Other NTM (e.g., Mycobacterium kansasii, M. gordonae, M. fortuitum, and M. chelonae) cannot tolerate high temperatures and are associated more often with cold water lines and taps. 629 NTM have a high resistance to chlorine; they can tolerate the free chlorine concentrations of 0.05-0.2 mg/L (0.05-0.2 ppm) found at the tap. 598,633,634 They are 20-100 times more resistant to chlorine compared with coliforms; slow-growing strains of NTM (e.g., Mycobacterium avium and M. kansasii) appear to be more resistant to chlorine inactivation compared with fast-growing NTM. 635 Slow-growing NTM species have also demonstrated some resistance to formaldehyde and glutaraldehyde, which has posed problems for the reuse of hemodialyzers. 31 The ability of NTM to form biofilms at fluid-surface interfaces (e.g., the interior surfaces of water pipes) contributes to the organisms' resistance to chemical inactivation and provides a microenvironment for growth and proliferation. 636,637

# d. Cryptosporidiosis

Cryptosporidium parvum is a protozoan parasite that causes self-limiting gastroenteritis in normal hosts but can cause severe, life-threatening disease in immunocompromised patients. First recognized as a human pathogen in 1976, C. parvum can be present in natural and finished waters after fecal contamination from either human or animal sources.

The health risks associated with drinking potable water contaminated with minimal numbers of C. parvum oocysts are unknown. 642 It remains to be determined if immunosuppressed persons are more susceptible to lower doses of oocysts than are immunocompetent persons. One study demonstrated that a median 50% infectious dose (ID50) of 132 oocysts of calf origin was sufficient to cause infection among healthy volunteers. 643 In a second study, the same researchers found that oocysts obtained from infected foals (newborn horses) were infectious for human volunteers at a median ID50 of 10 oocysts, indicating that different strains or species of Cryptosporidium may vary in their infectivity for humans. 644 In a small study population of 17 healthy adults with pre-existing antibody to C. parvum, the ID50 was determined to be 1,880 oocysts, more than 20-fold higher than in seronegative persons.
645 These data suggest that pre-existing immunity derived from previous exposures to Cryptosporidium offers some protection from the infection and illness that ordinarily would result from exposure to low numbers of oocysts. 645,646

Oocysts, particularly those with thick walls, are environmentally resistant, but their survival under natural water conditions is poorly understood. Under laboratory conditions, some oocysts remain viable and infectious in cold water (41°F [5°C]) for months. 641 The prevalence of Cryptosporidium in the U.S. drinking water supply is notable. Two surveys of approximately 300 surface water supplies revealed that 55%-77% of the water samples contained Cryptosporidium oocysts. 647,648 Because the oocysts are highly resistant to common disinfectants (e.g., chlorine) used to treat drinking water, filtration of the water is important in reducing the risk of waterborne transmission. Coagulation-flocculation and sedimentation, when used with filtration, can collectively achieve approximately a 2.5 log10 reduction in the number of oocysts. 649 However, outbreaks have been associated with both filtered and unfiltered drinking water systems (e.g., the 1993 outbreak in Milwaukee, Wisconsin, that affected 400,000 people). 641, The presence of oocysts in the water is not an absolute indicator that infection will occur when the water is consumed, nor does the absence of detectable oocysts guarantee that infection will not occur. Health-care associated outbreaks of cryptosporidiosis primarily have been described among groups of elderly patients and immunocompromised persons. 653

# Water Systems in Health-Care Facilities

# a. Basic Components and Point-of-Use Fixtures

Treated municipal water enters a health-care facility via the water mains and is distributed throughout the building(s) by a network of pipes constructed of galvanized iron, copper, and polyvinyl chloride (PVC). The pipe runs should be as short as is practical. Where recirculation is employed, the pipe runs should be insulated and long dead legs avoided in an effort to minimize the potential for water stagnation, which favors the proliferation of Legionella spp. and NTM. In high-risk applications (e.g., PE areas for severely immunosuppressed patients), insulated recirculation loops should be incorporated as a design feature to minimize heat loss. Each water service main, branch main, riser, and branch (to a group of fixtures) has a valve and a means to reach the valves via an access panel. 120 Each fixture has a stop valve. Valves permit the isolation of a portion of the water system within a facility during repairs or maintenance. Vacuum breakers and other similar devices in the lines prevent water from back-flowing into the system. All systems that supply water should be evaluated to determine the risk for potential back siphonage and cross connections.

Health-care facilities generate hot water from municipal water using a boiler system. Hot water heaters and storage vessels for such systems should have a drainage facility at the lowest point, and the heating element should be located as close as possible to the bottom of the vessel to facilitate mixing and to prevent water temperature stratification. Hot or cold water systems that incorporate an elevated holding tank should be inspected and cleaned annually. Lids should fit securely to exclude foreign materials.

The most common point-of-use fixtures for water in patient-care areas are sinks, faucets, aerators, showers, and toilets; eye-wash stations are found primarily in laboratories.
The potential for these fixtures to serve as a reservoir for pathogenic microorganisms has long been recognized (Table 15). 509, Wet surfaces and the production of aerosols facilitate the multiplication and dispersion of microbes. The level of risk associated with aerosol production from point-of-use fixtures varies. Aerosols from shower heads and aerators have been linked to a limited number of clusters of gram-negative bacterial colonizations and infections, including Legionnaires disease, especially in areas where immunocompromised patients are present (e.g., surgical ICUs, transplant units, and oncology units). 412, 415, In one report, clinical infection was not evident among immunocompetent persons (e.g., hospital staff) who used hospital showers when Legionella pneumophila was present in the water system. 660 Given the infrequency of reported outbreaks associated with faucet aerators, consensus has not been reached regarding the disinfection or removal of these devices from general use. If additional clusters of infections or colonizations occur in high-risk patient-care areas, it may be prudent to clean and decontaminate the aerators or to remove them. 658,659 ASHRAE recommends cleaning and monthly disinfection of aerators in high-risk patient-care areas as part of Legionella control measures. 661 Although aerosols are produced with toilet flushing, 662,663 no epidemiologic evidence suggests that these aerosols pose a direct infection hazard.

Although not considered a standard point-of-use fixture, decorative fountains are being installed in increasing numbers in health-care facilities and other public buildings. Aerosols from a decorative fountain have been associated with transmission of Legionella pneumophila serogroup 1 infection to a small cluster of older adults. 664 This hotel lobby fountain had been irregularly maintained, and water in the fountain may have been heated by submersed lighting, all of which favored the proliferation of Legionella in the system. 664 Because of the potential for generation of infectious aerosols, a prudent prevention measure is to avoid locating these fixtures in or near high-risk patient-care areas and to adhere to written policies for routine fountain maintenance. 120

# Table 15. Water reservoirs and point-of-use fixtures (fragment)
- Potable water: pathogens as listed in Tables 12-14; follow public health guidelines.
- Potable water (Legionella): transmission by aerosol inhalation; moderate risk (occasional well-described outbreaks); provide supplemental treatment for the water.

# b. Water Temperature and Pressure

Hot water temperature is usually measured at the point of use or at the point at which the water line enters equipment requiring hot water for proper operation. 120 Generally, the hot water temperature in hospital patient-care areas is no greater than a temperature within the range of 105°F-120°F (40.6°C-49°C), depending on the AIA guidance issued in the year in which the facility was built. 120 Hot water temperature in patient-care areas of skilled nursing-care facilities is set within a slightly lower range of 95°F-110°F (35°C-43.3°C), depending on the AIA guidance at the time of facility construction. 120 Many states have adopted temperature settings in these ranges into their health-care regulations and building codes. ASHRAE, however, has recommended higher settings. 661 Steam jets or booster heaters are usually needed to meet the hot water temperature requirements in certain service areas of the hospital (e.g., the kitchen or the laundry). 120
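Facilities that log point-of-use hot water temperatures can compare readings mechanically against the ranges just cited. The following is a minimal sketch in Python; the function name and data layout are hypothetical, the ranges are the ones quoted above, and the values applicable to any given facility depend on the AIA guidance in force at construction and on state codes.

```python
# Illustrative check of hot-water readings against the ranges cited above.
# These ranges (deg F) are quoted from this section; actual set points depend
# on the AIA guidance in force when the facility was built and on state codes.
RANGES_F = {
    "hospital_patient_care": (105.0, 120.0),        # 40.6-49 deg C
    "skilled_nursing_patient_care": (95.0, 110.0),  # 35-43.3 deg C
}

def check_hot_water(facility_type, temp_f):
    low, high = RANGES_F[facility_type]
    if temp_f < low:
        return f"{temp_f} F is below {low} F: check heater and recirculation"
    if temp_f > high:
        return f"{temp_f} F is above {high} F: scalding risk, check mixing valves"
    return f"{temp_f} F is within the {low}-{high} F range"

if __name__ == "__main__":
    print(check_hot_water("hospital_patient_care", 112.0))
    print(check_hot_water("skilled_nursing_patient_care", 93.5))
```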
Additionally, water lines may need to be heated to a particular temperature specified by the manufacturers of specific hospital equipment. Hot-water distribution systems serving patient-care areas are generally operated under constant recirculation to provide continuous hot water at each hot-water outlet. 120 If a facility has a hemodialysis unit, continuously circulated, cold treated water is provided to that unit. 120 To minimize the growth and persistence of gram-negative waterborne bacteria (e.g., thermophilic NTM and Legionella spp.), 627, cold water in health-care facilities should be stored and distributed at temperatures below 68°F (20°C); hot water should be stored above 140°F (60°C) and circulated with a minimum return temperature of 124°F (51°C), 661 or the highest temperature specified in state regulations and building codes. If the return temperature setting of 124°F (51°C) is permitted, installation of preset thermostatic mixing valves near the point of use can help to prevent scalding. Valve maintenance is especially important in preventing valve failure, which can result in scalding. New shower systems in large buildings, hospitals, and nursing homes should be designed to permit mixing of hot and cold water near the shower head. The warm water section of pipe between the control valve and shower head should be self-draining. Where buildings cannot be retrofitted, other approaches to minimize the growth of Legionella spp. include periodically increasing the temperature to at least 150°F (66°C) at the point of use, adding supplemental chlorine, and flushing the water. 661,710,711 Systems should be inspected annually to ensure that thermostats are functioning properly. Maintaining adequate pressure also helps to ensure the integrity of the piping system.

# c. Infection-Control Impact of Water System Maintenance and Repair

Corrective measures for water-system failures have not been studied in well-designed experiments; these measures are instead based on empiric engineering and infection-control principles. Health-care facilities can occasionally sustain both intentional cut-offs by the municipal water authority to permit new construction project tie-ins and unintentional disruptions in service when a water main breaks as a result of aging infrastructure or a construction accident. Vacuum breakers or other similar devices can prevent backflow of water in the facility's distribution system during water-disruption emergencies. 11 To be prepared for such an emergency, all health-care facilities need contingency plans that identify the total demand for potable water, the quantity of replacement water required for a minimum of 24 hours when the water system is down, mechanisms for emergency water distribution, and procedures for correcting drops in water pressure that affect the operation of essential devices and equipment that are driven or cooled by a water system. 713

Format Change: The format of this section was changed to improve readability and accessibility. The content is unchanged.

Detailed, up-to-date plans for hot and cold water piping systems should be readily available for maintenance and repair purposes in case of system problems. Opening potable water systems for repair or construction and subjecting systems to water-pressure changes can result in water discoloration and dramatic increases in the concentrations of Legionella spp. and other gram-negative bacteria.
The maintenance of a chlorine residual at all points within the piping system also offers some protection from entry of contamination to the pipes in the event of inadvertent cross-connection between potable and nonpotable water lines. As a minimum preventive measure, ASHRAE recommends a thorough flushing of the system. 661 High-temperature flushing or hyperchlorination may also be appropriate strategies to decrease potentially high concentrations of waterborne organisms. The decision to pursue either of these remediation strategies, however, should be made on a case-by-case basis. If only a portion of the system is involved, high temperature flushing or chlorination can be used on only that portion of the system. 661 When shock decontamination of hot water systems is necessary (e.g., after disruption caused by construction and after cross-connections), the hot water temperature should be raised to 160°F-170°F (71°C-77°C) and maintained at that level while each outlet around the system is progressively flushed. A minimum flush time of 5 minutes has been recommended; 3 the optimal flush time is not known, however, and longer flush times may be necessary. 714 The number of outlets that can be flushed simultaneously depends on the capacity of the water heater and the flow capability of the system. Appropriate safety procedures to prevent scalding are essential. When possible, flushing should be performed when the fewest building occupants are present (e.g., during nights and weekends). When thermal shock treatment is not possible, shock chlorination may serve as an alternative method. 661 Experience with this method of decontamination is limited, however, and high levels of free chlorine can corrode metals. Chlorine should be added, preferably overnight, to achieve a free chlorine residual of at least 2 mg/L (2 ppm) throughout the system. 661 This may require chlorination of the water heater or tank to levels of 20-50 mg/L (20-50 ppm). The pH of the water should be maintained at 7.0-8.0. 661 After completion of the decontamination, recolonization of the hot water system is likely to occur unless proper temperatures are maintained or a procedure such as continuous supplemental chlorination is continued. Interruptions of the water supply and sewage spills are situations that require immediate recovery and remediation measures to ensure the health and safety of patients and staff. 715 When delivery of potable water through the municipal distribution system has been disrupted, the public water supplier must issue a "boil water" advisory if microbial contamination presents an immediate public health risk to customers. The hospital engineer should oversee the restoration of the water system in the facility and clear it for use when appropriate. Hospitals must maintain a high level of surveillance for waterborne disease among patients and staff after the advisory is lifted. 642 Flooding from either external (e.g., from a hurricane) or internal sources (e.g., a water system break) usually results in property damage and a temporary loss of water and sanitation. JCAHO requires all hospitals to have plans that address facility response for recovery from both internal and external disasters. 713,719 The plans are required to discuss a. general emergency preparedness, b. staffing, c. regional planning among area hospitals, d. emergency supply of potable water, e. infection control and medical services needs, f. climate control, and g. remediation. 
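The shock-chlorination targets described earlier in this subsection (a free chlorine residual of at least 2 mg/L system-wide, heater or tank levels of 20-50 mg/L, and pH maintained at 7.0-8.0) translate into straightforward dosing arithmetic. The sketch below is illustrative only: it assumes 5% liquid sodium hypochlorite (roughly 50,000 mg/L available chlorine) and negligible chlorine demand, so any real application must verify the achieved residual and pH by measurement.

```python
# Rough hypochlorite dosing estimate for shock chlorination (illustrative only).
# Assumes negligible chlorine demand; real systems consume chlorine, so dose
# to the target, then verify the residual (e.g., 20-50 mg/L in the heater,
# >=2 mg/L throughout the system) and confirm the pH stays within 7.0-8.0.

def hypochlorite_volume_l(tank_volume_l, target_mg_per_l, stock_percent=5.0):
    """Liters of stock bleach to reach a target free chlorine level in a tank."""
    stock_mg_per_l = stock_percent / 100.0 * 1_000_000  # 5% is ~50,000 mg/L
    required_mg = tank_volume_l * target_mg_per_l
    return required_mg / stock_mg_per_l

# Example: a 500 L hot-water tank dosed to 50 mg/L with 5% stock
print(f"{hypochlorite_volume_l(500, 50):.2f} L of 5% stock")  # 0.50 L
```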
The basic principles of structural recovery from flooding are similar to those for recovery from sewage contamination (Boxes 9 and 10). Following a major event (e.g., flooding), facilities may elect to conduct microbial sampling of water after the system is restored to verify that water quality has been returned to safe levels (<500 CFU/mL, heterotrophic plate count). This approach may help identify point-of-use fixtures that may harbor contamination as a result of design or engineering features. 720 Medical records should be allowed to dry and then be either photocopied or placed in plastic covers before returning them to the record. Moisture meters can be used to assess water-damaged structural materials. If porous structural materials for walls have a moisture content of >20% after 72 hours, the affected material should be removed. 266,278,313 The management of water-damaged structural materials is not strictly limited to major water catastrophes (e.g., flooding and sewage intrusions); the same principles are used to evaluate the damage from leaking roofs, point-of-use fixtures, and equipment. Additional sources of moisture include condensate on walls from boilers and poorly engineered humidification in HVAC systems.

An exception to these recommendations is made for hemodialysis units, where water is further treated either by portable water treatment or by large-scale water treatment systems, usually involving reverse osmosis (RO). In the United States, >97% of dialysis facilities use RO treatment for their water. 721 However, changing pre-treatment filters and disinfecting the system to prevent colonization of the RO membrane and microbial contamination downstream of the pre-treatment filter are prudent measures.

# Box 10. Contingency planning for flooding

General emergency preparedness
- Ensure that emergency electrical generators are not located in flood-prone areas of the facility.
- Develop alternative strategies for moving patients, water containers, medical records, equipment, and supplies in the event that the elevators are inoperable.
- Establish in advance a centralized base of operations with batteries, flashlights, and cellular phones.
- Ensure sufficient supplies of sandbags to place at the entrances and the areas around boilers, incinerators, and generators.
- Establish alternative strategies for bringing core employees to the facility if high water prevents travel.

# Staffing patterns
- Temporarily reassign licensed staff as needed to critical care areas to provide manual ventilation and to perform vital assessments on patients.
- Designate a core group of employees to remain on site to keep all services operational if the facility remains open.
- Train all employees in emergency preparedness procedures.

# Regional planning among area facilities for disaster management
- Incorporate community support and involvement (e.g., media alerts, news, and transportation).
- Develop in advance strategies for transferring patients, as needed.
- Develop strategies for sharing supplies and providing essential services among participating facilities (e.g., central sterile department services and laundry services).
- Identify sources for emergency provisions (e.g., blood, emergency vehicles, and bottled water).

# Medical services and infection control
- Use alcohol-based hand rubs in general patient-care areas.
- Postpone elective surgeries until full services are restored, or transfer these patients to other facilities.
- Consider using portable dialysis machines.
(Portable dialysis machines require less water compared with the larger units situated in dialysis settings.)
- Provide an adequate supply of tetanus and hepatitis A immunizations for patients and staff.

# Climate control
- Provide adequate water for cooling towers. (Water for cooling towers may need to be trucked in, especially if the tower uses a potable water source.)
- Material in this box was compiled from references 713, 716-719.

# Strategies for Controlling Waterborne Microbial Contamination

# a. Supplemental Treatment of Water with Heat and/or Chemicals

In addition to using supplemental treatment methods as remediation measures after inadvertent contamination of water systems, health-care facilities sometimes use special measures to control waterborne microorganisms on a sustained basis. This decision is most often associated with outbreaks of legionellosis and subsequent efforts to control legionellae, 722 although some facilities have tried supplemental measures to better control thermophilic NTM. 627

The primary disinfectant for both cold and hot water systems is chlorine. However, chlorine residuals are expected to be low, and possibly nonexistent, in hot water tanks because of the extended retention time in the tank and the elevated water temperature. Flushing, especially flushing that removes sludge from the bottom of the tank, probably provides the most effective treatment of water systems. Unlike the situation for disinfecting cooling towers, no equivalent recommendations have been made for potable water systems, although specific intervention strategies have been published. 403,723 The principal approaches to disinfection of potable systems are heat flushing using temperatures of 160°F-170°F (71°C-77°C), hyperchlorination, and physical cleaning of hot-water tanks. 3,403,661 Potable systems are easily recolonized and may require continuous intervention (e.g., raising of hot water temperatures or continuous chlorination). 403,711 Chlorine solutions lose potency over time, thereby rendering the stocking of large quantities of chlorine impractical.

Some hospitals with hot water systems identified as the source of Legionella spp. have performed emergency decontamination of their systems by pulse (i.e., one-time) thermal disinfection/superheating or hyperchlorination. 711,714,724,725 After either of these procedures, hospitals either maintain their heated water with a minimum return temperature of 124°F (51°C) and cold water at <68°F (<20°C), or chlorinate their hot water to achieve 1-2 mg/L (1-2 ppm) of free residual chlorine at the tap. 26, 437, 709-711, 726, 727 Additional measures (e.g., physical cleaning or replacement of hot-water storage tanks, water heaters, faucets, and shower heads) may be required to help eliminate accumulations of scale and sediment that protect organisms from the biocidal effects of heat and chlorine. 457,711 Alternative methods for controlling and eradicating legionellae in water systems (e.g., treating water with chlorine dioxide, heavy metal ions [e.g., copper and silver], ozone, and UV light) have limited the growth of legionellae under laboratory and operating conditions. Further studies on the long-term efficacy of these treatments are needed before these methods can be considered standard applications.

Renewed interest in the use of chloramines stems from concerns about adverse health effects associated with disinfectants and disinfection by-products. 743 Monochloramine usage minimizes the formation of disinfection by-products, including trihalomethanes and haloacetic acids.
Monochloramine can also reach distal points in a water system and can penetrate into bacterial biofilms more effectively than free chlorine. 744 However, monochloramine use is limited to municipal water treatment plants and is currently not available to health-care facilities as a supplemental water-treatment approach. A recent study indicated that 90% of Legionnaires disease outbreaks associated with drinking water could have been prevented if monochloramine rather than free chlorine had been used for residual disinfection. 745 In a retrospective comparison of health-care associated Legionnaires disease incidence in central Texas hospitals, the same research group documented an absence of cases in facilities located in communities with monochloramine-treated municipal water. 746 Additional data are needed regarding the effectiveness of monochloramine before its routine use as a disinfectant in water systems can be recommended. No data have been published regarding the effectiveness of monochloramine installed at the level of the health-care facility.

Additional filtration of potable water systems is not routinely necessary. Filters are used in water lines in dialysis units, however, and may be inserted into the lines for specific equipment (e.g., endoscope washers and disinfectors) for the purpose of providing bacteria-free water for instrument reprocessing. Additionally, an RO unit is usually added to the distribution system leading to PE areas.

# b. Primary Prevention of Legionnaires Disease (No Cases Identified)

The primary and secondary environmental infection-control strategies described in this section of the guideline pertain to health-care facilities without transplant units. Infection-control measures specific to PE or transplant units (i.e., patient-care areas housing patients at the highest risk for morbidity and mortality from Legionella spp. infection) are described in the subsection titled Preventing Legionnaires Disease in Protective Environments.

Health-care facilities use at least two general strategies to prevent health-care associated legionellosis when no cases or only sporadic cases have been detected. The first is an environmental surveillance approach involving periodic culturing of water samples from the hospital's potable water system to monitor for Legionella spp. If any sample is culture-positive, diagnostic testing is recommended for all patients with health-care associated pneumonia. 748,749 In-house testing is recommended for facilities with transplant programs as part of a comprehensive treatment/management program. If ≥30% of the samples are culture-positive for Legionella spp., decontamination of the facility's potable water system is warranted. 748 The premise for this approach is that no cases of health-care associated legionellosis can occur if Legionella spp. are not present in the potable water system and, conversely, that cases of health-care associated legionellosis could potentially occur if Legionella spp. are cultured from the water. 26,751 Physicians who are informed that the hospital's potable water system is culture-positive for Legionella spp. are more likely to order diagnostic tests for legionellosis. A potential advantage of the environmental surveillance approach is that periodic culturing of water is less costly than routine laboratory diagnostic testing for all patients who have health-care associated pneumonia.
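A facility following the environmental surveillance approach needs a consistent way to apply the decision rule just described (any positive site prompts diagnostic testing; ≥30% positive sites warrants decontamination). Below is a minimal sketch of that rule; the function name and the record layout are hypothetical, and the 30% threshold is the one cited above (reference 748).

```python
# Minimal sketch of the culture-based surveillance decision rule cited above.
# `results` holds (site, positive) pairs from one sampling round of the
# potable water system; names and structure are illustrative.

def assess_legionella_round(results):
    positives = sum(1 for _site, positive in results if positive)
    fraction = positives / len(results)
    if positives == 0:
        return "No positive sites; continue periodic sampling."
    if fraction >= 0.30:
        return (f"{fraction:.0%} of sites positive (>=30%): decontamination "
                "of the potable water system is warranted.")
    return (f"{fraction:.0%} of sites positive: any positive supports diagnostic "
            "testing for patients with health-care associated pneumonia.")

round_1 = [("ICU tap", True), ("ward shower", False), ("kitchen tap", True),
           ("PE unit tap", False), ("hot water tank", True)]
print(assess_legionella_round(round_1))  # 60% positive -> decontamination
```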
The primary argument against this approach is that, in the absence of cases, the relationship between water-culture results and legionellosis risk remains undefined. 3 Legionella spp. can be present in the water systems of buildings 752 without being associated with known cases of disease. 437,707,753 In a study of 84 hospitals in Québec, 68% of the water systems were found to be colonized with Legionella spp., and 26% were colonized at >30% of sites sampled; cases of Legionnaires disease, however, were infrequently reported from these hospitals. 707

Other factors also argue against environmental surveillance. Interpretation of results from periodic water culturing might be confounded by differing results among the sites sampled in a single water system and by fluctuations in the concentration of Legionella spp. at the same site. 709,754 In addition, the risk for illness after exposure to a given source might be influenced by several factors other than the presence or concentration of organisms, including a. the degree to which contaminated water is aerosolized into respirable droplets, b. the proximity of the infectious aerosol to the potential host, c. the susceptibility of the host, and d. the virulence properties of the contaminating strain. Thus, data are insufficient to assign a level of disease risk even on the basis of the number of colony-forming units detected in samples from areas for immunocompetent patients. Conducting environmental surveillance would obligate hospital administrators to initiate water-decontamination programs if Legionella spp. are identified. Therefore, periodic monitoring of water from the hospital's potable water system and from aerosol-producing devices is not widely recommended in facilities that have not experienced cases of health-care associated legionellosis. 661,758

The second strategy to prevent and control health-care associated legionellosis is a clinical approach, in which providers maintain a high index of suspicion for legionellosis and order appropriate diagnostic tests (i.e., culture, urine antigen, and direct fluorescent antibody serology) for patients with health-care associated pneumonia who are at high risk for legionellosis and its complications. 437,759,760 The testing of autopsy specimens can be included in this strategy should a death resulting from health-care associated pneumonia occur. Identification of one case of definite or two cases of possible health-care associated Legionnaires disease should prompt an epidemiologic investigation for a hospital source of Legionella spp., which may involve culturing the facility's water for Legionella. Routine maintenance of cooling towers and use of sterile water for the filling and terminal rinsing of nebulization devices and ventilation equipment can help to minimize potential sources of contamination. Circulating potable water temperatures should match those outlined in the subsection titled Water Temperature and Pressure, as permitted by state code.

# c. Secondary Prevention of Legionnaires Disease (With Identified Cases)

The indications for a full-scale environmental investigation to search for and subsequently decontaminate identified sources of Legionella spp. in health-care facilities without transplant units have not been clarified; these indications would likely differ depending on the facility.
Case categories for health-care associated Legionnaires disease in facilities without transplant units include definite cases (i.e., laboratory-confirmed cases of legionellosis that occur in patients who have been hospitalized continuously for ≥10 days before the onset of illness) and possible cases (i.e., laboratory-confirmed infections that occur 2-9 days after hospital admission). 3 In settings in which as few as one to three health-care associated cases are recognized over several months, intensified surveillance for Legionnaires disease has frequently identified numerous additional cases. 405,408,432,453,739,759,760 This finding suggests the need for a low threshold for initiating an investigation after laboratory confirmation of cases of healthcare associated legionellosis. When developing a strategy for responding to such a finding, however, infection-control personnel should consider the level of risk for health-care-associated acquisition of, and mortality from, Legionella spp. infection at their particular facility. An epidemiologic investigation conducted to determine the source of Legionella spp. involves several important steps (Box 11). Laboratory assessment is crucial in supporting epidemiologic evidence of a link between human illness and a specific environmental source. 761 Strain determination from subtype analysis is most frequently used in these investigations. 410, Once the environmental source is established and confirmed with laboratory support, supplemental water treatment strategies can be initiated as appropriate. # Box 11. Steps in an epidemiologic investigation for legionellosis - Review medical and microbiologic records. # d. Preventing Legionnaires Disease in Protective Environments This subsection outlines infection-control measures applicable to those health-care facilities providing care to severely immunocompromised patients. Indigenous microorganisms in the tap water of these facilities may pose problems for such patients. These measures are designed to prevent the generation of potentially infectious aerosols from water and the subsequent exposure of PE patients or other immunocompromised patients (e.g., transplant patients) (Table 17). Infection-control measures that address the use of water with medical equipment (e.g., ventilators, nebulizers, and equipment humidifiers) are described in other guidelines and publications. 3,455 If one case of laboratory-confirmed, health-care associated Legionnaires disease is identified in a patient in a solid-organ transplant program or in PE (i.e., an inpatient in PE for all or part of the 2-10 days prior to onset of illness) or if two or more laboratory-confirmed cases occur among patients who had visited an outpatient PE setting, the hospital should report the cases to the local and state health departments. The hospital should then initiate a thorough epidemiologic and environmental investigation to determine the likely environmental sources of Legionella spp. 9 The source of Legionella should be decontaminated or removed. Isolated cases may be difficult to investigate. Because transplant recipients are at substantially higher risk for disease and death from legionellosis compared with other hospitalized patients, periodic culturing for Legionella spp. in water samples from the potable water in the solid-organ transplant and/or PE unit can be performed as part of an overall strategy to prevent Legionnaires disease in PE units. 
9,431,710,769 The optimal methodology (i.e., frequency and number of sites) for environmental surveillance cultures in PE units has not been determined, and the cost-effectiveness of this strategy has not been evaluated. Because transplant recipients are at high risk for Legionnaires disease and because no data are available to determine a safe concentration of legionellae organisms in potable water, the goal of environmental surveillance for Legionella spp. should be to maintain water systems with no detectable organisms. 9,431 Culturing for legionellae may be used to assess the effectiveness of water treatment or decontamination methods, a practice that provides benefits to both patients and health-care workers. 767,770

Format Change: The format of this section was changed to improve readability and accessibility. The content is unchanged.

- Use water that is not contaminated with Legionella spp. for patients' sponge baths. 9
- Provide sterile water for drinking, tooth brushing, or for flushing nasogastric tubes. 9,412
- Perform supplemental treatment of the water for the unit. 732
- Consider periodic monitoring (i.e., culturing) of the unit water supply for Legionella spp.

Protecting patient-care devices and instruments from inadvertent tap water contamination during room cleaning procedures is also important in any immunocompromised patient-care area. In a recent outbreak of gram-negative bacteremias among open-heart-surgery patients, pressure-monitoring equipment that was assembled and left uncovered overnight prior to the next day's surgeries was inadvertently contaminated with mists and splashing water from a hose-disinfectant system used for cleaning. 771

# Cooling Towers and Evaporative Condensers

Modern health-care facilities maintain indoor climate control during warm weather by use of cooling towers (large facilities) or evaporative condensers (smaller buildings). A cooling tower is a wet-type, evaporative heat transfer device used to discharge to the atmosphere waste heat from a building's air conditioning condensers (Figure 5). 772,773 Warm water from air-conditioning condensers is piped to the cooling tower, where it is sprayed downward into a counter- or cross-current air flow. To accelerate heat transfer to the air, the water passes over the fill, which either breaks water into droplets or causes it to spread into a thin film. 772,773 Most systems use fans to move air through the tower, although some large industrial cooling towers rely on natural draft circulation of air. The cooled water from the tower is piped back to the condenser, where it again picks up heat generated during the process of chilling the system's refrigerant. The water is then cycled back to the cooling tower to be cooled. Closed-circuit cooling towers and evaporative condensers are also evaporative heat-transfer devices. In these systems, the process fluid (e.g., water, ethylene glycol/water mixture, oil, or a condensing refrigerant) does not directly contact the cooling air, but is contained inside a coil assembly. 661

Figure 5. Diagram of a typical cooling tower. Warm water from the condenser (or chiller) is sprayed downward into a counter- or cross-current air flow. Water passes over the fill (a component of the system designed to increase the surface area of the water exposed to air), and heat from the water is transferred to the air. Some of the water becomes aerosolized during this process, although the volume of aerosol discharged to the air can be reduced by the placement of a drift eliminator. Water cooled in the tower returns to the heat source to cool refrigerant from the air conditioning unit. Water temperatures are approximate and may differ substantially according to system use and design.

Cooling towers and evaporative condensers incorporate inertial stripping devices called drift eliminators to remove water droplets generated within the unit. Although the effectiveness of these eliminators varies substantially depending on design and condition, some water droplets in the size range of <5 μm will likely leave the unit, and some larger droplets leaving the unit may be reduced to ≤5 μm by evaporation. Thus, even with proper operation, a cooling tower or evaporative condenser can generate and expel respirable water aerosols. If either the water in the unit's basin or the make-up water (added to replace water lost to evaporation) contains Legionella spp. or other waterborne microorganisms, these organisms can be aerosolized and dispersed from the unit. 774 Clusters of both Legionnaires disease and Pontiac fever have been traced to exposure to infectious water aerosols originating from cooling towers and evaporative condensers contaminated with Legionella spp. Although most of these outbreaks have been community-acquired episodes of pneumonia, health-care associated Legionnaires disease has been linked to cooling tower aerosol exposure. 404,405 Contaminated aerosols from cooling towers on hospital premises gained entry to the buildings either through open windows or via air handling system intakes located near the tower equipment.

Cooling towers and evaporative condensers provide ideal ecological niches for Legionella spp. The typical temperature of the water in cooling towers ranges from 85°F-95°F (29°C-35°C), although temperatures can be above 120°F (49°C) and below 70°F (21°C) depending on system heat load, ambient temperature, and operating strategy. 661 An Australian study of cooling towers found that legionellae colonized or multiplied in towers with basin temperatures above 60.8°F (16°C), and multiplication became explosive at temperatures above 73.4°F (23°C). 783 Water temperature in closed-circuit cooling towers and evaporative condensers is similar to that in cooling towers. Considerable variation in the piping arrangement occurs. In addition, stagnant areas or dead legs may be difficult to clean or penetrate with biocides.

Several documents address the routine maintenance of cooling towers, evaporative condensers, and whirlpool spas. 661, 784-787 They suggest following manufacturers' recommendations for cleaning and biocide treatment of these devices; all health-care facilities should ensure proper maintenance of their cooling towers and evaporative condensers, even in the absence of Legionella spp. (Appendix C). Because cooling towers and evaporative condensers can be shut down during periods when air conditioning is not needed, this maintenance cleaning and treatment should be performed before the system is started up for the first time in the warm season. 782 Emergency decontamination protocols describing cleaning procedures and hyperchlorination for cooling towers have been developed for towers implicated in the transmission of legionellosis. 786, 787
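The Australian temperature findings above can be restated as simple monitoring bands. The sketch below is illustrative, not a standard: the cut points are the basin temperatures quoted above, and the function name is hypothetical.

```python
# Rough basin-temperature bands for Legionella growth risk in cooling towers,
# using the temperatures quoted above (colonization above 16 deg C; rapid
# multiplication above 23 deg C); illustrative only, not a regulatory limit.

def legionella_temperature_band(basin_temp_c):
    if basin_temp_c <= 16.0:
        return "low: colonization and multiplication not favored"
    if basin_temp_c <= 23.0:
        return "moderate: colonization and multiplication possible"
    return "high: rapid ('explosive') multiplication reported above 23 C"

for temp_c in (14.0, 20.0, 31.0):  # 31 C falls in the typical 29-35 C range
    print(f"{temp_c} C -> {legionella_temperature_band(temp_c)}")
```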
# Dialysis Water Quality and Dialysate

# a. Rationale for Water Treatment in Hemodialysis

Hemodialysis, hemofiltration, and hemodiafiltration require special water-treatment processes to prevent adverse patient outcomes of dialysis therapy resulting from improper formulation of dialysate with water containing high levels of certain chemical or biological contaminants. The Association for the Advancement of Medical Instrumentation (AAMI) has established chemical and microbiologic standards for the water used to prepare dialysate or substitution fluid, or to reprocess hemodialyzers for renal replacement therapy. The AAMI standards address a. the equipment and processes used to purify water for the preparation of concentrates and dialysate and the reprocessing of dialyzers for multiple use and b. the devices used to store and distribute this water. Future revisions to these standards may include hemofiltration and hemodiafiltration.

Water treatment systems used in hemodialysis employ several physical and/or chemical processes, either singly or in combination (Figure 6). These systems may be portable units or large systems that feed several rooms. In the United States, >97% of maintenance hemodialysis facilities use RO alone or in combination with deionization. 793 Many acute-care facilities use portable hemodialysis machines with attached portable water treatment systems that use either deionization or RO. These machines were exempted from earlier versions of AAMI recommendations, but given current knowledge about toxic exposures to, and inflammatory processes in, patients new to dialysis, these machines should now come into compliance with current AAMI recommendations for hemodialysis water and dialysate quality. 788,789 Previous recommendations were based on the assumption that acute-care patients did not experience the same degree of adverse effects from short-term, cumulative exposures to either chemicals or microbiologic agents present in hemodialysis fluids compared with the effects encountered by patients during chronic, maintenance dialysis. 788,789 Additionally, JCAHO is reviewing inpatient practices and record-keeping for dialysis (acute and maintenance) for adherence to AAMI standards and recommended practices.

Neither the water used to prepare dialysate nor the dialysate itself needs to be sterile, but tap water cannot be used without additional treatment. Infections caused by rapid-growing NTM (e.g., Mycobacterium chelonae and M. abscessus) present a potential risk to hemodialysis patients (especially those in hemodialyzer reuse programs) if disinfection procedures to inactivate mycobacteria in the water (low-level disinfection) and the hemodialyzers (high-level disinfection) are inadequate. 31,32,633 Other factors associated with microbial contamination in dialysis systems could involve the water treatment system, the water and dialysate distribution systems, and the type of hemodialyzer. 666,667, Understanding the various factors and their influence on contamination levels is the key to preventing high levels of microbial contamination in dialysis therapy.

In several studies, pyrogenic reactions were demonstrated to have been caused by lipopolysaccharide or endotoxin associated with gram-negative bacteria. 794, Early studies demonstrated that parenteral exposure to endotoxin at a concentration of 1 ng/kg body weight/hour was the threshold dose for producing pyrogenic reactions in humans, and that the relative potencies of endotoxin differ by bacterial species. 804,805 Gram-negative water bacteria (e.g., Pseudomonas spp.)
have been shown to multiply rapidly in a variety of hospital-associated fluids that can be used as supply water for hemodialysis (e.g., distilled water, deionized water, RO water, and softened water) and in dialysate (a balanced salt solution made with this water). 806

Several studies have demonstrated that the attack rates of pyrogenic reactions are directly associated with the number of bacteria in dialysate. 666,667,807 These studies provided the rationale for setting the heterotrophic bacteria standards in the first AAMI hemodialysis guideline at ≤2,000 CFU/mL in dialysate and one log lower (≤200 CFU/mL) for the water used to prepare dialysate. 668,788 If the level of bacterial contamination exceeded 200 CFU/mL in water, this level could be amplified in the system and effectively constitute a high inoculum for dialysate at the start of a dialysis treatment. 807,808 Pyrogenic reactions did not appear to occur when the level of contamination was below 2,000 CFU/mL in dialysate, unless the source of the endotoxin was exogenous to the dialysis system (i.e., present in the community water supply). Endotoxins in a community water supply have been linked to the development of pyrogenic reactions among dialysis patients. 794

Whether endotoxin actually crosses the dialyzer membrane is controversial. Several investigators have shown that bacteria growing in dialysate generated products that could cross the dialyzer membrane. 809,810 Gram-negative bacteria growing in dialysate have produced endotoxins that in turn stimulated the production of anti-endotoxin antibodies in hemodialysis patients; 801,811 these data suggest that bacterial endotoxins, although large molecules, cross dialyzer membranes either intact or as fragments. The use of the very permeable membranes known as high-flux membranes (which allow large molecules to traverse the membrane) increases the potential for passage of endotoxins into the blood path. Several studies support this contention. In one such study, an increase in plasma endotoxin concentrations during dialysis was observed when patients were dialyzed against dialysate containing 10^3-10^4 CFU/mL Pseudomonas spp. 812 In vitro studies using both radiolabeled lipopolysaccharide and biologic assays have demonstrated that biologically active substances derived from bacteria found in dialysate can cross a variety of dialyzer membranes. 802, Patients treated with high-flux membranes have had higher levels of anti-endotoxin antibodies than subjects or patients treated with conventional membranes. 817 Finally, since 1989, 19%-22% of dialysis centers have reported pyrogenic reactions in the absence of septicemia. 818,819

Investigations of adverse outcomes among patients using reprocessed dialyzers have demonstrated a greater risk for developing pyrogenic reactions when the water used to reprocess these devices contained >6 ng/mL endotoxin and >10^4 CFU/mL bacteria. 820 In addition to the variability in endotoxin assays, host factors also are involved in determining whether a patient will mount a response to endotoxin. 803 Outbreak investigations of pyrogenic reactions and bacteremias associated with hemodialyzer reuse have demonstrated that pyrogenic reactions are prevented once the endotoxin level in the water used to reprocess the dialyzers is returned to below the AAMI standard level. 821
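The "one log lower" relationship and the pyrogenic threshold dose quoted above reduce to simple arithmetic. A minimal sketch follows; the 70-kg patient weight is an assumed example value, and the calculation deliberately ignores how much endotoxin actually crosses a given membrane, which, as noted above, is variable and controversial.

```python
import math

# "One log lower": the first AAMI water limit was the dialysate limit / 10^1.
dialysate_limit_cfu_per_ml = 2000
water_limit_cfu_per_ml = dialysate_limit_cfu_per_ml / 10 ** 1  # 200 CFU/mL
logs_lower = math.log10(dialysate_limit_cfu_per_ml / water_limit_cfu_per_ml)
print(f"water limit: {water_limit_cfu_per_ml:.0f} CFU/mL ({logs_lower:.0f} log lower)")

# Pyrogenic threshold dose quoted above: 1 ng endotoxin/kg body weight/hour.
threshold_ng_per_kg_hr = 1.0
patient_weight_kg = 70.0  # assumed example value, not from the guideline
print(f"threshold dose for a 70-kg patient: "
      f"{threshold_ng_per_kg_hr * patient_weight_kg:.0f} ng/hour")
```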
Reuse of dialyzers and use of bicarbonate dialysate, high-flux dialyzer membranes, or high-flux dialysis may increase the potential for pyrogenic reactions if the water in the dialysis setting does not meet standards. Although some investigators have been unable to demonstrate endotoxin transfer across dialyzer membranes, 803,822,823 the preponderance of reports now supports the ability of endotoxin to transfer across at least some high-flux membranes under some operating conditions.

In addition to the acute risk of pyrogenic reactions, indirect evidence is increasingly demonstrating that chronic exposure to low amounts of endotoxin may play a role in some of the long-term complications of hemodialysis therapy. Patients treated with ultrafiltered dialysate for 5-6 months have demonstrated a decrease in serum β2 microglobulin concentrations and a decrease in markers of an inflammatory response. In studies of longer duration, use of microbiologically ultrapure dialysate has been associated with a decreased incidence of β2 microglobulin-associated amyloidosis. 827,828 Although patient benefit likely is associated with the use of ultrapure dialysate, no consensus has been reached regarding the potential adoption of this as a standard in the United States. Debate continues regarding the bacterial and endotoxin limits for dialysate. As advances in water treatment and hemodialysis processes occur, efforts are underway to move improved technology from the manufacturer out into the user community. Cost-benefit studies, however, have not been done, and substantially increased costs to implement newer water treatment modalities are anticipated.

To reconcile AAMI documents with the current International Organization for Standardization (ISO) format, AAMI has determined that its hemodialysis standards will be discussed in the following four installments: RD 5 for hemodialysis equipment, RD 62 for product water quality, RD 47 for dialyzer reprocessing, and RD 52 for dialysate quality. The Renal Diseases and Dialysis Committee of AAMI is expected to finalize and promulgate the dialysate standard pertinent to the user community (RD 52), adopting by reference the bacterial and endotoxin limits in product water as currently outlined in the AAMI standard that applies to systems manufacturers (RD 62). At present, the user community should continue to observe the water quality and dialysate standards outlined in AAMI RD 5 (Hemodialysis Systems, 1992) and AAMI RD 47 (Reuse of Hemodialyzers, 1993) until the new RD 52 standard becomes available (Table 18; AAMI RD 5-1992 and RD 47-1993). 789, 791

The current AAMI standard directed at systems manufacturers (RD 62) now specifies that all product water used to prepare dialysate or to reprocess dialyzers for multiple use should contain <2 endotoxin units per milliliter (EU/mL). 792 A level of 2 EU/mL was chosen as the upper limit for endotoxin because this level is easily achieved with contemporary water treatment systems using RO and/or ultrafiltration. CDC has advocated monthly endotoxin testing along with microbiologic assays of water, because endotoxin activity may not correspond to the total heterotrophic plate counts. 829 Additionally, the current AAMI standard RD 62 for manufacturers includes action levels for product water.
Because 48 hours can elapse between the time of sampling water for microbial contamination and the time when results are received, and because bacterial proliferation can be rapid, action levels for microbial counts and endotoxin concentrations are reported as 50 CFU/mL and 1 EU/mL, respectively, in this revision of the standard. 792 These action levels allow users to initiate corrective action before levels exceed the maximum levels established by the standard.

In hemodialysis, the net movement of water is from the blood to the dialysate, although within the dialyzer, local movement of water from the dialysate to the blood through the phenomenon of back-filtration may occur, particularly in dialyzers with highly permeable membranes. 830 In contrast, hemofiltration and hemodiafiltration feature infusion of large volumes of electrolyte solution (20-70 L) into the blood. Increasingly, this electrolyte solution is being prepared on-line from water and concentrate. Because of the large volumes of fluid infused, AAMI considered the necessity of setting more stringent requirements for water to be used in this application, but the organization has not yet established these because of a lack of expert consensus and insufficient experience with on-line therapies in the United States. On-line hemofiltration and hemodiafiltration systems use sequential ultrafiltration as the final step in the preparation of infusion fluid. Several experts from AAMI concur that these point-of-use ultrafiltration systems should be capable of further reducing the bacteria and endotoxin burden of solutions prepared from water meeting the requirements of the AAMI standard to a level safe for infusion.

# b. Microbial Control Strategies

The strategy for controlling massive accumulations of gram-negative water bacteria and NTM in dialysis systems primarily involves preventing their growth through proper disinfection of water-treatment systems and hemodialysis machines. Gram-negative water bacteria, their associated lipopolysaccharides (bacterial endotoxins), and NTM ultimately come from the community water supply, and levels of these bacteria can be amplified depending on the water treatment system, dialysate distribution system, type of dialysis machine, and method of disinfection (Table 19). 634,794,831 Control strategies are designed to reduce levels of microbial contamination in water and dialysis fluid to relatively low levels, but not to eradicate them completely.

Format Change: The format of this section was changed to improve readability and accessibility. The content is unchanged.

# Table 19. Factors influencing microbial contamination in dialysis systems

Distribution pipes
- Size: Oversized diameter and length decrease fluid flow and increase the bacterial reservoir for both treated water and centrally-prepared dialysate.
- Construction: Rough joints, dead ends, unused branches, and polyvinyl chloride (PVC) piping can act as bacterial reservoirs.
- Elevation: Outlet taps should be located at the highest elevation to prevent loss of disinfectant; keep a recirculation loop in the system; flush unused ports routinely.

Storage tanks
- Tanks are undesirable because they act as a reservoir for water bacteria; if tanks are present, they must be routinely scrubbed and disinfected.

Dialysis machines
- Single-pass: Disinfectant should have contact with all parts of the machine that are exposed to water or dialysis fluid.
# Recirculating single-pass or recirculating (batch) Recirculating pumps and machine design allow for massive contamination levels if not properly disinfected; overnight chemical germicide treatment is recommended. Two components of hemodialysis water distribution systems -pipes (particularly those made of polyvinyl chloride ) and storage tanks -can serve as reservoirs of microbial contamination. Hemodialysis systems frequently use pipes that are wider and longer than are needed to handle the required flow, which slows the fluid velocity and increases both the total fluid volume and the wetted surface area of the system. Gram-negative bacteria in fluids remaining in pipes overnight multiply rapidly and colonize the wet surfaces, producing bacterial populations and endotoxin quantities in proportion to the volume and surface area. Such colonization results in formation of protective biofilm that is difficult to remove and protects the bacteria from disinfection. 832 Routine (i.e., monthly), low-level disinfection of the pipes can help to control bacterial contamination of the distribution system. Additional measures to protect pipes from contaminations include a. situating all outlet taps at equal elevation and at the highest point of the system so that the disinfectant cannot drain from pipes by gravity before adequate contact time has elapsed and b. eliminating rough joints, dead-end pipes, and unused branches and taps that can trap fluid and serve as reservoirs of bacteria capable of continuously inoculating the entire volume of the system. 800 Maintain a flow velocity of 3-5 ft/sec. A storage tank in the distribution system greatly increases the volume of fluid and surface area available and can serve as a niche for water bacteria. Storage tanks are therefore not recommended for use in dialysis systems unless they are frequently drained and adequately disinfected, including scrubbing the sides of the tank to remove bacterial biofilm. An ultrafilter should be used distal to the storage tank. 808,833 Microbiologic sampling of dialysis fluids is recommended because gram-negative bacteria can proliferate rapidly in water and dialysate in hemodialysis systems; high levels of these organisms place patients at risk for pyrogenic reactions or health-care associated infection. 667,668,808 Health-care facilities are advised to sample dialysis fluids at least monthly using standard microbiologic assay methods for waterborne microorganisms. 788,793,799, Product water used to prepare dialysate and to reprocess hemodialyzers for reuse on the same patient should also be tested for bacterial endotoxin on a monthly basis. 792,829,837 (See Appendix C for information about water sampling methods for dialysis.) Cross-contamination of dialysis machines and inadequate disinfection measures can facilitate the spread of waterborne organisms to patients. Steps should be taken to ensure that dialysis equipment is performing correctly and that all connectors, lines, and other components are specific for the equipment, in good repair, and properly in place. A recent outbreak of gram-negative bacteremias among dialysis patients was attributed to faulty valves in a drain port of the machine that allowed backflow of saline used to flush the dialyzer before patient use. 838,839 This backflow contaminated the drain priming connectors, which contaminated the blood lines and exposed the patients to high concentrations of gram-negative bacteria. 
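The monthly monitoring workflow described above reduces to a simple threshold comparison. The sketch below is illustrative only and is not part of the guideline; the 2 EU/mL maximum and the 50 CFU/mL and 1 EU/mL action levels come from the text, while the 200 CFU/mL maximum microbial count is an assumed placeholder value that must be verified against the applicable AAMI standard before any real use.

```python
# A minimal sketch of flagging monthly product-water results against the
# AAMI values cited above. Illustrative only; verify all limits against the
# current applicable standard.

from dataclasses import dataclass

MAX_ENDOTOXIN_EU_ML = 2.0       # RD 62 maximum for product water (cited above)
ACTION_ENDOTOXIN_EU_ML = 1.0    # action level (cited above)
ACTION_MICROBIAL_CFU_ML = 50.0  # action level (cited above)
MAX_MICROBIAL_CFU_ML = 200.0    # ASSUMED illustrative maximum; confirm in the standard

@dataclass
class WaterSample:
    site: str
    cfu_per_ml: float
    eu_per_ml: float

def classify(sample: WaterSample) -> str:
    """Return 'fail', 'action', or 'ok' for a monthly product-water sample."""
    if sample.cfu_per_ml >= MAX_MICROBIAL_CFU_ML or sample.eu_per_ml >= MAX_ENDOTOXIN_EU_ML:
        return "fail"    # exceeds maximum levels; investigate and disinfect
    if sample.cfu_per_ml >= ACTION_MICROBIAL_CFU_ML or sample.eu_per_ml >= ACTION_ENDOTOXIN_EU_ML:
        return "action"  # initiate corrective action before maximums are exceeded
    return "ok"

if __name__ == "__main__":
    print(classify(WaterSample("treated-water loop", cfu_per_ml=60, eu_per_ml=0.5)))  # -> action
```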
Environmental infection control in dialysis settings also includes low-level disinfection of housekeeping surfaces and spot decontamination of spills of blood (see Environmental Services in Part I of this guideline for further information).

# c. Infection-Control Issues in Peritoneal Dialysis
Peritoneal dialysis (PD), most commonly administered as continuous ambulatory peritoneal dialysis (CAPD) and continuous cycling peritoneal dialysis (CCPD), is the third most common treatment for end-stage renal disease (ESRD) in the United States, accounting for 12% of all dialysis patients. 840 Peritonitis is the primary complication of CAPD, with coagulase-negative staphylococci the most clinically significant causative organisms. 841 Other organisms that have been found to produce peritonitis include Staphylococcus aureus, Mycobacterium fortuitum, M. mucogenicum, Stenotrophomonas maltophilia, Burkholderia cepacia, Corynebacterium jeikeium, Candida spp., and other fungi. Substantial morbidity is associated with peritoneal dialysis infections. Removal of peritoneal dialysis catheters usually is required for treatment of peritonitis caused by fungi, NTM, or other bacteria that are not cleared within the first several days of effective antimicrobial treatment. Furthermore, recurrent episodes of peritonitis may lead to fibrosis and loss of the dialysis membrane.

Many reported episodes of peritonitis are associated with exit-site or tunneled catheter infections. Risk factors for the development of peritonitis in PD patients include 1. underdialysis, 2. immune suppression, 3. prolonged antimicrobial treatment, 4. patient age, 5. length of hospital stay, and 6. hypoalbuminemia. 844,851,852 Concern has been raised about infection risk associated with the use of automated cyclers in both inpatient and outpatient settings; however, studies suggest that PD patients who use automated cyclers have much lower infection rates. 853 One study noted that a closed-drainage system reduced the incidence of system-related peritonitis among intermittent peritoneal dialysis (IPD) patients from 3.6 to 1.5 cases/100 patient days. 854 The association of peritonitis with management of spent dialysate fluids requires additional study. Therefore, ensuring that the tip of the waste line is not submerged beneath the water level in a toilet or in a drain is prudent.

# Ice Machines and Ice
Microorganisms may be present in ice, ice-storage chests, and ice-making machines. The two main sources of microorganisms in ice are the potable water from which it is made and the transfer of organisms from hands (Table 20). Ice from contaminated ice machines has been associated with patient colonization, bloodstream infections, pulmonary and gastrointestinal illnesses, and pseudoinfections. 602,603,683,684,854,855 Microorganisms in ice can secondarily contaminate clinical specimens and medical solutions that require cold temperatures for either transport or holding. 601,620 An outbreak of surgical-site infections was interrupted when sterile ice was used in place of tap water ice to cool cardioplegia solutions. 601

# Format Change: The format of this section was changed to improve readability and accessibility. The content is unchanged.

# Table 20. Microorganisms and their sources in ice and ice machines
From potable water
- Legionella spp. 684,685,857,858
- Nontuberculous mycobacteria (NTM) 602,603,859
- Pseudomonas aeruginosa 859
- Burkholderia cepacia 859,860
- Stenotrophomonas maltophilia 860
- Flavobacterium spp. 860

From fecally contaminated water
- Norwalk virus
- Giardia lamblia 864
- Cryptosporidium parvum 685

From hand-transfer of organisms
- Acinetobacter spp. 859
- Coagulase-negative staphylococci 859
- Salmonella enteritidis 865
- Cryptosporidium parvum 685

In a study comparing the microbial populations of hospital ice machines with organisms recovered from ice samples gathered from the community, samples from 27 hospital ice machines yielded low numbers (<10 CFU/mL) of several potentially opportunistic microorganisms, mainly gram-negative bacilli. 859 During the survey period, no health-care associated infections were attributed to the use of ice. Ice from community sources had higher levels of microbial contamination (75%-95% of 194 samples had total heterotrophic plate counts <500 CFU/mL, with the proportion of positive cultures dependent on the incubation temperature) and showed evidence of fecal contamination from the source water. 859 Thus, ice machines in health-care settings are no more heavily contaminated than ice machines in the community. If the source water for ice in a health-care facility is not fecally contaminated, then ice from clean ice machines and chests should pose no increased hazard for immunocompetent patients. Some waterborne bacteria found in ice could pose a risk to immunocompromised patients if they consume ice or drink beverages with ice. For example, Burkholderia cepacia in ice could present an infection risk for cystic fibrosis patients. 859,860 Therefore, protecting immunosuppressed and otherwise medically at-risk patients from exposure to tap water and ice potentially contaminated with opportunistic pathogens is prudent. 9

No microbiologic standards for ice, ice-making machines, or ice storage equipment have been established, although several investigators have suggested the need for such standards. 859,866 Culturing of ice machines is not routinely recommended, but it may be useful as part of an epidemiologic investigation. Sampling might also help determine the best schedule for cleaning open ice-storage chests. Recommendations for a regular program of maintenance and disinfection have been published. Health-care facilities are advised to clean ice-storage chests on a regular basis. Open ice chests may require a more frequent cleaning schedule compared with chests that have covers. Portable ice chests and containers require cleaning and low-level disinfection before the addition of ice intended for consumption. Ice-making machines may require less frequent cleaning, but their maintenance is important to proper performance. The manufacturer's instructions for both cleaning and maintenance should be followed. These instructions may also recommend an EPA-registered disinfectant to ensure chemical potency, materials compatibility, and safety. In the event that instructions and suitable EPA-registered disinfectants are not available for this process, then a generic approach to cleaning, disinfecting, and maintaining ice machines and dispensers can be used (Box 12).

Ice and ice-making machines also may be contaminated via improper storage or handling of ice by patients and/or staff. 684-686, 855-858, 870 Suggested steps to avoid this means of contamination include a. minimizing or avoiding direct hand contact with ice intended for consumption, b. using a hard-surface scoop to dispense ice, and c. installing machines that dispense ice directly into portable containers at the touch of a control. 687,869
Box 12. General steps for cleaning and maintaining ice machines, dispensers, and storage chests*+
1. Disconnect unit from power supply.
2. Remove and discard ice from bin or storage chest.
3. Allow unit to warm to room temperature.
4. Disassemble removable parts of machine that make contact with water to make ice.
5. Thoroughly clean machine and parts with water and detergent.
6. Dry external surfaces of removable parts before reassembling.
7. Check for any needed repair.
8. Replace feeder lines, as appropriate (e.g., when damaged, old, or difficult to clean).
9. Ensure presence of an air space in tubing leading from water inlet into water distribution system of machine.
10. Inspect for rodent or insect infestations under the unit and treat, as needed.
11. Check door gaskets (open compartment models) for evidence of leakage or dripping into the storage chest.
12. Clean the ice-storage chest or bin with fresh water and detergent; rinse with fresh tap water.
13. Sanitize machine by circulating a 50-100 parts per million (ppm) solution of sodium hypochlorite (i.e., 4-8 mL sodium hypochlorite/gallon of water) through the ice-making and storage systems for 2 hours (100 ppm solution) or 4 hours (50 ppm solution).
14. Drain sodium hypochlorite solutions and flush with fresh tap water.
15. Allow all surfaces of equipment to dry before returning to service.

* Material in this box is adapted from reference 869.
+ These general guidelines should be used only where manufacturer-recommended methods and EPA-registered disinfectants are not available.

# Hydrotherapy Tanks and Pools
# a. General Information
Hydrotherapy equipment (e.g., pools, whirlpools, whirlpool spas, hot tubs, and physiotherapy tanks) traditionally has been used to treat patients with certain medical conditions (e.g., burns, 871,872 septic ulcers, lesions, amputations, 873 orthopedic impairments and injuries, arthritis, 874 and kidney lithotripsy). 654 Wound-care medicine is increasingly moving away from hydrotherapy, however, in favor of bedside pulsed-lavage therapy using sterile solutions for cleaning and irrigation. 492

Several episodes of health-care associated infections have been linked to use of hydrotherapy equipment. Infection control for hydrotherapy tanks, pools, or birthing tanks presents unique challenges because indigenous microorganisms are always present in the water during treatments. In addition, some studies have found free-living amoebae (i.e., Naegleria lovaniensis), which are commonly found in association with Naegleria fowleri, in hospital hydrotherapy pools. 890 Although hydrotherapy is at times appropriate for patients with wounds, burns, or other types of non-intact skin conditions (determined on a case-by-case basis), this equipment should not be considered "semi-critical" in accordance with the Spaulding classification. 891 Microbial data to evaluate the risk of infection to patients using hydrotherapy pools and birthing tanks are insufficient. Nevertheless, health-care facilities should maintain stringent cleaning and disinfection practices in accordance with the manufacturer's instructions and with relevant scientific literature until data supporting more rigorous infection-control measures become available. Factors that should be considered in therapy decisions in this situation would include a. availability of alternative aseptic techniques for wound management and b. a risk-benefit analysis of using traditional hydrotherapy.

# b. Hydrotherapy Tanks
Hydrotherapy tanks (e.g., whirlpools, Hubbard tanks, and whirlpool bathtubs) are shallow tanks constructed of stainless steel, plexiglass, or tile. They are closed-cycle water systems with hydrojets to circulate, aerate, and agitate the water. The water temperature range is 50°F-104°F (10°C-40°C). The warm water temperature, constant agitation and aeration, and design of the hydrotherapy tanks provide ideal conditions for bacterial proliferation if the equipment is not properly maintained, cleaned, and disinfected. The design of the hydrotherapy equipment should be evaluated for potential infection-control problems that can be associated with inaccessible surfaces that can be difficult to clean and/or remain wet between uses (i.e., recessed drain plates with fixed grill plates). 887 Associated equipment (e.g., parallel bars, plinths, Hoyer lifts, and wheelchairs) can also be potential reservoirs of microorganisms, depending on the materials used in these items (i.e., porous vs. non-porous materials) and the surfaces that may become wet during use.

Patients with active skin colonization and wound infections can serve as sources of contamination for the equipment and the water. Contamination from spilled tub water can extend to drains, floors, and walls. Health-care associated colonization or infection can result from exposure to endogenous sources of microorganisms (autoinoculation) or exogenous sources (via cross-contamination from other patients previously receiving treatment in the unit). Although some facilities have used tub liners to minimize environmental contamination of the tanks, the use of a tub liner does not eliminate the need for cleaning and disinfection. Draining these small pools and tanks after each patient use, thoroughly cleaning with a detergent, and disinfecting according to manufacturers' instructions have reduced bacterial contamination levels in the water from 10⁴ CFU/mL to <10 CFU/mL. 892 A chlorine residual of 15 ppm in the water should be obtained prior to the patient's therapy session (e.g., by adding 15 grams of calcium hypochlorite 70% per 100 gallons of water). 892

A study of commercial and residential whirlpools found that superchlorination or draining, cleaning, disinfection, and refilling of whirlpools markedly reduced densities of Pseudomonas aeruginosa in whirlpool water. 893 The bacterial populations were rapidly replenished, however, when disinfectant concentrations dropped below recommended levels for recreational use (i.e., chlorine at 3.0 ppm or bromine at 6.0 ppm). When using chlorine, however, knowing whether the community drinking-water system is disinfected with chloramine is important, because municipal utilities adjust the pH of the water to the basic side to enhance chloramine formation. Because chlorine is not very effective at pH levels above 8, it may be necessary to re-adjust the pH of the water to a more acidic level. 894

A few reports describe the addition of antiseptic chemicals to hydrotherapy tank water, especially for burn patient therapy. One study involving a small number of participants demonstrated a reduction in the number of Pseudomonas spp. and other gram-negative bacteria from both patients and equipment surfaces when chloramine-T ("chlorazene") was added to the water. 898 Chloramine-T has not, however, been approved for water treatment in the United States.
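As a worked example of the dosing arithmetic cited earlier in this subsection (15 grams of calcium hypochlorite 70% per 100 gallons), the sketch below, which is illustrative and not part of the guideline, converts a hypochlorite dose to a nominal free-chlorine concentration. Note that the computed nominal dose of roughly 28 ppm exceeds the 15 ppm residual to be verified before therapy; the difference presumably reflects chlorine demand in the tank water, so the residual should always be confirmed by measurement rather than by calculation.

```python
# Worked-arithmetic sketch: nominal free-chlorine dose from a calcium
# hypochlorite addition. Illustrative only; the residual that matters
# clinically must be measured in the water, not computed.

LITERS_PER_GALLON = 3.785

def nominal_chlorine_ppm(grams: float, available_fraction: float, gallons: float) -> float:
    """Nominal dose in mg/L (ppm): available chlorine mass over water volume."""
    mg_available = grams * available_fraction * 1000.0  # grams -> milligrams
    liters = gallons * LITERS_PER_GALLON
    return mg_available / liters

# Dose cited in the text: 15 g of 70% calcium hypochlorite per 100 gallons.
print(round(nominal_chlorine_ppm(15, 0.70, 100), 1))  # ~27.7 ppm nominal
```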
# c. Hydrotherapy Pools
Hydrotherapy pools typically serve large numbers of patients and are usually heated to 91.4°F-98.6°F (31°C-37°C). The temperature range is narrower (94°F-96.8°F [34.4°C-36°C]) for pediatric and geriatric patient use. 899 Because the size of hydrotherapy pools precludes draining after patient use, proper management is required to maintain the proper balance of water conditioning (i.e., alkalinity, hardness, and temperature) and disinfection. The most widely used chemicals for disinfection of pools are chlorine and chlorine compounds: calcium hypochlorite, sodium hypochlorite, lithium hypochlorite, chloroisocyanurates, and chlorine gas. Solid and liquid formulations of chlorine chemicals are the easiest and safest to use. 900 Other halogenated compounds have also been used for pool-water disinfection, albeit on a limited scale. Bromine, which forms bactericidal bromamines in the presence of ammonia, has limited use because of its association with contact dermatitis. 901 Iodine does not bleach hair or swimsuits and does not cause eye irritation, but when introduced at proper concentrations, it gives water a greenish-yellowish cast. 892

In practical terms, maintenance of large hydrotherapy pools (e.g., those used for exercise) is similar to that for indoor public pools (i.e., continuous filtration, chlorine residuals no less than 0.4 ppm, and pH of 7.2-7.6). 902,903 Supply pipes and pumps also need to be maintained to eliminate the possibility of this equipment serving as a reservoir for waterborne organisms. 904 Specific standards for chlorine residual and pH of the water are addressed in local and state regulations. Patients who are fecally incontinent or who have draining wounds should refrain from using these pools until their condition improves.

# d. Birthing Tanks and Other Equipment
The use of birthing tanks, whirlpool spas, and whirlpools is a recent addition to obstetrical practice. 905 Few studies on the potential risks associated with these pieces of equipment have been conducted. In one study of 32 women, a newborn contracted a Pseudomonas infection after being delivered in such a tank; the infecting strain was identical to the organism isolated from the tank water. 906 Another report documented identical strains of P. aeruginosa isolated from a newborn with sepsis and from the environmental surfaces of a tub that the mother used for relaxation while in labor. 907 Other studies have shown no significant increases in the rates of post-immersion infections among mothers and infants. 908,909

Because the water and the tub surfaces routinely become contaminated with the mother's skin flora and blood during labor and delivery, birthing tanks and other tub equipment must be drained after each patient use and the surfaces thoroughly cleaned and disinfected. Health-care facilities are advised to follow the manufacturer's instructions for selection of disinfection method and chemical germicide. The range of chlorine residuals for public whirlpools and whirlpool spas is 2-5 ppm. 910 Use of an inflatable tub is an alternative solution, but this item must be cleaned and disinfected between patients if it is not considered a single-use unit.

Recreational tanks and whirlpool spas are increasingly being used as hydrotherapy equipment. Although such home equipment appears to be suitable for hydrotherapy, it is neither designed nor constructed to function in this capacity.
Additionally, manufacturers generally are not obligated to provide the health-care facility with cleaning and disinfecting instructions appropriate for medical equipment use, and the U.S. Food and Drug Administration (FDA) does not evaluate recreational equipment. Health-care facilities should therefore carefully evaluate this "off-label" use of home equipment before proceeding with a purchase.

# Miscellaneous Medical/Dental Equipment Connected to Main Water Systems
# a. Automated Endoscope Reprocessors
The automated endoscope reprocessor (AER) is classified by the FDA as an accessory for the flexible endoscope. 654 A properly operating AER can provide a more consistent, reliable method of decontamination and terminal reprocessing for endoscopes between patient procedures than manual reprocessing methods alone. 911 An endoscope is generally subjected to high-level disinfection using a liquid chemical sterilant or a high-level disinfectant. Because the instrument is a semi-critical device, the optimal rinse fluid for a disinfected endoscope would be sterile water. 3 Sterile water, however, is expensive and difficult to produce in sufficient quantities and with adequate quality assurance for instrument rinsing in an AER. 912,913 Therefore, one option to be used for AERs is rinse water that has been passed through filters with a pore size of 0.1-0.2 μm to render the water "bacteria-free." These filters usually are located in the water line at or near the port where the mains water enters the equipment. The product water (i.e., tap water passing through these filters) in these applications is not considered equivalent in microbial quality to membrane-filtered water as produced by pharmaceutical firms. Membrane filtration in pharmaceutical applications is intended to ensure the microbial quality of polished product water.

Water has been linked to the contamination of flexible fiberoptic endoscopes in the following two scenarios: a. rinsing a disinfected endoscope with unfiltered tap water, followed by storage of the instrument without drying out the internal channels and b. contamination of AERs from tap water inadvertently introduced into the equipment. In the latter instance, the machine's water reservoirs and fluid circuitry become contaminated with waterborne, heterotrophic bacteria (e.g., Pseudomonas aeruginosa and NTM), which can survive and persist in biofilms attached to these components. Colonization of the reservoirs and water lines of the AER becomes problematic if the required cleaning, disinfection, and maintenance are not performed on the equipment as recommended by the manufacturer. 669,916,917 Use of the 0.1-0.2-μm filter in the water line helps to keep bacterial contamination to a minimum, 670,911,917 but filters may fail and allow bacteria to pass through to the equipment and then to the instrument undergoing reprocessing. 671-674, 913, 918 Filters also require maintenance for proper performance. 670,911,912,918,919 Heightened awareness of the proper disinfection of the connectors that hook the instrument to the AER may help to further reduce the potential for contaminating endoscopes during reprocessing. 920

An emerging issue in the field of endoscopy is the possible role of rinse-water monitoring and its potential to help reduce endoscopy- and bronchoscopy-associated infections. 918 Studies have linked deficiencies in endoscope cleaning and/or disinfecting processes to the incidence of post-endoscopic adverse outcomes. Several clusters have been traced to AERs of older designs; these clusters were associated with water quality. 675

Regardless of whether manual or automated terminal reprocessing is used for endoscopes, the internal channels of the instrument should be dried before storage. 925 The presence of residual moisture in the internal channels encourages the proliferation of waterborne microorganisms, some of which may be pathogenic. One of the most frequently used methods employs 70% isopropyl alcohol to flush the internal channels, followed by forced-air drying of these channels and hanging the endoscope vertically in a protected cabinet; this method ensures internal drying of the endoscope, lessens the potential for proliferation of waterborne microorganisms, 669,913,917,922,926,927 and is consistent with professional organization guidance for endoscope reprocessing. 928

An additional problem with waterborne microbial contamination of AERs centers on increased microbial resistance to alkaline glutaraldehyde, a widely used liquid chemical sterilant/high-level disinfectant. 669,929 Opportunistic waterborne microorganisms (e.g., Mycobacterium chelonae, Methylobacterium spp.) have been associated with pseudo-outbreaks and colonization; infection caused by these organisms has been associated with procedures conducted in clinical settings (e.g., bronchoscopy). 669,913 Increasing microbial resistance to glutaraldehyde has been attributed to improper use of the disinfectant in the equipment, allowing the dilution of glutaraldehyde to fall below the manufacturer's recommended minimal use concentration. 929

# b. Dental Unit Water Lines
Dental unit water lines (DUWLs) consist of small-bore plastic tubing that delivers water used for general, non-surgical irrigation and as a coolant to dental handpieces, sonic and ultrasonic scalers, and air-water syringes; municipal tap water is the source water for these lines. The presence of biofilms of waterborne bacteria and fungi (e.g., Legionella spp., Pseudomonas aeruginosa, and NTM) in DUWLs has been established. 636,637,694,695 Biofilms continually release planktonic microorganisms into the water, the titers of which can exceed 1×10⁶ CFU/mL. 694 However, scientific evidence indicates that immunocompetent persons are only at minimal risk for substantial adverse health effects after contact with water from a dental unit. Nonetheless, exposing patients or dental personnel to water of uncertain microbiological quality is not consistent with universally accepted infection-control principles. 935

In 1993, CDC issued guidelines relative to water quality in a dental setting. These guidelines recommend that all dental instruments that use water (including high-speed handpieces) should be run to discharge water for 20-30 seconds after each patient and for several minutes before the start of each clinic day. 936 This practice can help to flush out any patient materials that may have entered the turbine, air, or waterlines. 937,938 The 1993 guidance also indicated that waterlines be flushed at the beginning of the clinic day. Although these guidelines are designed to help reduce the number of microorganisms present in treatment water, they do not address the issue of reducing or preventing biofilm formation in the waterlines.
Research published subsequent to the 1993 dental infection control guideline suggests that flushing the lines at the beginning of the day has only minimal effect on the status of the biofilm in the lines and does not reliably improve the quality of water during dental treatment. Updated recommendations on infection-control practices for water line use in dentistry will be available in late 2003. 942 The numbers of microorganisms in water used as coolant or irrigant for non-surgical dental treatment should be as low as reasonably achievable and, at a minimum, should meet nationally recognized standards for safe drinking water. 935,943 Only minimal evidence suggests that water meeting drinking-water standards poses a health hazard for immunocompetent persons. The EPA, the American Public Health Association (APHA), and the American Water Works Association (AWWA) have set a maximum limit of 500 CFU/mL for aerobic, heterotrophic, mesophilic bacteria in drinking water in municipal distribution systems. 944,945 This standard is achievable, given improvements in water-line technology. Dentists should consult with the manufacturer of their dental unit to determine the best equipment and method for maintaining and monitoring good water quality. 935,946

# E. Environmental Services
# 1. Principles of Cleaning and Disinfecting Environmental Surfaces
Although microbiologically contaminated surfaces can serve as reservoirs of potential pathogens, these surfaces generally are not directly associated with transmission of infections to either staff or patients. The transfer of microorganisms from environmental surfaces to patients is largely via hand contact with the surface. 947,948 Although hand hygiene is important to minimize the impact of this transfer, cleaning and disinfecting environmental surfaces as appropriate is fundamental in reducing their potential contribution to the incidence of health-care associated infections.

The principles of cleaning and disinfecting environmental surfaces take into account the intended use of the surface or item in patient care. CDC retains the Spaulding classification for medical and surgical instruments, which outlines three categories based on the potential for the instrument to transmit infection if the instrument is microbiologically contaminated before use. 949,950 These categories are "critical," "semicritical," and "noncritical." In 1991, CDC proposed an additional category, designated "environmental surfaces," to Spaulding's original classification 951 to represent surfaces that generally do not come into direct contact with patients during care. Environmental surfaces carry the least risk of disease transmission and can be safely decontaminated using less rigorous methods than those used on medical instruments and devices. Environmental surfaces can be further divided into medical equipment surfaces (e.g., knobs or handles on hemodialysis machines, x-ray machines, instrument carts, and dental units) and housekeeping surfaces (e.g., floors, walls, and tabletops). 951

The following factors influence the choice of disinfection procedure for environmental surfaces: a. the nature of the item to be disinfected, b. the number of microorganisms present, c. the innate resistance of those microorganisms to the inactivating effects of the germicide, d. the amount of organic soil present, e. the type and concentration of germicide used, f. duration and temperature of germicide contact, and g. if using a proprietary product, other specific indications and directions for use. 952,953

Cleaning is the necessary first step of any sterilization or disinfection process. Cleaning is a form of decontamination that renders the environmental surface safe to handle or use by removing organic matter, salts, and visible soils, all of which interfere with microbial inactivation. The physical action of scrubbing with detergents and surfactants and rinsing with water removes large numbers of microorganisms from surfaces. 957 If the surface is not cleaned before the terminal reprocessing procedures are started, the success of the sterilization or disinfection process is compromised.

Spaulding proposed three levels of disinfection for the treatment of devices and surfaces that do not require sterility for safe use. These disinfection levels are "high-level," "intermediate-level," and "low-level." 949,950 The basis for these levels is that microorganisms can usually be grouped according to their innate resistance to a spectrum of physical or chemical germicidal agents (Table 22). This information, coupled with the instrument/surface classification, determines the appropriate level of terminal disinfection for an instrument or surface.

The process of high-level disinfection, an appropriate standard of treatment for heat-sensitive, semicritical medical instruments (e.g., flexible, fiberoptic endoscopes), inactivates all vegetative bacteria, mycobacteria, viruses, fungi, and some bacterial spores. High-level disinfection is accomplished with powerful, sporicidal chemicals (e.g., glutaraldehyde, peracetic acid, and hydrogen peroxide) that are not appropriate for use on housekeeping surfaces. These liquid chemical sterilants/high-level disinfectants are highly toxic. Use of these chemicals for applications other than those indicated in their label instructions (i.e., as immersion chemicals for treating heat-sensitive medical instruments) is not appropriate. 964 Intermediate-level disinfection does not necessarily kill bacterial spores, but it does inactivate Mycobacterium tuberculosis var. bovis, which is substantially more resistant to chemical germicides than ordinary vegetative bacteria, fungi, and medium to small viruses (with or without lipid envelopes). Chemical germicides with sufficient potency to achieve intermediate-level disinfection include chlorine-containing compounds (e.g., sodium hypochlorite), alcohols, some phenolics, and some iodophors. Low-level disinfection inactivates vegetative bacteria, fungi, enveloped viruses (e.g., human immunodeficiency virus [HIV] and influenza viruses), and some non-enveloped viruses (e.g., adenoviruses). Low-level disinfectants include quaternary ammonium compounds, some phenolics, and some iodophors. Sanitizers are agents that reduce the numbers of bacterial contaminants to safe levels as judged by public health requirements and are used in cleaning operations, particularly in food service and dairy applications. Germicidal chemicals that have been approved by FDA as skin antiseptics are not appropriate for use as environmental surface disinfectants. 951

The selection and use of chemical germicides are largely matters of judgment, guided by product label instructions, information, and regulations.
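For quick reference, the three disinfection levels just described can be condensed into a small lookup structure. The sketch below is an illustrative summary of the text above, not an authoritative classification; actual product selection must follow label instructions and the applicable regulations.

```python
# Illustrative summary of the Spaulding disinfection levels described above.
# Condensed from the text for reference only; not a substitute for product
# labels or regulatory guidance.

DISINFECTION_LEVELS = {
    "high": {
        "inactivates": "vegetative bacteria, mycobacteria, viruses, fungi, some bacterial spores",
        "example_agents": ["glutaraldehyde", "peracetic acid", "hydrogen peroxide"],
        "housekeeping_use": False,  # highly toxic; immersion of heat-sensitive instruments only
    },
    "intermediate": {
        "inactivates": "M. tuberculosis var. bovis, vegetative bacteria, fungi, medium to small viruses (not necessarily spores)",
        "example_agents": ["sodium hypochlorite", "alcohols", "some phenolics", "some iodophors"],
        "housekeeping_use": True,
    },
    "low": {
        "inactivates": "vegetative bacteria, fungi, enveloped viruses, some non-enveloped viruses",
        "example_agents": ["quaternary ammonium compounds", "some phenolics", "some iodophors"],
        "housekeeping_use": True,
    },
}

print(DISINFECTION_LEVELS["intermediate"]["example_agents"])
```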
Liquid sterilant chemicals and high-level disinfectants intended for use on critical and semi-critical medical/dental devices and instruments are regulated exclusively by the FDA as a result of recent memoranda of understanding between FDA and the EPA that delineate agency authority for chemical germicide regulation. 965,966 A common misconception in the use of surface disinfectants in health-care settings relates to the underlying purpose for use of proprietary products labeled as a "tuberculocidal" germicide. Such products will not interrupt or prevent the transmission of TB in health-care settings because TB is not acquired from environmental surfaces. The tuberculocidal claim is used as a benchmark by which to measure germicidal potency. Because mycobacteria have the highest intrinsic level of resistance among the vegetative bacteria, viruses, and fungi, any germicide with a tuberculocidal claim on the label (i.e., an intermediate-level disinfectant) is considered capable of inactivating a broad spectrum of pathogens, including much less resistant organisms such as the bloodborne pathogens (e.g., hepatitis B virus [HBV], hepatitis C virus [HCV], and HIV). It is this broad-spectrum capability, rather than the product's specific potency against mycobacteria, that is the basis for protocols and OSHA regulations indicating the appropriateness of using tuberculocidal chemicals for surface disinfection. 967

# General Cleaning Strategies for Patient-Care Areas
The number and types of microorganisms present on environmental surfaces are influenced by the following factors: a. number of people in the environment, b. amount of activity, c. amount of moisture, d. presence of material capable of supporting microbial growth, e. rate at which organisms suspended in the air are removed, and f. type of surface and orientation. 968 Strategies for cleaning and disinfecting surfaces in patient-care areas take into account a. potential for direct patient contact, b. degree and frequency of hand contact, and c. potential contamination of the surface with body substances or environmental sources of microorganisms (e.g., soil, dust, and water).

# a. Cleaning of Medical Equipment
Manufacturers of medical equipment should provide care and maintenance instructions specific to their equipment. These instructions should include information about a. the equipment's compatibility with chemical germicides, b. whether the equipment is water-resistant or can be safely immersed for cleaning, and c. how the equipment should be decontaminated if servicing is required. 967 In the absence of manufacturers' instructions, non-critical medical equipment (e.g., stethoscopes, blood pressure cuffs, dialysis machines, and equipment knobs and controls) usually only requires cleaning followed by low- to intermediate-level disinfection, depending on the nature and degree of contamination. Ethyl alcohol or isopropyl alcohol in concentrations of 60%-90% (v/v) is often used to disinfect small surfaces (e.g., rubber stoppers of multiple-dose medication vials, and thermometers) 952, 969 and occasionally external surfaces of equipment (e.g., stethoscopes and ventilators). However, alcohol evaporates rapidly, which makes extended contact times difficult to achieve unless items are immersed, a factor that precludes its practical use as a large-surface disinfectant.
951 Alcohol may cause discoloration, swelling, hardening, and cracking of rubber and certain plastics after prolonged and repeated use and may damage the shellac mounting of lenses in medical equipment. 970 Barrier protection of surfaces and equipment is useful, especially if these surfaces are a. touched frequently by gloved hands during the delivery of patient care, b. likely to become contaminated with body substances, or c. difficult to clean. Impervious-backed paper, aluminum foil, and plastic or fluid-resistant covers are suitable for use as barrier protection. An example of this approach is the use of plastic wrapping to cover the handle of the operatory light in dental-care settings. 936,942 Coverings should be removed and discarded while the health-care worker is still gloved. 936,942 The health-care worker, after ungloving and performing hand hygiene, must cover these surfaces with clean materials before the next patient encounter. # b. Cleaning Housekeeping Surfaces Housekeeping surfaces require regular cleaning and removal of soil and dust. Dry conditions favor the persistence of gram-positive cocci (e.g., coagulase-negative Staphylococcus spp.) in dust and on surfaces, whereas moist, soiled environments favor the growth and persistence of gram-negative bacilli. 948,971,972 Fungi are also present on dust and proliferate in moist, fibrous material. Most, if not all, housekeeping surfaces need to be cleaned only with soap and water or a detergent/disinfectant, depending on the nature of the surface and the type and degree of contamination. Cleaning and disinfection schedules and methods vary according to the area of the health-care facility, type of surface to be cleaned, and the amount and type of soil present. Disinfectant/detergent formulations registered by EPA are used for environmental surface cleaning, but the actual physical removal of microorganisms and soil by wiping or scrubbing is probably as important, if not more so, than any antimicrobial effect of the cleaning agent used. 973 Therefore, cost, safety, product-surface compatibility, and acceptability by housekeepers can be the main criteria for selecting a registered agent. If using a proprietary detergent/disinfectant, the manufacturers' instructions for appropriate use of the product should be followed. 974 Consult the products' material safety data sheets (MSDS) to determine appropriate precautions to prevent hazardous conditions during product application. Personal protective equipment (PPE) used during cleaning and housekeeping procedures should be appropriate to the task. Housekeeping surfaces can be divided into two groups -those with minimal hand-contact (e.g., floors, and ceilings) and those with frequent hand-contact ("high touch surfaces"). The methods, thoroughness, and frequency of cleaning and the products used are determined by health-care facility policy. 6 However, high-touch housekeeping surfaces in patient-care areas (e.g., doorknobs, bedrails, light switches, wall areas around the toilet in the patient's room, and the edges of privacy curtains) should be cleaned and/or disinfected more frequently than surfaces with minimal hand contact. Infection-control practitioners typically use a risk-assessment approach to identify high-touch surfaces and then coordinate an appropriate cleaning and disinfecting strategy and schedule with the housekeeping staff. 
Horizontal surfaces with infrequent hand contact (e.g., window sills and hard-surface flooring) in routine patient-care areas require cleaning on a regular basis, when soiling or spills occur, and when a patient is discharged from the facility. 6 Regular cleaning of surfaces and decontamination, as needed, is also advocated to protect potentially exposed workers. 967 Cleaning of walls, blinds, and window curtains is recommended when they are visibly soiled. 972,973,975 Disinfectant fogging is not recommended for general infection control in routine patient-care areas. 2,976 Further, paraformaldehyde, which was once used in this application, is no longer registered by EPA for this purpose. Use of paraformaldehyde in these circumstances requires either registration or an exemption issued by EPA under the Federal Insecticide, Fungicide, and Rodenticide Act (FIFRA). Infection control, industrial hygienists, and environmental services supervisors should assess the cleaning procedures, chemicals used, and the safety issues to determine if a temporary relocation of the patient is needed when cleaning in the room. Extraordinary cleaning and decontamination of floors in health-care settings is unwarranted. Studies have demonstrated that disinfection of floors offers no advantage over regular detergent/water cleaning and has minimal or no impact on the occurrence of health-care associated infections. 947,948, Additionally, newly cleaned floors become rapidly recontaminated from airborne microorganisms and those transferred from shoes, equipment wheels, and body substances. 971,975,981 Nevertheless, healthcare institutions or contracted cleaning companies may choose to use an EPA-registered detergent/disinfectant for cleaning low-touch surfaces (e.g., floors) in patient-care areas because of the difficulty that personnel may have in determining if a spill contains blood or body fluids (requiring a detergent/disinfectant for clean-up) or when a multi-drug resistant organism is likely to be in the environment. Methods for cleaning non-porous floors include wet mopping and wet vacuuming, dry dusting with electrostatic materials, and spray buffing. 973, Methods that produce minimal mists and aerosols or dispersion of dust in patient-care areas are preferred. 9,20,109,272 Part of the cleaning strategy is to minimize contamination of cleaning solutions and cleaning tools. Bucket solutions become contaminated almost immediately during cleaning, and continued use of the solution transfers increasing numbers of microorganisms to each subsequent surface to be cleaned. 971,981,985 Cleaning solutions should be replaced frequently. A variety of "bucket" methods have been devised to address the frequency with which cleaning solutions are replaced. 986,987 Another source of contamination in the cleaning process is the cleaning cloth or mop head, especially if left soaking in dirty cleaning solutions. 971, Laundering of cloths and mop heads after use and allowing them to dry before re-use can help to minimize the degree of contamination. 990 A simplified approach to cleaning involves replacing soiled cloths and mop heads with clean items each time a bucket of detergent/disinfectant is emptied and replaced with fresh, clean solution (B. Stover, Kosair Children's Hospital, 2000). Disposable cleaning cloths and mop heads are an alternative option, if costs permit. 
Another reservoir for microorganisms in the cleaning process may be dilute solutions of the detergents or disinfectants, especially if the working solution is prepared in a dirty container, stored for long periods of time, or prepared incorrectly. 547 Gram-negative bacilli (e.g., Pseudomonas spp. and Serratia marcescens) have been detected in solutions of some disinfectants (e.g., phenolics and quaternary ammonium compounds). 547,991 Contemporary EPA registration regulations have helped to minimize this problem by asking manufacturers to provide potency data to support label claims for detergent/disinfectant properties under real-use conditions (e.g., diluting the product with tap water instead of distilled water). Application of contaminated cleaning solutions, particularly from small-quantity aerosol spray bottles or with equipment that might generate aerosols during operation, should be avoided, especially in high-risk patient areas. 992,993 Making sufficient fresh cleaning solution for daily cleaning, discarding any remaining solution, and drying out the container will help to minimize the degree of bacterial contamination. Containers that dispense liquid, as opposed to spray-nozzle dispensers (e.g., quart-sized dishwashing liquid bottles), can be used to apply detergent/disinfectants to surfaces and then to cleaning cloths with minimal aerosol generation. A pre-mixed, "ready-to-use" detergent/disinfectant solution may be used if available.

# c. Cleaning Special Care Areas
Guidelines have been published regarding cleaning strategies for isolation areas and operating rooms. 6,7 The basic strategies for areas housing immunosuppressed patients include a. wet dusting horizontal surfaces daily with cleaning cloths pre-moistened with detergent or an EPA-registered hospital disinfectant or disinfectant wipes; 94, 984 b. using care when wet dusting equipment and surfaces above the patient to avoid patient contact with the detergent/disinfectant; c. avoiding the use of cleaning equipment that produces mists or aerosols; d. equipping vacuums with HEPA filters, especially for the exhaust, when used in any patient-care area housing immunosuppressed patients; 9, 94, 986 and e. regular cleaning and maintenance of equipment to ensure efficient particle removal. When preparing cleaning cloths for wet dusting, the cloths should be moistened with freshly prepared solutions of detergents or disinfectants rather than left soaking in such solutions for long periods of time. Dispersal of microorganisms in the air from dust or aerosols is more problematic in these settings than elsewhere in health-care facilities. Vacuum cleaners can serve as dust disseminators if they are not operating properly. 994 Doors to immunosuppressed patients' rooms should be closed when nearby areas are being vacuumed. 9 Bacterial and fungal contamination of filters in cleaning equipment is inevitable, and these filters should be cleaned regularly or replaced per the equipment manufacturer's instructions.

Mats with tacky surfaces placed in operating rooms and other patient-care areas only minimally reduce the overall degree of contamination of floors and have little impact on the incidence rate of health-care associated infection in general. 351,971,983 An exception, however, is the use of tacky mats inside the entryways of cordoned-off construction areas inside the health-care facility; these mats help to minimize the intrusion of dust into patient-care areas.
Special precautions for cleaning incubators, mattresses, and other nursery surfaces have been recommended to address reports of hyperbilirubinemia in newborns linked to inadequately diluted solutions of phenolics and poor ventilation. These medical conditions have not, however, been associated with the use of properly prepared solutions of phenolics. Non-porous housekeeping surfaces in neonatal units can be disinfected with properly diluted or pre-mixed phenolics, followed by rinsing with clean water. 997 However, phenolics are not recommended for cleaning infant bassinets and incubators during the stay of the infant. Infants who remain in the nursery for an extended period should be moved periodically to freshly cleaned and disinfected bassinets and incubators. 997 If phenolics are used for cleaning bassinets and incubators after they have been vacated, the surfaces should be rinsed thoroughly with water and dried before either piece of equipment is reused. Cleaning and disinfecting protocols should allow for the full contact time specified for the product used. Bassinet mattresses should be replaced, however, if the mattress cover surface is broken. 997 # Cleaning Strategies for Spills of Blood and Body Substances Neither HBV, HCV, nor HIV has ever been transmitted from a housekeeping surface (i.e., floors, walls, or countertops). Nonetheless, prompt removal and surface disinfection of an area contaminated by either blood or body substances are sound infection-control practices and OSHA requirements. 967 Studies have demonstrated that HIV is inactivated rapidly after being exposed to commonly used chemical germicides at concentrations that are much lower than those used in practice. HBV is readily inactivated with a variety of germicides, including quaternary ammonium compounds. 1004 Embalming fluids (e.g., formaldehyde) are also capable of completely inactivating HIV and HBV. 1005,1006 OSHA has revised its regulation for disinfecting spills of blood or other potentially infectious material to include proprietary products whose label includes inactivation claims for HBV and HIV, provided that such surfaces have not become contaminated with agent(s) or volumes of or concentrations of agent(s) for which a higher level of disinfection is recommended. 1007 These registered products are listed in EPA's List D -Registered Antimicrobials Effective Against Hepatitis B Virus and Human HIV-1, which may include products tested against duck hepatitis B virus (DHBV) as a surrogate for HBV. 1008,1009 Additional lists of interest include EPA's List C -Registered Antimicrobials Effective Against Human HIV-1 and EPA's List E -Registered Antimicrobials Effective Against Mycobacterium spp., Hepatitis B Virus, and Human HIV-1. Sodium hypochlorite solutions are inexpensive and effective broad-spectrum germicidal solutions. 1010,1011 Generic sources of sodium hypochlorite include household chlorine bleach or reagent grade chemical. Concentrations of sodium hypochlorite solutions with a range of 5,000-6,150 ppm (1:10 v/v dilution of household bleaches marketed in the United States) to 500-615 ppm (1:100 v/v dilution) free chlorine are effective depending on the amount of organic material (e.g., blood, mucus, and urine) present on the surface to be cleaned and disinfected. 1010, 1011 EPA-registered chemical germicides may be more compatible with certain materials that could be corroded by repeated exposure to sodium hypochlorite, especially the 1:10 dilution. 
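The dilution figures above follow from simple unit conversion: one percent equals 10,000 ppm, and a 1:X (v/v) dilution divides the concentration by X. The sketch below is illustrative only and reproduces the cited ranges for U.S. household bleach strengths of roughly 5.25%-6.15% sodium hypochlorite.

```python
# Illustrative dilution calculator for the hypochlorite figures cited above.
# Percent concentration converts to ppm by multiplying by 10,000; a 1:X
# volume dilution divides the concentration by X.

def diluted_ppm(bleach_percent: float, dilution: int) -> float:
    """Concentration (ppm) of a 1:<dilution> v/v dilution of bleach."""
    return bleach_percent * 10_000 / dilution

for pct in (5.25, 6.15):  # typical U.S. household bleach strengths
    print(f"{pct}% bleach, 1:10  -> {diluted_ppm(pct, 10):,.0f} ppm")
    print(f"{pct}% bleach, 1:100 -> {diluted_ppm(pct, 100):,.0f} ppm")
# Output approximately matches the 5,000-6,150 ppm (1:10) and 500-615 ppm
# (1:100) ranges cited in the text.
```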
Appropriate personal protective equipment (e.g., gloves and goggles) should be worn when preparing and using hypochlorite solutions or other chemical germicides. 967 Despite laboratory evidence demonstrating adequate potency against bloodborne pathogens (e.g., HIV and HBV), many chlorine bleach products available in grocery and chemical-supply stores are not registered by the EPA for use as surface disinfectants. Use of these chlorine products as surface disinfectants is considered by the EPA to be an "unregistered use." EPA encourages the use of registered products because the agency reviews them for safety and performance when the product is used according to label instructions. When unregistered products are used for surface disinfection, users do so at their own risk.

Strategies for decontaminating spills of blood and other body fluids differ based on the setting in which they occur and the volume of the spill. 1010 In patient-care areas, workers can manage small spills by cleaning and then disinfecting, using an intermediate-level germicide or an EPA-registered germicide from the EPA List D or E. 967,1007 For spills containing large amounts of blood or other body substances, workers should first remove visible organic matter with absorbent material (e.g., disposable paper towels discarded into leak-proof, properly labeled containment) and then clean and decontaminate the area. 1002, 1003, 1012 If the surface is nonporous and a generic form of a sodium hypochlorite solution is used (e.g., household bleach), a 1:100 dilution is appropriate for decontamination, assuming that a. the worker assigned to clean the spill is wearing gloves and other personal protective equipment appropriate to the task, b. most of the organic matter of the spill has been removed with absorbent material, and c. the surface has been cleaned to remove residual organic matter. A recent study demonstrated that even strong chlorine solutions (i.e., a 1:10 dilution of chlorine bleach) may fail to totally inactivate high titers of virus in large quantities of blood, but in the absence of blood these disinfectants can achieve complete viral inactivation. 1011 This evidence supports the need to remove most organic matter from a large spill before final disinfection of the surface. Additionally, EPA-registered proprietary disinfectant label claims are based on use on a pre-cleaned surface. 951,954

Managing spills of blood, body fluids, or other infectious materials in clinical, public health, and research laboratories requires more stringent measures because of a. the higher potential risk of disease transmission associated with large volumes of blood and body fluids and b. high numbers of microorganisms associated with diagnostic cultures. The use of an intermediate-level germicide for routine decontamination in the laboratory is prudent. 954 Recommended practices for managing large spills of concentrated infectious agents in the laboratory include a. confining the contaminated area, b. flooding the area with a liquid chemical germicide before cleaning, and c. decontaminating with fresh germicidal chemical of at least intermediate-level disinfectant potency. 1010 A suggested technique when flooding the spill with germicide is to lay absorbent material down on the spill and apply sufficient germicide to thoroughly wet both the spill and the absorbent material. 1013 If using a solution of household chlorine bleach, a 1:10 dilution is recommended for this purpose.
EPA-registered germicides should be used according to the manufacturers' instructions for use dilution and contact time. Gloves should be worn during the cleaning and decontamination procedures in both clinical and laboratory settings. PPE in such a situation may include the use of respiratory protection (e.g., an N95 respirator) if clean-up procedures are expected to generate infectious aerosols. Protocols for cleaning spills should be developed and kept on record as part of good laboratory practice. 1013 Workers in laboratories and in patient-care areas of the facility should receive periodic training in environmental-surface infection-control strategies and procedures as part of an overall infection-control and safety curriculum.

# Carpeting and Cloth Furnishings
# a. Carpeting
Carpeting has been used for more than 30 years in both public and patient-care areas of health-care facilities. Advantages of carpeting in patient-care areas include a. its noise-limiting characteristics; b. the "humanizing" effect on health care; and c. its contribution to reductions in falls and resultant injuries, particularly for the elderly. Compared with hard-surface flooring, however, carpeting is harder to keep clean, especially after spills of blood and body substances. It is also harder to push equipment with wheels (e.g., wheelchairs, carts, and gurneys) on carpeting.

Several studies have documented the presence of diverse microbial populations, primarily bacteria and fungi, in carpeting; 111, 1017-1024 the variety and number of microorganisms tend to stabilize over time. New carpeting quickly becomes colonized, with bacterial growth plateauing after about 4 weeks. 1019 Vacuuming and cleaning the carpeting can temporarily reduce the numbers of bacteria, but these populations soon rebound and return to pre-cleaning levels. 1019,1020,1023 Bacterial contamination tends to increase with higher levels of activity. 1025 Soiled carpeting that is or remains damp or wet provides an ideal setting for the proliferation and persistence of gram-negative bacteria and fungi. 1026 Carpeting that remains damp should be removed, ideally within 72 hours.

Despite the evidence of bacterial growth and persistence in carpeting, only limited epidemiologic evidence demonstrates that carpets influence health-care associated infection rates in areas housing immunocompetent patients. 1023,1025,1027 This guideline, therefore, includes no recommendations against the use of carpeting in these areas. Nonetheless, avoiding the use of carpeting is prudent in areas where spills are likely to occur (e.g., laboratories, areas around sinks, and janitor closets) and where patients may be at greater risk of infection from airborne environmental pathogens (e.g., HSCT units, burn units, ICUs, and ORs). 111,1028

An outbreak of aspergillosis in an HSCT unit was recently attributed to carpet contamination and a particular method of carpet cleaning. 111 A window in the unit had been opened repeatedly during the time of a nearby building fire, which allowed fungal spore intrusion into the unit. After the window was sealed, the carpeting was cleaned using a "bonnet buffing" machine, which dispersed Aspergillus spores into the air. 111 Wet vacuuming was instituted, replacing the dry cleaning method used previously; no additional cases of invasive aspergillosis were identified.
The care setting and the method of carpet cleaning are important factors to consider when attempting to minimize or prevent production of aerosols and dispersal of carpet microorganisms into the air. 94,111 Both vacuuming and shampooing or wet cleaning with equipment can disperse microorganisms into the air. 111,994 Vacuum cleaners should be maintained to minimize dust dispersal in general and should be equipped with HEPA filters, especially for use in high-risk patient-care areas. 9,94,986 Some formulations of carpet-cleaning chemicals, if applied or used improperly, can be dispersed into the air as a fine dust capable of causing respiratory irritation in patients and staff. 1029 Cleaning equipment, especially equipment used for wet cleaning and extraction, can become contaminated with waterborne organisms (e.g., Pseudomonas aeruginosa) and serve as a reservoir for these organisms if it is not properly maintained. Substantial numbers of bacteria can then be transferred to carpeting during the cleaning process. 1030 Therefore, keeping carpet-cleaning equipment in good repair and allowing such equipment to dry between uses is prudent. Carpet cleaning should be performed on a regular basis determined by internal policy. Although spills of blood and body substances on non-porous surfaces require prompt spot cleaning using standard cleaning procedures and application of chemical germicides, 967 similar decontamination approaches to blood and body substance spills on carpeting can be problematic from a regulatory perspective. 1031 Most, if not all, modern carpet brands suitable for public facilities can tolerate the activity of a variety of liquid chemical germicides. However, according to OSHA, carpeting contaminated with blood or other potentially infectious materials cannot be fully decontaminated. 1032 Therefore, facilities electing to use carpeting in high-activity patient-care areas may choose carpet tiles in areas at high risk for spills. 967,1032 In the event of contamination with blood or other body substances, carpet tiles can be removed, discarded, and replaced. OSHA also acknowledges that only minimal direct skin contact occurs with carpeting; employers are therefore expected to make reasonable efforts to clean and sanitize carpeting using carpet detergent/cleaner products. 1032 Over the last few years, some carpet manufacturers have treated their products with fungicidal and/or bactericidal chemicals. Although these chemicals may help to reduce the overall numbers of bacteria or fungi present in carpet, their use does not preclude routine care and maintenance of the carpeting. Limited evidence suggests that chemically treated carpet may have helped to keep health-care associated aspergillosis rates low in one HSCT unit, 111 but overall, treated carpeting has not been shown to reduce the incidence of health-care associated infections in care areas for immunocompetent patients.

# b. Cloth Furnishings

Upholstered furniture and furnishings are becoming increasingly common in patient-care areas. These furnishings range from simple cloth chairs in patients' rooms to complete decorating schemes that give the interior of the facility more the look of an elegant hotel. 1033 Even though pathogenic microorganisms have been isolated from the surfaces of cloth chairs, no epidemiologic evidence suggests that general patient-care areas with cloth furniture pose increased risks of health-care associated infection compared with areas that contain hard-surfaced furniture.
1034,1035 Allergens (e.g., dog and cat dander) have been detected in or on cloth furniture in clinics and elsewhere in hospitals, in concentrations higher than those found on bed linens. 1034,1035 These allergens presumably are transferred from the clothing of visitors. Researchers have therefore suggested that cloth chairs should be vacuumed regularly to keep dust and allergen levels to a minimum. This recommendation, however, has generated concerns that aerosols created by vacuuming could place immunocompromised patients or patients with preexisting lung disease (e.g., asthma) at risk for development of health-care associated, environmental airborne disease. 9,20,109,988 Recovering worn, upholstered furniture (especially the seat cushion) with covers that are easily cleaned (e.g., vinyl), or replacing the item, is prudent; minimizing the use of upholstered furniture and furnishings in any patient-care areas where immunosuppressed patients are located (e.g., HSCT units) reduces the likelihood of disease. 9

# Flowers and Plants in Patient-Care Areas

Fresh flowers, dried flowers, and potted plants are common items in health-care facilities. In 1974, clinicians isolated an Erwinia sp. post mortem from a neonate diagnosed with fulminant septicemia, meningitis, and respiratory distress syndrome. 1038 Because Erwinia spp. are plant pathogens, plants brought into the delivery room were suspected to be the source of the bacteria, although the case report did not definitively establish a direct link. Several subsequent studies evaluated the numbers and diversity of microorganisms in the vase water of cut flowers. These studies revealed that high concentrations of bacteria, ranging from 10⁴ to 10¹⁰ CFU/mL, were often present, especially if the water was changed infrequently. 515,702,1039 The major group of microorganisms in flower-vase water was gram-negative bacteria, with Pseudomonas aeruginosa the most frequently isolated organism. 515,702,1039,1040 P. aeruginosa was also the primary organism directly isolated from chrysanthemums and other potted plants. 1041,1042 However, flowers in hospitals were not significantly more contaminated with bacteria than flowers in restaurants or in the home. 702 Additionally, no differences in the diversity and degree of antibiotic resistance of bacteria have been observed in samples isolated from hospital flowers versus those obtained from flowers elsewhere. 702 Despite the diversity and large numbers of bacteria associated with flower-vase water and potted plants, minimal or no evidence indicates that the presence of plants in immunocompetent patient-care areas poses an increased risk of health-care associated infection. 515 In one study involving a limited number of surgical patients, no correlation was observed between bacterial isolates from flowers in the area and the incidence and etiology of postoperative infections among the patients. 1040 Similar conclusions were reached in a study that examined the bacteria found in potted plants. 1042 Nonetheless, some precautions for general patient-care settings should be implemented, including a. limiting flower and plant care to staff with no direct patient contact; b. advising health-care staff to wear gloves when handling plants; c. washing hands after handling plants; d. changing vase water every 2 days and discharging the water into a sink outside the immediate patient environment; and e. cleaning and disinfecting vases after use.
702 Some researchers have examined the possibility of adding a chemical germicide to vase water to control bacterial populations. Certain chemicals (e.g., hydrogen peroxide and chlorhexidine) are well tolerated by plants. 1040,1043,1044 Use of these chemicals, however, has not been evaluated in studies assessing impact on health-care associated infection rates. Modern florists now have a variety of products available to add to vase water to extend the life of cut flowers and to minimize bacterial clouding of the water. Flowers (fresh and dried) and ornamental plants, however, may serve as a reservoir of Aspergillus spp., and dispersal of conidiospores into the air from this source can occur. 109 Health-care associated outbreaks of invasive aspergillosis reinforce the importance of maintaining an environment as free of Aspergillus spp. spores as possible for patients with severe, prolonged neutropenia. Potted plants, fresh-cut flowers, and dried flower arrangements may provide a reservoir for these fungi as well as for other fungal species (e.g., Fusarium spp.). 109,1045,1046 Researchers in one study of bacteria and flowers suggested that flowers and vase water should be avoided in areas providing care to medically at-risk patients (e.g., oncology patients and transplant patients), although this study did not attempt to correlate the observations of bacterial populations in the vase water with the incidence of health-care associated infections. 515 Another study using molecular epidemiology techniques demonstrated identical Aspergillus terreus types among environmental and clinical specimens isolated from infected patients with hematologic malignancies. 1046 Therefore, attempts should be made to exclude flowers and plants from areas where immunosuppressed patients are located (e.g., HSCT units). 9,1046

# Pest Control

Cockroaches, flies and maggots, ants, mosquitoes, spiders, mites, midges, and mice are among the typical arthropod and vertebrate pest populations found in health-care facilities. Insects can serve as agents for the mechanical transmission of microorganisms or as active participants in the disease transmission process by serving as vectors. Arthropods recovered from health-care facilities have been shown to carry a wide variety of pathogenic microorganisms. Studies have suggested that the diversity of microorganisms associated with insects reflects the microbial populations present in the indoor health-care environment; some pathogens encountered in insects from hospitals were either absent from, or present to a lesser degree in, insects trapped in residential settings. Some of the microbial populations associated with insects in hospitals have demonstrated resistance to antibiotics. 1048,1059 Insect habitats are characterized by warmth, moisture, and availability of food. 1064 Insects forage in and feed on substrates, including but not limited to food scraps from kitchens/cafeterias, foods in vending machines, discharges on dressings either in use or discarded, other forms of human detritus, medical wastes, human wastes, and routine solid waste. Cockroaches, in particular, have been known to feed on fixed sputum smears in laboratories. 1065,1066 Both cockroaches and ants are frequently found in the laundry, central sterile supply departments, and anywhere in the facility where water or moisture is present (e.g., sink traps, drains, and janitor closets). Ants will often find their way into sterile packs of items as they forage in a warm, moist environment.
1057 Cockroaches and other insects frequent loading docks and other areas with direct access to the outdoors. Although insects carry a wide variety of pathogenic microorganisms on their surfaces and in their gut, the direct association of insects with disease transmission (apart from vector transmission) is limited, especially in health-care settings; the presence of insects in itself likely does not contribute substantially to health-care associated disease transmission in developed countries. However, outbreaks of infection attributed to microorganisms carried by insects may occur because of infestation coupled with breaks in standard infection-control practices. 1063 Studies have been conducted to examine the role of houseflies as possible vectors for shigellosis and other forms of diarrheal disease in non-health-care settings. 1046,1067 When control measures aimed at reducing the fly population density were implemented, a concomitant reduction in the incidence of diarrheal infections, carriage of Shigella organisms, and mortality caused by diarrhea among infants and young children was observed. Myiasis is defined as a parasitosis in which the larvae of any of a variety of flies use living or necrotic tissue or body substances of the host as a nutritional source. 1068 Larvae from health-care acquired myiasis have been observed in nares, wounds, eyes, ears, sinuses, and the external urogenital structures. Patients with this rare condition are typically older adults with underlying medical conditions (e.g., diabetes, chronic wounds, and alcoholism) who have a decreased capacity to ward off the flies. Persons with underlying conditions who live in or travel to tropical regions of the world are especially at risk. 1070,1071 Cases occur in the summer and early fall months in temperate climates, when flies are most active. 1071 An environmental assessment and review of the patient's history are necessary to verify that the source of the myiasis is health-care acquired and to identify corrective measures. 1069,1072 Simple prevention measures (e.g., installing screens on windows) are important in reducing the incidence of myiasis. 1072 From a public health and hygiene perspective, arthropod and vertebrate pests should be eradicated from all indoor environments, including health-care facilities. 1073,1074 Modern approaches to institutional pest management usually focus on a. eliminating food sources, indoor habitats, and other conditions that attract pests; b. excluding pests from the indoor environment; and c. applying pesticides as needed. 1075 Sealing windows in modern health-care facilities helps to minimize insect intrusion. When windows need to be opened for ventilation, ensuring that screens are in good repair and closing doors to the outside can help with pest control. Insects should be kept out of all areas of the health-care facility, especially ORs and any area where immunosuppressed patients are located. A pest-control specialist with appropriate credentials can provide a regular insect-control program that is tailored to the needs of the facility and uses approved chemicals and/or physical methods. Industrial hygienists can provide information on possible adverse reactions of patients and staff to pesticides and suggest alternative methods for pest control, as needed.

# Special Pathogen Concerns

# a. Antibiotic-Resistant Gram-Positive Cocci

Methicillin-resistant Staphylococcus aureus (MRSA), vancomycin-resistant enterococci (VRE), and vancomycin intermediate-resistant Staphylococcus aureus (VISA, also described as glycopeptide intermediate-resistant S. aureus [GISA]) represent crucial and growing concerns for infection control.
Although the term GISA is technically a more accurate description of the strains isolated to date (most of which are classified as having intermediate resistance to both vancomycin and teicoplanin), the term "glycopeptide" may not be recognized by many clinicians. Thus, the label of VISA, which emphasizes a change in minimum inhibitory concentrations (MICs) to vancomycin, is similar to that of VRE and is more meaningful to clinicians. 1076 According to National Nosocomial Infections Surveillance (NNIS) statistics for infections acquired among ICU patients in the United States in 1999, 52.3% of infections resulting from S. aureus were identified as MRSA infections, and 25.2% of enterococcal infections were attributed to VRE. These figures reflect 37% and 43% increases, respectively, since 1994-1998. 1077 People represent the primary reservoir of S. aureus. 1078 Although S. aureus has been isolated from a variety of environmental surfaces (e.g., stethoscopes, floors, charts, furniture, dry mops, and hydrotherapy tanks), the role of environmental contamination in transmission of this organism in health care appears to be minimal. S. aureus contamination of surfaces and tanks within burn therapy units, however, may be a major factor in the transmission of infection among burn patients. 1083 Colonized patients are the principal reservoir of VRE, and patients who are immunosuppressed (e.g., transplant patients) or otherwise medically at risk (e.g., ICU patients, cardio-thoracic surgical patients, patients previously hospitalized for extended periods, and those having received multi-antimicrobial or vancomycin therapy) are at greatest risk for VRE colonization. The mechanisms by which cross-colonization takes place are not well defined, although recent studies have indicated that both MRSA and VRE may be transmitted either a. directly from patient to patient, b. indirectly by transient carriage on the hands of health-care workers, 1088-1091 or c. by hand transfer of these gram-positive organisms from contaminated environmental surfaces and patient-care equipment. 1084,1087 In one survey, hand carriage of VRE in workers in a long-term care facility ranged from 13% to 41%. 1098 Many of the environmental surfaces found to be contaminated with VRE in outbreak investigations have been those that are touched frequently by the patient or the health-care worker. 1099 Such high-touch surfaces include bedrails, doorknobs, bed linens, gowns, overbed tables, blood pressure cuffs, computer tables, bedside tables, and various medical equipment. 22,1087,1094,1095 Contamination of environmental surfaces with VRE generally occurs in clinical laboratories and areas where colonized patients are present, 1087,1092,1094,1095,1103 but the potential for contamination increases when such patients have diarrhea 1087 or have multiple body-site colonization. 1104 Additional factors that can be important in the dispersion of these pathogens to environmental surfaces are misuse of glove techniques by health-care workers (especially when cleaning fecal contamination from surfaces) and patient, family, and visitor hygiene. Interest in the importance of environmental reservoirs of VRE increased when laboratory studies demonstrated that enterococci can persist in a viable state on dry environmental surfaces for extended periods of time (7 days to 4 months) 1099,1105 and that multiple strains can be identified during extensive periods of surveillance. 1104
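As a point of arithmetic only, the NNIS percentages cited earlier in this subsection can be checked against their stated relative increases. The short Python sketch below (illustrative, not part of the original guidance) back-calculates the approximate 1994-1998 baseline proportions, assuming the reported increases are relative to that baseline.

```python
# Back-calculate approximate 1994-1998 baseline resistance proportions from the
# 1999 NNIS figures and the reported relative increases (illustrative only).

def baseline_pct(current_pct: float, relative_increase_pct: float) -> float:
    """Solve current = baseline * (1 + increase) for the baseline proportion."""
    return current_pct / (1 + relative_increase_pct / 100)

print(f"MRSA: 52.3% in 1999 after a 37% rise -> ~{baseline_pct(52.3, 37):.1f}% baseline")
print(f"VRE:  25.2% in 1999 after a 43% rise -> ~{baseline_pct(25.2, 43):.1f}% baseline")
```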
VRE can be recovered from inoculated hands of health-care workers (with or without gloves) for up to 60 minutes. 22 The presence of either MRSA, VISA, or VRE on environmental surfaces, however, does not mean that patients in the contaminated areas will become colonized. Strict adherence to hand hygiene/handwashing and the proper use of barrier precautions help to minimize the potential for spread of these pathogens. Published recommendations for preventing the spread of vancomycin resistance address isolation measures, including patient cohorting and management of patient-care items. 5 Direct patient-care items (e.g., blood pressure cuffs) should be disposable whenever possible when used in contact isolation settings for patients with multiply resistant microorganisms. 1102 In 2002, the first two clinical isolates of vancomycin-resistant S. aureus (VRSA) were reported in the United States. 1114,1115 These represented isolated cases, and neither the family members nor the health-care providers of these case-patients had evidence of colonization or infection with VRSA. Conventional environmental infection-control measures (i.e., cleaning and then disinfecting surfaces using EPA-registered disinfectants with label claims for S. aureus) were used during the environmental investigation of these two cases; 1110-1112 however, studies have yet to evaluate the potential intrinsic resistance of these VRSA strains to surface disinfectants. Standard procedures during terminal cleaning and disinfection of surfaces, if performed incorrectly, may be inadequate for the elimination of VRE from patient rooms. 1113 Given the sensitivity of VRE to hospital disinfectants, current disinfecting protocols should be effective if they are diligently carried out and properly performed. Health-care facilities should ensure that housekeeping staff use correct procedures for cleaning and disinfecting surfaces in VRE-contaminated areas, which include using sufficient amounts of germicide at proper use dilution and allowing adequate contact time. 1118

# b. Clostridium difficile

Clostridium difficile is the most frequent etiologic agent of health-care associated diarrhea. 1119,1120 In one hospital, 30% of adults who developed health-care associated diarrhea were positive for C. difficile. 1121 One recent study employing PCR-ribotyping techniques demonstrated that cases of C. difficile-associated diarrhea occurring in the hospital included patients whose infections were attributed to endogenous C. difficile strains as well as patients whose illnesses were considered to be health-care associated infections. 1122 Most patients remain asymptomatic after infection, but the organism continues to be shed in their stools. Risk factors for acquiring C. difficile-associated infection include a. exposure to antibiotic therapy, particularly with beta-lactam agents; 1123 b. gastrointestinal procedures and surgery; 1124 c. advanced age; and d. indiscriminate use of antibiotics. Of all the measures that have been used to prevent the spread of C. difficile-associated diarrhea, the most successful has been the restriction of the use of antimicrobial agents. 1129,1130 C. difficile is an anaerobic, gram-positive bacterium. Normally fastidious in its vegetative state, it is capable of sporulating when environmental conditions no longer support its continued growth. The capacity to form spores enables the organism to persist in the environment (e.g., in soil and on dry surfaces) for extended periods of time. Environmental contamination by this microorganism is well known, especially in places where fecal contamination may occur.
1131 The environment (especially housekeeping surfaces) rarely serves as a direct source of infection for patients. 1024 However, direct exposure to contaminated patient-care items (e.g., rectal thermometers) and high-touch surfaces in patients' bathrooms (e.g., light switches) have been implicated as sources of infection. 1130,1135,1136,1138 Transfer of the pathogen to the patient via the hands of health-care workers is thought to be the most likely mechanism of exposure. 24,1133,1139 Standard isolation techniques intended to minimize enteric contamination of patients, health-care workers' hands, patient-care items, and environmental surfaces have been published. 1140 Handwashing remains the most effective means of reducing hand contamination. Proper use of gloves is an ancillary measure that helps to further minimize transfer of these pathogens from one surface to another. The degree to which the environment becomes contaminated with C. difficile spores is proportional to the number of patients with C. difficile-associated diarrhea, 24,1132,1135 although asymptomatic, colonized patients may also serve as a source of contamination. Few studies have examined the use of specific chemical germicides for the inactivation of C. difficile spores, and no well-controlled trials have been conducted to determine the efficacy of surface disinfection and its impact on health-care associated diarrhea. Some investigators have evaluated the use of chlorine-containing chemicals (e.g., 1,000 ppm hypochlorite at recommended use dilution, 5,000 ppm sodium hypochlorite, 1:100 v/v dilutions of unbuffered hypochlorite, and phosphate-buffered hypochlorite). One of the studies demonstrated that the number of contaminated environmental sites was reduced by half, 1135 whereas another two studies demonstrated declines in health-care associated C. difficile infections in an HSCT unit 1141 and in two geriatric medical units 1142 during a period of hypochlorite use. The presence of confounding factors, however, was acknowledged in one of these studies. 1142 The recommended approach to environmental infection control with respect to C. difficile is meticulous cleaning followed by disinfection using hypochlorite-based germicides as appropriate. 952,1130,1143 However, because no EPA-registered surface disinfectants with label claims for inactivation of C. difficile spores are available, this recommendation is based on the best available evidence from the scientific literature.

# c. Respiratory and Enteric Viruses in Pediatric-Care Settings

Although the viruses mentioned in this guideline are not unique to the pediatric-care setting in health-care facilities, their prevalence in these areas, especially during the winter months, is substantial. Children (particularly neonates) are more likely than adults to develop infection and substantial clinical disease from these agents and therefore are more likely to require supportive care during their illness. Common respiratory viruses in pediatric-care areas include rhinoviruses, respiratory syncytial virus (RSV), adenoviruses, influenza viruses, and parainfluenza viruses. Transmission of these viruses occurs primarily via direct contact with small-particle aerosols or via hand contamination with respiratory secretions that are then transferred to the nose or eyes. Because transmission primarily requires close personal contact, contact precautions are appropriate to interrupt transmission.
6 Hand contamination can occur from direct contact with secretions or indirectly from touching high-touch environmental surfaces that have become contaminated with virus from large droplets. The indirect transfer of virus from one person to another via hand contact with frequently touched fomites was demonstrated in a study using a bacteriophage whose environmental stability approximated that of human viral pathogens (e.g., poliovirus and parvovirus). 1144 The impact of this mode of transmission with respect to human respiratory and enteric viruses depends on the ability of these agents to survive on environmental surfaces. Infectious RSV has been recovered from skin, porous surfaces, and non-porous surfaces after 30 minutes, 1 hour, and 7 hours, respectively. 1145 Parainfluenza viruses are known to persist for up to 4 hours on porous surfaces and up to 10 hours on non-porous surfaces. 1146 Rhinoviruses can persist on porous and non-porous surfaces for approximately 1 and 3 hours, respectively; study participants in a controlled environment became infected with rhinoviruses after first touching a surface with dried secretions and then touching their nasal or conjunctival mucosa. 1147 Although the efficiency of direct transmission of these viruses from surfaces in uncontrolled settings remains to be defined, these data underscore the basis for maintaining regular protocols for cleaning and disinfecting high-touch surfaces. The clinically important enteric viruses encountered in pediatric-care settings include enteric adenovirus, astroviruses, caliciviruses, and rotavirus. Group A rotavirus is the most common cause of infectious diarrhea in infants and children. Transmission of this virus is primarily fecal-oral; however, the role of fecally contaminated surfaces and fomites in rotavirus transmission is unclear. During one epidemiologic investigation of enteric disease among children attending day care, rotavirus contamination was detected on 19% of inanimate objects in the center. 1148,1149 In an outbreak in a pediatric unit, secondary cases of rotavirus infection clustered in areas where children with rotaviral diarrhea were located. 1150 Astroviruses cause gastroenteritis and diarrhea in newborns and young children and can persist on fecally contaminated surfaces for several months during periods of relatively low humidity. 1151,1152 Outbreaks of small round-structured viruses (i.e., caliciviruses) can affect both patients and staff, with attack rates of ≥50%. 1153 Routes of person-to-person transmission include fecal-oral spread and aerosols generated from vomiting. Fecal contamination of surfaces in care settings can spread large amounts of virus to the environment. Studies that have attempted to use low- and intermediate-level disinfectants to inactivate rotavirus suspended in feces have demonstrated a protective effect of high concentrations of organic matter. 1157,1158 Intermediate-level disinfectants (e.g., alcoholic quaternary ammonium compounds and chlorine solutions) can be effective in inactivating enteric viruses, provided that a cleaning step to remove most of the organic matter precedes terminal disinfection. 1158 These findings underscore the need for proper cleaning and disinfecting procedures where contamination of environmental surfaces with body substances is likely. EPA-registered surface disinfectants with label claims for these viral agents should be used in these settings.
Using disposable, protective barrier coverings may help to minimize the degree of surface contamination. 936

# d. Severe Acute Respiratory Syndrome (SARS) Virus

In November 2002, an atypical pneumonia of unknown etiology emerged in Asia and subsequently developed into an international outbreak of respiratory illness among persons in 29 countries during the first six months of 2003. Severe acute respiratory syndrome (SARS) is a viral respiratory infection associated with a newly described coronavirus, SARS-associated coronavirus (SARS-CoV). SARS-CoV is an enveloped RNA virus. It is present in high titers in the respiratory secretions, stool, and blood of infected persons. The modes of transmission determined from epidemiologic investigations were primarily forms of direct contact (i.e., large-droplet aerosolization and person-to-person contact). Respiratory secretions were presumed to be the major source of virus in these situations; airborne transmission of virus has not been completely ruled out. Little is known about the role of fecal-oral transmission in SARS. The epidemiology of SARS-CoV infection is not completely understood, and therefore recommended infection-control and prevention measures to contain the spread of SARS will evolve as new information becomes available. 1159 At present, there is no indication that established strategies for cleaning (i.e., to remove the majority of bioburden) and disinfecting equipment and environmental surfaces need to be changed for the environmental infection control of SARS. In-patient rooms housing SARS patients should be cleaned and disinfected at least daily and at the time of patient transfer or discharge. More frequent cleaning and disinfection may be indicated for high-touch surfaces and following aerosol-producing procedures (e.g., intubation, bronchoscopy, and sputum production). Although no disinfectant products are presently registered by EPA specifically for inactivation of SARS-CoV, EPA-registered hospital disinfectants that are equivalent to low- and intermediate-level germicides may be used on pre-cleaned, hard, non-porous surfaces in accordance with the manufacturer's instructions for environmental surface disinfection. Monitoring adherence to guidelines established for cleaning and disinfection is an important component of environmental infection control to contain the spread of SARS.

# e. Creutzfeldt-Jakob Disease (CJD) in Patient-Care Areas

Creutzfeldt-Jakob disease (CJD) is a rare, invariably fatal, transmissible spongiform encephalopathy (TSE) that occurs worldwide with an average annual incidence of 1 case per million population. CJD is one of several TSEs affecting humans; other diseases in this group include kuru, fatal familial insomnia, and Gerstmann-Sträussler-Scheinker syndrome. A TSE that affects a younger population (compared with the age range of CJD cases) has been described primarily in the United Kingdom since 1996. 1163 This variant form of CJD (vCJD) is clinically and neuropathologically distinguishable from classic CJD; epidemiologic and laboratory evidence suggests a causal association between bovine spongiform encephalopathy (BSE) and vCJD. The agent associated with CJD is a prion, which is an abnormal isoform of a normal protein constituent of the central nervous system. The mechanism by which the normal form of the protein is converted to the abnormal, disease-causing prion is unknown.
The tertiary conformation of the abnormal prion protein appears to confer a heightened degree of resistance to conventional methods of sterilization and disinfection. 1170,1171 Although about 90% of CJD cases occur sporadically, a limited number of cases result from direct exposure to prion-containing material (usually central nervous system tissue or pituitary hormones) acquired as a result of health care (iatrogenic cases). These cases have been linked to a. pituitary hormone therapy, 1170-1174 b. transplants of either dura mater or corneas, and c. neurosurgical instruments and depth electrodes. In the cases involving instruments and depth electrodes, conventional cleaning and terminal reprocessing methods of the day failed to fully inactivate the contaminating prions and are considered inadequate by today's standards. Prion-inactivation studies involving whole tissues and tissue homogenates have been conducted to determine the parameters of physical and chemical methods of sterilization or disinfection necessary for complete inactivation; 1170,1186-1191 however, the application of these findings to environmental infection control in health-care settings is problematic. No studies have evaluated the effectiveness of medical instrument reprocessing in inactivating prions. Despite a consensus that abnormal prions display some extreme measure of resistance to inactivation by either physical or chemical methods, scientists disagree about the exact conditions needed for sterilization. Inactivation studies utilizing whole tissues present extraordinary challenges to any sterilizing method. 1192 Additionally, the experimental designs of these studies preclude the evaluation of surface cleaning as a part of the total approach to pathogen inactivation. 951,1192 Some researchers have recommended the use of either a 1:2 v/v dilution of sodium hypochlorite (approximately 20,000 ppm), full-strength sodium hypochlorite (50,000-60,000 ppm), or 1-2 N sodium hydroxide (NaOH) for the inactivation of prions on certain surfaces (e.g., those found in the pathology laboratory). 1170,1188 Although these chemicals may be appropriate for the decontamination of laboratory, operating-room, or autopsy-room surfaces that come into contact with central nervous system tissue from a known or suspected patient, this approach is not indicated for routine or terminal cleaning of a room previously occupied by a CJD patient. Both chemicals pose hazards for the health-care worker doing the decontamination. NaOH is caustic and should not make contact with the skin. Sodium hypochlorite solutions (i.e., chlorine bleach) can corrode metals (e.g., aluminum). MSDS information should be consulted when attempting to work with concentrated solutions of either chemical. Currently, no EPA-registered products have label claims for prion inactivation; therefore, this guidance is based on the best available evidence from the scientific literature. Environmental infection-control strategies must be based on the principles of the "chain of infection," regardless of the disease of concern. 13 Although CJD is transmissible, it is not highly contagious. All iatrogenic cases of CJD have been linked to a direct exposure to prion-contaminated central nervous system tissue or pituitary hormones. The six documented iatrogenic cases associated with instruments and devices involved neurosurgical instruments and devices that introduced residual contamination directly to the recipient's brain.
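As a rough arithmetic check of the preparations discussed above, the sketch below computes the mass of NaOH corresponding to 1-2 N solutions (NaOH is monoprotic, so normality equals molarity at a molar mass of approximately 40 g/mol) and the available chlorine of a 1:2 bleach dilution under two common readings of that ratio. The ~5.25% bleach stock is an assumption; the two readings bracket the approximately 20,000 ppm figure cited above, and actual values depend on the stock concentration on the product label.

```python
# Illustrative arithmetic for the prion-decontamination chemicals discussed above.
# Assumes a generic ~5.25% household bleach stock (about 52,500 ppm); products vary.

NAOH_MOLAR_MASS_G_PER_MOL = 40.0  # NaOH is monoprotic, so 1 N == 1 mol/L
BLEACH_STOCK_PPM = 52_500         # assumed stock concentration

def naoh_grams_per_liter(normality: float) -> float:
    """Mass of NaOH required per liter of solution for a given normality."""
    return normality * NAOH_MOLAR_MASS_G_PER_MOL

def bleach_ppm(parts_bleach: float, parts_water: float) -> float:
    """Available chlorine of a bleach:water mixture, by volume."""
    return BLEACH_STOCK_PPM * parts_bleach / (parts_bleach + parts_water)

print(f"1 N NaOH: {naoh_grams_per_liter(1):.0f} g/L; 2 N NaOH: {naoh_grams_per_liter(2):.0f} g/L")
print(f"1 part bleach in 2 parts total: ~{bleach_ppm(1, 1):,.0f} ppm")  # ~26,250 ppm
print(f"1 part bleach + 2 parts water:  ~{bleach_ppm(1, 2):,.0f} ppm")  # ~17,500 ppm
```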
No evidence suggests that vCJD has been transmitted iatrogenically or that either CJD or vCJD has been transmitted from environmental surfaces (e.g., housekeeping surfaces). Therefore, routine procedures are adequate for terminal cleaning and disinfection of a CJD patient's room. Additionally, in epidemiologic studies involving highly transfused patients, blood was not identified as a source for prion transmission. Routine procedures for containing, decontaminating, and disinfecting surfaces with blood spills should be adequate for proper infection control in these situations. 951,1199 Guidance for environmental infection control in ORs and autopsy areas has been published. 1197,1199 Hospitals should develop risk-assessment procedures to identify patients with known or suspected CJD in efforts to implement prion-specific infection-control measures for the OR and for instrument reprocessing. 1200 This assessment also should be conducted for older patients undergoing non-lesionous neurosurgery when such procedures are being done for diagnosis. Disposable, impermeable coverings should be used during these autopsies and neurosurgeries to minimize surface contamination. Surfaces that have become contaminated with central nervous system tissue or cerebral spinal fluid should be cleaned and decontaminated by a. removing most of the tissue or body substance with absorbent materials, b. wetting the surface with a sodium hypochlorite solution containing ≥5,000 ppm or a 1 N NaOH solution, and c. rinsing thoroughly. 951,1201 The optimum duration of contact exposure in these instances is unclear. Some researchers recommend a 1-hour contact time on the basis of tissue-inactivation studies, 1197,1198,1201 whereas other reviewers of the subject draw no conclusions from this research. 1199 Factors to consider before cleaning a potentially contaminated surface are a. the degree to which gross tissue/body substance contamination can be effectively removed and b. the ease with which the surface can be cleaned.

# F. Environmental Sampling

This portion of Part I addresses the basic principles and methods of sampling environmental surfaces and other environmental sources for microorganisms. The applied strategies of sampling with respect to environmental infection control have been discussed in the appropriate preceding subsections.

# General Principles: Microbiologic Sampling of the Environment

Before 1970, U.S. hospitals conducted regularly scheduled culturing of the air and environmental surfaces (e.g., floors, walls, and table tops). 1202 By 1970, CDC and the American Hospital Association (AHA) were advocating the discontinuation of routine environmental culturing because rates of health-care-associated infection had not been associated with levels of general microbial contamination of air or environmental surfaces, and because meaningful standards for permissible levels of microbial contamination of environmental surfaces or air did not exist. 1203-1205 During 1970, 25% of U.S. hospitals reduced the extent of such routine environmental culturing, a trend that has continued. 1206,1207 Random, undirected sampling (referred to as "routine" in previous guidelines) differs from the current practice of targeted sampling for defined purposes. 2,1204 Previous recommendations against routine sampling were not intended to discourage the use of sampling in which sample collection, culture, and interpretation are conducted in accordance with defined protocols.
2 In this guideline, targeted microbiologic sampling connotes a monitoring process that includes a. a written, defined, multidisciplinary protocol for sample collection and culturing; b. analysis and interpretation of results using scientifically determined or anticipatory baseline values for comparison; and c. expected actions based on the results obtained. Infection-control personnel, in conjunction with laboratorians, should assess the health-care facility's capability to conduct sampling and determine when expert consultation and/or services are needed. Microbiologic sampling of air, water, and inanimate surfaces (i.e., environmental sampling) is an expensive and time-consuming process that is complicated by many variables in protocol, analysis, and interpretation. It is therefore indicated for only four situations. 1208 The first is to support an investigation of an outbreak of disease or infections when environmental reservoirs or fomites are implicated epidemiologically in disease transmission. 161,1209,1210 It is important that such culturing be supported by epidemiologic data. Environmental sampling, as with all laboratory testing, should not be conducted if there is no plan for interpreting and acting on the results obtained. 11,1211,1212 Linking microorganisms from environmental samples with clinical isolates by molecular epidemiology is crucial whenever it is possible to do so. The second situation for which environmental sampling may be warranted is research. Well-designed and controlled experimental methods and approaches can provide new information about the spread of health-care associated diseases. 126,129 A classic example is the study of environmental microbial contamination that compared health-care associated infection rates in an old hospital and a new facility before and shortly after occupancy. 947 The third indication for sampling is to monitor a potentially hazardous environmental condition, confirm the presence of a hazardous chemical or biological agent, and validate the successful abatement of the hazard. This type of sampling can be used to: a. detect bioaerosols released from the operation of health-care equipment (e.g., an ultrasonic cleaner) and determine the success of repairs in containing the hazard, 1213 b. detect the release of an agent of bioterrorism in an indoor environmental setting and determine its successful removal or inactivation, and c. sample for industrial hygiene or safety purposes (e.g., monitoring a "sick building"). The fourth indication is quality assurance, to evaluate the effects of a change in infection-control practice or to ensure that equipment or systems perform according to specifications and expected outcomes. Any sampling for quality-assurance purposes must follow sound sampling protocols and address confounding factors through the use of properly selected controls. Results from a single environmental sample are difficult to interpret in the absence of a frame of reference or perspective. Evaluations of a change in infection-control practice are based on the assumption that the effect will be measured over a finite period, usually of short duration. Conducting quality-assurance sampling on an extended basis, especially in the absence of an adverse outcome, is usually unjustified. A possible exception might be the use of air sampling during major construction periods to qualitatively detect breaks in environmental infection-control measures.
In one study, which began as part of an investigation of an outbreak of health-care associated aspergillosis, airborne concentrations of Aspergillus spores were measured in efforts to evaluate the effectiveness of sealing hospital doors and windows during a period of construction of a nearby building. 50 Other examples of sampling for quality-assurance purposes may include commissioning newly constructed space in special care areas (i.e., ORs and units for immunosuppressed patients) or assessing a change in housekeeping practice. However, the only types of routine environmental microbiologic sampling recommended as part of a quality-assurance program are a. the biological monitoring of sterilization processes by using bacterial spores 1214 and b. the monthly culturing of water used in hemodialysis applications and for the final dialysate use dilution. Some experts also advocate periodic environmental sampling to evaluate the microbial/particulate quality of air as part of regular maintenance of the air-handling system (e.g., filters) and to verify that the components of the system meet manufacturer's specifications (A. Streifel, University of Minnesota, 2000). Certain equipment in health-care settings (e.g., biological safety cabinets) may also be monitored with airflow and particulate sampling to determine performance or as part of adherence to a certification program; results can then be compared with a predetermined standard of performance. These measurements, however, usually do not require microbiologic testing.

# Air Sampling

Biological contaminants occur in the air as aerosols and may include bacteria, fungi, viruses, and pollens. 1215,1216 Aerosols are characterized as solid or liquid particles suspended in air. Talking for 5 minutes and coughing can each produce 3,000 droplet nuclei; sneezing can generate approximately 40,000 droplets, which then evaporate to particles in the size range of 0.5-12 μm. 137,1217 Particles in a biological aerosol usually vary in size from <1 μm to ≥50 μm. These particles may consist of a single, unattached organism or may occur in the form of clumps composed of a number of bacteria. Clumps can also include dust and dried organic or inorganic material. Vegetative forms of bacterial cells and viruses may be present in the air in lesser numbers than bacterial or fungal spores. Factors that determine the survival of microorganisms within a bioaerosol include a. the suspending medium, b. temperature, c. relative humidity, d. oxygen sensitivity, and e. exposure to UV or electromagnetic radiation. 1215 Many vegetative cells will not survive for lengthy periods of time in the air unless protected by a covering of dried organic or inorganic matter. 1216 Pathogens that resist drying (e.g., Staphylococcus spp., Streptococcus spp., and fungal spores) can survive for long periods, can be carried considerable distances via air, and still remain viable. They may also settle on surfaces and become airborne again as secondary aerosols during certain activities (e.g., sweeping and bed making). 1216,1218 Microbiologic air sampling is used as needed to determine the numbers and types of microorganisms, or particulates, in indoor air. 289 Air sampling for quality control is, however, problematic because of the lack of uniform air-quality standards. Although airborne spores of Aspergillus spp. can pose a risk for neutropenic patients, the critical number (i.e., action level) of these spores above which outbreaks of aspergillosis would be expected to occur has not been defined.
Health-care professionals considering the use of air sampling should keep in mind that the results represent indoor air quality at singular points in time and may be affected by a variety of factors, including a. indoor traffic, b. visitors entering the facility, c. temperature, d. time of day or year, e. relative humidity, f. relative concentration of particles or organisms, and g. the performance of the air-handling system components. To be meaningful, air-sampling results must be compared with those obtained from other defined areas, conditions, or time periods. Several preliminary concerns must be addressed when designing a microbiologic air sampling strategy (Box 13). Because the amount of particulate material and bacteria retained in the respiratory system is largely dependent on the size of the inhaled particles, particle size should be determined when studying airborne microorganisms and their relation to respiratory infections. Particles >5 μm are efficiently trapped in the upper respiratory tract and are removed primarily by ciliary action. 1219 Particles ≤5 μm in diameter reach the lung, but the greatest retention in the alveoli is of particles 1-2 μm in diameter.

# Box 13. Preliminary concerns for conducting air sampling

- Consider the possible characteristics and conditions of the aerosol, including size range of particles, relative amount of inert material, concentration of microorganisms, and environmental factors.
- Determine the type of sampling instruments, sampling time, and duration of the sampling program.
- Determine the number of samples to be taken.
- Ensure that adequate equipment and supplies are available.
- Determine the method of assay that will ensure optimal recovery of microorganisms.
- Select a laboratory that will provide proper microbiologic support.
- Ensure that samples can be refrigerated if they cannot be assayed in the laboratory promptly.

Bacteria, fungi, and particulates in air can be identified and quantified with the same methods and equipment (Table 23). The basic methods include a. impingement in liquids, b. impaction on solid surfaces, c. sedimentation, d. filtration, e. centrifugation, f. electrostatic precipitation, and g. thermal precipitation. 1218 Of these, impingement in liquids, impaction on solid surfaces, and sedimentation (on settle plates) have been used for various air-sampling purposes in health-care settings. 289 Several instruments are available for sampling airborne bacteria and fungi (Box 14). Some of the samplers are self-contained units requiring only a power supply and the appropriate collecting medium, but most require additional auxiliary equipment (e.g., a vacuum pump and an airflow measuring device). Sedimentation or depositional methods use settle plates and therefore need no special instruments or equipment. Selection of an instrument for air sampling requires a clear understanding of the type of information desired and the particular determinations that must be made (Box 14). Information may be needed regarding a. one particular organism or all organisms that may be present in the air, b. the concentration of viable particles or of viable organisms, c. the change in concentration with time, and d. the size distribution of the collected particles. Before sampling begins, decisions should be made regarding whether the results are to be qualitative or quantitative. Comparing quantities of airborne microorganisms to those of outdoor air is also standard operating procedure.
Infection-control professionals, hospital epidemiologists, industrial hygienists, and laboratory supervisors, as part of a multidisciplinary team, should discuss the potential need for microbial air sampling to determine whether the capacity and expertise to conduct such sampling exist within the facility and when it is appropriate to enlist the services of an environmental microbiologist consultant.

# Box 14. Selecting an air sampling device*

The following factors must be considered when choosing an air sampling instrument:
- Viability and type of the organism to be sampled
- Compatibility with the selected method of analysis
- Sensitivity of particles to sampling
- Assumed concentrations and particle size
- Whether airborne clumps must be broken (i.e., total viable organism count vs. particle count)
- Volume of air to be sampled and length of time sampler is to be continuously operated

Liquid impinger and solid impactor samplers are the most practical for sampling bacteria, particles, and fungal spores, because they can sample large volumes of air in relatively short periods of time. 289 Solid impactor units are available as either "slit" or "sieve" designs. Slit impactors use a rotating disc as support for the collecting surface, which allows determinations of concentration over time. Sieve impactors commonly use stages with calibrated holes of different diameters. Some impactor-type samplers use centrifugal force to impact particles onto agar surfaces. The interior of either device must be made sterile to avoid inadvertent contamination from the sampler. Results obtained from either sampling device can be expressed as organisms or particles per unit volume of air (CFU/m³). Sampling for bacteria requires special attention, because bacteria may be present as individual organisms, as clumps, or mixed with or adhering to dust or covered with a protective coating of dried organic or inorganic substances. Reports of bacterial concentrations determined by air sampling therefore must indicate whether the results represent individual organisms or particles bearing multiple cells. Certain types of samplers (e.g., liquid impingers) will completely or partially disintegrate clumps and large particles; the sampling result will therefore reflect the total number of individual organisms present in the air. The task of sizing a bioaerosol is simplified through the use of sieve or slit impactors, because these samplers will separate the particles and microorganisms into size ranges as the sample is collected. These samplers must, however, be calibrated first by sampling aerosols under similar use conditions. 1225 The use of settle plates (i.e., the sedimentation or depositional method) is not recommended when sampling air for fungal spores, because single spores can remain suspended in air indefinitely. 289 Settle plates have been used mainly to sample for particulates and bacteria either in research studies or during epidemiologic investigations. 161 Results of sedimentation sampling are typically expressed as numbers of viable particles or viable bacteria per unit area per the duration of sampling time (i.e., CFU/area/time); this method cannot quantify the volume of air sampled.
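The expressions of results described above follow directly from the volume of air sampled or, for settle plates, from the exposure area and time. The sketch below is a minimal Python illustration using hypothetical colony counts; the 28.3 L/min flow rate is an assumption chosen because it is typical of common sieve impactors, and the ~64 cm² plate area corresponds to a standard 90-mm plate.

```python
# Minimal sketch: expressing air-sampling results (hypothetical counts).

def volumetric_cfu_per_m3(colonies: int, flow_l_per_min: float, minutes: float) -> float:
    """Impinger/impactor result: colonies per cubic meter of air sampled."""
    volume_m3 = flow_l_per_min * minutes / 1000.0
    return colonies / volume_m3

def settle_plate_rate(colonies: int, plate_area_cm2: float, hours: float) -> float:
    """Sedimentation result: CFU per cm^2 per hour; no air volume can be assigned."""
    return colonies / (plate_area_cm2 * hours)

# e.g., 30 colonies at an assumed 28.3 L/min for 10 minutes:
print(f"{volumetric_cfu_per_m3(30, 28.3, 10):.0f} CFU/m^3")   # ~106 CFU/m^3
# e.g., 12 colonies on a ~64 cm^2 plate exposed for 4 hours:
print(f"{settle_plate_rate(12, 64, 4):.3f} CFU/cm^2/hr")      # ~0.047 CFU/cm^2/hr
```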
Because the survival of microorganisms during air sampling is inversely proportional to the velocity at which the air is taken into the sampler, 1215 one advantage of using a settle plate is its reliance on gravity to bring organisms and particles into contact with its surface, thus enhancing the potential for optimal survival of collected organisms. This process, however, takes several hours to complete and may be impractical for some situations. Air samplers are designed to meet differing measurement requirements. Some samplers are better suited for one form of measurement than others. No one type of sampler and assay procedure can be used to collect and enumerate 100% of airborne organisms. The sampler and/or sampling method chosen should, however, have an adequate sampling rate to collect a sufficient number of particles in a reasonable time period so that a representative sample of air is obtained for biological analysis. Newer analytical techniques for assaying air samples include PCR methods and enzyme-linked immunosorbent assays (ELISAs).

# Water Sampling

A detailed discussion of the principles and practices of water sampling has been published. 945 Water sampling in health-care settings is used to detect waterborne pathogens of clinical significance or to determine the quality of finished water in a facility's distribution system. Routine testing of the water in a health-care facility is usually not indicated, but sampling in support of outbreak investigations can help determine appropriate infection-control measures. Water-quality assessments in dialysis settings have been discussed in this guideline (see Water, Dialysis Water Quality and Dialysate, and Appendix C). Health-care facilities that conduct water sampling should have their samples assayed in a laboratory that uses established methods and quality-assurance protocols. Water specimens are not "static specimens" at ambient temperature; potential changes in both numbers and types of microbial populations can occur during transport. Consequently, water samples should be sent to the testing laboratory cold (i.e., at approximately 39.2°F [4°C]), and testing should be done as soon as practical after collection (preferably within 24 hours). Because most water sampling in health-care facilities involves the testing of finished water from the facility's distribution system, a reducing agent (i.e., sodium thiosulfate) needs to be added to neutralize residual chlorine or other halogen in the collected sample. If the water contains elevated levels of heavy metals, then a chelating agent should be added to the specimen. The minimum volume of water to be collected should be sufficient to complete any and all assays indicated; 100 mL is considered a suitable minimum volume. Sterile collection equipment should always be used. Sampling from a tap requires flushing of the water line before sample collection. If the tap is a mixing faucet, attachments (e.g., screens and aerators) must be removed, and hot and then cold water must be run through the tap before collecting the sample. 945 If the cleanliness of the tap is questionable, disinfection with 500-600 ppm sodium hypochlorite (a 1:100 v/v dilution of chlorine bleach) and flushing of the tap should precede sample collection. Microorganisms in finished or treated water often are physically damaged ("stressed") to the point that growth is limited when assayed under standard conditions. Such situations lead to false-negative readings and misleading assessments of water quality.
Appropriate neutralization of halogens and chelation of heavy metals are crucial to the recovery of these organisms. The choice of recovery media and incubation conditions will also affect the assay. Incubation temperatures should be closer to the ambient temperature of the water than to 98.6°F (37°C), and recovery media should be formulated to provide appropriate concentrations of nutrients to support organisms exhibiting less than rigorous growth. 945 High-nutrient content media (e.g., blood agar and tryptic soy agar [TSA]) may actually inhibit the growth of these damaged organisms. Reduced-nutrient media (e.g., diluted peptone and R2A) are preferable for recovery of these organisms. 945 Use of aerobic, heterotrophic plate counts allows both a qualitative and quantitative measurement of water quality. If bacterial counts in water are expected to be high (e.g., during waterborne outbreak investigations), assaying small quantities using pour plates or spread plates is appropriate. 945 Membrane filtration is used when low-count specimens are expected and larger sampling volumes are required (≥100 mL). The sample is filtered through the membrane, and the filter is applied directly, face up, onto the surface of the agar plate and incubated. Unlike the testing of potable water supplies for coliforms (which uses standardized test and specimen-collection parameters and conditions), water sampling to support epidemiologic investigations of disease outbreaks may be subject to modifications dictated by the circumstances present in the facility. Assay methods for waterborne pathogens may also not be standardized. Therefore, control or comparison samples should be included in the experimental design. Any departure from a standard method should be fully documented and should be considered when interpreting results and developing strategies. Assay methods specific for clinically significant waterborne pathogens (e.g., Legionella spp., Aeromonas spp., Pseudomonas spp., and Acinetobacter spp.) are more complicated and costly than methods used to detect coliforms and other standard indicators of water quality.

# Environmental Surface Sampling

Routine environmental-surface sampling (e.g., surveillance cultures) in health-care settings is neither cost-effective nor warranted. 951,1225 When indicated, surface sampling should be conducted with multidisciplinary approval in adherence to carefully considered plans of action and policy (Box 15).

# Box 15. Undertaking environmental-surface sampling*

The following factors should be considered before engaging in environmental-surface sampling:
- Background information from the literature and present activities (i.e., preliminary results from an epidemiologic investigation)
- Location of surfaces to be sampled

Surface sampling is used currently for research, as part of an epidemiologic investigation, or as part of a comprehensive approach for specific quality-assurance purposes. As a research tool, surface sampling has been used to determine a. potential environmental reservoirs of pathogens, 564 b. survival of microorganisms on surfaces, 1232,1233 and c. the sources of environmental contamination. 1023 Some or all of these approaches can also be used during outbreak investigations. 1232 Discussion of surface sampling of medical devices and instruments is beyond the scope of this document and is deferred to future guidelines on sterilization and disinfection issues.
Meaningful results depend on the selection of appropriate sampling and assay techniques. 1214 The media, reagents, and equipment required for surface sampling are available from any well-equipped microbiology laboratory and laboratory supplier. For quantitative assessment of surface organisms, nonselective, nutrient-rich agar media and broth (e.g., TSA and brain-heart infusion broth with or without 5% sheep or rabbit blood supplement) are used for the recovery of aerobic bacteria. Broth media are used with membrane-filtration techniques. Further sample work-up may require the use of selective media for the isolation and enumeration of specific groups of microorganisms. Examples of selective media are MacConkey agar (MAC), cetrimide agar (selects for Pseudomonas aeruginosa), or Sabouraud dextrose- and malt extract agars and broths (select for fungi). Qualitative determinations of organisms from surfaces require only the use of selective or non-selective broth media. Effective sampling of surfaces requires moisture, either already present on the surface to be sampled or supplied via moistened swabs, sponges, wipes, agar surfaces, or membrane filters. 1214 Dilution fluids and rinse fluids include various buffers or general-purpose broth media (Table 24). If disinfectant residuals are expected on surfaces being sampled, specific neutralizer chemicals should be used in both the growth media and the dilution or rinse fluids. Lists of the neutralizers, the target disinfectant active ingredients, and the use concentrations have been published. 1214,1237 Alternatively, instead of adding neutralizing chemicals to existing culture media (or if the chemical nature of the disinfectant residuals is unknown), the use of either a. commercially available media including a variety of specific and nonspecific neutralizers or b. double-strength broth media will facilitate optimal recovery of microorganisms. Appropriate control specimens should be included to rule out both residual antimicrobial activity from surface disinfectants and potential toxicity caused by the presence of neutralizer chemicals carried over into the assay system. 1214 Several methods can be used for collecting environmental surface samples (Table 25). Specific step-by-step discussions of each of the methods have been published. 1214,1239 For best results, all methods should incorporate aseptic techniques, sterile equipment, and sterile recovery media. Sample/rinse methods are frequently chosen because of their versatility. However, these sampling methods are the most prone to errors caused by manipulation of the swab, gauze pad, or sponge. 1238 Additionally, no microbiocidal or microbiostatic agents should be present in any of these items when used for sampling. 1238 Each of the rinse methods requires effective elution of microorganisms from the item used to sample the surface. Thorough mixing of the rinse fluids after elution (e.g., via manual or mechanical mixing using a vortex mixer, shaking with or without glass beads, and ultrasonic bath) will help to remove and suspend material from the sampling device and break up clumps of organisms for a more accurate count. 1238 In some instances, the item used to sample the surface (e.g., gauze pad and sponge) may be immersed in the rinse fluids in a sterile bag and subjected to stomaching. 1238 This technique, however, is suitable only for soft or absorbent items that will not puncture the bag during the elution process.
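To illustrate the quantitative side of the sample/rinse methods described above, the sketch below converts plate counts from an eluted rinse into a surface concentration. The function name, record layout, and the specific numbers are hypothetical; the calculation simply chains colony count, plated volume, dilution factor, rinse volume, and sampled area.

```python
def surface_cfu_per_cm2(colonies: int,
                        plated_volume_ml: float,
                        dilution_factor: float,
                        rinse_volume_ml: float,
                        area_cm2: float) -> float:
    """Estimate CFU/cm^2 from a swab/gauze/sponge rinse assay.

    colonies         -- colonies counted on the plate
    plated_volume_ml -- volume of (diluted) rinse fluid plated
    dilution_factor  -- e.g., 10.0 for a 1:10 dilution of the rinse fluid
    rinse_volume_ml  -- total elution/rinse fluid volume
    area_cm2         -- surface area delimited by the sampling template
    """
    cfu_per_ml = (colonies / plated_volume_ml) * dilution_factor
    total_cfu = cfu_per_ml * rinse_volume_ml
    return total_cfu / area_cm2

# Hypothetical example: 42 colonies from 0.1 mL of a 1:10 dilution,
# eluted into 10 mL of rinse fluid from a 25 cm^2 template area.
print(surface_cfu_per_cm2(42, 0.1, 10.0, 10.0, 25.0))  # -> 1680.0 CFU/cm^2
```

As the text notes, a properly collected control sample run through the same arithmetic helps distinguish true surface contamination from contamination introduced during handling.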
If sampling is conducted as part of an epidemiologic investigation of a disease outbreak, identification of isolates to species level is mandatory, and characterization beyond the species level is preferred. 1214 When interpreting the results of the sampling, the expected degree of microbial contamination associated with the various categories of surfaces in the Spaulding classification must be considered. Environmental surfaces should be visibly clean; recognized pathogens in numbers sufficient to result in secondary transfer to other animate or inanimate surfaces should be absent from the surface being sampled. 1214 Although the interpretation of a sample with positive microbial growth is self-evident, an environmental surface sample, especially one obtained from housekeeping surfaces, that shows no growth does not represent a "sterile" surface. Sensitivities of the sampling and assay methods (i.e., level of detection) must be taken into account when no-growth samples are encountered. Properly collected control samples will help rule out extraneous contamination of the surface sample.

# G. Laundry and Bedding

# General Information

Laundry in a health-care facility may include bed sheets and blankets, towels, personal clothing, patient apparel, uniforms, scrub suits, gowns, and drapes for surgical procedures. 1245 Although contaminated textiles and fabrics in health-care facilities can be a source of substantial numbers of pathogenic microorganisms, reports of health-care associated diseases linked to contaminated fabrics are so few in number that the overall risk of disease transmission during the laundry process likely is negligible. When the incidence of such events is evaluated in the context of the volume of items laundered in health-care settings (estimated to be 5 billion pounds annually in the United States), 1246 existing control measures (e.g., standard precautions) are effective in reducing the risk of disease transmission to patients and staff. Therefore, use of current control measures should be continued to minimize the contribution of contaminated laundry to the incidence of health-care associated infections. The control measures described in this section of the guideline are based on principles of hygiene, common sense, and consensus guidance; they pertain to laundry services utilized by health-care facilities, either in-house or contract, rather than to laundry done in the home.

# Epidemiology and General Aspects of Infection Control

Contaminated textiles and fabrics often contain high numbers of microorganisms from body substances, including blood, skin, stool, urine, vomitus, and other body tissues and fluids. When textiles are heavily contaminated with potentially infective body substances, they can contain bacterial loads of 10⁶-10⁸ CFU/100 cm² of fabric. 1247 Disease transmission attributed to health-care laundry has involved contaminated fabrics that were handled inappropriately (i.e., the shaking of soiled linens). Bacteria (Salmonella spp., Bacillus cereus), viruses (hepatitis B virus), fungi (Microsporum canis), and ectoparasites (scabies) presumably have been transmitted from contaminated textiles and fabrics to workers via a. direct contact or b. aerosols of contaminated lint generated from sorting and handling contaminated textiles. In these events, however, investigations could not rule out the possibility that some of these reported infections were acquired from community sources.
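To put the bacterial loads cited above in perspective, a brief worked calculation may help; the sheet area used here is an assumption for illustration only:

```latex
% A flat sheet of roughly 2 m^2 comprises 200 units of 100 cm^2:
\[
2\ \text{m}^2 = 20{,}000\ \text{cm}^2 = 200 \times (100\ \text{cm}^2)
\]
% At 10^6--10^8 CFU per 100 cm^2, the total burden on such a heavily
% soiled item would be on the order of
\[
200 \times 10^{6} = 2\times10^{8}
\quad\text{to}\quad
200 \times 10^{8} = 2\times10^{10}\ \text{CFU}
\]
```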
Through a combination of soil removal, pathogen removal, and pathogen inactivation, contaminated laundry can be rendered hygienically clean. Hygienically clean laundry carries negligible risk to health-care workers and patients, provided that the clean textiles, fabric, and clothing are not inadvertently contaminated before use. OSHA defines contaminated laundry as "laundry which has been soiled with blood or other potentially infectious materials or may contain sharps." 967 The purpose of the laundry portion of the standard is to protect the worker from exposure to potentially infectious materials during collection, handling, and sorting of contaminated textiles through the use of personal protective equipment, proper work practices, containment, labeling, hazard communication, and ergonomics. Experts are divided regarding the practice of transporting clothes worn at the workplace to the health-care worker's home for laundering. Although OSHA regulations prohibit home laundering of items that are considered personal protective apparel or equipment (e.g., laboratory coats), 967 experts disagree about whether this regulation extends to uniforms and scrub suits that are not contaminated with blood or other potentially infectious material. Health-care facility policies on this matter vary and may be inconsistent with recommendations of professional organizations. 1253,1254 Uniforms without blood or body substance contamination presumably do not differ appreciably from street clothes in the degree and microbial nature of soilage. Home laundering would be expected to remove this level of soil adequately. However, if health-care facilities require the use of uniforms, they should either make provisions to launder them or provide information to the employee regarding infection control and cleaning guidelines for the item based on the tasks being performed at the facility. Health-care facilities should address the need to provide this service and should determine the frequency for laundering these items. In a recent study examining the microbial contamination of medical students' white coats, the students perceived the coats as "clean" as long as the garments were not visibly contaminated with body substances, even after wearing the coats for several weeks. 1255 The heaviest bacterial load was found on the sleeves and the pockets of these garments; the organisms most frequently isolated were Staphylococcus aureus, diphtheroids, and Acinetobacter spp. 1255 Presumably, the sleeves of the coat may make contact with a patient and potentially serve to transfer environmentally stable microorganisms among patients. In this study, however, surveillance was not conducted among patients to detect new infections or colonizations. The students did, however, report that they would likely replace their coats more frequently and regularly if clean coats were provided. 1255 Apart from this study, which documents the presence of pathogenic bacteria on health-care facility clothing, reports of infections attributed either to contact with such apparel or to home laundering have been rare. 1256,1257 Laundry services for health-care facilities are provided either in-house (i.e., by an on-premise laundry), by co-operatives (i.e., entities owned and operated by a group of facilities), or by off-site commercial laundries. In the latter case, the textiles may be owned by the health-care facility, in which case the processor is paid for laundering only.
Alternatively, the textiles may be owned by the processor, who is paid a "rental" fee for every piece laundered. The laundry facility in a health-care setting should be designed for efficiency in providing hygienically clean textiles, fabrics, and apparel for patients and staff. Guidelines for laundry construction and operation for health-care facilities, including nursing facilities, have been published. 120,1258 The design and engineering standards for existing facilities are those cited in the AIA edition in effect during the time of the facility's construction. 120 A laundry facility is usually partitioned into two separate areas: a "dirty" area for receiving and handling the soiled laundry and a "clean" area for processing the washed items. 1259 To minimize the potential for recontaminating cleaned laundry with aerosolized contaminated lint, areas receiving contaminated textiles should be at negative air pressure relative to the clean areas. Laundry areas should have handwashing facilities readily available to workers. Laundry workers should wear appropriate personal protective equipment (e.g., gloves and protective garments) while sorting soiled fabrics and textiles. 967 Laundry equipment should be used and maintained according to the manufacturer's instructions to prevent microbial contamination of the system. 1250,1263 Damp textiles should not be left in machines overnight. 1250

# Collecting, Transporting, and Sorting Contaminated Textiles and Fabrics

The laundry process starts with the removal of used or contaminated textiles, fabrics, and/or clothing from the areas where such contamination occurred, including but not limited to patients' rooms, surgical/operating areas, and laboratories. Handling contaminated laundry with a minimum of agitation can help prevent the generation of potentially contaminated lint aerosols in patient-care areas. 967,1259 Sorting or rinsing contaminated laundry at the location where contamination occurred is prohibited by OSHA. 967 Contaminated textiles and fabrics are placed into bags or other appropriate containment at this location; these bags are then securely tied or otherwise closed to prevent leakage. 967 Single bags of sufficient tensile strength are adequate for containing laundry, but leak-resistant containment is needed if the laundry is wet and capable of soaking through a cloth bag. 1264 Bags containing contaminated laundry must be clearly identified with labels, color-coding, or other methods so that health-care workers handle these items safely, regardless of whether the laundry is transported within the facility or destined for transport to an off-site laundry service. 967 Typically, contaminated laundry originating in isolation areas of the hospital is segregated and handled with special practices; however, few, if any, cases of health-care associated infection have been linked to this source. 1265 Single-blinded studies have demonstrated that laundry from isolation areas is no more heavily contaminated with microorganisms than laundry from elsewhere in the hospital. 1266 Therefore, adherence to standard precautions when handling contaminated laundry in isolation areas and minimizing agitation of the contaminated items are considered sufficient to prevent the dispersal of potentially infectious aerosols. 6 Contaminated textiles and fabrics in bags can be transported by cart or chute.
1258,1262 Laundry chutes require proper design, maintenance, and use, because the piston-like action of a laundry bag traveling in the chute can propel airborne microbial contaminants throughout the facility. Laundry chutes should be maintained under negative air pressure to prevent the spread of microorganisms from floor to floor. Loose, contaminated pieces of laundry should not be tossed into chutes, and laundry bags should be closed or otherwise secured to prevent the contents from falling out into the chute. 1270 Health-care facilities should determine the point in the laundry process at which textiles and fabrics should be sorted. Sorting after washing minimizes the exposure of laundry workers to infective material in soiled fabrics, reduces airborne microbial contamination in the laundry area, and helps to prevent potential percutaneous injuries to personnel. 1271 Sorting laundry before washing protects both the machinery and fabrics from hard objects (e.g., needles, syringes, and patients' property) and reduces the potential for recontamination of clean textiles. 1272 Sorting laundry before washing also allows for customization of laundry formulas based on the mix of products in the system and types of soils encountered. Additionally, if work flow allows, increasing the amount of segregation by specific product types will usually yield the greatest amount of work efficiency during inspection, folding, and pack-making operations. 1253 Protective apparel for the workers and appropriate ventilation can minimize these exposures. 967 Gloves used for the task of sorting laundry should be of sufficient thickness to minimize sharps injuries. 967 Employee safety personnel and industrial hygienists can help to determine the appropriate glove choice.

# Parameters of the Laundry Process

Fabrics, textiles, and clothing used in health-care settings are disinfected during laundering and generally rendered free of vegetative pathogens (i.e., hygienically clean), but they are not sterile. 1273 Laundering cycles consist of flush, main wash, bleaching, rinsing, and souring. 1274 Cleaned wet textiles, fabrics, and clothing are then dried, pressed as needed, and prepared (e.g., folded and packaged) for distribution back to the facility. Clean linens provided by an off-site laundry must be packaged prior to transport to prevent inadvertent contamination from dust and dirt during loading, delivery, and unloading. Functional packaging of laundry can be achieved in several ways, including a. placing clean linen in a hamper lined with a previously unused liner, which is then closed or covered; b. placing clean linen in a properly cleaned cart and covering the cart with disposable material or a properly cleaned reusable textile material that can be secured to the cart; and c. wrapping individual bundles of clean textiles in plastic or other suitable material and sealing or taping the bundles. The antimicrobial action of the laundering process results from a combination of mechanical, thermal, and chemical factors. 1271,1275,1276 Dilution and agitation in water remove substantial quantities of microorganisms. Soaps and detergents function to suspend soils and also exhibit some microbiocidal properties. Hot water provides an effective means of destroying microorganisms. 1277 A temperature of at least 160°F (71°C) for a minimum of 25 minutes is commonly recommended for hot-water washing. 2 Water of this temperature can be provided by steam jet or separate booster heater.
120 The use of chlorine bleach assures an extra margin of safety. 1278, 1279 A total available chlorine residual of 50-150 ppm is usually achieved during the bleach cycle. 1277 Chlorine bleach becomes activated at water temperatures of 135°F-145°F (57.2°C-62.7°C). The last of the series of rinse cycles is the addition of a mild acid (i.e., sour) to neutralize any alkalinity in the water supply, soap, or detergent. The rapid shift in pH from approximately 12 to 5 is an effective means to inactivate some microorganisms. 1247 Effective removal of residual alkali from fabrics is an important measure in reducing the risk for skin reactions among patients. Chlorine bleach is an economical, broad-spectrum chemical germicide that enhances the effectiveness of the laundering process. Chlorine bleach is not, however, an appropriate laundry additive for all fabrics. Traditionally, bleach was not recommended for laundering flame-retardant fabrics, linens, and clothing because its use diminished the flame-retardant properties of the treated fabric. 1273 However, some modern-day flame-retardant fabrics can now tolerate chlorine bleach. Flame-retardant fabrics, whether topically treated or inherently flame retardant, should be thoroughly rinsed during the rinse cycles, because detergent residues are capable of supporting combustion. Chlorine alternatives (e.g., activated oxygen-based laundry detergents) provide added benefits for fabric and color safety in addition to antimicrobial activity. Studies comparing the antimicrobial potencies of chlorine bleach and oxygen-based bleach are needed. Oxygen-based bleach and detergents used in health-care settings should be registered by EPA to ensure adequate disinfection of laundry. Health-care workers should note the cleaning instructions for textiles, fabrics, drapes, and clothing to identify special laundering requirements and appropriate hygienic cleaning options. 1278 Although hot-water washing is an effective laundry disinfection method, the cost can be substantial. Laundries are typically the largest users of hot water in hospitals. They consume 50%-75% of the total hot water, 1280 representing an average of 10%-15% of the energy used by a hospital. Several studies have demonstrated that lower water temperatures of 71°F-77°F (22°C-25°C) can reduce microbial contamination when the cycling of the washer, the wash detergent, and the amount of laundry additive are carefully monitored and controlled. 1247 Low-temperature laundry cycles rely heavily on the presence of chlorine- or oxygen-activated bleach to reduce the levels of microbial contamination. The selection of hot- or cold-water laundry cycles may be dictated by state health-care facility licensing standards or by other regulation. Regardless of whether hot or cold water is used for washing, the temperatures reached in drying and especially during ironing provide additional significant microbiocidal action. 1247 Dryer temperatures and cycle times are dictated by the materials in the fabrics. Man-made fibers (i.e., polyester and polyester blends) require shorter times and lower temperatures. After washing, cleaned and dried textiles, fabrics, and clothing are pressed, folded, and packaged for transport, distribution, and storage by methods that ensure their cleanliness until use. 2 State regulations and/or accrediting standards may dictate the procedures for this activity.
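The hot-water process parameters above (at least 160°F [71°C] for at least 25 minutes, a 50-150 ppm available chlorine residual during the bleach cycle, and a sour step that brings the pH down toward 5) lend themselves to a simple monitoring check. The sketch below is a minimal illustration, not a validated quality-assurance program: the record format and the pH alarm threshold are assumptions, and the other limits are taken directly from the text. For low-temperature formulas, the temperature criterion would instead be replaced by verification of the bleach and detergent chemistry, as the text notes.

```python
from dataclasses import dataclass

@dataclass
class WashCycleRecord:
    """One hot-water wash cycle's monitored parameters (hypothetical log format)."""
    water_temp_f: float   # main-wash water temperature (deg F)
    wash_minutes: float   # time held at that temperature
    chlorine_ppm: float   # total available chlorine residual in the bleach cycle
    final_ph: float       # pH after the sour (mild acid) rinse

def check_cycle(rec: WashCycleRecord) -> list[str]:
    """Flag departures from the hot-water laundering parameters cited in the text."""
    problems = []
    if rec.water_temp_f < 160.0:
        problems.append(f"wash temperature {rec.water_temp_f}F below 160F (71C)")
    if rec.wash_minutes < 25.0:
        problems.append(f"time at temperature {rec.wash_minutes} min below 25 min")
    if not 50.0 <= rec.chlorine_ppm <= 150.0:
        problems.append(f"chlorine residual {rec.chlorine_ppm} ppm outside 50-150 ppm")
    if rec.final_ph > 6.0:  # assumed alarm point; sour should pull pH toward ~5
        problems.append(f"final pH {rec.final_ph} suggests residual alkali")
    return problems

# Hypothetical cycle log entry; an empty list means no flags were raised.
print(check_cycle(WashCycleRecord(162.0, 28.0, 95.0, 5.1)))  # -> []
```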
Clean/sterile and contaminated textiles should be transported from the laundry to the health-care facility in vehicles (e.g., trucks, vans, and carts) that allow for separation of clean/sterile and contaminated items. Clean/sterile textiles and contaminated textiles may be transported in the same vehicle, provided that the use of physical barriers and/or space separation can be verified to be effective in protecting the clean/sterile items from contamination. Clean, uncovered/unwrapped textiles stored in a clean location for short periods of time (e.g., uncovered and used within a few hours) have not been demonstrated to contribute to increased levels of health-care acquired infection. Such textiles can be stored in convenient places for use during the provision of care, provided that the textiles can be maintained dry and free from soil and body-substance contamination. In the absence of microbiologic standards for laundered textiles, no rationale exists for routine microbiologic sampling of cleaned health-care textiles and fabrics. 1286 Sampling may be used as part of an outbreak investigation if epidemiologic evidence suggests that textiles, fabrics, or clothing are a suspected vehicle for disease transmission. Sampling techniques include aseptically macerating the fabric into pieces and adding these to broth media or using contact plates (RODAC plates) for direct surface sampling. 1271,1286 When evaluating the disinfecting properties of the laundering process specifically, placing pieces of fabric between two membrane filters may help to minimize the contribution of the physical removal of microorganisms. 1287 Washing machines and dryers in residential-care settings are more likely to be consumer items than the commercial, heavy-duty, large-volume units typically found in hospitals and other institutional health-care settings. Although all washing machines and dryers in health-care settings must be properly maintained for performance according to the manufacturer's instructions, questions have been raised about the need to disinfect washers and dryers in residential-care settings. Disinfection of the tubs and tumblers of these machines is unnecessary when proper laundry procedures are followed; these procedures involve a. the physical removal of bulk solids (e.g., feces) before the wash/dry cycle and b. proper use of temperature, detergent, and laundry additives. Infection has not been linked to laundry procedures in residential-care facilities, even when consumer versions of detergents and laundry additives are used.

# Special Laundry Situations

Some textile items (e.g., surgical drapes and reusable gowns) must be sterilized before use and therefore require steam autoclaving after laundering. 7 Although the American Academy of Pediatrics in previous guidelines recommended autoclaving for linens in neonatal intensive care units (NICUs), studies on the microbial quality of routinely cleaned NICU linen have not identified any increased risk for infection among the neonates receiving care. 1288 Consequently, hygienically clean linens are suitable for use in this setting. 997 The use of sterile linens in burn therapy units remains unresolved. Coated or laminated fabrics are often used in the manufacture of PPE. When these items become contaminated with blood or other body substances, the manufacturer's instructions for decontamination and cleaning take into account the compatibility of the rubber backing with the chemical germicides or detergents used in the process.
The directions for decontaminating these items should be followed as indicated; the item should be discarded when the backing develops surface cracks. Dry cleaning, a cleaning process that utilizes organic solvents (e.g., perchloroethylene) for soil removal, is an alternative means of cleaning fabrics that might be damaged in conventional laundering and detergent washing. Several studies, however, have shown that dry cleaning alone is relatively ineffective in reducing the numbers of bacteria and viruses on contaminated linens; 1289, 1290 microbial populations are significantly reduced only when dry-cleaned articles are heat pressed. Dry cleaning should therefore not be considered a routine option for health-care facility laundry and should be reserved for those circumstances in which fabrics cannot be safely cleaned with water and detergent. 1291

# Surgical Gowns, Drapes, and Disposable Fabrics

An issue of recent concern involves the use of disposable (i.e., single-use) versus reusable (i.e., multiple-use) surgical attire and fabrics in health-care settings. 1292 Regardless of the material used to manufacture gowns and drapes, these items must be resistant to liquid and microbial penetration. 7 Surgical gowns and drapes must be registered with FDA to demonstrate their safety and effectiveness. Repellency and pore size of the fabric contribute to gown performance, but performance capability can be influenced by the item's design and construction. 1298,1299 Reinforced gowns (i.e., gowns with double-layered fabric) generally are more resistant to liquid strike-through. 1300,1301 Reinforced gowns may, however, be less comfortable. Guidelines for selection and use of barrier materials for surgical gowns and drapes have been published. 1302 When selecting a barrier product, the repellency level and type of barrier should be compatible with the exposure expected. 967 However, data are limited regarding the association between gown or drape characteristics and risk for surgical site infections. 7,1303 Health-care facilities must ensure optimal protection of patients and health-care workers. Not all fabric items in health care lend themselves to single use. Facilities exploring options for gowns and drapes should consider the expense of disposable items and the impact on the facility's waste-management costs once these items are discarded. Costs associated with the use of durable goods involve the fabric or textile items; staff expenses to collect, sort, clean, and package the laundry; and energy costs to operate the laundry if on-site, or the costs to contract with an outside service. 1304,1305

# Antimicrobial-Impregnated Articles and Consumer Items Bearing Antimicrobial Labeling

Manufacturers are increasingly incorporating antibacterial or antimicrobial chemicals into consumer and health-care items. Some consumer products bearing labels that indicate treatment with antimicrobial chemicals have included pens, cutting boards, toys, household cleaners, hand lotions, cat litter, soaps, cotton swabs, toothbrushes, and cosmetics. The "antibacterial" label on household cleaning products, in particular, gives consumers the impression that the products perform "better" than comparable products without this labeling, when in fact all household cleaners have antibacterial properties. In the health-care setting, treated items may include children's pajamas, mattresses, and bed linens with label claims of antimicrobial properties.
These claims require careful evaluation to determine whether they pertain to the use of antimicrobial chemicals as preservatives for the fabric or other components or whether they imply a health claim. 1306,1307 No evidence is available to suggest that use of these products will make consumers and patients healthier or prevent disease. No data support the use of these items as part of a sound infection-control strategy, and therefore, the additional expense of replacing a facility's bedding and sheets with these treated products is unwarranted. EPA has reaffirmed its position that manufacturers who make public health claims for articles containing antimicrobial chemicals must provide evidence to support those claims as part of the registration process. 1308 Current EPA regulations outlined in the Treated Articles Exemption of the Federal Insecticide, Fungicide, and Rodenticide Act (FIFRA) require manufacturers to register both the antimicrobial chemical used in or on the product and the finished product itself if a public health claim is maintained for the item. The exemption applies to the use of antimicrobial chemicals for the purpose of preserving the integrity of the product's raw material(s). The U.S. Federal Trade Commission (FTC) is evaluating manufacturer advertising of products with antimicrobial claims. 1309

# Standard Mattresses, Pillows, and Air-Fluidized Beds

Standard mattresses and pillows can become contaminated with body substances during patient care if the integrity of the covers of these items is compromised. The practice of sticking needles into the mattress should be avoided. A mattress cover is generally a fitted, protective material, the purpose of which is to prevent the mattress from becoming contaminated with body fluids and substances. A linen sheet placed on the mattress is not considered a mattress cover. Patches for tears and holes in mattress covers do not provide an impermeable surface over the mattress. Mattress covers should be replaced when torn; the mattress should be replaced if it is visibly stained. Wet mattresses, in particular, can be a substantial environmental source of microorganisms. Infections and colonizations caused by Acinetobacter spp., MRSA, and Pseudomonas aeruginosa have been described, especially among burn patients. In these reports, the removal of wet mattresses was an effective infection-control measure. Efforts were made to ensure that pads and covers were cleaned and disinfected between patients using disinfectant products compatible with mattress-cover materials to ensure that these covers remained impermeable to fluids. Pillows and their covers should be easily cleanable, preferably in a hot-water laundry cycle. 1315 These should be laundered between patients or if contaminated with body substances. Air-fluidized beds are used for the care of patients immobilized for extended periods of time because of therapy or injury (e.g., pain, decubitus ulcers, and burns). 1316 These specialized beds consist of a base unit filled with microsphere beads fluidized by warm, dry air flowing upward from a diffuser located at the bottom of the unit. A porous polyester filter sheet separates the patient from direct contact with the beads but allows body fluids to pass through to the beads. Moist beads aggregate into clumps, which settle to the bottom, where they are removed as part of routine bed maintenance.
Because the beads become contaminated with the patient's body substances, concerns have been raised about the potential for these beds to serve as an environmental source of pathogens. Certain pathogens (e.g., Enterococcus spp., Serratia marcescens, Staphylococcus aureus, and Streptococcus faecalis) have been recovered either from the microsphere beads or the polyester sheet after cleaning. 1317,1318 Reports of cross-contamination of patients, however, are few. 1318 Nevertheless, routine maintenance and between-patient decontamination procedures can minimize potential risks to patients. Regular removal of bead clumps, coupled with the warm, dry air of the bed, can help to minimize bacterial growth in the unit. Beads are decontaminated between patients by high heat (113°F-194°F [45°C-90°C], depending on the manufacturer's specifications) for at least 1 hour; this procedure is particularly important for the inactivation of Enterococcus spp., which are relatively resistant to heat. 1322,1323 The polyester filter sheet requires regular changing and thorough cleaning and disinfection, especially between patients. 1317,1318,1322,1323 Microbial contamination of the air space in the immediate vicinity of a properly maintained air-fluidized bed is similar to that found in air around conventional bedding, despite the air flow out of the base unit and around the patient. 1320,1324,1325 An operational air-fluidized bed can, however, interfere with proper pressure differentials, especially in negative-pressure rooms; 1326 the effect varies with the location of the bed relative to the room's configuration and supply and exhaust vent locations. Use of an air-fluidized bed in a negative-pressure room requires consultation with a facility engineer to determine appropriate placement of the bed.

# H. Animals in Health-Care Facilities

# General Information

Animals in health-care facilities traditionally have been limited to laboratories and research areas. However, their presence in patient-care areas is now more frequent, both in acute-care and long-term care settings, prompting consideration of the potential for transmission of zoonotic pathogens from animals to humans in these settings. Although dogs and cats may be commonly encountered in health-care settings, other animals (e.g., fish, birds, non-human primates, rabbits, rodents, and reptiles) also can be present as research, resident, or service animals. These animals can serve as sources of zoonotic pathogens that could potentially infect patients and health-care workers (Table 26; in the table, a plus sign indicates that the pathogen associated with the infection has been isolated from animals and is considered to pose potential risk to humans). Animals potentially can serve as reservoirs for antibiotic-resistant microorganisms, which can be introduced to the health-care setting while the animal is present. VRE have been isolated from both farm animals and pets, 1341 and a cat in a geriatric care center was found to be colonized with MRSA. 1342 Zoonoses can be transmitted from animals to humans either directly or indirectly via bites, scratches, aerosols, ectoparasites, accidental ingestion, or contact with contaminated soil, food, water, or unpasteurized milk. 1331,1332 Colonization and hand transferral of pathogens acquired from pets in health-care workers' homes represent potential sources and modes of transmission of zoonotic pathogens in health-care settings.
An outbreak of infections caused by a yeast (Malassezia pachydermatis) among newborns was traced to transfer of the yeast from the hands of health-care workers with pet dogs at home. 1346 In addition, an outbreak of ringworm in a NICU caused by Microsporum canis was associated with a nurse and her cat, 1347 and an outbreak of Rhodococcus (Gordona) bronchialis sternal SSIs after coronary-artery bypass surgery was traced to a colonized nurse whose dogs were culture-positive for the organism. 1348 In the latter outbreak, whether the dogs were the sole source of the organism and whether other environmental reservoirs contributed to the outbreak are unknown. Nonetheless, limited data indicate that outbreaks of infectious disease have occurred as a result of contact with animals in areas housing immunocompetent patients. However, the low frequency of outbreaks may result from a. the relatively limited presence of the animals in health-care facilities and b. the immunocompetency of the patients involved in the encounters. Formal scientific studies to evaluate potential risks of transmission of zoonoses in health-care settings outside of the laboratory are lacking.

# Animal-Assisted Activities, Animal-Assisted Therapy, and Resident Animals

Animal-Assisted Activities (AAA) are those programs that enhance the patients' quality of life. These programs allow patients to visit animals in either a common, central location in the facility or in individual patient rooms. A group session with the animals enhances opportunities for ambulatory patients and facility residents to interact with caregivers, family members, and volunteers. Alternatively, allowing the animals access to individual rooms provides the same opportunity to nonambulatory patients and patients for whom privacy or dignity issues are a consideration. The decision to allow this access to patients' rooms should be made on a case-by-case basis, with the consultation and consent of the attending physician and nursing staff. Animal-Assisted Therapy (AAT) is a goal-directed intervention that incorporates an animal into the treatment process provided by a credentialed therapist. 1330,1331 The concept for AAT arose from the observation that some patients with pets at home recover from surgical and medical procedures more rapidly than patients without pets. 1352,1353 Contact with animals is considered beneficial for enhancing wellness in certain patient populations (e.g., children, the elderly, and extended-care hospitalized patients). 1349 However, evidence supporting this benefit is largely derived from anecdotal reports and observations of patient/animal interactions. Guidelines for establishing AAT programs are available for facilities considering this option. 1360 The incorporation of non-human primates into an AAA or AAT program is not encouraged because of concerns regarding potential disease transmission from, and unpredictable behavior of, these animals. 1361,1362 Animals participating in either AAA or AAT sessions should be in good health and up-to-date with recommended immunizations and prophylactic medications (e.g., heartworm prevention) as determined by a licensed veterinarian based on local needs and recommendations. Regular re-evaluation of the animal's health and behavior status is essential. 1360 Animals should be routinely screened for enteric parasites and/or have evidence of a recently completed anthelminthic regimen.
1363 They should also be free of ectoparasites (e.g., fleas and ticks) and should have no sutures, open wounds, or obvious dermatologic lesions that could be associated with bacterial, fungal, or viral infections or parasitic infestations. Incorporating young animals (i.e., those aged <1 year) into these programs is not encouraged because of issues regarding unpredictable behavior and elimination control. Additionally, the health of these young animals may be placed at risk. Animals should be clean and well-groomed. The visits must be supervised by persons who know the animals and their behavior. Animal handlers should be trained in these activities and receive site-specific orientation to ensure that they work efficiently with the staff in the specific health-care environment. 1360 Additionally, animal handlers should be in good health. 1360 The most important infection-control measure to prevent potential disease transmission is strict enforcement of hand-hygiene measures (e.g., using either soap and water or an alcohol-based hand rub) for all patients, staff, and residents after handling the animals. 1355,1364 Care should also be taken to avoid direct contact with animal urine or feces. Clean-up of these substances from environmental surfaces requires gloves and the use of leak-resistant plastic bags to discard absorbent material used in the process. 2 The area must be cleaned after visits according to standard cleaning procedures. The American Academy of Allergy, Asthma, and Immunology estimates that dog or cat allergies occur in approximately 15% of the population. 1365 Minimizing contact with animal saliva, dander, and/or urine helps to mitigate allergic responses. 1365-1367 Some facilities may not allow animal visitation for patients with a. underlying asthma, b. known allergies to cat or dog hair, c. respiratory allergies of unknown etiology, and d. immunosuppressive disorders. Hair shedding can be minimized by processes that remove dead hair (e.g., grooming) and that prevent the shedding of dead hair (e.g., therapy capes for dogs). Allergens can be minimized by bathing therapy animals within 24 hours of a visit. 1333,1368 Animal therapists and handlers must take precautions to prevent animal bites. Common pathogens associated with animal bites include Capnocytophaga canimorsus, Pasteurella spp., Staphylococcus spp., and Streptococcus spp. Selecting well-behaved and well-trained animals for these programs greatly decreases the incidence of bites. Rodents, exotic species, wild/domestic animals (i.e., wolf-dog hybrids), and wild animals whose behavior is unpredictable should be excluded from AAA or AAT programs. A well-trained animal handler should be able to recognize stress in the animal and to determine when to terminate a session to minimize risk. When an animal bites a person during AAA or AAT, the animal is to be permanently removed from the program. If a bite does occur, the wound must be cleansed immediately and monitored for subsequent infection. Most infections can be treated with antibiotics, and antibiotics often are prescribed prophylactically in these situations. The health-care facility's infection-control staff should participate actively in planning for and coordinating AAA and AAT sessions. Many facilities do not offer AAA or AAT programs for severely immunocompromised patients (e.g., HSCT patients and patients on corticosteroid therapy).
1339 The question of whether family pets or companion animals can visit terminally ill HSCT patients or other severely immunosuppressed patients is best handled on a case-by-case basis, although animals should not be brought into the HSCT unit or any other unit housing severely immunosuppressed patients. An in-depth discussion of this issue is presented elsewhere. 1366 Immunocompromised patients who have been discharged from a health-care facility may be at higher risk for acquiring some pet-related zoonoses. Although guidelines have been developed to minimize the risk of disease transmission to HIV-infected patients, 8 these recommendations may be applicable for patients with other immunosuppressive disorders. In addition to handwashing or hand hygiene, these recommendations include avoiding contact with a. animal feces and soiled litter box materials, b. animals with diarrhea, c. very young animals (i.e., dogs <6 months of age and cats <1 year of age), and d. exotic animals and reptiles. 8 Pets or companion animals with diarrhea should receive veterinary care to resolve their condition. Many health-care facilities are adopting more home-like environments for residential-care or extended-stay patients in acute-care settings, and resident animals are one element of this approach. 1369 One concept, the "Eden Alternative," incorporates children, plants, and animals (e.g., dogs, cats, fish, birds, rabbits, and rodents) into the daily care setting. 1370,1371 The concept of working with resident animals has not been scientifically evaluated. Several issues beyond the benefits of therapy must be considered before embarking on such a program, including a. whether the animals will come into direct contact with patients and/or be allowed to roam freely in the facility; b. how the staff will provide care for the animals; c. the management of patients' or residents' allergies, asthma, and phobias; d. precautionary measures to prevent bites and scratches; and e. measures to properly manage the disposal of animal feces and urine, thereby preventing environmental contamination by zoonotic microorganisms (e.g., Toxoplasma spp., Toxocara spp., and Ancylostoma spp.). 1372,1373 Few data document a link between health-care acquired infection rates and the frequency of cleaning fish tanks or rodent cages. Skin infections caused by Mycobacterium marinum have been described among persons who have fish aquariums at home. 1374,1375 Nevertheless, immunocompromised patients should avoid direct contact with fish tanks and cages and the aerosols that these items produce. Further, fish tanks should be kept clean on a regular basis as determined by facility policy, and this task should be performed by gloved staff members who are not responsible for patient care. The use of the infection-control risk assessment can help determine whether a fish tank poses a risk for patient or resident safety and health in these situations. No evidence, however, links the incidence of health-care acquired infections among immunocompetent patients or residents with the presence of a properly cleaned and maintained fish tank, even in dining areas. As a general preventive measure, resident animal programs are advised to restrict animals from a. food preparation kitchens, b. laundries, c. central sterile supply and any storage areas for clean supplies, and d. medication preparation areas.
Resident-animal programs in acute-care facilities should not allow the animals into isolation areas, protective environments, ORs, or any area where immunocompromised patients are housed. Patients and staff routinely should wash their hands or use waterless, alcohol-based hand-hygiene products after contact with animals.

# Service Animals

Although this section provides an overview of service animals in health-care settings, it cannot address every situation or question that may arise (see Appendix E, Information Resources). A service animal is any animal individually trained to do work or perform tasks for the benefit of a person with a disability. 1366, 1376 A service animal is not considered a pet but rather an animal trained to provide assistance to a person because of a disability. Title III of the "Americans with Disabilities Act" (ADA) of 1990 mandates that persons with disabilities accompanied by service animals be allowed access with their service animals into places of public accommodation, including restaurants, public transportation, schools, and health-care facilities. 1366,1376 In health-care facilities, a person with a disability requiring a service animal may be an employee, a visitor, or a patient. An overview of the subject of service animals and their presence in health-care facilities has been published. 1366 No evidence suggests that animals pose a more significant risk of transmitting infection than people; therefore, service animals should not be excluded from such areas, unless an individual patient's situation or a particular animal poses greater risk that cannot be mitigated through reasonable measures. If health-care personnel, visitors, and patients are permitted to enter care areas (e.g., inpatient rooms, some ICUs, and public areas) without taking additional precautions to prevent transmission of infectious agents (e.g., donning gloves, gowns, or masks), a clean, healthy, well-behaved service animal should be allowed access with its handler. 1366 Similarly, if immunocompromised patients are able to receive visitors without using protective garments or equipment, an exclusion of service animals from this area would not be justified. 1366 Because health-care facilities are covered by the ADA or the Rehabilitation Act, a person with a disability may be accompanied by a service animal within the facility unless the animal's presence or behavior creates a fundamental alteration in the nature of a facility's services in a particular area or a direct threat to other persons in a particular area. 1366 A "direct threat" is defined as a significant risk to the health or safety of others that cannot be mitigated or eliminated by modifying policies, practices, or procedures. 1376 The determination that a service animal poses a direct threat in any particular health-care setting must be based on an individualized assessment of the service animal, the patient, and the health-care situation. When evaluating risk in such situations, health-care personnel should consider the nature of the risk (including duration and severity); the probability that injury will occur; and whether reasonable modifications of policies, practices, or procedures will mitigate the risk (J. Wodatch, U.S. Department of Justice, 2000). The person with a disability should contribute to the risk-assessment process as part of a pre-procedure health-care provider/patient conference.
Excluding a service animal from an OR or similar special care areas (e.g., burn units, some ICUs, PE units, and any other area containing equipment critical for life support) is appropriate if these areas are considered to have "restricted access" with regard to the general public. General infection-control measures that dictate such limited access include the requirements that a. the area meet environmental criteria to minimize the risk of disease transmission, b. persons entering pay strict attention to hand hygiene and be free of dermatologic conditions, and c. barrier protective measures be used by persons in the affected space. No infection-control measures regarding the use of barrier precautions could be reasonably imposed on the service animal. Excluding a service animal that becomes threatening because of a perceived danger to its handler during treatment also is appropriate; however, exclusion of such an animal must be based on the actual behavior of the particular animal, not on speculation about how the animal might behave. Another issue regarding service animals is whether to permit persons with disabilities to be accompanied by their service animals during all phases of their stay in the health-care facility. Health-care personnel should discuss all aspects of anticipatory care with the patient who uses a service animal. Health-care personnel may not exclude a service animal because health-care staff may be able to perform the same services that the service animal does (e.g., retrieving dropped items and guiding an otherwise ambulatory person to the restroom). Similarly, health-care personnel cannot exclude service animals because the health-care staff perceive a lack of need for the service animal during the person's stay in the health-care facility. A person with a disability is entitled to independent access (i.e., to be accompanied by a service animal unless the animal poses a direct threat or a fundamental alteration in the nature of services); "need" for the animal is not a valid factor in either analysis. For some forms of care (e.g., ambulation as physical therapy following total hip replacement or knee replacement), the service animal should not be used in place of a credentialed health-care worker who directly provides therapy. However, the service animal need not be restricted from being in the presence of its handler during this time; in addition, rehabilitation and discharge planning should incorporate the patient's future use of the animal. The health-care personnel and the patient with a disability should discuss both the possible need for the service animal to be separated from its handler for a period of time during non-emergency care and an alternate plan of care for the service animal in the event the patient is unable or unwilling to provide that care. This plan might include family members taking the animal out of the facility several times a day for exercise and elimination, the animal staying with relatives, or boarding off-site. Care of the service animal, however, remains the obligation of the person with the disability, not the health-care staff. Although animals potentially carry zoonotic pathogens transmissible to humans, the risk is minimal with a healthy, clean, vaccinated, well-behaved, and well-trained service animal, the most common of which are dogs and cats. No reports have been published regarding infectious disease in humans originating in service dogs. Standard cleaning procedures are sufficient following occupation of an area by a service animal.
1366 Clean-up of spills of animal urine, feces, or other body substances can be accomplished with the blood/body substance procedures outlined in the Environmental Services section of this guideline. No special bathing procedures are required prior to a service animal accompanying its handler into a health-care facility. Providing access to exotic animals (e.g., reptiles and non-human primates) that are used as service animals is problematic. Concerns about these animals are discussed in two published reviews. 1331,1366 Because some of these animals exhibit high-risk behaviors that may increase the potential for zoonotic disease transmission (e.g., herpes B infection), providing health-care facility access to non-human primates used as service animals is discouraged, especially if these animals might come into contact with the general public. 1361,1362 Health-care administrators should consult the Americans with Disabilities Act for guidance when developing policies about service animals in their facilities. 1366,1376 Requiring documentation for access of a service animal to an area generally accessible to the public would impose a burden on a person with a disability. When health-care workers are not certain that an animal is a service animal, they may ask the person who has the animal if it is a service animal required because of a disability; however, no certification or other documentation of service animal status can be required. 1377

# Animals as Patients in Human Health-Care Facilities

The potential for direct and indirect transmission of zoonoses must be considered when rooms and equipment in human health-care facilities are used for the medical or surgical treatment or diagnosis of animals. 1378 Inquiries should be made to veterinary medical professionals to determine an appropriate facility and equipment to care for an animal. The central issue associated with providing medical or surgical care to animals in human health-care facilities is whether cross-contamination occurs between the animal patient and the human health-care workers and/or human patients. The fundamental principles of infection control and aseptic practice should differ only minimally, if at all, between veterinary medicine and human medicine. Health-care associated infections can occur, and have occurred, in both patients and workers in veterinary medical facilities when lapses in infection-control procedures are evident. Further, veterinary patients can be at risk for acquiring infection from veterinary health-care workers if proper precautions are not taken. 1385 The issue of providing care to veterinary patients in human health-care facilities can be divided into the following three areas of infection-control concern: a. whether the room/area used for animal care can be made safe for human patients, b. whether the medical/surgical instruments used on animals can be subsequently used on human patients, and c. which disinfecting or sterilizing procedures need to be done for these purposes. Studies addressing these concerns are lacking. However, with respect to disinfection or sterilization in veterinary settings, only minimal evidence suggests that zoonotic microbial pathogens are unusually resistant to inactivation by chemical or physical agents (with the exception of prions). Ample evidence supports the contrary observation (i.e., that pathogens from human and animal sources are similar in their relative intrinsic resistance to inactivation).
Further, no evidence suggests that zoonotic pathogens behave differently from human pathogens with respect to ventilation. Despite this knowledge, an aesthetic and sociologic perception that animal care must remain separate from human care persists. Health-care facilities, however, are increasingly faced with requests from the veterinary medical community for access to human health-care facilities for reasons that are largely economic (e.g., the costs of acquiring sophisticated diagnostic technology and complex medical instruments). If hospital guidelines allow treatment of animals, alternate veterinary resources (including veterinary hospitals, clinics, and universities) should be exhausted before using human health-care settings. Additionally, the hospital's public/media relations staff should be notified of the situation. The goal is to develop policies and procedures to proactively and positively discuss and disclose this activity to the general public. An infection-control risk assessment (ICRA) must be undertaken to evaluate the circumstances specific to providing care to animals in a human health-care facility. Individual hospital policies and guidelines should be reviewed before any animal treatment is considered in such facilities. Animals treated in human health-care facilities should be under the direct care and supervision of a licensed veterinarian; they also should be free of known infectious diseases, ectoparasites, and other external contaminants (e.g., soil, urine, and feces). Measures should be taken to avoid treating animals with a known or suspected zoonotic disease in a human health-care setting (e.g., lambs being treated for Q fever). If human health-care facilities must be used for animal treatment or diagnostics, the following general infection-control actions are suggested: a. whenever possible, the use of ORs or other rooms used for invasive procedures should be avoided; b. when all other space options are exhausted and use of the aforementioned rooms is unavoidable, the procedure should be scheduled late in the day as the last procedure for that particular area such that patients are not present in the department/unit/area; c. environmental surfaces should be thoroughly cleaned and disinfected using procedures discussed in the Environmental Services portion of this guideline after the animal is removed from the care area; d. sufficient time should be allowed for adequate ACH to help prevent allergic reactions by human patients; e. only disposable equipment or equipment that can be thoroughly and easily cleaned, disinfected, or sterilized should be used; f. when medical or surgical instruments, especially those invasive instruments that are difficult to clean, are used on animals, these instruments should be reserved for future use only on animals; and g. standard precautions should be followed.

# Research Animals in Health-Care Facilities

The risk of acquiring a zoonotic infection from research animals has decreased in recent years because many small laboratory animals (e.g., mice, rats, and rabbits) come from quality stock and have defined microbiologic profiles. 1392 Larger animals (e.g., nonhuman primates) are still obtained frequently from the wild and may harbor pathogens transmissible to humans. Primates, in particular, benefit from vaccinations to protect their health during the research period, provided the vaccination does not interfere with the study of the particular agent.
Animals serving as models for human disease studies pose some risk for transmission of infection to laboratory or health-care workers from percutaneous or mucosal exposure. Exposures can occur either through a. direct contact with an infected animal or its body substances and secretions, or b. indirect contact with infectious material on equipment, instruments, surfaces, or supplies. 1392 Uncontained aerosols generated during laboratory procedures can also transmit infection. Infection-control measures to prevent transmission of zoonotic infections from research animals are largely derived from the following basic laboratory safety principles: a. purchasing pathogen-free animals, b. quarantining incoming animals to detect any zoonotic pathogens, c. treating infected animals or removing them from the facility, d. vaccinating animal carriers and high-risk contacts if possible, e. using specialized containment caging or facilities, and f. using protective clothing and equipment. 1392 An excellent resource for detailed discussion of these safety measures has been published. 1013 The animal research unit within a health-care facility should be engineered to provide a. adequate containment of animals and pathogens; b. daily decontamination and transport of equipment and waste; c. proper ventilation and air filtration, which prevents recirculation of the air in the unit to other areas of the facility; and d. negative air pressure in the animal rooms relative to the corridors. To ensure adequate security and containment, no through traffic to other areas of the health-care facility should flow through this unit; access should be restricted to animal-care staff, researchers, environmental services, maintenance, and security personnel. Occupational health programs for animal-care staff, researchers, and maintenance staff should take into consideration the animals' natural pathogens and research pathogens. Components of such programs include a. prophylactic vaccines, b. TB skin testing when primates are used, c. baseline serums, and d. hearing and respiratory testing. Work practices, PPE, and engineering controls specific for each of the four animal biosafety levels have been published. 1013,1393 The facility's occupational or employee health clinic should be aware of the appropriate post-exposure procedures involving zoonoses and have available the appropriate post-exposure biologicals and medications. Animal-research-area staff should also develop standard operating procedures for a. daily animal husbandry; b. pathogen containment and decontamination; c. management, cleaning, disinfecting, and/or sterilizing of equipment and instruments; and d. employee training for laboratory safety and safety procedures specific to animal research worksites. 1013 The federal Animal Welfare Act of 1966 and its amendments serve as the regulatory basis for ensuring animal welfare in research. 1394,1395
I. Regulated Medical Waste
# Epidemiology
No epidemiologic evidence suggests that most of the solid or liquid wastes from hospitals, other health-care facilities, or clinical/research laboratories are any more infective than residential waste. Several studies have compared the microbial load and the diversity of microorganisms in residential wastes and wastes obtained from a variety of health-care settings. Although hospital wastes had a greater number of different bacterial species compared with residential waste, wastes from residences were more heavily contaminated.
1397,1398 Moreover, no epidemiologic evidence suggests that traditional waste-disposal practices of health-care facilities (whereby clinical and microbiological wastes were decontaminated on site before leaving the facility) have caused disease in either the health-care setting or the general community. 1400,1401 This statement excludes, however, sharps injuries sustained during or immediately after the delivery of patient care before the sharp is "discarded." Therefore, identifying wastes for which handling and disposal precautions are indicated is largely a matter of judgment about the relative risk of disease transmission, because no reasonable standards on which to base these determinations have been developed. Aesthetic and emotional considerations (originating during the early years of the HIV epidemic) have, however, figured into the development of treatment and disposal policies, particularly for pathology and anatomy wastes and sharps. Public concerns have resulted in the promulgation of federal, state, and local rules and regulations regarding medical waste management and disposal.
# Categories of Medical Waste
Precisely defining medical waste on the basis of quantity and type of etiologic agents present is virtually impossible. The most practical approach to medical waste management is to identify wastes that represent a sufficient potential risk of causing infection during handling and disposal and for which some precautions likely are prudent. 2 Health-care facility medical wastes targeted for handling and disposal precautions include microbiology laboratory waste (e.g., microbiologic cultures and stocks of microorganisms), pathology and anatomy waste, blood specimens from clinics and laboratories, blood products, and other body-fluid specimens. 2 Moreover, the risk of either injury or infection from certain sharp items (e.g., needles and scalpel blades) contaminated with blood also must be considered. Although any item that has had contact with blood, exudates, or secretions may be potentially infective, treating all such waste as infective is neither practical nor necessary. Federal, state, and local guidelines and regulations specify the categories of medical waste that are subject to regulation and outline the requirements associated with treatment and disposal. The categorization of these wastes has generated the term "regulated medical waste." This term emphasizes the role of regulation in defining the actual material and serves as an alternative to "infectious waste," given the lack of evidence of this type of waste's infectivity. State regulations also address the degree or amount of contamination (e.g., blood-soaked gauze) that defines the discarded item as a regulated medical waste. The EPA's Manual for Infectious Waste Management identifies and categorizes other specific types of waste generated in health-care facilities with research laboratories that also require handling precautions. 1406
# Management of Regulated Medical Waste in Health-Care Facilities
Medical wastes require careful disposal and containment before collection and consolidation for treatment. OSHA has dictated initial measures for discarding regulated medical-waste items. These measures are designed to protect the workers who generate medical wastes and who manage the wastes from point of generation to disposal.
967 A single, leak-resistant biohazard bag is usually adequate for containment of regulated medical wastes, provided the bag is sturdy and the waste can be discarded without contaminating the bag's exterior. Contamination or puncturing of the bag requires placement into a second biohazard bag. All bags should be securely closed for disposal. Puncture-resistant containers located at the point of use (e.g., sharps containers) are used as containment for discarded slides or tubes with small amounts of blood, scalpel blades, needles and syringes, and unused sterile sharps. 967 To prevent needlestick injuries, needles and other contaminated sharps should not be recapped, purposefully bent, or broken by hand. CDC has published general guidelines for handling sharps. 6,1415 Health-care facilities may need additional precautions to prevent the production of aerosols during the handling of blood-contaminated items for certain rare diseases or conditions (e.g., Lassa fever and Ebola virus infection). 203 Transporting and storing regulated medical wastes within the health-care facility prior to terminal treatment is often necessary. Both federal and state regulations address the safe transport and storage of on- and off-site regulated medical wastes. Health-care facilities are instructed to dispose of medical wastes regularly to avoid accumulation. Medical wastes requiring storage should be kept in labeled, leakproof, puncture-resistant containers under conditions that minimize or prevent foul odors. The storage area should be well ventilated and inaccessible to pests. Any facility that generates regulated medical wastes should have a regulated medical waste management plan to ensure health and environmental safety in accordance with federal, state, and local regulations.
# Treatment of Regulated Medical Waste
Regulated medical wastes are treated or decontaminated to reduce the microbial load in or on the waste and to render the by-products safe for further handling and disposal. From a microbiologic standpoint, waste need not be rendered "sterile" because the treated waste will not be deposited in a sterile site. In addition, waste need not be subjected to the same reprocessing standards as are surgical instruments. Historically, treatment methods involved steam-sterilization (i.e., autoclaving), incineration, or interment (for anatomy wastes). Alternative treatment methods developed in recent years include chemical disinfection, grinding/shredding/disinfection methods, energy-based technologies (e.g., microwave or radiowave treatments), and disinfection/encapsulation methods. 1409 State medical waste regulations specify appropriate treatment methods for each category of regulated medical waste. Of all the categories comprising regulated medical waste, microbiologic wastes (e.g., untreated cultures, stocks, and amplified microbial populations) pose the greatest potential for infectious disease transmission, and sharps pose the greatest risk for injuries. Untreated stocks and cultures of microorganisms are subsets of the clinical laboratory or microbiologic waste stream. If the microorganism must be grown and amplified in culture to high concentration to permit work with the specimen, this item should be considered for on-site decontamination, preferably within the laboratory unit. Historically, this was accomplished effectively by either autoclaving (steam sterilization) or incineration.
If steam sterilization in the health-care facility is used for waste treatment, exposure of the waste for up to 90 minutes at 250°F (121°C) in an autoclave (depending on the size of the load and type of container) may be necessary to ensure an adequate decontamination cycle. After steam sterilization, the residue can be safely handled and discarded with all other nonhazardous solid waste in accordance with state solid-waste disposal regulations. On-site incineration is another treatment option for microbiologic, pathologic, and anatomic waste, provided the incinerator is engineered to burn these wastes completely and stay within EPA emissions standards. 1410 Improper incineration of waste with high moisture and low energy content (e.g., pathology waste) can lead to emission problems. State medical-waste regulatory programs identify acceptable methods for inactivating amplified stocks and cultures of microorganisms, some of which may employ alternative technologies rather than steam sterilization or incineration. Concerns have been raised about the ability of modern health-care facilities to inactivate microbiologic wastes on-site, given that many of these institutions have decommissioned their laboratory autoclaves. Current laboratory guidelines for working with infectious microorganisms at biosafety level (BSL) 3 recommend that all laboratory waste be decontaminated before disposal by an approved method, preferably within the laboratory. 1013 These same guidelines recommend that all materials removed from a BSL 4 laboratory (unless they are biological materials that are to remain viable) are to be decontaminated before they leave the laboratory. 1013 Recent federal regulations for laboratories that handle certain biological agents known as "select agents" (i.e., those that have the potential to pose a severe threat to public health and safety) require these agents (and those obtained from a clinical specimen intended for diagnostic, reference, or verification purposes) to be destroyed on-site before disposal. 1412 Although recommendations for laboratory waste disposal from BSL 1 or 2 laboratories (e.g., most health-care clinical and diagnostic laboratories) allow for these materials to be decontaminated off-site before disposal, on-site decontamination by a known effective method is preferred to reduce the potential of exposure during the handling of infectious material. A recent outbreak of TB among workers in a regional medical-waste treatment facility in the United States demonstrated the hazards associated with aerosolized microbiologic wastes. 1419,1420 The facility received diagnostic cultures of Mycobacterium tuberculosis from several different health-care facilities before these cultures were chemically disinfected; the facility treated this waste with a grinding/shredding process that generated aerosols from the material. 1419,1420 Several operational deficiencies facilitated the release of aerosols and exposed workers to airborne M. tuberculosis. Among the suggested control measures was that health-care facilities perform on-site decontamination of laboratory waste containing live cultures of microorganisms before release of the waste to a waste management company. 1419,1420 This measure is supported by recommendations found in the CDC/NIH guideline for laboratory workers.
1013 This outbreak demonstrates the need to avoid the use of any medical-waste treatment method or technology that can aerosolize pathogens from live cultures and stocks (especially those of airborne microorganisms) unless aerosols can be effectively contained and workers can be equipped with proper PPE. Safe laboratory practices, including those addressing waste management, have been published. 1013,1422 In an era when local, state, and federal health-care facilities and laboratories are developing bioterrorism response strategies and capabilities, the need to reinstate in-laboratory capacity to destroy cultures and stocks of microorganisms becomes a relevant issue. 1423 Recent federal regulations require health-care facility laboratories to maintain the capability of destroying discarded cultures and stocks on-site if these laboratories isolate from a clinical specimen any microorganism or toxin identified as a "select agent" (Table 27). 1412,1413 As an alternative, isolated cultures of select agents can be transferred to a facility registered to accept these agents in accordance with federal regulations. 1412 State medical waste regulations can, however, complicate or completely prevent this transfer if these cultures are determined to be medical waste, because most states regulate the inter-facility transfer of untreated medical wastes.
# Discharging Blood, Fluids to Sanitary Sewers or Septic Tanks
The contents of all vessels that contain more than a few milliliters of blood remaining after laboratory procedures, suction fluids, or bulk blood can either be inactivated in accordance with state-approved treatment technologies or carefully poured down a utility sink drain or toilet. 1414 State regulations may dictate the maximum volume allowable for discharge of blood/body fluids to the sanitary sewer. No evidence indicates that bloodborne diseases have been transmitted from contact with raw or treated sewage. Many bloodborne pathogens, particularly bloodborne viruses, are not stable in the environment for long periods of time; 1425,1426 therefore, the discharge of small quantities of blood and other body fluids to the sanitary sewer is considered a safe method of disposing of these waste materials. 1414 The following factors increase the likelihood that bloodborne pathogens will be inactivated in the disposal process: a. dilution of the discharged materials with water; b. inactivation of pathogens resulting from exposure to cleaning chemicals, disinfectants, and other chemicals in raw sewage; and c. effectiveness of sewage treatment in inactivating any residual bloodborne pathogens that reach the treatment facility. Small amounts of blood and other body fluids should not affect the functioning of a municipal sewer system. However, large quantities of these fluids, with their high protein content, might interfere with the biological oxygen demand (BOD) of the system. Local municipal sewage treatment restrictions may dictate that an alternative method of bulk fluid disposal be selected. State regulations may dictate what quantity constitutes a small amount of blood or body fluids. Although concerns have been raised about the discharge of blood and other body fluids to a septic tank system, no evidence suggests that septic tanks have transmitted bloodborne infections. A properly functioning septic system is adequate for inactivating bloodborne pathogens.
System manufacturers' instructions specify what materials may be discharged to the septic tank without jeopardizing its proper operation.
# Medical Waste and CJD
Concerns also have been raised about the need for special handling and treatment procedures for wastes generated during the care of patients with CJD or other transmissible spongiform encephalopathies (TSEs). Prions, the agents that cause TSEs, have significant resistance to inactivation by a variety of physical, chemical, or gaseous methods. 1427 No epidemiologic evidence, however, links acquisition of CJD with medical-waste disposal practices. Although handling neurologic tissue for pathologic examination and autopsy materials with care, using barrier precautions, and following specific procedures for the autopsy are prudent measures, 1197 employing extraordinary measures once the materials are discarded is unnecessary. Regulated medical wastes generated during the care of the CJD patient can be managed using the same strategies as wastes generated during the care of other patients. After decontamination, these wastes may then be disposed in a sanitary landfill or discharged to the sanitary sewer, as appropriate.
# Part II. Recommendations for Environmental Infection Control in Health-Care Facilities
# A. Rationale for Recommendations
As in previous CDC guidelines, each recommendation is categorized on the basis of existing scientific data, theoretic rationale, applicability, and possible economic benefit. The recommendations are evidence-based wherever possible. However, certain recommendations are derived from empiric infection-control or engineering principles, theoretic rationale, or from experience gained from events that cannot be readily studied (e.g., floods). The HICPAC system for categorizing recommendations has been modified to include a category for engineering standards and actions required by state or federal regulations. Guidelines and standards published by the American Institute of Architects (AIA), the American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE), and the Association for the Advancement of Medical Instrumentation (AAMI) form the basis of certain recommendations. These standards reflect a consensus of expert opinions and extensive consultation with agencies of the U.S. Department of Health and Human Services. Compliance with these standards is usually voluntary. However, state and federal governments often adopt these standards as regulations. For example, the standards from AIA regarding construction and design of new or renovated health-care facilities have been adopted by reference by >40 states. Certain recommendations have two category ratings (e.g., Categories IA and IC or Categories IB and IC), indicating the recommendation is evidence-based as well as a standard or regulation.
# B. Rating Categories
Recommendations are rated according to the following categories: - - Develop and implement a maintenance schedule for ACH, pressure differentials, and filtration efficiencies using facility-specific data as part of the multidisciplinary risk assessment. Take into account the age and reliability of the system. - - Document these parameters, especially the pressure differentials. 3. Engineer humidity controls into the HVAC system and monitor the controls to ensure proper moisture removal. 120 Category IC (AIA: 7.31.D9) - - Locate duct humidifiers upstream from the final filters. - - Incorporate a water-removal mechanism into the system.
- - Locate all duct takeoffs sufficiently downstream from the humidifier so that moisture is completely absorbed. 4. Incorporate steam humidifiers, if possible, to reduce potential for microbial proliferation within the system, and avoid use of cool mist humidifiers. Category II 5. Ensure that air intakes and exhaust outlets are located properly in construction of new facilities and renovation of existing facilities. - - Locate exhaust outlets from contaminated areas above roof level to minimize recirculation of exhausted air. 6. Maintain air intakes and inspect filters periodically to ensure proper operation. 3,120,249,250,277 Category IC (AIA: 7.31.D8) 7. Bag dust-filled filters immediately upon removal to prevent dispersion of dust and fungal spores during transport within the facility. 106, 221 Category IB - - Seal or close the bag containing the discarded filter. - - Discard spent filters as regular solid waste, regardless of the area from which they were removed. 221 8. Remove bird roosts and nests near air intakes to prevent mites and fungal spores from entering the ventilation system. 3,98,119 Category IB 9. Prevent dust accumulation by cleaning air-duct grilles in accordance with facility-specific procedures and schedules when rooms are not occupied by patients. 21,120,249,250,277 Category IC, II (AIA: 7.31.D10) 10. Periodically measure output to monitor system function; clean ventilation ducts as part of routine HVAC maintenance to ensure optimum performance. 120,263,264 Category II (AIA: 7.31.D10) C. Use portable, industrial-grade HEPA filter units capable of filtration rates in the range of 300-800 ft³/min to augment removal of respirable particles as needed. 219 Category II M. Whenever feasible, design and install fixed backup ventilation systems for new or renovated construction for PE rooms, AII rooms, operating rooms, and other critical care areas identified by ICRA. 120 A. Establish a multidisciplinary team that includes infection-control staff to coordinate demolition, construction, and renovation projects and consider proactive preventive measures at the inception; produce and maintain summary statements of the team's activities. 17,19,20,97,109,120,249,250, Category IB, IC (AIA: 5.1) B. Educate both the construction team and the health-care staff in immunocompromised patient-care areas regarding the airborne infection risks associated with construction projects, dispersal of fungal spores during such activities, and methods to control the dissemination of fungal spores. 3, 249, 250, 273-277, 1428-1432 Category IB C. Incorporate mandatory adherence agreements for infection control into construction contracts, with penalties for noncompliance and mechanisms to ensure timely correction of problems. 3,120,249, Category IC (AIA: 5.1) D. Establish and maintain surveillance for airborne environmental disease (e.g., aspergillosis) as appropriate during construction, renovation, repair, and demolition activities to ensure the health and safety of immunocompromised patients. 3,64,65,79 Category IB 1. Using active surveillance, monitor for airborne fungal infections in immunocompromised patients. 3,9,64,65 Category IB 2. Periodically review the facility's microbiologic, histopathologic, and postmortem data to identify additional cases. 3,9,64,65 Category IB 3. If cases of aspergillosis or other health-care associated airborne fungal infections occur, aggressively pursue the diagnosis with tissue biopsies and cultures as feasible. 3,64,65,79,249, Category IB E.
Implement infection-control measures relevant to construction, renovation, maintenance, demolition, and repair. 96,97,120,276,277 Category IB, IC (AIA: 5.1, 5.2) 1. Before the project gets underway, perform an ICRA to define the scope of the project and the need for barrier measures. 96,97,120,249, - - Determine if immunocompromised patients may be at risk for exposure to fungal spores from dust generated during the project. 20, 109, 273-275, 277 - - Develop a contingency plan to prevent such exposures. 20,109,277 Category IB, IC (AIA: 5.1) 2. Implement infection-control measures for external demolition and construction activities. 50,249,283 - - Determine if the facility can operate temporarily on recirculated air; if feasible, seal off adjacent air intakes. - - If this is not possible or practical, check the low-efficiency (roughing) filter banks frequently and replace as needed to avoid buildup of particulates. - - Seal windows and reduce wherever possible other sources of outside air intrusion (e.g., open doors in stairwells and corridors), especially in PE areas. Category IB 3. Avoid damaging the underground water distribution system (i.e., buried pipes) to prevent soil and dust contamination of the water. 120,305 Category IB, IC (AIA: 5.1) 121 4. Implement infection-control measures for internal construction activities. 20,49,97,120,249, - - Construct barriers to prevent dust from construction areas from entering patient-care areas; ensure that barriers are impermeable to fungal spores and in compliance with local fire codes. 20,49,97,120,284,312,713,1431 - - Block and seal off return air vents if rigid barriers are used for containment. 120,276,277 - - Implement dust-control measures on surfaces and by diverting pedestrian traffic away from work zones. 20, 49, 97, 120 - - Relocate patients whose rooms are adjacent to work zones, depending upon their immune status, the scope of the project, the potential for generation of dust or water aerosols, and the methods used to control these aerosols. 49,120,281 120,249, Category IB h. Clean work zones and their entrances daily by - wet-wiping tools and tool carts before their removal from the work zone; - placing mats with tacky surfaces inside the entrance; and - covering debris and securing this covering before removing debris from the work zone. 120,249, Category IB i. In patient-care areas, for major repairs that include removal of ceiling tiles and disruption of the space above the false ceiling, use plastic sheets or prefabricated plastic units to contain dust; use a negative pressure system within this enclosure to remove dust; and either pass air through an industrial-grade, portable HEPA filter capable of filtration rates ranging from 300-800 ft³/min, or exhaust air directly to the outside. 49,276,277,281,309 - - properly constructing windows, doors, and intake and exhaust ports; - - maintaining ceilings that are smooth and free of fissures, open joints, and crevices; - - sealing walls above and below the ceiling, and - - monitoring for leakage and making necessary repairs. 3,111,120,317,318 Category IB, IC (AIA: 7.2.D3) 3. Ventilate the room to maintain ≥12 ACH. 3,9,120,241,317,318 Category IC (AIA: 7.2.D) 4. Locate air supply and exhaust grilles so that clean, filtered air enters from one side of the room, flows across the patient's bed, and exits from the opposite side of the room. 3,120,317,318 Category IC (AIA: 7.31.D1) 5. Maintain positive room air pressure (≥2.5 Pa) in relation to the corridor.
3,35,120,317,318 Category IB, IC (AIA: Table 7.2) 6. Maintain airflow patterns and monitor these on a daily basis by using permanently installed visual means of detecting airflow in new or renovated construction, or using other visual methods (e.g., flutter strips, or smoke tubes) in existing PE units. Document the monitoring results. 120,273 Category IC (AIA: 7.2.D6) 7. Install self-closing devices on all room exit doors in protective environments. 120 3. Place smallpox patients in negative pressure rooms at the onset of their illness, preferably using a room with an anteroom if available. 6 Category II D. No recommendation is offered regarding negative pressure or isolation rooms for patients with Pneumocystis carinii pneumonia. 126,131,132 Unresolved issue E. Maintain back-up ventilation equipment (e.g., portable units for fans or filters) for emergency provision of ventilation requirements for AII rooms and take immediate steps to restore the fixed ventilation system function. 4,120,278 Category IC (AIA: 5.1)
# C.V. Infection-Control and Ventilation Requirements for Operating Rooms
A. Implement environmental infection-control and ventilation measures for operating rooms. 1. Maintain positive-pressure ventilation with respect to corridors and adjacent areas. 7,120,356 Category IB, IC (AIA: Table 7.2) 2. Maintain ≥15 ACH, of which ≥3 ACH should be fresh air. 120,357,358 Category IC (AIA: Table 7.2) 3. Filter all recirculated and fresh air through the appropriate filters, providing 90% efficiency (dust-spot testing) at a minimum. 120,362 Category IC (AIA: Table 7.3) 4. In rooms not engineered for horizontal laminar airflow, introduce air at the ceiling and exhaust air near the floor. 120,357,359 B.1) because extubation is a cough-producing procedure. 4,358 Category IB C. Use portable, industrial-grade HEPA filters temporarily for supplemental air cleaning during intubation and extubation for infectious TB patients who require surgery. 4,219,358 Category II 1. Position the units appropriately so that all room air passes through the filter; obtain engineering consultation to determine the appropriate placement of the unit. 4 Category II 2. Switch the portable unit off during the surgical procedure. Category II 3. Provide fresh air as per ventilation standards for operating rooms; portable units do not meet the requirements for the number of fresh ACH. 120,215,219 Category II D. If possible, schedule infectious TB patients as the last surgical cases of the day to maximize the time available for removal of airborne contamination. Category II E. No recommendation is offered for performing orthopedic implant operations in rooms supplied with laminar airflow. 362,364 Unresolved issue F. Maintain backup ventilation equipment (e.g., portable units for fans or filters) for emergency provision of ventilation requirements for operating rooms, and take immediate steps to restore the fixed ventilation system function. 68,120,278,372 Category IB, IC (AIA: 5.1)
# C.VI. Other Potential Infectious Aerosol Hazards in Health-Care Facilities
A. In settings where surgical lasers are used, wear appropriate personal protective equipment, including N95 or N100 respirators, to minimize exposure to laser plumes. 347, 378, 389 Category IC (OSHA; 29 CFR 1910.134,139) B. Use central wall suction units with in-line filters to evacuate minimal laser plumes. 378,382,386,389 Category II C.
Use a mechanical smoke evacuation system with a high-efficiency filter to manage the generation of large amounts of laser plume, when ablating tissue infected with human papilloma virus (HPV) or performing procedures on a patient with extrapulmonary TB. 4,382, Category II
# D. Recommendations-Water
# D.I. Controlling the Spread of Waterborne Microorganisms
A. Practice hand hygiene to prevent the hand transfer of waterborne pathogens, and use barrier precautions (e.g., gloves) as defined by other guidelines. 6,464,577,586,592,1364 - - Flush each outlet until chlorine odor is detected. - - Maintain the elevated chlorine concentration in the system for ≥2 hrs (but ≤24 hrs). 4. Use a very thorough flushing of the water system instead of chlorination if a highly chlorine-resistant microorganism (e.g., Cryptosporidium spp.) is suspected as the water contaminant. Category II F. Flush and restart equipment and fixtures according to manufacturers' instructions. Category II G. Change the pretreatment filter and disinfect the dialysis water system with an EPA-registered product to prevent colonization of the reverse osmosis membrane and downstream microbial contamination. 721 Category II H. Run water softeners through a regeneration cycle to restore their capacity and function. Category II I. If the facility has a water-holding reservoir or water-storage tank, consult the facility engineer or local health department to determine whether this equipment needs to be drained, disinfected with an EPA-registered product, and refilled. Category II J. Implement facility management procedures to manage a sewage system failure or flooding (e.g., arranging with other health-care facilities for temporary transfer of patients or provision of services), and establish communications with the local municipal water utility and the local health department to ensure that advisories are received in a timely manner upon release. 713 These recommendations refer to the spraying or fogging of chemicals (e.g., formaldehyde, phenol-based agents, or quaternary ammonium compounds) as a way to decontaminate environmental surfaces or disinfect the air in patient rooms. The recommendation against fogging was based on studies in the 1970s that reported a lack of microbicidal efficacy (e.g., use of quaternary ammonium compounds in mist applications) but also adverse effects on health-care workers and others in facilities where these methods were used. Furthermore, some of these chemicals are not EPA-registered for use in fogging-type applications. The 2003 recommendations still apply; however, CDC does not yet make a recommendation regarding these newer technologies. This issue will be revisited as additional evidence becomes available. G. Avoid large-surface cleaning methods that produce mists or aerosols or disperse dust in patient-care areas. 9,20,109,272
# H.II. Animal-Assisted Activities, Animal-Assisted Therapy, and Resident Animal Programs
A. Avoid selection of nonhuman primates and reptiles in animal-assisted activities, animal-assisted therapy, or resident animal programs. 1360-1362 Category IB B. Enroll animals that are fully vaccinated for zoonotic diseases and that are healthy, clean, well-groomed, and negative for enteric parasites or otherwise have completed recent anthelminthic treatment under the regular care of a veterinarian. 1349, 1360 Category II C. Enroll animals that are trained with the assistance or under the direction of individuals who are experienced in this field. 1360 Category II D.
Ensure that animals are handled by persons trained in providing activities or therapies safely, and who know the animals' health status and behavior traits. 1349,1360 Category II E. Take prompt action when an incident of biting or scratching by an animal occurs during an animal-assisted activity or therapy. 1. Remove the animal permanently from these programs. 1360 Category II 2. Report the incident promptly to appropriate authorities (e.g., infection-control staff, animal program coordinator, or local animal control). 1360 Category II 3. Promptly clean and treat scratches, bites, or other accidental breaks in the skin. Category II F. Perform an ICRA and work actively with the animal handler prior to conducting an animal-assisted activity or therapy to determine if the session should be held in a public area of the facility or in individual patient rooms. 1349,1360 Category II G. Take precautions to mitigate allergic responses to animals. Category II 1. Minimize shedding of animal dander by bathing animals <24 hours before a visit. 1360 Category II 2. Groom animals to remove loose hair before a visit, or use a therapy animal cape. 1358 Category II H. Use routine cleaning protocols for housekeeping surfaces after therapy sessions. Category II I. Restrict resident animals, including fish in fish tanks, from access to or placement in patient-care areas, food preparation areas, dining areas, laundry, central sterile supply areas, sterile and clean supply storage areas, medication preparation areas, operating rooms, isolation areas, and PE areas. Category II J. Establish a facility policy for regular cleaning of fish tanks, rodent cages, bird cages, and any other animal dwellings and assign this cleaning task to a nonpatient-care staff member; avoid splashing tank water or contaminating environmental surfaces with animal bedding. Category II
# H.III. Protective Measures for Immunocompromised Patients
A. Advise patients to avoid contact with animal feces and body fluids such as saliva, urine, or solid litter box material. 8 Category II B. Promptly clean and treat scratches, bites, or other wounds that break the skin. 8 Category II C. Advise patients to avoid direct or indirect contact with reptiles. 1340 Category IB D. Conduct a case-by-case assessment to determine if animal-assisted activities or animal-assisted therapy programs are appropriate for immunocompromised patients. 1349 Category II E. No recommendation is offered regarding permitting pet visits to terminally ill immunosuppressed patients outside their PE units. Unresolved issue
# H.IV. Service Animals
Edit: An asterisk (*) indicates recommendations that were renumbered for clarity. The renumbering does not constitute a change to the intent of the recommendations. A. Avoid providing access to nonhuman primates and reptiles as service animals. 1340, 1362 Category IB B. Allow service animals access to the facility in accordance with the Americans with Disabilities Act of 1990, unless the presence of the animal creates a direct threat to other persons or a fundamental alteration in the nature of services. 1366,1376 Category IC (U.S. Department of Justice: 28 CFR § 36.302) C. When a decision must be made regarding a service animal's access to any particular area of the health-care facility, evaluate the service animal, the patient, and the health-care situation on a case-by-case basis to determine whether significant risk of harm exists and whether reasonable modifications in policies and procedures will mitigate this risk.
1376 Category IC (Justice: 28 CFR § 36.208 and App.B) D. If a patient must be separated from his or her service animal while in the health-care facility - - ascertain from the person what arrangements have been made for supervision or care of the animal during this period of separation; and - - make appropriate arrangements to address the patient's needs in the absence of the service animal. Category II
# H.V. Animals as Patients in Human Health-Care Facilities
A. Develop health-care facility policies to address the treatment of animals in human health-care facilities. 1. Use the multidisciplinary team approach to policy development, including public media relations in order to disclose and discuss these activities. Category II 2. Exhaust all veterinary facility, equipment, and instrument options before undertaking the procedure. Category II 3. Ensure that the care of the animal is supervised by a licensed veterinarian. Category II B. When animals are treated in human health-care facilities, avoid treating animals in operating rooms or other patient-care areas where invasive procedures are performed (e.g., cardiac catheterization laboratories, or invasive nuclear medicine areas). Category II C. Schedule the animal procedure for the last case of the day for the area, at a time when human patients are not scheduled to be in the vicinity. Category II D. Adhere strictly to standard precautions. Category II E. Clean and disinfect environmental surfaces thoroughly using an EPA-registered product in the room after the animal is removed. Category II F. Allow sufficient ACH to clean the air and help remove airborne dander, microorganisms, and allergens [Appendix B,
# Part III. References
Note: The bold item in parentheses indicates the citation number or the location of this reference listed in the MMWR version of this guideline. microbiological agents they work with. There are four biosafety levels based on the hazards associated with the various microbiological agents. BOD5: the amount of dissolved oxygen consumed in five days by biological processes breaking down organic matter. Bonneting: a floor cleaning method for either carpeted or hard surface floors that uses a circular motion of a large fibrous disc to lift and remove soil and dust from the surface. Capped spur: a pipe leading from the water recirculating system to an outlet that has been closed off ("capped"). A capped spur cannot be flushed, and it might not be noticed unless the surrounding wall is removed. CFU/m³: colony-forming units per cubic meter (of air). Chlamydospores: thick-walled, typically spherical or ovoid resting spores asexually produced by certain types of fungi from cells of the somatic hyphae. Chloramines: compounds containing nitrogen, hydrogen, and chlorine. These are formed by the reaction between hypochlorous acid (HOCl) and ammonia (NH3) and/or organic amines in water. The formation of chloramines in drinking water treatment extends the disinfecting power of chlorine. The term is also referred to as Combined Available Chlorine. Cleaning: the removal of visible soil and organic contamination from a device or surface, using either the physical action of scrubbing with a surfactant or detergent and water, or an energy-based process (e.g., ultrasonic cleaners) with appropriate chemical agents. Coagulation-flocculation: coagulation is the clumping of particles that results in the settling of impurities. It may be induced by coagulants (e.g., lime, alum, and iron salts).
Flocculation in water and wastewater treatment is the agglomeration or clustering of colloidal and finely-divided suspended matter after coagulation by gentle stirring by either mechanical or hydraulic means, such that they can be separated from water or sewage. Commissioning (a room): testing a system or device to ensure that it meets the pre-use specifications as indicated by the manufacturer or predetermined standard, or air sampling in a room to establish a preoccupancy baseline standard of microbial or particulate contamination. The term is also referred to as benchmarking at 77°F (25°C). Completely packaged: functionally packaged, as for laundry. Conidia: asexual spores of fungi borne externally. Conidiophores: specialized hyphae that bear conidia in fungi. Conditioned space: that part of a building that is heated or cooled, or both, for the comfort of the occupants. Contaminant: an unwanted airborne constituent that may reduce the acceptability of air. Convection: the transfer of heat or other atmospheric properties within the atmosphere or in the airspace of an enclosure by the circulation of currents from one region to another, especially by such motion directed upward. Cooling tower: a structure engineered to receive accumulated heat from ventilation systems and equipment and transfer this heat to water, which then releases the stored heat to the atmosphere through evaporative cooling. Critical item (medical instrument): a medical instrument or device that contacts normally sterile areas of the body or enters the vascular system. There is a high risk of infection from such devices if they are microbiologically contaminated prior to use. These devices must be sterilized before use. Dead legs: areas in the water system where water stagnates. A dead leg is a pipe or spur, leading from the water recirculating system to an outlet that is used infrequently, resulting in inadequate flow of water from the recirculating system to the outlet. This inadequate flow reduces the perfusion of heat or chlorine into this part of the water distribution system, thereby adversely affecting the disinfection of the water system in that area. Deionization: removal of ions from water by exchange with other ions associated with fixed charges on a resin bed. Cations are usually removed and H⁺ ions are exchanged; OH⁻ ions are exchanged for anions. Detritus: particulate matter produced by or remaining after the wearing away or disintegration of a substance or tissue. Dew point: the temperature at which a gas or vapor condenses to form a liquid; the point at which moisture begins to condense out of the air. At dew point, air is cooled to the point where it is at 100% relative humidity or saturation. Dialysate: the aqueous electrolyte solution, usually containing dextrose, used to make a concentration gradient between the solution and blood in the hemodialyzer (dialyzer). Dialyzer: a device that consists of two compartments (blood and dialysate) separated by a semipermeable membrane. A dialyzer is usually referred to as an artificial kidney. Diffuser: the grille plate that disperses the air stream coming into the conditioned air space. Direct transmission: involves direct body surface-to-body surface contact and physical transfer of microorganisms between a susceptible host and an infected/colonized person, or exposure to a cloud of infectious particles within 3 feet of the source; the aerosolized particles are >5 μm in size.
Disability: as defined by the Americans with Disabilities Act, a disability is any physical or mental impairment that substantially limits one or more major life activities, including but not limited to walking, talking, seeing, breathing, hearing, or caring for oneself. Disinfection: a generally less lethal process of microbial inactivation (compared to sterilization) that eliminates virtually all recognized pathogenic microorganisms but not necessarily all microbial forms (e.g., bacterial spores). Drain pans: pans that collect water within the HVAC system and remove it from the system. Condensation results when air and steam come together. Drift: circulating water lost from the cooling tower in the form of liquid droplets entrained in the exhaust air stream (i.e., exhaust aerosols from a cooling tower). Drift eliminators: an assembly of baffles or labyrinth passages through which the air passes prior to its exit from the cooling tower. The purpose of a drift eliminator is to remove entrained water droplets from the exhaust air. Droplets: particles of moisture, such as are generated when a person coughs or sneezes, or when water is converted to a fine mist by a device such as an aerator or shower head. These particles may contain infectious microorganisms. Intermediate in size between drops and droplet nuclei, these particles tend to quickly settle out from the air so that any risk of disease transmission is generally limited to persons in close proximity to the droplet source. Droplet nuclei: sufficiently small particles (1-5 μm in diameter) that can remain airborne indefinitely and cause infection when a susceptible person is exposed at or beyond 3 feet of the source of these particles. Dual duct system: an HVAC system that consists of parallel ducts that produce a cold air stream in one and a hot air stream in the other. Dust: an air suspension of particles (aerosol) of any solid material, usually with particle sizes ≤100 μm in diameter. Dust-spot test: a procedure that uses atmospheric air or a defined dust to measure a filter's ability to remove particles. A photometer is used to measure air samples on either side of the filter, and the difference is expressed as a percentage of particles removed. Effective leakage area: the area through which air can enter or leave the room. This does not include supply, return, or exhaust ducts. The smaller the effective leakage area, the better isolated the room. Endotoxin: the lipopolysaccharides of gram-negative bacteria, the toxic character of which resides in the lipid portion. Endotoxins generally produce pyrogenic reactions in persons exposed to these bacterial components. Enveloped virus: a virus whose outer surface is derived from a membrane of the host cell (either nuclear or the cell's outer membrane) during the budding phase of the maturation process. This membrane-derived material contains lipid, a component that makes these viruses sensitive to the action of chemical germicides. Evaporative condenser: a wet-type, heat-rejection unit that produces large volumes of aerosols during the process of removing heat from conditioned space air. Exhaust air: air removed from a space and not reused therein. Exposure: the condition of being subjected to something (e.g., infectious agents) that could have a harmful effect. Fastidious: having complex nutritional requirements for growth, as in microorganisms. Fill: that portion of a cooling tower which makes up its primary heat transfer surface. Fill is alternatively known as "packing."
Finished water: treated, or potable water. Fixed room-air HEPA recirculation systems: nonmobile devices or systems that remove airborne contaminants by recirculating air through a HEPA filter. These may be built into the room and permanently
# Air Sampling for Aerosols Containing Legionellae
Air sampling is an insensitive means of detecting Legionella pneumophila, and is of limited practical value in environmental sampling for this pathogen. In certain instances, however, it can be used to a. demonstrate the presence of legionellae in aerosol droplets associated with suspected bacterial reservoirs; b. define the role of certain devices in disease transmission; and c. quantitate and determine the size of the droplets containing legionellae. 1436 Stringent controls and calibration are necessary when sampling is used to determine particle size and numbers of viable bacteria. 1437 Samplers should be placed in locations where human exposure to aerosols is anticipated, and investigators should wear a NIOSH-approved respirator (e.g., N95 respirator) if sampling involves exposure to potentially infectious aerosols. Methods used to sample air for legionellae include impingement in liquid, impaction on solid medium, and sedimentation using settle plates. 1436 The Chemical Corps-type all-glass impingers (AGI) with the stem 30 mm from the bottom of the flask have been used successfully to sample for legionellae. 1436 Because of the velocity at which air samples are collected, clumps tend to become fragmented, leading to a more accurate count of bacteria present in the air. The disadvantages of this method are a. the velocity of collection tends to destroy some vegetative cells; b. the method does not differentiate particle sizes; and c. AGIs are easily broken in the field. Yeast extract broth (0.25%) is the recommended liquid medium for AGI sampling of legionellae; 1437 standard methods for water samples can be used to culture these samples. Andersen samplers are viable particle samplers in which particles pass through jet orifices of decreasing size in cascade fashion until they impact on an agar surface. 1218 The agar plates are then removed and incubated. The stage distribution of the legionellae should indicate the extent to which the bacteria would have penetrated the respiratory system. The advantages of this sampling method are a. the equipment is more durable during use; b. the sampler can determine the number and size of droplets containing legionellae; c. the agar plates can be placed directly in an incubator with no further manipulations; and d. both selective and nonselective BCYE agar can be used. If the samples must be shipped to a laboratory, they should be packed and shipped without refrigeration as soon as possible.
# Calculation of Air Sampling Results
Assuming that each colony on the agar plate is the growth from a single bacteria-carrying particle, the contamination of the air being sampled is determined from the number of colonies counted. The airborne microorganisms may be reported in terms of the number per cubic foot of air sampled. The following formulas can be applied to convert colony counts to organisms per cubic foot of air sampled.
1218 For solid agar impactor samplers:
N = C / (R × P)
where N = number of organisms collected per cubic foot of air sampled; C = total plate count; R = airflow rate in cubic feet per minute; and P = duration of the sampling period in minutes.
For liquid impingers:
N = (C × V) / (Q × P × R)
where C = total number of colonies from all aliquots plated; V = final volume in mL of collecting media; Q = total number of mL plated; and P, R, and N are defined as above.
3. To satisfy exhaust needs, replacement air from outside is necessary. Table B.3 does not attempt to describe specific amounts of outside air to be supplied to individual spaces except for certain areas such as those listed. Distribution of the outside air, added to the system to balance required exhaust, shall be as required by good engineering practice. 4. Number of air changes may be reduced when the room is unoccupied if provisions are made to ensure that the number of air changes indicated is reestablished any time the space is being utilized. Adjustments shall include provisions so that the direction of air movement shall remain the same when the number of air changes is reduced. Areas not indicated as having continuous directional control may have ventilation systems shut down when space is unoccupied and ventilation is not otherwise needed. 5. Air from areas with contamination and/or odor problems shall be exhausted to the outside and not recirculated to other areas. Note that individual circumstances may require special consideration for air exhaust to outside. 6. Because of cleaning difficulty and potential for buildup of contamination, recirculating room units shall not be used in areas marked "No." Isolation rooms may be ventilated by reheat induction units in which only the primary air supplied from a central system passes through the reheat unit. Gravity-type heating or cooling units such as radiators or convectors shall not be used in special care areas. 7. The ranges listed are the minimum and maximum limits where control is specifically needed. See A8.31.D in the AIA guideline (reference 120) for additional information. 8. Where temperature ranges are indicated, the systems shall be capable of maintaining the rooms at any point within the range. A single figure indicates a heating or cooling capacity of at least the indicated temperature. This is usually applicable where residents may be undressed and require a warmer environment. Nothing in these guidelines shall be construed as precluding the use of temperatures lower than those noted when the residents' comfort and medical conditions make lower temperatures desirable. Unoccupied areas such as storage rooms shall have temperatures appropriate for the function intended. 9. See A8.31.D1 in the AIA guideline (reference 120). 10. Food preparation facilities shall have ventilation systems whose air supply mechanisms are interfaced appropriately with exhaust hood controls or relief vents so that exfiltration or infiltration to or from exit corridors does not compromise the exit corridor restrictions of NFPA 90A, the pressure requirements of NFPA 96, or the maximum defined in the table. The number of air changes may be reduced or varied to any extent required for odor control when the space is not in use. heterotrophic plate count would suggest a greater rate and extent of biofilm formation in a health-care facility water system. The water supplied to the facility should also contain <1 coliform bacteria/100 mL.
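As a worked illustration of the colony-count conversion formulas given above for air sampling, the following is a minimal sketch in Python; the function names and example values are hypothetical and are not drawn from the guideline or its cited protocols.

```python
def impactor_count(total_plate_count, airflow_cfm, minutes):
    """Solid agar impactor: N = C / (R x P), in organisms per cubic foot."""
    return total_plate_count / (airflow_cfm * minutes)


def impinger_count(colonies, final_volume_ml, plated_ml, airflow_cfm, minutes):
    """Liquid impinger: N = (C x V) / (Q x P x R), in organisms per cubic foot."""
    return (colonies * final_volume_ml) / (plated_ml * minutes * airflow_cfm)


# Example: 25 colonies collected at 1.0 cfm over a 10-minute sampling period.
print(impactor_count(25, airflow_cfm=1.0, minutes=10))  # 2.5 organisms/ft^3

# Example: 40 colonies from 2.0 mL plated out of a 20-mL collecting volume,
# sampled at ~0.44 cfm (12.5 L/min) for 30 minutes.
print(impinger_count(40, final_volume_ml=20, plated_ml=2.0,
                     airflow_cfm=0.44, minutes=30))     # ~30.3 organisms/ft^3
```

Note how the impinger formula scales the plated aliquot count back up by the ratio of the final collecting volume to the volume plated before normalizing by the total volume of air sampled.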
Coliform bacteria are organisms whose presence in the distribution system could indicate fecal contamination. It has been shown that coliform bacteria can colonize biofilms within drinking water systems. Intermittent contamination of a water system with these organisms could lead to colonization of the system. Water samples can be collected from throughout the health-care facility system, including both hot and cold water sources; samples should be cultured by standard methods. 945 If heterotrophic plate counts in samples from the facility water system are higher than those from samples collected at the point of water entry to the building, it can be concluded that the facility water quality has diminished. If biofilms are detected in the facility water system and determined by an epidemiologic and environmental investigation to be a reservoir for health-care associated pathogens, the municipal water supplier could be contacted with a request to provide higher chlorine residuals in the distribution system, or the health-care facility could consider installing a supplemental chlorination system. Sample collection sites for biofilm in health-care facilities include a. hot water tanks; b. shower heads; and c. faucet aerators, especially in immunocompromised patient-care areas. Swabs should be placed into tubes containing phosphate-buffered water (pH 7.2) or phosphate-buffered saline, shipped to the laboratory under refrigeration, and processed within 24 hrs. of collection. Samples are suspended by vortexing with sterile glass beads and plated onto a nonselective medium (e.g., Plate Count Agar or R2A medium) and selective media (e.g., media for Legionella spp. isolation) after serial dilution. If the plate counts are elevated above levels in the water (i.e., comparing the plate count per square centimeter of swabbed surface to the plate count per milliliter of water), then biofilm formation can be suspected. In the case of an outbreak, it would be advisable to isolate organisms from these plates to determine whether the suspect organisms are present in the biofilm or water samples and compare them to the organisms isolated from patient specimens.
# Water and Dialysate Sampling Strategies in Dialysis
In order to detect the low total viable heterotrophic plate counts outlined by the current AAMI standards for water and dialysate in dialysis settings, it is necessary to use standard quantitative culture techniques with appropriate sensitivity levels. 792,832,833 The membrane filter technique is particularly suited for this application because it permits large volumes of water to be assayed. 792,834 Because the membrane filter technique may not be readily available in clinical laboratories, the spread plate assay can be used as an alternative. 834 If the spread plate assay is used, however, the standard prohibits the use of a calibrated loop when applying sample to the plate. 792 The prohibition is based on the low sensitivity of the calibrated loop. A standard calibrated loop transfers 0.001 mL of sample to the culture medium, so that the minimum sensitivity of the assay is 1,000 CFU/mL. This level of sensitivity is unacceptable when the maximum allowable limit for microorganisms is 200 CFU/mL. Therefore, when the spread plate method is used, a pipette must be used to place 0.1-0.5 mL of water on the culture medium.
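The sensitivity argument above reduces to simple arithmetic: the smallest detectable concentration is one colony divided by the volume plated. A minimal sketch (the helper function is hypothetical, not part of the AAMI standard):

```python
def detection_limit_cfu_per_ml(volume_plated_ml):
    """Smallest detectable concentration: one colony in the plated volume."""
    return 1.0 / volume_plated_ml

print(detection_limit_cfu_per_ml(0.001))  # calibrated loop: 1000.0 CFU/mL
print(detection_limit_cfu_per_ml(0.1))    # 0.1-mL pipette:    10.0 CFU/mL
print(detection_limit_cfu_per_ml(0.5))    # 0.5-mL pipette:     2.0 CFU/mL
```

Against the 200 CFU/mL maximum allowable limit, the loop's 1,000 CFU/mL detection limit cannot demonstrate compliance, whereas plating 0.1-0.5 mL resolves concentrations well below the limit.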
The current AAMI standard specifically prohibits the use of nutrient-rich media (e.g., blood agar and chocolate agar) in dialysis water and dialysate assays because these culture media are too rich for growth of the naturally occurring organisms found in water. 792 Debate continues within AAMI, however, as to the most appropriate culture medium and incubation conditions to be used. The original clinical observations on which the microbiological requirements of this standard were based used Standard Methods Agar (SMA), a medium containing relatively few nutrients. 666 The use of tryptic soy agar (TSA), a general purpose medium for isolating and cultivating microorganisms, was recommended in later versions of the standard because it was thought to be more appropriate for culturing bicarbonate-containing dialysate. 788,789,835 Moreover, culturing systems based on TSA are readily available from commercial sources. Several studies, however, have shown that the use of nutrient-poor media, such as R2A, results in an increased recovery of bacteria from water. 1451,1452 The original standard also specified incubation for 48 hours at 95°F-98.6°F (35°C-37°C) before enumeration of bacterial colonies. Extending the culturing time up to 168 hours (7 days) and using incubation temperatures of 73.4°F-82.4°F (23°C-28°C) have also been shown to increase the recovery of bacteria. 1451,1452 Other investigators, however, have not found such clear-cut differences between culturing techniques. 835,1453 After considerable discussion, the AAMI Committee has not reached a consensus regarding changes in the assay technique, and the use of TSA or its equivalent for 48 hours at 95°F-98.6°F (35°C-37°C) remains the recommended method. It should be recognized, however, that these culturing conditions may underestimate the bacterial burden in the water and fail to identify the presence of some organisms. Specifically, the recommended method may not detect the presence of various NTM that have been associated with several outbreaks of infection in dialysis units. 31,32 In these instances, however, the high numbers of mycobacteria in the water were related to the total heterotrophic plate counts, each of which was significantly greater than that allowable by the AAMI standard. Additionally, the recommended method will not detect fungi and yeast, which have been shown to contaminate water used for hemodialysis applications. 1454 Biofilm on the surface of the pipes may hide viable bacterial colonies, even though no viable colonies are detected in the water using sensitive culturing techniques. 1455 Many disinfection processes remove biofilm poorly, and a rapid increase in the level of bacteria in the water following disinfection may indicate significant biofilm formation. Therefore, although the results of microbiological surveillance obtained using the test methods outlined above may be useful in guiding disinfection schedules and in demonstrating compliance with AAMI standards, they should not be taken as an indication of the absolute microbiological purity of the water. 792 Endotoxin can be tested by one of two types of assays: a. a kinetic test method, or b. a gel-clot assay. Endotoxin units are assayed by the Limulus Amebocyte Lysate (LAL) method. Because endotoxins differ in their activity on a mass basis, their activity is referenced to a standard Escherichia coli endotoxin. The current standard (EC-6) is prepared from E. coli O113:H10.
The relationship between mass of endotoxin and its activity varies with both the lot of LAL and the lot of control standard endotoxin used. Since standards for endotoxin were harmonized in 1983 with the introduction of EC-5, the relationship between mass and activity of endotoxin has been approximately 5-10 EU/ng. Studies to harmonize standards have led to the measurement of endotoxin units (EU), where 5 EU is equivalent to 1 ng E. coli O55:B5 endotoxin. 1456 In summary, water used to prepare dialysate and to reprocess hemodialyzers should contain a total microbial count of ≤200 CFU/mL, as determined by assay on TSA for 48 hrs. at 96.8°F (36°C), and ≤2 endotoxin units (EU) per mL. The dialysate at the end of a dialysis treatment should not contain >2,000 CFU/mL. 31, 32, 668, 789, 792

# Water Sampling Strategies and Culture Techniques for Detecting Legionellae

Legionella spp. are ubiquitous and can be isolated from 20%-40% of freshwater environments, including man-made water systems. 1457,1458 In health-care facilities, where legionellae in potable water rarely result in disease among immunocompromised patients, courses of remedial action are unclear. Scheduled microbiologic monitoring for legionellae remains controversial because the presence of legionellae is not necessarily evidence of a potential for causing disease. 1459 CDC recommends aggressive disinfection measures for cleaning and maintaining devices known to transmit legionellae, but does not recommend regularly scheduled microbiologic assays for the bacteria. 396 However, scheduled monitoring of potable water within a hospital might be considered in certain settings where persons are highly susceptible to illness and mortality from Legionella infection (e.g., hematopoietic stem cell transplantation units and solid organ transplant units). 9 Also, after an outbreak of legionellosis, health officials agree that monitoring is necessary to identify the source and to evaluate the efficacy of biocides or other prevention measures.

Examination of water samples is the most efficient microbiologic method for identifying sources of legionellae and is an integral part of an epidemiologic investigation into health-care associated Legionnaires disease. Because of the diversity of plumbing and HVAC systems in health-care facilities, the number and types of sites to be tested must be determined before collection of water samples. One environmental sampling protocol that addresses sampling site selection in hospitals might serve as a prototype for sampling in other institutions. 1209 Any water source that might be aerosolized should be considered a potential source for transmission of legionellae. The bacteria are rarely found in municipal water supplies and tend to colonize plumbing systems and point-of-use devices. To colonize, legionellae usually require a temperature range of 77°F-108°F (25°C-42.2°C) and are most commonly located in hot water systems. 1460 Legionellae do not survive drying. Therefore, air-conditioning equipment condensate, which frequently evaporates, is not a likely source. 1461

Water samples and swabs from point-of-use devices or system surfaces should be collected when sampling for legionellae (Box C.1). 1437 Swabs of system surfaces allow sampling of biofilms, which frequently contain legionellae. When culturing faucet aerators and shower heads, swabs of surface areas should be collected first; water samples are collected after aerators or shower heads are removed from their pipes.
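The dialysis water and dialysate limits summarized earlier in this section lend themselves to a simple acceptance check; a minimal sketch (Python), assuming hypothetical result values, is shown below.

```python
# Minimal sketch of the AAMI limits summarized above; result values are hypothetical.
def water_acceptable(cfu_per_ml, endotoxin_eu_per_ml):
    """Water for dialysate preparation and hemodialyzer reprocessing:
    <=200 CFU/mL and <=2 EU/mL."""
    return cfu_per_ml <= 200 and endotoxin_eu_per_ml <= 2

def dialysate_acceptable(cfu_per_ml):
    """Dialysate at the end of a treatment: <=2,000 CFU/mL."""
    return cfu_per_ml <= 2000

print(water_acceptable(150, 1.5))   # True
print(dialysate_acceptable(2400))   # False
```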
Collection and culture techniques are outlined (Box C.2). Swabs can be streaked directly onto buffered charcoal yeast extract agar (BCYE) plates if the plates are available at the collection site. If the swabs and water samples must be transported back to a laboratory for processing, immersing individual swabs in sample water minimizes drying during transit. Place swabs and water samples in insulated coolers to protect specimens from temperature extremes.

Box C.1. Potential sampling sites for Legionella spp. in health-care facilities*
- Potable water systems: incoming water main, water softener unit, holding tanks, cisterns, water heater tanks (at the inflows and outflows)
- Potable water outlets, especially those in or near patient rooms: faucets or taps, showers
- Cooling towers and evaporative condensers: makeup water (e.g., added to replace water lost because of evaporation, drift, or leakage), basin (i.e., area under the tower for collection of cooled water), sump (i.e., section of basin from which cooled water returns to heat source), heat sources (e.g., chillers)
- Humidifiers (e.g., nebulizers): bubblers for oxygen, water used for respiratory therapy equipment

2. Collect culture swabs of internal surfaces of faucets, aerators, and shower heads in a sterile, screw-top container (e.g., 50 mL plastic centrifuge tube). Submerge each swab in 5-10 mL of sample water taken from the same device from which the sample was obtained.
3. Transport samples and process in a laboratory proficient at culturing water specimens for Legionella spp. as soon as possible after collection. (Samples may be transported at room temperature but must be protected from temperature extremes. Samples not processed within 24 hours of collection should be refrigerated.)
4. Test samples for the presence of Legionella spp. by using semiselective culture media, using procedures specific to the cultivation and detection of Legionella spp.
- Detection of Legionella spp. antigen by the direct fluorescent antibody technique is not suitable for environmental samples.
- Use of polymerase chain reaction for identification of Legionella spp. is not recommended until more data regarding the sensitivity and specificity of this procedure are available.

# Procedure for Cleaning Cooling Towers and Related Equipment

I. Perform these steps prior to chemical disinfection and mechanical cleaning.
A. Provide protective equipment to workers who perform the disinfection, to prevent their exposure to chemicals used for disinfection and aerosolized water containing Legionella spp. Protective equipment may include full-length protective clothing, boots, gloves, goggles, and a full- or half-face mask that combines a HEPA filter and chemical cartridges to protect against airborne chlorine levels of up to 10 mg/L.
B. Shut off cooling tower.
1. Shut off the heat source, if possible.
2. Shut off fans, if present, on the cooling tower/evaporative condenser (CT/EC).
3. Shut off the system blowdown (i.e., purge) valve.
4. Shut off the automated blowdown controller, if present, and set the system controller to manual.
5. Keep make-up water valves open.
6. Close building air-intake vents within at least 30 meters of the CT/EC until after the cleaning procedure is complete.
7. Continue operating pumps for water circulation through the CT/EC.

# II. Perform these chemical disinfection procedures.
A. Add fast-release, chlorine-containing disinfectant in pellet, granular, or liquid form, and follow safety instructions on the product label.
Use EPA-registered products, if available. Examples of disinfectants include sodium hypochlorite (NaOCl) or calcium hypochlorite (Ca(OCl)2), calculated to achieve an initial free residual chlorine (FRC) level of 50 mg/L. Monitor the FRC by using an FRC-measuring device with the DPD method (e.g., a swimming-pool test kit), and measure the pH with a pH meter every 15 minutes for 2 hours. Add chlorine as needed to maintain the FRC at ≥10 mg/L. Because the biocidal effect of chlorine is reduced at a higher pH, adjust the pH to 7.5-8.0. The pH may be lowered by using any acid (e.g., muriatic acid or sulfuric acid used for maintenance of swimming pools) that is compatible with the treatment chemicals.
E. Two hours after adding disinfectant and dispersant, or after the FRC level is stable at ≥10 mg/L, monitor at 2-hour intervals and maintain the FRC at ≥10 mg/L for 24 hours.
F. After the FRC level has been maintained at ≥10 mg/L for 24 hours, drain the system. CT/EC water may be drained safely into the sanitary sewer. Municipal water and sewerage authorities should be contacted regarding local regulations. If a sanitary sewer is not available, consult local or state authorities (e.g., a department of natural resources or environmental protection) regarding disposal of water. If necessary, the drain-off may be dechlorinated by dissipation or chemical neutralization with sodium bisulfite.

- sharps. 2 Category II
B. Consult federal, state, and local regulations to determine if other waste items are considered regulated medical wastes. 967,1407,1408

# I.II. Disposal Plan for Regulated Medical Wastes
A. Develop a plan for the collection, handling, predisposal treatment, and terminal disposal of regulated medical wastes. 967

# I.III. Handling, Transporting, and Storing Regulated Medical Wastes
A. Inform personnel involved in the handling and disposal of potentially infective waste of the possible health and safety hazards; ensure that they are trained in appropriate handling and disposal methods. 967 Category IC (OSHA: 29 CFR 1910.1030 § g.2.i)
B. Manage the handling and disposal of regulated medical wastes generated in isolation areas by using the same methods as for regulated medical wastes from other patient-care areas. 2 Category II
C. Use proper sharps disposal strategies. 967 Category IC (OSHA: 29 CFR 1910.1030 § d.4.iii.A)
1. Use a sharps container capable of maintaining its impermeability after waste treatment to avoid subsequent physical injuries during final disposal. 967 Category IC (OSHA: 29 CFR 1910.1030 § d.4.iii.A)
2. Place disposable syringes with needles, including sterile sharps that are being discarded, scalpel blades, and other sharp items into puncture-resistant containers located as close as practical to the point of use. 967 Category IC (OSHA: 29 CFR 1910.1030 § d.4.iii.A)
3. Do not bend, recap, or break used syringe needles before discarding them into a container. 6,967,1415 Category IC (OSHA: 29 CFR 1910.1030 § d.2.vii and § d.2.vii.A)
D. Store regulated medical wastes awaiting treatment in a properly ventilated area that is inaccessible to vertebrate pests; use waste containers that prevent the development of noxious odors. Category IC (States; AHJ)
E. If treatment options are not available at the site where the medical waste is generated, transport regulated medical wastes in closed, impervious containers to the on-site treatment location or to another facility for treatment as appropriate. Category IC (States; AHJ)

# I.IV.
Treatment and Disposal of Regulated Medical Wastes
A. Treat regulated medical wastes by using a method (e.g., steam sterilization, incineration, interment, or an alternative treatment technology) approved by the appropriate authority having jurisdiction (AHJ) (e.g., states, Indian Health Service, Veterans Affairs) before disposal in a sanitary landfill. Category IC (States, AHJ)
B. Follow precautions for treating microbiological wastes (e.g., amplified cultures and stocks of microorganisms). 1013 Category IC (DHHS: BMBL)
1. Biosafety level 4 laboratories must inactivate microbiological wastes in the laboratory by using an approved inactivation method (e.g., autoclaving) before transport to and disposal in a sanitary landfill. 1013 Category IC (DHHS: BMBL)
2. Biosafety level 3 laboratories must inactivate microbiological wastes in the laboratory by using an approved inactivation method (e.g., autoclaving) or incinerate them at the facility before transport to and disposal in a sanitary landfill. 1013 Category IC (DHHS: BMBL)
C. Biosafety levels 1 and 2 laboratories should develop strategies to inactivate amplified microbial cultures and stocks onsite by using an approved inactivation method (e.g., autoclaving) instead of packaging and shipping untreated wastes to an offsite facility for treatment and disposal. 1013, Category II

# Part IV. Appendices

# Appendix A. Glossary of Terms

Acceptable indoor air quality: air in which there are no known contaminants at harmful concentrations as determined by knowledgeable authorities and with which a substantial majority (≥80%) of the people exposed do not express dissatisfaction.
ACGIH: American Conference of Governmental Industrial Hygienists.
Action level: the concentration of a contaminant at which steps should be taken to interrupt the trend toward higher, unacceptable levels.
Aerosol: particles of respirable size generated by both humans and environmental sources and that have the capability of remaining viable and airborne for extended periods in the indoor environment.
AIA: American Institute of Architects, a professional group responsible for publishing the Guidelines for Design and Construction of Hospitals and Healthcare Facilities, a consensus document for design and construction of health-care facilities endorsed by the U.S. Department of Health and Human Services, health-care professionals, and professional organizations.
Air changes per hour (ACH): the ratio of the volume of air flowing through a space in a certain period of time (the airflow rate) to the volume of that space (the room volume). This ratio is expressed as the number of air changes per hour (ACH).
Air mixing: the degree to which air supplied to a room mixes with the air already in the room, usually expressed as a mixing factor. This factor varies from 1 (for perfect mixing) to 10 (for poor mixing). It is used as a multiplier to determine the actual airflow required (i.e., the recommended ACH multiplied by the mixing factor equals the actual ACH required).
Airborne transmission: a means of spreading infection when airborne droplet nuclei (small-particle residue of evaporated droplets ≤5 μm in size containing microorganisms that remain suspended in air for long periods of time) are inhaled by the susceptible host.
Air-cleaning system: a device or combination of devices applied to reduce the concentration of airborne contaminants (e.g., microorganisms, dusts, fumes, aerosols, other particulate matter, and gases).
Air conditioning: the process of treating air to meet the requirements of a conditioned space by controlling its temperature, humidity, cleanliness, and distribution.
Allogeneic: non-twin, non-self. The term refers to transplanted tissue from a donor closely matched to a recipient but not related to that person.
Ambient air: the air surrounding an object.
Anemometer: a flow meter which measures the wind force and velocity of air. An anemometer is often used as a means of determining the volume of air being drawn into an air sampler.
Anteroom: a small room leading from a corridor into an isolation room. This room can act as an airlock, preventing the escape of contaminants from the isolation room into the corridor.
ASHE: American Society for Healthcare Engineering, an association affiliated with the American Hospital Association.
ASHRAE: American Society of Heating, Refrigerating, and Air-Conditioning Engineers Inc.
Autologous: self. The term refers to transplanted tissue whose source is the same as the recipient, or an identical twin.
Automated cycler: a machine used during peritoneal dialysis which pumps fluid into and out of the patient while he/she sleeps.
Biochemical oxygen demand (BOD): a measure of the amount of oxygen removed from aquatic environments by aerobic microorganisms for their metabolic requirements. Measurement of BOD is used to determine the level of organic pollution of a stream or lake. The greater the BOD, the greater the degree of water pollution. The term is also referred to as Biological Oxygen Demand (BOD).
Biological oxygen demand (BOD): an indirect measure of the concentration of biologically degradable material present in organic wastes (pertaining to water quality). It usually reflects the amount of oxygen consumed in five days by biological processes breaking down organic waste (BOD5).
Biosafety level: a combination of microbiological practices, laboratory facilities, and safety equipment determined to be sufficient to reduce or prevent occupational exposures of laboratory personnel to the microorganisms they handle.
Intermediate-level disinfection: a disinfection process that inactivates vegetative bacteria, most fungi, mycobacteria, and most viruses (particularly the enveloped viruses), but does not inactivate bacterial spores.
Isoform: a possible configuration (tertiary structure) of a protein molecule. With respect to prion proteins, the molecules with large amounts of α-conformation are the normal isoform of that particular protein, whereas those prions with large amounts of β-sheet conformation are the proteins associated with the development of spongiform encephalopathy (e.g., Creutzfeldt-Jakob disease [CJD]).
Laminar flow: HEPA-filtered air that is blown into a room at a rate of 90 ± 10 feet/min in a unidirectional pattern with 100 ACH-400 ACH.
Large enveloped virus: viruses whose particle diameter is >50 nm and whose outer surface is covered by a lipid-containing structure derived from the membranes of the host cells. Examples of large enveloped viruses include influenza viruses, herpes simplex viruses, and poxviruses.
Laser plume: the transfer of electromagnetic energy into tissues which results in a release of particles, gases, and tissue debris.
Lipid-containing viruses: viruses whose particle contains lipid components. The term is generally synonymous with enveloped viruses whose outer surface is derived from host cell membranes. Lipid-containing viruses are sensitive to the inactivating effects of liquid chemical germicides.
Lithotriptors: instruments used for crushing calculi (i.e., calcified stones and sand) in the bladder or kidneys.
Low-efficiency filter: the prefilter with a particle-removal efficiency of approximately 30% through which incoming air first passes. See also Prefilter.
Low-level disinfection: a disinfection process that will inactivate most vegetative bacteria, some fungi, and some viruses, but cannot be relied upon to inactivate resistant microorganisms (e.g., mycobacteria or bacterial spores).
Makeup air: outdoor air supplied to the ventilation system to replace exhaust air.
Makeup water: a cold water supply source for a cooling tower.
Manometer: a device that measures the pressure of liquids and gases. A manometer is used to verify air filter performance by measuring pressure differentials on either side of the filter.
Membrane filtration: an assay method suitable for recovery and enumeration of microorganisms from liquid samples. This method is used when sample volume is large and anticipated microbial contamination levels are low.
Mesophilic: that which favors a moderate temperature. For mesophilic bacteria, a temperature range of 68°F-131°F (20°C-55°C) is favorable for their growth and proliferation.
Mixing box: the site where the cold and hot air streams mix in the HVAC system, usually situated close to the air outlet for the room.
Mixing faucet: a faucet that mixes hot and cold water to produce water at a desired temperature.
MMAD: Mass Median Aerodynamic Diameter. This is the unit used by ACGIH to describe the size of particles when particulate air sampling is conducted.
Moniliaceous: hyaline or brightly colored. This is a laboratory term for the distinctive characteristics of certain opportunistic fungi in culture (e.g., Aspergillus spp. and Fusarium spp.).
Monochloramine: the result of the reaction between chlorine and ammonia that contains only one chlorine atom. Monochloramine is used by municipal water systems as a water treatment.
Natural ventilation: the movement of outdoor air into a space through intentionally provided openings (i.e., windows, doors, or nonpowered ventilators).
Negative pressure: air pressure differential between two adjacent airspaces such that air flow is directed into the room relative to the corridor ventilation (i.e., room air is prevented from flowing out of the room and into adjacent areas).
Neutropenia: a medical condition in which the patient's concentration of neutrophils is substantially less than that in the normal range. Severe neutropenia occurs when the concentration is <1,000 polymorphonuclear cells/μL for 2 weeks or <100 polymorphonuclear cells/μL for 1 week, particularly for hematopoietic stem cell transplant (HSCT) recipients.
Noncritical devices: medical devices or surfaces that come into contact with only intact skin. The risk of infection from use of these devices is low.
Non-enveloped virus: a virus whose particle is not covered by a structure derived from a membrane of the host cell. Non-enveloped viruses have little or no lipid compounds in their biochemical composition, a characteristic that is significant to their inherent resistance to the action of chemical germicides.
Nosocomial: an occurrence, usually an infection, that is acquired in a hospital as a result of medical care.
NTM: nontuberculous mycobacteria. These organisms are also known as atypical mycobacteria, or as "Mycobacteria other than tuberculosis" (MOTT). This descriptive term refers to any of the fast- or slow-growing Mycobacterium spp.
found primarily in natural or man-made waters, but it excludes Mycobacterium tuberculosis and its variants.
Nuisance dust: generally innocuous dust, not recognized as the direct cause of serious pathological conditions.
Oocysts: a cyst in which sporozoites are formed; a reproductive aspect of the life cycle of a number of parasitic agents (e.g., Cryptosporidium spp. and Cyclospora spp.).
Outdoor air: air taken from the external atmosphere and, therefore, not previously circulated through the ventilation system.
Parallel streamlines: a unidirectional airflow pattern achieved in a laminar flow setting, characterized by little or no mixing of air.
Particulate matter (particles): a state of matter in which solid or liquid substances exist in the form of aggregated molecules or particles. Airborne particulate matter is typically in the size range of 0.01-100 μm diameter.
Pasteurization: a disinfecting method for liquids during which the liquids are heated to 140°F (60°C) for a short time (≥30 min) to significantly reduce the numbers of pathogenic or spoilage microorganisms.
Plinth: a treatment table or a piece of equipment used to reposition the patient for treatment.
Portable room-air HEPA recirculation units: free-standing portable devices that remove airborne contaminants by recirculating air through a HEPA filter.
Positive pressure: air pressure differential between two adjacent air spaces such that air flow is directed from the room relative to the corridor ventilation (i.e., air from corridors and adjacent areas is prevented from entering the room).
Potable (drinking) water: water that is fit to drink. The microbiological quality of this water is defined by EPA microbiological standards from the Surface Water Treatment Rule:
a. Giardia lamblia: 99.9% killed/inactivated;
b. viruses: 99.99% inactivated;
c. Legionella spp.: no limit, but if Giardia and viruses are inactivated, Legionella will also be controlled;
d. heterotrophic plate count (HPC): ≤500 CFU/mL; and
e. ≤5% of water samples total coliform-positive in a month.
PPE: Personal Protective Equipment.
ppm: parts per million. The term is a measure of concentration in solution. Chlorine bleaches (undiluted) that are available in the U.S. (5.25%-6.15% sodium hypochlorite) contain approximately 50,000-61,500 parts per million of free and available chlorine.
Prefilter: the first filter for incoming fresh air in an HVAC system. This filter is approximately 30% efficient in removing particles from the air. See also Low-Efficiency Filter.
Prion: a class of agent associated with the transmission of diseases known as transmissible spongiform encephalopathies (TSEs). Prions are considered to consist of protein only, and the abnormal isoform of this protein is thought to be the agent that causes diseases such as Creutzfeldt-Jakob disease (CJD), kuru, scrapie, bovine spongiform encephalopathy (BSE), and the human version of BSE, variant CJD (vCJD).
Product water: water produced by a water treatment system or individual component of that system.
Protective environment: a special care area, usually in a hospital, designed to prevent transmission of opportunistic airborne pathogens to severely immunosuppressed patients.
Pseudoepidemic (pseudo-outbreak): a cluster of positive microbiologic cultures in the absence of clinical disease. A pseudoepidemic usually results from contamination of the laboratory apparatus and process used to recover microorganisms.
Pyrogenic: an endotoxin burden such that a patient would receive ≥5 endotoxin units (EU) per kilogram of body weight per hour, thereby causing a febrile response. In dialysis this usually refers to water or dialysate having endotoxin concentrations of ≥5 EU/mL.
Rank order: a strategy for assessing overall indoor air quality and filter performance by comparing airborne particle counts from lowest to highest (i.e., from the best filtered air spaces to those with the least filtration).
RAPD: a method of genotyping microorganisms by randomly amplified polymorphic DNA. This is one version of the polymerase chain reaction method.
Recirculated air: air removed from the conditioned space and intended for reuse as supply air.
Relative humidity: the ratio of the amount of water vapor in the atmosphere to the amount necessary for saturation at the same temperature. Relative humidity is expressed in terms of percent and measures the percentage of saturation. At 100% relative humidity, the air is saturated. The relative humidity decreases when the temperature is increased without changing the amount of moisture in the air.
Reprocessing (of medical instruments): the procedures or steps taken to make a medical instrument safe for use on the next patient. Reprocessing encompasses both cleaning and the final or terminal step (i.e., sterilization or disinfection), which is determined by the intended use of the instrument according to the Spaulding classification.
Residuals: the presence and concentration of a chemical in media (e.g., water) or on a surface after the chemical has been added.
Reservoir: a nonclinical source of infection.
Respirable particles: those particles that penetrate into and are deposited in the nonciliated portion of the lung. Particles >10 μm in diameter are not respirable.
Return air: air removed from a space to be then recirculated.
Reverse osmosis (RO): an advanced method of water or wastewater treatment that relies on a semipermeable membrane to separate water from pollutants. An external force is used to reverse the normal osmotic process, resulting in the solvent moving from a solution of higher concentration to one of lower concentration.
Riser: water piping that connects the circulating water supply line, from the level of the base of the tower or supply header, to the tower's distribution system.
RODAC: Replicate Organism Direct Agar Contact. This term refers to a nutrient agar plate whose convex agar surface is directly pressed onto an environmental surface for the purpose of microbiologic sampling of that surface.
Room-air HEPA recirculation systems and units: devices (either fixed or portable) that remove airborne contaminants by recirculating air through a HEPA filter.
Routine sampling: environmental sampling conducted without a specific, intended purpose and with no action plan dependent on the results obtained.
Sanitizer: an agent that reduces microbial contamination to safe levels as judged by public health standards or requirements.
Saprophytic: a naturally occurring microbial contaminant.
Sedimentation: the act or process of depositing sediment from suspension in water. The term also refers to the process whereby solids settle out of wastewater by gravity during treatment.
Semicritical devices: medical devices that come into contact with mucous membranes or non-intact skin.
Service animal: any animal individually trained to do work or perform tasks for the benefit of a person with a disability.
Shedding: the generation and dispersion of particles and spores by sources within the patient area, through activities such as patient movement and airflow over surfaces.
Single-pass ventilation: ventilation in which 100% of the air supplied to an area is exhausted to the outside.
Small, non-enveloped viruses: viruses whose particle diameter is <50 nm and whose outer surface is the protein of the particle itself and not that of host cell membrane components. Examples of small, non-enveloped viruses are polioviruses and hepatitis A virus.
Spaulding Classification: the categorization of inanimate medical device surfaces in the medical environment as proposed in 1972 by Dr. Earle Spaulding. Surfaces are divided into three general categories, based on the theoretical risk of infection if the surfaces are contaminated at time of use. The categories are "critical," "semicritical," and "noncritical."
Specific humidity: the mass of water vapor per unit mass of moist air. It is expressed as grains of water per pound of dry air, or pounds of water per pound of dry air. The specific humidity changes as moisture is added or removed. However, temperature changes do not change the specific humidity unless the air is cooled below the dew point.
Splatter: visible drops of liquid or body fluid that are expelled forcibly into the air and settle out quickly, as distinguished from particles of an aerosol which remain airborne indefinitely.
Steady state: the usual state of an area.
Sterilization: the use of a physical or chemical procedure to destroy all microbial life, including large numbers of highly resistant bacterial endospores.
Stop valve: a valve that regulates the flow of fluid through a pipe. The term may also refer to a faucet.
Substitution fluid: fluid that is used for fluid management of patients receiving hemodiafiltration. This fluid can be prepared on-line at the machine through a series of ultrafilters or with the use of sterile peritoneal dialysis fluid.
Supply air: air that is delivered to the conditioned space and used for ventilation, heating, cooling, humidification, or dehumidification.
Tensile strength: the resistance of a material to a force tending to tear it apart, measured as the maximum tension the material can withstand without tearing.
Ultrafilter: a membrane filter with a pore size in the range of 0.001-0.05 μm, the performance of which is usually rated in terms of a nominal molecular weight cut-off (defined as the smallest molecular weight species for which the filter membrane has more than 90% rejection).
Ultrafiltered dialysate: dialysate that has been passed through a filter having a molecular weight cut-off of approximately 1 kilodalton for the purpose of removing bacteria and endotoxin from the bath.
Ultraviolet germicidal irradiation (UVGI): the use of ultraviolet radiation to kill or inactivate microorganisms.
Ultraviolet germicidal irradiation lamps: lamps that kill or inactivate microorganisms by emitting ultraviolet germicidal radiation, predominantly at a wavelength of 254 nm. UVGI lamps can be used in ceiling or wall fixtures or within air ducts of ventilation systems.
Vapor pressure: the pressure exerted by free molecules at the surface of a solid or liquid. Vapor pressure is a function of temperature, increasing as the temperature rises.
Vegetative bacteria: bacteria that are actively growing and metabolizing, as opposed to a bacterial state of quiescence that is achieved when certain bacteria (gram-positive bacilli) convert to spores when the environment can no longer support active growth.
Vehicle: any object, person, surface, fomite, or media that may carry and transfer infectious microorganisms from one site to another.
Ventilation: the process of supplying and removing air by natural or mechanical means to and from any space. Such air may or may not be conditioned.
Ventilation air: that portion of the supply air consisting of outdoor air plus any recirculated air that has been treated for the purpose of maintaining acceptable indoor air quality.
Ventilation, dilution: an engineering control technique to dilute and remove airborne contaminants by the flow of air into and out of an area. Air that contains droplet nuclei is removed and replaced by contaminant-free air. If the flow is sufficient, droplet nuclei become dispersed, and their concentration in the air is diminished.
Ventilation, local exhaust: ventilation used to capture and remove airborne contaminants by enclosing the contaminant source (the patient) or by placing an exhaust hood close to the contaminant source.
v/v: volume to volume. This term is an expression of concentration of a percentage solution when the principal component is added as a liquid to the diluent.
w/v: weight to volume. This term is an expression of concentration of a percentage solution when the principal component is added as a solid to the diluent.
Weight-arrestance: a measure of filter efficiency, used primarily when describing the performance of low- and medium-efficiency filters. The measurement of weight-arrestance is performed by feeding a standardized synthetic dust to the filter and weighing the fraction of the dust removed.

# Appendix B. Air

1. Airborne Contaminant Removal

ACH = Q / V

¶ Values apply to an empty room with no aerosol-generating source. With a person present and generating aerosol, this table would not apply. Other equations are available that include a constant generating source. However, certain diseases (e.g., infectious tuberculosis) are not likely to be aerosolized at a constant rate. The times given assume perfect mixing of the air within the space (i.e., mixing factor = 1). However, perfect mixing usually does not occur. Removal times will be longer in rooms or areas with imperfect mixing or air stagnation. 213 Caution should be exercised in using this table in such situations. For booths or other local ventilation enclosures, manufacturers' instructions should be consulted.

2. Ventilation Specifications for Health-Care Facilities

The following tables from the AIA Guidelines for Design and Construction of Hospitals and Health-Care Facilities, 2001 are reprinted with permission of the American Institute of Architects and the publisher (The Facilities Guidelines Institute). 120 Note: This table is Table 7

20. Food preparation centers shall have ventilation systems whose air supply mechanisms are interfaced appropriately with exhaust hood controls or relief vents so that exfiltration or infiltration to or from exit corridors does not compromise the exit corridor restrictions of NFPA 90A, the pressure requirements of NFPA 96, or the maximum defined in the table. The number of air changes may be reduced or varied to any extent required for odor control when the space is not in use. See Section 7.31.D1.p in the AIA guideline (reference 120).

# Appendix I: A7.
Recirculating devices with HEPA filters may have potential uses in existing facilities as interim, supplemental environmental controls to meet requirements for the control of airborne infectious agents. Limitations in design must be recognized. The design of either portable or fixed systems should prevent stagnation and short-circuiting of airflow. The supply and exhaust locations should direct clean air to areas where health-care workers are likely to work, across the infectious source, and then to the exhaust, so that the health-care worker is not positioned between the infectious source and the exhaust location. The design of such systems should also allow for easy access for scheduled preventive maintenance and cleaning.

A11. The verification of airflow direction can include a simple visual method such as smoke trail, ball-in-tube, or flutterstrip. These devices will require a minimum differential air pressure to indicate airflow direction.

Notes:
1. The ventilation rates in this table cover ventilation for comfort, as well as for asepsis and odor control in areas of nursing facilities that directly affect resident care and are determined based on nursing facilities being predominantly "No Smoking" facilities. Where smoking may be allowed, ventilation rates will need adjustment. Areas where specific ventilation rates are not given in the table shall be ventilated in accordance with ASHRAE Standard 62, Ventilation for Acceptable Indoor Air Quality, and ASHRAE Handbook - HVAC Applications. OSHA standards and/or NIOSH criteria require special ventilation requirements for employee health and safety within nursing facilities.
2. Design of the ventilation system shall, insofar as possible, provide that air movement is from clean to less clean areas. However, continuous compliance may be impractical with full utilization of some forms of variable air volume and load shedding systems that may be used for energy conservation. Areas that do require positive and continuous control are noted with "Out" or "In" to indicate the required direction of air movement in relation to the space named. Rate of air movement may, of course, be varied as needed within the limits required for positive control. Where indication of air movement direction is enclosed in parentheses, continuous directional control is required only when the specialized equipment or device is in use or where room use may otherwise compromise the intent of movement from clean to less clean. Air movement for rooms with dashes and nonpatient areas may vary as necessary to satisfy the requirements of those spaces. Additional adjustments may be needed when space is unused or unoccupied and air systems are deenergized or reduced.

# Appendix C. Water

1. Biofilms

Microorganisms have a tendency to associate with and stick to surfaces. These adherent organisms can initiate and develop biofilms, which consist of cells embedded in a matrix of extracellularly produced polymers and associated abiotic particles. 1438 It is inevitable that biofilms will form in most water systems. In the health-care facility environment, biofilms may be found in the potable water supply piping, hot water tanks, air conditioning cooling towers, or in sinks, sink traps, aerators, or shower heads. Biofilms, especially in water systems, are not present as a continuous slime or film, but are more often scanty and heterogeneous in nature.
1439 Biofilms may form under stagnant as well as flowing conditions, so storage tanks, in addition to water system piping, may be vulnerable to the development of biofilm, especially if water temperatures are low enough to allow the growth of thermophilic bacteria (e.g., Legionella spp.). Favorable conditions for biofilm formation are present if these structures and equipment are not cleaned for extended periods of time. 1440

Algae, protozoa, and fungi may be present in biofilms, but the predominant microorganisms of water system biofilms are gram-negative bacteria. Although most of these organisms will not normally pose a problem for healthy individuals, certain biofilm bacteria (e.g., Pseudomonas aeruginosa, Klebsiella spp., Pantoea agglomerans, and Enterobacter cloacae) may be agents of opportunistic infection for immunocompromised individuals. 1441,1442 These biofilm organisms may easily contaminate indwelling medical devices or intravenous (IV) fluids, and they could be transferred on the hands of health-care workers. Biofilms may potentially provide an environment for the survival of pathogenic organisms, such as Legionella pneumophila and E. coli O157:H7. Although the association of biofilms and medical devices provides a plausible explanation for a variety of health-care associated infections, it is not clear how the presence of biofilms in the water system may influence the rates of health-care associated waterborne infection.

Organisms within biofilms behave quite differently than their planktonic (i.e., free floating) counterparts. Research has shown that biofilm-associated organisms are more resistant to antibiotics and disinfectants than are planktonic organisms, either because the cells are protected by the polymer matrix, or because they are physiologically different. Nevertheless, municipal water utilities attempt to maintain a chlorine residual in the distribution system to discourage microbiological growth. Though chlorine in its various forms is a proven disinfectant, it has been shown to be less effective against biofilm bacteria. 1448 Higher levels of chlorine for longer contact times are necessary to eliminate biofilms.

Routine sampling of health-care facility water systems for biofilms is not warranted. If an epidemiologic investigation points to the water supply system as a possible source of infection, then water sampling for biofilm organisms should be considered so that prevention and control strategies can be developed. An established biofilm is difficult to remove totally from existing piping. Strategies to remediate biofilms in a water system would include flushing the system piping, hot water tank, dead legs, and those areas of the facility's water system subject to low or intermittent flow. The benefits of this treatment would include a. elimination of corrosion deposits and sludge from the bottom of hot water tanks, b. removal of biofilms from shower heads and sink aerators, and c. circulation of fresh water containing elevated chlorine residuals into the health-care facility water system.

The general strategy for evaluating water system biofilm depends on a comparison of the bacteriological quality of the incoming municipal water and that of water sampled from within the facility's distribution system. Heterotrophic plate counts and coliform counts, both of which are routinely run by the municipal water utility, will at least provide an indication of the potential for biofilm formation.
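A minimal sketch (Python) of that comparison strategy follows; the site names, counts, and flagging logic are illustrative, with the 500 CFU/mL threshold taken from the potable water discussion in this appendix.

```python
# Minimal sketch of the biofilm-evaluation strategy described above: compare
# heterotrophic plate counts (HPC) at the point of entry with counts at distal
# sites in the facility. Site names and counts are hypothetical.
POTABLE_HPC_LIMIT = 500  # CFU/mL; see the potable water discussion

def hpc_flags(entry_hpc, distal_hpc_by_site):
    flags = []
    for site, hpc in distal_hpc_by_site.items():
        if hpc > POTABLE_HPC_LIMIT:
            flags.append(f"{site}: {hpc} CFU/mL exceeds {POTABLE_HPC_LIMIT} CFU/mL")
        elif hpc > entry_hpc:
            flags.append(f"{site}: {hpc} CFU/mL exceeds {entry_hpc} CFU/mL at entry; "
                         "possible biofilm formation in the distribution system")
    return flags

for msg in hpc_flags(entry_hpc=120,
                     distal_hpc_by_site={"ward faucet": 650, "clinic shower": 90}):
    print(msg)
```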
Heterotrophic plate count levels in potable water should be <500 CFU/mL; levels >500 CFU/mL would indicate a general decrease in water quality. A direct correlation between heterotrophic plate count and biofilm levels has been demonstrated. 1450

G. Refill the system with water and repeat the procedure outlined in steps 2-7 in I.B above.

# III. Perform mechanical cleaning.
A. After water from the second chemical disinfection has been drained, shut down the CT/EC.
B. Inspect all water-contact areas for sediment, sludge, and scale. Using brushes and/or a low-pressure water hose, thoroughly clean all CT/EC water-contact areas, including the basin, sump, fill, spray nozzles, and fittings. Replace components as needed.
C. If possible, clean CT/EC water-contact areas within the chillers.

# IV. Perform these procedures after mechanical cleaning.
A. Fill the system with water and add chlorine to achieve an FRC level of 10 mg/L.
B. Circulate the water for 1 hour, then open the blowdown valve and flush the entire system until the water is free of turbidity.
C. Drain the system.
D. Open any air-intake vents that were closed before cleaning.
E. Fill the system with water. The CT/EC may be put back into service using an effective water-treatment program.

# Maintenance Procedures Used to Decrease Survival and Multiplication of Legionella spp. in Potable-Water Distribution Systems

Wherever allowable by state code, provide water at ≥124°F (≥51°C) at all points in the heated water system, including the taps. This requires that water in calorifiers (e.g., water heaters) be maintained at ≥140°F (≥60°C). In the United Kingdom, where maintenance of water temperatures at ≥122°F (≥50°C) in hospitals has been mandated, installation of blending or mixing valves at or near taps to reduce the water temperature to ≤109.4°F (≤43°C) has been recommended in certain settings to reduce the risk for scald injury to patients, visitors, and health-care workers. 726 However, Legionella spp. can multiply even in short segments of pipe containing water at this temperature. Increasing the flow rate from the hot-water-circulation system may help lessen the likelihood of water stagnation and cooling. 711,1465 Insulation of plumbing to ensure delivery of cold (<68°F [<20°C]) water to water heaters (and to cold-water outlets) may diminish the opportunity for bacterial multiplication. 456 Both dead legs and capped spurs within the plumbing system provide areas of stagnation and cooling to <122°F (<50°C) regardless of the circulating water temperature; these segments may need to be removed to prevent colonization. 704 Rubber fittings within plumbing systems have been associated with persistent colonization, and replacement of these fittings may be required for Legionella spp. eradication. 1467

Continuous chlorination to maintain concentrations of free residual chlorine at 1-2 mg/L (1-2 ppm) at the tap is an alternative option for treatment. This requires the placement of flow-adjusted, continuous injectors of chlorine throughout the water distribution system. Adverse effects of continuous chlorination can include accelerated corrosion of plumbing (resulting in system leaks) and production of potentially carcinogenic trihalomethanes. However, when levels of free residual chlorine are below 3 mg/L (3 ppm), trihalomethane levels are kept below the maximum safety level recommended by the EPA. 727, 1468, 1469

# Appendix D.
Insects and Microorganisms

Format Change: The format of this section was changed to improve readability and accessibility. The content is unchanged.

# Appendix E. Information Resources

The following sources of information may be helpful to the reader. Some of these are available at no charge, while others are available for purchase from the publisher.

# Air and Water

# Appendix F. Areas of Future Research

# Air
- Standardize the methodology and interpretation of microbiologic air sampling (e.g., determine action levels or minimum infectious dose for aspergillosis, and evaluate the significance of airborne bacteria and fungi in the surgical field and the impact on postoperative SSI).
- Develop new molecular typing methods to better define the epidemiology of health-care associated outbreaks of aspergillosis and to associate isolates recovered from both clinical and environmental sources.
- Develop new methods for the diagnosis of aspergillosis that can lead reliably to early recognition of infection.
- Assess the value of laminar flow technology for surgeries other than joint replacement surgery.
- Determine if particulate sampling can be routinely performed in lieu of microbiologic sampling for purposes such as determining air quality of clean environments (e.g., operating rooms, HSCT units).

# Water
- Evaluate new methods of water treatment, both in the facility and at the water utility (e.g., ozone, chlorine dioxide, copper/silver/monochloramine), and perform cost-benefit analyses of treatment in preventing health-care associated legionellosis.
- Evaluate the role of biofilms in overall water quality and determine the impact of water treatments for the control of biofilm in distribution systems.
- Determine if the use of ultrapure fluids in dialysis is feasible and warranted, and determine the action level for the final bath.
- Develop quality assurance protocols and validated methods for sampling filtered rinse water used with AERs, and determine acceptable microbiologic quality of AER rinse water.

# Environmental Services
- Evaluate the innate resistance of microorganisms to the action of chemical germicides, and determine what, if any, linkage there may be between antibiotic resistance and resistance to disinfectants.

# Laundry and Bedding
- Evaluate the microbial inactivation capabilities of new laundry detergents, bleach substitutes, other laundry additives, and new laundry technologies.

# Animals in Health-Care Facilities
- Conduct surveillance to monitor incidence of infections among patients in facilities that use animal programs, and conduct investigations to determine new infection-control strategies to prevent these infections.
- Evaluate the epidemiologic impact of performing procedures on animals (e.g., surgery or imaging) in human health-care facilities.

# Regulated Medical Waste
- Determine the efficiency of current medical waste treatment technologies to inactivate emerging pathogens that may be present in medical waste (e.g., SARS-CoV).
- Explore options to enable health-care facilities to reinstate the capacity to inactivate microbiological cultures and stocks on-site.
Although the environment serves as a reservoir for a variety of microorganisms, it is rarely implicated in disease transmission except in the immunocompromised population. Inadvertent exposures to environmental opportunistic pathogens (e.g., Aspergillus spp. and Legionella spp.) or airborne pathogens (e.g., Mycobacterium tuberculosis and varicella-zoster virus) may result in infections with significant morbidity and/or mortality. Lack of adherence to established standards and guidance (e.g., water quality in dialysis, proper ventilation for specialized care areas such as operating rooms, and proper use of disinfectants) can result in adverse patient outcomes in health-care facilities. The objective is to develop an environmental infection-control guideline that reviews and reaffirms strategies for the prevention of environmentally mediated infections, particularly among health-care workers and immunocompromised patients. The recommendations are evidence-based whenever possible. The contributors to this guideline reviewed predominantly English-language articles identified from MEDLINE literature searches, bibliographies from published articles, and infection-control textbooks. Articles dealing with outbreaks of infection due to environmental opportunistic microorganisms and epidemiologic or laboratory experimental studies were reviewed. Current editions of guidelines and standards from organizations (i.e.

# Outcome Measures:

Infections caused by the microorganisms described in this guideline are rare events, and the effect of these recommendations on infection rates in a facility may not be readily measurable. Therefore, the following steps to measure performance are suggested to evaluate these recommendations:
1. Document whether infection-control personnel are actively involved in all phases of a health-care facility's demolition, construction, and renovation. Activities should include performing a risk assessment of the necessary types of construction barriers, and daily monitoring and documenting of the presence of negative airflow within the construction zone or renovation area.
2. Monitor and document daily the negative airflow in airborne infection isolation rooms (AII) and positive airflow in protective environment rooms (PE), especially when patients are in these rooms.
3. Perform assays at least once a month by using standard quantitative methods for endotoxin in water used to reprocess hemodialyzers, and for heterotrophic, mesophilic bacteria in water used to prepare dialysate and for hemodialyzer reprocessing.

# Executive Summary

The Guidelines for Environmental Infection Control in Health-Care Facilities is a compilation of recommendations for the prevention and control of infectious diseases that are associated with health-care environments. This document
a. revises multiple sections from previous editions of the Centers for Disease Control and Prevention [CDC] document titled Guideline for Handwashing and Hospital Environmental Control; 1, 2
b. incorporates discussions of air and water environmental concerns from CDC's Guideline for the Prevention of Nosocomial Pneumonia; 3
c. consolidates relevant environmental infection-control measures from other CDC guidelines; [4][5][6][7][8][9] and
d. includes two topics not addressed in previous CDC guidelines - infection-control concerns related to animals in health-care facilities and water quality in hemodialysis settings.
Part I of this report, Background Information: Environmental Infection Control in Health-Care Facilities, provides a comprehensive review of the scientific literature. Attention is given to engineering and infection-control concerns during construction, demolition, renovation, and repairs of health-care facilities. Use of an infection-control risk assessment is strongly supported before the start of these or any other activities expected to generate dust or water aerosols. Also reviewed in Part I are infection-control measures used to recover from catastrophic events (e.g., flooding, sewage spills, loss of electricity and ventilation, and disruption of the water supply) and the limited effects of environmental surfaces, laundry, plants, animals, medical wastes, cloth furnishings, and carpeting on disease transmission in health-care facilities.

Part II of this guideline, Recommendations for Environmental Infection Control in Health-Care Facilities, outlines environmental infection control in health-care facilities, describing measures for preventing infections associated with air, water, and other elements of the environment. These recommendations represent the views of different divisions within CDC's National Center for Infectious Diseases (NCID) (e.g., the Division of Healthcare Quality Promotion [DHQP] and the Division of Bacterial and Mycotic Diseases [DBMD]) and the consensus of the Healthcare Infection Control Practices Advisory Committee (HICPAC), a 12-member group that advises CDC on concerns related to the surveillance, prevention, and control of health-care associated infections, primarily in U.S. health-care facilities. 10 In 1999, HICPAC's infection-control focus was expanded from acute-care hospitals to all venues where health care is provided (e.g., outpatient surgical centers, urgent care centers, clinics, outpatient dialysis centers, physicians' offices, and skilled nursing facilities). The topics addressed in this guideline are applicable to the majority of health-care venues in the United States. This document is intended for use primarily by infection-control professionals (ICPs), epidemiologists, employee health and safety personnel, information system specialists, administrators, engineers, facility managers, environmental service professionals, and architects for health-care facilities.

Key recommendations include
a. infection-control impact of ventilation system and water system performance;
b. establishment of a multidisciplinary team to conduct infection-control risk assessment;
c. use of dust-control procedures and barriers during construction, repair, renovation, or demolition;
d. environmental infection-control measures for special care areas with patients at high risk;
e. use of airborne particle sampling to monitor the effectiveness of air filtration and dust-control measures;
f. procedures to prevent airborne contamination in operating rooms when infectious tuberculosis [TB] patients require surgery;
g. guidance regarding appropriate indications for routine culturing of water as part of a comprehensive control program for legionellae;
h. guidance for recovering from water system disruptions, water leaks, and natural disasters [e.g., flooding];
i. infection-control concepts for equipment that uses water from main lines [e.g., water systems for hemodialysis, ice machines, hydrotherapy equipment, dental unit water lines, and automated endoscope reprocessors];
j.
j. environmental surface cleaning and disinfection strategies with respect to antibiotic-resistant microorganisms;
k. infection-control procedures for health-care laundry;
l. use of animals in health care for activities and therapy;
m. managing the presence of service animals in health-care facilities;
n. infection-control strategies for when animals receive treatment in human health-care facilities; and
o. a call to reinstate the practice of inactivating amplified cultures and stocks of microorganisms on-site during medical waste treatment.
Whenever possible, the recommendations in Part II are based on data from well-designed scientific studies. However, certain of these studies were conducted by using narrowly defined patient populations or for specific health-care settings (e.g., hospitals versus long-term care facilities), making generalization of findings potentially problematic. Construction standards for hospitals or other health-care facilities may not apply to residential home-care units. Similarly, infection-control measures indicated for immunosuppressed patient care are usually not necessary in those facilities where such patients are not present. Other recommendations were derived from knowledge gained during infectious disease investigations in health-care facilities, where successful termination of the outbreak was often the result of multiple interventions, the majority of which cannot be independently and rigorously evaluated. This is especially true for construction situations involving air or water. Other recommendations are derived from empiric engineering concepts and may reflect an industry standard rather than an evidence-based conclusion. Where recommendations refer to guidance from the American Institute of Architects (AIA) (guidance that has since been superseded by the Facilities Guidelines Institute [FGI]), the statements reflect standards intended for new construction or renovation. Existing structures and engineered systems are expected to be in continued compliance with the standards in effect at the time of construction or renovation. Also, in the absence of scientific confirmation, certain infection-control recommendations that cannot be rigorously evaluated are based on a strong theoretical rationale and suggestive evidence. Finally, certain recommendations are derived from existing federal regulations. The references and the appendices comprise Parts III and IV of this document, respectively. Infections caused by the microorganisms described in these guidelines are rare events, and the effect of these recommendations on infection rates in a facility may not be readily measurable. Therefore, the following steps to measure performance are suggested to evaluate these recommendations (Box 1):
Box 1. Environmental infection control: performance measures
1. Document whether infection-control personnel are actively involved in all phases of a health-care facility's demolition, construction, and renovation. Activities should include performing a risk assessment of the necessary types of construction barriers, and daily monitoring and documenting of the presence of negative airflow within the construction zone or renovation area.
2. Monitor and document daily the negative airflow in airborne infection isolation (AII) rooms and positive airflow in protective environment (PE) rooms, especially when patients are in these rooms.
3. Perform assays at least once a month by using standard quantitative methods for endotoxin in water used to reprocess hemodialyzers, and for heterotrophic and mesophilic bacteria in water used to prepare dialysate and for hemodialyzer reprocessing.
4. Evaluate possible environmental sources (e.g., water, laboratory solutions, or reagents) of specimen contamination when nontuberculous mycobacteria (NTM) of unlikely clinical importance are isolated from clinical cultures. If environmental contamination is found, eliminate the probable mechanisms.
5. Document policies to identify and respond to water damage. Such policies should result in either repair and drying of wet structural or porous materials within 72 hours, or removal of the wet material if drying is unlikely within 72 hours.
Topics outside the scope of this document include
a. noninfectious adverse events (e.g., sick building syndrome);
b. environmental concerns in the home;
c. home health care;
d. bioterrorism; and
e. health-care associated foodborne illness.
This document includes only limited discussion of handwashing/hand hygiene, standard precautions, and infection-control measures used to prevent instrument or equipment contamination during patient care (e.g., preventing waterborne contamination of nebulizers or ventilator humidifiers). These topics are mentioned only if they are important in minimizing the transfer of pathogens to and from persons or equipment and the environment. Although the document discusses principles of cleaning and disinfection as they are applied to maintenance of environmental surfaces, the full discussion of sterilization and disinfection of medical instruments and direct patient-care devices is deferred for inclusion in the Guideline for Disinfection and Sterilization in Health-Care Facilities.
# Part I. Background Information: Environmental Infection Control in Health-Care Facilities
# A. Introduction
The health-care environment contains a diverse population of microorganisms, but only a few are significant pathogens for susceptible humans. Microorganisms are present in great numbers in moist, organic environments, but some also can persist under dry conditions. Although pathogenic microorganisms can be detected in air and water and on fomites, assessing their role in causing infection and disease is difficult. 11 Only a few reports clearly delineate a "cause and effect" with respect to the environment and, in particular, housekeeping surfaces. Eight criteria are used to evaluate the strength of evidence for an environmental source or means of transmission of infectious agents (Box 2). 11,12 Applying these criteria to disease investigations allows scientists to assess the contribution of the environment to disease transmission. An example of this application is the identification of a pathogen (e.g., vancomycin-resistant enterococci [VRE]) on an environmental surface during an outbreak. The presence of the pathogen does not establish its causal role; its transmission from source to host could be through indirect means (e.g., via hand transferral). 11 The surface, therefore, would be considered one of a number of potential reservoirs for the pathogen, but not the "de facto" source of exposure. An understanding of how infection occurs after exposure, based on the principles of the "chain of infection," is also important in evaluating the contribution of the environment to health-care associated disease. 13 All of the components of the "chain" must be operational for infection to occur (Box 3).
Box 2. Eight criteria for evaluating the strength of evidence for environmental sources of infection*+
1. The organism can survive after inoculation onto the fomite.
2. The organism can be cultured from in-use fomites.
3. The organism can proliferate in or on the fomite.
4. Some measure of acquisition of infection cannot be explained by other recognized modes of transmission.
5. Retrospective case-control studies show an association between exposure to the fomite and infection.
6. Prospective case-control studies may be possible when more than one similar type of fomite is in use.
7. Prospective studies allocating exposure to the fomite to a subset of patients show an association between exposure and infection.
8. Decontamination of the fomite results in the elimination of infection transmission.
The presence of a susceptible host is one of these components, and it underscores the importance of the health-care environment and opportunistic pathogens on fomites and in air and water. As a result of advances in medical technology and therapies (e.g., cytotoxic chemotherapy and transplantation medicine), more patients are becoming immunocompromised in the course of treatment and are therefore at increased risk for acquiring health-care associated opportunistic infections. Trends in health-care delivery (e.g., early discharge of patients from acute-care facilities) also are changing the distribution of patient populations and increasing the number of immunocompromised persons in nonacute-care hospitals. According to the American Hospital Association (AHA), in 1998, the number of hospitals in the United States totaled 6,021; these hospitals had a total of 1,013,000 beds, 14 representing a 5.5% decrease in the number of acute-care facilities and a 10.2% decrease in the number of beds over the 5-year period 1994-1998. 14 In addition, the total average daily number of patients receiving care in U.S. acute-care hospitals in 1998 was 662,000 (65.4% occupancy), 36.5% less than the 1978 average of 1,042,000. 14 As the number of acute-care hospitals declines, the length of stay in these facilities is concurrently decreasing, particularly for immunocompetent patients. Those patients remaining in acute-care facilities are likely to be those requiring extensive medical interventions and who are therefore at high risk for opportunistic infection. The growing population of severely immunocompromised patients is at odds with demands on the health-care industry to remain viable in the marketplace; to incorporate modern equipment, new diagnostic procedures, and new treatments; and to construct new facilities. Increasing numbers of health-care facilities are likely to be faced with construction in the near future as hospitals consolidate to reduce costs, defer care to ambulatory centers and satellite clinics, and try to create more "home-like" acute-care settings. In 1998, approximately 75% of health-care associated construction projects focused on renovation of existing outpatient facilities or the building of such facilities; 15 the number of projects associated with outpatient health care rose by 17% from 1998 through 1999. 16 An aging population is also creating increasing demand for assisted-living facilities and skilled nursing centers. Construction of assisted-living facilities in 1998 increased 49% from the previous year, with 138 projects completed at a cost of $703 million. 16 Overall, from 1998 to 1999, health-care associated construction costs increased by 28.5%, from $11.56 billion to $14.86 billion.
16 Environmental disturbances associated with construction activities near health-care facilities pose airborne and waterborne disease risks for the substantial number of patients who are at risk for health-care associated opportunistic infections. The increasing age of hospitals and other health-care facilities is also generating an ongoing need for repair and remediation work (e.g., installing wiring for new information systems, removing old sinks, and repairing elevator shafts) that can introduce or increase contamination of the air and water in patient-care environments. Aging equipment, deferred maintenance, and natural disasters provide additional mechanisms for the entry of environmental pathogens into high-risk patient-care areas. Architects, engineers, construction contractors, environmental health scientists, and industrial hygienists historically have directed the design and function of hospitals' physical plants. Increasingly, however, because of the growth in the number of susceptible patients and the increase in construction projects, the involvement of hospital epidemiologists and infection-control professionals is required. These experts help make plans for building, maintaining, and renovating health-care facilities to ensure that the adverse impact of the environment on the incidence of health-care associated infections is minimal. The following are examples of adverse outcomes that could have been prevented had such experts been involved in the planning process:
• transmission of infections caused by Mycobacterium tuberculosis, varicella-zoster virus (VZV), and measles (i.e., rubeola) facilitated by inappropriate air-handling systems in health-care facilities; 6
• disease outbreaks caused by Aspergillus spp., [17][18][19] Mucoraceae, 20 and Penicillium spp. associated with the absence of environmental controls during periods of health-care facility-associated construction; 21
• infections and/or colonizations of patients and staff with vancomycin-resistant Enterococcus faecium (VRE) and Clostridium difficile acquired indirectly from contact with organisms present on environmental surfaces in health-care facilities; [22][23][24][25] and
• outbreaks and pseudoepidemics of legionellae, 26,27 Pseudomonas aeruginosa, 28-30 and the nontuberculous mycobacteria (NTM) 31,32 linked to water and aqueous solutions used in health-care facilities.
The purpose of this guideline is to provide useful information for both health-care professionals and engineers in efforts to provide a safe environment in which quality health care may be provided to patients. The recommendations herein provide guidance to minimize the risk for and prevent transmission of pathogens in the indoor environment.
# B. Key Terms Used in this Guideline
Although Appendix A provides definitions for terms discussed in Part I, several terms that pertain to specific patient-care areas and patients who are at risk for health-care associated opportunistic infections are presented here. Specific engineering parameters for these care areas are discussed more fully in the text. Airborne infection isolation (AII) refers to the isolation of patients infected with organisms spread via airborne droplet nuclei <5 μm in diameter. This isolation area receives numerous air changes per hour (ACH) (≥12 ACH for new construction as of 2001; ≥6 ACH for construction before 2001), and is under negative pressure, such that the direction of the airflow is from the outside adjacent space (e.g., corridor) into the room.
The air in an AII room is preferably exhausted to the outside, but may be recirculated provided that the return air is filtered through a high efficiency particulate air (HEPA) filter. The use of personal respiratory protection is also indicated for persons entering these rooms. Protective environment (PE) is a specialized patient-care area, usually in a hospital, with a positive airflow relative to the corridor (i.e., air flows from the room to the outside adjacent space). The combination of HEPA filtration, high numbers of air changes per hour (≥12 ACH), and minimal leakage of air into the room creates an environment that can safely accommodate patients who have undergone allogeneic hematopoietic stem cell transplant (HSCT). Immunocompromised patients are those patients whose immune mechanisms are deficient because of immunologic disorders (e.g., human immunodeficiency virus [HIV] infection or congenital immune deficiency syndrome), chronic diseases (e.g., diabetes, cancer, emphysema, and cardiac failure), or immunosuppressive therapy (e.g., radiation, cytotoxic chemotherapy, anti-rejection medication, and steroids). Immunocompromised patients who are identified as high-risk patients have the greatest risk of infection caused by airborne or waterborne microorganisms. Patients in this subset include those who are severely neutropenic for prolonged periods of time (i.e., an absolute neutrophil count [ANC] of ≤500 cells/µL), allogeneic HSCT patients, and those who have received intensive chemotherapy (e.g., childhood acute myelogenous leukemia patients).
# C. Air
1. Modes of Transmission of Airborne Diseases
A variety of airborne infections in susceptible hosts can result from exposures to clinically significant microorganisms released into the air when environmental reservoirs (i.e., soil, water, dust, and decaying organic matter) are disturbed. Once these materials are brought indoors into a health-care facility by any of a number of vehicles (e.g., people, air currents, water, construction materials, and equipment), the attendant microorganisms can proliferate in various indoor ecological niches and, if subsequently dispersed into the air, serve as a source for airborne health-care associated infections. Respiratory infections can be acquired from exposure to pathogens contained either in droplets or droplet nuclei. Exposure to microorganisms in droplets (e.g., through aerosolized oral and nasal secretions from infected patients 33 ) constitutes a form of direct contact transmission. When droplets are produced during a sneeze or cough, a cloud of infectious particles >5 μm in size is expelled, resulting in the potential exposure of susceptible persons within 3 feet of the source person. 6 Examples of pathogens spread in this manner are influenza virus, rhinoviruses, adenoviruses, and respiratory syncytial virus (RSV). Because these agents primarily are transmitted directly and because the droplets tend to fall out of the air quickly, measures to control air flow in a health-care facility (e.g., use of negative pressure rooms) generally are not indicated for preventing the spread of diseases caused by these agents. Strategies to control the spread of these diseases are outlined in another guideline. 3 The spread of airborne infectious diseases via droplet nuclei is a form of indirect transmission. 34 Droplet nuclei are the residuals of droplets that, when suspended in air, subsequently dry and produce particles ranging in size from 1-5 μm. These particles can
a. contain potentially viable microorganisms, b. be protected by a coat of dry secretions, c. remain suspended indefinitely in air, and d. be transported over long distances. The microorganisms in droplet nuclei persist in favorable conditions (e.g., a dry, cool atmosphere with little or no direct exposure to sunlight or other sources of radiation). Pathogenic microorganisms that can be spread via droplet nuclei include Mycobacterium tuberculosis, VZV, measles virus (i.e., rubeola), and smallpox virus (i.e., variola major). 6 Several environmental pathogens have life-cycle forms that are similar in size to droplet nuclei and may exhibit similar behavior in the air. The spores of Aspergillus fumigatus have a diameter of 2-3.5 μm, with a settling velocity estimated at 0.03 cm/second (or about 1 meter/hour) in still air. With this enhanced buoyancy, the spores, which resist desiccation, can remain airborne indefinitely in air currents and travel far from their source. 35
# 2. Airborne Infectious Diseases in Health-Care Facilities
# a. Aspergillosis and Other Fungal Diseases
Aspergillosis is caused by molds belonging to the genus Aspergillus. Aspergillus spp. are prototype health-care acquired pathogens associated with dusty or moist environmental conditions. Clinical and epidemiologic aspects of aspergillosis (Table 1) are discussed extensively in another guideline. 3
Format Change [November 2016]: The format of this section was changed to improve readability and accessibility. The content is unchanged.
# Table 1. Clinical and epidemiologic characteristics of aspergillosis
Modes of transmission: airborne transmission of fungal spores; direct inhalation; direct inoculation from environmental sources (rare).
Supplies and equipment can become contaminated with Aspergillus spp. spores and serve as sources of infection if stored in dusty areas. 57 Most cases of aspergillosis are caused by Aspergillus fumigatus, a thermotolerant/thermophilic fungus capable of growing over a temperature range from 53.6°F-127.4°F (12°C-53°C); optimal growth occurs at approximately 104°F (40°C), a temperature inhibitory to most other saprophytic fungi. 99 It can use cellulose or sugars as carbon sources; because its respiratory process requires an ample supply of carbon, decomposing organic matter is an ideal substrate. (For AIA guidance on Aspergillus spp., see Table 2.) Other opportunistic fungi that have been occasionally linked with health-care associated infections are members of the order Mucorales (e.g., Rhizopus spp.) and miscellaneous moniliaceous molds (e.g., Fusarium spp. and Penicillium spp.) (Table 2). Many of these fungi can proliferate in moist environments (e.g., water-damaged wood and building materials). Some fungi (e.g., Fusarium spp. and Pseudoallescheria spp.) also can be airborne pathogens. 100 As with aspergillosis, a major risk factor for disease caused by any of these pathogens is the host's severe immunosuppression from either underlying disease or immunosuppressive therapy. 101,102
Format Change [November 2016]: The format of this section was changed to improve readability and accessibility. The content is unchanged.
# Table 2. Environmental fungal pathogens: entry into and contamination of the health-care facility
Fungal pathogen / implicated environmental vehicle
Aspergillus spp.
• Improperly functioning ventilation systems 20,46,47,97,98,103,104
• Air filters+ 17,18,[105][106][107]
Pigeons, their droppings, and roosts are associated with spread of Aspergillus, Cryptococcus, and Histoplasma spp.
There have been at least three outbreaks linked to contamination of the filtering systems from bird droppings. 98,103,104 Pigeon mites may gain access into a health-care facility through the ventilation system. 119
• Air filter frames 17,18
• Window air conditioners 96
• Backflow of contaminated air 107
• Air exhaust contamination+ 104
• False ceilings 48,57,97,108
• Fibrous insulation and perforated metal ceilings 66
• Acoustic ceiling tiles, plasterboard 18,109
• Fireproofing material 48,49
• Damp wood building materials 49
• Opening doors to construction site 110
• Construction 69
• Open windows 20,108,111
Penicillium spp.
• Air filters 105
• Topical anesthetic 117
Acremonium spp.
• Air filters 105
Cladosporium spp.
• Air filters 105
Sporothrix
• Construction (pseudoepidemic) 118
+ The American Institute of Architects (AIA) standards stipulate that for new or renovated construction
o exhaust outlets are to be placed >25 feet from air intake systems,
o the bottom of outdoor air intakes for HVAC systems should be 6 feet above ground or 3 feet above roof level, and
o exhaust outlets from contaminated areas are situated above the roof level and arranged to minimize the recirculation of exhausted air back into the building. 120
Infections due to Cryptococcus neoformans, Histoplasma capsulatum, or Coccidioides immitis can occur in health-care settings if nearby ground is disturbed and a malfunction of the facility's air-intake components allows these pathogens to enter the ventilation system. C. neoformans is a yeast usually 4-8 μm in size. However, viable particles of <2 μm diameter (and thus permissive to alveolar deposition) have been found in soil contaminated with bird droppings, particularly from pigeons. 98,103,104,121 H. capsulatum, with the infectious microconidia ranging in size from 2-5 μm, is endemic in the soil of the central river valleys of the United States. Substantial numbers of these infectious particles have been associated with chicken coops and the roosts of blackbirds. 98,103,104,122 Several outbreaks of histoplasmosis have been associated with disruption of the environment; construction activities in an endemic area may be a potential risk factor for health-care acquired airborne infection. 123,124 C. immitis, with arthrospores of 3-5 μm diameter, has similar potential, especially in the endemic southwestern United States and during seasons of drought followed by heavy rainfall. After the 1994 earthquake centered near Northridge, California, the incidence of coccidioidomycosis in the surrounding area exceeded the historical norm. 125 Emerging evidence suggests that Pneumocystis carinii, now classified as a fungus, may be spread via airborne, person-to-person transmission. 126 Controlled studies in animals first demonstrated that P. carinii could be spread through the air. 127 More recent studies in health-care settings have detected nucleic acids of P. carinii in air samples from areas frequented or occupied by P. carinii-infected patients but not in control areas that are not occupied by these patients. 128,129 Clusters of cases have been identified among immunocompromised patients who had contact with a source patient and with each other. Recent studies have examined the presence of P. carinii DNA in oropharyngeal washings and the nares of infected patients, their direct contacts, and persons with no direct contact. 130,131 Molecular analysis of the DNA by polymerase chain reaction (PCR) provides evidence for airborne transmission of
P. carinii from infected patients to direct contacts, but immunocompetent contacts tend to become transiently colonized rather than infected. 131 The role of colonized persons in the spread of P. carinii pneumonia (PCP) remains to be determined. At present, specific modifications to ventilation systems to control spread of PCP in a health-care facility are not indicated. Current recommendations outline isolation procedures to minimize or eliminate contact of immunocompromised patients not on PCP prophylaxis with PCP-infected patients. 6,132
# b. Tuberculosis and Other Bacterial Diseases
The bacterium most commonly associated with airborne transmission is Mycobacterium tuberculosis. A comprehensive review of the microbiology and epidemiology of M. tuberculosis and guidelines for tuberculosis (TB) infection control have been published. 4,133,134 A summary of the clinical and epidemiologic information from these materials is provided in this guideline (Table 3).
Format Change [November 2016]: The format of this section was changed to improve readability and accessibility. The content is unchanged.
# Table 3. Clinical and epidemiologic characteristics of tuberculosis*
Factors affecting severity and outcomes
• Concentration of droplet nuclei in air, duration of exposure
• Age at infection
• Immunosuppression due to therapy or disease, underlying chronic medical conditions, history of malignancies or lesions of the lungs
Occurrence
• Worldwide; incidence in the United States is 5.6 cases/100,000 population (2001) 136
Mortality rate
• 930 deaths in the United States (1999) 136
Chemoprophylaxis / treatment
• Treatment of latent infection includes isoniazid (INH) or rifampin (RIF) 4,134,[137][138][139]
• Directly observed therapy (DOT) for active cases as indicated: INH, RIF, pyrazinamide (PZA), ethambutol (EMB), and streptomycin (SM) in various combinations determined by prevalent levels of specific resistance 4,134,[137][138][139]
• Consult therapy guidelines for specific treatment indications 139
* Material in this table is compiled from references 4, 133-141.
M. tuberculosis is carried by droplet nuclei generated when persons (primarily adults and adolescents) who have pulmonary or laryngeal TB sneeze, cough, speak, or sing; 139 normal air currents can keep these particles airborne for prolonged periods and spread them throughout a room or building. 142 However, transmission of TB has occurred from mycobacteria aerosolized during provision of care (e.g., wound/lesion care or during handling of infectious peritoneal dialysis fluid) for extrapulmonary TB patients. 135,140 Gram-positive cocci (i.e., Staphylococcus aureus and group A beta-hemolytic streptococci), also important health-care associated pathogens, are resistant to inactivation by drying and can persist in the environment and on environmental surfaces for extended periods. These organisms can be shed from heavily colonized persons and discharged into the air. Airborne dispersal of S. aureus is directly associated with the concentration of the bacterium in the anterior nares. 143 Approximately 10% of healthy carriers will disseminate S. aureus into the air, and some persons become more effective disseminators of S. aureus than others. [144][145][146][147][148] The dispersal of S. aureus into air can be exacerbated by concurrent viral upper respiratory infection, thereby turning a carrier into a "cloud shedder." 149 Outbreaks of surgical site infections (SSIs) caused by group A beta-hemolytic streptococci have been traced to airborne transmission from colonized operating-room personnel to patients.
[150][151][152][153] In these situations, the strain causing the outbreak was recovered from the air in the operating room 150,151,154 or on settle plates in a room in which the carrier exercised. [151][152][153] S. aureus and group A streptococci have not been linked to airborne transmission outside of operating rooms, burn units, and neonatal nurseries. 155,156 Transmission of these agents occurs primarily via contact and droplets. Other gram-positive bacteria linked to airborne transmission include Bacillus spp., which are capable of sporulation as environmental conditions become less favorable to support their growth. Outbreaks and pseudo-outbreaks have been attributed to Bacillus cereus in maternity, pediatric, intensive care, and bronchoscopy units; many of these episodes were secondary to environmental contamination. [157][158][159][160] Gram-negative bacteria rarely are associated with episodes of airborne transmission because they generally require moist environments for persistence and growth. The main exception is Acinetobacter spp., which can withstand the inactivating effects of drying. In one epidemiologic investigation of bloodstream infections among pediatric patients, identical Acinetobacter spp. were cultured from the patients, the air, and room air conditioners in a nursery. 161 Aerosols generated from showers and faucets may potentially contain legionellae and other gram-negative waterborne bacteria (e.g., Pseudomonas aeruginosa). Exposure to these organisms is through direct inhalation. However, because water is the source of the organisms and exposure occurs in the vicinity of the aerosol, the diseases associated with such aerosols and the prevention measures used to curtail their spread are discussed in another section of this guideline (see Part I: Water).
# c. Airborne Viral Diseases
Some human viruses are transmitted from person to person via droplet aerosols, but very few viruses are consistently airborne in transmission (i.e., are routinely suspended in an infective state in air and capable of spreading great distances), and health-care associated outbreaks of airborne viral disease are limited to a few agents. Consequently, infection-control measures used to prevent spread of these viral diseases in health-care facilities primarily involve patient isolation, vaccination of susceptible persons, and antiviral therapy as appropriate, rather than measures to control air flow or quality. 6 Infections caused by VZV frequently are described in health-care facilities. Health-care associated airborne outbreaks of VZV infections from patients with primary infection and disseminated zoster have been documented; patients with localized zoster have, on rare occasions, also served as source patients for outbreaks in health-care facilities. [162][163][164][165][166] VZV infection can be prevented by vaccination, although patients who develop a rash within 6 weeks of receiving varicella vaccine or who develop breakthrough varicella following exposure should be considered contagious. 167 Viruses whose major mode of transmission is via droplet contact rarely have caused clusters of infections in group settings through airborne routes. The factors facilitating airborne distribution of these viruses in an infective state are unknown, but a presumed requirement is a source patient in the early stage of infection who is shedding large numbers of viral particles into the air. Airborne transmission of measles has been documented in health-care facilities.
[168][169][170][171] In addition, institutional outbreaks of influenza virus infections have occurred predominantly in nursing homes, [172][173][174][175][176] and less frequently in medical and neonatal intensive care units, chronic-care areas, HSCT units, and pediatric wards. [177][178][179][180] Some evidence supports airborne transmission of influenza viruses by droplet nuclei, 181,182 and case clusters in pediatric wards suggest that droplet nuclei may play a role in transmitting certain respiratory pathogens (e.g., adenoviruses and respiratory syncytial virus [RSV]). 177,183,184 Some evidence also supports airborne transmission of enteric viruses. An outbreak of a Norwalk-like virus infection involving more than 600 staff members over a 3-week period was investigated in a Toronto, Ontario hospital in 1985; common sources (e.g., food and water) were ruled out during the investigation, leaving airborne spread as the most likely mode of transmission. 185 Smallpox virus, a potential agent of bioterrorism, is spread predominantly via direct contact with infectious droplets, but it also can be associated with airborne transmission. 186,187 A German hospital study from 1970 documented the ability of this virus to spread over considerable distances and cause infection at low doses in a well-vaccinated population; factors potentially facilitating transmission in this situation included a patient with cough and an extensive rash, indoor air with low relative humidity, and faulty ventilation patterns resulting from hospital design (e.g., open windows). 188 Smallpox patients with extensive rash are more likely to have lesions present on mucous membranes and therefore have greater potential to disseminate virus into the air. 188 In addition to the smallpox transmission in Germany, two cases of laboratory-acquired smallpox virus infection in the United Kingdom in 1978 also were thought to be caused by airborne transmission. 189
Ebola Virus Disease Update [August 2014]: The recommendations in this guideline for Ebola have been superseded by these CDC documents:
• Infection Prevention and Control Recommendations for Hospitalized Patients with Known or Suspected Ebola Virus Disease in U.S. Hospitals (https://www.cdc.gov/vhf/ebola/healthcare-us/hospitals/infection-control.html)
• Interim Guidance for Environmental Infection Control in Hospitals for Ebola Virus (https://www.cdc.gov/vhf/ebola/healthcare-us/cleaning/hospitals.html)
See CDC's Ebola Virus Disease website (https://www.cdc.gov/vhf/ebola/index.html) for current information on how Ebola virus is transmitted.
Airborne transmission may play a role in the natural spread of hantaviruses and certain hemorrhagic fever viruses (e.g., Ebola, Marburg, and Lassa), but evidence for airborne spread of these agents in health-care facilities is inconclusive. 190 Although hantaviruses can be transmitted when aerosolized from rodent excreta, 191,192 person-to-person spread of hantavirus infection from source patients has not occurred in health-care facilities. [193][194][195] Nevertheless, health-care workers are advised to contain potentially infectious aerosols and wear National Institute for Occupational Safety and Health (NIOSH) approved respiratory protection when working with this agent in laboratories or autopsy suites.
196 Lassa virus transmission via aerosols has been demonstrated in the laboratory and incriminated in health-care associated infections in Africa, [197][198][199] but airborne spread of this agent in hospitals in developed nations likely is inefficient. 200,201 Yellow fever is considered to be a viral hemorrhagic fever agent with high aerosol infectivity potential, but health-care associated transmission of this virus has not been described. 202 Viral hemorrhagic fever diseases primarily occur after direct exposure to infected blood and body fluids, and the use of standard and droplet precautions prevents transmission early in the course of these illnesses. 203,204 However, whether these viruses can persist in droplet nuclei that might remain after droplet production from coughs or vomiting in the latter stages of illness is unknown. 205 Although the use of a negative-pressure room is not required during the early stages of illness, its use might be prudent at the time of hospitalization to avoid the need for subsequent patient transfer. Current CDC guidelines recommend negative-pressure rooms with anterooms for patients with hemorrhagic fever and use of HEPA respirators by persons entering these rooms when the patient has prominent cough, vomiting, diarrhea, or hemorrhage. 6,203 Face shields or goggles will help to prevent mucous-membrane exposure to potentially aerosolized infectious material in these situations. If an anteroom is not available, portable, industrial-grade high efficiency particulate air (HEPA) filter units can be used to provide the equivalent of additional air changes per hour (ACH).
# Table 4. Microorganisms associated with airborne transmission*
Numerous reports in health-care facilities:
• Mycobacterium tuberculosis+
• Measles (rubeola) virus [168][169][170]
• Varicella-zoster virus [162][163][164][165][166]
Occasional reports in health-care facilities (atypical):
• Acremonium spp. 105,206
• Fusarium spp. 102
• Pseudoallescheria boydii 100
• Scedosporium spp. 116
• Sporothrix cyanescens ¶ 118
• Acinetobacter spp. 161
• Bacillus spp. ¶ 160,207
• Brucella spp.** [208][209][210][211]
• Staphylococcus aureus 148,156
• Group A Streptococcus 151
• Smallpox virus (variola) § 188,189
• Influenza viruses 181,182
• Respiratory syncytial virus 183
• Adenoviruses 184
• Norwalk-like virus 185
No reports in health-care facilities; known to be airborne outside:
• Coccidioides immitis 125
• Cryptococcus spp. 121
• Histoplasma capsulatum 124
• Coxiella burnetii (Q fever) 212
• Hantaviruses 193,195
• Lassa virus 205
• Marburg virus 205
• Ebola virus † 205
• Crimean-Congo virus 205
Under investigation:
• Pneumocystis carinii 131
* This list excludes microorganisms transmitted from aerosols derived from water.
# 3. Heating, Ventilation, and Air Conditioning Systems in Health-Care Facilities
# a. Basic Components and Operations
Heating, ventilation, and air conditioning (HVAC) systems in health-care facilities are designed to maintain the indoor air temperature and humidity at comfortable levels for staff, patients, and visitors; control odors; remove contaminated air; facilitate air-handling requirements to protect susceptible staff and patients from airborne health-care associated pathogens; and minimize the risk for transmission of airborne pathogens from infected patients. 35,120 An HVAC system includes an outside air inlet or intake; filters; humidity modification mechanisms (i.e., humidity control in summer, humidification in winter); heating and cooling equipment; fans; ductwork; air exhaust or out-takes; and registers, diffusers, or grilles for proper distribution of the air (Figure 1).
213,214 Decreased performance of health-care facility HVAC systems, filter inefficiencies, improper installation, and poor maintenance can contribute to the spread of health-care associated airborne infections. The American Institute of Architects (AIA) has published guidelines for the design and construction of new health-care facilities and for renovation of existing facilities. These AIA guidelines address indoor air-quality standards (e.g., ventilation rates, temperature levels, humidity levels, pressure relationships, and minimum air changes per hour [ACH]) specific to each zone or area in health-care facilities (e.g., operating rooms, laboratories, diagnostic areas, patient-care areas, and support departments). 120 More than 40 state agencies that license health-care facilities have either incorporated or adopted by reference these guidelines into their state standards. JCAHO, through its surveys, ensures that facilities are in compliance with the ventilation guidelines of this standard for new construction and renovation.
# Figure 1. Diagram of a ventilation system*
Outdoor air and recirculated air pass through air cleaners (e.g., filter banks) designed to reduce the concentration of airborne contaminants. Air is conditioned for temperature and humidity before it enters the occupied space as supply air. Infiltration is air leakage inward through cracks and interstitial spaces of walls, floors, and ceilings. Exfiltration is air leakage outward through these same cracks and spaces. Return air is largely exhausted from the system, but a portion is recirculated with fresh, incoming air.
* Used with permission of the publisher of reference 214 (ASHRAE)
Engineering controls to contain or prevent the spread of airborne contaminants center on local exhaust ventilation (i.e., source control), general ventilation, and air cleaning. 4 General ventilation encompasses dilution and removal of contaminants via well-mixed distribution of filtered air; direction of contaminants toward exhaust registers and grilles via uniform, non-mixed airflow patterns; pressurization of individual spaces relative to all other spaces; and pressurization of buildings relative to the outdoors and other attached buildings. A centralized HVAC system operates as follows. Outdoor air enters the system, where low-efficiency or "roughing" filters remove large particulate matter and many microorganisms. The air enters the distribution system for conditioning to appropriate temperature and humidity levels, passes through an additional bank of filters for further cleaning, and is delivered to each zone of the building. After the conditioned air is distributed to the designated space, it is withdrawn through a return duct system and delivered back to the HVAC unit. A portion of this "return air" is exhausted to the outside while the remainder is mixed with outdoor air for dilution and filtered for removal of contaminants. 215 Air from toilet rooms or other soiled areas is usually exhausted directly to the atmosphere through a separate duct exhaust system. Air from rooms housing tuberculosis patients is exhausted to the outside if possible, or passed through a HEPA filter before recirculation. Ultraviolet germicidal irradiation (UVGI) can be used as an adjunct air-cleaning measure, but it cannot replace HEPA filtration. 15
# b. Filtration
i. Filter Types and Methods of Filtration
Filtration, the physical removal of particulates from air, is the first step in achieving acceptable indoor air quality.
Filtration is the primary means of cleaning the air. Five methods of filtration can be used (Table 5). During filtration, outdoor air passes through two filter beds or banks (with efficiencies of 20%-40% and ≥90%, respectively) for effective removal of particles 1-5 μm in diameter. 35,120 The low-to-medium efficiency filters in the first bank have low resistance to airflow, but this feature allows some small particulates to pass onto heating and air conditioning coils and into the indoor environment. 35 Incoming air is mixed with recirculated air and reconditioned for temperature and humidity before being filtered by the second bank of filters. The performance of filters with ≤90% efficiency is measured using either the dust-spot test or the weight-arrestance test. 35,216 The second filter bank usually consists of high-efficiency filters. This filtration system is adequate for most patient-care areas in ambulatory-care facilities and hospitals, including the operating room environment and areas providing central services. 120 Nursing facilities use 90% dust-spot efficient filters as the second bank of filters, 120 whereas a HEPA filter bank may be indicated for special-care areas of hospitals. HEPA filters are at least 99.97% efficient for removing particles ≥0.3 μm in diameter. (As a reference, Aspergillus spores are 2.5-3.0 μm in diameter.) Examples of care areas where HEPA filters are used include PE rooms and those operating rooms designated for orthopedic implant procedures. 35 Maintenance costs associated with HEPA filters are high compared with those for other types of filters, but use of in-line disposable prefilters can increase the life of a HEPA filter by approximately 25%. Alternatively, if a disposable prefilter is followed by a filter that is 90% efficient, the life of the HEPA filter can be extended ninefold. This concept, called progressive filtration, allows HEPA filters in special care areas to be used for 10 years. 213 Although progressive filtering will extend the mechanical ability of the HEPA filter, these filters may absorb chemicals in the environment and later desorb those chemicals, thereby necessitating a more frequent replacement program. HEPA filter efficiency is monitored with the dioctylphthalate (DOP) particle test using particles that are 0.3 μm in diameter. 218 HEPA filters are usually framed with metal, although some older versions have wood frames. A metal frame has no advantage over a properly fitted wood frame with respect to performance, but wood can compromise the air quality if it becomes and remains wet, allowing the growth of fungi and bacteria. Hospitals are therefore advised to phase out water-damaged or spent wood-framed filter units and replace them with metal-framed HEPA filters. HEPA filters are usually fixed into the HVAC system; however, portable, industrial-grade HEPA units are available that can filter air at the rate of 300-800 ft³/min. Portable HEPA filters are used to a. temporarily recirculate air in rooms with no general ventilation, b. augment systems that cannot provide adequate airflow, and c. provide increased effectiveness in airflow. 4 Portable HEPA units are useful engineering controls that help clean the air when the central HVAC system is undergoing repairs, 219 but these units do not satisfy fresh-air requirements. 214
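The flow ratings just cited can be translated into room air changes per hour, the metric used throughout this guideline. The following is a minimal sketch, not part of the guideline itself: it applies the standard ACH definition (airflow per hour divided by room volume) to the 300-800 ft³/min range quoted above and to the approximately 1,600 ft³ "average room" and ≥12 ACH target discussed below; the function name is hypothetical.

```python
# Minimal sketch: estimate the ACH delivered by a portable HEPA unit.
# ACH = cubic feet of air moved per hour / room volume in cubic feet.

def portable_hepa_ach(flow_cfm: float, room_volume_ft3: float) -> float:
    """Air changes per hour for a unit moving flow_cfm cubic feet per minute."""
    return flow_cfm * 60.0 / room_volume_ft3

TARGET_ACH = 12.0          # design target for these units (see text)
ROOM_VOLUME_FT3 = 1600.0   # "average room" volume cited in the text

for flow_cfm in (300.0, 500.0, 800.0):
    ach = portable_hepa_ach(flow_cfm, ROOM_VOLUME_FT3)
    verdict = "meets" if ach >= TARGET_ACH else "falls short of"
    print(f"{flow_cfm:5.0f} ft3/min -> {ach:4.1f} ACH ({verdict} the 12 ACH target)")
```

Note that the low end of the quoted range (300 ft³/min) yields only about 11 ACH in a 1,600 ft³ room, which is one reason the text advises contacting the hospital engineering department when sizing a portable unit for a specific space.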
The effectiveness of the portable unit for particle removal is dependent on a. the configuration of the room, b. the furniture and persons in the room, c. the placement of the unit relative to the contents and layout of the room, and d. the location of the supply and exhaust registers or grilles. If portable, industrial-grade units are used, they should be capable of recirculating all or nearly all of the room air through the HEPA filter, and the unit should be designed to achieve the equivalent of ≥12 ACH. 4 (An average room has approximately 1,600 ft³ of airspace.) The hospital engineering department should be contacted to provide ACH information in the event that a portable HEPA filter unit is necessary to augment the existing fixed HVAC system for air cleaning.
ii. Filter Maintenance
The efficiency of the filtration system is dependent on the density of the filters, which creates a drop in pressure that must be compensated for by stronger and more efficient fans in order to maintain air flow. For optimal performance, filters require monitoring and replacement in accordance with the manufacturer's recommendations and standard preventive maintenance practices. 220 Upon removal, spent filters can be bagged and discarded with the routine solid waste, regardless of their patient-care area location. 221 Excess accumulation of dust and particulates increases filter efficiency, requiring more pressure to push the air through. The pressure differential across filters is measured by use of manometers or other gauges. A pressure reading that exceeds specifications indicates the need to change the filter. Filters also require regular inspection for other potential causes of decreased performance. Gaps in and around filter banks and heavy soil and debris upstream of poorly maintained filters have been implicated in health-care associated outbreaks of aspergillosis, especially when accompanied by construction activities at the facility. 17,18,106,222
# c. Ultraviolet Germicidal Irradiation (UVGI)
As a supplemental air-cleaning measure, UVGI is effective in reducing the transmission of airborne bacterial and viral infections in hospitals, military housing, and classrooms, but it has only a minimal inactivating effect on fungal spores. [223][224][225][226][227][228] UVGI is also used in air handling units to prevent or limit the growth of vegetative bacteria and fungi. Most commercially available UV lamps used for germicidal purposes are low-pressure mercury vapor lamps that emit radiant energy predominantly at a wavelength of 253.7 nm. 229,230 Two systems of UVGI have been used in health-care settings: duct irradiation and upper-room air irradiation. In duct irradiation systems, UV lamps are placed inside ducts that remove air from rooms to disinfect the air before it is recirculated. When properly designed, installed, and maintained, high levels of UVGI can be attained in the ducts with little or no exposure of persons in the rooms. 231,232 In upper-room air irradiation, UV lamps are either suspended from the ceiling or mounted on the wall. 4 Upper-air UVGI units have two basic designs: a. a "pan" fixture with UVGI unshielded above the unit to direct the irradiation upward, and b. a fixture with a series of parallel plates that columnize the irradiation outward while preventing the light from reaching the eyes of the room's occupants. The germicidal effect is dependent on air mixing via convection between the room's irradiated upper zone and the lower patient-care zones. 233,234 Bacterial inactivation studies using BCG mycobacteria and Serratia marcescens have estimated the effect of UVGI as equivalent to 10-39 ACH.
235,236 Another study, however, suggests that UVGI may result in fewer equivalent ACH in the patient-care zone, especially if the mixing of air between zones is insufficient. 234 The use of fans or HVAC systems to generate air movement may increase the effectiveness of UVGI if airborne microorganisms are exposed to the light energy for a sufficient length of time. 233,235,[237][238][239] The optimal relationship between ventilation and UVGI is not known. Because the clinical effectiveness of UV systems may vary, UVGI is not recommended for air management prior to air recirculation from airborne isolation rooms. It is also not recommended as a substitute for HEPA filtration, local exhaust of air to the outside, or negative pressure. 4 The use of UV lamps and HEPA filtration in a single unit offers only minimal infection-control benefits over those provided by the use of a HEPA filter alone. 240 Duct systems with UVGI are not recommended as a substitute for HEPA filters if the air from isolation rooms must be recirculated to other areas of the facility. 4 Regular maintenance of UVGI systems is crucial and usually consists of keeping the bulbs free of dust and replacing old bulbs as necessary. Safety issues associated with the use of UVGI systems are described in other guidelines. 4
# d. Conditioned Air in Occupied Spaces
Temperature and humidity are two essential components of conditioned air. After outside air passes through a low- or medium-efficiency filter, the air undergoes conditioning for temperature and humidity control before it passes through high-efficiency or HEPA filtration.
i. Temperature
HVAC systems in health-care facilities are often single-duct or dual-duct systems. 35,241 A single-duct system distributes cooled air (55°F [12.8°C]) throughout the building and uses thermostatically controlled reheat boxes located in the terminal ductwork to warm the air for individual or multiple rooms. The dual-duct system consists of parallel ducts, one with a cold air stream and the other with a hot air stream. A mixing box in each room or group of rooms mixes the two air streams to achieve the desired temperature. Temperature standards are given as either a single temperature or a range, depending on the specific health-care zone. Cool temperature standards (68°F-73°F [20°C-23°C]) usually are associated with operating rooms, clean workrooms, and endoscopy suites. 120 A warmer temperature (75°F [24°C]) is needed in areas requiring greater degrees of patient comfort. Most other zones use a temperature range of 70°F-75°F (21°C-24°C). 120 Temperatures outside of these ranges may be needed occasionally in limited areas, depending on individual circumstances during patient care (e.g., cooler temperatures in operating rooms during specialized operations).
ii. Humidity
Four measures of humidity are used to quantify different physical properties of the mixture of water vapor and air. The most common of these is relative humidity, which is the ratio of the amount of water vapor in the air to the amount of water vapor the air can hold at that temperature. 242 The other measures of humidity are specific humidity, dew point, and vapor pressure. 242 Relative humidity measures the percentage of saturation. At 100% relative humidity, the air is saturated. For most areas within health-care facilities, the designated comfort range is 30%-60% relative humidity. 120,214 Relative humidity levels >60%, in addition to being perceived as uncomfortable, promote fungal growth.
243 Humidity levels can be manipulated by either of two mechanisms. 244 In a water-wash unit, water is sprayed and drops are taken up by the filtered air; additional heating or cooling of this air sets the humidity levels. The second mechanism is by means of water vapor created from steam and added to filtered air in humidifying boxes. Reservoir-type humidifiers are not allowed in health-care facilities as per AIA guidelines and many state codes. 120 Cool-mist humidifiers should be avoided, because they can disseminate aerosols containing allergens and microorganisms. 245 Additionally, the small, personal-use versions of this equipment can be difficult to clean.
iii. Ventilation
The control of air pollutants (e.g., microorganisms, dust, chemicals, and smoke) at the source is the most effective way to maintain clean air. The second most effective means of controlling indoor air pollution is through ventilation. Ventilation rates are voluntary unless a state or local government specifies a standard in health-care licensing or health department requirements. These standards typically apply only to the design of a facility, rather than to its operation. 220,246 Health-care facilities without specific ventilation standards should follow the AIA guideline specific to the year in which the building was built, or ANSI/ASHRAE Standard 62, Ventilation for Acceptable Indoor Air Quality. 120,214,241 Ventilation guidelines are defined in terms of air volume per minute per occupant and are based on the assumption that occupants and their activities are responsible for most of the contaminants in the conditioned space. 215 Most ventilation rates for health-care facilities are expressed as room ACH. Peak efficiency for particle removal in the air space occurs between 12 and 15 ACH. 35,247,248 Ventilation rates vary among the different patient-care areas of a health-care facility (Appendix B). 120 Health-care facilities generally use recirculated air. 35,120,241,249,250 Fans create sufficient positive pressure to force air through the building duct work and adequate negative pressure to evacuate air from the conditioned space into the return duct work and/or exhaust, thereby completing the circuit in a sealed system (Figure 1). However, because gaseous contaminants tend to accumulate as the air recirculates, a percentage of the recirculated air is exhausted to the outside and replaced by fresh outdoor air. In hospitals, the delivery of filtered air to an occupied space is an engineered system design issue, the full discussion of which is beyond the scope of this document. Hospitals with areas not served by central HVAC systems often use through-the-wall or fan-coil air conditioning units as the sole source of room ventilation. AIA guidelines for newly installed systems stipulate that through-the-wall fan-coil units be equipped with permanent (i.e., cleanable) or replaceable filters with a minimum efficiency of 68% weight arrestance. 120 These units may be used only as recirculating units; all outdoor air requirements must be met by a separate central air handling system with proper filtration, with a minimum of two outside air changes in general patient rooms (D. Erickson, ASHE, 2000). 120 If a patient room is equipped with an individual through-the-wall fan-coil unit, the room should not be used as either AII or PE. 120 These requirements, although directed to new HVAC installations, also are appropriate for existing settings.
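The 12-15 ACH figure cited above for peak particle removal can be related to clearance times through the standard well-mixed dilution model. The following is an idealized sketch assuming perfect mixing and no ongoing particle generation (conditions that real rooms only approximate), not a formula stated in this guideline:

```latex
% Well-mixed dilution model: concentration decays exponentially with ACH.
\[
  C(t) = C_0 \, e^{-\left(\mathrm{ACH}/60\right) t}
  \qquad (t \text{ in minutes})
\]
% Time to achieve a given fractional removal, e.g., 99%:
\[
  t_{99} = \frac{60 \ln 100}{\mathrm{ACH}}
  \approx 23 \text{ min at } 12\ \mathrm{ACH},
  \qquad
  t_{99} \approx 18 \text{ min at } 15\ \mathrm{ACH}
\]
```

Imperfect mixing lengthens these times in practice, which is one reason direct measurement of airflow and pressure relationships, rather than calculation alone, is emphasized throughout this section.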
Non-central air-handling systems are prone to problems associated with excess condensation accumulating in drip pans and improper filter maintenance; health-care facilities should clean or replace the filters in these units on a regular basis while the patient is out of the room. Laminar airflow ventilation systems are designed to move air in a single pass, usually through a bank of HEPA filters either along a wall or in the ceiling, in a one-way direction through a clean zone with parallel streamlines. Laminar airflow can be directed vertically or horizontally; the unidirectional system optimizes airflow and minimizes air turbulence. 63,241 Delivery of air at a rate of 0.5 meters per second (90 ± 20 ft/min) helps to minimize opportunities for microorganism proliferation. 63,251,252 Laminar airflow systems have been used in PE to help reduce the risk for health-care associated airborne infections (e.g., aspergillosis) in high-risk patients. 63,93,253,254 However, data that demonstrate a survival benefit for patients in PE with laminar airflow are lacking. Given the high cost of installation and apparent lack of benefit, the value of laminar airflow in this setting is questionable. 9,37 Few data support the use of laminar airflow systems elsewhere in a hospital. 255
iv. Pressurization
Positive and negative pressures refer to a pressure differential between two adjacent air spaces (e.g., rooms and hallways). Air flows away from areas or rooms with positive pressure (pressurized), while air flows into areas with negative pressure (depressurized). AII rooms are set at negative pressure to prevent airborne microorganisms in the room from entering hallways and corridors. PE rooms housing severely neutropenic patients are set at positive pressure to keep airborne pathogens in adjacent spaces or corridors from coming into and contaminating the airspace occupied by such high-risk patients. Self-closing doors are mandatory for both of these areas to help maintain the correct pressure differential. 4,6,120 Older health-care facilities may have variable pressure rooms (i.e., rooms in which the ventilation can be manually switched between positive and negative pressure). These rooms are no longer permitted in the construction of new facilities or in renovated areas of the facility, 120 and their use in existing facilities has been discouraged because of difficulties in assuring the proper pressure differential, especially for the negative pressure setting, and because of the potential for error associated with switching the pressure differentials for the room. Continued use of existing variable pressure rooms depends on a partnership between engineering and infection control. Both positive- and negative-pressure rooms should be maintained according to specific engineering specifications (Table 6). Health-care professionals (e.g., infection-control staff and hospital epidemiologists) must perform a risk assessment to determine the appropriate number of AII rooms (negative pressure) and/or PE rooms (positive pressure) to serve the patient population. The AIA guidelines require a certain number of AII rooms as a minimum, and it is important to refer to the edition under which the building was built for appropriate guidance. 120 In large health-care facilities with central HVAC systems, sealed windows help to ensure the efficient operation of the system, especially with respect to creating and maintaining pressure differentials.
Sealing the windows in PE areas helps minimize the risk of airborne contamination from the outside. One outbreak of aspergillosis among immunosuppressed patients in a hospital was attributed in part to an open window in the unit during a time when both construction and a fire happened nearby; sealing the window prevented further entry of fungal spores into the unit from the outside air. 111 Additionally, all emergency exits (e.g., fire escapes and emergency doors) in PE wards should be kept closed (except during emergencies) and equipped with alarms. # e. Infection Control Impact of HVAC System Maintenance and Repair A failure or malfunction of any component of the HVAC system may subject patients and staff to discomfort and exposure to airborne contaminants. Only limited information is available from formal studies on the infection-control implications of a complete air-handling system failure or shutdown for maintenance. Most experience has been derived from infectious disease outbreaks and adverse outcomes among high-risk patients when HVAC systems are poorly maintained. (See Table 7 for potential ventilation hazards, consequences, and correction measures.) AIA guidelines prohibit U.S. hospitals and surgical centers from shutting down their HVAC systems for purposes other than required maintenance, filter changes, and construction. 120 Airflow can be reduced; however, sufficient supply, return, and exhaust must be provided to maintain required pressure relationships when the space is not occupied. Maintaining these relationships can be accomplished with special drives on the air-handling units (i.e., a variable air volume [VAV] system). Microorganisms proliferate in environments wherever air, dust, and water are present, and air-handling systems can be ideal environments for microbial growth. 35 Properly engineered HVAC systems require routine maintenance and monitoring to provide acceptable indoor air quality efficiently and to minimize conditions that favor the proliferation of health-care associated pathogens. 35,249 Performance monitoring of the system includes determining pressure differentials across filters, regular inspection of system filters, DOP testing of HEPA filters, testing of low- or medium-efficiency filters, and manometer tests for positive- and negative-pressure areas in accordance with nationally recognized standards, guidelines, and manufacturers' recommendations. The use of hand-held, calibrated equipment that can provide a numerical reading on a daily basis is preferred for engineering purposes (A. Streifel, University of Minnesota, 2000). 256 Several methods that provide a visual, qualitative measure of pressure differentials (i.e., airflow direction) include smoke-tube tests or placing flutter strips, ping-pong balls, or tissue in the air stream. Preventive filter and duct maintenance (e.g., cleaning ductwork vents, replacing filters as needed, and properly disposing of spent filters in plastic bags immediately upon removal) is important to prevent potential exposures of patients and staff during HVAC system shut-down. The frequency of filter inspection and the parameters of this inspection are established by each facility to meet its unique needs. Ductwork in older health-care facilities may have insulation on the interior surfaces that can trap contaminants. This insulation material tends to break down over time and can be discharged from the HVAC system.
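When a system must be shut down for the maintenance described above, the time needed to re-clean the air upon restart can be estimated with the standard perfect-mixing decay model; a minimal sketch follows (this is an idealization, since real rooms mix imperfectly):

```python
import math

def minutes_to_remove(ach: float, removal_fraction: float = 0.99) -> float:
    """Estimate minutes to remove a given fraction of an initial airborne contaminant load.

    Assumes perfect mixing and no ongoing generation of contaminant:
    t = (60 / ACH) * ln(C_initial / C_final).
    """
    return (60.0 / ach) * math.log(1.0 / (1.0 - removal_fraction))

for ach in (6, 12, 15):
    print(f"{ach:>2} ACH: {minutes_to_remove(ach):5.1f} min for 99% removal, "
          f"{minutes_to_remove(ach, 0.999):5.1f} min for 99.9% removal")
# 6 ACH -> about 46/69 min; 12 ACH -> about 23/35 min; 15 ACH -> about 18/28 min
```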
Additionally, a malfunction of the air-intake system can overburden the filtering system and permit aerosolization of fungal pathogens. Keeping the intakes free from bird droppings, especially those from pigeons, helps to minimize the concentration of fungal spores entering from the outside. 98 Accumulation of dust and moisture within HVAC systems increases the risk for spread of health-care associated environmental fungi and bacteria. Clusters of infections caused by Aspergillus spp., P. aeruginosa, S. aureus, and Acinetobacter spp. have been linked to poorly maintained and/or malfunctioning air conditioning systems. 68,161,257,258 Efforts to limit excess humidity and moisture in the infrastructure and on air-stream surfaces in the HVAC system can minimize the proliferation and dispersion of fungal spores and waterborne bacteria throughout indoor air. [259][260][261][262] Within the HVAC system, water is present in water-wash units, humidifying boxes, or cooling units. The dual-duct system may also create conditions of high humidity and excess moisture that favor fungal growth in drain pans as well as in fibrous insulation material that becomes damp as a result of humid air condensing within the unit. If moisture is present in the HVAC system, periods of stagnation should be avoided. Bursts of organisms can be released upon system start-up, increasing the risk of airborne infection. 206 Proper engineering of the HVAC system is critical to preventing dispersal of airborne organisms. In one hospital, endophthalmitis caused by Acremonium kiliense infection following cataract extraction in an ambulatory surgical center was traced to aerosols derived from the humidifier water in the ventilation system. 206 The organism proliferated because the ventilation system was turned off routinely when the center was not in operation; the air was filtered before humidification, but not afterwards. Most health-care facilities have contingency plans in case of disruption of HVAC services. These plans include back-up power generators that maintain the ventilation system in high-risk areas (e.g., operating rooms, intensive-care units, negative- and positive-pressure rooms, transplantation units, and oncology units). Alternative generators are required to engage within 10 seconds of a loss of main power. If the ventilation system is out of service, rendering indoor air stagnant, sufficient time must be allowed to clean the air and re-establish the appropriate number of ACH once the HVAC system begins to function again. Air filters may also need to be changed, because reactivation of the system can dislodge substantial amounts of dust and create a transient burst of fungal spores. Duct cleaning in health-care facilities has benefits in terms of system performance, but its usefulness for infection control has not been conclusively determined. Duct cleaning typically involves using specialized tools to dislodge dirt and a high-powered vacuum cleaner to clean out debris. 263 Some duct-cleaning services also apply chemical biocides or sealants to the inside surfaces of ducts to minimize fungal growth and prevent the release of particulate matter. The U.S. Environmental Protection Agency (EPA), however, has concerns with the use of sanitizers and/or disinfectants to treat the surfaces of ductwork, because the label indications for most of these products may not specifically include the use of the product in HVAC systems.
264 Further, EPA has not evaluated the potency of disinfectants in such applications, nor has the agency examined the potential attendant health and safety risks. The EPA recommends that companies use only those chemical biocides that are registered for use in HVAC systems. 264 Although infrequent cleaning of the exhaust ducts in AII areas has been documented as a cause of diminishing negative pressure and a decrease in the air exchange rates, 214 no data indicate that duct cleaning, beyond what is recommended for optimal performance, improves indoor air quality or reduces the risk of infection. Exhaust return systems should be cleaned as part of routine system maintenance. Duct cleaning has not been shown to prevent any health problems, 265 and EPA studies indicate that airborne particulate levels do not increase as a result of dirty air ducts, nor do they diminish after cleaning, presumably because much of the dirt inside air ducts adheres to duct surfaces and does not enter the conditioned space. 265 Additional research is needed to determine if air-duct contamination can significantly increase the airborne infection risk in general areas of health-care facilities. # Construction, Renovation, Remediation, Repair, and Demolition # a. General Information Environmental disturbances caused by construction and/or renovation and repair activities (e.g., disruption of the above-ceiling area, running cables through the ceiling, and structural repairs) in and near health-care facilities markedly increase the airborne Aspergillus spp. spore counts in the indoor air of such facilities, thereby increasing the risk for health-care associated aspergillosis among high-risk patients. Although one case of health-care associated aspergillosis is often difficult to link to a specific environmental exposure, the occurrence of temporally clustered cases increases the likelihood that an environmental source within the facility may be identified and corrected. Construction, renovation, repair, and demolition activities in health-care facilities require substantial planning and coordination to minimize the risk for airborne infection both during projects and after their completion. Several organizations and experts have endorsed a multi-disciplinary team approach (Box 4) to coordinate the various stages of construction activities (e.g., project inception, project implementation, final walk-through, and completion). 120,249,250,[273][274][275][276] Environmental services, employee health, engineering, and infection control must be represented in construction planning, and design meetings should be convened with architects and design engineers. The number of members and disciplines represented is a function of the complexity of a project. Smaller, less complex projects and maintenance may require a minimal number of members beyond the core representation from engineering, infection control, environmental services, and the directors of the specialized departments. # Box 4.
Suggested members and functions of a multi-disciplinary coordination team for construction, renovation, repair, and demolition projects

# Members
• Infection-control personnel, including hospital epidemiologists
• Laboratory personnel
• Facility administrators or their designated representatives, facility managers
• Director of engineering
• Risk-management personnel
• Directors of specialized programs (e.g., transplantation, oncology, and ICU [intensive care unit] programs)
• Employee safety personnel, industrial hygienists, and regulatory affairs personnel
• Environmental services personnel
• Information systems personnel
• Construction administrators or their designated representatives
• Architects, design engineers, project managers, and contractors

# Functions and responsibilities
• Coordinate members' input in developing a comprehensive project management plan.
• Conduct a risk assessment of the project to determine potential hazards to susceptible patients.
• Prevent unnecessary exposures of patients, visitors, and staff to infectious agents.
• Oversee all infection-control aspects of construction activities.
• Establish site-specific infection-control protocols for specialized areas.
• Provide education about the infection-control impact of construction to staff and construction workers.
• Ensure compliance with technical standards, contract provisions, and regulations.
• Establish a mechanism to address and correct problems quickly.
• Develop contingency plans for emergency response to power failures, water supply disruptions, and fires.
• Provide a water-damage management plan (including drying protocols) for handling water intrusion from floods, leaks, and condensation.
• Develop a plan for structural maintenance.

Education of maintenance and construction workers, health-care staff caring for high-risk patients, and persons responsible for controlling indoor air quality heightens awareness that minimizing dust and moisture intrusion from construction sites into high-risk patient-care areas helps to maintain a safe environment. 120,250,271,[275][276][277][278] Visual and printed educational materials should be provided in the language spoken by the workers. Staff and construction workers also need to be aware of the potentially catastrophic consequences of dust and moisture intrusion when an HVAC system or water system fails during construction or repair; action plans to deal quickly with these emergencies should be developed in advance and kept on file. Incorporation of specific standards into construction contracts may help to prevent departures from recommended practices as projects progress. Establishing specific lines of communication is important to address problems (e.g., dust control, indoor air quality, noise levels, and vibrations), resolve complaints, and keep projects moving toward completion. Health-care facility staff should develop a mechanism to monitor worker adherence to infection-control guidelines on a daily basis in and around the construction site for the duration of the project. # b. Preliminary Considerations The three major topics to consider before initiating any construction or repair activity are as follows: a. design and function of the new structure or area, b. assessment of environmental risks for airborne disease and opportunities for prevention, and c. measures to contain dust and moisture during construction or repairs.
A checklist of design and function considerations can help to ensure that a planned structure or area can be easily serviced and maintained for environmental infection control (Box 5). 17,250,273,[275][276][277] Specifications for the construction, renovation, remodeling, and maintenance of health-care facilities are outlined in the AIA document, Guidelines for Design and Construction of Hospitals and Health Care Facilities. 120,275 Box 5. Construction design and function considerations for environmental infection control Proactive strategies can help prevent environmentally mediated airborne infections in health-care facilities during demolition, construction, and renovation. The potential presence of dust and moisture and their contribution to health-care associated infections must be critically evaluated early in the planning of any demolition, construction, renovation, and repairs. 120,250,251,273,274,[276][277][278][279] Consideration must extend beyond dust generated by major projects to include dust that can become airborne if disturbed during routine maintenance and minor renovation activities (e.g., exposure of ceiling spaces for inspection; installation of conduits, cable, or sprinkler systems; rewiring; and structural repairs or replacement). 273,276,277 Other projects that can compromise indoor air quality include construction and repair jobs that inadvertently allow substantial amounts of raw, unfiltered outdoor air to enter the facility (e.g., repair of elevators and elevator shafts) and activities that dampen any structure, area, or item made of porous materials or characterized by cracks and crevices (e.g., sink cabinets in need of repair, carpets, ceilings, floors, walls, vinyl wall coverings, upholstery, drapes, and countertops). 18,273,277 Molds grow and proliferate on these surfaces when they become and remain wet. 21,120,250,266,270,272,280 Scrubbable materials are preferred for use in patient-care areas. Containment measures for dust and/or moisture control are dictated by the location of the construction site. Outdoor demolition and construction require actions to keep dust and moisture out of the facility (e.g., sealing windows and vents and keeping doors closed or sealed). Containment of dust and moisture generated from construction inside a facility requires barrier structures (either pre-fabricated or constructed of more durable materials as needed) and engineering controls to clean the air in and around the construction or repair site. # c. Infection-Control Risk Assessment An infection-control risk assessment (ICRA) conducted before initiating repairs, demolition, construction, or renovation activities can identify potential exposures of susceptible patients to dust and moisture and determine the need for dust and moisture containment measures. This assessment centers on the type and extent of the construction or repairs in the work area but may also need to include adjacent patient-care areas, supply storage, and areas on levels above and below the proposed project. An example of designing an ICRA as a matrix, the policy for performing an ICRA and implementing its results, and a sample permit form that streamlines the communication process are available. 281 Knowledge of the air flow patterns and pressure differentials helps minimize or eliminate the inadvertent dispersion of dust that could contaminate air space, patient-care items, and surfaces.
57,282,283 A recent aspergillosis outbreak among oncology patients was attributed to depressurization of the building housing the HSCT unit while construction was underway in an adjacent building. Pressure readings in the affected building (including 12 of 25 HSCT-patient rooms) ranged from 0.1 Pa to 5.8 Pa. Unfiltered outdoor air flowed into the building through doors and windows, exposing patients in the HSCT unit to fungal spores. 283 During long-term projects, providing temporary essential services (e.g., toilet facilities) and conveniences (e.g., vending machines) to construction workers within the site will help to minimize traffic in and out of the area. The type of barrier systems necessary for the scope of the project must be defined. 12,120,250,279,284 Depending on the location and extent of the construction, patients may need to be relocated to other areas in the facility not affected by construction dust. 51,285 Such relocation might be especially prudent when construction takes place within units housing immunocompromised patients (e.g., severely neutropenic patients and patients on corticosteroid therapy). Advance assessment of high-risk locations and planning for the possible transport of patients to other departments can minimize delays and waiting time in hallways. 51 Although hospitals have provided immunocompromised patients with some form of respiratory protection for use outside their rooms, the issue is complex and will remain unresolved until more research can be done. Previous guidance on this issue has been inconsistent. 9 Protective respirators (i.e., N95) were well tolerated by patients when used to prevent further cases of construction-related aspergillosis in a recent outbreak. 283 The routine use of the N95 respirator by patients, however, has not been evaluated for preventing exposure to fungal spores during periods of non-construction. Although health-care workers who would be using the N95 respirator for personal respiratory protection must be fit-tested, there is no indication that either patients or visitors should undergo fit-testing. Surveillance activities should augment preventive strategies during construction projects. 3,4,20,110,286,287 By determining baseline levels of health-care acquired airborne and waterborne infections, infection-control staff can monitor changes in infection rates and patterns during and immediately after construction, renovations, or repairs. 3 # d. Air Sampling Air sampling in health-care facilities may be conducted both during periods of construction and on a periodic basis to determine indoor air quality, efficacy of dust-control measures, or air-handling system performance via parametric monitoring. Parametric monitoring consists of measuring the physical performance of the system; periodic assessment of the system (e.g., air flow direction and pressure, ACH, and filter efficiency) can give assurance of proper ventilation, especially for special care areas and operating rooms. 288 Air sampling is used to detect aerosols (i.e., particles or microorganisms). Particulate sampling (i.e., total numbers and size range of particulates) is a practical method for evaluating the infection-control performance of the HVAC system, with an emphasis on filter efficiency in removing respirable particles (<5 μm in diameter) or larger particles from the air. Particle size is reported in terms of the mass median aerodynamic diameter (MMAD), whereas count median aerodynamic diameter (CMAD) is useful with respect to particle concentrations.
Particle counts in a given air space within the health-care facility should be evaluated against counts obtained in a comparison area. Particle counts indoors are commonly compared with the particulate levels of the outdoor air. This approach determines the "rank order" air quality from "dirty" (i.e., the outdoor air) to "clean" (i.e., air filtered through high-efficiency filters [90%-95% filtration]) to "cleanest" (i.e., HEPA-filtered air). 288 Comparisons from one indoor area to another may also provide useful information about the magnitude of an indoor air-quality problem. Making rank-order comparisons between clean, highly-filtered areas and dirty areas and/or outdoors is one way to interpret sampling results in the absence of air quality and action level standards. 35,289 In addition to verifying filter performance, particle counts can help determine if barriers and efforts to control dust dispersion from construction are effective. This type of monitoring is helpful when performed at various times and barrier perimeter locations during the project. Gaps or breaks in the barriers' joints or seals can then be identified and repaired. The American Conference of Governmental Industrial Hygienists (ACGIH) has set a threshold limit value-time weighted average (TLV®-TWA) of 10 mg/m³ for nuisance dust that contains no asbestos and <1% crystalline silica. 290 Alternatively, OSHA has set permissible exposure limits (PELs) for inert or nuisance dust as follows: respirable fraction at 5 mg/m³ and total dust at 15 mg/m³. 291 Although these standards are not measures of a bioaerosol, they are used for indoor air quality assessment in occupational settings and may be useful criteria in construction areas. Application of ACGIH guidance to health-care settings has not been standardized, but particulate counts in health-care facilities are likely to be well below this threshold value and approaching cleanroom standards in certain care areas (e.g., operating rooms). 100 Particle counters and anemometers are used in particulate evaluation. The anemometer measures air flow velocity, which can be used to determine sample volumes. Particulate sampling usually does not require microbiology laboratory services for the reporting of results. Microbiologic sampling of air in health-care facilities remains controversial because of currently unresolved technical limitations and the need for substantial laboratory support (Box 6). Infection-control professionals, laboratorians, and engineers should determine if microbiologic and/or particle sampling is warranted and assess proposed methods for sampling. The most significant technical limitation of air sampling for airborne fungal agents is the lack of standards linking fungal spore levels with infection rates. Despite this limitation, several health-care institutions have opted to use microbiologic sampling when construction projects are anticipated and/or underway in efforts to assess the safety of the environment for immunocompromised patients. 35,289 Microbiologic air sampling should be limited to assays for airborne fungi; of those, the thermotolerant fungi (i.e., those capable of growing at 95°F-98.6°F [35°C-37°C]) are of particular concern because of their pathogenicity in immunocompromised hosts. 35 Use of selective media (e.g., Sabouraud dextrose agar and inhibitory mold agar) helps with the initial identification of recovered organisms.
Microbiologic sampling for fungal spores performed as part of various airborne disease outbreak investigations has also been problematic. 18,49,106,111,112,289 The precise source of a fungus is often difficult to trace with certainty, and sampling conducted after exposure may neither reflect the circumstances that were linked to infection nor distinguish between health-care acquired and community-acquired infections. Because fungal strains may fluctuate rapidly in the environment, health-care acquired Aspergillus spp. infection cannot be confirmed or excluded if the infecting strain is not found in the health-care setting. 287 Sensitive molecular typing methods (e.g., randomly amplified polymorphic DNA [RAPD] techniques and a more recent DNA fingerprinting technique that detects restriction fragment length polymorphisms in fungal genomic DNA) to identify strain differences among Aspergillus spp., however, are becoming increasingly used in epidemiologic investigations of health-care acquired fungal infection (A. Streifel, University of Minnesota, 2000). 68,110,286,287,[292][293][294][295][296] During case cluster evaluation, microbiologic sampling may provide an isolate from the environment for molecular typing and comparison with patient isolates. Therefore, it may be prudent for the clinical laboratory to save Aspergillus spp. isolated from colonizations and invasive disease cases among patients in PE, oncology, and transplant services for these purposes. Sedimentation methods using settle plates and volumetric sampling methods using solid impactors are commonly employed when sampling air for bacteria and fungi. Settle plates have been used by numerous investigators to detect airborne bacteria or to measure air quality during medical procedures (e.g., surgery). 17,60,97,151,161,287 Settle plates, because they rely on gravity during sampling, tend to select for larger particles and lack sensitivity for respirable particles (e.g., individual fungal spores), especially in highly-filtered environments. Therefore, they are considered impractical for general use. 35,289,[298][299][300][301] Settle plates, however, may detect fungi aerosolized during medical procedures (e.g., during wound dressing changes), as described in a recent outbreak of aspergillosis among liver transplant patients. 302 The use of slit or sieve impactor samplers capable of collecting large volumes of air in short periods of time is needed to detect low numbers of fungal spores in highly filtered areas. 35,289 In some outbreaks, aspergillosis cases have occurred when fungal spore concentrations in PE ambient air ranged as low as 0.9-2.2 colony-forming units per cubic meter (CFU/m³) of air. 18,94 On the basis of the expected spore counts in the ambient air and the performance parameters of various types of volumetric air samplers, investigators of a recent aspergillosis outbreak have suggested that an air volume of at least 1,000 L (1 m³) should be considered when sampling highly filtered areas. 283 Investigators have also suggested limits of 15 CFU/m³ for gross colony counts of fungal organisms and <0.1 CFU/m³ for Aspergillus fumigatus and other potentially opportunistic fungi in heavily filtered areas (≥12 ACH and filtration of ≥99.97% efficiency). 120 No correlation of these values with the incidence of health-care associated fungal infection rates has been reported.
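The sampling parameters quoted above translate directly into sampler run times and simple pass/fail comparisons. In the sketch below, the 28.3 L/min flow rate is an assumption (a common slit-impactor rating, equal to 1 ft³/min); the CFU limits are the suggested values cited above for heavily filtered areas:

```python
def sampling_minutes(target_volume_liters: float, sampler_flow_l_per_min: float) -> float:
    """Sampler run time needed to collect the target air volume."""
    return target_volume_liters / sampler_flow_l_per_min

def within_suggested_limits(gross_fungi_cfu_m3: float, aspergillus_cfu_m3: float) -> bool:
    """Compare counts against the suggested limits for heavily filtered areas:
    <=15 CFU/m3 gross fungal colony counts and <0.1 CFU/m3 for A. fumigatus
    and other potentially opportunistic fungi."""
    return gross_fungi_cfu_m3 <= 15.0 and aspergillus_cfu_m3 < 0.1

# Collecting the suggested minimum of 1 m3 (1,000 L) at an assumed 28.3 L/min:
print(f"{sampling_minutes(1000, 28.3):.0f} min of sampling")  # about 35 min
print(within_suggested_limits(gross_fungi_cfu_m3=4.0, aspergillus_cfu_m3=0.05))  # True
```

As the run-time calculation makes plain, detecting spore concentrations below 0.1 CFU/m³ is impractical with low-volume samplers, which is one reason the choice of sampler matters in highly filtered areas.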
Air sampling in health-care facilities, whether used to monitor air quality during construction, to verify filter efficiency, or to commission new space prior to occupancy, requires careful notation of the circumstances of sampling. Most air sampling is performed under undisturbed conditions. However, when the air is sampled during or after human activity (e.g., walking and vacuuming), a higher number of airborne microorganisms likely is detected. 297 The contribution of human activity to the significance of air sampling and its impact on health-care associated infection rates remain to be defined. Comparing microbiologic sampling results from a target area (e.g., an area of construction) to those from an unaffected location in the facility can provide information about distribution and concentration of potential airborne pathogens. A comparison of microbial species densities in outdoor air versus indoor air has been used to help pinpoint fungal spore bursts. Fungal spore densities in outdoor air are variable, although the degree of variation with the seasons appears to be more dramatic in the United States than in Europe. 92,287,303 Particulate and microbiologic air sampling have been used when commissioning new HVAC system installations; however, such sampling is particularly important for newly constructed or renovated PE or operating rooms. Particulate sampling is used as part of a battery of tests to determine if a new HVAC system is performing to specifications for filtration and the proper number of ACH. 268,288,304 Microbiologic air sampling, however, remains controversial in this application, because no standards for comparison purposes have been determined. If performed, sampling should be limited to determining the density of fungal spores per unit volume of air space. High numbers of spores may indicate contamination of air-handling system components prior to installation or a system deficiency when culture results are compared with known filter efficiencies and rates of air exchange. # e. External Demolition and Construction External demolition, planned building implosions, and dirt excavation generate considerable dust and debris that can contain airborne microorganisms. In one study, peak concentrations in outdoor air at grade level and HVAC intakes during site excavation averaged 20,000 CFU/m³ for all fungi and 500 CFU/m³ for Aspergillus fumigatus, compared with 19 CFU/m³ and 4 CFU/m³, respectively, in the absence of construction. 280 Many health-care institutions are located in large, urban areas; building implosions are becoming a more frequent concern. Infection-control risk assessment teams, particularly those in facilities located in urban renewal areas, would benefit by developing risk management strategies for external demolition and construction as a standing policy. In light of the events of 11 September 2001, it may be necessary for the team to identify those dust exclusion measures that can be implemented rapidly in response to emergency situations (Table 8). Issues to be reviewed prior to demolition include a. proximity of the air intake system to the work site; b. adequacy of window seals and door seals; c. proximity of areas frequented by immunocompromised patients; and d. location of the underground utilities (D. Erickson, ASHE, 2000). 120,250,273,276,277,280,305 Minimizing the entry of outside dust into the HVAC system is crucial in reducing the risk for airborne contaminants.
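Expressed as fold-increases, the excavation figures reported above illustrate why intake protection matters; a short calculation using those reported values:

```python
# Reported outdoor-air fungal concentrations (CFU/m3) at grade level and HVAC
# intakes, with and without site excavation underway (values from the study cited above).
DURING_EXCAVATION = {"all fungi": 20_000, "Aspergillus fumigatus": 500}
BASELINE = {"all fungi": 19, "Aspergillus fumigatus": 4}

for organism in DURING_EXCAVATION:
    fold_increase = DURING_EXCAVATION[organism] / BASELINE[organism]
    print(f"{organism}: ~{fold_increase:,.0f}-fold increase during excavation")
# all fungi: ~1,053-fold increase; Aspergillus fumigatus: ~125-fold increase
```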
Facility engineers should be consulted about the potential impact of shutting down the system or increasing the filtration. Selected air handlers, especially those located close to excavation sites, may have to be shut off temporarily to keep from overloading the system with dust and debris. Care is needed to avoid significant facility-wide reductions in pressure differentials that may cause the building to become negatively pressured relative to the outside. To prevent excessive particulate overload and subsequent reductions in effectiveness of intake air systems that cannot be shut off temporarily, air filters must be inspected frequently for proper installation and function. Excessive dust penetration can be avoided if recirculated air is maximally utilized while outdoor air intakes are shut down. Scheduling demolition and excavation during the winter, when Aspergillus spp. spores may be present in lower numbers, can help, although seasonal variations in spore density differ around the world. 92,287,303 Dust control can be managed by misting the dirt and debris during heavy dust-generating activities. To decrease the amount of aerosols from excavation and demolition projects, nearby windows, especially in areas housing immunocompromised patients, can be sealed and window and door frames caulked or weatherstripped to prevent dust intrusion. 50,301,306 Monitoring for adherence to these control measures throughout demolition or excavation is crucial. Diverting pedestrian traffic away from the construction sites decreases the amount of dust tracked back into the health-care facility and minimizes exposure of high-risk patients to environmental pathogens. Additionally, closing entrances near construction or demolition sites might be beneficial; if this is not practical, creating an air lock (i.e., pressurizing the entry way) is another option. # f. Internal Demolition, Construction, Renovations, and Repairs The focus of a properly implemented infection-control program during interior construction and repairs is containment of dust and moisture. This objective is achieved by a. educating construction workers about the importance of control measures; b. preparing the site; c. notifying and issuing advisories for staff, patients, and visitors; d. moving staff and patients and relocating patients as needed; e. issuing standards of practice and precautions during activities and maintenance; f. monitoring for adherence to control measures during construction and providing prompt feedback about lapses in control; g. monitoring HVAC performance; h. implementing daily clean-up, terminal cleaning and removal of debris upon completion; and i. ensuring the integrity of the water system during and after construction. These activities should be coordinated with engineering staff and infection-control professionals. Physical barriers capable of containing smoke and dust will confine dispersed fungal spores to the construction zone. 279,284,307,308 The specific type of physical barrier required depends on the project's scope and duration and on local fire codes. Short-term projects that result in minimal dust dispersion (e.g., installation of new cables or wiring above ceiling tiles) require only portable plastic enclosures with negative pressure and HEPA filtration of the exhaust air from the enclosed work area. The placement of a portable industrial-grade HEPA filter device capable of a filtration rate of 300-800 ft³/min
adjacent to the work area will help to remove fungal spores, but its efficacy is dependent on the supplied ACH and size of the area. If the project is extensive but short-term, dust-abatement, fire-resistant plastic curtains (e.g., Visqueen®) may be adequate. These should be completely airtight and sealed from ceiling to floor with overlapping curtains; 276,277,309 holes, tears, or other perforations should be repaired promptly with tape. A portable, industrial-grade HEPA filter unit on continuous operation is needed within the contained area, with the filtered air exhausted to the outside of the work zone. Patients should not remain in the room when dust-generating activities are performed. Tools to assist the decision-making process regarding selection of barriers based on an ICRA approach are available. 281 More elaborate barriers are indicated for long-term projects that generate moderate to large amounts of dust. These barrier structures typically consist of rigid, noncombustible walls constructed from sheet rock, drywall, plywood, or plaster board and covered with sheet plastic (e.g., Visqueen®). Barrier requirements to prevent the intrusion of dust into patient-care areas include a. installing a plastic dust abatement curtain before construction of the rigid barrier; b. sealing and taping all joint edges including the top and bottom; c. extending the barrier from floor to floor, which takes into account the space [approximately 2-8 ft.] above the finished, lay-down ceiling; and d. fitting or sealing any temporary doors connecting the construction zone to the adjacent area. (See Box 7 for a list of the various construction and repair activities that require the use of some type of barrier.) Dust and moisture abatement and control rely primarily on the impermeable barrier containment approach; as construction continues, numerous opportunities can lead to dispersion of dust to other areas of the health-care facility. Infection-control measures that augment the use of barrier containment should be undertaken (Table 9). Dust-control measures for clinical laboratories are an essential part of the infection-control strategy during hospital construction or renovation. Use of plastic or solid barriers may be needed if the ICRA determines that air flow from construction areas may introduce airborne contaminants into the laboratory space. In one facility, pseudofungemia clusters attributed to Aspergillus spp. and Penicillium spp. were linked to improper air flow patterns and construction projects adjacent to the laboratory; intrusion of dust and spores into a biological safety cabinet from construction activity immediately next to the cabinet resulted in a cluster of cultures contaminated with Aspergillus niger. 310,311 Reportedly, no barrier containment was used and the HEPA filtration system was overloaded with dust. In addition, an outbreak of pseudobacteremia caused by Bacillus spp. occurred in another hospital during construction above a storage area for blood culture bottles. 207 Airborne spread of Bacillus spp. spores resulted in contamination of the bottles' plastic lids, which were not disinfected or handled with proper aseptic technique prior to collection of blood samples. Infection-control measures during the project include the following steps: 1. Monitor the construction area daily for compliance with the infection-control plan. 2. Protective outer clothing for construction workers should be removed before entering clean areas. 3.
Use mats with tacky surfaces within the construction zone at the entry; cover sufficient area so that both feet make contact with the mat while walking through the entry. 4. Construct an anteroom as needed where coveralls can be donned and removed. 5. Clean the construction zone and all areas used by construction workers with a wet mop. 6. If the area is carpeted, vacuum daily with a HEPA-filter-equipped vacuum. 7. Provide temporary essential services (e.g., toilets) and worker conveniences (e.g., vending machines) in the construction zone as appropriate. 8. Damp-wipe tools if removed from the construction zone or left in the area. 9. Ensure that construction barriers remain well sealed; use particle sampling as needed. 10. Ensure that the clinical laboratory is free from dust contamination. Upon completion of the project: 1. Flush the main water system to clear dust-contaminated lines. # Environmental Infection-Control Measures for Special Health-Care Settings Areas in health-care facilities that require special ventilation include a. operating rooms; b. PE rooms used by high-risk, immunocompromised patients; and c. AII rooms for isolation of patients with airborne infections (e.g., those caused by M. tuberculosis, VZV, or measles virus). The number of rooms required for PE and AII is determined by a risk assessment of the health-care facility. 6 Continuous, visual monitoring of air flow direction is required for new or renovated pressurized rooms. 120,256 # a. Protective Environments (PE) Although the exact configuration and specifications of PEs might differ among hospitals, these care areas for high-risk, immunocompromised patients are designed to minimize fungal spore counts in air by maintaining a. filtration of incoming air by using central or point-of-use HEPA filters; b. directed room air flow [i.e., from supply on one side of the room, across the patient, and out through the exhaust on the opposite side of the room]; c. positive room air pressure of 2.5 Pa [0.01" water gauge] relative to the corridor; d. well-sealed rooms; and e. ≥12 ACH. 44,120,251,254,[316][317][318][319] Air flow rates must be adjusted accordingly to ensure sufficient ACH, and these rates vary depending on certain factors (e.g., room air leakage area). For example, to provide ≥12 ACH in a typical patient room with 0.5 sq. ft. of air leakage, the air flow rate will be minimally 125 cubic feet/min (cfm). 320,321 Higher air flow rates may be needed. A general ventilation diagram for a positive-pressure room is given in Figure 2. Directed room air flow in PE rooms is not laminar; parallel air streams are not generated. Studies attempting to demonstrate patient benefit from laminar air flow in a PE setting are equivocal. 316,318,319,322-327 Air flow direction at the entrances to these areas should be maintained and verified, preferably on a daily basis, using either a visual means of indication (e.g., smoke tubes and flutter strips) or manometers. Permanent installation of a visual monitoring device is indicated for new PE construction and renovation. 120 Facility service structures can interfere with the proper unidirectional air flow from the patients' rooms to the adjacent corridor. In one outbreak investigation, Aspergillus spp.
infections in a critical care unit may have been associated with a pneumatic specimen transport system, a textile disposal duct system, and central vacuum lines for housekeeping, all of which disrupted proper air flow from the patients' rooms to the outside and allowed entry of fungal spores into the unit (M. McNeil, CDC, 2000). The use of surface fungicide treatments is becoming more common, especially for building materials. 329 Copper-based compounds have demonstrated anti-fungal activity and are often applied to wood or paint. Copper-8-quinolinolate was used on environmental surfaces contaminated with Aspergillus spp. to control one reported outbreak of aspergillosis. 310 The compound was also incorporated into the fireproofing material of a newly constructed hospital to help decrease the environmental spore burden. 316 # b. Airborne Infection Isolation (AII) Acute-care inpatient facilities need at least one room equipped to house patients with airborne infectious disease. Every health-care facility, including ambulatory and long-term care facilities, should undertake an ICRA to identify the need for AII areas. Once the need is established, the appropriate ventilation equipment can be identified. Air handling systems for this purpose need not be restricted to central systems. Guidelines for the prevention of health-care acquired TB have been published in response to multiple reports of health-care associated transmission of multidrug-resistant strains. 4,330 In reports documenting health-care acquired TB, investigators have noted a failure to comply fully with prevention measures in established guidelines. 331-345 These gaps highlight the importance of prompt recognition of the disease, isolation of patients, proper treatment, and engineering controls. AII rooms are also appropriate for the care and management of smallpox patients. 6 Environmental infection control with respect to smallpox is currently being revisited (see Appendix E). Salient features of engineering controls for AII areas include a. use of negative pressure rooms with close monitoring of air flow direction using manometers or temporary or installed visual indicators [e.g., smoke tubes and flutter strips] placed in the room with the door closed; b. minimum 6 ACH for existing facilities, ≥12 ACH for areas under renovation or for new construction; and c. air from negative pressure rooms and treatment rooms exhausted directly to the outside if possible. 4,120,248 As with PE, airflow rates need to be determined to ensure the proper numbers of ACH. 320,321 AII rooms can be constructed either with (Figure 3) or without (Figure 4) an anteroom. When the recirculation of air from AII rooms is unavoidable, HEPA filters should be installed in the exhaust duct leading from the room to the general ventilation system. In addition to UVGI fixtures in the room, UVGI can be placed in the ducts as an adjunct measure to HEPA filtration, but it cannot replace the HEPA filter. 4 One of the components of airborne infection isolation is respiratory protection for health-care workers and visitors when entering AII rooms. 4,6,347 Recommendations for the type of respiratory protection depend on the patient's airborne infection (indicating the need for AII) and the risk of infection to persons entering the AII room. A more in-depth discussion of respiratory protection in this instance is presented in the current isolation guideline; 6 a revision of this guideline is in development.
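The PE and AII ventilation criteria above reduce to a handful of numeric checks that can be compared against commissioning or daily monitoring readings. The following is a minimal sketch, assuming the thresholds quoted in the text (PE: positive differential of at least 2.5 Pa and ≥12 ACH; AII: negative pressure, with ≥12 ACH for new construction or 6 ACH for existing facilities); applicable AIA-edition and local-code requirements govern:

```python
def room_meets_spec(room_type: str, pressure_diff_pa: float, ach: float,
                    new_construction: bool = True) -> list:
    """Return a list of deviations from the ventilation criteria summarized above."""
    problems = []
    if room_type == "PE":
        if pressure_diff_pa < 2.5:
            problems.append('positive differential below 2.5 Pa (0.01" water gauge)')
        if ach < 12:
            problems.append("fewer than 12 ACH")
    elif room_type == "AII":
        if pressure_diff_pa >= 0:
            problems.append("room not negative to corridor")
        min_ach = 12 if new_construction else 6
        if ach < min_ach:
            problems.append(f"fewer than {min_ach} ACH")
    else:
        raise ValueError("room_type must be 'PE' or 'AII'")
    return problems

print(room_meets_spec("PE", pressure_diff_pa=3.0, ach=14))                          # []
print(room_meets_spec("AII", pressure_diff_pa=-1.0, ach=8, new_construction=True))  # ACH shortfall flagged
```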
Cough-inducing procedures (e.g., endotracheal intubation and suctioning of known or suspected TB patients, diagnostic sputum induction, aerosol treatments, and bronchoscopy) require similar precautions. [348][349][350] Additional engineering measures are necessary for the management of patients requiring PE (i.e., allogeneic HSCT patients) who concurrently have airborne infection. For this type of patient treatment, an anteroom (Figure 4) is required in new construction and renovation as per AIA guidelines. 120 The pressure differential of an anteroom can be positive or negative relative to the patient in the room. 120 An anteroom can act as an airlock (Figure 4). If the anteroom is positive relative to the air space in the patient's room, staff members do not have to mask prior to entry into the anteroom if air is directly exhausted to the outside and a minimum of 10 ACH is maintained (Figure 4, top diagram). 120 When an anteroom is negative relative to both the AII room and the corridor, health-care workers must mask prior to entering the anteroom (Figure 4, bottom diagram). If an AII room with an anteroom is not available, use of a portable, industrial-grade HEPA filter unit may help to increase the number of ACH while facilitating the removal of fungal spores; however, a fresh air source must be present to achieve the proper air exchange rate. Incoming ambient air should receive HEPA filtration. # c. Operating Rooms Operating room air may contain microorganisms, dust, aerosols, lint, skin squamous epithelial cells, and respiratory droplets. The microbial level in operating room air is directly proportional to the number of people moving in the room. 351 One study documented lower infection rates with coagulase-negative staphylococci among patients when operating room traffic during the surgical procedure was limited. 352 Therefore, efforts should be made to minimize personnel traffic during operations. Outbreaks of SSIs caused by group A beta-hemolytic streptococci have been traced to airborne transmission from colonized operating-room personnel to patients. [150][151][152][153][154] Several potential health-care associated pathogens (e.g., Staphylococcus aureus and Staphylococcus epidermidis) and drug-resistant organisms have also been recovered from areas adjacent to the surgical field, 353 but the extent to which the presence of bacteria near the surgical field influences the development of postoperative SSIs is not clear. 354 Proper ventilation, humidity (<68%), and temperature control in the operating room is important not only for the comfort of surgical personnel and patients but also for preventing environmental conditions that encourage the growth and transmission of microorganisms. 355 Operating rooms should be maintained at positive pressure with respect to corridors and adjacent areas. 356 Operating rooms typically do not have a variable air handling system. Variable air handling systems are permitted for use in operating rooms only if they continue to provide a positive pressure with respect to the corridors and adjacent areas and the proper ACHs are maintained when the room is occupied. Conventional operating-room ventilation systems produce a minimum of about 15 ACH of filtered air for thermal control, three (20%) of which must be fresh air. 120,357,358 Air should be introduced at the ceiling and exhausted near the floor. 357,359 Laminar airflow and UVGI have been suggested as adjunct measures to reduce SSI risk for certain operations.
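Before examining those adjunct measures further, note that the conventional operating-room parameters above also reduce to a few numeric checks; the following is a minimal sketch using the thresholds quoted in the text (temperature limits omitted for brevity, and local requirements govern):

```python
def operating_room_deviations(total_ach: float, fresh_air_ach: float,
                              pressure_diff_pa: float, relative_humidity_pct: float) -> list:
    """Flag departures from the conventional OR ventilation parameters cited above:
    >=15 total ACH with >=3 ACH (20%) fresh air, positive pressure relative to
    adjacent areas, and relative humidity below 68%."""
    problems = []
    if total_ach < 15:
        problems.append("total ACH below 15")
    if fresh_air_ach < 3:
        problems.append("fresh-air ACH below 3")
    if pressure_diff_pa <= 0:
        problems.append("room not positive to adjacent areas")
    if relative_humidity_pct >= 68:
        problems.append("relative humidity 68% or higher")
    return problems

print(operating_room_deviations(20, 4, 2.5, 50))   # []
print(operating_room_deviations(15, 2, -0.5, 70))  # three deviations flagged
```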
Laminar airflow is designed to move particle-free air over the aseptic operating field at a uniform velocity (0.3-0.5 m/sec), sweeping away particles in its path. This air flow can be directed vertically or horizontally, and recirculated air is passed through a HEPA filter. [360][361][362][363] Neither laminar airflow nor UV light, however, has been conclusively shown to decrease overall SSI risk. 356,[364][365][366][367][368][369][370] Elective surgery on infectious TB patients should be postponed until such patients have received adequate drug therapy. The use of general anesthesia in TB patients poses infection-control challenges because intubation can induce coughing, and the anesthesia breathing circuit apparatus potentially can become contaminated. 371 Although operating room suites at 15 ACH exceed the air exchanges required for AII, the positive pressure these rooms maintain relative to adjacent areas could promote transmission of TB to operating-room personnel. If feasible, intubation and extubation of the TB surgical patient should be performed in AII. AIA currently does not recommend changing pressure from positive to negative or setting it to neutral; most facilities lack the capability to do so. 120 When emergency surgery is indicated for a suspected/diagnosed infectious TB patient, taking specific infection-control measures is prudent (Box 8). The placement of portable HEPA filter units in the operating room must be carefully evaluated for potential disruptions in normal air flow. The portable unit should be turned off while the surgical procedure is underway and turned on following extubation. Portable HEPA filter units previously placed in construction areas may be used in subsequent patient care, provided that all internal and external surfaces are cleaned, the filter's performance is verified with appropriate particle testing, and the filter is changed if needed. # Other Aerosol Hazards in Health-Care Facilities In addition to infectious bioaerosols, several crucial non-infectious, indoor air-quality issues must be addressed by health-care facilities. The presence of sensitizing and allergenic agents and irritants in the workplace (e.g., ethylene oxide, glutaraldehyde, formaldehyde, hexachlorophene, and latex allergens 375 ) is increasing. Asthma and dermatologic and systemic reactions often result from exposure to these chemicals. Anesthetic gases and aerosolized medications (e.g., ribavirin, pentamidine, and aminoglycosides) represent some of the emerging potentially hazardous exposures to health-care workers. Containment of the aerosol at the source is the first level of engineering control, but personal protective equipment (e.g., masks, respirators, and glove liners) that distances the worker from the hazard also may be needed. Laser plumes and surgical smoke represent another potential risk for health-care workers. [376][377][378] Lasers transfer electromagnetic energy into tissues, resulting in the release of a heated plume that includes particles, gases, tissue debris, and offensive smells. One concern is that aerosolized infectious material in the laser plume might reach the nasal mucosa of surgeons and adjacent personnel. Although some viruses (i.e., varicella-zoster virus, pseudorabies virus, and herpes simplex virus) do not aerosolize efficiently, 379,380 other viruses and bacteria (e.g., human papilloma virus [HPV], HIV, coagulase-negative Staphylococcus, Corynebacterium spp., and Neisseria spp.) have been detected in laser plumes.
[381][382][383][384][385][386][387] The presence of an infectious agent in a laser plume may not, however, be sufficient to cause disease from airborne exposure, especially if the normal mode of transmission for the agent is not airborne. No evidence indicates that HIV or hepatitis B virus (HBV) has been transmitted via aerosolization and inhalation. 388 Although continuing studies are needed to fully evaluate the risk of laser plumes to surgical personnel, the prevention measures in these other guidelines should be followed: a. NIOSH recommendations, 378 b. the Recommended Practices for Laser Safety in Practice Settings developed by the Association of periOperative Registered Nurses [AORN], 389 c. the assessments of ECRI, 390-392 and d. the ANSI standard. 393 These guidelines recommend the use of a. respirators (N95 or N100) or full face shields and masks, 260 b. central wall-suction units with in-line filters to collect particulate matter from minimal plumes, and c. dedicated mechanical smoke exhaust systems with a high-efficiency filter to remove large amounts of laser plume. Although transmission of TB has occurred as a result of abscess management practices that lacked airborne particulate control measures and respiratory protection, use of a smoke evacuator or needle aspirator and a high degree of clinical awareness can help protect health-care workers when excising and draining an extrapulmonary TB abscess. 137 # D. Water 1. Modes of Transmission of Waterborne Diseases Moist environments and aqueous solutions in health-care settings have the potential to serve as reservoirs for waterborne microorganisms. Under favorable environmental circumstances (e.g., warm temperature and the presence of a source of nutrition), many bacterial and some protozoal microorganisms can either proliferate in active growth or remain for long periods in highly stable, environmentally resistant (yet infectious) forms. Modes of transmission for waterborne infections include direct contact [e.g., that required for hydrotherapy]; ingestion of water [e.g., through consuming contaminated ice]; indirect-contact transmission [e.g., from an improperly reprocessed medical device]; 6 inhalation of aerosols dispersed from water sources; 3 and aspiration of contaminated water. The first three modes of transmission are commonly associated with infections caused by gram-negative bacteria and nontuberculous mycobacteria (NTM). Aerosols generated from water sources contaminated with Legionella spp. often serve as the vehicle for introducing legionellae to the respiratory tract. 394 # 2. Waterborne Infectious Diseases in Health-Care Facilities a. Legionellosis Legionellosis is a collective term describing infection produced by Legionella spp., whereas Legionnaires disease is a multi-system illness with pneumonia. 395 The clinical and epidemiologic aspects of these diseases (Table 11) are discussed extensively in another guideline. 3 Although Legionnaires disease is a respiratory infection, infection-control measures intended to prevent health-care associated cases center on the quality of water, the principal reservoir for Legionella spp. # Table 11. Clinical and epidemiologic characteristics of legionellosis/Legionnaires disease # Modes of transmission • Aspiration of water; direct inhalation of water aerosols.
3,[394][395][396][397][398]400 Causative agent • Legionella pneumophila (90% of infections); L. micdadei, L. bozemanii, L. dumoffii, L. longbeachae (14 additional species can cause infection in humans). [395][396][397][398][399] Source of exposure • Exposure to environmental sources of Legionella spp. (i.e., water or water aerosols). 31,33,[401][402][403][404][405][406][407][408][409][410][411][412][413][414] Clinical syndromes and diseases Two distinct illnesses: [397][398][399][415][416][417][418][419][420][421][422] • Pontiac fever [a milder, influenza-like illness]; and • progressive pneumonia that may be accompanied by cardiac, renal, and gastrointestinal involvement. 3 Patient populations at greatest risk • Immunosuppressed patients (e.g., transplant patients, cancer patients, and patients receiving corticosteroid therapy); • Immunocompromised patients (e.g., surgical patients, patients with underlying chronic lung disease, and dialysis patients); • Elderly persons; and • Patients who smoke. [395][396][397][423][424][425][426][427][428][429][430][431][432][433] Occurrence • Proportion of community-acquired pneumonia caused by Legionella spp. ranges from 1%-5%; estimated annual incidence among the general population is 8,000-18,000 cases in the United States; the incidence of health-care associated pneumonia (0%-14%) may be underestimated if appropriate laboratory diagnostic methods are unavailable. 396,397,[434][435][436][437][438][439][440][441][442][443][444] Mortality rate • Mortality declined markedly during 1980-1998, from 34% to 12% for all cases; the mortality rate is higher among persons with health-care associated pneumonia compared with the rate among community-acquired pneumonia patients (14% for health-care associated pneumonia versus 10% for community-acquired pneumonia [1998 data]). [395][396][397]445 Legionella spp. are commonly found in various natural and man-made aquatic environments 446,447 and can enter health-care facility water systems in low or undetectable numbers. 448,449 Cooling towers, evaporative condensers, heated potable water distribution systems, and locally-produced distilled water can provide environments for multiplication of legionellae. [450][451][452][453][454] In several hospital outbreaks, patients have been infected through exposure to contaminated aerosols generated by cooling towers, showers, faucets, respiratory therapy equipment, and room-air humidifiers. [401][402][403][404][405][406][407][408][409][410]455 Factors that enhance colonization and amplification of legionellae in man-made water environments include a. temperatures of 77°F-107.6°F [25°C-42°C], [456][457][458][459][460] b. stagnation, 461 c. scale and sediment, 462 and d. presence of certain free-living aquatic amoebae that can support intracellular growth of legionellae. 462,463 The bacteria multiply within single-cell protozoa in the environment and within alveolar macrophages in humans. # b. Other Gram-Negative Bacterial Infections Other gram-negative bacteria present in potable water also can cause health-care associated infections. Clinically important, opportunistic organisms in tap water include Pseudomonas aeruginosa, Pseudomonas spp., Burkholderia cepacia, Ralstonia pickettii, Stenotrophomonas maltophilia, and Sphingomonas spp. (Tables 12 and 13). Immunocompromised patients are at greatest risk of developing infection.
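Of the amplification factors just listed for Legionella spp., water temperature is the most readily trended over time. The following is a minimal sketch flagging readings inside the 25°C-42°C window cited above (the loop temperatures are hypothetical, and stagnation, scale, sediment, and amoebae are not modeled):

```python
def in_amplification_range(water_temp_c: float) -> bool:
    """True if the temperature falls within 25-42 degrees C, the window cited above
    as favoring colonization and amplification of Legionella spp."""
    return 25.0 <= water_temp_c <= 42.0

# Hypothetical hot-water loop return temperatures:
for temp_c in (58.0, 41.0, 23.5):
    status = "within amplification range" if in_amplification_range(temp_c) else "outside range"
    print(f"{temp_c:5.1f} C: {status}")
```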
Medical conditions associated with these bacterial agents range from colonization of the respiratory and urinary tracts to deep, disseminated infections that can result in pneumonia and bacteremia. Colonization by any of these organisms often precedes the development of infection. The use of tap water in medical care (e.g., in direct patient care, as a diluent for solutions, as a water source for medical instruments and equipment, and during the final stages of instrument disinfection) therefore presents a potential risk for exposure. Colonized patients also can serve as a source of contamination, particularly for moist environments of medical equipment (e.g., ventilators). In addition to Legionella spp., Pseudomonas aeruginosa and Pseudomonas spp. are among the most clinically relevant, gram-negative, health-care associated pathogens identified from water. These and other gram-negative, non-fermentative bacteria have minimal nutritional requirements (i.e., these organisms can grow in distilled water) and can tolerate a variety of physical conditions. These attributes are critical to the success of these organisms as health-care associated pathogens. Measures to prevent the spread of these organisms and other waterborne, gram-negative bacteria include hand hygiene, glove use, barrier precautions, and eliminating potentially contaminated environmental reservoirs. 464,465

# Table 12. Pseudomonas aeruginosa infections

Modes of transmission
• Direct contact with water, aerosols; aspiration of water and inhalation of water aerosols; and indirect transfer from moist environmental surfaces via hands of health-care workers. 28,502-506

Clinical syndromes and diseases
• Septicemia, pneumonia (particularly ventilator-associated), chronic respiratory infections among cystic fibrosis patients, urinary tract infections, skin and soft-tissue infections (e.g., tissue necrosis and hemorrhage), burn-wound infections, folliculitis, endocarditis, central nervous system infections (e.g., meningitis and abscess), eye infections, and bone and joint infections.

Environmental sources of pseudomonads in health-care settings
• Potable (tap) water, distilled water, antiseptic solutions contaminated with tap water, sinks, hydrotherapy pools, whirlpools and whirlpool spas, water baths, lithotripsy therapy tanks, dialysis water, eyewash stations, flower vases, and endoscopes with residual moisture in the channels. 28,29,466,468,507-520

Environmental sources of pseudomonads in the community
• Fomites (e.g., drug injection equipment stored in contaminated water). 494,495

Patient populations at greatest risk
• Intensive care unit (ICU) patients (including neonatal ICU), transplant patients (organ and hematopoietic stem cell), neutropenic patients, burn therapy and hydrotherapy patients, patients with malignancies, cystic fibrosis patients, patients with underlying medical conditions, and dialysis patients. 28,466,467,472,477,493,506-508,511,512,521-526
# Table 13. Other gram-negative bacteria: vehicles associated with infections or colonizations

# Burkholderia cepacia
• Distilled water 527
• Contaminated solutions and disinfectants 528,529
• Dialysis machines 527
• Nebulizers 530-532
• Water baths 533
• Intrinsically contaminated mouthwash 534 (This report describes contamination occurring during manufacture prior to use by the health-care facility staff. All other entries reflect extrinsic sources of contamination.)
• Ventilator temperature probes 535

# Stenotrophomonas maltophilia, Sphingomonas spp.
• Distilled water 536,537
• Contaminated solutions and disinfectants 529
• Dialysis machines 527
• Nebulizers 530-532
• Water 538
• Ventilator temperature probes 539

# Ralstonia pickettii
• Fentanyl solutions 540
• Chlorhexidine 541
• Distilled water 541
• Contaminated respiratory therapy solution 541,542

# Serratia marcescens
• Potable water 543
• Contaminated antiseptics (i.e., benzalkonium chloride and chlorhexidine) 544-546
• Contaminated disinfectants (i.e., quaternary ammonium compounds and glutaraldehyde) 547,548

# Acinetobacter spp.
• Medical equipment that collects moisture (e.g., mechanical ventilators, cool mist humidifiers, vaporizers, and mist tents) 549-556
• Room humidifiers 553,555
• Environmental surfaces 557-564

# Enterobacter spp.
• Humidifier water 565
• Intravenous fluids 566-578
• Unsterilized cotton swabs 573
• Ventilators 565,569
• Rubber piping on a suctioning machine 565,569
• Blood gas analyzers 570

Two additional gram-negative bacterial pathogens that can proliferate in moist environments are Acinetobacter spp. and Enterobacter spp. 571,572 Members of both genera are responsible for health-care associated episodes of colonization, bloodstream infections, pneumonia, and urinary tract infections among medically compromised patients, especially those in ICUs and burn therapy units. 566,572-583 Infections caused by Acinetobacter spp. represent a significant clinical problem. Average infection rates are higher from July through October compared with rates from November through June. 584 Mortality rates associated with Acinetobacter bacteremia are 17%-52%, and rates as high as 71% have been reported for pneumonia caused by infection with either Acinetobacter spp. or Pseudomonas spp. Multidrug resistance, especially resistance to third-generation cephalosporins among Enterobacter spp., contributes to increased morbidity and mortality. 569,572 Patients and health-care workers contribute significantly to the environmental contamination of surfaces and equipment with Acinetobacter spp. and Enterobacter spp., especially in intensive care areas, because of the nature of the medical equipment (e.g., ventilators) and the moisture associated with this equipment. 549,571,572,585 Hand carriage and hand transfer are commonly associated with health-care associated transmission of these organisms and of S. marcescens. 586 Enterobacter spp. are primarily spread in this manner among patients by the hands of health-care workers. 567,587 Acinetobacter spp. have been isolated from the hands of 4%-33% of health-care workers in some studies, 585-590 and transfer of an epidemic strain of Acinetobacter from patients' skin to health-care workers' hands has been demonstrated experimentally. 591
Acinetobacter infections and outbreaks have also been attributed to medical equipment and materials (e.g., ventilators, cool mist humidifiers, vaporizers, and mist tents) that may have contact with water of uncertain quality (e.g., rinsing a ventilator circuit in tap water). 549-556 Strict adherence to hand hygiene helps prevent the spread of both Acinetobacter spp. and Enterobacter spp. 577,592 Acinetobacter spp. have also been detected on dry environmental surfaces (e.g., bed rails, counters, sinks, bed cupboards, bedding, floors, telephones, and medical charts) in the vicinity of colonized or infected patients; such contamination is especially problematic for surfaces that are frequently touched. 557-564 In two studies, the survival periods of Acinetobacter baumannii and Acinetobacter calcoaceticus on dry surfaces approximated those of S. aureus (e.g., 26-27 days). 593,594 Because Acinetobacter spp. may come from numerous sources at any given time, laboratory investigation of health-care associated Acinetobacter infections should involve techniques to determine biotype, antibiotype, plasmid profile, and genomic fingerprinting (i.e., macrorestriction analysis) to accurately identify sources and modes of transmission of the organism(s). 595

# c. Infections and Pseudo-Infections Due to Nontuberculous Mycobacteria

NTM are acid-fast bacilli (AFB) commonly found in potable water. NTM include both saprophytic and opportunistic organisms. Many NTM are of low pathogenicity, and some measure of host impairment is necessary to enhance clinical disease. 596 The four most common forms of human disease associated with NTM are pulmonary disease in adults; cervical lymph node disease in children; skin, soft tissue, and bone infections; and disseminated disease in immunocompromised patients. 596,597 Person-to-person acquisition of NTM infection, especially among immunocompetent persons, does not appear to occur, and close contacts of patients are not readily infected, despite the high numbers of organisms harbored by such patients. 596,598-600 NTM are spread via all modes of transmission associated with water. In addition to health-care associated outbreaks of clinical disease, NTM can colonize patients in health-care facilities through consumption of contaminated water or ice or through inhalation of aerosols. 601-605 Colonization following NTM exposure, particularly of the respiratory tract, can occur when a patient's local defense mechanisms are impaired, without progression to overt clinical disease. 606

# Table 14a. Infections or colonizations
Pathogen and vehicles associated with infections or colonizations:

# Mycobacterium abscessus
• Inadequately sterilized medical instruments 613

# Mycobacterium avium complex (MAC)
• Potable water 614-616

# Mycobacterium chelonae
• Dialysis, reprocessed dialyzers 31,32
• Inadequately sterilized medical instruments, jet injectors 617,618
• Contaminated solutions 619,620
• Hydrotherapy tanks 621

# Mycobacterium fortuitum
• Aerosols from showers or other water sources 605,606
• Ice 602
• Inadequately sterilized medical instruments 603
• Hydrotherapy tanks 622

# Mycobacterium marinum
• Hydrotherapy tanks 623

# Mycobacterium ulcerans
• Potable water 624
• Laboratory solution (intrinsically contaminated) 625
• Potable water ingestion prior to sputum specimen collection 626

# Mycobacterium kansasii
• Potable water 627

# Mycobacterium terrae
• Potable water 608

# Mycobacterium xenopi
• Potable water 609,612,627

NTM can be isolated from both natural and man-made environments. Numerous studies have identified NTM in potable water supplies. 632 Some NTM species (e.g., Mycobacterium xenopi) can survive in water at 113°F (45°C) and can be isolated from hot water taps, which can pose a problem for hospitals that lower the temperature of their hot water systems. 627 Other NTM (e.g., Mycobacterium kansasii, M. gordonae, M. fortuitum, and M. chelonae) cannot tolerate high temperatures and are associated more often with cold water lines and taps. 629 NTM have a high resistance to chlorine; they can tolerate free chlorine concentrations of 0.05-0.2 mg/L (0.05-0.2 ppm) found at the tap. 598,633,634 They are 20-100 times more resistant to chlorine than coliforms; slow-growing strains of NTM (e.g., Mycobacterium avium and M. kansasii) appear to be more resistant to chlorine inactivation than fast-growing NTM. 635 Slow-growing NTM species have also demonstrated some resistance to formaldehyde and glutaraldehyde, which has posed problems for reuse of hemodialyzers. 31 The ability of NTM to form biofilms at fluid-surface interfaces (e.g., interior surfaces of water pipes) contributes to the organisms' resistance to chemical inactivation and provides a microenvironment for growth and proliferation. 636,637

# d. Cryptosporidiosis

Cryptosporidium parvum is a protozoan parasite that causes self-limiting gastroenteritis in normal hosts but can cause severe, life-threatening disease in immunocompromised patients. First recognized as a human pathogen in 1976, C. parvum can be present in natural and finished waters after fecal contamination from either human or animal sources. 638-641 The health risks associated with drinking potable water contaminated with minimal numbers of C. parvum oocysts are unknown. 642 It remains to be determined whether immunosuppressed persons are more susceptible to lower doses of oocysts than are immunocompetent persons. One study demonstrated that a median 50% infectious dose (ID50) of 132 oocysts of calf origin was sufficient to cause infection among healthy volunteers. 643 In a second study, the same researchers found that oocysts obtained from infected foals (newborn horses) were infectious for human volunteers at a median ID50 of 10 oocysts, indicating that different strains or species of Cryptosporidium may vary in their infectivity for humans. 644 In a small study population of 17 healthy adults with pre-existing antibody to C. parvum, the ID50 was determined to be 1,880 oocysts, more than 20-fold higher than in seronegative persons. 645
These data suggest that pre-existing immunity derived from previous exposures to Cryptosporidium offers some protection from infection and illness that ordinarily would result from exposure to low numbers of oocysts. 645,646 Oocysts, particularly those with thick walls, are environmentally resistant, but their survival under natural water conditions is poorly understood. Under laboratory conditions, some oocysts remain viable and infectious in cold water (41°F [5°C]) for months. 641 The prevalence of Cryptosporidium in the U.S. drinking water supply is notable. Two surveys of approximately 300 surface water supplies revealed that 55%-77% of the water samples contained Cryptosporidium oocysts. 647,648 Because the oocysts are highly resistant to common disinfectants (e.g., chlorine) used to treat drinking water, filtration of the water is important in reducing the risk of waterborne transmission. Coagulation-flocculation and sedimentation, when used with filtration, can collectively achieve approximately a 2.5 log10 reduction in the number of oocysts. 649 However, outbreaks have been associated with both filtered and unfiltered drinking water systems (e.g., the 1993 outbreak in Milwaukee, Wisconsin, that affected 400,000 people). 641,650-652 The presence of oocysts in the water is not an absolute indicator that infection will occur when the water is consumed, nor does the absence of detectable oocysts guarantee that infection will not occur. Health-care associated outbreaks of cryptosporidiosis primarily have been described among groups of elderly patients and immunocompromised persons. 653

# Water Systems in Health-Care Facilities

# a. Basic Components and Point-of-Use Fixtures

Treated municipal water enters a health-care facility via the water mains and is distributed throughout the building(s) by a network of pipes constructed of galvanized iron, copper, and polyvinylchloride (PVC). The pipe runs should be as short as is practical. Where recirculation is employed, the pipe runs should be insulated and long dead legs avoided in efforts to minimize the potential for water stagnation, which favors the proliferation of Legionella spp. and NTM. In high-risk applications (e.g., PE areas for severely immunosuppressed patients), insulated recirculation loops should be incorporated as a design feature to minimize heat loss. Each water service main, branch main, riser, and branch (to a group of fixtures) has a valve and a means to reach the valves via an access panel. 120 Each fixture has a stop valve. Valves permit the isolation of a portion of the water system within a facility during repairs or maintenance. Vacuum breakers and other similar devices in the lines prevent water from back-flowing into the system. All systems that supply water should be evaluated to determine risk for potential back siphonage and cross connections. Health-care facilities generate hot water from municipal water using a boiler system. Hot water heaters and storage vessels for such systems should have a drainage facility at the lowest point, and the heating element should be located as close as possible to the bottom of the vessel to facilitate mixing and to prevent water temperature stratification. Those hot or cold water systems that incorporate an elevated holding tank should be inspected and cleaned annually. Lids should fit securely to exclude foreign materials.
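As a numerical aside to the cryptosporidiosis discussion above, the 2.5-log10 reduction cited for coagulation-flocculation, sedimentation, and filtration converts directly into a surviving fraction of oocysts. The following minimal Python sketch shows the arithmetic; the function name and the starting oocyst count are illustrative only, not values from this guideline.

```python
def log_reduction_summary(log10_reduction: float, initial_count: float) -> str:
    """Convert a log10 reduction into percent removed and count remaining."""
    surviving_fraction = 10 ** (-log10_reduction)       # e.g., 10^-2.5 ~ 0.0032
    remaining = initial_count * surviving_fraction
    percent_removed = (1 - surviving_fraction) * 100
    return (f"{log10_reduction}-log10 reduction removes "
            f"{percent_removed:.2f}%; about {remaining:.0f} of "
            f"{initial_count:.0f} oocysts remain")

# Hypothetical starting count, chosen only to make the fraction tangible:
print(log_reduction_summary(2.5, 100_000))  # ~99.68% removed; ~316 remain
```

The point of the calculation is that even a well-performing filtration train leaves a small residual fraction, which is consistent with the observation above that outbreaks have occurred in both filtered and unfiltered systems.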
The most common point-of-use fixtures for water in patient-care areas are sinks, faucets, aerators, showers, and toilets; eye-wash stations are found primarily in laboratories. The potential for these fixtures to serve as a reservoir for pathogenic microorganisms has long been recognized (Table 15). 509,654-656 Wet surfaces and the production of aerosols facilitate the multiplication and dispersion of microbes. The level of risk associated with aerosol production from point-of-use fixtures varies. Aerosols from shower heads and aerators have been linked to a limited number of clusters of gram-negative bacterial colonizations and infections, including Legionnaires disease, especially in areas where immunocompromised patients are present (e.g., surgical ICUs, transplant units, and oncology units). 412,415,656-659 In one report, clinical infection was not evident among immunocompetent persons (e.g., hospital staff) who used hospital showers when Legionella pneumophila was present in the water system. 660 Given the infrequency of reported outbreaks associated with faucet aerators, consensus has not been reached regarding the disinfection or removal of these devices from general use. If additional clusters of infections or colonizations occur in high-risk patient-care areas, it may be prudent to clean and decontaminate the aerators or to remove them. 658,659 ASHRAE recommends cleaning and monthly disinfection of aerators in high-risk patient-care areas as part of Legionella control measures. 661 Although aerosols are produced with toilet flushing, 662,663 no epidemiologic evidence suggests that these aerosols pose a direct infection hazard. Although not considered a standard point-of-use fixture, decorative fountains are being installed in increasing numbers in health-care facilities and other public buildings. Aerosols from a decorative fountain have been associated with transmission of Legionella pneumophila serogroup 1 infection to a small cluster of older adults. 664 This hotel lobby fountain had been irregularly maintained, and water in the fountain may have been heated by submersed lighting, all of which favored the proliferation of Legionella in the system. 664 Because of the potential for generation of infectious aerosols, a prudent prevention measure is to avoid locating these fixtures in or near high-risk patient-care areas and to adhere to written policies for routine fountain maintenance. 120

Table 15 (fragment). Point-of-use water reservoirs: potable water contaminated with Legionella spp.; transmission by aerosol inhalation; risk of illness moderate (occasional well-described outbreaks); prevention measures include following public health guidelines and providing supplemental water treatment (see Tables 12-14).

# b. Water Temperature and Pressure

Hot water temperature is usually measured at the point of use or at the point at which the water line enters equipment requiring hot water for proper operation. 120 Generally, the hot water temperature in hospital patient-care areas is no greater than a value within the range of 105°F-120°F (40.6°C-49°C), depending on the AIA guidance issued in the year in which the facility was built. 120 Hot water temperature in patient-care areas of skilled nursing-care facilities is set within a slightly lower range of 95°F-110°F (35°C-43.3°C), depending on the AIA guidance at the time of facility construction. 120 Many states have adopted a temperature setting in these ranges into their health-care regulations and building codes. ASHRAE, however, has recommended higher settings. 661
Steam jets or booster heaters are usually needed to meet the hot water temperature requirements in certain service areas of the hospital (e.g., the kitchen [120°F (49°C)] or the laundry [160°F (71°C)]). 120 Additionally, water lines may need to be heated to a particular temperature specified by manufacturers of specific hospital equipment. Hot-water distribution systems serving patient-care areas are generally operated under constant recirculation to provide continuous hot water at each hot-water outlet. 120 If a facility has a hemodialysis unit, continuously circulated, cold treated water is provided to that unit. 120 To minimize the growth and persistence of gram-negative waterborne bacteria (e.g., thermophilic NTM and Legionella spp.), 627,703-709 cold water in health-care facilities should be stored and distributed at temperatures below 68°F (20°C); hot water should be stored above 140°F (60°C) and circulated with a minimum return temperature of 124°F (51°C), 661 or the highest temperature specified in state regulations and building codes. If the return temperature setting of 124°F (51°C) is permitted, then installation of preset thermostatic mixing valves near the point of use can help to prevent scalding. Valve maintenance is especially important in preventing valve failure, which can result in scalding. New shower systems in large buildings, hospitals, and nursing homes should be designed to permit mixing of hot and cold water near the shower head. The warm water section of pipe between the control valve and shower head should be self-draining. Where buildings cannot be retrofitted, other approaches to minimize the growth of Legionella spp. include periodically increasing the temperature to at least 150°F (66°C) at the point of use (i.e., faucets), adding supplemental chlorine, and flushing the water. 661,710,711 Systems should be inspected annually to ensure that thermostats are functioning properly. Maintaining adequate pressure also helps to ensure the integrity of the piping system.

# c. Infection-Control Impact of Water System Maintenance and Repair

Corrective measures for water-system failures have not been studied in well-designed experiments; these measures are instead based on empiric engineering and infection-control principles. Health-care facilities can occasionally sustain both intentional cut-offs by the municipal water authority to permit new construction project tie-ins and unintentional disruptions in service when a water main breaks as a result of aging infrastructure or a construction accident. Vacuum breakers or other similar devices can prevent backflow of water in the facility's distribution system during water-disruption emergencies. 11 To be prepared for such an emergency, all health-care facilities need contingency plans that identify the total demand for potable water, the quantity of replacement water (e.g., bottled water) required for a minimum of 24 hours when the water system is down, mechanisms for emergency water distribution, and procedures for correcting drops in water pressure that affect operation of essential devices and equipment that are driven or cooled by a water system (Table 16). 713 Detailed, up-to-date plans for hot and cold water piping systems should be readily available for maintenance and repair purposes in case of system problems.
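The storage and distribution temperature targets summarized above (cold water below 68°F [20°C]; hot water stored above 140°F [60°C] with a minimum return of 124°F [51°C]) lend themselves to a simple monitoring check. The Python sketch below is a minimal illustration using those quoted setpoints; the function name and sample readings are ours, and state regulations or building codes may specify different values.

```python
def check_water_temps(cold_f: float, hot_storage_f: float, hot_return_f: float) -> list[str]:
    """Flag readings outside the setpoints discussed above:
    cold <68 F (20 C), hot storage >140 F (60 C), hot return >=124 F (51 C)."""
    findings = []
    if cold_f >= 68:
        findings.append(f"Cold water {cold_f} F is not below 68 F (20 C).")
    if hot_storage_f <= 140:
        findings.append(f"Hot storage {hot_storage_f} F is not above 140 F (60 C).")
    if hot_return_f < 124:
        findings.append(f"Hot return {hot_return_f} F is below the 124 F (51 C) minimum.")
    return findings or ["All readings within the quoted ranges."]

# Hypothetical daily readings:
for finding in check_water_temps(cold_f=72.0, hot_storage_f=142.0, hot_return_f=118.0):
    print(finding)
```

A check of this kind only screens numbers; scald-prevention measures such as thermostatic mixing valves, described above, remain essential whenever return temperatures are kept at the higher settings.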
Opening potable water systems for repair or construction and subjecting systems to water-pressure changes can result in water discoloration and dramatic increases in the concentrations of Legionella spp. and other gram-negative bacteria. The maintenance of a chlorine residual at all points within the piping system also offers some protection from entry of contamination into the pipes in the event of inadvertent cross-connection between potable and nonpotable water lines. As a minimum preventive measure, ASHRAE recommends a thorough flushing of the system. 661 High-temperature flushing or hyperchlorination may also be appropriate strategies to decrease potentially high concentrations of waterborne organisms. The decision to pursue either of these remediation strategies, however, should be made on a case-by-case basis. If only a portion of the system is involved, high-temperature flushing or chlorination can be used on only that portion of the system. 661 When shock decontamination of hot water systems is necessary (e.g., after disruption caused by construction and after cross-connections), the hot water temperature should be raised to 160°F-170°F (71°C-77°C) and maintained at that level while each outlet around the system is progressively flushed. A minimum flush time of 5 minutes has been recommended; 3 the optimal flush time is not known, however, and longer flush times may be necessary. 714 The number of outlets that can be flushed simultaneously depends on the capacity of the water heater and the flow capability of the system. Appropriate safety procedures to prevent scalding are essential. When possible, flushing should be performed when the fewest building occupants are present (e.g., during nights and weekends). When thermal shock treatment is not possible, shock chlorination may serve as an alternative method. 661 Experience with this method of decontamination is limited, however, and high levels of free chlorine can corrode metals. Chlorine should be added, preferably overnight, to achieve a free chlorine residual of at least 2 mg/L (2 ppm) throughout the system. 661 This may require chlorination of the water heater or tank to levels of 20-50 mg/L (20-50 ppm). The pH of the water should be maintained at 7.0-8.0. 661 After completion of the decontamination, recolonization of the hot water system is likely to occur unless proper temperatures are maintained or a procedure such as continuous supplemental chlorination is continued. Interruptions of the water supply and sewage spills are situations that require immediate recovery and remediation measures to ensure the health and safety of patients and staff. 715 When delivery of potable water through the municipal distribution system has been disrupted, the public water supplier must issue a "boil water" advisory if microbial contamination presents an immediate public health risk to customers. The hospital engineer should oversee the restoration of the water system in the facility and clear it for use when appropriate. Hospitals must maintain a high level of surveillance for waterborne disease among patients and staff after the advisory is lifted. 642 Flooding from either external (e.g., from a hurricane) or internal sources (e.g., a water system break) usually results in property damage and a temporary loss of water and sanitation. 716-718 JCAHO requires all hospitals to have plans that address facility response for recovery from both internal and external disasters. 713,719 The plans are required to discuss a.
general emergency preparedness, b. staffing, c. regional planning among area hospitals, d. emergency supply of potable water, e. infection control and medical services needs, f. climate control, and g. remediation. The basic principles of structural recovery from flooding are similar to those for recovery from sewage contamination (Boxes 9 and 10). Following a major event (e.g., flooding), facilities may elect to conduct microbial sampling of water after the system is restored to verify that water quality has been returned to safe levels (<500 CFU/mL, heterotrophic plate count). This approach may help identify point-of-use fixtures that may harbor contamination as a result of design or engineering features. 720 Medical records should be allowed to dry and then either photocopied or placed in plastic covers before being returned to the files. Moisture meters can be used to assess water-damaged structural materials. If porous structural materials for walls have a moisture content of >20% after 72 hours, the affected material should be removed. 266,278,313 The management of water-damaged structural materials is not strictly limited to major water catastrophes (e.g., flooding and sewage intrusions); the same principles are used to evaluate the damage from leaking roofs, point-of-use fixtures, and equipment. Additional sources of moisture include condensate on walls from boilers and poorly engineered humidification in HVAC systems. An exception to these recommendations is made for hemodialysis units where water is further treated either by portable water treatment or large-scale water treatment systems usually involving reverse osmosis (RO). In the United States, >97% of dialysis facilities use RO treatment for their water. 721 However, changing pre-treatment filters and disinfecting the system to prevent colonization of the RO membrane and microbial contamination downstream of the pre-treatment filter are prudent measures.

# Box 10. Contingency planning for flooding

General emergency preparedness
• Ensure that emergency electrical generators are not located in flood-prone areas of the facility.
• Develop alternative strategies for moving patients, water containers, medical records, equipment, and supplies in the event that the elevators are inoperable.
• Establish in advance a centralized base of operations with batteries, flashlights, and cellular phones.
• Ensure sufficient supplies of sandbags to place at the entrances and the area around boilers, incinerators, and generators.
• Establish alternative strategies for bringing core employees to the facility if high water prevents travel.

Staffing patterns
• Temporarily reassign licensed staff as needed to critical care areas to provide manual ventilation and to perform vital assessments on patients.
• Designate a core group of employees to remain on site to keep all services operational if the facility remains open.
• Train all employees in emergency preparedness procedures.

Regional planning among area facilities for disaster management
• Incorporate community support and involvement (e.g., media alerts, news, and transportation).
• Develop in advance strategies for transferring patients, as needed.
• Develop strategies for sharing supplies and providing essential services among participating facilities (e.g., central sterile department services and laundry services).
• Identify sources for emergency provisions (e.g., blood, emergency vehicles, and bottled water).
Medical services and infection control
• Use alcohol-based hand rubs in general patient-care areas.
• Postpone elective surgeries until full services are restored, or transfer these patients to other facilities.
• Consider using portable dialysis machines. (Portable dialysis machines require less water compared to the larger units situated in dialysis settings.)
• Provide an adequate supply of tetanus and hepatitis A immunizations for patients and staff.

Climate control
• Provide adequate water for cooling towers. (Water for cooling towers may need to be trucked in, especially if the tower uses a potable water source.)

* Material in this box was compiled from references 713, 716-719.

# Strategies for Controlling Waterborne Microbial Contamination

# a. Supplemental Treatment of Water with Heat and/or Chemicals

In addition to using supplemental treatment methods as remediation measures after inadvertent contamination of water systems, health-care facilities sometimes use special measures to control waterborne microorganisms on a sustained basis. This decision is most often associated with outbreaks of legionellosis and subsequent efforts to control legionellae, 722 although some facilities have tried supplemental measures to better control thermophilic NTM. 627 The primary disinfectant for both cold and hot water systems is chlorine. However, chlorine residuals are expected to be low, and possibly nonexistent, in hot water tanks because of extended retention time in the tank and elevated water temperature. Flushing, especially that which removes sludge from the bottom of the tank, probably provides the most effective treatment of water systems. Unlike the situation for disinfecting cooling towers, no equivalent recommendations have been made for potable water systems, although specific intervention strategies have been published. 403,723 The principal approaches to disinfection of potable systems are heat flushing using temperatures of 160°F-170°F (71°C-77°C), hyperchlorination, and physical cleaning of hot-water tanks. 3,403,661 Potable systems are easily recolonized and may require continuous intervention (e.g., raising of hot water temperatures or continuous chlorination). 403,711 Chlorine solutions lose potency over time, thereby rendering the stocking of large quantities of chlorine impractical. Some hospitals with hot water systems identified as the source of Legionella spp. have performed emergency decontamination of their systems by pulse (i.e., one-time) thermal disinfection/superheating or hyperchlorination. 711,714,724,725 After either of these procedures, hospitals either maintain their heated water with a minimum return temperature of 124°F (51°C) and cold water at <68°F (<20°C) or chlorinate their hot water to achieve 1-2 mg/L (1-2 ppm) of free residual chlorine at the tap. 26,437,709-711,726,727 Additional measures (e.g., physical cleaning or replacement of hot-water storage tanks, water heaters, faucets, and shower heads) may be required to help eliminate accumulations of scale and sediment that protect organisms from the biocidal effects of heat and chlorine. 457,711 Alternative methods for controlling and eradicating legionellae in water systems (e.g., treating water with chlorine dioxide, heavy metal ions [i.e., copper/silver ions], ozone, and UV light) have limited the growth of legionellae under laboratory and operating conditions.
728-742 Further studies on the long-term efficacy of these treatments are needed before these methods can be considered standard applications. Renewed interest in the use of chloramines stems from concerns about adverse health effects associated with disinfectants and disinfection by-products. 743 Monochloramine usage minimizes the formation of disinfection by-products, including trihalomethanes and haloacetic acids. Monochloramine can also reach distal points in a water system and can penetrate into bacterial biofilms more effectively than free chlorine. 744 However, monochloramine use is limited to municipal water treatment plants and is currently not available to health-care facilities as a supplemental water-treatment approach. A recent study indicated that 90% of Legionnaires disease outbreaks associated with drinking water could have been prevented if monochloramine rather than free chlorine had been used for residual disinfection. 745 In a retrospective comparison of health-care associated Legionnaires disease incidence in central Texas hospitals, the same research group documented an absence of cases in facilities located in communities with monochloramine-treated municipal water. 746 Additional data are needed regarding the effectiveness of using monochloramine before its routine use as a disinfectant in water systems can be recommended. No data have been published regarding the effectiveness of monochloramine installed at the level of the health-care facility. Additional filtration of potable water systems is not routinely necessary. Filters are used in water lines in dialysis units, however, and may be inserted into the lines for specific equipment (e.g., endoscope washers and disinfectors) for the purpose of providing bacteria-free water for instrument reprocessing. Additionally, an RO unit is usually added to the distribution system leading to PE areas.

# b. Primary Prevention of Legionnaires Disease (No Cases Identified)

The primary and secondary environmental infection-control strategies described in this section of the guideline pertain to health-care facilities without transplant units. Infection-control measures specific to PE or transplant units (i.e., patient-care areas housing patients at the highest risk for morbidity and mortality from Legionella spp. infection) are described in the subsection titled Preventing Legionnaires Disease in Protective Environments. Health-care facilities use at least two general strategies to prevent health-care associated legionellosis when no cases or only sporadic cases have been detected. The first is an environmental surveillance approach involving periodic culturing of water samples from the hospital's potable water system to monitor for Legionella spp. 747-750 If any sample is culture-positive, diagnostic testing is recommended for all patients with health-care associated pneumonia. 748,749 In-house testing is recommended for facilities with transplant programs as part of a comprehensive treatment/management program. If ≥30% of the samples are culture-positive for Legionella spp., decontamination of the facility's potable water system is warranted. 748 The premise for this approach is that no cases of health-care associated legionellosis can occur if Legionella spp. are not present in the potable water system, and, conversely, cases of health-care associated legionellosis could potentially occur if Legionella spp. are cultured from the water. 26,751
Physicians who are informed that the hospital's potable water system is culture-positive for Legionella spp. are more likely to order diagnostic tests for legionellosis. A potential advantage of the environmental surveillance approach is that periodic culturing of water is less costly than routine laboratory diagnostic testing for all patients who have health-care associated pneumonia. The primary argument against this approach is that, in the absence of cases, the relationship between water-culture results and legionellosis risk remains undefined. 3 Legionella spp. can be present in the water systems of buildings 752 without being associated with known cases of disease. 437,707,753 In a study of 84 hospitals in Québec, 68% of the water systems were found to be colonized with Legionella spp., and 26% were colonized at >30% of sites sampled; cases of Legionnaires disease, however, were infrequently reported from these hospitals. 707 Other factors also argue against environmental surveillance. Interpretation of results from periodic water culturing might be confounded by differing results among the sites sampled in a single water system and by fluctuations in the concentration of Legionella spp. at the same site. 709,754 In addition, the risk for illness after exposure to a given source might be influenced by several factors other than the presence or concentration of organisms, including a. the degree to which contaminated water is aerosolized into respirable droplets, b. the proximity of the infectious aerosol to the potential host, c. the susceptibility of the host, and d. the virulence properties of the contaminating strain. 755-757 Thus, data are insufficient to assign a level of disease risk even on the basis of the number of colony-forming units detected in samples from areas for immunocompetent patients. Conducting environmental surveillance would obligate hospital administrators to initiate water-decontamination programs if Legionella spp. are identified. Therefore, periodic monitoring of water from the hospital's potable water system and from aerosol-producing devices is not widely recommended in facilities that have not experienced cases of health-care associated legionellosis. 661,758 The second strategy to prevent and control health-care associated legionellosis is a clinical approach, in which providers maintain a high index of suspicion for legionellosis and order appropriate diagnostic tests (i.e., culture, urine antigen, and direct fluorescent antibody [DFA] serology) for patients with health-care associated pneumonia who are at high risk for legionellosis and its complications. 437,759,760 The testing of autopsy specimens can be included in this strategy should a death resulting from health-care associated pneumonia occur. Identification of one case of definite or two cases of possible health-care associated Legionnaires disease should prompt an epidemiologic investigation for a hospital source of Legionella spp., which may involve culturing the facility's water for Legionella. Routine maintenance of cooling towers and use of sterile water for the filling and terminal rinsing of nebulization devices and ventilation equipment can help to minimize potential sources of contamination. Circulating potable water temperatures should match those outlined in the subsection titled Water Temperature and Pressure, as permitted by state code.
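The ≥30% culture-positivity criterion described earlier in this subsection reduces to a simple proportion calculation. The Python sketch below is a minimal illustration; the sampling results are hypothetical, and any real decision would rest on the facility's full surveillance protocol rather than a single computed fraction.

```python
def proportion_positive(results: list[bool]) -> float:
    """Fraction of environmental water samples culture-positive for Legionella spp."""
    return sum(results) / len(results)

# Hypothetical distal-site sampling results (True = culture-positive):
samples = [True, False, True, False, False, True, False, True, False, False]

p = proportion_positive(samples)
print(f"{p:.0%} of sampled sites culture-positive")
if p >= 0.30:
    print("Meets the >=30% criterion discussed above: decontamination of the "
          "potable water system is warranted.")
```

With 4 of 10 hypothetical sites positive (40%), the criterion is met; note that, as the text above cautions, the relationship between culture results and actual disease risk remains undefined in the absence of cases.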
# c. Secondary Prevention of Legionnaires Disease (With Identified Cases)

The indications for a full-scale environmental investigation to search for and subsequently decontaminate identified sources of Legionella spp. in health-care facilities without transplant units have not been clarified; these indications would likely differ depending on the facility. Case categories for health-care associated Legionnaires disease in facilities without transplant units include definite cases (i.e., laboratory-confirmed cases of legionellosis that occur in patients who have been hospitalized continuously for ≥10 days before the onset of illness) and possible cases (i.e., laboratory-confirmed infections that occur 2-9 days after hospital admission). 3 In settings in which as few as one to three health-care associated cases are recognized over several months, intensified surveillance for Legionnaires disease has frequently identified numerous additional cases. 405,408,432,453,739,759,760 This finding suggests the need for a low threshold for initiating an investigation after laboratory confirmation of cases of health-care associated legionellosis. When developing a strategy for responding to such a finding, however, infection-control personnel should consider the level of risk for health-care associated acquisition of, and mortality from, Legionella spp. infection at their particular facility. An epidemiologic investigation conducted to determine the source of Legionella spp. involves several important steps (Box 11). Laboratory assessment is crucial in supporting epidemiologic evidence of a link between human illness and a specific environmental source. 761 Strain determination from subtype analysis is most frequently used in these investigations. 410,762-764 Once the environmental source is established and confirmed with laboratory support, supplemental water treatment strategies can be initiated as appropriate.

# Box 11. Steps in an epidemiologic investigation for legionellosis

• Review medical and microbiologic records.

# d. Preventing Legionnaires Disease in Protective Environments

This subsection outlines infection-control measures applicable to those health-care facilities providing care to severely immunocompromised patients. Indigenous microorganisms in the tap water of these facilities may pose problems for such patients. These measures are designed to prevent the generation of potentially infectious aerosols from water and the subsequent exposure of PE patients or other immunocompromised patients (e.g., transplant patients) (Table 17). Infection-control measures that address the use of water with medical equipment (e.g., ventilators, nebulizers, and equipment humidifiers) are described in other guidelines and publications. 3,455 If one case of laboratory-confirmed, health-care associated Legionnaires disease is identified in a patient in a solid-organ transplant program or in PE (i.e., an inpatient in PE for all or part of the 2-10 days prior to onset of illness) or if two or more laboratory-confirmed cases occur among patients who had visited an outpatient PE setting, the hospital should report the cases to the local and state health departments. The hospital should then initiate a thorough epidemiologic and environmental investigation to determine the likely environmental sources of Legionella spp. 9 The source of Legionella should be decontaminated or removed. Isolated cases may be difficult to investigate.
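The definite/possible case timing criteria quoted above can be expressed compactly. The Python sketch below is an illustrative classifier only, with a function name of our choosing; it is no substitute for case-by-case epidemiologic judgment, and it assumes continuous hospitalization as the criteria require.

```python
def classify_case(days_hospitalized_before_onset: int) -> str:
    """Categorize a laboratory-confirmed legionellosis case (facility without
    transplant units) by the timing criteria discussed above. Assumes the
    patient was hospitalized continuously before onset of illness."""
    if days_hospitalized_before_onset >= 10:
        return "definite health-care associated case"
    if 2 <= days_hospitalized_before_onset <= 9:
        return "possible health-care associated case"
    return "not classified as health-care associated by timing alone"

print(classify_case(12))  # definite
print(classify_case(5))   # possible
print(classify_case(1))   # onset too early to attribute by timing
```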
Because transplant recipients are at substantially higher risk for disease and death from legionellosis compared with other hospitalized patients, periodic culturing for Legionella spp. in water samples from the potable water in the solid-organ transplant and/or PE unit can be performed as part of an overall strategy to prevent Legionnaires disease in PE units. 9,431,710,769 The optimal methodology (i.e., frequency and number of sites) for environmental surveillance cultures in PE units has not been determined, and the cost-effectiveness of this strategy has not been evaluated. Because transplant recipients are at high risk for Legionnaires disease and because no data are available to determine a safe concentration of legionellae organisms in potable water, the goal of environmental surveillance for Legionella spp. should be to maintain water systems with no detectable organisms. 9,431 Culturing for legionellae may be used to assess the effectiveness of water treatment or decontamination methods, a practice that provides benefits to both patients and health-care workers. 767,770

• Use water that is not contaminated with Legionella spp. for patients' sponge baths. 9
• Provide sterile water for drinking, tooth brushing, or for flushing nasogastric tubes. 9,412
• Perform supplemental treatment of the water for the unit. 732
• Consider periodic monitoring (i.e., culturing) of the unit water supply for Legionella spp.

Protecting patient-care devices and instruments from inadvertent tap water contamination during room cleaning procedures is also important in any immunocompromised patient-care area. In a recent outbreak of gram-negative bacteremias among open-heart-surgery patients, pressure-monitoring equipment that was assembled and left uncovered overnight prior to the next day's surgeries was inadvertently contaminated with mists and splashing water from a hose-disinfectant system used for cleaning. 771

# Cooling Towers and Evaporative Condensers

Modern health-care facilities maintain indoor climate control during warm weather by use of cooling towers (large facilities) or evaporative condensers (smaller buildings). A cooling tower is a wet-type, evaporative heat transfer device used to discharge waste heat from a building's air-conditioning condensers to the atmosphere (Figure 5). 772,773 Warm water from air-conditioning condensers is piped to the cooling tower where it is sprayed downward into a counter- or cross-current air flow. To accelerate heat transfer to the air, the water passes over the fill, which either breaks water into droplets or causes it to spread into a thin film. 772,773 Most systems use fans to move air through the tower, although some large industrial cooling towers rely on natural draft circulation of air. The cooled water from the tower is piped back to the condenser, where it again picks up heat generated during the process of chilling the system's refrigerant. The water is cycled back to the cooling tower to be cooled. Closed-circuit cooling towers and evaporative condensers are also evaporative heat-transfer devices. In these systems, the process fluid (e.g., water, ethylene glycol/water mixture, oil, or a condensing refrigerant) does not directly contact the cooling air, but is contained inside a coil assembly. 661 Water temperatures are approximate and may differ substantially according to system use and design.
Warm water from the condenser (or chiller) is sprayed downward into a counter- or cross-current air flow. Water passes over the fill (a component of the system designed to increase the surface area of the water exposed to air), and heat from the water is transferred to the air. Some of the water becomes aerosolized during this process, although the volume of aerosol discharged to the air can be reduced by the placement of a drift eliminator. Water cooled in the tower returns to the heat source to cool refrigerant from the air conditioning unit. Cooling towers and evaporative condensers incorporate inertial stripping devices called drift eliminators to remove water droplets generated within the unit. Although the effectiveness of these eliminators varies substantially depending on design and condition, some water droplets in the size range of <5 μm will likely leave the unit, and some larger droplets leaving the unit may be reduced to ≤5 μm by evaporation. Thus, even with proper operation, a cooling tower or evaporative condenser can generate and expel respirable water aerosols. If either the water in the unit's basin or the make-up water (added to replace water lost to evaporation) contains Legionella spp. or other waterborne microorganisms, these organisms can be aerosolized and dispersed from the unit. 774 Clusters of both Legionnaires disease and Pontiac fever have been traced to exposure to infectious water aerosols originating from cooling towers and evaporative condensers contaminated with Legionella spp. Although most of these outbreaks have been community-acquired episodes of pneumonia, 775-782 health-care associated Legionnaires disease has been linked to cooling tower aerosol exposure. 404,405 Contaminated aerosols from cooling towers on hospital premises gained entry to the buildings either through open windows or via air handling system intakes located near the tower equipment. Cooling towers and evaporative condensers provide ideal ecological niches for Legionella spp. The typical temperature of the water in cooling towers ranges from 85°F-95°F (29°C-35°C), although temperatures can be above 120°F (49°C) and below 70°F (21°C) depending on system heat load, ambient temperature, and operating strategy. 661 An Australian study of cooling towers found that legionellae colonized or multiplied in towers with basin temperatures above 60.8°F (16°C), and multiplication became explosive at temperatures above 73.4°F (23°C). 783 Water temperature in closed-circuit cooling towers and evaporative condensers is similar to that in cooling towers. Considerable variation in the piping arrangement occurs. In addition, stagnant areas or dead legs may be difficult to clean or penetrate with biocides. Several documents address the routine maintenance of cooling towers, evaporative condensers, and whirlpool spas. 661,784-787 They suggest following manufacturers' recommendations for cleaning and biocide treatment of these devices; all health-care facilities should ensure proper maintenance for their cooling towers and evaporative condensers, even in the absence of Legionella spp. (Appendix C). Because cooling towers and evaporative condensers can be shut down during periods when air conditioning is not needed, this maintenance cleaning and treatment should be performed before starting up the system for the first time in the warm season. 782
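As a small illustration of the basin-temperature figures cited above from the Australian cooling-tower study, the Python sketch below bins a basin reading into the reported ranges; the thresholds are as quoted in the text, while the function name and sample readings are ours.

```python
def basin_temp_category(temp_c: float) -> str:
    """Bin a cooling-tower basin temperature against the ranges reported
    in the Australian study cited above: colonization/multiplication above
    16 C (60.8 F), explosive multiplication above 23 C (73.4 F)."""
    if temp_c > 23.0:
        return "above 23 C (73.4 F): explosive multiplication reported"
    if temp_c > 16.0:
        return "above 16 C (60.8 F): colonization/multiplication reported"
    return "at or below 16 C (60.8 F): lower amplification risk"

for t in (14.0, 20.0, 30.0):  # hypothetical basin readings in Celsius
    print(f"{t} C -> {basin_temp_category(t)}")
```

Because typical basin temperatures fall well inside the amplification range, such a check mainly reinforces the point above that routine cleaning and biocide treatment, not temperature alone, keep these systems under control.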
Emergency decontamination protocols describing cleaning procedures and hyperchlorination for cooling towers have been developed for towers implicated in the transmission of legionellosis. 786,787

# Dialysis Water Quality and Dialysate

# a. Rationale for Water Treatment in Hemodialysis

Hemodialysis, hemofiltration, and hemodiafiltration require special water-treatment processes to prevent adverse patient outcomes of dialysis therapy resulting from improper formulation of dialysate with water containing high levels of certain chemical or biological contaminants. The Association for the Advancement of Medical Instrumentation (AAMI) has established chemical and microbiologic standards for the water used to prepare dialysate, substitution fluid, or to reprocess hemodialyzers for renal replacement therapy. 788-792 The AAMI standards address: a. equipment and processes used to purify water for the preparation of concentrates and dialysate and the reprocessing of dialyzers for multiple use and b. the devices used to store and distribute this water. Future revisions to these standards may include hemofiltration and hemodiafiltration. Water treatment systems used in hemodialysis employ several physical and/or chemical processes either singly or in combination (Figure 6). These systems may be portable units or large systems that feed several rooms. In the United States, >97% of maintenance hemodialysis facilities use RO alone or in combination with deionization. 793 Many acute-care facilities use portable hemodialysis machines with attached portable water treatment systems that use either deionization or RO. These machines were exempted from earlier versions of AAMI recommendations, but given current knowledge about toxic exposures and inflammatory processes in patients new to dialysis, these machines should now come into compliance with current AAMI recommendations for hemodialysis water and dialysate quality. 788,789 Previous recommendations were based on the assumption that acute-care patients did not experience the same degree of adverse effects from short-term, cumulative exposures to either chemicals or microbiologic agents present in hemodialysis fluids compared with the effects encountered by patients during chronic, maintenance dialysis. 788,789 Additionally, JCAHO is reviewing inpatient practices and record-keeping for dialysis (acute and maintenance) for adherence to AAMI standards and recommended practices. Neither the water used to prepare dialysate nor the dialysate itself needs to be sterile, but tap water cannot be used without additional treatment. Infections caused by rapid-growing NTM (e.g., Mycobacterium chelonae and M. abscessus) present a potential risk to hemodialysis patients (especially those in hemodialyzer reuse programs) if disinfection procedures to inactivate mycobacteria in the water (low-level disinfection) and the hemodialyzers (high-level disinfection) are inadequate. 31,32,633 Other factors associated with microbial contamination in dialysis systems could involve the water treatment system, the water and dialysate distribution systems, and the type of hemodialyzer. 666,667,794-799 Understanding the various factors and their influence on contamination levels is the key to preventing high levels of microbial contamination in dialysis therapy. In several studies, pyrogenic reactions were demonstrated to have been caused by lipopolysaccharide or endotoxin associated with gram-negative bacteria. 794,800-803
Early studies demonstrated that parenteral exposure to endotoxin at a concentration of 1 ng/kg body weight/hour was the threshold dose for producing pyrogenic reactions in humans, and that the relative potencies of endotoxin differ by bacterial species. 804,805 Gram-negative water bacteria (e.g., Pseudomonas spp.) have been shown to multiply rapidly in a variety of hospital-associated fluids that can be used as supply water for hemodialysis (e.g., distilled water, deionized water, RO water, and softened water) and in dialysate (a balanced salt solution made with this water). 806 Several studies have demonstrated that the attack rates of pyrogenic reactions are directly associated with the number of bacteria in dialysate. 666,667,807 These studies provided the rationale for setting the heterotrophic bacteria standards in the first AAMI hemodialysis guideline at ≤2,000 CFU/mL in dialysate and one log lower (≤200 CFU/mL) for the water used to prepare dialysate. 668,788 If the level of bacterial contamination exceeded 200 CFU/mL in water, this level could be amplified in the system and effectively constitute a high inoculum for dialysate at the start of a dialysis treatment. 807,808 Pyrogenic reactions did not appear to occur when the level of contamination was below 2,000 CFU/mL in dialysate unless the source of the endotoxin was exogenous to the dialysis system (i.e., present in the community water supply). Endotoxins in a community water supply have been linked to the development of pyrogenic reactions among dialysis patients. 794 Whether endotoxin actually crosses the dialyzer membrane is controversial. Several investigators have shown that bacteria growing in dialysate generate products that can cross the dialyzer membrane. 809,810 Gram-negative bacteria growing in dialysate have produced endotoxins that in turn stimulated the production of anti-endotoxin antibodies in hemodialysis patients; 801,811 these data suggest that bacterial endotoxins, although large molecules, cross dialyzer membranes either intact or as fragments. The use of the very permeable membranes known as high-flux membranes (which allow large molecules [e.g., β2 microglobulin] to traverse the membrane) increases the potential for passage of endotoxins into the blood path. Several studies support this contention. In one such study, an increase in plasma endotoxin concentrations during dialysis was observed when patients were dialyzed against dialysate containing 10³-10⁴ CFU/mL of Pseudomonas spp. 812 In vitro studies using both radiolabeled lipopolysaccharide and biologic assays have demonstrated that biologically active substances derived from bacteria found in dialysate can cross a variety of dialyzer membranes. 802,813-816 Patients treated with high-flux membranes have had higher levels of anti-endotoxin antibodies than subjects or patients treated with conventional membranes. 817 Finally, since 1989, 19%-22% of dialysis centers have reported pyrogenic reactions in the absence of septicemia. 818,819 Investigations of adverse outcomes among patients using reprocessed dialyzers have demonstrated a greater risk for developing pyrogenic reactions when the water used to reprocess these devices contained >6 ng/mL endotoxin and >10⁴ CFU/mL bacteria. 820 In addition to the variability in endotoxin assays, host factors also are involved in determining whether a patient will mount a response to endotoxin. 803
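To make the 1 ng/kg body weight/hour pyrogenic threshold cited above concrete, the short calculation below scales it by patient weight. The threshold value is as quoted in the text; the function name and sample weights are illustrative, and, as noted above, relative endotoxin potency varies by bacterial species and host factors also shape the response.

```python
def pyrogenic_threshold_ng_per_hour(weight_kg: float,
                                    threshold_ng_per_kg_hr: float = 1.0) -> float:
    """Parenteral endotoxin dose rate (ng/hour) at the reported threshold
    for pyrogenic reactions (1 ng/kg body weight/hour)."""
    return weight_kg * threshold_ng_per_kg_hr

for weight in (50, 70, 90):  # illustrative patient weights in kg
    rate = pyrogenic_threshold_ng_per_hour(weight)
    print(f"{weight} kg patient: ~{rate:.0f} ng/hour threshold dose rate")
```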
803 Outbreak investigations of pyrogenic reactions and bacteremias associated with hemodialyzer reuse have demonstrated that pyrogenic reactions are prevented once the endotoxin level in the water used to reprocess the dialyzers is returned to below the AAMI standard level. 821 Reuse of dialyzers and use of bicarbonate dialysate, high-flux dialyzer membranes, or high-flux dialysis may increase the potential for pyrogenic reactions if the water in the dialysis setting does not meet standards. [796][797][798] Although investigators have been unable to demonstrate endotoxin transfer across dialyzer membranes, 803,822,823 the preponderance of reports now supports the ability of endotoxin to transfer across at least some high-flux membranes under some operating conditions. In addition to the acute risk of pyrogenic reactions, indirect evidence is increasingly demonstrating that chronic exposure to low amounts of endotoxin may play a role in some of the long-term complications of hemodialysis therapy. Patients treated with ultrafiltered dialysate for 5-6 months have demonstrated a decrease in serum β2 microglobulin concentrations and a decrease in markers of an inflammatory response. [824][825][826] In studies of longer duration, use of microbiologically ultrapure dialysate has been associated with a decreased incidence of β2 microglobulin-associated amyloidosis. 827,828 Although patient benefit likely is associated with the use of ultrapure dialysate, no consensus has been reached regarding the potential adoption of this as a standard in the United States. Debate continues regarding the bacterial and endotoxin limits for dialysate. As advances in water treatment and hemodialysis processes occur, efforts are underway to move improved technology from the manufacturer into the user community. Cost-benefit studies, however, have not been done, and substantially increased costs to implement newer water treatment modalities are anticipated. To reconcile AAMI documents with current International Organization for Standardization (ISO) format, AAMI has determined that its hemodialysis standards will be discussed in the following four installments: RD 5 for hemodialysis equipment, RD 62 for product water quality, RD 47 for dialyzer reprocessing, and RD 52 for dialysate quality. The Renal Diseases and Dialysis Committee of AAMI is expected to finalize and promulgate the dialysate standard pertinent to the user community (RD 52), adopting by reference the bacterial and endotoxin limits in product water as currently outlined in the AAMI standard that applies to systems manufacturers (RD 62). At present, the user community should continue to observe water quality and dialysate standards as outlined in AAMI RD 5 (Hemodialysis Systems, 1992) and AAMI RD 47 (Reuse of Hemodialyzers, 1993) until the new RD 52 standard becomes available (Table 18). 789, 791 The current AAMI standard directed at systems manufacturers (RD 62 [Water Treatment Equipment for Hemodialysis Applications, 2001]) now specifies that all product water used to prepare dialysate or to reprocess dialyzers for multiple use should contain <2 endotoxin units per milliliter (EU/mL). 792 A level of 2 EU/mL was chosen as the upper limit for endotoxin because this level is easily achieved with contemporary water treatment systems using RO and/or ultrafiltration. CDC has advocated monthly endotoxin testing along with microbiologic assays of water, because endotoxin activity may not correspond to the total heterotrophic plate counts.
829 Additionally, the current AAMI standard RD 62 for manufacturers includes action levels for product water. Because 48 hours can elapse between the time of sampling water for microbial contamination and the time when results are received, and because bacterial proliferation can be rapid, action levels for microbial counts and endotoxin concentrations are reported as 50 CFU/mL and 1 EU/mL, respectively, in this revision of the standard. 792 These recommendations will allow users to initiate corrective action before levels exceed the maximum levels established by the standard. In hemodialysis, the net movement of water is from the blood to the dialysate, although within the dialyzer, local movement of water from the dialysate to the blood through the phenomenon of backfiltration may occur, particularly in dialyzers with highly permeable membranes. 830 In contrast, hemofiltration and hemodiafiltration feature infusion of large volumes of electrolyte solution (20-70 L) into the blood. Increasingly, this electrolyte solution is being prepared on-line from water and concentrate. Because of the large volumes of fluid infused, AAMI considered the necessity of setting more stringent requirements for water to be used in this application, but this organization has not yet established these because of lack of expert consensus and insufficient experience with on-line therapies in the United States. On-line hemofiltration and hemodiafiltration systems use sequential ultrafiltration as the final step in the preparation of infusion fluid. Several experts from AAMI concur that these point-of-use ultrafiltration systems should be capable of further reducing the bacteria and endotoxin burden of solutions prepared from water meeting the requirements of the AAMI standard to a safe level for infusion.

# b. Microbial Control Strategies

The strategy for controlling massive accumulations of gram-negative water bacteria and NTM in dialysis systems primarily involves preventing their growth through proper disinfection of water-treatment systems and hemodialysis machines. Gram-negative water bacteria, their associated lipopolysaccharides (bacterial endotoxins), and NTM ultimately come from the community water supply, and levels of these bacteria can be amplified depending on the water treatment system, dialysate distribution system, type of dialysis machine, and method of disinfection (Table 19). 634,794,831 Control strategies are designed to reduce levels of microbial contamination in water and dialysis fluid to relatively low levels but not to completely eradicate it.

# Format Change [November 2016]: The format of this section was changed to improve readability and accessibility. The content is unchanged.

# Water distribution system

• Size: Oversized diameter and length decrease fluid flow and increase the bacterial reservoir for both treated water and centrally prepared dialysate.

• Construction: Rough joints, dead ends, unused branches, and polyvinyl chloride (PVC) piping can act as bacterial reservoirs.

• Elevation: Outlet taps should be located at the highest elevation to prevent loss of disinfectant; keep a recirculation loop in the system; flush unused ports routinely.

• Storage tanks: Tanks are undesirable because they act as a reservoir for water bacteria; if tanks are present, they must be routinely scrubbed and disinfected.

# Dialysis machines

• Single-pass: Disinfectant should have contact with all parts of the machine that are exposed to water or dialysis fluid.
• Recirculating single-pass or recirculating (batch): Recirculating pumps and machine design allow for massive contamination levels if not properly disinfected; overnight chemical germicide treatment is recommended.

Two components of hemodialysis water distribution systems -pipes (particularly those made of polyvinyl chloride [PVC]) and storage tanks -can serve as reservoirs of microbial contamination. Hemodialysis systems frequently use pipes that are wider and longer than are needed to handle the required flow, which slows the fluid velocity and increases both the total fluid volume and the wetted surface area of the system. Gram-negative bacteria in fluids remaining in pipes overnight multiply rapidly and colonize the wet surfaces, producing bacterial populations and endotoxin quantities in proportion to the volume and surface area. Such colonization results in formation of protective biofilm that is difficult to remove and protects the bacteria from disinfection. 832 Routine (i.e., monthly), low-level disinfection of the pipes can help to control bacterial contamination of the distribution system. Additional measures to protect pipes from contamination include a. situating all outlet taps at equal elevation and at the highest point of the system so that the disinfectant cannot drain from pipes by gravity before adequate contact time has elapsed, b. eliminating rough joints, dead-end pipes, and unused branches and taps that can trap fluid and serve as reservoirs of bacteria capable of continuously inoculating the entire volume of the system, 800 and c. maintaining a flow velocity of 3-5 ft/sec. A storage tank in the distribution system greatly increases the volume of fluid and surface area available and can serve as a niche for water bacteria. Storage tanks are therefore not recommended for use in dialysis systems unless they are frequently drained and adequately disinfected, including scrubbing the sides of the tank to remove bacterial biofilm. An ultrafilter should be used distal to the storage tank. 808,833 Microbiologic sampling of dialysis fluids is recommended because gram-negative bacteria can proliferate rapidly in water and dialysate in hemodialysis systems; high levels of these organisms place patients at risk for pyrogenic reactions or health-care associated infection. 667,668,808 Health-care facilities are advised to sample dialysis fluids at least monthly using standard microbiologic assay methods for waterborne microorganisms. 788,793,799,[834][835][836] Product water used to prepare dialysate and to reprocess hemodialyzers for reuse on the same patient should also be tested for bacterial endotoxin on a monthly basis. 792,829,837 (See Appendix C for information about water sampling methods for dialysis.) Cross-contamination of dialysis machines and inadequate disinfection measures can facilitate the spread of waterborne organisms to patients. Steps should be taken to ensure that dialysis equipment is performing correctly and that all connectors, lines, and other components are specific for the equipment, in good repair, and properly in place. A recent outbreak of gram-negative bacteremias among dialysis patients was attributed to faulty valves in a drain port of the machine that allowed backflow of saline used to flush the dialyzer before patient use. 838,839 This backflow contaminated the drain priming connectors, which contaminated the blood lines and exposed the patients to high concentrations of gram-negative bacteria.
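Because monthly sample results must be weighed against both the RD 62 action levels (50 CFU/mL; 1 EU/mL) and the maximum limits discussed above (≤200 CFU/mL in product water, ≤2,000 CFU/mL in dialysate, <2 EU/mL endotoxin), a simple classification routine can make the monitoring logic explicit. The following Python sketch is illustrative only; the function names are hypothetical, the thresholds are transcribed from the text above, and the standards documents themselves govern actual practice.

```python
# Illustrative sketch only -- thresholds transcribed from the AAMI limits
# discussed above (RD 62 action levels; maximum bacterial/endotoxin levels).
# Function and variable names are hypothetical, not from any standard.

def classify_product_water(cfu_per_ml: float, eu_per_ml: float) -> str:
    """Classify a monthly product-water sample for hemodialysis."""
    if cfu_per_ml > 200 or eu_per_ml >= 2:   # maximum levels exceeded
        return "EXCEEDS LIMIT: disinfect the system and resample"
    if cfu_per_ml >= 50 or eu_per_ml >= 1:   # action levels reached
        return "ACTION LEVEL: begin corrective action before limits are exceeded"
    return "ACCEPTABLE"

def classify_dialysate(cfu_per_ml: float) -> str:
    """Classify a dialysate sample against the 2,000 CFU/mL limit."""
    return "EXCEEDS LIMIT" if cfu_per_ml > 2000 else "ACCEPTABLE"

# Example: a sample at 80 CFU/mL and 0.5 EU/mL triggers corrective action
# on the bacterial count even though the endotoxin level is acceptable.
print(classify_product_water(80, 0.5))
```

The two-tier structure mirrors the rationale stated above: because results lag sampling by about 48 hours and bacterial proliferation can be rapid, the action levels exist to trigger correction before the maximum levels are reached.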
Environmental infection control in dialysis settings also includes low-level disinfection of housekeeping surfaces and spot decontamination of spills of blood (see Environmental Services in Part I of this guideline for further information).

# c. Infection-Control Issues in Peritoneal Dialysis

Peritoneal dialysis (PD), most commonly administered as continuous ambulatory peritoneal dialysis (CAPD) and continuous cycling peritoneal dialysis (CCPD), is the third most common treatment for end-stage renal disease (ESRD) in the United States, accounting for 12% of all dialysis patients. 840 Peritonitis is the primary complication of CAPD, with coagulase-negative staphylococci the most clinically significant causative organisms. 841 Other organisms that have been found to produce peritonitis include Staphylococcus aureus, Mycobacterium fortuitum, M. mucogenicum, Stenotrophomonas maltophilia, Burkholderia cepacia, Corynebacterium jeikeium, Candida spp., and other fungi. [842][843][844][845][846][847][848][849][850] Substantial morbidity is associated with peritoneal dialysis infections. Removal of peritoneal dialysis catheters usually is required for treatment of peritonitis caused by fungi, NTM, or other bacteria that are not cleared within the first several days of effective antimicrobial treatment. Furthermore, recurrent episodes of peritonitis may lead to fibrosis and loss of the dialysis membrane. Many reported episodes of peritonitis are associated with exit-site or tunneled catheter infections. Risk factors for the development of peritonitis in PD patients include 1. underdialysis, 2. immune suppression, 3. prolonged antimicrobial treatment, 4. patient age [more infections occur in younger patients and older hospitalized patients], 5. length of hospital stay, and 6. hypoalbuminemia. 844,851,852 Concern has been raised about infection risk associated with the use of automated cyclers in both inpatient and outpatient settings; however, studies suggest that PD patients who use automated cyclers have much lower infection rates. 853 One study noted that a closed-drainage system reduced the incidence of system-related peritonitis among intermittent peritoneal dialysis (IPD) patients from 3.6 to 1.5 cases/100 patient days. 854 The association of peritonitis with management of spent dialysate fluids requires additional study. Therefore, ensuring that the tip of the waste line is not submerged beneath the water level in a toilet or in a drain is prudent.

# Ice Machines and Ice

Microorganisms may be present in ice, ice-storage chests, and ice-making machines. The two main sources of microorganisms in ice are the potable water from which it is made and the transfer of organisms from hands (Table 20). Ice from contaminated ice machines has been associated with patient colonization, bloodstream infections, pulmonary and gastrointestinal illnesses, and pseudoinfections. 602,603,683,684,854,855 Microorganisms in ice can secondarily contaminate clinical specimens and medical solutions that require cold temperatures for either transport or holding. 601,620 An outbreak of surgical-site infections was interrupted when sterile ice was used in place of tap water ice to cool cardioplegia solutions. 601

Format Change [November 2016]: The format of this section was changed to improve readability and accessibility. The content is unchanged.

# Table 20. Microorganisms and their sources in ice and ice machines

From potable water

• Legionella spp. 684,685,857,858
• Nontuberculous mycobacteria (NTM) 602,603,859

• Pseudomonas aeruginosa 859

• Burkholderia cepacia 859,860

• Stenotrophomonas maltophilia 860

• Flavobacterium spp. 860

From fecally-contaminated water

• Norwalk virus [861][862][863]

• Giardia lamblia 864

• Cryptosporidium parvum 685

From hand-transfer of organisms

• Acinetobacter spp. 859

• Coagulase-negative staphylococci 859

• Salmonella enteritidis 865

• Cryptosporidium parvum 685

In a study comparing the microbial populations of hospital ice machines with organisms recovered from ice samples gathered from the community, samples from 27 hospital ice machines yielded low numbers (<10 CFU/mL) of several potentially opportunistic microorganisms, mainly gram-negative bacilli. 859 During the survey period, no health-care associated infections were attributed to the use of ice. Ice from community sources had higher levels of microbial contamination (75%-95% of 194 samples had total heterotrophic plate counts <500 CFU/mL, with the proportion of positive cultures dependent on the incubation temperature) and showed evidence of fecal contamination from the source water. 859 Thus, ice machines in health-care settings are no more heavily contaminated than ice machines in the community. If the source water for ice in a health-care facility is not fecally contaminated, then ice from clean ice machines and chests should pose no increased hazard for immunocompetent patients. Some waterborne bacteria found in ice could potentially be a risk to immunocompromised patients if they consume ice or drink beverages with ice. For example, Burkholderia cepacia in ice could present an infection risk for cystic fibrosis patients. 859,860 Therefore, protecting immunosuppressed and otherwise medically at-risk patients from exposure to tap water and ice potentially contaminated with opportunistic pathogens is prudent. 9 No microbiologic standards for ice, ice-making machines, or ice storage equipment have been established, although several investigators have suggested the need for such standards. 859,866 Culturing of ice machines is not routinely recommended, but it may be useful as part of an epidemiologic investigation. [867][868][869] Sampling might also help determine the best schedule for cleaning open ice-storage chests. Recommendations for a regular program of maintenance and disinfection have been published. [866][867][868][869] Health-care facilities are advised to clean ice-storage chests on a regular basis. Open ice chests may require a more frequent cleaning schedule compared with chests that have covers. Portable ice chests and containers require cleaning and low-level disinfection before the addition of ice intended for consumption. Ice-making machines may require less frequent cleaning, but their maintenance is important to proper performance. The manufacturer's instructions for both the proper method of cleaning and/or maintenance should be followed. These instructions may also recommend an EPA-registered disinfectant to ensure chemical potency, materials compatibility, and safety. In the event that instructions and suitable EPA-registered disinfectants are not available for this process, a generic approach to cleaning, disinfecting, and maintaining ice machines and dispensers can be used (Box 12). Ice and ice-making machines also may be contaminated via improper storage or handling of ice by patients and/or staff. 684-686, 855-858, 870 Suggested steps to avoid this means of contamination include a.
minimizing or avoiding direct hand contact with ice intended for consumption, b. using a hard-surface scoop to dispense ice, and c. installing machines that dispense ice directly into portable containers at the touch of a control. 687,869

Box 12. General steps for cleaning and maintaining ice machines, dispensers, and storage chests*+

1. Disconnect unit from power supply.
2. Remove and discard ice from bin or storage chest.
3. Allow unit to warm to room temperature.
4. Disassemble removable parts of machine that make contact with water to make ice.
5. Thoroughly clean machine and parts with water and detergent.
6. Dry external surfaces of removable parts before reassembling.
7. Check for any needed repair.
8. Replace feeder lines, as appropriate (e.g., when damaged, old, or difficult to clean).
9. Ensure presence of an air space in tubing leading from water inlet into water distribution system of machine.
10. Inspect for rodent or insect infestations under the unit and treat, as needed.
11. Check door gaskets (open compartment models) for evidence of leakage or dripping into the storage chest.
12. Clean the ice-storage chest or bin with fresh water and detergent; rinse with fresh tap water.
13. Sanitize machine by circulating a 50-100 parts per million (ppm) solution of sodium hypochlorite (i.e., 4-8 mL sodium hypochlorite/gallon of water) through the ice-making and storage systems for 2 hours (100 ppm solution) or 4 hours (50 ppm solution).
14. Drain sodium hypochlorite solutions and flush with fresh tap water.
15. Allow all surfaces of equipment to dry before returning to service.

* Material in this box is adapted from reference 869.
+ These general guidelines should be used only where manufacturer-recommended methods and EPA-registered disinfectants are not available.

# Hydrotherapy Tanks and Pools

# a. General Information

Hydrotherapy equipment (e.g., pools, whirlpools, whirlpool spas, hot tubs, and physiotherapy tanks) traditionally has been used to treat patients with certain medical conditions (e.g., burns, 871,872 septic ulcers, lesions, amputations, 873 orthopedic impairments and injuries, arthritis, 874 and kidney lithotripsy). 654 Wound-care medicine is increasingly moving away from hydrotherapy, however, in favor of bedside pulsed-lavage therapy using sterile solutions for cleaning and irrigation. 492,[875][876][877][878] Several episodes of health-care associated infections have been linked to use of hydrotherapy equipment. Infection control for hydrotherapy tanks, pools, or birthing tanks presents unique challenges because indigenous microorganisms are always present in the water during treatments. In addition, some studies have found free-living amoebae (i.e., Naegleria lovaniensis), which are commonly found in association with Naegleria fowleri, in hospital hydrotherapy pools. 890 Although hydrotherapy is at times appropriate for patients with wounds, burns, or other types of non-intact skin conditions (determined on a case-by-case basis), this equipment should not be considered "semi-critical" in accordance with the Spaulding classification. 891 Microbial data to evaluate the risk of infection to patients using hydrotherapy pools and birthing tanks are insufficient. Nevertheless, health-care facilities should maintain stringent cleaning and disinfection practices in accordance with the manufacturer's instructions and with relevant scientific literature until data supporting more rigorous infection-control measures become available.
Factors that should be considered in therapy decisions in this situation would include a. availability of alternative aseptic techniques for wound management and b. a risk-benefit analysis of using traditional hydrotherapy.

# b. Hydrotherapy Tanks

Hydrotherapy tanks (e.g., whirlpools, Hubbard tanks, and whirlpool bath tubs) are shallow tanks constructed of stainless steel, plexiglass, or tile. They are closed-cycle water systems with hydrojets to circulate, aerate, and agitate the water. The water temperature range is 50°F-104°F (10°C-40°C). The warm water temperature, constant agitation and aeration, and design of the hydrotherapy tanks provide ideal conditions for bacterial proliferation if the equipment is not properly maintained, cleaned, and disinfected. The design of the hydrotherapy equipment should be evaluated for potential infection-control problems that can be associated with inaccessible surfaces that can be difficult to clean and/or remain wet in between uses (i.e., recessed drain plates with fixed grill plates). 887 Associated equipment (e.g., parallel bars, plinths, Hoyer lifts, and wheelchairs) can also be potential reservoirs of microorganisms, depending on the materials used in these items (i.e., porous vs. non-porous materials) and the surfaces that may become wet during use. Patients with active skin colonizations and wound infections can serve as sources of contamination for the equipment and the water. Contamination from spilled tub water can extend to drains, floors, and walls. [680][681][682][683] Health-care associated colonization or infection can result from exposure to endogenous sources of microorganisms (autoinoculation) or exogenous sources (via cross-contamination from other patients previously receiving treatment in the unit). Although some facilities have used tub liners to minimize environmental contamination of the tanks, the use of a tub liner does not eliminate the need for cleaning and disinfection. Draining these small pools and tanks after each patient use, thoroughly cleaning with a detergent, and disinfecting according to manufacturers' instructions have reduced bacterial contamination levels in the water from 10⁴ CFU/mL to <10 CFU/mL. 892 A chlorine residual of 15 ppm in the water should be obtained prior to the patient's therapy session (e.g., by adding 15 grams of calcium hypochlorite 70% [e.g., HTH®] per 100 gallons of water). 892 A study of commercial and residential whirlpools found that superchlorination or draining, cleaning, disinfection, and refilling of whirlpools markedly reduced densities of Pseudomonas aeruginosa in whirlpool water. 893 The bacterial populations were rapidly replenished, however, when disinfectant concentrations dropped below recommended levels for recreational use (i.e., chlorine at 3.0 ppm or bromine at 6.0 ppm). When using chlorine, however, knowing whether the community drinking-water system is disinfected with chloramine is important, because municipal utilities adjust the pH of the water to the basic side to enhance chloramine formation. Because chlorine is not very effective at pH levels above 8, it may be necessary to re-adjust the pH of the water to a more acidic level. 894 A few reports describe the addition of antiseptic chemicals to hydrotherapy tank water, especially for burn patient therapy. [895][896][897] One study involving a minimal number of participants demonstrated a reduction in the number of Pseudomonas spp.
and other gram-negative bacteria from both patients and equipment surfaces when chloramine-T ("chlorazene") was added to the water. 898 Chloramine-T has not, however, been approved for water treatment in the United States.

# c. Hydrotherapy Pools

Hydrotherapy pools typically serve large numbers of patients and are usually heated to 91.4°F-98.6°F (31°C-37°C). The temperature range is narrower (94°F-96.8°F [35°C-36°C]) for pediatric and geriatric patient use. 899 Because the size of hydrotherapy pools precludes draining after patient use, proper management is required to maintain the proper balance of water conditioning (i.e., alkalinity, hardness, and temperature) and disinfection. The most widely used chemicals for disinfection of pools are chlorine and chlorine compounds -calcium hypochlorite, sodium hypochlorite, lithium hypochlorite, chloroisocyanurates, and chlorine gas. Solid and liquid formulations of chlorine chemicals are the easiest and safest to use. 900 Other halogenated compounds have also been used for pool-water disinfection, albeit on a limited scale. Bromine, which forms bactericidal bromamines in the presence of ammonia, has limited use because of its association with contact dermatitis. 901 Iodine does not bleach hair or swimsuits or cause eye irritation, but when introduced at proper concentrations, it gives water a greenish-yellowish cast. 892 In practical terms, maintenance of large hydrotherapy pools (e.g., those used for exercise) is similar to that for indoor public pools (i.e., continuous filtration, chlorine residuals no less than 0.4 ppm, and pH of 7.2-7.6). 902,903 Supply pipes and pumps also need to be maintained to eliminate the possibility of this equipment serving as a reservoir for waterborne organisms. 904 Specific standards for chlorine residual and pH of the water are addressed in local and state regulations. Patients who are fecally incontinent or who have draining wounds should refrain from using these pools until their condition improves.

# d. Birthing Tanks and Other Equipment

The use of birthing tanks, whirlpool spas, and whirlpools is a recent addition to obstetrical practice. 905 Few studies on the potential risks associated with these pieces of equipment have been conducted. In one study of 32 women, a newborn contracted a Pseudomonas infection after being birthed in such a tank; the infecting strain was identical to the organism isolated from the tank water. 906 Another report documented identical strains of P. aeruginosa isolated from a newborn with sepsis and from the environmental surfaces of a tub that the mother used for relaxation while in labor. 907 Other studies have shown no significant increases in the rates of post-immersion infections among mothers and infants. 908,909 Because the water and the tub surfaces routinely become contaminated with the mother's skin flora and blood during labor and delivery, birthing tanks and other tub equipment must be drained after each patient use and the surfaces thoroughly cleaned and disinfected. Health-care facilities are advised to follow the manufacturer's instructions for selection of disinfection method and chemical germicide. The range of chlorine residuals for public whirlpools and whirlpool spas is 2-5 ppm. 910 Use of an inflatable tub is an alternative solution, but this item must be cleaned and disinfected between patients if it is not considered a single-use unit.
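The quantitative targets scattered through this section (a 15-ppm pre-session residual for drainable tanks, ≥0.4 ppm with pH 7.2-7.6 for large pools, and 2-5 ppm for whirlpool spas and birthing tubs) lend themselves to a simple pre-use check. The Python sketch below is illustrative only; the equipment categories and function names are shorthand for the figures quoted above, not a regulatory tool, and local and state regulations take precedence.

```python
# Illustrative pre-use check against the chlorine residual and pH targets
# quoted in this section. Categories and names are hypothetical shorthand;
# local and state regulations govern actual practice.
from typing import Optional

TARGETS = {
    # category: (min free chlorine ppm, max ppm or None, pH range or None)
    "drainable_tank": (15.0, None, None),        # 15 ppm residual before therapy
    "hydrotherapy_pool": (0.4, None, (7.2, 7.6)),
    "whirlpool_spa_or_birthing_tub": (2.0, 5.0, None),
}

def pre_use_check(category: str, chlorine_ppm: float,
                  ph: Optional[float] = None) -> list:
    """Return a list of out-of-range findings; an empty list means targets met."""
    low, high, ph_range = TARGETS[category]
    findings = []
    if chlorine_ppm < low:
        findings.append(f"chlorine {chlorine_ppm} ppm below target {low} ppm")
    if high is not None and chlorine_ppm > high:
        findings.append(f"chlorine {chlorine_ppm} ppm above target {high} ppm")
    if ph_range and ph is not None and not (ph_range[0] <= ph <= ph_range[1]):
        findings.append(f"pH {ph} outside {ph_range[0]}-{ph_range[1]}")
    return findings

# Example: a pool at 0.3 ppm and pH 7.8 fails on both counts.
print(pre_use_check("hydrotherapy_pool", 0.3, 7.8))
```

The pH check matters for the chlorine categories because, as noted above, chlorine loses much of its effectiveness at pH levels above 8.

Recreational tanks and whirlpool spas are increasingly being used as hydrotherapy equipment.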
Although such home equipment appears to be suitable for hydrotherapy, it is neither designed nor constructed to function in this capacity. Additionally, manufacturers generally are not obligated to provide the health-care facility with cleaning and disinfecting instructions appropriate for medical equipment use, and the U.S. Food and Drug Administration (FDA) does not evaluate recreational equipment. Health-care facilities should therefore carefully evaluate this "off-label" use of home equipment before proceeding with a purchase.

# Miscellaneous Medical/Dental Equipment Connected to Main Water Systems

# a. Automated Endoscope Reprocessors

The automated endoscope reprocessor (AER) is classified by the FDA as an accessory for the flexible endoscope. 654 A properly operating AER can provide a more consistent, reliable method of decontamination and terminal reprocessing for endoscopes between patient procedures than manual reprocessing methods alone. 911 An endoscope is generally subjected to high-level disinfection using a liquid chemical sterilant or a high-level disinfectant. Because the instrument is a semi-critical device, the optimal rinse fluid for a disinfected endoscope would be sterile water. 3 Sterile water, however, is expensive and difficult to produce in sufficient quantities and with adequate quality assurance for instrument rinsing in an AER. 912,913 Therefore, one option to be used for AERs is rinse water that has been passed through filters with a pore size of 0.1-0.2 μm to render the water "bacteria-free." These filters usually are located in the water line at or near the port where the mains water enters the equipment. The product water (i.e., tap water passing through these filters) in these applications is not considered equivalent in microbial quality to that for membrane-filtered water as produced by pharmaceutical firms. Membrane filtration in pharmaceutical applications is intended to ensure the microbial quality of polished product water. Water has been linked to the contamination of flexible fiberoptic endoscopes in the following two scenarios: a. rinsing a disinfected endoscope with unfiltered tap water, followed by storage of the instrument without drying out the internal channels and b. contamination of AERs from tap water inadvertently introduced into the equipment. In the latter instance, the machine's water reservoirs and fluid circuitry become contaminated with waterborne, heterotrophic bacteria (e.g., Pseudomonas aeruginosa and NTM), which can survive and persist in biofilms attached to these components. [914][915][916][917] Colonization of the reservoirs and water lines of the AER becomes problematic if the required cleaning, disinfection, and maintenance are not performed on the equipment as recommended by the manufacturer. 669,916,917 Use of the 0.1-0.2-μm filter in the water line helps to keep bacterial contamination to a minimum, 670,911,917 but filters may fail and allow bacteria to pass through to the equipment and then to the instrument undergoing reprocessing. 671-674, 913, 918 Filters also require maintenance for proper performance. 670,911,912,918,919 Heightened awareness of the proper disinfection of the connectors that hook the instrument to the AER may help to further reduce the potential for contaminating endoscopes during reprocessing. 920 An emerging issue in the field of endoscopy is the possible role of rinse water monitoring and its potential to help reduce endoscopy/bronchoscopy-associated infections.
918 Studies have linked deficiencies in endoscope cleaning and/or disinfecting processes to the incidence of post-endoscopic adverse outcomes. [921][922][923][924] Several clusters have been traced to AERs of older designs, and these clusters were associated with water quality. 675,[914][915][916] Regardless of whether manual or automated terminal reprocessing is used for endoscopes, the internal channels of the instrument should be dried before storage. 925 The presence of residual moisture in the internal channels encourages the proliferation of waterborne microorganisms, some of which may be pathogenic. One of the most frequently used methods employs 70% isopropyl alcohol to flush the internal channels, followed by forced air drying of these channels and hanging the endoscope vertically in a protected cabinet; this method ensures internal drying of the endoscope, lessens the potential for proliferation of waterborne microorganisms, 669,913,917,922,926,927 and is consistent with professional organization guidance for endoscope reprocessing. 928 An additional problem with waterborne microbial contamination of AERs centers on increased microbial resistance to alkaline glutaraldehyde, a widely used liquid chemical sterilant/high-level disinfectant. 669,929 Opportunistic waterborne microorganisms (e.g., Mycobacterium chelonae, Methylobacterium spp.) have been associated with pseudo-outbreaks and colonization; infection caused by these organisms has been associated with procedures conducted in clinical settings (e.g., bronchoscopy). 669,913,[929][930][931] Increasing microbial resistance to glutaraldehyde has been attributed to improper use of the disinfectant in the equipment, allowing the dilution of glutaraldehyde to fall below the manufacturer's recommended minimal use concentration. 929

# b. Dental Unit Water Lines

Dental unit water lines (DUWLs) consist of small-bore plastic tubing that delivers water used for general, non-surgical irrigation and as a coolant to dental handpieces, sonic and ultrasonic scalers, and air-water syringes; municipal tap water is the source water for these lines. The presence of biofilms of waterborne bacteria and fungi (e.g., Legionella spp., Pseudomonas aeruginosa, and NTM) in DUWLs has been established. 636,637,694,695,[932][933][934] Biofilms continually release planktonic microorganisms into the water, the titers of which can exceed 1×10⁶ CFU/mL. 694 However, scientific evidence indicates that immunocompetent persons are at only minimal risk for substantial adverse health effects after contact with water from a dental unit. Nonetheless, exposing patients or dental personnel to water of uncertain microbiological quality is not consistent with universally accepted infection-control principles. 935 In 1993, CDC issued guidelines relative to water quality in a dental setting. These guidelines recommend that all dental instruments that use water (including high-speed handpieces) should be run to discharge water for 20-30 seconds after each patient and for several minutes before the start of each clinic day. 936 This practice can help to flush out any patient materials that may have entered the turbine, air, or waterlines. 937,938 The 1993 guidance also indicated that waterlines be flushed at the beginning of the clinic day. Although these guidelines are designed to help reduce the number of microorganisms present in treatment water, they do not address the issue of reducing or preventing biofilm formation in the waterlines.
Research published subsequent to the 1993 dental infection control guideline suggests that flushing the lines at the beginning of the day has only minimal effect on the status of the biofilm in the lines and does not reliably improve the quality of water during dental treatment. [939][940][941] Updated recommendations on infection-control practices for water line use in dentistry will be available in late 2003. 942 The numbers of microorganisms in water used as coolant or irrigant for non-surgical dental treatment should be as low as reasonably achievable and, at a minimum, should meet nationally recognized standards for safe drinking water. 935,943 Only minimal evidence suggests that water meeting drinking water standards poses a health hazard for immunocompetent persons. The EPA, the American Public Health Association (APHA), and the American Water Works Association (AWWA) have set a maximum limit of 500 CFU/mL for aerobic, heterotrophic, mesophilic bacteria in drinking water in municipal distribution systems. 944,945 This standard is achievable, given improvements in water-line technology. Dentists should consult with the manufacturer of their dental unit to determine the best equipment and method for maintaining and monitoring good water quality. 935,946

# E. Environmental Services

# 1. Principles of Cleaning and Disinfecting Environmental Surfaces

Although microbiologically contaminated surfaces can serve as reservoirs of potential pathogens, these surfaces generally are not directly associated with transmission of infections to either staff or patients. The transferral of microorganisms from environmental surfaces to patients is largely via hand contact with the surface. 947,948 Although hand hygiene is important to minimize the impact of this transfer, cleaning and disinfecting environmental surfaces as appropriate is fundamental in reducing their potential contribution to the incidence of healthcare-associated infections. The principles of cleaning and disinfecting environmental surfaces take into account the intended use of the surface or item in patient care. CDC retains the Spaulding classification for medical and surgical instruments, which outlines three categories based on the potential for the instrument to transmit infection if the instrument is microbiologically contaminated before use. 949,950 These categories are "critical," "semicritical," and "noncritical." In 1991, CDC proposed adding a category designated "environmental surfaces" to Spaulding's original classification 951 to represent surfaces that generally do not come into direct contact with patients during care. Environmental surfaces carry the least risk of disease transmission and can be safely decontaminated using less rigorous methods than those used on medical instruments and devices. Environmental surfaces can be further divided into medical equipment surfaces (e.g., knobs or handles on hemodialysis machines, x-ray machines, instrument carts, and dental units) and housekeeping surfaces (e.g., floors, walls, and tabletops). 951 The following factors influence the choice of disinfection procedure for environmental surfaces: a. the nature of the item to be disinfected, b. the number of microorganisms present, c. the innate resistance of those microorganisms to the inactivating effects of the germicide, d. the amount of organic soil present, e. the type and concentration of germicide used, f. duration and temperature of germicide contact, and g.
if using a proprietary product, other specific indications and directions for use. 952,953 Cleaning is the necessary first step of any sterilization or disinfection process. Cleaning is a form of decontamination that renders the environmental surface safe to handle or use by removing organic matter, salts, and visible soils, all of which interfere with microbial inactivation. [954][955][956][957][958][959][960] The physical action of scrubbing with detergents and surfactants and rinsing with water removes large numbers of microorganisms from surfaces. 957 If the surface is not cleaned before the terminal reprocessing procedures are started, the success of the sterilization or disinfection process is compromised. Spaulding proposed three levels of disinfection for the treatment of devices and surfaces that do not require sterility for safe use. These disinfection levels are "high-level," "intermediate-level," and "low-level." 949,950 The basis for these levels is that microorganisms can usually be grouped according to their innate resistance to a spectrum of physical or chemical germicidal agents (Table 22). This information, coupled with the instrument/surface classification, determines the appropriate level of terminal disinfection for an instrument or surface. The process of high-level disinfection, an appropriate standard of treatment for heat-sensitive, semicritical medical instruments (e.g., flexible, fiberoptic endoscopes), inactivates all vegetative bacteria, mycobacteria, viruses, fungi, and some bacterial spores. High-level disinfection is accomplished with powerful, sporicidal chemicals (e.g., glutaraldehyde, peracetic acid, and hydrogen peroxide) that are not appropriate for use on housekeeping surfaces. These liquid chemical sterilants/high-level disinfectants are highly toxic. [961][962][963] Use of these chemicals for applications other than those indicated in their label instructions (i.e., as immersion chemicals for treating heat-sensitive medical instruments) is not appropriate. 964 Intermediate-level disinfection does not necessarily kill bacterial spores, but it does inactivate Mycobacterium tuberculosis var. bovis, which is substantially more resistant to chemical germicides than ordinary vegetative bacteria, fungi, and medium to small viruses (with or without lipid envelopes). Chemical germicides with sufficient potency to achieve intermediate-level disinfection include chlorine-containing compounds (e.g., sodium hypochlorite), alcohols, some phenolics, and some iodophors. Low-level disinfection inactivates vegetative bacteria, fungi, enveloped viruses (e.g., human immunodeficiency virus [HIV] and influenza viruses), and some non-enveloped viruses (e.g., adenoviruses). Low-level disinfectants include quaternary ammonium compounds, some phenolics, and some iodophors. Sanitizers are agents that reduce the numbers of bacterial contaminants to safe levels as judged by public health requirements, and are used in cleaning operations, particularly in food service and dairy applications. Germicidal chemicals that have been approved by FDA as skin antiseptics are not appropriate for use as environmental surface disinfectants. 951 The selection and use of chemical germicides are largely matters of judgment, guided by product label instructions, information, and regulations.
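To make the pairing of disinfection levels with the example agents named above concrete, the following Python sketch encodes the taxonomy as stated in this section. It is a summary aid only; the dictionary and function names are illustrative, and germicide selection in practice is governed by product labels and regulations, as the text emphasizes.

```python
# Summary aid: the disinfection levels and example agents named in this
# section. Names are illustrative; actual germicide selection follows
# product label instructions and applicable regulations.

DISINFECTION_LEVELS = {
    # high-level: heat-sensitive semicritical instruments (e.g., endoscopes);
    # these sporicidal chemicals are not for housekeeping surfaces
    "high-level": ["glutaraldehyde", "peracetic acid", "hydrogen peroxide"],
    # intermediate-level: tuberculocidal; inactivates M. tuberculosis var. bovis
    "intermediate-level": ["chlorine-containing compounds (e.g., sodium hypochlorite)",
                           "alcohols", "some phenolics", "some iodophors"],
    # low-level: vegetative bacteria, fungi, enveloped and some non-enveloped viruses
    "low-level": ["quaternary ammonium compounds",
                  "some phenolics", "some iodophors"],
}

def example_agents(level: str) -> list:
    """Look up the example agents listed in the text for a disinfection level."""
    return DISINFECTION_LEVELS[level]

print(example_agents("intermediate-level"))
```

The ordering also reflects the resistance hierarchy described above: a germicide potent enough for the more resistant organisms at one level is considered capable of inactivating the less resistant organisms below it.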
Liquid sterilant chemicals and high-level disinfectants intended for use on critical and semi-critical medical/dental devices and instruments are regulated exclusively by the FDA as a result of recent memoranda of understanding between FDA and the EPA that delineate agency authority for chemical germicide regulation. 965,966 A common misconception in the use of surface disinfectants in health-care settings relates to the underlying purpose for use of proprietary products labeled as a "tuberculocidal" germicide. Such products will not interrupt or prevent the transmission of TB in health-care settings because TB is not acquired from environmental surfaces. The tuberculocidal claim is used as a benchmark by which to measure germicidal potency. Because mycobacteria have the highest intrinsic level of resistance among the vegetative bacteria, viruses, and fungi, any germicide with a tuberculocidal claim on the label (i.e., an intermediate-level disinfectant) is considered capable of inactivating a broad spectrum of pathogens, including much less resistant organisms such as the bloodborne pathogens (e.g., hepatitis B virus [HBV], hepatitis C virus [HCV], and HIV). It is this broad-spectrum capability, rather than the product's specific potency against mycobacteria, that is the basis for protocols and OSHA regulations indicating the appropriateness of using tuberculocidal chemicals for surface disinfection. 967

# General Cleaning Strategies for Patient-Care Areas

The number and types of microorganisms present on environmental surfaces are influenced by the following factors: a. number of people in the environment, b. amount of activity, c. amount of moisture, d. presence of material capable of supporting microbial growth, e. rate at which organisms suspended in the air are removed, and f. type of surface and orientation [i.e., horizontal or vertical]. 968 Strategies for cleaning and disinfecting surfaces in patient-care areas take into account a. potential for direct patient contact, b. degree and frequency of hand contact, and c. potential contamination of the surface with body substances or environmental sources of microorganisms (e.g., soil, dust, and water).

# a. Cleaning of Medical Equipment

Manufacturers of medical equipment should provide care and maintenance instructions specific to their equipment. These instructions should include information about a. the equipment's compatibility with chemical germicides, b. whether the equipment is water-resistant or can be safely immersed for cleaning, and c. how the equipment should be decontaminated if servicing is required. 967 In the absence of manufacturers' instructions, non-critical medical equipment (e.g., stethoscopes, blood pressure cuffs, dialysis machines, and equipment knobs and controls) usually only requires cleaning followed by low- to intermediate-level disinfection, depending on the nature and degree of contamination. Ethyl alcohol or isopropyl alcohol in concentrations of 60%-90% (v/v) is often used to disinfect small surfaces (e.g., rubber stoppers of multiple-dose medication vials and thermometers) 952, 969 and occasionally external surfaces of equipment (e.g., stethoscopes and ventilators). However, alcohol evaporates rapidly, which makes extended contact times difficult to achieve unless items are immersed, a factor that precludes its practical use as a large-surface disinfectant.
951 Alcohol may cause discoloration, swelling, hardening, and cracking of rubber and certain plastics after prolonged and repeated use and may damage the shellac mounting of lenses in medical equipment. 970 Barrier protection of surfaces and equipment is useful, especially if these surfaces are a. touched frequently by gloved hands during the delivery of patient care, b. likely to become contaminated with body substances, or c. difficult to clean. Impervious-backed paper, aluminum foil, and plastic or fluid-resistant covers are suitable for use as barrier protection. An example of this approach is the use of plastic wrapping to cover the handle of the operatory light in dental-care settings. 936,942 Coverings should be removed and discarded while the health-care worker is still gloved. 936,942 The health-care worker, after ungloving and performing hand hygiene, must cover these surfaces with clean materials before the next patient encounter.

# b. Cleaning Housekeeping Surfaces

Housekeeping surfaces require regular cleaning and removal of soil and dust. Dry conditions favor the persistence of gram-positive cocci (e.g., coagulase-negative Staphylococcus spp.) in dust and on surfaces, whereas moist, soiled environments favor the growth and persistence of gram-negative bacilli. 948,971,972 Fungi are also present on dust and proliferate in moist, fibrous material. Most, if not all, housekeeping surfaces need to be cleaned only with soap and water or a detergent/disinfectant, depending on the nature of the surface and the type and degree of contamination. Cleaning and disinfection schedules and methods vary according to the area of the health-care facility, type of surface to be cleaned, and the amount and type of soil present. Disinfectant/detergent formulations registered by EPA are used for environmental surface cleaning, but the actual physical removal of microorganisms and soil by wiping or scrubbing is probably as important as, if not more important than, any antimicrobial effect of the cleaning agent used. 973 Therefore, cost, safety, product-surface compatibility, and acceptability by housekeepers can be the main criteria for selecting a registered agent. If using a proprietary detergent/disinfectant, the manufacturers' instructions for appropriate use of the product should be followed. 974 Consult the products' material safety data sheets (MSDS) to determine appropriate precautions to prevent hazardous conditions during product application. Personal protective equipment (PPE) used during cleaning and housekeeping procedures should be appropriate to the task. Housekeeping surfaces can be divided into two groups -those with minimal hand-contact (e.g., floors and ceilings) and those with frequent hand-contact ("high-touch surfaces"). The methods, thoroughness, and frequency of cleaning and the products used are determined by health-care facility policy. 6 However, high-touch housekeeping surfaces in patient-care areas (e.g., doorknobs, bedrails, light switches, wall areas around the toilet in the patient's room, and the edges of privacy curtains) should be cleaned and/or disinfected more frequently than surfaces with minimal hand contact. Infection-control practitioners typically use a risk-assessment approach to identify high-touch surfaces and then coordinate an appropriate cleaning and disinfecting strategy and schedule with the housekeeping staff.
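The risk-assessment approach described above can be recorded as a simple surface inventory so that cleaning frequency follows hand-contact risk. The sketch below is a hypothetical illustration of such an inventory; the contact categories come from this section, but the surfaces listed and the schedule wording are placeholders that each facility's policy would set.

```python
# Hypothetical surface inventory pairing hand-contact risk with a cleaning
# schedule. Entries are placeholders; facility policy sets actual values.

SURFACE_INVENTORY = [
    # (surface, contact level)
    ("doorknobs", "high-touch"),
    ("bedrails", "high-touch"),
    ("light switches", "high-touch"),
    ("privacy curtain edges", "high-touch"),
    ("floors", "minimal-touch"),
    ("window sills", "minimal-touch"),
]

SCHEDULE = {
    "high-touch": "clean and/or disinfect more frequently, per facility policy",
    "minimal-touch": "clean on a regular schedule, when soiled, and at discharge",
}

for surface, contact in SURFACE_INVENTORY:
    print(f"{surface}: {SCHEDULE[contact]}")
```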
Horizontal surfaces with infrequent hand contact (e.g., window sills and hard-surface flooring) in routine patient-care areas require cleaning on a regular basis, when soiling or spills occur, and when a patient is discharged from the facility. 6 Regular cleaning of surfaces and decontamination, as needed, is also advocated to protect potentially exposed workers. 967 Cleaning of walls, blinds, and window curtains is recommended when they are visibly soiled. 972,973,975 Disinfectant fogging is not recommended for general infection control in routine patient-care areas. 2,976 Further, paraformaldehyde, which was once used in this application, is no longer registered by EPA for this purpose. Use of paraformaldehyde in these circumstances requires either registration or an exemption issued by EPA under the Federal Insecticide, Fungicide, and Rodenticide Act (FIFRA). Infection-control practitioners, industrial hygienists, and environmental services supervisors should assess the cleaning procedures, chemicals used, and the safety issues to determine if a temporary relocation of the patient is needed when cleaning in the room. Extraordinary cleaning and decontamination of floors in health-care settings are unwarranted. Studies have demonstrated that disinfection of floors offers no advantage over regular detergent/water cleaning and has minimal or no impact on the occurrence of health-care associated infections. 947,948,[977][978][979][980] Additionally, newly cleaned floors become rapidly recontaminated from airborne microorganisms and those transferred from shoes, equipment wheels, and body substances. 971,975,981 Nevertheless, health-care institutions or contracted cleaning companies may choose to use an EPA-registered detergent/disinfectant for cleaning low-touch surfaces (e.g., floors) in patient-care areas because of the difficulty that personnel may have in determining if a spill contains blood or body fluids (requiring a detergent/disinfectant for clean-up) or when a multi-drug resistant organism is likely to be in the environment. Methods for cleaning non-porous floors include wet mopping and wet vacuuming, dry dusting with electrostatic materials, and spray buffing. 973,[982][983][984] Methods that produce minimal mists and aerosols or dispersion of dust in patient-care areas are preferred. 9,20,109,272 Part of the cleaning strategy is to minimize contamination of cleaning solutions and cleaning tools. Bucket solutions become contaminated almost immediately during cleaning, and continued use of the solution transfers increasing numbers of microorganisms to each subsequent surface to be cleaned. 971,981,985 Cleaning solutions should be replaced frequently. A variety of "bucket" methods have been devised to address the frequency with which cleaning solutions are replaced. 986,987 Another source of contamination in the cleaning process is the cleaning cloth or mop head, especially if left soaking in dirty cleaning solutions. 971,[988][989][990] Laundering of cloths and mop heads after use and allowing them to dry before re-use can help to minimize the degree of contamination. 990 A simplified approach to cleaning involves replacing soiled cloths and mop heads with clean items each time a bucket of detergent/disinfectant is emptied and replaced with fresh, clean solution (B. Stover, Kosair Children's Hospital, 2000). Disposable cleaning cloths and mop heads are an alternative option, if costs permit.
Another reservoir for microorganisms in the cleaning process may be dilute solutions of the detergents or disinfectants, especially if the working solution is prepared in a dirty container, stored for long periods of time, or prepared incorrectly. 547 Gram-negative bacilli (e.g., Pseudomonas spp. and Serratia marcescens) have been detected in solutions of some disinfectants (e.g., phenolics and quaternary ammonium compounds). 547,991 Contemporary EPA registration regulations have helped to minimize this problem by asking manufacturers to provide potency data to support label claims for detergent/disinfectant properties under real-use conditions (e.g., diluting the product with tap water instead of distilled water). Application of contaminated cleaning solutions, particularly from small-quantity aerosol spray bottles or with equipment that might generate aerosols during operation, should be avoided, especially in high-risk patient areas. 992,993 Making sufficient fresh cleaning solution for daily cleaning, discarding any remaining solution, and drying out the container will help to minimize the degree of bacterial contamination. Containers that dispense liquid as opposed to spray-nozzle dispensers (e.g., quart-sized dishwashing liquid bottles) can be used to apply detergent/disinfectants to surfaces and then to cleaning cloths with minimal aerosol generation. A pre-mixed, "ready-to-use" detergent/disinfectant solution may be used if available.

# c. Cleaning Special Care Areas

Guidelines have been published regarding cleaning strategies for isolation areas and operating rooms. 6,7 The basic strategies for areas housing immunosuppressed patients include a. wet dusting horizontal surfaces daily with cleaning cloths pre-moistened with detergent or an EPA-registered hospital disinfectant or disinfectant wipes; 94, 98463 b. using care when wet dusting equipment and surfaces above the patient to avoid patient contact with the detergent/disinfectant; c. avoiding the use of cleaning equipment that produces mists or aerosols; d. equipping vacuums with HEPA filters, especially for the exhaust, when used in any patient-care area housing immunosuppressed patients; 9, 94, 986 and e. regular cleaning and maintenance of equipment to ensure efficient particle removal. When preparing the cleaning cloths for wet-dusting, freshly prepared solutions of detergents or disinfectants should be used rather than cloths that have soaked in such solutions for long periods of time. Dispersal of microorganisms in the air from dust or aerosols is more problematic in these settings than elsewhere in health-care facilities. Vacuum cleaners can serve as dust disseminators if they are not operating properly. 994 Doors to immunosuppressed patients' rooms should be closed when nearby areas are being vacuumed. 9 Bacterial and fungal contamination of filters in cleaning equipment is inevitable, and these filters should be cleaned regularly or replaced as per equipment manufacturer instructions. Mats with tacky surfaces placed in operating rooms and other patient-care areas only slightly minimize the overall degree of contamination of floors and have little impact on the incidence rate of health-care associated infection in general. 351,971,983 An exception, however, is the use of tacky mats inside the entry ways of cordoned-off construction areas inside the health-care facility; these mats help to minimize the intrusion of dust into patient-care areas.
Special precautions for cleaning incubators, mattresses, and other nursery surfaces have been recommended to address reports of hyperbilirubinemia in newborns linked to inadequately diluted solutions of phenolics and poor ventilation. [995][996][997] These medical conditions have not, however, been associated with the use of properly prepared solutions of phenolics. Non-porous housekeeping surfaces in neonatal units can be disinfected with properly diluted or pre-mixed phenolics, followed by rinsing with clean water. 997 However, phenolics are not recommended for cleaning infant bassinets and incubators during the stay of the infant. Infants who remain in the nursery for an extended period should be moved periodically to freshly cleaned and disinfected bassinets and incubators. 997 If phenolics are used for cleaning bassinets and incubators after they have been vacated, the surfaces should be rinsed thoroughly with water and dried before either piece of equipment is reused. Cleaning and disinfecting protocols should allow for the full contact time specified for the product used. Bassinet mattresses should be replaced, however, if the mattress cover surface is broken. 997

# Cleaning Strategies for Spills of Blood and Body Substances

Neither HBV, HCV, nor HIV has ever been transmitted from a housekeeping surface (i.e., floors, walls, or countertops). Nonetheless, prompt removal and surface disinfection of an area contaminated by either blood or body substances are sound infection-control practices and OSHA requirements. 967 Studies have demonstrated that HIV is inactivated rapidly after being exposed to commonly used chemical germicides at concentrations that are much lower than those used in practice. [998][999][1000][1001][1002][1003] HBV is readily inactivated with a variety of germicides, including quaternary ammonium compounds. 1004 Embalming fluids (e.g., formaldehyde) are also capable of completely inactivating HIV and HBV. 1005,1006 OSHA has revised its regulation for disinfecting spills of blood or other potentially infectious material to include proprietary products whose label includes inactivation claims for HBV and HIV, provided that such surfaces have not become contaminated with agent(s) or volumes of or concentrations of agent(s) for which a higher level of disinfection is recommended. 1007 These registered products are listed in EPA's List D -Registered Antimicrobials Effective Against Hepatitis B Virus and Human HIV-1, which may include products tested against duck hepatitis B virus (DHBV) as a surrogate for HBV. 1008,1009 Additional lists of interest include EPA's List C -Registered Antimicrobials Effective Against Human HIV-1 and EPA's List E -Registered Antimicrobials Effective Against Mycobacterium spp., Hepatitis B Virus, and Human HIV-1. Sodium hypochlorite solutions are inexpensive and effective broad-spectrum germicides. 1010,1011 Generic sources of sodium hypochlorite include household chlorine bleach or reagent grade chemical. Concentrations of sodium hypochlorite solutions with a range of 5,000-6,150 ppm (1:10 v/v dilution of household bleaches marketed in the United States) to 500-615 ppm (1:100 v/v dilution) free chlorine are effective depending on the amount of organic material (e.g., blood, mucus, and urine) present on the surface to be cleaned and disinfected. 1010, 1011 EPA-registered chemical germicides may be more compatible with certain materials that could be corroded by repeated exposure to sodium hypochlorite, especially the 1:10 dilution.
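The dilution figures above follow directly from the strength of the stock bleach: household bleach at roughly 5.0%-6.15% sodium hypochlorite corresponds to 50,000-61,500 ppm, so a 1:10 dilution yields about 5,000-6,150 ppm and a 1:100 dilution about 500-615 ppm. The short Python sketch below shows the arithmetic; it also reproduces the ice-machine sanitizing range in Box 12 (4-8 mL of bleach per gallon gives roughly 50-110 ppm). It is a worked-math illustration under an assumed 5.25% stock strength, not a preparation protocol, and the function names are illustrative.

```python
# Worked arithmetic for the hypochlorite concentrations quoted in this
# section. Function names are illustrative; not a preparation protocol.

STOCK_PPM = 52_500       # assume ~5.25% household bleach = 52,500 ppm
ML_PER_GALLON = 3_785.0  # one U.S. gallon in milliliters

def diluted_ppm(dilution_factor: int, stock_ppm: float = STOCK_PPM) -> float:
    """Concentration after a 1:N (v/v) dilution of stock bleach."""
    return stock_ppm / dilution_factor

def ml_per_gallon_ppm(ml_bleach: float, stock_ppm: float = STOCK_PPM) -> float:
    """Concentration when ml_bleach of stock is added to one gallon of water."""
    return stock_ppm * ml_bleach / (ML_PER_GALLON + ml_bleach)

print(diluted_ppm(10))         # ~5,250 ppm -- the 1:10 spill dilution
print(diluted_ppm(100))        # ~525 ppm   -- the 1:100 spill dilution
print(ml_per_gallon_ppm(4.0))  # ~55 ppm    -- low end of Box 12's range
print(ml_per_gallon_ppm(8.0))  # ~111 ppm   -- high end of Box 12's range
```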
Appropriate personal protective equipment (e.g., gloves and goggles) should be worn when preparing and using hypochlorite solutions or other chemical germicides. 967 Despite laboratory evidence demonstrating adequate potency against bloodborne pathogens (e.g., HIV and HBV), many chlorine bleach products available in grocery and chemical-supply stores are not registered by the EPA for use as surface disinfectants. Use of these chlorine products as surface disinfectants is considered by the EPA to be an "unregistered use." EPA encourages the use of registered products because the agency reviews them for safety and performance when the product is used according to label instructions. When unregistered products are used for surface disinfection, users do so at their own risk. Strategies for decontaminating spills of blood and other body fluids differ based on the setting in which they occur and the volume of the spill. 1010 In patient-care areas, workers can manage small spills by cleaning and then disinfecting with an intermediate-level germicide or an EPA-registered germicide from EPA List D or E. 967,1007 For spills containing large amounts of blood or other body substances, workers should first remove visible organic matter with absorbent material (e.g., disposable paper towels discarded into leak-proof, properly labeled containment) and then clean and decontaminate the area. 1002,1003,1012 If the surface is nonporous and a generic form of a sodium hypochlorite solution is used (e.g., household bleach), a 1:100 dilution is appropriate for decontamination, assuming that
a. the worker assigned to clean the spill is wearing gloves and other personal protective equipment appropriate to the task,
b. most of the organic matter of the spill has been removed with absorbent material, and
c. the surface has been cleaned to remove residual organic matter.
A recent study demonstrated that even strong chlorine solutions (i.e., a 1:10 dilution of chlorine bleach) may fail to totally inactivate high titers of virus in large quantities of blood, but in the absence of blood these disinfectants can achieve complete viral inactivation. 1011 This evidence supports the need to remove most organic matter from a large spill before final disinfection of the surface. Additionally, EPA-registered proprietary disinfectant label claims are based on use on a pre-cleaned surface. 951,954 Managing spills of blood, body fluids, or other infectious materials in clinical, public health, and research laboratories requires more stringent measures because of
a. the higher potential risk of disease transmission associated with large volumes of blood and body fluids, and
b. the high numbers of microorganisms associated with diagnostic cultures.
The use of an intermediate-level germicide for routine decontamination in the laboratory is prudent. 954 Recommended practices for managing large spills of concentrated infectious agents in the laboratory include
a. confining the contaminated area,
b. flooding the area with a liquid chemical germicide before cleaning, and
c. decontaminating with fresh germicidal chemical of at least intermediate-level disinfectant potency. 1010
A suggested technique when flooding the spill with germicide is to lay absorbent material down on the spill and apply sufficient germicide to thoroughly wet both the spill and the absorbent material. 1013 If using a solution of household chlorine bleach, a 1:10 dilution is recommended for this purpose.
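For training or checklist purposes, the spill-response logic above can be summarized schematically. The fragment below is a non-authoritative sketch: the function name and the setting categories are invented for illustration, and the dilutions are simply those cited in the text.

```python
# Non-authoritative sketch of the spill-response logic described above.
# Function and category names are illustrative, not from the guideline.

def bleach_dilution_for_spill(setting: str, gross_matter_removed: bool) -> str:
    """Return the generic sodium hypochlorite dilution suggested in the text."""
    if setting == "laboratory":
        # Large spills of concentrated agents: flood with a 1:10 dilution first.
        return "1:10 v/v (~5,000-6,150 ppm free chlorine)"
    if not gross_matter_removed:
        raise ValueError("Remove visible organic matter with absorbent material first.")
    # Pre-cleaned, nonporous surface in a patient-care area.
    return "1:100 v/v (~500-615 ppm free chlorine)"

print(bleach_dilution_for_spill("patient-care area", gross_matter_removed=True))
```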
EPA-registered germicides should be used according to the manufacturers' instructions for use-dilution and contact time. Gloves should be worn during the cleaning and decontamination procedures in both clinical and laboratory settings. PPE in such a situation may include the use of respiratory protection (e.g., an N95 respirator) if clean-up procedures are expected to generate infectious aerosols. Protocols for cleaning spills should be developed and kept on record as part of good laboratory practice. 1013 Workers in laboratories and in patient-care areas of the facility should receive periodic training in environmental-surface infection-control strategies and procedures as part of an overall infection-control and safety curriculum.

# Carpeting and Cloth Furnishings

# a. Carpeting

Carpeting has been used for more than 30 years in both public and patient-care areas of health-care facilities. Advantages of carpeting in patient-care areas include
a. its noise-limiting characteristics;
b. its "humanizing" effect on health care; and
c. its contribution to reductions in falls and resultant injuries, particularly for the elderly. 1014-1016
Compared with hard-surface flooring, however, carpeting is harder to keep clean, especially after spills of blood and body substances. It is also harder to push equipment with wheels (e.g., wheelchairs, carts, and gurneys) on carpeting. Several studies have documented the presence of diverse microbial populations, primarily bacteria and fungi, in carpeting; 111, 1017-1024 the variety and number of microorganisms tend to stabilize over time. New carpeting quickly becomes colonized, with bacterial growth plateauing after about 4 weeks. 1019 Vacuuming and cleaning the carpeting can temporarily reduce the numbers of bacteria, but these populations soon rebound and return to pre-cleaning levels. 1019,1020,1023 Bacterial contamination tends to increase with higher levels of activity. 1018-1020,1025 Soiled carpeting that is or remains damp or wet provides an ideal setting for the proliferation and persistence of gram-negative bacteria and fungi. 1026 Carpeting that remains damp should be removed, ideally within 72 hours. Despite the evidence of bacterial growth and persistence in carpeting, only limited epidemiologic evidence demonstrates that carpets influence health-care-associated infection rates in areas housing immunocompetent patients. 1023,1025,1027 This guideline, therefore, includes no recommendations against the use of carpeting in these areas. Nonetheless, avoiding the use of carpeting is prudent in areas where spills are likely to occur (e.g., laboratories, areas around sinks, and janitor closets) and where patients may be at greater risk of infection from airborne environmental pathogens (e.g., HSCT units, burn units, ICUs, and ORs). 111,1028 An outbreak of aspergillosis in an HSCT unit was recently attributed to carpet contamination and a particular method of carpet cleaning. 111 A window in the unit had been opened repeatedly during the time of a nearby building fire, which allowed fungal spore intrusion into the unit. After the window was sealed, the carpeting was cleaned using a "bonnet buffing" machine, which dispersed Aspergillus spores into the air. 111 Wet vacuuming was instituted, replacing the dry-cleaning method used previously; no additional cases of invasive aspergillosis were identified.
The care setting and the method of carpet cleaning are important factors to consider when attempting to minimize or prevent production of aerosols and dispersal of carpet microorganisms into the air. 94,111 Both vacuuming and shampooing or wet cleaning with equipment can disperse microorganisms to the air. 111,994 Vacuum cleaners should be maintained to minimize dust dispersal in general and should be equipped with HEPA filters, especially for use in high-risk patient-care areas. 9, 94, 986 Some formulations of carpet-cleaning chemicals, if applied or used improperly, can be dispersed into the air as a fine dust capable of causing respiratory irritation in patients and staff. 1029 Cleaning equipment, especially equipment used for wet cleaning and extraction, can become contaminated with waterborne organisms (e.g., Pseudomonas aeruginosa) and serve as a reservoir for these organisms if it is not properly maintained. Substantial numbers of bacteria can then be transferred to carpeting during the cleaning process. 1030 Therefore, keeping carpet-cleaning equipment in good repair and allowing such equipment to dry between uses is prudent. Carpet cleaning should be performed on a regular basis determined by internal policy. Although spills of blood and body substances on non-porous surfaces require prompt spot cleaning using standard cleaning procedures and application of chemical germicides, 967 similar decontamination approaches to blood and body substance spills on carpeting can be problematic from a regulatory perspective. 1031 Most, if not all, modern carpet brands suitable for public facilities can tolerate the activity of a variety of liquid chemical germicides. However, according to OSHA, carpeting contaminated with blood or other potentially infectious materials cannot be fully decontaminated. 1032 Therefore, facilities electing to use carpeting in high-activity patient-care areas may choose carpet tiles for areas at high risk for spills. 967,1032 In the event of contamination with blood or other body substances, carpet tiles can be removed, discarded, and replaced. OSHA also acknowledges that only minimal direct skin contact occurs with carpeting, and therefore employers are expected to make reasonable efforts to clean and sanitize carpeting using carpet detergent/cleaner products. 1032 Over the last few years, some carpet manufacturers have treated their products with fungicidal and/or bactericidal chemicals. Although these chemicals may help to reduce the overall numbers of bacteria or fungi present in carpet, their use does not preclude the routine care and maintenance of the carpeting. Limited evidence suggests that chemically treated carpet may have helped to keep health-care-associated aspergillosis rates low in one HSCT unit, 111 but overall, treated carpeting has not been shown to reduce the incidence of health-care-associated infections in care areas for immunocompetent patients.

# b. Cloth Furnishings

Upholstered furniture and furnishings are becoming increasingly common in patient-care areas. These furnishings range from simple cloth chairs in patients' rooms to a complete decorating scheme that gives the interior of the facility more the look of an elegant hotel. 1033 Even though pathogenic microorganisms have been isolated from the surfaces of cloth chairs, no epidemiologic evidence suggests that general patient-care areas with cloth furniture pose increased risks of health-care-associated infection compared with areas that contain hard-surfaced furniture. 1034,1035
Allergens (e.g., dog and cat dander) have been detected in or on cloth furniture in clinics and elsewhere in hospitals, in concentrations higher than those found on bed linens. 1034,1035 These allergens presumably are transferred from the clothing of visitors. Researchers have therefore suggested that cloth chairs should be vacuumed regularly to keep dust and allergen levels to a minimum. This recommendation, however, has generated concerns that aerosols created from vacuuming could place immunocompromised patients or patients with preexisting lung disease (e.g., asthma) at risk for development of health-care-associated, environmental airborne disease. 9, 20, 109, 988 Recovering worn, upholstered furniture (especially the seat cushion) with covers that are easily cleaned (e.g., vinyl), or replacing the item, is prudent; minimizing the use of upholstered furniture and furnishings in any patient-care areas where immunosuppressed patients are located (e.g., HSCT units) reduces the likelihood of disease. 9

# Flowers and Plants in Patient-Care Areas

Fresh flowers, dried flowers, and potted plants are common items in health-care facilities. In 1974, clinicians isolated an Erwinia sp. post mortem from a neonate diagnosed with fulminant septicemia, meningitis, and respiratory distress syndrome. 1038 Because Erwinia spp. are plant pathogens, plants brought into the delivery room were suspected to be the source of the bacteria, although the case report did not definitively establish a direct link. Several subsequent studies evaluated the numbers and diversity of microorganisms in the vase water of cut flowers. These studies revealed that high concentrations of bacteria, ranging from 10⁴-10¹⁰ CFU/mL, were often present, especially if the water was changed infrequently. 515,702,1039 The major group of microorganisms in flower-vase water was gram-negative bacteria, with Pseudomonas aeruginosa the most frequently isolated organism. 515, 702, 1039, 1040 P. aeruginosa was also the primary organism directly isolated from chrysanthemums and other potted plants. 1041,1042 However, flowers in hospitals were not significantly more contaminated with bacteria compared with flowers in restaurants or in the home. 702 Additionally, no differences in the diversity and degree of antibiotic resistance of bacteria have been observed in samples isolated from hospital flowers versus those obtained from flowers elsewhere. 702 Despite the diversity and large numbers of bacteria associated with flower-vase water and potted plants, minimal or no evidence indicates that the presence of plants in immunocompetent patient-care areas poses an increased risk of health-care-associated infection. 515 In one study involving a limited number of surgical patients, no correlation was observed between bacterial isolates from flowers in the area and the incidence and etiology of postoperative infections among the patients. 1040 Similar conclusions were reached in a study that examined the bacteria found in potted plants. 1042 Nonetheless, some precautions for general patient-care settings should be implemented, including
a. limiting flower and plant care to staff with no direct patient contact,
b. advising health-care staff to wear gloves when handling plants,
c. washing hands after handling plants,
d. changing vase water every 2 days and discharging the water into a sink outside the immediate patient environment, and
e. cleaning and disinfecting vases after use. 702
Some researchers have examined the possibility of adding a chemical germicide to vase water to control bacterial populations. Certain chemicals (e.g., hydrogen peroxide and chlorhexidine) are well tolerated by plants. 1040,1043,1044 Use of these chemicals, however, has not been evaluated in studies assessing impact on health-care-associated infection rates. Modern florists now have a variety of products available to add to vase water to extend the life of cut flowers and to minimize bacterial clouding of the water. Flowers (fresh and dried) and ornamental plants, however, may serve as a reservoir of Aspergillus spp., and dispersal of conidiospores into the air from this source can occur. 109 Health-care-associated outbreaks of invasive aspergillosis reinforce the importance of maintaining an environment as free of Aspergillus spp. spores as possible for patients with severe, prolonged neutropenia. Potted plants, fresh-cut flowers, and dried flower arrangements may provide a reservoir for these fungi as well as for other fungal species (e.g., Fusarium spp.). 109,1045,1046 Researchers in one study of bacteria and flowers suggested that flowers and vase water should be avoided in areas providing care to medically at-risk patients (e.g., oncology patients and transplant patients), although this study did not attempt to correlate the observations of bacterial populations in the vase water with the incidence of health-care-associated infections. 515 Another study using molecular epidemiology techniques demonstrated identical Aspergillus terreus types among environmental and clinical specimens isolated from infected patients with hematological malignancies. 1046 Therefore, attempts should be made to exclude flowers and plants from areas where immunosuppressed patients are located (e.g., HSCT units). 9, 1046

# Pest Control

Cockroaches, flies and maggots, ants, mosquitoes, spiders, mites, midges, and mice are among the typical arthropod and vertebrate pest populations found in health-care facilities. Insects can serve as agents for the mechanical transmission of microorganisms or as active participants in the disease transmission process by serving as vectors. 1047-1049 Arthropods recovered from health-care facilities have been shown to carry a wide variety of pathogenic microorganisms. 1050-1056 Studies have suggested that the diversity of microorganisms associated with insects reflects the microbial populations present in the indoor health-care environment; some pathogens encountered in insects from hospitals were either absent from, or present to a lesser degree in, insects trapped in residential settings. 1057-1060 Some of the microbial populations associated with insects in hospitals have demonstrated resistance to antibiotics. 1048,1059,1061-1063 Insect habitats are characterized by warmth, moisture, and availability of food. 1064 Insects forage in and feed on substrates, including but not limited to food scraps from kitchens/cafeterias, foods in vending machines, discharges on dressings either in use or discarded, other forms of human detritus, medical wastes, human wastes, and routine solid waste. 1057-1061 Cockroaches, in particular, have been known to feed on fixed sputum smears in laboratories. 1065,1066
Both cockroaches and ants are frequently found in the laundry, in central sterile supply departments, and anywhere in the facility where water or moisture is present (e.g., sink traps, drains, and janitor closets). Ants will often find their way into sterile packs of items as they forage in a warm, moist environment. 1057 Cockroaches and other insects frequent loading docks and other areas with direct access to the outdoors. Although insects carry a wide variety of pathogenic microorganisms on their surfaces and in their gut, the direct association of insects with disease transmission (apart from vector transmission) is limited, especially in health-care settings; the presence of insects in itself likely does not contribute substantially to health-care-associated disease transmission in developed countries. However, outbreaks of infection attributed to microorganisms carried by insects may occur because of infestation coupled with breaks in standard infection-control practices. 1063 Studies have been conducted to examine the role of houseflies as possible vectors for shigellosis and other forms of diarrheal disease in non-health-care settings. 1046, 1067 When control measures aimed at reducing the fly population density were implemented, a concomitant reduction in the incidence of diarrheal infections, carriage of Shigella organisms, and mortality caused by diarrhea among infants and young children was observed. Myiasis is defined as a parasitosis in which the larvae of any of a variety of flies use living or necrotic tissue or body substances of the host as a nutritional source. 1068 Larvae from health-care-acquired myiasis have been observed in nares, wounds, eyes, ears, sinuses, and the external urogenital structures. 1069-1071 Patients with this rare condition are typically older adults with underlying medical conditions (e.g., diabetes, chronic wounds, and alcoholism) who have a decreased capacity to ward off the flies. Persons with underlying conditions who live in or travel to tropical regions of the world are especially at risk. 1070,1071 Cases occur in the summer and early fall months in temperate climates, when flies are most active. 1071 An environmental assessment and a review of the patient's history are necessary to verify that the source of the myiasis is health-care acquired and to identify corrective measures. 1069,1072 Simple prevention measures (e.g., installing screens on windows) are important in reducing the incidence of myiasis. 1072 From a public health and hygiene perspective, arthropod and vertebrate pests should be eradicated from all indoor environments, including health-care facilities. 1073,1074 Modern approaches to institutional pest management usually focus on
a. eliminating food sources, indoor habitats, and other conditions that attract pests;
b. excluding pests from the indoor environment; and
c. applying pesticides as needed. 1075
Sealing windows in modern health-care facilities helps to minimize insect intrusion. When windows need to be opened for ventilation, ensuring that screens are in good repair and closing doors to the outside can help with pest control. Insects should be kept out of all areas of the health-care facility, especially ORs and any area where immunosuppressed patients are located. A pest-control specialist with appropriate credentials can provide a regular insect-control program that is tailored to the needs of the facility and uses approved chemicals and/or physical methods.
Industrial hygienists can provide information on possible adverse reactions of patients and staff to pesticides and suggest alternative methods for pest control, as needed.

# Special Pathogen Concerns

# a. Antibiotic-Resistant Gram-Positive Cocci

Antibiotic-resistant, gram-positive cocci (i.e., methicillin-resistant Staphylococcus aureus [MRSA], vancomycin-resistant enterococci [VRE], and S. aureus strains with intermediate resistance to vancomycin [VISA, also termed glycopeptide-intermediate S. aureus (GISA)]) represent crucial and growing concerns for infection control. Although the term GISA is technically a more accurate description of the strains isolated to date (most of which are classified as having intermediate resistance to both vancomycin and teicoplanin), the term "glycopeptide" may not be recognized by many clinicians. Thus, the label of VISA, which emphasizes a change in minimum inhibitory concentrations (MICs) to vancomycin, is similar to that of VRE and is more meaningful to clinicians. 1076 According to National Nosocomial Infection Surveillance (NNIS) statistics for infections acquired among ICU patients in the United States in 1999, 52.3% of infections resulting from S. aureus were identified as MRSA infections, and 25.2% of enterococcal infections were attributed to VRE. These figures reflect increases of 37% and 43%, respectively, since 1994-1998. 1077 People represent the primary reservoir of S. aureus. 1078 Although S. aureus has been isolated from a variety of environmental surfaces (e.g., stethoscopes, floors, charts, furniture, dry mops, and hydrotherapy tanks), the role of environmental contamination in transmission of this organism in health care appears to be minimal. 1079-1082 S. aureus contamination of surfaces and tanks within burn therapy units, however, may be a major factor in the transmission of infection among burn patients. 1083 Colonized patients are the principal reservoir of VRE, and patients who are immunosuppressed (e.g., transplant patients) or otherwise medically at risk (e.g., ICU patients, cardio-thoracic surgical patients, patients previously hospitalized for extended periods, and those having received multi-antimicrobial or vancomycin therapy) are at greatest risk for VRE colonization. 1084-1087 The mechanisms by which cross-colonization takes place are not well defined, although recent studies have indicated that both MRSA and VRE may be transmitted
a. directly from patient to patient,
b. indirectly by transient carriage on the hands of health-care workers, 1088-1091 or
c. by hand transfer of these gram-positive organisms from contaminated environmental surfaces and patient-care equipment. 1084,1087,1092-1097
In one survey, hand carriage of VRE in workers in a long-term care facility ranged from 13% to 41%. 1098 Many of the environmental surfaces found to be contaminated with VRE in outbreak investigations have been those that are touched frequently by the patient or the health-care worker. 1099 Such high-touch surfaces include bedrails, doorknobs, bed linens, gowns, overbed tables, blood pressure cuffs, computer tables, bedside tables, and various medical equipment. 22,1087,1094,1095,1100-1102 Contamination of environmental surfaces with VRE generally occurs in clinical laboratories and areas where colonized patients are present, 1087, 1092, 1094, 1095, 1103 but the potential for contamination increases when such patients have diarrhea 1087 or have multiple body-site colonization. 1104 Additional factors that can be important in the dispersion of these pathogens to environmental surfaces are misuse of glove techniques by health-care workers (especially when cleaning fecal contamination from surfaces) and patient, family, and visitor hygiene.
Interest in the importance of environmental reservoirs of VRE increased when laboratory studies demonstrated that enterococci can persist in a viable state on dry environmental surfaces for extended periods of time (7 days to 4 months) 1099,1105 and that multiple strains can be identified during extensive periods of surveillance. 1104 VRE can be recovered from inoculated hands of health-care workers (with or without gloves) for up to 60 minutes. 22 The presence of MRSA, VISA, or VRE on environmental surfaces, however, does not mean that patients in the contaminated areas will become colonized. Strict adherence to hand hygiene/handwashing and the proper use of barrier precautions help to minimize the potential for spread of these pathogens. Published recommendations for preventing the spread of vancomycin resistance address isolation measures, including patient cohorting and management of patient-care items. 5 Direct patient-care items (e.g., blood pressure cuffs) should be disposable whenever possible when used in contact isolation settings for patients with multiply resistant microorganisms. 1102 In 2002, the first two clinical infections caused by vancomycin-resistant S. aureus (VRSA) were documented in the United States. 1114,1115 These represented isolated cases, and neither the family members nor the health-care providers of these case-patients had evidence of colonization or infection with VRSA. Conventional environmental infection-control measures (i.e., cleaning and then disinfecting surfaces using EPA-registered disinfectants with label claims for S. aureus) were used during the environmental investigation of these two cases; 1110-1112 however, studies have yet to evaluate the potential intrinsic resistance of these VRSA strains to surface disinfectants. Standard procedures during terminal cleaning and disinfection of surfaces, if performed incorrectly, may be inadequate for the elimination of VRE from patient rooms. 1113,1116-1118 Given the sensitivity of VRE to hospital disinfectants, current disinfecting protocols should be effective if they are diligently and properly carried out. Health-care facilities should ensure that housekeeping staff use correct procedures for cleaning and disinfecting surfaces in VRE-contaminated areas, which include using sufficient amounts of germicide at the proper use-dilution and allowing adequate contact time. 1118

# b. Clostridium difficile

Clostridium difficile is the most frequent etiologic agent of health-care-associated diarrhea. 1119,1120 In one hospital, 30% of adults who developed health-care-associated diarrhea were positive for C. difficile. 1121 One recent study employing PCR-ribotyping techniques demonstrated that cases of C. difficile-associated diarrhea occurring in the hospital included both patients whose infections were attributed to endogenous C. difficile strains and patients whose illnesses were considered health-care-associated. 1122 Most patients remain asymptomatic after infection, but the organism continues to be shed in their stools. Risk factors for acquiring C. difficile-associated infection include
a. exposure to antibiotic therapy, particularly with beta-lactam agents; 1123
b. gastrointestinal procedures and surgery; 1124
c. advanced age; and
d. indiscriminate use of antibiotics. 1125-1128
Of all the measures that have been used to prevent the spread of C. difficile-associated diarrhea, the most successful has been the restriction of the use of antimicrobial agents. 1129,1130 C. difficile is an anaerobic, gram-positive bacterium.
Normally fastidious in its vegetative state, it is capable of sporulating when environmental conditions no longer support its continued growth. The capacity to form spores enables the organism to persist in the environment (e.g., in soil and on dry surfaces) for extended periods of time. Environmental contamination by this microorganism is well known, especially in places where fecal contamination may occur. 1131 The environment (especially housekeeping surfaces) rarely serves as a direct source of infection for patients. 1024,1132-1136 However, direct exposure to contaminated patient-care items (e.g., rectal thermometers) and high-touch surfaces in patients' bathrooms (e.g., light switches) have been implicated as sources of infection. 1130,1135,1136,1138 Transfer of the pathogen to the patient via the hands of health-care workers is thought to be the most likely mechanism of exposure. 24,1133,1139 Standard isolation techniques intended to minimize enteric contamination of patients, health-care workers' hands, patient-care items, and environmental surfaces have been published. 1140 Handwashing remains the most effective means of reducing hand contamination. Proper use of gloves is an ancillary measure that helps to further minimize transfer of these pathogens from one surface to another. The degree to which the environment becomes contaminated with C. difficile spores is proportional to the number of patients with C. difficile-associated diarrhea, 24,1132,1135 although asymptomatic, colonized patients may also serve as a source of contamination. Few studies have examined the use of specific chemical germicides for the inactivation of C. difficile spores, and no well-controlled trials have been conducted to determine the efficacy of surface disinfection and its impact on health-care-associated diarrhea. Some investigators have evaluated the use of chlorine-containing chemicals (e.g., 1,000 ppm hypochlorite at recommended use-dilution, 5,000 ppm sodium hypochlorite [1:10 v/v dilution], 1:100 v/v dilutions of unbuffered hypochlorite, and phosphate-buffered hypochlorite [1,600 ppm]). One of the studies demonstrated that the number of contaminated environmental sites was reduced by half, 1135 whereas two other studies demonstrated declines in health-care-associated C. difficile infections in an HSCT unit 1141 and in two geriatric medical units 1142 during periods of hypochlorite use. The presence of confounding factors, however, was acknowledged in one of these studies. 1142 The recommended approach to environmental infection control with respect to C. difficile is meticulous cleaning followed by disinfection using hypochlorite-based germicides as appropriate. 952,1130,1143 However, because no EPA-registered surface disinfectants with label claims for inactivation of C. difficile spores are available, this recommendation is based on the best available evidence from the scientific literature.

# c. Respiratory and Enteric Viruses in Pediatric-Care Settings

Although the viruses mentioned in this guideline are not unique to the pediatric-care setting in health-care facilities, their prevalence in these areas, especially during the winter months, is substantial. Children (particularly neonates) are more likely than adults to develop infection and substantial clinical disease from these agents and therefore are more likely to require supportive care during their illness.
Common respiratory viruses in pediatric-care areas include rhinoviruses, respiratory syncytial virus (RSV), adenoviruses, influenza viruses, and parainfluenza viruses. Transmission of these viruses occurs primarily via direct contact with small-particle aerosols or via hand contamination with respiratory secretions that are then transferred to the nose or eyes. Because transmission primarily requires close personal contact, contact precautions are appropriate to interrupt transmission. 6 Hand contamination can occur from direct contact with secretions or indirectly from touching high-touch environmental surfaces that have become contaminated with virus from large droplets. The indirect transfer of virus from one person to another via hand contact with frequently touched fomites was demonstrated in a study using a bacteriophage whose environmental stability approximated that of human viral pathogens (e.g., poliovirus and parvovirus). 1144 The impact of this mode of transmission with respect to human respiratory and enteric viruses depends on the ability of these agents to survive on environmental surfaces. Infectious RSV has been recovered from skin, porous surfaces, and non-porous surfaces after 30 minutes, 1 hour, and 7 hours, respectively. 1145 Parainfluenza viruses are known to persist for up to 4 hours on porous surfaces and up to 10 hours on non-porous surfaces. 1146 Rhinoviruses can persist on porous and non-porous surfaces for approximately 1 and 3 hours, respectively; study participants in a controlled environment became infected with rhinoviruses after first touching a surface with dried secretions and then touching their nasal or conjunctival mucosa. 1147 Although the efficiency of direct transmission of these viruses from surfaces in uncontrolled settings remains to be defined, these data underscore the basis for maintaining regular protocols for cleaning and disinfecting high-touch surfaces. The clinically important enteric viruses encountered in pediatric-care settings include enteric adenovirus, astroviruses, caliciviruses, and rotavirus. Group A rotavirus is the most common cause of infectious diarrhea in infants and children. Transmission of this virus is primarily fecal-oral; however, the role of fecally contaminated surfaces and fomites in rotavirus transmission is unclear. During one epidemiologic investigation of enteric disease among children attending day care, rotavirus contamination was detected on 19% of inanimate objects in the center. 1148,1149 In an outbreak in a pediatric unit, secondary cases of rotavirus infection clustered in areas where children with rotaviral diarrhea were located. 1150 Astroviruses cause gastroenteritis and diarrhea in newborns and young children and can persist on fecally contaminated surfaces for several months during periods of relatively low humidity. 1151,1152 Outbreaks of small round-structured viruses (i.e., caliciviruses [Norwalk virus and Norwalk-like viruses]) can affect both patients and staff, with attack rates of ≥50%. 1153 Routes of person-to-person transmission include fecal-oral spread and aerosols generated from vomiting. 1154-1156 Fecal contamination of surfaces in care settings can spread large amounts of virus to the environment. Studies that have attempted to use low- and intermediate-level disinfectants to inactivate rotavirus suspended in feces have demonstrated a protective effect of high concentrations of organic matter. 1157,1158
Intermediate-level disinfectants (e.g., alcoholic quaternary ammonium compounds and chlorine solutions) can be effective in inactivating enteric viruses, provided that a cleaning step to remove most of the organic matter precedes terminal disinfection. 1158 These findings underscore the need for proper cleaning and disinfecting procedures where contamination of environmental surfaces with body substances is likely. EPA-registered surface disinfectants with label claims for these viral agents should be used in these settings. Using disposable, protective barrier coverings may help to minimize the degree of surface contamination. 936

# d. Severe Acute Respiratory Syndrome (SARS) Virus

In November 2002, an atypical pneumonia of unknown etiology emerged in Asia and subsequently developed into an international outbreak of respiratory illness among persons in 29 countries during the first six months of 2003. "Severe acute respiratory syndrome" (SARS) is a viral respiratory infection associated with a newly described coronavirus (SARS-associated coronavirus [SARS-CoV]). SARS-CoV is an enveloped RNA virus. It is present in high titers in the respiratory secretions, stool, and blood of infected persons. The modes of transmission determined from epidemiologic investigations were primarily forms of direct contact (i.e., large-droplet aerosolization and person-to-person contact). Respiratory secretions were presumed to be the major source of virus in these situations; airborne transmission of the virus has not been completely ruled out. Little is known about the impact of fecal-oral transmission on SARS. The epidemiology of SARS-CoV infection is not completely understood, and therefore recommended infection-control and prevention measures to contain the spread of SARS will evolve as new information becomes available. 1159 At present, there is no indication that established strategies for cleaning (i.e., to remove the majority of bioburden) and disinfecting equipment and environmental surfaces need to be changed for the environmental infection control of SARS. In-patient rooms housing SARS patients should be cleaned and disinfected at least daily and at the time of patient transfer or discharge. More frequent cleaning and disinfection may be indicated for high-touch surfaces and following aerosol-producing procedures (e.g., intubation, bronchoscopy, and sputum induction). While there are presently no disinfectant products registered by EPA specifically for inactivation of SARS-CoV, EPA-registered hospital disinfectants that are equivalent to low- and intermediate-level germicides may be used on pre-cleaned, hard, non-porous surfaces in accordance with manufacturers' instructions for environmental surface disinfection. Monitoring adherence to the guidelines established for cleaning and disinfection is an important component of environmental infection control to contain the spread of SARS.

# e. Creutzfeldt-Jakob Disease (CJD) in Patient-Care Areas

Creutzfeldt-Jakob disease (CJD) is a rare, invariably fatal, transmissible spongiform encephalopathy (TSE) that occurs worldwide with an average annual incidence of 1 case per million population. 1160-1162 CJD is one of several TSEs affecting humans; other diseases in this group include kuru, fatal familial insomnia, and Gerstmann-Sträussler-Scheinker syndrome. A TSE that affects a younger population (compared with the age range of CJD cases) has been described primarily in the United Kingdom since 1996. 1163
This variant form of CJD (vCJD) is clinically and neuropathologically distinguishable from classic CJD; epidemiologic and laboratory evidence suggests a causal association between bovine spongiform encephalopathy (BSE ["mad cow disease"]) and vCJD. 1163-1166 The agent associated with CJD is a prion, which is an abnormal isoform of a normal protein constituent of the central nervous system. 1167-1169 The mechanism by which the normal form of the protein is converted to the abnormal, disease-causing prion is unknown. The tertiary conformation of the abnormal prion protein appears to confer a heightened degree of resistance to conventional methods of sterilization and disinfection. 1170,1171 Although about 90% of CJD cases occur sporadically, a limited number of cases are the result of direct exposure to prion-containing material (usually central nervous system tissue or pituitary hormones) acquired as a result of health care (iatrogenic cases). These cases have been linked to
a. pituitary hormone therapy (from human sources, as opposed to hormones prepared through the use of recombinant technology), 1170-1174
b. transplants of either dura mater or corneas, 1175-1181 and
c. neurosurgical instruments and depth electrodes. 1182-1185
In the cases involving instruments and depth electrodes, conventional cleaning and terminal reprocessing methods of the day failed to fully inactivate the contaminating prions and are considered inadequate by today's standards. Prion-inactivation studies involving whole tissues and tissue homogenates have been conducted to determine the parameters of physical and chemical methods of sterilization or disinfection necessary for complete inactivation; 1170, 1186-1191 however, the application of these findings to environmental infection control in health-care settings is problematic. No studies have evaluated the effectiveness of medical instrument reprocessing in inactivating prions. Despite a consensus that abnormal prions display some extreme measure of resistance to inactivation by either physical or chemical methods, scientists disagree about the exact conditions needed for sterilization. Inactivation studies utilizing whole tissues present extraordinary challenges to any sterilizing method. 1192 Additionally, the experimental designs of these studies preclude the evaluation of surface cleaning as a part of the total approach to pathogen inactivation. 951,1192 Some researchers have recommended the use of a 1:2 v/v dilution of sodium hypochlorite (approximately 20,000 ppm), full-strength sodium hypochlorite (50,000-60,000 ppm), or 1-2 N sodium hydroxide (NaOH) for the inactivation of prions on certain surfaces (e.g., those found in the pathology laboratory). 1170,1188 Although these chemicals may be appropriate for the decontamination of laboratory, operating-room, or autopsy-room surfaces that come into contact with central nervous system tissue from a known or suspected patient, this approach is not indicated for routine or terminal cleaning of a room previously occupied by a CJD patient. Both chemicals pose hazards for the health-care worker doing the decontamination. NaOH is caustic and should not make contact with the skin. Sodium hypochlorite solutions (i.e., chlorine bleach) can corrode metals (e.g., aluminum). MSDS information should be consulted when attempting to work with concentrated solutions of either chemical.
Currently, no EPA-registered products have label claims for prion inactivation; therefore, this guidance is based on the best available evidence from the scientific literature. Environmental infection-control strategies must be based on the principles of the "chain of infection," regardless of the disease of concern. 13 Although CJD is transmissible, it is not highly contagious. All iatrogenic cases of CJD have been linked to direct exposure to prion-contaminated central nervous system tissue or pituitary hormones. The six documented iatrogenic cases associated with instruments and devices involved neurosurgical instruments and devices that introduced residual contamination directly into the recipient's brain. No evidence suggests that vCJD has been transmitted iatrogenically or that either CJD or vCJD has been transmitted from environmental surfaces (e.g., housekeeping surfaces). Therefore, routine procedures are adequate for terminal cleaning and disinfection of a CJD patient's room. Additionally, epidemiologic studies involving highly transfused patients have not identified blood as a source of prion transmission. 1193-1198 Routine procedures for containing, decontaminating, and disinfecting surfaces with blood spills should be adequate for proper infection control in these situations. 951,1199 Guidance for environmental infection control in ORs and autopsy areas has been published. 1197,1199 Hospitals should develop risk-assessment procedures to identify patients with known or suspected CJD so that prion-specific infection-control measures can be implemented for the OR and for instrument reprocessing. 1200 This assessment also should be conducted for older patients undergoing non-lesionous neurosurgery when such procedures are being done for diagnosis. Disposable, impermeable coverings should be used during these autopsies and neurosurgeries to minimize surface contamination. Surfaces that have become contaminated with central nervous system tissue or cerebrospinal fluid should be cleaned and decontaminated by
a. removing most of the tissue or body substance with absorbent materials,
b. wetting the surface with a sodium hypochlorite solution containing ≥5,000 ppm or a 1 N NaOH solution, and
c. rinsing thoroughly. 951,1197-1199,1201
The optimum duration of contact exposure in these instances is unclear. Some researchers recommend a 1-hour contact time on the basis of tissue-inactivation studies, 1197,1198,1201 whereas other reviewers of the subject draw no conclusions from this research. 1199 Factors to consider before cleaning a potentially contaminated surface are
a. the degree to which gross tissue/body substance contamination can be effectively removed, and
b. the ease with which the surface can be cleaned.

# F. Environmental Sampling

This portion of Part I addresses the basic principles and methods of sampling environmental surfaces and other environmental sources for microorganisms. The applied strategies of sampling with respect to environmental infection control have been discussed in the appropriate preceding subsections.

# General Principles: Microbiologic Sampling of the Environment

Before 1970, U.S. hospitals conducted regularly scheduled culturing of the air and environmental surfaces (e.g., floors, walls, and table tops). 1202
By 1970, CDC and the American Hospital Association (AHA) were advocating the discontinuation of routine environmental culturing because rates of health-care-associated infection had not been associated with levels of general microbial contamination of air or environmental surfaces, and because meaningful standards for permissible levels of microbial contamination of environmental surfaces or air did not exist. 1203-1205 During 1970, 25% of U.S. hospitals reduced the extent of such routine environmental culturing, a trend that has continued. 1206,1207 Random, undirected sampling (referred to as "routine" in previous guidelines) differs from the current practice of targeted sampling for defined purposes. 2,1204 Previous recommendations against routine sampling were not intended to discourage the use of sampling in which sample collection, culture, and interpretation are conducted in accordance with defined protocols. 2 In this guideline, targeted microbiologic sampling connotes a monitoring process that includes
a. a written, defined, multidisciplinary protocol for sample collection and culturing;
b. analysis and interpretation of results using scientifically determined or anticipatory baseline values for comparison; and
c. expected actions based on the results obtained.
Infection-control staff, in conjunction with laboratorians, should assess the health-care facility's capability to conduct sampling and determine when expert consultation and/or services are needed. Microbiologic sampling of air, water, and inanimate surfaces (i.e., environmental sampling) is an expensive and time-consuming process that is complicated by many variables in protocol, analysis, and interpretation. It is therefore indicated for only four situations. 1208 The first is to support an investigation of an outbreak of disease or infections when environmental reservoirs or fomites are implicated epidemiologically in disease transmission. 161,1209,1210 It is important that such culturing be supported by epidemiologic data. Environmental sampling, as with all laboratory testing, should not be conducted if there is no plan for interpreting and acting on the results obtained. 11,1211,1212 Linking microorganisms from environmental samples with clinical isolates by molecular epidemiology is crucial whenever it is possible to do so. The second situation for which environmental sampling may be warranted is in research. Well-designed and controlled experimental methods and approaches can provide new information about the spread of health-care-associated diseases. 126, 129 A classic example is the study of environmental microbial contamination that compared health-care-associated infection rates in an old hospital and a new facility before and shortly after occupancy. 947 The third indication for sampling is to monitor a potentially hazardous environmental condition, confirm the presence of a hazardous chemical or biological agent, and validate the successful abatement of the hazard. This type of sampling can be used to
a. detect bioaerosols released from the operation of health-care equipment (e.g., an ultrasonic cleaner) and determine the success of repairs in containing the hazard, 1213
b. detect the release of an agent of bioterrorism in an indoor environmental setting and determine its successful removal or inactivation, and
c. sample for industrial hygiene or safety purposes (e.g., monitoring a "sick building").
The fourth indication is for quality assurance, to evaluate the effects of a change in infection-control practice or to ensure that equipment or systems perform according to specifications and expected outcomes. Any sampling for quality-assurance purposes must follow sound sampling protocols and address confounding factors through the use of properly selected controls. Results from a single environmental sample are difficult to interpret in the absence of a frame of reference or perspective. Evaluations of a change in infection-control practice are based on the assumption that the effect will be measured over a finite period, usually of short duration. Conducting quality-assurance sampling on an extended basis, especially in the absence of an adverse outcome, is usually unjustified. A possible exception might be the use of air sampling during major construction periods to qualitatively detect breaks in environmental infection-control measures. In one study, which began as part of an investigation of an outbreak of health-care-associated aspergillosis, airborne concentrations of Aspergillus spores were measured in efforts to evaluate the effectiveness of sealing hospital doors and windows during a period of construction of a nearby building. 50 Other examples of sampling for quality-assurance purposes may include commissioning newly constructed space in special-care areas (i.e., ORs and units for immunosuppressed patients) or assessing a change in housekeeping practice. However, the only types of routine environmental microbiologic sampling recommended as part of a quality-assurance program are
a. the biological monitoring of sterilization processes by using bacterial spores 1214 and
b. the monthly culturing of water used in hemodialysis applications and for the final dialysate use dilution.
Some experts also advocate periodic environmental sampling to evaluate the microbial/particulate quality for regular maintenance of the air-handling system (e.g., filters) and to verify that the components of the system meet manufacturer's specifications (A. Streifel, University of Minnesota, 2000). Certain equipment in health-care settings (e.g., biological safety cabinets) may also be monitored with airflow and particulate sampling to determine performance or as part of adherence to a certification program; results can then be compared with a predetermined standard of performance. These measurements, however, usually do not require microbiologic testing.

# Air Sampling

Biological contaminants occur in the air as aerosols and may include bacteria, fungi, viruses, and pollens. 1215,1216 Aerosols are characterized as solid or liquid particles suspended in air. Talking for 5 minutes and coughing can each produce 3,000 droplet nuclei; sneezing can generate approximately 40,000 droplets, which then evaporate to particles in the size range of 0.5-12 μm. 137,1217 Particles in a biological aerosol usually vary in size from <1 μm to ≥50 μm. These particles may consist of a single, unattached organism or may occur in the form of clumps composed of a number of bacteria. Clumps can also include dust and dried organic or inorganic material. Vegetative forms of bacterial cells and viruses may be present in the air in lesser numbers than bacterial spores or fungal spores. Factors that determine the survival of microorganisms within a bioaerosol include a. the suspending medium, b. temperature, c. relative humidity, d. oxygen sensitivity, and e. exposure to UV or electromagnetic radiation. 1215
Many vegetative cells will not survive for lengthy periods of time in the air unless protected by a cover of dried organic or inorganic matter. 1216 Pathogens that resist drying (e.g., Staphylococcus spp., Streptococcus spp., and fungal spores) can survive for long periods, and they can be carried considerable distances via air and still remain viable. They may also settle on surfaces and become airborne again as secondary aerosols during certain activities (e.g., sweeping and bed making). 1216,1218 Microbiologic air sampling is used as needed to determine the numbers and types of microorganisms, or particulates, in indoor air. 289 Air sampling for quality control, however, is problematic because of the lack of uniform air-quality standards. Although airborne spores of Aspergillus spp. can pose a risk for neutropenic patients, the critical number (i.e., action level) of these spores above which outbreaks of aspergillosis would be expected to occur has not been defined. Health-care professionals considering the use of air sampling should keep in mind that the results represent indoor air quality at singular points in time, and these may be affected by a variety of factors, including a. indoor traffic, b. visitors entering the facility, c. temperature, d. time of day or year, e. relative humidity, f. relative concentration of particles or organisms, and g. the performance of the air-handling system components. To be meaningful, air-sampling results must be compared with those obtained from other defined areas, conditions, or time periods. Several preliminary concerns must be addressed when designing a microbiologic air sampling strategy (Box 13). Because the amount of particulate material and bacteria retained in the respiratory system is largely dependent on the size of the inhaled particles, particle size should be determined when studying airborne microorganisms and their relation to respiratory infections. Particles >5 μm are efficiently trapped in the upper respiratory tract and are removed primarily by ciliary action. 1219 Particles ≤5 μm in diameter reach the lung, but the greatest retention in the alveoli is of particles 1-2 μm in diameter. 1220-1222

# Box 13. Preliminary concerns for conducting air sampling

• Consider the possible characteristics and conditions of the aerosol, including size range of particles, relative amount of inert material, concentration of microorganisms, and environmental factors.
• Determine the type of sampling instruments, sampling time, and duration of the sampling program.
• Determine the number of samples to be taken.
• Ensure that adequate equipment and supplies are available.
• Determine the method of assay that will ensure optimal recovery of microorganisms.
• Select a laboratory that will provide proper microbiologic support.
• Ensure that samples can be refrigerated if they cannot be assayed in the laboratory promptly.

Bacteria, fungi, and particulates in air can be identified and quantified with the same methods and equipment (Table 23). The basic methods include
a. impingement in liquids,
b. impaction on solid surfaces,
c. sedimentation,
d. filtration,
e. centrifugation,
f. electrostatic precipitation, and
g. thermal precipitation. 1218
Of these, impingement in liquids, impaction on solid surfaces, and sedimentation (on settle plates) have been used for various air-sampling purposes in health-care settings. 289 Several instruments are available for sampling airborne bacteria and fungi (Box 14).
Some of the samplers are self-contained units requiring only a power supply and the appropriate collecting medium, but most require additional auxiliary equipment (e.g., a vacuum pump and an airflow-measuring device [i.e., a flowmeter or anemometer]). Sedimentation or depositional methods use settle plates and therefore need no special instruments or equipment. Selection of an instrument for air sampling requires a clear understanding of the type of information desired and the particular determinations that must be made (Box 14). Information may be needed regarding
a. one particular organism or all organisms that may be present in the air,
b. the concentration of viable particles or of viable organisms,
c. the change in concentration with time, and
d. the size distribution of the collected particles.
Before sampling begins, decisions should be made regarding whether the results are to be qualitative or quantitative. Comparing quantities of airborne microorganisms with those of outdoor air is also standard operating procedure. Infection-control professionals, hospital epidemiologists, industrial hygienists, and laboratory supervisors, as part of a multidisciplinary team, should discuss the potential need for microbial air sampling to determine whether the capacity and expertise to conduct such sampling exist within the facility and when it is appropriate to enlist the services of an environmental microbiologist consultant.

# Box 14. Selecting an air sampling device*

The following factors must be considered when choosing an air sampling instrument:
• Viability and type of the organism to be sampled
• Compatibility with the selected method of analysis
• Sensitivity of particles to sampling
• Assumed concentrations and particle size
• Whether airborne clumps must be broken (i.e., total viable organism count vs. particle count)
• Volume of air to be sampled and length of time sampler is to be continuously operated

Liquid impinger and solid impactor samplers are the most practical for sampling bacteria, particles, and fungal spores because they can sample large volumes of air in relatively short periods of time. 289 Solid impactor units are available as either "slit" or "sieve" designs. Slit impactors use a rotating disc as the support for the collecting surface, which allows determinations of concentration over time. Sieve impactors commonly use stages with calibrated holes of different diameters. Some impactor-type samplers use centrifugal force to impact particles onto agar surfaces. The interior of either device must be made sterile to avoid inadvertent contamination from the sampler. Results obtained from either sampling device can be expressed as organisms or particles per unit volume of air (CFU/m³). Sampling for bacteria requires special attention, because bacteria may be present as individual organisms, as clumps, or mixed with or adhering to dust or covered with a protective coating of dried organic or inorganic substances. Reports of bacterial concentrations determined by air sampling therefore must indicate whether the results represent individual organisms or particles bearing multiple cells. Certain types of samplers (e.g., liquid impingers) will completely or partially disintegrate clumps and large particles; the sampling result will therefore reflect the total number of individual organisms present in the air.
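Volumetric results of the kind described above reduce to a simple normalization: colonies recovered divided by the volume of air drawn through the sampler (and, for settle plates, colonies divided by plate area and exposure time). The sketch below illustrates that arithmetic; the flow rate, colony counts, and plate size are illustrative values, not figures from this guideline.

```python
import math

# Illustrative arithmetic only; the flow rate, counts, and plate size below
# are invented examples, not values prescribed by this guideline.

def cfu_per_cubic_meter(colonies: int, flow_l_per_min: float, minutes: float) -> float:
    """Volumetric samplers: colonies per cubic meter of air (1 m^3 = 1,000 L)."""
    sampled_m3 = flow_l_per_min * minutes / 1000.0
    return colonies / sampled_m3

def settle_plate_density(colonies: int, plate_diameter_cm: float, hours: float) -> float:
    """Settle plates: CFU per square decimeter per hour; air volume cannot be derived."""
    area_dm2 = math.pi * (plate_diameter_cm / 2.0) ** 2 / 100.0  # cm^2 -> dm^2
    return colonies / (area_dm2 * hours)

# Example: 34 colonies from an impactor run at 28.3 L/min for 10 minutes,
# and 12 colonies on a 9-cm settle plate exposed for 4 hours.
print(round(cfu_per_cubic_meter(34, 28.3, 10)))    # ~120 CFU/m^3
print(round(settle_plate_density(12, 9.0, 4), 1))  # ~4.7 CFU/dm^2/hour
```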
The task of sizing a bioaerosol is simplified through the use of sieve or slit impactors because these samplers will separate the particles and microorganisms into size ranges as the sample is collected. These samplers must, however, be calibrated first by sampling aerosols under similar use conditions. 1225 The use of settle plates (i.e., the sedimentation or depositional method) is not recommended when sampling air for fungal spores, because single spores can remain suspended in air indefinitely. 289 Settle plates have been used mainly to sample for particulates and bacteria either in research studies or during epidemiologic investigations. 161,[1226][1227][1228][1229] Results of sedimentation sampling are typically expressed as numbers of viable particles or viable bacteria per unit area per the duration of sampling time (i.e., CFU/area/time); this method cannot quantify the volume of air sampled. Because the survival of microorganisms during air sampling is inversely proportional to the velocity at which the air is taken into the sampler, 1215 one advantage of using a settle plate is its reliance on gravity to bring organisms and particles into contact with its surface, thus enhancing the potential for optimal survival of collected organisms. This process, however, takes several hours to complete and may be impractical for some situations.

Air samplers are designed to meet differing measurement requirements. Some samplers are better suited for one form of measurement than others. No one type of sampler and assay procedure can be used to collect and enumerate 100% of airborne organisms. The sampler and/or sampling method chosen should, however, have an adequate sampling rate to collect a sufficient number of particles in a reasonable time period so that a representative sample of air is obtained for biological analysis. Newer analytical techniques for assaying air samples include PCR methods and enzyme-linked immunosorbent assays (ELISAs).

# Water Sampling

A detailed discussion of the principles and practices of water sampling has been published. 945 Water sampling in health-care settings is used to detect waterborne pathogens of clinical significance or to determine the quality of finished water in a facility's distribution system. Routine testing of the water in a health-care facility is usually not indicated, but sampling in support of outbreak investigations can help determine appropriate infection-control measures. Water-quality assessments in dialysis settings have been discussed in this guideline (see Water, Dialysis Water Quality and Dialysate, and Appendix C). Health-care facilities that conduct water sampling should have their samples assayed in a laboratory that uses established methods and quality-assurance protocols. Water specimens are not "static specimens" at ambient temperature; potential changes in both numbers and types of microbial populations can occur during transport. Consequently, water samples should be sent to the testing laboratory cold (i.e., at approximately 39.2°F [4°C]), and testing should be done as soon as practical after collection (preferably within 24 hours). Because most water sampling in health-care facilities involves the testing of finished water from the facility's distribution system, a reducing agent (i.e., sodium thiosulfate [Na₂S₂O₃]) needs to be added to neutralize residual chlorine or other halogen in the collected sample. If the water contains elevated levels of heavy metals, then a chelating agent should be added to the specimen.
The minimum volume of water to be collected should be sufficient to complete any and all assays indicated; 100 mL is considered a suitable minimum volume. Sterile collection equipment should always be used. Sampling from a tap requires flushing of the water line before sample collection. If the tap is a mixing faucet, attachments (e.g., screens and aerators) must be removed, and hot and then cold water must be run through the tap before collecting the sample. 945 If the cleanliness of the tap is questionable, disinfection with 500-600 ppm sodium hypochlorite (a 1:100 v/v dilution of chlorine bleach) and flushing of the tap should precede sample collection.

Microorganisms in finished or treated water often are physically damaged ("stressed") to the point that growth is limited when assayed under standard conditions. Such situations lead to false-negative readings and misleading assessments of water quality. Appropriate neutralization of halogens and chelation of heavy metals are crucial to the recovery of these organisms. The choice of recovery media and incubation conditions will also affect the assay. Incubation temperatures should be closer to the ambient temperature of the water rather than at 98.6°F (37°C), and recovery media should be formulated to provide appropriate concentrations of nutrients to support organisms exhibiting less-than-vigorous growth. 945 High-nutrient content media (e.g., blood agar and tryptic soy agar [TSA]) may actually inhibit the growth of these damaged organisms. Reduced-nutrient media (e.g., diluted peptone and R2A) are preferable for recovery of these organisms. 945 Use of aerobic, heterotrophic plate counts allows both a qualitative and quantitative measurement of water quality. If bacterial counts in water are expected to be high in number (e.g., during waterborne outbreak investigations), assaying small quantities using pour plates or spread plates is appropriate. 945 Membrane filtration is used when low-count specimens are expected and larger sampling volumes are required (≥100 mL). The sample is filtered through the membrane, and the filter is applied directly, face up, onto the surface of the agar plate and incubated.

Unlike the testing of potable water supplies for coliforms (which uses standardized test and specimen collection parameters and conditions), water sampling to support epidemiologic investigations of disease outbreaks may be subjected to modifications dictated by the circumstances present in the facility. Assay methods for waterborne pathogens may also not be standardized. Therefore, control or comparison samples should be included in the experimental design. Any departure from a standard method should be fully documented and should be considered when interpreting results and developing strategies. Assay methods specific for clinically significant waterborne pathogens (e.g., Legionella spp., Aeromonas spp., Pseudomonas spp., and Acinetobacter spp.) are more complicated and costly compared with methods used to detect coliforms and other standard indicators of water quality.
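As a plausibility check on the 1:100 bleach dilution mentioned above for disinfecting a questionable tap, the following sketch converts stock hypochlorite strength to ppm. The assumed 5.25%-6.15% range for household chlorine bleach is an illustrative assumption (product strength varies by brand), not a specification of this guideline.

```python
# Rough check that a 1:100 dilution of household chlorine bleach yields the
# 500-600 ppm sodium hypochlorite target cited above. The stock strengths
# are assumed typical retail values and vary by product.

def diluted_ppm(stock_percent: float, dilution_factor: float) -> float:
    """Approximate ppm after diluting a stock solution 1:dilution_factor."""
    stock_ppm = stock_percent * 10_000.0  # 1% (w/v) is roughly 10,000 ppm
    return stock_ppm / dilution_factor

for strength in (5.25, 6.15):
    print(f"{strength}% stock at 1:100 -> {diluted_ppm(strength, 100):.0f} ppm")
# 5.25% -> 525 ppm; 6.15% -> 615 ppm, i.e., on the order of 500-600 ppm.
```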
# Environmental Surface Sampling

Routine environmental-surface sampling (e.g., surveillance cultures) in health-care settings is neither cost-effective nor warranted. 951,1225 When indicated, surface sampling should be conducted with multidisciplinary approval in adherence to carefully considered plans of action and policy (Box 15).

# Box 15. Undertaking environmental-surface sampling*
The following factors should be considered before engaging in environmental-surface sampling:
• Background information from the literature and present activities (i.e., preliminary results from an epidemiologic investigation)
• Location of surfaces to be sampled

Surface sampling is used currently for research, as part of an epidemiologic investigation, or as part of a comprehensive approach for specific quality-assurance purposes. As a research tool, surface sampling has been used to determine a. potential environmental reservoirs of pathogens, 564,[1230][1231][1232] b. survival of microorganisms on surfaces, 1232,1233 and c. the sources of environmental contamination. 1023 Some or all of these approaches can also be used during outbreak investigations. 1232 Discussion of surface sampling of medical devices and instruments is beyond the scope of this document and is deferred to future guidelines on sterilization and disinfection issues.

Meaningful results depend on the selection of appropriate sampling and assay techniques. 1214 The media, reagents, and equipment required for surface sampling are available from any well-equipped microbiology laboratory and laboratory supplier. For quantitative assessment of surface organisms, nonselective, nutrient-rich agar media and broth (e.g., TSA and brain-heart infusion broth [BHI] with or without 5% sheep or rabbit blood supplement) are used for the recovery of aerobic bacteria. Broth media are used with membrane-filtration techniques. Further sample work-up may require the use of selective media for the isolation and enumeration of specific groups of microorganisms. Examples of selective media are MacConkey agar (MAC [selects for gram-negative bacteria]), cetrimide agar (selects for Pseudomonas aeruginosa), and Sabouraud dextrose- and malt-extract agars and broths (select for fungi). Qualitative determinations of organisms from surfaces require only the use of selective or nonselective broth media.

Effective sampling of surfaces requires moisture, either already present on the surface to be sampled or added via moistened swabs, sponges, wipes, agar surfaces, or membrane filters. 1214,[1234][1235][1236] Dilution fluids and rinse fluids include various buffers or general-purpose broth media (Table 24). If disinfectant residuals are expected on surfaces being sampled, specific neutralizer chemicals should be used in both the growth media and the dilution or rinse fluids. Lists of the neutralizers, the target disinfectant active ingredients, and the use concentrations have been published. 1214,1237 Alternatively, instead of adding neutralizing chemicals to existing culture media (or if the chemical nature of the disinfectant residuals is unknown), the use of either a. commercially available media including a variety of specific and nonspecific neutralizers or b. double-strength broth media will facilitate optimal recovery of microorganisms. Appropriate control specimens should be included to rule out both residual antimicrobial activity from surface disinfectants and potential toxicity caused by the presence of neutralizer chemicals carried over into the assay system. 1214

Several methods can be used for collecting environmental surface samples (Table 25). Specific step-by-step discussions of each of the methods have been published. 1214,1239 For best results, all methods should incorporate aseptic techniques, sterile equipment, and sterile recovery media.
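To make the quantitative workflow concrete, the following is a minimal sketch of how a plate count obtained from one of the sample/rinse methods referenced in Table 25 can be converted back to organisms per unit of surface area. The rinse volume, dilution, plated aliquot, and template area are hypothetical values chosen only for illustration.

```python
# Hypothetical back-calculation of surface contamination from a swab rinse.
# All sample parameters below are illustrative, not recommended defaults.

def surface_cfu_per_cm2(colonies: int, plated_ml: float, rinse_ml: float,
                        dilution_factor: float, area_cm2: float) -> float:
    """Estimate CFU/cm^2 from colonies counted on one plated aliquot."""
    cfu_per_ml = colonies * dilution_factor / plated_ml  # concentration in rinse
    total_cfu = cfu_per_ml * rinse_ml                    # organisms recovered
    return total_cfu / area_cm2                          # normalized to area

# Example: 30 colonies from a 0.1-mL aliquot of a 1:10 dilution of a 10-mL
# rinse, collected with a swab from a 25-cm^2 sampling template.
print(surface_cfu_per_cm2(30, 0.1, 10.0, 10, 25.0), "CFU/cm^2")  # 1200.0
```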
Sample/rinse methods are frequently chosen because of their versatility. However, these sampling methods are the most prone to errors caused by manipulation of the swab, gauze pad, or sponge. 1238 Additionally, no microbiocidal or microbiostatic agents should be present in any of these items when used for sampling. 1238 Each of the rinse methods requires effective elution of microorganisms from the item used to sample the surface. Thorough mixing of the rinse fluids after elution (e.g., via manual or mechanical mixing using a vortex mixer, shaking with or without glass beads, and ultrasonic bath) will help to remove and suspend material from the sampling device and break up clumps of organisms for a more accurate count. 1238 In some instances, the item used to sample the surface (e.g., gauze pad and sponge) may be immersed in the rinse fluids in a sterile bag and subjected to stomaching. 1238 This technique, however, is suitable only for soft or absorbent items that will not puncture the bag during the elution process.

If sampling is conducted as part of an epidemiologic investigation of a disease outbreak, identification of isolates to species level is mandatory, and characterization beyond the species level is preferred. 1214 When interpreting the results of the sampling, the expected degree of microbial contamination associated with the various categories of surfaces in the Spaulding classification must be considered. Environmental surfaces should be visibly clean; recognized pathogens in numbers sufficient to result in secondary transfer to other animate or inanimate surfaces should be absent from the surface being sampled. 1214 Although the interpretation of a sample with positive microbial growth is self-evident, an environmental surface sample that shows no growth, especially one obtained from housekeeping surfaces, does not represent a "sterile" surface. Sensitivities of the sampling and assay methods (i.e., level of detection) must be taken into account when no-growth samples are encountered. Properly collected control samples will help rule out extraneous contamination of the surface sample.

# G. Laundry and Bedding

# General Information

Laundry in a health-care facility may include bed sheets and blankets, towels, personal clothing, patient apparel, uniforms, scrub suits, gowns, and drapes for surgical procedures. 1245 Although contaminated textiles and fabrics in health-care facilities can be a source of substantial numbers of pathogenic microorganisms, reports of health-care associated diseases linked to contaminated fabrics are so few in number that the overall risk of disease transmission during the laundry process likely is negligible. When the incidence of such events is evaluated in the context of the volume of items laundered in health-care settings (estimated to be 5 billion pounds annually in the United States), 1246 existing control measures (e.g., standard precautions) are effective in reducing the risk of disease transmission to patients and staff. Therefore, use of current control measures should be continued to minimize the contribution of contaminated laundry to the incidence of health-care associated infections. The control measures described in this section of the guideline are based on principles of hygiene, common sense, and consensus guidance; they pertain to laundry services utilized by health-care facilities, either in-house or contract, rather than to laundry done in the home.
# Epidemiology and General Aspects of Infection Control

Contaminated textiles and fabrics often contain high numbers of microorganisms from body substances, including blood, skin, stool, urine, vomitus, and other body tissues and fluids. When textiles are heavily contaminated with potentially infective body substances, they can contain bacterial loads of 10⁶-10⁸ CFU/100 cm² of fabric. 1247 Disease transmission attributed to health-care laundry has involved contaminated fabrics that were handled inappropriately (i.e., the shaking of soiled linens). Bacteria (Salmonella spp., Bacillus cereus), viruses (hepatitis B virus [HBV]), fungi (Microsporum canis), and ectoparasites (scabies) presumably have been transmitted from contaminated textiles and fabrics to workers via a. direct contact or b. aerosols of contaminated lint generated from sorting and handling contaminated textiles. [1248][1249][1250][1251][1252] In these events, however, investigations could not rule out the possibility that some of these reported infections were acquired from community sources.

Through a combination of soil removal, pathogen removal, and pathogen inactivation, contaminated laundry can be rendered hygienically clean. Hygienically clean laundry carries negligible risk to health-care workers and patients, provided that the clean textiles, fabric, and clothing are not inadvertently contaminated before use. OSHA defines contaminated laundry as "laundry which has been soiled with blood or other potentially infectious materials or may contain sharps." 967 The purpose of the laundry portion of the standard is to protect the worker from exposure to potentially infectious materials during collection, handling, and sorting of contaminated textiles through the use of personal protective equipment, proper work practices, containment, labeling, hazard communication, and ergonomics.

Experts are divided regarding the practice of transporting clothes worn at the workplace to the health-care worker's home for laundering. Although OSHA regulations prohibit home laundering of items that are considered personal protective apparel or equipment (e.g., laboratory coats), 967 experts disagree about whether this regulation extends to uniforms and scrub suits that are not contaminated with blood or other potentially infectious material. Health-care facility policies on this matter vary and may be inconsistent with recommendations of professional organizations. 1253,1254 Uniforms without blood or body substance contamination presumably do not differ appreciably from street clothes in the degree and microbial nature of soilage. Home laundering would be expected to remove this level of soil adequately. However, if health-care facilities require the use of uniforms, they should either make provisions to launder them or provide information to the employee regarding infection control and cleaning guidelines for the item based on the tasks being performed at the facility. Health-care facilities should address the need to provide this service and should determine the frequency for laundering these items. In a recent study examining the microbial contamination of medical students' white coats, the students perceived the coats as "clean" as long as the garments were not visibly contaminated with body substances, even after wearing the coats for several weeks.
1255 The heaviest bacterial load was found on the sleeves and the pockets of these garments; the organisms most frequently isolated were Staphylococcus aureus, diphtheroids, and Acinetobacter spp. 1255 Presumably, the sleeves of the coat may make contact with a patient and potentially serve to transfer environmentally stable microorganisms among patients. In this study, however, surveillance was not conducted among patients to detect new infections or colonizations. The students did, however, report that they would likely replace their coats more frequently and regularly if clean coats were provided. 1255 Apart from this study, which documents the presence of pathogenic bacteria on health-care facility clothing, reports of infections attributed either to contact with such apparel or to home laundering have been rare. 1256,1257

Laundry services for health-care facilities are provided either in-house (i.e., on-premise laundry [OPL]), by co-operatives (i.e., entities owned and operated by a group of facilities), or by off-site commercial laundries. In the latter case, the textiles may be owned by the health-care facility, in which case the processor is paid for laundering only. Alternatively, the textiles may be owned by the processor, who charges a "rental" fee for every piece laundered. The laundry facility in a health-care setting should be designed for efficiency in providing hygienically clean textiles, fabrics, and apparel for patients and staff. Guidelines for laundry construction and operation for health-care facilities, including nursing facilities, have been published. 120,1258 The design and engineering standards for existing facilities are those cited in the AIA edition in effect during the time of the facility's construction. 120 A laundry facility is usually partitioned into two separate areas: a "dirty" area for receiving and handling the soiled laundry and a "clean" area for processing the washed items. 1259 To minimize the potential for recontaminating cleaned laundry with aerosolized contaminated lint, areas receiving contaminated textiles should be at negative air pressure relative to the clean areas. [1260][1261][1262] Laundry areas should have handwashing facilities readily available to workers. Laundry workers should wear appropriate personal protective equipment (e.g., gloves and protective garments) while sorting soiled fabrics and textiles. 967 Laundry equipment should be used and maintained according to the manufacturer's instructions to prevent microbial contamination of the system. 1250,1263 Damp textiles should not be left in machines overnight. 1250

# Collecting, Transporting, and Sorting Contaminated Textiles and Fabrics

The laundry process starts with the removal of used or contaminated textiles, fabrics, and/or clothing from the areas where such contamination occurred, including but not limited to patients' rooms, surgical/operating areas, and laboratories. Handling contaminated laundry with a minimum of agitation can help prevent the generation of potentially contaminated lint aerosols in patient-care areas. 967,1259 Sorting or rinsing contaminated laundry at the location where contamination occurred is prohibited by OSHA. 967 Contaminated textiles and fabrics are placed into bags or other appropriate containment at this location; these bags are then securely tied or otherwise closed to prevent leakage.
967 Single bags of sufficient tensile strength are adequate for containing laundry, but leak-resistant containment is needed if the laundry is wet and capable of soaking through a cloth bag. 1264 Bags containing contaminated laundry must be clearly identified with labels, color-coding, or other methods so that health-care workers handle these items safely, regardless of whether the laundry is transported within the facility or destined for transport to an off-site laundry service. 967

Typically, contaminated laundry originating in isolation areas of the hospital is segregated and handled with special practices; however, few, if any, cases of health-care associated infection have been linked to this source. 1265 Single-blinded studies have demonstrated that laundry from isolation areas is no more heavily contaminated with microorganisms than laundry from elsewhere in the hospital. 1266 Therefore, adherence to standard precautions when handling contaminated laundry in isolation areas and minimizing agitation of the contaminated items are considered sufficient to prevent the dispersal of potentially infectious aerosols. 6

Contaminated textiles and fabrics in bags can be transported by cart or chute. 1258,1262 Laundry chutes require proper design, maintenance, and use, because the piston-like action of a laundry bag traveling in the chute can propel airborne microbial contaminants throughout the facility. [1267][1268][1269] Laundry chutes should be maintained under negative air pressure to prevent the spread of microorganisms from floor to floor. Loose, contaminated pieces of laundry should not be tossed into chutes, and laundry bags should be closed or otherwise secured to prevent the contents from falling out into the chute. 1270

Health-care facilities should determine the point in the laundry process at which textiles and fabrics should be sorted. Sorting after washing minimizes the exposure of laundry workers to infective material in soiled fabrics, reduces airborne microbial contamination in the laundry area, and helps to prevent potential percutaneous injuries to personnel. 1271 Sorting laundry before washing protects both the machinery and fabrics from hard objects (e.g., needles, syringes, and patients' property) and reduces the potential for recontamination of clean textiles. 1272 Sorting laundry before washing also allows for customization of laundry formulas based on the mix of products in the system and types of soils encountered. Additionally, if work flow allows, increasing the amount of segregation by specific product types will usually yield the greatest amount of work efficiency during inspection, folding, and pack-making operations. 1253 Protective apparel for the workers and appropriate ventilation can minimize worker exposures during pre-wash sorting. 967,[1258][1259][1260] Gloves used for the task of sorting laundry should be of sufficient thickness to minimize sharps injuries. 967 Employee safety personnel and industrial hygienists can help to determine the appropriate glove choice.

# Parameters of the Laundry Process

Fabrics, textiles, and clothing used in health-care settings are disinfected during laundering and generally rendered free of vegetative pathogens (i.e., hygienically clean), but they are not sterile. 1273 Laundering cycles consist of flush, main wash, bleaching, rinsing, and souring. 1274 Cleaned wet textiles, fabrics, and clothing are then dried, pressed as needed, and prepared (e.g., folded and packaged) for distribution back to the facility.
Clean linens provided by an off-site laundry must be packaged prior to transport to prevent inadvertent contamination from dust and dirt during loading, delivery, and unloading. Functional packaging of laundry can be achieved in several ways, including a. placing clean linen in a hamper lined with a previously unused liner, which is then closed or covered; b. placing clean linen in a properly cleaned cart and covering the cart with disposable material or a properly cleaned reusable textile material that can be secured to the cart; and c. wrapping individual bundles of clean textiles in plastic or other suitable material and sealing or taping the bundles.

The antimicrobial action of the laundering process results from a combination of mechanical, thermal, and chemical factors. 1271,1275,1276 Dilution and agitation in water remove substantial quantities of microorganisms. Soaps and detergents function to suspend soils and also exhibit some microbiocidal properties. Hot water provides an effective means of destroying microorganisms. 1277 A temperature of at least 160°F (71°C) for a minimum of 25 minutes is commonly recommended for hot-water washing. 2 Water of this temperature can be provided by a steam jet or separate booster heater. 120 The use of chlorine bleach assures an extra margin of safety. 1278,1279 A total available chlorine residual of 50-150 ppm is usually achieved during the bleach cycle. 1277 Chlorine bleach becomes activated at water temperatures of 135°F-145°F (57.2°C-62.7°C). The last of the series of rinse cycles is the addition of a mild acid (i.e., sour) to neutralize any alkalinity in the water supply, soap, or detergent. The rapid shift in pH from approximately 12 to 5 is an effective means to inactivate some microorganisms. 1247 Effective removal of residual alkali from fabrics is an important measure in reducing the risk for skin reactions among patients.

Chlorine bleach is an economical, broad-spectrum chemical germicide that enhances the effectiveness of the laundering process. Chlorine bleach is not, however, an appropriate laundry additive for all fabrics. Traditionally, bleach was not recommended for laundering flame-retardant fabrics, linens, and clothing because its use diminished the flame-retardant properties of the treated fabric. 1273 However, some modern-day flame-retardant fabrics can now tolerate chlorine bleach. Flame-retardant fabrics, whether topically treated or inherently flame retardant, should be thoroughly rinsed during the rinse cycles, because detergent residues are capable of supporting combustion. Chlorine alternatives (e.g., activated oxygen-based laundry detergents) provide added benefits for fabric and color safety in addition to antimicrobial activity. Studies comparing the antimicrobial potencies of chlorine bleach and oxygen-based bleach are needed. Oxygen-based bleach and detergents used in health-care settings should be registered by EPA to ensure adequate disinfection of laundry. Health-care workers should note the cleaning instructions of textiles, fabrics, drapes, and clothing to identify special laundering requirements and appropriate hygienic cleaning options. 1278

Although hot-water washing is an effective laundry disinfection method, the cost can be substantial. Laundries are typically the largest users of hot water in hospitals. They consume 50%-75% of the total hot water, 1280 representing an average of 10%-15% of the energy used by a hospital.
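The hot-water parameters above lend themselves to a simple programmatic check. The sketch below encodes the commonly cited targets (at least 160°F for a minimum of 25 minutes, with a 50-150 ppm available chlorine residual during the bleach cycle) as a hypothetical validation function, one way a facility might log wash-cycle compliance; the function and its thresholds mirror this text and are not an official standard.

```python
# Minimal sketch of a hot-water wash cycle check against the parameters
# cited above; the function is illustrative, not a regulatory standard.

def fahrenheit_to_celsius(temp_f: float) -> float:
    return (temp_f - 32.0) * 5.0 / 9.0  # 160°F -> 71.1°C

def hot_wash_cycle_ok(temp_f: float, minutes: float, chlorine_ppm: float) -> bool:
    """True if the cycle meets the commonly cited hot-water wash targets."""
    return (temp_f >= 160.0                      # at least 160°F (71°C)
            and minutes >= 25.0                  # for a minimum of 25 minutes
            and 50.0 <= chlorine_ppm <= 150.0)   # bleach-cycle chlorine residual

print(hot_wash_cycle_ok(165, 30, 100))        # True
print(hot_wash_cycle_ok(150, 30, 100))        # False: water below 160°F
print(f"{fahrenheit_to_celsius(160):.1f} C")  # 71.1
```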
Several studies have demonstrated that lower water temperatures of 71°F-77°F (22°C-25°C) can reduce microbial contamination when the cycling of the washer, the wash detergent, and the amount of laundry additive are carefully monitored and controlled. 1247,[1281][1282][1283][1284][1285] Low-temperature laundry cycles rely heavily on the presence of chlorine- or oxygen-activated bleach to reduce the levels of microbial contamination. The selection of hot- or cold-water laundry cycles may be dictated by state health-care facility licensing standards or by other regulation. Regardless of whether hot or cold water is used for washing, the temperatures reached in drying and especially during ironing provide additional significant microbiocidal action. 1247 Dryer temperatures and cycle times are dictated by the materials in the fabrics. Man-made fibers (i.e., polyester and polyester blends) require shorter times and lower temperatures.

After washing, cleaned and dried textiles, fabrics, and clothing are pressed, folded, and packaged for transport, distribution, and storage by methods that ensure their cleanliness until use. 2 State regulations and/or accrediting standards may dictate the procedures for this activity. Clean/sterile and contaminated textiles should be transported from the laundry to the health-care facility in vehicles (e.g., trucks, vans, and carts) that allow for separation of clean/sterile and contaminated items. Clean/sterile textiles and contaminated textiles may be transported in the same vehicle, provided that the use of physical barriers and/or space separation can be verified to be effective in protecting the clean/sterile items from contamination. Clean, uncovered/unwrapped textiles stored in a clean location for short periods of time (e.g., uncovered and used within a few hours) have not been demonstrated to contribute to increased levels of health-care acquired infection. Such textiles can be stored in convenient places for use during the provision of care, provided that the textiles can be maintained dry and free from soil and body-substance contamination.

In the absence of microbiologic standards for laundered textiles, no rationale exists for routine microbiologic sampling of cleaned health-care textiles and fabrics. 1286 Sampling may be used as part of an outbreak investigation if epidemiologic evidence suggests that textiles, fabrics, or clothing are a suspected vehicle for disease transmission. Sampling techniques include aseptically macerating the fabric into pieces and adding these to broth media or using contact plates (RODAC plates) for direct surface sampling. 1271,1286 When evaluating the disinfecting properties of the laundering process specifically, placing pieces of fabric between two membrane filters may help to minimize the contribution of the physical removal of microorganisms. 1287

Washing machines and dryers in residential-care settings are more likely to be consumer items rather than the commercial, heavy-duty, large-volume units typically found in hospitals and other institutional health-care settings. Although all washing machines and dryers in health-care settings must be properly maintained for performance according to the manufacturer's instructions, questions have been raised about the need to disinfect washers and dryers in residential-care settings. Disinfection of the tubs and tumblers of these machines is unnecessary when proper laundry procedures are followed; these procedures involve a.
the physical removal of bulk solids (e.g., feces) before the wash/dry cycle and b. proper use of temperature, detergent, and laundry additives. Infection has not been linked to laundry procedures in residential-care facilities, even when consumer versions of detergents and laundry additives are used.

# Special Laundry Situations

Some textile items (e.g., surgical drapes and reusable gowns) must be sterilized before use and therefore require steam autoclaving after laundering. 7 Although the American Academy of Pediatrics in previous guidelines recommended autoclaving for linens in neonatal intensive care units (NICUs), studies on the microbial quality of routinely cleaned NICU linen have not identified any increased risk for infection among the neonates receiving care. 1288 Consequently, hygienically clean linens are suitable for use in this setting. 997 The use of sterile linens in burn therapy units remains unresolved.

Coated or laminated fabrics are often used in the manufacture of PPE. When these items become contaminated with blood or other body substances, the manufacturer's instructions for decontamination and cleaning take into account the compatibility of the rubber backing with the chemical germicides or detergents used in the process. The directions for decontaminating these items should be followed as indicated; the item should be discarded when the backing develops surface cracks.

Dry cleaning, a cleaning process that utilizes organic solvents (e.g., perchloroethylene) for soil removal, is an alternative means of cleaning fabrics that might be damaged in conventional laundering and detergent washing. Several studies, however, have shown that dry cleaning alone is relatively ineffective in reducing the numbers of bacteria and viruses on contaminated linens; 1289,1290 microbial populations are significantly reduced only when dry-cleaned articles are heat pressed. Dry cleaning should therefore not be considered a routine option for health-care facility laundry and should be reserved for those circumstances in which fabrics cannot be safely cleaned with water and detergent. 1291

# Surgical Gowns, Drapes, and Disposable Fabrics

An issue of recent concern involves the use of disposable (i.e., single use) versus reusable (i.e., multiple use) surgical attire and fabrics in health-care settings. 1292 Regardless of the material used to manufacture gowns and drapes, these items must be resistant to liquid and microbial penetration. 7,[1293][1294][1295][1296][1297] Surgical gowns and drapes must be registered with FDA to demonstrate their safety and effectiveness. Repellency and pore size of the fabric contribute to gown performance, but performance capability can be influenced by the item's design and construction. 1298,1299 Reinforced gowns (i.e., gowns with double-layered fabric) generally are more resistant to liquid strike-through. 1300,1301 Reinforced gowns may, however, be less comfortable. Guidelines for selection and use of barrier materials for surgical gowns and drapes have been published. 1302 When selecting a barrier product, repellency level and type of barrier should be compatible with the exposure expected. 967 However, data are limited regarding the association between gown or drape characteristics and risk for surgical site infections. 7,1303 Health-care facilities must ensure optimal protection of patients and health-care workers. Not all fabric items in health care lend themselves to single use.
Facilities exploring options for gowns and drapes should consider the expense of disposable items and the impact on the facility's waste-management costs once these items are discarded. Costs associated with the use of durable goods involve the fabric or textile items; staff expenses to collect, sort, clean, and package the laundry; and energy costs to operate the laundry if on-site or the costs to contract with an outside service. 1304,1305

# Antimicrobial-Impregnated Articles and Consumer Items Bearing Antimicrobial Labeling

Manufacturers are increasingly incorporating antibacterial or antimicrobial chemicals into consumer and health-care items. Some consumer products bearing labels that indicate treatment with antimicrobial chemicals have included pens, cutting boards, toys, household cleaners, hand lotions, cat litter, soaps, cotton swabs, toothbrushes, and cosmetics. The "antibacterial" label on household cleaning products, in particular, gives consumers the impression that the products perform "better" than comparable products without this labeling, when in fact all household cleaners have antibacterial properties. In the health-care setting, treated items may include children's pajamas, mattresses, and bed linens with label claims of antimicrobial properties. These claims require careful evaluation to determine whether they pertain to the use of antimicrobial chemicals as preservatives for the fabric or other components or whether they imply a health claim. 1306,1307 No evidence is available to suggest that use of these products will make consumers and patients healthier or prevent disease. No data support the use of these items as part of a sound infection-control strategy, and therefore, the additional expense of replacing a facility's bedding and sheets with these treated products is unwarranted.

EPA has reaffirmed its position that manufacturers who make public health claims for articles containing antimicrobial chemicals must provide evidence to support those claims as part of the registration process. 1308 Current EPA regulations outlined in the Treated Articles Exemption of the Federal Insecticide, Fungicide, and Rodenticide Act (FIFRA) require manufacturers to register both the antimicrobial chemical used in or on the product and the finished product itself if a public health claim is maintained for the item. The exemption applies to the use of antimicrobial chemicals for the purpose of preserving the integrity of the product's raw material(s). The U.S. Federal Trade Commission (FTC) is evaluating manufacturer advertising of products with antimicrobial claims. 1309

# Standard Mattresses, Pillows, and Air-Fluidized Beds

Standard mattresses and pillows can become contaminated with body substances during patient care if the integrity of the covers of these items is compromised. The practice of sticking needles into the mattress should be avoided. A mattress cover is generally a fitted, protective material, the purpose of which is to prevent the mattress from becoming contaminated with body fluids and substances. A linen sheet placed on the mattress is not considered a mattress cover. Patches for tears and holes in mattress covers do not provide an impermeable surface over the mattress. Mattress covers should be replaced when torn; the mattress should be replaced if it is visibly stained. Wet mattresses, in particular, can be a substantial environmental source of microorganisms.
Infections and colonizations caused by Acinetobacter spp., MRSA, and Pseudomonas aeruginosa have been described, especially among burn patients. [1310][1311][1312][1313][1314][1315] In these reports, the removal of wet mattresses was an effective infection-control measure. Efforts were made to ensure that pads and covers were cleaned and disinfected between patients using disinfectant products compatible with mattress-cover materials to ensure that these covers remained impermeable to fluids. [1310][1311][1312][1313][1314] Pillows and their covers should be easily cleanable, preferably in a hot-water laundry cycle. 1315 These should be laundered between patients or if contaminated with body substances.

Air-fluidized beds are used for the care of patients immobilized for extended periods of time because of therapy or injury (e.g., pain, decubitus ulcers, and burns). 1316 These specialized beds consist of a base unit filled with microsphere beads fluidized by warm, dry air flowing upward from a diffuser located at the bottom of the unit. A porous, polyester filter sheet separates the patient from direct contact with the beads but allows body fluids to pass through to the beads. Moist beads aggregate into clumps, which settle to the bottom, where they are removed as part of routine bed maintenance.

Because the beads become contaminated with the patient's body substances, concerns have been raised about the potential for these beds to serve as an environmental source of pathogens. Certain pathogens (e.g., Enterococcus spp., Serratia marcescens, Staphylococcus aureus, and Streptococcus faecalis) have been recovered either from the microsphere beads or the polyester sheet after cleaning. 1317,1318 Reports of cross-contamination of patients, however, are few. 1318 Nevertheless, routine maintenance and between-patient decontamination procedures can minimize potential risks to patients. Regular removal of bead clumps, coupled with the warm, dry air of the bed, can help to minimize bacterial growth in the unit. [1319][1320][1321] Beads are decontaminated between patients by high heat (113°F-194°F [45°C-90°C], depending on the manufacturer's specifications) for at least 1 hour; this procedure is particularly important for the inactivation of Enterococcus spp., which are relatively resistant to heat. 1322,1323 The polyester filter sheet requires regular changing and thorough cleaning and disinfection, especially between patients. 1317,1318,1322,1323

Microbial contamination of the air space in the immediate vicinity of a properly maintained air-fluidized bed is similar to that found in air around conventional bedding, despite the air flow out of the base unit and around the patient. 1320,1324,1325 An operational air-fluidized bed can, however, interfere with proper pressure differentials, especially in negative-pressure rooms; 1326 the effect varies with the location of the bed relative to the room's configuration and supply and exhaust vent locations. Use of an air-fluidized bed in a negative-pressure room requires consultation with a facility engineer to determine appropriate placement of the bed.

# H. Animals in Health-Care Facilities

# General Information

Animals in health-care facilities traditionally have been limited to laboratories and research areas. However, their presence in patient-care areas is now more frequent, both in acute-care and long-term care settings, prompting consideration for the potential transmission of zoonotic pathogens from animals to humans in these settings.
Although dogs and cats may be commonly encountered in health-care settings, other animals (e.g., fish, birds, non-human primates, rabbits, rodents, and reptiles) also can be present as research, resident, or service animals. These animals can serve as sources of zoonotic pathogens that could potentially infect patients and health-care workers (Table 26). [1327][1328][1329][1330][1331][1332][1333][1334][1335][1336][1337][1338][1339][1340] Animals potentially can serve as reservoirs for antibiotic-resistant microorganisms, which can be introduced to the health-care setting while the animal is present. VRE have been isolated from both farm animals and pets, 1341 and a cat in a geriatric care center was found to be colonized with MRSA. 1342

Zoonoses can be transmitted from animals to humans either directly or indirectly via bites, scratches, aerosols, ectoparasites, accidental ingestion, or contact with contaminated soil, food, water, or unpasteurized milk. 1331,1332,[1343][1344][1345] Colonization and hand transferral of pathogens acquired from pets in health-care workers' homes represent potential sources and modes of transmission of zoonotic pathogens in health-care settings. An outbreak of infections caused by a yeast (Malassezia pachydermatis) among newborns was traced to transfer of the yeast from the hands of health-care workers with pet dogs at home. 1346 In addition, an outbreak of ringworm in a NICU caused by Microsporum canis was associated with a nurse and her cat, 1347 and an outbreak of Rhodococcus (Gordona) bronchialis sternal SSIs after coronary-artery bypass surgery was traced to a colonized nurse whose dogs were culture-positive for the organism. 1348 In the latter outbreak, whether the dogs were the sole source of the organism and whether other environmental reservoirs contributed to the outbreak are unknown. Nonetheless, limited data indicate that outbreaks of infectious disease have occurred as a result of contact with animals in areas housing immunocompetent patients. However, the low frequency of outbreaks may result from a. the relatively limited presence of the animals in health-care facilities and b. the immunocompetency of the patients involved in the encounters. Formal scientific studies to evaluate potential risks of transmission of zoonoses in health-care settings outside of the laboratory are lacking.

# Animal-Assisted Activities, Animal-Assisted Therapy, and Resident Animals

Animal-Assisted Activities (AAA) are programs that enhance patients' quality of life. These programs allow patients to visit animals in either a common, central location in the facility or in individual patient rooms. A group session with the animals enhances opportunities for ambulatory patients and facility residents to interact with caregivers, family members, and volunteers. [1349][1350][1351] Alternatively, allowing the animals access to individual rooms provides the same opportunity to nonambulatory patients and patients for whom privacy or dignity issues are a consideration. The decision to allow this access to patients' rooms should be made on a case-by-case basis, with the consultation and consent of the attending physician and nursing staff. Animal-Assisted Therapy (AAT) is a goal-directed intervention that incorporates an animal into the treatment process provided by a credentialed therapist.
1330,1331 The concept for AAT arose from the observation that some patients with pets at home recover from surgical and medical procedures more rapidly than patients without pets. 1352,1353 Contact with animals is considered beneficial for enhancing wellness in certain patient populations (e.g., children, the elderly, and extended-care hospitalized patients). 1349,[1354][1355][1356][1357] However, evidence supporting this benefit is largely derived from anecdotal reports and observations of patient/animal interactions. [1357][1358][1359] Guidelines for establishing AAT programs are available for facilities considering this option. 1360 The incorporation of non-human primates into an AAA or AAT program is not encouraged because of concerns regarding potential disease transmission from, and the unpredictable behavior of, these animals. 1361,1362

Animals participating in either AAA or AAT sessions should be in good health and up-to-date with recommended immunizations and prophylactic medications (e.g., heartworm prevention) as determined by a licensed veterinarian based on local needs and recommendations. Regular re-evaluation of the animal's health and behavior status is essential. 1360 Animals should be routinely screened for enteric parasites and/or have evidence of a recently completed antihelminthic regimen. 1363 They should also be free of ectoparasites (e.g., fleas and ticks) and should have no sutures, open wounds, or obvious dermatologic lesions that could be associated with bacterial, fungal, or viral infections or parasitic infestations. Incorporating young animals (i.e., those aged <1 year) into these programs is not encouraged because of issues regarding unpredictable behavior and elimination control. Additionally, participation in these programs may place the health of such young animals at risk. Animals should be clean and well-groomed. The visits must be supervised by persons who know the animals and their behavior. Animal handlers should be trained in these activities and receive site-specific orientation to ensure that they work efficiently with the staff in the specific health-care environment. 1360 Additionally, animal handlers should be in good health. 1360

The most important infection-control measure to prevent potential disease transmission is strict enforcement of hand-hygiene measures (e.g., using either soap and water or an alcohol-based hand rub) for all patients, staff, and residents after handling the animals. 1355,1364 Care should also be taken to avoid direct contact with animal urine or feces. Clean-up of these substances from environmental surfaces requires gloves and the use of leak-resistant plastic bags to discard absorbent material used in the process. 2 The area must be cleaned after visits according to standard cleaning procedures.

The American Academy of Allergy, Asthma, and Immunology estimates that dog or cat allergies occur in approximately 15% of the population. 1365 Minimizing contact with animal saliva, dander, and/or urine helps to mitigate allergic responses. 1365-1367 Some facilities may not allow animal visitation for patients with a. underlying asthma, b. known allergies to cat or dog hair, c. respiratory allergies of unknown etiology, and d. immunosuppressive disorders. Hair shedding can be minimized by processes that remove dead hair (e.g., grooming) and that prevent the shedding of dead hair (e.g., therapy capes for dogs). Allergens can be minimized by bathing therapy animals within 24 hours of a visit. 1333,1368 Animal therapists and handlers must take precautions to prevent animal bites.
Common pathogens associated with animal bites include Capnocytophaga canimorsus, Pasteurella spp., Staphylococcus spp., and Streptococcus spp. Selecting well-behaved and well-trained animals for these programs greatly decreases the incidence of bites. Rodents, exotic species, wild/domestic hybrids (e.g., wolf-dog hybrids), and wild animals whose behavior is unpredictable should be excluded from AAA or AAT programs. A well-trained animal handler should be able to recognize stress in the animal and to determine when to terminate a session to minimize risk. When an animal bites a person during AAA or AAT, the animal is to be permanently removed from the program. If a bite does occur, the wound must be cleansed immediately and monitored for subsequent infection. Most infections can be treated with antibiotics, and antibiotics often are prescribed prophylactically in these situations. The health-care facility's infection-control staff should participate actively in planning for and coordinating AAA and AAT sessions.

Many facilities do not offer AAA or AAT programs for severely immunocompromised patients (e.g., HSCT patients and patients on corticosteroid therapy). 1339 The question of whether family pets or companion animals can visit terminally ill HSCT patients or other severely immunosuppressed patients is best handled on a case-by-case basis, although animals should not be brought into the HSCT unit or any other unit housing severely immunosuppressed patients. An in-depth discussion of this issue is presented elsewhere. 1366

Immunocompromised patients who have been discharged from a health-care facility may be at higher risk for acquiring some pet-related zoonoses. Although guidelines have been developed to minimize the risk of disease transmission to HIV-infected patients, 8 these recommendations may be applicable for patients with other immunosuppressive disorders. In addition to handwashing or hand hygiene, these recommendations include avoiding contact with a. animal feces and soiled litter box materials, b. animals with diarrhea, c. very young animals (i.e., dogs <6 months of age and cats <1 year of age), and d. exotic animals and reptiles. 8 Pets or companion animals with diarrhea should receive veterinary care to resolve their condition.

Many health-care facilities are adopting more home-like environments for residential-care or extended-stay patients in acute-care settings, and resident animals are one element of this approach. 1369 One concept, the "Eden Alternative," incorporates children, plants, and animals (e.g., dogs, cats, fish, birds, rabbits, and rodents) into the daily care setting. 1370,1371 The concept of working with resident animals has not been scientifically evaluated. Several issues beyond the benefits of therapy must be considered before embarking on such a program, including a. whether the animals will come into direct contact with patients and/or be allowed to roam freely in the facility; b. how the staff will provide care for the animals; c. the management of patients' or residents' allergies, asthma, and phobias; d. precautionary measures to prevent bites and scratches; and e. measures to properly manage the disposal of animal feces and urine, thereby preventing environmental contamination by zoonotic microorganisms (e.g., Toxoplasma spp., Toxocara spp., and Ancylostoma spp.). 1372,1373 Few data document a link between health-care acquired infection rates and frequency of cleaning fish tanks or rodent cages.
Skin infections caused by Mycobacterium marinum have been described among persons who have fish aquariums at home. 1374,1375 Nevertheless, immunocompromised patients should avoid direct contact with fish tanks and cages and the aerosols that these items produce. Further, fish tanks should be kept clean on a regular basis as determined by facility policy, and this task should be performed by gloved staff members who are not responsible for patient care. The use of the infection-control risk assessment can help determine whether a fish tank poses a risk for patient or resident safety and health in these situations. No evidence, however, links the incidence of health-care acquired infections among immunocompetent patients or residents with the presence of a properly cleaned and maintained fish tank, even in dining areas. As a general preventive measure, resident animal programs are advised to restrict animals from a. food preparation kitchens, b. laundries, c. central sterile supply and any storage areas for clean supplies, and d. medication preparation areas. Resident-animal programs in acute-care facilities should not allow the animals into isolation areas, protective environments, ORs, or any area where immunocompromised patients are housed. Patients and staff routinely should wash their hands or use waterless, alcohol-based hand-hygiene products after contact with animals.

# Service Animals

Although this section provides an overview about service animals in health-care settings, it cannot address every situation or question that may arise (see Appendix E - Information Resources). A service animal is any animal individually trained to do work or perform tasks for the benefit of a person with a disability. 1366,1376 A service animal is not considered a pet but rather an animal trained to provide assistance to a person because of a disability. Title III of the "Americans with Disabilities Act" (ADA) of 1990 mandates that persons with disabilities accompanied by service animals be allowed access with their service animals into places of public accommodation, including restaurants, public transportation, schools, and health-care facilities. 1366,1376 In health-care facilities, a person with a disability requiring a service animal may be an employee, a visitor, or a patient. An overview of the subject of service animals and their presence in health-care facilities has been published. 1366

No evidence suggests that animals pose a more significant risk of transmitting infection than people; therefore, service animals should not be excluded from such areas, unless an individual patient's situation or a particular animal poses greater risk that cannot be mitigated through reasonable measures. If health-care personnel, visitors, and patients are permitted to enter care areas (e.g., inpatient rooms, some ICUs, and public areas) without taking additional precautions to prevent transmission of infectious agents (e.g., donning gloves, gowns, or masks), a clean, healthy, well-behaved service animal should be allowed access with its handler. 1366 Similarly, if immunocompromised patients are able to receive visitors without using protective garments or equipment, an exclusion of service animals from this area would not be justified.
1366 Because health-care facilities are covered by the ADA or the Rehabilitation Act, a person with a disability may be accompanied by a service animal within the facility unless the animal's presence or behavior creates a fundamental alteration in the nature of a facility's services in a particular area or a direct threat to other persons in a particular area. 1366 A "direct threat" is defined as a significant risk to the health or safety of others that cannot be mitigated or eliminated by modifying policies, practices, or procedures. 1376 The determination that a service animal poses a direct threat in any particular health-care setting must be based on an individualized assessment of the service animal, the patient, and the health-care situation. When evaluating risk in such situations, health-care personnel should consider the nature of the risk (including duration and severity); the probability that injury will occur; and whether reasonable modifications of policies, practices, or procedures will mitigate the risk (J. Wodatch, U.S. Department of Justice, 2000). The person with a disability should contribute to the risk-assessment process as part of a pre-procedure health-care provider/patient conference.

Excluding a service animal from an OR or similar special care areas (e.g., burn units, some ICUs, PE units, and any other area containing equipment critical for life support) is appropriate if these areas are considered to have "restricted access" with regard to the general public. General infection-control measures that dictate such limited access include a. the requirement that the area meet environmental criteria to minimize the risk of disease transmission, b. strict attention to hand hygiene and absence of dermatologic conditions, and c. the indication of barrier protective measures (e.g., gloves, gowns, and masks) for persons in the affected space. No infection-control measures regarding the use of barrier precautions could be reasonably imposed on the service animal. Excluding a service animal that becomes threatening because of a perceived danger to its handler during treatment also is appropriate; however, exclusion of such an animal must be based on the actual behavior of the particular animal, not on speculation about how the animal might behave.

Another issue regarding service animals is whether to permit persons with disabilities to be accompanied by their service animals during all phases of their stay in the health-care facility. Health-care personnel should discuss all aspects of anticipatory care with the patient who uses a service animal. Health-care personnel may not exclude a service animal because health-care staff may be able to perform the same services that the service animal does (e.g., retrieving dropped items and guiding an otherwise ambulatory person to the restroom). Similarly, health-care personnel cannot exclude service animals because the health-care staff perceive a lack of need for the service animal during the person's stay in the health-care facility. A person with a disability is entitled to independent access (i.e., to be accompanied by a service animal unless the animal poses a direct threat or a fundamental alteration in the nature of services); "need" for the animal is not a valid factor in either analysis. For some forms of care (e.g., ambulation as physical therapy following total hip replacement or knee replacement), the service animal should not be used in place of a credentialed health-care worker who directly provides therapy.
However, a service animal need not be restricted from the presence of its handler during this time; in addition, rehabilitation and discharge planning should incorporate the patient's future use of the animal. Health-care personnel and the patient with a disability should discuss both the possible need for the service animal to be separated from its handler for a period of time during non-emergency care and an alternate plan of care for the service animal in the event the patient is unable or unwilling to provide that care. This plan might include family members taking the animal out of the facility several times a day for exercise and elimination, the animal staying with relatives, or boarding off-site. Care of the service animal, however, remains the obligation of the person with the disability, not the health-care staff. Although animals potentially carry zoonotic pathogens transmissible to humans, the risk is minimal with a healthy, clean, vaccinated, well-behaved, and well-trained service animal; dogs and cats are the most common such animals. No reports of human infectious disease originating in service dogs have been published. Standard cleaning procedures are sufficient following occupation of an area by a service animal. 1366 Clean-up of spills of animal urine, feces, or other body substances can be accomplished with the blood/body-substance procedures outlined in the Environmental Services section of this guideline. No special bathing procedures are required prior to a service animal accompanying its handler into a health-care facility. Providing access to exotic animals (e.g., reptiles and nonhuman primates) that are used as service animals is problematic. Concerns about these animals are discussed in two published reviews. 1331,1366 Because some of these animals exhibit high-risk behaviors that may increase the potential for zoonotic disease transmission (e.g., herpes B infection), providing health-care facility access to nonhuman primates used as service animals is discouraged, especially if these animals might come into contact with the general public. 1361,1362 Health-care administrators should consult the Americans with Disabilities Act for guidance when developing policies about service animals in their facilities. 1366,1376 Requiring documentation for access of a service animal to an area generally accessible to the public would impose a burden on a person with a disability. When health-care workers are not certain that an animal is a service animal, they may ask the person who has the animal if it is a service animal required because of a disability; however, no certification or other documentation of service animal status can be required. 1377
# Animals as Patients in Human Health-Care Facilities
The potential for direct and indirect transmission of zoonoses must be considered when rooms and equipment in human health-care facilities are used for the medical or surgical treatment or diagnosis of animals. 1378 Inquiries should be made to veterinary medical professionals to determine an appropriate facility and equipment to care for an animal. The central issue associated with providing medical or surgical care to animals in human health-care facilities is whether cross-contamination occurs between the animal patient and the human health-care workers and/or human patients. The fundamental principles of infection control and aseptic practice should differ only minimally, if at all, between veterinary medicine and human medicine.
Health-care-associated infections can occur, and have occurred, in both patients and workers in veterinary medical facilities when lapses in infection-control procedures are evident. [1379][1380][1381][1382][1383][1384] Further, veterinary patients can be at risk for acquiring infection from veterinary health-care workers if proper precautions are not taken. 1385 The issue of providing care to veterinary patients in human health-care facilities can be divided into the following three areas of infection-control concern:
a. whether the room/area used for animal care can be made safe for human patients;
b. whether the medical/surgical instruments used on animals can subsequently be used on human patients; and
c. which disinfecting or sterilizing procedures need to be done for these purposes.
Studies addressing these concerns are lacking. However, with respect to disinfection or sterilization in veterinary settings, only minimal evidence suggests that zoonotic microbial pathogens are unusually resistant to inactivation by chemical or physical agents (with the exception of prions). Ample evidence supports the contrary observation (i.e., that pathogens from human and animal sources are similar in their relative intrinsic resistance to inactivation). [1386][1387][1388][1389][1390][1391] Further, no evidence suggests that zoonotic pathogens behave differently from human pathogens with respect to ventilation. Despite this knowledge, an aesthetic and sociologic perception that animal care must remain separate from human care persists. Health-care facilities, however, are increasingly faced with requests from the veterinary medical community for access to human health-care facilities for reasons that are largely economic (e.g., the costs of acquiring sophisticated diagnostic technology and complex medical instruments). If hospital guidelines allow treatment of animals, alternate veterinary resources (including veterinary hospitals, clinics, and universities) should be exhausted before using human health-care settings. Additionally, the hospital's public/media relations office should be notified of the situation. The goal is to develop policies and procedures to proactively and positively discuss and disclose this activity to the general public. An infection-control risk assessment (ICRA) must be undertaken to evaluate the circumstances specific to providing care to animals in a human health-care facility. Individual hospital policies and guidelines should be reviewed before any animal treatment is considered in such facilities. Animals treated in human health-care facilities should be under the direct care and supervision of a licensed veterinarian; they also should be free of known infectious diseases, ectoparasites, and other external contaminants (e.g., soil, urine, and feces). Measures should be taken to avoid treating animals with a known or suspected zoonotic disease in a human health-care setting (e.g., lambs being treated for Q fever). If human health-care facilities must be used for animal treatment or diagnostics, the following general infection-control actions are suggested:
a. whenever possible, avoid using ORs or other rooms used for invasive procedures [e.g., cardiac catheterization labs and invasive nuclear medicine areas];
b. when all other space options are exhausted and use of the aforementioned rooms is unavoidable, schedule the procedure late in the day as the last procedure for that particular area, such that patients are not present in the department/unit/area;
c. thoroughly clean and disinfect environmental surfaces, using the procedures discussed in the Environmental Services portion of this guideline, after the animal is removed from the care area;
d. allow sufficient time for ACH to help prevent allergic reactions by human patients [Table B.1. in Appendix B];
e. use only disposable equipment or equipment that can be thoroughly and easily cleaned, disinfected, or sterilized;
f. when medical or surgical instruments, especially invasive instruments that are difficult to clean [e.g., endoscopes], are used on animals, reserve these instruments for future use only on animals; and
g. follow standard precautions.
# Research Animals in Health-Care Facilities
The risk of acquiring a zoonotic infection from research animals has decreased in recent years because many small laboratory animals (e.g., mice, rats, and rabbits) come from quality stock and have defined microbiologic profiles. 1392 Larger animals (e.g., nonhuman primates) are still obtained frequently from the wild and may harbor pathogens transmissible to humans. Primates, in particular, benefit from vaccinations to protect their health during the research period, provided the vaccination does not interfere with the study of the particular agent. Animals serving as models for human disease studies pose some risk for transmission of infection to laboratory or health-care workers from percutaneous or mucosal exposure. Exposures can occur either through a. direct contact with an infected animal or its body substances and secretions, or b. indirect contact with infectious material on equipment, instruments, surfaces, or supplies. 1392 Uncontained aerosols generated during laboratory procedures can also transmit infection. Infection-control measures to prevent transmission of zoonotic infections from research animals are largely derived from the following basic laboratory safety principles:
a. purchasing pathogen-free animals;
b. quarantining incoming animals to detect any zoonotic pathogens;
c. treating infected animals or removing them from the facility;
d. vaccinating animal carriers and high-risk contacts if possible;
e. using specialized containment caging or facilities; and
f. using protective clothing and equipment [e.g., gloves, face shields, gowns, and masks]. 1392
An excellent resource for detailed discussion of these safety measures has been published. 1013 The animal research unit within a health-care facility should be engineered to provide a. adequate containment of animals and pathogens; b. daily decontamination and transport of equipment and waste; c. proper ventilation and air filtration, which prevents recirculation of the air in the unit to other areas of the facility; and d. negative air pressure in the animal rooms relative to the corridors. To ensure adequate security and containment, traffic bound for other areas of the health-care facility should not flow through this unit; access should be restricted to animal-care staff, researchers, environmental services, maintenance, and security personnel. Occupational health programs for animal-care staff, researchers, and maintenance staff should take into consideration the animals' natural pathogens and research pathogens. Components of such programs include a. prophylactic vaccines; b. TB skin testing when primates are used; c. baseline sera; and d. hearing and respiratory testing.
Work practices, PPE, and engineering controls specific for each of the four animal biosafety levels have been published. 1013,1393 The facility's occupational or employee health clinic should be aware of the appropriate post-exposure procedures involving zoonoses and have available the appropriate post-exposure biologicals and medications. Animal-research-area staff should also develop standard operating procedures for
a. daily animal husbandry [e.g., protection of the employee while facilitating animal welfare];
b. pathogen containment and decontamination;
c. management, cleaning, disinfection, and/or sterilization of equipment and instruments; and
d. employee training for laboratory safety and safety procedures specific to animal research worksites. 1013
The federal Animal Welfare Act of 1966 and its amendments serve as the regulatory basis for ensuring animal welfare in research. 1394,1395
# I. Regulated Medical Waste
# Epidemiology
No epidemiologic evidence suggests that most of the solid or liquid waste from hospitals, other health-care facilities, or clinical/research laboratories is any more infective than residential waste. Several studies have compared the microbial load and the diversity of microorganisms in residential wastes and wastes obtained from a variety of health-care settings. [1399][1400][1401][1402] Although hospital wastes had a greater number of different bacterial species compared with residential waste, wastes from residences were more heavily contaminated. 1397,1398 Moreover, no epidemiologic evidence suggests that traditional waste-disposal practices of health-care facilities (whereby clinical and microbiological wastes were decontaminated on site before leaving the facility) have caused disease in either the health-care setting or the general community. 1400,1401 This statement excludes, however, sharps injuries sustained during or immediately after the delivery of patient care, before the sharp is "discarded." Therefore, identifying wastes for which handling and disposal precautions are indicated is largely a matter of judgment about the relative risk of disease transmission, because no reasonable standards on which to base these determinations have been developed. Aesthetic and emotional considerations (originating during the early years of the HIV epidemic) have, however, figured into the development of treatment and disposal policies, particularly for pathology and anatomy wastes and sharps. [1402][1403][1404][1405] Public concerns have resulted in the promulgation of federal, state, and local rules and regulations regarding medical waste management and disposal. [1406][1407][1408][1409][1410][1411][1412][1413][1414]
# Categories of Medical Waste
Precisely defining medical waste on the basis of the quantity and type of etiologic agents present is virtually impossible. The most practical approach to medical waste management is to identify wastes that represent a sufficient potential risk of causing infection during handling and disposal and for which some precautions likely are prudent. 2 Health-care facility medical wastes targeted for handling and disposal precautions include microbiology laboratory waste (e.g., microbiologic cultures and stocks of microorganisms), pathology and anatomy waste, blood specimens from clinics and laboratories, blood products, and other body-fluid specimens. 2 Moreover, the risk of either injury or infection from certain sharp items (e.g., needles and scalpel blades) contaminated with blood also must be considered.
Although any item that has had contact with blood, exudates, or secretions may be potentially infective, treating all such waste as infective is neither practical nor necessary. Federal, state, and local guidelines and regulations specify the categories of medical waste that are subject to regulation and outline the requirements associated with treatment and disposal. The categorization of these wastes has generated the term "regulated medical waste." This term emphasizes the role of regulation in defining the actual material and serves as an alternative to "infectious waste," given the lack of evidence of this type of waste's infectivity. State regulations also address the degree or amount of contamination (e.g., blood-soaked gauze) that defines the discarded item as a regulated medical waste. The EPA's Manual for Infectious Waste Management identifies and categorizes other specific types of waste generated in health-care facilities with research laboratories that also require handling precautions. 1406
# Management of Regulated Medical Waste in Health-Care Facilities
Medical wastes require careful disposal and containment before collection and consolidation for treatment. OSHA has dictated initial measures for discarding regulated medical-waste items. These measures are designed to protect the workers who generate medical wastes and who manage the wastes from point of generation to disposal. 967 A single, leak-resistant biohazard bag is usually adequate for containment of regulated medical wastes, provided the bag is sturdy and the waste can be discarded without contaminating the bag's exterior. If the bag's exterior is contaminated or punctured, the bag must be placed into a second biohazard bag. All bags should be securely closed for disposal. Puncture-resistant containers located at the point of use (e.g., sharps containers) are used as containment for discarded slides or tubes with small amounts of blood, scalpel blades, needles and syringes, and unused sterile sharps. 967 To prevent needlestick injuries, needles and other contaminated sharps should not be recapped, purposefully bent, or broken by hand. CDC has published general guidelines for handling sharps. 6,1415 Health-care facilities may need additional precautions to prevent the production of aerosols during the handling of blood-contaminated items for certain rare diseases or conditions (e.g., Lassa fever and Ebola virus infection). 203 Transporting and storing regulated medical wastes within the health-care facility prior to terminal treatment is often necessary. Both federal and state regulations address the safe transport and storage of on- and off-site regulated medical wastes. [1406][1407][1408] Health-care facilities are instructed to dispose of medical wastes regularly to avoid accumulation. Medical wastes requiring storage should be kept in labeled, leakproof, puncture-resistant containers under conditions that minimize or prevent foul odors. The storage area should be well ventilated and inaccessible to pests. Any facility that generates regulated medical wastes should have a regulated medical waste management plan to ensure health and environmental safety in accordance with federal, state, and local regulations.
# Treatment of Regulated Medical Waste
Regulated medical wastes are treated or decontaminated to reduce the microbial load in or on the waste and to render the by-products safe for further handling and disposal.
From a microbiologic standpoint, waste need not be rendered "sterile" because the treated waste will not be deposited in a sterile site. In addition, waste need not be subjected to the same reprocessing standards as are surgical instruments. Historically, treatment methods involved steam sterilization (i.e., autoclaving), incineration, or interment (for anatomy wastes). Alternative treatment methods developed in recent years include chemical disinfection, grinding/shredding/disinfection methods, energy-based technologies (e.g., microwave or radiowave treatments), and disinfection/encapsulation methods. 1409 State medical waste regulations specify appropriate treatment methods for each category of regulated medical waste. Of all the categories of regulated medical waste, microbiologic wastes (e.g., untreated cultures, stocks, and amplified microbial populations) pose the greatest potential for infectious disease transmission, and sharps pose the greatest risk for injuries. Untreated stocks and cultures of microorganisms are subsets of the clinical laboratory or microbiologic waste stream. If a microorganism must be grown and amplified in culture to high concentration to permit work with the specimen, this item should be considered for on-site decontamination, preferably within the laboratory unit. Historically, this was accomplished effectively by either autoclaving (steam sterilization) or incineration. If steam sterilization in the health-care facility is used for waste treatment, exposure of the waste for up to 90 minutes at 250°F (121°C) in an autoclave (depending on the size of the load and the type of container) may be necessary to ensure an adequate decontamination cycle. [1416][1417][1418] After steam sterilization, the residue can be safely handled and discarded with all other nonhazardous solid waste in accordance with state solid-waste disposal regulations. On-site incineration is another treatment option for microbiologic, pathologic, and anatomic waste, provided the incinerator is engineered to burn these wastes completely and stay within EPA emissions standards. 1410 Improper incineration of waste with high moisture and low energy content (e.g., pathology waste) can lead to emission problems. State medical-waste regulatory programs identify acceptable methods for inactivating amplified stocks and cultures of microorganisms, some of which may employ technology rather than steam sterilization or incineration. Concerns have been raised about the ability of modern health-care facilities to inactivate microbiologic wastes on-site, given that many of these institutions have decommissioned their laboratory autoclaves. Current laboratory guidelines for working with infectious microorganisms at biosafety level (BSL) 3 recommend that all laboratory waste be decontaminated before disposal by an approved method, preferably within the laboratory. 1013 These same guidelines recommend that all materials removed from a BSL 4 laboratory (unless they are biological materials that are to remain viable) be decontaminated before they leave the laboratory. 1013 Recent federal regulations for laboratories that handle certain biological agents known as "select agents" (i.e., those that have the potential to pose a severe threat to public health and safety) require these agents (and those obtained from a clinical specimen intended for diagnostic, reference, or verification purposes) to be destroyed on-site before disposal. 1412
Although recommendations for laboratory waste disposal from BSL 1 or 2 laboratories (e.g., most health-care clinical and diagnostic laboratories) allow for these materials to be decontaminated off-site before disposal, on-site decontamination by a known effective method is preferred to reduce the potential for exposure during the handling of infectious material. A recent outbreak of TB among workers in a regional medical-waste treatment facility in the United States demonstrated the hazards associated with aerosolized microbiologic wastes. 1419,1420 The facility received diagnostic cultures of Mycobacterium tuberculosis from several different health-care facilities before these cultures were chemically disinfected; the facility treated this waste with a grinding/shredding process that generated aerosols from the material. 1419,1420 Several operational deficiencies facilitated the release of aerosols and exposed workers to airborne M. tuberculosis. Among the suggested control measures was that health-care facilities perform on-site decontamination of laboratory waste containing live cultures of microorganisms before releasing the waste to a waste management company. 1419,1420 This measure is supported by recommendations found in the CDC/NIH guideline for laboratory workers. 1013 This outbreak demonstrates the need to avoid the use of any medical-waste treatment method or technology that can aerosolize pathogens from live cultures and stocks (especially those of airborne microorganisms) unless aerosols can be effectively contained and workers can be equipped with proper PPE. [1419][1420][1421] Safe laboratory practices, including those addressing waste management, have been published. 1013,1422 In an era when local, state, and federal health-care facilities and laboratories are developing bioterrorism response strategies and capabilities, the need to reinstate in-laboratory capacity to destroy cultures and stocks of microorganisms becomes a relevant issue. 1423 Recent federal regulations require health-care facility laboratories to maintain the capability of destroying discarded cultures and stocks on-site if these laboratories isolate from a clinical specimen any microorganism or toxin identified as a "select agent" (Table 27). 1412,1413 As an alternative, isolated cultures of select agents can be transferred to a facility registered to accept these agents in accordance with federal regulations. 1412 State medical waste regulations can, however, complicate or completely prevent this transfer if these cultures are determined to be medical waste, because most states regulate the inter-facility transfer of untreated medical wastes.
# Discharging Blood, Fluids to Sanitary Sewers or Septic Tanks
The contents of all vessels that contain more than a few milliliters of blood remaining after laboratory procedures, suction fluids, or bulk blood can either be inactivated in accordance with state-approved treatment technologies or carefully poured down a utility sink drain or toilet. 1414 State regulations may dictate the maximum volume allowable for discharge of blood/body fluids to the sanitary sewer. No evidence indicates that bloodborne diseases have been transmitted from contact with raw or treated sewage.
Many bloodborne pathogens, particularly bloodborne viruses, are not stable in the environment for long periods of time; 1425,1426 therefore, the discharge of small quantities of blood and other body fluids to the sanitary sewer is considered a safe method of disposing of these waste materials. 1414 The following factors increase the likelihood that bloodborne pathogens will be inactivated in the disposal process:
a. dilution of the discharged materials with water;
b. inactivation of pathogens resulting from exposure to cleaning chemicals, disinfectants, and other chemicals in raw sewage; and
c. effectiveness of sewage treatment in inactivating any residual bloodborne pathogens that reach the treatment facility.
Small amounts of blood and other body fluids should not affect the functioning of a municipal sewer system. However, large quantities of these fluids, with their high protein content, might interfere with the biological oxygen demand (BOD) of the system. Local municipal sewage treatment restrictions may dictate that an alternative method of bulk fluid disposal be selected. State regulations may dictate what quantity constitutes a small amount of blood or body fluids. Although concerns have been raised about the discharge of blood and other body fluids to a septic tank system, no evidence suggests that septic tanks have transmitted bloodborne infections. A properly functioning septic system is adequate for inactivating bloodborne pathogens. System manufacturers' instructions specify what materials may be discharged to the septic tank without jeopardizing its proper operation.
# Medical Waste and CJD
Concerns also have been raised about the need for special handling and treatment procedures for wastes generated during the care of patients with CJD or other transmissible spongiform encephalopathies (TSEs). Prions, the agents that cause TSEs, have significant resistance to inactivation by a variety of physical, chemical, or gaseous methods. 1427 No epidemiologic evidence, however, links acquisition of CJD with medical-waste disposal practices. Although handling neurologic tissue for pathologic examination and autopsy materials with care, using barrier precautions, and following specific procedures for the autopsy are prudent measures, 1197 employing extraordinary measures once the materials are discarded is unnecessary. Regulated medical wastes generated during the care of the CJD patient can be managed using the same strategies as wastes generated during the care of other patients. After decontamination, these wastes may then be disposed of in a sanitary landfill or discharged to the sanitary sewer, as appropriate.
# Part II. Recommendations for Environmental Infection Control in Health-Care Facilities
# A. Rationale for Recommendations
As in previous CDC guidelines, each recommendation is categorized on the basis of existing scientific data, theoretic rationale, applicability, and possible economic benefit. The recommendations are evidence-based wherever possible. However, certain recommendations are derived from empiric infection-control or engineering principles, theoretic rationale, or from experience gained from events that cannot be readily studied (e.g., floods). The HICPAC system for categorizing recommendations has been modified to include a category for engineering standards and actions required by state or federal regulations.
Guidelines and standards published by the American Institute of Architects (AIA), the American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE), and the Association for the Advancement of Medical Instrumentation (AAMI) form the basis of certain recommendations. These standards reflect a consensus of expert opinions and extensive consultation with agencies of the U.S. Department of Health and Human Services. Compliance with these standards is usually voluntary. However, state and federal governments often adopt these standards as regulations. For example, the standards from AIA regarding construction and design of new or renovated health-care facilities have been adopted by reference by >40 states. Certain recommendations have two category ratings (e.g., Categories IA and IC or Categories IB and IC), indicating the recommendation is evidence-based as well as a standard or regulation.
# B. Rating Categories
Recommendations are rated according to the following categories:
Category IA. Strongly recommended for implementation and strongly supported by well-designed experimental, clinical, or epidemiologic studies.
Category IB. Strongly recommended for implementation and supported by certain experimental, clinical, or epidemiologic studies and a strong theoretic rationale.
Category IC. Required by state or federal regulation, or representing an established association standard.
Category II. Suggested for implementation and supported by suggestive clinical or epidemiologic studies or a theoretic rationale.
Unresolved issue. No recommendation is offered; insufficient evidence or no consensus exists regarding efficacy.
• * Develop and implement a maintenance schedule for ACH, pressure differentials, and filtration efficiencies using facility-specific data as part of the multidisciplinary risk assessment. Take into account the age and reliability of the system.
• * Document these parameters, especially the pressure differentials.
3. Engineer humidity controls into the HVAC system and monitor the controls to ensure proper moisture removal. 120 Category IC (AIA: 7.31.D9)
• * Locate duct humidifiers upstream from the final filters.
• * Incorporate a water-removal mechanism into the system.
• * Locate all duct takeoffs sufficiently downstream from the humidifier so that moisture is completely absorbed.
4. Incorporate steam humidifiers, if possible, to reduce the potential for microbial proliferation within the system, and avoid use of cool-mist humidifiers. Category II
5. Ensure that air intakes and exhaust outlets are located properly in construction of new facilities and renovation of existing facilities.
• * Locate exhaust outlets from contaminated areas above roof level to minimize recirculation of exhausted air.
6. Maintain air intakes and inspect filters periodically to ensure proper operation. 3,120,249,250,[273][274][275]277 Category IC (AIA: 7.31.D8)
7. Bag dust-filled filters immediately upon removal to prevent dispersion of dust and fungal spores during transport within the facility. 106, 221 Category IB
• * Seal or close the bag containing the discarded filter.
• * Discard spent filters as regular solid waste, regardless of the area from which they were removed. 221
8. Remove bird roosts and nests near air intakes to prevent mites and fungal spores from entering the ventilation system. 3,98,119 Category IB
9. Prevent dust accumulation by cleaning air-duct grilles in accordance with facility-specific procedures and schedules when rooms are not occupied by patients. 21,120,249,250,[273][274][275]277 Category IC, II (AIA: 7.31.D10)
10. Periodically measure output to monitor system function; clean ventilation ducts as part of routine HVAC maintenance to ensure optimum performance. 120,263,264 Category II (AIA: 7.31.D10)
C. Use portable, industrial-grade HEPA filter units capable of filtration rates in the range of 300-800 ft³/min to augment removal of respirable particles as needed (the sketch following these recommendations illustrates the underlying air-change arithmetic). 219 Category II
M. Whenever feasible, design and install fixed backup ventilation systems for new or renovated construction for PE rooms, AII rooms, operating rooms, and other critical care areas identified by ICRA. 120
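The air-change arithmetic behind these filtration and ventilation parameters can be made concrete. The following Python sketch is illustrative only: the 500 ft³/min rating and 1,600 ft³ room volume are assumed values, and the removal-time formula is the ideal-mixing dilution model underlying Appendix B, Table B.1, not a substitute for the measured performance of an installed system.

```python
import math

PA_PER_INCH_WG = 249.1  # pascals per inch of water gauge (0.01 in. w.g. ~ 2.5 Pa)

def added_ach(airflow_cfm, room_volume_ft3):
    """Air changes per hour contributed by a unit, assuming ideal mixing:
    ACH = (cubic feet per minute x 60) / room volume in cubic feet."""
    return airflow_cfm * 60.0 / room_volume_ft3

def removal_time_minutes(ach, removal_efficiency=0.99):
    """Minutes to achieve the stated removal of airborne contaminants under
    ideal mixing: t = 60 x ln(C1/C2) / ACH."""
    return 60.0 * math.log(1.0 / (1.0 - removal_efficiency)) / ach

# Assumed example: a portable unit rated at 500 ft3/min in a 1,600 ft3 room
ach = added_ach(500, 1600)                                   # 18.75 ACH
print(f"{ach:.1f} ACH added")
print(f"99% removal in {removal_time_minutes(ach):.0f} min")          # ~15 min
print(f"99.9% removal in {removal_time_minutes(ach, 0.999):.0f} min") # ~22 min
```

At the ≥12 ACH specified for PE rooms, the same model gives roughly 23 minutes for 99% removal, which is why rooms are allowed to "air out" between uses rather than being assumed clean immediately.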
A. Establish a multidisciplinary team that includes infection-control staff to coordinate demolition, construction, and renovation projects and consider proactive preventive measures at the inception; produce and maintain summary statements of the team's activities. 17,19,20,97,109,120,249,250,[273][274][275][276][277] Category IB, IC (AIA: 5.1)
B. Educate both the construction team and the health-care staff in immunocompromised patient-care areas regarding the airborne infection risks associated with construction projects, dispersal of fungal spores during such activities, and methods to control the dissemination of fungal spores. 3, 249, 250, 273-277, 1428-1432 Category IB
C. Incorporate mandatory adherence agreements for infection control into construction contracts, with penalties for noncompliance and mechanisms to ensure timely correction of problems. 3,120,249,[273][274][275][276][277] Category IC (AIA: 5.1)
D. Establish and maintain surveillance for airborne environmental disease (e.g., aspergillosis) as appropriate during construction, renovation, repair, and demolition activities to ensure the health and safety of immunocompromised patients. 3,64,65,79 Category IB
1. Using active surveillance, monitor for airborne fungal infections in immunocompromised patients. 3,9,64,65 Category IB
2. Periodically review the facility's microbiologic, histopathologic, and postmortem data to identify additional cases. 3,9,64,65 Category IB
3. If cases of aspergillosis or other health-care associated airborne fungal infections occur, aggressively pursue the diagnosis with tissue biopsies and cultures as feasible. 3,64,65,79,249,[273][274][275][276][277] Category IB
E. Implement infection-control measures relevant to construction, renovation, maintenance, demolition, and repair. 96,97,120,276,277 Category IB, IC (AIA: 5.1, 5.2)
1. Before the project begins, perform an ICRA to define the scope of the project and the need for barrier measures. 96,97,120,249,[273][274][275][276][277]
• * Determine whether immunocompromised patients may be at risk for exposure to fungal spores from dust generated during the project. 20, 109, 273-275, 277
• * Develop a contingency plan to prevent such exposures. 20,109,[273][274][275]277 Category IB, IC (AIA: 5.1)
2. Implement infection-control measures for external demolition and construction activities. 50,249,[273][274][275][276][277]283
• * Determine whether the facility can operate temporarily on recirculated air; if feasible, seal off adjacent air intakes.
• * If this is not possible or practical, check the low-efficiency (roughing) filter banks frequently and replace them as needed to avoid buildup of particulates.
• * Seal windows and reduce, wherever possible, other sources of outside air intrusion (e.g., open doors in stairwells and corridors), especially in PE areas. Category IB
3. Avoid damaging the underground water distribution system (i.e., buried pipes) to prevent soil and dust contamination of the water. 120,305 Category IB, IC (AIA: 5.1)
4. Implement infection-control measures for internal construction activities. 20,49,97,120,249,[273][274][275][276][277]
• * Construct barriers to prevent dust from construction areas from entering patient-care areas; ensure that barriers are impermeable to fungal spores and in compliance with local fire codes. 20,49,97,120,284,312,713,1431
• * Block and seal off return air vents if rigid barriers are used for containment. 120,276,277
• * Implement dust-control measures on surfaces and divert pedestrian traffic away from work zones. 20, 49, 97, 120
• * Relocate patients whose rooms are adjacent to work zones, depending upon their immune status, the scope of the project, the potential for generation of dust or water aerosols, and the methods used to control these aerosols. 49,120,281 120,249,[273][274][275][276][277] Category IB
h. Clean work zones and their entrances daily by
• wet-wiping tools and tool carts before their removal from the work zone;
• placing mats with tacky surfaces inside the entrance; and
• covering debris and securing this covering before removing debris from the work zone. 120,249,[273][274][275][276][277] Category IB
i. In patient-care areas, for major repairs that include removal of ceiling tiles and disruption of the space above the false ceiling, use plastic sheets or prefabricated plastic units to contain dust; use a negative pressure system within this enclosure to remove dust; and either pass air through an industrial-grade, portable HEPA filter capable of filtration rates ranging from 300-800 ft³/min, or exhaust air directly to the outside. 49,276,277,281,309
• * properly constructing windows, doors, and intake and exhaust ports;
• * maintaining ceilings that are smooth and free of fissures, open joints, and crevices;
• * sealing walls above and below the ceiling; and
• * monitoring for leakage and making necessary repairs. 3,111,120,317,318 Category IB, IC (AIA: 7.2.D3)
3. Ventilate the room to maintain ≥12 ACH. 3,9,120,241,317,318 Category IC (AIA: 7.2.D)
4. Locate air supply and exhaust grilles so that clean, filtered air enters from one side of the room, flows across the patient's bed, and exits from the opposite side of the room. 3,120,317,318 Category IC (AIA: 7.31.D1)
5. Maintain positive room air pressure (≥2.5 Pa [0.01-inch water gauge]) in relation to the corridor. 3,35,120,317,318 Category IB, IC (AIA: Table 7.2)
6. Maintain airflow patterns and monitor these on a daily basis by using permanently installed visual means of detecting airflow in new or renovated construction, or by using other visual methods (e.g., flutter strips or smoke tubes) in existing PE units. Document the monitoring results. 120,273 Category IC (AIA: 7.2.D6)
7. Install self-closing devices on all room exit doors in protective environments. 120
3. Place smallpox patients in negative pressure rooms at the onset of their illness, preferably using a room with an anteroom if available. 6 Category II
D. No recommendation is offered regarding negative pressure or isolation rooms for patients with Pneumocystis carinii pneumonia. 126,131,132 Unresolved issue
E. Maintain backup ventilation equipment (e.g., portable units for fans or filters) for emergency provision of ventilation requirements for AII rooms, and take immediate steps to restore the fixed ventilation system function. 4,120,278 Category IC (AIA: 5.1)
# C.V. Infection-Control and Ventilation Requirements for Operating Rooms
A. Implement environmental infection-control and ventilation measures for operating rooms.
1. Maintain positive-pressure ventilation with respect to corridors and adjacent areas. 7,120,356 Category IB, IC (AIA: Table 7.2)
2. Maintain ≥15 ACH, of which ≥3 ACH should be fresh air. 120,357,358 Category IC (AIA: Table 7.2)
3. Filter all recirculated and fresh air through the appropriate filters, providing 90% efficiency (dust-spot testing) at a minimum. 120,362 Category IC (AIA: Table 7.3)
4. In rooms not engineered for horizontal laminar airflow, introduce air at the ceiling and exhaust air near the floor. 120,357,359
B. Allow adequate time for ACH to clean the air after extubation of infectious TB patients (Appendix B, Table B.1) because extubation is a cough-producing procedure. 4,358 Category IB
C. Use portable, industrial-grade HEPA filters temporarily for supplemental air cleaning during intubation and extubation for infectious TB patients who require surgery. 4,219,358 Category II
1. Position the units appropriately so that all room air passes through the filter; obtain engineering consultation to determine the appropriate placement of the unit. 4 Category II
2. Switch the portable unit off during the surgical procedure. Category II
3. Provide fresh air as per ventilation standards for operating rooms; portable units do not meet the requirements for the number of fresh ACH. 120,215,219 Category II
D. If possible, schedule infectious TB patients as the last surgical cases of the day to maximize the time available for removal of airborne contamination. Category II
E. No recommendation is offered for performing orthopedic implant operations in rooms supplied with laminar airflow. 362,364 Unresolved issue
F. Maintain backup ventilation equipment (e.g., portable units for fans or filters) for emergency provision of ventilation requirements for operating rooms, and take immediate steps to restore the fixed ventilation system function. 68,120,278,372 Category IB, IC (AIA: 5.1)
# C.VI. Other Potential Infectious Aerosol Hazards in Health-Care Facilities
A. In settings where surgical lasers are used, wear appropriate personal protective equipment, including N95 or N100 respirators, to minimize exposure to laser plumes. 347, 378, 389 Category IC (OSHA; 29 CFR 1910.134,139)
B. Use central wall suction units with in-line filters to evacuate minimal laser plumes. 378,382,386,389 Category II
C. Use a mechanical smoke evacuation system with a high-efficiency filter to manage the generation of large amounts of laser plume when ablating tissue infected with human papillomavirus (HPV) or performing procedures on a patient with extrapulmonary TB. 4,382,[389][390][391][392] Category II
# D. Recommendations-Water
# D.I. Controlling the Spread of Waterborne Microorganisms
A. Practice hand hygiene to prevent the hand transfer of waterborne pathogens, and use barrier precautions (e.g., gloves) as defined by other guidelines. 6,464,577,586,592,1364
• * Flush each outlet until a chlorine odor is detected.
• * Maintain the elevated chlorine concentration in the system for ≥2 hrs (but ≤24 hrs).
4. Use a very thorough flushing of the water system instead of chlorination if a highly chlorine-resistant microorganism (e.g., Cryptosporidium spp.) is suspected as the water contaminant. Category II
F. Flush and restart equipment and fixtures according to manufacturers' instructions. Category II
G. Change the pretreatment filter and disinfect the dialysis water system with an EPA-registered product to prevent colonization of the reverse osmosis membrane and downstream microbial contamination. 721 Category II
H. Run water softeners through a regeneration cycle to restore their capacity and function. Category II
I. If the facility has a water-holding reservoir or water-storage tank, consult the facility engineer or local health department to determine whether this equipment needs to be drained, disinfected with an EPA-registered product, and refilled. Category II
J. Implement facility management procedures to manage a sewage system failure or flooding (e.g., arranging with other health-care facilities for temporary transfer of patients or provision of services), and establish communications with the local municipal water utility and the local health department to ensure that advisories are received in a timely manner upon release. 713
These recommendations refer to the spraying or fogging of chemicals (e.g., formaldehyde, phenol-based agents, or quaternary ammonium compounds) as a way to decontaminate environmental surfaces or disinfect the air in patient rooms. The recommendation against fogging was based on studies in the 1970s that reported a lack of microbicidal efficacy (e.g., use of quaternary ammonium compounds in mist applications) as well as adverse effects on health-care workers and others in facilities where these methods were used. Furthermore, some of these chemicals are not EPA-registered for use in fogging-type applications. The 2003 recommendations still apply; however, CDC does not yet make a recommendation regarding these newer technologies. This issue will be revisited as additional evidence becomes available.
G. Avoid large-surface cleaning methods that produce mists or aerosols or disperse dust in patient-care areas. 9,20,109,272
# H.II. Animal-Assisted Activities, Animal-Assisted Therapy, and Resident Animal Programs
A. Avoid selection of nonhuman primates and reptiles in animal-assisted activities, animal-assisted therapy, or resident animal programs. 1360-1362 Category IB
B. Enroll animals that are fully vaccinated for zoonotic diseases and that are healthy, clean, well-groomed, and negative for enteric parasites or that otherwise have completed recent anthelminthic treatment under the regular care of a veterinarian. 1349, 1360 Category II
C. Enroll animals that are trained with the assistance or under the direction of individuals who are experienced in this field. 1360 Category II
D. Ensure that animals are handled by persons trained in providing activities or therapies safely, and who know the animals' health status and behavior traits. 1349,1360 Category II
E. Take prompt action when an incident of biting or scratching by an animal occurs during an animal-assisted activity or therapy.
1. Remove the animal permanently from these programs. 1360 Category II
2. Report the incident promptly to appropriate authorities (e.g., infection-control staff, animal program coordinator, or local animal control). 1360 Category II
3. Promptly clean and treat scratches, bites, or other accidental breaks in the skin. Category II
F. Perform an ICRA and work actively with the animal handler prior to conducting an animal-assisted activity or therapy to determine whether the session should be held in a public area of the facility or in individual patient rooms. 1349,1360 Category II
G. Take precautions to mitigate allergic responses to animals. Category II
1. Minimize shedding of animal dander by bathing animals <24 hours before a visit. 1360 Category II
2. Groom animals to remove loose hair before a visit, or use a therapy animal cape. 1358 Category II
H. Use routine cleaning protocols for housekeeping surfaces after therapy sessions. Category II
I. Restrict resident animals, including fish in fish tanks, from access to or placement in patient-care areas, food preparation areas, dining areas, laundry, central sterile supply areas, sterile and clean supply storage areas, medication preparation areas, operating rooms, isolation areas, and PE areas. Category II
J. Establish a facility policy for regular cleaning of fish tanks, rodent cages, bird cages, and any other animal dwellings, and assign this cleaning task to a nonpatient-care staff member; avoid splashing tank water or contaminating environmental surfaces with animal bedding. Category II
# H.III. Protective Measures for Immunocompromised Patients
A. Advise patients to avoid contact with animal feces and body fluids such as saliva, urine, or solid litter box material. 8 Category II
B. Promptly clean and treat scratches, bites, or other wounds that break the skin. 8 Category II
C. Advise patients to avoid direct or indirect contact with reptiles. 1340 Category IB
D. Conduct a case-by-case assessment to determine whether animal-assisted activities or animal-assisted therapy programs are appropriate for immunocompromised patients. 1349 Category II
E. No recommendation is offered regarding permitting pet visits to terminally ill immunosuppressed patients outside their PE units. Unresolved issue
# H.IV. Service Animals
Edit [February 2017]: An * indicates recommendations that were renumbered for clarity. The renumbering does not constitute change to the intent of the recommendations.
A. Avoid providing access to nonhuman primates and reptiles as service animals. 1340, 1362 Category IB
B. Allow service animals access to the facility in accordance with the Americans with Disabilities Act of 1990, unless the presence of the animal creates a direct threat to other persons or a fundamental alteration in the nature of services. 1366,1376 Category IC (U.S. Department of Justice: 28 CFR § 36.302)
C. When a decision must be made regarding a service animal's access to any particular area of the health-care facility, evaluate the service animal, the patient, and the health-care situation on a case-by-case basis to determine whether significant risk of harm exists and whether reasonable modifications in policies and procedures will mitigate this risk. 1376 Category IC (Justice: 28 CFR § 36.208 and App. B)
D. If a patient must be separated from his or her service animal while in the health-care facility
• * ascertain from the person what arrangements have been made for supervision or care of the animal during this period of separation; and
• * make appropriate arrangements to address the patient's needs in the absence of the service animal. Category II
# H.V. Animals as Patients in Human Health-Care Facilities
A. Develop health-care facility policies to address the treatment of animals in human health-care facilities.
1. Use the multidisciplinary team approach to policy development, including public/media relations, in order to disclose and discuss these activities. Category II
2. Exhaust all veterinary facility, equipment, and instrument options before undertaking the procedure. Category II
3. Ensure that the care of the animal is supervised by a licensed veterinarian. Category II
B. When animals are treated in human health-care facilities, avoid treating animals in operating rooms or other patient-care areas where invasive procedures are performed (e.g., cardiac catheterization laboratories or invasive nuclear medicine areas). Category II
C. Schedule the animal procedure for the last case of the day for the area, at a time when human patients are not scheduled to be in the vicinity. Category II
D. Adhere strictly to standard precautions. Category II
E. Clean and disinfect environmental surfaces thoroughly, using an EPA-registered product, in the room after the animal is removed. Category II
F. Allow sufficient ACH to clean the air and help remove airborne dander, microorganisms, and allergens [Appendix B, Table B.1].
# Glossary of Terms
Biosafety levels: combinations of laboratory practices, safety equipment, and facilities appropriate to the hazards posed to laboratory personnel by the microbiological agents they work with. There are four biosafety levels based on the hazards associated with the various microbiological agents.
BOD5: the amount of dissolved oxygen consumed in five days by biological processes breaking down organic matter.
Bonneting: a floor cleaning method for either carpeted or hard surface floors that uses a circular motion of a large fibrous disc to lift and remove soil and dust from the surface.
Capped spur: a pipe leading from the water recirculating system to an outlet that has been closed off ("capped"). A capped spur cannot be flushed, and it might not be noticed unless the surrounding wall is removed.
CFU/m³: colony-forming units per cubic meter (of air).
Chlamydospores: thick-walled, typically spherical or ovoid resting spores asexually produced by certain types of fungi from cells of the somatic hyphae.
Chloramines: compounds containing nitrogen, hydrogen, and chlorine. These are formed by the reaction between hypochlorous acid (HOCl) and ammonia (NH3) and/or organic amines in water. The formation of chloramines in drinking water treatment extends the disinfecting power of chlorine. The term is also referred to as Combined Available Chlorine.
Cleaning: the removal of visible soil and organic contamination from a device or surface, using either the physical action of scrubbing with a surfactant or detergent and water, or an energy-based process (e.g., ultrasonic cleaners) with appropriate chemical agents.
Coagulation-flocculation: coagulation is the clumping of particles that results in the settling of impurities. It may be induced by coagulants (e.g., lime, alum, and iron salts). Flocculation in water and wastewater treatment is the agglomeration or clustering of colloidal and finely divided suspended matter after coagulation, by gentle stirring by either mechanical or hydraulic means, such that the matter can be separated from water or sewage.
Commissioning (a room): testing a system or device to ensure that it meets the pre-use specifications as indicated by the manufacturer or predetermined standard, or air sampling in a room to establish a pre-occupancy baseline standard of microbial or particulate contamination. The term is also referred to as benchmarking.
Completely packaged: functionally packaged, as for laundry.
Conidia: asexual spores of fungi borne externally.
Conidiophores: specialized hyphae that bear conidia in fungi.
Conditioned space: that part of a building that is heated or cooled, or both, for the comfort of the occupants.
Contaminant: an unwanted airborne constituent that may reduce the acceptability of air.
Convection: the transfer of heat or other atmospheric properties within the atmosphere or in the airspace of an enclosure by the circulation of currents from one region to another, especially by such motion directed upward.
Cooling tower: a structure engineered to receive accumulated heat from ventilation systems and equipment and transfer this heat to water, which then releases the stored heat to the atmosphere through evaporative cooling.
Critical item (medical instrument): a medical instrument or device that contacts normally sterile areas of the body or enters the vascular system.
There is a high risk of infection from such devices if they are microbiologically contaminated prior to use. These devices must be sterilized before use.
Dead legs: areas in the water system where water stagnates. A dead leg is a pipe or spur leading from the water recirculating system to an outlet that is used infrequently, resulting in inadequate flow of water from the recirculating system to the outlet. This inadequate flow reduces the perfusion of heat or chlorine into this part of the water distribution system, thereby adversely affecting the disinfection of the water system in that area.
Deionization: removal of ions from water by exchange with other ions associated with fixed charges on a resin bed. Cations are usually removed in exchange for H+ ions; OH− ions are exchanged for anions.
Detritus: particulate matter produced by or remaining after the wearing away or disintegration of a substance or tissue.
Dew point: the temperature at which a gas or vapor condenses to form a liquid; the point at which moisture begins to condense out of the air. At dew point, air is cooled to the point where it is at 100% relative humidity or saturation.
Dialysate: the aqueous electrolyte solution, usually containing dextrose, used to make a concentration gradient between the solution and blood in the hemodialyzer (dialyzer).
Dialyzer: a device that consists of two compartments (blood and dialysate) separated by a semipermeable membrane. A dialyzer is usually referred to as an artificial kidney.
Diffuser: the grille plate that disperses the air stream coming into the conditioned air space.
Direct transmission: direct body surface-to-body surface contact and physical transfer of microorganisms between a susceptible host and an infected/colonized person, or exposure to a cloud of infectious particles within 3 feet of the source; the aerosolized particles are >5 μm in size.
Disability: as defined by the Americans with Disabilities Act, a disability is any physical or mental impairment that substantially limits one or more major life activities, including but not limited to walking, talking, seeing, breathing, hearing, or caring for oneself.
Disinfection: a generally less lethal process of microbial inactivation (compared to sterilization) that eliminates virtually all recognized pathogenic microorganisms but not necessarily all microbial forms (e.g., bacterial spores).
Drain pans: pans that collect water within the HVAC system and remove it from the system. Condensation results when air and steam come together.
Drift: circulating water lost from the cooling tower in the form of liquid droplets entrained in the exhaust air stream (i.e., exhaust aerosols from a cooling tower).
Drift eliminators: an assembly of baffles or labyrinth passages through which the air passes prior to its exit from the cooling tower. The purpose of a drift eliminator is to remove entrained water droplets from the exhaust air.
Droplets: particles of moisture, such as those generated when a person coughs or sneezes, or when water is converted to a fine mist by a device such as an aerator or shower head. These particles may contain infectious microorganisms. Intermediate in size between drops and droplet nuclei, these particles tend to settle out from the air quickly, so that any risk of disease transmission is generally limited to persons in close proximity to the droplet source.
Droplet nuclei: particles sufficiently small (1-5 μm in diameter) to remain airborne indefinitely and cause infection when a susceptible person is exposed at or beyond 3 feet of the source of these particles.
Dual duct system: an HVAC system that consists of parallel ducts that produce a cold air stream in one and a hot air stream in the other.
Dust: an air suspension of particles (aerosol) of any solid material, usually with particle sizes ≤100 μm in diameter.
Dust-spot test: a procedure that uses atmospheric air or a defined dust to measure a filter's ability to remove particles. A photometer is used to measure air samples on either side of the filter, and the difference is expressed as a percentage of particles removed.
Effective leakage area: the area through which air can enter or leave the room. This does not include supply, return, or exhaust ducts. The smaller the effective leakage area, the better isolated the room.
Endotoxin: the lipopolysaccharides of gram-negative bacteria, the toxic character of which resides in the lipid portion. Endotoxins generally produce pyrogenic reactions in persons exposed to these bacterial components.
Enveloped virus: a virus whose outer surface is derived from a membrane of the host cell (either nuclear or the cell's outer membrane) during the budding phase of the maturation process. This membrane-derived material contains lipid, a component that makes these viruses sensitive to the action of chemical germicides.
Evaporative condenser: a wet-type, heat-rejection unit that produces large volumes of aerosols during the process of removing heat from conditioned space air.
Exhaust air: air removed from a space and not reused therein.
Exposure: the condition of being subjected to something (e.g., infectious agents) that could have a harmful effect.
Fastidious: having complex nutritional requirements for growth, as in microorganisms.
Fill: that portion of a cooling tower which makes up its primary heat transfer surface. Fill is alternatively known as "packing."
Finished water: treated, or potable, water.
Fixed room-air HEPA recirculation systems: nonmobile devices or systems that remove airborne contaminants by recirculating air through a HEPA filter. These may be built into the room and permanently ducted, or mounted to a wall or ceiling within the room.
# Air Sampling for Aerosols Containing Legionellae
Air sampling is an insensitive means of detecting Legionella pneumophila and is of limited practical value in environmental sampling for this pathogen. In certain instances, however, it can be used to
a. demonstrate the presence of legionellae in aerosol droplets associated with suspected bacterial reservoirs;
b. define the role of certain devices [e.g., showers, faucets, decorative fountains, or evaporative condensers] in disease transmission; and
c. quantitate and determine the size of the droplets containing legionellae. 1436
Stringent controls and calibration are necessary when sampling is used to determine particle size and numbers of viable bacteria. 1437 Samplers should be placed in locations where human exposure to aerosols is anticipated, and investigators should wear a NIOSH-approved respirator (e.g., N95 respirator) if sampling involves exposure to potentially infectious aerosols. Methods used to sample air for legionellae include impingement in liquid, impaction on solid medium, and sedimentation using settle plates. 1436 Chemical Corps-type all-glass impingers (AGIs) with the stem 30 mm from the bottom of the flask have been used successfully to sample for legionellae. 1436
1436 Because of the velocity at which air samples are collected, clumps tend to become fragmented, leading to a more accurate count of the bacteria present in the air. The disadvantages of this method are
a. the velocity of collection tends to destroy some vegetative cells;
b. the method does not differentiate particle sizes; and
c. AGIs are easily broken in the field.
Yeast extract broth (0.25%) is the recommended liquid medium for AGI sampling of legionellae; 1437 standard methods for water samples can be used to culture these samples.
Andersen samplers are viable-particle samplers in which particles pass through jet orifices of decreasing size in cascade fashion until they impact on an agar surface. 1218 The agar plates are then removed and incubated. The stage distribution of the legionellae should indicate the extent to which the bacteria would have penetrated the respiratory system. The advantages of this sampling method are
a. the equipment is more durable during use;
b. the sampler can determine the number and size of droplets containing legionellae;
c. the agar plates can be placed directly in an incubator with no further manipulations; and
d. both selective and nonselective BCYE agar can be used.
If the samples must be shipped to a laboratory, they should be packed and shipped without refrigeration as soon as possible.
# Calculation of Air Sampling Results
Assuming that each colony on the agar plate is the growth from a single bacteria-carrying particle, the contamination of the air being sampled is determined from the number of colonies counted. The airborne microorganisms may be reported in terms of the number per cubic foot of air sampled. The following formulas can be applied to convert colony counts to organisms per cubic foot of air sampled. 1218
For solid agar impactor samplers:
N = C / (R × P)
where
N = number of organisms collected per cubic foot of air sampled
C = total plate count
R = airflow rate in cubic feet per minute
P = duration of the sampling period in minutes
For liquid impingers:
N = (C × V) / (Q × P × R)
where
C = total number of colonies from all aliquots plated
V = final volume in mL of collecting media
Q = total number of mL plated
P, R, and N are defined as above
3. To satisfy exhaust needs, replacement air from outside is necessary. Table B.3 does not attempt to describe specific amounts of outside air to be supplied to individual spaces except for certain areas such as those listed. Distribution of the outside air, added to the system to balance required exhaust, shall be as required by good engineering practice.
4. The number of air changes may be reduced when the room is unoccupied if provisions are made to ensure that the number of air changes indicated is reestablished any time the space is being utilized. Adjustments shall include provisions so that the direction of air movement remains the same when the number of air changes is reduced. Areas not indicated as having continuous directional control may have ventilation systems shut down when the space is unoccupied and ventilation is not otherwise needed.
5. Air from areas with contamination and/or odor problems shall be exhausted to the outside and not recirculated to other areas. Note that individual circumstances may require special consideration for air exhaust to the outside.
6. Because of cleaning difficulty and the potential for buildup of contamination, recirculating room units shall not be used in areas marked "No."
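Returning to the air-sampling calculation above: the two conversion formulas are straightforward to script. The following is a minimal sketch in Python; it is an illustration only, not part of the guideline, and all input values in the examples are hypothetical.

```python
# A minimal sketch of the two air-sampling conversion formulas above.
# Illustration only; the sample values below are hypothetical.

def impactor_organisms_per_cuft(total_plate_count, airflow_cfm, minutes):
    """Solid agar impactor samplers: N = C / (R x P)."""
    return total_plate_count / (airflow_cfm * minutes)

def impinger_organisms_per_cuft(colonies_all_aliquots, final_volume_ml,
                                ml_plated, airflow_cfm, minutes):
    """Liquid impingers: N = (C x V) / (Q x P x R)."""
    return (colonies_all_aliquots * final_volume_ml) / (
        ml_plated * minutes * airflow_cfm)

# Hypothetical impactor run: 42 colonies, 1.0 cfm, 30-minute sample.
print(impactor_organisms_per_cuft(42, 1.0, 30))           # 1.4 organisms/ft^3

# Hypothetical impinger run: 120 colonies from 2 mL plated out of a
# 20 mL final collecting volume, sampled at 0.44 cfm for 30 minutes.
print(impinger_organisms_per_cuft(120, 20, 2, 0.44, 30))  # ~90.9 organisms/ft^3
```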
Isolation rooms may be ventilated by reheat induction units in which only the primary air supplied from a central system passes through the reheat unit. Gravity-type heating or cooling units such as radiators or convectors shall not be used in special care areas.
7. The ranges listed are the minimum and maximum limits where control is specifically needed. See A8.31.D in the AIA guideline (reference 120) for additional information.
8. Where temperature ranges are indicated, the systems shall be capable of maintaining the rooms at any point within the range. A single figure indicates a heating or cooling capacity of at least the indicated temperature. This is usually applicable where residents may be undressed and require a warmer environment. Nothing in these guidelines shall be construed as precluding the use of temperatures lower than those noted when the residents' comfort and medical conditions make lower temperatures desirable. Unoccupied areas such as storage rooms shall have temperatures appropriate for the function intended.
9. See A8.31.D1 in the AIA guideline (reference 120).
10. Food preparation facilities shall have ventilation systems whose air supply mechanisms are interfaced appropriately with exhaust hood controls or relief vents so that exfiltration or infiltration to or from exit corridors does not compromise the exit corridor restrictions of NFPA 90A, the pressure requirements of NFPA 96, or the maximum defined in the table. The number of air changes may be reduced or varied to any extent required for odor control when the space is not in use.
The water supplied to the facility should also contain <1 coliform bacterium/100 mL. Coliform bacteria are organisms whose presence in the distribution system could indicate fecal contamination. It has been shown that coliform bacteria can colonize biofilms within drinking water systems. Intermittent contamination of a water system with these organisms could lead to colonization of the system. Water samples can be collected from throughout the health-care facility system, including both hot and cold water sources; samples should be cultured by standard methods. 945 If heterotrophic plate counts in samples from the facility water system are higher than those from samples collected at the point of water entry to the building, it can be concluded that the facility water quality has diminished. If biofilms are detected in the facility water system and determined by an epidemiologic and environmental investigation to be a reservoir for health-care associated pathogens, the municipal water supplier could be contacted with a request to provide higher chlorine residuals in the distribution system, or the health-care facility could consider installing a supplemental chlorination system. Sample collection sites for biofilm in health-care facilities include
a. hot water tanks;
b. shower heads; and
c. faucet aerators, especially in immunocompromised patient-care areas.
Swabs should be placed into tubes containing phosphate-buffered water (pH 7.2) or phosphate-buffered saline, shipped to the laboratory under refrigeration, and processed within 24 hours of collection. Samples are suspended by vortexing with sterile glass beads and plated onto a nonselective medium (e.g., Plate Count Agar or R2A medium) and selective media (e.g., media for Legionella spp. isolation) after serial dilution.
If the plate counts are elevated above the levels in the water (i.e., comparing the plate count per square centimeter of swabbed surface to the plate count per milliliter of water), then biofilm formation can be suspected. In the case of an outbreak, it would be advisable to isolate organisms from these plates to determine whether the suspect organisms are present in the biofilm or water samples and compare them to the organisms isolated from patient specimens.
# Water and Dialysate Sampling Strategies in Dialysis
In order to detect the low total viable heterotrophic plate counts outlined by the current AAMI standards for water and dialysate in dialysis settings, it is necessary to use standard quantitative culture techniques with appropriate sensitivity levels. 792,832,833 The membrane filter technique is particularly suited to this application because it permits large volumes of water to be assayed. 792,834 Because the membrane filter technique may not be readily available in clinical laboratories, the spread plate assay can be used as an alternative. 834 If the spread plate assay is used, however, the standard prohibits the use of a calibrated loop when applying sample to the plate. 792 The prohibition is based on the low sensitivity of the calibrated loop: a standard calibrated loop transfers 0.001 mL of sample to the culture medium, so that the minimum sensitivity of the assay is 1,000 CFU/mL. This level of sensitivity is unacceptable when the maximum allowable limit for microorganisms is 200 CFU/mL. Therefore, when the spread plate method is used, a pipette must be used to place 0.1-0.5 mL of water on the culture medium.
The current AAMI standard specifically prohibits the use of nutrient-rich media (e.g., blood agar and chocolate agar) in dialysis water and dialysate assays because these culture media are too rich for the growth of the naturally occurring organisms found in water. 792 Debate continues within AAMI, however, as to the most appropriate culture medium and incubation conditions to be used. The original clinical observations on which the microbiological requirements of this standard were based used Standard Methods Agar (SMA), a medium containing relatively few nutrients. 666 The use of tryptic soy agar (TSA), a general-purpose medium for isolating and cultivating microorganisms, was recommended in later versions of the standard because it was thought to be more appropriate for culturing bicarbonate-containing dialysate. 788,789,835 Moreover, culturing systems based on TSA are readily available from commercial sources. Several studies, however, have shown that the use of nutrient-poor media, such as R2A, results in an increased recovery of bacteria from water. 1451,1452 The original standard also specified incubation for 48 hours at 95°F-98.6°F (35°C-37°C) before enumeration of bacterial colonies. Extending the culturing time to up to 168 hours (7 days) and using incubation temperatures of 73.4°F-82.4°F (23°C-28°C) have also been shown to increase the recovery of bacteria. 1451,1452 Other investigators, however, have not found such clear-cut differences between culturing techniques. 835,1453 After considerable discussion, the AAMI Committee has not reached a consensus regarding changes in the assay technique, and the use of TSA or its equivalent for 48 hours at 95°F-98.6°F (35°C-37°C) remains the recommended method.
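The sensitivity reasoning above reduces to simple arithmetic: the smallest detectable concentration is one colony divided by the volume plated. A minimal, illustrative Python sketch follows; the plate counts used are hypothetical, and this is not a substitute for the AAMI assay protocol.

```python
# Illustrative arithmetic behind the spread-plate requirements above.
# Hypothetical values; not a substitute for the AAMI assay protocol.

AAMI_WATER_LIMIT_CFU_PER_ML = 200.0  # maximum allowable for dialysis water

def min_detectable_cfu_per_ml(volume_plated_ml):
    """One colony on the plate corresponds to 1 / (volume plated) CFU/mL."""
    return 1.0 / volume_plated_ml

def cfu_per_ml(colonies, volume_plated_ml):
    """Convert a plate count back to a concentration."""
    return colonies / volume_plated_ml

print(min_detectable_cfu_per_ml(0.001))  # calibrated loop: 1000.0 CFU/mL (too insensitive)
print(min_detectable_cfu_per_ml(0.5))    # 0.5 mL pipetted sample: 2.0 CFU/mL

# Hypothetical result: 35 colonies from 0.5 mL plated -> 70 CFU/mL, within the limit.
count = cfu_per_ml(35, 0.5)
print(count, count <= AAMI_WATER_LIMIT_CFU_PER_ML)  # 70.0 True
```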
It should be recognized, however, that these culturing conditions may underestimate the bacterial burden in the water and fail to identify the presence of some organisms. Specifically, the recommended method may not detect the presence of various NTM that have been associated with several outbreaks of infection in dialysis units. 31,32 In these instances, however, the high numbers of mycobacteria in the water were related to the total heterotrophic plate counts, each of which was significantly greater than that allowable by the AAMI standard. Additionally, the recommended method will not detect fungi and yeast, which have been shown to contaminate water used for hemodialysis applications. 1454
Biofilm on the surface of the pipes may hide viable bacterial colonies, even when no viable colonies are detected in the water using sensitive culturing techniques. 1455 Many disinfection processes remove biofilm poorly, and a rapid increase in the level of bacteria in the water following disinfection may indicate significant biofilm formation. Therefore, although the results of microbiological surveillance obtained using the test methods outlined above may be useful in guiding disinfection schedules and in demonstrating compliance with AAMI standards, they should not be taken as an indication of the absolute microbiological purity of the water. 792
Endotoxin can be tested by one of two types of assays:
a. a kinetic test method [e.g., colorimetric or turbidimetric]; or
b. a gel-clot assay.
Endotoxin units are assayed by the Limulus Amebocyte Lysate (LAL) method. Because endotoxins differ in their activity on a mass basis, their activity is referenced to a standard Escherichia coli endotoxin. The current standard (EC-6) is prepared from E. coli O113:H10. The relationship between the mass of endotoxin and its activity varies with both the lot of LAL and the lot of control standard endotoxin used. Since standards for endotoxin were harmonized in 1983 with the introduction of EC-5, the relationship between mass and activity of endotoxin has been approximately 5-10 EU/ng. Studies to harmonize standards have led to the measurement of endotoxin units (EU), where 5 EU is equivalent to 1 ng of E. coli O55:B5 endotoxin. 1456
In summary, water used to prepare dialysate and to reprocess hemodialyzers should not contain a total microbial count >200 CFU/mL, as determined by assay on TSA for 48 hours at 96.8°F (36°C), and should not contain >2 endotoxin units (EU) per mL. The dialysate at the end of a dialysis treatment should not contain >2,000 CFU/mL. 31, 32, 668, 789, 792
# Water Sampling Strategies and Culture Techniques for Detecting Legionellae
Legionella spp. are ubiquitous and can be isolated from 20%-40% of freshwater environments, including man-made water systems. 1457,1458 In health-care facilities, where legionellae in potable water rarely result in disease among immunocompromised patients, courses of remedial action are unclear. Scheduled microbiologic monitoring for legionellae remains controversial because the presence of legionellae is not necessarily evidence of a potential for causing disease. 1459 CDC recommends aggressive disinfection measures for cleaning and maintaining devices known to transmit legionellae, but does not recommend regularly scheduled microbiologic assays for the bacteria. 396
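Returning to the endotoxin units discussed above, the mass-activity relationship can be illustrated with a short sketch. The 5 EU/ng potency below is only the nominal reference value cited above; actual potency varies with the lots of LAL and control standard endotoxin, and the measured value in the example is hypothetical.

```python
# Illustrative endotoxin-unit arithmetic; nominal potency only.

NOMINAL_EU_PER_NG = 5.0      # 5 EU ~ 1 ng E. coli O55:B5 endotoxin
WATER_LIMIT_EU_PER_ML = 2.0  # limit for dialysis water (see summary above)

def eu_to_ng(eu, eu_per_ng=NOMINAL_EU_PER_NG):
    """Convert activity (EU) to approximate mass (ng) at a given potency."""
    return eu / eu_per_ng

# A hypothetical LAL result of 1.5 EU/mL is ~0.3 ng/mL at nominal potency
# and falls within the 2 EU/mL limit for dialysis water.
measured = 1.5
print(eu_to_ng(measured), measured <= WATER_LIMIT_EU_PER_ML)  # 0.3 True
```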
However, scheduled monitoring of potable water within a hospital might be considered in certain settings where persons are highly susceptible to illness and mortality from Legionella infection (e.g., hematopoietic stem cell transplantation units and solid organ transplant units). 9 Also, after an outbreak of legionellosis, health officials agree that monitoring is necessary to identify the source and to evaluate the efficacy of biocides or other prevention measures.
Examination of water samples is the most efficient microbiologic method for identifying sources of legionellae and is an integral part of an epidemiologic investigation into health-care associated Legionnaires disease. Because of the diversity of plumbing and HVAC systems in health-care facilities, the number and types of sites to be tested must be determined before collection of water samples. One environmental sampling protocol that addresses sampling site selection in hospitals might serve as a prototype for sampling in other institutions. 1209 Any water source that might be aerosolized should be considered a potential source for transmission of legionellae. The bacteria are rarely found in municipal water supplies and tend to colonize plumbing systems and point-of-use devices. To colonize, legionellae usually require a temperature range of 77°F-108°F (25°C-42.2°C) and are most commonly located in hot water systems. 1460 Legionellae do not survive drying. Therefore, air-conditioning equipment condensate, which frequently evaporates, is not a likely source. 1461
Water samples and swabs from point-of-use devices or system surfaces should be collected when sampling for legionellae (Box C.1). 1437 Swabs of system surfaces allow sampling of biofilms, which frequently contain legionellae. When culturing faucet aerators and shower heads, swabs of surface areas should be collected first; water samples are collected after the aerators or shower heads are removed from their pipes. Collection and culture techniques are outlined (Box C.2). Swabs can be streaked directly onto buffered charcoal yeast extract (BCYE) agar plates if the plates are available at the collection site. If the swabs and water samples must be transported back to a laboratory for processing, immersing individual swabs in sample water minimizes drying during transit. Place swabs and water samples in insulated coolers to protect specimens from temperature extremes.
Box C.1. Potential sampling sites for Legionella spp. in health-care facilities*
• Potable water systems: incoming water main; water softener unit; holding tanks, cisterns; water heater tanks (at the inflows and outflows)
• Potable water outlets, especially those in or near patient rooms: faucets or taps; showers
• Cooling towers and evaporative condensers: makeup water (e.g., added to replace water lost because of evaporation, drift, or leakage); basin (i.e., area under the tower for collection of cooled water); sump (i.e., section of basin from which cooled water returns to the heat source); heat sources (e.g., chillers)
• Humidifiers (e.g., nebulizers): bubblers for oxygen; water used for respiratory therapy equipment
2. Collect culture swabs of internal surfaces of faucets, aerators, and shower heads in a sterile, screw-top container (e.g., a 50 mL plastic centrifuge tube). Submerge each swab in 5-10 mL of sample water taken from the same device from which the sample was obtained.
3. Transport samples and process in a laboratory proficient at culturing water specimens for Legionella spp.
as soon as possible after collection. (Samples may be transported at room temperature but must be protected from temperature extremes. Samples not processed within 24 hours of collection should be refrigerated.)
4. Test samples for the presence of Legionella spp. by using semiselective culture media, using procedures specific to the cultivation and detection of Legionella spp.
o Detection of Legionella spp. antigen by the direct fluorescent antibody technique is not suitable for environmental samples.
o Use of the polymerase chain reaction for identification of Legionella spp. is not recommended until more data regarding the sensitivity and specificity of this procedure are available.
# Procedure for Cleaning Cooling Towers and Related Equipment
I. Perform these steps prior to chemical disinfection and mechanical cleaning.
A. Provide protective equipment to workers who perform the disinfection, to prevent their exposure to the chemicals used for disinfection and to aerosolized water containing Legionella spp. Protective equipment may include full-length protective clothing, boots, gloves, goggles, and a full- or half-face mask that combines a HEPA filter and chemical cartridges to protect against airborne chlorine levels of up to 10 mg/L.
B. Shut off the cooling tower.
1. Shut off the heat source, if possible.
2. Shut off fans, if present, on the cooling tower/evaporative condenser (CT/EC).
3. Shut off the system blowdown (i.e., purge) valve.
4. Shut off the automated blowdown controller, if present, and set the system controller to manual.
5. Keep make-up water valves open.
6. Close building air-intake vents within at least 30 meters of the CT/EC until after the cleaning procedure is complete.
7. Continue operating pumps for water circulation through the CT/EC.
# II. Perform these chemical disinfection procedures.
A. Add a fast-release, chlorine-containing disinfectant in pellet, granular, or liquid form, and follow the safety instructions on the product label. Use EPA-registered products, if available. Examples of disinfectants include sodium hypochlorite (NaOCl) or calcium hypochlorite (Ca[OCl]2), calculated to achieve an initial free residual chlorine (FRC) of 50 mg/L.
a. Monitor the FRC by using an FRC-measuring device with the DPD method (e.g., a swimming-pool test kit), and measure the pH with a pH meter every 15 minutes for 2 hours. Add chlorine as needed to maintain the FRC at ≥10 mg/L. Because the biocidal effect of chlorine is reduced at a higher pH, adjust the pH to 7.5-8.0. The pH may be lowered by using any acid (e.g., muriatic acid or sulfuric acid used for maintenance of swimming pools) that is compatible with the treatment chemicals.
E. Two hours after adding the disinfectant and dispersant, or after the FRC level is stable at ≥10 mg/L, monitor at 2-hour intervals and maintain the FRC at ≥10 mg/L for 24 hours.
F. After the FRC level has been maintained at ≥10 mg/L for 24 hours, drain the system. CT/EC water may be drained safely into the sanitary sewer. Municipal water and sewerage authorities should be contacted regarding local regulations. If a sanitary sewer is not available, consult local or state authorities (e.g., a department of natural resources or environmental protection) regarding disposal of the water. If necessary, the drain-off may be dechlorinated by dissipation or by chemical neutralization with sodium bisulfite.
• sharps [e.g., needles and scalpels]. 2 Category II
B. Consult federal, state, and local regulations to determine if other waste items are considered regulated medical wastes. 967,1407,1408
# I.II. Disposal Plan for Regulated Medical Wastes
A. Develop a plan for the collection, handling, predisposal treatment, and terminal disposal of regulated medical wastes. 967
# I.III. Handling, Transporting, and Storing Regulated Medical Wastes
A. Inform personnel involved in the handling and disposal of potentially infective waste of the possible health and safety hazards; ensure that they are trained in appropriate handling and disposal methods. 967 Category IC (OSHA: 29 CFR 1910.1030 § g.2.i)
B. Manage the handling and disposal of regulated medical wastes generated in isolation areas by using the same methods as for regulated medical wastes from other patient-care areas. 2 Category II
C. Use proper sharps disposal strategies. 967 Category IC (OSHA: 29 CFR 1910.1030 § d.4.iii.A)
1. Use a sharps container capable of maintaining its impermeability after waste treatment to avoid subsequent physical injuries during final disposal. 967 Category IC (OSHA: 29 CFR 1910.1030 § d.4.iii.A)
2. Place disposable syringes with needles, including sterile sharps that are being discarded, scalpel blades, and other sharp items into puncture-resistant containers located as close as practical to the point of use. 967 Category IC (OSHA: 29 CFR 1910.1030 § d.4.iii.A)
3. Do not bend, recap, or break used syringe needles before discarding them into a container. 6,967,1415 Category IC (OSHA: 29 CFR 1910.1030 § d.2.vii and § d.2.vii.A)
D. Store regulated medical wastes awaiting treatment in a properly ventilated area that is inaccessible to vertebrate pests; use waste containers that prevent the development of noxious odors. Category IC (States; AHJ)
E. If treatment options are not available at the site where the medical waste is generated, transport regulated medical wastes in closed, impervious containers to the on-site treatment location or to another facility for treatment as appropriate. Category IC (States; AHJ)
# I.IV. Treatment and Disposal of Regulated Medical Wastes
A. Treat regulated medical wastes by using a method (e.g., steam sterilization, incineration, interment, or an alternative treatment technology) approved by the appropriate authority having jurisdiction (AHJ) (e.g., states, Indian Health Service [IHS], Veterans Affairs [VA]) before disposal in a sanitary landfill. Category IC (States; AHJ)
B. Follow precautions for treating microbiological wastes (e.g., amplified cultures and stocks of microorganisms). 1013 Category IC (DHHS: BMBL)
1. Biosafety level 4 laboratories must inactivate microbiological wastes in the laboratory by using an approved inactivation method (e.g., autoclaving) before transport to and disposal in a sanitary landfill. 1013 Category IC (DHHS: BMBL)
2. Biosafety level 3 laboratories must inactivate microbiological wastes in the laboratory by using an approved inactivation method (e.g., autoclaving) or incinerate them at the facility before transport to and disposal in a sanitary landfill. 1013 Category IC (DHHS: BMBL)
C. Biosafety level 1 and 2 laboratories should develop strategies to inactivate amplified microbial cultures and stocks onsite by using an approved inactivation method (e.g., autoclaving) instead of packaging and shipping untreated wastes to an offsite facility for treatment and disposal. 1013, 1419-1421 Category II
# Part IV. Appendices
# Appendix A. Glossary of Terms
Acceptable indoor air quality: air in which there are no known contaminants at harmful concentrations, as determined by knowledgeable authorities, and with which a substantial majority (≥80%) of the people exposed do not express dissatisfaction.
ACGIH: American Conference of Governmental Industrial Hygienists.
Action level: the concentration of a contaminant at which steps should be taken to interrupt the trend toward higher, unacceptable levels.
Aerosol: particles of respirable size generated by both humans and environmental sources that have the capability of remaining viable and airborne for extended periods in the indoor environment.
AIA: American Institute of Architects, a professional group responsible for publishing the Guidelines for Design and Construction of Hospitals and Healthcare Facilities, a consensus document for design and construction of health-care facilities endorsed by the U.S. Department of Health and Human Services, health-care professionals, and professional organizations.
Air changes per hour (ACH): the ratio of the volume of air flowing through a space in a certain period of time (the airflow rate) to the volume of that space (the room volume). This ratio is expressed as the number of air changes per hour.
Air mixing: the degree to which air supplied to a room mixes with the air already in the room, usually expressed as a mixing factor. This factor varies from 1 (for perfect mixing) to 10 (for poor mixing). It is used as a multiplier to determine the actual airflow required (i.e., the recommended ACH multiplied by the mixing factor equals the actual ACH required).
Airborne transmission: a means of spreading infection in which airborne droplet nuclei (small-particle residue of evaporated droplets ≤5 μm in size containing microorganisms that remain suspended in air for long periods of time) are inhaled by the susceptible host.
Air-cleaning system: a device or combination of devices applied to reduce the concentration of airborne contaminants (e.g., microorganisms, dusts, fumes, aerosols, other particulate matter, and gases).
Air conditioning: the process of treating air to meet the requirements of a conditioned space by controlling its temperature, humidity, cleanliness, and distribution.
Allogeneic: non-twin, non-self. The term refers to transplanted tissue from a donor closely matched to a recipient but not related to that person.
Ambient air: the air surrounding an object.
Anemometer: a flow meter that measures the wind force and velocity of air. An anemometer is often used as a means of determining the volume of air being drawn into an air sampler.
Anteroom: a small room leading from a corridor into an isolation room. This room can act as an airlock, preventing the escape of contaminants from the isolation room into the corridor.
ASHE: American Society for Healthcare Engineering, an association affiliated with the American Hospital Association.
ASHRAE: American Society of Heating, Refrigerating, and Air-Conditioning Engineers Inc.
Autologous: self. The term refers to transplanted tissue whose source is the same as the recipient, or an identical twin.
Automated cycler: a machine used during peritoneal dialysis that pumps fluid into and out of the patient while he/she sleeps.
Biochemical oxygen demand (BOD): a measure of the amount of oxygen removed from aquatic environments by aerobic microorganisms for their metabolic requirements. Measurement of BOD is used to determine the level of organic pollution of a stream or lake.
The greater the BOD, the greater the degree of water pollution. The term is also referred to as biological oxygen demand (BOD).
Biological oxygen demand (BOD): an indirect measure of the concentration of biologically degradable material present in organic wastes (pertaining to water quality). It usually reflects the amount of oxygen consumed in five days by biological processes breaking down organic waste (BOD5).
Biosafety level: a combination of microbiological practices, laboratory facilities, and safety equipment determined to be sufficient to reduce or prevent occupational exposures of laboratory personnel to the microbiological agents they work with.
Intermediate-level disinfection: a disinfection process that inactivates vegetative bacteria, most fungi, mycobacteria, and most viruses (particularly the enveloped viruses), but does not inactivate bacterial spores.
Isoform: a possible configuration (tertiary structure) of a protein molecule. With respect to prion proteins, the molecules with large amounts of α-conformation are the normal isoform of that particular protein, whereas those prions with large amounts of β-sheet conformation are the proteins associated with the development of spongiform encephalopathy (e.g., Creutzfeldt-Jakob disease [CJD]).
Laminar flow: HEPA-filtered air that is blown into a room at a rate of 90 ± 10 feet/min in a unidirectional pattern with 100-400 ACH.
Large enveloped virus: a virus whose particle diameter is >50 nm and whose outer surface is covered by a lipid-containing structure derived from the membranes of the host cells. Examples of large enveloped viruses include influenza viruses, herpes simplex viruses, and poxviruses.
Laser plume: the transfer of electromagnetic energy into tissues, which results in a release of particles, gases, and tissue debris.
Lipid-containing viruses: viruses whose particles contain lipid components. The term is generally synonymous with enveloped viruses, whose outer surface is derived from host cell membranes. Lipid-containing viruses are sensitive to the inactivating effects of liquid chemical germicides.
Lithotriptors: instruments used for crushing calculi (i.e., calcified stones or sand) in the bladder or kidneys.
Low-efficiency filter: the prefilter with a particle-removal efficiency of approximately 30% through which incoming air first passes. See also Prefilter.
Low-level disinfection: a disinfection process that will inactivate most vegetative bacteria, some fungi, and some viruses, but cannot be relied upon to inactivate resistant microorganisms (e.g., mycobacteria or bacterial spores).
Makeup air: outdoor air supplied to the ventilation system to replace exhaust air.
Makeup water: a cold water supply source for a cooling tower.
Manometer: a device that measures the pressure of liquids and gases. A manometer is used to verify air filter performance by measuring pressure differentials on either side of the filter.
Membrane filtration: an assay method suitable for recovery and enumeration of microorganisms from liquid samples. This method is used when the sample volume is large and the anticipated microbial contamination level is low.
Mesophilic: that which favors a moderate temperature. For mesophilic bacteria, a temperature range of 68°F-131°F (20°C-55°C) is favorable for growth and proliferation.
Mixing box: the site where the cold and hot air streams mix in the HVAC system, usually situated close to the air outlet for the room.
Mixing faucet: a faucet that mixes hot and cold water to produce water at a desired temperature.
MMAD: mass median aerodynamic diameter. This is the unit used by ACGIH to describe the size of particles when particulate air sampling is conducted.
Moniliaceous: hyaline or brightly colored. This is a laboratory term for the distinctive characteristics of certain opportunistic fungi in culture (e.g., Aspergillus spp. and Fusarium spp.).
Monochloramine: the product of the reaction between chlorine and ammonia that contains only one chlorine atom. Monochloramine is used by municipal water systems as a water treatment.
Natural ventilation: the movement of outdoor air into a space through intentionally provided openings (i.e., windows, doors, or nonpowered ventilators).
Negative pressure: air pressure differential between two adjacent airspaces such that air flow is directed into the room relative to the corridor ventilation (i.e., room air is prevented from flowing out of the room and into adjacent areas).
Neutropenia: a medical condition in which the patient's concentration of neutrophils is substantially less than that in the normal range. Severe neutropenia occurs when the concentration is <1,000 polymorphonuclear cells/μL for 2 weeks or <100 polymorphonuclear cells/μL for 1 week, particularly for hematopoietic stem cell transplant (HSCT) recipients.
Noncritical devices: medical devices or surfaces that come into contact with only intact skin. The risk of infection from use of these devices is low.
Non-enveloped virus: a virus whose particle is not covered by a structure derived from a membrane of the host cell. Non-enveloped viruses have little or no lipid compounds in their biochemical composition, a characteristic that is significant to their inherent resistance to the action of chemical germicides.
Nosocomial: an occurrence, usually an infection, that is acquired in a hospital as a result of medical care.
NTM: nontuberculous mycobacteria. These organisms are also known as atypical mycobacteria, or as "mycobacteria other than tuberculosis" (MOTT). This descriptive term refers to any of the fast- or slow-growing Mycobacterium spp. found primarily in natural or man-made waters, but it excludes Mycobacterium tuberculosis and its variants.
Nuisance dust: generally innocuous dust, not recognized as the direct cause of serious pathological conditions.
Oocysts: cysts in which sporozoites are formed; a reproductive aspect of the life cycle of a number of parasitic agents (e.g., Cryptosporidium spp. and Cyclospora spp.).
Outdoor air: air taken from the external atmosphere and, therefore, not previously circulated through the ventilation system.
Parallel streamlines: a unidirectional airflow pattern achieved in a laminar flow setting, characterized by little or no mixing of air.
Particulate matter (particles): a state of matter in which solid or liquid substances exist in the form of aggregated molecules or particles. Airborne particulate matter is typically in the size range of 0.01-100 μm in diameter.
Pasteurization: a disinfecting method for liquids during which the liquids are heated to 140°F (60°C) for a short time (≥30 minutes) to significantly reduce the numbers of pathogenic or spoilage microorganisms.
Plinth: a treatment table or a piece of equipment used to reposition the patient for treatment.
Portable room-air HEPA recirculation units: free-standing portable devices that remove airborne contaminants by recirculating air through a HEPA filter.
Positive pressure: air pressure differential between two adjacent air spaces such that air flow is directed from the room relative to the corridor ventilation (i.e., air from corridors and adjacent areas is prevented from entering the room).
Potable (drinking) water: water that is fit to drink. The microbiological quality of this water is defined by EPA microbiological standards from the Surface Water Treatment Rule:
a. Giardia lamblia: 99.9% killed/inactivated;
b. viruses: 99.99% inactivated;
c. Legionella spp.: no limit, but if Giardia and viruses are inactivated, Legionella will also be controlled;
d. heterotrophic plate count [HPC]: ≤500 CFU/mL; and
e. ≤5% of water samples total coliform-positive in a month.
PPE: personal protective equipment.
ppm: parts per million. The term is a measure of concentration in solution. Chlorine bleaches (undiluted) that are available in the U.S. (5.25%-6.15% sodium hypochlorite) contain approximately 50,000-61,500 parts per million of free and available chlorine.
Prefilter: the first filter for incoming fresh air in an HVAC system. This filter is approximately 30% efficient in removing particles from the air. See also Low-efficiency filter.
Prion: a class of agent associated with the transmission of diseases known as transmissible spongiform encephalopathies (TSEs). Prions are considered to consist of protein only, and the abnormal isoform of this protein is thought to be the agent that causes diseases such as Creutzfeldt-Jakob disease (CJD), kuru, scrapie, bovine spongiform encephalopathy (BSE), and the human version of BSE, which is variant CJD (vCJD).
Product water: water produced by a water treatment system or an individual component of that system.
Protective environment: a special care area, usually in a hospital, designed to prevent transmission of opportunistic airborne pathogens to severely immunosuppressed patients.
Pseudoepidemic (pseudo-outbreak): a cluster of positive microbiologic cultures in the absence of clinical disease. A pseudoepidemic usually results from contamination of the laboratory apparatus and process used to recover microorganisms.
Pyrogenic: an endotoxin burden such that a patient would receive ≥5 endotoxin units (EU) per kilogram of body weight per hour, thereby causing a febrile response. In dialysis this usually refers to water or dialysate having endotoxin concentrations of ≥5 EU/mL.
Rank order: a strategy for assessing overall indoor air quality and filter performance by comparing airborne particle counts from lowest to highest (i.e., from the best filtered air spaces to those with the least filtration).
RAPD: a method of genotyping microorganisms by randomly amplified polymorphic DNA. This is one version of the polymerase chain reaction method.
Recirculated air: air removed from the conditioned space and intended for reuse as supply air.
Relative humidity: the ratio of the amount of water vapor in the atmosphere to the amount necessary for saturation at the same temperature. Relative humidity is expressed in terms of percent and measures the percentage of saturation. At 100% relative humidity, the air is saturated. The relative humidity decreases when the temperature is increased without changing the amount of moisture in the air.
Reprocessing (of medical instruments): the procedures or steps taken to make a medical instrument safe for use on the next patient.
Reprocessing encompasses both cleaning and the final or terminal step (i.e., sterilization or disinfection), which is determined by the intended use of the instrument according to the Spaulding classification.
Residuals: the presence and concentration of a chemical in media (e.g., water) or on a surface after the chemical has been added.
Reservoir: a nonclinical source of infection.
Respirable particles: those particles that penetrate into and are deposited in the nonciliated portion of the lung. Particles >10 μm in diameter are not respirable.
Return air: air removed from a space to be then recirculated.
Reverse osmosis (RO): an advanced method of water or wastewater treatment that relies on a semipermeable membrane to separate water from pollutants. An external force is used to reverse the normal osmotic process, resulting in the solvent moving from a solution of higher concentration to one of lower concentration.
Riser: water piping that connects the circulating water supply line, from the level of the base of the tower or supply header, to the tower's distribution system.
RODAC: replicate organism direct agar contact. This term refers to a nutrient agar plate whose convex agar surface is directly pressed onto an environmental surface for the purpose of microbiologic sampling of that surface.
Room-air HEPA recirculation systems and units: devices (either fixed or portable) that remove airborne contaminants by recirculating air through a HEPA filter.
Routine sampling: environmental sampling conducted without a specific, intended purpose and with no action plan dependent on the results obtained.
Sanitizer: an agent that reduces microbial contamination to safe levels as judged by public health standards or requirements.
Saprophytic: a naturally occurring microbial contaminant.
Sedimentation: the act or process of depositing sediment from suspension in water. The term also refers to the process whereby solids settle out of wastewater by gravity during treatment.
Semicritical devices: medical devices that come into contact with mucous membranes or non-intact skin.
Service animal: any animal individually trained to do work or perform tasks for the benefit of a person with a disability.
Shedding: the generation and dispersion of particles and spores by sources within the patient area, through activities such as patient movement and airflow over surfaces.
Single-pass ventilation: ventilation in which 100% of the air supplied to an area is exhausted to the outside.
Small, non-enveloped viruses: viruses whose particle diameter is <50 nm and whose outer surface is the protein of the particle itself and not host cell membrane components. Examples of small, non-enveloped viruses are polioviruses and hepatitis A virus.
Spaulding classification: the categorization of inanimate medical device surfaces in the medical environment as proposed in 1972 by Dr. Earle Spaulding. Surfaces are divided into three general categories, based on the theoretical risk of infection if the surfaces are contaminated at time of use. The categories are "critical," "semicritical," and "noncritical."
Specific humidity: the mass of water vapor per unit mass of moist air. It is expressed as grains of water per pound of dry air, or pounds of water per pound of dry air. The specific humidity changes as moisture is added or removed. However, temperature changes do not change the specific humidity unless the air is cooled below the dew point.
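The ppm figures in the glossary entry above follow from simple unit arithmetic (1% corresponds to 10,000 ppm), which also shows the effect of dilution. The following is a minimal, illustrative Python sketch; the bleach concentration used is the low end of the range cited above, and the dilutions shown are hypothetical examples.

```python
# Illustrative ppm arithmetic for the glossary entry above.

def percent_to_ppm(percent):
    """1% (w/v or v/v) corresponds to 10,000 ppm."""
    return percent * 10_000

def diluted_ppm(stock_ppm, parts_stock, parts_total):
    """Concentration after diluting parts_stock of stock into parts_total total."""
    return stock_ppm * parts_stock / parts_total

stock = percent_to_ppm(5.25)       # 52,500 ppm available chlorine (undiluted bleach)
print(stock)
print(diluted_ppm(stock, 1, 10))   # 1:10 dilution -> 5,250 ppm
print(diluted_ppm(stock, 1, 100))  # 1:100 dilution -> 525 ppm
```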
Splatter: visible drops of liquid or body fluid that are expelled forcibly into the air and settle out quickly, as distinguished from particles of an aerosol, which remain airborne indefinitely.
Steady state: the usual state of an area.
Sterilization: the use of a physical or chemical procedure to destroy all microbial life, including large numbers of highly resistant bacterial endospores.
Stop valve: a valve that regulates the flow of fluid through a pipe. The term may also refer to a faucet.
Substitution fluid: fluid that is used for fluid management of patients receiving hemodiafiltration. This fluid can be prepared on-line at the machine through a series of ultrafilters or with the use of sterile peritoneal dialysis fluid.
Supply air: air that is delivered to the conditioned space and used for ventilation, heating, cooling, humidification, or dehumidification.
Tensile strength: the resistance of a material to a force tending to tear it apart, measured as the maximum tension the material can withstand without tearing.
Ultrafilter: a membrane filter with a pore size in the range of 0.001-0.05 μm, the performance of which is usually rated in terms of a nominal molecular weight cut-off (defined as the smallest molecular weight species for which the filter membrane has more than 90% rejection).
Ultrafiltered dialysate: dialysate that has been passed through a filter having a molecular weight cut-off of approximately 1 kilodalton for the purpose of removing bacteria and endotoxin from the bath.
Ultraviolet germicidal irradiation (UVGI): the use of ultraviolet radiation to kill or inactivate microorganisms.
Ultraviolet germicidal irradiation lamps: lamps that kill or inactivate microorganisms by emitting ultraviolet germicidal radiation, predominantly at a wavelength of 254 nm. UVGI lamps can be used in ceiling or wall fixtures or within the air ducts of ventilation systems.
Vapor pressure: the pressure exerted by free molecules at the surface of a solid or liquid. Vapor pressure is a function of temperature, increasing as the temperature rises.
Vegetative bacteria: bacteria that are actively growing and metabolizing, as opposed to a bacterial state of quiescence that is achieved when certain bacteria (gram-positive bacilli) convert to spores when the environment can no longer support active growth.
Vehicle: any object, person, surface, fomite, or medium that may carry and transfer infectious microorganisms from one site to another.
Ventilation: the process of supplying and removing air by natural or mechanical means to and from any space. Such air may or may not be conditioned.
Ventilation air: that portion of the supply air consisting of outdoor air plus any recirculated air that has been treated for the purpose of maintaining acceptable indoor air quality.
Ventilation, dilution: an engineering control technique to dilute and remove airborne contaminants by the flow of air into and out of an area. Air that contains droplet nuclei is removed and replaced by contaminant-free air. If the flow is sufficient, droplet nuclei become dispersed, and their concentration in the air is diminished.
Ventilation, local exhaust: ventilation used to capture and remove airborne contaminants by enclosing the contaminant source (the patient) or by placing an exhaust hood close to the contaminant source.
v/v: volume to volume. This term is an expression of the concentration of a percentage solution when the principal component is added as a liquid to the diluent.
w/v: weight to volume.
This term is an expression of the concentration of a percentage solution when the principal component is added as a solid to the diluent.
Weight-arrestance: a measure of filter efficiency, used primarily when describing the performance of low- and medium-efficiency filters. The measurement of weight-arrestance is performed by feeding a standardized synthetic dust to the filter and weighing the fraction of the dust removed.
# Appendix B. Air
1. Airborne Contaminant Removal
Q / V = ACH
¶ Values apply to an empty room with no aerosol-generating source. With a person present and generating aerosol, this table would not apply. Other equations are available that include a constant generating source. However, certain diseases (e.g., infectious tuberculosis) are not likely to be aerosolized at a constant rate. The times given assume perfect mixing of the air within the space (i.e., mixing factor = 1). However, perfect mixing usually does not occur. Removal times will be longer in rooms or areas with imperfect mixing or air stagnation. 213 Caution should be exercised in using this table in such situations. For booths or other local ventilation enclosures, manufacturers' instructions should be consulted.
# Ventilation Specifications for Health-Care Facilities
The following tables from the AIA Guidelines for Design and Construction of Hospitals and Health-Care Facilities, 2001, are reprinted with permission of the American Institute of Architects and the publisher (The Facilities Guidelines Institute). 120 Note: This table is Table 7
20. Food preparation centers shall have ventilation systems whose air supply mechanisms are interfaced appropriately with exhaust hood controls or relief vents so that exfiltration or infiltration to or from exit corridors does not compromise the exit corridor restrictions of NFPA 90A, the pressure requirements of NFPA 96, or the maximum defined in the table. The number of air changes may be reduced or varied to any extent required for odor control when the space is not in use. See Section 7.31.D1.p in the AIA guideline (reference 120).
# Appendix I:
A7. Recirculating devices with HEPA filters may have potential uses in existing facilities as interim, supplemental environmental controls to meet requirements for the control of airborne infectious agents. Limitations in design must be recognized. The design of either portable or fixed systems should prevent stagnation and short-circuiting of airflow. The supply and exhaust locations should direct clean air to areas where health-care workers are likely to work, across the infectious source, and then to the exhaust, so that the health-care worker is not positioned between the infectious source and the exhaust location. The design of such systems should also allow for easy access for scheduled preventive maintenance and cleaning.
A11. The verification of airflow direction can include a simple visual method such as a smoke trail, ball-in-tube, or flutterstrip. These devices will require a minimum differential air pressure to indicate airflow direction.
Notes:
1. The ventilation rates in this table cover ventilation for comfort, as well as for asepsis and odor control, in areas of nursing facilities that directly affect resident care and are determined based on nursing facilities being predominantly "No Smoking" facilities. Where smoking may be allowed, ventilation rates will need adjustment.
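Returning to the air-change relationship in section B.1 above (ACH = Q/V, with the airflow rate converted to an hourly basis): under the perfect-mixing assumption stated in the footnote, the same relationship governs how quickly airborne contaminants are cleared. The sketch below is illustrative only; it assumes first-order decay of the contaminant concentration, and the room values are hypothetical.

```python
import math

# Illustrative air-change and removal-time arithmetic. Assumes perfect
# mixing (mixing factor = 1) and no ongoing aerosol generation, per the
# footnote above; first-order decay is an assumption of this sketch.

def ach(airflow_cfm, room_volume_cuft):
    """Air changes per hour: Q / V, with Q converted from cfm to cubic feet per hour."""
    return airflow_cfm * 60.0 / room_volume_cuft

def removal_minutes(ach_value, efficiency, mixing_factor=1.0):
    """Minutes to reach a removal efficiency (e.g., 0.99 or 0.999);
    imperfect mixing (factor > 1) lengthens the time proportionally."""
    return mixing_factor * 60.0 * math.log(1.0 / (1.0 - efficiency)) / ach_value

# Hypothetical room: 400 cfm supplied to a 4,000 ft^3 room -> 6 ACH.
rate = ach(400, 4000)
print(rate)                                  # 6.0
print(round(removal_minutes(rate, 0.99)))    # ~46 minutes for 99% removal
print(round(removal_minutes(rate, 0.999)))   # ~69 minutes for 99.9% removal
```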
Areas where specific ventilation rates are not given in the table shall be ventilated in accordance with ASHRAE Standard 62, Ventilation for Acceptable Indoor Air Quality, and the ASHRAE Handbook—HVAC Applications. OSHA standards and/or NIOSH criteria require special ventilation requirements for employee health and safety within nursing facilities.
2. The design of the ventilation system shall, insofar as possible, provide that air movement is from clean to less clean areas. However, continuous compliance may be impractical with full utilization of some forms of variable air volume and load-shedding systems that may be used for energy conservation. Areas that do require positive and continuous control are noted with "Out" or "In" to indicate the required direction of air movement in relation to the space named. The rate of air movement may, of course, be varied as needed within the limits required for positive control. Where the indication of air movement direction is enclosed in parentheses, continuous directional control is required only when the specialized equipment or device is in use or where room use may otherwise compromise the intent of movement from clean to less clean. Air movement for rooms with dashes and nonpatient areas may vary as necessary to satisfy the requirements of those spaces. Additional adjustments may be needed when the space is unused or unoccupied and air systems are deenergized or reduced.
# Appendix C. Water
1. Biofilms
Microorganisms have a tendency to associate with and stick to surfaces. These adherent organisms can initiate and develop biofilms, which are composed of cells embedded in a matrix of extracellularly produced polymers and associated abiotic particles. 1438 It is inevitable that biofilms will form in most water systems. In the health-care facility environment, biofilms may be found in the potable water supply piping, hot water tanks, air conditioning cooling towers, or in sinks, sink traps, aerators, or shower heads. Biofilms, especially in water systems, are not present as a continuous slime or film, but are more often scanty and heterogeneous in nature. 1439 Biofilms may form under stagnant as well as flowing conditions, so storage tanks, in addition to water system piping, may be vulnerable to the development of biofilm, especially if water temperatures are low enough to allow the growth of thermophilic bacteria (e.g., Legionella spp.). Favorable conditions for biofilm formation are present if these structures and equipment are not cleaned for extended periods of time. 1440
Algae, protozoa, and fungi may be present in biofilms, but the predominant microorganisms of water system biofilms are gram-negative bacteria. Although most of these organisms will not normally pose a problem for healthy individuals, certain biofilm bacteria (e.g., Pseudomonas aeruginosa, Klebsiella spp., Pantoea agglomerans, and Enterobacter cloacae) may be agents of opportunistic infection for immunocompromised individuals. 1441,1442 These biofilm organisms may easily contaminate indwelling medical devices or intravenous (IV) fluids, and they could be transferred on the hands of health-care workers. 1441-1444 Biofilms may potentially provide an environment for the survival of pathogenic organisms, such as Legionella pneumophila and E. coli O157:H7.
Although the association of biofilms and medical devices provides a plausible explanation for a variety of health-care associated infections, it is not clear how the presence of biofilms in the water system may influence the rates of health-care associated waterborne infection. Organisms within biofilms behave quite differently than their planktonic (i.e., free-floating) counterparts. Research has shown that biofilm-associated organisms are more resistant to antibiotics and disinfectants than are planktonic organisms, either because the cells are protected by the polymer matrix or because they are physiologically different. 1445-1450 Nevertheless, municipal water utilities attempt to maintain a chlorine residual in the distribution system to discourage microbiological growth. Though chlorine in its various forms is a proven disinfectant, it has been shown to be less effective against biofilm bacteria. 1448 Higher levels of chlorine for longer contact times are necessary to eliminate biofilms.
Routine sampling of health-care facility water systems for biofilms is not warranted. If an epidemiologic investigation points to the water supply system as a possible source of infection, then water sampling for biofilm organisms should be considered so that prevention and control strategies can be developed. An established biofilm is difficult to remove totally from existing piping. Strategies to remediate biofilms in a water system would include flushing the system piping, hot water tank, dead legs, and those areas of the facility's water system subject to low or intermittent flow. The benefits of this treatment would include
a. elimination of corrosion deposits and sludge from the bottom of hot water tanks;
b. removal of biofilms from shower heads and sink aerators; and
c. circulation of fresh water containing elevated chlorine residuals into the health-care facility water system.
The general strategy for evaluating water system biofilm depends on a comparison of the bacteriological quality of the incoming municipal water and that of water sampled from within the facility's distribution system. Heterotrophic plate counts and coliform counts, both of which are routinely run by the municipal water utility, will at least provide an indication of the potential for biofilm formation. Heterotrophic plate count levels in potable water should be <500 CFU/mL. These levels may increase on occasion, but counts consistently >500 CFU/mL would indicate a general decrease in water quality. A direct correlation between heterotrophic plate count and biofilm levels has been demonstrated. 1450 Therefore, an increase in heterotrophic plate count would suggest a greater rate and extent of biofilm formation in the health-care facility water system.
G. Refill the system with water and repeat the procedure outlined in steps 2-7 in I.B above.
# III. Perform mechanical cleaning.
A. After the water from the second chemical disinfection has been drained, shut down the CT/EC.
B. Inspect all water-contact areas for sediment, sludge, and scale. Using brushes and/or a low-pressure water hose, thoroughly clean all CT/EC water-contact areas, including the basin, sump, fill, spray nozzles, and fittings. Replace components as needed.
C. If possible, clean CT/EC water-contact areas within the chillers.
# IV. Perform these procedures after mechanical cleaning.
A. Fill the system with water and add chlorine to achieve an FRC level of 10 mg/L.
B. Circulate the water for 1 hour, then open the blowdown valve and flush the entire system until the water is free of turbidity.
C. Drain the system.
D. Open any air-intake vents that were closed before cleaning.
E. Fill the system with water. The CT/EC may be put back into service using an effective water-treatment program.
# Maintenance Procedures Used to Decrease Survival and Multiplication of Legionella spp. in Potable-Water Distribution Systems
Wherever allowable by state code, provide water at ≥124°F (≥51°C) at all points in the heated water system, including the taps. This requires that water in calorifiers (e.g., water heaters) be maintained at ≥140°F (≥60°C). In the United Kingdom, where maintenance of water temperatures at ≥122°F (≥50°C) in hospitals has been mandated, installation of blending or mixing valves at or near taps to reduce the water temperature to ≤109.4°F (≤43°C) has been recommended in certain settings to reduce the risk for scald injury to patients, visitors, and health-care workers. 726 However, Legionella spp. can multiply even in short segments of pipe containing water at this temperature. Increasing the flow rate from the hot-water circulation system may help lessen the likelihood of water stagnation and cooling. 711,1465 Insulation of plumbing to ensure delivery of cold (<68°F [<20°C]) water to water heaters (and to cold-water outlets) may diminish the opportunity for bacterial multiplication. 456 Both dead legs and capped spurs within the plumbing system provide areas of stagnation and cooling to <122°F (<50°C) regardless of the circulating water temperature; these segments may need to be removed to prevent colonization. 704 Rubber fittings within plumbing systems have been associated with persistent colonization, and replacement of these fittings may be required for Legionella spp. eradication. 1467
Continuous chlorination to maintain concentrations of free residual chlorine at 1-2 mg/L (1-2 ppm) at the tap is an alternative option for treatment. This requires the placement of flow-adjusted, continuous injectors of chlorine throughout the water distribution system. Adverse effects of continuous chlorination can include accelerated corrosion of plumbing (resulting in system leaks) and production of potentially carcinogenic trihalomethanes. However, when levels of free residual chlorine are below 3 mg/L (3 ppm), trihalomethane levels are kept below the maximum safety level recommended by the EPA. 727,1468,1469
# Appendix D. Insects and Microorganisms
Format Change [February 2016]: The format of this section was changed to improve readability and accessibility. The content is unchanged.
# Appendix E. Information Resources
The following sources of information may be helpful to the reader. Some of these are available at no charge, while others are available for purchase from the publisher.
# Air and Water
# Appendix F. Areas of Future Research
# Air
• Standardize the methodology and interpretation of microbiologic air sampling (e.g., determine action levels or the minimum infectious dose for aspergillosis, and evaluate the significance of airborne bacteria and fungi in the surgical field and the impact on postoperative SSI).
• Develop new molecular typing methods to better define the epidemiology of health-care associated outbreaks of aspergillosis and to associate isolates recovered from both clinical and environmental sources.
• Develop new methods for the diagnosis of aspergillosis that can lead reliably to early recognition of infection.
• Assess the value of laminar flow technology for surgeries other than joint replacement surgery.
• Determine if particulate sampling can be routinely performed in lieu of microbiologic sampling for purposes such as determining the air quality of clean environments (e.g., operating rooms, HSCT units).
# Water
• Evaluate new methods of water treatment, both in the facility and at the water utility (e.g., ozone, chlorine dioxide, copper/silver/monochloramine), and perform cost-benefit analyses of treatment in preventing health-care associated legionellosis.
• Evaluate the role of biofilms in overall water quality and determine the impact of water treatments for the control of biofilm in distribution systems.
• Determine if the use of ultrapure fluids in dialysis is feasible and warranted, and determine the action level for the final bath.
• Develop quality assurance protocols and validated methods for sampling filtered rinse water used with AERs, and determine the acceptable microbiologic quality of AER rinse water.
# Environmental Services
• Evaluate the innate resistance of microorganisms to the action of chemical germicides, and determine what, if any, linkage there may be between antibiotic resistance and resistance to disinfectants.
# Laundry and Bedding
• Evaluate the microbial inactivation capabilities of new laundry detergents, bleach substitutes, other laundry additives, and new laundry technologies.
# Animals in Health-Care Facilities
• Conduct surveillance to monitor the incidence of infections among patients in facilities that use animal programs, and conduct investigations to determine new infection control strategies to prevent these infections.
• Evaluate the epidemiologic impact of performing procedures on animals (e.g., surgery or imaging) in human health-care facilities.
# Regulated Medical Waste
• Determine the efficiency of current medical waste treatment technologies to inactivate emerging pathogens that may be present in medical waste (e.g., SARS-CoV).
• Explore options to enable health-care facilities to reinstate the capacity to inactivate microbiological cultures and stocks on-site.